diff --git a/books/cam/IA_L/analysis_i.tex b/books/cam/IA_L/analysis_i.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bbd6e06dc7c081a431e24987cb46b074a11b2a9b
--- /dev/null
+++ b/books/cam/IA_L/analysis_i.tex
@@ -0,0 +1,2901 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IA}
+\def\nterm {Lent}
+\def\nyear {2015}
+\def\nlecturer {W.\ T.\ Gowers}
+\def\ncourse {Analysis I}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Limits and convergence}\\
+Sequences and series in $\R$ and $\C$. Sums, products and quotients. Absolute convergence; absolute convergence implies convergence. The Bolzano-Weierstrass theorem and applications (the General Principle of Convergence). Comparison and ratio tests, alternating series test.\hspace*{\fill} [6]
+
+\vspace{10pt}
+\noindent\textbf{Continuity}\\
+Continuity of real- and complex-valued functions defined on subsets of $\R$ and $\C$. The intermediate value theorem. A continuous function on a closed bounded interval is bounded and attains its bounds.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Differentiability}\\
+Differentiability of functions from $\R$ to $\R$. Derivative of sums and products. The chain rule. Derivative of the inverse function. Rolle's theorem; the mean value theorem. One-dimensional version of the inverse function theorem. Taylor's theorem from $\R$ to $\R$; Lagrange's form of the remainder. Complex differentiation.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Power series}\\
+Complex power series and radius of convergence. Exponential, trigonometric and hyperbolic functions, and relations between them. *Direct proof of the differentiability of a power series within its circle of convergence*.\hspace*{\fill}[4]
+
+\vspace{10pt}
+\noindent\textbf{Integration}\\
+Definition and basic properties of the Riemann integral. A non-integrable function. Integrability of monotonic functions. Integrability of piecewise-continuous functions. The fundamental theorem of calculus. Differentiation of indefinite integrals. Integration by parts. The integral form of the remainder in Taylor's theorem. Improper integrals.\hspace*{\fill} [6]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In IA Differential Equations, we studied calculus in a non-rigorous setting. While we did define differentiation (properly) as a limit, we did not define what a limit was. We didn't even have a proper definition of integral, and just mumbled something about it being an infinite sum.
+
+In Analysis, one of our main goals is to put calculus on a rigorous foundation. We will provide unambiguous definitions of what it means to take a limit, a derivative and an integral. Based on these definitions, we will prove the results we previously took for granted, such as the product rule, quotient rule and chain rule. We will also rigorously prove the fundamental theorem of calculus, which states that integration is the inverse operation to differentiation.
+
+However, this is not all Analysis is about. We will study all sorts of limiting (``infinite'') processes. We can see integration as an infinite sum, and differentiation as dividing two infinitesimal quantities. In Analysis, we will also study infinite series such as $1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$, as well as limits of infinite sequences.
+
+Another important concept in Analysis is \emph{continuous functions}. In some sense, continuous functions are functions that preserve limit processes. While their role in this course is just being ``nice'' functions to work with, they will be given great importance when we study metric and topological spaces.
+
+This course is continued in IB Analysis II, where we study uniform convergence (a stronger condition for convergence), calculus of multiple variables and metric spaces.
+
+\section{The real number system}
+We are all familiar with real numbers --- they are ``decimals'' consisting of infinitely many digits. While this definition is technically valid, it is not a convenient one to work with when we really want to study the real numbers. Instead, we take an \emph{axiomatic} approach. We define the real numbers to be a set that satisfies certain properties, and, if we really want to, show that the decimals satisfy these properties. In particular, we define the real numbers to be an ordered field with the least upper bound property.
+
+We'll now define what it means to be an ``ordered field with the least upper bound property''.
+\begin{defi}[Field]
+ A \emph{field} is a set $\F$ with two binary operations $+$ and $\times$ that satisfies all the familiar properties satisfied by addition and multiplication in $\Q$, namely
+ \begin{enumerate}
+ \item $(\forall a, b, c)\,a + (b + c) = (a + b) + c$ and $a\times (b\times c) = (a\times b)\times c$ \hfill (associativity)
+ \item $(\forall a, b)\,a + b = b + a$ and $a\times b= b\times a$ \hfill (commutativity)
+ \item $\exists 0, 1$ such that $(\forall a)\,a + 0 = a$ and $a\times 1 = a$ \hfill (identity)
+ \item $\forall a\,\exists (-a)$ such that $a + (-a) = 0$. If $a\not= 0$, then $\exists a^{-1}$ such that $a\times a^{-1} = 1$. \hfill (inverses)
+ \item $(\forall a, b, c)\, a\times (b + c) = (a\times b) + (a\times c)$ \hfill (distributivity)
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}
+ $\Q, \R, \C$, integers mod $p$ (for $p$ prime), $\{a + b\sqrt{2}: a, b\in \Q\}$ are all fields.
+\end{eg}
+
+\begin{defi}[Totally ordered set]
+ An \emph{(totally) ordered set} is a set $X$ with a relation $<$ that satisfies
+ \begin{enumerate}
+ \item If $x, y, z\in X$, $x < y$ and $y < z$, then $x < z$ \hfill (transitivity)
+ \item If $x, y\in X$, exactly one of $x < y, x = y, y < x$ holds \hfill (trichotomy)
+ \end{enumerate}
+\end{defi}
+We call it a \emph{totally} ordered set, as opposed to simply an ordered set, when we want to emphasize that the order is total, i.e.\ every pair of elements is related to each other, as opposed to a \emph{partial order}, where ``exactly one'' in trichotomy is replaced with ``at most one''.
+
+Now to define an ordered field, we don't simply ask for a field with an order. The order has to interact nicely with the field operations.
+\begin{defi}[Ordered field]
+ An \emph{ordered field} is a field $\F$ with a relation $<$ that makes $\F$ into an ordered set such that
+ \begin{enumerate}
+ \item if $x, y, z \in \F$ and $x < y$, then $x + z < y + z$
+ \item if $x, y, z \in \F$, $x < y$ and $z > 0$, then $xz < yz$
+ \end{enumerate}
+\end{defi}
+
+\begin{lemma}
+ Let $\F$ be an ordered field and $x\in \F$. Then $x^2 \geq 0$.
+\end{lemma}
+
+\begin{proof}
+ By trichotomy, either $x < 0$, $x = 0$ or $x > 0$. If $x = 0$, then $x^2 = 0$. So $x^2 \geq 0$. If $x > 0$, then $x^2 > 0\times x = 0$. If $x < 0$, then $x - x < 0 - x$. So $0 < -x$. But then $x^2 = (-x)^2 > 0$.
+\end{proof}
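+
+(A remark not from the lectures.) This lemma already tells us that $1 > 0$ in any ordered field: since $1 \not= 0$ and $1 = 1^2 \geq 0$, trichotomy forces $1 > 0$. Adding $1$ repeatedly then gives $0 < 1 < 1 + 1 < 1 + 1 + 1 < \cdots$, so the ``natural numbers'' inside an ordered field are all distinct.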
+
+\begin{defi}[Least upper bound]
+ Let $X$ be an ordered set and let $A\subseteq X$. An \emph{upper bound} for $A$ is an element $x\in X$ such that $(\forall a\in A)\,a \leq x$. If $A$ has an upper bound, then we say that $A$ is \emph{bounded above}.
+
+ An upper bound $x$ for $A$ is a \emph{least upper bound} or \emph{supremum} if nothing smaller than $x$ is an upper bound. That is, we need
+ \begin{enumerate}
+ \item $(\forall a\in A)\,a \leq x$
+ \item $(\forall y < x)(\exists a\in A)\,a > y$
+ \end{enumerate}
+
+ We usually write $\sup A$ for the supremum of $A$ when it exists. If $\sup A\in A$, then we call it $\max A$, the maximum of $A$.
+\end{defi}
+
+\begin{eg}
+ Let $X = \Q$. Then the supremum of $(0, 1)$ is $1$. The set $\{x: x^2 < 2\}$ is bounded above by $2$, but has no supremum (even though $\sqrt{2}$ seems like a supremum, we are in $\Q$ and $\sqrt{2}$ is non-existent!).
+
+ $\max [0, 1] = 1$ but $(0, 1)$ has no maximum because the supremum is not in $(0, 1)$.
+\end{eg}
+
+We can think of the supremum as a point we can get arbitrarily close to in the set but cannot pass through.
+
+\begin{defi}[Least upper bound property]
+ An ordered set $X$ has the \emph{least upper bound property} if every non-empty subset of $X$ that is bounded above has a supremum.
+\end{defi}
+
+Obvious modifications give rise to definitions of lower bound, greatest lower bound (or infimum) etc. It is simple to check that an ordered field with the least upper bound property has the greatest lower bound property.
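+
+To spell out one such modification (a sketch, not given in the lectures): if $A$ is non-empty and bounded below, let $-A = \{-a: a\in A\}$. Then $-A$ is non-empty and bounded above, so $\sup(-A)$ exists, and
+\[
+  \inf A = -\sup(-A).
+\]
+Indeed, writing $s = \sup(-A)$: for every $a \in A$ we have $-a \leq s$, i.e.\ $-s \leq a$, so $-s$ is a lower bound; and if $t$ is any lower bound for $A$, then $-t$ is an upper bound for $-A$, so $s \leq -t$, i.e.\ $t \leq -s$.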
+
+\begin{defi}[Real numbers]
+ The \emph{real numbers} is an ordered field with the least upper bound property.
+\end{defi}
+Of course, it is \emph{very} important to show that such a thing exists, or else we will be studying nothing. It is also nice to show that such a field is unique (up to isomorphism). However, we will not prove these in the course.
+
+In a field, we can define the ``natural numbers'' to be $2 = 1 + 1$, $3 = 1 + 2$ etc. Then an important property of the real numbers is
+\begin{lemma}[Archimedean property v1]
+ Let $\F$ be an ordered field with the least upper bound property. Then the set $\{1, 2, 3, \cdots\}$ is not bounded above.
+\end{lemma}
+
+\begin{proof}
+ If it is bounded above, then it has a supremum $x$. But then $x - 1$ is not an upper bound. So we can find $n\in \{1, 2, 3, \cdots\}$ such that $n> x - 1$. But then $n + 1 > x$, but $x$ is supposed to be an upper bound.
+\end{proof}
+
+Is the least upper bound property required to prove the Archimedean property? It seems like \emph{any} ordered field should satisfy this, even one without the least upper bound property. However, it turns out there are ordered fields in which the integers are bounded above.
+
+Consider the field of rational functions, i.e.\ functions of the form $\frac{P(x)}{Q(x)}$ with $P(x), Q(x)$ polynomials, under the usual addition and multiplication. We order two distinct functions $\frac{P(x)}{Q(x)}, \frac{R(x)}{S(x)}$ as follows: these two functions intersect only finitely many times, because the polynomial equation $P(x)S(x) = R(x)Q(x)$ has only finitely many roots. After the last intersection, the function whose value is greater counts as the greater function. It can be checked that this makes the rational functions into an ordered field.
+
+In this field, the integers are the constant functions $1, 2, 3, \cdots$, but they are bounded above, since the function $x$ is greater than all of them.
+
+\section{Convergence of sequences}
+Having defined real numbers, the first thing we will study is sequences. We will want to study what it means for a sequence to \emph{converge}. Intuitively, we would like to say that $1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4},\cdots$ converges to $0$, while $1, 2, 3, 4, \cdots$ diverges. However, the actual formal definition of convergence is rather hard to get right, and historically there have been failed attempts that produced spurious results.
+\subsection{Definitions}
+\begin{defi}[Sequence]
+ A \emph{sequence} is, formally, a function $a: \N \to \R$ (or $\C$). Usually (i.e.\ always), we write $a_n$ instead of $a(n)$. Instead of $a$, we usually write it as $(a_n)$, $(a_n)_1^\infty$ or $(a_n)_{n = 1}^\infty$ to indicate it is a sequence.
+\end{defi}
+
+\begin{defi}[Convergence of sequence]
+ Let $(a_n)$ be a sequence and $\ell\in \R$. Then $a_n$ \emph{converges to} $\ell$, \emph{tends to} $\ell$, or $a_n \to \ell$, if for all $\varepsilon > 0$, there is some $N \in \N$ such that whenever $n \geq N$, we have $|a_n - \ell| < \varepsilon$. In symbols, this says
+ \[
+ (\forall \varepsilon > 0)(\exists N)(\forall n\geq N)\,|a_n - \ell| < \varepsilon.
+ \]
+ We say $\ell$ is the \emph{limit} of $(a_n)$.
+\end{defi}
+One can think of $(\exists N)(\forall n\geq N)$ as saying ``eventually always'', or ``from some point on''. So the definition means: if $a_n\to \ell$, then given any $\varepsilon$, eventually everything in the sequence is within $\varepsilon$ of $\ell$.
+
+We'll now provide an alternative form of the Archimedean property. This is the form that is actually useful.
+\begin{lemma}[Archimedean property v2]
+ $1/n \to 0$.
+\end{lemma}
+
+\begin{proof}
+ \textcolor{red}{Let $\varepsilon > 0$}. We want to find an $N$ such that $|1/N - 0| = 1/N < \varepsilon$. So \textcolor{red}{pick $N$} such that $N > 1/\varepsilon$. There exists such an $N$ by the Archimedean property v1. Then \textcolor{red}{for all $n \geq N$}, we have $0 < 1/n \leq 1/N < \varepsilon$. So \textcolor{red}{$|1/n - 0| < \varepsilon$}.
+\end{proof}
+Note that the red parts correspond to the \emph{definition} of convergence of a sequence. This is generally how we prove convergence from first principles.
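+
+As another first-principles example (not from the lectures): let us show that $\frac{n}{n + 1} \to 1$. Let $\varepsilon > 0$. By the Archimedean property, pick $N$ such that $N > 1/\varepsilon$. Then for all $n \geq N$,
+\[
+  \left|\frac{n}{n + 1} - 1\right| = \frac{1}{n + 1} < \frac{1}{n} \leq \frac{1}{N} < \varepsilon.
+\]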
+
+\begin{defi}[Bounded sequence]
+ A sequence $(a_n)$ is \emph{bounded} if
+ \[
+ (\exists C)(\forall n)\,|a_n| \leq C.
+ \]
+ A sequence is \emph{eventually bounded} if
+ \[
+ (\exists C)(\exists N)(\forall n\geq N)\, |a_n| \leq C.
+ \]
+\end{defi}
+The definition of an \emph{eventually bounded} sequence seems a bit daft. Clearly every eventually bounded sequence is bounded! Indeed it is:
+\begin{lemma}
+ Every eventually bounded sequence is bounded.
+\end{lemma}
+
+\begin{proof}
+ Let $C$ and $N$ be such that $(\forall n\geq N)\,|a_n| \leq C$. Then $\forall n \in \N$, $|a_n| \leq \max\{|a_1|, \cdots, |a_{N - 1}|, C\}$.
+\end{proof}
+
+The proof is rather trivial. However, most of the time it is simpler to prove that a sequence is eventually bounded, and this lemma saves us from writing that long line every time.
+
+\subsection{Sums, products and quotients}
+Here we prove the things that we think are obviously true, e.g.\ sums and products of convergent sequences are convergent.
+
+\begin{lemma}[Sums of sequences]
+ If $a_n \to a$ and $b_n \to b$, then $a_n + b_n \to a + b$.
+\end{lemma}
+
+\begin{proof}
+ Let $\varepsilon > 0$. We want to find a clever $N$ such that for all $n \geq N$, $|a_n + b_n - (a+b)| < \varepsilon$. Intuitively, we know that $a_n$ is very close to $a$ and $b_n$ is very close to $b$. So their sum must be very close to $a + b$.
+
+ Formally, since $a_n\to a$ and $b_n \to b$, we can find $N_1, N_2$ such that $\forall n \geq N_1$, $|a_n - a| < \varepsilon/2$ and $\forall n \geq N_2$, $|b_n - b| < \varepsilon/2$.
+
+ Now let $N = \max\{N_1, N_2\}$. Then by the triangle inequality, when $n \geq N$,
+ \[
+ |(a_n + b_n) - (a + b)| \leq |a_n - a| + |b_n - b| < \varepsilon. \qedhere
+ \]
+\end{proof}
+
+We want to prove that the product of convergent sequences is convergent. However, we will not do it in one go. Instead, we separate it into many smaller parts.
+\begin{lemma}[Scalar multiplication of sequences]
+ Let $a_n \to a$ and $\lambda \in \R$. Then $\lambda a_n \to \lambda a$.
+\end{lemma}
+
+\begin{proof}
+ If $\lambda = 0$, then the result is trivial.
+
+ Otherwise, let $\varepsilon > 0$. Then $\exists N$ such that $\forall n \geq N$, $|a_n - a| < \varepsilon/|\lambda|$. So $|\lambda a_n - \lambda a| < \varepsilon$.
+\end{proof}
+
+\begin{lemma}
+ Let $(a_n)$ be bounded and $b_n \to 0$. Then $a_nb_n \to 0$.
+\end{lemma}
+
+\begin{proof}
+ Let $C\not=0$ be such that $(\forall n)\, |a_n|\leq C$. Let $\varepsilon > 0$. Then $\exists N$ such that $(\forall n\geq N)\, |b_n| < \varepsilon/C$. Then $|a_nb_n| < \varepsilon$.
+\end{proof}
+
+\begin{lemma}
+ Every convergent sequence is bounded.
+\end{lemma}
+
+\begin{proof}
+ Let $a_n \to \ell$. Then there is an $N$ such that $\forall n \geq N$, $|a_n - \ell| \leq 1$. So for $n \geq N$, $|a_n| \leq |\ell| + 1$. So $(a_n)$ is eventually bounded, and therefore bounded.
+\end{proof}
+
+\begin{lemma}[Product of sequences]
+ Let $a_n\to a$ and $b_n\to b$. Then $a_nb_n\to ab$.
+\end{lemma}
+
+\begin{proof}
+ Let $a_n = a + \varepsilon_n$. Then $a_nb_n = (a + \varepsilon_n)b_n = ab_n + \varepsilon_n b_n$.
+
+ Since $b_n \to b$, $ab_n \to ab$. Since $\varepsilon_n \to 0$ and $b_n$ is bounded, $\varepsilon_nb_n \to 0$. So $a_nb_n \to ab$.
+\end{proof}
+
+\begin{proof}
+ (alternative) Observe that $a_nb_n - ab = (a_n - a) b_n + (b_n - b)a$. We know that $a_n - a \to 0$ and $b_n - b\to 0$. Since $(b_n)$ is bounded, $(a_n - a)b_n \to 0$; also $(b_n - b)a \to 0$. So $a_nb_n - ab \to 0$, i.e.\ $a_nb_n \to ab$.
+\end{proof}
+
+Note that in this proof, we no longer write ``Let $\varepsilon > 0$''. In the beginning, we had no lemmas proven, so we had to prove everything from first principles using the definition. However, once we have proven the lemmas, we can simply use them instead of arguing from first principles. This is similar to calculus, where we use first principles to prove the product rule and chain rule, and no longer use first principles afterwards.
+
+\begin{lemma}[Quotient of sequences]
+ Let $(a_n)$ be a sequence such that $(\forall n)\, a_n \not= 0$. Suppose that $a_n \to a$ and $a\not = 0$. Then $1/a_n \to 1/a$.
+\end{lemma}
+
+\begin{proof}
+ We have
+ \[
+ \frac{1}{a_n} - \frac{1}{a} = \frac{a - a_n}{aa_n}.
+ \]
+ We want to show that this $\to 0$. Since $a - a_n \to 0$, we have to show that $1/(aa_n)$ is bounded.
+
+ Since $a_n \to a$, $\exists N$ such that $\forall n\geq N$, $|a_n - a| \leq |a|/2$. Then $\forall n\geq N$, $|a_n| \geq |a|/2$. Then $|1/(a_na)| \leq 2/|a|^2$. So $1/(a_na)$ is bounded. So $(a - a_n)/(aa_n)\to 0$ and the result follows.
+\end{proof}
+
+\begin{cor}
+ If $a_n \to a, b_n \to b$, $b_n, b\not= 0$, then $a_n/b_n \to a/b$.
+\end{cor}
+
+\begin{proof}
+ We know that $1/b_n \to 1/b$. So the result follows by the product rule.
+\end{proof}
+
+\begin{lemma}[Sandwich rule]
+ Let $(a_n)$, $(b_n)$ and $(c_n)$ be sequences, and suppose that $(a_n)$ and $(b_n)$ both converge to a limit $x$. Suppose that $a_n \leq c_n \leq b_n$ for every $n$. Then $c_n \to x$.
+\end{lemma}
+
+\begin{proof}
+ Let $\varepsilon > 0$. We can find $N$ such that $\forall n \geq N$, $|a_n - x| < \varepsilon$ and $|b_n - x| < \varepsilon$.
+
+ Then $\forall n\geq N$, we have $x - \varepsilon < a_n \leq c_n \leq b_n < x + \varepsilon$. So $|c_n - x| < \varepsilon$.
+\end{proof}
+
+\begin{eg}
+ $1/2^n \to 0$. For every $n$, $n < 2^n$. So $0 < 1/2^n < 1/n$. The result follows from the sandwich rule.
+\end{eg}
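+
+(The inequality $n < 2^n$ is a quick induction, not spelled out above: $1 < 2$, and if $n < 2^n$, then $n + 1 \leq 2n < 2\cdot 2^n = 2^{n + 1}$.)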
+\begin{eg}
+ We want to show that
+ \[
+ \frac{n^2 + 3}{(n + 5)(2n - 1)} \to \frac{1}{2}.
+ \]
+ We can obtain this by
+ \[
+ \frac{n^2 + 3}{(n + 5)(2n - 1)} = \frac{1 + 3/n^2}{(1 + 5/n)(2 - 1/n)} \to \frac{1}{2},
+ \]
+ by sum rule, sandwich rule, Archimedean property, product rule and quotient rule.
+\end{eg}
+
+\begin{eg}
+ Let $k\in \N$ and let $\delta > 0$. Then
+ \[
+ \frac{n^k}{(1 + \delta)^n}\to 0.
+ \]
+ This can be summarized as ``exponential growth beats polynomial growth eventually''.
+
+ By the binomial theorem, for $n \geq k + 1$,
+ \[
+ (1 + \delta)^n \geq \binom{n}{k + 1}\delta^{k + 1}.
+ \]
+ Also, for $n\geq 2k$,
+ \[
+ \binom{n}{k + 1} = \frac{n(n - 1)\cdots(n - k)}{(k + 1)!} \geq \frac{(n/2)^{k + 1}}{(k + 1)!}.
+ \]
+ So for sufficiently large $n$,
+ \[
+ \frac{n^k}{(1 + \delta)^n} \leq \frac{n^k 2^{k + 1} (k+1)!}{n^{k + 1}\delta^{k + 1}} = \frac{2^{k + 1} (k + 1)!}{\delta^{k + 1}} \cdot \frac{1}{n} \to 0.
+ \]
+\end{eg}
+
+\subsection{Monotone-sequences property}
+Recall that we characterized the real numbers by the least upper bound property. It turns out that there is an alternative characterization of the real numbers using sequences, known as the \emph{monotone-sequences property}. In this section, we will show that the two characterizations are equivalent, and use the monotone-sequences property to deduce some useful results.
+\begin{defi}[Monotone sequence]
+ A sequence $(a_n)$ is \emph{increasing} if $a_n\leq a_{n + 1}$ for all $n$.
+
+ It is \emph{strictly increasing} if $a_n < a_{n + 1}$ for all $n$. \emph{(Strictly) decreasing} sequences are defined analogously.
+
+ A sequence is \emph{(strictly) monotone} if it is (strictly) increasing or (strictly) decreasing.
+\end{defi}
+
+\begin{defi}[Monotone-sequences property]
+ An ordered field has the \emph{monotone sequences property} if every increasing sequence that is bounded above converges.
+\end{defi}
+
+We want to show that the monotone sequences property is equivalent to the least upper bound property.
+\begin{lemma}
+ Least upper bound property $\Rightarrow$ monotone-sequences property.
+\end{lemma}
+
+\begin{proof}
+ Let $(a_n)$ be an increasing sequence and let $C$ be an upper bound for $(a_n)$. Then $C$ is an upper bound for the set $\{a_n: n \in \N\}$. By the least upper bound property, it has a supremum $s$. We want to show that this is the limit of $(a_n)$.
+
+ Let $\varepsilon > 0$. Since $s = \sup \{a_n: n\in \N\}$, there exists an $N$ such that $a_N > s - \varepsilon$. Then since $(a_n)$ is increasing, $\forall n \geq N$, we have $s - \varepsilon < a_N \leq a_n \leq s$. So $|a_n - s| < \varepsilon$.
+\end{proof}
+
+We first prove a handy lemma.
+\begin{lemma}
+ Let $(a_n)$ be a sequence and suppose that $a_n \to a$. If $(\forall n)\, a_n \leq x$, then $a\leq x$.
+\end{lemma}
+
+\begin{proof}
+ If $a > x$, then set $\varepsilon = a - x$. Then we can find $N$ such that $|a_N - a| < \varepsilon$, so $a_N > a - \varepsilon = x$. Contradiction.
+\end{proof}
+
+Before showing the other way implication, we will need the following:
+\begin{lemma}
+ Monotone-sequences property $\Rightarrow$ Archimedean property.
+\end{lemma}
+
+\begin{proof}
+ We prove version 2, i.e.\ that $1/n \to 0$.
+
+ Since $1/n > 0$ and is decreasing, by MSP, it converges. Let $\delta$ be the limit. By the previous lemma, we must have $\delta \geq 0$.
+
+ If $\delta > 0$, then since $1/n \to \delta$, we can find $N$ such that $1/N < 2\delta$. But then for all $n \geq 4N$, we have $1/n \leq 1/(4N) < \delta/2$. So by the previous lemma, the limit satisfies $\delta \leq \delta/2$. Contradiction. Therefore $\delta = 0$.
+\end{proof}
+
+\begin{lemma}
+ Monotone-sequences property $\Rightarrow$ least upper bound property.
+\end{lemma}
+
+\begin{proof}
+ Let $A$ be a non-empty set that's bounded above. Pick $u_0, v_0$ such that $u_0$ is not an upper bound for $A$ and $v_0$ is an upper bound. Now do a repeated bisection: having chosen $u_n$ and $v_n$ such that $u_n$ is not an upper bound and $v_n$ is, if $(u_n + v_n)/2$ is an upper bound, then let $u_{n + 1} = u_n$, $v_{n + 1} = (u_n + v_n)/2$. Otherwise, let $u_{n + 1} = (u_n + v_n)/2$, $v_{n + 1} = v_n$.
+
+ Then $u_0 \leq u_1 \leq u_2 \leq \cdots$ and $v_0\geq v_1 \geq v_2 \geq \cdots$. We also have
+ \[
+ v_n - u_n = \frac{v_0 - u_0}{2^n} \to 0.
+ \]
+ By the monotone sequences property, $u_n\to s$ (since $(u_n)$ is bounded above by $v_0$). Since $v_n - u_n \to 0$, $v_n \to s$. We now show that $s = \sup A$.
+
+ If $s$ is not an upper bound, then there exists $a\in A$ such that $a > s$. Since $v_n \to s$, then there exists $m$ such that $v_m < a$, contradicting the fact that $v_m$ is an upper bound.
+
+ To show it is the \emph{least} upper bound, let $t < s$. Then since $u_n \to s$, we can find $m$ such that $u_m > t$. So $t$ is not an upper bound. Therefore $s$ is the least upper bound.
+\end{proof}
+Why do we need to prove the Archimedean property first? In the proof above, we secretly used it: when showing that $v_n - u_n \to 0$, we required the fact that $\frac{1}{2^n} \to 0$. To prove this, we sandwiched it with $\frac{1}{n}$. But to show $\frac{1}{n}\to 0$, we need the Archimedean property.
+
+\begin{lemma}
+ A sequence can have at most 1 limit.
+\end{lemma}
+
+\begin{proof}
+Let $(a_n)$ be a sequence, and suppose $a_n \to x$ and $a_n\to y$. Let $\varepsilon > 0$ and pick $N$ such that $\forall n \geq N$, $|a_n - x| < \varepsilon/2$ and $|a_n - y| < \varepsilon/2$. Then $|x - y| \leq |x - a_N| + |a_N - y| < \varepsilon/2 + \varepsilon/2 = \varepsilon$. Since $\varepsilon$ was arbitrary, $x$ must equal $y$.
+\end{proof}
+
+\begin{lemma}[Nested intervals property]
+ Let $\F$ be an ordered field with the monotone sequences property. Let $I_1\supseteq I_2 \supseteq \cdots$ be closed bounded non-empty intervals. Then $\bigcap_{n = 1}^\infty I_n \not= \emptyset$.
+\end{lemma}
+
+\begin{proof}
+ Let $I_n = [a_n, b_n]$ for each $n$. Then $a_1 \leq a_2 \leq\cdots$ and $b_1 \geq b_2\geq \cdots$. For each $n$, $a_n \leq b_n \leq b_1$. So the sequence $(a_n)$ is increasing and bounded above. So by the monotone sequences property, it has a limit $a$. For each $n$, we must have $a_n\leq a$. Otherwise, say $a_n > a$. Then for all $m \geq n$, we have $a_m \geq a_n > a$, so the limit satisfies $a \geq a_n > a$, which is nonsense.
+
+ Also, for each fixed $n$, we have that $\forall m\geq n$, $a_m \leq b_m \leq b_n$. So $a \leq b_n$. Thus, for all $n$, $a_n \leq a \leq b_n$. So $a\in I_n$. So $a\in \bigcap_{n = 1}^\infty I_n$.
+\end{proof}
+
+We can use this to prove that the reals are uncountable:
+\begin{prop}
+ $\R$ is uncountable.
+\end{prop}
+
+\begin{proof}
+ Suppose the contrary. Let $x_1, x_2, \cdots$ be a list of all real numbers. Find a closed bounded interval that does not contain $x_1$. Within that interval, find a closed bounded interval that does not contain $x_2$. Continue \emph{ad infinitum}. By the nested intervals property, the intersection of all these intervals is non-empty, but no element of the intersection can be in the list. Contradiction.
+\end{proof}
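+
+(A detail not spelled out above: such intervals always exist. For the first step, take e.g.\ $[x_1 + 1, x_1 + 2]$. At each later step, split the current closed bounded interval into three closed thirds; $x_n$ lies in at most two of them, so some third avoids $x_n$.)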
+
+A powerful consequence of this is the \emph{Bolzano-Weierstrass theorem}. This is formulated in terms of subsequences:
+\begin{defi}[Subsequence]
+ Let $(a_n)$ be a sequence. A \emph{subsequence} of $(a_n)$ is a sequence of the form $a_{n_1}, a_{n_2}, \cdots$, where $n_1 < n_2 < \cdots$.
+\end{defi}
+
+\begin{eg}
+ $1, 1/4, 1/9, 1/16, \cdots$ is a subsequence of $1, 1/2, 1/3, \cdots$.
+\end{eg}
+
+
+\begin{thm}[Bolzano-Weierstrass theorem]
+ Let $\F$ be an ordered field with the monotone sequences property (i.e.\ $\F = \R$).
+
+ Then every bounded sequence has a convergent subsequence.
+\end{thm}
+
+\begin{proof}
+ Let $u_0$ and $v_0$ be a lower and upper bound, respectively, for a sequence $(a_n)$. By repeated bisection, we can find a sequence of intervals $[u_0, v_0] \supseteq [u_1, v_1]\supseteq [u_2,v_2] \supseteq\cdots$ such that $v_n - u_n = (v_0 - u_0)/2^n$, and such that each $[u_n, v_n]$ contains infinitely many terms of $(a_n)$.
+
+ By the nested intervals property, $\bigcap_{n = 1}^\infty [u_n, v_n] \not= \emptyset$. Let $x$ belong to the intersection. Now pick a subsequence $a_{n_1}, a_{n_2}, \cdots$ such that $a_{n_k} \in [u_k, v_k]$. We can do this because $[u_k, v_k]$ contains infinitely many $a_n$, and we have only picked finitely many of them. We will show that $a_{n_k} \to x$.
+
+ Let $\varepsilon > 0$. By the Archimedean property, we can find $K$ such that $v_K - u_K = (v_0 - u_0)/2^K < \varepsilon$. This implies that $[u_K, v_K] \subseteq (x - \varepsilon, x + \varepsilon)$, since $x\in [u_K, v_K]$.
+
+ Then $\forall k \geq K$, $a_{n_k}\in [u_k, v_k] \subseteq [u_K, v_K] \subseteq (x - \varepsilon, x + \varepsilon)$. So $|a_{n_k} - x| < \varepsilon$.
+\end{proof}
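+
+\begin{eg}
+  (An illustration not from the lectures.) The bounded sequence $a_n = (-1)^n$ does not converge, but Bolzano-Weierstrass guarantees a convergent subsequence: indeed $a_2, a_4, a_6, \cdots = 1, 1, 1, \cdots \to 1$, while the odd-indexed subsequence converges to $-1$.
+\end{eg}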
+
+\subsection{Cauchy sequences}
+The third characterization of the real numbers is in terms of Cauchy sequences. Cauchy convergence is an alternative way of defining convergent sequences without needing to mention the actual limit of the sequence. This allows us to say that the sequence $3, 3.1, 3.14, 3.141, 3.1415, \cdots$ is \emph{Cauchy} in $\Q$ even though its limit $\pi$ is not in $\Q$.
+
+\begin{defi}[Cauchy sequence]
+ A sequence $(a_n)$ is \emph{Cauchy} if for all $\varepsilon > 0$, there is some $N \in \N$ such that whenever $p, q \geq N$, we have $|a_p - a_q| < \varepsilon$. In symbols, we have
+ \[
+ (\forall \varepsilon > 0)(\exists N)(\forall p, q\geq N)\, |a_p - a_q| < \varepsilon.
+ \]
+\end{defi}
+Roughly, a sequence is Cauchy if all terms are eventually close to each other (as opposed to close to a limit).
+
+\begin{lemma}
+ Every convergent sequence is Cauchy.
+\end{lemma}
+
+\begin{proof}
+ Let $a_n \to a$. Let $\varepsilon > 0$. Then $\exists N$ such that $\forall n \geq N$, $|a_n - a| < \varepsilon/2$. Then $\forall p, q\geq N$, $|a_p - a_q| \leq |a_p - a| + |a - a_q| < \varepsilon/2 + \varepsilon/2 = \varepsilon$.
+\end{proof}
+
+\begin{lemma}
+ Let $(a_n)$ be a Cauchy sequence with a subsequence $(a_{n_k})$ that converges to $a$. Then $a_n\to a$.
+\end{lemma}
+
+\begin{proof}
+ Let $\varepsilon > 0$. Pick $N$ such that $\forall p, q\geq N$, $|a_p - a_q| < \varepsilon/2$. Then pick $K$ such that $n_K \geq N$ and $|a_{n_K} - a| < \varepsilon/2$.
+
+ Then $\forall n \geq N$, we have
+ \[
+ |a_n - a| \leq |a_n - a_{n_K}| + |a_{n_K} - a| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.\qedhere
+ \]
+\end{proof}
+
+An important result we have is that in $\R$, Cauchy convergence and regular convergence are equivalent.
+\begin{thm}[The general principle of convergence]
+ Let $\F$ be an ordered field with the monotone-sequences property. Then every Cauchy sequence in $\F$ converges.
+\end{thm}
+
+\begin{proof}
+Let $(a_n)$ be a Cauchy sequence. Then it is eventually bounded, since by the Cauchy condition, $\exists N$ such that $\forall n \geq N$, $|a_n - a_N| \leq 1$. So it is bounded. Hence by Bolzano-Weierstrass, it has a convergent subsequence. Then by the previous lemma, $(a_n)$ converges to the same limit.
+\end{proof}
+
+\begin{defi}[Complete ordered field]
+ An ordered field in which every Cauchy sequence converges is called \emph{complete}.
+\end{defi}
+
+Hence we say that $\R$ is a complete ordered field.
+
+However, not every complete ordered field is (isomorphic to) $\R$. For example, we can take the rational functions as before, then take the Cauchy completion of it (i.e.\ add all the limits we need). Then it is already too large to be the reals (it still doesn't have the Archimedean property) but is a complete ordered field.
+
+To show that completeness implies the monotone-sequences property, we need an additional condition: the Archimedean property.
+
+\begin{lemma}
+ Let $\F$ be an ordered field with the Archimedean property such that every Cauchy sequence converges. Then $\F$ satisfies the monotone-sequences property.
+\end{lemma}
+
+\begin{proof}
+ Instead of showing directly that every increasing sequence that is bounded above converges, we will show the equivalent statement that every increasing sequence that is not Cauchy is not bounded above (an increasing Cauchy sequence converges by assumption).
+
+ Let $(a_n)$ be an increasing sequence. If $(a_n)$ is not Cauchy, then
+ \[
+ (\exists \varepsilon > 0)(\forall N)(\exists p, q > N)\,|a_p - a_q| \geq \varepsilon.
+ \]
+ Without loss of generality, let $p > q$. Then
+ \[
+ a_p \geq a_q + \varepsilon \geq a_N + \varepsilon.
+ \]
+ So for any $N$, we can find a $p > N$ such that
+ \[
+ a_p \geq a_N + \varepsilon.
+ \]
+ Then we can construct a subsequence $a_{n_1}, a_{n_2}, \cdots$ such that
+ \[
+ a_{n_{k + 1}} \geq a_{n_k} + \varepsilon.
+ \]
+ Therefore
+ \[
+ a_{n_k} \geq a_{n_1} + (k - 1)\varepsilon.
+ \]
+ So by the Archimedean property, $(a_{n_k})$, and hence $(a_n)$, is unbounded.
+\end{proof}
+
+Note that the definition of a convergent sequence is
+\[
+ (\exists \ell)(\forall \varepsilon > 0)(\exists N)(\forall n\geq N)\, |a_n - \ell| < \varepsilon,
+\]
+while that of Cauchy convergence is
+\[
+ (\forall \varepsilon > 0)(\exists N)(\forall p, q\geq N)\, |a_p - a_q| < \varepsilon.
+\]
+In the first definition, $\ell$ ranges over all real numbers, which is uncountable. However, in the second definition, we only have to quantify over natural numbers, which is countable (by the Archimedean property, it suffices to consider the cases $\varepsilon = 1/n$).
+
+Since they are equivalent in $\R$, the second definition is sometimes preferred when we care about logical simplicity.
+
+\subsection{Limit supremum and infimum}
+Here we will define the limit supremum and infimum. While these are technically not part of the course, eventually some lecturers will magically assume students know this definition. So we might as well learn it here.
+
+\begin{defi}[Limit supremum/infimum]
+ Let $(a_n)$ be a bounded sequence. We define the \emph{limit supremum} as
+ \[
+ \limsup_{n\to \infty} a_n = \lim_{n\to \infty}\left(\sup_{m \geq n} a_m\right).
+ \]
+ To see that this exists, set $b_n = \sup_{m\geq n}a_m$. Then $(b_n)$ is decreasing since we are taking the supremum of fewer and fewer things, and is bounded below by any lower bound for $(a_n)$ since $b_n \geq a_n$. So it converges.
+
+ Similarly, we define the \emph{limit infimum} as
+ \[
+ \liminf_{n\to \infty}a_n = \lim_{n\to\infty}\left(\inf_{m\geq n} a_m\right).
+ \]
+\end{defi}
+
+\begin{eg}
+ Take the sequence
+ \[
+ 2, -1, \frac{3}{2}, -\frac{1}{2}, \frac{4}{3}, -\frac{1}{3}, \cdots
+ \]
+ Then the limit supremum is $1$ and the limit infimum is $0$.
+\end{eg}
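As a quick numerical sanity check (an illustration, not part of the notes), we can compute tail suprema and infima for this example, assuming the pattern $a_{2k - 1} = (k + 1)/k$ and $a_{2k} = -1/k$ read off from the first few terms: the tail suprema decrease towards $1$ and the tail infima increase towards $0$.

```python
# Approximate limsup and liminf of 2, -1, 3/2, -1/2, 4/3, -1/3, ...
# Assumed closed form: a_{2k-1} = (k+1)/k, a_{2k} = -1/k.

def a(n):
    """n-th term of the sequence (1-indexed)."""
    k = (n + 1) // 2
    return (k + 1) / k if n % 2 == 1 else -1 / k

def approx_limsup(terms, tail_start):
    # sup over the tail {a_m : m > tail_start}, here just a max
    return max(terms[tail_start:])

def approx_liminf(terms, tail_start):
    return min(terms[tail_start:])

terms = [a(n) for n in range(1, 10001)]
# b_n = sup_{m >= n} a_m decreases towards the limsup, here 1;
# inf_{m >= n} a_m increases towards the liminf, here 0.
print(approx_limsup(terms, 5000))  # close to 1
print(approx_liminf(terms, 5000))  # close to 0
```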
+
+\begin{lemma}
+ Let $(a_n)$ be a sequence. The following two statements are equivalent:
+ \begin{itemize}
+ \item $a_n\to a$
+ \item $\limsup a_n = \liminf a_n = a$.
+ \end{itemize}
+\end{lemma}
+
+\begin{proof}
+ If $a_n \to a$, then let $\varepsilon > 0$. Then we can find an $n$ such that
+ \[
+ a - \varepsilon \leq a_m \leq a + \varepsilon\text{ for all } m \geq n.
+ \]
+ It follows that
+ \[
+ a - \varepsilon \leq \inf_{m \geq n}a_m \leq \sup_{m\geq n} a_m \leq a + \varepsilon.
+ \]
+ Since $\varepsilon$ was arbitrary, it follows that
+ \[
+ \liminf a_n = \limsup a_n = a.
+ \]
+ Conversely, if $\liminf a_n = \limsup a_n = a$, then let $\varepsilon > 0$. Then we can find $n$ such that
+ \[
+ \inf_{m\geq n} a_m > a - \varepsilon\text{ and }\sup _{m \geq n} a_m < a + \varepsilon.
+ \]
+ It follows that $\forall m\geq n$, we have $|a_m - a| < \varepsilon$.
+\end{proof}
+
+\section{Convergence of infinite sums}
+In this chapter, we investigate which infinite \emph{sums}, as opposed to sequences, converge. We would like to say $1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 2$, while $1 + 1 + 1 + 1 + \cdots$ does not converge. The majority of the chapter is coming up with different tests to figure out if infinite sums converge.
+
+\subsection{Infinite sums}
+\begin{defi}[Convergence of infinite sums and partial sums]
+ Let $(a_n)$ be a real sequence. For each $N$, define
+ \[
+ S_N = \sum_{n = 1}^N a_n.
+ \]
+ If the sequence $(S_N)$ converges to some limit $s$, then we say that
+ \[
+ \sum_{n = 1}^\infty a_n = s,
+ \]
+ and we say that the series $\displaystyle\sum_{n = 1}^\infty a_n$ \emph{converges}.
+
+ We call $S_N$ the $N$th \emph{partial sum}.
+\end{defi}
+
+There is an immediate necessary condition for a series to converge.
+\begin{lemma}
+ If $\displaystyle\sum_{n = 1}^\infty a_n$ converges, then $a_n \to 0$.
+\end{lemma}
+
+\begin{proof}
+ Let $\displaystyle\sum_{n = 1}^{\infty} a_n = s$. Then $S_n \to s$ and $S_{n - 1} \to s$. Then $a_n = S_n - S_{n - 1} \to 0$.
+\end{proof}
+
+However, the converse is false!
+\begin{eg}[Harmonic series]
+ If $a_n = 1/n$, then $a_n \to 0$ but $\sum a_n = \infty$.
+
+ We can prove this as follows:
+ \[
+ S_{2^n} - S_{2^{n -1 }} = \frac{1}{2^{n - 1} + 1} + \cdots + \frac{1}{2^n} \geq \frac{2^{n - 1}}{2^n} = \frac{1}{2}.
+ \]
+ Therefore $S_{2^n} \geq S_1 + n/2$. So the partial sums are unbounded.
+\end{eg}
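The dyadic bound in this example is easy to observe numerically; the following sketch (an illustration, not part of the notes) checks $S_{2^n} \geq 1 + n/2$ directly, so the partial sums really do grow without bound, if very slowly.

```python
# The dyadic partial sums of the harmonic series satisfy
# S_{2^n} >= S_1 + n/2 = 1 + n/2, so they are unbounded.

def harmonic_partial(N):
    """S_N = sum_{n=1}^N 1/n."""
    return sum(1 / n for n in range(1, N + 1))

for n in range(1, 15):
    # each block (S_{2^n} - S_{2^{n-1}}) contributes at least 1/2
    assert harmonic_partial(2 ** n) >= 1 + n / 2

print(harmonic_partial(2 ** 14))  # already above 8
```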
+
+\begin{eg}[Geometric series]
+ Let $|\rho| < 1$. Then
+ \[
+ \sum_{n = 0}^\infty \rho^n = \frac{1}{1 - \rho}.
+ \]
+ We can prove this by considering the partial sums:
+ \[
+ \sum_{n = 0}^N \rho^n = \frac{1 - \rho^{N + 1}}{1 - \rho}.
+ \]
+ Since $\rho^{N + 1} \to 0$, this tends to $1/(1 - \rho)$.
+\end{eg}
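A quick numerical check of the closed form for the partial sums (an illustration only):

```python
# Partial sums of the geometric series approach 1/(1 - rho) when |rho| < 1.

def geometric_partial(rho, N):
    """sum_{n=0}^N rho^n."""
    return sum(rho ** n for n in range(N + 1))

rho = 0.3
limit = 1 / (1 - rho)
# The error is exactly |rho|^(N+1)/(1 - rho), which tends to 0.
print(abs(geometric_partial(rho, 50) - limit))  # tiny
print(abs(geometric_partial(-0.5, 50) - 1 / 1.5))  # also tiny
```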
+
+\begin{eg}
+ $\displaystyle \sum_{n = 2}^\infty \frac{1}{n(n - 1)}$ converges. This is since
+ \[
+ \frac{1}{n(n -1 )} = \frac{1}{n - 1} - \frac{1}{n}.
+ \]
+ So
+ \[
+ \sum_{n = 2}^{N}\frac{1}{n(n - 1)} = 1 - \frac{1}{N} \to 1.
+ \]
+\end{eg}
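The telescoping identity above can be verified numerically (an illustration, not part of the notes): the partial sum equals $1 - 1/N$ exactly.

```python
# sum_{n=2}^N 1/(n(n-1)) telescopes to 1 - 1/N.

def telescoping_partial(N):
    return sum(1 / (n * (n - 1)) for n in range(2, N + 1))

print(telescoping_partial(1000))  # close to 1 - 1/1000 = 0.999
```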
+\begin{lemma}
+ Suppose that $a_n \geq 0$ for every $n$ and the partial sums $S_n$ are bounded above. Then $\sum_{n=1}^\infty a_n$ converges.
+\end{lemma}
+
+\begin{proof}
+ The sequence $(S_n)$ is increasing and bounded above. So the result follows from the monotone sequences property.
+\end{proof}
+
+The simplest convergence test we have is the \emph{comparison test}. Roughly speaking, it says that if $0 \leq a_n \leq b_n$ for all $n$ and $\sum b_n$ converges, then $\sum a_n$ converges. However, we will prove a much more general form here for convenience.
+\begin{lemma}[Comparison test]
+ Let $(a_n)$ and $(b_n)$ be non-negative sequences, and suppose that $\exists C, N$ such that $\forall n\geq N$, $a_n \leq Cb_n$. Then if $\sum b_n$ converges, then so does $\sum a_n$.
+\end{lemma}
+
+\begin{proof}
+ Let $M > N$. Also for each $R$, let $S_R = \sum_{n = 1}^R a_n$ and $T_R = \sum_{n = 1}^R b_n$. We want to show that the $S_R$ are bounded above.
+ \[
+ S_M - S_N = \sum_{n = N + 1}^M a_n \leq C\sum _{n = N + 1}^M b_n \leq C\sum_{n = N + 1}^\infty b_n.
+ \]
+ So $\forall M\geq N$, $S_M \leq S_N + C\sum_{n = N + 1}^\infty b_n$. Since the sequence $(S_M)$ is increasing and bounded above, it converges.
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\sum \frac{1}{n2^n}$ converges, since $\sum \frac{1}{2^n}$ converges.
+ \item $\sum \frac{n}{2^n}$ converges.
+
+ If $n \geq 4$, then $n \leq 2^{n/2}$. That's because $4 = 2^{4/2}$, and for $n\geq 4$ we have $(n + 1)/n < \sqrt{2}$, so each time we increase $n$ by $1$, the right-hand side is multiplied by a larger factor than the left. Hence by the comparison test, it is sufficient to show that $\sum 2^{n/2}/2^n = \sum 2^{-n/2}$ converges, which it does (geometric series).
+ \item $\sum \frac{1}{\sqrt{n}}$ diverges, since $\frac{1}{\sqrt{n}} \geq \frac{1}{n}$. So if it converged, then so would $\sum \frac{1}{n}$, but $\sum \frac{1}{n}$ diverges.
+ \item $\sum \frac{1}{n^2}$ converges, since for $n \geq 2$, $\frac{1}{n^2} \leq \frac{1}{n(n - 1)}$, and we have proven that the latter converges.
+ \item Consider $\displaystyle \sum_{n = 1}^\infty \frac{n + 5}{n^3 - 7n^2/2}$. We show this converges by noting
+ \[
+ n^3 - \frac{7n^2}{2} = n^2\left(n - \frac{7}{2}\right).
+ \]
+ So if $n \geq 8$, then
+ \[
+ n^3 - \frac{7n^2}{2} \geq \frac{n^3}{2}.
+ \]
+ Also, $n + 5 \leq 2n$. So
+ \[
+ \frac{n + 5}{n^3 - 7n^2/2} \leq \frac{4}{n^2}.
+ \]
+ So it converges by the comparison test.
+ \item If $\alpha > 1$, then $\sum 1/n^\alpha$ converges.
+
+ Let $S_N = \sum_{n = 1}^N 1/n^\alpha$. Then
+ \begin{align*}
+ S_{2^n} - S_{2^{n - 1}} &= \frac{1}{(2^{n - 1} + 1)^\alpha} + \cdots + \frac{1}{(2^{n})^\alpha}\\
+ &\leq \frac{2^{n - 1}}{(2^{n - 1})^\alpha}\\
+ &= (2^{n - 1})^{1 - \alpha}\\
+ &= (2^{1 - \alpha})^{n - 1}.
+ \end{align*}
+ But $2^{1 - \alpha} < 1$. So
+ \[
+ S_{2^n} = (S_{2^n} - S_{2^{n - 1}}) + (S_{2^{n - 1}} - S_{2^{n - 2}}) + \cdots + (S_2 - S_1) + S_1
+ \]
+ and is bounded above by comparison with the geometric series $1 + 2^{1 - \alpha} + (2^{1 - \alpha})^2 + \cdots$
+ \end{enumerate}
+\end{eg}
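Example (ii) lends itself to a quick numerical check (an illustration, not part of the notes): the inequality $n \leq 2^{n/2}$ holds from $n = 4$ onwards, and the partial sums of $\sum n/2^n$ stay bounded (the full sum is in fact $2$, a standard identity not derived here).

```python
# Check n <= 2^(n/2) for n >= 4, the key step in example (ii).
for n in range(4, 200):
    assert n <= 2 ** (n / 2)

# Partial sums of sum n/2^n remain bounded, consistent with convergence.
partials = []
s = 0.0
for n in range(1, 60):
    s += n / 2 ** n
    partials.append(s)
print(partials[-1])  # close to 2
```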
+
+\subsection{Absolute convergence}
+Here we'll consider two stronger conditions for convergence --- absolute convergence and unconditional convergence. We'll prove that these two conditions are in fact equivalent.
+\begin{defi}[Absolute convergence]
+ A series $\sum a_n$ \emph{converges absolutely} if the series $\sum |a_n|$ converges.
+\end{defi}
+
+\begin{eg}
+ The series $\sum \frac{(-1)^{n + 1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$ converges, but not absolutely.
+
+ To see the convergence, note that
+ \[
+ a_{2n - 1} + a_{2n} = \frac{1}{2n - 1} - \frac{1}{2n} = \frac{1}{2n(2n- 1)}.
+ \]
+ It is easy to compare with $1/n^2$ to see that the partial sums $S_{2n}$ converge. But $S_{2n + 1} - S_{2n} = 1/(2n + 1) \to 0$, so the $S_{2n + 1}$ converge to the same limit.
+
+ It does not converge absolutely, because the sum of the absolute values is the harmonic series.
+\end{eg}
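Numerically (an illustration, not part of the notes), the partial sums of this series settle down near $\log 2 \approx 0.6931$ (a standard fact not proved here), while the partial sums of the absolute values grow like the harmonic series.

```python
import math

# Partial sums of 1 - 1/2 + 1/3 - 1/4 + ... converge (to log 2),
# while the series of absolute values is the divergent harmonic series.

def alt_partial(N):
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

print(alt_partial(10 ** 5))            # close to log 2
print(sum(1 / n for n in range(1, 10 ** 5 + 1)))  # large and still growing
```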
+
+\begin{lemma}
+ Let $\sum a_n$ converge absolutely. Then $\sum a_n$ converges.
+\end{lemma}
+
+\begin{proof}
+ We know that $\sum |a_n|$ converges. Let $S_N = \sum_{n = 1}^N a_n$ and $T_N = \sum_{n = 1}^N |a_n|$.
+
+ We know two ways of showing that a sequence converges without knowing its limit, namely the monotone sequences property and the Cauchy criterion. Since $(S_N)$ need not be monotone, we shall try the Cauchy criterion.
+
+ If $p > q$, then
+ \[
+ |S_p - S_q| = \left|\sum_{n = q + 1}^p a_n\right| \leq \sum_{n = q + 1}^p |a_n| = T_p - T_q.
+ \]
+ But the sequence $T_p$ converges. So $\forall \varepsilon > 0$, we can find $N$ such that for all $p > q \geq N$, we have $T_p - T_q < \varepsilon$, which implies $|S_p - S_q| < \varepsilon$.
+\end{proof}
+
+\begin{defi}[Unconditional convergence]
+ A series $\sum a_n$ \emph{converges unconditionally} if the series $\sum_{n = 1}^{\infty} a_{\pi(n)}$ converges for every bijection $\pi: \N \to \N$, i.e.\ no matter how we re-order the elements of $a_n$, the sum still converges.
+\end{defi}
+
+\begin{thm}
+ If $\sum a_n$ converges absolutely, then it converges unconditionally.
+\end{thm}
+
+\begin{proof}
+ Let $S_N = \sum_{n = 1}^N a_{\pi (n)}$. Then if $p > q$,
+ \[
+ |S_p - S_q| = \left|\sum_{n = q + 1}^p a_{\pi(n)}\right| \leq \sum_{n = q + 1}^\infty|a_{\pi (n)}|.
+ \]
+ Let $\varepsilon > 0$. Since $\sum |a_n|$ converges, pick $M$ such that $\sum_{n = M + 1}^\infty|a_n| < \varepsilon$.
+
+ Pick $N$ large enough that $\{1, \cdots, M\}\subseteq \{\pi (1), \cdots, \pi(N)\}$.
+
+ Then if $n > N$, we have $\pi(n) > M$. Therefore if $p > q \geq N$, then
+ \[
+ |S_p - S_q| \leq \sum_{n = q + 1}^p |a_{\pi(n)}| \leq \sum_{n = M + 1}^\infty |a_n| < \varepsilon.
+ \]
+ Therefore the sequence of partial sums is Cauchy.
+\end{proof}
+
+The converse is also true.
+\begin{thm}
+ If $\sum a_n$ converges unconditionally, then it converges absolutely.
+\end{thm}
+
+\begin{proof}
+ We will prove the contrapositive: if it doesn't converge absolutely, it doesn't converge unconditionally.
+
+ Suppose that $\sum |a_n| = \infty$. Let $(b_n)$ be the subsequence of non-negative terms of $a_n$, and $(c_n)$ be the subsequence of negative terms. Then $\sum b_n$ and $\sum c_n$ cannot both converge, or else $\sum |a_n|$ converges.
+
+ wlog, $\sum b_n = \infty$. Now construct a sequence $0 = n_0 < n_1 < n_2 < \cdots$ such that $\forall k$,
+ \[
+ b_{n_{k - 1} + 1} + b_{n_{k - 1} + 2} + \cdots + b_{n_k} + c_k \geq 1.
+ \]
+ This is possible because $\sum b_n = \infty$, so the partial sums of the $b_n$ can be made as large as we like.
+
+ Let $\pi$ be the rearrangement
+ \[
+ b_1, b_2, \cdots b_{n_1}, c_1, b_{n_1 + 1}, \cdots b_{n_2}, c_2, b_{n_2 + 1}, \cdots b_{n_3}, c_3,\cdots
+ \]
+ So the sum up to $c_k$ is at least $k$. So the partial sums tend to infinity.
+\end{proof}
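The construction in this proof can be carried out concretely on the alternating harmonic series, where $b_n = 1/(2n - 1)$ and $c_n = -1/(2n)$ (a sketch, not part of the notes): greedily take positive terms until the running sum stays at least $k$ even after absorbing $c_k$.

```python
# Rearrange 1, 1/3, 1/5, ... (positives) and -1/2, -1/4, ... (negatives)
# so that the partial sum just after c_k is at least k, as in the proof.

def rearranged_partials(K):
    sums = []
    s = 0.0
    next_odd = 1  # the next unused positive term is 1/next_odd
    for k in range(1, K + 1):
        neg = -1 / (2 * k)  # c_k
        # keep adding positive terms until adding c_k still leaves s >= k
        while s + neg < k:
            s += 1 / next_odd
            next_odd += 2
        s += neg
        sums.append(s)  # partial sum up to and including c_k
    return sums

sums = rearranged_partials(5)
for k, s in enumerate(sums, start=1):
    assert s >= k  # the rearranged partial sums tend to infinity
print(sums)
```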
+
+We can prove an even stronger result:
+\begin{lemma}
+ Let $\sum a_n$ be a series that converges absolutely. Then for any bijection $\pi: \N\to \N$,
+ \[
+ \sum_{n = 1}^\infty a_n = \sum_{n = 1}^\infty a_{\pi(n)}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $\varepsilon > 0$. We know that both $\sum |a_n|$ and $\sum|a_{\pi(n)}|$ converge. So let $M$ be such that $\sum_{n > M}|a_n| < \frac{\varepsilon}{2}$ and $\sum_{n > M}|a_{\pi(n)}| < \frac{\varepsilon}{2}$.
+
+Now let $N$ be large enough such that
+ \[
+ \{1, \cdots, M\}\subseteq \{\pi(1), \cdots, \pi(N)\},
+ \]
+ and
+ \[
+ \{\pi(1), \cdots, \pi(M)\}\subseteq \{1, \cdots, N\}.
+ \]
+ Then for every $K\geq N$,
+ \[
+ \left|\sum_{n = 1}^K a_n - \sum_{n = 1}^K a_{\pi(n)}\right| \leq \sum_{n = M + 1}^K |a_n| + \sum_{n = M + 1}^K |a_{\pi(n)}| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.
+ \]
+ We have the first inequality since, by our choice of $M$ and $N$, each of the terms $a_1, \cdots, a_M$ appears in both partial sums, and these terms cancel.
+
+ So $\forall K \geq N$, the partial sums up to $K$ differ by at most $\varepsilon$. So $|\sum a_n - \sum a_{\pi(n)}| \leq \varepsilon$.
+
+ Since this is true for all $\varepsilon$, we must have $\sum a_n = \sum a_{\pi(n)}$.
+\end{proof}
+
+\subsection{Convergence tests}
+We'll now come up with a \emph{lot} of convergence tests.
+\begin{lemma}[Alternating series test]
+ Let $(a_n)$ be a decreasing sequence of non-negative reals, and suppose that $a_n \to 0$. Then $\displaystyle \sum_{n = 1}^\infty (-1)^{n + 1}a_n$ converges, i.e.\ $a_1 - a_2 + a_3 - a_4 + \cdots$ converges.
+\end{lemma}
+
+\begin{proof}
+ Let $\displaystyle S_N = \sum_{n = 1}^N(-1)^{n + 1}a_n$. Then
+ \[
+ S_{2n} = (a_1 - a_2) + (a_3 - a_4) + \cdots + (a_{2n - 1} - a_{2n}) \geq 0,
+ \]
+ and $(S_{2n})$ is an increasing sequence.
+
+ Also,
+ \[
+ S_{2n + 1} = a_1 - (a_2 - a_3) - (a_4 - a_5) - \cdots - (a_{2n} - a_{2n + 1}),
+ \]
+ and $(S_{2n + 1})$ is a decreasing sequence. Also $S_{2n + 1} - S_{2n} = a_{2n + 1} \geq 0$.
+
+ Hence we obtain the bounds $0 \leq S_{2n} \leq S_{2n + 1} \leq a_1$. It follows from the monotone sequences property that $(S_{2n})$ and $(S_{2n + 1})$ converge.
+
+ Since $S_{2n + 1} - S_{2n} = a_{2n + 1} \to 0$, they converge to the same limit.
+\end{proof}
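The monotone bounds in this proof are easy to observe numerically (an illustration, not part of the notes), here with the decreasing sequence $a_n = n^{-1/3}$: even partial sums increase, odd ones decrease, and every even sum sits below every odd sum.

```python
# Partial sums of sum (-1)^(n+1) a_n for a decreasing a_n -> 0.

def partials(a, N):
    out, s = [], 0.0
    for n in range(1, N + 1):
        s += (-1) ** (n + 1) * a(n)
        out.append(s)
    return out

S = partials(lambda n: n ** (-1 / 3), 1000)
evens = S[1::2]  # S_2, S_4, ...
odds = S[0::2]   # S_1, S_3, ...
assert all(x <= y for x, y in zip(evens, evens[1:]))  # (S_2n) increasing
assert all(x >= y for x, y in zip(odds, odds[1:]))    # (S_2n+1) decreasing
assert max(evens) <= min(odds)  # 0 <= S_2n <= S_2n+1 <= a_1 squeeze
print(evens[-1], odds[-1])
```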
+
+\begin{eg}
+ \[
+ 1 - \frac{1}{\sqrt[3]{2}} + \frac{1}{\sqrt[3]{3}} - \frac{1}{\sqrt[3]{4}} + \cdots \text{converges}.
+ \]
+\end{eg}
+
+\begin{lemma}[Ratio test]
+ We have three versions:
+ \begin{enumerate}
+ \item If $\exists c < 1$ such that
+ \[
+ \frac{|a_{n + 1}|}{|a_n|} \leq c,
+ \]
+ for all $n$, then $\sum a_n$ converges.
+ \item If $\exists c < 1$ and $\exists N$ such that
+ \[
+ (\forall n \geq N)\, \frac{|a_{n + 1}|}{|a_n|} \leq c,
+ \]
+ then $\sum a_n$ converges. Note that just because the ratio is always less than $1$, it doesn't necessarily converge. It has to be always less than a fixed number $c$. Otherwise the test will say that $\sum 1/n$ converges.
+ \item If $\exists \rho \in (-1, 1)$ such that
+ \[
+ \frac{a_{n + 1}}{a_n} \to \rho,
+ \]
+ then $\sum a_n$ converges. Note that we have the \emph{open} interval $(-1, 1)$. If $\frac{|a_{n + 1}|}{|a_n|} \to 1$, then the test is inconclusive!
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $|a_n| \leq c^{n - 1}|a_1|$. Since $\sum c^n$ converges, so does $\sum |a_n|$ by comparison test. So $\sum a_n$ converges absolutely, so it converges.
+ \item For all $k\geq 0$, we have $|a_{N + k}|\leq c^k|a_N|$. So the series $\sum |a_{N + k}|$ converges, and therefore so does $\sum |a_k|$.
+ \item If $\frac{a_{n + 1}}{a_n} \to \rho$, then $\frac{|a_{n + 1}|}{|a_n|} \to |\rho|$. So (setting $\varepsilon = (1 - |\rho|)/2$) there exists $N$ such that $\forall n \geq N$, $\frac{|a_{n + 1}|}{|a_n|} \leq \frac{1 + |\rho|}{2} < 1$. So the result follows from (ii).\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ If $|b| < 1$, then $\sum nb^n$ converges, since
+ \[
+ \frac{a_{n + 1}}{a_n} = \frac{(n + 1) b^{n + 1}}{nb^n} = \left(1 + \frac{1}{n}\right) b\to b < 1.
+ \]
+ So it converges.
+
+ We can also evaluate this directly by considering $\displaystyle\sum_{i = 1}^\infty \sum_{n = i}^\infty b^n$.
+\end{eg}
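Numerically (an illustration, not part of the notes): the ratio of consecutive terms indeed approaches $b$, and the partial sums settle near $b/(1 - b)^2$, a standard identity for $\sum_{n \geq 1} n b^n$ not derived here.

```python
# Ratio test in action for sum n b^n with b = 1/2.
b = 0.5
ratios = [((n + 1) * b ** (n + 1)) / (n * b ** n) for n in range(1, 100)]
print(ratios[-1])  # close to b: the ratio is (1 + 1/n) b

total = sum(n * b ** n for n in range(1, 200))
print(total)  # close to b/(1-b)^2 = 2
```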
+
+The following two tests were taught at the end of the course, but are included here for the sake of completeness.
+\begin{thm}[Condensation test]
+ Let $(a_n)$ be a decreasing non-negative sequence. Then $\sum_{n = 1}^\infty a_n < \infty$ if and only if
+ \[
+ \sum_{k = 1}^\infty 2^k a_{2^k} < \infty.
+ \]
+\end{thm}
+
+\begin{proof}
+ This is basically the proof that $\sum \frac{1}{n}$ diverges and $\sum \frac{1}{n^{\alpha}}$ converges for $\alpha > 1$, but written in a more general way.
+
+ We have
+ \begin{align*}
+ &a_1 + a_2 + (a_3 + a_4) + (a_5 + \cdots + a_8) + (a_9 + \cdots + a_{16}) + \cdots\\
+ \geq & a_1 + a_2 + 2a_4 + 4a_8 + 8 a_{16} + \cdots
+ \end{align*}
+ So if $\sum 2^k a_{2^k}$ diverges, $\sum a_n$ diverges.
+
+ To prove the other way round, simply group as
+ \begin{align*}
+ & a_1 + (a_2 + a_3) + (a_4 + \cdots + a_7) + \cdots\\
+ \leq & a_1 + 2a_2 + 4a_4 + \cdots .\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ If $a_n = \frac{1}{n}$, then $2^k a_{2^k} = 1$. So $\sum_{k = 1}^\infty 2^k a_{2^k} = \infty$. So $\sum_{n = 1}^\infty \frac{1}{n} = \infty$.
+\end{eg}
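For contrast (an illustration, not part of the notes), taking $a_n = 1/n^2$ makes the condensed terms a geometric series, matching the convergence of $\sum 1/n^2$.

```python
# For a_n = 1/n^2, the condensed terms are 2^k * (1/2^k)^2 = 2^(-k).
condensed = [2 ** k * (1 / 2 ** k) ** 2 for k in range(1, 30)]
assert all(abs(t - 2 ** (-k)) < 1e-15
           for k, t in enumerate(condensed, start=1))
print(sum(condensed))  # close to 1, the sum of the geometric series
```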
+
+After we formally define integrals, we will prove the integral test:
+\begin{thm}[Integral test]
+ Let $f: [1, \infty) \to \R$ be a decreasing non-negative function. Then $\sum_{n = 1}^\infty f(n)$ converges iff $\int_1^\infty f(x)\;\d x < \infty$.
+\end{thm}
+
+\subsection{Complex versions}
+Most definitions in the course so far carry over unchanged to the complex numbers. e.g.\ $z_n \to z$ iff $(\forall \varepsilon > 0)(\exists N)(\forall n\geq N)\, |z_n - z| < \varepsilon$.
+
+Two exceptions are the least upper bound property and monotone sequences, because the complex numbers do not have an ordering! (They cannot be made into an ordered field, since in an ordered field the square of every non-zero element is positive, but $i^2 = -1 < 0$.) Fortunately, Cauchy sequences still work.
+
+We can prove the complex versions of most theorems so far by looking at the real and imaginary parts.
+
+\begin{eg}
+ Let $(z_n)$ be a Cauchy sequence in $\C$. Let $z_n = x_n + iy_n$. Then $(x_n)$ and $(y_n)$ are Cauchy, since $|x_p - x_q| \leq |z_p - z_q|$ and similarly for the imaginary parts. So they converge, from which it follows that $z_n = x_n + iy_n$ converges.
+\end{eg}
+
+Also, the Bolzano-Weierstrass theorem still holds: if $(z_n)$ is bounded, write $z_n = x_n + iy_n$; then $(x_n)$ and $(y_n)$ are bounded. Find a subsequence $(x_{n_k})$ that converges, and then a further subsequence along which $(y_{n_k})$ converges as well; along this subsequence, $(z_n)$ converges.
+
+The nested-intervals property has a ``nested-box'' property as a complex analogue.
+
+Finally, the proof that absolutely convergent series converge still works. It follows that the ratio test still works.
+
+\begin{eg}
+ If $|z| < 1$, then $\sum nz^n$ converges. Proof is the same as above.
+\end{eg}
+
+However, we do have an extra test for complex sums.
+\begin{lemma}[Abel's test]
+ Let $a_1 \geq a_2 \geq \cdots \geq 0$, and suppose that $a_n \to 0$. Let $z\in \C$ such that $|z| = 1$ and $z \not= 1$. Then $\sum a_n z^n$ converges.
+\end{lemma}
+
+\begin{proof}
+ We prove that it is Cauchy. We have
+ \begin{align*}
+ \sum_{n = M}^N a_n z^n &= \sum_{n = M}^N a_n\frac{z^{n + 1} - z^n}{z - 1}\\
+ &= \frac{1}{z - 1}\sum_{n = M}^N a_n (z^{n + 1} - z^n)\\
+ &= \frac{1}{z - 1}\left(\sum_{n = M}^N a_n z^{n + 1} - \sum_{n = M}^N a_n z^n\right)\\
+ &= \frac{1}{z - 1}\left(\sum_{n = M}^N a_n z^{n + 1} - \sum_{n = M - 1}^{N - 1} a_{n + 1} z^{n + 1}\right)\\
+ &= \frac{1}{z - 1}\left(a_N z^{N + 1} - a_M z^M + \sum_{n = M}^{N - 1} (a_n - a_{n + 1})z^{n + 1}\right)\\
+ \intertext{We now take the absolute value of everything to obtain}
+ \left|\sum_{n = M}^N a_nz^n\right| &\leq \frac{1}{|z - 1|} \left(a_N + a_M + \sum_{n = M}^{N - 1}(a_n - a_{n + 1})\right)\\
+ &= \frac{1}{|z - 1|}\left(a_N + a_M + (a_M - a_{M + 1}) + \cdots + (a_{N - 1} - a_{N})\right)\\
+ &= \frac{2a_M}{|z - 1|} \to 0.
+ \end{align*}
+ So it is Cauchy, and hence it converges.
+\end{proof}
+Note that here we transformed the sum $\sum a_n(z^{n + 1} - z^n)$ into $a_N z^{N + 1} - a_M z^M + \sum (a_n - a_{n + 1})z^{n + 1}$. What we have effectively done is a discrete analogue of integrating by parts.
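The summation-by-parts identity used above can be checked numerically (an illustration, not part of the notes), here with $a_n = 1/n$ and a point $z$ on the unit circle:

```python
import cmath

# Verify: sum_{n=M}^{N} a_n (z^(n+1) - z^n)
#   = a_N z^(N+1) - a_M z^M + sum_{n=M}^{N-1} (a_n - a_(n+1)) z^(n+1).
z = cmath.exp(1j)  # |z| = 1, z != 1
M, N = 3, 40
a = lambda n: 1 / n

lhs = sum(a(n) * (z ** (n + 1) - z ** n) for n in range(M, N + 1))
rhs = (a(N) * z ** (N + 1) - a(M) * z ** M
       + sum((a(n) - a(n + 1)) * z ** (n + 1) for n in range(M, N)))
print(abs(lhs - rhs))  # essentially 0
```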
+
+\begin{eg}
+ The series $\sum z^n/n$ converges if $|z| < 1$ or if $|z| = 1$ and $z \not= 1$, and it diverges if $z = 1$ or $|z| > 1$.
+
+ The cases $|z| < 1$ and $|z| > 1$ are trivial from the ratio test, and Abel's test is required for the $|z| = 1$ cases.
+\end{eg}
+\section{Continuous functions}
+\subsection{Continuous functions}
+\begin{defi}[Continuous function]
+ Let $A\subseteq \R$, $a\in A$, and $f: A\to \R$. Then $f$ is \emph{continuous at} $a$ if for any $\varepsilon > 0 $, there is some $\delta > 0$ such that if $y \in A$ is such that $|y - a| < \delta$, then $|f(y) - f(a)| < \varepsilon$. In symbols, we have
+ \[
+ (\forall \varepsilon > 0)(\exists \delta > 0)(\forall y\in A)\, |y - a| < \delta \Rightarrow |f(y) - f(a)| < \varepsilon.
+ \]
+ $f$ is \emph{continuous} if it is continuous at every $a\in A$. In symbols, we have
+ \[
+ (\forall a\in A)(\forall \varepsilon > 0)(\exists \delta > 0)(\forall y\in A)\, |y - a| < \delta \Rightarrow |f(y) - f(a)| < \varepsilon.
+ \]
+\end{defi}
+Intuitively, $f$ is continuous at $a$ if we can compute $f(a)$ as accurately as we wish by evaluating $f$ at sufficiently accurate approximations of $a$ (the definition says that if we want to approximate $f(a)$ by $f(y)$ to within accuracy $\varepsilon$, we just have to get our $y$ to within $\delta$ of $a$ for some $\delta$).
+
+For example, suppose we have the function
+\[
+ f(x) = \begin{cases} 0 & x \leq \pi\\ 1& x > \pi\end{cases}.
+\]
+Suppose that we don't know what the function actually is, but we have a computer program that computes this function. We want to know what $f(\pi)$ is. Since we cannot input $\pi$ (it has infinitely many digits), we can try $3$, and it gives $0$. Then we try $3.14$, and it gives $0$ again. If we try $3.1416$, it gives $1$ (since $\pi = 3.1415926\cdots < 3.1416$). We keep giving more and more digits of $\pi$, but the result keeps oscillating between $0$ and $1$. We have no hope of determining what $f(\pi)$ might be, even approximately. So this $f$ is discontinuous at $\pi$.
+
+However, if we have the function $g(x) = x^2$, then we \emph{can} find the (approximate) value of $g(\pi)$. We can first try $g(3)$ and obtain $9$. Then we can try $g(3.14) = 9.8596$, $g(3.1416) = 9.86965056$ etc. We can keep trying and obtain more and more accurate values of $g(\pi)$. So $g$ is continuous at $\pi$.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Constant functions are continuous.
+ \item The function $f(x) = x$ is continuous (take $\delta = \varepsilon$).
+ \end{itemize}
+\end{eg}
+
+The definition of continuity of a function looks rather like the definition of convergence. In fact, they are related by the following lemma:
+
+\begin{lemma}
+ The following two statements are equivalent for a function $f: A\to \R$.
+ \begin{itemize}
+ \item $f$ is continuous
+ \item If $(a_n)$ is a sequence in $A$ with $a_n \to a$, then $f(a_n) \to f(a)$.
+ \end{itemize}
+\end{lemma}
+
+\begin{proof}
+ (i)$\Rightarrow$(ii) Let $\varepsilon > 0$. Since $f$ is continuous at $a$,
+ \[
+ (\exists \delta > 0)(\forall y\in A)\, |y-a|< \delta \Rightarrow |f(y) - f(a)| < \varepsilon.
+ \]
+ We want $N$ such that $\forall n \geq N$, $|f(a_n) - f(a)| < \varepsilon$. By continuity, it is enough to find $N$ such that $\forall n\geq N$, $|a_n - a| < \delta$. Since $a_n \to a$, such an $N$ exists.
+
+ (ii)$\Rightarrow$(i) We prove the contrapositive: Suppose $f$ is not continuous at $a$. Then
+ \[
+ (\exists \varepsilon > 0)(\forall \delta > 0)(\exists y\in A)\, |y - a| < \delta \text{ and }|f(y) - f(a)| \geq \varepsilon.
+ \]
+ For each $n$, we can therefore pick $a_n \in A$ such that $|a_n - a| < \frac{1}{n}$ and $|f(a_n) - f(a)| \geq \varepsilon$. But then $a_n \to a$ (by Archimedean property), but $f(a_n) \not\to f(a)$.
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $f(x) = \begin{cases} -1 & x < 0 \\ 1 & x\geq 0\end{cases}$. Then $f$ is not continuous because $-\frac{1}{n} \to 0$ but $f(-\frac{1}{n}) \to -1 \not= f(0)$.
+ \item Let $f: \Q \to \R$ with
+ \[
+ f(x) =
+ \begin{cases}
+ 1 & x^2 > 2\\
+ 0 & x^2 < 2
+ \end{cases}
+ \]
+ Then $f$ is continuous. For every $a\in \Q$, we can find an interval about $a$ on which $f$ is constant. So $f$ is continuous at $a$.
+ \item Let
+ \[
+ f(x) =
+ \begin{cases}
+ \sin \frac{1}{x} & x \not= 0\\
+ 0 & x = 0
+ \end{cases}
+ \]
+ Then $f$ is discontinuous at $0$. For example, let $a_n = 1/[(2n + 0.5)\pi]$. Then $a_n\to 0$ and $f(a_n) \to 1 \not= f(0)$.
+ \end{enumerate}
+\end{eg}
+We can use this sequence definition as the definition for continuous functions. This has the advantage of being cleaner to write and easier to work with. In particular, we can reuse a lot of our sequence theorems to prove the analogous results for continuous functions.
+
+\begin{lemma}
+ Let $A\subseteq \R$ and $f, g: A\to \R$ be continuous functions. Then
+ \begin{enumerate}
+ \item $f + g$ is continuous
+ \item $fg$ is continuous
+ \item if $g$ never vanishes, then $f/g$ is continuous.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $a\in A$ and let $(a_n)$ be a sequence in $A$ with $a_n \to a$. Then
+ \[
+ (f + g)(a_n) = f(a_n) + g(a_n).
+ \]
+ But $f(a_n) \to f(a)$ and $g(a_n) \to g(a)$. So
+ \[
+ f(a_n) + g(a_n) \to f(a) + g(a) = (f + g)(a).
+ \]
+ \end{enumerate}
+ (ii) and (iii) are proved in exactly the same way.
+\end{proof}
+
+With this lemma, from the fact that constant functions and $f(x) = x$ are continuous, we know that all polynomials are continuous. Similarly, rational functions $P(x)/Q(x)$ are continuous except when $Q(x) = 0$.
+
+\begin{lemma}
+ Let $A, B\subseteq \R$ and $f: A\to B$, $g: B\to \R$. Then if $f$ and $g$ are continuous, $g\circ f: A\to \R$ is continuous.
+\end{lemma}
+
+\begin{proof}
+ We offer two proofs:
+ \begin{enumerate}
+ \item Let $(a_n)$ be a sequence in $A$ with $a_n \to a\in A$. Then $f(a_n) \to f(a)$ since $f$ is continuous. Then $g(f(a_n)) \to g(f(a))$ since $g$ is continuous. So $g\circ f$ is continuous.
+ \item Let $a\in A$ and $\varepsilon > 0$. Since $g$ is continuous at $f(a)$, there exists $\eta > 0$ such that $\forall z\in B$, $|z - f(a)| < \eta \Rightarrow |g(z) - g(f(a))| < \varepsilon$.
+
+ Since $f$ is continuous at $a$, $\exists \delta > 0$ such that $\forall y\in A$, $|y - a| < \delta \Rightarrow |f(y) - f(a)| < \eta$. Therefore $|y - a| < \delta \Rightarrow |g(f(y)) - g(f(a))| < \varepsilon$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+There are two important theorems regarding continuous functions --- the maximum value theorem and the intermediate value theorem.
+\begin{thm}[Maximum value theorem]
+ Let $[a, b]$ be a closed interval in $\R$ and let $f: [a, b] \to \R$ be continuous. Then $f$ is bounded and attains its bounds, i.e.\ $f(x) = \sup f$ for some $x$, and $f(y) = \inf f$ for some $y$.
+\end{thm}
+
+\begin{proof}
+ If $f$ is not bounded above, then for each $n$, we can find $x_n\in [a, b]$ such that $f(x_n) \geq n$.
+
+ By Bolzano-Weierstrass, since $x_n \in [a, b]$ and is bounded, the sequence $(x_n)$ has a convergent subsequence $(x_{n_k})$. Let $x$ be its limit. Then since $f$ is continuous, $f(x_{n_k}) \to f(x)$. But $f(x_{n_k}) \geq n_k \to \infty$. So this is a contradiction.
+
+ Now let $C = \sup\{f(x): x\in [a, b]\}$. Then for every $n$, we can find $x_n$ such that $f(x_n) \geq C - \frac{1}{n}$. So by Bolzano-Weierstrass, $(x_n)$ has a convergent subsequence $(x_{n_k})$. Since $C - \frac{1}{n_{k}}\leq f(x_{n_k}) \leq C$, $f(x_{n_k})\to C$. Therefore if $x = \lim x_{n_k}$, then $f(x) = C$.
+
+ A similar argument shows that $f$ is bounded below and attains its infimum.
+\end{proof}
+
+\begin{thm}[Intermediate value theorem]
+ Let $a < b\in \R$ and let $f: [a, b] \to \R$ be continuous. Suppose that $f(a) < 0 < f(b)$. Then there exists an $x\in (a, b)$ such that $f(x) = 0$.
+\end{thm}
+
+\begin{proof}
+ We have several proofs:
+ \begin{enumerate}
+ \item Let $A = \{x: f(x) < 0\}$ and let $s = \sup A$. We shall show that $f(s) = 0$ (this is similar to the proof that $\sqrt{2}$ exists in Numbers and Sets). If $f(s) < 0$, then setting $\varepsilon = |f(s)|$ in the definition of continuity, we can find $\delta > 0$ such that $\forall y$, $|y - s| < \delta \Rightarrow f(y) < 0$. Then $s + \delta/2 \in A$, so $s$ is not an upper bound. Contradiction.
+
+ If $f(s) > 0$, by the same argument, we can find $\delta > 0$ such that $\forall y$, $|y - s| < \delta \Rightarrow f(y) > 0$. So $s - \delta/2$ is a smaller upper bound.
+ \item Let $a_0 = a$, $b_0 = b$. By repeated bisection, construct nested intervals $[a_n, b_n]$ such that $b_n - a_n = \frac{b_0 - a_0}{2^n}$ and $f(a_n) < 0 \leq f(b_n)$. Then by the nested intervals property, we can find $x\in \bigcap_{n = 0}^\infty [a_n, b_n]$. Since $b_n - a_n \to 0$, $a_n, b_n \to x$.
+
+ Since $f(a_n) < 0$ for every $n$, $f(x) \leq 0$. Similarly, since $f(b_n) \geq 0$ for every $n$, $f(x) \geq 0$. So $f(x) = 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+It is easy to generalize this to get that, if $f(a) < c < f(b)$, then $\exists x\in (a, b)$ such that $f(x) = c$, by applying the result to $f(x) - c$. Also, we can assume instead that $f(b) < c < f(a)$ and obtain the same result by looking at $-f(x)$.
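Proof (ii) is effectively an algorithm; here is a sketch of it as a root-finder (an illustration, not part of the notes), applied to $f(x) = x^2 - 2$ on $[0, 2]$, which homes in on $\sqrt{2}$.

```python
# Repeated bisection maintaining f(a_n) < 0 <= f(b_n), as in proof (ii).

def bisect(f, a, b, steps=60):
    """Assumes f(a) < 0 < f(b); returns x with f(x) close to 0."""
    for _ in range(steps):
        mid = (a + b) / 2
        if f(mid) < 0:
            a = mid  # root lies in [mid, b]
        else:
            b = mid  # root lies in [a, mid]
    return (a + b) / 2

# The interval length halves each step, so 60 steps pin the root down
# to well below floating-point precision.
root = bisect(lambda x: x * x - 2, 0.0, 2.0)
print(root)  # close to sqrt(2)
```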
+
+\begin{cor}
+ Let $f: [a, b]\to [c, d]$ be a continuous strictly increasing function with $f(a) = c$, $f(b) = d$. Then $f$ is invertible and its inverse is continuous.
+\end{cor}
+
+\begin{proof}
+ Since $f$ is strictly increasing, it is an injection (suppose $x \not= y$. wlog, $x < y$. Then $f(x) < f(y)$ and so $f(x) \not= f(y)$). Now let $y\in (c, d)$. By the intermediate value theorem, there exists $x\in (a, b)$ such that $f(x) = y$. So $f$ is a surjection. So it is a bijection and hence invertible.
+
+ Let $g$ be the inverse. Let $y\in [c, d]$ and let $\varepsilon > 0$. Let $x = g(y)$. So $f(x) = y$. Let $u = f(x - \varepsilon)$ and $v = f(x + \varepsilon)$ (if $y = c$ or $d$, make the obvious adjustments). Then $u < y < v$. So we can find $\delta > 0$ such that $(y - \delta , y + \delta) \subseteq (u, v)$. Then $|z - y| < \delta \Rightarrow g(z) \in (x - \varepsilon, x + \varepsilon) \Rightarrow |g(z) - g(y)| < \varepsilon$.
+\end{proof}
+With this corollary, we can create more continuous functions, e.g.\ $\sqrt{x}$.
+
+\subsection{Continuous induction*}
+Continuous induction is a generalization of induction on natural numbers. It provides an alternative mechanism to prove certain results we have shown.
+
+\begin{prop}[Continuous induction v1]
+ Let $a < b$ and let $A\subseteq [a, b]$ have the following properties:
+ \begin{enumerate}
+ \item $a\in A$
+ \item If $x\in A$ and $x\not= b$, then $\exists y\in A$ with $y > x$.
+ \item If $\forall \varepsilon > 0$, $\exists y\in A: y\in (x - \varepsilon, x]$, then $x\in A$.
+ \end{enumerate}
+ Then $b\in A$.
+\end{prop}
+
+\begin{proof}
+ Since $a\in A$, $A\not= \emptyset$. $A$ is also bounded above by $b$. So let $s = \sup A$. Then $\forall \varepsilon > 0$, $\exists y\in A$ such that $y > s - \varepsilon$. Therefore, by (iii), $s\in A$.
+
+ If $s\not= b$, then by (ii), we can find $y\in A$ such that $y > s$, contradicting the fact that $s$ is an upper bound for $A$. So $s = b$, and hence $b\in A$.
+\end{proof}
+
+It can also be formulated as follows:
+\begin{prop}[Continuous induction v2]
+ Let $A\subseteq [a, b]$ and suppose that
+ \begin{enumerate}
+ \item $a\in A$
+ \item If $[a, x]\subseteq A$ and $x\not = b$, then there exists $y > x$ such that $[a, y]\subseteq A$.
+ \item If $[a, x)\subseteq A$, then $[a, x]\subseteq A$.
+ \end{enumerate}
+ Then $A = [a, b]$
+\end{prop}
+
+\begin{proof}
+ We prove that version 1 $\Rightarrow$ version 2.
+ Suppose $A$ satisfies the conditions of v2. Let $A' = \{x\in [a, b]: [a, x]\subseteq A\}$.
+
+ Then $a\in A'$. If $x\in A'$ with $x \not= b$, then $[a, x]\subseteq A$. So $\exists y > x$ such that $[a, y] \subseteq A$. So $\exists y > x$ such that $y\in A'$.
+
+ If $\forall \varepsilon > 0, \exists y\in (x - \varepsilon, x]$ such that $[a, y]\subseteq A$, then $[a, x)\subseteq A$. So by (iii), $[a, x]\subseteq A$, so $x\in A'$. So $A'$ satisfies properties (i) to (iii) of version 1. Therefore $b\in A'$. So $[a, b]\subseteq A$. So $A = [a, b]$.
+\end{proof}
+
+We reprove intermediate value theorem here:
+
+\begin{thm}[Intermediate value theorem]
+ Let $a < b\in \R$ and let $f: [a, b] \to \R$ be continuous. Suppose that $f(a) < 0 < f(b)$. Then there exists an $x\in (a, b)$ such that $f(x) = 0$.
+\end{thm}
+
+\begin{proof}
+ Assume that $f$ is continuous. Suppose $f(a) < 0 < f(b)$. Assume that $(\forall x)\, f(x) \not =0$, and derive a contradiction.
+
+ Let $A = \{x: f(x) < 0\}$. Then $a\in A$. If $x\in A$, then $f(x) < 0$, and by continuity, we can find $\delta > 0$ such that $|y - x| < \delta\Rightarrow f(y) < 0$. So if $x\not= b$, then we can find $y\in A$ such that $y > x$.
+
+ We prove the contrapositive of the last condition, i.e.
+ \[
+ x\not\in A\Rightarrow (\exists \delta > 0)(\forall y\in A)\, y\not\in(x - \delta, x].
+ \]
+ If $x\not\in A$, then $f(x) > 0$, since we have assumed that $f$ is never zero. Then by continuity, $\exists \delta > 0$ such that $|y - x| < \delta \Rightarrow f(y) > 0$. So $y\not\in A$.
+
+ Hence by continuous induction, $b\in A$. Contradiction.
+\end{proof}
+
+Now we prove that continuous functions in closed intervals are bounded.
+\begin{thm}
+ Let $[a, b]$ be a closed interval in $\R$ and let $f: [a, b] \to \R$ be continuous. Then $f$ is bounded.
+\end{thm}
+
+\begin{proof}
+ Let $f: [a, b] \to \R$ be continuous. Let $A = \{x: f\text{ is bounded on }[a, x]\}$. Then $a\in A$. If $x\in A, x\not= b$, then $\exists \delta > 0$ such that $|y - x| < \delta \Rightarrow |f(y) - f(x)| < 1$. So $\exists y > x$ (e.g.\ take $\min\{x + \delta/2, b\}$) such that $f$ is bounded on $[a, y]$, which implies that $y\in A$.
+
+ Now suppose that $\forall \varepsilon > 0$, $\exists y\in (x - \varepsilon, x]$ such that $y\in A$. Again, we can find $\delta > 0$ such that $f$ is bounded on $(x - \delta, x + \delta)$, and in particular on $(x - \delta, x]$. Pick $y$ such that $f$ is bounded on $[a, y]$ and $y > x - \delta$. Then $f$ is bounded on $[a, x]$. So $x\in A$.
+
+ So we are done by continuous induction.
+\end{proof}
+
+Finally, we can prove a theorem that we have not yet proven.
+\begin{defi}[Cover of a set]
+ Let $A\subseteq \R$. A \emph{cover} of $A$ by open intervals is a set $\{I_\gamma: \gamma\in \Gamma\}$ where each $I_\gamma$ is an open interval and $A \subseteq \bigcup_{\gamma\in \Gamma} I_\gamma$.
+
+ A \emph{finite subcover} is a finite subset $\{I_{\gamma_1}, \cdots, I_{\gamma_n}\}$ of the cover that is still a cover.
+\end{defi}
+
+Not every cover has a finite subcover. For example, the cover $\{(\frac{1}{n}, 1): n\in \N\}$ of $(0, 1)$ has no finite subcover.
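This is easy to check concretely (a Python sketch, not part of the notes; the particular finite family is an arbitrary choice): any finite subfamily $\{(1/n, 1): n\in F\}$ has union $(1/\max F, 1)$, which misses points of $(0, 1)$ near $0$.

```python
# Sketch: a finite subfamily {(1/n, 1) : n in F} of the cover of (0, 1)
# has union (1/max(F), 1), so it misses the point 1/(2*max(F)) of (0, 1).
def covered(x, F):
    # is x in the union of the open intervals (1/n, 1) for n in F?
    return any(1 / n < x < 1 for n in F)

F = [2, 5, 17]                # an arbitrary finite subfamily
x = 1 / (2 * max(F))          # x = 1/34, a point of (0, 1)
print(covered(x, F))          # not covered: no finite subcover exists
print(covered(0.5, F))        # 0.5 is covered, e.g. by (1/5, 1)
```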
+
+\begin{thm}[Heine-Borel*]
+ Every cover of a closed, bounded interval $[a, b]$ by open intervals has a finite subcover. We say closed bounded intervals are \emph{compact} (cf.\ Metric and Topological Spaces).
+\end{thm}
+
+\begin{proof}
+ Let $\{I_\gamma: \gamma\in \Gamma\}$ be a cover of $[a, b]$ by open intervals. Let $A = \{x: [a, x]$ can be covered by finitely many of the $I_\gamma\}$.
+
+ Then $a\in A$ since $a$ must belong to some $I_\gamma$.
+
+ If $x\in A$, then pick $\gamma$ such that $x\in I_\gamma$. Then if $x\not = b$, since $I_\gamma$ is an open interval, it contains $[x, y]$ for some $y > x$. Then $[a, y]$ can be covered by finitely many $I_\gamma$, by taking a finite cover for $[a, x]$ and the $I_\gamma$ that contains $x$.
+
+ Now suppose that $\forall \varepsilon > 0, \exists y\in A$ such that $y\in (x - \varepsilon, x]$.
+
+ Let $I_\gamma$ be an open interval containing $x$. Then it contains $(x - \varepsilon, x]$ for some $\varepsilon > 0$. Pick $y\in A$ such that $y\in (x - \varepsilon, x]$. Now combine $I_\gamma$ with a finite subcover of $[a, y]$ to get a finite subcover of $[a, x]$. So $x\in A$.
+
+ Then done by continuous induction.
+\end{proof}
+
+We can use Heine-Borel to prove that continuous functions on $[a, b]$ are bounded.
+
+\begin{thm}
+ Let $[a, b]$ be a closed interval in $\R$ and let $f: [a, b] \to \R$ be continuous. Then $f$ is bounded and attains its bounds, i.e.\ $f(x) = \sup f$ for some $x$, and $f(y) = \inf f$ for some $y$.
+\end{thm}
+
+\begin{proof}
+ Let $f: [a, b]\to \R$ be continuous. Then by continuity,
+ \[
+ (\forall x\in [a, b])(\exists \delta_x > 0)(\forall y)\, |y - x| < \delta_x\Rightarrow |f(y) - f(x)| < 1.
+ \]
+ For each $x\in [a, b]$, let $I_x = (x - \delta_x, x + \delta_x)$. Then $\{I_x: x\in [a, b]\}$ is a cover of $[a, b]$ by open intervals. So by Heine-Borel, we can find $x_1, \cdots, x_n$ such that $[a, b]\subseteq \bigcup_1^n (x_i - \delta_{x_i}, x_i + \delta_{x_i})$.
+
+ But $f$ is bounded in each interval $(x_i - \delta_{x_i}, x_i + \delta_{x_i})$ by $|f(x_i)| + 1$. So it is bounded on $[a, b]$ by $\max\limits_i |f(x_i)| + 1$.
+
+ To see that the bounds are attained, suppose the supremum $M = \sup f$ were not attained. Then $\frac{1}{M - f}$ would be continuous on $[a, b]$, hence bounded by the above, say by $C > 0$. But then $f(x) \leq M - \frac{1}{C}$ for every $x$, contradicting $M = \sup f$. The infimum is handled similarly.
+\end{proof}
+
+\section{Differentiability}
+In the remainder of the course, we will properly develop calculus, and put differentiation and integration on a rigorous foundation. Every notion will be given a proper definition which we will use to prove results like the product and quotient rule.
+\subsection{Limits}
+First of all, we need the notion of limits. Recall that we've previously had limits for \emph{sequences}. Now, we will define limits for functions.
+\begin{defi}[Limit of functions]
+ Let $A\subseteq \R$ and let $f: A\to \R$. We say
+ \[
+ \lim_{x\to a}f(x) = \ell,
+ \]
+ or ``$f(x) \to \ell$ as $x \to a$'', if
+ \[
+ (\forall \varepsilon > 0)(\exists \delta > 0)(\forall x\in A)\, 0 < |x - a| < \delta \Rightarrow |f(x) - \ell| < \varepsilon.
+ \]
+ We couldn't care less what happens when $x = a$, hence the strict inequality $0 < |x - a|$. In fact, $f$ doesn't even have to be defined at $x = a$.
+\end{defi}
+
+\begin{eg}
+ Let
+ \[
+ f(x) =
+ \begin{cases}
+ x & x \not = 2\\
+ 3 & x = 2
+ \end{cases}
+ \]
+ Then $\lim\limits_{x\to 2}f(x) = 2$, even though $f(2) = 3$.
+\end{eg}
+
+\begin{eg}
+ Let $f(x) = \frac{\sin x}{x}$. Then $f(0)$ is not defined but $\lim\limits_{x\to 0}f(x) = 1$.
+
+ We will see a proof later after we define what $\sin$ means.
+\end{eg}
+
+We notice that the definition of the limit is suspiciously similar to that of continuity. In fact, if we define
+\[
+ g(x) =
+ \begin{cases}
+ f(x) & x \not =a\\
+ \ell & x = a
+ \end{cases}
+\]
+then $f(x) \to \ell$ as $x \to a$ iff $g$ is continuous at $a$.
+
+Alternatively, $f$ is continuous at $a$ if $f(x) \to f(a)$ as $x \to a$. It follows also that $f(x) \to \ell$ as $x\to a$ iff $f(x_n) \to \ell$ for every sequence $(x_n)$ in $A$ with $x_n\to a$.
+
+The previous limit theorems for sequences apply here as well.
+\begin{prop}
+ If $f(x)\to \ell$ and $g(x)\to m$ as $x\to a$, then $f(x) + g(x) \to \ell+ m$, $f(x)g(x) \to \ell m$, and $\frac{f(x)}{g(x)}\to \frac{\ell}{m}$, provided $m \not= 0$ (which guarantees $g(x) \not= 0$ for $x$ sufficiently close to $a$).
+\end{prop}
+
+\subsection{Differentiation}
+Similar to what we did in IA Differential Equations, we define the derivative as a limit.
+\begin{defi}[Differentiable function]
+ $f$ is \emph{differentiable} at $a$ with derivative $\lambda$ if
+ \[
+ \lim_{x\to a}\frac{f(x) - f(a)}{x - a} = \lambda.
+ \]
+ Equivalently, if
+ \[
+ \lim_{h\to 0}\frac{f(a + h) - f(a)}{h} = \lambda.
+ \]
+ We write $\lambda = f'(a)$.
+\end{defi}
+Here we see why, in the definition of the limit, we say that we don't care what happens when $x = a$. In our definition here, our function is 0/0 when $x = a$, and we can't make any sense out of what happens when $x = a$.
+
+Alternatively, we write the definition of differentiation as
+\[
+ \frac{f(x + h) - f(x)}{h} = f'(x) + \varepsilon(h),
+\]
+where $\varepsilon(h) \to 0$ as $h \to 0$. Rearranging, we can deduce that
+\[
+ f(x + h) = f(x) + hf'(x) + h\varepsilon(h).
+\]
+Note that by the definition of the limit, we don't have to care what value $\varepsilon$ takes when $h = 0$. It can be $0$, $\pi$ or $10^{10^{10}}$. However, we usually take $\varepsilon(0) = 0$ so that $\varepsilon$ is continuous.
+
+Using the small-$o$ notation, we usually write $o(h)$ for a function that satisfies $\frac{o(h)}{h}\to 0$ as $h\to 0$. Hence we have
+\begin{prop}
+ \[
+ f(x + h) = f(x) + hf'(x) + o(h).
+ \]
+\end{prop}
+We can interpret this as an approximation of $f(x + h)$:
+\[
+ f(x + h) = \underbrace{f(x) + hf'(x)}_{\text{linear approximation}} + \underbrace{o(h)}_{\text{error term}}.
+\]
+And differentiability shows that this is a very good approximation with small $o(h)$ error.
+
+Conversely, we have
+\begin{prop}
+ If $f(x + h) = f(x) + hf'(x) + o(h)$, then $f$ is differentiable at $x$ with derivative $f'(x)$.
+\end{prop}
+\begin{proof}
+ \[
+ \frac{f(x + h) - f(x)}{h} = f'(x) + \frac{o(h)}{h} \to f'(x).\qedhere
+ \]
+\end{proof}
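As a numerical sanity check (a Python sketch, not part of the notes; the cubic and the step sizes are arbitrary illustrative choices), for $f(x) = x^3$ the error $f(x + h) - f(x) - hf'(x)$ equals $3xh^2 + h^3$, so dividing by $h$ gives something that visibly tends to $0$:

```python
# Sketch: for f(x) = x^3, f(x + h) - f(x) - h f'(x) = 3x h^2 + h^3,
# so the error divided by h tends to 0 as h -> 0, i.e. the error is o(h).
def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

x = 2.0
for h in [1e-1, 1e-2, 1e-3]:
    # (error)/h = 6h + h^2 at x = 2, shrinking with h
    print(h, (f(x + h) - f(x) - h * fprime(x)) / h)
```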
+
+We can take derivatives multiple times, and get multiple derivatives.
+\begin{defi}[Multiple derivatives]
+ This is defined recursively: $f$ is $(n + 1)$-times differentiable if it is $n$-times differentiable and its $n$th derivative $f^{(n)}$ is differentiable. We write $f^{(n + 1)}$ for the derivative of $f^{(n)}$, i.e.\ the $(n + 1)$th derivative of $f$.
+
+ Informally, we will say $f$ is $n$-times differentiable if we can differentiate it $n$ times, and the $n$th derivative is $f^{(n)}$.
+\end{defi}
+
+We can prove the usual rules of differentiation using the small $o$-notation. It can also be proven by considering limits directly, but the notation will become a bit more daunting.
+\begin{lemma}[Sum and product rule]
+ Let $f, g$ be differentiable at $x$. Then $f + g$ and $fg$ are differentiable at $x$, with
+ \begin{align*}
+ (f + g)'(x) &= f'(x) + g'(x)\\
+ (fg)'(x) &= f'(x)g(x) + f(x)g'(x)
+ \end{align*}
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ (f + g)(x + h) ={}& f(x + h) + g(x + h)\\
+ ={}& f(x) +hf'(x) + o(h) + g(x) + hg'(x) + o(h)\\
+ ={}& (f + g)(x) + h(f'(x) + g'(x)) + o(h)\\
+ fg(x + h) ={}& f(x + h)g(x + h)\\
+ ={}& [f(x) + hf'(x) + o(h)][g(x) + hg'(x) + o(h)]\\
+ ={}& f(x)g(x) + h[f'(x)g(x) + f(x)g'(x)]\\
+ &+ \underbrace{o(h)[g(x) + f(x) + hf'(x) + hg'(x) + o(h)] + h^2f'(x)g'(x)}_{\text{error term}}\\
+ \intertext{By limit theorems, the error term is $o(h)$. So we can write this as}
+ ={}& fg(x) + h(f'(x)g(x) + f(x)g'(x)) + o(h).\qedhere
+ \end{align*}
+\end{proof}
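The product rule is easy to test numerically (a Python sketch, not part of the notes; the polynomials and the step size are arbitrary choices):

```python
# Sketch: a small symmetric difference quotient of (fg) matches
# f'(x) g(x) + f(x) g'(x) for f(x) = x^2, g(x) = x^3 + 1.
def f(x):
    return x ** 2

def g(x):
    return x ** 3 + 1

def fg(x):
    return f(x) * g(x)

x, h = 1.3, 1e-6
numeric = (fg(x + h) - fg(x - h)) / (2 * h)    # difference quotient
exact = 2 * x * g(x) + f(x) * 3 * x ** 2       # product rule formula
print(numeric, exact)                          # both close to 16.8805
```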
+
+\begin{lemma}[Chain rule]
+ If $f$ is differentiable at $x$ and $g$ is differentiable at $f(x)$, then $g\circ f$ is differentiable at $x$ with derivative $g'(f(x))f'(x)$.
+\end{lemma}
+\begin{proof}
+ If one is sufficiently familiar with the small-$o$ notation, then we can proceed as
+ \[
+ g(f(x + h)) = g(f(x) + h f'(x) + o(h)) = g(f(x)) + h f'(x) g'(f(x)) + o(h).
+ \]
+ If not, we can be a bit more explicit about the computations, and use $h\varepsilon(h)$ instead of $o(h)$:
+ \begin{align*}
+ (g\circ f)(x + h) ={}& g(f(x + h))\\
+ ={}& g[f(x) + \underbrace{hf'(x) + h\varepsilon_1(h)}_{\text{the ``}h\text{'' term}}]\\
+ ={}& g(f(x)) + \big(hf'(x) + h\varepsilon_1(h)\big)g'(f(x))\\
+ &+ \big(hf'(x) + h\varepsilon_1(h)\big)\varepsilon_2(hf'(x) + h\varepsilon_1(h))\\
+ ={}& g\circ f(x) + hg'(f(x))f'(x)\\
+ &+ \underbrace{h\Big[\varepsilon_1(h)g'(f(x)) + \big(f'(x) + \varepsilon_1(h)\big)\varepsilon_2\big(hf'(x) + h\varepsilon_1(h)\big)\Big]}_{\text{error term}}.
+ \end{align*}
+ We want to show that the error term is $o(h)$, i.e.\ it divided by $h$ tends to $0$ as $h\to 0$.
+
+ But $\varepsilon_1(h)g'(f(x))\to 0$, $f'(x) + \varepsilon_1(h)$ is bounded, and $\varepsilon_2(hf'(x) + h\varepsilon_1(h))\to 0$ because $hf'(x) + h\varepsilon_1(h) \to 0$ and $\varepsilon_2(0) = 0$. So our error term is $o(h)$.
+\end{proof}
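The chain rule can likewise be checked numerically (a Python sketch, not part of the notes; the functions are arbitrary choices):

```python
# Sketch: with f(x) = x^2 + 1 and g(y) = y^3, the difference quotient
# of g(f(x)) matches g'(f(x)) f'(x) = 3 (x^2 + 1)^2 * 2x.
def f(x):
    return x ** 2 + 1

def g(y):
    return y ** 3

x, h = 1.0, 1e-6
numeric = (g(f(x + h)) - g(f(x - h))) / (2 * h)
exact = 3 * f(x) ** 2 * 2 * x                  # equals 24 at x = 1
print(numeric, exact)
```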
+
+We usually don't write out the error terms so explicitly, and just use heuristics like $f(x + o(h)) = f(x) + o(h)$; $o(h) + o(h) = o(h)$; and $g(x) \cdot o(h) = o(h)$ for any (bounded) function $g$.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Constant functions are differentiable with derivative $0$.
+ \item $f(x) = \lambda x$ is differentiable with derivative $\lambda$.
+ \item Using the product rule, we can show that $x^n$ is differentiable with derivative $nx^{n - 1}$ by induction.
+ \item Hence all polynomials are differentiable.
+ \end{enumerate}
+\end{eg}
+
+\begin{eg}
+ Let $f(x) = 1/x$. If $x\not = 0$, then
+ \[
+ \frac{f(x + h) - f(x)}{h} = \frac{\frac{1}{x + h} - \frac{1}{x}}{h} = \frac{\left(\frac{-h}{x(x + h)}\right)}{h} = \frac{-1}{x(x + h)} \to \frac{-1}{x^2}
+ \]
+ by limit theorems.
+\end{eg}
+
+\begin{lemma}[Quotient rule]
+ If $f$ and $g$ are differentiable at $x$, and $g(x) \not = 0$, then $f/g$ is differentiable at $x$ with derivative
+ \[
+ \left(\frac{f}{g}\right)'(x) = \frac{f'(x)g(x) - g'(x)f(x)}{g(x)^2}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ First note that $1/g(x) = h(g(x))$ where $h(y) = 1/y$. So $1/g(x)$ is differentiable at $x$ with derivative $\displaystyle \frac{-1}{g(x)^2}g'(x)$ by the chain rule.
+
+ By the product rule, $f/g$ is differentiable at $x$ with derivative
+ \[
+ \frac{f'(x)}{g(x)} - f(x)\frac{g'(x)}{g(x)^2} = \frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}.\qedhere
+ \]
+\end{proof}
+\begin{lemma}
+ If $f$ is differentiable at $x$, then it is continuous at $x$.
+\end{lemma}
+
+\begin{proof}
+ As $y\to x$, $\displaystyle \frac{f(y) - f(x)}{y - x} \to f'(x)$. Since $y - x \to 0$, $f(y) - f(x) \to 0$ by the product theorem of limits. So $f(y) \to f(x)$. So $f$ is continuous at $x$.
+\end{proof}
+
+\begin{thm}
+ Let $f:[a, b]\to [c, d]$ be differentiable on $(a, b)$, continuous on $[a, b]$, and strictly increasing. Suppose that $f'(x)$ never vanishes. Suppose further that $f(a) = c$ and $f(b) = d$. Then $f$ has an inverse $g$ and for each $y\in (c, d)$, $g$ is differentiable at $y$ with derivative $1/f'(g(y))$.
+
+ In human language, this states that if $f$ is invertible, then the derivative of $f^{-1}$ is $1/f'$.
+\end{thm}
+Note that the conditions will (almost) always require $f$ to be differentiable on the \emph{open} interval $(a, b)$ and continuous on the \emph{closed} interval $[a, b]$. This is because it doesn't make sense to talk about differentiability at $a$ or $b$, since the definition of $f'(a)$ requires $f$ to be defined on both sides of $a$.
+
+\begin{proof}
+ $g$ exists by an earlier theorem about inverses of continuous functions.
+
+ Let $y, y + k\in (c, d)$. Let $x = g(y)$, $x + h = g(y + k)$.
+
+ Since $g(y + k) = x + h$, we have $y + k = f(x + h)$. So $k = f(x + h) - y = f(x + h) - f(x)$. So
+ \[
+ \frac{g(y + k) - g(y)}{k} = \frac{(x + h) - x}{f(x + h) - f(x)} = \left(\frac{f(x + h) - f(x)}{h}\right)^{-1}.
+ \]
+ As $k \to 0$, since $g$ is continuous, $g(y + k) \to g(y)$. So $h \to 0$. So
+ \[
+ \frac{g(y + k) - g(y)}{k} \to [f'(x)]^{-1} = [f'(g(y))]^{-1}.\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Let $f(x) = x^{1/2}$ for $x > 0$. Then $f$ is the inverse of $g(x) = x^2$. So
+ \[
+ f'(x) = \frac{1}{g'(f(x))} = \frac{1}{2x^{1/2}} = \frac{1}{2}x^{-1/2}.
+ \]
+ Similarly, we can show that the derivative of $x^{1/q}$ is $\frac{1}{q}x^{1/q - 1}$.
+
+ Then let's take $x^{p/q} = (x^{1/q})^p$. By the chain rule, its derivative is
+ \[
+ p(x^{1/q})^{p - 1}\cdot \frac{1}{q}x^{1/q - 1} = \frac{p}{q}x^{\frac{p - 1}{q} + \frac{1}{q} - 1} = \frac{p}{q}x^{\frac{p}{q} - 1}.
+ \]
+\end{eg}
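A quick numerical check of this formula (a Python sketch, not part of the notes), with $p/q = 3/2$ at $x = 4$, where the derivative should be $\frac{3}{2}\sqrt{4} = 3$:

```python
# Sketch: d/dx x^(3/2) = (3/2) x^(1/2); at x = 4 this is 3.
x, h = 4.0, 1e-6
numeric = ((x + h) ** 1.5 - (x - h) ** 1.5) / (2 * h)  # difference quotient
exact = 1.5 * x ** 0.5                                 # the formula
print(numeric, exact)                                  # both close to 3
```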
+
+\subsection{Differentiation theorems}
+Everything we've had so far is something we already know. It's just that now we can prove them rigorously. In this section, we will come up with genuinely new theorems, including but not limited to \emph{Taylor's theorem}, which gives us Taylor's series.
+
+\begin{thm}[Rolle's theorem]
+ Let $f$ be continuous on a closed interval $[a, b]$ (with $a < b$) and differentiable on $(a, b)$. Suppose that $f(a) = f(b)$. Then there exists $x\in (a, b)$ such that $f'(x) = 0 $.
+\end{thm}
+It is intuitively obvious: if you move up and down, and finally return to the same point, then you must have changed direction some time. Then $f'(x) = 0$ at that time.
+
+\begin{proof}
+ If $f$ is constant, then we're done.
+
+ Otherwise, there exists $u$ such that $f(u) \not= f(a)$. wlog, $f(u) > f(a)$. Since $f$ is continuous, it has a maximum, and since $f(u) > f(a) = f(b)$, the maximum is not attained at $a$ or $b$.
+
+ Suppose maximum is attained at $x\in (a, b)$. Then for any $h \not = 0$, we have
+ \[
+ \frac{f(x + h) - f(x)}{h}
+ \begin{cases}
+ \leq 0 & h > 0\\
+ \geq 0 & h < 0
+ \end{cases}
+ \]
+ since $f(x + h) - f(x) \leq 0$ by maximality of $f(x)$. By considering both sides as we take the limit $h\to 0$, we know that $f'(x) \leq 0$ and $f'(x) \geq 0$. So $f'(x) = 0$.
+\end{proof}
+
+\begin{cor}[Mean value theorem]
+ Let $f$ be continuous on $[a, b]$ ($a < b$), and differentiable on $(a, b)$. Then there exists $x\in (a, b)$ such that
+ \[
+ f'(x) = \frac{f(b) - f(a)}{b - a}.
+ \]
+ Note that $\frac{f(b) - f(a)}{b - a}$ is the slope of the line joining $f(a)$ and $f(b)$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (3, 2);
+ \draw [dashed] (1, 1.26666) -- (2.7, 2.4);
+ \draw (0, 0) .. controls (3, 0) and (0, 2) .. (3, 2) node [above, pos = 0.8] {$f(x)$};
+ \node [circ] {};
+ \node [left] {$f(a)$};
+ \node at (3, 2) [circ] {};
+ \node at (3, 2) [right] {$f(b)$};
+ \end{tikzpicture}
+ \end{center}
+\end{cor}
+
+The mean value theorem is sometimes described as ``rotate your head and apply Rolle's''. However, if we actually rotate it, we might end up with a non-function. What we actually want is a shear.
+\begin{proof}
+ Let
+ \[
+ g(x) = f(x) - \frac{f(b) - f(a)}{b - a}x.
+ \]
+ Then
+ \[
+ g(b) - g(a) = f(b) - f(a) - \frac{f(b) - f(a)}{b - a}(b - a) = 0.
+ \]
+ So by Rolle's theorem, we can find $x\in (a, b)$ such that $g'(x) = 0$. So
+ \[
+ f'(x) = \frac{f(b) - f(a)}{b - a},
+ \]
+ as required.
+\end{proof}
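As a concrete check (a Python sketch, not part of the notes), take $f(x) = x^3$ on $[0, 1]$: the secant slope is $1$, and $f'(w) = 3w^2 = 1$ at $w = 1/\sqrt{3}$, which indeed lies in $(0, 1)$:

```python
# Sketch: for f(x) = x^3 on [0, 1], the mean value theorem's witness
# is w = 1/sqrt(3), where f'(w) = 3 w^2 equals the secant slope 1.
w = 1 / 3 ** 0.5
slope = (1.0 ** 3 - 0.0 ** 3) / (1.0 - 0.0)   # (f(1) - f(0)) / (1 - 0)
print(w, 3 * w ** 2, slope)                   # w in (0, 1), f'(w) = slope
```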
+
+We've always assumed that if a function has a positive derivative everywhere, then the function is increasing. However, it turns out that this is really hard to prove directly. It does, however, follow quite immediately from the mean value theorem.
+\begin{eg}
+ Suppose $f'(x) > 0$ for every $x\in (a, b)$. Then for $u, v$ in $[a, b]$, we can find $w\in (u, v)$ such that
+ \[
+ \frac{f(v) - f(u)}{v - u} = f'(w) > 0.
+ \]
+ It follows that $f(v) > f(u)$. So $f$ is strictly increasing.
+
+ Similarly, if $f'(x) \geq 2$ for every $x$ and $f(0) = 0$, then $f(1) \geq 2$, since by the mean value theorem we can find $x\in (0, 1)$ such that
+ \[
+ 2\leq f'(x) = \frac{f(1) - f(0)}{1 - 0} = f(1).
+ \]
+\end{eg}
+
+\begin{thm}[Local version of inverse function theorem]
+ Let $f$ be a function with continuous derivative on $(a, b)$.
+
+ Let $x\in (a, b)$ and suppose that $f'(x) \not= 0$. Then there is an open interval $(u, v)$ containing $x$ on which $f$ is invertible (as a function from $(u, v)$ to $f((u, v))$). Moreover, if $g$ is the inverse, then $g'(f(z)) = \frac{1}{f'(z)}$ for every $z\in (u, v)$.
+
+ This says that if $f$ has a non-zero derivative, then it has an inverse locally and the derivative of the inverse is $1/f'$.
+\end{thm}
+Note that this not only requires $f$ to be differentiable, but the derivative itself also has to be continuous.
+
+\begin{proof}
+ wlog, $f'(x) > 0$. By the continuity of $f'$, we can find $\delta > 0$ such that $f'(z) > 0$ for every $z\in (x - \delta, x + \delta)$. By the mean value theorem, $f$ is strictly increasing on $(x - \delta, x + \delta)$, hence injective. Also, $f$ is continuous on $(x - \delta, x + \delta)$ by differentiability.
+
+ Then done by the inverse function theorem.
+\end{proof}
+
+Finally, we are going to prove Taylor's theorem. To do so, we will first need some lemmas.
+\begin{thm}[Higher-order Rolle's theorem]
+ Let $f$ be continuous on $[a, b]$ ($a < b$) and $n$-times differentiable on an open interval containing $[a, b]$. Suppose that
+ \[
+ f(a) = f'(a) = f^{(2)}(a) = \cdots = f^{(n - 1)}(a) = f(b) = 0.
+ \]
+ Then $\exists x\in (a, b)$ such that $f^{(n)}(x) = 0$.
+\end{thm}
+
+\begin{proof}
+ Induct on $n$. The $n = 1$ base case is just Rolle's theorem.
+
+ Suppose we have $k < n$ and $x_k\in (a, b)$ such that $f^{(k)}(x_k) = 0$. Since $f^{(k)}(a) = 0$, we can find $x_{k + 1}\in (a, x_k)$ such that $f^{(k + 1)}(x_{k + 1}) = 0$ by Rolle's theorem.
+
+ So the result follows by induction.
+\end{proof}
+
+\begin{cor}
+ Suppose that $f$ and $g$ are both $n$-times differentiable on an open interval containing $[a, b]$ and that $f^{(k)}(a) = g^{(k)}(a)$ for $k = 0, 1, \cdots, n - 1$, and also $f(b) = g(b)$. Then there exists $x\in (a, b)$ such that $f^{(n)}(x) = g^{(n)}(x)$.
+\end{cor}
+
+\begin{proof}
+ Apply the higher-order Rolle's theorem to $f - g$.
+\end{proof}
+
+Now we shall show that for any $f$, we can find a polynomial $p$ of degree at most $n$ that satisfies the conditions for $g$, i.e.\ a $p$ such that $p^{(k)}(a) = f^{(k)}(a)$ for $k = 0, 1, \cdots, n - 1$ and $p(b) = f(b)$.
+
+A useful ingredient is the observation that if
+\[
+ Q_k(x) = \frac{(x - a)^k}{k!},
+\]
+then
+\[
+ Q_k^{(j)}(a) =
+ \begin{cases}
+ 1 & j = k\\
+ 0 & j \not= k
+ \end{cases}
+\]
+Therefore, if
+\[
+ Q(x) = \sum_{k = 0}^{n - 1}f^{(k)}(a) Q_k(x),
+\]
+then
+\[
+ Q^{(j)}(a) = f^{(j)}(a)
+\]
+for $j = 0, 1, \cdots, n - 1$. To get $p(b) = f(b)$, we use our $n$th degree polynomial term:
+\[
+ p(x) = Q(x) + \frac{(x - a)^n}{(b - a)^n}\big(f(b) - Q(b)\big).
+\]
+Then our final term does not mess up our first $n - 1$ derivatives, and gives $p(b)= f(b)$.
+
+By the previous corollary, we can find $x\in (a, b)$ such that
+\[
+ f^{(n)}(x) = p^{(n)}(x).
+\]
+That is,
+\[
+ f^{(n)}(x) = \frac{n!}{(b - a)^n}\big(f(b) - Q(b)\big).
+\]
+Therefore
+\[
+ f(b) = Q(b) + \frac{(b - a)^n}{n!}f^{(n)}(x).
+\]
+Alternatively,
+\[
+ f(b) = f(a) + (b - a)f'(a) + \cdots + \frac{(b - a)^{n - 1}}{(n- 1)!}f^{(n - 1)}(a) + \frac{(b - a)^n}{n!}f^{(n)}(x).
+\]
+Setting $b = a + h$, we can rewrite this as
+\begin{thm}[Taylor's theorem with the Lagrange form of remainder]
+ \[
+ f(a + h) = \underbrace{f(a) + hf'(a) + \cdots + \frac{h^{n - 1}}{(n - 1)!}f^{(n - 1)}(a)}_{(n - 1)\text{-degree approximation to }f\text{ near }a} + \underbrace{\frac{h^n}{n!}f^{(n)}(x)}_{\text{error term}}.
+ \]
+ for some $x\in (a, a + h)$.
+\end{thm}
+Strictly speaking, we only proved it for the case when $h > 0$, but we can easily show it holds for $h < 0$ too by considering $g(x) = f(-x)$.
+
+Note that the remainder term is \emph{not} necessarily small, but this often gives us the best $(n - 1)$-degree approximation to $f$ near $a$. For example, if $f^{(n)}$ is bounded by $C$ near $a$, then
+\[
+ \left|\frac{h^n}{n!}f^{(n)}(x)\right| \leq \frac{C}{n!}|h|^n = o(h^{n - 1}).
+\]
+
+\begin{eg}
+ Let $f: \R \to \R$ be a differentiable function such that $f(0) = 1$ and $f'(x) = f(x)$ for every $x$ (intuitively, we know it is $e^x$, but we haven't defined that yet!). Then for every $x$, we have
+ \[
+ f(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n = 0}^\infty \frac{x^n}{n!}.
+ \]
+ While it seems like we can prove this works by differentiating it and see that $f'(x) = f(x)$, the sum rule only applies for finite sums. We don't know we can differentiate a sum term by term. So we have to use Taylor's theorem.
+
+ Since $f'(x) =f(x)$, it follows that all derivatives exist. By Taylor's theorem,
+ \[
+ f(x) = f(0) + f'(0) x + \frac{f^{(2)}(0)}{2!}x^2 + \cdots + \frac{f^{(n - 1)}(0)}{(n - 1)!}x^{n - 1} + \frac{f^{(n)}(u)}{n!}x^n.
+ \]
+ for some $u$ between $0$ and $x$. That is,
+ \[
+ f(x) = \sum_{k = 0}^{n - 1}\frac{x^k}{k!} + \frac{f^{(n)}(u)}{n!}x^n.
+ \]
+ We must show that the remainder term $\frac{f^{(n)}(u)}{n!}x^n \to 0$ as $n \to \infty$. Note here that $x$ is fixed, but $u$ can depend on $n$.
+
+ We know that $f^{(n)}(u) = f(u)$, and since $f$ is differentiable, it is continuous, hence bounded on the closed interval with endpoints $0$ and $x$. Suppose $|f(u)| \leq C$ on this interval. Then
+ \[
+ \left|\frac{f^{(n)}(u)}{n!}x^n\right| \leq \frac{C}{n!}|x|^n \to 0
+ \]
+ by limit theorems. So it follows that
+ \[
+ f(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n = 0}^\infty \frac{x^n}{n!}.
+ \]
+\end{eg}
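As a numerical aside (a Python sketch, not part of the notes; $x = 10$ and the particular values of $n$ are arbitrary choices), the key point is that $\frac{C}{n!}|x|^n \to 0$ even for large fixed $x$, because $n!$ eventually beats $|x|^n$:

```python
# Sketch: the remainder bound |x|^n / n! grows at first for large x,
# then collapses towards 0 once n overtakes |x|.
import math

def bound(x, n):
    return abs(x) ** n / math.factorial(n)

print(bound(10, 10), bound(10, 30), bound(10, 60))
```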
+\subsection{Complex differentiation}
+\begin{defi}[Complex differentiability]
+ Let $f: \C \to \C$. Then $f$ is differentiable at $z$ with derivative $f'(z)$ if
+ \[
+ \lim_{h \to 0}\frac{f(z + h) - f(z)}{h}\text{ exists and equals }f'(z).
+ \]
+ Equivalently,
+ \[
+ f(z + h) = f(z) + hf'(z) + o(h).
+ \]
+\end{defi}
+This is exactly the same definition as for real differentiation, but it has very different properties!
+
+All the usual rules --- chain rule, product rule etc. also apply (with the same proofs). Also the derivatives of polynomials are what you expect. However, there are some more interesting cases.
+
+\begin{eg}
+ $f(z) = \bar z$ is \emph{not} differentiable.
+ \[
+ \frac{\overline{z + h} - \overline{z\vphantom{h}}}{h} = \frac{\bar h}{h} =
+ \begin{cases}
+ 1 & h\text{ is real}\\
+ -1 & h\text{ is purely imaginary}
+ \end{cases}
+ \]
+\end{eg}
+If this seems weird, this is because we often think of $\C$ as $\R^2$, but they are not the same. For example, reflection is a linear map in $\R^2$, but not in $\C$. A linear map in $\C$ is something in the form $x \mapsto bx$, which can only be a dilation or rotation, not reflections or other weird things.
+
+\begin{eg}
+ $f(z) = |z|$ is also not differentiable. If it were, then $|z|^2$ would be as well (by the product rule). So would $\frac{|z|^2}{z} = \bar z$ when $z \not= 0$ by the quotient rule. At $z = 0$, it is certainly not differentiable, since it is not even differentiable on $\R$.
+\end{eg}
+\section{Complex power series}
+Before we move on to integration, we first have a look at complex power series. This will allow us to define the familiar exponential and trigonometric functions.
+\begin{defi}[Complex power series]
+ A \emph{complex power series} is a series of the form
+ \[
+ \sum_{n = 0}^{\infty}a_n z^n.
+ \]
+ where $z\in \C$ and $a_n\in \C$ for all $n$. When it converges, it is a function of $z$.
+\end{defi}
+
+When considering complex power series, a very important concept is the \emph{radius of convergence}. To make sense of this concept, we first need the following lemma:
+\begin{lemma}
+ Suppose that $\sum a_nz^n$ converges and $|w| < |z|$, then $\sum a_n w^n$ converges (absolutely).
+\end{lemma}
+
+\begin{proof}
+ We know that
+ \[
+ |a_n w^n| = |a_nz^n|\cdot \left|\frac{w}{z}\right|^n.
+ \]
+ Since $\sum a_nz^n$ converges, the terms $a_nz^n$ are bounded. So pick $C$ such that
+ \[
+ |a_nz^n| \leq C
+ \]
+ for every $n$. Then
+ \[
+ 0 \leq \sum_{n = 0}^\infty |a_nw^n| \leq \sum_{n = 0}^\infty C\left|\frac{w}{z}\right|^n,
+ \]
+ which converges (geometric series). So by the comparison test, $\sum a_nw^n$ converges absolutely.
+\end{proof}
+It follows that if $\sum a_nz^n$ does not converge and $|w| > |z|$, then $\sum a_nw^n$ does not converge.
+
+Now let $R = \sup\{|z|: \sum a_nz^n$ converges $\}$ ($R$ may be infinite). If $|z| < R$, then we can find $z_0$ with $|z_0|\in (|z|, R]$ such that $\sum a_nz_0^n$ converges. So by the lemma above, $\sum a_n z^n$ converges. If $|z| > R$, then $\sum a_nz^n$ diverges by definition of $R$.
+
+\begin{defi}[Radius of convergence]
+ The \emph{radius of convergence} of a power series $\sum a_nz^n$ is
+ \[
+ R = \sup\left\{|z|: \sum a_nz^n\text{ converges }\right\}.
+ \]
+ $\{z: |z| < R\}$ is called the \emph{circle of convergence}.\footnote{Note to pedants: yes, it is a disc, not a circle.}
+
+ If $|z| < R$, then $\sum a_nz^n$ converges. If $|z| > R$, then $\sum a_nz^n$ diverges. When $|z| = R$, the series can converge at some points and not the others.
+\end{defi}
+
+\begin{eg}
+ $\displaystyle\sum_{n = 0}^\infty z^n$ has radius of convergence of $1$. When $|z| = 1$, it diverges (since the terms do not tend to $0$).
+\end{eg}
+
+\begin{eg}
+ $\displaystyle\sum_{n = 1}^\infty \frac{z^n}{n}$ has radius of convergence $1$, since the ratio of the moduli of successive terms is
+ \[
+ \frac{|z|^{n + 1}/(n + 1)}{|z|^n /n} = |z|\cdot\frac{n}{n + 1} \to |z|.
+ \]
+ So if $|z| < 1$, then the series converges by the ratio test. If $|z| > 1$, then eventually the terms are increasing in modulus, so the series diverges since the terms do not tend to $0$.
+
+ If $z = 1$, then it diverges (harmonic series). If $|z| = 1$ and $z \not= 1$, it converges by Abel's test.
+\end{eg}
+
+\begin{eg}
+ The series $\displaystyle \sum_{n = 1}^\infty \frac{z^{n}}{n^2}$ converges for $|z| \leq 1$ and diverges for $|z| > 1$.
+\end{eg}
+
+As evidenced by the above examples, the ratio test can be used to find the radius of convergence. We also have an alternative test based on the $n$th root.
+
+\begin{lemma}
+ The radius of convergence of a power series $\sum a_nz^n$ is
+ \[
+ R = \frac{1}{\limsup \sqrt[n]{|a_n|}}.
+ \]
+ Often $\sqrt[n]{|a_n|}$ converges, so we only have to find the limit.
+\end{lemma}
+
+\begin{proof}
+ Suppose $|z| < 1/\limsup \sqrt[n]{|a_n|}$. Then $|z| \limsup \sqrt[n]{|a_n|} < 1$. Therefore there exists $N$ and $\varepsilon > 0$ such that
+ \[
+ \sup_{n \geq N}|z|\sqrt[n]{|a_n|} \leq 1 - \varepsilon
+ \]
+ by the definition of $\limsup$. Therefore
+ \[
+ |a_n z^n| \leq (1 - \varepsilon)^n
+ \]
+ for every $n \geq N$, which implies (by comparison with geometric series) that $\sum a_n z^n$ converges absolutely.
+
+ On the other hand, if $|z|\limsup\sqrt[n]{|a_n|} > 1$, it follows that $|z|\sqrt[n]{|a_n|} \geq 1$ for infinitely many $n$. Therefore $|a_nz^n| \geq 1$ for infinitely many $n$. So $\sum a_nz^n$ does not converge.
+\end{proof}
+
+\begin{eg}
+ The radius of convergence of $\displaystyle \sum\frac{z^n}{2^n}$ is $2$ because $\sqrt[n]{|a_n|} = \frac{1}{2}$ for every $n$. So $\limsup \sqrt[n]{|a_n|} = \frac{1}{2}$. So $1/\limsup \sqrt[n]{|a_n|} = 2$.
+\end{eg}
+
+But often it is easier to find the radius of convergence by elementary methods such as the ratio test, e.g.\ for $\sum n^2 z^n$.
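As a numerical aside (a Python sketch, not part of the notes), for $\sum n^2 z^n$ we have $\sqrt[n]{|a_n|} = n^{2/n} \to 1$, so the root test also gives radius $1$, agreeing with the ratio test:

```python
# Sketch: for a_n = n^2, the n-th root |a_n|^(1/n) = n^(2/n) tends to 1,
# so the root-test radius of convergence is 1/1 = 1.
def nth_root_of_coeff(n):
    return (n ** 2) ** (1 / n)

for n in [10, 100, 10000]:
    print(n, nth_root_of_coeff(n))   # decreasing towards 1
```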
+
+\subsection{Exponential and trigonometric functions}
+\begin{defi}[Exponential function]
+ The \emph{exponential function} is
+ \[
+ e^z = \sum_{n = 0}^\infty \frac{z^n}{n!}.
+ \]
+ By the ratio test, this converges on all of $\C$.
+\end{defi}
+A fundamental property of this function is that
+\[
+ e^{z + w} = e^ze^w.
+\]
+Once we have this property, we can say that
+\begin{prop}
+ The derivative of $e^z$ is $e^z$.
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \frac{e^{z + h} - e^z}{h} &= e^z \left(\frac{e^h - 1}{h}\right)\\
+ &= e^z\left(1 + \frac{h}{2!} + \frac{h^2}{3!} + \cdots\right)
+ \end{align*}
+ But
+ \[
+ \left|\frac{h}{2!} + \frac{h^2}{3!} + \cdots \right| \leq \frac{|h|}{2} + \frac{|h|^2}{4} + \frac{|h|^3}{8} + \cdots = \frac{|h|/2}{1 - |h|/2} \to 0.
+ \]
+ So
+ \[
+ \frac{e^{z + h} - e^z}{h} \to e^z.\qedhere
+ \]
+\end{proof}
+
+But we must still prove that $e^{z + w} = e^ze^w$.
+
+Consider two sequences $(a_n), (b_n)$. Their \emph{convolution} is the sequence $(c_n)$ defined by
+\[
+ c_n = a_0b_n + a_1b_{n - 1} + a_2b_{n - 2} + \cdots + a_nb_0.
+\]
+The relevance of this is that if you take
+\[
+ \left(\sum_{n = 0}^N a_nz^n\right)\left(\sum_{n = 0}^N b_nz^n\right)\text{ and }\sum_{n = 0}^N c_n z^n,
+\]
+and equate coefficients of $z^n$, you get
+\[
+ c_n = a_0b_n + a_1b_{n - 1} + a_2b_{n - 2} + \cdots + a_nb_0.
+\]
+\begin{thm}
+ Let $\sum_{n = 0}^\infty a_n$ and $\sum_{n = 0}^\infty b_n$ be two absolutely convergent series, and let $(c_n)$ be the convolution of the sequences $(a_n)$ and $(b_n)$. Then $\sum_{n = 0}^\infty c_n$ converges (absolutely), and
+ \[
+ \sum_{n = 0}^{\infty} c_n = \left(\sum_{n = 0}^\infty a_n\right)\left(\sum_{n = 0}^\infty b_n \right).
+ \]
+\end{thm}
+
+\begin{proof}
+ We first show that a rearrangement of $\sum c_n$ converges absolutely. Hence it converges unconditionally, and we can rearrange it back to $\sum c_n$.
+
+ Consider the series
+ \[
+ (a_0b_0) + (a_0 b_1 + a_1b_1 + a_1b_0) + (a_0 b_2 + a_1 b_2 + a_2b_2 + a_2b_1 + a_2b_0) + \cdots\tag{$*$}
+ \]
+ Let
+ \[
+ S_N = \sum_{n = 0}^{N}a_n, \quad T_N = \sum_{n = 0}^N b_n,\quad U_N = \sum_{n = 0}^N | a_n|,\quad V_N = \sum_{n = 0}^N|b_n|.
+ \]
+ Also let $S_N \to S, T_N \to T, U_N \to U, V_N \to V$ (these exist since $\sum a_n$ and $\sum b_n$ converge absolutely).
+
+ If we take the modulus of the terms of $(*)$ and sum the first $(N + 1)^2$ of them (i.e.\ the first $N + 1$ brackets), we get $U_NV_N$, which converges to $UV$. So the partial sums of the moduli are bounded, i.e.\ $(*)$ converges absolutely.
+
+ The partial sum up to $(N + 1)^2$ of the series $(*)$ itself is $S_NT_N$, which converges to $ST$. So the whole series converges to $ST$.
+
+ Since it converges absolutely, it converges unconditionally. Now consider a rearrangement:
+ \[
+ a_0 b_0 + (a_0b_1 + a_1b_0) + (a_0b_2 + a_1b_1 + a_2b_0) + \cdots
+ \]
+ Then this converges to $ST$ as well. But the partial sum of the first $1 + 2 + \cdots + N$ terms is $c_0 + c_1 + \cdots + c_N$. So
+ \[
+ \sum_{n = 0}^N c_n \to ST = \left(\sum_{n = 0}^\infty a_n\right)\left(\sum_{n = 0}^\infty b_n \right). \qedhere
+ \]
+\end{proof}
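As a numerical illustration (a Python sketch, not part of the notes; the geometric sequences are arbitrary choices), take $a_n = (1/2)^n$ and $b_n = (1/3)^n$, so $\sum a_n = 2$ and $\sum b_n = 3/2$; the convolution series should then sum to $2 \cdot \frac{3}{2} = 3$:

```python
# Sketch of the Cauchy product: the convolution of two absolutely
# convergent geometric series sums to the product of their sums.
N = 60
a = [0.5 ** n for n in range(N)]
b = [(1 / 3) ** n for n in range(N)]
# c_n = a_0 b_n + a_1 b_{n-1} + ... + a_n b_0
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]
print(sum(c))   # close to 2 * (3/2) = 3
```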
+
+\begin{cor}
+ \[
+ e^z e^w = e^{z + w}.
+ \]
+\end{cor}
+
+\begin{proof}
+ By theorem above (and definition of $e^z$),
+ \begin{align*}
+ e^z e^w &= \sum_{n = 0}^\infty \left(1\cdot \frac{w^n}{n!} + \frac{z}{1!}\frac{w^{n - 1}}{(n- 1)!} + \frac{z^2}{2!}\frac{w^{n - 2}}{(n- 2)!} + \cdots + \frac{z^{n}}{n!}\cdot 1\right)\\
+ &= \sum_{n = 0}^\infty \frac{1}{n!}\left(w^n + \binom{n}{1} zw^{n - 1} + \binom{n}{2}z^2 w^{n - 2} + \cdots + \binom{n}{n}z^n\right)\\
+ &= \sum_{n = 0}^\infty \frac{(z + w)^n}{n!} \text{ by the binomial theorem}\\
+ &= e^{z + w}. \qedhere
+ \end{align*}
+\end{proof}
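As a numerical aside (a Python sketch, not part of the notes; $40$ terms and the particular $z, w$ are arbitrary choices), we can check the identity on partial sums of the defining series:

```python
# Sketch: with E(z) the partial sum of sum z^n / n! up to 40 terms,
# E(z) * E(w) agrees with E(z + w) to high accuracy.
import math

def E(z, terms=40):
    return sum(z ** n / math.factorial(n) for n in range(terms))

z, w = 1 + 1j, 2 - 0.5j
print(abs(E(z) * E(w) - E(z + w)))   # tiny
```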
+Note that if $(c_n)$ is the convolution of $(a_n)$ and $(b_n)$, then the convolution of $(a_nz^n)$ and $(b_nz^n)$ is $(c_nz^n)$. Therefore if both $\sum a_n z^n$ and $\sum b_nz^n$ converge absolutely, then their product is $\sum c_n z^n$.
+
+Note that we have now completed the proof that the derivative of $e^z$ is $e^z$.
+
+Now we define $\sin z$ and $\cos z$:
+\begin{defi}[Sine and cosine]
+ \begin{align*}
+ \sin z &= \frac{e^{iz} - e^{-iz}}{2i} = z - \frac{z^3}{3!} + \frac{z^5}{5!} - \frac{z^7}{7!} + \cdots\\
+ \cos z &= \frac{e^{iz} + e^{-iz}}{2} = 1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \frac{z^6}{6!} + \cdots.
+ \end{align*}
+\end{defi}
+
+We now prove certain basic properties of $\sin$ and $\cos$, using known properties of $e^z$.
+\begin{prop}
+ \begin{align*}
+ \frac{\d}{\d z}\sin z &= \frac{ie^{iz} + ie^{-iz}}{2i} = \cos z\\
+ \frac{\d}{\d z}\cos z &= \frac{ie^{iz} - ie^{-iz}}{2} = -\sin z\\
+ \sin^2 z + \cos ^2 z &= \frac{e^{2iz} + 2 + e^{-2iz}}{4} + \frac{e^{2iz} - 2 + e^{-2iz}}{-4} = 1.
+ \end{align*}
+\end{prop}
+It follows that if $x$ is real, then $|\cos x|$ and $|\sin x|$ are at most $1$.
+
+\begin{prop}
+ \begin{align*}
+ \cos(z + w) &= \cos z\cos w - \sin z\sin w\\
+ \sin(z + w) &= \sin z \cos w + \cos z \sin w
+ \end{align*}
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \cos z\cos w - \sin z \sin w &= \frac{(e^{iz} + e^{-iz})(e^{iw} + e^{-iw})}{4} + \frac{(e^{iz} - e^{-iz})(e^{iw} - e^{-iw})}{4}\\
+ &= \frac{e^{i(z + w)} + e^{-i(z + w)}}{2}\\
+ &= \cos (z + w).
+ \end{align*}
+ Differentiating both sides wrt $z$ gives
+ \[
+ -\sin z \cos w - \cos z \sin w = -\sin (z + w).
+ \]
+ So
+ \[
+ \sin(z + w) = \sin z\cos w + \cos z \sin w.\qedhere
+ \]
+\end{proof}
+When $x$ is real, we know that $\cos x \leq 1$. Also $\sin 0 = 0$, and $\frac{\d }{\d x}\sin x = \cos x \leq 1$. So for $x \geq 0$, $\sin x \leq x$, ``by the mean value theorem''. Also, $\cos 0 = 1$, and $\frac{\d}{\d x}\cos x = -\sin x$, which, for $x \geq 0$, is at least $-x$. From this, it follows that when $x \geq 0$, $\cos x \geq 1 - \frac{x^2}{2}$ (the $1 - \frac{x^2}{2}$ comes from ``integrating'' $-x$, i.e.\ finding a function whose derivative is $-x$).
+
+Continuing in this way, we get that for $x \geq 0$, if we truncate the power series for $\sin x$ or $\cos x$, the truncation will be $\geq \sin x, \cos x$ if we stop at a positive term, and $\leq$ if we stop at a negative term. For example,
+\[
+ \sin x \geq x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \frac{x^{11}}{11!}.
+\]
+In particular,
+\[
+ \cos 2 \leq 1 - \frac{2^2}{2!} + \frac{2^4}{4!} = 1 - 2 + \frac{2}{3} < 0.
+\]
+Since $\cos 0 = 1$, it follows by the intermediate value theorem that there exists some $x\in (0, 2)$ such that $\cos x = 0$. Since $\cos x \geq 1 - \frac{x^2}{2}$, we can further deduce that $x > 1$.
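These truncation bounds are easy to spot-check numerically against the library cosine (which agrees with the series definition); the function name is ours:

```python
from math import cos, factorial

def cos_truncation(x, K):
    # Partial sum of the cosine series up to the term of degree 2K.
    # For x >= 0, stopping at a positive term (K even) gives an upper bound.
    return sum((-1)**k * x**(2 * k) / factorial(2 * k) for k in range(K + 1))

upper = cos_truncation(2, 2)   # 1 - 2^2/2! + 2^4/4! = -1/3
```

Since the truncation ends at the positive $\frac{2^4}{4!}$ term, `upper` bounds $\cos 2$ from above, and it is already negative.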
+
+\begin{defi}[Pi]
+ Define the smallest positive $x$ such that $\cos x = 0$ to be $\frac{\pi}{2}$.
+\end{defi}
+
+Since $\sin^2 z + \cos ^2 z = 1$, it follows that $\sin \frac{\pi}{2} = \pm 1$. Since $\cos x > 0$ on $[0, \frac{\pi}{2})$, $\sin$ is increasing there, so $\sin \frac{\pi}{2} \geq 0$ by the mean value theorem. So $\sin \frac{\pi}{2} = 1$.
+
+Thus
+\begin{prop}
+ \begin{align*}
+ \cos \left(z + \frac{\pi}{2}\right) &= -\sin z\\
+ \sin \left(z + \frac{\pi}{2}\right) &= \cos z\\
+ \cos (z + \pi) &= -\cos z\\
+ \sin (z + \pi) &= -\sin z\\
+ \cos (z + 2\pi) &= \cos z\\
+ \sin (z + 2\pi) &= \sin z
+ \end{align*}
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \cos\left(z + \frac{\pi}{2}\right) &= \cos z\cos \frac{\pi}{2} - \sin z\sin \frac{\pi}{2}\\
+ &= -\sin z\sin \frac{\pi}{2}\\
+ &= -\sin z
+ \end{align*}
+ and similarly for the others.
+\end{proof}
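The characterisation of $\frac{\pi}{2}$ as the smallest positive zero of $\cos$ pins down its numerical value by bisection: $\cos 1 > 0 > \cos 2$, and the intermediate value theorem applies on $[1, 2]$. A minimal sketch using the library cosine (which agrees with the series definition):

```python
from math import cos, pi

# cos 1 > 0 > cos 2, so the smallest positive zero of cos
# (i.e. pi/2 by definition) lies in (1, 2); bisect for it.
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if cos(mid) > 0:
        lo = mid
    else:
        hi = mid
half_pi = (lo + hi) / 2
```

After 60 bisections the bracket is far narrower than double precision, so `half_pi` matches the library constant.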
+\subsection{Differentiating power series}
+We shall show that inside the circle of convergence, the derivative of $\sum_{n = 0}^\infty a_n z^n$ is given by the obvious formula $\sum_{n = 1}^\infty na_n z^{n - 1}$.
+
+We first prove some (seemingly arbitrary and random) lemmas to build up the proof of the above statement. They are done so that the final proof will not be full of tedious algebra.
+
+\begin{lemma}
+ Let $a$ and $b$ be complex numbers. Then
+ \[
+ b^n - a^n - n(b - a)a^{n - 1} = (b - a)^2(b^{n - 2} + 2ab^{n - 3} + 3a^2 b^{n - 4} + \cdots + (n - 1)a^{n - 2}).
+ \]
+\end{lemma}
+
+\begin{proof}
+ If $b = a$, we are done. Otherwise,
+ \[
+ \frac{b^n - a^n}{b - a} = b^{n - 1} + ab^{n - 2} + a^2b^{n - 3} + \cdots + a^{n - 1}.
+ \]
+ Differentiate both sides with respect to $a$. Then
+ \[
+ \frac{-na^{n - 1}(b - a) + b^n - a^n}{(b - a)^2} = b^{n - 2} + 2ab^{n - 3} + \cdots + (n - 1)a^{n - 2}.
+ \]
+ Rearranging gives the result.
+
+ Alternatively, we can do
+ \[
+ b^n - a^n = (b - a)(b^{n - 1} + ab^{n - 2} + \cdots + a^{n - 1}).
+ \]
+ Subtract $n(b - a)a^{n - 1}$ to obtain
+ \[
+ (b - a)[b^{n - 1} - a^{n - 1} + a(b^{n - 2} - a^{n - 2}) + a^2(b^{n - 3} - a^{n -3 }) + \cdots]
+ \]
+ and simplify.
+\end{proof}
+This implies that
+\[
+ (z + h)^n - z^n - nhz^{n - 1} = h^2((z + h)^{n - 2} + 2z(z + h)^{n - 3} + \cdots + (n - 1)z^{n - 2}),
+\]
+which is actually the form we need.
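The identity in the lemma is easy to test numerically for small $n$, with $b = z + h$ and $a = z$; a sketch (names are ours):

```python
def lhs(z, h, n):
    # (z+h)^n - z^n - n h z^(n-1)
    return (z + h)**n - z**n - n * h * z**(n - 1)

def rhs(z, h, n):
    # h^2 ((z+h)^(n-2) + 2 z (z+h)^(n-3) + ... + (n-1) z^(n-2))
    return h**2 * sum(k * z**(k - 1) * (z + h)**(n - 1 - k)
                      for k in range(1, n))

ok = all(abs(lhs(z, h, n) - rhs(z, h, n)) < 1e-9
         for z in (0.5, -1.2, 2.0)
         for h in (0.3, -0.7)
         for n in range(2, 8))
```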
+\begin{lemma}
+ Let $\sum a_n z^n$ have radius of convergence $R$, and let $|z| < R$. Then $\sum na_n z^{n - 1}$ converges (absolutely).
+\end{lemma}
+
+\begin{proof}
+ Pick $r$ such that $|z| < r < R$. Then $\sum |a_n| r^n$ converges, so the terms $|a_n|r^n$ are bounded above by, say, $C$. Now
+ \[
+ \sum n|a_n z^{n - 1}| = \sum n|a_n|r^{n - 1}\left(\frac{|z|}{r}\right)^{n - 1} \leq \frac{C}{r}\sum n\left(\frac{|z|}{r}\right)^{n - 1}
+ \]
+ The series $\sum n\left(\frac{|z|}{r}\right)^{n - 1}$ converges, by the ratio test. So $\sum n|a_n z^{n - 1}|$ converges, by the comparison test.
+\end{proof}
+
+\begin{cor}
+ Under the same conditions,
+ \[
+ \sum_{n = 2}^\infty \binom{n}{2}a_n z^{n -2 }
+ \]
+ converges absolutely.
+\end{cor}
+
+\begin{proof}
+ Apply the lemma above again, this time to $\sum na_nz^{n - 1}$, to get absolute convergence of $\sum n(n - 1)a_n z^{n - 2}$; dividing by $2$ gives the result.
+\end{proof}
+
+\begin{thm}
+ Let $\sum a_n z^n$ be a power series with radius of convergence $R$. For $|z| < R$, let
+ \[
+ f(z) = \sum_{n = 0}^\infty a_n z^n\text{ and }g(z) = \sum_{n = 1}^\infty na_nz^{n - 1}.
+ \]
+ Then $f$ is differentiable with derivative $g$.
+\end{thm}
+
+\begin{proof}
+ We want $f(z + h) - f(z) - hg(z)$ to be $o(h)$. We have
+ \[
+ f(z + h) - f(z) - hg(z) = \sum_{n = 2}^\infty a_n ((z + h)^n - z^n - hnz^{n - 1}).
+ \]
+ We started summing from $n = 2$ since the $n = 0$ and $n = 1$ terms are 0. Using our first lemma, we are left with
+ \[
+ h^2\sum_{n = 2}^\infty a_n \big((z + h)^{n - 2} + 2z(z + h)^{n - 3} + \cdots + (n - 1)z^{n - 2}\big)
+ \]
+ We want the huge infinite series to be bounded, and then the whole thing is a bounded thing times $h^2$, which is definitely $o(h)$.
+
+ Pick $r$ such that $|z| < r < R$. If $h$ is small enough that $|z + h| \leq r$, then the last infinite series is bounded above (in modulus) by
+ \[
+ \sum_{n = 2}^\infty|a_n|(r^{n - 2} + 2r^{n - 2} + \cdots + (n - 1)r^{n - 2}) = \sum_{n = 2}^\infty |a_n|\binom{n}{2}r^{n -2 },
+ \]
+ which converges by the corollary above, and so is a finite bound independent of $h$. So done.
+\end{proof}
+In IB Analysis II, we will prove the same result using the idea of uniform convergence, which gives a much nicer proof.
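For a concrete check of the theorem, take the geometric series $\sum z^n = \frac{1}{1-z}$, which has $R = 1$; the theorem says its derivative is $\sum n z^{n-1} = \frac{1}{(1-z)^2}$. A numerical sketch (names and truncation degree are ours):

```python
def f(z, N=200):
    # f(z) = sum z^n (geometric series, radius of convergence 1)
    return sum(z**n for n in range(N + 1))

def g(z, N=200):
    # Term-by-term derivative: sum n z^(n-1)
    return sum(n * z**(n - 1) for n in range(1, N + 1))

z = 0.3 + 0.2j                     # inside the circle of convergence
h = 1e-6
quotient = (f(z + h) - f(z)) / h   # difference quotient approximates g(z)
```

The difference quotient matches $g(z)$ up to the $O(h)$ error of the quotient itself, and $g(z)$ matches the closed form $1/(1-z)^2$ to machine precision.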
+
+\begin{eg}
+ The derivative of
+ \[
+ e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots
+ \]
+ is
+ \[
+ 1 + z + \frac{z^2}{2!} + \cdots = e^z.
+ \]
+ So we have another proof of this fact.
+
+ Similarly, the derivatives of $\sin z$ and $\cos z$ work out as $\cos z$ and $-\sin z$.
+\end{eg}
+
+\subsection{Hyperbolic trigonometric functions}
+\begin{defi}[Hyperbolic sine and cosine]
+ We define
+ \begin{align*}
+ \cosh z &= \frac{e^z + e^{-z}}{2} = 1 + \frac{z^2}{2!} + \frac{z^4}{4!} + \frac{z^6}{6!} + \cdots\\
+ \sinh z &= \frac{e^z - e^{-z}}{2} = z + \frac{z^3}{3!} + \frac{z^5}{5!} + \frac{z^7}{7!} + \cdots
+ \end{align*}
+\end{defi}
+
+Either from the definition or from differentiating the power series, we get that
+\begin{prop}
+ \begin{align*}
+ \frac{\d}{\d z}\cosh z &= \sinh z\\
+ \frac{\d }{\d z}\sinh z &= \cosh z
+ \end{align*}
+\end{prop}
+Also, by definition, we have
+\begin{prop}
+ \begin{align*}
+ \cosh iz &= \cos z\\
+ \sinh iz &= i\sin z
+ \end{align*}
+\end{prop}
+Also,
+\begin{prop}
+ \[
+ \cosh^2 z - \sinh^2 z = 1.
+ \]
+\end{prop}
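Both identities are easy to spot-check with the standard library's complex functions, which agree with the series definitions above:

```python
import cmath

z = 0.8 + 1.3j
identity = cmath.cosh(z)**2 - cmath.sinh(z)**2   # should equal 1
relation = cmath.cosh(1j * z) - cmath.cos(z)      # cosh(iz) - cos(z), should be 0
```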
+\section{The Riemann Integral}
+Finally, we can get to integrals. There are many ways to define an integral, which can have some subtle differences. The definition we will use here is the \emph{Riemann integral}, which is the simplest definition, but is also the weakest one, in the sense that many functions are \emph{not} Riemann integrable but integrable under other definitions.
+
+Still, the definition of the Riemann integral is not too straightforward, and requires a lot of preliminary definitions.
+\subsection{Riemann Integral}
+\begin{defi}[Dissections]
+ Let $[a, b]$ be a closed interval. A \emph{dissection} of $[a, b]$ is a sequence $a = x_0 < x_1 < x_2 < \cdots < x_n = b$.
+\end{defi}
+
+\begin{defi}[Upper and lower sums]
+ Given a dissection $\mathcal{D}$, the \emph{upper sum} and \emph{lower sum} are defined by the formulae
+ \begin{align*}
+ U_\mathcal{D} (f) &= \sum_{i = 1}^{n}(x_i - x_{i - 1}) \sup_{x\in [x_{i - 1}, x_{i}]}f (x)\\
+ L_\mathcal{D} (f) &= \sum_{i = 1}^{n}(x_i - x_{i - 1}) \inf_{x\in [x_{i - 1}, x_{i}]}f (x)
+ \end{align*}
+ Sometimes we use the shorthand
+ \[
+ M_i = \sup_{x\in [x_{i - 1}, x_i]} f(x), \quad m_i = \inf_{x\in [x_{i - 1}, x_i]} f(x).
+ \]
+\end{defi}
+The upper sum is the total area of the red rectangles, while the lower sum is the total area of the black rectangles:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 5) node [above] {$y$};
+
+ \draw [domain=-1:5] plot (\x, {(\x + 1)*(\x + 1)/10 + 1});
+
+ \draw (0.5, 0) node [below] {$a$} -- (0.5, 1.225) -- (1, 1.225);
+ \draw (1, 0) node [below] {$x_1$} -- (1, 1.4) -- (1.5, 1.4);
+ \draw (1.5, 0) node [below] {$x_2$} -- (1.5, 1.625) -- (2, 1.625) -- (2, 0) node [below] {$x_3$};
+ \node at (2.4, 0.8) {$\cdots$};
+ \draw (2.75, 0) node [below] {$x_i$} -- (2.75, 2.40625) -- (3.25, 2.40625) -- (3.25, 0) node [anchor = north west] {$\!\!\!\!\!x_{i + 1}\cdots$};
+ \node at (3.65, 1.2) {$\cdots$};
+ \draw (4, 0) -- (4, 3.5) -- (4.5, 3.5) -- (4.5, 0) node [below] {$b$};
+
+ \draw [red] (0.5, 1.225) -- (0.5, 1.4) -- (1, 1.4) -- (1, 1.625) -- (1.5, 1.625) -- (1.5, 1.9) -- (2, 1.9) -- (2, 1.625);
+ \draw [red] (2.75, 2.40625) -- (2.75, 2.80625) -- (3.25, 2.80625) -- (3.25, 2.40625);
+ \draw [red] (4, 3.5) -- (4, 4.025) -- (4.5, 4.025) -- (4.5, 3.5);
+ \end{tikzpicture}
+\end{center}
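These definitions translate directly into code. The sketch below approximates each sup and inf on a cell by dense sampling (which is exact for monotone functions); all names are ours:

```python
def upper_lower_sums(f, points, samples=100):
    # U_D(f) and L_D(f) for the dissection given by `points`, with the
    # sup/inf on each cell [x_{i-1}, x_i] estimated by sampling
    # (exact when f is monotone, since the endpoints are included).
    U = L = 0.0
    for x0, x1 in zip(points, points[1:]):
        ys = [f(x0 + (x1 - x0) * j / samples) for j in range(samples + 1)]
        U += (x1 - x0) * max(ys)
        L += (x1 - x0) * min(ys)
    return U, L

# f(x) = x^2 on [0, 1], uniform dissection into 100 cells
points = [i / 100 for i in range(101)]
U, L = upper_lower_sums(lambda x: x * x, points)
```

Here $U$ and $L$ trap the true area $\frac{1}{3}$, and the gap $U - L$ is exactly the mesh, $\frac{1}{100}$.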
+
+\begin{defi}[Refining dissections]
+ If $\mathcal{D}_1$ and $\mathcal{D}_2$ are dissections of $[a, b]$, we say that $\mathcal{D}_2$ refines $\mathcal{D}_1$ if every point of $\mathcal{D}_1$ is a point of $\mathcal{D}_2$.
+\end{defi}
+
+\begin{lemma}
+ If $\mathcal{D}_2$ refines $\mathcal{D}_1$, then
+ \[
+ U_{\mathcal{D}_2} f \leq U_{\mathcal{D}_1}f\text{ and }L_{\mathcal{D}_2} f\geq L_{\mathcal{D}_1}f.
+ \]
+\end{lemma}
+Using the picture above, this is because if we cut up the dissections into smaller pieces, the red rectangles can only get chopped into shorter pieces and the black rectangles can only get chopped into taller pieces.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 3) node [above] {$y$};
+
+ \draw [domain=-0.5:3] plot (\x, {(\x + 1)*(\x + 1)/10 + 1});
+
+ \draw (0.5, 0) node [below] {$x_0$} -- (0.5, 1.225) -- (2, 1.225) -- (2, 0) node [below] {$x_1$};
+
+ \draw [red] (0.5, 1.225) -- (0.5, 1.9) -- (2, 1.9) -- (2, 1.225);
+ \draw [->] (3.5, 1.5) -- (4.5, 1.5);
+ \begin{scope}[shift={(5.5, 0)}]
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 3) node [above] {$y$};
+
+ \draw [domain=-0.5:3] plot (\x, {(\x + 1)*(\x + 1)/10 + 1});
+
+ \draw (0.5, 0) node [below] {$x_0$} -- (0.5, 1.225) -- (1, 1.225);
+ \draw (1, 0) node [below] {$x_1$} -- (1, 1.4) -- (1.5, 1.4);
+ \draw (1.5, 0) node [below] {$x_2$} -- (1.5, 1.625) -- (2, 1.625) -- (2, 0) node [below] {$x_3$};
+
+ \draw [red] (0.5, 1.225) -- (0.5, 1.4) -- (1, 1.4) -- (1, 1.625) -- (1.5, 1.625) -- (1.5, 1.9) -- (2, 1.9) -- (2, 1.625);
+
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+
+\begin{proof}
+ Let $\mathcal{D}_1$ be $x_0 < x_1 < \cdots < x_n$. First let $\mathcal{D}_2$ be obtained from $\mathcal{D}_1$ by the addition of one point $z$. If $z\in (x_{i - 1}, x_i)$, then
+ \begin{align*}
+ U_{\mathcal{D}_2}f - U_{\mathcal{D}_1}f &= \left[(z - x_{i - 1}) \sup_{x\in [x_{i - 1}, z]} f(x)\right]\\
+ &+ \left[(x_i - z)\sup_ {x\in[z, x_i]}f(x)\right] - (x_i - x_{i - 1})M_i.
+ \end{align*}
+ But $\sup_{x\in [x_{i - 1}, z]} f(x)$ and $\sup_{x\in [z, x_i]} f(x)$ are both at most $M_i$. So this is at most $M_i(z - x_{i - 1} + x_i - z - (x_i - x_{i - 1})) = 0$. So
+ \[
+ U_{\mathcal{D}_2} f\leq U_{\mathcal{D}_1}f.
+ \]
+ By induction, the result is true whenever $\mathcal{D}_2$ refines $\mathcal{D}_1$.
+
+ A very similar argument shows that $L_{\mathcal{D}_2} f \geq L_{\mathcal{D}_1}f$.
+\end{proof}
+\begin{defi}[Least common refinement]
+ Let $\mathcal{D}_1$ and $\mathcal{D}_2$ be dissections of $[a, b]$. Then the \emph{least common refinement} of $\mathcal{D}_1$ and $\mathcal{D}_2$ is the dissection made out of the points of $\mathcal{D}_1$ and $\mathcal{D}_2$.
+\end{defi}
+
+\begin{cor}
+ Let $\mathcal{D}_1$ and $\mathcal{D}_2$ be two dissections of $[a, b]$. Then
+ \[
+ U_{\mathcal{D}_1}f \geq L_{\mathcal{D}_2}f.
+ \]
+\end{cor}
+\begin{proof}
+ Let $\mathcal{D}$ be the least common refinement (or indeed any common refinement). Then by lemma above (and by definition),
+ \[
+ U_{\mathcal{D}_1}f \geq U_{\mathcal{D}}f \geq L_{\mathcal{D}}f \geq L_{\mathcal{D}_2}f.\qedhere
+ \]
+\end{proof}
+
+Finally, we can define the integral.
+\begin{defi}[Upper, lower, and Riemann integral]
+ The \emph{upper integral} is
+ \[
+ \overline{\int_a^b} f(x)\;\d x = \inf_{\mathcal{D}}U_{\mathcal{D}}f.
+ \]
+ The \emph{lower integral} is
+ \[
+ \underline{\int_a^b} f(x)\;\d x = \sup_{\mathcal{D}}L_{\mathcal{D}}f.
+ \]
+ If these are equal, then we call their common value the \emph{Riemann integral} of $f$, denoted $\int_a^b f(x)\;\d x$.
+
+ If this exists, we say $f$ is \emph{Riemann integrable}.
+\end{defi}
+We will later prove the \emph{fundamental theorem of calculus}, which says that integration is the reverse of differentiation. But why don't we simply define integration as anti-differentiation, and prove that it gives the area under the curve? The problem is that there are functions, such as $e^{-x^2}$, whose anti-derivatives have no closed form. In these cases, we wouldn't want to say the integral doesn't exist --- it surely does according to this definition!
+
+There is an immediate necessary condition for Riemann integrability --- boundedness. If $f$ is unbounded above in $[a, b]$, then for any dissection $\mathcal{D}$, there must be some $i$ such that $f$ is unbounded on $[x_{i - 1}, x_i]$. So $M_i = \infty$. So $U_\mathcal{D} f = \infty$. Similarly, if $f$ is unbounded below, then $L_{\mathcal{D}} f = -\infty$. So unbounded functions are not Riemann integrable.
+
+\begin{eg}
+ Let $f(x) = x$ on $[a, b]$. Intuitively, we know that the integral is $(b^2 - a^2)/2$, and we will show this using the definition above. Let $\mathcal{D} = x_0 < x_1 < \cdots < x_n$ be a dissection. Then
+ \begin{align*}
+ U_{\mathcal{D}}f &= \sum_{i = 1}^n (x_i - x_{i - 1})x_i\\
+ \intertext{We \emph{know} that the integral is $\frac{b^2 - a^2}{2}$. So we put each term of the sum into the form $\frac{x_i^2 - x_{i - 1}^2}{2}$ plus some error terms.}
+ &= \sum_{i = 1}^n \left(\frac{x_i^2}{2} - \frac{x_{i - 1}^2}{2} + \frac{x_i^2}{2} - x_{i - 1}x_i + \frac{x_{i - 1}^2}{2}\right)\\
+ &= \frac{1}{2}\sum_{i = 1}^n (x_i^2 - x_{i - 1}^2 + (x_i - x_{i - 1})^2)\\
+ &= \frac{1}{2}(b^2 - a^2) + \frac{1}{2}\sum_{i = 1}^n(x_i - x_{i - 1})^2.
+ \end{align*}
+ \begin{defi}[Mesh]
+ The \emph{mesh} of a dissection $\mathcal{D}$ is $\max_i (x_i - x_{i - 1})$.
+ \end{defi}
+ If the mesh is less than $\delta$, then
+ \[
+ \frac{1}{2}\sum_{i = 1}^n (x_i - x_{i - 1})^2 \leq \frac{\delta}{2}\sum_{i = 1}^n (x_i - x_{i - 1}) = \frac{\delta}{2}(b - a).
+ \]
+ So by making $\delta$ small enough, we can show that for any $\varepsilon > 0$,
+ \[
+ \overline{\int_a^b} x\;\d x < \frac{1}{2}(b^2 - a^2) + \varepsilon.
+ \]
+ Similarly,
+ \[
+ \underline{\int_a^b} x\;\d x > \frac{1}{2}(b^2 - a^2) - \varepsilon.
+ \]
+ So
+ \[
+ \int_a^b x\;\d x = \frac{1}{2}(b^2 - a^2).
+ \]
+\end{eg}
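We can watch the squeeze happen numerically for $f(x) = x$: since $f$ is increasing, the sup on each cell is the right endpoint and the inf is the left endpoint, so both sums are exact. A sketch (names are ours):

```python
def sums_for_identity(a, b, n):
    # U_D and L_D for f(x) = x on [a, b], uniform dissection into n cells.
    # f is increasing, so sup = right endpoint, inf = left endpoint per cell.
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    U = sum((x1 - x0) * x1 for x0, x1 in zip(xs, xs[1:]))
    L = sum((x1 - x0) * x0 for x0, x1 in zip(xs, xs[1:]))
    return U, L

a, b = 1.0, 3.0
exact = (b**2 - a**2) / 2       # = 4
U, L = sums_for_identity(a, b, 1000)
```

As in the worked calculation, $U - L = \sum (x_i - x_{i-1})^2 = (b-a)^2/n$, which here is $0.004$.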
+
+\begin{eg}
+ Define $f: [0, 1] \to \R$ by
+ \[
+ f(x) =
+ \begin{cases}
+ 1 & x \in \Q\\
+ 0 & x \not\in \Q
+ \end{cases}.
+ \]
+ Let $x_0 < x_1 < \cdots < x_n$ be a dissection. Then for every $i$, we have $m_i = 0$ (since there is an irrational in every interval), and $M_i = 1$ (since there is a rational in every interval). So
+ \[
+ U_{\mathcal{D}}f = \sum_{i = 1}^nM_i(x_i - x_{i - 1}) = \sum_{i = 1}^n (x_i - x_{i - 1}) = 1.
+ \]
+ Similarly, $L_\mathcal{D} f = 0$. Since $\mathcal{D}$ was arbitrary, we have
+ \[
+ \overline{\int_0^1}f(x)\;\d x = 1, \quad \underline{\int_0^1}f(x)\;\d x = 0.
+ \]
+ So $f$ is \emph{not} Riemann integrable.
+
+ Of course, this function is not interesting at all. The whole point of its existence is to show undergraduates that there are some functions that are not integrable!
+\end{eg}
+
+Note that it is important to say that the function is not \emph{Riemann} integrable. There are other notions for integration in which this function is integrable. For example, this function is \emph{Lebesgue-integrable}.
+
+Using the definition to show integrability is often cumbersome. Most of the time, we use \emph{Riemann's integrability criterion} (sometimes known as Cauchy's integrability criterion), which follows rather immediately from the definition, but is much nicer to work with.
+\begin{prop}[Riemann's integrability criterion]
+ Let $f: [a, b] \to \R$. Then $f$ is Riemann integrable if and only if for every $\varepsilon > 0$, there exists a dissection $\mathcal{D}$ such that
+ \[
+ U_\mathcal{D} f - L_\mathcal{D} f < \varepsilon.
+ \]
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ Suppose that $f$ is integrable, and let $\varepsilon > 0$. By the definition of the upper and lower integrals as an infimum and a supremum respectively, there exist $\mathcal{D}_1$ and $\mathcal{D}_2$ such that
+ \[
+ U_{\mathcal{D}_1} f < \int_a^b f(x)\;\d x + \frac{\varepsilon}{2},
+ \]
+ and
+ \[
+ L_{\mathcal{D}_2} f > \int_a^b f(x)\;\d x - \frac{\varepsilon}{2}.
+ \]
+ Let $\mathcal{D}$ be a common refinement of $\mathcal{D}_1$ and $\mathcal{D}_2$. Then
+ \[
+ U_\mathcal{D} f - L_\mathcal{D} f \leq U_{\mathcal{D}_1} f- L_{\mathcal{D}_2} f < \varepsilon.
+ \]
+ $(\Leftarrow)$ Conversely, if there exists $\mathcal{D}$ such that
+ \[
+ U_\mathcal{D} f - L_\mathcal{D}f < \varepsilon,
+ \]
+ then
+ \[
+ \inf_{\mathcal{D}'} U_{\mathcal{D}'} f - \sup_{\mathcal{D}'} L_{\mathcal{D}'} f < \varepsilon,
+ \]
+ which is, by definition, that
+ \[
+ \overline{\int_a^b} f(x)\;\d x - \underline{\int_a^b} f(x)\;\d x < \varepsilon.
+ \]
+ Since $\varepsilon > 0$ is arbitrary, and the upper integral is always at least the lower integral (by the corollary above), this gives us that
+ \[
+ \overline{\int_a^b} f(x)\;\d x = \underline{\int_a^b} f(x)\;\d x.
+ \]
+ So $f$ is integrable.
+\end{proof}
+
+The next big result we want to prove is that integration is linear, ie
+\[
+ \int_a^b (\lambda f(x) + \mu g(x))\;\d x = \lambda\int_a^b f(x)\;\d x + \mu\int_a^bg(x)\;\d x.
+\]
+We do this step by step:
+\begin{prop}
+ Let $f: [a, b] \to \R$ be integrable, and $\lambda \geq 0$. Then $\lambda f$ is integrable, and
+ \[
+ \int_a^b \lambda f(x)\;\d x = \lambda\int_a^b f(x)\;\d x.
+ \]
+\end{prop}
+
+\begin{proof}
+ Let $\mathcal{D}$ be a dissection of $[a, b]$. Since
+ \[
+ \sup_{x\in [x_{i - 1}, x_i]}\lambda f(x) = \lambda\sup_{x\in [x_{i - 1}, x_i]}f(x),
+ \]
+ and similarly for inf, we have
+ \begin{align*}
+ U_{\mathcal{D}}(\lambda f) &= \lambda U_{\mathcal{D}} f\\
+ L_{\mathcal{D}}(\lambda f) &= \lambda L_\mathcal{D} f.
+ \end{align*}
+ So if $\lambda > 0$ and we choose $\mathcal{D}$ such that $U_{\mathcal{D}}f - L_\mathcal{D} f < \varepsilon/\lambda$, then $U_\mathcal{D}(\lambda f) - L_\mathcal{D}(\lambda f) < \varepsilon$. So the result follows from Riemann's integrability criterion (the case $\lambda = 0$ is trivial).
+\end{proof}
+
+\begin{prop}
+ Let $f: [a, b] \to \R$ be integrable. Then $-f$ is integrable, and
+ \[
+ \int_a^b -f(x)\;\d x = -\int_a^bf(x)\;\d x.
+ \]
+\end{prop}
+
+\begin{proof}
+ Let $\mathcal{D}$ be a dissection. Then
+ \begin{align*}
+ \sup_{x\in [x_{i - 1}, x_i]}-f(x) &= -\inf_{x\in [x_{i - 1}, x_i]} f(x)\\
+ \inf_{x\in [x_{i - 1}, x_i]}-f(x) &= -\sup_{x\in [x_{i - 1}, x_i]} f(x).
+ \end{align*}
+ Therefore
+ \[
+ U_\mathcal{D}(-f) = \sum_{i = 1}^n (x_i - x_{i - 1})(-m_i) = -L_\mathcal{D}(f).
+ \]
+ Similarly,
+ \[
+ L_\mathcal{D}(-f) = -U_\mathcal{D}f.
+ \]
+ So
+ \[
+ U_\mathcal{D}(-f) - L_\mathcal{D}(-f) = U_\mathcal{D}f - L_\mathcal{D}f.
+ \]
+ Hence if $f$ is integrable, then $-f$ is integrable by Riemann's integrability criterion. Moreover, taking the infimum of $U_\mathcal{D}(-f) = -L_\mathcal{D}f$ over all $\mathcal{D}$ gives $\int_a^b -f(x)\;\d x = -\int_a^b f(x)\;\d x$.
+\end{proof}
+
+\begin{prop}
+ Let $f, g: [a, b] \to \R$ be integrable. Then $f + g$ is integrable, and
+ \[
+ \int_a^b(f(x) + g(x))\;\d x = \int_a^b f(x)\;\d x + \int_a^b g(x)\;\d x.
+ \]
+\end{prop}
+
+\begin{proof}
+ Let $\mathcal{D}$ be a dissection. Then
+ \begin{align*}
+ U_\mathcal{D}(f + g) &= \sum_{i = 1}^n (x_i - x_{i - 1})\sup_{x\in [x_{i - 1}, x_i]}(f(x) + g(x))\\
+ &\leq \sum_{i = 1}^n (x_i - x_{i - 1}) \left(\sup_{u\in [x_{i - 1}, x_i]}f(u) + \sup_{v\in [x_{i - 1}, x_i]}g(v)\right)\\
+ &= U_\mathcal{D}f + U_\mathcal{D} g
+ \end{align*}
+ Therefore,
+ \[
+ \overline{\int_a^b}(f(x) + g(x))\;\d x \leq \overline{\int_a^b} f(x)\;\d x + \overline{\int_a^b} g(x)\;\d x = \int_a^b f(x)\;\d x + \int_a^bg(x)\;\d x.
+ \]
+ Similarly,
+ \[
+ \underline{\int_a^b}(f(x) + g(x))\;\d x \geq \int_a^b f(x)\;\d x + \int_a^b g(x)\;\d x.
+ \]
+ So the upper and lower integrals are equal, and the result follows.
+\end{proof}
+So we now have that
+\[
+ \int_a^b (\lambda f(x) + \mu g(x))\;\d x = \lambda\int_a^b f(x)\;\d x + \mu\int_a^bg(x)\;\d x.
+\]
+We will prove more ``obvious'' results.
+\begin{prop}
+ Let $f, g: [a, b] \to \R$ be integrable, and suppose that $f(x) \leq g(x)$ for every $x$. Then
+ \[
+ \int_a^b f(x)\;\d x \leq \int_a^b g(x)\;\d x.
+ \]
+\end{prop}
+
+\begin{proof}
+ Follows immediately from the definition.
+\end{proof}
+
+\begin{prop}
+ Let $f: [a, b] \to \R$ be integrable. Then $|f|$ is integrable.
+\end{prop}
+
+\begin{proof}
+ Note that we can write
+ \[
+ \sup_{x\in [x_{i - 1}, x_i]}f(x) - \inf_{x\in [x_{i - 1}, x_i]}f(x) = \sup_{u, v\in [x_{i - 1}, x_i]}|f(u) - f(v)|.
+ \]
+ Similarly,
+ \[
+ \sup_{x\in [x_{i - 1}, x_i]}|f(x)| - \inf_{x\in [x_{i - 1}, x_i]}|f(x)| = \sup_{u, v\in [x_{i - 1}, x_i]}||f(u)| - |f(v)||.
+ \]
+ For any pair of real numbers $x, y$, we have $||x| - |y|| \leq |x - y|$ by the triangle inequality. So for any $u, v\in [x_{i - 1}, x_i]$, we have
+ \[
+ ||f(u)| - |f(v)|| \leq |f(u) - f(v)|.
+ \]
+ Hence we have
+ \[
+ \sup_{x\in [x_{i - 1}, x_i]}|f(x)| - \inf_{x\in [x_{i - 1}, x_i]}|f(x)| \leq \sup_{x\in [x_{i - 1}, x_i]}f(x) - \inf_{x\in [x_{i - 1}, x_i]} f(x).
+ \]
+ So for any dissection $\mathcal{D}$, we have
+ \[
+ U_\mathcal{D} (|f|) - L_\mathcal{D}(|f|) \leq U_\mathcal{D}(f) - L_\mathcal{D}(f).
+ \]
+ So the result follows from Riemann's integrability criterion.
+\end{proof}
+Combining these two propositions, we get that if
+\[
+ |f(x) - g(x)| \leq C,
+\]
+for every $x\in[a, b]$, then
+\[
+ \left|\int_a^bf(x)\;\d x - \int_a^b g(x)\;\d x\right| \leq C(b - a).
+\]
+
+\begin{prop}[Additivity property]
+ Let $f: [a, c] \to \R$ be integrable, and let $b\in (a, c)$. Then the restrictions of $f$ to $[a, b]$ and $[b, c]$ are Riemann integrable, and
+ \[
+ \int_a^b f(x)\;\d x + \int_b^c f(x)\;\d x = \int_a^c f(x) \;\d x
+ \]
+ Similarly, if $f$ is integrable on $[a, b]$ and $[b, c]$, then it is integrable on $[a, c]$ and the above equation also holds.
+\end{prop}
+
+\begin{proof}
+ Let $\varepsilon> 0$, and let $a = x_0 < x_1 < \cdots < x_n = c$ be a dissection $\mathcal{D}$ of $[a, c]$ such that
+ \[
+ U_\mathcal{D}(f) \leq \int_a^c f(x)\;\d x + \varepsilon,
+ \]
+ and
+ \[
+ L_\mathcal{D}(f) \geq \int_a^c f(x)\;\d x - \varepsilon.
+ \]
+ Let $\mathcal{D}'$ be the dissection made of $\mathcal{D}$ plus the point $b$. Let $\mathcal{D}_1$ be the dissection of $[a, b]$ made of points of $\mathcal{D}'$ from $a$ to $b$, and $\mathcal{D}_2$ be the dissection of $[b, c]$ made of points of $\mathcal{D}'$ from $b$ to $c$. Then
+ \[
+ U_{\mathcal{D}_1}(f) + U_{\mathcal{D}_2}(f) = U_{\mathcal{D}'}(f) \leq U_{\mathcal{D}}(f),
+ \]
+ and
+ \[
+ L_{\mathcal{D}_1}(f) + L_{\mathcal{D}_2}(f) = L_{\mathcal{D}'}(f) \geq L_\mathcal{D} (f).
+ \]
+ Since $U_\mathcal{D}(f) - L_\mathcal{D}(f) < 2\varepsilon$, and both $U_{\mathcal{D}_2}(f) - L_{\mathcal{D}_2} (f)$ and $U_{\mathcal{D}_1}(f) - L_{\mathcal{D}_1} (f)$ are non-negative, we have $U_{\mathcal{D}_1} (f) - L_{\mathcal{D}_1} (f)$ and $U_{\mathcal{D}_2}(f) - L_{\mathcal{D}_2}(f)$ are less than $2\varepsilon$. Since $\varepsilon$ is arbitrary, it follows that the restrictions of $f$ to $[a, b]$ and $[b, c]$ are both Riemann integrable. Furthermore,
+ \begin{multline*}
+ \int_a^b f(x)\;\d x + \int_b^c f(x)\;\d x \leq U_{\mathcal{D}_1}(f) + U_{\mathcal{D}_2}(f) = U_{\mathcal{D}'}(f) \leq U_{\mathcal{D}}(f)\\
+ \leq \int_a^c f(x)\;\d x + \varepsilon.
+ \end{multline*}
+ Similarly,
+ \begin{multline*}
+ \int_a^bf(x)\;\d x + \int_b^cf(x)\;\d x \geq L_{\mathcal{D}_1}(f) + L_{\mathcal{D}_2}(f) = L_{\mathcal{D}'}(f) \geq L_{\mathcal{D}}(f)\\
+ \geq \int_a^c f(x)\;\d x - \varepsilon.
+ \end{multline*}
+ Since $\varepsilon$ is arbitrary, it follows that
+ \[
+ \int_a^b f(x)\;\d x + \int_b^c f(x)\;\d x = \int_a^c f(x)\;\d x.
+ \]
+ The other direction is left as an (easy) exercise.
+\end{proof}
+
+\begin{prop}
+ Let $f, g: [a, b] \to \R$ be integrable. Then $fg$ is integrable.
+\end{prop}
+
+\begin{proof}
+ Let $C$ be such that $|f(x)|, |g(x)| \leq C$ for every $x\in [a, b]$. Write $L_i$ and $\ell_i$ for the $\sup$ and $\inf$ of $g$ in $[x_{i - 1}, x_i]$. Similarly write $M_i$ and $m_i$ for the $\sup$ and $\inf$ of $f$ in $[x_{i - 1}, x_i]$. Now let $\mathcal{D}$ be a dissection, and for each $i$, let $u_i$ and $v_i$ be two points in $[x_{i - 1}, x_i]$.
+
+ We will pretend that $u_i$ and $v_i$ are the minimum and maximum when we write the proof, but we cannot assert that they are, since $fg$ need not attain its supremum and infimum on each interval. We will then note that since our bounds hold for arbitrary $u_i$ and $v_i$, they must also hold for the supremum and infimum, which can be approached arbitrarily closely.
+
+ We find what we pretend is the difference between the upper and lower sum:
+ \begin{align*}
+ &\quad \left|\sum_{i = 1}^n (x_i - x_{i - 1})\big(f(v_i)g(v_i) - f(u_i)g(u_i)\big)\right| \\
+ &= \left|\sum_{i = 1}^{n}(x_i - x_{i - 1})\big(f(v_i)(g(v_i) - g(u_i)) + (f(v_i) - f(u_i))g(u_i)\big)\right|\\
+ &\leq \sum_{i = 1}^n (x_i - x_{i - 1})\big(C(L_i - \ell_i) + (M_i - m_i)C\big)\\
+ &=C(U_\mathcal{D}g - L_\mathcal{D}g + U_\mathcal{D}f - L_\mathcal{D}f).
+ \end{align*}
+ Since $u_i$ and $v_i$ are arbitrary, it follows that
+ \[
+ U_\mathcal{D}(fg) - L_\mathcal{D}(fg) \leq C(U_\mathcal{D}f - L_\mathcal{D}f + U_\mathcal{D}g - L_\mathcal{D}g).
+ \]
+ Since $C$ is fixed, and we can make $U_\mathcal{D} f - L_\mathcal{D}f$ and $U_\mathcal{D}g - L_\mathcal{D}g$ arbitrarily small (since $f$ and $g$ are integrable), we can make $U_\mathcal{D}(fg) - L_\mathcal{D}(fg)$ arbitrarily small. So the result follows.
+\end{proof}
+
+\begin{thm}
+ Every continuous function $f$ on a closed bounded interval $[a, b]$ is Riemann integrable.
+\end{thm}
+
+\begin{proof}
+ WLOG, assume $[a, b] = [0, 1]$.
+
+ Suppose the contrary, that $f$ is not integrable. By Riemann's integrability criterion, there exists some $\varepsilon > 0$ such that for every dissection $\mathcal{D}$, we have $U_{\mathcal{D}}f - L_{\mathcal{D}}f \geq \varepsilon$. In particular, for every $n$, let $\mathcal{D}_n$ be the dissection $0, \frac{1}{n}, \frac{2}{n}, \cdots, \frac{n}{n}$.
+
+ Since $U_{\mathcal{D}_n}f - L_{\mathcal{D}_n}f \geq \varepsilon$, there exists some interval $\left[\frac{k}{n}, \frac{k + 1}{n}\right]$ on which $\sup f - \inf f \geq \varepsilon$. The supremum and infimum are attained, at $x_n$ and $y_n$ say, since $f$ is continuous on a closed bounded interval. Then we have $|x_n - y_n| \leq \frac{1}{n}$ and $f(x_n) - f(y_n) \geq \varepsilon$.
+
+ By Bolzano--Weierstrass, $(x_n)$ has a convergent subsequence, say $(x_{n_i})$. Say $x_{n_i}\to x$. Since $|x_n - y_n| \leq \frac{1}{n}\to 0$, we must have $y_{n_i}\to x$. By continuity, we must have $f(x_{n_i}) \to f(x)$ and $f(y_{n_i}) \to f(x)$, but $f(x_{n_i})$ and $f(y_{n_i})$ always differ by at least $\varepsilon$. Contradiction.
+\end{proof}
+With this result, we know that a lot of things are integrable, e.g.\ $e^{-x^2}$.
+
+To prove this, we secretly used the property of \emph{uniform continuity}:
+\begin{defi}[Uniform continuity*]
+ Let $A\subseteq \R$ and let $f: A\to \R$. Then $f$ is \emph{uniformly continuous} if
+ \[
+ (\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)(\forall y)\;|x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon.
+ \]
+\end{defi}
+This is different from regular continuity. Regular continuity says that at any point $x$, we can find a $\delta$ that works for this point. Uniform continuity says that we can find a $\delta$ that works for \emph{any} $x$.
+
+It is easy to show that a uniformly continuous function is integrable: as long as the mesh of a dissection is sufficiently small, uniform continuity makes the difference between the upper sum and the lower sum arbitrarily small. Thus to prove the above theorem, we just have to show that continuous functions on a closed bounded interval are uniformly continuous.
+
+\begin{thm}[non-examinable]
+ Let $a < b$ and let $f: [a, b] \to \R$ be continuous. Then $f$ is uniformly continuous.
+\end{thm}
+
+\begin{proof}
+ Suppose that $f$ is not uniformly continuous. Then
+ \[
+ (\exists \varepsilon > 0)(\forall \delta > 0)(\exists x)(\exists y)\;|x - y| < \delta \text{ and } |f(x) - f(y)| \geq \varepsilon.
+ \]
+ Therefore, we can find sequences $(x_n), (y_n)$ such that for every $n$, we have
+ \[
+ |x_n - y_n| \leq \frac{1}{n}\text{ and }|f(x_n) - f(y_n)| \geq \varepsilon.
+ \]
+ Then by Bolzano-Weierstrass theorem, we can find a subsequence $(x_{n_k})$ converging to some $x$. Since $|x_{n_k} - y_{n_k}| \leq \frac{1}{n_k}$, $y_{n_k}\to x$ as well. But $|f(x_{n_k}) - f(y_{n_k})| \geq \varepsilon$ for every $k$. So $f(x_{n_k})$ and $f(y_{n_k})$ cannot both converge to the same limit. So $f$ is not continuous at $x$.
+\end{proof}
+This proof is very similar to the proof that continuous functions are integrable. In fact, the proof that continuous functions are integrable is just a fusion of this proof and the (simple) proof that uniformly continuous functions are integrable.
+
+\begin{thm}
+ Let $f: [a, b] \to \R$ be monotone. Then $f$ is Riemann integrable.
+\end{thm}
+Note that monotone functions need not be ``nice''. They can even have infinitely many discontinuities. For example, let $f: [0, 1] \to \R$ map $x$ to $1/n$, where the $n$-th digit is the first non-zero digit in the binary expansion of $x$, with $f(0) = 0$.
+
+\begin{proof}
+ Let $\varepsilon > 0$. WLOG, assume $f$ is increasing and non-constant (a constant function is trivially integrable). Let $\mathcal{D}$ be a dissection of mesh less than $\frac{\varepsilon}{f(b) - f(a)}$. Then
+ \begin{align*}
+ U_\mathcal{D} f - L_\mathcal{D}f &= \sum_{i = 1}^n (x_i - x_{i - 1})(f(x_i) - f(x_{i - 1}))\\
+ &\leq \frac{\varepsilon}{f(b) - f(a)} \sum_{i = 1}^n (f(x_i) - f(x_{i - 1}))\\
+ &= \varepsilon.\qedhere
+ \end{align*}
+\end{proof}
+
+Pictorially, we see that the difference between the upper and lower sums is the total area of the red rectangles.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 5) node [above] {$y$};
+
+ \draw [domain=-1:5] plot (\x, {(\x + 1)*(\x + 1)/10 + 1});
+
+ \draw (0.5, 0) rectangle (1, 1.225);
+ \draw (1, 0) rectangle (1.5, 1.4);
+ \draw (1.5, 0) rectangle (2, 1.625);
+ \draw (2, 0) rectangle (2.5, 1.9);
+ \draw (2.5, 0) rectangle (3, 2.225);
+ \draw (3, 0) rectangle (3.5, 2.6);
+ \draw (3.5, 0) rectangle (4, 3.026);
+ \draw (4, 0) rectangle (4.5, 3.5);
+
+ \draw [red] (0.5, 1.225) rectangle (1, 1.4);
+ \draw [red] (1, 1.4) rectangle (1.5, 1.625);
+ \draw [red] (1.5, 1.625) rectangle (2, 1.9);
+ \draw [red] (2, 1.9) rectangle (2.5, 2.225);
+ \draw [red] (2.5, 2.225) rectangle (3, 2.6);
+ \draw [red] (3, 2.6) rectangle (3.5, 3.025);
+ \draw [red] (3.5, 3.025) rectangle (4, 3.5);
+ \draw [red] (4, 3.5) rectangle (4.5, 4.025);
+ \end{tikzpicture}
+\end{center}
+To calculate the total area, we can stack the red areas together to get something of width $\frac{\varepsilon}{f(b) - f(a)}$ and height $f(b) - f(a)$. So the total area is just $\varepsilon$.
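The monotone example above with infinitely many discontinuities behaves exactly as the proof predicts: with mesh $\delta$, the gap $U_\mathcal{D}f - L_\mathcal{D}f$ telescopes to $\delta(f(1) - f(0))$. A sketch (the function is our reading of the example, with $f(0) = 0$):

```python
def f(x):
    # f(x) = 1/n, where the first 1 in the binary expansion of x is the
    # n-th digit; monotone increasing, with a jump at every 2^(-n).
    if x <= 0:
        return 0.0
    n = 1
    while 2.0 ** (-n) > x:   # find the first binary digit of x that is 1
        n += 1
    return 1.0 / n

def gap(cells):
    # U_D - L_D for a uniform dissection; f is increasing, so on each cell
    # sup = f(right endpoint) and inf = f(left endpoint).
    xs = [i / cells for i in range(cells + 1)]
    return sum((x1 - x0) * (f(x1) - f(x0)) for x0, x1 in zip(xs, xs[1:]))
```

For a uniform dissection the gap is the mesh times $f(1) - f(0) = 1$, so `gap(100)` is $0.01$ and shrinks as the dissection is refined.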
+
+\begin{lemma}
+ Let $a < b$ and let $f$ be a bounded function from $[a, b] \to \R$ that is continuous on $(a, b)$. Then $f$ is integrable.
+\end{lemma}
+An example where this would apply is $\int_0^1 \sin \frac{1}{x}\;\d x$. The integrand gets nasty near $x = 0$, but its ``nastiness'' is confined to $x = 0$ only. So as long as its nastiness is sufficiently contained, it is still integrable.
+
+The idea of the proof is to integrate from a point $x_1$ very near $a$ up to a point $x_{n - 1}$ very close to $b$. Since $f$ is bounded, the contributions from the regions $[a, x_1]$ and $[x_{n - 1}, b]$ are small enough not to cause trouble.
+
+\begin{proof}
+ Let $\varepsilon > 0$. Suppose that $|f(x)| \leq C$ for every $x\in [a, b]$. Let $x_0 = a$ and pick $x_1$ such that $x_1 - x_0 < \frac{\varepsilon}{8C}$. Also choose $z$ between $x_1$ and $b$ such that $b - z < \frac{\varepsilon}{8C}$.
+
+ Then $f$ is continuous on $[x_1, z]$. Therefore it is integrable on $[x_1, z]$. So we can find a dissection $\mathcal{D}'$ with points $x_1 < x_2 < \cdots < x_{n - 1} = z$ such that
+ \[
+ U_{\mathcal{D}'}f - L_{\mathcal{D}'}f < \frac{\varepsilon}{2}.
+ \]
+ Let $\mathcal{D}$ be the dissection $a = x_0 < x_1 < \cdots < x_n = b$. Then
+ \[
+ U_\mathcal{D} f - L_\mathcal{D} f < \frac{\varepsilon}{8C}\cdot 2C + \frac{\varepsilon}{2} + \frac{\varepsilon}{8C}\cdot 2C = \varepsilon.
+ \]
+ So we are done by the Riemann integrability criterion.
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item $f(x) =
+ \begin{cases}
+ \sin \frac{1}{x}& x \not = 0\\
+ 0 & x = 0
+ \end{cases}$ defined on $[-1, 1]$ is integrable.
+ \item $g(x) =
+ \begin{cases}
+ x & x \leq 1\\
+ x^2 + 1 & x > 1
+ \end{cases}$ defined on $[0, 2]$ is integrable.
+ \end{itemize}
+\end{eg}
+
+\begin{cor}
+ Every piecewise continuous and bounded function on $[a, b]$ is integrable.
+\end{cor}
+
+\begin{proof}
+ Partition $[a, b]$ into intervals $I_1, \cdots, I_k$, on each of which $f$ is (bounded and) continuous. Hence for every $I_j$ with end points $x_{j - 1}$, $x_j$, $f$ is integrable on $[x_{j - 1}, x_j]$ (which may not equal $I_j$, e.g.\ $I_j$ could be $[x_{j - 1}, x_j)$). But then by the additivity property of integration, we get that $f$ is integrable on $[a, b]$.
+\end{proof}
+
+We defined Riemann integration in a very general way --- we allowed \emph{arbitrary} dissections, and took the extrema over all possible dissections. Is it possible to just consider some particular nice dissections instead? Perhaps unsurprisingly, yes! It's just that we opt to define it the general way so that we can easily talk about things like least common refinements.
+\begin{lemma}
+ Let $f: [a, b] \to \R$ be Riemann integrable, and for each $n$, let $\mathcal{D}_n$ be the dissection $a = x_0 < x_1 < \cdots < x_n = b$, where $x_i = a + \frac{i(b - a)}{n}$ for each $i$. Then
+ \[
+ U_{\mathcal{D}_n}f \to \int_a^b f(x)\;\d x
+ \]
+ and
+ \[
+ L_{\mathcal{D}_n}f \to \int_a^b f(x)\;\d x.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $\varepsilon > 0$. We need to find an $N$. The only thing we know is that $f$ is Riemann integrable, so we use it:
+
+ Since $f$ is integrable, there is a dissection $\mathcal{D}$, say $u_0 < u_1 < \cdots < u_m$, such that
+ \[
+ U_\mathcal{D} f - \int_a^b f(x)\;\d x < \frac{\varepsilon}{2}.
+ \]
+ We also know that $f$ is bounded. Let $C$ be such that $|f(x)| \leq C$ for all $x \in [a, b]$.
+
+ For any $n$, let $\mathcal{D}'$ be the least common refinement of $\mathcal{D}_n$ and $\mathcal{D}$. Then
+ \[
+ U_{\mathcal{D}'}f \leq U_\mathcal{D} f.
+ \]
+ Also, the sums $U_{\mathcal{D}_n}f$ and $U_\mathcal{D'}f$ are the same, except that at most $m$ of the subintervals $[x_{i - 1}, x_i]$ are subdivided in $\mathcal{D}'$.
+
+ For each interval that gets chopped up, the upper sum decreases by at most $\frac{b - a}{n}\cdot 2C$. Therefore
+ \[
+ U_{\mathcal{D}_n}f - U_{\mathcal{D}'}f \leq \frac{b - a}{n}2C\cdot m.
+ \]
+ Pick $n$ such that $2Cm(b - a)/n < \frac{\varepsilon}{2}$. Then
+ \[
+ U_{\mathcal{D}_n} f - U_\mathcal{D}f < \frac{\varepsilon}{2}.
+ \]
+ So
+ \[
+ U_{\mathcal{D}_n}f - \int_a^b f(x)\;\d x < \varepsilon.
+ \]
+ This is true whenever $n > \frac{4C(b - a)m}{\varepsilon}$. Since we also have $U_{\mathcal{D}_n} f \geq \int_a^b f(x)\;\d x$, it follows that
+ \[
+ U_{\mathcal{D}_n}f \to \int_a^b f(x)\;\d x.
+ \]
+ The proof for lower sums is similar.
+\end{proof}
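+This lets us compute integrals as limits of sums over uniform dissections. For example, for $f(x) = x^2$ on $[0, 1]$,
+\[
+ U_{\mathcal{D}_n}f = \sum_{i = 1}^n \frac{1}{n}\left(\frac{i}{n}\right)^2 = \frac{(n + 1)(2n + 1)}{6n^2} \to \frac{1}{3},
+\]
+so $\int_0^1 x^2 \;\d x = \frac{1}{3}$.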
+
+For convenience, we define the following:
+\begin{notation}
+ If $b > a$, we define
+ \[
+ \int_b^a f(x)\;\d x = -\int_a^b f(x)\;\d x.
+ \]
+\end{notation}
+
+We now prove the fundamental theorem of calculus, which says that integration is the reverse of differentiation.
+\begin{thm}[Fundamental theorem of calculus, part 1]
+ Let $f: [a, b]\to \R$ be continuous, and for $x\in [a, b]$, define
+ \[
+ F(x) = \int_a^x f(t)\;\d t.
+ \]
+ Then $F$ is differentiable and $F'(x) = f(x)$ for every $x$.
+\end{thm}
+
+\begin{proof}
+ \[
+ \frac{F(x + h) - F(x)}{h} = \frac{1}{h}\int_x^{x + h}f(t)\;\d t.
+ \]
+ Let $\varepsilon > 0$. Since $f$ is continuous at $x$, there exists $\delta > 0$ such that $|y - x| < \delta$ implies $|f(y) - f(x)| < \varepsilon$.
+
+ If $|h| < \delta$, then
+ \begin{align*}
+ \left|\frac{1}{h}\int_x^{x + h}f(t) \;\d t - f(x)\right| &= \left|\frac{1}{h}\int_x^{x + h}(f(t) - f(x))\;\d t\right|\\
+ &\leq \frac{1}{|h|}\left|\int_x^{x + h}|f(t) - f(x)|\;\d t\right|\\
+ &\leq \frac{\varepsilon|h|}{|h|}\\
+ &= \varepsilon.\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{cor}
+ If $f$ is continuously differentiable on $[a, b]$, then
+ \[
+ \int_a^b f'(t)\;\d t = f(b) - f(a).
+ \]
+\end{cor}
+
+\begin{proof}
+ Let
+ \[
+ g(x) = \int_a^x f'(t)\;\d t.
+ \]
+ Then
+ \[
+ g'(x) = f'(x) = \frac{\d }{\d x}(f(x) - f(a)).
+ \]
+ Since $g'(x) - f'(x) = 0$, $g(x) - f(x)$ must be a constant function by the mean value theorem. We also know that
+ \[
+ g(a) = 0 = f(a) - f(a)
+ \]
+ So we must have $g(x) = f(x) - f(a)$ for every $x$, and in particular, for $x = b$.
+\end{proof}
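+For example, taking $f(x) = \sin x$, which is continuously differentiable, the corollary gives
+\[
+ \int_0^{\pi/2} \cos t\;\d t = \sin \frac{\pi}{2} - \sin 0 = 1.
+\]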
+
+\begin{thm}[Fundamental theorem of calculus, part 2]
+ Let $f: [a, b] \to \R$ be a differentiable function, and suppose that $f'$ is integrable. Then
+ \[
+ \int_a^b f'(t)\;\d t = f(b) - f(a).
+ \]
+\end{thm}
+Note that this is a stronger result than the corollary above, since it does not require that $f'$ is continuous.
+
+\begin{proof}
+ Let $\mathcal{D}$ be a dissection $x_0 < x_1 < \cdots < x_n$. We want to make use of this dissection. So write
+ \[
+ f(b) - f(a) = \sum_{i = 1}^n (f(x_i) - f(x_{i - 1})).
+ \]
+ For each $i$, there exists $u_i\in (x_{i - 1}, x_i)$ such that $f(x_i) - f(x_{i - 1}) = (x_i - x_{i - 1})f'(u_i)$ by the mean value theorem. So
+ \[
+ f(b) - f(a) = \sum_{i = 1}^n (x_i - x_{i - 1})f'(u_i).
+ \]
+ We know that $f'(u_i)$ lies between $\inf\limits_{x\in[x_{i - 1}, x_i]}f'(x)$ and $\sup\limits_{x\in[x_{i - 1}, x_i]}f'(x)$ by definition. Therefore
+ \[
+ L_\mathcal{D} f' \leq f(b) - f(a) \leq U_\mathcal{D} f'.
+ \]
+ Since $f'$ is integrable and $\mathcal{D}$ was arbitrary, $L_\mathcal{D}f'$ and $U_\mathcal{D}f'$ can both get arbitrarily close to $\int_a^b f'(t)\;\d t$. So
+ \[
+ f(b) - f(a) = \int_a^b f'(t)\;\d t.\qedhere
+ \]
+\end{proof}
+Note that the condition that $f'$ is integrable is essential. It is possible to find a differentiable function whose derivative is not integrable! You will be asked to find one in the example sheet.
+
+Using the fundamental theorem of calculus, we can easily prove integration by parts:
+\begin{thm}[Integration by parts]
+ Let $f, g:[a, b]\to \R$ be differentiable with $f'$ and $g'$ integrable, so that everything below exists. Then
+ \[
+ \int_a^b f(x)g'(x)\;\d x = f(b)g(b) - f(a)g(a) - \int_a^b f'(x)g(x)\;\d x.
+ \]
+\end{thm}
+
+\begin{proof}
+ By the fundamental theorem of calculus,
+ \[
+ \int_a^b (f(x)g'(x) + f'(x)g(x))\;\d x = \int_a^b(fg)'(x)\;\d x = f(b)g(b) - f(a)g(a).
+ \]
+ The result follows after rearrangement.
+\end{proof}
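+For example, taking $f(x) = x$ and $g(x) = e^x$,
+\[
+ \int_0^1 x e^x\;\d x = \left[x e^x\right]_0^1 - \int_0^1 e^x\;\d x = e - (e - 1) = 1.
+\]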
+
+Recall that when we first had Taylor's theorem, we said it had the Lagrange form of the remainder. There are many other forms of the remainder term. Here we will look at the integral form:
+\begin{thm}[Taylor's theorem with the integral form of the remainder]
+ Let $f$ be $n + 1$ times differentiable on $[a, b]$ with $f^{(n + 1)}$ continuous. Then
+ \begin{align*}
+ f(b) &= f(a) + (b - a)f'(a) + \frac{(b - a)^2}{2!}f^{(2)}(a) + \cdots \\
+ &+ \frac{(b - a)^n}{n!}f^{(n)}(a) + \int_a^b \frac{(b - t)^n}{n!}f^{(n + 1)}(t)\;\d t.
+ \end{align*}
+\end{thm}
+\begin{proof}
+ Induction on $n$.
+
+ When $n = 0$, the theorem says
+ \[
+ f(b) - f(a) = \int_a^b f'(t)\;\d t.
+ \]
+ which is true by the fundamental theorem of calculus.
+
+ Now observe that
+ \begin{align*}
+ \int_a^b \frac{(b - t)^n}{n!}f^{(n + 1)}(t)\;\d t ={}& \left[\frac{-(b - t)^{n + 1}}{(n + 1)!}f^{(n + 1)}(t)\right]_a^b\\
+ &+ \int_a^b \frac{(b - t)^{n + 1}}{(n + 1)!}f^{(n + 2)}(t)\;\d t \\
+ ={}& \frac{(b - a)^{n + 1}}{(n + 1)!} f^{(n + 1)}(a) + \int_a^b \frac{(b - t)^{n + 1}}{(n + 1)!}f^{(n + 2)}(t)\;\d t.
+ \end{align*}
+ So the result follows by induction.
+\end{proof}
+Note that the form of the integral remainder is rather weird and unexpected. How could we have come up with it? We might start with the fundamental theorem of calculus and integrate by parts. The first attempt would be to integrate $1$ to $t$ and differentiate $f'(t)$ to $f^{(2)}(t)$. So we have
+\begin{align*}
+ f(b) &= f(a) + \int_a^b f'(t)\;\d t\\
+ &= f(a) + [tf'(t)]_a^b - \int_a^b tf^{(2)}(t)\;\d t\\
+ &= f(a) + bf'(b) - af'(a) - \int_a^b tf^{(2)}(t)\;\d t\\
+ \intertext{We want something in the form $(b - a)f'(a)$, so we take that out and see what we are left with.}
+ &= f(a) + (b - a)f'(a) + b(f'(b) - f'(a)) - \int_a^b tf^{(2)}(t)\;\d t\\
+ \intertext{Then we note that $f'(b) - f'(a) = \int_a^b f^{(2)}(t)\;\d t$. So we have}
+ &= f(a) + (b - a)f'(a) + \int_a^b (b - t)f^{(2)}(t)\;\d t.
+\end{align*}
+Then we can see that the right thing to integrate is $(b - t)$ and continue to obtain the result.
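+As a quick check, take $f(x) = e^x$, $a = 0$ and $n = 1$. The theorem asserts that
+\[
+ e^b = 1 + b + \int_0^b (b - t)e^t\;\d t,
+\]
+and indeed the integral evaluates to $\left[(b - t)e^t + e^t\right]_0^b = e^b - 1 - b$.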
+
+\begin{thm}[Integration by substitution]
+ Let $f: [a, b] \to \R$ be continuous. Let $g: [u, v] \to \R$ be continuously differentiable, and suppose that $g(u) = a, g(v) = b$, and $f$ is defined everywhere on $g([u, v])$ (and still continuous). Then
+ \[
+ \int_a^b f(x)\;\d x = \int_u^v f(g(t))g'(t)\;\d t.
+ \]
+\end{thm}
+
+\begin{proof}
+ By the fundamental theorem of calculus, $f$ has an anti-derivative $F$ defined on $g([u, v])$. Then
+ \begin{align*}
+ \int_u^v f(g(t))g'(t) \;\d t &= \int_u^v F'(g(t))g'(t)\;\d t \\
+ &= \int_u^v (F\circ g)'(t)\;\d t \\
+ &= F\circ g(v) - F\circ g(u)\\
+ &= F(b) - F(a)\\
+ &= \int_a^b f(x)\;\d x.\qedhere
+ \end{align*}
+\end{proof}
+We can think of ``integration by parts'' as what you get by integrating the product rule, and ``integration by substitution'' as what you get by integrating the chain rule.
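+For example, with $g(t) = t^2$ on $[0, 1]$ and $f(x) = e^x$, the theorem gives
+\[
+ \int_0^1 e^x\;\d x = \int_0^1 e^{t^2}\cdot 2t\;\d t,
+\]
+and both sides equal $e - 1$.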
+
+\subsection{Improper integrals}
+It is sometimes sensible to talk about integrals of unbounded functions or integrating to infinity. But we have to be careful and write things down nicely.
+
+\begin{defi}[Improper integral]
+ Suppose that we have a function $f: [a, b] \to \R$ such that, for every $\varepsilon > 0$, $f$ is integrable on $[a + \varepsilon, b]$ and $\lim\limits_{\varepsilon \to 0}\int_{a + \varepsilon}^b f(x)\;\d x$ exists. Then we define the improper integral
+ \[
+ \int_a^bf(x)\;\d x \text{ to be } \lim_{\varepsilon \to 0}\int_{a + \varepsilon}^b f(x)\;\d x.
+ \]
+ even if the Riemann integral does not exist.
+
+ We can do the same for $[a, b - \varepsilon]$, or for integrals to infinity:
+ \[
+ \int_a^\infty f(x)\;\d x = \lim_{b \to \infty} \int_a^b f(x)\;\d x.
+ \]
+ when it exists.
+\end{defi}
+\begin{eg}
+ \[
+ \int_\varepsilon^1 x^{-1/2}\;\d x = \left[2x^{1/2}\right]^1_\varepsilon = 2 - 2\varepsilon^{1/2} \to 2.
+ \]
+ So
+ \[
+ \int_0^1 x^{-1/2}\;\d x = 2,
+ \]
+ even though $x^{-1/2}$ is unbounded on $[0, 1]$.
+
+ Note that officially we are required to make $f(x) = x^{-1/2}$ a function with domain $[0, 1]$. So we can assign $f(0) = \pi$, or any number, since it doesn't matter.
+\end{eg}
+
+\begin{eg}
+ \[
+ \int_1^x \frac{1}{t^2}\;\d t = \left[-\frac{1}{t}\right]_1^x = 1 - \frac{1}{x} \to 1\text{ as }x\to \infty
+ \]
+ by the fundamental theorem of calculus. So
+ \[
+ \int_1^{\infty}\frac{1}{x^2}\;\d x = 1.
+ \]
+\end{eg}
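+Note that the limit genuinely can fail to exist. For example,
+\[
+ \int_1^x \frac{1}{t}\;\d t = \log x \to \infty\text{ as }x \to \infty,
+\]
+so $\int_1^\infty \frac{1}{x}\;\d x$ does not exist.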
+
+Finally, we can prove the integral test, whose proof we omitted when we first began.
+\begin{thm}[Integral test]
+ Let $f: [1, \infty) \to \R$ be a decreasing non-negative function. Then $\sum_{n = 1}^\infty f(n)$ converges iff $\int_1^\infty f(x)\;\d x < \infty$.
+\end{thm}
+
+\begin{proof}
+ We have
+ \[
+ \int_n^{n + 1}f(x)\;\d x \leq f(n) \leq \int_{n -1}^n f(x)\;\d x,
+ \]
+ since $f$ is decreasing (the right hand inequality is valid only for $n\geq 2$). It follows that
+ \[
+ \int_1^{N + 1}f(x)\;\d x \leq \sum_{n = 1}^N f(n) \leq \int_1^N f(x)\;\d x + f(1).
+ \]
+ So if the integral exists, then the partial sums $\sum_{n = 1}^N f(n)$ are increasing in $N$ and bounded above by $\int_1^\infty f(x)\;\d x + f(1)$, so the series converges.
+
+ If the integral does not exist, then $\int_1^N f(x)\;\d x$ is unbounded. Then $\sum_{n = 1}^N f(n)$ is unbounded, hence does not converge.
+\end{proof}
+
+\begin{eg}
+ Since $\int_1^\infty \frac{1}{x^2}\;\d x = 1 < \infty$, it follows that $\sum_{n = 1}^\infty \frac{1}{n^2}$ converges.
+\end{eg}
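+\begin{eg}
+ On the other hand, $\int_1^N \frac{1}{x}\;\d x = \log N$ is unbounded, so the harmonic series $\sum_{n = 1}^\infty \frac{1}{n}$ diverges.
+\end{eg}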
+\end{document}
diff --git a/books/cam/IA_L/dynamics_and_relativity.tex b/books/cam/IA_L/dynamics_and_relativity.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ddd990849fbb86a25460f446b861b494cd206ff5
--- /dev/null
+++ b/books/cam/IA_L/dynamics_and_relativity.tex
@@ -0,0 +1,3744 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IA}
+\def\nterm {Lent}
+\def\nyear {2015}
+\def\nlecturer {G.\ I.\ Ogilvie}
+\def\ncourse {Dynamics and Relativity}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Familiarity with the topics covered in the non-examinable Mechanics course is assumed.}
+\vspace{10pt}
+
+\noindent\textbf{Basic concepts}\\
+Space and time, frames of reference, Galilean transformations. Newton's laws. Dimensional analysis. Examples of forces, including gravity, friction and Lorentz.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{Newtonian dynamics of a single particle}\\
+Equation of motion in Cartesian and plane polar coordinates. Work, conservative forces and potential energy, motion and the shape of the potential energy function; stable equilibria and small oscillations; effect of damping.
+
+\vspace{5pt}
+\noindent Angular velocity, angular momentum, torque.
+
+\vspace{5pt}
+\noindent Orbits: the $u(\theta)$ equation; escape velocity; Kepler's laws; stability of orbits; motion in a repulsive potential (Rutherford scattering). Rotating frames: centrifugal and coriolis forces. *Brief discussion of Foucault pendulum.*\hspace*{\fill} [8]
+
+\vspace{10pt}
+\noindent\textbf{Newtonian dynamics of systems of particles}\\
+Momentum, angular momentum, energy. Motion relative to the centre of mass; the two body problem. Variable mass problems; the rocket equation.\hspace*{\fill} [2]
+
+\vspace{10pt}
+\noindent\textbf{Rigid bodies}\\
+Moments of inertia, angular momentum and energy of a rigid body. Parallel axes theorem. Simple examples of motion involving both rotation and translation (e.g.\ rolling).\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Special relativity}\\
+The principle of relativity. Relativity and simultaneity. The invariant interval. Lorentz transformations in $(1 + 1)$-dimensional spacetime. Time dilation and length contraction. The Minkowski metric for $(1 + 1)$-dimensional spacetime. Lorentz transformations in $(3 + 1)$ dimensions. 4-vectors and Lorentz invariants. Proper time. 4-velocity and 4-momentum. Conservation of 4-momentum in particle decay. Collisions. The Newtonian limit.\hspace*{\fill} [7]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+You've been lied to. You thought you applied for mathematics. And here you have a course on physics. No, this course is not created for students taking the ``Maths with Physics'' option. They don't have to take this course (don't ask why).
+
+Ever since Newton invented calculus, mathematics has become more and more important in physics. Physicists seek to describe the universe in a few equations, and derive everyday (physical) phenomena as mathematical consequences of these equations.
+
+In this course, we will start with Newton's laws of motion and use them to derive a lot of physical phenomena, including planetary orbits, centrifugal forces\footnote{Yes, they exist.} and the motion of rotating bodies.
+
+The important thing to note is that we can ``prove'' all these phenomena just under the assumption that Newton's laws are correct (plus the formulas for, say, the strength of the gravitational force). We are just doing mathematics here. We don't need to do any experiments to obtain the results (of course, we need experiments to verify that Newton's laws are indeed the equations that describe this universe).
+
+However, it turns out that Newton was wrong. While his theories were accurate for most everyday phenomena, they weren't able to adequately describe electromagnetism. This led to Einstein discovering \emph{special relativity}. Special relativity is also required to describe motion that is very fast. We will have a brief introduction to special relativity at the end of the course.
+
+\section{Newtonian dynamics of particles}
+Newton's equations describe the motion of a \emph{(point) particle}.
+\begin{defi}[Particle]
+ A \emph{particle} is an object of insignificant size, hence it can be regarded as a point. It has a \emph{mass} $m > 0$, and an \emph{electric charge} $q$.
+
+ Its position at time $t$ is described by its \emph{position vector}, $\mathbf{r}(t)$ or $\mathbf{x}(t)$ with respect to an origin $O$.
+\end{defi}
+Depending on context, different things can be considered as particles. We could consider an electron to be a point particle, even though it is more accurately described by the laws of quantum mechanics than those of Newtonian mechanics. If we are studying the orbit of planets, we can consider the Sun and the Earth to be particles.
+
+An important property of a particle is that it has no \emph{internal structure}. It can be completely described by its position, momentum, mass and electric charge. For example, if we model the Earth as a particle, we will have to ignore its own rotation, temperature etc.
+
+If we want to actually describe a rotating object, we usually consider it as a collection of point particles connected together, and apply Newton's law to the individual particles.
+
+As mentioned above, the position of a particle is described by a position \emph{vector}. This requires us to pick an origin of the coordinate system, as well as an orientation of the axes. Each choice is known as a frame of reference.
+
+\begin{defi}[Frame of reference]
+ A \emph{frame of reference} is a choice of coordinate axes for $\mathbf{r}$.
+\end{defi}
+We don't impose many restrictions on the choice of coordinate axes. They can be fixed, moving, rotating, or even accelerating.
+
+Using the position vector $\mathbf{r}$, we can define various interesting quantities which describe the particle.
+\begin{defi}[Velocity]
+ The \emph{velocity} of the particle is
+ \[
+ \mathbf{v} = \dot{\mathbf{r}} = \frac{\d \mathbf{r}}{\d t}.
+ \]
+\end{defi}
+
+\begin{defi}[Acceleration]
+ The \emph{acceleration} of the particle is
+ \[
+ \mathbf{a} = \dot{\mathbf{v}} = \ddot{\mathbf{r}} = \frac{\d^2 \mathbf{r}}{\d t^2}.
+ \]
+\end{defi}
+
+\begin{defi}[Momentum]
+ The \emph{momentum} of a particle is
+ \[
+ \mathbf{p} = m\mathbf{v} = m\dot{\mathbf{r}}.
+ \]
+ $m$ is the \emph{inertial mass} of the particle, and measures its reluctance to accelerate, as described by Newton's second law.
+\end{defi}
+
+\subsection{Newton's laws of motion}
+We will first state Newton's three laws of motion, and then discuss them individually.
+\begin{law}[Newton's First Law of Motion]
+ A body remains at rest, or moves uniformly in a straight line, unless acted on by a force. (This is in fact Galileo's Law of Inertia)
+\end{law}
+
+\begin{law}[Newton's Second Law of Motion]
+ The rate of change of momentum of a body is equal to the force acting on it (in both magnitude and direction).
+\end{law}
+
+\begin{law}[Newton's Third Law of Motion]
+ To every action there is an equal and opposite reaction: the forces of two bodies on each other are equal and in opposite directions.
+\end{law}
+The first law might seem redundant given the second if interpreted literally. According to the second law, if there is no force, then the momentum doesn't change. Hence the body remains at rest or moves uniformly in a straight line.
+
+So why do we have the first law? Historically, it might be there to explicitly counter Aristotle's idea that objects naturally slow down to rest. However, some (modern) physicists give it an alternative interpretation:
+
+Note that the first law isn't always true. Take yourself as a frame of reference. When you move around your room, things will seem like they are moving around (relative to you). When you sit down, they stop moving. However, in reality, they've always been sitting there still. On second thought, this is because you, the frame of reference, are accelerating, not the objects. The first law only holds in frames that are themselves not accelerating. We call these \emph{inertial frames}.
+\begin{defi}[Inertial frames]
+ \emph{Inertial frames} are frames of reference in which the frames themselves are not accelerating. Newton's Laws only hold in inertial frames.
+\end{defi}
+Then we can take the first law to assert that inertial frames exist. Even though the Earth itself is rotating and orbiting the Sun, for most purposes, any fixed place on the Earth counts as an inertial frame.
+
+\subsection{Galilean transformations}
+The goal of this section is to investigate inertial frames. We know that inertial frames are not unique. Given an inertial frame, what other inertial frames can we obtain?
+
+First of all, we can rotate our axes or move our origin. In particular, we can perform the following operations:
+\begin{itemize}
+ \item Translations of space:
+ \[
+ \mathbf{r}' = \mathbf{r} - \mathbf{r}_0
+ \]
+ \item Translations of time:
+ \[
+ t' = t - t_0
+ \]
+ \item Rotation (and reflection):
+ \[
+ \mathbf{r}' = R\mathbf{r}
+ \]
+ with $R\in \Or(3)$.
+\end{itemize}
+These are not very interesting. They are simply symmetries of space itself.
+
+The last possible transformation is \emph{uniform motion}. Suppose that $S$ is an inertial frame. Then any other frame $S'$ in uniform motion relative to $S$ is also inertial:
+\begin{center}
+ \begin{tikzpicture}
+ \node at (0, 1.5) [left] {$S$};
+ \node at (4, 1.5) [left] {$S'$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$y$};
+ \draw [->] (0, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (4, 0) -- (4, 3) node [above] {$y'$};
+ \draw [->] (4, 0) -- (7, 0) node [right] {$x'$};
+ \draw [->] (4, 1.5) -- (4.5, 1.5) node [right] {$\mathbf{v}$};
+ \end{tikzpicture}
+\end{center}
+Assuming the frames coincide at $t = 0$, then
+\begin{align*}
+ x' &= x - vt\\
+ y' &= y\\
+ z' &= z\\
+ t' &= t
+\end{align*}
+Generally, the position vector transforms as
+\[
+ \mathbf{r}' = \mathbf{r} - \mathbf{v}t,
+\]
+where $\mathbf{v}$ is the (constant) velocity of $S'$ relative to $S$. The velocity and acceleration transform as follows:
+\begin{align*}
+ \dot{\mathbf{r}}' &= \dot{\mathbf{r}} - \mathbf{v}\\
+ \ddot{\mathbf{r}}' &= \ddot{\mathbf{r}}
+\end{align*}
+\begin{defi}[Galilean boost]
+ A \emph{Galilean boost} is a change in frame of reference by
+ \begin{align*}
+ \mathbf{r}' &= \mathbf{r} - \mathbf{v}t\\
+ t' &= t
+ \end{align*}
+ for a fixed, constant $\mathbf{v}$.
+\end{defi}
+
+All these transformations together generate the \emph{Galilean group}, which describes the symmetry of Newtonian equations of motion.
+
+\begin{law}[Galilean relativity]
+ The \emph{principle of relativity} asserts that the laws of physics are the same in inertial frames.
+\end{law}
+
+This implies that physical processes work the same
+\begin{itemize}
+ \item at every point of space
+ \item at all times
+ \item in whichever direction we face
+ \item at whatever constant velocity we travel.
+\end{itemize}
+
+In other words, the equations of Newtonian physics must have \emph{Galilean invariance}.
+
+Since the laws of physics are the same regardless of your velocity, velocity must be a \emph{relative concept}, and there is no such thing as an ``absolute velocity'' that all inertial frames agree on.
+
+However, all inertial frames must agree on whether you are accelerating or not (even though they need not agree on the direction of acceleration since you can rotate your frame). So acceleration is an \emph{absolute} concept.
+
+\subsection{Newton's Second Law}
+Newton's second law is often written in the form of an equation.
+\begin{law}
+ The \emph{equation of motion} for a particle subject to a force $\mathbf{F}$ is
+ \[
+ \frac{\d \mathbf{p}}{\d t} = \mathbf{F},
+ \]
+ where $\mathbf{p} = m\mathbf{v} = m\dot{\mathbf{r}}$ is the (linear) momentum of the particle. We say $m$ is the (inertial) mass of the particle, which is a measure of its reluctance to accelerate.
+\end{law}
+
+Usually, $m$ is constant. Then
+\[
+ \mathbf{F} = m\mathbf{a} = m\ddot{\mathbf{r}}.
+\]
+Usually, $\mathbf{F}$ is specified as a function of $\mathbf{r}, \dot{\mathbf{r}}$ and $t$. Then we have a second-order ordinary differential equation for $\mathbf{r}$.
+
+To determine the solution, we need to specify the initial values of $\mathbf{r}$ and $\dot{\mathbf{r}}$, i.e.\ the initial position and velocity. The trajectory of the particle is then uniquely determined for all future (and past) times.
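+For example, for a particle of constant mass subject to a constant force $\mathbf{F}$ (such as uniform gravity), integrating $m\ddot{\mathbf{r}} = \mathbf{F}$ twice gives
+\[
+ \mathbf{r}(t) = \mathbf{r}(0) + \dot{\mathbf{r}}(0)t + \frac{\mathbf{F}}{2m}t^2,
+\]
+so the initial position and velocity indeed determine the trajectory completely.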
+
+\section{Dimensional analysis}
+When considering physical theories, it is important to be aware that physical quantities are not pure numbers. Each physical quantity has a \emph{dimension}. Roughly speaking, dimensions are what units represent, such as length, mass and time. In any equation relating physical quantities, the dimensions must be consistent, i.e.\ the dimensions on both sides of the equation must be equal.
+
+For many problems in dynamics, the three basic dimensions are sufficient:
+\begin{itemize}
+ \item length, $L$
+ \item mass, $M$
+ \item time, $T$
+\end{itemize}
+
+The dimensions of a physical quantity $X$, denoted by $[X]$ are expressible uniquely in terms of $L$, $M$ and $T$. For example,
+\begin{itemize}
+ \item $[\text{area}] = L^2$
+ \item $[\text{density}] = L^{-3} M$
+ \item $[\text{velocity}] = LT^{-1}$
+ \item $[\text{acceleration}] = LT^{-2}$
+ \item $[\text{force}] = LMT^{-2}$ since the dimensions must satisfy the equation $F = ma$.
+ \item $[\text{energy}] = L^2MT^{-2}$, e.g.\ consider $E = mv^2/2$.
+\end{itemize}
+
+Physical constants also have dimensions, e.g.\ $[G] = L^3M^{-1}T^{-2}$ by $F = GMm/r^2$.
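+To check this, rearrange $F = GMm/r^2$ as $G = Fr^2/(Mm)$, so
+\[
+ [G] = (LMT^{-2})(L^2)(M^{-2}) = L^3M^{-1}T^{-2}.
+\]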
+
+The only allowed operations on quantities with dimensions are sums and products (and subtraction and division), and if we sum two terms, they must have the same dimension. For example, it does not make sense to add a length with an area. More complicated functions of dimensional quantities are not allowed, e.g.\ $e^{x}$ again makes no sense if $x$ has a dimension, since
+\[
+ e^x = 1 + x + \frac{1}{2}x^2 + \cdots
+\]
+and if $x$ had a dimension, we would be summing up terms of different dimensions.
+\subsection{Units}
+People use \emph{units} to represent dimensional quantities. A unit is a convenient standard physical quantity, e.g.\ a fixed amount of mass. In the SI system, there are base units corresponding to the basic dimensions. The three we need are
+\begin{itemize}
+ \item meter (m) for length
+ \item kilogram (kg) for mass
+ \item second (s) for time
+\end{itemize}
+A physical quantity can be expressed as a pure number times a unit with the correct dimensions, e.g.
+\[
+ G = \SI{6.67e-11}{\meter\cubed\per\kilogram\per\second\squared}.
+\]
+It is important to realize that SI units are chosen arbitrarily for historical reasons only. The equations of physics must work in any consistent system of units. This is captured by the fact that physical equations must be dimensionally consistent.
+
+\subsection{Scaling}
+We've had so many restrictions on dimensional quantities --- equations must be dimensionally consistent, and we cannot sum terms of different dimensions. However, this is \emph{not} a hindrance when developing new theories. In fact, it is a very useful tool. First of all, it allows us to immediately rule out equations that do not make sense dimensionally. More importantly, it allows us to guess the form of the equation we want.
+
+Suppose we believe that a physical quantity $Y$ depends on $3$ other physical quantities $X_1, X_2, X_3$, i.e.\ $Y = Y(X_1, X_2, X_3)$. Let their dimensions be as follows:
+\begin{itemize}
+ \item $[Y] = L^aM^bT^c$
+ \item $[X_i] = L^{a_i}M^{b_i}T^{c_i}$
+\end{itemize}
+Suppose further that we know that the relationship is a power law, i.e.
+\[
+ Y = CX_1^{p_1}X_2^{p_2}X_3^{p_3},
+\]
+where $C$ is a dimensionless constant (i.e.\ a pure number). Since the dimensions must work out, we know that
+\begin{align*}
+ a &= p_1a_1 + p_2a_2 + p_3a_3\\
+ b &= p_1b_1 + p_2b_2 + p_3b_3\\
+ c &= p_1c_1 + p_2c_2 + p_3c_3
+\end{align*}
+for which there is a unique solution provided that the dimensions of $X_1, X_2$ and $X_3$ are independent. So just by using dimensional analysis, we can figure out the relation between the quantities up to a constant. The constant can then be found experimentally, which is much easier than finding the form of the expression experimentally.
+
+However, in reality, the dimensions are not always independent. For example, we might have two length quantities. More importantly, the situation might involve more than 3 variables, and we do not have a unique solution.
+
+First consider a simple case --- if $X_1^2 X_2$ is dimensionless, then the relation between $Y$ and $X_i$ can involve more complicated terms such as $\exp (X_1^2 X_2)$, since the argument of $\exp$ is now dimensionless.
+
+In general, suppose we have many terms, and the dimensions of $X_i$ are not independent. We order the quantities so that the independent terms $[X_1], [X_2], [X_3]$ are at the front. For each of the remaining variables, form a dimensionless quantity $\lambda_i = X_iX_1^{q_1}X_2^{q_2}X_3^{q_3}$. Then the relationship must be of the form
+\[
+ Y = f(\lambda_4, \lambda_5, \cdots) X_1^{p_1}X_2^{p_2}X_3^{p_3}.
+\]
+where $f$ is an arbitrary function of the dimensionless variables.
+
+Formally, this result is described by the \emph{Buckingham Pi theorem}, but we will not go into details.
+
+\begin{eg}[Simple pendulum]\leavevmode
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-1, 0) -- (1, 0);
+ \draw (0, 0) -- (1, -2) node [right] {$m$} node [circ] {} node [pos = 0.5, anchor = south west] {$\ell$};
+ \draw [dashed] (0, 0) -- (0, -2.5);
+ \draw [->] (0, -2) -- (1, -2) node [below, pos = 0.5] {$d$};
+ \draw [->] (1, -2) -- (0, -2);
+ \end{tikzpicture}
+ \end{center}
+ We want to find the period $P$. We know that $P$ could depend on
+ \begin{itemize}
+ \item mass $m$: $[m] = M$
+ \item length $\ell$: $[\ell] = L$
+ \item gravity $g$: $[g] = LT^{-2}$
+ \item initial displacement $d$: $[d] = L$
+ \end{itemize}
+ and of course $[P] = T$.
+
+ We observe that $m, \ell, g$ have independent dimensions, and with the fourth, we can form the dimensionless group $d/\ell$. So the relationship must be of the form
+ \[
+ P = f\left(\frac{d}{\ell}\right) m^{p_1}\ell^{p_2}g^{p_3},
+ \]
+ where $f$ is a dimensionless function. For the dimensions to balance,
+ \[
+ T = M^{p_1}L^{p_2}L^{p_3}T^{-2p_3}.
+ \]
+ So $p_1 = 0, p_2 = -p_3 = 1/2$. Then
+ \[
+ P = f\left(\frac{d}{\ell}\right) \sqrt{\frac{\ell}{g}}.
+ \]
+ While we cannot find the exact formula, using dimensional analysis, we know that if both $\ell$ and $d$ are quadrupled, then $P$ will double.
+\end{eg}
+
+\section{Forces}
+\emph{Force} is a central concept in Newtonian mechanics. As described by Newton's laws of motion, forces are what cause objects to accelerate, according to the formula $\mathbf{F} = m\mathbf{a}$. To completely specify the dynamics of a system, one only needs to know what forces act on what bodies.
+
+However, unlike in Star Wars, the force is given a secondary importance in modern treatments of mechanics. Instead, the \emph{potential} is what is considered to be fundamental, with force being a derived concept. In quantum mechanics, we cannot even meaningfully define forces.
+
+Yet, there are certain systems that must be described with forces instead of potentials, the most important of which is a system that involves friction of some sort.
+
+\subsection{Force and potential energy in one dimension}
+To define the potential, consider a particle of mass $m$ moving in a straight line with position $x(t)$. Suppose $F = F(x)$, i.e.\ it depends on position only. We define the potential energy as follows:
+\begin{defi}[Potential energy]
+ Given a force field $F = F(x)$, we define the \emph{potential energy} to be a function $V(x)$ such that
+ \[
+ F = -\frac{\d V}{\d x},
+ \]
+ or equivalently
+ \[
+ V = -\int F \;\d x.
+ \]
+ $V$ is defined only up to an additive constant. We usually pick the constant of integration such that the potential tends to $0$ at infinity.
+\end{defi}
+Using the potential, we can write the equation of motion as
+\[
+ m\ddot{x} = -\frac{\d V}{\d x}.
+\]
+There is a reason why we call this the potential \emph{energy}. We usually consider it to be an energy of some sort. In particular, we define the total energy of a system to be
+\begin{defi}[Total energy]
+ The \emph{total energy} of a system is given by
+ \[
+ E = T + V,
+ \]
+ where $V$ is the potential energy and $T = \frac{1}{2}m\dot{x}^2$ is the kinetic energy.
+\end{defi}
+If the force of a system is derived from a potential, we can show that energy is conserved.
+\begin{prop}
+ Suppose the motion of a particle satisfies
+ \[
+ m\ddot{x} = -\frac{\d V}{\d x}.
+ \]
+ Then the total energy
+ \[
+ E = T + V = \frac{1}{2} m\dot{x}^2 + V(x)
+ \]
+ is conserved, i.e.\ $\dot{E} = 0$.
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \frac{\d E}{\d t} &= m\dot{x}\ddot{x} + \frac{\d V}{\d x}\dot{x}\\
+ &= \dot{x}\left(m\ddot{x} + \frac{\d V}{\d x}\right)\\
+ &= 0\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ Consider the harmonic oscillator, whose potential is given by
+ \[
+ V = \frac{1}{2} kx^2.
+ \]
+ Then the equation of motion is
+ \[
+ m\ddot{x} = -kx.
+ \]
+ This is the case of, say, Hooke's Law for a spring.
+
+ The general solution of this is
+ \[
+ x(t) = A\cos (\omega t) + B\sin (\omega t)
+ \]
+ with $\omega = \sqrt{k/m}$.
+
+ $A$ and $B$ are arbitrary constants, and are related to the initial position and velocity by $x(0) = A, \dot{x}(0) = \omega B$.
+\end{eg}
+For a general potential energy $V(x)$, conservation of energy allows us to solve the problem formally:
+\[
+ E = \frac{1}{2}m\dot{x}^2 + V(x)
+\]
+Since $E$ is a constant, from this equation, we have
+\begin{align*}
+ \frac{\d x}{\d t} &= \pm \sqrt{\frac{2}{m}(E - V(x))}\\
+ t - t_0 &= \pm \int \frac{\d x}{\sqrt{\frac{2}{m}(E - V(x))}}.
+\end{align*}
+To find $x(t)$, we need to do the integral and then solve for $x$. This is usually not possible by analytical methods, but we can approximate the solution by numerical methods.
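As a sanity check on this quadrature, here is a short sketch (ours, not in the notes) for the harmonic oscillator $V(x) = \frac{1}{2}kx^2$ with $m = k = 1$: a particle with $E = \frac{1}{2}$ starting at $x = 0$ and moving right has $x(t) = \sin t$, so the time to reach $x = \frac{1}{2}$ should be $\arcsin \frac{1}{2} = \pi/6$.

```python
import math

m, k = 1.0, 1.0
E = 0.5                                   # energy chosen so that x_max = 1
V = lambda x: 0.5 * k * x * x

def time_to_reach(x_target, n=2000):      # n must be even (Simpson's rule)
    """t - t_0 = integral_0^{x_target} dx / sqrt((2/m)(E - V(x)))."""
    f = lambda x: 1.0 / math.sqrt((2.0 / m) * (E - V(x)))
    h = x_target / n
    s = f(0.0) + f(x_target)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

t_num = time_to_reach(0.5)
t_exact = math.asin(0.5) * math.sqrt(m / k)   # from x(t) = x_max sin(omega t)
```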
+
+\subsection{Motion in a potential}
+Given an arbitrary potential $V(x)$, it is often difficult to completely solve the equations of motion. However, just by looking at the graph of the potential, we can usually get a qualitative understanding of the dynamics.
+
+\begin{eg}
+ Consider $V(x) = m(x^3 - 3x)$. Note that this can be dimensionally consistent even though we add up $x^3$ and $-3x$, if we declare ``3'' to have dimension $L^2$.
+
+ We plot this as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [domain=-2.2:2.2, samples=50] plot (\x, {0.5*(\x*\x*\x - 3*\x)});
+ \draw [->] (0, -2) -- (0, 2) node [above] {$V$};
+ \node [anchor = north east] {$O$};
+ \draw (1, 0) -- (1, -0.1) node [below] {$1$};
+ \draw (2, 0) -- (2, -0.1) node [below] {$2$};
+ \draw (-1, 0) -- (-1, -0.1) node [below] {$-1$};
+ \draw (-2, 0) -- (-2, -0.1) node [below] {$-2$};
+ \end{tikzpicture}
+ \end{center}
+ Suppose we release the particle from rest at $x = x_0$. Then $E = V(x_0)$. We can consider what happens to the particle for different values of $x_0$.
+ \begin{itemize}
+ \item $x_0 = \pm 1$: This is an equilibrium and the particle stays there for all $t$.
+ \item $-1 < x_0 < 2$: The particle does not have enough energy to escape the well. So it oscillates back and forth in the potential well.
+ \item $x_0 < -1$: The particle falls to $x = -\infty$.
+ \item $x_0 > 2$: The particle has enough energy to overshoot the well and continues to $x = -\infty$.
+ \item $x_0 = 2$: This is a special case. Obviously, the particle goes towards $x = -1$. But how long does it take, and what happens next? Here we have $E = 2m$. We noted previously
+ \[
+ t - t_0 = -\int \frac{\d x}{\sqrt{\frac{2}{m}(E - V(x))}}.
+ \]
+ Let $x = -1 + \varepsilon(t)$. Then
+ \begin{align*}
+ \frac{2}{m}(E-V(x)) &= 4 - 2(-1 + \varepsilon)^3 + 6(-1 + \varepsilon)\\
+ &= 6\varepsilon^2 - 2\varepsilon^3.
+ \end{align*}
+ So
+ \[
+ t - t_0 = -\int_3^\varepsilon \frac{\d \varepsilon'}{\sqrt{6\varepsilon'^2 - 2\varepsilon'^3}}.
+ \]
+ We reach $x = -1$ when $\varepsilon \to 0$. But for small $\varepsilon'$, the integrand is approximately $\propto 1/\varepsilon'$, which integrates to $\ln \varepsilon \to -\infty$ as $\varepsilon \to 0$. So $\varepsilon = 0$ is attained only in the limit of infinite time, i.e.\ it takes infinitely long to reach $\varepsilon = 0$, or $x = -1$.
+ \end{itemize}
+\end{eg}
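The algebraic step above is easy to get wrong, so here is a quick numerical check (ours) that $(2/m)(E - V(x)) = 6\varepsilon^2 - 2\varepsilon^3$ for $x = -1 + \varepsilon$, taking $m = 1$ so that $E = 2$ and $V(x) = x^3 - 3x$.

```python
# With m = 1: E = 2 and V(x) = x^3 - 3x.  Substituting x = -1 + eps,
# (2/m)(E - V(x)) should simplify to 6 eps^2 - 2 eps^3.
max_err = 0.0
for eps in [0.01, 0.5, 1.0, 2.5, 3.0]:
    x = -1.0 + eps
    lhs = 2.0 * (2.0 - (x**3 - 3.0 * x))
    rhs = 6.0 * eps**2 - 2.0 * eps**3
    max_err = max(max_err, abs(lhs - rhs))
```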
+\subsubsection*{Equilibrium points}
+In reality, most of the time, particles are not flying around wildly doing crazy stuff. Instead, they stay at (or near) some stable point, and only move very little in a predictable manner. We call these points \emph{equilibrium points}.
+
+\begin{defi}[Equilibrium point]
+ A particle is in \emph{equilibrium} if it has no tendency to move away. It will stay there for all time. Since $m\ddot{x} = -V'(x)$, the equilibrium points are the stationary points of the potential energy, i.e.
+ \[
+ V'(x_0) = 0.
+ \]
+\end{defi}
+Consider motion near an equilibrium point. We assume that the motion is small and we can approximate $V$ by a second-order Taylor expansion. Then we can write $V$ as
+\[
+ V(x) \approx V(x_0) + \frac{1}{2}V''(x_0)(x - x_0)^2.
+\]
+Then the equation of motion is
+\[
+ m\ddot{x} = -V''(x_0)(x - x_0).
+\]
+If $V''(x_0) > 0$, then this is of the form of the harmonic oscillator. $V$ has a local minimum at $x_0$, and we say the equilibrium point is \emph{stable}. The particle oscillates with angular frequency
+\[
+ \omega = \sqrt{\frac{V''(x_0)}{m}}.
+\]
+If $V''(x_0) < 0$, then $V$ has a local maximum at $x_0$. In this case, the equilibrium point is unstable, and the solution to the equation is
+\[
+ x - x_0 \approx Ae^{\gamma t} + Be^{-\gamma t}
+\]
+for
+\[
+ \gamma = \sqrt{\frac{-V''(x_0)}{m}}.
+\]
+For almost all initial conditions, $A \not= 0$ and the particle will diverge from the equilibrium point, leading to a breakdown of the approximation.
+
+If $V''(x_0) = 0$, then further work is required to determine the outcome.
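This stability test can be carried out numerically for the potential of the previous example, $V(x) = m(x^3 - 3x)$ with $m = 1$. The sketch below (ours) confirms the equilibria at $x = \pm 1$ and reads off $\omega$ and $\gamma$ from finite-difference derivatives; both come out as $\sqrt{6}$.

```python
import math

def d1(V, x, h=1e-5):   # central difference for V'(x)
    return (V(x + h) - V(x - h)) / (2 * h)

def d2(V, x, h=1e-4):   # central difference for V''(x)
    return (V(x + h) - 2 * V(x) + V(x - h)) / (h * h)

V = lambda x: x**3 - 3 * x        # the potential from the example, with m = 1

# equilibria where V'(x) = 0: x = +1 (V'' = 6 > 0, stable)
#                             x = -1 (V'' = -6 < 0, unstable)
slope_p, slope_m = d1(V, 1.0), d1(V, -1.0)
omega = math.sqrt(d2(V, 1.0))     # oscillation frequency about x = +1
gamma = math.sqrt(-d2(V, -1.0))   # growth rate about x = -1
```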
+
+\begin{eg}
+ Consider the simple pendulum.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (1, -2) node [right] {$m$} node [circ] {} node [pos = 0.5, anchor = south west] {$\ell$};
+ \draw [dashed] (0, 0) -- (0, -2.5);
+ \draw [->] (0, -2) -- (1, -2) node [below, pos = 0.5] {$d$};
+ \draw [->] (1, -2) -- (0, -2);
+ \draw (0, -0.6) arc(270:296.57:0.6);
+ \node at (0.2, -0.8) {$\theta$};
+ \end{tikzpicture}
+ \end{center}
+ Suppose that the pendulum makes an angle $\theta$ with the vertical. Then the energy is
+ \[
+ E = T + V = \frac{1}{2}m\ell^2 \dot{\theta}^2 - mg\ell \cos\theta.
+ \]
+ Therefore $V\propto -\cos\theta$. We have a stable equilibrium at $\theta = 0$, and an unstable equilibrium at $\theta = \pi$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [domain=-1:1] plot ({2*\x}, {-1 * cos(180 * \x)});
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$\theta$};
+ \draw [->] (0, -1.2) -- (0, 1.5) node [above] {$V$};
+ \draw [dashed] (-2, 1) -- (-2, 0) node [below] {$-\pi$};
+ \draw [dashed] (2, 1) -- (2, 0) node [below] {$\pi$};
+ \draw [dashed] (-2, 1) -- (2, 1);
+ \node [anchor = north east] at (0, 1) {$mg\ell$};
+ \node [anchor = north east] at (0, -1) {$-mg\ell$};
+
+ \end{tikzpicture}
+ \end{center}
+ If $E > mg\ell$, then $\dot{\theta}$ never vanishes and the pendulum makes full circles.
+
+ If $-mg\ell < E < mg\ell$, then $\dot{\theta}$ vanishes at $\theta = \pm \theta_0$ for some $0 < \theta_0 < \pi$, i.e.\ $E = -mg\ell \cos\theta_0$. The pendulum oscillates back and forth, and it takes a quarter of a period to travel from $\theta = 0$ to $\theta = \theta_0$. Hence the oscillation period $P$ is given by
+ \[
+ \frac{P}{4} = \int_0^{\theta_0} \frac{\d \theta}{\sqrt{\frac{2E}{m\ell^2} + \frac{2g}{\ell }\cos\theta}}.
+ \]
+ Since we know that $E = -mg\ell \cos \theta_0$, we know that
+ \[
+ \frac{P}{4} = \sqrt{\frac{\ell}{g}}\int_0^{\theta_0} \frac{\d \theta}{\sqrt{2\cos\theta - 2\cos\theta_0}}.
+ \]
+ The integral is difficult to evaluate in general, but for small $\theta_0$, we can use $\cos\theta \approx 1 - \frac{1}{2}\theta^2$. So
+ \[
+ P \approx 4\sqrt{\frac{\ell}{g}}\int_0^{\theta_0}\frac{\d \theta}{\sqrt{\theta_0^2 - \theta^2}} = 2\pi\sqrt{\frac{\ell}{g}}
+ \]
+ and is independent of the amplitude $\theta_0$. This is of course the result for the harmonic oscillator.
+\end{eg}
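The period integral can be evaluated numerically once the endpoint singularity at $\theta = \theta_0$ is removed by the substitution $\theta = \theta_0 \sin u$. The sketch below (ours, with $\ell = g = 1$) reproduces $P \to 2\pi\sqrt{\ell/g}$ for small amplitudes and shows the period growing with $\theta_0$.

```python
import math

def period(theta0, l=1.0, g=1.0, n=20000):
    """P = 4 sqrt(l/g) * integral_0^{theta0} dtheta / sqrt(2 cos theta - 2 cos theta0),
    computed with theta = theta0 sin(u); the midpoint rule then avoids the
    (integrable) endpoint singularity entirely."""
    h = (math.pi / 2) / n
    s = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        th = theta0 * math.sin(u)
        s += theta0 * math.cos(u) / math.sqrt(2 * math.cos(th) - 2 * math.cos(theta0))
    return 4 * s * h * math.sqrt(l / g)

P_small = period(0.01)   # nearly simple harmonic: expect about 2 pi sqrt(l/g)
P_big = period(1.0)      # larger amplitude: the period is longer
```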
+\subsubsection*{Force and potential energy in three dimensions}
+Everything looks nice so far. However, in real life, the world has (at least) three (spatial) dimensions. To work with multiple dimensions, we will have to promote our quantities into vectors.
+
+Consider a particle of mass $m$ moving in 3D. The equation of motion is now a vector equation
+\[
+ m\ddot{\mathbf{r}} = \mathbf{F}.
+\]
+We'll define the familiar quantities we've had.
+\begin{defi}[Kinetic energy]
+ We define the \emph{kinetic energy} of the particle to be
+ \[
+ T = \frac{1}{2}m|\mathbf{v}|^2 = \frac{1}{2}m\dot{\mathbf{r}}\cdot \dot{\mathbf{r}}.
+ \]
+\end{defi}
+If we want to know how it varies with time, we obtain
+\[
+ \frac{\d T}{\d t} = m\ddot{\mathbf{r}}\cdot \dot{\mathbf{r}} = \mathbf{F}\cdot \dot{\mathbf{r}} = \mathbf{F}\cdot \mathbf{v}.
+\]
+This is the power.
+\begin{defi}[Power]
+ The \emph{power} is the rate at which work is done on a particle by a force. It is given by
+ \[
+ P = \mathbf{F}\cdot \mathbf{v}.
+ \]
+\end{defi}
+\begin{defi}[Work done]
+ The \emph{work done} on a particle by a force is the change in kinetic energy caused by the force. The work done on a particle moving from $\mathbf{r}_1 = \mathbf{r}(t_1)$ to $\mathbf{r}_2 = \mathbf{r}(t_2)$ along a trajectory $C$ is the line integral
+ \[
+ W = \int_C \mathbf{F}\cdot\d \mathbf{r} = \int_{t_1}^{t_2} \mathbf{F}\cdot \dot{\mathbf{r}}\;\d t = \int_{t_1}^{t_2} P \;\d t.
+ \]
+\end{defi}
+Usually, we are interested in forces that conserve energy. These are forces which can be given a potential, and are known as \emph{conservative forces}.
+\begin{defi}[Conservative force and potential energy]
+ A \emph{conservative force} is a force field $\mathbf{F}(\mathbf{r})$ that can be written in the form
+ \[
+ \mathbf{F} = -\nabla V.
+ \]
+ $V$ is the \emph{potential energy function}.
+\end{defi}
+
+\begin{prop}
+ If $\mathbf{F}$ is conservative, then the energy
+ \begin{align*}
+ E &= T + V\\
+ &= \frac{1}{2}m|\mathbf{v}|^2 + V(\mathbf{r})
+ \end{align*}
+ is conserved. For any particle moving under this force, the work done by the force equals the decrease in potential energy, and is independent of the path taken between the end points. In particular, if we travel around a closed loop, no work is done.
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \frac{\d E}{\d t} &= \frac{\d}{\d t}\left(\frac{1}{2}m\dot{\mathbf{r}}\cdot \dot{\mathbf{r}} + V\right)\\
+ &= m\ddot{\mathbf{r}}\cdot \dot{\mathbf{r}} + \frac{\partial V}{\partial x_i}\frac{\d x_i}{\d t}\\
+ &= (m\ddot{\mathbf{r}} + \nabla V)\cdot \dot{\mathbf{r}}\\
+ &= (m\ddot{\mathbf{r}} - \mathbf{F})\cdot \dot{\mathbf{r}}\\
+ &= 0
+ \end{align*}
+ So the energy is conserved. In this case, the work done is
+ \[
+ W = \int_C \mathbf{F}\cdot \d \mathbf{r} = -\int_C (\nabla V)\cdot \d \mathbf{r} = V(\mathbf{r}_1) - V(\mathbf{r}_2).\qedhere
+ \]
+\end{proof}
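Path independence can be illustrated numerically. The sketch below (ours) takes the conservative field $\mathbf{F} = -\nabla V$ with $V(r) = -1/r$ (gravity with $GMm = 1$), computes the line integral $\int_C \mathbf{F}\cdot \d\mathbf{r}$ along two different polygonal paths between the same endpoints, and compares both with $V(\mathbf{r}_1) - V(\mathbf{r}_2)$.

```python
import math

def Phi(p):                       # potential V(r) = -1/r (so GMm = 1, an assumption)
    return -1.0 / math.sqrt(sum(c * c for c in p))

def F(p):                         # F = -grad V = -(1/r^2) r_hat = -r/r^3
    r = math.sqrt(sum(c * c for c in p))
    return tuple(-c / r**3 for c in p)

def work(path, n=4000):
    """Line integral of F . dr along straight segments, midpoint rule."""
    W = 0.0
    for a, b in zip(path, path[1:]):
        for i in range(n):
            tm = (i + 0.5) / n    # midpoint parameter on this segment
            mid = tuple(ai + tm * (bi - ai) for ai, bi in zip(a, b))
            Fm = F(mid)
            W += sum(Fi * (bi - ai) / n for Fi, ai, bi in zip(Fm, a, b))
    return W

r1, r2 = (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)
W_direct = work([r1, r2])                                   # straight line
W_detour = work([r1, (2.0, 2.0, 1.0), (0.0, 0.0, 3.0), r2])  # long way round
W_exact = Phi(r1) - Phi(r2)       # = V(r1) - V(r2) = -0.5
```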
+\subsection{Central forces}
+While in theory the potential can take any form it likes, most of the time, our system has \emph{spherical symmetry}. In this case, the potential depends only on the distance from the origin.
+\begin{defi}[Central force]
+ A \emph{central force} is a force with a potential $V(r)$ that depends only on the distance from the origin, $r = |\mathbf{r}|$. Note that a central force can be either attractive or repulsive.
+\end{defi}
+
+When dealing with central forces, the following formula is often helpful:
+\begin{prop}
+ $\nabla r = \hat{\mathbf{r}}$.
+\end{prop}
+Intuitively, this is because the direction in which $r$ increases most rapidly is $\hat{\mathbf{r}}$, and the rate of increase in that direction is clearly $1$. This can also be proved algebraically:
+
+\begin{proof}
+ We know that
+ \[
+ r^2 = x_1^2 + x_2^2 + x_3^2.
+ \]
+ Then
+ \[
+ 2r\frac{\partial r}{\partial x_i} = 2x_i.
+ \]
+ So
+ \[
+ \frac{\partial r}{\partial x_i} = \frac{x_i}{r} = (\hat{\mathbf{r}})_i.\qedhere
+ \]
+\end{proof}
+
+\begin{prop}
+ Let $\mathbf{F} = -\nabla V(r)$ be a central force. Then
+ \[
+ \mathbf{F} = -\nabla V = -\frac{\d V}{\d r} \hat{\mathbf{r}},
+ \]
+ where $\hat{\mathbf{r}} = \mathbf{r}/r$ is the unit vector in the radial direction pointing away from the origin.
+\end{prop}
+
+\begin{proof}
+ Using the proof above,
+ \[
+ (\nabla V)_i = \frac{\partial V}{\partial x_i} = \frac{\d V}{\d r} \frac{\partial r}{\partial x_i} = \frac{\d V}{\d r}(\hat{\mathbf{r}})_i\qedhere
+ \]
+\end{proof}
+Since central forces have spherical symmetry, they give rise to an additional conserved quantity called \emph{angular momentum}.
+
+\begin{defi}[Angular momentum]
+ The \emph{angular momentum} of a particle is
+ \[
+ \mathbf{L} = \mathbf{r}\times \mathbf{p} = m\mathbf{r}\times \dot{\mathbf{r}}.
+ \]
+\end{defi}
+
+\begin{prop}
+ Angular momentum is conserved by a central force.
+\end{prop}
+
+\begin{proof}
+ \[
+ \frac{\d \mathbf{L}}{\d t} = m\dot{\mathbf{r}} \times \dot{\mathbf{r}} + m\mathbf{r}\times \ddot{\mathbf{r}} = \mathbf{0} + \mathbf{r}\times \mathbf{F} = \mathbf{0},
+ \]
+ where the last equality comes from the fact that $\mathbf{F}$ is parallel to $\mathbf{r}$ for a central force.
+\end{proof}
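We can watch this conservation law in action by integrating an orbit numerically. The sketch below (ours) uses the attractive inverse-square force $\ddot{\mathbf{r}} = -\mathbf{r}/|\mathbf{r}|^3$ (with $m = 1$) and a leapfrog integrator, then checks that $\mathbf{L} = \mathbf{r}\times\dot{\mathbf{r}}$ barely drifts and that the motion stays in the plane perpendicular to $\mathbf{L}$.

```python
import math

def accel(r):                      # central inverse-square attraction, m = 1
    d3 = (r[0]**2 + r[1]**2 + r[2]**2) ** 1.5
    return (-r[0] / d3, -r[1] / d3, -r[2] / d3)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r = (1.0, 0.0, 0.0)
v = (0.0, 1.2, 0.3)                # bound orbit (E < 0) for these values
L0 = cross(r, v)                   # angular momentum per unit mass at t = 0

dt = 1e-3
for _ in range(20000):             # leapfrog (velocity Verlet) steps
    a = accel(r)
    vh = tuple(vi + 0.5 * dt * ai for vi, ai in zip(v, a))
    r = tuple(ri + dt * vi for ri, vi in zip(r, vh))
    a = accel(r)
    v = tuple(vi + 0.5 * dt * ai for vi, ai in zip(vh, a))

L1 = cross(r, v)
drift = max(abs(p - q) for p, q in zip(L0, L1))
in_plane = abs(sum(ri * Li for ri, Li in zip(r, L0)))   # r . L should stay 0
```

Leapfrog is a good choice here: for a central force it preserves the angular momentum essentially to rounding error.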
+In general, for a non-central force, the rate of change of angular momentum is the \emph{torque}.
+\begin{defi}[Torque]
+ The \emph{torque} $\mathbf{G}$ acting on a particle is the rate of change of its angular momentum:
+ \[
+ \mathbf{G} = \frac{\d \mathbf{L}}{\d t} = \mathbf{r}\times \mathbf{F}.
+ \]
+\end{defi}
+
+Note that $\mathbf{L}$ and $\mathbf{G}$ depend on the choice of origin. For a central force, only the angular momentum about the center of the force is conserved.
+
+\subsection{Gravity}
+We'll now study an important central force --- gravity. This law was discovered by Newton and was able to explain the orbits of various planets. However, we will only study the force and potential aspects of it, and postpone the study of orbits for a later time.
+
+\begin{law}[Newton's law of gravitation]
+ If a particle of mass $M$ is fixed at the origin, then a second particle of mass $m$ experiences a potential energy
+ \[
+ V(r) = -\frac{GMm}{r},
+ \]
+ where $G \approx \SI{6.67e-11}{\meter\cubed\per\kilogram\per\second\squared}$ is the \emph{gravitational constant}.
+
+ The gravitational force experienced is then
+ \[
+ \mathbf{F} = -\nabla V = -\frac{GMm}{r^2}\hat{\mathbf{r}}.
+ \]
+\end{law}
+Since the force points in the $-\hat{\mathbf{r}}$ direction, particles are attracted towards the origin.
+
+The potential energy is a function of the masses of \emph{both} the fixed mass $M$ and the second particle $m$. However, it is useful to isolate the contribution of the fixed mass $M$, so that we can describe its effect on \emph{any} second particle.
+\begin{defi}[Gravitational potential and field]
+ The \emph{gravitational potential} is the gravitational potential energy per unit mass. It is
+ \[
+ \Phi_g(r) = -\frac{GM}{r}.
+ \]
+ Note that \emph{potential} is confusingly different from \emph{potential energy}.
+
+ If we have a second particle, the potential \emph{energy} is given by $V = m\Phi_g$.
+
+ The \emph{gravitational field} is the force per unit mass,
+ \[
+ \mathbf{g} = -\nabla \Phi_g = -\frac{GM}{r^2}\hat{\mathbf{r}}.
+ \]
+\end{defi}
+
+If we have many fixed masses $M_i$ at points $\mathbf{r}_i$, we can add up their gravitational potential directly. Then the total gravitational potential is given by
+\[
+ \Phi_g(\mathbf{r}) = -\sum_i \frac{GM_i}{|\mathbf{r} - \mathbf{r}_i|}.
+\]
+Again, $V = m\Phi_g$ for a particle of mass $m$.
+
+An important (mathematical) result about gravitational fields is that we can treat spherical objects as point particles. In particular,
+\begin{prop}
+ The external gravitational potential of a spherically symmetric object of mass $M$ is the same as that of a point particle with the same mass at the center of the object, i.e.
+ \[
+ \Phi_g(r) = -\frac{GM}{r}.
+ \]
+\end{prop}
+The proof can be found in the Vector Calculus course.
+
+\begin{eg}
+ If you live on a spherical planet of mass $M$ and radius $R$, and can move only a small distance $z \ll R$ above the surface, then
+ \begin{align*}
+ V(r) &= V(R + z)\\
+ &= -\frac{GMm}{R + z}\\
+ &= -\frac{GMm}{R}\left(1 - \frac{z}{R} + \cdots\right)\\
+ &\approx \text{const.} + \frac{GMm}{R^2}z\\
+ &= \text{const.} + mgz,
+ \end{align*}
+ where $g = GM/R^2 \approx \SI{9.8}{\meter\per\second\squared}$ for Earth. Usually we are lazy and just say that the potential is $mgz$.
+\end{eg}
+
+\begin{eg}
+ How fast do we need to jump to escape the gravitational pull of the Earth? If we jump upwards with speed $v$ from the surface, then
+ \[
+ E = T + V = \frac{1}{2}mv^2 - \frac{GMm}{R}.
+ \]
+ After escape, we must have $T \geq 0$, and $V \to 0$ as $r \to \infty$. Since energy is conserved, we must have had $E \geq 0$ from the very beginning, i.e.
+ \[
+ v \geq v_{esc} = \sqrt{\frac{2GM}{R}}.
+ \]
+\end{eg}
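Plugging in standard textbook figures for the Earth (our assumed values, not from the notes) gives the familiar escape speed of about $\SI{11.2}{\kilo\meter\per\second}$:

```python
import math

# Assumed Earth-like values (not from the notes): standard textbook figures.
G = 6.67e-11    # m^3 kg^-1 s^-2, gravitational constant
M = 5.97e24     # kg, mass of the Earth
R = 6.37e6      # m, radius of the Earth

v_esc = math.sqrt(2 * G * M / R)   # escape speed, roughly 11.2 km/s
g_surface = G * M / R**2           # surface gravity, roughly 9.8 m/s^2
```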
+\subsubsection*{Inertial and gravitational mass}
+A careful reader would observe that ``mass'' appears in two unrelated equations:
+\[
+ \mathbf{F} = m_i\ddot{\mathbf{r}}
+\]
+and
+\[
+ \mathbf{F} = -\frac{GM_gm_g}{r^2}\hat{\mathbf{r}},
+\]
+and they play totally different roles. The first is the \emph{inertial mass}, which measures the resistance to motion, while the second is the \emph{gravitational mass}, which measures its response to gravitational forces.
+
+Conceptually, these are quite different. There is no \emph{a priori} reason why these two should be equal. However, experiment shows that they are indeed equivalent to each other, i.e.\ $m_i = m_g$, with an accuracy of $10^{-12}$ or better.
+
+This (philosophical) problem was only resolved when Einstein introduced his general theory of relativity, which says that gravity is actually a \emph{fictitious} force, which means that the acceleration of the particle is independent of its mass.
+
+We can further distinguish between ``passive'' and ``active'' gravitational mass, i.e.\ the amount of gravitational force received by a particle ($m$) and the amount of gravitational field generated by a particle ($M$), but these are also found to be equal, and we end up calling all of them ``mass''.
+\subsection{Electromagnetism}
+Next we will study the laws of electromagnetism. We will only provide a very rudimentary introduction to electromagnetism. Electromagnetism will be examined more in-depth in the IB Electromagnetism and II Electrodynamics courses.
+
+As the name suggests, electromagnetism comprises two parts --- electricity and magnetism. Similar to gravity, we generally imagine electromagnetism working as follows: charges generate fields, and fields cause charges to move.
+
+A common charged particle is the \emph{electron}, which is currently believed to be a fundamental particle. It has charge $q_e = \SI{-1.6e-19}{\coulomb}$. Other particles' charges are always integer multiples of $q_e$ (unless you are a quark).
+
+In electromagnetism, there are two fields --- the \emph{electric field} $\mathbf{E}(\mathbf{r}, t)$ and the \emph{magnetic field} $\mathbf{B}(\mathbf{r}, t)$. Their effect on charged particles is described by the \emph{Lorentz force law}.
+\begin{law}[Lorentz force law]
+ The \emph{electromagnetic force} experienced by a particle with electric charge $q$ is
+ \[
+ \mathbf{F} = q(\mathbf{E} + \mathbf{v}\times \mathbf{B}).
+ \]
+\end{law}
+This is the first time we see a force that depends on the \emph{velocity} of the particle. All the other forces we have seen depend only on the fields, which in turn depend only on position. This is weird, and seems to violate Galilean relativity, since velocity is a relative concept that depends on the frame of reference. It turns out that weird things happen to the $\mathbf{B}$ and $\mathbf{E}$ fields when you change the frame of reference. You will learn about these in the IB Electromagnetism course (if you take it).
+
+As a result, the magnetic force is not a conservative force, and it cannot be given a (regular) potential. On the other hand, assuming that the fields are time-independent, the electric field \emph{is} conservative. We write the potential as $\Phi_e(\mathbf{r})$, and $\mathbf{E} = -\nabla \Phi_e$.
+
+\begin{defi}[Electrostatic potential]
+ The electrostatic potential is a function $\Phi_e(\mathbf{r})$ such that
+ \[
+ \mathbf{E} = -\nabla \Phi_e.
+ \]
+\end{defi}
+While the magnetic force is not conservative in the traditional sense, it always acts perpendicularly to the velocity. Hence it does no work. So overall, energy is conserved under the action of the electromagnetic force.
+\begin{prop}
+ For time-independent fields $\mathbf{E}(\mathbf{r})$ and $\mathbf{B}(\mathbf{r})$, the energy
+ \[
+ E = T + V = \frac{1}{2}m|\mathbf{v}|^2 + q\Phi_e
+ \]
+ is conserved.
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \frac{\d E}{\d t} &= m\ddot{\mathbf{r}}\cdot \dot{\mathbf{r}} + q(\nabla \Phi_e)\cdot \dot{\mathbf{r}}\\
+ &= (m\ddot{\mathbf{r}} - q\mathbf{E}) \cdot \dot{\mathbf{r}}\\
+ &= (q\dot{\mathbf{r}}\times \mathbf{B})\cdot \dot{\mathbf{r}}\\
+ &= 0\qedhere
+ \end{align*}
+\end{proof}
+
+\subsubsection*{Motion in a uniform magnetic field}
+Consider the particular case where there is no electric field, i.e.\ $\mathbf{E} = \mathbf{0}$, and that the magnetic field is uniform throughout space. We choose our axes such that $\mathbf{B} = (0, 0, B)$ is constant.
+
+According to the Lorentz force law, $\mathbf{F} = q(\mathbf{E} + \mathbf{v}\times \mathbf{B}) = q\mathbf{v}\times \mathbf{B}$. Since the force is always perpendicular to the velocity, we expect this to act as a centripetal force to make the particle travel in circles.
+
+Indeed, writing out the components of the equation of motion, we obtain
+\begin{align*}
+ m\ddot{x} &= qB\dot{y}\tag{1}\\
+ m\ddot{y} &= -qB\dot{x}\tag{2}\\
+ m\ddot{z} &= 0\tag{3}
+\end{align*}
+From (3), we see that there is uniform motion parallel to $\mathbf{B}$, which is not interesting. We will look at the $x$ and $y$ components.
+
+There are many ways to solve this system of equations. Here we solve it using complex numbers.
+
+Let $\zeta = x + iy$. Then (1) + (2)$i$ gives
+\[
+ m\ddot{\zeta} = -iqB\dot{\zeta}.
+\]
+Then the solution is
+\[
+ \zeta = \alpha e^{-i\omega t} + \beta,
+\]
+where $\omega = qB/m$ is the \emph{gyrofrequency}, and $\alpha$ and $\beta$ are complex integration constants. We see that the particle goes in circles, with center $\beta$ and radius $|\alpha|$.
+
+We can choose coordinates such that at $t = 0$, $\mathbf{r} = \mathbf{0}$ and $\dot{\mathbf{r}} = (0, v, w)$, i.e.\ $\zeta = 0$ and $\dot{\zeta} = iv$, and $z = 0$ and $\dot{z} = w$.
+
+The solution is then
+\[
+ \zeta = R(1 - e^{-i\omega t}),
+\]
+where $R = v/\omega = mv/(qB)$ is the \emph{gyroradius} or \emph{Larmor radius}. Alternatively,
+\begin{align*}
+ x &= R(1 - \cos \omega t)\\
+ y &= R\sin \omega t\\
+ z &= wt.
+\end{align*}
+This is circular motion in the plane perpendicular to $\mathbf{B}$:
+\[
+ (x - R)^2 + y^2 = R^2,
+\]
+combined with uniform motion parallel to $\mathbf{B}$, i.e.~a helix.
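We can confirm the helix by integrating the Lorentz force law directly. The sketch below (ours) applies RK4 to $m\dot{\mathbf{v}} = q\mathbf{v}\times\mathbf{B}$ with the same initial conditions as above, and checks that the trajectory satisfies $(x - R)^2 + y^2 = R^2$ and $z = wt$.

```python
import math

q, m, B = 1.0, 1.0, 2.0           # assumed demo values
omega = q * B / m                 # gyrofrequency
v, w = 1.0, 0.5                   # initial velocity (0, v, w)
R = v / omega                     # gyroradius

def deriv(state):
    x, y, z, vx, vy, vz = state
    # a = (q/m) v x B with B = (0, 0, B): a = (omega*vy, -omega*vx, 0)
    return (vx, vy, vz, omega * vy, -omega * vx, 0.0)

state = (0.0, 0.0, 0.0, 0.0, v, w)
dt = 1e-3
for _ in range(10000):            # RK4 steps up to t = 10
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    state = tuple(s + dt * (a + 2 * b + 2 * c + d) / 6
                  for s, a, b, c, d in zip(state, k1, k2, k3, k4))

x, y, z = state[:3]
circle_err = abs((x - R)**2 + y**2 - R**2)   # motion perpendicular to B is a circle
z_err = abs(z - w * dt * 10000)              # motion parallel to B is uniform
```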
+
+Alternatively, we can solve this with vector operations. Start with
+\[
+ m\ddot{\mathbf{r}} = q\dot{\mathbf{r}}\times \mathbf{B}
+\]
+Let $\mathbf{B} = B\mathbf{n}$ with $|\mathbf{n}| = 1$. Then
+\[
+ \ddot{\mathbf{r}} = \omega \dot{\mathbf{r}}\times \mathbf{n},
+\]
+with our gyrofrequency $\omega = qB/m$. We integrate once, assuming $\mathbf{r}(0) = \mathbf{0}$ and $\dot{\mathbf{r}}(0) = \mathbf{v}_0$.
+\[
+ \dot{\mathbf{r}} = \omega \mathbf{r} \times \mathbf{n} + \mathbf{v}_0. \tag{$*$}
+\]
+Now we project $(*)$ parallel to and perpendicular to $\mathbf{B}$.
+
+First we dot $(*)$ with $\mathbf{n}$:
+\[
+ \dot{\mathbf{r}}\cdot \mathbf{n} = \mathbf{v}_0\cdot \mathbf{n} = w = \text{const}.
+\]
+We integrate again to obtain
+\[
+ \mathbf{r}\cdot \mathbf{n} = wt.
+\]
+This is the part parallel to $\mathbf{B}$.
+
+To resolve perpendicularly, write $\mathbf{r} = (\mathbf{r}\cdot \mathbf{n})\mathbf{n} + \mathbf{r}_\bot$, with $\mathbf{r}_\bot \cdot \mathbf{n} = 0$.
+
+The perpendicular component of $(*)$ gives
+\[
+ \dot{\mathbf{r}}_\bot = \omega\mathbf{r}_\bot \times \mathbf{n} + \mathbf{v}_0 - (\mathbf{v}_0\cdot \mathbf{n})\mathbf{n}.
+\]
+We solve this by differentiating again to obtain
+\[
+ \ddot{\mathbf{r}}_\bot = \omega \dot{\mathbf{r}}_\bot \times \mathbf{n} = -\omega^2 \mathbf{r}_\bot + \omega \mathbf{v}_0 \times \mathbf{n},
+\]
+which we can solve using particular integrals and complementary functions.
+
+\subsubsection*{Point charges}
+So far we've discussed the effects of the fields on particles. But how can we create fields in the first place? We'll only look at the simplest case, where a point charge generates an electric field.
+
+\begin{law}[Coulomb's law]
+ A particle of charge $Q$, fixed at the origin, produces an electrostatic potential
+ \[
+ \Phi_e = \frac{Q}{4\pi\varepsilon_0 r},
+ \]
+ where $\varepsilon_0 \approx \SI{8.85e-12}{\per\meter\cubed\per\kilogram\second\squared\coulomb\squared}$.
+
+ The corresponding electric field is
+ \[
+ \mathbf{E} = -\nabla \Phi_e = \frac{Q}{4\pi\varepsilon_0} \frac{\hat{\mathbf{r}}}{r^2}.
+ \]
+ The resulting force on a particle of charge $q$ is
+ \[
+ \mathbf{F} = q\mathbf{E} = \frac{Qq}{4\pi\varepsilon_0}\frac{\hat{\mathbf{r}}}{r^2}.
+ \]
+\end{law}
+\begin{defi}[Electric constant]
+ $\varepsilon_0$ is the \emph{electric constant} or \emph{vacuum permittivity} or \emph{permittivity of free space}.
+\end{defi}
+
+The form of the equations here is closely analogous to that of gravity. However, there is an important difference: charges can be positive or negative. Thus electrostatic forces can be either attractive or repulsive, whereas gravity is always attractive.
+
+\subsection{Friction}
+At an atomic level, energy is always conserved. However, in many everyday processes, this does not appear to be the case. This is because \emph{friction} tends to take kinetic energy away from objects.
+
+In general, we can divide friction into \emph{dry friction} and \emph{fluid friction}.
+\subsubsection*{Dry friction}
+When solid objects are in contact, a \emph{normal reaction force} $\mathbf{N}$ (perpendicular to the contact surface) prevents them from interpenetrating, while a \emph{frictional force} $\mathbf{F}$ (tangential to the surface) resists relative tangential motion (sliding or slipping).
+\begin{center}
+ \begin{tikzpicture}[rotate = 30]
+ \draw [fill=gray] (-2, 0) rectangle (2, -0.2);
+ \draw (-1, 1) rectangle (1, 0);
+ \draw [->] (0, 0) -- (0, 1.3) node [above] {$\mathbf{N}$};
+ \draw [->] (0, 0.13) -- (0.6, 0.13) node [anchor = south west] {$\!\!\mathbf{F}$};
+ \draw [->] (0, 0.5) -- (-0.577, -0.5) node [below] {$m\mathbf{g}$};
+ \end{tikzpicture}
+\end{center}
+If the tangential force is small, it is insufficient to overcome friction and no sliding occurs. We have \emph{static friction} of
+\[
+ |\mathbf{F}| \leq \mu_s |\mathbf{N}|,
+\]
+where $\mu_s$ is the \emph{coefficient of static friction}.
+
+When the external force on the object exceeds $\mu_s |\mathbf{N}|$, sliding starts, and we have a \emph{kinetic friction} of
+\[
+ |\mathbf{F}| = \mu_k |\mathbf{N}|,
+\]
+where $\mu_k$ is the \emph{coefficient of kinetic friction}.
+
+These coefficients are measures of roughness and depend on the two surfaces involved. For example, Teflon on Teflon has a coefficient of around $0.04$, rubber on asphalt about $0.8$, and a hypothetical perfectly smooth surface $0$. Usually, $\mu_s > \mu_k > 0$.
+
+\subsubsection*{Fluid drag}
+When a solid object moves through a fluid (i.e.\ liquid or gas), it experiences a \emph{drag force}.
+
+There are two important regimes.
+
+\begin{enumerate}
+ \item Linear drag: for small things in viscous fluids moving slowly, e.g.\ a single cell organism in water, the friction is proportional to the velocity, i.e.
+ \[
+ \mathbf{F} = -k_1 \mathbf{v},
+ \]
+ where $\mathbf{v}$ is the velocity of the object relative to the fluid, and $k_1 > 0$ is a constant depending on the shape of the object. For example, for a sphere of radius $R$, Stokes' law gives
+ \[
+ k_1 = 6\pi \mu R,
+ \]
+ where $\mu$ is the viscosity of the fluid.
+ \item Quadratic drag: for large objects moving rapidly in less viscous fluid, e.g.\ cars or tennis balls in air, the friction is proportional to the square of the velocity, i.e.
+ \[
+ \mathbf{F} = -k_2|\mathbf{v}|^2\hat{\mathbf{v}}.
+ \]
+\end{enumerate}
+In either case, the object loses energy. The power exerted by the drag force is
+\[
+ \mathbf{F}\cdot \mathbf{v} = \begin{cases} -k_1|\mathbf{v}|^2 & \text{(linear drag)}\\-k_2 |\mathbf{v}|^3 & \text{(quadratic drag)}.\end{cases}
+\]
+\begin{eg}
+ Consider a projectile moving in a uniform gravitational field and experiencing a linear drag force.
+
+ At $t = 0$, we throw the projectile with velocity $\mathbf{u}$ from $\mathbf{x} = \mathbf{0}$.
+
+ The equation of motion is
+ \[
+ m\frac{\d \mathbf{v}}{\d t} = m\mathbf{g} - k\mathbf{v}.
+ \]
+ We first solve for $\mathbf{v}$, and then deduce $\mathbf{x}$.
+
+ We use an integrating factor $\exp(\frac{k}{m}t)$ to obtain
+ \begin{align*}
+ \frac{\d }{\d t} \left(e^{kt/m}\mathbf{v}\right) &= e^{kt/m} \mathbf{g}\\
+ e^{kt/m} \mathbf{v}&= \frac{m}{k}e^{kt/m}\mathbf{g} + \mathbf{c}\\
+ \mathbf{v} &= \frac{m}{k}\mathbf{g} + \mathbf{c} e^{-kt/m}
+ \end{align*}
+ Since $\mathbf{v} = \mathbf{u}$ at $t = 0$, we get $\mathbf{c} = \mathbf{u} - \frac{m}{k}\mathbf{g}$. So
+ \[
+ \mathbf{v} = \dot{\mathbf{x}} = \frac{m}{k}\mathbf{g} + \left(\mathbf{u} - \frac{m}{k}\mathbf{g}\right)e^{-kt/m}.
+ \]
+ Integrating once gives
+ \[
+ \mathbf{x} = \frac{m}{k}\mathbf{g}t - \frac{m}{k}\left(\mathbf{u} - \frac{m}{k} \mathbf{g}\right) e^{-kt/m} + \mathbf{d}.
+ \]
+ Since $\mathbf{x} = \mathbf{0}$ at $t = 0$, we need
+ \[
+ \mathbf{d} = \frac{m}{k}\left(\mathbf{u} - \frac{m}{k}\mathbf{g}\right).
+ \]
+ So
+ \[
+ \mathbf{x} = \frac{m}{k}\mathbf{g}t + \frac{m}{k}\left (\mathbf{u} - \frac{m}{k}\mathbf{g}\right)(1 - e^{-kt/m}).
+ \]
+ In component form, let $\mathbf{x} = (x, y)$, $\mathbf{u} = (u\cos \theta, u\sin \theta)$, $\mathbf{g} = (0, -g)$. So
+ \begin{align*}
+ x &= \frac{mu}{k}\cos \theta (1 - e^{-kt/m})\\
+ y &= -\frac{mgt}{k} + \frac{m}{k}\left(u\sin \theta + \frac{mg}{k}\right)(1 - e^{-kt/m}).
+ \end{align*}
+ We can characterize the strength of the drag force by the dimensionless constant $ku/(mg)$, with a larger constant corresponding to a larger drag force.
+\end{eg}
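The closed-form solution can be checked against a direct numerical integration of $m\,\d\mathbf{v}/\d t = m\mathbf{g} - k\mathbf{v}$. The parameter values below are arbitrary demo choices (ours, not from the notes).

```python
import math

m, k, g = 1.0, 0.5, 9.81          # assumed demo values
u, theta = 20.0, math.radians(45)
ux, uy = u * math.cos(theta), u * math.sin(theta)

def analytic(t):
    """The closed-form x(t), y(t) derived in the example above."""
    f = 1.0 - math.exp(-k * t / m)
    x = (m * ux / k) * f
    y = -(m * g * t) / k + (m / k) * (uy + m * g / k) * f
    return x, y

def deriv(s):                      # state (x, y, vx, vy); a = g - (k/m) v
    x, y, vx, vy = s
    return (vx, vy, -(k / m) * vx, -g - (k / m) * vy)

s, dt = (0.0, 0.0, ux, uy), 1e-4
for _ in range(20000):             # RK4 up to t = 2
    s1 = deriv(s)
    s2 = deriv(tuple(a + 0.5 * dt * b for a, b in zip(s, s1)))
    s3 = deriv(tuple(a + 0.5 * dt * b for a, b in zip(s, s2)))
    s4 = deriv(tuple(a + dt * b for a, b in zip(s, s3)))
    s = tuple(a + dt * (p + 2 * qq + 2 * r + w) / 6
              for a, p, qq, r, w in zip(s, s1, s2, s3, s4))

xa, ya = analytic(2.0)
err = max(abs(s[0] - xa), abs(s[1] - ya))
```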
+\subsubsection*{Effect of damping on small oscillations}
+We've previously seen that particles near a potential minimum oscillate indefinitely. However, if there is friction in the system, the oscillation will damp out and energy is continually lost. Eventually, the system comes to rest at the stable equilibrium.
+
+\begin{eg}
+ If a linear drag force is added to a harmonic oscillator, then the equation of motion becomes
+ \[
+ m\ddot{\mathbf{x}} = -m\omega^2 \mathbf{x} - k\dot{\mathbf{x}},
+ \]
+ where $\omega$ is the angular frequency of the oscillator in the absence of damping. Each component of $\mathbf{x}$ then satisfies
+ \[
+ \ddot{x} + 2\gamma \dot{x} + \omega^2 x = 0,
+ \]
+ where $\gamma = k/(2m) > 0$. Solutions are $x = e^{\lambda t}$, where
+ \[
+ \lambda^2 + 2\gamma \lambda + \omega^2 = 0,
+ \]
+ or
+ \[
+ \lambda = -\gamma \pm \sqrt{\gamma^2 - \omega^2}.
+ \]
+ If $\gamma > \omega$, then the roots are real and negative. So we have exponential decay. We call this an overdamped oscillator.
+
+ If $0 < \gamma < \omega$, then the roots are complex with $\Re(\lambda) = -\gamma$. So we have decaying oscillations. We call this an underdamped oscillator.
+
+ For details, refer to Differential Equations.
+\end{eg}
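The two damping regimes can be checked with a short computation. A Python sketch (the values of $\gamma$ and $\omega$ are made up) classifying the roots of the auxiliary equation:

```python
import cmath

# Classify the damped oscillator by the roots of
# lambda^2 + 2 gamma lambda + omega^2 = 0 (illustrative values).
def roots(gamma, omega):
    disc = cmath.sqrt(gamma**2 - omega**2)
    return -gamma + disc, -gamma - disc

# Overdamped (gamma > omega): two real negative roots, pure decay.
r1, r2 = roots(3.0, 1.0)
assert r1.imag == 0 and r2.imag == 0
assert r1.real < 0 and r2.real < 0

# Underdamped (gamma < omega): complex roots with real part -gamma,
# so decaying oscillations.
r1, r2 = roots(0.5, 2.0)
assert r1.imag != 0
assert abs(r1.real + 0.5) < 1e-12 and abs(r2.real + 0.5) < 1e-12
```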
+
+\section{Orbits}
+The goal of this chapter is to study the orbit of a particle in a central force,
+\[
+ m\ddot{\mathbf{r}} = -\nabla V(r).
+\]
+While the universe is three-dimensional, the orbit is confined to a plane. This is because the angular momentum $\mathbf{L} = m\mathbf{r}\times \dot{\mathbf{r}}$ is a constant vector, as we've previously shown. Furthermore, $\mathbf{L}\cdot \mathbf{r} = 0$. Therefore, the motion takes place in a plane passing through the origin, perpendicular to $\mathbf{L}$.
+
+\subsection{Polar coordinates in the plane}
+We choose our axes such that the orbital plane is $z = 0$. To describe the orbit, we introduce polar coordinates $(r, \theta)$:
+\[
+ x = r\cos\theta, \quad y = r\sin \theta.
+\]
+Our objective is to separate the motion of the particle into radial and angular components. We do so by defining unit vectors in the directions of increasing $r$ and increasing $\theta$:
+\[
+ \hat{\mathbf{r}} = \begin{pmatrix}\cos \theta\\ \sin \theta\end{pmatrix}, \quad \hat{\boldsymbol\theta} = \begin{pmatrix}-\sin \theta\\\cos\theta \end{pmatrix}.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$y$};
+ \draw (0, 0) -- (2, 1.5) node [circ]{} node [pos = 0.5, anchor = south east] {$r$};
+ \draw [->] (2, 1.5) -- (2.5, 1.875) node [anchor = south west] {$\hat{\mathbf{r}}$};
+ \draw [->] (2, 1.5) -- (1.625, 2) node [anchor = south east] {$\hat{\boldsymbol\theta}$};
+ \draw (0.7, 0) arc (0:36.87:0.7);
+ \node at (0.9, 0.3) {$\theta$};
+ \end{tikzpicture}
+\end{center}
+These two unit vectors form an orthonormal basis. However, unlike the Cartesian basis vectors, their directions depend on $\theta$, and hence change with time as the particle moves. In particular, we have
+
+\begin{prop}
+ \begin{align*}
+ \frac{\d \hat{\mathbf{r}}}{\d \theta} &= \begin{pmatrix} -\sin \theta\\ \cos \theta\end{pmatrix} = \hat{\boldsymbol\theta}\\
+ \frac{\d \hat{\boldsymbol\theta}}{\d \theta} &= \begin{pmatrix}-\cos \theta\\ -\sin \theta\end{pmatrix} = -\hat{\mathbf{r}}.
+ \end{align*}
+\end{prop}
+
+Often, we want the derivative with respect to time, instead of $\theta$. By the chain rule, we have
+\[
+ \frac{\d \hat{\mathbf{r}}}{\d t} = \dot{\theta}\hat{\boldsymbol\theta},\quad \frac{\d \hat{\boldsymbol\theta}}{\d t} = -\dot{\theta} \hat{\mathbf{r}}.
+\]
+We can now express the position, velocity and acceleration in this new polar basis. The position is given by
+\[
+ \mathbf{r} = r\hat{\mathbf{r}}.
+\]
+Taking the derivative gives the velocity as
+\[
+ \dot{\mathbf{r}} = \dot{r} \hat{\mathbf{r}} + r\dot{\theta} \hat{\boldsymbol\theta}.
+\]
+The acceleration is then
+\begin{align*}
+ \ddot{\mathbf{r}} &= \ddot{r} \hat{\mathbf{r}} + \dot{r}\dot{\theta}\hat{\boldsymbol\theta} + \dot{r}\dot{\theta}\hat{\boldsymbol\theta} + r\ddot{\theta}\hat{\boldsymbol\theta} - r\dot{\theta}^2\hat{\mathbf{r}}\\
+ &= (\ddot{r} - r\dot{\theta}^2)\hat{\mathbf{r}} + (r\ddot{\theta} + 2\dot{r}\dot{\theta})\hat{\boldsymbol\theta}.
+\end{align*}
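These polar formulae can be checked against Cartesian derivatives for a sample trajectory. A Python sketch (the trajectory $r(t) = 1 + 0.3\sin t$, $\theta(t) = t^2$ is made up for illustration):

```python
import math

# Check the polar acceleration formula
#   (rdd - r thd^2) rhat + (r thdd + 2 rd thd) thetahat
# against the Cartesian second derivative, for a made-up trajectory.
def r(t): return 1 + 0.3 * math.sin(t)
def th(t): return t * t

def cart(t):
    return (r(t) * math.cos(th(t)), r(t) * math.sin(th(t)))

def d2vec(f, t, eps=1e-4):
    """Second derivative of a tuple-valued function, central difference."""
    return tuple((a - 2 * b + c) / eps**2
                 for a, b, c in zip(f(t + eps), f(t), f(t - eps)))

t0, eps = 0.8, 1e-4
rdot = (r(t0 + eps) - r(t0 - eps)) / (2 * eps)
rddot = (r(t0 + eps) - 2 * r(t0) + r(t0 - eps)) / eps**2
thdot = (th(t0 + eps) - th(t0 - eps)) / (2 * eps)
thddot = (th(t0 + eps) - 2 * th(t0) + th(t0 - eps)) / eps**2

c, s = math.cos(th(t0)), math.sin(th(t0))
rhat, thhat = (c, s), (-s, c)
a_rad = rddot - r(t0) * thdot**2          # radial component
a_ang = r(t0) * thddot + 2 * rdot * thdot # angular component
polar = tuple(a_rad * p + a_ang * q for p, q in zip(rhat, thhat))
cartesian = d2vec(cart, t0)
assert all(abs(p - q) < 1e-4 for p, q in zip(polar, cartesian))
```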
+\begin{defi}[Radial and angular velocity]
+ $\dot{r}$ is the \emph{radial velocity}, and $\dot{\theta}$ is the \emph{angular velocity}.
+\end{defi}
+
+\begin{eg}[Uniform motion in a circle]
+ If we are moving in a circle, then $\dot{r} = 0$ and $\dot{\theta} = \omega = $ constant. So
+ \[
+ \dot{\mathbf{r}} = r\omega\hat{\boldsymbol\theta}.
+ \]
+ The speed is given by
+ \[
+ v = |\dot{\mathbf{r}}| = r|\omega| = \text{const}
+ \]
+ and the acceleration is
+ \[
+ \ddot{\mathbf{r}} = -r\omega^2 \hat{\mathbf{r}}.
+ \]
+ Hence in order to make a particle of mass $m$ move uniformly in a circle, we must supply a \emph{centripetal force} $mv^2/r$ towards the center.
+\end{eg}
+
+\subsection{Motion in a central force field}
+Now let's put in our central force. Since $V = V(r)$, we have
+\[
+ \mathbf{F} = -\nabla V = -\frac{\d V}{\d r} \hat{\mathbf{r}}.
+\]
+So Newton's 2nd law in polar coordinates is
+\[
+ m(\ddot{r} - r\dot{\theta}^2)\hat{\mathbf{r}} + m(r\ddot{\theta} + 2\dot{r}\dot{\theta})\hat{\boldsymbol\theta} = -\frac{\d V}{\d r}\hat{\mathbf{r}}.
+\]
+The $\theta$ component of this equation is
+\[
+ m(r\ddot{\theta} + 2\dot{r}\dot{\theta}) = 0.
+\]
+We can rewrite it as
+\[
+ \frac{1}{r}\frac{\d }{\d t}(mr^2 \dot{\theta}) = 0.
+\]
+Let $L = mr^2 \dot{\theta}$. This is the $z$ component (and the only component) of the conserved angular momentum $\mathbf{L}$:
+\begin{align*}
+ \mathbf{L} &= m\mathbf{r}\times \dot{\mathbf{r}}\\
+ &= mr\hat{\mathbf{r}}\times (\dot{r}\hat{\mathbf{r}} + r\dot{\theta}\hat{\boldsymbol\theta})\\
+ &= mr^2 \dot{\theta}\; \hat{\mathbf{r}}\times \hat{\boldsymbol\theta}\\
+ &= mr^2 \dot{\theta}\; \hat{\mathbf{z}}.
+\end{align*}
+So the angular component tells us that $L$ is constant, which is the conservation of angular momentum.
+
+However, a more convenient quantity is the angular momentum \emph{per unit mass}:
+\begin{notation}[Angular momentum per unit mass]
+ The \emph{angular momentum per unit mass} is
+ \[
+ h = \frac{L}{m} = r^2\dot\theta = \text{const.}
+ \]
+\end{notation}
+Now the radial ($r$) component of the equation of motion is
+\[
+ m(\ddot{r} - r\dot{\theta}^2) = -\frac{\d V}{\d r}.
+\]
+We eliminate $\dot{\theta}$ using $r^2\dot{\theta} = h$ to obtain
+\[
+ m\ddot{r} = -\frac{\d V}{\d r} + \frac{mh^2}{r^3} = -\frac{\d V_{\text{eff}}}{\d r},
+\]
+where
+\[
+ V_{\text{eff}}(r) = V(r) + \frac{mh^2}{2r^2}.
+\]
+We have now reduced the problem to 1D motion in an (effective) potential --- as studied previously.
+
+The total energy of the particle is
+\begin{align*}
+ E &= \frac{1}{2}m|\dot{\mathbf{r}}|^2 + V(r)\\
+ &= \frac{1}{2}m(\dot{r}^2 + r^2\dot{\theta}^2) + V(r)\\
+ \intertext{(since $\dot{\mathbf{r}} = \dot{r}\hat{\mathbf{r}} + r\dot{\theta}\hat{\boldsymbol\theta}$, and $\hat{\mathbf{r}}$ and $\hat{\boldsymbol\theta}$ are orthogonal)}
+ &= \frac{1}{2}m\dot{r}^2 + \frac{mh^2}{2r^2} + V(r)\\
+ &= \frac{1}{2}m\dot{r}^2 + V_{\text{eff}}(r).
+\end{align*}
+
+\begin{eg}
+ Consider an attractive force following the inverse-square law (e.g.\ gravity). Here
+ \[
+ V = -\frac{mk}{r},
+ \]
+ for some constant $k$. So
+ \[
+ V_{\text{eff}} = -\frac{mk}{r} + \frac{mh^2}{2r^2}.
+ \]
+ We have two terms of opposite signs and different dependencies on $r$. For small $r$, the second term dominates and $V_{\text{eff}}$ is large. For large $r$, the first term dominates. Then $V_{\text{eff}}$ asymptotically approaches $0$ from below.
+ \begin{center}
+ \begin{tikzpicture}[xscale=0.5]
+ \draw (0, 0) -- (8, 0) node [right] {$r$};
+ \draw (0, -2) -- (0, 2) node [above] {$V_{\text{eff}}$};
+ \draw [samples=70, domain=0.5:7.8] plot (\x, {-3/\x + 2/(\x*\x)});
+
+ \draw [dashed] (1.33, -1.125) -- (0, -1.125) node [left] {$E_{\min}$};
+ \draw [dashed] (1.33, -1.125) -- (1.33, 0) node [above] {$r_*$};
+ \end{tikzpicture}
+ \end{center}
+ The minimum of $V_{\text{eff}}$ is at
+ \[
+ r_{*} = \frac{h^2}{k},\quad E_{\text{min}} = -\frac{mk^2}{2h^2}.
+ \]
+ We have a few possible types of motion:
+ \begin{itemize}
+ \item If $E = E_{\min}$, then $r$ remains at $r_*$ and $\dot{\theta} = h/r^2$ is constant. So we have uniform motion in a circle.
+ \item If $E_{\min} < E < 0$, then $r$ oscillates, and $\dot{\theta} = h/r^2$ varies with it. This is a non-circular, bounded orbit.
+
+ We'll now introduce a lot of (pointless) definitions:
+
+ \begin{defi}[Periapsis, apoapsis and apsides]
+ The points of minimum and maximum $r$ in such an orbit are called the \emph{periapsis} and \emph{apoapsis}. They are collectively known as the \emph{apsides}.
+ \end{defi}
+
+ \begin{defi}[Perihelion and aphelion]
+ For an orbit around the Sun, the periapsis and apoapsis are known as the \emph{perihelion} and \emph{aphelion}.
+ \end{defi}
+
+ Similarly,
+ \begin{defi}[Perigee and apogee]
+ The periapsis and apoapsis of an orbit around the Earth are known as the \emph{perigee} and \emph{apogee}.
+ \end{defi}
+
+ \item If $E \geq 0$, then $r$ comes in from $\infty$, reaches a minimum, and returns to infinity. This is an unbounded orbit.
+ \end{itemize}
+
+ We will later show that in the case of motion in an inverse square force, the trajectories are conic sections (circles, ellipses, parabolae and hyperbolae).
+\end{eg}
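The formulae for $r_*$ and $E_{\min}$ can be verified numerically. A minimal Python sketch (the values of $m$, $k$ and $h$ are made up):

```python
# Check that V_eff(r) = -mk/r + m h^2/(2 r^2) has its minimum at
# r* = h^2/k, where it equals E_min = -m k^2/(2 h^2).  Values illustrative.
m, k, h = 2.0, 3.0, 1.5

def V_eff(r):
    return -m * k / r + m * h**2 / (2 * r**2)

r_star = h**2 / k                  # 0.75 for these values
E_min = -m * k**2 / (2 * h**2)     # -4.0 for these values
assert abs(V_eff(r_star) - E_min) < 1e-12

# r* is a stationary point: the central-difference slope vanishes there.
eps = 1e-6
slope = (V_eff(r_star + eps) - V_eff(r_star - eps)) / (2 * eps)
assert abs(slope) < 1e-5
```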
+
+\subsubsection*{Stability of circular orbits}
+We'll now look at circular orbits, since circles are nice. Consider a general potential energy $V(r)$. We have to answer two questions:
+\begin{itemize}
+ \item Do circular orbits exist?
+ \item If they do, are they stable?
+\end{itemize}
+
+The conditions for existence and stability are rather straightforward. For a circular orbit, $r = r_* =$ const for some value of $h\not= 0$ (if $h = 0$, then the object is just standing still!). Since $\ddot{r} = 0$ for constant $r$, we require
+\[
+ V'_{\text{eff}}(r_*) = 0.
+\]
+The orbit is stable if $r_*$ is a minimum of $V_{\text{eff}}$, i.e.
+\[
+ V_{\text{eff}}''(r_*) > 0.
+\]
+In terms of $V(r)$, a circular orbit requires
+\[
+ V'(r_*) = \frac{mh^2}{r_*^3}
+\]
+and stability further requires
+\[
+ V''(r_*) + \frac{3mh^2}{r_*^4} = V''(r_*) + \frac{3}{r_*}V'(r_*) > 0.
+\]
+In terms of the radial force $F(r) = -V'(r)$, the orbit is stable if
+\[
+ F'(r_*) + \frac{3}{r_*}F(r_*) < 0.
+\]
+\begin{eg}
+ Consider a central force with
+ \[
+ V(r) = -\frac{mk}{r^p}
+ \]
+ for some $k, p > 0$. Then
+ \[
+ V''(r) + \frac{3}{r}V'(r) = \big( -p(p + 1) + 3p\big)\frac{mk}{r^{p + 2}} = p(2-p)\frac{mk}{r^{p + 2}}.
+ \]
+ So circular orbits are stable for $p < 2$. This is illustrated by the graphs of $V_{\text{eff}}(r)$ for $p = 1$ and $p = 3$.
+ \begin{center}
+ \begin{tikzpicture}[xscale=0.5]
+ \draw (0, 0) -- (8, 0);
+ \draw (0, -2) -- (0, 2) node [above] {$V_{\text{eff}}$};
+ \draw [blue, samples=10, domain=0.5:1.5] plot (\x, {-3/\x + 2/(\x*\x)});
+ \draw [blue, domain=1.5:7.8] plot (\x, {-3/\x + 2/(\x*\x)}) node [right] {$p = 1$};
+
+
+ \draw [red, dashed, samples=70, domain=0.4:7.8] plot (\x, {-0.6/(\x*\x*\x) + 1.2/(\x*\x)}) node [right] {$p = 3$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
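The stability criterion can be checked numerically by estimating $V_{\text{eff}}''(r_*)$ with a finite difference. A Python sketch, with illustrative $m$, $k$, $h$:

```python
# Numerical check that circular orbits in V(r) = -mk/r^p are stable
# iff p < 2.  Parameter values are illustrative.
m, k, h = 1.0, 1.0, 1.0

def V_eff(r, p):
    return -m * k / r**p + m * h**2 / (2 * r**2)

def r_star(p):
    # V_eff'(r) = 0 gives p k / r^(p+1) = h^2 / r^3, i.e. r^(2-p) = h^2/(p k).
    return (h**2 / (p * k)) ** (1.0 / (2 - p))

def d2(f, r, eps=1e-4):
    """Second derivative by central finite difference."""
    return (f(r + eps) - 2 * f(r) + f(r - eps)) / eps**2

assert d2(lambda r: V_eff(r, 1), r_star(1)) > 0   # p = 1: stable minimum
assert d2(lambda r: V_eff(r, 3), r_star(3)) < 0   # p = 3: unstable maximum
```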
+
+\subsection{Equation of the shape of the orbit}
+In general, we could determine $r(t)$ by integrating the energy equation
+\begin{align*}
+ E &= \frac{1}{2}m\dot{r}^2 + V_{\text{eff}}(r)\\
+ t &= \pm \sqrt{\frac{m}{2}}\int \frac{\d r}{\sqrt{E - V_{\text{eff}}(r)}}
+\end{align*}
+However, this is usually not practical, because we can't do the integral. Instead, it is usually much easier to find the shape $r(\theta)$ of the orbit.
+
+Still, solving for $r(\theta)$ is also not easy. We will need a magic trick --- we introduce the new variable
+\begin{notation}
+ \[
+ u = \frac{1}{r}.
+ \]
+\end{notation}
+Then
+\[
+ \dot{r} = \frac{\d r}{\d \theta}\dot{\theta} = \frac{\d r}{\d \theta}\frac{h}{r^2} = -h\frac{\d u}{\d \theta},
+\]
+and
+\[
+ \ddot{r} = \frac{\d }{\d t}\left(-h \frac{\d u}{\d \theta}\right) = -h\frac{\d ^2 u}{\d \theta^2}\dot{\theta} = -h\frac{\d ^2u}{\d \theta^2}\frac{h}{r^2} = -h^2u^2\frac{\d^2 u}{\d \theta^2}.
+\]
+The factor of $u^2$ does not look linear at all, but for suitable force laws it cancels and leaves a linear equation, as we will soon see.
+
+The radial equation of motion
+\[
+ m\ddot{r} - \frac{mh^2}{r^3} = F(r)
+\]
+then becomes
+\begin{prop}[Binet's equation]
+ \[
+ -mh^2u^2\left(\frac{\d^2u}{\d \theta^2}+ u\right) = F\left(\frac{1}{u}\right).
+ \]
+\end{prop}
+This still looks rather complicated, but observe that for an inverse square force, $F(1/u)$ is proportional to $u^2$, and then the equation is linear!
+
+In general, given an arbitrary $F(r)$, we aim to solve this second order ODE for $u(\theta)$. If needed, we can then work out the time-dependence via
+\[
+ \dot{\theta} = hu^2.
+\]
+\subsection{The Kepler problem}
+The Kepler problem is the study of the orbits of two objects interacting via a central force that obeys the inverse square law. The goal is to classify the possible orbits and study their properties. One of the most important examples of the Kepler problem is the orbit of celestial objects, as studied by Kepler himself.
+\subsubsection*{Shapes of orbits}
+For a planet orbiting the sun, the potential and force are given by
+\[
+ V(r) = -\frac{mk}{r},\quad F(r) = -\frac{mk}{r^2}
+\]
+with $k = GM$ (for the Coulomb attraction of opposite charges, we have the same equation with $\displaystyle k = -\frac{Qq}{4\pi\varepsilon_0 m}$).
+
+Binet's equation then becomes linear, and
+\[
+ \frac{\d^2 u}{\d \theta^2} + u = \frac{k}{h^2}.
+\]
+We write the general solution as
+\[
+ u = \frac{k}{h^2} + A\cos(\theta - \theta_0),
+\]
+where $A \geq 0$ and $\theta_0$ are arbitrary constants.
+
+If $A = 0$, then $u$ is constant, and the orbit is circular. Otherwise, $u$ reaches a maximum at $\theta = \theta_0$. This is the periapsis. We now re-define polar coordinates such that the periapsis is at $\theta = 0$. Then
+\begin{prop}
+ The orbit of a planet around the sun is given by
+ \[
+ r = \frac{\ell}{1 + e\cos \theta},\tag{$*$}
+ \]
+ with $\ell = h^2/k$ and $e = Ah^2/k$. This is the polar equation of a conic, with a focus (the sun) at the origin.
+\end{prop}
+
+\begin{defi}[Eccentricity]
+ The dimensionless parameter $e \geq 0$ in the equation of orbit is the \emph{eccentricity} and determines the shape of the orbit.
+\end{defi}
+
+We can rewrite ($*$) in Cartesian coordinates with $x = r\cos \theta$ and $y = r\sin \theta$. Then we obtain
+\[
+ (1 - e^2)x^2 + 2e\ell x + y^2 = \ell^2.\tag{$\dagger$}
+\]
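The polar-to-Cartesian conversion can be checked numerically: points on $r = \ell/(1 + e\cos\theta)$ should satisfy ($\dagger$) for every $e$. A Python sketch with arbitrary illustrative values:

```python
import math

# Points on the polar conic r = ell/(1 + e cos(theta)) should satisfy
# the Cartesian form (1 - e^2) x^2 + 2 e ell x + y^2 = ell^2.
def conic_lhs(ell, e, theta):
    r = ell / (1 + e * math.cos(theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    return (1 - e**2) * x**2 + 2 * e * ell * x + y**2

ell = 2.0
for e in (0.0, 0.5, 1.0, 2.0):             # circle, ellipse, parabola, hyperbola
    for theta in (0.1, 1.0, 2.0):
        if 1 + e * math.cos(theta) > 0.1:  # skip angles where r blows up
            assert abs(conic_lhs(ell, e, theta) - ell**2) < 1e-9
```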
+There are three different possibilities:
+\begin{itemize}
+ \item Ellipse: ($0\leq e < 1$). $r$ is bounded by
+ \[
+ \frac{\ell}{1 + e} \leq r \leq \frac{\ell}{1 - e}.
+ \]
+ ($\dagger$) can be put into the equation of an ellipse centered on $(-ea, 0)$,
+ \[
+ \frac{(x + ea)^2}{a^2} + \frac{y^2}{b^2} = 1,
+ \]
+ where $\displaystyle a = \frac{\ell}{1 - e^2}$ and $\displaystyle b = \frac{\ell}{\sqrt{1 - e^2}}\leq a$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [gray] (-2, 0) -- (2, 0);
+ \draw [gray] (0, -1.6) -- (0, 1.6);
+ \draw [gray, ->] (0.7, -0.2) -- (0, -0.2) node [gray!50!black, pos = 0.5, below] {$ae$};
+ \draw [gray, ->] (0, -0.2) -- (0.7, -0.2);
+ \draw [gray, ->] (-0.2, 0) -- (-0.2, 1.6) node [gray!50!black, pos = 0.5, left] {$b$};
+ \draw [gray, ->] (-0.2, 1.6) -- (-0.2, 0);
+
+ \draw [gray, ->] (0, -0.2) -- (-2, -0.2) node [gray!50!black, pos = 0.5, below] {$a$};
+ \draw [gray, ->] (-2, -0.2) -- (0, -0.2);
+ \draw [gray, dashed] (0.7, 0) -- (0.7, 1.5) node [gray!50!black, pos = 0.5, right] {$\ell$};
+
+ \draw (.7, 0) node [anchor = south west] {$O$} node [circ] {};
+ \draw [->-=0.1] (0, 0) circle [x radius = 2, y radius = 1.6];
+ \end{tikzpicture}
+ \end{center}
+
+ $a$ and $b$ are the semi-major and semi-minor axes, and $\ell$ is the \emph{semi-latus rectum}. One focus of the ellipse is at the origin. If $e = 0$, then $a = b = \ell$ and the ellipse is a circle.
+
+ \item Hyperbola: ($e > 1$). For $e > 1$, $r\to \infty$ as $\theta \to \pm \alpha$, where $\alpha = \cos^{-1}(-1/e)\in (\pi/2, \pi)$. Then ($\dagger$) can be put into the equation of a hyperbola centered on $(ea, 0)$,
+ \[
+ \frac{(x - ea)^2}{a^2} - \frac{y^2}{b^2} = 1,
+ \]
+ with $\displaystyle a = \frac{\ell}{e^2 - 1}$, $\displaystyle b = \frac{\ell}{\sqrt{e^2 - 1}}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [gray] (-3, -2) -- (3, 2);
+ \draw [gray] (3, -2) -- (-3, 2);
+ \draw [->-=0.3] (-3, -1.9) .. controls (-0.2, 0) .. (-3, 1.9);
+ \draw [gray, ->] (0, 0) -- (-0.9, 0);
+ \draw [gray, ->] (-0.9, 0) -- (0, 0) node [pos = 0.5, above] {$a$};
+ \node at (-2, 0) [circ] {};
+ \node at (-2, 0) [left] {$O$};
+ \draw [gray, dashed] (-2, 0) -- (-2, 1.2) node [pos = 0.5, left] {$\ell$};
+ \draw [gray, dashed] (-2, 0) -- (-1.38462, -0.923077) node [pos = 0.5, right] {$b$};
+ \end{tikzpicture}
+ \end{center}
+ This corresponds to an unbound orbit that is deflected (scattered) by an attractive force.
+
+ $b$ is both the semi-minor axis and the \emph{impact parameter}. It is the distance by which the planet would miss the object if there were no attractive force.
+
+ The asymptote is $y = \frac{b}{a}(x - ea)$, or
+ \[
+ x \sqrt{e^2 - 1} - y = eb.
+ \]
+ Alternatively, we can write the equation of the asymptote as
+ \[
+ (x, y) \cdot \left(\frac{\sqrt{e^2 - 1}}{e}, -\frac{1}{e}\right) = b
+ \]
+ or $\mathbf{r} \cdot \mathbf{n} = b$, where $\mathbf{n}$ is a unit vector; this is the equation of a line at distance $b$ from the origin.
+
+ \item Parabola: ($e = 1$). Then ($*$) becomes
+ \[
+ r = \frac{\ell}{1 + \cos \theta}.
+ \]
+ We see that $r\to \infty$ as $\theta \to \pm \pi$. ($\dagger$) becomes the equation of a parabola, $y^2 = \ell(\ell - 2x)$. The trajectory is similar to that of a hyperbola.
+\end{itemize}
+
+\subsubsection*{Energy and eccentricity}
+We can figure out which path a planet follows by considering its energy.
+\begin{align*}
+ E &= \frac{1}{2}m(\dot{r}^2 + r^2\dot{\theta}^2) - \frac{mk}{r}\\
+ &= \frac{1}{2}mh^2\left(\left(\frac{\d u}{\d \theta}\right)^2 + u^2\right) - mku
+\end{align*}
+Substitute $\displaystyle u = \frac{1}{\ell}(1 + e\cos \theta)$ and $\displaystyle \ell = \frac{h^2}{k}$, and it simplifies to
+\[
+ E = \frac{mk}{2\ell}(e^2 - 1),
+\]
+which is independent of $\theta$, as it must be.
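The substitution can be verified numerically: $E(\theta)$ should come out constant and equal to $mk(e^2-1)/(2\ell)$. A Python sketch with made-up values of $m$, $k$, $\ell$ and $e$:

```python
import math

# Substituting u = (1 + e cos(theta))/ell, with h^2 = k ell, into
# E = (1/2) m h^2 (u'^2 + u^2) - m k u should give
# E = m k (e^2 - 1)/(2 ell) for every theta.  Values illustrative.
m, k, ell, e = 1.0, 2.0, 1.5, 0.6
h2 = k * ell                       # from ell = h^2 / k

def E(theta):
    u = (1 + e * math.cos(theta)) / ell
    du = -e * math.sin(theta) / ell          # du/dtheta
    return 0.5 * m * h2 * (du**2 + u**2) - m * k * u

expected = m * k * (e**2 - 1) / (2 * ell)
for theta in (0.0, 0.7, 2.0, 3.0):
    assert abs(E(theta) - expected) < 1e-12
```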
+
+Orbits are bounded for $e < 1$. This corresponds to the case $E < 0$. Unbounded orbits have $e > 1$ and thus $E > 0$. A parabolic orbit has $e = 1$, $E = 0$, and is ``marginally bound''.
+
+Note that the condition $E > 0$ is equivalent to $|\dot{\mathbf{r}}| > \sqrt{\frac{2GM}{r}} = v_{\mathrm{esc}}$, which means you have enough kinetic energy to escape orbit.
+
+\subsubsection*{Kepler's laws of planetary motion}
+When Kepler first formulated his laws of planetary motion, he was working empirically from astronomical observations of the actual planets. We are now going to derive the laws with pen and paper instead.
+
+\begin{law}[Kepler's first law]
+ The orbit of each planet is an ellipse with the Sun at one focus.
+\end{law}
+
+\begin{law}[Kepler's second law]
+ The line between the planet and the sun sweeps out equal areas in equal times.
+\end{law}
+
+\begin{law}[Kepler's third law]
+ The square of the orbital period is proportional to the cube of the semi-major axis, or
+ \[
+ P^2 \propto a^3.
+ \]
+\end{law}
+
+We have already shown that Law 1 follows from Newtonian dynamics and the inverse-square law of gravity. In the solar system, planets generally have very low eccentricity (i.e.\ very close to circular motion), but asteroids and comets can have very eccentric orbits. In other solar systems, even planets can have highly eccentric orbits. As we've previously shown, it is also possible for the object to have a parabolic or hyperbolic orbit. However, we tend not to call these ``planets''.
+
+Law 2 follows simply from the conservation of angular momentum: The area swept out by moving $\d \theta$ is $\d A = \frac{1}{2}r^2\;\d \theta$ (area of sector of circle). So
+\[
+ \frac{\d A}{\d t} = \frac{1}{2}r^2\dot{\theta} = \frac{h}{2} = \text{const}.
+\]
+This result holds for \emph{any} central force.
+
+Law 3 follows from this: the total area of the ellipse is $A = \pi ab = \frac{h}{2}P$ (by the second law). But $b^2 = a^2( 1 - e^2)$ and $h^2 = k\ell = ka(1 - e^2)$. So
+\[
+ P^2 = \frac{(2\pi)^2a^4(1 - e^2)}{ka(1 - e^2)} = \frac{(2\pi)^2 a^3}{k}.
+\]
+Note that the third law is very easy to prove directly for circular orbits. Since the radius is constant, $\ddot{r} = 0$. So the equations of motion give
+\[
+ -r \dot{\theta}^2 = -\frac{k}{r^2},
+\]
+so
+\[
+ r^3 \dot{\theta}^2 = k.
+\]
+Since $\dot{\theta}\propto P^{-1}$, the result follows.
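Kepler's third law can also be verified numerically by integrating one orbit of $\ddot{\mathbf{r}} = -k\mathbf{r}/|\mathbf{r}|^3$ and timing a full revolution. The following Python sketch (an RK4 integrator with illustrative $a$, $e$, $k$; not part of the notes) checks $P = 2\pi\sqrt{a^3/k}$:

```python
import math

# Integrate r'' = -k r / |r|^3 for one revolution and compare the
# period with P = 2 pi sqrt(a^3 / k).  Orbit parameters illustrative.
k = 1.0
a, e = 1.0, 0.3

def deriv(s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return (vx, vy, -k * x / r3, -k * y / r3)

def rk4(s, dt):
    k1 = deriv(s)
    k2 = deriv([p + dt / 2 * q for p, q in zip(s, k1)])
    k3 = deriv([p + dt / 2 * q for p, q in zip(s, k2)])
    k4 = deriv([p + dt * q for p, q in zip(s, k3)])
    return [p + dt / 6 * (q1 + 2 * q2 + 2 * q3 + q4)
            for p, q1, q2, q3, q4 in zip(s, k1, k2, k3, k4)]

# Start at periapsis; speed from the energy E = -mk/(2a) per unit mass.
r0 = a * (1 - e)
v0 = math.sqrt(k * (2 / r0 - 1 / a))
s, dt, t = [r0, 0.0, 0.0, v0], 1e-3, 0.0
while True:
    y_prev = s[1]
    s = rk4(s, dt)
    t += dt
    if y_prev < 0 <= s[1]:     # y crosses 0 upwards: one full period
        break
assert abs(t - 2 * math.pi * math.sqrt(a**3 / k)) < 1e-2
```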
+
+\subsection{Rutherford scattering}
+Finally, we will consider the case where the force is \emph{repulsive} instead of attractive. An important example is the Rutherford gold foil experiment, where Rutherford bombarded atoms with alpha particles, and the alpha particles are repelled by the nucleus of the atom.
+
+Under a repulsive force, the potential and force are given by
+\[
+ V(r) = +\frac{mk}{r}, \quad F(r) = +\frac{mk}{r^2}.
+\]
+For Coulomb repulsion of like charges,
+\[
+ k = \frac{Qq}{4\pi \varepsilon_0 m} > 0.
+\]
+The solution is now
+\[
+ u = -\frac{k}{h^2} + A\cos (\theta - \theta_0).
+\]
+Without loss of generality, we can take $A \geq 0$ and $\theta_0 = 0$. Then
+\[
+ r = \frac{\ell}{e\cos \theta - 1}
+\]
+with
+\[
+ \ell = \frac{h^2}{k}, \quad e = \frac{Ah^2}{k}.
+\]
+We know that $r$ and $\ell$ are positive. So we must have $e \geq 1$. Then $r\to \infty$ as $\theta \to \pm \alpha$, where $\alpha = \cos^{-1}(1/e)$.
+
+The orbit is a hyperbola, again given by
+\[
+ \frac{(x - ea)^2}{a^2} - \frac{y^2}{b^2} = 1,
+\]
+with $a = \frac{\ell}{e^2 - 1}$ and $b = \frac{\ell}{\sqrt{e^2 - 1}}$. However, this time, the trajectory is the other branch of the hyperbola.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [gray] (-3, -2) -- (3, 2);
+ \draw [gray] (3, -2) -- (-3, 2);
+ \draw [->-=0.3] (3, -1.9) .. controls (0.2, 0) .. (3, 1.9);
+ \node at (-2, 0) [circ] {};
+ \node at (-2, 0) [left] {$O$};
+ \draw [dashed] (-2, 0) -- (-1.38462, 0.923077) node [pos = 0.5, right] {$b$};
+ \draw (0.3, 0) arc (0:-33:0.3);
+ \draw (0.3, 0) arc (0:33:0.3);
+ \node at (0.3, 0) [right] {$2\alpha$};
+ \draw (0, 0.4) arc (90:33:0.4);
+ \draw (0, 0.4) arc (90:147:0.4);
+ \node at (0, 0.4) [above] {$\beta$};
+ \end{tikzpicture}
+\end{center}
+It seems as if the particle is deflected by $O$.
+
+We can characterize the path of the particle by the impact parameter $b$ and the incident speed $v$ (i.e.\ the speed when far away from the origin). We know that the angular momentum per unit mass is $h = bv$ (velocity $\times$ perpendicular distance to $O$).
+
+How does the scattering angle $\beta = \pi - 2\alpha$ depend on the impact parameter $b$ and the incident speed $v$?
+
+Recall that the angle $\alpha$ is given by $\alpha = \cos^{-1} (1/e)$. So we obtain
+\[
+ \frac{1}{e} = \cos \alpha = \cos \left(\frac{\pi}{2} - \frac{\beta}{2}\right) = \sin\left(\frac{\beta}{2}\right).
+\]
+So
+\[
+ b = \frac{\ell}{\sqrt{e^2 - 1}} = \frac{(bv)^2}{k}\tan \frac{\beta}{2}.
+\]
+Hence
+\[
+ \beta = 2\tan^{-1}\left(\frac{k}{bv^2}\right).
+\]
+We see that if we have a small impact parameter, i.e.\ $b \ll k/v^2$, then we can have a scattering angle approaching $\pi$.
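The relation between $\beta$, $b$ and $v$ can be cross-checked against the hyperbola geometry: computing $e$ from $b = \ell/\sqrt{e^2 - 1}$ and then $\beta = \pi - 2\alpha$ should reproduce $2\tan^{-1}(k/(bv^2))$. A Python sketch with illustrative values:

```python
import math

# Consistency check of the Rutherford scattering angle (b, v, k made up).
for b, v, k in [(1.0, 2.0, 3.0), (0.5, 1.0, 0.2), (2.0, 3.0, 10.0)]:
    ell = (b * v) ** 2 / k                # ell = h^2/k with h = b v
    e = math.sqrt(1 + (ell / b) ** 2)     # from b = ell / sqrt(e^2 - 1)
    alpha = math.acos(1 / e)
    beta = math.pi - 2 * alpha
    assert abs(beta - 2 * math.atan(k / (b * v**2))) < 1e-12
```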
+
+\section{Rotating frames}
+Recall that Newton's laws hold only in inertial frames. However, sometimes, our frames are not inertial. In this chapter, we are going to study a particular kind of non-inertial frame --- a rotating frame. An important rotating frame is the Earth itself, but there are also other examples such as merry-go-rounds.
+
+\subsection{Motion in rotating frames}
+Now suppose that $S$ is an inertial frame, and $S'$ is rotating about the $z$ axis with angular velocity $\omega = \dot{\theta}$ with respect to $S$.
+
+\begin{defi}[Angular velocity vector]
+ The \emph{angular velocity vector} of a rotating frame is $\boldsymbol\omega = \omega\hat{\mathbf{z}}$, where $\hat{\mathbf{z}}$ is the axis of rotation and $\omega$ is the angular speed.
+\end{defi}
+
+First we wish to relate the basis vectors $\{\mathbf{e}_i\}$ and $\{\mathbf{e}_i'\}$ of $S$ and $S'$ respectively.
+
+Consider a particle at rest in $S'$. From the perspective of $S$, its velocity is
+\[
+ \left(\frac{\d \mathbf{r}}{\d t}\right)_S = \boldsymbol\omega\times \mathbf{r},
+\]
+where $\boldsymbol\omega = \omega\hat{\mathbf{z}}$ is the \emph{angular velocity vector} (aligned with the rotation axis). This formula also applies to the basis vectors of $S'$.
+\[
+ \left(\frac{\d \mathbf{e}_i'}{\d t}\right)_S = \boldsymbol\omega\times \mathbf{e}_i'.
+\]
+Now given a general time-dependent vector $\mathbf{a}$, we can express it in the $\{\mathbf{e}_i'\}$ basis as follows:
+\[
+ \mathbf{a} = \sum a_i'(t) \mathbf{e}_i'.
+\]
+From the perspective of $S'$, $\mathbf{e}_i'$ is constant and the time derivative of $\mathbf{a}$ is given by
+\[
+ \left(\frac{\d \mathbf{a}}{\d t}\right)_{S'} = \sum \frac{\d a_i'}{\d t}\mathbf{e}_i'.
+\]
+In $S$, however, $\mathbf{e}_i'$ is not constant. So we apply the product rule to obtain the time derivative of $\mathbf{a}$:
+\[
+ \left(\frac{\d \mathbf{a}}{\d t}\right)_S = \sum \frac{\d a_i'}{\d t}\mathbf{e}_i' + \sum a_i'\boldsymbol\omega \times \mathbf{e}_i' = \left(\frac{\d \mathbf{a}}{\d t}\right)_{S'} + \boldsymbol\omega \times \mathbf{a}.
+\]
+This key identity applies to all vectors and can be written as an operator equation:
+\begin{prop}
+ If $S$ is an inertial frame, and $S'$ is rotating relative to $S$ with angular velocity $\boldsymbol \omega$, then
+ \[
+ \left(\frac{\d}{\d t}\right)_S = \left(\frac{\d}{\d t}\right)_{S'} + \boldsymbol\omega \times.
+ \]
+ Applied to the position vector $\mathbf{r}(t)$ of a particle, it gives
+ \[
+ \left(\frac{\d \mathbf{r}}{\d t}\right)_S = \left(\frac{\d \mathbf{r}}{\d t}\right)_{S'} + \boldsymbol\omega \times \mathbf{r}.
+ \]
+\end{prop}
+We can interpret this as saying that the difference in velocity measured in the two frames is the relative velocity of the frames.
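The operator identity can be checked numerically for a concrete rotating basis: differentiate $\mathbf{a}(t)$ in $S$ by finite differences and compare with $(\d \mathbf{a}/\d t)_{S'} + \boldsymbol\omega\times\mathbf{a}$. A Python sketch with a made-up $\mathbf{a}(t)$ and $\omega$:

```python
import math

omega = 0.7  # angular speed of the rotating frame (made-up value)

def basis(t):
    """Basis vectors of S', expressed in S, for rotation about z."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    return ((c, s, 0.0), (-s, c, 0.0), (0.0, 0.0, 1.0))

def comps(t):       # components a_i'(t) of a in the rotating basis (arbitrary)
    return (math.sin(t), t * t, math.exp(-t))

def dcomps(t):      # their time derivatives
    return (math.cos(t), 2 * t, -math.exp(-t))

def a(t):
    e = basis(t)
    c = comps(t)
    return tuple(sum(c[i] * e[i][j] for i in range(3)) for j in range(3))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

t0, eps = 0.9, 1e-6
# (d a / d t)_S by a central finite difference:
lhs = tuple((p - q) / (2 * eps) for p, q in zip(a(t0 + eps), a(t0 - eps)))
# (d a / d t)_{S'} + omega x a:
e = basis(t0)
d_rot = tuple(sum(dcomps(t0)[i] * e[i][j] for i in range(3)) for j in range(3))
rhs = tuple(p + q for p, q in zip(d_rot, cross((0.0, 0.0, omega), a(t0))))
assert all(abs(p - q) < 1e-6 for p, q in zip(lhs, rhs))
```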
+
+We apply this formula a second time, and allow $\boldsymbol\omega$ to depend on time. Then we have
+\begin{align*}
+ \left(\frac{\d ^2\mathbf{r}}{\d t^2}\right)_S &= \left(\left(\frac{\d }{\d t}\right)_{S'} + \boldsymbol\omega\times \right)\left(\left(\frac{\d \mathbf{r}}{\d t}\right)_{S'} + \boldsymbol\omega \times \mathbf{r}\right)\\
+ &= \left(\frac{\d^2 \mathbf{r}}{\d t^2}\right)_{S'} + 2\boldsymbol \omega \times \left(\frac{\d \mathbf{r}}{\d t}\right)_{S'} + \dot{\boldsymbol\omega} \times \mathbf{r} + \boldsymbol\omega\times(\boldsymbol\omega\times \mathbf{r}).
+\end{align*}
+Since $S$ is inertial, Newton's Second Law is
+\[
+ m\left(\frac{\d ^2\mathbf{r}}{\d t^2}\right)_S = \mathbf{F}.
+\]
+So
+\begin{prop}
+ \[
+ m\left(\frac{\d^2 \mathbf{r}}{\d t^2}\right)_{S'} = \mathbf{F} - 2m\boldsymbol\omega \times \left(\frac{\d \mathbf{r}}{\d t}\right)_{S'} - m\dot{\boldsymbol\omega}\times \mathbf{r} - m\boldsymbol\omega \times (\boldsymbol\omega \times \mathbf{r}).
+ \]
+\end{prop}
+
+\begin{defi}[Fictitious forces]
+ The additional terms on the RHS of the equation of motion in rotating frames are \emph{fictitious forces}, and are needed to explain the motion observed in $S'$. They are
+ \begin{itemize}
+ \item \emph{Coriolis force}: $-2m\boldsymbol\omega \times \left(\frac{\d \mathbf{r}}{\d t}\right)_{S'}.$
+ \item \emph{Euler force}: $-m\dot{\boldsymbol\omega}\times \mathbf{r}$
+ \item \emph{Centrifugal force}: $-m\boldsymbol\omega\times(\boldsymbol\omega\times \mathbf{r})$.
+ \end{itemize}
+\end{defi}
+In most cases, $\boldsymbol \omega$ is constant, and we can neglect the Euler force.
+
+\subsection{The centrifugal force}
+What exactly does the centrifugal force do? Let $\boldsymbol\omega = \omega\hat{\boldsymbol\omega}$, where $|\hat{\boldsymbol\omega}| = 1$. Then
+\[
+ -m\boldsymbol\omega \times (\boldsymbol\omega \times \mathbf{r}) = -m\big((\boldsymbol\omega \cdot \mathbf{r})\boldsymbol\omega - (\boldsymbol\omega \cdot \boldsymbol\omega)\mathbf{r}\big) = m\omega^2 \mathbf{r}_{\bot},
+\]
+where $\mathbf{r}_{\bot} = \mathbf{r} - (\mathbf{r}\cdot \hat{\boldsymbol\omega})\hat{\boldsymbol\omega}$ is the projection of the position onto the plane perpendicular to $\boldsymbol\omega$. So the centrifugal force is directed away from the axis of rotation, and its magnitude is $m\omega^2$ times the distance from the axis.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (0, 3) node [above] {$\boldsymbol\omega$};
+ \draw [->] (-0.15, 2.5) arc (120:420:0.3 and 0.1);
+ \draw [->] (0, 0.5) -- (1.5, 1.5) node [pos=0.5, anchor = north west] {$\mathbf{r}$};
+ \draw [->] (0, 1.5) -- (1.5, 1.5) node [pos = 0.5, above] {$\mathbf{r}_\bot$};
+ \end{tikzpicture}
+\end{center}
+Note that
+\begin{align*}
+ \mathbf{r}_\bot \cdot \mathbf{r}_\bot &= \mathbf{r}\cdot \mathbf{r} - (\mathbf{r} \cdot \hat{\boldsymbol\omega})^2\\
+ \nabla(|\mathbf{r}_\bot|^2) &= 2\mathbf{r} - 2(\mathbf{r}\cdot \hat{\boldsymbol\omega})\hat{\boldsymbol\omega} = 2\mathbf{r}_\bot.
+\end{align*}
+So
+\[
+ -m\boldsymbol\omega\times(\boldsymbol\omega\times \mathbf{r}) = -\nabla\left(-\frac{1}{2}m\omega^2|\mathbf{r}_\bot|^2\right) = -\nabla\left(-\frac{1}{2}m|\boldsymbol\omega\times \mathbf{r}|^2\right).
+\]
+Thus the centrifugal force is a conservative (fictitious) force.
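The vector identity used above can be verified for arbitrary vectors. A short Python sketch (the particular $\boldsymbol\omega$ and $\mathbf{r}$ are made up):

```python
import math

# Check, for arbitrary made-up vectors, of the identity
# -omega x (omega x r) = omega^2 r_perp.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

w = (0.3, -0.4, 1.2)               # arbitrary angular velocity vector
r = (1.0, 2.0, -0.5)               # arbitrary position
wmag = math.sqrt(dot(w, w))
what = tuple(p / wmag for p in w)  # unit vector along omega
r_perp = tuple(p - dot(r, what) * q for p, q in zip(r, what))
lhs = tuple(-p for p in cross(w, cross(w, r)))
rhs = tuple(wmag**2 * p for p in r_perp)
assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
```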
+
+On a rotating planet, the gravitational and centrifugal forces per unit mass combine to make the \emph{effective gravity},
+\[
+ \mathbf{g}_{\text{eff}} = \mathbf{g} + \omega^2 \mathbf{r}_\bot.
+\]
+This gravity will not be vertically downwards. Consider a point $P$ at latitude $\lambda$ on the surface of a spherical planet of radius $R$.
+
+We construct orthogonal axes:
+\begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius = 2];
+ \node [anchor = north east] {$O$};
+ \draw [->] (0, 0) -- (0, 2.5) node [above] {$\boldsymbol\omega$};
+ \draw (0, 0) -- (2, 0);
+ \draw (0, 0) -- (1.4, 1.4) node [below] {$P$};
+ \draw [->] (1.4, 1.4) -- (1.8, 1.8) node [anchor = south west] {$\hat{\mathbf{z}}$};
+ \draw [->] (1.4, 1.4) -- (1, 1.8) node [anchor = south east] {$\hat{\mathbf{y}}$};
+ \draw (0.3, 0) arc (0:45:0.3);
+ \node at (0.5, 0.2) {$\lambda$};
+ \end{tikzpicture}
+\end{center}
+with $\hat{\mathbf{x}}$ into the page. So $\hat{\mathbf{z}}$ is up, $\hat{\mathbf{y}}$ is North, and $\hat{\mathbf{x}}$ is East.
+
+At $P$, we have
+\begin{align*}
+ \mathbf{r} &= R\hat{\mathbf{z}}\\
+ \mathbf{g} &= -g\hat{\mathbf{z}}\\
+ \boldsymbol\omega &= \omega(\cos\lambda \hat{\mathbf{y}} + \sin \lambda \hat{\mathbf{z}})
+\end{align*}
+So
+\begin{align*}
+ \mathbf{g}_{\mathrm{eff}} &= \mathbf{g} + \omega^2 \mathbf{r}_\bot \\
+ &= -g\hat{\mathbf{z}} + \omega^2 R\cos\lambda(\cos\lambda\hat{\mathbf{z}} - \sin\lambda\hat{\mathbf{y}})\\
+ &= -\omega^2 R\cos\lambda\sin\lambda\hat{\mathbf{y}} - (g - \omega^2 R\cos^2\lambda)\hat{\mathbf{z}}.
+\end{align*}
+So the angle $\alpha$ between $\mathbf{g}$ and $\mathbf{g}_{\text{eff}}$ is given by
+\[
+ \tan \alpha = \frac{\omega^2 R\cos\lambda\sin\lambda}{g - \omega^2R\cos^2\lambda}.
+\]
+This is 0 at the equator and the poles, and greatest when you are halfway between. However, this is still tiny on Earth, and does not affect our daily life.
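The components of $\mathbf{g}_{\text{eff}}$, and the smallness of the deflection, can be checked with Earth-like numbers. A Python sketch (the values of $R$, $\omega$, $g$ and $\lambda$ are illustrative):

```python
import math

# Check of the g_eff components at latitude lam, working in the
# (y, z) = (North, up) plane.  Parameter values are Earth-like but
# illustrative.
g, R, om = 9.81, 6.371e6, 7.292e-5
lam = math.radians(45)

what = (math.cos(lam), math.sin(lam))    # direction of omega in (y, z)
r = (0.0, R)                             # position of P: r = R zhat
rdw = r[0] * what[0] + r[1] * what[1]
r_perp = (r[0] - rdw * what[0], r[1] - rdw * what[1])
geff = (om**2 * r_perp[0], -g + om**2 * r_perp[1])

# Compare with the closed-form components derived above.
ey = -om**2 * R * math.cos(lam) * math.sin(lam)
ez = -(g - om**2 * R * math.cos(lam)**2)
assert abs(geff[0] - ey) < 1e-9 and abs(geff[1] - ez) < 1e-9

# The deflection angle really is tiny (roughly a tenth of a degree here).
tan_alpha = om**2 * R * math.cos(lam) * math.sin(lam) \
    / (g - om**2 * R * math.cos(lam)**2)
assert math.degrees(math.atan(tan_alpha)) < 0.2
```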
+
+\subsection{The Coriolis force}
+The Coriolis force is a more subtle force. Writing $\mathbf{v} = \left(\frac{\d \mathbf{r}}{\d t}\right)_{S'}$, we can write the force as
+\[
+ \mathbf{F} = -2m\boldsymbol\omega\times \mathbf{v}.
+\]
+Note that this has the same form as the Lorentz force caused by a magnetic field, and is velocity-dependent. However, unlike the effects of a magnetic field, particles do not go around in circles in a rotating frame, since we also have the centrifugal force in play.
+
+Since this force is always perpendicular to the velocity, it does no work.
+
+Consider motion parallel to the Earth's surface. We only care about the effect of the Coriolis force on the horizontal trajectory, so we ignore its vertical component, which is tiny compared with gravity.
+
+So we only take the vertical component of $\boldsymbol\omega$, $\omega\sin\lambda\hat{\mathbf{z}}$. The horizontal velocity $\mathbf{v} = v_x \hat{\mathbf{x}} + v_y \hat{\mathbf{y}}$ generates a horizontal Coriolis force:
+\[
+ -2m\omega\sin\lambda\hat{\mathbf{z}}\times \mathbf{v} = -2m\omega\sin\lambda(v_y \hat{\mathbf{x}} - v_x \hat{\mathbf{y}}).
+\]
+In the Northern hemisphere $(0 < \lambda < \pi/2)$, this causes a deflection towards the right. In the Southern Hemisphere, the deflection is to the left. The effect vanishes at the equator.
+
+Note that only the horizontal effect of horizontal motion vanishes at the equator. The vertical effects or those caused by vertical motion still exist.
+
+\begin{eg}
+ Suppose a ball is dropped from a tower of height $h$ at the equator. Where does it land?
+
+ In the rotating frame,
+ \[
+ \ddot{\mathbf{r}} = \mathbf{g} - 2\boldsymbol\omega\times \dot{\mathbf{r}} - \boldsymbol\omega\times(\boldsymbol\omega\times \mathbf{r}).
+ \]
+ We work to first order in $\omega$. Then
+ \[
+ \ddot{\mathbf{r}} = \mathbf{g} - 2\boldsymbol\omega\times \dot{\mathbf{r}} + O(\omega^2).
+ \]
+ Integrating with respect to $t$, we obtain
+ \[
+ \dot{\mathbf{r}} = \mathbf{g}t - 2\boldsymbol\omega \times (\mathbf{r} - \mathbf{r}_0) + O(\omega^2),
+ \]
+ where $\mathbf{r}_0$ is the initial position. We substitute into the original equation to obtain
+ \[
+ \ddot{\mathbf{r}} = \mathbf{g} - 2\boldsymbol\omega\times \mathbf{g}t + O(\omega^2).
+ \]
+ (where some new $\omega^2$ terms are thrown into $O(\omega^2)$). We integrate twice to obtain
+ \[
+ \mathbf{r} = \mathbf{r}_0 + \frac{1}{2}\mathbf{g}t^2 - \frac{1}{3}\boldsymbol\omega \times \mathbf{g}t^3 + O(\omega^2).
+ \]
+ In components, we have $\mathbf{g} = (0, 0, -g)$, $\boldsymbol\omega = (0, \omega, 0)$ and $\mathbf{r}_0 = (0, 0, R + h)$. So
+ \[
+ \mathbf{r} = \left(\frac{1}{3}\omega gt^3, 0, R + h - \frac{1}{2}gt^2\right) + O(\omega^2).
+ \]
+ So the particle hits the ground at $t = \sqrt{2h/g}$, and its eastward displacement is $\frac{1}{3}\omega g\left(\frac{2h}{g}\right)^{3/2}$.
+
+ This can be understood in terms of angular momentum conservation in the non-rotating frame. At the beginning, the particle has the same angular velocity as the Earth. As it falls towards the Earth, to conserve angular momentum, its angular velocity has to increase to compensate for the decreased radius. So it spins faster than the Earth and drifts towards the East, relative to the Earth.
+\end{eg}
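+As a sanity check of the eastward displacement $\frac{1}{3}\omega g\left(\frac{2h}{g}\right)^{3/2}$, we can put in numbers; the tower height and the figures for $\omega$ and $g$ below are illustrative assumptions:
+
+```python
+import math
+
+omega = 7.292e-5  # Earth's angular velocity, rad/s (assumed value)
+g = 9.81          # m/s^2
+h = 100.0         # tower height, m (illustrative)
+
+t_fall = math.sqrt(2 * h / g)
+# Eastward displacement (1/3) * omega * g * t^3 from the first-order solution
+east = omega * g * t_fall**3 / 3
+print(east)  # about 2 cm for a 100 m tower
+```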
+
+\begin{eg}
+ Consider a pendulum that is free to swing in any plane, e.g.\ a weight on a string. At the North pole, it will swing in a plane that is fixed in an inertial frame, while the Earth rotates beneath it. From the perspective of the rotating frame, the plane of the pendulum rotates backwards. This can be explained as a result of the Coriolis force.
+
+ In general, at latitude $\lambda$, the plane rotates rightwards with period $\frac{1\text{ day}}{\sin \lambda}$.
+\end{eg}
+
+\section{Systems of particles}
+Now suppose we have $N$ interacting particles. We adopt the following notation: particle $i$ has mass $m_i$, position $\mathbf{r}_i$, and momentum $\mathbf{p}_i = m_i \dot{\mathbf{r}}_i$. Note that the subscript denotes which particle it is referring to, not vector components.
+
+Newton's Second Law for particle $i$ is
+\[
+ m_i \ddot{\mathbf{r}}_i = \dot{\mathbf{p}}_i = \mathbf{F}_i,
+\]
+where $\mathbf{F}_i$ is the total force acting on particle $i$. We can write $\mathbf{F}_i$ as
+\[
+ \mathbf{F}_i = \mathbf{F}_i^{\text{ext}} + \sum_{j = 1}^N \mathbf{F}_{ij},
+\]
+where $\mathbf{F}_{ij}$ is the force on particle $i$ by particle $j$, and $\mathbf{F}_i^{\text{ext}}$ is the external force on $i$, which comes from particles outside the system.
+
+Since a particle cannot exert a force on itself, we have $\mathbf{F}_{ii} = \mathbf{0}$. Also, Newton's third law requires that
+\[
+ \mathbf{F}_{ij} = -\mathbf{F}_{ji}.
+\]
+For example, if the particles interact only via gravity, then we have
+\[
+ \mathbf{F}_{ij} = -\frac{Gm_im_j(\mathbf{r}_i - \mathbf{r}_j)}{|\mathbf{r}_i - \mathbf{r}_j|^3} = -\mathbf{F}_{ji}.
+\]
+\subsection{Motion of the center of mass}
+Sometimes, we are interested in the collection of particles as a whole. For example, if we treat a cat as a collection of particles, we are more interested in how the cat as a whole falls, instead of tracking the individual particles of the cat.
+
+Hence, we define some aggregate quantities of the system such as the total mass and investigate how these quantities relate.
+\begin{defi}[Total mass]
+ The \emph{total mass} of the system is $M = \sum m_i$.
+\end{defi}
+
+\begin{defi}[Center of mass]
+ The \emph{center of mass} is located at
+ \[
+ \mathbf{R} = \frac{1}{M}\sum_{i = 1}^N m_i\mathbf{r}_i.
+ \]
+ This is the mass-weighted average position.
+\end{defi}
+
+\begin{defi}[Total linear momentum]
+ The \emph{total linear momentum} is
+ \[
+ \mathbf{P} = \sum_{i = 1}^N \mathbf{p}_i = \sum_{i = 1}^N m_i \dot{\mathbf{r}}_i = M\dot{\mathbf{R}}.
+ \]
+ Note that this is equivalent to the momentum of a single particle of mass $M$ at the center of mass.
+\end{defi}
+
+\begin{defi}[Total external force]
+ The \emph{total external force} is
+ \[
+ \mathbf{F} = \sum_{i = 1}^N \mathbf{F}_i^{\text{ext}}.
+ \]
+\end{defi}
+
+We can now obtain the equation of motion of the center of mass:
+\begin{prop}
+ \[
+ M\ddot{\mathbf{R}} = \mathbf{F}.
+ \]
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ M\ddot{\mathbf{R}} &= \dot{\mathbf{P}}\\
+ &= \sum_{i = 1}^N \dot{\mathbf{p}}_i\\
+ &= \sum_{i = 1}^N \mathbf{F}_i^{\text{ext}} + \sum_{i = 1}^N\sum_{j = 1}^N \mathbf{F}_{ij}\\
+ &= \mathbf{F} + \frac{1}{2}\sum_i \sum_j(\mathbf{F}_{ij} + \mathbf{F}_{ji})\\
+ &= \mathbf{F}\qedhere
+ \end{align*}
+\end{proof}
+This means that if we don't care about the internal structure, we can treat the system as a point particle of mass $M$ at the center of mass $\mathbf{R}$. This is why Newton's Laws apply to macroscopic objects even though they are not individual particles.
+
+\begin{law}[Conservation of momentum]
+ If there is no external force, i.e.\ $\mathbf{F} = \mathbf{0}$, then $\dot{\mathbf{P}} = \mathbf{0}$. So the total momentum is conserved.
+\end{law}
+
+If there is no external force, then the center of mass moves uniformly in a straight line. In this case, we can pick a particularly nice frame of reference, known as the \emph{center of mass frame}.
+\begin{defi}[Center of mass frame]
+ The \emph{center of mass frame} is an inertial frame in which $\mathbf{R} = \mathbf{0}$ for all time.
+\end{defi}
+Doing calculations in the center of mass frame is usually much more convenient than using other frames.
+
+Having dealt with linear motion, we can now look at angular motion.
+\begin{defi}[Total angular momentum]
+ The \emph{total angular momentum} of the system about the origin is
+ \[
+ \mathbf{L} = \sum_i \mathbf{r}_i \times \mathbf{p}_i.
+ \]
+\end{defi}
+How does the total angular momentum change with time? Here we have to assume a stronger version of Newton's Third law, saying that
+\[
+ \mathbf{F}_{ij} = -\mathbf{F}_{ji}\text{ and is parallel to }(\mathbf{r}_i - \mathbf{r}_j).
+\]
+This is true, at least, for gravitational and electrostatic forces.
+
+Then we have
+\begin{align*}
+ \dot{\mathbf{L}} &= \sum_i \mathbf{r}_i \times \dot{\mathbf{p}}_i + \dot{\mathbf{r}}_i \times \mathbf{p}_i\\
+ &= \sum_i \mathbf{r}_i \times \left(\mathbf{F}_i^{\text{ext}} + \sum_j \mathbf{F}_{ij}\right) + m_i(\dot{\mathbf{r}}_i\times \dot{\mathbf{r}}_i)\\
+ &= \sum_i \mathbf{r}_i\times \mathbf{F}_i^{\text{ext}} + \sum_i \sum_j \mathbf{r}_i \times \mathbf{F}_{ij}\\
+ &= \sum_i \mathbf{G}_i^{\text{ext}} + \frac{1}{2}\sum_i\sum_j (\mathbf{r}_i \times \mathbf{F}_{ij} + \mathbf{r}_j\times \mathbf{F}_{ji})\\
+ &= \mathbf{G} + \frac{1}{2}\sum_i\sum_j(\mathbf{r}_i - \mathbf{r}_j)\times \mathbf{F}_{ij}\\
+ &= \mathbf{G},
+\end{align*}
+where
+\begin{defi}[Total external torque]
+ The \emph{total external torque} is
+ \[
+ \mathbf{G} = \sum_i \mathbf{r}_i \times \mathbf{F}_i^{\text{ext}}.
+ \]
+\end{defi}
+So the total angular momentum is conserved if $\mathbf{G} = \mathbf{0}$, i.e.\ if the total external torque vanishes.
+
+\subsection{Motion relative to the center of mass}
+So far, we have shown that externally, a multi-particle system behaves as if it were a point particle at the center of mass. But internally, what happens to the individual particles themselves?
+
+We write $\mathbf{r}_i = \mathbf{R} + \mathbf{r}_i^c$, where $\mathbf{r}_i^c$ is the position of particle $i$ relative to the center of mass.
+
+We first obtain two useful equalities:
+\[
+ \sum_i m_i\mathbf{r}_i^c = \sum m_i \mathbf{r}_i - \sum m_i \mathbf{R} = M\mathbf{R} - M\mathbf{R} = \mathbf{0}.
+\]
+Differentiating gives
+\[
+ \sum_i m_i \dot{\mathbf{r}}_i^c = \mathbf{0}.
+\]
+Using these equalities, we can express the angular momentum and kinetic energy in terms of $\mathbf{R}$ and $\mathbf{r}_i^c$ only:
+\begin{align*}
+ \mathbf{L} &= \sum_i m_i(\mathbf{R} + \mathbf{r}_i^c) \times (\dot{\mathbf{R}} + \dot{\mathbf{r}}_i^c)\\
+ &= \sum_i m_i \mathbf{R}\times \dot{\mathbf{R}} + \mathbf{R}\times \sum_i m_i \dot{\mathbf{r}}_i^c + \sum_i m_i \mathbf{r}_i^c\times \dot{\mathbf{R}} + \sum_i m_i \mathbf{r}_i^c \times \dot{\mathbf{r}}_i^c\\
+ &= M\mathbf{R}\times \dot{\mathbf{R}} + \sum_i m_i \mathbf{r}_i^c \times \dot{\mathbf{r}}_i^c\displaybreak[0]\\
+ T&= \frac{1}{2}\sum_i m_i|\dot{\mathbf{r}}_i|^2\\
+ &= \frac{1}{2}\sum_i m_i (\dot{\mathbf{R}} + \dot{\mathbf{r}}_i^c)\cdot (\dot{\mathbf{R}} + \dot{\mathbf{r}}_i^c)\\
+ &= \frac{1}{2}\sum_i m_i \dot{\mathbf{R}}\cdot \dot{\mathbf{R}} + \dot{\mathbf{R}} \cdot \sum_i m_i \dot{\mathbf{r}}_i^c + \frac{1}{2}\sum_i m_i \dot{\mathbf{r}}_i^c\cdot \dot{\mathbf{r}}_i^c\\
+ &=\frac{1}{2}M|\dot{\mathbf{R}}|^2 + \frac{1}{2}\sum_i m_i|\dot{\mathbf{r}}_i^c|^2
+\end{align*}
+We see that each quantity decomposes into two parts --- a contribution from the center of mass, and a contribution from motion relative to the center of mass.
+
+If the forces are conservative in the sense that
+\[
+ \mathbf{F}_i^{\text{ext}} = -\nabla_i V_i(\mathbf{r}_i),
+\]
+and
+\[
+ \mathbf{F}_{ij} = -\nabla_i V_{ij} (\mathbf{r}_i - \mathbf{r}_j),
+\]
+where $\nabla_i$ is the gradient with respect to $\mathbf{r}_i$, then energy is conserved in the form
+\[
+ E = T + \sum_i V_i(\mathbf{r}_i) + \frac{1}{2}\sum_i\sum_j V_{ij}(\mathbf{r}_i - \mathbf{r}_j) = \text{const.}
+\]
+\subsection{The two-body problem}
+The \emph{two-body problem} is to determine the motion of two bodies interacting only via gravitational forces.
+
+The center of mass is at
+\[
+ \mathbf{R} = \frac{1}{M} (m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2),
+\]
+where $M = m_1 + m_2$.
+
+The magic trick to solving the two-body problem is to define the separation vector (or relative position vector)
+\[
+ \mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2.
+\]
+Then we write everything in terms of $\mathbf{R}$ and $\mathbf{r}$.
+\[
+ \mathbf{r}_1 = \mathbf{R} + \frac{m_2}{M}\mathbf{r},\quad \mathbf{r}_2 = \mathbf{R} - \frac{m_1}{M}\mathbf{r}.
+\]
+\begin{center}
+ \begin{tikzpicture}[rotate=30]
+ \node [circ] {};
+ \node [anchor = south east] {$\mathbf{r}_2$};
+
+ \node at (3, 0) [circ, minimum size = 4] {};
+ \node at (3, 0) [anchor = south east] {$\mathbf{r}_1$};
+
+ \node at (2, 0) [circ, minimum size = 3] {};
+ \node at (2, 0) [anchor = south east] {$\mathbf{R}$};
+ \draw [->-=0.5] (0, 0) -- (3, 0) node [pos =0.5, anchor = south east] {$\mathbf{r}$};
+ \end{tikzpicture}
+\end{center}
+Since the external force $\mathbf{F} = \mathbf{0}$, we have $\ddot{\mathbf{R}} = \mathbf{0}$, i.e.\ the center of mass moves uniformly.
+
+Meanwhile,
+\begin{align*}
+ \ddot{\mathbf{r}} &= \ddot{\mathbf{r}}_1 - \ddot{\mathbf{r}}_2\\
+ &= \frac{1}{m_1} \mathbf{F}_{12} - \frac{1}{m_2}\mathbf{F}_{21}\\
+ &= \left(\frac{1}{m_1} + \frac{1}{m_2}\right) \mathbf{F}_{12}
+\end{align*}
+We can write this as
+\[
+ \mu \ddot{\mathbf{r}} = \mathbf{F}_{12}(\mathbf{r}),
+\]
+where
+\[
+ \mu = \frac{m_1m_2}{m_1 + m_2}
+\]
+is the \emph{reduced mass}. This is the same as the equation of motion for \emph{one particle} of mass $\mu$ with position vector $\mathbf{r}$ relative to a fixed origin --- as studied previously.
+
+For example, with gravity,
+\[
+ \mu \ddot{\mathbf{r}} = -\frac{Gm_1m_2 \hat{\mathbf{r}}}{|\mathbf{r}|^2}.
+\]
+So
+\[
+ \ddot{\mathbf{r}} = -\frac{GM\hat{\mathbf{r}}}{|\mathbf{r}|^2}.
+\]
+For example, for a planet orbiting the Sun, both the planet and the Sun move in ellipses about their common center of mass. The orbital period depends on the total mass.
+
+It can be shown that
+\begin{align*}
+ \mathbf{L} &= M\mathbf{R} \times \dot{\mathbf{R}} + \mu \mathbf{r}\times \dot{\mathbf{r}}\\
+ T &= \frac{1}{2} M|\dot{\mathbf{R}}|^2 + \frac{1}{2}\mu |\dot{\mathbf{r}}|^2
+\end{align*}
+by expressing $\mathbf{r}_1$ and $\mathbf{r}_2$ in terms of $\mathbf{r}$ and $\mathbf{R}$.
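+We can confirm numerically that, with no external force, the centre of mass of a two-body system moves uniformly. A minimal sketch with made-up masses and $G = 1$ (all values are illustrative, not from the notes):
+
+```python
+import numpy as np
+
+# Two gravitating bodies with G = 1 and made-up masses (illustrative values)
+G, m1, m2 = 1.0, 3.0, 1.0
+M = m1 + m2
+
+r1 = np.array([1.0, 0.0]); r2 = np.array([-1.0, 0.0])
+v1 = np.array([0.0, 0.3]); v2 = np.array([0.0, -0.9])  # total momentum zero
+
+dt = 1e-4
+R0 = (m1 * r1 + m2 * r2) / M
+for _ in range(20000):
+    d = r1 - r2
+    F12 = -G * m1 * m2 * d / np.linalg.norm(d)**3  # force on body 1 by body 2
+    v1 = v1 + F12 / m1 * dt        # Newton's third law: F21 = -F12
+    v2 = v2 - F12 / m2 * dt
+    r1 = r1 + v1 * dt
+    r2 = r2 + v2 * dt
+R_final = (m1 * r1 + m2 * r2) / M
+drift = np.linalg.norm(R_final - R0)
+print(drift)  # essentially zero: the centre of mass stays put
+```
+
+Since the symplectic-Euler update applies equal and opposite impulses, the total momentum (here zero) is conserved exactly, so the centre of mass never moves.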
+
+\subsection{Variable-mass problem}
+All systems we've studied so far have fixed mass. However, in real life, many objects have changing mass, such as rockets, fireworks, falling raindrops and rolling snowballs.
+
+Again, we will use Newton's second law, which states that
+\[
+ \frac{\d \mathbf{p}}{\d t} = \mathbf{F},\quad\text{with }\mathbf{p} = m\dot{\mathbf{r}}.
+\]
+We will consider a rocket moving in one dimension with mass $m(t)$ and velocity $v(t)$. The rocket propels itself forwards by burning fuel and ejecting the exhaust at velocity $-u$ relative to the rocket.
+
+At time $t$, the rocket looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0.5) rectangle (3, -0.5);
+ \node at (1.5, 0) {$m(t)$};
+ \draw [->] (1, 0.7) -- (2, 0.7) node [pos = 0.5, above] {$v(t)$};
+ \end{tikzpicture}
+\end{center}
+At time $t + \delta t$, it ejects exhaust of mass $m(t) - m(t + \delta t)$ with velocity $v(t) - u + O(\delta t)$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0.5) rectangle (3, -0.5);
+ \node at (1.5, 0) {$m(t + \delta t)$};
+ \draw [->] (1, 0.7) -- (2, 0.7) node [pos = 0.5, above] {$v(t)$};
+ \node at (-1, 0) [circ] {$m$};
+ \draw [->] (-1.33, 0) -- (-2, 0) node [left] {$v(t) - u$};
+ \end{tikzpicture}
+\end{center}
+The change in total momentum of the system (rocket + exhaust) is
+\begin{align*}
+ \delta p &= m(t + \delta t)v(t + \delta t) + [m(t) - m(t + \delta t)][v(t) - u(t) + O(\delta t)] - m(t)v(t)\\
+ &= (m + \dot{m}\delta t + O(\delta t^2))(v + \dot{v} \delta t + O(\delta t^2)) - \dot{m}\delta t(v - u) + O(\delta t^2) - mv\\
+ &= (\dot{m}v + m\dot{v} - \dot{m}v + \dot{m}u)\delta t + O(\delta t^2)\\
+ &= (m\dot{v} + \dot{m}u)\delta t + O(\delta t^2).
+\end{align*}
+Newton's second law gives
+\[
+ \lim_{\delta t \to 0} \frac{\delta p}{\delta t} = F,
+\]
+where $F$ is the external force on the rocket. So we obtain
+\begin{prop}[Rocket equation]
+ \[
+ m\frac{\d v}{\d t} + u\frac{\d m}{\d t} = F.
+ \]
+\end{prop}
+\begin{eg}
+ Suppose that we travel in space with $F = 0$. Assume also that $u$ is constant. Then we have
+ \[
+ m\frac{\d v}{\d t} = -u\frac{\d m}{\d t}.
+ \]
+ So
+ \[
+ v = v_0 + u \log \left(\frac{m_0}{m(t)}\right).
+ \]
+ Note that we are expressing things in terms of the mass remaining $m$, not time $t$.
+
+ Note also that the velocity does not depend on the rate at which mass is ejected, only the velocity at which it is ejected. Of course, if we expressed $v$ as a function of time, then the velocity at a specific time \emph{does} depend on the rate at which mass is ejected.
+\end{eg}
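+The claim that the final velocity depends only on the mass ratio, not on the burn rate, can be checked by integrating the rocket equation numerically for two very different burn rates (all figures below are illustrative assumptions):
+
+```python
+import math
+
+def final_velocity(m0, mf, u, burn_rate, dt=0.01):
+    """Integrate m dv/dt = -u dm/dt (with F = 0) for a constant burn rate."""
+    m, v = m0, 0.0
+    while m > mf:
+        dm = min(burn_rate * dt, m - mf)
+        v += u * dm / m   # dv = -u dm/m, with dm the mass burnt this step
+        m -= dm
+    return v
+
+# Illustrative figures: exhaust speed u, initial and final masses
+u, m0, mf = 3000.0, 1000.0, 100.0
+v_slow = final_velocity(m0, mf, u, burn_rate=1.0)
+v_fast = final_velocity(m0, mf, u, burn_rate=50.0)
+v_exact = u * math.log(m0 / mf)
+print(v_slow, v_fast, v_exact)  # all close: the burn rate drops out
+```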
+
+\begin{eg}
+ Consider a falling raindrop of mass $m(t)$, gathering mass from a stationary cloud. In this case, $u = v$. So
+ \[
+ m\frac{\d v}{\d t} + v\frac{\d m}{\d t} = \frac{\d }{\d t}(mv) = mg,
+ \]
+ with $v$ measured downwards. To obtain a solution of this, we will need a model to determine the rate at which the raindrop gathers mass.
+\end{eg}
+
+\section{Rigid bodies}
+This chapter is somewhat similar to the previous chapter. We again have a lot of particles and we study their motion. However, instead of having forces between the individual particles, this time the particles are constrained such that their relative positions are fixed. This corresponds to a solid object that cannot deform. We call these \emph{rigid bodies}.
+
+\begin{defi}[Rigid body]
+ A \emph{rigid body} is an extended object, consisting of $N$ particles that are constrained such that the distance between any pair of particles, $|\mathbf{r}_i - \mathbf{r}_j|$, is fixed.
+\end{defi}
+
+The possible motions of a rigid body are the continuous isometries of Euclidean space, i.e.\ translations and rotations. However, as we have previously shown, pure translations of rigid bodies are uninteresting --- they simply correspond to the center of mass moving under an external force. Hence we will first study rotations.
+
+Later, we will combine rotational and translational effects and see what happens.
+
+\subsection{Angular velocity}
+We'll first consider the cases where there is just one particle, moving in a circle of radius $s$ about the $z$ axis. Its position and velocity vectors are
+\begin{align*}
+ \mathbf{r} &= (s\cos \theta, s\sin \theta, z)\\
+ \dot{\mathbf{r}} &= (-s \dot{\theta}\sin \theta, s\dot{\theta}\cos \theta, 0).
+\end{align*}
+We can write
+\[
+ \dot{\mathbf{r}} = \boldsymbol\omega\times \mathbf{r},
+\]
+where
+\[
+ \boldsymbol\omega = \dot{\theta}\hat{\mathbf{z}}
+\]
+is the angular velocity vector.
+
+In general, we write
+\[
+ \boldsymbol\omega = \dot{\theta} \hat{\mathbf{n}} = \omega\hat{\mathbf{n}},
+\]
+where $\hat{\mathbf{n}}$ is a unit vector parallel to the rotation axis.
+
+The kinetic energy of this particle is thus
+\begin{align*}
+ T &= \frac{1}{2}m|\dot{\mathbf{r}}|^2\\
+ &= \frac{1}{2}m s^2 \dot{\theta}^2\\
+ &= \frac{1}{2}I \omega^2
+\end{align*}
+where $I = ms^2$ is the \emph{moment of inertia}. This is the counterpart of ``mass'' in rotational motion.
+\begin{defi}[Moment of inertia]
+ The \emph{moment of inertia} of a particle is
+ \[
+ I = ms^2 = m|\hat{\mathbf{n}}\times \mathbf{r}|^2,
+ \]
+ where $s$ is the distance of the particle from the axis of rotation.
+\end{defi}
+\subsection{Moment of inertia}
+In general, consider a rigid body in which all $N$ particles rotate about the same axis with the same angular velocity:
+\[
+ \dot{\mathbf{r}}_i = \boldsymbol\omega\times \mathbf{r}_i.
+\]
+This ensures that
+\[
+ \frac{\d }{\d t}|\mathbf{r}_i - \mathbf{r}_j|^2 = 2(\dot{\mathbf{r}}_i - \dot{\mathbf{r}}_j)\cdot (\mathbf{r}_i - \mathbf{r}_j) = 2\big(\boldsymbol\omega\times (\mathbf{r}_i - \mathbf{r}_j)\big) \cdot (\mathbf{r}_i - \mathbf{r}_j) = 0,
+\]
+as required for a rigid body.
+
+Similar to what we had above, the rotational kinetic energy is
+\[
+ T = \frac{1}{2}\sum_{i = 1}^N m_i|\dot{\mathbf{r}}_i|^2 = \frac{1}{2}I\omega^2,
+\]
+where
+\begin{defi}[Moment of inertia]
+ The \emph{moment of inertia} of a rigid body about the rotation axis $\hat{\mathbf{n}}$ is
+ \[
+ I = \sum_{i = 1}^N m_is_i^2 = \sum_{i = 1}^N m_i |\hat{\mathbf{n}}\times \mathbf{r}_i|^2.
+ \]
+\end{defi}
+Again, we define the angular momentum of the system:
+\begin{defi}
+ The \emph{angular momentum} is
+ \[
+ \mathbf{L} = \sum_i m_i \mathbf{r}_i \times \dot{\mathbf{r}}_i = \sum_i m_i \mathbf{r}_i \times (\boldsymbol\omega \times \mathbf{r}_i).
+ \]
+\end{defi}
+Note that our definitions for angular motion are analogous to those for linear motion. The moment of inertia $I$ is defined such that $T = \frac{1}{2}I\omega^2$. Ideally, we would want the angular momentum to be $\mathbf{L} = I\boldsymbol \omega$. However, this is not true in general. In fact, $\mathbf{L}$ need not be parallel to $\boldsymbol\omega$.
+
+What \emph{is} true, is that the component of $\mathbf{L}$ parallel to $\boldsymbol\omega$ is equal to $I\omega$. Write $\boldsymbol\omega = \omega\hat{\mathbf{n}}$. Then we have
+\begin{align*}
+ \mathbf{L} \cdot \hat{\mathbf{n}} &= \omega \sum_i m_i \hat{\mathbf{n}}\cdot (\mathbf{r}_i \times (\hat{\mathbf{n}} \times \mathbf{r}_i))\\
+ &= \omega \sum_i m_i(\hat{\mathbf{n}}\times \mathbf{r}_i)\cdot (\hat{\mathbf{n}} \times \mathbf{r}_i)\\
+ &= I\omega.
+\end{align*}
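+This identity can be checked numerically for an arbitrary collection of particles (the particle data below are random illustrative values, not from the notes):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+masses = rng.uniform(0.5, 2.0, size=5)   # arbitrary particle masses
+positions = rng.normal(size=(5, 3))      # arbitrary positions
+
+n_hat = np.array([0.0, 0.0, 1.0])        # rotation axis
+w = 2.0                                  # angular speed (illustrative)
+omega = w * n_hat
+
+# L = sum_i m_i r_i x (omega x r_i),  I = sum_i m_i |n x r_i|^2
+L = sum(m * np.cross(r, np.cross(omega, r)) for m, r in zip(masses, positions))
+I = sum(m * np.dot(np.cross(n_hat, r), np.cross(n_hat, r))
+        for m, r in zip(masses, positions))
+
+print(np.dot(L, n_hat), I * w)  # the two agree, though L itself need not be parallel to omega
+```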
+What does $\mathbf{L}$ itself look like? Using vector identities, we have
+\[
+ \mathbf{L} = \sum_i m_i\big((\mathbf{r}_i\cdot \mathbf{r}_i)\boldsymbol \omega - (\mathbf{r}_i \cdot \boldsymbol\omega)\mathbf{r}_i\big)
+\]
+Note that this is a linear function of $\boldsymbol\omega$. So we can write
+\[
+ \mathbf{L} = I\boldsymbol \omega,
+\]
+where we abuse notation to use $I$ for the \emph{inertia tensor}. This is represented by a symmetric matrix with components
+\[
+ I_{jk} = \sum_i m_i(|\mathbf{r}_i|^2 \delta_{jk} - (\mathbf{r}_i)_j(\mathbf{r}_i)_k),
+\]
+where $i$ refers to the index of the particle, and $j, k$ are dummy suffixes.
+
+If the body rotates about a \emph{principal axis}, i.e.\ one of the three orthogonal eigenvectors of $I$, then $\mathbf{L}$ will be parallel to $\boldsymbol\omega$. Usually, the principal axes lie on the axes of rotational symmetry of the body.
+
+\subsection{Calculating the moment of inertia}
+For a solid body, we usually want to think of it as a continuous substance with a mass density, instead of individual point particles. So we replace the sum of particles by a volume integral weighted by the mass density $\rho(\mathbf{r})$.
+
+\begin{defi}[Mass, center of mass and moment of inertia]
+ The \emph{mass} is
+ \[
+ M = \int \rho \;\d V.
+ \]
+ The \emph{center of mass} is
+ \[
+ \mathbf{R} = \frac{1}{M}\int \rho \mathbf{r}\;\d V
+ \]
+ The \emph{moment of inertia} is
+ \[
+ I = \int \rho s^2 \;\d V = \int \rho|\hat{\mathbf{n}}\times \mathbf{r}|^2\;\d V.
+ \]
+\end{defi}
+In theory, we can study inhomogeneous bodies with varying $\rho$, but we will usually consider homogeneous bodies with constant $\rho$ throughout.
+
+\begin{eg}[Thin circular ring]
+ Suppose the ring has mass $M$ and radius $a$, and a rotation axis through the center, perpendicular to the plane of the ring.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [x radius = 1.5, y radius = 0.4];
+ \draw circle [x radius = 1.4, y radius = 0.35];
+ \draw (0, 0) -- (1.45, 0) node [pos = 0.5, above] {$a$};
+ \draw [dashed] (0, -1) -- (0, 1);
+ \end{tikzpicture}
+ \end{center}
+ Then the moment of inertia is
+ \[
+ I = Ma^2.
+ \]
+\end{eg}
+
+\begin{eg}[Thin rod]
+ Suppose a rod has mass $M$ and length $\ell$. It rotates through one end, perpendicular to the rod.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (0, -1) -- (0, 1);
+ \draw (0, 0.05) rectangle (2, -0.05);
+ \node at (1, 0.3) {$\ell$};
+ \end{tikzpicture}
+ \end{center}
+ The mass per unit length is $M/\ell$. So the moment of inertia is
+ \[
+ I = \int_0 ^\ell \frac{M}{\ell}x^2\;\d x = \frac{1}{3}M\ell^2.
+ \]
+\end{eg}
+
+\begin{eg}[Thin disc]
+ Consider a disc of mass $M$ and radius $a$, with a rotation axis through the center, perpendicular to the plane of the disc.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle[x radius = 1.5, y radius = 0.4];
+ \draw (0, 0) -- (1.45, 0) node [pos = 0.5, above] {$a$};
+ \draw [dashed] (0, -1) -- (0, -0.4);
+ \draw [dashed] (0, 0) -- (0, 1);
+ \end{tikzpicture}
+ \end{center}
+ Then
+ \begin{align*}
+ I &= \int_{0}^{2\pi}\int_0^a \underbrace{\frac{M}{\pi a^2}}_{\text{mass per unit area}} \underbrace{r^2}_{s^2} \underbrace{r\;\d r\;\d \theta}_{\text{area element}}\\
+ &= \frac{M}{\pi a^2}\int_0^a r^3\;\d r \int_0^{2\pi}\;\d \theta\\
+ &= \frac{M}{\pi a^2}\frac{1}{4}a^4 (2\pi)\\
+ &= \frac{1}{2}Ma^2.
+ \end{align*}
+ Now suppose that the rotation axis is in the plane of the disc instead (also rotating through the center). Then
+ \begin{align*}
+ I &= \int_{0}^{2\pi}\int_0^a \underbrace{\frac{M}{\pi a^2}}_{\text{mass per unit area}} \underbrace{(r\sin \theta)^2}_{s^2} \underbrace{r\;\d r\;\d \theta}_{\text{area element}}\\
+ &= \frac{M}{\pi a^2}\int_0^a r^3\;\d r \int_0^{2\pi}\sin^2 \theta\;\d \theta\\
+ &= \frac{M}{\pi a^2}\frac{1}{4}a^4 \pi\\
+ &= \frac{1}{4}Ma^2.
+ \end{align*}
+\end{eg}
+\begin{eg}
+ Consider a solid sphere with mass $M$, radius $a$, with a rotation axis through the center.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius = 1];
+ \draw (0, 0) -- (0.707, 0.707) node [pos =0.5, anchor = south east] {$a$};
+ \draw [dashed] (0, -1.3) -- (0, 1.3);
+ \end{tikzpicture}
+ \end{center}
+ Using spherical polar coordinates $(r, \theta, \phi)$ based on the rotation axis,
+ \begin{align*}
+ I &= \int_{0}^{2\pi}\int_0^\pi\int_0^a \underbrace{\frac{M}{\frac{4}{3}\pi a^3}}_{\rho}\underbrace{(r\sin \theta)^2}_{s^2} \underbrace{r^2\sin \theta\;\d r\;\d \theta\;\d \phi}_{\text{volume element}}\\
+ &= \frac{M}{\frac{4}{3}\pi a^3}\int_0^a r^4 \;\d r\int_0^\pi (1 - \cos^2\theta)\sin \theta\;\d \theta \int_0^{2\pi}\;\d \phi\\
+ &= \frac{M}{\frac{4}{3}\pi a^3}\cdot \frac{1}{5}a^5 \cdot \frac{4}{3}\cdot 2\pi\\
+ &= \frac{2}{5}Ma^2.
+ \end{align*}
+\end{eg}
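+The integrals in the examples above can be verified by direct numerical quadrature; as a sketch, here is a midpoint-rule check of the solid sphere result (taking $M = a = 1$ for convenience):
+
+```python
+import math
+
+def sphere_inertia(n=400):
+    """Midpoint-rule quadrature of I = rho * (int r^4 dr) * (int sin^3 t dt) * 2*pi
+    for a solid sphere with M = a = 1, so rho = 3/(4*pi)."""
+    rho = 3 / (4 * math.pi)
+    dr, dth = 1.0 / n, math.pi / n
+    r_int = sum(((i + 0.5) * dr) ** 4 for i in range(n)) * dr             # -> 1/5
+    th_int = sum(math.sin((j + 0.5) * dth) ** 3 for j in range(n)) * dth  # -> 4/3
+    return rho * r_int * th_int * 2 * math.pi
+
+I_sphere = sphere_inertia()
+print(I_sphere)  # close to 2/5
+```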
+
+Usually, finding the moment of inertia involves doing complicated integrals. We will now come up with two theorems that help us find moments of inertia.
+\begin{thm}[Perpendicular axis theorem]
+ For a two-dimensional object (a lamina), and three perpendicular axes $x, y, z$ through the same spot, with $z$ normal to the plane,
+ \[
+ I_z = I_x + I_y,
+ \]
+ where $I_z$ is the moment of inertia about the $z$ axis.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (2, -1) node [right] {$x$};
+ \draw [->] (0, 0) -- (2, 1) node [right] {$y$};
+ \draw [->] (0, 0) -- (0, 2) node [above] {$z$};
+ \draw (-2, 0) -- (0, -1) -- (2, 0) -- (0, 1) -- cycle;
+ \end{tikzpicture}
+ \end{center}
+\end{thm}
+Note that this does \emph{not} apply to 3D objects! For example, in a sphere, $I_x = I_y = I_z$.
+
+\begin{proof}
+ Let $\rho$ be the mass per unit area of the lamina. Then
+ \begin{align*}
+ I_x &= \int \rho y^2 \;\d A\\
+ I_y &= \int \rho x^2 \;\d A\\
+ I_z &= \int \rho (x^2 + y^2)\;\d A = I_x + I_y.\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ For a disc, $I_x = I_y$ by symmetry. So $I_z = 2 I_x$.
+\end{eg}
+
+\begin{thm}[Parallel axis theorem]
+ If a rigid body of mass $M$ has moment of inertia $I^C$ about an axis passing through the center of mass, then its moment of inertia about a parallel axis a distance $d$ away is
+ \[
+ I = I^C + Md^2.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates{(0, 0) (1, -2) (3, -1) (4, 0) (4, 1) (2, 1) (1, 3)};
+ \node [circ] at (2, 0) {};
+ \node at (2, 0) [left] {CM};
+ \draw [dashed] (2, -2.5) -- (2, 3);
+ \draw [dashed] (3, -2.5) -- (3, 3);
+ \draw [->] (2, 0) -- (3, 0);
+ \draw [->] (3, 0) -- (2, 0) node [pos = 0.5, above] {$d$};
+ \end{tikzpicture}
+ \end{center}
+\end{thm}
+
+\begin{proof}
+ With a convenient choice of Cartesian coordinates such that the center of mass is at the origin and the two rotation axes are $x = y =0$ and $x = d, y = 0$,
+ \[
+ I^C = \int \rho (x^2 + y^2) \;\d V,
+ \]
+ and
+ \[
+ \int \rho \mathbf{r}\;\d V = \mathbf{0}.
+ \]
+ So
+ \begin{align*}
+ I &= \int \rho((x - d)^2 + y^2)\;\d V\\
+ &=\int \rho (x^2 + y^2)\;\d V - 2d\int \rho x\;\d V + \int d^2 \rho\;\d V\\
+ &= I^C + 0 + Md^2\\
+ &= I^C + Md^2.\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ Take a disc of mass $M$ and radius $a$, and rotation axis through a point on the circumference, perpendicular to the plane of the disc. Then
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle [x radius = 2, y radius = 0.6];
+ \node [circ] {};
+ \draw (0, 0) -- (1.03, 0.5) node [pos = 0.5, anchor = south east] {$a$};
+ \draw [dashed] (2, -1) -- (2, 1);
+ \end{tikzpicture}
+ \end{center}
+ \[
+ I = I^C + Ma^2 = \frac{1}{2}Ma^2 + Ma^2 = \frac{3}{2}Ma^2.
+ \]
+\end{eg}
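+We can check this result against a direct computation of $\int \rho s^2\;\d A$ with the axis through a rim point (taking $M = a = 1$; a midpoint-rule sketch):
+
+```python
+import math
+
+def disc_inertia_about_rim(n=500):
+    """Direct polar-coordinate quadrature of I = int rho * s^2 dA for a unit disc
+    (M = a = 1), with the rotation axis through the rim point (1, 0).
+    s^2 = (r cos t - 1)^2 + (r sin t)^2 = r^2 - 2 r cos t + 1."""
+    rho = 1 / math.pi
+    dr, dt = 1.0 / n, 2 * math.pi / n
+    total = 0.0
+    for i in range(n):
+        r = (i + 0.5) * dr
+        for j in range(n):
+            t = (j + 0.5) * dt
+            s2 = r * r - 2 * r * math.cos(t) + 1
+            total += rho * s2 * r * dr * dt
+    return total
+
+I_rim = disc_inertia_about_rim()
+print(I_rim)  # close to 3/2, as the parallel axis theorem predicts
+```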
+
+\subsection{Motion of a rigid body}
+The general motion of a rigid body can be described as a translation of its centre of mass, following a trajectory $\mathbf{R}(t)$, together with a rotation about an axis through the center of mass. As before, we write
+\[
+ \mathbf{r}_i = \mathbf{R} + \mathbf{r}_i^c.
+\]
+Then
+\[
+ \dot{\mathbf{r}}_i = \dot{\mathbf{R}} + \dot{\mathbf{r}}_i^c.
+\]
+Using this, we can break down the velocity and kinetic energy into translational and rotational parts.
+
+If the body rotates with angular velocity $\boldsymbol\omega$ about the center of mass, then
+\[
+ \dot{\mathbf{r}}_i^c = \boldsymbol\omega \times \mathbf{r}_i^c.
+\]
+Since $\mathbf{r}_i^c = \mathbf{r}_i - \mathbf{R}$, we have
+\[
+ \dot{\mathbf{r}}_i = \dot{\mathbf{R}} + \boldsymbol \omega \times \mathbf{r}_i^c = \dot{\mathbf{R}} + \boldsymbol\omega\times (\mathbf{r}_i - \mathbf{R}).
+\]
+On the other hand, the kinetic energy, as calculated in previous lectures, is
+\begin{align*}
+ T &= \frac{1}{2}M|\dot{\mathbf{R}}|^2 + \frac{1}{2}\sum_i m_i |\dot{\mathbf{r}}_i^c|^2\\
+ &= \underbrace{\frac{1}{2}M|\dot{\mathbf{R}}|^2}_{\text{translational KE}} + \underbrace{\frac{1}{2}I^c\omega^2}_{\text{rotational KE}}.
+\end{align*}
+Sometimes we do not want to use the center of mass as the center. For example, if an item is held at the edge and spun around, we'd like to study the motion about the point at which the item is held, and not the center of mass.
+
+So consider any point $Q$, with position vector $\mathbf{Q}(t)$ that is not the center of mass but moves with the rigid body, i.e.
+\[
+ \dot{\mathbf{Q}} = \dot{\mathbf{R}} + \boldsymbol\omega\times (\mathbf{Q} - \mathbf{R}).
+\]
+Usually this is a point inside the object itself, but we do not assume that in our calculation.
+
+Then we can write
+\begin{align*}
+ \dot{\mathbf{r}}_i &= \dot{\mathbf{R}} + \boldsymbol\omega\times (\mathbf{r}_i - \mathbf{R})\\
+ &= \dot{\mathbf{Q}} - \boldsymbol\omega\times (\mathbf{Q} - \mathbf{R}) + \boldsymbol\omega \times (\mathbf{r}_i - \mathbf{R})\\
+ &= \dot{\mathbf{Q}} + \boldsymbol\omega\times (\mathbf{r}_i - \mathbf{Q}).
+\end{align*}
+Therefore the motion can be considered as a translation of $Q$ (with \emph{different} velocity than the center of mass), together with rotation about $Q$ (with the \emph{same} angular velocity $\boldsymbol\omega$).
+
+\subsubsection*{Equations of motion}
+As shown previously, the linear and angular momenta evolve according to
+\begin{align*}
+ \dot{\mathbf{P}} &= \mathbf{F}\quad \text{(total external force)}\\
+ \dot{\mathbf{L}} &= \mathbf{G}\quad \text{(total external torque)}
+\end{align*}
+These two equations determine the translational and rotational motion of a rigid body.
+
+$\mathbf{L}$ and $\mathbf{G}$ depend on the choice of origin, which could be any point that is fixed in an inertial frame. More surprisingly, the equation $\dot{\mathbf{L}} = \mathbf{G}$ can also be applied about the center of mass, even if it is accelerating:
+If
+\[
+ m_i \ddot{\mathbf{r}}_i = \mathbf{F}_i,
+\]
+then
+\[
+ m_i \ddot{\mathbf{r}}_i^c = \mathbf{F}_i + m_i \ddot{\mathbf{R}}.
+\]
+So there is a fictitious force $m_i \ddot{\mathbf{R}}$ in the center-of-mass frame. But the total torque of the fictitious forces about the center of mass is
+\[
+ \sum_i \mathbf{r}_i^c \times \left(-m_i \ddot{\mathbf{R}}\right) = -\left(\sum m_i \mathbf{r}_i^c\right) \times \ddot{\mathbf{R}} = \mathbf{0}\times \ddot{\mathbf{R}} = \mathbf{0}.
+\]
+So we can still apply the above two equations.
+
+In summary, the laws of motion apply in any inertial frame, or the center of mass (possibly non-inertial) frame.
+
+\subsubsection*{Motion in a uniform gravitational field}
+In a uniform gravitational field $\mathbf{g}$, the total gravitational force and torque are the same as those that would act on a single particle of mass $M$ located at the center of mass (which is also the \emph{center of gravity}):
+\[
+ \mathbf{F} = \sum_i \mathbf{F}_i^{\mathrm{ext}} = \sum_i m_i \mathbf{g} = M\mathbf{g},
+\]
+and
+\[
+ \mathbf{G} = \sum_i \mathbf{G}_i^{\mathrm{ext}} = \sum_i \mathbf{r}_i \times (m_i \mathbf{g}) = \sum m_i \mathbf{r}_i \times \mathbf{g} = M\mathbf{R}\times \mathbf{g}.
+\]
+In particular, the gravitational torque about the center of mass vanishes: $\mathbf{G}^c = \mathbf{0}$.
+
+We obtain a similar result for gravitational potential energy.
+
+The gravitational potential in a uniform $\mathbf{g}$ is
+\[
+ \Phi_g = -\mathbf{r}\cdot \mathbf{g}.
+\]
+(since $\mathbf{g} = -\nabla \Phi_g$ by definition)
+
+So
+\begin{align*}
+ V^{\mathrm{ext}} &= \sum_i V_i^{\mathrm{ext}}\\
+ &= \sum_i m_i (- \mathbf{r}_i \cdot \mathbf{g})\\
+ &= M (-\mathbf{R}\cdot \mathbf{g}).
+\end{align*}
+
+\begin{eg}[Thrown stick]
+ Suppose we throw a symmetrical stick, so that its center of mass is at its geometric center. Then the center of the stick moves in a parabola. Meanwhile, the stick rotates with constant angular velocity about its center, due to the absence of torque.
+\end{eg}
+
+\begin{eg}
+ Swinging bar.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-1, 0) -- (1, 0);
+ \draw [ultra thick] (0, 0) -- (1, -2) node [anchor = south west, pos = 0.5] {$\ell$};
+ \draw [dashed] (0, 0) -- (0, -2.5);
+ \draw [->] (0.5, -1) -- (0.5, -2) node [below] {$Mg$};
+ \end{tikzpicture}
+ \end{center}
+ This is an example of a \emph{compound pendulum}.
+
+ Consider the bar to be rotating about the pivot (and not translating). Its angular momentum is $L = I\dot{\theta}$ with $I = \frac{1}{3}M\ell^2$. The gravitational torque about the pivot is
+ \[
+ G = - Mg\frac{\ell}{2} \sin \theta.
+ \]
+ The equation of motion is
+ \[
+ \dot{L} = G.
+ \]
+ So
+ \[
+ I\ddot{\theta} = -Mg \frac{\ell}{2}\sin \theta,
+ \]
+ or
+ \[
+ \ddot{\theta} = -\frac{3g}{2\ell} \sin \theta,
+ \]
+ which is exactly the equation of a simple pendulum of length $2\ell/3$, with angular frequency $\sqrt{\frac{3g}{2\ell}}$ for small oscillations.
+
+ This can also be obtained from an energy argument:
+ \[
+ E = T + V = \frac{1}{2}I\dot{\theta}^2 - Mg\frac{\ell}{2}\cos\theta.
+ \]
+ We differentiate to obtain
+ \[
+ \frac{\d E}{\d t} = \dot{\theta}(I\ddot{\theta} + Mg \frac{\ell}{2}\sin \theta) = 0.
+ \]
+ So
+ \[
+ I\ddot{\theta} = -Mg \frac{\ell}{2}\sin \theta.
+ \]
+\end{eg}
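As a quick sanity check (not in the notes), we can integrate $\ddot{\theta} = -\frac{3g}{2\ell}\sin\theta$ numerically for a small amplitude and compare the period with $2\pi\sqrt{2\ell/3g}$; the values of $g$ and $\ell$ are illustrative:

```python
import math

g, l = 9.81, 1.0                      # illustrative values
omega = math.sqrt(3 * g / (2 * l))    # predicted small-angle angular frequency

# Semi-implicit Euler integration of theta'' = -omega^2 sin(theta),
# starting from rest at a small angle.
theta, thetadot = 0.01, 0.0
dt, t = 1e-5, 0.0
while True:
    thetadot += -omega**2 * math.sin(theta) * dt
    theta += thetadot * dt
    t += dt
    if thetadot >= 0:                 # reached the opposite turning point
        break
period = 2 * t                        # time between turning points, doubled
print(period, 2 * math.pi / omega)    # agree closely for small amplitudes
```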
+
+\subsubsection*{Sliding versus rolling}
+Consider a cylinder or sphere of radius $a$, moving along a stationary horizontal surface.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill = gray] (-3, 0) rectangle (3, -1);
+ \draw (0, 1) circle [radius=1];
+ \draw (0, 1) -- (0.707, 1.707) node [pos=0.5, anchor = south east] {$a$};
+ \draw [->] (1.5, 1) -- (2.6, 1) node [right] {$v$};
+ \draw (0, 0.7) arc (270:380:0.3) node [anchor = north west] {$\omega$};
+ \draw [->] (0.01, 0.7) -- (0, 0.7);
+ \end{tikzpicture}
+\end{center}
+In general, the motion consists of a translation of the center of mass (with velocity $v$) plus a rotation about the center of mass (with angular velocity $\omega$).
+
+The horizontal velocity at the point of contact is
+\[
+ v_{\text{slip}} = v - a\omega.
+\]
+For a pure sliding motion, $v \not = 0$ and $\omega = 0$, in which case $v - a\omega \not = 0$: the point of contact moves relative to the surface and kinetic friction may occur.
+
+For a pure rolling motion, $v\not = 0$ and $\omega \not = 0$ such that $v - a\omega = 0$: the point of contact is stationary. This is the no-slip condition.
+
+The rolling body can alternatively be considered to be rotating instantaneously about the point of contact (with angular velocity $\omega$) and not translating.
+
+\begin{eg}[Rolling down hill]\leavevmode
+ \begin{center}
+ \begin{tikzpicture}[rotate=-30]
+ \draw [fill = gray] (-3, 0) rectangle (3, -1);
+ \draw (0, 1) circle [radius=1];
+ \draw (0, 1) -- (0.707, 1.707) node [pos=0.5, anchor = south east] {$a$};
+ \draw [->] (1.5, 1) -- (2.6, 1) node [right] {$v$};
+ \draw (0, 0.7) arc (270:380:0.3) node [anchor = north west] {$\omega$};
+ \draw [->] (0.01, 0.7) -- (0, 0.7);
+ \draw [dashed] (3, -1) -- (2, -1.577);
+ \draw (2.5, -1) arc (180:210:0.5) node [anchor = south east] {$\alpha$};
+ \end{tikzpicture}
+ \end{center}
+ Consider a cylinder or sphere of mass $M$ and radius $a$ rolling down a rough plane inclined at angle $\alpha$. The no-slip (rolling) condition is
+ \[
+ v - a\omega = 0 .
+ \]
+ The kinetic energy is
+ \[
+ T = \frac{1}{2}Mv^2 + \frac{1}{2}I\omega^2 = \frac{1}{2}\left(M + \frac{I}{a^2}\right)v^2.
+ \]
+ The total energy is
+ \[
+ E = \frac{1}{2}\left(M + \frac{I}{a^2}\right) \dot{x}^2 - Mgx\sin \alpha,
+ \]
+ where $x$ is the distance down the slope. Although there is a frictional force, the point at which it acts has zero instantaneous velocity, so the force does no work. Hence energy is conserved, and we have
+ \[
+ \frac{\d E}{\d t} = \dot{x}\left(\left(M + \frac{I}{a^2}\right)\ddot{x} - Mg\sin \alpha\right) = 0.
+ \]
+ So
+ \[
+ \left(M + \frac{I}{a^2}\right)\ddot{x} = Mg\sin \alpha.
+ \]
+ For example, if we have a uniform solid cylinder,
+ \[
+ I = \frac{1}{2}Ma^2\quad\text{(as for a disc)}
+ \]
+ and so
+ \[
+ \ddot{x} = \frac{2}{3}g\sin \alpha.
+ \]
+ For a thin cylindrical shell,
+ \[
+ I = Ma^2.
+ \]
+ So
+ \[
+ \ddot{x} = \frac{1}{2}g\sin \alpha.
+ \]
+ Alternatively, we may do it in terms of forces and torques,
+ \begin{center}
+ \begin{tikzpicture}[rotate=-30]
+ \draw [fill = gray] (-3, 0) rectangle (3, -1);
+ \draw (0, 1) circle [radius=1];
+ \draw [->] (1.5, 1) -- (2.6, 1) node [right] {$v$};
+ \draw [dashed] (3, -1) -- (2, -1.577);
+ \draw (2.5, -1) arc (180:210:0.5) node [anchor = south east] {$\alpha$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$N$};
+ \draw [->] (0, -0.1) -- (-0.5, -0.1) node [below] {$F$};
+ \draw [->] (0, 1) -- (0.866, -0.5) node [below] {$Mg$};
+ \end{tikzpicture}
+ \end{center}
+ The equations of motion are
+ \[
+ M\dot{v} = Mg\sin \alpha - F
+ \]
+ and
+ \[
+ I\dot \omega = aF.
+ \]
+ While rolling,
+ \[
+ \dot{v} - a\dot\omega = 0.
+ \]
+ So
+ \[
+ M\dot{v} = Mg\sin \alpha - \frac{I}{a^2}\dot{v},
+ \]
+ leading to the same result.
+
+ Note that even though there is a frictional force, it does no work, since $v_{\mathrm{slip}} = 0$. So energy is still conserved.
+\end{eg}
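The general pattern $\ddot{x} = g\sin\alpha/(1 + I/Ma^2)$ is easy to tabulate numerically. A quick sketch (the slope angle is illustrative, and the solid-sphere value $I = \frac{2}{5}Ma^2$ is standard but not derived above):

```python
import math

g, alpha = 9.81, math.radians(30)     # illustrative slope angle

def accel(I_over_Ma2):
    # From (M + I/a^2) x'' = M g sin(alpha).
    return g * math.sin(alpha) / (1 + I_over_Ma2)

assert math.isclose(accel(1 / 2), (2 / 3) * g * math.sin(alpha))  # solid cylinder
assert math.isclose(accel(1), (1 / 2) * g * math.sin(alpha))      # cylindrical shell
assert math.isclose(accel(2 / 5), (5 / 7) * g * math.sin(alpha))  # solid sphere
print(accel(0))  # I = 0: a frictionless sliding block, g sin(alpha)
```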
+
+\begin{eg}[Snooker ball]\leavevmode
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] (-2, 0) rectangle (2, -1);
+ \draw (0, 1) circle [radius=1];
+ \draw (0, 1) -- (-0.707, 1.707) node [pos = 0.5, anchor = south west] {$a$};
+ \draw [->] (0.1, 0) -- (0.1, 0.7) node [pos = 0.5, right] {$N$};
+ \draw [->] (0, 1) -- (0, 0.3) node [pos = 0.5, left] {$Mg$};
+ \draw [->] (0, -0.1) -- (-0.5, -0.1) node [pos = 0.5, below] {$F$};
+ \draw [->] (0, 1.3) arc (90:-30:0.3) node [anchor = south west] {$\omega$};
+ \end{tikzpicture}
+ \end{center}
+ It is struck centrally so as to initiate translation, but not rotation. Sliding occurs initially. Intuitively, we expect it to start rolling eventually, and we will see that this is indeed the case.
+
+ The constant frictional force is
+ \[
+ F = \mu_k N = \mu_k Mg,
+ \]
+ which applies while $v - a\omega > 0$.
+
+ The moment of inertia about the center of mass is
+ \[
+ I = \frac{2}{5}Ma^2.
+ \]
+ The equations of motion are
+ \begin{align*}
+ M\dot{v} &= -F\\
+ I\dot\omega &= aF
+ \end{align*}
+ Initially, $v = v_0$ and $\omega = 0$. Then the solution is
+ \begin{align*}
+ v &= v_0 - \mu_k gt\\
+ \omega &= \frac{5}{2} \frac{\mu_k g}{a}t
+ \end{align*}
+ as long as $v - a \omega > 0$. The slip velocity is
+ \[
+ v_{\mathrm{slip}} = v - a\omega = v_0 - \frac{7}{2}\mu_k gt = v_0 \left(1 - \frac{t}{t_\mathrm{roll}}\right),
+ \]
+ where
+ \[
+ t_{\mathrm{roll}} = \frac{2v_0}{7\mu_k g}.
+ \]
+ This is valid up till $t = t_{\mathrm{roll}}$. Then the slip velocity is 0, rolling begins and friction ceases.
+
+ At this point, $v = a\omega = \frac{5}{7}v_0$. The energy is then $\frac{5}{14}Mv_0^2 < \frac{1}{2}Mv_0^2$. So energy is lost to friction.
+\end{eg}
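A short numerical check of this example (the ball's mass, radius and $\mu_k$ are made-up illustrative values; the results $v = \frac{5}{7}v_0$ and $E = \frac{5}{14}Mv_0^2$ do not depend on them):

```python
v0, mu, g, a, M = 1.0, 0.2, 9.81, 0.026, 0.17   # illustrative values

t_roll = 2 * v0 / (7 * mu * g)
v = v0 - mu * g * t_roll                        # velocity when rolling begins
omega = (5 / 2) * (mu * g / a) * t_roll         # spin when rolling begins

assert abs(v - a * omega) < 1e-12               # slip velocity vanishes
assert abs(v - 5 * v0 / 7) < 1e-12              # v = (5/7) v0
E = 0.5 * M * v**2 + 0.5 * (2 / 5) * M * a**2 * omega**2
assert abs(E - 5 * M * v0**2 / 14) < 1e-12      # energy lost to friction
print(t_roll, v, E)
```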
+\section{Special relativity}
+When particles move Extremely Fast\textsuperscript{TM}, Newtonian Dynamics becomes inaccurate and is replaced by Einstein's Special Theory of Relativity (1905).
+
+Its effects become noticeable only when particles approach the speed of light,
+\[
+ c = \SI{299792458}{\meter\per\second} \approx \SI{3e8}{\meter\per\second}.
+\]
+This is \emph{really fast}.
+
+The Special Theory of Relativity rests on the following postulate:
+\begin{center}
+ \emph{The laws of physics are the same in all inertial frames}
+\end{center}
+This is the principle of relativity familiar to Galileo. Galilean relativity, mentioned in the first chapter, satisfies this postulate for dynamics. People used to think that Galilean relativity was what the world obeys. However, it turns out that there is a whole family of transformations satisfying the postulate (for dynamics), and the Galilean transformation is just one of them.
+
+This is not a problem (yet), since Galilean relativity seems so intuitive, and we might as well take it to be the true one. However, it turns out that solving Maxwell's equations of electromagnetism gives an explicit value of the speed of light, $c$. This is independent of the frame of reference. So the speed of light must be the same in every inertial frame.
+
+This is not compatible with Galilean relativity.
+
+Consider two inertial frames $S$ and $S'$, moving with relative velocity $v$. If light has velocity $c$ in $S$, then Galilean relativity predicts that it has velocity $c - v$ in $S'$, which is wrong.
+
+Therefore, we need to find a different solution to the principle of relativity that preserves the speed of light.
+
+\subsection{The Lorentz transformation}
+Consider again inertial frames $S$ and $S'$ whose origins coincide at $t = t' = 0$. For now, neglect the $y$ and $z$ directions, and consider the relationship between $(x, t)$ and $(x', t')$. The general form is
+\[
+ x' = f(x, t),\quad t' = g(x, t),
+\]
+for some functions $f$ and $g$. This is not very helpful.
+
+In any inertial frame, a free particle moves with constant velocity. So straight lines in $(x, t)$ must map into straight lines in $(x', t')$. Therefore the relationship must be linear.
+
+Given that the origins of $S$ and $S'$ coincide at $t = t' = 0$, and $S'$ moves with velocity $v$ relative to $S$, we know that the line $x = vt$ must map into $x'= 0$.
+
+Combining these two facts, the transformation must be of the form
+\[
+ x' = \gamma(x - vt),\tag{1}
+\]
+for some factor $\gamma$ that may depend on $|v|$, but \emph{not} on $v$ itself: a symmetry argument shows that $\gamma$ must take the same value for velocities $v$ and $-v$.
+
+Note that the Galilean transformation is compatible with this -- just take $\gamma$ to be always $1$.
+
+Now reverse the roles of the frames. From the perspective of $S'$, $S$ moves with velocity $-v$. A similar argument leads to
+\[
+ x = \gamma(x' + vt'),\tag{2}
+\]
+with the same factor $\gamma$, since $\gamma$ only depends on $|v|$. Now consider a light ray (or photon) passing through the origin $x = x' = 0$ at $t = t' = 0$. Its trajectory in $S$ is
+\[
+ x = ct.
+\]
+We want a $\gamma$ such that the trajectory in $S'$ is
+\[
+ x' = ct'
+\]
+as well, so that the speed of light is the same in each frame. Substitute these into (1) and (2)
+\begin{align*}
+ ct' &= \gamma(c - v)t\\
+ ct &= \gamma(c + v)t'
+\end{align*}
+Multiply the two equations together and divide by $tt'$ to obtain
+\[
+ c^2 = \gamma^2(c^2 - v^2).
+\]
+So
+\[
+ \gamma = \sqrt{\frac{c^2}{c^2 - v^2}} = \frac{1}{\sqrt{1 - (v/c)^2}}.
+\]
+\begin{defi}[Lorentz factor]
+ The \emph{Lorentz factor} is
+ \[
+ \gamma = \frac{1}{\sqrt{1 - (v/c)^2}}.
+ \]
+\end{defi}
+Note that
+\begin{itemize}
+ \item $\gamma \geq 1$ and is an increasing function of $|v|$.
+ \item When $v \ll c$, then $\gamma \approx 1$, and we recover the Galilean transformation.
+ \item When $|v|\to c$, then $\gamma\to \infty$.
+ \item If $|v| \geq c$, then $\gamma$ is imaginary, which is physically impossible (or at least \emph{weird}).
+ \item If we take $c\to \infty$, then $\gamma = 1$. So Galilean transformation is the transformation we will have if light is infinitely fast. Alternatively, in the world of Special Relativity, the speed of light is ``infinitely fast''.
+\end{itemize}
+\begin{center}
+ \begin{tikzpicture}[xscale=3, yscale=0.2]
+ \draw [->] (0, 0) -- (1.1, 0) node [right] {$\frac{v}{c}$};
+ \draw [->] (0, 0) -- (0, 15) node [above] {$\gamma$};
+ \draw [domain=0:0.997, samples=100] plot (\x, {1 / sqrt(1 - \x * \x)});
+ \draw [dashed] (1, 15) -- (1, 0) node [below] {$1$};
+ \end{tikzpicture}
+\end{center}
+For a sense of scale, we have the following values of $\gamma$ at different speeds:
+\begin{itemize}
+ \item $\gamma = 2$ when $v = 0.866c$.
+ \item $\gamma = 10$ when $v = 0.995c$.
+ \item $\gamma \approx 22.4$ when $v = 0.999c$.
+\end{itemize}
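Values like these are quick to verify directly (a sketch, not part of the notes):

```python
def gamma(beta):
    # Lorentz factor as a function of beta = v/c.
    return 1 / (1 - beta**2) ** 0.5

assert abs(gamma(0.866) - 2) < 0.01
assert abs(gamma(0.995) - 10) < 0.05
assert abs(gamma(0.999) - 22.4) < 0.05
print(gamma(0.1))   # ~1.005: everyday speeds are thoroughly non-relativistic
```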
+
+We still have to solve for the relation between $t$ and $t'$. Eliminate $x$ between (1) and (2) to obtain
+\[
+ x = \gamma(\gamma(x - vt) + vt').
+\]
+So
+\[
+ t' = \gamma t - (1 - \gamma^{-2})\frac{\gamma x}{v} = \gamma\left(t - \frac{v}{c^2}x\right).
+\]
+So we have
+\begin{law}[Lorentz transformation]
+ Let $S$ and $S'$ be inertial frames, moving at the relative velocity of $v$. Then
+ \begin{align*}
+ x' &= \gamma(x - vt)\\
+ t' &= \gamma\left(t - \frac{v}{c^2}x\right),
+ \end{align*}
+ where
+ \[
+ \gamma = \frac{1}{\sqrt{1 - (v/c)^2}}.
+ \]
+ These are the \emph{Lorentz transformations} in the standard configuration (in one spatial dimension).
+\end{law}
+The above is the form in which the Lorentz transformation is usually written, and it is convenient for actual calculations. However, it lacks symmetry between space and time. To display the symmetry, one approach is to use units in which $c = 1$. Then we have
+\begin{align*}
+ x' &= \gamma(x - vt),\\
+ t' &= \gamma(t - vx).
+\end{align*}
+Alternatively, if we want to keep our $c$'s, instead of comparing $x$ and $t$, which have different units, we can compare $x$ and $ct$. Then we have
+\begin{align*}
+ x' &= \gamma\left(x - \frac{v}{c}(ct)\right),\\
+ ct' &= \gamma\left(ct - \frac{v}{c}x\right).
+\end{align*}
+Symmetries aside, to express $x, t$ in terms of $x', t'$, we can invert this linear mapping to find (after some algebra)
+\begin{align*}
+ x &= \gamma(x' + vt')\\
+ t &= \gamma\left(t' + \frac{v}{c^2}x'\right)
+\end{align*}
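We can confirm that this really is the inverse by multiplying the two transformation matrices numerically (a sketch in units with $c = 1$ and an illustrative $v$):

```python
import numpy as np

def boost(v):
    # Lorentz boost acting on (ct, x), in units with c = 1.
    g = 1 / np.sqrt(1 - v**2)
    return np.array([[g, -g * v],
                     [-g * v, g]])

v = 0.6
# The transformation with velocity -v undoes the one with velocity +v.
assert np.allclose(boost(-v) @ boost(v), np.eye(2))
print(boost(v))
```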
+Directions perpendicular to the relative motion of the frames are unaffected:
+\begin{align*}
+ y' &= y\\
+ z' &= z
+\end{align*}
+Now we check that the speed of light is really invariant:
+
+For a light ray travelling in the $x$ direction in $S$:
+\[
+ x = ct,\quad y = 0,\quad z = 0.
+\]
+In $S'$, we have
+\[
+ \frac{x'}{t'} = \frac{\gamma(x - vt)}{\gamma(t - vx/c^2)} = \frac{(c - v)t}{(1 - v/c)t} = c,
+\]
+as required.
+
+For a light ray travelling in the $y$ direction in $S$,
+\[
+ x = 0,\quad y = ct,\quad z = 0.
+\]
+In $S'$,
+\[
+ \frac{x'}{t'} = \frac{\gamma(x - vt)}{\gamma(t - vx/c^2)} = -v,
+\]
+and
+\[
+ \frac{y'}{t'} = \frac{y}{\gamma(t - vx/c^2)} = \frac{c}{\gamma},
+\]
+and
+\[
+ z' = 0.
+\]
+So the speed of light is
+\[
+ \frac{\sqrt{x'^2 + y'^2}}{t'} = \sqrt{v^2 + \gamma^{-2}c^2} = c,
+\]
+as required.
+
+More generally, the Lorentz transformation implies
+\begin{align*}
+ c^2t'^2 - r'^2 &= c^2t'^2 - x'^2 - y'^2 - z'^2\\
+ &= c^2 \gamma^2\left(t - \frac{v}{c^2}x\right)^2 - \gamma^2(x - vt)^2 - y^2 - z^2\\
+ &= \gamma^2\left(1 - \frac{v^2}{c^2}\right)(c^2t^2 - x^2) - y^2 - z^2\\
+ &= c^2t^2 - x^2 - y^2 - z^2\\
+ &= c^2t^2 - r^2.
+\end{align*}
+We say that the quantity $c^2t^2 - x^2 - y^2 - z^2$ is \emph{Lorentz-invariant}.
+
+So if $\frac{r}{t} = c$, then $\frac{r'}{t'} = c$ also.
+
+\subsection{Spacetime diagrams}
+It is often helpful to plot out what is happening on a diagram. We plot them on a graph, where the position $x$ is on the horizontal axis and the time $ct$ is on the vertical axis. We use $ct$ instead of $t$ so that the dimensions make sense.
+\begin{center}
+ \begin{tikzpicture}[yscale=1.3]
+ \draw [->] (0, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 2.3) node [above] {$ct$};
+ \draw (0.7, 0.3) .. controls (0.7, 0.8) and (2, 1.3) .. (2, 1.8) node [right] {world line};
+
+ \node [circ] at (1.35, 1.05) {};
+ \node at (1.35, 1.05) [anchor = north west] {$P$};
+ \end{tikzpicture}
+\end{center}
+\begin{defi}[Spacetime]
+ The union of space and time in special relativity is called \emph{Minkowski spacetime}. Each point $P$ represents an \emph{event}, labelled by coordinates $(ct, x)$ (note the order!).
+
+ A particle traces out a \emph{world line} in spacetime, which is straight if the particle moves uniformly.
+
+ Light rays moving in the $x$ direction have world lines inclined at $45^\circ$.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$ct$};
+ \draw (-2.5, -2.5) -- (2.5, 2.5) node [anchor = south west] {light ray};
+ \draw (2.5, -2.5) -- (-2.5, 2.5) node [anchor = south east] {light ray};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+We can also draw the axes of $S'$, moving in the $x$ direction at velocity $v$ relative to $S$. The $ct'$ axis corresponds to $x' = 0$, i.e.\ $x = vt$. The $x'$ axis corresponds to $t' = 0$, i.e.\ $t = vx/c^2$.
+\begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \draw [->, blue] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->, blue] (0, -3) -- (0, 3) node [above] {$ct$};
+ \draw [->, red] (-2.85, -0.8) -- (2.85, 0.8) node [right] {$x'$};
+ \draw [->, red] (-0.8, -2.85) -- (0.8, 2.85) node [above] {$ct'$};
+ \draw [dashed] (-2.4, -2.4) -- (2.4, 2.4);
+ \end{tikzpicture}
+\end{center}
+Note that the $x'$ and $ct'$ axes are \emph{not} orthogonal, but are symmetrical about the diagonal (dashed line). So both frames agree on where the world line of a light ray lies.
+\subsection{Relativistic physics}
+Now we can look at all sorts of relativistic weirdness!
+\subsubsection*{Simultaneity}
+The first relativistic weirdness is that different frames disagree on whether two events are simultaneous.
+\begin{defi}[Simultaneous events]
+ We say two events $P_1$ and $P_2$ are simultaneous in the frame $S$ if $t_1 = t_2$.
+\end{defi}
+They are represented in the following spacetime diagram by horizontal dashed lines.
+
+However, events that are simultaneous in $S'$ have equal values of $t'$, and so lie on lines
+\[
+ ct - \frac{v}{c}x = \text{constant}.
+\]
+\begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \draw [->, blue] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->, blue] (0, -3) -- (0, 3) node [above] {$ct$};
+ \draw [dashed, blue] (-3, 1) -- (3, 1);
+ \draw [dashed, blue] (-3, 2) -- (3, 2);
+ \node [circ] at (-1.5, 2) {};
+ \node [circ] at (1.5, 2) {};
+ \node at (-1.5, 2) [below] {$P_1$};
+ \node at (1.5, 2) [below] {$P_2$};
+ \draw [->, red] (-2.85, -0.8) -- (2.85, 0.8) node [right] {$x'$};
+ \draw [dashed, red] (-2.85, 0.2) -- (2.85, 1.8);
+ \draw [dashed, red] (-2.85, 1.2) -- (2.85, 2.8);
+ \draw [->, red] (-0.8, -2.85) -- (0.8, 2.85) node [above] {$ct'$};
+ \end{tikzpicture}
+\end{center}
+The lines of simultaneity of $S'$ and those of $S$ are different, and events simultaneous in $S$ need not be simultaneous in $S'$. So simultaneity is relative. $S$ thinks $P_1$ and $P_2$ happened at the same time, while $S'$ thinks $P_2$ happens first.
+
+Note that this is genuine disagreement: it is not due to effects such as the time taken for light to convey the information to different observers. The account above already takes that into account, since the discussion does not involve specific observers.
+
+\subsubsection*{Causality}
+Although different people may disagree on the temporal order of events, the consistent ordering of cause and effect can be ensured.
+
+Since things can travel at most at the speed of light, $P$ cannot affect $R$ if $R$ happens a millisecond after $P$ but is millions of galaxies away. We can draw a \emph{light cone} that denotes the regions in which things can be influenced by $P$. These are the regions of spacetime that light (or any other particle) can possibly travel to. $P$ can only influence events within its \emph{future light cone}, and \emph{be influenced} by events within its \emph{past light cone}.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=white!60!yellow] (0.5, 0) -- (2, 1.5) -- (3.5, 0);
+ \draw [fill=red!50!yellow] (0.5, 3) -- (2, 1.5) -- (3.5, 3);
+ \draw (0, 0) -- (4, 0) node [right] {$x$};
+ \draw (0, 0) -- (0, 3.5) node [above] {$ct$};
+ \node [circ] at (2, 1.5) {};
+ \node at (2, 1.5) [left] {$P$};
+ \node [circ] at (2, 2.5){};
+ \node at (2, 2.5) [left] {$Q$};
+ \node [circ] at (3, 2) {};
+ \node at (3, 2) [right] {$R$};
+ \end{tikzpicture}
+\end{center}
+All observers agree that $Q$ occurs after $P$. Different observers may disagree on the temporal ordering of $P$ and $R$. However, since nothing can travel faster than light, $P$ and $R$ cannot influence each other. Since everyone agrees on how fast light travels, they also agree on the light cones, and hence causality. So philosophers are happy.
+
+\subsubsection*{Time dilation}
+Suppose a clock that is stationary in $S'$ (which travels at constant velocity $v$ with respect to an inertial frame $S$) ticks at constant intervals $\Delta t'$. What is the interval between ticks in $S$?
+
+The Lorentz transformation gives
+\[
+ t = \gamma\left(t' + \frac{v}{c^2}x'\right).
+\]
+Since $x'$ is constant for the clock, we have
+\[
+ \Delta t = \gamma \Delta t' > \Delta t'.
+\]
+So the interval measured in $S$ is greater! So moving clocks run slowly.
+
+A non-mathematical explanation comes from Feynman (not lectured): Suppose we have a very simple clock: We send a light beam towards a mirror, and wait for it to reflect back. When the clock detects the reflected light, it ticks, and then sends the next light beam.
+
+Then the interval between two ticks is the distance $2d$ divided by the speed of light.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] (-0.05, -0.5) rectangle (0.05, 0.5);
+ \draw [fill=gray] (1.95, -0.5) rectangle (2.05, 0.5);
+ \draw [dashed] (0.1, 0.4) -- (1.9, 0.4) node [pos=0.5, above] {$d$};
+ \draw [->] (0.1, 0.1) -- (1.9, 0.1);
+ \draw [->] (1.9, -0.1) -- (0.1, -0.1);
+ \end{tikzpicture}
+\end{center}
+From the point of view of an observer moving downwards, by the time light reaches the right mirror, it would have moved down a bit. So $S$ sees
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] (-0.05, -0.5) rectangle (0.05, 0.5);
+ \draw [fill=gray] (1.95, -1) rectangle (2.05, 0);
+ \draw [->] (0.1, 0) -- (1.9, -0.5);
+ \draw [->] (1.9, -0.5) -- (0.1, -1);
+ \draw [fill=gray!50!white, dashed] (-0.05, -0.5) rectangle (0.05, -1.5);
+ \draw [gray, dashed, ->] (0.1, 0.4) -- (1.9, 0.4) node [pos=0.5, above] {$d$};
+ \draw [gray, ->] (0.101, 0.4) -- (0.1, 0.4);
+ \draw [gray, ->, dashed] (-0.2, 0) -- (-0.2, -1) node [pos = 0.5, left] {$a$};
+ \draw [gray, ->] (-0.2, -0.01) -- (-0.2, 0);
+ \end{tikzpicture}
+\end{center}
+However, the distance travelled by the light beam is now $\sqrt{(2d)^2 + a^2} > 2d$. Since both observers agree on the speed of light, it must take longer for the clock to receive the reflected light in $S$. So the interval between ticks is longer.
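The light-clock argument can be made quantitative. In the clock's rest frame a tick takes $\Delta t' = 2d/c$; in the moving frame the light travels along the hypotenuse, so $(c\,\Delta t)^2 = (2d)^2 + (v\,\Delta t)^2$, giving $\Delta t = \gamma\,\Delta t'$. A numerical check (units with $c = 1$; $d$ and $v$ are illustrative):

```python
import math

c, d, v = 1.0, 1.0, 0.6               # units with c = 1; illustrative values
gamma = 1 / math.sqrt(1 - v**2)

dt_rest = 2 * d / c                   # tick interval in the clock's frame
# (c dt)^2 = (2d)^2 + (v dt)^2  =>  dt = 2d / sqrt(c^2 - v^2)
dt_moving = 2 * d / math.sqrt(c**2 - v**2)

assert math.isclose(dt_moving, gamma * dt_rest)   # moving clocks run slowly
print(dt_rest, dt_moving)
```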
+
+By the principle of relativity, all clocks must measure the same time dilation, or else we can compare the two clocks and know if we are ``moving''.
+
+This is famously evidenced by muons. Their half-life is around 2 microseconds (i.e.\ on average they decay to something else after around 2 microseconds). They are created when cosmic rays bombard the atmosphere. However, even if they travelled at the speed of light, in 2 microseconds they could only cover about \SI{600}{\meter}, certainly not sufficient to reach the surface of the Earth. Nevertheless, we observe \emph{lots} of muons on Earth. This is because the muons travel so fast that their clocks run really slowly.
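To put numbers on the muon story (the Lorentz factor $\gamma = 20$ here is an illustrative assumption, not a value from the notes):

```python
c = 3e8        # speed of light, m/s
tau = 2e-6     # muon half-life in its rest frame, s
gamma = 20     # assumed Lorentz factor of a cosmic-ray muon (illustrative)

naive_range = c * tau            # ~600 m without time dilation
dilated_range = gamma * c * tau  # ~12 km with the dilated lifetime
print(naive_range, dilated_range)
```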
+
+\subsubsection*{The twin paradox}
+Consider two twins: Luke and Leia. Luke stays at home. Leia travels at a constant speed $v$ to a distant planet $P$, turns around, and returns at the same speed.
+
+In Luke's frame of reference,
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (3, 0) node [right] {$x$};
+ \draw (0, 0) -- (0, 3.3) node [above] {$ct$};
+ \node at (0, 1) [left] {Luke};
+ \draw (0, 1.5) -- (-0.1, 1.5) node [left] {$cT$};
+ \draw (0, 3) -- (-0.1, 3) node [left] {$2cT$};
+ \draw (0, 0) -- (0.5, 1.5) node [pos=0.5, right] {Leia: $x = vt$} node [circ] {} node [right] {$A$ (Leia's arrival)} -- (0, 3) node [circ] {} node [right] {$R$};
+ \end{tikzpicture}
+\end{center}
+Leia's arrival ($A$) at $P$ has coordinates
+\[
+ (ct, x) = (cT, vT).
+\]
+The time experienced by Leia on her outward journey is
+\[
+ T' = \gamma\left(T - \frac{v}{c^2}\cdot vT\right) = \frac{T}{\gamma}.
+\]
+By Leia's return $R$, Luke has aged by $2T$, but Leia has aged by $\frac{2T}{\gamma} < 2T$. So she is younger than Luke, because of time dilation.
+
+The paradox is this: from Leia's perspective, Luke travelled away from her at speed $v$ and then returned, so he should be younger than her!
+
+Why is the problem not symmetric?
+
+We can draw Leia's initial frame of reference in dashed lines:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (3, 0) node [right] {$x$};
+ \draw (0, 0) -- (0, 3.3) node [above] {$ct$};
+ \draw (0, 0) -- (0.5, 1.5) node [pos=0.5, right] {Leia} node [circ] {} node [right] {$A$} -- (0, 3) node [pos=0.5, right] {Han} node [circ] {} node [right] {$R$};
+ \draw [dashed] (0.5, 1.5) -- (1, 3) node [above] {$ct'$};
+ \draw [dashed] (0, 0) -- (3, 1) node [right] {$x'$};
+ \draw [dashed] (0.5, 1.5) -- (0, 1.333) node [left] {$X$};
+ \draw [dashed] (0.5, 1.5) -- (0, 1.667) node [left] {$Z$};
+ \end{tikzpicture}
+\end{center}
+In Leia's frame, by the time she arrives at $A$, she has experienced a time $T' = \frac{T}{\gamma}$ as shown above. This event is simultaneous with event $X$ in Leia's frame. Then in Luke's frame, the coordinates of $X$ are
+\[
+ (ct, x) = \left(\frac{cT'}{\gamma}, 0\right) = \left(\frac{cT}{\gamma^2}, 0\right),
+\]
+obtained through calculations similar to that above. So Leia thinks Luke has aged less by a factor of $1/\gamma^2$. At this stage, the problem \emph{is} symmetric, and Luke also thinks Leia has aged less by a factor of $1/\gamma^2$.
+
+Things change when Leia turns around and changes frame of reference. To understand this better, suppose Leia meets a friend, Han, who is just leaving $P$ at speed $v$. On his journey back, Han also thinks Luke ages $T/\gamma^2$. But in his frame of reference, his departure is simultaneous with Luke's event $Z$, not $X$, since he has different lines of simultaneity.
+
+So the asymmetry between Luke and Leia occurs when Leia turns around. At this point, she sees Luke age rapidly from $X$ to $Z$.
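The bookkeeping can be checked explicitly. Working in units with $c = 1$ and an illustrative $v$, we locate $X$ and $Z$ on Luke's world line using the two lines of simultaneity through $A$:

```python
import math

T, v = 1.0, 0.8                   # units with c = 1; illustrative speed
gamma = 1 / math.sqrt(1 - v**2)

tA, xA = T, v * T                 # event A, Leia's turnaround, in Luke's frame

# X: on Luke's world line (x = 0), simultaneous with A in the outgoing frame
# (t' = gamma (t - v x) takes equal values at A and X).
tX = tA - v * xA                  # = T / gamma^2
# Z: simultaneous with A in Han's returning frame (velocity -v).
tZ = tA + v * xA

assert math.isclose(tX, T / gamma**2)          # Luke's age at X, per Leia
assert math.isclose(2 * T - tZ, T / gamma**2)  # Luke's remaining ageing, per Han
print(tZ - tX)   # = 2 T v^2: the jump in Luke's age at the turnaround
```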
+
+\subsubsection*{Length contraction}
+A rod of length $L'$ is stationary in $S'$. What is its length in $S$?
+
+In $S'$, the length of the rod is the distance between the two ends at the same time $t'$. So we have
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (3, 0) node [right] {$x'$};
+ \draw (0, 0) -- (0, 3) node [above] {$ct'$};
+ \draw (1.5, 3) -- (1.5, 0) node [below] {$L'$};
+ \draw [->] (0, 1.5) -- (1.5, 1.5) node [pos = 0.5, above] {$L'$};
+ \draw [->] (1.5, 1.5) -- (0, 1.5);
+ \end{tikzpicture}
+\end{center}
+In $S$, we have
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (3, 0) node [right] {$x$};
+ \draw (0, 0) -- (0, 3) node [above] {$ct$};
+ \draw (0, 0) -- (1, 3);
+ \draw (1.5, 0) -- (2.5, 3);
+ \draw [->] (0.5, 1.5) -- (2, 1.5) node [pos = 0.5, above] {$L$};
+ \draw [->] (2, 1.5) -- (0.5, 1.5);
+ \draw [dashed] (0, 0) -- (3, 1) node [right] {$x'$};
+ \draw [dashed] (0, 0) -- (1.5, 0.5);
+ \draw [->] (0.0667, 0.2) -- (1.75416, 0.7625) node [pos=0.5, above] {$L'$};
+ \draw [->] (1.75416, 0.7625) -- (0.0667, 0.2);
+ \end{tikzpicture}
+\end{center}
+The lines $x' = 0$ and $x' = L'$ map into $x = vt$ and $x = vt + L'/\gamma$. So the length in $S$ is $L = L'/\gamma < L'$. Therefore moving objects are contracted in the direction of motion.
+\begin{defi}[Proper length]
+ The \emph{proper length} is the length measured in an object's rest frame.
+\end{defi}
+
+This is analogous to the fact that if you view a bar from an angle, it looks shorter than if you view it from the front. In relativity, what causes the contraction is not a spatial rotation, but a spacetime \emph{hyperbolic} rotation.
+
+Question: does a train of proper length $2L$ fit alongside a platform of length $L$ if it travels through the station at a speed $v$ such that $\gamma = 2$?
+
+For the system of observers on the platform, the train contracts to a length $2L/\gamma = L$. So it fits.
+
+But for the system of observers on the train, the platform contracts to length $L/\gamma = L/2$, which is much too short!
+
+This can be explained by the difference of lines of simultaneity, since length is the distance between front and back \emph{at the same time}.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (3, 0) node [right] {$x$};
+ \draw (0, 0) -- (0, 3) node [above] {$ct$};
+ \draw [red] (0, 0) -- (1, 3) node [above] {\footnotesize back of train};
+ \draw [red] (2, 0) -- (3, 3) node [above] {\footnotesize front of train};
+ \draw [blue] (0.5, 3) -- (0.5, 0) node [below, align=center] {\footnotesize back of\\\footnotesize platform} ;
+ \draw [blue] (2.5, 3) -- (2.5, 0) node [below, align=center] {\footnotesize front of\\\footnotesize platform};
+ \draw [->] (0.5, 1.5) -- (2.5, 1.5) node [above, pos=0.5] {$L$};
+ \draw [->] (2.5, 1.5) -- (0.5, 1.5);
+ \draw [dashed] (0, 1.5) -- (3, 1.5) node [right] {fits in $S$};
+ \draw [dashed] (0, 1) -- (3, 2) node [right] {doesn't fit in $S'$};
+ \end{tikzpicture}
+\end{center}
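We can put coordinates on this diagram (units with $c = 1$). In the platform frame $S$, pick $t = 0$ to be the instant the contracted train exactly spans the platform, and transform the two end events to the train frame:

```python
import math

L = 1.0
gamma = 2
v = math.sqrt(1 - 1 / gamma**2)   # speed giving gamma = 2, in units c = 1

# Event A: back of train at back of platform.   Event B: front at front.
tA, xA = 0.0, 0.0
tB, xB = 0.0, L                   # simultaneous in S: the train fits

# In the train frame S', t' = gamma (t - v x):
tA_p = gamma * (tA - v * xA)
tB_p = gamma * (tB - v * xB)

assert tB_p < tA_p   # in S' the front event happens first: it never "fits"
print(tA_p, tB_p)
```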
+
+\subsubsection*{Composition of velocities}
+A particle moves with constant velocity $u'$ in frame $S'$, which moves with velocity $v$ relative to $S$. What is its velocity $u$ in $S$?
+
+The world line of the particle in $S'$ is
+\[
+ x' = u't'.
+\]
+In $S$, using the inverse Lorentz transformation,
+\[
+ u = \frac{x}{t} = \frac{\gamma(x' + vt')}{\gamma(t' + (v/c^2) x')} = \frac{u't' + vt'}{t' + (v/c^2)u't'} = \frac{u' + v}{1 + u'v/c^2}.
+\]
+This is the formula for the relativistic composition of velocities.
+
+The inverse transformation is found by swapping $u$ and $u'$, and swapping the sign of $v$, i.e.
+\[
+ u' = \frac{u - v}{1 - uv/c^2}.
+\]
+Note the following:
+\begin{itemize}
+ \item If $u'v \ll c^2$, then the transformation reduces to the standard Galilean addition of velocities $u \approx u' + v$.
+ \item $u$ is a monotonically increasing function of $u'$ for any constant $v$ (with $|v| < c$).
+ \item When $u' = \pm c$, $u = u'$ for any $v$, i.e.\ the speed of light is constant in all frames of reference.
+ \item Hence $|u'| < c$ iff $|u| < c$. This means that we cannot reach the speed of light by composition of velocities.
+\end{itemize}
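These properties are easy to verify numerically (a sketch in units with $c = 1$):

```python
def compose(u_prime, v, c=1.0):
    # u = (u' + v) / (1 + u' v / c^2)
    return (u_prime + v) / (1 + u_prime * v / c**2)

assert abs(compose(0.5, 0.5) - 0.8) < 1e-12   # not 1.0, as Galileo would have it
assert abs(compose(1.0, 0.9) - 1.0) < 1e-12   # the speed of light is preserved
assert abs(compose(-1.0, 0.9) + 1.0) < 1e-12  # ... in both directions
assert abs(compose(0.99, 0.99)) < 1.0         # composition never exceeds c
```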
+
+\subsection{Geometry of spacetime}
+We'll now look at the geometry of spacetime, and study the properties of vectors in this spacetime. While spacetime has 4 dimensions, and each point can be represented by 4 real numbers, it is not ordinary $\R^4$. This can be seen when changing coordinate systems: instead of rotating the axes as in $\R^4$, we ``squash'' the axes towards the diagonal, which is a \emph{hyperbolic rotation}. In particular, we will have a different notion of the dot product. We say that this space has dimension $d = 1 + 3$.
+
+\subsubsection*{The invariant interval}
+In regular Euclidean space, given a vector $\mathbf{x}$, all coordinate systems agree on the length $|\mathbf{x}|$. In Minkowski space, they agree on something else.
+
+Consider events $P$ and $Q$ with coordinates $(ct_1, x_1)$ and $(ct_2, x_2)$ separated by $\Delta t = t_2 - t_1$ and $\Delta x = x_2 - x_1$.
+
+\begin{defi}[Invariant interval]
+ The \emph{invariant interval} or \emph{spacetime interval} between $P$ and $Q$ is defined as
+ \[
+ \Delta s^2 = c^2 \Delta t^2 - \Delta x^2.
+ \]
+ Note that the quantity $\Delta s^2$ can be positive, negative or zero --- so $\Delta s$ might be imaginary!
+\end{defi}
+
+\begin{prop}
+ All inertial observers agree on the value of $\Delta s^2$.
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ c^2 \Delta t'^2 - \Delta x'^2 &= c^2 \gamma^2 \left(\Delta t - \frac{v}{c^2}\Delta x\right)^2 - \gamma^2 (\Delta x - v\Delta t)^2\\
+ &= \gamma^2 \left(1 - \frac{v^2}{c^2}\right)(c^2 \Delta t^2 - \Delta x^2)\\
+ &= c^2\Delta t^2 - \Delta x^2.\qedhere
+ \end{align*}
+\end{proof}
+
+In three spatial dimensions,
+\[
+ \Delta s^2 = c^2\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2.
+\]
+We take this as the ``distance'' between the two points. For two infinitesimally separated events, we have
+\begin{defi}[Line element]
+ The \emph{line element} is
+ \[
+ \d s^2 = c^2 \d t^2 - \d x^2 - \d y^2 - \d z^2.
+ \]
+\end{defi}
+
+\begin{defi}[Timelike, spacelike and lightlike separation]
+ Events with $\Delta s^2 > 0$ are \emph{timelike separated}. It is possible to find inertial frames in which the two events occur in the same position, and are purely separated by time. Timelike-separated events lie within each other's light cones and can influence one another.
+
+ Events with $\Delta s^2 < 0$ are \emph{spacelike separated}. It is possible to find an inertial frame in which the two events occur at the same time, and are purely separated by space. Spacelike-separated events lie outside each other's light cones and cannot influence one another.
+
+ Events with $\Delta s^2 = 0$ are \emph{lightlike} or \emph{null separated}. In all inertial frames, the events lie on the boundary of each other's light cones. e.g.\ different points in the trajectory of a photon are lightlike separated, hence the name.
+\end{defi}
+Note that $\Delta s^2 = 0$ does not imply that $P$ and $Q$ are the same event.
+
+\subsubsection*{The Lorentz group}
+The coordinates of an event $P$ in frame $S$ can be written as a \emph{4-vector} (i.e.\ 4-component vector) $X$. We write
+\[
+ X =
+ \begin{pmatrix}
+ ct\\
+ x\\
+ y\\
+ z
+ \end{pmatrix}
+\]
+The invariant interval between the origin and $P$ can be written as an inner product
+\[
+ X\cdot X = X^T\eta X = c^2t^2 - x^2 - y^2 - z^2,
+\]
+where
+\[
+ \eta =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & -1 & 0 & 0\\
+ 0 & 0 & -1 & 0\\
+ 0 & 0 & 0 & -1
+ \end{pmatrix}.
+\]
+4-vectors with $X\cdot X > 0$ are called timelike, and those with $X \cdot X < 0$ are spacelike. If $X\cdot X = 0$, it is lightlike or null.
+
+A Lorentz transformation is a linear transformation of the coordinates from one frame $S$ to another $S'$, represented by a $4\times 4$ matrix:
+\[
+ X' = \Lambda X
+\]
+Lorentz transformations can be defined as those that leave the inner product invariant:
+\[
+ (\forall X)(X'\cdot X' = X\cdot X),
+\]
+which implies the matrix equation
+\[
+ \Lambda^T\eta \Lambda = \eta.\tag{$*$}
+\]
+These also preserve $X\cdot Y$ if $X$ and $Y$ are both 4-vectors.
+
+Two classes of solution to this equation are:
+\[
+ \Lambda =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0\\
+ 0 & & R\\
+ 0
+ \end{pmatrix},
+\]
+where $R$ is a $3\times 3$ orthogonal matrix, which rotates (or reflects) space and leaves time intact; and
+\[
+ \Lambda =
+ \begin{pmatrix}
+ \gamma & -\gamma \beta & 0 & 0\\
+ -\gamma\beta & \gamma & 0 & 0\\
+ 0 & 0 & 1 & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix},
+\]
+where $\beta = \frac{v}{c}$, and $\gamma = 1/\sqrt{1 - \beta^2}$. Here we leave the $y$ and $z$ coordinates intact, and apply a Lorentz boost along the $x$ direction.
+
+The set of all matrices satisfying equation $(*)$ forms the \emph{Lorentz group} $O(1, 3)$. It is generated by the rotations and boosts defined above, which include spatial reflections and time reversal.
+
+The subgroup with $\det \Lambda = +1$ is the \emph{proper Lorentz group} $SO(1, 3)$.
+
+The subgroup that preserves spatial orientation and the direction of time is the \emph{restricted Lorentz group} $SO^+(1, 3)$. Note that this is different from $SO(1, 3)$, since if you do \emph{both} spatial reflection and time reversal, the determinant of the matrix is still positive. We want to eliminate those as well!
+
+\subsubsection*{Rapidity}
+Focus on the upper left $2\times 2$ matrix of Lorentz boosts in the $x$ direction. Write
+\[
+ \Lambda[\beta] =
+ \begin{pmatrix}
+ \gamma & -\gamma\beta\\
+ -\gamma\beta & \gamma
+ \end{pmatrix}
+ ,\quad
+ \gamma = \frac{1}{\sqrt{1 - \beta^2}}.
+\]
+Combining two boosts in the $x$ direction, we have
+\[
+ \Lambda[\beta_1]\Lambda[\beta_2] =
+ \begin{pmatrix}
+ \gamma_1 & -\gamma_1\beta_1\\
+ -\gamma_1\beta_1 & \gamma_1
+ \end{pmatrix}
+ \begin{pmatrix}
+ \gamma_2 & -\gamma_2\beta_2\\
+ -\gamma_2\beta_2 & \gamma_2
+ \end{pmatrix}
+ = \Lambda\left[\frac{\beta_1 + \beta_2}{1 + \beta_1\beta_2}\right]
+\]
+after some messy algebra. This is just the velocity composition formula as before.
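The messy algebra can be checked numerically; this sketch (with arbitrarily chosen speeds, in units where $c = 1$) multiplies two boost matrices and compares the result against the boost built from the composed velocity:

```python
import math

def Lambda(beta):
    """The 2x2 boost block [[g, -g*b], [-g*b, g]] with g = 1/sqrt(1 - b^2)."""
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return [[g, -g * beta], [-g * beta, g]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

b1, b2 = 0.5, 0.6
product = matmul(Lambda(b1), Lambda(b2))
composed = Lambda((b1 + b2) / (1 + b1 * b2))
```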
+
+This result does not look nice. This suggests that we might be writing things in the wrong way.
+
+We can compare this with spatial rotation. Recall that
+\[
+ R(\theta) =
+ \begin{pmatrix}
+ \cos \theta & \sin \theta\\
+ -\sin \theta & \cos \theta
+ \end{pmatrix}
+\]
+with
+\[
+ R(\theta_1)R(\theta_2) = R(\theta_1 + \theta_2).
+\]
+For Lorentz boosts, we can define
+\begin{defi}[Rapidity]
+ The \emph{rapidity} of a Lorentz boost is $\phi$ such that
+ \[
+ \beta = \tanh \phi,\quad \gamma = \cosh\phi,\quad \gamma\beta=\sinh \phi.
+ \]
+\end{defi}
+Then
+\[
+ \Lambda[\beta] =
+ \begin{pmatrix}
+ \cosh \phi & -\sinh \phi\\
+ -\sinh \phi & \cosh \phi
+ \end{pmatrix}
+ = \Lambda(\phi).
+\]
+The rapidities add like rotation angles:
+\[
+ \Lambda(\phi_1)\Lambda(\phi_2) = \Lambda(\phi_1 + \phi_2).
+\]
+This shows the close relation between spatial rotations and Lorentz boosts. Lorentz boosts are simply \emph{hyperbolic} rotations in spacetime!
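Numerically, the additivity of rapidity is easy to confirm; this sketch (with arbitrary rapidities) checks both the matrix identity and the corresponding velocity composition $\tanh(\phi_1 + \phi_2) = (\beta_1 + \beta_2)/(1 + \beta_1\beta_2)$:

```python
import math

def Lambda(phi):
    """A boost written as a hyperbolic rotation by rapidity phi."""
    return [[math.cosh(phi), -math.sinh(phi)],
            [-math.sinh(phi), math.cosh(phi)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p1, p2 = 0.3, 1.1
lhs = matmul(Lambda(p1), Lambda(p2))
rhs = Lambda(p1 + p2)

# the corresponding velocities compose by the usual formula
b1, b2 = math.tanh(p1), math.tanh(p2)
beta_sum = math.tanh(p1 + p2)
```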
+
+\subsection{Relativistic kinematics}
+In Newtonian mechanics, we describe a particle by its position $\mathbf{x}(t)$, with its velocity being $\mathbf{u}(t) = \frac{\d \mathbf{x}}{\d t}$.
+
+In relativity, this is unsatisfactory. In special relativity, space and time can be mixed together by Lorentz boosts, and we prefer not to single out time from space. For example, when we write the 4-vector $X$, we put in both the time and space components, and Lorentz transformations are $4\times 4$ matrices that act on $X$.
+
+In the definition of velocity, however, we are differentiating space with respect to time, which is rather weird. First of all, we need something to replace time. Recall that we defined ``proper length'' as the length of an object in its rest frame. Similarly, we can define the \emph{proper time}.
+
+\begin{defi}[Proper time]
+ The \emph{proper time} $\tau$ is defined such that
+ \[
+ \Delta \tau = \frac{\Delta s}{c}
+ \]
+ $\tau$ is the time experienced by the particle, i.e.\ the time in the particle's rest frame.
+\end{defi}
+The world line of a particle can be parametrized using the proper time by $t(\tau)$ and $\mathbf{x}(\tau)$.
+\begin{center}
+ \begin{tikzpicture}[yscale=1.3]
+ \draw [->] (0, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 2.3) node [above] {$ct$};
+ \draw (0.7, 0.3) .. controls (0.7, 0.8) and (2, 1.3) .. (2, 1.8);
+
+ \node [circ] at (1.35, 1.05) {};
+ \node at (1.35, 1.05) [anchor = north west] {$\tau_1$};
+ \node [circ] at (1.93, 1.6) {};
+ \node at (1.93, 1.6) [anchor=north west] {$\tau_2$};
+ \end{tikzpicture}
+\end{center}
+Infinitesimal changes are related by
+\[
+ \d \tau = \frac{\d s}{c} = \frac{1}{c}\sqrt{c^2\;\d t^2 - |\d \mathbf{x}|^2} = \sqrt{1 - \frac{|\mathbf{u}|^2}{c^2}}\;\d t.
+\]
+Thus
+\[
+ \frac{\d t}{\d \tau} = \gamma_u
+\]
+with
+\[
+ \gamma_u = \frac{1}{\sqrt{1 - \frac{|\mathbf{u}|^2}{c^2}}}.
+\]
+The total time experienced by the particle along a segment of its world line is
+\[
+ T = \int \;\d \tau = \int\frac{1}{\gamma_u}\;\d t.
+\]
+We can then define the \emph{position 4-vector} and \emph{4-velocity}.
+\begin{defi}[Position 4-vector and 4-velocty]
+ The \emph{position 4-vector} is
+ \[
+ X(\tau) =
+ \begin{pmatrix}
+ ct(\tau)\\
+ \mathbf{x}(\tau)
+ \end{pmatrix}.
+ \]
+ Its \emph{4-velocity} is defined as
+ \[
+ U = \frac{\d X}{\d \tau} =
+ \begin{pmatrix}
+ c\frac{\d t}{\d \tau}\\
+ \frac{\d \mathbf{x}}{\d \tau}
+ \end{pmatrix}
+ = \frac{\d t}{\d \tau}
+ \begin{pmatrix}
+ c\\
+ \mathbf{u}
+ \end{pmatrix} = \gamma_u
+ \begin{pmatrix}
+ c\\
+ \mathbf{u}
+ \end{pmatrix},
+ \]
+ where $\mathbf{u} = \frac{\d \mathbf{x}}{\d t}$.
+\end{defi}
+Another common notation is
+\[
+ X = (ct, \mathbf{x}),\quad U = \gamma_u (c, \mathbf{u}).
+\]
+If frames $S$ and $S'$ are related by $X' = \Lambda X$, then the 4-velocity also transforms as $U' = \Lambda U$.
+
+\begin{defi}[4-vector]
+ A \emph{4-vector} is a 4-component vector that transforms in this way under a Lorentz transformation, i.e.\ $X' = \Lambda X$.
+
+ When using suffix notation, the indices are written above (superscript) instead of below (subscript). The indices are written with Greek letters which range from $0$ to $3$. So we have $X^\mu$ instead of $X_i$, for $\mu = 0, 1, 2, 3$. If we write $X_\mu$ instead, it means a different thing. This will be explained more in-depth in the electromagnetism course (and you'll get more confused!).
+\end{defi}
+
+$U$ is a 4-vector because $X$ is a 4-vector and $\tau$ is a Lorentz invariant. Note that $\d X/\d t$ is \emph{not} a 4-vector.
+
+Note that this definition of 4-vector is analogous to that of a tensor --- things that transform nicely according to our rules. Then $\tau$ would be a scalar, i.e.\ rank-0 tensor, while $t$ is just a number, not a scalar.
+
+For any 4-vector $U$, the inner product $U\cdot U = U' \cdot U'$ is Lorentz invariant, i.e.\ the same in all inertial frames. In the rest frame of the particle, $U = (c, 0)$. So $U\cdot U = c^2$.
+
+In any other frame, $U = \gamma_u(c, \mathbf{u})$. So
+\[
+ U\cdot U = \gamma_u^2 (c^2 - |\mathbf{u}|^2) = c^2
+\]
+as expected.
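A quick numeric sanity check of $U\cdot U = c^2$ for an arbitrarily chosen 3-velocity:

```python
import math

c = 3.0e8

def four_velocity(u):
    """4-velocity gamma_u * (c, u) for a 3-velocity u = (ux, uy, uz), |u| < c."""
    speed2 = sum(ui**2 for ui in u)
    gamma = 1.0 / math.sqrt(1.0 - speed2 / c**2)
    return [gamma * c] + [gamma * ui for ui in u]

def minkowski(X, Y):
    """Inner product X . Y = X^0 Y^0 - x . y with the metric eta."""
    return X[0] * Y[0] - sum(X[i] * Y[i] for i in range(1, 4))

U = four_velocity((1.0e8, 5.0e7, -2.0e7))
print(minkowski(U, U) / c**2)  # 1.0 (up to rounding)
```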
+
+\subsubsection*{Transformation of velocities revisited}
+We have seen that velocities cannot be simply added in relativity. However, the 4-velocity does transform linearly, according to the Lorentz transform:
+\[
+ U' = \Lambda U.
+\]
+In frame $S$, consider a particle moving with speed $u$ at an angle $\theta$ to the $x$ axis in the $xy$ plane. This is the most general case for motion not parallel to the Lorentz boost.
+
+Its 4-velocity is
+\[
+ U =
+ \begin{pmatrix}
+ \gamma_u c\\
+ \gamma_u u\cos \theta\\
+ \gamma_u u\sin \theta\\
+ 0
+ \end{pmatrix}, \quad \gamma_u = \frac{1}{\sqrt{1 - u^2/c^2}}.
+\]
+With frames $S$ and $S'$ in standard configuration (i.e.\ origins coincide at $t = 0$, $S'$ moving in the $x$ direction with velocity $v$ relative to $S$),
+\[
+ U' = \begin{pmatrix}
+ \gamma_{u'} c\\
+ \gamma_{u'} u'\cos \theta'\\
+ \gamma_{u'} u'\sin \theta'\\
+ 0
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ \gamma_v & -\gamma_v v/c & 0 & 0\\
+ -\gamma_{v} v/c & \gamma_v & 0 & 0\\
+ 0 & 0 & 1 & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ \gamma_u c\\
+ \gamma_u u\cos \theta\\
+ \gamma_u u\sin \theta\\
+ 0
+ \end{pmatrix}
+\]
+Instead of evaluating the whole matrix, we can divide different rows to get useful results.
+
+The ratio of the second line to the first gives
+\[
+ u'\cos \theta' = \frac{u\cos \theta - v}{1 - \frac{uv}{c^2}\cos \theta},
+\]
+just like the composition of parallel velocities.
+
+The ratio of the third line to the second gives
+\[
+ \tan \theta' = \frac{u\sin \theta}{\gamma_v(u\cos \theta - v)},
+\]
+which describes \emph{aberration}, a change in the direction of motion of a particle due to the motion of the observer. Note that this isn't just a relativistic effect! If you walk in the rain, you have to hold your umbrella obliquely, since to you the rain appears to be coming in at an angle. The relativistic part is the $\gamma_v$ factor in the denominator.
+
+This is also seen in the aberration of starlight ($u = c$) due to the Earth's orbital motion. This causes small annual changes in the apparent positions of stars.
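We can check the aberration formula directly by boosting a 4-velocity component by component (arbitrary $u$, $\theta$, $v$, in units where $c = 1$):

```python
import math

def gamma(s):
    return 1.0 / math.sqrt(1.0 - s**2)

u, theta, v = 0.9, 0.7, 0.5  # particle speed, angle in S, frame speed (c = 1)
gu, gv = gamma(u), gamma(v)

# 4-velocity in S, then a boost of speed v along x
U = [gu, gu * u * math.cos(theta), gu * u * math.sin(theta), 0.0]
Up = [gv * (U[0] - v * U[1]), gv * (U[1] - v * U[0]), U[2], U[3]]

tan_theta_p = Up[2] / Up[1]
formula = u * math.sin(theta) / (gv * (u * math.cos(theta) - v))
print(tan_theta_p, formula)  # the two values agree
```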
+\subsubsection*{4-momentum}
+\begin{defi}[4-momentum]
+ The \emph{4-momentum} of a particle of mass $m$ is
+ \[
+ P = mU = m\gamma_u
+ \begin{pmatrix}
+ c\\
+ \mathbf{u}
+ \end{pmatrix}
+ \]
+ The 4-momentum of a system of particles is the sum of the 4-momenta of the particles, and is conserved in the absence of external forces.
+
+ The spatial components of $P$ are the \emph{relativistic 3-momentum},
+ \[
+ \mathbf{p} = m\gamma_u \mathbf{u},
+ \]
+ which differs from the Newtonian expression by a factor of $\gamma_u$. Note that $|\mathbf{p}| \to \infty$ as $|\mathbf{u}| \to c$.
+\end{defi}
+
+What is the interpretation of the time component $P^0$ (i.e.\ the first component of $P$)? We expand for $|\mathbf{u}| \ll c$:
+\[
+ P^0 = m\gamma c = \frac{mc}{\sqrt{1 - |\mathbf{u}|^2/c^2}} = \frac{1}{c}\left(mc^2 + \frac{1}{2}m|\mathbf{u}|^2 + \cdots\right).
+\]
+We have a constant term $mc^2$ plus a kinetic energy term $\frac{1}{2}m|\mathbf{u}|^2$, plus more tiny terms, all divided by $c$. So this suggests that $P^0$ is indeed the energy for a particle, and the remaining $\cdots$ terms are relativistic corrections for our old formula $\frac{1}{2}m|\mathbf{u}|^2$ (the $mc^2$ term will be explained later). So we interpret $P$ as
+\[
+ P =
+ \begin{pmatrix}
+ E/c\\
+ \mathbf{p}
+ \end{pmatrix}
+\]
+\begin{defi}[Relativistic energy]
+ The \emph{relativistic energy} of a particle is $E = P^0c$. So
+ \[
+ E = m\gamma c^2 = mc^2 + \frac{1}{2}m|\mathbf{u}|^2 + \cdots
+ \]
+\end{defi}
+Note that $E\to \infty$ as $|\mathbf{u}| \to c$.
+
+For a stationary particle, we obtain
+\[
+ E = mc^2.
+\]
+This implies that mass is a form of energy. $m$ is sometimes called the \emph{rest mass}.
+
+The energy of a moving particle, $m\gamma_u c^2$, is the sum of the rest energy $mc^2$ and kinetic energy $m(\gamma_u - 1)c^2$.
+
+Since $P\cdot P = \frac{E^2}{c^2} - |\mathbf{p}|^2$ is a Lorentz invariant (lengths of 4-vectors are always Lorentz invariant) and equals $m^2 c^2$ in the particle's rest frame, we have the general relation between energy and momentum
+\[
+ E^2 = |\mathbf{p}|^2 c^2 + m^2c^4
+\]
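A numeric check of this relation, using $E = m\gamma_u c^2$ and $|\mathbf{p}| = m\gamma_u |\mathbf{u}|$ with arbitrarily chosen values (units where $c = 1$):

```python
import math

m, u = 2.0, 0.6  # mass and speed, in units where c = 1
g = 1.0 / math.sqrt(1.0 - u**2)

E = m * g       # relativistic energy (c = 1)
p = m * g * u   # relativistic momentum

print(E**2 - (p**2 + m**2))  # 0 (up to rounding): E^2 = p^2 c^2 + m^2 c^4
```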
+In Newtonian physics, mass and energy are separately conserved. In relativity, mass is not conserved. Instead, it is just another form of energy, and the total energy, including mass energy, is conserved.
+
+Mass can be converted into kinetic energy and vice versa (e.g.\ atomic bombs!).
+
+\subsubsection*{Massless particles}
+Particles with zero mass ($m = 0$), e.g.\ photons, can have non-zero momentum and energy because they travel at the speed of light ($\gamma \to \infty$).
+
+In this case, $P\cdot P = 0$. So massless particles have light-like (or null) trajectories, and no proper time can be defined for such particles.
+
+Other massless particles in the Standard Model of particle physics include the gluon.
+
+For these particles, energy and momentum are related by
+\[
+ E^2 = |\mathbf{p}|^2 c^2.
+\]
+So
+\[
+ E = |\mathbf{p}|c.
+\]
+Thus
+\[
+ P = \frac{E}{c}
+ \begin{pmatrix}
+ 1\\
+ \mathbf{n}
+ \end{pmatrix},
+\]
+where $\mathbf{n}$ is a unit (3-)vector in the direction of propagation.
+
+According to quantum mechanics, fundamental ``particles'' aren't really particles but have both particle-like and wave-like properties (if that sounds confusing, yes it is!). Hence we can assign it a \emph{de Broglie wavelength}, according to the \emph{de Broglie relation}:
+\[
+ |\mathbf{p}| = \frac{h}{\lambda}
+\]
+where $h \approx \SI{6.63e-34}{\meter\squared\kilogram\per\second}$ is \emph{Planck's constant}.
+
+For massless particles, this is consistent with \emph{Planck's relation}:
+\[
+ E = \frac{hc}{\lambda} = h\nu,
+\]
+where $\nu = \frac{c}{\lambda}$ is the \emph{wave frequency}.
+\subsubsection*{Newton's second law in special relativity}
+\begin{defi}[4-force]
+ The \emph{4-force} is
+ \[
+ F = \frac{\d P}{\d \tau}
+ \]
+\end{defi}
+This equation is the relativistic counterpart to Newton's second law.
+
+It is related to the 3-force $\mathbf{F}$ by
+\[
+ F = \gamma_u
+ \begin{pmatrix}
+ \mathbf{F}\cdot \mathbf{u}/c\\
+ \mathbf{F}
+ \end{pmatrix}
+\]
+Expanding the definition of the 4-force componentwise, we obtain
+\[
+ \frac{\d E}{\d \tau} = \gamma_u \mathbf{F}\cdot \mathbf{u} \Rightarrow \frac{\d E}{\d t} = \mathbf{F}\cdot \mathbf{u}
+\]
+and
+\[
+ \frac{\d \mathbf{p}}{\d \tau} = \gamma_u \mathbf{F} \Rightarrow \frac{\d \mathbf{p}}{\d t} = \mathbf{F}
+\]
+Equivalently, for a particle of mass $m$,
+\[
+ F = mA,
+\]
+where
+\[
+ A = \frac{\d U}{\d \tau}
+\]
+is the 4-acceleration.
+
+We have
+\[
+ U = \gamma_u
+ \begin{pmatrix}
+ c\\
+ \mathbf{u}
+ \end{pmatrix}
+\]
+So
+\[
+ A = \gamma_u \frac{\d U}{\d t} = \gamma_u
+ \begin{pmatrix}
+ \dot{\gamma}_u c\\
+ \gamma_u \mathbf{a} + \dot{\gamma}_u \mathbf{u}
+ \end{pmatrix}
+\]
+where $\mathbf{\mathbf{a}} = \frac{\d \mathbf{u}}{\d t}$ and $\dot{\gamma}_u = \gamma_u^3 \frac{\mathbf{a}\cdot \mathbf{u}}{c^2}$.
+
+In the instantaneous rest frame of a particle, $\mathbf{u} = \mathbf{0}$ and $\gamma_u = 1$. So
+\[
+ U =
+ \begin{pmatrix}
+ c\\
+ \mathbf{0}
+ \end{pmatrix}, \quad
+ A =
+ \begin{pmatrix}
+ 0\\
+ \mathbf{a}
+ \end{pmatrix}
+\]
+Then $U\cdot A = 0$. Since this is a Lorentz invariant, we have $U \cdot A = 0$ in all frames.
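This orthogonality can also be checked in a frame where the particle is moving; the sketch below (1-dimensional motion, $c = 1$, arbitrary $u$ and $a$) evaluates $U\cdot A$ from the components derived above:

```python
import math

u, a = 0.6, 0.3  # instantaneous velocity and 3-acceleration along x (c = 1)

g = 1.0 / math.sqrt(1.0 - u**2)
gdot = g**3 * a * u  # gamma-dot = gamma^3 (a . u)/c^2

U = [g, g * u]                       # (gamma c, gamma u)
A = [g * gdot, g * (g * a + gdot * u)]  # gamma (gamma-dot c, gamma a + gamma-dot u)

dot = U[0] * A[0] - U[1] * A[1]  # Minkowski inner product U . A
print(dot)  # 0 (up to rounding)
```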
+
+\subsection{Particle physics}
+Many problems can be solved using the conservation of 4-momentum,
+\[
+ P =
+ \begin{pmatrix}
+ E/c\\
+ \mathbf{p}
+ \end{pmatrix},
+\]
+for a system of particles.
+\begin{defi}[Center of momentum frame]
+ The \emph{center of momentum (CM) frame}, or \emph{zero momentum frame}, is an inertial frame in which the total 3-momentum is $\sum \mathbf{p} = 0$.
+\end{defi}
+This exists unless the system consists of one or more massless particles moving in a single direction.
+
+\subsubsection*{Particle decay}
+A particle of mass $m_1$ decays into two particles of masses $m_2$ and $m_3$.
+
+We have
+\[
+ P_1 = P_2 + P_3.
+\]
+i.e.
+\begin{align*}
+ E_1 &= E_2 + E_3\\
+ \mathbf{p}_1 &= \mathbf{p}_2 + \mathbf{p}_3.
+\end{align*}
+In the CM frame (i.e.\ the rest frame of the original particle),
+\begin{align*}
+ E_1 = m_1 c^2 &= \sqrt{|\mathbf{p}_2|^2 c^2 + m_2^2c^4} + \sqrt{|\mathbf{p}_3|^2 c^2 + m_3^2 c^4}\\
+ &\geq m_2c^2 + m_3 c^2.
+\end{align*}
+So decay is possible only if
+\[
+ m_1 \geq m_2 + m_3.
+\]
+(Recall that mass is not conserved in relativity!)
+
+\begin{eg}
+ A possible decay path of the Higgs' particle can be written as
+ \begin{align*}
+ \mathrm{h} &\to \gamma \gamma\\
+ \text{Higgs'\ particle} &\to 2\text{ photons}
+ \end{align*}
+ This is possible by the above criterion, because $m_\mathrm{h} \geq 0$, while $m_\gamma = 0$.
+
+ The full conservation equation is
+ \[
+ P_\mathrm{h} =
+ \begin{pmatrix}
+ m_\mathrm{h}c\\
+ \mathbf{0}
+ \end{pmatrix} =
+ P_{\gamma_1} + P_{\gamma_2}
+ \]
+ So
+ \begin{align*}
+ \mathbf{p}_{\gamma_1} &= -\mathbf{p}_{\gamma_2}\\
+ E_{\gamma_1} = E_{\gamma_2} &= \frac{1}{2}m_\mathrm{h} c^2.
+ \end{align*}
+\end{eg}
+
+\subsubsection*{Particle scattering}
+When two particles collide and retain their identities, the total 4-momentum is conserved:
+\[
+ P_1 + P_2 = P_3 + P_4
+\]
+In the laboratory frame $S$, suppose that particle 1 travels with speed $u$ and collides with particle 2 (at rest).
+\begin{center}
+ \begin{tikzpicture}
+ \node [draw, circle] (1') at (3.5, 1) {1};
+ \node [draw, circle] (2') at (3.5, -1.2) {2};
+ \node [draw, circle] (2) at (2, 0) {2}
+ edge [->] (1')
+ edge [->] (2');
+ \node [draw, circle] (1) {1}
+ edge [->] (2);
+ \draw [dashed] (2.3, 0) -- (4, 0);
+ \draw (2.5, 0) arc (0:33.69:0.5);
+ \draw (2.6, 0) arc (0:-38.66:0.6);
+ \node at (2.8, -0.3) {$\phi$};
+ \node at (2.7, 0.2) {$\theta$};
+ \end{tikzpicture}
+\end{center}
+In the CM frame $S'$,
+\[
+ \mathbf{p}'_1 + \mathbf{p}'_2 = 0 = \mathbf{p}_3' + \mathbf{p}'_4.
+\]
+Both before and after the collision, the two particles have equal and opposite 3-momentum.
+\begin{center}
+ \begin{tikzpicture}
+ \node [draw, circle] at (-2, 0) {1}
+ edge [->] (0, 0);
+ \node [draw, circle] at (2, 0) {2}
+ edge [->] (0, 0);
+ \node at (-1, 0) [above] {$\mathbf{p}_1'$};
+ \node at (1, 0) [above] {$\mathbf{p}_2'$};
+ \node [draw, circle] at (1, 1.5) {1}
+ edge [<-] (0, 0);
+ \node [draw, circle] at (-1, -1.5) {2}
+ edge [<-] (0, 0);
+ \node at (0.5, 0.75) [left] {$\mathbf{p}_3'$};
+ \node at (-0.5, -0.75) [left] {$\mathbf{p}_4'$};
+ \end{tikzpicture}
+\end{center}
+The scattering angle $\theta'$ is undetermined and can be thought of as being random. However, we can derive some conclusions about the angles $\theta$ and $\phi$ in the laboratory frame.
+
+Staying in $S'$ for the moment, suppose the particles have equal mass $m$. They then have the same speed $v$ in $S'$.
+
+Choose axes such that
+\[
+ P_1' =
+ \begin{pmatrix}
+ m\gamma_v c\\
+ m\gamma_v v\\
+ 0\\
+ 0
+ \end{pmatrix},\quad
+ P_2' =
+ \begin{pmatrix}
+ m\gamma_v c\\
+ -m\gamma_v v\\
+ 0\\
+ 0
+ \end{pmatrix}
+\]
+and after the collision,
+\[
+ P_3' =
+ \begin{pmatrix}
+ m\gamma_v c\\
+ m\gamma_v v\cos \theta'\\
+ m\gamma_v v\sin \theta'\\
+ 0
+ \end{pmatrix},\quad
+ P_4' =
+ \begin{pmatrix}
+ m\gamma_v c\\
+ -m\gamma_v v\cos \theta'\\
+ -m \gamma_v v\sin \theta'\\
+ 0
+ \end{pmatrix}.
+\]
+We then use the Lorentz transformation to return to the laboratory frame $S$. The relative velocity of the frames is $v$. So the Lorentz transform is
+\[
+ \Lambda =
+ \begin{pmatrix}
+ \gamma_v & \gamma_v v/c & 0 & 0\\
+ \gamma_v v/c & \gamma_v & 0 & 0\\
+ 0 & 0 & 1 & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix}
+\]
+and we find
+\[
+ P_1 =
+ \begin{pmatrix}
+ m\gamma_u c\\
+ m\gamma_u u\\
+ 0\\
+ 0
+ \end{pmatrix},\quad
+ P_2 =
+ \begin{pmatrix}
+ mc\\
+ 0\\
+ 0\\
+ 0
+ \end{pmatrix}
+\]
+where
+\[
+ u = \frac{2v}{1 + v^2/c^2},
+\]
+(cf.\ the velocity composition formula).
+
+Considering the transformations of $P_3'$ and $P_4'$, we obtain
+\[
+ \tan \theta = \frac{\sin \theta'}{\gamma_v (1 + \cos \theta')} = \frac{1}{\gamma_v}\tan \frac{\theta'}{2},
+\]
+and
+\[
+ \tan \phi = \frac{\sin \theta'}{\gamma_v(1 - \cos \theta')} = \frac{1}{\gamma_v}\cot \frac{\theta'}{2}.
+\]
+Multiplying these expressions together, we obtain
+\[
+ \tan \theta\tan \phi = \frac{1}{\gamma_v^2}.
+\]
+So even though we do not know what $\theta$ and $\phi$ might be, they \emph{must} be related by this equation.
+
+In the Newtonian limit, where $|\mathbf{v}| \ll c$, we have $\gamma_v \approx 1$. So
+\[
+ \tan \theta\tan \phi = 1,
+\]
+i.e.\ the outgoing trajectories are perpendicular in $S$.
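The relation $\tan\theta\tan\phi = 1/\gamma_v^2$ can be verified numerically for an arbitrary CM scattering angle, by boosting $P_3'$ and $P_4'$ back to the laboratory frame ($c = 1$):

```python
import math

m, v = 1.0, 0.7
gv = 1.0 / math.sqrt(1.0 - v**2)
theta_p = 1.1  # arbitrary CM scattering angle

E, p = m * gv, m * gv * v  # time component and momentum magnitude in the CM frame

def to_lab(E_cm, px, py):
    """Boost a 4-momentum back to the lab frame (velocity +v along x)."""
    return gv * (px + v * E_cm), py

p3x, p3y = to_lab(E, p * math.cos(theta_p), p * math.sin(theta_p))
p4x, p4y = to_lab(E, -p * math.cos(theta_p), -p * math.sin(theta_p))

tan_theta = p3y / p3x    # particle 3 deflects above the axis
tan_phi = -p4y / p4x     # particle 4 deflects below the axis
print(tan_theta * tan_phi, 1.0 / gv**2)  # the two values agree
```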
+
+\subsubsection*{Particle creation}
+Collide two particles of mass $m$ fast enough, and you create an extra particle of mass $M$.
+\[
+ P_1 + P_2 = P_3 + P_4 + P_5,
+\]
+where $P_5$ is the momentum of the new particle.
+
+In the CM frame,
+\begin{center}
+ \begin{tikzpicture}
+ \node [draw, circle] at (-2, 0) {1}
+ edge [->] (0, 0);
+ \node [draw, circle] at (2, 0) {2}
+ edge [->] (0, 0);
+ \node at (-1, 0) [above] {$v$};
+ \node at (1, 0) [above] {$v$};
+ \end{tikzpicture}
+\end{center}
+\[
+ P_1 + P_2 =
+ \begin{pmatrix}
+ 2m\gamma_v c\\
+ \mathbf{0}
+ \end{pmatrix}
+\]
+We have
+\[
+ P_3 + P_4 + P_5 =
+ \begin{pmatrix}
+ (E_3 + E_4 + E_5)/c\\
+ \mathbf{0}
+ \end{pmatrix}
+\]
+So
+\[
+ 2m\gamma_v c^2 = E_3 + E_4 + E_5 \geq 2mc^2 + Mc^2.
+\]
+So in order to create this new particle, we must have
+\[
+ \gamma_v \geq 1 + \frac{M}{2m}.
+\]
+Alternatively, it occurs only if the initial kinetic energy in the CM frame satisfies
+\[
+ 2(\gamma_v - 1)mc^2 \geq Mc^2.
+\]
+If we transform to a frame in which the initial speeds are $u$ and 0 (i.e.\ stationary target), then
+\[
+ u = \frac{2v}{1 + v^2/c^2}
+\]
+Then
+\[
+ \gamma_u = 2\gamma_v^2 - 1.
+\]
+So we require
+\[
+ \gamma_u \geq 2\left(1 + \frac{M}{2m}\right)^2 - 1 = 1 + \frac{2M}{m} + \frac{M^2}{2m^2}.
+\]
+This means that the initial kinetic energy in this frame must be
+\[
+ m(\gamma_u - 1)c^2 \geq \left(2 + \frac{M}{2m}\right)Mc^2,
+\]
+which could be much larger than $Mc^2$, especially if $M\gg m$, which is usually the case. For example, the mass of the Higgs' boson is about 130 times the mass of the proton. So it is much more advantageous to collide two beams of protons head on, as opposed to hitting a fixed target.
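To see how dramatic the difference is, this sketch compares the two threshold kinetic energies for the ratio $M/m = 130$ quoted above, in units of $mc^2$:

```python
M_over_m = 130.0  # M/m, as for a Higgs produced from protons

gamma_v = 1.0 + M_over_m / 2.0    # threshold gamma in the CM frame
gamma_u = 2.0 * gamma_v**2 - 1.0  # threshold gamma for a fixed target

ke_cm = 2.0 * (gamma_v - 1.0)     # total KE needed in the CM frame, units of m c^2
ke_fixed = gamma_u - 1.0          # beam KE needed on a fixed target
print(ke_cm, ke_fixed)  # 130.0 vs 8710.0
```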
+\end{document}
diff --git a/books/cam/IA_L/probability.tex b/books/cam/IA_L/probability.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1c18112efbc47d25f8fd7072e2964ab1b4bc09f8
--- /dev/null
+++ b/books/cam/IA_L/probability.tex
@@ -0,0 +1,3557 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IA}
+\def\nterm {Lent}
+\def\nyear {2015}
+\def\nlecturer {R.\ Weber}
+\def\ncourse {Probability}
+\def\nofficial {http://www.statslab.cam.ac.uk/~rrw1/prob/index.html}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+ \noindent\textbf{Basic concepts}\\
+ Classical probability, equally likely outcomes. Combinatorial analysis, permutations and combinations. Stirling's formula (asymptotics for $\log n!$ proved).\hspace*{\fill} [3]
+
+ \vspace{10pt}
+ \noindent\textbf{Axiomatic approach}\\
+ Axioms (countable case). Probability spaces. Inclusion-exclusion formula. Continuity and subadditivity of probability measures. Independence. Binomial, Poisson and geometric distributions. Relation between Poisson and binomial distributions. Conditional probability, Bayes's formula. Examples, including Simpson's paradox.\hspace*{\fill} [5]
+
+ \vspace{10pt}
+ \noindent\textbf{Discrete random variables}\\
+ Expectation. Functions of a random variable, indicator function, variance, standard deviation. Covariance, independence of random variables. Generating functions: sums of independent random variables, random sum formula, moments.
+
+ \vspace{5pt}
+ \noindent Conditional expectation. Random walks: gambler's ruin, recurrence relations. Difference equations and their solution. Mean time to absorption. Branching processes: generating functions and extinction probability. Combinatorial applications of generating functions.\hspace*{\fill} [7]
+
+ \vspace{10pt}
+ \noindent\textbf{Continuous random variables}\\
+ Distributions and density functions. Expectations; expectation of a function of a random variable. Uniform, normal and exponential random variables. Memoryless property of exponential distribution. Joint distributions: transformation of random variables (including Jacobians), examples. Simulation: generating continuous random variables, independent normal random variables. Geometrical probability: Bertrand's paradox, Buffon's needle. Correlation coefficient, bivariate normal random variables.\hspace*{\fill} [6]
+
+ \vspace{10pt}
+ \noindent\textbf{Inequalities and limits}\\
+ Markov's inequality, Chebyshev's inequality. Weak law of large numbers. Convexity: Jensen's inequality for general random variables, AM/GM inequality.
+
+ \vspace{5pt}
+ \noindent Moment generating functions and statement (no proof) of continuity theorem. Statement of central limit theorem and sketch of proof. Examples, including sampling.\hspace*{\fill} [3]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+In everyday life, we often encounter the term ``probability'', and it is used in many different ways. For example, we can hear people say:
+\begin{enumerate}
+ \item The probability that a fair coin will land heads is $1/2$.
+ \item The probability that a selection of 6 numbers wins the National Lottery Lotto jackpot is 1 in $\binom{49}{6} = 13\,983\,816$, or $7.15112\times 10^{-8}$.
+ \item The probability that a drawing pin will land ``point up'' is 0.62.
+ \item The probability that a large earthquake will occur on the San Andreas Fault in the next 30 years is about $21\%$.
+ \item The probability that humanity will be extinct by 2100 is about $50\%$.
+\end{enumerate}
+The first two cases are things derived from logic. For example, we know that the coin either lands heads or tails. By definition, a fair coin is equally likely to land heads or tail. So the probability of either must be $1/2$.
+
+The third is probably derived from experiments. Perhaps we did 1000 experiments and 620 of the pins landed point up. The fourth and fifth examples belong to yet another category that talks about our beliefs and predictions.
+
+We call the first kind ``classical probability'', the second kind ``frequentist probability'' and the last ``subjective probability''. In this course, we only consider classical probability.
+
+\section{Classical probability}
+We start with a rather informal introduction to probability. Afterwards, in Chapter~\ref{sec:axioms}, we will have a formal axiomatic definition of probability and formally study their properties.
+
+\subsection{Classical probability}
+\begin{defi}[Classical probability]
+ \emph{Classical probability} applies in a situation when there are a finite number of equally likely outcomes.
+\end{defi}
+
+A classical example is the problem of points.
+
+\begin{eg}
+ $A$ and $B$ play a game in which they keep throwing coins. If a head lands, then $A$ gets a point. Otherwise, $B$ gets a point. The first person to get 10 points wins a prize.
+
+ Now suppose $A$ has got 8 points and $B$ has got 7, but the game has to end because an earthquake struck. How should they divide the prize? We answer this by finding the probability of $A$ winning. Someone must have won by the end of 19 rounds, i.e.\ after 4 more rounds. If $A$ wins at least 2 of them, then $A$ wins. Otherwise, $B$ wins.
+
+ The number of ways this can happen is $\binom{4}{2} + \binom{4}{3} + \binom{4}{4} = 11$, while there are $16$ possible outcomes in total. So $A$ should get $11/16$ of the prize.
+\end{eg}
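We can confirm the count by brute force, enumerating all $2^4$ outcomes of the remaining rounds:

```python
from itertools import product

# A wins iff at least 2 of the 4 remaining fair flips are heads
outcomes = list(product("HT", repeat=4))
wins_for_A = sum(1 for flips in outcomes if flips.count("H") >= 2)
print(wins_for_A, len(outcomes))  # 11 16
```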
+
+In general, consider an experiment that has a random outcome.
+
+\begin{defi}[Sample space]
+ The set of all possible outcomes is the \emph{sample space}, $\Omega$. We can list the outcomes as $\omega_1, \omega_2, \cdots \in \Omega$. Each $\omega \in \Omega$ is an \emph{outcome}.
+\end{defi}
+
+\begin{defi}[Event]
+ A subset of $\Omega$ is called an \emph{event}.
+\end{defi}
+
+\begin{eg}
+ When rolling a die, the sample space is $\{1, 2, 3, 4, 5, 6\}$, and each item is an outcome. ``Getting an odd number'' and ``getting 3'' are two possible events.
+\end{eg}
+
+In probability, we will be dealing with sets a lot, so it would be helpful to come up with some notation.
+\begin{defi}[Set notations]
+ Given any two events $A, B\subseteq \Omega$,
+ \begin{itemize}
+ \item The \emph{complement} of $A$ is $A^C = A' = \bar A = \Omega\setminus A$.
+ \item ``$A$ or $B$'' is the set $A\cup B$.
+ \item ``$A$ and $B$'' is the set $A\cap B$.
+ \item $A$ and $B$ are \emph{mutually exclusive} or \emph{disjoint} if $A\cap B = \emptyset$.
+ \item If $A\subseteq B$, then $A$ occurring implies $B$ occurring.
+ \end{itemize}
+\end{defi}
+
+\begin{defi}[Probability]
+ Suppose $\Omega = \{\omega_1,\omega_2, \cdots, \omega_N\}$. Let $A\subseteq \Omega$ be an event. Then the \emph{probability} of $A$ is
+ \[
+ \P(A) = \frac{\text{Number of outcomes in } A}{\text{Number of outcomes in }\Omega} = \frac{|A|}{N}.
+ \]
+\end{defi}
+Here we are assuming that each outcome is equally likely to happen, which is the case in (fair) dice rolls and coin flips.
+
+\begin{eg}
+ Suppose $r$ digits are drawn at random from a table of random digits from 0 to 9. What is the probability that
+ \begin{enumerate}
+ \item No digit exceeds $k$;
+ \item The largest digit drawn is $k$?
+ \end{enumerate}
+
+ The sample space is $\Omega = \{(a_1, a_2, \cdots, a_r): 0 \leq a_i \leq 9\}$. Then $|\Omega| = 10^r$.
+
+ Let $A_k = [\text{no digit exceeds }k] = \{(a_1, \cdots, a_r): 0 \leq a_i \leq k\}$. Then $|A_k| = (k + 1)^r$. So
+ \[
+ \P(A_k) = \frac{(k + 1)^r}{10^r}.
+ \]
+ Now let $B_k = [\text{largest digit drawn is }k]$. We can find this by finding all outcomes in which no digits exceed $k$, and subtract it by the number of outcomes in which no digit exceeds $k - 1$. So $|B_k| = |A_k| - |A_{k - 1}|$ and
+ \[
+ \P(B_k) = \frac{(k + 1)^r - k^r}{10^r}.
+ \]
+\end{eg}
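A brute-force enumeration (with a small $r$ and a value of $k$ chosen arbitrarily) confirms the formula for $|B_k|$:

```python
from itertools import product

r, k = 3, 7
outcomes = list(product(range(10), repeat=r))  # all 10^r digit strings
count_B = sum(1 for a in outcomes if max(a) == k)  # largest digit exactly k
print(count_B, (k + 1)**r - k**r)  # 169 169
```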
+\subsection{Counting}
+To find probabilities, we often need to \emph{count} things. For example, in our example above, we had to count the number of elements in $B_k$.
+\begin{eg}
+ A menu has 6 starters, 7 mains and 6 desserts. How many possible meal combinations are there? Clearly $6 \times 7 \times 6 = 252$.
+\end{eg}
+Here we are using the fundamental rule of counting:
+\begin{thm}[Fundamental rule of counting]
+ Suppose we have to make $r$ multiple choices in sequence. There are $m_1$ possibilities for the first choice, $m_2$ possibilities for the second etc. Then the total number of choices is $m_1\times m_2\times \cdots \times m_r$.
+\end{thm}
+
+\begin{eg}
+ How many ways can $1, 2, \cdots, n$ be ordered? The first choice has $n$ possibilities, the second has $n - 1$ possibilities etc. So there are $n\times (n - 1)\times\cdots \times 1 = n!$.
+\end{eg}
+
+\subsubsection*{Sampling with or without replacement}
+Suppose we have to pick $n$ items from a total of $x$ items. We can model this as follows: Let $N = \{1, 2, \cdots, n\}$ be the list. Let $X = \{1, 2, \cdots, x\}$ be the items. Then each way of picking the items is a function $f: N\to X$ with $f(i) =$ item at the $i$th position.
+
+\begin{defi}[Sampling with replacement]
+ When we \emph{sample with replacement}, after choosing an item, it is put back and can be chosen again. Then \emph{any} function $f: N\to X$ describes a valid sample.
+\end{defi}
+
+\begin{defi}[Sampling without replacement]
+ When we \emph{sample without replacement}, after choosing an item, we kill it with fire and cannot choose it again. Then $f$ must be an injective function, and clearly we must have $x \geq n$.
+\end{defi}
+
+We can also have sampling with replacement, but we require each item to be chosen at least once. In this case, $f$ must be surjective.
+
+\begin{eg}
+ Suppose $N = \{a, b, c\}$ and $X = \{p, q, r, s\}$. How many injective functions $f: N\to X$ are there?
+
+ When we choose $f(a)$, we have 4 options. When we choose $f(b)$, we have 3 left. When we choose $f(c)$, we have 2 choices left. So there are $24$ possible choices.
+\end{eg}
+
+\begin{eg}
+ I have $n$ keys in my pocket. We select one at random and try to unlock the door. What is the probability that I succeed on the $r$th trial?
+
+ Suppose we do it with replacement. We have to fail the first $r - 1$ trials and succeed in the $r$th. So the probability is
+ \[
+ \frac{(n - 1)(n - 1) \cdots (n - 1)(1)}{n^r} = \frac{(n - 1)^{r - 1}}{n^r}.
+ \]
+ Now suppose we are smarter and try without replacement. Then the probability is
+ \[
+ \frac{(n - 1)(n - 2)\cdots (n - r + 1)(1)}{n(n - 1) \cdots (n - r + 1)} = \frac{1}{n}.
+ \]
+\end{eg}
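The without-replacement answer can be checked exactly by enumerating all orderings of the keys — a small sketch, where key $0$ is assumed to be the correct one:

```python
from fractions import Fraction
from itertools import permutations

n = 5
# Without replacement: key 0 (assumed correct) is tried at position r - 1
# of a uniformly random ordering of the n keys.
perms = list(permutations(range(n)))
for r in range(1, n + 1):
    p = Fraction(sum(1 for pi in perms if pi[r - 1] == 0), len(perms))
    assert p == Fraction(1, n)   # the same 1/n for every trial r
```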
+\begin{eg}[Birthday problem]
+ How many people are needed in a room for the probability that at least two of them share a birthday to be at least a half?
+
+ Suppose $f(r)$ is the probability that, in a room of $r$ people, there is a birthday match.
+
+ We solve this by finding the probability of no match, $1 - f(r)$. The total number of possibilities of birthday combinations is $365^r$. For nobody to have the same birthday, the first person can have any birthday. The second has 364 choices left, etc. So
+ \[
+ \P(\text{no match}) = \frac{365\cdot 364\cdot 363 \cdots (366 - r)}{365\cdot 365\cdot 365 \cdots 365}.
+ \]
+ If we calculate this with a computer, we find that $f(22) = 0.475695$ and $f(23) = 0.507297$.
+
+ While this might sound odd since 23 is small, this is because we are thinking about the wrong thing. The probability of match is related more to the number of \emph{pairs} of people, not the number of people. With 23 people, we have $23\times 22/2 = 253$ pairs, which is quite large compared to 365.
+\end{eg}
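A short computation confirming the quoted values of $f(22)$ and $f(23)$, using exact rational arithmetic before converting to a float:

```python
from fractions import Fraction
from math import prod

def f(r):
    """Probability that at least two of r people share a birthday (365 days)."""
    no_match = prod(Fraction(365 - i, 365) for i in range(r))
    return 1 - no_match

assert float(f(22)) < 0.5 < float(f(23))   # 0.475695... and 0.507297...
```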
+\subsubsection*{Sampling with or without regard to ordering}
+There are cases where we don't care about, say, list positions. For example, if we pick two representatives from a class, the order of picking them doesn't matter.
+
+In terms of the function $f: N\to X$, after mapping to $f(1), f(2), \cdots, f(n)$, we can
+\begin{itemize}
+ \item Leave the list alone
+ \item Sort the list ascending. i.e.\ we might get $(2, 5, 4)$ and $(4, 2, 5)$. If we don't care about list positions, these are just equivalent to $(2, 4, 5)$.
+ \item Re-number each item by the number of the draw on which it was first seen. For example, we can rename $(2, 5, 2)$ and $(5, 4, 5)$ both as $(1, 2, 1)$. This happens if the labelling of items doesn't matter.
+ \item Both of above. So we can rename $(2, 5, 2)$ and $(8, 5, 5)$ both as $(1, 1, 2)$.
+\end{itemize}
+
+\subsubsection*{Total number of cases}
+Combining these four possibilities with whether we have replacement, no replacement, or ``everything has to be chosen at least once'', we have 12 possible cases of counting. The most important ones are:
+\begin{itemize}
+ \item Replacement + with ordering: the number of ways is $x^n$.
+ \item Without replacement + with ordering: the number of ways is $x_{(n)} = x^{\underline{n}} = x(x - 1)\cdots (x - n + 1)$.
+ \item Without replacement + without order: we only care which items get selected. The number of ways is $\binom{x}{n} = C^x_n = x_{(n)}/n!$.
+ \item Replacement + without ordering: we only care how many times the item got chosen. This is equivalent to partitioning $n$ into $n_1 + n_2 + \cdots + n_k$. Say $n = 6$ and $k = 3$. We can write a particular partition as
+ \[
+ **\mid *\mid ***
+ \]
+ So we have $n + k - 1$ symbols and $k - 1$ of them are bars. So the number of ways is $\binom{n + k - 1}{k - 1}$.
+\end{itemize}
+
+\subsubsection*{Multinomial coefficient}
+Suppose that we have to pick $n$ items, and each item can either be an apple or an orange. The number of ways of picking such that $k$ apples are chosen is, by definition, $\binom{n}{k}$.
+
+In general, suppose we have to fill successive positions in a list of length $n$, with replacement, from a set of $k$ items. The number of ways of doing so such that item $i$ is picked $n_i$ times is defined to be the \emph{multinomial coefficient} $\binom{n}{n_1, n_2, \cdots, n_k}$.
+
+\begin{defi}[Multinomial coefficient]
+ A \emph{multinomial coefficient} is
+ \[
+ \binom{n}{n_1, n_2, \cdots, n_k} = \binom{n}{n_1}\binom{n - n_1}{n_2}\cdots \binom{n - n_1 - \cdots - n_{k - 1}}{n_k} = \frac{n!}{n_1!n_2!\cdots n_k!}.
+ \]
+ It is the number of ways to distribute $n$ items into $k$ positions, in which the $i$th position has $n_i$ items.
+\end{defi}
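The two expressions in the definition can be checked against each other numerically; a minimal sketch (the helper `multinomial` is ours, not a standard library function):

```python
from math import comb, factorial

def multinomial(n, parts):
    """C(n; n_1, ..., n_k) as the product of binomial coefficients."""
    total, remaining = 1, n
    for p in parts:
        total *= comb(remaining, p)
        remaining -= p
    return total

# Check against the factorial formula n!/(n_1! n_2! ... n_k!)
parts = (3, 2, 1)
expected = factorial(6) // (factorial(3) * factorial(2) * factorial(1))
assert multinomial(6, parts) == expected   # 60
```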
+
+\begin{eg}
+ We know that
+ \[
+ (x + y)^n = x^n + \binom{n}{1}x^{n - 1}y + \cdots + y^n.
+ \]
+ If we have a trinomial, then
+ \[
+ (x + y + z)^n = \sum_{n_1, n_2, n_3} \binom{n}{n_1, n_2, n_3} x^{n_1}y^{n_2}z^{n_3}.
+ \]
+\end{eg}
+
+\begin{eg}
+ How many ways can we deal 52 cards to 4 players, each with a hand of 13? The total number of ways is
+ \[
+ \binom{52}{13, 13, 13, 13} = \frac{52!}{(13!)^4} = 53644737765488792839237440000 = 5.36\times 10^{28}.
+ \]
+\end{eg}
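Python's arbitrary-precision integers make it easy to confirm this number exactly:

```python
from math import factorial

ways = factorial(52) // factorial(13)**4
assert ways == 53644737765488792839237440000   # about 5.36e28
```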
+
+
+While computers are still capable of calculating that, what if we tried more \st{power} cards? Suppose each person has $n$ cards. Then the number of ways is
+\[
+ \frac{(4n)!}{(n!)^4},
+\]
+which is \emph{huge}. We can use Stirling's Formula to approximate it:
+\subsection{Stirling's formula}
+Before we state and prove Stirling's formula, we prove a weaker (but examinable) version:
+\begin{prop}
+ $\log n!\sim n\log n$
+\end{prop}
+\begin{proof}
+ Note that
+ \[
+ \log n! = \sum_{k=1}^n \log k.
+ \]
+ Now we claim that
+ \[
+ \int_1^n \log x\;\d x \leq \sum_1^n \log k\leq \int_1^{n + 1}\log x\;\d x.
+ \]
+ This is true by considering the diagram:
+ \begin{center}
+ \begin{tikzpicture}[xscale=0.7, yscale=1.3]
+ \draw [->] (0, 0) -- (11, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 2.5) node [above] {$y$};
+ \draw [domain=2:10] plot (\x, {ln (\x - 1)}) node [anchor = south west] {$\ln x$};
+ \draw [domain=1:10] plot (\x, {ln \x }) node [anchor = north west] {$\ln (x - 1)$};
+ \draw [fill = black, fill opacity=0.2] (2, 0)-- (2, 0.693147180559945) -- (3, 0.693147180559945) -- (3, 0);
+ \draw [fill = black, fill opacity=0.2] (3, 0)-- (3, 1.09861228866811) -- (4, 1.09861228866811) -- (4, 0);
+ \draw [fill = black, fill opacity=0.2] (4, 0)-- (4, 1.38629436111989) -- (5, 1.38629436111989) -- (5, 0);
+ \draw [fill = black, fill opacity=0.2] (5, 0)-- (5, 1.6094379124341) -- (6, 1.6094379124341) -- (6, 0);
+ \draw [fill = black, fill opacity=0.2] (6, 0)-- (6, 1.79175946922806) -- (7, 1.79175946922806) -- (7, 0);
+ \draw [fill = black, fill opacity=0.2] (7, 0)-- (7, 1.94591014905531) -- (8, 1.94591014905531) -- (8, 0);
+ \draw [fill = black, fill opacity=0.2] (8, 0)-- (8, 2.07944154167984) -- (9, 2.07944154167984) -- (9, 0);
+ \draw [fill = black, fill opacity=0.2] (9, 0)-- (9, 2.19722457733622) -- (10, 2.19722457733622) -- (10, 0);
+ \end{tikzpicture}
+ \end{center}
+ We actually evaluate the integral to obtain
+ \[
+ n\log n - n + 1 \leq \log n! \leq (n + 1)\log (n + 1) - n.
+ \]
+ Divide both sides by $n\log n$ and let $n\to \infty$. Both sides tend to $1$. So
+ \[
+ \frac{\log n!}{n\log n} \to 1.\qedhere
+ \]
+\end{proof}
+
+Now we prove Stirling's Formula:
+\begin{thm}[Stirling's formula]
+ As $n\to \infty$,
+ \[
+ \log\left(\frac{n! e^n}{n^{n + \frac{1}{2}}}\right) = \log \sqrt{2\pi} + O\left(\frac{1}{n}\right)
+ \]
+\end{thm}
+\begin{cor}
+ \[
+ n!\sim \sqrt{2\pi}n^{n + \frac{1}{2}} e^{-n}
+ \]
+\end{cor}
+\begin{proof}(non-examinable)
+ Define
+ \[
+ d_n = \log \left(\frac{n!e^n}{n^{n + 1/2}}\right) = \log n! - (n + 1/2)\log n + n
+ \]
+ Then
+ \[
+ d_n - d_{n + 1} = (n + 1/2)\log\left(\frac{n + 1}{n}\right) - 1.
+ \]
+ Write $t = 1/(2n + 1)$. Then
+ \[
+ d_n - d_{n + 1} = \frac{1}{2t}\log\left(\frac{1 + t}{1 - t}\right) - 1.
+ \]
+ We can simplify this by noting that
+ \begin{align*}
+ \log (1 + t) - t &= -\frac{1}{2}t^2 + \frac{1}{3}t^3 - \frac{1}{4}t^4 + \cdots\\
+ \log (1 - t) + t &= -\frac{1}{2}t^2 - \frac{1}{3}t^3 - \frac{1}{4}t^4 - \cdots
+ \end{align*}
+ Then if we subtract the equations and divide by $2t$, we obtain
+ \begin{align*}
+ d_n - d_{n + 1} &= \frac{1}{3}t^2 + \frac{1}{5}t^4 + \frac{1}{7}t^6 + \cdots\\
+ &< \frac{1}{3}t^2 + \frac{1}{3}t^4 + \frac{1}{3}t^6 + \cdots\\
+ &= \frac{1}{3}\frac{t^2}{1 - t^2}\\
+ &= \frac{1}{3}\frac{1}{(2n + 1)^2 - 1}\\
+ &= \frac{1}{12}\left(\frac{1}{n} - \frac{1}{n + 1}\right)
+ \end{align*}
+ By summing these bounds, we know that
+ \[
+ d_1 - d_n < \frac{1}{12}\left(1 - \frac{1}{n}\right)
+ \]
+ Then we know that $d_n$ is bounded below by $d_1 - \frac{1}{12}$, and is decreasing since $d_n - d_{n + 1}$ is positive. So it converges to a limit $A$. We know $A$ is a lower bound for $d_n$ since $(d_n)$ is decreasing.
+
+ Suppose $m > n$. Then $d_n - d_m < \left(\frac{1}{n} - \frac{1}{m}\right)\frac{1}{12}$. So taking the limit as $m\to \infty$, we obtain an upper bound for $d_n$: $d_n < A + 1/(12n)$. Hence we know that
+ \[
+ A < d_n < A + \frac{1}{12n}.
+ \]
+ However, all these results are useless if we don't know what $A$ is. To find $A$, we have a small detour to prove a formula:
+
+ Take $I_n = \int_0^{\pi/2} \sin^n\theta\;\d \theta$. This is decreasing for increasing $n$ as $\sin^n\theta$ gets smaller. We also know that
+ \begin{align*}
+ I_n &= \int_0^{\pi/2}\sin^n \theta\;\d \theta\\
+ &= \left[-\cos\theta\sin^{n - 1}\theta \right]_0^{\pi/2} + \int_0^{\pi/2} (n - 1)\cos^2\theta \sin^{n - 2}\theta \;\d\theta\\
+ &= 0 + \int_0^{\pi/2}(n - 1)(1 - \sin^2 \theta)\sin^{n - 2}\theta \;\d \theta\\
+ &= (n - 1)(I_{n - 2} - I_n)
+ \end{align*}
+ So
+ \[
+ I_n = \frac{n - 1}{n}I_{n - 2}.
+ \]
+ We can directly evaluate the integral to obtain $I_0 = \pi/2$, $I_1 = 1$. Then
+ \begin{align*}
+ I_{2n} &= \frac{1}{2}\cdot\frac{3}{4}\cdots \frac{2n - 1}{2n} \pi/2 = \frac{(2n)!}{(2^nn!)^2}\frac{\pi}{2}\\
+ I_{2n + 1} &= \frac{2}{3}\cdot\frac{4}{5}\cdots\frac{2n}{2n + 1} = \frac{(2^nn!)^2}{(2n + 1)!}
+ \end{align*}
+ So using the fact that $I_n$ is decreasing, we know that
+ \[
+ 1 \leq \frac{I_{2n}}{I_{2n + 1}} \leq \frac{I_{2n - 1}}{I_{2n + 1}} = 1 + \frac{1}{2n} \to 1.
+ \]
+ Using the approximation $n!\sim n^{n + 1/2}e^{-n + A}$, where $A$ is the limit we want to find, we can approximate
+ \[
+ \frac{I_{2n}}{I_{2n + 1}} = \pi(2n + 1)\left[\frac{( (2n)!)^2}{2^{4n + 1}(n!)^4}\right] \sim \pi(2n + 1)\frac{1}{ne^{2A}}\to \frac{2\pi}{e^{2A}}.
+ \]
+ Since the last expression is equal to 1, we know that $A = \log\sqrt{2\pi}$. Hooray for magic!
+\end{proof}
+
+This approximation can be improved:
+\begin{prop}[non-examinable]
+ Using the $1/(12n)$ bound from the proof above, we get a better approximation:
+ \[
+ \sqrt{2\pi}n^{n + 1/2}e^{-n + \frac{1}{12n + 1}} \leq n! \leq \sqrt{2\pi} n^{n + 1/2} e^{-n + \frac{1}{12n}}.
+ \]
+\end{prop}
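These bounds are tight enough to check directly in floating point for small $n$:

```python
from math import exp, factorial, pi, sqrt

# Check sqrt(2 pi) n^(n+1/2) e^(-n + 1/(12n+1)) <= n! <= ... e^(-n + 1/(12n)).
for n in range(1, 20):
    base = sqrt(2 * pi) * n**(n + 0.5)
    lower = base * exp(-n + 1 / (12 * n + 1))
    upper = base * exp(-n + 1 / (12 * n))
    assert lower <= factorial(n) <= upper
```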
+
+\begin{eg}
+ Suppose we toss a coin $2n$ times. What is the probability of equal number of heads and tails? The probability is
+ \[
+ \frac{\binom{2n}{n}}{2^{2n}} = \frac{(2n)!}{(n!)^2 2^{2n}} \sim \frac{1}{\sqrt{n\pi}}
+ \]
+\end{eg}
+
+\begin{eg}
+ Suppose we draw 26 cards from 52. What is the probability of getting 13 reds and 13 blacks? The probability is
+ \[
+ \frac{\binom{26}{13}\binom{26}{13}}{\binom{52}{26}} = 0.2181.
+ \]
+\end{eg}
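A quick check of both examples — the half-heads probability against its Stirling approximation $1/\sqrt{n\pi}$, and the 13-red draw:

```python
from math import comb, pi, sqrt

# P(exactly n heads in 2n tosses) vs its asymptotic 1/sqrt(pi n)
n = 1000
p_heads = comb(2 * n, n) / 2**(2 * n)
assert abs(p_heads * sqrt(pi * n) - 1) < 1e-3

# P(13 reds among 26 cards drawn from 52)
p_reds = comb(26, 13)**2 / comb(52, 26)
assert abs(p_reds - 0.2181) < 5e-5
```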
+\section{Axioms of probability}
+\label{sec:axioms}
+\subsection{Axioms and definitions}
+So far, we have semi-formally defined some probabilistic notions. However, what we had above was rather restrictive. We were only allowed to have a finite number of possible outcomes, and all outcomes occur with the same probability. However, most things in the real world do not fit these descriptions. For example, we cannot use this to model a coin that gives heads with probability $\pi^{-1}$.
+
+In general, ``probability'' can be defined as follows:
+\begin{defi}[Probability space]
+ A \emph{probability space} is a triple $(\Omega, \mathcal{F}, \P)$. $\Omega$ is a set called the \emph{sample space}, $\mathcal{F}$ is a collection of subsets of $\Omega$, and $\P: \mathcal{F}\to [0, 1]$ is the \emph{probability measure}.
+
+ $\mathcal{F}$ has to satisfy the following axioms:
+ \begin{enumerate}
+ \item $\emptyset, \Omega\in \mathcal{F}$.
+ \item $A\in \mathcal{F} \Rightarrow A^C\in \mathcal{F}$.
+ \item $A_1, A_2, \cdots \in \mathcal{F} \Rightarrow \bigcup_{i = 1}^\infty A_i \in \mathcal{F}$.
+ \end{enumerate}
+ And $\P$ has to satisfy the following \emph{Kolmogorov axioms}:
+ \begin{enumerate}
+ \item $0 \leq \P(A) \leq 1 $ for all $A\in \mathcal{F}$
+ \item $\P(\Omega) = 1$
+ \item For any countable collection of events $A_1, A_2, \cdots$ which are disjoint, i.e.\ $A_i\cap A_j = \emptyset$ for all $i \not= j$, we have
+ \[
+ \P\left(\bigcup_i A_i\right) = \sum_i \P(A_i).
+ \]
+ \end{enumerate}
+ Items in $\Omega$ are known as the \emph{outcomes}, items in $\mathcal{F}$ are known as the \emph{events}, and $\P(A)$ is the \emph{probability} of the event $A$.
+\end{defi}
+
+If $\Omega$ is finite (or countable), we usually take $\mathcal{F}$ to be all the subsets of $\Omega$, i.e.\ the power set of $\Omega$. However, if $\Omega$ is, say, $\R$, we have to be a bit more careful and only include nice subsets, or else we cannot have a well-defined $\P$.
+
+Often it is not helpful to specify the full function $\P$. Instead, in discrete cases, we just specify the probabilities of each outcome, and use the third axiom to obtain the full $\P$.
+
+\begin{defi}[Probability distribution]
+ Let $\Omega = \{\omega_1, \omega_2, \cdots\}$. Choose numbers $p_1, p_2, \cdots $ such that $\sum_{i = 1}^\infty p_i= 1$. Let $p(\omega_i) = p_i$. Then define
+ \[
+ \P(A) = \sum_{\omega_i\in A} p(\omega_i).
+ \]
+ This $\P(A)$ satisfies the above axioms, and $p_1, p_2, \cdots$ is the \emph{probability distribution}.
+\end{defi}
+
+Using the axioms, we can quickly prove a few rather obvious results.
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item $\P(\emptyset) = 0$
+ \item $\P(A^C) = 1 - \P(A)$
+ \item $A\subseteq B \Rightarrow \P(A) \leq \P(B)$
+ \item $\P (A\cup B) = \P(A) + \P(B) - \P(A\cap B)$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $\Omega$ and $\emptyset$ are disjoint. So $\P(\Omega) + \P(\emptyset) = \P(\Omega \cup \emptyset) = \P(\Omega)$. So $\P(\emptyset) = 0$.
+ \item $\P(A) + \P(A^C) = \P(\Omega) = 1$ since $A$ and $A^C$ are disjoint.
+ \item Write $B = A\cup (B\cap A^C)$. Then $\P(B) = \P(A) + \P(B\cap A^C) \geq \P(A)$.
+ \item $\P(A\cup B) = \P(A) + \P(B\cap A^C)$. We also know that $\P(B) = \P(A\cap B) + \P(B\cap A^C)$. Then the result follows.\qedhere
+ \end{enumerate}
+\end{proof}
+
+From above, we know that $\P(A\cup B) \leq \P(A) + \P(B)$. So we say that $\P$ is a \emph{subadditive} function. Also, $\P(A\cap B) + \P(A \cup B) \leq \P(A) + \P(B)$ (in fact both sides are equal!). We say $\P$ is \emph{submodular}.
+
+The next theorem is better expressed in terms of \emph{limits}.
+\begin{defi}[Limit of events]
+ A sequence of events $A_1, A_2, \cdots$ is \emph{increasing} if $A_1 \subseteq A_2 \subseteq \cdots$. Then we define the \emph{limit} as
+ \[
+ \lim_{n\to \infty} A_n = \bigcup_{1}^\infty A_n.
+ \]
+ Similarly, if they are \emph{decreasing}, i.e.\ $A_1\supseteq A_2\supseteq \cdots$, then
+ \[
+ \lim_{n\to \infty} A_n = \bigcap_{1}^\infty A_n.
+ \]
+\end{defi}
+
+\begin{thm}
+ If $A_1, A_2, \cdots$ is increasing or decreasing, then
+ \[
+ \lim_{n\to \infty} \P(A_n) = \P\left(\lim_{n\to \infty} A_n\right).
+ \]
+\end{thm}
+
+\begin{proof}
+ Take $B_1 = A_1$, $B_2 = A_2\setminus A_1$. In general,
+ \[
+ B_n = A_n\setminus\bigcup_1^{n - 1}A_i.
+ \]
+ Then
+ \[
+ \bigcup_1^n B_i = \bigcup_1^n A_i,\quad \bigcup_1^\infty B_i = \bigcup _1^\infty A_i.
+ \]
+ Then
+ \begin{align*}
+ \P(\lim A_n) &= \P\left(\bigcup_1^\infty A_i\right)\\
+ &= \P\left(\bigcup_1^\infty B_i\right)\\
+ &=\sum_1^\infty \P(B_i)\text{ (Axiom III)}\\
+ &= \lim_{n \to \infty}\sum_{i = 1}^n \P(B_i)\\
+ &= \lim_{n \to \infty} \P\left(\bigcup_1^n A_i\right)\\
+ &= \lim_{n \to \infty} \P(A_n).
+ \end{align*}
+ and the decreasing case is proven similarly (or we can simply apply the above to $A_i^C$).
+\end{proof}
+
+\subsection{Inequalities and formulae}
+\begin{thm}[Boole's inequality]
+ For any $A_1, A_2, \cdots$,
+ \[
+ \P\left(\bigcup_{i = 1}^\infty A_i\right) \leq \sum_{i = 1}^\infty \P(A_i).
+ \]
+\end{thm}
+This is also known as the ``union bound''.
+
+\begin{proof}
+ Our third axiom states a similar formula that only holds for disjoint sets. So we need a (not so) clever trick to make them disjoint. We define
+ \begin{align*}
+ B_1 &= A_1\\
+ B_2 &= A_2\setminus A_1\\
+ B_i &= A_i\setminus \bigcup_{k = 1}^{i - 1}A_k.
+ \end{align*}
+ So we know that
+ \[
+ \bigcup B_i = \bigcup A_i.
+ \]
+ But the $B_i$ are disjoint. So our Axiom (iii) gives
+ \[
+ \P\left(\bigcup_i A_i\right) = \P\left(\bigcup _i B_i\right) = \sum_i \P\left(B_i\right) \leq \sum_i \P\left(A_i\right).
+ \]
+ The last inequality follows from (iii) of the theorem above, since $B_i \subseteq A_i$.
+\end{proof}
+
+\begin{eg}
+ Suppose we have a countably infinite number of biased coins. Let $A_k = [k$th toss head$]$ and $\P(A_k) = p_k$. Suppose $\sum_1^\infty p_k < \infty$. What is the probability that there are infinitely many heads?
+
+ The event ``there is at least one more head after the $i$th toss'' is $\bigcup_{k = i}^\infty A_k$. There are infinitely many heads if and only if, no matter how large $i$ is, there is still at least one more head after the $i$th toss.
+
+ So the probability required is
+ \[
+ \P\left(\bigcap_{i = 1}^\infty\bigcup _{k = i}^\infty A_k\right) = \lim_{i \to \infty} \P\left(\bigcup_{k = i}^\infty A_k\right) \leq \lim_{i\to \infty}\sum_{k = i}^\infty p_k = 0
+ \]
+ Therefore $\P($infinite number of heads$) = 0$.
+\end{eg}
+
+\begin{eg}[Erd\"os 1947]
+ Is it possible to colour a complete $n$-graph (i.e.\ a graph of $n$ vertices with edges between every pair of vertices) red and black such that there is no $k$-vertex complete subgraph with monochrome edges?
+
+ Erd\"os said this is possible if
+ \[
+ \binom{n}{k} 2^{1 - \binom{k}{2}} < 1.
+ \]
+ We colour edges randomly, and let $A_i=[i$th subgraph has monochrome edges$]$. Then the probability that at least one subgraph has monochrome edges is
+ \[
+ \P\left(\bigcup A_i\right) \leq \sum \P(A_i) = \binom{n}{k} 2\cdot 2^{-\binom{k}{2}}.
+ \]
+ The last expression is obtained since there are $\binom{n}{k}$ ways to choose a subgraph; a monochrome subgraph can be either red or black, hence the factor of 2; and the probability of all $\binom{k}{2}$ edges being a given colour is $2^{-\binom{k}{2}}$.
+
+ If this probability is less than 1, then there must be a way to colour them in which it is impossible to find a monochrome subgraph, or else the probability is 1. So if $\binom{n}{k} 2^{1 - \binom{k}{2}} < 1$, the colouring is possible.
+\end{eg}
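We can evaluate the bound for small parameters; a sketch for $k = 4$ (the specific values of $n$ are our choice):

```python
from math import comb

# Erdos' bound C(n, k) * 2^(1 - C(k, 2)) for k = 4: it is below 1 up to n = 6,
# so some 2-colouring of the complete graph on 6 vertices has no monochrome K_4.
k = 4

def bound(n):
    return comb(n, k) * 2**(1 - comb(k, 2))

assert bound(6) < 1 < bound(7)
```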
+
+\begin{thm}[Inclusion-exclusion formula]
+ \begin{align*}
+ \P\left(\bigcup_{i = 1}^n A_i\right) &= \sum_1^n \P(A_i) - \sum_{i_1 < i_2} \P(A_{i_1}\cap A_{i_2}) + \sum_{i_1 < i_2 < i_3}\P(A_{i_1}\cap A_{i_2} \cap A_{i_3}) - \cdots\\
+ &+ (-1)^{n - 1} \P(A_1\cap \cdots \cap A_n).
+ \end{align*}
+\end{thm}
+
+\begin{proof}
+ Perform induction on $n$. $n = 2$ is proven above.
+
+ Then
+ \[
+ \P(A_1\cup A_2\cup \cdots \cup A_n) = \P(A_1) + \P(A_2\cup\cdots\cup A_n) - \P\left(\bigcup_{i = 2}^n (A_1\cap A_i)\right).
+ \]
+ Then we can apply the induction hypothesis for $n - 1$, and expand the mess. The details are very similar to that in IA Numbers and Sets.
+\end{proof}
+
+\begin{eg}
+ Let $1, 2, \cdots, n$ be randomly permuted to $\pi(1), \pi(2), \cdots, \pi(n)$. If
+ $i \not= \pi(i)$ for all $i$, we say we have a \emph{derangement}.
+
+ Let $A_i = [i = \pi(i)]$.
+
+ Then
+ \begin{align*}
+ \P\left(\bigcup _{i = 1}^n A_i\right) &= \sum_{k} \P(A_k) - \sum_{k_1 < k_2} \P(A_{k_1} \cap A_{k_2}) + \cdots\\
+ &= n\cdot \frac{1}{n} - \binom{n}{2}\frac{1}{n}\frac{1}{n - 1} + \binom{n}{3}\frac{1}{n}\frac{1}{n - 1}\frac{1}{n - 2} - \cdots\\
+ &= 1 - \frac{1}{2!} + \frac{1}{3!} - \cdots + (-1)^{n - 1}\frac{1}{n!}\\
+ &\to 1 - e^{-1}
+ \end{align*}
+ So the probability of derangement is $1 - \P(\bigcup A_k) \approx e^{-1}\approx 0.368$.
+\end{eg}
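The limit can be confirmed by exhaustively counting derangements for a modest $n$:

```python
from itertools import permutations
from math import exp, factorial

n = 8
derangements = sum(1 for pi in permutations(range(n))
                   if all(pi[i] != i for i in range(n)))
p = derangements / factorial(n)
assert abs(p - exp(-1)) < 1e-4   # already very close to 1/e = 0.3679...
```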
+
+Recall that, from inclusion exclusion,
+\[
+ \P(A\cup B\cup C) = \P(A) + \P(B) + \P(C) - \P(AB) - \P(BC) - \P(AC) + \P(ABC),
+\]
+where $\P(AB)$ is a shorthand for $\P(A\cap B)$. If we only take the first three terms, then we get Boole's inequality
+\[
+ \P(A\cup B\cup C) \leq \P(A) + \P(B) + \P(C).
+\]
+In general
+\begin{thm}[Bonferroni's inequalities]
+ For any events $A_1, A_2, \cdots, A_n$ and $1 \leq r\leq n$, if $r$ is odd, then
+ \begin{align*}
+ \P\left(\bigcup_{1}^n A_i\right) &\leq \sum_{i_1} \P(A_{i_1}) - \sum_{i_1 < i_2} \P(A_{i_1}A_{i_2}) + \sum_{i_1 < i_2 < i_3} \P(A_{i_1}A_{i_2}A_{i_3}) + \cdots\\
+ &+ \sum_{i_1 < i_2 < \cdots < i_r} \P(A_{i_1}A_{i_2}A_{i_3}\cdots A_{i_r}).\\
+ \intertext{If $r$ is even, then}
+ \P\left(\bigcup_{1}^n A_i\right) &\geq \sum_{i_1} \P(A_{i_1}) - \sum_{i_1 < i_2} \P(A_{i_1}A_{i_2}) + \sum_{i_1 < i_2 < i_3} \P(A_{i_1}A_{i_2}A_{i_3}) + \cdots\\
+ &- \sum_{i_1 < i_2 < \cdots < i_r} \P(A_{i_1}A_{i_2}A_{i_3}\cdots A_{i_r}).
+ \end{align*}
+\end{thm}
+
+\begin{proof}
+ Easy induction on $n$.
+\end{proof}
+
+\begin{eg}
+ Let $\Omega = \{1, 2, \cdots, m\}$ and $1 \leq j, k \leq m$. Write $A_k = \{1, 2, \cdots, k\}$. Then
+ \[
+ A_k \cap A_j = \{1, 2, \cdots, \min(j, k)\} = A_{\min(j, k)}
+ \]
+ and
+ \[
+ A_k \cup A_j = \{1, 2, \cdots, \max(j, k)\} = A_{\max(j, k)}.
+ \]
+ We also have $\P(A_k) = k/m$.
+
+ Now let $1 \leq x_1, \cdots, x_n \leq m$ be some numbers. Then Bonferroni's inequality says
+ \[
+ \P\left(\bigcup A_{x_{i}}\right) \geq \sum \P(A_{x_i}) - \sum_{i < j} \P(A_{x_i}\cap A_{x_j}).
+ \]
+ So
+ \[
+ \max\{x_1, x_2, \cdots, x_n\} \geq \sum_i x_i - \sum_{i < j} \min\{x_i, x_j\}.
+ \]
+\end{eg}
+
+\subsection{Independence}
+\begin{defi}[Independent events]
+ Two events $A$ and $B$ are \emph{independent} if
+ \[
+ \P(A\cap B) = \P(A)\P(B).
+ \]
+ Otherwise, they are said to be \emph{dependent}.
+\end{defi}
+Two events are independent if they are not related to each other. For example, if you roll two dice separately, the outcomes will be independent.
+
+\begin{prop}
+ If $A$ and $B$ are independent, then $A$ and $B^C$ are independent.
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \P(A\cap B^C) &= \P(A) - \P(A\cap B)\\
+ &= \P(A) - \P(A)\P(B)\\
+ &= \P(A)(1 - \P(B))\\
+ &= \P(A)\P(B^C)
+ \end{align*}
+\end{proof}
+
+This definition applies to two events. What does it mean to say that three or more events are independent?
+\begin{eg}
+ Roll two fair dice. Let $A_1$ and $A_2$ be the events that the first and second die, respectively, show an odd number. Let $A_3 = [$sum is odd$]$. The event probabilities are as follows:
+ \begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ Event & Probability\\
+ \midrule
+ $A_1$ & $1/2$\\
+ $A_2$ & $1/2$\\
+ $A_3$ & $1/2$\\
+ $A_1\cap A_2$ & $1/4$\\
+ $A_1\cap A_3$ & $1/4$\\
+ $A_2\cap A_3$ & $1/4$\\
+ $A_1\cap A_2\cap A_3$ & $0$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We see that $A_1$ and $A_2$ are independent, $A_1$ and $A_3$ are independent, and $A_2$ and $A_3$ are independent. However, the collection of all three are \emph{not} independent, since if $A_1$ and $A_2$ are true, then $A_3$ cannot possibly be true.
+\end{eg}
+
+From the example above, we see that just because a set of events is pairwise independent does not mean they are independent all together. We define:
+\begin{defi}[Independence of multiple events]
+ Events $A_1, A_2, \cdots$ are said to be \emph{mutually independent} if
+ \[
+ \P(A_{i_1}\cap A_{i_2} \cap \cdots \cap A_{i_r}) = \P(A_{i_1})\P(A_{i_2})\cdots \P(A_{i_r})
+ \]
+ for any distinct $i_1, i_2, \cdots, i_r$ and any $r \geq 2$.
+\end{defi}
+\begin{eg}
+ Let $A_{ij}$ be the event that dice $i$ and $j$ show the same number. We roll 4 dice. Then
+ \[
+ \P(A_{12}\cap A_{13}) = \frac{1}{6}\cdot \frac{1}{6} = \frac{1}{36} = \P(A_{12})\P(A_{13}).
+ \]
+ But
+ \[
+ \P(A_{12}\cap A_{13}\cap A_{23}) = \frac{1}{36} \not= \P(A_{12})\P(A_{13})\P(A_{23}).
+ \]
+ So they are not mutually independent.
+\end{eg}
+
+We can also apply this concept to experiments. Suppose we model two independent experiments with $\Omega_1 = \{\alpha_1, \alpha_2, \cdots\}$ and $\Omega_2 = \{\beta_1, \beta_2, \cdots\}$ with probabilities $\P(\alpha_i) = p_i$ and $\P(\beta_i) = q_i$. Further suppose that these two experiments are independent, i.e.
+\[
+ \P((\alpha_i, \beta_j)) = p_iq_j
+\]
+for all $i, j$. Then we can have a new sample space $\Omega = \Omega_1\times \Omega_2$.
+
+Now suppose $A\subseteq \Omega_1$ and $B\subseteq \Omega_2$ are results (i.e.\ events) of the two experiments. We can view them as subspaces of $\Omega$ by rewriting them as $A\times \Omega_2$ and $\Omega_1\times B$. Then the probability
+\[
+ \P(A\cap B) = \sum_{\alpha_i\in A, \beta_j\in B} p_iq_j = \sum_{\alpha_i\in A} p_i\sum_{\beta_j\in B}q_j = \P(A)\P(B).
+\]
+So we say the two experiments are ``independent'' even though the term usually refers to different events in the same experiment. We can generalize this to $n$ independent experiments, or even countably infinitely many experiments.
+
+
+\subsection{Important discrete distributions}
+We're now going to quickly go through a few important discrete probability distributions. By \emph{discrete} we mean the sample space is countable. The sample space is $\Omega = \{\omega_1, \omega_2, \cdots\}$ and $p_i = \P(\{\omega_i\})$.
+
+\begin{defi}[Bernoulli distribution]
+ Suppose we toss a coin. $\Omega=\{H, T\}$ and $p\in [0, 1]$. The \emph{Bernoulli distribution}, denoted $B(1, p)$ has
+ \[
+ \P(H) = p;\quad \P(T) = 1- p.
+ \]
+\end{defi}
+
+\begin{defi}[Binomial distribution]
+ Suppose we toss a coin $n$ times, each with probability $p$ of getting heads. Then
+ \[
+ \P(HHTT\cdots T) = pp(1 - p)\cdots (1 - p).
+ \]
+ So
+ \[
+ \P(\text{two heads}) = \binom{n}{2}p^2(1 - p)^{n -2}.
+ \]
+ In general,
+ \[
+ \P(k\text{ heads}) = \binom{n}{k}p^k(1 - p)^{n - k}.
+ \]
+ We call this the \emph{binomial distribution} and write it as $B(n, p)$.
+\end{defi}
+
+\begin{defi}[Geometric distribution]
+ Suppose we toss a coin with probability $p$ of getting heads. The probability of having a head after $k$ consecutive tails is
+ \[
+ p_k = (1- p)^k p
+ \]
+ This is the \emph{geometric distribution}. We say it is \emph{memoryless} because the number of tails we have seen so far gives no information about how much longer we have to wait for a head.
+\end{defi}
+
+\begin{defi}[Hypergeometric distribution]
+ Suppose we have an urn with $n_1$ red balls and $n_2$ black balls. We choose $n$ balls. The probability that there are $k$ red balls is
+ \[
+ \P(k\text{ red}) = \frac{\binom{n_1}{k}\binom{n_2}{n - k}}{\binom{n_1 + n_2}{n}}.
+ \]
+\end{defi}
+
+\begin{defi}[Poisson distribution]
+ The \emph{Poisson distribution} denoted $P(\lambda)$ is
+ \[
+ p_k = \frac{\lambda^k}{k!}e^{-\lambda}
+ \]
+ for $k\in \N$.
+\end{defi}
+What is this weird distribution? It is a distribution used to model rare events. Suppose that an event happens at a rate of $\lambda$. We can think of this as there being a lot of trials, say $n$ of them, and each has a probability $\lambda/n$ of succeeding. As we take the limit $n\to \infty$, we obtain the Poisson distribution.
+
+\begin{thm}[Poisson approximation to binomial]
+ Suppose $n\to \infty$ and $p\to 0$ such that $np = \lambda$. Then
+ \[
+ q_k = \binom{n}{k}p^k(1 -p)^{n - k} \to \frac{\lambda^k}{k!}e^{-\lambda}.
+ \]
+\end{thm}
+
+\begin{proof}
+\begin{align*}
+ q_k &= \binom{n}{k}p^k(1 - p)^{n - k}\\
+ &= \frac{1}{k!} \frac{n(n - 1)\cdots(n - k + 1)}{n^k}(np)^k \left(1 - \frac{np}{n}\right)^{n - k}\\
+ &\to \frac{1}{k!}\lambda^ke^{-\lambda}
+\end{align*}
+since $(1 - a/n)^n \to e^{-a}$.
+\end{proof}
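We can watch this convergence numerically; a sketch with $\lambda = 2$ and $k = 3$ (arbitrary choices):

```python
from math import comb, exp, factorial

lam, k = 2.0, 3
poisson_pk = lam**k / factorial(k) * exp(-lam)
for n in (10, 100, 10000):
    p = lam / n
    binom_pk = comb(n, k) * p**k * (1 - p)**(n - k)
    # the binomial pmf approaches the Poisson pmf as n grows with np fixed
assert abs(binom_pk - poisson_pk) < 1e-4
```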
+\subsection{Conditional probability}
+\begin{defi}[Conditional probability]
+ Suppose $B$ is an event with $\P(B) > 0$. For any event $A\subseteq \Omega$, the \emph{conditional probability of $A$ given $B$} is
+ \[
+ \P(A\mid B) = \frac{\P(A\cap B)}{\P(B)}.
+ \]
+ We interpret this as the probability that $A$ happens given that $B$ has happened.
+\end{defi}
+Note that if $A$ and $B$ are independent, then
+\[
+ \P(A\mid B) = \frac{\P(A\cap B)}{\P(B)} = \frac{\P(A)\P(B)}{\P(B)} = \P(A).
+\]
+\begin{eg}
+ In a game of poker, let $A_i = [$player $i$ gets royal flush$]$. Then
+ \[
+ \P(A_1) = 1.539\times 10^{-6}.
+ \]
+ and
+ \[
+ \P(A_2\mid A_1) = 1.969\times 10^{-6}.
+ \]
+ It is significantly bigger, albeit still incredibly tiny. So we say ``good hands attract''.
+
+ If $\P(A\mid B) > \P(A)$, then we say that $B$ attracts $A$. Since
+ \[
+ \frac{\P(A\cap B)}{\P(B)} > \P(A) \Leftrightarrow \frac{\P(A\cap B)}{\P(A)} > \P(B),
+ \]
+ $A$ attracts $B$ if and only if $B$ attracts $A$. We can also say $A$ repels $B$ if $A$ attracts $B^C$.
+\end{eg}
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item $\P(A\cap B) = \P(A\mid B)\P(B)$.
+ \item $\P(A\cap B\cap C) = \P(A\mid B\cap C) \P(B\mid C) \P(C)$.
+ \item $\P(A\mid B\cap C) = \frac{\P(A\cap B\mid C)}{\P(B\mid C)}$.
+ \item The function $\P(\ph \mid B)$ restricted to subsets of $B$ is a probability function (or measure).
+ \end{enumerate}
+\end{thm}
+\begin{proof}
+ Proofs of (i), (ii) and (iii) are trivial. So we only prove (iv). To prove this, we have to check the axioms.
+
+ \begin{enumerate}
+ \item Let $A\subseteq B$. Then $\P(A\mid B) = \frac{\P(A\cap B)}{\P(B)} \leq 1$.
+ \item $\P(B\mid B) = \frac{\P(B)}{\P(B)} = 1$.
+ \item Let $A_i$ be disjoint events that are subsets of $B$. Then
+ \begin{align*}
+ \P\left(\left.\bigcup_i A_i\right\vert B\right) &= \frac{\P(\bigcup_i A_i\cap B)}{\P(B)}\\
+ &= \frac{\P\left(\bigcup_i A_i\right)}{\P(B)}\\
+ &= \sum \frac{\P(A_i)}{\P(B)}\\
+ &=\sum \frac{\P(A_i\cap B)}{\P(B)}\\
+ &= \sum \P(A_i \mid B).\qedhere
+ \end{align*}%\qedhere
+ \end{enumerate}
+\end{proof}
+\begin{defi}[Partition]
+ A \emph{partition of the sample space} is a collection of disjoint events $\{B_i\}_{i = 0}^\infty$ such that $\bigcup_i B_i = \Omega$.
+\end{defi}
+For example, ``odd'' and ``even'' partition the sample space into two events.
+
+The following result should be clear:
+\begin{prop}
+ If $B_i$ is a partition of the sample space, and $A$ is any event, then
+ \[
+ \P(A) = \sum_{i = 1}^\infty \P(A\cap B_i) = \sum_{i = 1}^\infty \P(A\mid B_i) \P(B_i).
+ \]
+\end{prop}
+
+\begin{eg}
+ A fair coin is tossed repeatedly. The gambler gets $+1$ for head, and $-1$ for tail. Continue until he is broke or achieves $\$a$. Let
+ \[
+ p_x = \P(\text{goes broke}\mid \text{starts with \$}x),
+ \]
+ and $B_1$ be the event that he gets head on the first toss. Then
+ \begin{align*}
+ p_x &= \P(B_1)p_{x + 1} + \P(B_1^C) p_{x - 1}\\
+ p_x &= \frac{1}{2}p_{x + 1} + \frac{1}{2}p_{x - 1}
+ \end{align*}
+ We have two boundary conditions $p_0 = 1$, $p_a = 0$. Then solving the recurrence relation, we have
+ \[
+ p_x = 1 - \frac{x}{a}.
+ \]
+\end{eg}
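We can check that $p_x = 1 - x/a$ really does satisfy the recurrence and both boundary conditions, in exact arithmetic (the value $a = 10$ is an arbitrary choice):

```python
from fractions import Fraction

a = 10
# Candidate solution p_x = 1 - x/a of p_x = (p_{x+1} + p_{x-1})/2,
# with boundary conditions p_0 = 1 and p_a = 0.
p = [Fraction(a - x, a) for x in range(a + 1)]
assert p[0] == 1 and p[a] == 0
for x in range(1, a):
    assert p[x] == (p[x + 1] + p[x - 1]) / 2
```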
+
+\begin{thm}[Bayes' formula]
+ Suppose $B_i$ is a partition of the sample space, and $A$ and $B_i$ all have non-zero probability. Then for any $B_i$,
+ \[
+ \P(B_i \mid A) = \frac{\P(A \mid B_i)\P(B_i)}{\sum_j\P(A \mid B_j)\P(B_j)}.
+ \]
+ Note that the denominator is simply $\P(A)$ written in a fancy way.
+\end{thm}
+
+\begin{eg}[Screen test]
+ Suppose we have a screening test that tests whether a patient has a particular disease. We denote positive and negative results as $+$ and $-$ respectively, and $D$ denotes the person having disease. Suppose that the test is not absolutely accurate, and
+ \begin{align*}
+ \P(+ \mid D) &= 0.98\\
+ \P(+ \mid D^C) &= 0.01\\
+ \P(D) &= 0.001.
+ \end{align*}
+ So what is the probability that a person has the disease given that he received a positive result?
+ \begin{align*}
+ \P(D\mid +) &= \frac{\P(+ \mid D)\P(D)}{\P(+\mid D)\P(D) + \P(+\mid D^C)\P(D^C)}\\
+ &= \frac{0.98\cdot 0.001}{0.98\cdot 0.001 + 0.01\cdot 0.999}\\
+ &= 0.09
+ \end{align*}
+ So this test is pretty useless. Even if you get a positive result, since the disease is so rare, it is more likely that you don't have the disease and get a false positive.
+\end{eg}
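The arithmetic in this example is easy to wrap in a short function (the names are ours), which makes it painless to explore how the posterior changes with the prevalence $\P(D)$:

```python
def posterior(prior, sens, false_pos):
    """P(D | +) by Bayes' formula with the partition {D, D^C}."""
    return sens * prior / (sens * prior + false_pos * (1 - prior))

p = posterior(prior=0.001, sens=0.98, false_pos=0.01)
# p is roughly 0.09, matching the calculation above
```

Raising the prior to, say, $0.1$ (a high-risk group) pushes the posterior above $0.9$, which is why such tests are more informative when applied to targeted populations.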
+
+\begin{eg}
+ Consider the two following cases:
+ \begin{enumerate}
+ \item I have 2 children, one of whom is a boy.
+ \item I have two children, one of whom is a son born on a Tuesday.
+ \end{enumerate}
+ What is the probability that both of them are boys?
+
+ \begin{enumerate}
+ \item $\P(BB\mid BB\cup BG) = \frac{1/4}{1/4 + 2/4} = \frac{1}{3}$.
+ \item Let $B^*$ denote a boy born on a Tuesday, and $B$ a boy not born on a Tuesday. Then
+ \begin{align*}
+ \P(B^*B^* \cup B^*B\mid BB^* \cup B^*B^*\cup B^*G)&=\frac{\frac{1}{14}\cdot \frac{1}{14} + 2\cdot \frac{1}{14}\cdot\frac{6}{14}}{\frac{1}{14}\cdot \frac{1}{14} + 2\cdot \frac{1}{14}\cdot\frac{6}{14} + 2\cdot \frac{1}{14}\cdot \frac{1}{2}}\\
+ &= \frac{13}{27}.
+ \end{align*}
+ \end{enumerate}
+ How can we understand this? It is much easier to have a boy born on a Tuesday if you have two boys than if you have only one. So given the information that a boy is born on a Tuesday, it is less likely that there is just one boy. In other words, it is more likely that there are two boys.
+\end{eg}
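One can also verify the $13/27$ by brute force: model each child as one of $14$ equally likely (sex, day) types and enumerate all $196$ ordered pairs. The helper names below are ours, and day $2$ arbitrarily stands for Tuesday.

```python
from fractions import Fraction
from itertools import product

children = [(sex, day) for sex in "BG" for day in range(7)]  # 14 equally likely types
outcomes = list(product(children, children))                 # 196 ordered pairs

def pr(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def tuesday_boy(o): return any(c == ("B", 2) for c in o)
def both_boys(o):   return all(c[0] == "B" for c in o)

answer = pr(lambda o: tuesday_boy(o) and both_boys(o)) / pr(tuesday_boy)
# answer == Fraction(13, 27)
```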
+
+\section{Discrete random variables}
+With what we've got so far, we are able to answer questions like ``what is the probability of getting heads?'' or ``what is the probability of getting 10 heads in a row?''. However, we cannot answer questions like ``what do we expect to get on average?''. What does it even mean to take the average of a ``head'' and a ``tail''?
+
+To make sense of this notion, we have to assign a number to each outcome. For example, we can let ``heads'' correspond to $1$ and ``tails'' to $0$. Then on average, we can expect to get $0.5$. This is the idea of a random variable.
+
+\subsection{Discrete random variables}
+\begin{defi}[Random variable]
+ A \emph{random variable} $X$ taking values in a set $\Omega_X$ is a function $X: \Omega \to \Omega_X$. $\Omega_X$ is usually a set of numbers, e.g.\ $\R$ or $\N$.
+\end{defi}
+Intuitively, a random variable assigns a ``number'' (or a thing in $\Omega_X$) to each outcome (e.g.\ assign $6$ to the outcome ``die roll gives 6'').
+
+\begin{defi}[Discrete random variables]
+ A random variable is \emph{discrete} if $\Omega_X$ is finite or countably infinite.
+\end{defi}
+
+\begin{notation}
+ Let $T\subseteq \Omega_X$, define
+ \[
+ \P(X\in T) = \P(\{\omega \in \Omega: X(\omega) \in T\}).
+ \]
+ i.e.\ the probability that the outcome is in $T$.
+\end{notation}
+Here, instead of talking about the probability of getting a particular outcome or event, we are concerned with the probability of a random variable taking a particular value. If $\Omega$ is itself countable, then we can write this as
+\[
+ \P(X \in T) = \sum_{\omega \in \Omega: X(\omega)\in T}p_\omega.
+\]
+\begin{eg}
+ Let $X$ be the value shown by rolling a fair die. Then $\Omega_X = \{1, 2, 3, 4, 5, 6\}$. We know that
+ \[
+ \P(X = i) = \frac{1}{6}.
+ \]
+ We call this the discrete uniform distribution.
+\end{eg}
+\begin{defi}[Discrete uniform distribution]
+ A \emph{discrete uniform distribution} is a discrete distribution with finitely many possible outcomes, in which each outcome is equally likely.
+\end{defi}
+
+\begin{eg}
+ Suppose we roll two dice, and let the values obtained be $X$ and $Y$. Then the sum can be represented by $X + Y$, with
+ \[
+ \Omega_{X + Y} = \{2, 3, \cdots, 12\}.
+ \]
+\end{eg}
+This shows that we can add random variables to get a new random variable.
+
+\begin{notation}
+ We write
+ \[
+ \P_X(x) = \P(X = x).
+ \]
+ We can also write $X\sim B(n, p)$ to mean
+ \[
+ \P(X = r) = \binom{n}{r}p^r(1 - p)^{n - r},
+ \]
+ and similarly for the other distributions we have come up with before.
+\end{notation}
+
+\begin{defi}[Expectation]
+ The \emph{expectation} (or \emph{mean}) of a real-valued random variable $X$ is
+ \[
+ \E[X] = \sum_{\omega\in \Omega}p_\omega X(\omega),
+ \]
+ provided this is \emph{absolutely convergent}. Otherwise, we say the expectation doesn't exist. Alternatively,
+ \begin{align*}
+ \E[X] &= \sum_{x\in \Omega_X}\sum_{\omega: X(\omega) = x}p_\omega X(\omega)\\
+ &= \sum_{x\in \Omega_X}x\sum_{\omega:X(\omega) = x}p_\omega\\
+ &= \sum_{x\in \Omega_X}x\P(X = x).
+ \end{align*}
+ We are sometimes lazy and just write $\E X$.
+\end{defi}
+This is the ``average'' value of $X$ we expect to get. Note that this definition only holds in the case where the sample space $\Omega$ is countable. If $\Omega$ is continuous (e.g.\ the whole of $\R$), then we have to define the expectation as an integral.
+
+\begin{eg}
+ Let $X$ be the sum of the outcomes of two dice. Then
+ \[
+ \E[X] = 2\cdot \frac{1}{36} + 3\cdot \frac{2}{36} + \cdots + 12\cdot \frac{1}{36} = 7.
+ \]
+\end{eg}
+Note that $\E[X]$ might not exist if the sum is not absolutely convergent. However, it is possible for the expected value to be infinite:
+
+\begin{eg}[St. Petersburg paradox]
+ Suppose we play a game in which you keep tossing a coin until you get a tail. If you get a tail on the $i$th round, then I pay you $\$2^i$. The expected value is
+ \[
+ \E[X] = \frac{1}{2}\cdot 2 + \frac{1}{4}\cdot 4 + \frac{1}{8}\cdot 8 + \cdots = \infty.
+ \]
+ This means that on average, you can expect to get an infinite amount of money! In real life, though, people would hardly be willing to pay $\$20$ to play this game. There are many ways to resolve this paradox, such as taking into account the fact that the host of the game has only finitely much money, so your real expected gain is much smaller.
+\end{eg}
+
+\begin{eg}
+ We calculate the expected values of different distributions:
+ \begin{enumerate}
+ \item Poisson $P(\lambda)$. Let $X\sim P(\lambda)$. Then
+ \[
+ P_X(r) = \frac{\lambda^r e^{-\lambda}}{r!}.
+ \]
+ So
+ \begin{align*}
+ \E[X] &= \sum_{r = 0}^\infty rP(X = r)\\
+ &= \sum_{r = 0}^\infty \frac{r \lambda^r e^{-\lambda}}{r!}\\
+ &= \sum_{r = 1}^\infty \lambda \frac{\lambda^{r - 1}e^{-\lambda}}{(r - 1)!}\\
+ &= \lambda\sum_{r = 0}^\infty\frac{\lambda^r e^{-\lambda}}{r!}\\
+ &= \lambda.
+ \end{align*}
+ \item Let $X\sim B(n, p)$. Then
+ \begin{align*}
+ \E[X] &= \sum_0^n r\P(X = r)\\
+ &= \sum_0^n r\binom{n}{r} p^r(1 - p)^{n - r}\\
+ &= \sum_0^n r\frac{n!}{r!(n - r)!}p^r (1 - p)^{n - r}\\
+ &= np\sum_{r = 1}^n \frac{(n - 1)!}{(r - 1)![(n - 1) - (r - 1)]!}p^{r - 1}(1 - p)^{(n - 1) - (r - 1)}\\
+ &= np\sum_{0}^{n - 1}\binom{n - 1}{r}p^r(1 - p)^{n - 1 - r}\\
+ &= np.
+ \end{align*}
+ \end{enumerate}
+\end{eg}
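Both results can be sanity-checked by summing the series directly. In the sketch below (function names ours), the Poisson terms are built up iteratively to avoid huge factorials:

```python
from math import comb, exp

def binomial_mean(n, p):
    """E[X] for X ~ B(n, p), by direct summation of r * P(X = r)."""
    return sum(r * comb(n, r) * p**r * (1 - p)**(n - r) for r in range(n + 1))

def poisson_mean(lam, terms=200):
    """E[X] for X ~ P(lambda), by a truncated direct summation."""
    total, term = 0.0, exp(-lam)     # term = P(X = 0)
    for r in range(1, terms):
        term *= lam / r              # term is now P(X = r)
        total += r * term
    return total

# the calculations above give E[X] = np for B(n, p), and E[X] = lambda for P(lambda)
```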
+Given a random variable $X$, we can create new random variables such as $X + 3$ or $X^2$. Formally, let $f: \R \to \R$ and $X$ be a real-valued random variable. Then $f(X)$ is a new random variable that maps $\omega \mapsto f(X(\omega))$.
+
+\begin{eg}
+ if $a, b, c$ are constants, then $a + bX$ and $(X - c)^2$ are random variables, defined as
+ \begin{align*}
+ (a + bX)(\omega) &= a + bX(\omega)\\
+ (X - c)^2(\omega) &= (X(\omega) - c)^2.
+ \end{align*}
+\end{eg}
+
+\begin{thm}\leavevmode
+\begin{enumerate}
+ \item If $X \geq 0$, then $\E[X] \geq 0$.
+ \item If $X\geq 0$ and $\E[X] = 0$, then $\P(X = 0) = 1$.
+ \item If $a$ and $b$ are constants, then $\E[a + bX] = a + b\E[X]$.
+ \item If $X$ and $Y$ are random variables, then $\E[X + Y] = \E[X] + \E[Y]$. This is true even if $X$ and $Y$ are not independent.
+ \item $\E[X]$ is a constant that minimizes $\E[(X - c)^2]$ over $c$.
+\end{enumerate}
+\end{thm}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $X \geq 0$ means that $X(\omega) \geq 0$ for all $\omega$. Then
+ \[
+ \E[X] = \sum_\omega p_\omega X(\omega) \geq 0.
+ \]
+ \item If there exists some $\omega$ with $X(\omega) > 0$ and $p_\omega > 0$, then $\E[X] > 0$. So $X(\omega) = 0$ whenever $p_\omega > 0$, i.e.\ $\P(X = 0) = 1$.
+ \item
+ \[
+ \E[a + bX] = \sum_\omega (a + bX(\omega))p_\omega = a\sum_\omega p_\omega + b\sum_\omega p_\omega X(\omega) = a + b\,\E[X].
+ \]
+ \item
+ \[ \E[X + Y] = \sum_\omega p_\omega[X(\omega) + Y(\omega)] = \sum_\omega p_\omega X(\omega) + \sum_\omega p_\omega Y(\omega) = \E[X] + \E[Y].
+ \]
+ \item
+ \begin{align*}
+ \E[(X - c)^2] &= \E[(X - \E[X] + \E[X] - c)^2]\\
+ &= \E[(X - \E[X])^2 + 2(\E[X] - c)(X - \E[X]) + (\E[X] - c)^2]\\
+ &= \E[(X - \E[X])^2] + 0 + (\E[X] - c)^2.
+ \end{align*}
+ This is clearly minimized when $c = \E[X]$. Note that we obtained the zero in the middle because $\E[X - \E[X]] = \E[X] - \E[X] = 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+An easy generalization of (iv) above is
+\begin{thm}
+ For any random variables $X_1, X_2, \cdots X_n$, for which the following expectations exist,
+ \[
+ \E\left[\sum_{i = 1}^n X_i\right] = \sum_{i = 1}^n \E[X_i].
+ \]
+\end{thm}
+
+\begin{proof}
+ \[
+ \sum_\omega p(\omega)[X_1(\omega) + \cdots + X_n(\omega)] = \sum_\omega p(\omega)X_1(\omega) + \cdots + \sum_\omega p(\omega) X_n(\omega).\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Variance and standard deviation]
+ The \emph{variance} of a random variable $X$ is defined as
+ \[
+ \var(X) = \E[(X - \E[X])^2].
+ \]
+ The \emph{standard deviation} is the square root of the variance, $\sqrt{\var(X)}$.
+\end{defi}
+This is a measure of how ``dispersed'' the random variable $X$ is. If we have a low variance, then the value of $X$ is very likely to be close to $\E[X]$.
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item $\var X \geq 0$. If $\var X = 0$, then $\P(X = \E[X]) = 1$.
+ \item $\var (a + bX) = b^2 \var(X)$. This can be proved by expanding the definition and using the linearity of the expected value.
+ \item $\var(X) = \E[X^2] - \E[X]^2$, also proven by expanding the definition.
+ \end{enumerate}
+\end{thm}
+
+\begin{eg}[Binomial distribution]
+ Let $X\sim B(n, p)$ be a binomial distribution. Then $\E[X] = np$. We also have
+ \begin{align*}
+ \E[X(X - 1)] &= \sum_{r = 0}^n r(r - 1)\frac{n!}{r!(n - r)!} p^r(1 - p)^{n - r}\\
+ &= n(n - 1)p^2\sum_{r = 2}^n \binom{n - 2}{r - 2} p^{r - 2}(1 - p)^{(n - 2) - (r - 2)}\\
+ &= n(n - 1)p^2.
+ \end{align*}
+ The sum is $1$ since it adds up all the probabilities of a binomial $B(n - 2, p)$ distribution.
+ So $\E[X^2] = n(n - 1)p^2 + \E[X] = n(n - 1)p^2 + np$. So
+ \[
+ \var(X) = \E[X^2] - (\E[X])^2 = np(1 - p) = npq.
+ \]
+\end{eg}
+
+\begin{eg}[Poisson distribution]
+ If $X\sim P(\lambda)$, then $\E[X] = \lambda$, and $\var(X) = \lambda$, since $P(\lambda)$ is $B(n, p)$ with $n\to \infty, p \to 0, np \to \lambda$.
+\end{eg}
+
+\begin{eg}[Geometric distribution]
+ Suppose $\P(X = r) = q^r p$ for $r= 0, 1, 2, \cdots$. Then
+ \begin{align*}
+ \E[X] &= \sum_{0}^\infty rpq^r \\
+ &= pq\sum_{0}^\infty rq^{r - 1}\\
+ &= pq\sum_{0}^\infty \frac{\d }{\d q}q^r\\
+ &= pq\frac{\d }{\d q}\sum_{0}^\infty q^r\\
+ &= pq\frac{\d }{\d q}\frac{1}{1 - q}\\
+ &= \frac{pq}{(1 - q)^2}\\
+ &= \frac{q}{p}.
+ \end{align*}
+ Then
+ \begin{align*}
+ \E[X(X - 1)] &= \sum_{0}^\infty r(r - 1)pq^r\\
+ &= pq^2 \sum_0^\infty r(r - 1)q^{r - 2}\\
+ &= pq^2\frac{\d ^2}{\d q^2}\frac{1}{1 - q}\\
+ &= \frac{2pq^2}{(1 - q)^3}
+ \end{align*}
+ So the variance is
+ \[
+ \var(X) = \frac{2pq^2}{(1 - q)^3}+ \frac{q}{p} - \frac{q^2}{p^2} = \frac{q}{p^2}.
+ \]
+\end{eg}
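A truncated direct summation confirms the mean $q/p$ and variance $q/p^2$ (the function name is ours; the tail beyond a couple of thousand terms is negligible for moderate $p$):

```python
def geometric_moments(p, terms=2000):
    """Mean and variance of P(X = r) = q^r p, r = 0, 1, 2, ..., by summation."""
    q = 1 - p
    probs = [q**r * p for r in range(terms)]
    mean = sum(r * pr for r, pr in enumerate(probs))
    var = sum(r * r * pr for r, pr in enumerate(probs)) - mean**2
    return mean, var

m, v = geometric_moments(0.4)
# the formulas predict mean q/p = 1.5 and variance q/p^2 = 3.75
```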
+
+\begin{defi}[Indicator function]
+ The \emph{indicator function} or \emph{indicator variable} $I[A]$ (or $I_A$) of an event $A\subseteq \Omega$ is
+ \[
+ I[A](\omega) =
+ \begin{cases}
+ 1 & \omega\in A\\
+ 0 & \omega\not\in A
+ \end{cases}
+ \]
+\end{defi}
+This indicator random variable is not interesting by itself. However, it is a rather useful tool to prove results.
+
+It has the following properties:
+\begin{prop}\leavevmode
+ \begin{itemize}
+ \item $\E[I[A]] = \sum_\omega p(\omega) I[A](\omega) = \P(A)$.
+ \item $I[A^C] = 1 - I[A]$.
+ \item $I[A\cap B] = I[A]I[B]$.
+ \item $I[A\cup B] = I[A] + I[B] - I[A]I[B]$.
+ \item $I[A]^2 = I[A]$.
+ \end{itemize}
+\end{prop}
+These are easy to prove from the definition. In particular, the last property comes from the fact that $I[A]$ is either $0$ or $1$, and $0^2 = 0, 1^2 = 1$.
+
+\begin{eg}
+ Let $2n$ people ($n$ husbands and $n$ wives, with $n > 2$) sit alternately man-woman around a table at random. Let $N$ be the number of couples sitting next to each other.
+
+ Let $A_i = [i$th couple sits together$]$. Then
+ \[
+ N = \sum_{i = 1}^n I[A_i].
+ \]
+ Then
+ \[
+ \E[N] = \E\left[\sum I[A_i]\right] = \sum_{1}^n \E\big[I[A_i]\big] = n\E\big[I[A_1]\big] = n\P(A_1) = n\cdot \frac{2}{n} = 2.
+ \]
+ We also have
+ \begin{align*}
+ \E[N^2] &= \E\left[\left(\sum I[A_i]\right)^2\right]\\
+ &= \E\left[\sum_i I[A_i]^2 + 2\sum_{i < j}I[A_i]I[A_j]\right]\\
+ &= n\E\big[I[A_1]\big] + n(n - 1)\E\big[I[A_1]I[A_2]\big]
+ \end{align*}
+ We have $\E[I[A_1]I[A_2]] = \P(A_1\cap A_2) = \frac{2}{n}\left(\frac{1}{n - 1}\frac{1}{n - 1} + \frac{n - 2}{n - 1}\frac{2}{n - 1}\right)$. Plugging in, we ultimately obtain $\var(N) = \frac{2(n- 2)}{n - 1}$.
+
+ In fact, as $n\to \infty$, $N\sim P(2)$.
+\end{eg}
+
+We can use these to prove the inclusion-exclusion formula:
+\begin{thm}[Inclusion-exclusion formula]
+ \begin{align*}
+ \P\left(\bigcup_i^n A_i\right) &= \sum_1^n \P(A_i) - \sum_{i_1 < i_2} \P(A_{i_1}\cap A_{i_2}) + \sum_{i_1 < i_2 < i_3}\P(A_{i_1}\cap A_{i_2} \cap A_{i_3}) - \cdots\\
+ &+ (-1)^{n - 1} \P(A_1\cap \cdots \cap A_n).
+ \end{align*}
+\end{thm}
+
+\begin{proof}
+ Let $I_j$ be the indicator function for $A_j$. Write
+ \[
+ S_r = \sum_{i_1 < i_2 < \cdots < i_r}I_{i_1}I_{i_2}\cdots I_{i_r},
+ \]
+ and
+ \[
+ s_r = \E[S_r] = \sum_{i_1 < \cdots < i_r}\P(A_{i_1}\cap \cdots \cap A_{i_r}).
+ \]
+ Then
+ \[
+ 1 - \prod_{j = 1}^n(1 - I_j) = S_1 - S_2 + S_3 \cdots + (-1)^{n - 1}S_n.
+ \]
+ So
+ \[
+ \P\left(\bigcup_1^n A_j\right) = \E\left[1 - \prod_1^n(1 - I_j)\right] = s_1 - s_2 + s_3 - \cdots + (-1)^{n - 1}s_n.\qedhere
+ \]
+\end{proof}
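For a finite sample space with equally likely outcomes, the formula is just counting: dividing every term by $|\Omega|$ turns set sizes into probabilities. A small check of the counting version (names ours):

```python
from itertools import combinations

def union_size(sets):
    """Size of the union by inclusion-exclusion over all index subsets."""
    total = 0
    for r in range(1, len(sets) + 1):
        sign = (-1) ** (r - 1)
        for combo in combinations(sets, r):
            total += sign * len(set.intersection(*combo))
    return total

A = [{1, 2, 3, 4}, {3, 4, 5}, {1, 5, 6, 7}]
# 4 + 3 + 4 - 2 - 1 - 1 + 0 = 7, the size of the actual union
```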
+
+We can extend the idea of independence to random variables. Two random variables are independent if the value of the first does not affect the value of the second.
+
+\begin{defi}[Independent random variables]
+ Let $X_1, X_2, \cdots, X_n$ be discrete random variables. They are \emph{independent} iff for any $x_1, x_2, \cdots, x_n$,
+ \[
+ \P(X_1 = x_1, \cdots, X_n = x_n) = \P(X_1 = x_1)\cdots \P(X_n = x_n).
+ \]
+\end{defi}
+
+\begin{thm}
+ If $X_1, \cdots, X_n$ are independent random variables, and $f_1, \cdots, f_n$ are functions $\R \to \R$, then $f_1(X_1), \cdots, f_n(X_n)$ are independent random variables.
+\end{thm}
+
+\begin{proof}
+ Note that given a particular $y_i$, there can be many different $x_i$ for which $f_i(x_i) = y_i$. When finding $\P(f_i(X_i) = y_i)$, we need to sum over all $x_i$ such that $f_i(x_i) = y_i$. Then
+ \begin{align*}
+ \P(f_1(X_1) = y_1, \cdots, f_n(X_n) = y_n) &= \sum_{\substack{x_1: f_1(x_1) = y_1\\\vdots\\x_n: f_n(x_n) = y_n}} \P(X_1 = x_1, \cdots, X_n = x_n)\\
+ &= \sum_{\substack{x_1: f_1(x_1) = y_1\\\vdots\\x_n: f_n(x_n) = y_n}} \prod_{i = 1}^n \P(X_i = x_i)\\
+ &= \prod_{i = 1}^n \sum_{x_i:f_i(x_i) = y_i} \P(X_i = x_i)\\
+ &= \prod_{i = 1}^n \P(f_i(X_i) = y_i).
+ \end{align*}
+ Note that the switch from the second to third line is valid since they both expand to the same mess.
+\end{proof}
+
+\begin{thm}
+ If $X_1, \cdots, X_n$ are independent random variables and all the following expectations exist, then
+ \[
+ \E\left[\prod X_i\right] = \prod \E[X_i].
+ \]
+\end{thm}
+
+\begin{proof}
+ Write $R_i$ for the range of $X_i$. Then
+ \begin{align*}
+ \E\left[\prod_1^n X_i\right] &= \sum_{x_1\in R_1}\cdots \sum_{x_n\in R_n}x_1x_2\cdots x_n\times \P(X_1 = x_1, \cdots, X_n = x_n)\\
+ &= \prod_{i = 1}^n \sum_{x_i \in R_i}x_i\P(X_i = x_i)\\
+ &= \prod_{i = 1}^n \E[X_i].\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{cor}
+ Let $X_1,\cdots, X_n$ be independent random variables, and let $f_1, f_2, \cdots, f_n$ be functions $\R\to \R$. Then
+ \[
+ \E\left[\prod f_i(X_i)\right] = \prod \E[f_i(X_i)].
+ \]
+\end{cor}
+
+\begin{thm}
+ If $X_1, X_2, \cdots X_n$ are independent random variables, then
+ \[
+ \var\left(\sum X_i\right) = \sum \var (X_i).
+ \]
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ \var\left(\sum X_i\right) &= \E\left[\left(\sum X_i\right)^2\right] - \left(\E\left[\sum X_i\right]\right)^2\\
+ &= \E\left[\sum X_i^2 + \sum_{i \not =j} X_iX_j\right] - \left(\sum\E [X_i]\right)^2\\
+ &= \sum \E[X_i^2] + \sum_{i\not= j}\E[X_i]\E[X_j] - \sum(\E[X_i])^2 - \sum_{i\not= j}\E[X_i]\E[X_j]\\
+ &= \sum \left(\E[X_i^2] - (\E[X_i])^2\right) = \sum \var(X_i).\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{cor}
+ Let $X_1, X_2, \cdots X_n$ be independent identically distributed random variables (iid rvs). Then
+ \[
+ \var\left(\frac{1}{n}\sum X_i\right) = \frac{1}{n}\var (X_1).
+ \]
+\end{cor}
+
+\begin{proof}
+ \begin{align*}
+ \var\left(\frac{1}{n}\sum X_i\right) &= \frac{1}{n^2}\var\left(\sum X_i\right)\\
+ &= \frac{1}{n^2}\sum \var(X_i)\\
+ &= \frac{1}{n^2} n \var(X_1)\\
+ &= \frac{1}{n}\var (X_1).\qedhere
+ \end{align*}
+\end{proof}
+This result is important in statistics. This means that if we want to reduce the variance of our experimental results, then we can repeat the experiment many times (corresponding to a large $n$), and then the sample average will have a small variance.
+
+\begin{eg}
+ Let $X_i$ be iid $B(1, p)$, i.e.\ $\P(X_i = 1) = p$ and $\P(X_i = 0) = 1 - p$. Then $Y = X_1 + X_2 + \cdots + X_n \sim B(n, p)$.
+
+ Since $\var(X_i) = \E[X_i^2] - (\E[X_i])^2 = p - p^2 = p(1 - p)$, we have $\var (Y) = np(1 - p)$.
+\end{eg}
+
+\begin{eg}
+ Suppose we have two rods of unknown lengths $a, b$. We can measure their lengths, but the measurements are not accurate. Let $A$ and $B$ be the measured values. Suppose
+ \[
+ \E[A] = a,\quad \var(A) = \sigma^2
+ \]
+ \[
+ \E[B] = b, \quad \var(B) = \sigma^2.
+ \]
+ We can measure it more accurately by measuring $X = A + B$ and $Y = A - B$. Then we estimate $a$ and $b$ by
+ \[
+ \hat{a} = \frac{X + Y}{2},\; \hat{b} = \frac{X - Y}{2}.
+ \]
+ Then $\E[\hat{a}] = a$ and $\E[\hat{b}] = b$, i.e.\ they are unbiased. Also
+ \[
+ \var (\hat{a}) = \frac{1}{4}\var(X + Y) = \frac{1}{4}2\sigma^2 = \frac{1}{2}\sigma^2,
+ \]
+ and similarly for $b$. So we can measure it more accurately by measuring the sticks together instead of separately.
+\end{eg}
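A simulation makes the variance halving visible. We assume Gaussian measurement errors purely for illustration (the argument above needs only the stated means and variances); all names below are ours.

```python
import random

def measurement_variances(a=10.0, b=7.0, sigma=1.0, trials=50000, seed=1):
    """Compare var of a direct measurement of rod 1 with var of a-hat."""
    rng = random.Random(seed)
    direct, combined = [], []
    for _ in range(trials):
        A = a + rng.gauss(0, sigma)        # measuring rod 1 alone
        X = (a + b) + rng.gauss(0, sigma)  # measuring the rods together
        Y = (a - b) + rng.gauss(0, sigma)  # measuring the difference
        direct.append(A)
        combined.append((X + Y) / 2)       # the estimator a-hat
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return var(direct), var(combined)

v_direct, v_combined = measurement_variances()
# v_direct is close to sigma^2 = 1, and v_combined close to sigma^2 / 2
```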
+
+\subsection{Inequalities}
+Here we prove a lot of different inequalities which may be useful for certain calculations. In particular, Chebyshev's inequality will allow us to prove the weak law of large numbers.
+
+\begin{defi}[Convex function]
+ A function $f: (a, b) \to \R$ is \emph{convex} if for all $x_1, x_2\in (a, b)$ and $\lambda_1, \lambda_2 \geq 0$ such that $\lambda_1 + \lambda_2 = 1$,\[
+ \lambda_1f(x_1) + \lambda_2 f(x_2) \geq f(\lambda_1x_1 + \lambda_2 x_2).
+ \]
+ It is \emph{strictly convex} if the inequality above is strict (except when $x_1 = x_2$, or when $\lambda_1$ or $\lambda_2$ is zero).
+ \begin{center}
+ \begin{tikzpicture}
+ \draw(-2, 4) -- (-2, 0) -- (2, 0) -- (2, 4);
+ \draw (-1.3, 0.1) -- (-1.3, -0.1) node [below] {$x_1$};
+ \draw (1.3, 0.1) -- (1.3, -0.1) node [below] {$x_2$};
+ \draw (-1.7, 2) parabola bend (-.2, 1) (1.7, 3.3);
+ \draw [dashed] (-1.3, 0) -- (-1.3, 1.53) node [circ] {};
+ \draw [dashed] (1.3, 0) -- (1.3, 2.42) node [circ] {};
+ \draw (-1.3, 1.53) -- (1.3, 2.42);
+ \draw [dashed] (0, 0) node [below] {\tiny $\lambda_1x_1 + \lambda_2x_2$} -- (0, 1.975) node [above] {\tiny$\lambda_1 f(x_1) + \lambda_2 f(x_2)\quad\quad\quad\quad\quad\quad$} node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ A function is \emph{concave} if $-f$ is convex.
+\end{defi}
+
+A useful criterion for convexity is
+\begin{prop}
+ If $f$ is differentiable and $f''(x) \geq 0$ for all $x\in (a, b)$, then it is convex. It is strictly convex if $f''(x) > 0$.
+\end{prop}
+
+\begin{thm}[Jensen's inequality]
+ If $f:(a, b) \to \R$ is convex, then
+ \[
+ \sum_{i = 1}^n p_i f(x_i) \geq f\left(\sum_{i = 1}^n p_ix_i\right)
+ \]
+ for all $p_1, p_2, \cdots, p_n$ such that $p_i \geq 0$ and $\sum p_i = 1$, and $x_i \in (a, b)$.
+
+ This says that $\E[f(X)] \geq f(\E[X])$ (where $\P(X=x_i) = p_i$).
+
+ If $f$ is strictly convex, then equalities hold only if all $x_i$ are equal, i.e.\ $X$ takes only one possible value.
+\end{thm}
+
+\begin{proof}
+ Induct on $n$. It is true for $n = 2$ by the definition of convexity. Then
+ \begin{align*}
+ f(p_1x_1 + \cdots + p_nx_n) &= f\left(p_1x_1 + (p_2 + \cdots + p_n)\frac{p_2x_2 + \cdots + p_nx_n}{p_2 + \cdots + p_n}\right)\\
+ &\leq p_1f(x_1) + (p_2 + \cdots + p_n)f\left(\frac{p_2x_2 + \cdots + p_n x_n}{p_2 + \cdots + p_n }\right)\\
+ &\leq p_1f(x_1) + (p_2 + \cdots + p_n)\left[\frac{p_2}{(\;)}f(x_2) + \cdots + \frac{p_n}{(\;)}f(x_n)\right]\\
+ &= p_1f(x_1) + \cdots + p_nf(x_n),
+ \end{align*}
+ where the $(\;)$ is $p_2 + \cdots + p_n$.
+
+ The strictly convex case is proved similarly, with $\leq$ replaced by $<$, using the definition of strict convexity.
+\end{proof}
+
+\begin{cor}[AM-GM inequality]
+ Given $x_1, \cdots, x_n$ positive reals, then
+ \[
+ \left(\prod x_i\right)^{1/n} \leq \frac{1}{n}\sum x_i.
+ \]
+\end{cor}
+
+\begin{proof}
+ Take $f(x) = -\log x$. This is convex since its second derivative is $x^{-2} > 0$.
+
+ Take $\P(X = x_i) = 1/n$. Then
+ \[
+ \E[f(X)] = \frac{1}{n}\sum -\log x_i = -\log \text{GM},
+ \]
+ and
+ \[
+ f(\E[X]) = -\log \frac{1}{n}\sum x_i = -\log\text{AM}.
+ \]
+ Since $f(\E[X]) \leq \E[f(X)]$, AM $\geq$ GM. Since $-\log x$ is strictly convex, AM $=$ GM only if all $x_i$ are equal.
+\end{proof}
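A direct numeric check of the corollary (the helper names are ours):

```python
from math import prod

def am(xs): return sum(xs) / len(xs)            # arithmetic mean
def gm(xs): return prod(xs) ** (1 / len(xs))    # geometric mean

for xs in [[1, 2, 3, 4], [0.1, 10, 3], [7, 7, 7]]:
    assert gm(xs) <= am(xs) + 1e-12             # AM-GM inequality
# equality exactly when all x_i are equal:
assert abs(gm([7, 7, 7]) - am([7, 7, 7])) < 1e-9
```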
+
+\begin{thm}[Cauchy-Schwarz inequality]
+ For any two random variables $X, Y$,
+ \[
+ (\E[XY])^2 \leq \E[X^2]\E[Y^2].
+ \]
+\end{thm}
+
+\begin{proof}
+ If $\E[Y^2] = 0$, then $\P(Y = 0) = 1$ and both sides are $0$. Otherwise, $\E[Y^2] > 0$. Let
+ \[
+ w = X - Y\cdot \frac{\E[XY]}{\E[Y^2]}.
+ \]
+ Then
+ \begin{align*}
+ \E[w^2] &= \E\left[X^2 - 2XY\frac{\E[XY]}{\E[Y^2]} + Y^2\frac{(\E[XY])^2}{(\E[Y^2])^2}\right]\\
+ &= \E[X^2] - 2\frac{(\E[XY])^2}{\E[Y^2]} + \frac{(\E[XY])^2}{\E[Y^2]}\\
+ &= \E[X^2] - \frac{(\E[XY])^2}{\E[Y^2]}
+ \end{align*}
+ Since $\E[w^2] \geq 0$, the Cauchy-Schwarz inequality follows.
+\end{proof}
+
+\begin{thm}[Markov inequality]
+ If $X$ is a random variable with $\E|X| < \infty$ and $\varepsilon > 0$, then
+ \[
+ \P(|X| \geq \varepsilon) \leq \frac{\E|X|}{\varepsilon}.
+ \]
+\end{thm}
+
+\begin{proof}
+ We make use of the indicator function. We have
+ \[
+ I[|X|\geq \varepsilon] \leq \frac{|X|}{\varepsilon}.
+ \]
+ This is proved by exhaustion: if $|X| \geq \varepsilon$, then the LHS is $1$ and the RHS is at least $1$; if $|X| < \varepsilon$, then the LHS is $0$ and the RHS is non-negative.
+
+ Take the expected value to obtain
+ \[
+ \P(|X| \geq \varepsilon) \leq \frac{\E |X|}{\varepsilon}.\qedhere
+ \]
+\end{proof}
+
+Similarly, we have
+\begin{thm}[Chebyshev inequality]
+ If $X$ is a random variable with $\E[X^2] < \infty$ and $\varepsilon > 0$, then
+ \[
+ \P(|X| \geq \varepsilon) \leq \frac{\E[X^2]}{\varepsilon^2}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Again, we have
+ \[
+ I[|X|\geq \varepsilon] \leq \frac{X^2}{\varepsilon^2}.
+ \]
+ Then take the expected value and the result follows.
+\end{proof}
+Note that these are really powerful results, since they do not make \emph{any} assumptions about the distribution of $X$. On the other hand, if we know something about the distribution, we can often get a larger bound.
+
+An important corollary is that if $\mu = \E[X]$, then
+\[
+ \P(|X - \mu| \geq \varepsilon) \leq \frac{\E[(X - \mu)^2]}{\varepsilon^2} = \frac{\var X}{\varepsilon^2}.
+\]
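These bounds can be checked exactly against a concrete distribution. Below we tabulate a $B(20, 0.3)$ pmf (a non-negative variable, so $|X| = X$) and confirm both inequalities for several $\varepsilon$; the variable names are ours.

```python
from math import comb

n, p = 20, 0.3
pmf = {r: comb(n, r) * p**r * (1 - p)**(n - r) for r in range(n + 1)}
EX = sum(r * q for r, q in pmf.items())        # E|X| = E[X] = np = 6
EX2 = sum(r * r * q for r, q in pmf.items())   # E[X^2]

for eps in (1, 2, 5, 8):
    tail = sum(q for r, q in pmf.items() if r >= eps)  # P(|X| >= eps)
    assert tail <= EX / eps + 1e-12        # Markov
    assert tail <= EX2 / eps**2 + 1e-12    # Chebyshev
```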
+
+\subsection{Weak law of large numbers}
+\begin{thm}[Weak law of large numbers]
+ Let $X_1, X_2, \cdots$ be iid random variables, with mean $\mu$ and variance $\sigma^2$.
+
+ Let $S_n = \sum_{i = 1}^n X_i$.
+
+ Then for all $\varepsilon > 0$,
+ \[
+ \P\left(\left|\frac{S_n}{n} - \mu\right| \geq \varepsilon\right) \to 0
+ \]
+ as $n\to \infty$.
+
+ We say that $\frac{S_n}{n}$ tends to $\mu$ \emph{in probability}, or
+ \[
+ \frac{S_n}{n}\to_p \mu.
+ \]
+\end{thm}
+
+\begin{proof}
+ By Chebyshev,
+ \begin{align*}
+ \P\left(\left|\frac{S_n}{n} - \mu\right| \geq \varepsilon\right)&\leq \frac{\E\left[\left(\frac{S_n}{n} - \mu\right)^2\right]}{\varepsilon^2}\\
+ &= \frac{1}{n^2}\frac{\E\left[(S_n -n\mu)^2\right]}{\varepsilon^2}\\
+ &= \frac{1}{n^2\varepsilon^2}\var (S_n)\\
+ &= \frac{n}{n^2\varepsilon^2}\var (X_1)\\
+ &= \frac{\sigma^2}{n\varepsilon^2} \to 0\qedhere
+ \end{align*}
+\end{proof}
+
+Note that we cannot relax the ``independent'' condition. For example, suppose $X_1 = X_2 = X_3 = \cdots$, all equal to $1$ or all equal to $0$, each with probability $1/2$. Then $S_n/n$ is either $1$ or $0$, so $S_n/n \not\to_p 1/2$.
+
+\begin{eg}
+ Suppose we toss a coin with probability $p$ of heads. Then
+ \[
+ \frac{S_n}{n} = \frac{\text{number of heads}}{\text{number of tosses}}.
+ \]
+ Since $\E[X_i] = p$, then the weak law of large number tells us that
+ \[
+ \frac{S_n}{n} \to_p p.
+ \]
+ This means that as we toss more and more coins, the proportion of heads will tend towards $p$.
+\end{eg}
+
+Since we called the above the \emph{weak} law, we also have the \emph{strong} law, which is a stronger statement.
+\begin{thm}[Strong law of large numbers]
+ \[
+ \P\left(\frac{S_n}{n}\to \mu\text{ as }n\to \infty\right) = 1.
+ \]
+ We say
+ \[
+ \frac{S_n}{n}\to_{\mathrm{as}} \mu,
+ \]
+ where ``as'' means ``almost surely''.
+\end{thm}
+It can be shown that the weak law follows from the strong law, but not the other way round. The proof is left for Part II because it is too hard.
+
+
+\subsection{Multiple random variables}
+If we have two random variables, we can study the relationship between them.
+
+\begin{defi}[Covariance]
+ Given two random variables $X, Y$, the \emph{covariance} is
+ \[
+ \cov(X, Y) = \E[(X - \E[X])(Y - \E[Y])].
+ \]
+\end{defi}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\cov(X, c) = 0$ for constant $c$.
+ \item $\cov(X + c, Y) = \cov (X, Y)$.
+ \item $\cov (X, Y) = \cov (Y, X)$.
+ \item $\cov (X, Y) = \E[XY] - \E[X]\E[Y]$.
+ \item $\cov (X, X) = \var (X)$.
+ \item $\var (X + Y) = \var (X) + \var (Y) + 2\cov (X, Y)$.
+ \item If $X$, $Y$ are independent, $\cov(X, Y) = 0$.
+ \end{enumerate}
+\end{prop}
+These are all trivial to prove, and the proofs are omitted.
+
+It is important to note that $\cov(X, Y) = 0$ does not imply $X$ and $Y$ are independent.
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Let $(X, Y) = (2, 0), (-1, -1)$ or $(-1, 1)$ with equal probabilities of $1/3$. These are not independent since $Y = 0\Rightarrow X = 2$.
+
+ However, $\cov (X, Y) = \E[XY] - \E[X]\E[Y] = 0 - 0\cdot 0 = 0$.
+
+ \item If we randomly pick a point on the unit circle, and let the coordinates be $(X, Y)$, then $\E[X] = \E[Y] = \E[XY] = 0$ by symmetry. So $\cov(X, Y) = 0$ but $X$ and $Y$ are clearly not independent (they have to satisfy $x^2 + y^2 = 1$).
+ \end{itemize}
+\end{eg}
+
+The covariance is not that useful in measuring how well two variables correlate. For one, the covariance can (potentially) have dimensions, which means that the numerical value of the covariance can depend on what units we are using. Also, the magnitude of the covariance depends largely on the variance of $X$ and $Y$ themselves. To solve these problems, we define
+
+\begin{defi}[Correlation coefficient]
+ The \emph{correlation coefficient} of $X$ and $Y$ is
+ \[
+ \corr(X, Y) = \frac{\cov (X, Y)}{\sqrt{\var (X)\var (Y)}}.
+ \]
+\end{defi}
+
+\begin{prop}
+ $|\corr(X, Y)| \leq 1$.
+\end{prop}
+
+\begin{proof}
+ Apply Cauchy-Schwarz to $X - \E[X]$ and $Y - \E[Y]$.
+\end{proof}
+
+Again, zero correlation does not necessarily imply independence.
+
+Alternatively, apart from finding a fixed covariance or correlation number, we can see how the distribution of $X$ depends on $Y$. Given two random variables $X, Y$, $\P(X = x, Y = y)$ is known as the \emph{joint distribution}. From this joint distribution, we can retrieve the probabilities $\P(X = x)$ and $\P(Y = y)$. We can also consider different conditional expectations.
+
+\begin{defi}[Conditional distribution]
+ Let $X$ and $Y$ be random variables (in general not independent) with joint distribution $\P(X = x, Y = y)$. Then the \emph{marginal distribution} (or simply \emph{distribution}) of $X$ is
+ \[
+ \P(X = x) = \sum_{y\in \Omega_Y}\P(X = x, Y = y).
+ \]
+ The \emph{conditional distribution} of $X$ given $Y$ is
+ \[
+ \P(X = x\mid Y = y) = \frac{\P(X = x, Y = y)}{\P(Y = y)}.
+ \]
+ The \emph{conditional expectation} of $X$ given $Y$ is
+ \[
+ \E[X\mid Y = y] = \sum_{x\in \Omega_X}x\P(X = x\mid Y = y).
+ \]
+ We can view $\E[X\mid Y]$ as a random variable in $Y$: given a value of $Y$, we return the expectation of $X$.
+\end{defi}
+
+\begin{eg}
+ Consider a dice roll. Let $Y = 1$ denote an even roll and $Y = 0$ denote an odd roll. Let $X$ be the value of the roll. Then $\E[X\mid Y] = 3 + Y$, i.e.\ $4$ if even and $3$ if odd.
+\end{eg}
+
+\begin{eg}
+ Let $X_1, \cdots, X_n$ be iid $B(1, p)$. Let $Y = X_1 + \cdots + X_n$. Then
+ \begin{align*}
+ \P(X_1 = 1\mid Y = r) &= \frac{\P(X_1 = 1, \sum_{2}^n X_i = r - 1)}{\P(Y = r)}\\
+ &= \frac{p\binom{n - 1}{r - 1}p^{r - 1}(1 - p)^{(n - 1) - (r - 1)}}{\binom{n}{r} p^r (1 - p)^{n - r}} = \frac{r}{n}.
+ \end{align*}
+ So
+ \[
+ \E[X_1\mid Y = r] = 1 \cdot \frac{r}{n} + 0\left(1 - \frac{r}{n}\right) = \frac{r}{n},\quad\text{i.e.}\quad \E[X_1\mid Y] = \frac{Y}{n}.
+ \]
+ Note that this is a random variable!
+\end{eg}
+
+\begin{thm}
+ If $X$ and $Y$ are independent, then
+ \[
+ \E[X\mid Y] = \E[X]
+ \]
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ \E[X\mid Y = y] &= \sum_{x}x\P(X = x\mid Y = y)\\
+ &= \sum_x x\P(X = x)\\
+ &= \E[X]\qedhere
+ \end{align*}
+\end{proof}
+
+We know that the expected value of a dice roll given it is even is 4, and the expected value given it is odd is 3. Since it is equally likely to be even or odd, the expected value of the dice roll is 3.5. This is formally captured by
+\begin{thm}[Tower property of conditional expectation]
+ \[
+ \E_Y[\E_X[X\mid Y]] = \E_X[X],
+ \]
+ where the subscripts indicate what variable the expectation is taken over.
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ \E_Y[\E_X[X\mid Y]] &= \sum_y \P(Y = y)\E[X\mid Y = y]\\
+ &= \sum _y \P(Y = y) \sum_{x}x\P(X = x\mid Y = y)\\
+ &= \sum_x\sum_y x\P(X = x, Y = y)\\
+ &= \sum _x x\sum_y \P(X = x, Y = y)\\
+ &= \sum_x x\P(X = x)\\
+ &= \E[X].\qedhere
+ \end{align*}
+\end{proof}
+This is also called the law of total expectation. We can also state it as: suppose $A_1, A_2, \cdots, A_n$ is a partition of $\Omega$. Then
+\[
+ \E[X] = \sum_{i: \P(A_i) > 0}\E[X\mid A_i]\P(A_i).
+\]
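The dice example above can be worked through exactly in this partition form (helper names ours): condition on parity, then average the conditional expectations.

```python
from fractions import Fraction

rolls = range(1, 7)

def E_X_given(parity):
    """E[X | Y], where Y is the parity of the roll (0 = even, 1 = odd)."""
    vals = [x for x in rolls if x % 2 == parity]
    return Fraction(sum(vals), len(vals))

# law of total expectation over the partition {even, odd}:
E_X = Fraction(1, 2) * E_X_given(0) + Fraction(1, 2) * E_X_given(1)
# E[X | even] = 4, E[X | odd] = 3, so E[X] = 7/2
```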
+
+\subsection{Probability generating functions}
+Consider a random variable $X$, taking values $0, 1, 2, \cdots$. Let $p_r = \P(X = r)$.
+\begin{defi}[Probability generating function (pgf)]
+ The \emph{probability generating function (pgf)} of $X$ is
+ \[
+ p(z) = \E[z^X] = \sum_{r = 0}^\infty \P(X = r)z^r = p_0 + p_1z + p_2z^2 + \cdots = \sum_0^\infty p_rz^r.
+ \]
+ This is a power series (or polynomial), and converges if $|z| \leq 1$, since
+ \[
+ |p(z)| \leq \sum_r p_r |z^r| \leq \sum_r p_r = 1.
+ \]
+ We sometimes write it as $p_X(z)$ to indicate which random variable it belongs to.
+\end{defi}
+This definition might seem a bit out of the blue. However, it turns out to be a rather useful algebraic tool that can concisely summarize information about the probability distribution.
+
+\begin{eg}
+ Consider a fair die. Then $p_r = 1/6$ for $r = 1, \cdots, 6$. So
+ \[
+ p(z) = \E[z^X] = \frac{1}{6}(z + z^2 + \cdots + z^6) = \frac{1}{6}z\left(\frac{1 - z^6}{1 - z}\right).
+ \]
+\end{eg}
+
+\begin{thm}
+ The distribution of $X$ is uniquely determined by its probability generating function.
+\end{thm}
+
+\begin{proof}
+ By definition, $p_0 = p(0)$, $p_1 = p'(0)$ etc. (where $p'$ is the derivative of $p$). In general,
+ \[
+ \left. \frac{\d ^i}{\d z^i}p(z) \right|_{z = 0} = i! p_i.
+ \]
+ So we can recover $(p_0, p_1, \cdots)$ from $p(z)$.
+\end{proof}
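+The proof above is effectively an algorithm: differentiate repeatedly at $0$. Here is a sketch using the \texttt{sympy} library (assumed available), recovering the distribution of the fair die from its pgf.
+
+```python
+import sympy as sp
+
+z = sp.symbols('z')
+# pgf of a fair die: (z + z^2 + ... + z^6)/6
+p = sum(z**r for r in range(1, 7)) / 6
+
+# p_i = (d^i/dz^i p)(0) / i!
+probs = [sp.diff(p, z, i).subs(z, 0) / sp.factorial(i) for i in range(7)]
+# probs == [0, 1/6, 1/6, 1/6, 1/6, 1/6, 1/6]
+```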
+
+\begin{thm}[Abel's lemma]
+ \[
+ \E[X] = \lim_{z\to 1}p'(z).
+ \]
+ If $p'$ is continuous at $z = 1$, then simply $\E[X] = p'(1)$.
+\end{thm}
+Note that this theorem is trivial if $p'(1)$ exists, as long as we know that we can differentiate power series term by term. What is important here is that even if $p'(1)$ doesn't exist, we can still take the limit and obtain the expected value, e.g.\ when $\E[X] = \infty$.
+
+\begin{proof}
+ For $0 \leq z < 1$, we have
+ \[
+ p'(z) = \sum_1^\infty rp_r z^{r - 1} \leq \sum_1^\infty rp_r = \E[X].
+ \]
+ So we must have
+ \[
+ \lim_{z\to 1}p'(z) \leq \E[X].
+ \]
+ On the other hand, for any $\varepsilon > 0$, if we pick $N$ sufficiently large, then
+ \[
+ \sum_{1}^N rp_r \geq \E[X] - \varepsilon.
+ \]
+ So
+ \[
+ \E[X] - \varepsilon \leq \sum_1^N rp_r = \lim_{z\to 1}\sum _1^N rp_r z^{r - 1}\leq \lim_{z \to 1} \sum_1^\infty rp_r z^{r - 1} = \lim_{z\to 1} p' (z).
+ \]
+ So $\E[X] \leq \lim\limits_{z \to 1}p'(z)$. So the result follows.
+\end{proof}
+
+\begin{thm}
+ \[
+ \E[X(X - 1)] = \lim_{z \to 1}p''(z).
+ \]
+\end{thm}
+\begin{proof}
+ Same as above.
+\end{proof}
+
+\begin{eg}
+ Consider the Poisson distribution. Then
+ \[
+ p_r = \P(X = r) = \frac{1}{r!}\lambda^r e^{-\lambda}.
+ \]
+ Then
+ \[
+ p(z) = \E[z^X] = \sum_0^\infty z^r \frac{1}{r!}\lambda^r e^{-\lambda} = e^{\lambda z}e^{-\lambda} = e^{\lambda(z - 1)}.
+ \]
+ As a sanity check, $p(1) = 1$, which makes sense, since $p(1)$ is the sum of all the probabilities.
+
+ We have
+ \[
+ \E[X]= \left.\frac{\d }{\d z}e^{\lambda(z - 1)}\right|_{z = 1} = \lambda,
+ \]
+ and
+ \[
+ \E[X(X - 1)] = \left.\frac{\d ^2}{\d z^2}e^{\lambda(z - 1)}\right|_{z = 1} = \lambda^2.
+ \]
+ So
+ \[
+ \var(X) = \E[X^2] - \E[X]^2 = \lambda^2 + \lambda - \lambda^2 = \lambda.
+ \]
+\end{eg}
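+The Poisson computation above can be reproduced symbolically; a sketch with \texttt{sympy} (assumed available), using $\var(X) = \E[X(X - 1)] + \E[X] - \E[X]^2$.
+
+```python
+import sympy as sp
+
+z, lam = sp.symbols('z lambda', positive=True)
+p = sp.exp(lam * (z - 1))              # pgf of the Poisson distribution
+
+EX = sp.diff(p, z).subs(z, 1)          # E[X] = p'(1) = lambda
+EXX1 = sp.diff(p, z, 2).subs(z, 1)     # E[X(X - 1)] = p''(1) = lambda^2
+var = sp.simplify(EXX1 + EX - EX**2)   # Var(X) = lambda
+```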
+
+\begin{thm}
+ Suppose $X_1, X_2, \cdots, X_n$ are independent random variables with pgfs $p_1, p_2, \cdots, p_n$. Then the pgf of $X_1 + X_2 + \cdots + X_n$ is $p_1(z)p_2(z)\cdots p_n(z)$.
+\end{thm}
+
+\begin{proof}
+ \[
+ \E[z^{X_1 + \cdots + X_n}] = \E[z^{X_1}\cdots z^{X_n}] = \E[z^{X_1}]\cdots\E[z^{X_n}] = p_1(z)\cdots p_n(z).\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Let $X\sim B(n, p)$. Then
+ \[
+ p(z) = \sum_{r = 0}^{n} \P(X = r)z^r = \sum \binom{n}{r} p^r(1 - p)^{n - r}z^r = (pz + (1 - p))^n = (pz + q)^n.
+ \]
+ So $p(z)$ is the product of $n$ copies of $pz + q$. But $pz + q$ is the pgf of $Y\sim B(1, p)$.
+
+ This shows that $X = Y_1 + Y_2 + \cdots + Y_n$ (which we already knew), i.e.\ a binomial distribution is the sum of Bernoulli trials.
+\end{eg}
+
+\begin{eg}
+ If $X$ and $Y$ are independent Poisson random variables with parameters $\lambda, \mu$ respectively, then
+ \[
+ \E[t^{X + Y}] = \E[t^X]\E[t^Y] = e^{\lambda(t - 1)}e^{\mu(t - 1)} = e^{(\lambda + \mu)(t - 1)}
+ \]
+ So $X + Y$ is Poisson with parameter $\lambda + \mu$.
+
+ We can also do it directly:
+ \[
+ \P(X + Y = r) = \sum_{i = 0}^r \P(X = i, Y = r - i) = \sum_{i = 0}^r \P(X = i)\P(Y = r - i),
+ \]
+ but this is much more complicated.
+\end{eg}
+
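+We can confirm numerically that the convolution really does give the Poisson pmf; a small sketch (the parameters $1.5$ and $2.0$ are arbitrary illustrative choices).
+
+```python
+import math
+
+def poisson_pmf(lam, r):
+    return math.exp(-lam) * lam**r / math.factorial(r)
+
+lam, mu = 1.5, 2.0
+
+def conv(r):
+    # Direct convolution: P(X + Y = r) = sum_i P(X = i) P(Y = r - i)
+    return sum(poisson_pmf(lam, i) * poisson_pmf(mu, r - i) for i in range(r + 1))
+
+# Compare with the Poisson(lam + mu) pmf
+checks = [abs(conv(r) - poisson_pmf(lam + mu, r)) for r in range(10)]
+```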
+We can use pgf-like functions to obtain some combinatorial results.
+\begin{eg}
+ Suppose we want to tile a $2\times n$ bathroom by $2\times 1$ tiles. One way to do it is
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (6, 2);
+ \draw (1, 0) rectangle (3, 1);
+ \draw (1, 1) rectangle (3, 2);
+ \draw (4, 0) rectangle (6, 1);
+ \draw (4, 1) rectangle (6, 2);
+ \end{tikzpicture}
+ \end{center}
+ We can do it recursively: suppose there are $f_n$ ways to tile a $2\times n$ grid. The first column is either covered by a single vertical tile, leaving a $2\times (n - 1)$ grid with $f_{n - 1}$ tilings, or by two horizontal tiles, leaving a $2\times (n - 2)$ grid with $f_{n - 2}$ tilings. So
+ \[
+ f_n = f_{n - 1} + f_{n - 2},
+ \]
+ which is simply the Fibonacci sequence, with $f_0 = f_1 = 1$.
+
+ Let
+ \[
+ F(z) = \sum_{n = 0}^\infty f_nz^n.
+ \]
+ Then from our recurrence relation, we obtain
+ \[
+ f_nz^n = f_{n - 1}z^n + f_{n - 2}z^n.
+ \]
+ So
+ \[
+ \sum_{n = 2}^\infty f_n z^n = \sum_{n = 2}^{\infty} f_{n - 1}z^n + \sum_{n = 2}^\infty f_{n - 2}z^n.
+ \]
+ Since $f_0 = f_1 = 1$, we have
+ \[
+ F(z) - f_0 - zf_1 = z(F(z) - f_0) + z^2F(z).
+ \]
+ Thus $F(z) = (1 - z - z^2)^{-1}$. If we write
+ \[
+ \alpha_1 = \frac{1}{2}(1 + \sqrt{5}),\quad \alpha_2 = \frac{1}{2}(1 - \sqrt{5}).
+ \]
+ then we have
+ \begin{align*}
+ F(z) &= (1 - z - z^2)^{-1}\\
+ &= \frac{1}{(1 - \alpha_1 z)(1 - \alpha_2 z)}\\
+ &= \frac{1}{\alpha_1 - \alpha_2}\left(\frac{\alpha_1}{1 - \alpha_1 z} - \frac{\alpha_2}{1 - \alpha_2 z}\right)\\
+ &= \frac{1}{\alpha_1 - \alpha_2}\left(\alpha_1 \sum_{n = 0}^\infty \alpha_1^nz^n - \alpha_2\sum_{n = 0}^\infty \alpha_2^n z^n\right).
+ \end{align*}
+ So
+ \[
+ f_n = \frac{\alpha_1^{n + 1} - \alpha_2^{n + 1}}{\alpha_1 - \alpha_2}.
+ \]
+\end{eg}
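+A quick sanity check of the closed form against the recurrence, in Python (floating-point Binet-style formula, rounded to the nearest integer):
+
+```python
+# f_n = f_{n-1} + f_{n-2} with f_0 = f_1 = 1
+f = [1, 1]
+for n in range(2, 30):
+    f.append(f[-1] + f[-2])
+
+# Closed form f_n = (a1^{n+1} - a2^{n+1}) / (a1 - a2)
+sqrt5 = 5 ** 0.5
+a1, a2 = (1 + sqrt5) / 2, (1 - sqrt5) / 2
+closed = [round((a1 ** (n + 1) - a2 ** (n + 1)) / (a1 - a2)) for n in range(30)]
+```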
+
+\begin{eg}
+ A \emph{Dyck word} is a string of brackets that match, such as (), or ((())()).
+
+ There is only one Dyck word of length 2, namely (). There are 2 of length 4, namely (()) and ()(). Similarly, there are 5 Dyck words of length 6.
+
+ Let $C_n$ be the number of Dyck words of length $2n$. We can split each non-empty Dyck word uniquely into $(w_1)w_2$, where $w_1$ and $w_2$ are Dyck words. Since for a word of length $2(n + 1)$ the lengths of $w_1$ and $w_2$ must sum up to $2n$,
+ \[
+ C_{n + 1} = \sum_{i = 0}^n C_iC_{n - i}.\tag{$*$}
+ \]
+ We again use pgf-like functions: let
+ \[
+ c(x) = \sum_{n = 0}^\infty C_n x^n.
+ \]
+ From $(*)$, we can show that
+ \[
+ c(x) = 1 + xc(x)^2.
+ \]
+ We can solve to show that
+ \[
+ c(x) = \frac{1 - \sqrt{1 - 4x}}{2x} = \sum_0^\infty \binom{2n}{n}\frac{x^n}{n + 1},
+ \]
+ noting that $C_0 = 1$. Then
+ \[
+ C_n = \frac{1}{n + 1}\binom{2n}{n}.
+ \]
+\end{eg}
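+Checking the recurrence against the closed form for the Catalan numbers:
+
+```python
+from math import comb
+
+# C_{n+1} = sum_{i=0}^n C_i C_{n-i}, with C_0 = 1
+C = [1]
+for n in range(15):
+    C.append(sum(C[i] * C[n - i] for i in range(n + 1)))
+
+# Closed form C_n = binom(2n, n)/(n + 1); the division is always exact
+formula = [comb(2 * n, n) // (n + 1) for n in range(16)]
+```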
+
+\subsubsection*{Sums with a random number of terms}
+A useful application of generating functions is the sum with a random number of random terms. For example, an insurance company may receive a random number of claims, each demanding a random amount of money. Then we have a sum of a random number of terms. This can be answered using probability generating functions.
+
+\begin{eg}
+Let $X_1, X_2, \cdots$ be iid with pgf $p(z) = \E[z^{X_1}]$. Let $N$ be a random variable independent of the $X_i$ with pgf $h(z)$. What is the pgf of $S = X_1 + \cdots + X_N$?
+\begin{align*}
+ \E[z^{S}] &= \E[z^{X_1 + \cdots + X_N}]\\
+ &= \E_N[\underbrace{\E_{X_i}[z^{X_1 + \ldots + X_N}\mid N]}_{\text{assuming fixed }N}]\\
+ &= \sum_{n = 0}^\infty \P(N = n) \E[z^{X_1 + X_2 + \cdots + X_n}]\\
+ &= \sum_{n = 0}^\infty \P(N = n) \E[z^{X_1}] \E[z^{X_2}]\cdots\E[z^{X_n}]\\
+ &= \sum_{n = 0}^\infty \P(N = n) (\E[z^{X_1}])^n\\
+ &= \sum_{n = 0}^\infty \P(N = n) p(z)^n \\
+ &= h(p(z))
+\end{align*}
+since $h(x) = \sum_{n = 0}^\infty \P(N = n) x^n$.
+
+So
+\begin{align*}
+ \E[S] &= \left.\frac{\d }{\d z}h(p(z))\right|_{z = 1}\\
+ &= h'(p(1))p'(1)\\
+ &= \E[N]\E[X_1]
+\end{align*}
+To calculate the variance, use the fact that
+\[
+ \E[S(S - 1)] = \left.\frac{\d ^2}{\d z^2}h(p(z))\right|_{z = 1}.
+\]
+Then we can find that
+\[
+ \var (S) = \E[N]\var(X_1) + \E[X_1]^2\var(N).
+\]
+\end{eg}
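+Both identities can be verified exactly on a small example by enumerating the distribution of $S$. In the sketch below, the distributions of $N$ and of the $X_i$ are made up purely for illustration.
+
+```python
+from fractions import Fraction
+from itertools import product
+
+# Illustrative (made-up) distributions:
+# N takes 0, 1, 2 with probs 1/4, 1/2, 1/4;  X takes 1, 3 with probs 2/3, 1/3
+pN = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
+pX = {1: Fraction(2, 3), 3: Fraction(1, 3)}
+
+# Exact distribution of S = X_1 + ... + X_N by enumeration
+pS = {}
+for n, pn in pN.items():
+    for xs in product(pX, repeat=n):
+        prob = pn
+        for x in xs:
+            prob *= pX[x]
+        s = sum(xs)
+        pS[s] = pS.get(s, 0) + prob
+
+def mean(p):
+    return sum(k * v for k, v in p.items())
+
+def var(p):
+    m = mean(p)
+    return sum((k - m) ** 2 * v for k, v in p.items())
+
+EN, EX = mean(pN), mean(pX)
+lhs_mean, lhs_var = mean(pS), var(pS)
+# lhs_mean == EN * EX  and  lhs_var == EN * var(pX) + EX**2 * var(pN)
+```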
+
+
+\section{Interesting problems}
+Here we are going to study two rather important and interesting probabilistic processes --- branching processes and random walks. Solutions to these will typically involve the use of probability generating functions.
+
+\subsection{Branching processes}
+Branching processes are used to model population growth by reproduction. At the beginning, there is only one individual. At each iteration, each individual produces a random number of offspring. In the next iteration, each offspring independently reproduces according to the same distribution. We will ask questions such as the expected number of individuals in a particular generation and the probability of going extinct.
+
+Consider $X_0, X_1, \cdots$, where $X_n$ is the number of individuals in the $n$th generation. We assume the following:
+\begin{enumerate}
+ \item $X_0 = 1$
+ \item Each individual lives for unit time and produces $k$ offspring with probability $p_k$.
+ \item Suppose all offspring behave independently. Then
+ \[
+ X_{n + 1} = Y_1^n + Y_2^n + \cdots + Y_{X_n}^n,
+ \]
+ where the $Y_i^n$ are iid random variables, each with the same distribution as $X_1$ (the superscript denotes the generation).
+\end{enumerate}
+It is useful to consider the pgf of a branching process. Let $F(z)$ be the pgf of $Y_i^n$. Then
+\[
+ F(z) = \E[z^{Y_i^n}] = \E[z^{X_1}] = \sum_{k = 0}^\infty p_k z^k.
+\]
+Define
+\[
+ F_n(z) = \E[z^{X_n}].
+\]
+The main theorem of branching processes here is
+\begin{thm}
+ \[
+ F_{n + 1}(z) = F_n(F(z)) = F(F(\cdots F(z) \cdots)) = F(F_n(z)).
+ \]
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ F_{n + 1}(z) &= \E[z^{X_{n + 1}}]\\
+ &= \E[\E[z^{X_{n + 1}}\mid X_n]]\\
+ &= \sum_{k = 0}^\infty \P(X_n = k)\E[z^{X_{n + 1}}\mid X_n = k]\\
+ &= \sum_{k = 0}^\infty \P(X_n = k)\E[z^{Y_1^n + \cdots + Y_k^n}\mid X_n = k]\\
+ &= \sum_{k = 0}^\infty \P(X_n = k)\E[z^{Y_1^n}]\E[z^{Y_2^n}]\cdots\E[z^{Y_k^n}]\\
+ &= \sum_{k = 0}^\infty \P(X_n = k)(\E[z^{X_1}])^k\\
+ &= \sum_{k = 0}^\infty \P(X_n = k)F(z)^k\\
+ &= F_n(F(z)).\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{thm}
+ Suppose
+ \[
+ \E[X_1] = \sum k p_k = \mu
+ \]
+ and
+ \[
+ \var (X_1) = \E[(X_1 - \mu)^2] = \sum (k - \mu)^2 p_k = \sigma^2 < \infty.
+ \]
+ Then
+ \[
+ \E[X_n] = \mu^n,\quad \var X_n = \sigma^2\mu^{n - 1}(1 + \mu + \mu^2 + \cdots + \mu^{n - 1}).
+ \]
+\end{thm}
+\begin{proof}
+\begin{align*}
+ \E[X_n] &= \E[\E[X_n\mid X_{n - 1}]]\\
+ &= \E[\mu X_{n - 1}]\\
+ &= \mu \E[X_{n - 1}]
+\end{align*}
+Then by induction, $\E[X_n] = \mu^n$ (since $X_0 = 1$).
+
+To calculate the variance, note that
+\[
+ \var (X_n) = \E[X_n^2] - (\E[X_n])^2
+\]
+and hence
+\[
+ \E[X_n^2] = \var(X_n) + (\E[X_n])^2.
+\]
+We then calculate
+\begin{align*}
+ \E[X_n^2] &= \E[\E[X_n^2\mid X_{n - 1}]]\\
+ &= \E[\var(X_n \mid X_{n - 1}) + (\E[X_n \mid X_{n - 1}])^2]\\
+ &= \E[X_{n - 1}\var (X_1) + (\mu X_{n - 1})^2]\\
+ &= \E[X_{n - 1}\sigma^2 + (\mu X_{n - 1})^2]\\
+ &= \sigma^2 \mu^{n - 1} + \mu^2 \E[X_{n - 1}^2].
+\end{align*}
+So
+\begin{align*}
+ \var X_n &= \E[X_n^2] - (\E[X_n])^2 \\
+ &= \mu^2 \E[X_{n -1}^2] + \sigma^2 \mu^{n - 1} - \mu^2 (\E[X_{n - 1}])^2\\
+ &= \mu^2(\E[X_{n - 1}^2] - \E[X_{n - 1}]^2) + \sigma^2\mu^{n - 1}\\
+ &= \mu^2 \var (X_{n - 1}) + \sigma^2 \mu^{n -1 }\\
+ &= \mu^4 \var(X_{n - 2}) + \sigma^2(\mu^{n - 1} + \mu^n)\\
+ &= \cdots\\
+ &= \mu^{2(n - 1)}\var (X_1) + \sigma^2 (\mu^{n - 1} + \mu^{n} + \cdots + \mu^{2n - 3})\\
+ &= \sigma^2\mu^{n - 1}(1 + \mu + \cdots + \mu^{n - 1}).
+\end{align*}
+Of course, we could also obtain this using the probability generating function.
+\end{proof}
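+For a concrete check, we can iterate the pgf symbolically and read off the moments. The offspring distribution below ($p_0 = p_2 = \frac{1}{2}$, so $\mu = 1$ and $\sigma^2 = 1$) is chosen purely for illustration; \texttt{sympy} is assumed available.
+
+```python
+import sympy as sp
+
+z = sp.symbols('z')
+F = sp.Rational(1, 2) + sp.Rational(1, 2) * z**2    # offspring pgf
+
+mu = sp.diff(F, z).subs(z, 1)                       # E[X_1] = 1
+sigma2 = sp.diff(F, z, 2).subs(z, 1) + mu - mu**2   # Var(X_1) = 1
+
+# F_4 = F o F o F o F
+Fn = z
+for _ in range(4):
+    Fn = sp.expand(F.subs(z, Fn))
+
+EX4 = sp.diff(Fn, z).subs(z, 1)                      # mu^4
+VarX4 = sp.diff(Fn, z, 2).subs(z, 1) + EX4 - EX4**2  # theorem: sigma^2 mu^3 (1 + mu + mu^2 + mu^3)
+```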
+\subsubsection*{Extinction probability}
+Let $A_n$ be the event $X_n = 0$, i.e.\ extinction has occurred by the $n$th generation. Let $q$ be the probability that extinction eventually occurs. Let
+\[
+ A = \bigcup_{n = 1}^\infty A_n = [\text{extinction eventually occurs}].
+\]
+Since $A_1 \subseteq A_2 \subseteq A_3 \subseteq \cdots$, we know that
+\[
+ q = \P(A) = \lim_{n\to \infty} \P(A_n) = \lim_{n \to \infty} \P(X_n = 0).
+\]
+But
+\[
+ \P(X_n = 0) = F_n(0),
+\]
+since $F_n(z) = \sum_k \P(X_n = k)z^k$, and setting $z = 0$ leaves only the $k = 0$ term. So
+\[
+ F(q) = F\left(\lim_{n\to \infty} F_n(0)\right) = \lim_{n\to \infty} F(F_n(0)) = \lim_{n\to \infty}F_{n + 1}(0) = q.
+\]
+So
+\[
+ F(q) = q.
+\]
+Alternatively, using the law of total probability
+\[
+ q = \sum_k \P(X_1 = k) \P(\text{extinction}\mid X_1 = k) = \sum_k p_k q^k = F(q),
+\]
+ where the second equality comes from the fact that for the whole population to go extinct, the line of descent of each of the $X_1$ children must go extinct, independently with probability $q$ each.
+
+This means that to find the probability of extinction, we need to find a fixed point of $F$. However, if there are many fixed points, which should we pick?
+
+\begin{thm}
+ The probability of extinction $q$ is the smallest root of the equation $q = F(q)$. Write $\mu = \E[X_1]$. Then if $\mu \leq 1$, then $q = 1$ (excluding the trivial case $\P(X_1 = 1) = 1$); if $\mu > 1$, then $q < 1$.
+\end{thm}
+
+\begin{proof}
+ To show that it is the smallest root, let $\alpha$ be the smallest root. Then note that $0 \leq \alpha \Rightarrow F(0) \leq F(\alpha) = \alpha$ since $F$ is increasing (proof: write the function out!). Hence $F(F(0)) \leq \alpha$. Continuing inductively, $F_n(0) \leq \alpha$ for all $n$. So
+ \[
+ q = \lim_{n \to \infty}F_n(0) \leq \alpha.
+ \]
+ So $q = \alpha$.
+
+ To show that $q = 1$ when $\mu \leq 1$, we show that $q = 1$ is the only root. We know that $F'(z), F''(z) \geq 0$ for $z\in (0, 1)$ (proof: write it out again!). So $F$ is increasing and convex. Since $F'(1) = \mu \leq 1$, it must approach $(1, 1)$ from above the $F = z$ line. So it must look like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (4, 0) node [right] {$z$};
+ \draw (0, 0) -- (0, 4) node [above] {$F(z)$};
+ \draw [dashed] (0, 0) -- (4, 4);
+ \draw (0, 2.3) parabola (4, 4);
+ \end{tikzpicture}
+ \end{center}
+ So $z = 1$ is the only root.
+\end{proof}
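+The proof suggests an algorithm: iterate $q \mapsto F(q)$ starting from $0$, which converges to the smallest fixed point. A sketch with an illustrative offspring distribution $p_0 = \frac{1}{4}$, $p_1 = \frac{1}{4}$, $p_2 = \frac{1}{2}$ (here $\mu = \frac{5}{4} > 1$, and solving $F(q) = q$ by hand gives $q = \frac{1}{2}$):
+
+```python
+# Illustrative offspring distribution: p_0 = 1/4, p_1 = 1/4, p_2 = 1/2
+p = [0.25, 0.25, 0.5]
+mu = sum(k * pk for k, pk in enumerate(p))   # = 1.25 > 1, so q < 1
+
+def F(z):
+    return sum(pk * z**k for k, pk in enumerate(p))
+
+# q = lim F_n(0): iterate the pgf starting from 0
+q = 0.0
+for _ in range(200):
+    q = F(q)
+# q converges to 1/2, the smallest root of F(q) = q
+```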
+
+\subsection{Random walk and gambler's ruin}
+Here we'll study random walks, using gambler's ruin as an example.
+
+\begin{defi}[Random walk]
+ Let $X_1, \cdots, X_n$ be iid random variables such that $X_n = +1$ with probability $p$, and $-1$ with probability $q = 1 - p$. Let $S_n = S_0 + X_1 + \cdots + X_n$. Then $(S_0, S_1, \cdots, S_n)$ is a \emph{1-dimensional random walk}.
+
+ If $p = q = \frac{1}{2}$, we say it is a \emph{symmetric random walk}.
+\end{defi}
+
+\begin{eg}
+ A gambler starts with $\$z$, where $0 < z < a$, and plays a game in which he wins $\$1$ or loses $\$ 1$ at each turn with probabilities $p$ and $q$ respectively, stopping when his fortune reaches $\$0$ or $\$a$. What are
+ \[
+ p_z = \P(\text{random walk hits $a$ before } 0 \mid \text{starts at } z),
+ \]
+ and
+ \[
+ q_z = \P(\text{random walk hits 0 before }a \mid \text{starts at }z)?
+ \]
+ He either wins his first game, with probability $p$, or loses with probability $q$. So
+ \[
+ p_z = qp_{z - 1} + pp_{z + 1},
+ \]
+ for $0 < z < a$, and $p_0 = 0, p_a = 1$.
+
+ Try $p_z = t^z$. Then
+ \[
+ pt^2 - t + q = (pt - q)(t - 1) = 0,
+ \]
+ noting that $p = 1 - q$. If $p \not = q$, then
+ \[
+ p_z = A\cdot 1^z + B\left(\frac{q}{p}\right)^z.
+ \]
+ Since $p_0 = 0$, we get $A = -B$. Since $p_a = 1$, we obtain
+ \[
+ p_z = \frac{1 - (q/p)^z}{1 - (q/p)^a}.
+ \]
+ If $p = q$, then $p_z = A + Bz = z/a$.
+
+ Similarly, (or perform the substitutions $p\mapsto q$, $q\mapsto p$ and $z \mapsto a - z$)
+ \[
+ q_z = \frac{(q/p)^a - (q/p)^z}{(q / p)^a - 1}
+ \]
+ if $p\not = q$, and
+ \[
+ q_z = \frac{a - z}{a}
+ \]
+ if $p = q$. Since $p_z + q_z = 1$, we know that the game will eventually end.
+
+ What if $a\to \infty$? What is the probability of going bankrupt?
+ \begin{align*}
+ \P(\text{path hits 0 ever}) &= \P\left(\bigcup_{a = z+1}^\infty [\text{path hits 0 before }a]\right)\\
+ &= \lim_{a\to \infty}\P(\text{path hits 0 before }a)\\
+ &= \lim_{a\to \infty}q_z\\
+ &= \begin{cases}
+ (q/p)^z & p > q\\
+ 1 & p \leq q.
+ \end{cases}
+ \end{align*}
+ So if the odds are against you (i.e.\ the probability of losing is greater than the probability of winning), then no matter how small the difference is, you are bound to go bankrupt eventually.
+\end{eg}
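+The closed form can be checked against the recurrence directly. Since $p_z$ is linear in the unknown value $p_1$, we can solve the recurrence forwards with a trial value and rescale (a ``shooting'' method); exact fractions avoid rounding. The values $p = \frac{3}{5}$ and $a = 10$ are illustrative.
+
+```python
+from fractions import Fraction
+
+p = Fraction(3, 5)      # probability of winning each game (illustrative)
+q = 1 - p
+a = 10
+
+# p_z = q p_{z-1} + p p_{z+1}, i.e. p_{z+1} = (p_z - q p_{z-1}) / p.
+# Solve with u_0 = 0, u_1 = 1, then rescale so that the value at a is 1.
+u = [Fraction(0), Fraction(1)]
+for z in range(1, a):
+    u.append((u[z] - q * u[z - 1]) / p)
+pz = [uz / u[a] for uz in u]
+
+closed = [(1 - (q / p) ** z) / (1 - (q / p) ** a) for z in range(a + 1)]
+```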
+\subsubsection*{Duration of the game}
+Let $D_z =$ expected time until the random walk hits $0$ or $a$, starting from $z$. We first show that this is bounded. Wherever the walk currently is, a run of $a$ consecutive $+1$s or $a$ consecutive $-1$s would take it to $a$ or $0$. So in each block of $a$ steps, the walk is absorbed with probability at least $p^a + q^a$, and hence the number of blocks needed is dominated by a geometric random variable with parameter $p^a + q^a$. So we must have
+\[
+ D_z \leq \frac{a}{p^a + q^a}
+\]
+This is a \emph{very} crude bound, but it is sufficient to show that it is bounded, and we can meaningfully apply formulas to this finite quantity.
+
+We have
+\begin{align*}
+ D_z &= \E[\text{duration}]\\
+ &= \E[\E[\text{duration}\mid X_1]]\\
+ &= p\E[\text{duration}\mid X_1 = 1] + q\E[\text{duration}\mid X_1 = -1]\\
+ &= p(1 + D_{z + 1}) + q(1 + D_{z - 1})
+\end{align*}
+So
+\[
+ D_z = 1 + pD_{z + 1} + qD_{z - 1},
+\]
+subject to $D_0 = D_a = 0$.
+
+We first find a particular solution by trying $D_z = Cz$. Then
+\[
+ Cz = 1 + pC(z + 1) + qC(z - 1).
+\]
+So
+\[
+ C = \frac{1}{q - p},
+\]
+for $p \not = q$. Then we find the complementary solution. Try $D_z = t^z$.
+\[
+ pt^2 - t + q = 0,
+\]
+which has roots $1$ and $q/p$. So the general solution is
+\[
+ D_z = A + B\left(\frac{q}{p}\right)^z + \frac{z}{q - p}.
+\]
+Putting in the boundary conditions,
+\[
+ D_z = \frac{z}{q - p} - \frac{a}{q - p} \cdot \frac{1 - (q/p)^z}{1 - (q/p)^a}.
+\]
+If $p = q$, then to find the particular solution, we have to try
+\[
+ D_z = Cz^2.
+\]
+Then we find $C = -1$. So
+\[
+ D_z = -z^2 + A + Bz.
+\]
+Then the boundary conditions give
+\[
+ D_z = z(a - z).
+\]
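+It is immediate to verify that $D_z = z(a - z)$ satisfies the symmetric recurrence $D_z = 1 + \frac{1}{2}D_{z + 1} + \frac{1}{2}D_{z - 1}$; a one-line check (multiplying through by $2$ keeps everything in integers; $a = 20$ is illustrative):
+
+```python
+a = 20  # illustrative boundary
+D = [z * (a - z) for z in range(a + 1)]
+
+# Check 2 D_z == 2 + D_{z+1} + D_{z-1} for 0 < z < a, and the boundary values
+ok = all(2 * D[z] == 2 + D[z + 1] + D[z - 1] for z in range(1, a))
+```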
+\subsubsection*{Using generating functions}
+We can use generating functions to solve the problem as well.
+
+Let $U_{z,n} = \P(\text{random walk absorbed at 0 at }n\mid \text{start in }z)$.
+
+We have the following conditions: $U_{0, 0} = 1$; $U_{z, 0} = 0$ for $0 < z \leq a$; $U_{0, n} = U_{a, n} = 0$ for $n > 0$.
+
+We define a pgf-like function.
+\[
+ U_z(s) = \sum_{n = 0 }^\infty U_{z, n}s^n.
+\]
+We know that
+\[
+ U_{z, n + 1} = p U_{z + 1, n} + qU_{z - 1, n}.
+\]
+Multiply by $s^{n + 1}$ and sum on $n = 0, 1, \cdots$. Then
+\[
+ U_z(s) = psU_{z + 1}(s) + qsU_{z - 1}(s).
+\]
+We try $U_z(s) = [\lambda(s)]^z$. Then
+\[
+ \lambda(s) = ps\lambda(s)^2 + qs.
+\]
+Then
+\[
+ \lambda_1(s), \lambda_2(s) = \frac{1 \pm \sqrt{1 - 4pqs^2}}{2ps}.
+\]
+So
+\[
+ U_z(s) = A(s) \lambda_1(s)^z + B(s)\lambda_2(s)^z.
+\]
+Since $U_0(s) = 1$ and $U_a(s) = 0$, we know that
+\[
+ A(s) + B(s) = 1
+\]
+and
+\[
+ A(s)\lambda_1(s)^a + B(s)\lambda_2(s)^a = 0.
+\]
+Then we find that
+\[
+ U_z(s) = \frac{\lambda_1(s)^a\lambda_2(s)^z - \lambda_2(s)^a\lambda_1(s)^z}{\lambda_1(s)^a - \lambda_2(s)^a}.
+\]
+Since $\lambda_1(s)\lambda_2(s) = \frac{q}{p}$, we can ``simplify'' this to obtain
+\[
+ U_z(s) = \left(\frac{q}{p}\right)^z \cdot \frac{\lambda_1(s)^{a - z} - \lambda_2(s)^{a - z}}{\lambda_1(s)^a - \lambda_2(s)^a}.
+\]
+We see that $U_z(1) = q_z$. We can apply the same method to find the generating function for absorption at $a$, say $V_z(s)$. Then the generating function for the duration is $U_z + V_z$. Hence the expected duration is $D_z = U'_z(1) + V'_z(1).$
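+As a numerical sanity check that $U_z(1) = q_z$, we can evaluate both formulas for illustrative values $p = 0.6$, $a = 10$, $z = 4$ (at $s = 1$ the square root is $|p - q|$, so $\lambda_1 = 1$ and $\lambda_2 = q/p$):
+
+```python
+import math
+
+p, q = 0.6, 0.4
+a, z = 10, 4
+s = 1.0
+
+disc = math.sqrt(1 - 4 * p * q * s * s)   # = |p - q| at s = 1
+l1 = (1 + disc) / (2 * p * s)
+l2 = (1 - disc) / (2 * p * s)
+
+Uz1 = (q / p) ** z * (l1 ** (a - z) - l2 ** (a - z)) / (l1 ** a - l2 ** a)
+qz = ((q / p) ** a - (q / p) ** z) / ((q / p) ** a - 1)
+# Uz1 and qz agree up to floating-point error
+```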
+\section{Continuous random variables}
+\subsection{Continuous random variables}
+So far, we have only looked at the case where the outcomes $\Omega$ are discrete. Consider an experiment where we throw a needle randomly onto the ground and record the angle it makes with a fixed horizontal. Then our sample space is $\Omega = \{\omega\in \R: 0 \leq \omega < 2\pi\}$. Then we have
+\[
+ \P(\omega \in [0, \theta]) = \frac{\theta}{2\pi}, \quad 0 \leq \theta \leq 2\pi.
+\]
+With continuous distributions, we can no longer talk about the probability of getting a particular number, since this is always zero. For example, we will almost never get an angle of \emph{exactly} $0.42$ radians.
+
+Instead, we can only meaningfully talk about the probability of $X$ falling into a particular range. To capture the distribution of $X$, we want to define a function $f$ such that for each $x$ and small $\delta x$, the probability of $X$ lying in $[x, x + \delta x]$ is given by $f(x) \delta x + o(\delta x)$. This $f$ is known as the \emph{probability density function}. Integrating this, we know that the probability of $X\in [a, b]$ is $\int_a^b f(x)\;\d x$. We take this as the definition of $f$.
+
+\begin{defi}[Continuous random variable]
+ A random variable $X : \Omega \to \R$ is \emph{continuous} if there is a function $f : \R \to \R_{\geq 0}$ such that
+ \[
+ \P(a \leq X \leq b) = \int_a^b f(x) \;\d x.
+ \]
+ We call $f$ the \emph{probability density function}, which satisfies
+ \begin{itemize}
+ \item $f \geq 0$
+ \item $\int_{-\infty}^\infty f(x)\;\d x = 1$.
+ \end{itemize}
+\end{defi}
+Note that $\P(X = a) = 0$ since it is $\int_a^a f(x)\;\d x$. Then we also have
+\[
+ \P\left(\bigcup_{a\in \Q} [X = a]\right) = 0,
+\]
+since it is a countable union of probability 0 events (and axiom 3 states that the probability of the countable union is the sum of probabilities, i.e.\ 0).
+
+\begin{defi}[Cumulative distribution function]
+ The \emph{cumulative distribution function} (or simply \emph{distribution function}) of a random variable $X$ (discrete, continuous, or neither) is
+ \[
+ F(x) = \P(X \leq x).
+ \]
+\end{defi}
+We can see that $F(x)$ is non-decreasing, with $F(x) \to 0$ as $x \to -\infty$ and $F(x) \to 1$ as $x\to \infty$.
+
+In the case of continuous random variables, we have
+\[
+ F(x) = \int_{-\infty}^x f(z) \;\d z.
+\]
+Then $F$ is continuous, and is differentiable wherever $f$ is continuous. In general, $F'(x) = f(x)$ whenever $F$ is differentiable.
+
+The name of \emph{continuous} random variables comes from the fact that $F(x)$ is (absolutely) continuous.
+
+The cdf of a continuous random variable might look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (3, 0);
+ \draw [->] (0, 0) -- (0, 2) node [above] {$\P$};
+ \draw (0, 0) parabola (2.5, 1.5);
+ \end{tikzpicture}
+\end{center}
+The cdf of a discrete random variable might look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (3, 0);
+ \draw [->] (0, 0) -- (0, 2) node [above] {$\P$};
+ \draw (0, 0) -- (1, 0) -- (1, 0.5) -- (2, 0.5) -- (2, 1) -- (3, 1) -- (3, 1.5);
+ \end{tikzpicture}
+\end{center}
+The cdf of a random variable that is neither discrete nor continuous might look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (3, 0);
+ \draw [->] (0, 0) -- (0, 2) node [above] {$\P$};
+ \draw (0, 0) parabola (1, 1);
+ \draw (1, 1) -- (2.5, 1) -- (2.5, 2);
+ \node [circ] at (2.5, 2) {};
+ \end{tikzpicture}
+\end{center}
+Note that we always have
+\[
+ \P(a < X \leq b) = F(b) - F(a).
+\]
+This will be equal to $\int_a^b f(x)\;\d x$ in the case of continuous random variables.
+
+\begin{defi}[Uniform distribution]
+ The \emph{uniform distribution} on $[a, b]$ has pdf
+ \[
+ f(x) = \frac{1}{b - a} \quad (a \leq x \leq b).
+ \]
+ Then
+ \[
+ F(x) = \int_a^x f(z)\;\d z = \frac{x - a}{b - a}
+ \]
+ for $a \leq x \leq b$.
+
+ If $X$ follows a uniform distribution on $[a, b]$, we write $X\sim U[a, b]$.
+\end{defi}
+\begin{defi}[Exponential random variable]
+ The \emph{exponential random variable with parameter $\lambda$} has pdf
+ \[
+ f(x) = \lambda e^{-\lambda x}
+ \]
+ and
+ \[
+ F(x) = 1 - e^{-\lambda x}
+ \]
+ for $x \geq 0$.
+
+ We write $X \sim \mathcal{E}(\lambda)$.
+\end{defi}
+
+Then
+\[
+ \P(a \leq X \leq b) = \int_a^b f(z) \;\d z = e^{-\lambda a} - e^{-\lambda b} \quad (0 \leq a \leq b).
+\]
+\begin{prop}
+ The exponential random variable is \emph{memoryless}, i.e.
+ \[
+ \P(X \geq x + z \mid X \geq x) = \P(X \geq z).
+ \]
+ This means that, say if $X$ measures the lifetime of a light bulb, knowing it has already lasted for 3 hours does not give any information about how much longer it will last.
+\end{prop}
+Recall that the geometric random variable is the discrete memoryless random variable.
+
+\begin{proof}
+ \begin{align*}
+ \P(X \geq x + z \mid X \geq x) &= \frac{\P(X \geq x + z)}{\P(X \geq x)}\\
+ &= \frac{\int_{x + z}^\infty f(u)\;\d u}{\int_x^\infty f(u)\;\d u}\\
+ &= \frac{e^{-\lambda(x + z)}}{e^{-\lambda x}}\\
+ &= e^{-\lambda z}\\
+ &= \P(X \geq z).\qedhere
+ \end{align*}
+\end{proof}
+
+Similarly, given that you have lived for $x$ days, what is the probability of dying within the next $\delta x$ days?
+\begin{align*}
+ \P(X \leq x + \delta x \mid X \geq x) &= \frac{\P(x \leq X \leq x + \delta x)}{\P(X \geq x)}\\
+ &\approx \frac{\lambda e^{-\lambda x} \delta x}{e^{-\lambda x}}\\
+ &= \lambda \delta x.
+\end{align*}
+So it is independent of how old you currently are, assuming your survival follows an exponential distribution.
+
+In general, we can define the hazard rate to be
+\[
+ h(x) = \frac{f(x)}{1 - F(x)}.
+\]
+Then
+\[
+ \P(x \leq X \leq x + \delta x \mid X \geq x) = \frac{\P(x \leq X \leq x + \delta x)}{\P(X \geq x)} \approx \frac{\delta x f(x)}{1 - F(x)} = \delta x\cdot h(x).
+\]
+In the case of exponential distribution, $h(x)$ is constant and equal to $\lambda$.
+
+Similar to discrete variables, we can define the expected value and the variance. However, we will (obviously) have to replace the sum with an integral.
+\begin{defi}[Expectation]
+ The \emph{expectation} (or \emph{mean}) of a continuous random variable is
+ \[
+ \E[X] = \int_{-\infty}^\infty xf(x) \;\d x,
+ \]
+ provided not both $\int_0^\infty xf(x)\;\d x$ and $\int_{-\infty}^0 xf(x) \;\d x$ are infinite.
+\end{defi}
+
+\begin{thm}
+ If $X$ is a continuous random variable, then
+ \[
+ \E[X] = \int_0^\infty \P(X \geq x)\;\d x - \int_0^\infty \P(X \leq -x)\;\d x.
+ \]
+\end{thm}
+This result is more intuitive in the discrete case:
+\[
+ \sum_{x = 0}^\infty x \P(X = x) = \sum_{x = 0}^\infty \sum_{y = x + 1}^\infty \P(X = y) = \sum_{x = 0}^\infty \P(X > x),
+\]
+where the first equality holds because on both sides, each term $\P(X = y)$ appears exactly $y$ times in the sum.
+
+\begin{proof}
+ \begin{align*}
+ \int_0^\infty \P(X \geq x)\;\d x &= \int_0^\infty \int_x^\infty f(y)\;\d y\;\d x\\
+ &= \int_0^\infty \int_0^\infty I[y\geq x]f(y)\;\d y\;\d x\\
+ &= \int_0^\infty \left(\int_0^\infty I[x\leq y]\;\d x\right) f(y)\;\d y\\
+ &= \int_0^\infty yf(y)\;\d y.
+ \end{align*}
+ We can similarly show that $\int_0^\infty \P(X \leq -x)\;\d x = -\int_{-\infty}^0 yf(y)\;\d y$.
+\end{proof}
+
+\begin{eg}
+ Suppose $X\sim \mathcal{E}(\lambda)$. Then
+ \[
+ \P(X \geq x) = \int_x^{\infty} \lambda e^{-\lambda t}\;\d t = e^{-\lambda x}.
+ \]
+ So
+ \[
+ \E[X] = \int_0^\infty e^{-\lambda x}\;\d x = \frac{1}{\lambda}.
+ \]
+\end{eg}
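+The tail formula from the previous theorem can be confirmed numerically; a sketch using a midpoint-rule approximation of $\int_0^\infty \P(X \geq x)\;\d x$ (the parameter $\lambda = 2$ and the cut-off at $20$ are illustrative; the neglected tail is of order $e^{-40}$):
+
+```python
+import math
+
+lam = 2.0
+dx = 1e-4
+# Midpoint rule for the integral of P(X >= x) = e^{-lam x} over [0, 20]
+EX = sum(math.exp(-lam * (i + 0.5) * dx) for i in range(200000)) * dx
+# EX is approximately 1/lam = 0.5
+```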
+
+\begin{defi}[Variance]
+ The \emph{variance} of a continuous random variable is
+ \[
+ \var (X) = \E[(X - \E[X])^2] = \E[X^2] - (\E[X])^2.
+ \]
+\end{defi}
+So we have
+\[
+ \var (X) = \int_{-\infty}^\infty x^2 f(x) \;\d x - \left(\int_{-\infty}^\infty xf(x)\;\d x\right)^2.
+\]
+\begin{eg}
+ Let $X\sim U[a, b]$. Then
+ \[
+ \E[X] = \int_a^b x\frac{1}{b - a}\;\d x = \frac{a + b}{2}.
+ \]
+ So
+ \begin{align*}
+ \var (X) &= \int_a^b x^2\frac{1}{b - a}\;\d x - (\E[X])^2\\
+ &= \frac{1}{12}(b - a)^2.
+ \end{align*}
+\end{eg}
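+The uniform moments can be derived symbolically; a sketch with \texttt{sympy} (assumed available):
+
+```python
+import sympy as sp
+
+x, a, b = sp.symbols('x a b', real=True)
+f = 1 / (b - a)                      # pdf of U[a, b] on its support [a, b]
+
+EX = sp.integrate(x * f, (x, a, b))  # simplifies to (a + b)/2
+VarX = sp.simplify(sp.integrate(x**2 * f, (x, a, b)) - EX**2)   # (b - a)^2/12
+```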
+
+Apart from the mean, or expected value, we can also have other notions of ``average values''.
+\begin{defi}[Mode and median]
+ Given a pdf $f(x)$, we call $\hat x$ a \emph{mode} if
+ \[
+ f(\hat x) \geq f(x)
+ \]
+ for all $x$. Note that a distribution can have many modes. For example, in the uniform distribution, all $x$ are modes.
+
+ We call $\hat x$ a \emph{median} if
+ \[
+ \int_{-\infty}^{\hat x}f(x)\;\d x = \frac{1}{2} = \int_{\hat x}^\infty f(x)\;\d x.
+ \]
+ For a discrete random variable, the median is $\hat x$ such that
+ \[
+ \P(X \leq \hat x) \geq \frac{1}{2}, \quad \P(X \geq \hat x) \geq \frac{1}{2}.
+ \]
+ Here we have a non-strict inequality since if the random variable, say, always takes value $0$, then both probabilities will be 1.
+\end{defi}
+
+Suppose that we have an experiment whose outcome follows the distribution of $X$. We can perform the experiment many times and obtain many results $X_1, \cdots, X_n$. The average of these results is known as the \emph{sample mean}.
+
+\begin{defi}[Sample mean]
+ If $X_1, \cdots, X_n$ is a random sample from some distribution, then the \emph{sample mean} is
+ \[
+ \bar X = \frac{1}{n}\sum_1^n X_i.
+ \]
+\end{defi}
+
+\subsection{Stochastic ordering and inspection paradox}
+We want to define a (partial) order on two different random variables. For example, we might want to say that $X + 2 > X$, where $X$ is a random variable.
+
+A simple definition might be \emph{expectation ordering}, where $X \geq Y$ if $\E[X] \geq \E[Y]$. This, however, is not satisfactory since $X \geq Y$ and $Y \geq X$ does not imply $X = Y$. Instead, we can use the \emph{stochastic order}.
+\begin{defi}[Stochastic order]
+ The \emph{stochastic order} is defined as: $X \geq_{\mathrm{st}} Y$ if $\P(X > t) \geq \P(Y > t)$ for all $t$.
+\end{defi}
+This is clearly transitive. Stochastic ordering implies expectation ordering, since
+\[
+ X \geq_{\mathrm{st}} Y \Rightarrow \int_0^\infty \P(X > x)\;\d x \geq \int_0^\infty \P(Y > x)\;\d x \Rightarrow \E[X] \geq \E[Y].
+\]
+Alternatively, we can also order things by hazard rate.
+\begin{eg}[Inspection paradox]
+ Suppose that $n$ families have children attending a school. Family $i$ has $X_i$ children at the school, where $X_1, \cdots, X_n$ are iid random variables, with $\P(X_i = k) = p_k$. Suppose that the average family size is $\mu$.
+
+ Now pick a child at random. What is the probability distribution of the child's family size? Let $J$ be the index of the family from which the child comes (which is a random variable). Then
+ \[
+ \P(X_J = k\mid J = j) = \frac{\P(J = j, X_j = k)}{\P(J = j)}.
+ \]
+ The denominator is $1/n$. The numerator is more complex. This would require the $j$th family to have $k$ members, which happens with probability $p_k$; and that we picked a member from the $j$th family, which happens with probability $\E\left[\frac{k}{k + \sum_{i \not=j} X_i}\right]$. So
+ \[
+ \P(X_J = k \mid J = j) = \E\left[\frac{nkp_k}{k + \sum_{i \not=j} X_i}\right].
+ \]
+ Note that this is independent of $j$. So
+ \[
+ \P(X_J = k) = \E\left[\frac{nkp_k}{k + \sum_{i \not=j} X_i}\right].
+ \]
+ Also, $\P(X_1 = k) = p_k$. So
+ \[
+ \frac{\P(X_J = k)}{\P(X_1 = k)} = \E\left[\frac{nk}{k + \sum_{i \not= j}X_i}\right].
+ \]
+ This is increasing in $k$, and greater than $1$ for $k > \mu$. So the average value of the family size of the child we picked is greater than the average family size. It can be shown that $X_J$ is stochastically greater than $X_1$.
+
+ This means that if we pick the children randomly, the sample mean of the family size will be greater than the actual mean. This is because the larger a family is, the more likely it is that we pick a child from that family.
+\end{eg}
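+The paradox can be seen exactly in a tiny example. Take $n = 2$ families with sizes iid uniform on $\{1, 2\}$ (a made-up illustrative distribution), so $\mu = \frac{3}{2}$; enumerating all configurations shows $\E[X_J] = \frac{19}{12} > \frac{3}{2}$.
+
+```python
+from fractions import Fraction
+from itertools import product
+
+# Two families, sizes iid with P(size = 1) = P(size = 2) = 1/2
+sizes = {1: Fraction(1, 2), 2: Fraction(1, 2)}
+
+# P(X_J = k): average, over the family sizes, of the chance that the
+# randomly chosen child belongs to a family of size k
+pJ = {k: Fraction(0) for k in sizes}
+for x1, x2 in product(sizes, repeat=2):
+    w = sizes[x1] * sizes[x2]
+    for xj in (x1, x2):
+        pJ[xj] += w * Fraction(xj, x1 + x2)
+
+mean_family = sum(k * pk for k, pk in sizes.items())   # mu = 3/2
+mean_picked = sum(k * pk for k, pk in pJ.items())      # = 19/12 > 3/2
+```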
+
+\subsection{Jointly distributed random variables}
+\begin{defi}[Joint distribution]
+ Two random variables $X, Y$ have \emph{joint distribution} $F: \R^2 \to [0, 1]$ defined by
+ \[
+ F(x, y) = \P(X \leq x, Y \leq y).
+ \]
+ The \emph{marginal distribution} of $X$ is
+ \[
+ F_X(x) = \P(X \leq x) = \P(X \leq x, Y < \infty) = F(x, \infty) = \lim_{y \to \infty}F(x, y)
+ \]
+\end{defi}
+
+\begin{defi}[Jointly distributed random variables]
+ We say $X_1, \cdots, X_n$ are \emph{jointly distributed continuous random variables} and have \emph{joint pdf} $f$ if for any set $A\subseteq \R^n$
+ \[
+ \P((X_1, \cdots, X_n)\in A) = \int_{(x_1, \cdots, x_n)\in A} f(x_1, \cdots, x_n)\;\d x_1\cdots \d x_n,
+ \]
+ where
+ \[
+ f(x_1, \cdots, x_n) \geq 0
+ \]
+ and
+ \[
+ \int_{\R^n} f(x_1, \cdots, x_n)\;\d x_1 \cdots \d x_n = 1.
+ \]
+\end{defi}
+
+\begin{eg}
+ In the case where $n = 2$,
+ \[
+ F(x, y) = \P(X \leq x,Y \leq y) = \int_{-\infty}^x\int_{-\infty}^y f(u, v)\;\d v\;\d u.
+ \]
+ If $F$ is differentiable, then
+ \[
+ f(x, y) = \frac{\partial^2}{\partial x\partial y}F(x, y).
+ \]
+\end{eg}
+
+\begin{thm}
+ If $X$ and $Y$ are jointly continuous random variables, then they are individually continuous random variables.
+\end{thm}
+
+\begin{proof}
+ We prove this by showing that $X$ has a density function.
+
+ We know that
+ \begin{align*}
+ \P(X \in A) &= \P(X \in A, Y\in (-\infty, +\infty))\\
+ &= \int_{x\in A}\int_{-\infty}^\infty f(x, y)\;\d y\;\d x\\
+ &= \int_{x\in A} f_X(x)\;\d x
+ \end{align*}
+ So
+ \[
+ f_X(x) = \int_{-\infty}^\infty f(x, y)\;\d y
+ \]
+ is the (marginal) pdf of $X$.
+\end{proof}
+
+\begin{defi}[Independent continuous random variables]
+ Continuous random variables $X_1, \cdots, X_n$ are independent if
+ \[
+ \P(X_1\in A_1, X_2 \in A_2, \cdots, X_n\in A_n) = \P(X_1 \in A_1)\P(X_2 \in A_2) \cdots \P(X_n \in A_n)
+ \]
+ for all $A_i\subseteq \Omega_{X_i}$.
+
+ If we let $F_{X_i}$ and $f_{X_i}$ be the cdf and pdf of $X_i$, then
+ \[
+ F(x_1, \cdots, x_n) = F_{X_1}(x_1)\cdots F_{X_n}(x_n)
+ \]
+ and
+ \[
+ f(x_1, \cdots, x_n) = f_{X_1}(x_1) \cdots f_{X_n}(x_n)
+ \]
+ are each individually equivalent to the definition above.
+\end{defi}
+
+To show that two (or more) random variables are independent, we only have to factorize the joint pdf into factors that each only involve one variable.
+\begin{eg}
+ If $(X_1, X_2)$ takes a random value from $[0, 1] \times [0, 1]$, then $f(x_1, x_2) = 1$. Then we can see that $f(x_1, x_2) = 1\cdot 1 = f(x_1)\cdot f(x_2)$. So $X_1$ and $X_2$ are independent.
+
+ On the other hand, if $(Y_1, Y_2)$ takes a random value from $[0, 1] \times [0, 1]$ with the restriction that $Y_1 \leq Y_2$, then they are not independent, since the joint pdf is $f(y_1, y_2) = 2I[y_1 \leq y_2]$, which cannot be split into a function of $y_1$ times a function of $y_2$.
+\end{eg}
+
+\begin{prop}
+ For independent continuous random variables $X_i$,
+ \begin{enumerate}
+ \item $\E [\prod X_i] = \prod \E [X_i]$
+ \item $\var (\sum X_i) = \sum \var (X_i)$
+ \end{enumerate}
+\end{prop}
+
+\subsection{Geometric probability}
+Often, when doing probability problems that involve geometry, we can visualize the outcomes with the aid of a picture.
+
+\begin{eg}
+ Two points $X$ and $Y$ are chosen independently and uniformly at random on a line segment of length $L$. What is the probability that $|X - Y| \leq \ell$? By ``at random'', we mean
+ \[
+ f(x, y) = \frac{1}{L^2},
+ \]
+ since each of $X$ and $Y$ has pdf $1/L$.
+
+ We can visualize this on a graph:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (3, 3);
+ \draw [fill=gray!50!white] (0, 0) -- (0.5, 0) -- (3, 2.5) -- (3, 3) -- (2.5, 3) -- (0, 0.5) -- cycle;
+ \node at (1.5, 1.5) {A};
+ \node at (0.5, 0) [below] {$\ell$};
+ \node at (3, 0) [below] {$L$};
+ \node at (0, 0.5) [left] {$\ell$};
+ \node at (0, 3) [left] {$L$};
+ \end{tikzpicture}
+ \end{center}
+ Here the two axes are the values of $X$ and $Y$, and $A$ is the permitted region. The total area of the white part is simply the area of a square with length $L - \ell$. So the area of $A$ is $L^2 - (L - \ell)^2 = 2L\ell - \ell^2$. So the desired probability is
+ \[
+ \int_A f(x, y)\;\d x\;\d y = \frac{2L\ell - \ell^2}{L^2}.
+ \]
+\end{eg}
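The answer above is easy to check by simulation. A minimal Monte Carlo sketch (the seed, trial count and the particular values of $L$ and $\ell$ are arbitrary choices for illustration):

```python
import random

def close_pair_prob(L, l, trials=200_000, seed=0):
    """Estimate P(|X - Y| <= l) for X, Y independent U[0, L], by simulation."""
    rng = random.Random(seed)
    hits = sum(abs(rng.uniform(0, L) - rng.uniform(0, L)) <= l
               for _ in range(trials))
    return hits / trials

L, l = 1.0, 0.25
exact = (2 * L * l - l ** 2) / L ** 2   # (2Ll - l^2)/L^2 = 0.4375 here
estimate = close_pair_prob(L, l)
```

With $2 \times 10^5$ trials the estimate agrees with the exact value to within a few parts in a thousand.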
+
+\begin{eg}[Bertrand's paradox]
+ Suppose we draw a random chord in a circle. What is the probability that the length of the chord is greater than the length of the side of an inscribed equilateral triangle?
+
+ There are three ways of ``drawing a random chord''.
+ \begin{enumerate}
+ \item We randomly pick two end points over the circumference independently. Now draw the inscribed triangle with a vertex at one end point. For the length of the chord to be longer than a side of the triangle, the other end point must lie between the two other vertices of the triangle. This happens with probability $1/3$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius = 1];
+ \draw (0, 1) -- (0.866, -0.5) -- (-0.866, -0.5) -- cycle;
+ \node [circ, fill=red] at (0, 1) {};
+ \draw [mblue] (0, 1) -- (-1, 0);
+ \draw [mred] (0, 1) -- (0, -1);
+ \draw [mblue] (0, 1) -- (1, 0);
+ \end{tikzpicture}
+ \end{center}
+ \item Wlog the chord is horizontal, on the lower side of the circle. The mid-point is uniformly distributed along the vertical (dashed) diameter. Then the probability of getting a long chord is $1/2$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius = 1];
+ \draw (0, 1) -- (0.866, -0.5) -- (-0.866, -0.5) -- cycle;
+ \node [circ, fill=mred] at (0, 1) {};
+ \draw [dashed] (0, 1) -- (0, -1);
+ \draw [mblue] (-0.4359, -0.9) -- (0.4359, -0.9);
+ \draw [mblue] (-0.8, -0.6) -- (0.8, -0.6);
+ \draw [mred] (-0.9539, -0.3) -- (0.9539, -0.3);
+ \draw [mred] (-1, 0) -- (1, 0);
+ \end{tikzpicture}
+ \end{center}
+ \item The mid-point of the chord is distributed uniformly across the circle. Then we get a long chord if and only if the mid-point lies in the smaller circle shown below. This occurs with probability $1/4$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius = 1];
+ \draw (0, 1) -- (0.866, -0.5) -- (-0.866, -0.5) -- cycle;
+ \node [circ, fill=mred] at (0, 1) {};
+ \draw circle [radius = 0.5];
+ \draw [mred] (0, -1) -- (-0.6, 0.8);
+ \draw [mblue] (-0.9539, 0.3) -- (0.4359, 0.9);
+ \draw [mblue] (0.6, -0.8) -- (0.9165, 0.4);
+ \draw [mblue] (-1, 0) -- (0.3, -0.9539);
+ \end{tikzpicture}
+ \end{center}
+ \end{enumerate}
+ We get different answers for different notions of ``random''! This is why when we say ``randomly'', we should be explicit in what we mean by that.
+\end{eg}
+
+\begin{eg}[Buffon's needle]
+ A needle of length $\ell$ is tossed at random onto a floor marked with parallel lines a distance $L$ apart, where $\ell \leq L$. Let $A$ be the event that the needle intersects a line. What is $\P(A)$?
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, -1) -- (3, -1);
+ \draw (-3, 1) -- (3, 1);
+ \draw (-2, -0.1) -- (-0.5, 0.7) node [anchor = south east, pos = 0.5] {$\ell$};
+ \draw [->] (-2, -0.1) -- (-2, 1) node [left, pos = 0.5] {$X$};
+ \draw [->] (-2, 1) -- (-2, -0.1);
+ \draw [dashed] (-2.5, -0.1) -- (-0.5, -0.1);
+ \draw (-1.5, -0.1) arc (0:29:0.5);
+ \node at (-1.2, 0.1) {$\theta$};
+ \draw [->] (2, -1) -- (2, 1) node [right, pos = 0.5] {$L$};
+ \draw [->] (2, 1) -- (2, -1);
+ \end{tikzpicture}
+ \end{center}
+ Suppose that $X\sim U[0, L]$ and $\theta\sim U[0, \pi]$. Then
+ \[
+ f(x, \theta) = \frac{1}{L\pi}.
+ \]
+ This touches a line iff $X \leq \ell \sin \theta$. Hence
+ \[
+ \P(A) = \int_{\theta = 0}^\pi \underbrace{\frac{\ell \sin \theta}{L}}_{\P(X \leq \ell\sin \theta)}\frac{1}{\pi}\d \theta = \frac{2\ell }{\pi L}.
+ \]
+ Since the answer involves $\pi$, we can estimate $\pi$ by conducting repeated experiments! Suppose we have $N$ hits out of $n$ tosses. Then an estimator for $p = \P(A)$ is $\hat{p} = \frac{N}{n}$. Hence
+ \[
+ \hat{\pi} = \frac{2\ell}{\hat{p}L}.
+ \]
+ We will later find out that this is a really inefficient way of estimating $\pi$ (as you could have guessed).
+\end{eg}
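To see just how the estimator $\hat{\pi} = 2\ell/(\hat{p}L)$ behaves, we can simulate the tosses directly. A minimal sketch (the seed and toss count are arbitrary illustrative choices):

```python
import math
import random

def estimate_pi_buffon(l=1.0, L=1.0, tosses=200_000, seed=1):
    """Toss needles of length l onto lines spaced L apart, then invert p = 2l/(pi L)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(tosses):
        x = rng.uniform(0, L)            # distance from needle end to the nearest line
        theta = rng.uniform(0, math.pi)  # angle between needle and the lines
        if x <= l * math.sin(theta):     # the needle crosses a line
            hits += 1
    p_hat = hits / tosses
    return 2 * l / (p_hat * L)

pi_hat = estimate_pi_buffon()
```

Even with $2\times 10^5$ tosses, $\hat\pi$ is typically only correct to two or three decimal places, which already hints at the inefficiency mentioned above.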
+\subsection{The normal distribution}
+\begin{defi}[Normal distribution]
+ The \emph{normal distribution} with parameters $\mu, \sigma^2$, written $N(\mu, \sigma^2)$ has pdf
+ \[
+ f(x) = \frac{1}{\sqrt{2 \pi}\sigma}\exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right),
+ \]
+ for $-\infty < x < \infty$.
+
+ It looks approximately like this:
+ \begin{center}
+ \begin{tikzpicture}[yscale=1.5]
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, 1.3) -- (0, 0) node [below] {$\mu$};
+ \draw [domain=-3:3,samples=50, mblue] plot (\x, {exp(-\x * \x)});
+ \end{tikzpicture}
+ \end{center}
+ The \emph{standard normal} is when $\mu = 0, \sigma^2 = 1$, i.e.\ $X\sim N(0, 1)$.
+
+ We usually write $\phi(x)$ for the pdf and $\Phi(x)$ for the cdf of the standard normal.
+\end{defi}
+This is a rather important probability distribution. This is partly due to the central limit theorem, which says that if we have a large number of iid random variables, then the distribution of their average is approximately normal. Many distributions in physics and other sciences are also approximately or exactly normal.
+
+We first have to show that this makes sense, i.e.
+\begin{prop}
+ \[
+ \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{1}{2\sigma^2}(x - \mu)^2}\;\d x = 1.
+ \]
+\end{prop}
+\begin{proof}
+ Substitute $z = \frac{(x - \mu)}{\sigma}$. Then
+ \[
+ I = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}z^2}\;\d z.
+ \]
+ Then
+ \begin{align*}
+ I^2 &= \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\;\d x\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} e^{-y^2/2}\;\d y\\
+ &= \int_0^\infty \int_0^{2\pi}\frac{1}{2\pi}e^{-r^2/2}r\;\d r\;\d \theta\\
+ &= 1.\qedhere
+ \end{align*}
+\end{proof}
+
+We also have
+\begin{prop}
+ $\E[X] = \mu$.
+\end{prop}
+\begin{proof}
+\begin{align*}
+ \E[X] &= \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^\infty x e^{-(x - \mu)^2/2\sigma^2}\;\d x\\
+ &= \frac{1}{\sqrt{2\pi} \sigma}\int_{-\infty}^\infty (x - \mu)e^{-(x - \mu)^2/2\sigma^2}\;\d x + \frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^\infty \mu e^{-(x - \mu)^2/2\sigma^2}\;\d x.
+\end{align*}
+The first term is antisymmetric about $\mu$ and gives $0$. The second is just $\mu$ times the integral we did above. So we get $\mu$.
+\end{proof}
+Also, by symmetry, the mode and median of a normal distribution are also both $\mu$.
+
+\begin{prop}
+ $\var(X) = \sigma^2$.
+\end{prop}
+
+\begin{proof}
+ We have $\var (X) = \E[X^2] - (\E[X])^2$. Substitute $Z = \frac{X - \mu}{\sigma}$. Then $\E[Z] = 0$ and $\var(X) = \sigma^2 \var(Z)$.
+
+ Then
+ \begin{align*}
+ \var(Z) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty z^2 e^{-z^2/2}\;\d z\\
+ &= \left[-\frac{1}{\sqrt{2\pi}}ze^{-z^2/2}\right]_{-\infty}^\infty + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-z^2/2}\;\d z\\
+ &= 0 + 1\\
+ &= 1
+ \end{align*}
+ So $\var X = \sigma^2$.
+\end{proof}
+
+\begin{eg}
+ UK adult male heights are normally distributed with mean 70'' and standard deviation 3''. In the Netherlands, these figures are 71'' and 3''.
+
+ What is $\P(Y > X)$, where $X$ and $Y$ are the heights of randomly chosen UK and Netherlands males, respectively?
+
+ We have $X\sim N(70, 3^2)$ and $Y\sim N(71, 3^2)$. Then (as we will show in later lectures) $Y - X \sim N(1, 18)$.
+ \[
+ \P(Y > X) = \P(Y - X > 0) = \P\left(\frac{Y - X - 1}{\sqrt{18}} > \frac{-1}{\sqrt{18}}\right) = 1 - \Phi(-1/\sqrt{18}),
+ \]
+ since $\frac{(Y - X) - 1}{\sqrt{18}}\sim N(0, 1)$, and the answer is approximately 0.5931.
+
+ Now suppose that in both countries, the Olympic male basketball teams are selected from the portion of males whose height is at least 4'' above the mean (which corresponds to the $9.1\%$ tallest males of each country). What is the probability that a randomly chosen Netherlands player is taller than a randomly chosen UK player?
+
+ For the second part, we have
+ \[
+ \P(Y > X\mid X \geq 74, Y \geq 75) = \frac{\int_{x = 74}^{75} \phi_X(x)\;\d x\int_{y = 75}^\infty \phi_Y(y)\;\d y + \int_{x = 75}^\infty \int_{y=x}^\infty \phi_Y(y)\phi_X(x)\;\d y\;\d x}{\int_{x=74}^\infty \phi_X(x)\;\d x \int_{y=75}^\infty \phi_Y(y)\;\d y},
+ \]
+ which is approximately 0.7558. So even though the Netherlands people are only slightly taller, if we consider the tallest bunch, the Netherlands people will be much taller on average.
+\end{eg}
+
+\subsection{Transformation of random variables}
+We will now look at what happens when we apply a function to random variables. We first look at the simple case where there is just one variable, and then move on to the general case where we have multiple variables and can mix them together.
+\subsubsection*{Single random variable}
+\begin{thm}
+ If $X$ is a continuous random variable with a pdf $f(x)$, and $h(x)$ is a continuous, strictly increasing function with $h^{-1}(x)$ differentiable, then $Y = h(X)$ is a random variable with pdf
+ \[
+ f_Y(y) = f_X(h^{-1}(y))\frac{\d }{\d y}h^{-1}(y).
+ \]
+\end{thm}
+\begin{proof}
+ \begin{align*}
+ F_Y(y) &= \P(Y \leq y)\\
+ &= \P(h(X) \leq y)\\
+ &= \P(X \leq h^{-1}(y))\\
+ &= F_X(h^{-1}(y))
+ \end{align*}
+ Take the derivative with respect to $y$ to obtain
+ \[
+ f_Y(y) = F'_Y(y) = f_X(h^{-1}(y))\frac{\d}{\d y}h^{-1}(y).\qedhere
+ \]
+\end{proof}
+It is often easier to redo the proof than to remember the result.
+
+\begin{eg}
+ Let $X\sim U[0, 1]$. Let $Y = -\log X$. Then
+ \begin{align*}
+ \P(Y \leq y) &= \P(-\log X \leq y)\\
+ &= \P(X \geq e^{-y})\\
+ &= 1 - e^{-y}.
+ \end{align*}
+ But this is the cumulative distribution function of $\mathcal{E}(1)$. So $Y$ is exponentially distributed with $\lambda = 1$.
+\end{eg}
+
+In general, we get the following result:
+\begin{thm}
+ Let $U\sim U[0, 1]$. For any strictly increasing distribution function $F$, the random variable $X = F^{-1}(U)$ has distribution function $F$.
+\end{thm}
+
+\begin{proof}
+ \[
+ \P(X \leq x) = \P(F^{-1}(U) \leq x) = \P(U \leq F(x)) = F(x).\qedhere
+ \]
+\end{proof}
+The condition ``strictly increasing'' is needed for the inverse to exist. If $F$ is not strictly increasing, we can instead define
+\[
+ F^{-1}(u) = \inf\{x: F(x)\geq u\}, \quad 0 < u < 1,
+\]
+and the same result holds.
+
+This can also be done for discrete random variables $\P(X = x_i) = p_i$ by letting
+\[
+ X = x_j\text{ if }\sum_{i = 0}^{j - 1}p_i \leq U < \sum_{i = 0}^j p_i,
+\]
+for $U\sim U[0, 1]$.
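This recipe, in both its continuous and discrete forms, is exactly how simple random-variate generators work. A minimal sketch (seed, sample size and the particular distributions are arbitrary illustrative choices):

```python
import math
import random

def sample_exponential(lam, rng):
    """Inverse transform for F(x) = 1 - exp(-lam x): F^{-1}(u) = -log(1 - u)/lam."""
    return -math.log(1 - rng.random()) / lam

def sample_discrete(values, probs, rng):
    """Discrete version: return x_j when U lands in the j-th cumulative bin."""
    u, acc = rng.random(), 0.0
    for x, p in zip(values, probs):
        acc += p
        if u < acc:
            return x
    return values[-1]  # guard against floating-point round-off in the cumulative sum

rng = random.Random(2)
n = 200_000
mean_exp = sum(sample_exponential(2.0, rng) for _ in range(n)) / n    # should be near 1/2
mean_disc = sum(sample_discrete([0, 1, 2], [0.5, 0.3, 0.2], rng)
                for _ in range(n)) / n                                # should be near 0.7
```

The sample means match the theoretical means $1/\lambda = 0.5$ and $\sum_i x_i p_i = 0.7$ to simulation accuracy.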
+
+\subsubsection*{Multiple random variables}
+Suppose $X_1, X_2, \cdots, X_n$ are random variables with joint pdf $f$. Let
+\begin{align*}
+ Y_1 &= r_1(X_1, \cdots, X_n)\\
+ Y_2 &= r_2(X_1, \cdots, X_n)\\
+ &\vdots \\
+ Y_n &= r_n(X_1, \cdots, X_n).
+\end{align*}
+For example, we might have $Y_1 = \frac{X_1}{X_1 + X_2}$ and $Y_2 = X_1 + X_2$.
+
+Let $R\subseteq \R^n$ such that $\P((X_1, \cdots, X_n)\in R) = 1$, i.e.\ $R$ is the set of all values $(X_i)$ can take.
+
+Suppose $S$ is the image of $R$ under the above transformation, and the map $R\to S$ is bijective. Then there exists an inverse function
+\begin{align*}
+ X_1 &= s_1(Y_1, \cdots, Y_n)\\
+ X_2 &= s_2(Y_1, \cdots, Y_n)\\
+ &\vdots\\
+ X_n &= s_n(Y_1, \cdots, Y_n).
+\end{align*}
+For example, if $X_1, X_2$ refers to the coordinates of a random point in Cartesian coordinates, $Y_1, Y_2$ might be the coordinates in polar coordinates.
+
+\begin{defi}[Jacobian determinant]
+ Suppose $\frac{\partial s_i}{\partial y_j}$ exists and is continuous at every point $(y_1, \cdots, y_n)\in S$. Then the \emph{Jacobian determinant} is
+ \[
+ J = \frac{\partial (s_1, \cdots, s_n)}{\partial (y_1, \cdots, y_n)} =
+ \det
+ \begin{pmatrix}
+ \frac{\partial s_1}{\partial y_1} & \cdots & \frac{\partial s_1}{\partial y_n}\\
+ \vdots & \ddots & \vdots\\
+ \frac{\partial s_n}{\partial y_1} & \cdots & \frac{\partial s_n}{\partial y_n}\\
+ \end{pmatrix}
+ \]
+\end{defi}
+Take $A\subseteq \R^n$ and $B = r(A)$. Then using results from IA Vector Calculus, we get
+\begin{align*}
+ \P((X_1, \cdots, X_n)\in A) &= \int_A f(x_1, \cdots, x_n) \;\d x_1\cdots \d x_n\\
+ &= \int_B f(s_1(y_1, \cdots, y_n), s_2, \cdots, s_n) |J|\;\d y_1\cdots \;\d y_n\\
+ &= \P((Y_1, \cdots, Y_n)\in B).
+\end{align*}
+So
+\begin{prop}
+ $(Y_1, \cdots, Y_n)$ has density
+ \[
+ g(y_1, \cdots, y_n) = f(s_1(y_1, \cdots, y_n), \cdots, s_n(y_1, \cdots, y_n))|J|
+ \]
+ if $(y_1, \cdots, y_n)\in S$, $0$ otherwise.
+\end{prop}
+
+\begin{eg}
+ Suppose $(X, Y)$ has density
+ \[
+ f(x, y) =
+ \begin{cases}
+ 4xy & 0 \leq x \leq 1, 0\leq y \leq 1\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ We see that $X$ and $Y$ are independent, with each having a density $f(x) = 2x$.
+
+ Define $U = X/Y$, $V = XY$. Then we have $X = \sqrt{UV}$ and $Y = \sqrt{V/U}$.
+
+ The Jacobian is
+ \[
+ \det
+ \begin{pmatrix}
+ \partial x/\partial u & \partial x/\partial v\\
+ \partial y/\partial u & \partial y/\partial v
+ \end{pmatrix}
+ =
+ \det
+ \begin{pmatrix}
+ \frac{1}{2}\sqrt{v/u} & \frac{1}{2}\sqrt{u/v}\\
+ -\frac{1}{2}\sqrt{v/u^3} & \frac{1}{2}\sqrt{1/uv}
+ \end{pmatrix}
+ = \frac{1}{2u}
+ \]
+ Alternatively, we can find this by considering
+ \[
+ \det
+ \begin{pmatrix}
+ \partial u/\partial x & \partial u/\partial y\\
+ \partial v/\partial x & \partial v/\partial y
+ \end{pmatrix} = 2u,
+ \]
+ and then inverting the matrix. So
+ \[
+ g(u, v) = 4\sqrt{uv}\sqrt{\frac{v}{u}}\frac{1}{2u} = \frac{2v}{u},
+ \]
+ if $(u, v)$ is in the image $S$, $0$ otherwise. So
+ \[
+ g(u, v) = \frac{2v}{u}I[(u, v)\in S].
+ \]
+ Since this is not separable, we know that $U$ and $V$ are not independent.
+\end{eg}
+
+In the linear case, life is easy. Suppose
+\[
+ \mathbf{Y} = \begin{pmatrix}
+ Y_1\\
+ \vdots\\
+ Y_n
+ \end{pmatrix} = A
+ \begin{pmatrix}
+ X_1\\
+ \vdots\\
+ X_n
+ \end{pmatrix} = A\mathbf{X}
+\]
+Then $\mathbf{X} = A^{-1}\mathbf{Y}$. Then $\frac{\partial x_i}{\partial y_j} = (A^{-1})_{ij}$. So $|J| = |\det(A^{-1})| = |\det A|^{-1}$. So
+\[
+ g(y_1, \cdots, y_n) = \frac{1}{|\det A|}f(A^{-1}\mathbf{y}).
+\]
+
+\begin{eg}
+ Suppose $X_1, X_2$ have joint pdf $f(x_1,x_2)$. Suppose we want to find the pdf of $Y = X_1 + X_2$. We let $Z = X_2$. Then $X_1 = Y - Z$ and $X_2 = Z$. Then
+ \[
+ \begin{pmatrix}
+ Y\\
+ Z
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1 & 1 \\
+ 0 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ X_1\\
+ X_2
+ \end{pmatrix}
+ = A\mathbf{X}
+ \]
+ Then $|J| = 1/|\det A| = 1$. Then
+ \[
+ g(y, z) = f(y - z, z)
+ \]
+ So
+ \[
+ g_Y(y) = \int_{-\infty}^\infty f(y - z, z)\;\d z = \int_{-\infty}^\infty f(z, y - z)\;\d z.
+ \]
+ If $X_1$ and $X_2$ are independent, $f(x_1, x_2) = f_1(x_1) f_2(x_2)$. Then
+ \[
+ g_Y(y) = \int_{-\infty}^\infty f_1(z) f_2(y - z)\;\d z.
+ \]
+\end{eg}
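The convolution formula can be checked numerically. The sketch below (the trapezium rule and step count are arbitrary choices) convolves two $U[0, 1]$ densities, which should give the triangular density on $[0, 2]$:

```python
def convolve_pdf(f1, f2, y, lo, hi, steps=20_000):
    """Approximate g(y) = integral of f1(z) f2(y - z) dz over [lo, hi], trapezium rule."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        z = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0   # endpoint weights
        total += w * f1(z) * f2(y - z)
    return total * h

def uniform01(x):
    """pdf of U[0, 1]."""
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

# Sum of two independent U[0,1] variables has the triangular density on [0, 2].
g_half = convolve_pdf(uniform01, uniform01, 0.5, 0.0, 1.0)   # exact value is 0.5
g_one = convolve_pdf(uniform01, uniform01, 1.0, 0.0, 1.0)    # exact value is 1.0
```

The triangular density is $g(y) = y$ for $0 \leq y \leq 1$ and $g(y) = 2 - y$ for $1 \leq y \leq 2$, and the numerical values agree with this.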
+
+\subsubsection*{Non-injective transformations}
+We previously discussed transformations of random variables by injective maps. What if the map is not injective? There is no simple formula in general, and we have to work out each case individually.
+
+\begin{eg}
+ Suppose $X$ has pdf $f$. What is the pdf of $Y=|X|$?
+
+ We use our definition. We have
+ \[
+ \P(|X| \in (a, b)) = \int_a^b f(x)\;\d x + \int_{-b}^{-a} f(x)\;\d x = \int_a^b (f(x) + f(-x))\;\d x.
+ \]
+ So
+ \[
+ f_Y(x) = f(x) + f(-x)
+ \]
+ for $x \geq 0$, which makes sense, since getting $|X| = x$ is equivalent to getting $X = x$ or $X = -x$.
+\end{eg}
+
+\begin{eg}
+ Suppose $X_1\sim \mathcal{E}(\lambda), X_2\sim \mathcal{E}(\mu)$ are independent random variables. Let $Y = \min(X_1, X_2)$. Then
+ \begin{align*}
+ \P(Y \geq t) &= \P(X_1 \geq t, X_2 \geq t)\\
+ &= \P(X_1\geq t)\P(X_2 \geq t)\\
+ &= e^{-\lambda t}e^{-\mu t}\\
+ &= e^{-(\lambda +\mu)t}.
+ \end{align*}
+ So $Y\sim \mathcal{E}(\lambda + \mu)$.
+\end{eg}
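This fact about minima of exponentials can be confirmed by simulation. A minimal sketch (the rates, seed and sample size are arbitrary choices):

```python
import math
import random

rng = random.Random(3)
lam, mu, n = 1.0, 2.0, 200_000

# min of independent exponentials with rates lam and mu should be E(lam + mu).
samples = [min(rng.expovariate(lam), rng.expovariate(mu)) for _ in range(n)]
mean_min = sum(samples) / n                        # should be near 1/(lam + mu) = 1/3
frac_below = sum(s <= 0.2 for s in samples) / n    # empirical cdf at 0.2
cdf_at_02 = 1 - math.exp(-(lam + mu) * 0.2)        # theoretical E(lam + mu) cdf at 0.2
```

Both the sample mean and the empirical cdf match the $\mathcal{E}(\lambda + \mu)$ predictions to simulation accuracy.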
+
+Given random variables, not only can we ask for the minimum of the variables, but also ask for, say, the second-smallest one. In general, we define the \emph{order statistics} as follows:
+\begin{defi}[Order statistics]
+ Suppose that $X_1, \cdots, X_n$ are some random variables, and $Y_1, \cdots, Y_n$ is $X_1, \cdots, X_n$ arranged in increasing order, i.e.\ $Y_1 \leq Y_2 \leq \cdots \leq Y_n$. These are the \emph{order statistics}.
+
+ We sometimes write $Y_i = X_{(i)}$.
+\end{defi}
+
+Assume the $X_i$ are iid with cdf $F$ and pdf $f$. Then the cdf of $Y_n$ is
+\[
+ \P(Y_n \leq y) = \P(X_1 \leq y, \cdots, X_n \leq y) = \P(X_1 \leq y)\cdots\P(X_n \leq y) = F(y)^n.
+\]
+So the pdf of $Y_n$ is
+\[
+ \frac{\d }{\d y}F(y)^n = nf(y)F(y)^{n - 1}.
+\]
+Also,
+\[
+ \P(Y_1 \geq y) = \P(X_1\geq y, \cdots, X_n \geq y) = (1 - F(y))^n.
+\]
+What about the joint distribution of $Y_1, Y_n$?
+\begin{align*}
+ G(y_1, y_n) &= \P(Y_1\leq y_1, Y_n \leq y_n)\\
+ &= \P(Y_n \leq y_n) - \P(Y_1 \geq y_1 , Y_n \leq y_n)\\
+ &= F(y_n)^n - (F(y_n) - F(y_1))^n.
+\end{align*}
+Then the pdf is
+\[
+ \frac{\partial^2}{\partial y_1 \partial y_n}G(y_1, y_n) = n(n - 1)(F(y_n) - F(y_1))^{n - 2}f(y_1)f(y_n).
+\]
+We can think about this result in terms of the multinomial distribution. By definition, the probability that $Y_1\in [y_1, y_1 + \delta)$ and $Y_n\in (y_n - \delta, y_n]$ is approximately $g(y_1, y_n)\delta^2$.
+
+Suppose that $\delta$ is sufficiently small that all other $n - 2$ $X_i$'s are very unlikely to fall into $[y_1, y_1 + \delta)$ and $(y_n - \delta, y_n]$. Then to find the probability required, we can treat the sample space as three bins. We want exactly one $X_i$ to fall into the first and last bins, and $n - 2$ $X_i$'s to fall into the middle one. There are $\binom{n}{1, n-2, 1} = n(n - 1)$ ways of doing so.
+
+The probability of each of the remaining $X_i$ falling into the middle bin is $F(y_n) - F(y_1)$, and the probabilities of falling into the first and last bins are approximately $f(y_1)\delta$ and $f(y_n)\delta$. Then the probability of $Y_1\in [y_1, y_1 + \delta)$ and $Y_n\in (y_n - \delta, y_n]$ is
+\[
+ n(n - 1)(F(y_n) - F(y_1))^{n - 2} f(y_1)f(y_n)\delta^2,
+\]
+and the result follows.
+
+We can also find the joint distribution of the order statistics, say $g$, since it is just given by
+\[
+ g(y_1, \cdots y_n) = n! f(y_1)\cdots f(y_n),
+\]
+if $y_1 \leq y_2\leq \cdots \leq y_n$, and $0$ otherwise. We have this formula because there are $n!$ permutations of $x_1, \cdots, x_n$ that produce a given order statistic $y_1, \cdots, y_n$, and the pdf of each such combination is $f(y_1)\cdots f(y_n)$.
+
+In the case of iid exponential variables, the spacings of the order statistics turn out to have a nice distribution.
+\begin{eg}
+ Let $X_1, \cdots, X_n$ be iid $\mathcal{E}(\lambda)$, and $Y_1, \cdots, Y_n$ be the order statistic. Let
+ \begin{align*}
+ Z_1 &= Y_1\\
+ Z_2 &= Y_2 - Y_1\\
+ &\vdots\\
+ Z_n &= Y_n - Y_{n - 1}.
+ \end{align*}
+ These are the distances between the occurrences. We can write this as $\mathbf{Z} = A\mathbf{Y}$, with
+ \[
+ A =
+ \begin{pmatrix}
+ 1 & 0 & 0 & \cdots& 0\\
+ -1 & 1 & 0 & \cdots & 0\\
+ \vdots & \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & 0 & \cdots & 1\\
+ \end{pmatrix}
+ \]
+ Then $\det (A) = 1$ and hence $|J| = 1$. Suppose that the pdf of $Z_1, \cdots, Z_n$ is, say $h$. Then
+ \begin{align*}
+ h(z_1, \cdots, z_n) &= g(y_1, \cdots, y_n)\cdot 1\\
+ &= n!f(y_1) \cdots f(y_n)\\
+ &= n!\lambda^n e^{-\lambda (y_1 + \cdots + y_n)}\\
+ &= n!\lambda^n e^{-\lambda (nz_1 + (n - 1)z_2 + \cdots + z_n)}\\
+ &= \prod_{i = 1}^n (\lambda i)e^{-(\lambda i)z_{n + 1 - i}}
+ \end{align*}
+ Since $h$ is expressed as a product of $n$ density functions, we have
+ \[
+ Z_i \sim \mathcal{E}((n + 1 - i)\lambda).
+ \]
+ with all $Z_i$ independent.
+\end{eg}
+
+
+\subsection{Moment generating functions}
+If $X$ is a continuous random variable, then the analogue of the probability generating function is the moment generating function:
+\begin{defi}[Moment generating function]
+ The \emph{moment generating function} of a random variable $X$ is
+ \[
+ m(\theta) = \E[e^{\theta X}].
+ \]
+ For those $\theta$ in which $m(\theta)$ is finite, we have
+ \[
+ m(\theta) = \int_{-\infty}^\infty e^{\theta x}f(x)\;\d x.
+ \]
+\end{defi}
+We can prove results similar to those we had for probability generating functions.
+
+We will assume the following without proof:
+\begin{thm}
+ The mgf determines the distribution of $X$ provided $m(\theta)$ is finite for all $\theta$ in some interval containing the origin.
+\end{thm}
+
+\begin{defi}[Moment]
+ The $r$th \emph{moment} of $X$ is $\E[X^r]$.
+\end{defi}
+
+\begin{thm}
+ The $r$th moment of $X$ is the coefficient of $\frac{\theta^r}{r!}$ in the power series expansion of $m(\theta)$, and is
+ \[
+ \E[X^r] = \left.\frac{\d ^r}{\d \theta^r}m(\theta)\right|_{\theta = 0} = m^{(r)}(0).
+ \]
+\end{thm}
+
+\begin{proof}
+ We have
+ \[
+ e^{\theta X} = 1 + \theta X + \frac{\theta^2}{2!}X^2 + \cdots.
+ \]
+ So
+ \[
+ m(\theta) = \E[e^{\theta X}] = 1 + \theta \E[X] + \frac{\theta^2}{2!}\E [X^2] + \cdots\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Let $X\sim \mathcal{E}(\lambda)$. Then its mgf is
+ \[
+ \E[e^{\theta X}] = \int_0^\infty e^{\theta x}\lambda e^{-\lambda x}\;\d x = \lambda\int_0^\infty e^{-(\lambda - \theta )x}\;\d x = \frac{\lambda}{\lambda - \theta},
+ \]
+ for $\theta < \lambda$. So
+ \[
+ \E[X] = m'(0) = \left.\frac{\lambda}{(\lambda - \theta)^2}\right|_{\theta = 0} = \frac{1}{\lambda}.
+ \]
+ Also,
+ \[
+ \E[X^2] = m''(0) = \left.\frac{2\lambda}{(\lambda - \theta)^3}\right|_{\theta = 0} = \frac{2}{\lambda^2}.
+ \]
+ So
+ \[
+ \var(X) = \E[X^2] - \E[X]^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}.
+ \]
+\end{eg}
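The moment computations above can be sanity-checked by differentiating the closed-form mgf $m(\theta) = \lambda/(\lambda - \theta)$ numerically; the finite-difference step $h$ and the value of $\lambda$ below are arbitrary choices:

```python
def mgf_exp(theta, lam=2.0):
    """Closed-form mgf of E(lam), valid for theta < lam."""
    return lam / (lam - theta)

h = 1e-4
first_moment = (mgf_exp(h) - mgf_exp(-h)) / (2 * h)                   # m'(0)  = 1/lam
second_moment = (mgf_exp(h) - 2 * mgf_exp(0) + mgf_exp(-h)) / h ** 2  # m''(0) = 2/lam^2
variance = second_moment - first_moment ** 2                          # 1/lam^2
```

With $\lambda = 2$ these come out as $0.5$, $0.5$ and $0.25$, matching $\E[X] = 1/\lambda$, $\E[X^2] = 2/\lambda^2$ and $\var(X) = 1/\lambda^2$.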
+
+\begin{thm}
+ If $X$ and $Y$ are independent random variables with moment generating functions $m_X(\theta), m_Y(\theta)$, then $X + Y$ has mgf $m_{X + Y}(\theta) = m_X(\theta)m_Y(\theta)$.
+\end{thm}
+
+\begin{proof}
+ \[
+ \E[e^{\theta(X + Y)}] = \E[e^{\theta X}e^{\theta Y}] = \E[e^{\theta X}]\E[e^{\theta Y}] = m_X(\theta)m_Y(\theta).\qedhere
+ \]
+\end{proof}
+\section{More distributions}
+\subsection{Cauchy distribution}
+
+\begin{defi}[Cauchy distribution]
+ The \emph{Cauchy distribution} has pdf
+ \[
+ f(x) = \frac{1}{\pi(1 + x^2)}
+ \]
+ for $-\infty < x < \infty$.
+\end{defi}
+We check that this is a genuine distribution:
+\[
+ \int_{-\infty}^\infty \frac{1}{\pi(1 + x^2)}\;\d x = \int_{-\pi/2}^{\pi/2}\frac{1}{\pi}\;\d \theta = 1
+\]
+with the substitution $x = \tan \theta$. The distribution is a symmetric bell-shaped curve, but with much heavier tails than the normal.
+
+\begin{prop}
+ The mean of the Cauchy distribution is undefined, while $\E[X^2] = \infty$.
+\end{prop}
+
+\begin{proof}
+ \[
+ \E[X] = \int_0^\infty \frac{x}{\pi(1 + x^2)}\;\d x + \int_{-\infty}^0 \frac{x}{\pi(1 + x^2)}\;\d x = \infty - \infty
+ \]
+ which is undefined, but $\E[X^2] = \infty + \infty = \infty$.
+\end{proof}
+
+Suppose $X, Y$ are independent Cauchy random variables. Let $Z = X + Y$. Then
+\begin{align*}
+ f(z) &= \int_{-\infty}^\infty f_X(x)f_Y(z - x)\;\d x\\
+ &= \int_{-\infty}^\infty \frac{1}{\pi^2} \frac{1}{(1 + x^2)(1 + (z - x)^2)}\;\d x\\
+ &= \frac{1/2}{\pi(1 + (z/2)^2)}
+\end{align*}
+for all $-\infty < z < \infty$ (the integral can be evaluated using a tedious partial fraction expansion).
+
+So $\frac{1}{2}Z$ has a Cauchy distribution. In other words, the arithmetic mean of two independent Cauchy random variables is again a Cauchy random variable.
+
+By induction, we can show that $\frac{1}{n}(X_1 + \cdots + X_n)$ also follows a Cauchy distribution. This is a ``counter-example'' to things like the weak law of large numbers and the central limit theorem. Of course, this is because those theorems require the random variable to have a mean, which the Cauchy distribution lacks.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item If $\Theta\sim U[-\frac{\pi}{2}, \frac{\pi}{2}]$, then $X = \tan \Theta$ has a Cauchy distribution. For example, if we fire a bullet at a wall 1 metre away at a random angle $\theta$, the vertical displacement follows a Cauchy distribution.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \draw [dashed] (0, 0) -- (3, 0) node [pos = 0.5, below] {$1$};
+ \draw [dashed] (0, 0) -- (3, 2);
+ \draw [->] (0, 0) -- (1.5, 1);
+ \draw (0.5, 0) arc (0:33.69:0.5);
+ \node at (0.7, 0.2) {$\theta$};
+ \draw [->] (3.2, 0) -- (3.2, 2) node [pos = 0.5, right] {$X = \tan \theta$};
+ \draw [->] (3.2, 2) -- (3.2, 0);
+ \draw (3, -2.5) -- (3, 2.5);
+ \end{tikzpicture}
+ \end{center}
+ \item If $X, Y\sim N(0, 1)$ are independent, then $X/Y$ has a Cauchy distribution.
+ \end{enumerate}
+\end{eg}
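The $\tan \Theta$ construction gives an easy way to sample the Cauchy distribution and check its cdf $F(x) = \frac{1}{2} + \frac{1}{\pi}\arctan x$; the seed and sample size below are arbitrary choices:

```python
import math
import random

rng = random.Random(4)
n = 200_000

# X = tan(Theta) with Theta uniform on (-pi/2, pi/2) is standard Cauchy.
samples = [math.tan(rng.uniform(-math.pi / 2, math.pi / 2)) for _ in range(n)]

frac_below_0 = sum(s <= 0 for s in samples) / n   # F(0) = 1/2
frac_below_1 = sum(s <= 1 for s in samples) / n   # F(1) = 1/2 + arctan(1)/pi = 3/4
```

The empirical cdf matches $F$ even though the sample mean never settles down, since the mean does not exist.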
+
+\subsection{Gamma distribution}
+Suppose $X_1, \cdots, X_n$ are iid $\mathcal{E}(\lambda)$. Let $S_n = X_1 + \cdots + X_n$. Then the mgf of $S_n$ is
+\[
+ \E\left[e^{\theta(X_1 + \ldots + X_n)}\right] = \E\left[e^{\theta X_1}\right]\cdots \E\left[e^{\theta X_n}\right] = \left(\E\left[e^{\theta X}\right]\right)^n = \left(\frac{\lambda}{\lambda - \theta}\right)^n.
+\]
+We call this a gamma distribution.
+
+We claim that this has a distribution given by the following formula:
+\begin{defi}[Gamma distribution]
+ The \emph{gamma distribution} $\Gamma(n, \lambda)$ has pdf
+ \[
+ f(x) = \frac{\lambda^n x^{n - 1}e^{-\lambda x}}{(n - 1)!}
+ \]
+ for $x \geq 0$. We can show that this is a distribution by showing that it integrates to $1$.
+\end{defi}
+We now show that this is indeed the sum of $n$ iid $\mathcal{E}(\lambda)$:
+\begin{align*}
+ \E[e^{\theta X}] &= \int_0^\infty e^{\theta x} \frac{\lambda^n x^{n - 1}e^{-\lambda x}}{(n - 1)!}\;\d x\\
+ &= \left(\frac{\lambda}{\lambda - \theta}\right)^n \int_0^\infty \frac{(\lambda - \theta)^n x^{n - 1}e^{-(\lambda - \theta)x}}{(n - 1)!}\;\d x
+\end{align*}
+The integral is just integrating over $\Gamma(n, \lambda - \theta)$, which gives one. So we have
+\[
+ \E[e^{\theta X}]= \left(\frac{\lambda}{\lambda - \theta}\right)^n.
+\]
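Since $\Gamma(n, \lambda)$ arises as a sum of $n$ iid exponentials, its mean $n/\lambda$ and variance $n/\lambda^2$ are immediate, and easy to check by simulation (the rate, shape, seed and trial count below are arbitrary choices):

```python
import random

rng = random.Random(5)
lam, k, trials = 2.0, 5, 100_000

# Sum of k iid E(lam) variables should be Gamma(k, lam): mean k/lam, variance k/lam^2.
sums = [sum(rng.expovariate(lam) for _ in range(k)) for _ in range(trials)]
mean_sum = sum(sums) / trials                                # near k/lam = 2.5
var_sum = sum((s - mean_sum) ** 2 for s in sums) / trials    # near k/lam^2 = 1.25
```

Both moments come out as predicted, to simulation accuracy.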
+
+\subsection{Beta distribution*}
+Suppose $X_1, \cdots, X_n$ are iid $U[0, 1]$. Let $Y_1 \leq Y_2 \leq \cdots \leq Y_n$ be the order statistics. Then the pdf of $Y_i$ is
+\[
+ f(y) = \frac{n!}{(i - 1)!(n - i)!}y^{i - 1}(1 - y)^{n - i}.
+\]
+Note that the leading term is the multinomial coefficient $\binom{n}{i - 1, 1, n - i}$. The formula is obtained using the same technique as for finding the pdf of order statistics.
+
+This is the \emph{beta distribution}: $Y_i \sim \beta(i, n - i + 1)$. In general
+\begin{defi}[Beta distribution]
+ The \emph{beta distribution} $\beta(a, b)$ has pdf
+ \[
+ f(x; a, b) = \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)}x^{a - 1}(1 - x)^{b - 1}
+ \]
+ for $0 \leq x \leq 1$.
+
+ This has mean $a/(a + b)$.
+\end{defi}
+Its moment generating function is
+\[
+ m(\theta) = 1 + \sum_{k = 1}^\infty \left(\prod_{r = 0}^{k - 1}\frac{a + r}{a + b + r}\right)\frac{\theta^k}{k!},
+\]
+which is horrendous!
+
+\subsection{More on the normal distribution}
+\begin{prop}
+ The moment generating function of $N(\mu, \sigma^2)$ is
+ \[
+ \E[e^{\theta X}] = \exp\left(\theta\mu + \frac{1}{2}\theta^2 \sigma^2\right).
+ \]
+\end{prop}
+
+\begin{proof}
+ \[
+ \E[e^{\theta X}] = \int_{-\infty}^\infty e^{\theta x} \frac{1}{\sqrt{2\pi} \sigma} e^{-\frac{1}{2} \frac{(x - \mu)^2}{\sigma^2}}\;\d x.
+\]
+Substitute $z = \frac{x - \mu}{\sigma}$. Then
+\begin{align*}
+ \E[e^{\theta X}] &= \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}e^{\theta(\mu + \sigma z)} e^{-\frac{1}{2}z^2}\;\d z\\
+ &= e^{\theta\mu + \frac{1}{2}\theta^2\sigma^2}\int_{-\infty}^\infty \underbrace{\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(z - \theta \sigma)^2}}_{\text{pdf of }N(\sigma\theta, 1)}\;\d z\\
+ &= e^{\theta\mu + \frac{1}{2}\theta^2\sigma^2}.\qedhere
+\end{align*}
+\end{proof}
+
+\begin{thm}
+ Suppose $X, Y$ are independent random variables with $X\sim N(\mu_1, \sigma_1^2)$, and $Y\sim N(\mu_2, \sigma_2^2)$. Then
+ \begin{enumerate}
+ \item $X + Y \sim N(\mu_1 + \mu_2 , \sigma_1^2 + \sigma_2^2)$.
+ \item $aX \sim N(a\mu_1, a^2 \sigma_1^2)$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item
+ \begin{align*}
+ \E[e^{\theta(X + Y)}] &= \E[e^{\theta X}] \cdot \E[e^{\theta Y}]\\
+ &= e^{\mu_1 \theta + \frac{1}{2}\sigma_1^2\theta^2} \cdot e^{\mu_2 \theta + \frac{1}{2}\sigma_2^2 \theta^2}\\
+ &= e^{(\mu_1 + \mu_2)\theta + \frac{1}{2}(\sigma_1^2 + \sigma_2^2)\theta^2}
+ \end{align*}
+ which is the mgf of $N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$.
+ \item
+ \begin{align*}
+ \E[e^{\theta (aX)}] &= \E[e^{(\theta a)X}]\\
+ &= e^{\mu_1(a\theta) + \frac{1}{2}\sigma_1^2 (a \theta)^2}\\
+ &= e^{(a\mu_1)\theta + \frac{1}{2}(a^2\sigma_1^2)\theta^2}\qedhere
+ \end{align*}%\qedhere
+ \end{enumerate}
+\end{proof}
+
+Finally, suppose $X\sim N(0, 1)$. Write $\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ for its pdf. It would be very difficult to find a closed form for its cumulative distribution function, but we can find an upper bound for it:
+\begin{align*}
+ \P(X \geq x) &= \int_x^\infty \phi(t)\;\d t\\
+ &\leq \int_x^\infty \left(1 + \frac{1}{t^2}\right)\phi(t)\;\d t \\
+ &= \frac{1}{x}\phi(x)
+\end{align*}
+To see that the last step works, differentiate $-\frac{1}{t}\phi(t)$ and check that you get $\left(1 + \frac{1}{t^2}\right)\phi(t)$.
+So
+\[
+ \P(X \geq x) \leq \frac{1}{x}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}.
+\]
+Then
+\[
+ \log \P(X \geq x)\sim -\frac{1}{2}x^2 \quad \text{as } x \to \infty.
+\]
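The bound $\P(X \geq x) \leq \frac{1}{x}\phi(x)$ can be verified numerically, since $\Phi$ is expressible via the complementary error function, $\P(X \geq x) = \frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$. A minimal check (the test points are arbitrary):

```python
import math

def normal_tail(x):
    """P(X >= x) for X ~ N(0, 1), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def tail_bound(x):
    """The upper bound phi(x)/x derived above (valid for x > 0)."""
    return math.exp(-x * x / 2) / (x * math.sqrt(2 * math.pi))

bounds_hold = all(normal_tail(x) <= tail_bound(x) for x in (0.5, 1, 2, 3, 5))
ratio_at_5 = normal_tail(5) / tail_bound(5)   # close to 1: the bound is tight for large x
```

The ratio approaching $1$ for large $x$ reflects the asymptotic $\log \P(X \geq x) \sim -\frac{1}{2}x^2$.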
+
+\subsection{Multivariate normal}
+Let $X_1, \cdots, X_n$ be iid $N(0, 1)$. Then their joint density is
+\begin{align*}
+ g(x_1, \cdots, x_n) &= \prod_{i = 1}^n \phi(x_i) \\
+ &= \prod_{1}^n\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x_i^2}\\
+ &= \frac{1}{(2\pi)^{n/2}} e^{-\frac{1}{2}\sum_1^n x_i^2}\\
+ &= \frac{1}{(2\pi)^{n/2}}e^{-\frac{1}{2}\mathbf{x}^T \mathbf{x}},
+\end{align*}
+where $\mathbf{x} = (x_1, \cdots, x_n)^T$.
+
+This result works if $X_1, \cdots, X_n$ are iid $N(0, 1)$. Suppose we are interested in
+\[
+ \mathbf{Z} = \boldsymbol\mu + A\mathbf{X},
+\]
+where $A$ is an invertible $n\times n$ matrix. We can think of this as $n$ measurements $\mathbf{Z}$ that are affected by underlying standard-normal factors $\mathbf{X}$. Then
+\[
+ \mathbf{X} = A^{-1}(\mathbf{Z} - \boldsymbol\mu)
+\]
+and
+\[
+ |J| = |\det (A^{-1})| = \frac{1}{|\det A|}
+\]
+So
+\begin{align*}
+ f(z_1, \cdots, z_n) &= \frac{1}{(2\pi)^{n/2}}\frac{1}{|\det A|}\exp\left[-\frac{1}{2}\big((A^{-1}(\mathbf{z} - \boldsymbol\mu))^T(A^{-1}(\mathbf{z} - \boldsymbol\mu))\big)\right]\\
+ &= \frac{1}{(2\pi)^{n/2}|\det A|}\exp\left[-\frac{1}{2}(\mathbf{z} - \boldsymbol\mu)^T \Sigma^{-1}(\mathbf{z} - \boldsymbol\mu)\right]\\
+ &= \frac{1}{(2\pi)^{n/2}\sqrt{\det \Sigma}}\exp\left[-\frac{1}{2}(\mathbf{z} - \boldsymbol\mu)^T \Sigma^{-1}(\mathbf{z} - \boldsymbol\mu)\right].
+\end{align*}
+where $\Sigma = AA^T$, so that $\sqrt{\det \Sigma} = |\det A|$, and $\Sigma^{-1} = (A^{-1})^TA^{-1}$. We say
+\[
+ \mathbf{Z} =
+ \begin{pmatrix}
+ Z_1\\
+ \vdots\\
+ Z_n
+ \end{pmatrix}
+ \sim MVN(\boldsymbol\mu, \Sigma)\text{ or }N(\boldsymbol\mu, \Sigma).
+\]
+This is the multivariate normal.
+
+What is this matrix $\Sigma$? Recall that $\cov(Z_i, Z_j) = \E[(Z_i - \mu_i)(Z_j - \mu_j)]$. It turns out this covariance is the $i, j$th entry of $\Sigma$, since
+\begin{align*}
+ \E[(\mathbf{Z} - \boldsymbol\mu)(\mathbf{Z} - \boldsymbol\mu)^T] &= \E[A\mathbf{X}(A\mathbf{X})^T]\\
+ &= \E[A\mathbf{X}\mathbf{X}^TA^T] = A\E[\mathbf{X}\mathbf{X}^T]A^T\\
+ &= AIA^T\\
+ &= AA^T \\
+ &= \Sigma
+\end{align*}
+So we also call $\Sigma$ the covariance matrix.
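The identity $\Sigma = AA^T$ can also be verified empirically. Here is a small Monte Carlo sketch (purely illustrative; the matrix $A$, mean $\boldsymbol\mu$ and sample size are arbitrary choices) that samples $\mathbf{Z} = \boldsymbol\mu + A\mathbf{X}$ and checks that the empirical covariance is close to $AA^T$:

```python
import random

random.seed(0)
# A hypothetical 2x2 example: Z = mu + A X with X ~ N(0, I).
mu = [1.0, -2.0]
A = [[2.0, 0.0], [1.0, 1.0]]
# Sigma = A A^T should be [[4, 2], [2, 2]].

n = 200_000
samples = []
for _ in range(n):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    z = [mu[i] + A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)]
    samples.append(z)

mean = [sum(z[i] for z in samples) / n for i in range(2)]
cov = [[sum((z[i] - mean[i]) * (z[j] - mean[j]) for z in samples) / n
        for j in range(2)] for i in range(2)]
# mean should be close to mu, and cov close to [[4, 2], [2, 2]]
```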
+
+In the special case where $n = 1$, this is a normal distribution and $\Sigma = \sigma^2$.
+
+Now suppose $Z_1, \cdots, Z_n$ have covariances $0$. Then $\Sigma = \diag(\sigma_1^2, \cdots, \sigma_n^2)$. Then
+\[
+ f(z_1, \cdots, z_n) = \prod_1^n \frac{1}{\sqrt{2\pi}\sigma_i} e^{-\frac{1}{2\sigma_i^2}(z_i - \mu_i)^2}.
+\]
+So $Z_1, \cdots, Z_n$ are independent, with $Z_i\sim N(\mu_i, \sigma_i^2)$.
+
+Here we proved that if the covariances are $0$, then the variables are independent. However, this holds only because the $Z_i$ are jointly (multivariate) normal; zero covariance does not imply independence for arbitrary distributions.
+
+For these random variables that involve vectors, we will need to modify our definition of moment generating functions. We define it to be
+\[
+ m(\boldsymbol\theta) = \E[e^{\boldsymbol\theta^T \mathbf{X}}] = \E[e^{\theta_1 X_1 + \cdots + \theta_n X_n}].
+\]
+\subsubsection*{Bivariate normal}
+This is the special case of the multivariate normal when $n = 2$. Since there aren't too many terms, we can actually write them out.
+
+The \emph{bivariate normal} has
+\[
+ \Sigma=
+ \begin{pmatrix}
+ \sigma_1^2 & \rho\sigma_1\sigma_2\\
+ \rho\sigma_1\sigma_2 & \sigma_2^2
+ \end{pmatrix}.
+\]
+Then
+\[
+ \mathrm{corr}(X_1, X_2) = \frac{\cov(X_1, X_2)}{\sqrt{\var(X_1)\var(X_2)}} = \frac{\rho\sigma_1\sigma_2}{\sigma_1\sigma_2} = \rho.
+\]
+And
+\[
+ \Sigma^{-1} = \frac{1}{1 - \rho^2}
+ \begin{pmatrix}
+ \sigma_1^{-2} & -\rho\sigma_1^{-1}\sigma_2^{-1}\\
+ -\rho\sigma_1^{-1}\sigma_2^{-1} & \sigma_2^{-2}
+ \end{pmatrix}
+\]
+The joint pdf of the bivariate normal with zero mean is
+\[
+ f(x_1, x_2) = \frac{1}{2\pi \sigma_1 \sigma_2 \sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1 - \rho^2)}\left(\frac{x_1^2}{\sigma_1^2} - \frac{2\rho x_1x_2}{\sigma_1\sigma_2} + \frac{x_2^2}{\sigma_2^2}\right)\right).
+\]
+If the mean is non-zero, replace $x_i$ with $x_i - \mu_i$.
+
+The joint mgf of the bivariate normal is
+\[
+ m(\theta_1, \theta_2) = e^{\theta_1\mu_1 + \theta_2 \mu_2 + \frac{1}{2}(\theta_1^2\sigma_1^2 + 2\theta_1\theta_2\rho\sigma_1\sigma_2 + \theta_2^2\sigma_2^2)}.
+\]
+Nice and elegant.
+
+\section{Central limit theorem}
+Suppose $X_1, \cdots, X_n$ are iid random variables with mean $\mu$ and variance $\sigma^2$. Let $S_n = X_1 + \cdots + X_n$. Then we have previously shown that
+\[
+ \var(S_n/\sqrt{n}) = \var\left(\frac{S_n - n \mu}{\sqrt{n}}\right) = \sigma^2.
+\]
+\begin{thm}[Central limit theorem]
+ Let $X_1, X_2, \cdots$ be iid random variables with $\E[X_i] = \mu$, $\var(X_i) = \sigma^2 < \infty$. Define
+ \[
+ S_n = X_1 + \cdots + X_n.
+ \]
+ Then for all finite intervals $(a, b)$,
+ \[
+ \lim_{n \to \infty}\P\left(a \leq \frac{S_{n} - n\mu}{\sigma\sqrt{n}} \leq b\right) = \int_a^b \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}t^2}\;\d t.
+ \]
+ Note that the integrand is the pdf of the standard normal. We say
+ \[
+ \frac{S_n - n\mu}{\sigma\sqrt{n}} \to_{D} N(0, 1).
+ \]
+\end{thm}
+
+To show this, we will use the continuity theorem without proof:
+\begin{thm}[Continuity theorem]
+ If the random variables $X_1, X_2, \cdots$ have mgf's $m_1(\theta), m_2(\theta), \cdots$ and $m_n(\theta) \to m(\theta)$ as $n\to\infty$ for all $\theta$, then $X_n \to_D $ the random variable with mgf $m(\theta)$.
+\end{thm}
+
+We now provide a sketch-proof of the central limit theorem:
+\begin{proof}
+ wlog, assume $\mu = 0, \sigma^2 = 1$ (otherwise replace $X_i$ with $\frac{X_i - \mu}{\sigma}$).
+
+ Then
+ \begin{align*}
+ m_{X_i}(\theta) &= \E[e^{\theta X_i}] = 1 + \theta \E[X_i] + \frac{\theta^2}{2!} \E[X_i^2] + \cdots\\
+ &=1 + \frac{1}{2}\theta^2 + \frac{1}{3!}\theta^3 \E[X_i^3] + \cdots
+ \end{align*}
+ Now consider $S_n/\sqrt{n}$. Then
+ \begin{align*}
+ \E[e^{\theta S_n/\sqrt{n}}] &= \E[e^{\theta(X_1 + \ldots + X_n)/\sqrt{n}}]\\
+ &= \E[e^{\theta X_1/\sqrt{n}}] \cdots \E[e^{\theta X_n/\sqrt{n}}]\\
+ &= \left(\E[e^{\theta X_1/\sqrt{n}}]\right)^n\\
+ &= \left(1 + \frac{1}{2}\theta^2 \frac{1}{n} + \frac{1}{3!}\theta^3 \E[X^3]\frac{1}{n^{3/2}} + \cdots\right)^{n}\\
+ &\to e^{\frac{1}{2}\theta^2}
+ \end{align*}
+ as $n \to \infty$ since $(1 + a/n)^n \to e^a$. And this is the mgf of the standard normal. So the result follows from the continuity theorem.
+\end{proof}
+Note that this is not a very formal proof, since we had to assume $\E[X^3]$ (and higher moments) are finite, and the moment generating function need not even exist. But it works for most of the ``nice'' distributions we will meet.
+
+The proper proof uses the characteristic function
+\[
+ \chi_X(\theta) = \E[e^{i\theta X}].
+\]
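Although we do not prove the theorem properly here, it is easy to see it in action. The following seeded simulation (an illustration, not from the notes; $X_i$ uniform on $[0, 1]$, so $\mu = 1/2$ and $\sigma^2 = 1/12$) checks that the standardized sum has approximately standard normal probabilities:

```python
import math
import random

random.seed(42)

def Phi(x):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# X_i ~ Uniform(0, 1): mu = 1/2, sigma^2 = 1/12.
n, trials = 50, 20_000
mu, sigma = 0.5, math.sqrt(1 / 12)
count = 0
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    z = (s - n * mu) / (sigma * math.sqrt(n))
    if z <= 1.0:
        count += 1
frac = count / trials  # should be close to Phi(1), about 0.841
```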
+An important application is to use the normal distribution to approximate a large binomial.
+
+Let $X_i \sim B(1, p)$. Then $S_n \sim B(n, p)$. So $\E[S_n] = np$ and $\var (S_n) = np(1 - p)$. So
+\[
+ \frac{S_n - np}{\sqrt{np(1 - p)}} \to_D N(0, 1).
+\]
+
+\begin{eg}
+ Suppose two planes fly a route. Each of $n$ passengers chooses a plane at random. The number of people choosing plane 1 is $S\sim B(n, \frac{1}{2})$. Suppose each has $s$ seats. What is
+ \[
+ F(s) = \P(S > s),
+ \]
+ i.e.\ the probability that plane 1 is over-booked? We have
+ \[
+ F(s) = \P(S > s) = \P\left(\frac{S - n/2}{\sqrt{n\cdot \frac{1}{2}\cdot \frac{1}{2}}} > \frac{s - n/2}{\sqrt{n}/2}\right).
+ \]
+ Since
+ \[
+ \frac{S - n/2}{\sqrt{n}/2}\approx N(0, 1),
+ \]
+ we have
+ \[
+ F(s) \approx 1 - \Phi\left(\frac{s - n/2}{\sqrt{n}/2}\right).
+ \]
+ For example, if $n = 1000$ and $s = 537$, then $\frac{s - n/2}{\sqrt{n}/2}\approx 2.34$, $\Phi(2.34)\approx 0.99$, and $F(s) \approx 0.01$. So with only $74$ seats as buffer between the two planes, the probability of overbooking is just $1/100$.
+\end{eg}
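The arithmetic in this example can be reproduced directly. Here is a minimal sketch (the helper `Phi` is the standard normal cdf written via the error function):

```python
import math

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n, s = 1000, 537
z = (s - n / 2) / (math.sqrt(n) / 2)  # standardized seat threshold
overbook = 1 - Phi(z)                 # P(S > s) under the normal approximation
# z is about 2.34 and overbook is about 0.01
```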
+
+\begin{eg}
+ An unknown proportion $p$ of the electorate will vote Labour. We want to find $p$ with an error not exceeding $0.005$. How large should the sample be?
+
+ We estimate by
+ \[
+ p' = \frac{S_n}{n},
+ \]
+ where $X_i\sim B(1, p)$. Then
+ \begin{align*}
+ \P(|p' - p| \leq 0.005) &= \P(|S_n - np| \leq 0.005n)\\
+ &= \P\left(\underbrace{\frac{|S_n - np|}{\sqrt{np(1 - p)}}}_{\approx N(0, 1)} \leq \frac{0.005n}{\sqrt{np(1 - p)}}\right)
+ \end{align*}
+ We want $|p' - p| \leq 0.005$ with probability $\geq 0.95$. Then we want
+ \[
+ \frac{0.005n}{\sqrt{np(1 - p)}} \geq \Phi^{-1}(0.975) = 1.96.
+ \]
+ (We use $0.975$ instead of $0.95$ since we are doing a two-tailed test.) Since the maximum possible value of $p(1 - p)$ is $1/4$, it suffices to have
+ \[
+ n \geq 38416.
+ \]
+ In practice, we don't have that many samples. Instead, we relax the requirement to
+ \[
+ \P(|p' - p| \leq 0.03) \geq 0.95.
+ \]
+ This just requires $n \geq 1068$.
+\end{eg}
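The sample-size calculation above generalizes: with a $z$-value of $1.96$ and the worst case $p = 1/2$, error $\varepsilon$ requires $n \geq (z/(2\varepsilon))^2$. A short sketch of this computation (the helper name is my own):

```python
import math

def sample_size(err, z=1.96):
    """Smallest n with z * sqrt(p(1-p)/n) <= err in the worst case p = 1/2."""
    return math.ceil((z / (2 * err)) ** 2)

n_strict = sample_size(0.005)  # the 38416 from the text
n_relaxed = sample_size(0.03)  # the 1068 from the text
```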
+
+\begin{eg}[Estimating $\pi$ with Buffon's needle]
+ Recall that if we randomly toss a needle of length $\ell$ (with $\ell \leq L$) to a floor marked with parallel lines a distance $L$ apart, the probability that the needle hits a line is $p = \frac{2\ell}{\pi L}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, -1) -- (3, -1);
+ \draw (-3, 1) -- (3, 1);
+ \draw (-2, -0.1) -- (-0.5, 0.7) node [anchor = south east, pos = 0.5] {$\ell$};
+ \draw [->] (-2, -0.1) -- (-2, 1) node [left, pos = 0.5] {$X$};
+ \draw [->] (-2, 1) -- (-2, -0.1);
+ \draw [dashed] (-2.5, -0.1) -- (-0.5, -0.1);
+ \draw (-1.5, -0.1) arc (0:29:0.5);
+ \node at (-1.2, 0.1) {$\theta$};
+ \draw [->] (2, -1) -- (2, 1) node [right, pos = 0.5] {$L$};
+ \draw [->] (2, 1) -- (2, -1);
+ \end{tikzpicture}
+ \end{center}
+ Suppose we toss the pin $n$ times, and it hits the line $N$ times. Then
+ \[
+ N\approx N(np, np(1 - p))
+ \]
+ by the central limit theorem. Write $p' = N/n$ for the proportion of hits actually observed. Then
+ \begin{align*}
+ \hat{\pi} &= \frac{2\ell}{(N/n) L} \\
+ &= \frac{\pi 2\ell/(\pi L)}{p'}\\
+ &= \frac{\pi p}{p + (p' - p)}\\
+ &= \pi \left(1 - \frac{p' - p}{p} + \cdots \right)
+ \end{align*}
+ Hence
+ \[
+ \hat\pi - \pi \approx \pi\,\frac{p - p'}{p}.
+ \]
+ We know
+ \[
+ p' \sim N\left(p, \frac{p(1 - p)}{n}\right).
+ \]
+ So we can find
+ \[
+ \hat \pi - \pi \sim N\left(0, \frac{\pi^2 p(1 - p)}{np^2}\right) = N\left(0, \frac{\pi^2(1 - p)}{np}\right)
+ \]
+ We want a small variance, and that occurs when $p$ is as large as possible. Since $p = 2\ell/(\pi L)$ and we need $\ell \leq L$, this is maximized by taking $\ell = L$. In this case,
+ \[
+ p = \frac{2}{\pi},
+ \]
+ and
+ \[
+ \hat \pi - \pi \approx N\left(0, \frac{(\pi - 2)\pi^2}{2n}\right).
+ \]
+ If we want to estimate $\pi$ to 3 decimal places, then we need
+ \[
+ \P(|\hat \pi - \pi| \leq 0.001) \geq 0.95.
+ \]
+ This is true if and only if
+ \[
+ 0.001\sqrt{\frac{2n}{(\pi - 2)(\pi^2)}} \geq \Phi^{-1}(0.975) = 1.96
+ \]
+ So $n\geq 2.16 \times 10^7$. So we can obtain $\pi$ to 3 decimal places just by throwing a stick 20 million times! Isn't that exciting?
+\end{eg}
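We can both redo the sample-size computation and try a (much smaller) version of the experiment in simulation. The sketch below is illustrative only; it uses the standard Buffon set-up with $\ell = L = 1$, where the needle crosses a line iff $x \leq \frac{1}{2}\sin\theta$, with $x$ the distance from the needle's centre to the nearest line:

```python
import math
import random

# Sample-size calculation from the text: var(pi_hat) is roughly (pi - 2) pi^2 / (2n).
var_coeff = (math.pi - 2) * math.pi ** 2 / 2
n_needed = math.ceil(1.96 ** 2 * var_coeff / 0.001 ** 2)  # about 2.16e7 tosses

# A small seeded simulation with l = L = 1.
random.seed(1)
tosses, hits = 100_000, 0
for _ in range(tosses):
    x = random.uniform(0, 0.5)          # distance of centre to nearest line
    theta = random.uniform(0, math.pi)  # angle of needle to the lines
    if x <= 0.5 * math.sin(theta):
        hits += 1
pi_hat = 2 * tosses / hits              # since p = 2/pi when l = L
```

With only $10^5$ tosses the standard deviation of $\hat\pi$ is about $0.0075$, so we should land within a few hundredths of $\pi$, consistent with needing $\sim 2\times 10^7$ tosses for three decimal places.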
+
+\ifx \nhtml \undefined
+\begin{landscape}
+\section{Summary of distributions}
+\subsection{Discrete distributions}
+\begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ \textbf{Distribution} & \textbf{PMF} & \textbf{Mean} & \textbf{Variance} & \textbf{PGF} \\
+ \midrule
+ Bernoulli & $p^k(1 - p)^{1 - k}$ & $p$ & $p(1 - p)$ & $q + pz$\\
+ Binomial & $\displaystyle\binom{n}{k}p^k(1 - p)^{n - k}$ & $np$ & $np(1 - p)$ & $(q + pz)^n$\\
+ Geometric & $(1 - p)^k p$ & $\displaystyle\frac{1 - p}{p}$ & $\displaystyle\frac{1 - p}{p^2}$ & $\displaystyle\frac{1 - p}{1 - pz}$\\
+ Poisson & $\displaystyle\frac{\lambda^k}{k!}e^{-\lambda}$ & $\lambda$ & $\lambda$ & $e^{\lambda(z - 1)}$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+\subsection{Continuous distributions}
+\begin{center}
+ \begin{tabular}{cccccc}
+ \toprule
+ \textbf{Distribution} & \textbf{PDF} & \textbf{CDF} & \textbf{Mean} & \textbf{Variance} & \textbf{MGF} \\
+ \midrule
+ Uniform & $\displaystyle\frac{1}{b - a}$ & $\displaystyle\frac{x - a}{b - a}$ & $\displaystyle\frac{a + b}{2}$ & $\displaystyle \frac{1}{12}(b - a)^2$ & $\displaystyle\frac{e^{\theta b} - e^{\theta a}}{\theta (b - a)}$ \\
+ Normal & $\displaystyle \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$ & / & $\mu$ & $\sigma^2$ & $e^{\theta\mu + \frac{1}{2}\theta^2\sigma^2}$\\
+ Exponential & $\lambda e^{-\lambda x}$ & $1 - e^{-\lambda x}$ & $\displaystyle \frac{1}{\lambda}$ & $\displaystyle\frac{1}{\lambda^2}$ & $\displaystyle\frac{\lambda}{\lambda - \theta}$ \\
+ Cauchy & $\displaystyle \frac{1}{\pi(1 + x^2)}$ & / & undefined & undefined & undefined\\
+ Gamma & $\displaystyle \frac{\lambda^n x^{n - 1}e^{-\lambda x}}{(n - 1)!}$ & / & $\displaystyle\frac{n}{\lambda}$ & $\displaystyle \frac{n}{\lambda^2}$ & $\displaystyle\left(\frac{\lambda}{\lambda - \theta}\right)^n$ \\
+ Multivariate normal & $\displaystyle \frac{1}{(2\pi)^{n/2}\sqrt{\det \Sigma}} \exp\left[-\frac{1}{2}(\mathbf{z} - \boldsymbol\mu)^T\Sigma^{-1}(\mathbf{z} - \boldsymbol\mu)\right]$ & / & $\boldsymbol\mu$ & $\Sigma$ & /\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+\end{landscape}
+\fi
+\end{document}
+
diff --git a/books/cam/IA_L/vector_calculus.tex b/books/cam/IA_L/vector_calculus.tex
new file mode 100644
index 0000000000000000000000000000000000000000..369d95e2a9b9a8e15bc27b47dac55f72cb4c6bda
--- /dev/null
+++ b/books/cam/IA_L/vector_calculus.tex
@@ -0,0 +1,3671 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IA}
+\def\nterm {Lent}
+\def\nyear {2015}
+\def\nlecturer {B.\ Allanach}
+\def\ncourse {Vector Calculus}
+\def\nofficial {http://users.hepforge.org/~allanach/teaching.html}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+ \noindent\textbf{Curves in $\R^3$}\\
+ Parameterised curves and arc length, tangents and normals to curves in $\R^3$, the radius of curvature.\hspace*{\fill} [1]
+
+ \vspace{10pt}
+ \noindent\textbf{Integration in $\R^2$ and $\R^3$}\\
+ Line integrals. Surface and volume integrals: definitions, examples using Cartesian, cylindrical and spherical coordinates; change of variables.\hspace*{\fill} [4]
+
+ \vspace{10pt}
+ \noindent\textbf{Vector operators}\\
+ Directional derivatives. The gradient of a real-valued function: definition; interpretation as normal to level surfaces; examples including the use of cylindrical, spherical *and general orthogonal curvilinear* coordinates.
+
+ \vspace{5pt}
+ \noindent Divergence, curl and $\nabla^2$ in Cartesian coordinates, examples; formulae for these operators (statement only) in cylindrical, spherical *and general orthogonal curvilinear* coordinates. Solenoidal fields, irrotational fields and conservative fields; scalar potentials. Vector derivative identities.\hspace*{\fill} [5]
+
+ \vspace{10pt}
+ \noindent\textbf{Integration theorems}\\
+ Divergence theorem, Green's theorem, Stokes's theorem, Green's second theorem: statements; informal proofs; examples; application to fluid dynamics, and to electromagnetism including statement of Maxwell's equations.\hspace*{\fill} [5]
+
+ \vspace{10pt}
+ \noindent\textbf{Laplace's equation}\\
+ Laplace's equation in $\R^2$ and $\R^3$: uniqueness theorem and maximum principle. Solution of Poisson's equation by Gauss's method (for spherical and cylindrical symmetry) and as an integral.\hspace*{\fill} [4]
+
+ \vspace{10pt}
+ \noindent\textbf{Cartesian tensors in $\R^3$}\\
+ Tensor transformation laws, addition, multiplication, contraction, with emphasis on tensors of second rank. Isotropic second and third rank tensors. Symmetric and antisymmetric tensors. Revision of principal axes and diagonalization. Quotient theorem. Examples including inertia and conductivity.\hspace*{\fill} [5]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In the differential equations class, we learnt how to do calculus in one dimension. However, (apparently) the world has more than one dimension. We live in a 3 (or 4) dimensional world, and string theorists think that the world has more than 10 dimensions. It is thus important to know how to do calculus in many dimensions.
+
+For example, the position of a particle in a three dimensional world can be given by a position vector $\mathbf{x}$. Then by definition, the velocity is given by $\frac{\d}{\d t} \mathbf{x} = \dot{\mathbf{x}}$. This would require us to take the derivative of a vector.
+
+This is not too difficult. We can just differentiate the vector componentwise. However, we can reverse the problem and get a more complicated one. We can assign a number to each point in (3D) space, and ask how this number changes as we move in space. For example, the function might tell us the temperature at each point in space, and we want to know how the temperature changes with position.
+
+In the most general case, we will assign a vector to each point in space. For example, the electric field vector $\mathbf{E}(\mathbf{x})$ tells us the direction of the electric field at each point in space.
+
+On the other side of the story, we also want to do integration in multiple dimensions. Apart from the obvious ``integrating a vector'', we might want to integrate over surfaces. For example, we can let $\mathbf{v}(\mathbf{x})$ be the velocity of some fluid at each point in space. Then to find the total fluid flow through a surface, we integrate $\mathbf{v}$ over the surface.
+
+In this course, we are mostly going to learn about doing calculus in many dimensions. In the last few lectures, we are going to learn about Cartesian tensors, which are a generalization of vectors.
+
+Note that throughout the course (and lecture notes), summation convention is implied unless otherwise stated.
+
+\section{Derivatives and coordinates}
+\subsection{Derivative of functions}
+We used to define a derivative as the limit of a quotient and a function is differentiable if the derivative exists. However, this obviously cannot be generalized to vector-valued functions, since you cannot divide by vectors. So we want an alternative definition of differentiation, which can be easily generalized to vectors.
+
+Recall that if a function $f$ is differentiable at $x$, then for a small perturbation $\delta x$, we have
+\[
+ \delta f \stackrel{\text{def}}{=} f(x + \delta x) - f(x) = f'(x) \delta x + o(\delta x),
+\]
+which says that the resulting change in $f$ is approximately proportional to $\delta x$ (as opposed to $1/\delta x$ or something else). It can be easily shown that the converse is true --- if $f$ satisfies this relation, then $f$ is differentiable.
+
+This definition is more easily extended to vector functions. We say a function $\mathbf{F}$ is differentiable if, when $x$ is perturbed by $\delta x$, then the resulting change is ``something'' times $\delta x$ plus an $o(\delta x)$ error term. In the most general case, $\delta x$ will be a vector and that ``something'' will be a matrix. Then that ``something'' will be what we call the derivative.
+
+\subsubsection*{Vector functions \tph{$\R \to \R^n$}{R to Rn}{&\#x211D; → &\#x211D;n}}
+We start with the simple case of vector functions.
+\begin{defi}[Vector function]
+ A \emph{vector function} is a function $\mathbf{F}: \R\to \R^n$.
+\end{defi}
+This takes in a number and returns a vector. For example, it can map a time to the velocity of a particle at that time.
+
+\begin{defi}[Derivative of vector function]
+ A vector function $\mathbf{F}(x)$ is \emph{differentiable} if
+ \[
+ \delta \mathbf{F} \stackrel{\text{def}}{=}\mathbf{F}(x + \delta x)- \mathbf{F}(x) = \mathbf{F}'(x)\delta x + o(\delta x)
+ \]
+ for some $\mathbf{F}'(x)$. $\mathbf{F}'(x)$ is called the \emph{derivative} of $\mathbf{F}(x)$.
+\end{defi}
+We don't have anything new and special here, since we might as well have defined $\mathbf{F}'(x)$ as
+\[
+ \mathbf{F}' = \frac{\d \mathbf{F}}{\d x} = \lim_{\delta x \to 0} \frac{1}{\delta x}[\mathbf{F}(x + \delta x) - \mathbf{F}(x)],
+\]
+which is easily shown to be equivalent to the above definition.
+
+Using differential notation, the differentiability condition can be written as
+\[
+ \d\mathbf{F} = \mathbf{F}'(x)\;\d x.
+\]
+Given a basis $\mathbf{e}_i$ that is independent of $x$, vector differentiation is performed componentwise, i.e.
+\begin{prop}
+ \[
+ \mathbf{F}'(x) = F'_i(x)\mathbf{e}_i.
+ \]
+\end{prop}
+Leibniz identities hold for the products of scalar and vector functions.
+\begin{prop}
+ \begin{align*}
+ \frac{\d}{\d t}(f\mathbf{g}) &= \frac{\d f}{\d t}\mathbf{g} + f\frac{\d \mathbf{g}}{\d t}\\
+ \frac{\d}{\d t}(\mathbf{g}\cdot \mathbf{h}) &= \frac{\d \mathbf{g}}{\d t}\cdot \mathbf{h} + \mathbf{g}\cdot \frac{\d \mathbf{h}}{\d t}\\
+ \frac{\d}{\d t}(\mathbf{g}\times \mathbf{h}) &= \frac{\d \mathbf{g}}{\d t}\times \mathbf{h} + \mathbf{g}\times \frac{\d \mathbf{h}}{\d t}
+ \end{align*}
+ Note that the order of multiplication must be retained in the case of the cross product.
+\end{prop}
+
+\begin{eg}
+ Consider a particle with mass $m$. It has position $\mathbf{r}(t)$, velocity $\dot{\mathbf{r}}(t)$ and acceleration $\ddot{\mathbf{r}}$. Its momentum is $\mathbf{p} = m\dot{\mathbf{r}}(t)$.
+
+ Note that derivatives with respect to $t$ are usually denoted by dots instead of dashes.
+
+ If $\mathbf{F}(\mathbf{r})$ is the force on a particle, then Newton's second law states that
+ \[
+ \dot{\mathbf{p}} = m\ddot{\mathbf{r}} = \mathbf{F}.
+ \]
+ We can define the angular momentum about the origin to be
+ \[
+ \mathbf{L} = \mathbf{r}\times \mathbf{p} = m\mathbf{r} \times \dot{\mathbf{r}}.
+ \]
+ If we want to know how the angular momentum changes over time, then
+ \[
+ \dot{\mathbf{L}} = m\dot{\mathbf{r}}\times \dot{\mathbf{r}} + m\mathbf{r}\times \ddot{\mathbf{r}} = m\mathbf{r}\times \ddot{\mathbf{r}} = \mathbf{r}\times \mathbf{F}.
+ \]
+ which is the \emph{torque} of $\mathbf{F}$ about the origin.
+\end{eg}
+
+\subsubsection*{Scalar functions \tph{$\R^n \to \R$}{Rn to R}{&\#x211D;n → &\#x211D;}}
+We can also define derivatives for a different kind of function:
+\begin{defi}
+ A \emph{scalar function} is a function $f: \R^n \to \R$.
+\end{defi}
+A scalar function takes in a position and gives you a number, e.g.\ the potential energy of a particle at different positions.
+
+Before we define the derivative of a scalar function, we have to first define what it means to take a limit of a vector.
+\begin{defi}[Limit of vector]
+ The \emph{limit of vectors} is defined using the norm. So $\mathbf{v}\to \mathbf{c}$ iff $|\mathbf{v} - \mathbf{c}| \to 0$. Similarly, $f(\mathbf{r}) = o(\mathbf{r})$ means $\frac{|f(\mathbf{r})|}{|\mathbf{r}|} \to 0$ as $\mathbf{r}\to \mathbf{0}$.
+\end{defi}
+
+\begin{defi}[Gradient of scalar function]
+ A scalar function $f(\mathbf{r})$ is \emph{differentiable} at $\mathbf{r}$ if
+ \[
+ \delta f \stackrel{\text{def}}{=} f(\mathbf{r} + \delta \mathbf{r}) - f(\mathbf{r}) = (\nabla f)\cdot \delta \mathbf{r} + o(\delta \mathbf{r})
+ \]
+ for some vector $\nabla f$, the \emph{gradient} of $f$ at $\mathbf{r}$.
+\end{defi}
+Here we have a fancy name ``gradient'' for the derivative. But we will soon give up on finding fancy names and just call everything the ``derivative''!
+
+Note also that here we genuinely need the new notion of derivative, since ``dividing by $\delta \mathbf{r}$'' makes no sense at all!
+
+The above definition considers the case where $\delta \mathbf{r}$ comes in all directions. What if we only care about the case where $\delta \mathbf{r}$ is in some particular direction $\mathbf{n}$? For example, maybe $f$ is the potential of a particle that is confined to move in one straight line only.
+
+Then taking $\delta \mathbf{r} = h\mathbf{n}$, with $\mathbf{n}$ a unit vector,
+\[
+ f(\mathbf{r} + h\mathbf{n}) - f(\mathbf{r}) = \nabla f \cdot (h\mathbf{n}) + o(h) = h(\nabla f\cdot \mathbf{n}) + o(h),
+\]
+which gives
+\begin{defi}[Directional derivative]
+ The \emph{directional derivative} of $f$ along $\mathbf{n}$ is
+ \[
+ \mathbf{n}\cdot \nabla f = \lim_{h \to 0} \frac{1}{h}[f(\mathbf{r} + h\mathbf{n}) - f(\mathbf{r})].
+ \]
+ It refers to how fast $f$ changes when we move in the direction of $\mathbf{n}$.
+\end{defi}
+Using this expression, the directional derivative is maximized when $\mathbf{n}$ is in the same direction as $\nabla f$ (then $\mathbf{n}\cdot \nabla f = |\nabla f|$). So $\nabla f$ points in the direction of greatest slope.
+
+How do we evaluate $\nabla f$? Suppose we have an orthonormal basis $\mathbf{e}_i$. Setting $\mathbf{n} = \mathbf{e}_i$ in the above equation, we obtain
+\[
+ \mathbf{e}_i \cdot \nabla f = \lim_{h\to 0} \frac{1}{h}[f(\mathbf{r} + h\mathbf{e}_i) - f(\mathbf{r})] = \frac{\partial f}{\partial x_i}.
+\]
+Hence
+\begin{thm}
+ The gradient is
+ \[
+ \nabla f = \frac{\partial f}{\partial x_i}\mathbf{e}_i
+ \]
+\end{thm}
+
+Hence we can write the condition of differentiability as
+\[
+ \delta f = \frac{\partial f}{\partial x_i}\delta x_i + o(\delta \mathbf{x}).
+\]
+In differential notation, we write
+\[
+ \d f = \nabla f\cdot \d \mathbf{r} = \frac{\partial f}{\partial x_i}\d x_i,
+\]
+which is the chain rule for partial derivatives.
+
+\begin{eg}
+ Take $f(x, y, z) = x + e^{xy}\sin z$. Then
+ \begin{align*}
+ \nabla f &= \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right)\\
+ &= (1 + ye^{xy}\sin z, xe^{xy}\sin z, e^{xy}\cos z)
+ \end{align*}
+ At $(x, y, z) = (0, 1, 0)$, $\nabla f = (1, 0, 1)$. So $f$ increases/decreases most rapidly for $\mathbf{n} = \pm \frac{1}{\sqrt{2}}(1, 0, 1)$ with a rate of change of $\pm \sqrt{2}$. There is no change in $f$ if $\mathbf{n}$ is perpendicular to $\pm \frac{1}{\sqrt{2}}(1, 0, 1)$.
+\end{eg}
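Gradients computed by hand can be checked against central finite differences. A minimal sketch (illustrative; the step size $h$ is an arbitrary choice) for the example above:

```python
import math

def f(x, y, z):
    return x + math.exp(x * y) * math.sin(z)

def grad(f, p, h=1e-6):
    """Central-difference approximation to the gradient of f at point p."""
    g = []
    for i in range(3):
        dp, dm = list(p), list(p)
        dp[i] += h
        dm[i] -= h
        g.append((f(*dp) - f(*dm)) / (2 * h))
    return g

g = grad(f, (0.0, 1.0, 0.0))  # should be close to (1, 0, 1)
```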
+
+Now suppose we have a scalar function $f(\mathbf{r})$ and we want to consider the rate of change along a path $\mathbf{r}(u)$. A change $\delta u$ produces a change $\delta \mathbf{r} = \mathbf{r}' \delta u + o(\delta u)$, and
+\[
+ \delta f = \nabla f\cdot \delta \mathbf{r} + o(|\delta \mathbf{r}|) = \nabla f\cdot \mathbf{r}'(u)\delta u + o(\delta u).
+\]
+This shows that $f$ is differentiable as a function of $u$ and
+\begin{thm}[Chain rule]
+ Given a function $f(\mathbf{r}(u))$,
+ \[
+ \frac{\d f}{\d u} = \nabla f\cdot \frac{\d \mathbf{r}}{\d u} = \frac{\partial f}{\partial x_i} \frac{\d x_i}{\d u}.
+ \]
+\end{thm}
+Note that if we drop the $\d u$, we simply get
+\[
+ \d f = \nabla f\cdot \d \mathbf{r} = \frac{\partial f}{\partial x_i}\d x_i,
+\]
+which is what we've previously had.
+\subsubsection*{Vector fields \tph{$\R^n\to \R^m$}{Rn to Rm}{&\#x211D;n → &\#x211D;m}}
+We are now ready to tackle the general case, which goes by the fancy name of \emph{vector fields}.
+\begin{defi}[Vector field]
+ A \emph{vector field} is a function $\mathbf{F}: \R^n\to \R^m$.
+\end{defi}
+
+\begin{defi}[Derivative of vector field]
+ A vector field $\mathbf{F}: \R^n \to \R^m$ is differentiable if
+ \[
+ \delta \mathbf{F} \stackrel{\text{def}}{=} \mathbf{F}(\mathbf{x} + \delta\mathbf{x}) - \mathbf{F}(\mathbf{x}) = M\delta\mathbf{x} + o(\delta \mathbf{x})
+ \]
+ for some $m\times n$ matrix $M$. $M$ is the \emph{derivative} of $\mathbf{F}$.
+\end{defi}
+As promised, $M$ does not have a fancy name.
+
+Given an arbitrary function $\mathbf{F}: \R^n \to \R^m$ that maps $\mathbf{x}\mapsto \mathbf{y}$ and a choice of basis, we can write $\mathbf{F}$ as a set of $m$ functions $y_j = F_j(\mathbf{x})$ such that $\mathbf{y} = (y_1, y_2, \cdots, y_m)$. Then
+\[
+ \d y_j = \frac{\partial F_j}{\partial x_i} \d x_i.
+\]
+and we can write the derivative as
+\begin{thm}
+ The derivative of $\mathbf{F}$ is given by
+ \[
+ M_{ji} =\frac{\partial y_j}{\partial x_i}.
+ \]
+\end{thm}
+Note that we could have used this as the definition of the derivative. However, the original definition is superior because it does not require a selection of coordinate system.
+
+\begin{defi}
+ A function is \emph{smooth} if it can be differentiated any number of times. This requires that all partial derivatives exist and are totally symmetric in $i, j$ and $k$ (i.e.\ the differential operator is commutative).
+\end{defi}
+The functions we will consider will be smooth except where things obviously go wrong (e.g.\ $f(x) = 1/x$ at $x = 0$).
+
+\begin{thm}[Chain rule]
+ Suppose $g: \R^p\to\R^n$ and $f: \R^n \to \R^m$. Suppose that the coordinates of the vectors in $\R^p, \R^n$ and $\R^m$ are $u_a, x_i$ and $y_r$ respectively. By the chain rule,
+ \[
+ \frac{\partial y_r}{\partial u_a} = \frac{\partial y_r}{\partial x_i}\frac{\partial x_i}{\partial u_a},
+ \]
+ with summation implied. Writing in matrix form,
+ \[
+ M(f\circ g)_{ra} = M(f)_{ri}M(g)_{ia}.
+ \]
+ Alternatively, in operator form,
+ \[
+ \frac{\partial}{\partial u_a} = \frac{\partial x_i}{\partial u_a}\frac{\partial}{\partial x_i}.
+ \]
+\end{thm}
+
+\subsection{Inverse functions}
+Suppose $g, f: \R^n \to \R^n$ are inverse functions, i.e.\ $g\circ f = f\circ g = \id$. Suppose that $f(\mathbf{x}) =\mathbf{u}$ and $g(\mathbf{u}) = \mathbf{x}$.
+
+Since the derivative of the identity function is the identity matrix (if you differentiate $\mathbf{x}$ with respect to $\mathbf{x}$, you get the identity), we must have
+\[
+ M(f\circ g) = I.
+\]
+Therefore we know that
+\[
+ M(g) = M(f)^{-1}.
+\]
+We derive this result more formally by noting
+\[
+ \frac{\partial u_b}{\partial u_a} = \delta_{ab}.
+\]
+So by the chain rule,
+\[
+ \frac{\partial u_b}{\partial x_i}\frac{\partial x_i}{\partial u_a} = \delta_{ab},
+\]
+i.e.\ $M(f\circ g) = I$.
+
+In the $n = 1$ case, it is the familiar result that $\d u/\d x = 1/(\d x/\d u)$.
+
+\begin{eg}
+ For $n = 2$, write $u_1 = \rho$, $u_2 =\varphi$ and let $x_1 = \rho \cos \varphi$ and $x_2 = \rho \sin \varphi$. Then the function used to convert between the coordinate systems is $g(u_1, u_2) = (u_1\cos u_2, u_1\sin u_2)$
+
+ Then
+ \[
+ M(g) =
+ \begin{pmatrix}
+ \partial x_1/\partial \rho & \partial x_1/\partial \varphi\\
+ \partial x_2/\partial \rho & \partial x_2/\partial \varphi
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ \cos\varphi & -\rho\sin \varphi\\
+ \sin \varphi & \rho \cos \varphi
+ \end{pmatrix}
+ \]
+ We can invert the relations between $(x_1, x_2)$ and $(\rho, \varphi)$ to obtain
+ \begin{align*}
+ \varphi &= \tan^{-1} \frac{x_2}{x_1}\\
+ \rho &= \sqrt{x_1^2 + x_2^2}
+ \end{align*}
+ We can calculate
+ \[
+ M(f) =
+ \begin{pmatrix}
+ \partial\rho/\partial x_1 & \partial\rho/\partial x_2\\
+ \partial\varphi/\partial x_1 & \partial\varphi/\partial x_2\\
+ \end{pmatrix}
+ = M(g)^{-1}.
+ \]
+ These matrices are known as Jacobian matrices, and their determinants are known as the Jacobians.
+\end{eg}
+Note that
+\[
+ \det M(f)\det M(g) = 1.
+\]
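This relation between the two Jacobians is easy to verify numerically for plane polars. A small sketch (the evaluation point $(\rho, \varphi) = (2, 0.7)$ is an arbitrary choice):

```python
import math

def jacobian_g(rho, phi):
    """M(g) for (rho, phi) -> (x1, x2) in plane polar coordinates."""
    return [[math.cos(phi), -rho * math.sin(phi)],
            [math.sin(phi),  rho * math.cos(phi)]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det2(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

Mg = jacobian_g(2.0, 0.7)
Mf = inv2(Mg)                  # M(f) = M(g)^{-1}
product = det2(Mf) * det2(Mg)  # should be 1
# Check M(f) M(g) = I as well:
ident = [[sum(Mf[i][k] * Mg[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
```

Note that $\det M(g) = \rho$, the familiar polar Jacobian.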
+\subsection{Coordinate systems}
+Now we can apply the results above to changes of coordinates on Euclidean space. Suppose $x_i$ are Cartesian coordinates. Then we can define an arbitrary new coordinate system $u_a$ in which each coordinate $u_a$ is a function of $\mathbf{x}$. For example, we can define the plane polar coordinates $\rho, \varphi$ by
+\[
+ x_1 = \rho\cos\varphi, \quad x_2 = \rho\sin \varphi.
+\]
+However, note that $\rho$ and $\varphi$ are not components of a position vector, i.e.\ they are not the ``coefficients'' of basis vectors like $\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2$ are. But we can associate related basis vectors that point to directions of increasing $\rho$ and $\varphi$, obtained by differentiating $\mathbf{r}$ with respect to the variables and then normalizing:
+\[
+ \mathbf{e}_\rho = \cos \varphi\, \mathbf{e}_1 + \sin \varphi\, \mathbf{e}_2,\quad \mathbf{e}_\varphi = -\sin \varphi\, \mathbf{e}_1 + \cos \varphi\, \mathbf{e}_2.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$\mathbf{e}_1$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$\mathbf{e}_2$};
+ \draw (0, 0) -- (2, 1.5) node [circ]{} node [pos = 0.5, anchor = south east] {$\rho$};
+ \draw [->] (2, 1.5) -- (2.5, 1.875) node [anchor = south west] {$\mathbf{e}_\rho$};
+ \draw [->] (2, 1.5) -- (1.625, 2) node [anchor = south east] {$\mathbf{e}_\varphi$};
+ \draw (0.7, 0) arc (0:36.87:0.7);
+ \node at (0.9, 0.3) {$\varphi$};
+ \end{tikzpicture}
+\end{center}
+These are not ``usual'' basis vectors in the sense that these basis vectors vary with position and are undefined at the origin. However, they are still very useful when dealing with systems with rotational symmetry.
+
+In three dimensions, we have cylindrical polars and spherical polars.
+\begin{center}
+ \begin{tabularx}{\textwidth}{XX}
+ \toprule
+ \multicolumn{1}{c}{Cylindrical polars} & \multicolumn{1}{c}{Spherical polars}\\
+ \midrule
+ \multicolumn{2}{c}{Conversion formulae}\\
+ \midrule
+ $x_1 = \rho \cos \varphi$ & $x_1 = r\sin \theta\cos \varphi$\\
+ $x_2 = \rho \sin \varphi$ & $x_2 = r\sin \theta \sin \varphi$\\
+ $x_3 = z$ & $x_3 = r\cos\theta$\\
+ \midrule
+ \multicolumn{2}{c}{Basis vectors}\\
+ \midrule
+ $\mathbf{e}_\rho = (\cos\varphi, \sin \varphi, 0)$ & $\mathbf{e}_r = (\sin\theta\cos\varphi, \sin \theta\sin \varphi , \cos \theta)$\\
+ $\mathbf{e}_\varphi = (-\sin \varphi, \cos \varphi, 0)$ & $\mathbf{e}_\varphi = (-\sin \varphi, \cos \varphi, 0)$\\
+ $\mathbf{e}_z = (0, 0, 1)$ & $\mathbf{e}_\theta = (\cos \theta\cos \varphi, \cos\theta\sin\varphi, -\sin \theta)$\\
+ \bottomrule
+ \end{tabularx}
+\end{center}
+
+\section{Curves and line integrals}
+\subsection{Parametrised curves and arc length}
+There are many ways we can describe a curve. We can, say, describe it by an equation that the points on the curve satisfy. For example, a circle can be described by $x^2 + y^2 = 1$. However, this is not a good way to do so, as it is rather difficult to work with. It is also often difficult to find a closed form like this for a curve.
+
+Instead, we can imagine the curve to be specified by a particle moving along the path. So it is represented by a function $\mathbf{x}: \R \to \R^n$, and the curve itself is the image of the function. This is known as a \emph{parametrisation} of a curve. In addition to simplified notation, this also has the benefit of giving the curve an \emph{orientation}.
+
+\begin{defi}[Parametrisation of curve]
+ Given a curve $C$ in $\R^n$, a \emph{parametrisation} of it is a continuous and invertible function $\mathbf{r}: D\to \R^n$ for some $D\subseteq \R$ whose image is $C$.
+
+ $\mathbf{r}'(u)$ is a vector tangent to the curve at each point. A parametrisation is \emph{regular} if $\mathbf{r}'(u) \not= 0$ for all $u$.
+\end{defi}
+Clearly, a curve can have many different parametrisations.
+
+\begin{eg}
+ The curve
+ \[
+ \frac{1}{4}x^2 + y^2 = 1, \quad y \geq 0, \quad z = 3.
+ \]
+ can be parametrised by $\mathbf{r}(u) = 2\cos u\hat{\mathbf{i}} + \sin u\hat{\mathbf{j}} + 3\hat{\mathbf{k}}$ with $0 \leq u \leq \pi$.
+\end{eg}
+If we change $u$ (and hence $\mathbf{r}$) by a small amount, then the distance $|\delta \mathbf{r}|$ is roughly the change in arclength $\delta s$. So $\delta s = |\delta \mathbf{r}| + o(|\delta \mathbf{r}|)$. Then we have
+
+\begin{prop}
+ Let $s$ denote the arclength of a curve $\mathbf{r}(u)$. Then
+ \[
+ \frac{\d s}{\d u} = \pm \left|\frac{\d \mathbf{r}}{\d u}\right| = \pm |\mathbf{r}'(u)|
+ \]
+ with the sign depending on whether it is in the direction of increasing or decreasing arclength.
+\end{prop}
+
+\begin{eg}
+ Consider a helix described by $\mathbf{r}(u) = (3\cos u, 3\sin u, 4u)$. Then
+ \begin{align*}
+ \mathbf{r}'(u) &= (-3\sin u, 3\cos u, 4)\\
+ \frac{\d s}{\d u} &= |\mathbf{r}'(u)| = \sqrt{3^2 + 4^2} = 5
+ \end{align*}
+ So $s = 5u$, i.e.\ the arclength from $\mathbf{r}(0)$ to $\mathbf{r}(u)$ is $s = 5u$.
+\end{eg}
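As a quick numerical sanity check (a Python sketch, illustrative only, not part of the notes), we can approximate the arclength of this helix by summing chord lengths $|\delta \mathbf{r}|$ and compare with $s = 5u$:

```python
import math

def r(u):
    # The helix from the example: r(u) = (3 cos u, 3 sin u, 4u)
    return (3 * math.cos(u), 3 * math.sin(u), 4 * u)

def arclength(a, b, n=100000):
    # Approximate the arclength by summing chord lengths |delta r|
    total, prev = 0.0, r(a)
    for i in range(1, n + 1):
        cur = r(a + (b - a) * i / n)
        total += math.dist(prev, cur)
        prev = cur
    return total

# ds/du = |r'(u)| = 5, so the arclength from u = 0 to u = 2 should be 10
print(arclength(0, 2))
```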
+
+We can change the parametrisation of $\mathbf{r}$ by taking an invertible smooth function $u \mapsto \tilde{u}(u)$, and regarding $\mathbf{r}$ as a function of $\tilde{u}$. Then by the chain rule,
+\begin{align*}
+ \frac{\d \mathbf{r}}{\d u} &= \frac{\d \mathbf{r}}{\d \tilde{u}}\frac{\d \tilde{u}}{\d u}\\
+ \frac{\d \mathbf{r}}{\d \tilde{u}} &= \frac{\d \mathbf{r}}{\d u}/ \frac{\d \tilde{u}}{\d u}
+\end{align*}
+It is often convenient to use the arclength $s$ as the parameter. Then the tangent vector will always have unit length since the proposition above yields
+\[
+ |\mathbf{r}'(s)| = \frac{\d s}{\d s} = 1.
+\]
+We call $\d s$ the scalar line element, which will be used when we consider integrals.
+\begin{defi}[Scalar line element]
+ The \emph{scalar line element} of $C$ is $\d s$.
+\end{defi}
+
+\begin{prop}
+ $\d s = \pm |\mathbf{r}'(u)| \d u$
+\end{prop}
+\subsection{Line integrals of vector fields}
+\begin{defi}[Line integral]
+ The \emph{line integral} of a smooth vector field $\mathbf{F}(\mathbf{r})$ along a path $C$ parametrised by $\mathbf{r}(u)$ along the direction (orientation) $ \mathbf{r}(\alpha)\to \mathbf{r}(\beta)$ is
+ \[
+ \int_C \mathbf{F}(\mathbf{r})\cdot \d \mathbf{r} = \int_\alpha^\beta \mathbf{F}(\mathbf{r}(u))\cdot \mathbf{r}'(u)\; \d u.
+ \]
+ We say $\d \mathbf{r} = \mathbf{r}'(u) \d u$ is the \emph{line element} on $C$. Note that the upper and lower limits of the integral are the end point and start point respectively, and $\beta$ is not necessarily larger than $\alpha$.
+\end{defi}
+For example, we may be moving a particle from $\mathbf{a}$ to $\mathbf{b}$ along a curve $C$ under a force field $\mathbf{F}$. Then we may divide the curve into many small segments $\delta \mathbf{r}$. Then for each segment, the force experienced is $\mathbf{F}(\mathbf{r})$ and the work done is $\mathbf{F}(\mathbf{r})\cdot \delta\mathbf{r}$. Then the total work done across the curve is
+\[
+ W = \int_C \mathbf{F}(\mathbf{r})\cdot \d \mathbf{r}.
+\]
+
+\begin{eg}
+ Take $\mathbf{F}(\mathbf{r}) = (xe^y, z^2, xy)$ and we want to find the line integral from $\mathbf{a}=(0, 0, 0)$ to $\mathbf{b}=(1, 1, 1)$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [left] {$a$};
+ \node at (2, 2) [circ] {};
+ \node at (2, 2) [right] {$b$};
+ \draw [->-=0.6] (0, 0) parabola (2, 2);
+ \node at (1.8, 1) {$C_1$};
+ \draw [->-=0.6] (0, 0) -- (2, 2) node [pos = 0.5, anchor = south east] {$C_2$};
+ \end{tikzpicture}
+ \end{center}
+ We first integrate along the curve $C_1: \mathbf{r}(u) = (u, u^2, u^3)$. Then $\mathbf{r}'(u) = (1, 2u, 3u^2)$, and $\mathbf{F}(\mathbf{r}(u)) = (ue^{u^2}, u^6, u^3)$. So
+ \begin{align*}
+ \int_{C_1} \mathbf{F}\cdot \d\mathbf{r} &= \int_0^1 \mathbf{F}\cdot\mathbf{r}'(u)\; \d u\\
+ &= \int_0^1 ue^{u^2} + 2u^7 + 3u^5\;\d u\\
+ &= \frac{e}{2} -\frac{1}{2} + \frac{1}{4} + \frac{1}{2}\\
+ &= \frac{e}{2} + \frac{1}{4}
+ \end{align*}
+ Now we try to integrate along another curve $C_2: \mathbf{r}(t) = (t, t, t)$. So $\mathbf{r}'(t) = (1,1, 1)$.
+ \begin{align*}
+ \int_{C_2} \mathbf{F}\cdot \d \mathbf{r} &= \int_0^1 \mathbf{F}\cdot \mathbf{r}'(t)\;\d t\\
+ &= \int_0^1 te^t + 2t^2\; \d t\\
+ &= \frac{5}{3}.
+ \end{align*}
+ We see that the line integral depends on the curve $C$ in general, not just $\mathbf{a}, \mathbf{b}$.
+\end{eg}
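The two line integrals above can be checked numerically. The following Python sketch (illustrative only) approximates $\int_C \mathbf{F}\cdot\d\mathbf{r} = \int_\alpha^\beta \mathbf{F}(\mathbf{r}(u))\cdot\mathbf{r}'(u)\;\d u$ with the midpoint rule:

```python
import math

def F(x, y, z):
    # F(r) = (x e^y, z^2, xy) from the example
    return (x * math.exp(y), z * z, x * y)

def line_integral(r, rdash, a, b, n=20000):
    # Midpoint-rule approximation of int_a^b F(r(u)) . r'(u) du
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        u = a + (i + 0.5) * h
        total += sum(f * d for f, d in zip(F(*r(u)), rdash(u))) * h
    return total

I1 = line_integral(lambda u: (u, u**2, u**3),
                   lambda u: (1, 2 * u, 3 * u**2), 0, 1)
I2 = line_integral(lambda t: (t, t, t),
                   lambda t: (1, 1, 1), 0, 1)
print(I1, I2)  # I1 = e/2 + 1/4 and I2 = 5/3, up to numerical error
```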
+
+We can also use the arclength $s$ as the parameter. Since $\d \mathbf{r} = \mathbf{t}\;\d s$, with $\mathbf{t}$ being the unit tangent vector, we have
+\[
+ \int_C \mathbf{F}\cdot \d \mathbf{r} = \int_C \mathbf{F}\cdot \mathbf{t}\;\d s.
+\]
+Note that we do not necessarily have to integrate $\mathbf{F}\cdot \mathbf{t}$ with respect to $s$. We can also integrate a scalar function as a function of $s$, $\int_C f(s)\;\d s$. By convention, this is calculated in the direction of increasing $s$. In particular, we have
+\[
+ \int_C 1\;\d s = \text{length of }C.
+\]
+\begin{defi}[Closed curve]
+ A \emph{closed curve} is a curve with the same start and end point. The line integral along a closed curve is (sometimes) written as $\oint$ and is (sometimes) called the \emph{circulation} of $\mathbf{F}$ around $C$.
+\end{defi}
+
+Sometimes we are not that lucky and our curve is not smooth. For example, the graph of an absolute value function is not smooth. However, often we can break it apart into many smaller segments, each of which is smooth. Alternatively, we can write the curve as a sum of smooth curves. We call these \emph{piecewise smooth} curves.
+
+\begin{defi}[Piecewise smooth curve]
+ A \emph{piecewise smooth curve} is a curve $C = C_1 + C_2 + \cdots + C_n$ with all $C_i$ smooth with regular parametrisations. The line integral over a piecewise smooth $C$ is
+ \[
+ \int_C \mathbf{F}\cdot \d \mathbf{r} = \int_{C_1} \mathbf{F}\cdot \d \mathbf{r} + \int_{C_2} \mathbf{F}\cdot \d \mathbf{r} + \cdots + \int_{C_n} \mathbf{F}\cdot \d \mathbf{r}.
+ \]
+\end{defi}
+
+\begin{eg}
+ Take the example above, and let $C_3 = -C_2$. Then $C = C_1 + C_3$ is piecewise smooth but not smooth. Then
+ \begin{align*}
+ \oint _C \mathbf{F}\cdot \d \mathbf{r} &= \int_{C_1} \mathbf{F}\cdot \d \mathbf{r} + \int_{C_3} \mathbf{F}\cdot \d \mathbf{r}\\
+ &= \left(\frac{e}{2} + \frac{1}{4}\right) - \frac{5}{3}\\
+ &= -\frac{17}{12} + \frac{e}{2}.
+ \end{align*}
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [left] {$a$};
+ \node at (2, 2) [circ] {};
+ \node at (2, 2) [right] {$b$};
+ \draw [->-=0.6] (0, 0) parabola (2, 2);
+ \node at (1.8, 1) {$C_1$};
+ \draw [->-=0.6] (2, 2) -- (0, 0) node [pos = 0.5, anchor = south east] {$C_3$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+\subsection{Gradients and differentials}
+Recall that the line integral depends on the actual curve taken, and not just the end points. However, for some nice functions, the integral \emph{does} depend on the end points only.
+\begin{thm}
+ If $\mathbf{F} = \nabla f(\mathbf{r})$, then
+ \[
+ \int _C \mathbf{F}\cdot \d \mathbf{r} = f(\mathbf{b}) - f(\mathbf{a}),
+ \]
+ where $\mathbf{b}$ and $\mathbf{a}$ are the end points of the curve.
+
+ In particular, the line integral does \emph{not} depend on the curve, but the end points only. This is the vector counterpart of the fundamental theorem of calculus. A special case is when $C$ is a closed curve, then $\oint_C \mathbf{F}\cdot \d \mathbf{r} = 0$.
+\end{thm}
+
+\begin{proof}
+ Let $\mathbf{r}(u)$ be any parametrisation of the curve, and suppose $\mathbf{a} = \mathbf{r}(\alpha)$, $\mathbf{b} = \mathbf{r}(\beta)$. Then
+ \[
+ \int_C \mathbf{F}\cdot \d \mathbf{r} = \int_C\nabla f\cdot \d \mathbf{r} = \int_\alpha^\beta \nabla f\cdot \frac{\d \mathbf{r}}{\d u}\; \d u.
+ \]
+ So by the chain rule, this is equal to
+ \[
+ \int_\alpha^\beta \frac{\d }{\d u} (f(\mathbf{r}(u))) \;\d u = [f(\mathbf{r}(u))]_\alpha^\beta = f(\mathbf{b}) - f(\mathbf{a}).\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Conservative vector field]
+ If $\mathbf{F} = \nabla f$ for some $f$, then $\mathbf{F}$ is called a \emph{conservative vector field}.
+\end{defi}
+
+The name \emph{conservative} comes from mechanics, where conservative vector fields represent conservative forces that conserve energy. This is since if the force is conservative, then the integral (i.e.\ work done) about a closed curve is $0$, which means that we cannot gain energy after travelling around the loop.
+
+It is convenient to treat differentials $\mathbf{F}\cdot \d \mathbf{r} = F_i \d x_i$ as if they were objects by themselves, which we can integrate along curves if we feel like doing so.
+
+Then we can define
+\begin{defi}[Exact differential]
+ A differential $\mathbf{F}\cdot\d \mathbf{r}$ is \emph{exact} if there is an $f$ such that $\mathbf{F} = \nabla f$. Then
+ \[
+ \d f = \nabla f\cdot \d \mathbf{r} = \frac{\partial f}{\partial x_i} \d x_i.
+ \]
+\end{defi}
+
+To test if this holds, we can use the necessary condition
+\begin{prop}
+ If $\mathbf{F} = \nabla f$ for some $f$, then
+ \[
+ \frac{\partial F_i}{\partial x_j} = \frac{\partial F_j}{\partial x_i}.
+ \]
+ This is because both are equal to $\partial^2 f/\partial x_i\partial x_j$.
+\end{prop}
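This symmetry of mixed partials can be sanity-checked numerically with central differences. The field below is a hypothetical test case, $\mathbf{F} = \nabla f$ for $f = x^3 y \sin z$ (a Python sketch, not part of the notes):

```python
import math

def F(x, y, z):
    # F = grad f for f = x^3 y sin z (a hypothetical test field)
    return (3 * x**2 * y * math.sin(z),
            x**3 * math.sin(z),
            x**3 * y * math.cos(z))

def dF(i, j, p, h=1e-5):
    # Central-difference estimate of dF_i/dx_j at the point p
    lo, hi = list(p), list(p)
    lo[j] -= h
    hi[j] += h
    return (F(*hi)[i] - F(*lo)[i]) / (2 * h)

p = (0.7, 1.3, 0.4)
asym = max(abs(dF(i, j, p) - dF(j, i, p))
           for i in range(3) for j in range(3))
print(asym)  # essentially zero, up to finite-difference error
```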
+
+For an exact differential, the result from the previous section reads
+\[
+ \int_C \mathbf{F}\cdot \d \mathbf{r} = \int_C \d f = f(\mathbf{b}) - f(\mathbf{a}).
+\]
+Differentials can be manipulated using (for constant $\lambda, \mu$):
+\begin{prop}
+ \begin{align*}
+ \d (\lambda f + \mu g) &= \lambda \d f + \mu \d g\\
+ \d (fg) &= (\d f)g + f(\d g)
+ \end{align*}
+\end{prop}
+
+Using these, it may be possible to find $f$ by inspection.
+
+\begin{eg}
+ Consider
+ \[
+ \int_C 3x^2 y\sin z\;\d x + x^3 \sin z \;\d y + x^3 y\cos z\;\d z.
+ \]
+ We see that if we integrate the first term with respect to $x$, we obtain $x^3 y\sin z$. We obtain the same thing if we integrate the second and third term. So this is equal to
+ \[
+ \int_C \d (x^3 y \sin z) = [x^3 y\sin z]^{\mathbf{b}}_{\mathbf{a}}.
+ \]
+\end{eg}
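As a numerical illustration (a Python sketch, illustrative only), integrating this exact differential along two different curves from $\mathbf{a} = (0,0,0)$ to $\mathbf{b} = (1,1,1)$ gives the same value $[x^3 y\sin z]^{\mathbf{b}}_{\mathbf{a}} = \sin 1$:

```python
import math

def F(x, y, z):
    # Components of 3x^2 y sin z dx + x^3 sin z dy + x^3 y cos z dz
    return (3 * x**2 * y * math.sin(z),
            x**3 * math.sin(z),
            x**3 * y * math.cos(z))

def line_integral(r, rdash, n=20000):
    # Midpoint rule for int_0^1 F(r(u)) . r'(u) du
    h = 1.0 / n
    return sum(sum(f * d for f, d in zip(F(*r((i + 0.5) * h)),
                                         rdash((i + 0.5) * h))) * h
               for i in range(n))

straight = line_integral(lambda u: (u, u, u), lambda u: (1, 1, 1))
curved = line_integral(lambda u: (u, u**2, u**3),
                       lambda u: (1, 2 * u, 3 * u**2))
print(straight, curved, math.sin(1))  # all three agree
```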
+
+\subsection{Work and potential energy}
+\begin{defi}[Work and potential energy]
+ If $\mathbf{F}(\mathbf{r})$ is a force, then $\int_C \mathbf{F}\cdot \d \mathbf{r}$ is the \emph{work done} by the force along the curve $C$. It is the limit of a sum of terms $\mathbf{F}(\mathbf{r})\cdot \delta \mathbf{r}$, i.e.\ the force along the direction of $\delta \mathbf{r}$.
+\end{defi}
+
+Consider a point particle moving under $\mathbf{F}(\mathbf{r})$ according to Newton's second law: $\mathbf{F}(\mathbf{r}) = m\ddot{\mathbf{r}}$.
+
+Since the kinetic energy is defined as
+\[
+ T(t) = \frac{1}{2}m\dot{\mathbf{r}}^2,
+\]
+the rate of change of energy is
+\[
+ \frac{\d}{\d t}T(t) = m\dot{\mathbf{r}}\cdot \ddot{\mathbf{r}} = \mathbf{F}\cdot \dot{\mathbf{r}}.
+\]
+Suppose the path of particle is a curve $C$ from $\mathbf{a} = \mathbf{r}(\alpha)$ to $\mathbf{b} = \mathbf{r}(\beta)$, Then
+\[
+ T(\beta) - T(\alpha) = \int_\alpha^\beta \frac{\d T}{\d t} \;\d t = \int_\alpha^\beta \mathbf{F}\cdot \dot{\mathbf{r}}\;\d t = \int_C \mathbf{F}\cdot \d \mathbf{r}.
+\]
+So the work done on the particle is the change in kinetic energy.
+
+\begin{defi}[Potential energy]
+ Given a conservative force $\mathbf{F} = -\nabla V$, $V(\mathbf{x})$ is the \emph{potential energy}. Then
+ \[
+ \int_C \mathbf{F}\cdot \d \mathbf{r} = V(\mathbf{a}) - V(\mathbf{b}).
+ \]
+\end{defi}
+Therefore, for a conservative force, we have $\mathbf{F} = -\nabla V$, where $V(\mathbf{r})$ is the potential energy.
+
+So the work done (gain in kinetic energy) is the loss in potential energy. So the total energy $T + V$ is conserved, i.e.\ constant during motion.
+
+We see that energy is conserved for conservative forces. In fact, the converse is true --- the energy is conserved only for conservative forces.
+
+\section{Integration in \tph{$\R^2$ and $\R^3$}{R2 and R3}{&\#x211D;2 and &\#x211D;3}}
+\subsection{Integrals over subsets of \tph{$\R^2$}{R2}{&\#x211D;2}}
+\begin{defi}[Surface integral]
+ Let $D\subseteq \R^2$. Let $\mathbf{r} = (x, y)$ be in Cartesian coordinates. We can approximate $D$ by $N$ disjoint subsets of simple shapes, e.g.\ triangles, parallelograms. These shapes are labelled by $I$ and have areas $\delta A_I$. \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$y$};
+ \draw (2, 1.6) -- (3.6, 1.6);
+ \draw (2, 2) -- (3.6, 2);
+ \draw (2, 2.4) -- (3.6, 2.4);
+ \draw (2, 2.8) -- (3.6, 2.8);
+ \draw (2.2, 1.4) -- (2.2, 3);
+ \draw (2.6, 1.4) -- (2.6, 3);
+ \draw (3, 1.4) -- (3, 3);
+ \draw (3.4, 1.4) -- (3.4, 3);
+ \draw plot [smooth cycle] coordinates {(1, 1) (2.5, 1.2) (4, 0.9) (4.2, 3.2) (1.5, 3)};
+ \node at (4.2, 3.2) [anchor = south west] {$D$};
+ \end{tikzpicture}
+ \end{center}
+ To integrate a function $f$ over $D$, we would like to take the sum $\sum f(\mathbf{r}_I) \delta A_I$, and take the limit as $\delta A_I \to 0$. But we need a condition stronger than simply $\delta A_I \to 0$, since we don't want the subsets to degenerate into arbitrarily long, thin strips whose areas tend to $0$. So we require that each subset is contained in a disc of diameter $\ell$.
+
+ Then we take the limit as $\ell \to 0$, $N\to \infty$, and the union of the pieces tends to $D$. For a function $f(\mathbf{r})$, we define the \emph{surface integral} as
+ \[
+ \int_D f(\mathbf{r}) \;\d A = \lim_{\ell\to 0}\sum _I f(\mathbf{r}_I)\delta A_I,
+ \]
+ where $\mathbf{r}_I$ is some point within each subset $A_I$. The integral \emph{exists} if the limit is well-defined, i.e.\ the same regardless of the choice of subsets $A_I$ and points $\mathbf{r}_I$.
+\end{defi}
+If we take $f = 1$, then the surface integral is the area of $D$.
+
+On the other hand, if we put $z = f(x, y)$ and plot out the surface $z = f(x, y)$, then the area integral is the volume under the surface.
+
+The definition allows us to take the $\delta A_i$ to be any weird shape we want. However, the sensible thing is clearly to take $A_i$ to be rectangles.
+
+We choose the small sets in the definition to be rectangles, each of size $\delta A_I = \delta x\delta y$. We sum over subsets in a narrow horizontal strip of height $\delta y$ with $y$ and $\delta y$ held constant, and take the limit as $\delta x\to 0$. We get a contribution $\delta y\int_{x_y} f(x, y)\;\d x$ with range $x_y = \{x: (x, y)\in D\}$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$y$};
+ \draw (1.08, 2.2) node [anchor = south east] {$\delta y$} -- (4.34, 2.2);
+ \draw (1.22, 2.6) -- (4.36, 2.6);
+ \draw [dashed] (1.08, 2.2) -- (1.08, 0);
+ \draw [dashed] (4.34, 2.2) -- (4.34, 0);
+ \draw [dashed] (1.08, 2.2) -- (0, 2.2) node [left] {$y$};
+ \draw [->] (2.4, -0.5) -- (1.08, -0.5);
+ \draw [->] (3, -0.5) node [left] {$x_y$} -- (4.34, -0.5);
+ \draw plot [smooth cycle] coordinates {(1, 1) (2.5, 1.2) (4, 0.9) (4.2, 3.2) (1.5, 3)};
+ \draw [dashed] (3.5, 3.36) -- (0, 3.36);
+ \draw [dashed] (3.9, 0.82) -- (0, 0.82);
+ \draw [->] (-0.5, 2.4) -- (-0.5, 3.36);
+ \draw [->] (-0.5, 1.8) node [above] {$Y$} -- (-0.5, 0.82);
+ \node at (4.2, 3.2) [anchor = south west] {$D$};
+ \end{tikzpicture}
+\end{center}
+We sum over all such strips and take $\delta y\to 0$, giving
+
+\begin{prop}
+ \[
+ \int_D f(x, y)\;\d A = \int_Y\left(\int_{x_y}f(x, y)\;\d x\right) \d y.
+ \]
+ with $x_y$ ranging over $\{x: (x, y) \in D\}$.
+\end{prop}
+
+Note that the range of the inner integral is given by a set $x_y$. This can be an interval, or a union of disjoint intervals, e.g.\ $x_y = [a_1, b_1]\cup [a_2, b_2]$. In this case,
+\[
+ \int_{x_y} f(x)\; \d x = \int_{a_1}^{b_1} f(x) \;\d x + \int_{a_2}^{b_2}f(x)\;\d x.
+\]
+This is useful if we want to integrate over a concave area and we have disconnected vertical strips.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$y$};
+ \draw plot [smooth cycle] coordinates {(1, 1) (4, 0.8) (4.2, 1.1) (3, 2) (4.1, 3) (1.2, 2.7)};
+ \draw (3.2, 0.8) -- (3.2, 1.76);
+ \draw (3.4, 0.8) -- (3.4, 1.61);
+ \draw (3.2, 2.24) -- (3.2, 3.02);
+ \draw (3.4, 2.4) -- (3.4, 3.04);
+ \end{tikzpicture}
+\end{center}
+We could also do it the other way round, integrating over $y$ first, and come up with the result
+\[
+ \int_D f(x, y)\; \d A = \int_X\left(\int_{y_x}f(x, y)\; \d y\right) \d x.
+\]
+\begin{thm}[Fubini's theorem]
+ If $f$ is a continuous function and $D$ is a compact (i.e.\ closed and bounded) subset of $\R^2$, then
+ \[
+ \iint f\;\d x\;\d y = \iint f\;\d y\;\d x.
+ \]
+ While we have rather strict conditions for this theorem, it actually holds in many more cases, but those situations have to be checked manually.
+\end{thm}
+\begin{defi}[Area element]
+ The \emph{area element} is $\d A$.
+\end{defi}
+
+\begin{prop}
+ $\d A = \d x\;\d y$ in Cartesian coordinates.
+\end{prop}
+
+\begin{eg}
+ We integrate over the triangle bounded by $(0, 0), (2, 0)$ and $(0, 1)$. We want to integrate the function $f(x, y) = x^2y$ over the area. So
+ \begin{align*}
+ \int _D f(x, y)\;\d A &= \int_0^1 \left(\int_0^{2 - 2y}x^2y \;\d x\right)\; \d y\\
+ &= \int_0^1 y\left[\frac{x^3}{3}\right]^{2 - 2y}_0 \;\d y\\
+ &= \frac{8}{3}\int_0^1 y(1 - y)^3\;\d y\\
+ &= \frac{2}{15}
+ \end{align*}
+ We can integrate it the other way round:
+ \begin{align*}
+ \int_D x^2 y\;\d A &= \int_0^2 \int_0^{1 - x/2}x^2y \;\d y\;\d x\\
+ &= \int_0^2 x^2\left[\frac{1}{2}y^2\right]_0^{1 - x/2}\;\d x\\
+ &= \int_0^2 \frac{x^2}{2} \left(1 - \frac{x}{2}\right)^2 \;\d x\\
+ &= \frac{2}{15}
+ \end{align*}
+\end{eg}
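We can double-check the value $\frac{2}{15}$ with a crude iterated quadrature (a Python sketch, illustrative only):

```python
# Iterated midpoint-rule check of int_D x^2 y dA over the triangle
# with vertices (0,0), (2,0), (0,1); expected value is 2/15
def inner(y, m=200):
    # Midpoint rule for int_0^{2-2y} x^2 y dx
    b = 2 - 2 * y
    h = b / m
    return sum(((i + 0.5) * h)**2 * y * h for i in range(m))

m = 500
hy = 1.0 / m
total = sum(inner((j + 0.5) * hy) * hy for j in range(m))
print(total)  # close to 2/15 = 0.1333...
```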
+
+Since it doesn't matter whether we integrate $x$ first or $y$ first, if we find it difficult to integrate one way, we can try doing it the other way and see if it is easier.
+
+While this integral is tedious in general, there is a special case where it is substantially easier.
+\begin{defi}[Separable function]
+ A function $f(x, y)$ is \emph{separable} if it can be written as $f(x, y) = h(y)g(x)$.
+\end{defi}
+
+\begin{prop}
+ Take separable $f(x, y) = h(y)g(x)$ and $D$ be a rectangle $\{(x, y): a\leq x\leq b, c\leq y \leq d\}$. Then
+ \[
+ \int_Df(x, y)\;\d x\;\d y = \left(\int_a^b g(x)\;\d x\right)\left(\int_c^d h(y)\;\d y\right)
+ \]
+\end{prop}
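The factorisation is easy to verify numerically. In this sketch (illustrative only), the separable function $f(x, y) = x^2 e^{-y}$ on the rectangle $[0, 2]\times[0, 1]$ is an assumed test case:

```python
import math

def midpoint(f, a, b, n=2000):
    # Midpoint rule for int_a^b f
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Separable test case: g(x) = x^2, h(y) = e^{-y} on [0, 2] x [0, 1]
double = midpoint(lambda x: midpoint(lambda y: x**2 * math.exp(-y),
                                     0, 1, 200), 0, 2, 200)
product = (midpoint(lambda x: x**2, 0, 2)
           * midpoint(lambda y: math.exp(-y), 0, 1))
print(double, product)  # both close to (8/3)(1 - 1/e)
```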
+
+\subsection{Change of variables for an integral in \tph{$\R^2$}{R2}{&\#x211D;2}}
+\begin{prop}
+ Suppose we have a change of variables $(x, y)\leftrightarrow (u, v)$ that is smooth and invertible, with regions $D, D'$ in one-to-one correspondence. Then
+ \[
+ \int_D f(x, y)\;\d x\;\d y = \int_{D'} f(x(u, v), y(u, v))|J|\;\d u\;\d v,
+ \]
+ where
+ \[
+ J = \frac{\partial (x, y)}{\partial (u, v)} =
+ \begin{vmatrix}
+ \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v}\vspace{5pt}\\
+ \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v}\\
+ \end{vmatrix}
+ \]
+ is the Jacobian. In other words,
+ \[
+ \;\d x\;\d y = |J|\;\d u\;\d v.
+ \]
+\end{prop}
+
+\begin{proof}
+ Since we are writing $(x(u, v), y(u, v))$, we are actually transforming from $(u, v)$ to $(x, y)$ and not the other way round.
+
+ Suppose we start with an area $\delta A' = \delta u\delta v$ in the $(u, v)$ plane. Then by Taylor's theorem, we have
+ \[
+ \delta x = x(u + \delta u, v + \delta v) - x(u, v) \approx \frac{\partial x}{\partial u}\delta u + \frac{\partial x}{\partial v}\delta v.
+ \]
+ We have a similar expression for $\delta y$ and we obtain
+ \[
+ \begin{pmatrix}
+ \delta x\\
+ \delta y
+ \end{pmatrix}
+ \approx
+ \begin{pmatrix}
+ \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v}\\
+ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v}\\
+ \end{pmatrix}
+ \begin{pmatrix}
+ \delta u\\
+ \delta v
+ \end{pmatrix}
+ \]
+ Recall from Vectors and Matrices that the determinant of the matrix is how much it scales up an area. So the area formed by $\delta x$ and $\delta y$ is $|J|$ times the area formed by $\delta u$ and $\delta v$. Hence
+ \[
+ \d x\;\d y = |J| \;\d u\;\d v.\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ We transform from $(x, y)$ to $(\rho, \varphi)$ with
+ \begin{align*}
+ x &= \rho\cos \varphi\\
+ y &= \rho\sin \varphi
+ \end{align*}
+ We have previously calculated that $|J| = \rho$. So
+ \[
+ \d A = \rho \;\d \rho \;\d \varphi.
+ \]
+ Suppose we want to integrate a function over a quarter area $D$ of radius $R$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (2.5, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 2.5) node [above] {$y$};
+ \draw [fill=gray] (0, 0) -- (2, 0) arc (0:90:2) -- cycle;
+ \node at (1.4, 1.4) [anchor=south west] {$D$};
+ \end{tikzpicture}
+ \end{center}
+ Let the function to be integrated be $f = \exp(-(x^2 + y^2)/2) = \exp(-\rho^2/2)$. Then
+ \begin{align*}
+ \int f\;\d A &= \int f\rho\;\d\rho\;\d\varphi\\
+ &=\int_{\rho=0}^R\left(\int_{\varphi=0}^{\pi/2}e^{-\rho^2/2}\rho \;\d \varphi\right)\d \rho\\
+ \intertext{Note that in polar coordinates, we are integrating over a rectangle and the function is separable. So this is equal to}
+ &= \left[-e^{-\rho^2/2}\right]^R_0\left[\varphi\right]_0^{\pi/2}\\
+ &= \frac{\pi}{2}\left( 1 - e^{-R^2/2}\right).\tag{$*$}
+ \end{align*}
+ Note that the integral exists as $R\to \infty$.
+
+ Now take the limit $R\to \infty$, so that $D$ becomes the whole first quadrant, and consider the original integral in Cartesian coordinates.
+ \begin{align*}
+ \int _D f\;\d A &= \int_{x = 0}^{\infty} \int_{y=0}^\infty e^{-(x^2 + y^2)/2}\;\d x\;\d y\\
+ &= \left(\int_0^\infty e^{-x^2/2}\;\d x\right)\left(\int_0^\infty e^{-y^2/2}\;\d y\right)\\
+ &= \frac{\pi}{2}
+ \end{align*}
+ where the last line is from (*). So each of the two integrals must be $\sqrt{\pi/2}$, i.e.
+ \[
+ \int_0^\infty e^{-x^2/2}\;\d x = \sqrt{\frac{\pi}{2}}.
+ \]
+
+\end{eg}
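This famous Gaussian integral can be confirmed by direct quadrature (a Python sketch, illustrative only; the truncation point $x = 10$ is an arbitrary choice beyond which the tail is negligible):

```python
import math

# Midpoint-rule estimate of int_0^infty e^{-x^2/2} dx, truncated
# at x = 10; the expected value is sqrt(pi/2)
n, b = 100000, 10.0
h = b / n
gauss = sum(math.exp(-(((i + 0.5) * h)**2) / 2) for i in range(n)) * h
print(gauss, math.sqrt(math.pi / 2))  # the two agree
```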
+\subsection{Generalization to \tph{$\R^3$}{R3}{&\#x211D;3}}
+We will do exactly the same thing as we just did, but with one more dimension:
+\begin{defi}[Volume integral]
+ Consider a volume $V\subseteq \R^3$ with position vector $\mathbf{r} = (x, y, z)$. We approximate $V$ by $N$ small disjoint subsets of some simple shape (e.g.\ cuboids) labelled by $I$, volume $\delta V_I$, contained within a solid sphere of diameter $\ell$.
+
+ Assume that as $\ell \to 0$ and $N\to \infty$, the union of the small subsets tend to $V$. Then
+ \[
+ \int_V f(\mathbf{r})\;\d V = \lim_{\ell \to 0} \sum_{I}f(\mathbf{r}_I^*) \delta V_I,
+ \]
+ where $\mathbf{r}_I^*$ is any chosen point in each small subset.
+\end{defi}
+To evaluate this, we can take $\delta V_I = \delta x \delta y \delta z$, and take $\delta x\to 0$, $\delta y\to 0$ and $\delta z\to 0$ in some order. For example,
+\[
+ \int_V f(\mathbf{r})\;\d V = \int_D \left(\int_{Z_{xy}}f(x, y, z)\;\d z\right)\;\d x\;\d y.
+\]
+So we integrate $f(x, y, z)$ over $z$ at each point $(x, y)$, then take the integral of that over the area containing all required $(x, y)$.
+
+Alternatively, we can take the area integral first, and have
+\[
+ \int_V f(\mathbf{r})\;\d V = \int_z\left(\int_{D_Z} f(x, y, z)\;\d x\;\d y\right)\;\d z.
+\]
+Again, if we take $f = 1$, then we obtain the volume of $V$.
+
+Often, $f(\mathbf{r})$ is the density of some quantity, and is usually denoted by $\rho$. For example, we might have mass density, charge density, or probability density. $\rho(\mathbf{r})\delta V$ is then the amount of quantity in a small volume $\delta V$ at $\mathbf{r}$. Then $\int_V \rho(\mathbf{r}) \;\d V$ is the total amount of quantity in $V$.
+
+\begin{defi}[Volume element]
+ The \emph{volume element} is $\d V$.
+\end{defi}
+
+\begin{prop}
+ $\d V = \d x\; \d y\; \d z$.
+\end{prop}
+
+We can change variables by some smooth, invertible transformation $(x, y, z)\mapsto (u, v, w)$. Then
+\begin{prop}
+ \[
+ \int_V f\;\d x\;\d y\;\d z = \int_{V'} f|J|\;\d u\;\d v\;\d w,
+ \]
+ with
+ \[
+ J = \frac{\partial(x, y, z)}{\partial(u, v, w)} =
+ \begin{vmatrix}
+ \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} & \dfrac{\partial x}{\partial w}\vspace{5pt} \\
+ \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} & \dfrac{\partial y}{\partial w}\vspace{5pt} \\
+ \dfrac{\partial z}{\partial u} & \dfrac{\partial z}{\partial v} & \dfrac{\partial z}{\partial w}
+ \end{vmatrix}
+ \]
+\end{prop}
+
+\begin{prop}
+ In cylindrical coordinates,
+ \[
+ \d V = \rho\;\d \rho\;\d \varphi\;\d z.
+ \]
+ In spherical coordinates
+ \[
+ \d V = r^2\sin\theta \;\d r\;\d \theta\;\d \varphi.
+ \]
+\end{prop}
+
+\begin{proof}
+ Loads of algebra.
+\end{proof}
+
+\begin{eg}
+ Suppose $f(\mathbf{r})$ is spherically symmetric and $V$ is a sphere of radius $a$ centered on the origin. Then
+ \begin{align*}
+ \int_V f\;\d V &= \int_{r = 0}^a \int_{\theta = 0}^\pi \int_{\varphi = 0}^{2\pi} f(r)r^2 \sin\theta\;\d r\;\d \theta\;\d\varphi\\
+ &= \int_0^a \d r\int_0^\pi \d \theta\int_0^{2\pi} \d \varphi \;r^2 f(r) \sin \theta\\
+ &= \int_0^a \; r^2 f(r) \d r\Big[-\cos \theta\Big]_0^\pi \Big[\varphi\Big]_0^{2\pi}\\
+ &= 4\pi\int_0^a f(r)r^2 \;\d r.
+ \end{align*}
+ where we separated the integral into three parts as in the area integrals.
+
+ Note that in the second line, we rewrote the integrals to write the differentials next to the integral sign. This is simply a different notation that saves us from writing $r = 0$ etc. in the limits of the integrals.
+
+ This is a useful general result. We understand it as the sum of spherical shells of thickness $\delta r$ and volume $4\pi r^2 \delta r$.
+
+ If we take $f = 1$, then we have the familiar result that the volume of a sphere is $\frac{4}{3}\pi a^3$.
+\end{eg}
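The $f = 1$ case is easy to check by Monte Carlo sampling (a Python sketch, illustrative only; the radius $a = 2$ and sample count are arbitrary choices):

```python
import math
import random

# Monte Carlo check of the f = 1 case: the volume of a sphere of
# radius a is 4 pi int_0^a r^2 dr = (4/3) pi a^3
random.seed(1)
a, n, hits = 2.0, 200000, 0
for _ in range(n):
    x, y, z = (random.uniform(-a, a) for _ in range(3))
    if x * x + y * y + z * z <= a * a:
        hits += 1
volume = hits / n * (2 * a)**3  # fraction of the bounding cube hit
print(volume, 4 / 3 * math.pi * a**3)
```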
+
+\begin{eg}
+ Consider a volume within a sphere of radius $a$ with a cylinder of radius $b$ ($b < a$) removed. The region is defined as
+ \begin{align*}
+ x^2 + y^2 + z^2 &\leq a^2\\
+ x^2 + y^2 &\geq b^2.
+ \end{align*}
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \clip (-.99, 2) rectangle (-3, -2.1) (.99, 2) rectangle (3, -2.1);
+ \draw circle [radius=2];
+ \end{scope}
+ \draw (0, 1.72) circle [x radius=1, y radius = 0.15];
+ \draw [dash pattern= on 4pt off 4pt] (1, -1.72) arc (0: 180:1 and 0.15);
+ \draw (-1, -1.72) arc (180: 360:1 and 0.15);
+ \draw [dash pattern= on 4pt off 4pt] (1, 1.72) -- (1, -1.72);
+ \draw [dash pattern= on 4pt off 4pt] (-1, 1.72) -- (-1, -1.72);
+ \draw [dash pattern= on 2pt off 2pt] (0, 0) -- (1, 1.72) node [pos = 0.5, anchor = north west] {$a$};
+ \draw [dash pattern= on 2pt off 2pt] (0, 0) -- (-1, 0) node [pos = 0.5, above] {$b$};
+ \end{tikzpicture}
+ \end{center}
+ We use cylindrical coordinates. The second criteria gives
+ \[
+ b \leq \rho \leq a.
+ \]
+ For the $x^2 + y^2 + z^2 \leq a^2$ criterion, we have
+ \[
+ -\sqrt{a^2 - \rho^2} \leq z \leq \sqrt{a^2 - \rho^2}.
+ \]
+ So the volume is
+ \begin{align*}
+ \int_V \;\d V &= \int_b^a\d \rho\int_0^{2\pi}\d \varphi \int_{-\sqrt{a^2 - \rho^2}}^{\sqrt{a^2 - \rho^2}}\d z\; \rho\\
+ &= 2\pi\int_b^a 2\rho\sqrt{a^2 - \rho^2}\;\d \rho\\
+ &= 2\pi \left[-\frac{2}{3}(a^2 - \rho^2)^{3/2}\right]^a_b\\
+ &= \frac{4}{3}\pi (a^2 - b^2)^{3/2}.
+ \end{align*}
+\end{eg}
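We can confirm the closed form by evaluating the remaining one-dimensional $\rho$ integral numerically (a Python sketch, illustrative only; $a = 2$, $b = 1$ are assumed values):

```python
import math

# Numerical check of 2 pi int_b^a 2 rho sqrt(a^2 - rho^2) d rho
# against the closed form (4/3) pi (a^2 - b^2)^{3/2}
a, b, n = 2.0, 1.0, 100000
h = (a - b) / n
vol = 0.0
for i in range(n):
    rho = b + (i + 0.5) * h
    vol += 2 * rho * math.sqrt(a * a - rho * rho) * h
vol *= 2 * math.pi
print(vol, 4 / 3 * math.pi * (a * a - b * b)**1.5)
```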
+\begin{eg}
+ Suppose the density of electric charge is $\rho(\mathbf{r}) = \rho_0 \frac{z}{a}$ in a hemisphere $H$ of radius $a$, with $z \geq 0$. What is the total charge of $H$?
+
+ We use spherical polars. So
+ \[
+ r \leq a,\quad 0 \leq \varphi \leq 2\pi,\quad 0 \leq \theta \leq \frac{\pi}{2}.
+ \]
+ We have
+ \[
+ \rho(\mathbf{r}) = \frac{\rho_0}{a}r\cos \theta.
+ \]
+ The total charge $Q$ in $H$ is
+ \begin{align*}
+ \int_H \rho \;\d V &= \int_0^a\d r\int_0^{\pi/2}\d \theta\int_0^{2\pi}\d\varphi\; \frac{\rho_0}{a}r\cos\theta r^2\sin\theta\\
+ &= \frac{\rho_0}{a}\int_0^a r^3 \;\d r \int_0^{\pi/2}\sin\theta \cos\theta\;\d \theta \int_0^{2\pi}\;\d \varphi\\
+ &= \frac{\rho_0}{a}\left[\frac{r^4}{4}\right]^a_0\left[\frac{1}{2}\sin^2\theta\right]^{\pi/2}_0 [\varphi]^{2\pi}_0\\
+ &= \frac{\rho_0 \pi a^3}{4}.
+ \end{align*}
+\end{eg}
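Since the integrand separates, the triple integral reduces to a product of one-dimensional integrals, which we can check numerically (a Python sketch, illustrative only; $\rho_0 = a = 1$ are assumed values):

```python
import math

def midpoint(f, lo, hi, n=5000):
    # Midpoint rule for int_lo^hi f
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# Q = (rho0/a) * int r^3 dr * int sin cos d theta * int d phi,
# with rho0 = a = 1; expected value is pi a^3 / 4
a = 1.0
Q = (1 / a
     * midpoint(lambda r: r**3, 0, a)
     * midpoint(lambda t: math.sin(t) * math.cos(t), 0, math.pi / 2)
     * 2 * math.pi)
print(Q, math.pi * a**3 / 4)
```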
+\subsection{Further generalizations}
+\subsubsection*{Integration in \tph{$\R^n$}{Rn}{&\#x211D;n}}
+Similar to the above, $\int_D f(x_1, x_2, \cdots x_n)\;\d x_1 \;\d x_2\cdots\;\d x_n$ is simply the integration over an $n$-dimensional volume. The change of variable formula is
+\begin{prop}
+ \[
+ \int_D f(x_1, x_2, \cdots x_n)\;\d x_1 \;\d x_2\cdots\;\d x_n = \int_{D'} f(\{x_i(\mathbf{u})\}) |J| \;\d u_1\;\d u_2\cdots\;\d u_n.
+ \]
+\end{prop}
+
+\subsubsection*{Change of variables for \texorpdfstring{$n = 1$}{n = 1}}
+In the $n = 1$ case, the Jacobian is $\frac{\d x}{\d u}$. However, we use the following formula for change of variables:
+\[
+ \int_D f(x) \;\d x = \int_{D'} f(x(u))\left|\frac{\d x}{\d u}\right| \;\d u.
+\]
+We introduce the modulus because of our natural convention about integrating over $D$ and $D'$. If $D = [a, b]$ with $a < b$, we write $\int_a^b$. But if $a\mapsto \alpha$ and $b \mapsto \beta$ with $\alpha > \beta$, we would like to write $\int_\beta ^\alpha$ instead, so we introduce the modulus in the 1D case.
+
+To show that the modulus is the right thing to do, we check case by case: If $a < b$ and $\alpha < \beta$, then $\frac{\d x}{\d u}$ is positive, and we have, as expected
+\[
+ \int_a^b f(x)\;\d x = \int_\alpha^\beta f(x(u))\frac{\d x}{\d u}\;\d u.
+\]
+If $\alpha > \beta$, then $\frac{\d x}{\d u}$ is negative. So
+\[
+ \int_a^b f(x)\;\d x = \int_\alpha^\beta f(x(u))\frac{\d x}{\d u}\;\d u = -\int_\beta^\alpha f(x(u))\frac{\d x}{\d u}\;\d u.
+\]
+By taking the absolute value of $\frac{\d x}{\d u}$, we ensure that we always have the numerically smaller bound as the lower bound.
+
+This is not easily generalized to higher dimensions, so we don't employ the same trick in other cases.
+
+\subsubsection*{Vector-valued integrals}
+We can define $\int_V \mathbf{F}(\mathbf{r})\;\d V$ in a similar way to $\int_V f(\mathbf{r})\;\d V$ as the limit of a sum over small contributions of volume. In practice, we integrate them componentwise. If
+\[
+ \mathbf{F}(\mathbf{r}) = F_{i}(\mathbf{r})\mathbf{e}_i,
+\]
+then
+\[
+ \int_V \mathbf{F}(\mathbf{r})\;\d V = \left(\int_V F_i(\mathbf{r})\;\d V\right)\mathbf{e}_i.
+\]
+For example, if a mass has density $\rho (\mathbf{r})$, then its mass is
+\[
+ M = \int_V \rho(\mathbf{r})\;\d V
+\]
+and its center of mass is
+\[
+ \mathbf{R} = \frac{1}{M}\int_V \mathbf{r}\rho (\mathbf{r})\;\d V.
+\]
+\begin{eg}
+ Consider a solid hemisphere $H$ with $r \leq a$, $z \geq 0$ with uniform density $\rho$. The mass is
+ \[
+ M = \int_H \rho \;\d V = \frac{2}{3}\pi a^3\rho.
+ \]
+ Now suppose that $\mathbf{R} = (X, Y, Z)$. By symmetry, we expect $X = Y = 0$. We can find this formally by
+ \begin{align*}
+ X &= \frac{1}{M}\int_H x\rho \;\d V\\
+ &= \frac{\rho}{M}\int_0^a \int_0^{\pi/2}\int_0^{2\pi}xr^2 \sin \theta\;\d \varphi\;\d \theta\;\d r\\
+ &= \frac{\rho}{M}\int_0^{a}r^3\;\d r\times \int_0^{\pi/2}\sin^2\theta\;\d \theta\times \int_0^{2\pi}\cos\varphi\;\d \varphi\\
+ &= 0
+ \end{align*}
+ as expected. Note that it evaluates to 0 because the integral of $\cos$ from $0$ to $2\pi$ is 0. Similarly, we obtain $Y = 0$.
+
+ Finally, we find $Z$.
+ \begin{align*}
+ Z &= \frac{\rho}{M}\int_0^a r^3\;\d r \int_0^{\pi/2}\sin\theta\cos\theta\;\d \theta \int_0^{2\pi}\;\d \varphi\\
+ &= \frac{\rho}{M}\left[\frac{a^4}{4}\right]\left[\frac{1}{2}\sin^2\theta\right]_0^{\pi/2}2\pi\\
+ &= \frac{3a}{8}.
+ \end{align*}
+ So $\mathbf{R} = (0, 0, 3a/8)$.
+\end{eg}
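As a quick numerical sanity check (this snippet is illustrative only and not part of the notes), we can approximate $Z$ with a midpoint Riemann sum in spherical coordinates, using $\d V = r^2\sin \theta\;\d r\;\d \theta\;\d \varphi$:

```python
import math

def hemisphere_centroid_z(a, n=200):
    """Midpoint Riemann sum for Z = (1/V) * integral of z over the solid
    hemisphere r <= a, z >= 0, using dV = r^2 sin(theta) dr dtheta dphi.
    The phi integral is trivial and contributes a factor of 2*pi."""
    num = vol = 0.0
    dr, dth = a / n, (math.pi / 2) / n
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            th = (j + 0.5) * dth
            dV = r * r * math.sin(th) * dr * dth * 2 * math.pi
            vol += dV                      # accumulates V = (2/3) pi a^3
            num += r * math.cos(th) * dV   # z = r cos(theta)
    return num / vol
```

For any $a$, the ratio $Z/a$ should come out close to $3/8 = 0.375$.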
+
+\section{Surfaces and surface integrals}
+\subsection{Surfaces and normals}
+So far, we have learnt how to do calculus with regions of the plane or space. What we would like to do now is to study surfaces in $\R^3$. The first thing to figure out is how to specify surfaces. One way to specify a surface is to use an equation. We let $f$ be a smooth function on $\R^3$, and $c$ be a constant. Then $f(\mathbf{r}) = c$ defines a smooth surface (e.g.\ $x^2 + y^2 + z^2 = 1$ denotes the unit sphere).
+
+Now consider any curve $\mathbf{r}(u)$ on $S$. Then by the chain rule, if we differentiate $f(\mathbf{r}) = c$ with respect to $u$, we obtain
+\[
+ \frac{\d }{\d u}[f(\mathbf{r}(u))] = \nabla f\cdot \frac{\d \mathbf{r}}{\d u} = 0.
+\]
+This means that $\nabla f$ is always perpendicular to $\frac{\d \mathbf{r}}{\d u}$. Since $\frac{\d \mathbf{r}}{\d u}$ is the tangent to the curve, $\nabla f$ is perpendicular to the tangent. Since this is true for \emph{any} curve $\mathbf{r}(u)$, $\nabla f$ is perpendicular to any tangent of the surface. Therefore
+\begin{prop}
+ $\nabla f$ is the normal to the surface $f(\mathbf{r}) = c$.
+\end{prop}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Take the sphere $f(\mathbf{r}) = x^2 + y^2 + z^2 = c$ for $c > 0$. Then $\nabla f = 2(x, y, z) = 2\mathbf{r}$, which is clearly normal to the sphere.
+ \item Take $f(\mathbf{r}) = x^2 + y^2 - z^2 = c$, which is a hyperboloid. Then $\nabla f = 2(x, y, -z)$.
+
+ In the special case where $c = 0$, we have a double cone, with a singular apex $\mathbf{0}$. Here $\nabla f = \mathbf{0}$, and we cannot find a meaningful direction of normal.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Boundary]
+ A surface $S$ can be defined to have a \emph{boundary} $\partial S$ consisting of a piecewise smooth curve. If we define $S$ as in the above examples but with the additional restriction $z \geq 0$, then $\partial S$ is the circle $x^2 + y^2 = c$, $z = 0$.
+
+ A surface is \emph{bounded} if it can be contained in a solid sphere, \emph{unbounded} otherwise. A bounded surface with no boundary is called \emph{closed} (e.g.\ sphere).
+\end{defi}
+\begin{eg}\leavevmode
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [draw=none, fill=gray, opacity=0.6] (-2, 0) arc (180:360:2 and 0.5) arc (0:180:2);
+ \draw [mred, thick, dashed] (2, 0) arc (0:180:2 and 0.5);
+ \draw [mred, thick] (-2, 0) arc (180:360:2 and 0.5);
+ \draw (2, 0) arc (0:180:2);
+ \end{tikzpicture}
+ \end{center}
+ The boundary of a hemisphere is a circle (drawn in red).
+\end{eg}
+
+\begin{defi}[Orientable surface]
+ At each point, there is a unit normal $\mathbf{n}$ that's unique up to a sign.
+
+ If we can find a consistent choice of $\mathbf{n}$ that varies smoothly across $S$, then we say $S$ is \emph{orientable}, and the choice of sign of $\mathbf{n}$ is called the \emph{orientation} of the surface.
+\end{defi}
+Most surfaces we encounter are orientable. For example, for a sphere, we can declare that the normal should always point outwards. A notable example of a non-orientable surface is the M\"obius strip (another is the Klein bottle).
+
+For simple cases, we can describe the orientation as ``inward'' and ``outward''.
+
+\subsection{Parametrized surfaces and area}
+However, specifying a surface by an equation $f(\mathbf{r}) = c$ is often not too helpful. What we would like is to put some coordinate system onto the surface, so that we can label each point by a pair of numbers $(u, v)$, just like how we label points in the $x,y$-plane by $(x, y)$. We write $\mathbf{r}(u, v)$ for the point labelled by $(u, v)$.
+
+\begin{eg}
+ Let $S$ be part of a sphere of radius $a$ with $0 \leq \theta \leq \alpha$.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \clip (-2, 1) rectangle (2, 0);
+ \draw [densely dotted] circle [radius=2];
+ \end{scope}
+ \begin{scope}
+ \clip (-2, 2) rectangle (2, 1);
+ \draw circle [radius=2];
+ \end{scope}
+ \draw [dashed] (1.7, 1) arc (0:180:1.7 and 0.25);
+ \draw (-1.7, 1) arc (180:360:1.7 and 0.25);
+ \draw (1.7, 1) -- (0, 0) -- (0, 2);
+ \draw (0, 0.4) arc (90:30:.4);
+ \node at (0.3, 0.5) {$\alpha$};
+ \end{tikzpicture}
+ \end{center}
+ We can then label the points on the spheres by the angles $\theta, \varphi$, with
+ \[
+ \mathbf{r}(\theta, \varphi) = (a\cos\varphi\sin \theta, a\sin \theta\sin \varphi, a\cos \theta) = a\mathbf{e}_r.
+ \]
+ We restrict the values of $\theta, \varphi$ by $0 \leq \theta \leq \alpha$, $0 \leq \varphi \leq 2\pi$, so that each point is only covered once.
+\end{eg}
+Note that to specify a surface, in addition to the function $\mathbf{r}$, we also have to specify what values of $(u, v)$ we are allowed to take. This corresponds to a region $D$ of allowed values of $u$ and $v$. When we do integrals with these surfaces, these will become the bounds of integration.
+
+When we have such a parametrization $\mathbf{r}$, we would want to make sure this indeed gives us a two-dimensional surface. For example, the following two parametrizations would both be bad:
+\[
+ \mathbf{r}(u, v) = u,\quad \mathbf{r}(u, v) = u + v.
+\]
+The idea is that $\mathbf{r}$ has to depend on both $u$ and $v$, and in ``different ways''. More precisely, when we vary the coordinates $(u, v)$, the point $\mathbf{r}$ will change accordingly. By the chain rule, this is given by
+\[
+ \delta \mathbf{r} = \frac{\partial \mathbf{r}}{\partial u}\delta u + \frac{\partial \mathbf{r}}{\partial v}\delta v + o(\delta u, \delta v).
+\]
+Then $\frac{\partial \mathbf{r}}{\partial u}$ and $\frac{\partial \mathbf{r}}{\partial v}$ are tangent vectors to curves on $S$ with $v$ and $u$ constant respectively. What we want is for them to point in different directions.
+
+\begin{defi}[Regular parametrization]
+ A parametrization is \emph{regular} if for all $u, v$,
+ \[
+ \frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v} \not = \mathbf{0},
+ \]
+ i.e.\ there are always two independent tangent directions.
+\end{defi}
+The parametrizations we use will all be regular.
+
+Given a surface, how could we, say, find its area? We can use our parametrization. Suppose points on the surface are given by $\mathbf{r}(u, v)$ for $(u, v) \in D$. If we want to find the area of $D$ itself, we would simply integrate
+\[
+ \int_D \;\d u\;\d v.
+\]
+However, we are just using $u$ and $v$ as arbitrary labels for points in the surface, and one unit of area in $D$ does not correspond to one unit of area in $S$. Instead, suppose we produce a small rectangle in $D$ by changing $u$ and $v$ by small $\delta u, \delta v$. In $D$, this corresponds to a rectangle with vertices $(u, v), (u + \delta u, v), (u, v + \delta v), (u + \delta u, v + \delta v)$, and spans an area $\delta u \delta v$. In the surface $S$, these small changes $\delta u, \delta v$ correspond to changes $\frac{\partial \mathbf{r}}{\partial u} \delta u$ and $\frac{\partial \mathbf{r}}{\partial v} \delta v$, and these span a \emph{vector} area of
+\[
+ \delta \mathbf{S} = \frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v} \delta u \delta v= \mathbf{n}\;\delta S.
+\]
+Note that the order of $u, v$ gives the choice of the sign of the unit normal.
+
+The actual area is then given by
+\[
+ \delta S =\left|\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right|\;\delta u\;\delta v.
+\]
+Making these into differentials instead of deltas, we have
+\begin{prop}
+ The \emph{vector area element} is
+ \[
+ \d \mathbf{S} = \frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\;\d u\;\d v.
+ \]
+ The \emph{scalar area element} is
+ \[
+ \d S = \left|\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right|\;\d u\;\d v.
+ \]
+\end{prop}
+By summing and taking limits, the area of $S$ is
+\[
+ \int_S \d S = \int_D \left|\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right| \d u\;\d v.
+\]
+\begin{eg}
+ Consider again the part of the sphere of radius $a$ with $0 \leq \theta \leq \alpha$.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \clip (-2, 1) rectangle (2, 0);
+ \draw [densely dotted] circle [radius=2];
+ \end{scope}
+ \begin{scope}
+ \clip (-2, 2) rectangle (2, 1);
+ \draw circle [radius=2];
+ \end{scope}
+ \draw [dashed] (1.7, 1) arc (0:180:1.7 and 0.25);
+ \draw (-1.7, 1) arc (180:360:1.7 and 0.25);
+ \draw (1.7, 1) -- (0, 0) -- (0, 2);
+ \draw (0, 0.4) arc (90:30:.4);
+ \node at (0.3, 0.5) {$\alpha$};
+ \end{tikzpicture}
+ \end{center}
+ Then we have
+ \[
+ \mathbf{r}(\theta, \varphi) = (a\cos\varphi\sin \theta, a\sin \theta\sin \varphi, a\cos \theta) = a\mathbf{e}_r.
+ \]
+ So we find
+ \[
+ \frac{\partial \mathbf{r}}{\partial \theta} = a\mathbf{e}_\theta.
+ \]
+ Similarly, we have
+ \[
+ \frac{\partial \mathbf{r}}{\partial \varphi} = a\sin \theta \mathbf{e}_\varphi.
+ \]
+ Then
+ \[
+ \frac{\partial \mathbf{r}}{\partial \theta}\times \frac{\partial \mathbf{r}}{\partial \varphi} = a^2\sin \theta\, \mathbf{e}_r.
+ \]
+ So
+ \[
+ \d S = a^2\sin \theta\;\d \theta\;\d \varphi.
+ \]
+ Our bounds are $0 \leq \theta \leq \alpha$, $0 \leq \varphi \leq 2\pi$.
+
+ Then the area is
+ \[
+ \int_0^{2\pi}\int_0^{\alpha} a^2\sin \theta\;\d \theta \;\d \varphi = 2\pi a^2(1 - \cos\alpha).
+ \]
+\end{eg}
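We can sanity-check this area formula numerically (a sketch, not from the lectures): sum the scalar area element $a^2 \sin \theta\;\d \theta\;\d \varphi$ with the midpoint rule, where the $\varphi$ integral simply contributes a factor of $2\pi$:

```python
import math

def cap_area(a, alpha, n=1000):
    """Midpoint sum of dS = a^2 sin(theta) dtheta dphi over 0 <= theta <= alpha;
    the phi integral contributes a factor of 2*pi."""
    dth = alpha / n
    s = sum(math.sin((j + 0.5) * dth) for j in range(n)) * dth
    return 2 * math.pi * a * a * s
```

Taking $\alpha = \pi$ recovers the full sphere area $4\pi a^2$, and $\alpha = \pi/2$ gives the hemisphere area $2\pi a^2$, in agreement with $2\pi a^2(1 - \cos\alpha)$.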
+\subsection{Surface integral of vector fields}
+Just computing the area of a surface would be boring. Suppose we have a surface $S$ parametrized by $\mathbf{r}(u, v)$, where $(u, v)$ takes values in $D$. We would like to ask how much ``stuff'' is passing through $S$, where the flow of stuff is given by a vector field $\mathbf{F}(\mathbf{r})$.
+
+We might attempt to use the integral
+\[
+ \int_D |\mathbf{F}| \;\d S.
+\]
+However, this doesn't work. For example, if all the flow is tangential to the surface, then nothing is really passing through the surface, but $|\mathbf{F}|$ is non-zero, so we get a non-zero integral. Instead, what we should do is to consider the component of $\mathbf{F}$ that is \emph{normal} to the surface $S$, i.e.\ parallel to its normal.
+
+\begin{defi}[Surface integral]
+ The \emph{surface integral} or \emph{flux} of a vector field $\mathbf{F}(\mathbf{r})$ over $S$ is defined by
+ \[
+ \int_S \mathbf{F}(\mathbf{r})\cdot \d \mathbf{S} = \int_S \mathbf{F}(\mathbf{r})\cdot \mathbf{n}\;\d S = \int_D \mathbf{F}(\mathbf{r}(u, v))\cdot \left(\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right)\;\d u\;\d v.
+ \]
+ Intuitively, this is the total amount of $\mathbf{F}$ passing through $S$. For example, if $\mathbf{F}$ is the electric field, the flux is the amount of electric field passing through a surface.
+\end{defi}
+
+For a given orientation, the integral $\int \mathbf{F}\cdot \d \mathbf{S}$ is independent of the parametrization. Changing orientation is equivalent to changing the sign of $\mathbf{n}$, which is in turn equivalent to changing the order of $u$ and $v$ in the definition of $\d \mathbf{S}$, which is also equivalent to changing the sign of the flux integral.
+
+\begin{eg}
+ Consider a sphere of radius $a$, $\mathbf{r}(\theta, \varphi)$. Then
+ \[
+ \frac{\partial \mathbf{r}}{\partial \theta} = a\mathbf{e}_\theta,\quad \frac{\partial \mathbf{r}}{\partial \varphi} = a\sin \theta \mathbf{e}_\varphi.
+ \]
+ The vector area element is
+ \[
+ \d \mathbf{S} = a^2\sin \theta \mathbf{e}_r \;\d \theta\; \d\varphi,
+ \]
+ taking the outward normal $\mathbf{n} = \mathbf{e}_r = \mathbf{r}/a$.
+
+ Suppose we want to calculate the fluid flux through the surface. The \emph{velocity field} $\mathbf{u}(\mathbf{r})$ of a fluid gives the velocity of the fluid at the point $\mathbf{r}$. Assume that $\mathbf{u}$ depends smoothly on $\mathbf{r}$ (and $t$). For any small area $\delta S$ on a surface $S$, the volume of fluid crossing it in time $\delta t$ is $\mathbf{u}\cdot \delta \mathbf{S}\; \delta t$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [x radius = 1, y radius = 0.3];
+ \node at (1, 0) [right] {$\delta S$};
+ \draw (1, 2) circle [x radius = 1, y radius = 0.3];
+ \draw (-1, 0) -- (0, 2);
+ \draw [->] (1, 0) -- (2, 2) node [right] {$\mathbf{u}\;\delta t$};
+ \draw [->] (0, 0) -- (0, .75) node [above] {$\mathbf{n}$};
+ \end{tikzpicture}
+ \end{center}
+ So the amount of flow of $\mathbf{u}$ over at time $\delta t$ through $S$ is
+ \[
+ \delta t\int_S \mathbf{u}\cdot \d \mathbf{S}.
+ \]
+ So $\int_S \mathbf{u}\cdot \d \mathbf{S}$ is the \emph{rate} of volume crossing $S$.
+
+ For example, let $\mathbf{u} = (-x, 0, z)$ and $S$ be the section of a sphere of radius $a$ with $0 \leq \varphi \leq 2\pi$ and $0 \leq \theta \leq \alpha$. Then
+ \[
+ \d \mathbf{S} = a^2 \sin \theta \mathbf{n}\;\d \varphi \;\d \theta,
+ \]
+ with
+ \[
+ \mathbf{n} = \frac{\mathbf{r}}{a} = \frac{1}{a}(x, y, z).
+ \]
+ So
+ \[
+ \mathbf{n}\cdot \mathbf{u} = \frac{1}{a}(-x^2 + z^2) = a(-\sin^2\theta\cos^2\varphi + \cos^2 \theta).
+ \]
+ Therefore
+ \begin{align*}
+ \int_S \mathbf{u}\cdot \d \mathbf{S} &= \int_0^\alpha \int_0^{2\pi} a^3 \sin \theta[(\cos^2\theta - 1) \cos^2 \varphi + \cos^2 \theta]\;\d \varphi\;\d \theta\\
+ &= \int_0^\alpha a^3\sin \theta[\pi(\cos^2\theta - 1) + 2\pi \cos^2\theta]\; \d \theta\\
+ &=\int_0^\alpha a^3\pi(3\cos^2 \theta - 1)\sin \theta\;\d \theta\\
+ &= \pi a^3[\cos\theta - \cos^3 \theta]_0^\alpha\\
+ &= \pi a^3 \cos \alpha\sin^2 \alpha.
+ \end{align*}
+\end{eg}
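This flux can be verified numerically (an illustrative sketch, not part of the notes), by summing $\mathbf{u}\cdot \mathbf{n}\;\d S$ over the cap and comparing with $\pi a^3\cos\alpha\sin^2\alpha$:

```python
import math

def flux(a, alpha, n=300):
    """Midpoint sum of u . dS for u = (-x, 0, z) over the spherical cap
    0 <= theta <= alpha, using the integrand derived above:
    a^3 sin(theta) [(cos^2(theta) - 1) cos^2(phi) + cos^2(theta)]."""
    total = 0.0
    dth, dph = alpha / n, 2 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            ph = (j + 0.5) * dph
            total += (a**3 * math.sin(th)
                      * ((math.cos(th)**2 - 1) * math.cos(ph)**2
                         + math.cos(th)**2)
                      * dth * dph)
    return total
```

For instance, with $a = 1$ and $\alpha = \pi/3$ the sum should approach $\pi\cos(\pi/3)\sin^2(\pi/3) = 3\pi/8$.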
+
+What happens when we change parametrization? Let $\mathbf{r}(u, v)$ and $\mathbf{r}(\tilde{u}, \tilde{v})$ be two regular parametrizations for the surface. By the chain rule,
+\begin{align*}
+ \frac{\partial \mathbf{r}}{\partial u} &= \frac{\partial \mathbf{r}}{\partial \tilde{u}}\frac{\partial\tilde{u}}{\partial u} + \frac{\partial \mathbf{r}}{\partial \tilde{v}}\frac{\partial\tilde{v}}{\partial u}\\
+ \frac{\partial \mathbf{r}}{\partial v} &= \frac{\partial \mathbf{r}}{\partial \tilde{u}}\frac{\partial\tilde{u}}{\partial v} + \frac{\partial \mathbf{r}}{\partial \tilde{v}}\frac{\partial\tilde{v}}{\partial v}
+\end{align*}
+So
+\[
+ \frac{\partial \mathbf{r}}{\partial u}\times\frac{\partial\mathbf{r}}{\partial v} = \frac{\partial (\tilde{u}, \tilde{v})}{\partial (u, v)} \frac{\partial \mathbf{r}}{\partial \tilde {u}}\times\frac{\partial\mathbf{r}}{\partial \tilde{v}}
+\]
+where $\frac{\partial (\tilde{u}, \tilde{v})}{\partial(u, v)}$ is the Jacobian.
+
+Since
+\[
+ \d \tilde{u}\;\d \tilde{v} = \frac{\partial (\tilde{u}, \tilde{v})}{\partial (u, v)}\;\d u\;\d v,
+\]
+we recover the formula
+\[
+ \d S = \left|\frac{\partial \mathbf{r}}{\partial u}\times\frac{\partial\mathbf{r}}{\partial v}\right|\;\d u\;\d v = \left|\frac{\partial \mathbf{r}}{\partial \tilde {u}}\times\frac{\partial\mathbf{r}}{\partial \tilde{v}}\right| \;\d \tilde{u}\;\d \tilde{v}.
+\]
+Similarly, we have
+\[
+ \d \mathbf{S} = \frac{\partial \mathbf{r}}{\partial u}\times\frac{\partial\mathbf{r}}{\partial v}\;\d u\;\d v = \frac{\partial \mathbf{r}}{\partial \tilde {u}}\times\frac{\partial\mathbf{r}}{\partial \tilde{v}} \;\d \tilde{u}\;\d \tilde{v},
+\]
+provided $(u, v)$ and $(\tilde{u}, \tilde{v})$ have the same orientation.
+\subsection{Change of variables in \tph{$\R^2$ and $\R^3$}{R2 and R3}{&\#x211D;2 and &\#x211D;3} revisited}
+In this section, we derive our change of variable formulae in a slightly different way.
+
+\subsubsection*{Change of variable formula in \tph{$\R^2$}{R2}{&\#x211D;2}}
+We first derive the 2D change of variable formula from the 3D surface integral formula.
+
+Consider a subset $S$ of the plane $\R^2$ parametrized by $\mathbf{r}(x(u, v), y(u, v))$. We can embed it to $\R^3$ as $\mathbf{r}(x(u, v), y(u, v), 0)$. Then
+\[
+ \frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial\mathbf{r}}{\partial v} = (0, 0, J),
+\]
+with $J$ being the Jacobian.
+Therefore
+\[
+ \int_S f(\mathbf{r})\;\d S = \int_D f(\mathbf{r}(u, v)) \left|\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right| \;\d u\;\d v= \int_D f(\mathbf{r}(u, v))|J|\;\d u\;\d v,
+\]
+and we recover the formula for changing variables in $\R^2$.
+
+\subsubsection*{Change of variable formula in \tph{$\R^3$}{R3}{&\#x211D;3}}
+In $\R^3$, suppose we have a volume parametrized by $\mathbf{r}(u, v, w)$. Then
+\[
+ \delta \mathbf{r} = \frac{\partial \mathbf{r}}{\partial u}\delta u + \frac{\partial \mathbf{r}}{\partial v}\delta v + \frac{\partial \mathbf{r}}{\partial w}\delta w + o(\delta u, \delta v, \delta w).
+\]
+Then the cuboid $\delta u, \delta v, \delta w$ in $u, v, w$ space is mapped to a parallelepiped of volume
+\[
+ \delta V = \left|\frac{\partial \mathbf{r}}{\partial u}\delta u\cdot \left( \frac{\partial \mathbf{r}}{\partial v}\delta v \times \frac{\partial \mathbf{r}}{\partial w}\delta w\right)\right| = |J|\;\delta u\; \delta v\;\delta w.
+\]
+So $\d V = |J|\; \d u\; \d v\; \d w$.
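The triple-product formula for $\delta V$ can be checked numerically by taking finite-difference partials of a concrete map. The sketch below (an illustration, not from the notes) does this for spherical coordinates, where $|J| = r^2\sin\theta$:

```python
import math

def spherical(r, th, ph):
    """The map (r, theta, phi) -> (x, y, z)."""
    return (r * math.sin(th) * math.cos(ph),
            r * math.sin(th) * math.sin(ph),
            r * math.cos(th))

def jacobian(r_map, u, v, w, h=1e-6):
    """J = dr/du . (dr/dv x dr/dw), with the partial derivatives
    approximated by central finite differences."""
    def partial(k):
        p, q = [u, v, w], [u, v, w]
        p[k] += h
        q[k] -= h
        return [(x - y) / (2 * h) for x, y in zip(r_map(*p), r_map(*q))]
    a, b, c = partial(0), partial(1), partial(2)
    cross = (b[1] * c[2] - b[2] * c[1],
             b[2] * c[0] - b[0] * c[2],
             b[0] * c[1] - b[1] * c[0])
    return sum(x * y for x, y in zip(a, cross))
```

At $(r, \theta, \varphi) = (2, 1, 0.5)$ this should return approximately $r^2\sin\theta = 4\sin 1$.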
+
+\section{Geometry of curves and surfaces}
+Let $\mathbf{r}(s)$ be a curve parametrized by arclength $s$. Since $\mathbf{t}(s) = \frac{\d \mathbf{r}}{\d s}$ is a unit vector, $\mathbf{t}\cdot \mathbf{t} = 1$. Differentiating yields $\mathbf{t}\cdot \mathbf{t}' = 0$. So $\mathbf{t}'$ is a normal to the curve if $\mathbf{t}' \not= 0$.
+
+We define the following:
+\begin{defi}[Principal normal and curvature]
+ Write $\mathbf{t}' = \kappa \mathbf{n}$, where $\mathbf{n}$ is a unit vector and $\kappa > 0$. Then $\mathbf{n}(s)$ is called the \emph{principal normal} and $\kappa(s)$ is called the \emph{curvature}.
+\end{defi}
+Note that we \emph{must} be differentiating with respect to $s$, not any other parametrization! If the curve is given in another parametrization, we can either change the parametrization or use the chain rule.
+
+We take a curve that can be Taylor expanded around $s = 0$. Then
+\[
+ \mathbf{r}(s) = \mathbf{r}(0) + s\mathbf{r}'(0) + \frac{1}{2}s^2 \mathbf{r}''(0) + O(s^3).
+\]
+We know that $\mathbf{r}' = \mathbf{t}$ and $\mathbf{r}'' = \mathbf{t}'$. So we have
+\[
+ \mathbf{r}(s) = \mathbf{r}(0) + s\mathbf{t}(0) + \frac{1}{2}\kappa(0) s^2 \mathbf{n} + O(s^3).
+\]
+How can we interpret $\kappa$ as the curvature? Suppose we want to approximate the curve near $\mathbf{r}(0)$ by a circle. We would expect a more ``curved'' curve would be approximated by a circle of smaller radius. So $\kappa$ should be inversely proportional to the radius of the circle. In fact, we will show that $\kappa = 1/a$, where $a$ is the radius of the best-fit circle.
+
+Consider the vector equation for a circle passing through $\mathbf{r}(0)$ with radius $a$ in the plane defined by $\mathbf{t}$ and $\mathbf{n}$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius = 1];
+ \draw [->] (0, 0) -- (0.707, 0.707) node [anchor = south east, pos = 0.5] {$a$};
+ \draw [->] (-2, -2) -- (0, -1) node [anchor = south east, pos = 0.5] {$\mathbf{r}(0)$};
+ \draw [->] (0, -1) -- (0.5, -1) node [right] {$\mathbf{t}$};
+ \draw [->] (0, -1) -- (0, -0.5) node [left] {$\mathbf{n}$};
+ \draw [dashed] (0, 0) -- (0, -1);
+ \draw [dashed] (0, 0) -- (0.707, -0.707);
+ \draw (0, -0.5) arc (270:315:0.5);
+ \node at (0.2, -0.6) {$\theta$};
+ \draw (-2, 0) parabola bend (0, -1) (2, 0);
+ \end{tikzpicture}
+\end{center}
+Then the equation of the circle is
+\[
+ \mathbf{r} = \mathbf{r}(0) + a(1 - \cos \theta) \mathbf{n} + a\sin \theta \mathbf{t}.
+\]
+We can expand this to obtain
+\[
+ \mathbf{r} = \mathbf{r}(0) + a\theta \mathbf{t} + \frac{1}{2}\theta^2 a\mathbf{n} + O(\theta^3).
+\]
+Since the arclength $s = a\theta$, we obtain
+\[
+ \mathbf{r} = \mathbf{r}(0) + s\mathbf{t} + \frac{1}{2}\frac{1}{a}s^2\mathbf{n} + O(s^3).
+\]
+As promised, $\kappa = 1/a$, for $a$ the radius of the circle of best fit.
+
+\begin{defi}[Radius of curvature]
+ The \emph{radius of curvature} of a curve at a point $\mathbf{r}(s)$ is $1/\kappa(s)$.
+\end{defi}
+Since we are in 3D, given $\mathbf{t}(s)$ and $\mathbf{n}(s)$, there is one more independent direction normal to the curve. Adding it gives an orthonormal basis.
+\begin{defi}[Binormal]
+ The \emph{binormal} of a curve is $\mathbf{b} = \mathbf{t}\times \mathbf{n}$.
+\end{defi}
+
+We can define the torsion similar to the curvature, but with the binormal instead of the tangent.\footnote{This was not taught in lectures, but there is a question on the example sheet about the torsion, so I might as well include it here.}
+
+\begin{defi}[Torsion] Let $\mathbf{b}' = -\tau \mathbf{n}$. Then $\tau$ is the \emph{torsion}.
+\end{defi}
+Note that this makes sense, since $\mathbf{b}'$ is both perpendicular to $\mathbf{t}$ and $\mathbf{b}$, and hence must be in the same direction as $\mathbf{n}$. ($\mathbf{b}' = \mathbf{t}'\times \mathbf{n} + \mathbf{t}\times \mathbf{n}' = \mathbf{t}\times \mathbf{n}'$, so $\mathbf{b}'$ is perpendicular to $\mathbf{t}$; and $\mathbf{b} \cdot \mathbf{b} = 1\Rightarrow \mathbf{b}\cdot \mathbf{b}' = 0$. So $\mathbf{b}'$ is perpendicular to $\mathbf{b}$).
+
+The geometry of the curve is encoded in how this basis ($\mathbf{t}, \mathbf{n}, \mathbf{b}$) changes along it. This can be specified by two scalar functions of arc length --- the curvature $\kappa(s)$ and the \emph{torsion} $\tau(s)$ (which determines what the curve looks like to third order in its Taylor expansions and how the curve lifts out of the $\mathbf{t}, \mathbf{n}$ plane).
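For a concrete example (not from the lectures), a helix of radius $A$ and pitch parameter $B$ has constant curvature $\kappa = A/(A^2 + B^2)$ and torsion $\tau = B/(A^2 + B^2)$. The sketch below recovers $\kappa$ numerically from the definition $\kappa = |\mathbf{t}'(s)|$, with the helix parametrized by arclength; the values of $A$ and $B$ are arbitrary choices for illustration:

```python
import math

A, B = 3.0, 4.0          # helix radius and pitch parameter (arbitrary values)
C = math.hypot(A, B)     # normalisation so that s is genuine arclength

def helix(s):
    """Helix parametrized by arclength: |dr/ds| = 1."""
    return (A * math.cos(s / C), A * math.sin(s / C), B * s / C)

def deriv(f, s, h):
    """Central-difference derivative of a vector-valued function of s."""
    return [(p - q) / (2 * h) for p, q in zip(f(s + h), f(s - h))]

def curvature(s):
    """kappa = |t'(s)| with t = dr/ds; closed form is A / (A^2 + B^2)."""
    tprime = deriv(lambda u: deriv(helix, u, 1e-5), s, 1e-4)
    return math.sqrt(sum(x * x for x in tprime))
```

With $A = 3$, $B = 4$ the curvature should be $3/25$ at every $s$, reflecting the fact that a helix bends at a constant rate.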
+
+\subsubsection*{Surfaces and intrinsic geometry*}
+We can study the geometry of surfaces through curves which lie on them. At a given point $P$ on a surface $S$ with normal $\mathbf{n}$, consider a plane containing $\mathbf{n}$. The intersection of the plane with the surface yields a curve on the surface through $P$. This curve has a curvature $\kappa$ at $P$.
+
+If we choose different planes containing $\mathbf{n}$, we end up with different curves of different curvature. Then we define the following:
+\begin{defi}[Principal curvature]
+ The \emph{principal curvatures} of a surface at $P$ are the minimum and maximum possible curvature of a curve through $P$, denoted $\kappa_{\min}$ and $\kappa_{\max}$ respectively.
+\end{defi}
+
+\begin{defi}[Gaussian curvature]
+ The \emph{Gaussian curvature} of a surface at a point $P$ is $K = \kappa_{\min}\kappa_{\max}$.
+\end{defi}
+
+\begin{thm}[Theorema Egregium]
+ $K$ is \emph{intrinsic} to the surface $S$. It can be expressed in terms of lengths, angles etc.\ which are measured entirely on the surface. So $K$ can be defined on an arbitrary surface without embedding it in a higher-dimensional space.
+\end{thm}
+This is the start of \emph{intrinsic geometry}: if we embed a surface in Euclidean space, we can determine lengths, angles etc.\ on it. But we don't have to do so --- we can ``live in'' the surface and do geometry in it without an embedding.
+
+For example, we can consider a geodesic triangle $D$ on a surface $S$. It consists of three geodesics: shortest curves between two points.
+
+Let $\theta_i$ be the interior angles of the triangle (defined by using scalar products of tangent vectors). Then
+\begin{thm}[Gauss-Bonnet theorem]
+ \[
+ \theta_1 +\theta_2 + \theta_3 = \pi + \int_D K\; \d A,
+ \]
+ integrating over the area of the triangle.
+\end{thm}
+
+\section{Div, Grad, Curl and \tph{$\nabla$}{del}{∇}}
+\subsection{Div, Grad, Curl and \tph{$\nabla$}{del}{∇}}
+Recall that $\nabla f$ is given by $(\nabla f)_i = \frac{\partial f}{\partial x_i}$. We can regard this as obtained from the scalar field $f$ by applying
+\[
+ \nabla = \mathbf{e}_i \frac{\partial}{\partial x_i}
+\]
+for Cartesian coordinates $x_i$ and orthonormal basis $\mathbf{e}_i$, where $\mathbf{e}_i$ are orthonormal and right-handed, i.e.\ $\mathbf{e}_i\times \mathbf{e}_j = \varepsilon_{ijk} \mathbf{e}_k$ (it is left-handed if $\mathbf{e}_i\times \mathbf{e}_j = -\varepsilon_{ijk} \mathbf{e}_k$).
+
+We can alternatively write this as
+\[
+ \nabla = \left(\frac{\partial}{\partial x}, \frac{\partial }{\partial y}, \frac{\partial}{\partial z}\right).
+\]
+$\nabla$ (\emph{nabla} or \emph{del}) is both an operator and a vector. We can apply it to a vector field $\mathbf{F}(\mathbf{r}) = F_i(\mathbf{r})\mathbf{e}_i$ using the scalar or vector product.
+
+\begin{defi}[Divergence]
+ The \emph{divergence} or \emph{div} of $\mathbf{F}$ is
+ \[
+ \nabla\cdot \mathbf{F} = \frac{\partial F_i}{\partial x_i} = \frac{\partial F_1}{\partial x_1} + \frac{\partial F_2}{\partial x_2} + \frac{\partial F_3}{\partial x_3}.
+ \]
+\end{defi}
+
+\begin{defi}[Curl]
+ The \emph{curl} of $\mathbf{F}$ is
+ \[
+ \nabla\times \mathbf{F} = \varepsilon_{ijk}\frac{\partial F_k}{\partial x_j}\mathbf{e}_i = \begin{vmatrix}
+ \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3\\
+ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z}\\
+ F_x & F_y & F_z
+ \end{vmatrix}
+ \]
+\end{defi}
+
+\begin{eg}
+ Let $\mathbf{F} = (xe^z, y^2\sin x, xyz)$. Then
+ \[
+ \nabla \cdot \mathbf{F} = \frac{\partial }{\partial x}xe^z + \frac{\partial}{\partial y}y^2 \sin x + \frac{\partial}{\partial z}xyz = e^z + 2y\sin x + xy.
+ \]
+ and
+ \begin{align*}
+ \nabla \times \mathbf{F} &= \hat{\mathbf{i}} \left[\frac{\partial}{\partial y}(xyz) - \frac{\partial}{\partial z}(y^2\sin x)\right]\\
+ &+ \hat{\mathbf{j}} \left[\frac{\partial}{\partial z}(xe^z) - \frac{\partial}{\partial x}(xyz)\right]\\
+ &+ \hat{\mathbf{k}}\left[\frac{\partial}{\partial x}(y^2\sin x) - \frac{\partial}{\partial y} (xe^z)\right]\\
+ &= (xz, xe^z - yz, y^2\cos x).
+ \end{align*}
+\end{eg}
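We can confirm this worked example with central finite differences (a quick numerical sketch, not part of the notes):

```python
import math

def F(x, y, z):
    """The field from the worked example: F = (x e^z, y^2 sin x, xyz)."""
    return (x * math.exp(z), y * y * math.sin(x), x * y * z)

def partial(i, j, p, h=1e-6):
    """dF_i/dx_j at the point p, by central differences."""
    a, b = list(p), list(p)
    a[j] += h
    b[j] -= h
    return (F(*a)[i] - F(*b)[i]) / (2 * h)

def div(p):
    return sum(partial(i, i, p) for i in range(3))

def curl(p):
    return (partial(2, 1, p) - partial(1, 2, p),
            partial(0, 2, p) - partial(2, 0, p),
            partial(1, 0, p) - partial(0, 1, p))
```

At $(1, 2, 0.5)$, the numerical divergence should match $e^z + 2y\sin x + xy$ and the numerical curl should match $(xz, xe^z - yz, y^2\cos x)$.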
+Note that $\nabla$ is an operator, so ordering is important. For example,
+\[
+ \mathbf{F}\cdot \nabla = F_i\frac{\partial }{\partial x_i}
+\]
+is a scalar differential operator, and
+\[
+ \mathbf{F}\times \nabla = \mathbf{e}_k\varepsilon_{ijk}F_i\frac{\partial}{\partial x_j}
+\]
+is a vector differential operator.
+
+\begin{prop}
+ Let $f, g$ be scalar functions, $\mathbf{F}, \mathbf{G}$ be vector functions, and $\mu, \lambda$ be constants. Then
+ \begin{align*}
+ \nabla(\lambda f + \mu g) &= \lambda\nabla f + \mu\nabla g\\
+ \nabla\cdot (\lambda \mathbf{F} + \mu \mathbf{G}) &= \lambda\nabla \cdot \mathbf{F} + \mu\nabla\cdot \mathbf{G}\\
+ \nabla\times (\lambda \mathbf{F} + \mu \mathbf{G}) &= \lambda\nabla\times \mathbf{F} + \mu\nabla\times \mathbf{G}.
+ \end{align*}
+\end{prop}
+
+Note that Grad and Div can be analogously defined in any dimension $n$, but curl is specific to $n = 3$ because it uses the vector product.
+
+\begin{eg}
+ Consider $r^\alpha$ with $r = |\mathbf{r}|$. We know that $\mathbf{r}= x_i\mathbf{e}_i$. So $r^2 = x_ix_i$. Therefore
+ \[
+ 2r\frac{\partial r}{\partial x_j} = 2x_j,
+ \]
+ or
+ \[
+ \frac{\partial r}{\partial x_i} = \frac{x_i}{r}.
+ \]
+ So
+ \[
+ \nabla r^\alpha = \mathbf{e}_i \frac{\partial}{\partial x_i}(r^\alpha) = \mathbf{e}_i\alpha r^{\alpha - 1}\frac{\partial r}{\partial x_i} = \alpha r^{\alpha - 2}\mathbf{r}.
+ \]
+ Also,
+ \[
+ \nabla\cdot \mathbf{r} = \frac{\partial x_i}{\partial x_i} = 3.
+ \]
+ and
+ \[
+ \nabla \times \mathbf{r} = \mathbf{e}_k \varepsilon_{ijk}\frac{\partial x_j}{\partial x_i} = \mathbf{0}.
+ \]
+\end{eg}
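The formula $\nabla r^\alpha = \alpha r^{\alpha - 2}\mathbf{r}$ is easy to test numerically (an illustration, not from the notes):

```python
import math

def grad_r_alpha(p, alpha, h=1e-6):
    """Central-difference gradient of f(r) = r^alpha with r = |p|; this
    should agree with the closed form alpha * r^(alpha - 2) * p."""
    def f(q):
        return math.sqrt(sum(x * x for x in q)) ** alpha
    g = []
    for j in range(3):
        a, b = list(p), list(p)
        a[j] += h
        b[j] -= h
        g.append((f(a) - f(b)) / (2 * h))
    return g
```

For example, at $\mathbf{r} = (1, 2, 2)$ (so $r = 3$) with $\alpha = -1$, each component should be close to $-x_i/27$.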
+
+\begin{prop}
+ We have the following Leibnitz properties:
+ \begin{align*}
+ \nabla(fg) &= (\nabla f)g + f(\nabla g)\\
+ \nabla\cdot (f\mathbf{F}) &= (\nabla f)\cdot \mathbf{F} + f(\nabla\cdot \mathbf{F})\\
+ \nabla\times (f\mathbf{F}) &= (\nabla f)\times \mathbf{F} + f(\nabla\times \mathbf{F})\\
+ \nabla(\mathbf{F}\cdot \mathbf{G}) &= \mathbf{F}\times (\nabla \times \mathbf{G}) + \mathbf{G}\times (\nabla \times \mathbf{F}) + (\mathbf{F}\cdot \nabla)\mathbf{G} + (\mathbf{G}\cdot \nabla) \mathbf{F}\\
+ \nabla \times (\mathbf{F}\times \mathbf{G}) &= \mathbf{F}(\nabla\cdot \mathbf{G}) - \mathbf{G}(\nabla\cdot \mathbf{F}) + (\mathbf{G}\cdot \nabla)\mathbf{F} - (\mathbf{F}\cdot \nabla)\mathbf{G}\\
+ \nabla\cdot (\mathbf{F}\times \mathbf{G}) &= (\nabla\times \mathbf{F})\cdot \mathbf{G} - \mathbf{F}\cdot (\nabla\times \mathbf{G})
+ \end{align*}
+ which can be proven by brute-forcing with suffix notation and summation convention.
+\end{prop}
+
+There is absolutely no point in memorizing these (at least the last three). They can be derived when needed via suffix notation.
+\begin{eg}
+ \begin{align*}
+ \nabla\cdot (r^\alpha \mathbf{r}) &= (\nabla r^\alpha)\mathbf{r} + r^\alpha \nabla\cdot \mathbf{r}\\
+ &= (\alpha r^{\alpha - 2}\mathbf{r})\cdot \mathbf{r} + r^\alpha (3)\\
+ &= (\alpha + 3)r^\alpha\\
+ \nabla\times (r^\alpha \mathbf{r}) &= (\nabla(r^\alpha))\times \mathbf{r} + r^\alpha(\nabla\times \mathbf{r})\\
+ &= \alpha r^{\alpha - 2} \mathbf{r}\times \mathbf{r}\\
+ &= \mathbf{0}
+ \end{align*}
+\end{eg}
+\subsection{Second-order derivatives}
+We have
+\begin{prop}
+ \begin{align*}
+ \nabla\times (\nabla f) &= 0\\
+ \nabla\cdot (\nabla\times \mathbf{F}) &=0
+ \end{align*}
+\end{prop}
+
+\begin{proof}
+ Expand out using suffix notation, noting that
+ \[
+ \varepsilon_{ijk}\frac{\partial^2 f}{\partial x_i \partial x_j} = 0.
+ \]
+ since if, say, $k = 3$, then
+ \[
+ \varepsilon_{ijk}\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_1 \partial x_2} - \frac{\partial^2 f}{\partial x_2 \partial x_1} = 0.
+ \]
+
+\end{proof}
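Both identities can be checked numerically on any smooth field. The sketch below (not part of the notes) evaluates $\nabla\cdot(\nabla\times \mathbf{F})$ for the earlier example $\mathbf{F} = (xe^z, y^2\sin x, xyz)$ by nested central differences; the result should vanish to within finite-difference error:

```python
import math

def F(x, y, z):
    """A smooth test field (the one from the earlier example)."""
    return (x * math.exp(z), y * y * math.sin(x), x * y * z)

def curl_at(p, h=1e-5):
    """Numerical curl of F at p via central differences."""
    def d(i, j):
        a, b = list(p), list(p)
        a[j] += h
        b[j] -= h
        return (F(*a)[i] - F(*b)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

def div_curl(p, h=1e-3):
    """Numerical divergence of the numerical curl; should be ~0."""
    total = 0.0
    for j in range(3):
        a, b = list(p), list(p)
        a[j] += h
        b[j] -= h
        total += (curl_at(a)[j] - curl_at(b)[j]) / (2 * h)
    return total
```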
+
+The converse of each result holds for fields defined in all of $\R^3$:
+\begin{prop}
+ If $\mathbf{F}$ is defined in all of $\R^3$, then
+ \[
+ \nabla\times \mathbf{F} = 0 \Rightarrow \mathbf{F} = \nabla f
+ \]
+ for some $f$.
+\end{prop}
+
+\begin{defi}[Conservative/irrotational field and scalar potential]
+ If $\mathbf{F} = \nabla f$, then $f$ is the \emph{scalar potential}. We say $\mathbf{F}$ is \emph{conservative} or \emph{irrotational}.
+\end{defi}
+Similarly,
+\begin{prop}
+ If $\mathbf{H}$ is defined over all of $\R^3$ and $\nabla\cdot \mathbf{H} = 0$, then $\mathbf{H} = \nabla \times \mathbf{A}$ for some $\mathbf{A}$.
+\end{prop}
+
+\begin{defi}[Solenoidal field and vector potential]
+ If $\mathbf{H} = \nabla \times \mathbf{A}$, $\mathbf{A}$ is the \emph{vector potential} and $\mathbf{H}$ is said to be \emph{solenoidal}.
+\end{defi}
+
+Note that this is true only if $\mathbf{F}$ or $\mathbf{H}$ is defined on all of $\R^3$.
+
+\begin{defi}[Laplacian operator]
+ The \emph{Laplacian operator} is defined by
+ \[
+ \nabla^2 = \nabla\cdot \nabla = \frac{\partial^2}{\partial x_i \partial x_i} = \left(\frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial x_3^2}\right).
+ \]
+ This operation is defined on both scalar and vector fields --- on a scalar field,
+ \[
+ \nabla^2 f = \nabla\cdot (\nabla f),
+ \]
+ whereas on a vector field,
+ \[
+ \nabla^2 \mathbf{A} = \nabla(\nabla\cdot \mathbf{A}) - \nabla\times (\nabla\times \mathbf{A}).
+ \]
+\end{defi}
+
+\section{Integral theorems}
+\subsection{Statement and examples}
+There are three big integral theorems, known as Green's theorem, Stokes' theorem and Gauss' theorem. They are all generalizations of the fundamental theorem of calculus in some sense. In particular, they all say that an $n$-dimensional integral of a derivative is equivalent to an $(n - 1)$-dimensional integral of the original function.
+
+We will first state all three theorems with some simple applications. In the next section, we will see that the three integral theorems are so closely related that it's easiest to show their equivalence first, and then prove just one of them.
+
+\subsubsection{Green's theorem (in the plane)}
+\begin{thm}[Green's theorem]
+ For smooth functions $P(x, y)$, $Q(x, y)$ and $A$ a bounded region in the $(x, y)$ plane with boundary $\partial A = C$,
+ \[
+ \int_A \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)\;\d A = \int_C(P\;\d x + Q\;\d y).
+ \]
+ Here $C$ is assumed to be a piecewise smooth, non-intersecting closed curve, traversed anti-clockwise.
+\end{thm}
+
+\begin{eg}
+ Let $Q = xy^2$ and $P = x^2y$. If $C$ is the parabola $y^2 = 4ax$ and the line $x = a$, both with $-2a \leq y \leq 2a$, then Green's theorem says
+ \[
+ \int_A (y^2 - x^2)\;\d A = \int_C x^2 y \;\d x + xy^2\;\d y.
+ \]
+ From example sheet 1, each side gives $\frac{104}{105} a^4$.
+\end{eg}
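We can at least check the area integral numerically (a sketch, not from the example sheet): a midpoint sum over the region between the parabola and the line should approach $\frac{104}{105}a^4$:

```python
import math

def green_lhs(a, n=400):
    """Midpoint double sum of (y^2 - x^2) over the region bounded by the
    parabola x = y^2/(4a) and the line x = a, for -2a <= y <= 2a."""
    total = 0.0
    dy = 4 * a / n
    for i in range(n):
        y = -2 * a + (i + 0.5) * dy
        x0 = y * y / (4 * a)       # left boundary at this height
        dx = (a - x0) / n
        for j in range(n):
            x = x0 + (j + 0.5) * dx
            total += (y * y - x * x) * dx * dy
    return total
```

With $a = 1$, the sum should be close to $104/105 \approx 0.9905$.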
+
+\begin{eg}
+ Let $A$ be a rectangle confined by $0 \leq x \leq a$ and $0 \leq y \leq b$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$y$};
+ \draw [->-=0.5] (0, 0) -- (3, 0) node [below] {$a$};
+ \draw [->-=0.5] (3, 0) -- (3, 2);
+ \draw [->-=0.5] (3, 2) -- (0, 2) node [left] {$b$};
+ \draw [->-=0.5] (0, 2) -- (0, 0);
+ \node at (1.5, 1) {$A$};
+ \end{tikzpicture}
+ \end{center}
+ Then Green's theorem follows directly from the fundamental theorem of calculus in 1D. We first consider the first term of Green's theorem:
+ \begin{align*}
+ \int_A -\frac{\partial P}{\partial y} \;\d A &= \int_0^a \int_0^b -\frac{\partial P}{\partial y}\;\d y\;\d x\\
+ &= \int_0^a [-P(x, b) + P(x, 0)]\;\d x\\
+ &= \int_C P\;\d x
+ \end{align*}
+ Note that we can convert the 1D integral in the second-to-last line to a line integral around the curve $C$, since the $P(x, 0)$ and $P(x, b)$ terms give the horizontal part of $C$, and the lack of $\d y$ term means that the integral is nil when integrating the vertical parts.
+
+ Similarly,
+ \[
+ \int_A \frac{\partial Q}{\partial x}\;\d A = \int_C Q\;\d y.
+ \]
+ Combining them gives Green's theorem.
+\end{eg}
+
+Green's theorem also holds for a bounded region $A$, where the boundary $\partial A$ consists of \emph{disconnected} components (each piecewise smooth, non-intersecting and closed) with anti-clockwise orientation on the exterior, and clockwise on the interior boundary, e.g.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill = gray] circle [radius = 2];
+ \draw [fill = white] (1, 0) circle [radius = 0.6];
+ \draw [fill = white] (-1, 0) circle [radius = 0.6];
+ \draw [->] (2, 0) -- (2, 0.01);
+ \draw [->] (1.6, 0) -- (1.6, -0.01);
+ \draw [->] (-0.4, 0) -- (-0.4, -0.01);
+ \end{tikzpicture}
+\end{center}
+The orientation of the curve comes from imagining the surface as:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill = gray] circle [radius = 2];
+ \draw [fill = white] (1, 0) circle [radius = 0.6];
+ \draw [fill = white] (-1, 0) circle [radius = 0.6];
+ \draw [->] (0, -2) -- (0.01, -2);
+ \draw [->] (1, 0.6) -- (1.01, 0.6);
+ \draw [->] (-0.4, 0) -- (-0.4, -0.01);
+ \draw [fill = white, white] (1.5, 0.1) rectangle (2.1, -0.1);
+ \draw [->] (2, -0.1) -- (1.58, -0.1);
+ \draw [->] (1.58, 0.1) -- (2, 0.1);
+ \end{tikzpicture}
+\end{center}
+and take the limit as the gap shrinks to 0.
+
+\subsubsection{Stokes' theorem}
+\begin{thm}[Stokes' theorem]
+ For a smooth vector field $\mathbf{F}(\mathbf{r})$,
+ \[
+ \int_S \nabla\times \mathbf{F}\cdot \d \mathbf{S} = \int_{\partial S} \mathbf{F}\cdot \d \mathbf{r},
+ \]
+ where $S$ is a smooth, bounded surface and $\partial S$ is a piecewise smooth boundary of $S$.
+
+ The direction of the line integral is as follows: if we walk along $\partial S$ with $\mathbf{n}$ pointing up, then the surface is on our left.
+\end{thm}
+It also holds if $\partial S$ is a collection of disconnected piecewise smooth closed curves, with the orientation determined in the same way as Green's theorem.
+
+\begin{eg}
+ Let $S$ be the section of a sphere of radius $a$ with $0 \leq \theta \leq \alpha$. In spherical coordinates,
+ \[
+ \d \mathbf{S} = a^2 \sin \theta \mathbf{e}_r \;\d \theta\;\d \varphi.
+ \]
+ Let $\mathbf{F} = (0, xz, 0)$. Then $\nabla \times \mathbf{F} = (-x, 0, z)$. We have previously shown that
+ \[
+ \int_S \nabla\times \mathbf{F}\cdot \d \mathbf{S} = \pi a^3 \cos\alpha\sin^2 \alpha.
+ \]
+ Our boundary $C = \partial S$ is
+ \[
+ \mathbf{r}(\varphi) = a(\sin \alpha\cos \varphi, \sin \alpha\sin \varphi, \cos \alpha).
+ \]
+ The right hand side of Stokes' is
+ \begin{align*}
+ \int_C \mathbf{F}\cdot \d \mathbf{r} &= \int_0^{2\pi}\underbrace{a\sin \alpha\cos \varphi}_{x}\underbrace{\vphantom{\varphi}a\cos\alpha}_z \underbrace{a\sin \alpha\cos\varphi\;\d \varphi}_{\d y}\\
+ &= a^3\sin^2\alpha\cos\alpha\int_0^{2\pi}\cos^2\varphi\;\d \varphi\\
+ &= \pi a^3\sin^2\alpha\cos\alpha.
+ \end{align*}
+ So they agree.
+\end{eg}
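+This agreement can also be checked numerically. The following sketch (my own, taking $a = 1$ and $\alpha = \pi/3$, so both sides should equal $3\pi/8$) evaluates both integrals by the midpoint rule:
+
```python
import math

a, alpha = 1.0, math.pi / 3
exact = math.pi * a**3 * math.sin(alpha)**2 * math.cos(alpha)   # = 3π/8 here

# Surface integral: (∇×F)·e_r = (z² − x²)/a on the sphere, so the
# integrand over (θ, φ) is (z² − x²) a sin θ.
N = 500
surf = 0.0
for i in range(N):
    theta = alpha * (i + 0.5) / N
    for j in range(N):
        phi = 2 * math.pi * (j + 0.5) / N
        x = a * math.sin(theta) * math.cos(phi)
        z = a * math.cos(theta)
        surf += (z*z - x*x) * a * math.sin(theta) * (alpha / N) * (2 * math.pi / N)

# Line integral around the circle θ = α: F·dr = xz dy.
M = 50_000
line = 0.0
for k in range(M):
    phi = 2 * math.pi * (k + 0.5) / M
    x = a * math.sin(alpha) * math.cos(phi)
    z = a * math.cos(alpha)
    dy = a * math.sin(alpha) * math.cos(phi) * (2 * math.pi / M)
    line += x * z * dy

assert abs(surf - exact) < 1e-3
assert abs(line - exact) < 1e-6
```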
+
+\subsubsection{Divergence/Gauss theorem}
+\begin{thm}[Divergence/Gauss theorem]
+ For a smooth vector field $\mathbf{F}(\mathbf{r})$,
+ \[
+ \int _V \nabla\cdot \mathbf{F}\;\d V = \int_{\partial V}\mathbf{F}\cdot \d \mathbf{S},
+ \]
+ where $V$ is a bounded volume with boundary $\partial V$, a piecewise smooth, closed surface, with outward normal $\mathbf{n}$.
+\end{thm}
+
+\begin{eg}
+ Consider a hemisphere.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \clip (-2, 2) rectangle (2, 0);
+ \draw circle [radius=2];
+ \draw [dashed] circle [x radius = 2, y radius = 0.5];
+ \end{scope}
+ \begin{scope}
+ \clip (-2, 0) rectangle (2, -0.7);
+ \draw circle [x radius = 2, y radius = 0.5];
+ \end{scope}
+ \node {$S_2$};
+ \node at (1.7, 1.7) {$S_1$};
+ \end{tikzpicture}
+ \end{center}
+
+ $V$ is a solid hemisphere
+ \[
+ x^2 + y^2 + z^2 \leq a^2, \quad z \geq 0,
+ \]
+ and $\partial V = S_1 + S_2$, the hemisphere and the disc at the bottom.
+
+ Take $\mathbf{F} = (0, 0, z + a)$ and $\nabla \cdot \mathbf{F} = 1$. Then
+ \[
+ \int_V\nabla \cdot \mathbf{F}\;\d V = \frac{2}{3}\pi a^3,
+ \]
+ the volume of the hemisphere.
+
+ On $S_1$,
+ \[
+ \d \mathbf{S} = \mathbf{n}\;\d S = \frac{1}{a}(x, y, z)\;\d S.
+ \]
+ Then
+ \[
+ \mathbf{F}\cdot \d \mathbf{S} = \frac{1}{a}z(z + a)\;\d S = \cos \theta a(\cos \theta + 1)\underbrace{a^2\sin \theta\;\d \theta\;\d \varphi}_{\d S}.
+ \]
+ Then
+ \begin{align*}
+ \int_{S_1}\mathbf{F}\cdot \d \mathbf{S} &= a^3\int_0^{2\pi}\d \varphi \int_0^{\pi/2}\sin \theta(\cos^2\theta + \cos \theta)\;\d \theta\\
+ &= 2\pi a^3 \left[\frac{-1}{3}\cos^3 \theta - \frac{1}{2}\cos^2 \theta\right]_0^{\pi/2}\\
+ &= \frac{5}{3}\pi a^3.
+ \end{align*}
+ On $S_2$, $\d \mathbf{S} = \mathbf{n}\;\d S = -(0, 0, 1)\;\d S$. Then $\mathbf{F}\cdot \d \mathbf{S} = -a\;\d S$. So
+ \[
+ \int_{S_2} \mathbf{F}\cdot \d \mathbf{S} = -\pi a^3.
+ \]
+ So
+ \[
+ \int_{S_1}\mathbf{F}\cdot \d \mathbf{S} +\int_{S_2}\mathbf{F}\cdot \d \mathbf{S} = \left(\frac{5}{3} - 1\right)\pi a^3 = \frac{2}{3}\pi a^3,
+ \]
+ in accordance with Gauss' theorem.
+\end{eg}
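+A numerical version of the same computation (my own sketch, with $a = 1$): the flux through $S_1$ should be $\frac{5}{3}\pi$ and the total flux $\frac{2}{3}\pi$:
+
```python
import math

a = 1.0
# Flux through the curved surface S1: the φ integral contributes 2π,
# leaving ∫ cos θ (cos θ + 1) a³ sin θ dθ over 0 ≤ θ ≤ π/2.
N = 2000
s1 = 0.0
for i in range(N):
    theta = (math.pi / 2) * (i + 0.5) / N
    s1 += (math.cos(theta) * (math.cos(theta) + 1) * a**3
           * math.sin(theta) * (math.pi / 2 / N) * 2 * math.pi)

# Flux through the flat disc S2: F·n = −(z + a) = −a on z = 0.
s2 = -a * math.pi * a**2

assert abs(s1 - 5 * math.pi / 3) < 1e-4          # matches 5πa³/3
assert abs((s1 + s2) - 2 * math.pi / 3) < 1e-4   # matches the volume integral
```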
+
+\subsection{Relating and proving integral theorems}
+We will first show the following two equivalences:
+\begin{itemize}
+ \item Stokes' theorem $\Leftrightarrow $ Green's theorem
+ \item 2D divergence theorem $\Leftrightarrow$ Green's theorem
+\end{itemize}
+Then we prove the 2D version of the divergence theorem directly, which shows that all of the above hold. A sketch of the proof of the 3D version of the divergence theorem will be provided, because it is simply a generalization of the 2D version, except that the extra dimension makes the notation tedious and difficult to follow.
+
+\begin{prop}
+ Stokes' theorem $\Rightarrow $ Green's theorem
+\end{prop}
+
+\begin{proof}
+ Stokes' theorem talks about 3D surfaces and Green's theorem is about 2D regions. So given a region $A$ on the $(x, y)$ plane, we pretend that there is a third dimension and apply Stokes' theorem to derive Green's theorem.
+
+ Let $A$ be a region in the $(x, y)$ plane with boundary $C = \partial A$, parametrised by arc length, $(x(s), y(s), 0)$. Then the tangent to $C$ is
+ \[
+ \mathbf{t} = \left(\frac{\d x}{\d s}, \frac{\d y}{\d s}, 0\right).
+ \]
+ Given any $P(x, y)$ and $Q(x, y)$, we can consider the vector field
+ \[
+ \mathbf{F} = (P, Q, 0),
+ \]
+ so that
+ \[
+ \nabla \times \mathbf{F} = \left(0, 0, \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right).
+ \]
+ Then the right hand side of Stokes' theorem is
+ \[
+ \int_C \mathbf{F} \cdot \d \mathbf{r} = \int_C \mathbf{F}\cdot \mathbf{t}\;\d s = \int_C P\;\d x + Q\;\d y,
+ \]
+ and the left hand side is
+ \[
+ \int_A (\nabla\times \mathbf{F})\cdot \hat{\mathbf{k}}\;\d A = \int_A \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)\;\d A.\qedhere
+ \]
+\end{proof}
+
+\begin{prop}
+ Green's theorem $\Rightarrow $ Stokes' theorem.
+\end{prop}
+
+\begin{proof}
+ Green's theorem describes a 2D region, while Stokes' theorem describes a 3D surface $\mathbf{r}(u, v)$. Hence to use Green's theorem to derive Stokes' theorem, we need to find some 2D thing to act on. The natural choice is the parameter space, $u, v$.
+
+ Consider a parametrised surface $S = \mathbf{r}(u, v)$ corresponding to the region $A$ in the $u, v$ plane. Write the boundary as $\partial A = (u(t), v(t))$. Then $\partial S = \mathbf{r}(u(t), v(t))$.
+
+ We want to prove
+ \[
+ \int_{\partial S}\mathbf{F}\cdot \d \mathbf{r} = \int_S (\nabla \times \mathbf{F})\cdot \d \mathbf{S}
+ \]
+ given
+ \[
+ \int_{\partial A} F_u\;\d u + F_v\;\d v = \int_A\left(\frac{\partial F_v}{\partial u} - \frac{\partial F_u}{\partial v}\right)\;\d A.
+ \]
+ Doing some pattern-matching, we want
+ \[
+ \mathbf{F}\cdot \d \mathbf{r} = F_u \;\d u + F_v \;\d v
+ \]
+ for some $F_u$ and $F_v$.
+
+ By the chain rule, we know that
+ \[
+ \d \mathbf{r} = \frac{\partial \mathbf{r}}{\partial u} \d u + \frac{\partial \mathbf{r}}{\partial v}\d v.
+ \]
+ So we choose
+ \[
+ F_u = \mathbf{F}\cdot \frac{\partial \mathbf{r}}{\partial u}, \quad F_v = \mathbf{F}\cdot\frac{\partial \mathbf{r}}{\partial v}.
+ \]
+ This choice matches the left hand sides of the two equations.
+
+ To match the right, recall that
+ \[
+ (\nabla\times \mathbf{F}) \cdot \d \mathbf{S} = (\nabla\times \mathbf{F})\cdot \left(\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right)\;\d u\;\d v.
+ \]
+ Therefore, for the right hand sides to match, we want
+ \[
+ \frac{\partial F_v}{\partial u} - \frac{\partial F_u}{\partial v} = (\nabla\times \mathbf{F})\cdot \left(\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right).\tag{$*$}
+ \]
+ Fortunately, this is true. Unfortunately, the proof involves complicated suffix notation and summation convention:
+ \[
+ \frac{\partial F_v}{\partial u} = \frac{\partial}{\partial u}\left(\mathbf{F}\cdot \frac{\partial \mathbf{r}}{\partial v}\right) = \frac{\partial}{\partial u}\left(F_i\frac{\partial x_i}{\partial v}\right) = \left(\frac{\partial F_i}{\partial x_j}\frac{\partial x_j}{\partial u}\right)\frac{\partial x_i}{\partial v} + F_i\frac{\partial^2 x_i}{\partial u\partial v}.
+ \]
+ Similarly,
+ \[
+ \frac{\partial F_u}{\partial v} = \frac{\partial}{\partial v}\left(\mathbf{F}\cdot \frac{\partial \mathbf{r}}{\partial u}\right) = \frac{\partial}{\partial v}\left(F_j\frac{\partial x_j}{\partial u}\right) = \left(\frac{\partial F_j}{\partial x_i}\frac{\partial x_i}{\partial v}\right)\frac{\partial x_j}{\partial u} + F_j\frac{\partial^2 x_j}{\partial u\partial v}.
+ \]
+ So
+ \[
+ \frac{\partial F_v}{\partial u} - \frac{\partial F_u}{\partial v} = \frac{\partial x_j}{\partial u}\frac{\partial x_i}{\partial v}\left(\frac{\partial F_i}{\partial x_j} - \frac{\partial F_j}{\partial x_i}\right).
+ \]
+ This is the left hand side of $(*)$.
+
+ The right hand side of $(*)$ is
+ \begin{align*}
+ (\nabla \times \mathbf{F})\cdot \left(\frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\right) &= \varepsilon_{ijk}\frac{\partial F_j}{\partial x_i}\varepsilon_{kpq}\frac{\partial x_p}{\partial u}\frac{\partial x_q}{\partial v}\\
+ &= (\delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}) \frac{\partial F_j}{\partial x_i}\frac{\partial x_p}{\partial u}\frac{\partial x_q}{\partial v}\\
+ &= \left(\frac{\partial F_j}{\partial x_i} - \frac{\partial F_i}{\partial x_j}\right)\frac{\partial x_i}{\partial u}\frac{\partial x_j}{\partial v}.
+ \end{align*}
+ So they match. Therefore, given our choice of $F_u$ and $F_v$, Green's theorem translates to Stokes' theorem.
+\end{proof}
+
+
+\begin{prop}
+ Green's theorem $\Leftrightarrow$ 2D divergence theorem.
+\end{prop}
+
+\begin{proof}
+ The 2D divergence theorem states that
+ \[
+ \int_A (\nabla\cdot \mathbf{G})\;\d A = \int_{\partial A} \mathbf{G}\cdot \mathbf{n}\;\d s,
+ \]
+ with an outward normal $\mathbf{n}$.
+
+ Write $\mathbf{G}$ as $(Q, -P)$. Then
+ \[
+ \nabla\cdot \mathbf{G} = \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}.
+ \]
+ Around the curve $\mathbf{r}(s) = (x(s), y(s))$, the tangent is $\mathbf{t}(s) = (x'(s), y'(s))$. Then the normal, being perpendicular to $\mathbf{t}$, is $\mathbf{n}(s) = (y'(s), -x'(s))$ (check that it points outwards!). So
+ \[
+ \mathbf{G}\cdot \mathbf{n} = P\frac{\d x}{\d s} + Q\frac{\d y}{\d s}.
+ \]
+ Then we can expand out the integrals to obtain
+ \[
+ \int_C \mathbf{G}\cdot \mathbf{n}\;\d s = \int_C P\;\d x + Q\;\d y,
+ \]
+ and
+ \[
+ \int_A(\nabla\cdot \mathbf{G})\;\d A = \int_A\left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)\;\d A.
+ \]
+ Now the 2D version of Gauss' theorem says the two left hand sides are equal, and Green's theorem says the two right hand sides are equal. So each theorem follows from the other.
+\end{proof}
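+The 2D divergence theorem can be checked numerically as well. A small sketch of my own, on the unit disc with the arbitrarily chosen field $\mathbf{G} = (x + y^2, y)$, for which $\nabla\cdot \mathbf{G} = 2$ and both sides equal $2\pi$:
+
```python
import math

# Boundary: r(s) = (cos s, sin s) with outward normal n = (cos s, sin s).
N = 100_000
flux = 0.0
for k in range(N):
    s = 2 * math.pi * (k + 0.5) / N
    gx, gy = math.cos(s) + math.sin(s) ** 2, math.sin(s)
    flux += (gx * math.cos(s) + gy * math.sin(s)) * (2 * math.pi / N)

# Area integral: ∇·G = 1 + 1 = 2, over a disc of area π.
area_integral = 2 * math.pi

assert abs(flux - area_integral) < 1e-6
```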
+
+\begin{prop}
+ 2D divergence theorem.
+ \[
+ \int_A (\nabla\cdot \mathbf{G})\;\d A = \int_{C = \partial A} \mathbf{G}\cdot \mathbf{n}\;\d s.
+ \]
+\end{prop}
+
+\begin{proof}
+ For the sake of simplicity, we assume that $\mathbf{G}$ only has a vertical component, noting that the same proof works for purely horizontal $\mathbf{G}$, and an arbitrary $\mathbf{G}$ is just a linear combination of the two.
+
+ Furthermore, we assume that $A$ is a simple, convex shape. A more complicated shape can be cut into smaller simple regions, and we can apply the simple case to each of the small regions.
+
+ Suppose $\mathbf{G} = G(x, y)\hat{\mathbf{j}}$. Then
+ \[
+ \nabla \cdot \mathbf{G} = \frac{\partial G}{\partial y}.
+ \]
+ Then
+ \[
+ \int_A \nabla\cdot \mathbf{G}\;\d A = \int_X \left(\int_{Y_x}\frac{\partial G}{\partial y}\;\d y\right)\;\d x.
+ \]
+ Now we divide $A$ into an upper and lower part, with boundaries $C_+ = y_+(x)$ and $C_- = y_-(x)$ respectively. Since I cannot draw, $A$ will be pictured as a circle, but the proof is valid for any simple convex shape.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [red] (3, 1.5) arc (0:180:1);
+ \node [red, anchor = south east] at (1.2, 1.75) {$C_+$};
+ \draw [blue] (1, 1.5) arc (180:360:1);
+ \node [blue, anchor = north west] at (3, 1.5) {$C_-$};
+
+ \draw (1.8, 0.52) -- (1.8, 2.48);
+ \draw (2.2, 0.52) -- (2.2, 2.48);
+ \draw [dashed] (1.8, 0.52) -- (1.8, 0);
+ \draw [dashed] (2.2, 0.52) -- (2.2, 0);
+ \node [below] at (2, 0) {$\d y$};
+
+ \draw [dashed] (1.8, 0.52) -- (0, 0.52);
+ \draw [dashed] (1.8, 2.48) -- (0, 2.48);
+ \draw [->] (-0.3, 2.48) -- (-0.3, 0.52);
+ \draw [->] (-0.3, 0.52) -- (-0.3, 2.48) node [pos = 0.5, fill=white] {$Y_x$};
+
+ \draw [->] (0, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$y$};
+ \end{tikzpicture}
+ \end{center}
+ We see that the boundary of $Y_x$ at any specific $x$ is given by $y_-(x)$ and $y_+(x)$. Hence by the fundamental theorem of calculus,
+ \[
+ \int_{Y_x}\frac{\partial G}{\partial y}\;\d y = \int_{y_-(x)}^{y_+(x)} \frac{\partial G}{\partial y}\;\d y = G(x, y_+(x)) - G(x, y_-(x)).
+ \]
+ To compute the full area integral, we want to integrate over all $x$. However, the divergence theorem talks in terms of $\d s$, not $\d x$. So we need to find some way to relate $\d s$ and $\d x$. If we move a distance $\delta s$, the change in $x$ is $\delta s\cos \theta$, where $\theta$ is the angle between the tangent and the horizontal. But $\theta$ is also the angle between the normal and the vertical. So $\cos \theta = \mathbf{n}\cdot \hat{\mathbf{j}}$. Therefore $\d x = \hat{\mathbf{j}}\cdot \mathbf{n}\;\d s$.
+
+ In particular, $G\;\d x = G\,\hat{\mathbf{j}}\cdot \mathbf{n}\;\d s = \mathbf{G}\cdot \mathbf{n}\;\d s$, since $\mathbf{G} = G\,\hat{\mathbf{j}}$.
+
+ However, at $C_-$, $\mathbf{n}$ points downwards, so $\mathbf{n}\cdot \hat{\mathbf{j}}$ happens to be negative. So, actually, at $C_-$, $G\;\d x = -\mathbf{G}\cdot \mathbf{n}\;\d s$.
+
+ Therefore, our full integral is
+ \begin{align*}
+ \int_A \nabla\cdot \mathbf{G}\;\d A &= \int_X \left(\int_{Y_x}\frac{\partial G}{\partial y}\;\d y\right)\;\d x\\
+ &= \int_X G(x, y_+(x)) - G(x, y_{-}(x))\;\d x\\
+ &= \int_{C_+}\mathbf{G}\cdot \mathbf{n}\;\d s + \int_{C_-}\mathbf{G}\cdot \mathbf{n}\;\d s\\
+ &= \int_C \mathbf{G}\cdot \mathbf{n}\;\d s.\qedhere
+ \end{align*}
+\end{proof}
+
+To prove the 3D version, we again consider $\mathbf{F} = F(x, y, z)\hat{\mathbf{k}}$, a purely vertical vector field. Then
+\[
+ \int_V \nabla\cdot \mathbf{F}\;\d V = \int_D \left(\int_{Z_{xy}} \frac{\partial F}{\partial z}\;\d z\right)\;\d A.
+\]
+Again, split $S = \partial V$ into the top and bottom parts $S_+$ and $S_-$ (i.e.\ the parts with $\hat{\mathbf{k}}\cdot \mathbf{n} \geq 0$ and $\hat{\mathbf{k}}\cdot \mathbf{n} < 0$), and parametrize them by $z_+(x, y)$ and $z_-(x, y)$. Then the integral becomes
+\[
+ \int_V \nabla\cdot \mathbf{F}\;\d V = \int_D (F(x, y, z_+) - F(x, y, z_-))\;\d A = \int_S \mathbf{F}\cdot \mathbf{n}\;\d S.
+\]
+\section{Some applications of integral theorems}
+\subsection{Integral expressions for div and curl}
+We can use these theorems to come up with alternative definitions of div and curl. The advantage of these alternative definitions is that they do not require a choice of coordinate axes. They also better describe how we should interpret div and curl.
+
+Gauss' theorem for $\mathbf{F}$ in a small volume $V$ containing $\mathbf{r}_0$ gives
+\[
+ \int_{\partial V}\mathbf{F}\cdot \d \mathbf{S} = \int_V\nabla\cdot \mathbf{F}\;\d V\approx (\nabla\cdot \mathbf{F})(\mathbf{r}_0) \vol(V).
+\]
+We take the limit as $V\to 0$ to obtain
+\begin{prop}
+ \[
+ (\nabla \cdot \mathbf{F})(\mathbf{r}_0) = \lim_{\mathrm{diam}(V) \to 0} \frac{1}{\vol(V)}\int_{\partial V}\mathbf{F}\cdot \d \mathbf{S},
+ \]
+ where the limit is taken over volumes containing the point $\mathbf{r}_0$.
+\end{prop}
+Similarly, Stokes' theorem gives, for $A$ a surface containing the point $\mathbf{r}_0$,
+\[
+ \int _{\partial A}\mathbf{F}\cdot \d \mathbf{r} = \int_A(\nabla\times \mathbf{F})\cdot \mathbf{n}\;\d A \approx \mathbf{n}\cdot (\nabla\times \mathbf{F})(\mathbf{r}_0) \area(A).
+\]
+So
+\begin{prop}
+ \[
+ \mathbf{n}\cdot (\nabla \times \mathbf{F})(\mathbf{r}_0) = \lim_{\mathrm{diam}(A) \to 0}\frac{1}{\area(A)}\int_{\partial A}\mathbf{F}\cdot \d \mathbf{r},
+ \]
+ where the limit is taken over all surfaces $A$ containing $\mathbf{r}_0$ with normal $\mathbf{n}$.
+\end{prop}
+These are coordinate-independent definitions of div and curl.
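+The limit definition of div can be illustrated numerically: compute the flux of a field out of a small cube about $\mathbf{r}_0$, divide by the cube's volume, and watch the estimate converge as the cube shrinks. A sketch of my own, with the arbitrarily chosen field $\mathbf{F} = (x^3, y^3, z^3)$:
+
```python
# Flux of F = (x³, y³, z³) out of a small cube about r₀, divided by the
# cube's volume; as the cube shrinks this approaches ∇·F = 3(x² + y² + z²).

def F(x, y, z):
    return (x ** 3, y ** 3, z ** 3)

def flux_over_volume(r0, eps, M=20):
    x0, y0, z0 = r0
    total, dA = 0.0, (eps / M) ** 2
    for i in range(M):
        u = -eps / 2 + eps * (i + 0.5) / M
        for j in range(M):
            v = -eps / 2 + eps * (j + 0.5) / M
            # opposite faces normal to each axis, outward normals ±e_axis
            total += (F(x0 + eps/2, y0 + u, z0 + v)[0]
                      - F(x0 - eps/2, y0 + u, z0 + v)[0]) * dA
            total += (F(x0 + u, y0 + eps/2, z0 + v)[1]
                      - F(x0 + u, y0 - eps/2, z0 + v)[1]) * dA
            total += (F(x0 + u, y0 + v, z0 + eps/2)[2]
                      - F(x0 + u, y0 + v, z0 - eps/2)[2]) * dA
    return total / eps ** 3

r0 = (0.5, -0.2, 0.8)
div_exact = 3 * sum(c * c for c in r0)
errors = [abs(flux_over_volume(r0, e) - div_exact) for e in (0.4, 0.2, 0.1)]
assert errors[0] > errors[1] > errors[2]   # the estimate improves as V shrinks
assert errors[2] < 0.01
```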
+
+\begin{eg}
+ Suppose $\mathbf{u}$ is a velocity field of fluid flow. Then
+ \[
+ \int_S \mathbf{u}\cdot \d \mathbf{S}
+ \]
+ is the rate at which fluid crosses $S$. Taking $V$ to be the volume occupied by a fixed quantity of fluid material, we have
+ \[
+ \dot{V} = \int_{\partial V}\mathbf{u}\cdot \d \mathbf{S}.
+ \]
+ Then, at $\mathbf{r}_0$,
+ \[
+ \nabla\cdot \mathbf{u} = \lim_{V\to 0}\frac{\dot{V}}{V},
+ \]
+ the relative rate of change of volume. For example, if $\mathbf{u}(\mathbf{r}) = \alpha\mathbf{r}$ (i.e.\ fluid flowing out of the origin), then $\nabla\cdot \mathbf{u} = 3\alpha$, i.e.\ the volume grows at a constant relative rate everywhere.
+
+ Alternatively, take a planar area $A$ to be a disc of radius $a$. Then
+ \[
+ \int_{\partial A}\mathbf{u}\cdot \d \mathbf{r} = \int_{\partial A}\mathbf{u}\cdot \mathbf{t} \;\d s = 2\pi a \times \text{average of $\mathbf{u}\cdot \mathbf{t}$ around the circumference}.
+ \]
+ (Here $\mathbf{u}\cdot \mathbf{t}$ is the component of $\mathbf{u}$ which is tangential to the boundary.) We define the quantity
+ \[
+ \omega = \frac{1}{a} \times (\text{average of }\mathbf{u}\cdot \mathbf{t}).
+ \]
+ This is the local angular velocity of the current. As $a \to 0$, $\frac{1}{a} \to \infty$, but the average of $\mathbf{u}\cdot \mathbf{t}$ will also decrease since a smooth field is less ``twirly'' if you look closer. So $\omega$ tends to some finite value as $a\to 0$. We have
+ \[
+ \int_{\partial A} \mathbf{u}\cdot \d \mathbf{r} = 2\pi a^2 \omega.
+ \]
+ Recall that
+ \[
+ \mathbf{n}\cdot \nabla \times \mathbf{u} = \lim_{a \to 0}\frac{1}{\pi a^2}\int_{\partial A}\mathbf{u}\cdot \d \mathbf{r} = 2\omega,
+ \]
+ i.e.\ twice the local angular velocity. For example, if a washing machine rotates at an angular velocity $\boldsymbol\omega$, then the velocity field is $\mathbf{u} = \boldsymbol\omega\times \mathbf{r}$. Then the curl is
+ \[
+ \nabla\times (\boldsymbol\omega\times \mathbf{r}) = 2\boldsymbol\omega,
+ \]
+ which is twice the angular velocity.
+\end{eg}
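+The final claim $\nabla\times(\boldsymbol\omega\times \mathbf{r}) = 2\boldsymbol\omega$ is easy to verify by finite differences (a quick check of my own, for an arbitrary $\boldsymbol\omega$):
+
```python
H = 1e-5
omega = (0.3, -1.1, 0.7)

def u(p):
    x, y, z = p
    wx, wy, wz = omega
    return (wy * z - wz * y, wz * x - wx * z, wx * y - wy * x)   # u = ω × r

def d(j, i, p):
    """∂u_j/∂x_i by a central difference."""
    q = list(p); q[i] += H
    r = list(p); r[i] -= H
    return (u(tuple(q))[j] - u(tuple(r))[j]) / (2 * H)

p = (1.0, 2.0, -0.5)
curl_u = (d(2, 1, p) - d(1, 2, p),
          d(0, 2, p) - d(2, 0, p),
          d(1, 0, p) - d(0, 1, p))
assert all(abs(c - 2 * w) < 1e-8 for c, w in zip(curl_u, omega))
```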
+
+\subsection{Conservative fields and scalar potentials}
+\begin{defi}[Conservative field]
+ A vector field $\mathbf{F}$ is \emph{conservative} if
+ \begin{enumerate}
+ \item $\mathbf{F} = \nabla f$ for some scalar field $f$; or
+ \item $\int_C \mathbf{F}\cdot \d \mathbf{r}$ is independent of $C$, for fixed end points and orientation; or
+ \item $\nabla \times \mathbf{F} = 0$.
+ \end{enumerate}
+ In $\R^3$, all three formulations are equivalent.
+\end{defi}
+
+We have previously shown (i) $\Rightarrow$ (ii) since
+\[
+ \int_C \mathbf{F}\cdot \d \mathbf{r} = f(\mathbf{b}) - f(\mathbf{a}).
+\]
+We have also shown that (i) $\Rightarrow$ (iii) since
+\[
+ \nabla \times (\nabla f) = 0.
+\]
+So we want to show that (iii) $\Rightarrow $ (ii) and (ii) $\Rightarrow $ (i).
+\begin{prop}
+ If (iii) $\nabla\times \mathbf{F}= 0 $, then (ii) $\int_C \mathbf{F}\cdot \d \mathbf{r}$ is independent of $C$.
+\end{prop}
+
+\begin{proof}
+ Given $\mathbf{F}(\mathbf{r})$ satisfying $\nabla\times \mathbf{F} = 0$, let $C$ and $\tilde C$ be any two curves from $\mathbf{a}$ to $\mathbf{b}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [left] {$\mathbf{a}$};
+ \node at (2, 2) [circ] {};
+ \node at (2, 2) [right] {$\mathbf{b}$};
+ \node at (2, 1) {$\tilde C$};
+ \draw [->-=0.6] plot [smooth, tension=1] coordinates {(0, 0) (0.8, 1.7) (2, 2)};
+ \node [above] at (0.8, 1.7) {$C$};
+ \draw [->-=0.6] plot [smooth, tension=1] coordinates {(0, 0) (1.2, -0.1) (2, 2)};
+ \end{tikzpicture}
+ \end{center}
+ If $S$ is any surface with boundary $\partial S = C - \tilde C$, then by Stokes' theorem,
+ \[
+ \int_{S}\nabla \times \mathbf{F}\cdot \d \mathbf{S} = \int_{\partial S}\mathbf{F}\cdot \d \mathbf{r} = \int_C \mathbf{F}\cdot \d \mathbf{r} - \int_{\tilde{C}}\mathbf{F}\cdot \d \mathbf{r}.
+ \]
+ But $\nabla\times \mathbf{F} = 0$. So
+ \[
+ \int_C \mathbf{F}\cdot \d \mathbf{r} - \int_{\tilde{C}} \mathbf{F}\cdot \d \mathbf{r} = 0,
+ \]
+ or
+ \[
+ \int_C \mathbf{F}\cdot \d \mathbf{r} = \int_{\tilde{C}}\mathbf{F}\cdot \d \mathbf{r}.\qedhere
+ \]
+\end{proof}
+
+\begin{prop}
+ If (ii) $\int_C \mathbf{F}\cdot \d \mathbf{r}$ is independent of $C$ for fixed end points and orientation, then (i) $\mathbf{F} = \nabla f$ for some scalar field $f$.
+\end{prop}
+
+\begin{proof}
+ We fix $\mathbf{a}$ and define $f(\mathbf{r}) = \int_C \mathbf{F}(\mathbf{r}')\cdot\d \mathbf{r}'$ for any curve $C$ from $\mathbf{a}$ to $\mathbf{r}$. Assuming (ii), $f$ is well-defined. For a small change from $\mathbf{r}$ to $\mathbf{r} + \delta \mathbf{r}$, there is a small extension of $C$ by $\delta C$. Then
+ \begin{align*}
+ f(\mathbf{r} + \delta \mathbf{r}) &= \int_{C + \delta C}\mathbf{F} (\mathbf{r}')\cdot \d \mathbf{r}'\\
+ &= \int_C \mathbf{F}\cdot \d \mathbf{r}' + \int_{\delta C}\mathbf{F}\cdot \d \mathbf{r}'\\
+ &= f(\mathbf{r}) + \mathbf{F}(\mathbf{r})\cdot \delta \mathbf{r} + o(\delta \mathbf{r}).
+ \end{align*}
+ So
+ \[
+ \delta f = f(\mathbf{r} + \delta \mathbf{r}) - f(\mathbf{r}) = \mathbf{F}(\mathbf{r})\cdot \delta \mathbf{r} + o(\delta \mathbf{r}).
+ \]
+ But the definition of grad is exactly
+ \[
+ \delta f = \nabla f\cdot \delta \mathbf{r} + o(\delta \mathbf{r}).
+ \]
+ So we have $\mathbf{F} = \nabla f$.
+\end{proof}
+Note that these results assume $\mathbf{F}$ is defined on the whole of $\R^3$. They also work if $\mathbf{F}$ is defined on a \emph{simply connected} domain $D$, i.e.\ a subset of $\R^3$ without holes. By definition, this means that any two curves $C, \tilde{C}$ with fixed end points can be smoothly deformed into one another (alternatively, any loop can be shrunk to a point).
+
+If we have a smooth transformation from $C$ to $\tilde{C}$, the process sweeps out a surface bounded by $C$ and $\tilde{C}$. This is required by the proof that (iii) $\Rightarrow$ (ii).
+
+If $D$ is not simply connected, then we obtain a multi-valued $f(\mathbf{r})$ on $D$ in general (for the proof (ii) $\Rightarrow$ (i)). However, we can choose to restrict to a subset $D_0\subseteq D$ such that $f(\mathbf{r})$ is single-valued on $D_0$.
+
+\begin{eg}
+ Take
+ \[
+ \mathbf{F} = \left(\frac{- y}{x^2 + y^2}, \frac{x}{x^2 + y^2}, 0\right).
+ \]
+ This obeys $\nabla\times \mathbf{F} = 0$, and is defined on $D = \R^3 \setminus \{z\text{-axis}\}$, which is not simply-connected. We can also write
+ \[
+ \mathbf{F} = \nabla f,
+ \]
+ where
+ \[
+ f = \tan^{-1}\frac{y}{x},
+ \]
+ which is multi-valued. If we integrate $\mathbf{F}$ around the closed loop $x^2 + y^2 = 1$, $z = 0$, i.e.\ a circle about the $z$-axis, the integral gives $2\pi$, as opposed to the expected $0$ for a conservative field. This shows that the simply-connected-domain criterion is important!
+
+ However $f$ can be single-valued if we restrict it to
+ \[
+ D_0 = \R^3 \setminus \{\text{half-plane }x \geq 0,\, y = 0\},
+ \]
+ which is simply connected. (Draw and check!) Any closed curve we can draw in this domain will have an integral of $0$ (the circle mentioned above no longer lies in $D_0$, since it crosses the removed half-plane!).
+\end{eg}
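+The $2\pi$ line integral above is easy to verify numerically (my own check):
+
```python
import math

# ∮ F·dr around the unit circle x² + y² = 1, z = 0, for
# F = (−y, x, 0)/(x² + y²). The answer is 2π, not 0, because the
# circle cannot be shrunk to a point without crossing the z-axis.
N = 100_000
total = 0.0
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t) * (2 * math.pi / N), math.cos(t) * (2 * math.pi / N)
    r2 = x * x + y * y
    total += (-y / r2) * dx + (x / r2) * dy

assert abs(total - 2 * math.pi) < 1e-9
```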
+
+\subsection{Conservation laws}
+\begin{defi}[Conservation equation]
+ Suppose we are interested in a quantity $Q$. Let $\rho(\mathbf{r}, t)$ be the amount of stuff per unit volume and $\mathbf{j}(\mathbf{r}, t)$ be the flow rate of the quantity (e.g.\ if $Q$ is charge, $\mathbf{j}$ is the current density).
+
+ The conservation equation is
+ \[
+ \frac{\partial \rho}{\partial t} + \nabla\cdot \mathbf{j} = 0.
+ \]
+\end{defi}
+This is stronger than the claim that the total amount of $Q$ in the universe is fixed. It says that $Q$ cannot just disappear here and appear elsewhere. It must continuously flow out.
+
+In particular, let $V$ be a fixed time-independent volume with boundary $S = \partial V$. Then
+\[
+ Q(t) = \int_V \rho (\mathbf{r}, t)\; \d V.
+\]
+The rate of change of the amount of $Q$ in $V$ is then
+\[
+ \frac{\d Q}{\d t} = \int_V \frac{\partial \rho }{\partial t}\;\d V = -\int_V \nabla \cdot \mathbf{j}\;\d V = -\int_S \mathbf{j}\cdot \d \mathbf{S},
+\]
+by the divergence theorem. So this states that the rate of change of the amount of $Q$ in $V$ is minus the flux of $\mathbf{j}$ out of the surface, i.e.\ $Q$ cannot just disappear, but must smoothly flow out.
+
+In particular, if $V$ is the whole universe (i.e.\ $\R^3$), and $\mathbf{j}\to 0$ sufficiently rapidly as $|\mathbf{r}| \to \infty$, then we can calculate the total amount of $Q$ in the universe by taking $V$ to be a solid sphere of radius $R$ and taking the limit as $R\to \infty$. Then the surface integral $\to 0$, and the equation states that
+\[
+ \frac{\d Q}{\d t} = 0,
+\]
+i.e.\ the total amount of $Q$ in the universe is conserved.
+\begin{eg}
+ If $\rho(\mathbf{r}, t)$ is the charge density (i.e.\ $\rho\delta V$ is the amount of charge in a small volume $\delta V$), then $Q(t)$ is the total charge in $V$. $\mathbf{j}(\mathbf{r}, t)$ is the electric current density. So $\mathbf{j}\cdot \d \mathbf{S}$ is the charge flowing through $\delta S$ per unit time.
+\end{eg}
+
+\begin{eg}
+ Let $\mathbf{j} = \rho \mathbf{u}$ with $\mathbf{u}$ being the velocity field. Then $(\rho\mathbf{u}\; \delta t)\cdot \delta \mathbf{S}$ is equal to the mass of fluid crossing $\delta S$ in time $\delta t$. So
+ \[
+ \frac{\d Q}{\d t} = -\int_S \mathbf{j}\cdot \d \mathbf{S}
+ \]
+ does indeed imply the conservation of mass. The conservation equation in this case is
+ \[
+ \frac{\partial \rho}{\partial t} + \nabla\cdot (\rho \mathbf{u}) = 0
+ \]
+ For the case where $\rho$ is constant and uniform (i.e.\ independent of $\mathbf{r}$ and $t$), we get that $\nabla\cdot \mathbf{u} = 0$. We say that the fluid is \emph{incompressible}.
+\end{eg}
+
+\section{Orthogonal curvilinear coordinates}
+\subsection{Line, area and volume elements}
+In this chapter, we study funny coordinate systems. A coordinate system is, roughly speaking, a way to specify a point in space by a set of (usually 3) numbers. We can think of this as a function $\mathbf{r}(u, v, w)$.
+
+By the chain rule, we have
+\[
+ \d \mathbf{r} = \frac{\partial \mathbf{r}}{\partial u}\d u + \frac{\partial \mathbf{r}}{\partial v}\d v + \frac{\partial \mathbf{r}}{\partial w}\d w.
+\]
+For a good parametrization,
+\[
+ \frac{\partial \mathbf{r}}{\partial u}\cdot \left(\frac{\partial \mathbf{r}}{\partial v}\times \frac{\partial \mathbf{r}}{\partial w}\right) \not = 0,
+\]
+i.e.\ $\frac{\partial \mathbf{r}}{\partial u}, \frac{\partial \mathbf{r}}{\partial v}$ and $\frac{\partial \mathbf{r}}{\partial w}$ are linearly independent. These vectors are tangent to the curves parametrized by $u, v, w$ respectively when the other two are being fixed.
+
+Even better, they should be orthogonal:
+\begin{defi}[Orthogonal curvilinear coordinates]
+ $u, v, w$ are \emph{orthogonal curvilinear} coordinates if the tangent vectors are orthogonal.
+\end{defi}
+We can then set
+\[
+ \frac{\partial \mathbf{r}}{\partial u} = h_u \mathbf{e}_u, \quad \frac{\partial \mathbf{r}}{\partial v} = h_v \mathbf{e}_v,\quad \frac{\partial \mathbf{r}}{\partial w} = h_w \mathbf{e}_w,
+\]
+with $h_u, h_v, h_w > 0$ and $\mathbf{e}_u, \mathbf{e}_v, \mathbf{e}_w$ forming an \emph{orthonormal} right-handed basis (i.e.\ $\mathbf{e}_u \times \mathbf{e}_v = \mathbf{e}_w$). Then
+\[
+ \d \mathbf{r} = h_u \mathbf{e}_u \;\d u + h_v \mathbf{e}_v\;\d v + h_w\mathbf{e}_w \;\d w,
+\]
+and $h_u, h_v, h_w$ determine the changes in length along each orthogonal direction resulting from changes in $u, v, w$. Note that clearly by definition, we have
+\[
+ h_u = \left|\frac{\partial \mathbf{r}}{\partial u}\right|.
+\]
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item In cartesian coordinates, $\mathbf{r}(x, y, z) = x\hat{\mathbf{i}} + y\hat{\mathbf{j}} + z\hat{\mathbf{k}}$. Then $h_x = h_y = h_z = 1$, and $\mathbf{e}_x = \hat{\mathbf{i}}, \mathbf{e}_y = \hat{\mathbf{j}}$ and $\mathbf{e}_z = \hat{\mathbf{k}}$.
+ \item In cylindrical polars, $\mathbf{r}(\rho, \varphi, z) = \rho[\cos \varphi\hat{\mathbf{i}} + \sin \varphi \hat{\mathbf{j}}] + z \hat{\mathbf{k}}$. Then $h_\rho = h_z = 1$, and
+ \[
+ h_\varphi = \left|\frac{\partial \mathbf{r}}{\partial \varphi}\right| = |(-\rho \sin\varphi, \rho \cos \varphi, 0)| = \rho.
+ \]
+ The basis vectors $\mathbf{e}_\rho, \mathbf{e}_\varphi, \mathbf{e}_z$ are as in section 1.
+ \item In spherical polars,
+ \[
+ \mathbf{r}(r, \theta, \varphi) = r(\cos \varphi\sin \theta\hat{\mathbf{i}} + \sin \theta\sin \varphi\hat{\mathbf{j}} + \cos \theta \hat{\mathbf{k}}).
+ \]
+ Then $h_r = 1, h_\theta = r$ and $h_\varphi = r\sin \theta$.
+ \end{enumerate}
+\end{eg}
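+The scale factors above can be recovered numerically from the definition $h_u = |\partial \mathbf{r}/\partial u|$. A quick finite-difference check of my own, for spherical polars:
+
```python
import math

H = 1e-6

def pos(r, theta, phi):
    """Spherical polars: r(r, θ, φ) in Cartesian components."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def h(i, p):
    """|∂r/∂(i-th coordinate)| by a central difference."""
    q = list(p); q[i] += H
    s = list(p); s[i] -= H
    a, b = pos(*q), pos(*s)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) / (2 * H)

r, theta, phi = 2.0, 0.8, 1.3
p = (r, theta, phi)
assert abs(h(0, p) - 1) < 1e-6                      # h_r = 1
assert abs(h(1, p) - r) < 1e-6                      # h_θ = r
assert abs(h(2, p) - r * math.sin(theta)) < 1e-6    # h_φ = r sin θ
```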
+
+Consider a surface with $w$ constant and parametrised by $u$ and $v$. The vector area element is
+\[
+ \d \mathbf{S} = \frac{\partial \mathbf{r}}{\partial u}\times \frac{\partial \mathbf{r}}{\partial v}\;\d u\;\d v = h_u \mathbf{e}_u \times h_v \mathbf{e}_v \;\d u\;\d v = h_uh_v \mathbf{e}_w \;\d u\;\d v.
+\]
+We interpret this as $\delta S$ being a small rectangle with sides approximately $h_u\, \delta u$ and $h_v\, \delta v$. The volume element is
+\[
+ \d V = \frac{\partial \mathbf{r}}{\partial u}\cdot \left(\frac{\partial \mathbf{r}}{\partial v}\times \frac{\partial \mathbf{r}}{\partial w}\right)\;\d u\;\d v\;\d w = h_uh_vh_w \;\d u\;\d v\;\d w,
+\]
+i.e.\ a small cuboid with sides $h_u \delta u$, $h_v\delta v$ and $h_w \delta w$ respectively.
+
+\subsection{Grad, Div and Curl}
+Consider $f(\mathbf{r}(u, v, w))$ and compare
+\[
+ \d f = \frac{\partial f}{\partial u}\;\d u + \frac{\partial f}{\partial v}\;\d v + \frac{\partial f}{\partial w}\;\d w,
+\]
+with $\d f = (\nabla f)\cdot \d \mathbf{r}$. Since we know that
+\[
+ \d \mathbf{r} = \frac{\partial \mathbf{r}}{\partial u}\;\d u + \frac{\partial \mathbf{r}}{\partial v}\;\d v + \frac{\partial \mathbf{r}}{\partial w}\;\d w = h_u \mathbf{e}_u \;\d u + h_v \mathbf{e}_v \;\d v + h_w \mathbf{e}_w \;\d w,
+\]
+we can compare the terms to obtain
+\begin{prop}
+ \[
+ \nabla f = \frac{1}{h_u} \frac{\partial f}{\partial u} \mathbf{e}_u + \frac{1}{h_v} \frac{\partial f}{\partial v}\mathbf{e}_v + \frac{1}{h_w} \frac{\partial f}{\partial w}\mathbf{e}_w.
+ \]
+\end{prop}
+\begin{eg}
+ Take $f = r\sin \theta\cos \varphi$ in spherical polars. Then
+ \begin{align*}
+ \nabla f &= \sin \theta\cos \varphi\,\mathbf{e}_r + \frac{1}{r}(r\cos \theta\cos \varphi)\,\mathbf{e}_\theta + \frac{1}{r\sin \theta}(-r\sin \theta\sin \varphi)\,\mathbf{e}_\varphi\\
+ &= \cos\varphi(\sin \theta \,\mathbf{e}_r + \cos \theta \,\mathbf{e}_\theta) - \sin \varphi \,\mathbf{e}_\varphi.
+ \end{align*}
+\end{eg}
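As a sanity check on this example: $f = r\sin\theta\cos\varphi$ is just the Cartesian coordinate $x$, so converting the curvilinear components of $\nabla f$ back to Cartesian should give exactly $\hat{\mathbf{i}} = (1, 0, 0)$. A small pure-Python sketch (variable names are ours):

```python
import math

theta, phi = 0.8, 0.3   # arbitrary test point (r drops out of the answer)

# Spherical basis vectors expressed in Cartesian components
e_r = (math.sin(theta)*math.cos(phi), math.sin(theta)*math.sin(phi), math.cos(theta))
e_theta = (math.cos(theta)*math.cos(phi), math.cos(theta)*math.sin(phi), -math.sin(theta))
e_phi = (-math.sin(phi), math.cos(phi), 0.0)

# Components of grad f from the curvilinear formula, f = r sin(theta) cos(phi)
c_r = math.sin(theta) * math.cos(phi)
c_theta = math.cos(theta) * math.cos(phi)
c_phi = -math.sin(phi)

# Assemble grad f in Cartesian components; expect (1, 0, 0)
grad = tuple(c_r*a + c_theta*b + c_phi*c for a, b, c in zip(e_r, e_theta, e_phi))
```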
+
+Hence the differential operator is
+\begin{prop}
+ \[
+ \nabla = \frac{1}{h_u}\mathbf{e}_u \frac{\partial }{\partial u} + \frac{1}{h_v}\mathbf{e}_v\frac{\partial }{\partial v} + \frac{1}{h_w}\mathbf{e}_w \frac{\partial}{\partial w}.
+ \]
+\end{prop}
+We can apply this to a vector field
+\[
+ \mathbf{F} = F_u \mathbf{e}_u + F_v \mathbf{e}_v + F_w \mathbf{e}_w
+\]
+using scalar or vector products to obtain
+\begin{prop}
+ \begin{align*}
+ \nabla \times \mathbf{F} &= \frac{1}{h_vh_w}\left[\frac{\partial}{\partial v}(h_wF_w) - \frac{\partial }{\partial w}(h_vF_v)\right]\mathbf{e}_u + \text{ two similar terms}\\
+ &= \frac{1}{h_uh_vh_w}
+ \begin{vmatrix} h_u\mathbf{e}_u & h_v\mathbf{e}_v & h_w \mathbf{e}_w\\
+ \frac{\partial}{\partial u} & \frac{\partial}{\partial v} & \frac{\partial}{\partial w}\\
+ h_u F_u & h_v F_v & h_wF_w
+ \end{vmatrix}
+ \end{align*}
+ and
+ \[
+ \nabla\cdot \mathbf{F} = \frac{1}{h_uh_vh_w}\left[\frac{\partial}{\partial u}(h_vh_wF_u) + \text{ two similar terms}\right].
+ \]
+\end{prop}
+There are several ways to obtain these formulae. We can
+
+\begin{proof}(non-examinable)
+ \begin{enumerate}
+ \item Apply $\nabla\cdot$ or $\nabla\times$ and differentiate the basis vectors explicitly.
+ \item First, apply $\nabla\cdot$ or $\nabla\times$, but calculate the result by writing $\mathbf{F}$ in terms of $\nabla u, \nabla v$ and $\nabla w$ in a suitable way. Then use $\nabla\times \nabla f = 0$ and $\nabla\cdot (\nabla\times \mathbf{F}) = 0$.
+ \item Use the integral expressions for div and curl.
+
+ Recall that
+ \[
+ \mathbf{n}\cdot \nabla \times \mathbf{F} = \lim_{A \to 0}\frac{1}{A}\int_{\partial A}\mathbf{F}\cdot \d \mathbf{r}.
+ \]
+ So to calculate the curl, we first find the $\mathbf{e}_w$ component.
+
+ Consider an area with $w$ fixed and change $u$ by $\delta u$ and $v$ by $\delta v$. Then this has an area of $h_u h_v \delta u\delta v$ with normal $\mathbf{e}_w$. Let $C$ be its boundary.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (2, 0) node [right] {$u$} node [pos = 0.5, below] {$\delta u$};
+ \draw [->] (0, -0.5) -- (0, 2) node [above] {$v$} node [pos = 0.5, left] {$\delta v$};
+ \draw [->-=0.6] (1.5, 0) -- (1.5, 1.5) node [pos = 0.5, right] {$C$};
+ \draw [->-=0.6] (1.5, 1.5) -- (0, 1.5);
+ \draw [->-=0.6] (0, 1.5) -- (0, 0);
+ \draw [->-=0.6] (0, 0) -- (1.5, 0);
+ \end{tikzpicture}
+ \end{center}
+ We then integrate around the curve $C$. We split the curve $C$ up into 4 parts (corresponding to the four sides), and take linear approximations by assuming $F$ and $h$ are constant when moving through each horizontal/vertical segment.
+ \begin{align*}
+ \int_C \mathbf{F}\cdot \d \mathbf{r} &\approx F_u(u, v) h_u(u, v)\;\delta u + F_v(u + \delta u, v) h_v(u + \delta u, v)\;\delta v \\
+ &- F_u(u, v + \delta v)h_u (u, v + \delta v)\; \delta u - F_v(u, v)h_v(u, v)\;\delta v\\
+ &\approx \left[\frac{\partial}{\partial u}(h_vF_v) - \frac{\partial}{\partial v}(h_uF_u)\right]\delta u\delta v.
+ \end{align*}
+ Dividing by the area and taking the limit as the area tends to $0$, we obtain
+ \[
+ \lim_{A \to 0} \frac{1}{A}\int_C \mathbf{F}\cdot \d \mathbf{r} = \frac{1}{h_uh_v}\left[\frac{\partial}{\partial u}(h_vF_v) - \frac{\partial}{\partial v}(h_u F_u)\right].
+ \]
+ So, by the integral definition of curl,
+ \[
+ \mathbf{e}_w\cdot \nabla\times \mathbf{F} = \frac{1}{h_uh_v}\left[\frac{\partial }{\partial u}(h_vF_v) - \frac{\partial }{\partial v}(h_uF_u)\right],
+ \]
+ and similarly for other components.
+
+ We can find the divergence similarly.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ Let $\mathbf{A} = \frac{1}{r}\tan \frac{\theta}{2} \mathbf{e}_\varphi$ in spherical polars. Then
+ \[
+ \nabla\times \mathbf{A} = \frac{1}{r^2\sin \theta}
+ \begin{vmatrix}
+ \mathbf{e}_r & r\mathbf{e}_\theta & r\sin \theta \mathbf{e}_\varphi\\
+ \frac{\partial}{\partial r} & \frac{\partial}{\partial \theta} & \frac{\partial}{\partial \varphi}\\
+ 0 & 0 & r\sin \theta \cdot \frac{1}{r}\tan \frac{\theta}{2}
+ \end{vmatrix} = \frac{\mathbf{e}_r}{r^2\sin \theta}\frac{\partial}{\partial \theta}\left[\sin \theta\tan \frac{\theta}{2}\right] = \frac{1}{r^2}\mathbf{e}_r.
+ \]
+\end{eg}
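This curl can also be cross-checked numerically, independently of the curvilinear formula: convert $\mathbf{A}$ to Cartesian components, take the curl by central differences, and compare with $\mathbf{e}_r/r^2$. A pure-Python sketch (function names are ours):

```python
import math

def A(x, y, z):
    """The field A = (1/r) tan(theta/2) e_phi, in Cartesian components."""
    r = math.sqrt(x*x + y*y + z*z)
    theta = math.acos(z / r)
    phi = math.atan2(y, x)
    mag = math.tan(theta / 2) / r
    return (-mag * math.sin(phi), mag * math.cos(phi), 0.0)

def curl(F, p, h=1e-5):
    """Numerical curl of F at point p, by central differences."""
    def d(comp, axis):          # partial F_comp / partial x_axis
        q_plus = list(p); q_plus[axis] += h
        q_minus = list(p); q_minus[axis] -= h
        return (F(*q_plus)[comp] - F(*q_minus)[comp]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

p = [0.6, 0.5, 0.4]                       # arbitrary test point
r = math.sqrt(sum(c * c for c in p))
expected = tuple(c / r**3 for c in p)     # e_r / r^2 in Cartesian
got = curl(A, p)
```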
+
+\section{Gauss' Law and Poisson's equation}
+\subsection{Laws of gravitation}
+Consider a distribution of mass producing a gravitational force $\mathbf{F}$ on a point mass $m$ at $\mathbf{r}$. The total force is a sum of contributions from each part of the mass distribution, and is proportional to $m$. Write
+\[
+ \mathbf{F} = m\mathbf{g}(\mathbf{r}),
+\]
+\begin{defi}[Gravitational field]
+ $\mathbf{g}(\mathbf{r})$ is the \emph{gravitational field}, \emph{acceleration due to gravity}, or \emph{force per unit mass}.
+\end{defi}
+The gravitational field is conservative, i.e.
+\[
+ \oint_C \mathbf{g}\cdot \d \mathbf{r} = 0.
+\]
+This means that if you walk around the place and return to the same position, the total work done is $0$ and you did not gain energy, i.e.\ gravitational potential energy is conserved.
+
+Gauss' law tells us what this gravitational field looks like:
+\begin{law}[Gauss' law for gravitation]
+ Given any volume $V$ bounded by closed surface $S$,
+ \[
+ \int_S \mathbf{g}\cdot \d \mathbf{S} = -4\pi GM,
+ \]
+ where $G$ is Newton's gravitational constant, and $M$ is the total mass contained in $V$.
+\end{law}
+These equations determine $\mathbf{g}(\mathbf{r})$ from a mass distribution.
+
+\begin{eg}
+ We can obtain Newton's law of gravitation from Gauss' law together with an assumption about symmetry.
+
+ Consider a total mass $M$ distributed with a spherical symmetry about the origin $\mathbf{O}$, with all the mass contained within some radius $r = a$. By spherical symmetry, we have $\mathbf{g}(\mathbf{r}) = g(r)\hat{\mathbf{r}}$, where $r = |\mathbf{r}|$.
+
+ Consider Gauss' law with $S$ being a sphere of radius $r = R > a$. Then $\hat{\mathbf{n}} = \hat{\mathbf{r}}$. So
+ \[
+ \int_S \mathbf{g}\cdot \d \mathbf{S} = \int_S g(R)\hat{\mathbf{r}}\cdot \hat{\mathbf{r}}\;\d S = \int_S g(R)\;\d S = 4\pi R^2 g(R).
+ \]
+ By Gauss' law, we obtain
+ \[
+ 4\pi R^2 g(R) = -4\pi GM.
+ \]
+ So
+ \[
+ g(R) = -\frac{GM}{R^2}
+ \]
+ for $R > a$.
+
+ Therefore the gravitational force on a mass $m$ at $\mathbf{r}$ is
+ \[
+ \mathbf{F}(\mathbf{r}) = -\frac{GMm}{r^2}\hat{\mathbf{r}}.
+ \]
+ If we take the limit as $a\to 0$, we get a point mass $M$ at the origin. Then we recover Newton's law of gravitation for point masses.
+\end{eg}
+
+The condition $\oint_C \mathbf{g}\cdot \d \mathbf{r} = 0$ for any closed curve $C$ can be re-written by Stokes' theorem as
+\[
+ \int_S \nabla\times \mathbf{g}\cdot \d \mathbf{S} = 0,
+\]
+where $S$ is bounded by the closed curve $C$. This is true for arbitrary $S$. So
+\[
+ \nabla\times \mathbf{g} = 0.
+\]
+In our example above, $\nabla\times \mathbf{g} = 0$ followed from the spherical symmetry, but here we have shown that it holds in general.
+
+Note that we exploited symmetry to solve Gauss' law. However, if the mass distribution is not sufficiently symmetrical, Gauss' law in integral form can be difficult to use. But we can rewrite it in differential form. Suppose
+\[
+ M = \int_V \rho(\mathbf{r})\;\d V,
+\]
+where $\rho$ is the mass density. Then by Gauss' theorem
+\[
+ \int_S \mathbf{g} \cdot \d \mathbf{S} = -4\pi GM \Rightarrow \int_V \nabla\cdot \mathbf{g}\;\d V = \int_V -4\pi G\rho \;\d V.
+\]
+Since this is true for all $V$, we must have
+\begin{law}[Gauss' Law for gravitation in differential form]
+ \[
+ \nabla\cdot \mathbf{g} = -4\pi G\rho.
+ \]
+\end{law}
+Since $\nabla\times \mathbf{g} = 0$, we can introduce a gravitational potential $\varphi(\mathbf{r})$ with $\mathbf{g} = -\nabla \varphi$. Then Gauss' Law becomes
+\[
+ \nabla^2 \varphi = 4\pi G\rho.
+\]
+In the example with spherical symmetry, we can solve that
+\[
+ \varphi(r) = -\frac{GM}{r}
+\]
+for $r > a$.
+\subsection{Laws of electrostatics}
+Consider a distribution of electric charge at rest. It produces a force on a charge $q$, at rest at $\mathbf{r}$, which is proportional to $q$.
+\begin{defi}[Electric field]
+ The force produced by electric charges on another charge $q$ is $\mathbf{F} = q\mathbf{E}(\mathbf{r})$, where $\mathbf{E}(\mathbf{r})$ is the \emph{electric field}, or force per unit charge.
+\end{defi}
+Again, this is conservative. So
+\[
+ \oint_C \mathbf{E}\cdot \d \mathbf{r} = 0
+\]
+for any closed curve $C$. It also obeys
+\begin{law}[Gauss' law for electrostatic forces]
+ \[
+ \int_S \mathbf{E}\cdot \d \mathbf{S} = \frac{Q}{\varepsilon_0},
+ \]
+ where $Q$ is the total charge contained in the volume bounded by $S$, and $\varepsilon_0$ is the \emph{permittivity of free space}, or \emph{electric constant}.
+\end{law}
+
+Then we can write it in differential form, as in the gravitational case.
+\begin{law}[Gauss' law for electrostatic forces in differential form]
+ \[
+ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}.
+ \]
+\end{law}
+Assuming constant (or no) magnetic field, we have
+\[
+ \nabla\times \mathbf{E} = 0.
+\]
+So we can write $\mathbf{E} = -\nabla \varphi$.
+\begin{defi}[Electrostatic potential]
+ If we write $\mathbf{E} = -\nabla \varphi$, then $\varphi$ is the \emph{electrostatic potential}, and
+ \[
+ \nabla^2 \varphi = -\frac{\rho}{\varepsilon_0}.
+ \]
+\end{defi}
+
+\begin{eg}
+ Take a spherically symmetric charge distribution about $O$ with total charge $Q$. Suppose all charge is contained within a radius $r = a$. Then similar to the gravitational case, we have
+ \[
+ \mathbf{E}(\mathbf{r}) = \frac{Q\hat{\mathbf{r}}}{4\pi \varepsilon_0 r^2},
+ \]
+ and
+ \[
+ \varphi(\mathbf{r}) = \frac{Q}{4\pi\varepsilon_0 r}.
+ \]
+ As $a \to 0$, we get \emph{point charges}. From $\mathbf{E}$, we can recover Coulomb's law for the force on another charge $q$ at $\mathbf{r}$:
+ \[
+ \mathbf{F} = q\mathbf{E} =\frac{qQ\hat{\mathbf{r}}}{4\pi \varepsilon_0 r^2}.
+ \]
+\end{eg}
+
+\begin{eg}[Line charge]
+ Consider an infinite line with uniform charge density \emph{per unit length} $\sigma$.
+
+ We use cylindrical polar coordinates:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, -2) -- (0, 2) node [above] {$z$};
+ \draw [->] (0, 0) -- (1, 0) node [right] {$r = \sqrt{x^2 + y^2}$};
+
+ \draw [->] (0.1, 1) -- (1, 1) node [right] {$E$};
+ \draw [->] (-0.1, 1) -- (-1, 1);
+ \draw [->] (-0.1, 0) -- (-1, 0);
+ \draw [->] (0.1, -1) -- (1, -1);
+ \draw [->] (-0.1, -1) -- (-1, -1);
+
+ \draw [dashed] (-0.5, 1.5) -- (0.5, 1.5) -- (0.5, -1.5) -- (-0.5, -1.5) -- cycle;
+ \end{tikzpicture}
+ \end{center}
+ By symmetry, the field is radial, i.e.
+ \[
+ \mathbf{E}(r) = E(r) \hat{\mathbf{r}}.
+ \]
+ Pick $S$ to be a cylinder of length $L$ and radius $r$. We know that the end caps do not contribute to the flux since the field lines are perpendicular to the normal. Also, the curved surface has area $2\pi r L$. Then by Gauss' law in integral form,
+ \[
+ \int_S\mathbf{E}\cdot \d \mathbf{S} = E(r)2\pi rL = \frac{\sigma L}{\varepsilon_0}.
+ \]
+ So
+ \[
+ \mathbf{E}(r) = \frac{\sigma}{2\pi \varepsilon_0 r} \hat{\mathbf{r}}.
+ \]
+ Note that the field varies as $1/r$, not $1/r^2$. Intuitively, this is because we have one more dimension of ``stuff'' compared to the point charge, so the field does not drop as fast.
+\end{eg}
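The $1/r$ fall-off can also be recovered by summing Coulomb contributions from each element $\d z$ of the line directly (by symmetry only the radial component survives), and comparing with the Gauss-law answer $\sigma/(2\pi\varepsilon_0 r)$. A pure-Python sketch, in illustrative units with $\varepsilon_0 = 1$ (names are ours):

```python
import math

eps0 = 1.0     # work in units where epsilon_0 = 1 (illustrative choice)
sigma = 1.0    # charge per unit length
rho = 1.5      # distance from the line

# Radial field at distance rho: trapezoidal sum of contributions
# dE = sigma dz / (4 pi eps0) * rho / (rho^2 + z^2)^{3/2}
L, N = 200.0, 200001     # truncate the line at |z| = L; tail is O(1/L^2)
dz = 2 * L / (N - 1)
E = 0.0
for i in range(N):
    z = -L + i * dz
    w = 0.5 if i in (0, N - 1) else 1.0   # trapezoidal end weights
    E += w * sigma * rho / (4 * math.pi * eps0 * (rho**2 + z**2) ** 1.5) * dz

expected = sigma / (2 * math.pi * eps0 * rho)
```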
+
+\subsection{Poisson's Equation and Laplace's equation}
+\begin{defi}[Poisson's equation]
+ \emph{Poisson's equation} is
+ \[
+ \nabla^2 \varphi = -\rho,
+ \]
+ where $\rho$ is given and $\varphi(\mathbf{r})$ is to be found.
+\end{defi}
+This is the form of the equations for gravity and electrostatics, with $-4\pi G \rho$ and $\rho/\varepsilon_0$ in place of $\rho$ respectively.
+
+When $\rho = 0$, we get
+\begin{defi}[Laplace's equation]
+ Laplace's equation is
+ \[
+ \nabla^2 \varphi = 0.
+ \]
+\end{defi}
+One example is irrotational and incompressible fluid flow: if the velocity is $\mathbf{u}(\mathbf{r})$, then irrotationality gives $\mathbf{u} = \nabla \varphi$ for some \emph{velocity potential} $\varphi$. Since it is incompressible, $\nabla\cdot \mathbf{u} = 0$ (cf.\ previous chapters). So $\nabla^2 \varphi = 0$.
+
+The expressions for $\nabla^2$ can be found in non-Cartesian coordinates, but are a bit complicated.
+
+We are concerned here mainly with cases exhibiting spherical or cylindrical symmetry (we use $r$ for the radial coordinate in both cases), i.e.\ when $\varphi(\mathbf{r})$ has spherical or cylindrical symmetry. Write $\varphi = \varphi(r)$. Then
+\[
+ \nabla \varphi = \varphi'(r)\hat{\mathbf{r}}.
+\]
+Then Laplace's equation $\nabla^2 \varphi = 0$ becomes an ordinary differential equation.
+\begin{itemize}
+ \item For spherical symmetry, using the chain rule, we have
+ \[
+ \nabla^2 \varphi = \varphi'' + \frac{2}{r}\varphi' = \frac{1}{r^2}(r^2\varphi')' = 0.
+ \]
+ Then the general solution is
+ \[
+ \varphi = \frac{A}{r} + B.
+ \]
+ \item For cylindrical symmetry, with $r^2 = x_1^2 + x_2^2$, we have
+ \[
+ \nabla^2 \varphi = \varphi'' + \frac{1}{r}\varphi' = \frac{1}{r}(r\varphi')' = 0.
+ \]
+ Then
+ \[
+ \varphi = A\ln r + B.
+ \]
+\end{itemize}
+Then solutions to Poisson's equations can be obtained in a similar way, i.e.\ by integrating the differential equations directly, or by adding particular integrals to the solutions above.
+
+For example, for a spherically symmetric solution of $\nabla^2 \varphi = -\rho_0$, with $\rho_0$ constant, recall that $\nabla^2 r^\alpha = \alpha(\alpha + 1)r^{\alpha - 2}$. Taking $\alpha = 2$, we find the particular integral
+\[
+ \varphi = -\frac{\rho_0}{6}r^2,
+\]
+So the general solution with spherical symmetry and constant $\rho_0$ is
+\[
+ \varphi(r) = \frac{A}{r} + B - \frac{1}{6}\rho_0 r^2.
+\]
+To determine $A, B$, we must specify boundary conditions. If $\varphi$ is defined on all of $\R^3$, we often require $\varphi \to 0$ as $|\mathbf{r}| \to \infty$. If $\varphi$ is defined on a bounded volume $V$, then there are two kinds of common boundary conditions on $\partial V$:
+\begin{itemize}
+ \item Specify $\varphi$ on $\partial V$: a \emph{Dirichlet} condition.
+ \item Specify $\mathbf{n}\cdot \nabla \varphi$ (sometimes written as $\frac{\partial \varphi}{\partial \mathbf{n}}$): a \emph{Neumann} condition, where $\mathbf{n}$ is the outward normal on $\partial V$.
+\end{itemize}
+The type of boundary conditions we get depends on the physical content of the problem. For example, specifying $\frac{\partial \varphi}{\partial \mathbf{n}}$ corresponds to specifying the normal component of $\mathbf{g}$ or $\mathbf{E}$.
+
+We can also specify different boundary conditions on different boundary components.
+
+\begin{eg}
+ We might have a spherically symmetric distribution with constant $\rho_0$, defined in $a \leq r \leq b$, with $\varphi(a) = 0$ and $\frac{\partial \varphi}{\partial \mathbf{n}}(b) = 0$.
+
+ Then the general solution is
+ \[
+ \varphi(r) = \frac{A}{r} + B - \frac{1}{6}\rho_0 r^2.
+ \]
+ We apply the first boundary condition to obtain
+ \[
+ \frac{A}{a} + B - \frac{1}{6}\rho_0 a^2 = 0.
+ \]
+ The second boundary condition gives
+ \[
+ \mathbf{n}\cdot \nabla \varphi = -\frac{A}{b^2} - \frac{1}{3}\rho_0 b = 0.
+ \]
+ These conditions give
+ \[
+ A = -\frac{1}{3}\rho_0 b^3, \quad B = \frac{1}{6}\rho_0 a^2 + \frac{1}{3}\rho_0 \frac{b^3}{a}.
+ \]
+\end{eg}
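A quick numerical check that constants obtained this way really satisfy both boundary conditions (pure Python; the specific values of $\rho_0$, $a$, $b$ are arbitrary illustrative choices):

```python
rho0, a, b = 2.0, 1.0, 3.0     # arbitrary illustrative values

# phi(r) = A/r + B - rho0 r^2/6; impose phi(a) = 0 and phi'(b) = 0.
A = -rho0 * b**3 / 3           # from  -A/b^2 - rho0 b/3 = 0
B = -A / a + rho0 * a**2 / 6   # from   A/a + B - rho0 a^2/6 = 0

def phi(r):
    return A / r + B - rho0 * r**2 / 6

def dphi(r):                   # phi'(r)
    return -A / r**2 - rho0 * r / 3
```

Both `phi(a)` and `dphi(b)` come out zero, and `B` agrees with $\frac16\rho_0 a^2 + \frac13\rho_0 b^3/a$.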
+
+\begin{eg}
+ We might also be interested in the spherically symmetric solution of
+ \[
+ \nabla^2 \varphi =
+ \begin{cases}
+ -\rho_0 & r \leq a\\
+ 0 & r > a
+ \end{cases}
+ \]
+ with $\varphi$ non-singular at $r = 0$ and $\varphi(r) \to 0$ as $r\to \infty$, and $\varphi, \varphi'$ continuous at $r = a$. This models the gravitational potential on a uniform planet.
+
+ Then the general solution from above is
+ \[
+ \varphi =
+ \begin{cases}
+ \frac{A}{r} + B - \frac{1}{6}\rho_0 r^2 & r \leq a\\
+ \frac{C}{r} + D & r > a.
+ \end{cases}
+ \]
+ Since $\varphi$ is non-singular at $r = 0$, we have $A = 0$. Since $\varphi \to 0$ as $r\to \infty$, $D = 0$. So
+ \[
+ \varphi =
+ \begin{cases}
+ B - \frac{1}{6}\rho_0 r^2 & r \leq a\\
+ \frac{C}{r} & r > a.
+ \end{cases}
+ \]
+ This is the gravitational potential inside and outside a planet of constant density $\rho_0$ and radius $a$: for gravity we replace $\rho_0$ by $-4\pi G\rho_0$, so that $\nabla^2 \varphi = 4\pi G\rho_0$ for $r \leq a$, and the interior solution becomes $B + \frac{1}{6}4\pi G\rho_0 r^2$.
+ We want $\varphi$ and $\varphi'$ to be continuous at $r = a$. So we have
+ \begin{align*}
+ B + \frac{1}{6}4\pi G\rho_0 a^2 &= \frac{C}{a}\\
+ \frac{4}{3}\pi G\rho_0 a &= -\frac{C}{a^2}.
+ \end{align*}
+ The second equation gives $C = -GM$, where $M = \frac{4}{3}\pi a^3\rho_0$ is the total mass. Substituting that into the first equation to find $B$, we get
+ \[
+ \varphi (r) =
+ \begin{cases}
+ \frac{GM}{2a}\left[\left(\frac{r}{a}\right)^2 - 3\right] & r \leq a\\
+ -\frac{GM}{r} & r > a
+ \end{cases}
+ \]
+ Since $g = -\varphi'$, we have
+ \[
+ g(r) =
+ \begin{cases}
+ -\frac{GMr}{a^3}& r \leq a\\
+ -\frac{GM}{r^2} & r > a
+ \end{cases}
+ \]
+ We can plot the potential energy:
+ \begin{center}
+ \begin{tikzpicture}[xscale=1.5]
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$r$};
+ \draw [->] (0, -2) -- (0, 0.5) node [above] {$\varphi(r)$};
+ \draw (0, -1.8) .. controls (2, -1.8) and (1, 0) .. (2.8, 0);
+ \draw [dashed] (1.45, -2) -- +(0, 2) node [above] {$r = a$};
+ \end{tikzpicture}
+ \end{center}
+ We can also plot $-g(r)$, the inward acceleration:
+ \begin{center}
+ \begin{tikzpicture}[xscale=1.5]
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$r$};
+ \draw [->] (0, -0.5) -- (0, 2) node [above] {$-g(r)$};
+ \draw (0, 0) -- (1.45, 1.45);
+ \draw (2.8, 0) parabola (1.45, 1.45);
+ \draw [dashed] (1.45, 1.45) -- +(0, -1.45) node [below] {$r = a$};
+ \end{tikzpicture}
+ \end{center}
+ Alternatively, we can apply Gauss' Law for a flux of $\mathbf{g} = g(r) \mathbf{e}_r$ out of $S$, a sphere of radius $R$. For $R \leq a$,
+ \[
+ \int_S \mathbf{g}\cdot \d \mathbf{S} = 4\pi R^2 g(R) = -4\pi GM\left(\frac{R}{a}\right)^3.
+ \]
+ So
+ \[
+ g(R) = -\frac{GMR}{a^3}.
+ \]
+ For $R \geq a$, we can simply apply Newton's law of gravitation.
+
+ In general, even if the problem has nothing to do with gravitation or electrostatics, if we want to solve $\nabla^2 \varphi = -\rho$ with $\rho$ and $\varphi$ sufficiently symmetric, we can consider the flux of $\nabla \varphi$ out of a surface $S = \partial V$:
+ \[
+ \int_S \nabla \varphi \cdot \d \mathbf{S} = -\int_V \rho \;\d V,
+ \]
+ by divergence theorem. This is called the Gauss Flux method.
+\end{eg}
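The matching conditions in the planet example are easy to verify numerically: the interior and exterior potentials and fields should agree at $r = a$, and $g = -\varphi'$ should hold everywhere. A pure-Python sketch (the values of $G$, $M$, $a$ are arbitrary illustrative choices):

```python
G, M, a = 1.0, 5.0, 2.0     # arbitrary illustrative values

def phi_in(r):              # potential inside the planet, r <= a
    return G * M / (2 * a) * ((r / a)**2 - 3)

def phi_out(r):             # potential outside, r > a
    return -G * M / r

def g_in(r):                # field inside
    return -G * M * r / a**3

def g_out(r):               # field outside
    return -G * M / r**2
```

Continuity of $\varphi$ and $\varphi'$ at $r = a$, and $g = -\varphi'$ (checked by central differences), all hold.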
+
+\section{Laplace's and Poisson's equations}
+\subsection{Uniqueness theorems}
+\begin{thm}
+ Consider $\nabla^2 \varphi = - \rho$ for some $\rho (\mathbf{r})$ on a bounded volume $V$ with $S = \partial V$ being a closed surface, with an outward normal $\mathbf{n}$.
+
+ Suppose $\varphi$ satisfies either
+ \begin{enumerate}
+ \item the Dirichlet condition, $\varphi(\mathbf{r}) = f(\mathbf{r})$ on $S$; or
+ \item the Neumann condition, $\frac{\partial \varphi(\mathbf{r})}{\partial \mathbf{n}} = \mathbf{n}\cdot \nabla \varphi = g(\mathbf{r})$ on $S$,
+ \end{enumerate}
+ where $f, g$ are given. Then
+ \begin{enumerate}
+ \item $\varphi(\mathbf{r})$ is unique
+ \item $\varphi(\mathbf{r})$ is unique up to a constant.
+ \end{enumerate}
+\end{thm}
+This theorem is practically important --- if you find a solution by any magical means, you know it is the \emph{only} solution (up to a constant).
+
+Since the proofs for the two boundary conditions are very similar, they will be given together. Where the proof breaks into cases (i) and (ii), these refer to the Dirichlet and Neumann conditions respectively.
+\begin{proof}
+ Let $\varphi_1(\mathbf{r})$ and $\varphi_2(\mathbf{r})$ satisfy Poisson's equation, each obeying the boundary conditions (N) or (D). Then $\Psi(\mathbf{r}) = \varphi_2(\mathbf{r}) - \varphi_1(\mathbf{r})$ satisfies $\nabla^2\Psi = 0$ on $V$ by linearity, and
+ \begin{enumerate}
+ \item $\Psi = 0$ on $S$; or
+ \item $\frac{\partial \Psi}{\partial \mathbf{n}} = 0$ on $S$.
+ \end{enumerate}
+ Combining these two together, we know that $\Psi\frac{\partial \Psi}{\partial \mathbf{n}} = 0$ on the surface. So using the divergence theorem,
+ \[
+ \int_V \nabla\cdot (\Psi\nabla \Psi) \;\d V = \int_S (\Psi\nabla\Psi)\cdot \d \mathbf{S} = 0.
+ \]
+ But
+ \[
+ \nabla\cdot(\Psi\nabla \Psi) = (\nabla\Psi)\cdot (\nabla\Psi) + \Psi\underbrace{\nabla^2\Psi}_{=0} = |\nabla\Psi|^2.
+ \]
+ So
+ \[
+ \int_V |\nabla \Psi|^2\;\d V = 0.
+ \]
+ Since $|\nabla\Psi|^2 \geq 0$, the integral can only vanish if $|\nabla \Psi| = 0$. So $\nabla\Psi = 0$. So $\Psi = c$, a constant on $V$. So
+ \begin{enumerate}
+ \item $\Psi = 0$ on $S$ $\Rightarrow c = 0$. So $\varphi_1 = \varphi_2$ on $V$.
+ \item $\varphi_2(\mathbf{r}) = \varphi_1(\mathbf{r}) + c$, as claimed.\qedhere
+ \end{enumerate}
+\end{proof}
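The uniqueness statement can be illustrated numerically: discretise $\varphi'' = -\rho$ on $[0,1]$ with Dirichlet data $\varphi(0) = \varphi(1) = 0$ and run Jacobi iteration from two very different initial guesses; both converge to the same solution. A pure-Python sketch (function names and parameters are ours, purely illustrative):

```python
def jacobi(phi0, n=30, iters=10000, rho=1.0):
    """Jacobi iteration for phi'' = -rho on (0,1), phi(0) = phi(1) = 0."""
    h = 1.0 / (n + 1)
    phi = list(phi0)                      # interior values phi[0..n-1]
    for _ in range(iters):
        # discrete equation: (phi[i-1] - 2 phi[i] + phi[i+1]) / h^2 = -rho
        phi = [0.5 * ((phi[i-1] if i > 0 else 0.0)
                      + (phi[i+1] if i < n - 1 else 0.0)
                      + h * h * rho)
               for i in range(n)]
    return phi

n = 30
sol1 = jacobi([0.0] * n)
sol2 = jacobi([100.0 * (-1) ** i for i in range(n)])   # wildly different start
diff = max(abs(u - v) for u, v in zip(sol1, sol2))     # should be ~0
```

For $\rho = 1$ the exact solution is $\varphi(x) = x(1-x)/2$, which the converged iterates reproduce at the grid points.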
+We've proven uniqueness. How about existence? It turns out it isn't difficult to craft boundary conditions for which no solution exists.
+
+For example, if we have $\nabla^2 \varphi = -\rho$ on $V$ with the condition $\frac{\partial \varphi}{\partial \mathbf{n}} = g$, then by the divergence theorem,
+\[
+ \int_V \nabla^2\varphi \;\d V = \int_{\partial V} \frac{\partial \varphi}{\partial \mathbf{n}}\;\d S.
+\]
+Using Poisson's equation and the boundary conditions, we have
+\[
+ \int_V \rho \;\d V + \int_{\partial V} g\;\d S = 0.
+\]
+So if $\rho$ and $g$ don't satisfy this equation, then we can't have any solutions.
+
+The theorem can be similarly proved and stated for regions in $\R^2$, $\R^3$, \ldots, by using the definitions of grad, div and the divergence theorem. The result also extends to unbounded domains. To prove it, we can take a sphere of radius $R$ and impose the boundary conditions $|\Psi(\mathbf{r})| = O(1/R)$ or $|\frac{\partial \Psi}{\partial \mathbf{n}}(\mathbf{r})| = O(1/R^2)$ as $R\to \infty$. Then we just take the relevant limits to complete the proof.
+
+Similar results also apply to related equations and different kinds of boundary conditions, e.g.\ Dirichlet conditions on some parts of the boundary and Neumann conditions on others. But we have to analyse these case by case and see if the proof still applies.
+
+The proof uses a special case of the result
+\begin{prop}[Green's first identity]
+ \[
+ \int_S (u\nabla v)\cdot \d \mathbf{S} = \int_V (\nabla u)\cdot (\nabla v)\;\d V + \int_V u\nabla^2 v\;\d V.
+ \]
+\end{prop}
+By swapping $u$ and $v$ around and subtracting the equations, we have
+\begin{prop}[Green's second identity]
+ \[
+ \int_S (u\nabla v - v\nabla u)\cdot\d \mathbf{S} = \int_V (u\nabla^2 v - v\nabla^2 u)\;\d V.
+ \]
+\end{prop}
+These are sometimes useful, but can be easily deduced from the divergence theorem when needed.
+\subsection{Laplace's equation and harmonic functions}
+\begin{defi}[Harmonic function]
+ A \emph{harmonic function} is a solution to Laplace's equation $\nabla^2\varphi = 0$.
+\end{defi}
+These have some very special properties.
+\subsubsection{The mean value property}
+\begin{prop}[Mean value property]
+ Suppose $\varphi(\mathbf{r})$ is harmonic on a region $V$ containing a solid sphere defined by $|\mathbf{r} - \mathbf{a}| \leq R$, with boundary $S_R = \{\mathbf{r} : |\mathbf{r} - \mathbf{a}| = R\}$, for some $R$. Define
+ \[
+ \bar\varphi (R) = \frac{1}{4\pi R^2}\int_{S_R}\varphi(\mathbf{r})\;\d S.
+ \]
+ Then $\varphi(\mathbf{a}) = \bar\varphi (R)$.
+\end{prop}
+In words, this says that the value at the center of a sphere is the average of the values on the surface of the sphere.
+
+\begin{proof}
+ Note that $\bar\varphi (R) \to \varphi(\mathbf{a})$ as $R \to 0$. We take spherical coordinates $(u, \theta, \chi)$ centered on $\mathbf{r} = \mathbf{a}$. The scalar area element (when $u = R$) on $S_R$ is
+ \[
+ \d S = R^2 \sin \theta\;\d \theta\;\d \chi.
+ \]
+ So $\frac{\d S}{R^2}$ is independent of $R$. Write
+ \[
+ \bar\varphi(R) = \frac{1}{4\pi}\int \varphi \;\frac{\d S}{R^2}.
+ \]
+ Differentiate this with respect to $R$, noting that $\d S/R^2$ is independent of $R$. Then we obtain
+ \[
+ \frac{\d}{\d R}\bar\varphi(R) = \frac{1}{4\pi R^2}\int \left.\frac{\partial \varphi}{\partial u}\right|_{u = R}\;\d S
+ \]
+ But
+ \[
+ \frac{\partial\varphi}{\partial u} = \mathbf{e}_u \cdot \nabla \varphi = \mathbf{n}\cdot \nabla \varphi = \frac{\partial\varphi}{\partial \mathbf{n}}
+ \]
+ on $S_R$. So
+ \[
+ \frac{\d}{\d R}\bar\varphi(R) = \frac{1}{4\pi R^2}\int_{S_R} \nabla \varphi \cdot \d \mathbf{S} = \frac{1}{4\pi R^2}\int_{V_R}\nabla^2 \varphi \;\d V = 0
+ \]
+ by divergence theorem. So $\bar \varphi(R)$ does not depend on $R$, and the result follows.
+\end{proof}
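The mean value property is easy to test numerically: $\varphi(\mathbf{r}) = 1/|\mathbf{r} - \mathbf{b}|$ is harmonic away from $\mathbf{b}$, so its average over a sphere not containing $\mathbf{b}$ should equal its value at the centre. A pure-Python midpoint-rule sketch (names and values are ours, illustrative):

```python
import math

b = (3.0, 0.0, 0.0)   # location of the singularity, outside the sphere below
R = 1.0               # sphere of radius R centred at the origin (a = 0)

def phi(p):
    """phi(r) = 1/|r - b|, harmonic away from b."""
    return 1.0 / math.dist(p, b)

# Average phi over the sphere |r| = R with surface measure
# R^2 sin(theta) dtheta dchi, normalised by the total area 4 pi R^2.
n = 400
avg = 0.0
for i in range(n):
    theta = (i + 0.5) * math.pi / n
    for j in range(n):
        chi = (j + 0.5) * 2 * math.pi / n
        p = (R * math.sin(theta) * math.cos(chi),
             R * math.sin(theta) * math.sin(chi),
             R * math.cos(theta))
        avg += phi(p) * math.sin(theta)
avg *= (math.pi / n) * (2 * math.pi / n) / (4 * math.pi)
```

The average comes out as $\varphi(\mathbf{0}) = 1/3$, up to quadrature error.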
+\subsubsection{The maximum (or minimum) principle}
+In this section, we will talk about maxima of functions. It should be clear that the results also hold for minima.
+
+\begin{defi}[Local maximum]
+ We say that $\varphi(\mathbf{r})$ has a \emph{local maximum} at $\mathbf{a}$ if for some $\varepsilon > 0$, $\varphi(\mathbf{r}) < \varphi(\mathbf{a})$ when $0 < |\mathbf{r} - \mathbf{a}| < \varepsilon$.
+\end{defi}
+
+\begin{prop}[Maximum principle]
+ If a function $\varphi$ is harmonic on a region $V$, then $\varphi$ cannot have a maximum at an interior point $\mathbf{a}$ of $V$.
+\end{prop}
+
+\begin{proof}
+ Suppose that $\varphi$ had a local maximum at $\mathbf{a}$ in the interior. Then there is an $\varepsilon$ such that for any $\mathbf{r}$ such that $0 < |\mathbf{r} - \mathbf{a}| < \varepsilon$, we have $\varphi(\mathbf{r}) < \varphi (\mathbf{a})$.
+
+ Note that if there is an $\varepsilon$ that works, then any smaller $\varepsilon$ will work. Pick an $\varepsilon$ sufficiently small such that the region $|\mathbf{r} - \mathbf{a}| < \varepsilon$ lies within $V$ (possible since $\mathbf{a}$ lies in the interior of $V$).
+
+ Then for any $\mathbf{r}$ such that $|\mathbf{r} - \mathbf{a}| = \varepsilon$, we have $\varphi(\mathbf{r}) < \varphi(\mathbf{a})$. So
+ \[
+ \bar\varphi (\varepsilon) = \frac{1}{4\pi \varepsilon^2}\int_{S_\varepsilon}\varphi(\mathbf{r})\;\d S < \varphi(\mathbf{a}),
+ \]
+ which contradicts the mean value property.
+\end{proof}
+
+We can understand this by performing a local analysis of stationary points by differentiation. Suppose at $\mathbf{r} = \mathbf{a}$, we have $\nabla\varphi = 0$. Let the eigenvalues of the Hessian matrix $H_{ij} = \frac{\partial^2 \varphi}{\partial x_i \partial x_j}$ be $\lambda_i$. But since $\varphi$ is harmonic, we have $\nabla^2 \varphi = 0$, i.e.\ $\frac{\partial^2\varphi}{\partial x_i\partial x_i} = H_{ii} = 0$. But $H_{ii}$ is the trace of the Hessian matrix, which is the sum of the eigenvalues. So $\sum \lambda_i = 0$.
+
+Recall that a maximum or minimum occurs when all eigenvalues have the same sign. This clearly cannot happen if the sum is 0. Therefore we can only have saddle points.
+
+(Note that we ignored the case where all $\lambda_i = 0$, for which this analysis is inconclusive.)
+\subsection{Integral solutions of Poisson's equations}
+\subsubsection{Statement and informal derivation}
+We want to find a solution to Poisson's equation. We start with a discrete case, and try to generalize it to a continuous one.
+
+If there is a single point source of strength $\lambda$ at $\mathbf{a}$, the potential $\varphi$ is
+\[
+ \varphi = \frac{\lambda}{4\pi} \frac{1}{|\mathbf{r} - \mathbf{a}|}.
+\]
+(we have $\lambda = -4\pi GM$ for gravitation and $Q/\varepsilon_0$ for electrostatics)
+
+If we have many sources $\lambda_\alpha$ at positions $\mathbf{r}_\alpha$, the potential is a sum of terms
+\[
+ \varphi(\mathbf{r}) = \sum_{\alpha} \frac{1}{4\pi}\frac{\lambda_\alpha}{|\mathbf{r} - \mathbf{r}_\alpha|}.
+\]
+In the continuous case, we have a distribution $\rho(\mathbf{r})$, with $\rho(\mathbf{r}')\;\d V'$ being the contribution from a small volume at position $\mathbf{r}'$. It would be reasonable to guess that the solution is what we obtain by replacing the sum with an integral:
+
+\begin{prop}
+ The solution to Poisson's equation $\nabla^2 \varphi = -\rho$, with boundary conditions $|\varphi (\mathbf{r})| = O(1/|\mathbf{r}|)$ and $|\nabla\varphi(\mathbf{r})| = O(1/|\mathbf{r}|^2)$, is
+ \[
+ \varphi(\mathbf{r}) = \frac{1}{4\pi}\int_{V'} \frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\;\d V'.
+ \]
+ For $\rho(\mathbf{r}')$ non-zero everywhere, but suitably well-behaved as $|\mathbf{r}'| \to \infty$, we can also take $V' = \R^3$.
+\end{prop}
+
+\begin{eg}
+ Suppose
+ \[
+ \nabla^2 \varphi =
+ \begin{cases}
+ -\rho_0 & |\mathbf{r}| \leq a\\
+ 0 & |\mathbf{r}| > a.
+ \end{cases}
+ \]
+ Fix $\mathbf{r}$ and introduce polar coordinates $r', \theta, \chi$ for $\mathbf{r}'$. We take the $\theta = 0$ direction to be along $\mathbf{r}$, so that $\theta$ is the angle between $\mathbf{r}$ and $\mathbf{r}'$.
+
+ Then
+ \[
+ \varphi(\mathbf{r}) = \frac{1}{4\pi}\int_{V'}\frac{\rho_0}{|\mathbf{r} - \mathbf{r}'|}\;\d V'.
+ \]
+ We have
+ \[
+ \d V' = r'^2 \sin \theta\;\d r'\;\d \theta\;\d \chi.
+ \]
+ We also have
+ \[
+ |\mathbf{r} - \mathbf{r}'| = \sqrt{r^2 + r'^2 - 2rr'\cos \theta}
+ \]
+ by the cosine rule ($c^2 = a^2 + b^2 - 2ab\cos C$). So
+ \begin{align*}
+ \varphi(\mathbf{r}) &= \frac{1}{4\pi}\int_0^a \;\d r'\int_0^\pi \;\d \theta \int_0^{2\pi}\;\d \chi \frac{\rho_0 r'^2 \sin \theta}{\sqrt{r^2 + r'^2 - 2rr'\cos \theta}}\\
+ &= \frac{\rho_0}{2}\int_0^a \;\d r' \frac{r'^2}{rr'}\left[\sqrt{r^2 + r'^2 - 2rr'\cos \theta}\right]^{\theta = \pi}_{\theta = 0}\\
+ &= \frac{\rho_0}{2}\int_0^a \;\d r' \frac{r'}{r}(r + r' - |r - r'|)\\
+ &= \frac{\rho_0}{2}\int_0^a \frac{r'}{r}
+ \begin{cases}
+ 2r' & r > r'\\
+ 2r & r < r'
+ \end{cases}\;\d r'.
+ \end{align*}
+ If $r > a$, then $r > r'$ always. So
+ \[
+ \varphi(\mathbf{r}) = \rho_0 \int_0^a \frac{r'^2}{r}\;\d r' = \frac{\rho_0 a^3}{3r}.
+ \]
+ If $r < a$, then the integral splits into two parts:
+ \[
+ \varphi(\mathbf{r}) = \rho_0\left(\int_0^r \;\d r' \frac{r'^2}{r} + \int_r^a \;\d r' r'\right) = \rho_0\left[-\frac{1}{6}r^2 + \frac{a^2}{2}\right].
+ \]
+\end{eg}
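The $r > a$ answer can be checked by evaluating the integral numerically (the $\chi$ integral contributes a factor $2\pi$, leaving a double integral over $r'$ and $\theta$). A pure-Python midpoint-rule sketch, with arbitrary illustrative values of $\rho_0$, $a$, $r$:

```python
import math

rho0, a, r = 1.0, 1.0, 2.0   # evaluate the potential outside the source, r > a

# phi(r) = (1/4pi) * 2pi * int_0^a int_0^pi
#          rho0 r'^2 sin(theta) / sqrt(r^2 + r'^2 - 2 r r' cos(theta)) dtheta dr'
n = 400
total = 0.0
dr, dth = a / n, math.pi / n
for i in range(n):
    rp = (i + 0.5) * dr
    for j in range(n):
        th = (j + 0.5) * dth
        dist = math.sqrt(r*r + rp*rp - 2*r*rp*math.cos(th))
        total += rho0 * rp*rp * math.sin(th) / dist
phi_num = 0.5 * total * dr * dth     # 2 pi / (4 pi) = 1/2

expected = rho0 * a**3 / (3 * r)     # the closed-form answer above
```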
+
+\subsubsection{Point sources and \tph{$\delta$}{delta}{δ}-functions*}
+Recall that
+\[
+ \Psi = \frac{\lambda}{4\pi |\mathbf{r} - \mathbf{a}|}
+\]
+is our potential for a point source. When $\mathbf{r}\not = \mathbf{a}$, we have
+\[
+ \nabla \Psi = -\frac{\lambda}{4\pi}\frac{\mathbf{r} - \mathbf{a}}{|\mathbf{r} - \mathbf{a}|^3},\quad\nabla^2\Psi = 0.
+\]
+What about when $\mathbf{r} = \mathbf{a}$? $\Psi$ is singular at this point, but can we say anything about $\nabla^2 \Psi$?
+
+For any sphere with center $\mathbf{a}$, we have
+\[
+ \int_S \nabla \Psi\cdot \d \mathbf{S} = -\lambda.
+\]
+By the divergence theorem, we have
+\[
+ \int_V \nabla^2\Psi \;\d V = -\lambda
+\]
+for $V$ being a solid sphere with $\partial V = S$. Since $\nabla^2\Psi$ is zero at any point $\mathbf{r} \not = \mathbf{a}$, we must have
+\[
+ \nabla^2\Psi = -\lambda \delta(\mathbf{r} - \mathbf{a}),
+\]
+where $\delta$ is the three-dimensional delta function, which satisfies
+\[
+ \int_V f(\mathbf{r}) \delta (\mathbf{r} - \mathbf{a})\;\d V = f(\mathbf{a})
+\]
+for any volume containing $\mathbf{a}$.
+
+In short, we have
+\[
+ \nabla^2\left(\frac{1}{|\mathbf{r} - \mathbf{r}'|}\right) = -4\pi\delta(\mathbf{r} - \mathbf{r}').
+\]
+Using these, we can verify that the integral solution of Poisson's equation we obtained previously is correct:
+\begin{align*}
+ \nabla^2 \Psi(\mathbf{r}) &= \nabla^2\left(\frac{1}{4\pi}\int_{V'}\frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\;\d V'\right)\\
+ &= \frac{1}{4\pi}\int_{V'} \rho(\mathbf{r}')\nabla^2\left(\frac{1}{|\mathbf{r} - \mathbf{r}'|}\right)\;\d V'\\
+ &= -\int_{V'} \rho(\mathbf{r}') \delta(\mathbf{r} - \mathbf{r}')\;\d V'\\
+ &= -\rho (\mathbf{r}),
+\end{align*}
+as required.
+\section{Maxwell's equations}
+\subsection{Laws of electromagnetism}
+Maxwell's equations are a set of four equations that describe the behaviours of electromagnetism. Together with the Lorentz force law, these describe \emph{all} we know about (classical) electromagnetism. All other results we know are simply mathematical consequences of these equations. It is thus important to understand the mathematical properties of these equations.
+
To begin with, there are two fields that govern electromagnetism, known as the \emph{electric} and \emph{magnetic} field. These are denoted by $\mathbf{E}(\mathbf{r}, t)$ and $\mathbf{B}(\mathbf{r}, t)$ respectively.
+
+To understand electromagnetism, we need to understand how these fields are formed, and how these fields affect charged particles. The second is rather straightforward, and is given by the Lorentz force law.
+
+\begin{law}[Lorentz force law]
+ A point charge $q$ experiences a force of
+ \[
+ \mathbf{F} = q(\mathbf{E} + \dot{\mathbf{r}} \times \mathbf{B}).
+ \]
+\end{law}
+
+The dynamics of the field itself is governed by \emph{Maxwell's equations}. To state the equations, we need to introduce two more concepts.
+
+\begin{defi}[Charge and current density]
+ $\rho(\mathbf{r}, t)$ is the \emph{charge density}, defined as the charge per unit volume.
+
+ $\mathbf{j}(\mathbf{r}, t)$ is the \emph{current density}, defined as the electric current per unit area of cross section.
+\end{defi}
+
+Then Maxwell's equations say
+\begin{law}[Maxwell's equations]
+ \begin{align*}
+ \nabla\cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}\\
+ \nabla\cdot \mathbf{B} &= 0\\
+ \nabla\times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} &= 0\\
+ \nabla\times \mathbf{B} - \mu_0\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} &= \mu_0 \mathbf{j},
+ \end{align*}
+ where $\varepsilon_0$ is the electric constant (permittivity of free space) and $\mu_0$ is the magnetic constant (permeability of free space), which are constants determined experimentally.
+\end{law}
+
+We can quickly derive some properties we know from these four equations. The conservation of electric charge comes from taking the divergence of the last equation.
+\[
+ \underbrace{\nabla\cdot (\nabla\times \mathbf{B})}_{=0} - \mu_0\varepsilon_0\frac{\partial}{\partial t} \underbrace{(\nabla\cdot \mathbf{E})}_{=\rho/\varepsilon_0} = \mu_0 \nabla\cdot \mathbf{j}.
+\]
+So
+\[
+ \frac{\partial \rho}{\partial t} + \nabla\cdot \mathbf{j} = 0.
+\]
+We can also take the volume integral of the first equation to obtain
+\[
+ \int_V\nabla\cdot \mathbf{E}\;\d V = \frac{1}{\varepsilon_0} \int_V \rho\;\d V = \frac{Q}{\varepsilon_0}.
+\]
+By the divergence theorem, we have
+\[
+ \int_S \mathbf{E}\cdot \d \mathbf{S} = \frac{Q}{\varepsilon_0},
+\]
which is Gauss' law for electric fields.
+
+We can integrate the second equation to obtain
+\[
+ \int_S \mathbf{B}\cdot \d \mathbf{S} = 0.
+\]
+This roughly states that there are no ``magnetic charges''.
+
+The remaining Maxwell's equations also have integral forms. For example,
+\[
 \int_{C = \partial S} \mathbf{E}\cdot \d \mathbf{r} = \int_S (\nabla \times \mathbf{E})\cdot \d \mathbf{S} = -\frac{\d }{\d t}\int_S \mathbf{B}\cdot \d \mathbf{S},
\]
where the first equality is from Stokes' theorem. This says that a changing magnetic field induces a circulating electric field, i.e.\ an electromotive force around the loop.
+
+\subsection{Static charges and steady currents}
+If $\rho, \mathbf{j}, \mathbf{E}, \mathbf{B}$ are all independent of time, $\mathbf{E}$ and $\mathbf{B}$ are no longer linked.
+
+We can solve the equations for electric fields:
+\begin{align*}
+ \nabla\cdot \mathbf{E} &= \rho/\varepsilon_0\\
+ \nabla\times \mathbf{E} &= \mathbf{0}
+\end{align*}
The second equation gives $\mathbf{E} = -\nabla \varphi$. Substituting into the first gives $\nabla^2 \varphi = -\rho/\varepsilon_0$.
+
+The equations for the magnetic field are
+\begin{align*}
+ \nabla\cdot \mathbf{B} &= 0\\
+ \nabla\times \mathbf{B} &= \mu_0 \mathbf{j}
+\end{align*}
The first equation gives $\mathbf{B} = \nabla \times \mathbf{A}$ for some \emph{vector potential} $\mathbf{A}$. But the vector potential is not uniquely defined: the transformation $\mathbf{A}\mapsto \mathbf{A} + \nabla \chi(\mathbf{x})$ produces the same $\mathbf{B}$, since $\nabla\times (\nabla \chi) = \mathbf{0}$. So we can choose $\chi$ such that $\nabla\cdot \mathbf{A} = 0$. Then
+\[
+ \nabla^2 \mathbf{A} = \nabla(\underbrace{\nabla\cdot \mathbf{A}}_{=0}) - \nabla\times (\underbrace{\nabla\times \mathbf{A}}_{\mathbf{B}}) = -\mu_0 \mathbf{j}.
+\]
+In summary, we have
+\begin{center}
+ \begin{tabularx}{\textwidth}{XX}
+ \toprule
+ Electrostatics & Magnetostatics\\
+ \midrule
+ $\nabla\cdot \mathbf{E} = \rho/\varepsilon_0$ & $\nabla\cdot \mathbf{B} = 0$\\
+ $\nabla\times \mathbf{E} = \mathbf{0}$ & $\nabla\times \mathbf{B} = \mu_0 \mathbf{j}$\\
+ $\nabla^2 \varphi = -\rho/\varepsilon_0$ & $\nabla^2 \mathbf{A} = -\mu_0 \mathbf{j}$.\\
+ $\varepsilon_0$ sets the scale of electrostatic effects, e.g.\ the Coulomb force & $\mu_0$ sets the scale of magnetic effects, e.g.\ force between two wires with currents.\\
+ \bottomrule
+ \end{tabularx}
+\end{center}
+
+\subsection{Electromagnetic waves}
+Consider Maxwell's equations in empty space, i.e.\ $\rho = 0$, $\mathbf{j} = \mathbf{0}$. Then Maxwell's equations give
+\[
 \nabla^2 \mathbf{E} = \nabla(\nabla\cdot \mathbf{E}) - \nabla\times (\nabla\times \mathbf{E}) = \nabla\times \frac{\partial \mathbf{B}}{\partial t} = \frac{\partial}{\partial t} (\nabla \times \mathbf{B}) = \mu_0\varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}.
+\]
+Define $c = \frac{1}{\sqrt{\mu_0\varepsilon_0}}$. Then the equation gives
+\[
+ \left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{E} = 0.
+\]
+This is the wave equation describing propagation with speed $c$. Similarly, we can obtain
+\[
+ \left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{B} = 0.
+\]
So Maxwell's equations predict that there exist electromagnetic waves in free space, which travel at speed $c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}} \approx \SI{3.00e8}{\meter\per\second}$, which is the speed of light! Maxwell thus concluded that light is an electromagnetic wave.
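As a numerical aside (not part of the notes), substituting the measured values of $\mu_0$ and $\varepsilon_0$ indeed recovers the speed of light:

```python
import math

# Measured SI values of the two constants (CODATA).
mu0 = 1.25663706212e-6      # permeability of free space, H/m
eps0 = 8.8541878128e-12     # permittivity of free space, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(c)   # ~ 2.998e8 m/s, the speed of light
```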
+
+\section{Tensors and tensor fields}
+\subsection{Definition}
There are two ways we can think of a vector in $\R^3$. We can either interpret it as a ``point'' in space, or we can view it simply as a list of three numbers. However, the list of three numbers is simply a \emph{representation} of the vector with respect to some particular basis. When we change basis, in order to represent the same point, we will need to use a different list of three numbers. In particular, when we perform a rotation $R_{ip}$, the new components of the vector are given by
+\[
+ v_i' = R_{ip}v_p.
+\]
+Similarly, we can imagine a matrix as either a linear transformation or an array of 9 numbers. Again, when we change basis, in order to represent the same transformation, we will need a different array of numbers. This time, the transformation is given by
+\[
+ A_{ij}' = R_{ip}R_{jq}A_{pq}.
+\]
+We can think about this from another angle. To define an arbitrary quantity $A_{ij}$, we can always just write down 9 numbers and be done with it. Moreover, we can write down a different set of numbers in a different basis. For example, we can define $A_{ij} = \delta_{ij}$ in our favorite basis, but $A_{ij} = 0$ in all other bases. We can do so because we have the power of the pen.
+
+However, for this $A_{ij}$ to represent something physically meaningful, i.e.\ an actual linear transformation, we have to make sure that the components of $A_{ij}$ transform sensibly under a basis transformation. By ``sensibly'', we mean that it has to follow the transformation rule $A_{ij}' = R_{ip}R_{jq}A_{pq}$. For example, the $A_{ij}$ we defined in the previous paragraph does \emph{not} transform sensibly. While it is something we can define and write down, it does not correspond to anything meaningful.
+
+The things that transform sensibly are known as \emph{tensors}. For example, vectors and matrices (that transform according to the usual change-of-basis rules) are tensors, but that $A_{ij}$ is not.
+
+In general, tensors are allowed to have an arbitrary number of indices. In order for a quantity $T_{ij\cdots k}$ to be a tensor, we require it to transform according to
+\[
+ T_{ij\cdots k}' = R_{ip}R_{jq}\cdots R_{kr}T_{pq\cdots r},
+\]
+which is an obvious generalization of the rules for vectors and matrices.
+
+\begin{defi}[Tensor]
+ A \emph{tensor} of rank $n$ has components $T_{ij\cdots k}$ (with $n$ indices) with respect to each basis $\{\mathbf{e}_i\}$ or coordinate system $\{x_i\}$, and satisfies the following rule of change of basis:
+ \[
+ T_{ij\cdots k}' = R_{ip}R_{jq}\cdots R_{kr}T_{pq\cdots r}.
+ \]
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item A tensor $T$ of rank 0 doesn't transform under change of basis, and is a scalar.
+ \item A tensor $T$ of rank 1 transforms under $T'_i = R_{ip}T_p$. This is a vector.
+ \item A tensor $T$ of rank 2 transforms under $T_{ij}' = R_{ip} R_{jq} T_{pq}$. This is a matrix.
+ \end{itemize}
+\end{eg}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
 \item If $\mathbf{u}, \mathbf{v}, \cdots, \mathbf{w}$ are $n$ vectors, then
+ \[
+ T_{ij\cdots k} =u_i v_j \cdots w_k
+ \]
+ defines a tensor of rank $n$. To check this, we check the tensor transformation rule. We do the case for $n = 2$ for simplicity of expression, and it should be clear that this can be trivially extended to arbitrary $n$:
+ \begin{align*}
+ T_{ij}' &= u_i'v_j' = (R_{ip} u_p)(R_{jq}v_q)\\
+ &= R_{ip}R_{jq}(u_p v_q)\\
+ &= R_{ip}R_{jq}T_{pq}
+ \end{align*}
+ Then linear combinations of such expressions are also tensors, e.g.\ $T_{ij} = u_i v_j + a_ib_j$ for any $\mathbf{u}, \mathbf{v}, \mathbf{a},\mathbf{b}$.
 \item $\delta_{ij}$ and $\varepsilon_{ijk}$ are tensors of rank 2 and 3 respectively --- with the special property that their components are unchanged under any change of basis:
+ \[
+ \delta_{ij}' = R_{ip}R_{jq}\delta_{pq} = R_{ip}R_{jp} = \delta_{ij},
+ \]
+ since $R_{ip}R_{jp} = (RR^T)_{ij} = I_{ij}$. Also
+ \[
+ \varepsilon_{ijk}' = R_{ip}R_{jq}R_{kr}\varepsilon_{pqr} = (\det R)\varepsilon_{ijk} = \varepsilon_{ijk},
+ \]
 using results from IA Vectors and Matrices, together with $\det R = 1$ for a rotation.
+ \item (Physical example) In some substances, an applied electric field $\mathbf{E}$ gives rise to a current density $\mathbf{j}$, according to the linear relation $j_i = \varepsilon_{ij} E_j$, where $\varepsilon_{ij}$ is the \emph{conductivity tensor}.
+
+ Note that this relation entails that the resulting current need not be in the same direction as the electric field. This might happen if the substance has special crystallographic directions that favours electric currents.
+
+ However, if the substance is \emph{isotropic}, we have $\varepsilon_{ij} = \sigma\delta_{ij}$ for some $\sigma$. In this case, the current \emph{is} parallel to the field.
+ \end{enumerate}
+\end{eg}
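The transformation rules in the examples above can be checked numerically. The following sketch (assuming NumPy; an illustration, not part of the notes) verifies the rank-2 rule for an outer product $u_iv_j$, and the invariance of $\delta_{ij}$ and $\varepsilon_{ijk}$ under a random rotation:

```python
import numpy as np

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = q if np.linalg.det(q) > 0 else -q      # a proper rotation, det R = +1

u, v = rng.standard_normal(3), rng.standard_normal(3)
T = np.outer(u, v)                         # T_ij = u_i v_j
Tp = np.einsum("ip,jq,pq->ij", R, R, T)    # T'_ij = R_ip R_jq T_pq
assert np.allclose(Tp, np.outer(R @ u, R @ v))

# delta'_ij = R_ip R_jp = delta_ij, by orthogonality of R.
assert np.allclose(np.einsum("ip,jq,pq->ij", R, R, np.eye(3)), np.eye(3))

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
# eps'_ijk = R_ip R_jq R_kr eps_pqr = (det R) eps_ijk = eps_ijk
assert np.allclose(np.einsum("ip,jq,kr,pqr->ijk", R, R, R, eps), eps)
print("transformation rules verified")
```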
+
+\subsection{Tensor algebra}
+\begin{defi}[Tensor addition]
+ Tensors $T$ and $S$ of the same rank can be \emph{added}; $T + S$ is also a tensor of the same rank, defined as
+ \[
+ (T + S)_{ij\cdots k} = T_{ij \cdots k} + S_{ij\cdots k}.
+ \]
+ in any coordinate system.
+\end{defi}
+To check that this is a tensor, we check the transformation rule. Again, we only show for $n = 2$:
+\[
+ (T + S)_{ij}' = T_{ij}' + S_{ij}' = R_{ip}R_{jq}T_{pq} + R_{ip}R_{jq}S_{pq} = (R_{ip}R_{jq})(T_{pq} + S_{pq}).
+\]
+\begin{defi}[Scalar multiplication]
+ A tensor $T$ of rank $n$ can be multiplied by a scalar $\alpha$. $\alpha T$ is a tensor of the same rank, defined by
+ \[
+ (\alpha T)_{ij} = \alpha T_{ij}.
+ \]
+\end{defi}
+It is trivial to check that the resulting object is indeed a tensor.
+
+\begin{defi}[Tensor product]
+ Let $T$ be a tensor of rank $n$ and $S$ be a tensor of rank $m$. The \emph{tensor product} $T\otimes S$ is a tensor of rank $n + m$ defined by
+ \[
 (T \otimes S)_{x_1 x_2\cdots x_n y_1y_2\cdots y_m} = T_{x_1x_2\cdots x_n}S_{y_1y_2\cdots y_m}.
+ \]
+ It is trivial to show that this is a tensor.
+
+ We can similarly define tensor products for any (positive integer) number of tensors, e.g.\ for $n$ vectors $\mathbf{u}, \mathbf{v} \cdots, \mathbf{w}$, we can define
+ \[
+ T = \mathbf{u}\otimes \mathbf{v}\otimes \cdots \otimes \mathbf{w}
+ \]
+ by
+ \[
+ T_{ij\cdots k} = u_i v_j \cdots w_k,
+ \]
+ as defined in the example in the beginning of the chapter.
+\end{defi}
+
+\begin{defi}[Tensor contraction]
+ For a tensor $T$ of rank $n$ with components $T_{ijp\cdots q}$, we can \emph{contract on} the indices $i, j$ to obtain a new tensor of rank $n - 2$:
+ \[
+ S_{p\cdots q} = \delta_{ij}T_{ij p\cdots q} = T_{iip\cdots q}
+ \]
+ Note that we don't have to always contract on the first two indices. We can contract any pair we like.
+\end{defi}
To check that contraction produces a tensor, take the rank-2 example $T_{ij}$. Contracting, we get $T_{ii}$, a rank-0 scalar. We have $T_{ii}' = R_{ip}R_{iq}T_{pq} = \delta_{pq}T_{pq} = T_{pp} = T_{ii}$, since $R$ is an orthogonal matrix.
+
If we view $T_{ij}$ as a matrix, then the contraction is simply the trace of the matrix. So our result above says that the trace is invariant under basis transformations --- as we already know from IA Vectors and Matrices.
+
+Note that our usual matrix product can be formed by first applying a tensor product to obtain $M_{ij}N_{pq}$, then contract with $\delta_{jp}$ to obtain $M_{ij}N_{jq}$.
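Both observations are easy to sketch with \texttt{numpy.einsum} (an illustration, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Contracting the two indices of a rank-2 tensor gives the trace.
assert np.isclose(np.einsum("ii->", M), np.trace(M))

# Matrix product = tensor product M_ij N_pq, then contraction with delta_jp.
outer = np.einsum("ij,pq->ijpq", M, N)    # rank-4 tensor product
product = np.einsum("ijjq->iq", outer)    # contract j with p
assert np.allclose(product, M @ N)
print("contraction checks passed")
```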
+\subsection{Symmetric and antisymmetric tensors}
+\begin{defi}[Symmetric and anti-symmetric tensors]
+ A tensor $T$ of rank $n$ is \emph{symmetric} in the indices $i,j$ if it obeys
+ \[
+ T_{ijp\cdots q} = T_{jip\cdots q}.
+ \]
+ It is \emph{anti-symmetric} if
+ \[
+ T_{ijp\cdots q} = -T_{jip\cdots q}.
+ \]
+ Again, a tensor can be symmetric or anti-symmetric in any pair of indices, not just the first two.
+\end{defi}
+
+This is a property that holds in any coordinate systems, if it holds in one, since
+\[
 T_{k\ell r\cdots s}' = R_{ki}R_{\ell j}R_{rp}\cdots R_{sq}T_{ijp\cdots q} = \pm R_{ki} R_{\ell j} R_{rp}\cdots R_{sq}T_{jip\cdots q} = \pm T_{\ell kr\cdots s}',
+\]
+as required.
+
+\begin{defi}[Totally symmetric and anti-symmetric tensors]
+ A tensor is \emph{totally (anti-)symmetric} if it is (anti-)symmetric in every pair of indices.
+\end{defi}
+
+\begin{eg}
+ $\delta_{ij} = \delta_{ji}$ is totally symmetric, while $\varepsilon_{ijk} = -\varepsilon_{jik}$ is totally antisymmetric.
+
+ There are totally symmetric tensors of arbitrary rank $n$. But in $\R^3$,
+ \begin{itemize}
+ \item Any totally antisymmetric tensor of rank 3 is $\lambda \varepsilon_{ijk}$ for some scalar $\lambda$.
+ \item There are no totally antisymmetric tensors of rank greater than $3$, except for the trivial tensor with all components $0$.
+
 Proof: exercise (hint: pigeonhole principle).
+ \end{itemize}
+\end{eg}
+
+\subsection{Tensors, multi-linear maps and the quotient rule}
+\subsubsection*{Tensors as multi-linear maps}
From IA Vectors and Matrices, we know that matrices represent linear maps. We will prove an analogous fact for tensors.
+
+\begin{defi}[Multilinear map]
+ A map $T$ that maps $n$ vectors $\mathbf{a}, \mathbf{b}, \cdots, \mathbf{c}$ to $\R$ is multi-linear if it is linear in each of the vectors $\mathbf{a}, \mathbf{b}, \cdots, \mathbf{c}$ individually.
+\end{defi}
+
We will show that a tensor $T$ of rank $n$ is equivalent to a multi-linear map from $n$ vectors $\mathbf{a}, \mathbf{b}, \cdots, \mathbf{c}$ to $\R$ defined by
+\[
+ T(\mathbf{a}, \mathbf{b}, \cdots, \mathbf{c}) = T_{ij \cdots k} a_ib_j\cdots c_k.
+\]
+To show that tensors are \emph{equivalent} to multi-linear maps, we have to show the following:
+\begin{enumerate}
+ \item Defining a map with a tensor makes sense, i.e.\ the expression $T_{ij\cdots k}a_ib_j\cdots c_k$ is the same regardless of the basis chosen;
+ \item While it is always possible to write a multi-linear map as $T_{ij \cdots k} a_ib_j\cdots c_k$, we have to show that $T_{ij\cdots k}$ is indeed a tensor, i.e.\ transform according to the tensor transformation rules.
+\end{enumerate}
+
To show the first property, just note that $T_{ij \cdots k} a_ib_j\cdots c_k$ is a tensor product followed by contractions, which preserves tensor-ness. So it is also a tensor. In particular, it is a rank 0 tensor, i.e.\ a scalar, which is independent of the basis.
+
+To show the second property, assuming that $T$ is a multi-linear map, it must be independent of the basis, so
+\[
+ T_{ij \cdots k} a_ib_j\cdots c_k = T_{ij\cdots k}' a_i' b_j' \cdots c_k'.
+\]
+Since $v_p' = R_{pi}v_i$ by tensor transformation rules, multiplying both sides by $R_{pi}$ gives $v_i = R_{pi} v_p'$. Substituting in gives
+\[
 T_{ij\cdots k} (R_{pi}a_p')(R_{qj}b_q')\cdots (R_{rk}c_r') = T_{pq\cdots r}' a_p' b_q' \cdots c_r'.
+\]
+Since this is true for all $\mathbf{a}, \mathbf{b}, \cdots \mathbf{c}$, we must have
+\[
 T_{ij\cdots k}R_{pi}R_{qj}\cdots R_{rk} = T'_{pq\cdots r}.
+\]
+Hence $T_{ij\cdots k}$ obeys the tensor transformation rule, and is a tensor.
+
+This shows that there is a one-to-one correspondence between tensors of rank $n$ and multi-linear maps.
+
+This gives a way of thinking about tensors independent of any coordinate system or choice of basis, and the tensor transformation rule emerges naturally.
+
+Note that the above is exactly what we did with linear maps and matrices.
+\subsubsection*{The quotient rule}
If $T_{\underbrace{i\cdots j}_n\underbrace{p\cdots q}_m}$ is a tensor of rank $n + m$, and $u_{p\cdots q}$ is a tensor of rank $m$, then
\[
 v_{i\cdots j}= T_{i\cdots j p\cdots q}u_{p\cdots q}
+\]
+is a tensor of rank $n$, since it is a tensor product of $T$ and $u$, followed by contraction.
+
+The converse is also true:
+\begin{prop}[Quotient rule]
+ Suppose that $T_{i\cdots jp\cdots q}$ is an array defined in each coordinate system, and that $v_{i\cdots j} = T_{i\cdots jp\cdots q} u_{p\cdots q}$ is also a tensor for any tensor $u_{p \cdots q}$. Then $T_{i\cdots j p\cdots q}$ is also a tensor.
+\end{prop}
+
+Note that we have previously seen the special case of $n = m = 1$, which says that linear maps are tensors.
+\begin{proof}
+ We can check the tensor transformation rule directly. However, we can reuse the result above to save some writing.
+
 Consider the special form $u_{p \cdots q} = c_p \cdots d_q$ for any vectors $\mathbf{c}, \cdots, \mathbf{d}$. By assumption,
+ \[
+ v_{i\cdots j} = T_{i\cdots jp\cdots q}c_p\cdots d_q
+ \]
+ is a tensor. Then
+ \[
+ v_{i\cdots j}a_i \cdots b_j = T_{i\cdots jp\cdots q}a_i\cdots b_jc_p\cdots d_q
+ \]
+ is a scalar for any vectors $\mathbf{a}, \cdots, \mathbf{b}, \mathbf{c},\cdots, \mathbf{d}$. Since $T_{i\cdots jp\cdots q}a_i\cdots b_jc_p\cdots d_q$ is a scalar and hence gives the same result in every coordinate system, $T_{i\cdots jp\cdots q}$ is a multi-linear map. So $T_{i\cdots jp\cdots q}$ is a tensor.
+\end{proof}
+
+
+\subsection{Tensor calculus}
+\subsubsection*{Tensor fields and derivatives}
+Just as with scalars or vectors, we can define tensor fields:
+\begin{defi}[Tensor field]
+ A \emph{tensor field} is a tensor at each point in space $T_{ij\cdots k}(\mathbf{x})$, which can also be written as $T_{ij\cdots k}(x_\ell)$.
+\end{defi}
+
We assume that the fields are \emph{smooth}, so they can be differentiated any number of times:
+\[
+ \frac{\partial}{\partial x_p}\cdots \frac{\partial}{\partial x_q}T_{ij\cdots k},
+\]
except where things obviously fail, e.g.\ where $T$ is not defined. We now claim:
+\begin{prop}
+ \[
+ \underbrace{\frac{\partial}{\partial x_p}\cdots \frac{\partial}{\partial x_q}}_{m}T_{\underbrace{ij\cdots k}_n},\tag{$*$}
+ \]
+ is a tensor of rank $n + m$.
+\end{prop}
+\begin{proof}
 To show this, it suffices to show that $\frac{\partial}{\partial x_p}$ satisfies the tensor transformation rule for rank 1 tensors (i.e.\ it behaves like a rank 1 tensor). Then by the exact same argument we used to show that tensor products preserve tensor-ness, we can show that $(*)$ is a tensor. (We cannot use the result on tensor products directly, since this is not exactly a product, but the exact same proof works.)
+
+ Since $x_i' = R_{iq}x_q$, we have
+ \[
+ \frac{\partial x_i'}{\partial x_p} = R_{ip}.
+ \]
+ (noting that $\frac{\partial x_p}{\partial x_q} = \delta_{pq}$). Similarly,
+ \[
+ \frac{\partial x_q}{\partial x_i'} = R_{iq}.
+ \]
+ Note that $R_{ip}, R_{iq}$ are constant matrices.
+
+ Hence by the chain rule,
+ \[
+ \frac{\partial}{\partial x_i'} = \left(\frac{\partial x_q}{\partial x_i'}\right)\frac{\partial}{\partial x_q} = R_{iq} \frac{\partial}{\partial x_q}.
+ \]
+ So $\frac{\partial}{\partial x_p}$ obeys the vector transformation rule. So done.
+\end{proof}
+\subsubsection*{Integrals and the tensor divergence theorem}
Integrals are also straightforward: since we can add tensors and take limits, the definition of a tensor-valued integral is immediate.
+
+For example, $\int_V T_{ij\cdots k}(\mathbf{x})\;\d V$ is a tensor of the same rank as $T_{ij\cdots k}$ (think of the integral as the limit of a sum).
+
For a physical example, recall our discussion of the flux of quantities for a fluid with velocity $\mathbf{u}(\mathbf{x})$ through a surface element --- assume a uniform density $\rho$. The flux of volume is $\mathbf{u}\cdot \mathbf{n}\;\delta S = u_j n_j \delta S$. So the flux of mass is $\rho u_j n_j \delta S$. Then the flux of the $i$th component of momentum is $\rho u_i u_j n_j \delta S = T_{ij}n_j \delta S$ (mass times velocity), where $T_{ij} = \rho u_iu_j$. Then the flux through the surface $S$ is $\int_S T_{ij}n_j \;\d S$.
+
+It is easy to generalize the divergence theorem from vectors to tensors. We can then use it to discuss conservation laws for tensor quantities.
+
+Let $V$ be a volume bounded by a surface $S=\partial V$ and $T_{ij\cdots k\ell}$ be a smooth tensor field. Then
+
+\begin{thm}[Divergence theorem for tensors]
+ \[
+ \int_S T_{ij\cdots k\ell}n_\ell\;\d S = \int_V \frac{\partial}{\partial x_\ell}(T_{ij\cdots k\ell})\;\d V,
+ \]
+ with $\mathbf{n}$ being an outward pointing normal.
+\end{thm}
+The regular divergence theorem is the case where $T$ has one index and is a vector field.
+
+\begin{proof}
+ Apply the usual divergence theorem to the vector field $\mathbf{v}$ defined by $v_\ell = a_i b_j \cdots c_k T_{ij\cdots k\ell}$, where $\mathbf{a}, \mathbf{b}, \cdots, \mathbf{c}$ are fixed constant vectors.
+
+ Then
+ \[
+ \nabla\cdot \mathbf{v} = \frac{\partial v_\ell}{\partial x_\ell} = a_i b_j \cdots c_k \frac{\partial}{\partial x_\ell}T_{ij\cdots k\ell},
+ \]
+ and
+ \[
+ \mathbf{n}\cdot \mathbf{v} = n_\ell v_\ell = a_i b_j \cdots c_k T_{ij\cdots k\ell }n_\ell.
+ \]
 Since $\mathbf{a}, \mathbf{b}, \cdots, \mathbf{c}$ are arbitrary, they can be eliminated, and the tensor divergence theorem follows.
+\end{proof}
+\section{Tensors of rank 2}
+\subsection{Decomposition of a second-rank tensor}
+This decomposition might look arbitrary at first sight, but as time goes on, you will find that it is actually very useful in your future career (at least, the lecturer claims so).
+
+Any second rank tensor can be written as a sum of its symmetric and anti-symmetric parts
+\[
+ T_{ij} = S_{ij} + A_{ij},
+\]
+where
+\[
+ S_{ij} = \frac{1}{2}(T_{ij} + T_{ji}),\quad A_{ij} = \frac{1}{2}(T_{ij} - T_{ji}).
+\]
Here $T_{ij}$ has 9 independent components, whereas $S_{ij}$ and $A_{ij}$ have 6 and 3 independent components respectively, since they must be of the form
+\[
+ (S_{ij}) =
+ \begin{pmatrix}
+ a & d & e\\
+ d & b & f\\
+ e & f & c
+ \end{pmatrix}
+ ,\quad
+ (A_{ij}) =
+ \begin{pmatrix}
+ 0 & a & b\\
+ -a & 0 & c\\
+ -b & -c & 0
+ \end{pmatrix}.
+\]
The symmetric part can be further reduced to a \emph{traceless} part plus an \emph{isotropic} (i.e.\ multiple of $\delta_{ij}$) part:
+\[
+ S_{ij} = P_{ij} + \frac{1}{3}\delta_{ij} Q,
+\]
+where $Q = S_{ii}$ is the trace of $S_{ij}$ and $P_{ij} = P_{ji} = S_{ij} -\frac{1}{3}\delta_{ij}Q$ is traceless. Then $P_{ij}$ has 5 independent components while $Q$ has 1.
+
Since the antisymmetric part has 3 independent components, just like a usual vector, we should be able to write $A_{ij}$ in terms of a single vector. In fact, we can write the antisymmetric part as
+\[
+ A_{ij} = \varepsilon_{ijk}B_k
+\]
for some vector $\mathbf{B}$. To figure out what this $\mathbf{B}$ is, we multiply both sides by $\varepsilon_{ij\ell}$ and use the identity $\varepsilon_{ij\ell}\varepsilon_{ijk} = 2\delta_{\ell k}$ to obtain
+\[
+ B_k = \frac{1}{2}\varepsilon_{ijk}A_{ij} = \frac{1}{2}\varepsilon_{ijk}T_{ij},
+\]
+where the last equality is from the fact that only antisymmetric parts contribute to the sum.
+
+Then
+\[
+ (A_{ij}) =
+ \begin{pmatrix}
+ 0 & B_3 & -B_2\\
+ -B_3 & 0 & B_1\\
+ B_2 & -B_1 & 0
 \end{pmatrix}.
\]
+To summarize,
+\[
+ T_{ij} = P_{ij} + \varepsilon_{ijk}B_k + \frac{1}{3}\delta_{ij}Q,
+\]
where $B_k = \frac{1}{2}\varepsilon_{pqk} T_{pq}$, $Q = T_{kk}$ and $P_{ij} = P_{ji} = \frac{T_{ij} + T_{ji}}{2} - \frac{1}{3}\delta_{ij}Q$.
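The decomposition just summarised is easy to verify numerically. The sketch below (assuming NumPy; not part of the notes) splits a random $T_{ij}$ into $P_{ij}$, $\varepsilon_{ijk}B_k$ and $\frac{1}{3}\delta_{ij}Q$, and checks that the pieces reassemble $T$:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))

Q = np.trace(T)                              # Q = T_kk
B = 0.5 * np.einsum("pqk,pq->k", eps, T)     # B_k = (1/2) eps_pqk T_pq
P = 0.5 * (T + T.T) - Q * np.eye(3) / 3      # symmetric traceless part
A = np.einsum("ijk,k->ij", eps, B)           # A_ij = eps_ijk B_k

# The three pieces reassemble T exactly.
assert np.allclose(P + A + Q * np.eye(3) / 3, T)
# P is symmetric and traceless; A is antisymmetric.
assert np.allclose(P, P.T) and np.isclose(np.trace(P), 0)
assert np.allclose(A, -A.T)
print("decomposition verified")
```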
+
+\begin{eg}
+ The derivative of a vector field $F_i (\mathbf{r})$ is a tensor $T_{ij} = \frac{\partial F_i}{\partial x_j}$, a tensor field. Our decomposition given above has the symmetric traceless piece
+ \[
+ P_{ij} = \frac{1}{2}\left(\frac{\partial F_i}{\partial x_j} + \frac{\partial F_j}{\partial x_i}\right) - \frac{1}{3}\delta_{ij}\frac{\partial F_k}{\partial x_k} = \frac{1}{2}\left(\frac{\partial F_i}{\partial x_j} + \frac{\partial F_j}{\partial x_i}\right) - \frac{1}{3}\delta_{ij}\nabla\cdot \mathbf{F},
+ \]
+ an antisymmetric piece $A_{ij} = \varepsilon_{ijk}B_k$, where
+ \[
 B_k = \frac{1}{2}\varepsilon_{ijk}\frac{\partial F_i}{\partial x_j} = -\frac{1}{2}(\nabla\times \mathbf{F})_k,
 \]
 and trace
+ \[
+ Q = \frac{\partial F_k}{\partial x_k} = \nabla \cdot \mathbf{F}.
+ \]
+ Hence a complete description involves a scalar $\nabla\cdot \mathbf{F}$, a vector $\nabla\times \mathbf{F}$, and a symmetric traceless tensor $P_{ij}$.
+\end{eg}
+
+\subsection{The inertia tensor}
+Consider masses $m_\alpha$ with positions $\mathbf{r}_\alpha$, all rotating with angular velocity $\boldsymbol\omega$ about $\mathbf{0}$. So the velocities are $\mathbf{v}_\alpha = \boldsymbol\omega\times \mathbf{r}_\alpha$. The total angular momentum is
+\begin{align*}
+ \mathbf{L} &= \sum_{\alpha} \mathbf{r}_\alpha \times m_\alpha \mathbf{v}_\alpha \\
+ &= \sum_\alpha m_\alpha \mathbf{r}_\alpha \times (\boldsymbol\omega\times \mathbf{r}_\alpha)\\
+ &= \sum_\alpha m_\alpha( |\mathbf{r}_\alpha|^2\boldsymbol\omega - (\mathbf{r}_\alpha \cdot \boldsymbol\omega)\mathbf{r}_\alpha).
+\end{align*}
+by vector identities. In components, we have
+\[
+ L_i = I_{ij}\omega_j,
+\]
+where
+\begin{defi}[Inertia tensor]
+ The \emph{inertia tensor} is
+ \[
+ I_{ij} = \sum_\alpha m_\alpha [|\mathbf{r}_\alpha|^2 \delta_{ij} - (\mathbf{r}_\alpha)_i (\mathbf{r}_\alpha)_j].
+ \]
+\end{defi}
+For a rigid body occupying volume $V$ with mass density $\rho(\mathbf{r})$, we replace the sum with an integral to obtain
+\[
+ I_{ij} = \int_V \rho (\mathbf{r})(x_kx_k \delta_{ij} - x_i x_j)\;\d V.
+\]
+By inspection, $I$ is a symmetric tensor.
+
+\begin{eg}
 Consider a rotating cylinder with uniform density $\rho_0$. The total mass is $M = 2\pi \ell a^2 \rho_0$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (2, 0) node [right] {$x_1$};
+ \draw [->] (0, 0) -- (0, 2) node [above] {$x_3$};
+ \draw [->] (0, 0) -- (-1, -2) node [below] {$x_2$};
+ \draw (0.7, -1.5) -- (0.7, 1.5) node [pos = 0.7, right] {$2\ell$};
+ \draw (-0.7, -1.5) -- (-0.7, 1.5);
+ \draw (0, 1.5) circle [x radius=0.7, y radius=0.3];
+ \draw [dashed] (0.7, -1.5) arc (0:180:0.7 and 0.3);
+ \draw (-0.7, -1.5) arc (180:360:0.7 and 0.3);
+ \draw (0, 1.5) -- (0.7, 1.5) node [pos = 0.5, above] {$a$};
+ \end{tikzpicture}
+ \end{center}
+ Use cylindrical polar coordinates:
+ \begin{align*}
+ x_1 &= r\cos \theta\\
+ x_2 &= r\sin \theta\\
+ x_3 &= x_3\\
+ \d V &= r\;\d r\;\d \theta \;\d x_3
+ \end{align*}
+ We have
+ \begin{align*}
+ I_{33} &= \int_V \rho_0 (x_1^2 + x_2^2)\;\d V\\
 &= \rho_0 \int_0^a \int_0^{2\pi} \int_{-\ell}^\ell r^2 (r\;\d r\;\d \theta \;\d x_3)\\
+ &= \rho_0 \cdot 2\pi \cdot 2\ell \left[\frac{r^4}{4}\right]_0^a\\
 &= \rho_0 \pi \ell a^4.
+ \end{align*}
+ Similarly, we have
+ \begin{align*}
+ I_{11} &= \int_V \rho_0 (x_2^2 + x_3^2)\;\d V\\
+ &= \rho_0 \int_0^a \int_0^{2\pi}\int_{-\ell}^\ell (r^2 \sin^2 \theta + x_3^2) r\;\d r\;\d \theta \;\d x_3\\
+ &= \rho_0 \int_0^a \int_0^{2\pi} r\left(r^2 \sin^2 \theta\left[x_3\right]_{-\ell}^\ell + \left[\frac{x_3^3}{3}\right]^{\ell}_{-\ell}\right)\;\d \theta\;\d r\\
+ &= \rho_0 \int_0^a \int_0^{2\pi} r\left(r^2 \sin^2 \theta 2\ell + \frac{2}{3}\ell^3\right)\;\d \theta\;\d r\\
 &= \rho_0 \left(\pi a^2 \cdot \frac{2}{3}\ell^3 + 2\ell\int_0^a r^3 \;\d r\int_0^{2\pi}\sin^2 \theta\;\d \theta\right)\\
+ &= \rho_0 \pi a^2 \ell\left(\frac{a^2}{2} + \frac{2}{3}\ell^2\right)
+ \end{align*}
+ By symmetry, the result for $I_{22}$ is the same.
+
+ How about the off-diagonal elements?
+ \begin{align*}
+ I_{13} &= -\int_V \rho_0 x_1 x_3 \;\d V\\
+ &= -\rho_0 \int_0^a \int_{-\ell}^\ell \int_0^{2\pi} r^2 \cos \theta x_3 \;\d r\;\d x_3 \;\d \theta\\
+ &= 0
+ \end{align*}
 since $\int_0^{2\pi} \cos \theta \;\d \theta = 0$. Similarly, the other off-diagonal elements are all 0. So the non-zero components are
+ \begin{align*}
+ I_{33} &= \frac{1}{2}Ma^2\\
+ I_{11} = I_{22} &= M\left(\frac{a^2}{4} + \frac{\ell^2}{3}\right)
+ \end{align*}
 In the particular case where $\ell = \frac{a\sqrt{3}}{2}$, we have $I_{ij} = \frac{1}{2}Ma^2 \delta_{ij}$. So in this case,
+ \[
+ \mathbf{L} = \frac{1}{2}Ma^2 \boldsymbol\omega
+ \]
+ for rotation about any axis.
+\end{eg}
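The moments of inertia computed above can be cross-checked by Monte Carlo integration. The sketch below (assuming NumPy; the values of $a$ and $\ell$ are arbitrary choices for illustration, not from the notes) samples points uniformly in the cylinder and estimates $I_{ij}$:

```python
import numpy as np

rng = np.random.default_rng(3)
a, l, rho0 = 1.0, 0.7, 1.0        # arbitrary illustrative values
n = 400_000

# Sample points uniformly inside the cylinder |r| <= a, |x3| <= l.
r = a * np.sqrt(rng.random(n))    # sqrt gives uniform density in the disc
theta = 2 * np.pi * rng.random(n)
x = np.stack([r * np.cos(theta),
              r * np.sin(theta),
              l * (2 * rng.random(n) - 1)])

M = rho0 * 2 * l * np.pi * a**2   # total mass
r2 = np.einsum("ki,ki->i", x, x)
# I_ij = integral of rho (|r|^2 delta_ij - x_i x_j) dV, as a Monte Carlo mean.
I = M * np.mean(r2[None, None, :] * np.eye(3)[:, :, None]
                - np.einsum("ki,li->kli", x, x), axis=2)

print(np.round(I, 3))
print(M * a**2 / 2, M * (a**2 / 4 + l**2 / 3))   # exact I_33 and I_11 = I_22
```

The off-diagonal entries come out near zero, matching the direct calculation.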
+
+\subsection{Diagonalization of a symmetric second rank tensor}
+Recall that using matrix notation,
+\[
+ T = (T_{ij}),\quad T' = (T_{ij}'),\quad R = (R_{ij}),
+\]
+and the tensor transformation rule $T'_{ij} = R_{ip}R_{jq}T_{pq}$ becomes
+\[
+ T' = RTR^T = RTR^{-1}.
+\]
+If $T$ is symmetric, it can be diagonalized by such an orthogonal transformation. This means that there exists a basis of orthonormal eigenvectors $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ for $T$ with real eigenvalues $\lambda_1, \lambda_2, \lambda_3$ respectively. The directions defined by $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ are the \emph{principal axes} for $T$, and the tensor is diagonal in Cartesian coordinates along these axes.
+
+This applies to any symmetric rank-2 tensor. For the special case of the inertia tensor, the eigenvalues are called the \emph{principal moments of inertia}.
+
+As exemplified in the previous example, we can often guess the correct principal axes for $I_{ij}$ based on the symmetries of the body. With the axes we chose, $I_{ij}$ was found to be diagonal by direct calculation.
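Numerically, this diagonalisation is exactly an orthogonal eigendecomposition. A small sketch (assuming NumPy; the matrix is an arbitrary symmetric example, not from the notes):

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])   # a symmetric rank-2 tensor

lam, vecs = np.linalg.eigh(T)     # eigenvalues ascending, orthonormal columns
R = vecs.T                        # rows are the principal axes
if np.linalg.det(R) < 0:          # flip one axis so R is a proper rotation
    R[0] *= -1

# In the principal-axis frame, T'_ij = R_ip R_jq T_pq is diagonal.
assert np.allclose(R @ T @ R.T, np.diag(lam))
print(lam)   # [1. 3. 3.]
```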
+
+\section{Invariant and isotropic tensors}
+\subsection{Definitions and classification results}
+\begin{defi}[Invariant and isotropic tensor]
+ A tensor $T$ is \emph{invariant} under a particular rotation $R$ if
+ \[
+ T_{ij\cdots k}' = R_{ip}R_{jq}\cdots R_{kr}T_{pq\cdots r} = T_{ij\cdots k},
+ \]
+ i.e.\ every component is unchanged under the rotation.
+
+ A tensor $T$ which is invariant under every rotation is \emph{isotropic}, i.e.\ the same in every direction.
+\end{defi}
+
+\begin{eg}
+ The inertia tensor of a sphere is isotropic by symmetry.
+
+ $\delta_{ij}$ and $\varepsilon_{ijk}$ are also isotropic tensors. This ensures that the component definitions of the scalar and vector products $\mathbf{a}\cdot \mathbf{b} = a_i b_j \delta_{ij}$ and $(\mathbf{a}\times \mathbf{b})_i = \varepsilon_{ijk} a_j b_k$ are independent of the Cartesian coordinate system.
+\end{eg}
+
+Isotropic tensors in $\R^3$ can be classified:
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item There are no isotropic tensors of rank 1, except the zero tensor.
+ \item The most general rank 2 isotropic tensor is $T_{ij} = \alpha \delta_{ij}$ for some scalar $\alpha$.
+ \item The most general rank 3 isotropic tensor is $T_{ijk} = \beta \varepsilon_{ijk}$ for some scalar $\beta$.
+ \item All isotropic tensors of higher rank are obtained by combining $\delta_{ij}$ and $\varepsilon_{ijk}$ using tensor products, contractions, and linear combinations.
+ \end{enumerate}
+\end{thm}
+We will provide a sketch of the proof:
+\begin{proof}
+ We analyze conditions for invariance under specific rotations through $\pi$ or $\pi/2$ about coordinate axes.
+
+ \begin{enumerate}
+ \item Suppose $T_i$ is rank-1 isotropic. Consider a rotation about $x_3$ through $\pi$:
+ \[
+ (R_{ij}) =
+ \begin{pmatrix}
+ -1 & 0 & 0\\
+ 0 & -1 & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}.
+ \]
+ We want $T_1 = R_{ip}T_p = R_{11} T_1 = -T_1$. So $T_1 = 0$. Similarly, $T_2 = 0$. By considering a rotation about, say, $x_1$, we also get $T_3 = 0$.
+ \item Suppose $T_{ij}$ is rank-2 isotropic. Consider
+ \[
+ (R_{ij}) =
+ \begin{pmatrix}
+ 0 & 1 & 0\\
+ -1 & 0 & 0\\
+ 0 & 0 & 1
+ \end{pmatrix},
+ \]
+ which is a rotation through $\pi/2$ about the $x_3$ axis. Then
+ \[
+ T_{13} = R_{1p}R_{3q} T_{pq} = R_{12}R_{33}T_{23} = T_{23}
+ \]
+ and
+ \[
+ T_{23} = R_{2p}R_{3q} T_{pq} = R_{21}R_{33}T_{13} = -T_{13}.
+ \]
+ So $T_{13} = T_{23} = 0$. Similarly, we have $T_{31} = T_{32} = 0$.
+
+ We also have
+ \[
+ T_{11} = R_{1p} R_{1q} T_{pq} = R_{12} R_{12}T_{22} = T_{22}.
+ \]
+ So $T_{11} = T_{22}$.
+
+ By picking a rotation through $\pi/2$ about a different axis, say $x_1$, we similarly find $T_{12} = T_{21} = 0$ and $T_{22} = T_{33}$.
+
+ Hence $T_{ij} = \alpha \delta_{ij}$.
+
+ \item Suppose that $T_{ijk}$ is rank-3 isotropic. Using the rotation by $\pi$ about the $x_3$ axis, we have
+ \[
+ T_{133} = R_{1p}R_{3q}R_{3r}T_{pqr} = -T_{133}.
+ \]
+ So $T_{133} = 0$. We also have
+ \[
+ T_{111} = R_{1p}R_{1q}R_{1r}T_{pqr} = -T_{111}.
+ \]
+ So $T_{111} = 0$. We have similar results for $\pi$ rotations about other axes and other choices of indices.
+
+ Together, these show that $T_{ijk} = 0$ unless $i$, $j$ and $k$ are all distinct.
+
+ Now consider
+ \[
+ (R_{ij}) =
+ \begin{pmatrix}
+ 0 & 1 & 0\\
+ -1 & 0 & 0\\
+ 0 & 0 & 1
+ \end{pmatrix},
+ \]
+ a rotation about $x_3$ through $\pi/2$. Then
+ \[
+ T_{123} = R_{1p}R_{2q}R_{3r}T_{pqr} = R_{12}R_{21}R_{33}T_{213} =-T_{213}.
+ \]
+ So $T_{123} = -T_{213}$. Along with similar results for other indices and axes of rotation, we find that $T_{ijk}$ is totally antisymmetric, and $T_{ijk} = \beta \varepsilon_{ijk}$ for some $\beta$.\qedhere
+ \end{enumerate}
+\end{proof}
+\begin{eg}
+ The most general isotropic tensor of rank 4 is
+ \[
+ T_{ijk\ell} = \alpha \delta_{ij}\delta_{k\ell} + \beta \delta_{ik}\delta_{j\ell} + \gamma \delta_{i\ell}\delta_{jk}
+ \]
+ for some scalars $\alpha, \beta, \gamma$. There are no other independent combinations. (We might think we can write a rank-4 isotropic tensor in terms of $\varepsilon_{ijk}$, like $\varepsilon_{ijp}\varepsilon_{k\ell p}$, but this is just $\delta_{ik}\delta_{j\ell} - \delta_{i\ell}\delta_{jk}$. It turns out that anything we can write with $\varepsilon_{ijk}$ here can be written in terms of $\delta_{ij}$ instead.)
+\end{eg}
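The identity $\varepsilon_{ijp}\varepsilon_{k\ell p} = \delta_{ik}\delta_{j\ell} - \delta_{i\ell}\delta_{jk}$ used in this remark can be checked by brute force over all $3^4$ index choices, e.g.\ with this short sketch:

```python
import numpy as np

delta = np.eye(3)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

# Contract two epsilons over their last index; compare with the delta form.
lhs = np.einsum('ijp,klp->ijkl', eps, eps)
rhs = (np.einsum('ik,jl->ijkl', delta, delta)
       - np.einsum('il,jk->ijkl', delta, delta))
print(np.allclose(lhs, rhs))
```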
+
+\subsection{Application to invariant integrals}
+We have the following very useful theorem. It might seem a bit odd and arbitrary at first sight --- if so, read the example below first (after reading the statement of the theorem), and things will make sense!
+\begin{thm}
+ Let
+ \[
+ T_{ij\cdots k} = \int_V f(\mathbf{x}) x_i x_j \cdots x_k\;\d V,
+ \]
+ where $f(\mathbf{x})$ is a scalar function and $V$ is some volume.
+
+ Given a rotation $R_{ij}$, consider an \emph{active} transformation: $\mathbf{x} = x_i \mathbf{e}_i$ is mapped to $\mathbf{x}' = x_i' \mathbf{e}_i$ with $x_i' = R_{ij} x_j$, i.e.\ we map the components but not the basis, and $V$ is mapped to $V'$.
+
+ Suppose that under this active transformation,
+ \begin{enumerate}
+ \item $f(\mathbf{x}) = f(\mathbf{x}')$,
+ \item $V' = V$ (e.g.\ if $V$ is all of space or a sphere).
+ \end{enumerate}
+ Then $T_{ij\cdots k}$ is invariant under the rotation.
+\end{thm}
+
+\begin{proof}
+ First note that the Jacobian of the transformation $R$ is $1$, since it is simply the determinant of $R$ ($x_i' = R_{ip}x_p \Rightarrow \frac{\partial x_i'}{\partial x_p} = R_{ip}$), which is by definition 1. So $\d V = \d V'$.
+
+ Then we have
+ \begin{align*}
+ R_{ip}R_{jq}\cdots R_{kr} T_{pq\cdots r} &= \int_V f(\mathbf{x})x_i' x_j' \cdots x_k' \;\d V\\
+ &= \int_V f(\mathbf{x}')x_i' x_j' \cdots x_k'\;\d V&&\text{using (i)}\\
+ &= \int_{V'} f(\mathbf{x}') x_i' x_j' \cdots x_k' \;\d V'&&\text{using (ii)}\\
+ &= \int_{V} f(\mathbf{x}) x_i x_j \cdots x_k \;\d V&&\text{since $x_i$ and $x_i'$ are dummy}\\
+ &= T_{ij\cdots k}.\qedhere
+ \end{align*}
+\end{proof}
+
+The result is particularly useful if (i) and (ii) hold for \emph{any} rotation $R$, in which case $T_{ij\cdots k}$ is isotropic.
+
+\begin{eg}
+ Let
+ \[
+ T_{ij} = \int_V x_i x_j \;\d V,
+ \]
+ with $V$ being the solid sphere $|\mathbf{r}| < a$. Our result applies with $f = 1$, which, being a constant, is clearly invariant under rotations. Also the solid sphere is invariant under any rotation. So $T$ must be isotropic. But the only rank 2 isotropic tensor is $\alpha \delta_{ij}$. Hence we must have
+ \[
+ T_{ij} = \alpha \delta_{ij},
+ \]
+ and all we have to do is to determine the scalar $\alpha$.
+
+ Taking the trace, we have
+ \[
+ T_{ii} = 3\alpha = \int_V x_i x_i \;\d V = 4\pi \int_0^a r^2 \cdot r^2\;\d r = \frac{4}{5}\pi a^5.
+ \]
+ So
+ \[
+ T_{ij} = \frac{4}{15}\pi a^5 \delta_{ij}.
+ \]
+ Normally if we are only interested in the $i \not= j$ case, we just claim that $T_{ij} = 0$ by saying ``by symmetry, it is $0$''. But now we can do it (more) rigorously!
+
+ There is a closely related result for the inertia tensor of a solid sphere of constant density $\rho_0$, and hence mass $M = \frac{4}{3}\pi a^3 \rho_0$.
+
+ Recall that
+ \[
+ I_{ij} = \int_V \rho_0 (x_k x_k \delta_{ij} - x_i x_j)\;\d V.
+ \]
+ We see that $I_{ij}$ is isotropic (since we have just shown that $\int_V x_i x_j\;\d V$ is isotropic, and $\delta_{ij}\int_V x_kx_k \;\d V$ is also isotropic). So $I_{ij} = \beta \delta_{ij}$ for some $\beta$, which we can find directly:
+ \begin{align*}
+ I_{ij} &= \int_V \rho_0 (x_k x_k \delta_{ij} - x_ix_j)\;\d V \\
+ &= \rho_0\left(\delta_{ij}\int_V x_k x_k\;\d V - \int_V x_ix_j \;\d V\right)\\
+ &= \rho_0\left(\delta_{ij}T_{kk} - T_{ij}\right)\\
+ &= \rho_0\left(\frac{4}{5}\pi a^5\delta_{ij} - \frac{4}{15}\pi a^5 \delta_{ij}\right)\\
+ &= \frac{8}{15}\rho_0 \pi a^5 \delta_{ij}\\
+ &= \frac{2}{5}Ma^2 \delta_{ij}.
+ \end{align*}
+\end{eg}
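Both results can be checked numerically. The sketch below approximates $T_{ij} = \int_V x_i x_j \;\d V$ by a midpoint sum on a Cartesian grid (the resolution $N$ is an arbitrary choice) and compares against $\frac{4}{15}\pi a^5 \delta_{ij}$ and $\frac{2}{5}Ma^2\delta_{ij}$:

```python
import numpy as np

a, N = 1.0, 120  # sphere radius; grid cells per axis (arbitrary resolution)
xs = (np.arange(N) + 0.5) * (2 * a / N) - a   # cell midpoints in [-a, a]
X, Y, Z = np.meshgrid(xs, xs, xs, indexing='ij')
inside = X**2 + Y**2 + Z**2 < a**2            # restrict to the solid sphere
dV = (2 * a / N) ** 3

coords = [X[inside], Y[inside], Z[inside]]
T = np.array([[np.sum(coords[i] * coords[j]) * dV for j in range(3)]
              for i in range(3)])

rho0 = 1.0
I = rho0 * (np.trace(T) * np.eye(3) - T)      # inertia tensor of the sphere
M = rho0 * 4 * np.pi * a**3 / 3

print(np.diag(T), 4 * np.pi * a**5 / 15)      # expect approximate agreement
print(np.diag(I), 2 / 5 * M * a**2)
```

The off-diagonal entries come out (numerically) zero, as the isotropy argument predicts.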
+
+\end{document}
diff --git a/books/cam/IA_M/differential_equations.tex b/books/cam/IA_M/differential_equations.tex
new file mode 100644
index 0000000000000000000000000000000000000000..709d2f25be13a8d6df42729d391bff21f3a4a341
--- /dev/null
+++ b/books/cam/IA_M/differential_equations.tex
@@ -0,0 +1,2827 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IA}
+\def\nterm {Michaelmas}
+\def\nyear {2014}
+\def\nlecturer {M.\ G.\ Worster}
+\def\ncourse {Differential Equations}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+ \noindent\textbf{Basic calculus}\\
+ Informal treatment of differentiation as a limit, the chain rule, Leibnitz's rule, Taylor series, informal treatment of $O$ and $o$ notation and l'H\^opital's rule; integration as an area, fundamental theorem of calculus, integration by substitution and parts.\hspace*{\fill}[3]
+
+ \vspace{5pt}
+ \noindent Informal treatment of partial derivatives, geometrical interpretation, statement (only) of symmetry of mixed partial derivatives, chain rule, implicit differentiation. Informal treatment of differentials, including exact differentials. Differentiation of an integral with respect to a parameter.\hspace*{\fill}[2]
+
+ \vspace{10pt}
+ \noindent\textbf{First-order linear differential equations}\\
+ Equations with constant coefficients: exponential growth, comparison with discrete equations, series solution; modelling examples including radioactive decay.
+
+ \vspace{5pt}
+ \noindent Equations with non-constant coefficients: solution by integrating factor.\hspace*{\fill}[2]
+
+ \vspace{10pt}
+ \noindent\textbf{Nonlinear first-order equations}\\
+ Separable equations. Exact equations. Sketching solution trajectories. Equilibrium solutions, stability by perturbation; examples, including logistic equation and chemical kinetics. Discrete equations: equilibrium solutions, stability; examples including the logistic map.\hspace*{\fill}[4]
+
+ \vspace{10pt}
+ \noindent\textbf{Higher-order linear differential equations}\\
+ Complementary function and particular integral, linear independence, Wronskian (for second-order equations), Abel's theorem. Equations with constant coefficients and examples including radioactive sequences, comparison in simple cases with difference equations, reduction of order, resonance, transients, damping. Homogeneous equations. Response to step and impulse function inputs; introduction to the notions of the Heaviside step-function and the Dirac delta-function. Series solutions including statement only of the need for the logarithmic solution.\hspace*{\fill}[8]
+
+ \vspace{10pt}
+ \noindent\textbf{Multivariate functions: applications}\\
+ Directional derivatives and the gradient vector. Statement of Taylor series for functions on $\R^n$. Local extrema of real functions, classification using the Hessian matrix. Coupled first order systems: equivalence to single higher order equations; solution by matrix methods. Non-degenerate phase portraits local to equilibrium points; stability.
+
+ \vspace{5pt}
+ \noindent Simple examples of first- and second-order partial differential equations, solution of the wave equation in the form $f(x + ct) + g(x - ct)$.\hspace*{\fill}[5]}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In this course, it is assumed that students already know how to do calculus. While we will define all of calculus from scratch, it is there mostly to introduce the big and small $o$ notation which will be used extensively in this and future courses (as well as for the sake of completeness). It is impossible for a person who hasn't seen calculus before to learn calculus from those few pages.
+
+Calculus is often used to model physical systems. For example, if we know that the force $F = m\ddot x$ on a particle at any time $t$ is given by $t^2 - 1$, then we can write this as
+\[
+ m\ddot x = t^2 - 1.
+\]
+We can easily integrate this twice with respect to $t$, and find the position $x$ as a function of time.
+
+However, often the rules governing a physical system are not like this. Instead, the force on the particle is more likely to depend on the \emph{position} of the particle, instead of what time it is. Hence the actual equation of motion might be
+\[
+ m\ddot x = x^2 - 1.
+\]
+This is an example of a \emph{differential equation}. We are given an equation that a function $x$ obeys, often involving derivatives of $x$, and we have to find all functions that satisfy this equation (of course, the first equation is also a differential equation, but a rather boring one).
+
+A closely related notion is \emph{difference equations}. These are discrete analogues of differential equations. A famous example is the \emph{Fibonacci sequence}, which states that
+\[
+ F_{n + 2} - F_{n + 1} - F_n = 0.
+\]
+This specifies a relationship between terms in a sequence $(F_n)$, and we want to find an explicit solution to this equation.
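For concreteness (a quick illustration, not from the notes), we can generate the sequence directly from the recurrence; the ratio of successive terms approaches the golden ratio $\frac{1 + \sqrt{5}}{2}$, which hints at the kind of explicit solution one finds by guessing $F_n = k^n$:

```python
# Generate terms from the recurrence F_{n+2} = F_{n+1} + F_n.
F = [1, 1]
for _ in range(30):
    F.append(F[-1] + F[-2])

ratio = F[-1] / F[-2]   # successive ratios approach the golden ratio
print(F[:8], ratio)
```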
+
+In this course, we will develop numerous techniques to solve different differential equations and difference equations. Often, this involves guessing of some sort.
+
+\section{Differentiation}
+We will first quickly go through basic notions of differentiation and integration. You should already be familiar with these from A levels or equivalent.
+
+\subsection{Differentiation}
+\begin{defi}[Derivative of function]
+ The \emph{derivative} of a function $f(x)$ with respect to $x$, interpreted as the rate of change of $f(x)$ with $x$, is
+ \[
+ \frac{\d f}{\d x} = \lim_{h\to 0} \frac{f(x + h) - f(x)}{h}.
+ \]
+ A function $f(x)$ is differentiable at $x$ if the limit exists (i.e.\ the left-hand and right-hand limits are equal).
+\end{defi}
+
+\begin{eg}
+ $f(x)=|x|$ is not differentiable at $x = 0$ as $\lim\limits_{h\to 0^+} \frac{|h| - |0|}{h}= 1$ and $\lim\limits_{h\to 0^-} \frac{|h| - |0|}{h}= -1$.
+\end{eg}
+
+\begin{notation}
+ We write $\frac{\d f}{\d x} = f'(x) = \frac{\d}{\d x} f(x)$. Also, $\frac{\d}{\d x}\left(\frac{\d}{\d x} f(x)\right) = \frac{\d^2}{\d x^2} f(x) = f''(x)$.
+
+ Note that the notation $f'$ represents the derivative with respect to the argument. For example, $f'(2x) = \frac{\d f}{\d (2x)}$.
+\end{notation}
+
+\subsection{Small \texorpdfstring{$o$}{o} and big \texorpdfstring{$O$}{O} notations}
+\begin{defi}[$O$ and $o$ notations]\leavevmode
+ \begin{enumerate}
+ \item ``$f(x) = o(g(x))$ as $x\to x_0$'' if $\lim\limits_{x\to x_0} \frac{f(x)}{g(x)} = 0$. Intuitively, $f(x)$ is much smaller than $g(x)$.
+ \item ``$f(x) = O(g(x))$ as $x\to x_0$'' if $\frac{f(x)}{g(x)}$ is bounded as $x\to x_0$. Intuitively, $f(x)$ is about as big as $g(x)$.
+
+ Note that for $f(x) = O(g(x))$ to be true, $\displaystyle \lim_{x\to x_0} \frac{f(x)}{g(x)}$ need not exist.
+ \end{enumerate}
+ Usually, $x_0$ is either $0$ or infinity. Clearly, $f(x)=o(g(x))$ implies $f(x) = O(g(x))$.
+\end{defi}
+Note that this is an abuse of notation. We are not really saying that $f(x)$ is ``equal'' to $o(g(x))$, since $o(g(x))$ itself is not a function. Instead, $o(g(x))$ represents a \emph{class} of functions (namely functions that satisfy the property above), and we are saying that $f$ is in this class. Technically, a better notation might be $f(x) \in o(g(x))$, but in practice, writing $f(x) = o(g(x))$ is more common and more convenient.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item $x=o(\sqrt{x})$ as $x\to 0$ and $\sqrt{x} = o(x)$ as $x\to \infty$.
+ \item $\sin 2x = O(x)$ as $x\to 0$ as $\sin \theta \approx \theta$ for small $\theta$.
+ \item $\sin 2x = O(1)$ as $x\to \infty$ even though the limit does not exist.
+ \end{itemize}
+\end{eg}
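These limits are easy to see numerically (a quick sketch; the sample points are arbitrary choices): the $o$ ratio tends to $0$, while the $O$ ratio stays bounded (here it tends to $2$):

```python
import math

# f = o(g) as x -> 0 means f/g -> 0; f = O(g) means f/g stays bounded.
xs = [10.0**(-k) for k in range(1, 8)]

small_o = [x / math.sqrt(x) for x in xs]     # x = o(sqrt(x)) as x -> 0
big_O = [math.sin(2 * x) / x for x in xs]    # sin 2x = O(x) as x -> 0

print(small_o[-1], big_O[-1])
```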
+
+This notation will frequently be used in calculus. For example, if we want to ignore all terms second order in $x$ in an expression, we can write out the first order terms and then append $+O(x^2)$. In particular, we can use it to characterize derivatives in a different way.
+\begin{prop}
+ \[
+ f(x_0 + h) = f(x_0) + f'(x_0)h + o(h)
+ \]
+\end{prop}
+
+\begin{proof}
+ By the definition of the derivative,
+ \[
+ \frac{f(x_0 + h) - f(x_0)}{h} = f'(x_0) + o(1)\quad\text{as }h \to 0,
+ \]
+ and multiplying through by $h$ gives the result.
+\end{proof}
+
+\subsection{Methods of differentiation}
+\begin{thm}[Chain rule]
+ Given $f(x) = F(g(x))$, then
+ \[
+ \frac{\d f}{\d x} = \frac{\d F}{\d g}\frac{\d g}{\d x}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Assuming that $\frac{\d g}{\d x}$ exists and is therefore finite, we have
+ \begin{align*}
+ \frac{\d f}{\d x} &= \lim_{h\to 0}\frac{F(g(x + h)) - F(g(x))}{h}\\
+ &= \lim_{h\to 0}\frac{F[g(x) + hg'(x) + o(h)] - F(g(x))}{h}\\
+ &= \lim_{h\to 0}\frac{F(g(x)) + (hg'(x) + o(h))F'(g(x)) + o(hg'(x) + o(h)) - F(g(x))}{h}\\
+ &= \lim_{h\to 0}g'(x)F'(g(x)) + \frac{o(h)}{h}\\
+ &= g'(x)F'(g(x))\\
+ &= \frac{\d F}{\d g}\frac{\d g}{\d x}\qedhere
+ \end{align*}
+\end{proof}
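A numerical spot check of the chain rule (the functions and test point are arbitrary choices): with $F = \exp$ and $g = \cos$, we compare a finite-difference derivative of $f = F\circ g$ against $F'(g(x))g'(x)$.

```python
import math

g, F = math.cos, math.exp
f = lambda x: F(g(x))

x, h = 0.6, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference estimate
chain = F(g(x)) * -math.sin(x)              # F'(g(x)) g'(x), since F' = exp
print(numeric, chain)
```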
+
+\begin{thm}[Product Rule]
+ Given $f(x) = u(x)v(x)$, we have
+ \[
+ f'(x) = u'(x)v(x) + u(x)v'(x).
+ \]
+\end{thm}
+
+\begin{thm}[Leibniz's Rule]
+ Given $f = uv$, then
+ \[
+ f^{(n)}(x) = \sum_{r = 0}^n \binom{n}{r}u^{(r)}v^{(n - r)},
+ \]
+ where $f^{(n)}$ is the $n$-th derivative of $f$.
+\end{thm}
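As a numerical sketch (the functions and test point are arbitrary choices), take $u = x^2$, $v = \sin x$ and compare the Leibniz formula for $f'''$ against a finite-difference estimate:

```python
import math

u = lambda x: x**2
v = math.sin
f = lambda x: u(x) * v(x)

x, h = 0.9, 1e-3
# Leibniz: f''' = u v''' + 3 u' v'' + 3 u'' v' + u''' v, with u = x^2, v = sin x.
leibniz = (u(x) * -math.cos(x)          # u v'''
           + 3 * 2 * x * -math.sin(x)   # 3 u' v''
           + 3 * 2 * math.cos(x))       # 3 u'' v'   (u''' = 0)
# Central finite-difference estimate of the third derivative.
numeric = (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)
print(leibniz, numeric)
```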
+
+\subsection{Taylor's theorem}
+\begin{thm}[Taylor's Theorem]
+ For $n$-times differentiable $f$, we have
+ \[
+ f(x + h) = f(x) + hf'(x) + \frac{h^2}{2!}f''(x) + \cdots + \frac{h^n}{n!}f^{(n)}(x) + E_n,
+ \]
+ where $E_n = o(h^{n})$ as $h\to 0$. If $f^{(n+1)}$ exists, then $E_n = O(h^{n+1})$.
+\end{thm}
+Note that this only gives a local approximation around $x$. This does not necessarily tell us anything about values of $f$ far from $x$ (though it sometimes does).
+
+An alternative form of the sum above is:
+\[
+ f(x) = f(x_0) + (x-x_0)f'(x_0) + \cdots + \frac{(x-x_0)^n}{n!}f^{(n)}(x_0) + E_n.
+\]
+When the limit as $n\to \infty$ is taken, the Taylor series of $f(x)$ about the point $x = x_0$ is obtained.
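We can see the claimed error behaviour numerically (an illustration with $f = \exp$; the expansion point and $n$ are arbitrary choices): if $E_n = O(h^{n+1})$, halving $h$ should shrink the error by a factor of roughly $2^{n+1}$.

```python
import math

def taylor(x, h, n):
    # n-th order Taylor polynomial of exp about x, evaluated at x + h.
    return sum(math.exp(x) * h**k / math.factorial(k) for k in range(n + 1))

x, n = 0.3, 3
err = lambda h: abs(math.exp(x + h) - taylor(x, h, n))
r = err(0.1) / err(0.05)   # should be roughly 2^(n+1) = 16
print(r)
```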
+
+\subsection{L'Hopital's rule}
+\begin{thm}[L'Hopital's Rule]
+ Let $f(x)$ and $g(x)$ be differentiable at $x_0$, and $\displaystyle \lim_{x\to x_0}f(x) = \lim_{x\to x_0}g(x) = 0$. Then
+ \[
+ \lim_{x\to x_0} \frac{f(x)}{g(x)} = \lim_{x\to x_0} \frac{f'(x)}{g'(x)}.
+ \]
+\end{thm}
+\begin{proof}
+ From the Taylor's Theorem, we have $f(x) = f(x_0) + (x - x_0)f'(x_0) + o(x - x_0)$, and similarly for $g(x)$. Thus
+ \begin{align*}
+ \lim_{x\to x_0} \frac{f(x)}{g(x)} &= \lim_{x\to x_0} \frac{f(x_0) + (x - x_0)f'(x_0) + o(x - x_0)}{g(x_0) + (x - x_0)g'(x_0) + o(x - x_0)}\\
+ &= \lim_{x\to x_0} \frac{f'(x_0) + \frac{o(x-x_0)}{x-x_0}}{g'(x_0) + \frac{o(x-x_0)}{x-x_0}}\\
+ &= \lim_{x\to x_0} \frac{f'(x)}{g'(x)}\qedhere
+ \end{align*}
+\end{proof}
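A quick numerical illustration (the example functions are chosen here, not taken from the notes): for $f(x) = \sin x$ and $g(x) = e^x - 1$, both vanishing at $x_0 = 0$, the rule predicts the limit $f'(0)/g'(0) = \cos 0/e^0 = 1$.

```python
import math

# Evaluate the ratio close to x0 = 0; it should approach f'(0)/g'(0) = 1.
x = 1e-6
ratio = math.sin(x) / (math.exp(x) - 1)
print(ratio)
```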
+
+\section{Integration}
+\subsection{Integration}
+\begin{defi}[Integral]
+ An \emph{integral} is the limit of a sum, e.g.
+ \[
+ \int_a^b f(x) \;\d x = \lim_{\Delta x\to 0}\sum_{n=0}^{N - 1} f(x_n)\Delta x.
+ \]
+ For example, we can take $\Delta x=\frac{b - a}{N}$ and $x_n = a + n\Delta x$. Note that an integral need not be defined with this particular $\Delta x$ and $x_n$. The term ``integral'' simply refers to any limit of a sum (The usual integrals we use are a special kind known as Riemann integral, which we will study formally in Analysis I). Pictorially, we have
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 5) node [above] {$y$};
+
+ \draw [domain=-1:5] plot (\x, {(\x + 1)*(\x + 1)/10 + 1});
+
+ \draw (0.5, 0) node [below] {$a$} -- (0.5, 1.225) -- (1, 1.225);
+ \draw (1, 0) node [below] {$x_1$} -- (1, 1.4) -- (1.5, 1.4);
+ \draw (1.5, 0) node [below] {$x_2$} -- (1.5, 1.625) -- (2, 1.625) -- (2, 0) node [below] {$x_3$};
+ \node at (2.4, 0.8) {$\cdots$};
+ \draw (2.75, 0) node [below] {$x_n$} -- (2.75, 2.40625) -- (3.25, 2.40625) -- (3.25, 0) node [anchor = north west] {$\!\!\!\!\!x_{n + 1}\cdots$};
+ \node at (3.65, 1.2) {$\cdots$};
+ \draw (4, 0) -- (4, 3.5) -- (4.5, 3.5) -- (4.5, 0) node [below] {$b$};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+The area under the graph from $x_n$ to $x_{n+1}$ is $f(x_n)\Delta x + O(\Delta x^2)$. Provided that $f$ is differentiable, the total area under the graph from $a$ to $b$ is
+\[
+ \lim_{N\to \infty} \sum_{n=0}^{N-1}(f(x_n)\Delta x) + N\cdot O(\Delta x^2) = \lim_{N\to \infty} \sum_{n=0}^{N-1}(f(x_n)\Delta x) + O(\Delta x) = \int_a^b f(x)\;\d x
+\]
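The limiting sum can be seen converging in a short sketch (using the choice $\Delta x = (b-a)/N$, $x_n = a + n\Delta x$ from the definition above; $f(x) = x^2$ on $[0, 1]$, with exact answer $1/3$, is an arbitrary test case):

```python
# Riemann sum with x_n = a + n*dx and dx = (b - a)/N, as in the definition.
def riemann(f, a, b, N):
    dx = (b - a) / N
    return sum(f(a + n * dx) for n in range(N)) * dx

approx = riemann(lambda x: x**2, 0.0, 1.0, 100000)
exact = 1.0 / 3.0
print(approx, exact)
```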
+\begin{thm}[Fundamental Theorem of Calculus]
+ Let $F(x) = \int_a^x f(t)\;\d t$. Then $F'(x) = f(x)$.
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ \frac{\d}{\d x}F(x) &= \lim_{h\to 0}\frac{1}{h}\left[\int_a^{x+h}f(t)\;\d t - \int_a^x f(t)\;\d t\right]\\
+ &= \lim_{h\to 0} \frac{1}{h}\int_x^{x+h}f(t) \;\d t\\
+ &= \lim_{h\to 0} \frac{1}{h}[f(x)h + O(h^2)]\\
+ &= f(x)
+ \end{align*}
+\end{proof}
+Similarly, we have
+\[
+ \frac{\d}{\d x}\int_x^b f(t)\;\d t = -f(x)
+\]
+and
+\[
+ \frac{\d}{\d x}\int_a^{g(x)} f(t)\;\d t = f(g(x))g'(x).
+\]
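The last formula can be sanity-checked numerically (the example functions are chosen here): with $f(t) = 1/(1 + t^2)$ and $g(x) = 2x$, the integral is $F(x) = \tan^{-1}(2x)$ exactly, and its derivative should equal $f(g(x))g'(x)$.

```python
import math

# With f(t) = 1/(1+t^2) and g(x) = 2x, the integral from 0 to g(x) is atan(2x).
F = lambda x: math.atan(2 * x)

x, h = 0.7, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference estimate
formula = (1 / (1 + (2 * x)**2)) * 2        # f(g(x)) g'(x)
print(numeric, formula)
```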
+
+\begin{notation}
+ We write $\int f(x)\;\d x= \int^x f(t)\;\d t$, where the unspecified lower limit gives rise to the constant of integration.
+\end{notation}
+
+\subsection{Methods of integration}
+Integration is substantially harder than differentiation. Given an expression, we can usually differentiate it with ease using the product rule and chain rule combined with a few standard derivatives. However, many seemingly-innocent expressions can turn out to be surprisingly difficult, if not impossible, to integrate. Hence we have to come up with a lot of tricks to do integrals.
+
+\begin{eg}[Integration by substitution]
+ Consider $\int \frac{1 - 2x}{\sqrt{x - x^2}}\;\d x$. Write $u = x - x^2$ and $\d u = (1 - 2x)\;\d x$. Then the integral becomes
+ \[
+ \int \frac{\d u}{\sqrt{u}} = 2\sqrt{u} + C = 2\sqrt{x - x^2} + C.
+ \]
+\end{eg}
+
+Trigonometric substitution can be performed with reference to the following table: if the expression in the second column is found in the integrand, perform the substitution in the third column and simplify using the identity in the first column:
+\begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ Useful identity & Part of integrand & Substitution \\
+ \midrule
+ $\cos^2\theta + \sin^2\theta = 1$ & $\sqrt{1 - x^2}$ & $x = \sin \theta$ \\
+ $1 + \tan^2\theta = \sec^2\theta$ & $1 + x^2$ & $x = \tan\theta$ \\
+ $\cosh^2u - \sinh^2 u = 1$ & $\sqrt{x^2 - 1}$ & $x=\cosh u$ \\
+ $\cosh^2u - \sinh^2 u = 1$ & $\sqrt{1 + x^2}$ & $x=\sinh u$ \\
+ $1 - \tanh^2 u = \sech^2u$ & $1 - x^2$ & $x = \tanh u$ \\
+ \bottomrule
+ \end{tabular}
+\end{center}
+\begin{eg}
+ Consider $\int \sqrt{2x - x^2}\;\d x = \int\sqrt{1 - (x - 1)^2}\;\d x$. Let $x - 1=\sin\theta$ and thus $\d x = \cos \theta\;\d\theta$. The expression becomes
+ \begin{align*}
+ \int \cos^2\theta \;\d \theta &= \int\frac{\cos 2\theta + 1}{2}\;\d \theta\\
+ &= \frac{1}{4}\sin 2\theta + \frac{1}{2}\theta + C\\
+ &= \frac{1}{2}\sin^{-1}(x - 1) + \frac{1}{2}(x - 1)\sqrt{2x - x^2} + C.
+ \end{align*}
+\end{eg}
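The result can be verified by differentiating it numerically and comparing with the original integrand (a sketch; the test point is an arbitrary choice):

```python
import math

def antideriv(x):
    # Result of the substitution x - 1 = sin(theta) above.
    return 0.5 * math.asin(x - 1) + 0.5 * (x - 1) * math.sqrt(2 * x - x**2)

x, h = 0.8, 1e-6
numeric = (antideriv(x + h) - antideriv(x - h)) / (2 * h)   # central difference
integrand = math.sqrt(2 * x - x**2)
print(numeric, integrand)
```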
+
+\begin{thm}[Integration by parts]
+ \[
+ \int uv'\;\d x = uv - \int vu' \;\d x.
+ \]
+\end{thm}
+
+\begin{proof}
+ From the product rule, we have $(uv)' = uv' + u'v$. Integrating the whole expression and rearranging gives the formula above.
+\end{proof}
+
+\begin{eg}
+ Consider $\int_0^\infty xe^{-x}\d x$. Let $u = x$ and $v' = e^{-x}$. Then $u' = 1$ and $v = -e^{-x}$. We have
+ \begin{align*}
+ \int_0^\infty xe^{-x}\;\d x &= [-xe^{-x}]^\infty_0 + \int_0^\infty e^{-x} \;\d x\\
+ &= 0 + [-e^{-x}]_0^\infty\\
+ &= 1
+ \end{align*}
+\end{eg}
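As a sanity check, truncating the improper integral at a large upper limit and applying a midpoint sum recovers the value $1$ (the truncation point and resolution are arbitrary choices):

```python
import math

# Midpoint sum, truncating the improper integral at x = 50 (e^-x decays fast).
N, upper = 200000, 50.0
dx = upper / N
total = sum((n + 0.5) * dx * math.exp(-(n + 0.5) * dx) for n in range(N)) * dx
print(total)
```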
+
+\begin{eg}[Integration by Parts]
+ Consider $\int \log x\; \d x$. Let $u = \log x$ and $v' = 1$. Then $u' = \frac{1}{x}$ and $v = x$. So we have
+ \begin{align*}
+ \int \log x\; \d x &= x\log x - \int \d x\\
+ &= x \log x - x + C
+ \end{align*}
+\end{eg}
+
+\section{Partial differentiation}
+\subsection{Partial differentiation}
+So far, we have only considered functions of one variable. If we have a function of multiple variables, say $f(x, y)$, we can either differentiate it with respect to $x$ or with respect to $y$.
+\begin{defi}[Partial derivative]
+ Given a function of several variables $f(x, y)$, the \emph{partial derivative} of $f$ with respect to $x$ is the rate of change of $f$ as $x$ varies, keeping $y$ constant. It is given by
+ \[
+ \left. \frac{\partial f}{\partial x}\right|_y = \lim_{\delta x\to 0} \frac{f(x + \delta x, y) - f(x, y)}{\delta x}
+ \]
+\end{defi}
+
+\begin{eg}
+ Consider $f(x, y) = x^2 + y^3 + e^{xy^2}$. Computing the partial derivative is equivalent to computing the regular derivative with the other variables treated as constants. e.g.
+ \[
+ \left.\frac{\partial f}{\partial x}\right|_y = 2x + y^2e^{xy^2}.
+ \]
+ Second and mixed partial derivatives can also be computed:
+ \begin{align*}
+ \frac{\partial^2f}{\partial x^2} &= 2 + y^4e^{xy^2}\\
+ \frac{\partial^2 f}{\partial y\partial x} &= \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right) = 2ye^{xy^2} + 2xy^{3}e^{xy^2}
+ \end{align*}
+\end{eg}
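Both formulas from this example can be checked by finite differences (a numerical sketch; the test point and step size are arbitrary choices):

```python
import math

f = lambda x, y: x**2 + y**3 + math.exp(x * y**2)
x, y, h = 0.4, 0.7, 1e-4

# Central differences for f_x and for the mixed derivative.
fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
fxy = (f(x + h, y + h) - f(x + h, y - h)
       - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)

fx_formula = 2 * x + y**2 * math.exp(x * y**2)
fxy_formula = (2 * y + 2 * x * y**3) * math.exp(x * y**2)
print(fx, fx_formula, fxy, fxy_formula)
```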
+It is often cumbersome to write out the variables that are kept constant. If \emph{all} other variables are being held constant, we simply don't write them out, and just say $\frac{\partial f}{\partial x}$.
+
+Another convenient notation is
+\begin{notation}
+ \[
+ f_x = \frac{\partial f}{\partial x},\quad f_{xy} = \frac{\partial^2 f}{\partial y\partial x}.
+ \]
+\end{notation}
+It is important to know how the order works in the second case. The left hand side should be interpreted as $(f_x)_y$. So we first differentiate with respect to $x$, and then $y$. However, in most cases, this is not important, since we have
+\begin{thm}
+ If $f$ has continuous second partial derivatives, then $f_{xy} = f_{yx}$.
+\end{thm}
+We will not prove this statement and just assume it to be true (since this is an applied course).
+
+\subsection{Chain rule}
+Consider an arbitrary displacement in any direction $(x, y) \to (x+\delta x, y + \delta y)$. We have
+\begin{align*}
+ \delta f &= f(x+\delta x, y + \delta y) - f(x, y)\\
+ &= f(x+\delta x, y + \delta y) - f(x + \delta x, y) + f(x+\delta x, y) - f(x, y)\\
+ &= f_y(x + \delta x, y)\delta y + o(\delta y) + f_x(x, y)\delta x + o(\delta x)\\
+ &= (f_y(x, y) + o(1))\delta y + o(\delta y) + f_x(x, y)\delta x + o(\delta x)\\
+ \delta f&= \frac{\partial f}{\partial x}\delta x + \frac{\partial f}{\partial y}\delta y + o(\delta x, \delta y)
+\end{align*}
+Taking the limit as $\delta x, \delta y \to 0$, we have
+\begin{thm}[Chain rule for partial derivatives]
+ \[
+ \d f = \frac{\partial f}{\partial x}\d x + \frac{\partial f}{\partial y}\d y.
+ \]
+ Given this form, we can integrate the differentials to obtain the integral form:
+ \[
+ \int \d f = \int \frac{\partial f}{\partial x}\d x + \int \frac{\partial f}{\partial y}\d y,
+ \]
+ or divide by another small quantity. e.g.\ to find the slope along the path $(x(t), y(t))$, we can divide by $\d t$ to obtain
+ \[
+ \frac{\d f}{\d t} = \frac{\partial f}{\partial x}\frac{\d x}{\d t} + \frac{\partial f}{\partial y}\frac{\d y}{\d t}.
+ \]
+\end{thm}
+
+If we pick the parameter $t$ to be the arclength $s$, we have
+\[
+ \frac{\d f}{\d s} = \frac{\partial f}{\partial x}\frac{\d x}{\d s} + \frac{\partial f}{\partial y}\frac{\d y}{\d s} = \left(\frac{\d x}{\d s}, \frac{\d y}{\d s}\right)\cdot \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right) = \hat{s}\cdot \nabla f,
+\]
+which is known as the directional derivative (cf.\ Chapter~\ref{sec:directional-derivative}).
+
+Alternatively, the path may also be given by $y = y(x)$. So $f = f(x, y(x))$. Then the slope along the path is
+\[
+ \frac{\d f}{\d x} = \left.\frac{\partial f}{\partial x}\right|_y + \frac{\partial f}{\partial y}{\frac{\d y}{\d x}}.
+\]
+The chain rule can also be used for the change of independent variables, e.g.\ change to polar coordinates $x = x(r, \theta)$, $y = y(r, \theta)$. Then
+\[
+ \left.\frac{\partial f}{\partial \theta}\right|_r = \left. \frac{\partial f}{\partial x}\right|_y \left.\frac{\partial x}{\partial \theta}\right|_r + \left.\frac{\partial f}{\partial y}\right|_x\left.\frac{\partial y}{\partial \theta}\right|_r.
+\]
+\subsection{Implicit differentiation}
+Consider the contour surface of a function $F(x, y, z)$ given by $F(x, y, z) = $ const. This implicitly defines $z = z(x, y)$. For example, if $F(x, y, z) = xy^2 + yz^2 + z^5x = 5$, then we can write $x = \frac{5 - yz^2}{y^2 + z^5}$ explicitly. Even though $z(x, y)$ cannot be found explicitly (it involves solving a quintic equation), the derivatives of $z(x, y)$ can still be found by differentiating $F(x, y, z) =$ const w.r.t.\ $x$ while holding $y$ constant. e.g.
+\begin{align*}
+ \frac{\partial }{\partial x}(xy^2 + yz^2 + z^5x) &= \frac{\partial }{\partial x}5\\
+ y^2 + 2yz\frac{\partial z}{\partial x} + z^5 + 5z^4x\frac{\partial z}{\partial x} &= 0\\
+ \frac{\partial z}{\partial x} &= -\frac{y^2 + z^5}{2yz + 5z^4x}
+\end{align*}
+In general, we can derive the following formula:
+\begin{thm}[Multi-variable implicit differentiation] Given an equation
+ \[
+ F(x, y, z) = c
+ \]
+ for some constant $c$, we have
+ \[
+ \left.\frac{\partial z}{\partial x}\right|_y = -\frac{\partial F/\partial x}{\partial F/\partial z}
+ \]
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ \d F &= \frac{\partial F}{\partial x}\d x + \frac{\partial F}{\partial y}\d y + \frac{\partial F}{\partial z}\d z\\
+ \left.\frac{\partial F}{\partial x}\right|_y &= \frac{\partial F}{\partial x}\left.\frac{\partial x}{\partial x}\right|_y + \frac{\partial F}{\partial y}\left.\frac{\partial y}{\partial x}\right|_y + \frac{\partial F}{\partial z}\left.\frac{\partial z}{\partial x}\right|_y = 0\\
+ \frac{\partial F}{\partial x} + \frac{\partial F}{\partial z}\left.\frac{\partial z}{\partial x}\right|_y &= 0\\
+ \left.\frac{\partial z}{\partial x}\right|_y &= -\frac{\partial F/\partial x}{\partial F/\partial z}\qedhere
+ \end{align*}
+\end{proof}
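Returning to the quintic example above, the formula can be checked numerically: solve $F(x, y, z) = 5$ for $z$ by bisection (a root-finding sketch; the bracket and test point are chosen for this example), estimate $\left.\partial z/\partial x\right|_y$ by finite differences, and compare with $-F_x/F_z$.

```python
def F(x, y, z):
    return x * y**2 + y * z**2 + z**5 * x

def solve_z(x, y, lo=0.5, hi=2.0):
    # Bisection for the root of F(x, y, z) = 5; F is increasing in z here.
    for _ in range(80):
        mid = (lo + hi) / 2
        if F(x, y, mid) < 5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x, y, h = 1.0, 1.0, 1e-5
z = solve_z(x, y)
dzdx_numeric = (solve_z(x + h, y) - solve_z(x - h, y)) / (2 * h)
dzdx_formula = -(y**2 + z**5) / (2 * y * z + 5 * z**4 * x)
print(dzdx_numeric, dzdx_formula)
```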
+\subsection{Differentiation of an integral with respect to a parameter}
+Consider a family of functions $f(x, c)$. Define $I(b, c) = \int_0^bf(x, c)\d x$. Then by the fundamental theorem of calculus, we have $\frac{\partial I}{\partial b} = f(b, c)$. On the other hand, we have
+\begin{align*}
+ \frac{\partial I}{\partial c} &= \lim_{\delta c\to 0}\frac{1}{\delta c}\left[\int_0^bf(x, c + \delta c)\d x - \int_0^b f(x, c)\;\d x\right]\\
+ &= \lim_{\delta c\to 0}\int_0^b\frac{f(x, c + \delta c) - f(x, c)}{\delta c}\;\d x\\
+ &= \int_0^b\lim_{\delta c\to 0}\frac{f(x, c + \delta c) - f(x, c)}{\delta c}\;\d x\\
+ &= \int_0^b\frac{\partial f}{\partial c}\;\d x
+\end{align*}
+In general, if $I(b(x), c(x)) = \int_0^{b(x)} f(y, c(x))\d y$, then by the chain rule, we have
+\[
+ \frac{\d I}{\d x} = \frac{\partial I}{\partial b}\frac{\d b}{\d x} + \frac{\partial I}{\partial c}\frac{\d c}{\d x} = f(b, c)b'(x) + c'(x) \int_0^b\frac{\partial f}{\partial c}\;\d y.
+\]
+So we obtain
+
+\begin{thm}[Differentiation under the integral sign]
+ \[
+ \frac{\d}{\d x} \int_{0}^{b(x)} f(y, c(x))\;\d y= f(b, c)b'(x) + c'(x)\int_0^b \frac{\partial f}{\partial c}\;\d y
+ \]
+\end{thm}
+This is sometimes a useful technique that allows us to perform certain otherwise difficult integrals, as you will see in the example sheets. However, here we will only have time for a boring example.
+
+\begin{eg}
+ Let $I = \int_0^1 e^{-\lambda x^2}\d x$. Then
+ \[
+ \frac{\d I}{\d \lambda} = \int_0^1-x^2e^{-\lambda x^2}\;\d x.
+ \]
+ If instead $I = \int_0^\lambda e^{-\lambda x^2}\d x$, then
+ \[
+ \frac{\d I}{\d \lambda} = e^{-\lambda^3} + \int_0^{\lambda}-x^2e^{-\lambda x^2}\;\d x.
+ \]
+\end{eg}
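Both formulas can be checked by finite differences (a numerical sketch; the quadrature resolution and the value of $\lambda$ are arbitrary choices). Here we verify the second, where $\lambda$ appears in both the limit and the integrand:

```python
import math

def integral(f, a, b, N=20000):
    # Simple midpoint-rule quadrature.
    dx = (b - a) / N
    return sum(f(a + (n + 0.5) * dx) for n in range(N)) * dx

I = lambda lam: integral(lambda x: math.exp(-lam * x**2), 0.0, lam)

lam, h = 0.9, 1e-5
numeric = (I(lam + h) - I(lam - h)) / (2 * h)   # central difference in lambda
formula = math.exp(-lam**3) + integral(lambda x: -x**2 * math.exp(-lam * x**2),
                                       0.0, lam)
print(numeric, formula)
```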
+
+\section{First-order differential equations}
+A differential equation is an equation that involves derivatives, such as $x^2\frac{\d y}{\d x} + 2y = 0$. Unlike regular equations where the solution is a number, the solution to a differential equation is a function that satisfies the equation. In this chapter, we will look at \emph{first-order differential equations}, where only first derivatives are involved.
+
+\subsection{The exponential function}
+Often, the solutions to differential equations involve the exponential function. Consider a function $f(x) = a^x$, where $a>0$ is constant.
+\begin{center}
+ \begin{tikzpicture}[yscale = 0.15]
+ \draw [->] (-2, 0) -- (3.5, 0) node [right] {$x$};
+ \draw [->] (0, -5) -- (0, 25) node [above] {$y$};
+
+ \node [anchor = north east] at (0, 0) {$O$};
+ \draw [mblue, semithick, domain=-2:3] plot (\x, {exp(\x)});
+ \node [anchor = south east] at (0, 1) {1};
+ \node [circ] at (0, 1) {};
+ \end{tikzpicture}
+\end{center}
+The derivative of this function is given by
+\begin{align*}
+ \frac{\d f}{\d x} &= \lim_{h\to 0}\frac{a^{x+h}-a^x}{h}\\
+ &= a^x\lim_{h\to 0}\frac{a^h-1}{h}\\
+ &= \lambda a^x\\
+ &= \lambda f(x)
+\end{align*}
+where $\displaystyle \lambda = \lim_{h\to 0}\frac{a^h-1}{h} = f'(0) = $ const. So the derivative of an exponential function is a multiple of the original function. In particular,
+
+\begin{defi}[Exponential function]
+ $\exp(x) = e^x$ is the unique function $f$ satisfying $f'(x) = f(x)$ and $f(0) = 1$.
+
+ We write the inverse function as $\ln x$ or $\log x$.
+\end{defi}
+If $y = a^x = e^{x\ln a}$, then $y' = e^{x\ln a}\ln a = a^x\ln a$. So $\lambda = \ln a$.
+
+Using this property, we find that the value of $e$ is given by
+\[
+ e=\lim_{k\to \infty} \left(1 + \frac{1}{k}\right)^k \approx 2.718281828\cdots.
+\]
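This limit converges quite slowly, as a quick computation shows (the sample values of $k$ are arbitrary choices):

```python
# The limit defining e, evaluated at a few increasing values of k.
approx = [(1 + 1 / k) ** k for k in (10, 1000, 100000)]
print(approx)
```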
+The importance of the exponential function lies in it being an \emph{eigenfunction} of the differential operator.
+\begin{defi}[Eigenfunction]
+ An \emph{eigenfunction} under the differential operator is a function whose functional form is unchanged by the operator. Only its magnitude is changed. i.e.
+ \[
+ \frac{\d f}{\d x} = \lambda f
+ \]
+\end{defi}
+\begin{eg}
+ $e^{mx}$ is an eigenfunction since $\frac{\d }{\d x}e^{mx} = me^{mx}$.
+\end{eg}
+
+\subsection{Homogeneous linear ordinary differential equations}
+Before we start, we will first define the terms used in the title. These are useful criteria for categorizing differential equations.
+
+\begin{defi}[Linear differential equation]
+ A differential equation is \emph{linear} if the dependent variable ($y$, $y'$, $y''$ etc.) appears only linearly.
+\end{defi}
+
+\begin{defi}[Homogeneous differential equation]
+ A differential equation is \emph{homogeneous} if $y=0$ is a solution.
+\end{defi}
+
+\begin{defi}[Differential equation with constant coefficients]
+ A differential equation has \emph{constant coefficients} if the independent variable $x$ does not appear explicitly.
+\end{defi}
+
+\begin{defi}[First-order differential equation]
+ A differential equation is \emph{first-order} if only first derivatives are involved.
+\end{defi}
+
+\begin{thm}
+ Any linear, homogeneous, ordinary differential equation with constant coefficients has solutions of the form $e^{mx}$.
+\end{thm}
+This theorem is evident when we consider an example.
+\begin{eg}
+ Consider $5\frac{\d y}{\d x} - 3y = 0$. Try $y = e^{mx}$. Then
+ \begin{align*}
+ \frac{\d y}{\d x} &= me^{mx}\\
+ 5me^{mx} - 3e^{mx} &= 0
+ \end{align*}
+ Since this must hold for all values of $x$, there is some value of $x$ for which $e^{mx} \not= 0$, so we can divide by $e^{mx}$ (in this case the justification is unnecessary, since $e^{mx}$ is never equal to 0; however, we would need it if we were dividing by, say, $x^m$). Thus $5m - 3 = 0$ and $m = 3/5$. So $y = e^{3x/5}$ is a solution.
+
+ Because the equation is linear and homogeneous, any multiple of a solution is also a solution. Therefore $y = Ae^{3x/5}$ is a solution for any value of $A$.
+
+ But is this the most general form of the solution? One can show that an $n^\text{th}$-order linear differential equation has $n$ and only $n$ independent solutions. So $y = Ae^{3x/5}$ is indeed the most general solution.
+\end{eg}
+
+We can determine $A$ by applying a given boundary condition.
+
+\subsubsection*{Discrete equations}
+Suppose that we are given the equation $5y' - 3y = 0$ with boundary condition $y = y_0$ at $x = 0$. This gives a unique function $y$. We can approximate this by considering discrete steps of length $h$ between $x_n$ and $x_{n+1}$. Using the simple Euler numerical scheme, we can approximate the equation by
+\[
+ 5\frac{y_{n+1} - y_n}{h} - 3y_n \approx 0.
+\]
+Rearranging the terms, we have $y_{n+1} \approx (1 + \frac{3}{5}h)y_n$. Applying the relation successively, we have
+\begin{align*}
+ y_n &= \left(1 + \frac{3}{5}h\right)y_{n - 1}\\
+ &= \left(1 + \frac{3}{5}h\right)\left(1 + \frac{3}{5}h\right)y_{n - 2}\\
+ &\;\;\vdots\\
+ &= \left(1 + \frac{3}{5}h\right)^ny_0
+\end{align*}
+For each given value of $x$, choose $h = x/n$, so
+\[
+ y_n = y_0\left(1 + \frac{3}{5}(x/n)\right)^n.
+\]
+Taking the limit as $n\to \infty$, we have
+\[
+ y(x) = \lim_{n\to \infty} y_0\left(1 + \frac{3x/5}{n}\right)^n = y_0 e^{3x/5},
+\]
+in agreement with the solution of the differential equation.
+\begin{center}
+ \begin{tikzpicture}[yscale = 0.15]
+ \draw [->] (-2, 0) -- (3.5, 0) node [right] {$x$};
+ \draw [->] (0, -5) -- (0, 25) node [above] {$y$};
+
+ \draw [mblue, semithick, domain=-2:3] plot (\x, {exp(\x)}) node [right] {solution of diff. eq.};
+ \node [anchor = south east] at (0, 1) {1};
+ \node [circ] at (0, 1) {};
+
+ \draw [mgreen, semithick] (0, 1) -- (1, 2) -- (2, 4.7) -- (3, 12) node [right] {discrete approximation};
+ \draw [dashed] (1, 2) -- (1, 0);
+ \draw [dashed] (2, 4.7) -- (2, 0);
+ \draw [dashed] (3, 12) -- (3, 0);
+
+ \node [anchor = north east] at (0, 0) {$O$};
+ \node [anchor = north west] at (0, 0) {$x_0$};
+ \node [below] at (1, 0) {$x_1$};
+ \node [below] at (2, 0) {$x_2$};
+ \node [below] at (3, 0) {$x_3$};
+ \end{tikzpicture}
+\end{center}
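The convergence of the Euler iterates to $y_0 e^{3x/5}$ can be illustrated numerically; a minimal sketch (names our own):

```python
import math

def euler(y0: float, x: float, n: int) -> float:
    """Euler scheme for 5y' - 3y = 0: y_{k+1} = (1 + (3/5)h) y_k, h = x/n."""
    h = x / n
    y = y0
    for _ in range(n):
        y *= 1 + 0.6 * h
    return y

exact = math.exp(0.6 * 2.0)  # y(2) for y_0 = 1
print(euler(1.0, 2.0, 10), euler(1.0, 2.0, 10000), exact)
```

As $n$ grows (so $h = x/n$ shrinks), the discrete approximation approaches the exact solution.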
+
+\subsubsection*{Series solution}
+We can also try to find a solution in the form of a Taylor series $y = \sum\limits_{n=0}^\infty a_nx^n$. We have $y' = \sum a_nnx^{n-1}$. Substituting these into our equation
+\begin{align*}
+ 5y' - 3y &= 0\\
+ 5(xy') - 3x(y) &= 0\\
+ \sum a_n(5n - 3x)x^n &= 0
+\end{align*}
+Consider the coefficient of $x^n$: $5n a_n - 3 a_{n-1} = 0$. Since this holds for all values of $n$, when $n = 0$, we get $0a_0 = 0$. This tells us that $a_0$ is arbitrary. If $n>0$, then
+\[
+ a_n = \frac{3}{5n}a_{n-1} = \frac{3^2}{5^2}\frac{1}{n(n-1)}a_{n-2} = \cdots = \left(\frac{3}{5}\right)^n \frac{1}{n!}a_0.
+\]
+Therefore we have
+\[
+ y = a_0\sum_{n = 0}^\infty \left(\frac{3x}{5}\right)^n\frac{1}{n!} \left[= a_0 e^{3x/5}\right].
+\]
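The partial sums of this series can be checked against the closed form; a small sketch with $a_0 = 1$ (names our own):

```python
import math

def series_solution(x: float, a0: float = 1.0, terms: int = 30) -> float:
    """Partial sum of y = a0 * sum_{n>=0} (3x/5)^n / n!."""
    total, term = 0.0, a0
    for n in range(terms):
        total += term
        term *= 0.6 * x / (n + 1)
    return total

print(series_solution(2.0), math.exp(1.2))
```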
+
+\subsection{Forced (inhomogeneous) equations}
+Recall that a homogeneous equation is an equation like $5y' - 3y = 0$, with no $x$ or constant terms floating around. A forced, or inhomogeneous, equation is one that is not homogeneous. For example, $5y' - 3y = 10$ or $5y' - 3y = x^2$ are forced equations. We call the ``extra'' terms $10$ and $x^2$ the \emph{forcing terms}.
+
+\subsubsection{Constant forcing}
+
+\begin{eg}
+ Consider $5y' - 3y = 10$. We can spot that there is an equilibrium (constant) solution $y = y_p = -\frac{10}{3}$ with $y_p' = 0$.
+
+ The particular solution $y_p$ is a solution of the ODE. Now suppose the general solution is $y = y_p + y_c$. We see that $5y_c' - 3y_c = 0$. So $y_c$ satisfies the homogeneous equation we already solved, and
+ \[
+ y = -\frac{10}{3} + Ae^{3x/5}.
+ \]
+ Note that any boundary conditions to determine $A$ must be applied to the full solution $y$ and not the complementary function $y_c$.
+\end{eg}
+
+This is the general method of solving forced equations. We first find one particular solution to the problem, often via educated guesses. Then we solve the homogeneous equation to get a general complementary solution. Then the general solution to the full equation is the sum of the particular solution and the general complementary solution.
+
+\subsubsection{Eigenfunction forcing}
+This is the case when the forcing term is an eigenfunction of the differential operator.
+\begin{eg}
+ In a radioactive rock, isotope A decays into isotope B at a rate proportional to the number $a$ of remaining nuclei A, and B also decays at a rate proportional to the number $b$ of remaining nuclei B. Determine $b(t)$.
+
+ We have
+ \begin{align*}
+ \frac{\d a}{\d t} &= -k_a a\\
+ \frac{\d b}{\d t} &= k_a a - k_b b.
+ \end{align*}
+ Solving the first equation, we obtain $a = a_0e^{-k_at}$. Then we have
+ \[
+ \frac{\d b}{\d t} + k_b b = k_aa_0e^{-k_at}.
+ \]
+ We usually put the variables involving $b$ on the left hand side and the others on the right. The right-hand term $k_aa_0e^{-k_at}$ is the \emph{forcing term}.
+
+ Note that the forcing term is an eigenfunction of the differential operator on the LHS. This suggests trying a particular integral $b_p = Ce^{-k_at}$. Substituting it in, we obtain
+ \begin{align*}
+ -k_aC + k_bC &= k_a a_0\\
+ C &= \frac{k_a}{k_b - k_a}a_0.
+ \end{align*}
+ Then write $b = b_p + b_c$. We get $b'_c + k_bb_c = 0$ and $b_c = De^{-k_bt}$. All together, we have the general solution
+ \[
+ b = \frac{k_a}{k_b - k_a}a_0 e^{-k_at} + De^{-k_bt}.
+ \]
+ Assume the following boundary condition: $b = 0$ when $t = 0$, in which case we can find
+ \[
+ b = \frac{k_a}{k_b - k_a}a_0\left(e^{-k_at} - e^{-k_bt}\right).
+ \]
+ \begin{center}
+ \begin{tikzpicture}[scale = 1.5]
+ \draw [->] (0, 0) -- (5, 0) node [right] {$t$};
+ \draw [->] (0, 0) -- (0, 3.5) node [above] {$y$};
+ \node [anchor = north east] {$O$};
+ \draw [domain = 0:4.5, mgreen, semithick, samples=50] plot (\x, {3*exp(-\x)});
+ \draw [domain = 0:4.5, mblue, semithick, samples=50] plot (\x, {8*(exp(-\x) - exp(-1.3*\x))});
+ \node [mgreen] at (0.6, 2) {$a$};
+ \node [mblue] at (0.6, 0.9) {$b$};
+ \end{tikzpicture}
+ \end{center}
+ The isotope ratio is
+ \[
+ \frac{b}{a} = \frac{k_a}{k_b - k_a}\left[1 - e^{(k_a - k_b)t}\right].
+ \]
+ So given the current ratio $b/a$, with laboratory determined rates $k_a$ and $k_b$, we can determine the value of $t$, i.e.\ the age of the rock.
+\end{eg}
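As a sanity check, the closed form for $b(t)$ can be compared with a direct numerical integration of the ODE; a minimal sketch with illustrative rate constants (all names and values our own, not from the notes):

```python
import math

KA, KB, A0 = 1.0, 1.3, 1.0  # illustrative rate constants

def b_exact(t: float) -> float:
    """Closed form b(t) = ka*a0/(kb - ka) * (exp(-ka t) - exp(-kb t))."""
    return KA * A0 / (KB - KA) * (math.exp(-KA * t) - math.exp(-KB * t))

def b_euler(t: float, n: int = 200000) -> float:
    """Euler integration of b' = ka*a0*exp(-ka s) - kb*b with b(0) = 0."""
    h, b, s = t / n, 0.0, 0.0
    for _ in range(n):
        b += h * (KA * A0 * math.exp(-KA * s) - KB * b)
        s += h
    return b

print(b_exact(2.0), b_euler(2.0))
```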
+
+\subsection{Non-constant coefficients}
+Consider the general form of equation
+\[
+ a(x)y' + b(x)y = c(x).
+\]
+Divide by $a(x)$ to get the standard form
+\[
+ y' + p(x) y = f(x).
+\]
+We solve this by multiplying by an integrating factor $\mu (x)$ to obtain $(\mu)y' + (\mu p)y = \mu f$.
+
+We want to choose a $\mu$ such that the left hand side is equal to $(\mu y)'$. By the product rule, we want $\mu p = \mu'$, i.e.
+\begin{align*}
+ p &= \frac{1}{\mu}\frac{\d\mu }{\d x}\\
+ \int p\; \d x &= \int \frac{1}{\mu}\frac{\d \mu}{\d x} \; \d x\\
+ &= \int \frac{1}{\mu}\; \d \mu\\
+ &= \ln \mu (+ C)\\
+ \mu &= \exp\left(\int p\; \d x\right)
+\end{align*}
+Then by construction, we have $(\mu y)' = \mu f$ and thus
+\[
+ y = \frac{\int \mu f\;\d x}{\mu},\text{ where } \mu = \exp\left(\int p\; \d x\right)
+\]
+\begin{eg}
+ Consider $xy' + (1 - x)y = 1$. To obtain it in standard form, we have $y' + \frac{1 - x}{x} y = \frac{1}{x}$.
+ We have $\mu = \exp\left(\int (\frac{1}{x} - 1)\; \d x\right) = e^{\ln x - x} = xe^{-x}$. Then
+ \begin{align*}
+ y &= \frac{\int xe^{-x}\frac{1}{x}\;\d x}{xe^{-x}}\\
+ &= \frac{-e^{-x} + C}{xe^{-x}}\\
+ &= \frac{-1}{x} + \frac{C}{x}e^x
+ \end{align*}
+ Suppose that we have the boundary condition that $y$ is finite at $x = 0$. Since we have $y = \frac{Ce^x - 1}{x}$, we have to ensure that $Ce^x - 1\to 0$ as $x\to 0$. Thus $C = 1$, and by L'H\^opital's rule, $y\to 1$ as $x\to 0$.
+\end{eg}
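One can verify numerically that the solution with $C = 1$ satisfies the original equation; a small sketch using a central-difference derivative (names our own):

```python
import math

def y(x: float) -> float:
    """Candidate solution y = (e^x - 1)/x (i.e. C = 1) of x y' + (1 - x) y = 1."""
    return (math.exp(x) - 1) / x

def residual(x: float, h: float = 1e-6) -> float:
    """x y'(x) + (1 - x) y(x) - 1, with y' estimated by a central difference."""
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return x * yp + (1 - x) * y(x) - 1

# The residual should be (numerically) zero at any sample point.
for x in (0.5, 1.0, 3.0):
    print(x, residual(x))
```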
+
+\subsection{Non-linear equations}
+In general, a first-order equation has the form
+\[
+ Q(x, y)\frac{\d y}{\d x} + P(x, y) = 0.
+\]
+(this is not exactly the most general form, since theoretically we can have powers of $y'$ or even other complicated functions of $y'$, but we will not consider that case here).
+\subsubsection{Separable equations}
+
+\begin{defi}[Separable equation]
+ A first-order differential equation is \emph{separable} if it can be manipulated into the following form:
+ \[
+ q(y) \;\d y = p(x) \;\d x,
+ \]
+ in which case the solution can be found by integration
+ \[
+ \int q(y)\;\d y = \int p(x)\; \d x.
+ \]
+\end{defi}
+This is the easy kind.
+
+\begin{eg}
+ \begin{align*}
+ (x^2y - 3y)\frac{\d y}{\d x} - 2xy^2 &= 4x\\
+ \frac{\d y}{\d x} &= \frac{4x + 2xy^2}{x^2y - 3y}\\
+ &= \frac{2x(2 + y^2)}{y(x^2 - 3)}\\
+ \frac{y}{2 + y^2}\; \d y &= \frac{2x}{x^2 - 3}\;\d x\\
+ \int \frac{y}{2 + y^2}\; \d y &= \int\frac{2x}{x^2 - 3}\;\d x\\
+ \frac{1}{2}\ln(2 + y^2) &= \ln (x^2 - 3) + C\\
+ \ln \sqrt{2 + y^2} &= \ln A(x^2 - 3)\\
+ \sqrt{y^2 + 2} &= A(x^2 - 3)
+ \end{align*}
+\end{eg}
+\subsubsection{Exact equations}
+\begin{defi}[Exact equation]
+ $Q(x, y)\frac{\d y}{\d x} + P(x, y) = 0$ is an \emph{exact equation} iff the differential form $Q(x, y)\;\d y + P(x, y)\;\d x$ is \emph{exact}, i.e.\ there exists a function $f(x, y)$ for which
+ \[
+ \d f = Q(x, y)\;\d y + P(x, y)\;\d x
+ \]
+\end{defi}
+
+If $P(x, y)\;\d x + Q(x, y)\;\d y$ is an exact differential of $f$, then $\d f = P(x, y) \;\d x + Q(x, y)\;\d y$. But by the chain rule, $\d f = \frac{\partial f}{\partial x}\d x + \frac{\partial f}{\partial y}\d y$ and this equality holds for any displacements $\d x, \d y$. So
+\[
+ \frac{\partial f}{\partial x} = P,\quad\frac{\partial f}{\partial y} = Q.
+\]
+From this we have
+\[
+ \frac{\partial^2 f}{\partial y\partial x} = \frac{\partial P}{\partial y},\quad\frac{\partial^2 f}{\partial x \partial y} = \frac{\partial Q}{\partial x}.
+\]
+We know that the two mixed 2nd derivatives are equal. So
+\[
+ \frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}.
+\]
+The converse is not necessarily true. Even if this equation holds, the differential need not be exact. However, it is true if the domain is \emph{simply-connected}.
+\begin{defi}[Simply-connected domain]
+ A domain $\mathcal{D}$ is simply-connected if it is connected and any closed curve in $\mathcal{D}$ can be shrunk to a point in $\mathcal{D}$ without leaving $\mathcal{D}$.
+\end{defi}
+
+\begin{eg}
+ A disc in 2D is simply-connected. A disc with a ``hole'' in the middle is not simply-connected because a loop around the hole cannot be shrunk into a point. Similarly, a sphere in 3D is simply-connected but a torus is not.
+\end{eg}
+
+\begin{thm}
+ If $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}$ throughout a simply-connected domain $\mathcal{D}$, then $P\;\d x + Q\; \d y$ is an exact differential of a single-valued function in $\mathcal{D}$.
+\end{thm}
+If the equation is exact, then the solution is simply $f = $ constant, and we can find $f$ by integrating $\frac{\partial f}{\partial x} = P$ and $\frac{\partial f}{\partial y} = Q$.
+
+\begin{eg}
+ \[
+ 6y(y - x)\frac{\d y}{\d x} + (2x - 3y^2) = 0.
+ \]
+ We have
+ \[
+ P = 2x - 3y^2, \quad Q = 6y(y - x).
+ \]
+ Then $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x} = -6y$. So the differential form is exact. We now have
+ \[
+ \frac{\partial f}{\partial x} = 2x - 3y^2, \quad \frac{\partial f}{\partial y} = 6y^2 - 6xy.
+ \]
+ Integrating the first equation, we have
+ \[
+ f = x^2 - 3xy^2 + h(y).
+ \]
+ Note that since it was a partial derivative w.r.t.\ $x$ holding $y$ constant, the ``constant'' term can be any function of $y$. Differentiating the derived $f$ w.r.t.\ $y$, we have
+ \[
+ \frac{\partial f}{\partial y} = -6xy + h'(y).
+ \]
+ Thus $h'(y) = 6y^2$ and $h(y) = 2y^3 + C$, and
+ \[
+ f = x^2 - 3xy^2 + 2y^3 + C.
+ \]
+ Since the original equation was $\d f = 0$, we have $f = $ constant. Thus the final solution is
+ \[
+ x^2 - 3xy^2 + 2y^3 = C.
+ \]
+\end{eg}
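The relations $\partial f/\partial x = P$ and $\partial f/\partial y = Q$ for the $f$ found above can be checked with finite differences; a minimal sketch (names and the sample point are our own):

```python
def P(x: float, y: float) -> float:
    return 2 * x - 3 * y ** 2

def Q(x: float, y: float) -> float:
    return 6 * y * (y - x)

def f(x: float, y: float) -> float:
    return x ** 2 - 3 * x * y ** 2 + 2 * y ** 3

def grad_f(x: float, y: float, h: float = 1e-6):
    """Central-difference estimate of (df/dx, df/dy)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

# At an arbitrary sample point, grad f should reproduce (P, Q).
fx, fy = grad_f(1.2, -0.7)
print(fx, P(1.2, -0.7), fy, Q(1.2, -0.7))
```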
+
+\subsection{Solution curves (trajectories)}
+\begin{eg}
+ Consider the first-order equation
+ \[
+ \frac{\d y}{\d t} = t(1 - y^2).
+ \]
+ We can solve it to obtain
+ \begin{align*}
+ \frac{\d y}{1 - y^2} &= t\; \d t\\
+ \frac{1}{2}\ln\frac{1 + y}{1 - y} &= \frac{1}{2}t^2 + C\\
+ \frac{1 + y}{1 - y} &= Ae^{t^2}\\
+ y &= \frac{A - e^{-t^2}}{A + e^{-t^2}}
+ \end{align*}
+ We can plot the solution for different values of $A$ and obtain the following graph:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {$t$};
+ \draw [->] (0, -2.2) -- (0, 2.2) node [above] {$y$};
+
+ \draw (0, -1) node [left] {$-1$} -- (5, -1);
+ \draw (0, 1) node [left] {$1$} -- (5, 1);
+
+ \foreach \t in {-2,...,2} {
+ \pgfmathsetmacro{\a}{0.33333 * \t};
+ \pgfmathsetmacro{\b}{0.95 + .025 * \t};
+ \draw [semithick, mred] (0, \a) .. controls (2, \a) and (0, \b - 0.01) .. (4, \b - 0.01) -- (5, \b);
+ }
+ \draw [semithick, mred] (0, -1.33333) .. controls (1, -1.33333) and (2, -1.5) .. (2, -2.2);
+ \draw [semithick, mred] (0, 1.66666) .. controls (1, 1.66666) and (2, 1.1) .. (4, 1.1) -- (5, 1.1);
+ \draw [semithick, mred] (0, 1.33333) .. controls (1, 1.33333) and (2, 1.05) .. (4, 1.05) -- (5, 1.05);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+But can we understand the nature of the family of solutions without solving the equation? Can we sketch the graph without solving it?
+
+We can spot that $y = \pm 1$ are two constant solutions and we can plot them first. We also note (and mark on our graph) that $y' = 0$ at $t = 0$ for any $y$.
+
+Then notice that for $t > 0$, $y' > 0$ if $-1 < y < 1$. Otherwise, $y' < 0$.
+
+Now we can find \emph{isoclines}, which are curves along which $\frac{\d y}{\d t}$ (i.e.\ $f$) is constant: $t(1 - y^2) = D$ for some constant $D$. Then $y^2 = 1 - D/t$. After marking a few isoclines, we can sketch the approximate form of our solution:
+\begin{center}
+ \begin{tikzpicture}
+ \def\arrowlength{0.3}
+ \tikzset{isocline/.style = {
+ mgreen,
+ decoration = {
+ markings,
+ mark=between positions 0.05 and 1 step 0.1 with {
+ \pgftransformresetnontranslations
+ \pgftransformrotate{#1}
+ \draw [black, arrows={-latex'}] (-\arrowlength/2, 0) -- +(\arrowlength, 0);
+ }
+ },
+ postaction={decorate}
+ }};
+
+ \draw [->] (-1, 0) -- (5, 0) node [right] {$t$};
+ \draw [->] (0, -2.2) -- (0, 2.2) node [above] {$y$};
+
+ \draw (0, -1) node [left] {$-1$} -- (5, -1);
+ \draw (0, 1) node [left] {$1$} -- (5, 1);
+
+ \draw [isocline=55] (5, 0.8) .. controls (-0.5, 0.4) and (-0.5, -0.4) .. (5, -0.8);
+ \draw [isocline=-70] (0.3, 2.2) parabola [bend at end] (5, 1.2);
+ \draw [isocline=-70] (0.3, -2.2) parabola [bend at end] (5, -1.2);
+
+ \foreach \y in {-1.66666, -1.33333, -0.66666, -0.33333, 0.33333, 0.66666, 1.33333, 1.66666} {
+ \draw [arrows={-latex'}] (0, \y) -- +(\arrowlength, 0);
+ }
+
+ \draw [semithick, mred] (0, -0.66666) .. controls (2, -0.66666) and (0, 0.9) .. (4, 0.9) -- (5, 0.9);
+
+ \draw [semithick, mred] (0, -1.33333) .. controls (1, -1.33333) and (2, -1.5) .. (2, -2.2);
+ \draw [semithick, mred] (0, 1.66666) .. controls (1, 1.66666) and (2, 1.1) .. (4, 1.1) -- (5, 1.1);
+ \end{tikzpicture}
+\end{center}
+In general, we sketch the graph of a differential equation
+\[
+ \frac{\d y}{\d t} = f(t, y)
+\]
+by locating constant solutions (and determining stability: see below) and isoclines.
+
+\subsection{Fixed (equilibrium) points and stability}
+\begin{defi}[Equilibrium/fixed point]
+ An \emph{equilibrium point} or a \emph{fixed point} of a differential equation is a constant solution $y = c$. This corresponds to $\frac{\d y}{\d t} = 0$ for all $t$.
+\end{defi}
+These are usually the easiest solutions to find, since we are usually given an expression for $\frac{\d y}{\d t}$ and we can simply equate it to zero.
+
+\begin{defi}[Stability of fixed point]
+ An equilibrium is \emph{stable} if whenever $y$ is deviated slightly from the constant solution $y = c$, $y \to c$ as $t \to \infty$. An equilibrium is \emph{unstable} if the deviation grows as $t \to \infty$.
+\end{defi}
+
+The objective of this section is to study how to find equilibrium points and their stability.
+\begin{eg}
+ Referring to the differential equation above ($\dot y = t (1 - y^2)$), we see that the solutions converge towards $y = 1$, and this is a stable fixed point. They diverge from $y = -1$, and this is an unstable fixed point.
+\end{eg}
+
+\subsubsection{Perturbation analysis}
+Perturbation analysis is used to determine stability. Suppose $y = a$ is a fixed point of $\frac{\d y}{\d t} = f(y, t)$, so $f(a, t) = 0$. Write $y = a + \varepsilon(t)$, where $\varepsilon(t)$ is a small perturbation from $y = a$. We will later assume that $\varepsilon$ is arbitrarily small. Putting this into the differential equation, we have
+\begin{align*}
+ \frac{\d \varepsilon}{\d t} = \frac{\d y}{\d t} &= f(a + \varepsilon, t)\\
+ &= f(a, t) + \varepsilon\frac{\partial f}{\partial y}(a, t) + O(\varepsilon^2)\\
+ &= \varepsilon\frac{\partial f}{\partial y}(a, t) + O(\varepsilon^2)
+\end{align*}
+Note that this is a Taylor expansion valid when $\varepsilon \ll 1$. Thus $O(\varepsilon^2)$ can be neglected and
+\[
+ \frac{\d \varepsilon}{\d t} \cong \varepsilon\frac{\partial f}{\partial y}.
+\]
+We can use this to study how the perturbation grows with time.
+
+This approximation is called a linearization of the differential equation.
+\begin{eg}
+ Using the example $\dot y = t(1 - y^2)$ above, we have
+ \[
+ \frac{\partial f}{\partial y} = -2yt =
+ \begin{cases}
+ -2t & \text{ at } y = 1\\
+ 2t & \text{ at } y = -1
+ \end{cases}.
+ \]
+ At $y = 1$, $\dot{\varepsilon} = -2t\varepsilon$ and $\varepsilon = \varepsilon_0 e^{-t^2}$. Since $\varepsilon \to 0$ as $t \to \infty$, $y \to 1$. So $y = 1$ is a stable fixed point.
+
+ On the other hand, if we consider $y = -1$, then $\dot\varepsilon = 2t\varepsilon$ and $\varepsilon = \varepsilon_0 e^{t^2}$. Since $\varepsilon \to \infty$ as $t\to \infty$, $y = -1$ is unstable.
+
+ Technically $\varepsilon \to \infty$ is not a correct statement, since the approximation used is only valid for small $\varepsilon$. But we can be sure that the perturbation grows (even if not $\to \infty$) as $t$ increases.
+\end{eg}
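The stability conclusions can be confirmed by integrating the full equation numerically: a trajectory started just below $y = 1$ stays near it, while one started just above $y = -1$ moves away (towards $y = 1$). A minimal Euler sketch (names and parameters our own):

```python
def euler_y(y0: float, T: float = 4.0, n: int = 40000) -> float:
    """Euler integration of y' = t (1 - y^2) from t = 0 to t = T."""
    h, y, t = T / n, y0, 0.0
    for _ in range(n):
        y += h * t * (1 - y * y)
        t += h
    return y

# Both trajectories end up near the stable fixed point y = 1.
print(euler_y(0.9), euler_y(-0.9))
```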
+
+\subsubsection{Autonomous systems}
+Often, the mechanics of a system does not change with time. So $\dot y$ is only a function of $y$ itself. We call this an \emph{autonomous system}.
+
+\begin{defi}[Autonomous system]
+ An \emph{autonomous system} is a system in the form $\dot y = f(y)$, where the derivative is only (explicitly) dependent on $y$.
+\end{defi}
+
+Near a fixed point $y = a$, where $f(a) = 0$, write $y = a + \varepsilon(t)$. Then $\dot \varepsilon = \varepsilon\frac{\d f}{\d y}(a) = k\varepsilon$ for some constant $k$. Then $\varepsilon = \varepsilon_0 e^{kt}$. The stability of the system then depends solely on the sign of $k$.
+
+\begin{eg}
+ Consider a chemical reaction NaOH + HCl $\rightarrow$ H$_2$O + NaCl. We have
+ \begin{center}
+ \begin{tabular}{lccccccc}
+ \toprule
+ & NaOH & + & HCl & $\rightarrow$ & H$_2$O & + & NaCl \\
+ \midrule
+ Number of molecules & $a$ & & $b$ & & $c$ & & $c$ \\
+ Initial number of molecules & $a_0$ & & $b_0$ & & $0$ & & $0$ \\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ If the reaction is in dilute solution, then the reaction rate is proportional to $ab$. Thus
+ \begin{align*}
+ \frac{\d c}{\d t} &= \lambda ab\\
+ &= \lambda (a_0 - c)(b_0 - c)\\
+ &= f(c)
+ \end{align*}
+ We can plot $\frac{\d c}{\d t}$ as a function of $c$, and wlog $a_0 < b_0$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (5, 0) node [right] {$c$};
+ \draw [->] (0, -1) -- (0, 4) node [above] {$\dot c$};
+ \draw [mblue, semithick] (0.5, 3) parabola bend (2.5, -1) (4.5, 3);
+ \node at (1.5, 0) [anchor = north east] {$a_0$};
+ \node at (3.5, 0) [anchor = north west] {$b_0$};
+ \end{tikzpicture}
+ \end{center}
+ We can also plot a \emph{phase portrait}, which is a plot of the dependent variable only, where arrows show the evolution with time,
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.5] (0, 0) -- (2, 0);
+ \draw [->-=0.6] (4, 0) -- (2, 0);
+ \draw [->-=0.5, ->] (4, 0) -- (6, 0) node [right] {$c$};
+ \draw (2, 0.1) -- (2, -0.1) node [below] {$a_0$};
+ \draw (4, 0.1) -- (4, -0.1) node [below] {$b_0$};
+ \end{tikzpicture}
+ \end{center}
+ We can see that the fixed point $c = a_0$ is stable while $c = b_0$ is unstable.
+
+ We can also solve the equation explicitly to obtain
+ \[
+ c = \frac{a_0b_0[1 - e^{-(b_0 - a_0)\lambda t}]}{b_0 - a_0e^{-\lambda(b_0-a_0)t}}.
+ \]
+ Of course, parts of this picture make no sense physically, since we cannot have negative numbers of molecules --- clearly we cannot have, say, $-1$ mol of NaOH. Only solutions with $c \leq a_0$ are physically attainable, and any such solution tends towards $c = a_0$.
+\end{eg}
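The explicit solution can be sanity-checked against the ODE and its limiting behaviour; a minimal sketch with illustrative values $a_0 = 1$, $b_0 = 2$, $\lambda = 1$ (our own choices, not from the notes):

```python
import math

A0, B0, LAM = 1.0, 2.0, 1.0  # illustrative values

def c(t: float) -> float:
    """Closed form c(t) for dc/dt = lam (a0 - c)(b0 - c), c(0) = 0."""
    e = math.exp(-(B0 - A0) * LAM * t)
    return A0 * B0 * (1 - e) / (B0 - A0 * e)

def residual(t: float, h: float = 1e-6) -> float:
    """c'(t) - lam (a0 - c)(b0 - c), with c' from a central difference."""
    cp = (c(t + h) - c(t - h)) / (2 * h)
    return cp - LAM * (A0 - c(t)) * (B0 - c(t))

# c starts at 0, tends to a0, and satisfies the ODE.
print(c(0.0), c(10.0), residual(1.0))
```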
+
+\subsubsection{Logistic Equation}
+The logistic equation is a simple model of population dynamics. Suppose we have a population of size $y$. It has a birth rate $\alpha y$ and a death rate $\beta y$. With this model, we obtain
+\begin{align*}
+ \frac{\d y}{\d t} &= (\alpha - \beta)y\\
+ y &= y_0 e^{(\alpha - \beta)t}
+\end{align*}
+Our population increases or decreases exponentially depending on whether the birth rate exceeds death rate or vice versa.
+
+However, in reality, there is fighting for limited resources. The probability of some piece of food (resource) being found is $\propto y$. The probability of the same piece of food being found by two individuals is $\propto y^2$. If food is scarce, they fight (to the death), so death rate due to fighting (competing) is $\gamma y^2$ for some $\gamma$. So
+\begin{align*}
+ \frac{\d y}{\d t} &= (\alpha - \beta)y - \gamma y^2\\
+ \frac{\d y}{\d t}&= ry \left(1 - \frac{y}{Y}\right),
+\end{align*}
+where $r = \alpha - \beta$ and $Y = r/\gamma$. This is the differential logistic equation. Note that it is separable and can be solved explicitly.
+
+However, we find the phase portrait instead. If $r = \alpha - \beta > 0$, then the graph of $f(y) = ry\left(1 - \frac{y}{Y}\right)$ is a downward parabola, and we see that $Y$ is a stable fixed point.
+\begin{center}
+ \begin{tikzpicture}[yscale = 1.5]
+ \draw [->] (-0.5, 0) -- (4.5, 0) node [right] {$y$};
+ \draw [->] (0, -1) -- (0, 2) node [above] {$f$};
+ \node [anchor = north east] at (0, 0) {$O$};
+ \draw [mblue, semithick] (0, 0) parabola bend (1.75, 1.5) (4, -.9796);
+ \node [anchor = north east] at (3.5, 0) {$Y$};
+
+ \draw [->-=0.5] (0, -1.5) -- (3.5, -1.5);
+ \draw [-<-=0.2, ->] (3.5, -1.5) -- (4.5, -1.5) node [right] {$y$};
+ \draw (0, -1.4) -- (0, -1.6) node [below] {$O$};
+ \draw (3.5, -1.4) -- (3.5, -1.6) node [below] {$Y$};
+ \end{tikzpicture}
+\end{center}
+Now when the population is small, we have
+\[
+ \dot y \simeq ry
+\]
+So the population grows exponentially. Eventually, the stable equilibrium $Y$ is reached.
+
+\subsection{Discrete equations (Difference equations)}
+Since differential equations are approximated numerically by computers with discrete equations, it is important to study the behaviour of discrete equations, and how they differ from their continuous counterparts.
+
+In the logistic equation, the evolution of species may occur discretely (e.g.\ births in spring, deaths in winter), and we can consider the population at certain time intervals (e.g.\ consider the population at the end of each month). We might have a model in the form
+\[
+ x_{n + 1} = \lambda x_n(1 - x_n).
+\]
+This can be derived from the continuous equation with a discrete approximation
+\begin{align*}
+ \frac{y_{n+1} - y_n}{\Delta t} &= ry_n\left(1 - \frac{y_n}{Y}\right)\\
+ y_{n + 1} &= y_n + r\Delta ty_n\left(1 - \frac{y_n}{Y}\right)\\
+ &= (1 + r\Delta t)y_n - \frac{r\Delta t}{Y}y_n^2\\
+ &= (1 + r\Delta t)y_n\left[1 - \left(\frac{r\Delta t}{1 + r\Delta t}\right)\frac{y_n}{Y}\right]
+\end{align*}
+Write
+\[
+ \lambda = 1 + r\Delta t,\quad x_n = \left(\frac{r\Delta t}{1 + r\Delta t}\right)\frac{y_n}{Y},
+\]
+then
+\[
+ x_{n+1}=\lambda x_n(1 - x_n).
+\]
+This is the discrete logistic equation or logistic map.
+
+If $\lambda < 1$, then deaths exceed births and the population decays to zero.
+\begin{center}
+ \begin{tikzpicture}[scale=5]
+ \draw [<->] (0, 1) node [above] {$x_{n + 1}$} -- (0, 0) -- (1.2, 0) node [right] {$x_n$};
+ \draw [mgreen, semithick] (0, 0) -- (1, 1) node [right] {$x_{n + 1} = x_n$};
+ \draw [mblue, semithick] (0, 0) parabola bend (0.5, 0.225) (1, 0);
+
+ \draw [mred] (0.4, 0) node [below, black] {$x_0$}
+ -- (0.4, 0.216) -- (0.216, 0.216)
+ -- (0.216, 0.152) -- (0.152, 0.152)
+ -- (0.152, 0.116) -- (0.116, 0.116)
+ -- (0.116, 0.092) -- (0.092, 0.092)
+ -- (0.092, 0.075) -- (0.075, 0.075);
+ \end{tikzpicture}
+\end{center}
+We see that $x = 0$ is a fixed point.
+
+In general, to find fixed points, we solve for $x_{n + 1} = x_n$, i.e.\ $f(x_n) = x_n$. For the logistic map, we have
+\begin{align*}
+ \lambda x_n(1 - x_n) &= x_n\\
+ x_n[1 - \lambda(1 - x_n)] &= 0\\
+ x_n = 0 &\text{ or } x_n = 1 - \frac{1}{\lambda}
+\end{align*}
+When $1 < \lambda < 2$, we have
+\begin{center}
+ \begin{tikzpicture}[scale=5]
+ \draw [<->] (0, 1) node [above] {$x_{n + 1}$} -- (0, 0) -- (1.2, 0) node [right] {$x_n$};
+ \draw [mgreen, semithick] (0, 0) -- (1, 1) node [right] {$x_{n + 1} = x_n$};
+ \draw [mblue, semithick] (0, 0) parabola bend (0.5, 0.45) (1, 0);
+
+ \draw [mred] (0.2, 0) node [below, black] {$x_0$}
+ -- (0.2, 0.288) -- (0.288, 0.288)
+ -- (0.288, 0.369) -- (0.369, 0.369)
+ -- (0.369, 0.419) -- (0.419, 0.419)
+ -- (0.419, 0.438) -- (0.438, 0.438)
+ -- (0.438, 0.443) -- (0.443, 0.443);
+ \end{tikzpicture}
+\end{center}
+We see that $x_n = 0$ is an unstable fixed point and $x_n = 1 - \frac{1}{\lambda}$ is a stable fixed point.
+
+When $2 < \lambda < 3$, we have
+\begin{center}
+ \begin{tikzpicture}[scale=5]
+ \draw [<->] (0, 1) node [above] {$x_{n + 1}$} -- (0, 0) -- (1.2, 0) node [right] {$x_n$};
+ \draw [mgreen, semithick] (0, 0) -- (1, 1) node [right] {$x_{n + 1} = x_n$};
+ \draw [mblue, semithick] (0, 0) parabola bend (0.5, 0.675) (1, 0);
+
+ \draw [mred] (0.1, 0) node [below, black] {$x_0$}
+ -- (0.1, 0.243) -- (0.243, 0.243)
+ -- (0.243, 0.497) -- (0.497, 0.497)
+ -- (0.497, 0.675) -- (0.675, 0.675)
+ -- (0.675, 0.592) -- (0.592, 0.592)
+ -- (0.592, 0.652) -- (0.652, 0.652)
+ -- (0.652, 0.613) -- (0.613, 0.613)
+ -- (0.613, 0.641) -- (0.641, 0.641)
+ -- (0.641, 0.621) -- (0.621, 0.621)
+ -- (0.621, 0.635) -- (0.635, 0.635)
+ -- (0.635, 0.626) -- (0.626, 0.626)
+ -- (0.626, 0.632) -- (0.632, 0.632)
+ -- (0.632, 0.628) -- (0.628, 0.628)
+ -- (0.628, 0.631) -- (0.631, 0.631)
+ -- (0.631, 0.629) -- (0.629, 0.629)
+ -- (0.629, 0.63) -- (0.63, 0.63)
+ -- (0.63, 0.629) -- (0.629, 0.629)
+ -- (0.629, 0.63) -- (0.63, 0.63);
+ \end{tikzpicture}
+\end{center}
+There is an oscillatory convergence to $x_n = 1 - \frac{1}{\lambda}$.
+
+When $\lambda > 3$, we have a limit cycle, in which $x_n$ oscillates between 2 values, i.e.\ $x_{n + 2} = x_n$. When $\lambda > 1 + \sqrt{6} \approx 3.449$, the 2-cycle becomes unstable and a 4-cycle appears, and so on.
+\begin{center}
+ \begin{tikzpicture}[scale=5]
+ \draw [<->] (0, 1) node [above] {$x_{n + 1}$} -- (0, 0) -- (1.2, 0) node [right] {$x_n$};
+ \draw [mgreen, semithick] (0, 0) -- (1, 1) node [right] {$x_{n + 1} = x_n$};
+ \draw [mblue, semithick] (0, 0) parabola bend (0.5, 0.8) (1, 0);
+
+ \draw [mred] (0.15, 0) node [below, black] {$x_0$}
+ -- (0.15, 0.408) -- (0.408, 0.408)
+ -- (0.408, 0.773) -- (0.773, 0.773)
+ -- (0.773, 0.562) -- (0.562, 0.562)
+ -- (0.562, 0.788) -- (0.788, 0.788)
+ -- (0.788, 0.535) -- (0.535, 0.535)
+ -- (0.535, 0.796) -- (0.796, 0.796)
+ -- (0.796, 0.52) -- (0.52, 0.52)
+ -- (0.52, 0.799) -- (0.799, 0.799)
+ -- (0.799, 0.514) -- (0.514, 0.514)
+ -- (0.514, 0.799) -- (0.799, 0.799)
+ -- (0.799, 0.514) -- (0.514, 0.514)
+ -- (0.514, 0.799) -- (0.799, 0.799)
+ -- (0.799, 0.514) -- (0.514, 0.514)
+ -- (0.514, 0.799) -- (0.799, 0.799)
+ -- (0.799, 0.514) -- (0.514, 0.514)
+ -- (0.514, 0.799) -- (0.799, 0.799)
+ -- (0.799, 0.514) -- (0.514, 0.514);
+ \end{tikzpicture}
+\end{center}
+We have the following plot of the stable solutions for different values of $\lambda$ (plotted as $r$ in the horizontal axis)
+\begin{center}
+ \includegraphics[width=330pt]{images/de_bifurcation.png}
+
+ Credits: Wikimedia Commons: Jordan Pierce - Public Domain CC0 1.0
+\end{center}
+Note that the fixed point still exists for $\lambda > 3$, but is no longer stable. Similarly, the 2-cycle still exists for $\lambda > 1 + \sqrt{6}$, but it is no longer stable.
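The regimes described above are easy to reproduce by iterating the map directly; a minimal sketch (names our own):

```python
def iterate(lam: float, x0: float = 0.2, n: int = 2000) -> float:
    """Iterate x_{k+1} = lam * x_k * (1 - x_k) and return the final value."""
    x = x0
    for _ in range(n):
        x = lam * x * (1 - x)
    return x

# lam = 2.5: the iterates converge to the fixed point 1 - 1/lam = 0.6.
print(iterate(2.5))

# lam = 3.2: the iterates settle onto a 2-cycle, so applying the map
# twice returns (almost exactly) to the same value.
x = iterate(3.2)
x1 = 3.2 * x * (1 - x)
x2 = 3.2 * x1 * (1 - x1)
print(x, x1, x2)
```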
+
+\section{Second-order differential equations}
+We now move on to second-order differential equations. While we will only look at second-order equations, most of the methods in this section apply to higher order differential equations as well.
+\subsection{Constant coefficients}
+The general form of an equation with constant coefficients is
+\[
+ ay'' + by' + cy = f(x).
+\]
+We solve this in two steps:
+\begin{enumerate}
+ \item Find the complementary functions which satisfy the homogeneous equation $ay'' + by' + cy = 0$.
+ \item Find a particular solution that satisfies the full equation.
+\end{enumerate}
+\subsubsection{Complementary functions}
+Recall that $e^{\lambda x}$ is an eigenfunction of the differential operator $\frac{\d}{\d x}$. Hence it is also an eigenfunction of the second derivative $\frac{\d^2}{\d x^2} = \frac{\d}{\d x}\left(\frac{\d }{\d x}\right)$.
+
+If the complementary function has the form $y_c = e^{\lambda x}$, then $y'_c = \lambda e^{\lambda x}$ and $y''_c = \lambda^2 e^{\lambda x}$. Substituting into the differential equation gives
+\begin{defi}[Characteristic equation]
+ The \emph{characteristic equation} of a (second-order) differential equation $ay'' + by' + cy = 0$ is
+ \[
+ a\lambda^2 + b\lambda + c = 0.
+ \]
+\end{defi}
+
+In general, the characteristic equation has two solutions, giving (in principle) two complementary functions $y_1 = e^{\lambda_1 x}$ and $y_2 = e^{\lambda_2 x}$.
+
+If $\lambda_1$ and $\lambda_2$ are distinct, then $y_1$ and $y_2$ are linearly independent and complete --- they form a basis of the solution space. The (most) general complementary function is
+\[
+ y_c = Ae^{\lambda_1 x} + Be^{\lambda_2 x}.
+\]
+\begin{eg}
+ $y'' - 5y' + 6y = 0$. Try $y = e^{\lambda x}$. The characteristic equation is $\lambda^2 - 5\lambda + 6 = 0$. Then $\lambda = 2$ or $3$. So the general solution is $y = Ae^{2x} + Be^{3x}$.
+
+ Note that $A$ and $B$ can be complex constants.
+\end{eg}
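That any choice of $A$ and $B$ works can be spot-checked numerically; a small sketch verifying that the residual of $y'' - 5y' + 6y$ vanishes, using central differences (names our own):

```python
import math

def y(x: float, A: float = 1.0, B: float = 2.0) -> float:
    """Candidate general solution y = A e^{2x} + B e^{3x} (A, B arbitrary)."""
    return A * math.exp(2 * x) + B * math.exp(3 * x)

def residual(x: float, h: float = 1e-4) -> float:
    """y'' - 5y' + 6y, with derivatives estimated by central differences."""
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return ypp - 5 * yp + 6 * y(x)

# The residual should be (numerically) zero.
print(residual(0.5))
```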
+
+\begin{eg}[Simple harmonic motion]
+ $y'' + 4y = 0$. Try $y = e^{\lambda x}$. The characteristic equation is $\lambda^2 + 4 = 0$, with solutions $\lambda = \pm 2i$. Then our general solution is $y = Ae^{2ix} + Be^{-2ix}$. However, if this describes simple harmonic motion in physics, we want the solution to be real (or \emph{look} real). We can write
+ \begin{align*}
+ y &= A(\cos 2x + i\sin 2x) + B(\cos 2x - i\sin 2x)\\
+ &= (A + B)\cos 2x + i(A - B)\sin 2x\\
+ &= \alpha\cos 2x + \beta\sin 2x
+ \end{align*}
+ where $\alpha = A + B$ and $\beta = i(A - B)$, and $\alpha$ and $\beta$ are independent constants.
+
+ In effect, we have changed the basis from $\{e^{2ix}, e^{-2ix}\}$ to $\{\cos 2x, \sin 2x\}$.
+\end{eg}
+
+\begin{eg}[Degeneracy] $y'' - 4y' + 4y = 0$.
+
+ Try $y = e^{\lambda x}$. We have $\lambda ^2 - 4\lambda + 4 = 0$ and $(\lambda - 2)^2 = 0$. So $\lambda = 2$ or $2$. But $e^{2x}$ and $e^{2x}$ are clearly not linearly independent. We have only managed to find one basis function of the solution space, but a second order equation has a 2 dimensional solution space. We need to find a second solution.
+
+ We can perform \emph{detuning}. We can separate the two functions found above from each other by considering $y'' - 4y' + (4 - \varepsilon^2)y = 0$. This turns into the equation we want to solve as $\varepsilon \to 0$. Try $y = e^{\lambda x}$. We obtain $\lambda^2 - 4\lambda + 4 - \varepsilon^2 = 0$. The two roots are $\lambda = 2 \pm \varepsilon$. Then
+ \begin{align*}
+ y &= Ae^{(2 + \varepsilon)x} + Be^{(2 - \varepsilon)x}\\
+ &= e^{2x}[A e^{\varepsilon x} + Be^{-\varepsilon x}]
+ \intertext{Taking the limit as $\varepsilon \to 0$, we use the Taylor expansion of $e^{\varepsilon x}$ to obtain}
+ y &= e^{2x}[(A + B) + \varepsilon x(A - B) + O(A\varepsilon^2, B\varepsilon^2)]
+ \intertext{We let $(A + B) = \alpha$ and $\varepsilon(A - B) = \beta$. This is perfectly valid for any non-zero $\varepsilon$. Then $A = \frac{1}{2}(\alpha + \frac{\beta}{\varepsilon})$ and $B = \frac{1}{2}(\alpha - \frac{\beta}{\varepsilon})$. So we have}
+ y &= e^{2x}[\alpha + \beta x + O(A\varepsilon^2, B\varepsilon^2)]\\
+ \intertext{We know for any $\varepsilon$, we have a solution of this form. Now we turn the procedure around. We fix some $\alpha$ and $\beta$. Then given any $\varepsilon$, we can find some constants $A$, $B$ (depending on $\varepsilon$) such that the above holds. As we decrease the size of $\varepsilon$, we have $A, B = O(\frac{1}{\varepsilon})$. So $O(A\varepsilon^2) = O(B\varepsilon^2) = O(\varepsilon)$. So our solution becomes}
+ y &= e^{2x}[\alpha + \beta x + O(\varepsilon)]\\
+ &\to e^{2x}(\alpha + \beta x)
+ \end{align*}
+ In this way, we have derived two separate basis functions. In general, if $y_1(x)$ is a degenerate complementary function of a linear differential equation with constant coefficients, then $y_2(x) = xy_1(x)$ is an independent complementary function.
+\end{eg}
+
+\subsubsection{Second complementary function}
+In general (i.e.\ if we don't have constant coefficients), we can find a second complementary function associated with a degenerate solution of the homogeneous equation by looking for a solution in the form $y_2(x) = v(x) y_1(x)$, where $y_1(x)$ is the degenerate solution we found.
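+
+To see why this works in general, we can substitute $y_2 = vy_1$ into the homogeneous equation $y'' + p(x)y' + q(x)y = 0$. We have
+\begin{align*}
+ y_2' &= v'y_1 + vy_1'\\
+ y_2'' &= v''y_1 + 2v'y_1' + vy_1''\\
+ y_2'' + py_2' + qy_2 &= v(y_1'' + py_1' + qy_1) + v''y_1 + v'(2y_1' + py_1)\\
+ &= v''y_1 + v'(2y_1' + py_1),
+\end{align*}
+since $y_1$ is itself a solution. Setting this to zero gives a first-order equation for $v'$, which we can always solve with an integrating factor.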
+
+\begin{eg}
+ Consider $y'' - 4y' + 4y = 0$. We have $y_1 = e^{2x}$. We try $y_2 = ve^{2x}$. Then
+ \begin{align*}
+ y_2' &= (v' + 2v)e^{2x}\\
+ y_2'' &= (v'' + 4v' + 4v)e^{2x}.
+ \end{align*}
+ Substituting into the original equation gives
+ \[
+ (v'' + 4v' + 4v) - 4(v' + 2v) + 4v = 0.
+ \]
+ Simplifying, this tells us $v'' = 0$, which forces $v$ to be a linear function of $x$. So $y_2 = (Ax + B)e^{2x}$ for some $A, B \in \R$.
+\end{eg}
+
+\subsubsection{Phase space}
+If we are given a general $n$th order differential equation of the form
+\[
+ a_n(x) y^{(n)} + a_{n - 1}(x) y^{(n - 1)} + \cdots + a_1(x) y' + a_0(x) y = f(x),
+\]
+and we have a solution $y$, then we can plot a graph of $y$ versus $x$, and see how $y$ evolves with $x$.
+
+However, one problem is that for such an equation, the solution is not just determined by the initial condition $y(x_0)$, but also $y'(x_0)$, $y''(x_0)$ etc. So if we just have a snapshot of the value of $y$ at a particular point $x_0$, we have completely no idea how it would evolve in the future.
+
+So how much information do we actually need? At any point $x_0$, if we are given the first $n - 1$ derivatives, i.e.\ $y(x_0)$, $y'(x_0)$, $\cdots$, $y^{(n - 1)}(x_0)$, we can then get the $n$th derivative and also any higher derivatives from the differential equation. This means that we know the Taylor series of $y$ about $x_0$, and it follows that the solution is uniquely determined by these conditions (note that it takes considerably more extra work to actually \emph{prove} rigorously that the solution is uniquely determined by these initial conditions, but it can be done for sufficiently sensible $f$, as will be done in IB Analysis II).
+
+Thus we are led to consider the \emph{solution vector}
+\[
+ \mathbf{Y}(x) = (y(x), y'(x), \cdots, y^{(n - 1)}(x)).
+\]
+We say such a vector lies in the \emph{phase space}, which is an $n$-dimensional space. So for each $x$, we thus get a point $\mathbf{Y}(x)$ lying in the $n$-dimensional space. Moreover, given any point in the phase space, if we view it as the initial conditions for our differential equation, we get a unique trajectory in the phase space.
+
+\begin{eg}
+ Consider $y'' + 4y = 0$. Suppose we have an initial condition of $y_1(0) = 1, y'_1(0) = 0$. Then we can solve the equation to obtain $y_1(x) = \cos 2x$. Thus the initial solution vector is $\mathbf{Y}_1(0) = (1, 0)$, and the trajectory as $x$ varies is given by $\mathbf{Y}_1(x) = (\cos 2x, -2 \sin 2x)$. Thus as $x$ changes, we trace out an ellipse in the clockwise direction:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$y$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$y'$};
+
+ \draw (0, 0) circle [x radius = 1.2, y radius = 1.6];
+ \draw [->] (0, 0) -- (1, 0.8844) node [right] {$\mathbf{Y}_1(x)$};
+ \end{tikzpicture}
+ \end{center}
+ Another possible initial condition is $y_2(0) = 0, y_2'(0) = 2$. In this case, we obtain the solution $y(x) = \sin 2x$, with a solution vector $\mathbf{Y}_2(x) = (\sin 2x, 2 \cos 2x)$.
+
+ Note that as vectors, our two initial conditions $(1, 0)$ and $(0, 2)$ are independent. Moreover, as $x$ changes, the two solution vectors $\mathbf{Y}_1(x), \mathbf{Y}_2(x)$ remain independent. This is an important observation that allows the method of variation of parameters later on.
+\end{eg}
+
+In general, for a 2nd order equation, the phase space is a 2-dimensional space, and we can take the two complementary functions $\mathbf{Y}_1$ and $\mathbf{Y}_2$ as basis vectors for the phase space at each particular value of $x$. Of course, we need the two solutions to be linearly independent.
+
+\begin{defi}[Wronskian]
+ Given a differential equation with solutions $y_1, y_2$, the \emph{Wronskian} is the determinant
+ \[
+ W(x) = \begin{vmatrix}y_1 & y_2 \\ y_1' & y_2'\end{vmatrix}.
+ \]
+\end{defi}
+
+\begin{defi}[Independent solutions]
+ Two solutions $y_1(x)$ and $y_2(x)$ are \emph{independent} solutions of the differential equation if and only if $\mathbf{Y}_1$ and $\mathbf{Y}_2$ are linearly independent as vectors in the phase space for some $x$, i.e.\ iff the Wronskian is non-zero for some $x$.
+\end{defi}
+
+In our example, we have $W(x) = 2\cos^2 2x + 2\sin^2 2x = 2 \not= 0$ for all $x$.
+
+\begin{eg}
+ In our earlier example, $y_1 = e^{2x}$ and $y_2 = xe^{2x}$. We have
+ \[
+ W = \left|\begin{matrix} e^{2x} & xe^{2x}\\ 2e^{2x} & e^{2x} + 2xe^{2x} \end{matrix}\right| = e^{4x}(1 + 2x - 2x) = e^{4x} \not= 0.
+ \]
+\end{eg}
+
+In both cases, the Wronskian is \emph{never} zero. Is it possible that it is zero for some $x$ while non-zero for others? The answer is no.
+
+\begin{thm}[Abel's Theorem]
+ Given an equation $y'' + p(x)y' + q(x) y = 0$, either $W = 0$ for all $x$, or $W \not= 0$ for all $x$, i.e.\ if two solutions are independent for some particular $x$, then they are independent for all $x$.
+\end{thm}
+
+\begin{proof}
+ If $y_1$ and $y_2$ are both solutions, then
+ \begin{align*}
+ y_2(y_1'' + py_1' + qy_1) &= 0\\
+ y_1(y_2'' + py_2' + qy_2) &= 0\\
+ \intertext{Subtracting the two equations, we have}
+ y_1y_2'' - y_2y_1'' + p(y_1y_2' - y_2y_1') &= 0\\
+ \intertext{Note that $W =y_1y_2' - y_2y_1'$ and $W' = y_1y_2'' + y_1'y_2' - (y_2'y_1' + y_2y_1'') = y_1y_2'' - y_2y_1''$}
+ W' + p(x)W &= 0\\
+ W(x) &= W_0 e^{-\int p\; \d x},
+ \end{align*}
+ where $W_0$ is a constant. Since the exponential function is never zero, either $W_0 = 0$, in which case $W = 0$ for all $x$, or $W_0 \not= 0$ and $W \not= 0$ for any value of $x$.
+\end{proof}
+In general, any linear $n$th-order homogeneous differential equation can be written in the form $\mathbf{Y}' + A\mathbf{Y} = 0$, a system of first-order equations. It can then be shown that $W' + \tr(A)W = 0$, and $W = W_0e^{-\int \tr A\;\d x}$. So Abel's theorem holds.
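+
+As a sketch in the second-order case, $y'' + p(x)y' + q(x)y = 0$ with $\mathbf{Y} = (y, y')$ becomes
+\[
+ \mathbf{Y}' + A\mathbf{Y} = \mathbf{0},\quad A = \begin{pmatrix}0 & -1\\ q(x) & p(x)\end{pmatrix},
+\]
+so $\tr A = p(x)$, and $W = W_0 e^{-\int \tr A \;\d x} = W_0 e^{-\int p \;\d x}$, consistent with the second-order result.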
+
+\subsection{Particular integrals}
+We now consider equations of the form $ay'' + by' + cy = f(x)$. We will come up with several ways to find a particular integral.
+
+\subsubsection{Guessing}
+If the forcing terms are simple, we can easily ``guess'' the form of the particular integral, as we've previously done for first-order equations.
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ $f(x)$ & $y_p(x)$\\
+ \midrule
+ $e^{mx}$ & $Ae^{mx}$\\
+ $\sin kx$ & \multirow{2}{*}{$A\sin kx + B\cos kx$}\\
+ $\cos kx$ & \\
+ polynomial $p_n(x)$ & $q_n(x) = a_nx^n + \cdots + a_1x + a_0$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+It is important to remember that the equation is linear, so we can superpose solutions and consider each forcing term separately.
+
+\begin{eg}
+ Consider $y'' - 5y' + 6y = 2x + e^{4x}$. To obtain the forcing term $2x$, we need a first order polynomial $ax + b$, and to get $e^{4x}$ we need $ce^{4x}$. Thus we can guess
+ \begin{align*}
+ y_p &= ax + b + ce^{4x}\\
+ y'_p &= a + 4ce^{4x}\\
+ y''_p &= 16ce^{4x}
+ \end{align*}
+ Substituting in, we get
+ \[
+ 16ce^{4x} - 5(a + 4ce^{4x}) + 6(ax + b + ce^{4x}) = 2x + e^{4x}
+ \]
+ Comparing coefficients of similar functions, we have
+ \begin{align*}
+ 16c - 20c + 6c &= 1\Rightarrow c = \frac{1}{2}\\
+ 6a &= 2 \Rightarrow a = \frac{1}{3}\\
+ -5a + 6b &= 0 \Rightarrow b = \frac{5}{18}
+ \end{align*}
+ Since the complementary function is $y_c = Ae^{3x} + Be^{2x}$, the general solution is $y = Ae^{3x} + Be^{2x} + \frac{1}{2}e^{4x} + \frac{1}{3}x + \frac{5}{18}$.
+
+ Note that any boundary condition used to determine $A$ and $B$ must be applied to the full solution, not just the complementary function.
+\end{eg}
+\subsubsection{Resonance}
+Consider $\ddot y + \omega_0^2 y = \sin \omega_0 t$. The complementary solution is $y_c = A\sin \omega_0 t + B\cos \omega_0 t$. We notice that the forcing is itself a complementary function. So if we guess a particular integral $y_p = C\sin \omega_0 t + D\cos \omega_0 t$, we'll simply find $\ddot y_p + \omega_0 ^2 y_p = 0$, so we can't balance the forcing.
+
+This is an example of a simple harmonic oscillator being forced at its natural frequency.
+
+We can \emph{detune} our forcing away from the natural frequency, and consider $\ddot y + \omega_0^2 y = \sin \omega t$ with $\omega \not= \omega_0$. Try
+\[
+ y_p = C(\sin \omega t - \sin \omega_0 t).
+\]
+We have
+\[
+ \ddot y_p = C(-\omega^2 \sin \omega t + \omega_0^2 \sin\omega_0 t).
+\]
+Substituting into the differential equation, we have $C(\omega_0^2 - \omega^2) = 1$. Then
+\[
+ y_p = \frac{\sin \omega t - \sin \omega_0t}{\omega_0^2 - \omega^2}.
+\]
+We can simplify this to
+\[
+ y_p = \frac{2}{\omega_0^2 - \omega^2} \cos \left(\frac{\omega_0 + \omega}{2}t\right) \sin \left(\frac{\omega - \omega_0}{2} t\right)
+\]
+We let $\omega_0 - \omega = \Delta \omega$. Then
+\[
+ y_p = \frac{-2}{(2\omega + \Delta \omega)\Delta \omega}\cos \left[\left(\omega + \frac{\Delta \omega}{2}\right)t\right] \sin \left(\frac{\Delta \omega}{2}t\right).
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6.5, 0) node [right] {$t$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$y_p$};
+
+ \draw [semithick, mblue, domain=0:6,samples=600] plot(\x, {2 * cos(2000 * \x) * sin (90 * \x)});
+ \draw [semithick, morange] (0, 0) sin (1, 2) cos (2, 0) sin (3, -2) cos (4, 0) sin (5, 2) cos (6, 0);
+ \draw [semithick, mgreen] (0, 0) sin (1, -2) cos (2, 0) sin (3, 2) cos (4, 0) sin (5, -2) cos (6, 0);
+
+ \draw [semithick, mblue] (7.5, 0.7) -- (8, 0.7) node [black, right] {$y_p$};
+ \draw [semithick, morange] (7.5, 0) -- (8, 0) node [black, right] {$\sin\left(\frac{\Delta \omega}{2} t\right)$};
+ \draw [semithick, mgreen] (7.5, -0.7) -- (8, -0.7) node [black, right] {$-\sin\left(\frac{\Delta \omega}{2} t\right)$};
+
+ \draw [<->] (1, -2.5) -- (5, -2.5) node [pos=0.5,fill=white] {$O\left(\frac{1}{\Delta \omega}\right)$};
+ \end{tikzpicture}
+\end{center}
+This oscillation in the amplitude of the $\cos$ wave is known as \emph{beating}. This happens when the forcing frequency is close to the natural frequency. The period of the $\sin$ envelope is $O(\frac{1}{\Delta \omega})$, while the $\cos$ carrier has period $O(\frac{1}{\omega_0})$. As $\Delta\omega\to 0$, the period of the beating envelope tends to infinity, and we just see the initial linear growth.
+
+Mathematically, since $\sin\theta \approx \theta$ as $\theta \to 0$, as $\Delta\omega \to 0$, we have
+\[
+ y_p\to \frac{-t}{2\omega_0}\cos\omega_0 t.
+\]
+In general, if the forcing is a linear combination of complementary functions, then the particular integral is proportional to $t$ (the independent variable) times the non-resonant guess.
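+
+We can verify directly that this limit is indeed a particular integral: with $y_p = -\frac{t}{2\omega_0}\cos \omega_0 t$, we have
+\begin{align*}
+ \dot y_p &= -\frac{1}{2\omega_0}\cos \omega_0 t + \frac{t}{2}\sin \omega_0 t\\
+ \ddot y_p &= \sin \omega_0 t + \frac{\omega_0 t}{2}\cos \omega_0 t\\
+ \ddot y_p + \omega_0^2 y_p &= \sin \omega_0 t + \frac{\omega_0 t}{2}\cos \omega_0 t - \frac{\omega_0 t}{2}\cos \omega_0 t = \sin \omega_0 t,
+\end{align*}
+as required.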
+
+\subsubsection{Variation of parameters}
+So far, we have been finding particular integrals by guessing. Here we are going to come up with a method that can systematically help us find the particular integral. Of course, this is substantially more complicated than guessing. So if the form of the particular integral is obvious, we should go for guessing.
+
+Suppose we have a second order differential equation
+\[
+ y'' + p(x)y' + q(x)y = f(x).
+\]
+We then know any particular solution vector can be written in the form
+\[
+ \mathbf{Y}(x) =
+ \begin{pmatrix}
+ y(x)\\
+ y'(x)
+ \end{pmatrix},
+\]
+and our job is to find one solution vector that satisfies the equation. We presuppose such a solution actually exists, and we will try to find out what it is.
+
+The trick is to pick a convenient basis for this space. Let $y_1(x)$ and $y_2(x)$ be linearly independent complementary functions of the ODE. Then for each $x$, the solution vectors $\mathbf{Y}_1(x) = (y_1(x), y_1'(x))$ and $\mathbf{Y}_2(x) = (y_2(x), y_2'(x))$ form a basis of the solution space. So for each particular $x$, we can find some constants $u, v$ (depending on $x$) such that the following equality holds:
+\[
+ \mathbf{Y}(x) = u(x) \mathbf{Y}_1(x) + v(x) \mathbf{Y}_2(x),
+\]
+since $\mathbf{Y}_1$ and $\mathbf{Y}_2$ are a basis.
+
+Component-wise, we have
+\begin{align*}
+ y_p &= uy_1 + vy_2 \tag{a}\\
+ y_p' &= uy_1' + vy_2' \tag{b}
+\end{align*}
+Differentiating the second equation, we obtain
+\[
+ y''_p = (uy_1'' + u'y_1') + (vy_2'' + v'y_2') \tag{c}
+\]
+If we consider (c) + $p$(b) + $q$(a), we have $y_1' u' + y_2'v' = f$.
+
+Now note that we derived the equation of $y_p'$ from the vector equation. This must be equal to what we get if we differentiate (a). By (a)$'$ - (b), we obtain $y_1u' + y_2v' = 0$. Now we have two simultaneous equations for $u'$ and $v'$, which we should be able to solve.
+
+We can, for example, write them in matrix form as
+\[
+ \begin{pmatrix}
+ y_1 & y_2\\
+ y_1' & y_2'
+ \end{pmatrix}
+ \begin{pmatrix}
+ u'\\
+ v'
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ 0\\
+ f
+ \end{pmatrix}
+\]
+Inverting the left matrix, we have
+\[
+ \begin{pmatrix}
+ u'\\
+ v'
+ \end{pmatrix} = \frac{1}{W}
+ \begin{pmatrix}
+ y_2' & -y_2\\
+ -y_1' & y_1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0\\f
+ \end{pmatrix}
+\]
+So $u' = -\frac{y_2}{W}f$ and $v' = \frac{y_1}{W}f$.
+
+\begin{eg}
+ $y'' + 4y = \sin 2x$. We know that $y_1 = \sin 2x$ and $y_2 = \cos 2x$. $W = -2$. We write
+ \[
+ y_p = u\sin 2x + v\cos 2x.
+ \]
+ Using the formulae above, we obtain
+ \[
+ u' = \frac{\cos 2x\sin 2x}{2} = \frac{\sin 4x}{4},\;\;\; v' = \frac{-\sin^2 2x}{2} = \frac{\cos 4x - 1}{4}
+ \]
+ So
+ \[
+ u = -\frac{\cos 4x}{16}, \;\;\; v = \frac{\sin 4x}{16} - \frac{x}{4}
+ \]
+ Therefore
+ \[
+ y_p = -\frac{\cos 4x\sin 2x}{16} + \left(\frac{\sin 4x}{16} - \frac{x}{4}\right)\cos 2x = \frac{1}{16}\sin 2x - \frac{x}{4}\cos 2x
+ \]
+ Note that $-\frac{1}{4}x\cos 2x$ is what we found previously by detuning, and $\frac{1}{16}\sin 2x$ is a complementary function, so the results agree.
+\end{eg}
+
+It is generally not a good idea to remember the exact formula for the results we've obtained above. Instead, whenever faced with such questions, you should be able to re-derive the results instead.
+
+\subsection{Linear equidimensional equations}
+Equidimensional equations are often called homogeneous equations, but this is confusing since that name is also used for equations with no forcing term. So we prefer this name instead.
+
+\begin{defi}[Equidimensional equation]
+ An equation is \emph{equidimensional} if it has the form
+ \[
+ ax^2y'' + bxy' + cy = f(x),
+ \]
+ where $a, b, c$ are constants.
+\end{defi}
+To understand the name ``equidimensional'', suppose we are doing physics and variables have dimensions. Say $y$ has dimensions $L$ and $x$ has dimensions $T$. Then $y'$ has dimensions $LT^{-1}$ and $y''$ has dimensions $LT^{-2}$. So all terms $x^2 y''$, $xy'$ and $y$ have the same dimensions.
+
+\subsubsection*{Solving by eigenfunctions}
+Note that $y = x^k$ is an eigenfunction of $x\frac{\d}{\d x}$. We can try an eigenfunction $y = x^k$. We have $y' = kx^{k - 1}$ and thus $xy' = kx^k = ky$; and $y'' = k(k - 1)x^{k - 2}$ and $x^2y'' = k(k - 1)x^k$.
+
+Substituting in, we have
+\[
+ ak(k - 1) + bk + c = 0,
+\]
+which we can solve, in general, to give two roots $k_1$ and $k_2$, and $y_c = Ax^{k_1} + Bx^{k_2}$.
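+
+For instance, consider the illustrative equation $x^2y'' + xy' - y = 0$ (i.e.\ $a = 1$, $b = 1$, $c = -1$). Trying $y = x^k$ gives
+\[
+ k(k - 1) + k - 1 = k^2 - 1 = 0,
+\]
+so $k = \pm 1$ and $y_c = Ax + \frac{B}{x}$.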
+
+\subsubsection*{Solving by substitution}
+Alternatively, we can make a substitution $z = \ln x$. Then we can show that
+\[
+ a\frac{\d^2 y}{\d z^2} + (b - a)\frac{\d y}{\d z} + cy = f(e^z).
+\]
+This turns an equidimensional equation into an equation with constant coefficients. The characteristic equation for solutions of the form $y = e^{\lambda z}$ is $a\lambda^2 + (b - a)\lambda + c = 0$, which we can rearrange to become $a\lambda(\lambda - 1) + b\lambda + c = 0$. So $\lambda = k_1, k_2$.
+
+Then the complementary function is $y_c = Ae^{k_1z} + Be^{k_2z} = Ax^{k_1} + Bx^{k_2}$.
+
+\subsubsection*{Degenerate solutions}
+If the roots of the characteristic equation are equal, then $y_c = \{e^{kz}, ze^{kz}\} = \{x^k, x^k\ln x\}$. Similarly, if there is a resonant forcing proportional to $x^{k_1}$ (or $x^{k_2}$), then there is a particular integral of the form $x^{k_1}\ln x$.
+
+These results can be easily obtained by considering the substitution method of solving, and then applying our results from homogeneous linear equations with constant coefficients.
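+
+For instance, the illustrative equation $x^2y'' - 3xy' + 4y = 0$ has characteristic equation $k(k - 1) - 3k + 4 = (k - 2)^2 = 0$, a repeated root $k = 2$. So the complementary function is
+\[
+ y_c = (A + B\ln x)x^2.
+\]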
+
+\subsection{Difference equations}
+Consider an equation of the form
+\[
+ a y_{n + 2} + by_{n + 1} + cy_n = f_n.
+\]
+We can solve in a similar way to differential equations, by exploiting linearity and eigenfunctions.
+
+We can think of the difference operator $D[y_n] = y_{n + 1}$. This has an eigenfunction $y_n = k^n$. We have $D[y_n] = D[k^n] = k^{n + 1} = k\cdot k^n = ky_n$.
+
+To solve the difference equation, we first look for complementary functions satisfying
+\[
+ ay_{n + 2} + by_{n + 1} + cy_n = 0
+\]
+We try $y_n = k^n$ to obtain
+\begin{align*}
+ ak^{n + 2} + bk^{n + 1} + ck^n &= 0\\
+ ak^2 + bk + c &= 0
+\end{align*}
+from which we can determine $k$. So the general complementary function is $y_n^c = Ak_1^n + Bk_2^n$ if $k_1 \not= k_2$. If they are equal, then $y_n^c = (A + Bn)k^n$.
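+
+We can check the repeated root case directly: if $k$ is a double root, then $ak^2 + bk + c = 0$ and $2ak + b = 0$ (since $k = -\frac{b}{2a}$). Substituting $y_n = nk^n$ gives
+\begin{align*}
+ a(n + 2)k^{n + 2} + b(n + 1)k^{n + 1} + cnk^n &= nk^n(ak^2 + bk + c) + k^{n + 1}(2ak + b)\\
+ &= 0,
+\end{align*}
+so $nk^n$ is indeed a second, independent complementary function.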
+
+To find the particular integral, we guess.
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ $f_n$ & $y_n^p$\\
+ \midrule
+ $k^n$ & $Ak^n$ if $k \not= k_1, k_2$\\
+ $k_1^n$ & $nk_1^n$\\
+ $n^p$ & $An^p + Bn^{p - 1} + \cdots + Cn + D$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+
+\begin{eg}[Fibonacci sequence]
+ The Fibonacci sequence is defined by
+ \[
+ y_n = y_{n - 1} + y_{n - 2}
+ \]
+ with $y_0 = y_1 = 1$.
+
+ We can write this as
+ \[
+ y_{n + 2} - y_{n + 1} - y_n = 0
+ \]
+ We try $y_n = k^n$. Then $k^2 - k - 1 = 0$. Then
+ \begin{align*}
+ k^2 - k - 1 &= 0\\
+ k &= \frac{1 \pm \sqrt{5}}{2}
+ \end{align*}
+ We write $k = \varphi_1, \varphi_2$. Then $y_n = A\varphi_1^n + B\varphi_2^n$. Our initial conditions give
+ \begin{align*}
+ A + B &= 1\\
+ A\varphi_1 + B\varphi_2 &= 1
+ \end{align*}
+ We get $\displaystyle A = \frac{\varphi_1}{\sqrt{5}}$ and $\displaystyle B = \frac{-\varphi_2}{\sqrt{5}}$. So
+ \[
+ y_n = \frac{\varphi_1^{n + 1} - \varphi_2^{n + 1}}{\sqrt{5}} = \frac{\varphi_1^{n + 1} - \left(\frac{-1}{\varphi_1}\right)^{n + 1}}{\sqrt{5}}
+ \]
+\end{eg}
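+
+As a quick check of the closed form, note that $\varphi_1 + \varphi_2 = 1$ and $\varphi_1 - \varphi_2 = \sqrt{5}$. So
+\[
+ y_0 = \frac{\varphi_1 - \varphi_2}{\sqrt{5}} = 1,\quad y_1 = \frac{\varphi_1^2 - \varphi_2^2}{\sqrt{5}} = \frac{(\varphi_1 + \varphi_2)(\varphi_1 - \varphi_2)}{\sqrt{5}} = 1,
+\]
+in agreement with the initial conditions.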
+
+\subsection{Transients and damping}
+In many physical systems, there is some sort of restoring force and some damping, e.g.\ car suspension system.
+
+Consider a car of mass $M$ with a vertical force $F(t)$ acting on it (e.g.\ mouse jumping on the car). We can consider the wheels to be springs ($F = kx$) with a ``shock absorber'' ($F = l\dot x$). Then the equation of motion can be given by
+\[
+ M\ddot x = F(t) - kx - l\dot x.
+\]
+So we have
+\[
+ \ddot x + \frac{l}{M}\dot x + \frac{k}{M}x = \frac{1}{M}F(t).
+\]
+Note that if we don't have the damping and the forcing, we end up with simple harmonic motion of angular frequency $\sqrt{k/M}$. Write $t = \tau \sqrt{M/k}$, where $\tau$ is dimensionless. The timescale $\sqrt{M/k}$ is proportional to the period of the undamped, unforced system (or 1 over its natural frequency). Then we obtain
+\[
+ \ddot x + 2\kappa\dot x + x = f(\tau)
+\]
+where $\dot x$ now denotes $\frac{\d x}{\d \tau}$, $\kappa = \frac{l}{2\sqrt{kM}}$ and $f = \frac{F}{k}$.
+
+By this substitution, we are now left with only one parameter $\kappa$ instead of the original three ($M, l, k$).
+
+We will consider different possible cases.
+
+\subsubsection*{Free (natural) response \texorpdfstring{$f = 0$}{f = 0}}
+\begin{align*}
+ \ddot x + 2\kappa \dot x + x &= 0\\
+ \intertext{We try $x = e^{\lambda \tau}$}
+ \lambda^2 + 2\kappa\lambda + 1 &= 0\\
+ \lambda &= -\kappa \pm \sqrt{\kappa^2 - 1}\\
+ &= -\lambda_1, -\lambda_2
+\end{align*}
+where $\lambda_1$ and $\lambda_2$ have positive real parts.
+
+\subsubsection*{Underdamping}
+If $\kappa < 1$, we have $x = e^{-\kappa\tau}(A\sin \sqrt{1 - \kappa^2}\tau +B\cos \sqrt{1 - \kappa^2}\tau)$.
+
+The period is $\frac{2\pi}{\sqrt{1 - \kappa^2}}$ and the amplitude decays on a characteristic timescale of $O(\frac{1}{\kappa})$. Note that the damping increases the period. As $\kappa \to 1$, the oscillation period $\to \infty$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$\tau$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$x$};
+
+ \draw [semithick, mblue, domain = 0:5.8, samples=100] plot (\x, {1.8 * exp (-0.5 * \x) * sin (300 * \x + 40)});
+ \end{tikzpicture}
+\end{center}
+
+\subsubsection*{Critical damping}
+If $\kappa = 1$, then $x = (A + B\tau)e^{-\kappa\tau}$.
+
+The rise time and decay time are both $O(\frac{1}{\kappa}) = O(1)$. So the dimensional rise and decay times are $O(\sqrt{M/k})$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$\tau$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$x$};
+
+ \draw [semithick, mblue, domain = 0:5.8, samples=100] plot (\x, {(1 + 3 * \x) * exp (-\x)});
+ \end{tikzpicture}
+\end{center}
+
+\subsubsection*{Overdamping}
+If $\kappa > 1$, then $x = Ae^{-\lambda_1\tau} + Be^{-\lambda_2\tau}$ with $\lambda_1 < \lambda_2$. Then the decay time is $O(1/\lambda_1)$ and the rise time is $O(1/\lambda_2)$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$\tau$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$x$};
+
+ \draw [semithick, mblue, domain = 0:5.8, samples=100] plot (\x, {2 * exp (-0.3 * \x) - exp(-2 * \x)});
+ \end{tikzpicture}
+\end{center}
+Note that in all cases, it is possible to get a large initial increase in amplitude.
+
+\subsubsection*{Forcing}
+In a forced system, the complementary functions typically determine the short-time transient response, while the particular integral determines the long-time (asymptotic) response.
+For example, if $f(\tau) = \sin\tau$, then we can guess $x_p = C\sin \tau + D\cos\tau$. In this case, it turns out that $x_p = -\frac{1}{2\kappa}\cos\tau$.
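+
+To see where this comes from, substitute the guess into $\ddot x + 2\kappa \dot x + x = \sin \tau$. The $\ddot x_p$ and $x_p$ terms cancel exactly (the natural frequency is $1$ in these units), leaving
+\[
+ 2\kappa \dot x_p = 2\kappa(C\cos \tau - D\sin \tau) = \sin \tau,
+\]
+so $C = 0$ and $D = -\frac{1}{2\kappa}$.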
+
+The general solution is thus $x = Ae^{-\lambda_1\tau} + Be^{-\lambda_2\tau} - \frac{1}{2\kappa}\cos\tau \sim -\frac{1}{2\kappa}\cos\tau$ as $\tau\to \infty$, since $\Re (\lambda_{1, 2}) > 0$.
+
+It is important to note that the forcing response is out of phase with the forcing.
+\subsection{Impulses and point forces}
+\subsubsection{Dirac delta function}
+Consider a ball bouncing on the ground. When the ball hits the ground at some time $T$, it experiences a force from the ground for some short period of time. The force on the ball exerted by the ground $F(t)$ is $0$ for most of the time, except during the short period $(T - \varepsilon, T + \varepsilon)$.
+
+Often we don't know (or we don't wish to know) the details of $F(t)$ but we can note that it only acts for a short time of $O(\varepsilon)$ that is much shorter than the overall time $O(t_2 - t_1)$ of the system. It is convenient mathematically to imagine the force acting instantaneously at time $t = T$, i.e.\ consider the limit $\varepsilon\to 0$.
+
+Newton's second law gives $m\ddot x = F(t) - mg$. While we cannot solve it, we can integrate the equation from $T - \varepsilon$ to $T + \varepsilon$. So
+\begin{align*}
+ \int_{T - \varepsilon}^{T + \varepsilon} m\frac{\d ^2 x}{\d t^2}\;\d t &= \int_{T - \varepsilon}^{T + \varepsilon} F(t)\;\d t - \int_{T - \varepsilon}^{T + \varepsilon} mg\;\d t\\
+ \left[ m\frac{\d x}{\d t}\right]^{T + \varepsilon}_{T - \varepsilon} &= I - 2\varepsilon mg\\
+ \Delta p &= I - O(\varepsilon)
+\end{align*}
+where $\Delta p$ is the change in momentum and the impulse $I = \int_{T - \varepsilon}^{T + \varepsilon} F(t) \;\d t$ is the area under the force curve. Note that the impulse $I$ is the only property of $F$ that influences the macroscopic behaviour of the system. If the contact time $2\varepsilon$ is small, we'll neglect it and write
+\[
+ \Delta p = I
+\]
+Assuming that $F$ only acts on a negligible amount of time $\varepsilon$, all that matters to us is its integral $I$, i.e.\ the area under the force curve.
+
+Without loss of generality, assume $T = 0$ for easier mathematical treatment. We can consider a family of functions $D(t; \varepsilon)$ such that
+\begin{gather*}
+ \lim_{\varepsilon\to 0} D(t; \varepsilon) = 0 \text{ for all }t \not= 0;\\
+ \lim_{\varepsilon\to 0}\int_{-\infty}^\infty D(t; \varepsilon) \;\d t = 1.
+\end{gather*}
+So we can replace the force in our example by $ID(t; \varepsilon)$, and then take the limit as $\varepsilon \to 0$.
+
+For example, we can choose
+\[
+ D(t; \varepsilon) = \frac{1}{\varepsilon\sqrt{\pi}}e^{-t^2/\varepsilon^2}
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3.25, 0) node [right] {$t$};
+ \draw [->, use as bounding box] (0, 0) -- (0, 4) node [above] {$D$};
+
+ \draw [semithick, mblue, domain=-3:3, samples = 100] plot (\x, { 1.6 * exp( - \x * \x)});
+ \draw [semithick, morange, domain=-3:3, samples = 100] plot (\x, { 3.2 * exp( - 4 * \x * \x)});
+
+ \draw [mblue, semithick] (3.5, 2.25) -- (4, 2.25) node [right, black] {$\varepsilon = 1$};
+ \draw [morange, semithick] (3.5, 1.75) -- (4, 1.75) node [right, black] {$\varepsilon = 0.5$};
+ \end{tikzpicture}
+\end{center}
+This has height $O(1/\varepsilon)$ and width $O(\varepsilon)$.
+
+It can be checked that this satisfies the properties listed above. Note that as $\varepsilon \to 0$, $D(0; \varepsilon)\to \infty$. Therefore $\displaystyle \lim_{\varepsilon\to 0} D(0; \varepsilon)$ does not exist.
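+
+For example, the normalization of the family above can be checked with the substitution $u = t/\varepsilon$ and the standard Gaussian integral $\int_{-\infty}^\infty e^{-u^2}\;\d u = \sqrt{\pi}$:
+\[
+ \int_{-\infty}^\infty \frac{1}{\varepsilon\sqrt{\pi}} e^{-t^2/\varepsilon^2}\;\d t = \frac{1}{\sqrt{\pi}}\int_{-\infty}^\infty e^{-u^2}\;\d u = 1
+\]
+for every $\varepsilon > 0$, so the integral property holds even before taking the limit.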
+
+\begin{defi}[Dirac delta function]
+ The \emph{Dirac delta function} is defined by
+ \[
+ \delta(x) = \lim_{\varepsilon \to 0} D(x; \varepsilon)
+ \]
+ on the understanding that we can only use its integral properties. For example, when we write
+ \[
+ \int_{-\infty}^{\infty} g(x)\delta (x) \;\d x,
+ \]
+ we actually mean
+ \[
+ \lim_{\varepsilon \to 0} \int_{-\infty}^{\infty} g(x)D(x; \varepsilon)\;\d x.
+ \]
+ In fact, this is equal to $g(0)$.
+
+ More generally, $\int_a^b g(x)\delta(x - c)\;\d x = g(c)$ if $c\in (a, b)$ and $0$ otherwise, provided $g$ is continuous at $x = c$.
+\end{defi}
+
+This gives a convenient way of representing and making calculations involving impulsive or point forces. For example, in the previous example, we can write
+\[
+ m\ddot x = -mg + I\delta(t - T).
+\]
+
+\begin{eg}
+ $y'' - y = 3\delta(x - \frac{\pi}{2})$ with $y = 0$ at $x = 0, \pi$. Note that our function $y$ is split into two parts by $x = \frac{\pi}{2}$.
+
+ First consider the region $0 \leq x < \frac{\pi}{2}$. Here the delta function is $0$, and we have $y'' - y = 0$ with $y = 0$ at $x = 0$. Then $y = Ce^x + De^{-x} = A\sinh x + B\cosh x$, and the boundary condition gives $B = 0$.
+ In the region $\frac{\pi}{2} < x \leq \pi$, we again obtain $y = C\sinh(\pi - x) + D\cosh (\pi - x)$ and (from the boundary condition), $D = 0$.
+
+ When $x = \frac{\pi}{2}$, first insist that $y$ is continuous at $x = \frac{\pi}{2}$. So $A = C$. Then note that we have to solve
+ \[
+ y'' - y = 3\delta\left(x - \frac{\pi}{2}\right)
+ \]
+ But remember that the delta function makes sense only inside an integral. So we integrate both sides from $\frac{\pi}{2}^-$ to $\frac{\pi}{2}^+$. Then we obtain
+ \[
+ [y']^{\frac{\pi}{2}^+}_{\frac{\pi}{2}^-} - \int_{\frac{\pi}{2}^-}^{\frac{\pi}{2}^+} y\;\d x= 3
+ \]
+ Since we assume that $y$ is well behaved, the second integral is 0. So we are left with
+ \[
+ [y']^{\frac{\pi}{2}^+}_{\frac{\pi}{2}^-} = 3
+ \]
+ So we have
+ \begin{align*}
+ -C\cosh \frac{\pi}{2} - A\cosh\frac{\pi}{2} &= 3\\
+ A = C &= \frac{-3}{2\cosh \frac{\pi}{2}}
+ \end{align*}
+\end{eg}
+Then we have
+\[
+ y =
+ \begin{cases}
+ \frac{-3\sinh x}{2\cosh \frac{\pi}{2}} & 0 \leq x < \frac{\pi}{2}\\
+ \frac{-3\sinh(\pi - x)}{2\cosh \frac{\pi}{2}} & \frac{\pi}{2} < x \leq \pi
+ \end{cases}
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (7, 0) node [right] {$x$};
+
+ \draw [->] (0, 0) -- (0, -3.5) node [below] {$y$};
+
+ \draw [semithick, mblue, domain=0:1.5708] plot ({2 * \x}, {-1.3 * sinh (\x)});
+ \draw [semithick, mblue, domain=1.5708:3.14159] plot ({2 * \x}, {-1.3 * sinh (pi - \x)});
+ \end{tikzpicture}
+\end{center}
+Note that at $x = \frac{\pi}{2}$, our final function has continuous $y$, discontinuous $y'$ and infinite $y''$. In general, differentiating a function makes it \emph{less} continuous. This is why we insisted at first that $y$ has to be continuous. Otherwise, $y'$ would look like a delta function, and $y''$ would be something completely unrecognizable.
+
+Hence the discontinuity is always taken up by the highest-order derivative, since differentiation increases the degree of discontinuity.
+
+\subsection{Heaviside step function}
+
+\begin{defi}[Heaviside step function]
+ Define the Heaviside step function as:
+ \[
+ H(x) = \int_{-\infty}^x \delta(t) \;\d t
+ \]
+ We have
+ \[
+ H(x) =\begin{cases} 0 & x < 0\\1 & x > 0\\\text{undefined} & x = 0\end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$y$};
+
+ \draw [mblue, semithick] (-3, 0) -- (0, 0);
+ \draw [mblue, semithick] (0, 2) -- (3, 2);
+ \end{tikzpicture}
+ \end{center}
+ By the fundamental theorem of calculus,
+ \[
+ \frac{\d H}{\d x} = \delta(x)
+ \]
+ But remember that these functions and relationships can only be used inside integrals.
+\end{defi}
+
+\section{Series solutions}
+Often, it is difficult to solve a differential equation directly. However, we can attempt to find a Taylor series for the solution.
+
+We will consider equations of the form
+\[
+ p(x) y'' + q(x) y' + r(x) y = 0.
+\]
+\begin{defi}[Ordinary and singular points]
+ The point $x = x_0$ is an \emph{ordinary point} of the differential equation if $\frac{q}{p}$ and $\frac{r}{p}$ have Taylor series about $x_0$ (i.e.\ are ``analytic'', cf.\ Complex Analysis). Otherwise, $x_0$ is a \emph{singular point}.
+
+ If $x_0$ is a singular point but the equation can be written as
+ \[
+ P(x)(x - x_0)^2y'' + Q(x)(x - x_0)y' + R(x)y = 0,
+ \]
+ where $\frac{Q}{P}$ and $\frac{R}{P}$ have Taylor series about $x_0$, then $x_0$ is a \emph{regular singular point}.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $(1 - x^2)y'' - 2xy' + 2y = 0$. $x = 0$ is an ordinary point. However, $x = \pm 1$ are (regular) singular points since $p(\pm 1) = 0$.
+ \item $\sin x y'' + \cos x y' + 2y = 0$. $x = n\pi$ are regular singular points while all others are ordinary.
+ \item $(1 + \sqrt{x}) y'' - 2xy' + 2y = 0$. $x = 0$ is an irregular singular point because $\sqrt{x}$ is not differentiable at $x = 0$.
+ \end{enumerate}
+\end{eg}
+
+It is possible to show that if $x_0$ is an ordinary point, then the equation is guaranteed to have two linearly independent solutions of the form
+\[
+ y = \sum_{n = 0}^\infty a_n(x - x_0)^n,
+\]
+i.e.\ Taylor series about $x_0$. The solution must be convergent in some neighbourhood of $x_0$.
+
+If $x_0$ is a regular singular point, then there is at least one solution of the form
+\[
+ y = \sum_{n = 0}^\infty a_n(x - x_0)^{n + \sigma}
+\]
+with $a_0 \not= 0$ (to ensure $\sigma$ is unique). The \emph{index} $\sigma$ can be any complex number. This is called a Frobenius series.
+
+Alternatively, it can be nice to think of the Frobenius series as
+\begin{align*}
+ y &= (x - x_0)^\sigma \sum_{n = 0}^\infty a_n (x - x_0)^n\\
+ &= (x-x_0)^\sigma f(x)
+\end{align*}
+where $f(x)$ is analytic and has a Taylor series.
+
+We will not prove these results, but merely apply them.
+
+\subsubsection*{Ordinary points}
+\begin{eg}
+ Consider $(1 - x^2)y'' - 2xy' + 2y = 0$. Find a series solution about $x = 0$ (which is an ordinary point).
+
+ We try $y = \sum_{n = 0}^\infty a_nx^n$. First, we write the equation in the form of an equidimensional equation with polynomial coefficients by multiplying both sides by $x^2$. This little trick will make subsequent calculations slightly nicer. We obtain
+ \begin{align*}
+ (1 - x^2) (x^2y'') - 2x^2(xy') + 2x^2 y &= 0\\
+ \sum a_n[(1 - x^2) n(n - 1) - 2x^2n + 2x^2]x^n &= 0\\
+ \sum a_n[n(n - 1) + (-n^2 - n + 2)x^2]x^n &= 0
+ \end{align*}
+ We look at the coefficient of $x^n$ and obtain the following general recurrence relation:
+ \begin{gather*}
+ n(n - 1) a_n + [-(n - 2)^2 - (n - 2) + 2]a_{n - 2} = 0\\
+ n(n - 1)a_n = (n^2 - 3n)a_{n - 2}
+ \end{gather*}
+ Here we do \emph{not} divide by anything since they might be zero.
+
+ First consider the case $n = 0$. The left hand side gives $0\cdot a_0 = 0$ (the right hand side is $0$ since $a_{n - 2} = 0$). So any value of $a_0$ satisfies the recurrence relationship, and it can take any arbitrary value. This corresponds to a constant of integration. Similarly, by considering $n = 1$, $a_1$ is arbitrary.
+
+ For $n > 1$, $n$ and $n - 1$ are non-zero. So we have
+ \begin{align*}
+ a_n &= \frac{n - 3}{n - 1} a_{n - 2}\\
+ \intertext{In this case (but generally not), we can further simplify it to obtain:}
+ a_n &= \frac{n - 3}{n - 1}\frac{n - 5}{n - 3}a_{n - 4}\\
+ &= \frac{n - 5}{n - 1}a_{n - 4}\\
+ &\;\;\vdots
+ \intertext{So}
+ a_{2k} &= \frac{-1}{2k - 1}a_0,\\
+ a_{2k + 1} &= 0 \quad (k \geq 1).
+ \end{align*}
+ So we obtain
+ \begin{align*}
+ y &= a_0\left[1 - \frac{x^2}{1} - \frac{x^4}{3} - \frac{x^6}{5} - \cdots\right] + a_1 x\\
+ &= a_0\left[1 - \frac{x}{2}\ln\left(\frac{1 + x}{1 - x}\right)\right] + a_1x
+ \end{align*}
+ Notice the logarithmic behaviour near $x = \pm 1$ which are regular singular points.
+\end{eg}
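The recurrence $a_n = \frac{n - 3}{n - 1}a_{n - 2}$ above can be checked mechanically. The following Python sketch (not part of the notes; it uses exact rational arithmetic) iterates the recurrence with $a_0 = 1$, $a_1 = 0$ and confirms that the even coefficients are $a_{2k} = -1/(2k - 1)$, exactly the Taylor coefficients of $1 - \frac{x}{2}\ln\frac{1 + x}{1 - x}$.

```python
from fractions import Fraction

# Recurrence from the example: a_n = (n - 3)/(n - 1) * a_{n-2},
# starting from a_0 = 1 and a_1 = 0 (isolating the a_0 series).
a = {0: Fraction(1), 1: Fraction(0)}
for n in range(2, 13):
    a[n] = Fraction(n - 3, n - 1) * a[n - 2]

# Claimed closed form for the even coefficients: a_{2k} = -1/(2k - 1),
# matching the Taylor coefficients of 1 - (x/2) ln((1+x)/(1-x)).
even = [a[2 * k] for k in range(1, 7)]
expected = [Fraction(-1, 2 * k - 1) for k in range(1, 7)]
```

The odd coefficients beyond $a_1$ vanish automatically, since $a_3$ picks up the factor $n - 3 = 0$.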
+
+\subsubsection*{Regular singular points}
+\begin{eg}
+ Consider $4xy'' + 2(1 - x^2)y' - xy = 0$. Note that $x = 0$ is a singular point. However, if we multiply throughout by $x$ to obtain an equidimensional equation, we obtain
+ \[
+ 4(x^2 y'') + 2(1 - x^2)xy' - x^2 y = 0.
+ \]
+ Since $\frac{Q}{P} = \frac{1 - x^2}{2}$ and $\frac{R}{P} = -\frac{x^2}{4}$ both have Taylor series, $x = 0$ is a regular singular point. Try
+ \[
+ y = \sum_{n = 0}^\infty a_n x^{n + \sigma}\text{ with }a_0 \not= 0.
+ \]
+ Substituting in, we have
+ \[
+ \sum a_n x^{n + \sigma}[4(n + \sigma)(n + \sigma - 1) + 2(1 - x^2)(n + \sigma) - x^2] = 0.
+ \]
+ By considering the coefficient of $x^{n + \sigma}$, we obtain the general recurrence relation
+ \[
+ [4(n + \sigma)(n + \sigma - 1) + 2(n + \sigma)]a_n -[2(n - 2 + \sigma) + 1]a_{n - 2} = 0.
+ \]
+ Simplifying the equation gives
+ \[
+ 2(n + \sigma)(2n + 2\sigma - 1)a_n = (2n + 2\sigma-3)a_{n - 2}.
+ \]
+ The $n = 0$ case gives the \emph{indicial equation} for the \emph{index} $\sigma$:
+ \[
+ 2\sigma(2\sigma - 1)a_0 = 0.
+ \]
+ Since $a_0 \not= 0$, we must have $\sigma = 0$ or $\frac{1}{2}$. The $\sigma = 0$ solution corresponds to an analytic (``Taylor series'') solution, while $\sigma = \frac{1}{2}$ corresponds to a non-analytic one.
+
+ When $\sigma = 0$, the recurrence relation becomes
+ \[
+ 2n(2n - 1)a_n = (2n - 3)a_{n - 2}.
+ \]
+ When $n = 0$, this gives $0\cdot a_0 = 0$. So $a_0$ is arbitrary. For $n >0$, we can divide and obtain
+ \[
+ a_n = \frac{2n - 3}{2n(2n - 1)}a_{n - 2}.
+ \]
+ We can see that $a_1 = 0$, and hence all subsequent odd terms vanish.
+
+ If $n = 2k$, i.e.\ $n$ is even, then
+ \begin{align*}
+ a_{2k} &= \frac{4k - 3}{4k(4k - 1)}a_{2k - 2}\\
+ y &= a_0\left[1 + \frac{1}{4\cdot 3}x^2 + \frac{5}{8\cdot 7\cdot 4\cdot 3}x^4 + \cdots\right]
+ \end{align*}
+ Note that we have only found one solution in this case.
+
+ Now when $\sigma = \frac{1}{2}$, we obtain
+ \[
+ (2n + 1)(2n)a_n = (2n - 2)a_{n - 2}
+ \]
+ When $n = 0$, we obtain $0\cdot a_0 = 0$, so $a_0$ is arbitrary. To avoid confusion with the $a_0$ above, call it $b_0$ instead.
+
+ When $n = 1$, we obtain $6a_1 = 0$, so $a_1 = 0$, and hence all subsequent odd terms vanish.
+
+ For even $n$,
+ \[
+ a_n = \frac{n - 1}{n(2n + 1)}a_{n - 2}
+ \]
+ So
+ \[
+ y = b_0 x^{1/2}\left[1 + \frac{1}{2\cdot 5}x^2 + \frac{3}{2\cdot 5\cdot 4\cdot 9}x^4 + \cdots\right]
+ \]
+\end{eg}
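Both branches of this example follow from the single recurrence $2(n + \sigma)(2n + 2\sigma - 1)a_n = (2n + 2\sigma - 3)a_{n - 2}$, so one can check the quoted coefficients exactly. A minimal Python sketch (helper name is ours, not from the notes):

```python
from fractions import Fraction

def frobenius_coeffs(sigma, nmax):
    """Iterate 2(n+s)(2n+2s-1) a_n = (2n+2s-3) a_{n-2}, with a_0 = 1, a_1 = 0."""
    a = {0: Fraction(1), 1: Fraction(0)}
    for n in range(2, nmax + 1):
        a[n] = (2 * n + 2 * sigma - 3) * a[n - 2] \
            / (2 * (n + sigma) * (2 * n + 2 * sigma - 1))
    return a

a_taylor = frobenius_coeffs(Fraction(0), 4)    # sigma = 0 (analytic) branch
a_half = frobenius_coeffs(Fraction(1, 2), 4)   # sigma = 1/2 branch
```

The computed values reproduce $\frac{1}{4\cdot 3}$, $\frac{5}{8\cdot 7\cdot 4\cdot 3}$ for $\sigma = 0$ and $\frac{1}{2\cdot 5}$, $\frac{3}{2\cdot 5\cdot 4\cdot 9}$ for $\sigma = \frac{1}{2}$.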
+
+\subsubsection*{Resonance of solutions}
+Note that the indicial equation has two roots $\sigma_1, \sigma_2$. Consider the two different cases:
+\begin{enumerate}
+ \item If $\sigma_2 - \sigma_1$ is not an integer, then there are two linearly independent Frobenius solutions
+ \[
+ y = \left[(x - x_0)^{\sigma_1}\sum_{n = 0}^{\infty} a_n(x - x_0)^n\right] + \left[(x - x_0)^{\sigma_2}\sum_{n = 0}^{\infty} b_n(x - x_0)^n\right].
+ \]
+ As $x\to x_0$, $y \sim (x - x_0)^{\sigma_1}$, where $\Re(\sigma_1) \leq \Re(\sigma_2)$.
+
+ \item If $\sigma_2 - \sigma_1$ is an integer (including when they are equal), there is one solution of the form
+ \[
+ y_1 = (x - x_0)^{\sigma_2}\sum_{n = 0}^{\infty} a_n(x - x_0)^n
+ \]
+ with $\sigma_2 \geq \sigma_1$.
+
+ In this case, $\sigma = \sigma_1$ will not give a valid solution, as we will later see. Instead, the other solution is (usually) in the form
+ \[
+ y_2 = \ln(x - x_0)y_1 + \sum_{n = 0}^\infty b_n(x - x_0)^{n + \sigma_1}.
+ \]
+ This form arises from resonance between the two solutions. But if the resonance somehow avoids itself, we can possibly end up with two regular Frobenius series solutions.
+
+ We can substitute this form of solution into the differential equation to determine $b_n$.
+\end{enumerate}
+\begin{eg}
+ Consider $x^2 y'' - xy = 0$. $x = 0$ is a regular singular point. It is already in equidimensional form $(x^2y'') - x(y) = 0$. Try
+ \[
+ y = \sum_{n = 0}^\infty a_n x^{n + \sigma}
+ \]
+ with $a_0 \not= 0$. We obtain
+ \[
+ \sum a_nx^{n + \sigma}[(n + \sigma)(n + \sigma - 1) - x] = 0.
+ \]
+ The general recurrence relation is
+ \[
+ (n + \sigma)(n + \sigma - 1)a_n = a_{n - 1}.
+ \]
+ $n = 0$ gives the indicial equation
+ \[
+ \sigma(\sigma - 1) = 0.
+ \]
+ Then $\sigma = 0, 1$. We are guaranteed to have a solution in the form $\sigma = 1$. When $\sigma = 1$, the recurrence relation becomes
+ \[
+ (n + 1)n a_n = a_{n - 1}.
+ \]
+ When $n = 0$, $0\cdot a_0 = 0$ so $a_0$ is arbitrary.
+ When $n > 0$, we obtain
+ \[
+ a_n = \frac{1}{n(n +1)}a_{n - 1} = \frac{1}{(n + 1)(n!)^2}a_0.
+ \]
+ So
+ \[
+ y_1 = a_0x\left(1 + \frac{x}{2} + \frac{x^2}{12} + \frac{x^3}{144} + \cdots \right).
+ \]
+ When $\sigma = 0$, we obtain
+ \[
+ n(n - 1)a_n = a_{n - 1}.
+ \]
+ When $n = 0$, $0\cdot a_0 = 0$ and $a_0$ is arbitrary. When $n = 1$, $0\cdot a_1 = a_0$. However, $a_0\not= 0$ by our initial constraint. Contradiction. So there is no solution of this form. (If we ignore the constraint that $a_0\not= 0$, we find that $a_0$ is arbitrary, but this gives exactly the same solution we found previously with $\sigma = 1$.)
+
+ The other solution is thus in the form
+ \[
+ y_2 = y_1\ln x + \sum_{n = 0}^\infty b_nx^n.
+ \]
+\end{eg}
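The closed form $a_n = \frac{1}{(n + 1)(n!)^2}a_0$ for the $\sigma = 1$ branch, and the breakdown of the $\sigma = 0$ branch at $n = 1$, can both be checked directly. A sketch (not part of the notes):

```python
from fractions import Fraction
from math import factorial

# sigma = 1 branch: a_n = a_{n-1} / (n(n+1)), with a_0 = 1.
a, coeffs = Fraction(1), [Fraction(1)]
for n in range(1, 9):
    a = a / (n * (n + 1))
    coeffs.append(a)

# Claimed closed form: a_n = 1 / ((n+1) (n!)^2).
closed = [Fraction(1, (n + 1) * factorial(n) ** 2) for n in range(9)]

# sigma = 0 branch fails at n = 1: the recurrence reads 0 * a_1 = a_0,
# which is impossible since a_0 != 0 by assumption.
n, sigma = 1, 0
lhs_factor = (n + sigma) * (n + sigma - 1)   # the coefficient of a_1
```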
+\section{Directional derivative}
+\label{sec:directional-derivative}
+\subsection{Gradient vector}
+Consider a function $f(x, y)$ and a displacement $\d\mathbf{s} = (\d x, \d y)$. The change in $f(x, y)$ during that displacement is
+\[
+ \d f = \frac{\partial f}{\partial x}\d x + \frac{\partial f}{\partial y}\d y
+\]
+We can also write this as
+\begin{align*}
+ \d f &= (\d x, \d y)\cdot \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right)\\
+ &= \d\mathbf{s}\cdot \nabla f
+\end{align*}
+where $\nabla f = \mathrm{grad}f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right)$ are the Cartesian components of the \emph{gradient} of $f$.
+
+We write $\d\mathbf{s} = \hat{\mathbf{s}}\;\d s$, where $|\hat{\mathbf{s}}| = 1$. Then
+\begin{defi}[Directional derivative]
+ The \emph{directional derivative} of $f$ in the direction of $\hat{\mathbf{s}}$ is
+ \[
+ \frac{\d f}{\d s} = \hat{\mathbf{s}}\cdot \nabla f.
+ \]
+\end{defi}
+
+\begin{defi}[Gradient vector]
+ The \emph{gradient vector} $\nabla f$ is defined as the vector that satisfies
+ \[
+ \frac{\d f}{\d s} = \mathbf{\hat{s}}\cdot \nabla f.
+ \]
+\end{defi}
+Officially, we take this to be the definition of $\nabla f$. Then $\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right)$ is a theorem that can be proved from this definition.
+
+We know that the directional derivative is given by
+\[
+ \frac{\d f}{\d s} = \mathbf{\hat{s}}\cdot \nabla f = |\nabla f| \cos \theta
+\]
+where $\theta$ is the angle between the displacement and $\nabla f$. Then when $\cos\theta$ is maximized, $\frac{\d f}{\d s} = |\nabla f|$. So we know that
+\begin{enumerate}
+ \item $\nabla f$ has magnitude equal to the maximum rate of change of $f(x, y)$ in the $xy$ plane.
+ \item It has direction in which $f$ increases most rapidly.
+ \item If $\d\mathbf{s}$ is a displacement along a contour of $f$ (i.e.\ along a line in which $f$ is constant), then
+ \[
+ \frac{\d f}{\d s} = 0.
+ \]
+ So $\mathbf{\hat{s}}\cdot \nabla f = 0$, i.e.\ $\nabla f$ is orthogonal to the contour.
+\end{enumerate}
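These three properties can be verified numerically. Below is a minimal sketch, using the hypothetical example $f(x, y) = x^2 + 3y^2$: the rate of change along $\nabla f/|\nabla f|$ equals $|\nabla f|$, and the rate along the contour direction vanishes.

```python
import math

# Hypothetical example: f(x, y) = x^2 + 3y^2, so grad f = (2x, 6y).
def f(x, y):
    return x * x + 3 * y * y

x0, y0 = 1.0, 2.0
gx, gy = 2 * x0, 6 * y0
gnorm = math.hypot(gx, gy)

def directional(sx, sy, h=1e-6):
    """df/ds along the unit vector (sx, sy), by central differences."""
    return (f(x0 + h * sx, y0 + h * sy) - f(x0 - h * sx, y0 - h * sy)) / (2 * h)

along_grad = directional(gx / gnorm, gy / gnorm)      # should equal |grad f|
along_contour = directional(-gy / gnorm, gx / gnorm)  # should be ~0
```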
+\subsection{Stationary points}
+There is always (at least) one direction in which $\frac{\d f}{\d s} = 0$, namely the direction parallel to the contour of $f$. However, local maxima and minima have
+\[
+ \frac{\d f}{\d s} = 0
+\]
+for \emph{all} directions, i.e.\ $\mathbf{\hat{s}}\cdot \nabla f = 0$ for all $\mathbf{\hat{s}}$, i.e.\ $\nabla f = \mathbf{0}$. Then we know that
+\[
+ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} = 0.
+\]
+However, apart from maxima and minima, in 3 dimensions, we also have saddle points:
+\begin{center}
+ \begin{tikzpicture}
+ \begin{axis}[hide axis, xtick=\empty, ytick=\empty]
+ \addplot3 [mesh, draw=gray, samples = 11] {x^2 - y^2};
+ \node [circ] at (axis cs:0, 0, 0) {};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+In general, we define a saddle point to be a point at which $\nabla f = 0$ but is not a maximum or minimum.
+
+When we plot out contours of functions, near maxima and minima, the contours are locally elliptical. Near saddle points, they are locally hyperbolic. Also, the contour lines cross at and only at saddle points.
+
+\subsection{Taylor series for multi-variable functions}
+Suppose we have a function $f(x, y)$ and a point $\mathbf{x}_0$. Now consider a finite displacement $\delta s$ along a straight line in the $x,y$ plane. Then
+\[
+ \delta s\frac{\d }{\d s} = \delta \mathbf{s}\cdot \nabla
+\]
+The Taylor series along the line is
+\begin{align*}
+ f(s) &= f(s_0 + \delta s)\\
+ &=f(s_0) + \delta s\frac{\d f}{\d s} + \frac{1}{2}(\delta s)^2\frac{\d ^2 f}{\d s^2} + \cdots\\
+ &= f(s_0) + \delta \mathbf{s} \cdot \nabla f + \frac{1}{2}\delta s^2 (\mathbf{\hat{s}}\cdot \nabla)(\mathbf{\hat{s}}\cdot \nabla)f + \cdots.
+\end{align*}
+We get
+\begin{align*}
+ \delta \mathbf{s}\cdot \nabla f &= (\delta x)\frac{\partial f}{\partial x} + (\delta y)\frac{\partial f}{\partial y}\\
+ &= (x - x_0)\frac{\partial f}{\partial x} + (y - y_0)\frac{\partial f}{\partial y}
+\end{align*}
+and
+\begin{align*}
+ \delta s^2 (\mathbf{\hat{s}}\cdot \nabla)(\mathbf{\hat{s}}\cdot \nabla)f &= (\delta \mathbf{s}\cdot \nabla)(\delta \mathbf{s}\cdot \nabla) f\\
+ &= \left[\delta x\frac{\partial }{\partial x} + \delta y\frac{\partial}{\partial y}\right]\left[\delta x\frac{\partial f}{\partial x} + \delta y\frac{\partial f}{\partial y}\right]\\
+ &= \delta x^2 \frac{\partial^2 f}{\partial x^2} + 2\delta x\delta y\frac{\partial^2 f}{\partial x\partial y} + \delta y^2 \frac{\partial^2 f}{\partial y^2}\\
+ &=
+ \begin{pmatrix}
+ \delta x& \delta y
+ \end{pmatrix}
+ \begin{pmatrix}
+ f_{xx}&f_{xy}\\
+ f_{yx}&f_{yy}
+ \end{pmatrix}
+ \begin{pmatrix}
+ \delta x\\\delta y
+ \end{pmatrix}
+\end{align*}
+\begin{defi}[Hessian matrix]
+ The \emph{Hessian matrix} is the matrix
+ \[
+ \nabla \nabla f =
+ \begin{pmatrix}
+ f_{xx}&f_{xy}\\
+ f_{yx}&f_{yy}
+ \end{pmatrix}
+ \]
+\end{defi}
+
+In conclusion, we have
+\begin{align*}
+ f(x, y) &= f(x_0, y_0) + (x - x_0)f_x + (y - y_0)f_y \\
+ &+ \frac{1}{2}[(x - x_0)^2 f_{xx} + 2(x - x_0)(y - y_0)f_{xy} + (y - y_0)^2 f_{yy}]
+\end{align*}
+In general, the coordinate-free form is
+\[
+ f(\mathbf{x}) = f(\mathbf{x_0}) + \delta \mathbf{x}\cdot\nabla f(\mathbf{x}_0) + \frac{1}{2}\delta \mathbf{x}\cdot \nabla \nabla f\cdot \delta \mathbf{x}
+\]
+where the dot in the second term represents a matrix product. Alternatively, in terms of the gradient operator (and real dot products), we have
+\[
+ f(\mathbf{x}) = f(\mathbf{x_0}) + \delta \mathbf{x}\cdot \nabla f(\mathbf{x}_0) + \frac{1}{2}[\nabla (\nabla f\cdot \delta \mathbf{x})]\cdot \delta\mathbf{x}
+\]
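For a quadratic $f$, the second-order expansion above is exact, which gives an easy numerical check of the Hessian formula. A sketch, using the hypothetical example $f(x, y) = 2x^2 + xy - y^2 + 3x - y + 5$ with its partial derivatives computed by hand:

```python
# Hypothetical quadratic example: f(x, y) = 2x^2 + xy - y^2 + 3x - y + 5.
def f(x, y):
    return 2 * x * x + x * y - y * y + 3 * x - y + 5

x0, y0 = 1.0, -2.0
fx, fy = 4 * x0 + y0 + 3, x0 - 2 * y0 - 1   # first partials at (x0, y0)
H = [[4.0, 1.0], [1.0, -2.0]]               # Hessian: constant for a quadratic

def taylor2(x, y):
    """Second-order Taylor expansion f(x0) + dx.grad f + (1/2) dx.H.dx."""
    dx, dy = x - x0, y - y0
    quad = H[0][0] * dx * dx + 2 * H[0][1] * dx * dy + H[1][1] * dy * dy
    return f(x0, y0) + fx * dx + fy * dy + 0.5 * quad

err = abs(f(2.5, 0.5) - taylor2(2.5, 0.5))  # exact for a quadratic
```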
+
+\subsection{Classification of stationary points}
+At a stationary point $\mathbf{x}_0$, we know that $\nabla f(\mathbf{x}_0) = 0$. So at a point near the stationary point,
+\[
+ f(\mathbf{x})\approx f(\mathbf{x}_0) + \frac{1}{2}\delta\mathbf{x}\cdot H\cdot \delta\mathbf{x},
+\]
+where $H = \nabla\nabla f(\mathbf{x}_0)$ is the Hessian matrix.
+
+At a minimum, every point near $\mathbf{x}_0$ has $f(\mathbf{x}) > f(\mathbf{x}_0)$, i.e.\ $\delta \mathbf{x}\cdot H\cdot \delta \mathbf{x} > 0$ for all $\delta \mathbf{x}$. We say $\delta \mathbf{x}\cdot H\cdot\delta \mathbf{x}$ is \emph{positive definite}.
+
+Similarly, at a maximum, $\delta \mathbf{x}\cdot H\cdot \delta\mathbf{x} < 0$ for all $\delta \mathbf{x}$. We say $\delta \mathbf{x}\cdot H\cdot \delta \mathbf{x}$ is \emph{negative definite}.
+
+At a saddle, $\delta \mathbf{x} \cdot H\cdot \delta\mathbf{x}$ is indefinite, i.e.\ it can be positive, negative or zero depending on the direction.
+
+This, so far, is not helpful, since we do not have an easy way to know what sign $\delta \mathbf{x}\cdot H\cdot \delta \mathbf{x}$ could be. Now note that $H = \nabla\nabla f$ is symmetric (because $f_{xy} = f_{yx}$). So $H$ can be diagonalized (cf.\ Vectors and Matrices). With respect to these axes in which $H$ is diagonal (principal axes), we have
+\begin{align*}
+ \delta\mathbf{x}\cdot H\cdot \delta\mathbf{x} &= (\delta x, \delta y, \cdots, \delta z)
+ \begin{pmatrix}
+ \lambda_1\\
+ &\lambda_2\\
+ &&\ddots\\
+ &&&\lambda_n
+ \end{pmatrix}
+ \begin{pmatrix}
+ \delta x\\\delta y\\\vdots\\\delta z
+ \end{pmatrix}\\
+ &= \lambda_1(\delta x)^2 + \lambda_2 (\delta y)^2 + \cdots + \lambda_n(\delta z)^2
+\end{align*}
+where $\lambda_1, \lambda_2, \cdots, \lambda_n$ are the eigenvalues of $H$.
+
+So for $\delta \mathbf{x}\cdot H\cdot \delta \mathbf{x}$ to be positive-definite, we need $\lambda_i > 0$ for all $i$. Similarly, it is negative-definite iff $\lambda_i < 0$ for all $i$. If eigenvalues have mixed sign, then it is a saddle point.
+
+Finally, if there is at least one zero eigenvalue, then we need further analysis to determine the nature of the stationary point.
+
+Apart from finding eigenvalues, another way to determine the definiteness is using the \emph{signature}.
+\begin{defi}[Signature of Hessian matrix]
+ The \emph{signature} of $H$ is the pattern of the signs of the subdeterminants:
+ \[
+ \underbrace{f_{xx}}_{|H_1|},
+ \underbrace{
+ \begin{vmatrix}
+ f_{xx} & f_{xy}\\
+ f_{yx} & f_{yy}
+ \end{vmatrix}}_{|H_2|},\cdots,
+ \underbrace{\begin{vmatrix}
+ f_{xx} & f_{xy} & \cdots & f_{xz}\\
+ f_{yx} & f_{yy} & \cdots & f_{yz}\\
+ \vdots & \vdots & \ddots & \vdots\\
+ f_{zx} & f_{zy} & \cdots & f_{zz}
+ \end{vmatrix}}_{|H_n| = |H|}
+ \]
+\end{defi}
+
+\begin{prop}
+ $H$ is positive definite if and only if the signature is $+, +, \cdots, +$. $H$ is negative definite if and only if the signature is $-, +, \cdots, (-1)^n$. Otherwise, $H$ is indefinite.
+\end{prop}
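In two dimensions the proposition is easy to test: the signature is $(f_{xx},\ f_{xx}f_{yy} - f_{xy}^2)$, and the eigenvalue signs follow from the trace and determinant. A small illustrative sketch (helper names are ours, not from the notes), using the Hessian $\begin{pmatrix}24 & -12\\ -12 & 2\end{pmatrix}$ from the example below:

```python
import math

def classify_2x2(fxx, fxy, fyy):
    """Classify a symmetric 2x2 Hessian from the signs of its minors."""
    H1, H2 = fxx, fxx * fyy - fxy * fxy
    if H1 > 0 and H2 > 0:
        return "positive definite"   # signature +, +
    if H1 < 0 and H2 > 0:
        return "negative definite"   # signature -, +
    if H2 < 0:
        return "indefinite"          # saddle
    return "degenerate"              # a zero minor: needs further analysis

def eigenvalues_2x2(fxx, fxy, fyy):
    """Eigenvalues via trace and determinant (real, since symmetric)."""
    tr, det = fxx + fyy, fxx * fyy - fxy * fxy
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

lam_lo, lam_hi = eigenvalues_2x2(24, -12, 2)  # mixed signs: saddle
```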
+
+\subsection{Contours of \texorpdfstring{$f(x, y)$}{f(x, y)}}
+Consider $H$ in 2 dimensions, and axes in which $H$ is diagonal. So $H =
+\begin{pmatrix}
+ \lambda_1 & 0\\
+ 0 & \lambda_2
+\end{pmatrix}$. Write $\mathbf{x} - \mathbf{x}_0 = (X, Y)$.
+
+Then near $\mathbf{x}_0$, $f = $ constant $\Rightarrow \delta\mathbf{x}\cdot H\cdot \delta\mathbf{x} = $ constant, i.e.\ $\lambda_1 X^2 + \lambda_2 Y^2 = $ constant. At a maximum or minimum, $\lambda_1$ and $\lambda_2$ have the same sign. So these contours are locally ellipses. At a saddle point, they have different signs and the contours are locally hyperbolae.
+
+\begin{eg}
+ Find and classify the stationary points of $f(x, y) = 4x^3 - 12xy + y^2 + 10y + 6$. We have
+ \begin{align*}
+ f_x &= 12x^2 - 12y\\
+ f_y &= -12x + 2y + 10\\
+ f_{xx} &= 24x\\
+ f_{xy} &= -12\\
+ f_{yy} &= 2
+ \end{align*}
+ At stationary points, $f_x = f_y = 0$. So we have
+ \[
+ 12x^2 - 12y = 0,\quad -12x + 2y + 10 = 0.
+ \]
+ The first equation gives $y = x^2$. Substituting into the second equation, we obtain $x = 1, 5$ and $y = 1, 25$ respectively. So the stationary points are $(1, 1)$ and $(5, 25)$.
+
+ To classify them, first consider $(1, 1)$. Our Hessian matrix $H =
+ \begin{pmatrix}
+ 24 & -12\\
+ -12 & 2
+ \end{pmatrix}$. Our signature is $|H_1| = 24$ and $|H_2| = -96$. Since we have a $+, -$ signature, this is an indefinite case and it is a saddle point.
+
+ At $(5, 25)$, $H =
+ \begin{pmatrix}
+ 120 & -12\\
+ -12 & 2
+ \end{pmatrix}$
+ So $|H_1| = 120$ and $|H_2| = 240 - 144 = 96$. Since the signature is $+, +$, it is a minimum.
+
+ To draw the contours, we draw what the contours look like near the stationary points, and then try to join them together, noting that contours cross only at saddles.
+ \begin{center}
+ \includegraphics[width=300pt]{images/de_contour.pdf}
+ \end{center}
+\end{eg}
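As a sanity check on this example, one can confirm numerically that both points are stationary and that the subdeterminants are as computed (a sketch, not part of the notes):

```python
# f(x, y) = 4x^3 - 12xy + y^2 + 10y + 6 from the example.
def grads(x, y):
    return (12 * x * x - 12 * y, -12 * x + 2 * y + 10)  # (f_x, f_y)

def minors(x, y):
    fxx, fxy, fyy = 24 * x, -12, 2
    return fxx, fxx * fyy - fxy * fxy                   # (|H_1|, |H_2|)

points = [(1, 1), (5, 25)]
residuals = [grads(*p) for p in points]      # should both be (0, 0)
signatures = {p: minors(*p) for p in points}
# (1, 1) -> (24, -96): saddle;  (5, 25) -> (120, 96): minimum
```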
+
+\section{Systems of differential equations}
+\subsection{Linear equations}
+Consider two dependent variables $y_1(t), y_2(t)$ related by
+\begin{align*}
+ \dot y_1 &= ay_1 + by_2 + f_1(t)\\
+ \dot y_2 &= cy_1 + dy_2 + f_2(t)
+\end{align*}
+We can write this in vector notation by
+\[
+ \begin{pmatrix}
+ \dot y_1\\\dot y_2
+ \end{pmatrix} =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \begin{pmatrix}
+ y_1\\y_2
+ \end{pmatrix}
+ +
+ \begin{pmatrix}
+ f_1\\f_2
+ \end{pmatrix}
+\]
+or $\mathbf{\dot Y} = M\mathbf{Y} + \mathbf{F}$. We can convert this to a higher-order equation by
+\begin{align*}
+ \ddot y_1 &= a\dot y_1 + b\dot y_2 + \dot f_1\\
+ &= a\dot y_1 + b(cy_1 + dy_2 + f_2) + \dot f_1\\
+ &= a\dot y_1 + bcy_1 + d(\dot y_1 - ay_1 - f_1) + bf_2 + \dot f_1
+\end{align*}
+so
+\[
+ \ddot y_1 - (a + d)\dot y_1 + (ad - bc) y_1 = bf_2 - df_1 + \dot f_1
+\]
+and we know how to solve this. However, this actually complicates the equation.
+
+So what we usually do is the other way round: if we have a high-order equation, we can do this to change it to a system of first-order equations:
+
+If $\ddot y + a\dot y + by = f$, write $y_1 = y$ and $y_2 = \dot y$. Then let $\mathbf{Y} =
+\begin{pmatrix}
+ y\\\dot y
+\end{pmatrix}$
+
+Our system of equations becomes
+\begin{align*}
+ \dot y_1 &= y_2\\
+ \dot y_2 &= f - a y_2 - by_1
+\end{align*}
+or
+\[
+ \mathbf{\dot{Y}} =
+ \begin{pmatrix}
+ 0 & 1\\
+ -b & -a
+ \end{pmatrix}
+ \begin{pmatrix}
+ y_1\\y_2
+ \end{pmatrix} +
+ \begin{pmatrix}
+ 0 \\ f
+ \end{pmatrix}.
+\]
+Now consider the general equation
+\begin{align*}
+ \mathbf{\dot {Y}} &= M\mathbf{Y} + \mathbf{F}\\
+ \mathbf{\dot {Y}} - M\mathbf{Y} &= \mathbf{F}
+\end{align*}
+We first look for a complementary solution $\mathbf{Y}_c = \mathbf{v} e^{\lambda t}$, where $\mathbf{v}$ is a constant vector. So we get
+\[
+ \lambda\mathbf{v} - M\mathbf{v} = \mathbf{0}.
+\]
+We can write this as
+\[
+ M\mathbf{v} = \lambda \mathbf{v}.
+\]
+So $\lambda$ is an eigenvalue of $M$ and $\mathbf{v}$ is a corresponding eigenvector.
+
+We can solve this by solving the characteristic equation $\det(M - \lambda I) = 0$. Then for each $\lambda$, we find the corresponding $\mathbf{v}$.
+
+\begin{eg}
+ \[
+ \mathbf{\dot{Y}} =
+ \begin{pmatrix}
+ -4 & 24\\
+ 1 & -2
+ \end{pmatrix}
+ \mathbf{Y} +
+ \begin{pmatrix}
+ 4\\1
+ \end{pmatrix}e^t
+ \]
+ The characteristic equation of $M$ is $
+ \begin{vmatrix}
+ -4 - \lambda & 24\\
+ 1 & -2 - \lambda
+ \end{vmatrix} = 0$, which gives $(\lambda + 8)(\lambda - 2) = 0$ and $\lambda = 2, -8$.
+
+ When $\lambda = 2$, $\mathbf{v}$ satisfies
+ \[
+ \begin{pmatrix}
+ -6 & 24\\
+ 1 & -4
+ \end{pmatrix}
+ \begin{pmatrix}
+ v_1\\v_2
+ \end{pmatrix} = \mathbf{0},
+ \]
+ and we obtain $\mathbf{v}_1 =
+ \begin{pmatrix}
+ 4\\1
+ \end{pmatrix}$.
+
+ When $\lambda = -8$, we have
+ \[
+ \begin{pmatrix}
+ 4 & 24\\
+ 1 & 6
+ \end{pmatrix}
+ \begin{pmatrix}
+ v_1\\v_2
+ \end{pmatrix} = \mathbf{0},
+ \]
+ and $\mathbf{v}_2 =
+ \begin{pmatrix}
+ -6 \\1
+ \end{pmatrix}$. So the complementary solution is
+ \[
+ \mathbf{Y} = A
+ \begin{pmatrix}
+ 4\\1
+ \end{pmatrix}e^{2t} + B
+ \begin{pmatrix}
+ -6\\1
+ \end{pmatrix}e^{-8t}
+ \]
+ To plot the phase-space trajectories, we first consider the cases where $\mathbf{Y}$ is an eigenvector. If $\mathbf{Y}$ is a scalar multiple of $\begin{pmatrix}4\\1\end{pmatrix}$, then it will keep moving outwards in the same direction. Similarly, if it is a scalar multiple of $\begin{pmatrix}-6\\1\end{pmatrix}$, it will move towards the origin.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0) node [right] {$y_1$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$y_2$};
+
+ \draw [->-=0.785, -<-=0.215, mblue, semithick] (-4, -1) -- (4, 1) node [right] {$v_1 = (4, 1)$};
+ \draw [->-=0.25, -<-=0.75, mblue, semithick] (-4, 0.667) -- (4, -0.667) node [right] {$v_2 = (-6, 1)$};
+ \end{tikzpicture}
+ \end{center}
+ We can now add more (curved) lines based on these two:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0) node [right] {$y_1$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$y_2$};
+
+ \draw [->-=0.785, -<-=0.215, mblue, semithick] (-4, -1) -- (4, 1) node [right] {$v_1 = (4, 1)$};
+ \draw [->-=0.25, -<-=0.75, mblue, semithick] (-4, 0.667) -- (4, -0.667) node [right] {$v_2 = (-6, 1)$};
+
+ \draw [mblue, semithick, ->- = 0.3, ->- = 0.7] (-4, 1) .. controls (0, 0.333) and (0, 0.35) .. (4, 1.3);
+ \draw [mblue, semithick, -<- = 0.3, -<- = 0.7] (-4, -1.3) .. controls (0, -0.3) and (0, -0.333) .. (4, -1);
+
+ \draw [mblue, semithick, ->- = 0.3, ->- = 0.7] (-4, 0.333) .. controls (-2, 0) and (-2, -0.3) .. (-4, -0.6);
+ \draw [mblue, semithick, ->- = 0.3, ->- = 0.7] (4, -0.333) .. controls (2, 0) and (2, 0.3) .. (4, 0.6);
+ \end{tikzpicture}
+ \end{center}
+ To find the particular integral, we try $\mathbf{Y}_p = \mathbf{u}e^t$. Then
+ \begin{align*}
+ \begin{pmatrix}
+ u_1\\u_2
+ \end{pmatrix} -
+ \begin{pmatrix}
+ -4 & 24\\
+ 1 & -2
+ \end{pmatrix}
+ \begin{pmatrix}
+ u_1\\u_2
+ \end{pmatrix}&=
+ \begin{pmatrix}
+ 4\\1
+ \end{pmatrix}\\
+ \begin{pmatrix}
+ 5 & -24\\
+ -1 & 3
+ \end{pmatrix}
+ \begin{pmatrix}
+ u_1\\u_2
+ \end{pmatrix}&=
+ \begin{pmatrix}
+ 4\\1
+ \end{pmatrix}\\
+ \begin{pmatrix}
+ u_1\\u_2
+ \end{pmatrix} &= -\frac{1}{9}
+ \begin{pmatrix}
+ 3 & 24\\
+ 1 & 5
+ \end{pmatrix}
+ \begin{pmatrix}
+ 4 \\ 1
+ \end{pmatrix}\\
+ &=
+ \begin{pmatrix}
+ -4\\-1
+ \end{pmatrix}
+ \end{align*}
+ So the general solution is
+ \[
+ \mathbf{Y} = A
+ \begin{pmatrix}
+ 4\\1
+ \end{pmatrix}e^{2t} + B
+ \begin{pmatrix}
+ -6\\1
+ \end{pmatrix}e^{-8t} -
+ \begin{pmatrix}
+ 4\\1
+ \end{pmatrix}e^t
+ \]
+\end{eg}
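The eigenpairs and the particular integral in this example are simple to verify with plain $2\times 2$ arithmetic. A minimal sketch: substituting $\mathbf{Y}_p = \mathbf{u}e^t$ into $\mathbf{\dot Y} = M\mathbf{Y} + \mathbf{F}$ gives $(I - M)\mathbf{u} = \mathbf{F}$, so we check $\mathbf{u} - M\mathbf{u} = (4, 1)$.

```python
# M and F = (4, 1) e^t from the example.
M = [[-4.0, 24.0], [1.0, -2.0]]

def mv(A, v):
    """2x2 matrix times vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

# Check M v = lambda v for the claimed eigenpairs.
pairs = [(2.0, [4.0, 1.0]), (-8.0, [-6.0, 1.0])]
eigen_ok = all(mv(M, v) == [lam * v[0], lam * v[1]] for lam, v in pairs)

# Particular integral: Y_p = u e^t requires (I - M) u = F.
u = [-4.0, -1.0]
Mu = mv(M, u)
lhs = [u[0] - Mu[0], u[1] - Mu[1]]   # should equal F = (4, 1)
```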
+
+In general, there are three possible cases of $\mathbf{\dot{Y}} = M\mathbf{Y}$, corresponding to three different possibilities for the eigenvalues of $M$:
+\begin{enumerate}
+ \item If $\lambda_1, \lambda_2$ are real with opposite signs ($\lambda_1\lambda_2 < 0$), wlog assume $\lambda_1 > 0$. Then there is a saddle as above:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.785, -<-=0.215, mblue, semithick] (-4, -1) -- (4, 1) node [right] {$v_1 = (4, 1)$};
+ \draw [->-=0.25, -<-=0.75, mblue, semithick] (-4, 0.667) -- (4, -0.667) node [right] {$v_2 = (-6, 1)$};
+
+ \draw [mblue, semithick, ->- = 0.3, ->- = 0.7] (-4, 1) .. controls (0, 0.333) and (0, 0.35) .. (4, 1.3);
+ \draw [mblue, semithick, -<- = 0.3, -<- = 0.7] (-4, -1.3) .. controls (0, -0.3) and (0, -0.333) .. (4, -1);
+
+ \draw [mblue, semithick, ->- = 0.3, ->- = 0.7] (-4, 0.333) .. controls (-2, 0) and (-2, -0.3) .. (-4, -0.6);
+ \draw [mblue, semithick, ->- = 0.3, ->- = 0.7] (4, -0.333) .. controls (2, 0) and (2, 0.3) .. (4, 0.6);
+ \end{tikzpicture}
+ \end{center}
+
+ \item If $\lambda_1, \lambda_2$ are real with the same sign ($\lambda_1\lambda_2 > 0$), wlog assume $|\lambda_1| \geq |\lambda_2|$. Then the phase portrait is
+ \begin{center}
+ \begin{tikzpicture}[rotate=-20]
+ \draw [mblue, semithick] (-3, 0) -- (3, 0) node [right] {$v_1$};
+ \draw [mblue, semithick] (0, -2) -- (0, 2) node [above] {$v_2$};
+
+ \draw [mblue, semithick] (-2, 2) parabola bend (0, 0) (2, 2);
+ \draw [mblue, semithick] (-1, 2) parabola bend (0, 0) (1, 2);
+ \draw [mblue, semithick] (-2, -2) parabola bend (0, 0) (2, -2);
+ \draw [mblue, semithick] (-1, -2) parabola bend (0, 0) (1, -2);
+ \end{tikzpicture}
+ \end{center}
+ If both $\lambda_1, \lambda_2 < 0$, then the arrows point towards the intersection and we say there is a stable node. If both are positive, they point outwards and there is an unstable node.
+
+ \item If $\lambda_1, \lambda_2$ are complex conjugates, then we obtain a spiral
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, semithick, domain = 0:40,samples=300] plot ({2*exp(-\x/10)*cos(50*\x)}, {2*exp(-\x/10)*sin(50*\x)});
+ \end{tikzpicture}
+ \end{center}
+ If $\Re (\lambda_{1, 2}) < 0$, then it spirals inwards. If $\Re (\lambda_{1, 2}) > 0$, then it spirals outwards. If $\Re (\lambda_{1, 2}) = 0$, then we have ellipses with common centres instead of spirals. We can determine whether the spiral is positive (as shown above), or negative (mirror image of the spiral above) by considering the eigenvectors. An example of this is given below.
+\end{enumerate}
+
+\subsection{Nonlinear dynamical systems}
+Consider the second-order autonomous system (i.e.\ $t$ does not explicitly appear in the forcing terms on the right)
+\begin{align*}
+ \dot x &= f(x, y)\\
+ \dot y &= g(x, y)
+\end{align*}
+It can be difficult to solve the equations, but we can learn a lot about phase-space trajectories of these solutions by studying the equilibria and their stability.
+
+\begin{defi}[Equilibrium point]
+ An \emph{equilibrium point} is a point $\mathbf{x}_0 = (x_0, y_0)$ at which $\dot x = \dot y = 0$.
+\end{defi}
+
+Clearly this occurs when $f(x_0, y_0) = g(x_0, y_0) = 0$. We solve these simultaneously for $x_0, y_0$.
+
+To determine the stability, write $x = x_0 + \xi$, $y = y_0 + \eta$. Then
+\begin{align*}
+ \dot \xi &= f(x_0 + \xi, y_0 + \eta)\\
+ &= f(x_0, y_0)+ \xi \frac{\partial f}{\partial x}(\mathbf{x}_0) + \eta \frac{\partial f}{\partial y}(\mathbf{x}_0) + O(\xi^2, \eta^2)
+\end{align*}
+So if $\xi, \eta \ll 1$,
+\[
+ \begin{pmatrix}
+ \dot \xi\\\dot \eta
+ \end{pmatrix} =
+ \begin{pmatrix}
+ f_x & f_y\\
+ g_x & g_y
+ \end{pmatrix}
+ \begin{pmatrix}
+ \xi\\\eta
+ \end{pmatrix}
+\]
+This is a linear system, and we can determine its character from the eigensolutions.
+
+\begin{eg}
+ (Population dynamics: predator-prey system) Suppose that there are $x$ prey and $y$ predators. Then we have the following for the prey:
+ \[
+ \dot x = \underbrace{\alpha x}_{\text{births - deaths}} - \underbrace{\beta x^2}_{\text{natural competition}} - \underbrace{\gamma xy}_{\text{killed by predators}}.
+ \]
+ and the following for the predators:
+ \[
+ \dot y = \underbrace{\varepsilon xy}_{\text{birth/survival rate}} - \underbrace{\delta y}_{\text{natural death rate}}
+ \]
+ For example, let
+ \begin{align*}
+ \dot x &= 8x - 2x^2 - 2xy\\
+ \dot y &= xy - y
+ \end{align*}
+ We find the fixed points: $x(8 - 2x - 2y) = 0$ gives $x = 0$ or $y = 4 - x$.
+
+ We also want $y(x - 1) = 0$ which gives $y = 0$ or $x = 1$.
+
+ So the fixed points are $(0, 0), (4, 0), (1, 3)$.
+
+ Near $(0, 0)$, we have
+ \[
+ \begin{pmatrix}
+ \dot \xi\\\dot \eta
+ \end{pmatrix}=
+ \begin{pmatrix}
+ 8 & 0\\
+ 0 & -1
+ \end{pmatrix}
+ \begin{pmatrix}
+ \xi\\
+ \eta
+ \end{pmatrix}
+ \]
+ We clearly have eigenvalues $8, -1$ with the standard basis vectors as eigenvectors.
+ \begin{center}
+ \begin{tikzpicture}[scale = 0.7]
+ \draw [<->] (-2, 0) -- (2, 0);
+ \draw (2, 0) -- (3, 0);
+ \draw (-2, 0) -- (-3, 0);
+
+ \draw [->] (0, 2) -- (0, 1.4);
+ \draw (0, 1.4) -- (0, -1.4);
+ \draw [->] (0, -2) -- (0, -1.4);
+ \end{tikzpicture}
+ \end{center}
+ Near $(4, 0)$, we have $x = 4 + \xi$, $y = \eta$. Instead of expanding partial derivatives, we can obtain from the equations directly:
+ \begin{align*}
+ \dot\xi &= (4 + \xi)(8 - 8 - 2\xi - 2\eta)\\
+ &= - 8\xi - 8\eta -2\xi^2 - 2\xi\eta\\
+ \dot\eta &= \eta(4 + \xi - 1)\\
+ &= 3\eta + \xi\eta
+ \end{align*}
+ Ignoring the second-order terms, we have
+ \[
+ \begin{pmatrix}
+ \dot\xi\\\dot\eta
+ \end{pmatrix} =
+ \begin{pmatrix}
+ -8 & -8 \\
+ 0 & 3\\
+ \end{pmatrix}
+ \begin{pmatrix}
+ \xi\\\eta
+ \end{pmatrix}
+ \]
+ The eigenvalues are $-8$ and $3$, with associated eigenvectors $(1, 0), (8, -11)$.
+ \begin{center}
+ \begin{tikzpicture}[scale = 0.7]
+ \draw (-2, 0) -- (2, 0);
+ \draw [<-] (2, 0) -- (3, 0);
+ \draw [<-] (-2, 0) -- (-3, 0);
+
+ \draw (-1, 2) -- (-.7, 1.4);
+ \draw [<->](-.7, 1.4) -- (.7, -1.4);
+ \draw (1, -2) -- (.7, -1.4);
+ \end{tikzpicture}
+ \end{center}
+ Near $(1, 3)$, we have $x = 1 + \xi, y = 3 + \eta$. So
+ \begin{align*}
+ \dot \xi &= (1 + \xi)(8 - 2 - 2\xi - 6 - 2\eta)\\
+ &\approx -2\xi - 2\eta\\
+ \dot\eta &= (3 + \eta)(1 + \xi - 1)\\
+ &\approx 3\xi
+ \end{align*}
+ So
+ \[
+ \begin{pmatrix}
+ \dot\xi\\\dot\eta
+ \end{pmatrix} =
+ \begin{pmatrix}
+ -2 & -2\\
+ 3 & 0
+ \end{pmatrix}
+ \begin{pmatrix}
+ \xi\\\eta
+ \end{pmatrix}
+ \]
+ The eigenvalues are $- 1\pm i\sqrt{5}$. Since it is complex with a negative real part, it is a stable spiral.
+
+ We can determine the chirality of the spirals by considering what happens to a small perturbation to the right $
+ \begin{pmatrix}
+ \xi\\0
+ \end{pmatrix}$ with $\xi > 0$. We have $
+ \begin{pmatrix}
+ \dot\xi\\\dot\eta
+ \end{pmatrix} =
+ \begin{pmatrix}
+ -2\xi\\3\xi
+ \end{pmatrix}$. So $\mathbf{x}$ will head top-left, and the spiral is counter-clockwise (``positive'').
+
+ We can patch the three equilibrium points together and obtain the following phase portrait:
+ \begin{center}
+ \includegraphics[width=180pt]{images/de_population_phase.pdf}
+ \end{center}
+ We see that $(1, 3)$ is a stable equilibrium which almost all solutions spiral towards.
+\end{eg}
+
+\section{Partial differential equations (PDEs)}
+\subsection{First-order wave equation}
+Consider the equation of the form
+\[
+ \frac{\partial y}{\partial t} = c\frac{\partial y}{\partial x},
+\]
+with $c$ a constant and $y$ a function of $x$ and $t$. This is known as the \emph{(first-order) wave equation}. We will later see that solutions correspond to waves travelling in one direction.
+
+We write this as
+\[
+ \frac{\partial y}{\partial t} - c\frac{\partial y}{\partial x} = 0.
+\]
+Recall that along a path $x = x(t)$ so that $y = y(x(t), t)$,
+\begin{align*}
+ \frac{\d y}{\d t} &= \frac{\partial y}{\partial x}\frac{\d x}{\d t} + \frac{\partial y}{\partial t}\\
+ &= \frac{\d x}{\d t}\frac{\partial y}{\partial x} + c\frac{\partial y}{\partial x}
+\end{align*}
+by the chain rule. Now we choose a path along which
+\[
+ \frac{\d x}{\d t} = -c. \tag{1}
+\]
+Along such paths,
+\[
+ \frac{\d y}{\d t} = 0 \tag{2}
+\]
+So we have replaced the original partial differential equation with a pair of ordinary differential equations.
+
+Each path that satisfies (1) can be described by $x = x_0 - ct$, where $x_0$ is a constant. We can write this as $x_0 = x + ct$.
+
+From (2), along each of these paths, $y$ is constant. So suppose for each $x_0$, the value of $y$ along the path $x_0 = x + ct$ is given by $f(x_0)$, where $f$ is an arbitrary function. Then the solution to the wave equation is
+\[
+ y = f(x + ct),
+\]
+By differentiating this directly, we can easily check that every function of this form is a solution to the wave equation.
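That check can also be done symbolically. A small sketch (assuming the SymPy library; any computer algebra system would do), verifying that $y = f(x + ct)$ solves the equation for an arbitrary differentiable $f$:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f = sp.Function('f')

# y = f(x + c t) should satisfy  y_t - c y_x = 0  for any f
y = f(x + c*t)
residual = sp.diff(y, t) - c*sp.diff(y, x)
print(sp.simplify(residual))  # 0
```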
+
+The contours of $y$ look rather boring.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, -1) -- (0, 3) node [above] {$t$};
+ \node [anchor = north east] at (0, 0) {$O$};
+
+ \foreach \x in {1, 2, 3, 4, 5} {
+ \draw [red] (\x- 0.5, 0) -- (\x - 1.5, 2.5);
+ }
+ \node [right] at (4, 1.5) {contours of $y$};
+ \end{tikzpicture}
+\end{center}
+Note that as we move up the time axis, we are simply taking the $t = 0$ solution and translating it to the left.
+
+The paths we've identified are called the ``characteristics'' of the wave equation. In this particular example, $y$ is constant along the characteristics (because the equation is unforced).
+
+We usually have initial conditions e.g.
+\[
+ y(x, 0) = x^2 - 3
+\]
+Since we know that $y = f(x + ct)$ and $f(x) = x^2 - 3$, $y$ must be given by
+\[
+ y = (x + ct)^2 - 3.
+\]
+We can plot the $xy$ curve for different values of $t$ to obtain this:
+\begin{center}
+ \begin{tikzpicture}[yscale=0.5]
+ \draw [->] (-5, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, -3.5) -- (0, 6.5) node [above] {$y$};
+
+ \foreach \t in {-2,...,2}{
+ \pgfmathsetmacro\k{(\t + 6) * 12.5}
+ \draw [mblue!\k] (3 + \t ,6) parabola bend (\t , -3) (- 3 + \t, 6);
+ }
+ \end{tikzpicture}
+\end{center}
+We see that each solution is just a translation of the $t = 0$ version.
+
+We can also solve forced equations, such as
+\[
+ \frac{\partial y}{\partial t} + 5\frac{\partial y}{\partial x} = e^{-t},\quad y(x, 0) = e^{-x^2}.
+\]
+Along each path $x_0 = x - 5t$, we have $\frac{\d y}{\d t} = e^{-t}$. So $y = f(x_0) - e^{-t}$ for some function $f$.
+
+Using the initial condition provided: at $t = 0$, we have $y = f(x_0) - 1$ and $x = x_0$. So $f(x_0) - 1 = e^{-x_0^2}$, i.e.\ $f(x_0) = 1 + e^{-x_0^2}$. So
+\[
+ y = 1 + e^{-(x - 5t)^2} - e^{-t}.
+\]
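As a quick symbolic check of this forced solution (a sketch assuming SymPy), we can verify both the PDE and the initial condition:

```python
import sympy as sp

x, t = sp.symbols('x t')
y = 1 + sp.exp(-(x - 5*t)**2) - sp.exp(-t)

# PDE residual  y_t + 5 y_x - e^{-t}  should vanish identically
residual = sp.diff(y, t) + 5*sp.diff(y, x) - sp.exp(-t)
print(sp.simplify(residual))      # 0

# and the initial condition is recovered at t = 0
print(sp.simplify(y.subs(t, 0)))  # exp(-x**2)
```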
+\subsection{Second-order wave equation}
+We consider equations in the following form:
+\[
+ \frac{\partial ^2 y}{\partial t^2} = c^2 \frac{\partial^2 y}{\partial x^2}.
+\]
+This is the \emph{second-order wave equation}, often known as the ``hyperbolic equation'' because its form resembles that of a hyperbola (which has the form $x^2 - b^2 y^2 = 1$). However, the differential equation has no real connection to hyperbolae.
+
+This equation models an actual wave in one dimension. Consider a horizontal string, along the $x$ axis. We let $y(x, t)$ be the vertical displacement of the string at the point $x$ at time $t$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) sin (1.5, 0.667) cos (3, 0) sin (4.5, -0.667) cos (6, 0);
+ \draw [dashed] (0, 0) -- (6, 0);
+ \draw [->] (1.5, 0) -- (1.5, 0.667) node [pos = 0.5, right] {$y$};
+ \end{tikzpicture}
+\end{center}
+Suppose that $\rho(x)$ is the mass per unit length of the string. By Newton's second law, the mass per unit length times the acceleration, $\displaystyle \rho \frac{\partial ^2 y}{\partial t^2}$, equals the restoring force on the string, which is proportional to the second derivative $\displaystyle\frac{\partial^2 y}{\partial x^2}$. So we obtain this wave equation.
+
+(Why is it proportional to the second derivative? It certainly cannot be proportional to $y$, because we get no force if we just move the whole string upwards. It also cannot be proportional to $\partial y/\partial x$: if we have a straight slope, then the force pulling upwards is the same as the force pulling downwards, and we should have no net force. We have a force only if the string is curved, and curvature is measured by the second derivative.)
+
+To solve the equation, suppose that $c$ is constant. Then we can write
+\begin{align*}
+ \frac{\partial ^2 y}{\partial t^2} - c^2 \frac{\partial^2 y}{\partial x^2} &= 0\\
+ \left(\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right) y &= 0
+\end{align*}
+If $y = f(x + ct)$, then the first operator differentiates it to give a constant (as in the first-order wave equation). Then applying the second operator differentiates it to $0$. So $y = f(x + ct)$ is a solution.
+
+Since the operators are commutative, $y = f(x - ct)$ is also a solution. Since the equation is linear, the general solution is
+\[
+ y = f(x + ct) + g(x - ct).
+\]
+This shows that the solution is a superposition of waves travelling to the left and waves travelling to the right.
+
+We can show that this is indeed the most general solution by substituting $\xi = x + ct$ and $\eta = x - ct$. We can show, using the chain rule, that $y_{tt} - c^2 y_{xx} \equiv -4c^2 y_{\eta\xi} = 0$. Integrating twice gives $y = f(\xi) + g(\eta)$.
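The general form can again be checked symbolically. A sketch (assuming SymPy), verifying that d'Alembert's form solves the equation for arbitrary $f$ and $g$:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f, g = sp.Function('f'), sp.Function('g')

# y = f(x + c t) + g(x - c t) should satisfy  y_tt - c^2 y_xx = 0
y = f(x + c*t) + g(x - c*t)
residual = sp.diff(y, t, 2) - c**2*sp.diff(y, x, 2)
print(sp.simplify(residual))  # 0
```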
+
+How many boundary conditions do we need to have a unique solution? In ODEs, we simply count the order of the equation. In PDEs, we have to count over all variables. In this case, we need 2 boundary conditions and 2 initial conditions. For example, we can have:
+\begin{itemize}
+ \item Initial conditions: at $t = 0$,
+ \begin{gather*}
+ y = \frac{1}{1 + x^2}\\
+ \frac{\partial y}{\partial t} = 0
+ \end{gather*}
+ \item Boundary conditions: $y \to 0$ as $x \to \pm \infty$.
+\end{itemize}
+We know that the solution has the form
+\[
+ y = f(x + ct) + g(x - ct).
+\]
+The first initial condition gives
+\[
+ f(x) + g(x) = \frac{1}{1 + x^2}\tag{1}
+\]
+The second initial condition gives
+\[
+ \left.\frac{\partial y}{\partial t}\right|_{t = 0} = cf'(x) - cg'(x) = 0\tag{2}
+\]
+From (2), we know that $f' = g'$. So $f$ and $g$ differ by a constant, and without loss of generality we can take them to be equal: if we had, say, $f = g + 2$, then we could use $f - 1$ and $g + 1$ instead, writing $y = (f(x + ct) - 1) + (g(x - ct) + 1)$, and the new pair of functions are equal.
+
+From (1), we must have
+\[
+ f(x) = g(x) = \frac{1}{2(1 + x^2)}
+\]
+So, overall,
+\[
+ y = \frac{1}{2}\left[\frac{1}{1 + (x + ct)^2} + \frac{1}{1 + (x - ct)^2}\right]
+\]
+where we substituted $x + ct$ and $x - ct$ for $x$ in $f$ and $g$ respectively.
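This particular solution can be verified against both initial conditions and the PDE (a sketch assuming SymPy):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
y = (1/(1 + (x + c*t)**2) + 1/(1 + (x - c*t)**2))/2

# wave equation residual vanishes
print(sp.simplify(sp.diff(y, t, 2) - c**2*sp.diff(y, x, 2)))  # 0
# first initial condition: y(x, 0) = 1/(1 + x^2)
print(sp.simplify(y.subs(t, 0) - 1/(1 + x**2)))               # 0
# second initial condition: y_t(x, 0) = 0
print(sp.simplify(sp.diff(y, t).subs(t, 0)))                  # 0
```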
+
+\subsection{The diffusion equation}
+Heat conduction in a solid in one dimension is modelled by the diffusion equation
+\[
+ \frac{\partial T}{\partial t} = \kappa \frac{\partial^2 T}{\partial x^2}
+\]
+This is known as a parabolic PDE (parabolic because it resembles $y = ax^2$).
+
+Here $T(x, t)$ is the temperature and the constant $\kappa$ is the \emph{diffusivity}.
+
+\begin{eg}
+ Consider an infinitely long bar heated at one end ($x = 0$). Note that in general, the ``velocity'' $\partial T/\partial t$ is proportional to the curvature $\partial^2 T/\partial x^2$, while in the wave equation it is the ``acceleration'' that is proportional to the curvature. As a result, instead of being oscillatory, the diffusion equation is dissipative, and all unevenness simply decays away.
+
+ Suppose $T(x,0) = 0$, $T(0, t) = H(t) =
+ \begin{cases}
+ 0 & t < 0\\
+ 1 & t > 0
+ \end{cases}$, and $T(x, t)\to 0$ as $x\to \infty$. In words, this says that the rod is initially cool (at temperature $0$), and one end is then heated up from $t = 0$ onwards.
+
+ There is a \emph{similarity solution} of the diffusion equation valid on an infinite domain (or our semi-infinite domain) in which $T(x, t) = \theta(\eta)$, where $\displaystyle \eta = \frac{x}{2\sqrt{\kappa t}}$.
+
+ Applying the chain rule, we have
+ \begin{align*}
+ \frac{\partial T}{\partial x} &= \frac{\d \theta}{\d \eta} \frac{\partial \eta}{\partial x}\\
+ &= \frac{1}{2\sqrt{\kappa t}} \theta'(\eta)\\
+ \frac{\partial^2 T}{\partial x^2} &= \frac{1}{2\sqrt{\kappa t}} \frac{\d\theta'}{\d\eta}\frac{\partial \eta}{\partial x}\\
+ &= \frac{1}{4\kappa t}\theta''(\eta)\\
+ \frac{\partial T}{\partial t} &= \frac{\d \theta}{\d \eta}\frac{\partial \eta}{\partial t}\\
+ &= -\frac{1}{2}\frac{x}{2\sqrt{\kappa}}\frac{1}{t^{3/2}} \theta'(\eta)\\
+ &= -\frac{\eta}{2t}\theta'(\eta)
+ \end{align*}
+ Putting this into the diffusion equation yields
+ \begin{align*}
+ -\frac{\eta}{2t}\theta' &= \kappa \frac{1}{4\kappa t}\theta''\\
+ \theta'' + 2\eta\theta' &= 0
+ \end{align*}
+ This is an ordinary differential equation for $\theta(\eta)$. This can be seen as a first-order equation for $\theta'$ with non-constant coefficients. Use the integrating factor $\mu = \exp(\int 2\eta \;\d \eta) = e^{\eta^2}$. So
+ \begin{align*}
+ (e^{\eta^2}\theta')' &= 0\\
+ \theta' &= Ae^{-\eta^2}\\
+ \theta &= A\int_0^\eta e^{-u^2}\;\d u + B\\
+ &= \alpha\erf(\eta) + B
+ \end{align*}
+ where $\erf(\eta) = \frac{2}{\sqrt{\pi}} \int_0^\eta e^{-u^2}\;\d u$ from statistics, and $\erf(\eta)\to 1$ as $\eta\to \infty$.
+
+ Now look at the boundary and initial conditions (recall $\eta = x/(2\sqrt{\kappa t})$) and express them in terms of $\eta$. As $x \to 0$ with $t$ fixed, we have $\eta \to 0$. So $\theta = 1$ at $\eta = 0$.
+
+ Also, as $x\to \infty$ (or $t\to 0^+$), we have $\eta \to \infty$. So $\theta \to 0$ as $\eta \to \infty$.
+
+ So $\theta(0) = 1 \Rightarrow B = 1$. Colloquially, $\theta(\infty) = 0$ gives $\alpha = -1$. So $\theta = 1 - \erf(\eta)$. This is also written as $\erfc(\eta)$, the \emph{complementary error function} of $\eta$. So
+ \[
+ T = \erfc\left(\frac{x}{2\sqrt{\kappa t}}\right)
+ \]
+ In general, at any particular fixed time $t_0$, $T(x)$ looks like
+ \begin{center}
+ \begin{tikzpicture}[xscale=3]
+ \draw [->] (0, 0) -- (2.1, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$T$};
+
+ %% approximation of error function
+ \draw [mblue, semithick, domain=0:2] plot (\x, {2.5 * (1 + 0.278393 * \x + 0.230389*\x*\x + 0.000972*\x*\x*\x + 0.078108 * \x*\x*\x*\x)^(-4)});
+ \end{tikzpicture}
+ \end{center}
+ with decay length $O(\sqrt{\kappa t})$. So if we actually have a finite bar of length $L$, we can treat it as infinite if $\sqrt{\kappa t} \ll L$, i.e.\ if $t\ll L^2/\kappa$.
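As a final check of the similarity solution (a sketch assuming SymPy, whose erfc matches the definition above), $T = \erfc(x/(2\sqrt{\kappa t}))$ does satisfy the diffusion equation:

```python
import sympy as sp

x = sp.Symbol('x', positive=True)
t = sp.Symbol('t', positive=True)
kappa = sp.Symbol('kappa', positive=True)

# similarity solution T = erfc(x / (2 sqrt(kappa t)))
T = sp.erfc(x/(2*sp.sqrt(kappa*t)))

# diffusion equation residual  T_t - kappa T_xx  should vanish
residual = sp.diff(T, t) - kappa*sp.diff(T, x, 2)
print(sp.simplify(residual))  # 0
```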
+\end{eg}
+\end{document}
diff --git a/books/cam/IA_M/groups.tex b/books/cam/IA_M/groups.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b9945c636011eba0b13a14a7af65d953b667a81d
--- /dev/null
+++ b/books/cam/IA_M/groups.tex
@@ -0,0 +1,2518 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IA}
+\def\nterm {Michaelmas}
+\def\nyear {2014}
+\def\nlecturer {J.\ Goedecke}
+\def\ncourse {Groups}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+ \noindent\textbf{Examples of groups}\\
+ Axioms for groups. Examples from geometry: symmetry groups of regular polygons, cube, tetrahedron. Permutations on a set; the symmetric group. Subgroups and homomorphisms. Symmetry groups as subgroups of general permutation groups. The M\"obius group; cross-ratios, preservation of circles, the point at infinity. Conjugation. Fixed points of M\"obius maps and iteration.\hspace*{\fill} [4]
+
+ \vspace{10pt}
+ \noindent\textbf{Lagrange's theorem}\\
+ Cosets. Lagrange's theorem. Groups of small order (up to order 8). Quaternions. Fermat-Euler theorem from the group-theoretic point of view.\hspace*{\fill} [5]
+
+ \vspace{10pt}
+ \noindent\textbf{Group actions}\\
+ Group actions; orbits and stabilizers. Orbit-stabilizer theorem. Cayley's theorem (every group is isomorphic to a subgroup of a permutation group). Conjugacy classes. Cauchy's theorem.\hspace*{\fill} [4]
+
+ \vspace{10pt}
+ \noindent\textbf{Quotient groups}\\
+ Normal subgroups, quotient groups and the isomorphism theorem.\hspace*{\fill} [4]
+
+ \vspace{10pt}
+ \noindent
+ \textbf{Matrix groups}\\
+ The general and special linear groups; relation with the M\"obius group. The orthogonal and special orthogonal groups. Proof (in $\R^3$) that every element of the orthogonal group is the product of reflections and every rotation in $\R^3$ has an axis. Basis change as an example of conjugation.\hspace*{\fill} [3]
+
+ \vspace{10pt}
+ \noindent\textbf{Permutations}\\
+ Permutations, cycles and transpositions. The sign of a permutation. Conjugacy in $S_n$ and in $A_n$. Simple groups; simplicity of $A_5$.\hspace*{\fill} [4]}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Group theory is an example of \emph{algebra}. In pure mathematics, algebra (usually) does not refer to the boring mindless manipulation of symbols. Instead, in algebra, we have some set of objects with some operations on them. For example, we can take the integers with addition as the operation. However, in algebra, we allow \emph{any} set and \emph{any} operations, not just numbers.
+
+Of course, such a definition is too broad to be helpful. We categorize algebraic structures into different types. In this course, we will study a particular kind of structures, \emph{groups}. In the IB Groups, Rings and Modules course, we will study rings and modules as well.
+
+These different kinds of structures are defined by certain \emph{axioms}. The \emph{group axioms} will say that the operation must follow certain rules, and any set and operation that satisfies these rules will be considered to form a group. We will then have a different set of axioms for rings, modules etc.
+
+As mentioned above, the most familiar kinds of algebraic structures are number systems such as integers and rational numbers. The focus of group theory, however, is not on things that resemble ``numbers''. Instead, it is the study of \emph{symmetries}.
+
+First of all, what is a symmetry? We are all familiar with, say, the symmetries of an (equilateral) triangle (we will always assume the triangle is equilateral). We rotate a triangle by $120^\circ$, and we get the original triangle. We say that rotating by $120^\circ$ is a symmetry of a triangle. In general, a symmetry is something we do to an object that leaves the object intact.
+
+Of course, we don't require that the symmetry leaves \emph{everything} intact. Otherwise, we would only be allowed to do nothing. Instead, we require certain important things to be intact. For example, when considering the symmetries of a triangle, we only care about how the resultant object looks, but don't care about where the individual vertices went.
+
+In the case of the triangle, we have six symmetries: three rotations (rotation by $0^\circ, 120^\circ$ and $240^\circ$), and three reflections along the axes below:
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0,120,240} {
+ \begin{scope}[rotate=\x]
+ \draw (-1, -0.577) -- (1, -0.577);
+ \draw [mred, dashed] (0, -1.2) -- (0, 1.778);
+ \end{scope}
+ }
+ \end{tikzpicture}
+\end{center}
+These six together form the underlying set of the \emph{group of symmetries}. A more sophisticated example is the symmetries of $\R^3$. We define these as operations on $\R^3$ that leave distances between points unchanged. These include translations, rotations, reflections, and combinations of these.
+
+So what is the operation? This operation combines two symmetries to give a new symmetry. The natural thing to do is to do the symmetry one after another. For example, if we combine the two $120^\circ$ rotations, we get a $240^\circ$ rotation.
+
+Now we are studying algebra, not geometry. So to define the group, we \emph{abstract away} the triangle. Instead, we define the group to be six objects, say $\{e, r, r^2, s, rs, r^2s\}$, with rules defining how we combine two elements to get a third. Officially, we do not mention the triangle at all when defining the group.
+
+We can now come up with the group axioms. What rules should the set of symmetries obey? First of all, we must have a ``do nothing'' symmetry. We call this the \emph{identity} element. When we compose the identity with another symmetry, the other symmetry is unchanged.
+
+Secondly, given a symmetry, we can do the reverse symmetry. So for any element, there is an inverse element that, when combined with the original, gives the identity.
+
+Finally, given three symmetries, we can combine them, one after another. If we denote the operation of the group as $*$, then if we have three symmetries, $x, y, z$, we should be able to form $x*y*z$. If we want to define it in terms of the binary operation $*$, we can define it as $(x*y)*z$, where we first combine the first two symmetries, then combine the result with the third. Alternatively, we can also define it as $x*(y*z)$. Intuitively, these two should give the same result, since both are applying $x$ after $y$ after $z$. Hence we have the third rule $x*(y*z) = (x*y)*z$.
+
+Now a group is any set with an operation that satisfies the three rules above. In group theory, the objective is to study the properties of groups just assuming these three axioms. It turns out that there is a \emph{lot} we can talk about.
+
+\section{Groups and homomorphisms}
+\subsection{Groups}
+\begin{defi}[Binary operation]
+ A \emph{(binary) operation} is a way of combining two elements to get a new element. Formally, it is a map $*: A \times A \rightarrow A$.
+\end{defi}
+\begin{defi}[Group]
+ A \emph{group} is a set $G$ with a binary operation $*$ satisfying the following axioms:
+ \begin{enumerate}[label=\arabic{*}.]
+ \item There is some $e \in G$ such that for all $a$, we have
+ \[
+ a*e = e*a = a.\tag{identity}
+ \]
+ \item For all $a \in G$, there is some $a^{-1} \in G$ such that
+ \[
+ a*a^{-1} = a^{-1}*a = e.\tag{inverse}
+ \]
+ \item For all $a, b, c\in G$, we have
+ \[
+ (a*b)*c = a*(b*c).\tag{associativity}
+ \]
+ \end{enumerate}
+\end{defi}
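For a finite set with an explicit operation table, these axioms can be checked mechanically by brute force. A sketch in Python (an illustration of the axioms, not part of the course), tested on $\Z_6$:

```python
from itertools import product

def is_group(G, op):
    """Brute-force check of closure, associativity, identity and inverses."""
    # closure: op must land back in G
    if any(op(a, b) not in G for a, b in product(G, G)):
        return False
    # associativity: (a*b)*c = a*(b*c)
    if any(op(op(a, b), c) != op(a, op(b, c)) for a, b, c in product(G, G, G)):
        return False
    # identity: some e with e*a = a = a*e for all a
    ids = [e for e in G if all(op(e, a) == a == op(a, e) for a in G)]
    if not ids:
        return False
    e = ids[0]
    # inverses: every a has some b with a*b = e = b*a
    return all(any(op(a, b) == e == op(b, a) for b in G) for a in G)

Zn = set(range(6))
print(is_group(Zn, lambda a, b: (a + b) % 6))  # True: (Z_6, +_6) is a group
print(is_group(Zn, lambda a, b: (a * b) % 6))  # False: 0 has no inverse
```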
+
+\begin{defi}[Order of group]
+ The \emph{order} of the group, denoted by $|G|$, is the number of elements in $G$. A group is a finite group if the order is finite.
+\end{defi}
+
+Note that \emph{technically}, the inverse axiom makes no sense, since we have not specified what $e$ is. Even if we take it to be the $e$ given by the identity axiom, the identity axiom only states there is \emph{some} $e$ that satisfies that property, but there could be many! We don't know which one $a * a^{-1}$ is supposed to be equal to! So we should technically take that to mean there is some $a^{-1}$ such that $a*a^{-1}$ and $a^{-1} * a$ satisfy the identity axiom. Of course, we will soon show that identities are indeed unique, and we will happily talk about ``the'' identity.
+
+Some people put a zeroth axiom called ``closure'':
+\begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{-1}
+ \item For all $a, b \in G$, we have $a * b \in G$.\hfill (closure)
+\end{enumerate}
+Technically speaking, this axiom also makes no sense --- when we say $*$ is a binary operation, by definition, $a * b$ \emph{must} be a member of $G$. However, in practice, we often have to check that this axiom actually holds. For example, if we let $G$ be the set of all matrices of the form
+\[
+ \begin{pmatrix}
+ 1 & x & y\\
+ 0 & 1 & z\\
+ 0 & 0 & 1
+ \end{pmatrix}
+\]
+under matrix multiplication, we will have to check that the product of two such matrices is indeed a matrix of this form. Officially, we are checking that the binary operation is a well-defined operation on $G$.
+
+It is important to know that it is generally \emph{not} true that $a*b = b*a$. There is no \emph{a priori} reason why this should be true. For example, if we are considering the symmetries of a triangle, rotating and then reflecting is different from reflecting and then rotating.
+
+However, for some groups, this happens to be true. We call such groups \emph{abelian groups}.
+\begin{defi}[Abelian group]
+ A group is \emph{abelian} if it satisfies
+ \begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{3}
+ \item $(\forall a, b \in G)\, a*b = b*a$. \hfill (commutativity)
+ \end{enumerate}
+\end{defi}
+If it is clear from context, we are lazy and leave out the operation $*$, and write $a*b$ as $ab$. We also write $a^2 = aa$, $a^n = \underbrace{aaa\cdots a}_{n \text{ copies}}$, $a^0 = e$, $a^{-n} = (a^{-1})^n$ etc.
+\begin{eg}
+ The following are abelian groups:
+ \begin{enumerate}
+ \item $\Z$ with $+$
+ \item $\Q$ with $+$
+ \item $\Z_n$ (integers mod $n$) with $+_n$
+ \item $\Q^*$ with $\times$
+ \item $\{-1, 1\}$ with $\times$
+ \end{enumerate}
+ The following are non-abelian groups:
+ \begin{enumerate}[resume]
+ \item Symmetries of an equilateral triangle (or any $n$-gon) with composition. ($D_{2n}$)
+ \item $2\times 2$ invertible matrices with matrix multiplication ($\GL_2(\R)$)
+ \item Symmetry groups of 3D objects
+ \end{enumerate}
+\end{eg}
+
+Recall that the first group axiom requires that there exists \emph{an} identity element, which we shall call $e$. Then the second requires that for each $a$, there is an inverse $a^{-1}$ such that $a^{-1}a = e$. This only makes sense if there is only one identity $e$, or else which identity should $a^{-1}a$ be equal to?
+
+We shall now show that there can only be one identity. It turns out that the inverses are also unique. So we will talk about \emph{the} identity and \emph{the} inverse.
+\begin{prop}
+ Let $(G, *)$ be a group. Then
+ \begin{enumerate}
+ \item The identity is unique.
+ \item Inverses are unique.
+ \end{enumerate}
+\end{prop}
+\begin{proof}\leavevmode
+ \begin{enumerate}[label=(\roman{*})]
+ \item Suppose $e$ and $e'$ are both identities. Then we have $ee' = e'$, treating $e$ as an identity, and $ee' = e$, treating $e'$ as an identity. Thus $e = e'$.
+ \item Suppose $a^{-1}$ and $b$ both satisfy the inverse axiom for some $a\in G$. Then $b = be = b(aa^{-1}) = (ba)a^{-1} = ea^{-1} = a^{-1}$. Thus $b = a^{-1}$.\qedhere
+ \end{enumerate}
+\end{proof}
+\begin{prop}
+ Let $(G, *)$ be a group and $a, b\in G$. Then
+ \begin{enumerate}
+ \item $(a^{-1})^{-1} = a$
+ \item $(ab)^{-1} = b^{-1}a^{-1}$
+ \end{enumerate}
+\end{prop}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Given $a^{-1}$, both $a$ and $(a^{-1})^{-1}$ satisfy
+ \[
+ xa^{-1} = a^{-1}x = e.
+ \]
+ By uniqueness of inverses, $(a^{-1})^{-1} = a$.
+ \item We have
+ \begin{align*}
+ (ab)(b^{-1}a^{-1}) &= a(bb^{-1})a^{-1} \\
+ &= aea^{-1}\\
+ &= aa^{-1}\\
+ &= e
+ \end{align*}
+ Similarly, $(b^{-1}a^{-1})ab = e$. So $b^{-1}a^{-1}$ is an inverse of $ab$. By the uniqueness of inverses, $(ab)^{-1} = b^{-1}a^{-1}$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Sometimes if we have a group $G$, we might want to discard some of the elements. For example if $G$ is the group of all symmetries of a triangle, we might one day decide that we hate reflections because they reverse orientation. So we only pick the rotations in $G$ and form a new, smaller group. We call this a \emph{subgroup} of $G$.
+
+\begin{defi}[Subgroup]
+ A subset $H \subseteq G$ is a \emph{subgroup} of $G$, written $H\leq G$, if $H$ with the restricted operation $*$ from $G$ is also a group.
+\end{defi}
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item $(\Z, +)\leq (\Q, +) \leq (\R, +)\leq (\C, +)$
+ \item $(\{e\}, *) \leq (G, *)$ (trivial subgroup)
+ \item $G \leq G$
+ \item $(\{\pm 1\}, \times) \leq (\Q^*, \times)$
+ \end{itemize}
+\end{eg}
+
+According to the definition, to prove that $H$ is a subgroup of $G$, we need to make sure $H$ satisfies all group axioms. However, this is often tedious. Instead, there are some simplified criteria to decide whether $H$ is a subgroup.
+\begin{lemma}[Subgroup criteria I]
+ Let $(G, *)$ be a group and $H\subseteq G$. $H \leq G$ iff
+ \begin{enumerate}
+ \item $e \in H$
+ \item $(\forall a, b\in H)\,ab \in H$
+ \item $(\forall a \in H)\,a^{-1} \in H$
+ \end{enumerate}
+\end{lemma}
+\begin{proof}
+ The group axioms are satisfied as follows:
+ \begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{-1}
+ \item Closure: (ii)
+ \item Identity: (i). Note that $H$ and $G$ must have the same identity. Suppose that $e_H$ and $e_G$ are the identities of $H$ and $G$ respectively. Then $e_He_H = e_H$. Now $e_H$ has an inverse in $G$. Thus we have $e_He_He_H^{-1} = e_He_H^{-1}$. So $e_He_G = e_G$. Thus $e_H = e_G$.
+ \item Inverse: (iii)
+ \item Associativity: inherited from $G$.\qedhere
+ \end{enumerate}
+\end{proof}
+Humans are lazy, and the test above is still too complicated. We thus come up with an even simpler test:
+
+\begin{lemma}[Subgroup criteria II]
+ A subset $H\subseteq G$ is a subgroup of $G$ iff:
+ \begin{enumerate}[label=(\Roman{*})]
+ \item $H$ is non-empty
+ \item $(\forall a, b\in H)\,ab^{-1}\in H$
+ \end{enumerate}
+\end{lemma}
+\begin{proof}
+ (I) and (II) follow trivially from (i), (ii) and (iii).
+
+ To prove that (I) and (II) imply (i), (ii) and (iii), we have
+ \begin{enumerate}
+ \item $H$ must contain at least one element $a$. Then $aa^{-1} = e \in H$.
+ \setcounter{enumi}{2}
+ \item $ea^{-1} = a^{-1} \in H$.
+ \setcounter{enumi}{1}
+ \item $a(b^{-1})^{-1} = ab\in H$.
+ \end{enumerate}
+\end{proof}
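For finite examples the criterion is easy to apply mechanically. A sketch in Python (specializing to subsets of $\Z_n$ under addition, where the inverse of $b$ is $(-b) \bmod n$, so $ab^{-1}$ reads $(a - b) \bmod n$):

```python
def is_subgroup(H, n):
    """Subgroup criterion II for H <= (Z_n, +):
    H is non-empty, and a * b^{-1}, i.e. (a - b) mod n, stays in H."""
    return bool(H) and all((a - b) % n in H for a in H for b in H)

print(is_subgroup({0, 2, 4}, 6))  # True: the multiples of 2 in Z_6
print(is_subgroup({0, 1, 2}, 6))  # False: 0 - 1 = 5 (mod 6) is not in the set
```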
+\begin{prop}
+ The subgroups of $(\Z, +)$ are exactly $n\Z$, for $n\in \N$ ($n\Z$ is the integer multiples of $n$).
+\end{prop}
+\begin{proof}
+ Firstly, it is trivial to show that for any $n \in \N$, $n\Z$ is a subgroup. We now show that any subgroup must be of the form $n\Z$.
+
+ Let $H\leq \Z$. We know $0\in H$. If $H$ has no other elements, then $H = 0\Z$. Otherwise, $H$ contains some non-zero element, and hence (taking the inverse if necessary) a positive one. Pick the smallest positive integer $n$ in $H$. We claim that $H = n\Z$.
+
+ Suppose some $a\in H$ satisfies $n \nmid a$. Write $a = pn + q$, where $0 < q < n$. Since $q = a - pn\in H$ and $q < n$, this contradicts $n$ being the smallest positive member of $H$. So every $a\in H$ is divisible by $n$. Also, by closure, all multiples of $n$ must be in $H$. So $H = n\Z$.
+\end{proof}
+\subsection{Homomorphisms}
+It is often helpful to study functions between different groups. First, we need to define what a function is. These definitions should be familiar from IA Numbers and Sets.
+
+\begin{defi}[Function]
+ Given two sets $X$, $Y$, a \emph{function} $f: X \rightarrow Y$ sends each $x\in X$ to a particular $f(x)\in Y$. $X$ is called the domain and $Y$ is the co-domain.
+\end{defi}
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Identity function: for any set $X$, $1_X: X \rightarrow X$ with $1_X(x) = x$ is a function. This is also written as $\mathrm{id}_X$.
+ \item Inclusion map: $\iota: \Z \rightarrow \Q$: $\iota(n) = n$. Note that this differs from the identity function as the domain and codomain are different in the inclusion map.
+ \item $f_1: \Z \rightarrow \Z$: $f_1(x) = x + 1$.
+ \item $f_2: \Z \rightarrow \Z$: $f_2(x) = 2x$.
+ \item $f_3: \Z \rightarrow \Z$: $f_3(x) = x^2$.
+ \item For $g: \{0, 1, 2, 3, 4\} \rightarrow \{0, 1, 2, 3, 4\}$, we have:
+ \begin{itemize}
+ \item $g_1(x) = x + 1$ if $x < 4$; $g_1(4) = 4$.
+ \item $g_2(x) = x + 1$ if $x < 4$; $g_2(4) = 0$.
+ \end{itemize}
+ \end{itemize}
+\end{eg}
+\begin{defi}[Composition of functions]
+ The \emph{composition} of two functions is the function obtained by applying one after the other. In particular, if $f: X \rightarrow Y$ and $g: Y\rightarrow Z$, then $g\circ f: X \rightarrow Z$ with $g\circ f(x) = g(f(x))$.
+\end{defi}
+\begin{eg}
+ $f_2\circ f_1(x) = 2x + 2$. $f_1\circ f_2 (x) = 2x + 1$. Note that function composition is not commutative.
+\end{eg}
+\begin{defi}[Injective functions]
+ A function $f$ is \emph{injective} if it hits everything at most once, i.e.
+ \[
+ (\forall x, y\in X)\,f(x) = f(y)\Rightarrow x = y.
+ \]
+\end{defi}
+
+\begin{defi}[Surjective functions]
+ A function is \emph{surjective} if it hits everything at least once, i.e.
+ \[
+ (\forall y\in Y)(\exists x\in X)\,f(x) = y.
+ \]
+\end{defi}
+
+\begin{defi}[Bijective functions]
+ A function is \emph{bijective} if it is both injective and surjective, i.e.\ it hits everything exactly once. Note that a function has an inverse iff it is bijective.
+\end{defi}
+
+\begin{eg}
+ $\iota$ and $f_2$ are injective but not surjective. $f_3$ and $g_1$ are neither. $1_X$, $f_1$ and $g_2$ are bijective.
+\end{eg}
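These properties can be checked exhaustively for the finite examples $g_1, g_2$ above (a sketch in Python):

```python
def g1(x):
    return x + 1 if x < 4 else 4  # g_1(4) = 4

def g2(x):
    return x + 1 if x < 4 else 0  # g_2(4) = 0

X = set(range(5))  # domain and codomain {0, 1, 2, 3, 4}
for g in (g1, g2):
    image = [g(x) for x in sorted(X)]
    injective = len(set(image)) == len(image)
    surjective = set(image) == X
    print(g.__name__, injective, surjective)
# g1 is neither injective nor surjective; g2 is a bijection
```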
+
+\begin{lemma}
+ The composition of two bijective functions is bijective.
+\end{lemma}
+
+When considering sets, functions are allowed to do all sorts of crazy things, and can send any element to any element without any restrictions. However, we are currently studying groups, and groups have additional structure on top of the set of elements. Hence we are not interested in arbitrary functions. Instead, we are interested in functions that ``respect'' the group structure. We call these \emph{homomorphisms}.
+\begin{defi}[Group homomorphism]
+ Let $(G, *)$ and $(H, \times)$ be groups. A function $f:G\rightarrow H$ is a \emph{group homomorphism} iff
+ \[
+ ( \forall g_1, g_2 \in G)\, f(g_1)\times f(g_2) = f(g_1 * g_2).
+ \]
+\end{defi}
+
+\begin{defi}[Group isomorphism]
+ \emph{Isomorphisms} are bijective homomorphisms. Two groups are \emph{isomorphic} if there exists an isomorphism between them. We write $G\cong H$.
+\end{defi}
+We will consider two isomorphic groups to be ``the same''. For example, when we say that there is only one group of order $2$, it means that any two groups of order $2$ must be isomorphic.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item $f: G \to H$ defined by $f(g) = e$, where $e$ is the identity of $H$, is a homomorphism.
+ \item $1_G: G \rightarrow G$ and $f_2: \Z \rightarrow 2\Z$ are isomorphisms. $\iota: \Z\rightarrow\Q$ and $f_2:\Z\rightarrow\Z$ are homomorphisms.
+ \item $\mathrm{exp}: (\R, +) \rightarrow (\R^+, \times)$ with $\mathrm{exp}(x) = e^x$ is an isomorphism.
+ \item Take $(\Z_4, +)$ and $H = (\{e^{ik\pi/2}:k=0, 1, 2, 3\}, \times)$. Then $f: \Z_4 \rightarrow H$ by $f(a) = e^{i\pi a/2}$ is an isomorphism.
+ \item $f: \GL_2(\R) \rightarrow \R^*$ with $f(A) = \det(A)$ is a homomorphism, where $\GL_2(\R)$ is the set of $2\times 2$ invertible matrices.
+ \end{itemize}
+\end{eg}
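The last example can be spot-checked numerically (a sketch assuming NumPy; random real matrices are almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

# det is a homomorphism GL_2(R) -> R^*: det(AB) = det(A) det(B)
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(np.isclose(lhs, rhs))  # True
```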
+
+\begin{prop}
+ Suppose that $f: G\rightarrow H$ is a homomorphism. Then
+ \begin{enumerate}
+ \item Homomorphisms send the identity to the identity, i.e.
+ \[
+ f(e_G) = e_H
+ \]
+ \item Homomorphisms send inverses to inverses, i.e.
+ \[
+ f(a^{-1}) = f(a)^{-1}
+ \]
+ \item The composite of 2 group homomorphisms is a group homomorphism.
+ \item The inverse of an isomorphism is an isomorphism.
+ \end{enumerate}
+\end{prop}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item \begin{align*}
+ f(e_G) &= f(e_G^2) = f(e_G)^2\\
+ f(e_G)^{-1}f(e_G) &= f(e_G)^{-1}f(e_G)^2\\
+ f(e_G) &= e_H
+ \end{align*}
+ \item \begin{align*}
+ e_H &= f(e_G)\\
+ &= f(aa^{-1})\\
+ &= f(a)f(a^{-1})
+ \end{align*}
+ Since inverses are unique, $f(a^{-1}) = f(a)^{-1}$.
+ \item Let $f:G_1 \rightarrow G_2$ and $g:G_2 \rightarrow G_3$. Then $g(f(ab)) = g(f(a)f(b)) = g(f(a))g(f(b))$.
+ \item Let $f:G \rightarrow H$ be an isomorphism. Then
+ \begin{align*}
+ f^{-1}(ab) &= f^{-1}\Big\{f\big[f^{-1}(a)\big]f\big[f^{-1}(b)\big]\Big\}\\
+ &= f^{-1}\Big\{f\big[f^{-1}(a)f^{-1}(b)\big]\Big\}\\
+ &= f^{-1}(a)f^{-1}(b)
+ \end{align*}
+ So $f^{-1}$ is a homomorphism. Since it is bijective, $f^{-1}$ is an isomorphism.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Image of homomorphism]
+ If $f:G\rightarrow H$ is a homomorphism, then the \emph{image} of $f$ is
+ \[
+ \im f = f(G) = \{f(g):g\in G\}.
+ \]
+\end{defi}
+
+\begin{defi}[Kernel of homomorphism]
+ The \emph{kernel} of $f$ is
+ \[
+ \ker f = f^{-1}(\{e_H\}) = \{g\in G:f(g)=e_H\}.
+ \]
+\end{defi}
+
+\begin{prop}
+ Both the image and the kernel are subgroups of the respective groups, i.e.\ $\im f\leq H$ and $\ker f \leq G$.
+\end{prop}
+
+\begin{proof}
+ Since $e_H\in \im f$ and $e_G\in \ker f$, $\im f$ and $\ker f$ are non-empty. Moreover, suppose $b_1, b_2\in \im f$. Now $\exists a_1, a_2 \in G$ such that $f(a_i) = b_i$. Then $b_1b_2^{-1} = f(a_1)f(a_2^{-1}) = f(a_1a_2^{-1})\in \im f$.
+
+ Then consider $b_1,b_2\in \ker f$. We have $f(b_1b_2^{-1}) = f(b_1)f(b_2)^{-1} = ee^{-1} = e$. So $b_1b_2^{-1}\in \ker f$.
+\end{proof}
+
+\begin{prop}
+ Given any homomorphism $f:G\rightarrow H$ and any $a\in G$, for all $k\in \ker f$, $aka^{-1}\in\ker f$.
+\end{prop}
+This proposition seems rather pointless. However, it is not. All subgroups that satisfy this property are known as \emph{normal subgroups}, and normal subgroups have very important properties. We will postpone the discussion of normal subgroups to later lectures.
+
+\begin{proof}
+ $f(aka^{-1}) = f(a)f(k)f(a)^{-1} = f(a)ef(a)^{-1} = e$. So $aka^{-1}\in \ker f$.
+\end{proof}
+
+\begin{eg}
+ Images and kernels for previously defined functions:
+ \begin{enumerate}
+ \item For the function that sends everything to $e$, $\im f = \{e\}$ and $\ker f = G$.
+ \item For the identity function, $\im 1_G = G$ and $\ker 1_G = \{e\}$.
+ \item For the inclusion map $\iota: \Z\rightarrow\Q$, we have $\im \iota = \Z$ and $\ker \iota = \{0\}$.
+ \item For $f_2:\Z\rightarrow\Z$ and $f_2(x) = 2x$, we have $\im f_2 = 2\Z$ and $\ker f_2 = \{0\}$.
+ \item For $\det: \GL_2(\R) \rightarrow \R^*$, we have $\im \det = \R^*$ and $\ker \det = \{A:\det A = 1\} = \mathrm{SL}_2(\R)$.
+ \end{enumerate}
+\end{eg}
+\begin{prop}
+ For all homomorphisms $f:G\rightarrow H$, $f$ is
+ \begin{enumerate}
+ \item surjective iff $\im f = H$
+ \item injective iff $\ker f = \{e\}$
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item By definition.
+ \item We know that $f(e) = e$. So if $f$ is injective, then by definition $\ker f = \{e\}$. If $\ker f = \{e\}$, then given $a, b$ such that $f(a) = f(b)$, $f(ab^{-1}) = f(a)f(b)^{-1} = e$. Thus $ab^{-1}\in \ker f = \{e\}$. Then $ab^{-1} = e$ and $a = b$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+So far, the definitions of images and kernels seem to be just convenient terminology to refer to things. However, we will later prove an important theorem, the \emph{first isomorphism theorem}, that relates these two objects and provides deep insights (hopefully).
+
+Before we get to that, we will first study some interesting classes of groups and develop some necessary theory.
+
+\subsection{Cyclic groups}
+The simplest class of groups is \emph{cyclic groups}. A cyclic group is a group of the form $\{e, a, a^2, a^3, \cdots, a^{n - 1}\}$, where $a^n = e$. For example, if we consider the group of all rotations of a triangle, and write $r = $ rotation by $120^\circ$, the elements will be $\{e, r, r^2\}$ with $r^3 = e$.
+
+Officially, we define a cyclic group as follows:
+\begin{defi}[Cyclic group $C_n$]
+ A group $G$ is \emph{cyclic} if
+ \[
+ (\exists a)(\forall b)(\exists n\in\Z)\, b = a^n,
+ \]
+ i.e.\ every element is some power of $a$. Such an $a$ is called a generator of $G$.
+
+ We write $C_n$ for the cyclic group of order $n$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\Z$ is cyclic with generator $1$ or $-1$. It is \emph{the} infinite cyclic group.
+ \item $(\{+1, -1\}, \times)$ is cyclic with generator $-1$.
+ \item $(\Z_n, +)$ is cyclic with all numbers coprime with $n$ as generators.
+ \end{enumerate}
+\end{eg}
+
+\begin{notation}
+ Given a group $G$ and $a\in G$, we write $\bra a\ket$ for the cyclic group generated by $a$, i.e.\ the subgroup of all powers of $a$. It is the smallest subgroup containing $a$.
+\end{notation}
+
+\begin{defi}[Order of element]
+ The \emph{order} of an element $a$ is the smallest integer $n$ such that $a^n = e$. If $n$ doesn't exist, $a$ has infinite order. Write $\ord(a)$ for the order of $a$.
+\end{defi}
+We have given two different meanings to the word ``order''. One is the order of a group and the other is the order of an element. Since mathematicians are usually (but not always) sensible, the name wouldn't be used twice if they weren't related. In fact, we have
+
+\begin{lemma}
+ For $a$ in a group $G$, $\ord (a) = |\bra a\ket|$.
+\end{lemma}
+\begin{proof}
+ If $\ord (a) = \infty$, $a^n \not= a^m$ for all $n\not= m$. Otherwise $a^{m-n} = e$. Thus $|\bra a\ket| = \infty = \ord (a)$.
+
+ Otherwise, suppose $\ord (a) = k$. Thus $a^k = e$. We now claim that $\bra a\ket = \{e, a, a^2, \cdots, a^{k-1}\}$. Note that $\bra a\ket$ contains no other powers of $a$, since $a^k = e$ and higher (and negative) powers loop back to elements already in the list. There are also no repeats in the list: if $a^m = a^n$ with $0 \leq n < m \leq k - 1$, then $a^{m - n} = e$ with $0 < m - n < k$, contradicting the minimality of $k$. So done.
+\end{proof}
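+
+For a concrete instance of this lemma:
+\begin{eg}
+ In $(\Z_6, +)$, the element $2$ satisfies $2 + 2 + 2 = 0$, so $\ord(2) = 3$. Correspondingly, $\bra 2\ket = \{0, 2, 4\}$ has exactly $3$ elements.
+\end{eg}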
+
+It is trivial to show that
+\begin{prop}
+ Cyclic groups are abelian.
+\end{prop}
+
+\begin{defi}[Exponent of group]
+ The \emph{exponent} of a group $G$ is the smallest integer $n$ such that $a^n = e$ for all $a \in G$.
+\end{defi}
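+
+As with the order of an element, it is worth computing the exponent in a couple of small cases:
+\begin{eg}
+ The exponent of $C_2\times C_2$ is $2$, since every non-identity element has order $2$. The exponent of $S_3$ is $6$: the elements of $S_3$ have orders $1$, $2$ and $3$, and the exponent must be a common multiple of these, so it is $\lcm(2, 3) = 6$.
+\end{eg}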
+
+\subsection{Dihedral groups}
+\begin{defi}[Dihedral groups $D_{2n}$]
+ The dihedral group is the group of symmetries of a regular $n$-gon. It contains $n$ rotations (including the identity symmetry, i.e.\ rotation by $0^\circ$) and $n$ reflections.
+
+ We write the group as $D_{2n}$. Note that the subscript refers to the order of the group, not the number of sides of the polygon.
+\end{defi}
+
+The dihedral group is not hard to define. However, we need to come up with a presentation of $D_{2n}$ that is easy to work with.
+
+We first look at the rotations. The set of all rotations is generated by $r$, the rotation by $\frac{360^\circ}{n}$. This $r$ has order $n$.
+
+How about the reflections? We know that each reflection has order $2$. Let $s$ be our favorite reflection. Then using some geometric arguments, we can show that any reflection can be written as a product of $r^m$ and $s$ for some $m$. We also have $srs = r^{-1}$.
+
+Hence we can define $D_{2n}$ as follows: $D_{2n}$ is a group generated by $r$ and $s$, and every element can be written as a product of $r$'s and $s$'s. Whenever we see $r^n$ or $s^2$, we replace it by $e$. When we see $srs$, we replace it by $r^{-1}$.
+
+It then follows that every element can be written in the form $r^m$ or $r^m s$, with $0 \leq m < n$.
+
+Formally, we can write $D_{2n}$ as follows:
+\begin{align*}
+ D_{2n} &= \bra r, s\mid r^n=s^2=e, srs^{-1} = r^{-1}\ket\\
+ &= \{e, r, r^2, \cdots r^{n-1}, s, rs, r^2s, \cdots r^{n-1}s\}
+\end{align*}
+This is a notation we will commonly use to represent groups. For example, a cyclic group of order $n$ can be written as
+\[
+ C_n = \bra a\mid a^n =e \ket.
+\]
+
+\subsection{Direct products of groups}
+Recall that if we have two sets $X, Y$, then we can form the product $X\times Y = \{(x, y): x\in X, y\in Y\}$. We can do the same if $X$ and $Y$ are groups.
+
+\begin{defi}[Direct product of groups]
+ Given two groups $(G, \circ)$ and $(H, \bullet)$, we can define a set $G\times H = \{(g, h): g\in G, h\in H\}$ and an operation $(a_1, a_2)*(b_1, b_2) = (a_1\circ b_1, a_2\bullet b_2)$. This forms a group.
+\end{defi}
+
+Why would we want to take the product of two groups? Suppose we have two independent triangles. Then the symmetries of this system include, say, rotating the first triangle, rotating the second, or rotating both. The symmetry group of this combined system would then be $D_6 \times D_6$.
+
+\begin{eg}
+ \begin{align*}
+ C_2\times C_2 &= \{(0, 0), (0, 1), (1, 0), (1, 1)\}\\
+ &= \{e, x, y, xy\} \text{ with everything order 2}\\
+ &= \bra x, y\mid x^2=y^2=e, xy = yx\ket
+ \end{align*}
+\end{eg}
+
+\begin{prop}
+ $C_n\times C_m\cong C_{nm}$ iff $\hcf(m, n) = 1$.
+\end{prop}
+
+\begin{proof}
+ Suppose that $\hcf(m, n) = 1$. Let $C_n = \bra a\ket$ and $C_m = \bra b \ket$. Let $k$ be the order of $(a, b)$. Then $(a, b)^k = (a^k, b^k) = e$. This is possible only if $n \mid k$ and $m \mid k$, i.e.\ $k$ is a common multiple of $n$ and $m$. Since the order is the minimum value of $k$ that satisfies the above equation, $k = \lcm(n, m) = \frac{nm}{\hcf(n, m)} = nm$.
+
+ Now consider $\bra (a, b)\ket \leq C_n\times C_m$. Since $(a, b)$ has order $nm$, $\bra (a, b)\ket$ has $nm$ elements. Since $C_n\times C_m$ also has $nm$ elements, $\bra (a, b)\ket$ must be the whole of $C_n\times C_m$. And we know that $\bra (a, b)\ket\cong C_{nm}$. So $C_n\times C_m \cong C_{nm}$.
+
+ On the other hand, suppose $\hcf(m, n) \not= 1$. Then $k = \lcm(m, n) \not= mn$. Then for any $(a, b)\in C_n \times C_m$, we have $(a, b)^k = (a^k, b^k) = e$. So the order of any $(a, b)$ is at most $k < mn$. So there is no element of order $mn$, and $C_n\times C_m$ is not a cyclic group of order $nm$.
+\end{proof}
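+
+The smallest cases of this proposition are worth writing out explicitly:
+\begin{eg}
+ $C_2\times C_3\cong C_6$, since $\hcf(2, 3) = 1$: writing $C_2 = \bra a\ket$ and $C_3 = \bra b\ket$, the element $(a, b)$ has order $\lcm(2, 3) = 6$, so it generates the whole group. On the other hand, $C_2\times C_2\not\cong C_4$, since every element of $C_2\times C_2$ has order at most $2$.
+\end{eg}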
+
+Given a complicated group $G$, it is sometimes helpful to write it as a product $H\times K$, which could make things a bit simpler. We can do so by the following theorem:
+\begin{prop}[Direct product theorem]
+ Let $H_1, H_2\leq G$. Suppose the following are true:
+ \begin{enumerate}
+ \item $H_1\cap H_2 = \{e\}$.
+ \item $(\forall a_i\in H_i)\, a_1a_2=a_2a_1$.
+ \item $(\forall a\in G)(\exists a_i\in H_i)\,a = a_1a_2$. We also write this as $G=H_1H_2$.
+ \end{enumerate}
+ Then $G\cong H_1\times H_2$.
+\end{prop}
+
+\begin{proof}
+ Define $f:H_1\times H_2\rightarrow G$ by $f(a_1, a_2) = a_1a_2$. Then it is a homomorphism since
+ \begin{align*}
+ f((a_1, a_2)*(b_1,b_2)) &= f(a_1b_1, a_2b_2)\\
+ &= a_1b_1a_2b_2\\
+ &= a_1a_2b_1b_2\\
+ &= f(a_1, a_2)f(b_1,b_2).
+ \end{align*}
+ Surjectivity follows from (iii). We'll show injectivity by showing that the kernel is $\{e\}$. If $f(a_1, a_2)=e$, then we know that $a_1a_2 = e$. Then $a_1=a_2^{-1}$. Since $a_1 \in H_1$ and $a_2^{-1} \in H_2$, we have $a_1 = a_2^{-1} \in H_1\cap H_2 = \{e\}$. Thus $a_1 = a_2 = e$ and $\ker f = \{e\}$.
+\end{proof}
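+
+To see the direct product theorem in action, we can decompose a cyclic group of composite order:
+\begin{eg}
+ Let $G = C_6 = \bra a\ket$, and take $H_1 = \{e, a^3\}$ and $H_2 = \{e, a^2, a^4\}$. Then $H_1\cap H_2 = \{e\}$, the elements of $H_1$ and $H_2$ commute since $G$ is abelian, and every element of $G$ is a product of an element of $H_1$ and an element of $H_2$ (e.g.\ $a = a^3a^4$ and $a^5 = a^3a^2$). So $C_6\cong H_1\times H_2\cong C_2\times C_3$, agreeing with the previous proposition.
+\end{eg}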
+
+\section{Symmetric group I}
+We will devote two full chapters to the study of symmetric groups, because it is really important. Recall that we defined a symmetry to be an operation that leaves some important property of the object intact. We can treat each such operation as a bijection. For example, a symmetry of $\R^2$ is a bijection $f: \R^2 \to \R^2$ that preserves distances. Note that we must require it to be a bijection, instead of a mere function, since we require each symmetry to have an inverse.
+
+We can consider the case where we don't care about anything at all. So a ``symmetry'' would be any arbitrary bijection $X \to X$, and the set of all bijections will form a group, known as the \emph{symmetric group}. Of course, we will no longer think of these as ``symmetries'' anymore, but just bijections.
+
+In some sense, the symmetric group is the most general case of a symmetry group. In fact, we will later (in Chapter~\ref{sec:action}) show that every group can be written as a subgroup of some symmetric group.
+\subsection{Symmetric groups}
+\begin{defi}[Permutation]
+ A \emph{permutation} of $X$ is a bijection from a set $X$ to $X$ itself. The set of all permutations of $X$ is written $\Sym X$.
+\end{defi}
+When composing permutations, we treat them as functions. So if $\sigma$ and $\rho$ are permutations, $\sigma\circ \rho$ is given by first applying $\rho$, then applying $\sigma$.
+
+\begin{thm}
+ $\Sym X$ with composition forms a group.
+\end{thm}
+
+\begin{proof}
+ The group axioms are satisfied as follows:
+ \begin{enumerate}[label=\arabic*.]
+ \setcounter{enumi}{-1}
+ \item If $\sigma: X\to X$ and $\tau: X\to X$, then $\sigma\circ\tau:X\to X$. If they are both bijections, then the composite is also bijective. So if $\sigma, \tau\in \Sym X$, then $\sigma\circ\tau\in\Sym X$.
+ \item The identity $1_X:X\to X$ is clearly a permutation, and gives the identity of the group.
+ \item Every bijective function has a bijective inverse. So if $\sigma\in \Sym X$, then $\sigma^{-1} \in \Sym X$.
+ \item Composition of functions is associative.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Symmetric group $S_n$]
+ If $X$ is finite, say $|X| = n$ (we usually take $X = \{1, 2, \cdots, n\}$), we write $\Sym X = S_n$. This is the \emph{symmetric group} of degree $n$.
+\end{defi}
+It is important to note that the \emph{degree} of the symmetric group is different from the \emph{order} of the symmetric group. For example, $S_3$ has degree 3 but order 6. In general, the order of $S_n$ is $n!$.
+
+There are two ways to write out an element of the symmetric group. The first is the \emph{two row notation}.
+\begin{notation}
+ (Two row notation) We write $1, 2, 3, \cdots, n$ on the top line and their images below, e.g.
+ \[
+ \begin{pmatrix}
+ 1 & 2 & 3\\
+ 2 & 3 & 1
+ \end{pmatrix}\in S_3 \text{ and }
+ \begin{pmatrix}
+ 1 & 2 & 3 & 4 & 5\\
+ 2 & 1 & 3 & 4 & 5
+ \end{pmatrix}\in S_5
+ \]
+ In general, if $\sigma: X\to X$, we write
+ \[
+ \begin{pmatrix}
+ 1 & 2 & 3 &\cdots& n\\
+ \sigma(1) & \sigma(2)&\sigma(3) &\cdots& \sigma(n)
+ \end{pmatrix}
+ \]
+\end{notation}
+
+\begin{eg}
+ For small $n$, we have
+ \begin{enumerate}
+ \item When $n = 1$, $S_n = \left\{\begin{pmatrix}1\\1\end{pmatrix}\right\} = \{e\}\cong C_1$.
+ \item When $n = 2$, $S_n = \left\{\begin{pmatrix}1 & 2\\ 1 & 2\end{pmatrix}, \begin{pmatrix}1 & 2\\2 & 1\end{pmatrix}\right\}\cong C_2$.
+ \item When $n = 3$,
+ \[
+ S_n = \left\{\begin{matrix}\begin{pmatrix}1 & 2 & 3\\1 & 2 & 3\end{pmatrix}, &\begin{pmatrix}1 & 2 & 3\\ 2 & 3 & 1\end{pmatrix}, &\begin{pmatrix}1 & 2 & 3\\3 & 1 & 2\end{pmatrix}\\\\\begin{pmatrix}1 & 2 & 3\\2 & 1 & 3\end{pmatrix}, &\begin{pmatrix}1 & 2 & 3\\ 3 & 2 & 1\end{pmatrix}, &\begin{pmatrix}1 & 2 & 3\\1 & 3 & 2\end{pmatrix}\end{matrix}\right\}\cong D_6.
+ \]
+ Note that $S_3$ is not abelian. Thus $S_n$ is not abelian for $n \geq 3$ since we can always view $S_3$ as a subgroup of $S_n$ by fixing $4, 5, 6, \cdots n$.
+ \end{enumerate}
+\end{eg}
+In general, we can view $D_{2n}$ as a subgroup of $S_n$ because each symmetry is a permutation of the corners.
+
+While the two row notation is fully general and can represent any (finite) permutation, it is clumsy to write and wastes a lot of space. It is also very annoying to type using \LaTeX. Hence, most of the time, we actually use the cycle notation.
+
+\begin{notation}[Cycle notation]
+ If a map sends $1 \mapsto 2$, $2\mapsto 3$, $3\mapsto 1$, then we write it as a cycle $(1\;2\;3)$. Alternatively, we can write $(2\;3\;1)$ or $(3\;1\;2)$, but by convention, we usually write the smallest number first. We leave out numbers that don't move. So we write $(1\; 2)$ instead of $(1\; 2)(3)$.
+
+ For more complicated maps, we can write them as products of cycles. For example, in $S_4$, we can have things like $(1\; 2)(3\; 4)$.
+\end{notation}
+The order of each cycle is the length of the cycle, and the inverse is the cycle written the other way round, e.g.\ $(1\; 2\; 3)^{-1} = (3\; 2\; 1) = (1\; 3\; 2)$.
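+
+As a sanity check that the reversed cycle really is the inverse, we can compose $(1\; 2\; 3)$ with $(1\; 3\; 2)$, reading from right to left:
+\[
+ 1\mapsto 3\mapsto 1,\quad 2\mapsto 1\mapsto 2,\quad 3\mapsto 2\mapsto 3,
+\]
+so the composite fixes everything and is indeed $e$.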
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Suppose we want to simplify $(1\; 2\; 3)(1\; 2)$. Recall that composition is from right to left. So $1$ gets mapped to $3$ ($(1\; 2)$ maps $1$ to $2$, and $(1\; 2\; 3)$ further maps it to $3$). Then $3$ gets mapped to $1$. $2$ is mapped to $2$ itself. So $(1\; 2\; 3)(1\; 2) = (1\;3)(2) = (1\; 3)$.
+ \item $(1\; 2\; 3\; 4)(1\; 4) = (1)(2\; 3\; 4) = (2\; 3\; 4)$.
+ \end{enumerate}
+\end{eg}
+\begin{defi}[$k$-cycles and transpositions]
+ We call $(a_1\; a_2\; a_3\cdots a_k)$ a \emph{$k$-cycle}. $2$-cycles are called \emph{transpositions}. Two cycles are \emph{disjoint} if no number appears in both cycles.
+\end{defi}
+\begin{eg}
+ $(1\; 2)$ and $(3\; 4)$ are disjoint but $(1\; 2\; 3)$ and $(1\; 2)$ are not.
+\end{eg}
+\begin{lemma}
+ Disjoint cycles commute.
+\end{lemma}
+\begin{proof}
+ Let $\sigma, \tau\in S_n$ be disjoint cycles. We show that $\sigma(\tau(a)) = \tau(\sigma(a))$ for every $a \in \{1, 2, \cdots, n\}$. If $a$ appears in neither $\sigma$ nor $\tau$, then $\sigma(\tau(a)) = \tau(\sigma(a)) = a$. Otherwise, wlog assume that $a$ appears in $\tau$ but not in $\sigma$. Then $\tau(a)$ also appears in $\tau$, and so $\tau(a)$ does not appear in $\sigma$. Thus $\sigma(a) = a$ and $\sigma(\tau(a)) = \tau(a)$. Therefore $\sigma(\tau(a)) = \tau(\sigma(a)) = \tau(a)$, and $\tau$ and $\sigma$ commute.
+\end{proof}
+In general, non-disjoint cycles may not commute. For example, $(1\; 3)(2\; 3) = (1\; 3\; 2)$ while $(2\; 3)(1\; 3) = (1\; 2\; 3)$.
+
+\begin{thm}
+ Any permutation in $S_n$ can be written (essentially) uniquely as a product of disjoint cycles. (Essentially unique means unique up to re-ordering of cycles and rotation within cycles, e.g.\ $(1\; 2)$ and $(2\; 1)$)
+\end{thm}
+
+\begin{proof}
+ Let $\sigma\in S_n$. Start with $(1\; \sigma(1)\; \sigma^2(1)\; \sigma^3(1)\;\cdots)$. As the set $\{1, 2, 3\cdots n\}$ is finite, for some $k$, we must have $\sigma^k(1)$ already in the list. If $\sigma^k(1) = \sigma^l(1)$, with $l < k$, then $\sigma^{k-l}(1) = 1$. So all $\sigma^i(1)$ are distinct until we get back to $1$. Thus we have the first cycle $(1\; \sigma(1)\; \sigma^2(1)\; \sigma^3(1)\;\cdots\;\sigma^{k-1}(1))$.
+
+ Now choose the smallest number that is not yet in a cycle, say $j$. Repeat to obtain a cycle $(j\; \sigma(j)\; \sigma^2(j)\;\cdots\; \sigma^{l - 1}(j))$. Since $\sigma$ is a bijection, nothing in this cycle can be in previous cycles as well.
+
+ Repeat until all $\{1, 2, 3\cdots n\}$ are exhausted. This is essentially unique because every number $j$ completely determines the whole cycle it belongs to, and whichever number we start with, we'll end up with the same cycle.
+\end{proof}
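+
+Running this procedure on a concrete permutation:
+\begin{eg}
+ Let
+ \[
+ \sigma = \begin{pmatrix}
+ 1 & 2 & 3 & 4 & 5 & 6\\
+ 2 & 4 & 5 & 1 & 6 & 3
+ \end{pmatrix}\in S_6.
+ \]
+ Starting from $1$, we get $1\mapsto 2\mapsto 4\mapsto 1$, giving the cycle $(1\; 2\; 4)$. The smallest number not yet used is $3$, and $3\mapsto 5\mapsto 6\mapsto 3$ gives $(3\; 5\; 6)$. All numbers are now exhausted, so $\sigma = (1\; 2\; 4)(3\; 5\; 6)$.
+\end{eg}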
+
+\begin{defi}[Cycle type]
+ Write a permutation $\sigma\in S_n$ in disjoint cycle notation. The \emph{cycle type} is the list of cycle lengths. This is unique up to re-ordering. We often (but not always) leave out singleton cycles.
+\end{defi}
+\begin{eg}
+ $(1\; 2)$ has cycle type $2$ (transposition). $(1\; 2)(3\; 4)$ has cycle type $2, 2$ (double transposition). $(1\; 2\; 3)(4\; 5)$ has cycle type $3, 2$.
+\end{eg}
+\begin{lemma}
+ For $\sigma\in S_n$, the order of $\sigma$ is the least common multiple of cycle lengths in the disjoint cycle notation. In particular, a $k$-cycle has order $k$.
+\end{lemma}
+
+\begin{proof}
+ As disjoint cycles commute, we can group together each cycle when we take powers. i.e.\ if $\sigma = \tau_1\tau_2\cdots\tau_l$ with $\tau_i$ all disjoint cycles, then $\sigma^m = \tau_1^m\tau_2^m\cdots\tau_l^m$.
+
+ Now if cycle $\tau_i$ has length $k_i$, then $\tau_i^{k_i} = e$, and $\tau_i^m = e$ iff $k_i \mid m$. To get an $m$ such that $\sigma^m = e$, we need all $k_i$ to divide $m$. i.e.\ $m$ is a common multiple of $k_i$. Since the order is the least possible $m$ such that $\sigma^m = e$, the order is the least common multiple of $k_i$.
+\end{proof}
+
+\begin{eg}
+ Any transposition or double transposition has order 2.
+
+ $(1\; 2\; 3)(4\; 5)$ has order 6.
+\end{eg}
+
+\subsection{Sign of permutations}
+To classify different permutations, we can group different permutations according to their cycle type. While this is a very useful thing to do, it is a rather fine division. In this section, we will assign a ``sign'' to each permutation, and each permutation can either be odd or even. This high-level classification allows us to separate permutations into two sets, which is also a useful notion.
+
+To define the sign, we first need to write permutations as products of transpositions.
+\begin{prop}
+ Every permutation is a product of transpositions.
+\end{prop}
+This is not a deep or mysterious fact. All it says is that you can rearrange things however you want just by swapping two objects at a time.
+
+\begin{proof}
+ As each permutation is a product of disjoint cycles, it suffices to prove that each cycle is a product of transpositions. Consider a cycle $(a_1\; a_2\; a_3\; \cdots\; a_k)$. This is in fact equal to $(a_1\; a_2)(a_2\; a_3)\cdots (a_{k-1}\; a_k)$. Thus a $k$-cycle can be written as a product of $k - 1$ transpositions.
+\end{proof}
+
+Note that the product is not unique. For example,
+\[
+ (1\; 2\; 3\; 4\; 5) =(1\; 2)(2\; 3)(3\; 4)(4\; 5) = (1\; 2)(2\; 3)(1\; 2)(3\; 4)(1\; 2)(4\; 5).
+\]
+However, the number of terms in the product, mod 2, is always the same.
+
+\begin{thm}
+ Writing $\sigma\in S_n$ as a product of transpositions in different ways, $\sigma$ is either always composed of an even number of transpositions, or always an odd number of transpositions.
+\end{thm}
+The proof is rather magical.
+
+\begin{proof}
+ Write $\#(\sigma)$ for the number of cycles in disjoint cycle notation, including singleton cycles. So $\#(e) = n$ and $\#((1\; 2)) = n - 1$. When we multiply $\sigma$ by a transposition $\tau = (c\; d)$ (wlog assume $c < d$),
+ \begin{itemize}
+ \item If $c, d$ are in the same $\sigma$-cycle, say, $(c\; a_2\; \cdots \; a_{k - 1}\; d\; a_{k + 1}\; \cdots a_{k + l})(c\; d) = (c\; a_{k+1}\; a_{k+2}\;\cdots a_{k + l})(d\; a_2\; a_3\;\cdots\; a_{k - 1})$. So $\#(\sigma\tau) = \#(\sigma) + 1$ .
+ \item If $c, d$ are in different $\sigma$-cycles, say
+
+ $(d\; a_2\; a_3\;\cdots\;a_{k - 1})(c\; a_{k + 1}\; a_{k + 2}\;\cdots\; a_{k + l})(c\; d) $
+
+ $=(c\; a_2\; \cdots \; a_{k - 1}\; d\; a_{k + 1}\; \cdots a_{k + l})(c\; d)(c\; d)$
+
+ $= (c\; a_2\; \cdots \; a_{k - 1}\; d\; a_{k + 1}\; \cdots a_{k + l})$ and $\#(\sigma\tau) = \#(\sigma) - 1$.
+
+ \end{itemize}
+ Therefore for any transposition $\tau$, $\#(\sigma\tau) \equiv \#(\sigma) + 1 \pmod 2$.
+
+ Now suppose $\sigma = \tau_1\cdots\tau_l = \tau_1'\cdots\tau_{k}'$. Since disjoint cycle notation is unique, $\#(\sigma)$ is uniquely determined by $\sigma$.
+
+ Now we can construct $\sigma$ by starting with $e$ and multiplying the transpositions one by one. Each time we add a transposition, we increase $\#(\sigma)$ by $1 \pmod 2$. So $\#(\sigma) \equiv \#(e) + l\pmod 2$. Similarly, $\#(\sigma) \equiv \#(e) + k \pmod 2$. So $l \equiv k \pmod 2$.
+\end{proof}
+
+\begin{defi}[Sign of permutation]
+ Viewing $\sigma\in S_n$ as a product of transpositions, $\sigma = \tau_1\cdots \tau_l$, we define $\sgn(\sigma) = (-1)^l$. If $\sgn(\sigma) = 1$, we call $\sigma$ an even permutation. If $\sgn(\sigma) = -1$, we call $\sigma$ an odd permutation.
+\end{defi}
+While $l$ itself is not well-defined, it is either always odd or always even, and $(-1)^l$ is well-defined.
+
+\begin{thm}
+ For $n\geq 2$, $\sgn : S_n \rightarrow \{\pm 1\}$ is a surjective group homomorphism.
+\end{thm}
+\begin{proof}
+ Suppose $\sigma_1 = \tau_1\cdots \tau_{l_1}$ and $\sigma_2 = \tau'_1\cdots \tau'_{l_2}$. Then $\sgn(\sigma_1\sigma_2) = (-1)^{l_1 + l_2} = (-1)^{l_1}(-1)^{l_2} = \sgn(\sigma_1)\sgn(\sigma_2)$. So it is a homomorphism.
+
+ It is surjective since $\sgn(e) = 1$ and $\sgn((1\; 2)) = -1$.
+\end{proof}
+
+This was rather trivial to prove. The hard bit was showing that $\sgn$ is well-defined in the first place. If a question asks you to show that $\sgn$ is a well-defined group homomorphism, you \emph{have} to show that it is well-defined.
+
+\begin{lemma}
+ $\sigma$ is an even permutation iff the number of cycles of even length is even.
+\end{lemma}
+
+\begin{proof}
+ A $k$-cycle can be written as $k - 1$ transpositions. Thus a cycle of even length is an odd permutation, and a cycle of odd length is an even permutation.
+
+ Since $\sgn$ is a group homomorphism, writing $\sigma$ in disjoint cycle notation, $\sigma = \sigma_1\sigma_2\cdots\sigma_l$, we get $\sgn(\sigma) = \sgn(\sigma_1)\cdots \sgn(\sigma_l)$. Suppose there are $m$ even-length cycles and $k$ odd-length cycles; then $\sgn(\sigma) = (-1)^m 1^k$. This is equal to $1$ iff $(-1)^m = 1$, i.e.\ $m$ is even.
+\end{proof}
+Rather confusingly, odd length cycles are even, and even length cycles are odd.
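+
+To practise this admittedly confusing rule:
+\begin{eg}
+ $(1\; 2)(3\; 4\; 5)\in S_5$ has exactly one cycle of even length, so it is an odd permutation. $(1\; 2)(3\; 4)$ has two cycles of even length, so it is even, and $(1\; 2\; 3)$ has none, so it is also even.
+\end{eg}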
+
+\begin{defi}[Alternating group $A_n$]
+ The \emph{alternating group} $A_n$ is the kernel of $\sgn$, i.e.\ the even permutations.
+ Since $A_n$ is a kernel of a group homomorphism, $A_n \leq S_n$.
+\end{defi}
+Among the many uses of the $\sgn$ homomorphism, it is used in the definition of the determinant of a matrix: if $A_{n\times n}$ is a square matrix, then
+\[
+ \det A = \sum_{\sigma\in S_n}\sgn(\sigma) a_{1\sigma(1)}\cdots a_{n\sigma(n)}.
+\]
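+
+For $n = 2$, this formula reduces to the familiar expression: $S_2 = \{e, (1\; 2)\}$ with $\sgn(e) = 1$ and $\sgn((1\; 2)) = -1$, so
+\[
+ \det A = a_{11}a_{22} - a_{12}a_{21}.
+\]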
+
+\begin{prop}
+ Any subgroup of $S_n$ contains either no odd permutations, or exactly half of its elements are odd permutations.
+\end{prop}
+
+\begin{proof}
+ If a subgroup $H \leq S_n$ contains at least one odd permutation $\tau$, then $\sigma \mapsto \sigma\tau$ gives a bijection between the even and the odd permutations in $H$ (it is a bijection since $\sigma \mapsto \sigma \tau^{-1}$ is a well-defined inverse). So $H$ contains as many odd permutations as even permutations.
+\end{proof}
+After we prove the isomorphism theorem later, we can provide an even shorter proof of this.
+
+\section{Lagrange's Theorem}
+One can model a Rubik's cube with a group, with each possible move corresponding to a group element. Of course, Rubik's cubes of different sizes correspond to different groups.
+
+Suppose I have a $4\times 4\times 4$ Rubik's cube, but I want to practice solving a $2\times 2\times 2$ Rubik's cube. It is easy. I just have to make sure every time I make a move, I move two layers together. Then I can pretend I am solving a $2\times 2\times 2$ cube. This corresponds to picking a particular subgroup of the $4\times 4\times 4$ group.
+
+Now what if I have a $3\times 3\times 3$ cube? I can still practice solving a $2\times 2\times 2$ one. This time, I just look at the corners and pretend that the edges and centers do not exist. Then I am satisfied when the corners are in the right positions, while the centers and edges can be completely scrambled. In this case, we are not taking a subgroup. Instead, we are identifying certain moves together. In particular, we are treating two moves as the same as long as their difference is confined to the centers and edges.
+
+Let $G$ be the $3\times 3\times 3$ cube group, and $H$ be the subgroup of $G$ that only permutes the edges and centers. Then for any $a, b\in G$, we think $a$ and $b$ are ``the same'' if $a^{-1}b \in H$. Then the set of things equivalent to $a$ is $aH = \{ah: h \in H\}$. We call this a \emph{coset}, and the set of cosets forms a group.
+
+An immediate question one can ask is: why not $Ha = \{ha: h\in H\}$? In this particular case, the two happen to be the same for all possible $a$. However, for a general subgroup $H$, they need not be. We can still define the cosets $aH = \{ah: h \in H\}$, but they are then less well-behaved: for example, the set of all cosets $\{aH\}$ will in general no longer form a group. We will look into these more in-depth in the next chapter. In this chapter, we will first look at results for general cosets. In particular, we will, step by step, prove the things we casually claimed above.
+
+\begin{defi}[Cosets]
+ Let $H\leq G$ and $a\in G$. Then the set $aH =\{ah : h\in H\}$ is a \emph{left coset} of $H$ and $Ha = \{ha : h\in H\}$ is a \emph{right coset} of $H$.
+\end{defi}
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Take $2\Z \leq \Z$. Then $6 + 2\Z = \{\text{all even numbers}\} = 0 + 2\Z$, and $1 + 2\Z = \{\text{all odd numbers}\} = 17 + 2\Z$.
+ \item Take $G = S_3$, let $H = \bra (1\; 2)\ket = \{e, (1\; 2)\}$. The left cosets are
+ \begin{align*}
+ eH = (1\; 2)H &= \{e, (1\; 2)\}\\
+ (1\; 3)H = (1\; 2\; 3)H &= \{(1\; 3), (1\; 2\; 3)\}\\
+ (2\; 3)H = (1\; 3\; 2)H &= \{(2\; 3), (1\; 3\; 2)\}
+ \end{align*}
+ \item Take $G = D_6$ (which is isomorphic to $S_3$). Recall $D_6 = \bra r, s \mid r^3 = s^2 = e, rs = sr^{-1}\ket$. Take $H = \bra s\ket = \{e, s\}$. We have the left coset $rH = \{r, rs\} = \{r, sr^{-1}\}$ and the right coset $Hr = \{r, sr\}$. Since $sr^{-1} \not= sr$, we have $rH \not= Hr$.
+ \end{enumerate}
+\end{eg}
+
+\begin{prop}
+ $aH = bH \Leftrightarrow b^{-1}a\in H$.
+\end{prop}
+\begin{proof}
+ $(\Rightarrow)$ Since $a\in aH$, $a\in bH$. Then $a = bh$ for some $h\in H$. So $b^{-1}a = h\in H$.
+
+ $(\Leftarrow)$. Let $b^{-1}a = h_0$. Then $a = bh_0$. Then $\forall ah\in aH$, we have $ah = b(h_0h)\in bH$. So $aH \subseteq bH$. Similarly, $bH\subseteq aH$. So $aH = bH$.
+\end{proof}
+
+\begin{defi}[Partition]
+ Let $X$ be a set, and $X_1, \cdots X_n$ be subsets of $X$. The $X_i$ are called a \emph{partition} of $X$ if $\bigcup X_i = X$ and $X_i\cap X_j = \emptyset$ for $i\not= j$. i.e.\ every element is in exactly one of $X_i$.
+\end{defi}
+
+\begin{lemma}
+ The left cosets of a subgroup $H\leq G$ partition $G$, and every coset has the same size.
+\end{lemma}
+
+\begin{proof}
+ For each $a\in G$, $a\in aH$. Thus the union of all cosets gives all of $G$. Now we have to show that for all $a, b\in G$, the cosets $aH$ and $bH$ are either the same or disjoint.
+
+ Suppose that $aH$ and $bH$ are not disjoint. Let $ah_1 = bh_2 \in aH \cap bH$. Then $b^{-1}a = h_2 h_1^{-1}\in H$. So $aH = bH$.
+
+ To show that each coset has the same size, note that $f: H \to aH$ with $f(h) = ah$ is invertible with inverse $f^{-1}(g) = a^{-1}g$. Thus there exists a bijection between $H$ and $aH$, and they have the same size.
+\end{proof}
+
+\begin{defi}[Index of a subgroup]
+ The \emph{index} of $H$ in $G$, written $|G:H|$, is the number of left cosets of $H$ in $G$.
+\end{defi}
+
+\begin{thm}[Lagrange's theorem]
+ If $G$ is a finite group and $H$ is a subgroup of $G$, then $|H|$ divides $|G|$. In particular,
+ \[
+ |H||G:H| = |G|.
+ \]
+\end{thm}
+Note that the converse is not true. If $k$ divides $|G|$, there is not necessarily a subgroup of order $k$, e.g.\ $|A_4| = 12$ but there is no subgroup of order $6$. However, we will later see that this is true if $k$ is a prime (cf.\ Cauchy's theorem).
+
+\begin{proof}
+ Suppose that there are $|G: H|$ left cosets in total. Since the left cosets partition $G$, and each coset has size $|H|$, we have
+ \[
+ |H||G:H| = |G|.\qedhere
+ \]
+\end{proof}
+Again, the hard part of this proof is to prove that the left cosets partition $G$ and have the same size. If you are asked to prove Lagrange's theorem in exams, that is what you actually have to prove.
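+
+We can also see the counting in this proof at work in an earlier example:
+\begin{eg}
+ Take $G = S_3$ and $H = \bra (1\; 2)\ket$. We found exactly three left cosets of $H$, each of size $2$, and indeed $|H||G:H| = 2\cdot 3 = 6 = |S_3|$.
+\end{eg}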
+
+\begin{cor}
+ The order of an element divides the order of the group, i.e.\ for any finite group $G$ and $a\in G$, $\ord(a)$ divides $|G|$.
+\end{cor}
+\begin{proof}
+ Consider the subgroup generated by $a$, which has order $\ord(a)$. Then by Lagrange's theorem, $\ord(a)$ divides $|G|$.
+\end{proof}
+
+\begin{cor}
+ The exponent of a group divides the order of the group, i.e.\ for any finite group $G$ and $a\in G$, $a^{|G|} = e$.
+\end{cor}
+
+\begin{proof}
+ We know that $|G| = k\ord(a)$ for some $k\in \N$. Then $a^{|G|} = (a^{\ord(a)})^k = e^k = e$.
+\end{proof}
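+
+This corollary already gives a non-trivial number-theoretic fact:
+\begin{eg}
+ Let $p$ be a prime. The non-zero residues $\{1, 2, \cdots, p - 1\}$ form a group of order $p - 1$ under multiplication mod $p$ (primality of $p$ is needed for inverses to exist). So for any $a$ not divisible by $p$, we have $a^{p - 1}\equiv 1\pmod p$. This is Fermat's little theorem.
+\end{eg}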
+
+\begin{cor}
+ Groups of prime order are cyclic and are generated by every non-identity element.
+\end{cor}
+
+\begin{proof}
+ Say $|G| = p$. If $a\in G$ is not the identity, the subgroup generated by $a$ has order dividing $p$ by Lagrange's theorem; since this order is not $1$, it must be $p$. Thus the subgroup generated by $a$ has the same size as $G$ and they must be equal. Then $G$ must be cyclic since it is equal to the subgroup generated by $a$.
+\end{proof}
+
+A useful way to think about cosets is to view them as equivalence classes. To do so, we need to first define what an equivalence class is.
+\begin{defi}[Equivalence relation]
+ An \emph{equivalence relation} $\sim$ is a relation that is reflexive, symmetric and transitive. i.e.
+ \begin{enumerate}
+ \item $(\forall x)\,x\sim x$ \hfill (reflexivity)
+ \item $(\forall x, y)\,x\sim y \Rightarrow y\sim x$ \hfill (symmetry)
+ \item $(\forall x, y, z)\,[(x\sim y) \wedge (y\sim z)\Rightarrow x\sim z]$ \hfill (transitivity)
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}
+ The following relations are equivalence relations:
+ \begin{enumerate}
+ \item Consider $\Z$. The relation $\equiv_n$ defined as $a\equiv_n b \Leftrightarrow n \mid (a - b)$.
+ \item Consider the set (formally: class) of all finite groups. Then ``is isomorphic to'' is an equivalence relation.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Equivalence class]
+ Given an equivalence relation $\sim$ on $A$, the \emph{equivalence class} of $a$ is
+ \[
+ [a]_{\sim} = [a] = \{b\in A: a\sim b\}
+ \]
+\end{defi}
+
+\begin{prop}
+ The equivalence classes form a partition of $A$.
+\end{prop}
+
+\begin{proof}
+ By reflexivity, we have $a\in [a]$. Thus the equivalence classes cover the whole set. We must now show that for all $a, b\in A$, either $[a] = [b]$ or $[a]\cap [b]=\emptyset$.
+
+ Suppose $[a]\cap[b]\not=\emptyset$. Then $\exists c\in [a]\cap[b]$. So $a\sim c, b\sim c$. By symmetry, $c\sim b$. By transitivity, we have $a\sim b$. Now for all $b'\in [b]$, we have $b\sim b'$. Thus by transitivity, we have $a\sim b'$. Thus $[b]\subseteq[a]$. Similarly, $[a]\subseteq[b]$ and $[a] = [b]$.
+\end{proof}
+
+\begin{lemma}
+ Given a group $G$ and a subgroup $H$, define the equivalence relation on $G$ with $a\sim b$ iff $b^{-1}a\in H$. The equivalence classes are the left cosets of $H$.
+\end{lemma}
+
+\begin{proof}
+ First show that it is an equivalence relation.
+ \begin{enumerate}
+ \item Reflexivity: Since $a^{-1}a = e\in H$, $a\sim a$.
+ \item Symmetry: $a\sim b\Rightarrow b^{-1}a\in H \Rightarrow (b^{-1}a)^{-1} = a^{-1}b\in H\Rightarrow b\sim a$.
+ \item Transitivity: If $a\sim b$ and $b\sim c$, we have $b^{-1}a, c^{-1}b\in H$. So $c^{-1}bb^{-1}a = c^{-1}a\in H$. So $a\sim c$.
+ \end{enumerate}
+ To show that the equivalence classes are the cosets, we have $a\sim b\Leftrightarrow b^{-1}a\in H \Leftrightarrow aH = bH$.
+\end{proof}
+
+\begin{eg}
+ Consider $(\Z, +)$, and for fixed $n$, take the subgroup $H = n\Z$. The cosets are $0+ H, 1 + H, \cdots, (n - 1)+H$. We can write these as $[0], [1], [2], \cdots, [n - 1]$. To perform arithmetic ``mod $n$'', define $[a] + [b] = [a + b]$ and $[a][b] = [ab]$. We need to check that these operations are well-defined, i.e.\ they don't depend on the choice of representatives of $[a]$ and $[b]$.
+
+ If $[a_1] = [a_2]$ and $[b_1] = [b_2]$, then $a_1 = a_2 + kn$ and $b_1 = b_2 + ln$ for some $k, l\in \Z$. Then $a_1 + b_1 = a_2 + b_2 + n(k + l)$ and $a_1b_1 = a_2b_2 + n(kb_2 + la_2 + kln)$. So $[a_1 + b_1] = [a_2 + b_2]$ and $[a_1b_1] = [a_2b_2]$.
+\end{eg}
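+This check is easy to carry out by machine. The following Python sketch (purely illustrative, not part of the course) picks two representatives of the same cosets of $7\Z$ and confirms that addition and multiplication mod $7$ give the same answer either way:
+
+```python
+n = 7
+# Two representatives of the coset [3] and two of the coset [5].
+a1, a2 = 3, 3 + 2 * n
+b1, b2 = 5, 5 - 4 * n
+
+# [a] + [b] = [a + b] and [a][b] = [ab] give the same answer
+# whichever representatives we pick.
+assert (a1 + b1) % n == (a2 + b2) % n
+assert (a1 * b1) % n == (a2 * b2) % n
+```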
+
+We have seen that $(\Z_n, +_n)$ is a group. What happens with multiplication? Under multiplication mod $n$, we can only take the elements that have inverses (these are called \emph{units}, cf.\ IB Groups, Rings and Modules). Call the set of them $U_n = \{[a]: (a, n) = 1\}$; we will see below that these are exactly the units.
+\begin{defi}[Euler totient function]
+ $\phi (n) = |U_n|$.
+\end{defi}
+
+\begin{eg}
+ If $p$ is a prime, $\phi(p) = p - 1$. Also, $\phi(4) = 2$.
+\end{eg}
+
+\begin{prop}
+ $U_n$ is a group under multiplication mod $n$.
+\end{prop}
+
+\begin{proof}
+ The operation is well-defined as shown above. To check the axioms:
+ \begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{-1}
+ \item Closure: if $a, b$ are coprime to $n$, then $a\cdot b$ is also coprime to $n$. So $[a], [b]\in U_n \Rightarrow [a]\cdot [b] = [a\cdot b]\in U_n$
+ \item Identity: $[1]$
+ \item Inverse: Let $[a]\in U_n$. Consider the map $U_n \to U_n$ with $[c]\mapsto [ac]$. This is injective: if $[ac_1] = [ac_2]$, then $n$ divides $a(c_1 - c_2)$. Since $a$ is coprime to $n$, $n$ divides $c_1 - c_2$, so $[c_1] = [c_2]$. Since $U_n$ is finite, any injection $U_n \to U_n$ is also a surjection. So there exists a $[c]$ such that $[a][c] = [ac] = [1]$. So $[c] = [a]^{-1}$.
+ \item Associativity (and also commutativity): inherited from $\Z$.\qedhere
+ \end{enumerate}
+\end{proof}
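+For small $n$ these axioms can be verified by brute force. Here is an illustrative Python sketch (the helper \texttt{units} is our own, not standard library) that computes $U_{12}$ and checks closure, identity and inverses:
+
+```python
+from math import gcd
+
+def units(n):
+    """Residues mod n that are coprime to n, i.e. the elements of U_n."""
+    return [a for a in range(n) if gcd(a, n) == 1]
+
+n = 12
+U = units(n)
+print(U)   # [1, 5, 7, 11], so phi(12) = 4
+
+# Closure: a product of units is a unit.
+assert all((a * b) % n in U for a in U for b in U)
+# Identity.
+assert 1 in U
+# Inverses: every a in U_n has some c in U_n with ac = 1 (mod n).
+assert all(any((a * c) % n == 1 for c in U) for a in U)
+```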
+
+\begin{thm}[Fermat-Euler theorem] Let $n\in \N$ and $a\in \Z$ be coprime to $n$. Then
+ \[
+ a^{\phi(n)} \equiv 1\pmod n.
+ \]
+ In particular (Fermat's Little Theorem), if $n = p$ is a prime, then for any $a$ not a multiple of $p$,
+ \[
+ a^{p - 1}\equiv 1\pmod p.
+ \]
+\end{thm}
+
+\begin{proof}
+ As $a$ is coprime with $n$, $[a]\in U_n$. Then $[a]^{|U_n|} = [1]$, i.e.\ $a^{\phi(n)} \equiv 1\pmod n$.
+\end{proof}
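+As a sanity check, the theorem can be verified numerically for small moduli. In this illustrative Python sketch, \texttt{phi} computes the totient by direct counting (a naive helper of our own, not an efficient algorithm):
+
+```python
+from math import gcd
+
+def phi(n):
+    """Euler totient by direct counting (fine for small n)."""
+    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)
+
+# Fermat-Euler: a^phi(n) = 1 (mod n) whenever gcd(a, n) = 1.
+for n in range(2, 50):
+    for a in range(1, n):
+        if gcd(a, n) == 1:
+            assert pow(a, phi(n), n) == 1
+
+# Fermat's Little Theorem is the special case n = p prime:
+assert all(pow(a, 6, 7) == 1 for a in range(1, 7))
+```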
+
+\subsection{Small groups}
+We will study the structures of certain small groups.
+\begin{eg}[Using Lagrange's theorem to find subgroups]
+ To find subgroups of $D_{10}$, we know that the subgroups must have size 1, 2, 5 or 10:
+ \begin{enumerate}[label=\arabic{*}:]
+ \item $\{e\}$
+ \item The groups generated by the 5 reflections of order 2
+ \setcounter{enumi}{4}
+ \item The group must be cyclic since it has prime order 5. It is then generated by an element of order 5, i.e.\ one of $r, r^2, r^3$ or $r^4$, and these all generate the same group $\bra r\ket$.
+ \setcounter{enumi}{9}
+ \item $D_{10}$
+ \end{enumerate}
+
+ As for $D_8$, subgroups must have order 1, 2, 4 or 8.
+ \begin{enumerate}[label=\arabic{*}:]
+ \item $\{e\}$
+ \item The subgroups generated by the 5 elements of order $2$, namely the 4 reflections and $r^2$.
+ \setcounter{enumi}{3}
+ \item First there is the cyclic subgroup $\bra r\ket \cong C_4$. There are also two other subgroups of order 4, both non-cyclic.
+ \setcounter{enumi}{7}
+ \item $D_8$
+ \end{enumerate}
+\end{eg}
+
+\begin{prop}
+ Any group of order 4 is either isomorphic to $C_4$ or $C_2\times C_2$.
+\end{prop}
+
+\begin{proof}
+ Let $|G| = 4$. By Lagrange's theorem, possible element orders are $1$ ($e$ only), $2$ and $4$. If there is an element $a\in G$ of order $4$, then $G = \bra a\ket \cong C_4$.
+
+ Otherwise all non-identity elements have order 2. Then $G$ must be abelian (for any $a, b$, $(ab)^2 = e \Rightarrow ab = (ab)^{-1} \Rightarrow ab = b^{-1}a^{-1} \Rightarrow ab = ba$).
+ Pick $2$ elements of order 2, say $b, c\in G$, then $\bra b\ket=\{e, b\}$ and $\bra c\ket = \{e, c\}$. So $\bra b\ket\cap \bra c\ket = \{e\}$. As $G$ is abelian, $\bra b\ket$ and $\bra c\ket$ commute. We know that $bc = cb$ has order 2 as well, and is the only element of $G$ left. So $G \cong \bra b\ket\times \bra c\ket \cong C_2\times C_2$ by the direct product theorem.
+\end{proof}
+
+\begin{prop}
+ A group of order $6$ is either cyclic or dihedral (i.e.\ isomorphic to $C_6$ or $D_6$). (See proof in the next section.)
+\end{prop}
+
+\subsection{Left and right cosets}
+As $|aH| = |H|$ and similarly $|H| = |Ha|$, left and right cosets have the same size. Are they necessarily the same? We've previously shown that they might \emph{not} be the same. In some other cases, they are.
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Take $G = (\Z, +)$ and $H = 2\Z$. We have $0 + 2\Z = 2\Z + 0 = $ even numbers and $1 + 2\Z = 2\Z + 1 = $ odd numbers. Since $G$ is abelian, $aH = Ha$ for all $a\in G$ and $H\leq G$.
+ \item Let $G = D_6 = \bra r, s \mid r^3 = e = s^2, rs = sr^{-1}\ket$. Let $U = \bra r\ket$. Since the cosets partition $G$, one must be $U$ and the other $sU = \{s, sr = r^2s, sr^2 = rs\} = Us$. So for all $a\in G, aU = Ua$.
+ \item Let $G = D_6$ and take $H = \bra s\ket$. We have $H = \{e, s\}$, $rH = \{r, rs = sr^{-1}\}$ and $r^2 H = \{r^2, r^2s\}$; while $H = \{e, s\}, Hr = \{r, sr\}$ and $Hr^2=\{r^2, sr^2\}$. So the left and right cosets do not coincide.
+ \end{enumerate}
+\end{eg}
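+We can see this concretely by modelling $D_6$ as permutations of the triangle's vertices in Python. The sketch below (illustrative only; the tuple encoding of $r$ and $s$ is our own choice) computes the set of left cosets and the set of right cosets of $\bra s\ket$ and of $\bra r\ket$:
+
+```python
+from itertools import permutations
+
+def compose(p, q):
+    """(p . q)(i) = p(q(i)); permutations stored as tuples of images."""
+    return tuple(p[q[i]] for i in range(len(q)))
+
+G = list(permutations(range(3)))   # D_6 acting on the triangle {0, 1, 2}
+e, r, s = (0, 1, 2), (1, 2, 0), (0, 2, 1)   # identity, rotation, reflection
+
+H = [e, s]                         # <s>
+K = [e, r, compose(r, r)]          # <r>
+
+# Collect the set of left cosets and the set of right cosets.
+left_H  = {frozenset(compose(g, h) for h in H) for g in G}
+right_H = {frozenset(compose(h, g) for h in H) for g in G}
+left_K  = {frozenset(compose(g, k) for k in K) for g in G}
+right_K = {frozenset(compose(k, g) for k in K) for g in G}
+
+print(left_H == right_H)   # False: left and right cosets of <s> differ
+print(left_K == right_K)   # True: they agree for <r>
+```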
+This distinction will become useful in the next chapter.
+
+\section{Quotient groups}
+In the previous section, when attempting to pretend that a $3\times 3\times 3$ Rubik's cube is a $2\times 2\times 2$ one, we came up with the cosets $aH$, and claimed that these form a group. We also said that this is not the case for arbitrary subgroup $H$, but only for subgroups that satisfy $aH = Ha$. Before we prove these, we first study these subgroups a bit.
+
+\subsection{Normal subgroups}
+\begin{defi}[Normal subgroup]
+ A subgroup $K$ of $G$ is a \emph{normal subgroup} if
+ \[
+ (\forall a\in G)(\forall k\in K)\,aka^{-1}\in K.
+ \]
+ We write $K\lhd G$. This is equivalent to:
+ \begin{enumerate}
+ \item $(\forall a\in G)\,aK = Ka$, i.e.\ left coset $=$ right coset
+ \item $(\forall a\in G)\,aKa^{-1} = K$ (cf.\ conjugacy classes)
+ \end{enumerate}
+\end{defi}
+
+From the example last time, $H=\bra s\ket\leq D_6$ is not a normal subgroup, but $K=\bra r\ket \lhd D_6$. We know that every group $G$ has at least two normal subgroups $\{e\}$ and $G$.
+
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item Every subgroup of index $2$ is normal.
+ \item Any subgroup of an abelian group is normal.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $K\leq G$ has index $2$, then there are only two possible cosets $K$ and $G\setminus K$. As $eK = Ke$ and cosets partition $G$, the other left coset and right coset must be $G\setminus K$. So all left cosets and right cosets are the same.
+ \item For all $a\in G$ and $k\in K$, we have $aka^{-1} = aa^{-1}k = k\in K$.\qedhere
+ \end{enumerate}
+\end{proof}
+\begin{prop}
+ Every kernel is a normal subgroup.
+\end{prop}
+
+\begin{proof}
+ Given homomorphism $f:G\rightarrow H$ and some $a\in G$, for all $k\in \ker f$, we have $f(aka^{-1}) = f(a)f(k)f(a)^{-1} = f(a)ef(a)^{-1} = e$. Therefore $aka^{-1}\in\ker f$ by definition of the kernel.
+\end{proof}
+
+In fact, we will see in the next section that all normal subgroups are kernels of some homomorphism.
+
+\begin{eg}
+ Consider $G = D_8$. The subgroup $K = \bra r^2\ket = \{e, r^2\}$ is normal. Check: any element of $G$ is either $sr^\ell$ or $r^\ell$ for some $\ell$. Clearly $k = e$ satisfies $aka^{-1}\in K$ for all $a$. Now check $k = r^2$: for elements of the form $sr^\ell$, we have $sr^\ell r^2(sr^\ell)^{-1} = sr^\ell r^2 r^{-\ell}s^{-1} = sr^2 s = ssr^{-2} = r^{-2} = r^2$. For elements of the form $r^\ell$, $r^\ell r^2r^{-\ell} = r^2$.
+\end{eg}
+
+\begin{prop}
+ A group of order $6$ is either cyclic or dihedral (i.e.\ $\cong C_6$ or $D_6$).
+\end{prop}
+
+\begin{proof}
+ Let $|G| = 6$. By Lagrange's theorem, possible element orders are $1, 2, 3$ and $6$. If there is an $a\in G$ of order $6$, then $G = \bra a\ket \cong C_6$. Otherwise, we can only have elements of orders $2$ and $3$ other than the identity. If $G$ only has elements of order 2, the order must be a power of 2 by Sheet 1 Q.\ 8, which is not the case. So there must be an element $r$ of order 3. Then $\bra r\ket\lhd G$ as it has index $2$. Now $G$ must also have an element $s$ of order $2$ by Sheet 1 Q.\ 9.
+
+ Since $\bra r\ket$ is normal, we know that $srs^{-1}\in \bra r\ket$. If $srs^{-1} = e$, then $r = e$, which is not true. If $srs^{-1} = r$, then $sr = rs$ and $sr$ has order $6$ (lcm of the orders of $s$ and $r$), which was ruled out above. Otherwise if $srs^{-1} = r^2 = r^{-1}$, then $G$ is dihedral by definition of the dihedral group.
+\end{proof}
+
+\subsection{Quotient groups}
+\begin{prop}
+ Let $K\lhd G$. Then the set of (left) cosets of $K$ in $G$ is a group under the operation $aK*bK = (ab)K$.
+\end{prop}
+
+\begin{proof}
+ First show that the operation is well-defined. If $aK = a'K$ and $bK = b'K$, we want to show that $aK*bK = a'K * b'K$. We know that $a' = ak_1$ and $b' = bk_2$ for some $k_1, k_2\in K$. Then $a'b' = ak_1bk_2$. We know that $b^{-1}k_1b\in K$. Let $b^{-1}k_1b = k_3$. Then $k_1 b = bk_3$. So $a'b' = abk_3k_2\in (ab)K$. So picking a different representative of the coset gives the same product.
+ \begin{enumerate}[label=\arabic{*}.]
+ \item Closure: If $aK, bK$ are cosets, then $(ab)K$ is also a coset
+ \item Identity: The identity is $eK = K$ (clear from definition)
+ \item Inverse: The inverse of $aK$ is $a^{-1}K$ (clear from definition)
+ \item Associativity: Follows from the associativity of $G$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Quotient group]
+ Given a group $G$ and a normal subgroup $K$, the \emph{quotient group} or \emph{factor group} of $G$ by $K$, written as $G/K$, is the set of (left) cosets of $K$ in $G$ under the operation $aK*bK = (ab)K$.
+\end{defi}
+Note that the \emph{set} of left cosets also exists for non-normal subgroups (abnormal subgroups?), but the group operation above is not well defined.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Take $G = \Z$ and $n\Z$ (which must be normal since $G$ is abelian), the cosets are $k + n\Z$ for $0 \leq k < n$. The quotient group is $\Z_n$. So we can write $\Z/(n\Z) = \Z_n$. In fact these are the only quotient groups of $\Z$ since $n\Z$ are the only subgroups.
+
+ Note that if $G$ is abelian, $G/K$ is also abelian.
+ \item Take $K = \bra r\ket \lhd D_6$. We have two cosets $K$ and $sK$. So $D_6/K$ has order 2 and is isomorphic to $C_2$.
+ \item Take $K = \bra r^2\ket \lhd D_8$. We know that $G/K$ should have $\frac{8}{2} = 4$ elements. We have $G/K = \{ K, rK = r^3 K, sK = sr^2K, srK = sr^3K\}$. We see that every element other than $K$ has order $2$, so $G/K\cong C_2\times C_2$.
+ \end{enumerate}
+\end{eg}
+Note that quotient groups are \emph{not} subgroups of $G$. They contain different kinds of elements. For example, the quotients $\Z/(n\Z) \cong C_n$ are finite, but all non-trivial subgroups of $\Z$ are infinite.
+
+
+\begin{eg}
+ (Non-example) Consider $D_6$ with $H = \bra s\ket$. $H$ is not a normal subgroup. We have $rH * r^2 H = r^3 H = H$, but $rH = rsH$ and $r^2H = srH$ (by considering the individual elements). So we have $rsH * srH = r^2 H\not= H$, and the operation is not well-defined.
+\end{eg}
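+The failure in this non-example can be reproduced by machine. In the illustrative Python sketch below (with our own tuple encoding of $D_6$ as permutations of $\{0, 1, 2\}$), the representatives $r$ and $r^2$ multiply into $H$, while the equally valid representatives $rs$ and $sr$ do not:
+
+```python
+from itertools import permutations
+
+def compose(p, q):
+    """(p . q)(i) = p(q(i)); permutations stored as tuples of images."""
+    return tuple(p[q[i]] for i in range(len(q)))
+
+G = list(permutations(range(3)))              # D_6 acting on {0, 1, 2}
+e, r, s = (0, 1, 2), (1, 2, 0), (0, 2, 1)
+H = frozenset({e, s})                         # <s> is not normal
+
+def left_coset(g):
+    return frozenset(compose(g, h) for h in H)
+
+rs, sr, r2 = compose(r, s), compose(s, r), compose(r, r)
+
+# rH = (rs)H and r^2 H = (sr)H as sets of elements...
+assert left_coset(r) == left_coset(rs)
+assert left_coset(r2) == left_coset(sr)
+
+# ...but multiplying the representatives gives different cosets:
+print(left_coset(compose(r, r2)) == H)    # True:  r * r^2 = e lies in H
+print(left_coset(compose(rs, sr)) == H)   # False: rs * sr = r^2 does not
+```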
+
+\begin{lemma}
+ Given $K\lhd G$, the \emph{quotient map} $q: G\rightarrow G/K$ with $g\mapsto gK$ is a surjective group homomorphism.
+\end{lemma}
+
+\begin{proof}
+ $q(ab) = (ab)K = aKbK = q(a)q(b)$. So $q$ is a group homomorphism. Also for all $aK \in G/K$, $q(a) = aK$. So it is surjective.
+\end{proof}
+Note that the kernel of the quotient map is $K$ itself. So any normal subgroup is a kernel of some homomorphism.
+
+\begin{prop}
+ The quotient of a cyclic group is cyclic.
+\end{prop}
+
+\begin{proof}
+ Let $G = C_n$ with $H\leq C_n$. We know that $H$ is also cyclic. Say $C_n = \bra c\ket$ and $H = \bra c^k \ket \cong C_\ell$, where $k\ell = n$. We have $C_n/H = \{H, cH, c^2H, \cdots, c^{k - 1}H\}=\bra cH\ket \cong C_k$.
+\end{proof}
+
+\subsection{The Isomorphism Theorem}
+Now we come to the Really Important Theorem\textsuperscript{TM}.
+\begin{thm}[The Isomorphism Theorem]
+ Let $f:G\to H$ be a group homomorphism with kernel $K$. Then $K\lhd G$ and $G/K\cong \im f$.
+\end{thm}
+
+\begin{proof}
+ We have proved that $K\lhd G$ before. We define a group homomorphism $\theta: G/K\to \im f$ by $\theta(aK) = f(a)$.
+
+ First check that this is well-defined: If $a_1K = a_2K$, then $a_2^{-1}a_1\in K$. So
+ \[
+ f(a_2)^{-1}f(a_1) = f(a_2^{-1}a_1) = e.
+ \]
+ So $f(a_1) = f(a_2)$ and $\theta(a_1K) = \theta(a_2 K)$.
+
+ Now we check that it is a group homomorphism:
+ \[
+ \theta(aKbK) = \theta(abK) = f(ab) = f(a)f(b) = \theta(aK) \theta(bK).
+ \]
+ To show that it is injective, suppose $\theta(aK) = \theta(bK)$. Then $f(a) = f(b)$. Hence $f(b)^{-1}f(a) = e$. Hence $b^{-1}a \in K$. So $aK = bK$.
+
+ By definition, $\theta$ is surjective since $\im \theta = \im f$. So $\theta$ gives an isomorphism $G/K \cong \im f \leq H$.
+\end{proof}
+If $f$ is injective, then the kernel is $\{e\}$, so $G/K\cong G$ and $G$ is isomorphic to a subgroup of $H$. We can think of $f$ as an inclusion map. If $f$ is surjective, then $\im f = H$. In this case, $G/K \cong H$.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Take $f: \GL_n(\R) \to \R^*$ with $A \mapsto \det A$. Then $\ker f = \SL_n(\R)$, and $\im f = \R^*$ since for all $\lambda \in \R^*$, $\det
+ \begin{pmatrix}
+ \lambda & 0 & \cdots & 0 \\
+ 0 &1 & \cdots & 0\\
+ \vdots &\vdots &\ddots & \vdots\\
+ 0& 0 & \cdots &1
+ \end{pmatrix}
+ = \lambda$. So we know that $\GL_n(\R)/\SL_n(\R) \cong \R^*$.
+ \item Define $\theta: (\R, +) \to (\C^*, \times)$ with $r\mapsto \exp(2\pi ir)$. This is a group homomorphism since $\theta(r + s) = \exp(2\pi i(r + s)) = \exp (2\pi i r)\exp (2\pi i s) = \theta(r)\theta(s)$. We know that the kernel is $\Z\lhd \R$. Clearly the image is the unit circle $(S^1, \times)$. So $\R/\Z \cong (S^1, \times)$.
+ \item $G = (\Z_p^*, \times)$ for prime $p\not= 2$. We have $f: G\to G$ with $a\mapsto a^2$. This is a homomorphism since $(ab)^2 = a^2b^2$ ($\Z_p^*$ is abelian). The kernel is $\{\pm 1\} = \{1, p - 1\}$. We know that $\im f\cong G/\ker f$ has order $\frac{p - 1}{2}$. The elements of the image are known as \emph{quadratic residues}.
+ \end{enumerate}
+\end{eg}
+
+\begin{lemma}
+ Any cyclic group is isomorphic to either $\Z$ or $\Z/(n\Z)$ for some $n\in \N$.
+\end{lemma}
+
+\begin{proof}
+ Let $G = \bra c\ket$. Define $f: \Z \to G$ with $m\mapsto c^m$. This is a group homomorphism since $c^{m_1 + m_2} = c^{m_1}c^{m_2}$. $f$ is surjective since $G$ is by definition all $c^m$ for all $m$. We know that $\ker f\lhd \Z$. We have three possibilities. Either
+ \begin{enumerate}
+ \item $\ker f = \{e\}$, so $f$ is an isomorphism and $G\cong \Z$; or
+ \item $\ker f = \Z$, then $G\cong \Z/\Z = \{e\} = C_1$; or
+ \item $\ker f = n\Z$ (since these are the only proper subgroups of $\Z$), then $G\cong \Z/(n\Z)$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Simple group]
+ A group is \emph{simple} if it has no non-trivial proper normal subgroup, i.e.\ only $\{e\}$ and $G$ are normal subgroups.
+\end{defi}
+
+\begin{eg}
+ $C_p$ for prime $p$ is a simple group, since it has no non-trivial proper subgroups at all, let alone normal ones.
+ $A_5$ is simple, which we will prove after Chapter~\ref{sec:sym2}.
+\end{eg}
+
+The finite simple groups are the building blocks of all finite groups. All finite simple groups have been classified (The Atlas of Finite Groups). If we have $K\lhd G$ with $K\not= G$ or $\{e\}$, then we can ``quotient out'' $G$ into $G/K$. If $G/K$ is not simple, repeat. Then we can write $G$ as an ``inverse quotient'' of simple groups.
+
+\section{Group actions}
+\label{sec:action}
+Recall that we came up with groups to model symmetries and permutations. Intuitively, elements of groups are supposed to ``do things''. However, as we developed group theory, we abstracted these away and just looked at how elements combine to form new elements. \emph{Group actions} recapture this idea and make each group element correspond to some function.
+
+\subsection{Group acting on sets}
+\begin{defi}[Group action]
+ Let $X$ be a set and $G$ be a group. An \emph{action} of $G$ on $X$ is a homomorphism $\varphi: G \to \Sym X$.
+\end{defi}
+This means that the homomorphism $\varphi$ turns each element $g\in G$ into a permutation of $X$, in a way that respects the group structure.
+
+Instead of writing $\varphi(g)(x)$, we usually directly write $g(x)$ or $gx$.
+
+Alternatively, we can define the group action as follows:
+\begin{prop}
+ Let $X$ be a set and $G$ be a group. Then $\varphi: G \to \Sym X$ is a homomorphism (i.e.\ an action) iff $\theta: G\times X \to X$ defined by $\theta(g, x) = \varphi(g)(x)$ satisfies
+ \begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{-1}
+ \item $(\forall g\in G)(\forall x\in X)\,\theta(g, x)\in X$.
+ \item $(\forall x\in X)\,\theta(e, x) = x$.
+ \item $(\forall g, h\in G)(\forall x\in X)\,\theta(g, \theta (h, x)) = \theta(gh, x)$.
+ \end{enumerate}
+\end{prop}
+This criterion is \emph{almost} the definition of a homomorphism. However, here we do not explicitly require $\theta(g, \cdot)$ to be a bijection, but instead require $\theta(e, \cdot)$ to be the identity function. This automatically ensures that $\theta(g, \cdot)$ is a bijection, since when composed with $\theta(g^{-1}, \cdot)$, it gives $\theta(e, \cdot)$, which is the identity. So $\theta(g, \cdot)$ has an inverse, and this is usually an easier thing to show.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Trivial action: for any group $G$ acting on any set $X$, we can have $\varphi(g) = 1_X$ for all $g$, i.e.\ $G$ does nothing.
+ \item $S_n$ acts on $\{1, \cdots, n\}$ by permutation.
+ \item $D_{2n}$ acts on the vertices of a regular $n$-gon (or the set $\{1, \cdots, n\}$).
+ \item The rotations of a cube act on the faces/vertices/diagonals/axes of the cube.
+ \end{enumerate}
+\end{eg}
+Note that different groups can act on the same sets, and the same group can act on different sets.
+
+\begin{defi}[Kernel of action]
+ The \emph{kernel} of an action $G$ on $X$ is the kernel of $\varphi$, i.e.\ all $g$ such that $\varphi(g) = 1_X$.
+\end{defi}
+Note that by the isomorphism theorem, $\ker \varphi\lhd G$ and $G/\ker\varphi$ is isomorphic to a subgroup of $\Sym X$.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $D_{2n}$ acting on $\{1, 2, \cdots, n\}$ gives $\varphi: D_{2n}\to S_n$ with kernel $\{e\}$.
+ \item Let $G$ be the rotations of a cube and let it act on the three axes $x, y, z$ through the faces. We have $\varphi: G\to S_3$. Then any rotation by $180^\circ$ doesn't change the axes, i.e.\ act as the identity. So the kernel of the action has at least 4 elements: $e$ and the three $180^\circ$ rotations. In fact, we'll see later that these 4 are exactly the kernel.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Faithful action]
+ An action is \emph{faithful} if the kernel is just $\{e\}$.
+\end{defi}
+
+\subsection{Orbits and Stabilizers}
+\begin{defi}[Orbit of action]
+ Given an action $G$ on $X$, the \emph{orbit} of an element $x\in X$ is
+ \[
+ \orb (x) = G(x) = \{y\in X: (\exists g\in G)\,g(x) = y\}.
+ \]
+ Intuitively, it is the elements that $x$ can possibly get mapped to.
+\end{defi}
+\begin{defi}[Stabilizer of action]
+ The \emph{stabilizer} of $x$ is
+ \[
+ \stab(x) = G_x = \{g\in G: g(x) = x\}\subseteq G.
+ \]
+ Intuitively, it is the elements in $G$ that do not change $x$.
+\end{defi}
+
+\begin{lemma}
+ $\stab(x)$ is a subgroup of $G$.
+\end{lemma}
+
+\begin{proof}
+ We know that $e(x) = x$ by definition. So $\stab(x)$ is non-empty. Suppose $g, h\in \stab(x)$, then $gh^{-1}(x) = g(h^{-1}(x)) = g(x) = x$. So $gh^{-1}\in \stab (x)$. So $\stab (x)$ is a subgroup.
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Consider $D_8$ acting on the corners of the square $X = \{1, 2, 3, 4\}$. Then $\orb(1) = X$ since $1$ can go anywhere by rotations. $\stab(1) = \{e, $ reflection in the line through 1$\}$
+ \item Consider the rotations of a cube acting on the three axes $x, y, z$. Then $\orb(x)$ is everything, and $\stab(x)$ contains $e$, $180^\circ$ rotations and rotations about the $x$ axis.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Transitive action]
+ An action $G$ on $X$ is \emph{transitive} if $(\forall x)\,\orb(x) = X$, i.e.\ you can reach any element from any element.
+\end{defi}
+
+\begin{lemma}
+ The orbits of an action partition $X$.
+\end{lemma}
+
+\begin{proof}
+ Firstly, $(\forall x)(x\in \orb (x))$ as $e(x) = x$. So every $x$ is in some orbit.
+
+ Then suppose $z\in \orb (x)$ and $z\in \orb (y)$, we have to show that $\orb(x) = \orb (y)$. We know that $z = g_1(x)$ and $z = g_2(y)$ for some $g_1, g_2$. Then $g_1(x) = g_2(y)$ and $y = g_2^{-1}g_1(x)$.
+
+ For any $w = g_3(y)\in \orb (y)$, we have $w = g_3g_2^{-1}g_1(x)$. So $w\in \orb (x)$. Thus $\orb (y) \subseteq \orb (x)$ and similarly $\orb (x) \subseteq \orb (y)$. Therefore $\orb (x) = \orb (y)$.
+\end{proof}
+
+Suppose a group $G$ acts on $X$. We fix an $x \in X$. Then by definition of the orbit, given any $g \in G$, we have $g(x) \in \orb(x)$. So each $g \in G$ gives us a member of $\orb(x)$. Conversely, every object in $\orb(x)$ arises this way, by definition of $\orb(x)$. However, different elements in $G$ can give us the same member of $\orb(x)$. In particular, if $g \in \stab(x)$, then $hg$ and $h$ give us the same object in $\orb(x)$, since $hg(x) = h(g(x)) = h(x)$. So we have a correspondence between things in $\orb(x)$ and members of $G$, ``up to $\stab (x)$''.
+
+\begin{thm}[Orbit-stabilizer theorem]
+ Let the group $G$ act on $X$. Then there is a bijection between $\orb(x)$ and cosets of $\stab(x)$ in $G$. In particular, if $G$ is finite, then
+ \[
+ |\orb(x)||\stab(x)| = |G|.
+ \]
+\end{thm}
+
+\begin{proof}
+ We biject the cosets of $\stab(x)$ with elements in the orbit of $x$. Write $(G:\stab(x))$ for the set of left cosets of $\stab(x)$ in $G$. We can define
+ \begin{align*}
+ \theta: (G:\stab(x))& \to \orb(x)\\
+ g\stab(x) &\mapsto g(x).
+ \end{align*}
+ This is well-defined --- if $g\stab(x) = h\stab(x)$, then $h = gk$ for some $k\in \stab(x)$. So $h(x) = g(k(x)) = g(x)$.
+
+ This map is surjective since for any $y\in \orb(x)$, there is some $g \in G$ such that $g(x) = y$, by definition. Then $\theta(g \stab(x)) = y$. It is injective since if $g(x) = h(x)$, then $h^{-1}g(x) = x$. So $h^{-1}g\in \stab (x)$. So $g\stab(x) = h\stab(x)$.
+
+ Hence the number of cosets is $|\orb(x)|$. Then the result follows from Lagrange's theorem.
+\end{proof}
+An important application of the orbit-stabilizer theorem is determining group sizes. To find the order of the symmetry group of, say, a pyramid, we find something for it to act on, pick a favorite element, and find the orbit and stabilizer sizes.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Suppose we want to know how big $D_{2n}$ is. $D_{2n}$ acts on the vertices $\{1, 2, 3, \cdots, n\}$ transitively. So $|\orb(1)| = n$. Also, $\stab(1) =\{e, $ reflection in the line through 1$\}$. So $|D_{2n}| = |\orb(1)||\stab(1)| = 2n$.
+
+ Note that if the action is transitive, then all orbits have size $|X|$ and thus all stabilizers have the same size.
+ \item Let $\bra (1\; 2)\ket$ act on $\{1, 2, 3\}$. Then $\orb(1) = \{1, 2\}$ and $\stab(1) = \{e\}$. $\orb(3) = \{3\}$ and $\stab(3) = \bra(1\; 2)\ket$.
+ \item Consider $S_4$ acting on $\{1, 2, 3, 4\}$. We know that $\orb(1) = X$ and $|S_4| = 24$. So $|\stab(1)| = \frac{24}{4} = 6$. That makes it easier to find $\stab(1)$. Clearly $S_{\{2, 3, 4\}} \cong S_3$ fixes $1$. So $S_{\{2, 3, 4\}}\leq \stab(1)$. However, $|S_3| = 6 = |\stab(1)|$, so this is all of the stabilizer.
+ \end{enumerate}
+\end{eg}
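+The orbit-stabilizer theorem is easy to check by machine for small groups. This illustrative Python sketch builds $D_8$ acting on the square's vertices $\{0, 1, 2, 3\}$ (a zero-indexed encoding of our own) by closing two generators under composition, then verifies $|\orb(0)||\stab(0)| = |G|$:
+
+```python
+def compose(p, q):
+    """(p . q)(i) = p(q(i)); permutations stored as tuples of images."""
+    return tuple(p[q[i]] for i in range(len(q)))
+
+r = (1, 2, 3, 0)         # rotation by 90 degrees
+s = (0, 3, 2, 1)         # reflection fixing vertex 0
+
+# Close the generators under composition to get the whole of D_8.
+G = {(0, 1, 2, 3)}
+frontier = {r, s}
+while frontier:
+    G |= frontier
+    frontier = {compose(a, b) for a in G for b in G} - G
+
+orb = {g[0] for g in G}                # orbit of vertex 0
+stab = {g for g in G if g[0] == 0}     # stabilizer of vertex 0
+print(len(G), len(orb), len(stab))     # 8 4 2
+assert len(orb) * len(stab) == len(G)  # orbit-stabilizer theorem
+```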
+
+\subsection{Important actions}
+Given any group $G$, there are a few important actions we can define. In particular, we will define the \emph{conjugation} action, which is a very important concept on its own. In fact, the whole of the next chapter will be devoted to studying conjugation in the symmetric groups.
+
+First, we will study some less important examples of actions.
+\begin{lemma}[Left regular action]
+ Any group $G$ acts on itself by left multiplication. This action is faithful and transitive.
+\end{lemma}
+\begin{proof}
+ We have
+ \begin{enumerate}[label=\arabic{*}.]
+ \item $(\forall g\in G)(\forall x\in G)\,g(x) = g\cdot x \in G$ by definition of a group.
+ \item $(\forall x\in G)\,e\cdot x = x$ by definition of a group.
+ \item $g(hx) = (gh)x$ by associativity.
+ \end{enumerate}
+ So it is an action.
+
+ To show that it is faithful, we want to know that $[(\forall x\in X)\,gx = x]\Rightarrow g = e$. This follows directly from the uniqueness of identity.
+
+ To show that it is transitive, $\forall x, y\in G$, then $(yx^{-1})(x) = y$. So any $x$ can be sent to any $y$.
+\end{proof}
+
+\begin{thm}[Cayley's theorem]
+ Every group is isomorphic to some subgroup of some symmetric group.
+\end{thm}
+
+\begin{proof}
+ Take the left regular action of $G$ on itself. This gives a group homomorphism $\varphi: G\to \Sym G$ with $\ker \varphi = \{e\}$ as the action is faithful. By the isomorphism theorem, $G \cong \im \varphi \leq \Sym G$.
+\end{proof}
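+Cayley's theorem can be illustrated for $\Z_4$: left addition turns each element into a permutation of the underlying set, and the resulting map into $\Sym \Z_4$ is an injective homomorphism. A small Python sketch (illustrative only):
+
+```python
+# Left regular action of Z_4 on itself: g acts as x -> g + x (mod 4).
+n = 4
+perm = {g: tuple((g + x) % n for x in range(n)) for g in range(n)}
+
+# Faithful, so g -> perm[g] is injective: Z_4 embeds into Sym(Z_4).
+assert len(set(perm.values())) == n
+
+# It is a homomorphism: perm[g + h] = perm[g] composed with perm[h].
+for g in range(n):
+    for h in range(n):
+        composed = tuple(perm[g][perm[h][x]] for x in range(n))
+        assert composed == perm[(g + h) % n]
+```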
+
+\begin{lemma}[Left coset action]
+ Let $H\leq G$. Then $G$ acts on the left cosets of $H$ by left multiplication transitively.
+\end{lemma}
+
+\begin{proof}
+ First show that it is an action:
+ \begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{-1}
+ \item $g(aH) = (ga)H$ is a coset of $H$.
+ \item $e(aH) = (ea)H = aH$.
+ \item $g_1(g_2(aH)) = g_1((g_2a)H) = (g_1g_2a)H = (g_1g_2)(aH)$.
+ \end{enumerate}
+
+ To show that it is transitive, given $aH, bH$, we know that $(ba^{-1})(aH) = bH$. So any $aH$ can be mapped to $bH$.
+\end{proof}
+In the boring case where $H = \{e\}$, then this is just the left regular action since $G/\{e\} \cong G$.
+
+\begin{defi}[Conjugation of element]
+ The \emph{conjugation} of $a\in G$ by $b\in G$ is given by $bab^{-1}\in G$. Given any $a, c$, if there exists some $b$ such that $c = bab^{-1}$, then we say $a$ and $c$ are \emph{conjugate}.
+\end{defi}
+What is conjugation? This $bab^{-1}$ form looks familiar from Vectors and Matrices. It is the formula used for changing basis. If $b$ is the change-of-basis matrix and $a$ is a matrix, then the matrix in the new basis is given by $bab^{-1}$. In this case, $bab^{-1}$ is the same matrix viewed from a different basis.
+
+In general, two conjugate elements are ``the same'' in some sense. For example, we will later show that in $S_n$, two elements are conjugate if and only if they have the same cycle type. Conjugate elements in general have many properties in common, such as their order.
+
+\begin{lemma}[Conjugation action]
+ Any group $G$ acts on itself by conjugation (i.e.\ $g(x) = gxg^{-1}$).
+\end{lemma}
+
+\begin{proof}
+ To show that this is an action, we have
+ \begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{-1}
+ \item $g(x) = gxg^{-1} \in G$ for all $g, x\in G$.
+ \item $e(x) = exe^{-1} = x$
+ \item $g(h(x)) = g(hxh^{-1}) = ghxh^{-1}g^{-1} = (gh)x(gh)^{-1} = (gh)(x)$\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Conjugacy classes and centralizers]
+ The \emph{conjugacy classes} are the orbits of the conjugacy action.
+ \[
+ \ccl(a) = \{b\in G: (\exists g\in G)\,gag^{-1} = b\}.
+ \]
+ The \emph{centralizers} are the stabilizers of this action, i.e.\ elements that commute with $a$.
+ \[
+ C_G(a) = \{g\in G: gag^{-1} = a\} = \{g\in G: ga = ag\}.
+ \]
+\end{defi}
+The centralizer is defined as the elements that commute with a particular element $a$. For the whole group $G$, we can define the \emph{center}.
+
+\begin{defi}[Center of group]
+ The \emph{center} of $G$ is the elements that commute with all other elements.
+ \[
+ Z(G) = \{g\in G: (\forall a)\,gag^{-1} = a\} = \{g\in G: (\forall a)\,ga = ag\}.
+ \]
+ It is sometimes written as $C(G)$ instead of $Z(G)$.
+\end{defi}
+
+In many ways, conjugation is related to normal subgroups.
+\begin{lemma}
+ Let $K\lhd G$. Then $G$ acts by conjugation on $K$.
+\end{lemma}
+
+\begin{proof}
+ We only have to prove closure as the other properties follow from the conjugation action. However, by definition of a normal subgroup, for every $g \in G, k \in K$, we have $gkg^{-1}\in K$. So it is closed.
+\end{proof}
+
+\begin{prop}
+ Normal subgroups are exactly those subgroups which are unions of conjugacy classes.
+\end{prop}
+
+\begin{proof}
+ Let $K\lhd G$. If $k\in K$, then by definition for every $g \in G$, we get $gkg^{-1}\in K$. So $\ccl(k)\subseteq K$. So $K$ is the union of the conjugacy classes of all its elements.
+
+ Conversely, if $K$ is a union of conjugacy classes and a subgroup of $G$, then for all $k \in K, g \in G$, we have $gkg^{-1}\in K$. So $K$ is normal.
+\end{proof}
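+For $D_6 \cong S_3$, this can be checked directly: the conjugacy classes are $\{e\}$, the two non-trivial rotations and the three reflections, so $\bra r\ket$ is a union of classes while $\bra s\ket$ is not. An illustrative Python sketch:
+
+```python
+from itertools import permutations
+
+def compose(p, q):
+    """(p . q)(i) = p(q(i)); permutations stored as tuples of images."""
+    return tuple(p[q[i]] for i in range(len(q)))
+
+def inverse(p):
+    inv = [0] * len(p)
+    for i, j in enumerate(p):
+        inv[j] = i
+    return tuple(inv)
+
+G = list(permutations(range(3)))   # S_3, which is isomorphic to D_6
+
+def ccl(a):
+    """Conjugacy class of a: all g a g^{-1} for g in G."""
+    return {compose(compose(g, a), inverse(g)) for g in G}
+
+e, r, s = (0, 1, 2), (1, 2, 0), (0, 2, 1)
+K = {e, r, compose(r, r)}          # <r>, normal
+H = {e, s}                         # <s>, not normal
+
+print(K == ccl(e) | ccl(r))   # True:  <r> is a union of conjugacy classes
+print(ccl(s) <= H)            # False: the class of s is not contained in <s>
+```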
+
+\begin{lemma}
+ Let $X$ be the set of subgroups of $G$. Then $G$ acts by conjugation on $X$.
+\end{lemma}
+
+\begin{proof}
+ To show that it is an action, we have
+ \begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{-1}
+ \item If $H\leq G$, then we have to show that $gHg^{-1}$ is also a subgroup. We know that $e\in H$ and thus $geg^{-1} = e\in gHg^{-1}$, so $gHg^{-1}$ is non-empty. For any two elements $gag^{-1}$ and $gbg^{-1}\in gHg^{-1}$, $(gag^{-1})(gbg^{-1})^{-1} = g(ab^{-1})g^{-1}\in gHg^{-1}$. So $gHg^{-1}$ is a subgroup.
+ \item $eHe^{-1} = H$.
+ \item $g_1(g_2Hg_2^{-1})g_1^{-1} = (g_1g_2)H(g_1g_2)^{-1}$.\qedhere
+ \end{enumerate}
+\end{proof}
+Under this action, normal subgroups have singleton orbits.
+
+\begin{defi}[Normalizer of subgroup]
+ The \emph{normalizer} of a subgroup is the stabilizer of the (group) conjugation action.
+ \[
+ N_G(H) = \{g\in G: gHg^{-1} = H\}.
+ \]
+\end{defi}
+We clearly have $H\subseteq N_G(H)$. It is easy to show that $N_G(H)$ is the largest subgroup of $G$ in which $H$ is a normal subgroup, hence the name.
+
+There is a connection between actions in general and conjugation of subgroups.
+
+\begin{lemma}
+ Stabilizers of the elements in the same orbit are conjugate, i.e.\ let $G$ act on $X$ and let $g\in G, x\in X$. Then $\stab(g(x)) = g\stab(x)g^{-1}$.
+\end{lemma}
+
+\subsection{Applications}
+\begin{eg}
+ Let $G^{+}$ be the rotations of a cube acting on the vertices. Let $X$ be the set of vertices. Then $|X| = 8$. Since the action is transitive, the orbit of any element is the whole of $X$. The stabilizer of vertex $1$ consists of the rotations about the axis through $1$ and the diagonally opposite vertex, of which there are 3. So $|G^{+}| = |\orb(1)||\stab(1)| = 8\cdot 3 = 24$.
+\end{eg}
+
+\begin{eg}
+ Let $G$ be a finite simple group of order greater than 2, and $H\leq G$ have index $n\not= 1$. Then $|G| \leq n!/2$.
+\end{eg}
+
+\begin{proof}
+ Consider the action of $G$ on the set of left cosets of $H$ by left multiplication. Since there are $n$ cosets, this gives a group homomorphism $\varphi: G\to S_n$. Since $H\not= G$, $\varphi$ is non-trivial and $\ker \varphi \not=G$. Now $\ker \varphi \lhd G$. Since $G$ is simple, $\ker\varphi = \{e\}$. So $G\cong \im \varphi \leq S_n$ by the isomorphism theorem, and hence $|G| \leq |S_n| = n!$.
+
+ We can further refine this by considering $\sgn\circ \varphi: G \to \{\pm 1\}$. The kernel of this composite is normal in $G$. So $K = \ker(\sgn\circ\varphi) = \{e\}$ or $G$. Since $G/K\cong \im (\sgn\circ\varphi)$, we know that $|G|/|K| = 1$ or $2$, since $\im(\sgn\circ \varphi)$ has at most two elements. Hence for $|G|> 2$, we cannot have $K = \{e\}$, or else $|G|/|K| > 2$. So we must have $K = G$, so $\sgn(\varphi(g)) = 1$ for all $g$, and $\im \varphi \leq A_n$. So $|G|\leq n!/2$.
+\end{proof}
+
+We have seen on Sheet 1 that if $|G|$ is even, then $G$ has an element of order 2. In fact,
+\begin{thm}[Cauchy's Theorem]
+ Let $G$ be a finite group and prime $p$ dividing $|G|$. Then $G$ has an element of order $p$ (in fact there must be at least $p - 1$ elements of order $p$).
+\end{thm}
+It is important to remember that this only holds for prime $p$. For example, $A_4$ doesn't have an element of order $6$ even though $6 \mid 12 = |A_4|$. The converse, however, holds for any number trivially by Lagrange's theorem.
+
+\begin{proof}
+ Let $G$ and $p$ be fixed. Consider $G^p = G\times G\times \cdots \times G$, the set of $p$-tuples of $G$. Let $X \subseteq G^p$ be $X = \{(a_1, a_2, \cdots, a_p)\in G^p: a_1a_2\cdots a_p = e\}$.
+
+ In particular, if an element $b$ has order $p$, then $(b, b, \cdots, b)\in X$. In fact, if $(b, b, \cdots, b)\in X$ and $b\not= e$, then $b$ has order $p$, since $p$ is prime.
+
+ Now let $H = \bra h: h^p = e\ket\cong C_p$ be a cyclic group of order $p$ with generator $h$ (This $h$ is not related to $G$ in any way). Let $H$ act on $X$ by ``rotation'':
+ \[
+ h(a_1, a_2, \cdots, a_p) = (a_2, a_3, \cdots, a_p, a_1)
+ \]
+ This is an action:
+ \begin{enumerate}[label=\arabic*.]
+ \setcounter{enumi}{-1}
+ \item If $a_1\cdots a_p = e$, then $a_1^{-1} = a_2\cdots a_p$. So $a_2\cdots a_pa_1 = a_1^{-1}a_1 = e$. So $(a_2, a_3, \cdots, a_p, a_1)\in X$.
+ \item $e$ acts as an identity by construction
+ \item The ``associativity'' condition also works by construction.
+ \end{enumerate}
+
+ As orbits partition $X$, the orbit sizes sum to $|X|$. We know that $|X| = |G|^{p - 1}$, since we can freely choose the first $p - 1$ entries, and the last must then be the inverse of their product. Since $p$ divides $|G|$, $p$ also divides $|X|$. By the orbit-stabilizer theorem, $|\orb(a_1, \cdots , a_p)||\stab_H(a_1, \cdots, a_p)| = |H| = p$. So all orbits have size 1 or $p$. There is at least one orbit of size 1, namely $\{(e, e, \cdots, e)\}$. Since the orbit sizes sum to $|X|$, a multiple of $p$, there must be at least $p - 1$ other orbits of size 1.
+
+ An orbit of size 1 must be of the form $\{(a, a, \cdots, a)\}$ for some $a\in G$ with $a^p = e$; since $a\not= e$ in these other orbits, each such $a$ has order $p$.
+\end{proof}
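The counting argument above can be checked by brute force in a small case. The sketch below (helper names are ours, not from the notes) runs it for $G = S_3$ and $p = 3$: the set $X$ has $|G|^{p-1} = 36$ elements, and exactly $3$ constant tuples, so $p - 1 = 2$ elements of order $3$.

```python
from itertools import product

# Brute-force check of the counting argument for G = S_3, p = 3.
# Permutations of {0, 1, 2} are tuples of images; compose(f, g) applies g first.
def compose(f, g):
    return tuple(f[g[i]] for i in range(len(g)))

identity = (0, 1, 2)
S3 = [f for f in product(range(3), repeat=3) if set(f) == {0, 1, 2}]

# X = {(a1, a2, a3) : a1 a2 a3 = e}; a1, a2 are free and determine a3.
X = [(a1, a2, a3) for a1 in S3 for a2 in S3 for a3 in S3
     if compose(compose(a1, a2), a3) == identity]
assert len(X) == len(S3) ** 2          # |X| = |G|^{p-1} = 36, divisible by p

# Size-1 orbits of the rotation action are the constant tuples (a, a, a), a^3 = e.
fixed = [x for x in X if x[0] == x[1] == x[2]]
order_p = [x[0] for x in fixed if x[0] != identity]
print(len(fixed), len(order_p))
```

Here the two non-identity fixed tuples come from the two 3-cycles of $S_3$, as Cauchy's theorem predicts.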
+
+\section{Symmetric groups II}
+\label{sec:sym2}
+In this chapter, we will look at conjugacy classes of $S_n$ and $A_n$. It turns out this is easy for $S_n$, since two elements are conjugate if and only if they have the same cycle type. However, it is slightly more complicated in $A_n$: while $(1\; 2\; 3)$ and $(1\; 3\; 2)$ are conjugate in $S_n$, the element needed to perform the conjugation might be odd and hence not in $A_n$.
+
+\subsection{Conjugacy classes in \texorpdfstring{$S_n$}{Sn}}
+Recall $\sigma, \tau\in S_n$ are conjugate if $\exists \rho\in S_n$ such that $\rho \sigma\rho^{-1} = \tau$.
+
+We first investigate the special case, when $\sigma$ is a $k$-cycle.
+\begin{prop}
+ If $(a_1\; a_2\; \cdots \; a_k)$ is a $k$-cycle and $\rho\in S_n$, then $\rho (a_1\; \cdots\; a_k)\rho^{-1}$ is the $k$-cycle $(\rho(a_1)\; \rho(a_2)\; \cdots\; \rho(a_k))$.
+\end{prop}
+
+\begin{proof}
+ Consider the image of $\rho(a_1)$ under $\rho (a_1\; \cdots\; a_k)\rho^{-1}$: the three permutations in turn send $\rho(a_1)\mapsto a_1 \mapsto a_2 \mapsto \rho(a_2)$, and similarly for the other $a_i$. Since $\rho$ is bijective, every element can be written as $\rho(a)$ for some $a$, and the elements $\rho(a)$ with $a\not\in\{a_1, \cdots, a_k\}$ are fixed. So the result is the $k$-cycle $(\rho(a_1)\; \rho(a_2)\; \cdots\; \rho(a_k))$.
+\end{proof}
+
+\begin{cor}
+ Two elements in $S_n$ are conjugate iff they have the same cycle type.
+\end{cor}
+
+\begin{proof}
+ Suppose $\sigma = \sigma_1\sigma_2\cdots \sigma_\ell$, where $\sigma_i$ are disjoint cycles. Then $\rho\sigma\rho^{-1} = \rho\sigma_1\rho^{-1}\rho\sigma_2\rho^{-1}\cdots \rho\sigma_\ell\rho^{-1}$. Since conjugation of a cycle preserves its length, $\rho\sigma\rho^{-1}$ has the same cycle type.
+
+ Conversely, if $\sigma, \tau$ have the same cycle type, say
+ \[
+ \sigma = (a_1\; a_2\;\cdots\; a_k)(a_{k + 1}\; \cdots \;a_{k + \ell}),\quad \tau = (b_1\; b_2\;\cdots\; b_k)(b_{k + 1}\; \cdots \;b_{k + \ell}),
+ \]
+ if we let $\rho(a_i) = b_i$, then $\rho\sigma\rho^{-1} = \tau$.
+\end{proof}
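This corollary can be verified exhaustively for a small group. The following sketch (our own brute-force helpers, written for permutations as tuples of images) checks in $S_4$ that conjugacy classes coincide with cycle types, and recovers the class sizes tabulated in the next example:

```python
from itertools import permutations

n = 4
S4 = list(permutations(range(n)))

def compose(f, g):
    # (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(n))

def inverse(f):
    inv = [0] * n
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

def cycle_type(f):
    seen, lengths = set(), []
    for i in range(n):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j, length = f[j], length + 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

def conj_class(sigma):
    return frozenset(compose(compose(r, sigma), inverse(r)) for r in S4)

# Conjugates of sigma are exactly the elements with sigma's cycle type.
for sigma in S4:
    assert conj_class(sigma) == frozenset(
        t for t in S4 if cycle_type(t) == cycle_type(sigma))

sizes = sorted(len(c) for c in {conj_class(s) for s in S4})
print(sizes)
```

The five class sizes found this way are $1, 3, 6, 6, 8$, matching the table below.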
+
+\begin{eg}
+ Conjugacy classes of $S_4$:
+ \begin{center}
+ \begin{tabular}{lcccc}
+ \toprule
+ Cycle type & Example element & Size of ccl & Size of centralizer & Sign \\
+ \midrule
+ (1, 1, 1, 1) & $e$ & 1 & 24 & $+1$ \\
+ (2, 1, 1) & $(1\; 2)$ & 6 & 4 & $-1$ \\
+ (2, 2) & $(1\; 2)(3\; 4)$ & 3 & 8 & $+1$ \\
+ (3, 1) & $(1\; 2\; 3)$ & 8 & 3 & $+1$ \\
+ (4) & $(1\; 2\; 3\; 4)$ & 6 & 4 & $-1$ \\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+
+ \noindent We know that a normal subgroup is a union of conjugacy classes. We can now find all normal subgroups by finding possible unions of conjugacy classes whose total cardinality divides 24. Note that a normal subgroup must contain $e$.
+ \begin{enumerate}
+ \item Order 1: $\{e\}$
+ \item Order 2: None
+ \item Order 3: None
+ \item Order 4: $\{e, (1\; 2)(3\; 4), (1\; 3)(2\; 4), (1\; 4)(2\; 3)\}\cong C_2\times C_2 = V_4$ is a possible candidate. We can check the group axioms and find that it is indeed a subgroup.
+ \item Order 6: None
+ \item Order 8: None
+ \item Order 12: $A_4$ (We know it is a normal subgroup since it is the kernel of the signature and/or it has index 2)
+ \item Order 24: $S_4$
+ \end{enumerate}
+ We can also obtain the quotients of $S_4$: $S_4/\{e\} \cong S_4$, $S_4/V_4 \cong S_3\cong D_6$, $S_4/A_4 \cong C_2$, $S_4/S_4 = \{e\}$.
+\end{eg}
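The claim that $V_4$ is a subgroup and normal in $S_4$ can be checked directly; the sketch below (our own brute-force check, 0-indexed permutations as tuples of images) verifies closure and conjugation-invariance:

```python
from itertools import permutations

# Brute-force check that V_4 is a normal subgroup of S_4.
def compose(f, g):
    return tuple(f[g[i]] for i in range(4))

def inverse(f):
    inv = [0] * 4
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

S4 = list(permutations(range(4)))
# V_4 = {e} together with the three double transpositions, written 0-indexed.
V4 = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}

# Closed under composition, hence a subgroup containing e...
assert all(compose(a, b) in V4 for a in V4 for b in V4)
# ...and a union of conjugacy classes, hence normal: g V g^{-1} = V for all g.
assert all({compose(compose(g, v), inverse(g)) for v in V4} == V4 for g in S4)
print("V_4 is normal in S_4")
```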
+
+\subsection{Conjugacy classes in \texorpdfstring{$A_n$}{An}}
+We have seen that $|S_n| = 2|A_n|$ and that conjugacy classes in $S_n$ are ``nice''. How about in $A_n$?
+
+The first thought is that we write it down:
+\begin{align*}
+ \ccl_{S_n}(\sigma) &= \{\tau\in S_n: (\exists \rho \in S_n)\, \tau = \rho\sigma\rho^{-1}\}\\
+ \ccl_{A_n}(\sigma) &= \{\tau\in A_n: (\exists \rho \in A_n)\, \tau = \rho\sigma\rho^{-1}\}
+\end{align*}
+Obviously $\ccl_{A_n}(\sigma)\subseteq \ccl_{S_n}(\sigma)$, but the converse need not be true, since the element needed to conjugate $\sigma$ to $\tau$ may be odd.
+
+\begin{eg}
+ Consider $(1\; 2\; 3)$ and $(1\; 3\; 2)$. They are conjugate in $S_3$ by $(2\; 3)$, but $(2\; 3)\not\in A_3$. (This does not automatically mean that they are not conjugate in $A_3$, because there might be another even permutation that conjugates $(1\; 2\; 3)$ to $(1\; 3\; 2)$. In $A_5$, $(2\; 3)(4\; 5)$ works, but there is no such element in $A_3$.)
+\end{eg}
+
+We can use the orbit-stabilizer theorem:
+\begin{align*}
+ |S_n| &= |\ccl_{S_n}(\sigma)||C_{S_n}(\sigma)|\\
+ |A_n| &= |\ccl_{A_n}(\sigma)||C_{A_n}(\sigma)|
+\end{align*}
+We know that $|A_n| = \frac{1}{2}|S_n|$ and $\ccl_{A_n}(\sigma)\subseteq \ccl_{S_n}(\sigma)$. So we have two options: either $\ccl_{A_n}(\sigma) = \ccl_{S_n}(\sigma)$ and $|C_{A_n}(\sigma)| = \frac{1}{2}|C_{S_n}(\sigma)|$; or $|\ccl_{A_n}(\sigma)| = \frac{1}{2}|\ccl_{S_n}(\sigma)|$ and $C_{A_n}(\sigma) = C_{S_n}(\sigma)$.
+
+\begin{defi}[Splitting of conjugacy classes]
+ When $|\ccl_{A_n}(\sigma)| = \frac{1}{2}|\ccl_{S_n}(\sigma)|$, we say that the conjugacy class of $\sigma$ \emph{splits} in $A_n$.
+\end{defi}
+
+So the conjugacy classes are either retained or split.
+
+\begin{prop}
+ For $\sigma\in A_n$, the conjugacy class of $\sigma$ splits in $A_n$ if and only if no odd permutation commutes with $\sigma$.
+\end{prop}
+
+\begin{proof}
+ The conjugacy class splits if and only if the centralizer does not, so we instead check whether the centralizer splits. Clearly $C_{A_n}(\sigma) = C_{S_n}(\sigma)\cap A_n$, so the centralizer splits (i.e.\ $|C_{A_n}(\sigma)| = \frac{1}{2}|C_{S_n}(\sigma)|$) if and only if $C_{S_n}(\sigma)$ contains an odd permutation, i.e.\ some odd permutation commutes with $\sigma$.
+\end{proof}
+
+\begin{eg}
+ Conjugacy classes in $A_4$:
+ \begin{center}
+ \begin{tabular}{lcccc}
+ \toprule
+ Cycle type & Example & $|\ccl_{S_4}|$ & Odd element in $C_{S_4}$? & $|\ccl_{A_4}|$ \\
+ \midrule
+ (1, 1, 1, 1) & $e$ & 1 & Yes $(1\; 2)$ & 1 \\
+ (2, 2) & $(1\; 2)(3\; 4)$ & 3 & Yes $(1\; 2)$ & 3 \\
+ (3, 1) & $(1\; 2\; 3)$ & 8 & No & 4, 4 \\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ In the (3, 1) case, by the orbit-stabilizer theorem, $|C_{S_4}((1\; 2\; 3))| = 3$, which is odd, so the centralizer cannot split; hence the conjugacy class splits.
+\end{eg}
+
+\begin{eg}
+ Conjugacy classes in $A_5$:
+ \begin{center}
+ \begin{tabular}{lcccc}
+ \toprule
+ Cycle type & Example & $|\ccl_{S_5}|$ & Odd element in $C_{S_5}$? & $|\ccl_{A_5}|$ \\
+ \midrule
+ (1, 1, 1, 1, 1) & $e$ & 1 & Yes $(1\; 2)$ & 1 \\
+ (2, 2, 1) & $(1\; 2)(3\; 4)$ & 15 & Yes $(1\; 2)$ & 15 \\
+ (3, 1, 1) & $(1\; 2\; 3)$ & 20 & Yes $(4\; 5)$ & 20 \\
+ (5) & $(1\; 2\; 3\; 4\; 5)$ & 24 & No & 12, 12 \\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Since the centralizer of $(1\; 2\; 3\; 4\; 5)$ has size $5$, which is odd, it cannot split, so the conjugacy class must split.
+\end{eg}
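The splitting of the class of $(1\;2\;3\;4\;5)$ can be confirmed by brute force. The sketch below (our own helpers, 0-indexed) computes both conjugacy classes directly:

```python
from itertools import permutations

# Brute-force check that the class of a 5-cycle splits in A_5.
n = 5

def compose(f, g):
    return tuple(f[g[i]] for i in range(n))

def inverse(f):
    inv = [0] * n
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

def sign(f):
    # parity via the number of inversions
    return (-1) ** sum(f[i] > f[j] for i in range(n) for j in range(i + 1, n))

S5 = list(permutations(range(n)))
A5 = [f for f in S5 if sign(f) == 1]
sigma = (1, 2, 3, 4, 0)   # the 5-cycle (1 2 3 4 5), written 0-indexed

ccl_S = {compose(compose(r, sigma), inverse(r)) for r in S5}
ccl_A = {compose(compose(r, sigma), inverse(r)) for r in A5}
print(len(ccl_S), len(ccl_A))
```

The $S_5$-class has 24 elements while the $A_5$-class has only 12, exactly half, as in the table.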
+
+\begin{lemma}
+ $\sigma = (1\; 2\; 3\; 4\; 5)\in S_5$ has $C_{S_5}(\sigma) = \bra \sigma \ket$.
+\end{lemma}
+
+\begin{proof}
+ $|\ccl_{S_5}(\sigma)| = 24$ and $|S_5| = 120$. So $|C_{S_5}(\sigma)| = 120/24 = 5$. Clearly $\bra \sigma \ket \subseteq C_{S_5}(\sigma)$, and since both have size $5$, we get $C_{S_5}(\sigma) = \bra \sigma\ket$.
+\end{proof}
+
+\begin{thm}
+ $A_5$ is simple.
+\end{thm}
+
+\begin{proof}
+ A normal subgroup must be a union of conjugacy classes, must contain $e$, and must have order dividing 60. The possible orders are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60. However, no union of the conjugacy classes of sizes 1, 15, 20, 12, 12 that contains the class $\{e\}$ adds up to any of these orders apart from 1 and 60. So $A_5$ has only trivial normal subgroups.
+\end{proof}
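The arithmetic in this proof is a finite check, which can be spelled out. The following sketch enumerates all unions of conjugacy classes containing $\{e\}$ and tests which totals divide $60$:

```python
from itertools import combinations

# Arithmetic behind the simplicity proof: conjugacy class sizes of A_5.
class_sizes = [1, 15, 20, 12, 12]           # the class of e has size 1
others = class_sizes[1:]

# A normal subgroup is a union of classes containing {e}, of order dividing 60.
possible = set()
for r in range(len(others) + 1):
    for subset in combinations(others, r):
        total = 1 + sum(subset)
        if 60 % total == 0:
            possible.add(total)
print(sorted(possible))
```

Only the totals $1$ and $60$ survive, so the only normal subgroups are $\{e\}$ and $A_5$ itself.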
+In fact, all $A_n$ for $n\geq 5$ are simple, but the proof is horrible (cf.\ IB Groups, Rings and Modules).
+
+\section{Quaternions}
+In the remainder of the course, we will look at various important groups. Here, we will have a brief look at the quaternions.
+\begin{defi}[Quaternions]
+ The \emph{quaternion group} is the set of eight matrices
+ \begin{gather*}
+ \begin{pmatrix}
+ 1&0\\0&1
+ \end{pmatrix},
+ \begin{pmatrix}
+ i & 0\\0&-i
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0&1\\-1&0
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0&i\\i&0
+ \end{pmatrix}
+ \\
+ \begin{pmatrix}
+ -1&0\\0&-1
+ \end{pmatrix},
+ \begin{pmatrix}
+ -i & 0\\0&i
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0&-1\\1&0
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0&-i\\-i&0
+ \end{pmatrix}
+ \end{gather*}
+ which is a subgroup of $\GL_2(\C)$.
+\end{defi}
+
+\begin{notation}
+ We can also write the quaternions as
+ \[
+ Q_8 = \bra a, b : a^4 = e, b^2 = a^2, bab^{-1} = a^{-1}\ket
+ \]
+ Even better, we can write
+ \[
+ Q_8 = \{1, -1, i, -i, j, -j, k, -k\}
+ \]
+ with
+ \begin{enumerate}
+ \item $(-1)^2 = 1$
+ \item $i^2 = j^2 = k^2 = -1$
+ \item $(-1)i = -i$ etc.
+ \item $ij = k$, $jk = i$, $ki = j$
+ \item $ji = -k$, $kj = -i$, $ik = -j$
+ \end{enumerate}
+ We have
+ \begin{gather*}
+ 1 = \begin{pmatrix}
+ 1&0\\0&1
+ \end{pmatrix},
+ i = \begin{pmatrix}
+ i & 0\\0&-i
+ \end{pmatrix},
+ j = \begin{pmatrix}
+ 0&1\\-1&0
+ \end{pmatrix},
+ k = \begin{pmatrix}
+ 0&i\\i&0
+ \end{pmatrix}\\
+ -1 = \begin{pmatrix}
+ -1&0\\0&-1
+ \end{pmatrix},
+ -i = \begin{pmatrix}
+ -i & 0\\0&i
+ \end{pmatrix},
+ -j = \begin{pmatrix}
+ 0&-1\\1&0
+ \end{pmatrix},
+ -k = \begin{pmatrix}
+ 0&-i\\-i&0
+ \end{pmatrix}
+ \end{gather*}
+\end{notation}
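The relations listed above can be verified directly from the matrix representation. The sketch below is a minimal check (our `mul` is a bare $2\times 2$ complex matrix product; `mone` denotes $-1$):

```python
# Verify the quaternion relations from the 2x2 complex matrix representation.
def mul(A, B):
    return tuple(tuple(sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2))
                 for r in range(2))

one  = ((1, 0), (0, 1))
i    = ((1j, 0), (0, -1j))
j    = ((0, 1), (-1, 0))
k    = ((0, 1j), (1j, 0))
mone = ((-1, 0), (0, -1))

assert mul(i, i) == mul(j, j) == mul(k, k) == mone   # i^2 = j^2 = k^2 = -1
assert mul(i, j) == k and mul(j, k) == i and mul(k, i) == j
assert mul(j, i) == mul(mone, k)                     # ji = -k
print("quaternion relations hold")
```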
+
+\begin{lemma}
+ If $G$ has order 8, then either $G$ is abelian (i.e.\ $\cong C_8, C_4\times C_2$ or $C_2\times C_2\times C_2$), or $G$ is not abelian and isomorphic to $D_8$ or $Q_8$ (dihedral or quaternion).
+\end{lemma}
+
+\begin{proof}
+ Consider the different possible cases:
+ \begin{itemize}
+ \item If $G$ contains an element of order 8, then $G\cong C_8$.
+ \item If all non-identity elements have order 2, then $G$ is abelian (Sheet 1, Q8). Pick distinct $a, b\in G\setminus\{e\}$. By the direct product theorem, $\bra a, b\ket = \bra a\ket\times\bra b\ket$. Then take $c\not\in \bra a, b\ket$. By the direct product theorem again, $\bra a, b, c\ket = \bra a\ket\times\bra b\ket\times\bra c\ket \cong C_2\times C_2\times C_2$. Since $\bra a, b, c\ket\subseteq G$ and $|\bra a, b, c\ket| = 8 = |G|$, we get $G = \bra a, b, c\ket \cong C_2\times C_2\times C_2$.
+ \item $G$ has no element of order 8 but has an order 4 element $a\in G$. Let $H = \bra a\ket$. Since $H$ has index 2, it is normal in $G$. So $G/H \cong C_2$ since $|G/H| = 2$. This means that for any $b\not\in H$, $bH$ generates $G/H$. Then $(bH)^2 = b^2H = H$. So $b^2\in H$. Since $b^2\in \bra a\ket$ and $\bra a\ket$ is a cyclic group, $b^2$ commutes with $a$.
+
+ If $b^2 = a$ or $b^2 = a^3$, then $b$ has order 8, a contradiction. So $b^2 = e$ or $b^2 = a^2$.
+
+ We also know that $H$ is normal, so $bab^{-1}\in H$. Let $bab^{-1} = a^\ell$. Since $a$ and $b^2$ commute, we know that $a = b^2 ab^{-2} = b(bab^{-1})b^{-1} = ba^\ell b^{-1} = (bab^{-1})^{\ell} = a^{\ell^2}$. So $\ell^2 \equiv 1\pmod 4$. So $\ell \equiv \pm 1 \pmod 4$.
+
+ \begin{itemize}
+ \item When $\ell\equiv 1\pmod 4$, $bab^{-1} = a$, i.e.\ $ba = ab$. So $G$ is abelian.
+ \begin{itemize}
+ \item If $b^2 = e$, then $G = \bra a, b\ket \cong \bra a\ket \times \bra b\ket \cong C_4\times C_2$.
+ \item If $b^2 = a^2$, then $(ba^{-1})^2 = e$. So $G = \bra a, ba^{-1}\ket \cong C_4\times C_2$.
+ \end{itemize}
+ \item If $\ell \equiv -1\pmod 4$, then $bab^{-1} = a^{-1}$.
+ \begin{itemize}
+ \item If $b^2 = e$, then $G = \bra a, b: a^4 = e = b^2, bab^{-1} = a^{-1}\ket$. So $G\cong D_8$ by definition.
+ \item If $b^2 = a^2$, then we have $G\cong Q_8$.\qedhere
+ \end{itemize}
+ \end{itemize}%\qedhere
+ \end{itemize}
+\end{proof}
+\section{Matrix groups}
+\subsection{General and special linear groups}
+Consider $M_{n\times n}(F)$, the set of $n\times n$ matrices over the field $F = \R$ or $\C$ (or $\F_p$). We know that matrix multiplication is associative (since matrices represent functions) but, in general, not commutative. To make this into a group, we take the identity matrix $I$ as the identity. To ensure every element has an inverse, we can only include invertible matrices.
+
+(We do not necessarily need to take $I$ as the identity of the group. We can, for example, take $e =
+\begin{pmatrix}
+ 0 & 0\\
+ 0 & 1
+\end{pmatrix}$ and obtain a group in which every matrix is of the form $\begin{pmatrix}
+ 0 & 0\\
+ 0 & a
+\end{pmatrix}$ for some non-zero $a$. This forms a group, albeit a boring one: it is simply isomorphic to $\R^*$.)
+\begin{defi}[General linear group $\GL_n(F)$]
+ \[
+ \GL_n(F) = \{A\in M_{n\times n}(F) : A\text{ is invertible}\}
+ \]
+ is the \emph{general linear group}.
+\end{defi}
+Alternatively, we can define $\GL_n(F)$ as matrices with non-zero determinants.
+
+\begin{prop}
+ $\GL_n(F)$ is a group.
+\end{prop}
+\begin{proof}
+ The identity is $I$, which is in $\GL_n(F)$ since it is its own inverse. The product of invertible matrices is invertible, so $\GL_n(F)$ is closed under multiplication. Inverses exist by definition, and matrix multiplication is associative.
+\end{proof}
+
+\begin{prop}
+ $\det: \GL_n(F) \to F\setminus\{0\}$ is a surjective group homomorphism.
+\end{prop}
+
+\begin{proof}
+ Since $\det (AB) = \det A\det B$, $\det$ is a homomorphism; it lands in $F\setminus\{0\}$ because an invertible matrix has non-zero determinant.
+
+ To show it is surjective: for any $x\in F\setminus\{0\}$, take the identity matrix and replace the entry $I_{11}$ with $x$; the resulting matrix is invertible with determinant $x$.
+\end{proof}
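Both halves of this proposition are easy to check on concrete matrices. A minimal sketch (our own $2\times 2$ helpers over the integers, which sit inside $\R$):

```python
# det is a homomorphism GL_2 -> F*, checked on 2x2 integer samples.
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def mul2(A, B):
    return tuple(tuple(sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2))
                 for r in range(2))

A = ((2, 1), (1, 1))      # det 1
B = ((3, 5), (1, 2))      # det 1
X = ((4, 0), (0, 1))      # det 4: replacing I_11 by x gives determinant x
for M in (A, B, X):
    for N in (A, B, X):
        assert det2(mul2(M, N)) == det2(M) * det2(N)
print(det2(A), det2(B), det2(X))
```

The matrix `X` illustrates the surjectivity argument: putting $x$ in the top-left corner of $I$ produces determinant $x$.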
+
+\begin{defi}[Special linear group $\SL_n(F)$]
+ The \emph{special linear group} $\SL_n(F)$ is the kernel of the determinant, i.e.
+ \[
+ \SL_n(F) = \{A\in \GL_n(F): \det A = 1\}.
+ \]
+\end{defi}
+
+So $\SL_n(F)\lhd \GL_n(F)$ as it is a kernel. Note that $Q_8\leq \SL_2(\C)$.
+\subsection{Actions of \texorpdfstring{$\GL_n(\C)$}{GLn(C)}}
+\begin{prop}
+ $\GL_n(\C)$ acts faithfully on $\C^n$ by left multiplication, with two orbits ($\{\mathbf{0}\}$ and everything else).
+\end{prop}
+
+\begin{proof}
+ First show that it is a group action:
+ \begin{enumerate}[label=\arabic*.]
+ \setcounter{enumi}{0}
+ \item If $A\in \GL_n(\C)$ and $\mathbf{v}\in \C^n$, then $A\mathbf{v}\in \C^n$. So it is closed.
+ \item $I\mathbf{v} = \mathbf{v}$ for all $\mathbf{v}\in \C^n$.
+ \item $A(B\mathbf{v}) = (AB)\mathbf{v}$.
+ \end{enumerate}
+
+ Now prove that it is faithful: a linear map is determined by what it does on a basis. Take the standard basis $\mathbf{e}_1 = (1, 0, \cdots, 0), \cdots, \mathbf{e}_n = (0, \cdots, 0, 1)$. Any matrix which maps each $\mathbf{e}_k$ to itself must be $I$, since the columns of a matrix are the images of the basis vectors.
+
+ To show that there are 2 orbits, note that $A\mathbf{0} = \mathbf{0}$ for all $A$, and since $A$ is invertible, $A\mathbf{v} = \mathbf{0}\Leftrightarrow \mathbf{v} = \mathbf{0}$. So $\{\mathbf{0}\}$ forms a singleton orbit. Then given any two non-zero vectors $\mathbf{v}, \mathbf{w}\in \C^n\setminus\{\mathbf{0}\}$, there is a matrix $A\in \GL_n(\C)$ such that $A\mathbf{v} = \mathbf{w}$ (cf.\ Vectors and Matrices).
+\end{proof}
+
+Similarly, $\GL_n(\R)$ acts on $\R^n$.
+
+\begin{prop}
+ $\GL_n(\C)$ acts on $M_{n\times n}(\C)$ by conjugation. (Proof is trivial)
+\end{prop}
+This action can be thought of as a ``change of basis'' action: two matrices are conjugate if they represent the same linear map with respect to different bases, and the conjugating matrix $P$ in $PAP^{-1}$ is the change-of-basis matrix.
+
+From Vectors and Matrices, we know that there are three different types of orbits for $\GL_2(\C)$: $A$ is conjugate to a matrix of one of these forms:
+\begin{enumerate}
+ \item $
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \mu
+ \end{pmatrix}
+ $, with $\lambda \not= \mu$, i.e.\ two distinct eigenvalues
+ \item $
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda\\
+ \end{pmatrix}$, i.e.\ a repeated eigenvalue with 2-dimensional eigenspace
+ \item $
+ \begin{pmatrix}
+ \lambda & 1\\
+ 0 & \lambda
+ \end{pmatrix}$, i.e.\ a repeated eigenvalue with a 1-dimensional eigenspace
+\end{enumerate}
+Note that we said there are three \emph{types} of orbits, not three orbits. There are infinitely many orbits, e.g.\ one for each of $\lambda I$.
+\subsection{Orthogonal groups}
+Recall that the transpose $A^T$ is defined by $A^{T}_{ij} = A_{ji}$, i.e.\ we reflect the matrix in its main diagonal. It has the following properties:
+\begin{enumerate}
+ \item $(AB)^T = B^TA^T$
+ \item $(A^{-1})^T = (A^T)^{-1}$
+ \item $A^{T}A = I\Leftrightarrow AA^{T} = I\Leftrightarrow A^{-1} = A^{T}$. In this case $A$ is \emph{orthogonal}
+ \item $\det A^{T} = \det A$
+\end{enumerate}
+We now work over $\R$, since orthogonality is not the right notion for complex matrices (the complex analogue is the unitary condition, treated below).
+
+Note that a matrix is orthogonal if and only if its rows (equivalently, its columns) form an orthonormal basis of $\R^n$: with the summation convention, $AA^T = I\Leftrightarrow a_{ik}a_{jk} = \delta_{ij} \Leftrightarrow \mathbf{a}_i\cdot \mathbf{a}_j = \delta_{ij}$, where $\mathbf{a}_i$ is the $i$th row of $A$; the statement for columns follows from $A^TA = I$.
+
+The importance of orthogonal matrices is that they are the isometries of $\R^n$.
+\begin{lemma}[Orthogonal matrices are isometries] For any orthogonal $A$ and $x, y\in \R^n$, we have
+ \begin{enumerate}
+ \item $(A\mathbf{x})\cdot (A\mathbf{y}) = \mathbf{x\cdot y}$
+ \item $|A\mathbf{x}| = |\mathbf{x}|$
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}
+ Treat the dot product as a matrix multiplication. So
+ \[
+ (A\mathbf{x})^T(A\mathbf{y}) = \mathbf{x}^{T}A^TA\mathbf{y} = \mathbf{x}^TI\mathbf{y} = \mathbf{x}^T\mathbf{y}.
+ \]
+ Then we have $|A\mathbf{x}|^2 = (A\mathbf{x})\cdot (A\mathbf{x}) = \mathbf{x}\cdot \mathbf{x} = |\mathbf{x}|^2$. Since both are positive, we know that $|A\mathbf{x}| = |\mathbf{x}|$.
+\end{proof}
+It is important to note that orthogonal matrices are isometries, but not all isometries are orthogonal. For example, translations are isometries but are not represented by orthogonal matrices, since they are not linear maps and cannot be represented by matrices at all! However, it is true that all linear isometries can be represented by orthogonal matrices.
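The two isometry properties are easy to confirm numerically for a sample orthogonal matrix; the sketch below (our own plain-Python vectors, no linear algebra library) uses a rotation by an arbitrary angle:

```python
import math

# A sample orthogonal matrix (rotation by theta) and the isometry identities.
theta = 0.73
A = ((math.cos(theta), -math.sin(theta)),
     (math.sin(theta),  math.cos(theta)))

def apply(M, v):
    return tuple(sum(M[r][c] * v[c] for c in range(2)) for r in range(2))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = (1.0, 2.0), (-3.0, 0.5)
assert abs(dot(apply(A, x), apply(A, y)) - dot(x, y)) < 1e-12   # (Ax).(Ay) = x.y
assert abs(math.hypot(*apply(A, x)) - math.hypot(*x)) < 1e-12   # |Ax| = |x|
print("rotations preserve dot products and lengths")
```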
+
+\begin{defi}[Orthogonal group $O(n)$]
+ The \emph{orthogonal group} is
+ \[
+ \Or(n) = \Or_n = \Or_n(\R) = \{A\in \GL_n(\R): A^TA = I\},
+ \]
+ i.e.\ the group of orthogonal matrices.
+\end{defi}
+We will later show that this is the set of matrices that preserve distances in $\R^n$.
+
+\begin{lemma}
+ The orthogonal group is a group.
+\end{lemma}
+
+\begin{proof}
+ We have to check that it is a subgroup of $\GL_n(\R)$: it is non-empty, since $I\in \Or(n)$. If $A, B\in \Or(n)$, then $(AB^{-1})(AB^{-1})^T = AB^{-1}(B^{-1})^TA^{T} = AB^{-1}BA^{-1} = I$ (using $(B^{-1})^T = (B^T)^{-1} = B$ and $A^T = A^{-1}$), so $AB^{-1}\in \Or(n)$ and this is indeed a subgroup.
+\end{proof}
+
+
+\begin{prop}
+ $\det: \Or(n) \to \{\pm 1\}$ is a surjective group homomorphism.
+\end{prop}
+
+\begin{proof}
+ For $A\in \Or(n)$, we know that $A^TA = I$. So $\det (A^TA) = (\det A)^2 = 1$, giving $\det A = \pm 1$. Since $\det(AB) = \det A\det B$, it is a homomorphism. We have
+ \[
+ \det I = 1,\;\;\;\det
+ \begin{pmatrix}
+ -1 & 0 &\cdots & 0\\
+ 0 & 1 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & 1
+ \end{pmatrix} = -1,
+ \]
+ so it is surjective.
+\end{proof}
+
+\begin{defi}[Special orthogonal group $\SO(n)$]
+ The \emph{special orthogonal group} is the kernel of $\det: \Or(n) \to \{\pm 1\}$.
+ \[
+ \SO(n) =\SO_n = \SO_n(\R) = \{A\in \Or(n): \det A = 1\}.
+ \]
+\end{defi}
+By the isomorphism theorem, $\Or(n)/\SO(n) \cong C_2$.
+
+What's wrong with matrices of determinant $-1$? Why might we want to exclude them? An important example of an orthogonal matrix with determinant $-1$ is a \emph{reflection}. These transformations reverse orientation, which is often unwanted.
+
+\begin{lemma}
+ $\Or(n) = \SO(n) \cup
+ \begin{pmatrix}
+ -1 & 0 &\cdots & 0\\
+ 0 & 1 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots &1
+ \end{pmatrix}\SO(n)$
+\end{lemma}
+
+\begin{proof}
+ Cosets partition the group, and $\SO(n)$ has index 2 in $\Or(n)$, so its two cosets are $\SO(n)$ itself and the coset of any fixed matrix of determinant $-1$.
+\end{proof}
+\subsection{Rotations and reflections in \texorpdfstring{$\R^2$}{R2} and \texorpdfstring{$\R^3$}{R3}}
+\begin{lemma}
+ $\SO(2)$ consists of all rotations of $\R^2$ around 0.
+\end{lemma}
+
+\begin{proof}
+ Let $A\in \SO(2)$. So $A^TA = I$ and $\det A = 1$. Suppose $A =
+ \begin{pmatrix}
+ a & b\\c & d
+ \end{pmatrix}$. Then $A^{-1} =
+ \begin{pmatrix}
+ d & -b\\-c & a
+ \end{pmatrix}.$ So $A^T = A^{-1}$ gives $a = d$ and $c = -b$, and $\det A = 1$ gives $ad - bc = 1$. Combining these, we obtain $a^2 + c^2 = 1$, so we can write $a = d = \cos\theta$ and $c = -b = \sin\theta$ for some $\theta$. So
+ \[
+ A =
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta\\
+ \sin\theta & \cos\theta
+ \end{pmatrix}.
+ \]
+ Note that $A$ maps $(1, 0)$ to $(\cos\theta, \sin \theta)$ and $(0, 1)$ to $(-\sin\theta, \cos\theta)$, both rotated by $\theta$ counterclockwise. So $A$ represents a rotation by $\theta$.
+\end{proof}
+
+\begin{cor}
+ Any matrix in $\Or(2)$ is either a rotation around $0$ or a reflection in a line through $0$.
+\end{cor}
+
+\begin{proof}
+ If $A\in \SO(2)$, we've shown that it is a rotation. Otherwise,
+ \[
+ A =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix}
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta\\
+ \sin\theta & \cos\theta
+ \end{pmatrix} =
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta \\
+ -\sin\theta & -\cos\theta
+ \end{pmatrix}
+ \]
+ since $\Or(2) = \SO(2)\cup
+ \begin{pmatrix}
+ 1&0\\0&-1
+ \end{pmatrix}\SO(2)$. This has eigenvalues $1, -1$. So it is a reflection in the line of the eigenspace $E_1$. The line goes through $\mathbf{0}$ since the eigenspace is a subspace which must include $\mathbf{0}$.
+\end{proof}
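The eigenvalue claim at the end of this proof can be checked numerically: every matrix of the displayed form has trace $0$ and determinant $-1$, so its characteristic polynomial is $t^2 - 1$ with roots $\pm 1$. A short sketch (plain Python, no assumptions beyond the formula above):

```python
import math

# Any element of O(2) \ SO(2) has the displayed form; trace 0 and det -1
# force eigenvalues +1 and -1, so it is a reflection in the +1-eigenline.
for theta in (0.0, 0.4, 1.3, 2.9):
    c, s = math.cos(theta), math.sin(theta)
    M = ((c, -s), (-s, -c))
    trace = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    assert abs(trace) < 1e-12 and abs(det + 1) < 1e-12
    # eigenvalues are roots of t^2 - trace*t + det = t^2 - 1, i.e. +1 and -1
print("O(2) minus SO(2): eigenvalues are +1 and -1")
```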
+
+\begin{lemma}
+ Every matrix in $\SO(3)$ is a rotation around some axis.
+\end{lemma}
+
+\begin{proof}
+ Let $A\in \SO(3)$. We know that $\det A = 1$ and $A$ is an isometry, so every eigenvalue $\lambda$ has $|\lambda| = 1$, and the eigenvalues multiply to $\det A = 1$. Since $A$ is real, complex eigenvalues come in conjugate pairs. If there is a pair of complex eigenvalues $\lambda$ and $\bar\lambda$, then $\lambda\bar\lambda = |\lambda|^2 = 1$, so the third eigenvalue must be real and equal to $+1$.
+
+ If all eigenvalues are real, then each is $1$ or $-1$, and they multiply to $1$. The possibilities are $1, 1, 1$ and $-1, -1, 1$, both of which contain an eigenvalue $1$.
+
+ So pick an eigenvector for our eigenvalue $1$ as the third basis vector. Then in some orthonormal basis,
+ \[
+ A = \begin{pmatrix}
+ a & b & 0\\
+ c & d & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \]
+ Here the third column is the image of the third basis vector, and orthogonality then forces the third row to be $(0, 0, 1)$. Now let
+ \[A' = \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \GL_2(\R)
+ \]
+ with $\det A' = \det A = 1$. Moreover $A'$ is orthogonal, so $A'\in \SO(2)$. Therefore $A'$ is a rotation and
+ \[
+ A =
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta & 0\\
+ \sin\theta & \cos\theta & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \]
+ in some basis, and this is exactly a rotation about the axis spanned by the third basis vector.
+\end{proof}
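The key fact in this proof, that every element of $\SO(3)$ has eigenvalue $1$ (a fixed axis), can be observed numerically: $\det(A - I) = 0$. A sketch with hand-rolled $3\times 3$ arithmetic (no linear algebra library), applied to a composite rotation with no obvious axis:

```python
import math

def mul3(A, B):
    return tuple(tuple(sum(A[r][t] * B[t][c] for t in range(3)) for c in range(3))
                 for r in range(3))

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return ((c, -s, 0), (s, c, 0), (0, 0, 1))

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return ((1, 0, 0), (0, c, -s), (0, s, c))

A = mul3(rot_z(0.7), rot_x(1.1))   # an SO(3) element with no obvious axis
assert abs(det3(A) - 1) < 1e-12
# 1 is an eigenvalue, i.e. A fixes some axis: det(A - I) = 0.
AmI = tuple(tuple(A[r][c] - (1 if r == c else 0) for c in range(3)) for r in range(3))
assert abs(det3(AmI)) < 1e-12
print("the sample SO(3) matrix fixes an axis")
```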
+
+\begin{lemma}
+ Every matrix in $\Or(3)$ is the product of at most three reflections in planes through 0.
+\end{lemma}
+Note that a rotation is a product of two reflections. This lemma effectively states that every matrix in $\Or(3)$ is a reflection, a rotation or a product of a reflection and a rotation.
+
+\begin{proof}
+ Recall $\Or(3) = \SO(3) \cup
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & -1
+ \end{pmatrix}\SO(3)$.
+ So if $A\in \SO(3)$, we know that $A =
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta & 0\\
+ \sin\theta & \cos\theta & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}$ in some basis, which is a composite of two reflections:
+ \[
+ A =
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & -1 & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta & 0\\
+ -\sin\theta & -\cos\theta & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}.
+ \]
+ If instead $A\in \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & -1
+ \end{pmatrix}\SO(3)$, then it is automatically a product of three reflections.
+\end{proof}
+In the last line we've shown that everything in $\Or(3)\setminus \SO(3)$ can be written as a product of three reflections, but some of them may in fact need only one. However, some matrices do genuinely need 3 reflections, e.g.
+$\begin{pmatrix}
+ -1 & 0 & 0\\
+ 0 & -1 & 0\\
+ 0 & 0 & -1
+\end{pmatrix}$.
+
+\subsection{Unitary groups}
+The concept of orthogonal matrices only makes sense for real matrices. For complex matrices, we instead need \emph{unitary matrices}. To define them, we replace the transpose with the \emph{Hermitian conjugate}, defined by $A^\dagger = (A^*)^T$, i.e.\ $(A^\dagger)_{ij} = A_{ji}^*$, where the asterisk denotes the complex conjugate. We still have
+\begin{enumerate}
+ \item $(AB)^\dagger = B^\dagger A^\dagger$
+ \item $(A^{-1})^\dagger = (A^\dagger)^{-1}$
+ \item $A^\dagger A = I \Leftrightarrow AA^\dagger = I \Leftrightarrow A^\dagger = A^{-1}$. We say $A$ is a \emph{unitary matrix}
+ \item $\det A^{\dagger} = (\det A)^*$
+\end{enumerate}
+
+\begin{defi}[Unitary group $U(n)$]
+ The \emph{unitary group} is
+ \[
+ \U(n) = \U_n = \{A\in \GL_n(\C): A^\dagger A = I\}.
+ \]
+\end{defi}
+
+\begin{lemma}
+ $\det: \U(n)\to S^1$, where $S^1$ is the unit circle in the complex plane, is a surjective group homomorphism.
+\end{lemma}
+
+\begin{proof}
+ We know that $1 = \det I = \det (A^\dagger A) = |\det A|^2$. So $|\det A| = 1$. Since $\det (AB) = \det A\det B$, it is a group homomorphism.
+
+ Now given $\lambda\in S^1$, we have
+ $\begin{pmatrix}
+ \lambda & 0 & \cdots & 0\\
+ 0 & 1 & \cdots & 0\\
+ \vdots & \vdots & \ddots & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix}\in \U(n)$. So it is surjective.
+\end{proof}
+
+\begin{defi}[Special unitary group $SU(n)$]
+ The \emph{special unitary group} $\SU(n) = \SU_n$ is the kernel of $\det: \U(n)\to S^1$.
+\end{defi}
+
+Similarly, unitary matrices preserve the complex dot product: $(A\mathbf{x})\cdot (A\mathbf{y}) = \mathbf{x}\cdot \mathbf{y}$.
+
+\section{More on regular polyhedra}
+In this section, we will look at the symmetry groups of the cube and the tetrahedron.
+\subsection{Symmetries of the cube}
+\subsubsection*{Rotations}
+Recall that there are $|G^+| = 24$ rotations of the cube, by the orbit-stabilizer theorem.
+\begin{prop}
+ $G^+ \cong S_4$, where $G^+$ is the group of all rotations of the cube.
+\end{prop}
+
+\begin{proof}
+ Consider $G^+$ acting on the 4 diagonals of the cube. This gives a group homomorphism $\varphi: G^+ \to S_4$. We have $(1\; 2\; 3\; 4)\in\im \varphi$ by rotating about the axis through the centres of the top and bottom faces. We also have $(1\; 2)\in \im \varphi$ by rotating about the axis through the mid-points of the edges connecting diagonals $1$ and $2$. Since $(1\; 2)$ and $(1\; 2\; 3\; 4)$ generate $S_4$ (Sheet 2 Q. 5d), $\im\varphi = S_4$, i.e.\ $\varphi$ is surjective. Since $|S_4| = |G^+| = 24$, $\varphi$ must be an isomorphism.
+\end{proof}
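The generation fact used in this proof can be confirmed by a closure computation; the sketch below (our own 0-indexed convention for permutations as tuples of images) builds the subgroup generated by $(1\;2)$ and $(1\;2\;3\;4)$:

```python
from itertools import permutations

# Closure check: (1 2) and (1 2 3 4) generate all of S_4.
def compose(f, g):
    return tuple(f[g[i]] for i in range(4))

# 0-indexed: (1 2) swaps 0 and 1; (1 2 3 4) sends 0 -> 1 -> 2 -> 3 -> 0.
gens = [(1, 0, 2, 3), (1, 2, 3, 0)]
group = {(0, 1, 2, 3)}
frontier = set(gens)
while frontier:
    group |= frontier
    frontier = {compose(a, b) for a in group for b in group} - group
assert group == set(permutations(range(4)))
print(len(group))
```

The loop stabilizes at 24 elements, the whole of $S_4$.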
+
+\subsubsection*{All symmetries}
+Consider the reflection $\tau$ in the mid-point of the cube, sending every point to its opposite. We can view this as $-I$ acting on $\R^3$, so it commutes with all other symmetries of the cube.
+\begin{prop}
+ $G \cong S_4\times C_2$, where $G$ is the group of all symmetries of the cube.
+\end{prop}
+
+\begin{proof}
+ Let $\tau$ be the ``reflection in the mid-point'' as above, which commutes with everything. (Actually it is enough to check that it commutes with the rotations.)
+
+ We have to show that $G = G^+\bra \tau\ket$. This can be deduced using sizes: since $G^+$ and $\bra\tau\ket$ intersect at $e$ only, (i) and (ii) of the Direct Product Theorem gives an injective group homomorphism $G^+\times \bra\tau\ket \to G$. Since both sides have the same size, the homomorphism must be surjective as well. So $G\cong G^+\times \bra \tau\ket \cong S_4\times C_2$.
+\end{proof}
+
+In fact, we have also proved that the group of symmetries of an octahedron is $S_4\times C_2$, since the octahedron is the dual of the cube (if you join the centers of the faces of a cube, you get an octahedron).
+\begin{center}
+ \begin{tikzpicture}[z = -5.5, scale = 1.5]
+ \coordinate (O1) at (0, 0, -1);
+ \coordinate (O2) at (-1, 0, 0);
+ \coordinate (O3) at (0, 0, 1);
+ \coordinate (O4) at (1, 0, 0);
+ \coordinate (O5) at (0, 1, 0);
+ \coordinate (O6) at (0, -1, 0);
+
+ \draw [draw = blue, dashed] (O1) -- (O2) -- (O5) -- cycle;
+ \draw [draw = blue, dashed] (O4) -- (O1) -- (O5) -- cycle;
+ \draw [draw = blue, dashed] (O1) -- (O2) -- (O6) -- cycle;
+ \draw [draw = blue, dashed] (O4) -- (O1) -- (O6) -- cycle;
+ \draw [draw = blue] (O2) -- (O3) -- (O5) -- cycle;
+ \draw [draw = blue] (O3) -- (O4) -- (O5) -- cycle;
+ \draw [draw = blue] (O2) -- (O3) -- (O6) -- cycle;
+ \draw [draw = blue] (O3) -- (O4) -- (O6) -- cycle;
+
+ \coordinate (C1) at (-1, -1, -1);
+ \coordinate (C2) at (-1, -1, 1);
+ \coordinate (C3) at (-1, 1, 1);
+ \coordinate (C4) at (-1, 1, -1);
+ \coordinate (C5) at (1, -1, -1);
+ \coordinate (C6) at (1, -1, 1);
+ \coordinate (C7) at (1, 1, 1);
+ \coordinate (C8) at (1, 1, -1);
+
+ \draw [draw = red, dashed] (C1) -- (C2);
+ \draw [draw = red] (C2) -- (C3);
+ \draw [draw = red] (C3) -- (C4);
+ \draw [draw = red, dashed] (C4) -- (C1);
+ \draw [draw = red] (C5) -- (C6) -- (C7) -- (C8) -- cycle;
+ \draw [draw = red, dashed] (C1) -- (C5);
+ \draw [draw = red] (C2) -- (C6);
+ \draw [draw = red] (C3) -- (C7);
+ \draw [draw = red] (C4) -- (C8);
+ \end{tikzpicture}
+\end{center}
+
+\subsection{Symmetries of the tetrahedron}
+\subsubsection*{Rotations}
Let $1, 2, 3, 4$ be the vertices (in any order). $G^+$ is just the rotations. Let it act on the vertices. Then $\orb(1) = \{1, 2, 3, 4\}$ and $\stab(1) = \{$ rotations about the axis through 1 and the center of the opposite face $\} = \{e, \frac{2\pi}{3}, \frac{4\pi}{3}\}$.
+
+So $|G^+| = 4\cdot 3 = 12$ by the orbit-stabilizer theorem.
+
+The action gives a group homomorphism $\varphi: G^+ \to S_4$. Clearly $\ker \varphi = \{e\}$. So $G^+ \leq S_4$ and $G^+$ has size 12. We ``guess'' it is $A_4$ (actually it \emph{must} be $A_4$ since that is the only subgroup of $S_4$ of order 12, but it's nice to see why that's the case).
+
+If we rotate in an axis through 1, we get $(2\; 3\; 4), (2\; 4\; 3)$. Similarly, rotating through other axes through vertices gives all 3-cycles.
+
+If we rotate through an axis that passes through two opposite edges, e.g.\ through 1-2 edge and 3-4 edge, then we have $(1\; 2)(3\; 4)$ and similarly we obtain all double transpositions. So $G^+ \cong A_4$. This shows that there is no \emph{rotation} that fixes two vertices and swaps the other two.
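The cycle-type bookkeeping above can be checked by brute force. The following sketch (illustrative only; it uses permutations of $\{0, 1, 2, 3\}$ rather than of the vertex labels) counts the even permutations of four symbols and groups them by cycle type, recovering the identity, the eight 3-cycles and the three double transpositions:

```python
from itertools import permutations

def cycle_type(p):
    """Return the sorted cycle lengths of a permutation given as a tuple."""
    seen, lengths = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        n, cur = 0, start
        while cur not in seen:
            seen.add(cur)
            cur = p[cur]
            n += 1
        lengths.append(n)
    return tuple(sorted(lengths))

def sign(p):
    # A permutation is even iff n minus the number of cycles is even.
    return (-1) ** sum(l - 1 for l in cycle_type(p))

evens = [p for p in permutations(range(4)) if sign(p) == 1]
types = {}
for p in evens:
    types[cycle_type(p)] = types.get(cycle_type(p), 0) + 1

print(len(evens))   # 12
print(types)        # identity, 3-cycles, double transpositions
```

This matches $|A_4| = 12$, and the cycle types $(1,1,1,1)$, $(1,3)$, $(2,2)$ occur $1$, $8$ and $3$ times respectively.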
+
+\subsubsection*{All symmetries}
Now consider the plane that goes through 1, 2 and the mid-point of 3 and 4. Reflection through this plane swaps 3 and 4, but doesn't change $1, 2$. So now $\stab(1) = \bra (2\; 3\; 4), (3\; 4)\ket \cong D_6$. (Alternatively, to fix 1, we can move 2, 3, 4 around in any way, which gives the symmetries of the triangular base.)
+
So $|G| = 4\cdot 6 = 24$ and $G\cong S_4$. (This makes sense, since we can permute the vertices in any way and still have a tetrahedron, so the symmetry group consists of all permutations of the vertices.)
+
+\section{M\texorpdfstring{\"o}{o}bius group}
+\subsection{M\texorpdfstring{\"o}{o}bius maps}
+We want to study maps $f: \C \to \C$ in the form $f(z) = \frac{az + b}{cz + d}$ with $a, b, c, d\in \C$ and $ad - bc \not= 0$.
+
+We impose $ad - bc\not= 0$ or else the map will be constant: for any $z, w\in \C$, $f(z) - f(w) = \frac{(az + b)(cw + d) - (aw + b)(cz + d)}{(cw + d)(cz + d)} = \frac{(ad - bc)(z - w)}{(cw + d)(cz + d)}$. If $ad - bc = 0$, then $f$ is constant and boring (more importantly, it will not be invertible).
+
If $c\not=0$, then $f(-\frac{d}{c})$ involves division by 0. So we add $\infty$ to $\C$ to form the extended complex plane (Riemann sphere) $\C\cup \{\infty\}= \C_\infty$ (cf.\ Vectors and Matrices). Then we define $f(-\frac{d}{c}) = \infty$. We call $\C_\infty$ a one-point compactification of $\C$ (because it adds one point to $\C$ to make it compact, cf.\ Metric and Topological Spaces).
+
+\begin{defi}[M\"obius map]
+ A \emph{M\"obius map} is a map from $\C_\infty \to \C_\infty$ of the form
+ \[
+ f(z) = \frac{az + b}{cz + d},
+ \]
+ where $a, b, c, d\in \C$ and $ad - bc\not= 0$, with $f(-\frac{d}{c}) = \infty$ and $f(\infty) = \frac{a}{c}$ when $c\not= 0$. (if $c = 0$, then $f(\infty)=\infty$)
+\end{defi}
+
+\begin{lemma}
+ The M\"obius maps are bijections $\C_\infty \to \C_\infty$.
+\end{lemma}
+
+\begin{proof}
+ The inverse of $f(z) = \frac{az + b}{cz+ d}$ is $g(z) = \frac{dz - b}{-cz + a}$, which we can check by composition both ways.
+\end{proof}
+
+\begin{prop}
 The M\"obius maps form a group $M$ under function composition, called the \emph{M\"obius group}.
+\end{prop}
+\begin{proof}
+ The group axioms are shown as follows:
+ \begin{enumerate}[label=\arabic{*}.]
+ \setcounter{enumi}{-1}
+ \item If $f_1(z) = \frac{a_1z + b_1}{c_1z + d_1}$ and $f_2(z) = \frac{a_2z + b_2}{c_2 z + d_2}$, then $\displaystyle f_2\circ f_1 (z) = \frac{a_2\left(\frac{a_1z + b_1}{c_1z + d_1}\right) + b_2}{c_2\left(\frac{a_1z + b_1}{c_1z + d_1}\right) + d_2} = \frac{(a_1a_2 + b_2c_1)z + (a_2b_1 + b_2d_1)}{(c_2a_1 + d_2c_1)z + (c_2b_1 + d_1d_2)}$. Now we have to check that $ad - bc \not = 0$: we have $(a_1a_2 + b_2c_1)(c_2b_1 + d_1d_2) - (a_2b_1 + b_2d_1)(c_2a_1 + d_2c_1) = (a_1d_1 - b_1c_1)(a_2d_2 - b_2c_2)\not =0 $.
+
+ (This works for $z\not= \infty, -\frac{d_1}{c_1}$. We have to manually check the special cases, which is simply yet more tedious algebra)
 \item The identity function is $1(z) = \frac{1z + 0}{0z + 1}$, which satisfies $ad - bc \not= 0$.
 \item We have shown above that $f^{-1}(z) = \frac{dz - b}{-cz + a}$ with $ad - bc\not= 0$, which is also a M\"obius map.
 \item Composition of functions is always associative.
+ \end{enumerate}
+\end{proof}
+$M$ is not abelian. e.g.\ $f_1(z) = 2z$ and $f_2(z) = z + 1$ are not commutative: $f_1\circ f_2(z) = 2z+2$ and $f_2\circ f_1(z) = 2z + 1$.
+
+Note that the point at ``infinity'' is not special. $\infty$ is no different to any other point of the Riemann sphere. However, from the way we write down the M\"obius map, we have to check infinity specially. In this particular case, we can get quite far with conventions such as $\frac{1}{\infty} = 0$, $\frac{1}{0} = \infty$ and $\frac{a\cdot \infty}{c\cdot \infty} = \frac{a}{c}$.
+
Clearly $\frac{az + b}{cz + d} = \frac{\lambda az + \lambda b}{\lambda cz + \lambda d}$ for any $\lambda \not= 0$. So a M\"obius map does not have a unique representation in terms of $a, b, c, d$. But each choice of $a, b, c, d$ does determine a unique M\"obius map.
+
+\begin{prop}
+ The map $\theta: \GL_2(\C)\to M$ sending $
+ \displaystyle \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \mapsto \frac{az + b}{cz + d}$ is a surjective group homomorphism.
+\end{prop}
+
+\begin{proof}
 Firstly, since the determinant $ad - bc$ of any matrix in $\GL_2(\C)$ is non-zero, $\theta$ does indeed map into $M$. Conversely, the coefficients of any M\"obius map form a matrix with non-zero determinant, so $\theta$ is surjective.
+
+ We have previously calculated that
+ \[
+ \theta(A_2)\circ \theta(A_1) = \frac{(a_1a_2 + b_2c_1)z + (a_2b_1 + b_2d_1)}{(c_2a_1 + d_2c_1)z + (c_2b_1 + d_1d_2)} = \theta(A_2A_1)
+ \]
+ So it is a homomorphism.
+\end{proof}
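As a quick numerical sanity check (not part of the proof), one can verify for sample matrices that applying $\theta$ to a product of matrices agrees with composing the corresponding maps. The matrices below are arbitrary invertible choices:

```python
# Represent a 2x2 complex matrix as a tuple of rows and build its Mobius map.
def mobius(A):
    (a, b), (c, d) = A
    return lambda z: (a * z + b) / (c * z + d)

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

A1 = ((1 + 2j, 3), (1, 1j))   # det = -5 + 1j, non-zero
A2 = ((2, -1j), (1j, 4))      # det = 7, non-zero

z = 0.3 + 0.7j
lhs = mobius(A2)(mobius(A1)(z))     # theta(A2) composed with theta(A1)
rhs = mobius(matmul(A2, A1))(z)     # theta(A2 A1)
print(abs(lhs - rhs) < 1e-9)
```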
+
+The kernel of $\theta$ is
+\[
+ \ker(\theta) = \left\{A\in \GL_2(\C): (\forall z)\,z = \frac{az + b}{cz + d}\right\}
+\]
+We can try different values of $z$: $z = \infty \Rightarrow c = 0$; $z = 0 \Rightarrow b = 0$; $z = 1\Rightarrow d = a$. So
+\[
+ \ker\theta = Z = \{\lambda I: \lambda \in \C, \lambda\not= 0\},
+\]
+where $I$ is the identity matrix and $Z$ is the centre of $\GL_2(\C)$.
+
+By the isomorphism theorem, we have
+\[
+ M \cong \GL_2(\C)/Z
+\]
+
+\begin{defi}[Projective general linear group $\mathrm{PGL}_2(\C)$]
+ (Non-examinable) The projective general linear group is
+ \[
+ \mathrm{PGL}_2(\C) = \GL_2(\C)/Z.
+ \]
+\end{defi}
+Since $f_A = f_B$ iff $B = \lambda A$ for some $\lambda\not= 0$ (where $A, B$ are the corresponding matrices of the maps), if we restrict $\theta$ to $\SL_2(\C)$, we have $\left.\theta\right|_{\SL_2(\C)}: \SL_2(\C)\to M$ is also surjective. The kernel is now just $\{\pm I\}$. So
+\[
+ M \cong \SL_2(\C)/\{\pm I\} = \mathrm{PSL_2}(\C)
+\]
+Clearly $\mathrm{PSL}_2(\C)\cong \mathrm{PGL}_2(\C)$ since both are isomorphic to the M\"obius group.
+
+\begin{prop}
+ Every M\"obius map is a composite of maps of the following form:
+ \begin{enumerate}
+ \item Dilation/rotation: $f(z) = az$, $a\not= 0$
+ \item Translation: $f(z) = z + b$
+ \item Inversion: $f(z) = \frac{1}{z}$
+ \end{enumerate}
+\end{prop}
+\begin{proof}
 Let $g(z) = \frac{az + b}{cz + d}\in M$.
+
+ If $c = 0$, i.e.\ $g(\infty) = \infty$, then $g(z) = \frac{a}{d}z + \frac{b}{d}$, i.e.
+ \[
+ z\mapsto \frac{a}{d} z\mapsto \frac{a}{d}z + \frac{b}{d}.
+ \]
 If $c\not= 0$, let $g(\infty) = z_0$ and let $h(z) = \frac{1}{z - z_0}$. Then $hg(\infty) = \infty$, so $hg$ is of the above form. Also, $h^{-1}(w) = \frac{1}{w} + z_0$ is a map of type (iii) followed by one of type (ii). So $g = h^{-1} (hg)$ is a composition of maps of the three forms listed above.
+
+ Alternatively, with sufficient magic, we have
+ \[
 z\mapsto z + \frac{d}{c} \mapsto \frac{1}{z + \frac{d}{c}} \mapsto -\frac{ad - bc}{c^2(z + \frac{d}{c})}\mapsto \frac{a}{c} -\frac{ad - bc}{c^2(z + \frac{d}{c})} = \frac{az + b}{cz + d}.\qedhere
+ \]
+\end{proof}
Note that the two proofs above generally produce different compositions with the same end result. So the way we compose a M\"obius map from the ``elementary'' maps is not unique.
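The explicit chain of elementary maps can be checked numerically. In the sketch below the coefficients are arbitrary sample values with $c \neq 0$ and $ad - bc \neq 0$:

```python
# Sample coefficients (arbitrary, chosen so that c != 0 and ad - bc != 0).
a, b, c, d = 2, 1j, 1 + 1j, 3

def f(z):
    return (a * z + b) / (c * z + d)

def composed(z):
    w = z + d / c                      # translation
    w = 1 / w                          # inversion
    w = -(a * d - b * c) / c**2 * w    # dilation/rotation
    return w + a / c                   # translation

z = 1.5 - 0.25j
print(abs(f(z) - composed(z)) < 1e-9)
```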
+
+\subsection{Fixed points of M\texorpdfstring{\"o}{o}bius maps}
+\begin{defi}[Fixed point]
+ A \emph{fixed point} of $f$ is a $z$ such that $f(z) = z$.
+\end{defi}
+
We know that any M\"obius map with $c = 0$ fixes $\infty$. We also know that $z\mapsto z + b$ for any $b\not= 0$ fixes $\infty$ only, whereas $z\mapsto az$ for $a\not= 0, 1$ fixes $0$ and $\infty$. It turns out that a M\"obius map cannot have more than two fixed points, unless it is the identity.
+
+\begin{prop}
+ Any M\"obius map with at least 3 fixed points must be the identity.
+\end{prop}
+
+\begin{proof}
 Consider $f(z) = \frac{az + b}{cz + d}$. Its finite fixed points are those $z$ which satisfy $\frac{az + b}{cz + d} = z \Leftrightarrow cz^2 + (d - a)z - b = 0$. A quadratic has at most two roots, unless $c = b = 0$ and $d = a$, in which case the equation just says $0 = 0$. (If $c = 0$, then $\infty$ is also a fixed point, but then the equation is linear, so there is at most one finite fixed point unless also $b = 0$ and $d = a$.)
+
+ However, if $c = b= 0$ and $d = a$, then $f$ is just the identity.
+\end{proof}
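The quadratic in the proof also gives a way to compute fixed points explicitly. The sketch below uses arbitrary sample coefficients and confirms $f(z) = z$ at each root of $cz^2 + (d - a)z - b = 0$:

```python
import cmath

# Sample coefficients (arbitrary, with ad - bc != 0 and c != 0).
a, b, c, d = 3, 2, 1, 1

disc = cmath.sqrt((d - a) ** 2 + 4 * b * c)
roots = [(-(d - a) + disc) / (2 * c), (-(d - a) - disc) / (2 * c)]

f = lambda z: (a * z + b) / (c * z + d)
print([abs(f(z) - z) < 1e-9 for z in roots])   # [True, True]
```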
+
+\begin{prop}
+ Any M\"obius map is conjugate to $f(z) = \nu z$ for some $\nu\not= 0$ or to $f(z) = z + 1$.
+\end{prop}
+
+\begin{proof}
+ We have the surjective group homomorphism $\theta: \GL_2(\C) \to M$. The conjugacy classes of $\GL_2(\C)$ are of types
+ \begin{align*}
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \mu
+ \end{pmatrix} &\mapsto g(z) = \frac{\lambda z + 0}{0z + \mu} = \frac{\lambda}{\mu}z\\
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda
+ \end{pmatrix} &\mapsto g(z) = \frac{\lambda z + 0}{0z + \lambda} = 1 z\\
+ \begin{pmatrix}
+ \lambda & 1\\
+ 0 & \lambda
+ \end{pmatrix} &\mapsto g(z) = \frac{\lambda z + 1}{\lambda} = z + \frac{1}{\lambda}
+ \end{align*}
+ But the last one is not in the form $z + 1$. We know that the last $g(z)$ can also be represented by $
+ \begin{pmatrix}
+ 1 & \frac{1}{\lambda}\\
+ 0 & 1
+ \end{pmatrix}$, which is conjugate to $
+ \begin{pmatrix}
+ 1 & 1\\
+ 0 & 1
 \end{pmatrix}$ (since that is its Jordan normal form). So $z + \frac{1}{\lambda}$ is also conjugate to $z + 1$.
+\end{proof}
+
Now we can easily see that (for $\nu \not= 0, 1$) the map $\nu z$ has $0$ and $\infty$ as its fixed points, while $z + 1$ has only $\infty$. Does this transfer to their conjugates?
+
+\begin{prop}
 Every non-identity M\"obius map has exactly 1 or 2 fixed points.
+\end{prop}
+
+\begin{proof}
 Let $f\in M$ with $f\not= \mathrm{id}$. By the previous proposition, there exists $h\in M$ such that $hfh^{-1}(z) = \nu z$ for some $\nu \not= 0, 1$, or $hfh^{-1}(z) = z + 1$. Now $f(w) = w \Leftrightarrow hf(w) = h(w) \Leftrightarrow hfh^{-1}(h(w)) = h(w)$. So $w$ is a fixed point of $f$ iff $h(w)$ is a fixed point of $hfh^{-1}$. Since $h$ is a bijection, $f$ and $hfh^{-1}$ have the same number of fixed points.
+
+ So $f$ has exactly $2$ fixed points if $f$ is conjugate to $\nu z$, and exactly 1 fixed point if $f$ is conjugate to $z + 1$.
+\end{proof}
+Intuitively, we can show that conjugation preserves fixed points because if we conjugate by $h$, we first move the Riemann sphere around by $h$, apply $f$ (that fixes the fixed points) then restore the Riemann sphere to its original orientation. So we have simply moved the fixed point around by $h$.
+
+\subsection{Permutation properties of M\texorpdfstring{\"o}{o}bius maps}
+We have seen that the M\"obius map with three fixed points is the identity. As a corollary, we obtain the following.
+
+\begin{prop}
 Let $f, g\in M$. If there exist distinct $z_1, z_2, z_3\in \C_{\infty}$ such that $f(z_i) = g(z_i)$, then $f = g$, i.e.\ every M\"obius map is uniquely determined by its values at three distinct points.
+\end{prop}
+
+\begin{proof}
+ As M\"obius maps are invertible, write $f(z_i) = g(z_i)$ as $g^{-1}f(z_i) = z_i$. So $g^{-1}f$ has three fixed points. So $g^{-1}f$ must be the identity. So $f = g$.
+\end{proof}
+
+\begin{defi}[Three-transitive action]
 An action of $G$ on $X$ is called \emph{three-transitive} if the induced action on $\{(x_1, x_2, x_3)\in X^3: x_i\text{ pairwise distinct}\}$, given by $g(x_1, x_2, x_3) = (g(x_1), g(x_2), g(x_3))$, is transitive.
+
+ This means that for any two triples $x_1, x_2, x_3$ and $y_1, y_2, y_3$ of distinct elements of $X$, there exists $g\in G$ such that $g(x_i) = y_i$.
+
 If this $g$ is always unique, then the action is called \emph{sharply three-transitive}.
+\end{defi}
This is a rather strange definition. The reason we bring it up here is that the M\"obius group satisfies this property.
+
+\begin{prop}
+ The M\"obius group $M$ acts sharply three-transitively on $\C_\infty$.
+\end{prop}
+
+\begin{proof}
+ We want to show that we can send any three points to any other three points. However, it is easier to show that we can send any three points to $0, 1, \infty$.
+
 Suppose we want to send $z_1\mapsto \infty, z_2\mapsto 0, z_3 \mapsto 1$. Then the following works:
+ \[
+ f(z) = \frac{(z - z_2)(z_3 - z_1)}{(z - z_1)(z_3 - z_2)}
+ \]
 If any $z_i$ is $\infty$, we simply delete the factors involving $z_i$, e.g.\ if $z_1 = \infty$, we have $f(z) = \frac{z - z_2}{z_3 - z_2}$.
+
 So given also distinct $w_1, w_2, w_3\in \C_\infty$, take $g\in M$ sending $w_1\mapsto \infty, w_2\mapsto 0, w_3\mapsto 1$; then $g^{-1}f(z_i) = w_i$.
+
+ The uniqueness of the map follows from the fact that a M\"obius map is uniquely determined by 3 points.
+\end{proof}
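The explicit formula in the proof is easy to test numerically. Here the $z_i$ are arbitrary distinct finite sample points; $z_1$ is a pole of $f$, so it is sent to $\infty$ and we do not evaluate there:

```python
# Arbitrary distinct finite sample points.
z1, z2, z3 = 2 + 1j, -1, 5j

# The map from the proof: z1 -> infinity (pole), z2 -> 0, z3 -> 1.
def f(z):
    return ((z - z2) * (z3 - z1)) / ((z - z1) * (z3 - z2))

print(abs(f(z2)) < 1e-12, abs(f(z3) - 1) < 1e-12)
```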
+
Three points not only determine a M\"obius map uniquely; they also uniquely determine a line or circle. Note that on the Riemann sphere, we can think of a line as a circle through infinity, and it would be technically correct to refer to both of them as ``circles''. However, we would rather be clearer and say ``line/circle''.
+
+We will see how M\"obius maps relate to lines and circles. We will first recap some knowledge about lines and circles in the complex plane.
+
+\begin{lemma}
+ The general equation of a circle or straight line in $\C$ is
+ \[
+ Az\bar z + \bar Bz + B\bar z + C = 0,
+ \]
+ where $A, C\in \R$ and $|B|^2 > AC$.
+\end{lemma}
+$A = 0$ gives a straight line. If $A \not= 0, B = 0$, we have a circle centered at the origin. If $C = 0$, the circle passes through 0.
+
+\begin{proof}
 This comes from noting that $|z - z_0| = r$ with $z_0\in\C$ and real $r > 0$ is a circle, while $|z - a| = |z - b|$ with $a\not= b$ is a line. The detailed proof can be found in Vectors and Matrices.
+\end{proof}
+
+\begin{prop}
+ M\"obius maps send circles/straight lines to circles/straight lines. Note that it can send circles to straight lines and vice versa.
+
+ Alternatively, M\"obius maps send circles on the Riemann sphere to circles on the Riemann sphere.
+\end{prop}
+
+\begin{proof}
 We can calculate this directly: using $w = \frac{az + b}{cz + d}\Leftrightarrow z = \frac{dw - b}{-cw + a}$ and substituting $z$ into the circle equation gives $A' w\bar w + \bar B' w + B'\bar w + C' = 0$ with $A', C'\in \R$.
+
+ Alternatively, we know that each M\"obius map is a composition of translation, dilation/rotation and inversion. We can check for each of the three types. Clearly dilation/rotation and translation maps a circle/line to a circle/line. So we simply do inversion: if $w = z^{-1}$
+ \begin{align*}
+ &\; Az\bar z + \bar Bz + B\bar z + C = 0\\
+ \Leftrightarrow &\; Cw\bar w + Bw + \bar B\bar w + A = 0\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ Consider $f(z) = \frac{z - i}{z + i}$. Where does the real line go? The real line is simply a circle through $0, 1, \infty$. $f$ maps this circle to the circle containing $f(\infty) = 1$, $f(0) = -1$ and $f(1) = -i$, which is the unit circle.
+
 Where does the upper half plane go? Since the M\"obius map is a continuous bijection, the upper half plane is mapped either to the inside of the circle or to the outside of the circle. We try the point $i$, which maps to $0$. So the upper half plane is mapped to the inside of the circle.
+\end{eg}
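A quick numerical check of this example (the real sample points below are chosen arbitrarily):

```python
# The map from the example: real points land on the unit circle,
# and the interior point i lands at 0.
f = lambda z: (z - 1j) / (z + 1j)

on_circle = all(abs(abs(f(x)) - 1) < 1e-12 for x in (-3.0, 0.0, 0.5, 7.0))
print(on_circle, f(1j))   # prints: True 0j
```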
+\subsection{Cross-ratios}
+ Finally, we'll look at an important concept known as \emph{cross-ratios}. Roughly speaking, this is a quantity that is preserved by M\"obius transforms.
+
+\begin{defi}[Cross-ratios]
+ Given four distinct points $z_1, z_2, z_3, z_4\in \C_\infty,$ their \emph{cross-ratio} is $[z_1, z_2, z_3, z_4] = g(z_4)$, with $g$ being the unique M\"obius map that maps $z_1\mapsto \infty, z_2\mapsto 0, z_3\mapsto 1$. So $[\infty, 0, 1, \lambda] = \lambda$ for any $\lambda\not= \infty, 0, 1$. We have
+ \[
+ [z_1, z_2, z_3, z_4] = \frac{z_4 - z_2}{z_4 - z_1} \cdot \frac{z_3 - z_1}{z_3 - z_2}
+ \]
+ (with special cases as above).
+\end{defi}
+We know that this exists and is uniquely defined because $M$ acts sharply three-transitively on $\C_\infty$.
+
+Note that different authors use different permutations of $1, 2, 3, 4$, but they all lead to the same result as long as you are consistent.
+
+\begin{lemma}
 If $z_1, z_2, z_3, z_4\in \C_\infty$ are all distinct, then
+ \[
+ [z_1, z_2, z_3, z_4] = [z_2, z_1, z_4, z_3] = [z_3, z_4, z_1, z_2] = [z_4, z_3, z_2, z_1]
+ \]
+ i.e.\ if we perform a double transposition on the entries, the cross-ratio is retained.
+\end{lemma}
+
+\begin{proof}
+ By inspection of the formula.
+\end{proof}
+
+\begin{prop}
+ If $f\in M$, then $[z_1, z_2, z_3, z_4] = [f(z_1), f(z_2), f(z_3), f(z_4)]$.
+\end{prop}
+
+\begin{proof}
+ Use our original definition of the cross ratio (instead of the formula). Let $g$ be the unique M\"obius map such that $[z_1, z_2, z_3, z_4] = g(z_4) = \lambda$, i.e.
+ \begin{align*}
+ z_1 &\xmapsto{g} \infty\\
+ z_2 &\mapsto 0\\
+ z_3 &\mapsto 1\\
+ z_4 &\mapsto \lambda
+ \end{align*}
+ We know that $gf^{-1}$ sends
+ \begin{align*}
+ f(z_1)\xmapsto{f^{-1}} z_1 &\xmapsto{g} \infty\\
+ f(z_2)\xmapsto{f^{-1}} z_2 &\xmapsto{g} 0\\
+ f(z_3)\xmapsto{f^{-1}} z_3 &\xmapsto{g} 1\\
+ f(z_4)\xmapsto{f^{-1}} z_4 &\xmapsto{g} \lambda
+ \end{align*}
+ So $[f(z_1), f(z_2), f(z_3), f(z_4)] = gf^{-1}f(z_4) = g(z_4) = \lambda$.
+\end{proof}
+
+In fact, we can see from this proof that: given $z_1, z_2, z_3, z_4$ all distinct and $w_1, w_2, w_3, w_4$ distinct in $\C_\infty$, then $\exists f\in M$ with $f(z_i) = w_i$ iff $[z_1, z_2, z_3, z_4] = [w_1, w_2, w_3, w_4]$.
+
+\begin{cor}
+ $z_1, z_2, z_3, z_4$ lie on some circle/straight line iff $[z_1, z_2, z_3, z_4]\in \R$.
+\end{cor}
+
+\begin{proof}
+ Let $C$ be the circle/line through $z_1, z_2, z_3$. Let $g$ be the unique M\"obius map with $g(z_1) = \infty$, $g(z_2) = 0$, $g(z_3) = 1$. Then $g(z_4) = [z_1, z_2, z_3, z_4]$ by definition.
+
+ Since we know that M\"obius maps preserve circle/lines, $z_4\in C \Leftrightarrow g(z_4)$ is on the line through $\infty, 0, 1$, i.e.\ $g(z_4) \in \R$.
+\end{proof}
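Both the invariance of the cross-ratio and this corollary can be checked numerically. The map $g$ and the sample points below are arbitrary choices; the second check uses four points on the unit circle:

```python
import cmath

def cross_ratio(z1, z2, z3, z4):
    return (z4 - z2) / (z4 - z1) * (z3 - z1) / (z3 - z2)

g = lambda z: (2 * z + 1j) / (z + 3)   # a sample Mobius map, det = 6 - 1j != 0

# Invariance under g for four arbitrary distinct points.
zs = (1 + 1j, -2, 3j, 4 - 1j)
lhs = cross_ratio(*zs)
rhs = cross_ratio(*(g(z) for z in zs))
print(abs(lhs - rhs) < 1e-9)

# Four points on the unit circle give a real cross-ratio.
ws = [cmath.exp(1j * t) for t in (0.1, 1.0, 2.5, 4.0)]
print(abs(cross_ratio(*ws).imag) < 1e-9)
```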
+
+\section{Projective line (non-examinable)}
+We have seen in matrix groups that $\GL_2(\C)$ acts on $\C^2$, the column vectors. Instead, we can also have $\GL_2(\C)$ acting on the set of 1-dimensional subspaces (i.e.\ lines) of $\C^2$.
+
+For any $\mathbf{v}\in \C^2$, write the line generated by $\mathbf{v}$ as $\bra \mathbf{v}\ket$. Then clearly $\bra \mathbf{v} \ket = \{\lambda \mathbf{v}: \lambda\in \C\}$. Now for any $A\in \GL_2(\C)$, define the action as $A\bra \mathbf{v}\ket = \bra A\mathbf{v}\ket$. Check that this is well-defined: for any $\bra \mathbf{v} \ket = \bra \mathbf{w}\ket$, we want to show that $\bra A\mathbf{v}\ket = \bra A\mathbf{w}\ket$. This is true because $\bra \mathbf{v} \ket = \bra \mathbf{w}\ket$ if and only if $\mathbf{w} = \lambda \mathbf{v}$ for some $\lambda\in \C\setminus\{0\}$, and then $\bra A\mathbf{w}\ket = \bra A\lambda\mathbf{v}\ket = \bra \lambda (A\mathbf{v})\ket = \bra A\mathbf{v}\ket$.
+
What is the kernel of this action? By definition the kernel has to fix all lines. In particular, it has to fix our magic lines generated by $\binom{1}{0}, \binom{0}{1}$ and $\binom{1}{1}$. Since we want $A\bra \binom{1}{0}\ket = \bra \binom{1}{0}\ket$, we must have $A\binom{1}{0} = \binom{\lambda}{0}$ for some $\lambda$. Similarly, $A\binom{0}{1} = \binom{0}{\mu}$. So we can write $A =
+\begin{pmatrix}
+ \lambda & 0\\
+ 0 & \mu
\end{pmatrix}$. However, we also need $A\bra \binom{1}{1}\ket = \bra\binom{1}{1}\ket$. Since $A$ is a linear map, we know that $A \binom{1}{1} = A \binom{1}{0} + A \binom{0}{1} = \binom{\lambda }{\mu}$. For this vector to be parallel to $\binom{1}{1}$, we must have $\lambda = \mu$. So $A = \lambda I$ for some $\lambda$. Clearly any matrix of this form fixes every line. So the kernel is $Z = \{\lambda I: \lambda\in \C\setminus\{0\}\}$.
+
Note that every line is uniquely determined by its slope. For any $\mathbf{v} = (v_1, v_2), \mathbf{w} = (w_1, w_2)$, we have $\bra \mathbf{v}\ket = \bra \mathbf{w}\ket$ iff $v_1/v_2 = w_1/w_2$. So we have a one-to-one correspondence between the lines and $\C_\infty$, mapping $\bra \binom{z_1}{z_2}\ket\leftrightarrow z_1/z_2$.
+
+Finally, for each $A\in \GL_2(\C)$, given any line $\bra \binom{z}{1}\ket$, we have
+\[
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \left\bra
+ \begin{pmatrix}
+ z\\1
+ \end{pmatrix}\right\ket = \left\bra
+ \begin{pmatrix}
+ az + b\\
+ cz + d
+ \end{pmatrix}
+ \right\ket \leftrightarrow \frac{az + b}{cz + d}
+\]
So $\GL_2(\C)$ acting on the lines is just ``the same'' as the M\"obius group acting on points.
+\end{document}
diff --git a/books/cam/IA_M/numbers_and_sets.tex b/books/cam/IA_M/numbers_and_sets.tex
new file mode 100644
index 0000000000000000000000000000000000000000..dc52f398aebe1f5c8d206a9c3c8a1d327f484c51
--- /dev/null
+++ b/books/cam/IA_M/numbers_and_sets.tex
@@ -0,0 +1,2250 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IA}
+\def\nterm {Michaelmas}
+\def\nyear {2014}
+\def\nlecturer {A.\ G.\ Thomason}
+\def\ncourse {Numbers and Sets}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{
+ \small
+ \noindent\textbf{Introduction to number systems and logic}\\
+ Overview of the natural numbers, integers, real numbers, rational and irrational numbers, algebraic and transcendental numbers. Brief discussion of complex numbers; statement of the Fundamental Theorem of Algebra.
+
+ \vspace{5pt}
+ \noindent Ideas of axiomatic systems and proof within mathematics; the need for proof; the role of counter-examples in mathematics. Elementary logic; implication and negation; examples of negation of compound statements. Proof by contradiction.\hspace*{\fill}[2]
+
+ \vspace{10pt}
+ \noindent\textbf{Sets, relations and functions}\\
+ Union, intersection and equality of sets. Indicator (characteristic) functions; their use in establishing set identities. Functions; injections, surjections and bijections. Relations, and equivalence relations. Counting the combinations or permutations of a set. The Inclusion-Exclusion Principle.\hspace*{\fill}[4]
+
+ \vspace{10pt}
+ \noindent\textbf{The integers}\\
+ The natural numbers: mathematical induction and the well-ordering principle. Examples, including the Binomial Theorem.\hspace*{\fill}[2]
+
+ \vspace{10pt}
+ \noindent\textbf{Elementary number theory}\\
+ Prime numbers: existence and uniqueness of prime factorisation into primes; highest common factors and least common multiples. Euclid's proof of the infinity of primes. Euclid's algorithm. Solution in integers of $ax+by = c$.
+
+ \vspace{5pt}
+ \noindent Modular arithmetic (congruences). Units modulo $n$. Chinese Remainder Theorem. Wilson's Theorem; the Fermat-Euler Theorem. Public key cryptography and the RSA algorithm.\hspace*{\fill}[8]
+
+ \vspace{10pt}
+ \noindent\textbf{The real numbers}\\
+ Least upper bounds; simple examples. Least upper bound axiom. Sequences and series; convergence of bounded monotonic sequences. Irrationality of $\sqrt{2}$ and $e$. Decimal expansions. Construction of a transcendental number.\hspace*{\fill} [4]
+
+ \vspace{10pt}
+ \noindent\textbf{Countability and uncountability}\\
 Definitions of finite, infinite, countable and uncountable sets. A countable union of countable sets is countable. Uncountability of $\R$. Non-existence of a bijection from a set to its power set. Indirect proof of existence of transcendental numbers.\hspace*{\fill}[4]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+According to the Faculty, this course is not aimed at teaching you any new knowledge. In particular, the Faculty says
+\begin{quote}
+ This course is concerned not so much with teaching you new parts of mathematics \ldots
+\end{quote}
+Instead, this course is intended to teach you about how to do maths properly. The objective of this course is to start from a few \emph{axioms}, which are assumptions about our system, and try to prove everything rigorously from the axioms.
+
+This is different from how mathematics is usually done in secondary school. In secondary school, we just accept that certain statements are true. For example, probably no one rigorously proved to you that each natural number has a unique prime factorization. In this course, \emph{nothing} is handwaved. Everything will be obtained as a logical consequence of the axioms and previous (proven) results.
+
+In the course, you shouldn't focus too much on the content itself. Instead, the major takeaway should be how we do mathematics. In particular, how we construct and present arguments.
+
+The actual content of this course is rather diverse. As the name suggests, the course touches on numbers and sets. The ``numbers'' part is mostly basic number theory, which you may have come across if you have participated in mathematical olympiads. We also study some properties of real numbers, but the actual serious study of real numbers is done in IA Analysis I.
+
+The ``sets'' part starts with some basic definitions of sets, functions and relations, which are really important and will crop up in all courses you will encounter. In some sense, these form the language in which mathematics is written. At the end of the course, we will touch on countability, which tells us how ``big'' a set is.
+
+\section{Proofs and logic}
+\subsection{Proofs}
+As the first course in pure mathematics, we will have an (informal) look at proofs and logic.
+
+In mathematics, we often want to \emph{prove} things. This is different from most other disciplines. For example, in science, we perform experiments to convince ourselves that our theory is correct. However, no matter how many experiments we do, we still cannot be absolutely sure that our theory is correct. For example, Newtonian mechanics was believed to be correct for a long time, but is nowadays only considered to be an approximation of reality.
+
+On the other hand, when we prove a theorem in mathematics, we are completely sure that the theorem is true. Also, when we actually prove a theorem, (hopefully) we can also understand \emph{why} it is true, and gain further insight.
+
+\begin{defi}[Proof]
 A \emph{proof} is a sequence of statements, each following logically from the axioms or from previous statements, that establishes some conclusion.
+\end{defi}
+To prove things, we need to start from some assumptions. These assumptions are known as \emph{axioms}. When we call something an axiom, it does \emph{not} mean that we take these statements to be true without questioning. Instead, we are saying ``if we assume these axioms, then these results hold''. Two people can disagree on what the axioms are and still be friends.
+
+We also tend to define concepts as unambiguously as possible. Of course, just like a dictionary cannot define all words without being circular, we do not define \emph{everything} in mathematics.
+To prove things, we have to start somewhere, with some agreed assumptions (\emph{axioms}). We also don't define everything rigorously (or else how could one start speaking?).
+
+In mathematics, we are often concerned about truth. Often, we only care about statements that can take some truth value.
+\begin{defi}[Statement]
 A \emph{statement} is a sentence that can have a truth value.
+\end{defi}
+
+\begin{eg}
+ The following are statements:
+ \begin{enumerate}
+ \item There are infinitely many primes of the form $n^2 + 1$
+ \item There is always a prime number between $n$ and $2n$
+ \item There is no computer program that can factorize an $n$-digit number in $n^3$ steps
+ \item For every polynomial $p(x) = a_nx^n + a_{n - 1}x^{n - 1} + \cdots + a_0$, where $a_i$ are complex numbers, $n\geq 1$, $a_n \not=0$, there exists (possibly complex) $z$ such that $p(z) = 0$
+ \item $m\times n = n\times m$ for all natural numbers $n, m$
+ \item $2 + 2 = 4$
+ \end{enumerate}
+ The current status (as of 2015) of these statements are:
+ \begin{enumerate}
+ \item No one has a proof but it is probably true
+ \item This is known to be true
+ \item No one knows (related to P = NP problem)
+ \item This is known to be true (Fundamental Theorem of Algebra)
+ \item This is known to be true
+ \item This is known to be true (obviously --- does this need to be proved?)
+ \end{enumerate}
+\end{eg}
+
+\subsection{Examples of proofs}
+Apart from having a proof, it is very important that a proof is \emph{correct}. Here we will look at some examples of proofs and non-proofs.
+
+We first start with a simple example.
+\begin{prop}
+ For all natural numbers $n$, $n^3 - n$ is a multiple of 3.
+\end{prop}
+
+\begin{proof}
+ We have $n^3 - n = (n - 1)n(n + 1)$. One of the three consecutive integers is divisible by 3. Hence so is their product.
+\end{proof}
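A finite brute-force check, which of course proves nothing beyond the range tested, unlike the proof above:

```python
# Check n^3 - n = (n - 1)n(n + 1) is a multiple of 3 for the first 999 n.
ok = all((n**3 - n) % 3 == 0 for n in range(1, 1000))
print(ok)   # True
```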
+
+\begin{prop}
+ If $n^2$ is even, then so is $n$.
+\end{prop}
+
+\begin{proof}
+ If $n$ is even, then $n = 2k$ for some integer $k$. Then $n^2 = 4k^2$, which is even.
+\end{proof}
This is incorrect! We wanted to prove ``$n^2$ is even'' $\Rightarrow $ ``$n$ is even'', but what we proved is ``$n$ is even'' $\Rightarrow$ ``$n^2$ is even'', which is a different statement.
+
+Instead, a correct proof is as follows:
+
+\begin{proof}
 Suppose $n$ is odd. Then $n = 2k + 1$ for some integer $k$. Then $n^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1$, which is odd. This contradicts our assumption that $n^2$ is even.
+\end{proof}
+This is an example of \emph{proof by contradiction}. We assume what we want to prove is false, and show that this leads to nonsense.
+
+\begin{prop}
+ The solutions to $x^2 - 5x + 6 = 0$ are $x = 2$ and $x = 3$.
+\end{prop}
+
+Note that these are actually 2 different statements:
+\begin{enumerate}
+ \item $x = 2$ and $x = 3$ are solutions
+ \item There are no other solutions
+\end{enumerate}
+We can write this as an ``if and only if'' statement: $x$ is a solution if and only if $x = 2$ or $x = 3$. Alternatively, we say ``$x$ is a solution iff $x = 2$ or $x = 3$''; or ``$x$ is a solution $\Leftrightarrow$ $x = 2$ or $x = 3$''.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $x = 2$ or $x = 3$, then $x - 2 = 0$ or $x - 3 = 0$. So $(x - 2)(x - 3) = 0$.
+ \item If $x^2 - 5x + 6 = 0$, then $(x - 2)(x - 3) = 0$. So $x - 2 = 0$ or $x - 3 = 0$. Then $x = 2$ or $x = 3$.
+ \end{enumerate}
+ Note that the second direction is simply the first argument reversed. We can write this all in one go:
+ \begin{align*}
+ x = 3 \text{ or }x = 2&\Leftrightarrow x - 3 = 0\text{ or }x - 2 = 0\\
+ &\Leftrightarrow (x - 3)(x - 2) = 0\\
 &\Leftrightarrow x^2 - 5x + 6 = 0
+ \end{align*}
+ Note that we used the ``if and only if'' sign between all lines.
+\end{proof}
+
+We'll do another non-proof.
+\begin{prop}
+ Every positive number is $\geq 1$.
+\end{prop}
+
+\begin{proof}
+ Let $r$ be the smallest positive real. Then either $r < 1$, $r = 1$ or $r > 1$.
+
+ If $r < 1$, then $0 < r^2 < r$. Contradiction. If $r > 1$, then $0 < \sqrt{r} < r$. Contradiction. So $r = 1$.
+\end{proof}
+Now this is obviously false, since $0.5 < 1$. The problem with this proof is that the smallest positive real need not exist. So we have to make sure we justify all our claims.
+
+\subsection{Logic}
+Mathematics is full of logical statements, which are made of statements and logical connectives. Usually, we use shorthands for the logical connectives.
+
Let $P$ and $Q$ be statements. Then $P\wedge Q$ stands for ``$P$ and $Q$''; $P\vee Q$ stands for ``$P$ or $Q$''; $P\Rightarrow Q$ stands for ``$P$ implies $Q$''; $P\Leftrightarrow Q$ stands for ``$P$ iff $Q$''; $\neg P$ stands for ``not $P$''. The truth of these statements depends on the truth of $P$ and $Q$. This can be seen from a truth table:
+\begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ $P$ & $Q$ &\quad &$P\wedge Q$ & $P\vee Q$ & $P\Rightarrow Q$ & $P\Leftrightarrow Q$ & $\neg P$ \\
+ \midrule
+ T & T & & T & T & T & T & F\\
+ T & F & & F & T & F & F & F\\
+ F & T & & F & T & T & F & T\\
+ F & F & & F & F & T & T & T\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+Certain logical propositions are equivalent, which we denote using the $\Leftrightarrow$ sign. For example,
+\[
+ \neg(P\wedge Q) \Leftrightarrow (\neg P\vee \neg Q),
+\]
+or
+\[
+ (P\Rightarrow Q) \Leftrightarrow (\neg P\vee Q) \Leftrightarrow (\neg Q\Rightarrow \neg P).
+\]
+By convention, negation has the highest precedence when bracketing. For example, $\neg P\vee \neg Q$ should be bracketed as $(\neg P)\vee (\neg Q)$.
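These propositional equivalences can be checked mechanically by running through all truth assignments, just like the truth table above. A quick sketch in Python (the helper `implies` is our own name for the $\Rightarrow$ connective):

```python
from itertools import product

def implies(p, q):
    # P => Q is false only when P is true and Q is false
    return (not p) or q

# Check the stated equivalences for every truth assignment of P and Q
for P, Q in product([False, True], repeat=2):
    # not (P and Q) <=> (not P) or (not Q)   (De Morgan)
    assert (not (P and Q)) == ((not P) or (not Q))
    # (P => Q) <=> (not P or Q) <=> (not Q => not P)   (contrapositive)
    assert implies(P, Q) == ((not P) or Q) == implies(not Q, not P)
```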
+
+We also have quantifiers. $(\forall x) P(x)$ means ``for all $x$, $P(x)$ is true'', while $(\exists x) P(x)$ means ``there exists $x$ such that $P(x)$ is true''.
+
+The quantifiers are usually \emph{bounded}, i.e.\ we write $\forall x\in X$ or $\exists x\in X$ to mean ``for all $x$ in the set $X$'' and ``there exists $x$ in the set $X$'' respectively.
+
+Quantifiers are negated as follows:
+\[
+ \neg (\forall x) P(x) \Leftrightarrow (\exists x)(\neg P(x));
+\]
+\[
+ \neg (\exists x) P(x) \Leftrightarrow (\forall x)(\neg P(x)).
+\]
+\section{Sets, functions and relations}
+In this chapter, we will look at the basic building blocks of mathematics, namely sets, functions and relations. Most of mathematics can be expressed in terms of these notions, and it is helpful to be familiar with the relevant terms.
+\subsection{Sets}
+\begin{defi}[Set]
+ A \emph{set} is a collection of stuff, without regard to order. Elements in a set are only counted once. For example, if $a = 2, b = c = 1$, then $A = \{a, b, c\}$ has only two members. We write $x\in X$ if $x$ is a member of the set $X$.
+\end{defi}
+
+\begin{eg}
+ Common sets and the symbols used to denote them:
+ \begin{itemize}
+ \item $\N = \{1, 2, 3, \cdots \}$ is the natural numbers
+ \item $\N_0 = \{0, 1, 2, \cdots \}$ is the natural numbers with $0$
+ \item $\Z = \{\cdots, -2, -1, 0, 1, 2, \cdots \}$ is the integers
+ \item $\Q = \{\frac{a}{b}: a, b\in \Z, b \not= 0\}$ is the rational numbers
+ \item $\R$ is the real numbers
+ \item $\C$ is the complex numbers
+ \end{itemize}
+ It is still debated whether $0$ is a natural number. Those who believe that $0$ is a natural number usually write $\N$ for $\{0, 1, 2, \cdots\}$, and $\N^+$ for the positive natural numbers. However, most of the time, it doesn't matter, and when it does, you should specify it explicitly.
+\end{eg}
+\begin{defi}[Equality of sets]
+ $A$ is equal to $B$, written as $A = B$, if
+ \[
+ (\forall x)\,x\in A \Leftrightarrow x\in B,
+ \]
+ i.e.\ two sets are equal if they have the same elements.
+\end{defi}
+
+\begin{defi}[Subsets]
+ $A$ is a \emph{subset} of $B$, written as $A\subseteq B$ or $A\subset B$, if all elements in $A$ are in $B$, i.e.
+ \[
+ (\forall x)\,x\in A\Rightarrow x\in B.
+ \]
+\end{defi}
+
+\begin{thm}
+ $(A=B)\Leftrightarrow (A\subseteq B \text{ and }B\subseteq A)$
+\end{thm}
+
+Suppose $X$ is a set and $P$ is a property of some elements of $X$. We can write a set $\{x\in X:P(x)\}$ for the subset of $X$ consisting of the elements for which $P(x)$ is true. e.g.\ $\{n\in \N : n \text{ is prime}\}$ is the set of all primes.
+
+\begin{defi}[Intersection, union, set difference, symmetric difference and power set]
+ Given two sets $A$ and $B$, we define the following:
+ \begin{itemize}
+ \item Intersection: $A\cap B = \{x:x\in A \text{ and } x\in B\}$
+ \item Union: $A\cup B = \{x:x\in A\text{ or }x\in B\}$
+ \item Set difference: $A\setminus B = \{x\in A: x\not\in B\}$
+ \item Symmetric difference: $A\Delta B = \{x: x\in A\text{ xor } x\in B\}$, i.e.\ the elements in exactly one of the two sets
+ \item Power set: $\mathcal{P}(A) = \{ X : X\subseteq A\}$, i.e.\ the set of all subsets
+ \end{itemize}
+\end{defi}
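Most of these operations have direct counterparts on Python's built-in sets, which makes them easy to experiment with; a sketch (`power_set` is our own helper built from `itertools.combinations`):

```python
from itertools import combinations

A, B = {1, 2, 3}, {3, 4}

assert A & B == {3}              # intersection
assert A | B == {1, 2, 3, 4}     # union
assert A - B == {1, 2}           # set difference
assert A ^ B == {1, 2, 4}        # symmetric difference: exactly one of the two

def power_set(s):
    """All subsets of s, returned as a set of frozensets."""
    elems = list(s)
    return {frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

assert len(power_set(A)) == 2 ** len(A)   # |P(A)| = 2^|A|
assert frozenset() in power_set(A) and frozenset(A) in power_set(A)
```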
+
+New sets can only be created via the above operations on old sets (plus replacement, which says that you can replace an element of a set with another element). One cannot arbitrarily create sets such as $X=\{x:x\text{ is a set and }x\not\in x\}$. Otherwise paradoxes will arise.
+
+We have several rules regarding how these set operations behave, which should be intuitively obvious.
+\begin{prop}\leavevmode
+ \begin{itemize}
+ \item $(A\cap B)\cap C = A \cap (B\cap C)$
+ \item $(A\cup B)\cup C = A\cup (B\cup C)$
+ \item $A\cap(B\cup C) = (A\cap B)\cup (A\cap C)$
+ \end{itemize}
+\end{prop}
+
+\begin{notation}
+ If $A_\alpha$ are sets for all $\alpha \in I$, then
+ \[
+ \bigcap_{\alpha\in I}A_\alpha = \{x: (\forall\alpha\in I) x\in A_\alpha\}
+ \]
+ and
+ \[
+ \bigcup_{\alpha\in I}A_\alpha = \{x: (\exists\alpha\in I) x\in A_\alpha\}.
+ \]
+\end{notation}
+
+\begin{defi}[Ordered pair]
+ An \emph{ordered pair} $(a, b)$ is a pair of two items in which order matters. Formally, it is defined as $\{\{a\}, \{a, b\}\}$. We have $(a, b) = (a', b')$ iff $a = a'$ and $b = b'$.
+\end{defi}
+
+\begin{defi}[Cartesian product]
+ Given two sets $A, B$, the \emph{Cartesian product} of $A$ and $B$ is $A\times B = \{(a, b):a\in A, b\in B\}$. This can be extended to $n$ products, e.g.\ $\R^3 = \R\times\R\times\R = \{(x,y,z): x, y, z\in \R\}$ (which is officially $\{(x, (y, z)): x, y, z\in \R\}$).
+\end{defi}
+
+\subsection{Functions}
+\begin{defi}[Function/map]
+ A \emph{function} (or \emph{map}) $f: A\to B$ is a ``rule'' that assigns, for each $a\in A$, precisely one element $f(a)\in B$. We can write $a\mapsto f(a)$. $A$ and $B$ are called the \emph{domain} and \emph{co-domain} respectively.
+\end{defi}
+If we wish to be very formal, we can define a function to be a subset $f\subseteq A\times B$ such that for any $a\in A$, there exists a unique $b\in B$ such that $(a, b)\in f$. We then think of $(a, b) \in f$ as saying $f(a) = b$. However, while this might act as a formal definition of a function, it is a terrible way to think about functions.
+
+\begin{eg}
+ $x^2: \R \to \R$ is a function that sends $x$ to $x^2$. $\frac{1}{x}:\R\to\R$ is not a function since its value at $0$ is not defined. $\pm x: \R\to\R$ is also not a function since it is multi-valued.
+\end{eg}
+
+It is often helpful to categorize functions into different categories.
+\begin{defi}[Injective function]
+ A function $f: X \to Y$ is \emph{injective} if it hits everything at most once, i.e.
+ \[
+ (\forall x, y\in X)\,f(x) = f(y)\Rightarrow x = y.
+ \]
+\end{defi}
+
+\begin{defi}[Surjective function]
+ A function $f: X \to Y$ is \emph{surjective} if it hits everything at least once, i.e.
+ \[
+ (\forall y\in Y)(\exists x\in X)\,f(x) = y
+ \]
+\end{defi}
+
+\begin{eg}
+ $f: \R \to\R^+\cup\{0\}$ with $x \mapsto x^2$ is surjective but not injective.
+\end{eg}
+
+\begin{defi}[Bijective function]
+ A function is \emph{bijective} if it is both injective and surjective. i.e.\ it hits everything exactly once.
+\end{defi}
+
+\begin{defi}[Permutation]
+ A \emph{permutation} of $A$ is a bijection $A\to A$.
+\end{defi}
+
+\begin{defi}[Composition of functions]
+ The \emph{composition} of two functions is a function you get by applying one after another. In particular, if $f: X \rightarrow Y$ and $g: Y\rightarrow Z$, then $g\circ f: X \rightarrow Z$ is defined by $g\circ f(x) = g(f(x))$. Note that function composition is associative.
+\end{defi}
+
+\begin{defi}[Image of function]
+ If $f: A\to B$ and $U\subseteq A$, then $f(U) = \{f(u):u\in U\}$. $f(A)$ is the \emph{image} of $f$.
+\end{defi}
+By definition, $f$ is surjective iff $f(A) = B$.
+
+\begin{defi}[Pre-image of function]
+ If $f: A\to B$ and $V\subseteq B$, then $f^{-1}(V) = \{a\in A: f(a)\in V\}$.
+\end{defi}
+$f^{-1}(V)$ is the \emph{pre-image} of $V$ under $f$, and $f^{-1}$ acts on \emph{subsets} of $B$. This is defined for any function $f$. It is important to note that we use the same symbol $f^{-1}$ to denote the \emph{inverse function}, which we will define later, but they are very distinct entities. For example, we will see that the inverse function exists only for bijective functions.
+
+To define the inverse function, we will first need some preliminary definitions.
+
+\begin{defi}[Identity map]
+ The \emph{identity map} $\id _A: A\to A$ is defined as the map $a\mapsto a$.
+\end{defi}
+
+\begin{defi}[Left inverse of function]
+ Given $f: A\to B$, a \emph{left inverse} of $f$ is a function $g:B\to A$ such that $g\circ f = \id _A$.
+\end{defi}
+
+\begin{defi}[Right inverse of function]
+ Given $f: A\to B$, a \emph{right inverse} of $f$ is a function $g:B\to A$ such that $f\circ g = \id _B$.
+\end{defi}
+
+\begin{thm}
+ The left inverse of $f$ exists iff $f$ is injective.
+\end{thm}
+
+\begin{proof}
+ ($\Rightarrow$)
+ If the left inverse $g$ exists, then $\forall a, a'\in A, f(a) = f(a') \Rightarrow g( f(a))=g(f(a'))\Rightarrow a=a'$. Therefore $f$ is injective.
+
+ ($\Leftarrow$) If $f$ is injective, we can construct a $g$ defined as
+ \[
+ g(b) = \begin{cases}
+ a &\text{if }b\in f(A), \text{ where }f(a) = b\\
+ \text{anything} & \text{otherwise}
+ \end{cases}.
+ \]
+ Then $g$ is a left inverse of $f$.
+\end{proof}
+
+\begin{thm}
+ The right inverse of $f$ exists iff $f$ is surjective.
+\end{thm}
+
+\begin{proof}
+ ($\Rightarrow$) We have $f(g(B)) = B$ since $f\circ g$ is the identity function. Thus $f$ must be surjective since its image is $B$.
+
+ ($\Leftarrow$) If $f$ is surjective, we can construct a $g$ such that for each $b\in B$, pick one $a\in A$ with $f(a) = b$, and put $g(b) = a$.
+\end{proof}
+
+(Note that to prove the second part, for each $b$, we need to \emph{pick} an $a$ such that $f(a) = b$. If $B$ is infinite, doing so involves making infinite arbitrary choices. Are we allowed to do so?
+
+To make infinite choices, we need to use the \emph{Axiom of choice}, which explicitly says that this is allowed. In particular, it says that given a family of sets $A_i$ for $i \in I$, there exists a \emph{choice function} $f: I \to \bigcup A_i$ such that $f(i)\in A_i$ for all $i$.
+
+So can we prove the theorem without the Axiom of Choice? The answer is no. This is since if we assume surjective functions have inverses, then we can prove the Axiom of Choice.
+
+Assume any surjective function $f$ has a right inverse. Given a family of non-empty sets $A_i$ for $i\in I$ (wlog assume they are disjoint), define a function $f: \bigcup A_i \to I$ that sends each element to the set that contains the element. This is surjective since each set is non-empty. Then it has a right inverse. Then the right inverse must send each set to an element in the set, i.e.\ is a choice function for $A_i$.)
+
+
+\begin{defi}[Inverse of function]
+ An \emph{inverse} of $f$ is a function that is both a left inverse and a right inverse. It is written as $f^{-1}: B\to A$. It exists iff $f$ is bijective, and is necessarily unique.
+\end{defi}
+
+\subsection{Relations}
+\begin{defi}[Relation]
+ A \emph{relation} $R$ on $A$ specifies that some elements of $A$ are related to some others. Formally, a relation is a subset $R\subseteq A\times A$. We write $aRb$ iff $(a, b)\in R$.
+\end{defi}
+
+\begin{eg}
+ The following are examples of relations on natural numbers:
+ \begin{enumerate}
+ \item $aRb$ iff $a$ and $b$ have the same final digit. e.g.\ $(37)R(57)$.
+ \item $aRb$ iff $a$ divides $b$. e.g.\ $2R6$ and $2\not \!\!R 7$.
+ \item $aRb$ iff $a\not= b$.
+ \item $aRb$ iff $a = b = 1$.
+ \item $aRb$ iff $|a - b|\leq 3$.
+ \item $aRb$ iff either $a, b\geq 5$ or $a, b\leq 4$.
+ \end{enumerate}
+\end{eg}
+
+Again, we wish to classify different relations.
+\begin{defi}[Reflexive relation]
+ A relation $R$ is \emph{reflexive} if
+ \[
+ (\forall a)\,aRa.
+ \]
+\end{defi}
+
+\begin{defi}[Symmetric relation]
+ A relation $R$ is \emph{symmetric} iff
+ \[
+ (\forall a, b)\,aRb\Leftrightarrow bRa.
+ \]
+\end{defi}
+
+\begin{defi}[Transitive relation]
+ A relation $R$ is \emph{transitive} iff
+ \[
+ (\forall a, b, c)\,aRb\wedge bRc \Rightarrow aRc.
+ \]
+\end{defi}
+
+\begin{eg}
+ With regards to the examples above,
+ \begin{center}
+ \begin{tabular}{lcccccc}
+ \toprule
+ Examples & (i) & (ii) & (iii) & (iv) & (v) & (vi) \\
+ \midrule
+ Reflexive & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & \checkmark \\
+ Symmetric & \checkmark & $\times$ & \checkmark & \checkmark & \checkmark & \checkmark \\
+ Transitive & \checkmark & \checkmark & $\times$ & \checkmark & $\times$ & \checkmark \\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+\end{eg}
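These properties are easy to check by brute force on a finite set, encoding a relation as a set of ordered pairs. A sketch in Python (the helper names are our own; we check examples (ii) and (v) on $\{1, \ldots, 20\}$):

```python
def is_reflexive(R, X):
    return all((a, a) in R for a in X)

def is_symmetric(R, X):
    return all(((a, b) in R) == ((b, a) in R) for a in X for b in X)

def is_transitive(R, X):
    return all((a, c) in R
               for a in X for b in X for c in X
               if (a, b) in R and (b, c) in R)

X = range(1, 21)

# example (ii): a R b iff a divides b
R_div = {(a, b) for a in X for b in X if b % a == 0}
assert is_reflexive(R_div, X) and is_transitive(R_div, X)
assert not is_symmetric(R_div, X)          # 2 R 6 but not 6 R 2

# example (v): a R b iff |a - b| <= 3
R_near = {(a, b) for a in X for b in X if abs(a - b) <= 3}
assert is_reflexive(R_near, X) and is_symmetric(R_near, X)
assert not is_transitive(R_near, X)        # 1 R 4 and 4 R 7, but not 1 R 7
```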
+
+\begin{defi}[Equivalence relation]
+ A relation is an \emph{equivalence relation} if it is reflexive, symmetric and transitive. e.g.\ (i) and (vi) in the above examples are equivalence relations.
+\end{defi}
+If it is an equivalence relation, we usually write $\sim$ instead of $R$. As the name suggests, equivalence relations are used to describe relations that are similar to equality. For example, if we want to represent rational numbers as a pair of integers, we might have an equivalence relation defined by $(n, m)\sim (p, q)$ iff $nq = mp$, such that two pairs are equivalent if they represent the same rational number.
+
+\begin{eg}
+ If we consider a deck of cards, define two cards to be related if they have the same suit.
+\end{eg}
+
+As mentioned, we like to think of things related by $\sim$ as equal. Hence we want to identify all ``equal'' things together and form one new object.
+\begin{defi}[Equivalence class]
+ If $\sim$ is an equivalence relation, then the \emph{equivalence class} $[x]$ is the set of all elements that are related via $\sim$ to $x$.
+\end{defi}
+
+\begin{eg}
+ In the cards example, $[8\heartsuit]$ is the set of all hearts.
+\end{eg}
+
+\begin{defi}[Partition of set]
+ A \emph{partition} of a set $X$ is a collection of subsets $A_\alpha$ of $X$ such that each element of $X$ is in exactly one of $A_\alpha$.
+\end{defi}
+
+\begin{thm}
+ If $\sim$ is an equivalence relation on $A$, then the equivalence classes of $\sim$ form a partition of $A$.
+\end{thm}
+
+\begin{proof}
+ By reflexivity, we have $a\in [a]$. Thus the equivalence classes cover the whole set. We must now show that for all $a, b\in A$, either $[a] = [b]$ or $[a]\cap [b]=\emptyset$.
+
+ Suppose $[a]\cap[b]\not=\emptyset$. Then $\exists c\in [a]\cap[b]$. So $a\sim c, b\sim c$. By symmetry, $c\sim b$. By transitivity, we have $a\sim b$. For all $b'\in [b]$, we have $b\sim b'$. Thus by transitivity, we have $a\sim b'$. Thus $[b]\subseteq[a]$. By symmetry, $[a]\subseteq[b]$ and $[a] = [b]$.
+\end{proof}
+
+On the other hand, each partition defines an equivalence relation in which two elements are related iff they are in the same partition. Thus partitions and equivalence relations are ``the same thing''.
+\begin{defi}[Quotient map]
+ The \emph{quotient map} $q$ maps each element $a\in A$ to the equivalence class containing $a$, i.e.\ $a\mapsto [a]$. e.g.\ $q(8\heartsuit) = [8\heartsuit]$, the set of all hearts.
+\end{defi}
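The theorem above can be checked concretely. A sketch in Python computing the equivalence classes of example (i) on a finite set and verifying that they form a partition (`related` and `eq_class` are our own names):

```python
X = range(30)

def related(a, b):
    # example (i): a ~ b iff a and b have the same final digit
    return a % 10 == b % 10

def eq_class(x):
    """The equivalence class [x] = {y : y ~ x}."""
    return frozenset(y for y in X if related(x, y))

# the quotient map sends x to [x]; collect all distinct classes
classes = {eq_class(x) for x in X}

# the classes cover X and are pairwise disjoint, i.e. they partition X
assert set().union(*classes) == set(X)
assert all(c == d or not (c & d) for c in classes for d in classes)
assert len(classes) == 10   # one class per final digit
```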
+
+\section{Division}
+If you think you already know how to divide, well perhaps you are right. However, in this chapter, we will define these notions formally and properly prove things we already know (plus maybe some new things).
+\subsection{Euclid's Algorithm}
+\begin{defi}[Factor of integers]
+ Given $a, b\in \Z$, we say $a$ \emph{divides} $b$, $a$ is a \emph{factor} of $b$ or $a\mid b$ if $(\exists c\in \Z)\,b = ac$. For any $b$, $\pm 1$ and $\pm b$ are always factors of $b$. The other factors are called \emph{proper factors}.
+\end{defi}
+
+\begin{thm}[Division Algorithm]
+ Given $a, b\in \Z$ with $b > 0$, there are unique $q, r\in \Z$ with $a = qb + r$ and $0\leq r < b$.
+\end{thm}
+Despite the name, the division algorithm is not an algorithm in the usual sense. Instead, it merely states that you can divide. Even the proof does not specify a (non-brute force) way of how to divide.
+
+\begin{proof}
+ Choose $q = \max\{q : qb \leq a\}$. This maximum exists because the set of all $q$ such that $qb\leq a$ is non-empty and bounded above. Now write $r = a - qb$. We have $0\leq r < b$ and thus $q$ and $r$ are found.
+
+ To show that they are unique, suppose that $a = qb + r = q'b + r'$. We have $(q - q')b = (r' - r)$. Since both $r$ and $r'$ are between $0$ and $b$, we have $-b < r - r' < b$. However, $r' - r$ is a multiple of $b$. Thus $q - q' = r' - r = 0$. Consequently, $q = q'$ and $r = r'$.
+\end{proof}
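As a sanity check, the $q$ and $r$ from the proof can be computed directly; in Python, floor division happens to pick exactly $q = \max\{q : qb \leq a\}$. A sketch, assuming $b > 0$ (`divide` is our own name):

```python
def divide(a, b):
    """Return the unique (q, r) with a = q*b + r and 0 <= r < b, for b > 0."""
    q = a // b          # floor division picks the largest q with q*b <= a
    r = a - q * b
    assert a == q * b + r and 0 <= r < b
    return q, r

assert divide(57, 42) == (1, 15)
assert divide(-7, 3) == (-3, 2)   # works for negative a: -7 = (-3)*3 + 2
```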
+
+\begin{defi}[Common factor of integers]
+ A \emph{common factor} of $a$ and $b$ is a number $c\in\Z$ such that $c\mid a$ and $c\mid b$.
+\end{defi}
+
+\begin{defi}[Highest common factor/greatest common divisor]
+ The \emph{highest common factor} or \emph{greatest common divisor} of two numbers $a, b\in \N$ is a number $d\in \N$ such that $d$ is a common factor of $a$ and $b$, and if $c$ is also a common factor, $c\mid d$.
+\end{defi}
+Clearly if the hcf exists, it must be the largest common factor, since all other common factors divide it, and it is thus necessarily unique.
+
+You might reasonably think it more natural to define $\hcf(a, b)$ to be the largest common factor, and then show that it has the property that all common factors divide it. But the above definition is superior because it does not require a prior ordering of the natural numbers (and can be extended to any ring even if it is not ordered, as we will do in IB Groups, Rings and Modules).
+
+
+\begin{notation}
+ We write $d = \hcf(a, b) = \gcd(a, b) = (a, b)$.
+
+ Here we use $(a, b)$ to stand for a number, which has nothing to do with an ordered pair.
+\end{notation}
+
+\begin{prop}
+ If $c\mid a$ and $c\mid b$, $c\mid (ua + vb)$ for all $u, v\in \Z$.
+\end{prop}
+
+\begin{proof}
+ By definition, we have $a = kc$ and $b = lc$. Then $ua + vb = ukc + vlc = (uk + vl)c$. So $c\mid (ua + vb)$.
+\end{proof}
+
+\begin{thm}
+ Let $a,b\in \N$. Then $(a, b)$ exists.
+\end{thm}
+
+\begin{proof}
+ Let $S=\{ua + vb: u, v\in\Z\}$ be the set of all linear combinations of $a, b$. Let $d$ be the smallest positive member of $S$. Say $d = xa + yb$. Hence if $c\mid a, c\mid b$, then $c\mid d$. So we need to show that $d\mid a$ and $d\mid b$, and thus $d=(a, b)$.
+
+ By the division algorithm, there exist numbers $q, r\in\Z$ with $a = qd + r$ with $0\leq r < d$. Then $r = a - qd = a(1 - qx) - qyb$. Therefore $r$ is a linear combination of $a$ and $b$. Since $d$ is the smallest positive member of $S$ and $0\leq r < d$, we have $r = 0$ and thus $d\mid a$. Similarly, we can show that $d\mid b$.
+\end{proof}
+
+\begin{cor}
+ (from the proof) Let $d = (a, b)$, then $d$ is the smallest positive linear combination of $a$ and $b$.
+\end{cor}
+
+\begin{cor}[B\'{e}zout's identity]
+ Let $a, b\in\N$ and $c\in \Z$. Then there exists $u, v\in \Z$ with $c=ua + vb$ iff $(a, b)\mid c$.
+\end{cor}
+
+\begin{proof}
+ ($\Rightarrow$) Let $d=(a, b)$. If $c$ is a linear combination of $a$ and $b$, then $d\mid c$ because $d\mid a$ and $d\mid b$.
+
+ ($\Leftarrow$) Suppose that $d\mid c$. Let $d = xa + yb$ and $c = kd$. Then $c = (kx)a + (ky)b$. Thus $c$ is a linear combination of $a$ and $b$.
+\end{proof}
+
+Note that the proof that $(a, b)$ exists is existential, not constructive. How can we actually find $d$, and how can we find $x, y$ such that $d = xa + yb$?
+
+While it might be easy to simply inspect $d$ for small numbers, how would you find common factors of, say, 4931 and 3795? We cannot use primes because (a) prime factorization is hard; and (b) primes are not yet defined.
+
+You might spot that if $c\mid 4931$ and $c\mid 3795$, then $c\mid (4931 - 3795)=1136$. The process is also reversible --- if $c\mid 1136$ and $c\mid 3795$, then $c\mid (1136 + 3795) = 4931$. Thus the problem is equivalent to finding common factors of $3795$ and $1136$. The process can be repeated until we have small numbers.
+
+\begin{prop}[Euclid's Algorithm]
+ If we continuously break down $a$ and $b$ by the following procedure:
+ \begin{align*}
+ a &= q_1b + r_1\\
+ b &= q_2r_1 + r_2\\
+ r_1 &= q_3r_2 + r_3\\
+ &\vdots\\
+ r_{n-2} &= q_nr_{n-1}
+ \end{align*}
+ then the highest common factor is $r_{n-1}$.
+\end{prop}
+
+\begin{proof}
+ We have $($common factors of $a, b)=($common factors of $b, r_1)=($common factors of $r_1, r_2 )= \cdots = ($factors of $r_{n-1})$.
+\end{proof}
+This gives an alternative proof that hcfs exist.
+
+How efficient is this algorithm? For every step, we have $a\geq b + r_1 > 2r_1$. Thus every two steps, the number on the left goes down by at least half. Hence the number of digits goes down every 8 steps. Thus the time needed is $\leq 8\times$ number of digits and has time complexity $O(\log b)$.
+
+\begin{eg}
+ Suppose $a = 57$ and $b = 42$.
+ \begin{center}
+ \begin{tabular}{l l l}
+ common factors of $57$ and $42$ & & $57 = 1\times 42 + 15$ \\
+ = common factors of $42$ and $15$ & & $42 = 2\times 15 + 12$ \\
+ = common factors of $15$ and $12$ & & $15 = 1\times 12 + 3$ \\
+ = common factors of $12$ and $3$ & & $12 = 4\times 3 + 0$ \\
+ = common factors of $3$ and $0$ \\
+ = factors of $3$.
+ \end{tabular}
+ \end{center}
+ So the hcf is $3$.
+\end{eg}
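The repeated-division procedure above is straightforward to code; a minimal Python sketch (`hcf` is our own name):

```python
def hcf(a, b):
    """Highest common factor by Euclid's algorithm."""
    while b != 0:
        # common factors of (a, b) = common factors of (b, a mod b)
        a, b = b, a % b
    return a

assert hcf(57, 42) == 3
assert hcf(4931, 3795) == 1   # the two numbers from the text are coprime
```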
+
+By reversing Euclid's Algorithm, we can find the hcf of two numbers as a linear combination of $a$ and $b$.
+\begin{eg}
+ Consider $57$ and $21$.
+ \begin{align*}
+ 57 &= 2\times 21 + 15\\
+ 21 &= 1\times 15 + 6\\
+ 15 &= 2\times 6 + 3\\
+ 6 &= 2\times 3
+ \end{align*}
+ In the opposite direction, we have
+ \begin{align*}
+ 3 &= 15 - 2\times 6\\
+ &= 15 - 2\times (21 - 15)\\
+ &= 3\times 15 - 2\times 21\\
+ &= 3\times (57 - 2\times 21) - 2\times 21\\
+ &= 3\times 57 - 8\times 21
+ \end{align*}
+\end{eg}
+
+This gives an alternative constructive proof of B\'{e}zout's identity. Moreover, it gives us a quick way of expressing $(a, b) = ax + by$. However, this algorithm requires storing the whole process of Euclid's Algorithm and is not efficient space-wise.
+
+To achieve higher space efficiency, we attempt to find a recurrence relation for the coefficients $A_j, B_j$ such that $a\times B_j - b \times A_j = (-1)^{j}r_j$. The possible factor of $-1$ is there just so that the recurrence relation will look nicer. Suppose that this is satisfied for all indices less than $j$. Then we have
+\begin{align*}
+ (-1)^jr_{j} &= (-1)^j(r_{j - 2} - q_jr_{j - 1})\\
+ &= (-1)^{j - 2} r_{j - 2} + q_j (-1)^{j - 1} r_{j - 1}\\
+ &= a (B_{j - 2} + q_j B_{j - 1}) - b (A_{j - 2} + q_j A_{j - 1}).
+\end{align*}
+Hence we can obtain the following recurrence relation:
+\begin{align*}
+ A_j &= q_jA_{j-1} + A_{j-2}\\
+ B_j &= q_jB_{j-1} + B_{j-2}
+\end{align*}
+with
+\[
+ a\times B_j - b\times A_j = (-1)^{j}r_j.
+\]
+In particular, $a\times B_{n-1} - b\times A_{n-1} = (-1)^{n-1}r_{n - 1} = (a, b)$.
+
+Also, by an easy induction, $A_jB_{j - 1} - B_jA_{j-1} = (-1)^j$. So $(A_j, B_j) = 1$.
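The same idea is usually coded as the ``extended Euclidean algorithm'', which keeps only the two most recent coefficient pairs and absorbs the $(-1)^j$ signs into the coefficients themselves. A sketch (names are our own):

```python
def bezout(a, b):
    """Return (d, x, y) with d = (a, b) and d = x*a + y*b."""
    # invariant: x0*a0 + y0*b0 = a and x1*a0 + y1*b0 = b,
    # where a0, b0 are the original inputs
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b != 0:
        q, r = divmod(a, b)
        a, b = b, r
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

d, x, y = bezout(57, 21)
assert (d, x, y) == (3, 3, -8)   # matches 3 = 3*57 - 8*21 from the example
```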
+
+These coefficients also play another role. We can put the Euclid's Algorithm's equations in the following form:
+\begin{align*}
+ \frac{57}{21} &= 2 + \frac{15}{21}\\
+ \frac{21}{15} &= 1 + \frac{6}{15}\\
+ \frac{15}{6} &= 2 + \frac{3}{6}\\
+ \frac{6}{3} &= 2
+\end{align*}
+Then we can write out the fraction $\frac{57}{21}$ in continued fraction form
+\[
+ \frac{57}{21} = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{2}}}
+\]
+Expanding this continued fraction term by term, we get the sequence $2, 2 + \frac{1}{1} = 3$, $2 + \frac{1}{1 + \frac{1}{2}} = \frac{8}{3}$. These are called the ``convergents''. The sequence happens to be $\frac{A_i}{B_i}$.
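The convergents can be computed directly from the quotients of Euclid's algorithm; a sketch in Python using exact rational arithmetic (`convergents` is our own helper):

```python
from fractions import Fraction

def convergents(a, b):
    """Successive convergents of the continued fraction of a/b."""
    # the partial quotients are exactly the q_i from Euclid's algorithm
    qs = []
    while b != 0:
        q, r = divmod(a, b)
        qs.append(q)
        a, b = b, r
    result = []
    for i in range(len(qs)):
        # evaluate [q_0; q_1, ..., q_i] from the inside out
        value = Fraction(qs[i])
        for q in reversed(qs[:i]):
            value = q + 1 / value
        result.append(value)
    return result

assert convergents(57, 21) == [2, 3, Fraction(8, 3), Fraction(19, 7)]
```

Note that the final convergent is the fraction itself in lowest terms: $\frac{19}{7} = \frac{57}{21}$.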
+
+\subsection{Primes}
+There are a lot of ways we can define prime numbers in $\N$. The definition we will use is the following:
+\begin{defi}[Prime number]
+ $p\in \N$ is a \emph{prime} if $p > 1$ and the only factors of $p$ (in $\Z$) are $\pm 1$ and $\pm p$.
+\end{defi}
+In this chapter, the objective is to prove things we already know and think are obvious.
+\begin{thm}
+ Every number can be written as a product of primes.
+\end{thm}
+
+\begin{proof}
+ If $n\in \N$ is not a prime itself, then by definition $n = ab$ for some $1 < a, b < n$. If either $a$ or $b$ is not prime, then that number can itself be written as a product, say $b = cd$. Then $n = acd$ and so on. Since these numbers keep getting smaller, the process will eventually stop, at which point all factors are prime.
+\end{proof}
+In the proof, we handwaved a bit when we said ``and so on''. We will later come up with the principle of (strong) induction that rigorously justifies this. This is the case for many proofs we will have here.
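A crude way to actually find such a factorization is repeated trial division; a sketch only (and, as remarked later, far too slow for large numbers):

```python
from math import prod

def prime_factors(n):
    """Write n >= 2 as a product of primes by repeated trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out d as often as possible
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is prime
        factors.append(n)
    return factors

assert prime_factors(60) == [2, 2, 3, 5]
assert prime_factors(97) == [97]              # 97 is prime
assert prod(prime_factors(5040)) == 5040      # the factors multiply back
```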
+
+\begin{thm}
+ There are infinitely many primes.
+\end{thm}
+
+\begin{proof}
+ (Euclid's proof) Suppose there are finitely many primes, say $p_1, p_2 \cdots p_n$. Then $N = p_1p_2\cdots p_n + 1$ is divisible by none of the primes. Otherwise, $p_j\mid (N - p_1p_2\cdots p_n)$, i.e.\ $p_j\mid 1$, which is impossible. However, $N$ is a product of primes, so there must be primes not amongst $p_1, p_2\cdots p_n$.
+\end{proof}
+
+\begin{proof}
+ (Erd\H{o}s 1930) Suppose that there are finitely many primes, $p_1, p_2\cdots p_k$. Consider all numbers that are the products of these primes, i.e.\ $p_1^{j_1}p_2^{j_2}\cdots p_k^{j_k}$, where $j_i \geq 0$. Factor out all squares to obtain the form $m^2p_1^{i_1}p_2^{i_2}\cdots p_k^{i_k}$, where $m\in \N$ and $i_j = 0$ or $1$.
+
+ Let $N\in\N$. Given any number $x \leq N$, when put in the above form, we have $m \leq \sqrt{N}$. So there are at most $\sqrt{N}$ possible values of $m$. For each $m$, there are $2^k$ numbers of the form $m^2p_1^{i_1}p_2^{i_2}\cdots p_k^{i_k}$. So there are only $\sqrt{N}\times 2^k$ possible values of $x$ of this kind.
+
+ Now pick $N > 4^k$. Then $N > \sqrt{N}\times 2^k$. So there must be a number $\leq N$ not of this form, i.e.\ it has a prime factor not in this list.
+\end{proof}
+Historically, many people have come up with ``new'' proofs that there are infinitely many primes. However, most of these proofs were just Euclid's proof in disguise. Erd\H{o}s' proof is genuinely a new proof. For example, Euclid's proof comes up with a \emph{particular} number $N$, and says \emph{all} its factors are not in the list of primes. On the other hand, Erd\H{o}s' proof says that there is \emph{some} number, which we don't know, with \emph{at least one} factor not in the list.
+
+Also, the proofs give different bounds on when we should expect to see the $k$th prime. For example, Euclid tells us that the $k$th prime must be less than $2^{2^k}$, while Erd\H{o}s tells us it is less than $4^k$.
+
+\begin{thm}
+ If $a\mid bc$ and $(a, b) = 1$, then $a \mid c$.
+\end{thm}
+
+\begin{proof}
+ From Euclid's algorithm, there exist integers $u, v\in \Z$ such that $ua + vb = 1$. So multiplying by $c$, we have $uac + vbc = c$. Since $a \mid bc$, $a$ divides the left-hand side. So $a \mid c$.
+\end{proof}
+
+\begin{defi}[Coprime numbers]
+ We say $a, b$ are \emph{coprime} if $(a, b) = 1$.
+\end{defi}
+
+\begin{cor}
+ If $p$ is a prime and $p\mid ab$, then $p\mid a$ or $p\mid b$. (True for all $p, a, b$)
+\end{cor}
+
+\begin{proof}
+ We know that $(p, a) = p$ or $1$ because $p$ is a prime. If $(p, a) = p$, then $p \mid a$. Otherwise, $(p, a) = 1$ and $p \mid b$ by the theorem above.
+\end{proof}
+
+\begin{cor}
+ If $p$ is a prime and $p\mid n_1n_2\cdots n_k$, then $p \mid n_i$ for some $i$.
+\end{cor}
+Note that when we defined primes, we defined it in terms of factors of $p$. This corollary is the opposite --- it is about how $p$ behaves as a factor of other numbers.
+
+\begin{thm}[Fundamental Theorem of Arithmetic]
+ Every natural number is expressible as a product of primes in exactly one way. In particular, if $p_1p_2\cdots p_k = q_1q_2\cdots q_l$, where the $p_i, q_i$ are primes but not necessarily distinct, then $k = l$, and $q_1, \cdots, q_l$ are $p_1, \cdots, p_k$ in some order.
+\end{thm}
+
+\begin{proof}
+ Since we already showed that there is at least one way above, we only need to show uniqueness.
+
+ Let $p_1\cdots p_k = q_1\cdots q_l$. We know that $p_1 \mid q_1\cdots q_l$. Then $p_1 \mid q_1(q_2q_3\cdots q_l)$. Thus $p_1 \mid q_i$ for some $i$. wlog assume $i = 1$. Then $p_1 = q_1$ since both are primes. Thus $p_2p_3 \cdots p_k = q_2q_3\cdots q_l$. Likewise, we have $p_2 = q_2, \cdots$ and so on.
+\end{proof}
+
+\begin{cor}
+ Suppose $a = p_1^{i_1}p_2^{i_2}\cdots p_r^{i_r}$ and $b = p_1^{j_1}p_2^{j_2}\cdots p_r^{j_r}$, where the $p_i$ are distinct primes (exponents can be zero). Then $(a, b)=\prod p_k^{\min\{i_k, j_k\}}$. Likewise, $\lcm(a, b) = \prod p_k^{\max\{i_k, j_k\}}$. We have $\hcf(a, b)\times\lcm(a, b) = ab$.
+\end{cor}
+However, this is not an efficient way to calculate $(a, b)$, since prime factorization is very hard.
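A sketch illustrating the corollary: compute the hcf and lcm from the min and max of the exponents, and check the identity $\hcf(a,b)\times\lcm(a,b) = ab$ (helper names are our own):

```python
from collections import Counter
from math import gcd

def factor(n):
    """Prime factorisation of n as a Counter {prime: exponent}."""
    c, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            c[d] += 1
            n //= d
        d += 1
    if n > 1:
        c[n] += 1
    return c

def hcf_lcm(a, b):
    fa, fb = factor(a), factor(b)
    h = l = 1
    for p in set(fa) | set(fb):
        h *= p ** min(fa[p], fb[p])   # min of the exponents gives the hcf
        l *= p ** max(fa[p], fb[p])   # max of the exponents gives the lcm
    return h, l

h, l = hcf_lcm(60, 126)       # 60 = 2^2*3*5, 126 = 2*3^2*7
assert (h, l) == (6, 1260)
assert h * l == 60 * 126      # hcf(a, b) * lcm(a, b) = a * b
assert h == gcd(60, 126)      # agrees with Euclid's algorithm
```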
+
+Note that this is a property peculiar to natural numbers. There are ``arithmetical systems'' (permitting addition, multiplication and subtraction) where factorization is not unique, e.g.\ even numbers.
+
+\begin{eg}
+ The following systems do not have unique prime factorization:
+ \begin{enumerate}
+ \item Even numbers. ``Primes'' are twice an odd number. So 6 is a prime (NOT divisible by 2!) while 8 is not. We have $60 = 2\times 30 = 6\times 10$, where $2, 6, 10, 30$ are primes. However, this example is not ``proper'' since there is no identity element (i.e.\ it is not a ring).
+ \item Consider $\Z[\sqrt{-5}] = \{a + b\sqrt{-5} : a, b\in \Z\}$. We have $6 = 2\times 3 = (1 - \sqrt{-5})(1 + \sqrt{-5})$. It can be shown that these are primes (see IB Groups, Rings and Modules).
+ \end{enumerate}
+\end{eg}
+Exercise: Where does the proof of the Fundamental Theorem of Arithmetic fail in these examples?
+
+\section{Counting and integers}
+This chapter exists because experience shows that mathematicians do not know how to count.
+
+\subsection{Basic counting}
+A useful theorem is the pigeonhole principle.
+\begin{thm}[Pigeonhole Principle]
+ If we put $mn + 1$ pigeons into $n$ pigeonholes, then some pigeonhole has at least $m + 1$ pigeons.
+\end{thm}
+
+\begin{eg}
+ In Cambridge, there are 2 people who have the same number of hairs, since the number of hairs on a human head is smaller than the population of Cambridge.
+\end{eg}
+
+Another useful tool for counting is the indicator function.
+\begin{defi}[Indicator function/characteristic function]
+ Let $X$ be a set. For each $A\subseteq X$, the \emph{indicator function} or \emph{characteristic function} of $A$ is the function $i_A: X\to \{0, 1\}$ with $i_A(x) = 1$ if $x\in A$, $0$ otherwise. It is sometimes written as $\chi_A$.
+\end{defi}
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $i_A = i_B \Leftrightarrow A = B$
+ \item $i_{A\cap B} = i_A i_B$
+ \item $i_{\bar{A}} = 1 - i_A$
+ \item $i_{A\cup B} = 1 - i_{\overline{A\cup B}} = 1 - i_{\bar A\cap \bar B} = 1 - i_{\bar{A}}i_{\bar{B}} = 1 - (1 - i_A)(1 - i_B) = i_A + i_B - i_{A\cap B}$.
+ \item $i_{A\setminus B} = i_{A\cap \bar B} = i_Ai_{\bar B} = i_A(1 - i_B) = i_A - i_{A\cap B}$
+ \end{enumerate}
+\end{prop}
+
+\begin{eg}
+ We can use the indicator function to prove certain properties about sets:
+ \begin{enumerate}
+ \item Proof that $A\cap(B\cup C) = (A\cap B)\cup (A\cap C)$:
+ \begin{align*}
+ i_{A\cap (B\cup C)} &= i_Ai_{B\cup C}\\
+ &= i_A(i_B + i_C - i_Bi_C)\\
+ &= i_Ai_B + i_Ai_C - i_Ai_Bi_C\\
+ i_{(A\cap B)\cup (A\cap C)} &= i_{A\cap B} + i_{A\cap C} - i_{A\cap C}i_{A\cap B}\\
+ &= i_Ai_B + i_Ai_C - i_Ai_Ci_Ai_B\\
+ &= i_Ai_B + i_Ai_C - i_Ai_Bi_C
+ \end{align*}
+ Therefore $i_{A\cap (B\cup C)} = i_{(A\cap B)\cup (A\cap C)}$ and thus $A\cap(B\cup C) = (A\cap B)\cup (A\cap C)$.
+
+ Note that $i_A = i_A^2$ since $i_A = 0 $ or $1$, and $0^2 = 0$ and $1^2 = 1$.
+ \item Proof that the symmetric difference is associative: Observe that $i_{A\Delta B} \equiv i_A + i_B \pmod 2$. Thus $i_{(A\Delta B)\Delta C} = i_{A\Delta(B\Delta C)} \equiv i_A + i_B + i_C \pmod2$.
+ \end{enumerate}
+\end{eg}
+Indicator functions are handy for computing the sizes of finite sets because if $A\subseteq X$, then $|A| = \sum\limits_{x\in X}i_A(x)$.
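These identities are easy to verify numerically, representing an indicator function as a $\{0,1\}$-valued function; a sketch in Python (`indicator` is our own helper):

```python
X = set(range(10))
A, B = {1, 2, 3, 4}, {3, 4, 5}

def indicator(S):
    return lambda x: 1 if x in S else 0

i_A, i_B = indicator(A), indicator(B)

for x in X:
    # i_{A n B} = i_A * i_B, and i_{A u B} = i_A + i_B - i_{A n B}
    assert indicator(A & B)(x) == i_A(x) * i_B(x)
    assert indicator(A | B)(x) == i_A(x) + i_B(x) - i_A(x) * i_B(x)
    # symmetric difference: i_{A delta B} = i_A + i_B (mod 2)
    assert indicator(A ^ B)(x) == (i_A(x) + i_B(x)) % 2

# |A| = sum over x in X of i_A(x)
assert sum(i_A(x) for x in X) == len(A)
```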
+
+\begin{prop}
+ $|A\cup B| = |A| + |B| - |A\cap B|$
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ |A\cup B| &= \sum_{x\in X} i_{A\cup B}(x)\\
+ &= \sum (i_A(x) + i_B(x) - i_{A\cap B}(x))\\
+ &= \sum i_A(x) + \sum i_B(x) - \sum i_{A\cap B}(x)\\
+ &= |A| + |B| - |A\cap B|\qedhere
+ \end{align*}
+\end{proof}
+
+More importantly, we will use indicator functions to prove a powerful result.
+\begin{thm}[Inclusion-Exclusion Principle]
+ Let $A_i$ be subsets of a finite set $X$, for $1 \leq i\leq n$. Then
+ \[
 |\bar A_1\cap \cdots \cap \bar A_n| = |X| - \sum_i |A_i| + \sum_{i < j}|A_i\cap A_j| - \cdots + (-1)^n|A_1\cap \cdots \cap A_n|.
+ \]
+ Equivalently,
+ \[
+ |A_1\cup \cdots \cup A_n| = \sum_i|A_i| - \sum_{i < j}|A_i\cap A_j| + \cdots +(-1)^{n-1}|A_1\cap \cdots \cap A_n|.
+ \]
+ The two forms are equivalent since $|A_1\cup\cdots\cup A_n| = |X| - |\bar A_1\cap \cdots \cap \bar A_n|$.
+\end{thm}
+
+\begin{proof}
+ Using indicator functions,
+ \begin{align*}
+ i_{\bar A_1\cap \bar A_2\cap \cdots \cap \bar A_n} &= \prod_j i_{\bar A_j}\\
+ &= \prod_j (1 - i_{A_j})\\
+ &= 1 - \sum_i i_{A_i} + \sum _{i < j}i_{A_i}i_{A_j} - \cdots + (-1)^ni_{A_1}i_{A_2}\cdots i_{A_n}\\
+ &= 1 - \sum_i i_{A_i} + \sum_{i < j}i_{A_i\cap A_j} - \cdots + (-1)^ni_{A_1\cap A_2\cap A_3\cap \cdots \cap A_n}
+ \intertext{Thus}
+ |\bar A_1\cap \cdots \cap \bar A_n| &= \sum_{x\in X} i_{\bar A_1\cap \bar A_2\cap \cdots \cap \bar A_n}(x)\\
+ &= \sum_x 1 - \sum_i\sum_x i_{A_i}(x) + \sum_{i < j}\sum_x i_{A_i\cap A_j}(x) - \cdots\\
+ & + \sum_x (-1)^ni_{A_1\cap A_2\cap A_3\cap \cdots \cap A_n}(x)\\
+ &= |X| - \sum_i |A_i| + \sum_{i < j}|A_i\cap A_j|\\
 & - \sum_{i < j < k}|A_i\cap A_j\cap A_k| + \cdots + (-1)^n|A_1\cap A_2\cap \cdots \cap A_n|\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ How many numbers $\leq 200$ are coprime to 110?
+
+ Let $X = \{ 1, \cdots 200\}$, and $A_1 = \{x: 2 \mid x\}$, $A_2 = \{x: 5\mid x\}$, $A_3 = \{x: 11\mid x\}$. We know that
+ \begin{align*}
+ |A_1| &= \lfloor 200/2\rfloor = 100\\
+ |A_2| &= \lfloor 200/5\rfloor = 40\\
+ |A_3| &= \lfloor 200/11\rfloor = 18\\
+ |A_1\cap A_2| &= \lfloor 200/10\rfloor = 20\\
+ |A_1\cap A_3| &= \lfloor 200/22\rfloor = 9\\
+ |A_2\cap A_3| &= \lfloor 200/55\rfloor = 3\\
+ |A_1 \cap A_2\cap A_3| &= \lfloor 200/110\rfloor = 1
+ \end{align*}
+ Then the answer is $200 - 100 - 40 - 18 + 20 + 9 + 3 - 1 = 73$.
+\end{eg}
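The arithmetic in this example is easy to verify mechanically; here is an illustrative Python sketch comparing the inclusion-exclusion count with a direct count:

```python
# Numbers <= 200 coprime to 110 = 2 * 5 * 11, by inclusion-exclusion
# (as in the example) and by directly testing gcd.
from math import gcd

by_ie = (200
         - (200 // 2 + 200 // 5 + 200 // 11)
         + (200 // 10 + 200 // 22 + 200 // 55)
         - 200 // 110)
direct = sum(1 for x in range(1, 201) if gcd(x, 110) == 1)
assert by_ie == direct == 73
```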
+
+\subsection{Combinations}
``Combinations'' is about counting the ways we can pick things without regard to order. We can formulate these problems in the language of sets: given a set $X$, how many subsets of $X$ are there that satisfy some particular properties? For example, if we want to pick 3 people from 10, we can let $X$ be the set of the 10 people. Then the number of ways of picking the 3 people is the number of subsets of size three.
+
+\begin{eg}
 How many subsets of $\{1, 2, \cdots, n\}$ are there? There are $2\times 2\times \cdots \times 2 = 2^n$ of them: each element is either in or out of a subset, giving two independent choices per element. Equivalently, there are $2^n$ possible indicator functions, i.e.\ functions $\{1, 2, 3, \cdots, n\} \to \{0, 1\}$.
+\end{eg}
+
+\begin{defi}[Combination $\binom{n}{r}$]
+ The number of subsets of $\{1, 2, 3, \cdots, n\}$ of size $r$ is denoted by $\binom{n}{r}$. The symbol is pronounced as ``$n$ choose $r$''.
+\end{defi}
+This is the \emph{definition} of $\binom{n}{r}$. This does not in any way specify how we can actually calculate the value of $\binom{n}{r}$.
+
+\begin{prop}
+ By definition,
+ \[
+ \binom{n}{0} + \binom{n}{1} + \cdots + \binom{n}{n} = 2^n
+ \]
+\end{prop}
+
+\begin{thm}[Binomial theorem]
 For $n\in \N$ and $a, b\in \R$, we have
+ \[
+ (a + b)^n = \binom{n}{0}a^n b^0 + \binom{n}{1}a^{n-1}b^1 + \cdots + \binom{n}{r}a^{n - r}b^r + \cdots + \binom{n}{n}a^0b^n
+ \]
+\end{thm}
+
+\begin{proof}
 We have $(a + b)^n = (a + b)(a + b)\cdots (a + b)$. When we expand the product, we get all terms obtained by choosing $b$ from some of the brackets and $a$ from the rest. The term $a^{n - r}b^r$ comes from choosing $b$ from $r$ brackets and $a$ from the rest, and there are $\binom{n}{r}$ ways to make such a choice.
+\end{proof}
+This theorem is not immediately useful since we do not know the value of $\binom{n }{r}$!
+
+Because of this theorem, $\binom{n }{r}$ is sometimes called a ``binomial coefficient''.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\displaystyle\binom{n}{r} = \binom{n}{n - r}$. This is because choosing $r$ things to keep is the same as choosing $n - r$ things to throw away.
 \item $\displaystyle\binom{n }{r - 1} + \binom{n}{r} = \binom{n + 1}{r}$ (Pascal's identity). The RHS counts the number of ways to choose a team of $r$ players from $n + 1$ available players, one of whom is Pietersen. If Pietersen is chosen, there are $\binom{n}{r - 1}$ ways to choose the remaining players. Otherwise, there are $\binom{n}{r}$ ways. The total number of ways is thus $\binom{n}{r - 1} + \binom{n}{r}$.
+
+ Now given that $\binom{n}{0} =\binom{n}{n}= 1$, since there is only one way to choose nothing or everything, we can construct \emph{Pascal's triangle}:
+ \begin{center}
+ \begin{tabular}{lllllllll}
+ & & & & 1 & & & & \\
+ & & & 1 & & 1 & & & \\
+ & & 1 & & 2 & & 1 & & \\
+ & 1 & & 3 & & 3 & & 1 & \\
+ 1 & & 4 & & 6 & & 4 & & 1 \\
+ \end{tabular}
+ \end{center}
+ where each number is the sum of the two numbers above it, and the $r$th item of the $n$th row is $\binom{n}{r}$ (first row is row $0$).
 \item $\displaystyle\binom{n }{k}\binom{k }{r} = \binom{n}{r}\binom{n - r}{k - r}$. We are counting the number of pairs of sets $(Y, Z)$ with $|Y| = k$, $|Z| = r$ and $Z\subseteq Y$. On the LHS, we first choose $Y$, then choose $Z\subseteq Y$. On the RHS, we first choose $Z$, then choose the remaining $Y\setminus Z$ from $\{1, 2, \cdots, n\}\setminus Z$.
 \item $\displaystyle \binom{a}{r}\binom{b}{0} + \binom{a}{r - 1}\binom{b}{1} + \cdots + \binom{a }{r - k}\binom{b}{k} + \cdots + \binom{a}{0}\binom{b}{r} = \binom{a + b}{r}$ (Vandermonde's convolution). Suppose we have $a$ men and $b$ women, and we need to choose a committee of $r$ people. The right hand side is the total number of choices. The left hand side breaks the choices up according to the number of men versus women chosen.
+ \end{enumerate}
+\end{prop}
+
+\begin{eg}
+ A greengrocer stocks $n$ kinds of fruit. In how many ways can we choose a bag of $r$ fruits? If we are only allowed to choose one of each kind, then the answer is $\binom{n}{r}$. But we might have $r = 4$, and we want to allow picking $2$ apples, $1$ plum and $1$ quince. The total number of ways to choose is $\binom{n + r - 1}{r}$. Why?
+
 Each choice can be represented by a binary string of length $n + r - 1$, with $r$ 0's and $n - 1$ 1's. The string can be constructed as follows (by example): when $n = 5$ and $r = 8$, a possible binary string is $000100110010$. Each block of zeros records how many fruits of one kind are chosen, and the 1s separate consecutive kinds. In the string above, we have $3$ of type 1, $2$ of type 2, $0$ of type 3, $2$ of type 4 and $1$ of type 5.
+
+ Then clearly the number of possible strings is $\binom{n + r - 1}{r}$.
+\end{eg}
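The bijection between bags of fruit and binary strings is easy to test numerically; the following Python sketch (with small illustrative values of $n$ and $r$) compares a brute-force count of multisets with $\binom{n + r - 1}{r}$:

```python
# Count multisets of size r drawn from n kinds, by brute force and by
# the formula C(n + r - 1, r). Values of n, r are small so brute force is fast.
from itertools import combinations_with_replacement
from math import comb

n, r = 5, 3
brute = sum(1 for _ in combinations_with_replacement(range(n), r))
assert brute == comb(n + r - 1, r) == 35
```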
+
+\begin{prop}
+ $\displaystyle\binom{n}{r} = \frac{n!}{(n - r)!r!}$.
+\end{prop}
+
+\begin{proof}
+ There are $n(n - 1)(n - 2)\cdots (n - r + 1) = \frac{n!}{(n - r)!}$ ways to choose $r$ elements in order. Each choice of subsets is chosen this way in $r!$ orders, so the number of subsets is $\frac{n!}{(n - r)!r!}$.
+\end{proof}
+
+We might write $x^{\underline{r}}$ for the polynomial $x(x - 1)\cdots (x - r + 1)$. We call this ``$x$ to the $r$ falling''. We can write $\displaystyle \binom{n}{r} = \frac{n^{\underline{r}}}{r!}$. Multiplying Vandermonde by $r!$, we obtain the ``falling binomial theorem''
+\[
+ \binom{r}{0}a^{\underline{r}}b^{\underline{0}} + \binom{r}{1}a^{\underline{r - 1}}b^{\underline{1}} + \cdots + \binom{r}{r}a^{\underline{0}}b^{\underline{r}} = (a + b)^{\underline{r}}.
+\]
+
+\begin{eg}
 A bank prepares a letter for each of its $n$ customers, saying how much it cares. (Each of these letters costs the customer $\pounds 40$.) There are $n!$ ways to put the letters in the envelopes. In how many ways can this be done so that no one gets the right letter (i.e.\ how many \emph{derangements} are there of $n$ elements)?
+
 We let $X$ be the set of all envelopings (permutations of the $n$ letters), so $|X| = n!$. For each $i$, let $A_i = \{x\in X: x \text{ assigns the correct letter to customer }i\}$. We want to know $|\bigcap_i \bar A_i|$. We know that $|A_i| = (n - 1)!$, since $i$'s letter goes into $i$'s envelope and all the others can be placed arbitrarily. Similarly, $|A_i\cap A_j| = (n - 2)!$ and $|A_i\cap A_j \cap A_k| = (n - 3)!$.
+
+ By the inclusion-exclusion formula, we have
+ \begin{align*}
 \left|\bigcap_i \bar A_i\right| &= |X| - \sum |A_i| + \sum |A_i\cap A_j| - \cdots\\
+ &= n! - \binom{n}{1}(n - 1)! + \binom{n}{2}(n - 2)! - \cdots\\
+ &= n!\left(1 - \frac{1}{1!} + \frac{1}{2!} - \cdots + \frac{(-1)^n}{n!}\right)\\
+ &\approx n! e^{-1}
+ \end{align*}
+\end{eg}
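The derangement formula can be checked against a brute-force count for small $n$; an illustrative Python sketch:

```python
# Derangements: brute-force count vs the inclusion-exclusion formula
# n! (1 - 1/1! + 1/2! - ... + (-1)^n / n!), and comparison with n!/e.
from itertools import permutations
from math import factorial, exp

def derangements_brute(n):
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))

def derangements_formula(n):
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

for n in range(1, 8):
    assert derangements_brute(n) == derangements_formula(n)

# n!/e rounds to the exact count (here for n = 7).
assert derangements_formula(7) == round(factorial(7) / exp(1))
```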
+\subsection{Well-ordering and induction}
+Several proofs so far involved ``take the least integer such that'', e.g.\ division algorithm; or involved a sequence of moves ``and so on\ldots'' e.g.\ Euclid's algorithm, every number is a product of primes. We rely on the following:
+\begin{thm}[Weak Principle of Induction]
+ Let $P(n)$ be a statement about the natural number $n$. Suppose that
+ \begin{enumerate}
+ \item $P(1)$ is true
+ \item $(\forall n)\,P(n)\Rightarrow P(n + 1)$
+ \end{enumerate}
+ Then $P(n)$ is true for all $n\geq 1$.
+\end{thm}
+
+
+\begin{eg}[Tower of Hanoi]
+ Referring to the image below,
+ \begin{center}
+ \usetikzlibrary{shapes}
+ \definecolor{darkbrown}{rgb}{0.375,0.25,0.125}
+ \tikzset{
+ disc/.style={shade, shading=radial, rounded rectangle,minimum height=.5cm,
+ inner color=#1!20, outer color=#1!60!gray},
+ disc 1/.style={disc=yellow, minimum width=15mm},
+ disc 2/.style={disc=orange, minimum width=20mm},
+ disc 3/.style={disc=red, minimum width=25mm},
+ disc 4/.style={disc=green, minimum width=33mm},
+ }
+ \begin{tikzpicture}
+ \fill [darkbrown] (-0.125,2.5mm) rectangle (.125,3.5);
+ \fill [darkbrown] (3.075,2.5mm) rectangle (3.325,3.5);
+ \fill [darkbrown] (6.275,2.5mm) rectangle (6.525,3.5);
+
+ \fill [darkbrown] (-1.6, 0) rectangle (8,0.25);
+
+ \node[disc 4,yshift={5mm}] {$n$};
+ \node[yshift={11mm}] {$\vdots$};
+ \node[disc 3,yshift={15mm}] {3};
+ \node[disc 2,yshift={20mm}] {2};
+ \node[disc 1,yshift={25mm}] {1};
+
+ \node at (0, -.25) {A};
+ \node at (3.2, -.25) {B};
+ \node at (6.4, -.25) {C};
+ \end{tikzpicture}
+ \end{center}
+ The objective is to move the $n$ rings on peg A to peg B, with the constraints that you can only move one ring at a time, and you can never place a larger ring onto a smaller ring.
+
+ Now claim that this needs exactly $2^n - 1$ moves.
+
 Let $P(n)$ be ``$n$ rings need $2^n - 1$ moves''. Note that this statement contains two assertions --- (1) we can do it in $2^n - 1$ moves; (2) we can't do it in fewer.
+
 First consider $P(1)$. We simply have to move the single ring from A to B, which takes $1 = 2^1 - 1$ move. So $P(1)$ holds.
+
 Suppose we have $n + 1$ rings. We can move the top $n$ rings to peg C, then move the bottom ring to peg B, then move the $n$ rings from C to B. Assuming $P(n)$ is true, this needs at most $2\times (2^n - 1) + 1 = 2^{n + 1} - 1$ moves.
+
 Can we do it in fewer moves? To succeed, we must free the bottom ring, so we must first shift the top $n$ rings to another peg, which needs $\geq 2^n - 1$ moves by $P(n)$. Then we need at least one move for the bottom ring. Finally, we must shift the $n$ smaller rings onto the big one, which again needs $\geq 2^n - 1$ moves by $P(n)$. So we need $\geq 2^{n + 1} - 1$ moves altogether.
+
+ So we showed that $P(n)\Rightarrow P(n + 1)$ (we used $P(n)$ four times). By the WPI, $P(n)$ is true for all $n$.
+\end{eg}
+
+\begin{eg}
 All numbers are equal. Let $P(n)$ be ``if $\{a_1, \cdots, a_n\}$ is a set of $n$ numbers, then $a_1 = a_2 = \cdots = a_n$''. $P(1)$ is trivially true. Suppose we have $\{a_1, a_2, \cdots, a_{n+1}\}$. Assuming $P(n)$, apply it to $\{a_1, a_2, \cdots, a_n\}$ and $\{a_2, \cdots , a_{n + 1}\}$; then $a_1 = \cdots = a_n$ and $a_2 = a_3 = \cdots = a_{n + 1}$. So $a_1 = a_2 = \cdots = a_{n + 1}$. Hence $P(n)\Rightarrow P(n + 1)$, and so $P(n)$ is true for all $n\in \N$. (Of course the conclusion is absurd: the flaw is that the two sets do not overlap when $n = 1$, so the argument for $P(1)\Rightarrow P(2)$ fails.)
+\end{eg}
+
+\begin{thm}
+ Inclusion-exclusion principle.
+\end{thm}
+
+\begin{proof}
 Let $P(n)$ be the statement ``for any sets $A_1, \cdots, A_n$, we have $|A_1\cup \cdots \cup A_n| = \sum_i|A_i| - \sum_{i < j} |A_i\cap A_j| + \cdots + (-1)^{n - 1}|A_1\cap A_2\cap \cdots \cap A_n|$''.
+
 $P(1)$ is trivially true. $P(2)$ is also true (see above). Now given $A_1, \cdots, A_{n + 1}$, let $B_i = A_i\cap A_{n + 1}$ for $1 \leq i\leq n$. We apply $P(n)$ both to the $A_i$ and to the $B_i$.
+
+ Now observe that $B_i\cap B_j = A_i\cap A_j \cap A_{n + 1}$. Likewise, $B_i\cap B_j\cap B_k = A_i\cap A_j\cap A_k\cap A_{n + 1}$. Now
+ \begin{align*}
+ |A_1\cup A_2\cup \cdots \cup A_{n + 1}| &= |A_1\cup \cdots \cup A_n| + |A_{n + 1}| - |(A_1\cup\cdots\cup A_n)\cap A_{n + 1}|\\
+ &= |A_1\cup \cdots \cup A_n| + |A_{n + 1}| - |B_{1}\cup \cdots \cup B_n|\\
+ &= \sum_{i\leq n}|A_i| - \sum_{i < j\leq n}|A_i\cap A_j| + \cdots + |A_{n + 1}| \\
+ &-\sum_{i \leq n}|B_i| + \sum_{i < j\leq n}|B_i\cap B_j| - \cdots \\
+ \intertext{Note $\sum\limits_{i\leq n}|B_i| = \sum\limits_{i\leq n}|A_i\cap A_{n + 1}|$. So $\sum\limits_{i < j\leq n}|A_i\cap A_j| + \sum\limits_{i\leq n}|B_i| = \sum\limits_{i< j \leq n + 1}|A_i \cap A_j|$, and similarly for the other terms. So}
+ &= \sum_{i \leq n + 1}|A_i| - \sum_{i< j \leq n + 1}|A_i \cap A_j| + \cdots
+ \end{align*}
+ So $P(n)\Rightarrow P(n + 1)$ for $n\geq 2$. By WPI, $P(n)$ is true for all $n$.
+\end{proof}
+
+However, WPI is not quite what we want for ``every number is a product of primes''. We need a different form of induction.
+
+\begin{thm}[Strong principle of induction]
+ Let $P(n)$ be a statement about $n\in \N$. Suppose that
+ \begin{enumerate}
+ \item $P(1)$ is true
 \item $\forall n\in \N$, if $P(k)$ is true $\forall k < n$, then $P(n)$ is true.
+ \end{enumerate}
 Then $P(n)$ is true for all $n\in \N$.
+\end{thm}
+Note that (i) is redundant as it follows from (ii), but we state it for clarity.
+
+\begin{eg}
 ``Evolutionary trees''. Imagine that we have a mutant that can produce two offspring. Each offspring is either an animal or another mutant. A possible evolutionary tree is as follows:
+
+ \tikzstyle{level 1}=[level distance=1cm, sibling distance=3.5cm]
+ \tikzstyle{level 2}=[level distance=1cm, sibling distance=2cm]
+ \begin{center}
 \begin{tikzpicture}
 \node {mutant}
 child {
 node {mutant} edge from parent
 child {
 node {pig} edge from parent
 }
 child {
 node {mutant} edge from parent
 child {
 node {slug} edge from parent
 }
 child {
 node {man} edge from parent
 }
 }
 }
 child {
 node {mutant} edge from parent
 child {
 node {gnu} edge from parent
 }
 child {
 node {ibex} edge from parent
 }
 };
 \end{tikzpicture}
+ \end{center}
 Let $P(n)$ be the statement ``$n - 1$ mutants produce $n$ animals''. Given some tree with $n$ animals, remove the top mutant to get two subtrees, with $n_1$ and $n_2$ animals respectively, where $n_1 + n_2 = n$. If $P(k)$ is true $\forall k < n$, then $P(n_1)$ and $P(n_2)$ are true. So the total number of mutants is $1 + (n_1 - 1) + (n_2 - 1) = n - 1$. So $P(n)$ is true. Hence by the strong principle of induction, $P(n)$ is true for all $n$.
+\end{eg}
+
+\begin{thm}
+ The strong principle of induction is equivalent to the weak principle of induction.
+\end{thm}
+
+\begin{proof}
+ Clearly the strong principle implies the weak principle since if $P(n)\Rightarrow P(n + 1)$, then $(P(1)\wedge P(2)\wedge \cdots \wedge P(n))\Rightarrow P(n + 1)$.
+
+ Now show that the weak principle implies the strong principle. Suppose that $P(1)$ is true and $(\forall n)\,P(1)\wedge P(2)\wedge \cdots \wedge P(n - 1)\Rightarrow P(n)$. We want to show that $P(n)$ is true for all $n$ using the weak principle.
+
+ Let $Q(n) = $ ``$P(k)$ is true $\forall k\leq n$''. Then $Q(1)$ is true. Suppose that $Q(n)$ is true. Then $P(1)\wedge P(2)\wedge\cdots \wedge P(n)$ is true. So $P(n+1)$ is true. Hence $Q(n + 1)$ is true. By the weak principle, $Q(n)$ is true for all $n$. So $P(n)$ is true for all $n$.
+\end{proof}
+While strong and weak induction are practically equivalent, they are rather distinct conceptually. Weak induction is expressed in terms of ``adding 1'', while strong induction is based on the ordering of natural numbers.
+
+It turns out that there is another statement about the natural numbers that can be stated in terms of orders. We first formally define what it means to be an order.
+
+\begin{defi}[Partial order]
 A \emph{partial order} on a set is a reflexive, antisymmetric ($(aRb) \wedge (bRa) \Rightarrow a = b$) and transitive relation.
+\end{defi}
+
+\begin{eg}
 The usual ordering $a\leq b$ is a partial order on $\N$. Divisibility $a\mid b$ is also a partial order on $\N$.
+\end{eg}
+
+\begin{defi}[Total order]
 A \emph{total order} is a partial order where for all $a\not= b$, exactly one of $aRb$ or $bRa$ holds. This means that any two elements are comparable.
+\end{defi}
+
+\begin{defi}[Well-ordered total order]
+ A total order is \emph{well-ordered} if every non-empty subset has a minimal element, i.e.\ if $S\not= \emptyset$, then $\exists m\in S$ such that $x < m \Rightarrow x\not\in S$.
+\end{defi}
+
+\begin{eg}
+ $\Z$ with the usual order is not well-ordered since the set of even integers has no minimum. The positive rationals are also not well-ordered under the usual order.
+\end{eg}
+
+\begin{thm}[Well-ordering principle]
+ $\N$ is well-ordered under the usual order, i.e.\ every non-empty subset of $\N$ has a minimal element.
+\end{thm}
+
+\begin{thm}
+ The well-ordering principle is equivalent to the strong principle of induction.
+\end{thm}
+
+\begin{proof}
 First prove that well-ordering implies strong induction. Consider a proposition $P(n)$, and suppose that whenever $P(k)$ is true $\forall k < n$, then $P(n)$ is true.

 Assume the contrary, i.e.\ $P(n)$ fails for some $n$. Then the set $S = \{n\in \N : \neg P(n)\}$ is non-empty, so by well-ordering it has a minimal element $m$. Since $m$ is the minimal counterexample to $P$, $P(k)$ is true for all $k < m$. But then the hypothesis implies that $P(m)$ is true, which is a contradiction. Therefore $P(n)$ must be true for all $n$.
+
+ To show that strong induction implies well-ordering, let $S\subseteq \N$. Suppose that $S$ has no minimal element. We need to show that $S$ is empty. Let $P(n)$ be the statement $n\not\in S$.
+
+ Certainly $1\not\in S$, or else it will be the minimal element. So $P(1)$ is true. Suppose we know that $P(k)$ is true for all $k < n$, i.e.\ $k\not\in S$ for all $k < n$. Now $n\not\in S$, or else $n$ will be the minimal element. So $P(n)$ is true. By strong induction, $P(n)$ is true for all $n$, i.e.\ $S$ is empty.
+\end{proof}
+The well-ordering principle enables us to show that $P(n)$ is true as follows: if $P(n)$ fails for some $n$, then there is a minimal counterexample $m$. Then we try to show that this leads to a contradiction.
+
+\begin{eg}
 Proof that every number is a product of primes, using the well-ordering principle: Assume the contrary. Then by the well-ordering principle there exists a minimal $n$ that cannot be written as a product of primes. If $n$ is a prime, then $n$ is trivially a product of primes. Otherwise, write $n = ab$, where $1 < a, b < n$. By minimality of $n$, both $a$ and $b$ are products of primes. Hence so is $n$. Contradiction.
+\end{eg}
+
+\begin{eg}
+ All numbers are interesting. Suppose that there are uninteresting numbers. Then there exists a smallest uninteresting number. Then the property of being the smallest uninteresting number is itself interesting. Contradiction.
+\end{eg}
+
+\begin{eg}
+ Consider a total order on $\N\times \N$ by ``lexicographic'' or ``dictionary'' order, i.e.\ $(a, b) \leq (c, d)$ if $a < c$ or $(a = c\wedge b\leq d)$.
+
 The Ackermann function is the function $a:\N_0\times \N_0 \to \N$ defined by
+ \[
+ a(m, n) =\begin{cases}n+1 & \mbox{if } m = 0 \\a(m-1, 1) & \mbox{if } m > 0 \mbox{ and } n = 0 \\a(m-1, a(m, n-1)) & \mbox{if } m > 0 \mbox{ and } n > 0.\end{cases}
+ \]
+ We want to show that this is well-defined.
+
 Note that $a(m, n)$ is expressed in terms of $a$ at points $(x, y) < (m, n)$. So $a$ is well-defined if the lexicographic order is well-ordered, i.e.\ every non-empty subset has a minimal element: if $a$ were not well-defined, there would be a smallest point at which the definition fails, but the definition at that point refers only to smaller points, where $a$ is well-defined. Contradiction.
+
 We can see that $\N_0\times \N_0$ is well-ordered: if $S\subseteq \N_0\times \N_0$ is non-empty, let $S_x = \{x\in \N_0: (\exists y)\,(x, y)\in S\}$, i.e.\ the set of all $x$-coordinates of elements of $S$. By the well-ordering principle, $S_x$ has a minimal element $m$. Then let $S_y = \{y\in \N_0: (m, y)\in S\}$. $S_y$ is non-empty, so it has a minimal element $n$. Then $(m, n)$ is the minimal element of $S$.
+\end{eg}
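Since the recursion only ever refers to lexicographically smaller points, the function can actually be computed; an illustrative Python sketch with memoisation (the closed forms in the comments are standard known values for small $m$):

```python
# The Ackermann function as defined above. Memoisation keeps the (enormous
# number of) repeated subcalls feasible for small arguments.
from functools import lru_cache

@lru_cache(maxsize=None)
def a(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return a(m - 1, 1)
    return a(m - 1, a(m, n - 1))

assert a(1, 3) == 5    # a(1, n) = n + 2
assert a(2, 3) == 9    # a(2, n) = 2n + 3
assert a(3, 3) == 61   # a(3, n) = 2^(n+3) - 3
```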
+
+\section{Modular arithmetic}
+We are going to study \emph{modular arithmetic}. In modular arithmetic, we first pick a particular natural number to be the \emph{modulus}, say 7. Then we consider two numbers to be ``equal'' if their difference is a multiple of the modulus. For example, modulo 7, we will think that 3 and 10 are ``the same'', while 2 and 4 are different.
+
We will study arithmetic under this number system. Like the integers, we are allowed to add and multiply numbers. However, while in $\Z$ we can only divide by $1$ and $-1$, in modular arithmetic more numbers can be divided. For example, modulo 10, we are allowed to divide by $3$.
+
+An important application of modular arithmetic is RSA encryption. This is a widely deployed asymmetric encryption algorithm. By asymmetric, we mean that the key for encryption is different from the key for decryption. This is very useful in real life, since we can broadcast the encryption key to the world, and keep the decryption key to ourselves. This way anyone can send you encrypted messages that only you can decrypt.
+
+\subsection{Modular arithmetic}
+\begin{defi}[Modulo]
+ If $a, b\in \Z$ have the same remainder after division by $m$, i.e.\ $m \mid (a - b)$, we say $a$ and $b$ are \emph{congruent modulo} $m$, and write
+ \[
+ a\equiv b\pmod m
+ \]
+\end{defi}
+
+\begin{eg}
+ The check digits of the ISBN (or Hong Kong ID Card Number) are calculated modulo 11.
+\end{eg}
+
+\begin{eg}
+ $9 \equiv 0\pmod 3$, $11\equiv 6\pmod 5$.
+\end{eg}
+
+\begin{prop}
+ If $a\equiv b\pmod m$, and $d \mid m$, then $a \equiv b\pmod d$.
+\end{prop}
+
+\begin{proof}
+ $a\equiv b\pmod m$ if and only if $m \mid (a - b)$, hence $d \mid (a - b)$, i.e.\ $a \equiv b\pmod d$.
+\end{proof}
+
+Observe that with $m$ fixed, $a\equiv b\pmod m$ is an equivalence relation. The set of equivalence classes is written as $\Z_m$ or $\Z/(m\Z)$.
+
+\begin{eg}
+ $\Z_3 = \{[0], [1], [2]\}$
+\end{eg}
+
+\begin{prop}
+ If $a\equiv b\pmod m$ and $u\equiv v \pmod m$, then $a + u\equiv b + v\pmod m$ and $au \equiv bv \pmod m$.
+\end{prop}
+
+\begin{proof}
 Since $a\equiv b\pmod m$ and $u\equiv v \pmod m$, we have $m \mid (a - b) + (u - v) = (a + u) - (b + v)$. So $a + u\equiv b + v\pmod m$.
+
+ Since $a\equiv b\pmod m$ and $u\equiv v \pmod m$, we have $m \mid (a - b)u + b(u - v) = au - bv$. So $au \equiv bv \pmod m$.
+\end{proof}
+
This means that we can do arithmetic modulo $m$. Formally, we are doing arithmetic with the congruence classes, i.e.\ in $\Z_m$. For example, in $\Z_7$, $[4] + [5] = [9] = [2]$.
+
+Modular arithmetic can sometimes be used to show that equations have no solutions.
+\begin{eg}
+ $2a^2 + 3b^3 = 1$ has no solutions in $\Z$. If there were a solution, then $2a^2 \equiv 1\pmod 3$. But $2\cdot 0^2 \equiv 0$, $2\cdot 1^2\equiv 2$ and $2\cdot 2^2 \equiv 2$. So there is no solution to the congruence, and hence none to the original equation.
+\end{eg}
+
+Observe that all odd numbers are either $\equiv 1\pmod 4$ or $\equiv 3\equiv -1\pmod 4$. So we can classify primes depending on their value modulo $4$.
+
+\begin{thm}
+ There are infinitely many primes that are $\equiv -1 \pmod 4$.
+\end{thm}
+
+\begin{proof}
 Suppose not, and let $p_1, \cdots, p_k$ be all the primes $\equiv -1 \pmod 4$. Let $N = 4p_1p_2\cdots p_k - 1$. Then $N\equiv -1\pmod 4$. Now $N$ is a product of primes, say $N= q_1q_2\cdots q_\ell$. But $2\nmid N$ and $p_i\nmid N$ for all $i$, so every $q_i$ is an odd prime $\not\equiv -1 \pmod 4$, i.e.\ $q_i \equiv 1\pmod 4$ for all $i$. But then that implies $N = q_1q_2\cdots q_\ell \equiv 1\pmod 4$, which is a contradiction.
+\end{proof}
+
+\begin{eg}
+ Solve $7x \equiv 2 \pmod {10}$. Note that $3\cdot 7\equiv 1\pmod {10}$. If we multiply the equation by $3$, then we get $3\cdot 7\cdot x \equiv 3\cdot 2\pmod {10}$. So $x\equiv 6\pmod {10}$. Effectively, we divided by $7$.
+\end{eg}
+``Division'' doesn't always work for all numbers, e.g.\ you cannot divide by $2$ mod 10. We give a name to numbers we can divide.
+
+\begin{defi}[Unit (modular arithmetic)]
+ $u$ is a \emph{unit} if $\exists v$ such that $uv \equiv 1\pmod m$.
+\end{defi}
+
+\begin{thm}
+ $u$ is a unit modulo $m$ if and only if $(u, m) = 1$.
+\end{thm}
+
+\begin{proof}
 $(\Rightarrow)$ Suppose $u$ is a unit. Then $\exists v$ such that $uv \equiv 1\pmod m$. Then $uv = 1 + mn$ for some $n$, i.e.\ $uv - mn = 1$. So $1$ is a linear combination of $u$ and $m$; since $(u, m)$ divides every such combination, $(u, m) = 1$.
+
+ $(\Leftarrow)$ Suppose that $(u, m) = 1$. Then there exists $a, b$ with $ua + mb = 1$. Thus $ua \equiv 1\pmod m$.
+\end{proof}
+Using the above proof, we can find the inverse of a unit efficiently by Euclid's algorithm.
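The extended version of Euclid's algorithm produces the coefficients $a, b$ with $ua + mb = 1$ directly; an illustrative Python sketch, checked against the example $7x \equiv 2 \pmod{10}$ above:

```python
# Extended Euclid: given x, y, return (g, a, b) with g = gcd(x, y) = xa + yb.
# When (u, m) = 1, the coefficient of u is the inverse of u mod m.
def ext_gcd(x, y):
    if y == 0:
        return x, 1, 0
    g, a, b = ext_gcd(y, x % y)
    return g, b, a - (x // y) * b

def inverse(u, m):
    g, a, _ = ext_gcd(u, m)
    assert g == 1, "u must be a unit mod m"
    return a % m

assert inverse(7, 10) == 3             # as in the example above
assert (inverse(7, 10) * 2) % 10 == 6  # so 7x = 2 (mod 10) gives x = 6
```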
+
+\begin{cor}
+ If $(a, m) = 1$, then the congruence $ax \equiv b\pmod m$ has a unique solution (mod $m$).
+\end{cor}
+
+\begin{proof}
+ If $ax\equiv b\pmod m$, and $(a, m) = 1$, then $\exists a^{-1}$ such that $a^{-1}a\equiv 1\pmod m$. So $a^{-1}ax\equiv a^{-1}b\pmod m$ and thus $x\equiv a^{-1}b\pmod m$. Finally we check that $x \equiv a^{-1}b \pmod m$ is indeed a solution: $ax \equiv aa^{-1}b \equiv b \pmod m$.
+\end{proof}
+
+\begin{prop}
+ There is a solution to $ax \equiv b\pmod m$ if and only if $(a, m) \mid b$.
+
 If $d = (a, m) \mid b$, then the solution is the unique solution to $\frac{a}{d}x \equiv \frac{b}{d} \pmod {\frac{m}{d}}$.
+\end{prop}
+
+\begin{proof}
 Let $d = (a, m)$. If there is a solution to $ax \equiv b\pmod m$, then $m \mid ax - b$, so $d \mid ax - b$. Since $d \mid a$, it follows that $d \mid b$.
+
 Conversely, if $d \mid b$, we have $ax \equiv b\pmod m \Leftrightarrow ax - b = km$ for some $k\in \Z$. Write $a = da'$, $b = db'$ and $m = dm'$. So $ax\equiv b\pmod m \Leftrightarrow da'x - db' = dkm'\Leftrightarrow a'x - b' = km'\Leftrightarrow a'x \equiv b'\pmod {m'}$. Note that $(a', m') = 1$ since we divided by the greatest common factor. Then this has a unique solution modulo $m'$.
+\end{proof}
+
+\begin{eg}
+ $2x \equiv 3 \pmod 4$ has no solution since $(2, 4) = 2$ which does not divide $3$.
+\end{eg}
+
+\subsection{Multiple moduli}
+In this section, we are concerned with multiple equations involving different moduli.
+
+Suppose we are given $x \equiv 2\pmod 3$ and $x\equiv 1\pmod 4$. What is the general solution to $x$? We work in mod $12$. Since we are given that $x\equiv 2\pmod 3$, we know that $x \equiv 2, 5, 8$ or $11 \pmod {12}$. Similarly, since $x \equiv 1 \pmod 4$, we must have $x \equiv 1, 5$ or $9\pmod {12}$. Combining these results, we must have $x \equiv 5\pmod {12}$.
+
+On the other hand, if $x\equiv 5\pmod {12}$, then $x\equiv 5\equiv 2 \pmod 3$ and $x\equiv 5\equiv 1\pmod 4$. So $x\equiv 5\pmod {12}$ is indeed the most general solution.
+
+In general, we have the \emph{Chinese remainder theorem}.
+\begin{thm}[Chinese remainder theorem]
+ Let $(m, n) = 1$ and $a, b\in \Z$. Then there is a unique solution (modulo $mn$) to the simultaneous congruences
+ \[
+ \begin{cases}x\equiv a\pmod m \\ x\equiv b\pmod n \end{cases},
+ \]
+ i.e.\ $\exists x$ satisfying both and every other solution is $\equiv x \pmod {mn}$.
+\end{thm}
+
+\begin{proof}
 Since $(m, n) = 1$, $\exists u, v\in \Z$ with $um + vn = 1$. Then $vn \equiv 1\pmod m$ and $um \equiv 1 \pmod n$. Put $x = umb + vna$. Then $x\equiv vna\equiv a\pmod m$ and $x\equiv umb\equiv b\pmod n$.
+
+ To show it is unique, suppose both $y$ and $x$ are solutions to the equation. Then
+ \begin{align*}
+ &\; y \equiv a\pmod m \text{ and } y \equiv b\pmod n\\
+ \Leftrightarrow &\; y \equiv x\pmod m \text{ and } y\equiv x\pmod n\\
+ \Leftrightarrow &\; m \mid y - x\text{ and } n \mid y - x\\
+ \Leftrightarrow &\; mn \mid y - x\\
+ \Leftrightarrow &\; y \equiv x\pmod {mn}\qedhere
+ \end{align*}
+\end{proof}
+This shows a congruence (mod $mn$) is equivalent to one (mod $n$) and another (mod $m$).
+
+We can easily extend this to more than two moduli by repeatedly applying this theorem.
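The proof is constructive, and translates directly into code; an illustrative Python sketch, checked against the worked example above:

```python
# CRT following the proof: with um + vn = 1, the number x = umb + vna
# satisfies x = a (mod m) and x = b (mod n), uniquely mod mn.
def ext_gcd(x, y):
    if y == 0:
        return x, 1, 0
    g, p, q = ext_gcd(y, x % y)
    return g, q, p - (x // y) * q

def crt(a, m, b, n):
    g, u, v = ext_gcd(m, n)
    assert g == 1, "moduli must be coprime"
    return (u * m * b + v * n * a) % (m * n)

x = crt(2, 3, 1, 4)
assert x == 5                      # the worked example: x = 5 (mod 12)
assert x % 3 == 2 and x % 4 == 1
```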
+
+\begin{prop}
 If $(m,n) = 1$, then $c$ is a unit mod $mn$ iff $c$ is a unit both mod $m$ and mod $n$.
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ If $\exists u$ such that $cu \equiv 1 \pmod {mn}$, then $cu \equiv 1\pmod m$ and $cu\equiv 1\pmod n$. So $c$ is a unit mod $m$ and $n$.
+
 $(\Leftarrow)$ Suppose there exist $u, v$ such that $cu\equiv 1\pmod m$ and $cv \equiv 1\pmod n$. Then by CRT, $\exists w$ with $w\equiv u \pmod m$ and $w\equiv v\pmod n$. Then $cw\equiv cu\equiv 1\pmod m$ and $cw\equiv cv\equiv 1\pmod n$.

 Now $1$ trivially satisfies the same two congruences $x\equiv 1\pmod m$ and $x\equiv 1\pmod n$. By the ``uniqueness'' part of the Chinese remainder theorem, we must have $cw\equiv 1\pmod {mn}$, so $c$ is a unit mod $mn$.
+\end{proof}
+
+\begin{defi}[Euler's totient function]
 We denote by $\phi(m)$ the number of integers $a$ with $0\leq a < m$ such that $(a, m) = 1$, i.e.\ such that $a$ is a unit mod $m$. Note $\phi(1) = 1$.
+\end{defi}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\phi(mn) = \phi(m)\phi(n)$ if $(m, n) = 1$, i.e.\ $\phi$ is multiplicative.
+ \item If $p$ is a prime, $\phi(p) = p - 1$
+ \item If $p$ is a prime, $\phi(p^k) = p^k - p^{k - 1} = p^k(1 - 1/p)$
+ \item $\phi(m) = m\prod_{p \mid m}(1 - 1/p)$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ We will only prove (iv). In fact, we will prove it twice.
+ \begin{enumerate}
+ \item Suppose $m = p_1^{k_1}p_2^{k_2}\cdots p_\ell^{k_\ell}$. Then
+ \begin{align*}
+ \phi(m) &= \phi(p_1^{k_1})\phi(p_2^{k_2})\cdots \phi(p_\ell^{k_\ell})\\
+ &= p_1^{k_1}(1-1/p_1)p_2^{k_2}(1 - 1/p_2)\cdots p_\ell^{k_\ell}(1 - 1/p_\ell)\\
+ &= m\prod_{p \mid m}(1 - 1/p)
+ \end{align*}
 \item Let $m = p_1^{k_1}p_2^{k_2}\cdots p_\ell^{k_\ell}$. Let $X = \{0, \cdots, m - 1\}$ and $A_j = \{x\in X: p_j\mid x\}$. Then $|X| = m$, $|A_j| = m/p_j$, $|A_i\cap A_j| = m/(p_ip_j)$ etc. So by inclusion-exclusion, $\phi(m) = |\bar A_1\cap \bar A_2\cap \cdots \cap \bar A_\ell| = m\prod_{p \mid m}(1 - 1/p)$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ $\phi(60) = 60(1 - 1/2)(1 - 1/3)(1 - 1/5) = 16$.
+\end{eg}
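The product formula can be checked against the definition by direct counting; an illustrative Python sketch:

```python
# Euler's totient: direct gcd count vs the product formula
# phi(m) = m * prod over primes p | m of (1 - 1/p), done in integers.
from math import gcd

def phi_direct(m):
    return sum(1 for a in range(m) if gcd(a, m) == 1)

def phi_formula(m):
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            result -= result // p   # multiply by (1 - 1/p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:                        # leftover prime factor
        result -= result // n
    return result

assert phi_formula(60) == phi_direct(60) == 16   # the example above
assert all(phi_formula(m) == phi_direct(m) for m in range(1, 200))
```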
+
+If $a, b$ are both units (mod $m$), then so is $ab$, for if $au \equiv 1$ and $bv \equiv 1$, then $(ab)(uv)\equiv 1$. So the units form a multiplicative group of size $\phi(m)$.
+
+\subsection{Prime moduli}
+Modular arithmetic has some nice properties when the modulus is a prime number.
+\begin{thm}[Wilson's theorem]
+ $(p - 1)! \equiv -1\pmod p$ if $p$ is a prime.
+\end{thm}
+
+\begin{proof}
 If $p$ is a prime, then $1, 2, \cdots, p - 1$ are units. Among these, we can pair each number up with its inverse (e.g.\ 3 with 4 modulo 11). The only elements that cannot be paired with a different number are $1$ and $-1$, which are self-inverse, as shown below:
+ \begin{align*}
+ &\;x^2 \equiv 1\pmod p\\
+ \Leftrightarrow&\; p \mid (x^2 - 1)\\
+ \Leftrightarrow&\; p \mid (x - 1)(x + 1)\\
+ \Leftrightarrow&\; p \mid x - 1 \text{ or } p \mid x + 1\\
+ \Leftrightarrow&\; x \equiv \pm 1\pmod p
+ \end{align*}
 Now $(p - 1)!$ is a product of $(p - 3)/2$ inverse pairs (each contributing $1$) together with $1$ and $-1$. So the product is $-1$.
+\end{proof}
+
+\begin{thm}[Fermat's little theorem]
+ Let $p$ be a prime. Then $a^p \equiv a\pmod p$ for all $a\in \Z$. Equivalently, $a^{p - 1}\equiv 1\pmod p$ if $a\not\equiv 0 \pmod p$.
+\end{thm}
+
+\begin{proof}
+ Two proofs are offered:
+ \begin{enumerate}
+ \item The numbers $\{1, 2, \cdots p - 1\}$ are units modulo $p$ and form a group of order $p - 1$. So $a^{p - 1} \equiv 1$ by Lagrange's theorem.
+ \item If $a\not\equiv 0$, then $a$ is a unit. So $ax \equiv ay$ iff $x\equiv y$. Then $a, 2a, 3a, \cdots, (p - 1)a$ are distinct mod $p$. So they are congruent to $1, 2, \cdots, p - 1$ in some order. Hence $a\cdot 2a\cdot 3a\cdots (p - 1)a\equiv 1\cdot 2\cdot 3 \cdots (p - 1)$. So $a^{p - 1}(p - 1)! \equiv (p - 1)!$. So $a^{p - 1} \equiv 1\pmod p$.\qedhere
+ \end{enumerate}
+\end{proof}
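Both theorems are easy to verify numerically. A quick sketch using Python's built-in three-argument `pow`, which performs modular exponentiation:

```python
from math import factorial

p = 13  # any prime will do
# Wilson: (p-1)! is congruent to -1, i.e. to p - 1, mod p
assert factorial(p - 1) % p == p - 1
# Fermat: a^(p-1) is congruent to 1 mod p whenever p does not divide a
assert all(pow(a, p - 1, p) == 1 for a in range(1, p))
print("checks pass for p =", p)
```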
+Neither Wilson's nor Fermat's theorem holds if the modulus is not prime. However, Fermat's theorem can be generalized:
+\begin{thm}[Fermat-Euler Theorem]
+ Let $a, m$ be coprime. Then
+ \[
+ a^{\phi(m)} \equiv 1\pmod m.
+ \]
+\end{thm}
+
+\begin{proof}
+ This follows immediately from Lagrange's theorem, since the units mod $m$ form a group of order $\phi(m)$.
+
+ Alternatively, let $U = \{x\in \N: 0 < x < m, (x, m) = 1\}$. These are the $\phi(m)$ units. Since $a$ is a unit, $ax\equiv ay \pmod m$ only if $x\equiv y\pmod m$. So if $U = \{u_1, u_2, \cdots, u_{\phi(m)}\}$, then $\{au_1, au_2, \cdots, au_{\phi(m)}\}$ are distinct and are units. So they must be $u_1, \cdots, u_{\phi(m)}$ in some order. Then $au_1au_2\cdots au_{\phi(m)} \equiv u_1u_2\cdots u_{\phi(m)}$. So $a^{\phi(m)}z \equiv z$, where $z = u_1u_2\cdots u_{\phi(m)}$. Since $z$ is a unit, we can multiply by its inverse and obtain $a^{\phi(m)} \equiv 1$.
+\end{proof}
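The Fermat-Euler theorem can likewise be spot-checked. A minimal sketch; `phi` is our own brute-force totient, not from the notes:

```python
from math import gcd

def phi(m):
    """Euler's totient, computed directly from the definition."""
    return sum(1 for x in range(1, m + 1) if gcd(x, m) == 1)

m = 20
units = [a for a in range(1, m) if gcd(a, m) == 1]
assert len(units) == phi(m)
# a^phi(m) is congruent to 1 mod m for every unit a
assert all(pow(a, phi(m), m) == 1 for a in units)
print(phi(m))
```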
+
+\begin{defi}[Quadratic residues]
+ The \emph{quadratic residues} are the ``squares'' mod $p$, i.e.\ $1^2, 2^2, \cdots, (p - 1)^2$.
+\end{defi}
+
+Note that if $a^2 \equiv b^2\pmod p$, then $p \mid a^2 - b^2 = (a - b)(a + b)$. Then $p \mid a - b$ or $p \mid a + b$. So $a\equiv \pm b\pmod p$. Thus, for odd $p$, every non-zero square is the square of exactly two residues.
+
+\begin{eg}
+ If $p = 7$, then $1^2 \equiv 6^2 \equiv 1$, $2^2 \equiv 5^2 \equiv 4$, $3^2 \equiv 4^2 \equiv 2$. So $1, 2, 4$ are quadratic residues. $3, 5, 6$ are not.
+\end{eg}
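The example can be reproduced by squaring every non-zero residue:

```python
p = 7
# collect the distinct values of x^2 mod p for x = 1, ..., p - 1
residues = sorted({(x * x) % p for x in range(1, p)})
print(residues)  # [1, 2, 4], matching the example
```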
+
+\begin{prop}
+ If $p$ is an odd prime, then $-1$ is a quadratic residue if and only if $p\equiv 1\pmod 4$.
+\end{prop}
+
+\begin{proof}
+ If $p \equiv 1 \pmod 4$, say $p = 4k + 1$, then by Wilson's theorem, $-1 \equiv (p - 1)!\equiv 1\cdot 2\cdots \left(\frac{p - 1}{2}\right)\left(-\frac{p - 1}{2}\right)\cdots (-2)(-1)\equiv (-1)^{\frac{p - 1}{2}}\left(\frac{p - 1}{2}\right)!^2 = (-1)^{2k}((2k)!)^2 \equiv ((2k)!)^2$. So $-1$ is a quadratic residue.
+
+ When $p \equiv -1\pmod 4$, i.e.\ $p = 4k + 3$, suppose $-1$ is a square, i.e.\ $-1 \equiv z^2$. Then by Fermat's little theorem, $1\equiv z^{p - 1} \equiv z^{4k+ 2}\equiv (z^2)^{2k + 1} \equiv (-1)^{2k + 1}\equiv -1$. Contradiction.
+\end{proof}
+\begin{prop}
+ (Unproven) A prime $p$ is the sum of two squares if and only if $p\equiv 1\pmod 4$.
+\end{prop}
+
+\begin{prop}
+ There are infinitely many primes $\equiv 1\pmod 4$.
+\end{prop}
+
+\begin{proof}
+ Suppose not, and let $p_1, \cdots, p_k$ be all the primes $\equiv 1\pmod 4$. Let $N = (2p_1\cdots p_k)^2 + 1$. Then $N$ is not divisible by $2$ or any of $p_1,\cdots, p_k$. Let $q$ be a prime with $q \mid N$. Since $q$ is odd and is not among the $p_i$, we have $q \equiv -1\pmod 4$. Now $N \equiv 0\pmod q$, i.e.\ $(2p_1\cdots p_k)^2 \equiv -1\pmod q$. So $-1$ is a quadratic residue mod $q$, which is a contradiction since $q \equiv -1\pmod 4$.
+\end{proof}
+
+\begin{prop}
+ Let $p = 4k + 3$ be a prime. Then if $a$ is a quadratic residue, i.e.\ $a \equiv z^2 \pmod p$ for some $z$, then $z \equiv \pm a^{k + 1}\pmod p$.
+\end{prop}
+
+\begin{proof}
+ By Fermat's little theorem, $a^{2k + 1} \equiv z^{4k + 2} \equiv z^{p - 1} \equiv 1$. If we multiply by $a$, then $a^{2k + 2} \equiv a \pmod p$. So $(\pm a^{k + 1})^2 \equiv a \pmod p$.
+\end{proof}
+This allows us to take square roots mod $p$ efficiently, provided we can compute powers of $a$ efficiently. This can be done by repeated squaring. For example, to find $a^{37}$, we calculate $a^{37} = a^{32}a^4a^1 = ((((a^2)^2)^2)^2)^2 \cdot (a^2)^2\cdot a$. Thus computing $a^n$ takes $O(\log n)$ multiplications, as opposed to $O(n)$ if we multiply one power at a time.
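Both ideas (repeated squaring, and square roots when $p \equiv 3 \pmod 4$) can be sketched in a few lines. The helper names `power_mod` and `sqrt_mod` are ours; Python's built-in `pow(a, n, m)` already does repeated squaring internally, but we spell it out:

```python
def power_mod(a, n, m):
    """Compute a^n mod m by repeated squaring: O(log n) multiplications."""
    result = 1
    while n > 0:
        if n & 1:            # current binary digit of n is 1
            result = result * a % m
        a = a * a % m        # square the running power
        n >>= 1
    return result

def sqrt_mod(a, p):
    """Square root of a mod p, for p = 4k + 3 and a a quadratic residue."""
    k = (p - 3) // 4
    return power_mod(a, k + 1, p)   # z = +/- a^(k+1)

p = 23                   # 23 = 4*5 + 3
z = sqrt_mod(2, p)       # 2 is a residue mod 23
assert z * z % p == 2
```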
+
+Suppose $a$ is a square mod $n$, where $n = pq$ and $p, q$ are distinct primes. Then $a$ is a square mod $p$ and a square mod $q$. So there exists some $s$ with $(\pm s)^2 \equiv a\pmod p$ and some $t$ with $(\pm t)^2\equiv a\pmod q$. By the Chinese remainder theorem, each of the four sign combinations gives a unique solution mod $n$, so we get 4 square roots of $a$ modulo $n$.
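For a toy modulus we can simply enumerate and see the four roots appear (brute force is of course only feasible for tiny $n$):

```python
p, q = 7, 11
n = p * q                # n = 77
u = 2
a = u * u % n            # a = 4 is a square mod 77
# find all square roots of a mod n by brute force
roots = sorted(x for x in range(n) if x * x % n == a)
print(roots)             # four roots: one per sign combination under CRT
assert len(roots) == 4
```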
+
+\subsection{Public-key (asymmetric) cryptography}
+\subsubsection*{Tossing a coin over a phone}
+Suppose we have Alice and Bob who wish to toss a coin fairly over the phone. Alice chooses two 100-digit primes $p, q$ with $p, q\equiv 3\pmod 4$. Then she tells Bob the product $n = pq$. Bob picks a number $u$ coprime to $n$, computes $a\equiv u^2\pmod n$ and tells Alice the value of $a$.
+
+Alice can compute the square roots of $a$ by the above algorithm ($O(\log n)$ steps), obtaining $\pm u, \pm v$, and tells Bob one of these pairs.
+
+Now if Alice picks $\pm u$, Bob says ``you win''. Otherwise, Bob says ``you lose''.
+
+Can Bob cheat? If he says ``you lose'' when Alice says $\pm u$, Bob must produce the other pair $\pm v$, but he can't know $\pm v$ without factorizing $n$. (If he knows $\pm u$ and $\pm v$, then $u^2 \equiv v^2\pmod n$, so $n \mid (u - v)(u + v)$. But $n\nmid (u - v)$ and $n\nmid (u + v)$. So, without loss of generality, $p \mid (u - v)$ and $q \mid (u + v)$. Then $p = (n, u - v)$ and $q = (n, u + v)$, which we can calculate efficiently by Euclid's algorithm.)
+
+Thus cheating is as hard as prime factorization.
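The gcd step in the parenthetical argument can be sketched directly: given two essentially different square roots of the same $a$, taking gcds recovers the factors. The toy values below are ours (real moduli would have hundreds of digits):

```python
from math import gcd

n = 77                   # n = 7 * 11, a toy modulus
u, v = 2, 9              # u^2 = v^2 = 4 (mod 77), but v is not +/- u mod 77
assert u * u % n == v * v % n
p = gcd(n, v - u)        # Euclid's algorithm recovers one prime factor
q = gcd(n, u + v)        # ... and the other
print(p, q)
```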
+
+Note that a difficult part is to generate the 100-digit primes. While there are sufficiently many primes that we can keep trying random numbers until we hit one, we need an efficient method to test whether a number is prime. We cannot do this by factorization since it is slow. So we need an efficient primality test.
+
+We can test whether a large number is prime by doing Fermat-like checks. We choose random numbers $a$ and raise them to the $(p - 1)$th power mod $p$, where $p$ is the candidate, and see if the result is 1. If it is not 1, then $p$ is definitely not a prime. If we do sufficiently many tests that all result in 1, we can be reasonably certain that $p$ is a prime (even though not with 100\% certainty).
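A minimal sketch of such a Fermat test; `fermat_test` is our own name, and note that rare composites (Carmichael numbers) can fool this particular check, which is why practical systems use refinements of it:

```python
import random

def fermat_test(n, trials=20):
    """Probabilistic primality check: n passes if a^(n-1) = 1 mod n
    for several random bases a. Failing once proves n composite."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False        # witness found: definitely composite
    return True                 # probably prime

print(fermat_test(101), fermat_test(100))
```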
+
+(Recent advances have produced efficient deterministic primality tests, but they are generally slower than the above algorithm and are not widely used.)
+
+It is currently believed that it is hard to factorize large numbers, so this is secure as far as we know.
+\subsubsection*{RSA encryption}
+
+\begin{thm}[RSA Encryption]
+ We want people to be able to send a message to Bob without Eve eavesdropping. So the message must be encrypted. We want an algorithm that allows anyone to encrypt, but only Bob to decrypt (e.g.\ many parties sending passwords with the bank).
+
+ Let us first agree to write messages as sequences of numbers, e.g.\ in ASCII or UTF-8.
+
+ After encoding, the encryption part is often done with RSA encryption (Rivest, Shamir, Adleman). Bob thinks of two large primes $p, q$. Let $n = pq$ and pick $e$ coprime to $\phi(n) = (p - 1)(q - 1)$. Then work out $d$ with $de \equiv 1\pmod {\phi(n)}$ (i.e.\ $de = k\phi(n) + 1$). Bob then publishes the pair $(n, e)$.
+
+ For Alice to encrypt a message, Alice splits the message into numbers $M < n$. Alice sends $M^e \pmod n$ to Bob.
+
+ Bob then computes $(M^e)^d = M^{k\phi(n) + 1} \equiv M\pmod n$ by the Fermat-Euler theorem (this needs $(M, n) = 1$; in fact the congruence holds for all $M$, since $n$ is a product of two distinct primes).
+
+ How can Eve find $M$? She could, of course, factorize $n$, find $d$ efficiently, and be in the same position as Bob. However, it is currently believed that this is hard. Is there any other way? Currently we do not know if RSA can be broken without factorizing (cf.\ the RSA problem).
+\end{thm}
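The whole key-generation/encrypt/decrypt cycle fits in a few lines with toy primes (these particular numbers are our illustration, not from the notes; `pow(e, -1, m)` computes a modular inverse and needs Python 3.8+):

```python
p, q = 61, 53                    # toy primes; real RSA uses huge ones
n = p * q                        # n = 3233, published
phi_n = (p - 1) * (q - 1)        # phi(n) = 3120, kept secret
e = 17                           # public exponent, coprime to phi_n
d = pow(e, -1, phi_n)            # private exponent: d*e = 1 mod phi_n

M = 65                           # a message block with M < n
C = pow(M, e, n)                 # Alice encrypts with the public pair (n, e)
assert pow(C, d, n) == M         # Bob decrypts with d
print(C)
```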
+
+\section{Real numbers}
+So far, we have only worked with natural numbers and integers. Unfortunately the real world often involves rational numbers and even real numbers. In this chapter, our goal is to study the real numbers. To do so, we first have to define them, and we will build up to this starting from the natural numbers.
+
+Before we start, an important (philosophical) point has to be made. The idea is to define, say, the ``real numbers'' as a set with some operations (e.g.\ addition, multiplication) that satisfies some particular properties, known as the \emph{axioms}. When we do this, there are two questions we can ask --- is there actually a set that satisfies these properties, and is it unique?
+
+The first question can be answered by an explicit construction, i.e.\ we find a concrete set that actually satisfies these properties. However, it is important to note that we perform the construction only to show that it makes sense to talk about such a set. For example, we will construct a real number as a pair of subsets of $\Q$, but it would be absurd to actually think that a real number ``is'' a pair of sets. It's just that it \emph{can be constructed} as a pair of sets. You would be considered insane if you asked if
+\[
+ \exists x: x \in 3 \vee x \in \pi
+\]
+holds, even though it is a valid thing to ask if we view each real number as a set (and is in fact true).
+
+The next problem of uniqueness is more subtle. Firstly, it is clear that the constructions themselves aren't unique --- instead of constructing the natural number $0$ as the set $\emptyset$, as we will later do, we could as well define it as $\{\{\emptyset\}\}$, and the whole construction will go through. However, we could still hope that all possible constructions are ``isomorphic'' in some way. It turns out this is true for what we have below, but the proofs are not trivial.
+
+However, while this is nice, \emph{it doesn't really matter}. This is since we don't care how, say, the real numbers are constructed. When working with them, we just assume that they satisfy the relevant defining properties. So we can just choose \emph{anything} that satisfies the axioms, and use it. The fact that there are other ones not isomorphic to this is not a problem.
+
+Since we are mostly interested in the natural numbers and the real numbers, we will provide \emph{both} the axiomatic description and explicit construction for these. However, we will only give explicit constructions for the integers and rationals, and we will not give detailed proofs of why they work.
+
+\subsection{Construction of numbers}
+\subsubsection*{Construction of natural numbers}
+Our construction of natural numbers will include $0$.
+
+\begin{defi}[Natural numbers]
+ The natural numbers $\N$ are defined by \emph{Peano's axioms}. We call a set $\N$ the ``natural numbers'' if it has a special element $0$ and a map $S: \N \to \N$ that maps $n$ to its ``successor'' (intuitively, it is $+1$) such that:
+ \begin{enumerate}
+ \item $S(n) \not= 0$ for all $n \in \N$
+ \item For all $n, m \in \N$, if $S(n) = S(m)$, then $n = m$.
+ \item For any subset $A \subseteq \N$, if $0 \in A$ and ``$n \in A \Rightarrow S(n) \in A$'', then in fact $A = \N$.
+ \end{enumerate}
+ The last axiom is the axiom of induction.
+
+ We write $1 = S(0), 2 = S(1)$, $3 = S(2)$ etc. We can (but will not) prove that the axiom of induction allows us to define functions on $\N$ recursively (cf.\ IID Logic and Set Theory). Assuming this, we can define addition and multiplication recursively by
+ \begin{align*}
+ n + 0 &= n &n\times 0 &= 0\\
+ n + S(m) &= S(n + m) & n \times S(m) &= n \times m + n
+ \end{align*}
+ We can show by induction that these satisfy the usual rules (e.g.\ associativity, distributivity).
+\end{defi}
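The recursive definitions translate directly into code if we model the successor $S$ as $+1$; this sketch (our illustration) computes sums and products using only the two recursion equations:

```python
def add(n, m):
    """n + 0 = n;  n + S(m) = S(n + m)."""
    return n if m == 0 else add(n, m - 1) + 1

def mul(n, m):
    """n * 0 = 0;  n * S(m) = n * m + n."""
    return 0 if m == 0 else add(mul(n, m - 1), n)

print(add(3, 4), mul(3, 4))
```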
+We can construct this explicitly by $0 = \emptyset$, $1 = \{0\}$, $2 = \{0, 1\}$ etc. In general, we define $S(n) = \{n\} \cup n$. Note that this is in some sense a circular definition, since we are defining the natural numbers recursively, but to do recursion, we need the natural numbers. Also, it is not clear how we can show this satisfies the axioms above. To actually do this properly, we will need to approach this in a slightly different way, and the details are better left for the IID Logic and Set Theory course.
+
+\subsubsection*{Construction of integers}
+\begin{defi}[Integers]
+ $\Z$ is obtained from $\N$ by allowing subtraction. Formally, we define $\Z$ to be the equivalence classes of $\N\times \N$ under the equivalence relation
+ \[
+ (a, b) \sim (c, d) \quad\text{if and only if}\quad a + d = b + c.
+ \]
+ Intuitively, we think of $(a, b)$ as $a - b$.
+
+ We write $a$ for $[(a, 0)]$ and $-a$ for $[(0, a)]$, and define the operations by
+ \begin{align*}
+ (a, b) + (c, d) &= (a + c, b + d)\\
+ (a, b)\times (c, d) &= (ac + bd, ad + bc).
+ \end{align*}
+ We can check that these are well-defined and satisfy the usual properties.
+\end{defi}
+
+\subsubsection*{Construction of rationals}
+\begin{defi}[Rationals]
+ $\Q$ is obtained from $\Z$ by allowing division. Formally, we define $\Q$ to be the equivalence classes of $\Z \times (\N\setminus\{0\})$ under the relation
+ \[
+ (a, b) \sim (c, d)\quad\text{if and only if}\quad ad = bc.
+ \]
+ We write $\frac{a}{b}$ for $[(a, b)]$. We define
+ \begin{align*}
+ (a, b) + (c, d) &= (ad + bc, bd)\\
+ (a, b)\times (c, d) &= (ac, bd).
+ \end{align*}
+ We can check that these are well-defined and satisfy the usual properties.
+\end{defi}
+
+Algebraically, we say $\Q$ is a ``totally ordered field''.
+\begin{defi}[Totally ordered field]
+ A set $F$ equipped with binary operations $+, \times$ and relation $\leq$ is a \emph{totally ordered field} if
+ \begin{enumerate}
+ \item $F$ is an additive abelian group with identity $0$.
+ \item $F\setminus \{0\}$ is a multiplicative abelian group with identity $1$.
+ \item Multiplication distributes over addition: $a(b + c) = ab + ac$.
+ \item $\leq$ is a total order.
+ \item For any $p, q, r \in F$, if $p \leq q$, then $p + r \leq q+ r$.
+ \item For any $p, q, r \in F$, if $p \leq q$ and $0 \leq r$, then $p r \leq qr$.
+ \end{enumerate}
+\end{defi}
+
+\begin{prop}
+ $\Q$ is a totally ordered field.
+\end{prop}
+
+Not every field can be totally ordered: for example, $\Z_p$ (for $p$ prime) is a field, but it cannot be given an order satisfying the axioms above.
+\begin{prop}
+ $\Q$ is densely ordered, i.e.\ for any $p, q \in \Q$, if $p < q$, then there is some $r \in \Q$ such that $p < r < q$.
+\end{prop}
+\begin{proof}
+ Take $r = \frac{p + q}{2}$.
+\end{proof}
+
+However, $\Q$ is not enough for our purposes.
+\begin{prop}
+ There is no rational $q\in \Q$ with $q^2 = 2$.
+\end{prop}
+
+\begin{proof}
+ Suppose not, and $(\frac{a}{b})^2 = 2$, where $b$ is chosen as small as possible. We will derive a contradiction in four ways.
+ \begin{enumerate}
+ \item $a^2 = 2b^2$. So $a$ is even. Let $a = 2a'$. Then $b^2 = 2a'^2$. Then $b$ is even as well, and $b = 2b'$. But then $\frac{a}{b} = \frac{a'}{b'}$ with a smaller $b'$. Contradiction.
+ \item If $b = 1$, then $a^2 = 2$, which is impossible for an integer $a$. Otherwise, let $p$ be a prime with $p \mid b$. Then $a^2 = 2b^2$, so $p \mid a^2$, and hence $p \mid a$. So we can cancel $p$ from both $a$ and $b$, contradicting the minimality of $b$.
+ \item (Dirichlet) We have $\frac{a}{b} = \frac{2b}{a}$, since $a^2 = 2b^2$. For any $u, v$, we have $a^2v = 2b^2v$ and thus $uab + a^2v = uab + 2b^2v$. So $\frac{a}{b} = \frac{au + 2bv}{bu + av}$. Put $u = -1, v = 1$. Then $\frac{a}{b} = \frac{2b - a}{a - b}$. Since $b < a < 2b$ (as $1 < (a/b)^2 = 2 < 4$), we have $0 < a - b < b$. So we have found a representation with a smaller denominator. Contradiction.
+ \item Same as 3, but pick $u, v$ so that $bu + av = 1$, which is possible since $a$ and $b$ are coprime. Then $\frac{a}{b}$ is an integer, but no integer squares to $2$. Contradiction.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\subsubsection*{Construction of real numbers}
+As shown above, the rational numbers don't have all the numbers we need. What exactly do we mean by ``numbers are missing''? We might want to say that the problem is that not all polynomial equations have solutions. However, this is not the real problem. First of all, even if we are working with the reals, not all equations have solutions, e.g.\ $x^2 + 1 = 0$. Also, some real numbers such as $\pi$ are not solutions to polynomial equations (with integer coefficients), but we still want them.
+
+The real problem is expressed in terms of least upper bounds, or suprema.
+
+\begin{defi}[Least upper bound/supremum and greatest lower bound/infimum]
+ For an ordered set $X$, $s\in X$ is a \emph{least upper bound} (or \emph{supremum}) for the set $S\subseteq X$, denoted by $s = \sup S$, if
+ \begin{enumerate}
+ \item $s$ is an upper bound for $S$, i.e.\ for every $x \in S$, we have $x \leq s$.
+ \item if $t$ is any upper bound for $S$, then $s \leq t$.
+ \end{enumerate}
+ Similarly, $s\in X$ is a \emph{greatest lower bound} (or \emph{infimum}) if $s$ is a lower bound and any lower bound $t \leq s$.
+\end{defi}
+By definition, the least upper bound for $S$, if it exists, is unique.
+
+The problem with $\Q$ is that if we let $S = \{q\in \Q: q^2 < 2\}$, then it has no supremum in $\Q$.
+
+Recall that $\Q$ is a totally ordered field. We will define the real numbers axiomatically to be a totally ordered field without this problem.
+\begin{defi}[Real numbers]
+ The \emph{real numbers} form a totally ordered field containing $\Q$ that satisfies the least upper bound axiom.
+\end{defi}
+
+\begin{axiom}[Least upper bound axiom]
+ Every non-empty set of the real numbers that has an upper bound has a least upper bound.
+\end{axiom}
+We have the requirement ``non-empty'' since every number is an upper bound of $\emptyset$ but it has no least upper bound.
+
+We will leave the construction to the end of the section.
+
+Note that $\Q$ is a subset of $\R$, in the sense that we can find a copy of $\Q$ inside $\R$. By definition of a field, there is a multiplicative identity $1 \in \R$. We can then define the natural numbers by
+\[
+ n = \underbrace{1 + \cdots + 1}_{n\text{ times}}.
+\]
+We can then define the negative integers by letting $-n$ be the additive inverse of $n$. Then $\frac{1}{n}$ is the multiplicative inverse of $n$ (for $n \not= 0$), and $\frac{m}{n}$ is just $m$ copies of $\frac{1}{n}$ added together. This is our canonical copy of $\Q$.
+
+\begin{cor}
+ Every non-empty set of the real numbers bounded below has an infimum.
+\end{cor}
+
+\begin{proof}
+ Let $S$ be non-empty and bounded below. Then $-S = \{-x: x\in S\}$ is a non-empty set bounded above, and $\inf S = -\sup (-S)$.
+\end{proof}
+
+Alternatively, we can prove it just using the ordering of $\R$:
+\begin{proof}
+ Let $S$ be non-empty and bounded below. Let $L$ be the set of all lower bounds of $S$. Since $S$ is bounded below, $L$ is non-empty. Also, $L$ is bounded above by any member of $S$. So $L$ has a least upper bound $\sup L$.
+
+ For each $x \in S$, we know $x$ is an upper bound of $L$. So we have $\sup L \leq x$ by definition. So $\sup L$ is indeed a lower bound of $S$. Also, by definition, every lower bound of $S$ is less than (or equal to) $\sup L$. So this is the infimum.
+\end{proof}
+
+Now the set $\{q\in \Q: q^2 < 2\}$ has a supremum in $\R$ (by definition).
+
+We make some useful definitions.
+\begin{defi}[Closed and open intervals]
+ A \emph{closed interval} $[a, b]$ with $a \leq b\in \R$ is the set $\{x\in \R: a\leq x\leq b\}$.
+
+ An \emph{open interval} $(a, b)$ with $a \leq b\in \R$ is the set $\{x\in \R: a< x< b\}$.
+
+ Similarly, we can have $[a, b) = \{x\in \R: a\leq x < b\}$ and $(a, b] = \{x\in \R: a< x \leq b\}$.
+\end{defi}
+
+\begin{eg}
+ Let $S = [0, 1]$. Then $S\not= \emptyset$. Also $S$ has an upper bound, e.g.\ $2$. Hence $\sup S$ exists.
+
+ To find it explicitly, notice that $1$ is an upper bound for $S$ by definition, and if $t < 1$, then $t$ is not an upper bound for $S$ since $1\in S$ but $1\not\leq t$. So every upper bound is at least $1$ and therefore $1$ is the supremum of $S$.
+
+ Now let $T = (0, 1)$. Again $T$ is non-empty and has an upper bound (e.g.\ $2$). So again $\sup T$ exists. We know that $1$ is an upper bound. If $t < 0$, then $0.5\in T$ but $0.5\not\leq t$. So $t$ is not an upper bound. Now suppose $0\leq t < 1$, then $0 < t < \frac{1 + t}{2} < 1$ and so $\frac{1 + t}{2}\in T$ but $\frac{1 + t}{2} \not\leq t$. So $t$ is not an upper bound. So $\sup T = 1$.
+
+ Note that these cases differ by $\sup S\in S$ but $\sup T\not\in T$. $S$ has a maximum element $1$ and the maximum is the supremum. $T$ doesn't have a maximum, but the supremum can still exist.
+\end{eg}
+
+The real numbers have a rather interesting property.
+\begin{thm}[Axiom of Archimedes]
+ Given $r\in \R$, there exists $n\in \N$ with $n > r$.
+\end{thm}
+
+This was considered an axiom by Archimedes but we can prove this with the least upper bound axiom.
+
+\begin{proof}
+ Assume the contrary. Then $r$ is an upper bound for $\N$. $\N$ is not empty since $1\in \N$. By the least upper bound axiom, $s = \sup \N$ exists. Since $s$ is the least upper bound for $\N$, $s - 1$ is not an upper bound for $\N$. So $\exists m\in \N$ with $m > s - 1$. Then $m + 1\in \N$ but $m + 1 > s$, which contradicts the statement that $s$ is an upper bound.
+\end{proof}
+
+Notice that every non-empty set $S\subseteq \R$ which is bounded below has a \emph{greatest lower bound} (or \emph{infimum}). In particular, we have
+
+\begin{prop}
+ $\inf\{\frac{1}{n}: n\in \N\} = 0$.
+\end{prop}
+
+\begin{proof}
+ Let $S = \{\frac{1}{n}: n\in \N\}$. Certainly $0$ is a lower bound for $S$. If $t > 0$, there exists $n \in \N$ such that $n \geq 1/t$ (by the Axiom of Archimedes). So $t \geq 1/n\in S$. So $t$ is not a lower bound for $S$.
+\end{proof}
+
+\begin{thm}
+ $\Q$ is dense in $\R$, i.e.\ given $r, s\in \R$, with $r < s$, $\exists q\in \Q$ with $r < q < s$.
+\end{thm}
+
+\begin{proof}
+ wlog assume first $r \geq 0$ (if $s \leq 0$, multiply everything by $-1$ and swap $r$ and $s$; if $r < 0 < s$, take $q = 0$). Since $s - r > 0$, there is some $n \in \N$ such that $\frac{1}{n} < s - r$. By the Axiom of Archimedes, $\exists N\in \N$ such that $N > sn$.
+
+ Let $T = \{k\in \N: \frac{k}{n}\geq s\}$. $T$ is not empty, since $N\in T$. Then by the well-ordering principle, $T$ has a minimum element $m$. Now $m\not= 1$ since $\frac{1}{n} < s - r \leq s$. Let $q = \frac{m - 1}{n}$. Since $m - 1\not\in T$, $q < s$. If $q =\frac{m - 1}{n}< r$, then $\frac{m}{n} < r + \frac{1}{n} < s$, so $m\not\in T$, contradiction. So $r < q < s$.
+\end{proof}
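The proof is constructive, and a slight variant of it computes such a $q$ explicitly: pick $n$ with $1/n < s - r$, then take the first multiple of $1/n$ strictly above $r$. The helper name `rational_between` is our own:

```python
from fractions import Fraction
from math import floor

def rational_between(r, s):
    """Given 0 <= r < s, return a rational q with r < q < s:
    choose n with 1/n < s - r (Axiom of Archimedes), then the
    smallest m with m/n > r; then q = m/n <= r + 1/n < s."""
    n = floor(1 / (s - r)) + 1
    m = floor(r * n) + 1
    return Fraction(m, n)

r, s = Fraction(1, 3), Fraction(2, 5)
q = rational_between(r, s)
assert r < q < s
print(q)
```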
+
+\begin{thm}
+ There exists $x\in \R$ with $x^2 = 2$.
+\end{thm}
+
+\begin{proof}
+ Let $S = \{r\in \R: r^2 \leq 2\}$. Then $0\in S$ so $S\not= \emptyset$. Also, for every $r \in S$, we have $r \leq 3$ (if $r > 3$, then $r^2 > 9 > 2$). So $S$ is bounded above. So $x = \sup S$ exists and $0\leq x \leq 3$.
+
+ By trichotomy, either $x^2 < 2, x^2 > 2$ or $x^2 = 2$.
+
+ Suppose $x^2 < 2$. Let $0 < t < 1$. Then consider $(x + t)^2 = x^2 + 2xt + t^2 < x^2 + 6t + t \leq x^2 + 7t$. Pick $t < \frac{2 - x^2}{7}$, then $(x + t)^2 < 2$. So $x + t \in S$. This contradicts the fact that $x$ is an upper bound of $S$.
+
+ Now suppose $x^2 > 2$. Let $0 < t < 1$. Then consider $(x - t)^2 = x^2 - 2xt + t^2 \geq x^2 - 6t$. Pick $t < \frac{x^2 - 2}{6}$. Then $(x - t)^2 > 2$, so $x - t$ is an upper bound for $S$. This contradicts the fact that $x$ is the least upper bound of $S$.
+
+ So by trichotomy, $x^2 = 2$.
+\end{proof}
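The trichotomy in the proof suggests a numerical scheme for approximating $\sup S$: repeatedly bisect, keeping points of $S$ on the left and upper bounds on the right. A sketch (ours, using floating point only as an illustration):

```python
lo, hi = 0.0, 3.0          # S is bounded above by 3, as in the proof
for _ in range(60):        # each step halves the interval [lo, hi]
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid           # mid is in S, so sup S >= mid
    else:
        hi = mid           # mid is an upper bound for S
print(abs(lo * lo - 2) < 1e-9)
```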
+
+Now let's try to construct the real numbers from the rationals. The idea is that each set like $\{q\in \Q: q^2 < 2\}$ represents a ``missing number'', namely the supremum $\sqrt{2}$. However, many different sets can correspond to the same missing number, e.g.\ $\{q \in \Q: q^2 < 2\} \cup \{-3\}$ also ``should have'' supremum $\sqrt{2}$. So we need to pick a particular one that represents this missing number.
+
+To do so, we pick the maximal one, i.e.\ a subset $S$ such that for $x \in S$ and $y \in \Q$, if $y < x$, then $y \in S$ (if $y$ is not in $S$, we can add it to $S$, and $S \cup \{y\}$ will ``have the same upper bound''). Each rational $q \in \Q$ can also be represented by the set $\{x \in \Q: x \leq q\}$. These are known as \emph{Dedekind cuts}. By convention, a Dedekind cut is instead defined as the pair $(S, \Q \setminus S)$, and can be characterized by the following definition:
+
+\begin{defi}[Dedekind cut]
+ A \emph{Dedekind cut} of $\Q$ is a partition of $\Q$ into sets $L$ and $R$ such that
+ \[
+ (\forall l\in L)(\forall r\in R)\,l < r,
+ \]
+ and $R$ has no minimum.
+\end{defi}
+The requirement that $R$ has no minimum corresponds to our (arbitrary) decision that the rationals should be embedded as
+\[
+ q\mapsto (\{x\in \Q: x \leq q\}, \{x\in \Q: x> q\}),
+\]
+instead of $q\mapsto (\{x\in \Q: x < q\}, \{x\in \Q: x \geq q\})$.
+
+We can then construct the set $\R$ from $\Q$ by letting $\R$ be the set of all Dedekind cuts. The supremum of any bounded set of real numbers is obtained by taking the union of (the left sides) of the Dedekind cuts. The definition of the arithmetic operations is left as an exercise for the reader (to actually define them is tedious but not hard).
+
+\subsection{Sequences}
+Here we will look at sequences, with series in the next chapter. Only a brief introduction is provided here, as these topics will be studied in detail later in Analysis I.
+
+\begin{defi}[Sequence]
+ A \emph{sequence} is a function $\N\to \R$. If $a$ is a sequence, instead of $a(1), a(2), \cdots$, we usually write $a_1, a_2, \cdots$. To emphasize it is a sequence, we write the sequence as $(a_n)$.
+\end{defi}
+We want to capture the notion of a sequence tending to a limit. For example, we want to say that $1, \frac{1}{2}, \frac{1}{3}, \cdots$ tends to $0$, while $1, 2, 3, \cdots$ does not converge to a limit.
+
+The idea is that if $a_n \to l$, then we can get as close to $l$ as we like, as long as we are sufficiently far down the sequence. More precisely, given any ``error threshold'' $\varepsilon$, we can find a (possibly large) number $N$ such that whenever $n \geq N$, we have $|a_n - l| < \varepsilon$.
+
+\begin{defi}[Limit of sequence]
+ The sequence $(a_n)$ \emph{tends to} $l\in \R$ as $n$ tends to infinity if and only if
+ \[
+ (\forall \varepsilon > 0)(\exists N \in \N)(\forall n \geq N)\,|a_n - l| < \varepsilon.
+ \]
+ If $a_n$ tends to $l$ as $n$ tends to infinity, we write $a_n \to l$ as $n\to \infty$; $\displaystyle \lim_{n \to \infty} a_n = l$; or $a_n$ converges to $l$.
+\end{defi}
+Intuitively, if $a_n \to l$, we mean given any $\varepsilon$, for sufficiently large $n$, $a_n$ is always within $l\pm \varepsilon$.
+
+The statement $a_n\not\to l$ is the negation of the above:
+\[
+ (\exists \varepsilon > 0)(\forall N\in \N)(\exists n\geq N)\,|a_n - l| \geq \varepsilon.
+\]
+
+\begin{defi}[Convergence of sequence]
+ The sequence $(a_n)$ \emph{converges} if there exists an $l$ such that $a_n\to l$. The sequence \emph{diverges} if it doesn't converge.
+\end{defi}
+
+Every proof of $a_n \to l$ looks like: Given $\varepsilon > 0$, (argument to show $N$ exists, maybe depending on $\varepsilon$), such that $\forall n\geq N$, $|a_n - l| < \varepsilon$.
+\begin{eg}
+ Show that $a_n = 1 - \frac{1}{n} \to 1$.
+
+ Given $\varepsilon > 0$, choose $N > \frac{1}{\varepsilon}$, which exists by the Axiom of Archimedes. If $n \geq N$, then $|a_n - 1| = \frac{1}{n} \leq \frac{1}{N} < \varepsilon$. So $a_n\to 1$.
+\end{eg}
+
+\begin{eg}
+ Let
+ \[
+ a_n = \begin{cases}\frac{1}{n} & n\text{ is prime}\\ \frac{1}{2n} & n\text{ is not prime}\end{cases}.
+ \]
+ We will show that $a_n \to 0$. Given $\varepsilon > 0$, choose $N > \frac{1}{\varepsilon}$. Then $\forall n\geq N$, $|a_n - 0| \leq \frac{1}{n} \leq \frac{1}{N} < \varepsilon$.
+\end{eg}
+
+\begin{eg}
+ Prove that
+ \[
+ a_n = \begin{cases}1 & n\text{ is prime}\\ 0 & n\text{ is not prime}\end{cases}
+ \]
+ diverges.
+
+ Let $\varepsilon = \frac{1}{3}$. Suppose $l\in \R$. If $l < \frac{1}{2}$, then $|a_n - l| > \varepsilon$ when $n$ is prime. If $l\geq \frac{1}{2}$, then $|a_n - l| > \varepsilon$ when $n$ is not prime. Since the primes and non-primes are unbounded, $(\forall N)\exists n > N$ such that $|a_n - l| > \varepsilon$. So $a_n$ diverges.
+\end{eg}
+
+An important property of $\R$ is the following:
+\begin{thm}
+ Every bounded monotonic sequence converges.
+\end{thm}
+In case of confusion, the terms are defined as follows: $(a_n)$ is increasing if $m \leq n$ implies $a_m \leq a_n$. Decreasing is defined similarly. Then it is monotonic if it is increasing or decreasing. $(a_n)$ is bounded if there is some $B\in \R$ such that $|a_n| \leq B$ for all $n$.
+
+\begin{proof}
+ wlog assume $(a_n)$ is increasing. The set $\{a_n: n\geq 1\}$ is bounded and non-empty. So it has a supremum $l$ (by the least upper bound axiom). We show that $l$ is the limit:
+
+ Given any $\varepsilon >0$, $l - \varepsilon$ is not an upper bound of $\{a_n: n \geq 1\}$. So $\exists N$ such that $a_N > l - \varepsilon$. Since $(a_n)$ is increasing, we know that $l \geq a_m \geq a_N > l - \varepsilon$ for all $m \geq N$. So $\forall n\geq N$, $|a_n - l| < \varepsilon$. So $a_n \to l$.
+\end{proof}
+We can show that this theorem is equivalent to the least upper bound axiom.
+
+\begin{defi}[Subsequence]
+ A \emph{subsequence} of $(a_n)$ is $(a_{g(n)})$ where $g: \N\to\N$ is strictly increasing. e.g.\ $a_2, a_3, a_5, a_7\cdots$ is a subsequence of $(a_n)$.
+\end{defi}
+
+\begin{thm}
+ Every sequence has a monotonic subsequence.
+\end{thm}
+
+\begin{proof}
+ Call a point $a_k$ a ``peak'' if $(\forall m \geq k)\,a_m \leq a_k$. If there are infinitely many peaks, then they form a decreasing subsequence. If there are only finitely many peaks, $\exists N$ such that no $a_n$ with $n > N$ is a peak. Pick $a_{N_1}$ with $N_1 > N$. Then pick $a_{N_2}$ with $N_2 > N_1$ and $a_{N_2} > a_{N_1}$. This is possible because $a_{N_1}$ is not a peak. Then pick $a_{N_3}$ with $N_3 > N_2$ and $a_{N_3}> a_{N_2}$, \emph{ad infinitum}. Then we have a monotonic subsequence.
+\end{proof}
+
+We will now prove the following basic properties of convergence:
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item If $a_n\to a$ and $a_n\to b$, then $a = b$ (i.e.\ limits are unique)
+ \item If $a_n \to a$ and $b_n = a_n$ for all but finitely many $n$, then $b_n \to a$.
+ \item If $a_n = a$ for all $n$, then $a_n \to a$.
+ \item If $a_n\to a$ and $b_n\to b$, then $a_n + b_n \to a+ b$
+ \item If $a_n\to a$ and $b_n \to b$, then $a_nb_n\to ab$
+ \item If $a_n\to a\not= 0$ and $a_n \not= 0$ for all $n$, then $1/a_n \to 1/a$.
+ \item If $a_n \to a$ and $b_n \to a$, and $\forall n(a_n\leq c_n\leq b_n)$, then $c_n \to a$. (Sandwich theorem)
+ \end{enumerate}
+\end{thm}
+Many students are confused as to why we should prove these ``obvious'' properties of convergence. It seems ``obvious'' that if $a_n$ converges to $a$ and $b_n$ converges to $b$, then the sum converges to $a + b$. However, it is not obvious (at least for the first-time learners) that if $(\forall \varepsilon > 0)(\exists N)(\forall n \geq N)\, |a_n - a| < \varepsilon$ and $(\forall \varepsilon > 0)(\exists N)(\forall n \geq N)\, |b_n - b| < \varepsilon$, then $(\forall \varepsilon > 0)(\exists N)(\forall n \geq N)\, |(a_n + b_n) - (a + b)| < \varepsilon$. In some sense, what we are trying to prove is that our attempt at defining convergence actually satisfies the ``obvious'' properties we think convergence should satisfy.
+
+In proving this, we will make frequent use of the \emph{triangle inequality}: $|x + y|\leq |x| + |y|$.
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Suppose instead $a \not= b$; wlog $a < b$. Then choose $\varepsilon = \frac{b - a}{2}$. By the definition of the limit, $\exists N_1$ such that $\forall n\geq N_1$, $|a_n - a| < \varepsilon$. Similarly, $\exists N_2$ such that $\forall n\geq N_2$, $|a_n - b| < \varepsilon$.
+
+ Let $N = \max\{N_1, N_2\}$. If $n\geq \max\{N_1, N_2\}$, then $|a - b| \leq |a - a_n| + |a_n - b| < 2\varepsilon = b - a.$
+ Contradiction. So $a = b$.
+ \item Given $\varepsilon > 0$, $\exists N_1$ such that $\forall n\geq N_1$, we have $|a_n - a| < \varepsilon$. Since $b_n = a_n$ for all but finitely many $n$, there exists $N_2$ such that $\forall n\geq N_2$, $a_n = b_n$.
+
+ Let $N = \max\{N_1, N_2\}$. Then $\forall n\geq N$, we have $|b_n - a| = |a_n - a| < \varepsilon$. So $b_n\to a$.
+ \item $\forall \varepsilon$, take $N = 1$. Then $|a_n - a| = 0 < \varepsilon$ for all $n \geq 1$.
+ \item Given $\varepsilon > 0$, $\exists N_1$ such that $\forall n\geq N_1$, we have $|a_n - a| < \varepsilon/2$. Similarly, $\exists N_2$ such that $\forall n\geq N_2$, we have $|b_n - b| < \varepsilon/2$.
+
+ Let $N = \max\{N_1, N_2\}$. Then $\forall n \geq N$, $|(a_n + b_n) - (a + b)| \leq |a_n - a| + |b_n - b| < \varepsilon$.
+ \item Given $\varepsilon > 0$, there exist $N_1, N_2, N_3$ such that
+ \begin{align*}
+ &\forall n\geq N_1: |a_n - a| < \frac{\varepsilon}{2(|b| + 1)}\\
+ &\forall n\geq N_2: |b_n - b| < \frac{\varepsilon}{2(|a| + 1)}\\
+ &\forall n\geq N_3: |b_n - b| < 1 \Rightarrow |b_n| < |b| +1
+ \end{align*}
+ Then let $N = \max\{N_1, N_2, N_3\}$. Then $\forall n\geq N$,
+ \begin{align*}
+ |a_nb_n - ab| &= |b_n(a_n - a) + a(b_n - b)|\\
+ &\leq |b_n| |a_n - a| + |a||b_n - b|\\
+ &< (|b| + 1) |a_n - a| + |a||b_n - b|\\
+ &< \frac{\varepsilon}{2} + \frac{\varepsilon}{2}\\
+ &= \varepsilon
+ \end{align*}
+ \item Given $\varepsilon > 0$, there exist $N_1, N_2$ such that $\forall n\geq N_1$, $|a_n - a| < \frac{|a|^2}{2}\varepsilon$, and $\forall n\geq N_2$, $|a_n - a| < \frac{|a|}{2}$ (so that $|a_n| > \frac{|a|}{2}$).
+
+ Let $N = \max\{N_1, N_2\}$. Then $\forall n \geq N$,
+ \begin{align*}
+ \left|\frac{1}{a_n} - \frac{1}{a}\right| &= \frac{|a_n - a|}{|a_n||a|}\\
+ &< \frac{2}{|a|^2}|a_n - a|\\
+ &< \varepsilon
+ \end{align*}
+ \item By (iii) to (v), we know that $b_n - a_n \to 0$. Let $\varepsilon > 0$. Then $\exists N$ such that $\forall n\geq N$, we have $|b_n - a_n| < \varepsilon$. Since $a_n \leq c_n \leq b_n$, we have $0 \leq c_n - a_n \leq b_n - a_n < \varepsilon$. So $c_n - a_n \to 0$. So $c_n = (c_n - a_n) + a_n \to a$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ Let $x_n = \frac{n^2(n + 1)(2n + 1)}{n^4 + 1}$. Then we have
+ \[
+ x_n = \frac{(1 + 1/n)(2 + 1/n)}{1 + 1/n^4}\to \frac{1\cdot 2}{1} = 2
+ \]
+ by the theorem (many times).
+\end{eg}
+
+\begin{eg}
+ Let $y_n = \frac{100^n}{n!}$. Since $\frac{y_{n + 1}}{y_n} = \frac{100}{n + 1} < \frac{1}{2}$ for all $n \geq 200$, we know that $0 \leq y_n \leq y_{200}\cdot \frac{2^{200}}{2^n}$ for $n \geq 200$. Since $y_{200}\cdot \frac{2^{200}}{2^n} \to 0$, we know that $y_n\to 0$ as well.
+\end{eg}
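We can watch this tug-of-war between $100^n$ and $n!$ numerically. The following sketch (an illustration of ours, not part of the notes) computes $y_n$ iteratively via $y_{n} = y_{n-1}\cdot\frac{100}{n}$ to avoid huge intermediate values:

```python
# y_n = 100^n / n!, computed iteratively
def y_sequence(n_max):
    ys, y = [], 1.0
    for n in range(1, n_max + 1):
        y *= 100 / n          # y_n = y_{n-1} * 100/n
        ys.append(y)
    return ys

vals = y_sequence(400)
# the terms grow while the ratio 100/n exceeds 1, peak near n = 100,
# then shrink geometrically towards 0
```

The maximum is attained near $n = 100$ (where the ratio equals $1$); beyond that the halving ratio drives the sequence to $0$, exactly as in the argument above.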
+
+\subsection{Series}
+In a field, the sum of two numbers is defined. By induction, the sum of finitely many numbers is defined as well. However, infinite sums (``series'') are not. We will define what it means to take an infinite sum. Of course, infinite sums exist only for certain nice sums. For example, $1 + 1 + 1 + \cdots$ does not exist.
+
+\begin{defi}[Series and partial sums]
+ Let $(a_n)$ be a sequence. Then $s_m = \sum_{n = 1}^m a_n$ is the \emph{$m$th partial sum} of $(a_n)$. We write
+ \[
+ \sum_{n = 1}^\infty a_n = \lim_{m\to \infty} s_m
+ \]
+ if the limit exists.
+\end{defi}
+
+\begin{eg}
+ Let $a_n = \frac{1}{n(n - 1)}$ for $n\geq 2$. Then
+ \[
+ s_m = \sum_{n = 2}^m \frac{1}{n(n - 1)} = \sum_{n = 2}^m\left(\frac{1}{n - 1} - \frac{1}{n}\right) = 1 - \frac{1}{m}\to 1.
+ \]
+ Then
+ \[
+ \sum_{n = 2}^\infty \frac{1}{n(n - 1)} = 1.
+ \]
+\end{eg}
+
+\begin{eg}
+ Let $a_n = \frac{1}{n^2}$. Then $s_m = \sum_{n = 1}^{m} \frac{1}{n^2}$. We know that $s_m$ is increasing. We also know that $s_m \leq 1 + \sum \frac{1}{n(n -1)} \leq 2$, i.e.\ it is bounded above. So $s_m$ converges and $\sum_{n = 1}^{\infty} \frac{1}{n^2}$ exists (in fact it is $\pi^2/6$).
+\end{eg}
+
+\begin{eg}
+ (Geometric series) Suppose $a_n = r^n$, where $|r| < 1$. Then $s_m = r\cdot \frac{1 - r^m}{1 - r} \to \frac{r}{1- r}$ since $r^m \to 0$. So
+ \[
+ \sum_{n = 1}^\infty r^n = \frac{r}{1 - r}.
+ \]
+\end{eg}
+
+\begin{eg}
+ (Harmonic series) Let $a_n = \frac{1}{n}$. Consider
+ \begin{align*}
+ S_{2^k} &= 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} + \cdots + \frac{1}{2^k}\\
+ & \geq 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{16} + \cdots + \frac{1}{2^k}\\
+ &\geq 1 + \frac{k}{2}.
+ \end{align*}
+ So $\displaystyle \sum_{n = 1}^\infty\frac{1}{n}$ diverges.
+\end{eg}
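The blocking argument can be checked directly: the partial sums up to $2^k$ really do exceed $1 + \frac{k}{2}$, even though they grow extremely slowly. A quick numerical illustration (ours, not part of the notes):

```python
# partial sums of the harmonic series
def harmonic(m):
    return sum(1.0 / n for n in range(1, m + 1))

# harmonic(2**k) >= 1 + k/2 for each k, matching the grouping into blocks
checks = [harmonic(2**k) >= 1 + k / 2 for k in range(1, 15)]
```

Note how weak the bound is: to push the partial sum past $8$, the blocking argument only guarantees it suffices to take $2^{14} = 16384$ terms.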
+
+\subsubsection*{Decimal expansions}
+\begin{defi}[Decimal expansion]
+ Let $(d_n)$ be a sequence with $d_n\in \{0, 1, \cdots, 9\}$. Then $\displaystyle \sum_{n = 1}^\infty \frac{d_n}{10^n}$ converges to a limit $r$ with $0 \leq r \leq 1$ since the partial sums $s_m$ are increasing and bounded above by $\sum \frac{9}{10^n} = 1$ (geometric series). We say $r = 0.d_1d_2d_3\cdots$, the \emph{decimal expansion} of $r$.
+\end{defi}
+
+Does every $x$ with $0 \leq x < 1$ have a decimal expansion?
+Pick $d_1$ maximal such that $\frac{d_1}{10} \leq x$. Then $0 \leq x - \frac{d_1}{10} < \frac{1}{10}$ since $d_1$ is maximal. Then pick $d_2$ maximal such that $\frac{d_2}{100} \leq x - \frac{d_1}{10}$. By maximality, $0 \leq x - \frac{d_1}{10} - \frac{d_2}{100} < \frac{1}{100}$. Repeat inductively: pick maximal $d_n$ with
+\[
+ \frac{d_n}{10^n} \leq x- \sum_{j = 1}^{n - 1} \frac{d_j}{10^j}
+\]
+so
+\[
+ 0 \leq x - \sum_{j = 1}^n \frac{d_j}{10^j} < \frac{1}{10^n}.
+\]
+Since both LHS and RHS $\to 0$, by sandwich, $x - \sum_{j = 1}^\infty \frac{d_j}{10^j} = 0$, i.e.\ $x = 0.d_1d_2\cdots$.
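The greedy construction above is easy to run mechanically. Here is a sketch using exact rational arithmetic (the function name and interface are ours, not from the notes):

```python
from fractions import Fraction

def decimal_digits(x, n):
    """First n decimal digits of x in [0, 1): at each step pick the
    maximal digit d_k with d_k/10^k at most the remaining part of x."""
    digits, rem = [], Fraction(x)
    for _ in range(n):
        rem *= 10
        d = int(rem)      # maximal digit not exceeding the scaled remainder
        digits.append(d)
        rem -= d
    return digits

digits_of_seventh = decimal_digits(Fraction(1, 7), 6)   # repeating block of 1/7
```

Scaling the remainder by $10$ and truncating is exactly the ``pick $d_n$ maximal'' step, with the remainder playing the role of $x - \sum_{j\leq n}d_j/10^j$.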
+
+Now that we have shown that every such $x$ has at least one decimal expansion, can the same number have two different decimal expansions? i.e.\ if $0.a_1a_2\cdots = 0.b_1b_2\cdots$, must $a_i = b_i$ for all $i$?
+
+Suppose the two expansions first differ at position $k$, i.e.\ $a_j = b_j$ for $j < k$ and $a_k \neq b_k$. wlog assume $a_k < b_k$. Since the two numbers are equal, we need
+\[
+ \frac{b_k - a_k}{10^k} + \sum_{j = k + 1}^\infty \frac{b_j}{10^j} = \sum_{j = k + 1}^\infty \frac{a_j}{10^j} \leq \sum_{j = k+1}^\infty \frac{9}{10^j} = \frac{9}{10^{k+1}}\cdot \frac{1}{1 - 1/10} = \frac{1}{10^k}.
+\]
+Since $b_k - a_k \geq 1$ and the remaining terms are non-negative, we must have $b_k = a_k + 1$, $a_j = 9$ for $j > k$ and $b_j = 0$ for $j > k$. For example, $0.47999\cdots = 0.48000\cdots$.
+
+\subsection{Irrational numbers}
+Recall $\Q\subseteq \R$.
+\begin{defi}[Irrational number]
+ Numbers in $\R\setminus \Q$ are \emph{irrational}.
+\end{defi}
+
+\begin{defi}[Periodic number]
+ A decimal is \emph{periodic} if after a finite number $\ell$ of digits, it repeats in blocks of $k$ for some $k$, i.e.\ $d_{n + k} = d_n$ for $n > \ell$.
+\end{defi}
+
+\begin{prop}
+ A number has a periodic decimal expansion iff it is rational.
+\end{prop}
+
+\begin{proof}
+ Clearly a periodic decimal is rational: Say $x = 0.7413157157157\cdots$ (here $\ell = 4$ and $k = 3$). Then
+ \begin{align*}
+ 10^\ell x &= 10^4x\\
+ &= 7413.157157\cdots \\
+ &= 7413 + 157\left(\frac{1}{10^3} + \frac{1}{10^6} + \frac{1}{10^9} + \cdots\right)\\
+ &= 7413 + 157\cdot \frac{1}{10^3}\cdot \frac{1}{1 - 1/10^3}\in \Q
+ \end{align*}
+ Conversely, let $x\in \Q$. Then $x$ has a periodic decimal. Suppose $x = \frac{p}{2^c5^dq}$ with $(q, 10) = 1$. Then $10^{\max(c, d)}x = \frac{a}{q} = n + \frac{b}{q}$ for some $a, b, n\in \Z$ and $0\leq b < q$. However, since $(q, 10) = 1$, by Fermat-Euler, $10^{\phi(q)}\equiv 1\pmod q$, i.e.\ $10^{\phi(q)} - 1 = kq$ for some $k$. Then
+ \[
+ \frac{b}{q} = \frac{kb}{kq} = \frac{kb}{999\cdots 9} = kb\left(\frac{1}{10^{\phi(q)}} + \frac{1}{10^{2\phi(q)}} + \cdots \right).
+ \]
+ Since $kb < kq < 10^{\phi(q)}$, write $kb = d_1d_2\cdots d_{\phi(q)}$ (padding with leading zeros if necessary). So $\frac{b}{q} = 0.d_1d_2\cdots d_{\phi(q)}d_1d_2\cdots$ and $x$ is periodic.
+\end{proof}
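The first half of the proof is effectively an algorithm for turning a periodic decimal into a fraction. A Python sketch (the helper name and argument convention are ours):

```python
from fractions import Fraction

def periodic_to_fraction(prefix, block):
    """Value of 0.<prefix><block><block>..., e.g. prefix='7413', block='157'.
    Mirrors the proof: 10^l * x = prefix + block/(10^k - 1)."""
    l, k = len(prefix), len(block)
    return (int(prefix) + Fraction(int(block), 10**k - 1)) / 10**l

x = periodic_to_fraction("7413", "157")   # the example x from the proof
```

The denominator $10^k - 1$ is the $999\cdots 9$ appearing in the proof.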
+
+\begin{eg}
+ $x = 0.01101010001010\cdots$, where $1$s appear in the prime positions, is irrational since its digits are not eventually periodic (for example, because there are arbitrarily long gaps between consecutive primes).
+\end{eg}
+\subsection{Euler's number}
+\begin{defi}[Euler's number]
+ \[
+ e = \sum_{j=0}^\infty \frac{1}{j!} = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots
+ \]
+\end{defi}
+This sum exists because the partial sums are increasing and bounded above by $1 + \frac{1}{1} + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 3$. So $2 < e < 3$.
+
+\begin{prop}
+ $e$ is irrational.
+\end{prop}
+
+\begin{proof}
+ Suppose instead $e = \frac{p}{q}$ with $p, q\in \N$. We know $q\geq 2$ since $e$ is not an integer (it is strictly between 2 and 3). Then $q!e \in \N$. But
+ \[
+ q!e = \underbrace{q! + q! + \frac{q!}{2!} + \frac{q!}{3!} + \cdots + \frac{q!}{q!}}_{n} + \underbrace{\frac{q!}{(q + 1)!} + \frac{q!}{(q + 2)!} + \cdots}_{x},
+ \]
+ where $n \in \N$. We also have
+ \[
+ x = \frac{1}{q + 1} + \frac{1}{(q + 1)(q + 2)} + \cdots.
+ \]
+ We can bound it by
+ \[
+ 0 < x < \frac{1}{q+1} +\frac{1}{(q + 1)^2} + \frac{1}{(q + 1)^3} + \cdots = \frac{1}{q + 1}\cdot \frac{1}{1 - 1/(q + 1)} = \frac{1}{q} < 1.
+ \]
+ This is a contradiction since $q!e$ must be in $\N$ but it is a sum of an integer $n$ plus a non-integer $x$.
+\end{proof}
+
+\subsection{Algebraic numbers}
+Rational numbers are ``nice'', because they can be written as fractions. Irrational numbers are bad. However, some irrational numbers are worse than others. We can further classify some irrational numbers as being \emph{transcendental}.
+\begin{defi}[Algebraic and transcendental numbers]
+ An \emph{algebraic number} is a root of a polynomial with integer coefficients (or rational coefficients). A number is \emph{transcendental} if it is not algebraic.
+\end{defi}
+
+\begin{prop}
+ All rational numbers are algebraic.
+\end{prop}
+
+\begin{proof}
+ Let $x = \frac{p}{q}$, then $x$ is a root of $qx - p = 0$.
+\end{proof}
+
+\begin{eg}
+ $\sqrt{2}$ is irrational but algebraic since it is a root of $x^2 - 2 = 0$.
+\end{eg}
+
+So do transcendental numbers exist?
+\begin{thm}
+ (Liouville 1851; Non-examinable) $L$ is transcendental, where
+ \[
+ L = \sum_{n = 1}^\infty \frac{1}{10^{n!}} = 0.11000100\cdots
+ \]
+ with $1$s in the factorial positions.
+\end{thm}
+
+\begin{proof}
+ Suppose instead that $L$ is algebraic, i.e.\ $f(L) = 0$ where $f(x) = a_kx^k + a_{k -1}x^{k - 1} + \cdots + a_0$ with $a_i\in \Z$, $a_k\not= 0$.
+
+ For any rational $p/q$, we have
+ \[
+ f\left(\frac{p}{q}\right) = a_k\left(\frac{p}{q}\right)^k + \cdots + a_0 = \frac{\text{integer}}{q^k}.
+ \]
+ So if $p/q$ is not a root of $f$, then $|f(p/q)| \geq q^{-k}$.
+
+ For any $m$, we can write $L = $ first $m$ terms + rest of the terms $ = s + t$.
+
+ Now consider $|f(s)| = |f(L) - f(s)|$ (since $f(L) = 0$). We have
+ \begin{align*}
+ |f(L) - f(s)| &= \left|\sum a_i(L^i - s^i)\right|\\
+ &\leq \sum |a_i(L^i - s^i)|\\
+ &= \sum |a_i|(L - s)(L^{i - 1} + \cdots + s^{i - 1})\\
+ &\leq \sum |a_i|(L - s)i\\
+ &= (L - s)\sum i|a_i|\\
+ &= tC
+ \end{align*}
+ with $C = \sum i|a_i|$.
+
+ Writing $s$ as a fraction, its denominator is at most $10^{m!}$. Also, $s$ is not a root of $f$ for $m$ large enough, since $f$ has only finitely many roots and $s\to L$. So $|f(s)| \geq 10^{-k\times m!}$. Combining with the above, we have $tC \geq 10^{-k\times m!}$.
+
+ We can bound $t$ by
+ \[
+ t = \sum_{j = m + 1}^\infty 10^{-j!} \leq \sum_{\ell = (m + 1)!}^\infty 10^{-\ell} = \frac{10}{9}10^{-(m + 1)!}.
+ \]
+ So $(10C/9)10^{-(m + 1)!} \geq 10^{-k\times m!}$. Pick $m\in \N$ so that $m > k$ and $10^{m!} > \frac{10C}{9}$. This is always possible since both $k$ and $10C/9$ are constants. Then the inequality gives $\frac{10C}{9} \geq 10^{(m + 1)! - k\times m!} = 10^{m!(m + 1 - k)} \geq 10^{m!}$, which contradicts $10^{m!} > \frac{10C}{9}$.
+\end{proof}
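The two quantitative facts driving the proof (the denominator of $s$ is at most $10^{m!}$, and the tail $t$ is tiny compared to it) can be verified for small $m$ with exact arithmetic. A sketch of ours:

```python
from fractions import Fraction
from math import factorial

def liouville_partial(m):
    # s = sum of the first m terms 10^{-j!} of Liouville's constant
    return sum(Fraction(1, 10**factorial(j)) for j in range(1, m + 1))

s3, s4 = liouville_partial(3), liouville_partial(4)
# the tail after m = 3 starts with the j = 4 term, 10^{-24}
tail_start = s4 - s3
```

Already at $m = 3$ the tail is of size about $10^{-24}$ while the denominator of $s$ is only $10^{6}$: the approximations are far better than any algebraic number allows.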
+
+\begin{thm}
+ (Hermite 1873) $e$ is transcendental.
+\end{thm}
+\begin{thm}
+ (Lindemann 1882) $\pi$ is transcendental.
+\end{thm}
+
+\section{Countability}
+After messing with numbers, we finally get back to sets. Here we are concerned with the sizes of sets. We can count how big a set is by constructing bijections. Two sets have the same number of things if there is a bijection between them. In particular, a set has $n$ things if we can biject it with $[n] = \{1, 2, 3, \cdots, n\}$.
+
+We first prove a few preliminary properties about bijecting with $[n]$ that should be obviously true.
+
+\begin{lemma}
+ If $f:[n] \to [n]$ is injective, then $f$ is bijective.
+\end{lemma}
+
+\begin{proof}
+ Perform induction on $n$: It is true for $n = 1$. Suppose $n > 1$. Let $j = f(n)$. Define $g: [n]\to [n]$ by
+ \[
+ g(j) = n,\quad g(n) = j, \quad g(i) = i \text{ otherwise}.
+ \]
+ Then $g$ is a bijection. So the map $g\circ f$ is injective. It fixes $n$, i.e.\ $g\circ f(n) = n$. So the map $h:[n - 1]\to [n - 1]$ by $h(i) = g\circ f(i)$ is well-defined and injective. So $h$ is surjective. So $h$ is bijective. So $g\circ f$ is bijective. So is $f$.
+\end{proof}
+
+\begin{cor}
+ If $A$ is a set and $f: A\to [n]$ and $g: A\to [m]$ are both bijections, then $m = n$.
+\end{cor}
+
+\begin{proof}
+ wlog assume $m \geq n$. Let $h: [n]\to [m]$ with $h(i) = i$, which is injective. Then the map $h\circ f\circ g^{-1}: [m]\to [m]$ is injective. Then by the lemma this is surjective. So $h$ must be surjective. So $n\geq m$. Hence $n = m$.
+\end{proof}
+This shows that we cannot biject a set with $[n]$ for two different values of $n$, i.e.\ a set cannot have two different sizes!
+
+\begin{defi}[Finite set and cardinality of set]
+ The set $A$ is \emph{finite} if there exists a bijection $A\to [n]$ for some $n\in\N_0$. The \emph{cardinality} or \emph{size} of $A$, written as $|A|$, is $n$. By the above corollary, this is well-defined.
+\end{defi}
+
+\begin{lemma}
+ Let $S\subseteq \N$. Then either $S$ is finite or there is a bijection $g:\N \to S$.
+\end{lemma}
+
+\begin{proof}
+ If $S\not= \emptyset$, by the well-ordering principle, there is a least element $s_1\in S$. If $S\setminus \{s_1\} \not= \emptyset$, it has a least element $s_2$. If $S\setminus \{s_1, s_2\}$ is not empty, there is a least element $s_3$. If at some point the process stops, then $S = \{s_1, s_2,\cdots, s_n\}$, which is finite. Otherwise, if it goes on forever, the map $g: \N \to S$ given by $g(i) = s_i$ is well-defined and is an injection. It is also a surjection because if $k\in S$, then $k$ is a natural number and there are at most $k$ elements of $S$ less than $k$. So $k$ will be mapped to $s_i$ for some $i\leq k$.
+\end{proof}
+
+\begin{defi}[Countable set]
+ A set $A$ is \emph{countable} if $A$ is finite or there is a bijection between $A$ and $\N$. A set $A$ is \emph{uncountable} if $A$ is not countable.
+\end{defi}
+
+This is one possible definition of countability, but there are some (often) more helpful definitions.
+\begin{thm}
+ The following are equivalent:
+ \begin{enumerate}
+ \item $A$ is countable
+ \item There is an injection from $A\to \N$
+ \item $A = \emptyset$ or there is a surjection from $\N \to A$
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ (i) $\Rightarrow$ (iii): If $A = \emptyset$, we are done. Otherwise, since $A$ is countable, there is a bijection $f: A \to S$ for some $S\subseteq \N$ ($S = [n]$ if $A$ is finite, $S = \N$ if not). For all $x\in \N$, if $x\in S$, then map $x\mapsto f^{-1}(x)$. Otherwise, map $x$ to any fixed element of $A$. This is a surjection since $\forall a\in A$, we have $f(a)\mapsto a$.
+
+ (iii) $\Rightarrow$ (ii): If $A = \emptyset$, the empty map is an injection. Otherwise, let $f: \N\to A$ be a surjection. Define a map $g: A\to \N$ by $g(a) = \min f^{-1}(\{a\})$, which exists by well-ordering. Then $g$ is an injection.
+
+ (ii) $\Rightarrow$ (i): If there is an injection $f: A\to \N$, then $f$ gives a bijection between $A$ and $S = f(A)\subseteq \N$. If $S$ is finite, so is $A$. If $S$ is infinite, there is a bijection $g$ between $S$ and $\N$. So there is a bijection $g\circ f$ between $A$ and $\N$.
+\end{proof}
+
+Often, the injection definition is the most helpful.
+
+\begin{prop}
+ The integers $\Z$ are countable.
+\end{prop}
+\begin{proof}
+ The map $f: \Z\to \N$ given by
+ \[
+ f(n) =
+ \begin{cases}
+ 2n & n > 0\\
+ 2(-n) + 1 & n \leq 0
+ \end{cases}
+ \]
+ is a bijection.
+\end{proof}
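We can sanity-check that this interleaving really hits each natural number exactly once on an initial segment (a quick illustration of ours):

```python
def f(n):
    # Z -> N: positive integers go to even numbers, the rest to odd numbers
    return 2 * n if n > 0 else 2 * (-n) + 1

images = sorted(f(n) for n in range(-4, 5))   # f on {-4, ..., 4}
```

The nine integers $-4, \ldots, 4$ map onto exactly $\{1, \ldots, 9\}$, as the bijection requires.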
+
+\begin{prop}
+ $\N \times \N$ is countable.
+\end{prop}
+\begin{proof}
+ We can map $(a, b)\mapsto 2^a3^b$ injectively by the fundamental theorem of arithmetic. So $\N\times \N$ is countable.
+
+ We can also have a bijection by counting diagonally: $(a, b) \mapsto \binom{a + b}{2} - a + 1$:
+ \begin{center}
+ \begin{tikzpicture}[scale=1.8]
+ \draw (0, 0) -- (0, 3.5);
+ \draw (0, 0) -- (3.8, 0);
+ \foreach \i in {1, 2, 3, 4} {
+ \node [left] at (0, \i - 1) {\i};
+ \node [below] at (\i - 1, 0) {\i};
+ }
+
+ \foreach \i in {1, 2, 3, 4} {
+ \foreach \j in {1, 2, 3, 4} {
+ \node [circ] at (\i - 1, \j - 1) {};
+ \node [anchor = south west] at (\i - 1, \j - 1) {\pgfmathparse{(\i + \j)*(\i + \j - 1) / 2 - \i + 1}\pgfmathprintnumber{\pgfmathresult}};
+ }
+ }
+
+ \draw [dashed] (1, 0) -- (0, 1);
+ \draw [dashed] (2, 0) -- (0, 2);
+ \draw [dashed] (3, 0) -- (0, 3);
+ \draw [dashed] (3.8, 0.2) -- (.5, 3.5);
+ \draw [dashed] (3.8, 1.2) -- (1.5, 3.5);
+ \end{tikzpicture}
+ \end{center}
+\end{proof}
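The diagonal-counting formula can be checked on the triangle $\{(a, b) : a + b \leq N\}$, where it should hit each of $1, \ldots, \binom{N}{2}$ exactly once (illustration only):

```python
from math import comb

def pair(a, b):
    # (a, b) -> C(a + b, 2) - a + 1, counting along the diagonals a + b = const
    s = a + b
    return s * (s - 1) // 2 - a + 1

N = 10
values = sorted(pair(a, s - a) for s in range(2, N + 1) for a in range(1, s))
```

Each diagonal $a + b = s$ contributes the $s - 1$ consecutive values just after those of the previous diagonals, which is exactly the dashed-line counting in the picture.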
+
+Since $\Z$ is countable, we have an injection $\Z\to \N$, so there is an injection from $\Z\times \N\to \N\times \N \to \N$. So $\Z \times \N$ is countable. Since every rational can be written as $\frac{m}{n}$ with $(m, n)\in \Z\times\N$, the rationals inject into $\Z\times \N$. So $\Q$ is countable.
+
+\begin{prop}
+ If $A\to B$ is injective and $B$ is countable, then $A$ is countable (since we can inject $B \to \N$).
+\end{prop}
+
+\begin{prop}
+ $\Z^k$ is countable for all $k\in \N$.
+\end{prop}
+
+\begin{proof}
+ Proof by induction: $\Z$ is countable. If $\Z^k$ is countable, $\Z^{k + 1} = \Z\times \Z^k$. Since we can map $\Z^k \to \N$ injectively by the induction hypothesis, we can map injectively $\Z^{k + 1}\to \Z\times \N$, and we can map that to $\N$ injectively.
+\end{proof}
+
+\begin{thm}
+ A countable union of countable sets is countable.
+\end{thm}
+
+\begin{proof}
+ Let $I$ be a countable index set, and for each $\alpha \in I$, let $A_\alpha$ be a countable set. We need to show that $\bigcup_{\alpha\in I} A_\alpha$ is countable. It is enough to construct an injection $h: \bigcup_{\alpha\in I} A_\alpha \to \N\times \N$ because $\N\times\N$ is countable. We know that $I$ is countable. So there exists an injection $f: I\to \N$. For each $\alpha\in I$, there exists an injection $g_\alpha: A_\alpha \to\N$.
+
+ For $a\in \bigcup A_\alpha$, let $m = \min\{f(\alpha): \alpha\in I, a\in A_{\alpha}\}$, and let $\alpha$ be the (unique, since $f$ is injective) index with $f(\alpha) = m$. We then set $h(a) = (m, g_\alpha(a))$, and this is an injection.
+\end{proof}
+
+\begin{prop}
+ $\Q$ is countable.
+\end{prop}
+\begin{proof}
+ It can be proved in two ways:
+ \begin{enumerate}
+ \item $\Q = \bigcup_{n\geq 1} \frac{1}{n}\Z = \bigcup_{n\geq 1} \left\{\frac{m}{n}: m\in \Z\right\}$, which is a countable union of countable sets.
+ \item $\Q$ can be mapped injectively to $\Z\times \N$ by $a/b\mapsto (a, b)$, where $b > 0$ and $(a, b) = 1$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{thm}
+ The set of algebraic numbers is countable.
+\end{thm}
+\begin{proof}
+ Let $\mathcal{P}_k$ be the set of polynomials of degree $k$ with integer coefficients. Then $a_kx^k + a_{k - 1}x^{k - 1} + \cdots + a_0 \mapsto (a_k, a_{k - 1}, \cdots, a_0)$ is an injection $\mathcal{P}_k \to \Z^{k + 1}$. Since $\Z^{k + 1}$ is countable, so is $\mathcal{P}_k$.
+
+ Let $\mathcal{P}$ be the set of all polynomials with integer coefficients. Then clearly $\mathcal{P} = \bigcup \mathcal{P}_k$. This is a countable union of countable sets. So $\mathcal{P}$ is countable.
+
+ For each polynomial $p \in \mathcal{P}$, let $R_p$ be the set of its roots. Then $R_p$ is finite and thus countable. Hence $\bigcup_{p\in \mathcal{P}} R_p$, the set of all algebraic numbers, is countable.
+\end{proof}
+
+\begin{thm}
+ The set of real numbers $\R$ is uncountable.
+\end{thm}
+
+\begin{proof}
+ (Cantor's diagonal argument) Assume $\R$ is countable. Then we can list the reals as $r_1, r_2, r_3, \cdots$ so that every real number is in the list. Write each $r_n$ uniquely in decimal form (i.e.\ without infinite trailing $9$s). List them out vertically:
+ \begin{align*}
+ r_1 &= n_1\,.\,d_{11}\,d_{12}\,d_{13}\,d_{14}\cdots\\
+ r_2 &= n_2\,.\,d_{21}\,d_{22}\,d_{23}\,d_{24}\cdots\\
+ r_3 &= n_3\,.\,d_{31}\,d_{32}\,d_{33}\,d_{34}\cdots\\
+ r_4 &= n_4\,.\,d_{41}\,d_{42}\,d_{43}\,d_{44}\cdots
+ \end{align*}
+ Define $r = 0\,.\,d_1\,d_2\,d_3\,d_4\cdots$ by $d_n =
+ \begin{cases}
+ 0 & d_{nn}\not= 0\\
+ 1 & d_{nn}=0
+ \end{cases}$. Then by construction, $r$ differs from the $n$th number in the list in the $n$th digit, and so is different from every number in the list (note that $r$ has no trailing $9$s, so it is already in the unique form). Hence $r$ is a real number not in the list. Contradiction.
+\end{proof}
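The diagonal construction itself is a tiny program: given (finitely many rows of) a purported list of digit sequences, it produces a sequence that differs from the $n$th row in the $n$th digit (an illustration of ours):

```python
def diagonal(rows):
    # rows[n][n] is the n-th digit of the n-th number; output 0 unless it is 0
    return [0 if row[n] != 0 else 1 for n, row in enumerate(rows)]

rows = [[1, 2, 3], [0, 0, 0], [9, 9, 9]]
d = diagonal(rows)   # differs from row n in position n
```

No matter what list is supplied, the output disagrees with every row somewhere, which is the whole content of the proof.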
+
+\begin{cor}
+ There are uncountably many transcendental numbers.
+\end{cor}
+
+\begin{proof}
+ If not, then the reals, being the union of the countable set of transcendentals and the countable set of algebraic numbers, would be countable. But the reals are uncountable.
+\end{proof}
+This is an easy but non-constructive proof that transcendental numbers exist. ``If we can't find one, find lots!'' (it is debatable whether this proof is constructive or not. Some argue that we can use it to construct a transcendental number by listing all the algebraic numbers and performing the diagonal argument to obtain a number not in the list, i.e.\ a transcendental number. So it is in fact constructive)
+
+\begin{eg}
+ Let $\mathcal{F}_k = \{Y\subseteq \N: |Y| = k\}$, i.e.\ the set of all subsets of $\N$ of size $k$. We can inject $\mathcal{F}_k \to \Z^k$ by listing the elements in increasing order, e.g.\ $\{1, 3, 7\}\mapsto (1, 3, 7)$. So it is countable. So $\mathcal{F} = \bigcup_{k\geq 0}\mathcal{F}_k$, the set of all finite subsets of $\N$, is countable.
+\end{eg}
+
+\begin{eg}
+ Recall $\mathcal{P}(X) = \{Y: Y\subseteq X\}$. Now suppose $\mathcal{P}(\N)$ is countable. Let $S_1, S_2, S_3, \cdots$ be a list of all subsets of $\N$. Let $S = \{n: n\not\in S_n\}$. Then $S$ differs from each $S_n$ on the element $n$, so $S$ is not in the list. Contradiction. So $\mathcal{P}(\N)$ is uncountable.
+\end{eg}
+
+\begin{eg}
+ Let $\Sigma$ be the set of all functions $\N\to \N$ (i.e.\ the set of all integer sequences). If $\Sigma$ were countable, we could list it as $f_1, f_2, f_3\cdots$. But then consider $f$ given by $f(n) =
+ \begin{cases}
+ 1 & f_n(n) \not= 1\\
+ 2 & f_n(n) = 1
+ \end{cases}$. Again $f$ is not in the list. Contradiction. So $\Sigma$ is uncountable.
+
+ Alternatively, there is a bijection between $\mathcal{P}(\N)$ and the set of 0, 1 sequences by $S\mapsto$ the indicator function. So we can inject $\mathcal{P}(\N) \to \Sigma$ by $S\mapsto $ indicator function $+1$. So $\Sigma$ cannot be countable (since $\mathcal{P}(\N)$ is uncountable).
+
+ Or, we can let $\Sigma^*\subseteq \Sigma$ be the set of bijections from $\N\to\N$. Let $\Sigma^{**}\subseteq \Sigma^*$ be the bijections of the special form: for every $n$,
+ \[
+ \text{either}
+ \begin{cases}
+ f(2n - 1) = 2n - 1\\
+ f(2n) = 2n
+ \end{cases}\text{, or }
+ \begin{cases}
+ f(2n - 1) = 2n\\
+ f(2n) = 2n - 1
+ \end{cases},
+ \]
+ i.e.\ for every odd-even pair, we either flip them or keep them the same.
+
+ But there is a bijection between $\Sigma^{**}$ and the set of $0,1$ sequences: if the $n$th term in the sequence is $0$, keep the $n$th pair fixed; if it is $1$, flip it. Hence $\Sigma^{**}$ is uncountable, and so is the larger set $\Sigma$.
+\end{eg}
+
+\begin{thm}
+ Let $A$ be a set. Then there is no surjection from $A\to \mathcal{P}(A)$.
+\end{thm}
+
+\begin{proof}
+ Suppose $f: A\to \mathcal{P}(A)$ is surjective. Let $S = \{a\in A: a\not\in f(a)\}$. Since $f$ is surjective, there must exist $s\in A$ such that $f(s) = S$. If $s\in S$, then $s\not\in S$ by the definition of $S$. Conversely, if $s\not\in S$, then $s\in S$. Contradiction. So $f$ cannot exist.
+\end{proof}
+
+This shows that there are infinitely many different possible ``infinite sizes'' of sets.
+
+We conclude by two theorems that we will not prove.
+\begin{thm}[Cantor-Schr\"oder-Bernstein theorem]
+ Suppose there are injections $A\to B$ and $B\to A$. Then there's a bijection $A\leftrightarrow B$.
+\end{thm}
+
+\noindent \textbf{Continuum hypothesis.} There is no set whose size lies strictly between those of $\N$ and $\R$. In 1963, Paul Cohen proved that this statement can neither be proved nor disproved in ZFC. The proof can be found in the Part III Topics in Set Theory course.
+\end{document}
diff --git a/books/cam/IA_M/vectors_and_matrices.tex b/books/cam/IA_M/vectors_and_matrices.tex
new file mode 100644
index 0000000000000000000000000000000000000000..964b290d46b2eb980331f63971b9ee83a2e0948f
--- /dev/null
+++ b/books/cam/IA_M/vectors_and_matrices.tex
@@ -0,0 +1,3484 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IA}
+\def\nterm {Michaelmas}
+\def\nyear {2014}
+\def\nlecturer {N.\ Peake}
+\def\ncourse {Vectors and Matrices}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small \noindent\textbf{Complex numbers}\\
+Review of complex numbers, including complex conjugate, inverse, modulus, argument and Argand diagram. Informal treatment of complex logarithm, $n$-th roots and complex powers. de Moivre's theorem.\hspace*{\fill}[2]
+
+\vspace{10pt}
+\noindent\textbf{Vectors}\\
+Review of elementary algebra of vectors in $\R^3$, including scalar product. Brief discussion of vectors in $\R^n$ and $\C^n$; scalar product and the Cauchy-Schwarz inequality. Concepts of linear span, linear independence, subspaces, basis and dimension.
+
+\vspace{5pt}
+\noindent Suffix notation: including summation convention, $\delta_{ij}$ and $\varepsilon_{ijk}$. Vector product and triple product: definition and geometrical interpretation. Solution of linear vector equations. Applications of vectors to geometry, including equations of lines, planes and spheres.\hspace*{\fill}[5]
+
+\vspace{10pt}
+\noindent\textbf{Matrices}\\
+Elementary algebra of $3\times 3$ matrices, including determinants. Extension to $n\times n$ complex matrices. Trace, determinant, non-singular matrices and inverses. Matrices as linear transformations; examples of geometrical actions including rotations, reflections, dilations, shears; kernel and image.\hspace*{\fill}[4]
+
+\vspace{5pt}
+\noindent Simultaneous linear equations: matrix formulation; existence and uniqueness of solutions, geometric interpretation; Gaussian elimination.\hspace*{\fill}[3]
+
+\vspace{5pt}
+\noindent Symmetric, anti-symmetric, orthogonal, hermitian and unitary matrices. Decomposition of a general matrix into isotropic, symmetric trace-free and antisymmetric parts.\hspace*{\fill}[1]
+
+\vspace{10pt}
+\noindent\textbf{Eigenvalues and Eigenvectors}\\
+Eigenvalues and eigenvectors; geometric significance.\hspace*{\fill}[2]
+
+\vspace{5pt}
+\noindent Proof that eigenvalues of hermitian matrix are real, and that distinct eigenvalues give an orthogonal basis of eigenvectors. The effect of a general change of basis (similarity transformations). Diagonalization of general matrices: sufficient conditions; examples of matrices that cannot be diagonalized. Canonical forms for $2 \times 2$ matrices.\hspace*{\fill}[5]
+
+\vspace{5pt}
+\noindent Discussion of quadratic forms, including change of basis. Classification of conics, cartesian and polar forms.\hspace*{\fill}[1]
+
+\vspace{5pt}
+\noindent Rotation matrices and Lorentz transformations as transformation groups.\hspace*{\fill}[1]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Vectors and matrices form the language in which a lot of mathematics is written. In physics, many quantities such as position and momentum are expressed as vectors. Heisenberg also formulated quantum mechanics in terms of vectors and matrices. In statistics, one might pack all the results of all experiments into a single vector, and work with a large vector instead of many small quantities. In group theory, matrices are used to represent the symmetries of space (as well as many other groups).
+
+So what is a vector? Vectors are very general objects, and can in theory represent very complex objects. However, in this course, our focus is on vectors in $\R^n$ or $\C^n$. We can think of each of these as an array of $n$ real or complex numbers. For example, $(1, 6, 4)$ is a vector in $\R^3$. These vectors are added in the obvious way. For example, $(1, 6, 4) + (3, 5, 2) = (4, 11, 6)$. We can also multiply vectors by numbers, say $2(1, 6, 4) = (2, 12, 8)$. Often, these vectors represent points in an $n$-dimensional space.
+
+Matrices, on the other hand, represent \emph{functions} between vectors, i.e.\ a function that takes in a vector and outputs another vector. These, however, are not arbitrary functions. Instead matrices represent \emph{linear functions}. These are functions that satisfy the equality $f(\lambda \mathbf{x} + \mu \mathbf{y}) = \lambda f(\mathbf{x}) + \mu f(\mathbf{y})$ for arbitrary numbers $\lambda, \mu$ and vectors $\mathbf{x}, \mathbf{y}$. It is important to note that the function $\mathbf{x} \mapsto \mathbf{x} + \mathbf{c}$ for some constant vector $\mathbf{c}$ is \emph{not} linear according to this definition, even though it might look linear.
+
+It turns out that for each linear function from $\R^n$ to $\R^m$, we can represent the function uniquely by an $m\times n$ array of numbers, which is what we call the \emph{matrix}. Expressing a linear function as a matrix allows us to conveniently study many of its properties, which is why we usually talk about matrices instead of the function itself.
+
+\section{Complex numbers}
+In $\R$, not every polynomial equation has a solution. For example, there does not exist any $x$ such that $x^2 + 1 = 0$, since for any $x$, $x^2$ is non-negative, and $x^2 + 1$ can never be $0$. To solve this problem, we introduce the ``number'' $i$ that satisfies $i^2 = -1$. Then $i$ is a solution to the equation $x^2 + 1 = 0$. Similarly, $-i$ is also a solution to the equation.
+
+We can add and multiply numbers with $i$. For example, we can obtain numbers $3 + i$ or $1 + 3i$. These numbers are known as \emph{complex numbers}. It turns out that by adding this single number $i$, \emph{every} polynomial equation will have a root. In fact, for an $n$th order polynomial equation, we will later see that there will always be $n$ roots, if we account for multiplicity. We will go into details in Chapter~\ref{sec:eigen}.
+
+Apart from solving equations, complex numbers have a lot of rather important applications. For example, they are used in electronics to represent alternating currents, and form an integral part in the formulation of quantum mechanics.
+
+\subsection{Basic properties}
+\begin{defi}[Complex number]
+ A \emph{complex number} is a number $z\in \C$ of the form $z = a + ib$ with $a, b\in \R$, where $i^2=-1$. We write $a = \Re(z)$ and $b = \Im(z)$.
+\end{defi}
+
+We have
+\begin{align*}
+ z_1\pm z_2 &= (a_1 + ib_1)\pm (a_2 + ib_2)\\
+ &= (a_1\pm a_2) + i(b_1 \pm b_2)\\
+ z_1z_2 &= (a_1 + ib_1)(a_2 + ib_2)\\
+ &= (a_1a_2 - b_1b_2) + i(b_1a_2 + a_1b_2)\\
+ z^{-1} &= \frac{1}{a + ib}\\
+ &= \frac{a - ib}{a^2 + b^2}
+\end{align*}
+\begin{defi}[Complex conjugate]
+ The \emph{complex conjugate} of $z = a+ ib$ is $a - ib$. It is written as $\bar{z}$ or $z^*$.
+\end{defi}
+
+It is often helpful to visualize complex numbers in a diagram:
+\begin{defi}[Argand diagram]
+ An \emph{Argand diagram} is a diagram in which a complex number $z = x + iy$ is represented by a vector $\mathbf{p}=\begin{pmatrix}x\\y\end{pmatrix}$. Addition of complex numbers corresponds to vector addition, and $\bar{z}$ is the reflection of $z$ in the $x$-axis.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {Re};
+ \draw [->] (0, -1) -- (0, 2) node [above] {Im};
+ \draw [->] (0, 0) -- (.5, 1) node [above] {$z_1$};
+ \draw [->] (0, 0) -- (2, .7) node [above] {$z_2$};
+ \draw [->] (0, 0) -- (2, -.7) node [above] {$\bar z_2$};
+ \draw [->] (0, 0) -- (2.5, 1.7) node [anchor=south west] {$z_1 + z_2$};
+ \draw [dashed] (.5, 1) -- (2.5, 1.7) -- (2, .7);
+ \end{tikzpicture}
+\end{center}
+\end{defi}
+
+\begin{defi}[Modulus and argument of complex number]
+ The \emph{modulus} of $z = x + iy$ is $r = |z| = \sqrt{x^2 + y^2}$. The \emph{argument} is $\theta = \arg z = \tan^{-1} (y/x)$ (taking into account the quadrant in which $z$ lies). The modulus is the length of the vector in the Argand diagram, and the argument is the angle between $z$ and the real axis. We have
+ \[
+ z = r(\cos\theta + i\sin \theta)
+ \]
+ Clearly the pair $(r, \theta)$ uniquely describes a complex number $z$, but each complex number $z\in \C$ can be described by many different $\theta$ since $\sin (2\pi + \theta) = \sin \theta$ and $\cos(2\pi + \theta) = \cos\theta$. Often we take the \emph{principal value} $\theta \in (-\pi, \pi]$.
+\end{defi}
+
+When writing $z_i = r_i(\cos\theta_i + i\sin \theta_i)$, we have
+\begin{align*}
+ z_1z_2 &= r_1r_2[(\cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2) + i(\sin\theta_1\cos\theta_2 + \sin\theta_2\cos\theta_1)]\\
+ &= r_1r_2[\cos(\theta_1 + \theta_2) + i\sin(\theta_1+\theta_2)]
+\end{align*}
+In other words, when multiplying complex numbers, the moduli multiply and the arguments add.
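This is easy to check numerically with Python's built-in complex numbers (a quick illustration; `cmath.rect` builds a complex number from a given modulus and argument):

```python
import cmath

z1 = cmath.rect(2, 0.5)   # modulus 2, argument 0.5
z2 = cmath.rect(3, 1.0)   # modulus 3, argument 1.0
w = z1 * z2
# moduli multiply: |w| = 6; arguments add: arg w = 1.5
```

(For arguments whose sum leaves $(-\pi, \pi]$, `cmath.phase` returns the principal value, i.e.\ the sum shifted by a multiple of $2\pi$.)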
+
+\begin{prop}
+ $z\bar{z} = a^2 + b^2 = |z|^2$.
+\end{prop}
+
+\begin{prop}
+ $z^{-1} = \bar{z}/|z|^2$.
+\end{prop}
+
+\begin{thm}[Triangle inequality]
+ For all $z_1, z_2 \in \C$, we have
+ \[
+ |z_1 + z_2| \leq |z_1| + |z_2|.
+ \]
+ Alternatively (the reverse triangle inequality), we have $|z_1 - z_2|\geq \big||z_1| - |z_2|\big|$.
+\end{thm}
+
+\subsection{Complex exponential function}
+Exponentiation was originally defined for integer powers as repeated multiplication. This is then extended to rational powers using roots. We can also extend this to any real number since real numbers can be approximated arbitrarily accurately by rational numbers. However, what does it mean to take an exponent of a complex number?
+
+To do so, we use the Taylor series definition of the exponential function:
+\begin{defi}[Exponential function]
+ The \emph{exponential function} is defined as
+ \[
+ \exp (z) = e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots = \sum_{n = 0}^\infty \frac{z^n}{n!}.
+ \]
+\end{defi}
+This automatically allows taking exponents of arbitrary complex numbers. Having defined exponentiation this way, we want to check that it satisfies the usual properties, such as $\exp(z + w) = \exp(z)\exp(w)$. To prove this, we will first need a helpful lemma.
+
+\begin{lemma}
+ \[
+ \sum_{n = 0}^\infty\sum_{m = 0}^\infty a_{mn} = \sum_{r = 0}^\infty\sum_{m = 0}^r a_{r - m, m}
+ \]
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ \sum_{n = 0}^\infty\sum_{m = 0}^\infty a_{mn} &= a_{00} + a_{01} + a_{02} + \cdots\\
+ &+ a_{10} + a_{11} + a_{12} + \cdots\\
+ &+ a_{20} + a_{21} + a_{22} + \cdots\\
+ &= (a_{00}) + (a_{10} + a_{01}) + (a_{20} + a_{11} + a_{02}) + \cdots\\
+ &= \sum_{r = 0}^\infty\sum_{m = 0}^r a_{r - m, m} \qedhere
+ \end{align*}
+\end{proof}
+This is not exactly a rigorous proof, since we should not hand-wave about infinite sums so casually. In fact, we did not even show that the definition of $\exp(z)$ is well-defined for all $z$, since the sum might diverge. All of this will be done properly in the IA Analysis I course.
+
+\begin{thm}
+ $\exp(z_1)\exp(z_2) = \exp(z_1 + z_2)$
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ \exp(z_1)\exp(z_2) &= \sum_{n = 0}^\infty\sum_{m = 0}^\infty \frac{z_1^m}{m!}\frac{z_2^n}{n!}\\
+ &= \sum_{r = 0}^\infty\sum_{m = 0}^r \frac{z_1^{r - m}}{(r - m)!}\frac{z_2^m}{m!}\\
+ &= \sum_{r = 0}^\infty\frac{1}{r!}\sum_{m = 0}^r \frac{r!}{(r - m)!m!}z_1^{r - m}z_2^m\\
+ &= \sum_{r = 0}^\infty\frac{(z_1 + z_2)^r}{r!} \qedhere
+ \end{align*}
+\end{proof}
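Since the series converges rapidly, the theorem can also be checked numerically with partial sums (a sketch, assuming nothing beyond the series definition):

```python
from math import factorial

def exp_series(z, terms=40):
    # partial sum of the defining series 1 + z + z^2/2! + ...
    return sum(z**n / factorial(n) for n in range(terms))

z1, z2 = 0.3 + 0.7j, -0.2 + 0.4j
# exp(z1) exp(z2) = exp(z1 + z2), up to truncation error
assert abs(exp_series(z1) * exp_series(z2) - exp_series(z1 + z2)) < 1e-12
```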
+
+To define the sine and cosine functions, instead of referring to ``angles'' (since it does not make much sense to refer to complex ``angles''), we again use a series definition.
+\begin{defi}[Sine and cosine functions]
+ Define, for all $z\in \C$,
+ \begin{alignat*}{2}
+ \sin z &= \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}z^{2n+1} &\;= z - \frac{1}{3!}z^3 + \frac{1}{5!}z^5 + \cdots\\
+ \cos z &= \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}z^{2n} &\;= 1 - \frac{1}{2!}z^2 + \frac{1}{4!}z^4 + \cdots
+ \end{alignat*}
+\end{defi}
+
+One very important result is the relationship between $\exp$, $\sin$ and $\cos$.
+\begin{thm}
+ $e^{iz} = \cos z + i\sin z$.
+\end{thm}
+Alternatively, since $\sin (-z) = -\sin z$ and $\cos(-z) = \cos z$, we have
+\begin{align*}
+ \cos z &= \frac{e^{iz} + e^{-iz}}{2},\\
+ \sin z &= \frac{e^{iz} - e^{-iz}}{2i}.
+\end{align*}
+
+\begin{proof}
+ \begin{align*}
+ e^{iz} &= \sum_{n=0}^\infty \frac{i^n}{n!}z^n\\
+ &= \sum_{n=0}^\infty \frac{i^{2n}}{(2n)!}z^{2n} + \sum_{n=0}^\infty \frac{i^{2n+1}}{(2n+1)!}z^{2n+1}\\
+ &= \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}z^{2n} + i \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}z^{2n+1}\\
+ &= \cos z + i\sin z \qedhere
+ \end{align*}
+\end{proof}
+Thus we can write $z = r(\cos\theta + i\sin\theta) = re^{i\theta}$.
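The identity holds for genuinely complex arguments as well, as a quick check with Python's cmath suggests (illustration only):

```python
import cmath

z = 0.8 - 1.3j  # a non-real argument
# e^{iz} = cos z + i sin z
assert abs(cmath.exp(1j * z) - (cmath.cos(z) + 1j * cmath.sin(z))) < 1e-12
# a famous special case: e^{i*pi} + 1 = 0
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-12
```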
+
+\subsection{Roots of unity}
+\begin{defi}[Roots of unity]
+ The $n$th \emph{roots of unity} are the roots of the equation $z^n = 1$ for $n\in \N$. Since this is a polynomial of degree $n$, there are $n$ roots of unity. In fact, the $n$th roots of unity are $\exp\left(2\pi i\frac{k}{n}\right)$ for $k = 0, 1, \ldots, n - 1$.
+\end{defi}
+
+\begin{prop}
+ If $\omega = \exp\left(\frac{2\pi i}{n}\right)$, then $1 + \omega + \omega^2 + \cdots + \omega^{n - 1} = 0$
+\end{prop}
+
+\begin{proof}
+ Two proofs are provided:
+ \begin{enumerate}
+ \item Consider the equation $z^n = 1$. The sum of all roots is minus the coefficient of $z^{n-1}$. Since the coefficient of $z^{n-1}$ is 0, the sum of all roots $= 1 + \omega + \omega^2 + \cdots + \omega^{n-1} = 0$.
+ \item Since $\omega^n - 1 = (\omega - 1)(1 + \omega + \cdots + \omega^{n - 1})$ and $\omega \not= 1$, dividing by $(\omega - 1)$, we have $1 + \omega + \cdots + \omega^{n-1} = (\omega^n - 1)/(\omega - 1) = 0$. \qedhere
+ \end{enumerate}
+\end{proof}
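A quick numerical check of the proposition (illustration only, with an arbitrary $n$):

```python
import cmath

n = 7
omega = cmath.exp(2j * cmath.pi / n)
# 1 + omega + omega^2 + ... + omega^{n-1} = 0
assert abs(sum(omega**k for k in range(n))) < 1e-12
# each power of omega is indeed an nth root of unity
assert all(abs((omega**k)**n - 1) < 1e-9 for k in range(n))
```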
+
+\subsection{Complex logarithm and power}
+\begin{defi}[Complex logarithm]
+ The \emph{complex logarithm} $w = \log z$ is a solution to $e^w = z$. Writing $z = re^{i\theta}$, we have $\log z = \log(re^{i\theta}) = \log r + i\theta$. This is multi-valued for different values of $\theta$ and, as above, we should select the $\theta$ that satisfies $-\pi < \theta \leq \pi$.
+\end{defi}
+\begin{eg}
+ $\log 2i = \log 2 + i\frac{\pi}{2}$
+\end{eg}
+
+\begin{defi}[Complex power]
+ The \emph{complex power} $z^\alpha$ for $z, \alpha\in \C$ is defined as $z^\alpha = e^{\alpha\log z}$. This, again, can be multi-valued, as $z^\alpha = e^{\alpha\log|z|}e^{i\alpha\theta}e^{2in\pi\alpha}$ (there are finitely many values if $\alpha\in\Q$, infinitely many otherwise). Nevertheless, we make $z^\alpha$ single-valued by insisting $-\pi < \theta \leq \pi$.
+\end{defi}
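Python's cmath happens to take exactly this principal branch, so it can be used to illustrate both definitions (the specific numbers are arbitrary):

```python
import cmath, math

# log(2i) = log 2 + i*pi/2, matching the example above
w = cmath.log(2j)
assert abs(w.real - math.log(2)) < 1e-12
assert abs(w.imag - math.pi / 2) < 1e-12

# the complex power z^alpha = e^{alpha log z} on the principal branch
z, alpha = 1 + 1j, 0.5 + 0.25j
assert abs(z**alpha - cmath.exp(alpha * cmath.log(z))) < 1e-9
```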
+
+\subsection{De Moivre's theorem}
+\begin{thm}[De Moivre's theorem]
+ \[
+ \cos n\theta + i\sin n\theta = (\cos\theta + i\sin\theta)^n.
+ \]
+\end{thm}
+\begin{proof}
+ First prove the $n \geq 0$ case by induction. The $n = 0$ case is true since it merely reads $1 = 1$. Assuming the result holds for $n$, we have
+ \begin{align*}
+ (\cos\theta + i\sin\theta)^{n + 1} &= (\cos\theta + i\sin\theta)^n (\cos\theta + i\sin\theta)\\
+ &= (\cos n\theta + i\sin n\theta )(\cos\theta + i\sin\theta)\\
+ &= \cos(n+1)\theta + i\sin(n+1)\theta
+ \end{align*}
+ If $n < 0$, let $m = -n$. Then $m > 0$ and
+ \begin{align*}
+ (\cos\theta + i\sin\theta)^{-m} &= (\cos m\theta + i\sin m\theta)^{-1}\\
+ &= \frac{\cos m\theta - i\sin m\theta}{(\cos m\theta + i\sin m\theta)(\cos m\theta - i\sin m\theta)}\\
+ &= \frac{\cos (-m\theta) + i\sin (-m\theta)}{\cos^2 m\theta + \sin^2 m\theta}\\
+ &= \cos (-m\theta) + i\sin (-m\theta)\\
+ &= \cos n\theta + i\sin n\theta \qedhere
+ \end{align*}
+\end{proof}
+Note that ``$\cos n\theta + i\sin n\theta = e^{in\theta} = (e^{i\theta})^n = (\cos \theta + i\sin \theta)^n$'' is \emph{not} a valid proof of De Moivre's theorem, since we do not know yet that $e^{in\theta} = (e^{i\theta})^n$. In fact, De Moivre's theorem tells us that this is a valid rule to apply.
+
+\begin{eg}
+ We have $\cos 5\theta + i\sin5\theta = (\cos\theta + i\sin\theta)^5$. By binomial expansion of the RHS and taking real and imaginary parts, we have
+ \begin{align*}
+ \cos 5\theta &= 5\cos\theta - 20\cos^3\theta + 16\cos^5\theta\\
+ \sin 5\theta &= 5\sin\theta - 20\sin^3\theta + 16\sin^5\theta
+ \end{align*}
+\end{eg}
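The quintuple-angle formulas can be sanity-checked numerically (illustration only, at an arbitrary angle):

```python
import math

theta = 0.37  # arbitrary angle
c, s = math.cos(theta), math.sin(theta)
# cos 5t = 16 cos^5 t - 20 cos^3 t + 5 cos t, and similarly for sin
assert abs(math.cos(5 * theta) - (16 * c**5 - 20 * c**3 + 5 * c)) < 1e-12
assert abs(math.sin(5 * theta) - (16 * s**5 - 20 * s**3 + 5 * s)) < 1e-12
```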
+
+\subsection{Lines and circles in \texorpdfstring{$\C$}{C}}
+Since complex numbers can be regarded as points on the 2D plane, we can often use complex numbers to represent two-dimensional objects.
+
+Suppose that we want to represent a straight line through $z_0 \in \C$ parallel to $w\in \C$. The obvious way to do so is to let $z = z_0 + \lambda w$ where $\lambda$ can take any real value. However, this is not an optimal way of doing so, since we are not using the power of complex numbers fully. This is just the same as the vector equation for straight lines, which you may or may not know from your A levels.
+
+Instead, we arrange the equation to give $\lambda = \frac{z - z_0}{w}$. We take the complex conjugate of this expression to obtain $\bar{\lambda} = \frac{\bar{z} - \bar{z_0}}{\bar{w}}$. The trick here is to realize that $\lambda$ is a real number. So we must have $\lambda = \bar \lambda$. This means that we must have
+\begin{align*}
+ \frac{z - z_0}{w} &= \frac{\bar{z} - \bar{z_0}}{\bar{w}}\\
+ z\bar w - \bar z w &= z_0 \bar w - \bar z_0 w.
+\end{align*}
+
+\begin{thm}[Equation of straight line]
+ The equation of a straight line through $z_0$ and parallel to $w$ is given by
+ \[
+ z\bar w - \bar z w = z_0 \bar w - \bar z_0 w.
+ \]
+\end{thm}
+
+The equation of a circle, on the other hand, is rather straightforward. Suppose that we want a circle with center $c\in \C$ and radius $\rho \in \R^+$. By definition of a circle, a point $z$ is on the circle iff its distance to $c$ is $\rho$, i.e.\ $|z - c| = \rho$. Recalling that $|z|^2 = z\bar z$, we obtain,
+\begin{align*}
+ |z - c| &= \rho\\
+ |z - c|^2 &= \rho^2\\
+ (z - c)(\bar z - \bar c) &= \rho^2\\
+ z\bar z - \bar c z - c\bar z &= \rho^2 - c\bar c
+\end{align*}
+\begin{thm}
+ The general equation of a circle with center $c\in \C$ and radius $\rho \in \R^+$ can be given by
+ \[
+ z\bar z - \bar c z - c\bar z = \rho^2 - c\bar c.
+ \]
+\end{thm}
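Both equations can be verified numerically at sample points (a sketch with arbitrary parameters, not part of the notes):

```python
import cmath

# line through z0 parallel to w: points z = z0 + lambda*w satisfy the equation
z0, w = 1 + 2j, 3 - 1j
for lam in (-1.5, 0.0, 2.0):
    z = z0 + lam * w
    assert abs((z * w.conjugate() - z.conjugate() * w)
               - (z0 * w.conjugate() - z0.conjugate() * w)) < 1e-12

# circle with centre c and radius rho: points z = c + rho*e^{it} satisfy it
c, rho = 2 - 1j, 3.0
for t in (0.1, 1.0, 2.5):
    z = c + rho * cmath.exp(1j * t)
    assert abs((z * z.conjugate() - c.conjugate() * z - c * z.conjugate())
               - (rho**2 - c * c.conjugate())) < 1e-9
```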
+\section{Vectors}
+We might have first learned vectors as arrays of numbers, and then defined addition and multiplication in terms of the individual numbers in the vector. This however, is not what we are going to do here. The array of numbers is just a \emph{representation} of the vector, instead of the vector itself.
+
+Here, we will define vectors in terms of what they are, and then the various operations are defined axiomatically according to their properties.
+\subsection{Definition and basic properties}
+\begin{defi}[Vector space]
+ A \emph{vector space} over $\R$ or $\C$ is a collection of vectors $\mathbf{v}\in V$, together with two operations: addition of two vectors and multiplication of a vector with a scalar (i.e.\ a number from $\R$ or $\C$, respectively).
+
+ \emph{Vector addition} has to satisfy the following axioms:
+ \begin{enumerate}
+ \item $\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a}$ \hfill (commutativity)
+ \item $(\mathbf{a} + \mathbf{b}) + \mathbf{c} = \mathbf{a} + (\mathbf{b} + \mathbf{c})$ \hfill (associativity)
+ \item There is a vector $\mathbf{0}$ such that $\mathbf{a} + \mathbf{0} = \mathbf{a}$. \hfill (identity)
+ \item For all vectors $\mathbf{a}$, there is a vector $(-\mathbf{a})$ such that $\mathbf{a} + (-\mathbf{a}) = \mathbf{0}$ \hfill (inverse)
+ \end{enumerate}
+ \emph{Scalar multiplication} has to satisfy the following axioms:
+ \begin{enumerate}
+ \item $\lambda(\mathbf{a + b}) = \lambda\mathbf{a} + \lambda\mathbf{b}$.
+ \item $(\lambda + \mu)\mathbf{a} = \lambda\mathbf{a} + \mu\mathbf{a}$.
+ \item $\lambda(\mu\mathbf{a}) = (\lambda\mu)\mathbf{a}$.
+ \item $1\mathbf{a = a}$.
+ \end{enumerate}
+\end{defi}
+
+Often, vectors have a length and direction. The length is denoted by $|\mathbf{v}|$. In this case, we can think of a vector as an ``arrow'' in space. Note that $\lambda\mathbf{a}$ is either parallel ($\lambda \ge 0$) or anti-parallel ($\lambda \le 0$) to $\mathbf{a}$.
+\begin{defi}[Unit vector]
+ A \emph{unit vector} is a vector with length 1. We write a unit vector as $\hat{\mathbf{v}}$.
+\end{defi}
+
+\begin{eg}
+ $\R^n$ is a vector space with component-wise addition and scalar multiplication. Note that the vector space $\R$ is a line, but not all lines are vector spaces. For example, $x + y = 1$ is not a vector space since it does not contain $\mathbf{0}$.
+\end{eg}
+
+\subsection{Scalar product}
+In a vector space, we can define the \emph{scalar product} of two vectors, which returns a scalar (i.e.\ a real or complex number). We will first look at the usual scalar product defined for $\R^n$, and then define the scalar product axiomatically.
+
+\subsubsection{Geometric picture (\texorpdfstring{$\R^2$}{R2} and \texorpdfstring{$\R^3$}{R3} only)}
+\begin{defi}[Scalar/dot product]
+ $\mathbf{a}\cdot\mathbf{b} = \mathbf{|a||b|}\cos\theta$, where $\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$. It satisfies the following properties:
+ \begin{enumerate}
+ \item $\mathbf{a\cdot b = b\cdot a}$
+ \item $\mathbf{a\cdot a = |a|}^2 \geq 0$
+ \item $\mathbf{a\cdot a} = 0$ iff $\mathbf{a = 0}$
+ \item If $\mathbf{a\cdot b} = 0$ and $\mathbf{a, b}\not= \mathbf{0}$, then $\mathbf{a}$ and $\mathbf{b}$ are perpendicular.
+ \end{enumerate}
+\end{defi}
+Intuitively, this is the product of the parts of $\mathbf{a}$ and $\mathbf{b}$ that are parallel.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, ->] (0, 0) -- (3, 0) node [right] {$\mathbf{b}$};
+ \draw [mred, ->] (0, 0) -- (2, 2) node [anchor = south west] {$\mathbf{a}$} node [pos=0.5, above] {$|\mathbf{a}|$};
+ \draw [mred, ->] (0, 0) -- (2, 0) node [pos=0.5, below] {$|\mathbf{a}|\cos \theta$};
+ \draw [mred, dashed] (2, 0) -- (2, 2);
+ \draw [mred] (1.8, 0) -- (1.8, 0.2) -- (2, 0.2);
+ \end{tikzpicture}
+\end{center}
+Using the dot product, we can write the projection of $\mathbf{b}$ onto $\mathbf{a}$ as $(|\mathbf{b}|\cos\theta)\hat{\mathbf{a}} = \mathbf{(\hat{a}\cdot b)\hat{a}}$.
+
+The cosine rule can be derived as follows:
+\begin{align*}
+ |\overrightarrow{BC}|^2 &= |\overrightarrow{AC} - \overrightarrow{AB}|^2\\
+ &= (\overrightarrow{AC} - \overrightarrow{AB})\cdot (\overrightarrow{AC} - \overrightarrow{AB})\\
+ &= |\overrightarrow{AB}|^2 + |\overrightarrow{AC}|^2 - 2|\overrightarrow{AB}||\overrightarrow{AC}|\cos\theta
+\end{align*}
+We will later come up with a convenient algebraic way to evaluate this scalar product.
+
+\subsubsection{General algebraic definition}
+\begin{defi}[Inner/scalar product]
+ In a real vector space $V$, an \emph{inner product} or \emph{scalar product} is a map $V\times V\to \R$ that satisfies the following axioms. It is written as $\mathbf{x\cdot y}$ or $\bra\mathbf{x\mid y}\ket$.
+ \begin{enumerate}
+ \item $\mathbf{x\cdot y = y\cdot x}$ \hfill (symmetry)
+ \item $\mathbf{x}\cdot (\lambda\mathbf{y} + \mu\mathbf{z}) = \lambda\mathbf{x\cdot y} + \mu\mathbf{x\cdot z}$ \hfill (linearity in 2nd argument)
+ \item $\mathbf{x\cdot x}\geq 0$ with equality iff $\mathbf{x = 0}$\hfill (positive definite)
+ \end{enumerate}
+\end{defi}
+Note that this is a definition only for \emph{real} vector spaces, where the scalars are real. We will have a different set of definitions for complex vector spaces.
+
+In particular, here we can use (i) and (ii) together to show linearity in 1st argument. However, this is generally not true for complex vector spaces.
+
+\begin{defi}
+ The \emph{norm} of a vector, written as $|\mathbf{a}|$ or $\|\mathbf{a}\|$, is defined as
+ \[
+ |\mathbf{a}| = \sqrt{\mathbf{a\cdot a}}.
+ \]
+\end{defi}
+
+\begin{eg}
+ Instead of the usual $\R^n$ vector space, we can consider the set of all real (integrable) functions as a vector space. We can define the following inner product:
+ \[
+ \bra f\mid g\ket = \int_0^1f(x)g(x)\;\mathrm{d} x.
+ \]
+\end{eg}
+
+\subsection{Cauchy-Schwarz inequality}
+\begin{thm}[Cauchy-Schwarz inequality]
+ For all $\mathbf{x, y}\in \R^n$,
+ \[
+ |\mathbf{x}\cdot \mathbf{y}| \leq |\mathbf{x}||\mathbf{y}|.
+ \]
+\end{thm}
+
+\begin{proof}
+ Consider the expression $|\mathbf{x} - \lambda \mathbf{y}|^2$. We must have
+ \begin{align*}
+ |\mathbf{x} - \lambda\mathbf{y}|^2 \geq 0\\
+ (\mathbf{x} - \lambda\mathbf{y})\cdot (\mathbf{x} - \lambda\mathbf{y}) \geq 0\\
+ \lambda^2 |\mathbf{y}|^2 - \lambda (2\mathbf{x\cdot y}) + |\mathbf{x}|^2 \geq 0.
+ \end{align*}
+ Viewing this as a quadratic in $\lambda$, we see that the quadratic is non-negative and thus cannot have 2 real roots. Thus the discriminant $\Delta \leq 0$. So
+ \begin{align*}
+ 4(\mathbf{x\cdot y})^2 &\leq 4|\mathbf{y}|^2|\mathbf{x}|^2\\
+ (\mathbf{x\cdot y})^2 &\leq |\mathbf{x}|^2|\mathbf{y}|^2\\
+ |\mathbf{x\cdot y}| &\leq \mathbf{|x||y|}. \qedhere
+ \end{align*}
+\end{proof}
+Note that we proved this using the axioms of the scalar product. So this result holds for \emph{all} possible scalar products on \emph{any} (real) vector space.
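To illustrate this generality, one can check the inequality for the function-space inner product from the earlier example, approximating the integral by a midpoint Riemann sum (a rough numerical sketch, not a proof):

```python
import math

def inner(f, g, n=10000):
    # midpoint Riemann-sum approximation to the integral of f(x) g(x) over [0, 1]
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n))

f, g = math.sin, math.exp
# Cauchy-Schwarz: <f|g>^2 <= <f|f> <g|g>
assert inner(f, g)**2 <= inner(f, f) * inner(g, g)
```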
+
+\begin{eg}
+ Let $\mathbf{x} = (\alpha, \beta, \gamma)$ and $\mathbf{y} = (1, 1, 1)$. Then by the Cauchy-Schwarz inequality, we have
+ \begin{align*}
+ \alpha + \beta + \gamma &\leq \sqrt{3}\sqrt{\alpha^2 + \beta^2 + \gamma^2}\\
+ \alpha^2 + \beta^2 + \gamma^2 &\geq \alpha\beta + \beta\gamma + \gamma\alpha,
+ \end{align*}
+ with equality if $\alpha = \beta = \gamma$.
+\end{eg}
+
+\begin{cor}[Triangle inequality]
+ \[
+ \mathbf{|x + y|} \leq \mathbf{|x| + |y|}.
+ \]
+\end{cor}
+\begin{proof}
+ \begin{align*}
+ |\mathbf{x + y}|^2 &= \mathbf{(x + y)\cdot (x + y)}\\
+ &= |\mathbf{x}|^2 + 2\mathbf{x\cdot y} + |\mathbf{y}|^2\\
+ &\leq |\mathbf{x}|^2 + 2\mathbf{|x||y|} + |\mathbf{y}|^2\\
+ &= (\mathbf{|x| + |y|})^2.\\
+ \intertext{So}
+ \mathbf{|x + y|} &\leq \mathbf{|x| + |y|}. \qedhere
+ \end{align*}
+\end{proof}
+
+\subsection{Vector product}
+Apart from the scalar product, we can also define the \emph{vector product}. However, this is defined only for $\R^3$ space, but not spaces in general.
+\begin{defi}[Vector/cross product]
+ Consider $\mathbf{a, b}\in \R^3$. Define the \emph{vector product}
+ \[
+ \mathbf{a\times b} = \mathbf{|a||b|}\sin\theta \hat{\mathbf{n}},
+ \]
+ where $\mathbf{\hat{n}}$ is a unit vector perpendicular to both $\mathbf{a}$ and $\mathbf{b}$. Since there are two (opposite) unit vectors that are perpendicular to both of them, we pick $\mathbf{\hat{n}}$ to be the one that is perpendicular to $\mathbf{a}, \mathbf{b}$ in a \emph{right-handed} sense.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mred, ->] (0, 0) -- (2, -0.7) node [right] {$\mathbf{a}$};
+ \draw [mblue, ->] (0, 0) -- (2, 0.7) node [right] {$\mathbf{b}$};
+ \draw [mgreen, ->] (0, 0) -- (0, 2) node [above] {$\mathbf{a}\times \mathbf{b}$};
+
+ \draw [mred] (0, 0.2) -- (0.2, 0.13) -- (0.2, -0.07);
+ \draw [mblue] (0, 0.2) -- (0.2, 0.27) -- (0.2, 0.07);
+ \end{tikzpicture}
+ \end{center}
+ The vector product satisfies the following properties:
+ \begin{enumerate}
+ \item $\mathbf{a\times b = -b\times a}$.
+ \item $\mathbf{a\times a = 0}$.
+ \item $\mathbf{a\times b = 0}\Rightarrow \mathbf{a} = \lambda\mathbf{b}$ for some $\lambda\in \R$ (or $\mathbf{b} = \mathbf{0})$.
+ \item $\mathbf{a}\times (\lambda \mathbf{b}) = \lambda(\mathbf{a\times b})$.
+ \item $\mathbf{a\times (b + c) = a\times b + a\times c}$.
+ \end{enumerate}
+\end{defi}
+
+If we have a triangle $OAB$, its area is given by $\frac{1}{2}|\overrightarrow{OA}||\overrightarrow{OB}|\sin\theta = \frac{1}{2}|\overrightarrow{OA}\times\overrightarrow{OB}|$. We define the vector area as $\frac{1}{2}\overrightarrow{OA}\times\overrightarrow{OB}$, which is often a helpful notion when we want to do calculus with surfaces.
+
+There is a convenient way of calculating vector products:
+\begin{prop}
+ \begin{align*}
+ \mathbf{a\times b} &= (a_1\hat{\mathbf{i}} + a_2\hat{\mathbf{j}} + a_3\hat{\mathbf{k}})\times(b_1\hat{\mathbf{i}} + b_2\hat{\mathbf{j}} + b_3\hat{\mathbf{k}})\\
+ &= (a_2b_3 - a_3b_2)\hat{\mathbf{i}} + \cdots\\
+ &= \begin{vmatrix} \hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}}\\
+ a_1 & a_2 & a_3\\
+ b_1 & b_2 & b_3\\
+ \end{vmatrix}
+ \end{align*}
+\end{prop}
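The component formula translates directly into code; here is a minimal sketch with a hand-rolled cross product (helper names are ours):

```python
def cross(a, b):
    # determinant expansion: (a2 b3 - a3 b2, a3 b1 - a1 b3, a1 b2 - a2 b1)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0)
c = cross(a, b)
# a x b is perpendicular to both a and b, and b x a = -(a x b)
assert abs(dot(a, c)) < 1e-12 and abs(dot(b, c)) < 1e-12
assert cross(b, a) == tuple(-x for x in c)
```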
+
+\subsection{Scalar triple product}
+\begin{defi}[Scalar triple product]
+ The \emph{scalar triple product} is defined as
+ \[
+ \mathbf{[a, b, c] = a\cdot (b\times c)}.
+ \]
+\end{defi}
+
+\begin{prop}
+ If a parallelepiped has sides represented by vectors $\mathbf{a, b, c}$ that form a right-handed system, then the volume of the parallelepiped is given by $\mathbf{[a, b, c]}$.
+\end{prop}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, ->] (0, 0) -- (3, 0) node [right] {$\mathbf{b}$};
+ \draw [mred, dashed, ->] (0, 0) -- (2, 1) node [right] {$\mathbf{c}$};
+ \draw [mgreen, ->] (0, 0) -- (1, 2) node [above] {$\mathbf{a}$};
+
+ \draw [mblue, dashed] (2, 1) -- +(3, 0);
+ \draw [mblue] (1, 2) -- +(3, 0);
+ \draw [mblue] (3, 3) -- +(3, 0);
+
+ \draw [mgreen, dashed] (2, 1) -- +(1, 2);
+ \draw [mgreen] (5, 1) -- +(1, 2);
+ \draw [mgreen] (3, 0) -- +(1, 2);
+
+ \draw [mred] (3, 0) -- +(2, 1);
+ \draw [mred] (1, 2) -- +(2, 1);
+ \draw [mred] (4, 2) -- +(2, 1);
+ \end{tikzpicture}
+\end{center}
+
+\begin{proof}
+ The area of the base of the parallelepiped is given by $\mathbf{|b||c|}\sin\theta = \mathbf{|b\times c|}$. Thus the volume is $\mathbf{|b\times c||a|}\cos\phi = \mathbf{|a\cdot(b\times c)|}$, where $\phi$ is the angle between $\mathbf{a}$ and the normal to the plane containing $\mathbf{b}$ and $\mathbf{c}$. However, since $\mathbf{a, b, c}$ form a right-handed system, we have $\mathbf{a\cdot (b\times c)} \geq 0$. Therefore the volume is $\mathbf{a\cdot(b\times c)}$.
+\end{proof}
+Since the order of $\mathbf{a, b, c}$ doesn't affect the volume, we know that
+\[
+ \mathbf{[a, b, c] = [b, c, a] = [c, a, b] = -[b, a, c] = -[a, c, b] = -[c, b, a]}.
+\]
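These symmetries are easy to confirm with a direct computation of $\mathbf{a\cdot(b\times c)}$ (a small numerical sketch with arbitrary vectors):

```python
def triple(a, b, c):
    # scalar triple product [a, b, c] = a . (b x c)
    bc = (b[1]*c[2] - b[2]*c[1], b[2]*c[0] - b[0]*c[2], b[0]*c[1] - b[1]*c[0])
    return a[0]*bc[0] + a[1]*bc[1] + a[2]*bc[2]

a, b, c = (1.0, 0.0, 2.0), (0.0, 3.0, 1.0), (2.0, 1.0, 0.0)
v = triple(a, b, c)
# cyclic permutations preserve the value, transpositions flip the sign
assert triple(b, c, a) == v and triple(c, a, b) == v
assert triple(b, a, c) == -v and triple(a, c, b) == -v and triple(c, b, a) == -v
```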
+
+\begin{thm}
+ $\mathbf{a\times (b + c) = a\times b + a\times c}$.
+\end{thm}
+\begin{proof}
+ Let $\mathbf{d = a\times (b + c) - a\times b - a\times c}$. We have
+ \begin{align*}
+ \mathbf{d\cdot d} &= \mathbf{d\cdot[a\times (b + c)] - d\cdot(a\times b) - d\cdot(a\times c)}\\
+ &= \mathbf{(b+c)\cdot(d \times a) - b\cdot(d\times a) - c\cdot(d\times a)}\\
+ &= 0
+ \end{align*}
+ Thus $\mathbf{d = 0}$.
+\end{proof}
+
+\subsection{Spanning sets and bases}
+\subsubsection{2D space}
+\begin{defi}[Spanning set]
+ A set of vectors $\{\mathbf{a, b}\}$ \emph{spans} $\R^2$ if for all vectors $\mathbf{r}\in \R^2$, there exist some $\lambda, \mu\in \R$ such that $\mathbf{r} = \lambda\mathbf{a} + \mu\mathbf{b}$.
+\end{defi}
+
+In $\R^2$, two vectors $\mathbf{a}$ and $\mathbf{b}$ span the space if and only if $\mathbf{a}\times \mathbf{b} \not= \mathbf{0}$ (regarding them as vectors in $\R^3$ so that the vector product makes sense).
+\begin{thm}
+ The coefficients $\lambda, \mu$ are unique.
+\end{thm}
+
+\begin{proof}
+ Suppose that $\mathbf{r} = \lambda\mathbf{a} + \mu\mathbf{b} = \lambda'\mathbf{a} + \mu'\mathbf{b}$. Take the vector product with $\mathbf{a}$ on both sides to get $(\mu - \mu')\mathbf{a\times b} = \mathbf{0}$. Since $\mathbf{a\times b}\not= \mathbf{0}$, we must have $\mu=\mu'$. Similarly, $\lambda = \lambda'$.
+\end{proof}
+
+\begin{defi}[Linearly independent vectors in $\R^2$]
+ Two vectors $\mathbf{a}$ and $\mathbf{b}$ are \emph{linearly independent} if for $\alpha, \beta\in \R$, $\alpha\mathbf{a} + \beta\mathbf{b} = \mathbf{0}$ iff $\alpha = \beta = 0$. In $\R^2$, $\mathbf{a}$ and $\mathbf{b}$ are linearly independent if $\mathbf{a\times b} \not= \mathbf{0}$.
+\end{defi}
+
+\begin{defi}[Basis of $\R^2$]
+ A set of vectors is a \emph{basis} of $\R^2$ if it spans $\R^2$ and is linearly independent.
+\end{defi}
+
+\begin{eg}
+ $\{\hat{\mathbf{i}}, \hat{\mathbf{j}}\} = \{(1, 0), (0, 1)\}$ is a basis of $\R^2$. They are the standard basis of $\R^2$.
+\end{eg}
+
+\subsubsection{3D space}
+We can extend the above definitions of spanning set and linearly independent set to $\R^3$. Here we have
+\begin{thm}
+ If $\mathbf{a}, \mathbf{b}, \mathbf{c}\in\R^3$ are non-coplanar, i.e.\ $\mathbf{a}\cdot(\mathbf{b}\times \mathbf{c})\not= 0$, then they form a basis of $\R^3$.
+\end{thm}
+
+\begin{proof}
+ For any $\mathbf{r}$, write $\mathbf{r} = \lambda\mathbf{a} + \mu\mathbf{b} + \nu\mathbf{c}$. Performing the scalar product with $\mathbf{b\times c}$ on both sides, one obtains $\mathbf{r\cdot(b\times c)} = \lambda \mathbf{a\cdot(b\times c)} + \mu\mathbf{b\cdot (b\times c)} + \nu\mathbf{c\cdot(b\times c)} = \lambda \mathbf{[a, b, c]}$. Thus $\lambda = \mathbf{[r, b, c]/[a,b, c]}$. The values of $\mu$ and $\nu$ can be found similarly. Thus each $\mathbf{r}$ can be written as a linear combination of $\mathbf{a}, \mathbf{b}$ and $\mathbf{c}$.
+
+ By the formula derived above, it follows that if $\alpha\mathbf{a} + \beta\mathbf{b} + \gamma\mathbf{c} = \mathbf{0}$, then $\alpha = \beta = \gamma = 0$. Thus they are linearly independent.
+\end{proof}
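The coefficient formulas can be tested directly; the sketch below reconstructs a sample vector $\mathbf{r}$ from the computed $\lambda, \mu, \nu$ (names and numbers are ours):

```python
def triple(a, b, c):
    # scalar triple product [a, b, c] = a . (b x c)
    bc = (b[1]*c[2] - b[2]*c[1], b[2]*c[0] - b[0]*c[2], b[0]*c[1] - b[1]*c[0])
    return a[0]*bc[0] + a[1]*bc[1] + a[2]*bc[2]

a, b, c = (1.0, 1.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 1.0)  # non-coplanar
r = (2.0, -3.0, 5.0)
d = triple(a, b, c)
# lambda = [r,b,c]/[a,b,c], mu = [r,c,a]/[a,b,c], nu = [r,a,b]/[a,b,c]
lam, mu, nu = triple(r, b, c) / d, triple(r, c, a) / d, triple(r, a, b) / d
rec = tuple(lam*x + mu*y + nu*z for x, y, z in zip(a, b, c))
assert all(abs(p - q) < 1e-12 for p, q in zip(rec, r))
```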
+Note that while we came up with formulas for $\lambda, \mu$ and $\nu$, we did not actually prove that such coefficients exist in the first place. This is rather unsatisfactory. We could, of course, expand everything out and show that this indeed works, but in IB Linear Algebra, we will prove a much more general result: in an $n$-dimensional space, any set of $n$ linearly independent vectors forms a basis.
+
+In $\R^3$, the standard basis is $\mathbf{\hat{i}, \hat{j}, \hat{k}}$, or $(1, 0, 0), (0, 1, 0)$ and $(0, 0, 1)$.
+\subsubsection{\texorpdfstring{$\R^n$}{Rn} space}
+In general, we can define
+\begin{defi}[Linearly independent vectors]
+ A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\cdots \mathbf{v}_m\}$ is \emph{linearly independent} if
+ \[
+ \sum_{i = 1}^m\lambda_i\mathbf{v}_i = \mathbf{0} \Rightarrow (\forall i)\,\lambda_i = 0.
+ \]
+\end{defi}
+\begin{defi}[Spanning set]
+ A set of vectors $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3\cdots \mathbf{u}_m\}\subseteq \R^n$ is a \emph{spanning set} of $\R^n$ if
+ \[
+ (\forall \mathbf{x} \in \R^n)(\exists \lambda_i)\,\sum_{i = 1}^m\lambda_i\mathbf{u}_i = \mathbf{x}
+ \]
+\end{defi}
+
+\begin{defi}[Basis vectors]
+ A \emph{basis} of $\R^n$ is a linearly independent spanning set. The standard basis of $\R^n$ is $\mathbf{e}_1 = (1, 0, 0, \cdots 0), \mathbf{e}_2 = (0, 1, 0, \cdots 0),\cdots \mathbf{e}_n = (0, 0, 0, \cdots, 1)$.
+\end{defi}
+
+\begin{defi}[Orthonormal basis]
+ A basis $\{\mathbf{e}_i\}$ is \emph{orthonormal} if $\mathbf{e}_i\cdot \mathbf{e}_j = 0$ whenever $i\not= j$, and $\mathbf{e}_i\cdot \mathbf{e}_i = 1$ for all $i$.
+
+ Using the Kronecker Delta symbol, which we will define later, we can write this condition as $\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}$.
+\end{defi}
+
+\begin{defi}[Dimension of vector space]
+ The \emph{dimension} of a vector space is the number of vectors in its basis. (Exercise: show that this is well-defined)
+\end{defi}
+We usually denote the components of a vector $\mathbf{x}$ by $x_i$. So we have $\mathbf{x} = (x_1, x_2, \cdots, x_n)$.
+
+\begin{defi}[Scalar product]
+ The \emph{scalar product} of $\mathbf{x, y}\in \R^n$ is defined as $\mathbf{x\cdot y} = \sum x_i y_i$.
+\end{defi}
+The reader should check that this definition coincides with the $|\mathbf{x}||\mathbf{y}|\cos\theta$ definition in the case of $\R^2$ and $\R^3$.
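The requested check is easy to do numerically in $\R^2$, computing the angle between the vectors with atan2 (a sketch, not part of the notes):

```python
import math

x, y = (3.0, 4.0), (-2.0, 1.0)
dot = sum(a * b for a, b in zip(x, y))
norm = lambda v: math.sqrt(sum(a * a for a in v))
# signed angle between x and y
theta = math.atan2(y[1], y[0]) - math.atan2(x[1], x[0])
# component definition agrees with |x||y| cos(theta)
assert abs(dot - norm(x) * norm(y) * math.cos(theta)) < 1e-12
```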
+
+\subsubsection{\texorpdfstring{$\C^n$}{Cn} space}
+$\C^n$ is very similar to $\R^n$, except that we have complex numbers. As a result, we need a different definition of the scalar product. If we still defined $\mathbf{u}\cdot \mathbf{v} = \sum u_i v_i$, then if we let $\mathbf{u} = (0, i)$, then $\mathbf{u}\cdot \mathbf{u} = -1 < 0$. This would be bad if we want to use the scalar product to define a norm.
+
+\begin{defi}[$\C^n$]
+ $\C^n = \{(z_1, z_2, \cdots, z_n): z_i\in\C\}$. It has the same standard basis as $\R^n$ but the scalar product is defined differently. For $\mathbf{u, v}\in \C^n$, $\mathbf{u\cdot v} = \sum u_i^*v_i$. The scalar product has the following properties:
+ \begin{enumerate}
+ \item $\mathbf{u}\cdot \mathbf{v} = (\mathbf{v}\cdot \mathbf{u})^*$
+ \item $\mathbf{u}\cdot(\lambda\mathbf{v}+\mu\mathbf{w}) = \lambda\mathbf{(u\cdot v)} + \mu\mathbf{(u\cdot w)}$
+ \item $\mathbf{u\cdot u} \geq 0$ and $\mathbf{u\cdot u} = 0$ iff $\mathbf{u = 0}$
+ \end{enumerate}
+\end{defi}
+Instead of linearity in the first argument, here we have $(\lambda\mathbf{u} + \mu\mathbf{v})\cdot\mathbf{w} = \lambda^*\mathbf{u}\cdot \mathbf{w} + \mu^*\mathbf{v}\cdot \mathbf{w}$.
+
+\begin{eg}
+ \begin{align*}
+ &\sum_{k = 1}^4 (-i)^k|\mathbf{x} + i^k\mathbf{y}|^2\\
+ &= \sum(-i)^k\bra\mathbf{x} + i^k \mathbf{y}\mid \mathbf{x} + i^k\mathbf{y}\ket\\
+ &= \sum(-i)^k (\bra\mathbf{x} + i^k\mathbf{y}\mid \mathbf{x}\ket + i^k\bra\mathbf{x} + i^k\mathbf{y} \mid \mathbf{y}\ket)\\
+ &= \sum(-i)^k (\bra\mathbf{x}\mid \mathbf{x}\ket + (-i)^k\bra\mathbf{y}\mid \mathbf{x}\ket + i^k\bra\mathbf{x}\mid \mathbf{y}\ket + i^k(-i)^k\bra\mathbf{y}\mid \mathbf{y}\ket)\\
+ &= \sum(-i)^k [(| \mathbf{x}|^2 + |\mathbf{y}|^2) + (-1)^k\bra\mathbf{y}\mid \mathbf{x}\ket + \bra\mathbf{x}\mid \mathbf{y}\ket]\\
+ &= (|\mathbf{x}|^2 + |\mathbf{y}|^2)\sum(-i)^k + \bra\mathbf{y}\mid \mathbf{x}\ket\sum(-1)^k + \bra\mathbf{x}\mid \mathbf{y}\ket\sum1\\
+ &= 4\bra\mathbf{x}\mid \mathbf{y}\ket.
+ \end{align*}
+\end{eg}
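This identity (a polarization-type formula recovering the inner product from norms) can be checked numerically for the $\C^n$ scalar product (a sketch with arbitrary vectors):

```python
def inner(u, v):
    # <u|v> = sum u_i^* v_i, conjugate-linear in the first argument
    return sum(a.conjugate() * b for a, b in zip(u, v))

def norm2(u):
    return inner(u, u).real  # |u|^2

x = (1 + 2j, -1j, 0.5 + 0j)
y = (2 - 1j, 3 + 0j, -1 + 1j)
# sum over k = 1..4 of (-i)^k |x + i^k y|^2 should equal 4 <x|y>
total = sum((-1j)**k * norm2(tuple(a + 1j**k * b for a, b in zip(x, y)))
            for k in range(1, 5))
assert abs(total - 4 * inner(x, y)) < 1e-9
```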
+
+We can prove the Cauchy-Schwarz inequality for complex vector spaces using the same proof as the real case, except that this time we have to first multiply $\mathbf{y}$ by some $e^{i\theta}$ so that $\mathbf{x} \cdot (e^{i\theta} \mathbf{y})$ is a real number. The factor of $e^{i\theta}$ will drop off at the end when we take the modulus signs.
+
+\subsection{Vector subspaces}
+\begin{defi}[Vector subspace]
+ A \emph{vector subspace} of a vector space $V$ is a subset of $V$ that is also a vector space under the same operations. Both $V$ and $\{\mathbf{0}\}$ are subspaces of $V$. All others are proper subspaces.
+
+ A useful criterion is that a subset $U\subseteq V$ is a subspace iff
+ \begin{enumerate}
+ \item $\mathbf{x, y}\in U \Rightarrow (\mathbf{x + y}) \in U$.
+ \item $\mathbf{x}\in U \Rightarrow \lambda\mathbf{x} \in U$ for all scalars $\lambda$.
+ \item $\mathbf{0}\in U$.
+ \end{enumerate}
+ This can be more concisely written as ``$U$ is non-empty and for all $\mathbf{x, y}\in U$, $(\lambda\mathbf{x} + \mu\mathbf{y})\in U$''.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item If $\{\mathbf{a, b, c}\}$ is a basis of $\R^3$, then $\{\mathbf{a + c, b + c}\}$ is a basis of a 2D subspace.
+
+ Suppose $\mathbf{x, y}\in \spn\{\mathbf{a + c, b + c}\}$. Let
+ \begin{align*}
+ \mathbf{x} &= \alpha_1(\mathbf{a + c}) + \beta_1(\mathbf{b + c});\\
+ \mathbf{y} &= \alpha_2(\mathbf{a + c}) + \beta_2(\mathbf{b + c}).
+ \end{align*}
+ Then
+ \[
+ \lambda\mathbf{x} + \mu\mathbf{y} = (\lambda\alpha_1+\mu\alpha_2)(\mathbf{a + c}) + (\lambda\beta_1 + \mu\beta_2)\mathbf{(b + c)}\in\spn\{\mathbf{a + c, b + c}\}.
+ \]
+ Thus this is a subspace of $\R^3$.
+
+ Now check that $\mathbf{a + c, b + c}$ is a basis. We only need to check linear independence. If $\alpha(\mathbf{a + c}) + \beta(\mathbf{b + c}) = \mathbf{0}$, then $\alpha\mathbf{a} + \beta\mathbf{b} + (\alpha + \beta)\mathbf{c} = \mathbf{0}$. Since $\{\mathbf{a, b, c}\}$ is a basis of $\R^3$, therefore $\mathbf{a, b, c}$ are linearly independent and $\alpha = \beta = 0$. Therefore $\mathbf{a + c, b + c}$ is a basis and the subspace has dimension $2$.
+ \item Given a set of numbers $\alpha_i$, let $U = \{\mathbf{x}\in \R^n: \sum_{i=1}^n \alpha_ix_i = 0\}$. We show that this is a vector subspace of $\R^n$: Take $\mathbf{x, y}\in U$, then consider $\lambda\mathbf{x} + \mu\mathbf{y}$. We have $\sum\alpha_i(\lambda x_i + \mu y_i) = \lambda\sum\alpha_ix_i + \mu\sum\alpha_iy_i = 0$. Thus $\lambda\mathbf{x} + \mu\mathbf{y} \in U$.
+
+ The dimension of the subspace is $n-1$ as we can freely choose $x_i$ for $i = 1, \cdots, n - 1$ and then $x_n$ is uniquely determined by the previous $x_i$'s.
+ \item Let $W = \{\mathbf{x}\in \R^n: \sum \alpha_ix_i = 1\}$. Then for $\mathbf{x, y}\in W$, $\sum\alpha_i(\lambda x_i + \mu y_i) = \lambda + \mu$, which need not equal $1$. Therefore $W$ is not a vector subspace (indeed $\mathbf{0}\not\in W$).
+ \end{enumerate}
+\end{eg}
+
+\subsection{Suffix notation}
+Here we are going to introduce a powerful notation that can help us simplify a lot of things.
+
+First of all, let $\mathbf{v}\in \R^3$. We can write $\mathbf{v} = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3 = (v_1, v_2, v_3)$. So in general, the $i$th component of $\mathbf{v}$ is written as $v_i$. We can thus write vector equations in component form. For example, $\mathbf{a} = \mathbf{b} \rightarrow a_i = b_i$ or $\mathbf{c}=\alpha\mathbf{a} + \beta\mathbf{b} \rightarrow c_i = \alpha a_i + \beta b_i$. A vector has one \emph{free} suffix, $i$, while a scalar has none.
+
+\begin{notation}[Einstein's summation convention]
+ Consider a sum $\mathbf{x}\cdot \mathbf{y} = \sum x_i y_i$. The \emph{summation convention} says that we can drop the $\sum$ symbol and simply write $\mathbf{x}\cdot \mathbf{y} = x_i y_i$. If suffixes are repeated once, summation is understood.
+
+ Note that $i$ is a dummy suffix and doesn't matter what it's called, i.e.\ $x_iy_i = x_jy_j = x_k y_k$ etc.
+
+ The rules of this convention are:
+ \begin{enumerate}
+ \item Suffix appears once in a term: free suffix
+ \item Suffix appears twice in a term: dummy suffix and is summed over
+ \item Suffix appears three times or more: WRONG!
+ \end{enumerate}
+\end{notation}
+
+\begin{eg}
+ $[\mathbf{(a\cdot b)c - (a \cdot c)b}]_i = a_jb_jc_i - a_jc_jb_i$, with summation over $j$ understood.
+\end{eg}
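The summation convention corresponds exactly to `np.einsum` in numerical code: repeated suffices are summed over, free suffices survive. A minimal sketch (Python with numpy assumed; the test vectors are arbitrary choices, not part of the course) checking the example above:

```python
import numpy as np

# Arbitrary test vectors (any values work).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])

# Suffix notation a_j b_j c_i - a_j c_j b_i: the repeated suffix j
# is summed over, the free suffix i survives.
lhs = np.einsum('j,j,i->i', a, b, c) - np.einsum('j,j,i->i', a, c, b)

# The same expression written with explicit dot products.
rhs = (a @ b) * c - (a @ c) * b
```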
+
+It is possible for an item to have more than one index. These objects are known as \emph{tensors}, which will be studied in depth in the IA Vector Calculus course.
+
+Here we will define two important tensors:
+\begin{defi}[Kronecker delta]\leavevmode
+ \[
+ \delta_{ij} =
+ \begin{cases}
+ 1 & i = j\\
+ 0 & i\not=j
+ \end{cases}.
+ \]
+ We have
+ \[
+ \begin{pmatrix}
+ \delta_{11} & \delta_{12} & \delta_{13}\\
+ \delta_{21} & \delta_{22} & \delta_{23}\\
+ \delta_{31} & \delta_{32} & \delta_{33}
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ = \mathbf{I}.
+ \]
+ So the Kronecker delta represents an identity matrix.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $a_i\delta_{i1} = a_1$. In general, $a_i\delta_{ij} = a_j$ ($i$ is dummy, $j$ is free).
+ \item $\delta_{ij}\delta_{jk} = \delta_{ik}$
+ \item $\delta_{ii} = n$ if we are in $\R^n$.
+ \item $a_p\delta_{pq}b_q = a_pb_p$ with $p, q$ both dummy suffices and summed over.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Alternating symbol $\varepsilon_{ijk}$]
+ Consider rearrangements of $1, 2, 3$. We can divide them into even and odd permutations. Even permutations include $(1, 2, 3)$, $(2, 3, 1)$ and $(3, 1, 2)$. These are permutations obtained by performing two (or no) swaps of the elements of $(1, 2, 3)$. (Alternatively, it is any ``rotation'' of $(1, 2, 3)$.)
+
+ The odd permutations are $(2, 1, 3)$, $(1, 3, 2)$ and $(3, 2, 1)$. They are the permutations obtained by one swap only.
+
+ Define
+ \[
+ \varepsilon_{ijk} =
+ \begin{cases}
+ +1 & ijk \text{ is even permutation}\\
+ -1 & ijk\text{ is odd permutation}\\
+ 0 & \text{otherwise (i.e.\ repeated suffices)}
+ \end{cases}
+ \]
+ $\varepsilon_{ijk}$ has 3 free suffices.
+
+ We have $\varepsilon_{123} = \varepsilon_{231} = \varepsilon_{312} = +1$ and $\varepsilon_{213} = \varepsilon_{132} = \varepsilon_{321} = -1$. $\varepsilon_{112} = \varepsilon_{111} = \cdots = 0$.
+\end{defi}
+
+We have
+\begin{enumerate}
+ \item $\varepsilon_{ijk}\delta_{jk} = \varepsilon_{ijj} = 0$
+ \item If $a_{jk} = a_{kj}$ (i.e.\ $a_{ij}$ is symmetric), then $\varepsilon_{ijk}a_{jk} = \varepsilon_{ijk}a_{kj} = -\varepsilon_{ikj}a_{kj}$. Since $\varepsilon_{ijk}a_{jk} = \varepsilon_{ikj}a_{kj}$ (we simply renamed dummy suffices), we have $\varepsilon_{ijk}a_{jk} = 0$.
+\end{enumerate}
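Both properties above can be checked numerically. A sketch (Python with numpy assumed), building the alternating symbol directly from its definition:

```python
import numpy as np

# Build eps[i,j,k] from the definition: +1 on even permutations,
# -1 on odd permutations, 0 otherwise.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutation
    eps[i, k, j] = -1.0  # odd permutation (one extra swap)

# eps_ijk delta_jk = eps_ijj = 0 for each free suffix i.
check1 = np.einsum('ijk,jk->i', eps, np.eye(3))

# Contracting eps with any symmetric a_jk also gives zero.
a = np.random.default_rng(0).random((3, 3))
a = a + a.T  # symmetrise
check2 = np.einsum('ijk,jk->i', eps, a)
```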
+
+\begin{prop}
+ $(\mathbf{a} \times \mathbf{b})_i = \varepsilon_{ijk}a_jb_k$
+\end{prop}
+
+\begin{proof}
+ By direct expansion of both sides.
+\end{proof}
+
+\begin{thm}
+ $\varepsilon_{ijk}\varepsilon_{ipq} = \delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}$
+\end{thm}
+
+\begin{proof}
+ Proof by exhaustion:
+ \[
+ \text{RHS} = \begin{cases}
+ +1 &\text{ if } j = p \text{ and } k = q\\
+ -1 &\text{ if } j = q \text{ and } k = p\\
+ 0 &\text{ otherwise}
+ \end{cases}
+ \]
+ LHS: Summing over $i$, the only non-zero terms are when $j, k\not=i$ and $p, q\not=i$. If $j = p$ and $k = q$, LHS is $(-1)^2$ or $(+1)^2 = 1$. If $j = q$ and $k = p$, LHS is $(+1)(-1)$ or $(-1)(+1) = -1$. All other possibilities result in 0.
+\end{proof}
+Equally, we have $\varepsilon_{ijk}\varepsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{jp}\delta_{iq}$ and $\varepsilon_{ijk}\varepsilon_{pjq} = \delta_{ip}\delta_{kq} - \delta_{iq}\delta_{kp}$.
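Since each suffix takes only three values, the identity (and its relabelled variants) can also be verified by brute force over all suffix values. A sketch in Python with numpy (assumed):

```python
import numpy as np

# Alternating symbol from its definition.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

d = np.eye(3)  # Kronecker delta

# eps_ijk eps_ipq, summed over i, as a 4-index array in (j, k, p, q).
lhs = np.einsum('ijk,ipq->jkpq', eps, eps)
rhs = np.einsum('jp,kq->jkpq', d, d) - np.einsum('jq,kp->jkpq', d, d)
```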
+
+\begin{prop}
+ \[
+ \mathbf{a\cdot (b\times c) = b\cdot(c\times a)}
+ \]
+\end{prop}
+\begin{proof}
+ In suffix notation, we have
+ \[
+ \mathbf{a\cdot (b\times c)} = a_i\mathbf{(b\times c)}_i = \varepsilon_{ijk}b_jc_ka_i = \varepsilon_{jki}b_jc_ka_i = \mathbf{b\cdot (c\times a)}.\qedhere
+ \]
+\end{proof}
+
+\begin{thm}[Vector triple product]
+ \[
+ \mathbf{a\times (b\times c) = (a\cdot c)b - (a\cdot b)c}.
+ \]
+\end{thm}
+\begin{proof}
+ \begin{align*}
+ \mathbf{[a\times(b\times c)]}_i &= \varepsilon_{ijk} a_j(b\times c)_k \\
+ &= \varepsilon_{ijk}\varepsilon_{kpq}a_jb_pc_q\\
+ &= \varepsilon_{ijk}\varepsilon_{pqk} a_jb_pc_q\\
+ &= (\delta_{ip}\delta_{jq}-\delta_{iq}\delta_{jp})a_jb_pc_q\\
+ &= a_jb_ic_j - a_jc_ib_j\\
+ &= \mathbf{(a\cdot c)}b_i - \mathbf{(a\cdot b)}c_i\qedhere
+ \end{align*}
+\end{proof}
+Similarly, $\mathbf{(a\times b)\times c = (a\cdot c)b - (b\cdot c)a}$.
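Both triple-product identities are easy to spot-check numerically (Python with numpy assumed; the random vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.random(3), rng.random(3), rng.random(3)

# a x (b x c) = (a.c) b - (a.b) c
lhs1 = np.cross(a, np.cross(b, c))
rhs1 = (a @ c) * b - (a @ b) * c

# (a x b) x c = (a.c) b - (b.c) a
lhs2 = np.cross(np.cross(a, b), c)
rhs2 = (a @ c) * b - (b @ c) * a
```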
+
+\subsubsection*{Spherical trigonometry}
+\begin{prop}
+ $\mathbf{(a\times b)\cdot (a\times c) = (a\cdot a)(b\cdot c) - (a\cdot b)(a\cdot c)}$.
+\end{prop}
+\begin{proof}
+ \begin{align*}
+ \text{LHS} &= (\mathbf{a\times b})_i(\mathbf{a\times c})_i\\
+ &= \varepsilon_{ijk}a_jb_k\varepsilon_{ipq} a_pc_q\\
+ &= (\delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp})a_jb_ka_pc_q\\
+ &= a_jb_k a_jc_k - a_j b_k a_k c_j\\
+ &= \mathbf{(a\cdot a)(b\cdot c) - (a\cdot b)(a\cdot c)}\qedhere
+ \end{align*}
+\end{proof}
+
+Consider the unit sphere, center $O$, with $\mathbf{a, b, c}$ on the surface.
+\begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius = 2.5];
+ \pgfpathmoveto{\pgfpoint{-0.86602cm}{-0.5cm}};
+ \pgfpatharcto{2cm}{2cm}{0}{0}{0}{\pgfpoint{0cm}{1cm}}\pgfusepath{stroke};
+ \pgfpathmoveto{\pgfpoint{0cm}{1cm}};
+ \pgfpatharcto{2cm}{2cm}{0}{0}{0}{\pgfpoint{0.86602cm}{-0.5cm}}\pgfusepath{stroke};
+ \pgfpathmoveto{\pgfpoint{0.86602cm}{-0.5cm}};
+ \pgfpatharcto{2cm}{2cm}{0}{0}{0}{\pgfpoint{-0.86602cm}{-0.5cm}}\pgfusepath{stroke};
+
+ \draw (0, 1) node [above] {$A$} node [circ] {};
+ \draw (-0.86602, -0.5) node [left] {$B$} node [circ] {};
+ \draw (0.86602, -0.5) node [right] {$C$} node [circ] {};
+ \draw (-0.5, 0.5) node [left] {$\delta(A, B)$};
+
+ \pgfpathmoveto{\pgfpoint{-0.29cm}{0.77cm}};
+ \pgfpatharcto{0.5cm}{0.5cm}{0}{0}{1}{\pgfpoint{0.29cm}{0.77cm}}\pgfusepath{stroke};
+ \draw (0, 0.5) node {$\alpha$};
+ \end{tikzpicture}
+\end{center}
+Suppose we are living on the surface of the sphere. So the distance from $A$ to $B$ is the arc length on the sphere. We can imagine this to be along the circumference of the circle through $A$ and $B$ with center $O$. So the distance is $\angle AOB$, which we shall denote by $\delta (A, B)$. So $\mathbf{a}\cdot \mathbf{b} = \cos \angle AOB = \cos \delta (A, B)$. We obtain similar expressions for other dot products. Similarly, we get $|\mathbf{a}\times \mathbf{b}| = \sin \delta(A, B)$.
+\begin{align*}
+ \cos \alpha &= \mathbf{\frac{(a\times b)\cdot(a\times c)}{|a\times b||a\times c|}}\\
+ &= \mathbf{\frac{b\cdot c - (a\cdot b)(a\cdot c)}{|a\times b||a\times c|}}
+\end{align*}
+Putting in our expressions for the dot and cross products, we obtain
+\[
+ \cos\alpha\sin\delta(A, B)\sin\delta(A, C) = \cos\delta(B, C) - \cos\delta(A, B)\cos\delta(A, C).
+\]
+This is the spherical cosine rule that applies when we live on the surface of a sphere. What does this spherical geometry look like?
+
+Consider a spherical equilateral triangle. Using the spherical cosine rule,
+\[
+ \cos\alpha = \frac{\cos\delta - \cos^2\delta}{\sin^2\delta} = 1 - \frac{1}{1 + \cos\delta}.
+\]
+Since $\cos\delta\leq 1$, we have $\cos\alpha\leq \frac{1}{2}$ and $\alpha \geq 60^\circ$. Equality holds iff $\delta = 0$, i.e.\ the triangle is simply a point. So on a sphere, each angle of an equilateral triangle is greater than $60^\circ$, and the angle sum of a triangle is greater than $180^\circ$.
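The spherical cosine rule can be spot-checked for random points on the unit sphere (a Python sketch, numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
# Three random points a, b, c on the unit sphere.
a, b, c = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 3)))

sin_dAB = np.linalg.norm(np.cross(a, b))  # sin delta(A, B)
sin_dAC = np.linalg.norm(np.cross(a, c))  # sin delta(A, C)
cos_alpha = np.cross(a, b) @ np.cross(a, c) / (sin_dAB * sin_dAC)

# cos(alpha) sin d(A,B) sin d(A,C) = cos d(B,C) - cos d(A,B) cos d(A,C)
lhs = cos_alpha * sin_dAB * sin_dAC
rhs = b @ c - (a @ b) * (a @ c)
```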
+
+\subsection{Geometry}
+\subsubsection{Lines}
+Any line through $\mathbf{a}$ and parallel to $\mathbf{t}$ can be written as
+\[
+ \mathbf{x} = \mathbf{a} + \lambda\mathbf{t}.
+\]
+By crossing both sides of the equation with $\mathbf{t}$, we have
+\begin{thm} The equation of a straight line through $\mathbf{a}$ and parallel to $\mathbf{t}$ is
+ \[
+ \mathbf{(x - a)\times t = 0}\text{ or }\mathbf{x\times t = a\times t}.
+ \]
+\end{thm}
+\subsubsection{Plane}
+To define a plane $\Pi$, we need a normal $\mathbf{n}$ to the plane and a fixed point $\mathbf{b}$. For any $\mathbf{x}\in \Pi$, the vector $\mathbf{x - b}$ is contained in the plane and is thus normal to $\mathbf{n}$, i.e.\ $\mathbf{(x - b)\cdot n} = 0$.
+\begin{thm}
+ The equation of a plane through $\mathbf{b}$ with normal $\mathbf{n}$ is given by
+ \[
+ \mathbf{x\cdot n = b\cdot n}.
+ \]
+\end{thm}
+If $\mathbf{n = \hat n}$ is a unit normal, then $d = \mathbf{x\cdot\hat{n} = b\cdot\hat{n}}$ is the perpendicular distance from the origin to $\Pi$.
+
+Alternatively, if $\mathbf{a, b, c}$ lie in the plane, then the equation of the plane is
+\[
+ \mathbf{(x - a)\cdot [(b - a)\times (c - a)]} = 0.
+\]
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Consider the intersection between a line $\mathbf{x\times t = a\times t}$ with the plane $\mathbf{x\cdot n = b\cdot n}$. Cross $\mathbf{n}$ on the right with the line equation to obtain
+ \[
+ \mathbf{(x\cdot n)t - (t\cdot n)x = (a\times t)\times n}
+ \]
+ Eliminate $\mathbf{x\cdot n}$ using $\mathbf{x\cdot n = b\cdot n}$
+ \[
+ \mathbf{(t\cdot n)x = (b\cdot n)t - (a\times t)\times n}
+ \]
+ Provided $\mathbf{t\cdot n}$ is non-zero, the point of intersection is
+ \[
+ \mathbf{x = \frac{(b\cdot n)t - (a\times t)\times n}{t\cdot n}}.
+ \]
+ Exercise: what if $\mathbf{t\cdot n} = 0$?
+ \item Shortest distance between two lines. Let $L_1$ be $(\mathbf{x} - \mathbf{a}_1)\times \mathbf{t}_1 = \mathbf{0}$ and $L_2$ be $(\mathbf{x} - \mathbf{a}_2)\times \mathbf{t}_2 = \mathbf{0}$.
+
+ The distance of closest approach $s$ is along a line perpendicular to both $L_1$ and $L_2$, i.e.\ the line of closest approach is perpendicular to both lines and thus parallel to $\mathbf{t}_1\times \mathbf{t}_2$. The distance $s$ can then be found by projecting $\mathbf{a}_1 - \mathbf{a}_2$ onto $\mathbf{t}_1\times \mathbf{t}_2$. Thus $s = \left|(\mathbf{a}_1 - \mathbf{a}_2)\cdot\frac{\mathbf{t}_1\times \mathbf{t}_2}{|\mathbf{t}_1\times \mathbf{t}_2|}\right|$.
+ \end{enumerate}
+\end{eg}
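For instance, the shortest-distance formula gives the expected answer for two perpendicular lines lying in parallel planes (a sketch, numpy assumed; the lines are our own example):

```python
import numpy as np

# L1: through the origin along the x axis.
a1, t1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
# L2: through (0, 0, 5) along the y axis.
a2, t2 = np.array([0.0, 0.0, 5.0]), np.array([0.0, 1.0, 0.0])

n = np.cross(t1, t2)  # direction of the common perpendicular
s = abs((a1 - a2) @ n) / np.linalg.norm(n)  # shortest distance, here 5
```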
+\subsection{Vector equations}
+\begin{eg}
+ $\mathbf{x - (x\times a)\times b = c}$. Strategy: take the dot or cross of the equation with suitable vectors. The equation can be expanded to form
+ \begin{align*}
+ \mathbf{x - (x\cdot b)a + (a\cdot b)x} &= \mathbf{c}.\\
+ \intertext{Dot this with $\mathbf{b}$ to obtain}
+ \mathbf{x\cdot b - (x\cdot b)(a\cdot b) + (a\cdot b)(x\cdot b)} &= \mathbf{c\cdot b}\\
+ \mathbf{x\cdot b} &= \mathbf{c\cdot b}.
+ \end{align*}
+ Substituting this into the original equation, we have
+ \[
+ \mathbf{x}(1 + \mathbf{a\cdot b}) = \mathbf{c + (c\cdot b)a}
+ \]
+ If $(1 + \mathbf{a \cdot b})$ is non-zero, then
+ \[
+ \mathbf{x} = \frac{\mathbf{c + (c\cdot b)a}}{1 + \mathbf{a\cdot b}}
+ \]
+ Otherwise, when $(1 + \mathbf{a\cdot b}) = 0$, if $\mathbf{c + (c\cdot b)a \not= 0}$, then a contradiction is reached. Otherwise, $\mathbf{x\cdot b = c\cdot b}$ is the most general solution, which is a plane of solutions.
+\end{eg}
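We can substitute the solution back into the original equation numerically (a sketch in Python with numpy; with random $\mathbf{a, b, c}$ in $[0,1)^3$ the non-degeneracy condition $1 + \mathbf{a\cdot b} \not= 0$ holds automatically):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.random(3), rng.random(3), rng.random(3)

# The solution derived above, valid since 1 + a.b >= 1 here.
x = (c + (c @ b) * a) / (1 + a @ b)

# Substitute back into x - (x x a) x b = c.
residual = x - np.cross(np.cross(x, a), b) - c
```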
+
+\section{Linear maps}
+A \emph{linear map} is a special type of function between vector spaces. In fact, most of the time, these are the only functions we actually care about. They are maps that satisfy the property $f(\lambda \mathbf{a} + \mu \mathbf{b}) = \lambda f(\mathbf{a}) + \mu f(\mathbf{b})$.
+
+We will first look at two important examples of linear maps --- rotations and reflections, and then study their properties formally.
+\subsection{Examples}
+\subsubsection{Rotation in \texorpdfstring{$\R^3$}{R3}}
+In $\R^3$, first consider the simple cases where we rotate about the $z$ axis by $\theta$. We call this rotation $R$ and write $\mathbf{x}' = R(\mathbf{x})$.
+
+Suppose that initially, $\mathbf{x} = (x, y, z) = (r\cos \phi, r\sin \phi, z)$. Then after a rotation by $\theta$, we get
+\begin{align*}
+ \mathbf{x}' &= (r\cos(\phi + \theta), r\sin (\phi + \theta), z) \\
+ &= (r\cos \phi \cos \theta - r\sin \phi \sin \theta, r\sin \phi \cos \theta + r \cos \phi\sin \theta, z)\\
+ &= (x\cos\theta - y\sin\theta, x\sin\theta + y\cos\theta, z).
+\end{align*}
+We can represent this by a matrix $R$ such that $x'_i = R_{ij}x_j$. Using our formula above, we obtain
+\[
+ R = \begin{pmatrix}
+ \cos\theta & -\sin\theta & 0 \\
+ \sin\theta & \cos\theta & 0 \\
+ 0 & 0 & 1
+ \end{pmatrix}
+\]
+Now consider the general case where we rotate by $\theta$ about $\hat {\mathbf{n}}$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) node [below] {$O$} -- (0, 3.5) node [above] {$\hat{\mathbf{n}}$};
+ \draw [mred, ->] (0, 0) -- (3, 2) node [right] {$A$} node [pos = 0.5, anchor = north west] {$\mathbf{x}$};
+ \draw [mred] (0, 2) -- (3, 2);
+ \node at (0, 2) [left] {$B$};
+ \draw [mblue] (0, 2) -- (2.5, 2.75) node [anchor = south west] {$A'$};
+ \draw [dashed] (2.5, 2.75) -- (2.5, 2) node [below] {$C$};
+ \draw [mblue, dashed, ->] (0, 0) -- (2.5, 2.75) node [pos = 0.5, anchor = south east] {$\mathbf{x}'$};
+ \draw (2.35, 2) -- (2.35, 2.15) -- (2.5, 2.15);
+
+ \draw (5, 2) node [left] {$B$}
+ -- (7, 2) node [right] {$A$}
+ -- (6.5, 3) node [above] {$A'$}
+ -- cycle ;
+ \draw (6.5, 3) -- (6.5, 2) node [below] {$C$};
+ \draw (5.5, 2) arc(0:34:.5);
+ \draw (5.5, 2) node [anchor = south west] {$\theta$};
+ \draw (6.35, 2) -- (6.35, 2.15) -- (6.5, 2.15);
+ \end{tikzpicture}
+\end{center}
+We have $\mathbf{x'} = \overrightarrow{OB} + \overrightarrow{BC} + \overrightarrow{CA'}$.
+We know that
+\begin{align*}
+ \overrightarrow{OB} &= \mathbf{(\hat{n}\cdot x)\hat{n}}\\
+ \overrightarrow{BC} &= \overrightarrow{BA}\cos\theta\\
+ &= (\overrightarrow{BO} + \overrightarrow{OA})\cos\theta \\
+ &= \mathbf{(-(\hat{n}\cdot x)\hat{n} + x)}\cos\theta
+\end{align*}
+ Finally, to get $\overrightarrow{CA'}$, we know that $|\overrightarrow{CA'}| = |\overrightarrow{BA'}|\sin\theta = |\overrightarrow{BA}|\sin\theta = |\mathbf{\hat{n}\times x}|\sin\theta$. Also, $\overrightarrow{CA'}$ is parallel to $\hat {\mathbf{n}}\times \mathbf{x}$. So we must have $\overrightarrow{CA'} = (\hat{\mathbf{n}} \times \mathbf{x})\sin \theta$.
+
+Thus $\mathbf{x}' = \mathbf{x}\cos\theta + (1 - \cos\theta)(\hat{\mathbf{n}}\cdot \mathbf{x})\hat{\mathbf{n}} + (\hat{\mathbf{n}}\times \mathbf{x})\sin\theta$. In components,
+\[
+ x_i' = x_i\cos\theta + (1 - \cos\theta)n_jx_jn_i - \varepsilon_{ijk}x_jn_k\sin\theta.
+\]
+We want to find an $R$ such that $x_i' = R_{ij}x_j$. So
+\[
+ R_{ij} = \delta_{ij}\cos\theta + (1 - \cos\theta)n_in_j - \varepsilon_{ijk}n_k\sin\theta.
+\]
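As a check, building $R_{ij}$ from this formula with $\hat{\mathbf{n}} = (0, 0, 1)$ recovers the rotation matrix about the $z$ axis found earlier. A sketch (Python with numpy assumed; the helper function is ours):

```python
import numpy as np

def rotation_matrix(theta, n):
    """R_ij = cos(t) d_ij + (1 - cos(t)) n_i n_j - eps_ijk n_k sin(t)."""
    n = np.asarray(n, dtype=float)
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[i, k, j] = 1.0, -1.0
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(n, n)
            - np.sin(theta) * np.einsum('ijk,k->ij', eps, n))

theta = 0.7
R = rotation_matrix(theta, [0, 0, 1])
# The z-axis rotation matrix derived at the start of the section.
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
```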
+
+\subsubsection{Reflection in \texorpdfstring{$\R^3$}{R3}}
+Suppose we want to reflect through a plane through $O$ with normal $\hat{\mathbf{n}}$. First of all the projection of $\mathbf{x}$ onto $\hat{\mathbf{n}}$ is given by $(\mathbf{x}\cdot \hat{\mathbf{n}})\hat{\mathbf{n}}$. So we get $\mathbf{x}' = \mathbf{x} - 2\mathbf{(x\cdot \hat{n})\hat{n}}$. In suffix notation, we have $x_i' = x_i - 2x_jn_jn_i$. So our reflection matrix is $R_{ij} = \delta_{ij} - 2n_in_j$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (4, 1.25) -- (5, -0.5) node [below] {$\mathbf{x}'$};
+ \draw [fill=mblue, fill opacity=0.8] (0, 0) -- (5, 0) -- (7, 2.5) -- (2, 2.5) -- cycle;
+ \draw [->] (3, 1.25) -- (3, 3) node [above] {$\hat{\mathbf{n}}$};
+ \draw [->] (4, 1.25) -- (5, 3) node [above] {$\mathbf{x}$};
+ \draw [dashed] (5, 3) -- (5, 1.25);
+ \end{tikzpicture}
+\end{center}
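Reflecting twice should give back the original vector, i.e.\ $R^2 = I$. A quick check with $\hat{\mathbf{n}} = (0, 0, 1)$ (Python with numpy assumed):

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])       # unit normal of the plane z = 0
R = np.eye(3) - 2 * np.outer(n, n)  # R_ij = d_ij - 2 n_i n_j

x = np.array([1.0, 2.0, 3.0])
x_ref = R @ x  # reflection flips the z component: (1, 2, -3)
```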
+
+\subsection{Linear Maps}
+\begin{defi}[Domain, codomain and image of map]
+ Consider sets $A$ and $B$ and mapping $T:A\to B$ such that each $x\in A$ is mapped into a unique $x' = T(x)\in B$. $A$ is the \emph{domain} of $T$ and $B$ is the \emph{co-domain} of $T$. Typically, we have $T:\R^n \to \R^m$ or $T:\C^n\to \C^m$.
+\end{defi}
+
+\begin{defi}[Linear map]
+ Let $V, W$ be real (or complex) vector spaces, and $T: V\to W$. Then $T$ is a \emph{linear map} if
+ \begin{enumerate}
+ \item $T(\mathbf{a + b}) = T(\mathbf{a}) + T(\mathbf{b})$ for all $\mathbf{a, b}\in V$.
+ \item $T(\lambda\mathbf{a}) = \lambda T(\mathbf{a})$ for all $\lambda \in \R$ (or $\C$).
+ \end{enumerate}
+ Equivalently, we have $T(\lambda\mathbf{a} + \mu\mathbf{b}) = \lambda T(\mathbf{a}) + \mu T(\mathbf{b})$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Consider a translation $T:\R^3 \to \R^3$ with $T(\mathbf{x}) = \mathbf{x + a}$ for some fixed, given $\mathbf{a}\not=\mathbf{0}$. This is \emph{not} a linear map, since $T(\lambda\mathbf{x} + \mu\mathbf{y}) = \lambda\mathbf{x} + \mu\mathbf{y} + \mathbf{a}$, while $\lambda T(\mathbf{x}) + \mu T(\mathbf{y}) = \lambda \mathbf{x} + \mu \mathbf{y} + (\lambda + \mu)\mathbf{a}$, and these differ in general.
+ \item Rotation, reflection and projection are linear transformations.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Image and kernel of map]
+ The \emph{image} of a map $f: U\to V$ is the subset of $V$ $\{f(\mathbf{u}): \mathbf{u}\in U\}$. The \emph{kernel} is the subset of $U$ $\{\mathbf{u}\in U: f(\mathbf{u}) = \mathbf{0}\}$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Consider $S: \R^3 \to \R^2$ with $S(x, y, z) = (x + y, 2x - z)$. Simple yet tedious algebra shows that this is linear.
+ Now consider the effect of $S$ on the standard basis. $S(1, 0, 0) = (1, 2)$, $S(0, 1, 0) = (1, 0)$ and $S(0, 0, 1) = (0, -1)$. Clearly these are linearly dependent, but they do span the whole of $\R^2$. We can say $S(\R^3) = \R^2$. So the image is $\R^2$.
+
+ Now solve $S(x, y, z) = \mathbf{0}$. We need $x + y = 0$ and $2x - z = 0$. Thus $\mathbf{x} = (x, -x, 2x)$, i.e.\ it is parallel to $(1, -1, 2)$. So the set $\{\lambda(1, -1, 2):\lambda\in \R\}$ is the kernel of $S$.
+ \item Consider a rotation in $\R^3$. The kernel is the zero vector and the image is $\R^3$.
+ \item Consider a projection of $\mathbf{x}$ onto a plane with normal $\mathbf{\hat n}$. The image is the plane itself, and the kernel is the set of vectors parallel to $\mathbf{\hat n}$.
+ \end{enumerate}
+\end{eg}
+
+\begin{thm}
+ Consider a linear map $f: U\to V$, where $U, V$ are vector spaces. Then $\im (f)$ is a subspace of $V$, and $\ker (f)$ is a subspace of $U$.
+\end{thm}
+
+\begin{proof}
+ Both are non-empty since $f(\mathbf{0}) = \mathbf{0}$.
+
+ If $\mathbf{x, y}\in \im (f)$, then $\exists \mathbf{a, b}\in U$ such that $\mathbf{x} = f(\mathbf{a}), \mathbf{y} = f(\mathbf{b})$. Then $\lambda \mathbf{x} + \mu\mathbf {y} = \lambda f(\mathbf{a}) + \mu f(\mathbf{b}) = f(\lambda\mathbf{a} + \mu\mathbf{b})$. Now $\lambda\mathbf{a} + \mu\mathbf{b}\in U$ since $U$ is a vector space, so there is an element in $U$ that maps to $\lambda\mathbf{x}+ \mu\mathbf{y}$. So $\lambda\mathbf{x}+ \mu\mathbf{y}\in \im (f)$ and $\im (f)$ is a subspace of $V$.
+
+ Suppose $\mathbf{x, y}\in \ker(f)$, i.e.\ $f(\mathbf{x}) = f(\mathbf {y}) = \mathbf{0}$. Then $f(\lambda\mathbf{x} + \mu\mathbf{y}) = \lambda f(\mathbf{x}) + \mu f(\mathbf{y}) = \lambda \mathbf{0} + \mu\mathbf{0} = \mathbf{0}$. Therefore $\lambda\mathbf{x}+ \mu\mathbf{y} \in \ker (f)$.
+\end{proof}
+
+\subsection{Rank and nullity}
+\begin{defi}[Rank of linear map]
+ The \emph{rank} of a linear map $f: U\to V$, denoted by $r(f)$, is the dimension of the image of $f$.
+\end{defi}
+
+\begin{defi}[Nullity of linear map]
+ The \emph{nullity} of $f$, denoted $n(f)$ is the dimension of the kernel of $f$.
+\end{defi}
+
+\begin{eg}
+ For the projection onto a plane in $\R^3$, the image is the whole plane and the rank is $2$. The kernel is a line so the nullity is $1$.
+\end{eg}
+
+\begin{thm}[Rank-nullity theorem]
+ For a linear map $f: U \to V$,
+ \[
+ r(f) + n(f) = \dim (U).
+ \]
+\end{thm}
+
+\begin{proof}
+ (Non-examinable) Write $\dim(U) = n$ and $n(f) = m$. If $m = n$, then $f$ is the zero map, and the proof is trivial, since $r(f) = 0 $. Otherwise, assume $m < n$.
+
+ Suppose $\{\mathbf{e}_1, \mathbf{e}_2,\cdots, \mathbf{e}_m\}$ is a basis of $\ker f$. Extend this to a basis of the whole of $U$ to get $\{\mathbf{e}_1, \mathbf{e}_2, \cdots, \mathbf{e}_m, \mathbf{e}_{m+1}, \cdots, \mathbf{e}_n\}$. To prove the theorem, we need to prove that $\{f(\mathbf{e}_{m+1}), f(\mathbf{e}_{m + 2}), \cdots, f(\mathbf{e}_n)\}$ is a basis of $\im (f)$.
+ \begin{enumerate}
+ \item First show that it spans $\im (f)$. Take $\mathbf{y}\in \im(f)$. Thus $\exists \mathbf{x}\in U$ such that $\mathbf{y} = f(\mathbf{x})$. Then
+ \[
+ \mathbf{y} = f(\alpha_1\mathbf{e}_1 + \alpha_2\mathbf{e}_2 + \cdots + \alpha_n \mathbf{e}_n),
+ \]
+ since $\mathbf{e}_1, \cdots \mathbf{e}_n$ is a basis of $U$. Thus
+ \[
+ \mathbf{y} = \alpha_1f(\mathbf{e}_1) + \alpha_2f(\mathbf{e}_2) + \cdots + \alpha_m f(\mathbf{e}_m) + \alpha_{m + 1}f(\mathbf{e}_{m + 1}) + \cdots + \alpha_nf(\mathbf{e}_n).
+ \]
+ The first $m$ terms map to $\mathbf{0}$, since $\mathbf{e}_1, \cdots, \mathbf{e}_m$ is a basis of the kernel of $f$. Thus
+ \[
+ \mathbf{y} = \alpha_{m + 1} f(\mathbf{e}_{m + 1}) + \cdots + \alpha_n f(\mathbf{e}_n).
+ \]
+ \item To show that they are linearly independent, suppose
+ \[
+ \alpha_{m + 1} f(\mathbf{e}_{m + 1}) + \cdots + \alpha_n f(\mathbf{e}_n) = \mathbf{0}.
+ \]
+ Then
+ \[
+ f(\alpha_{m + 1}\mathbf{e}_{m + 1} + \cdots + \alpha_n\mathbf{e}_n) = \mathbf{0}.
+ \]
+ Thus $\alpha_{m + 1}\mathbf{e}_{m + 1} + \cdots + \alpha_n\mathbf{e}_n\in \ker (f)$. Since $\{\mathbf{e}_1, \cdots, \mathbf{e}_m\}$ spans $\ker (f)$, there exist some $\alpha_1, \alpha_2, \cdots, \alpha_m$ such that
+ \[
+ \alpha_{m + 1}\mathbf{e}_{m + 1} + \cdots + \alpha_n\mathbf{e}_n = \alpha_1\mathbf{e_1} + \cdots + \alpha_m\mathbf{e}_m.
+ \]
+ But $\mathbf{e}_1, \cdots, \mathbf{e}_n$ form a basis of $U$ and are hence linearly independent. So $\alpha_i = 0$ for all $i$. Then the only solution to the equation $\alpha_{m + 1} f(\mathbf{e}_{m + 1}) + \cdots + \alpha_n f(\mathbf{e}_n) = \mathbf{0}$ is $\alpha_i = 0$, and they are linearly independent by definition. \qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ Calculate the kernel and image of $f:\R^3\to \R^3$, defined by $f(x, y, z) = (x + y + z, 2x - y+ 5z, x + 2z)$.
+
+ First find the kernel: we've got the system of equations:
+ \begin{align*}
+ x + y + z &= 0\\
+ 2x - y + 5z &= 0\\
+ x + 2z &= 0
+ \end{align*}
+ Note that the first and second equation add to give $3x + 6z = 0$, which is identical to the third. Then using the first and third equation, we have $y = -x - z = z$. So the kernel is any vector in the form $(-2z, z, z)$ and is the span of $(-2, 1, 1)$.
+
+ To find the image, extend the basis of $\ker(f)$ to a basis of the whole of $\R^3$: $\{(-2, 1, 1), (0, 1, 0), (0, 0, 1)\}$. Apply $f$ to this basis to obtain $(0, 0, 0), (1, -1, 0)$ and $(1, 5, 2)$. From the proof of the rank-nullity theorem, we know that $f(0, 1, 0)$ and $f(0, 0, 1)$ is a basis of the image.
+
+ To get the standard form of the image, we know that the normal to the plane is parallel to $(1, -1, 0)\times (1, 5, 2) \parallel (1, 1, -3)$. Since $\mathbf{0}\in \im (f)$, the equation of the plane is $x + y - 3z = 0$.
+\end{eg}
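The kernel, rank and image computed above can all be confirmed numerically (a sketch, numpy assumed):

```python
import numpy as np

# Matrix of f(x, y, z) = (x + y + z, 2x - y + 5z, x + 2z).
A = np.array([[1.0,  1.0, 1.0],
              [2.0, -1.0, 5.0],
              [1.0,  0.0, 2.0]])

ker = np.array([-2.0, 1.0, 1.0])  # claimed spanning vector of the kernel
k_check = A @ ker                 # should be the zero vector

rank = np.linalg.matrix_rank(A)   # rank + nullity = 3, so rank should be 2

normal = np.array([1.0, 1.0, -3.0])  # claimed normal of the image plane
img_check = normal @ A  # each column (image of a basis vector) lies in the plane
```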
+
+\subsection{Matrices}
+In the examples above, we have represented our linear maps by some object $R$ such that $x_i' = R_{ij}x_j$. We call $R$ the \emph{matrix} for the linear map. In general, let $\alpha: \R^n \to \R^m$ be a linear map, and $\mathbf{x}' = \alpha(\mathbf{x})$.
+
+Let $\{\mathbf{e}_i\}$ be a basis of $\R^n$. Then $\mathbf{x} = x_j \mathbf{e}_j$ for some $x_j$. Then we get
+\[
+ \mathbf{x}' = \alpha(x_j \mathbf{e}_j) = x_j \alpha(\mathbf{e}_j).
+\]
+So we get that
+\[
+ x_i' = [\alpha(\mathbf{e}_j)]_i x_j.
+\]
+We now define $A_{ij} = [\alpha(\mathbf{e}_j)]_i$. Then $x_i' = A_{ij}x_j$. We write
+\[
+ A = \{A_{ij}\} =
+ \begin{pmatrix}
+ A_{11} & \cdots & A_{1n}\\
+ \vdots & A_{ij} & \vdots\\
+ A_{m1} & \cdots & A_{mn}
+ \end{pmatrix}
+\]
+Here $A_{ij}$ is the entry in the $i$th row of the $j$th column. We say that $A$ is an $m\times n$ matrix, and write $\mathbf{x}' = A\mathbf{x}$.
+
+We see that the columns of the matrix are the images of the standard basis vectors under the mapping $\alpha$.
+
+
+
+\subsubsection{Examples}
+\begin{enumerate}
+ \item In $\R^2$, consider a reflection in a line at angle $\theta$ to the $x$ axis. We know that $\mathbf{\hat{i}}\mapsto \cos 2\theta \mathbf{\hat{i}} + \sin 2\theta\mathbf{\hat j}$, while $\mathbf{\hat{j}}\mapsto \sin 2\theta \mathbf{\hat i} - \cos 2\theta \mathbf{\hat{j}}$. Then the matrix is
+ $\begin{pmatrix}
+ \cos 2\theta & \sin 2\theta\\
+ \sin 2\theta & -\cos 2\theta
+ \end{pmatrix}$.
+
+ \item In $\R^3$, as we've previously seen, a rotation by $\theta$ about the $z$ axis is given by
+ \[
+ R = \begin{pmatrix}
+ \cos\theta & -\sin\theta & 0 \\
+ \sin\theta & \cos\theta & 0 \\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \]
+ \item In $\R^3$, a reflection in plane with normal $\hat{\mathbf{n}}$ is given by $R_{ij} = \delta_{ij} - 2\hat n_i\hat n_j$. Written as a matrix, we have
+ \[
+ \begin{pmatrix}
+ 1 - 2\hat n_1^2 & -2\hat n_1\hat n_2 & -2\hat n_1\hat n_3\\
+ -2\hat n_2\hat n_1 & 1 - 2\hat n_2^2 & -2\hat n_2\hat n_3\\
+ -2\hat n_3\hat n_1 & -2\hat n_3\hat n_2 & 1 - 2\hat n_3^2
+ \end{pmatrix}
+ \]
+ \item Dilation (``stretching'') $\alpha: \R^3 \to \R^3$ is given by a map $(x, y, z)\mapsto (\lambda x, \mu y, \nu z)$ for some $\lambda, \mu, \nu$. The matrix is
+ \[
+ \begin{pmatrix}
+ \lambda & 0 & 0\\
+ 0 & \mu & 0\\
+ 0 & 0 & \nu
+ \end{pmatrix}
+ \]
+ \item Shear: Consider $S:\R^3 \to \R^3$ that shears in the $x$ direction:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 2.5) node [above] {$y$};
+ \draw [->] (0, 0) -- (1, 1) node [above] {$\mathbf{x}$};
+ \draw [->] (0, 0) -- (2, 1) node [above] {$\mathbf{x}'$};
+ \draw [->] (1, 1.5) -- (2, 1.5) node [align=center, above] {shear in $x$ direction};
+ \end{tikzpicture}
+ \end{center}
+ We have $(x, y, z)\mapsto (x + \lambda y, y, z)$. Then
+ \[
+ S =
+ \begin{pmatrix}
+ 1 & \lambda & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \]
+\end{enumerate}
+
+\subsubsection{Matrix Algebra}
+This part is mostly on a whole lot of definitions, saying what we can do with matrices and classifying them into different types.
+
+\begin{defi}[Addition of matrices] Consider two linear maps $\alpha, \beta: \R^n\to \R^m$. The sum of $\alpha$ and $\beta$ is defined by
+ \begin{align*}
+ (\alpha + \beta)(\mathbf{x}) &= \alpha(\mathbf{x}) + \beta(\mathbf{x})\\
+ \intertext{In terms of the matrix, we have}
+ (A + B)_{ij}x_j &= A_{ij}x_j + B_{ij}x_j,\\
+ \intertext{or}
+ (A + B)_{ij} &= A_{ij}+B_{ij}.
+ \end{align*}
+\end{defi}
+
+\begin{defi}[Scalar multiplication of matrices]
+ Define $(\lambda\alpha)\mathbf{x} = \lambda[\alpha(\mathbf{x})]$. So $(\lambda A)_{ij} = \lambda A_{ij}$.
+\end{defi}
+
+\begin{defi}[Matrix multiplication]
+ Consider maps $\alpha: \R^\ell \to \R^n$ and $\beta: \R^n\to \R^m$. The composition is $\beta\alpha:\R^\ell\to\R^m$. Take $\mathbf{x}\in \R^\ell\mapsto \mathbf{x}''\in \R^m$. Then $\mathbf{x}'' = (BA)\mathbf{x} = B\mathbf{x'}$, where $\mathbf{x}' = A\mathbf{x}$. Using suffix notation, we have $x_i'' = (B\mathbf{x}')_i = B_{ik}x_k' = B_{ik}A_{kj}x_j$. But $x_i'' = (BA)_{ij}x_j$. So
+ \[
+ (BA)_{ij} = B_{ik}A_{kj}.
+ \]
+ Generally, an $m\times n$ matrix multiplied by an $n\times \ell$ matrix gives an $m\times\ell$ matrix. $(BA)_{ij}$ is given by the $i$th row of $B$ dotted with the $j$th column of $A$.
+\end{defi}
+Note that the number of columns of $B$ has to be equal to the number of rows of $A$ for multiplication to be defined. If $\ell = m$ as well, then both $BA$ and $AB$ make sense, but $AB\not= BA$ in general. In fact, they don't even have to have the same dimensions.
+
+Also, since function composition is associative, we get $A(BC) = (AB)C$.
+
+\begin{defi}[Transpose of matrix]
+ If $A$ is an $m\times n$ matrix, the \emph{transpose} $A^T$ is an $n\times m$ matrix defined by $(A^T)_{ij} = A_{ji}$.
+\end{defi}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $(A^T)^T = A$.
+ \item If $\mathbf{x}$ is a column vector $\begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix}$, then $\mathbf{x}^T$ is a row vector $(x_1\; x_2\cdots x_n)$.
+ \item $(AB)^T = B^TA^T$ since $(AB)^T_{ij} = (AB)_{ji} = A_{jk}B_{ki} = B_{ki}A_{jk} $\\$= (B^T)_{ik}(A^T)_{kj} = (B^TA^T)_{ij}$.
+ \end{enumerate}
+\end{prop}
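The reversal rule $(AB)^T = B^TA^T$ is easy to check on rectangular matrices, where the dimensions force the order (a sketch, numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((2, 3))
B = rng.random((3, 4))

lhs = (A @ B).T  # a 4 x 2 matrix
rhs = B.T @ A.T  # B^T A^T is the only order that is even defined
```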
+
+\begin{defi}[Hermitian conjugate]
+ Define $A^{\dagger} = (A^T)^*$. Similarly, $(AB)^\dagger = B^\dagger A^\dagger$.
+\end{defi}
+
+\begin{defi}[Symmetric matrix]
+ A matrix is \emph{symmetric} if $A^T = A$.
+\end{defi}
+
+\begin{defi}[Hermitian matrix]
+ A matrix is \emph{Hermitian} if $A^\dagger = A$. (The diagonal of a Hermitian matrix must be real).
+\end{defi}
+
+\begin{defi}[Anti/skew symmetric matrix]
+ A matrix is \emph{anti-symmetric} or \emph{skew symmetric} if $A^T = -A$. The diagonals are all zero.
+\end{defi}
+
+\begin{defi}[Skew-Hermitian matrix]
+ A matrix is \emph{skew-Hermitian} if $A^\dagger = -A$. The diagonals are pure imaginary.
+\end{defi}
+
+\begin{defi}[Trace of matrix]
+ The \emph{trace} of an $n\times n$ matrix $A$ is the sum of the diagonal. $\tr(A) = A_{ii}$.
+\end{defi}
+
+\begin{eg}
+ Consider the reflection matrix $R_{ij} = \delta_{ij} - 2\hat n_i \hat n_j$. We have $\tr(A) = R_{ii} = 3 - 2\hat{n}\cdot \hat{n} = 3 - 2 = 1$.
+\end{eg}
+
+\begin{prop}
+ $\tr(BC) = \tr(CB)$
+\end{prop}
+
+\begin{proof}
+ $\tr(BC) = B_{ik}C_{ki} = C_{ki}B_{ik} = (CB)_{kk} = \tr(CB)$
+\end{proof}
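Note that $B$ and $C$ need not be square, only of compatible shapes; $BC$ and $CB$ then have different sizes yet equal traces. A quick check (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.random((2, 4))
C = rng.random((4, 2))

t1 = np.trace(B @ C)  # trace of a 2 x 2 matrix
t2 = np.trace(C @ B)  # trace of a 4 x 4 matrix
```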
+
+\begin{defi}[Identity matrix]
+ $I = \delta_{ij}$.
+\end{defi}
+\subsubsection{Decomposition of an \texorpdfstring{$n\times n$}{n x n} matrix}
+Any $n\times n$ matrix $B$ can be split as a sum of symmetric and antisymmetric parts. Write
+\[
+ B_{ij} = \underbrace{\frac{1}{2}(B_{ij} + B_{ji})}_{S_{ij}} + \underbrace{\frac{1}{2}(B_{ij} - B_{ji})}_{A_{ij}}.
+\]
+We have $S_{ij} = S_{ji}$, so $S$ is symmetric, while $A_{ji} = -A_{ij}$, and $A$ is antisymmetric. So $B = S + A$.
+
+Furthermore, we can decompose $S$ into an isotropic part (a scalar multiple of the identity) plus a traceless part (i.e.\ sum of diagonal $= 0$). Write
+\[
+ S_{ij} = \underbrace{\frac{1}{n}\tr (S)\delta_{ij}}_{\text{isotropic part}} + \underbrace{(S_{ij} - \frac{1}{n}\tr(S)\delta_{ij})}_{T_{ij}}.
+\]
+We have $\tr(T) = T_{ii} = S_{ii} - \frac{1}{n}\tr(S)\delta_{ii} = \tr(S) - \frac{1}{n}\tr(S)(n) = 0$.
+
+Putting all these together,
+\[
+ B = \frac{1}{n}\tr(B)I + \left\{\frac{1}{2}(B + B^T) - \frac{1}{n}\tr(B)I\right\} + \frac{1}{2}(B - B^T).
+\]
+In three dimensions, we can write the antisymmetric part $A$ in terms of a single vector: we have
+\[
+ A = \begin{pmatrix}
+ 0 & a & -b\\
+ -a & 0 & c\\
+ b & -c & 0
+ \end{pmatrix}
+\]
+and we can consider
+\[
+ \varepsilon_{ijk}\omega_k =
+ \begin{pmatrix}
+ 0 & \omega_3 & -\omega_2\\
+ -\omega_3 & 0 & \omega_1\\
+ \omega_2 & -\omega_1 & 0
+ \end{pmatrix}
+\]
+So if we have $\mathbf{\omega} = (c, b, a)$, then $A_{ij} = \varepsilon_{ijk}\omega_k$.
+
+This decomposition can be useful in certain physical applications. For example, if the matrix represents the stress of a system, different parts of the decomposition will correspond to different types of stresses.
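The three parts of the decomposition can be computed and re-assembled directly (a sketch, numpy assumed; the random matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.random((3, 3))
n = 3

iso = np.trace(B) / n * np.eye(n)    # isotropic part
sym_traceless = (B + B.T) / 2 - iso  # traceless symmetric part
antisym = (B - B.T) / 2              # antisymmetric part

recon = iso + sym_traceless + antisym  # should recover B
```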
+
+\subsubsection{Matrix inverse}
+
+\begin{defi}[Inverse of matrix]
+ Consider an $m\times n$ matrix $A$ and $n\times m$ matrices $B$ and $C$. If $BA = I$, then we say $B$ is the \emph{left inverse} of $A$. If $AC = I$, then we say $C$ is the \emph{right inverse} of $A$. If $A$ is square ($n\times n$), then $B = B(AC) = (BA)C = C$, i.e.\ the left and right inverses coincide. Both are denoted by $A^{-1}$, the \emph{inverse} of $A$. Therefore we have
+ \[
+ AA^{-1} = A^{-1}A = I.
+ \]
+\end{defi}
+Note that not all square matrices have inverses. For example, the zero matrix clearly has no inverse.
+
+\begin{defi}[Invertible matrix]
+ If $A$ has an inverse, then $A$ is \emph{invertible}.
+\end{defi}
+
+\begin{prop}
+ $(AB)^{-1} = B^{-1}A^{-1}$
+\end{prop}
+
+\begin{proof}
+ $(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}B = I$.
+\end{proof}
+
+\begin{defi}[Orthogonal and unitary matrices]
+ A real $n\times n$ matrix is \emph{orthogonal} if $A^TA = AA^T = I$, i.e.\ $A^T = A^{-1}$. A complex $n\times n$ matrix is \emph{unitary} if $U^\dagger U = UU^\dagger = I$, i.e.\ $U^\dagger = U^{-1}$.
+\end{defi}
+Note that an orthogonal matrix $A$ satisfies $A_{ik}(A^T_{kj}) = \delta_{ij}$, i.e.\ $A_{ik}A_{jk} = \delta_{ij}$. We can see this as saying ``the scalar product of two distinct rows is 0, and the scalar product of a row with itself is 1''. Alternatively, the rows (and columns --- by considering $A^T$) of an orthogonal matrix form an orthonormal set.
+
+Similarly, for a unitary matrix, $U_{ik}U_{kj}^\dagger = \delta_{ij}$, i.e.\ $U_{ik}U_{jk}^* = \delta_{ij}$: the rows are orthonormal with respect to the complex scalar product (and likewise the columns, using $U^\dagger U = I$).
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item The reflection in a plane is an orthogonal matrix. Since $R_{ij} = \delta_{ij} - 2n_in_j$, we have
+ \begin{align*}
+ R_{ik}R_{jk} &= (\delta_{ik} - 2n_in_k)(\delta_{jk} - 2n_jn_k)\\
+ &= \delta_{ik}\delta_{jk} - 2\delta_{jk}n_in_k - 2\delta_{ik}n_jn_k + 4n_in_kn_jn_k\\
+ &= \delta_{ij} - 2n_in_j - 2n_jn_i + 4n_in_j(n_kn_k)\\
+ &= \delta_{ij}
+ \end{align*}
+ \item A rotation is likewise represented by an orthogonal matrix. We could multiply out using suffix notation, but it would be cumbersome to do so. Alternatively, denote the matrix of the rotation by angle $\theta$ about $\mathbf{\hat n}$ as $R(\theta, \mathbf{\hat n})$. Clearly, $R(\theta, \mathbf{\hat n})^{-1} = R(-\theta, \mathbf{\hat n})$. We have
+ \begin{align*}
+ R_{ij}(-\theta, \mathbf{\hat n}) &= (\cos\theta)\delta_{ij} + n_in_j(1 - \cos\theta) + \varepsilon_{ijk}n_k\sin\theta\\
+ &= (\cos\theta)\delta_{ji} + n_jn_i(1 - \cos\theta) - \varepsilon_{jik}n_k\sin\theta\\
+ &= R_{ji}(\theta, \mathbf{\hat n})
+ \end{align*}
+ In other words, $R(-\theta, \mathbf{\hat n}) = R(\theta, \mathbf{\hat n})^T$. So $R(\theta, \mathbf{\hat n})^{-1} = R(\theta, \mathbf{\hat n})^T$.
+ \end{enumerate}
+\end{eg}
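The second example can also be checked numerically. The sketch below builds $R_{ij} = \cos\theta\,\delta_{ij} + (1-\cos\theta)n_in_j - \varepsilon_{ijk}n_k\sin\theta$ for an arbitrary angle and unit axis (both chosen only for the check), and verifies that the rows of $R$ are orthonormal:

```python
import math

def eps(i, j, k):
    # Levi-Civita symbol on 0-based indices 0, 1, 2
    return (i - j) * (j - k) * (k - i) // 2

# R_ij = cos(t) d_ij + (1 - cos(t)) n_i n_j - eps_ijk n_k sin(t);
# the angle and axis are arbitrary choices for this check
t = 0.7
n = [1/3, 2/3, 2/3]          # unit axis: (1/3)^2 + (2/3)^2 + (2/3)^2 = 1
delta = lambda i, j: 1.0 if i == j else 0.0
R = [[math.cos(t) * delta(i, j) + (1 - math.cos(t)) * n[i] * n[j]
      - math.sin(t) * sum(eps(i, j, k) * n[k] for k in range(3))
      for j in range(3)] for i in range(3)]

# rows of an orthogonal matrix are orthonormal: R_ik R_jk = delta_ij
for i in range(3):
    for j in range(3):
        assert abs(sum(R[i][k] * R[j][k] for k in range(3)) - delta(i, j)) < 1e-12
```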
+
+\subsection{Determinants}
+Consider a linear map $\alpha: \R^3\to \R^3$. The standard basis $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ is mapped to $\mathbf{e}_1', \mathbf{e}_2', \mathbf{e}_3'$ with $\mathbf{e}_i' = A\mathbf{e}_i$. Thus the unit cube formed by $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ is mapped to the parallelepiped with volume
+\begin{align*}
+ [\mathbf{e}_1', \mathbf{e}_2', \mathbf{e}_3'] &= \varepsilon_{ijk}(e_1')_i (e_2')_j (e_3')_k\\
+ &= \varepsilon_{ijk} A_{i\ell} \underbrace{(e_1)_\ell}_{\delta_{1\ell}} A_{jm}\underbrace{(e_2)_m}_{\delta_{2m}} A_{kn}\underbrace{(e_3)_n}_{\delta_{3n}}\\
+ &= \varepsilon_{ijk} A_{i1}A_{j2}A_{k3}
+\end{align*}
+We call this the determinant and write as
+\[
+ \det(A) = \begin{vmatrix} A_{11} & A_{12} & A_{13}\\A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33}\end{vmatrix}
+\]
+
+\subsubsection{Permutations}
+To define the determinant for square matrices of arbitrary size, we first have to consider \emph{permutations}.
+
+\begin{defi}[Permutation]
+ A \emph{permutation} of a set $S$ is a bijection $\sigma: S\to S$.
+\end{defi}
+
+\begin{notation}
+ Consider the set $S_n$ of all permutations of $1, 2, 3, \cdots , n$. $S_n$ contains $n!$ elements. Consider $\rho\in S_n$ with $i \mapsto \rho(i)$. We write
+ \[
+ \rho = \begin{pmatrix} 1 & 2 & \cdots & n\\ \rho(1) & \rho (2) &\cdots & \rho (n)\end{pmatrix}.
+ \]
+\end{notation}
+
+\begin{defi}[Fixed point]
+ A \emph{fixed point} of $\rho$ is a $k$ such that $\rho(k) = k$. e.g.\ in $\begin{pmatrix} 1 & 2 & 3 & 4\\4 & 1 & 3 & 2\end{pmatrix}$, $3$ is a fixed point. By convention, we can omit the fixed points and write it as $\begin{pmatrix} 1 & 2 & 4\\ 4 & 1 & 2\end{pmatrix}$.
+\end{defi}
+
+\begin{defi}[Disjoint permutation]
+ Two permutations are \emph{disjoint} if numbers moved by one are fixed by the other, and vice versa. e.g.\ $\begin{pmatrix} 1 & 2 & 4 & 5 & 6\\ 5 & 6 & 1 & 4 & 2\end{pmatrix} = \begin{pmatrix}2 & 6\\ 6& 2\end{pmatrix}\begin{pmatrix}1 & 4 & 5\\5 & 1 & 4\end{pmatrix}$, and the two cycles on the right hand side are disjoint. Disjoint permutations commute, but in general non-disjoint permutations do not.
+\end{defi}
+
+\begin{defi}[Transposition and $k$-cycle]
+ $\begin{pmatrix} 2 & 6 \\ 6 & 2\end{pmatrix}$ is a \emph{2-cycle} or a \emph{transposition}, and we can simply write $(2\; 6)$. $\begin{pmatrix}1 & 4 & 5\\5 & 1 & 4\end{pmatrix}$ is a 3-cycle, and we can simply write $(1\; 5\; 4)$. (1 is mapped to 5; 5 is mapped to 4; 4 is mapped to 1)
+\end{defi}
+
+\begin{prop}
+ Any $q$-cycle can be written as a product of 2-cycles.
+\end{prop}
+
+\begin{proof}
+ $(1\; 2\; 3\; \cdots \; n) = (1\; 2)(2\; 3)(3\; 4)\cdots (n-1\; n)$.
+\end{proof}
+
+\begin{defi}[Sign of permutation]
+ The \emph{sign} of a permutation $\varepsilon(\rho)$ is $(-1)^r$, where $r$ is the number of 2-cycles when $\rho$ is written as a product of 2-cycles. If $\varepsilon(\rho) = +1$, it is an even permutation. Otherwise, it is an odd permutation. Note that $\varepsilon(\rho\sigma) = \varepsilon(\rho)\varepsilon(\sigma)$ and $\varepsilon(\rho^{-1}) = \varepsilon(\rho)$.
+\end{defi}
+The proof that this is well-defined can be found in IA Groups.
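Well-definedness can at least be illustrated computationally. The sketch below (with $0$-based indices, which do not affect parities) computes the sign two ways, by counting inversions and by decomposing into disjoint cycles (a $k$-cycle contributing $(-1)^{k-1}$), then checks the two agree and that the sign is multiplicative:

```python
from itertools import permutations

def sign_inversions(p):
    # (-1)^(number of inversions): one way to read off the parity
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inv

def sign_cycles(p):
    # decompose into disjoint cycles; a k-cycle is a product of k - 1 transpositions
    seen, sgn = set(), 1
    for start in range(len(p)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        sgn *= (-1) ** (length - 1)
    return sgn

# the two computations agree on every permutation of {0, 1, 2, 3}
for p in permutations(range(4)):
    assert sign_inversions(p) == sign_cycles(p)

# and the sign is multiplicative under composition
p, q = [2, 0, 1, 3], [1, 0, 3, 2]
pq = [p[q[i]] for i in range(4)]
assert sign_cycles(pq) == sign_cycles(p) * sign_cycles(q)
```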
+
+\begin{defi}[Levi-Civita symbol]
+ The \emph{Levi-Civita} symbol is defined by
+ \[
+ \varepsilon_{j_1j_2\cdots j_n} = \begin{cases}+1 & \text{ if } j_1j_2j_3\cdots j_n\text{ is an even permutation of }1, 2, \cdots n\\
+ -1 & \text{ if it is an odd permutation}\\
+ 0 & \text{ if any 2 of them are equal}
+ \end{cases}
+ \]
+ Clearly, $\varepsilon_{\rho(1)\rho(2)\cdots \rho(n)} = \varepsilon(\rho)$.
+\end{defi}
+
+\begin{defi}[Determinant]
+ The \emph{determinant} of an $n\times n$ matrix $A$ is defined as:
+ \[
+ \det (A) = \sum_{\sigma\in S_n} \varepsilon(\sigma) A_{\sigma(1)1}A_{\sigma(2)2}\cdots A_{\sigma(n)n},
+ \]
+ or equivalently,
+ \[
+ \det(A) = \varepsilon_{j_1j_2\cdots j_n}A_{j_11}A_{j_22}\cdots A_{j_nn}.
+ \]
+\end{defi}
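The definition translates directly into (very inefficient) code: a sum of $n!$ signed products, one per permutation. A sketch:

```python
from itertools import permutations

def sign(p):
    # parity via inversion count
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inv

def det(A):
    # det A = sum over sigma in S_n of eps(sigma) A_{sigma(1)1} ... A_{sigma(n)n}
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        term = sign(s)
        for j in range(n):
            term *= A[s[j]][j]
        total += term
    return total

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3    # matches ad - bc
```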
+
+\begin{prop}
+ \[
+ \begin{vmatrix}
+ a & b\\
+ c & d
+ \end{vmatrix} = ad - bc
+ \]
+\end{prop}
+
+\subsubsection{Properties of determinants}
+\begin{prop}
+ $\det (A) = \det (A^T)$.
+\end{prop}
+
+\begin{proof}
+ Take a single term $A_{\sigma(1)1}A_{\sigma(2)2}\cdots A_{\sigma(n)n}$ and let $\rho$ be another permutation in $S_n$. We have
+ \[
+ A_{\sigma(1)1}A_{\sigma(2)2}\cdots A_{\sigma(n)n} = A_{\sigma(\rho(1))\rho(1)}A_{\sigma(\rho(2))\rho(2)}\cdots A_{\sigma(\rho(n))\rho(n)}
+ \]
+ since the right hand side is just re-ordering the order of multiplication. Choose $\rho = \sigma^{-1}$ and note that $\varepsilon(\sigma) = \varepsilon(\rho)$. Then
+ \[
+ \det(A) = \sum_{\rho\in S_n} \varepsilon(\rho) A_{1\rho(1)}A_{2\rho(2)}\cdots A_{n\rho(n)} = \det (A^T). \qedhere
+ \]
+\end{proof}
+
+\begin{prop}
+ If matrix $B$ is formed by multiplying every element in a single row of $A$ by a scalar $\lambda$, then $\det (B) = \lambda \det (A)$. Consequently, $\det (\lambda A) = \lambda^n \det(A)$.
+\end{prop}
+
+\begin{proof}
+ Each term in the sum contains exactly one element from the scaled row, so each term is multiplied by $\lambda$ and $\det (B) = \lambda\det (A)$. Multiplying all $n$ rows by $\lambda$ then gives $\det(\lambda A) = \lambda^n\det (A)$.
+\end{proof}
+
+\begin{prop}
+ If 2 rows (or 2 columns) of $A$ are identical, the determinant is $0$.
+\end{prop}
+
+\begin{proof}
+ Wlog, suppose columns 1 and 2 are the same. Then
+ \[
+ \det (A) = \sum_{\sigma\in S_n} \varepsilon(\sigma) A_{\sigma(1)1}A_{\sigma(2)2}\cdots A_{\sigma(n)n}.
+ \]
+ Now write an arbitrary $\sigma$ in the form $\sigma = \rho(1\; 2)$. Then $\varepsilon(\sigma) = \varepsilon(\rho)\varepsilon((1\; 2)) = -\varepsilon(\rho)$. So
+ \[
+ \det (A) = \sum_{\rho\in S_n} -\varepsilon(\rho) A_{\rho(2)1}A_{\rho(1)2}A_{\rho(3)3}\cdots A_{\rho(n)n}.
+ \]
+ But columns 1 and 2 are identical, so $A_{\rho(2)1} = A_{\rho(2)2}$ and $A_{\rho(1)2} = A_{\rho(1)1}$. So $\det (A) = -\det (A)$ and $\det(A) = 0$.
+\end{proof}
+
+\begin{prop}
+ If 2 rows or 2 columns of a matrix are linearly dependent, then the determinant is zero.
+\end{prop}
+
+\begin{proof}
+ Suppose in $A$, $($column $r) + \lambda($column $s) = 0$. Define
+ \[
+ B_{ij} =
+ \begin{cases}
+ A_{ij} & j\not= r\\
+ A_{ij} + \lambda A_{is} & j = r
+ \end{cases}.
+ \]
+ Then $\det (B) = \det(A) + \lambda \det($matrix with column $r =$ column $s) = \det(A)$. On the other hand, the $r$th column of $B$ is $($column $r) + \lambda($column $s) = \mathbf{0}$. So each term in the sum for $\det B$ contains a zero factor, and $\det (A) = \det (B) = 0$.
+\end{proof}
+Even if we don't have linearly dependent rows or columns, we can still run the exact same proof as above, and still get that $\det (B) = \det (A)$. Linear dependence is only required to show that $\det (B) = 0$. So in general, we can add a linear multiple of a column (or row) onto another column (or row) without changing the determinant.
+
+\begin{prop}
+ Given a matrix $A$, if $B$ is a matrix obtained by adding a multiple of a column (or row) of $A$ to another column (or row) of $A$, then $\det A = \det B$.
+\end{prop}
+
+\begin{cor}
+ Swapping two rows or columns of a matrix negates the determinant.
+\end{cor}
+
+\begin{proof}
+ We do the column case only. Let $A = (\mathbf{a}_1 \cdots \mathbf{a}_i \cdots \mathbf{a}_j \cdots \mathbf{a}_n)$. Then
+ \begin{align*}
+ \det (\mathbf{a}_1 \cdots \mathbf{a}_i \cdots \mathbf{a}_j \cdots \mathbf{a}_n)&=\det(\mathbf{a}_1 \cdots \mathbf{a}_i + \mathbf{a}_j \cdots \mathbf{a}_j \cdots \mathbf{a}_n)\\
+ &=\det (\mathbf{a}_1 \cdots \mathbf{a}_i + \mathbf{a}_j \cdots \mathbf{a}_j - (\mathbf{a}_i + \mathbf{a}_j) \cdots \mathbf{a}_n)\\
+ &=\det (\mathbf{a}_1 \cdots \mathbf{a}_i + \mathbf{a}_j \cdots -\mathbf{a}_i \cdots \mathbf{a}_n)\\
+ &=\det (\mathbf{a}_1 \cdots \mathbf{a}_j \cdots -\mathbf{a}_i \cdots \mathbf{a}_n)\\
+ &=-\det (\mathbf{a}_1 \cdots \mathbf{a}_j \cdots \mathbf{a}_i \cdots \mathbf{a}_n)
+ \end{align*}
+ Alternatively, we can prove this from the definition directly, using the fact that the sign of a transposition is $-1$ (and that the sign is multiplicative).
+\end{proof}
+
+\begin{prop}
+ $\det(AB) = \det(A)\det(B)$.
+\end{prop}
+
+\begin{proof}
+ First note that $\sum_\sigma \varepsilon(\sigma)A_{\sigma(1)\rho(1)}A_{\sigma(2)\rho(2)}\cdots A_{\sigma(n)\rho(n)} = \varepsilon(\rho)\det (A)$, i.e.\ swapping columns (or rows) an even/odd number of times gives a factor $\pm 1$ respectively. We can prove this by writing $\sigma = \mu \rho$.
+
+ Now
+ \begin{align*}
+ \det AB &= \sum_\sigma \varepsilon(\sigma)(AB)_{\sigma(1)1}(AB)_{\sigma(2)2}\cdots (AB)_{\sigma(n)n}\\
+ &= \sum_\sigma \varepsilon(\sigma) \sum_{k_1,k_2,\cdots,k_n}^{n} A_{\sigma(1)k_1}B_{k_11}\cdots A_{\sigma(n)k_n}B_{k_nn}\\
+ &= \sum_{k_1,\cdots,k_n}B_{k_11}\cdots B_{k_nn}\underbrace{\sum_\sigma \varepsilon(\sigma) A_{\sigma(1)k_1}A_{\sigma(2)k_2}\cdots A_{\sigma(n)k_n}}_{S}
+ \intertext{Now consider the many different $S$'s. If in $S$, two of $k_1, \cdots, k_n$ are equal, then $S$ is a determinant of a matrix with two columns the same, i.e.\ $S = 0$. So we only have to consider the sum over distinct $k_i$s. Thus the $k_i$s are a permutation of $1, \cdots, n$, say $k_i =\rho (i)$. Then we can write}
+ \det AB &= \sum_\rho B_{\rho(1)1}\cdots B_{\rho(n)n} \sum_\sigma \varepsilon(\sigma) A_{\sigma(1)\rho(1)} \cdots A_{\sigma(n)\rho(n)}\\
+ &= \sum_\rho B_{\rho(1)1}\cdots B_{\rho(n)n} (\varepsilon(\rho)\det A)\\
+ &= \det A\sum_\rho \varepsilon(\rho) B_{\rho(1)1}\cdots B_{\rho(n)n}\\
+ &= \det A\det B \qedhere
+ \end{align*}
+\end{proof}
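A quick numerical check of multiplicativity, reusing the permutation-sum determinant from the definition (the integer matrices are arbitrary choices, and integer arithmetic keeps everything exact):

```python
from itertools import permutations

def sign(p):
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inv

def det(A):
    # permutation-sum determinant, exact on integer matrices
    n, total = len(A), 0
    for s in permutations(range(n)):
        term = sign(s)
        for j in range(n):
            term *= A[s[j]][j]
        total += term
    return total

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]    # arbitrary integer matrices
B = [[1, 0, 2], [0, 1, 1], [3, 1, 0]]
assert det(matmul(A, B)) == det(A) * det(B)
```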
+
+\begin{cor}
+ If $A$ is orthogonal, $\det A = \pm 1$.
+\end{cor}
+
+\begin{proof}
+ \begin{align*}
+ AA^T &= I\\
+ \det AA^T &= \det I\\
+ \det A\det A^T &= 1\\
+ (\det A)^2 &= 1\\
+ \det A &= \pm 1 \qedhere
+ \end{align*}
+\end{proof}
+
+\begin{cor}
+ If $U$ is unitary, $|\det U| = 1$.
+\end{cor}
+
+\begin{proof}
+ We have $\det U^\dagger = (\det U^T)^* = \det(U)^*$. Since $UU^\dagger = I$, we have $\det(U)\det(U)^* = 1$.
+\end{proof}
+
+\begin{prop}
+ In $\R^3$, orthogonal matrices represent either a rotation ($\det = 1$) or a reflection ($\det = -1$).
+\end{prop}
+\subsubsection{Minors and Cofactors}
+\begin{defi}[Minor and cofactor]
+ For an $n\times n$ matrix $A$, define $A^{ij}$ to be the $(n - 1)\times (n - 1)$ matrix in which row $i$ and column $j$ of $A$ have been removed.
+
+ The \emph{minor} of the $ij$th element of $A$ is $M_{ij} = \det A^{ij}$.
+
+ The \emph{cofactor} of the $ij$th element of $A$ is $\Delta_{ij} = (-1)^{i + j}M_{ij}$.
+\end{defi}
+
+\begin{notation}
+ We use $\bar \;$ to denote a symbol which has been missed out of a natural sequence.
+\end{notation}
+\begin{eg}
+ $1, 2, 3, 5 = 1, 2, 3, \bar 4, 5$.
+\end{eg}
+
+The significance of these definitions is that we can use them to provide a systematic way of evaluating determinants. We will also use them to find inverses of matrices.
+\begin{thm}[Laplace expansion formula]
+ For any particular fixed $i$,
+ \[
+ \det A = \sum_{j = 1}^{n} A_{ji}\Delta_{ji}.
+ \]
+\end{thm}
+\begin{proof}
+ \[
+ \det A = \sum_{j_i = 1}^nA_{j_ii} \sum_{j_1, \cdots, \overline{j_i}, \cdots j_n}^n \varepsilon_{j_1j_2\cdots j_n} A_{j_11}A_{j_22}\cdots \overline{A_{j_ii}}\cdots A_{j_nn}
+ \]
+ Let $\sigma \in S_n$ be the permutation which moves $j_i$ to the $i$th position, and leaves everything else in its natural order, i.e.\setcounter{MaxMatrixCols}{11}
+ \[
+ \sigma =
+ \begin{pmatrix}
+ 1 &\cdots& i & i + 1 & i + 2 & \cdots &j_i - 1&j_i& j_i + 1 & \cdots & n\\
+ 1 & \cdots & j_i & i & i + 1 & \cdots & j_i - 2 & j_i - 1 & j_i + 1 & \cdots & n
+ \end{pmatrix}
+ \]
+ if $j_i > i$, and similarly for other cases. To perform this permutation, $|i - j_i|$ transpositions are made. So $\varepsilon(\sigma) = (-1)^{i - j_i}$.
+
+ Now consider the permutation $\rho\in S_n$
+ \[
+ \rho =
+ \begin{pmatrix}
+ 1 & \cdots & \cdots & \bar {j_i} & \cdots & n\\
+ j_1 & \cdots & \bar{j_i} & \cdots & \cdots & j_n\\
+ \end{pmatrix}
+ \]
+ The composition $\rho\sigma$ reorders $(1, \cdots, n)$ to $(j_1, j_2,\cdots, j_n)$. So $\varepsilon(\rho\sigma) = \varepsilon_{j_1\cdots j_n} = \varepsilon(\rho)\varepsilon(\sigma) = (-1)^{i - j_i} \varepsilon_{j_1\cdots \bar j_i \cdots j_n}$. Hence the original equation becomes
+ \begin{align*}
+ \det A &= \sum_{j_i = 1}^n A_{j_i i} \sum_{j_1\cdots \bar j_i\cdots j_n}(-1)^{i - j_i} \varepsilon_{j_1\cdots \bar j_i \cdots j_n} A_{j_11}\cdots \overline{A_{j_ii}} \cdots A_{j_nn}\\
+ &= \sum_{j_i = 1}^n A_{j_ii} (-1)^{i - j_i}M_{j_ii}\\
+ &= \sum_{j_i = 1}^{n} A_{j_ii}\Delta_{j_ii}\\
+ &= \sum_{j = 1}^{n} A_{ji}\Delta_{ji} \qedhere
+ \end{align*} % Check!
+\end{proof}
+
+\begin{eg}
+ $\det A = \begin{vmatrix}2 & 4 & 2\\ 3 & 2 & 1\\ 2 & 0 & 1\end{vmatrix}$. We can pick the first row and have
+ \begin{align*}
+ \det A&= 2\begin{vmatrix}2 & 1\\0 & 1 \end{vmatrix} - 4\begin{vmatrix} 3 & 1\\ 2 & 1\end{vmatrix} + 2\begin{vmatrix}3 & 2 \\ 2 & 0\end{vmatrix}\\
+ &= 2(2 - 0) - 4(3 - 2) + 2(0 - 4)\\
+ &= -8.
+ \end{align*}
+ Alternatively, we can pick the second column and have
+ \begin{align*}
+ \det A&= -4\begin{vmatrix}3 & 1\\2 & 1 \end{vmatrix} + 2\begin{vmatrix} 2 & 2\\ 2 & 1\end{vmatrix} - 0\begin{vmatrix}2 & 2 \\ 3 & 1\end{vmatrix}\\
+ &= -4(3 - 2) + 2(2 - 4) - 0\\
+ &= -8.
+ \end{align*}
+\end{eg}
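The Laplace expansion gives a recursive algorithm for the determinant. The sketch below always expands along the first column ($0$-based, so the cofactor sign is $(-1)^j$) and reproduces the value $-8$ for the matrix above:

```python
def det_laplace(A):
    # expand along the first column: det A = sum_j A_{j1} Delta_{j1}
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum(A[j][0] * (-1) ** j
               * det_laplace([row[1:] for r, row in enumerate(A) if r != j])
               for j in range(n))

A = [[2, 4, 2], [3, 2, 1], [2, 0, 1]]
assert det_laplace(A) == -8              # agrees with both expansions above
```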
+
+In practical terms, we use a combination of properties of determinants with a sensible choice of $i$ to evaluate $\det(A)$.
+
+\begin{eg}
+ Consider $\begin{vmatrix}1 & a & a^2\\1 & b & b^2\\1 & c & c^2 \end{vmatrix}$. Row 1 - row 2 gives
+ \[
+ \begin{vmatrix}0 & a - b & a^2 - b^2\\1 & b & b^2\\1 & c & c^2 \end{vmatrix} = (a - b)\begin{vmatrix}0 & 1 & a + b\\1 & b & b^2\\1 & c & c^2 \end{vmatrix}.
+ \]
+ Do row 2 - row 3. We obtain
+ \[
+ (a - b)(b - c)\begin{vmatrix}0 & 1 & a + b\\0 & 1 & b + c\\1 & c & c^2 \end{vmatrix}.
+ \]
+ Row 1 - row 2 gives
+ \[
+ (a - b)(b - c)(a - c)\begin{vmatrix}0 & 0 & 1\\0 & 1 & b + c\\1 & c & c^2 \end{vmatrix} = (a - b)(b - c)(a - c).
+ \]
+\end{eg}
+
+\section{Matrices and linear equations}
+\subsection{Simple example, \texorpdfstring{$2\times 2$}{2 x 2}}
+Consider the system of equations
+\begin{align*}
+ A_{11}x_1 + A_{12}x_2 &= d_1\tag{a}\\
+ A_{21}x_1 + A_{22}x_2 &= d_2\tag{b}.
+ \intertext{We can write this as}
+ A\mathbf{x} &= \mathbf{d}.
+\end{align*}
+If we do (a)$\times A_{22} - $(b)$\times A_{12}$ and similarly the other way round, we obtain
+\begin{align*}
+ (A_{11}A_{22} - A_{12}A_{21})x_1 &= A_{22}d_1 - A_{12}d_2\\
+ \underbrace{(A_{11}A_{22} - A_{12}A_{21})}_{\det A}x_2 &= A_{11}d_2 - A_{21}d_1
+\end{align*}
+Dividing by $\det A$ and writing in matrix form, we have
+\[
+ \begin{pmatrix}
+ x_1\\
+ x_2
+ \end{pmatrix} = \frac{1}{\det A}
+ \begin{pmatrix}
+ A_{22} & - A_{12}\\
+ -A_{21} & A_{11}
+ \end{pmatrix}
+ \begin{pmatrix}
+ d_1\\
+ d_2
+ \end{pmatrix}
+\]
+On the other hand, given the equation $A\mathbf{x} = \mathbf{d}$, if $A^{-1}$ exists, then by multiplying both sides on the left by $A^{-1}$, we obtain $\mathbf{x} = A^{-1}\mathbf{d}$.
+
+Hence, we have constructed $A^{-1}$ in the $2\times 2$ case, and shown that the condition for its existence is $\det A \not= 0$, with
+\[
+ A^{-1} =\frac{1}{\det A}\begin{pmatrix}A_{22} & - A_{12}\\-A_{21} & A_{11}\end{pmatrix}
+\]
+
+\subsection{Inverse of an \texorpdfstring{$n\times n$}{n x n} matrix}
+For larger matrices, the formula for the inverse is similar, but slightly more complicated (and costly to evaluate). The key to finding the inverse is the following:
+\begin{lemma}
+ $\sum A_{ik}\Delta_{jk} = \delta_{ij}\det A$.
+\end{lemma}
+
+\begin{proof}
+ If $i \not= j$, then consider the $n\times n$ matrix $B$ which is identical to $A$ except that the $j$th row is replaced by the $i$th row of $A$. Then the cofactors $\Delta_{jk}$ of $B$ equal those of $A$, since $\Delta_{jk}$ does not depend on the elements in row $j$. Since $B$ has a duplicate row, we know that
+ \[
+ 0 = \det B = \sum_{k = 1}^n B_{jk}\Delta_{jk} = \sum_{k = 1}^n A_{ik}\Delta_{jk}.
+ \]
+ If $i = j$, then the expression is $\det A$ by the Laplace expansion formula.
+\end{proof}
+
+\begin{thm}
+ If $\det A \not =0$, then $A^{-1}$ exists and is given by
+ \[
+ (A^{-1})_{ij} = \frac{\Delta_{ji}}{\det A}.
+ \]
+\end{thm}
+
+\begin{proof}
+ \[
+ (A^{-1})_{ik}A_{kj} = \frac{\Delta_{ki}}{\det A} A_{kj} = \frac{\delta_{ij}\det A}{\det A} = \delta_{ij}.
+ \]
+ So $A^{-1}A = I$.
+\end{proof}
+The other direction is easy to prove. If $\det A = 0$, then it has no inverse, since for any matrix $B$, $\det AB = 0$, and hence $AB$ cannot be the identity.
+
+\begin{eg}
+ Consider the shear matrix $S_\lambda = \begin{pmatrix} 1 & \lambda & 0 \\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$. We have $\det{S_\lambda} = 1$. The cofactors are
+ \begin{center}
+ \begin{tabular}{ccc}
+ $\Delta_{11} = 1$ & $\Delta_{12} = 0$ & $\Delta_{13} = 0$ \\
+ $\Delta_{21} = -\lambda$ & $\Delta_{22} = 1$ & $\Delta_{23} = 0$ \\
+ $\Delta_{31} = 0$ & $\Delta_{32} = 0$ & $\Delta_{33} = 1$
+ \end{tabular}
+ \end{center}
+ So $S_\lambda^{-1} = \begin{pmatrix} 1 & -\lambda & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$.
+\end{eg}
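The cofactor formula for the inverse can be implemented directly. The sketch below uses exact rational arithmetic via \texttt{fractions.Fraction} (note the transposed cofactor in $(A^{-1})_{ij} = \Delta_{ji}/\det A$), and reproduces $S_\lambda^{-1}$ for an arbitrary $\lambda$:

```python
from fractions import Fraction

def det(A):
    # Laplace expansion along the first column (0-based indices)
    if len(A) == 1:
        return A[0][0]
    return sum(A[j][0] * (-1) ** j
               * det([row[1:] for r, row in enumerate(A) if r != j])
               for j in range(len(A)))

def inverse(A):
    # (A^{-1})_{ij} = Delta_{ji} / det A -- note the transposed cofactor
    n, d = len(A), det(A)
    def cofactor(i, j):
        minor = [[A[r][c] for c in range(n) if c != j]
                 for r in range(n) if r != i]
        return (-1) ** (i + j) * det(minor)
    return [[Fraction(cofactor(j, i), d) for j in range(n)] for i in range(n)]

lam = 7                                   # arbitrary shear parameter
S = [[1, lam, 0], [0, 1, 0], [0, 0, 1]]
assert inverse(S) == [[1, -lam, 0], [0, 1, 0], [0, 0, 1]]
```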
+
+How many arithmetic operations are involved in calculating the inverse of an $n\times n$ matrix? We just count multiplication operations since they are the most time-consuming. Suppose that calculating $\det A$ takes $f_n$ multiplications. This involves $n$ determinants of $(n - 1)\times (n - 1)$ matrices, plus $n$ more multiplications to put them together. So $f_n = nf_{n -1} + n$, giving $f_n = O(n!)$ (in fact $f_n \approx (e - 1)n!$).
+
+To find the inverse, we need to calculate $n^2$ cofactors. Each is an $(n - 1)\times (n - 1)$ determinant, and each takes $O((n - 1)!)$. So the time complexity is $O(n^2 (n - 1)!) = O(n\cdot n!)$.
+
+This is incredibly slow. Hence while it is theoretically possible to solve systems of linear equations by inverting a matrix, sane people do not do so in general. Instead, we develop certain better methods to solve the equations. In fact, the ``usual'' method people use to solve equations by hand only has complexity $O(n^3)$, which is a much better complexity.
+
+\subsection{Homogeneous and inhomogeneous equations}
+Consider $A\mathbf{x} = \mathbf{b}$ where $A$ is an $n\times n$ matrix, $\mathbf{x}$ and $\mathbf{b}$ are $n\times 1$ column vectors.
+\begin{defi}[Homogeneous equation]
+ If $\mathbf{b} = \mathbf{0}$, then the system is \emph{homogeneous}. Otherwise, it's \emph{inhomogeneous}.
+\end{defi}
+
+Suppose $\det A\not= 0$. Then there is a unique solution $\mathbf{x} = A^{-1}\mathbf{b}$ ($\mathbf{x} = \mathbf{0}$ for homogeneous).
+
+How can we understand this result? Recall that $\det A\not= 0$ means that the columns of $A$ are linearly independent. The columns are the images of the standard basis, $\mathbf{e}_i' = A\mathbf{e_i}$. So $\det A\not = 0$ means that $\mathbf{e}_i'$ are linearly independent and form a basis of $\R^n$. Therefore the image is the whole of $\R^n$. This automatically ensures that $\mathbf{b}$ is in the image, i.e.\ there is a solution.
+
+To show that there is exactly one solution, suppose $\mathbf{x}$ and $\mathbf{x}'$ are both solutions. Then $A\mathbf{x} = A\mathbf{x}' = \mathbf{b}$. So $A(\mathbf{x} - \mathbf{x}') = \mathbf{0}$. So $\mathbf{x} - \mathbf{x}'$ is in the kernel of $A$. But since the rank of $A$ is $n$, by the rank-nullity theorem, the nullity is $0$. So the kernel is trivial. So $\mathbf{x} - \mathbf{x}' = \mathbf{0}$, i.e.\ $\mathbf{x} = \mathbf{x}'$.
+
+\subsubsection{Gaussian elimination}
+Consider a general solution
+\begin{align*}
+ A_{11}x_1 + A_{12}x_2 + \cdots + A_{1n}x_n &= d_1\\
+ A_{21}x_1 + A_{22}x_2 + \cdots + A_{2n}x_n &= d_2\\
+ \vdots&\\
+ A_{m1}x_1 + A_{m2}x_2 + \cdots + A_{mn}x_n &= d_m
+\end{align*}
+So we have $m$ equations and $n$ unknowns.
+
+Assume $A_{11}\not=0$ (if not, we can re-order the equations). We can use the first equation to eliminate $x_1$ from the remaining $(m - 1)$ equations. Then use the second equation to eliminate $x_2$ from the remaining $(m - 2)$ equations (if anything goes wrong, just re-order until things work). Repeat.
+
+We are left with
+\begin{align*}
+ A_{11}x_1 + A_{12}x_2 + A_{13}x_3 + \cdots + A_{1n}x_n &= d_1\\
+ A_{22}^{(2)}x_2 + A_{23}^{(2)}x_3 + \cdots + A_{2n}^{(2)}x_n &= d_2\\
+ \vdots&\\
+ A_{rr}^{(r)}x_r + \cdots + A_{rn}^{(r)}x_n &= d_r\\
+ 0 &= d_{r + 1}^{(r)}\\
+ \vdots&\\
+ 0 &= d_{m}^{(r)}
+\end{align*}
+Here $A_{ii}^{(i)} \not=0$ (which we can achieve by re-ordering), and the superfix $(i)$ refers to the ``version number'' of the coefficient, e.g.\ $A_{22}^{(2)}$ is the second version of the coefficient of $x_2$ in the second row.
+
+Let's consider the different possibilities:
+\begin{enumerate}
+ \item $r < m$ and at least one of $d^{(r)}_{r + 1}, \cdots d_m^{(r)} \not= 0$. Then a contradiction is reached. The system is inconsistent and has no solution. We say it is \emph{overdetermined}.
+ \begin{eg}
+ Consider the system
+ \begin{align*}
+ 3x_1 + 2x_2 + x_3 &= 3\\
+ 6x_1 + 3x_2 + 3x_3 &= 0\\
+ 6x_1 + 2x_2 + 4x_3 &= 6
+ \intertext{This becomes }
+ 3x_1 + 2x_2 + x_3 &= 3\\
+ 0 - x_2 + x_3 &= -6\\
+ 0 - 2x_2 + 2x_3 &= 0
+ \intertext{And then}
+ 3x_1 + 2x_2 + x_3 &= 3\\
+ 0 - x_2 + x_3 &= -6\\
+ 0 &= 12
+ \end{align*}
+ We have $d_3^{(3)} = 12 \not= 0$ and there is no solution.
+ \end{eg}
+ \item If $r = n\leq m$, and all $d_{r + i}^{(r)} = 0$. Then from the $n$th equation, there is a unique solution for $x_n = d_{n}^{(n)}/A_{nn}^{(n)}$, and hence for all $x_i$ by back substitution. This system is \emph{determined}.
+ \begin{eg}
+ \begin{align*}
+ 2x_1 + 5x_2 &= 2\\
+ 4x_1 + 3x_2 &= 11\\
+ \intertext{This becomes}
+ 2x_1 + 5x_2 &= 2\\
+ -7x_2 &= 7
+ \end{align*}
+ So $x_2 = -1$ and thus $x_1 = 7/2$.
+ \end{eg}
+ \item If $r < n$ and $d_{r + i}^{(r)} = 0$, then $x_{r + 1}, \cdots x_n$ can be freely chosen, and there are infinitely many solutions. System is \emph{under-determined}. e.g.
+ \begin{align*}
+ x_1 + x_2 &= 1\\
+ 2x_1 + 2x_2 &= 2
+ \intertext{Which gives}
+ x_1 + x_2 &= 1\\
+ 0 &= 0
+ \end{align*}
+ So $x_1 = 1 - x_2$ is a solution for any $x_2$.
+\end{enumerate}
+In the $n = m$ case, there are $O(n^3)$ operations involved, which is much less than inverting the matrix. So this is an efficient way of solving equations.
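The procedure is straightforward to implement for the determined case. The sketch below does forward elimination with row re-ordering, then back substitution, and reproduces the solution of the $2\times 2$ example above:

```python
def gauss_solve(A, d):
    # forward elimination with row re-ordering, then back substitution;
    # assumes the determined case (a unique solution exists)
    n = len(A)
    M = [row[:] + [di] for row, di in zip(A, d)]   # augmented matrix
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]        # re-order if needed
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [M[r][c] - f * M[col][c] for c in range(n + 1)]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# the determined example from the text: x1 = 7/2, x2 = -1
x = gauss_solve([[2.0, 5.0], [4.0, 3.0]], [2.0, 11.0])
assert abs(x[0] - 7 / 2) < 1e-12 and abs(x[1] + 1) < 1e-12
```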
+
+This is also related to the determinant. Consider the case where $m = n$ and $A$ is square. Row operations do not change the determinant, and swapping rows gives a factor of $(-1)$. So
+\[
+ \det A = (-1)^k
+ \begin{vmatrix}
+ A_{11} &A_{12}&\cdots& \cdots& \cdots & A_{1n}\\
+ 0 & A_{22}^{(2)} &\cdots& \cdots & \cdots & A_{2n}^{(2)} \\
+ \vdots & \vdots &\ddots & \vdots & \vdots & \vdots\\
+ 0 & 0 & \cdots & A_{rr}^{(r)} & \cdots & A_{rn}^{(r)}\\
+ 0 & 0 & \cdots & 0 & 0 & \cdots\\
+ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots
+ \end{vmatrix}
+\]
+This determinant is an \emph{upper triangular} one (all elements below diagonal are $0$) and the determinant is the product of its diagonal elements.
+
+Hence if $r < n$ (and $d_i^{(r)} = 0$ for $i > r$), then we have case (iii) and $\det A = 0$. If $r = n$, then $\det A = (-1)^k A_{11}A_{22}^{(2)}\cdots A_{nn}^{(n)} \not= 0$.
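This yields an $O(n^3)$ determinant algorithm: reduce to upper triangular form, tracking row swaps, and take the signed product of the pivots. A sketch, checked against the $3\times 3$ example whose determinant was $-8$:

```python
def det_elim(A):
    # row reduction: adding multiples of rows changes nothing,
    # each swap flips the sign, so det = (+-1) * product of pivots
    M = [row[:] for row in A]
    n, sgn = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return 0                     # a zero column below the diagonal
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sgn = -sgn
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [M[r][c] - f * M[col][c] for c in range(n)]
    prod = sgn
    for i in range(n):
        prod *= M[i][i]
    return prod

A = [[2.0, 4.0, 2.0], [3.0, 2.0, 1.0], [2.0, 0.0, 1.0]]
assert abs(det_elim(A) + 8) < 1e-12
```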
+
+\subsection{Matrix rank}
+Consider a linear map $\alpha: \R^n\to \R^m$. Recall the rank $r(\alpha)$ is the dimension of the image. Suppose that the matrix $A$ is associated with the linear map. We also call $r(A)$ the \emph{rank} of $A$.
+
+Recall that if the standard basis is $\mathbf{e}_1,\cdots \mathbf{e}_n$, then $A\mathbf{e}_1, \cdots, A\mathbf{e}_n$ span the image (but not necessarily linearly independent).
+
+Further, $A\mathbf{e}_1, \cdots, A\mathbf{e}_n$ are the columns of the matrix $A$. Hence $r(A)$ is the number of linearly independent columns.
+
+\begin{defi}[Column and row rank of linear map]
+ The \emph{column rank} of a matrix is the maximum number of linearly independent columns.
+
+ The \emph{row rank} of a matrix is the maximum number of linearly independent rows.
+\end{defi}
+
+\begin{thm}
+ The column rank and row rank are equal for any $m\times n$ matrix.
+\end{thm}
+
+\begin{proof}
+ Let $r$ be the row rank of $A$. Write the biggest set of linearly independent rows as $\mathbf{v}_1^T, \mathbf{v}_2^T, \cdots \mathbf{v}_r^T$ or in component form $\mathbf{v}_k^T = (v_{k1}, v_{k2}, \cdots, v_{kn})$ for $k = 1, 2, \cdots, r$.
+
+ Now denote the $i$th row of $A$ as $\mathbf{r}_i^T = (A_{i1}, A_{i2}, \cdots A_{in})$.
+
+ Note that every row of $A$ can be written as a linear combination of the $\mathbf{v}$'s. (If some $\mathbf{r}_i$ could not be written as such a linear combination, then it would be independent of the $\mathbf{v}$'s, contradicting the maximality of the collection.) Write
+ \[
+ \mathbf{r}_i^T = \sum_{k = 1}^r C_{ik}\mathbf{v}_{k}^T
+ \]
+ for some coefficients $C_{ik}$ with $1 \leq i\leq m$ and $1 \leq k \leq r$.
+
+ Now the elements of $A$ are
+ \[
+ A_{ij} = (\mathbf{r}_i)^T_j = \sum_{k = 1}^r C_{ik}(\mathbf{v}_k)_j,
+ \]
+ or
+ \[
+ \begin{pmatrix}
+ A_{1j}\\
+ A_{2j}\\
+ \vdots\\
+ A_{mj}
+ \end{pmatrix} = \sum_{k = 1}^r (\mathbf{v}_k)_j
+ \begin{pmatrix}
+ C_{1k}\\
+ C_{2k}\\
+ \vdots\\
+ C_{mk}
+ \end{pmatrix}
+ \]
+ So every column of $A$ can be written as a linear combination of the $r$ column vectors $\mathbf{c}_k = (C_{1k}, C_{2k}, \cdots, C_{mk})^T$. Hence the column rank of $A$ is at most $r$, the row rank of $A$.
+
+ Apply the same argument to $A^T$ to see that the row rank is $\leq$ the column rank.
+\end{proof}
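The theorem can be illustrated numerically: compute the number of nonzero rows left after row reduction (the row rank) for both $A$ and $A^T$, and check they agree. The matrix below, with one row a multiple of another, is an arbitrary example:

```python
def rank(A):
    # row rank via elimination: count the pivot rows after reduction
    M = [row[:] for row in A]
    rows, cols = len(M), len(M[0])
    r, col = 0, 0
    while r < rows and col < cols:
        pivot = next((i for i in range(r, rows) if abs(M[i][col]) > 1e-9), None)
        if pivot is None:
            col += 1                     # no pivot in this column
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][col] / M[r][col]
            M[i] = [M[i][c] - f * M[r][c] for c in range(cols)]
        r += 1
        col += 1
    return r

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 0.0, 1.0]]   # row 2 = 2 * row 1
assert rank(A) == rank(transpose(A)) == 2
```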
+
+\subsection{Homogeneous problem \texorpdfstring{$A\mathbf{x} = \mathbf{0}$}{Ax = 0}}
+We restrict our attention to the square case, i.e.\ number of unknowns = number of equations. Here $A$ is an $n\times n$ matrix. We want to solve $A\mathbf{x} = \mathbf{0}$.
+
+First of all, if $\det A\not=0$, then $A^{-1}$ exists and $\mathbf{x} = A^{-1}\mathbf{0} = \mathbf{0}$, which is the unique solution. Hence if $A\mathbf{x} = \mathbf{0}$ with $\mathbf{x} \not= \mathbf{0}$, then $\det A = 0$.
+
+\subsubsection{Geometrical interpretation}
+We consider a $3\times 3$ matrix
+\[
+ A = \begin{pmatrix} \mathbf{r}_1^T\\\mathbf{r}_2^T\\\mathbf{r}_3^T\end{pmatrix}
+\]
+$A\mathbf{x} = \mathbf{0}$ means that $\mathbf{r}_i\cdot \mathbf{x} = 0$ for all $i$. Each equation $\mathbf{r}_i\cdot \mathbf{x} = 0$ represents a plane through the origin. So the solution is the intersection of the three planes.
+
+There are three possibilities:
+\begin{enumerate}
+ \item If $\det A =[\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3] \not= 0$, span$\{\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3\} = \R^3$ and thus $r(A) = 3$. By the rank-nullity theorem, $n(A) = 0$ and the kernel is $\{\mathbf{0}\}$. So $\mathbf{x} = \mathbf{0}$ is the unique solution.
+ \item If $\det A = 0$, then $\dim(\spn\{\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3\}) = 1$ or $2$.
+ \begin{enumerate}
+ \item If rank $= 2$, wlog assume $\mathbf{r}_1, \mathbf{r}_2$ are linearly independent. So $\mathbf{x}$ lies on the intersection of two planes $\mathbf{x}\cdot \mathbf{r}_1 = 0$ and $\mathbf{x}\cdot \mathbf{r}_2 = 0$, which is the line $\{\mathbf{x}\in \R^3: \mathbf{x} = \lambda \mathbf{r}_1\times \mathbf{r}_2\}$ (Since $\mathbf{x}$ lies on the intersection of the two planes, it has to be normal to the normals of both planes). All such points on this line also satisfy $\mathbf{x}\cdot\mathbf{r}_3 = 0$ since $\mathbf{r}_3$ is a linear combination of $\mathbf{r}_1$ and $\mathbf{r}_2$. The kernel is a line, $n(A) = 1$.
+ \item If rank = 1, then $\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3$ are parallel. So $\mathbf{x}\cdot \mathbf{r}_1 = 0 \Rightarrow \mathbf{x}\cdot \mathbf{r}_2 = \mathbf{x}\cdot \mathbf{r}_3 = 0$. So all $\mathbf{x}$ that satisfy $\mathbf{x}\cdot \mathbf{r}_1 = 0$ are in the kernel, and the kernel now is a plane. $n(A) = 2$.
+ \end{enumerate}
+\end{enumerate}
+(We also have the trivial case where $r(A) = 0$, we have the zero mapping and the kernel is $\R^3$)
+
+\subsubsection{Linear mapping view of \texorpdfstring{$A\mathbf{x} = \mathbf{0}$}{Ax = 0}}
+In the general case, consider a linear map $\alpha: \R^n \to \R^n$ $\mathbf{x} \mapsto \mathbf{x}' = A\mathbf{x}$. The kernel $k(A) = \{\mathbf{x}\in \R^n: A\mathbf{x} = \mathbf{0}\}$ has dimension $n(A)$.
+
+\begin{enumerate}
+ \item If $n(A) = 0$, then $A(\mathbf{e}_1), A(\mathbf{e}_2), \cdots, A(\mathbf{e}_n)$ is a linearly independent set, and $r(A) = n$.
+ \item If $n(A) > 0$, then the image is not the whole of $\R^n$. Let $\{\mathbf{u}_i\}, i = 1, \cdots, n(A)$ be a basis of the kernel, i.e.\ so given any solution to $A\mathbf{x} = \mathbf{0}$, $\displaystyle \mathbf{x} = \sum_{i = 1}^{n(A)} \lambda_i \mathbf{u}_i$ for some $\lambda_i$. Extend $\{\mathbf{u}_i\}$ to be a basis of $\R^n$ by introducing extra vectors $\mathbf{u}_{i}$ for $i = n(A) + 1, \cdots, n$. The vectors $A(\mathbf{u}_i)$ for $i = n(A) + 1, \cdots, n$ form a basis of the image.
+\end{enumerate}
+
+\subsection{General solution of \texorpdfstring{$A\mathbf{x} = \mathbf{d}$}{Ax = d}}
+Finally consider the general equation $A\mathbf{x} = \mathbf{d}$, where $A$ is an $n\times n$ matrix and $\mathbf{x}, \mathbf{d}$ are $n \times 1$ column vectors. We can separate into two main cases.
+
+\begin{enumerate}
+ \item $\det(A) \not= 0$. So $A^{-1}$ exists and $n(A) = 0$, $r(A) = n$. Then for any $\mathbf{d}\in \R^n$, a unique solution must exist, namely $\mathbf{x} = A^{-1}\mathbf{d}$.
+ \item $\det(A) = 0$. Then $A^{-1}$ does not exist, and $n(A) > 0$, $r(A) < n$. So the image of $A$ is not the whole of $\R^n$.
+ \begin{enumerate}
+ \item If $\mathbf{d}\not\in \im A$, then there is no solution (by definition of the image)
+ \item If $\mathbf{d}\in \im A$, then by definition there exists at least one $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{d}$. The general solution of $A\mathbf{x} = \mathbf{d}$ can be written as $\mathbf{x} = \mathbf{x}_0 + \mathbf{y}$, where $\mathbf{x}_0$ is a particular solution (i.e.\ $A\mathbf{x}_0 = \mathbf{d}$), and $\mathbf{y}$ is any vector in $\ker A$ (i.e.\ $A\mathbf{y} = \mathbf{0}$). (cf.\ Isomorphism theorem)
+
+ If $n(A) = 0$, then $\mathbf{y = 0}$ only, and then the solution is unique (i.e.\ case (i)). If $n(A) > 0$ , then $\{\mathbf{u}_i\}, i = 1, \cdots, n(A)$ is a basis of the kernel. Hence
+ \[
+ \mathbf{y} = \sum_{j = 1}^{n(A)} \mu_j \mathbf{u}_j,
+ \]
+ so
+ \[
+ \mathbf{x} = \mathbf{x}_0 + \sum_{j = 1}^{n(A)} \mu_j \mathbf{u}_j
+ \]
+ for any $\mu_j$, i.e.\ there are infinitely many solutions.
+ \end{enumerate}
+\end{enumerate}
+\begin{eg}
+ \[
+ \begin{pmatrix}
+ 1 & 1\\
+ a & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ x_1\\
+ x_2
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1\\
+ b
+ \end{pmatrix}
+ \]
+ We have $\det A = 1 - a$. If $a \not= 1$, then $A^{-1}$ exists and
+ \[
+ A^{-1} = \frac{1}{1 - a}\begin{pmatrix}
+ 1 & -1\\
+ -a & 1
+ \end{pmatrix}.
+ \]
+ Then
+ \[
+ \mathbf{x} = \frac{1}{1- a}\begin{pmatrix}
+ 1 - b\\
+ -a + b
+ \end{pmatrix}.
+ \]
+ If $a = 1$, then
+ \[
+ A\mathbf{x} = \begin{pmatrix}
+ x_1 + x_2\\
+ x_1 + x_2
+ \end{pmatrix} = (x_1 + x_2)\begin{pmatrix}
+ 1\\
+ 1
+ \end{pmatrix}.
+ \]
+ So $\im A = \spn\left\{
+ \begin{pmatrix}
+ 1\\1
+ \end{pmatrix}
+ \right\}$ and $\ker A = \spn\left\{
+ \begin{pmatrix}
+ 1\\-1
+ \end{pmatrix}
+ \right\}$. If $b \not=1 $, then $\begin{pmatrix}
+ 1\\b
+ \end{pmatrix}
+ \not\in \im A$ and there is no solution. If $b = 1$, then $
+ \begin{pmatrix}
+ 1\\b
+ \end{pmatrix}
+ \in \im A$.
+
+ A particular solution is
+ $\begin{pmatrix}
+ 1\\
+ 0
+ \end{pmatrix}$. So the general solution is
+ \[
+ \mathbf{x} =
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix}
+ + \lambda
+ \begin{pmatrix}
+ 1\\-1
+ \end{pmatrix}.
+ \]
+\end{eg}
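A quick numerical check of this example, using the illustrative values $a = 2$, $b = 5$ for the invertible case and $a = b = 1$ for the singular one:

```python
import numpy as np

# a != 1: the unique solution agrees with the formula x = A^{-1}(1, b).
a, b = 2.0, 5.0
A = np.array([[1.0, 1.0], [a, 1.0]])
rhs = np.array([1.0, b])
x = np.linalg.solve(A, rhs)
expected = np.array([1 - b, -a + b]) / (1 - a)
assert np.allclose(x, expected)

# a = 1, b = 1: every x0 + lambda * (1, -1) solves the system.
A1 = np.array([[1.0, 1.0], [1.0, 1.0]])
for lam in (-1.0, 0.0, 2.0):
    sol = np.array([1.0, 0.0]) + lam * np.array([1.0, -1.0])
    assert np.allclose(A1 @ sol, np.array([1.0, 1.0]))
```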
+
+\begin{eg}
+ Find the general solution of
+ \[
+ \begin{pmatrix}
+ a & a & b\\
+ b & a & a\\
+ a & b & a
+ \end{pmatrix}
+ \begin{pmatrix}
+ x\\y\\z
+ \end{pmatrix}
+ =\begin{pmatrix}
+ 1\\c\\1
+ \end{pmatrix}
+ \]
+ We have $\det A = (a - b)^2 (2a + b)$. If $a \not= b$ and $b \not= -2a$, then the inverse exists and there is a unique solution for any $c$. Otherwise, the possible cases are
+ \begin{enumerate}
+ \item $a = b, b \not= -2a$. So $a\not= 0$. The kernel is the plane $x + y + z = 0$ which is $\spn\left\{
+ \begin{pmatrix}
+ -1\\1\\0
+ \end{pmatrix},
+ \begin{pmatrix}
+ -1\\ 0\\ 1
+ \end{pmatrix}\right\}$.
+ We extend this basis to $\R^3$ by adding $
+ \begin{pmatrix}
+ 1\\0\\0
+ \end{pmatrix}$.
+
+ So the image is the span of $
+ \begin{pmatrix}
+ a\\a\\a
+ \end{pmatrix}$, i.e.\ of $
+ \begin{pmatrix}
+ 1\\1\\1
+ \end{pmatrix}$ (since $a \not= 0$). Hence if $c\not= 1$, then $
+ \begin{pmatrix}
+ 1\\c\\1
+ \end{pmatrix}$ is not in the image and there is no solution. If $c = 1$, then a particular solution is $
+ \begin{pmatrix}
+ \frac{1}{a}\\0\\0
+ \end{pmatrix}$
+ and the general solution is
+ \[
+ \mathbf{x} =
+ \begin{pmatrix}
+ \frac{1}{a}\\0\\0
+ \end{pmatrix} + \lambda
+ \begin{pmatrix}
+ -1\\1\\0
+ \end{pmatrix} + \mu
+ \begin{pmatrix}
+ -1\\0\\1
+ \end{pmatrix}
+ \]
+ \item If $a\not= b$ and $b = -2a$, then $a \not= 0$. The kernel satisfies
+ \begin{align*}
+ x + y - 2z &= 0\\
+ -2x + y + z &= 0\\
+ x - 2y + z &= 0
+ \end{align*}
+ This can be solved to give $x = y = z$, and the kernel is $\spn\left\{
+ \begin{pmatrix}
+ 1\\1\\1
+ \end{pmatrix}\right\}$. We add $
+ \begin{pmatrix}
+ 1\\0\\0
+ \end{pmatrix}$ and $
+ \begin{pmatrix}
+ 0\\0\\1
+ \end{pmatrix}$ to form a basis of $\R^3$. So the image is the span of $
+ \begin{pmatrix}
+ 1\\-2\\1
+ \end{pmatrix},
+ \begin{pmatrix}
+ -2\\1\\1
+ \end{pmatrix}$.
+
+ If $
+ \begin{pmatrix}
+ 1\\c\\1
+ \end{pmatrix}$ is in the image, then
+ \[
+ \begin{pmatrix}
+ 1\\c\\1
+ \end{pmatrix} = \lambda
+ \begin{pmatrix}
+ 1\\-2\\1
+ \end{pmatrix}
+ + \mu
+ \begin{pmatrix}
+ -2\\1\\1
+ \end{pmatrix}.
+ \]
+ Comparing the first and third components gives $\mu = 0$ and $\lambda = 1$, which forces $c = -2$. Thus there is no solution if $c \not= -2$, and when $c = -2$, pick a particular solution $
+ \begin{pmatrix}
+ \frac{1}{a}\\0\\0
+ \end{pmatrix}$ and the general solution is
+ \[
+ \mathbf{x} = \begin{pmatrix}
+ \frac{1}{a}\\0\\0
+ \end{pmatrix} +\lambda
+ \begin{pmatrix}
+ 1\\1\\1
+ \end{pmatrix}
+ \]
+ \item If $a = b$ and $b = -2a$, then $a = b = 0$ and $\ker A = \R^3$. So there is no solution for any $c$.
+ \end{enumerate}
+ \end{eg}
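The determinant formula $\det A = (a - b)^2(2a + b)$ can be spot-checked numerically for a few arbitrary values of $a$ and $b$:

```python
import numpy as np

# Spot-check det A = (a - b)^2 (2a + b) on sample values.
for a, b in [(1.0, 2.0), (3.0, -1.0), (0.5, 0.25)]:
    A = np.array([[a, a, b],
                  [b, a, a],
                  [a, b, a]])
    assert np.isclose(np.linalg.det(A), (a - b)**2 * (2 * a + b))
```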
+
+\section{Eigenvalues and eigenvectors}
+\label{sec:eigen}
+Given a matrix $A$, an eigenvector is a vector $\mathbf{x}$ that satisfies $A\mathbf{x} = \lambda \mathbf{x}$ for some $\lambda$. We call $\lambda$ the associated eigenvalue. In some sense, the direction of such a vector is unchanged by the matrix; it is merely scaled by $\lambda$. We will look at the properties of eigenvectors and eigenvalues, and see their importance in diagonalizing matrices.
+
+\subsection{Preliminaries and definitions}
+\begin{thm}[Fundamental theorem of algebra]
+ Let $p(z)$ be a polynomial of degree $m \geq 1$, i.e.
+ \[
+ p(z) = \sum_{j = 0}^m c_jz^j,
+ \]
+ where $c_j\in \C$ and $c_m \not= 0$.
+
+ Then $p(z) = 0$ has precisely $m$ (not necessarily distinct) roots in the complex plane, accounting for multiplicity.
+\end{thm}
+Note that we have the disclaimer ``accounting for multiplicity''. For example, $x^2 - 2x + 1 = 0$ has only one distinct root, $1$, but we say that this root has multiplicity 2, and is thus counted twice. Formally, multiplicity is defined as follows:
+
+\begin{defi}[Multiplicity of root]
+ The root $z = \omega$ has \emph{multiplicity} $k$ if $(z - \omega)^k$ is a factor of $p(z)$ but $(z - \omega)^{k + 1}$ is not.
+\end{defi}
+
+\begin{eg}
+ Let $p(z) = z^3 - z^2 - z + 1 = (z - 1)^2(z + 1)$. So $p(z) = 0$ has roots $1, 1, -1$, where $z = 1$ has multiplicity $2$.
+\end{eg}
+
+\begin{defi}[Eigenvector and eigenvalue]
+ Let $\alpha: \C^n\to \C^n$ be a linear map with associated matrix $A$. Then $\mathbf{x}\not= \mathbf{0}$ is an \emph{eigenvector} of $A$ if
+ \[
+ A\mathbf{x} = \lambda\mathbf{x}
+ \]
+ for some $\lambda$. $\lambda$ is the associated \emph{eigenvalue}. This means that the direction of the eigenvector is preserved by the mapping, but is scaled up by $\lambda$.
+\end{defi}
+
+There is a rather easy way of finding eigenvalues:
+\begin{thm}
+ $\lambda$ is an eigenvalue of $A$ iff
+ \[
+ \det(A - \lambda I) = 0.
+ \]
+\end{thm}
+
+\begin{proof}
+ $(\Rightarrow)$ Suppose that $\lambda$ is an eigenvalue and $\mathbf{x}$ is the associated eigenvector. We can rearrange the equation in the definition above to
+ \[
+ (A - \lambda I)\mathbf{x} = \mathbf{0}
+ \]
+ and thus
+ \[
+ \mathbf{x}\in \ker(A - \lambda I)
+ \]
+ But $\mathbf{x}\not= \mathbf{0}$. So $\ker(A - \lambda I)$ is non-trivial and $\det(A - \lambda I) = 0$. The $(\Leftarrow)$ direction is similar.
+\end{proof}
+
+\begin{defi}[Characteristic equation of matrix]
+ The \emph{characteristic equation} of $A$ is
+ \[
+ \det(A - \lambda I) = 0.
+ \]
+\end{defi}
+
+\begin{defi}[Characteristic polynomial of matrix]
+ The \emph{characteristic polynomial} of $A$ is
+ \[
+ p_A(\lambda) = \det(A - \lambda I).
+ \]
+\end{defi}
+
+From the definition of the determinant,
+\begin{align*}
+ p_A(\lambda) &= \det(A - \lambda I)\\
+ &= \varepsilon_{j_1j_2\cdots j_n} (A_{j_1 1} - \lambda\delta_{j_11})\cdots (A_{j_n n} - \lambda\delta_{j_nn})\\
+ &= c_0 + c_1\lambda + \cdots + c_n\lambda^n
+\end{align*}
+for some constants $c_0, \cdots, c_n$. From this, we see that
+\begin{enumerate}
+ \item $p_A(\lambda)$ has degree $n$ and has $n$ roots. So an $n\times n$ matrix has $n$ eigenvalues (accounting for multiplicity).
+ \item If $A$ is real, then all $c_i\in \R$. So eigenvalues are either real or come in complex conjugate pairs.
+ \item $c_n = (-1)^n$ and $c_{n - 1} = (-1)^{n - 1}(A_{11} + A_{22} + \cdots + A_{nn}) = (-1)^{n - 1}\tr(A)$. But comparing with the factorized form of $p_A$, the coefficient $c_{n - 1}$ is $(-1)^{n - 1}$ times the sum of the roots, i.e.\ $c_{n - 1}= (-1)^{n - 1}(\lambda_1 + \lambda_2 + \cdots + \lambda_n)$, so
+ \[
+ \tr(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n.
+ \]
+ Finally, $c_0 = p_A(0) = \det(A)$. Also $c_0$ is the product of all roots, i.e.\ $c_0 = \lambda_1\lambda_2\cdots \lambda_n$. So
+ \[
+ \det A = \lambda_1\lambda_2\cdots \lambda_n.
+ \]
+\end{enumerate}
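The trace and determinant identities in (iii) can be verified numerically for any example matrix (the one below is arbitrary, not from the notes):

```python
import numpy as np

# Verify tr(A) = sum of eigenvalues and det(A) = product of eigenvalues.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0],
              [0.0, 2.0, 2.0]])
eigvals = np.linalg.eigvals(A)
# For a real matrix the sum and product of eigenvalues are real, even if
# individual eigenvalues come in complex conjugate pairs.
assert np.isclose(np.trace(A), eigvals.sum().real)
assert np.isclose(np.linalg.det(A), np.prod(eigvals).real)
```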
+
+The kernel of the matrix $A - \lambda I$ is the set $\{\mathbf{x}: A\mathbf{x} = \lambda\mathbf{x}\}$. This is a vector subspace because the kernel of any map is always a subspace.
+
+\begin{defi}[Eigenspace]
+ The \emph{eigenspace} denoted by $E_\lambda$ is the kernel of the matrix $A - \lambda I$, i.e.\ the set of eigenvectors with eigenvalue $\lambda$, together with $\mathbf{0}$.
+\end{defi}
+
+\begin{defi}[Algebraic multiplicity of eigenvalue]
+ The \emph{algebraic multiplicity} $M(\lambda)$ or $M_\lambda$ of an eigenvalue $\lambda$ is the multiplicity of $\lambda$ in $p_A(\lambda) = 0$. By the fundamental theorem of algebra,
+ \[
+ \sum_\lambda M(\lambda) = n.
+ \]
+ If $M(\lambda) > 1$, then the eigenvalue is \emph{degenerate}.
+\end{defi}
+
+\begin{defi}[Geometric multiplicity of eigenvalue]
+ The \emph{geometric multiplicity} $m(\lambda)$ or $m_\lambda$ of an eigenvalue $\lambda$ is the dimension of the eigenspace, i.e.\ the maximum number of linearly independent eigenvectors with eigenvalue $\lambda$.
+\end{defi}
+
+\begin{defi}[Defect of eigenvalue]
+ The \emph{defect} $\Delta_\lambda$ of eigenvalue $\lambda$ is
+ \[
+ \Delta_\lambda = M(\lambda) - m(\lambda).
+ \]
+ It can be proven that $\Delta_\lambda \geq 0$, i.e.\ the geometric multiplicity is never greater than the algebraic multiplicity.
+\end{defi}
+
+\subsection{Linearly independent eigenvectors}
+\begin{thm}
+ Suppose $n\times n$ matrix $A$ has \emph{distinct} eigenvalues $\lambda_1, \lambda_2, \cdots, \lambda_n$. Then the corresponding eigenvectors $\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n$ are linearly independent.
+\end{thm}
+
+\begin{proof}
+ Proof by contradiction: Suppose $\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n$ are linearly dependent. Then we can find non-zero constants $d_i$ for $i = 1, 2, \cdots, r$ (for some $r \leq n$), such that
+ \[
+ d_1\mathbf{x}_1 + d_2\mathbf{x}_2 + \cdots + d_r\mathbf{x}_r = \mathbf{0}.
+ \]
+ Suppose that this is the shortest non-trivial linear combination that gives $\mathbf{0}$ (we may need to re-order $\mathbf{x}_i$).
+
+ Now apply $(A - \lambda_1 I)$ to the whole equation to obtain
+ \[
+ d_1(\lambda_1 - \lambda_1)\mathbf{x}_1 + d_2(\lambda_2 - \lambda_1)\mathbf{x}_2 + \cdots + d_r(\lambda_r - \lambda_1)\mathbf{x}_r = \mathbf{0}.
+ \]
+ We know that the first term is $\mathbf{0}$, while the others are not (since we assumed $\lambda_i \not= \lambda_j$ for $i\not= j$). So
+ \[
+ d_2(\lambda_2 - \lambda_1)\mathbf{x}_2 + \cdots + d_r(\lambda_r - \lambda_1)\mathbf{x}_r = \mathbf{0},
+ \]
+ and we have found a shorter non-trivial linear combination that gives $\mathbf{0}$. Contradiction.
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $A = \begin{pmatrix} 0 & 1\\
+ -1 & 0
+ \end{pmatrix}$. Then $p_A(\lambda) = \lambda^2 + 1 = 0$. So $\lambda_1 = i$ and $\lambda_2 = -i$.
+
+ To solve $(A - \lambda_1 I)\mathbf{x} = \mathbf{0}$, we obtain
+ \[
+ \begin{pmatrix}
+ -i & 1\\-1 & -i
+ \end{pmatrix}
+ \begin{pmatrix}
+ x_1\\x_2
+ \end{pmatrix}
+ = \mathbf{0}.
+ \]
+ So we obtain
+ \[
+ \begin{pmatrix}
+ x_1\\x_2
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1\\i
+ \end{pmatrix}
+ \]
+ to be an eigenvector. Clearly any scalar multiple of $\begin{pmatrix}
+ 1\\i
+ \end{pmatrix}$ is also a solution, but still in the same eigenspace $E_i = \spn\left\{\begin{pmatrix}
+ 1\\i
+ \end{pmatrix}\right\}$.
+
+ Solving $(A - \lambda_2I)\mathbf{x} = \mathbf{0}$ gives
+ \[
+ \begin{pmatrix}
+ x_1\\x_2
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1\\-i
+ \end{pmatrix}.
+ \]
+ So $E_{-i} = \spn\left\{
+ \begin{pmatrix}
+ 1\\-i
+ \end{pmatrix}\right\}$.
+
+ Note that $M(\pm i) = m(\pm i) = 1$, so $\Delta_{\pm i} = 0$. Also note that the two eigenvectors are linearly independent and form a basis of $\C^2$.
+ \item Consider
+ \[
+ A = \begin{pmatrix}
+ -2 & 2 & -3\\
+ 2 & 1 & -6\\
+ -1 & -2 & 0
+ \end{pmatrix}
+ \]
+ Then $\det(A - \lambda I) = 0$ gives $45 + 21\lambda - \lambda^2 - \lambda^3 = 0$. So $\lambda_1 = 5, \lambda_2 = \lambda_3 = -3$.
+
+ The eigenvector with eigenvalue $5$ is
+ \[
+ \mathbf{x} =
+ \begin{pmatrix}
+ 1\\2\\-1
+ \end{pmatrix}
+ \]
+ We can find that the eigenvectors with eigenvalue $-3$ are
+ \[
+ \mathbf{x} =
+ \begin{pmatrix}
+ -2x_2 + 3x_3\\x_2\\x_3
+ \end{pmatrix}
+ \]
+ for any $x_2, x_3$. This gives two linearly independent eigenvectors, say $
+ \begin{pmatrix}
+ -2\\1\\0
+ \end{pmatrix},
+ \begin{pmatrix}
+ 3\\0\\1
+ \end{pmatrix}$.
+
+ So $M(5) = m(5) = 1$ and $M(-3) = m(-3) = 2$, and there is no defect for both of them. Note that these three eigenvectors form a basis of $\C^3$.
+ \item Let
+ \[
+ A = \begin{pmatrix}
+ -3&-1&1\\
+ -1 & -3 & 1\\
+ -2 & -2 & 0
+ \end{pmatrix}
+ \]
+ Then $0 = p_A(\lambda) = -(\lambda+2)^3$. So $\lambda = -2, -2, -2$. To find the eigenvectors, we have
+ \[
+ (A + 2I)\mathbf{x} =
+ \begin{pmatrix}
+ -1&-1&1\\
+ -1 & -1 & 1\\
+ -2 & -2 & 2
+ \end{pmatrix}
+ \begin{pmatrix}
+ x_1\\x_2\\x_3
+ \end{pmatrix}
+ = \mathbf{0}
+ \]
+ The equations all reduce to $x_1 + x_2 - x_3 = 0$, so the general solution is $\mathbf{x} =
+ \begin{pmatrix}
+ x_1\\x_2\\x_1 + x_2
+ \end{pmatrix}$. The eigenspace is $E_{-2} = \spn\left\{
+ \begin{pmatrix}
+ 1\\0\\1
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0\\1\\1
+ \end{pmatrix}\right\}$.
+
+ Hence $M(-2) = 3$ and $m(-2) = 2$. Thus the defect $\Delta_{-2} = 1$. So the eigenvectors do not form a basis of $\C^3$.
+ \item Consider the reflection $R$ in the plane with normal $\mathbf{n}$. Clearly $R\mathbf{n} = -\mathbf{n}$. So $-1$ is an eigenvalue with eigenvector $\mathbf{n}$, and $E_{-1} = \spn\{\mathbf{n}\}$. So $M(-1) = m(-1) = 1$.
+
+ If $\mathbf{p}$ is any vector in the plane, $R\mathbf{p} = \mathbf{p}$. So this has an eigenvalue of $1$ and eigenvectors being any vector in the plane. So $M(1) = m(1) = 2$.
+
+ So the eigenvectors form a basis of $\R^3$.
+ \item Consider a rotation $R$ by $\theta$ about $\mathbf{n}$. Since $R\mathbf{n} = \mathbf{n}$, we have an eigenvalue of $1$ and eigenspace $E_1 = \spn\{\mathbf{n}\}$.
+
+ We know that there are no other real eigenvalues since rotation changes the direction of any other vector. The other eigenvalues turn out to be $e^{\pm i\theta}$. If $\theta \not= 0, \pi$, there are 3 distinct eigenvalues and the eigenvectors form a basis of $\C^3$.
+ \item Consider a shear
+ \[
+ A =
+ \begin{pmatrix}
+ 1&\mu\\0&1
+ \end{pmatrix}
+ \]
+ The characteristic equation is $(1 - \lambda)^2 = 0$, so $\lambda = 1$. The eigenvectors corresponding to $\lambda = 1$ are the multiples of $\mathbf{x} =
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix}$. We have $M(1) = 2$ and $m(1) = 1$. So $\Delta_1 = 1$.
+ \end{enumerate}
+\end{eg}
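The eigenvalues and eigenvectors claimed in example (ii) are easy to confirm numerically:

```python
import numpy as np

# Example (ii): eigenvalues 5, -3, -3 with the stated eigenvectors.
A = np.array([[-2.0, 2.0, -3.0],
              [2.0, 1.0, -6.0],
              [-1.0, -2.0, 0.0]])
eigvals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigvals, [-3.0, -3.0, 5.0])

v5 = np.array([1.0, 2.0, -1.0])       # eigenvector for eigenvalue 5
assert np.allclose(A @ v5, 5.0 * v5)
for v in (np.array([-2.0, 1.0, 0.0]),  # two independent eigenvectors
          np.array([3.0, 0.0, 1.0])):  # for eigenvalue -3
    assert np.allclose(A @ v, -3.0 * v)
```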
+If $n\times n$ matrix $A$ has $n$ distinct eigenvalues, and hence has $n$ linearly independent eigenvectors $\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_n$, then \emph{with respect to this eigenvector basis}, $A$ is diagonal.
+
+In this basis, $\mathbf{v}_1 = (1, 0, \cdots, 0)$ etc. We know that $A\mathbf{v}_i = \lambda_i\mathbf{v}_i$ (no summation). So the image of the $i$th basis vector is $\lambda_i$ times the $i$th basis vector. Since the columns of $A$ are simply the images of the basis vectors,
+\[
+ \begin{pmatrix}
+ \lambda_1 & 0 & \cdots & 0\\
+ 0 & \lambda_2 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & \lambda_n
+ \end{pmatrix}
+\]
+The fact that $A$ can be diagonalized by changing the basis is an important observation. We will now look at how we can change bases and see how we can make use of this.
+
+\subsection{Transformation matrices}
+How do the components of a vector or a matrix change when we change the basis?
+
+Let $\{\mathbf{e}_1, \mathbf{e}_2, \cdots, \mathbf{e}_n\}$ and $\{\tilde{\mathbf{e}}_1, \tilde{\mathbf{e}}_2,\cdots, \tilde{\mathbf{e}}_n\}$ be 2 different bases of $\R^n$ or $\C^n$. Then we can write
+\[
+ \tilde{\mathbf{e}_j} = \sum_{i = 1}^n P_{ij}\mathbf{e}_i
+\]
+i.e.\ $P_{ij}$ is the $i$th component of $\tilde{\mathbf{e}_j}$ with respect to the basis $\{\mathbf{e}_1, \mathbf{e}_2, \cdots, \mathbf{e}_n\}$. Note that the sum is made as $P_{ij}\mathbf{e}_i$, not $P_{ij}\mathbf{e}_j$. This is different from the formula for matrix multiplication.
+
+Matrix $P$ has as its columns the vectors $\tilde{\mathbf{e}_j}$ relative to $\{\mathbf{e}_1, \mathbf{e}_2, \cdots, \mathbf{e}_n\}$. So $P = (\tilde{\mathbf{e}_1}\; \tilde{\mathbf{e}_2}\; \cdots \; \tilde{\mathbf{e}_n})$ and
+\[
+ P(\mathbf{e}_i) = \tilde{\mathbf{e}_i}
+\]
+Similarly, we can write
+\[
+ \mathbf{e}_i = \sum_{k = 1}^nQ_{ki} \tilde{\mathbf{e}_k}
+\]
+where the columns of $Q$ are the vectors $\mathbf{e}_i$ relative to the basis $\{\tilde{\mathbf{e}}_1, \tilde{\mathbf{e}}_2, \cdots, \tilde{\mathbf{e}}_n\}$.
+
+Substituting this into the equation for $\tilde{\mathbf{e}_j}$, we have
+\begin{align*}
+ \tilde{\mathbf{e}_j} &= \sum_{i = 1}^n\left(\sum_{k = 1}^{n} Q_{ki}\tilde{\mathbf{e}_k}\right)P_{ij}\\
+ &= \sum_{k = 1}^n \tilde{\mathbf{e}_k} \left(\sum_{i = 1}^n Q_{ki}P_{ij}\right)
+\end{align*}
+But $\tilde{\mathbf{e}}_1, \tilde{\mathbf{e}}_2,\cdots, \tilde{\mathbf{e}}_n$ are linearly independent, so this is only possible if
+\[
+ \sum_{i = 1}^n Q_{ki}P_{ij} = \delta_{kj},
+\]
+which is just a fancy way of saying $QP = I$, or $Q = P^{-1}$.
+\subsubsection{Transformation law for vectors}
+With respect to basis $\left\{\mathbf{e}_i\right\}$, $\mathbf{u} = \sum_{i = 1}^n u_i\mathbf{e}_i$.
+With respect to basis $\left\{\tilde{\mathbf{e}_i}\right\}$, $\mathbf{u} = \sum_{i = 1}^n \tilde{u_i}\tilde{\mathbf{e}_i}$. Note that this is the \emph{same} vector $\mathbf{u}$ but has different components with respect to different bases. Using the transformation matrix above for the basis, we have
+\begin{align*}
+ \mathbf{u} &= \sum_{j= 1}^n \tilde{u_j} \sum_{i = 1}^{n}P_{ij}\mathbf{e}_i\\
+ &= \sum_{i = 1}^n \left(\sum_{j = 1}^n P_{ij}\tilde{u_j}\right) \mathbf{e}_i
+\end{align*}
+By comparison, we know that
+\[
+ u_i = \sum_{j = 1}^n P_{ij}\tilde{u_j}
+\]
+
+\begin{thm}
+ Denote vector as $\mathbf{u}$ with respect to $\{\mathbf{e}_i\}$ and $\tilde{\mathbf{u}}$ with respect to $\{\tilde{\mathbf{e}_i}\}$. Then
+ \[
+ \mathbf{u} = P\mathbf{\tilde{u}}\text{ and }\mathbf{\tilde{u}} = P^{-1}\mathbf{u}
+ \]
+\end{thm}
+
+\begin{eg}
+ Take the first basis as $\{\mathbf{e}_1 = (1, 0), \mathbf{e}_2 = (0, 1)\}$ and the second as $\{\tilde{\mathbf{e}_1} = (1, 1), \tilde{\mathbf{e}_2} = (-1, 1)\}$.
+
+ So $\tilde{\mathbf{e}_1} = \mathbf{e}_1 + \mathbf{e}_2$ and $\tilde{\mathbf{e}_2} = -\mathbf{e}_1 + \mathbf{e}_2$. We have
+ \[
+ P =
+ \begin{pmatrix}
+ 1 & -1\\
+ 1 & 1
+ \end{pmatrix}.
+ \]
+ Then for an arbitrary vector $\mathbf{u}$, we have
+ \begin{align*}
+ \mathbf{u}&= u_1\mathbf{e}_1 + u_2\mathbf{e}_2\\
+ &= u_1\frac{1}{2}(\tilde{\mathbf{e}_1} - \tilde{\mathbf{e}_2}) + u_2\frac{1}{2}(\tilde{\mathbf{e}_1} + \tilde{\mathbf{e}_2})\\
+ &= \frac{1}{2}(u_1 + u_2)\tilde{\mathbf{e}_1} + \frac{1}{2}(-u_1 + u_2)\tilde{\mathbf{e}_2}.
+ \end{align*}
+ Alternatively, using the formula above, we obtain
+ \begin{align*}
+ \mathbf{\tilde{u}} &= P^{-1} \mathbf{u}\\
+ &= \frac{1}{2}
+ \begin{pmatrix}
+ 1&1\\-1&1
+ \end{pmatrix}
+ \begin{pmatrix}
+ u_1\\u_2
+ \end{pmatrix}\\
+ &=
+ \begin{pmatrix}
+ \frac{1}{2}(u_1 + u_2)\\
+ \frac{1}{2}(-u_1 + u_2)
+ \end{pmatrix}
+ \end{align*}
+ which agrees with the direct expansion above.
+\end{eg}
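The same computation can be checked with numpy, solving $P\tilde{\mathbf{u}} = \mathbf{u}$ for the sample components $u_1 = 3$, $u_2 = 7$ (values chosen arbitrarily):

```python
import numpy as np

# Transformation law for vectors: u = P u~, so u~ = P^{-1} u.
P = np.array([[1.0, -1.0],
              [1.0, 1.0]])
u = np.array([3.0, 7.0])          # components w.r.t. {e_i}
u_tilde = np.linalg.solve(P, u)   # components w.r.t. {e~_i}
# Matches the formula (u1 + u2)/2, (-u1 + u2)/2 derived above.
assert np.allclose(u_tilde, [(3 + 7) / 2, (-3 + 7) / 2])
assert np.allclose(P @ u_tilde, u)
```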
+\subsubsection{Transformation law for matrix}
+Consider a linear map $\alpha: \C^n \to \C^n$ with associated $n\times n$ matrix $A$. We have
+\[
+ \mathbf{u}' = \alpha(\mathbf{u}) = A\mathbf{u}.
+\]
+Denote $\mathbf{u}$ and $\mathbf{u}'$ as being with respect to basis $\{\mathbf{e}_i\}$ (i.e.\ same basis in both spaces), and $\mathbf{\tilde{u}, \tilde{u}'}$ with respect to $\{\tilde{\mathbf{e}_i}\}$.
+
+Using what we've got above, we have
+\begin{align*}
+ \mathbf{u}' &= A\mathbf{u}\\
+ P\mathbf{\tilde{u}'} &= AP\tilde{\mathbf{u}}\\
+ \mathbf{\tilde{u}'} &= P^{-1}AP\mathbf{\tilde{u}}\\
+ &= \tilde{A}\tilde{\mathbf{u}}
+\end{align*}
+So
+\begin{thm}
+ \[
+ \tilde{A} = P^{-1}AP.
+ \]
+\end{thm}
+
+\begin{eg}
+ Consider the shear $S_\lambda =
+ \begin{pmatrix}
+ 1 & \lambda & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}$ with respect to the standard basis. Choose a new set of basis vectors by rotating by $\theta$ about the $\mathbf{e}_3$ axis:
+ \begin{align*}
+ \tilde{\mathbf{e}_1} &= \cos\theta \mathbf{e}_1 + \sin\theta \mathbf{e}_2\\
+ \tilde{\mathbf{e}_2} &= -\sin\theta \mathbf{e}_1 + \cos\theta \mathbf{e}_2\\
+ \tilde{\mathbf{e}_3} &= \mathbf{e}_3
+ \end{align*}
+ So we have
+ \[
+ P =
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta & 0\\
+ \sin\theta & \cos\theta & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}, P^{-1} =
+ \begin{pmatrix}
+ \cos\theta & \sin\theta & 0\\
+ -\sin\theta & \cos\theta & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \]
+ Now use the basis transformation laws to obtain
+ \[
+ \tilde{S}_\lambda =
+ \begin{pmatrix}
+ 1 + \lambda\sin\theta\cos\theta & \lambda \cos^2\theta & 0\\
+ -\lambda \sin^2\theta & 1 - \lambda\sin\theta\cos\theta & 0\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \]
+ Clearly this is much more complicated than in our original basis. This shows that choosing a sensible basis is important.
+\end{eg}
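A numerical check of $\tilde{S}_\lambda = P^{-1}S_\lambda P$, with arbitrary sample values of $\lambda$ and $\theta$:

```python
import numpy as np

# Shear in a rotated basis: S~ = P^{-1} S P.
lam, theta = 0.7, 0.3
c, s = np.cos(theta), np.sin(theta)
S = np.array([[1.0, lam, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
P = np.array([[c, -s, 0.0],
              [s, c, 0.0],
              [0.0, 0.0, 1.0]])
S_tilde = np.linalg.inv(P) @ S @ P
expected = np.array([[1 + lam * s * c, lam * c**2, 0.0],
                     [-lam * s**2, 1 - lam * s * c, 0.0],
                     [0.0, 0.0, 1.0]])
assert np.allclose(S_tilde, expected)
```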
+
+More generally, consider $\alpha: \C^m \to \C^n$ mapping $\mathbf{x}\in \C^m \mapsto \mathbf{x}' = A\mathbf{x}\in \C^n$, where $A$ is an $n\times m$ matrix.
+
+Suppose $\C^m$ has a basis $\{\mathbf{e}_i\}$ and $\C^n$ has a basis $\{\mathbf{f}_i\}$. Now change bases to $\{\tilde{\mathbf{e}_i}\}$ and $\{\tilde{\mathbf{f}_i}\}$.
+
+We know that $\mathbf{x} = P\mathbf{\tilde{x}}$ with $P$ an $m\times m$ matrix, and $\mathbf{x}' = R\tilde{\mathbf{x}}'$ with $R$ an $n\times n$ matrix.
+
+Combining both of these, we have
+\begin{align*}
+ R\tilde{\mathbf{x}}' &= AP\tilde{\mathbf{x}}\\
+ \tilde{\mathbf{x}}' &= R^{-1}AP\mathbf{\tilde{x}}
+\end{align*}
+Therefore $\tilde{A} = R^{-1}AP$.
+
+\begin{eg}
+ Consider $\alpha: \R^3 \to \R^2$, with respect to the standard bases in both spaces,
+ \[
+ A =
+ \begin{pmatrix}
+ 2 & 3 & 4\\
+ 1 & 6 & 3
+ \end{pmatrix}
+ \]
+ Use a new basis $
+ \begin{pmatrix}
+ 2\\1
+ \end{pmatrix},
+ \begin{pmatrix}
+ 1\\5
+ \end{pmatrix}$ in $\R^2$ and keep the standard basis in $\R^3$. The basis change matrix in $\R^3$ is simply $I$, while
+ \[
+ R =
+ \begin{pmatrix}
+ 2& 1\\
+ 1 & 5
+ \end{pmatrix}, R^{-1} = \frac{1}{9}
+ \begin{pmatrix}
+ 5 & -1\\
+ -1 & 2
+ \end{pmatrix}
+ \]
+ is the transformation matrix for $\R^2$. So
+ \begin{align*}
+ \tilde{A} &= R^{-1}AP\\
+ &= \frac{1}{9}
+ \begin{pmatrix}
+ 5 & -1\\
+ -1 & 2
+ \end{pmatrix}
+ \begin{pmatrix}
+ 2 & 3 & 4\\1 & 6 & 3
+ \end{pmatrix}\\
+ &=
+ \begin{pmatrix}
+ 1 & 1 & 17/9\\
+ 0 & 1 & 2/9
+ \end{pmatrix}
+ \end{align*}
+ We can alternatively do it this way: we know that $\tilde{\mathbf{f}_1} =
+ \begin{pmatrix}
+ 2\\1
+ \end{pmatrix}, \tilde{\mathbf{f}_2} =
+ \begin{pmatrix}
+ 1\\5
+ \end{pmatrix}$
+ Then we know that
+ \begin{align*}
+ \tilde{\mathbf{e}_1} = \mathbf{e}_1 &\mapsto 2\mathbf{f}_1 + \mathbf{f}_2 = \tilde{\mathbf{f}_1}\\
+ \tilde{\mathbf{e}_2} = \mathbf{e}_2 &\mapsto 3\mathbf{f}_1 + 6\mathbf{f}_2 = \tilde{\mathbf{f}_1} + \tilde{\mathbf{f}_2}\\
+ \tilde{\mathbf{e}_3} = \mathbf{e}_3 &\mapsto 4\mathbf{f}_1 + 3\mathbf{f}_2 = \frac{17}{9} \tilde{\mathbf{f}_1} + \frac{2}{9}\tilde{\mathbf{f}_2}
+ \end{align*}
+ and we can construct the matrix correspondingly.
+\end{eg}
+\subsection{Similar matrices}
+\begin{defi}[Similar matrices]
+ Two $n\times n$ matrices $A$ and $B$ are \emph{similar} if there exists an invertible matrix $P$ such that
+ \[
+ B = P^{-1}AP,
+ \]
+ i.e.\ they represent the same map under different bases. Alternatively, using the language from IA Groups, we say that they are in the same conjugacy class.
+\end{defi}
+
+\begin{prop}
+ Similar matrices have the following properties:
+ \begin{enumerate}
+ \item Similar matrices have the same determinant.
+ \item Similar matrices have the same trace.
+ \item Similar matrices have the same characteristic polynomial.
+ \end{enumerate}
+\end{prop}
+Note that (iii) implies (i) and (ii) since the determinant and trace are (up to sign) coefficients of the characteristic polynomial.
+
+\begin{proof}
+ They are proven as follows:
+ \begin{enumerate}
+ \item $\det B =\det (P^{-1}AP) = (\det A) (\det P)^{-1} (\det P) = \det A$
+ \item
+ \begin{align*}
+ \tr B &= B_{ii}\\
+ &= P_{ij}^{-1} A_{jk} P_{ki}\\
+ &= A_{jk} P_{ki}P_{ij}^{-1}\\
+ &= A_{jk}(PP^{-1})_{kj}\\
+ &= A_{jk}\delta_{kj}\\
+ &= A_{jj}\\
+ &= \tr A
+ \end{align*}
+ \item
+ \begin{align*}
+ p_B(\lambda) &= \det(B - \lambda I)\\
+ &= \det(P^{-1}AP - \lambda I)\\
+ &= \det(P^{-1}AP - \lambda P^{-1}IP)\\
+ &= \det(P^{-1}(A - \lambda I)P)\\
+ &= \det(A - \lambda I)\\
+ &= p_A(\lambda)\qedhere
+ \end{align*}%\qedhere
+ \end{enumerate}
+\end{proof}
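These invariants are easy to confirm numerically for a concrete similar pair (the matrices below are illustrative):

```python
import numpy as np

# B = P^{-1} A P shares determinant, trace and eigenvalues with A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # invertible: det P = 1
B = np.linalg.inv(P) @ A @ P
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.isclose(np.trace(A), np.trace(B))
assert np.allclose(np.sort(np.linalg.eigvals(A).real),
                   np.sort(np.linalg.eigvals(B).real))
```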
+\subsection{Diagonalizable matrices}
+\begin{defi}[Diagonalizable matrices]
+ An $n\times n$ matrix $A$ is \emph{diagonalizable} if it is similar to a diagonal matrix. We showed above that this is equivalent to saying the eigenvectors form a basis of $\C^n$.
+\end{defi}
+
+The requirement that matrix $A$ has $n$ distinct eigenvalues is a \emph{sufficient} condition for diagonalizability as shown above. However, it is \emph{not} necessary.
+
+Consider the second example in Section 5.2,
+\[
+ A = \begin{pmatrix}
+ -2 & 2 & -3\\
+ 2 & 1 & -6\\
+ -1 & -2 & 0
+ \end{pmatrix}
+\]
+We found three linearly independent eigenvectors
+\[
+ \tilde{\mathbf{e}_1} =
+ \begin{pmatrix}
+ 1\\2\\-1
+ \end{pmatrix}, \tilde{\mathbf{e}_2} =
+ \begin{pmatrix}
+ -2\\1\\0
+ \end{pmatrix}, \tilde{\mathbf{e}_3} =
+ \begin{pmatrix}
+ 3\\0\\1
+ \end{pmatrix}
+\]
+If we let
+\[
+ P =
+ \begin{pmatrix}
+ 1 & -2 & 3\\
+ 2 & 1 & 0\\
+ -1 & 0 & 1
+ \end{pmatrix}, P^{-1} = \frac{1}{8}
+ \begin{pmatrix}
+ 1 & 2 & -3\\
+ -2 & 4 & 6\\
+ 1 & 2 & 5
+ \end{pmatrix},
+\]
+then
+\[
+ \tilde{A} = P^{-1}AP =
+ \begin{pmatrix}
+ 5 & 0 & 0\\
+ 0 & -3 & 0\\
+ 0 & 0 & -3
+ \end{pmatrix},
+\]
+so $A$ is diagonalizable.
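We can confirm the diagonalization numerically, taking the three eigenvectors found in the earlier example as the columns of $P$:

```python
import numpy as np

# Diagonalize A with P whose columns are the eigenvectors
# (1, 2, -1), (-2, 1, 0), (3, 0, 1).
A = np.array([[-2.0, 2.0, -3.0],
              [2.0, 1.0, -6.0],
              [-1.0, -2.0, 0.0]])
P = np.array([[1.0, -2.0, 3.0],
              [2.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([5.0, -3.0, -3.0]))
```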
+
+\begin{thm}
+ Let $\lambda_1, \lambda_2, \cdots, \lambda_r$, with $r \leq n$, be the distinct eigenvalues of $A$. Let $B_1, B_2, \cdots, B_r$ be the bases of the eigenspaces $E_{\lambda_1}, E_{\lambda_2}, \cdots, E_{\lambda_r}$ correspondingly. Then the set $\displaystyle B = \bigcup_{i= 1}^r B_i$ is linearly independent.
+\end{thm}
+
+This is similar to the proof we had for the case where the eigenvalues are distinct. However, we are going to do it much more concisely, and the meat of the proof is just a single line.
+\begin{proof}
+ Write $B_1 = \{\mathbf{x}_1^{(1)}, \mathbf{x}_2^{(1)}, \cdots \mathbf{x}_{m(\lambda_1)}^{(1)}\}$. Then $m(\lambda_1) = \dim (E_{\lambda_1})$, and similarly for all $B_i$.
+
+ Consider a general linear combination of all the elements in $B$, i.e.\ the equation
+ \[
+ \sum_{i = 1}^r\sum_{j = 1}^{m(\lambda_i)} \alpha_{ij} \mathbf{x}_j^{(i)} = \mathbf{0}.
+ \]
+ The first sum is summing over all eigenspaces, and the second sum sums over the basis vectors in $B_i$. Now apply the matrix
+ \[
+ \prod_{k = 1, 2, \cdots, \bar{K}, \cdots, r} (A - \lambda_kI)
+ \]
+ to the above sum, for some arbitrary $K$ (the bar over $K$ indicates that the factor $k = K$ is omitted). We obtain
+ \[
+ \sum_{j = 1}^{m(\lambda_K)}\alpha_{Kj}\left[\prod_{k = 1, 2, \cdots, \bar{K}, \cdots, r}(\lambda_K - \lambda_k)\right]\mathbf{x}_j^{(K)} = \mathbf{0}.
+ \]
+ Since the $\mathbf{x}^{(K)}_j$ are linearly independent ($B_K$ is a basis), $\alpha_{Kj} = 0$ for all $j$. Since $K$ was arbitrary, all $\alpha_{ij}$ must be zero. So $B$ is linearly independent.
+\end{proof}
+
+\begin{prop}
+ $A$ is diagonalizable iff all its eigenvalues have zero defect.
+\end{prop}
+\subsection{Canonical (Jordan normal) form}
+Given a matrix $A$, if its eigenvalues all have zero defect, then we can find a basis in which it is diagonal. However, if some eigenvalue \emph{does} have defect, we can still put it into an almost-diagonal form. This is known as the \emph{Jordan normal form}.
+
+\begin{thm}
+ Any $2\times 2$ complex matrix $A$ is similar to exactly one of
+ \[
+ \begin{pmatrix}
+ \lambda_1 & 0\\
+ 0 & \lambda_2
+ \end{pmatrix},
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda
+ \end{pmatrix},
+ \begin{pmatrix}
+ \lambda & 1\\
+ 0 & \lambda
+ \end{pmatrix}
+ \]
+ where $\lambda_1 \not= \lambda_2$ in the first case.
+\end{thm}
+\begin{proof}
+ For each case:
+ \begin{enumerate}
+ \item If $A$ has two distinct eigenvalues, then the eigenvectors are linearly independent, and we can take $P$ to have the eigenvectors as its columns.
+ \item If $\lambda_1=\lambda_2 = \lambda$ and $\dim E_\lambda = 2$, then write $E_\lambda = \spn\{\mathbf{u}, \mathbf{v}\}$, with $\mathbf{u}, \mathbf{v}$ linearly independent. Now use $\{\mathbf{u}, \mathbf{v}\}$ as a new basis of $\C^2$ and $\tilde{A} = P^{-1}AP =
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda
+ \end{pmatrix} = \lambda I$.
+
+ Note that since $P^{-1}AP = \lambda I$, we have $A = P(\lambda I)P^{-1} = \lambda I$. So $A$ is \emph{isotropic}, i.e.\ the same with respect to any basis.
+ \item If $\lambda_1 = \lambda_2 = \lambda$ and $\dim (E_\lambda) = 1$, then $E_\lambda = \spn\{\mathbf{v}\}$. Now choose basis of $\C^2$ as $\{\mathbf{v}, \mathbf{w}\}$, where $\mathbf{w}\in \C^2\setminus E_\lambda$.
+
+ We know that $A\mathbf{w}\in \C^2$. So $A\mathbf{w} = \alpha \mathbf{v} + \beta \mathbf{w}$. Hence, if we change basis to $\{\mathbf{v}, \mathbf{w}\}$, then $\tilde{A} = P^{-1}AP =
+ \begin{pmatrix}
+ \lambda & \alpha\\
+ 0 & \beta
+ \end{pmatrix}$.
+
+ However, $A$ and $\tilde{A}$ both have eigenvalue $\lambda$ with algebraic multiplicity $2$. So we must have $\beta = \lambda$. To make $\alpha = 1$, let $\mathbf{u} = (\tilde{A} - \lambda I)\mathbf{w}$. We know $\mathbf{u}\not= \mathbf{0}$ since $\mathbf{w}$ is not in the eigenspace. Then
+ \[
+ (\tilde{A} - \lambda I)\mathbf{u} = (\tilde{A} - \lambda I)^2 \mathbf{w} =
+ \begin{pmatrix}
+ 0 & \alpha\\
+ 0 & 0
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0 & \alpha\\
+ 0 & 0
+ \end{pmatrix}\mathbf{w} = \mathbf{0}.
+ \]
+ So $\mathbf{u}$ is an eigenvector of $\tilde{A}$ with eigenvalue $\lambda$.
+
+ We have $\mathbf{u} = \tilde A\mathbf{w} - \lambda\mathbf{w}$. So $\tilde A\mathbf{w} = \mathbf{u} + \lambda\mathbf{w}$.
+
+ Change basis to $\{\mathbf{u}, \mathbf{w}\}$. Then $A$ with respect to this basis is $
+ \begin{pmatrix}
+ \lambda & 1\\
+ 0 & \lambda
+ \end{pmatrix}$.
+
+ This is a two-stage process: $P$ sends basis to $\{\mathbf{v}, \mathbf{w}\}$ and then matrix $Q$ sends to basis $\{\mathbf{u}, \mathbf{w}\}$. So the similarity transformation is $Q^{-1}(P^{-1}AP)Q = (PQ)^{-1}A(PQ)$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{prop}
+ (Without proof) The canonical form, or Jordan normal form, exists for any $n\times n$ matrix $A$. Specifically, there exists a similarity transformation such that $A$ is similar to a matrix $\tilde{A}$ that satisfies the following properties:
+ \begin{enumerate}
+ \item $\tilde{A}_{\alpha\alpha} = \lambda_\alpha$, i.e.\ the diagonal consists of the eigenvalues.
+ \item $\tilde{A}_{\alpha, \alpha + 1} = 0$ or $1$.
+ \item $\tilde{A}_{ij} = 0$ otherwise.
+ \end{enumerate}
+\end{prop}
+The actual theorem is stronger than this, and the Jordan normal form satisfies some additional properties. However, we shall not go into the details, which are left for the IB Linear Algebra course.
+
+\begin{eg}
+ Let
+ \[
+ A = \begin{pmatrix}
+ -3 & -1 & 1\\
+ -1 & -3 & 1\\
+ -2 & -2 & 0
+ \end{pmatrix}
+ \]
+ The eigenvalues are $-2, -2, -2$ and the eigenvectors are $
+ \begin{pmatrix}
+ -1 \\1 \\ 0
+ \end{pmatrix},
+ \begin{pmatrix}
+ 1 \\ 0 \\1
+ \end{pmatrix}$. Pick $\mathbf{w} =
+ \begin{pmatrix}
+ 1\\0\\0
+ \end{pmatrix}$. Write $\mathbf{u} = (A - \lambda I)\mathbf{w} =
+ \begin{pmatrix}
+ -1 & -1 & 1\\
+ -1 & -1 & 1\\
+ -2 & -2 & 2
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1\\0\\0
+ \end{pmatrix} =
+ \begin{pmatrix}
+ -1\\-1\\-2
+ \end{pmatrix}$. Note that $A\mathbf{u} = -2\mathbf{u}$. We also have $A\mathbf{w} = \mathbf{u} - 2\mathbf{w}$. Form a basis $\{\mathbf{u}, \mathbf{w}, \mathbf{v}\}$, where $\mathbf{v}$ is another eigenvector linearly independent from $\mathbf{u}$, say $
+ \begin{pmatrix}
+ 1\\0\\1
+ \end{pmatrix}$.
+
+ Now change to this basis with
+ $P = \begin{pmatrix}
+ -1 & 1 & 1\\
+ -1 & 0 & 0\\
+ -2 & 0 & 1
+ \end{pmatrix}$. Then the Jordan normal form is $P^{-1}AP =
+ \begin{pmatrix}
+ -2 & 1 & 0\\
+ 0 & -2 & 0\\
+ 0 & 0 & -2
+ \end{pmatrix}$.
+\end{eg}
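Again this is easy to check numerically, with $P = (\mathbf{u}\;\mathbf{w}\;\mathbf{v})$ as above:

```python
import numpy as np

# Jordan normal form: P^{-1} A P with basis u, w, v.
A = np.array([[-3.0, -1.0, 1.0],
              [-1.0, -3.0, 1.0],
              [-2.0, -2.0, 0.0]])
P = np.array([[-1.0, 1.0, 1.0],
              [-1.0, 0.0, 0.0],
              [-2.0, 0.0, 1.0]])
J = np.linalg.inv(P) @ A @ P
expected = np.array([[-2.0, 1.0, 0.0],
                     [0.0, -2.0, 0.0],
                     [0.0, 0.0, -2.0]])
assert np.allclose(J, expected)
```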
+
+\subsection{Cayley-Hamilton Theorem}
+\begin{thm}[Cayley-Hamilton theorem]
+ Every $n\times n$ complex matrix satisfies its own characteristic equation.
+\end{thm}
+
+\begin{proof}
+ We will only prove for diagonalizable matrices here. So suppose for our matrix $A$, there is some $P$ such that $D = \mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_n) = P^{-1}AP$.
+ Note that
+ \[
+ D^i = (P^{-1}AP)(P^{-1}AP)\cdots(P^{-1}AP) = P^{-1}A^iP.
+ \]
+ Hence
+ \[
+ p_D(D) = p_D(P^{-1}AP) = P^{-1}[p_D(A)]P.
+ \]
+ Since similar matrices have the same characteristic polynomial, we have $p_D = p_A$. So
+ \[
+ p_A(D) = P^{-1}[p_A(A)]P.
+ \]
+ However, we also know that $D^i = \mathrm{diag}(\lambda_1^i, \lambda_2^i, \cdots, \lambda_n^i)$. So
+ \[
+ p_A(D) = \mathrm{diag}(p_A(\lambda_1), p_A(\lambda_2), \cdots, p_A(\lambda_n)) = \mathrm{diag}(0, 0, \cdots, 0)
+ \]
+ since the eigenvalues are roots of $p_A(\lambda) = 0$. So $0 = p_A(D) = P^{-1}p_A(A)P$ and thus $p_A(A) = 0$.
+\end{proof}
+
+There are a few things to note.
+\begin{enumerate}
+ \item If $A^{-1}$ exists, then $A^{-1} p_A(A) = A^{-1}(c_0 + c_1A + c_2A^2 + \cdots + c_n A^n) = 0$. So $c_0 A^{-1} + c_1 + c_2A + \cdots + c_n A^{n - 1} = 0$. Since $A^{-1}$ exists, $c_0 = \pm \det A \not= 0$. So
+ \[
+ A^{-1} = \frac{-1}{c_0}(c_1 + c_2 A + \cdots + c_n A^{n -1}).
+ \]
+ So we can calculate $A^{-1}$ from positive powers of $A$.
+ \item We can define matrix exponentiation by
+ \[
+ e^A = I + A + \frac{1}{2!}A^2 + \cdots + \frac{1}{n!}A^n + \cdots.
+ \]
+ It is a fact that this always converges.
+
+ If $A$ is diagonalizable with $P$ with $D = P^{-1}AP = \mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_n)$, then
+ \begin{align*}
+ P^{-1}e^A P &= P^{-1}IP + P^{-1}AP + \frac{1}{2!}P^{-1}A^2P + \cdots\\
+ &= I + D + \frac{1}{2!}D^{2} + \cdots\\
+ &= \mathrm{diag}(e^{\lambda_1}, e^{\lambda_2}, \cdots, e^{\lambda_n})
+ \end{align*}
+ So
+ \[
+ e^A = P[\mathrm{diag}(e^{\lambda_1}, e^{\lambda_2}, \cdots, e^{\lambda_n})]P^{-1}.
+ \]
+ \item For $2\times 2$ matrices which are similar to $B =
+ \begin{pmatrix}
+ \lambda & 1\\
+ 0 & \lambda
+ \end{pmatrix}$,
+ we see that the characteristic polynomial is $p_B(z) = \det (B - zI) = (\lambda - z)^2$. Then $p_B(B) = (\lambda I - B)^2 =
+ \begin{pmatrix}
+ 0 & -1\\
+ 0 & 0
+ \end{pmatrix}^2 =
+ \begin{pmatrix}
+ 0 & 0\\
+ 0 & 0
+ \end{pmatrix}$.
+
+ Since we have proved the theorem for diagonalizable matrices above, we now know that \emph{any} $2\times 2$ matrix satisfies the Cayley-Hamilton theorem.
+\end{enumerate}
+In IB Linear Algebra, we will prove the Cayley-Hamilton theorem properly for all matrices without assuming diagonalizability.
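Remark (i) is easy to check numerically for a small matrix. For a $2\times 2$ matrix $A$, the characteristic polynomial is $p_A(t) = t^2 - (\mathrm{tr}\, A)t + \det A$, so the theorem asserts $A^2 - (\mathrm{tr}\, A)A + (\det A)I = 0$, and remark (i) gives $A^{-1} = ((\mathrm{tr}\, A)I - A)/\det A$. A sketch with a hypothetical test matrix:

```python
A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]                      # trace
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]     # determinant

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)

# Cayley-Hamilton: p_A(A) = A^2 - (tr A) A + (det A) I should vanish
pA = [[A2[i][j] - tr*A[i][j] + det*(i == j) for j in range(2)]
      for i in range(2)]
assert pA == [[0, 0], [0, 0]]

# A^{-1} from positive powers of A: ((tr A) I - A) / det A
Ainv = [[(tr*(i == j) - A[i][j]) / det for j in range(2)] for i in range(2)]
assert matmul(A, Ainv) == [[1.0, 0.0], [0.0, 1.0]]
```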
+
+\subsection{Eigenvalues and eigenvectors of a Hermitian matrix}
+\subsubsection{Eigenvalues and eigenvectors}
+\begin{thm}
+ The eigenvalues of a Hermitian matrix $H$ are real.
+\end{thm}
+
+\begin{proof}
+ Suppose that $H$ has eigenvalue $\lambda$ with eigenvector $\mathbf{v}\not= 0$. Then
+ \[
+ H\mathbf{v} = \lambda\mathbf{v}.
+ \]
+ We pre-multiply by $\mathbf{v}^\dagger$, a $1\times n$ row vector, to obtain
+ \[
+ \mathbf{v}^\dagger H\mathbf{v} = \lambda \mathbf{v}^\dagger \mathbf{v}\tag{$*$}\]
+ We take the Hermitian conjugate of both sides. The left hand side is
+ \[
+ (\mathbf{v}^\dagger H\mathbf{v})^\dagger = \mathbf{v}^\dagger H^\dagger \mathbf{v} = \mathbf{v}^\dagger H \mathbf{v}
+ \]
+ since $H$ is Hermitian. The right hand side is
+ \[
+ (\lambda\mathbf{v}^\dagger\mathbf{v})^\dagger = \lambda^* \mathbf{v}^\dagger \mathbf{v}
+ \]
+ So we have
+ \[
+ \mathbf{v}^\dagger H\mathbf{v} = \lambda^* \mathbf{v}^\dagger \mathbf{v}.
+ \]
+ From $(*)$, we know that $\lambda \mathbf{v}^\dagger \mathbf{v} = \lambda^* \mathbf{v}^\dagger \mathbf{v}$. Since $\mathbf{v} \not= 0$, we know that $\mathbf{v}^\dagger \mathbf{v} = \mathbf{v}\cdot \mathbf{v} \not =0$. So $\lambda = \lambda^*$ and $\lambda$ is real.
+\end{proof}
+
+\begin{thm}
+ The eigenvectors of a Hermitian matrix $H$ corresponding to distinct eigenvalues are orthogonal.
+\end{thm}
+
+\begin{proof}
+ Let
+ \begin{align*}
+ H\mathbf{v}_i &= \lambda_i\mathbf{v}_i\tag{i}\\
+ H\mathbf{v}_j &= \lambda_j\mathbf{v}_j\tag{ii}.
+ \end{align*}
+ Pre-multiply (i) by $\mathbf{v}_j^\dagger$ to obtain
+ \[
+ \mathbf{v}_j^\dagger H\mathbf{v}_i = \lambda_i \mathbf{v}_j^\dagger \mathbf{v}_i\tag{iii}.
+ \]
+ Pre-multiply (ii) by $\mathbf{v}_i^\dagger$ and take the Hermitian conjugate to obtain
+ \[
+ \mathbf{v}_j^\dagger H\mathbf{v}_i = \lambda_j \mathbf{v}_j^\dagger \mathbf{v}_i\tag{iv}.
+ \]
+ Equating (iii) and (iv) yields
+ \[
+ \lambda_i \mathbf{v}_j^\dagger \mathbf{v}_i = \lambda_j \mathbf{v}_j^\dagger \mathbf{v}_i.
+ \]
+ Since $\lambda_i\not= \lambda_j$, we must have $\mathbf{v}_j^\dagger\mathbf{v}_i = 0$, i.e.\ their inner product is zero and they are orthogonal.
+\end{proof}
+
+So we know that if a Hermitian matrix has $n$ distinct eigenvalues, then the (normalized) eigenvectors form an orthonormal basis. However, if there are degenerate eigenvalues, it is more difficult, and requires the Gram-Schmidt process.
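Both theorems are easy to illustrate numerically. A sketch on a hypothetical $2\times 2$ Hermitian matrix, solving the characteristic equation directly and checking that the eigenvalues come out real and the eigenvectors orthogonal:

```python
import cmath

H = [[2, 1 - 1j],
     [1 + 1j, 3]]
assert H[0][1] == H[1][0].conjugate()  # H is Hermitian

# Characteristic equation: t^2 - (tr H) t + det H = 0
tr = H[0][0] + H[1][1]
det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
disc = cmath.sqrt(tr*tr - 4*det)
l1, l2 = (tr + disc) / 2, (tr - disc) / 2
assert abs(l1.imag) < 1e-12 and abs(l2.imag) < 1e-12  # eigenvalues are real

# Eigenvectors of (H - l I)v = 0: v = (H_{12}, l - H_{11}) works for 2x2
v1 = [H[0][1], l1 - H[0][0]]
v2 = [H[0][1], l2 - H[0][0]]
inner = sum(a.conjugate() * b for a, b in zip(v1, v2))  # v1^dagger v2
assert abs(inner) < 1e-12  # eigenvectors of distinct eigenvalues orthogonal
```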
+
+\subsubsection{Gram-Schmidt orthogonalization (non-examinable)}
+Suppose we have a set $B = \{\mathbf{w}_1, \mathbf{w}_2, \cdots, \mathbf{w}_r\}$ of linearly independent vectors. We want to find an orthogonal set $\tilde{B} = \{\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_r\}$.
+
+Define the projection of $\mathbf{w}$ onto $\mathbf{v}$ by $\mathcal{P}_\mathbf{v}(\mathbf{w}) = \frac{\bra \mathbf{v}\mid \mathbf{w}\ket}{\bra \mathbf{v}\mid \mathbf{v}\ket} \mathbf{v}$. Now construct $\tilde{B}$ iteratively:
+\begin{enumerate}
+ \item $\mathbf{v}_1 = \mathbf{w}_1$
+ \item $\mathbf{v}_2 = \mathbf{w}_2 - \mathcal{P}_{\mathbf{v}_1}(\mathbf{w}_2)$
+
+ Then we get that $\bra \mathbf{v}_1\mid \mathbf{v}_2\ket = \bra \mathbf{v}_1\mid \mathbf{w}_2\ket - \left(\frac{\bra \mathbf{v}_1\mid \mathbf{w}_2\ket}{\bra \mathbf{v}_1 \mid \mathbf{v}_1\ket}\right) \bra \mathbf{v}_1\mid \mathbf{v}_1\ket = 0$
+ \item $\mathbf{v}_3 = \mathbf{w}_3 - \mathcal{P}_{\mathbf{v}_1}(\mathbf{w}_3) - \mathcal{P}_{\mathbf{v}_2}(\mathbf{w}_3)$
+ \item $\vdots$
+ \item $\displaystyle \mathbf{v}_r = \mathbf{w}_r - \sum_{j = 1}^{r - 1} \mathcal{P}_{\mathbf{v}_j}(\mathbf{w}_r)$
+\end{enumerate}
+At each step, we subtract from $\mathbf{w}_k$ the components that lie in the span of $\{\mathbf{v}_1, \cdots, \mathbf{v}_{k - 1}\}$. This ensures that all the vectors are orthogonal. Finally, we normalize each basis vector individually to obtain an orthonormal basis.
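The iteration above can be sketched directly in code (plain Python, real vectors, and a hypothetical input set):

```python
def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def gram_schmidt(ws):
    """Return an orthogonal set spanning the same space as ws."""
    vs = []
    for w in ws:
        # v_k = w_k minus its projections onto the v_j already built
        v = list(w)
        for u in vs:
            c = dot(u, w) / dot(u, u)
            v = [vi - c*ui for vi, ui in zip(v, u)]
        vs.append(v)
    return vs

B = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]  # linearly independent input
V = gram_schmidt(B)

# the output vectors are pairwise orthogonal
for i in range(3):
    for j in range(i):
        assert abs(dot(V[i], V[j])) < 1e-12
```

Normalizing each $\mathbf{v}_k$ by its length then gives an orthonormal set.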
+
+\subsubsection{Unitary transformation}
+Suppose $U$ is the transformation between one orthonormal basis and a new orthonormal basis $\{\mathbf{u}_1, \mathbf{u}_2, \cdots, \mathbf{u}_n\}$, i.e.\ $\bra \mathbf{u}_i\mid \mathbf{u}_j\ket = \delta_{ij}$. Then
+\[
+ U =
+ \begin{pmatrix}
+ (\mathbf{u}_1)_1 & (\mathbf{u}_2)_1 & \cdots & (\mathbf{u}_n)_1\\
+ (\mathbf{u}_1)_2 & (\mathbf{u}_2)_2 & \cdots & (\mathbf{u}_n)_2\\
+ \vdots & \vdots & \ddots & \vdots\\
+ (\mathbf{u}_1)_n & (\mathbf{u}_2)_n & \cdots & (\mathbf{u}_n)_n
+ \end{pmatrix}
+\]
+Then
+\begin{align*}
+ (U^\dagger U)_{ij} &= (U^\dagger)_{ik}U_{kj}\\
+ &= U_{ki}^* U_{kj}\\
+ &= (\mathbf{u}_i)^*_k(\mathbf{u}_j)_k\\
+ &= \bra \mathbf{u}_i \mid \mathbf{u}_j \ket\\
+ &= \delta_{ij}
+\end{align*}
+So $U$ is a unitary matrix.
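As a quick illustration, we can stack a (hypothetical) orthonormal pair of complex columns and check $U^\dagger U = I$ entrywise, exactly as in the index computation above:

```python
import math

s = 1 / math.sqrt(2)
# columns u_1 = (s, i s), u_2 = (s, -i s) form an orthonormal basis of C^2
U = [[s, s],
     [1j*s, -1j*s]]

n = 2
# (U^dagger U)_{ij} = sum_k conj(U_{ki}) U_{kj}
UdU = [[sum(U[k][i].conjugate() * U[k][j] for k in range(n))
        for j in range(n)] for i in range(n)]

for i in range(n):
    for j in range(n):
        assert abs(UdU[i][j] - (1 if i == j else 0)) < 1e-12
```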
+\subsubsection{Diagonalization of \texorpdfstring{$n\times n$}{n x n} Hermitian matrices}
+\begin{thm}
+ An $n\times n$ Hermitian matrix has precisely $n$ orthogonal eigenvectors.
+\end{thm}
+
+\begin{proof}
+ (Non-examinable) Let $\lambda_1,\lambda_2, \cdots, \lambda_r$ be the distinct eigenvalues of $H$ ($r \leq n$), with a set of corresponding orthonormal eigenvectors $B = \{\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_r\}$. Extend to a basis of the whole of $\C^n$
+ \[
+ B' = \{\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_r, \mathbf{w}_1, \mathbf{w}_2,\cdots, \mathbf{w}_{n - r}\}
+ \]
+ Now use Gram-Schmidt to create an orthonormal basis
+ \[
+ \tilde{B} = \{\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_r, \mathbf{u}_1, \mathbf{u}_2, \cdots, \mathbf{u}_{n - r}\}.
+ \]
+ Now write
+ \[
+ P =
+ \begin{pmatrix}
+ \uparrow & \uparrow & & \uparrow & \uparrow & & \uparrow\\
+ \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_r & \mathbf{u}_1 & \cdots & \mathbf{u}_{n - r}\\
+ \downarrow & \downarrow & & \downarrow & \downarrow & & \downarrow\\
+ \end{pmatrix}
+ \]
+ We have shown above that this is a unitary matrix, i.e.\ $P^{-1} = P^\dagger$. So if we change basis, we have
+ \begin{align*}
+ P^{-1}HP &= P^\dagger HP\\
+ &= \begin{pmatrix}
+ \lambda_1 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0\\
+ 0 & \lambda_2 & \cdots & 0 & 0 & 0 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & 0\\
+ 0 & 0 & \cdots & \lambda_r & 0 & 0 & \cdots & 0\\
+ 0 & 0 & \cdots & 0 & c_{11} & c_{12} & \cdots & c_{1, n - r}\\
+ 0 & 0 & \cdots & 0 & c_{21} & c_{22} & \cdots & c_{2, n - r}\\
+ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \cdots & 0 & c_{n - r,1} & c_{n - r,2} & \cdots & c_{n - r, n - r}
+ \end{pmatrix}
+ \end{align*}
+ Here $C$ is an $(n - r)\times (n - r)$ Hermitian matrix. The eigenvalues of $C$ are also eigenvalues of $H$ because $\det (H - \lambda I) = \det(P^\dagger HP - \lambda I) = (\lambda_1 - \lambda)\cdots (\lambda_r - \lambda)\det (C - \lambda I)$. So the eigenvalues of $C$ are the eigenvalues of $H$.
+
+ We can keep repeating the process on $C$ until we finish all rows. For example, if the eigenvalues of $C$ are all distinct, there are $n - r$ orthonormal eigenvectors $\mathbf{w}_j$ (for $j = r + 1, \cdots, n$) of $C$. Let
+ \[
+ Q =
+ \begin{pmatrix}
+ 1 \\
+ & 1\\
+ && \ddots\\
+ &&& 1\\
+ &&&& \uparrow & \uparrow & &\uparrow\\
+ &&&& \mathbf{w}_{r+1} & \mathbf{w}_{r + 2} & \cdots & \mathbf{w}_n\\
+ &&&& \downarrow & \downarrow & &\downarrow\\
+ \end{pmatrix}
+ \]
+ with all other entries $0$ (i.e.\ we have an $r\times r$ identity matrix block in the top left corner and an $(n - r) \times (n - r)$ block in the bottom right whose columns are the $\mathbf{w}_j$).
+
+ Since the columns of $Q$ are orthonormal, $Q$ is unitary. So $Q^\dagger P^\dagger HPQ = \mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_r, \lambda_{r + 1}, \cdots, \lambda_n)$, where the first $r$ $\lambda$s are distinct and the remaining ones are copies of previous ones.
+
+ The $n$ linearly-independent eigenvectors are the columns of $PQ$.
+
+\end{proof}
+So it now follows that $H$ is diagonalizable via the transformation $U = PQ$, which is unitary because $P$ and $Q$ are. We have
+\begin{align*}
+ D &= U^\dagger HU\\
+ H &= UDU^\dagger
+\end{align*}
+Note that a real symmetric matrix $S$ is a special case of Hermitian matrices. So we have
+\begin{align*}
+ D &= Q^T SQ\\
+ S &= QDQ^T
+\end{align*}
+\begin{eg}
+ Find the orthogonal matrix which diagonalizes the following real symmetric matrix: $S =
+ \begin{pmatrix}
+ 1 & \beta\\
+ \beta & 1
+ \end{pmatrix}$ with $\beta\in \R$, $\beta \not= 0$.
+
+ We find the eigenvalues by solving the characteristic equation: $\det(S - \lambda I) = 0$, and obtain $\lambda = 1\pm \beta$.
+
+ The corresponding eigenvectors satisfy $(S - \lambda I)\mathbf{x} = 0$, which gives $\displaystyle \mathbf{x} = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1\\
+ \pm1
+ \end{pmatrix}$
+
+ We change the basis from the standard basis to $
+ \displaystyle
+ \frac{1}{\sqrt{2}}\begin{pmatrix}
+ 1\\1
+ \end{pmatrix},
+ \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1\\-1
+ \end{pmatrix}$ (which is just a rotation by $\pi/4$).
+
+ The transformation matrix is $
+ Q = \begin{pmatrix}
+ 1/\sqrt{2} & 1/\sqrt{2}\\
+ 1/\sqrt{2} & -1/\sqrt{2}
+ \end{pmatrix}$. Then we know that $S = QDQ^T$ with $D = \mathrm{diag}(1 + \beta, 1 - \beta)$.
+\end{eg}
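A numerical check of this example, with a hypothetical value for $\beta$: the rotation $Q$ by $\pi/4$ should give $Q^TSQ = \mathrm{diag}(1 + \beta, 1 - \beta)$.

```python
import math

b = 0.7                    # any nonzero real beta (hypothetical value)
S = [[1, b], [b, 1]]
s = 1 / math.sqrt(2)
Q = [[s, s],
     [s, -s]]              # columns are the two normalized eigenvectors

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Qt = [[Q[j][i] for j in range(2)] for i in range(2)]
D = matmul(Qt, matmul(S, Q))   # Q^T S Q

assert abs(D[0][0] - (1 + b)) < 1e-12
assert abs(D[1][1] - (1 - b)) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
```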
+\subsubsection{Normal matrices}
+We have seen that the eigenvalues and eigenvectors of Hermitian matrices satisfy some nice properties. More generally, we can define the following:
+\begin{defi}[Normal matrix]
+ A \emph{normal matrix} is a matrix that commutes with its own Hermitian conjugate, i.e.
+ \[
+ NN^\dagger = N^\dagger N
+ \]
+\end{defi}
+Hermitian, real symmetric, skew-Hermitian, real anti-symmetric, orthogonal, unitary matrices are all special cases of normal matrices.
+
+It can be shown that:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If $\lambda$ is an eigenvalue of $N$, then $\lambda^*$ is an eigenvalue of $N^\dagger$.
+ \item The eigenvectors of distinct eigenvalues are orthogonal.
+ \item A normal matrix can always be diagonalized with an orthonormal basis of eigenvectors.
+ \end{enumerate}
+\end{prop}
+
+\section{Quadratic forms and conics}
+We want to study quantities like $x_1^2 + x_2^2$ and $3x_1^2 + 2x_1x_2 + 4x_2^2$. For example, conic sections generally take this form. The common characteristic of these is that each term has degree 2. Consequently, we can write it in the form $\mathbf{x}^\dagger A\mathbf{x}$ for some matrix $A$.
+
+\begin{defi}[Sesquilinear, Hermitian and quadratic forms]
+ A \emph{sesquilinear form} is a quantity $F = \mathbf{x}^\dagger A\mathbf{x} = x_i^*A_{ij}x_j$. If $A$ is Hermitian, then $F$ is a \emph{Hermitian form}. If $A$ is real symmetric, then $F$ is a \emph{quadratic form}.
+\end{defi}
+
+\begin{thm}
+ Hermitian forms are real.
+\end{thm}
+\begin{proof}
+ $(\mathbf{x}^\dagger H\mathbf{x})^* = (\mathbf{x}^\dagger H\mathbf{x})^\dagger = \mathbf{x}^\dagger H^\dagger\mathbf{x} = \mathbf{x}^\dagger H\mathbf{x}$. So $(\mathbf{x}^\dagger H\mathbf{x})^* = \mathbf{x}^\dagger H\mathbf{x}$ and it is real.
+\end{proof}
+
+We know that any Hermitian matrix can be diagonalized with a unitary transformation. So $F(\mathbf{x}) = \mathbf{x}^\dagger H\mathbf{x} = \mathbf{x}^\dagger UDU^\dagger \mathbf{x}$. Write $\mathbf{x}' = U^\dagger \mathbf{x}$. So $F = (\mathbf{x}')^\dagger D\mathbf{x}'$, where $D = \mathrm{diag}(\lambda_1,\cdots,\lambda_n)$.
+
+We know that $\mathbf{x}'$ is the vector $\mathbf{x}$ relative to the eigenvector basis. So
+\[
+ F(\mathbf{x}) = \sum_{i = 1}^n \lambda_i |x_i'|^2
+\]
+The eigenvectors are known as the principal axes.
+
+\begin{eg}
+ Take $F = 2x^2 - 4xy + 5y^2 = \mathbf{x}^TS\mathbf{x}$, where $\mathbf{x} =
+ \begin{pmatrix}
+ x\\y
+ \end{pmatrix}$ and $S =
+ \begin{pmatrix}
+ 2 & -2\\
+ -2 & 5
+ \end{pmatrix}$.
+
+ Note that we can always choose the matrix to be symmetric. This is because for any real antisymmetric $A$, we have $\mathbf{x}^T A\mathbf{x} = 0$. So we can just take the symmetric part.
+
+ The eigenvalues are $1, 6$ with corresponding eigenvectors $
+ \displaystyle \frac{1}{\sqrt{5}}\begin{pmatrix}
+ 2\\1
+ \end{pmatrix},\frac{1}{\sqrt{5}}
+ \begin{pmatrix}
+ 1\\-2
+ \end{pmatrix}$. Now change basis with
+ \[
+ Q = \frac{1}{\sqrt{5}}
+ \begin{pmatrix}
+ 2 & 1\\
+ 1 & -2
+ \end{pmatrix}
+ \]
+ Then $\mathbf{x}' = Q^T\mathbf{x} =
+ \frac{1}{\sqrt{5}}\begin{pmatrix}
+ 2x + y\\x - 2y
+ \end{pmatrix}$. Then $F = (x')^2 + 6(y')^2$.
+
+ So $F = c$ is an ellipse.
+\end{eg}
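We can check the change of variables numerically at a few sample points: with $x' = (2x + y)/\sqrt{5}$ and $y' = (x - 2y)/\sqrt{5}$, we should have $2x^2 - 4xy + 5y^2 = (x')^2 + 6(y')^2$.

```python
import math

def F(x, y):
    """The quadratic form from the example."""
    return 2*x*x - 4*x*y + 5*y*y

for x, y in [(1.0, 0.0), (0.3, -1.2), (2.5, 4.0)]:
    xp = (2*x + y) / math.sqrt(5)   # coordinates in the eigenvector basis
    yp = (x - 2*y) / math.sqrt(5)
    assert math.isclose(F(x, y), xp*xp + 6*yp*yp)
```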
+
+\subsection{Quadrics and conics}
+\subsubsection{Quadrics}
+\begin{defi}[Quadric]
+ A \emph{quadric} is an $n$-dimensional surface defined by the zero of a real quadratic polynomial, i.e.
+ \[
+ \mathbf{x}^T A\mathbf{x} + \mathbf{b}^T\mathbf{x} + c = 0,
+ \]
+ where $A$ is a real $n\times n$ matrix, $\mathbf{x}, \mathbf{b}$ are $n$-dimensional column vectors and $c$ is a constant scalar.
+\end{defi}
+
+As noted in the example above, an anti-symmetric matrix $A$ satisfies $\mathbf{x}^TA\mathbf{x} = 0$. So for any $A$, we can split it into symmetric and anti-symmetric parts, and just retain the symmetric part $S = (A + A^T)/2$. So we can have
+\[
+ \mathbf{x}^T S\mathbf{x} + \mathbf{b}^T\mathbf{x} + c = 0
+\]
+with $S$ symmetric.
+
+Since $S$ is real and symmetric, we can diagonalize it using $S = QDQ^T$ with $D$ diagonal. We write $\mathbf{x}' = Q^T \mathbf{x}$ and $\mathbf{b}' = Q^T \mathbf{b}$. So we have
+\[
+ (\mathbf{x}')^TD\mathbf{x}' + (\mathbf{b}')^T \mathbf{x}' + c = 0.
+\]
+If $S$ is invertible, i.e.\ it has no zero eigenvalues, then write $\mathbf{x}'' = \mathbf{x}' + \frac{1}{2}D^{-1}\mathbf{b}'$, which shifts the origin to eliminate the linear term $(\mathbf{b}')^T\mathbf{x}'$. We finally have (dropping the prime superscripts)
+\[
+ \mathbf{x}^TD\mathbf{x} = k.
+\]
+So through two transformations, we have ended up with a simple quadratic form.
+
+\subsubsection{Conic sections \texorpdfstring{$(n = 2)$}{(n = 2)}}
+From the equation above, we obtain
+\[
+ \lambda_1x_1^2 + \lambda_2x_2^2 = k.
+\]
+We have the following cases:
+\begin{enumerate}
+ \item $\lambda_1\lambda_2 > 0$: we have ellipses with axes coinciding with the eigenvectors of $S$. (We require $k$ to have the same sign as $\lambda_1$ and $\lambda_2$, or else we would have no solutions at all)
+ \item $\lambda_1\lambda_2 < 0$: say $\lambda_1 = k/a^2 > 0$, $\lambda_2 = -k/b^2 < 0$. So we obtain
+ \[
+ \frac{x_1^2}{a^2} - \frac{x_2^2}{b^2} = 1,
+ \]
+ which is a hyperbola.
+ \item $\lambda_1\lambda_2 = 0$: Say $\lambda_2 = 0$, $\lambda_1\not= 0$. Note that in this case, our symmetric matrix $S$ is not invertible and we cannot shift the origin as we did above.
+
+ From our initial equation, we have
+ \[
+ \lambda_1(x_1')^2 + b_1'x_1' + b_2' x_2' + c = 0.
+ \]
+ We perform the coordinate transform (which is simply completing the square!)
+ \begin{align*}
+ x_1'' &= x_1' + \frac{b_1'}{2\lambda_1}\\
+ x_2'' &= x_2' + \frac{c}{b_2'} - \frac{(b_1')^2}{4\lambda_1b_2'}
+ \end{align*}
+ to remove the $x_1'$ and constant term. Dropping the primes, we have
+ \[
+ \lambda_1 x_1^2 + b_2 x_2 = 0,
+ \]
+ which is a parabola.
+
+ Note that above we assumed $b_2'\not= 0$. If $b_2' = 0$, we have $\lambda_1(x_1')^2 + b_1' x_1' + c = 0$. If we solve this quadratic for $x_1'$, we obtain 0, 1 or 2 solutions for $x_1$ (and $x_2$ can be any value). So we have 0, 1 or 2 straight lines.
+\end{enumerate}
+These are known as conic sections. As you will see in IA Dynamics and Relativity, these are the trajectories of planets under the influence of gravity.
+
+\subsection{Focus-directrix property}
+Conic sections can be defined in a different way, in terms of
+\begin{defi}[Conic sections]
+ The \emph{eccentricity} and \emph{scale} are properties of a conic section that satisfy the following:
+
+ Let the \emph{foci} of a conic section be $(\pm ae, 0)$ and the \emph{directrices} be $x = \pm a/e$.
+
+ A \emph{conic section} is the set of points whose distance from a focus is $e$ times the distance from the directrix closer to that focus (unless $e = 1$, in which case we take the distance to the directrix on the other side).
+\end{defi}
+
+Now consider the different cases of $e$:
+\begin{enumerate}
+ \item $e < 1$. By definition,
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.5, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -2) -- (0, 2.5) node [above] {$y$};
+ \draw (0, 0) node [anchor = north east] {$O$};
+ \draw [dashed] (3.333, -2) -- (3.333, 2) node [above] {$x = a/e$};
+
+ \draw [dashed] (1.2, 0) node [anchor = south east] {$ae$} node [circ] {}
+ -- (1.6, 0.96) node [anchor = south west] {$(x, y)$} node [circ] {}
+ -- (3.333, 0.96);
+ \draw (0, 0) circle [x radius = 2, y radius = 1.6];
+ \end{tikzpicture}
+ \end{center}
+ \begin{align*}
+ \sqrt{(x - ae)^2 + y^2} &= e\left(\frac{a}{e} - x\right)\\
+ \frac{x^2}{a^2} + \frac{y^2}{a^2(1 - e^2)} &= 1
+ \end{align*}
+ which is an ellipse with semi-major axis $a$ and semi-minor axis $a\sqrt{1 - e^2}$ (if $e = 0$, then we have a circle).
+
+ \item $e > 1$. So
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$y$};
+ \draw (0, 0) node [anchor = north east] {$O$};
+ \draw [dashed] (1.333, -3) -- (1.333, 2.5) node [above] {$x = a/e$};
+ \draw [dashed] (3, 0) node [anchor = south west] {$ae$} node [circ] {}
+ -- (2.41, 1.5) node [right] {$(x, y)$} node [circ] {}
+ -- (1.333, 1.5);
+
+ \draw plot[domain = -1:1] ({2*cosh(\x)},{2.23607*sinh(\x)});
+ \draw plot[domain = -1:1] ({-2*cosh(\x)},{2.23607*sinh(\x)});
+ \end{tikzpicture}
+ \end{center}
+ \begin{align*}
+ \sqrt{(x - ae)^2 + y^2} &= e\left(x - \frac{a}{e}\right)\\
+ \frac{x^2}{a^2} - \frac{y^2}{a^2(e^2 - 1)} &= 1
+ \end{align*}
+ and we have a hyperbola.
+ \item $e = 1$: Then
+ \begin{center}
+ \begin{tikzpicture}[yscale=0.4]
+ \draw [->] (-2.5, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -6) -- (0, 6) node [above] {$y$};
+ \draw (0, 0) node [anchor = north east] {$O$};
+ \draw [dashed] (-2, -6) -- (-2, 5) node [above] {$x = -a$};
+ \draw [dashed] (2, 0) node [anchor = south west] {$a$} node [circ] {}
+ -- (1.2, 3.1) node [right] {$(x, y)$} node [circ] {}
+ -- (-2, 3.1);
+
+ \draw plot[domain = -5:5] ({\x*\x/8,\x});
+ \end{tikzpicture}
+ \end{center}
+ \begin{align*}
+ \sqrt{(x - a)^2 + y^2} &= x + a\\
+ y^2 &= 4ax
+ \end{align*}
+ and we have a parabola.
+\end{enumerate}
+
+Conics also work in polar coordinates. We introduce a new parameter $l$ such that $l/e$ is the distance from the focus to the directrix. So
+\[
+ l = a|1 - e^2|.
+\]
+We use polar coordinates $(r, \theta)$ centered on a focus. So the focus-directrix property is
+\begin{align*}
+ r &= e\left(\frac{l}{e} - r\cos\theta\right)\\
+ r &= \frac{l}{1 + e\cos\theta}
+\end{align*}
+We see that $r\to \infty$ if $\theta \to \cos^{-1}(-1/e)$, which is only possible if $e\geq 1$, i.e.\ hyperbola or parabola. But ellipses have $e < 1$. So $r$ is bounded, as expected.
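A quick numerical check of the polar form for an ellipse: points at distance $r(\theta) = l/(1 + e\cos\theta)$ from the focus $(ae, 0)$ should satisfy the Cartesian equation of the ellipse found earlier. (Hypothetical values of $a$ and $e$.)

```python
import math

a, e = 2.0, 0.5           # hypothetical ellipse parameters, e < 1
l = a * abs(1 - e*e)      # l = a|1 - e^2|

for t in [0.0, 0.7, 2.0, 3.1]:
    r = l / (1 + e*math.cos(t))
    # back to Cartesian coordinates with origin at the centre of the ellipse
    x = a*e + r*math.cos(t)
    y = r*math.sin(t)
    # the point lies on x^2/a^2 + y^2/(a^2 (1 - e^2)) = 1
    assert math.isclose(x*x/(a*a) + y*y/(a*a*(1 - e*e)), 1.0)
```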
+
+\section{Transformation groups}
+We have previously seen that orthogonal matrices are used to transform between orthonormal bases. Alternatively, we can see them as transformations of space itself that preserve distances, which is something we will prove shortly.
+
+Using this as the definition of an orthogonal matrix, we see that our definition of orthogonal matrices is dependent on our choice of the notion of distance, or metric. In special relativity, we will need to use a different metric, which will lead to the \emph{Lorentz matrices}, the matrices that conserve distances in special relativity. We will have a brief look at these as well.
+
+\subsection{Groups of orthogonal matrices}
+\begin{prop}
+ The set of all $n\times n$ orthogonal matrices $P$ forms a group under matrix multiplication.
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}[label=\arabic*.]
+ \setcounter{enumi}{-1}
+ \item If $P, Q$ are orthogonal, then consider $R = PQ$. $RR^T = (PQ)(PQ)^T = P(QQ^T)P^T = PP^T = I$. So $R$ is orthogonal.
+ \item $I$ satisfies $II^T = I$. So $I$ is orthogonal and is an identity of the group.
+ \item Inverse: if $P$ is orthogonal, then $P^{-1}=P^T$ by definition, which is also orthogonal.
+ \item Matrix multiplication is associative since function composition is associative.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Orthogonal group]
+ The \emph{orthogonal group} $O(n)$ is the group of orthogonal matrices.
+\end{defi}
+
+\begin{defi}[Special orthogonal group]
+ The \emph{special orthogonal group} $SO(n)$ is the subgroup of $O(n)$ that consists of all orthogonal matrices with determinant $1$.
+\end{defi}
+
+In general, we can show that any matrix in $O(2)$ is of the form
+\[
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta\\
+ \sin\theta & \cos\theta
+ \end{pmatrix}\text{ or }
+ \begin{pmatrix}
+ \cos\theta & \sin\theta\\
+ \sin\theta & -\cos\theta
+ \end{pmatrix}
+\]
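The first family consists of rotations ($\det = +1$) and the second of reflections ($\det = -1$); both satisfy $P^TP = I$. A quick numerical check of these two facts:

```python
import math

def rotation(t):
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t), math.cos(t)]]

def reflection(t):
    return [[math.cos(t), math.sin(t)],
            [math.sin(t), -math.cos(t)]]

def det(P):
    return P[0][0]*P[1][1] - P[0][1]*P[1][0]

def is_orthogonal(P):
    # (P^T P)_{ij} = sum_k P_{ki} P_{kj} should be the identity
    PtP = [[sum(P[k][i]*P[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    return all(abs(PtP[i][j] - (i == j)) < 1e-12
               for i in range(2) for j in range(2))

for t in [0.0, 0.4, 1.9]:
    assert is_orthogonal(rotation(t)) and math.isclose(det(rotation(t)), 1.0)
    assert is_orthogonal(reflection(t)) and math.isclose(det(reflection(t)), -1.0)
```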
+\subsection{Length preserving matrices}
+\begin{thm}
+ Let $P$ be an $n\times n$ real matrix. Then the following are equivalent:
+ \begin{enumerate}
+ \item $P$ is orthogonal
+ \item $|P\mathbf{x}| = |\mathbf{x}|$ for all $\mathbf{x}$
+ \item $(P\mathbf{x})^T(P\mathbf{y}) = \mathbf{x}^T\mathbf{y}$, i.e.\ $(P\mathbf{x})\cdot(P\mathbf{y}) = \mathbf{x}\cdot \mathbf{y}$.
+ \item If $(\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_n)$ are orthonormal, so are $(P\mathbf{v}_1, P\mathbf{v}_2, \cdots, P\mathbf{v}_n)$
+ \item The columns of $P$ are orthonormal.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ We do them one by one:
+ \begin{enumerate}
+ \item $\Rightarrow$ (ii): $|P\mathbf{x}|^2 = (P\mathbf{x})^T(P\mathbf{x}) = \mathbf{x}^TP^TP\mathbf{x} = \mathbf{x}^T\mathbf{x} = |\mathbf{x}|^2$
+ \item $\Rightarrow$ (iii): $|P(\mathbf{x} + \mathbf{y})|^2 = |\mathbf{x + y}|^2$. The right hand side is
+ \[
+ (\mathbf{x}^T + \mathbf{y}^T)(\mathbf{x + y}) = \mathbf{x}^T\mathbf{x} + \mathbf{y}^T\mathbf{y} + \mathbf{y}^T\mathbf{x} + \mathbf{x}^T\mathbf{y} = |\mathbf{x}|^2 + |\mathbf{y}|^2 + 2\mathbf{x}^T\mathbf{y}.
+ \]
+ Similarly, the left hand side is
+ \[
+ |P\mathbf{x} + P\mathbf{y}|^2 = |P\mathbf{x}|^2 + |P\mathbf{y}|^2 + 2(P\mathbf{x})^TP\mathbf{y} = |\mathbf{x}|^2 + |\mathbf{y}|^2 + 2(P\mathbf{x})^TP\mathbf{y}.
+ \]
+ So $(P\mathbf{x})^TP\mathbf{y} = \mathbf{x}^T\mathbf{y}$.
+ \item $\Rightarrow$ (iv): $(P\mathbf{v}_i)^TP\mathbf{v}_j = \mathbf{v}_i^T\mathbf{v}_j = \delta_{ij}$. So $P\mathbf{v}_i$'s are also orthonormal.
+ \item $\Rightarrow$ (v): Take the $\mathbf{v}_i$'s to be the standard basis. So the columns of $P$, being $P\mathbf{e}_i$, are orthonormal.
+ \item $\Rightarrow$ (i): The columns of $P$ are orthonormal. Then $(PP^T)_{ij} = P_{ik}P_{jk} = (P_i)\cdot (P_j) = \delta_{ij}$, viewing $P_i$ as the $i$th column of $P$. So $PP^T = I$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Therefore the set of length-preserving matrices is precisely $O(n)$.
+
+\subsection{Lorentz transformations}
+Consider \emph{Minkowski} spacetime in $1 + 1$ dimensions (i.e.\ 1 space dimension and 1 time dimension).
+
+\begin{defi}[Minkowski inner product]
+ The \emph{Minkowski} inner product of 2 vectors $\mathbf{x}$ and $\mathbf{y}$ is
+ \[
+ \bra \mathbf{x}\mid \mathbf{y}\ket = \mathbf{x}^TJ\mathbf{y},
+ \]
+ where
+ \[
+ J =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix}
+ \]
+ Then $\bra \mathbf{x}\mid \mathbf{y}\ket = x_1y_1 - x_2y_2$.
+\end{defi}
+This is to be compared to the usual \emph{Euclidean} inner product of $\mathbf{x}, \mathbf{y}\in \R^2$, given by
+\[
+ \bra \mathbf{x}\mid \mathbf{y}\ket = \mathbf{x}^T\mathbf{y} = \mathbf{x}^TI\mathbf{y} = x_1y_1 + x_2y_2.
+\]
+
+\begin{defi}[Preservation of inner product]
+ A transformation matrix $M$ preserves the Minkowski inner product if
+ \[
+ \bra \mathbf{x}|\mathbf{y}\ket = \bra M\mathbf{x} | M\mathbf{y}\ket
+ \]
+ for all $\mathbf{x}, \mathbf{y}$.
+\end{defi}
+
+We know that $\mathbf{x}^TJ\mathbf{y} = (M\mathbf{x})^TJM\mathbf{y} = \mathbf{x}^T M^TJM\mathbf{y}$. Since this has to be true for all $\mathbf{x}$ and $\mathbf{y}$, we must have
+\[
+ J = M^TJM.
+\]
+We can show that $M$ takes the form of
+\[
+ H_\alpha = \begin{pmatrix}
+ \cosh \alpha & \sinh \alpha\\
+ \sinh \alpha & \cosh \alpha
+ \end{pmatrix}\text{ or } K_{\alpha/2} =
+ \begin{pmatrix}
+ \cosh\alpha & -\sinh\alpha\\
+ \sinh\alpha & -\cosh\alpha
+ \end{pmatrix}
+\]
+where $H_\alpha$ is a \emph{hyperbolic rotation}, and $K_{\alpha/2}$ is a \emph{hyperbolic reflection}.
+
+These are technically not \emph{all} the matrices that preserve the metric, since we have only included matrices with $M_{11} > 0$. In physics, these are the matrices we want, since $M_{11} < 0$ corresponds to inverting time, which is frowned upon.
+
+\begin{defi}[Lorentz matrix]
+ A \emph{Lorentz matrix} or a \emph{Lorentz boost} is a matrix in the form
+ \[
+ B_v = \frac{1}{\sqrt{1 - v^2}}
+ \begin{pmatrix}
+ 1 & v\\
+ v & 1
+ \end{pmatrix}.
+ \]
+ Here $|v| < 1$, where we have chosen units in which the speed of light is equal to $1$. We have $B_v = H_{\tanh^{-1}v}$.
+\end{defi}
+
+\begin{defi}[Lorentz group]
+ The \emph{Lorentz group} is the group of all Lorentz matrices under matrix multiplication.
+\end{defi}
+It is easy to prove that this is a group. For the closure axiom, we have $B_{v_1}B_{v_2} = B_{v_3}$, where
+\[
+ v_3 = \tanh(\tanh^{-1} v_1 + \tanh^{-1} v_2) = \frac{v_1 + v_2}{1 + v_1v_2}
+\]
+The set of all $B_v$ is a group of transformations which preserve the Minkowski inner product.
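Both the closure law $B_{v_1}B_{v_2} = B_{v_3}$ (with the velocity-addition rule above) and the metric-preservation property $B_v^TJB_v = J$ are easy to verify numerically:

```python
import math

def boost(v):
    """The Lorentz boost B_v in 1 + 1 dimensions, with c = 1."""
    g = 1 / math.sqrt(1 - v*v)
    return [[g, g*v], [g*v, g]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(X, Y):
    return all(math.isclose(X[i][j], Y[i][j], abs_tol=1e-12)
               for i in range(2) for j in range(2))

# closure: velocities compose by the relativistic addition rule
v1, v2 = 0.5, 0.3
v3 = (v1 + v2) / (1 + v1*v2)
assert close(matmul(boost(v1), boost(v2)), boost(v3))

# B_v preserves the Minkowski inner product: B^T J B = J
J = [[1, 0], [0, -1]]
B = boost(v1)
Bt = [[B[j][i] for j in range(2)] for i in range(2)]
assert close(matmul(Bt, matmul(J, B)), J)
```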
+\end{document}
diff --git a/books/cam/IB_E/metric_and_topological_spaces.tex b/books/cam/IB_E/metric_and_topological_spaces.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1007c31884612a2384c96b0b4af5894101679bf6
--- /dev/null
+++ b/books/cam/IB_E/metric_and_topological_spaces.tex
@@ -0,0 +1,1890 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Easter}
+\def\nyear {2015}
+\def\nlecturer {J.\ Rasmussen}
+\def\ncourse {Metric and Topological Spaces}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Metrics}\\
+Definition and examples. Limits and continuity. Open sets and neighbourhoods. Characterizing limits and continuity using neighbourhoods and open sets.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Topology}\\
+Definition of a topology. Metric topologies. Further examples. Neighbourhoods, closed sets, convergence and continuity. Hausdorff spaces. Homeomorphisms. Topological and non-topological properties. Completeness. Subspace, quotient and product topologies.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Connectedness}\\
+Definition using open sets and integer-valued functions. Examples, including intervals. Components. The continuous image of a connected space is connected. Path-connectedness. Path-connected spaces are connected but not conversely. Connected open sets in Euclidean space are path-connected.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Compactness}\\
+Definition using open covers. Examples: finite sets and [0, 1]. Closed subsets of compact spaces are compact. Compact subsets of a Hausdorff space must be closed. The compact subsets of the real line. Continuous images of compact sets are compact. Quotient spaces. Continuous real-valued functions on a compact space are bounded and attain their bounds. The product of two compact spaces is compact. The compact subsets of Euclidean space. Sequential compactness.\hspace*{\fill} [3]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+The course \emph{Metric and Topological Spaces} is divided into two parts. The first is on metric spaces, and the second is on topological spaces (duh).
+
+In Analysis, we studied real numbers a lot. We defined many properties such as convergence of sequences and continuity of functions. For example, if $(x_n)$ is a sequence in $\R$, $x_n \to x$ means
+\[
+ (\forall \varepsilon > 0)(\exists N)(\forall n > N)\, |x_n - x| < \varepsilon.
+\]
+Similarly, a function $f$ is continuous at $x_0$ if
+\[
+ (\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)\,|x - x_0| < \delta \Rightarrow |f(x) - f(x_0)| < \varepsilon.
+\]
+However, the definition of convergence doesn't really rely on $x_n$ being real numbers, except when calculating values of $|x_n - x|$. But what does $|x_n - x|$ really mean? It is the distance between $x_n$ and $x$. To define convergence, we don't really need notions like subtraction and absolute values. We simply need a (sensible) notion of distance between points.
+
+Given a set $X$, we can define a \emph{metric} (``distance function'') $d: X\times X\to \R$, where $d(x, y)$ is the distance between the points $x$ and $y$. Then we say a sequence $(x_n)$ in $X$ converges to $x$ if
+\[
+ (\forall \varepsilon > 0)(\exists N)(\forall n > N)\, d(x_n, x) < \varepsilon.
+\]
+Similarly, a function $f: X\to X$ is continuous at $x_0$ if
+\[
+ (\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)\,d(x, x_0) < \delta \Rightarrow d(f(x), f(x_0)) < \varepsilon.
+\]
+Of course, we will need the metric $d$ to satisfy certain conditions such as being non-negative, and we will explore these technical details in the first part of the course.
+
+As people studied metric spaces, it soon became evident that metrics are not really that useful a notion. Given a set $X$, it is possible to find many different metrics that completely agree on which sequences converge and what functions are continuous.
+
+It turns out that it is not the metric that determines (most of) the properties of the space. Instead, it is the \emph{open sets} induced by the metric (intuitively, an open set is a subset of $X$ without ``boundary'', like an open interval). Metrics that induce the same open sets will have almost identical properties (apart from the actual distance itself).
+
+The idea of a topological space is to just keep the notion of open sets and abandon metric spaces, and this turns out to be a really good idea. The second part of the course is the study of these topological spaces and defining a lot of interesting properties just in terms of open sets.
+
+\section{Metric spaces}
+\subsection{Definitions}
+As mentioned in the introduction, given a set $X$, it is often helpful to have a notion of distance between points. This distance function is known as the \emph{metric}.
+\begin{defi}[Metric space]
+ A \emph{metric space} is a pair $(X, d_X)$ where $X$ is a set (the \emph{space}) and $d_X$ is a function $d_X: X \times X \to \R$ (the \emph{metric}) such that for all $x, y, z$,
+ \begin{itemize}
+ \item $d_X(x, y) \geq 0$\hfill(non-negativity)
+ \item $d_X(x, y) = 0$ iff $x = y$\hfill(identity of indiscernibles)
+ \item $d_X(x, y) = d_X(y, x)$\hfill(symmetry)
+ \item $d_X(x, z) \leq d_X(x, y) + d_X(y, z)$\hfill(triangle inequality)
+ \end{itemize}
+\end{defi}
+
+We will have two quick examples of metrics before going into other important definitions. We will come to more examples afterwards.
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item (Euclidean ``usual'' metric) Let $X = \R^n$. Let
+ \[
+ d(\mathbf{v}, \mathbf{w}) = |\mathbf{v} - \mathbf{w}| = \sqrt{\sum_{i = 1}^n (v_i - w_i)^2}.
+ \]
+ This is the usual notion of distance we have in the $\R^n$ vector space. It is not difficult to show that this is indeed a metric (the fourth axiom follows from the Cauchy-Schwarz inequality).
+ \item (Discrete metric) Let $X$ be a set, and
+ \[
+ d_X(x, y) =
+ \begin{cases}
+ 1 & x \not= y\\
+ 0 & x = y
+ \end{cases}
+ \]
+ To show this is indeed a metric, we have to show it satisfies all the axioms. The first three axioms are trivially satisfied. How about the fourth? We can prove this by exhaustion.
+
+ Since the distance function can only return $0$ or $1$, $d(x, z)$ can be $0$ or $1$, while $d(x, y) + d(y, z)$ can be $0$, $1$ or $2$. For the fourth axiom to fail, we must have RHS $<$ LHS. This can only happen if the right hand side is $0$. But for the right hand side to be $0$, we must have $x = y = z$. So the left hand side is also $0$. So the fourth axiom is always satisfied.
+ \end{itemize}
+\end{eg}
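The exhaustion argument for the discrete metric is easy to spot-check by machine. Here is a minimal Python sketch (illustrative only; the function and variable names are ours) that verifies all four axioms over every triple from a small set:

```python
from itertools import product

def d_discrete(x, y):
    """The discrete metric: distance 1 between distinct points, 0 otherwise."""
    return 0 if x == y else 1

X = ["a", "b", "c"]
for x, y, z in product(X, repeat=3):
    assert d_discrete(x, y) >= 0                                    # non-negativity
    assert (d_discrete(x, y) == 0) == (x == y)                      # identity of indiscernibles
    assert d_discrete(x, y) == d_discrete(y, x)                     # symmetry
    assert d_discrete(x, z) <= d_discrete(x, y) + d_discrete(y, z)  # triangle inequality
```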
+
+Given a metric space $(X, d)$, we can generate another metric space by picking out a subset of $X$ and reusing the same metric.
+\begin{defi}[Metric subspace]
+ Let $(X, d_X)$ be a metric space, and $Y\subseteq X$. Then $(Y, d_Y)$ is a metric space, where $d_Y(a, b) = d_X(a, b)$, and is said to be a \emph{subspace} of $X$.
+\end{defi}
+
+\begin{eg}
+ $S^n = \{\mathbf{v}\in \R^{n + 1}: |\mathbf{v}| = 1\}$, the $n$-dimensional sphere, is a subspace of $\R^{n + 1}$.
+\end{eg}
+
+Finally, as promised, we come to the definition of convergent sequences and continuous functions.
+\begin{defi}[Convergent sequences]
+ Let $(x_n)$ be a sequence in a metric space $(X, d_X)$. We say $(x_n)$ \emph{converges to} $x\in X$, written $x_n \to x$, if $d(x_n, x) \to 0$ (as a real sequence). Equivalently,
+ \[
+ (\forall \varepsilon > 0)(\exists N)(\forall n > N)\, d(x_n, x) < \varepsilon.
+ \]
+\end{defi}
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Let $(\mathbf{v}_n)$ be a sequence in $\R^k$ with the Euclidean metric. Write $\mathbf{v}_n = (v_n^1, \cdots, v_n^k)$, and $\mathbf{v} = (v^1, \cdots, v^k)\in \R^k$. Then $\mathbf{v}_n \to \mathbf{v}$ iff $(v_n^i) \to v^i$ for all $i$.
+
+ \item Let $X$ have the discrete metric, and suppose $x_n \to x$. Pick $\varepsilon = \frac{1}{2}$. Then there is some $N$ such that $d(x_n, x) < \frac{1}{2}$ whenever $n > N$. But if $d(x_n, x) < \frac{1}{2}$, we must have $d(x_n, x) = 0$. So $x_n = x$. Hence if $x_n \to x$, then eventually all $x_n$ are equal to $x$.
+
+ \end{itemize}
+\end{eg}
+
+Similar to what we did in Analysis, we can show that limits are unique (if they exist).
+\begin{prop}
+ If $(X, d)$ is a metric space, $(x_n)$ is a sequence in $X$ such that $x_n \to x$, $x_n \to x'$, then $x = x'$.
+\end{prop}
+
+\begin{proof}
+ For any $\varepsilon > 0$, we know that there exists $N$ such that $d(x_n, x) < \varepsilon/2$ if $n > N$. Similarly, there exists some $N'$ such that $d(x_n, x') < \varepsilon/2$ if $n > N'$.
+
+ Hence if $n > \max(N, N')$, then
+ \begin{align*}
+ 0 &\leq d(x, x')\\
+ &\leq d(x, x_n) + d(x_n, x')\\
+ &= d(x_n, x) + d(x_n, x')\\
+ &\leq \varepsilon.
+ \end{align*}
+ So $0 \leq d(x, x') \leq \varepsilon$ for all $\varepsilon > 0$. So $d(x, x') = 0$, and $x = x'$.
+\end{proof}
+
+Note that to prove the above proposition, we used all of the four axioms. In the first line, we used non-negativity to say $0\leq d(x, x')$. In the second line, we used triangle inequality. In the third line, we used symmetry to swap $d(x, x_n)$ with $d(x_n, x)$. Finally, we used the identity of indiscernibles to conclude that $x = x'$.
+
+To define continuous functions, we opt here to use the sequence definition. We will later show that this is indeed equivalent to the $\varepsilon$-$\delta$ definition, as well as a few other more useful characterizations.
+\begin{defi}[Continuous function]
+ Let $(X, d_X)$ and $(Y, d_Y)$ be metric spaces, and $f:X \to Y$. We say $f$ is \emph{continuous} if $f(x_n) \to f(x)$ (in $Y$) whenever $x_n \to x$ (in $X$).
+\end{defi}
+
+\begin{eg}
+ Let $X = \R$ with the Euclidean metric. Let $Y = \R$ with the discrete metric. Then
+ $f:X \to Y$ that maps $f(x) = x$ is not continuous. This is because $1/n \to 0$ in the Euclidean metric, but $(1/n)$ does not converge to $0$ in the discrete metric.
+
+ On the other hand, $g: Y\to X$ by $g(x) = x$ is continuous, since a sequence in $Y$ that converges is eventually constant.
+\end{eg}
+
+\subsection{Examples of metric spaces}
+In this section, we will give four different examples of metrics, where the first two are metrics on $\R^2$. There is an obvious generalization to $\R^n$, but we will look at $\R^2$ specifically for the sake of simplicity.
+\begin{eg}[Manhattan metric]
+ Let $X = \R^2$, and define the metric as
+ \[
+ d(\mathbf{x}, \mathbf{y}) = d((x_1, x_2), (y_1, y_2)) = |x_1 - y_1| + |x_2 - y_2|.
+ \]
+ The first three axioms are again trivial. To prove the triangle inequality, we have
+ \begin{align*}
+ d(\mathbf{x}, \mathbf{y}) + d(\mathbf{y}, \mathbf{z}) &= |x_1 - y_1| + |x_2 - y_2| + |y_1 - z_1| + |y_2 - z_2|\\
+ &\geq |x_1 - z_1| + |x_2 - z_2|\\
+ &= d(\mathbf{x}, \mathbf{z}),
+ \end{align*}
+ using the triangle inequality for $\R$.
+
+ This metric represents the distance you have to walk from one point to another if you are only allowed to move horizontally and vertically (and not diagonally).
+\end{eg}
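The reduction to the triangle inequality in $\R$ can also be spot-checked numerically; the following Python sketch (names ours, a numerical illustration rather than a proof) tests random triples of points:

```python
import random

def d_manhattan(x, y):
    """Manhattan metric on R^2: horizontal plus vertical distance."""
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

random.seed(0)
for _ in range(1000):
    x, y, z = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    # d(x, z) <= d(x, y) + d(y, z), up to floating-point slack
    assert d_manhattan(x, z) <= d_manhattan(x, y) + d_manhattan(y, z) + 1e-12
```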
+
+\begin{eg}[British railway metric]
+ Let $X = \R^2$. We define
+ \[
+ d(\mathbf{x}, \mathbf{y}) =
+ \begin{cases}
+ |\mathbf{x} - \mathbf{y}|&\text{if }\mathbf{x} = k\mathbf{y}\text{ for some }k\in \R\\
+ |\mathbf{x}| + |\mathbf{y}|&\text{otherwise}
+ \end{cases}
+ \]
+ To explain the name of this metric, think of Britain with London as the origin. Since the railway system is \st{stupid} less than ideal, all trains go through London. For example, if you want to go from Oxford to Cambridge (and obviously not the other way round), you first go from Oxford to London, then London to Cambridge. So the distance traveled is the distance from London to Oxford plus the distance from London to Cambridge.
+
+ The exception is when the two destinations lie along the same line, in which case, you can directly take the train from one to the other without going through London, and hence the ``if $\mathbf{x} = k\mathbf{y}$'' clause.
+\end{eg}
+
+\begin{eg}[$p$-adic metric]
+ Let $p\in \Z$ be a prime number. We first define the norm $|n|_p$ to be $p^{-k}$, where $p^k$ is the highest power of $p$ that divides $n$. If $n = 0$, we let $|n|_p = 0$. For example, $|20|_2 = |2^2\cdot 5|_2 = 2^{-2}$.
+
+ Now take $X = \Z$, and let $d_p (a, b) = |a - b|_p$. The first three axioms are trivial, and the triangle inequality can be proved by making some number-theoretical arguments about divisibility.
+
+ This metric has rather interesting properties. With respect to $d_2$, we have $1, 2, 4, 8, 16, 32, \cdots \to 0$, while $1, 2, 3, 4, \cdots$ does not converge. We can also use it to prove certain number-theoretical results, but we will not go into details here.
+\end{eg}
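The $2$-adic behaviour above is easy to observe computationally. Here is a small Python sketch, assuming the definition of $|n|_p$ above (the function names are ours):

```python
def p_adic_norm(n, p):
    """|n|_p = p^(-k), where p^k is the highest power of p dividing n; |0|_p = 0."""
    if n == 0:
        return 0.0
    n, k = abs(n), 0
    while n % p == 0:
        n //= p
        k += 1
    return p ** (-k)

def d_p(a, b, p):
    """The p-adic metric on the integers."""
    return p_adic_norm(a - b, p)

assert p_adic_norm(20, 2) == 2 ** -2   # |20|_2 = |2^2 * 5|_2
assert d_p(2 ** 20, 0, 2) == 2 ** -20  # powers of 2 get close to 0 in d_2
assert d_p(3, 0, 2) == 1               # but 1, 2, 3, 4, ... keeps distance 1 from 0
```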
+
+\begin{eg}[Uniform metric]
+ Let $X = C[0, 1]$ be the set of all continuous functions on $[0, 1]$. Then define
+ \[
+ d(f, g) = \max_{x\in [0, 1]}|f(x) - g(x)|.
+ \]
+ The maximum always exists since continuous functions on $[0, 1]$ are bounded and attain their bounds.
+
+ Now let $F: C[0, 1] \to \R$ be defined by $F(f) = f(\frac{1}{2})$. Then this is continuous with respect to the uniform metric on $C[0, 1]$ and the usual metric on $\R$:
+
+ Let $f_n \to f$ in the uniform metric. Then we have to show that $F(f_n) \to F(f)$, i.e.\ $f_n(\frac{1}{2}) \to f(\frac{1}{2})$. This is easy, since we have
+ \[
+ 0 \leq |F(f_n) - F(f)| = |f_n(\tfrac{1}{2}) - f(\tfrac{1}{2})| \leq \max|f_n(x) - f(x)| \to 0.
+ \]
+ So $|f_n(\frac{1}{2}) - f(\frac{1}{2})| \to 0$. So $f_n(\frac{1}{2}) \to f(\frac{1}{2})$.
+\end{eg}
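We can mimic this argument numerically. The sketch below approximates the uniform metric by sampling on a grid (a true maximum over all of $[0, 1]$ is not computable this way, so this is only an approximation; all names are ours), and checks the bound $|F(f_n) - F(f)| \leq d(f_n, f)$:

```python
def d_uniform(f, g, samples=1001):
    """Grid approximation to the uniform metric max |f(x) - g(x)| on [0, 1]."""
    return max(abs(f(i / (samples - 1)) - g(i / (samples - 1)))
               for i in range(samples))

def F(f):
    """Evaluation at 1/2 -- continuous with respect to the uniform metric."""
    return f(0.5)

f = lambda x: x * x
g = lambda x: x * x + 0.01 * x
# |F(f) - F(g)| is dominated by the (approximate) uniform distance.
assert abs(F(f) - F(g)) <= d_uniform(f, g) + 1e-12
```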
+
+\subsection{Norms}
+There are several notions on vector spaces that are closely related to metrics. We'll look at \emph{norms} and \emph{inner products} of vector spaces, and show that they all naturally induce a metric on the space.
+
+First of all, we define the norm. This can be thought of as the ``length'' of a vector in the vector space.
+\begin{defi}[Norm]
+ Let $V$ be a real vector space. A \emph{norm} on $V$ is a function $\|\ph \|: V\to \R$ such that
+ \begin{itemize}
+ \item $\|\mathbf{v}\| \geq 0$ for all $\mathbf{v}\in V$
+ \item $\|\mathbf{v}\| = 0$ if and only if $\mathbf{v} = \mathbf{0}$.
+ \item $\|\lambda \mathbf{v}\| = |\lambda|\|\mathbf{v}\|$
+ \item $\|\mathbf{v} + \mathbf{w}\| \leq \|\mathbf{v}\| + \|\mathbf{w}\|$.
+ \end{itemize}
+\end{defi}
+
+\begin{eg}
+ Let $V = \R^n$. There are several possible norms we can define on $\R^n$:
+ \begin{align*}
+ \|\mathbf{v}\|_1 &= \sum_{i = 1}^n |v_i|\\
+ \|\mathbf{v}\|_2 &= \sqrt{\sum_{i = 1}^n v_i^2}\\
+ \|\mathbf{v}\|_\infty &= \max \{|v_i|: 1 \leq i \leq n\}.
+ \end{align*}
+ In general, we can define the norm
+ \[
+ \|\mathbf{v}\|_p = \left(\sum_{i = 1}^n |v_i|^p\right)^{1/p}.
+ \]
+ for any $1 \leq p < \infty$, and $\|\mathbf{v}\|_\infty$ is the limit of $\|\mathbf{v}\|_p$ as $p\to \infty$.
+
+ Proof that these are indeed norms is left as an exercise for the reader (in the example sheets).
+\end{eg}
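The claim that $\|\mathbf{v}\|_\infty$ is the limit of $\|\mathbf{v}\|_p$ can at least be observed numerically. A Python sketch (for finite $p$ only; names ours):

```python
def norm_p(v, p):
    """The p-norm on R^n for finite p >= 1."""
    return sum(abs(x) ** p for x in v) ** (1 / p)

def norm_inf(v):
    """The infinity-norm: largest coordinate in absolute value."""
    return max(abs(x) for x in v)

v = [3.0, -4.0, 1.0]
assert abs(norm_p(v, 1) - 8.0) < 1e-9           # |3| + |-4| + |1|
assert abs(norm_p(v, 2) - 26 ** 0.5) < 1e-9     # sqrt(9 + 16 + 1)
assert abs(norm_p(v, 100) - norm_inf(v)) < 0.1  # already close to max = 4
```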
+
+A norm naturally induces a metric on $V$:
+\begin{lemma}
+ If $\|\ph\|$ is a norm on $V$, then
+ \[
+ d(\mathbf{v}, \mathbf{w}) = \|\mathbf{v} - \mathbf{w}\|
+ \]
+ defines a metric on $V$.
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $d(\mathbf{v}, \mathbf{w}) = \|\mathbf{v} - \mathbf{w}\| \geq 0$ by the definition of the norm.
+ \item $d(\mathbf{v}, \mathbf{w}) = 0 \Leftrightarrow \|\mathbf{v} - \mathbf{w}\| = 0 \Leftrightarrow \mathbf{v} - \mathbf{w} = \mathbf{0} \Leftrightarrow \mathbf{v} = \mathbf{w}$.
+ \item $d(\mathbf{w}, \mathbf{v}) = \|\mathbf{w} - \mathbf{v}\| = \|(-1)(\mathbf{v} - \mathbf{w})\| = |-1| \|\mathbf{v} - \mathbf{w}\| = d(\mathbf{v}, \mathbf{w})$.
+ \item $d(\mathbf{u}, \mathbf{v}) + d(\mathbf{v}, \mathbf{w}) = \|\mathbf{u} - \mathbf{v}\| + \|\mathbf{v} - \mathbf{w}\| \geq \|\mathbf{u} - \mathbf{w}\| = d(\mathbf{u}, \mathbf{w})$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ We have the following norms on $C[0, 1]$:
+ \begin{align*}
+ \|f\|_1 &= \int_0^1 |f(x)|\;\d x\\
+ \|f\|_2 &= \sqrt{\int_0^1 f(x)^2 \;\d x}\\
+ \|f\|_{\infty} &= \max_{x \in [0, 1]}|f(x)|
+ \end{align*}
+ The first two are known as the $L^1$ and $L^2$ norms. The last is called the uniform norm, since it induces the uniform metric.
+\end{eg}
+It is easy to show that these are indeed norms. The only slightly tricky part is to show that $\|f\| = 0$ iff $f = 0$, which we obtain via the following lemma.
+
+\begin{lemma}
+ Let $f\in C[0, 1]$ satisfy $f(x) \geq 0$ for all $x\in [0, 1]$. If $f(x)$ is not constantly $0$, then $\int_0^1 f(x)\;\d x > 0$.
+\end{lemma}
+
+\begin{proof}
+ Pick $x_0 \in [0, 1]$ with $f(x_0) = a > 0$. Then since $f$ is continuous, there is a $\delta > 0$ such that $|f(x) - f(x_0)| < a/2$ whenever $|x - x_0| < \delta$. So $f(x) > a/2$ in this region.
+
+ Take
+ \[
+ g(x) =
+ \begin{cases}
+ a/2 & |x - x_0| < \delta\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ Then $f(x) \geq g(x)$ for all $x\in [0, 1]$. Since $(x_0 - \delta, x_0 + \delta)\cap [0, 1]$ is an interval of length at least $\min(\delta, 1)$, we get
+ \[
+ \int_0^1 f(x)\;\d x \geq \int_0^1 g(x)\;\d x \geq \frac{a}{2}\min(\delta, 1) > 0.\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Let $X = C[0, 1]$, and let
+ \[
+ d_1(f, g) = \|f - g\|_1 = \int_0^1 |f(x) - g(x)|\;\d x.
+ \]
+ Define the sequence
+ \[
+ f_n =
+ \begin{cases}
+ 1 - nx & x \in [0, \frac{1}{n}]\\
+ 0 & x \geq \frac{1}{n}.
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0) node [above] {$x$};
+ \draw [->] (0, -0.5) -- (0, 2) node [right] {$f_n(x)$};
+ \draw [red, thick] (0, 1.5) -- (0.5, 0) node [below, black] {$\frac{1}{n}$} -- (2, 0) node [below, black] {$1$};
+ \end{tikzpicture}
+ \end{center}
+ Then
+ \[
+ \|f_n\|_1 = \frac{1}{2}\cdot \frac{1}{n} \cdot 1 = \frac{1}{2n} \to 0
+ \]
+ as $n \to \infty$. So $f_n \to 0$ in $(X, d_1)$, where $0$ denotes the constant function $0(x) = 0$.
+
+ On the other hand,
+ \[
+ \|f_n\|_\infty = \max_{x\in [0, 1]} |f_n(x)| = 1.
+ \]
+ So $f_n \not\to 0$ in the uniform metric.
+
+ So the function $(C[0, 1], d_1) \to (C[0, 1], d_\infty)$ that maps $f\mapsto f$ is \emph{not} continuous. This is similar to the case of the identity function from $\R$ with the usual metric to $\R$ with the discrete metric, which is also not continuous. However, the discrete metric is a \emph{silly} metric, while $d_1$ is a genuinely useful metric here.
+
+ Using the same example, we can show that the function $G: (C[0, 1], d_1) \to (\R, \text{usual})$ with $G(f) = f(0)$ is not continuous.
+\end{eg}
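Both computations in this example can be checked numerically. The sketch below approximates $\|f_n\|_1$ by a Riemann sum and $\|f_n\|_\infty$ on a grid (both are approximations; the names are ours):

```python
def f_n(n):
    """The tent function f_n(x) = max(1 - n x, 0) from the example."""
    return lambda x: max(1.0 - n * x, 0.0)

def norm_1(f, samples=100000):
    """Midpoint Riemann sum approximating the L^1 norm on [0, 1]."""
    h = 1.0 / samples
    return sum(abs(f((i + 0.5) * h)) for i in range(samples)) * h

def norm_inf(f, samples=10001):
    """Grid approximation to the uniform norm on [0, 1]."""
    return max(abs(f(i / (samples - 1))) for i in range(samples))

# ||f_n||_1 = 1/(2n) -> 0, while ||f_n||_inf = 1 for every n.
assert abs(norm_1(f_n(10)) - 1 / 20) < 1e-3
assert norm_inf(f_n(10)) == 1.0
```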
+
+We'll now define the inner product of a real vector space. This is a generalization of the notion of the ``dot product''.
+\begin{defi}[Inner product]
+ Let $V$ be a real vector space. An \emph{inner product} on $V$ is a function $\bra \ph, \ph\ket: V\times V \to \R$ such that
+ \begin{enumerate}
+ \item $\bra \mathbf{v}, \mathbf{v}\ket \geq 0$ for all $\mathbf{v}\in V$
+ \item $\bra \mathbf{v}, \mathbf{v}\ket = 0$ if and only if $\mathbf{v} = \mathbf{0}$.
+ \item $\bra \mathbf{v}, \mathbf{w}\ket = \bra \mathbf{w}, \mathbf{v}\ket$.
+ \item $\bra \mathbf{v}_1 + \lambda \mathbf{v}_2, \mathbf{w}\ket = \bra \mathbf{v}_1, \mathbf{w}\ket + \lambda\bra \mathbf{v}_2 , \mathbf{w}\ket$.
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $V = \R^n$. Then
+ \[
+ \bra \mathbf{v}, \mathbf{w} \ket = \sum_{i = 1}^n v_i w_i
+ \]
+ is an inner product.
+ \item Let $V = C[0, 1]$. Then
+ \[
+ \bra f, g\ket = \int_0^1 f(x) g(x) \;\d x
+ \]
+ is an inner product.
+ \end{enumerate}
+\end{eg}
+
+We just showed that norms induce metrics. The proof was completely trivial as the definitions were almost the same. Now we want to prove that inner products induce norms. However, this is slightly less trivial. To do so, we need the Cauchy-Schwarz inequality.
+
+\begin{thm}[Cauchy-Schwarz inequality]
+ If $\bra\ph,\ph\ket$ is an inner product, then
+ \[
+ \bra \mathbf{v}, \mathbf{w}\ket^2 \leq \bra \mathbf{v}, \mathbf{v}\ket\bra\mathbf{w},\mathbf{w}\ket.
+ \]
+\end{thm}
+
+\begin{proof}
+ For any $x$, we have
+ \[
+ \bra \mathbf{v} + x\mathbf{w}, \mathbf{v} + x \mathbf{w}\ket = \bra \mathbf{v}, \mathbf{v}\ket + 2x\bra \mathbf{v}, \mathbf{w}\ket + x^2 \bra \mathbf{w}, \mathbf{w}\ket \geq 0.
+ \]
+ Seen as a quadratic in $x$, since it is always non-negative, it can have at most one real root. So
+ \[
+ (2 \bra \mathbf{v}, \mathbf{w}\ket)^2 - 4\bra \mathbf{v}, \mathbf{v}\ket \bra \mathbf{w}, \mathbf{w}\ket \leq 0.
+ \]
+ So the result follows.
+\end{proof}
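For the standard inner product on $\R^n$, the inequality is easy to spot-check on random vectors (a numerical illustration only, not a proof; names ours):

```python
import random

def inner(v, w):
    """The standard inner product on R^n."""
    return sum(a * b for a, b in zip(v, w))

random.seed(1)
for _ in range(1000):
    v = [random.uniform(-1, 1) for _ in range(5)]
    w = [random.uniform(-1, 1) for _ in range(5)]
    # <v, w>^2 <= <v, v> <w, w>, up to floating-point slack
    assert inner(v, w) ** 2 <= inner(v, v) * inner(w, w) + 1e-12
```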
+
+With this, we can show that inner products induce norms (and hence metrics).
+\begin{lemma}
+ If $\bra \ph, \ph \ket$ is an inner product on $V$, then
+ \[
+ \|\mathbf{v}\| = \sqrt{\bra \mathbf{v}, \mathbf{v}\ket}
+ \]
+ is a norm.
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $\|\mathbf{v}\| = \sqrt{\bra \mathbf{v}, \mathbf{v}\ket} \geq 0$.
+ \item $\|\mathbf{v}\| = 0\Leftrightarrow \bra \mathbf{v}, \mathbf{v}\ket = 0 \Leftrightarrow \mathbf{v} = \mathbf{0}$.
+ \item $\|\lambda \mathbf{v}\| = \sqrt{\bra \lambda \mathbf{v}, \lambda \mathbf{v}\ket} = \sqrt{\lambda^2 \bra \mathbf{v}, \mathbf{v}\ket} = |\lambda| \|\mathbf{v}\|$.
+ \item
+ \begin{align*}
+ (\|\mathbf{v}\| + \|\mathbf{w}\|)^2 &= \|\mathbf{v}\|^2 + 2\|\mathbf{v}\|\|\mathbf{w}\| + \|\mathbf{w}\|^2\\
+ &\geq \bra \mathbf{v}, \mathbf{v}\ket + 2\bra \mathbf{v}, \mathbf{w}\ket + \bra \mathbf{w}, \mathbf{w}\ket \\
+ &= \|\mathbf{v} + \mathbf{w}\|^2\qedhere
+ \end{align*}
+ \end{enumerate}
+\end{proof}
+
+\subsection{Open and closed subsets}
+In this section, we will study open and closed subsets of metric spaces. We will then proceed to prove certain key properties of open and closed subsets, and show that they are all we need to define continuity. This will lead to the next chapter (and the rest of the course), where we completely abandon the idea of metrics and only talk about open and closed subsets.
+
+To define open and closed subsets, we first need the notion of balls.
+\begin{defi}[Open and closed balls]
+ Let $(X, d)$ be a metric space. For any $x\in X$, $r\in \R$,
+ \[
+ B_r(x) = \{y\in X: d(y, x) < r\}
+ \]
+ is the \emph{open ball} centered at $x$.
+ \[
+ \bar{B}_r(x) = \{y\in X: d(y, x) \leq r\}
+ \]
+ is the \emph{closed ball} centered at $x$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item When $X = \R$, $B_r(x) = (x - r, x + r)$. $\bar{B}_r(x) = [x - r, x + r]$.
+ \item When $X = \R^2$,
+ \begin{enumerate}
+ \item If $d$ is the metric induced by the norm $\|\mathbf{v}\|_1 = |v_1| + |v_2|$, then an open ball is a rotated square.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$v_1$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$v_2$};
+ \draw [dashed, fill=gray!50!white] (1, 0) -- (0, 1) -- (-1, 0) -- (0, -1) --cycle;
+ \end{tikzpicture}
+ \end{center}
+ \item If $d$ is the metric induced by the norm $\|\mathbf{v}\|_2 = \sqrt{v_1^2 + v_2^2}$, then an open ball is an actual disk.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$v_1$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$v_2$};
+ \draw [dashed, fill=gray!50!white] circle [radius=1];
+ \end{tikzpicture}
+ \end{center}
+
+ \item If $d$ is the metric induced by the norm $\|\mathbf{v}\|_\infty = \max\{|v_1|, |v_2|\}$, then an open ball is a square.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$v_1$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$v_2$};
+ \draw [dashed, fill=gray!50!white] (1, 1) -- (1, -1) -- (-1, -1) -- (-1, 1) --cycle;
+ \end{tikzpicture}
+ \end{center}
+ \end{enumerate}
+ \end{enumerate}
+\end{eg}
+\begin{defi}[Open subset]
+ $U\subseteq X$ is an \emph{open subset} if for every $x\in U$, $\exists \delta > 0$ such that $B_\delta(x) \subseteq U$.
+
+ $C\subseteq X$ is a \emph{closed subset} if $X\setminus C \subseteq X$ is open.
+\end{defi}
+As if we've not emphasized this enough, this is a very \emph{very} important definition.
+
+We first prove that this is a sensible definition.
+\begin{lemma}
+ The open ball $B_r(x) \subseteq X$ is an open subset, and the closed ball $\bar{B}_r(x) \subseteq X$ is a closed subset.
+\end{lemma}
+
+\begin{proof}
+ Given $y\in B_r(x)$, we must find $\delta > 0$ with $B_\delta(y) \subseteq B_r(x)$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$v_1$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$v_2$};
+ \draw [dashed, fill=gray!50!white] circle [radius=1];
+ \draw (0, 0) -- (0.707, 0.707) node [below, pos = 0.5] {$r$};
+ \draw (0.5, 0.5) node [circ] {} node [right] {$y$};
+ \draw [dashed] (0.5, 0.5) circle [radius=0.293];
+ \end{tikzpicture}
+ \end{center}
+ Since $y\in B_r(x)$, we must have $a = d(y, x) < r$. Let $\delta = r - a > 0$. Then if $z \in B_\delta (y)$, we have
+ \[
+ d(z, x)\leq d(z, y) + d(y, x) < (r - a) + a = r.
+ \]
+ So $z \in B_r(x)$. So $B_\delta(y) \subseteq B_r(x)$ as desired.
+
+ The second statement is equivalent to saying that $X\setminus \bar{B}_r(x) = \{y\in X: d(y, x) > r\}$ is open. The proof is very similar.
+\end{proof}
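The choice $\delta = r - d(y, x)$ in the proof can also be visualized numerically: sample points of $B_\delta(y)$ and confirm they land in $B_r(x)$. A Python sketch for the Euclidean metric on $\R^2$ (all names ours):

```python
import math
import random

def d(x, y):
    """Euclidean metric on R^2."""
    return math.hypot(x[0] - y[0], x[1] - y[1])

random.seed(3)
centre, r = (0.0, 0.0), 1.0
y = (0.5, 0.5)               # a point of B_r(centre)
delta = r - d(y, centre)     # the radius chosen in the proof
for _ in range(1000):
    # a random point z of B_delta(y)
    t = random.uniform(0.0, 2.0 * math.pi)
    s = delta * random.random()
    z = (y[0] + s * math.cos(t), y[1] + s * math.sin(t))
    assert d(z, centre) < r  # z lies in B_r(centre), as the triangle inequality predicts
```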
+Note that openness is a property of a \emph{subset}. $A\subseteq X$ being open depends on both $A$ and $X$, not just $A$. For example, $[0, \frac{1}{2})$ is not an open subset of $\R$, but is an open subset of $[0, 1]$ (since it is $B_{\frac{1}{2}}(0)$), both with the Euclidean metric. However, we are often lazy and just say ``open set'' instead of ``open subset''.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $(0, 1)\subseteq \R$ is open, while $[0, 1]\subseteq \R$ is closed. $[0, 1)\subseteq \R$ is neither closed nor open.
+ \item $\Q\subseteq \R$ is neither open nor closed, since any open interval contains both rational numbers and irrational numbers. So any open interval cannot be a subset of $\Q$ or $\R\setminus \Q$.
+ \item Let $X = [-1, 1] \setminus \{0\}$ with the Euclidean metric. Let $A = [-1, 0)\subseteq X$. Then $A$ is open since it is equal to $B_1(-1)$. $A$ is also closed since it is equal to $\bar B_{\frac{1}{2}}(-\frac{1}{2})$.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Open neighbourhood]
+ If $x\in X$, an \emph{open neighbourhood} of $x$ is an open $U\subseteq X$ with $x\in U$.
+\end{defi}
+This is not really an interesting definition, but is simply a convenient shorthand for ``open subset containing $x$''.
+
+\begin{lemma}
+ If $U$ is an open neighbourhood of $x$ and $x_n \to x$, then $\exists N$ such that $x_n \in U$ for all $n > N$.
+\end{lemma}
+
+\begin{proof}
+ Since $U$ is open, there exists some $\delta > 0$ such that $B_\delta(x)\subseteq U$. Since $x_n \to x$, $\exists N$ such that $d(x_n, x) < \delta$ for all $n > N$. This implies that $x_n \in B_\delta(x)$ for all $n > N$. So $x_n \in U$ for all $n > N$.
+\end{proof}
+
+\begin{defi}[Limit point]
+ Let $A\subseteq X$. Then $x\in X$ is a \emph{limit point} of $A$ if there is a sequence $x_n \to x$ such that $x_n \in A$ for all $n$.
+\end{defi}
+Intuitively, a limit point is a point we can get arbitrarily close to.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item If $a\in A$, then $a$ is a limit point of $A$, by taking the sequence $a, a, a, a, \cdots$.
+ \item If $A = (0, 1)\subseteq \R$, then $0$ is a limit point of $A$, e.g.\ take $x_n = \frac{1}{n}$.
+ \item Every $x\in \R$ is a limit point of $\Q$.
+ \end{enumerate}
+\end{eg}
+
+It is possible to characterize closed subsets by limit points. This is often a convenient way of proving that sets are closed.
+\begin{prop}
+ $C\subseteq X$ is a closed subset if and only if every limit point of $C$ is an element of $C$.
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ Suppose $C$ is closed and $x_n \to x$, $x_n \in C$. We have to show that $x\in C$.
+
+ Since $C$ is closed, $A = X\setminus C\subseteq X$ is open. Suppose the contrary, that $x\not\in C$. Then $x\in A$. Hence $A$ is an open neighbourhood of $x$. Then by our previous lemma, we know that there is some $N$ such that $x_n \in A$ for all $n > N$. So $x_{N + 1} \in A$. But we know that $x_{N + 1} \in C$ by assumption. This is a contradiction. So we must have $x\in C$.
+
+ $(\Leftarrow)$ Suppose that $C$ is not closed. We have to find a limit point not in $C$.
+
+ Since $C$ is not closed, $A = X\setminus C$ is not open. So $\exists x\in A$ such that $B_\delta(x)\not\subseteq A$ for all $\delta > 0$. This means that $B_\delta(x) \cap C \not= \emptyset$ for all $\delta > 0$.
+
+ So pick $x_n \in B_{\frac{1}{n}}(x)\cap C$ for each $n > 0$. Then $x_n \in C$ and $d(x_n, x) < \frac{1}{n} \to 0$. So $x_n \to x$. So $x$ is a limit point of $C$ which is not in $C$.
+\end{proof}
+
+Finally, we get to the Really Important Result\textsuperscript{TM} that tells us metrics are useless.
+\begin{prop}[Characterization of continuity]
+ Let $(X, d_X)$ and $(Y, d_Y)$ be metric spaces, and $f: X\to Y$. The following conditions are equivalent:
+ \begin{enumerate}
+ \item $f$ is continuous
+ \item If $x_n \to x$, then $f(x_n) \to f(x)$ (which is the definition of continuity)
+ \item For any closed subset $C\subseteq Y$, $f^{-1}(C)$ is closed in $X$.
+ \item For any open subset $U\subseteq Y$, $f^{-1}(U)$ is open in $X$.
+ \item For any $x\in X$ and $\varepsilon > 0$, $\exists \delta > 0$ such that $f(B_\delta(x)) \subseteq B_\varepsilon(f(x))$. Alternatively, $d_X(x, z) < \delta \Rightarrow d_Y(f(x), f(z)) < \varepsilon$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item $1 \Leftrightarrow 2$: by definition
+ \item $2 \Rightarrow 3$: Suppose $C\subseteq Y$ is closed. We want to show that $f^{-1}(C)$ is closed. So let $x_n \to x$, where $x_n \in f^{-1}(C)$.
+
+ We know that $f(x_n) \to f(x)$ by (2) and $f(x_n) \in C$. So $f(x)$ is a limit point of $C$. Since $C$ is closed, $f(x) \in C$. So $x\in f^{-1}(C)$. So every limit point of $f^{-1}(C)$ is in $f^{-1}(C)$. So $f^{-1}(C)$ is closed.
+ \item $3 \Rightarrow 4$: If $U\subseteq Y$ is open, then $Y\setminus U$ is closed in $Y$. So $f^{-1}(Y\setminus U) = X\setminus f^{-1}(U)$ is closed in $X$. So $f^{-1}(U)\subseteq X$ is open.
+
+ \item $4 \Rightarrow 5$: Given $x\in X, \varepsilon > 0$, $B_\varepsilon(f(x))$ is open in $Y$. By (4), we know $f^{-1}(B_\varepsilon(f(x))) = A$ is open in $X$. Since $x\in A$, $\exists \delta > 0$ with $B_\delta (x) \subseteq A$. So
+ \[
+ f(B_\delta(x)) \subseteq f(A) = f(f^{-1}(B_\varepsilon (f(x)))) \subseteq B_\varepsilon (f(x)).
+ \]
+ \item $5 \Rightarrow 2$: Suppose $x_n \to x$. Given $\varepsilon > 0$, $\exists \delta > 0$ such that $f(B_\delta(x)) \subseteq B_\varepsilon(f(x))$. Since $x_n \to x$, $\exists N$ such that $x_n \in B_\delta (x)$ for all $n > N$. Then $f(x_n) \in f(B_\delta(x))\subseteq B_\varepsilon(f(x))$ for all $n > N$. So $f(x_n) \to f(x)$.\qedhere
+ \end{itemize}
+\end{proof}
+
+The third and fourth conditions allow us to immediately decide whether a subset is open or closed in some cases.
+
+\begin{eg}
+ Let $f: \R^3 \to \R$ be defined as
+ \[
+ f(x_1, x_2, x_3) = x_1^2 + x_2^4 x_3^6 + x_1^8 x_3^2.
+ \]
+ Then this is continuous. So $\{\mathbf{x}\in \R^3: f(\mathbf{x}) \leq 1\} = f^{-1}((-\infty, 1])$ is closed in $\R^3$.
+\end{eg}
+
+Before we end, we prove some key properties of open subsets. These will be used as the defining properties of open subsets in the next chapter.
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item $\emptyset$ and $X$ are open subsets of $X$.
+ \item Suppose $V_\alpha \subseteq X$ is open for all $\alpha \in A$. Then $\displaystyle U = \bigcup_{\alpha \in A}V_\alpha$ is open in $X$.
+ \item If $V_1, \cdots, V_n\subseteq X$ are open, then so is $\displaystyle V = \bigcap_{i = 1}^n V_i$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $\emptyset$ satisfies the definition of an open subset vacuously. $X$ is open since for any $x$, $B_1(x) \subseteq X$.
+ \item If $x\in U$, then $x\in V_\alpha$ for some $\alpha$. Since $V_\alpha$ is open, there exists $\delta > 0$ such that $B_\delta(x) \subseteq V_\alpha$. So $\displaystyle B_\delta (x) \subseteq \bigcup_{\alpha \in A}V_\alpha = U$. So $U$ is open.
+ \item If $x\in V$, then $x\in V_i$ for all $i = 1, \cdots, n$. So $\exists \delta_i > 0$ with $B_{\delta_i}(x) \subseteq V_i$. Take $\delta = \min\{\delta_1, \cdots, \delta_n\}$. So $B_\delta(x) \subseteq V_i$ for all $i$. So $B_\delta(x) \subseteq V$. So $V$ is open.\qedhere
+ \end{enumerate}
+\end{proof}
+Note that we can take infinite unions and finite intersections, but not infinite intersections. For example, the intersection of all the intervals $(-\frac{1}{n}, \frac{1}{n})$ is $\{0\}$, which is not open.
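The failure for infinite intersections can be seen concretely: every nonzero $x$ eventually escapes $(-\frac{1}{n}, \frac{1}{n})$. A tiny Python sketch (names ours):

```python
def in_interval(x, n):
    """Membership of x in the open interval (-1/n, 1/n)."""
    return -1.0 / n < x < 1.0 / n

# 0 lies in every interval, but any fixed x != 0 drops out once n > 1/|x|,
# so only 0 survives the intersection -- and {0} is not open.
assert all(in_interval(0.0, n) for n in range(1, 10000))
assert not all(in_interval(0.001, n) for n in range(1, 10000))
```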
+
+\section{Topological spaces}
+\subsection{Definitions}
+We have previously shown that a function $f$ is continuous iff $f^{-1}(U)$ is open whenever $U$ is open. Convergence can also be characterized using open sets only. This suggests that we can dump the metric and just focus on the open sets.
+
+What we will do is to define the \emph{topology} of a space $X$ to be the set of open sets of $X$. Then this topology would define most of the structure or geometry of $X$, and we don't need to care about metrics.
+
+\begin{defi}[Topological space]
+ A \emph{topological space} is a set $X$ (the space) together with a set $\cU \subseteq \P(X)$ (the topology) such that:
+ \begin{enumerate}
+ \item $\emptyset, X\in \cU$
+ \item If $V_\alpha\in \cU$ for all $\alpha \in A$, then $\displaystyle \bigcup_{\alpha\in A}V_\alpha \in \cU$.
+ \item If $V_1, \cdots, V_n \in \cU$, then $\displaystyle \bigcap_{i = 1}^n V_i \in \cU$.
+ \end{enumerate}
+ The elements of $X$ are the \emph{points}, and the elements of $\cU$ are the open subsets of $X$.
+\end{defi}
+
+\begin{defi}[Induced topology]
+ Let $(X, d)$ be a metric space. Then the topology \emph{induced by} $d$ is the set of all open sets of $X$ under $d$.
+\end{defi}
+
+\begin{eg}
+ Let $X = \R^n$ and consider the two metrics $d_1(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|_1$ and $d_\infty(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|_\infty$. We will show that they induce the same topology.
+
+ Recall that the metrics are defined by
+ \[
+ \|\mathbf{v}\|_1 = \sum_{i = 1}^n |v_i|,\quad \|\mathbf{v}\|_\infty = \max_{1 \leq i \leq n}|v_i|.
+ \]
+ This implies that
+ \[
+ \|\mathbf{v}\|_\infty \leq \|\mathbf{v}\|_1 \leq n\|\mathbf{v}\|_\infty.
+ \]
+ This in turn implies that
+ \[
+ B_r^\infty(x) \supseteq B_r^1 (x) \supseteq B_{r/n}^\infty(x).
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$v_1$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$v_2$};
+ \draw [red] (1, 0) -- (0, 1) node [pos=0.6, right] {$B^1$} -- (-1, 0) -- (0, -1) -- cycle;
+ \draw [blue] (1, 1) -- (1, -1) -- (-1, -1) -- (-1, 1) -- cycle node [right] {$B^{\infty}$};
+ \draw [blue] (0.5, 0.5) -- (0.5, -0.5) -- (-0.5, -0.5) -- (-0.5, 0.5) -- cycle;
+ \end{tikzpicture}
+ \end{center}
+ To show that the metrics induce the same topology, suppose that $U$ is open with respect to $d_1$, and we show that it is open with respect to $d_\infty$. Let $x \in U$. Since $U$ is open with respect to $d_1$, there exists some $\delta > 0$ such that $B_\delta^1(x)\subseteq U$. So $B_{\delta/n}^\infty (x) \subseteq B_\delta^1(x) \subseteq U$. So $U$ is open with respect to $d_\infty$.
+
+ The other direction is similar.
+\end{eg}
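The sandwich inequality driving the ball inclusions is simple to verify numerically (a Python sketch for $n = 4$; names ours):

```python
import random

def norm_1(v):
    """The 1-norm: sum of absolute values of the coordinates."""
    return sum(abs(x) for x in v)

def norm_inf(v):
    """The infinity-norm: largest coordinate in absolute value."""
    return max(abs(x) for x in v)

random.seed(2)
n = 4
for _ in range(1000):
    v = [random.uniform(-10, 10) for _ in range(n)]
    # ||v||_inf <= ||v||_1 <= n ||v||_inf
    assert norm_inf(v) <= norm_1(v) <= n * norm_inf(v) + 1e-9
```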
+
+\begin{eg}
+ Let $X = C[0, 1]$. Let $d_1(f, g) = \|f - g\|_1$ and $d_\infty(f, g) = \|f - g\|_\infty$. Then they do not induce the same topology, since $(X, d_1) \to (X, d_\infty)$ by $f\mapsto f$ is not continuous.
+\end{eg}
+
+It is possible to have some other topologies that are not induced by metrics.
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $X$ be any set.
+ \begin{enumerate}
+ \item $\cU = \{\emptyset, X\}$ is the \emph{coarse topology} on $X$.
+ \item $\cU = \P(X)$ is the \emph{discrete topology} on $X$, since it is induced by the discrete metric.
+ \item $\cU = \{A\subseteq X: X\setminus A\text{ is finite or } A = \emptyset\}$ is the \emph{cofinite topology} on $X$.
+ \end{enumerate}
+ \item Let $X = \R$. Then $\cU = \{(a, \infty): a\in \R\}\cup \{\emptyset, \R\}$ is the \emph{right order topology} on $\R$.
+ \end{enumerate}
+\end{eg}
+
+Now we can define continuous functions in terms of the topology only.
+\begin{defi}[Continuous function]
+ Let $f: X\to Y$ be a map of topological spaces. Then $f$ is \emph{continuous} if $f^{-1}(U)$ is open in $X$ whenever $U$ is open in $Y$.
+\end{defi}
+When the topologies are induced by metrics, the topological and metric notions of continuous functions coincide, as we showed in the previous chapter.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Any function $f: X\to Y$ is continuous if $X$ has the discrete topology.
+ \item Any function $f: X\to Y$ is continuous if $Y$ has the coarse topology.
+ \item If $X$ and $Y$ both have the cofinite topology, then $f: X\to Y$ is continuous iff $f$ is constant or $f^{-1}(\{y\})$ is finite for every $y\in Y$.
+ \end{enumerate}
+\end{eg}
+
+\begin{lemma}
+ If $f: X\to Y$ and $g: Y\to Z$ are continuous, then so is $g\circ f: X\to Z$.
+\end{lemma}
+
+\begin{proof}
+ If $U\subseteq Z$ is open, $g$ is continuous, then $g^{-1}(U)$ is open in $Y$. Since $f$ is also continuous, $f^{-1}(g^{-1}(U)) = (g\circ f)^{-1}(U)$ is open in $X$.
+\end{proof}
+
+In group theory, we had the notion of isomorphisms between groups. Isomorphic groups are equal-up-to-renaming-of-elements, and are considered to be the same for most purposes.
+
+Similarly, we will define \emph{homeomorphisms} between topological spaces, and homeomorphic topological spaces will be considered to be the same (notice the ``e'' in hom\textbf{e}omorphism).
+\begin{defi}[Homeomorphism]
+ $f: X\to Y$ is a \emph{homeomorphism} if
+ \begin{enumerate}
+ \item $f$ is a bijection
+ \item Both $f$ and $f^{-1}$ are continuous
+ \end{enumerate}
+ Equivalently, $f$ is a bijection and $U\subseteq X$ is open iff $f(U)\subseteq Y$ is open.
+
+ Two spaces are \emph{homeomorphic} if there exists a homeomorphism between them, and we write $X\simeq Y$.
+\end{defi}
+Note that we specifically require $f$ and $f^{-1}$ to be both continuous. In group theory, if $\phi$ is a bijective homomorphism, then $\phi^{-1}$ is automatically a homomorphism as well. However, this is not true for topological spaces. $f$ being continuous does not imply $f^{-1}$ is continuous, as illustrated by the example below.
+
+\begin{eg}
+ Let $X = C[0, 1]$ with the topology induced by $\|\ph \|_1$ and $Y = C[0, 1]$ with the topology induced by $\|\ph\|_\infty$. Then $F: Y\to X$ by $f\mapsto f$ is continuous but $F^{-1}$ is not.
+\end{eg}
+
+\begin{eg}
+ Let $X = [0, 2\pi)$ and $Y = S^1 = \{z \in \C: |z| = 1\}$. Then $f: X \to Y$ given by $f(x) = e^{ix}$ is continuous but its inverse is not.
+\end{eg}
+
+Similar to isomorphisms, we can show that homeomorphism is an equivalence relation.
+\begin{lemma}
+ Homeomorphism is an equivalence relation.
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item The identity map $I_X: X\to X$ is always a homeomorphism. So $X\simeq X$.
+ \item If $f: X\to Y$ is a homeomorphism, then so is $f^{-1}:Y\to X$. So $X\simeq Y \Rightarrow Y\simeq X$.
+ \item If $f: X\to Y$ and $g: Y\to Z$ are homeomorphisms, then $g\circ f: X\to Z$ is a homeomorphism. So $X\simeq Y$ and $Y\simeq Z$ implies $X\simeq Z$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Under the usual topology, the open intervals $(0, 1)\simeq (a, b)$ for all $a, b\in \R$ with $a < b$, using the homeomorphism $x\mapsto a + (b - a)x$.
+
+ Similarly, $[0, 1] \simeq [a, b]$ for $a < b$.
+ \item $(-1, 1) \simeq \R$ by $x\mapsto \tan(\frac{\pi}{2}x)$.
+ \item $\R\simeq (0, \infty)$ by $x\mapsto e^x$.
+ \item $(a, \infty)\simeq (b, \infty)$ by $x\mapsto x + (b - a)$.
+ \end{enumerate}
+ The fact that $\simeq$ is an equivalence relation implies that any two open intervals in $\R$ are homeomorphic.
+
+\end{eg}
+It is relatively easy to show that two spaces are homeomorphic. We just have to write down a homeomorphism. However, it is rather difficult to prove that two spaces are \emph{not homeomorphic}.
+
+For example, is $(0, 1)$ homeomorphic to $[0, 1]$? No, but we are not yet able to prove it. How about $\R$ and $\R^2$? Again they are not homeomorphic, but to show this we will need some tools that we'll develop in the next few lectures.
+
+How about $\R^m$ and $\R^n$ in general? They are not homeomorphic, but we won't be able to prove this rigorously in the course. To properly prove this, we will need tools from algebraic topology.
+
+So how can we prove that two spaces are not homeomorphic? In group theory, we could prove that two groups are not isomorphic by, say, showing that they have different orders. Similarly, to distinguish between topological spaces, we have to define certain \emph{topological properties}. Then if two spaces have different topological properties, we can show that they are not homeomorphic.
+
+But before that, we will first introduce many useful notions for topological spaces, including sequences, subspaces, products and quotients. The remainder of the chapter will be mostly definitions that we will use later.
+\subsection{Sequences}
+To define the convergence of a sequence using open sets, we again need the concept of open neighbourhoods.
+
+\begin{defi}[Open neighbourhood]
+ An \emph{open neighbourhood} of $x\in X$ is an open set $U\subseteq X$ with $x\in U$.
+\end{defi}
+
+Now we can use this to define convergence of sequences.
+\begin{defi}[Convergent sequence]
+ A sequence $x_n \to x$ if for every open neighbourhood $U$ of $x$, $\exists N$ such that $x_n \in U$ for all $n > N$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item If $X$ has the coarse topology, then any sequence $x_n$ converges to every $x\in X$, since there is only one open neighbourhood of $x$.
+ \item If $X$ has the cofinite topology and no two terms of the sequence $(x_n)$ are equal, then $x_n \to x$ for every $x\in X$, since every non-empty open set has finite complement, and hence excludes only finitely many terms of the sequence.
+ \end{enumerate}
+\end{eg}
+
+This looks weird. This is definitely not what we used to think of sequences. At least, we would want to have unique limits.
+
+Fortunately, there is a particular class of spaces where sequences are well-behaved and have at most one limit.
+\begin{defi}[Hausdorff space]
+ A topological space $X$ is \emph{Hausdorff} if for all $x_1, x_2\in X$ with $x_1 \not= x_2$, there exist open neighbourhoods $U_1$ of $x_1$, $U_2$ of $x_2$ such that $U_1 \cap U_2 = \emptyset$.
+\end{defi}
+
+\begin{lemma}
+ If $X$ is Hausdorff, $x_n$ is a sequence in $X$ with $x_n \to x$ and $x_n \to x'$, then $x = x'$, i.e.\ limits are unique.
+\end{lemma}
+
+\begin{proof}
+ Suppose the contrary that $x\not= x'$. Then by definition of Hausdorff, there exist open neighbourhoods $U, U'$ of $x, x'$ respectively with $U \cap U' = \emptyset$.
+
+ Since $x_n \to x$ and $U$ is a neighbourhood of $x$, by definition, there is some $N$ such that whenever $n > N$, we have $x_n \in U$. Similarly, since $x_n \to x'$, there is some $N'$ such that whenever $n > N'$, we have $x_n \in U'$.
+
+ This means that whenever $n > \max(N, N')$, we have $x_n \in U$ and $x_n \in U'$. So $x_n \in U\cap U'$. This contradicts the fact that $U \cap U' = \emptyset$.
+
+ Hence we must have $x = x'$.
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item If $X$ has more than 1 element, then the coarse topology on $X$ is not Hausdorff.
+ \item If $X$ has infinitely many elements, the cofinite topology on $X$ is not Hausdorff.
+ \item The discrete topology is always Hausdorff.
+ \item If $(X, d)$ is a metric space, the topology induced by $d$ is Hausdorff: for $x_1 \not= x_2$, let $r = d(x_1, x_2) > 0$. Then take $U_i = B_{r/2}(x_i)$. Then $U_1 \cap U_2 = \emptyset$.
+ \end{enumerate}
+\end{eg}
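The Hausdorff condition can also be tested mechanically on a finite space, by searching for a separating pair of disjoint open neighbourhoods for each pair of distinct points. A hedged Python sketch (helper name and example spaces are our own):

```python
from itertools import product

def is_hausdorff(X, U):
    """X is Hausdorff iff every pair of distinct points has
    disjoint open neighbourhoods."""
    opens = [frozenset(A) for A in U]
    return all(
        any(x in A and y in B and not (A & B)
            for A, B in product(opens, opens))
        for x in X for y in X if x != y
    )

X = {1, 2, 3}
discrete = [set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, X]
coarse = [set(), X]
print(is_hausdorff(X, discrete))  # True: singletons separate the points
print(is_hausdorff(X, coarse))    # False: the only non-empty open set is X
```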
+
+\subsection{Closed sets}
+We will define closed sets similarly to what we did for metric spaces.
+
+\begin{defi}[Closed sets]
+ $C\subseteq X$ is \emph{closed} if $X\setminus C$ is an open subset of $X$.
+\end{defi}
+
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $C_\alpha$ is a closed subset of $X$ for all $\alpha \in A$, then $\bigcap_{\alpha \in A} C_\alpha$ is closed in $X$.
+ \item If $C_1, \cdots, C_n$ are closed in $X$, then so is $\bigcup_{i = 1}^n C_i$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Since $C_\alpha$ is closed in $X$, $X \setminus C_\alpha$ is open in $X$. So $\bigcup_{\alpha\in A}(X\setminus C_\alpha) = X\setminus \bigcap_{\alpha\in A}C_\alpha$ is open. So $\bigcap_{\alpha \in A}C_\alpha$ is closed.
+
+ \item If $C_i$ is closed in $X$, then $X\setminus C_i$ is open. So $\bigcap_{i = 1}^n (X\setminus C_i) = X\setminus \bigcup_{i = 1}^n C_i$ is open. So $\bigcup_{i = 1}^n C_i$ is closed.\qedhere
+ \end{enumerate}
+\end{proof}
+
+This time we can take infinite intersections and finite unions, which is the opposite of what we have for open sets.
+
+Note that it is entirely possible to define the topology to be the collection of all \emph{closed} sets instead of open sets, but people seem to like open sets more.
+
+\begin{cor}
+ If $X$ is Hausdorff and $x\in X$, then $\{x\}$ is closed in $X$.
+\end{cor}
+
+\begin{proof}
+ For all $y\in X$ with $y \not= x$, there exist open subsets $U_y, V_y$ with $y\in U_y$, $x\in V_y$ and $U_y \cap V_y = \emptyset$.
+
+ Let $C_y = X\setminus U_y$. Then $C_y$ is closed, $y\not\in C_y$, $x\in C_y$. So $\{x\} = \bigcap_{y\not= x} C_y$ is closed since it is an intersection of closed subsets.
+\end{proof}
+\subsection{Closure and interior}
+\subsubsection{Closure}
+Given a subset $A\subseteq X$, if $A$ is not closed, we would like to find the smallest closed subset containing $A$. This is known as the \emph{closure} of $A$.
+
+Officially, we define the closure as follows:
+
+\begin{defi}[Closure]
+ Let $X$ be a topological space and $A\subseteq X$. Define
+ \[
+ \mathcal{C}_A = \{C\subseteq X: A\subseteq C\text{ and }C\text{ is closed in }X\}
+ \]
+ Then the \emph{closure} of $A$ in $X$ is
+ \[
+ \bar A = \bigcap_{C\in \mathcal{C}_A} C.
+ \]
+\end{defi}
+First we do a sanity check: since $\bar A$ is defined as an intersection, we should make sure we are not taking an intersection of no sets. This is easy: since $X$ is closed in $X$ (its complement $\emptyset$ is open), $\mathcal{C}_A \not= \emptyset$. So we can safely take the intersection.
+
+Since $\bar A$ is an intersection of closed sets, it is closed in $X$. Also, if $C\in \mathcal{C}_A$, then $A\subseteq C$. So $A\subseteq \bigcap_{C\in \mathcal{C}_A} C = \bar A$. In fact, we have
+
+\begin{prop}
+ $\bar A$ is the smallest closed subset of $X$ which contains $A$.
+\end{prop}
+
+\begin{proof}
+ Let $K\subseteq X$ be a closed set containing $A$. Then $K\in \mathcal{C}_A$. So $\bar A = \bigcap_{C\in \mathcal{C}_A}C \subseteq K$. So $\bar A\subseteq K$.
+\end{proof}
+
+We basically \emph{defined} the closure such that it is the smallest closed subset of $X$ which contains $A$.
+
+However, while this ``clever'' definition makes it easy to prove the above property, it is rather difficult to directly use it to compute the closure.
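On a \emph{finite} space, though, the official definition can be applied verbatim: list the closed sets and intersect those containing $A$. A small Python sketch (illustrative only; the two-point space below, with open sets $\emptyset, \{1\}, \{1, 2\}$, is our own choice of example):

```python
def closure(X, U, A):
    """Smallest closed superset of A: intersect all closed sets containing A."""
    X, A = frozenset(X), frozenset(A)
    result = X  # X itself is closed and contains A, so the family is non-empty
    for V in U:
        C = X - frozenset(V)  # complements of open sets are the closed sets
        if A <= C:
            result &= C
    return result

# Two-point space with open sets {}, {1}, {1, 2}; closed sets {}, {2}, {1, 2}
X = {1, 2}
U = [set(), {1}, {1, 2}]
print(closure(X, U, {1}))  # frozenset({1, 2}): the only closed superset of {1}
print(closure(X, U, {2}))  # frozenset({2})
```

Note that the point $1$ is "dense" in this small space, while $\{2\}$ is already closed.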
+
+To compute the closure, we define the \emph{limit point} analogous to what we did for metric spaces.
+\begin{defi}[Limit point]
+ A \emph{limit point} of $A$ is an $x\in X$ such that there is a sequence $x_n \to x$ with $x_n \in A$ for all $n$.
+\end{defi}
+In general, limit points are easier to compute, and can be a useful tool for determining the closure of $A$.
+
+Now let
+\[
+ L(A) = \{x\in X: x\text{ is a limit point of }A\}.
+\]
+We immediately get the following lemma.
+\begin{lemma}
+ If $C\subseteq X$ is closed, then $L(C) = C$.
+\end{lemma}
+
+\begin{proof}
+ Exactly the same as that for metric spaces. We will also prove a more general result very soon that implies this.
+\end{proof}
+
+Recall that we proved the converse of this statement for metric spaces. However, the converse is \emph{not} true for topological spaces in general.
+
+\begin{eg}
+ Let $X$ be an uncountable set (e.g.\ $\R$), and define a topology on $X$ by saying a set is open if it is empty or has countable complement. One can check that this indeed defines a topology. We claim that the only sequences that converge are those that are eventually constant.
+
+ Indeed, if $x_n$ is a sequence and $x \in X$, then consider the open set
+ \[
+ U = (X \setminus \{x_n: n \in \N\}) \cup \{x\}.
+ \]
+ Then the only element in the sequence $x_n$ that can possibly be contained in $U$ is $x$ itself. So if $x_n \to x$, this implies that $x_n$ is eventually always $x$.
+
+ In particular, it follows that $L(A) = A$ for all $A \subseteq X$.
+\end{eg}
+
+However, we do have the following result:
+
+\begin{prop}
+ $L(A) \subseteq \bar A$.
+\end{prop}
+
+\begin{proof}
+ If $A\subseteq C$, then $L(A) \subseteq L(C)$. If $C$ is closed, then $L(C) = C$. So $C\in \mathcal{C}_A \Rightarrow L(A) \subseteq C$. So $L(A) \subseteq \bigcap_{C\in \mathcal{C}_A}C = \bar A$.
+\end{proof}
+This in particular implies the previous lemma, since for any $A$, we have $A \subseteq L(A) \subseteq \bar{A}$, and when $A$ is closed, we have $A = \bar{A}$.
+
+Finally, we have the following corollary that can help us find the closure of subsets:
+\begin{cor}
+ Given a subset $A \subseteq X$, if we can find some closed $C$ such that $A \subseteq C \subseteq L(A)$, then we in fact have $C = \bar{A}$.
+\end{cor}
+
+\begin{proof}
+ $C\subseteq L(A) \subseteq \bar A \subseteq C$, where the last step is since $\bar A$ is the smallest closed set containing $A$. So $C = L(A) = \bar A$.
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Let $(a, b)\subseteq \R$. Then $\overline{(a, b)} = [a, b]$.
+ \item Let $\Q\subseteq \R$. Then $\bar\Q = \R$.
+ \item $\overline{\R \setminus \Q} = \R$.
+ \item In $\R^n$ with the Euclidean metric, $\overline{B_r(x)} = \bar B_r(x)$. In general, $\overline{B_r(x)}\subseteq \bar B_r(x)$, since $\bar B_r(x)$ is closed and $B_r(x) \subseteq \bar B_r(x)$, but these need not be equal.
+
+ For example, if $X$ has the discrete metric, then $B_1(x) = \{x\}$. Then $\overline{B_1(x)} = \{x\}$, but $\bar B_1(x) = X$.
+ \end{itemize}
+\end{eg}
+
+In the above example, we had $\bar\Q = \R$. In some sense, all points of $\R$ are ``surrounded'' by points in $\Q$. We say that $\Q$ is \emph{dense} in $\R$.
+\begin{defi}[Dense subset]
+ $A\subseteq X$ is \emph{dense} in $X$ if $\bar A = X$.
+\end{defi}
+
+\begin{eg}
+ $\Q$ and $\R\setminus \Q$ are both dense in $\R$ with the usual topology.
+\end{eg}
+
+\subsubsection{Interior}
+We defined the closure of $A$ to be the smallest closed subset containing $A$. We can similarly define the interior of $A$ to be the largest open subset contained in $A$.
+
+\begin{defi}[Interior]
+ Let $A\subseteq X$, and let
+ \[
+ \mathcal{O}_A = \{U\subseteq X: U\subseteq A, U\text{ is open in }X\}.
+ \]
+ The \emph{interior} of $A$ is
+ \[
+ \Int(A) = \bigcup_{U\in \mathcal{O}_A} U.
+ \]
+\end{defi}
+
+\begin{prop}
+ $\Int(A)$ is the largest open subset of $X$ contained in $A$.
+\end{prop}
+The proof is similar to the proof for the closure.
+
+To find the closure, we could use limit points. What trick do we have to find the interior?
+\begin{prop}
+ $X\setminus \Int(A) = \overline{X\setminus A}$.
+\end{prop}
+
+\begin{proof}
+ $U\subseteq A\Leftrightarrow (X\setminus U)\supseteq (X\setminus A)$. Also, $U\text{ open in }X\Leftrightarrow X\setminus U\text{ is closed in }X$.
+
+ So the complement of the largest open subset of $X$ contained in $A$ will be the smallest closed subset containing $X\setminus A$.
+\end{proof}
+
+\begin{eg}
+ $\Int(\Q) = \Int(\R\setminus \Q) = \emptyset$.
+\end{eg}
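The duality between interior and closure is easy to confirm exhaustively on a finite space. The Python sketch below (function names and the two-point example space are our own) computes the interior as the union of open subsets and checks $X\setminus \Int(A) = \overline{X\setminus A}$ for every subset:

```python
def closure(X, U, A):
    """Intersection of all closed sets containing A."""
    X, A = frozenset(X), frozenset(A)
    result = X
    for V in U:
        C = X - frozenset(V)  # closed sets are complements of open sets
        if A <= C:
            result &= C
    return result

def interior(X, U, A):
    """Union of all open sets contained in A."""
    A, result = frozenset(A), frozenset()
    for V in U:
        if frozenset(V) <= A:
            result |= frozenset(V)
    return result

X = frozenset({1, 2})
U = [set(), {1}, {1, 2}]  # open sets of a two-point space
for A in [set(), {1}, {2}, {1, 2}]:
    # the complement of the interior is the closure of the complement
    assert X - interior(X, U, A) == closure(X, U, X - frozenset(A))
print("duality holds on all subsets")
```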
+
+\subsection{New topologies from old}
+In group theory, we had the notions of subgroups, products and quotients. There are exact analogies for topological spaces. In this section, we will study the subspace topology, product topology and quotient topology.
+
+\subsubsection{Subspace topology}
+\begin{defi}[Subspace topology]
+ Let $X$ be a topological space and $Y\subseteq X$. The \emph{subspace topology} on $Y$ is given by: $V$ is an open subset of $Y$ if there is some $U$ open in $X$ such that $V = Y\cap U$.
+\end{defi}
+If we simply write $Y\subseteq X$ and don't specify a topology, the subspace topology is assumed. For example, when we write $\Q\subseteq \R$, we are thinking of $\Q$ with the subspace topology inherited from $\R$.
+
+\begin{eg}
+ If $(X, d)$ is a metric space and $Y\subseteq X$, then the metric topology on $(Y, d)$ is the subspace topology, since $B_r^Y(y) = Y\cap B_r^X(y)$.
+\end{eg}
+
+To show that this is indeed a topology, we need the following set theory facts:
+\begin{align*}
+ Y\cap \left(\bigcup_{\alpha \in A}V_\alpha\right) &= \bigcup_{\alpha \in A}\left(Y\cap V_\alpha\right)\\
+ Y\cap \left(\bigcap_{\alpha\in A}V_\alpha\right) &= \bigcap_{\alpha\in A}(Y\cap V_\alpha)
+\end{align*}
+
+\begin{prop}
+ The subspace topology is a topology.
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Since $\emptyset$ is open in $X$, $\emptyset = Y\cap \emptyset$ is open in $Y$.
+
+ Since $X$ is open in $X$, $Y = Y\cap X$ is open in $Y$.
+ \item If $V_\alpha$ is open in $Y$, then $V_\alpha = Y\cap U_\alpha$ for some $U_\alpha$ open in $X$. Then
+ \[
+ \bigcup_{\alpha\in A}V_\alpha = \bigcup_{\alpha\in A}\left(Y\cap U_\alpha\right) = Y\cap \left(\bigcup_{\alpha\in A}U_\alpha\right).
+ \]
+ Since $\bigcup U_\alpha$ is open in $X$, $\bigcup V_\alpha$ is open in $Y$.
+ \item If $V_i$ is open in $Y$, then $V_i = Y\cap U_i$ for some open $U_i\subseteq X$. Then
+ \[
+ \bigcap_{i = 1}^n V_i = \bigcap_{i = 1}^n \left(Y\cap U_i\right) = Y\cap \left(\bigcap_{i = 1}^n U_i\right).
+ \]
+ Since $\bigcap U_i$ is open, $\bigcap V_i$ is open.\qedhere
+ \end{enumerate}
+\end{proof}
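Since every open set of $Y$ is a trace $Y\cap U$ of an open set $U$ of $X$, the subspace topology of a finite example can be written down directly. A Python sketch (the topology on $X$ below is our own illustrative choice):

```python
def subspace_topology(Y, U):
    """Open sets of the subspace Y are the traces Y & V of open sets V of X."""
    Y = frozenset(Y)
    return {Y & frozenset(V) for V in U}

X = {1, 2, 3}
U = [set(), {1}, {1, 2}, X]          # a topology on X
print(subspace_topology({2, 3}, U))  # three traces: {}, {2} and {2, 3}
```

Note that $\{2\}$ is open in the subspace $\{2, 3\}$ even though it is not open in $X$.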
+
+Recall that if $Y\subseteq X$, there is an inclusion function $\iota: Y \to X$ that sends $y \mapsto y$. We can use this to obtain the following defining property of a subspace.
+\begin{prop}
+ If $Y$ has the subspace topology, $f: Z\to Y$ is continuous iff $\iota\circ f: Z\to X$ is continuous.
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ If $U\subseteq X$ is open, then $\iota^{-1}(U) = Y\cap U$ is open in $Y$. So $\iota$ is continuous. So if $f$ is continuous, so is $\iota\circ f$.
+
+ $(\Leftarrow)$ Suppose $\iota\circ f$ is continuous. If $V\subseteq Y$ is open, then $V = Y \cap U = \iota^{-1}(U)$ for some $U$ open in $X$. So $f^{-1}(V) = f^{-1}(\iota^{-1}(U)) = (\iota\circ f)^{-1}(U)$ is open since $\iota\circ f$ is continuous. So $f$ is continuous.
+\end{proof}
+This property is ``defining'' in the sense that it can be used to define a subspace: $Y$ is a subspace of $X$ if there exists some function $\iota: Y \to X$ such that for any $f$, $f$ is continuous iff $\iota\circ f$ is continuous.
+
+\begin{eg}
+ $D^n = \{\mathbf{v}\in \R^n: |\mathbf{v}|\leq 1\}$ is the $n$-dimensional closed unit disk. $S^{n - 1} = \{\mathbf{v}\in \R^n: |\mathbf{v}| = 1\}$ is the $(n - 1)$-dimensional sphere.
+
+ We have
+ \[
+ \Int(D^n) = \{\mathbf{v}\in \R^n : |\mathbf{v}| < 1\} = B_1(\mathbf{0}).
+ \]
+ This is, in fact, homeomorphic to $\R^n$. To show this, we can first pick our favorite homeomorphism $f: [0, 1) \to [1, \infty)$. Then $\mathbf{v}\mapsto f(|\mathbf{v}|)\mathbf{v}$ is a homeomorphism $\Int(D^n) \to \R^n$.
+\end{eg}
+\subsubsection{Product topology}
+If $X$ and $Y$ are sets, the product is defined as
+\[
+ X\times Y = \{(x, y): x\in X, y\in Y\}
+\]
+We have the projection functions $\pi_1: X\times Y \to X$, $\pi_2: X\times Y \to Y$ given by
+\[
+ \pi_1(x, y) = x,\quad \pi_2(x, y) = y.
+\]
+If $A\subseteq X, B\subseteq Y$, then we have $A\times B \subseteq X\times Y$.
+
+Given topological spaces $X$, $Y$, we can define a topology on $X\times Y$ as follows:
+\begin{defi}[Product topology]
+ Let $X$ and $Y$ be topological spaces. The \emph{product topology} on $X\times Y$ is given by:
+
+ $U\subseteq X\times Y$ is open if: for every $(x, y)\in U$, there exist $V_x\subseteq X, W_y\subseteq Y$ open neighbourhoods of $x$ and $y$ such that $V_x\times W_y \subseteq U$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item If $V\subseteq X$ and $W\subseteq Y$ are open, then $V\times W \subseteq X\times Y$ is open (take $V_x = V$, $W_y = W$).
+ \item The product topology on $\R\times \R$ is the same as the topology induced by $\|\ph \|_\infty$, hence is also the same as the topology induced by $\|\ph\|_2$ or $\|\ph\|_1$. Similarly, the product topology on $\R^n = \R^{n - 1}\times \R$ is also the same as that induced by $\|\ph\|_\infty$.
+ \item $(0, 1) \times (0, 1)\times \cdots\times (0, 1) \subseteq \R^n$ is the open $n$-dimensional cube in $\R^n$.
+
+ Since $(0, 1)\simeq \R$, we have $(0, 1)^n \simeq \R^n \simeq \Int(D^n)$.
+ \item $[0, 1]\times S^n \simeq [1, 2] \times S^n\simeq \{\mathbf{v}\in \R^{n + 1}: 1 \leq |\mathbf{v}| \leq 2\}$, where the last homeomorphism is given by $(t, \mathbf{w}) \mapsto t\mathbf{w}$ with inverse $\mathbf{v} \mapsto (|\mathbf{v}|, \hat{\mathbf{v}})$. This is a thickened sphere.
+
+ \item Let $A\subseteq \{(r, z): r > 0\} \subseteq \R^2$, and let $R(A)$ be the set obtained by rotating $A$ around the $z$ axis. Then $R(A) \simeq S^1\times A$ by
+ \[
+ (x, y, z) = (\mathbf{v}, z) \mapsto (\hat{\mathbf{v}}, (|\mathbf{v}|, z)).
+ \]
+ In particular, if $A$ is a circle, then $R(A) \simeq S^1\times S^1 = T^2$ is the two-dimensional torus.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw [->] (-2, 0) -- (2, 0) node [right] {$r$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$z$};
+ \draw [red] (1, 0) circle [radius = 0.5];
+ \draw [->] (-0.17, 1.7) arc (140:400:0.2);
+ \end{scope}
+
+ \draw [very thick, ->] (2.5, 0) -- (3.5, 0);
+
+ \begin{scope}[shift={(5.5, 0)}, scale=0.8]
+ \draw (0,0) ellipse (2 and 1.12);
+ \path[rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0) (-.9,0)--(0,-.56)--(.9,0);
+ \draw[rounded corners=28pt] (-1.1,.1)--(0,-.6)--(1.1,.1);
+ \draw[rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ \end{itemize}
+\end{eg}
+The defining property is that $f: Z\to X\times Y$ is continuous iff $\pi_1\circ f$ and $\pi_2\circ f$ are continuous.
+
+Note that our definition of the product topology is rather similar to the definition of open sets for metrics. We have a special class of subsets of the form $V\times W$, and a subset $U$ is open iff every point $x\in U$ is contained in some $V\times W\subseteq U$. In some sense, these subsets ``generate'' the open sets.
+
+Alternatively, if $U\subseteq X\times Y$ is open, then
+\[
+ U = \bigcup_{(x, y)\in U} V_x\times W_y.
+\]
+So $U\subseteq X\times Y$ is open if and only if it is a union of members of our special class of subsets.
+
+We call this special class the \emph{basis}.
+
+\begin{defi}[Basis]
+ Let $\cU$ be a topology on $X$. A subset $\mathcal{B} \subseteq \cU$ is a \emph{basis} if ``$U\in \cU$ iff $U$ is a union of sets in $\mathcal{B}$''.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item $\{V\times W: V\subseteq X, W\subseteq Y\text{ are open}\}$ is a basis for the product topology for $X\times Y$.
+ \item If $(X, d)$ is a metric space, then
+ \[
+ \{B_{1/n}(x): n\in \N^+, x\in X\}
+ \]
+ is a basis for the topology induced by $d$.
+ \end{itemize}
+\end{eg}
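For finite spaces, a basis really does generate the topology by taking all possible unions. The following Python sketch (example topologies and helper names are our own) builds the product topology on a four-point product from its basis of open boxes $V\times W$:

```python
from itertools import chain, combinations

def all_unions(basis):
    """All unions of subfamilies of a finite basis (the generated open sets)."""
    basis = [frozenset(b) for b in basis]
    subfamilies = chain.from_iterable(
        combinations(basis, r) for r in range(len(basis) + 1))
    return {frozenset().union(*sub) for sub in subfamilies}

def product_topology(UX, UY):
    """Form the basis of open boxes V x W, then close under unions."""
    boxes = {frozenset((x, y) for x in V for y in W)
             for V in UX for W in UY}
    return all_unions(boxes)

UX = [set(), {1}, {1, 2}]        # a topology on {1, 2}
UY = [set(), {'a'}, {'a', 'b'}]  # a topology on {'a', 'b'}
T = product_topology(UX, UY)
print(len(T))  # 6 open sets on the four-point product
```

Only five of the six open sets are boxes; the sixth arises as the union of two overlapping boxes, which is why closing under unions is needed.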
+
+\subsubsection{Quotient topology}
+If $X$ is a set and $\sim$ is an equivalence relation on $X$, then the quotient $X/{\sim}$ is the set of equivalence classes. The projection $\pi: X\to X/{\sim}$ is defined as $\pi(x) = [x]$, the equivalence class containing $x$.
+
+\begin{defi}[Quotient topology]
+ If $X$ is a topological space, the \emph{quotient topology} on $X/{\sim}$ is given by: $U$ is open in $X/{\sim}$ if $\pi^{-1}(U)$ is open in $X$.
+\end{defi}
+We can think of the quotient as ``gluing'' the points identified by $\sim$ together.
+
+The defining property is $f: X/{\sim} \to Y$ is continuous iff $f\circ \pi: X\to Y$ is continuous.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Let $X = \R$, $x\sim y$ iff $x - y\in \Z$. Then $X/{\sim} = \R/\Z \simeq S^1$, given by $[x] \mapsto (\cos 2\pi x, \sin 2\pi x)$.
+ \item Let $X = \R^2$, and let $\mathbf{v}\sim \mathbf{w}$ iff $\mathbf{v} - \mathbf{w} \in \Z^2$. Then $X/{\sim} = \R^2/\Z^2 = (\R/\Z)\times (\R/\Z) \simeq S^1\times S^1 = T^2$. Similarly, $\R^n/\Z^n = T^n = S^1\times S^1 \times \cdots \times S^1$.
+ \item If $A\subseteq X$, define $\sim$ by $x\sim y$ iff $x = y$ or $x, y\in A$. This glues everything in $A$ together and leaves everything else alone.
+
+ We often write this as $X/A$. Note that this is not consistent with the notation we just used above!
+ \begin{itemize}
+ \item Let $X = [0, 1]$ and $A = \{0, 1\}$, then $X/A \simeq S^1$ by, say, $t\mapsto (\cos 2\pi t, \sin 2\pi t)$. Intuitively, the equivalence relation says that the two end points of $[0, 1]$ are ``the same''. So we join the ends together to get a circle.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw (-1, 0) node [circ,red] {} -- (1, 0) node [circ,red] {};
+ \draw [->] (1, 0.1) arc (0:70:1);
+ \draw [->] (-1, 0.1) arc (180:110:1);
+ \end{scope}
+
+ \draw [very thick, ->] (1.5, 0) -- (2.5, 0);
+ \begin{scope}[shift={(4, 0)}];
+ \draw circle [radius=1];
+ \node at (0, 1) [circ, red] {};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+
+ \item Let $X = D^n$ and $A = S^{n - 1}$. Then $X/A \simeq S^n$. This can be pictured as pulling the boundary of the disk together to a point to create a closed surface.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [thick, red, fill=gray!50!white] circle [radius=1];
+
+ \draw [black, very thick, ->] (1.5, 0) -- (2.5, 0);
+
+ \draw [fill=gray!50!white] (4, 0) circle [radius=1];
+
+ \draw [dashed] (5, 0) arc (0:180:1 and 0.3);
+ \draw (3, 0) arc (180:360:1 and 0.3);
+ \node at (4, 1) [circ, red] {};
+ \end{tikzpicture}
+ \end{center}
+ \end{itemize}
+ \item Let $X = [0, 1]\times [0, 1]$ with $\sim$ given by $(0, y) \sim (1, y)$ and $(x, 0) \sim (x, 1)$, then $X/{\sim} \simeq S^1 \times S^1 = T^2$, by, say
+ \[
+ (x, y)\mapsto\big((\cos 2\pi x, \sin 2\pi x), (\cos 2\pi y, \sin 2\pi y)\big)
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \fill [thick, gray!50!white] (-1, -1) rectangle (1, 1);
+ \draw [->] (-0.3, 1) parabola (0.3, 0.2);
+ \draw [->] (-0.3, -1) parabola (0.3, -0.2);
+ \draw [thick, red] (-1, -1) -- (1, -1);
+ \draw [thick, red] (-1, 1) -- (1, 1);
+ \draw [thick, blue] (-1, -1) -- (-1, 1);
+ \draw [thick, blue] (1, -1) -- (1, 1);
+ \end{scope}
+
+ \draw [thick, ->] (1.5, 0) -- (2.5, 0);
+ \begin{scope}[shift={(4.3, 0)}]
+ \draw plot [smooth, tension=1.5] coordinates {(1, 0) (1.5, 0.5) (0.5, 1)};
+ \shade [bottom color=gray!90!white, top color = gray!10!white] (-1, 0.5) arc (90:-90:0.3 and 0.5) -- (1, -0.5) arc (-90:90:0.3 and 0.5) -- cycle;
+ \shade [bottom color=gray!30!white, top color = gray!50!white] (-1, 0) circle [x radius = 0.3, y radius = 0.5];
+ \draw [thick, red] (-0.7, 0) -- (1.3, 0);
+ \draw [thick, blue] (-1, 0) circle [x radius = 0.3, y radius = 0.5];
+ \draw [thick, blue] (-1, 0.5) arc (90:-90:0.3 and 0.5);
+ \draw [thick, blue] (1, -0.5) arc (-90:90:0.3 and 0.5);
+ \draw [thick, blue] (-1, 0.5) arc (90:270:0.3 and 0.5);
+ \draw [dashed, thick, blue] (1, 0.5) arc (90:270:0.3 and 0.5);
+ \draw plot [smooth, tension=1.5] coordinates {(-1, 0) (-1.5, 0.5) (-0.5, 1)};
+ \draw [->] (-0.51, 1) -- (-0.5, 1);
+ \draw [->] (0.51, 1) -- (0.5, 1);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ Similarly, $T^3 = [0, 1]^3/{\sim}$, where the equivalence is analogous to above.
+ \end{itemize}
+\end{eg}
+Note that even if $X$ is Hausdorff, $X/{\sim}$ may not be! For example, $\R/\Q$ is not Hausdorff.
+\section{Connectivity}
+Finally we can get to something more interesting. In this chapter, we will study the connectivity of spaces. Intuitively, we would want to say that a space is ``connected'' if it is in one piece. For example, $\R$ is connected, while $\R \setminus \{0\}$ is not. We will come up with two different definitions of connectivity: ordinary connectivity and path connectivity, where the latter implies the former, but not the other way round.
+
+\subsection{Connectivity}
+We will first look at normal connectivity.
+\begin{defi}[Connected space]
+ A topological space $X$ is \emph{disconnected} if $X$ can be written as $A\cup B$, where $A$ and $B$ are disjoint, non-empty open subsets of $X$. We say $A$ and $B$ \emph{disconnect} $X$.
+
+ A space is \emph{connected} if it is not disconnected.
+\end{defi}
+Note that being connected is a property of a \emph{space}, not a subset. When we say ``$A$ is a connected subset of $X$'', it means $A$ is connected with the subspace topology inherited from $X$.
+
+Being (dis)connected is a \emph{topological} property, i.e.\ if $X$ is (dis)connected, and $X\simeq Y$, then $Y$ is (dis)connected. To show this, let $f: X\to Y$ be the homeomorphism. By definition, $A$ is open in $X$ iff $f(A)$ is open in $Y$. So $A$ and $B$ disconnect $X$ iff $f(A)$ and $f(B)$ disconnect $Y$.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item If $X$ has the coarse topology, it is connected.
+ \item If $X$ has the discrete topology and at least 2 elements, it is disconnected.
+ \item Let $X\subseteq \R$. If there is some $\alpha \in \R\setminus X$ such that there are some $a, b\in X$ with $a < \alpha < b$, then $X$ is disconnected. In particular, $X\cap (-\infty, \alpha)$ and $X\cap (\alpha, \infty)$ disconnect $X$.
+
+ For example, $(0, 1)\cup (1, 2)$ is disconnected ($\alpha = 1$).
+ \end{itemize}
+\end{eg}
+
+We can also characterize connectivity in terms of continuous functions:
+\begin{prop}
+ $X$ is disconnected iff there exists a continuous surjection $f: X\to \{0, 1\}$, where $\{0, 1\}$ has the discrete topology.
+
+ Alternatively, $X$ is connected iff any continuous map $f: X \to \{0, 1\}$ is constant.
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ If $A$ and $B$ disconnect $X$, define
+ \[
+ f(x) =
+ \begin{cases}
+ 0 & x\in A\\
+ 1 & x\in B
+ \end{cases}
+ \]
+ Then $f^{-1}(\emptyset)=\emptyset$, $f^{-1}(\{0, 1\}) = X$, $f^{-1}(\{0\}) = A$ and $f^{-1}(\{1\}) = B$ are all open. So $f$ is continuous. Also, since $A, B$ are non-empty, $f$ is surjective.
+
+ $(\Leftarrow)$ Given a continuous surjection $f: X\to \{0, 1\}$, define $A = f^{-1}(\{0\})$ and $B = f^{-1}(\{1\})$. Then $A$ and $B$ disconnect $X$.
+\end{proof}
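For a finite topological space, we can decide connectivity by searching directly for a disconnecting pair of open sets. A Python sketch (the example spaces are our own choices; the two-point space with opens $\emptyset, \{1\}, X$ is connected, while the discrete two-point space is not):

```python
def is_connected(X, U):
    """Disconnected iff two disjoint non-empty open sets cover X."""
    X = frozenset(X)
    opens = [frozenset(A) for A in U]
    return not any(A and B and not (A & B) and A | B == X
                   for A in opens for B in opens)

X = {1, 2}
two_point = [set(), {1}, X]     # no pair of disjoint non-empty opens covers X
discrete = [set(), {1}, {2}, X]
print(is_connected(X, two_point))  # True
print(is_connected(X, discrete))   # False: {1} and {2} disconnect X
```

Equivalently, by the proposition above, one could enumerate all maps to the discrete space $\{0, 1\}$ and check whether any continuous one is surjective.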
+
+\begin{thm}
+ $[0, 1]$ is connected.
+\end{thm}
+
+Note that $\Q\cap [0, 1]$ is \emph{dis}connected, since we can pick our favorite irrational number $a$, and then $\{x: x < a\}$ and $\{x: x > a\}$ disconnect the interval. So we had better use something special about $[0, 1]$.
+
+The key property of $\R$ is that every non-empty $A\subseteq [0, 1]$ has a supremum.
+\begin{proof}
+ Suppose $A$ and $B$ disconnect $[0, 1]$. Wlog, assume $1\in B$. Since $A$ is non-empty, $\alpha = \sup A$ exists. Then either
+ \begin{itemize}
+ \item $\alpha \in A$. Then $\alpha < 1$, since $1 \in B$. Since $A$ is open, there exists $\varepsilon > 0$ with $B_\varepsilon(\alpha) \subseteq A$; shrinking $\varepsilon$ if necessary, we may assume $\alpha + \frac{\varepsilon}{2} \leq 1$. So $\alpha + \frac{\varepsilon}{2} \in A$, contradicting the fact that $\alpha$ is an upper bound of $A$; or
+
+ \item $\alpha \not\in A$. Then $\alpha \in B$. Since $B$ is open, $\exists \varepsilon > 0$ such that $B_\varepsilon (\alpha) \subseteq B$. Then $a \leq \alpha - \varepsilon$ for all $a\in A$. This contradicts $\alpha$ being the \emph{least} upper bound of $A$.
+ \end{itemize}
+
+ Either option gives a contradiction. So $A$ and $B$ cannot exist and $[0, 1]$ is connected.
+\end{proof}
+
+To conclude the section, we will prove the intermediate value property. The key proposition needed is the following.
+\begin{prop}
+ If $f: X\to Y$ is continuous and $X$ is connected, then $\im f$ is also connected.
+\end{prop}
+
+\begin{proof}
+ Suppose $A$ and $B$ disconnect $\im f$. We will show that $f^{-1}(A)$ and $f^{-1}(B)$ disconnect $X$.
+
+ Since $A, B\subseteq \im f$ are open, we know that $A = \im f\cap A'$ and $B = \im f\cap B'$ for some $A', B'$ open in $Y$. Then $f^{-1}(A) = f^{-1}(A')$ and $f^{-1}(B) = f^{-1}(B')$ are open in $X$.
+
+ Since $A, B$ are non-empty, $f^{-1}(A)$ and $f^{-1}(B)$ are non-empty. Also, $f^{-1}(A) \cap f^{-1}(B) = f^{-1}(A\cap B) = f^{-1}(\emptyset) = \emptyset$. Finally, $A\cup B = \im f$. So $f^{-1}(A)\cup f^{-1}(B) = f^{-1}(A\cup B) = X$.
+
+ So $f^{-1}(A)$ and $f^{-1}(B)$ disconnect $X$, contradicting our hypothesis. So $\im f$ is connected.
+\end{proof}
+Alternatively, if $\im f$ is not connected, let $g: \im f \to \{0, 1\}$ be continuous surjective. Then $g\circ f: X \to \{0, 1\}$ is continuous surjective. Contradiction.
+
+\begin{thm}[Intermediate value theorem]
+ Suppose $f: X\to \R$ is continuous and $X$ is connected. If $\exists x_0, x_1$ such that $f(x_0) < 0 < f(x_1)$, then $\exists x\in X$ with $f(x) = 0$.
+\end{thm}
+
+\begin{proof}
+ Suppose no such $x$ exists. Then $0\not\in \im f$ while $0 > f(x_0) \in \im f$, $0 < f(x_1) \in \im f$. Then $\im f$ is disconnected (from our previous example), contradicting $X$ being connected.
+\end{proof}
+Alternatively, if $f(x) \not= 0$ for all $x$, then $\frac{f(x)}{|f(x)|}$ is a continuous surjection from $X$ to $\{-1, +1\}$, which is a contradiction.
+
+\begin{cor}
+ If $f: [0, 1] \to \R$ is continuous with $f(0) < 0 < f(1)$, then $\exists x\in [0, 1]$ with $f(x) = 0$.
+\end{cor}
+Is the converse of the intermediate value theorem true? If $X$ is disconnected, can we find a function $g$ that doesn't satisfy the intermediate value property?
+
The answer is yes. Since $X$ is disconnected, let $f: X\to \{0, 1\}$ be a continuous surjection. Then $g(x) = f(x) - \frac{1}{2}$ is continuous and takes both the values $-\frac{1}{2}$ and $\frac{1}{2}$, but never takes the value $0$. So $g$ doesn't satisfy the intermediate value property.
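For instance, take $X = [0, 1]\cup [2, 3]$, which is disconnected by the sets $[0, 1]$ and $[2, 3]$ (both are open in $X$). The function
\[
 g(x) =
 \begin{cases}
 -\frac{1}{2} & x\in [0, 1]\\
 \hphantom{-}\frac{1}{2} & x\in [2, 3]
 \end{cases}
\]
is continuous on $X$ and takes both positive and negative values, but is never zero.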
+
+\subsection{Path connectivity}
+The other notion of connectivity is \emph{path connectivity}. A space is path connected if we can join any two points with a path. First, we need a definition of a path.
+
+\begin{defi}[Path]
 Let $X$ be a topological space, and $x_0, x_1 \in X$. Then a \emph{path} from $x_0$ to $x_1$ is a continuous function $\gamma: [0, 1] \to X$ such that $\gamma(0) = x_0$, $\gamma(1) = x_1$.
+\end{defi}
+
+\begin{defi}[Path connectivity]
+ A topological space $X$ is \emph{path connected} if for all points $x_0, x_1 \in X$, there is a path from $x_0$ to $x_1$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $(a, b), [a, b), (a, b], \R$ are all path connected (using paths given by linear functions).
+ \item $\R^n$ is path connected (e.g.\ $\gamma (t) = t \mathbf{x}_1 + (1 - t)\mathbf{x}_0$).
+ \item $\R^n\setminus \{0\}$ is path-connected for $n > 1$ (the paths are either line segments or bent line segments to get around the hole).
+ \end{enumerate}
+\end{eg}
+
+Path connectivity is a stronger condition than connectivity, in the sense that
+\begin{prop}
+ If $X$ is path connected, then $X$ is connected.
+\end{prop}
+
+\begin{proof}
+ Let $X$ be path connected, and let $f: X \to \{0, 1\}$ be a continuous function. We want to show that $f$ is constant.
+
+ Let $x, y \in X$. By path connectedness, there is a map $\gamma: [0, 1] \to X$ such that $\gamma(0) = x$ and $\gamma(1) = y$. Composing with $f$ gives a map $f \circ \gamma: [0, 1] \to \{0, 1\}$. Since $[0, 1]$ is connected, this must be constant. In particular, $f(\gamma(0)) = f(\gamma(1))$, i.e.\ $f(x) = f(y)$. Since $x, y$ were arbitrary, we know $f$ is constant.
+\end{proof}
+
+We can use connectivity to distinguish spaces. Apart from the obvious ``$X$ is connected while $Y$ is not'', we can also try to remove points and see what happens. We will need the following lemma:
+\begin{lemma}
 Suppose $f: X\to Y$ is a homeomorphism and $A\subseteq X$. Then $f|_A: A\to f(A)$ is a homeomorphism.
+\end{lemma}
+
+\begin{proof}
+ Since $f$ is a bijection, $f|_A$ is a bijection. If $U\subseteq f(A)$ is open, then $U = f(A) \cap U'$ for some $U'$ open in $Y$. So $f|_A^{-1}(U) = f^{-1}(U')\cap A$ is open in $A$. So $f|_A$ is continuous. Similarly, we can show that $(f|_A)^{-1}$ is continuous.
+\end{proof}
+
+\begin{eg} $[0, 1] \not\simeq (0, 1)$. Suppose it were. Let $f: [0, 1]\to (0,1)$ be a homeomorphism. Let $A = (0, 1]$. Then $f|_A: (0, 1] \to (0, 1)\setminus\{f(0)\}$ is a homeomorphism. But $(0, 1]$ is connected while $(0, 1)\setminus \{f(0)\}$ is disconnected. Contradiction.
+
+Similarly, $[0, 1)\not\simeq [0, 1]$ and $[0, 1)\not\simeq (0, 1)$. Also, $\R^n\not \simeq \R$ for $n > 1$, and $S^1$ is not homeomorphic to any subset of $\R$.
+\end{eg}
+
+\subsubsection{Higher connectivity*}
+We were able to use path connectivity to determine that $\R$ is not homeomorphic to $\R^n$ for $n > 1$. If we want to distinguish general $\R^n$ and $\R^m$, we will need to generalize the idea of path connectivity to higher connectivity.
+
+To do so, we have to formulate path connectivity in a different way. Recall that $S^0 = \{-1, 1\} \simeq \{0, 1\}\subseteq \R$, while $D^1 = [-1, 1] \simeq [0, 1] \subseteq \R$. Then we can formulate path connectivity as: $X$ is path-connected iff any continuous $f: S^0\to X$ extends to a continuous $\gamma: D^1 \to X$ with $\gamma|_{S^0} = f$.
+
+This is much more easily generalized:
+\begin{defi}[$n$-connectedness]
+ $X$ is $n$\emph{-connected} if for any $k \leq n$, any continuous $f: S^k \to X$ extends to a continuous $F: D^{k + 1}\to X$ such that $F|_{S^k} = f$.
+\end{defi}
+For any point $p\in \R^n$, $\R^n \setminus \{p\}$ is $m$-connected iff $m \leq n - 2$. So $\R^n\setminus\{p\} \not\simeq \R^m\setminus\{q\}$ unless $n = m$. So $\R^n \not\simeq \R^m$.
+
+Unfortunately, we have not yet proven that $\R^n \not\simeq \R^m$. We casually stated that $\R^n \setminus \{p\}$ is $m$-connected iff $m \leq n - 2$. However, while this intuitively makes sense, it is in fact very difficult to prove. To actually prove this, we will need tools from algebraic topology.
+
+\subsection{Components}
+If a space is disconnected, we could divide the space into different \emph{components}, each of which is (maximally) connected. This is what we will investigate in this section. Of course, we would have a different notion of ``components'' for each type of connectivity. We will first start with path connectivity.
+
+\subsubsection{Path components}
+Defining path components (i.e.\ components with respect to path connectedness) is relatively easy. To do so, we need the following lemma.
+
+\begin{lemma}
+ Define $x\sim y$ if there is a path from $x$ to $y$ in $X$. Then $\sim$ is an equivalence relation.
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
 \item For any $x\in X$, let $\gamma_x: [0, 1] \to X$ be the constant path $\gamma_x(t) = x$. Then this is a path from $x$ to $x$. So $x\sim x$.
+ \item If $\gamma: [0, 1] \to X$ is a path from $x$ to $y$, then $\bar \gamma: [0, 1] \to X$ by $t \mapsto \gamma(1 - t)$ is a path from $y$ to $x$. So $x\sim y \Rightarrow y\sim x$.
+ \item If $\gamma_1$ is a path from $x$ to $y$ and $\gamma_2$ is a path from $y$ to $z$, then $\gamma_2*\gamma_1$ defined by
+ \[
+ t\mapsto
+ \begin{cases}
+ \gamma_1(2t) & t\in [0, 1/2]\\
+ \gamma_2(2t - 1) & t\in [1/2, 1]
+ \end{cases}
+ \]
 is a path from $x$ to $z$ (it is continuous since the two pieces are continuous and agree at $t = 1/2$). So $x\sim y, y\sim z \Rightarrow x\sim z$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+With this lemma, we can immediately define the path components:
+\begin{defi}[Path components]
+ Equivalence classes of the relation ``$x\sim y$ if there is a path from $x$ to $y$'' are \emph{path components} of $X$.
+\end{defi}
+
+\subsubsection{Connected components}
+Defining connected components (i.e.\ components with respect to regular connectivity) is slightly more involved. The key proposition we need is as follows:
+
+\begin{prop}
+ Suppose $Y_\alpha\subseteq X$ is connected for all $\alpha\in T$ and that $\bigcap_{\alpha\in T}Y_\alpha \not= \emptyset$. Then $Y = \bigcup_{\alpha\in T}Y_\alpha$ is connected.
+\end{prop}
+
+\begin{proof}
+ Suppose the contrary that $A$ and $B$ disconnect $Y$. Then $A$ and $B$ are open in $Y$. So $A = Y\cap A'$ and $B = Y\cap B'$, where $A', B'$ are open in $X$. For any fixed $\alpha$, let
+ \[
+ A_\alpha = Y_\alpha \cap A = Y_\alpha \cap A',\quad B_\alpha = Y_\alpha\cap B = Y_\alpha \cap B'.
+ \]
+ Then they are open in $Y_\alpha$. Since $Y = A\cup B$, we have
+ \[
+ Y_\alpha = Y\cap Y_\alpha = (A\cup B)\cap Y_\alpha = A_\alpha \cup B_\alpha.
+ \]
+ Since $A\cap B = \emptyset$, we have
+ \[
+ A_\alpha \cap B_\alpha = Y_\alpha \cap (A\cap B) = \emptyset.
+ \]
+ So $A_\alpha, B_\alpha$ are disjoint. So $Y_\alpha$ is connected but is the disjoint union of open subsets $A_\alpha, B_\alpha$.
+
+ By definition of connectivity, this can only happen if $A_\alpha = \emptyset$ or $B_\alpha = \emptyset$.
+
+ However, by assumption, $\displaystyle\bigcap_{\alpha \in T}Y_\alpha \not= \emptyset$. So pick $\displaystyle y\in \bigcap_{\alpha\in T}Y_\alpha$. Since $y\in Y$, either $y\in A$ or $y\in B$. wlog, assume $y\in A$. Then $y\in Y_\alpha$ for all $\alpha$ implies that $y\in A_\alpha$ for all $\alpha$. So $A_\alpha$ is non-empty for all $\alpha$. So $B_\alpha$ is empty for all $\alpha$. So $B = \emptyset$.
+
+ So $A$ and $B$ did not disconnect $Y$ after all. Contradiction.
+\end{proof}
+
+Using this lemma, we can define connected components:
+
+\begin{defi}[Connected component]
+ If $x\in X$, define
+ \[
+ \mathcal{C}(x) = \{A\subseteq X: x\in A\text{ and }A\text{ is connected}\}.
+ \]
+ Then $\displaystyle C(x) = \bigcup_{A\in \mathcal{C}(x)} A$ is the \emph{connected component} of $x$.
+\end{defi}
+$C(x)$ is the largest connected subset of $X$ containing $x$. To show this, we first note that $\{x\}\in \mathcal{C}(x)$. So $x\in C(x)$. To show that it is connected, just note that $x\in \bigcap_{A\in \mathcal{C}(x)}A$. So $\bigcap_{A\in \mathcal{C}(x)}A$ is non-empty. By our previous proposition, this implies that $C(x)$ is connected.
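In fact, each connected component is closed: the closure of a connected set is connected (if $A$ and $B$ disconnected $\overline{C(x)}$, then $A\cap C(x)$ and $B\cap C(x)$ would disconnect $C(x)$, both being non-empty since $C(x)$ is dense in $\overline{C(x)}$ and $A, B$ are open). So $\overline{C(x)}\in \mathcal{C}(x)$, and hence $\overline{C(x)}\subseteq C(x)$ by maximality.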
+
+\begin{lemma}
+ If $y\in C(x)$, then $C(y) = C(x)$.
+\end{lemma}
+
+\begin{proof}
+ Since $y\in C(x)$ and $C(x)$ is connected, $C(x) \subseteq C(y)$. So $x\in C(y)$. Then $C(y)\subseteq C(x)$. So $C(x) = C(y)$.
+\end{proof}
+
It follows that the relation ``$x\sim y$ if $x \in C(y)$'' is an equivalence relation, and the connected components of $X$ are its equivalence classes.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Let $X = (-\infty, 0) \cup (0, \infty)\subseteq \R$. Then the connected components are $(-\infty, 0)$ and $(0, \infty)$, which are also the path components.
 \item Let $X = \Q\subseteq \R$. Then $C(x) = \{x\}$ for all $x\in X$: any subset containing two points $p < q$ is disconnected by cutting at an irrational between them. In this case, we say $X$ is \emph{totally disconnected}.
+ \end{itemize}
+\end{eg}
Note that $C(x)$ and $X\setminus C(x)$ need not disconnect $X$, even though this is the case in our first example. For them to do so, we would also need $C(x)$ and $X\setminus C(x)$ to be open. For example, in the second example above, $C(x) = \{x\}$ is not open.
+
+It is also important to note that path components need not be equal to the connected components, as illustrated by the following example. However, since path connected spaces are connected, the path component containing $x$ must be a subset of $C(x)$.
+
+\begin{eg}
+ Let $Y = \{(0, y): y\in \R\}\subseteq \R^2$ be the $y$ axis.
+
+ Let $Z = \{(x, \frac{1}{x}\sin \frac{1}{x}): x\in (0, \infty)\}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$y$};
+
+ \draw [red] (0, -2.5) -- (0, 2.5) node [left] {$Y$};
+
+ \draw [blue, domain=0.01:3, samples=300] plot (\x, {2.5 * exp(-\x) * sin(1 / \x r)}) node [anchor = south west] {$Z$};
+ \end{tikzpicture}
+ \end{center}
+ Let $X = Y\cup Z\subseteq \R^2$. We claim that $Y$ and $Z$ are the path components of $X$. Since $Y$ and $Z$ are individually path connected, it suffices to show that there is no continuous $\gamma: [0, 1]\to X$ with $\gamma(0) = (0, 0)$, $\gamma(1) = (1,\sin 1)$.
+
+ Suppose $\gamma$ existed. Then the function $\pi_2\circ \gamma: [0, 1] \to \R$ projecting the path to the $y$ direction is continuous. So it is bounded. Let $M$ be such that $\pi_2\circ \gamma (t) \leq M$ for all $t\in [0, 1]$. Let $W = X\cap (\R \times (-\infty, M])$ be the part of $X$ that lies below $y = M$. Then $\im \gamma \subseteq W$.
+
 However, $W$ is disconnected: pick $t_0$ with $\frac{1}{t_0} \sin\frac{1}{t_0} > M$ (possible since $\frac{1}{x}\sin\frac{1}{x}$ is unbounded as $x\to 0^+$). Then $W \cap ((-\infty, t_0)\times \R)$ and $W\cap ((t_0, \infty)\times \R)$ disconnect $W$. This is a contradiction, since $\gamma$ is continuous and $[0, 1]$ is connected.
+
+ We also claim that $X$ is connected: suppose $A$ and $B$ disconnect $X$. Then since $Y$ and $Z$ are connected, either $Y\subseteq A$ or $Y\subseteq B$; $Z\subseteq A$ or $Z\subseteq B$. If both $Y\subseteq A, Z\subseteq A$, then $B=\emptyset$, which is not possible.
+
+ So wlog assume $A = Y$, $B = Z$. This is also impossible, since $Y$ is not open in $X$ as it is not a union of balls (any open ball containing a point in $Y$ will also contain a point in $Z$). Hence $X$ must be connected.
+\end{eg}
+
+Finally, recall that we showed that path-connected subsets are connected. While the converse is not true in general, there are special cases where it is true.
+\begin{prop}
+ If $U\subseteq \R^n$ is open and connected, then it is path-connected.
+\end{prop}
+
+\begin{proof}
+ Let $A$ be a path component of $U$. We first show that $A$ is open.
+
+ Let $a\in A$. Since $U$ is open, $\exists \varepsilon > 0$ such that $B_\varepsilon(a) \subseteq U$. We know that $B_\varepsilon(a) \simeq \Int(D^n)$ is path-connected (e.g.\ use line segments connecting the points). Since $A$ is a path component and $a\in A$, we must have $B_\varepsilon(a) \subseteq A$. So $A$ is an open subset of $U$.
+
 Now suppose $b\in U\setminus A$. Then since $U$ is open, $\exists \varepsilon > 0$ such that $B_\varepsilon(b) \subseteq U$. Since $B_\varepsilon(b)$ is path-connected, if $B_\varepsilon(b) \cap A \not= \emptyset$, then $B_\varepsilon(b)\subseteq A$. But this implies $b\in A$, which is a contradiction. So $B_\varepsilon(b) \cap A = \emptyset$. So $B_\varepsilon(b) \subseteq U\setminus A$. Then $U\setminus A$ is open.
+
+ So $A, U\setminus A$ are disjoint open subsets of $U$. Since $U$ is connected, we must have $U\setminus A$ empty (since $A$ is not). So $U = A$ is path-connected.
+\end{proof}
+
+\section{Compactness}
+Compactness is an important concept in topology. It can be viewed as a generalization of being ``closed and bounded'' in $\R$. Alternatively, it can also be viewed as a generalization of being finite. Compact sets tend to have a lot of really nice properties. For example, if $X$ is compact and $f: X \to \R$ is continuous, then $f$ is bounded and attains its bound.
+
There are two different definitions of compactness: one based on open covers (which we will come to shortly), and the other based on sequences. In metric spaces, these two definitions are equivalent. However, in general topological spaces, these notions can differ. The first is just known as ``compactness'' and the second is known as ``sequential compactness''.
+
+The actual definition of compactness is rather weird and unintuitive, since it is an idea we haven't seen before. To quote Qiaochu Yuan's math.stackexchange answer (\url{http://math.stackexchange.com/a/371949}),
+
+\begin{quotation}
+ The following story may or may not be helpful. Suppose you live in a world where there are two types of animals: Foos, which are red and short, and Bars, which are blue and tall. Naturally, in your language, the word for Foo over time has come to refer to things which are red and short, and the word for Bar over time has come to refer to things which are blue and tall. (Your language doesn't have separate words for red, short, blue, and tall.)
+
+ One day a friend of yours tells you excitedly that he has discovered a new animal. ``What is it like?'' you ask him. He says, ``well, it's sort of Foo, but\ldots''
+
+ The reason he says it's sort of Foo is that it's short. However, it's not red. But your language doesn't yet have a word for ``short,'' so he has to introduce a new word --- maybe ``compact''\ldots
+
+ \separator
+
+ The situation with compactness is sort of like the above. It turns out that finiteness, which you think of as one concept (in the same way that you think of ``Foo'' as one concept above), is really two concepts: \textbf{discreteness} and \textbf{compactness}. You've never seen these concepts separated before, though. When people say that compactness is like finiteness, they mean that compactness captures part of what it means to be finite in the same way that shortness captures part of what it means to be Foo.
+
+ But in some sense you've never encountered the notion of compactness by itself before, isolated from the notion of discreteness (in the same way that above you've never encountered the notion of shortness by itself before, isolated from the notion of redness). This is just a new concept and you will to some extent just have to deal with it on its own terms until you get comfortable with it.
+\end{quotation}
+
+\subsection{Compactness}
+\begin{defi}[Open cover]
+ Let $\cU\subseteq \P(X)$ be a topology on $X$. An \emph{open cover} of $X$ is a subset $\mathcal{V}\subseteq \cU$ such that
+ \[
+ \bigcup_{V\in \mathcal{V}} V = X.
+ \]
+ We say $\mathcal{V}$ \emph{covers} $X$.
+
+ If $\mathcal{V}'\subseteq \mathcal{V}$, and $\mathcal{V}'$ covers $X$, then we say $\mathcal{V}'$ is a \emph{subcover} of $\mathcal{V}$.
+\end{defi}
+
+\begin{defi}[Compact space]
+ A topological space $X$ is \emph{compact} if every open cover $\mathcal{V}$ of $X$ has a finite subcover $\mathcal{V}' = \{V_1, \cdots, V_n\} \subseteq \mathcal{V}$.
+\end{defi}
Note that some people (especially algebraic geometers) call this notion ``quasi-compact'', and reserve the name ``compact'' for ``quasi-compact and Hausdorff''. We will not adopt this convention.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item If $X$ is finite, then $\P(X)$ is finite. So any open cover of $X$ is finite. So $X$ is compact.
 \item Let $X = \R$ and $\mathcal{V} = \{(-R, R) : R\in \R, R > 0\}$. Then this is an open cover with no finite subcover. So $\R$ is not compact. Hence no open interval is compact, since open intervals are homeomorphic to $\R$.
 \item Let $X = [0, 1]\cap \Q$. Let
 \[
 U_n = X\setminus(\alpha - 1/n, \alpha + 1/n)
 \]
 for some irrational $\alpha$ in $(0, 1)$ (e.g.\ $\alpha = 1/\sqrt{2}$).

 Then $\bigcup_{n > 0} U_n = X$ since $\alpha$ is irrational. So $\mathcal{V} = \{U_n: n\in \Z_{>0}\}$ is an open cover of $X$. Since the $U_n$ are nested, any finite subcollection is contained in a single $U_N$, which misses the rationals within $1/N$ of $\alpha$. So $\mathcal{V}$ has no finite subcover, and $X$ is not compact.
+ \end{enumerate}
+\end{eg}
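In the same spirit, here is one more worked non-example.
\begin{eg}
 $(0, 1]$ is not compact: $\mathcal{V} = \{(1/n, 1]: n\in \Z_{>0}\}$ is an open cover, since each $x\in (0, 1]$ lies in $(1/n, 1]$ once $1/n < x$. But any finite subcollection is contained in a single $(1/N, 1]$, which misses the points in $(0, 1/N]$. So $\mathcal{V}$ has no finite subcover.
\end{eg}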
+
+\begin{thm}
+ $[0, 1]$ is compact.
+\end{thm}
+Again, since this is not true for $[0, 1]\cap \Q$, we must use a special property of the reals.
+
+\begin{proof}
+ Suppose $\mathcal{V}$ is an open cover of $[0, 1]$. Let
+ \[
 A = \{a\in [0, 1]: [0, a] \text{ can be covered by finitely many members of }\mathcal{V}\}.
+ \]
 First we show that $A$ is non-empty. Since $\mathcal{V}$ covers $[0, 1]$, there is some $V_0\in \mathcal{V}$ that contains $0$. So $\{V_0\}$ covers $\{0\}$, and hence $0\in A$.
+
+ Next we note that by definition, if $0\leq b \leq a$ and $a\in A$, then $b\in A$.
+
+ Now let $\alpha = \sup A$. Suppose $\alpha < 1$. Then $\alpha \in [0, 1]$.
+
 Since $\mathcal{V}$ covers $[0, 1]$, pick $V_\alpha\in \mathcal{V}$ with $\alpha \in V_\alpha$. Since $V_\alpha$ is open, there is some $\varepsilon > 0$ such that $B_\varepsilon(\alpha) \subseteq V_\alpha$. By definition of $\alpha$, we must have $\alpha - \varepsilon/2\in A$. So $[0, \alpha - \varepsilon/2]$ has a finite subcover. Add $V_\alpha$ to that subcover to get a finite subcover of $[0, \alpha + \varepsilon/2]$, contradicting $\alpha = \sup A$ (technically, it will be a finite subcover of $[0, \eta]$ for $\eta = \min(\alpha + \varepsilon/2, 1)$, in case $\alpha + \varepsilon/2$ gets too large).
+
+ So we must have $\alpha = \sup A = 1$.
+
+ Now we argue as before: $\exists V_1 \in \mathcal{V}$ such that $1 \in V_1$ and $\exists \varepsilon > 0$ with $(1 - \varepsilon, 1] \subseteq V_1$. Since $1 - \varepsilon \in A$, there exists a finite $\mathcal{V}' \subseteq \mathcal{V}$ which covers $[0, 1 - \varepsilon/2]$. Then $\mathcal{W} = \mathcal{V}' \cup \{V_1\}$ is a finite subcover of $\mathcal{V}$.
+\end{proof}
+
+We mentioned that compactness is a generalization of ``closed and bounded''. We will now show that compactness is indeed in some ways related to closedness.
+\begin{prop}
+ If $X$ is compact and $C$ is a closed subset of $X$, then $C$ is also compact.
+\end{prop}
+\begin{proof}
+ To prove this, given an open cover of $C$, we need to find a finite subcover. To do so, we need to first convert it into an open cover of $X$. We can do so by adding $X\setminus C$, which is open since $C$ is closed. Then since $X$ is compact, we can find a finite subcover of this, which we can convert back to a finite subcover of $C$.
+
 Formally, suppose $\mathcal{V}$ is an open cover of $C$. Say $\mathcal{V} = \{V_\alpha: \alpha \in T\}$. For each $\alpha$, since $V_\alpha$ is open in $C$, $V_\alpha = C\cap V_\alpha'$ for some $V_\alpha'$ open in $X$. Also, since $\bigcup_{\alpha \in T}V_\alpha = C$, we have $\bigcup_{\alpha\in T}V_\alpha' \supseteq C$.
+
+ Since $C$ is closed, $U = X \setminus C$ is open in $X$. So $\mathcal{W} = \{V_\alpha': \alpha \in T\} \cup \{U\}$ is an open cover of $X$. Since $X$ is compact, $\mathcal{W}$ has a finite subcover $\mathcal{W}' = \{V_{\alpha_1}', \cdots, V_{\alpha_n}', U\}$ ($U$ may or may not be in there, but it doesn't matter). Now $U\cap C = \emptyset$. So $\{V_{\alpha_1},\cdots, V_{\alpha_n}\}$ is a finite subcover of $C$.
+\end{proof}
+
+The converse is not always true, but holds for Hausdorff spaces.
+\begin{prop}
+ Let $X$ be a Hausdorff space. If $C\subseteq X$ is compact, then $C$ is closed in $X$.
+\end{prop}
+
+\begin{proof}
+ Let $U = X\setminus C$. We will show that $U$ is open.
+
+ For any $x$, we will find a $U_x$ such that $U_x\subseteq U$ and $x\in U_x$. Then $U = \bigcup_{x\in U}U_x$ will be open since it is a union of open sets.
+
+ To construct $U_x$, fix $x\in U$. Since $X$ is Hausdorff, for each $y\in C$, $\exists U_{xy}, W_{xy}$ open neighbourhoods of $x$ and $y$ respectively with $U_{xy}\cap W_{xy} = \emptyset$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray!50!white] circle [radius = 1];
+ \draw (-1.5, -1.5) rectangle (3, 1.5);
+
+ \draw [red, fill=red!50!yellow, opacity=0.3] (0.8, 0) circle [radius=0.7];
+ \node [red] at (0.8, 0.7) [above] {$W_{xy}$};
+ \draw [red, fill=red!50!yellow, opacity=0.5] (2.2, 0.1) circle [radius=0.5];
+ \node [red] at (2.2, 0.6) [above] {$U_{xy}$};
+
+ \node at (0, 1) [below] {$C$};
+ \node [circ] at (2, 0) {};
+ \node at (2, 0) [right] {$x$};
+
+ \node [circ] at (0.7, 0.1) {};
+ \node at (0.7, 0.1) [right] {$y$};
+
+ \end{tikzpicture}
+ \end{center}
+ Then $\mathcal{W} = \{W_{xy}\cap C: y\in C\}$ is an open cover of $C$. Since $C$ is compact, there exists a finite subcover $\mathcal{W}' = \{W_{xy_1}\cap C, \cdots, W_{xy_n}\cap C\}$.
+
+ Let $U_x = \bigcap_{i = 1}^n U_{xy_i}$. Then $U_x$ is open since it is a finite intersection of open sets. To show $U_x \subseteq U$, note that $W_x = \bigcup_{i = 1}^n W_{xy_i} \supseteq C$ since $\{W_{xy_i}\cap C\}$ is an open cover. We also have $W_x \cap U_x = \emptyset$. So $U_x \subseteq U$. So done.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray!50!white] circle [radius = 1];
+ \draw (-1.5, -1.5) rectangle (3, 1.5);
+
+ \draw [red, fill=red!50!yellow, opacity=0.3] (0.8, 0) circle [radius=0.7];
+ \draw [red, fill=red!50!yellow, opacity=0.5] (2.2, 0.1) circle [radius=0.5];
+
+ \draw [blue, fill=red!50!blue, opacity=0.3] (-0.1, 0.3) circle [radius=0.9];
+ \draw [blue, fill=red!50!blue, opacity=0.5] (1.7, 0) circle [radius=0.4];
+
+ \draw [green, fill=green!50!blue, opacity=0.3] (-0.2, -0.4) circle [radius=1];
+ \draw [green, fill=green!50!blue, opacity=0.5] (2, -0.2) circle [radius=0.4];
+
+ \draw [red!50!black, thick] (0.709137, 0.694078) arc (25.97:185.97:0.9) arc (142.67:342.63:1) arc (266.27:457.46:0.7) node [anchor = south west] {$W_x$};
+
+ \draw [blue!50!black, thick] (2.04817, 0.197101) arc (83.08:138.34:0.4) arc (183.91:237.96:0.5) arc (305.94:389.51:0.4) node [above] {$U_x$};
+ \node at (0, 1) [below] {$C$};
+ \node [circ] at (2, 0) {};
+ \node at (2, 0) [right] {$x$};
+
+ \end{tikzpicture}
+ \end{center}
+\end{proof}
+
+After relating compactness to closedness, we will relate it to boundedness. First we need to define boundedness for general metric spaces.
+\begin{defi}[Bounded metric space]
+ A metric space $(X, d)$ is \emph{bounded} if there exists $M\in \R$ such that $d(x, y) \leq M$ for all $x, y\in X$.
+\end{defi}
+
+\begin{eg}
+ $A\subseteq \R$ is bounded iff $A\subseteq [-N, N]$ for some $N\in \R$.
+\end{eg}
+Note that being bounded is \emph{not} a topological property. For example, $(0, 1)\simeq \R$ but $(0, 1)$ is bounded while $\R$ is not. It depends on the metric $d$, not just the topology it induces.
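In fact, any metric space can be given an equivalent bounded metric: if $(X, d)$ is a metric space, then $d'(x, y) = \min(d(x, y), 1)$ is also a metric on $X$, it is bounded by $1$, and it induces the same topology as $d$, since
\[
 B^{d'}_\varepsilon(x) = B^{d}_\varepsilon(x)\quad\text{whenever }\varepsilon \leq 1.
\]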
+
+\begin{prop}
+ A compact metric space $(X, d)$ is bounded.
+\end{prop}
+
+\begin{proof}
 Pick $x\in X$. Then $\mathcal{V} = \{B_r(x): r\in \R^+\}$ is an open cover of $X$. Since $X$ is compact, there is a finite subcover $\{B_{r_1}(x), \cdots, B_{r_n}(x)\}$.
+
+ Let $R = \max\{r_1, \cdots, r_n\}$. Then $d(x, y) < R$ for all $y\in X$. So for all $y, z\in X$,
+ \[
 d(y, z) \leq d(y, x) + d(x, z) < 2R.
+ \]
+ So $X$ is bounded.
+\end{proof}
+
+\begin{thm}[Heine-Borel]
+ $C\subseteq \R$ is compact iff $C$ is closed and bounded.
+\end{thm}
+
+\begin{proof}
+ Since $\R$ is a metric space (hence Hausdorff), $C$ is also a metric space.
+
+ So if $C$ is compact, $C$ is closed in $\R$, and $C$ is bounded, by our previous two propositions.
+
+ Conversely, if $C$ is closed and bounded, then $C\subseteq [-N, N]$ for some $N\in \R$. Since $[-N, N] \simeq [0, 1]$ is compact, and $C = C\cap [-N, N]$ is closed in $[-N, N]$, $C$ is compact.
+\end{proof}
+
+\begin{cor}
+ If $A\subseteq \R$ is compact, $\exists \alpha \in A$ such that $\alpha \geq a$ for all $a\in A$.
+\end{cor}
+
+\begin{proof}
+ Since $A$ is compact, it is bounded. Let $\alpha = \sup A$. Then by definition, $\alpha \geq a$ for all $a\in A$. So it is enough to show that $\alpha \in A$.
+
+ Suppose $\alpha \not\in A$. Then $\alpha \in \R \setminus A$. Since $A$ is compact, it is closed in $\R$. So $\R \setminus A$ is open. So $\exists \varepsilon > 0$ such that $B_\varepsilon(\alpha) \subseteq \R\setminus A$, which implies that $a \leq \alpha - \varepsilon$ for all $a\in A$. This contradicts the assumption that $\alpha = \sup A$. So we can conclude $\alpha\in A$.
+\end{proof}
+
+We call $\alpha = \max A$ the maximum element of $A$.
+
We have previously proved that if $f: X\to Y$ is continuous and $X$ is connected, then $\im f\subseteq Y$ is connected. The same statement is true for compactness.
+\begin{prop}
+ If $f: X \to Y$ is continuous and $X$ is compact, then $\im f \subseteq Y$ is also compact.
+\end{prop}
+
+\begin{proof}
+ Suppose $\mathcal{V} = \{V_\alpha: \alpha \in T\}$ is an open cover of $\im f$. Since $V_\alpha$ is open in $\im f$, we have $V_\alpha = \im f\cap V_\alpha'$, where $V_\alpha'$ is open in $Y$. Then
+ \[
+ W_\alpha = f^{-1}(V_\alpha) = f^{-1}(V_\alpha')
+ \]
+ is open in $X$. If $x\in X$ then $f(x)$ is in $V_\alpha$ for some $\alpha$, so $x\in W_\alpha$. Thus $\mathcal{W} = \{W_\alpha: \alpha \in T\}$ is an open cover of $X$.
+
+ Since $X$ is compact, there's a finite subcover $\{W_{\alpha_1}, \cdots, W_{\alpha_n}\}$ of $\mathcal{W}$.
+
+ Since $V_\alpha \subseteq \im f$, $f(W_\alpha) = f(f^{-1}(V_\alpha)) = V_\alpha$. So
+ \[
+ \{V_{\alpha_1}, \cdots, V_{\alpha_n}\}
+ \]
+ is a finite subcover of $\mathcal{V}$.
+\end{proof}
+
+\begin{thm}[Maximum value theorem]
+ If $f: X \to \R$ is continuous and $X$ is compact, then $\exists x\in X$ such that $f(x) \geq f(y)$ for all $y\in X$.
+\end{thm}
+
+\begin{proof}
+ Since $X$ is compact, $\im f$ is compact. Let $\alpha = \max \{\im f\}$. Then $\alpha \in \im f$. So $\exists x\in X$ with $f(x) = \alpha$. Then by definition $f(x) \geq f(y)$ for all $y\in X$.
+\end{proof}
+
+\begin{cor}
+ If $f: [0, 1] \to \R$ is continuous, then $\exists x\in [0, 1]$ such that $f(x) \geq f(y)$ for all $y\in [0, 1]$
+\end{cor}
+
+\begin{proof}
+ $[0, 1]$ is compact.
+\end{proof}
+
+\subsection{Products and quotients}
+\subsubsection{Products}
+Recall the product topology on $X\times Y$. $U\subseteq X\times Y$ is open if it is a union of sets of the form $V\times W$ such that $V\subseteq X, W\subseteq Y$ are open.
+
+The major takeaway of this section is the following theorem:
+\begin{thm}
+ If $X$ and $Y$ are compact, then so is $X\times Y$.
+\end{thm}
+
+\begin{proof}
+ First consider the special type of open cover $\mathcal{V}$ of $X\times Y$ such that every $U\in \mathcal{V}$ has the form $U = V\times W$, where $V\subseteq X$ and $W\subseteq Y$ are open.
+
+ For every $(x, y) \in X\times Y$, there is $U_{xy}\in \mathcal{V}$ with $(x, y)\in U_{xy}$. Write
+ \[
+ U_{xy} = V_{xy}\times W_{xy},
+ \]
+ where $V_{xy}\subseteq X$, $W_{xy}\subseteq Y$ are open, $x\in V_{xy}, y\in W_{xy}$.
+
+ Fix $x\in X$. Then $\mathcal{W}_x = \{W_{xy}: y\in Y\}$ is an open cover of $Y$. Since $Y$ is compact, there is a finite subcover $\{W_{xy_1}, \cdots, W_{xy_n}\}$.
+
+ Then $V_x = \bigcap_{i = 1}^n V_{xy_i}$ is a finite intersection of open sets. So $V_x$ is open in $X$. Moreover, $\mathcal{V}_x = \{U_{xy_1},\cdots , U_{xy_n}\}$ covers $V_x \times Y$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [red] (1.8, 0) node [black, below] {$x$} -- (1.8, 4);
+ \draw [blue] (1.45, 0) rectangle (2.25, 0.75);
+ \draw [blue] (1.5, 0.4) rectangle (2.3, 1);
+ \draw [blue] (1.4, 0.8) rectangle (2.1, 2);
+ \draw [blue] (1.6, 1.4) rectangle (2.5, 3.2);
+ \draw [blue] (1.2, 3) rectangle (2.2, 4) node [anchor = north west] {$U_{xy_i}$};
+
+ \draw [yellow!30!red] (1.62, 0) rectangle (2.08, 4) node [above] {$V_x\times Y$};
+ \draw (0, 0) rectangle (4, 4);
+ \node at (4, 0) [right] {$X$};
+ \node at (0, 4) [above] {$Y$};
+ \end{tikzpicture}
+ \end{center}
+ Now $\mathcal{O} = \{V_x: x\in X\}$ is an open cover of $X$. Since $X$ is compact, there is a finite subcover $\{V_{x_1}, \cdots, V_{x_m}\}$. Then $\mathcal{V}' = \bigcup_{i = 1}^m \mathcal{V}_{x_i}$ is a finite subset of $\mathcal{V}$, which covers all of $X\times Y$.
+
 In the general case, suppose $\mathcal{V}$ is an open cover of $X\times Y$. For each $(x, y) \in X\times Y$, $\exists U_{xy}\in \mathcal{V}$ with $(x, y)\in U_{xy}$. Since $U_{xy}$ is open, $\exists V_{xy}\subseteq X, W_{xy}\subseteq Y$ open with $V_{xy}\times W_{xy}\subseteq U_{xy}$ and $x\in V_{xy}$, $y\in W_{xy}$.
+
 Then $\mathcal{Q} = \{V_{xy}\times W_{xy}: (x, y)\in X\times Y\}$ is an open cover of $X\times Y$ of the type we already considered above. So it has a finite subcover $\{V_{x_1y_1}\times W_{x_1y_1}, \cdots, V_{x_n y_n}\times W_{x_n y_n}\}$. Now $V_{x_iy_i} \times W_{x_iy_i} \subseteq U_{x_iy_i}$. So $\{U_{x_1y_1},\cdots, U_{x_n y_n}\}$ is a finite subcover of $X\times Y$.
+\end{proof}
+
+\begin{eg}
+ The unit cube $[0, 1]^n = [0, 1]\times [0, 1]\times \cdots \times [0, 1]$ is compact.
+\end{eg}
+
+\begin{cor}[Heine-Borel in $\R^n$]
+ $C\subseteq \R^n$ is compact iff $C$ is closed and bounded.
+\end{cor}
+
+\begin{proof}
+ If $C$ is bounded, $C\subseteq [-N, N]^n$ for some $N \in \R$, which is compact. The rest of the proof is exactly the same as for $n = 1$.
+\end{proof}
+
+\subsubsection{Quotients}
It is easy to show that the quotient of a compact space is compact, since every open subset of the quotient space can be pulled back (via $\pi^{-1}$) to an open subset of the original space. Hence we can pull back an open cover of the quotient space to an open cover of the original space, take a finite subcover there, and project it back down. The details are easy to fill in.
+
+Instead of proving the above, in this section, we will prove that compact quotients have some nice properties. We start with a handy proposition.
+\begin{prop}
+ Suppose $f: X\to Y$ is a continuous bijection. If $X$ is compact and $Y$ is Hausdorff, then $f$ is a homeomorphism.
+\end{prop}
+
+\begin{proof}
 We show that $f^{-1}$ is continuous. To do this, it suffices to show $(f^{-1})^{-1}(C)$ is closed in $Y$ whenever $C$ is closed in $X$. By hypothesis, $f$ is a bijection. So $(f^{-1})^{-1}(C) = f(C)$.
+
 Suppose $C$ is closed in $X$. Since $X$ is compact, $C$ is compact. Since $f$ is continuous, $f(C) = \im (f|_C)$ is compact. Since $Y$ is Hausdorff and $f(C) \subseteq Y$ is compact, $f(C)$ is closed.
+\end{proof}
+
+We will apply this to quotients.
+
Recall that if $\sim$ is an equivalence relation on $X$ and $\pi: X\to X/{\sim}$ is the quotient map, then $f: X/{\sim}\to Y$ is continuous iff $f\circ \pi: X\to Y$ is continuous.
+
+\begin{cor}
 Suppose $f: X/{\sim} \to Y$ is a bijection, $X$ is compact, $Y$ is Hausdorff, and $f\circ \pi$ is continuous. Then $f$ is a homeomorphism.
+\end{cor}
+
+\begin{proof}
 Since $X$ is compact and $\pi: X\to X/{\sim}$ is a continuous surjection, $X/{\sim} = \im \pi$ is compact. Since $f\circ \pi$ is continuous, $f$ is continuous. So we can apply the proposition.
+\end{proof}
+
+\begin{eg}
 Let $X = D^2$ and $A = S^1 \subseteq X$. Then $f: X/A \to S^2$ by $(r, \theta) \mapsto (1, \pi r, \theta)$ in spherical coordinates is a homeomorphism.
+
+ We can check that $f$ is a continuous bijection and $D^2$ is compact. So $X/A \simeq S^2$.
+\end{eg}
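The same corollary handles other standard quotients, such as the torus.
\begin{eg}
 Let $X = [0, 1]^2$ with $\sim$ identifying opposite edges, i.e.\ $(0, y)\sim (1, y)$ and $(x, 0)\sim (x, 1)$. The map $g: X\to S^1\times S^1$ given by $g(x, y) = (e^{2\pi ix}, e^{2\pi iy})$ respects $\sim$, so it induces a bijection $f: X/{\sim} \to S^1\times S^1$ with $f\circ \pi = g$ continuous. Since $[0, 1]^2$ is compact and $S^1\times S^1$ is Hausdorff, we get $X/{\sim}\simeq S^1\times S^1$, the torus.
\end{eg}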
+
+\subsection{Sequential compactness}
+The other definition of compactness is sequential compactness. We will not do much with it, but only prove that it is the same as compactness for metric spaces.
+
+\begin{defi}[Sequential compactness]
+ A topological space $X$ is \emph{sequentially compact} if every sequence $(x_n)$ in $X$ has a convergent subsequence (that converges to a point in $X$!).
+\end{defi}
+
+\begin{eg}
+ $(0, 1) \subseteq \R$ is not sequentially compact since no subsequence of $(1/n)$ converges to any $x\in (0, 1)$.
+\end{eg}
+
+To show that sequential compactness is the same as compactness, we will first need a lemma.
+\begin{lemma}
+ Let $(x_n)$ be a sequence in a metric space $(X, d)$ and $x\in X$. Then $(x_n)$ has a subsequence converging to $x$ iff for every $\varepsilon > 0$, $x_n \in B_\varepsilon (x)$ for infinitely many $n$ $(*)$.
+\end{lemma}
+
+\begin{proof}
+ If $(x_{n_i}) \to x$, then for every $\varepsilon$, we can find $I$ such that $i > I$ implies $x_{n_i}\in B_\varepsilon (x)$ by definition of convergence. So $(*)$ holds.
+
+ Now suppose $(*)$ holds. We will construct a subsequence $x_{n_i} \to x$ inductively. Take $n_0 = 0$. Suppose we have defined $n_0, \cdots, n_{i - 1}$.
+
+ By hypothesis, $x_n \in B_{1/i}(x)$ for infinitely many $n$. Take $n_i$ to be the smallest such $n$ with $n > n_{i - 1}$.
+
+ Then $d(x_{n_i}, x) < \frac{1}{i}$ implies that $x_{n_i} \to x$.
+\end{proof}
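The inductive construction in the proof is entirely mechanical, and we can sketch it in Python (our own illustration, not part of the course). We pick the sequence $x_n = (-1)^n(1 + 1/n)$, for which $x = 1$ satisfies $(*)$ via the even-indexed terms:

```python
def subsequence_indices(seq, x, k):
    # follow the proof: n_i is the smallest n > n_{i-1}
    # with seq(n) inside the ball B_{1/i}(x)
    indices, prev = [], 0
    for i in range(1, k + 1):
        n = prev + 1
        while abs(seq(n) - x) >= 1 / i:
            n += 1
        indices.append(n)
        prev = n
    return indices

# x_n = (-1)^n (1 + 1/n): every even-indexed term is within 1/n of 1,
# so x = 1 satisfies (*) and the construction produces a subsequence -> 1
seq = lambda n: (-1) ** n * (1 + 1 / n)
idx = subsequence_indices(seq, 1.0, 5)
assert idx == [2, 4, 6, 8, 10]
```

By construction $d(x_{n_i}, x) < 1/i$, which is exactly why the extracted subsequence converges.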
+
+Here we will only prove that compactness implies sequential compactness, and the other direction is left as an exercise for the reader.
+\begin{thm}
+ If $(X, d)$ is a compact \emph{metric space}, then $X$ is sequentially compact.
+\end{thm}
+
+\begin{proof}
+ Suppose $(x_n)$ is a sequence in $X$ with no convergent subsequence. Then for any $y\in X$, there is no subsequence converging to $y$. By the lemma, there exists $\varepsilon_y > 0$ such that $x_n\in B_{\varepsilon_y} (y)$ for only finitely many $n$.
+
+ Let $U_y = B_{\varepsilon_y} (y)$. Now $\mathcal{V} = \{U_y: y\in X\}$ is an open cover of $X$. Since $X$ is compact, there is a finite subcover $\{U_{y_1}, \cdots, U_{y_m}\}$. Then $x_n \in \bigcup_{i = 1}^m U_{y_i} = X$ for only finitely many $n$. This is nonsense, since $x_n \in X$ for all $n$!
+
+ So $x_n$ must have a convergent subsequence.
+\end{proof}
+
+\begin{eg}
+ Let $X = C[0, 1]$ with the topology induced by $d_\infty$ (the uniform metric). Let
+ \[
+ f_n(x) =
+ \begin{cases}
+ nx & x\in [0, 1/n]\\
+ 2 - nx & x\in [1/n, 2/n]\\
+ 0 & x \in [2/n, 1]
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (2, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 1.5) node [above] {$y$};
+ \draw [red] (0, 0) -- (0.3, 1) -- (0.6, 0) -- (1.5, 0);
+ \end{tikzpicture}
+ \end{center}
+ Then $f_n(x) \to 0$ for all $x\in [0, 1]$. We now claim that $f_n$ has no convergent subsequence.
+
+ Suppose $f_{n_i} \to f$ in $d_\infty$. Then $f_{n_i}(x) \to f(x)$ for all $x\in [0, 1]$. But we know that $f_{n_i}(x) \to 0$ for all $x\in [0, 1]$. So $f(x) = 0$. However, $d_{\infty}(f_{n_i}, 0) = 1$ for all $i$. So $f_{n_i}\not \to 0$, contradiction.
+
+ It follows that the closed unit ball $\bar{B}_1(0)\subseteq X$ is not sequentially compact. So it is not compact.
+\end{eg}
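As a quick numerical sanity check (our own, not part of the notes), we can evaluate $f_n$ on a grid: $d_\infty(f_n, 0) = 1$ for every $n$, even though $f_n(x) \to 0$ for each fixed $x$:

```python
def f(n, x):
    # the piecewise-linear tent function f_n from the example,
    # with peak f_n(1/n) = 1
    if x <= 1 / n:
        return n * x
    elif x <= 2 / n:
        return 2 - n * x
    else:
        return 0.0

def d_inf(n, grid_size=1000):
    # estimate the uniform distance d_inf(f_n, 0) on a grid that
    # explicitly includes the peak at x = 1/n
    pts = [k / grid_size for k in range(grid_size + 1)] + [1 / n]
    return max(abs(f(n, x)) for x in pts)

# d_inf(f_n, 0) = 1 for every n, yet for each fixed x the values
# f_n(x) vanish once 2/n < x
assert all(abs(d_inf(n) - 1.0) < 1e-9 for n in (2, 7, 49))
assert f(1000, 0.5) == 0.0
```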
+
+\subsection{Completeness}
+The course ends with a small discussion about completeness. This really should belong to the chapter on metric spaces, since this is all about metric spaces. However, we put it here because the (only) proposition we have is about compactness, which was just defined recently.
+
+Similar to what we did in Analysis, we can define Cauchy sequences.
+\begin{defi}[Cauchy sequence]
+ Let $(X, d)$ be a metric space. A sequence $(x_n)$ in $X$ is \emph{Cauchy} if for every $\varepsilon > 0$, $\exists N$ such that $d(x_n, x_m) < \varepsilon$ for all $n, m \geq N$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $x_n = \sum_{k = 1}^n 1/k$ is \emph{not} Cauchy.
+ \item Let $X = (0, 1) \subseteq \R$ with $x_n = \frac{1}{n}$. Then this is Cauchy but does not converge.
+ \item If $x_n \to x\in X$, then $x_n$ is Cauchy. The proof is the same as that in Analysis I.
+
+ \item Let $X = \Q \subseteq \R$. Then the sequence $(2, 2.7, 2.71, 2.718, \cdots)$ is Cauchy but does not converge in $\Q$.
+ \end{enumerate}
+\end{eg}
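Example (i) can be checked with exact rational arithmetic: $\sum_{k = N + 1}^{2N} \frac{1}{k} \geq N\cdot \frac{1}{2N} = \frac{1}{2}$, so $\varepsilon = \frac{1}{2}$ defeats every candidate $N$ in the Cauchy condition. A short Python sketch (our own illustration):

```python
from fractions import Fraction

def H(n):
    # partial sum x_n = sum_{k=1}^n 1/k, computed exactly
    return sum(Fraction(1, k) for k in range(1, n + 1))

# d(x_{2N}, x_N) = sum_{k=N+1}^{2N} 1/k >= N * 1/(2N) = 1/2 for every N,
# so with eps = 1/2 no N witnesses the Cauchy condition
assert all(H(2 * N) - H(N) >= Fraction(1, 2) for N in (1, 10, 100))
```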
+
+Exactly analogous to what we did in Analysis, we can also define a complete space.
+\begin{defi}[Complete space]
+ A metric space $(X, d)$ is \emph{complete} if every Cauchy sequence in $X$ converges to a limit in $X$.
+\end{defi}
+
+\begin{eg}
+ $(0, 1)$ and $\Q$ are not complete.
+\end{eg}
+
+\begin{prop}
+ If $X$ is a compact metric space, then $X$ is complete.
+\end{prop}
+
+\begin{proof}
+ Let $x_n$ be a Cauchy sequence in $X$. Since $X$ is sequentially compact, there is a convergent subsequence $x_{n_i}\to x$. We will show that $x_n \to x$.
+
+ Given $\varepsilon > 0$, pick $N$ such that $d(x_n, x_m) < \varepsilon/2$ for $n,m \geq N$. Pick $I$ such that $n_I \geq N$ and $d(x_{n_i}, x) < \varepsilon/2$ for all $i \geq I$. Then for $n \geq n_I$, $d(x_n, x) \leq d(x_n, x_{n_I}) + d(x_{n_I}, x) < \varepsilon$. So $x_n \to x$.
+\end{proof}
+
+\begin{cor}
+ $\R^n$ is complete.
+\end{cor}
+
+\begin{proof}
+ If $(x_i)$ is a Cauchy sequence in $\R^n$, then $(x_i) \subseteq \bar B_R (0)$ for some $R$, since Cauchy sequences are bounded. $\bar{B}_R(0)$ is compact, hence complete by the proposition. So the sequence converges.
+\end{proof}
+
+Note that completeness is not a topological property. $\R\simeq (0, 1)$ but $\R$ is complete while $(0, 1)$ is not.
+\end{document}
diff --git a/books/cam/IB_E/optimisation.tex b/books/cam/IB_E/optimisation.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6f2ae8bb8a109bef97569ed96b8e0a8649f9051c
--- /dev/null
+++ b/books/cam/IB_E/optimisation.tex
@@ -0,0 +1,1892 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Easter}
+\def\nyear {2015}
+\def\nlecturer {F.\ A.\ Fischer}
+\def\ncourse {Optimisation}
+\def\nofficial {http://www.maths.qmul.ac.uk/~ffischer/teaching/opt/}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Lagrangian methods}\\
+General formulation of constrained problems; the Lagrangian sufficiency theorem. Interpretation of Lagrange multipliers as shadow prices. Examples.\hspace*{\fill} [2]
+
+\vspace{10pt}
+\noindent\textbf{Linear programming in the nondegenerate case}\\
+Convexity of feasible region; sufficiency of extreme points. Standardization of problems, slack variables, equivalence of extreme points and basic solutions. The primal simplex algorithm, artificial variables, the two-phase method. Practical use of the algorithm; the tableau. Examples. The dual linear problem, duality theorem in a standardized case, complementary slackness, dual variables and their interpretation as shadow prices. Relationship of the primal simplex algorithm to dual problem. Two person zero-sum games.\hspace*{\fill} [6]
+
+\vspace{10pt}
+\noindent\textbf{Network problems}\\
+The Ford-Fulkerson algorithm and the max-flow min-cut theorems in the rational case. Network flows with costs, the transportation algorithm, relationship of dual variables with nodes. Examples. Conditions for optimality in more general networks; *the simplex-on-a-graph algorithm*.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Practice and applications}\\
+*Efficiency of algorithms*. The formulation of simple practical and combinatorial problems as linear programming or network problems.\hspace*{\fill} [1]}
+
+\tableofcontents
+
+\section{Introduction and preliminaries}
+\subsection{Constrained optimization}
+In optimization, the objective is to maximize or minimize some function. For example, if we are a factory, we want to minimize our cost of production. Often, our optimization is not unconstrained. Otherwise, the way to minimize costs is to produce nothing at all. Instead, there are some constraints we have to obey. This is known as \emph{constrained optimization}.
+
+\begin{defi}[Constrained optimization]
+ The general problem of \emph{constrained optimization} is
+ \begin{center}
+ minimize $f(x)$ subject to $h(x) = b$, $x\in X$
+ \end{center}
+ where $x\in \R^n$ is the \emph{vector of decision variables}, $f: \R^n \to \R$ is the \emph{objective function}, $h: \R^n \to \R^m$ and $b\in \R^m$ are the \emph{functional constraints}, and $X\subseteq \R^n$ is the \emph{regional constraint}.
+\end{defi}
+Note that everything above is a vector, but we do not bold our vectors. This is since almost everything we work with is going to be a vector, and there isn't much point in bolding them.
+
+This is indeed the most general form of the problem. If we want to maximize $f$ instead of minimize, we can minimize $-f$. If we want our constraints to be an inequality in the form $h(x) \geq b$, we can introduce a \emph{slack variable} $z$, make the functional constraint as $h(x) - z = b$, and add the regional constraint $z \geq 0$. So all is good, and this is in fact the most general form.
+
+Linear programming is, surprisingly, the case where everything is linear. We can write our problem as:
+\begin{center}
+ minimize $c^Tx$ subject to
+ \begin{align*}
+ a_i^Tx &\geq b_i \text{ for all }i \in M_1\\
+ a_i^Tx &\leq b_i \text{ for all }i \in M_2\\
+ a_i^Tx &= b_i \text{ for all }i \in M_3\\
+ x_i &\geq 0 \text{ for all }i \in N_1\\
+ x_i &\leq 0 \text{ for all }i \in N_2
+ \end{align*}
+\end{center}
+where we've explicitly written out the different forms the constraints can take.
+
+This is too clumsy. Instead, we can perform some tricks and turn them into a nicer form:
+\begin{defi}[General and standard form]
+ The \emph{general form} of a linear program is
+ \begin{center}
+ minimize $c^T x$ subject to $Ax \geq b$, $x \geq 0$
+ \end{center}
+ The \emph{standard form} is
+ \begin{center}
+ minimize $c^T x$ subject to $Ax = b$, $x \geq 0$.
+ \end{center}
+\end{defi}
+It takes some work to show that these are indeed the most general forms. The equivalence between the two forms can be done via slack variables, as described above. We still have to check some more cases. For example, this form says that $x \geq 0$, i.e.\ all decision variables have to be non-negative. What if we want $x$ to be unconstrained, i.e.\ able to take any value we like? We can split $x$ into two parts, $x = x^+ - x^-$, where each part has to be non-negative. Then $x$ can take any positive or negative value.
+
+Note that when I said ``nicer'', I don't mean that turning a problem into this form necessarily makes it easier to solve \emph{in practice}. However, it will be much easier to work with when developing general theory about linear programs.
+
+\begin{eg}
+ We want to minimize $-(x_1 + x_2)$ subject to
+ \begin{align*}
+ x_1 + 2x_2 &\leq 6\\
+ x_1 - x_2 &\leq 3\\
+ x_1, x_2 &\geq 0
+ \end{align*}
+ Since we are lucky to have a 2D problem, we can draw this out.
+ \begin{center}
+ \begin{tikzpicture}
+ \path [fill=gray!50!white] (0, 0) -- (3, 0) -- (4, 1) -- (0, 3) -- cycle;
+
+ \draw [->] (-1, 0) -- (6, 0) node [right] {$x_1$};
+ \draw [->] (0, -1) -- (0, 4) node [above] {$x_2$};
+
+ \draw (2, -1) -- (5, 2) node [right] {$x_1 - x_2 = 3$};
+ \draw (-1, 3.5) -- (5, 0.5) node [right] {$x_1 + 2x_2 = 6$};
+ \draw [->] (0, 0) -- (-0.5, -0.5) node [anchor=north east] {$c$};
+
+ \draw [dashed] (-1, 1) -- (1, -1) node [below] {\tiny $-(x_1 + x_2) = 0$};
+ \draw [dashed] (-1, 3) -- (3, -1) node [below] {\tiny $-(x_1 + x_2) = -2$};
+ \draw [dashed] (2, 3) -- (6, -1) node [below] {\tiny $-(x_1 + x_2) = -5$};
+ \end{tikzpicture}
+ \end{center}
+ The shaded region is the feasible region, and $c$ is our \emph{cost vector}. The dashed lines, which are orthogonal to $c$, are lines on which the objective function is constant. To minimize our objective function, we want to push such a line as far in the $-c$ direction (up and to the right) as possible, which is clearly achieved at the intersection of the two boundary lines.
+\end{eg}
+Now we have a problem. In the general case, we have absolutely \emph{no idea} how to solve it. What we \emph{do} know, is how to do \emph{un}constrained optimization.
+
+\subsection{Review of unconstrained optimization}
+Let $f: \R^n \to \R$, $x^*\in \R^n$. A necessary condition for $x^*$ to minimize $f$ over $\R^n$ is $\nabla f(x^*) = 0$, where
+\[
+ \nabla f = \left(\frac{\partial f}{\partial x_1}, \cdots, \frac{\partial f}{\partial x_n}\right)^T
+\]
+is the gradient of $f$.
+
+However, this is obviously not a sufficient condition. Any such point can be a maximum, minimum or a saddle. Here we need a notion of convexity:
+\begin{defi}[Convex region]
+ A region $S\subseteq \R^n$ is \emph{convex} iff for all $\delta\in [0, 1]$, $x, y\in S$, we have $\delta x + (1 - \delta) y \in S$. Alternatively, if we take any two points in $S$, the line segment joining them lies completely within the region.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[shift={(-2, 0)}]
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0, 0) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \draw (-0.5, -0.5) node [circ] {} -- (0.5, -0.5) node [circ] {};
+
+ \node at (0, -1.5) {non-convex};
+ \end{scope}
+
+ \begin{scope}[shift={(2, 0)}]
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \node at (0, -1.5) {convex};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+\begin{defi}[Convex function]
+ A function $f: S\to \R$ is \emph{convex} if $S$ is convex, and for all $x, y\in S$, $\delta\in [0, 1]$, we have $\delta f(x) + (1 - \delta)f(y) \geq f(\delta x + (1 - \delta)y)$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw(-2, 4) -- (-2, 0) -- (2, 0) -- (2, 4);
+ \draw (-1.3, 0.1) -- (-1.3, -0.1) node [below] {$x$};
+ \draw (1.3, 0.1) -- (1.3, -0.1) node [below] {$y$};
+ \draw (-1.7, 2) parabola bend (-.2, 1) (1.7, 3.3);
+ \draw [dashed] (-1.3, 0) -- (-1.3, 1.53) node [circ] {};
+ \draw [dashed] (1.3, 0) -- (1.3, 2.42) node [circ] {};
+ \draw (-1.3, 1.53) -- (1.3, 2.42);
+ \draw [dashed] (0, 0) node [below] {\tiny $\delta x + (1 - \delta)y$} -- (0, 1.975) node [above] {\tiny$\delta f(x) + (1 - \delta) f(y)\quad\quad\quad\quad\quad\quad$} node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ A function is \emph{concave} if $-f$ is convex. Note that a function can be neither concave nor convex.
+\end{defi}
+
+We have the following lemma:
+\begin{lemma}
+ Let $f$ be twice differentiable. Then $f$ is convex on a convex set $S$ if the Hessian matrix
+ \[
+ Hf_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}
+ \]
+ is positive semidefinite for all $x\in S$, where this fancy term means:
+\end{lemma}
+
+\begin{defi}[Positive-semidefinite]
+ A matrix $H$ is \emph{positive semi-definite} if $v^T Hv \geq 0$ for all $v\in \R^n$.
+\end{defi}
+
+Which leads to the following theorem:
+\begin{thm}
+ Let $X\subseteq \R^n$ be convex, $f: \R^n \to \R$ be twice differentiable on $X$. If $x^* \in X$ satisfies $\nabla f(x^*) = 0$ and $Hf(x)$ is positive semidefinite for all $x\in X$, then $x^*$ minimizes $f$ on $X$.
+\end{thm}
+We will not prove these.
+
+Note that this is helpful, since linear functions are convex (and concave). The problem is that our problems are constrained, not unconstrained. So we will have to convert constrained problems to unconstrained problems.
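To get a feel for the positive semidefiniteness condition, here is a small randomized Python sketch (our own illustration, not part of the course). It samples vectors $v$ and tests $v^T H v \geq 0$, so a negative answer is conclusive while a positive one is only suggestive:

```python
import random

def is_psd_2x2(H, trials=1000, seed=0):
    # randomized sanity check (not a proof) that v^T H v >= 0
    # for vectors v in R^2
    rng = random.Random(seed)
    for _ in range(trials):
        v = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        q = H[0][0] * v[0] ** 2 + (H[0][1] + H[1][0]) * v[0] * v[1] \
            + H[1][1] * v[1] ** 2
        if q < 0:
            return False
    return True

# the Hessian of f(x) = x1^2 + x2^2 is 2I: positive semidefinite
assert is_psd_2x2([[2, 0], [0, 2]])
# the Hessian of the saddle f(x) = x1^2 - x2^2 is not
assert not is_psd_2x2([[2, 0], [0, -2]])
```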
+
+\section{The method of Lagrange multipliers}
+So how do we solve the problem of constrained optimization? The trick here is to incorporate the constraints into the objective function, so that points outside the constraint set will not be mistaken for minima.
+
+Suppose the original problem is
+\begin{center}
+ minimize $f(x)$ subject to $h(x) = b$, $x\in X$.
+\end{center}
+Call this problem $(P)$.
+\begin{defi}[Lagrangian]
+ The \emph{Lagrangian} of a constraint $(P)$ is defined as
+ \[
+ L(x, \lambda) = f(x) - \lambda^T(h(x) - b).
+ \]
+ for $\lambda\in \R^m$. $\lambda$ is known as the \emph{Lagrange multiplier}.
+\end{defi}
+Note that when the constraint is satisfied, $h(x) - b = 0$, and $L(x, \lambda) = f(x)$.
+
+We could as well have used
+\[
+ L(x, \lambda) = f(x) + \lambda^T(h(x) - b).
+\]
+since we just have to switch the sign of $\lambda$. So we don't have to worry about getting the sign of $\lambda$ wrong when defining the Lagrangian.
+
+If we minimize $L$ over both $x$ and $\lambda$, then we will magically find the minimal solution subject to the constraints. Sometimes.
+
+\begin{thm}[Lagrangian sufficiency]
+ Let $x^*\in X$ and $\lambda^*\in \R^m$ be such that
+ \[
+ L(x^* ,\lambda^*) = \inf_{x\in X}L(x, \lambda^*)\quad\text{and}\quad h(x^*) = b.
+ \]
+ Then $x^*$ is optimal for ($P$).
+
+ In words, if $x^*$ minimizes $L$ for a fixed $\lambda^*$, and $x^*$ satisfies the constraints, then $x^*$ minimizes $f$.
+\end{thm}
+This looks like a pretty powerful result, but it turns out that it is quite easy to prove.
+
+\begin{proof}
+ We first define the ``feasible set'': let $X(b) = \{x\in X: h(x) = b\}$, i.e.\ the set of all $x$ that satisfies the constraints. Then
+ \begin{align*}
+ \min_{x\in X(b)} f(x) &= \min_{x\in X(b)} (f(x) - \lambda^{*T}(h(x) - b))\quad\text{ since $h(x) - b = 0$}\\
+ &\geq \min_{x\in X} (f(x) - \lambda^{*T}(h(x) - b))\\
+ &= f(x^*) - \lambda^{*T}(h(x^*) - b).\\
+ &= f(x^*).\qedhere
+ \end{align*}
+\end{proof}
+How can we interpret this result? To find these values of $\lambda^*$ and $x^*$, we have to solve
+\begin{align*}
+ \nabla L &= 0\\
+ h(x) &= b.
+\end{align*}
+Alternatively, we can write this as
+\begin{align*}
+ \nabla f &= \lambda \nabla h\\
+ h(x) &= b.
+\end{align*}
+What does this mean? For better visualization, we take the special case where $f$ and $h$ are functions $\R^2 \to \R$. Usually, if we want to minimize $f$ without restriction, then for small changes in $x$, there should be no (first-order) change in $f$, i.e.\ $\d f = \nabla f\cdot \d x = 0$. This has to be true for all possible directions of $\d x$.
+
+However, if we are constrained by $h(x) = b$, this corresponds to forcing $x$ to lie along this particular path. Hence the restriction $\d f = 0$ only has to hold when $x$ lies along the path. Since we need $\nabla f\cdot \d x = 0$, this means that $\nabla f$ has to be perpendicular to $\d x$. Alternatively, $\nabla f$ has to be parallel to the normal to the path. Since the normal to the path is given by $\nabla h$, we obtain the requirement $\nabla f = \lambda \nabla h$.
+
+This is how we should interpret the condition $\nabla f = \lambda \nabla h$. Instead of requiring that $\nabla f = 0$ as in usual minimization problems, we only require $\nabla f$ to point at directions perpendicular to the allowed space.
+
+\begin{eg}
+ Minimize $x_1 - x_2 - 2x_3$ subject to
+ \begin{align*}
+ x_1 + x_2 + x_3 &= 5\\
+ x_1^2 + x_2^2 &= 4
+ \end{align*}
+ The Lagrangian is
+ \begin{align*}
+ L(x, \lambda) ={}& x_1 - x_2 - 2x_3 - \lambda_1(x_1 + x_2 + x_3 - 5) - \lambda_2 (x_1^2 + x_2^2 - 4)\\
+ ={}& ((1 - \lambda_1)x_1 - \lambda_2 x_1^2) + ((-1 - \lambda_1)x_2 - \lambda_2 x_2^2) \\
+ &+ (-2 - \lambda_1)x_3 + 5\lambda_1 + 4\lambda_2
+ \end{align*}
+ We want to pick a $\lambda^*$ and $x^*$ such that $L(x^*, \lambda^*)$ is minimal. Then in particular, for our $\lambda^*$, $L(x, \lambda^*)$ must have a finite minimum.
+
+ We note that $(-2 - \lambda_1)x_3$ does not have a finite minimum unless $\lambda_1 = -2$, since $x_3$ can take any value. Also, the terms in $x_1$ and $x_2$ do not have a finite minimum unless $\lambda_2 < 0$.
+
+ With these in mind, we find a minimum by setting all first derivatives to be $0$:
+ \begin{align*}
+ \frac{\partial L}{\partial x_1} &= 1 - \lambda_1 - 2\lambda_2 x_1 = 3 - 2\lambda_2x_1\\
+ \frac{\partial L}{\partial x_2} &= -1 - \lambda_1 - 2\lambda_2 x_2 = 1 - 2\lambda_2 x_2
+ \end{align*}
+ Since these must be both $0$, we must have
+ \[
+ x_1 = \frac{3}{2\lambda_2}, \quad x_2 = \frac{1}{2\lambda_2}.
+ \]
+ To show that this is indeed a minimum, we look at the Hessian matrix:
+ \[
+ HL =
+ \begin{pmatrix}
+ -2\lambda_2 & 0\\
+ 0 & -2\lambda_2
+ \end{pmatrix}
+ \]
+ which is positive semidefinite when $\lambda_2 < 0$, which is the condition we came up with at the beginning.
+
+ Let $Y = \{\lambda\in \R^2: \lambda_1 = -2, \lambda_2 < 0\}$ be our helpful values of $\lambda$.
+
+ So we have shown above that for every $\lambda \in Y$, $L(x, \lambda)$ has a minimum at $x(\lambda) = (\frac{3}{2\lambda_2}, \frac{1}{2\lambda_2}, x_3)^T$, with $x_3$ arbitrary.
+
+ Now all we have to do is find $\lambda$ and $x$ such that $x(\lambda)$ satisfies the functional constraints. The second constraint gives
+ \[
+ x_1^2 + x_2^2 = \frac{9}{4\lambda_2^2} + \frac{1}{4\lambda_2^2} = 4 \Leftrightarrow \lambda_2 = -\sqrt{\frac{5}{8}}.
+ \]
+ The first constraint gives
+ \[
+ x_3 = 5 - x_1 - x_2.
+ \]
+ So the theorem implies that
+ \[
+ x_1 = -3\sqrt{\frac{2}{5}},\quad x_2 = -\sqrt{\frac{2}{5}},\quad x_3 = 5 + 4\sqrt{\frac{2}{5}}.
+ \]
+\end{eg}
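As a sanity check (our own, not part of the notes), we can substitute these values back and verify both functional constraints and the closed forms:

```python
from math import sqrt

# candidate from the Lagrangian method: lambda_1 = -2, lambda_2 = -sqrt(5/8)
l2 = -sqrt(5 / 8)
x1, x2 = 3 / (2 * l2), 1 / (2 * l2)
x3 = 5 - x1 - x2  # first functional constraint, rearranged

# the second functional constraint is satisfied
assert abs(x1 ** 2 + x2 ** 2 - 4) < 1e-12
# and the values agree with the closed forms above
assert abs(x1 + 3 * sqrt(2 / 5)) < 1e-12
assert abs(x2 + sqrt(2 / 5)) < 1e-12
assert abs(x3 - (5 + 4 * sqrt(2 / 5))) < 1e-12
```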
+So far so good. But what if our functional constraint is an inequality? We will need slack variables.
+
+To minimize $f(x)$ subject to $h(x) \leq b$, $x\in X$, we proceed as follows:
+\begin{enumerate}
+ \item Introduce slack variables to obtain the equivalent problem, to minimize $f(x)$ subject to $h(x) + z = b$, $x \in X$, $z \geq 0$.
+ \item Compute the Lagrangian
+ \[
+ L(x, z, \lambda) = f(x) - \lambda^T(h(x) + z - b).
+ \]
+ \item Find
+ \[
+ Y = \left\{\lambda: \inf_{x\in X, z\geq 0}L(x, z, \lambda) > -\infty\right\}.
+ \]
+ \item For each $\lambda\in Y$, minimize $L(x, z, \lambda)$, i.e.\ find
+ \[
+ x^*(\lambda)\in X,\quad z^*(\lambda) \geq 0
+ \]
+ such that
+ \[
+ L(x^*(\lambda), z^*(\lambda), \lambda) = \inf_{x\in X, z\geq 0} L(x, z, \lambda)
+ \]
+ \item Find $\lambda^*\in Y$ such that
+ \[
+ h(x^*(\lambda^*)) + z^*(\lambda^*) = b.
+ \]
+\end{enumerate}
+Then by the Lagrangian sufficiency condition, $x^*(\lambda^*)$ is optimal for the constrained problem.
+
+\subsection{Complementary Slackness}
+If we introduce a slack variable $z$, we note that changing the value of $z_j$ does not affect our objective function, and we are allowed to pick any non-negative $z$. The Lagrangian contains the term $-\lambda_j z_j$, and for its infimum over $z_j \geq 0$ to be finite we need $\lambda_j \leq 0$; the infimum is then attained where $\lambda_j z_j = 0$, since if $\lambda_j z_j \not= 0$, we could decrease $z_j$ to make $-\lambda_j z_j$ smaller. Hence we must have $(z^*(\lambda))_j \lambda_j = 0$.
+
+This makes our life easier since our search space is smaller.
+
+\begin{eg}
+ Consider the following problem:
+ \begin{center}
+ minimize $x_1 - 3x_2$ subject to
+ \begin{align*}
+ x_1^2 + x_2^2 + z_1 &= 4\\
+ x_1 + x_2 + z_2 &= 2\\
+ z_1, z_2 &\geq 0.
+ \end{align*}
+ where $z_1, z_2$ are slack variables.
+ \end{center}
+ The Lagrangian is
+ \[
+ L(x, z, \lambda) = ((1 - \lambda_2)x_1 - \lambda_1 x_1^2) + ((-3 - \lambda_2)x_2 - \lambda_1 x_2^2) - \lambda_1 z_1 - \lambda_2 z_2 + 4\lambda_1 + 2\lambda_2.
+ \]
+ To ensure finite minimum, we need $\lambda_1, \lambda_2 \leq 0$.
+
+ By complementary slackness, $\lambda_1 z_1 = \lambda_2 z_2 = 0$. We can then consider the cases $\lambda_1 = 0$ and $z_1 = 0$ separately, and save a lot of algebra.
+\end{eg}
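Reading the example as a minimization (consistent with the finiteness condition $\lambda_1, \lambda_2 \leq 0$), we can check the case $z_1 = 0$, $\lambda_2 = 0$ numerically. This is our own illustrative sketch: minimizing $x_1 - 3x_2$ on the circle $x_1^2 + x_2^2 = 4$ puts $x$ diametrically opposite the gradient $(1, -3)$:

```python
from math import sqrt

# complementary slackness case: z1 = 0 (circle active), lambda_2 = 0
# (so z2 may be positive); x sits opposite the gradient (1, -3)
x1, x2 = -2 / sqrt(10), 6 / sqrt(10)
l1 = 1 / (2 * x1)  # from dL/dx1 = 0: 1 - 2*l1*x1 = 0

assert abs(x1 ** 2 + x2 ** 2 - 4) < 1e-12        # first constraint active
assert x1 + x2 < 2                               # second constraint slack: z2 > 0
assert l1 < 0 and abs(-3 - 2 * l1 * x2) < 1e-12  # dL/dx2 = 0, l1 <= 0 as required
assert abs((x1 - 3 * x2) + 2 * sqrt(10)) < 1e-12  # optimal value -2*sqrt(10)
```

The other case, $\lambda_1 = 0$, makes the Lagrangian linear in $x$ and gives no finite minimum, so this case analysis pins down the answer quickly.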
+
+\subsection{Shadow prices}
+We have previously described how we can understand the requirement $\nabla f = \lambda \nabla h$. But what does the multiplier $\lambda$ represent?
+\begin{thm}
+ Consider the problem
+ \begin{center}
+ minimize $f(x)$ subject to $h(x) = b$.
+ \end{center}
+ Here we assume all functions are continuously differentiable. Suppose that for each $b\in \R^m$, $\phi(b)$ is the optimal value of $f$ and $\lambda^*$ is the corresponding Lagrange multiplier. Then
+ \[
+ \frac{\partial \phi}{\partial b_i} = \lambda_i^*.
+ \]
+\end{thm}
+Proof is omitted, as it is just a tedious application of chain rule etc.
+
+This can be interpreted as follows: suppose we are a factory which is capable of producing $m$ different kinds of goods. Since we have finitely many resources, and producing stuff requires resources, $h(x) = b$ limits the amount of goods we can produce. Now of course, if we have more resources, i.e.\ we change the value of $b$, we will be able to produce more/less stuff, and thus generate more profit. The change in profit per change in $b$ is given by $\frac{\partial \phi}{\partial b_i}$, which is the value of $\lambda$.
+
+The result also holds when the functional constraints are inequality constraints. If the $i$th constraint holds with equality at the optimal solution, then the above reasoning holds. Otherwise, if it does not hold with equality, then the Lagrange multiplier is $0$ by complementary slackness. Also, the partial derivative of $\phi$ with respect to $b_i$ has to be $0$, since changing the upper bound doesn't affect us if we are not at the limit. So they are equal.
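As a hedged illustration of the shadow price interpretation (a toy problem of our own, not from the course): minimizing $x_1^2 + x_2^2$ subject to $x_1 + x_2 = b$ gives $x_1 = x_2 = b/2$, so $\phi(b) = b^2/2$, and $\nabla f = \lambda \nabla h$ gives $\lambda^* = 2x_1 = b$. A finite-difference check confirms $\partial \phi/\partial b = \lambda^*$:

```python
# toy problem (our own): minimize x1^2 + x2^2 subject to x1 + x2 = b,
# whose optimum x1 = x2 = b/2 gives phi(b) = b^2/2 and lambda* = b
def phi(b):
    return b ** 2 / 2

b = 3.0
lam_star = b                 # Lagrange multiplier at this b
eps = 1e-6
dphi_db = (phi(b + eps) - phi(b - eps)) / (2 * eps)  # central difference

# the shadow price: d(phi)/db equals lambda*
assert abs(dphi_db - lam_star) < 1e-6
```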
+\subsection{Lagrange duality}
+Consider the problem
+\begin{center}
+ minimize $f(x)$ subject to $h(x) = b$, $x\in X$.
+\end{center}
+Denote this as $P$.
+
+The Lagrangian is
+\[
+ L(x, \lambda) = f(x) - \lambda^T (h(x) - b).
+\]
+Define the dual function $g: \R^m \to \R$ as
+\[
+ g(\lambda) = \inf_{x\in X}L(x, \lambda).
+\]
+i.e.\ we fix $\lambda$, and see how small we can get $L$ to be. As before, let
+\[
+ Y = \{\lambda\in \R^m: g(\lambda) > -\infty\}.
+\]
+Then we have
+\begin{thm}[Weak duality]
+ If $x\in X(b)$ (i.e.\ $x$ satisfies both the functional and regional constraints) and $\lambda \in Y$, then
+ \[
+ g(\lambda) \leq f(x).
+ \]
+ In particular,
+ \[
+ \sup_{\lambda\in Y}g(\lambda) \leq \inf_{x\in X(b)}f(x).
+ \]
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ g(\lambda) &= \inf_{x'\in X}L(x', \lambda)\\
+ &\leq L(x, \lambda)\\
+ &= f(x) - \lambda^T (h(x) - b)\\
+ &= f(x).\qedhere
+ \end{align*}
+\end{proof}
+
+This suggests that we can solve a dual problem: instead of minimizing $f$, we can maximize $g$ subject to $\lambda\in Y$. Denote this problem as $(D)$. The original problem $(P)$ is called the \emph{primal}.
+
+\begin{defi}[Strong duality]
+ $(P)$ and $(D)$ are said to satisfy \emph{strong duality} if
+ \[
+ \sup_{\lambda\in Y}g(\lambda) = \inf_{x\in X(b)}f(x).
+ \]
+\end{defi}
+It turns out that problems satisfying strong duality are exactly those for which the method of Lagrange multipliers works.
+
+\begin{eg}
+ Again consider the problem to minimize $x_1 - x_2 - 2x_3$ subject to
+ \begin{align*}
+ x_1 + x_2 + x_3 &= 5\\
+ x_1^2 + x_2^2 &= 4
+ \end{align*}
+ We saw that
+ \[
+ Y = \{\lambda\in \R^2: \lambda_1 = -2, \lambda_2 < 0\}
+ \]
+ and
+ \[
+ x^*(\lambda) = \left(\frac{3}{2\lambda_2}, \frac{1}{2\lambda_2}, 5 - \frac{4}{2\lambda_2}\right).
+ \]
+ The dual function is
+ \[
+ g(\lambda) = \inf_{x\in X} L(x, \lambda) = L(x^*(\lambda), \lambda) = \frac{10}{4\lambda_2} + 4\lambda_2 - 10.
+ \]
+ The dual is the problem to
+ \begin{center}
+ maximize $\frac{10}{4\lambda_2} + 4\lambda_2 - 10$ subject to $\lambda_2 < 0$.
+ \end{center}
+ The maximum is attained for
+ \[
+ \lambda_2 = -\sqrt{\frac{5}{8}}
+ \]
+ After calculating the values of $g$ and $f$, we can see that the primal and dual do have the same optimal value.
+\end{eg}
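We can confirm this numerically (our own sketch, not part of the notes): evaluating $g$ at $\lambda_2 = -\sqrt{5/8}$ and $f$ at $x^*(\lambda)$ gives the same value, $-10 - 10\sqrt{2/5}$:

```python
from math import sqrt

l2 = -sqrt(5 / 8)                     # optimal dual variable
g = 10 / (4 * l2) + 4 * l2 - 10       # dual objective g(lambda)

x1, x2 = 3 / (2 * l2), 1 / (2 * l2)   # primal minimizer x*(lambda)
x3 = 5 - x1 - x2
f = x1 - x2 - 2 * x3                  # primal objective

# strong duality: the primal and dual optimal values coincide
assert abs(f - g) < 1e-9
```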
+
+Right now, what we've got isn't helpful, because we won't know if our problem satisfies strong duality!
+
+\subsection{Supporting hyperplanes and convexity}
+We use the fancy term ``hyperplane'' to denote planes in higher dimensions (in an $n$-dimensional space, a hyperplane has $n - 1$ dimensions).
+
+\begin{defi}[Supporting hyperplane]
+ A hyperplane, given by an affine function $\alpha: \R^m \to \R$, is \emph{supporting} to $\phi$ at $b$ if $\alpha(b) = \phi(b)$ and $\phi(c) \geq \alpha(c)$ for all $c$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$x$};
+ \draw (-1, 2) parabola bend (0, 0.5) (1, 2) node [above] {$\phi$};
+ \node [circ] at (0.4, 0.75) {};
+ \node [right] at (0.4, 0.75) {$\phi(b)$};
+ \draw (-0.7, -0.5) -- (1.5, 2) node [right] {$\alpha$};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+\begin{thm}
+ $(P)$ satisfies strong duality iff $\phi(c) = \inf\limits_{x \in X(c)}f(x)$ has a supporting hyperplane at $b$.
+\end{thm}
+Note that here we fix a $b$, and let $\phi$ be a function of $c$.
+
+\begin{proof}
+ $(\Leftarrow)$ Suppose there is a supporting hyperplane. Then since the plane passes through $\phi(b)$, it must be of the form
+ \[
+ \alpha(c) = \phi(b) + \lambda^T(c - b).
+ \]
+ Since this is supporting, for all $c\in \R^m$,
+ \[
+ \phi(b) + \lambda^T(c - b) \leq \phi(c),
+ \]
+ or
+ \[
+ \phi(b) \leq \phi(c) - \lambda^T(c - b),
+ \]
+ This implies that
+ \begin{align*}
+ \phi(b) &\leq \inf_{c\in \R^m}(\phi(c) - \lambda^T(c - b))\\
+ &= \inf_{c\in \R^m}\inf_{x\in X(c)}(f(x) - \lambda^T(h(x) - b))\\
+ \intertext{(since $\phi(c) = \inf\limits_{x\in X(c)} f(x)$ and $h(x) = c$ for $x\in X(c)$)}
+ &= \inf_{x\in X}L(x, \lambda).\\
+ \intertext{(since $\bigcup\limits_{c\in \R^m}X(c) = X$, which is true since for any $x\in X$, we have $x\in X(h(x))$)}
+ &= g(\lambda)
+ \end{align*}
+ By weak duality, $g(\lambda) \leq \phi(b)$. So $\phi(b) = g(\lambda)$. So strong duality holds.
+
+ $(\Rightarrow)$ Assume now that we have strong duality. Then there exists $\lambda$ such that for all $c\in \R^m$,
+ \begin{align*}
+ \phi(b) &= g(\lambda)\\
+ &= \inf_{x\in X}L(x, \lambda)\\
+ &\leq \inf_{x\in X(c)} L(x, \lambda)\\
+ &= \inf_{x\in X(c)} (f(x) - \lambda^T(h(x) - b))\\
+ &= \phi(c) - \lambda^T(c - b)
+ \end{align*}
+ So $\phi(b) + \lambda^T(c - b) \leq \phi(c)$. So this defines a supporting hyperplane.
+\end{proof}
+
+We are having some progress now. To show that Lagrange multipliers work, we need to show that $(P)$ satisfies strong duality. To show that $(P)$ satisfies strong duality, we need to show that it has a supporting hyperplane at $b$. How can we show that there is a supporting hyperplane? A sufficient condition is convexity.
+
+\begin{thm}[Supporting hyperplane theorem]
+ Suppose that $\phi: \R^m \to \R$ is convex and $b\in\R^m$ lies in the interior of the set of points where $\phi$ is finite. Then there exists a supporting hyperplane to $\phi$ at $b$.
+\end{thm}
+Proof follows rather straightforwardly from the definition of convexity, and is omitted.
+
+This is some even better progress. However, the definition of $\phi$ is rather convoluted. How can we show that it is convex? We have the following helpful theorem:
+
+\begin{thm}
+ Let
+ \[
+ \phi(b) = \inf_{x\in X} \{f(x): h(x) \leq b\}
+ \]
+ If $X, f, h$ are convex, then so is $\phi$ (assuming feasibility and boundedness).
+\end{thm}
+
+\begin{proof}
+ Consider $b_1, b_2\in \R^m$ such that $\phi(b_1)$ and $\phi(b_2)$ are defined. Let $\delta \in [0, 1]$ and define $b = \delta b_1 + (1 - \delta)b_2$. We want to show that $\phi(b) \leq \delta \phi(b_1) + (1 - \delta)\phi(b_2)$.
+
+ Consider $x_1 \in X(b_1)$, $x_2 \in X(b_2)$, and let $x = \delta x_1 + (1 - \delta)x_2$. By convexity of $X$, $x\in X$.
+
+ By convexity of $h$,
+ \begin{align*}
+ h(x) &= h(\delta x_1 + (1 - \delta) x_2)\\
+ &\leq \delta h(x_1) + (1 - \delta)h(x_2)\\
+ &\leq \delta b_1 + (1 - \delta)b_2\\
+ &= b
+ \end{align*}
+ So $x\in X(b)$. Since $\phi(b)$ is the infimum of $f$ over $X(b)$, and using convexity of $f$,
+ \begin{align*}
+ \phi(b) &\leq f(x)\\
+ &= f(\delta x_1 + (1 - \delta) x_2)\\
+ &\leq \delta f(x_1) + (1 - \delta)f(x_2)
+ \end{align*}
+ This holds for any $x_1\in X(b_1)$ and $x_2 \in X(b_2)$. So by taking infimum of the right hand side,
+ \[
+ \phi(b) \leq \delta \phi(b_1) + (1 - \delta) \phi(b_2).
+ \]
+ So $\phi$ is convex.
+\end{proof}
+$h(x) = b$ is equivalent to $h(x) \leq b$ and $-h(x) \leq -b$. So the result holds for problems with equality constraints if both $h$ and $-h$ are convex, i.e.\ if $h(x)$ is linear.
+
+So
+\begin{thm}
+ If a linear program is feasible and bounded, then it satisfies strong duality.
+\end{thm}
+
+\section{Solutions of linear programs}
+\subsection{Linear programs}
+We'll come up with an algorithm to solve linear programs efficiently. We first illustrate the general idea with the case of a 2D linear program. Consider the problem
+\begin{center}
+ maximize $x_1 + x_2$ subject to
+ \begin{align*}
+ x_1 + 2x_2 &\leq 6\\
+ x_1 - x_2 &\leq 3\\
+ x_1, x_2 &\geq 0
+ \end{align*}
+\end{center}
+We can plot the solution space out
+\begin{center}
+ \begin{tikzpicture}
+ \path [fill=gray!50!white] (0, 0) -- (3, 0) -- (4, 1) -- (0, 3) -- cycle;
+ \draw [->] (-1, 0) -- (8, 0) node [right] {$x_1$};
+ \draw [->] (0, -1) -- (0, 4) node [above] {$x_2$};
+
+ \draw (2, -1) -- +(4, 4) node [right] {$x_1 - x_2 = 3$};
+ \draw (-1, 3.5) -- +(8, -4) node [right] {$x_1 + 2x_2 = 6$};
+ \draw [->] (0, 0) -- (0.5, 0.5) node [anchor=south west] {$c$};
+ \end{tikzpicture}
+\end{center}
+To maximize $x_1 + x_2$, we want to go as far in the $c$ direction as possible. It should be clear that the optimal point will lie on a corner of the polygonal feasible region, no matter what shape it might be.
+
+Even if we have cases where $c$ is orthogonal to one of the lines, e.g.
+\begin{center}
+ \begin{tikzpicture}
+ \path [fill=gray!50!white] (0, 0) -- (3, 0) -- (3.25, 0.25) -- (0, 3.5) -- cycle;
+ \draw [->] (-1, 0) -- (6, 0) node [right] {$x_1$};
+ \draw [->] (0, -1) -- (0, 4) node [above] {$x_2$};
+
+ \draw (2, -1) -- +(3, 3) node [right] {$x_1 - x_2 = 3$};
+ \draw (-0.5, 4) -- +(5, -5) node [right] {$x_1 + x_2 = 3.5$};
+ \draw [->] (0, 0) -- (0.5, 0.5) node [anchor=south west] {$c$};
+ \node [circ] at (1.75, 1.75) {};
+ \node at (1.75, 1.75) [anchor = south west] {$A$};
+ \end{tikzpicture}
+\end{center}
+An optimal point might be $A$. However, if we know that $A$ is an optimal point, we can slide it across the $x_1 + x_2 = 3.5$ line until it meets one of the corners. Hence we know that one of the corners must be an optimal point.
+
+This already allows us to solve linear programs, since we can just try all corners and see which has the largest objective value. However, this can be made more efficient, especially when we have a large number of dimensions and hence corners.
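We can carry out this corner search directly for the example above. The following Python sketch (an illustration only, not part of the course) enumerates all pairwise intersections of the constraint lines, keeps the feasible ones, and picks the best corner, using exact \texttt{Fraction} arithmetic:

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints a1*x1 + a2*x2 <= b of the example, written as (a1, a2, b).
cons = [(F(1), F(2), F(6)),   # x1 + 2x2 <= 6
        (F(1), F(-1), F(3)),  # x1 -  x2 <= 3
        (F(-1), F(0), F(0)),  # x1 >= 0
        (F(0), F(-1), F(0))]  # x2 >= 0

def intersect(c1, c2):
    """Solve the 2x2 system given by two active constraints, if non-singular."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

feasible = lambda p: all(a * p[0] + b * p[1] <= c for a, b, c in cons)
corners = {p for c1, c2 in combinations(cons, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)}
best = max(corners, key=lambda p: p[0] + p[1])
print(best, best[0] + best[1])  # the corner (4, 1), with value 5
```

This confirms that the optimum of the example is attained at the corner $(4, 1)$ with value $5$.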
+
+\subsection{Basic solutions}
+Here we will assume that the rows of $A$ are linearly independent, and any set of $m$ columns are linearly independent. Otherwise, we can just throw away the redundant rows or columns.
+
+In general, if both the constraints and the objective function are linear, then the optimal point always lies on a ``corner'', or an \emph{extreme point}.
+
+\begin{defi}[Extreme point]
+ An \emph{extreme point} $x\in S$ of a convex set $S$ is a point that cannot be written as a convex combination of two distinct points in $S$, i.e.\ if $y, z\in S$ and $\delta \in (0, 1)$ satisfy
+ \[
+ x = \delta y + (1 - \delta) z,
+ \]
+ then $x = y = z$.
+\end{defi}
+
+Consider again the linear program in standard form, i.e.
+\begin{center}
+ maximize $c^T x$ subject to $Ax = b, x \geq 0$, where $A \in \R^{m\times n}$ and $b\in \R^m$.
+\end{center}
+Note that now we are talking about maximization instead of minimization.
+
+\begin{defi}[Basic solution and basis]
+ A solution $x\in \R^n$ is \emph{basic} if it has at most $m$ non-zero entries (out of $n$), i.e.\ if there exists a set $B\subseteq \{1, \cdots, n\}$ with $|B| = m$ such that $x_i = 0$ if $i\not\in B$. In this case, $B$ is called the \emph{basis}, and $x_i$ are the \emph{basic variables} if $i\in B$.
+\end{defi}
+We will later see (via an example) that basic solutions correspond to solutions at the ``corners'' of the solution space.
+
+\begin{defi}[Non-degenerate solutions]
+ A basic solution is \emph{non-degenerate} if it has exactly $m$ non-zero entries.
+\end{defi}
+
+Note that by ``solution'', we do not mean a solution to the whole maximization problem. Instead we are referring to a solution to the constraint $Ax = b$. Being a solution does \emph{not} require that $x \geq 0$. Those that satisfy this regional constraint are known as \emph{feasible}.
+
+\begin{defi}[Basic feasible solution]
+ A basic solution $x$ is \emph{feasible} if it satisfies $x \geq 0$.
+\end{defi}
+
+\begin{eg}
+ Consider the linear program
+ \begin{center}
+ maximize $f(x) = x_1 + x_2$ subject to
+ \begin{align*}
+ x_1 + 2x_2 + z_1&= 6\\
+ x_1 - x_2 + z_2 &= 3\\
+ x_1, x_2, z_1, z_2 &\geq 0
+ \end{align*}
+ \end{center}
+ where we have included the slack variables.
+
+ Since we have 2 constraints, a basic solution has at most 2 non-zero entries, i.e.\ at least 2 zero entries. The possible basic solutions are
+ \begin{center}
+ \begin{tabular}{cccccc}
+ \toprule
+ & $x_1$ & $x_2$ & $z_1$ & $z_2$ & $f(x)$\\
+ \midrule
+ $A$ & $0$ & $0$ & $6$ & $3$ & $0$\\
+ $B$ & $0$ & $3$ & $0$ & $6$ & $3$\\
+ $C$ & $4$ & $1$ & $0$ & $0$ & $5$\\
+ $D$ & $3$ & $0$ & $3$ & $0$ & $3$\\
+ $E$ & $6$ & $0$ & $0$ & $-3$ & $6$\\
+ $F$ & $0$ & $-3$ & $12$ & $0$ & $-3$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Among all 6, $E$ and $F$ are \emph{not} feasible solutions since they have negative entries. So the basic feasible solutions are $A, B, C, D$.
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [fill=gray!50!white] (0, 0) node [anchor = south west] {$A$} --
+ (3, 0) node [above] {$B$} --
+ (4, 1) node [left] {$C$} --
+ (0, 3) node [anchor = north east] {$D$} -- cycle;
+
+ \draw [->] (-1, 0) -- (8, 0) node [right] {$x_1$};
+ \draw [->] (0, -4) -- (0, 4) node [above] {$x_2$};
+
+ \draw (-1, -4) -- +(7, 7) node [right] {$x_1 - x_2 = 3$};
+ \draw (-1, 3.5) -- +(8, -4) node [right] {$x_1 + 2x_2 = 6$};
+ \node [above] at (6, 0) {$E$};
+ \node [left] at (0, -3) {$F$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
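The table above can be generated mechanically: for each choice of basis $B$ we solve $A_Bx_B = b$ and set the other variables to zero. A short Python sketch (illustration only) doing this for the example, with columns ordered $x_1, x_2, z_1, z_2$:

```python
from fractions import Fraction as F
from itertools import combinations

A = [[F(1), F(2), F(1), F(0)],
     [F(1), F(-1), F(0), F(1)]]   # columns: x1, x2, z1, z2
b = [F(6), F(3)]

solutions = []
for B in combinations(range(4), 2):
    # the 2x2 matrix A_B, stored column by column
    (a, c), (d, e) = [(A[0][j], A[1][j]) for j in B]
    det = a * e - d * c
    if det == 0:
        continue
    # Cramer's rule for A_B xB = b
    xB = ((b[0] * e - d * b[1]) / det, (a * b[1] - c * b[0]) / det)
    x = [F(0)] * 4
    x[B[0]], x[B[1]] = xB
    solutions.append((x, x[0] + x[1], all(v >= 0 for v in x)))

for x, f, feas in solutions:
    print(x, "f =", f, "feasible" if feas else "infeasible")
```

Six bases give six basic solutions, of which four are feasible, with objective values $0, 3, 5, 3$, matching $A, B, C, D$ above.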
+
+In the previous example, we saw that the extreme points are exactly the basic feasible solutions. This is true in general.
+\begin{thm}
+ A vector $x$ is a basic feasible solution of $Ax = b$ if and only if it is an extreme point of the set $X(b) = \{x': Ax' = b, x' \geq 0\}$.
+\end{thm}
+We will not prove this.
+
+\subsection{Extreme points and optimal solutions}
+Recall that we previously showed in our 2D example that the optimal solution lies on an extreme point, i.e.\ is a basic feasible solution. This is also true in general.
+
+\begin{thm}
+ If $(P)$ is feasible and bounded, then there exists an optimal solution that is a basic feasible solution.
+\end{thm}
+
+\begin{proof}
+ Let $x$ be an optimal solution of $(P)$. If $x$ has at most $m$ non-zero entries, it is a basic feasible solution, and we are done.
+
+ Now suppose $x$ has $r > m$ non-zero entries. Then $x$ is not a basic solution, and hence by the previous theorem not an extreme point. So there exist $y\not= z\in X(b)$ and $\delta \in (0, 1)$ such that
+ \[
+ x = \delta y + (1 - \delta) z.
+ \]
+ We will show there exists an optimal solution with strictly fewer than $r$ non-zero entries. Then the result follows by induction.
+
+ By optimality of $x$, we have $c^T x \geq c^T y$ and $c^T x \geq c^T z$.
+
+ Since $c^T x = \delta c^T y + (1 - \delta)c^Tz$, we must have that $c^T x = c^T y = c^T z$, i.e.\ $y$ and $z$ are also optimal.
+
+ Since $y \geq 0$ and $z \geq 0$, $x = \delta y + (1 - \delta) z$ implies that $y_i = z_i = 0$ whenever $x_i = 0$.
+
+ So the non-zero entries of $y$ and $z$ form a subset of the non-zero entries of $x$. In particular, $y$ and $z$ have at most $r$ non-zero entries, which must occur in positions where $x$ is also non-zero.
+
+ If $y$ or $z$ has strictly fewer than $r$ non-zero entries, then we are done. Otherwise, for any $\hat{\delta}$ (not necessarily in $(0, 1)$), let
+ \[
+ x_{\hat{\delta}} = \hat{\delta} y + (1 - \hat{\delta}) z = z + \hat{\delta}(y - z).
+ \]
+ Observe that $x_{\hat{\delta}}$ is optimal for every $\hat\delta\in \R$.
+
+ Moreover, $y - z \not= 0$, and all non-zero entries of $y - z$ occur in rows where $x$ is non-zero as well. We can thus choose $\hat\delta\in \R$ such that $x_{\hat{\delta}} \geq 0$ and $x_{\hat{\delta}}$ has strictly fewer than $r$ non-zero entries.
+\end{proof}
+Intuitively, this is what we do when we ``slide along the line'' if $c$ is orthogonal to one of the boundary lines.
+
+This result in fact holds more generally for the maximum of a convex function $f$ over a compact (i.e.\ closed and bounded) convex set $X$.
+
+In that case, we can write any point $x\in X$ as a convex combination
+\[
+ x = \sum_{i = 1}^k \delta_i x^i
+\]
+of extreme points $x^i\in X$, where $\delta \in \R_{\geq 0}^k$ and $\sum_{i=1}^k \delta_i = 1$.
+
+Then, by convexity of $f$,
+\[
+ f(x) \leq \sum_{i = 1}^k \delta_i f(x^i) \leq \max_i f(x^i)
+\]
+So any point in the interior cannot be better than the extreme points.
+\subsection{Linear programming duality}
+Consider the linear program in general form with slack variables,
+\begin{center}
+ minimize $c^Tx$ subject to $Ax - z = b$, $x, z\geq 0$
+\end{center}
+We have $X = \{(x, z): x, z\geq 0\}\subseteq \R^{m + n}$.
+
+The Lagrangian is
+\[
+ L(x, z, \lambda) = c^Tx - \lambda^T(A x - z - b) = (c^T - \lambda^TA)x + \lambda^T z + \lambda^T b.
+\]
+Since $x, z$ can be arbitrarily positive, this has a finite minimum if and only if
+\[
+ c^T - \lambda^TA \geq 0,\quad \lambda^T \geq 0.
+\]
+Call the feasible set $Y$. Then for fixed $\lambda\in Y$, the minimum of $L(x, z, \lambda)$ is attained when $(c^T - \lambda^T A)x = 0$ and $\lambda^T z = 0$, since both terms are non-negative on $X$. So
+\[
+ g(\lambda) = \inf_{(x, z) \in X} L(x, z, \lambda) = \lambda^T b.
+\]
+The dual is thus
+\begin{center}
+ maximize $\lambda^T b$ subject to $A^T\lambda \leq c$, $\lambda \geq 0$
+\end{center}
+
+\begin{thm}
+ The dual of the dual of a linear program is the primal.
+\end{thm}
+
+\begin{proof}
+ It suffices to show this for the linear program in general form. We have shown above that the dual problem is
+ \begin{center}
+ minimize $-b^T\lambda$ subject to $-A^T \lambda \geq -c$, $\lambda \geq 0$.
+ \end{center}
+ This problem has the same form as the primal, with $-b$ taking the role of $c$, $-c$ taking the role of $b$, $-A^T$ taking the role of $A$. So doing it again, we get back to the original problem.
+\end{proof}
+
+\begin{eg}
+ Let the primal problem be
+ \begin{center}
+ maximize $3x_1 + 2x_2$ subject to
+ \begin{align*}
+ 2x_1 + x_2 + z_1 &= 4\\
+ 2x_1 + 3x_2 + z_2 &= 6\\
+ x_1, x_2, z_1, z_2 &\geq 0.
+ \end{align*}
+ \end{center}
+ Then the dual problem is
+ \begin{center}
+ minimize $4\lambda_1 + 6\lambda_2$ such that
+ \begin{align*}
+ 2\lambda_1 + 2\lambda_2 - \mu_1 &= 3\\
+ \lambda_1 + 3\lambda_2 - \mu_2 &= 2\\
+ \lambda_1, \lambda_2, \mu_1, \mu_2 &\geq 0.
+ \end{align*}
+ \end{center}
+ We can compute all basic solutions of the primal and the dual by setting $n - m = 2$ variables to be zero in turn.
+
+ Given a particular basic solution of the primal, the corresponding solution of the dual can be found by using the complementary slackness conditions:
+ \[
+ \lambda_1 z_1 = \lambda_2 z_2 = 0,\quad \mu_1 x_1 = \mu_2 x_2 = 0.
+ \]
+ \begin{center}
+ \begin{tabular}{cccccccccccc}
+ \toprule
+ & $x_1$ & $x_2$ & $z_1$ & $z_2$ & $f(x)$ &\;& $\lambda_1$ & $\lambda_2$ & $\mu_1$ & $\mu_2$ & $g(\lambda)$\\
+ \midrule
+ A & 0 & 0 & 4 & 6 & 0 && 0 & 0 & -3 & -2 & 0\\
+ B & 2 & 0 & 0 & 2 & 6 && $\frac{3}{2}$ & 0 & 0 & $-\frac{1}{2}$ & 6\\
+ C & 3 & 0 & -2 & 0 & 9 && 0 & $\frac{3}{2}$ & 0 & $\frac{5}{2}$ & 9\\
+ D & $\frac{3}{2}$ & 1 & 0 & 0 & $\frac{13}{2}$ && $\frac{5}{4}$ & $\frac{1}{4}$ & 0 & 0 & $\frac{13}{2}$\\
+ E & 0 & 2 & 2 & 0 & 4 && 0 & $\frac{2}{3}$ & $-\frac{5}{3}$ & 0 & 4\\
+ F & 0 & 4 & 0 & -6 & 8 && 2 & 0 & 1 & 0 & 8\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[yscale=0.5]
+ \path [fill=gray!50!white] (0, 0) node [anchor = north east] {$A$} --
+ (2, 0) node [below] {$B$} --
+ (1.5, 1) node [anchor = south west] {$D$} --
+ (0, 2) node [anchor = north east] {$E$} -- cycle;
+ \node [above] at (3, 0) {$C$};
+ \node [left] at (0, 4) {$F$};
+
+ \draw [->] (-0.5, 0) -- (4.5, 0) node [right] {$x_1$};
+ \draw [->] (0, -3) -- (0, 6) node [above] {$x_2$};
+ \draw (-0.5, 5) -- +(3.5, -7) node [below] {\small $2x_1 + x_2 = 4$};
+ \draw (-0.5, 2.333) -- +(4.5, -3) node [below] {\small $2x_1 + 3x_2 = 6$};
+ \end{scope}
+
+ \begin{scope}[shift={(6, 0)}, xscale=1.5]
+ \path [fill=gray!50!white] (0, 3) -- (0, 1.5) node [left] {$C$} --
+ (1.25, 0.25) node [anchor = south west] {$D$} --
+ (2, 0) node [above] {$F$} --
+ (3, 0) -- (3, 3) -- cycle;
+ \node at (0, 0) [anchor = north east] {$A$};
+ \node at (0, 0.667) [anchor = north east] {$B$};
+ \node at (1.5, 0) [below] {$E$};
+
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$\lambda_1$};
+ \draw [->] (0, -1.5) -- (0, 3) node [above] {$\lambda_2$};
+
+ \draw (-0.5, 2) -- +(3, -3) node [below] {\small $2\lambda_1 + 2\lambda_2 = 3$};
+ \draw (-0.5, 0.833) -- +(3, -1) node [below] {\small $\lambda_1 + 3\lambda_2 = 2$};
+ \end{scope}
+
+ \end{tikzpicture}
+ \end{center}
+ We see that $D$ is the only solution such that both the primal and dual solutions are feasible. So we know it is optimal without even having to calculate $f(x)$. It turns out this is always the case.
+\end{eg}
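We can check this claim on the table above. The following Python sketch (illustration only, with the table's values hard-coded) tests, for each labeled pair of basic solutions, whether the primal and dual solutions are simultaneously feasible, i.e.\ all entries non-negative:

```python
from fractions import Fraction as F

# (x1, x2, z1, z2, lambda1, lambda2, mu1, mu2) for the basic solutions A-F
table = {
    "A": (0, 0, 4, 6, 0, 0, -3, -2),
    "B": (2, 0, 0, 2, F(3, 2), 0, 0, F(-1, 2)),
    "C": (3, 0, -2, 0, 0, F(3, 2), 0, F(5, 2)),
    "D": (F(3, 2), 1, 0, 0, F(5, 4), F(1, 4), 0, 0),
    "E": (0, 2, 2, 0, 0, F(2, 3), F(-5, 3), 0),
    "F": (0, 4, 0, -6, 2, 0, 1, 0),
}

both_feasible = [k for k, v in table.items() if all(t >= 0 for t in v)]
print(both_feasible)  # only D is feasible for both primal and dual
```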
+
+\begin{thm}
+ Let $x$ and $\lambda$ be feasible for the primal and the dual of the linear program in general form. Then $x$ and $\lambda$ are optimal if and only if they satisfy complementary slackness, i.e.\ if
+ \[
+ (c^T - \lambda^T A)x = 0\text{ and }\lambda^T(Ax - b) = 0.
+ \]
+\end{thm}
+
+\begin{proof}
+ If $x$ and $\lambda$ are optimal, then
+ \[
+ c^Tx = \lambda^T b
+ \]
+ since feasible bounded linear programs satisfy strong duality. So
+ \begin{align*}
+ c^Tx &= \lambda^T b\\
+ &= \inf_{x'\in X} (c^T x' - \lambda^T(Ax' - b))\\
+ &\leq c^T x - \lambda^T (Ax - b)\\
+ &\leq c^T x.
+ \end{align*}
+ The last line follows since $Ax \geq b$ and $\lambda\geq 0$.
+
+ The first and last term are the same. So the inequalities hold with equality. Therefore
+ \[
+ \lambda^T b = c^Tx - \lambda^T (Ax - b) = (c^T - \lambda^TA)x + \lambda^Tb.
+ \]
+ So
+ \[
+ (c^T - \lambda^TA)x = 0.
+ \]
+ Also,
+ \[
+ c^Tx - \lambda^T(Ax - b) = c^Tx
+ \]
+ implies
+ \[
+ \lambda^T(Ax - b) = 0.
+ \]
+ On the other hand, suppose we have complementary slackness, i.e.
+ \[
+ (c^T - \lambda^T A)x = 0\text{ and }\lambda^T(Ax - b) = 0,
+ \]
+ then
+ \[
+ c^Tx = c^Tx - \lambda^T(Ax - b) = (c^T - \lambda^T A)x + \lambda^T b = \lambda^Tb.
+ \]
+ Hence by weak duality, $x$ and $\lambda$ are optimal.
+\end{proof}
+\subsection{Simplex method}
+The simplex method is an algorithm that makes use of the result we just had. To find the optimal solution to a linear program, we start with a basic feasible solution of the primal, and then modify the variables step by step until the dual is also feasible.
+
+We start with an example, showing what we do, then explain the logic behind, then do a more proper example.
+
+\begin{eg}
+ Consider the following problem:
+ \begin{center}
+ maximize $x_1 + x_2$ subject to
+ \begin{align*}
+ x_1 + 2x_2 + z_1 &= 6\\
+ x_1 - x_2 + z_2 &= 3\\
+ x_1, x_2, z_1, z_2 &\geq 0.
+ \end{align*}
+ \end{center}
+ We write everything in the \emph{simplex tableau}, by noting down the coefficients:
+ \begin{center}
+ \begin{tabular}{cccccc}
+ \toprule
+ &$x_1$ & $x_2$ & $z_1$ & $z_2$ \\
+ \midrule
+ Constraint 1 & 1 & 2 & 1 & 0 & 6 \\
+ Constraint 2 & 1 & -1 & 0 & 1 & 3 \\
+ Objective & 1 & 1 & 0 & 0 & 0 \\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We see an identity matrix $\begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}$ in the $z_1$ and $z_2$ columns, and these correspond to the basic feasible solution $z_1 = 6, z_2 = 3, x_1 = x_2 = 0$. It's pretty clear that our basic feasible solution is not optimal, since our objective function is $0$. This is because some entry in the last row is positive, and we can increase the objective by, say, increasing $x_1$.
+
+ The simplex method says that we can find the optimal solution if we make the bottom row all negative while keeping the right column positive, by doing row operations.
+
+ We multiply the first row by $\frac{1}{2}$ and subtract/add it to the other rows to obtain
+ \begin{center}
+ \begin{tabular}{cccccc}
+ \toprule
+ &$x_1$ & $x_2$ & $z_1$ & $z_2$ & \\
+ \midrule
+ Constraint 1 & $\frac{1}{2}$ & 1 & $\frac{1}{2}$ & 0 & 3 \\
+ Constraint 2 & $\frac{3}{2}$ & 0 & $\frac{1}{2}$ & 1 & 6 \\
+ Objective & $\frac{1}{2}$ & 0 & $-\frac{1}{2}$ & 0 & -3\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Our new basic feasible solution is $x_2 = 3, z_2 = 6, x_1 = z_1 = 0$. We see that the number in the bottom-right corner is $-f(x)$. We can continue this process to finally obtain a solution.
+\end{eg}
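The row operations of the example are a single \emph{pivot} operation on the tableau. Here is a minimal Python sketch (illustration only) of one pivot step, applied to the tableau above on row 1, column $x_2$, using exact arithmetic:

```python
from fractions import Fraction as F

def pivot(T, i, j):
    """Pivot the tableau T (last row = objective) on entry (i, j)."""
    T[i] = [e / T[i][j] for e in T[i]]   # scale the pivot row so T[i][j] = 1
    for k in range(len(T)):
        if k != i:                       # clear column j in every other row
            T[k] = [a - T[k][j] * b for a, b in zip(T[k], T[i])]
    return T

# columns: x1, x2, z1, z2 | rhs; the last row is the objective
T = [[F(1), F(2), F(1), F(0), F(6)],
     [F(1), F(-1), F(0), F(1), F(3)],
     [F(1), F(1), F(0), F(0), F(0)]]

pivot(T, 0, 1)   # bring x2 into the basis using the first row
for row in T:
    print(row)
```

The resulting rows are $(\frac{1}{2}, 1, \frac{1}{2}, 0 \mid 3)$, $(\frac{3}{2}, 0, \frac{1}{2}, 1 \mid 6)$ and $(\frac{1}{2}, 0, -\frac{1}{2}, 0 \mid -3)$.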
+Here we adopt the following notation: let $A\in \R^{m\times n}$ and $b\in \R^m$. Assume that $A$ has full rank. Let $B\subseteq \{1, 2, \cdots, n\}$ with $|B| = m$ be a basis, corresponding to the (at most $m$) non-zero entries.
+
+We rearrange the columns so that all basis columns are on the left. Then we can write our matrices as
+\begin{align*}
+ A_{m\times n} &=
+ \begin{pmatrix}
+ (A_B)_{m\times m} & (A_N)_{m\times (n - m)}
+ \end{pmatrix}\\
+ x_{n\times 1} &=
+ \begin{pmatrix}
+ (x_B)_{m\times 1} & (x_N)_{(n - m)\times 1}
+ \end{pmatrix}^T\\
+ c_{n\times 1} &=
+ \begin{pmatrix}
+ (c_B)_{m\times 1} & (c_N)_{(n - m)\times 1}
+ \end{pmatrix}^T.
+\end{align*}
+Then the functional constraints
+\[
+ Ax = b
+\]
+can be decomposed as
+\[
+ A_Bx_B + A_Nx_N = b.
+\]
+We can rearrange this to obtain
+\[
+ x_B = A_B^{-1}(b - A_N x_N).
+\]
+In particular, when $x_N = 0$, then
+\[
+ x_B = A_B^{-1}b.
+\]
+The general tableau is then
+\begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ Basis components & Other components\\
+ \midrule\\
+ $A_B^{-1} A_B = I$ & $A_B^{-1}A_N$ & $A_B^{-1}b$\\\\
+ \midrule
+ \quad$c^T_B - c^T_BA_B^{-1}A_B = 0$\quad & \quad$c_N^T - c_B^TA_B^{-1}A_N$\quad & $-c_B^T A_B^{-1}b$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+This might look really scary, and it is! Without caring too much about where the formulas for the cells come from, we see the identity matrix on the left, which is where we find our basic feasible solution. Below that is the row for the objective function. The values of this row must be $0$ for the basis columns.
+
+On the right-most column, we have $A_B^{-1}b$, which is our $x_B$. Below that is $-c_B^TA_B^{-1}b$, which is the negative of our objective function $c_B^Tx_B$.
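These cell formulas can be computed directly. The following Python sketch (illustration only) builds $A_B^{-1}b$, $A_B^{-1}A_N$ and the reduced costs $c_N^T - c_B^TA_B^{-1}A_N$ for the earlier example, with basis $B = \{x_2, z_2\}$ and non-basic variables $N = \{x_1, z_1\}$:

```python
from fractions import Fraction as F

A = [[F(1), F(2), F(1), F(0)],
     [F(1), F(-1), F(0), F(1)]]          # columns: x1, x2, z1, z2
b = [F(6), F(3)]
c = [F(1), F(1), F(0), F(0)]
B, N = [1, 3], [0, 2]                    # basis {x2, z2}, non-basic {x1, z1}

def cols(M, idx):
    return [[row[j] for j in idx] for row in M]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, p), (q, d) = M
    det = a * d - p * q
    return [[d / det, -p / det], [-q / det, a / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AB_inv = inv2(cols(A, B))
xB = matmul(AB_inv, [[b[0]], [b[1]]])    # A_B^{-1} b
ABinv_AN = matmul(AB_inv, cols(A, N))    # A_B^{-1} A_N
cB = [c[j] for j in B]
reduced = [c[N[j]] - sum(cB[i] * ABinv_AN[i][j] for i in range(2))
           for j in range(2)]            # c_N^T - c_B^T A_B^{-1} A_N
value = sum(cB[i] * xB[i][0] for i in range(2))   # c_B^T A_B^{-1} b
print(xB, reduced, value)
```

This reproduces $x_2 = 3$, $z_2 = 6$, objective value $3$, and reduced costs $\frac{1}{2}$ for $x_1$ and $-\frac{1}{2}$ for $z_1$, as in the second tableau of the example.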
+\subsubsection{The simplex tableau}
+We have
+\begin{align*}
+ f(x) &= c^T x \\
+ &= c_B^T x_B + c_N^T x_N\\
+ &= c_B^T A_B^{-1}(b - A_N x_N) + c_N^T x_N\\
+ &= c_B^T A_B^{-1}b + (c_N^T - c_B^TA_B^{-1}A_N)x_N.
+\end{align*}
+We will maximize $c^T x$ by choosing a basis such that $c_N^T - c_B^T A_B^{-1}A_N \leq 0$, i.e.\ non-positive everywhere and $A_B^{-1}b \geq 0$.
+
+If this is true, then for any feasible solution $x\in \R^n$, we must have $x_N \geq 0$. So $(c_N^T - c_B^TA_B^{-1}A_N)x_N \leq 0$ and
+\[
+ f(x) \leq c_B^T A_B^{-1}b.
+\]
+So if we choose $x_B = A_B^{-1}b$, $x_N = 0$, then we have an optimal solution.
+
+Hence our objective is to pick a basis that makes $c_N^T - c_B^T A_B^{-1}A_N \leq 0$ while keeping $A_B^{-1}b \geq 0$. Suppose this is not yet attained, say $(c_N^T - c_B^T A_B^{-1}A_N)_i > 0$.
+
+We can increase the value of the objective function by increasing $(x_N)_i$. As we increase $(x_N)_i$, we have to satisfy the functional constraints. So the value of other variables will change as well. We can keep increasing $(x_N)_i$ until another variable hits $0$, say $(x_B)_j$. Then we will have to stop.
+
+(However, if it so happens that we can increase $(x_N)_i$ indefinitely without other things hitting $0$, our problem is unbounded.)
+
+The effect of this is that we have switched basis by removing $(x_B)_j$ and adding $(x_N)_i$. We can continue from here. If $c_N^T - c_B^T A_B^{-1}A_N$ is now non-positive, we are done. Otherwise, we continue the above procedure.
+
+The simplex method is a systematic way of doing the above procedure.
+
+\subsubsection{Using the Tableau}
+Consider a tableau of the form
+\begin{center}
+\begin{tabular}{cc}
+ \toprule\\
+ \quad\quad $a_{ij}$\quad\quad\quad & $a_{i0}$\\\\
+ \midrule
+ \quad\quad $a_{0j}$\quad\quad\quad & $a_{00}$\\
+ \bottomrule
+\end{tabular}
+\end{center}
+where $a_{i0}$ is $b$, $a_{0j}$ corresponds to the objective function, and $a_{00}$ is initially $0$.
+
+The simplex method proceeds as follows:
+\begin{enumerate}
+ \item Find an initial basic feasible solution.
+ \item Check whether $a_{0j} \leq 0$ for every $j$. If so, the current solution is optimal. Stop.
+ \item If not, choose a \emph{pivot column} $j$ such that $a_{0j} > 0$. Choose a \emph{pivot row} $i\in \{i: a_{ij} > 0\}$ that minimizes $a_{i0}/a_{ij}$. If multiple rows minimize $a_{i0}/a_{ij}$, then the problem is degenerate, and things \emph{might} go wrong. If $a_{ij} \leq 0$ for all $i$, i.e.\ we cannot choose a pivot row, the problem is unbounded, and we stop.
+ \item We update the tableau by multiplying row $i$ by $1/a_{ij}$ (such that the new $a_{ij} = 1$), and add a $(-a_{kj}/a_{ij})$ multiple of row $i$ to each row $k \not= i$, including $k = 0$ (so that $a_{kj} = 0$ for all $k \not= i$)
+
+ We still have a basic feasible solution, since our choice of pivot row keeps the entries of the right-hand column non-negative after the row operations (apart from $a_{00}$).
+ \item \texttt{GOTO} (ii).
+\end{enumerate}
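The steps above can be sketched in a few lines of Python (illustration only: exact arithmetic, no anti-cycling rule, and it assumes an initial basic feasible solution is already in the tableau). Here it is run on the example from the start of the section:

```python
from fractions import Fraction as F

def simplex(T):
    """Run the simplex method on tableau T (last row = objective,
    last column = right-hand side). Returns the final tableau."""
    while True:
        obj = T[-1]
        # step (ii): optimal when no objective entry is positive
        try:
            j = next(j for j in range(len(obj) - 1) if obj[j] > 0)
        except StopIteration:
            return T
        # step (iii): pivot row minimising a_i0 / a_ij over rows with a_ij > 0
        rows = [i for i in range(len(T) - 1) if T[i][j] > 0]
        if not rows:
            raise ValueError("problem is unbounded")
        i = min(rows, key=lambda r: T[r][-1] / T[r][j])
        # step (iv): row operations
        T[i] = [e / T[i][j] for e in T[i]]
        for k in range(len(T)):
            if k != i:
                T[k] = [a - T[k][j] * b for a, b in zip(T[k], T[i])]

# maximise x1 + x2 subject to x1 + 2x2 <= 6, x1 - x2 <= 3, x >= 0
T = simplex([[F(1), F(2), F(1), F(0), F(6)],
             [F(1), F(-1), F(0), F(1), F(3)],
             [F(1), F(1), F(0), F(0), F(0)]])
print(-T[-1][-1])  # optimal value 5, at x1 = 4, x2 = 1
```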
+
+Now revisit the example at the beginning of the section to see how this is done in practice. Then read the next section for a more complicated example.
+\subsection{The two-phase simplex method}
+Sometimes we don't have a nice identity matrix to start with. In this case, we need to use the \emph{two-phase simplex method}: first find a basic feasible solution, then do the actual optimization.
+
+This method is illustrated by example.
+\begin{eg}
+ Consider the problem
+ \begin{center}
+ minimize $6x_1 + 3x_2$ subject to
+ \begin{align*}
+ x_1 + x_2 &\geq 1\\
+ 2x_1 - x_2 &\geq 1\\
+ 3x_2 &\leq 2\\
+ x_1, x_2 &\geq 0
+ \end{align*}
+ \end{center}
+ This is a minimization problem. To avoid being confused, we maximize $-6x_1 - 3x_2$ instead. We add slack variables to obtain
+ \begin{center}
+ maximize $-6x_1 - 3x_2$ subject to
+ \begin{align*}
+ x_1 + x_2 - z_1 &= 1\\
+ 2x_1 - x_2 - z_2 &= 1\\
+ 3x_2 + z_3 &= 2\\
+ x_1, x_2, z_1, z_2, z_3 &\geq 0
+ \end{align*}
+ \end{center}
+ Now we don't have a basic feasible solution, since we would need $z_1 = z_2 = -1, z_3 = 2$, which is not feasible. So we add \emph{more} variables, called the artificial variables.
+ \begin{center}
+ maximize $-6x_1 - 3x_2$ subject to
+ \begin{align*}
+ x_1 + x_2 - z_1 + y_1&= 1\\
+ 2x_1 - x_2 - z_2 +y_2 &= 1\\
+ 3x_2 + z_3 &= 2\\
+ x_1, x_2, z_1, z_2, z_3, y_1, y_2 &\geq 0
+ \end{align*}
+ \end{center}
+ Note that adding $y_1$ and $y_2$ might create new solutions, which is bad. We solve this problem by first trying to make $y_1$ and $y_2$ both $0$ and find a basic feasible solution. Then we can throw away $y_1$ and $y_2$ and get a basic feasible solution for our original problem. So momentarily, we want to solve
+ \begin{center}
+ minimize $y_1 + y_2$ subject to
+ \begin{align*}
+ x_1 + x_2 - z_1 + y_1&= 1\\
+ 2x_1 - x_2 - z_2 +y_2&= 1\\
+ 3x_2 + z_3 &= 2\\
+ x_1, x_2, z_1, z_2, z_3, y_1, y_2 &\geq 0
+ \end{align*}
+ \end{center}
+ By minimizing $y_1 + y_2$, we hope to make both of them zero.
+
+ Our simplex tableau is
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$ & $y_1$ & $y_2$ \\
+ \midrule
+ 1 & 1 & -1 & 0 & 0 & 1 & 0 & 1\\
+ 2 & -1 & 0 & -1 & 0 & 0 & 1 & 1\\
+ 0 & 3 & 0 & 0 & 1 & 0 & 0 & 2\\
+ \midrule
+ -6 & -3 & 0 & 0 & 0 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Note that we keep both our original and ``kill-$y_i$'' objectives, but now we only care about the second one. We will keep track of the original objective so that we can use it in the second phase.
+
+ We see an initial basic feasible solution $y_1 = y_2 = 1, z_3 = 2$. However, this is not a proper simplex tableau, as the objective rows must vanish on the basis columns. But we have the two $-1$s at the bottom! So we add the first two rows to the last to obtain
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$ & $y_1$ & $y_2$ \\
+ \midrule
+ 1 & 1 & -1 & 0 & 0 & 1 & 0 & 1\\
+ 2 & -1 & 0 & -1 & 0 & 0 & 1 & 1\\
+ 0 & 3 & 0 & 0 & 1 & 0 & 0 & 2\\
+ \midrule
+ -6 & -3 & 0 & 0 & 0 & 0 & 0 & 0\\
+ 3 & 0 & -1 & -1 & 0 & 0 & 0 & 2\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Our pivot column is $x_1$, and our pivot row is the second row. We divide it by $2$ and add/subtract it from other rows.
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$ & $y_1$ & $y_2$ \\
+ \midrule
+ 0 & $\frac{3}{2}$ & -1 & $\frac{1}{2}$ & 0 & 1 & $-\frac{1}{2}$ & $\frac{1}{2}$\\
+ 1 & $-\frac{1}{2}$ & 0 & $-\frac{1}{2}$ & 0 & 0 & $\frac{1}{2}$ & $\frac{1}{2}$\\
+ 0 & 3 & 0 & 0 & 1 & 0 & 0 & 2\\
+ \midrule
+ 0 & -6 & 0 & -3 & 0 & 0 & 3 & 3\\
+ 0 & $\frac{3}{2}$ & $-1$ & $\frac{1}{2}$ & 0 & 0 & $-\frac{3}{2}$ & $\frac{1}{2}$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ There are two possible pivot columns. We pick $z_2$ and use the first row as the pivot row.
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$ & $y_1$ & $y_2$ \\
+ \midrule
+ 0 & 3 & -2 & 1 & 0 & 2 & -1 & 1\\
+ 1 & 1 & -1 & 0 & 0 & 1 & 0 & 1\\
+ 0 & 3 & 0 & 0 & 1 & 0 & 0 & 2\\
+ \midrule
+ 0 & 3 & -6 & 0 & 0 & 6 & 0 & 6\\
+ 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We see that $y_1$ and $y_2$ are no longer in the basis, and hence take value $0$. So we drop all the phase I stuff, and are left with
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$\\
+ \midrule
+ 0 & 3 & -2 & 1 & 0 & 1\\
+ 1 & 1 & -1 & 0 & 0 & 1\\
+ 0 & 3 & 0 & 0 & 1 & 2\\
+ \midrule
+ 0 & 3 & -6 & 0 & 0 & 6\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We see a basic feasible solution $z_2 = x_1 = 1, z_3 = 2$.
+
+ We pick $x_2$ as the pivot column, and the first row as the pivot row. Then we have
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ $x_1$ & $x_2$ & $z_1$ & $z_2$ & $z_3$\\
+ \midrule
+ 0 & 1 & $-\frac{2}{3}$ & $\frac{1}{3}$ & 0 & $\frac{1}{3}$\\
+ 1 & 0 & $-\frac{1}{3}$ & $-\frac{1}{3}$ & 0 & $\frac{2}{3}$\\
+ 0 & 0 & 2 & -1 & 1 & 1\\
+ \midrule
+ 0 & 0 & -4 & -1 & 0 & 5\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Since the last row is all non-positive, the dual is also feasible, and this is an optimal solution. So $x_1 = \frac{2}{3}, x_2 = \frac{1}{3}, z_3 = 1$ is an optimal solution, and our optimal value is $5$.
+
+ Note that we previously said that the bottom right entry is the negative of the optimal value, not the optimal value itself! This is correct, since in the tableau, we are maximizing $-6x_1 - 3x_2$, whose maximum value is $-5$. So the minimum value of $6x_1 + 3x_2$ is $5$.
+\end{eg}
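As a sanity check, we can solve the same problem by brute force over the corners of the 2D feasible region, as in the first section (the region here is unbounded, but since the objective coefficients are positive the minimum is still attained at a corner). A Python sketch, illustration only:

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints of the example, all written as a1*x1 + a2*x2 <= b.
cons = [(F(-1), F(-1), F(-1)),  # x1 + x2 >= 1
        (F(-2), F(1), F(-1)),   # 2x1 - x2 >= 1
        (F(0), F(3), F(2)),     # 3x2 <= 2
        (F(-1), F(0), F(0)),    # x1 >= 0
        (F(0), F(-1), F(0))]    # x2 >= 0

def intersect(c1, c2):
    """Solve the 2x2 system of two active constraints, if non-singular."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

feasible = lambda p: all(a * p[0] + b * p[1] <= c for a, b, c in cons)
corners = {p for c1, c2 in combinations(cons, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)}
best = min(corners, key=lambda p: 6 * p[0] + 3 * p[1])
print(best, 6 * best[0] + 3 * best[1])  # (2/3, 1/3) with value 5
```

This agrees with the two-phase simplex answer: $x_1 = \frac{2}{3}$, $x_2 = \frac{1}{3}$ with value $5$.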
+\section{Non-cooperative games}
+Here we have a short digression to game theory. We mostly focus on games with two players.
+
+\subsection{Games and Solutions}
+\begin{defi}[Bimatrix game]
+ A two-player game, or \emph{bimatrix game}, is given by two matrices $P, Q\in \R^{m\times n}$. Player 1, or the \emph{row player}, chooses a row $i\in \{1, \cdots, m\}$, while player 2, the \emph{column player}, chooses a column $j\in \{1, \cdots, n\}$. These are selected without knowledge of the other player's decisions. The two players then get payoffs $P_{ij}$ and $Q_{ij}$ respectively.
+\end{defi}
+
+\begin{eg}
+ A game of rock-paper-scissors can have payoff matrices
+ \[
+ P_{ij} =
+ \begin{pmatrix}
+ 0 & -1 & 1\\
+ 1 & 0 & -1\\
+ -1 & 1 & 0
+ \end{pmatrix},\quad
+ Q_{ij} =
+ \begin{pmatrix}
+ 0 & 1 & -1\\
+ -1 & 0 & 1\\
+ 1 & -1 & 0
+ \end{pmatrix}.
+ \]
+ Here a victory gives you a payoff of $1$, a loss gives a payoff of $-1$, and a draw gives a payoff of $0$. Also the first row/column corresponds to playing rock, the second corresponds to paper and the third corresponds to scissors.
+
+ Usually, this is not the best way to display the payoff matrices. First of all, we need to write out two matrices, and there isn't an easy way to indicate what row corresponds to what decision. Instead, we usually write this as a table.
+ \begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ & R & P & S\\
+ \midrule
+ R & $(0, 0)$ & $(-1, 1)$ & $(1, -1)$\\
+ P & $(1, -1)$ & $(0, 0)$ & $(-1, 1)$\\
+ S & $(-1, 1)$ & $(1, -1)$ & $(0, 0)$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ By convention, the first item in the tuple $(-1, 1)$ indicates the payoff of the row player, and the second item indicates the payoff of the column player.
+\end{eg}
+\begin{defi}[Strategy]
+ Players are allowed to play randomly. The set of \emph{strategies} the row player can have is
+ \[
+ X = \{x\in \R^m: x \geq 0, \sum x_i = 1\}
+ \]
+ and the column player has strategies
+ \[
+ Y = \{y\in \R^n: y \geq 0, \sum y_i = 1\}
+ \]
+ Each vector corresponds to the probabilities of selecting each row or column.
+
+ A strategy profile $(x, y)\in X\times Y$ induces a lottery, and we write $p(x, y) = x^T Py$ for the expected payoff of the row player.
+
+ If $x_i = 1$ for some $i$, i.e.\ we always pick $i$, we call $x$ a \emph{pure strategy}.
+\end{defi}
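The expected payoff $p(x, y) = x^TPy$ is easy to compute. A small Python sketch (illustration only), using the rock-paper-scissors payoff matrix from the earlier example:

```python
from fractions import Fraction as F

# Row player's payoff matrix for rock-paper-scissors (rows/columns: R, P, S)
P = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def payoff(x, P, y):
    """Expected payoff x^T P y of the row player under mixed strategies x, y."""
    return sum(x[i] * P[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

uniform = [F(1, 3)] * 3
print(payoff(uniform, P, uniform))      # uniform against uniform gives 0
print(payoff([1, 0, 0], P, [0, 1, 0]))  # pure rock against pure paper: -1
```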
+
+\begin{eg}[Prisoner's dilemma]
+ Suppose Alice and Bob commit a crime together, and are caught by the police. They can choose to remain silent ($S$) or testify ($T$). Different options will lead to different outcomes:
+ \begin{itemize}
+ \item Both keep silent: the police has little evidence and they go to jail for 2 years.
+ \item One testifies and one remains silent: the one who testifies gets awarded and is freed, while the other gets stuck in jail for 10 years.
+ \item Both testify: they both go to jail for 5 years.
+ \end{itemize}
+
+ We can represent this by a payoff table:
+ \begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ & $S$ & $T$\\
+ \midrule
+ $S$ & $(2, 2)$ & $(0, 3)$\\
+ $T$ & $(3, 0)$ & $(1, 1)$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Note that higher payoff is desired, so a longer serving time corresponds to a lower payoff. Also, payoffs are interpreted relatively, so replacing $(0, 3)$ with $(0, 100)$ (and $(3, 0)$ with $(100, 0)$) in the payoff table would make no difference.
+
+ Here we see that regardless of what the other person does, it is always strictly better to testify than not (unless you want to be nice). We say $T$ is a \emph{dominant strategy}, and $(1, 1)$ is \emph{Pareto dominated} by $(2, 2)$.
+\end{eg}
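The dominance claim can be checked mechanically: $T$ strictly dominates $S$ if it gives a strictly higher payoff against every choice of the opponent. A Python sketch (illustration only), with the payoff table hard-coded:

```python
# Payoff tables for the prisoner's dilemma; rows/columns ordered (S, T).
P = [[2, 0],   # row player's payoffs
     [3, 1]]
Q = [[2, 3],   # column player's payoffs
     [0, 1]]

def strictly_dominates(P, r1, r2):
    """Row r1 strictly dominates row r2 for the row player."""
    return all(P[r1][j] > P[r2][j] for j in range(len(P[0])))

print(strictly_dominates(P, 1, 0))                    # T dominates S for rows
print(all(Q[i][1] > Q[i][0] for i in range(len(Q))))  # and for columns too
```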
+
+\begin{eg}[Chicken]
+ The game of \emph{Chicken} is as follows: two people drive their cars towards each other at high speed. If they collide, they will die. Hence they can decide to chicken out ($C$) or continue driving ($D$). If both don't chicken, they die, which is bad. If one chickens and the other doesn't, the person who chickens looks silly (but doesn't die). If both chicken out, they both look slightly silly. This can be represented by the following table:
+ \begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ & $C$ & $D$ \\
+ \midrule
+ $C$ & $(2, 2)$ & $(1, 3)$ \\
+ $D$ & $(3, 1)$ & $(0, 0)$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Here there is no dominating strategy, so we need a different way of deciding what to do.
+
+ Instead, we define the \emph{security level} of the row player to be
+ \[
+ \max_{x\in X}\min_{y\in Y} p(x, y) = \max_{x\in X}\min_{j\in \{1, \ldots, n\}} \sum_{i = 1}^m x_i p_{ij}.
+ \]
+ Such an $x$ is the strategy the row player can employ that maximizes the worst possible payoff. This is called the \emph{maximin} strategy.
+
+ We can formulate this as a linear program:
+ \begin{center}
+ maximize $v$ such that
+ \begin{align*}
+ \sum_{i = 1}^m x_i p_{ij} &\geq v\quad\text{for all }j = 1, \cdots, n\\
+ \sum_{i = 1}^m x_i &= 1\\
+ x &\geq 0
+ \end{align*}
+ \end{center}
+ Here the maximin strategy is to chicken. However, this isn't really what we are looking for, since if both players employ this maximin strategy, either player would do better by deviating and not chickening out.
+\end{eg}
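For a $2\times n$ game the security level can be computed directly rather than via the linear program, since each column gives a line in $x = \P(\text{play } C)$ and the maximin is attained at $x = 0$, $x = 1$, or a crossing point of two lines. A Python sketch (illustration only) for Chicken:

```python
from fractions import Fraction as F

# Row player's payoff matrix in Chicken; rows and columns ordered (C, D).
P = [[2, 1],
     [3, 0]]

def worst_case(x):
    """Worst payoff over the column player's pure responses when we play C
    with probability x (a pure response is always among the worst)."""
    return min(x * P[0][j] + (1 - x) * P[1][j] for j in range(2))

# Each column j gives the line P[1][j] + (P[0][j] - P[1][j]) * x, so the
# maximin is at x = 0, x = 1, or where the two lines cross (if in [0, 1]).
candidates = [F(0), F(1)]
den = (P[0][0] - P[1][0]) - (P[0][1] - P[1][1])
if den != 0:
    x_cross = F(P[1][1] - P[1][0], den)
    if 0 <= x_cross <= 1:
        candidates.append(x_cross)
best_x = max(candidates, key=worst_case)
print(best_x, worst_case(best_x))  # always chicken: security level 1
```

The crossing point here is $x = \frac{3}{2}$, outside $[0, 1]$, so the maximin is the pure strategy $C$ with security level $1$, as claimed.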
+
+\begin{defi}[Best response and equilibrium]
+ A strategy $x\in X$ is a \emph{best response} to $y\in Y$ if for all $x'\in X$
+ \[
+ p(x, y) \geq p(x', y)
+ \]
+ A pair $(x, y)$ is an \emph{equilibrium} if $x$ is the best response against $y$ and $y$ is a best response against $x$.
+\end{defi}
+
+\begin{eg}
+ In the Chicken game, there are two pure equilibria, $(D, C)$ and $(C, D)$, with payoffs $(3, 1)$ and $(1, 3)$, and there is a mixed equilibrium in which each player picks the two options with equal probability.
+\end{eg}
+
+\begin{thm}[Nash, 1951]
+ Every bimatrix game has an equilibrium.
+\end{thm}
+We are not proving this since it is too hard.
+
+\subsection{The minimax theorem}
+There is a special type of game known as a \emph{zero sum game}.
+\begin{defi}[Zero-sum game]
+ A bimatrix game is a \emph{zero-sum game}, or matrix game, if $q_{ij} = -p_{ij}$ for all $i, j$, i.e.\ the total payoff is always 0.
+\end{defi}
+To specify a matrix game, we only need one matrix, not two, since the matrix of the other player is simply the negative of the matrix of the first.
+
+\begin{eg}
+ The rock-paper-scissors game as specified in the example at the beginning is a zero-sum game.
+\end{eg}
+
+\begin{thm}[von Neumann, 1928]
+ Let $P\in \R^{m\times n}$ be a payoff matrix. Then
+ \[
+ \max_{x\in X}\min_{y\in Y} p(x, y) = \min_{y\in Y}\max_{x\in X} p(x, y).
+ \]
+ Note that this is equivalent to
+ \[
+ \max_{x\in X}\min_{y\in Y} p(x, y) = -\max_{y\in Y}\min_{x\in X} -p(x, y).
+ \]
+ The left hand side is the best payoff the row player can guarantee himself by playing his maximin strategy. By the second form, the right hand side is the negative of the best payoff the column player can guarantee himself by playing his maximin strategy, his payoff being $-p(x, y)$.
+
+ The theorem then says that if both players employ their maximin strategies, the result is an equilibrium.
+\end{thm}
+
+\begin{proof}
+ Recall that the optimal value of $\max\min p(x, y)$ is a solution to the linear program
+ \begin{center}
+ maximize $v$ such that
+ \begin{align*}
+ \sum_{i = 1}^m x_i p_{ij} &\geq v\quad\text{for all }j = 1, \cdots, n\\
+ \sum_{i = 1}^m x_i &= 1\\
+ x &\geq 0
+ \end{align*}
+ \end{center}
+ Adding slack variables $z\in \R^n$ with $z \geq 0$, we obtain the Lagrangian
+ \[
+ L(v, x, z, w, y) = v + \sum_{j = 1}^n y_j\left(\sum_{i = 1}^m x_ip_{ij} - z_j - v\right) - w\left(\sum_{i = 1}^m x_i - 1\right),
+ \]
+ where $w\in \R$ and $y\in \R^n$ are Lagrange multipliers. This is equal to
+ \[
+ \left(1 - \sum_{j = 1}^n y_j\right)v + \sum_{i = 1}^m \left(\sum_{j = 1}^n p_{ij}y_j - w\right)x_i- \sum_{j = 1}^n y_j z_j + w.
+ \]
+ This has a finite maximum over all $v\in \R$, $x \geq 0$ and $z \geq 0$ iff $\sum y_j = 1$, $\sum p_{ij}y_j \leq w$ for all $i$, and $y \geq 0$. The dual is therefore
+ \begin{center}
+ minimize $w$ subject to
+ \begin{align*}
+ \sum_{j = 1}^n p_{ij}y_j &\leq w\quad\text{ for all }i\\
+ \sum_{j = 1}^n {y_j} &= 1\\
+ y &\geq 0
+ \end{align*}
+ \end{center}
+ This corresponds to the column player choosing a strategy $(y_i)$ such that the expected payoff is bounded above by $w$.
+
+ The optimum value of the dual is $\displaystyle\min_{y\in Y}\max_{x\in X}p(x, y)$. So the result follows from strong duality.
+\end{proof}
+
+\begin{defi}[Value]
+ The \emph{value} of the matrix game with payoff matrix $P$ is
+ \[
+ v = \max_{x\in X}\min_{y\in Y} p(x, y) = \min_{y\in Y}\max_{x\in X} p(x, y).
+ \]
+\end{defi}
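As a sanity check, the value can be verified numerically for rock-paper-scissors, assuming the standard payoffs of $\pm 1$ for winning/losing and $0$ for a draw:

```python
import numpy as np

# Row player's payoffs in rock-paper-scissors (rows/columns: R, P, S).
P = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])
x = y = np.ones(3) / 3            # uniform mixed strategies

# Row player's guaranteed payoff under x: min_j sum_i x_i p_ij.
row_guarantee = (x @ P).min()
# Column player's worst case under y: max_i sum_j p_ij y_j.
col_guarantee = (P @ y).max()
# Both are 0, so the value is 0 and (x, y) is an equilibrium.
```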
+In general, the equilibria are given by the following theorem:
+\begin{thm}
+ $(x, y)\in X\times Y$ is an equilibrium of the matrix game with payoff matrix $P$ if and only if
+ \begin{align*}
+ \min_{y'\in Y} p(x, y') &= \max_{x' \in X}\min_{y'\in Y} p(x', y')\\
+ \max_{x'\in X} p(x', y) &= \min_{y' \in Y}\max_{x'\in X} p(x', y')
+ \end{align*}
+ i.e.\ the $x, y$ are optimizers for the $\max\min$ and $\min\max$ functions.
+\end{thm}
+
+Proof is in the second example sheet.
+
+\section{Network problems}
+\subsection{Definitions}
+We are going to look into several problems that involve graphs. Unsurprisingly, we will need some definitions from graph theory.
+\begin{defi}[Directed graph/network]
+ A \emph{directed graph} or \emph{network} is a pair $G = (V, E)$, where $V$ is the set of vertices and $E\subseteq V\times V$ is the set of edges. If $(u, v)\in E$, we say there is an edge from $u$ to $v$.
+\end{defi}
+
+\begin{defi}[Degree]
+ The degree of a vertex $u \in V$ is the number of $v\in V$ such that $(u, v)\in E$ or $(v, u)\in E$.
+\end{defi}
+
+\begin{defi}[Walk]
+ A \emph{walk} from $u\in V$ to $v\in V$ is a sequence of vertices $u = v_1, \cdots, v_k = v$ such that $(v_i, v_{i + 1})\in E$ for all $i$. An \emph{undirected walk} allows $(v_i, v_{i + 1})\in E$ or $(v_{i + 1}, v_i)\in E$, i.e.\ we are allowed to walk backwards.
+\end{defi}
+
+\begin{defi}[Path]
+ A path is a walk where $v_1, \cdots, v_k$ are pairwise distinct.
+\end{defi}
+
+\begin{defi}[Cycle]
+ A cycle is a walk where $v_1, \cdots, v_{k - 1}$ are pairwise distinct and $v_1 = v_k$.
+\end{defi}
+
+\begin{defi}[Connected graph]
+ A graph is \emph{connected} if for any pair of vertices, there is an undirected path between them.
+\end{defi}
+
+\begin{defi}[Tree]
+ A \emph{tree} is a connected graph without (undirected) cycles.
+\end{defi}
+
+\begin{defi}[Spanning tree]
+ A \emph{spanning tree} of a graph $G = (V, E)$ is a tree $(V', E')$ with $V' = V$ and $E'\subseteq E$.
+\end{defi}
+
+\subsection{Minimum-cost flow problem}
+Let $G = (V, E)$ be a directed graph. Let the number of vertices be $|V| = n$ and let $b\in \R^n$. For each edge, we assign three numbers: a cost, a lower bound and an upper bound. We denote these by matrices $C, \underline{M}, \overline{M}\in \R^{n\times n}$.
+
+Each component $b_i$ of the vector $b$ denotes the amount of flow entering or leaving vertex $i\in V$. If $b_i > 0$, we call $i\in V$ a source, and if $b_i < 0$, a sink. For example, if we have a factory at vertex $i$ that produces stuff, $b_i$ will be positive. This is only the amount of stuff produced or consumed at the vertex, and not the amount that flows through it.
+
+$c_{ij}$ is the cost of transferring one unit of stuff from vertex $i$ to vertex $j$ (fill entries with $0$ if there is no edge between the vertices), and $\underline{m}_{ij}$ and $\overline{m}_{ij}$ denote the lower and upper bounds on the amount of flow along $(i, j)\in E$ respectively.
+
+$x\in \R^{n\times n}$ is a minimum-cost flow if it minimizes the cost of transferring stuff, while satisfying the constraints, i.e.\ it is an optimal solution to the problem
+\begin{center}
+ minimize $\displaystyle\sum_{(i, j)\in E}c_{ij}x_{ij}$ subject to
+ \[
+ b_i + \sum_{j: (j, i)\in E} x_{ji} = \sum_{j: (i, j)\in E}x_{ij}\quad\text{ for each }i\in V
+ \]
+ \[
+ \underline{m}_{ij} \leq x_{ij}\leq \overline{m}_{ij}\quad\text{ for all }(i, j) \in E.
+ \]
+\end{center}
+This problem is a linear program. In theory, we can write it in the general form $Ax = b$, where $A$ is a huge matrix given by
+\[
+ a_{ik} =
+ \begin{cases}
+ 1 & \text{if the $k$th edge starts at vertex }i\\
+ -1 & \text{if the $k$th edge ends at vertex }i\\
+ 0 & \text{otherwise}
+ \end{cases}
+\]
+However, using this huge matrix to solve this problem by the simplex method is not very efficient. So we will look for better solutions.
+
+Note that for the system to make sense, we must have
+\[
+ \sum_{i\in V}b_i = 0,
+\]
+i.e.\ the total supply is equal to the total consumption.
+
+To simplify the problem, we can convert it into an equivalent \emph{circulation problem}, where $b_i = 0$ for all $i$. We do this by adding an additional vertex where we send all the extra $b_i$ to. For example, if a vertex has $b_i = -1$, then it takes in more stuff than it gives out. So we can mandate it to send out one extra unit to the additional vertex. Then $b_i = 0$.
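This conversion is mechanical. Below is a sketch, assuming edges are stored as a dict mapping $(i, j)$ to a (lower, upper, cost) triple and using `'*'` as the name of the added vertex; both representation choices are ours. The notes describe the case $b_i < 0$; the case $b_i > 0$ is symmetric, with the extra vertex delivering the surplus.

```python
def to_circulation(edges, b):
    """Convert a flow problem to an equivalent circulation (all b_i = 0).

    edges: dict mapping (i, j) to (lower, upper, cost); b: dict mapping
    each vertex to its net supply b_i, with sum(b.values()) == 0.
    """
    edges = dict(edges)
    for i, bi in b.items():
        if bi > 0:       # source: '*' must deliver exactly b_i units to i
            edges[('*', i)] = (bi, bi, 0)
        elif bi < 0:     # sink: i must send exactly |b_i| units to '*'
            edges[(i, '*')] = (-bi, -bi, 0)
    return edges, {i: 0 for i in list(b) + ['*']}
```

Since $\sum_i b_i = 0$, the added vertex `'*'` is itself balanced, so the result is a genuine circulation problem.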
+
+An \emph{uncapacitated problem} is the case where $\underline{m}_{ij} = 0$ and $\overline{m}_{ij} = \infty$ for all $(i, j) \in E$. An uncapacitated problem is either unbounded or bounded. If it is bounded, then it is equivalent to a problem with finite capacities, since we can add a bound greater than what the optimal solution wants.
+
+We are going to show that this can be reduced to a simpler problem:
+
+\subsection{The transportation problem}
+The transportation problem is a special case of the minimum-flow problem, where the graph is a \emph{bipartite graph}. In other words, we can split the vertices into two halves $A, B$, where all edges flow from a vertex in $A$ to a vertex in $B$. We call the vertices of $A$ the \emph{suppliers} and the vertices of $B$ the \emph{consumers}.
+
+In this case, we can write the problem as
+\begin{center}
+ minimize $\displaystyle\sum_{i = 1}^n\sum_{j = 1}^m c_{ij}x_{ij}$ subject to
+ \[
+ \sum_{j = 1}^m x_{ij} = s_i\text{ for }i = 1, \cdots, n
+ \]
+ \[
+ \sum_{i = 1}^n x_{ij} = d_j\text{ for }j = 1, \cdots, m
+ \]
+ \[
+ x\geq 0
+ \]
+\end{center}
+Here $s_i$ is the supply of each supplier, and $d_j$ is the demand of each consumer. We have $s\in \R^n, d\in \R^m$ satisfying $s, d\geq 0$, $\sum s_i = \sum d_j$.
+
+Finally, we have $c\in \R^{n\times m}$ representing the cost of transferal.
+
+We now show that every (bounded) minimum cost-flow problem can be reduced to the transportation problem.
+
+\begin{thm}
+ Every minimum cost-flow problem with finite capacities or non-negative costs has an equivalent transportation problem.
+\end{thm}
+
+\begin{proof}
+ Consider a minimum-cost flow problem on network $(V, E)$. It is wlog to assume that $\underline{m}_{ij} = 0$ for all $(i, j) \in E$. Otherwise, set $\underline{m}_{ij}$ to $0$, $\overline{m}_{ij}$ to $\overline{m}_{ij} - \underline{m}_{ij}$, $b_i$ to $b_i - \underline{m}_{ij}$, $b_j$ to $b_j + \underline{m}_{ij}$, $x_{ij}$ to $x_{ij} - \underline{m}_{ij}$. Intuitively, we just secretly ship the minimum amount without letting the network know.
+
+ Moreover, we can assume that all capacities are finite: if some edge has infinite capacity but non-negative cost, then setting the capacity to a large enough number, for example $\sum_{i \in V}|b_i|$, does not affect the optimal solutions. This is because cost is non-negative, and the optimal solution will not want shipping loops. So we will have at most $\sum |b_i|$ shipments.
+
+ We will construct an instance of the transportation problem as follows:
+
+ For every $i\in V$, add a consumer with demand $\left(\sum_{k: (i, k)\in E}\overline{m}_{ik}\right) - b_i$.
+
+ For every $(i, j)\in E$, add a supplier with supply $\overline{m}_{ij}$, an edge to consumer $i$ with cost $c_{(ij, i)} = 0$ and an edge to consumer $j$ with cost $c_{(ij, j)} = c_{ij}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [draw, circle, inner sep=0, minimum width=0.6cm] (ij) at (0, 0) {$ij$};
+ \node [draw, circle, inner sep=0, minimum width=0.6cm] (i) at (2, 1) {$i$};
+ \node [draw, circle, inner sep=0, minimum width=0.6cm] (j) at (2, -1) {$j$};
+ \draw [->] (ij) -- (i) node [pos=0.5, above] {$0$};
+ \draw [->] (ij) -- (j) node [pos=0.5, below] {$c_{ij}$};
+ \draw [->] (-1.5, 0) node [left] {$\overline{m}_{ij}$} -- (ij);
+ \draw [->] (i) -- +(1.5, 0) node [right] {$\sum_{k: (i, k)\in E}\overline{m}_{ik} - b_i$};
+ \draw [->] (j) -- +(1.5, 0) node [right] {$\sum_{k: (j, k)\in E}\overline{m}_{jk} - b_j$};
+ \end{tikzpicture}
+ \end{center}
+ The idea is that if the capacity of the edge $(i, j)$ is, say, 5, in the original network, and we want to transport $3$ along this edge, then in the new network, we send $3$ units from $ij$ to $j$, and $2$ units to $i$.
+
+ The tricky part of the proof is to show that we have the same constraints in both graphs.
+
+ For any flow $x$ in the original network, the corresponding flow on $(ij, j)$ is $x_{ij}$ and the flow on $(ij, i)$ is $\overline{m}_{ij} - x_{ij}$. The total flow into $i$ is then
+ \[
+ \sum_{k: (i, k)\in E}(\overline{m}_{ik} - x_{ik}) + \sum_{k: (k, i)\in E}x_{ki}
+ \]
+ This satisfies the constraints of the new network if and only if
+ \[
+ \sum_{k: (i, k)\in E}(\overline{m}_{ik} - x_{ik}) + \sum_{k: (k, i)\in E}x_{ki} = \sum_{k: (i, k)\in E}\overline{m}_{ik} - b_i,
+ \]
+ which is true if and only if
+ \[
+ b_i + \sum_{k:(k, i)\in E}x_{ki} - \sum_{k: (i, k)\in E}x_{ik} = 0,
+ \]
+ which is exactly the constraint for the node $i$ in the original minimal-cost flow problem. So done.
+\end{proof}
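The construction in the proof is mechanical, so it can be written down directly. The following is a sketch under the proof's assumptions (lower bounds already shifted to $0$, all capacities finite); the dict-based representation and the function name are ours.

```python
def to_transportation(capacity, cost, b):
    """Construct the equivalent transportation problem from the proof.

    capacity, cost: dicts mapping each edge (i, j) to its (finite)
    upper bound and its cost; b: dict mapping vertex i to its net
    supply b_i.
    """
    # One consumer per vertex i, demanding
    # (sum of capacities of edges leaving i) - b_i.
    demands = {i: sum(ub for (u, _), ub in capacity.items() if u == i) - b[i]
               for i in b}
    # One supplier per edge (i, j) with supply m_ij, a zero-cost link
    # to consumer i and a cost c_ij link to consumer j.
    supplies = dict(capacity)
    costs = {}
    for (i, j) in capacity:
        costs[((i, j), i)] = 0
        costs[((i, j), j)] = cost[(i, j)]
    return supplies, demands, costs
```

Note that total supply equals total demand automatically: the supplies sum to $\sum \overline{m}_{ij}$, and the demands sum to $\sum \overline{m}_{ij} - \sum b_i = \sum \overline{m}_{ij}$.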
+To solve the transportation problem, it is convenient to have two sets of Lagrange multipliers, one for the supplier constraints and one for the consumer constraints. Then the Lagrangian of the transportation problem can be written as
+\[
+ L(x, \lambda, \mu) = \sum_{i = 1}^n\sum_{j = 1}^m c_{ij}x_{ij} + \sum_{i = 1}^n \lambda_i\left(s_i - \sum_{j = 1}^m x_{ij}\right) - \sum_{j = 1}^m \mu_j\left(d_j - \sum_{i = 1}^n x_{ij}\right).
+\]
+Note that we use different signs for the Lagrange multipliers for the suppliers and the consumers, so that our ultimate optimality condition will look nicer.
+
+This is equivalent to
+\[
+ L(x, \lambda, \mu) = \sum_{i = 1}^n \sum_{j = 1}^m (c_{ij} - \lambda_i + \mu_j)x_{ij} + \sum_{i = 1}^n \lambda_i s_i - \sum_{j = 1}^m \mu_j d_j.
+\]
+Since $x \geq 0$, the Lagrangian has a finite minimum iff $c_{ij} - \lambda_i + \mu_j \geq 0$ for all $i, j$. So this is our dual feasibility condition.
+
+At an optimum, complementary slackness entails that
+\[
+ (c_{ij} - \lambda_i + \mu_j)x_{ij} = 0
+\]
+for all $i, j$.
+
+In this case, we have a tableau as follows:
+\newcommand\bb[1]{\multicolumn{1}{|c|}{#1}}
+\newcommand\bbb[1]{\multicolumn{2}{c|}{#1}}
+\newcommand\bbbb[1]{\multicolumn{2}{c}{#1}}
+\begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|c}
+ \multicolumn{1}{c}{ } & \bbbb{$\mu_1$} & \bbbb{$\mu_2$} & \bbbb{$\mu_3$} & \bbbb{$\mu_4$}\\\cline{2-9}
+ & \bbb{$\lambda_1 - \mu_1$} & \bbb{$\lambda_1 - \mu_2$} & \bbb{$\lambda_1 - \mu_3$} & \bbb{$\lambda_1 - \mu_4$}\\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ $\lambda_1$ & $x_{11}$ & \bb{$c_{11}$} & $x_{12}$ & \bb{$c_{12}$} & $x_{13}$ & \bb{$c_{13}$} & $x_{14}$ & \bb{$c_{14}$} & $s_1$\\\cline{2-9}
+ & \bbb{$\lambda_2 - \mu_1$} & \bbb{$\lambda_2 - \mu_2$} & \bbb{$\lambda_2 - \mu_3$} & \bbb{$\lambda_2 - \mu_4$}\\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ $\lambda_2$ & $x_{21}$ & \bb{$c_{21}$} & $x_{22}$ & \bb{$c_{22}$} & $x_{23}$ & \bb{$c_{23}$} & $x_{24}$ & \bb{$c_{24}$} & $s_2$\\\cline{2-9}
+ & \bbb{$\lambda_3 - \mu_1$} & \bbb{$\lambda_3 - \mu_2$} & \bbb{$\lambda_3 - \mu_3$} & \bbb{$\lambda_3 - \mu_4$}\\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ $\lambda_3$ & $x_{31}$ & \bb{$c_{31}$} & $x_{32}$ & \bb{$c_{32}$} & $x_{33}$ & \bb{$c_{33}$} & $x_{34}$ & \bb{$c_{34}$} & $s_3$\\\cline{2-9}
+ \multicolumn{1}{c}{ }& \bbbb{$d_1$} & \bbbb{$d_2$} & \bbbb{$d_3$} & \bbbb{$d_4$}\\
+ \end{tabular}
+\end{center}
+We have a row for each supplier and a column for each consumer.
+\begin{eg}
+ Suppose we have three suppliers with supplies $8, 10$ and $9$; and four consumers with demands $6, 5, 8, 8$.
+
+ It is easy to create an initial feasible solution - we just start from the first consumer and first supplier, and supply as much as we can until one side runs out of stuff.
+
+ We first fill our tableau with our feasible solution.
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|c}
+ \cline{2-9}
+ & & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & 6 & \bb{5} & 2 & \bb{3} & & \bb{4} & & \bb{6} & 8\\\cline{2-9}
+ & & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & & \bb{2} & 3 & \bb{7} & 7 & \bb{4} & & \bb{1} & 10\\\cline{2-9}
+ & & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4} & 9\\\cline{2-9}
+ \multicolumn{1}{c}{ }& \bbbb{6} & \bbbb{5} &\bbbb{8} & \bbbb{8} &
+ \end{tabular}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node (s1) at (0, 0) [circ] {};
+ \node at (0, 0) [left] {$8 = s_1$};
+ \node (s2) at (0, -1) [circ] {};
+ \node at (0, -1) [left] {$10 = s_2$};
+ \node (s3) at (0, -2) [circ] {};
+ \node at (0, -2) [left] {$9 = s_3$};
+
+ \node (d1) at (2, 0) [circ] {};
+ \node at (2, 0) [right] {$d_1 = 6$};
+ \node (d2) at (2, -1) [circ] {};
+ \node at (2, -1) [right] {$d_2 = 5$};
+ \node (d3) at (2, -2) [circ] {};
+ \node at (2, -2) [right] {$d_3 = 8$};
+ \node (d4) at (2, -3) [circ] {};
+ \node at (2, -3) [right] {$d_4 = 8$};
+
+ \draw [->] (s1) -- (d1) node [pos=0.5, above] {\tiny 6};
+ \draw [->] (s1) -- (d2) node [pos=0.5, above] {\tiny 2};
+ \draw [->] (s2) -- (d2) node [pos=0.5, above] {\tiny 3};
+ \draw [->] (s2) -- (d3) node [pos=0.5, above] {\tiny 7};
+ \draw [->] (s3) -- (d3) node [pos=0.5, above] {\tiny 1};
+ \draw [->] (s3) -- (d4) node [pos=0.5, above] {\tiny 8};
+ \end{tikzpicture}
+ \end{center}
+ We see that our basic feasible solution corresponds to a spanning tree. In general, if we have $n$ suppliers and $m$ consumers, then we have $n + m$ vertices, and hence $n + m - 1$ edges. So we have $n + m - 1$ dual constraints. So we can arbitrarily choose one Lagrange multiplier, and the other Lagrange multipliers will follow. We choose $\lambda_1 = 0$. Since we require
+ \[
+ (c_{ij} - \lambda_i + \mu_j)x_{ij} = 0,
+ \]
+ for edges in the spanning tree, we have $x_{ij} \not= 0$, so $c_{ij} - \lambda_i + \mu_j = 0$. Hence we must have $\mu_1 = -5$. We can fill in the values of the other Lagrange multipliers as follows, and obtain
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|}
+ \multicolumn{1}{c}{ }& \bbbb{-5} & \bbbb{-3} & \bbbb{0} & \bbbb{-2}\\\cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 0 & 6 & \bb{5} & 2 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 4 & & \bb{2} & 3 & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 2 & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
+ \end{tabular}
+ \end{center}
+ We can fill in the values of $\lambda_i - \mu_j$:
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|}
+ \multicolumn{1}{c}{ }& \bbbb{-5} & \bbbb{-3} & \bbbb{0} & \bbbb{-2}\\\cline{2-9}
+ & & & & & \bbb{0} & \bbb{2} \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 0 & 6 & \bb{5} & 2 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
+ & \bbb9 & & & & & \bbb{6} \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 4 & & \bb{2} & 3 & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
+ & \bbb{7} & \bbb{5} & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 2 & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
+ \end{tabular}
+ \end{center}
+ The dual feasibility condition is
+ \[
+ \lambda_i - \mu_j \leq c_{ij}
+ \]
+ If it is satisfied everywhere, we have optimality. Otherwise, we will have to do something.
+
+ What we do is we add an edge, say from the second supplier to the first consumer. Then we have created a cycle. We keep increasing the flow on the new edge. This causes the values on other edges to change by flow conservation. So we keep doing this until some other edge reaches zero.
+
+ If we increase flow by, say, $\delta$, we have
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|}
+ \cline{2-9}
+ & & & & &&& & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & $6 - \delta$ & \bb{5} & $2 + \delta$ & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
+ & & & & & && & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & $\delta$ & \bb{2} & $3 - \delta$ & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
+ \end{tabular}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node (s1) at (0, 0) [circ] {};
+ \node at (0, 0) [left] {$8 = s_1$};
+ \node (s2) at (0, -1) [circ] {};
+ \node at (0, -1) [left] {$10 = s_2$};
+ \node (s3) at (0, -2) [circ] {};
+ \node at (0, -2) [left] {$9 = s_3$};
+
+ \node (d1) at (2, 0) [circ] {};
+ \node at (2, 0) [right] {$d_1 = 6$};
+ \node (d2) at (2, -1) [circ] {};
+ \node at (2, -1) [right] {$d_2 = 5$};
+ \node (d3) at (2, -2) [circ] {};
+ \node at (2, -2) [right] {$d_3 = 8$};
+ \node (d4) at (2, -3) [circ] {};
+ \node at (2, -3) [right] {$d_4 = 8$};
+
+ \draw [->] (s1) -- (d1) node [pos=0.5, above] {\tiny $6 - \delta$};
+ \draw [->] (s1) -- (d2) node [pos=0.5, above] {\tiny $2 + \delta$};
+ \draw [->] (s2) -- (d2) node [pos=0.5, above] {\tiny $3 - \delta$};
+ \draw [->] (s2) -- (d3) node [pos=0.5, above] {\tiny 7};
+ \draw [->] (s3) -- (d3) node [pos=0.5, above] {\tiny 1};
+ \draw [->] (s3) -- (d4) node [pos=0.5, above] {\tiny 8};
+ \draw [mred, dashed, ->] (s2) -- (d1) node [pos=0.3, above] {\tiny $\delta$};
+ \end{tikzpicture}
+ \end{center}
+ The maximum value of $\delta$ we can take is $3$. So we end up with
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|}
+ \cline{2-9}
+ & & & & & & && \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & 3 & \bb{2} & & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
+ & & & & & & & &\\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
+ \end{tabular}
+ \end{center}
+ We re-compute the Lagrange multipliers to obtain
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|}
+ \multicolumn{1}{c}{ }& \bbbb{-5} & \bbbb{-3} & \bbbb{-7} & \bbbb{-9}\\\cline{2-9}
+ & & & & & \bbb7 & \bbb9 \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 0 & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
+ & & & \bbb0 & & & \bbb6 \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ -3 & 3 & \bb{2} & & \bb{7} & 7 & \bb{4} & & \bb{1}\\\cline{2-9}
+ & \bbb0 & \bbb{-2} & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ -5 & & \bb{5} & & \bb{6} & 1 & \bb{2} & 8 & \bb{4}\\\cline{2-9}
+ \end{tabular}
+ \end{center}
+ We see a violation at the bottom right. So we do it again:
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|}
+ \cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & 3 & \bb{2} & & \bb{7} & $7 - \delta$ & \bb{4} & $\delta$ & \bb{1}\\\cline{2-9}
+ & & & && & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & & \bb{5} & & \bb{6} & $1 + \delta$ & \bb{2} & $8 - \delta$ & \bb{4}\\\cline{2-9}
+ \end{tabular}
+ \end{center}
+ The maximum possible value of $\delta$ is 7. So we have
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|}
+ \cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & 3 & \bb{2} & & \bb{7} & & \bb{4} & 7 & \bb{1}\\\cline{2-9}
+ & & & & & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ & & \bb{5} & & \bb{6} & 8 & \bb{2} & 1 & \bb{4}\\\cline{2-9}
+ \end{tabular}
+ \end{center}
+ Calculating the Lagrange multipliers gives
+ \begin{center}
+ \begin{tabular}{c|cc|cc|cc|cc|}
+ \multicolumn{1}{c}{}& \bbbb{-5} & \bbbb{-3} & \bbbb{-2} & \bbbb{-4} \\\cline{2-9}
+ & & & & & \bbb2 & \bbb4 \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 0 & 3 & \bb{5} & 5 & \bb{3} & & \bb{4} & & \bb{6}\\\cline{2-9}
+ & & & \bbb0 & \bbb{-1} & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ -3 & 3 & \bb{2} & & \bb{7} & & \bb{4} & 7 & \bb{1}\\\cline{2-9}
+ & \bbb{5} & \bbb{3} & & & & \\\cline{3-3}\cline{5-5}\cline{7-7}\cline{9-9}
+ 0 & & \bb{5} & & \bb{6} & 8 & \bb{2} & 1 & \bb{4}\\\cline{2-9}
+ \end{tabular}
+ \end{center}
+ No more violations. Finally. So this is the optimal solution.
+\end{eg}
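The optimum found by hand can be double-checked by handing the same data to a generic LP solver. A sketch assuming SciPy is available (the transportation problem is just a linear program with one equality constraint per supplier and per consumer):

```python
import numpy as np
from scipy.optimize import linprog

# Costs, supplies and demands from the worked example above.
c = np.array([[5, 3, 4, 6],
              [2, 7, 4, 1],
              [5, 6, 2, 4]])
s = [8, 10, 9]
d = [6, 5, 8, 8]
n, m = c.shape

# Equality constraints: row i of x sums to s_i, column j sums to d_j.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1
for j in range(m):
    A_eq[n + j, j::m] = 1
res = linprog(c.ravel(), A_eq=A_eq, b_eq=s + d, bounds=(0, None))
# res.fun recovers 63, the cost of the plan found in the final tableau
```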
+
+\subsection{The maximum flow problem}
+Suppose we have a network $(V, E)$ with a single source $1$ and a single sink $n$. There are no costs of transportation, but each edge has a capacity. We want to transport as much stuff from $1$ to $n$ as possible.
+
+We can turn this into a minimum-cost flow problem. We add an edge from $n$ to $1$ with cost $-1$ and infinite capacity. Then the minimum-cost flow will push as much flow from $n$ to $1$ along this edge as possible. So the same amount of stuff will have to flow from $1$ to $n$ through the network.
+
+We can write this problem as
+\begin{center}
+ maximize $\delta$ subject to
+ \[
+ \sum_{j: (i, j) \in E}x_{ij} - \sum_{j: (j, i)\in E}x_{ji} =
+ \begin{cases}
+ \delta & i = 1\\
+ -\delta & i = n\\
+ 0&\text{otherwise}
+ \end{cases}\quad\text{ for each }i
+ \]
+ \[
+ 0 \leq x_{ij}\leq C_{ij}\quad \text{ for each }(i, j)\in E.
+ \]
+\end{center}
+Here $\delta$ is the total flow from $1$ to $n$.
+
+While we can apply our results from the minimum-cost flow problem, we don't have to do so. There are easier ways to solve the problem, using the \emph{max-flow min-cut theorem}.
+
+First we need to define a cut.
+\begin{defi}[Cut]
+ Suppose $G = (V, E)$ with capacities $C_{ij}$ for $(i, j)\in E$. A \emph{cut} of $G$ is a partition of $V$ into two sets.
+
+ For $S\subseteq V$, the \emph{capacity} of the cut $(S, V\setminus S)$ is
+ \[
+ C(S) = \sum_{(i, j)\in (S\times (V\setminus S))\cap E}C_{ij}.
+ \]
+ All this clumsy notation says is that we add up the capacities of all edges from $S$ to $V\setminus S$.
+\end{defi}
+
+Assume $x$ is a feasible flow vector that sends $\delta$ units from $1$ to $n$. For $X, Y\subseteq V$, we define
+\[
+ f_x(X, Y) = \sum_{(i, j)\in (X\times Y)\cap E}x_{ij},
+\]
+i.e.\ the overall amount of flow from $X$ to $Y$.
+
+For any solution $x_{ij}$ and cut $S\subseteq V$ with $1\in S, n\in V\setminus S$, the total flow from $1$ to $n$ can be written as
+\[
+ \delta = \sum_{i\in S}\left(\sum_{j: (i, j)\in E}x_{ij} - \sum_{j: (j, i)\in E}x_{ji}\right).
+\]
+This is true since by flow conservation, for any $i \not= 1$, $\sum\limits_{j: (i, j) \in E}x_{ij} - \sum\limits_{j: (j, i)\in E}x_{ji} = 0$, and for $i = 1$, it is $\delta$. So the sum is $\delta$. Hence
+\begin{align*}
+ \delta &= f_x(S, V) - f_x(V, S)\\
+ &= f_x(S, S) + f_x(S, V\setminus S) - f_x(V\setminus S, S) - f_x(S, S)\\
+ &= f_x(S, V\setminus S) - f_x(V\setminus S, S)\\
+ &\leq f_x(S, V\setminus S)\\
+ &\leq C(S)
+\end{align*}
+This says that the flow through the cut is at most the capacity of the cut, which is obviously true. The less obvious result is that this bound is tight, i.e.\ there is always a cut $S$ such that $\delta = C(S)$.
+
+\begin{thm}[Max-flow min-cut theorem]
+ Let $\delta$ be an optimal solution. Then
+ \[
+ \delta = \min\{C(S): S\subseteq V, 1\in S, n \in V\setminus S\}
+ \]
+\end{thm}
+
+\begin{proof}
+ Consider any feasible flow vector $x$. Call a path $v_0, \cdots, v_k$ an \emph{augmenting path} if the flow along the path can be increased. Formally, it is a path that satisfies
+ \[
+ x_{v_{i - 1}v_i} < C_{v_{i - 1}v_i}\text{ or }x_{v_iv_{i - 1}} > 0
+ \]
+ for $i = 1,\cdots, k$. The first condition says that we have a forward edge that has not hit its capacity, while the second condition says that we have a backwards edge with positive flow. If these conditions are satisfied, we can increase the flow along each forward edge (and decrease the flow along each backwards edge), and the total flow increases.
+
+ Now assume that $x$ is optimal and let
+ \[
+ S = \{1\}\cup \{i\in V: \text{ there exists an augmenting path from $1$ to $i$}\}.
+ \]
+ By definition, there is an augmenting path from $1$ to each vertex in $S$, so we can increase the flow from $1$ to any vertex in $S$. Hence $n \not\in S$ by optimality, i.e.\ $n\in V\setminus S$.
+
+ We have previously shown that
+ \[
+ \delta = f_x(S, V\setminus S) - f_x(V\setminus S, S).
+ \]
+ We now claim that $f_x(V\setminus S, S) = 0$. If it is not $0$, it means that there is a node $v\in V\setminus S$ such that there is flow from $v$ to a vertex $u\in S$. Then we can add that edge to the augmenting path to $u$ to obtain an augmenting path to $v$.
+
+ Also, we must have $f_x(S, V\setminus S) = C(S)$. Or else, we can still send more things to the other side so there is an augmenting path. So we have
+ \[
+ \delta = C(S).\qedhere
+ \]
+\end{proof}
+
+The max-flow min-cut theorem does not tell us \emph{how} to find an optimal flow. Instead, it provides a quick way to confirm that a given flow is optimal.
+
+It turns out that it isn't difficult to find an optimal solution. We simply keep adding flow along augmenting paths until we cannot do so. This is known as the \emph{Ford-Fulkerson algorithm}.
+
+\begin{enumerate}
+ \item Start from a feasible flow $x$, e.g.\ $x = \mathbf{0}$.
+ \item If there is no augmenting path for $x$ from $1$ to $n$, then $x$ is optimal.
+ \item Find an augmenting path for $x$ from $1$ to $n$, and send a maximum amount of flow along it.
+ \item \texttt{GOTO} (ii).
+\end{enumerate}
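The steps above translate almost line by line into code. Below is a sketch, assuming a dict-based graph representation of our own choosing; it finds augmenting paths by breadth-first search, which is the Edmonds--Karp refinement of Ford-Fulkerson.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Ford-Fulkerson: repeatedly push flow along augmenting paths.

    capacity: dict mapping directed edge (u, v) to its capacity C_uv.
    Returns the value of a maximum flow from s to t.
    """
    # Residual capacities: forward edges start full, backward edges at 0.
    residual = dict(capacity)
    for (u, v) in capacity:
        residual.setdefault((v, u), 0)
    nodes = {u for edge in capacity for u in edge}

    flow = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # no augmenting path left: optimal
            return flow
        # Walk back from t to find the bottleneck, then push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= delta
            residual[(v, u)] += delta
        flow += delta

# The network of the following example, with intermediate vertices
# labelled a, b (top) and c, d (bottom); the labels are ours.
C = {('1', 'a'): 5, ('a', 'b'): 1, ('b', 'n'): 1,
     ('1', 'c'): 5, ('c', 'd'): 2, ('d', 'n'): 5, ('a', 'd'): 4}
max_flow(C, '1', 'n')   # 6, matching the cut of capacity 6
```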
+
+\begin{eg}
+ Consider the diagram
+ \begin{center}
+ \begin{tikzpicture}[xscale=2]
+ \node at (0, 0) [circ] {-};
+ \node at (3, 0) [circ] {-};
+ \node at (0, 0) [above] {1};
+ \node at (3, 0) [above] {$n$};
+ \draw [->] (0, 0) -- (1, 1) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, 1) -- (2, 1) node [pos = 0.5, above] {\small 1};
+ \draw [->] (2, 1) -- (3, 0) node [pos = 0.5, above] {\small 1};
+ \draw [->] (0, 0) -- (1, -1) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, -1) -- (2, -1) node [pos = 0.5, above] {\small 2};
+ \draw [->] (2, -1) -- (3, 0) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, 1) -- (2, -1) node [pos = 0.5, above] {\small 4};
+
+ \node at (1, 1) [circ] {};
+ \node at (2, 1) [circ] {};
+ \node at (1, -1) [circ] {};
+ \node at (2, -1) [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ We can keep adding flow until we reach
+ \begin{center}
+ \begin{tikzpicture}[xscale=2]
+ \node at (0, 0) [circ] {-};
+ \node at (3, 0) [circ] {-};
+ \node at (0, 0) [above] {1};
+ \node at (3, 0) [above] {$n$};
+ \draw [->] (0, 0) -- (1, 1) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, 1) -- (2, 1) node [pos = 0.5, above] {\small 1};
+ \draw [->] (2, 1) -- (3, 0) node [pos = 0.5, above] {\small 1};
+ \draw [->] (0, 0) -- (1, -1) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, -1) -- (2, -1) node [pos = 0.5, above] {\small 2};
+ \draw [->] (2, -1) -- (3, 0) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, 1) -- (2, -1) node [pos = 0.5, above] {\small 4};
+
+ \node [red] at (0.5, 0.5) [below] {\small 4};
+ \node [red] at (1.5, 1) [below] {\small 1};
+ \node [red] at (2.5, 0.5) [below] {\small 1};
+ \node [red] at (0.5, -0.5) [below] {\small 2};
+ \node [red] at (1.5, -1) [below] {\small 2};
+ \node [red] at (2.5, -0.5) [below] {\small 5};
+ \node [red] at (1.5, 0) [below] {\small 3};
+
+ \node at (1, 1) [circ] {};
+ \node at (2, 1) [circ] {};
+ \node at (1, -1) [circ] {};
+ \node at (2, -1) [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ (red is flow, black is capacity). We know this is an optimum, since our total flow is $6$, and we can draw a cut with capacity 6:
+ \begin{center}
+ \begin{tikzpicture}[xscale=2]
+ \node at (0, 0) [circ] {-};
+ \node at (3, 0) [circ] {-};
+ \node at (0, 0) [above] {1};
+ \node at (3, 0) [above] {$n$};
+ \draw [->] (0, 0) -- (1, 1) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, 1) -- (2, 1) node [pos = 0.5, above] {\small 1};
+ \draw [->] (2, 1) -- (3, 0) node [pos = 0.5, above] {\small 1};
+ \draw [->] (0, 0) -- (1, -1) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, -1) -- (2, -1) node [pos = 0.5, above] {\small 2};
+ \draw [->] (2, -1) -- (3, 0) node [pos = 0.5, above] {\small 5};
+ \draw [->] (1, 1) -- (2, -1) node [pos = 0.5, above] {\small 4};
+ \draw [dashed, red] (2.3, 1.1) -- (2.3, -1.1);
+
+ \node at (1, 1) [circ] {};
+ \node at (2, 1) [circ] {};
+ \node at (1, -1) [circ] {};
+ \node at (2, -1) [circ] {};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+\end{document}
diff --git a/books/cam/IB_E/variational_principles.tex b/books/cam/IB_E/variational_principles.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6ca9612e891578f6c97bc28c657786739f05b44e
--- /dev/null
+++ b/books/cam/IB_E/variational_principles.tex
@@ -0,0 +1,1652 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Easter}
+\def\nyear {2015}
+\def\nlecturer {P.\ K.\ Townsend}
+\def\ncourse {Variational Principles}
+\def\nofficial {http://www.damtp.cam.ac.uk/user/examples/B6La.pdf}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent Stationary points for functions on $\R^n$. Necessary and sufficient conditions for minima and maxima. Importance of convexity. Variational problems with constraints; method of Lagrange multipliers. The Legendre Transform; need for convexity to ensure invertibility; illustrations from thermodynamics.\hspace*{\fill} [4]
+
+\vspace{5pt}
+\noindent The idea of a functional and a functional derivative. First variation for functionals, Euler-Lagrange equations, for both ordinary and partial differential equations. Use of Lagrange multipliers and multiplier functions.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Fermat's principle; geodesics; least action principles, Lagrange's and Hamilton's equations for particles and fields. Noether theorems and first integrals, including two forms of Noether's theorem for ordinary differential equations (energy and momentum, for example). Interpretation in terms of conservation laws.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Second variation for functionals; associated eigenvalue problem.\hspace*{\fill} [2]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+Consider a light ray travelling towards a mirror and being reflected.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (1, 1) -- (1, -1);
+ \draw (0.9, 0) -- (1.1, 0) node [right] {$z$};
+ \draw [->] (0, 0.7) -- (1, 0) -- (0, -0.7);
+ \end{tikzpicture}
+\end{center}
+We see that the light ray travels towards the mirror, gets reflected at $z$, and hits the (invisible) eye. What determines the path taken? The usual answer would be that the angle of reflection equals the angle of incidence. However, the ancient Greek mathematician Hero of Alexandria provided a different answer: the path of the light minimizes the total distance travelled.
+
+We can assume that light travels in a straight line except when reflected. Then we can characterize the path by a single variable $z$, the point where the light ray hits the mirror. We let $L(z)$ be the length of the path, and we can solve for $z$ by setting $L'(z) = 0$.
+
+This principle sounds reasonable: in the absence of mirrors, light travels in a straight line, which is the shortest path between two points. But is it always true that the \emph{shortest} path is taken? No! We only considered a plane mirror, and this need not hold if we have, say, a spherical mirror. However, it turns out that in all cases, the path taken is a stationary point of the length function, i.e.\ $L'(z) = 0$.
+
+Fermat took this principle further. Assuming that light travels at a constant finite speed, the shortest path is also the path that takes the least time. Fermat's principle thus states that
+\begin{center}
+ Light travels on the path that takes the shortest time.
+\end{center}
+This alternative formulation has the advantage that it applies to refraction as well. Light travels at different speeds in different media. Hence when it passes from one medium to another, it changes direction so that the total time taken is minimized.
+
+We usually define the refractive index $n$ of a medium to be $n = 1/v$, where $v$ is the speed of light in the medium (in units where the speed of light in vacuum is $1$). Then we can write the variational principle as
+\begin{center}
+ minimize $\displaystyle \int_{\text{path}} n\;\d s$,
+\end{center}
+where $\d s$ is the path length element. This is easy to solve if we have two distinct media: since light travels in a straight line within each medium, we can characterize the path by the single point where the light crosses the boundary. In the general case, however, we must consider \emph{any} possible path between the two points. Then we can no longer use ordinary calculus, and need a new tool --- the calculus of variations.
+
+In calculus of variations, the main objective is to find a function $x(t)$ that minimizes an integral $\int f(x)\;\d t$ for some function $f$. For example, we might want to minimize $\int (x^2 + x)\;\d t$. This differs greatly from ordinary minimization problems. In ordinary calculus, we minimize a function $f(x)$ for all possible values of $x\in \R$. However, in calculus of variations, we will be minimizing the integral $\int f(x)\;\d t$ over all possible \emph{functions} $x(t)$.
+
+\section{Multivariate calculus}
+Before we start calculus of variations, we will first have a brief overview of minimization problems in ordinary calculus. Unless otherwise specified, $f$ will be a function $\R^n \to \R$. For convenience, we write the argument of $f$ as $\mathbf{x} = (x_1, \cdots, x_n)$ and $x = |\mathbf{x}|$. We will also assume that $f$ is sufficiently smooth for our purposes.
+
+\subsection{Stationary points}
+The quest of minimization starts with finding stationary points.
+\begin{defi}[Stationary points]
+ \emph{Stationary points} are points in $\R^n$ for which $\nabla f = \mathbf{0}$, i.e.
+ \[
+ \frac{\partial f}{\partial x_1} = \frac{\partial f}{\partial x_2} = \cdots = \frac{\partial f}{\partial x_n} = 0
+ \]
+\end{defi}
+All minima and maxima are stationary points, but knowing that a point is stationary is not sufficient to determine which type it is. To know more about the nature of a stationary point, we Taylor expand $f$ about such a point, which we assume is $\mathbf{0}$ for notational convenience.
+\begin{align*}
+ f(\mathbf{x}) &= f(\mathbf{0}) + \mathbf{x}\cdot \nabla f + \frac{1}{2}\sum_{i, j}x_ix_j\frac{\partial^2 f}{\partial x_i \partial x_j} + O(x^3)\\
+ &= f(\mathbf{0}) + \frac{1}{2}\sum_{i, j}x_ix_j\frac{\partial^2 f}{\partial x_i \partial x_j} + O(x^3),
+\end{align*}
+since $\nabla f = \mathbf{0}$ at a stationary point. The matrix of second derivatives appearing here is so important that we have a name for it:
+\begin{defi}[Hessian matrix]
+ The \emph{Hessian matrix} is
+ \[
+ H_{ij}(\mathbf{x}) = \frac{\partial^2 f}{\partial x_i \partial x_j}
+ \]
+\end{defi}
+Using summation notation, we can write our result as
+\[
+ f(\mathbf{x}) - f(\mathbf{0}) = \frac{1}{2}x_i H_{ij}x_j + O(x^3).
+\]
+Since $H$ is symmetric, it is diagonalizable. Thus after rotating our axes to a suitable coordinate system, we have
+\[
+ H_{ij}' =
+ \begin{pmatrix}
+ \lambda_1 & 0 & \cdots & 0\\
+ 0 & \lambda_2 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \cdots & \lambda_n
+ \end{pmatrix},
+\]
+where $\lambda_i$ are the eigenvalues of $H$. Since $H$ is real symmetric, these are all real. In our new coordinate system, we have
+\[
+ f(\mathbf{x}) - f(\mathbf{0}) = \frac{1}{2}\sum_{i = 1}^n \lambda_i (x_i')^2
+\]
+This is useful information. If all eigenvalues $\lambda_i$ are positive, then $f(\mathbf{x}) - f(\mathbf{0})$ must be positive (for small $\mathbf{x}$). Hence our stationary point is a local minimum. Similarly, if all eigenvalues are negative, then it is a local maximum.
+
+If there are mixed signs, say $\lambda_1 > 0$ and $\lambda_2 < 0$, then $f$ increases in the $x_1$ direction and decreases in the $x_2$ direction. In this case we say we have a saddle point.
+
+If some $\lambda_i = 0$, then we have a \emph{degenerate stationary point}. To identify the nature of such a point, we must look at even higher derivatives.
+
+In the special case where $n = 2$, we do not need to explicitly find the eigenvalues. We know that $\det H$ is the product of the two eigenvalues. Hence if $\det H$ is negative, the eigenvalues have different signs, and we have a saddle. If $\det H$ is positive, then the eigenvalues are of the same sign.
+
+To determine if it is a maximum or minimum, we can look at the trace of $H$, which is the sum of eigenvalues. If $\tr H$ is positive, then we have a local minimum. Otherwise, it is a local maximum.
+
+\begin{eg}
+ Let $f(x, y) = x^3 + y^3 - 3xy$. Then
+ \[
+ \nabla f = 3(x^2 - y, y^2 - x).
+ \]
+ This is zero iff $x^2 = y$ and $y^2 = x$. This is satisfied iff $y^4 = y$. So either $y = 0$, or $y = 1$. So there are two stationary points: $(0, 0)$ and $(1, 1)$.
+
+ The Hessian matrix is
+ \[
+ H =
+ \begin{pmatrix}
+ 6x & -3\\
+ -3 & 6y
+ \end{pmatrix},
+ \]
+ and we have
+ \begin{align*}
+ \det H &= 9(4xy - 1)\\
+ \tr H &= 6(x + y).
+ \end{align*}
+ At $(0, 0)$, $\det H < 0$. So this is a saddle point. At $(1, 1)$, $\det H > 0$, $\tr H > 0$. So this is a local minimum.
+\end{eg}
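As a quick numerical sanity check of this example, the following sketch approximates the Hessian entries by central finite differences and then applies the determinant/trace test. The step size $h$ and the finite-difference scheme are my own choices, not part of the notes.

```python
def f(x, y):
    return x**3 + y**3 - 3*x*y

def hessian(x, y, h=1e-4):
    """Approximate the second partial derivatives by central finite differences."""
    fxx = (f(x + h, y) - 2*f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2*f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4*h**2)
    return fxx, fxy, fyy

def classify(x, y):
    """The 2D determinant/trace test described above."""
    fxx, fxy, fyy = hessian(x, y)
    det = fxx*fyy - fxy**2
    tr = fxx + fyy
    if det < 0:
        return "saddle"
    return "local minimum" if tr > 0 else "local maximum"

print(classify(0, 0))  # saddle
print(classify(1, 1))  # local minimum
```

The finite-difference Hessian agrees with the exact $H$ computed above to within $O(h^2)$, so the classification matches the hand calculation.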
+\subsection{Convex functions}
+\subsubsection{Convexity}
+\emph{Convex functions} are an important class of functions with many nice properties. For example, all stationary points of a convex function are minima, and a convex function attains at most one minimum value. To define convex functions, we need to first define a \emph{convex set}.
+
+\begin{defi}[Convex set]
+ A set $S\subseteq \R^n$ is \emph{convex} if for any distinct $\mathbf{x}, \mathbf{y}\in S$ and $t\in (0, 1)$, we have $(1 - t)\mathbf{x} + t\mathbf{y} \in S$. Alternatively, the line segment joining any two points in $S$ lies completely within $S$.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[shift={(-2, 0)}]
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0, 0) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \draw (-0.5, -0.5) node [circ] {} -- (0.5, -0.5) node [circ] {};
+
+ \node at (0, -1.5) {non-convex};
+ \end{scope}
+
+ \begin{scope}[shift={(2, 0)}]
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \node at (0, -1.5) {convex};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+\begin{defi}[Convex function]
+ A function $f: \R^n \to \R$ is \emph{convex} if
+ \begin{enumerate}
+ \item The domain $D(f)$ is convex
+ \item The function $f$ lies below (or on) all its chords, i.e.
+ \[
+ f((1 - t)\mathbf{x} + t\mathbf{y}) \leq (1 - t)f(\mathbf{x}) + tf(\mathbf{y}) \tag{$*$}
+ \]
+ for all $\mathbf{x}, \mathbf{y}\in D(f), t\in (0, 1)$.
+ \end{enumerate}
+ A function is \emph{strictly convex} if the inequality is strict, i.e.
+ \[
+ f((1 - t)\mathbf{x} + t\mathbf{y}) < (1 - t)f(\mathbf{x}) + tf(\mathbf{y}).
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw(-2, 4) -- (-2, 0) -- (2, 0) -- (2, 4);
+ \draw (-1.3, 0.1) -- (-1.3, -0.1) node [below] {$x$};
+ \draw (1.3, 0.1) -- (1.3, -0.1) node [below] {$y$};
+ \draw (-1.7, 2) parabola bend (-.2, 1) (1.7, 3.3);
+ \draw [dashed] (-1.3, 0) -- (-1.3, 1.53) node [circ] {};
+ \draw [dashed] (1.3, 0) -- (1.3, 2.42) node [circ] {};
+ \draw (-1.3, 1.53) -- (1.3, 2.42);
+ \draw [dashed] (0, 0) node [below] {\tiny $(1 - t)x + ty$} -- (0, 1.975) node [above] {\tiny$(1 - t)f(x) + t f(y)\quad\quad\quad\quad\quad\quad$} node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ A function $f$ is (strictly) concave iff $-f$ is (strictly) convex.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $f(x) = x^2$ is strictly convex.
+ \item $f(x) = |x|$ is convex, but not strictly.
+ \item $f(x) = \frac{1}{x}$ defined on $x > 0$ is strictly convex.
+ \item $f(x) = \frac{1}{x}$ defined on $\R^* = \R \setminus \{0\}$ is \emph{not} convex. For one thing, $\R^*$ is not a convex domain. But even if we extended the definition by setting, say, $f(0) = 0$, the function would still not be convex, as we can see by considering the chord joining $(-1, -1)$ and $(1, 1)$. (In fact, $f(x) = \frac{1}{x}$ defined on $x < 0$ is concave.)
+ \end{enumerate}
+\end{eg}
+\subsubsection{First-order convexity condition}
+While the definition of a convex function seems a bit difficult to work with, if our function is differentiable, it is easy to check if it is convex.
+
+First assume that our function is once differentiable, and we attempt to find a first-order condition for convexity. Suppose that $f$ is convex. For fixed $\mathbf{x}, \mathbf{y}$, we define the function
+\[
+ h(t) = (1 - t)f(\mathbf{x}) + tf(\mathbf{y}) - f((1 - t)\mathbf{x} + t \mathbf{y}).
+\]
+By the definition of convexity of $f$, we must have $h(t) \geq 0$. Also, trivially $h(0) = 0$. So
+\[
+ \frac{h(t) - h(0)}{t} \geq 0
+\]
+for any $t\in (0, 1)$. So
+\[
+ h'(0) \geq 0.
+\]
+On the other hand, we can also differentiate $h$ directly and evaluate at $0$:
+\[
+ h'(0) = f(\mathbf{y}) - f(\mathbf{x}) - (\mathbf{y} - \mathbf{x})\cdot \nabla f (\mathbf{x}).
+\]
+Combining our two results, we know that
+\[
+ f(\mathbf{y}) \geq f(\mathbf{x}) + (\mathbf{y} - \mathbf{x})\cdot \nabla f(\mathbf{x}) \tag{$\dagger$}
+\]
+It is also true that this condition implies convexity, which is an easy result.
+
+How can we interpret this result? The graph of $\mathbf{y} \mapsto f(\mathbf{x}) + (\mathbf{y} - \mathbf{x}) \cdot \nabla f(\mathbf{x})$ is the tangent plane of $f$ at $\mathbf{x}$. Hence this condition says that a convex differentiable function lies above all its tangent planes.
+
+We immediately get the corollary
+\begin{cor}
+ A stationary point of a convex function is a global minimum. There can be more than one global minimum (e.g.\ a constant function), but there is at most one if the function is strictly convex.
+\end{cor}
+
+\begin{proof}
+ Given $\mathbf{x}_0$ such that $\nabla f(\mathbf{x}_0) = \mathbf{0}$, $(\dagger)$ implies that for any $\mathbf{y}$,
+ \[
+ f(\mathbf{y}) \geq f(\mathbf{x}_0) + (\mathbf{y} - \mathbf{x}_0)\cdot \nabla f(\mathbf{x}_0) = f(\mathbf{x}_0). \qedhere
+ \]
+\end{proof}
+We can write our first-order convexity condition in a different way. We can rewrite $(\dagger)$ into the form
+\[
+ (\mathbf{y} - \mathbf{x}) \cdot [\nabla f(\mathbf{y}) - \nabla f(\mathbf{x})] \geq f(\mathbf{x}) - f(\mathbf{y}) - (\mathbf{x} - \mathbf{y}) \cdot \nabla f(\mathbf{y}).
+\]
+By applying $(\dagger)$ to the right hand side (with $\mathbf{x}$ and $\mathbf{y}$ swapped), we know that the right hand side is $\geq 0$. So we have another first-order condition:
+\[
+ (\mathbf{y} - \mathbf{x})\cdot [\nabla f(\mathbf{y}) - \nabla f(\mathbf{x})] \geq 0.
+\]
+It can be shown that this is equivalent to the other conditions.
+
+This condition might seem hard to interpret, but all it says is that $\nabla f$ is monotone non-decreasing. For example, when $n = 1$, the condition states that $(y - x)(f'(y) - f'(x)) \geq 0$, which is the same as saying $f'(y) \geq f'(x)$ whenever $y > x$.
+
+\subsubsection{Second-order convexity condition}
+We have an even nicer condition when the function is twice differentiable. We start with the equation we just obtained:
+\[
+ (\mathbf{y} - \mathbf{x})\cdot [\nabla f(\mathbf{y}) - \nabla f(\mathbf{x})] \geq 0.
+\]
+Write $\mathbf{y} = \mathbf{x} + \mathbf{h}$. Then
+\[
+ \mathbf{h} \cdot (\nabla f(\mathbf{x} + \mathbf{h}) - \nabla f(\mathbf{x})) \geq 0.
+\]
+Expand the left in Taylor series. Using suffix notation, this becomes
+\[
+ h_i [h_j \nabla_j \nabla_i f + O(h^2)] \geq 0.
+\]
+But $\nabla_j \nabla_i f = H_{ij}$. So we have
+\[
+ h_i H_{ij}h_j + O(h^3) \geq 0
+\]
+This must hold for all (small) $\mathbf{h}$, which happens precisely when the Hessian $H$ is positive semi-definite (or simply \emph{positive}), i.e.\ all its eigenvalues are non-negative. If they are in fact all positive, then we say $H$ is positive definite.
+
+Hence convexity implies that the Hessian matrix is positive for all $\mathbf{x}\in D(f)$. If the Hessian is positive definite everywhere, then $f$ is strictly convex (the converse fails: $f(x) = x^4$ is strictly convex even though $f''(0) = 0$).
+
+The converse is also true --- if the Hessian is positive everywhere, then $f$ is convex.
+
+\begin{eg}
+ Let $f(x, y) = \frac{1}{xy}$ for $x, y > 0$. Then the Hessian is
+ \[
+ H = \frac{1}{xy}
+ \begin{pmatrix}
+ \frac{2}{x^2} & \frac{1}{xy}\\
+ \frac{1}{xy} & \frac{2}{y^2}
+ \end{pmatrix}
+ \]
+ The determinant is
+ \[
+ \det H = \frac{3}{x^4y^4} > 0
+ \]
+ and the trace is
+ \[
+ \tr H = \frac{2}{xy}\left(\frac{1}{x^2} + \frac{1}{y^2}\right) > 0.
+ \]
+ So $f$ is convex.
+
+ To conclude that $f$ is convex, we only used the fact that the product $xy$ is positive, not that $x$ and $y$ are individually positive. Could we then relax the domain condition to $xy > 0$? The answer is no: the set $\{(x, y) : xy > 0\}$ is the union of two opposite quadrants, which is not a convex set, so $f$ would no longer be convex on it (even though it is convex on each quadrant separately).
+\end{eg}
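We can also probe the chord inequality for this example directly. The sketch below samples random pairs of points in the quadrant $x, y > 0$ and checks $f$ against its chords; the sampling range, seed and tolerance are arbitrary choices.

```python
import random

def f(x, y):
    return 1.0 / (x * y)

random.seed(0)
ok = True
for _ in range(1000):
    # two random points in the quadrant x, y > 0, and a random t in (0, 1)
    x1, y1 = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    x2, y2 = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    t = random.random()
    lhs = f((1 - t)*x1 + t*x2, (1 - t)*y1 + t*y2)  # f at a point on the segment
    rhs = (1 - t)*f(x1, y1) + t*f(x2, y2)          # the chord
    ok = ok and lhs <= rhs + 1e-9
print(ok)  # True
```

On the larger set $\{xy > 0\}$ this check cannot even be set up for points in opposite quadrants: the segment from $(1, 1)$ to $(-1, -1)$ passes through the origin, which lies outside the domain.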
+
+\subsection{Legendre transform}
+The Legendre transform is an important tool in classical dynamics and thermodynamics. In classical dynamics, it is used to transform between the Lagrangian and the Hamiltonian. In thermodynamics, it is used to transform between the energy, Helmholtz free energy and enthalpy. Despite its importance, the definition is slightly awkward.
+
+Suppose that we have a function $f(x)$, which we'll assume is differentiable. For some reason, we want to transform it into a function of the conjugate variable $p = \frac{\d f}{\d x}$ instead. In most applications to physics, this quantity has a particular physical significance. For example, in classical dynamics, if $L$ is the Lagrangian, then $p = \frac{\partial L}{\partial \dot{x}}$ is the (conjugate) momentum. $p$ also has a context-independent geometric interpretation, which we will explore later. For now, we will assume that $p$ is more interesting than $x$.
+
+Unfortunately, the obvious option $f^*(p) = f(x(p))$ is not the transform we want. There are various reasons for this, but the major reason is that it is \emph{ugly}. It lacks any mathematical elegance, and has almost no nice properties at all.
+
+In particular, we want our $f^*(p)$ to satisfy the property
+\[
+ \frac{\d f^*}{\d p} = x.
+\]
+This says that if $p$ is the conjugate of $x$, then $x$ is the conjugate of $p$. We will soon see how this is useful in the context of thermodynamics.
+
+The symmetry is better revealed if we write in terms of differentials. The differential of the function $f$ is
+\[
+ \d f = \frac{\d f}{\d x}\;\d x = p\;\d x.
+\]
+So we want our $f^*$ to satisfy
+\[
+ \d f^* = x\;\d p.
+\]
+How can we obtain this? From the product rule, we know that
+\[
+ \d (xp) = x\;\d p + p\;\d x.
+\]
+So if we define $f^* = xp - f$ (more explicitly written as $f^*(p) = x(p)p - f(x(p))$), then we obtain the desired relation $\d f^* = x\;\d p$. Alternatively, we can say $\frac{\d f^*}{\d p} = x$.
+
+The actual definition we give will not be exactly this. Instead, we define it in a way that does not assume differentiability. We'll also assume that the function takes the more general form $\R^n \to \R$.
+\begin{defi}[Legendre transform]
+ Given a function $f: \R^n \to \R$, its \emph{Legendre transform} $f^*$ (the ``conjugate'' function) is defined by
+ \[
+ f^*(\mathbf{p}) = \sup_{\mathbf{x}}(\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x})).
+ \]
+ The domain of $f^*$ is the set of $\mathbf{p}\in \R^n$ such that the supremum is finite. $\mathbf{p}$ is known as the conjugate variable.
+\end{defi}
+This relation can also be written as $f^*(p) + f(x) = px$, where $x(p)$ is the value of $x$ that maximizes $px - f(x)$.
+
+To show that this is the same as what we were just talking about, note that the supremum of $\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x})$ is obtained when its derivative is zero, i.e.\ $\mathbf{p} = \nabla f(\mathbf{x})$. In particular, in the 1D case, $f^*(p) = px - f(x)$, where $x$ satisfies $f'(x) = p$. So $\mathbf{p}$ is indeed the derivative of $f$ with respect to $\mathbf{x}$.
+
+From the definition, we can immediately conclude that
+\begin{lemma}
+ $f^*$ is always convex.
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ f^*((1 - t)\mathbf{p} + t\mathbf{q}) &= \sup_\mathbf{x} \big[(1 - t)\mathbf{p}\cdot \mathbf{x} + t\mathbf{q}\cdot \mathbf{x} - f(\mathbf{x})\big]\\
+ &= \sup_\mathbf{x} \big[(1 - t)(\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x})) + t(\mathbf{q}\cdot \mathbf{x} - f(\mathbf{x}))\big]\\
+ &\leq (1 - t)\sup_\mathbf{x} [\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x})] + t\sup_\mathbf{x}[\mathbf{q}\cdot \mathbf{x} - f(\mathbf{x})]\\
+ &= (1 - t)f^*(\mathbf{p}) + tf^*(\mathbf{q})
+ \end{align*}
+ Note that we cannot immediately say that $f^*$ is convex, since we have to show that the domain is convex. But by the above bounds, $f^*((1 - t)\mathbf{p} + t\mathbf{q})$ is bounded by the sum of two finite terms, which is finite. So $(1 - t)\mathbf{p} + t\mathbf{q}$ is also in the domain of $f^*$.
+\end{proof}
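The lemma can be illustrated numerically even when $f$ itself is not convex. In the sketch below the supremum is replaced by a maximum over a finite grid (so this is only an approximation to $f^*$, with ad hoc grid sizes), and the resulting function still passes a midpoint-convexity check in $p$.

```python
def f(x):
    return x**4 - x**2                         # not convex: a double well

xs = [-2 + 4*i/2000 for i in range(2001)]      # grid on [-2, 2]

def f_star(p):
    # sup over x replaced by a max over the grid
    return max(p*x - f(x) for x in xs)

ps = [-1 + 2*i/200 for i in range(201)]        # grid on [-1, 1]
convex = all(
    f_star((ps[i] + ps[i + 2]) / 2)
    <= (f_star(ps[i]) + f_star(ps[i + 2])) / 2 + 1e-9
    for i in range(len(ps) - 2)
)
print(convex)  # True
```

This works for exactly the reason in the proof: a pointwise supremum (here, maximum) of affine functions of $p$ is automatically convex.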
+
+This transformation can be given a geometric interpretation. We will only consider the 1D case, because drawing higher-dimensional graphs is hard. For any fixed $x$, we draw the tangent line of $f$ at the point $x$. If the tangent line has slope $p$, then it meets the $y$-axis at height $-f^*(p)$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -2) -- (0, 3) node [above] {$y$};
+ \draw (-1, 0) parabola (3.5, 3);
+ \draw (-0.4, -1.3) -- +(4, 4.24) node [right] {slope $=p$};
+ \draw [dashed] (-1, -0.9) -- +(5, 0);
+ \node at (0, -0.9) [anchor = north west] {$-f^*(p)$};
+ \draw [black, arrows={latex'-latex'}](2.6, -0.9) -- +(0, 2.76) node [pos=0.5, left] {$px$};
+ \draw [blue, arrows={latex'-latex'}] (2.8, 0) -- +(0, -0.9) node [right, pos=0.5] {$f^*(p) = px - f(x)$};
+ \draw [red, arrows={latex'-latex'}] (2.8, 0) -- +(0, 1.86) node [pos=0.4, right] {$f(x)$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $f(x) = \frac{1}{2}ax^2$ for $a > 0$. Then $p = ax$ at the maximum of $px - f(x)$. So
+ \[
+ f^*(p) = px - f(x) = p\cdot \frac{p}{a} - \frac{1}{2}a\left(\frac{p}{a}\right)^2 = \frac{1}{2a}p^2.
+ \]
+ So the Legendre transform maps a parabola to a parabola.
+ \item $f(v) = -\sqrt{1 - v^2}$ for $|v| < 1$ is a lower semi-circle. We have
+ \[
+ p = f'(v) = \frac{v}{\sqrt{1 - v^2}}
+ \]
+ So
+ \[
+ v = \frac{p}{\sqrt{1 + p^2}}
+ \]
+ and exists for all $p\in \R$. So
+ \[
+ f^*(p) = pv - f(v) = \frac{p^2}{\sqrt{1 + p^2}} + \frac{1}{\sqrt{1 + p^2}} = \sqrt{1 + p^2}.
+ \]
+ A circle gets mapped to a hyperbola.
+ \item Let $f(x) = cx$ for $c > 0$. This is convex but not strictly convex. Then $px - f(x) = (p - c)x$, which is unbounded above unless $p = c$. So the domain of $f^*$ is the single point $\{c\}$, and $f^*(c) = 0$. So a line gets mapped to a point.
+ \end{enumerate}
+\end{eg}
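Examples (i) and (ii) can be cross-checked numerically by again approximating the supremum with a maximum over a fine grid. The grids, sample values of $p$ and tolerances below are ad hoc choices, not part of the theory.

```python
import math

def legendre(f, xs, p):
    """Grid approximation to f*(p) = sup_x (p x - f(x))."""
    return max(p*x - f(x) for x in xs)

# (i) f(x) = a x^2 / 2 should give f*(p) = p^2 / (2a)
a = 3.0
xs = [-10 + 20*i/20000 for i in range(20001)]
for p in (-2.0, 0.0, 1.5):
    assert abs(legendre(lambda x: 0.5*a*x**2, xs, p) - p**2/(2*a)) < 1e-4

# (ii) f(v) = -sqrt(1 - v^2) should give f*(p) = sqrt(1 + p^2)
vs = [-0.9999 + 1.9998*i/20000 for i in range(20001)]
for p in (-3.0, 0.0, 2.0):
    assert abs(legendre(lambda v: -math.sqrt(1 - v*v), vs, p)
               - math.sqrt(1 + p*p)) < 1e-3

print("both transforms match")
```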
+
+Finally, we prove that applying the Legendre transform twice gives the original function.
+\begin{thm}
+ If $f$ is convex, differentiable with Legendre transform $f^*$, then $f^{**} = f$.
+\end{thm}
+
+\begin{proof}
+ We have $f^*(\mathbf{p}) = \mathbf{p}\cdot\mathbf{x}(\mathbf{p}) - f(\mathbf{x}(\mathbf{p}))$ where $\mathbf{x}(\mathbf{p})$ satisfies $\mathbf{p} = \nabla f(\mathbf{x}(\mathbf{p}))$.
+
+ Differentiating with respect to $\mathbf{p}$, we have
+ \begin{align*}
+ \nabla_i f^*(\mathbf{p}) &= x_i + p_j \nabla_i x_j (\mathbf{p}) - \nabla_i x_j(\mathbf{p}) \nabla_j f(\mathbf{x})\\
+ &= x_i + p_j \nabla_i x_j(\mathbf{p}) - \nabla_i x_j(\mathbf{p}) p_j\\
+ &= x_i.
+ \end{align*}
+ So
+ \[
+ \nabla f^*(\mathbf{p}) = \mathbf{x}.
+ \]
+ This means that the conjugate variable of $\mathbf{p}$ is our original $\mathbf{x}$. So
+ \begin{align*}
+ f^{**}(\mathbf{x}) &= (\mathbf{x} \cdot \mathbf{p} - f^*(\mathbf{p}))|_{\mathbf{p} = \mathbf{p}(\mathbf{x})}\\
+ &= \mathbf{x}\cdot \mathbf{p} - (\mathbf{p}\cdot \mathbf{x} - f(\mathbf{x}))\\
+ &= f(\mathbf{x}). \qedhere
+ \end{align*}
+\end{proof}
+Note that strict convexity is \emph{not} required. For example, in our last example above with the straight line, $f^*(p) = 0$ for $p = c$. So $f^{**}(x) = (xp - f^*(p))|_{p = c} = cx = f(x)$.
+
+However, convexity \emph{is} required. If $f^{**} = f$ is true, then $f$ must be convex, since it is a Legendre transform. Hence $f^{**} = f$ cannot be true for non-convex functions.
+
+\subsubsection*{Application to thermodynamics}
+Given a system of a fixed number of particles, the energy of a system is usually given as a function of entropy and volume:
+\[
+ E = E(S, V).
+\]
+We can think of this as a gas inside a piston with variable volume.
+
+There are two things that can affect the energy: we can push in the piston and modify the volume. This corresponds to a work done of $-p\;\d V$, where $p$ is the pressure. Alternatively, we can simply heat it up and create a heat change of $T\;\d S$, where $T$ is the temperature. Then we have
+\[
+ \d E = T\;\d S - p\;\d V.
+\]
+Comparing with the chain rule, we have
+\[
+ \frac{\partial E}{\partial S} = T,\quad -\frac{\partial E}{\partial V} = p
+\]
+However, the entropy is a mysterious quantity no one understands. Instead, we like temperature, defined as $T = \frac{\partial E}{\partial S}$. Hence we use the (negative) Legendre transform to obtain the conjugate function \emph{Helmholtz free energy}.
+\[
+ F(T, V) = \inf_S [E(S, V) - TS] = E(S, V) - S\frac{\partial E}{\partial S} = E - ST.
+\]
+Note that the Helmholtz free energy satisfies
+\[
+ \d F = - S \;\d T - p \;\d V.
+\]
+Just as we could recover $T$ and $p$ from $E$ via taking partial derivatives with respect to $S$ and $V$, we are able to recover $S$ and $p$ from $F$ by taking partial derivatives with respect to $T$ and $V$. This would not be the case if we simply defined $F(T, V) = E(S(T, V), V)$.
+
+If we take the Legendre transform with respect to $V$, we get the enthalpy instead, and if we take the Legendre transform with respect to both, we get the Gibbs free energy.
+
+\subsection{Lagrange multipliers}
+At the beginning, we considered the problem of \emph{unconstrained maximization}. We wanted to maximize $f(x, y)$ where $x, y$ can be any real value. However, sometimes we want to restrict to certain values of $(x, y)$. For example, we might want $x$ and $y$ to satisfy $x + y = 10$.
+
+We take a simple example of a hill. We model it using the function $f(x, y)$ given by the height above the ground. The hilltop would be given by the maximum of $f$, which satisfies
+\[
+ 0 = \d f = \nabla f\cdot \d \mathbf{x}
+\]
+for any (infinitesimal) displacement $\d \mathbf{x}$. So we need
+\[
+ \nabla f = \mathbf{0}.
+\]
+This would be a case of \emph{unconstrained maximization}, since we are considering all possible values of $x$ and $y$.
+
+A problem of \emph{constrained maximization} would be as follows: we have a path $p$ defined by $p(x, y) = 0$. What is the highest point along the path $p$?
+
+We still need $\nabla f \cdot \d \mathbf{x} = 0$, but now $\d \mathbf{x}$ is \emph{not} arbitrary. We only consider the $\d \mathbf{x}$ parallel to the path. Alternatively, $\nabla f$ has to be entirely perpendicular to the path. Since we know that the normal to the path is $\nabla p$, our condition becomes
+\[
+ \nabla f = \lambda \nabla p
+\]
+for some constant $\lambda$, known as the \emph{Lagrange multiplier}. Of course, we still have the constraint $p(x, y) = 0$. So what we have to solve is
+\begin{align*}
+ \nabla f &= \lambda \nabla p\\
+ p &= 0
+\end{align*}
+for the three variables $x, y, \lambda$.
+
+Alternatively, we can change this into a single problem of \emph{unconstrained} extremization. We ask for the stationary points of the function $\phi(x, y, \lambda)$ given by
+\[
+ \phi(x, y, \lambda) = f(x, y) - \lambda p(x, y)
+\]
+Setting the derivatives with respect to $x$ and $y$ to zero gives the condition $\nabla f = \lambda \nabla p$, while setting the derivative with respect to $\lambda$ to zero gives the constraint $p = 0$.
+
+\begin{eg}
+ Find the radius of the smallest circle centered on origin that intersects $y = x^2 - 1$.
+
+ \begin{enumerate}
+ \item First do it the easy way: for a circle of radius $R$ to work, $x^2 + y^2 = R^2$ and $y = x^2 - 1$ must have a solution. So
+ \[
+ (x^2)^2 - x^2 + 1 - R^2 = 0
+ \]
+ and
+ \[
+ x^2 = \frac{1}{2}\pm \sqrt{R^2 - \frac{3}{4}}
+ \]
+ So $R_{\min} = \sqrt{3}/2$.
+
+ \item We can also view this as a variational problem. We want to minimize $f(x, y) = x^2 + y^2$ subject to the constraint $p(x, y) = 0$ for $p(x, y) = y - x^2 + 1$.
+
+ We can solve this directly. We can solve the constraint to obtain $y = x^2 - 1$. Then
+ \[
+ R^2(x) = f(x, y(x)) = (x^2)^2 - x^2 + 1
+ \]
+ We look for stationary points of $R^2$:
+ \[
+ (R^2(x))' = 0 \Rightarrow x\left(x^2 - \frac{1}{2}\right)= 0
+ \]
+ So $x = 0$ and $R = 1$; or $x = \pm \frac{1}{\sqrt{2}}$ and $R = \frac{\sqrt{3}}{2}$. Since $\frac{\sqrt{3}}{2}$ is smaller, this is our minimum.
+
+ \item Finally, we can use Lagrange multipliers. We find stationary points of the function
+ \[
+ \phi(x, y, \lambda) = f(x, y) - \lambda p(x, y) = x^2 + y^2 - \lambda (y - x^2 + 1)
+ \]
+ The partial derivatives give
+ \begin{align*}
+ \frac{\partial \phi}{\partial x} = 0 &\Rightarrow 2x(1 + \lambda) = 0\\
+ \frac{\partial \phi}{\partial y} = 0 &\Rightarrow 2y - \lambda = 0\\
+ \frac{\partial \phi}{\partial \lambda} = 0 &\Rightarrow y - x^2 + 1 = 0
+ \end{align*}
+ The first equation gives us two choices
+ \begin{itemize}
+ \item $x = 0$. Then the third equation gives $y = -1$. So $R = \sqrt{x^2 + y^2} = 1$.
+ \item $\lambda = -1$. So the second equation gives $y = -\frac{1}{2}$ and the third gives $x = \pm \frac{1}{\sqrt{2}}$. Hence $R = \frac{\sqrt{3}}{2}$ is the minimum.
+ \end{itemize}
+ \end{enumerate}
+\end{eg}
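All three methods give $R_{\min} = \sqrt{3}/2$, and a brute-force scan along the parabola confirms it. The scan range $[-2, 2]$ and grid resolution below are arbitrary choices.

```python
import math

# R^2 = x^2 + y^2 with y = x^2 - 1, scanned over a grid of x values
best = min(x*x + (x*x - 1)**2
           for x in (-2 + 4*i/100000 for i in range(100001)))
R_min = math.sqrt(best)
print(abs(R_min - math.sqrt(3)/2) < 1e-6)  # True
```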
+This can be generalized to problems with functions $\R^n \to \R$ using the same logic.
+
+\begin{eg}
+ For $\mathbf{x}\in \R^n$, find the minimum of the quadratic form
+ \[
+ f(\mathbf{x}) = x_i A_{ij}x_j
+ \]
+ on the surface $|\mathbf{x}|^2 = 1$.
+
+ \begin{enumerate}
+ \item The constraint imposes a normalization condition on $\mathbf{x}$. But if we scale up $\mathbf{x}$, $f(\mathbf{x})$ scales accordingly. So if we define
+ \[
+ \Lambda(\mathbf{x}) = \frac{f(\mathbf{x})}{g(\mathbf{x})},\quad g(\mathbf{x}) = |\mathbf{x}|^2,
+ \]
+ the problem is equivalent to minimization of $\Lambda (\mathbf{x})$ without constraint. Then
+ \[
+ \nabla_i \Lambda(\mathbf{x}) = \frac{2}{g}\left[A_{ij} x_j - \frac{f}{g} x_i\right]
+ \]
+ So we need
+ \[
+ A\mathbf{x} = \Lambda \mathbf{x}
+ \]
+ So the extremal values of $\Lambda (\mathbf{x})$ are the eigenvalues of $A$. So $\Lambda_{\min}$ is the lowest eigenvalue.
+
+ This answer is intuitively obvious if we diagonalize $A$.
+
+ \item We can also do it with Lagrange multipliers. We want to find stationary values of
+ \[
+ \phi(\mathbf{x}, \lambda) = f(\mathbf{x}) - \lambda(|\mathbf{x}|^2 - 1).
+ \]
+ So
+ \[
+ \mathbf{0} = \nabla \phi \Rightarrow A_{ij} x_j = \lambda x_i
+ \]
+ Differentiating with respect to $\lambda$ gives
+ \[
+ \frac{\partial \phi}{\partial \lambda} = 0 \Rightarrow |\mathbf{x}|^2 = 1.
+ \]
+ So we get the same set of equations.
+ \end{enumerate}
+\end{eg}
+
+\begin{eg}
+ Find the probability distribution $\{p_1, \cdots, p_n\}$ satisfying $\sum_i p_i = 1$ that maximizes the information entropy
+ \[
+ S = - \sum_{i = 1}^n p_i \ln p_i.
+ \]
+ We look for stationary points of
+ \[
+ \phi(\mathbf{p}, \lambda) = -\sum_{i = 1}^n p_i \ln p_i - \lambda\sum_{i = 1}^n p_i + \lambda.
+ \]
+ We have
+ \[
+ \frac{\partial \phi}{\partial p_i}= - \ln p_i - (1 + \lambda) = 0.
+ \]
+ So
+ \[
+ p_i = e^{-(1 + \lambda)}.
+ \]
+ This is the same for all $i$. So, combined with the constraint $\sum p_i = 1$, we must have $p_i = \frac{1}{n}$.
+\end{eg}
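A quick sampling check of this result (random points on the probability simplex, drawn in an ad hoc way; this illustrates rather than proves the claim): every sampled distribution should have entropy at most $\ln n$, the entropy of the uniform distribution.

```python
import math
import random

n = 5
S_uniform = math.log(n)          # entropy of the uniform distribution p_i = 1/n

random.seed(1)
ok = True
for _ in range(10000):
    w = [random.random() + 1e-12 for _ in range(n)]
    p = [x / sum(w) for x in w]  # a random point on the probability simplex
    S = -sum(x * math.log(x) for x in p)
    ok = ok and S <= S_uniform + 1e-12
print(ok)  # True
```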
+
+\section{Euler-Lagrange equation}
+\subsection{Functional derivatives}
+\begin{defi}[Functional]
+ A \emph{functional} is a function that takes a real-valued function as its argument and returns a real number. We usually write them as $F[x]$ (square brackets), where $x = x(t): \R \to \R$ is a real function. We say that $F[x]$ is a functional of the function $x(t)$.
+\end{defi}
+Of course, we can also have functionals of many functions, e.g.\ $F[x, y]\in \R$ for $x, y: \R \to \R$. We can also have functionals of a function of many variables.
+
+\begin{eg}
+ Given a medium with refractive index $n(\mathbf{x})$, the time taken by a path $\mathbf{x}(t)$ from $\mathbf{x}_0$ to $\mathbf{x}_1$ is given by the functional
+ \[
+ T[\mathbf{x}] = \int_{\mathbf{x}_0}^{\mathbf{x}_1} n(\mathbf{x}) \;\d s.
+ \]
+\end{eg}
+
+While this is a very general definition, in reality there is just one particular class of functionals we care about. Given a function $x(t)$ defined for $\alpha \leq t \leq \beta$, we study functionals of the form
+\[
+ F[x] = \int_\alpha^\beta f(x, \dot{x}, t)\;\d t
+\]
+for some function $f$.
+
+Our objective is to find a stationary point of the functional $F[x]$. To do so, suppose we vary $x(t)$ by a small amount $\delta x(t)$. Then the corresponding change $\delta F[x]$ of $F[x]$ is
+\begin{align*}
+ \delta F[x] &= F[x + \delta x] - F[x]\\
+ &= \int_\alpha ^\beta \big(f(x + \delta x, \dot{x} + \delta \dot{x}, t) - f(x, \dot{x}, t)\big)\;\d t\\
+ \intertext{Taylor expand to obtain}
+ &= \int_\alpha^\beta \left(\delta x\frac{\partial f}{\partial x} + \delta \dot{x} \frac{\partial f}{\partial \dot{x}}\right)\;\d t + O(\delta x^2)\\
+ \intertext{Integrate the second term by parts to obtain}
+ \delta F[x] &= \int_\alpha^\beta\delta x\left[\frac{\partial f}{\partial x} - \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)\right]\;\d t + \left[ \delta x\frac{\partial f}{\partial \dot{x}}\right]_\alpha^\beta.
+\end{align*}
+This doesn't seem like a very helpful equation. Life would be much easier if the last term (known as the \emph{boundary term}) $\left[ \delta x\frac{\partial f}{\partial \dot{x}}\right]_\alpha^\beta$ vanished. Fortunately, for most of the cases we care about, the boundary conditions ensure that it does. Most of the time, we are told that $x$ is fixed at $t = \alpha, \beta$, so that $\delta x(\alpha) = \delta x(\beta) = 0$. In any case, we always choose boundary conditions such that the boundary term vanishes. Then
+\[
+ \delta F[x] = \int_\alpha ^\beta \left(\delta x \frac{\delta F[x]}{\delta x(t)}\right)\;\d t
+\]
+where
+\begin{defi}[Functional derivative]
+ \[
+ \frac{\delta F[x]}{\delta x} = \frac{\partial f}{\partial x} - \frac{\d }{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)
+ \]
+ is the \emph{functional derivative} of $F[x]$.
+\end{defi}
+
+If we want to find a stationary point of $F$, then we need $\frac{\delta F[x]}{\delta x} = 0$. So
+\begin{defi}[Euler-Lagrange equation]
+ The \emph{Euler-Lagrange} equation is
+ \[
+ \frac{\partial f}{\partial x} - \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right) = 0
+ \]
+ for $\alpha \leq t \leq \beta$.
+\end{defi}
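
As a quick check, take the simple illustrative choice $f(x, \dot{x}, t) = \frac{1}{2}\dot{x}^2 - \frac{1}{2}x^2$ (not tied to any particular problem). Then $\frac{\partial f}{\partial x} = -x$ and $\frac{\partial f}{\partial \dot{x}} = \dot{x}$, so the Euler-Lagrange equation reads
\[
 \ddot{x} + x = 0,
\]
whose solutions are $x(t) = A\cos t + B\sin t$. In general, since $\frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)$ involves $\ddot{x}$, the Euler-Lagrange equation is a second-order ODE for $x(t)$.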
+There is an obvious generalization to functionals $F[\mathbf{x}]$ for $\mathbf{x}(t) \in \R^n$:
+\[
+ \frac{\partial f}{\partial x_i} - \frac{\d }{\d t}\left(\frac{\partial f}{\partial \dot{x}_i}\right) = 0 \quad\text{ for all }i.
+\]
+\begin{eg}[Geodesics of a plane]
+ What is the curve $C$ of minimal length between two points $A, B$ in the Euclidean plane? The length is
+ \[
+ L = \int_C \;\d \ell
+ \]
 where $\d \ell = \sqrt{\d x^2 + \d y^2}$.
+
+ There are two ways we can do this:
+ \begin{enumerate}
+ \item We restrict to curves for which $x$ (or $y$) is a good parameter, i.e.\ $y$ can be made a function of $x$. Then
+ \[
+ \d \ell = \sqrt{1 + (y')^2}\;\d x.
+ \]
+ Then
+ \[
+ L[y] = \int_\alpha^\beta \sqrt{1 + (y')^2}\;\d x.
+ \]
+ Since there is no explicit dependence on $y$, we know that
+ \[
+ \frac{\partial f}{\partial y} = 0
+ \]
+ So the Euler-Lagrange equation says that
+ \[
+ \frac{\d}{\d x} \left(\frac{\partial f}{\partial y'}\right) = 0
+ \]
+ We can integrate once to obtain
+ \[
+ \frac{\partial f}{\partial y'} = \text{constant}
+ \]
 This is known as a \emph{first integral}, which will be studied in more detail later.
+
+ Plugging in our value of $f$, we obtain
+ \[
+ \frac{y'}{\sqrt{1 + (y')^2}} = \text{constant}
+ \]
+ This shows that $y'$ must be constant. So $y$ must be a straight line.
+ \item We can get around the restriction to ``good'' curves by choosing an arbitrary parameterization $\mathbf{r} = (x(t), y(t))$ for $t\in [0, 1]$ such that $\mathbf{r}(0) = A$, $\mathbf{r}(1) = B$. So
+ \[
+ \d \ell = \sqrt{\dot x^2 + \dot y^2}\;\d t.
+ \]
+ Then
+ \[
+ L[x, y] = \int_0^1 \sqrt{\dot x^2 + \dot y^2} \;\d t.
+ \]
+ We have, again
+ \[
+ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} = 0.
+ \]
+ So we are left to solve
+ \[
+ \frac{\d }{\d t}\left(\frac{\partial f}{\partial \dot x}\right) = \frac{\d }{\d t}\left(\frac{\partial f}{\partial \dot y}\right) = 0.
+ \]
+ So we obtain
+ \[
+ \frac{\dot x}{\sqrt{\dot x^2 + \dot y^2}} = c,\quad \frac{\dot y}{\sqrt{\dot x^2 + \dot y^2}} = s
+ \]
+ where $c$ and $s$ are constants. While we have two constants, they are not independent. We must have $c^2 + s^2 = 1$. So we let $c = \cos \theta$, $s = \sin \theta$. Then the two conditions are both equivalent to
+ \[
+ (\dot x \sin \theta)^2 = (\dot y\cos \theta)^2.
+ \]
+ Hence
+ \[
+ \dot x \sin \theta = \pm\dot y \cos \theta.
+ \]
 We can choose a $\theta$ such that we have a positive sign. Integrating once, we get
+ \[
+ y\cos \theta = x\sin \theta + A
+ \]
+ for a constant $A$. This is a straight line with slope $\tan \theta$.
+ \end{enumerate}
+\end{eg}
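
It is worth spelling out why the first integral in the first method forces $y'$ to be constant: writing the constant as $k$, the relation $\frac{y'}{\sqrt{1 + (y')^2}} = k$ rearranges to
\[
 (y')^2 = \frac{k^2}{1 - k^2},
\]
so $y'$ is constant (and necessarily $|k| < 1$), and the extremal is indeed a straight line.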
+\subsection{First integrals}
+In our example above, $f$ did not depend on $x$, and hence $\frac{\partial f}{\partial x} = 0$. Then the Euler-Lagrange equations entail
+\[
+ \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right) = 0.
+\]
+We can integrate this to obtain
+\[
+ \frac{\partial f}{\partial \dot{x}} = \text{constant}.
+\]
+We call this the \emph{first integral}. First integrals are important in several ways. The most immediate advantage is that it simplifies the problem a lot. We only have to solve a first-order differential equation, instead of a second-order one. Not needing to differentiate $\frac{\partial f}{\partial \dot{x}}$ also prevents a lot of mess arising from the product and quotient rules.
+
+This has an additional significance when applied to problems in physics. If we have a first integral, then we get $\frac{\partial f}{\partial \dot{x}} =$ constant. This corresponds to a \emph{conserved quantity} of the system. When formulating physics problems as variational problems (as we will do in Chapter~\ref{sec:hamilton}), the conservation of energy and momentum will arise as constants of integration from first integrals.
+
+There is also a more complicated first integral appearing when $f$ does not (explicitly) depend on $t$. To find this out, we have to first consider the total derivative $\frac{\d f}{\d t}$. By the chain rule, we have
+\begin{align*}
+ \frac{\d f}{\d t} &= \frac{\partial f}{\partial t} + \frac{\d x}{\d t}\frac{\partial f}{\partial x} + \frac{\d \dot{x}}{\d t}\frac{\partial f}{\partial \dot{x}} \\
+ &= \frac{\partial f}{\partial t} + \dot{x}\frac{\partial f}{\partial x} + \ddot{x} \frac{\partial f}{\partial \dot{x}}.
+\end{align*}
+On the other hand, the Euler-Lagrange equation says that
+\[
+ \frac{\partial f}{\partial x} = \frac{\d}{\d t} \left(\frac{\partial f}{\partial \dot{x}}\right).
+\]
+Substituting this into our equation for the total derivative gives
+\begin{align*}
+ \frac{\d f}{\d t} &= \frac{\partial f}{\partial t} + \dot{x} \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right) + \ddot{x}\frac{\partial f}{\partial \dot{x}}\\
+ &= \frac{\partial f}{\partial t} + \frac{\d}{\d t}\left(\dot{x}\frac{\partial f}{\partial \dot{x}}\right).
+\end{align*}
+Then
+\[
+ \frac{\d}{\d t}\left(f - \dot{x}\frac{\partial f}{\partial \dot{x}}\right) = \frac{\partial f}{\partial t}.
+\]
+So if $\frac{\partial f}{\partial t} = 0$, then we have the first integral
+\[
+ f - \dot{x}\frac{\partial f}{\partial \dot{x}} = \text{constant}.
+\]
+
+\begin{eg}
+ Consider a light ray travelling in the vertical $xz$ plane inside a medium with refractive index $n(z) = \sqrt{a - bz}$ for positive constants $a, b$. The phase velocity of light is $v = \frac{c}{n}$.
+
 According to Fermat's principle, the path minimizes
+ \[
+ T = \int_A^B \frac{\d \ell}{v}.
+ \]
+ This is equivalent to minimizing the optical path length
+ \[
+ cT = P = \int_A^B n\;\d \ell.
+ \]
+ We specify our path by the function $z(x)$. Then the path element is given by
+ \[
 \d \ell = \sqrt{\d x^2 + \d z^2} = \sqrt{1 + z'(x)^2}\;\d x.
+ \]
+ Then
+ \[
+ P[z] = \int_{x_A}^{x_B}n(z)\sqrt{1 + (z')^2}\;\d x.
+ \]
+ Since this does not depend on $x$, we have the first integral
+ \[
 k = f - z'\frac{\partial f}{\partial z'} = \frac{n(z)}{\sqrt{1 + (z')^2}}
+ \]
+ for an integration constant $k$. Squaring and putting in the value of $n$ gives
+ \[
+ (z')^2 = \frac{b}{k^2}(z_0 - z),
+ \]
+ where $z_0 = (a - k^2)/b$. This is integrable and we obtain
+ \[
+ \frac{\d z}{\sqrt{z_0 - z}} = \pm\frac{\sqrt{b}}{k}\;\d x.
+ \]
+ So
+ \[
 \sqrt{z_0 - z} = \pm \frac{\sqrt{b}}{2k}(x - x_0),
+ \]
+ where $x_0$ is our second integration constant. Square it to obtain
+ \[
+ z = z_0 - \frac{b}{4k^2}(x - x_0)^2,
+ \]
+ which is a parabola.
+\end{eg}
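
As a sanity check, we can verify the first integral directly: with $z = z_0 - \frac{b}{4k^2}(x - x_0)^2$ we have $z' = -\frac{b}{2k^2}(x - x_0)$, and so
\[
 n(z)^2 = a - bz = k^2 + \frac{b^2}{4k^2}(x - x_0)^2 = k^2\big(1 + (z')^2\big),
\]
recovering $n(z)/\sqrt{1 + (z')^2} = k$, as required.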
+
+\begin{eg}[Principle of least action]
+ Mechanics as laid out by Newton was expressed in terms of forces and acceleration. While this is able to describe a lot of phenomena, it is rather unsatisfactory. For one, it is messy and difficult to scale to large systems involving many particles. It is also ugly.
+
+ As a result, mechanics is later reformulated in terms of a variational principle. A quantity known as the \emph{action} is defined for each possible path taken by the particle, and the actual path taken is the one that minimizes the action (technically, it is the path that is a stationary point of the action functional).
+
+ The version we will present here is an old version proposed by Maupertuis and Euler. While it sort-of works, it is still cumbersome to work with. The modern version is from Hamilton who again reformulated the action principle to something that is more powerful and general. This modern version will be discussed more in detail in Chapter~\ref{sec:hamilton}, and for now we will work with the old version first.
+
+ The original definition for the action, as proposed by Maupertuis, was mass $\times$ velocity $\times$ distance. This was given a more precise mathematical definition by Euler. For a particle with constant energy,
+ \[
+ E = \frac{1}{2} mv^2 + U(\mathbf{x}),
+ \]
+ where $v = |\dot{\mathbf{x}}|$. So we have
+ \[
+ mv = \sqrt{2m(E - U(\mathbf{x}))}.
+ \]
+ Hence we can define the action to be
+ \[
+ A = \int_A^B \sqrt{2m(E - U(\mathbf{x}))}\;\d \ell,
+ \]
+ where $\d\ell$ is the path length element. We minimize this to find the trajectory.
+
+ For a particle near the surface of the Earth, under the influence of gravity, $U = mgz$. So we have
+ \[
+ A[z] = \int_A^B \sqrt{2mE - 2m^2gz}\sqrt{1 + (z')^2}\;\d x,
+ \]
+ which is of exactly the same form as the optics problem we just solved. So the result is again a parabola, as expected.
+\end{eg}
+
+\begin{eg}[Brachistochrone]
+ The Brachistochrone problem was one of the earliest problems in the calculus of variations. The name comes from the Greek words \emph{br\'akhistos} (``shortest'') and \emph{khr\'onos} (``time'').
+
+ The question is as follows: suppose we have a bead sliding along a frictionless wire, starting from rest at the origin $A$. What shape of wire minimizes the time for the bead to travel to $B$?
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, 0.5) -- (0, -3) node [below] {$y$};
+
+ \draw [red] (0.86, -1) circle [radius=0.1];
+
+ \node [circ] {};
+ \node [anchor = south east] {$A$};
+
+ \node at (3, -1) [circ] {};
+ \node at (3, -1) [right] {$B$};
+ \draw (0, 0) parabola bend (2, -1.5) (3, -1);
+ \end{tikzpicture}
+ \end{center}
+ The conservation of energy implies that
+ \[
+ \frac{1}{2}mv^2 = mgy.
+ \]
+ So
+ \[
+ v = \sqrt{2gy}
+ \]
+ We want to minimize
+ \[
+ T = \int \frac{\d \ell}{v}.
+ \]
+ So
+ \[
+ T = \frac{1}{\sqrt{2g}}\int \frac{\sqrt{\d x^2 + \d y^2}}{\sqrt{y}} = \frac{1}{\sqrt{2g}}\int \sqrt{\frac{1 + (y')^2}{y}}\;\d x
+ \]
+ Since there is no explicit dependence on $x$, we have the first integral
+ \[
+ f - y'\frac{\partial f}{\partial y'} = \frac{1}{\sqrt{y(1 + (y')^2)}} = \text{constant}
+ \]
+ So the solution is
+ \[
+ y(1 + (y')^2) = c
+ \]
+ for some positive constant $c$.
+
 The solution of this ODE is, in parametric form,
 \begin{align*}
 x &= \frac{c}{2}(\theta - \sin \theta)\\
 y &= \frac{c}{2}(1 - \cos \theta).
 \end{align*}
+ Note that this has $x = y = 0$ at $\theta = 0$. This describes a cycloid.
+\end{eg}
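
We can check this against the first integral: writing the parametric solution as $x = R(\theta - \sin\theta)$, $y = R(1 - \cos\theta)$ for a constant $R$, we get
\[
 y' = \frac{\d y/\d \theta}{\d x/\d \theta} = \frac{\sin\theta}{1 - \cos\theta},\quad 1 + (y')^2 = \frac{(1 - \cos\theta)^2 + \sin^2\theta}{(1 - \cos\theta)^2} = \frac{2}{1 - \cos\theta}.
\]
Hence $y\big(1 + (y')^2\big) = 2R$, which is indeed constant along the curve.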
+
+\subsection{Constrained variation of functionals}
So far, we've considered the problem of finding stationary values of $F[x]$ without any restriction on what $x$ could be. However, sometimes there might be some restrictions on the possible values of $x$. For example, we might have a surface in $\R^3$ defined by $g(\mathbf{x}) = 0$. If we want to find the path of shortest length on the surface (i.e.\ geodesics), then we want to minimize $F[x]$ subject to the constraint $g(\mathbf{x}(t)) = 0$.
+
We can again use Lagrange multipliers. Suppose the constraint can be written as $P[x] = c$ for some functional $P$ and constant $c$. Then our problem is equivalent to finding stationary values (without constraints) of
\[
 \Phi_\lambda [x] = F[x] - \lambda(P[x] - c)
\]
with respect to the function $x(t)$ and the variable $\lambda$.
+
+\begin{eg}[Isoperimetric problem]
+ If we have a string of fixed length, what is the maximum area we can enclose with it?
+
+ We first argue that the region enclosed by the curve is convex. If it is not, we can ``push out'' the curve to increase the area enclosed without changing the length. Assuming this, we can split the curve into two parts:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [red] plot [smooth, tension=1.2] coordinates {(1, 1.5) (1.6, 2.3) (2.6, 2.3) (3.5, 2) (4, 1.5)};
+ \node [red, anchor = south east] at (1.2, 1.75) {$y_2$};
+ \draw [blue] plot [smooth, tension=1.2] coordinates {(1, 1.4) (1.4, 0.8) (2.6, 0.7) (3.6, 0.9) (4, 1.5)};
+ \node [blue, anchor = north west] at (3, 0.8) {$y_1$};
+
+ \draw [dashed] (1, 1.5) -- (1, 0) node [below] {$\alpha$};
+ \draw [dashed] (4, 1.5) -- (4, 0) node [below] {$\beta$};
+
+ \draw [->] (0, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$y$};
+ \end{tikzpicture}
+ \end{center}
+ We have $\d A = [y_2(x) - y_1(x)]\;\d x$. So
+ \[
+ A = \int_\alpha^\beta [y_2(x) - y_1(x)]\;\d x.
+ \]
+ Alternatively,
+ \[
 A[y] = \oint y(x)\;\d x
+ \]
+ and the length is
+ \[
+ L[y] = \oint \d\ell = \oint \sqrt{1 + (y')^2}\;\d x.
+ \]
+ So we look for stationary points of
+ \[
+ \Phi_\lambda [y] = \oint [y(x) - \lambda\sqrt{1 + (y')^2}]\;\d x + \lambda L.
+ \]
+ In this case, we can be sure that our boundary terms vanish since there is no boundary.
+
+ Since there is no explicit dependence on $x$, we obtain the first integral
+ \[
+ f - y'\frac{\partial f}{\partial y'} = \text{constant} = y_0.
+ \]
+ So
+ \[
+ y_0 = y - \lambda\sqrt{1 + (y')^2} + \frac{\lambda (y')^2}{\sqrt{1 + (y')^2}} = y - \frac{\lambda}{\sqrt{1 + (y')^2}}.
+ \]
+ So
+ \begin{align*}
+ (y - y_0)^2 &= \frac{\lambda^2}{1 + (y')^2}\\
+ (y')^2 &= \frac{\lambda^2}{(y - y_0)^2} - 1\\
+ \frac{(y - y_0)y'}{\sqrt{\lambda^2 - (y - y_0)^2}} &= \pm 1.\\
+ \d\left[\sqrt{\lambda^2 - (y - y_0)^2}\pm x\right] &= 0.
+ \end{align*}
+ So we have
+ \[
+ \lambda^2 - (y - y_0)^2 = (x - x_0)^2,
+ \]
+ or
+ \[
+ (x - x_0)^2 + (y - y_0)^2 = \lambda^2.
+ \]
+ This is a circle of radius $\lambda$. Since the perimeter of this circle will be $2\pi \lambda$, we must have $\lambda = L/(2\pi)$. So the maximum area is $\pi\lambda^2 = L^2/(4\pi)$.
+\end{eg}
+
+\begin{eg}[Sturm-Liouville problem]
+ The \emph{Sturm-Liouville problem} is a very general class of problems. We will develop some very general theory about these problems without going into specific examples. It can be formulated as follows: let $\rho(x)$, $\sigma(x)$ and $w(x)$ be real functions of $x$ defined on $\alpha \leq x \leq \beta$. We will consider the special case where $\rho$ and $w$ are positive on $\alpha < x < \beta$. Our objective is to find stationary points of the functional
+ \[
+ F[y] = \int_\alpha^\beta (\rho(x)(y')^2 + \sigma(x)y^2)\;\d x
+ \]
+ subject to the condition
+ \[
+ G[y] = \int_\alpha^\beta w(x)y^2\;\d x = 1.
+ \]
+ Using the Euler-Lagrange equation, the functional derivatives of $F$ and $G$ are
+ \begin{align*}
+ \frac{\delta F[y]}{\delta y} &= 2\big(-(\rho y')' + \sigma y\big)\\
+ \frac{\delta G[y]}{\delta y} &= 2 (wy).
+ \end{align*}
+ So the Euler-Lagrange equation of $\Phi_\lambda [y] = F[y] - \lambda(G[y] - 1)$ is
+ \[
+ -(\rho y')' + \sigma y - \lambda wy = 0.
+ \]
+ We can write this as the eigenvalue problem
+ \[
+ \mathcal{L}y = \lambda wy.
+ \]
+ where
+ \[
+ \mathcal{L} = -\frac{\d}{\d x}\left(\rho\frac{\d}{\d x}\right) + \sigma
+ \]
+ is the \emph{Sturm-Liouville operator}. We call this a \emph{Sturm-Liouville eigenvalue} problem. $w$ is called the \emph{weight function}.
+
+ We can view this problem in a different way. Notice that $\mathcal{L} y = \lambda wy$ is linear in $y$. Hence if $y$ is a solution, then so is $Ay$. But if $G[y] = 1$, then $G[Ay] = A^2$. Hence the condition $G[y] = 1$ is simply a normalization condition. We can get around this problem by asking for the minimum of the functional
+ \[
+ \Lambda [y] = \frac{F[y]}{G[y]}
+ \]
+ instead. It turns out that this $\Lambda$ has some significance. To minimize $\Lambda$, we cannot apply the Euler-Lagrange equations, since $\Lambda$ is not of the form of an integral. However, we can try to vary it directly:
+ \[
+ \delta\Lambda = \frac{1}{G}\delta F - \frac{F}{G^2} \delta G = \frac{1}{G}(\delta F - \Lambda \delta G).
+ \]
+ When $\Lambda$ is minimized, we have
+ \[
+ \delta \Lambda = 0 \quad\Leftrightarrow\quad \frac{\delta F}{\delta y} = \Lambda \frac{\delta G}{\delta y}\quad \Leftrightarrow\quad \mathcal{L} y = \Lambda wy.
+ \]
+ So at stationary values of $\Lambda[y]$, $\Lambda$ is the associated Sturm-Liouville eigenvalue.
+\end{eg}
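
Although we will not need specific examples later, a concrete instance is worth seeing: take $\rho = 1$, $\sigma = 0$ and $w = 1$ on $[0, \pi]$, with boundary conditions $y(0) = y(\pi) = 0$. The eigenvalue problem becomes
\[
 -y'' = \lambda y,\quad y(0) = y(\pi) = 0,
\]
with eigenfunctions $y_n(x) = \sin nx$ and eigenvalues $\lambda_n = n^2$ for $n = 1, 2, \ldots$. The minimum of $\Lambda[y]$ over such $y$ is the lowest eigenvalue $\lambda_1 = 1$, attained at $y = \sin x$.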
+
+\begin{eg}[Geodesics]
+ Suppose that we have a surface in $\R^3$ defined by $g(\mathbf{x}) = 0$, and we want to find the path of shortest distance between two points on the surface. These paths are known as \emph{geodesics}.
+
 One possible approach is to parametrize the surface, solving $g(\mathbf{x}) = 0$ directly. For example, on the unit sphere we can take $x = \sin\theta \cos\phi$, $y = \sin\theta\sin\phi$, $z = \cos\theta$. Then the total length of a path would be given by
+ \[
 D[\theta, \phi] = \int_A^B \sqrt{\d \theta^2 + \sin^2 \theta \,\d \phi^2}.
+ \]
+ We then vary $\theta$ and $\phi$ to minimize $D$ and obtain a geodesic.
+
+ Alternatively, we can impose the condition $g(\mathbf{x}(t)) = 0$ with a Lagrange multiplier. However, since we want the constraint to be satisfied for \emph{all} $t$, we need a Lagrange multiplier \emph{function} $\lambda(t)$. Then our problem would be to find stationary values of
+ \[
 \Phi[\mathbf{x}, \lambda] = \int_0^1 \big(|\dot{\mathbf{x}}| - \lambda(t) g(\mathbf{x}(t))\big)\;\d t.
+ \]
+\end{eg}
+\section{Hamilton's principle}
+\label{sec:hamilton}
+As mentioned before, Lagrange and Hamilton reformulated Newtonian dynamics into a much more robust system based on an action principle.
+
The first important concept is the idea of a \emph{configuration space}. This is the space of \emph{generalized coordinates} $\xi(t)$ that specify the configuration of the system. The idea is to capture \emph{all} information about the system in one single vector of coordinates.
+
+In the simplest case of a single free particle, these generalized coordinates would simply be the coordinates of the position of the particle. If we have two particles given by positions $\mathbf{x}(t) = (x_1, x_2, x_3)$ and $\mathbf{y}(t) = (y_1, y_2, y_3)$, our generalized coordinates might be $\xi(t) = (x_1, x_2, x_3, y_1, y_2, y_3)$. In general, if we have $N$ different free particles, the configuration space has $3N$ dimensions.
+
The important thing is that the \emph{generalized} coordinates need not be just the usual Cartesian coordinates. If we are describing a pendulum in a plane, we do not need to specify the $x$ and $y$ coordinates of the mass. Instead, the system can be described by just the angle $\theta$ between the mass and the vertical. So the generalized coordinate is just $\xi(t) = \theta(t)$. This is much more natural to work with and avoids the hassle of imposing constraints on $x$ and $y$.
+
+\subsection{The Lagrangian}
+The concept of generalized coordinates was first introduced by Lagrange in 1788. He then showed that $\xi(t)$ obeys certain complicated ODEs which are determined by the kinetic energy and the potential energy.
+
+In the 1830s, Hamilton made Lagrange's mechanics much more pleasant. He showed that the solutions of these ODEs are extremal points of a new ``action'',
+\[
+ S[\xi] = \int L\;\d t
+\]
+where
+\[
+ L = T - V
+\]
+is the \emph{Lagrangian}, with $T$ the kinetic energy and $V$ the potential energy.
+
+\begin{law}[Hamilton's principle]
+ The actual path $\xi(t)$ taken by a particle is the path that makes the action $S$ stationary.
+\end{law}
+
Note that $S$ has dimensions $ML^2T^{-1}$, which is the same as the 18th century action (and Planck's constant).
+
+\begin{eg}
+ Suppose we have 1 particle in Euclidean 3-space. The configuration space is simply the coordinates of the particle in space. We can choose Cartesian coordinates $\mathbf{x}$. Then
+ \[
+ T = \frac{1}{2}m|\dot{\mathbf{x}}|^2,\quad V = V(\mathbf{x}, t)
+ \]
+ and
+ \[
+ S[\mathbf{x}] = \int_{t_A}^{t_B}\left(\frac{1}{2}m|\dot{\mathbf{x}}|^2 - V(\mathbf{x}, t)\right)\;\d t.
+ \]
+ Then the Lagrangian is
+ \[
+ L(\mathbf{x}, \dot{\mathbf{x}}, t) = \frac{1}{2}m|\dot{\mathbf{x}}|^2 - V(\mathbf{x}, t)
+ \]
+ We apply the Euler-Lagrange equations to obtain
+ \[
+ 0 = \frac{\d}{\d t}\left(\frac{\partial L}{\partial \dot{\mathbf{x}}}\right) - \frac{\partial L}{\partial \mathbf{x}} = m\ddot{\mathbf{x}} + \nabla V.
+ \]
+ So
+ \[
+ m\ddot{\mathbf{x}} = -\nabla V
+ \]
+ This is Newton's law $\mathbf{F} = m\mathbf{a}$ with $\mathbf{F} = -\nabla V$. This shows that Lagrangian mechanics is ``the same'' as Newton's law. However, Lagrangian mechanics has the advantage that it does not care what coordinates you use, while Newton's law requires an inertial frame of reference.
+\end{eg}
+Lagrangian mechanics applies even when $V$ is time-dependent. However, if $V$ is independent of time, then so is $L$. Then we can obtain a first integral.
+
+As before, the chain rule gives
+\begin{align*}
 \frac{\d L}{\d t} &= \frac{\partial L}{\partial t} + \dot{\mathbf{x}}\cdot \frac{\partial L}{\partial \mathbf{x}} + \ddot{\mathbf{x}} \cdot \frac{\partial L}{\partial \dot{\mathbf{x}}}\\
+ &= \frac{\partial L}{\partial t} + \dot{\mathbf{x}}\cdot \underbrace{\left(\frac{\partial L}{\partial \mathbf{x}} - \frac{\d}{\d t}\left(\frac{\partial L}{\partial \dot{\mathbf{x}}}\right)\right)}_{\displaystyle\frac{\delta S}{\delta x} = 0} +
+ \underbrace{\dot{\mathbf{x}} \cdot \frac{\d}{\d t}\left(\frac{\partial L}{\partial \dot{\mathbf{x}}}\right) + \ddot{\mathbf{x}}\cdot \frac{\partial L}{\partial \dot{\mathbf{x}}}}_{\displaystyle\frac{\d}{\d t}\left(\dot{\mathbf{x}}\cdot \frac{\partial L}{\partial \dot{\mathbf{x}}}\right)}
+\end{align*}
+So we have
+\[
+ \frac{\d}{\d t}\left(L - \dot{\mathbf{x}}\cdot \frac{\partial L}{\partial \dot{\mathbf{x}}}\right) = \frac{\partial L}{\partial t}.
+\]
+If $\frac{\partial L}{\partial t} = 0$, then
+\[
+ \dot{\mathbf{x}}\cdot \frac{\partial L}{\partial \dot {\mathbf{x}}} - L = E
+\]
+for some constant $E$. For example, for one particle,
+\[
+ E = m|\dot{\mathbf{x}}|^2 - \frac{1}{2}m|\dot{\mathbf{x}}|^2 + V = T + V = \text{total energy}.
+\]
+
+\begin{eg}
+ Consider a central force field $\mathbf{F} = -\nabla V$, where $V = V(r)$ is independent of time. We use spherical polar coordinates $(r, \theta, \phi)$, where
+ \begin{align*}
+ x &= r\sin \theta \cos \phi\\
+ y &= r\sin \theta \sin \phi\\
+ z &= r\cos \theta.
+ \end{align*}
+ So
+ \[
+ T = \frac{1}{2}m|\dot{\mathbf{x}}|^2 = \frac{1}{2}m\left(\dot{r}^2 + r^2(\dot{\theta}^2 + \sin^2 \theta \dot{\phi}^2)\right)
+ \]
+ So
+ \[
+ L = \frac{1}{2}m\dot{r}^2 + \frac{1}{2}mr^2\big(\dot{\theta}^2 + \sin^2\theta\dot{\phi}^2\big) - V(r).
+ \]
+ We'll use the fact that motion is planar (a consequence of angular momentum conservation). So wlog $\theta = \frac{\pi}{2}$. Then
+ \[
+ L = \frac{1}{2}m\dot{r}^2 + \frac{1}{2}mr^2 \dot{\phi}^2 - V(r).
+ \]
 Then the Euler-Lagrange equations give
+ \begin{align*}
+ m\ddot{r} - mr\dot{\phi}^2 + V'(r) &= 0\\
+ \frac{\d}{\d t}\left(mr^2 \dot{\phi}\right) &= 0.
+ \end{align*}
+ From the second equation, we see that $r^2 \dot\phi = h$ is a constant (angular momentum per unit mass). Then $\dot{\phi} = h/r^2$. So
+ \[
+ m\ddot{r} - \frac{mh^2}{r^3} + V'(r) = 0.
+ \]
+ If we let
+ \[
+ V_{\mathrm{eff}} = V(r) + \frac{mh^2}{2r^2}
+ \]
+ be the \emph{effective potential}, then we have
+ \[
+ m\ddot{r} = -V_{\mathrm{eff}}'(r).
+ \]
 For example, in a gravitational field, $V(r) = -\frac{GMm}{r}$. Then
+ \[
+ V_{\mathrm{eff}} = m\left(-\frac{GM}{r} + \frac{h^2}{2r^2}\right).
+ \]
+\end{eg}
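
As a quick application, circular orbits correspond to stationary points of this effective potential:
\[
 V_{\mathrm{eff}}'(r) = m\left(\frac{GM}{r^2} - \frac{h^2}{r^3}\right) = 0 \quad\Leftrightarrow\quad r = \frac{h^2}{GM},
\]
the familiar radius of a circular orbit with angular momentum per unit mass $h$.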
+
+\subsection{The Hamiltonian}
+In 1833, Hamilton took Lagrangian mechanics further and formulated \emph{Hamiltonian mechanics}. The idea is to abandon $\dot{\mathbf{x}}$ and use the conjugate momentum $\mathbf{p} = \frac{\partial L}{\partial \dot{\mathbf{x}}}$ instead. Of course, this involves taking the Legendre transform of the Lagrangian to obtain the Hamiltonian.
+
+\begin{defi}[Hamiltonian]
+ The \emph{Hamiltonian} of a system is the Legendre transform of the Lagrangian:
+ \[
+ H(\mathbf{x}, \mathbf{p}) = \mathbf{p}\cdot \dot{\mathbf{x}} - L(\mathbf{x}, \dot{\mathbf{x}}),
+ \]
+ where $\dot{\mathbf{x}}$ is a function of $\mathbf{p}$ that is the solution to $\mathbf{p} = \frac{\partial L}{\partial \dot{\mathbf{x}}}$.
+
+ $\mathbf{p}$ is the \emph{conjugate momentum} of $\mathbf{x}$. The space containing the variables $\mathbf{x}, \mathbf{p}$ is known as the \emph{phase space}.
+\end{defi}
+
Since the Legendre transform is its own inverse, the Lagrangian is the Legendre transform of the Hamiltonian with respect to $\mathbf{p}$. So
+\[
+ L = \mathbf{p}\cdot \dot{\mathbf{x}} - H(\mathbf{x}, \mathbf{p})
+\]
+with
+\[
+ \dot{\mathbf{x}} = \frac{\partial H}{\partial \mathbf{p}}.
+\]
+Hence we can write the action using the Hamiltonian as
+\[
+ S[\mathbf{x}, \mathbf{p}] = \int (\mathbf{p}\cdot \dot{\mathbf{x}} - H(\mathbf{x}, \mathbf{p}))\;\d t.
+\]
+This is the \emph{phase-space form} of the action. The Euler-Lagrange equations for these are
+\[
+ \dot{\mathbf{x}} = \frac{\partial H}{\partial \mathbf{p}}, \quad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{x}}
+\]
Using the Hamiltonian, the Euler-Lagrange equations put $\mathbf{x}$ and $\mathbf{p}$ on a much more equal footing, and the equations are more symmetric. There are also many useful concepts arising from the Hamiltonian, which are explored in much more depth in the II Classical Dynamics course.
+
+So what does the Hamiltonian look like? Consider the case of a single particle. The Lagrangian is given by
+\[
+ L = \frac{1}{2}m|\dot{\mathbf{x}}|^2 - V(\mathbf{x}, t).
+\]
+Then the conjugate momentum is
+\[
+ \mathbf{p} = \frac{\partial L}{\partial \dot{\mathbf{x}}} = m\dot{\mathbf{x}},
+\]
+which happens to coincide with the usual definition of the momentum. However, the conjugate momentum is often something more interesting when we use generalized coordinates. For example, in polar coordinates, the conjugate momentum of the angle is the angular momentum.
+
+Substituting this into the Hamiltonian, we obtain
+\begin{align*}
+ H(\mathbf{x}, \mathbf{p}) &= \mathbf{p}\cdot \frac{\mathbf{p}}{m} - \frac{1}{2}m\left(\frac{\mathbf{p}}{m}\right)^2 + V(\mathbf{x}, t)\\
+ &= \frac{1}{2m}|\mathbf{p}|^2 + V.
+\end{align*}
+So $H$ is the total energy, but expressed in terms of $\mathbf{x}, \mathbf{p}$, not $\mathbf{x}, \dot{\mathbf{x}}$.
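
For example, for a one-dimensional harmonic oscillator with $L = \frac{1}{2}m\dot{x}^2 - \frac{1}{2}m\omega^2 x^2$, we get $p = m\dot{x}$ and
\[
 H(x, p) = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2.
\]
Hamilton's equations give
\[
 \dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m},\quad \dot{p} = -\frac{\partial H}{\partial x} = -m\omega^2 x,
\]
which combine to the familiar $\ddot{x} = -\omega^2 x$.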
+\subsection{Symmetries and Noether's theorem}
+Given
+\[
+ F[x]= \int_\alpha^\beta f(x, \dot{x}, t)\;\d t,
+\]
+suppose we change variables by the transformation $t \mapsto t^*(t)$ and $x\mapsto x^*(t^*)$. Then we have a new independent variable and a new function. This gives
+\[
+ F[x] \mapsto F^* [x^*] = \int_{\alpha^*}^{\beta^*} f(x^*, \dot{x}^*, t^*)\;\d t^*
+\]
+with $\alpha^* = t^*(\alpha)$ and $\beta^* = t^*(\beta)$.
+
+There are some transformations that are particularly interesting:
+\begin{defi}[Symmetry]
+ If $F^*[x^*] = F[x]$ for all $x$, $\alpha$ and $\beta$, then the transformation $*$ is a \emph{symmetry}.
+\end{defi}
+
This transformation could be a translation in time or space, a rotation, or even more fancy stuff. The exact symmetries $F$ has depend on the form of $f$. For example, if $f$ only depends on the magnitudes of $x$, $\dot{x}$ and $t$, then rotation of space will be a symmetry.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Consider the transformation $t \mapsto t$ and $x \mapsto x + \varepsilon$ for some small $\varepsilon$. Then
+ \[
+ F^*[x^*] = \int_\alpha^\beta f(x + \varepsilon, \dot{x}, t)\;\d x = \int_\alpha^\beta \left(f(x, \dot{x}, t) + \varepsilon \frac{\partial f}{\partial x}\right)\;\d x
+ \]
+ by the chain rule. Hence this transformation is a symmetry if $\frac{\partial f}{\partial x} = 0$.
+
+ However, we also know that if $\frac{\partial f}{\partial x} = 0$, then we have the first integral
+ \[
+ \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right) = 0.
+ \]
+ So $\frac{\partial f}{\partial \dot{x}}$ is a conserved quantity.
+ \item Consider the transformation $t \mapsto t - \varepsilon$. For the sake of sanity, we will also transform $x\mapsto x^*$ such that $x^*(t^*) = x(t)$. Then
+ \[
+ F^*[x^*] = \int_\alpha^\beta f(x, \dot{x}, t - \varepsilon)\;\d t = \int_\alpha^\beta \left(f(x, \dot{x}, t) - \varepsilon \frac{\partial f}{\partial t}\right)\;\d t.
+ \]
+ Hence this is a symmetry if $\frac{\partial f}{\partial t} = 0$.
+
+ We also know that if $\frac{\partial f}{\partial t} = 0$ is true, then we obtain the first integral
+ \[
+ \frac{\d}{\d t}\left(f - \dot{x}\frac{\partial f}{\partial \dot{x}}\right) = 0
+ \]
+ So we have a conserved quantity $f - \dot{x}\frac{\partial f}{\partial \dot{x}}$.
+ \end{enumerate}
+\end{eg}
+We see that for each simple symmetry we have above, we can obtain a first integral, which then gives a constant of motion. Noether's theorem is a powerful generalization of this.
+
+\begin{thm}[Noether's theorem]
+ For every continuous symmetry of $F[x]$, the solutions (i.e.\ the stationary points of $F[x]$) will have a corresponding conserved quantity.
+\end{thm}
+What does ``continuous symmetry'' mean? Intuitively, it is a symmetry we can do ``a bit of''. For example, rotation is a continuous symmetry, since we can do a bit of rotation. However, reflection is not, since we cannot reflect by ``a bit''. We either reflect or we do not.
+
+Note that continuity is essential. For example, if $f$ is quadratic in $x$ and $\dot{x}$, then $x\mapsto -x$ will be a symmetry. But since it is not continuous, there won't be a conserved quantity.
+
+Since the theorem requires a continuous symmetry, we can just consider infinitesimally small symmetries and ignore second-order terms. Almost every equation will have some $O(\varepsilon^2)$ that we will not write out.
+
+We will consider symmetries that involve only the $x$ variable. Up to first order, we can write the symmetry as
+\[
+ t \mapsto t,\quad x(t)\mapsto x(t) + \varepsilon h(t),
+\]
+for some $h(t)$ representing the symmetry transformation (and $\varepsilon$ a small number).
+
By saying that this transformation is a symmetry, we mean that if we pick $\varepsilon$ to be any (small) constant number, the functional $F[x]$ does not change, i.e.\ $\delta F = 0$.
+
On the other hand, since $x(t)$ is a stationary point of $F[x]$, we know that if $\varepsilon$ is non-constant but vanishes at the end-points, then $\delta F = 0$ as well. We will combine these two pieces of information to find a conserved quantity of the system.
+
+For the moment, we do not assume anything about $\varepsilon$ and see what happens to $F[x]$. Under the transformation, the change in $F[x]$ is given by
+\begin{align*}
+ \delta F &= \int \big(f(x + \varepsilon h, \dot{x} + \varepsilon \dot{h} + \dot{\varepsilon} h, t) - f(x, \dot{x}, t)\big)\;\d t\\
+ &= \int\left(\frac{\partial f}{\partial x}\varepsilon h + \frac{\partial f}{\partial \dot{x}}\varepsilon \dot{h} + \frac{\partial f}{\partial \dot{x}}\dot{\varepsilon} h\right)\;\d t\\
+ &= \int \varepsilon\left(\frac{\partial f}{\partial x}h + \frac{\partial f}{\partial \dot{x}}\dot{h}\right)\;\d t + \int \dot{\varepsilon} \left(\frac{\partial f}{\partial \dot{x}}h\right)\;\d t.
+\end{align*}
+First consider the case where $\varepsilon$ is a constant. Then the second integral vanishes. So we obtain
+\[
+ \varepsilon \int \left(\frac{\partial f}{\partial x}h + \frac{\partial f}{\partial\dot{x}}\dot{h}\right)\;\d t = 0
+\]
+This requires that
+\[
+ \frac{\partial f}{\partial x}h + \frac{\partial f}{\partial\dot{x}}\dot{h} = 0
+\]
+Hence we know that
+\[
+ \delta F = \int\dot{\varepsilon}\left(\frac{\partial f}{\partial \dot{x}}h\right)\;\d t.
+\]
+Then consider a variable $\varepsilon$ that is non-constant but vanishes at end-points. Then we know that since $x$ is a solution, we must have $\delta F = 0$. So we get
+\[
+ \int\dot{\varepsilon}\left(\frac{\partial f}{\partial \dot{x}}h\right)\;\d t = 0.
+\]
+We can integrate by parts to obtain
+\[
+ \int \varepsilon \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}h\right)\;\d t = 0.
+\]
+for \emph{any} $\varepsilon$ that vanishes at end-points. Hence we must have
+\[
+ \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}h\right) = 0.
+\]
+So $\frac{\partial f}{\partial \dot{x}}h$ is a conserved quantity.
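
For example, take a free particle in the plane, $f = \frac{1}{2}m(\dot{x}^2 + \dot{y}^2)$. An infinitesimal rotation $(x, y) \mapsto (x - \varepsilon y, y + \varepsilon x)$ is a symmetry, corresponding to $\mathbf{h} = (-y, x)$. By the obvious two-variable analogue of the result above, the conserved quantity is
\[
 \frac{\partial f}{\partial \dot{x}}(-y) + \frac{\partial f}{\partial \dot{y}}\,x = m(x\dot{y} - y\dot{x}),
\]
which is the angular momentum about the origin.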
+
+Obviously, not all symmetries just involve the $x$ variable. For example, we might have a time translation $t \mapsto t - \varepsilon$. However, we can encode this as a transformation of the $x$ variable only, as $x(t) \mapsto x(t - \varepsilon)$.
+
+In general, to find the conserved quantity associated with the symmetry $x(t)\mapsto x(t) + \varepsilon h(t)$, we find the change $\delta F$ assuming that $\varepsilon$ is a function of time as opposed to a constant. Then the coefficient of $\dot{\varepsilon}$ is the conserved quantity.
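As a sanity check of this recipe, here is a short Python sketch (outside the notes; the quadratic interaction potential, unit masses and initial data are illustrative choices). It integrates two particles coupled through $V(x_1 - x_2)$, a system invariant under the simultaneous translation $x_i \mapsto x_i + \varepsilon$, and confirms that the associated conserved quantity, the total momentum $\dot{x}_1 + \dot{x}_2$, stays constant along the trajectory.

```python
# Sketch: two unit masses coupled by V(x1 - x2) = (x1 - x2)**2 / 2; the
# Lagrangian is invariant under x_i -> x_i + eps, and the Noether recipe
# gives the total momentum v1 + v2 as the conserved quantity.

def accel(x1, x2):
    # Euler-Lagrange equations: xddot_i = -dV/dx_i
    return -(x1 - x2), (x1 - x2)

def total_momentum_history(T=5.0, n=20000):
    x1, x2, v1, v2 = 0.0, 1.0, 0.3, -0.1
    dt = T / n
    history = [v1 + v2]
    for _ in range(n):
        a1, a2 = accel(x1, x2)           # velocity Verlet integrator
        v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2
        x1 += dt * v1; x2 += dt * v2
        a1, a2 = accel(x1, x2)
        v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2
        history.append(v1 + v2)
    return history

p = total_momentum_history()
assert max(abs(q - p[0]) for q in p) < 1e-9
```

Since the internal forces are equal and opposite, each integrator step preserves the sum of velocities exactly, so any drift observed is pure floating-point roundoff.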
+
+\begin{eg}
+ We can apply this to Hamiltonian mechanics. The motion of the particle is the stationary point of
+ \[
+ S[\mathbf{x}, \mathbf{p}] = \int \left(\mathbf{p}\cdot \dot{\mathbf{x}} - H(\mathbf{x}, \mathbf{p})\right)\;\d t,
+ \]
+ where
+ \[
+ H = \frac{1}{2m} |\mathbf{p}|^2 + V(\mathbf{x}).
+ \]
+ \begin{enumerate}
+ \item First consider the case where there is no potential. Since the action depends only on $\dot{\mathbf{x}}$ (or $\mathbf{p}$) and not $\mathbf{x}$ itself, it is invariant under the translation
+ \[
+ \mathbf{x}\mapsto \mathbf{x} + \boldsymbol\varepsilon,\quad \mathbf{p}\mapsto \mathbf{p}.
+ \]
+ For general $\boldsymbol\varepsilon$ that can vary with time, we have
+ \begin{align*}
+ \delta S &= \int \Big[\big(\mathbf{p}\cdot (\dot{\mathbf{x}} + \dot{\boldsymbol\varepsilon}) - H(\mathbf{p})\big) - \big(\mathbf{p}\cdot \dot{\mathbf{x}} - H(\mathbf{p})\big)\Big]\;\d t\\
+ &= \int \mathbf{p}\cdot \dot{\boldsymbol\varepsilon}\;\d t.
+ \end{align*}
+ Hence $\mathbf{p}$ (the momentum) is a constant of motion.
+ \item If the potential has no time-dependence, then the system is invariant under time translation. We'll skip the tedious computations and just state that time translation invariance implies conservation of $H$ itself, which is the energy.
+
+ \item The above two results can also be obtained directly from first integrals of the Euler-Lagrange equation. However, we can do something cooler. Suppose that we have a potential $V(|\mathbf{x}|)$ that only depends on radius. Then this has a rotational symmetry.
+
+ Choose any favorite axis of rotational symmetry $\boldsymbol\omega$, and make the rotation
+ \begin{align*}
+ \mathbf{x}&\mapsto \mathbf{x} + \varepsilon\boldsymbol\omega\times \mathbf{x}\\
+ \mathbf{p}&\mapsto \mathbf{p} + \varepsilon\boldsymbol\omega\times \mathbf{p}.
+ \end{align*}
+ Then our rotation does not affect the radius $|\mathbf{x}|$ or the magnitude of the momentum $|\mathbf{p}|$. So the Hamiltonian $H(\mathbf{x}, \mathbf{p})$ is unaffected. Noting that $(\boldsymbol\omega \times \mathbf{p}) \cdot \dot{\mathbf{x}} = 0$, we have
+ \begin{align*}
+ \delta S &= \int \left(\mathbf{p}\cdot \frac{\d}{\d t}\left(\mathbf{x} + \varepsilon\boldsymbol\omega \times \mathbf{x}\right) - \mathbf{p}\cdot \dot{\mathbf{x}}\right)\;\d t\\
+ &= \int \left(\mathbf{p}\cdot \frac{\d}{\d t}(\varepsilon\boldsymbol\omega\times \mathbf{x})\right)\;\d t\\
+ &= \int \left(\mathbf{p}\cdot \left[\boldsymbol\omega\times \frac{\d}{\d t}(\varepsilon\mathbf{x})\right]\right)\;\d t\\
+ &= \int \left(\mathbf{p}\cdot \left[\boldsymbol\omega\times (\dot{\varepsilon} \mathbf{x} + \varepsilon \dot{\mathbf{x}})\right]\right)\;\d t\\
+ &= \int \left(\dot{\varepsilon}\mathbf{p}\cdot (\boldsymbol\omega\times \mathbf{x}) + \varepsilon \mathbf{p}\cdot (\boldsymbol\omega \times \dot{\mathbf{x}})\right)\;\d t\\
+ \intertext{Since $\mathbf{p}$ is parallel to $\dot{\mathbf{x}}$, we are left with}
+ &= \int \left(\dot{\varepsilon}\mathbf{p}\cdot (\boldsymbol\omega\times \mathbf{x})\right)\;\d t\\
+ &= \int \dot{\varepsilon}\boldsymbol\omega\cdot (\mathbf{x}\times \mathbf{p})\;\d t.
+ \end{align*}
+ So $\boldsymbol\omega\cdot (\mathbf{x}\times \mathbf{p})$ is a constant of motion. Since this is true for all $\boldsymbol\omega$, $\mathbf{L} = \mathbf{x}\times \mathbf{p}$ must be a constant of motion, and this is the angular momentum.
+ \end{enumerate}
+\end{eg}
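The rotational-symmetry computation can likewise be checked numerically. The sketch below (the choice $V = \frac{1}{2}|\mathbf{x}|^2$ and the initial conditions are illustrative assumptions) integrates Hamilton's equations for a central potential and verifies that every component of $\mathbf{L} = \mathbf{x}\times \mathbf{p}$ is conserved.

```python
# Sketch: particle in the central potential V(|x|) = |x|**2 / 2 (units m = 1).
# Hamilton's equations are xdot = p, pdot = -x; the rotational symmetry
# should conserve L = x x p componentwise.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

x = [1.0, 0.0, 0.2]
p = [0.0, 0.7, 0.1]
L0 = cross(x, p)
dt = 1e-3
for _ in range(20000):
    # velocity Verlet: central kicks and drifts each preserve x x p exactly
    p = [pi - 0.5 * dt * xi for pi, xi in zip(p, x)]
    x = [xi + dt * pi for xi, pi in zip(x, p)]
    p = [pi - 0.5 * dt * xi for pi, xi in zip(p, x)]
drift = max(abs(a - b) for a, b in zip(cross(x, p), L0))
assert drift < 1e-9
```

Each kick changes $\mathbf{p}$ along $\mathbf{x}$ and each drift changes $\mathbf{x}$ along $\mathbf{p}$, so both leave $\mathbf{x}\times\mathbf{p}$ unchanged up to roundoff.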
+\section{Multivariate calculus of variations}
+So far, the function $x(t)$ we are varying is just a function of a single variable $t$. What if we have a more complicated function to consider?
+
+We will consider the most general case $\mathbf{y}(x_1, \cdots, x_m) \in \R^n$ that maps $\R^m \to \R^n$ (we can also view this as $n$ different functions that map $\R^m \to \R$). The functional will be a multiple integral of the form
+\[
+ F[\mathbf{y}] = \int\cdots \int \; f(\mathbf{y}, \nabla \mathbf{y}, x_1, \cdots, x_m)\;\d x_1\cdots \d x_m,
+\]
+where $\nabla \mathbf{y}$ is the second-rank tensor defined as
+\[
+ \nabla \mathbf{y} = \left(\frac{\partial \mathbf{y}}{\partial x_1}, \cdots, \frac{\partial \mathbf{y}}{\partial x_m}\right).
+\]
+In this case, instead of attempting to come up with some complicated generalized Euler-Lagrange equation, it is often a better idea to directly consider variations $\delta \mathbf{y}$ of $\mathbf{y}$. This is best illustrated by example.
+
+\begin{eg}[Minimal surfaces in $\E^3$]
+ This is a natural generalization of geodesics. A minimal surface is a surface of least area subject to some boundary conditions. Suppose that $(x, y)$ are good coordinates for a surface $S$, where $(x, y)$ takes values in the domain $D \subseteq \R^2$. Then the surface is defined by $z = h(x, y)$, where $h$ is the \emph{height function}.
+
+ When possible, we will denote partial differentiation by suffixes, i.e.\ $h_x = \frac{\partial h}{\partial x}$. Then the area is given by
+ \[
+ A[h] = \int_D \sqrt{1 + h_x^2 + h_y^2}\;\d A.
+ \]
+ Consider a variation of $h(x, y)$: $h\mapsto h + \delta h(x, y)$. Then
+ \begin{align*}
+ A[h + \delta h] &= \int_D \sqrt{1 + (h_x + (\delta h)_x)^2 + (h_y + (\delta h)_y)^2}\;\d A\\
+ &= A[h] + \int_D \left(\frac{h_x (\delta h)_x + h_y (\delta h)_y}{\sqrt{1 + h_x^2 + h_y^2}} + O(\delta h^2)\right)\;\d A
+ \end{align*}
+ We integrate by parts to obtain
+ \[
+ \delta A = -\int_D \delta h\left(\frac{\partial}{\partial x} \left(\frac{h_x}{\sqrt{1 + h_x^2 + h_y^2}}\right) + \frac{\partial}{\partial y} \left(\frac{h_y}{\sqrt{1 + h_x^2 + h_y^2}}\right)\right)\;\d A + O(\delta h^2)
+ \]
+ plus some boundary terms. So our minimal surface will satisfy
+ \[
+ \frac{\partial}{\partial x} \left(\frac{h_x}{\sqrt{1 + h_x^2 + h_y^2}}\right) + \frac{\partial}{\partial y} \left(\frac{h_y}{\sqrt{1 + h_x^2 + h_y^2}}\right) = 0
+ \]
+ Simplifying, we have
+ \[
+ (1 + h_y^2)h_{xx} + (1 + h_x^2) h_{yy} - 2h_xh_y h_{xy} = 0.
+ \]
+ This is a non-linear 2nd-order PDE, the minimal-surface equation. While it is difficult to come up with a fully general solution to this PDE, we can consider some special cases.
+ \begin{itemize}
+ \item There is an obvious solution
+ \[
+ h(x, y) = Ax + By + C,
+ \]
+ since every term in the equation contains a second derivative, and these all vanish for a linear function. This represents a plane.
+
+ \item If $|\nabla h|^2 \ll 1$, then $h_x^2$ and $h_y^2$ are small. So we have
+ \[
+ h_{xx} + h_{yy} = 0,
+ \]
+ or
+ \[
+ \nabla^2 h = 0.
+ \]
+ So we end up with the Laplace equation. Hence harmonic functions are (approximately) minimal-area.
+ \item We might want a cylindrically-symmetric solution, i.e.\ $h(x, y) = z(r)$, where $r = \sqrt{x^2 + y^2}$. Then we are left with an ordinary differential equation
+ \[
+ rz'' + z' + z'^3 = 0.
+ \]
+ The general solution is
+ \[
+ z = A^{-1}\cosh^{-1} (Ar) + B,
+ \]
+ a \emph{catenoid}.
+
+ Alternatively, to obtain this, we can substitute $h(x, y) = z(r)$ into $A[h]$ to get
+ \[
+ A[z] = 2\pi \int r\sqrt{1 + (z'(r))^2}\;\d r,
+ \]
+ and we can apply the Euler-Lagrange equation.
+ \end{itemize}
+\end{eg}
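As a quick check of the cylindrically-symmetric case, the sketch below substitutes the closed-form derivatives of $z(r) = \cosh^{-1} r$ (the catenoid with $A = 1$, $B = 0$, valid for $r > 1$; the sample radii are arbitrary) into the ODE $rz'' + z' + z'^3 = 0$ and confirms that the residual vanishes to machine precision.

```python
# Sketch: check that z(r) = arccosh(r) (the case A = 1, B = 0) satisfies
# the minimal-surface ODE  r z'' + z' + z'**3 = 0.

import math

def zp(r):   # z'(r) = 1/sqrt(r**2 - 1)
    return 1.0 / math.sqrt(r * r - 1.0)

def zpp(r):  # z''(r) = -r/(r**2 - 1)**1.5
    return -r / (r * r - 1.0) ** 1.5

for r in [1.5, 2.0, 5.0, 10.0]:
    residual = r * zpp(r) + zp(r) + zp(r) ** 3
    assert abs(residual) < 1e-12
```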
+
+\begin{eg}[Small amplitude oscillations of uniform string]
+ Suppose we have a string with uniform constant mass density $\rho$ with uniform tension $T$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [domain=0:2] plot ({3*\x}, {sin(\x*180)/1.5});
+ \draw [dashed] (0, 0) -- (6, 0);
+ \draw [->] (1.5, 0) -- (1.5, 0.666) node [pos = 0.5, right] {$y$};
+ \end{tikzpicture}
+ \end{center}
+ Suppose we stretch the string between $x = 0$ and $x = a$ with tension $T$. Then we set it into motion such that the amplitude is given by $y(x; t)$. Then the kinetic energy is
+ \[
+ T = \frac{1}{2}\int_0^a \rho v^2\;\d x = \frac{\rho}{2}\int_0^a \dot{y}^2 \;\d x.
+ \]
+ The potential energy is the tension times the length of the string. So, for small amplitudes,
+ \[
+ V = T\int \;\d \ell = T\int_0^a\sqrt{1 + (y')^2}\;\d x \approx Ta + \int_0^a \frac{1}{2}T(y')^2\;\d x.
+ \]
+ Note that $y'$ is the derivative with respect to $x$, while $\dot{y}$ is the derivative with respect to time.
+
+ The $Ta$ term can be seen as the \emph{ground-state energy}. It is the energy initially stored if there is no oscillation. Since this constant term doesn't affect where the stationary points lie, we will ignore it. Then the action is given by
+ \[
+ S[y] = \iint_0^a \left(\frac{1}{2}\rho \dot{y}^2 - \frac{1}{2}T(y')^2\right)\;\d x\;\d t
+ \]
+ We apply Hamilton's principle which says that we need
+ \[
+ \delta S[y] = 0.
+ \]
+ We have
+ \[
+ \delta S[y] = \iint_0^a \left(\rho \dot{y} \frac{\partial}{\partial t}\delta y - Ty' \frac{\partial}{\partial x}\delta y\right)\;\d x\;\d t.
+ \]
+ Integrate by parts to obtain
+ \[
+ \delta S[y] = -\iint_0^a \delta y(\rho \ddot{y} - Ty'')\;\d x\;\d t + \text{boundary term}.
+ \]
+ Assuming that the boundary term vanishes, we will need
+ \[
+ \ddot{y} - v^2 y'' = 0,
+ \]
+ where $v^2 = T/\rho$. This is the one-dimensional wave equation. Note that it is a linear PDE, a simplification resulting from our assumption that the oscillations are small.
+
+ The general solution to the wave equation is
+ \[
+ y(x, t) = f_+(x - vt) + f_-(x + vt),
+ \]
+ which is a superposition of a wave travelling rightwards and a wave travelling leftwards.
+\end{eg}
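The claimed general solution can be spot-checked numerically. The sketch below (the Gaussian profiles and sample points are arbitrary choices) builds $y(x, t) = f_+(x - vt) + f_-(x + vt)$ and verifies that $\ddot{y} - v^2 y'' \approx 0$ using central finite differences.

```python
# Sketch: verify numerically that y(x, t) = f(x - v t) + g(x + v t) satisfies
# the wave equation ytt - v**2 yxx = 0, using central finite differences.

import math

v = 2.0
f = lambda s: math.exp(-s * s)                # right-moving profile
g = lambda s: 0.5 * math.exp(-(s - 1) ** 2)   # left-moving profile
y = lambda x, t: f(x - v * t) + g(x + v * t)

h = 1e-4
for (x, t) in [(0.0, 0.0), (0.3, 0.5), (-1.0, 0.2)]:
    ytt = (y(x, t + h) - 2 * y(x, t) + y(x, t - h)) / h**2
    yxx = (y(x + h, t) - 2 * y(x, t) + y(x - h, t)) / h**2
    assert abs(ytt - v * v * yxx) < 1e-4
```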
+
+\begin{eg}[Maxwell's equations]
+ It is possible to obtain Maxwell's equations from an action principle, where we define a Lagrangian for the electromagnetic field. Note that this is the Lagrangian for the \emph{field} itself, and there is a separate Lagrangian for particles moving in a field.
+
+ We have to first define several quantities. First we have the charges: $\rho$ represents the electric charge density and $\mathbf{J}$ represents the electric current density.
+
+ Then we have the potentials: $\phi$ is the electric scalar potential and $\mathbf{A}$ is the magnetic vector potential.
+
+ Finally the fields: $\mathbf{E} = -\nabla \phi - \dot{\mathbf{A}}$ is the electric field, and $\mathbf{B} = \nabla\times \mathbf{A}$ is the magnetic field.
+
+ We pick convenient units where $c = \varepsilon_0 = \mu_0 = 1$. With these concepts in mind, the action is given by
+ \[
+ S[\mathbf{A}, \phi] = \int \left(\frac{1}{2}(|\mathbf{E}|^2 - |\mathbf{B}|^2) + \mathbf{A}\cdot \mathbf{J} - \phi \rho\right)\;\d V\;\d t
+ \]
+ Varying $\mathbf{A}$ and $\phi$ by $\delta \mathbf{A}$ and $\delta \phi$ respectively, we have
+ \[
+ \delta S = \int \left(-\mathbf{E}\cdot \left(\nabla \delta\phi + \frac{\partial}{\partial t} \delta \mathbf{A}\right) - \mathbf{B}\cdot \nabla \times \delta \mathbf{A} + \delta \mathbf{A}\cdot \mathbf{J} - \rho\delta\phi\right)\;\d V\;\d t.
+ \]
+ Integrate by parts to obtain
+ \[
+ \delta S = \int\left(\delta\mathbf{A}\cdot (\dot{\mathbf{E}} - \nabla\times \mathbf{B} + \mathbf{J}) + \delta \phi(\nabla\cdot \mathbf{E} - \rho)\right)\;\d V\;\d t.
+ \]
+ Since the coefficients have to be $0$, we must have
+ \[
+ \nabla \times \mathbf{B} = \mathbf{J} + \dot{\mathbf{E}},\quad \nabla \cdot \mathbf{E} = \rho.
+ \]
+ Also, the definitions of $\mathbf{E}$ and $\mathbf{B}$ immediately give
+ \[
+ \nabla\cdot \mathbf{B} = 0,\quad \nabla \times \mathbf{E} = - \dot{\mathbf{B}}.
+ \]
+ These four equations are Maxwell's equations.
+\end{eg}
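The final observation, that $\nabla\cdot \mathbf{B} = 0$ follows from $\mathbf{B} = \nabla\times \mathbf{A}$ alone, can be illustrated numerically. The sketch below (the polynomial vector potential and the evaluation points are arbitrary) computes $\nabla\cdot(\nabla\times \mathbf{A})$ with central finite differences and finds it zero to roundoff.

```python
# Sketch: B = curl A is divergence-free for any smooth A; check it for a
# sample polynomial vector potential using central finite differences.

def A(x, y, z):
    # arbitrary smooth vector potential
    return (y * z, x * x, x * y * z)

h = 1e-3

def curl_A(x, y, z):
    # B = curl A, each partial derivative by a central difference
    dAz_dy = (A(x, y + h, z)[2] - A(x, y - h, z)[2]) / (2 * h)
    dAy_dz = (A(x, y, z + h)[1] - A(x, y, z - h)[1]) / (2 * h)
    dAx_dz = (A(x, y, z + h)[0] - A(x, y, z - h)[0]) / (2 * h)
    dAz_dx = (A(x + h, y, z)[2] - A(x - h, y, z)[2]) / (2 * h)
    dAy_dx = (A(x + h, y, z)[1] - A(x - h, y, z)[1]) / (2 * h)
    dAx_dy = (A(x, y + h, z)[0] - A(x, y - h, z)[0]) / (2 * h)
    return (dAz_dy - dAy_dz, dAx_dz - dAz_dx, dAy_dx - dAx_dy)

def div_B(x, y, z):
    dBx = (curl_A(x + h, y, z)[0] - curl_A(x - h, y, z)[0]) / (2 * h)
    dBy = (curl_A(x, y + h, z)[1] - curl_A(x, y - h, z)[1]) / (2 * h)
    dBz = (curl_A(x, y, z + h)[2] - curl_A(x, y, z - h)[2]) / (2 * h)
    return dBx + dBy + dBz

assert abs(div_B(0.3, -0.7, 1.1)) < 1e-8
```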
+\section{The second variation}
+\subsection{The second variation}
+So far, we have only looked at the ``first derivatives'' of functionals. We can identify stationary points, but we don't know if it is a maximum, minimum or a saddle. To distinguish between these, we have to look at the ``second derivatives'', or the \emph{second variation}.
+
+Suppose $x(t) = x_0(t)$ is a solution of
+\[
+ \frac{\delta F[x]}{\delta x(t)} = 0,
+\]
+i.e.\ $F[x]$ is stationary at $x = x_0$.
+
+To determine what type of stationary point it is, we need to expand $F[x + \delta x]$ to second order in $\delta x$. For convenience, let $\delta x(t) = \varepsilon \xi(t)$ with constant $\varepsilon \ll 1$. We will also only consider functionals of the form
+\[
+ F[x] = \int_\alpha^\beta f(x, \dot{x}, t)\;\d t
+\]
+with fixed-end boundary conditions, i.e.\ $\xi(\alpha) = \xi(\beta) = 0$. We will use both dots ($\dot{x}$) and dashes ($x'$) to denote derivatives.
+
+We consider a variation $x \mapsto x + \delta x$ and expand the integrand to obtain
+\begin{align*}
+ &f(x + \varepsilon \xi, \dot{x} + \varepsilon \dot{\xi}, t) - f(x, \dot{x}, t)\\
+ &= \varepsilon \left(\xi \frac{\partial f}{\partial x} + \dot{\xi}\frac{\partial f}{\partial \dot{x}}\right) + \frac{\varepsilon^2}{2}\left(\xi^2 \frac{\partial^2 f}{\partial x^2} + 2\xi\dot{\xi} \frac{\partial^2 f}{\partial x \partial \dot{x}} + \dot{\xi}^2 \frac{\partial^2 f}{\partial \dot{x}^2}\right) + O(\varepsilon^3)\\
+ \intertext{Noting that $2\xi \dot{\xi} = (\xi^2)'$ and integrating by parts, we obtain}
+ &= \varepsilon \xi\left[\frac{\partial f}{\partial x} - \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)\right] + \frac{\varepsilon^2}{2}\left\{\xi^2\left[\frac{\partial^2 f}{\partial x^2} - \frac{\d}{\d t}\left(\frac{\partial^2 f}{\partial x\partial \dot{x}}\right)\right] + \dot{\xi}^2 \frac{\partial^2 f}{\partial \dot{x}^2}\right\}
+\end{align*}
+plus some boundary terms, which vanish. So
+\[
+ F[x + \varepsilon \xi] - F[x] = \int_\alpha^\beta\varepsilon\xi\left[\frac{\partial f}{\partial x} - \frac{\d}{\d t}\left(\frac{\partial f}{\partial \dot{x}}\right)\right]\;\d t + \frac{\varepsilon^2}{2}\delta^2 F[x, \xi] + O(\varepsilon^3),
+\]
+where
+\[
+ \delta^2 F[x, \xi] = \int_\alpha^\beta \left\{\xi^2 \left[\frac{\partial^2 f}{\partial x^2} - \frac{\d}{\d t}\left(\frac{\partial^2 f}{\partial x \partial \dot{x}}\right)\right] + \dot{\xi}^2 \frac{\partial^2 f}{\partial \dot{x}^2}\right\}\;\d t
+\]
+is a functional of both $x(t)$ and $\xi(t)$. This is analogous to the term
+\[
+ \delta \mathbf{x}^T H(\mathbf{x})\delta \mathbf{x}
+\]
+appearing in the expansion of a regular function $f(\mathbf{x})$. In the case of ordinary functions, if $H(\mathbf{x})$ is positive definite for all $\mathbf{x}$, then $f(\mathbf{x})$ is convex, and the stationary point is hence a global minimum. A similar result holds for functionals.
+
+In this case, if $\delta^2 F[x, \xi] > 0$ for all non-zero $\xi$ and all allowed $x$, then a solution $x_0(t)$ of $\frac{\delta F}{\delta x} = 0$ is an absolute minimum.
+
+\begin{eg}[Geodesics in the plane]
+ We previously showed that a straight line is a stationary point for the curve-length functional, but we didn't show it is in fact the shortest distance! Maybe it is a maximum, and we can get the shortest distance by routing to the moon and back.
+
+ Recall that $f = \sqrt{1 + (y')^2}$. Then
+ \[
+ \frac{\partial f}{\partial y} = 0,\quad \frac{\partial f}{\partial y'} = \frac{y'}{\sqrt{1 + (y')^2}},\quad \frac{\partial^2 f}{\partial y'^2} = \frac{1}{(1 + (y')^2)^{3/2}},
+ \]
+ with the other second derivatives zero. So we have
+ \[
+ \delta ^2 F[y, \xi] = \int_\alpha^\beta \frac{\dot{\xi}^2}{(1 + (y')^2)^{3/2}}\;\d x > 0
+ \]
+ So if we have a stationary function satisfying the boundary conditions, it is an absolute minimum. Since the straight line is a stationary function, it is indeed the minimum.
+\end{eg}
+However, not all functions are convex\textsuperscript{[\textcolor{blue}{citation needed}]}. We can still ask whether a solution $x_0(t)$ of the Euler-Lagrange equation is a local minimum. For these, we need to consider
+\[
+ \delta^2 F[x_0, \xi] = \int_\alpha^\beta (\rho(t)\dot{\xi}^2 + \sigma(t) \xi^2)\;\d t,
+\]
+where
+\[
+ \rho(t) = \left.\frac{\partial^2 f}{\partial \dot{x}^2}\right|_{x = x_0},\quad
+ \sigma(t) = \left[\frac{\partial^2 f}{\partial x^2} - \frac{\d}{\d t}\left(\frac{\partial^2 f}{\partial x \partial \dot{x}}\right)\right]_{x = x_0}.
+\]
+This is of the same form as the Sturm-Liouville problem. For $x_0$ to minimize $F[x]$ locally, we need $\delta^2 F[x_0, \xi] > 0$. A necessary condition for this is
+\[
+ \rho(t) \geq 0,
+\]
+which is the \emph{Legendre condition}.
+
+The intuition behind this necessary condition is as follows: suppose that $\rho(t)$ is negative in some interval $I\subseteq [\alpha, \beta]$. Then we can find a $\xi(t)$ that makes $\delta^2 F[x_0, \xi]$ negative. We simply have to make $\xi$ zero outside $I$, and small but wildly oscillating inside $I$. Then inside $I$, $\dot{\xi}^2$ will be very large while $\xi^2$ is kept tiny. So we can make $\delta^2 F[x_0, \xi]$ arbitrarily negative.
+
+Turning the intuition into a formal proof is not difficult but is tedious and will be omitted.
+
+However, this is not a sufficient condition. Even if we had a strict inequality $\rho (t) > 0$ for all $\alpha < t < \beta$, it is still not sufficient.
+
+Of course, a sufficient (but not necessary) condition is $\rho(t) > 0, \sigma(t) \geq 0$, but this is not too interesting.
+
+\begin{eg}
+ In the brachistochrone problem, we have
+ \[
+ T[x] \propto \int_\alpha^\beta \sqrt{\frac{1 + \dot{x}^2}{x}}\;\d t.
+ \]
+ Then
+ \begin{align*}
+ \rho(t) &= \left.\frac{\partial^2 f}{\partial \dot{x}^2}\right|_{x_0} > 0\\
+ \sigma(t) &= \frac{1}{2x^2\sqrt{x(1 + \dot{x}^2)}} > 0.
+ \end{align*}
+ So the cycloid does minimize the time $T$.
+\end{eg}
+\subsection{Jacobi condition for local minima of \texorpdfstring{$F[x]$}{F[x]}}
+Legendre tried to prove that $\rho > 0$ is a sufficient condition for $\delta^2 F > 0$. This is known as the \emph{strong Legendre condition}. However, he obviously failed, since it is indeed not a sufficient condition. Yet, it turns out that he was close.
+
+Before we get to the actual sufficient condition, we first try to understand why thinking $\rho > 0$ is sufficient isn't as crazy as it first sounds.
+
+If $\rho > 0$ and $\sigma < 0$, we would want to create a negative $\delta^2 F[x_0, \xi]$ by choosing $\xi$ to be large but slowly varying. Then we will have a very negative $\sigma(t)\xi^2$, while the positive $\rho(t) \dot{\xi}^2$ remains small.
+
+The problem is that $\xi$ has to be $0$ at the end points $\alpha$ and $\beta$. For $\xi$ to take a large value, it must reach the value from $0$, and this requires some variation of $\xi$, thereby inducing some $\dot{\xi}$. This is not a problem if $\alpha$ and $\beta$ are far apart --- we simply slowly climb up to a large value of $\xi$ and then slowly rappel back down, maintaining a low $\dot{\xi}$ throughout the process. However, it is not unreasonable to assume that as we make the distance $\beta - \alpha$ smaller and smaller, eventually all $\xi$ will lead to a positive $\delta^2 F[x_0, \xi]$, since we cannot reach large values of $\xi$ without having large $\dot{\xi}$.
+
+It turns out that the intuition is correct. As long as $\alpha$ and $\beta$ are sufficiently close, $\delta^2 F[x_0, \xi]$ will be positive. The derivation of this result is, however, rather roundabout, involving a number of algebraic tricks.
+
+For a solution $x_0$ to the Euler-Lagrange equation, we have
+\[
+ \delta^2 F[x_0, \xi] = \int_\alpha^\beta \big(\rho(t) \dot{\xi}^2 + \sigma(t) \xi^2\big)\;\d t,
+\]
+where
+\[
+ \rho(t) = \left.\frac{\partial^2 f}{\partial \dot{x}^2}\right|_{x = x_0},\quad
+ \sigma(t) = \left[\frac{\partial^2 f}{\partial x^2} - \frac{\d}{\d t}\left(\frac{\partial^2 f}{\partial x \partial \dot{x}}\right)\right]_{x = x_0}.
+\]
+Assume $\rho(t) > 0$ for $\alpha < t < \beta$ (the strong Legendre condition) and assume boundary conditions $\xi(\alpha) = \xi(\beta) = 0$. When is this sufficient for $\delta^2 F > 0$?
+
+First of all, notice that for any smooth function $w(t)$, we have
+\[
+ 0 = \int_\alpha^\beta (w\xi^2)' \;\d t
+\]
+since this is a total derivative and evaluates to $w(\beta)\xi(\beta)^2 - w(\alpha)\xi(\alpha)^2 = 0$. So we have
+\[
+ 0 = \int_\alpha^\beta (2w\xi \dot{\xi} + \dot{w}\xi^2)\;\d t.
+\]
+This allows us to rewrite $\delta^2 F$ as
+\[
+ \delta^2 F = \int_\alpha^\beta \big(\rho \dot{\xi}^2 + 2w\xi \dot{\xi} + (\sigma + \dot{w})\xi^2\big)\;\d t.
+\]
+Now complete the square in $\xi$ and $\dot{\xi}$. So
+\[
+ \delta^2 F = \int_\alpha^\beta \left[\rho\left(\dot{\xi} + \frac{w}{\rho} \xi\right)^2 +\left(\sigma + \dot{w} - \frac{w^2}{\rho}\right)\xi^2 \right]\;\d t
+\]
+This is non-negative if
+\[
+ w^2 = \rho(\sigma + \dot{w}).\tag{$*$}
+\]
+So as long as we can find a solution to this equation, we know that $\delta^2 F$ is non-negative. Could it be that $\delta^2 F = 0$? It turns out it cannot. If it were, then we would need $\dot{\xi} = -\frac{w}{\rho}\xi$. We can solve this to obtain
+\[
+ \xi(t) = C\exp\left(-\int_\alpha^t \frac{w(s)}{\rho(s)}\;\d s\right).
+\]
+We know that $\xi(\alpha) = 0$. But $\xi(\alpha) = C e^0 = C$. So $C = 0$. Hence equality holds only for $\xi = 0$.
+
+So all we need to do is to find a solution to $(*)$, and we are sure that $\delta^2 F > 0$.
+
+Note that this is non-linear in $w$. We can convert this into a linear equation by defining $w$ in terms of a new function $u$ by $w = -\rho \dot{u}/u$. Then $(*)$ becomes
+\[
+ \rho\left(\frac{\dot{u}}{u}\right)^2 = \sigma - \left(\frac{\rho \dot{u}}{u}\right)' = \sigma - \frac{(\rho \dot{u})'}{u} + \rho \left(\frac{\dot{u}}{u}\right)^2.
+\]
+We see that the left and right terms cancel. So we have
+\[
+ -(\rho \dot{u})' + \sigma u = 0.
+\]
+This is the \emph{Jacobi accessory equation}, a second-order linear ODE.
+
+There is a caveat here. Not every solution $u$ will do. Recall that $u$ is used to produce $w$ via $w = -\rho\dot{u}/u$. Hence within $[\alpha, \beta]$, we cannot have $u = 0$, since we cannot divide by zero. If we can find a $u(t)$ satisfying the Jacobi accessory equation that is nowhere zero on $[\alpha, \beta]$, then $\delta^2 F > 0$ for $\xi \not= 0$, and hence $x_0$ is a local minimum of $F$.
+
+A suitable solution always exists for sufficiently small $\beta - \alpha$, but may not exist if $\beta - \alpha$ is too large, as remarked at the beginning of this section.
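The substitution can be verified on a concrete case: take $\rho = 1$ and $\sigma = -1$, so the Jacobi accessory equation reads $\ddot{u} + u = 0$. The sketch below (the sample points are arbitrary) checks that $u(t) = \cos t$, which is non-zero on $(-\pi/2, \pi/2)$, produces $w = -\rho\dot{u}/u = \tan t$ satisfying the Riccati equation $w^2 = \rho(\sigma + \dot{w})$, i.e.\ the identity $\tan^2 t = \sec^2 t - 1$.

```python
# Sketch: with rho = 1, sigma = -1, the Jacobi accessory equation
# -(rho u')' + sigma u = 0 reads u'' + u = 0. Take u(t) = cos(t), non-zero
# on (-pi/2, pi/2); then w = -rho u'/u = tan(t) should solve the Riccati
# equation w**2 = rho * (sigma + w').

import math

rho, sigma = 1.0, -1.0
u = math.cos
up = lambda t: -math.sin(t)
w = lambda t: -rho * up(t) / u(t)         # equals tan(t)
wp = lambda t: 1.0 / math.cos(t) ** 2     # derivative of tan(t)

for t in [-1.2, -0.5, 0.0, 0.7, 1.3]:
    assert abs(w(t) ** 2 - rho * (sigma + wp(t))) < 1e-9
```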
+
+\begin{eg}[Geodesics on unit sphere]
+ For any curve $C$ on the sphere, we have
+ \[
+ L = \int_C \sqrt{\d \theta^2 + \sin^2 \theta \;\d \phi^2}.
+ \]
+ If $\theta$ is a good parameter of the curve, then
+ \[
+ L[\phi] = \int_{\theta _1}^{\theta_2} \sqrt{1 + \sin^2 \theta (\phi')^2}\;\d \theta.
+ \]
+ Alternatively, if $\phi$ is a good parameter, we have
+ \[
+ L[\theta] = \int_{\phi_1}^{\phi_2}\sqrt{(\theta')^2 + \sin^2 \theta}\;\d \phi.
+ \]
+ We will look at the second case.
+
+ We have
+ \[
+ f(\theta, \theta') = \sqrt{(\theta')^2 + \sin^2 \theta}.
+ \]
+ So
+ \[
+ \frac{\partial f}{\partial \theta} = \frac{\sin \theta\cos \theta}{\sqrt{(\theta')^2 + \sin^2 \theta}},\quad \frac{\partial f}{\partial \theta'} = \frac{\theta'}{\sqrt{(\theta')^2 + \sin^2 \theta}}.
+ \]
+ Since $\frac{\partial f}{\partial \phi} = 0$, we have the first integral
+ \[
+ \text{const} = f - \theta' \frac{\partial f}{\partial \theta'} = \frac{\sin^2 \theta}{\sqrt{(\theta')^2 + \sin^2 \theta}}
+ \]
+ So a solution is
+ \[
+ c\sin^2 \theta = \sqrt{(\theta')^2 + \sin^2 \theta}.
+ \]
+ Here we need $c \geq 1$ for the equation to make sense.
+
+ We will consider the case where $c = 1$ (in fact, we can show that we can always orient our axes such that $c = 1$). This occurs when $\theta' = 0$, i.e.\ $\theta$ is constant. Then our first integral gives $\sin^2 \theta = \sin \theta$. So $\sin \theta = 1$ and $\theta = \pi/2$. This corresponds to a curve on the equator (we ignore the case $\sin \theta = 0$, i.e.\ $\theta = 0$, which is a rather silly solution).
+
+ There are two equatorial solutions to the Euler-Lagrange equations. Which, if any, minimizes $L[\theta]$?
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2];
+ \draw [red] (2, 0) arc (0:240:2 and 0.5);
+ \draw [red] (2, 0) arc (0:-60:2 and 0.5);
+ \draw [blue] (0, -0.5) arc (-90:-60:2 and 0.5) node [circ] {};
+ \draw [blue] (0, -0.5) arc (270:240:2 and 0.5) node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ We have
+ \[
+ \left.\frac{\partial^2 f}{\partial (\theta')^2}\right|_{\theta = \pi/2} = 1
+ \]
+ and
+ \[
+ \left.\frac{\partial^2 f}{\partial \theta^2}\right|_{\theta = \pi/2} = -1,\quad \frac{\partial^2 f}{\partial \theta\partial \theta'} = 0.
+ \]
+ So $\rho(\phi) = 1$ and $\sigma(\phi) = -1$. So
+ \[
+ \delta^2 F = \int_{\phi_1}^{\phi_2} ((\xi')^2 - \xi^2)\;\d \phi.
+ \]
+ The Jacobi accessory equation is $u'' + u = 0$. So the general solution is $u \propto \sin \phi - \gamma \cos\phi$. This is equal to zero if $\tan \phi = \gamma$.
+
+ Looking at the graph of $\tan \phi$, we see that $\tan$ takes every real value on any interval of length $\pi$. Hence if the domain $\phi_2 - \phi_1$ is greater than $\pi$ (i.e.\ we go the long way from the first point to the second), then whatever $\gamma$ we pick, there is some $\phi$ in the domain with $\tan \phi = \gamma$, i.e.\ $u = 0$. So we cannot conclude that the longer path is a local minimum (it is obviously not a global minimum, by definition of longer); neither can we conclude that it is \emph{not} a local minimum, since we tested with a sufficient but not necessary condition. On the other hand, if $\phi_2 - \phi_1$ is less than $\pi$, then we will be able to pick a $\gamma$ such that $u$ is non-zero in the domain.
+\end{eg}
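The dichotomy in the last example can be explored numerically. For the equatorial geodesic the Jacobi accessory equation is $u'' + u = 0$ with solutions $u \propto \sin(\phi - \phi_0)$, and such a solution can avoid zeros on the domain only when the domain is shorter than $\pi$. The sketch below (the grid resolutions are arbitrary) scans over shifts $\phi_0$:

```python
# Sketch: solutions of the Jacobi accessory equation u'' + u = 0 are
# u = sin(phi - phi0); they can stay non-zero on an interval only if the
# interval is shorter than pi.

import math

def exists_nonvanishing_solution(phi1, phi2, shifts=400, points=201):
    # scan shifts phi0 and test whether some u = sin(phi - phi0) stays
    # bounded away from zero on the whole interval [phi1, phi2]
    for k in range(shifts):
        phi0 = -math.pi + 2 * math.pi * k / shifts
        vals = [math.sin(phi1 + (phi2 - phi1) * j / (points - 1) - phi0)
                for j in range(points)]
        if min(vals) > 1e-6 or max(vals) < -1e-6:
            return True
    return False

# short way round (< pi): a non-vanishing Jacobi solution exists
assert exists_nonvanishing_solution(0.0, 3.0)
# long way round (> pi): every solution vanishes somewhere in the domain
assert not exists_nonvanishing_solution(0.0, 3.5)
```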
+\end{document}
+
diff --git a/books/cam/IB_L/complex_analysis.tex b/books/cam/IB_L/complex_analysis.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6ef8cf98bd1d0dca652af724f56c1e5d98c887e1
--- /dev/null
+++ b/books/cam/IB_L/complex_analysis.tex
@@ -0,0 +1,3082 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Lent}
+\def\nyear {2016}
+\def\nlecturer {I.\ Smith}
+\def\ncourse {Complex Analysis}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Analytic functions}\\
+Complex differentiation and the Cauchy--Riemann equations. Examples. Conformal mappings. Informal discussion of branch points, examples of $\log z$ and $z^c$.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Contour integration and Cauchy's theorem}\\
+Contour integration (for piecewise continuously differentiable curves). Statement and proof of Cauchy's theorem for star domains. Cauchy's integral formula, maximum modulus theorem, Liouville's theorem, fundamental theorem of algebra. Morera's theorem.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Expansions and singularities}\\
+Uniform convergence of analytic functions; local uniform convergence. Differentiability of a power series. Taylor and Laurent expansions. Principle of isolated zeros. Residue at an isolated singularity. Classification of isolated singularities.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{The residue theorem}\\
+Winding numbers. Residue theorem. Jordan's lemma. Evaluation of definite integrals by contour integration. Rouch\'e's theorem, principle of the argument. Open mapping theorem.\hspace*{\fill} [4]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+Complex analysis is the study of complex differentiable functions. While this sounds like it should be a rather straightforward generalization of real analysis, it turns out complex differentiable functions behave rather differently. Requiring that a function is complex differentiable is a very strong condition, and consequently these functions have very nice properties.
+
+One of the most distinguishing results from complex analysis is Liouville's theorem, which says that every bounded complex differentiable function $f: \C \to \C$ must be constant. This is very false for real functions (e.g.\ $\sin x$). This gives a strikingly simple proof of the fundamental theorem of algebra --- if the polynomial $p$ has no roots, then $\frac{1}{p}$ is well-defined on all of $\C$, and it is easy to show it must be bounded. So $\frac{1}{p}$ is constant, and hence so is $p$.
+
+Many things we hoped were true in real analysis are indeed true in complex analysis. For example, if a complex function is once differentiable, then it is infinitely differentiable. In particular, every complex differentiable function has a Taylor series and is indeed equal to its Taylor series (in reality, to prove these, we show that every complex differentiable function is equal to its Taylor series, and then notice that power series are always infinitely differentiable).
+
+Another result we will prove is that the uniform limit of complex differentiable functions is again complex differentiable. Contrast this with the huge list of weird conditions we needed for real analysis!
+
+Differentiation is not the only nice thing. It turns out integration is also easier in complex analysis. In fact, we will exploit this fact to perform real integrals by pretending they are complex integrals. However, this will not be our main focus here --- that belongs to the IB Complex Methods course instead.
+
+\section{Complex differentiation}
+\subsection{Differentiation}
+We start with some definitions. As mentioned in the introduction, Liouville's theorem says functions defined on the whole of $\C$ are often not that interesting. Hence, we would like to work with some subsets of $\C$ instead. As in real analysis, for differentiability to be well-defined, we would want a function to be defined on an open set, so that we can see how $f: U \to \C$ varies as we approach a point $z_0 \in U$ from all different directions.
+
+\begin{defi}[Open subset]
+ A subset $U \subseteq \C$ is \emph{open} if for any $x \in U$, there is some $\varepsilon > 0$ such that the open ball $B_\varepsilon(x) = B(x; \varepsilon) \subseteq U$.
+\end{defi}
+The notation used for the open ball varies from time to time, even within the same sentence. For example, instead of putting $\varepsilon$ as the subscript, we could put $x$ as the subscript and $\varepsilon$ inside the brackets. Hopefully, it will be clear from the context.
+
+This is good, but we also want to rule out some silly cases, such as functions defined on subsets that look like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw [fill=mblue, fill opacity=0.5] plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.6) (0.3, 1.2) (-1.7, 0.8)};
+ \begin{scope}[shift={(5, 0)}]
+ \draw [fill=mblue, fill opacity=0.5] plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.6) (0.3, 1.2) (-1.7, 0.8)};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+This would violate results such as the fact that a function with zero derivative must be constant. Hence, we will require our subset to be connected. This means that for any two points in the set, we can find a path joining them. A path can be formally defined as a continuous function $\gamma: [0, 1] \to \C$, with start point $\gamma(0)$ and end point $\gamma(1)$.
+
+\begin{defi}[Path-connected subset]
+ A subset $U \subseteq \C$ is path-connected if for any $x, y \in U$, there is some $\gamma: [0, 1] \to U$ continuous such that $\gamma(0) = x$ and $\gamma(1) = y$.
+\end{defi}
+
+Together, these define what it means to be a domain. These are (usually) things that will be the domains of our functions.
+\begin{defi}[Domain]
+ A \emph{domain} is a non-empty open path-connected subset of $\C$.
+\end{defi}
+
+With this, we can define what it means to be differentiable at a point. This is, in fact, exactly the same definition as that for real functions.
+\begin{defi}[Differentiable function]
+ Let $U \subseteq \C$ be a domain and $f: U \to \C$ be a function. We say $f$ is \emph{differentiable} at $w \in U$ if
+ \[
+ f'(w) = \lim_{z\to w} \frac{f(z) - f(w)}{z - w}
+ \]
+ exists.
+\end{defi}
+Here we implicitly require that the limit does not depend on which direction we approach $w$ from. This requirement is also present for real differentiability, but there are just two directions we can approach $w$ from --- the positive direction and the negative direction. For complex analysis, there are infinitely many directions to choose from, and it turns out this is a very strong condition to impose.
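A classic illustration of this direction-dependence (a sketch; the step size is arbitrary) is $f(z) = \bar{z}$: the difference quotient at $w = 0$ tends to $1$ along the real axis but to $-1$ along the imaginary axis, so the limit does not exist at $0$.

```python
# Sketch: the difference quotient of f(z) = conj(z) at w = 0 depends on the
# direction of approach, so the complex derivative does not exist.

def quotient(f, w, dz):
    return (f(w + dz) - f(w)) / dz

f = lambda z: z.conjugate()

along_real = quotient(f, 0, 1e-8)        # approach along the real axis
along_imag = quotient(f, 0, 1e-8j)       # approach along the imaginary axis

assert abs(along_real - 1) < 1e-12
assert abs(along_imag + 1) < 1e-12
```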
+
+Complex differentiability at a point $w$ is not too interesting. Instead, what we want is a slightly stronger condition that the function is complex differentiable in a neighbourhood of $w$.
+
+\begin{defi}[Analytic/holomorphic function]
+ A function $f$ is \emph{analytic} or \emph{holomorphic} at $w \in U$ if $f$ is differentiable on an open neighbourhood $B(w, \varepsilon)$ of $w$ (for some $\varepsilon$).
+\end{defi}
+
+\begin{defi}[Entire function]
+ If $f: \C \to \C$ is defined on all of $\C$ and is holomorphic on $\C$, then $f$ is said to be \emph{entire}.
+\end{defi}
+
+It is not universally agreed what the words analytic and holomorphic should mean. Some people take one of these words to mean instead that the function has (and is given by) a Taylor series, and then take many pages to prove that these two notions are indeed the same. But since they are the same, we shall just opt for the simpler definition.
+
+The goal of the course is to develop the rich theory of these complex differentiable functions and see how we can integrate them along continuously differentiable ($C^1$) paths in the complex plane.
+
+Before we try to achieve our lofty goals, we first want to figure out when a function is differentiable. Sure we can do this by checking the definition directly, but this quickly becomes cumbersome for more complicated functions. Instead, we would want to see if we can relate complex differentiability to real differentiability, since we know how to differentiate real functions.
+
+Given $f: U \to \C$, we can write it as $f = u + iv$, where $u, v: U \to \R$ are real-valued functions. We can further view $u$ and $v$ as real-valued functions of two real variables, instead of one complex variable.
+
+Then from IB Analysis II, we know this function $u: U \to \R$ is differentiable (as a real function) at a point $(c, d) \in U$, with derivative $Du|_{(c, d)} = (\lambda, \mu)$, if and only if
+\[
+ \frac{u(x, y) - u(c, d) - (\lambda (x - c) + \mu (y - d))}{\|(x, y) - (c, d)\|} \to 0\quad \text{as}\quad (x, y) \to (c, d).
+\]
+This allows us to come up with a nice criterion for when a complex function is differentiable.
+
+\begin{prop}
+ Let $f$ be defined on an open set $U \subseteq \C$. Let $w = c + id \in U$ and write $f = u + iv$. Then $f$ is complex differentiable at $w$ if and only if $u$ and $v$, viewed as real functions of two real variables, are differentiable at $(c, d)$, \emph{and}
+ \begin{align*}
+ u_x &= v_y,\\
+ u_y &= -v_x.
+ \end{align*}
+ These equations are the \emph{Cauchy--Riemann equations}. In this case, we have
+ \[
+ f'(w) = u_x(c, d) + iv_x(c, d) = v_y(c, d) -i u_y(c, d).
+ \]
+\end{prop}
+
+\begin{proof}
+ By definition, $f$ is differentiable at $w$ with $f'(w) = p + iq$ if and only if
+ \[
+ \lim_{z \to w} \frac{f(z) - f(w) - (p + iq)(z - w)}{z - w} = 0. \tag{$\dagger$}
+ \]
+ If $z = x + iy$, then
+ \[
+ (p + iq) (z - w) = p(x - c) - q(y - d) + i(q(x - c) + p (y - d)).
+ \]
+ So, breaking into real and imaginary parts, we know $(\dagger)$ holds if and only if
+ \[
+ \lim_{(x, y) \to (c, d)} \frac{u(x, y) - u(c, d) - (p(x - c) - q(y - d))}{\sqrt{(x - c)^2 + (y - d)^2}} = 0
+ \]
+ and
+ \[
+ \lim_{(x, y) \to (c, d)} \frac{v(x, y) - v(c, d) - (q(x - c) + p(y - d))}{\sqrt{(x - c)^2 + (y - d)^2}} = 0.
+ \]
+ Comparing this to the definition of the differentiability of a real-valued function, we see this holds exactly if $u$ and $v$ are differentiable at $(c, d)$ with
+ \[
+ Du|_{(c, d)} = (p, -q),\quad Dv|_{(c, d)} = (q, p).\qedhere
+ \]
+\end{proof}
+A standard warning: if $f: U \to \C$ can be written as $f = u + iv$, where $u_x = v_y$ and $u_y = -v_x$ at $(c, d) \in U$, we \emph{cannot} conclude that $f$ is complex differentiable at $(c, d)$. These conditions only say the partial derivatives exist, which does \emph{not} imply that $u$ and $v$ are differentiable, as required by the proposition. However, if the partial derivatives exist and are continuous, then by IB Analysis II $u$ and $v$ are differentiable, and hence $f$ is complex differentiable.
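As a numerical sanity check (not part of the notes), the following Python sketch verifies the Cauchy--Riemann equations and the formula $f'(w) = u_x + iv_x$ for the sample function $f(z) = z^3$ at an arbitrarily chosen point, using central finite differences.

```python
# Check the Cauchy--Riemann equations for f(z) = z^3 by finite differences.
def partials(g, x, y, h=1e-6):
    # central-difference approximations to g_x and g_y
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return gx, gy

f = lambda x, y: complex(x, y) ** 3
u = lambda x, y: f(x, y).real   # u = Re f
v = lambda x, y: f(x, y).imag   # v = Im f

x, y = 1.2, -0.7
ux, uy = partials(u, x, y)
vx, vy = partials(v, x, y)

assert abs(ux - vy) < 1e-6 and abs(uy + vx) < 1e-6   # u_x = v_y, u_y = -v_x
# f'(w) = u_x + i v_x should agree with the exact derivative 3 w^2
assert abs(complex(ux, vx) - 3 * complex(x, y) ** 2) < 1e-5
```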
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item The usual rules of differentiation (sum rule, product rule, chain rule, derivative of inverse) all hold for complex differentiable functions, with the same proofs as in the real case.
+ \item A polynomial $p: \C \to \C$ is entire. This can be checked directly from definition, or using the product rule.
+ \item A \emph{rational function} $\frac{p(z)}{q(z)}: U \to \C$, where $U \subseteq \C \setminus \{z: q(z) = 0\}$, is holomorphic on any such $U$. Here $p, q$ are polynomials.
+ \item $f(z) = |z|$ is \emph{not} complex differentiable at \emph{any} point of $\C$. Indeed, we can write this as $f = u + iv$, where
+ \[
+ u(x, y) = \sqrt{x^2 + y^2},\quad v(x, y) = 0.
+ \]
+ If $(x, y) \not= (0, 0)$, then
+ \[
+ u_x = \frac{x}{\sqrt{x^2 + y^2}},\quad u_y = \frac{y}{\sqrt{x^2 + y^2}}.
+ \]
+ If we are not at the origin, then $u_x$ and $u_y$ cannot both vanish, but the partial derivatives of $v$ both vanish. Hence the Cauchy--Riemann equations do not hold, and $f$ is not differentiable away from the origin.
+
+ At the origin, we can compute directly that
+ \[
+ \frac{f(h) - f(0)}{h} = \frac{|h|}{h}.
+ \]
+ This is, say, $+1$ for $h \in \R^+$ and $-1$ for $h \in \R^-$. So the limit as $h \to 0$ does not exist.
+ \end{enumerate}
+\end{eg}
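The failure of differentiability at the origin can also be seen numerically. This is an illustrative sketch (not part of the notes): difference quotients of $f(z) = |z|$ at $0$ along different directions approach different limits.

```python
# Difference quotients of |z| at 0 along three directions.
f = lambda z: abs(z)
h = 1e-8
quot_pos  = (f(h) - f(0)) / h              # along R^+
quot_neg  = (f(-h) - f(0)) / (-h)          # along R^-
quot_imag = (f(1j * h) - f(0)) / (1j * h)  # along the imaginary axis

assert abs(quot_pos - 1) < 1e-6    # limit +1
assert abs(quot_neg + 1) < 1e-6    # limit -1
assert abs(quot_imag + 1j) < 1e-6  # limit -i: no common limit exists
```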
+
+\subsection{Conformal mappings}
+The course schedule has a weird part where we are supposed to talk about conformal mappings for a lecture, but not use them anywhere else. We have to put them \emph{somewhere}, and we might as well do it now. However, this section will be slightly disconnected from the rest of the lectures.
+
+\begin{defi}[Conformal function]
+ Let $f: U \to \C$ be a function holomorphic at $w \in U$. If $f'(w) \not= 0$, we say $f$ is \emph{conformal} at $w$.
+\end{defi}
+
+What exactly does $f'(w) \not= 0$ tell us? In the real case, if we know a function $f: (a, b) \to \R$ is continuously differentiable, and $f'(c) \not= 0$, then $f$ is locally increasing or decreasing at $c$, and hence has a local inverse. This is also true in the case of complex functions.
+
+We write $f = u + iv$, then viewed as a map $\R^2 \to \R^2$, the Jacobian matrix is given by
+\[
+ Df =
+ \begin{pmatrix}
+ u_x & u_y\\
+ v_x & v_y
+ \end{pmatrix}.
+\]
+Then
+\[
+ \det (Df) = u_x v_y - u_y v_x = u_x^2 + u_y^2.
+\]
+Using the formula for the complex derivative in terms of the partials, this shows that if $f'(w) \not= 0$, then $\det(Df|_w) \not= 0$. Hence, by the inverse function theorem (viewing $f$ as a function $\R^2 \to \R^2$), $f$ is locally invertible at $w$ (technically, we need $f$ to be \emph{continuously} differentiable, instead of just differentiable, but we will later show that $f$ in fact must be infinitely differentiable). Moreover, by the same proof as in real analysis, the local inverse to a holomorphic function is holomorphic (and conformal).
+
+But being conformal is more than just being locally invertible. An important property of conformal mappings is that they preserve angles. To give a precise statement of this, we need to specify how ``angles'' work.
+
+The idea is to look at tangent vectors of paths. Let $\gamma_1, \gamma_2: [-1, 1] \to U$ be continuously differentiable paths that intersect when $t = 0$ at $w = \gamma_1(0) = \gamma_2(0)$. Moreover, assume $\gamma_i'(0) \not= 0$.
+
+Then we can compare the angles between the paths by looking at the difference in arguments of the tangents at $w$. In particular, we define
+\[
+ \mathrm{angle} (\gamma_1, \gamma_2) = \arg (\gamma_1'(0)) - \arg (\gamma_2'(0)).
+\]
+Let $f: U \to \C$ and $w \in U$. Suppose $f$ is conformal at $w$. Then $f$ maps our two paths to $f \circ \gamma_i: [-1, 1] \to \C$. These two paths now intersect at $f(w)$. Then the angle between them is
+\begin{align*}
+ \mathrm{angle} (f \circ \gamma_1, f\circ \gamma_2) &= \arg((f\circ \gamma_1)'(0)) - \arg((f \circ \gamma_2)'(0))\\
+ &= \arg\left(\frac{(f\circ \gamma_1)'(0)}{(f \circ \gamma_2)'(0)}\right)\\
+ &= \arg\left(\frac{\gamma_1'(0)}{\gamma_2'(0)}\right)\\
+ &= \mathrm{angle} (\gamma_1, \gamma_2),
+\end{align*}
+using the chain rule and the fact that $f'(w) \not= 0$. So angles are preserved.
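The angle computation above can be checked numerically. This sketch (an illustration, not part of the notes) takes two straight-line paths through $w = 1 + i$ with tangent directions differing by $0.7$ radians, maps them under $z \mapsto z^2$ (conformal at $w$), and compares angles via finite-difference tangents.

```python
import cmath

w = 1 + 1j
g1 = lambda t: w + t                        # tangent 1 at t = 0
g2 = lambda t: w + t * cmath.exp(0.7j)      # tangent e^{0.7i} at t = 0
f = lambda z: z * z                         # conformal at w since f'(w) = 2w != 0

h = 1e-6
# tangents of the image paths at t = 0, by finite differences
T1 = (f(g1(h)) - f(g1(0))) / h
T2 = (f(g2(h)) - f(g2(0))) / h

angle_before = cmath.phase(g2(h) - w) - cmath.phase(g1(h) - w)
angle_after = cmath.phase(T2) - cmath.phase(T1)

assert abs(angle_before - 0.7) < 1e-4
assert abs(angle_after - 0.7) < 1e-4   # the angle is preserved
```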
+
+What else can we do with conformal maps? It turns out we can use it to solve Laplace's equation.
+
+We will later prove that if $f: U \to \C$ is holomorphic on an open set $U$, then $f': U \to \C$ is \emph{also} holomorphic. Hence $f$ is infinitely differentiable.
+
+In particular, if we write $f = u + iv$, then using the formula for $f'$ in terms of the partials, we know $u$ and $v$ are also \emph{infinitely differentiable}. Differentiating the Cauchy--Riemann equations, we get
+\[
+ u_{xx} = v_{yx} = -u_{yy}.
+\]
+In other words,
+\[
+ u_{xx} + u_{yy} = 0.
+\]
+A similar computation works for $v$. Hence $\Re(f)$ and $\Im(f)$ satisfy Laplace's equation, and are thus \emph{harmonic} (by definition).
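As a quick numerical illustration (not part of the notes), the real part of the holomorphic function $z^3$, namely $u(x, y) = x^3 - 3xy^2$, has vanishing Laplacian; a five-point finite-difference stencil confirms this at a sample point.

```python
# Finite-difference Laplacian of u = Re(z^3) = x^3 - 3xy^2, which is harmonic.
u = lambda x, y: x**3 - 3 * x * y**2

def laplacian(g, x, y, h=1e-4):
    # standard five-point stencil approximating u_xx + u_yy
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h)
            - 4 * g(x, y)) / h**2

lap = laplacian(u, 0.3, -1.1)
assert abs(lap) < 1e-5   # should vanish up to rounding error
```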
+
+\begin{defi}[Conformal equivalence]
+ If $U$ and $V$ are open subsets of $\C$ and $f: U \to V$ is a conformal bijection, then it is a \emph{conformal equivalence}.
+\end{defi}
+Note that in general, a bijective continuous map need not have a continuous inverse. However, if we are given further that it is \emph{conformal}, then the inverse mapping theorem tells us there is a local conformal inverse, and if the function is bijective, these patch together to give a global conformal inverse.
+
+The idea is that if we are required to solve the 2D Laplace's equation on a funny domain $U$ subject to some boundary conditions, we can try to find a conformal equivalence $f$ between $U$ and some other nice domain $V$. We can then solve Laplace's equation on $V$ subject to the boundary conditions carried forward by $f$, which is hopefully easier. Afterwards, we pack this solution into the real part of a conformal function $g$, and then $g \circ f: U \to \C$ is a holomorphic function on $U$ whose real part satisfies the boundary conditions we want. So we have found a solution to Laplace's equation.
+
+You will have a chance to try that on the first example sheet. Instead, we will focus on finding conformal equivalences between different regions, since examiners like these questions.
+
+\begin{eg}
+ Any M\"obius map $A(z) = \frac{az + b}{cz + d}$ (with $ad - bc \not= 0$) defines a conformal equivalence $\C \cup \{\infty\} \to \C\cup \{\infty\}$ in the obvious sense. $A'(z) \not= 0$ follows from the chain rule and the invertibility of $A(z)$.
+
+ In particular, the M\"obius group of the disk
+ \begin{align*}
+ \text{M\"ob}(D) &= \{f \in \text{M\"obius group}: f(D) = D\} \\
+ &= \left\{\lambda\frac{z - a}{\bar{a}z - 1} \in \text{M\"ob} : |a| < 1, |\lambda| = 1\right\}
+ \end{align*}
+ is a group of conformal equivalences of the disk. You will prove that the M\"obius group of the disk is indeed of this form in the first example sheet, and that these are all conformal equivalences of the disk on example sheet 2.
+\end{eg}
+\begin{eg}
+ The map $z \mapsto z^n$ for $n \geq 2$ is holomorphic everywhere and conformal except at $z = 0$. This gives a conformal equivalence
+ \[
+ \left\{z \in \C^*: 0 < \arg(z) < \frac{\pi}{n}\right\} \leftrightarrow \H,
+ \]
+ where we adopt the following notation:
+\end{eg}
+
+\begin{notation}
+ We write $\C^* = \C \setminus \{0\}$ and
+ \[
+ \H = \{z \in \C: \Im (z) > 0\}
+ \]
+ is the upper half plane.
+\end{notation}
+
+\begin{eg}
+ Note that $z \in \H$ if and only if $z$ is closer to $i$ than to $-i$. In other words,
+ \[
+ |z - i| < |z + i|,
+ \]
+ or
+ \[
+ \left|\frac{z - i}{z + i}\right| < 1.
+ \]
+ So $z \mapsto \frac{z - i}{z + i}$ defines a conformal equivalence $\H \to D$, the unit disk. We know this is conformal since it is a special case of the M\"obius map.
+\end{eg}
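This sketch (an illustration, not part of the notes) samples random points of $\H$ and checks that $z \mapsto \frac{z - i}{z + i}$ really sends them into the unit disk, and sends real boundary points to the unit circle.

```python
import random

random.seed(0)
for _ in range(100):
    # a random point of the upper half-plane H
    z = complex(random.uniform(-5, 5), random.uniform(0.01, 5))
    w = (z - 1j) / (z + 1j)
    assert abs(w) < 1            # lands inside the unit disk D

x = 2.7                          # a point of the real axis (boundary of H)
assert abs(abs((x - 1j) / (x + 1j)) - 1) < 1e-12   # lands on the unit circle
```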
+
+\begin{eg}
+ Consider the map
+ \[
+ z \mapsto w = \frac{1}{2}\left(z + \frac{1}{z}\right),
+ \]
+ assuming $z \in \C^*$. This can also be written as
+ \[
+ \frac{w + 1}{w - 1} = 1 + \frac{2}{w - 1} = 1 + \frac{4}{z + \frac{1}{z} - 2} = 1 + \frac{4z}{z^2 - 2z + 1} = \left(\frac{z + 1}{z - 1}\right)^2.
+ \]
+ So this is just squaring in some funny coordinates given by $\frac{z + 1}{z - 1}$. This map is holomorphic (except at $z = 0$). Also, we have
+ \[
+ f'(z) = \frac{1}{2}\left(1 - \frac{1}{z^2}\right) = \frac{z^2 - 1}{2z^2}.
+ \]
+ So $f$ is conformal except at $\pm 1$.
+
+ Recall that the first thing we learnt about M\"obius maps is that they take lines and circles to lines and circles. This does something different. We write $z = re^{i\theta}$. Then if we write $z \mapsto w = u + iv$, we have
+ \begin{align*}
+ u &= \frac{1}{2}\left(r + \frac{1}{r}\right) \cos \theta\\
+ v &= \frac{1}{2}\left(r - \frac{1}{r}\right) \sin \theta
+ \end{align*}
+ Fixing the radius and the argument respectively, we see that a circle of radius $\rho$ is mapped to the ellipse
+ \[
+ \frac{u^2}{\frac{1}{4}\left(\rho + \frac{1}{\rho}\right)^2} + \frac{v^2}{\frac{1}{4}\left(\rho - \frac{1}{\rho}\right)^2} = 1,
+ \]
+ while the half-line $\arg(z) = \mu$ is mapped to the hyperbola
+ \[
+ \frac{u^2}{\cos^2\mu} - \frac{v^2}{\sin^2 \mu} = 1.
+ \]
+ We can do something more interesting. Consider an off-centred circle, chosen to pass through the points $-1$ and $-i$. Then the image looks like this:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw [->] (-2, 0) -- (4, 0);
+ \draw [->] (0, -2) -- (0, 4);
+ \node [circ] at (-1, 0) {};
+ \node [circ] at (0, -1) {};
+ \node at (-1, 0) [anchor = north east] {$-1$};
+ \node at (0, -1) [anchor = north east] {$-i$};
+ \draw [mblue] (1, 1) circle [radius=2.236];
+
+ \draw [->] (5, 1) -- (7, 1) node [pos=0.5, above] {$f$};
+
+ \begin{scope}[shift={(11, 0)}]
+ \draw [->] (-3, 0) -- (5, 0);
+ \draw [->] (0, -2) -- (0, 4);
+ \draw [mblue, domain=0:360, samples=50] plot ({(sqrt (7 + 4.4721*(sin(\x) + cos(\x))) + 1/(sqrt (7 + 4.4721*(sin(\x) + cos(\x))))) * (2.236*cos(\x) + 1)/(sqrt (7 + 4.4721*(sin(\x) + cos(\x))))},{(sqrt (7 + 4.4721*(sin(\x) + cos(\x))) - 1/(sqrt (7 + 4.4721*(sin(\x) + cos(\x))))) * (2.236*sin(\x) + 1)/(sqrt (7 + 4.4721*(sin(\x) + cos(\x))))});
+ \node [circ] at (-2, 0) {};
+ \node [circ] at (0, 0) {};
+ \node at (-2, 0) [below] {$f(-1)$};
+ \node at (0, 0) [anchor = south west] {$f(-i)$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ Note that we have a singularity at $f(-1) = -1$. This is exactly the point where $f$ is not conformal, and is no longer required to preserve angles.
+
+ This is a crude model of an aerofoil, and the transformation is known as the Joukowsky transform.
+
+ In applied mathematics, this is used to model fluid flow over a wing in terms of the analytically simpler flow across a circular section.
+\end{eg}
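The claim that circles map to ellipses under the Joukowsky transform can be verified directly. This sketch (an illustration, not part of the notes) checks that points on the circle $|z| = \rho$ land on the ellipse with semi-axes $\frac{1}{2}(\rho + \frac{1}{\rho})$ and $\frac{1}{2}(\rho - \frac{1}{\rho})$.

```python
import cmath, math

rho = 1.7
a = (rho + 1 / rho) / 2   # semi-axis along u
b = (rho - 1 / rho) / 2   # semi-axis along v

for k in range(36):
    theta = 2 * math.pi * k / 36
    z = rho * cmath.exp(1j * theta)
    w = (z + 1 / z) / 2           # Joukowsky transform
    u, v = w.real, w.imag
    # point lies on the stated ellipse
    assert abs((u / a) ** 2 + (v / b) ** 2 - 1) < 1e-12
```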
+
+We interlude with a little trick. Often, there is no simple way to describe regions in space. However, if the region is bounded by circular arcs, there is a trick that can be useful.
+
+Suppose we have a circular arc between $\alpha$ and $\beta$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (8 ,0);
+ \draw [dashed] (1, 0) -- (4, 2) -- (6, 0);
+ \node [circ] (a) at (3, 1.333) {};
+ \node [circ] (c) at (5, 1) {};
+ \node [circ] (b) at (4, 2) {};
+ \node at (a) [left] {$\alpha$};
+ \node at (b) [above] {$z$};
+ \node at (c) [right] {$\beta$};
+ \drawcirculararc(5, 1)(4,2)(3, 1.333);
+
+ \draw (1.4, 0) arc(0:33.69:0.4);
+ \node [right] at (1.4, 0.2) {$\phi$};
+ \draw (6.4, 0) arc(0:135:0.4);
+ \node [right] at (6.4, 0.2) {$\theta$};
+ \draw (4.2828, 1.7172) arc(315:213.69:0.4);
+ \node [below] at (4, 1.6) {$\mu$};
+ \end{tikzpicture}
+\end{center}
+Along this arc, $\mu = \theta - \phi = \arg(z - \alpha) - \arg(z - \beta)$ is constant, by elementary geometry. Thus, for each fixed $\mu$, the equation
+\[
+ \arg(z - \alpha) - \arg(z - \beta) = \mu
+\]
+determines an arc through the points $\alpha, \beta$.
+
+To obtain a region bounded by two arcs, we find the two values $\mu_-$ and $\mu_+$ that describe the boundary arcs. Then a point lies between the two arcs if and only if its $\mu$ is in between $\mu_-$ and $\mu_+$, i.e.\ the region is
+\[
+ \left\{z: \arg\left(\frac{z - \alpha}{z - \beta}\right) \in [\mu_-, \mu_+]\right\}.
+\]
+This says the point has to lie in some arc between those given by $\mu_-$ and $\mu_+$.
+
+For example, the following region:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -1) -- (0, 3);
+ \fill [mblue, opacity=0.5] (2, 0) arc (0:180:2) -- (2, 0);
+ \draw (2, 0) arc (0:180:2);
+ \node [circ] at (-2, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (0, 2) {};
+ \node at (-2, 0) [below] {$-1$};
+ \node at (2, 0) [below] {$1$};
+ \node at (0, 2) [anchor = south west] {$i$};
+ \end{tikzpicture}
+\end{center}
+can be given by
+\[
+ \mathcal{U} = \left\{z: \arg\left(\frac{z - 1}{z + 1}\right) \in \left[\frac{\pi}{2}, \pi\right]\right\}.
+\]
+Thus for instance the map
+\[
+ z \mapsto -\left(\frac{z - 1}{z + 1}\right)^2
+\]
+is a conformal equivalence from $\mathcal{U}$ to $\H$. This is because if $z \in \mathcal{U}$, then $\frac{z - 1}{z + 1}$ has argument in $\left[\frac{\pi}{2}, \pi\right]$, and can have arbitrary magnitude, since $z$ can be made as close to $-1$ (or $1$) as you wish. Squaring doubles the angle and gives the lower half-plane, and multiplying by $-1$ gives the upper half-plane.
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \fill [mblue, opacity=0.5] (2, 0) arc (0:180:2) -- (2, 0);
+ \draw (2, 0) arc (0:180:2);
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+
+ \draw [->] (3.5, 0) -- (5.5, 0) node [pos=0.5, above] {$z \mapsto \frac{z - 1}{z + 1}$};
+ \begin{scope}[shift={(9,0)}]
+ \fill [mblue, opacity=0.5] (-3, 3) rectangle (0, 0);
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+
+ \end{scope}
+ \begin{scope}[shift={(0,-8)}]
+ \draw [->] (-5.5, 0) -- (-3.5, 0) node [pos=0.5, above] {$z \mapsto z^2$};
+
+ \fill [mblue, opacity=0.5] (-3, 0) rectangle (3, -3);
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+
+ \draw [->] (3.5, 0) -- (5.5, 0) node [pos=0.5, above] {$z \mapsto -z$};
+ \end{scope}
+ \begin{scope}[shift={(9,-8)}]
+ \fill [mblue, opacity=0.5] (-3, 0) rectangle (3, 3);
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
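The composite map above can be spot-checked in code. This sketch (an illustration, not part of the notes) takes a few points of the half-disk $\mathcal{U}$ and checks both the intermediate argument condition and that the final image lies in $\H$.

```python
import cmath, math

def to_H(z):
    # the composite z -> -((z - 1)/(z + 1))^2
    return -((z - 1) / (z + 1)) ** 2

# sample points with |z| < 1 and Im z > 0, i.e. inside the half-disk U
for z in [0.3 + 0.4j, -0.2 + 0.1j, 0.5j]:
    w = (z - 1) / (z + 1)
    assert math.pi / 2 < cmath.phase(w) < math.pi   # arg in (pi/2, pi)
    assert to_H(z).imag > 0                         # image lies in H
```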
+In fact, there is a really powerful theorem telling us most things are conformally equivalent to the unit disk.
+
+\begin{thm}[Riemann mapping theorem]
+ Let $\mathcal{U} \subseteq \C$ be the bounded domain enclosed by a simple closed curve, or more generally any simply connected domain not equal to all of $\C$. Then $\mathcal{U}$ is conformally equivalent to $D = \{z: |z| < 1\} \subseteq \C$.
+\end{thm}
+This in particular tells us any two simply connected domains are conformally equivalent.
+
+The terms \emph{simple closed curve} and \emph{simply connected} are defined as follows:
+
+\begin{defi}[Simple closed curve]
+ A \emph{simple closed curve} is the image of an injective map $S^1 \to \C$.
+\end{defi}
+It should be clear (though not trivial to prove) that a simple closed curve separates $\C$ into a bounded part and an unbounded part.
+
+The more general statement requires the following definition:
+\begin{defi}[Simply connected]
+ A domain $\mathcal{U}\subseteq \C$ is \emph{simply connected} if every continuous map from the circle $f: S^1 \to \mathcal{U}$ can be extended to a continuous map from the disk $F: \overline{D^2} \to \mathcal{U}$ such that $F|_{\partial \overline{D^2}} = f$. Alternatively, any loop can be continuously shrunk to a point.
+\end{defi}
+
+\begin{eg}
+ The unit disk is simply-connected, but the region defined by $1 < |z| < 2$ is not, since the circle $|z| = 1.5$ cannot be extended to a map from a disk.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \fill [mblue, opacity=0.5] circle [radius=2];
+ \fill [white] circle [radius=1];
+ \draw circle [radius=1];
+ \draw circle [radius=2];
+ \draw (-1, 0) -- (1, 0);
+ \draw (0, -1) -- (0, 1);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+We will not prove this statement, but it is nice to know that this is true.
+
+If we believe that the unit disk is relatively simple, then since all simply connected regions are conformally equivalent to the disk, all simply connected domains are boring. This suggests we will later encounter domains with holes to make the course interesting. This is in fact true, and we will study these holes in depth later.
+
+\begin{eg}
+ The exponential function
+ \[
+ e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots
+ \]
+ defines a function $\C \to \C^*$. In fact it is a conformal mapping. This sends the region $\{z: \Re(z) \in [a, b]\}$ to the annulus $\{e^a \leq |w| \leq e^b\}$. One is simply connected, but the other is not --- this is not a problem since $e^z$ is \emph{not} bijective on the strip.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \draw (1, 3) -- (1, -3);
+ \draw (2, 3) -- (2, -3);
+ \fill [mblue, opacity=0.5] (1, 3) rectangle (2, -3);
+ \node [anchor=north east] at (1, 0) {$a$};
+ \node [anchor=north west] at (2, 0) {$b$};
+
+ \draw [->] (3.5, 0) -- (5.5, 0) node [above] {$e^z$};
+ \begin{scope}[shift={(9, 0)}]
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \fill [mblue, opacity=0.5] circle [radius=2];
+ \fill [white] circle [radius=1];
+ \draw circle [radius=1];
+ \draw circle [radius=2];
+ \draw (-1, 0) -- (1, 0);
+ \draw (0, -1) -- (0, 1);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\subsection{Power series}
+Many of our favourite functions are given by power series, including polynomials (which are degenerate power series in some sense), the exponential function and trigonometric functions. We will later show that \emph{all} holomorphic functions are (locally) given by power series, but without knowing this fact, we shall now study some of the basic properties of power series.
+
+It turns out power series are nice. The key property is that a power series is infinitely differentiable inside its circle of convergence, and the derivative is given by differentiating term by term.
+
+We begin by recalling some facts about convergence from IB Analysis II.
+\begin{defi}[Uniform convergence]
+ A sequence $(f_n)$ of functions \emph{converges uniformly} to $f$ if for all $\varepsilon > 0$, there is some $N$ such that $n > N$ implies $|f_n(z) - f(z)| < \varepsilon$ for all $z$.
+\end{defi}
+
+\begin{prop}
+ The uniform limit of continuous functions is continuous.
+\end{prop}
+
+\begin{prop}[Weierstrass M-test]
+ For a sequence of functions $f_n$, if we can find $(M_n) \subseteq \R_{>0}$ such that $|f_n(x)| < M_n$ for all $x$ in the domain, then convergence of $\sum M_n$ implies that $\sum f_n(x)$ converges uniformly on the domain.
+\end{prop}
+
+\begin{prop}
+ Given any constants $\{c_n\}_{n \geq 0} \subseteq \C$, there is a unique $R \in [0, \infty]$ such that the series $\sum_{n = 0}^\infty c_n(z - a)^n$ converges absolutely if $|z - a| < R$ and diverges if $|z - a| > R$. Moreover, if $0 < r < R$, then the series converges uniformly on $\{z: |z - a| \leq r\}$. This $R$ is known as the \emph{radius of convergence}.
+\end{prop}
+So while we don't necessarily get uniform convergence on the whole domain, we get uniform convergence on all compact subsets of the domain.
+
+We are now going to look at power series. They will serve as examples, and as we will see later, universal examples, of holomorphic functions. The most important result we need is the following result about their differentiability.
+
+\begin{thm}
+ Let
+ \[
+ f(z) = \sum_{n = 0}^\infty c_n (z - a)^n
+ \]
+ be a power series with radius of convergence $R > 0$. Then
+ \begin{enumerate}
+ \item $f$ is holomorphic on $B(a; R) = \{z: |z - a| < R\}$.
+ \item $f'(z) = \sum_{n = 1}^\infty n c_n (z - a)^{n - 1}$, which also has radius of convergence $R$.
+ \item Therefore $f$ is infinitely complex differentiable on $B(a; R)$. Furthermore,
+ \[
+ c_n = \frac{f^{(n)}(a)}{n!}.
+ \]
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ Without loss of generality, take $a = 0$. The third part follows from the first two, which we will prove simultaneously. We first show that the derivative series has radius of convergence $R$, so that we can freely manipulate it.
+
+ Certainly, we have $|n c_n| \geq |c_n|$. So by comparison to the series for $f$, we can see that the radius of convergence of $\sum n c_n z^{n - 1}$ is at most $R$. But if $|z| < \rho < R$, then we can see
+ \[
+ \frac{|n c_n z^{n - 1}|}{|c_n \rho^{n - 1}|} = n \left|\frac{z}{\rho}\right|^{n - 1} \to 0
+ \]
+ as $n \to \infty$. So by comparison to $\sum |c_n| \rho^{n - 1}$, which converges (as $\rho < R$), we see that the radius of convergence of $\sum n c_n z^{n - 1}$ is at least $\rho$. Since $\rho < R$ was arbitrary, the radius of convergence must be exactly $R$.
+
+ Now we want to show $f$ really is differentiable with that derivative. Pick $z, w$ such that $|z|, |w| \leq \rho$ for some $\rho < R$ as before.
+
+ Define a new function
+ \[
+ \varphi (z, w) = \sum_{n = 1}^\infty c_n \sum_{j = 0}^{n - 1} z^j w^{n - 1 - j}.
+ \]
+ Noting
+ \[
+ \left|c_n \sum_{j = 0}^{n - 1} z^j w^{n - 1 - j}\right| \leq n |c_n| \rho^n,
+ \]
+ we know the series defining $\varphi$ converges uniformly on $\{|z| \leq \rho, |w| \leq \rho\}$, and hence to a continuous limit.
+
+ If $z \not= w$, then using the formula for the (finite) geometric series, we know
+ \[
+ \varphi(z, w) = \sum_{n = 1}^\infty c_n\left(\frac{z^n - w^n}{z - w}\right) = \frac{f(z) - f(w)}{z - w}.
+ \]
+ On the other hand, if $z = w$, then
+ \[
+ \varphi(z, z) = \sum_{n = 1}^\infty c_n n z^{n - 1}.
+ \]
+ Since $\varphi$ is continuous, we know
+ \[
+ \lim_{w \to z} \frac{f(z) - f(w)}{z - w} = \varphi(z, z) = \sum_{n = 1}^\infty n c_n z^{n - 1}.
+ \]
+ So $f'(z) = \varphi(z, z)$ as claimed, proving (i) and (ii).
+\end{proof}
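Termwise differentiation can be illustrated numerically. This sketch (not part of the notes) truncates the series for $e^z$, differentiates it term by term, and compares against a finite-difference derivative and against the series itself (since $(e^z)' = e^z$).

```python
# Termwise differentiation of the truncated power series for exp.
N = 40
c = [1.0]
for n in range(1, N):
    c.append(c[-1] / n)                  # c_n = 1/n!

def f(z):
    return sum(c[n] * z**n for n in range(N))

def f_prime(z):
    # the termwise derivative: sum of n c_n z^{n-1}
    return sum(n * c[n] * z**(n - 1) for n in range(1, N))

z = 0.5 - 0.3j
h = 1e-6
fd = (f(z + h) - f(z - h)) / (2 * h)     # central finite difference
assert abs(f_prime(z) - fd) < 1e-8
assert abs(f_prime(z) - f(z)) < 1e-12    # consistent with (e^z)' = e^z
```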
+
+\begin{cor}
+ Given a power series
+ \[
+ f(z) = \sum_{n \geq 0} c_n (z - a)^n
+ \]
+ with radius of convergence $R > 0$, and given $0 < \varepsilon < R$, if $f$ vanishes on $B(a, \varepsilon)$, then $f$ vanishes identically.
+\end{cor}
+
+\begin{proof}
+ If $f$ vanishes on $B(a, \varepsilon)$, then all its derivatives vanish, and hence the coefficients all vanish. So it is identically zero.
+\end{proof}
+This is obviously true, but will prove useful later.
+
+It might be useful to have an explicit expression for $R$. For example, by IA Analysis I, we know
+\begin{align*}
+ R &= \sup \{r \geq 0: |c_n|r^n \to 0\text{ as }n \to \infty\}\\
+ &= \frac{1}{\limsup \sqrt[n]{|c_n|}}.
+\end{align*}
+But we probably won't need these.
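Both expressions for $R$ are easy to try out numerically. This sketch (an illustration, not part of the notes) uses $c_n = 2^n$, for which $R = \frac{1}{2}$, approximating the $\limsup$ by a late term of the sequence $|c_n|^{1/n}$.

```python
# The two formulas for the radius of convergence, tried on c_n = 2^n.
c = lambda n: 2.0 ** n

# root formula: R = 1 / limsup |c_n|^{1/n}; here |c_n|^{1/n} = 2 for all n
R_root = 1 / c(1000) ** (1 / 1000)
assert abs(R_root - 0.5) < 1e-9

# sup formula: |c_n| r^n -> 0 for r < 1/2, but not for r > 1/2
assert c(200) * 0.49 ** 200 < 1e-1    # (2 * 0.49)^200 = 0.98^200 is small
assert c(200) * 0.51 ** 200 > 1e1     # (2 * 0.51)^200 = 1.02^200 is large
```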
+
+\subsection{Logarithm and branch cuts}
+Recall that the exponential function
+\[
+ e^z = \exp(z) = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots
+\]
+has a radius of convergence of $\infty$. So it is an entire function. We have the usual standard properties, such as $e^{z + w} = e^z e^w$, and also
+\[
+ e^{x + iy} = e^x e^{iy} = e^x(\cos y + i \sin y).
+\]
+So given $w \in \C^* = \C\setminus\{0\}$, there are solutions to $e^z = w$. In fact, this has infinitely many solutions, differing by adding integer multiples of $2\pi i$. In particular, $e^z = 1$ if and only if $z$ is an integer multiple of $2\pi i$.
+
+This means $e^z$ does not have a well-defined inverse. However, we \emph{do} want to talk about the logarithm. The solution is to fix a particular range of allowed $\theta$. For example, we can define the logarithm as the function sending $r e^{i\theta}$ to $\log r + i\theta$, where we now force $-\pi < \theta < \pi$. This is all well, except it is now not defined on the negative real axis (we can define $\log(-1)$ as, say, $i \pi$, but we would then lose continuity).
+
+There is no reason why we should pick $-\pi < \theta < \pi$. We could as well require $300 \pi < \theta < 302\pi$. In general, we make the following definition:
+\begin{defi}[Branch of logarithm]
+ Let $U \subseteq \C^*$ be an open subset. A \emph{branch of the logarithm} on $U$ is a continuous function $\lambda: U \to \C$ for which $e^{\lambda (z)} = z$ for all $z \in U$.
+\end{defi}
+This is a partially defined inverse to the exponential function, only defined on some domain $U$. These need not exist for all $U$. For example, there is no branch of the logarithm defined on the whole $\C^*$, as we will later prove.
+
+\begin{eg}
+ Let $U = \C\setminus \R_{\leq 0}$, a ``slit plane''.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (2, 0);
+ \draw [very thick] (-2, 0) -- (0, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ Then for each $z \in U$, we write $z = re^{i \theta}$, with $-\pi < \theta < \pi$. Then $\lambda(z) = \log (r) + i \theta$ is a branch of the logarithm. This is the \emph{principal branch}.
+
+ On $U$, there is a continuous function $\arg: U \to (-\pi, \pi)$, which is why we can construct a branch. This is not true on, say, the unit circle.
+\end{eg}
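Python's `cmath.log` happens to implement exactly this principal branch, which makes the discontinuity across the slit easy to see. This sketch is an illustration, not part of the notes.

```python
import cmath, math

# cmath.log is the principal branch: exp(log z) = z off the negative reals
z = -2 + 0.5j
assert abs(cmath.exp(cmath.log(z)) - z) < 1e-12
assert -math.pi < cmath.log(z).imag < math.pi

# the branch is discontinuous across the slit R_{<= 0}
above = cmath.log(-1 + 1e-9j)   # just above the slit: Im close to +pi
below = cmath.log(-1 - 1e-9j)   # just below the slit: Im close to -pi
assert abs(above.imag - math.pi) < 1e-6
assert abs(below.imag + math.pi) < 1e-6
assert abs(above - below) > 6   # a jump of about 2*pi
```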
+
+Fortunately, as long as we have picked a branch, most things we want to be true about $\log$ is indeed true.
+\begin{prop}
+ On $U = \{z \in \C: z \not\in \R_{\leq 0}\}$, the principal branch $\log: U \to \C$ is a holomorphic function. Moreover,
+ \[
+ \frac{\d}{\d z}\log z = \frac{1}{z}.
+ \]
+ If $|z| < 1$, then
+ \[
+ \log (1 + z) = \sum_{n \geq 1} (-1)^{n - 1} \frac{z^n}{n} = z - \frac{z^2}{2} + \frac{z^3}{3} - \cdots.
+ \]
+\end{prop}
+
+\begin{proof}
+ The logarithm is holomorphic since it is a local inverse to a holomorphic function. Since $e^{\log z} = z$, the chain rule tells us $\frac{\d}{\d z} (\log z) = \frac{1}{z}$.
+
+ To show that $\log(1 + z)$ is indeed given by the said power series, note that the power series does have a radius of convergence $1$ by, say, the ratio test. So by the previous result, it has derivative
+ \[
+ 1 - z + z^2 - \cdots = \frac{1}{1 + z}.
+ \]
+ Therefore, $\log(1 + z)$ and the claimed power series have equal derivative, and hence coincide up to a constant. Since they agree at $z = 0$, they must in fact be equal.
+\end{proof}
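This sketch (an illustration, not part of the notes) compares a partial sum of $z - \frac{z^2}{2} + \frac{z^3}{3} - \cdots$ with the principal-branch logarithm at a point with $|z| < 1$.

```python
import cmath

# partial sum of the series for log(1 + z), valid for |z| < 1
z = 0.4 + 0.3j                      # |z| = 0.5 < 1
s = sum((-1) ** (n - 1) * z**n / n for n in range(1, 200))
assert abs(s - cmath.log(1 + z)) < 1e-12
```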
+Having defined the logarithm, we can now define general power functions.
+
+Let $\alpha \in \C$ and $\log: U \to \C$ be a branch of the logarithm. Then we can define
+\[
+ z^{\alpha} = e^{\alpha \log z}
+\]
+on $U$. This is again only defined when $\log$ is.
+
+In a more general setting, we can view $\log$ as an instance of a multi-valued function on $\C^*$. At each point, the function $\log$ can take many possible values, and every time we use $\log$, we have to pick one of those values (in a continuous way).
+
+In general, we say that a point $p \in \C $ is a \emph{branch point} of a multivalued function if the function cannot be given a continuous single-valued definition in a (punctured) neighbourhood $B(p, \varepsilon) \setminus \{p\}$ of $p$ for any $\varepsilon > 0$. For example, $0$ is a branch point of $\log$.
+
+\begin{eg}
+ Consider the function
+ \[
+ f(z) = \sqrt{z(z - 1)}.
+ \]
+ This has \emph{two} branch points, $z = 0$ and $z = 1$, since we cannot define a square root consistently in a punctured neighbourhood of either point, the square root being defined via the logarithm.
+\end{eg}
+
+Note we can define a continuous branch of $f$ on either
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [very thick] (-2, 0) -- (0, 0);
+ \draw [very thick] (1, 0) -- (2, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \node [below] at (1, 0) {$1$};
+ \node [anchor = north east] {$0$};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+\end{center}
+or we can instead remove just the finite slit between $0$ and $1$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [very thick] (1, 0) -- (0, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \node [below] at (1, 0) {$1$};
+ \node [anchor = north east] {$0$};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+\end{center}
+Why is the second case possible? Note that
+\[
+ f(z) = e^{\frac{1}{2}(\log(z) + \log(z - 1))}.
+\]
+If we move around a path encircling the finite slit, the value of each of $\log(z)$ and $\log(z - 1)$ jumps by $2\pi i$, so the exponent $\frac{1}{2}(\log(z) + \log(z - 1))$ changes by $2\pi i$. Since $e^{2\pi i} = 1$, the expression for $f(z)$ is single-valued.
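We can watch this happen numerically. This sketch (an illustration, not part of the notes) continues $f(z) = \sqrt{z(z-1)}$ along the circle $|z| = 3$, which encircles both branch points, by tracking the argument of $w = z(z - 1)$ continuously; after a full loop the argument of $w$ has grown by $4\pi$, so the square root returns to its starting value.

```python
import cmath, math

# Continue the argument of w = z(z-1) around the circle |z| = 3.
N = 2000
total = 0.0     # accumulated change in arg(w)
prev = None
for k in range(N + 1):
    z = 3 * cmath.exp(2j * math.pi * k / N)
    ph = cmath.phase(z * (z - 1))
    if prev is not None:
        d = ph - prev
        # unwrap so the argument varies continuously between steps
        while d > math.pi:
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        total += d
    prev = ph

assert abs(total - 4 * math.pi) < 1e-6        # w winds twice around 0
# the square root picks up e^{i total / 2} = e^{2 pi i} = 1: single-valued
assert abs(cmath.exp(1j * total / 2) - 1) < 1e-6
```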
+
+While these two ways of cutting slits look rather different, if we consider this to be on the Riemann sphere, then these two cuts now look similar. It's just that one passes through the point $\infty$, and the other doesn't.
+
+The introduction of these slits is practical and helpful for many of our problems. However, theoretically, this is not the best way to think about multi-valued functions. A better treatment will be provided in the IID Riemann Surfaces course.
+
+\section{Contour integration}
+In the remainder of the course, we will spend all our time studying integration of complex functions. At first, you might think this is just an obvious generalization of integration of real functions. This is not true. Starting from Cauchy's theorem, beautiful and amazing properties of complex integration come one after another. Using these, we can prove many interesting properties of holomorphic functions, as well as do lots of integrals we previously were not able to do.
+
+\subsection{Basic properties of complex integration}
+We start by considering functions $f: [a, b] \to \C$. We say such a function is Riemann integrable if $\Re(f)$ and $\Im (f)$ are individually integrable, and the integral is defined to be
+\[
+ \int_a^b f(t) \;\d t = \int_a^b \Re(f(t))\;\d t + i \int_a^b \Im(f(t))\;\d t.
+\]
+While Riemann integrability is a technical condition to check, we know that all continuous functions are integrable, and this will apply in most cases we care about in this course. After all, this is not a course on exotic functions.
+
+We start from some less interesting facts, and slowly develop and prove some really amazing results.
+\begin{lemma}
+ Suppose $f: [a, b] \to \C$ is continuous (and hence integrable). Then
+ \[
+ \left|\int_a^b f(t)\;\d t\right| \leq (b - a) \sup_t |f(t)|
+ \]
+ with equality if and only if $f$ is constant.
+\end{lemma}
+
+\begin{proof}
+ We let
+ \[
+ \theta = \arg\left(\int_a^b f(t)\;\d t\right),
+ \]
+ and
+ \[
+ M = \sup_t |f(t)|.
+ \]
+ Then we have
+ \begin{align*}
+ \left|\int_a^b f(t)\;\d t\right| &= \int_a^b e^{-i\theta} f(t)\;\d t\\
+ &= \int_a^b \Re(e^{-i\theta} f(t))\;\d t\\
+ &\leq (b - a) M,
+ \end{align*}
+ with equality if and only if $|f(t)| =M$ and $\arg f(t) = \theta$ for all $t$, i.e.\ $f$ is constant.
+\end{proof}
+
+Integrating functions of the form $f: [a, b] \to \C$ is easy. What we really care about is integrating a genuine complex function $f: U \subseteq \C \to \C$. However, we cannot just ``integrate'' such a function. There is no given one-dimensional domain we can integrate along. Instead, we have to make some up ourselves. We have to define some \emph{paths} in the complex plane, and integrate along them.
+
+\begin{defi}[Path]
+ A \emph{path} in $\C$ is a continuous function $\gamma: [a, b] \to \C$, where $a, b\in \R$.
+\end{defi}
+For general paths, we just require continuity, and do not impose any conditions about, say, differentiability.
+
+Unfortunately, the world is full of weird paths. There are even paths that fill up the whole of the unit square. So we might want to look at some nicer paths.
+
+\begin{defi}[Simple path]
+ A path $\gamma: [a, b] \to \C$ is \emph{simple} if $\gamma(t_1) = \gamma(t_2)$ only if $t_1 = t_2$ or $\{t_1, t_2\} = \{a, b\}$.
+\end{defi}
+In other words, it either does not intersect itself, or only intersects itself at the end points.
+
+\begin{defi}[Closed path]
+ A path $\gamma: [a, b] \to \C$ is \emph{closed} if $\gamma(a) = \gamma(b)$.
+\end{defi}
+
+\begin{defi}[Contour]
+ A \emph{contour} is a simple closed path which is piecewise $C^1$, i.e.\ piecewise continuously differentiable.
+\end{defi}
+
+For example, it can look something like this:
+\begin{center}
+ \begin{tikzpicture}[scale=2]
+ \draw [->-=0.6] plot [smooth] coordinates {(0, 0) (0.5, 0.1) (1, -0.1)};
+ \node [circ] at (0, 0) {};
+ \draw [->-=0.3, ->-=0.8] plot [smooth] coordinates {(1, -0.1) (0.5, -1) (0, 0)};
+ \end{tikzpicture}
+\end{center}
+Most of the time, we are just interested in integration along contours. However, it is also important to understand integration along simple $C^1$-smooth paths, since we might want to break our contour up into different segments. Later, we will move on to consider more general closed piecewise $C^1$ paths, where we can loop around a point many times.
+
+We can now define what it means to integrate along a smooth path.
+\begin{defi}[Complex integration]
+ If $\gamma: [a, b] \to U \subseteq \C$ is $C^1$-smooth and $f: U \to \C$ is continuous, then we define the \emph{integral} of $f$ along $\gamma$ as
+ \[
+ \int_\gamma f(z) \;\d z = \int_a^b f(\gamma(t)) \gamma'(t) \;\d t.
+ \]
+ By summing over subdomains, the definition extends to piecewise $C^1$-smooth paths, and in particular contours.
+\end{defi}
+
+We have the following elementary properties:
+\begin{enumerate}
+ \item The definition is insensitive to reparametrization. Let $\phi: [a', b'] \to [a, b]$ be $C^1$ such that $\phi(a') = a, \phi(b') = b$. If $\gamma$ is a $C^1$ path and $\delta= \gamma \circ \phi$, then
+ \[
+ \int_{\gamma} f(z) \;\d z = \int_{\delta}f(z) \;\d z.
+ \]
+ This is just the regular change of variables formula, since
+ \[
+ \int_{a'}^{b'} f(\gamma(\phi(t))) \gamma'(\phi(t)) \phi'(t)\;\d t= \int_a^b f(\gamma(u)) \gamma'(u)\;\d u
+ \]
+ if we let $u = \phi(t)$.
+ \item If $a < u < b$, then
+ \[
+ \int_\gamma f(z)\;\d z = \int_{\gamma|_{[a, u]}} f(z)\;\d z + \int_{\gamma|_{[u, b]}}f(z) \;\d z.
+ \]
+\end{enumerate}
+These together tell us the integral depends only on the path itself, not on how we parametrize the path or how we cut it up into pieces.
+
+We also have the following easy properties:
+\begin{enumerate}[resume]
+ \item If $-\gamma$ is $\gamma$ with reversed orientation, then
+ \[
+ \int_{-\gamma} f(z)\;\d z = -\int_\gamma f(z)\;\d z.
+ \]
+ \item If we set for $\gamma: [a, b] \to \C$ the \emph{length}
+ \[
+ \length(\gamma) = \int_a^b |\gamma'(t)|\;\d t,
+ \]
+ then
+ \[
+ \left|\int_\gamma f(z)\;\d z\right| \leq \length (\gamma) \sup_t |f(\gamma(t))|.
+ \]
+\end{enumerate}
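The estimate in the last property is worth internalizing, and it is easy to check numerically. The following is a quick sanity check (not part of the notes), using an arbitrarily chosen path $\gamma(t) = t + it^2$ and $f(z) = e^z$; the composite trapezoidal rule approximates both the integral and the length.

```python
# Numerical sanity check of |integral of f along gamma| <= length(gamma) * sup |f(gamma(t))|.
# The path gamma(t) = t + i t^2 and f(z) = e^z are arbitrary choices for illustration.
import numpy as np

def trapezoid(vals, t):
    # composite trapezoidal rule for the integral of vals dt
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))

t = np.linspace(0.0, 1.0, 20001)
gamma = t + 1j * t**2          # gamma(t)
dgamma = 1.0 + 2j * t          # gamma'(t)

f_vals = np.exp(gamma)         # f(gamma(t)) with f(z) = e^z

integral = trapezoid(f_vals * dgamma, t)
length = trapezoid(np.abs(dgamma), t)
bound = length * np.abs(f_vals).max()

print(abs(integral), bound)
```

Since $e^z$ has antiderivative $e^z$, the integral here is $e^{1 + i} - 1$, and the computed bound comfortably dominates its modulus.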
+
+\begin{eg}
+ Take $U = \C^*$, and let $f(z) = z^n$ for $n \in \Z$. We pick $\phi: [0, 2\pi] \to U$ that sends $\theta \mapsto e^{i\theta}$. Then
+ \[
+ \int_\phi f(z)\;\d z =
+ \begin{cases}
+ 2\pi i & n = -1\\
+ 0& \text{otherwise}
+ \end{cases}
+ \]
+ To show this, we have
+ \begin{align*}
+ \int_{\phi} f(z)\;\d z &= \int_0^{2\pi} e^{in\theta} ie^{i\theta}\;\d \theta\\
+ &= i\int_0^{2\pi} e^{i(n + 1)\theta}\;\d \theta.
+ \end{align*}
+ If $n = -1$, then the integrand is constantly $1$, and hence gives $2\pi i$. Otherwise, the integrand is $e^{i(n + 1)\theta}$ with $n + 1 \not= 0$, which integrates to zero over the full period $[0, 2\pi]$.
+\end{eg}
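This example is also easy to verify numerically. The sketch below (not part of the notes) approximates $\int_\phi z^n \;\d z$ with the trapezoidal rule for a couple of sample values of $n$:

```python
# Numerically approximate the integral of z^n over the unit circle phi(theta) = e^{i theta}.
# Expect 2*pi*i for n = -1 and (approximately) 0 for any other integer n.
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 40001)
z = np.exp(1j * theta)         # phi(theta)
dz = 1j * np.exp(1j * theta)   # phi'(theta)

def contour_integral(n):
    vals = z**n * dz
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(theta))

print(contour_integral(-1))    # close to 2*pi*i
print(contour_integral(3))     # close to 0
```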
+
+\begin{eg}
+ Take $\gamma$ to be the contour
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+ \draw [thick, mred, ->-=0.7] (-2, 0) -- (2, 0) node [pos=0.7, below] {$\gamma_1$};
+ \draw [thick, mred, ->-=0.3, ->-=0.75] (2, 0) arc (0:180:2) node [pos=0.3, anchor = south west] {$\gamma_2$};
+ \node [below] at (2, 0) {$R$};
+ \node [below] at (-2, 0) {$-R$};
+ \node [anchor = south west] at (0, 2) {$iR$};
+ \end{tikzpicture}
+ \end{center}
+ We parametrize the path in segments by
+ \begin{align*}
+ \gamma_1: [-R, R] &\to \C & \gamma_2: [0, 1] &\to \C\\
+ t &\mapsto t & t &\mapsto R e^{i\pi t}
+ \end{align*}
+ Consider the function $f(z) = z^2$. Then the integral is
+ \begin{align*}
+ \int_\gamma f(z)\;\d z &= \int_{-R}^R t^2 \;\d t + \int_0^1 R^2 e^{2\pi i t} i\pi R e^{i\pi t}\;\d t\\
+ &= \frac{2}{3}R^3 + R^3 i\pi \int_0^1 e^{3 \pi i t}\;\d t\\
+ &= \frac{2}{3}R^3 + R^3 i\pi \left[\frac{e^{3\pi it}}{3\pi i}\right]_0^1\\
+ &= 0
+ \end{align*}
+\end{eg}
+We worked this out explicitly, but we have just wasted our time, since this is just an instance of the fundamental theorem of calculus!
+
+\begin{defi}[Antiderivative]
+ Let $U \subseteq \C$ and $f: U \to \C$ be continuous. An \emph{antiderivative} of $f$ is a holomorphic function $F: U \to \C$ such that $F'(z) = f(z)$.
+\end{defi}
+
+Then the fundamental theorem of calculus tells us:
+\begin{thm}[Fundamental theorem of calculus]
+ Let $f: U \to \C$ be continuous with antiderivative $F$. If $\gamma: [a, b] \to U$ is piecewise $C^1$-smooth, then
+ \[
+ \int_\gamma f(z)\;\d z= F(\gamma(b)) - F(\gamma(a)).
+ \]
+\end{thm}
+In particular, the integral depends only on the end points, and not the path itself. Moreover, if $\gamma$ is closed, then the integral vanishes.
+
+\begin{proof}
+ We have
+ \[
+ \int_\gamma f(z)\;\d z = \int_a^b f(\gamma(t)) \gamma'(t) \;\d t = \int_a^b (F \circ \gamma)' (t)\;\d t.
+ \]
+ Then the result follows from the usual fundamental theorem of calculus, applied to the real and imaginary parts separately.
+\end{proof}
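As a quick illustration (not part of the notes), we can check the theorem numerically for $f(z) = z^2$ with antiderivative $F(z) = z^3/3$, along the arbitrarily chosen straight path $\gamma(t) = (1 + i)t$:

```python
# Check that the integral of z^2 along gamma(t) = (1+i)t equals F(gamma(1)) - F(gamma(0)),
# where F(z) = z^3 / 3 is an antiderivative of z^2.
import numpy as np

t = np.linspace(0.0, 1.0, 20001)
gamma = (1 + 1j) * t
dgamma = (1 + 1j) * np.ones_like(t)   # gamma'(t)

vals = gamma**2 * dgamma
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))

F = lambda z: z**3 / 3
print(integral, F(1 + 1j) - F(0))     # the two should agree
```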
+
+\begin{eg}
+ This allows us to understand the first example we had. We had the function $f(z) = z^n$ integrated along the path $\phi(t) = e^{it}$ (for $0 \leq t \leq 2\pi$).
+
+ If $n \not= -1$, then
+ \[
+ f(z) = \frac{\d}{\d z}\left(\frac{z^{n + 1}}{n + 1}\right).
+ \]
+ So $f$ has a well-defined antiderivative, and the integral vanishes. On the other hand, if $n = -1$, then
+ \[
+ f(z) = \frac{\d}{\d z} (\log z),
+ \]
+ where $\log$ can only be defined on a \emph{slit} plane. It is not defined on the whole unit circle. So we cannot apply the fundamental theorem of calculus.
+
+ Running the argument in reverse, since $\int_\phi z^{-1} \;\d z$ does not vanish, there is no continuous branch of $\log$ on any set $U$ containing the unit circle.
+\end{eg}
+
+\subsection{Cauchy's theorem}
+A question we might ask ourselves is when the anti-derivative exists. A necessary condition, as we have seen, is that the integral around any closed curve has to vanish. This is also sufficient.
+\begin{prop}
+ Let $U \subseteq \C$ be a domain (i.e.\ path-connected non-empty open set), and $f: U \to \C$ be continuous. Moreover, suppose
+ \[
+ \int_\gamma f(z)\;\d z = 0
+ \]
+ for any closed piecewise $C^1$-smooth path $\gamma$ in $U$. Then $f$ has an antiderivative.
+\end{prop}
+
+This is more-or-less the same proof we gave in IA Vector Calculus that a real function is a gradient if and only if the integral about any closed path vanishes.
+
+\begin{proof}
+ Pick our favorite $a_0 \in U$. For $w \in U$, we choose a path $\gamma_w: [0, 1] \to U$ such that $\gamma_w(0) = a_0$ and $\gamma_w(1) = w$.
+
+ We first go through some topological nonsense to show we can pick $\gamma_w$ such that this is piecewise $C^1$. We already know a \emph{continuous} path $\gamma: [0, 1] \to U$ from $a_0$ to $w$ exists, by definition of path connectedness. Since $U$ is open, for all $x$ in the image of $\gamma$, there is some $\varepsilon(x) > 0$ such that $B(x, \varepsilon(x)) \subseteq U$. Since the image of $\gamma$ is compact, it is covered by finitely many such balls. Then it is trivial to pick a piecewise straight path living inside the union of these balls, which is clearly piecewise smooth.
+ \begin{center}
+ \begin{tikzpicture}[scale=2]
+ \draw [thick] plot [smooth] coordinates {(0, 0) (0.7, -0.1) (1.5, 0.2) (2, 0.1)};
+ \node [above] at (1.5, 0.2) {$\gamma$};
+ \node [left] at (0, 0) {$a_0$};
+ \node [circ] at (0, 0) {};
+ \node [right] at (2, 0.1) {$w$};
+ \node [circ] at (2, 0.1) {};
+
+ \draw [mblue, fill=mblue, fill opacity=0.3] (0, 0) circle [radius=0.25];
+ \draw [mblue, fill=mblue, fill opacity=0.3] (0.35, -0.06) circle [radius=0.15];
+ \draw [mblue, fill=mblue, fill opacity=0.3] (0.7, -0.1) circle [radius=0.35];
+ \draw [mblue, fill=mblue, fill opacity=0.3] (1.1, 0.04) circle [radius=0.18];
+ \draw [mblue, fill=mblue, fill opacity=0.3] (1.5, 0.2) circle [radius=0.31];
+ \draw [mblue, fill=mblue, fill opacity=0.3] (2, 0.1) circle [radius=0.23];
+
+ \draw [mred, thick] (0, 0) -- (0.2, -0.07) -- (0.4, -0.15) -- (1.2, 0.010) -- (1.81, 0.12) node [pos=0.5, below] {$\gamma_w$} -- (2, 0.1);
+ \end{tikzpicture}
+ \end{center}
+ We thus define
+ \[
+ F(w) = \int_{\gamma_w} f(z)\;\d z.
+ \]
+ Note that this $F(w)$ is independent of the choice of $\gamma_w$, by our hypothesis on $f$ --- given another choice $\tilde{\gamma}_w$, we can form the new path $\gamma_w * (-\tilde{\gamma}_w)$, namely the path obtained by concatenating $\gamma_w$ with $-\tilde{\gamma}_w$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [left] {$a_0$};
+ \node at (2, 2) [circ] {};
+ \node at (2, 2) [right] {$w$};
+ \draw [->-=0.6] plot [smooth, tension=1] coordinates {(0, 0) (0.8, 1.7) (2, 2)};
+ \node [above] at (0.8, 1.7) {$\tilde{\gamma}_w$};
+ \draw [->-=0.6] plot [smooth, tension=1] coordinates {(0, 0) (1.2, -0.1) (2, 2)};
+ \node at (2, 1) {$\gamma_w$};
+ \end{tikzpicture}
+ \end{center}
+ This is a closed piecewise $C^1$-smooth curve. So
+ \[
+ \int_{\gamma_w * (-\tilde{\gamma}_w)} f(z) \;\d z = 0.
+ \]
+ The left hand side is
+ \[
+ \int_{\gamma_w} f(z)\;\d z + \int_{-\tilde{\gamma}_w}f(z)\;\d z = \int_{\gamma_w} f(z)\;\d z - \int_{\tilde{\gamma}_w} f(z) \;\d z.
+ \]
+ So the two integrals agree.
+
+ Now we need to check that $F$ is complex differentiable. Since $U$ is open, we can pick $\varepsilon > 0$ such that $B(w; \varepsilon) \subseteq U$. Let $\delta_h$ be the radial path in $B(w; \varepsilon)$ from $w$ to $w + h$, with $|h| < \varepsilon$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$a_0$};
+ \draw [->-=0.6] plot [smooth, tension=1] coordinates {(0, 0) (1.2, -0.1) (2, 2)};
+ \node at (2, 1) {$\gamma_w$};
+ \node at (2, 2) [circ] {};
+ \node at (2, 2) [right] {$w$};
+
+ \draw [->-=0.7] (2, 2) -- (1.4, 1.9) node [pos=0.5, above] {$\delta_h$} node [circ] {} node [left] {$w + h$};
+ \end{tikzpicture}
+ \end{center}
+ Now note that $\gamma_w * \delta_h$ is a path from $a_0$ to $w + h$. So
+ \begin{align*}
+ F(w + h) &= \int_{\gamma_w * \delta_h} f(z)\;\d z\\
+ &= F(w) + \int_{\delta_h} f(z)\;\d z\\
+ &= F(w) + hf(w) + \int_{\delta_h} (f(z) - f(w))\;\d z.
+ \end{align*}
+ Thus, we know
+ \begin{align*}
+ \left|\frac{F(w + h) - F(w)}{h} - f(w)\right| &\leq \frac{1}{|h|} \left|\int_{\delta_h} f(z) - f(w)\;\d z\right| \\
+ &\leq \frac{1}{|h|} \length(\delta_h) \sup_{\delta_h} |f(z) - f(w)|\\
+ &=\sup_{\delta_h} |f(z) - f(w)|.
+ \end{align*}
+ Since $f$ is continuous at $w$, we have $\sup_{z \in \delta_h} |f(z) - f(w)| \to 0$ as $h \to 0$. So $F$ is differentiable at $w$ with derivative $f(w)$.
+\end{proof}
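The construction in the proof can also be carried out numerically. The sketch below (not part of the notes) takes the star domain $U = \C$ with $a_0 = 0$, defines $F(w)$ by integrating $f(z) = e^z$ along the segment $[0, w]$, and checks that a difference quotient of $F$ recovers $f$; the point $w$ and step $h$ are arbitrary choices.

```python
# Numerically build F(w) as the integral of f along the segment [0, w],
# and check F'(w) ~ f(w) via a difference quotient, for f = exp.
import numpy as np

def segment_integral(f, a, b, n=20001):
    # integral along the straight segment from a to b, parametrized z = a + (b - a)t
    t = np.linspace(0.0, 1.0, n)
    z = a + (b - a) * t
    vals = f(z) * (b - a)             # f(z(t)) z'(t)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))

f = np.exp
F = lambda w: segment_integral(f, 0.0, w)

w, h = 0.3 + 0.7j, 1e-5               # arbitrary sample point and small step
quotient = (F(w + h) - F(w)) / h
print(quotient, f(w))                 # the two should agree closely
```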
+To construct the anti-derivative, we assumed $\int_\gamma f(z) \;\d z = 0$. But we didn't really need that much. To simplify matters, we can just consider curves consisting of straight line segments. To do so, we need to make sure we really can draw line segments between two points.
+
+You might think --- aha! We should work with convex spaces. No. We do not need such a strong condition. Instead, all we need is that we have a distinguished point $a_0$ such that there is a line segment from $a_0$ to any other point.
+
+\begin{defi}[Star-shaped domain]
+ A \emph{star-shaped domain} or \emph{star domain} is a domain $U$ such that there is some $a_0 \in U$ such that the line segment $[a_0, w] \subseteq U$ for all $w \in U$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (1, 0) -- (2, 2) -- (0, 1) -- (-2, 2) -- (-1, 0) -- (-2, -2) -- (0, -1) -- (2, -2) -- (1, 0);
+ \node [circ] {};
+ \node [below] {$a_0$};
+ \draw (0, 0) -- (1, 1.1) node [circ] {} node [right] {$w$};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+This is weaker than requiring $U$ to be convex, which demands that the line segment between \emph{any two points} of $U$ lies in $U$.
+
+In general, we have the implications
+\[
+ U\text{ is a disc}\Rightarrow U\text{ is convex} \Rightarrow U\text{ is star-shaped} \Rightarrow U\text{ is path-connected},
+\]
+and none of the implications reverse.
+
+In the proof, we also needed to construct a small straight line segment $\delta_h$. However, this is a non-issue. By the openness of $U$, we can pick an open ball $B(w, \varepsilon) \subseteq U$, and we can certainly construct the straight line in this ball.
+
+Finally, we get to the integration part. Suppose we picked each $\gamma_w$ to be the straight line segment from $a_0$ to $w$. Then for the antiderivative to be differentiable, we needed
+\[
+ \int_{\gamma_w * \delta_h} f(z) \;\d z = \int_{\gamma_{w + h}} f(z)\;\d z.
+\]
+In other words, we needed the integral along the path $\gamma_w * \delta_h * (-\gamma_{w + h})$ to vanish. This is a rather simple kind of path. It is just (the boundary of) a triangle, consisting of three line segments.
+
+\begin{defi}[Triangle]
+ A \emph{triangle} in a domain $U$ is what it ought to be --- the Euclidean convex hull of $3$ points in $U$, lying wholly in $U$. We write its boundary as $\partial T$, which we view as an oriented piecewise $C^1$ path, i.e.\ a contour.
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[shift={(-3, 0)}]
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \draw (-0.5, -0.5) node [circ] {} -- (0.5, -0.5) node [circ] {} -- (0, 0.366) node [circ] {} -- cycle;
+
+ \node at (0, -1.5) {good};
+ \end{scope}
+
+ \begin{scope}
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0, 0) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \draw (-0.5, -0.5) node [circ] {} -- (0.5, -0.5) node [circ] {} -- (0, 0.366) node [circ] {} -- cycle;
+
+ \node at (0, -1.5) {bad};
+ \end{scope}
+
+ \begin{scope}[shift={(3, 0)}]
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \draw (-0.5, -0.5) node [circ] {} -- (0.5, -0.5) node [circ] {} -- (0, 0.366) node [circ] {} -- cycle;
+ \draw [fill=white] (0, -0.2113) circle [radius=0.2];
+
+ \node at (0, -1.5) {very bad};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Our earlier result on constructing antiderivative then shows:
+\begin{prop}
+ If $U$ is a star domain, and $f: U \to \C$ is continuous, and if
+ \[
+ \int_{\partial T} f(z)\;\d z = 0
+ \]
+ for all triangles $T \subseteq U$, then $f$ has an antiderivative on $U$.
+\end{prop}
+
+\begin{proof}
+ As before, take $\gamma_w = [a_0, w]$, which lies in $U$ since $U$ is star-shaped about $a_0$.
+\end{proof}
+
+This is in some sense a weaker proposition --- while our hypothesis only requires the integral to vanish over triangles, and not arbitrary closed loops, we are restricted to star domains only.
+
+But this weakening is exactly what makes the proposition useful --- it is much easier to show that the integral of a particular function vanishes over all triangles than over all possible closed curves.
+
+Turns out, it so happens that for triangles, we can fiddle around with some geometry to prove the following result:
+
+\begin{thm}[Cauchy's theorem for a triangle]
+ Let $U$ be a domain, and let $f: U \to \C$ be holomorphic. If $T \subseteq U$ is a triangle, then $\int_{\partial T} f(z)\;\d z = 0$.
+\end{thm}
+So for holomorphic functions, the hypothesis of the previous theorem automatically holds.
+
+We immediately get the following corollary, which is what we will end up using most of the time.
+\begin{cor}[Convex Cauchy]
+ If $U$ is a convex or star-shaped domain, and $f: U \to \C$ is holomorphic, then for \emph{any} closed piecewise $C^1$ path $\gamma$ in $U$, we must have
+ \[
+ \int_\gamma f(z)\;\d z = 0.
+ \]
+\end{cor}
+
+\begin{proof}[Proof of corollary]
+ If $f$ is holomorphic, then Cauchy's theorem says the integral over any triangle vanishes. If $U$ is star shaped, our proposition says $f$ has an antiderivative. Then the fundamental theorem of calculus tells us the integral around any closed path vanishes.
+\end{proof}
+
+Hence, all we need to do is to prove that fact about triangles.
+\begin{proof}[Proof of Cauchy's theorem for a triangle]
+ Fix a triangle $T$. Let
+ \[
+ \eta = \left|\int_{\partial T} f(z)\;\d z\right|,\quad \ell = \length (\partial T).
+ \]
+ The idea is to bound $\eta$ by $\ell^2 \varepsilon$ for every $\varepsilon > 0$, and hence conclude that we must have $\eta = 0$. To do so, we subdivide our triangles.
+
+ Before we start, it helps to motivate the idea of subdividing a bit. By subdividing the triangle further and further, we are focusing on a smaller and smaller region of the complex plane. This allows us to study how the integral behaves locally. This is helpful since we are given that $f$ is holomorphic, and holomorphicity is a local property.
+
+ We start with $T = T^0:$
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.2, ->-=0.533, ->-=0.866] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \end{tikzpicture}
+ \end{center}
+ We then add more lines to get $T_a^0, T_b^0, T_c^0, T_d^0$ (it doesn't really matter which is which).
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.112, ->-=0.279, ->-=0.445, ->-=0.612, ->-=0.779, ->-=0.945] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \draw [mred, ->-=0.22, ->-=0.553, ->-=0.886] (1, 0) -- (0.5, 0.866) -- (1.5, 0.866) -- cycle;
+ \end{tikzpicture}
+ \end{center}
+ We orient the middle triangle by the anti-clockwise direction. Then we have
+ \[
+ \int_{\partial T^0} f(z)\;\d z = \sum_{a, b, c, d} \int_{\partial T^0_{\Cdot}} f(z)\;\d z,
+ \]
+ since each internal edge occurs twice, with opposite orientation.
+
+ By the triangle inequality, since $\eta = \left|\int_{\partial T^0} f(z)\;\d z\right|$, there must be some subscript in $\{a, b, c, d\}$ such that
+ \[
+ \left|\int_{\partial T^0_{\Cdot}} f(z)\;\d z\right| \geq \frac{\eta}{4}.
+ \]
+ We call this $T_{\Cdot}^0 = T^1$. Then we notice $\partial T^1$ has length
+ \[
+ \length(\partial T^1) = \frac{\ell}{2}.
+ \]
+ Iterating this, we obtain triangles
+ \[
+ T^0 \supseteq T^1 \supseteq T^2 \supseteq \cdots
+ \]
+ such that
+ \[
+ \left|\int_{\partial T^i} f(z)\;\d z\right| \geq \frac{\eta}{4^i},\quad \length (\partial T^i) = \frac{\ell}{2^i}.
+ \]
+ Now we have a nested sequence of non-empty compact sets. By IB Metric and Topological Spaces (or IB Analysis II), there is some $z_0 \in \bigcap_{i \geq 0} T^i$.
+
+ Now fix an $\varepsilon > 0$. Since $f$ is holomorphic at $z_0$, we can find a $\delta > 0$ such that
+ \[
+ |f(w) - f(z_0) - (w - z_0) f'(z_0)| \leq \varepsilon|w - z_0|
+ \]
+ whenever $|w - z_0| < \delta$. Since the diameters of the triangles are shrinking each time, we can pick an $n$ such that $T^n \subseteq B(z_0, \delta)$. We're almost there. We just need one last, slightly sneaky observation. Note that
+ \[
+ \int_{\partial T^n} 1 \;\d z = 0 = \int_{\partial T^n} z \;\d z,
+ \]
+ since these functions certainly do have anti-derivatives on $T^n$. Therefore, noting that $f(z_0)$ and $f'(z_0)$ are just constants, we have
+ \begin{align*}
+ \left|\int_{\partial T^n}f(z)\;\d z\right| &= \left|\int_{\partial T^n} (f(z) - f(z_0) - (z - z_0) f'(z_0))\;\d z\right|\\
+ &\leq \length(\partial T^n) \sup_{z \in \partial T^n} |f(z) - f(z_0) - (z - z_0) f'(z_0)|\\
+ &\leq \length(\partial T^n) \varepsilon \sup_{z \in \partial T^n}|z - z_0|\\
+ &\leq \varepsilon \length(\partial T^n)^2,
+ \end{align*}
+ where the last line comes from the fact that $z_0 \in T^n$, and the distance between any two points in the triangle cannot be greater than the perimeter of the triangle. Substituting our formulas for these in, we have
+ \[
+ \frac{\eta}{4^n}\leq \frac{1}{4^n} \ell^2 \varepsilon.
+ \]
+ So
+ \[
+ \eta \leq \ell^2 \varepsilon.
+ \]
+ Since $\ell$ is fixed and $\varepsilon$ was arbitrary, it follows that we must have $\eta = 0$.
+\end{proof}
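As a sanity check (not part of the notes), we can verify the theorem numerically for an arbitrarily chosen triangle and the entire function $f(z) = e^z z^2$:

```python
# Numerically check that the integral of a holomorphic f around the boundary
# of a triangle T is (approximately) zero.
import numpy as np

def edge_integral(f, a, b, n=40001):
    # integral along the straight edge from a to b
    t = np.linspace(0.0, 1.0, n)
    z = a + (b - a) * t
    vals = f(z) * (b - a)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))

vertices = [0.0, 2.0, 1.0 + 1.5j]     # an arbitrary triangle
f = lambda z: np.exp(z) * z**2        # entire, hence holomorphic

total = sum(edge_integral(f, vertices[i], vertices[(i + 1) % 3]) for i in range(3))
print(abs(total))                     # close to 0
```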
+Is this the best we can do? Can we formulate this for an arbitrary domain, and not just star-shaped ones? It is obviously not true if the domain is not simply connected, e.g.\ for $f(z) = \frac{1}{z}$ defined on $\C \setminus \{0\}$. However, it turns out Cauchy's theorem holds as long as the domain is simply connected, as we will show in a later part of the course. However, this is not surprising given the Riemann mapping theorem, since any simply connected domain is conformally equivalent to the unit disk, which \emph{is} star-shaped (and in fact convex).
+
+We can generalize our result to the case when $f: U \to \C$ is continuous on the whole of $U$, and holomorphic except at finitely many points. In this case, the same conclusion holds --- $\int_\gamma f(z)\;\d z = 0$ for all piecewise smooth closed $\gamma$.
+
+Why is this? In the proof, it was sufficient to focus on showing $\int_{\partial T} f(z)\;\d z = 0$ for a triangle $T \subseteq U$. Consider the simple case where we only have a single point of non-holomorphicity $a \in T$. The idea is again to subdivide.
+\begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw (-2, 0) -- (2, 0) -- (0, 3.464) -- cycle;
+ \draw (-0.5, 0.866) -- (0.5, 0.866) -- (0, 1.732) -- cycle;
+ \node [circ] at (0, 1.15589) {};
+ \node [above] at (0, 1.15589) {$a$};
+
+ \draw (-2, 0) -- (-0.5, 0.866);
+ \draw (2, 0) -- (0.5, 0.866);
+ \draw (0, 3.464) -- (0, 1.732);
+
+ \draw (-2, 0) -- (0, 1.732);
+ \draw (2, 0) -- (-0.5, 0.866);
+ \draw (0, 3.464) -- (0.5, 0.866);
+ \end{tikzpicture}
+\end{center}
+We call the center triangle $T'$. Along all other triangles in our subdivision, we get $\int f(z)\;\d z = 0$, as these triangles lie in a region where $f$ is holomorphic. So
+\[
+ \int_{\partial T} f(z)\;\d z = \int_{\partial T'} f(z)\;\d z.
+\]
+Note now that we can make $T'$ as small as we like. But
+\[
+ \left|\int_{\partial T'} f(z)\;\d z\right| \leq \length(\partial T') \sup_{z \in \partial T'} |f(z)|.
+\]
+Since $f$ is continuous, it is bounded. As we take smaller and smaller subdivisions, $\length(\partial T') \to 0$. So we must have $\int_{\partial T} f(z)\;\d z = 0$.
+
+From here, it's straightforward to conclude the general case with finitely many points of non-holomorphicity --- we can subdivide the triangle so that each small triangle contains at most one bad point.
+
+\subsection{The Cauchy integral formula}
+Our next amazing result will be Cauchy's integral formula. This formula allows us to find the value of $f$ inside a ball $B(z_0, r)$ just given the values of $f$ on the boundary $\partial \overline{B(z_0, r)}$.
+
+\begin{thm}[Cauchy integral formula]
+ Let $U$ be a domain, and $f: U \to \C$ be holomorphic. Suppose there is some $\overline{B(z_0; r)} \subseteq U$ for some $z_0$ and $r > 0$. Then for all $z \in B(z_0; r)$, we have
+ \[
+ f(z) = \frac{1}{2\pi i} \int_{\partial \overline{B(z_0; r)}} \frac{f(w)}{w - z}\;\d w.
+ \]
+\end{thm}
+Recall that we previously computed $\int_{\partial \overline{B(0, 1)}} \frac{1}{z} \;\d z= 2\pi i$. This is indeed a special case of the Cauchy integral formula. We will provide two proofs. The first proof relies on the above generalization of Cauchy's theorem.
+
+\begin{proof}
+ Since $U$ is open, there is some $\delta > 0$ such that $\overline{B(z_0; r + \delta)} \subseteq U$. We define $g: B(z_0; r+ \delta) \to \C$ by
+ \[
+ g(w) =
+ \begin{cases}
+ \frac{f(w) - f(z)}{w - z} & w \not= z\\
+ f'(z) & w = z
+ \end{cases},
+ \]
+ where we have \emph{fixed} $z \in B(z_0; r)$ as in the statement of the theorem. Now note that $g$ is holomorphic as a function of $w \in B(z_0; r + \delta)$, except perhaps at $w = z$. But since $f$ is differentiable at $z$, the function $g$ is continuous at $w = z$, and hence continuous everywhere on $B(z_0; r + \delta)$. So the previous result says
+ \[
+ \int_{\partial \overline{B(z_0; r)}} g(w)\;\d w = 0.
+ \]
+ This is exactly saying that
+ \[
+ \int_{\partial \overline{B(z_0; r)}} \frac{f(w)}{w - z} \;\d w= \int_{\partial \overline{B(z_0; r)}} \frac{f(z)}{w - z}\;\d w.
+ \]
+ We now rewrite
+ \[
+ \frac{1}{w - z} = \frac{1}{w - z_0} \cdot \frac{1}{1 - \left(\frac{z - z_0}{w - z_0}\right)} = \sum_{n = 0}^\infty \frac{(z - z_0)^n}{(w - z_0)^{n + 1}}.
+ \]
+ Note that this sum converges uniformly on $\partial \overline{B(z_0; r)}$ since
+ \[
+ \left|\frac{z - z_0}{w - z_0}\right| < 1
+ \]
+ for $w$ on this circle.
+
+ By uniform convergence, we can exchange summation and integration. So
+ \[
+ \int_{\partial \overline{B(z_0; r)}} \frac{f(z)}{w - z} \;\d w = \sum_{n = 0}^\infty \int_{\partial \overline{B(z_0; r)}} f(z) \frac{(z - z_0)^n}{(w - z_0)^{n + 1}}\;\d w.
+ \]
+ We note that $f(z) (z - z_0)^n$ is just a constant, and that we have previously proven
+ \[
+ \int_{\partial \overline{B(z_0; r)}} (w - z_0)^k \;\d w =
+ \begin{cases}
+ 2\pi i & k = -1\\
+ 0 & k \not= -1
+ \end{cases}.
+ \]
+ So only the $n = 0$ term survives, and the right hand side is just $2\pi i f(z)$. So done.
+\end{proof}
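The formula is easy to test numerically. The sketch below (not part of the notes) takes $f = \exp$, the circle of radius $2$ about $z_0 = 0$, and an arbitrarily chosen point $z$ inside it:

```python
# Numerically check the Cauchy integral formula for f = exp:
# f(z) should equal (1 / 2*pi*i) times the integral of f(w)/(w - z) over the circle.
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 40001)
z0, r = 0.0, 2.0
w = z0 + r * np.exp(1j * theta)       # points on the circle about z0 of radius r
dw = 1j * r * np.exp(1j * theta)      # derivative of the parametrization

z = 0.5 + 0.3j                        # an arbitrary point inside the circle
vals = np.exp(w) / (w - z) * dw
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(theta))

print(integral / (2j * np.pi), np.exp(z))   # the two should agree
```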
+
+\begin{cor}[Local maximum principle]
+ Let $f: B(z, r) \to \C$ be holomorphic. Suppose $|f(w)| \leq |f(z)|$ for all $w \in B(z; r)$. Then $f$ is constant. In other words, a non-constant function cannot achieve an interior local maximum.
+\end{cor}
+
+\begin{proof}
+ Let $0 < \rho < r$. Applying the Cauchy integral formula, we get
+ \begin{align*}
+ |f(z)| &= \left|\frac{1}{2\pi i}\int_{\partial \overline{B(z; \rho)}} \frac{f(w)}{w - z}\;\d w\right|\\
+ \intertext{Setting $w = z + \rho e^{2\pi i\theta}$, we get}
+ &= \left|\int_0^1 f(z + \rho e^{2 \pi i \theta})\;\d \theta\right|\\
+ &\leq \sup_{|z - w| = \rho} |f(w)|\\
+ &\leq |f(z)|.
+ \end{align*}
+ So we must have equality throughout. When we proved the supremum bound for the integral, we showed equality can happen only if the integrand is constant. So $|f(w)|$ is constant on the circle $|z - w| = \rho$, and equal to $|f(z)|$. Since this is true for all $\rho \in (0, r)$, it follows that $|f|$ is constant on $B(z; r)$. The Cauchy--Riemann equations then entail that $f$ must be constant, as you have shown on example sheet 1.
+\end{proof}
+Going back to the Cauchy integral formula, recall that we had $\overline{B(z_0; r)} \subseteq U$, $f: U \to \C$ holomorphic, and we want to show
+\[
+ f(z) = \frac{1}{2\pi i} \int_{\partial \overline{B(z_0; r)}} \frac{f(w)}{w - z}\;\d w.
+\]
+When we proved it last time, the key was that we know how to integrate things of the form $\frac{1}{(w - z_0)^n}$, so we manipulated the integrand until the integral was made of things like this.
+
+The second strategy is to change the contour of integration instead of changing the integrand. If we can change it so that the integral is performed over a circle around $z$ instead of $z_0$, then we know what to do.
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [right] {$z_0$};
+ \draw [->] circle [radius=2];
+
+ \node [circ] at (0, 1) {};
+ \node [right] at (0, 1) {$z$};
+ \draw [->] (0, 1) circle [radius=0.5];
+ \end{tikzpicture}
+\end{center}
+\begin{proof}[Proof of the Cauchy integral formula, again]
+ Given $\varepsilon > 0$, we pick $\delta > 0$ such that $\overline{B(z, \delta)} \subseteq B(z_0, r)$, and such that whenever $|w - z| < \delta$, we have $|f(w) - f(z)| < \varepsilon$. This is possible since $f$ is continuous at $z$. We now cut our region apart:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [right] {$z_0$};
+ \draw circle [radius=2];
+
+ \node [circ] at (0, 1) {};
+ \node [right] at (0, 1) {$z$};
+ \draw (0, 1) circle [radius=0.5];
+
+ \draw [red, ->-=0.04, ->-=0.35, ->-=0.72] (0, 2) -- (0, 1.5) arc(90:-90:0.5) -- (0, -2) arc(-90:90:2);
+ \fill [mblue, opacity=0.5] (0, 2) -- (0, 1.5) arc(90:-90:0.5) -- (0, -2) arc(-90:90:2);
+
+ \begin{scope}[shift={(5, 0)}]
+ \node [circ] {};
+ \node [right] {$z_0$};
+ \draw circle [radius=2];
+
+ \node [circ] at (0, 1) {};
+ \node [right] at (0, 1) {$z$};
+ \draw (0, 1) circle [radius=0.5];
+
+ \draw [red, -<-=0.01, -<-=0.31, -<-=0.69] (0, 2) -- (0, 1.5) arc(90:270:0.5) -- (0, -2) arc(270:90:2);
+ \fill [mblue, opacity=0.5] (0, 2) -- (0, 1.5) arc(90:270:0.5) -- (0, -2) arc(270:90:2);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ We know $\frac{f(w)}{w - z}$ is holomorphic on sufficiently small open neighbourhoods of the half-contours indicated. The area enclosed by the contours might not be star-shaped, but we can definitely divide it once more so that it is. Hence the integral of $\frac{f(w)}{w - z}$ around the half-contour vanishes by Cauchy's theorem. Adding these together, we get
+ \[
+ \int_{\partial \overline{B(z_0, r)}} \frac{f(w)}{w - z}\;\d w = \int_{\partial \overline{B(z, \delta)}} \frac{f(w)}{w - z}\;\d w,
+ \]
+ where the balls are both oriented anticlockwise. Now we have
+ \[
+ \left|f(z) - \frac{1}{2\pi i} \int_{\partial \overline{B(z_0, r)}} \frac{f(w)}{w - z}\;\d w\right| = \left|f(z) - \frac{1}{2\pi i} \int_{\partial \overline{B(z, \delta)}} \frac{f(w)}{w - z}\;\d w\right|.
+ \]
+ Now we once again use the fact that
+ \[
+ \int_{\partial \overline{B(z, \delta)}} \frac{1}{w - z}\;\d w = 2\pi i
+ \]
+ to show this is equal to
+ \[
+ \left|\frac{1}{2\pi i} \int_{\partial \overline{B(z, \delta)}} \frac{f(z) - f(w)}{w - z} \;\d w\right| \leq \frac{1}{2\pi} \cdot 2\pi \delta \cdot \frac{1}{\delta} \cdot \varepsilon = \varepsilon.
+ \]
+ Since $\varepsilon > 0$ was arbitrary, we see that the Cauchy integral formula holds.
+\end{proof}
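+\begin{eg}
+ As a quick sanity check of the formula, take $f(w) = e^w$, $z = 0$ and any $r > 0$. Then the Cauchy integral formula gives
+ \[
+ \int_{\partial \overline{B(0, r)}} \frac{e^w}{w}\;\d w = 2\pi i\, e^0 = 2\pi i,
+ \]
+ an integral that would be rather unpleasant to compute by parametrizing the circle directly.
+\end{eg}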
+Note that the subdivision we did above was something we can do in general.
+
+\begin{defi}[Elementary deformation]
+ Given a pair of $C^1$-smooth (or piecewise smooth) closed paths $\phi, \psi: [0, 1] \to U$, we say $\psi$ is an elementary deformation of $\phi$ if there exist convex open sets $C_1, \cdots, C_n \subseteq U$ and a division of the interval $0 = x_0 < x_1 < \cdots < x_n = 1$ such that on $[x_{i - 1}, x_i]$, both $\phi(t)$ and $\psi(t)$ belong to $C_i$.
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [decorate, decoration={snake, segment length=1.5cm}] ellipse (2 and 0.5);
+ \node [right] at (2, 0) {$\phi$};
+ \draw [decorate, decoration={snake, segment length=1cm}] ellipse (0.5 and 2);
+ \node [above] at (0, 2.1) {$\psi$};
+
+ \node [circ] at (-0.5, -0.21) {};
+ \node [scale=0.6, above] at (-0.3, -0.21) {$\phi(x_{i - 1})$};
+
+ \node [circ] at (0.4, -0.35) {};
+ \node [scale=0.6, above] at (0.26, -0.35) {$\phi(x_i)$};
+
+ \node [circ] at (-0.48, -1.21) {};
+ \node [scale=0.6, left] at (-0.48, -1.21) {$\psi(x_{i - 1})$};
+
+ \node [circ] at (0.28, -1.11) {};
+ \node [scale=0.6, right] at (0.28, -1.11) {$\psi(x_i)$};
+
+ \draw [fill opacity=0.5, fill=mblue] (-0.5, -0.21) -- (0.4, -0.35) -- (0.28, -1.11) -- (-0.48, -1.21) -- cycle;
+ \end{tikzpicture}
+\end{center}
+Then there are straight lines $\gamma_i: \phi(x_i) \to \psi(x_i)$ lying inside $C_i$. If $f$ is holomorphic on $U$, considering the shaded square, we find
+\[
+ \int_\phi f(z)\;\d z = \int_\psi f(z)\;\d z
+\]
+whenever $\psi$ is an elementary deformation of $\phi$.
+
+We now explore some classical consequences of the Cauchy integral formula. The next is Liouville's theorem, as promised.
+\begin{thm}[Liouville's theorem]
+ Let $f: \C \to \C$ be an entire function (i.e.\ holomorphic everywhere). If $f$ is bounded, then $f$ is constant.
+\end{thm}
+This, for example, means there are no interesting holomorphic periodic functions like $\sin$ and $\cos$ that are bounded everywhere.
+
+\begin{proof}
+ Suppose $|f(z)| \leq M$ for all $z \in \C$. We fix $z_1, z_2 \in \C$, and estimate $|f(z_1) - f(z_2)|$ with the integral formula.
+
+ Let $R > \max \{2 |z_1|, 2|z_2|\}$. By the integral formula, we know
+ \begin{align*}
+ |f(z_1) - f(z_2)| &= \left|\frac{1}{2\pi i}\int_{\partial B(0, R)} \left(\frac{f(w)}{w - z_1} - \frac{f(w)}{w - z_2}\right)\;\d w\right|\\
+ &= \left|\frac{1}{2\pi i}\int_{\partial B(0, R)} \frac{f(w)(z_1 - z_2)}{(w - z_1)(w - z_2)} \;\d w\right|\\
+ &\leq \frac{1}{2\pi} \cdot 2\pi R \cdot \frac{M |z_1 - z_2|}{(R/2)^2} \\
+ &= \frac{4M|z_1 - z_2|}{R}.
+ \end{align*}
+ Note that we get the bound on the denominator since $|w| = R$ implies $|w - z_i| > \frac{R}{2}$ by our choice of $R$. Letting $R \to \infty$, we know we must have $f(z_1) = f(z_2)$. So $f$ is constant.
+\end{proof}
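+\begin{eg}
+ For instance, $\sin$ is entire, so Liouville's theorem forces it to be unbounded on $\C$, even though it is bounded on $\R$. Indeed, for $y \in \R$, we have
+ \[
+ \sin (iy) = \frac{e^{i(iy)} - e^{-i(iy)}}{2i} = \frac{e^{-y} - e^{y}}{2i} = i \sinh y,
+ \]
+ and $|\sinh y| \to \infty$ as $y \to \infty$.
+\end{eg}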
+
+\begin{cor}[Fundamental theorem of algebra]
+ A non-constant complex polynomial has a root in $\C$.
+\end{cor}
+
+\begin{proof}
+ Let
+ \[
+ P(z) = a_n z^n + a_{n - 1}z^{n - 1} + \cdots + a_0,
+ \]
+ where $a_n \not= 0$ and $n > 0$. So $P$ is non-constant. Thus, as $|z| \to \infty$, $|P(z)| \to \infty$. In particular, there is some $R$ such that for $|z| > R$, we have $|P(z)| \geq 1$.
+
+ Now suppose for contradiction that $P$ does not have a root in $\C$. Then consider
+ \[
+ f(z) = \frac{1}{P(z)},
+ \]
+ which is then an entire function, since $P$ is entire and nowhere zero. On $\overline{B(0, R)}$, we know $f$ is certainly continuous, and hence bounded. Outside this ball, we get $|f(z)| \leq 1$. So $f(z)$ is constant, by Liouville's theorem. But $P$ is non-constant. This is absurd. Hence the result follows.
+\end{proof}
+There are many, many ways we can prove the fundamental theorem of algebra. However, none of them belong wholly to algebra. They all involve some analysis or topology, as you might encounter in the IID Algebraic Topology and IID Riemann Surfaces courses.
+
+This is not surprising since the construction of $\R$, and hence $\C$, is intrinsically analytic --- we get from $\N$ to $\Z$ by requiring it to have additive inverses; $\Z$ to $\Q$ by requiring multiplicative inverses; $\R$ to $\C$ by requiring a root of $x^2 + 1 = 0$. These are all algebraic. However, to get from $\Q$ to $\R$, we are requiring something about convergence in $\Q$. This is not algebraic. It requires a particular choice of metric on $\Q$. If we pick a different metric, then we get a different completion, as you may have seen in IB Metric and Topological Spaces. Hence the construction of $\R$ is actually analytic, and not purely algebraic.
+
+\subsection{Taylor's theorem}
+When we first met Taylor series, we were happy, since we can express anything as a power series. However, we soon realized this is just a fantasy --- the Taylor series of a real function need not be equal to the function itself. For example, the function $f(x) = e^{-x^{-2}}$ (with $f(0) = 0$) has vanishing Taylor series at $0$, but is not identically zero on any neighbourhood of $0$. What we \emph{do} have is Taylor's theorem, which gives you an expression for what the remainder is if we truncate our series, but is otherwise completely useless.
+
+In the world of complex analysis, we are happy once again. Every holomorphic function can be given by its Taylor series.
+\begin{thm}[Taylor's theorem]
+ Let $f: B(a, r) \to \C$ be holomorphic. Then $f$ has a convergent power series representation
+ \[
+ f(z) = \sum_{n = 0}^\infty c_n (z - a)^n
+ \]
+ on $B(a, r)$. Moreover,
+ \[
+ c_n = \frac{f^{(n)}(a)}{n!} = \frac{1}{2\pi i}\int_{\partial B(a, \rho)}\frac{f(z)}{(z - a)^{n + 1}}\;\d z
+ \]
+ for any $0 < \rho < r$.
+\end{thm}
+Note that the very statement of the theorem already implies any holomorphic function has to be infinitely differentiable. This is a good world.
+
+\begin{proof}
+ We'll use Cauchy's integral formula. If $|w - a|< \rho < r$, then
+ \[
+ f(w) = \frac{1}{2\pi i} \int_{\partial B(a, \rho)} \frac{f(z)}{z - w}\;\d z.
+ \]
+ Now (cf.\ the first proof of the Cauchy integral formula), we note that
+ \[
+ \frac{1}{z - w} = \dfrac{1}{(z - a)\left(1 - \frac{w - a}{z - a}\right)} = \sum_{n = 0}^\infty \frac{(w - a)^n}{(z - a)^{n + 1}}.
+ \]
+ This series converges uniformly for $z$ on the circle $|z - a| = \rho$, since $\left|\frac{w - a}{z - a}\right| = \frac{|w - a|}{\rho} < 1$ there. By uniform convergence, we can exchange integration and summation to get
+ \begin{align*}
+ f(w) &= \sum_{n = 0}^\infty \left(\frac{1}{2\pi i} \int_{\partial B(a, \rho)} \frac{f(z)}{(z - a)^{n + 1}}\;\d z\right) (w - a)^n\\
+ &= \sum_{n = 0}^\infty c_n (w - a)^n.
+ \end{align*}
+ Since $c_n$ does not depend on $w$, this is a genuine power series representation, and this is valid on any disk $B(a, \rho) \subseteq B(a, r)$.
+
+ Then the formula for $c_n$ in terms of the derivative comes for free since that's the formula for the derivative of a power series.
+\end{proof}
+
+This tells us every holomorphic function behaves like a power series. In particular, we do not get weird things like $e^{-x^{-2}}$ on $\R$ that have a trivial Taylor series expansion, but are themselves non-trivial. Similarly, we know that there are no ``bump functions'' on $\C$ that are non-zero only on a compact set (since power series don't behave like that). Of course, we already knew that from Liouville's theorem.
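+\begin{eg}
+ For $f(z) = \frac{1}{1 - z}$, which is holomorphic on $B(0, 1)$, the theorem recovers the familiar expansion
+ \[
+ \frac{1}{1 - z} = \sum_{n = 0}^\infty z^n
+ \]
+ on $B(0, 1)$. Indeed, $f^{(n)}(z) = \frac{n!}{(1 - z)^{n + 1}}$, so $c_n = \frac{f^{(n)}(0)}{n!} = 1$ for all $n$. Note that the radius of the disc is exactly the distance from $0$ to the singularity at $z = 1$, and the series converges on no larger disc.
+\end{eg}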
+
+\begin{cor}
+ If $f: B(a, r) \to \C$ is holomorphic on a disc, then $f$ is infinitely differentiable on the disc.
+\end{cor}
+
+\begin{proof}
+ Complex power series are infinitely differentiable (and $f$ had better be infinitely differentiable for us to write down the formula for $c_n$ in terms of $f^{(n)}$).
+\end{proof}
+
+This justifies our claim from the very beginning that $\Re(f)$ and $\Im(f)$ are harmonic functions if $f$ is holomorphic.
+
+\begin{cor}
+ If $f: U\to \C$ is a complex-valued function, then $f = u + iv$ is holomorphic at $p \in U$ if and only if $u, v$ satisfy the Cauchy--Riemann equations and $u_x, u_y, v_x, v_y$ are continuous in a neighbourhood of $p$.
+\end{cor}
+
+\begin{proof}
+ If $u_x, u_y, v_x, v_y$ exist and are continuous in an open neighbourhood of $p$, then $u$ and $v$ are differentiable as functions $\R^2 \to \R^2$ at $p$, and then we proved that the Cauchy--Riemann equations imply differentiability at each point in the neighbourhood of $p$. So $f$ is differentiable in a neighbourhood of $p$.
+
+ On the other hand, if $f$ is holomorphic, then it is infinitely differentiable. In particular, $f'(z)$ is also holomorphic. So $u_x, u_y, v_x, v_y$ are differentiable, hence continuous.
+\end{proof}
+
+We also get the following (partial) converse to Cauchy's theorem.
+\begin{cor}[Morera's theorem]
+ Let $U\subseteq \C$ be a domain. Let $f:U \to \C$ be continuous such that
+ \[
+ \int_\gamma f(z)\;\d z = 0
+ \]
+ for all piecewise-$C^1$ closed curves $\gamma \subseteq U$. Then $f$ is holomorphic on $U$.
+\end{cor}
+
+\begin{proof}
+ We have previously shown that the condition implies that $f$ has an antiderivative $F: U \to \C$, i.e.\ $F$ is a holomorphic function such that $F' = f$. But $F$ is infinitely differentiable. So $f$ must be holomorphic.
+\end{proof}
+
+Recall that Cauchy's theorem required $U$ to be sufficiently nice, e.g.\ being star-shaped or just simply-connected. However, Morera's theorem does not. It just requires that $U$ is a domain. This is since holomorphicity is a local property, while the vanishing of integrals over closed curves is a global one. Cauchy's theorem gets us from a local property to a global property, and hence we need to assume more about what the ``globe'' looks like. On the other hand, passing from a global property to a local one does not require such an assumption. Hence we have this asymmetry.
+
+\begin{cor}
+ Let $U \subseteq \C$ be a domain, and $f_n: U \to \C$ be holomorphic functions. If $f_n \to f$ uniformly, then $f$ is in fact holomorphic, and
+ \[
+ f'(z) = \lim_n f_n'(z).
+ \]
+\end{cor}
+\begin{proof}
+ Given a piecewise $C^1$ path $\gamma$, uniformity of convergence says
+ \[
+ \int_\gamma f_n(z)\;\d z \to \int_\gamma f(z)\;\d z
+ \]
+ as $n \to \infty$. Since being holomorphic is a local condition, we fix $p \in U$ and work in some small, convex disc $B(p, \varepsilon) \subseteq U$. Then for any closed curve $\gamma$ inside this disc, we have
+ \[
+ \int_\gamma f_n(z) \;\d z = 0.
+ \]
+ Hence we also have $\int_\gamma f(z)\;\d z = 0$. Since this is true for all curves, we conclude $f$ is holomorphic inside $B(p, \varepsilon)$ by Morera's theorem. Since $p$ was arbitrary, we know $f$ is holomorphic.
+
+ We know the derivative of the limit is the limit of the derivative since we can express $f'(a)$ in terms of the integral of $\frac{f(z)}{(z - a)^2}$, as in Taylor's theorem.
+\end{proof}
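+\begin{eg}
+ The corresponding real statement is false. For instance, the functions $f_n(x) = \sqrt{x^2 + \frac{1}{n}}$ are differentiable on $\R$, and
+ \[
+ \left|\sqrt{x^2 + \tfrac{1}{n}} - |x|\right| = \frac{1/n}{\sqrt{x^2 + 1/n} + |x|} \leq \frac{1}{\sqrt{n}},
+ \]
+ so $f_n \to |x|$ uniformly on $\R$. But $|x|$ is not differentiable at $0$.
+\end{eg}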
+There is a lot of passing between knowledge of integrals and knowledge of holomorphicity all the time, as we can see in these few results. These few sections are in some sense the heart of the course, where we start from Cauchy's theorem and Cauchy's integral formula, and derive all the other amazing consequences.
+
+\subsection{Zeroes}
+Recall that for a polynomial $p(z)$, we can talk about the \emph{order} of its zero at $z = a$ by looking at the largest power of $(z - a)$ dividing $p$. \emph{A priori}, it is not clear how we can do this for general functions. However, given that everything is a Taylor series, we know how to do this for holomorphic functions.
+
+\begin{defi}[Order of zero]
+ Let $f: B(a, r) \to \C$ be holomorphic. Then we know we can write
+ \[
+ f(z) = \sum_{n = 0}^\infty c_n (z - a)^n
+ \]
+ as a convergent power series. Then either all $c_n = 0$, in which case $f = 0$ on $B(a, r)$, or there is a least $N$ such that $c_N \not =0$ ($N$ is just the smallest $n$ such that $f^{(n)} (a) \not= 0$).
+
+ If $N > 0$, then we say $f$ has a \emph{zero of order $N$} at $a$.
+\end{defi}
+If $f$ has a zero of order $N$ at $a$, then we can write
+\[
+ f(z) = (z - a)^N g(z)
+\]
+on $B(a, r)$, where $g(a) = c_N \not= 0$.
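+\begin{eg}
+ The function $\sin$ has a zero of order $1$ at $0$, since $\sin z = z - \frac{z^3}{3!} + \cdots$ has $c_1 = 1 \not= 0$. On the other hand, $\sin z - z = -\frac{z^3}{3!} + \frac{z^5}{5!} - \cdots$ has a zero of order $3$ at $0$, and we can factor $\sin z - z = z^3 g(z)$ with $g(0) = -\frac{1}{6} \not= 0$.
+\end{eg}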
+
+Often, it is not the actual order that is too important. Instead, it is the ability to factor $f$ in this way. One of the applications is the following:
+\begin{lemma}[Principle of isolated zeroes]
+ Let $f: B(a, r) \to \C$ be holomorphic and not identically zero. Then there exists some $0 < \rho < r$ such that $f(z) \not= 0$ in the punctured neighbourhood $B(a, \rho) \setminus \{a\}$.
+\end{lemma}
+
+\begin{proof}
+ If $f(a) \not= 0$, then the result is obvious by continuity of $f$.
+
+ The other option is not too different. If $f$ has a zero of order $N$ at $a$, then we can write $f(z) = (z - a)^N g(z)$ with $g(a) \not= 0$. By continuity of $g$, $g$ does not vanish on some small neighbourhood of $a$, say $B(a, \rho)$. Then $f(z)$ does not vanish on $B(a, \rho) \setminus \{a\}$.
+\end{proof}
+
+A consequence is that given two holomorphic functions on the same domain, if they agree on sufficiently many points, then they must in fact be equal.
+\begin{cor}[Identity theorem]
+ Let $U \subseteq \C$ be a domain, and $f, g: U \to \C$ be holomorphic. Let $S = \{z \in U: f(z) = g(z)\}$. Suppose $S$ contains a non-isolated point, i.e.\ there exists some $w \in S$ such that for all $\varepsilon > 0$, $S \cap B(w, \varepsilon) \not= \{w\}$. Then $f = g$ on $U$.
+\end{cor}
+
+\begin{proof}
+ Consider the function $h(z) = f(z) - g(z)$. Then the hypothesis says $h(z)$ has a non-isolated zero at $w$, i.e.\ there is no punctured neighbourhood of $w$ on which $h$ is nowhere zero. So by (the contrapositive of) the previous lemma, there is some $\rho > 0$ such that $h = 0$ on $B(w, \rho) \subseteq U$.
+
+ Now we do some topological trickery. We let
+ \begin{align*}
+ U_0 &= \{a \in U: h = 0\text{ on some neighbourhood }B(a, \rho)\text{ of }a\text{ in }U\},\\
+ U_1 &= \{a \in U: \text{there exists }n \geq 0\text{ such that }h^{(n)}(a) \not= 0\}.
+ \end{align*}
+ Clearly, $U_0 \cap U_1 = \emptyset$, and the existence of Taylor expansions shows $U_0 \cup U_1 = U$.
+
+ Moreover, $U_0$ is open by definition, and $U_1$ is open since $h^{(n)}(z)$ is continuous near any given $a \in U_1$. Since $U$ is (path) connected, such a decomposition can happen only if one of $U_0$ and $U_1$ is empty. But $w \in U_0$. So in fact $U_0 = U$, i.e.\ $h$ vanishes on the whole of $U$. So $f = g$.
+\end{proof}
+
+In particular, if two holomorphic functions agree on some small open subset of the domain, then they must in fact be identical. This is a very strong result, and is very false for real functions. Hence, to specify, say, an entire function, all we need to do is to specify it on as small a domain as we like.
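+\begin{eg}
+ This gives a slick way of extending real identities to $\C$. For example, $\sin^2 z + \cos^2 z - 1$ is an entire function vanishing on $\R$, and every point of $\R$ is a non-isolated point of its zero set. So the identity theorem (applied with $g = 0$) tells us $\sin^2 z + \cos^2 z = 1$ for all $z \in \C$.
+\end{eg}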
+
+\begin{defi}[Analytic continuation]
+ Let $U_0\subseteq U \subseteq \C$ be domains, and $f: U_0 \to \C$ be holomorphic. An \emph{analytic continuation} of $f$ is a holomorphic function $h:U \to \C$ such that $h|_{U_0} = f$, i.e.\ $h(z) = f(z)$ for all $z \in U_0$.
+\end{defi}
+By the identity theorem, we know the analytic continuation is unique if it exists.
+
+Thus, given any holomorphic function $f: U \to \C$, it is natural to ask how far we can extend the domain, i.e.\ what is the largest $U' \supseteq U$ such that there is an analytic continuation of $f$ to $U'$.
+
+There is no general method that does this for us. However, one useful trick is to try to write our function $f$ in a different way so that it is clear how we can extend it to elsewhere.
+
+\begin{eg}
+ Consider the function
+ \[
+ f(z) = \sum_{n \geq 0} z^n = 1 + z + z^2 + \cdots
+ \]
+ defined on $B(0, 1)$.
+
+ By itself, this series diverges for $z$ outside $B(0, 1)$. However, we know well that this function is just
+ \[
+ f(z) = \frac{1}{1 - z}.
+ \]
+ This alternative representation makes sense on the whole of $\C$ except at $z = 1$. So we see that $f$ has an analytic continuation to $\C \setminus \{1\}$. There is clearly no extension to the whole of $\C$, since it blows up near $z = 1$.
+\end{eg}
+
+\begin{eg}
+ Alternatively, consider
+ \[
+ f(z) = \sum_{n \geq 0} z^{2^n}.
+ \]
+ Then this again converges on $B(0, 1)$. You will show in example sheet 2 that there is no analytic continuation of $f$ to \emph{any} larger domain.
+\end{eg}
+
+\begin{eg}
+ The Riemann zeta function
+ \[
+ \zeta(z) = \sum_{n = 1}^\infty n^{-z}
+ \]
+ defines a holomorphic function on $\{z: \Re(z) > 1\} \subseteq \C$. Indeed, we have $|n^{-z}| = n^{-\Re(z)}$, and we know $\sum n^{-t}$ converges for $t \in \R_{> 1}$, and in fact does so uniformly on any compact subset of the domain. So the corollary of Morera's theorem tells us that $\zeta(z)$ is holomorphic on $\Re(z) > 1$.
+
+ We know this cannot converge as $z \to 1$, since we approach the harmonic series which diverges. However, it turns out $\zeta(z)$ has an analytic continuation to $\C \setminus \{1\}$. We will not prove this.
+
+ At least formally, using the fundamental theorem of arithmetic, we can expand $n$ as a product of its prime factors, and write
+ \[
+ \zeta(z) = \prod_{\text{primes }p} (1 + p^{-z} + p^{-2z} + \cdots) = \prod_{\text{primes }p} \frac{1}{1 - p^{-z}}.
+ \]
+ If there were finitely many primes, then this would be a well-defined function on all of $\C$, since this is a finite product. Hence, the fact that this blows up at $z = 1$ implies that there are infinitely many primes.
+\end{eg}
+
+\subsection{Singularities}
+The next thing to study is \emph{singularities} of holomorphic functions. These are places where the function is not defined. There are many ways a function can be ill-defined. For example, if we write
+\[
+ f(z) = \frac{1 - z}{1 - z},
+\]
+then on the face of it, this function is not defined at $z = 1$. However, elsewhere, $f$ is just the constant function $1$, and we might as well define $f(1) = 1$. Then we get a holomorphic function. These are rather silly singularities, and are singular solely because we could not be bothered to define $f$ there.
+
+Some singularities are more interesting, in that they are genuinely singular. For example, the function
+\[
+ f(z) = \frac{1}{1 - z}
+\]
+is actually singular at $z = 1$, since $f$ is unbounded near the point. It turns out these are the only possibilities.
+
+\begin{prop}[Removal of singularities]
+ Let $U$ be a domain and $z_0 \in U$. If $f: U \setminus \{z_0\} \to \C$ is holomorphic, and $f$ is bounded near $z_0$, then there exists an $a$ such that $f(z) \to a$ as $z \to z_0$.
+
+ Furthermore, if we define
+ \[
+ g(z) =
+ \begin{cases}
+ f(z) & z \in U \setminus \{z_0\}\\
+ a & z = z_0
+ \end{cases},
+ \]
+ then $g$ is holomorphic on $U$.
+\end{prop}
+
+\begin{proof}
+ Define a new function $h: U \to \C$ by
+ \[
+ h(z) =
+ \begin{cases}
+ (z - z_0)^2 f(z) & z \not= z_0\\
+ 0 & z = z_0
+ \end{cases}.
+ \]
+ Then since $f$ is holomorphic away from $z_0$, we know $h$ is also holomorphic away from $z_0$.
+
+ Also, we know $f$ is bounded near $z_0$. So suppose $|f(z)| < M$ in some neighbourhood of $z_0$. Then we have
+ \[
+ \left|\frac{h(z) - h(z_0)}{ z - z_0}\right| \leq |z - z_0| M.
+ \]
+ So in fact $h$ is also differentiable at $z_0$, and $h(z_0) = h'(z_0) = 0$. So near $z_0$, $h$ has a Taylor series
+ \[
+ h(z) = \sum_{n \geq 0} a_n(z - z_0)^n.
+ \]
+ Since $a_0 = h(z_0) = 0$ and $a_1 = h'(z_0) = 0$, we can define a $g(z)$ by
+ \[
+ g(z) = \sum_{n \geq 0} a_{n + 2} (z - z_0)^n,
+ \]
+ defined on some ball $B(z_0, \rho)$, where the Taylor series for $h$ is defined. By construction, on the punctured ball $B(z_0, \rho) \setminus \{z_0\}$, we get $g(z) = f(z)$. Moreover, $g(z) \to a_2$ as $z \to z_0$. So $f(z) \to a_2$ as $z \to z_0$.
+
+ Since $g$ is a power series, it is holomorphic. So the result follows.
+\end{proof}
+This tells us the only way for a function to fail to be holomorphic at an isolated point is that it blows up near the point. It cannot fail simply because $f$ is discontinuous in some weird way.
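+\begin{eg}
+ The function $f(z) = \frac{\sin z}{z}$ is holomorphic on $\C \setminus \{0\}$ and bounded near $0$, since
+ \[
+ \frac{\sin z}{z} = 1 - \frac{z^2}{3!} + \frac{z^4}{5!} - \cdots \to 1
+ \]
+ as $z \to 0$. So the singularity at $0$ is removable, and setting $f(0) = 1$ gives an entire function.
+\end{eg}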
+
+However, we are not yet done with our classification. There are many ways in which things can blow up. We can further classify these into two cases --- the case where $|f(z)| \to \infty$ as $z \to z_0$, and the case where $|f(z)|$ does not converge as $z \to z_0$. It happens that the first case is almost just as boring as the removable ones.
+
+\begin{prop}
+ Let $U$ be a domain, $z_0 \in U$ and $f: U \setminus \{z_0\} \to \C$ be holomorphic. Suppose $|f(z)| \to \infty$ as $z \to z_0$. Then there is a unique $k \in \Z_{\geq 1}$ and a unique holomorphic function $g: U \to \C$ such that $g(z_0) \not= 0$, and
+ \[
+ f(z) = \frac{g(z)}{(z - z_0)^k}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We shall construct $g$ near $z_0$ in some small neighbourhood, and then apply analytic continuation to the whole of $U$. The idea is that since $f(z)$ blows up nicely as $z \to z_0$, we know $\frac{1}{f(z)}$ behaves sensibly near $z_0$.
+
+ We pick some $\delta > 0$ such that $|f(z)| \geq 1$ for all $z \in B(z_0; \delta) \setminus \{z_0\}$. In particular, $f(z)$ is non-zero on $B(z_0; \delta)\setminus \{z_0\}$. So we can define
+ \[
+ h(z) =
+ \begin{cases}
+ \frac{1}{f(z)} & z \in B(z_0; \delta) \setminus \{z_0\}\\
+ 0 & z = z_0
+ \end{cases}.
+ \]
+ Since $|\frac{1}{f(z)}| \leq 1$ on $B(z_0; \delta) \setminus \{z_0\}$, by the removal of singularities, $h$ is holomorphic on $B(z_0, \delta)$. Since $h$ vanishes at $z_0$, its zero there has a definite order, i.e.\ there is a unique integer $k \geq 1$ such that $h$ has a zero of order $k$ at $z_0$. In other words,
+ \[
+ h(z) = (z - z_0)^k \ell(z),
+ \]
+ for some holomorphic $\ell: B(z_0; \delta) \to \C$ and $\ell(z_0) \not= 0$.
+
+ Now by continuity of $\ell$, there is some $0 < \varepsilon < \delta$ such that $\ell (z) \not= 0$ for all $z \in B(z_0, \varepsilon)$. Now define $g: B(z_0; \varepsilon) \to \C$ by
+ \[
+ g(z) = \frac{1}{\ell(z)}.
+ \]
+ Then $g$ is holomorphic on this disc.
+
+ By construction, at least away from $z_0$, we have
+ \[
+ g(z) = \frac{1}{\ell(z)} = \frac{1}{h(z)} \cdot (z - z_0)^k = (z - z_0)^k f(z).
+ \]
+ $g$ was initially defined on $B(z_0; \varepsilon)$ only, but this last expression certainly makes sense on all of $U \setminus \{z_0\}$. So $g$ admits an analytic continuation from $B(z_0; \varepsilon)$ to $U$. So done.
+\end{proof}
+
+We can start giving these singularities different names. We start by formally defining what it means to be a singularity.
+\begin{defi}[Isolated singularity]
+ Given a domain $U$ and $z_0 \in U$, and $f: U \setminus \{z_0\} \to \C$ holomorphic, we say $z_0$ is an \emph{isolated singularity} of $f$.
+\end{defi}
+
+\begin{defi}[Removable singularity]
+ A singularity $z_0$ of $f$ is a \emph{removable singularity} if $f$ is bounded near $z_0$.
+\end{defi}
+
+\begin{defi}[Pole]
+ A singularity $z_0$ is a \emph{pole of order $k$} of $f$ if $|f(z)| \to \infty$ as $z \to z_0$ and one can write
+ \[
+ f(z) = \frac{g(z)}{(z - z_0)^k}
+ \]
+ with $g: U \to \C$, $g(z_0) \not= 0$.
+\end{defi}
+
+\begin{defi}[Isolated essential singularity]
+ An isolated singularity is an \emph{isolated essential singularity} if it is neither removable nor a pole.
+\end{defi}
+
+It is easy to give examples of removable singularities and poles. So let's look at some essential singularities.
+\begin{eg}
+ $z \mapsto e^{1/z}$ has an isolated essential singularity at $z = 0$.
+\end{eg}
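+To see why, note that along the positive real axis, $e^{1/x} \to \infty$ as $x \to 0^+$, while along the negative real axis, $e^{1/x} \to 0$ as $x \to 0^-$. So $|e^{1/z}|$ is neither bounded near $0$ (ruling out a removable singularity) nor tending to $\infty$ (ruling out a pole).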
+
+Note that if $f: B(z_0, \varepsilon) \setminus \{z_0\} \to \C$ has a pole of order $k$ at $z_0$, then $f$ naturally defines a map $\hat{f}: B (z_0; \varepsilon) \to \CP^1 = \C\cup \{\infty\}$, the Riemann sphere, by
+\[
+ \hat{f}(z) =
+ \begin{cases}
+ \infty & z = z_0\\
+ f(z) & z \not= z_0
+ \end{cases}.
+\]
+This is then a ``continuous'' function. So a pole is just a point that gets mapped to the point $\infty$.
+
+As was emphasized in IA Groups, the point at infinity is not a special point in the Riemann sphere. Similarly, poles are also not really singularities from the viewpoint of the Riemann sphere. It's just that we are looking at it in a wrong way. Indeed, if we change coordinates on the Riemann sphere so that we label each point $w \in \CP^1$ by $w' = \frac{1}{w}$ instead, then $\hat{f}$ just maps $z_0$ to $0$ under the new coordinate system. In particular, at the point $z_0$, we find that $\hat{f}$ is holomorphic and has an innocent zero of order $k$.
+
+Since poles are not bad, we might as well allow them.
+\begin{defi}[Meromorphic function]
+ If $U$ is a domain and $S \subseteq U$ is a finite or discrete set, a function $f: U \setminus S \to \C$ which is holomorphic and has (at worst) poles on $S$ is said to be \emph{meromorphic} on $U$.
+\end{defi}
+The requirement that $S$ is discrete is so that each pole in $S$ is actually an isolated singularity.
+
+\begin{eg}
+ A rational function $\frac{P(z)}{Q(z)}$, where $P, Q$ are polynomials, is holomorphic on $\C \setminus \{z: Q(z) = 0\}$, and meromorphic on $\C$. More is true --- it is in fact holomorphic as a function $\CP^1 \to \CP^1$.
+\end{eg}
+These ideas are developed more in depth in the IID Riemann Surfaces course.
+
+As an aside, if we want to get an interesting holomorphic function with domain $\CP^1$, its image must contain the point $\infty$, or else its image will be a compact subset of $\C$ (since $\CP^1$ is compact), thus bounded, and therefore constant by Liouville's theorem.
+
+At this point, we really should give essential singularities their fair share of attention. Not only are they bad. They are bad \emph{spectacularly}.
+
+\begin{thm}[Casorati--Weierstrass theorem]
+ Let $U$ be a domain, $z_0 \in U$, and suppose $f: U \setminus \{z_0\} \to \C$ has an essential singularity at $z_0$. Then for all $w \in \C$, there is a sequence $z_n \to z_0$ such that $f(z_n) \to w$.
+
+ In other words, on any punctured neighbourhood $B(z_0; \varepsilon) \setminus \{z_0\}$, the image of $f$ is dense in $\C$.
+\end{thm}
+This is not actually too hard to prove.
+
+\begin{proof}
+ See example sheet 2.
+\end{proof}
+
+If you think that was bad, essential singularities are actually worse than that. The theorem only tells us the image is dense, but not that we will hit every point. It is in fact \emph{not} true that every point will get hit. For example, $e^{\frac{1}{z}}$ can never be zero. However, this is the worst that can happen:
+
+\begin{thm}[Picard's theorem]
+ If $f$ has an isolated essential singularity at $z_0$, then there is some $b \in \C$ such that on each punctured neighbourhood $B(z_0; \varepsilon)\setminus \{z_0\}$, the image of $f$ contains $\C\setminus \{b\}$.
+\end{thm}
+The proof is beyond this course.
+
+\subsection{Laurent series}
+If $f$ is holomorphic at $z_0$, then we have a local power series expansion
+\[
+ f(z) = \sum_{n = 0}^\infty c_n (z - z_0)^n
+\]
+near $z_0$. If $f$ is singular at $z_0$ (and the singularity is not removable), then there is no hope we can get a Taylor series, since the existence of a Taylor series would imply $f$ is holomorphic at $z = z_0$.
+
+However, it turns out we can get a series expansion if we allow ourselves to have negative powers of $z$.
+
+\begin{thm}[Laurent series]
+ Let $0 \leq r < R < \infty$, and let
+ \[
+ A = \{z \in \C: r < |z - a| < R\}
+ \]
+ denote an annulus on $\C$.
+
+ Suppose $f: A \to \C$ is holomorphic. Then $f$ has a (unique) convergent series expansion
+ \[
+ f(z) = \sum_{n = -\infty}^\infty c_n (z - a)^n,
+ \]
+ where
+ \[
+ c_n = \frac{1}{2\pi i} \int_{\partial \overline{B(a, \rho)}} \frac{f(z)}{(z - a)^{n + 1}}\;\d z
+ \]
+ for $r < \rho < R$. Moreover, the series converges uniformly on compact subsets of the annulus.
+\end{thm}
+
+The Laurent series provides another way of classifying singularities. In the case where $r = 0$, we just have
+\[
+ f(z) = \sum_{-\infty}^\infty c_n (z - a)^n
+\]
+on $B(a, R) \setminus \{a\}$, and we have the following possible scenarios:
+\begin{enumerate}
+ \item $c_n = 0$ for all $n < 0$. Then $f$ is bounded near $a$, and hence this is a removable singularity.
+ \item Only finitely many negative coefficients are non-zero, i.e.\ there is a $k \geq 1$ such that $c_n = 0$ for all $n < -k$ and $c_{-k} \not= 0$. Then $f$ has a pole of order $k$ at $a$.
+ \item There are infinitely many non-zero negative coefficients. Then we have an isolated essential singularity.
+\end{enumerate}
+So our classification of singularities fits nicely with the Laurent series expansion.
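+\begin{eg}
+ We can read off the type of singularity at $0$ directly from the Laurent series. The series
+ \[
+ \frac{\sin z}{z} = 1 - \frac{z^2}{3!} + \frac{z^4}{5!} - \cdots
+ \]
+ has no negative terms, so the singularity is removable. The series
+ \[
+ \frac{e^z}{z^2} = \frac{1}{z^2} + \frac{1}{z} + \frac{1}{2!} + \frac{z}{3!} + \cdots
+ \]
+ has lowest term $z^{-2}$, so there is a pole of order $2$. Finally,
+ \[
+ e^{1/z} = \sum_{n = 0}^\infty \frac{1}{n!} z^{-n}
+ \]
+ has infinitely many non-zero negative terms, so there is an isolated essential singularity.
+\end{eg}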
+
+We can interpret the Laurent series as follows --- we can write
+\[
+ f(z) = f_{\mathrm{in}}(z) + f_{\mathrm{out}}(z),
+\]
+where $f_{\mathrm{in}}$ consists of the terms with non-negative powers and $f_{\mathrm{out}}$ consists of those with negative powers. Then $f_{\mathrm{in}}$ is the part that is holomorphic on the disk $|z - a| < R$, while $f_{\mathrm{out}}(z)$ is the part that is holomorphic on $|z - a| > r$. These two combine to give an expression holomorphic on $r < |z - a| < R$. This is just a nice way of thinking about it, and we will not use this anywhere. So we will not give a detailed proof of this interpretation.
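+\begin{eg}
+ Take $f(z) = \frac{1}{z(1 - z)}$ on the annulus $0 < |z| < 1$. Partial fractions give
+ \[
+ \frac{1}{z(1 - z)} = \frac{1}{z} + \frac{1}{1 - z} = \frac{1}{z} + \sum_{n = 0}^\infty z^n.
+ \]
+ Here $f_{\mathrm{out}}(z) = \frac{1}{z}$ is holomorphic on $|z| > 0$, while $f_{\mathrm{in}}(z) = \sum z^n$ is holomorphic on $|z| < 1$.
+\end{eg}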
+
+\begin{proof}
+ The proof looks very much like the blend of the two proofs we've given for the Cauchy integral formula. In one of them, we took a power series expansion of the integrand, and in the second, we changed our contour by cutting it up. This is like a mix of the two.
+
+ Let $w \in A$. We let $r < \rho' < | w- a| < \rho'' < R$.
+ \begin{center}
+ \begin{tikzpicture}%[rotate=10]
+ \draw [fill opacity=0.5, fill=mblue] circle [radius=3];
+ \draw [fill=white] circle [radius=1];
+
+ \node at (0, 0) [right] {$a$};
+ \node [circ] at (0, 0) {};
+
+ \draw [mred, ->-=0, ->-=0.17, ->-=0.51, ->-=0.81] (1.31, 0) arc(0:-90:1.3) -- (0.01, -2.6) arc(-90:90:2.6) node [pos=0.5, right] {$\tilde{\gamma}$} -- (0.01, 1.3) arc(90:0:1.3);
+ \draw [mgreen, ->-=0, ->-=0.17, ->-=0.51, ->-=0.81] (-1.31, 0) arc(180:90:1.3) -- (-0.01, 2.6) arc(90:270:2.6) node [pos=0.5, left] {$\tilde{\tilde{\gamma}}$} -- (-0.01, -1.3) arc(270:180:1.3);
+
+ \node [circ] at (2, 0) {};
+ \node [right] at (2, 0) {$w$};
+
+ \draw [dashed] (0, 0) -- (0.65, 1.13) node [pos=0.6, left] {$\rho'$};
+ \draw [dashed] (0, 0) -- (2.25, 1.3) node [pos=0.6, above] {$\rho''$};
+ \end{tikzpicture}
+ \end{center}
+ We let $\tilde{\gamma}$ be the contour containing $w$, and $\tilde{\tilde{\gamma}}$ be the other contour.
+
+ Now we apply the Cauchy integral formula to say
+ \[
+ f(w) = \frac{1}{2\pi i} \int_{\tilde{\gamma}}\frac{f(z)}{z - w}\;\d z
+ \]
+ and
+ \[
+ 0 = \frac{1}{2\pi i} \int_{\tilde{\tilde{\gamma}}} \frac{f(z)}{z - w} \;\d z.
+ \]
+ So we get
+ \[
+ f(w) = \frac{1}{2\pi i} \int_{\partial B(a, \rho'')} \frac{f(z)}{z - w}\;\d z - \frac{1}{2\pi i} \int_{\partial B(a, \rho')} \frac{f(z)}{z - w}\;\d z.
+ \]
+ As in the first proof of the Cauchy integral formula, we make the following expansions: for the first integral, we have $|w - a| < |z - a|$. So
+ \[
+ \frac{1}{z - w} = \frac{1}{z - a} \left(\frac{1}{1 - \frac{w - a}{z - a}}\right) = \sum_{n = 0}^\infty \frac{(w - a)^n}{(z - a)^{n + 1}},
+ \]
+ which is uniformly convergent on $z \in \partial B(a, \rho'')$.
+
+ For the second integral, we have $|w - a| > |z - a|$. So
+ \[
+ \frac{-1}{z - w} = \frac{1}{w - a} \left(\frac{1}{1 - \frac{z - a}{w - a}}\right) = \sum_{m = 1}^\infty \frac{(z - a)^{m - 1}}{(w - a)^m},
+ \]
+ which is uniformly convergent for $z \in \partial B(a, \rho')$.
+
+ By uniform convergence, we can swap summation and integration. So we get
+ \begin{align*}
+ f(w) ={}& \sum_{n = 0}^\infty \left(\frac{1}{2 \pi i} \int_{\partial B(a, \rho'')} \frac{f(z)}{(z - a)^{n + 1}} \;\d z\right) (w - a)^n \\
+ &+ \sum_{m = 1}^\infty \left(\frac{1}{2\pi i} \int_{\partial B(a, \rho')}\frac{f(z)}{(z - a)^{-m + 1}} \;\d z\right) (w - a)^{-m}.
+ \end{align*}
+ Now we substitute $n = -m$ in the second sum, and get
+ \[
+ f(w) = \sum_{n = -\infty}^\infty \tilde{c}_n (w - a)^n,
+ \]
+ for the integrals $\tilde{c}_n$. However, some of the coefficients are integrals around the $\rho''$ circle, while the others are around the $\rho'$ circle. This is not a problem. For any $r < \rho < R$, these circles are convex deformations of $|z - a| = \rho$ inside the annulus $A$. So
+ \[
+ \int_{\partial B(a, \rho)} \frac{f(z)}{(z - a)^{n + 1}} \;\d z
+ \]
+ is independent of $\rho$ as long as $\rho \in (r, R)$. So we get the result stated.
+\end{proof}
+
+\begin{defi}[Principal part]
+ If $f: B(a, r)\setminus \{a\} \to \C$ is holomorphic and if $f$ has Laurent series
+ \[
+ f(z) = \sum_{n = -\infty}^\infty c_n (z - a)^n,
+ \]
+ then the \emph{principal part} of $f$ at $a$ is
+ \[
+ f_{\mathrm{principal}} = \sum_{n = -\infty}^{-1} c_n (z - a)^n.
+ \]
+\end{defi}
+So $f - f_{\mathrm{principal}}$ is holomorphic near $a$, and $f_{\mathrm{principal}}$ carries the information of what kind of singularity $f$ has at $a$.
+
+When we talked about Taylor series, if $f: B(a, r) \to \C$ is holomorphic with Taylor series $f(z) = \sum_{n = 0}^\infty c_n(z - a)^n$, then we had two possible ways of expressing the coefficients of $c_n$. We had
+\[
+ c_n = \frac{1}{2\pi i} \int_{\partial B(a, \rho)} \frac{f(z)}{(z - a)^{n + 1}}\;\d z = \frac{f^{(n)}(a)}{n!}.
+\]
+In particular, the second expansion makes it obvious the Taylor series is uniquely determined by $f$.
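As an aside (not part of the notes themselves), the contour-integral formula for the coefficients is easy to check numerically. The sketch below approximates the integral by the trapezoidal rule on the circle $|z - a| = \rho$ (the radius and sample count are arbitrary choices) and compares it with $f^{(n)}(a)/n!$ for $f = \exp$, where both formulas should give $1/n!$:

```python
import cmath
import math

def taylor_coeff(f, a, n, rho, samples=4000):
    """Approximate c_n = (1/2 pi i) * integral of f(z)/(z-a)^(n+1) over |z-a| = rho.

    On the circle z = a + rho*e^{i theta} we have dz = i(z-a) d theta, so the
    2 pi i cancels and c_n is just the average of f(z)/(z-a)^n over theta.
    """
    total = 0
    for k in range(samples):
        z = a + rho * cmath.exp(2j * cmath.pi * k / samples)
        total += f(z) / (z - a) ** n
    return total / samples

# For f = exp about a = 0, the n-th Taylor coefficient is 1/n!.
for n in range(6):
    c_n = taylor_coeff(cmath.exp, 0, n, 1.0)
    assert abs(c_n - 1 / math.factorial(n)) < 1e-9
```

The trapezoidal rule is extremely accurate here because the integrand is smooth and periodic in $\theta$.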
+
+For the Laurent series, we cannot expect to have a simple expression of the coefficients in terms of the derivatives of the function, for the very reason that $f$ is not even defined, let alone differentiable, at $a$. So is the Laurent series unique?
+
+\begin{lemma}
+ Let $f: A \to \C$ be holomorphic, $A = \{r < |z - a| < R\}$, with
+ \[
+ f(z) = \sum_{n = -\infty}^{\infty} c_n(z - a)^n
+ \]
+ Then the coefficients $c_n$ are uniquely determined by $f$.
+\end{lemma}
+
+\begin{proof}
+ Suppose also that
+ \[
+ f(z) = \sum_{n = -\infty}^\infty b_n (z - a)^n.
+ \]
+ Using our formula for $c_k$, we know
+ \begin{align*}
+ 2\pi i c_k &= \int_{\partial B(a, \rho)} \frac{f(z)}{(z - a)^{k + 1}} \;\d z \\
+ &= \int_{\partial B(a, \rho)} \left(\sum_n b_n (z - a)^{n - k - 1}\right)\;\d z\\
+ &= \sum_n b_n \int_{\partial B(a, \rho)} (z - a)^{n - k - 1}\;\d z\\
+ &= 2\pi i b_k.
+ \end{align*}
+ So $c_k = b_k$.
+\end{proof}
+While we do have uniqueness, we still don't know how to find a Laurent series. For a Taylor series, we can just keep differentiating and then get the coefficients. For Laurent series, the above integral is often almost impossible to evaluate. So the technique to compute a Laurent series is blind guesswork.
+
+\begin{eg}
+ We know
+ \[
+ \sin z = z - \frac{z^3}{3!} + \frac{z^5}{5!} - \cdots
+ \]
+ defines a holomorphic function, with a radius of convergence of $\infty$. Now consider
+ \[
+ \cosec z = \frac{1}{\sin z},
+ \]
+ which is holomorphic except for $z = k\pi$, with $k \in \Z$. So $\cosec z$ has a Laurent series near $z = 0$. Using
+ \[
+ \sin z = z\left(1 - \frac{z^2}{6} + O(z^4)\right),
+ \]
+ we get
+ \[
+ \cosec z = \frac{1}{z} \left(1 + \frac{z^2}{6} + O(z^4)\right).
+ \]
+ From this, we can read off that the Laurent series has $c_n = 0$ for all $n \leq -2$, $c_{-1} = 1$ and $c_1 = \frac{1}{6}$. If we want, we can go further, but we already see that $\cosec$ has a simple pole at $z = 0$.
+
+ By periodicity, $\cosec$ has a simple pole at all other singularities.
+\end{eg}
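Since this is our first Laurent series obtained by guesswork, it is reassuring to check the coefficients numerically (an aside, not part of the notes; the radius and sample count are arbitrary). The sketch below recovers each $c_n$ from the defining contour integral, written as an average over a circle:

```python
import cmath

def laurent_coeff(f, n, rho, samples=4000):
    # c_n = (1/2 pi i) * integral of f(z)/z^(n+1) over |z| = rho; with
    # z = rho*e^{i theta} and dz = iz d theta this is the average of
    # f(z)/z^n over the circle.
    total = 0
    for k in range(samples):
        z = rho * cmath.exp(2j * cmath.pi * k / samples)
        total += f(z) / z ** n
    return total / samples

cosec = lambda z: 1 / cmath.sin(z)

assert abs(laurent_coeff(cosec, -1, 1.0) - 1) < 1e-9     # simple pole: c_{-1} = 1
assert abs(laurent_coeff(cosec, -2, 1.0)) < 1e-9         # no 1/z^2 term
assert abs(laurent_coeff(cosec, 1, 1.0) - 1 / 6) < 1e-9  # c_1 = 1/6
```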
+
+\begin{eg}
+ Consider instead
+ \[
+ \sin \left(\frac{1}{z}\right) = \frac{1}{z} - \frac{1}{3! z^3} + \frac{1}{5! z^5} - \cdots.
+ \]
+ We see this is holomorphic on $\C^*$, with $c_n \not= 0$ for infinitely many $n < 0$. So this has an isolated essential singularity.
+\end{eg}
+
+\begin{eg}
+ Consider $\cosec \left(\frac{1}{z}\right)$. This has singularities at $z = \frac{1}{k\pi}$ for $k \in \Z \setminus \{0\}$, and these accumulate at $0$. So it is not holomorphic on any punctured neighbourhood $B(0, r)\setminus \{0\}$ of zero. So this has a \emph{non-isolated} singularity at zero, and there is \emph{no} Laurent series in a neighbourhood of zero.
+\end{eg}
+
+We've already done most of the theory. In the remainder of the course, we will use these techniques to \emph{do stuff}. We will spend most of our time trying to evaluate integrals, but before that, we will take a quick look at how we can use Laurent series to evaluate some series.
+
+\begin{eg}[Series summation]
+ We claim that
+ \[
+ f(z) = \sum_{n = -\infty}^\infty \frac{1}{(z - n)^2}
+ \]
+ is holomorphic on $\C \setminus \Z$, and moreover that
+ \[
+ f(z) = \frac{\pi^2}{\sin^2(\pi z)}.
+ \]
+ We will reserve the name $f$ for the original series, and refer to the function $z \mapsto \frac{\pi^2}{\sin^2 (\pi z)}$ as $g$ instead, until we have proven that they are the same.
+
+ Our strategy is as follows --- we first show that $f(z)$ converges and is holomorphic, which is not hard, given the Weierstrass $M$-test and Morera's theorem. To show that indeed we have $f(z) = g(z)$, we first show that they have equal principal part, so that $f(z) - g(z)$ is entire. We then show it is zero by proving $f - g$ is bounded, hence constant, and that $f(z) - g(z) \to 0$ as $z \to \infty$ (in some appropriate direction).
+
+ For any fixed $w \in \C \setminus \Z$, we can compare it with $\sum \frac{1}{n^2}$ and apply the Weierstrass $M$-test. We pick $r > 0$ such that $|w - n| > 2r$ for all $n \in \Z$. Then for all $z \in B(w; r)$, we have
+ \[
+ |z - n| \geq \max\{r, |n| - |w| - r\}.
+ \]
+ Hence
+ \[
+ \frac{1}{|z - n|^2} \leq \min\left\{\frac{1}{r^2}, \frac{1}{(|n| - |w| - r)^2}\right\} = M_n.
+ \]
+ By comparison to $\sum \frac{1}{n^2}$, we know $\sum_n M_n$ converges. So by the Weierstrass $M$-test, we know our series converges uniformly on $B(w, r)$.
+
+ By our results around Morera's theorem, we see that $f$ is a uniform limit of holomorphic functions $\sum_{n = -N}^N \frac{1}{(z - n)^2}$, and hence holomorphic.
+
+ Since $w$ was arbitrary, we know $f$ is holomorphic on $\C \setminus \Z$. Note that we do not say the sum converges uniformly on $\C \setminus \Z$. It's just that for any point $w \in \C \setminus \Z$, there is a small neighbourhood of $w$ on which the sum is uniformly convergent, and this is sufficient to apply Morera's theorem.
+
+ For the second part, note that $f$ is periodic, since $f(z + 1) = f(z)$. Also, at $0$, $f$ has a double pole, since $f(z) = \frac{1}{z^2} + $ holomorphic stuff near $z = 0$. So $f$ has a double pole at each $k \in \Z$. Note that $\frac{1}{\sin^2 (\pi z)}$ also has a double pole at each $k \in \Z$.
+
+ Now, consider the principal parts of our functions --- at $k \in \Z$, $f(z)$ has principal part $\frac{1}{(z - k)^2}$. Looking at our previous Laurent series for $\cosec (z)$, if
+ \[
+ g(z) = \left(\frac{\pi}{\sin \pi z}\right)^2,
+ \]
+ then $\lim_{z \to 0} z^2 g(z) = 1$. Moreover, $g$ is even, so there is no $\frac{1}{z}$ term. So $g(z)$ has the same principal part $\frac{1}{z^2}$ at $0$, and hence the same principal part at every $k \in \Z$ by periodicity.
+
+ Thus $h(z) = f(z) - g(z)$ is holomorphic on $\C \setminus \Z$. However, since its principal part vanishes at the integers, it has at worst a removable singularity. Removing the singularity, we know $h(z)$ is entire.
+
+ Since we want to prove $f(z) = g(z)$, we need to show $h(z) = 0$.
+
+ We first show that it is bounded. We know $f$ and $g$ are both periodic with period $1$. So it suffices to focus attention on the strip
+ \[
+ -\frac{1}{2} \leq x = \Re(z) \leq \frac{1}{2}.
+ \]
+ To show $h$ is bounded on this strip, it suffices by continuity to show that $h(x + iy) \to 0$ as $y \to \pm\infty$. To do so, we show that $f$ and $g$ both vanish as $y \to \pm\infty$.
+
+ So we set $z = x + iy$, with $|x| \leq \frac{1}{2}$. Then we have
+ \[
+ |g(z)| \leq \frac{4\pi^2}{(e^{\pi y} - e^{-\pi y})^2} \to 0
+ \]
+ as $y \to \pm\infty$. Exactly analogously,
+ \[
+ |f(z)| \leq \sum_{n \in \Z} \frac{1}{|x + iy - n|^2} \leq \frac{1}{y^2} + 2 \sum_{n = 1}^\infty \frac{1}{(n - \frac{1}{2})^2 + y^2} \to 0
+ \]
+ as $y \to \pm\infty$. So $h$ is bounded on the strip, hence by periodicity bounded on all of $\C$, and is therefore constant by Liouville's theorem. But since $h \to 0$ as $y \to \infty$, the constant had better be zero. So we get
+ \[
+ h(z) = 0.
+ \]
+\end{eg}
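As an aside, the identity we just proved lends itself to a numerical check (the evaluation point, truncation $N$ and tolerance below are arbitrary choices). Truncating the series at $\pm N$ leaves a tail of order $1/N$:

```python
import cmath

def f_partial(z, N=50000):
    # Partial sum of the series sum_{n=-N}^{N} 1/(z-n)^2;
    # the discarded tail is O(1/N).
    return sum(1 / (z - n) ** 2 for n in range(-N, N + 1))

g = lambda z: (cmath.pi / cmath.sin(cmath.pi * z)) ** 2

z = 0.3 + 0.2j
assert abs(f_partial(z) - g(z)) < 1e-3
```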
+
+\section{Residue calculus}
+\subsection{Winding numbers}
+Recall that the type of the singularity of a point depends on the coefficients in the Laurent series, and these coefficients play an important role in determining the behaviour of the functions. Among all the infinitely many coefficients, it turns out the coefficient of $z^{-1}$ is the most important one, as we will soon see. We call this the \emph{residue} of $f$.
+
+\begin{defi}[Residue]
+ Let $f: B (a, r) \setminus \{a\} \to \C$ be holomorphic, with Laurent series
+ \[
+ f(z) = \sum_{n = -\infty}^\infty c_n (z - a)^n.
+ \]
+ Then the \emph{residue} of $f$ at $a$ is
+ \[
+ \Res(f, a) = \Res_f(a) = c_{-1}.
+ \]
+\end{defi}
+Note that if $\rho < r$, then by definition of the Laurent coefficients, we know
+\[
+ \int_{\partial \overline{B(a, \rho)}} f(z)\;\d z = 2\pi i c_{-1}.
+\]
+So we can alternatively write the residue as
+\[
+ \Res_f(a) = \frac{1}{2 \pi i}\int_{\partial \overline{B(a, \rho)}} f(z)\;\d z.
+\]
+This gives us a formulation of the residue without reference to the Laurent series.
+
+Deforming paths if necessary, it is not too far-fetched to imagine that for \emph{any} simple curve $\gamma$ around the singularity $a$, we have
+\[
+ \int_{\gamma} f(z)\;\d z = 2\pi i\Res(f, a).
+\]
+Moreover, if the path actually encircles two singularities $a$ and $b$, then deforming the path, we would expect to have
+\[
+ \int_{\gamma} f(z)\;\d z = 2\pi i(\Res(f, a) + \Res(f, b)),
+\]
+and this generalizes to multiple singularities in the obvious way.
+
+If this were true, then it would be very helpful, since this turns integration into addition, which is (hopefully) much easier!
+
+Indeed, we will soon prove that this result holds. However, we first get rid of the technical restriction that we only work with simple (i.e.\ non-self-intersecting) curves. This restriction is not needed at all. We are not actually worried about the curve intersecting itself. The reason why we've always talked about simple closed curves is that we want to avoid the curve going around the same point many times.
+
+There is a simple workaround to this problem --- we consider arbitrary curves, and then count how many times we are looping around the point. If we are looping around it twice, then we count its contribution twice!
+
+Indeed, suppose we have the following curve around a singularity:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.45] plot [smooth cycle, tension=1] coordinates {(0.5, -1) (-0.6, 0) (-0.8, -1.2) (0.8, -1.2) (0.6, 0) (-0.5, -1)};
+ \node [circ] at (0, -0.8) {};
+ \node [below] at (0, -0.8) {$a$};
+ \end{tikzpicture}
+\end{center}
+We see that the curve loops around $a$ twice. Also, by the additivity of the integral, we can break this curve into two closed contours. So we have
+\[
+ \frac{1}{2\pi i} \int_\gamma f(z)\;\d z = 2 \Res_f(a).
+\]
+So what we want to do now is to define properly what it means for a curve to loop around a point $n$ times. This will be called the winding number.
+
+There are many ways we can define the winding number. The definition we will pick is based on the following observation --- suppose, for convenience, that the point in question is the origin. As we move along a simple closed curve around $0$, our argument will change. If we keep track of our argument continuously, then we will find that when we return to the starting point, the argument would have increased by $2\pi$. If we have a curve that winds around the point twice, then our argument will increase by $4\pi$.
+
+What we do is exactly the above --- given a path, find a continuous function that gives the ``argument'' of the path, and then define the winding number to be the difference between the argument at the start and end points, divided by $2\pi$.
+
+For this to make sense, there are two important things to prove. First, we need to show that there is indeed a continuous ``argument'' function of the curve, in a sense made precise in the lemma below. Then we need to show the winding number is well-defined, but that is easier.
+\begin{lemma}
+ Let $\gamma: [a, b] \to \C$ be a continuous closed curve, and pick a point $w \in \C \setminus \image (\gamma)$. Then there are continuous functions $r: [a, b] \to \R_{>0}$ and $\theta: [a, b] \to \R$ such that
+ \[
+ \gamma(t) = w + r(t) e^{i\theta(t)}.
+ \]
+\end{lemma}
+Of course, at each point $t$, we can find $r$ and $\theta$ such that the above holds. The key point of the lemma is that we can do so continuously.
+
+\begin{proof}
+ Clearly $r(t) = |\gamma(t) - w|$ exists and is continuous, since it is the composition of continuous functions. Note that this is never zero since $\gamma(t)$ is never $w$. The actual content is in defining $\theta$.
+
+ To define $\theta(t)$, we for simplicity assume $w = 0$. Furthermore, by considering instead the function $\frac{\gamma(t)}{r(t)}$, which is continuous and well-defined since $r$ is never zero, we can assume $|\gamma(t)| = 1$ for all $t$.
+
+ Recall that the principal branch of $\log$, and hence of the argument $\Im (\log)$, takes values in $(-\pi, \pi)$ and is defined on $\C \setminus \R_{\leq 0}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (0, 2) rectangle (2, -2);
+ \draw [->] (0, 0) -- (2, 0);
+ \draw [thick] (-2, 0) -- (0, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ If $\gamma(t)$ always lay in, say, the right-hand half plane, we would have no problem defining $\theta$ consistently, since we can just let
+ \[
+ \theta(t) = \arg(\gamma(t))
+ \]
+ for $\arg$ the principal branch. There is nothing special about the right-hand half plane. Similarly, if $\gamma$ lies in the region as shaded below:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -2) -- (0, 2);
+
+ \fill [mblue, opacity=0.5] (-1, 2) -- (1, -2) -- (2, -2) -- (2, 2) -- cycle;
+ \draw (-1, 2) -- (1, -2);
+
+ \draw (0, 0.8) arc(90:116.56:0.8) node [pos=0.5, above] {$\alpha$};
+ \end{tikzpicture}
+ \end{center}
+ i.e.\ we have
+ \[
+ \gamma(t) \in \left\{z : \Re\left(\frac{z}{e^{i\alpha}}\right) > 0\right\}
+ \]
+ for a fixed $\alpha$, we can define
+ \[
+ \theta(t) = \alpha + \arg\left(\frac{\gamma(t)}{e^{i\alpha}}\right).
+ \]
+ Since $\gamma: [a, b] \to \C$ is continuous, it is uniformly continuous, and we can find a subdivision
+ \[
+ a = a_0 < a_1 < \cdots < a_m = b,
+ \]
+ such that if $s, t \in [a_{i - 1}, a_i]$, then $|\gamma(s) - \gamma(t)| < \sqrt{2}$, and hence $\gamma(s)$ and $\gamma(t)$ belong to such a half-plane.
+
+ So we define $\theta_j: [a_{j - 1}, a_j] \to \R$ such that
+ \[
+ \gamma(t) = e^{i\theta_j(t)}
+ \]
+ for $t \in [a_{j - 1}, a_j]$ and $1 \leq j \leq m$.
+
+ On each interval $[a_{j - 1}, a_j]$, this gives a continuous argument function. We cannot immediately extend this to the whole of $[a, b]$, since it is entirely possible that $\theta_j(a_j) \not= \theta_{j + 1}(a_j)$. However, we do know that $\theta_j(a_j)$ and $\theta_{j + 1}(a_j)$ are both values of the argument of $\gamma(a_j)$. So they must differ by an integer multiple of $2\pi$, say $2n \pi$. Then we can just replace $\theta_{j + 1}$ by $\theta_{j + 1} - 2n \pi$, which is an equally valid argument function, and then the two functions will agree at $a_j$.
+
+ Hence, for $j > 1$, we can successively re-define $\theta_j$ such that the resulting map $\theta$ is continuous. Then we are done.
+\end{proof}
+
+We can thus use this to define the winding number.
+
+\begin{defi}[Winding number]
+ Given a continuous path $\gamma: [a, b] \to \C$ such that $\gamma(a) = \gamma(b)$ and $w \not\in \image(\gamma)$, the \emph{winding number} of $\gamma$ about $w$ is
+ \[
+ \frac{\theta(b) - \theta(a)}{2\pi},
+ \]
+ where $\theta: [a, b] \to \R$ is a continuous function as above. This is denoted by $I(\gamma, w)$ or $n_\gamma(w)$.
+\end{defi}
+$I$ and $n$ stand for index and number respectively.
+
+Note that we always have $I(\gamma, w) \in \Z$, since $\theta(b)$ and $\theta(a)$ are arguments of the same number. More importantly, $I(\gamma, w)$ is well-defined --- suppose $\gamma(t) = r(t) e^{i \theta_1(t)} = r(t) e^{i\theta_2(t)}$ for continuous functions $\theta_1, \theta_2: [a, b] \to \R$. Then $\theta_1 - \theta_2 : [a, b] \to \R$ is continuous, but takes values in the discrete set $2\pi \Z$. So it must in fact be constant, and thus $\theta_1(b) - \theta_1(a) = \theta_2(b) - \theta_2(a)$.
+
+So far, what we've done is something that is true for arbitrary continuous closed curve. However, if we focus on piecewise $C^1$-smooth closed path, then we get an alternative expression:
+\begin{lemma}
+ Suppose $\gamma: [a, b] \to \C$ is a piecewise $C^1$-smooth closed path, and $w \not\in \image(\gamma)$. Then
+ \[
+ I(\gamma, w) = \frac{1}{2\pi i} \int_\gamma \frac{1}{z - w} \;\d z.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $\gamma(t) - w = r(t) e^{i \theta(t)}$, with now $r$ and $\theta$ piecewise $C^1$-smooth. Then
+ \begin{align*}
+ \int_\gamma \frac{1}{z - w}\;\d z &= \int_a^b \frac{\gamma'(t)}{\gamma(t) - w}\;\d t\\
+ &= \int_a^b \left(\frac{r'(t)}{r(t)} + i \theta'(t)\right) \;\d t\\
+ &= [\ln r(t) + i\theta(t)]^b_a\\
+ &= i(\theta(b) - \theta(a))\\
+ &= 2\pi i I(\gamma, w).
+ \end{align*}
+ So done.
+\end{proof}
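As a quick sanity check of this lemma (an aside; the curve and sample count are arbitrary illustrative choices), the integral expression really does count loops: the curve $\gamma(t) = e^{2it}$ on $[0, 2\pi]$ goes around $0$ twice, and around any point outside the unit circle zero times.

```python
import cmath

def winding_number(gamma, dgamma, w, a=0.0, b=2 * cmath.pi, samples=5000):
    # I(gamma, w) = (1/2 pi i) * integral over [a,b] of gamma'(t)/(gamma(t)-w) dt,
    # by the trapezoidal rule (the endpoints coincide since gamma is closed,
    # so a plain average of the samples works).
    h = (b - a) / samples
    total = sum(dgamma(a + k * h) / (gamma(a + k * h) - w) for k in range(samples))
    return (total * h / (2j * cmath.pi)).real

gamma = lambda t: cmath.exp(2j * t)          # traverses the unit circle twice
dgamma = lambda t: 2j * cmath.exp(2j * t)

assert abs(winding_number(gamma, dgamma, 0) - 2) < 1e-6
assert abs(winding_number(gamma, dgamma, 3)) < 1e-6
```

The second assertion also illustrates the later observation that the winding number vanishes on the unbounded component of $\C \setminus \image(\gamma)$.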
+
+In some books, this integral expression is taken as the definition of the winding number. While this is elegant in complex analysis, it is not clear \emph{a priori} that the expression is an integer, and it only works for piecewise $C^1$-smooth closed curves, not arbitrary continuous closed curves.
+
+On the other hand, what is evident from this expression is that $I(\gamma, w)$ is continuous as a function of $w \in \C \setminus \image(\gamma)$, since it is even holomorphic as a function of $w$. Since $I(\gamma, w)$ is integer valued, it must be constant on each path component of $\C \setminus \image(\gamma)$.
+
+We can quickly verify that this is a sensible definition, in that the winding number around a point ``outside'' the curve is zero. More precisely, since $\image(\gamma)$ is compact, all points of sufficiently large modulus in $\C$ belong to one component of $\C \setminus \image(\gamma)$. This is indeed the only path component of $\C\setminus \image(\gamma)$ that is unbounded.
+
+To find the winding number about a point in this unbounded component, note that $I(\gamma, w)$ is constant on this component, so we can consider arbitrarily large $w$. By the integral formula,
+\[
+ |I(\gamma, w)| \leq \frac{1}{2\pi} \length(\gamma) \max_{z \in \gamma} \frac{1}{|w - z|} \to 0
+\]
+as $w \to \infty$. So it does vanish outside the curve. Of course, inside the other path components, we can still have some interesting values of the winding number.
+
+\subsection{Homotopy of closed curves}
+The last ingredient we need before we can get to the residue theorem is the idea of homotopy. Recall we had this weird, ugly definition of elementary deformation of curves --- given $\phi, \psi: [a, b] \to U$, which are closed, we say $\psi$ is an \emph{elementary deformation} or \emph{convex deformation} of $\phi$ if there exists a decomposition $a = x_0 < x_1 < \cdots < x_n = b$ and convex open sets $C_1, \cdots, C_n \subseteq U$ such that for $x_{i - 1} \leq t \leq x_i$, we have $\phi(t)$ and $\psi(t)$ in $C_i$.
+
+It was a rather unnatural definition, since we have to make reference to this arbitrarily constructed dissection of $[a, b]$ and convex sets $C_i$. Moreover, this definition fails to be transitive (e.g.\ on $\C \setminus \{0\}$, rotating a circle about the centre by, say, $\frac{\pi}{10}$ is elementary, but rotating by $\pi$ is not). Yet, this definition was cooked up just so that it immediately follows that elementary deformations preserve integrals of holomorphic functions around the loop.
+
+The idea now is to define a more general and natural notion of deforming a curve, known as ``homotopy''. We will then show that each homotopy can be given by a sequence of elementary deformations. So homotopies also preserve integrals of holomorphic functions.
+
+\begin{defi}[Homotopy of closed curves]
+ Let $U \subseteq \C$ be a domain, and let $\phi: [a, b] \to U$ and $\psi: [a, b] \to U$ be piecewise $C^1$-smooth closed paths. A \emph{homotopy} from $\phi$ to $\psi$ is a continuous map $F: [0, 1] \times [a, b] \to U$ such that
+ \[
+ F(0, t) = \phi(t),\quad F(1, t) = \psi(t),
+ \]
+ and moreover, for all $s \in [0, 1]$, the map $t \mapsto F(s, t)$ viewed as a map $[a, b] \to U$ is closed and piecewise $C^1$-smooth.
+\end{defi}
+We can imagine this as a process of ``continuously deforming'' the path $\phi$ to $\psi$, with a path $F(s, \ph)$ at each point in time $s \in [0, 1]$.
+
+\begin{prop}
+ Let $\phi, \psi: [a, b] \to U$ be homotopic (piecewise $C^1$) closed paths in a domain $U$. Then there exists some $\phi = \phi_0, \phi_1, \cdots, \phi_N = \psi$ such that each $\phi_j$ is piecewise $C^1$ closed and $\phi_{j + 1}$ is obtained from $\phi_j$ by elementary deformation.
+\end{prop}
+
+\begin{proof}
+ This is an exercise in uniform continuity. We let $F: [0, 1] \times [a, b] \to U$ be a homotopy from $\phi$ to $\psi$. Since $\image (F)$ is compact and $U$ is open, there is some $\varepsilon > 0$ such that $B(F(s, t), \varepsilon) \subseteq U$ for all $(s, t) \in [0, 1] \times [a, b]$ (for each $s, t$, pick the maximum $\varepsilon_{s, t} > 0$ such that $B(F(s, t), \varepsilon_{s, t}) \subseteq U$. Then $\varepsilon_{s, t}$ varies continuously with $s, t$, hence attains its minimum on the compact set $[0, 1] \times [a, b]$. Then picking $\varepsilon$ to be the minimum works).
+
+ Since $F$ is uniformly continuous, there is some $\delta$ such that $\|(s, t) - (s', t')\| < \delta$ implies $|F(s, t) - F(s', t')| < \varepsilon$.
+
+ Now we pick $n \in \N$ such that $\frac{1 + (b - a)}{n} < \delta$, and let
+ \begin{align*}
+ x_j &= a + (b - a) \frac{j}{n}\\
+ \phi_i(t) &= F\left(\tfrac{i}{n}, t\right)\\
+ C_{ij} &= B\left(F\left(\tfrac{i}{n}, x_j\right), \varepsilon\right)
+ \end{align*}
+ Then $C_{ij}$ is clearly convex. These definitions are cooked up precisely so that if $s \in \left(\frac{i - 1}{n}, \frac{i}{n}\right)$ and $t \in [x_{j - 1}, x_j]$, then $F(s, t) \in C_{ij}$. So the result follows.
+\end{proof}
+
+\begin{cor}
+ Let $U$ be a domain, $f: U \to \C$ be holomorphic, and $\gamma_1, \gamma_2$ be homotopic piecewise $C^1$-smooth closed curves in $U$. Then
+ \[
+ \int_{\gamma_1}f(z)\;\d z = \int_{\gamma_2}f(z)\;\d z.
+ \]
+\end{cor}
+This means the integral around any path depends only on the homotopy class of the path, and not the actual path itself.
+
+We can now use this to ``upgrade'' our Cauchy's theorem to allow arbitrary simply connected domains. The theorem will become immediate if we adopt the following alternative definition of a simply connected domain:
+
+\begin{defi}[Simply connected domain]
+ A domain $U$ is \emph{simply connected} if every $C^1$ smooth closed path is homotopic to a constant path.
+\end{defi}
+This is in fact equivalent to our earlier definition that every continuous map $S^1 \to U$ can be extended to a continuous map $D^2 \to U$. This is almost immediately obvious, except that our old definition only required the map to be continuous, while the new definition only works with piecewise $C^1$ paths. We will need something that allows us to approximate any continuous curve with a piecewise $C^1$-smooth one, but we shall not do that here. Instead, we will just forget about the old definition and stick to the new one.
+
+Rewriting history, we get the following corollary:
+\begin{cor}[Cauchy's theorem for simply connected domains]
+ Let $U$ be a simply connected domain, and let $f: U \to \C$ be holomorphic. If $\gamma$ is any piecewise $C^1$-smooth closed curve in $U$, then
+ \[
+ \int_\gamma f(z)\;\d z = 0.
+ \]
+\end{cor}
+We will sometimes refer to this theorem as ``simply-connected Cauchy'', but we are not in any way suggesting that Cauchy himself is simply connected.
+
+\begin{proof}
+ By definition of simply-connected, $\gamma$ is homotopic to the constant path, and it is easy to see the integral along a constant path is zero.
+\end{proof}
+
+\subsection{Cauchy's residue theorem}
+We finally get to Cauchy's residue theorem. This is in some sense a mix of all the results we've previously had. Simply-connected Cauchy tells us the integral of a holomorphic $f$ around a closed curve depends only on its homotopy class, i.e.\ we can deform curves by homotopy and this preserves the integral. This means the value of the integral really only depends on the ``holes'' enclosed by the curve.
+
+We also had the Cauchy integral formula. This says if $f: B(a, r) \to \C$ is holomorphic, $w \in B(a, \rho)$ and $\rho < r$, then
+\[
+ f(w) = \frac{1}{2\pi i} \int_{\partial \overline{B(a, \rho)}} \frac{f(z)}{z - w}\;\d z.
+\]
+Note that $f(w)$ also happens to be the residue of the function $\frac{f(z)}{z - w}$ at $w$. So this really says that if $g$ has a simple pole at $a$ inside the region bounded by a simple closed curve $\gamma$, then
+\[
+ \frac{1}{2\pi i} \int_\gamma g(z)\;\d z = \Res (g, a).
+\]
+Cauchy's residue theorem says the result holds for \emph{any} type of singularities, and \emph{any} number of singularities.
+
+\begin{thm}[Cauchy's residue theorem]
+ Let $U$ be a simply connected domain, and $\{z_1, \cdots, z_k\} \subseteq U$. Let $f: U \setminus \{z_1, \cdots, z_k\} \to \C$ be holomorphic. Let $\gamma: [a, b] \to U$ be a piecewise $C^1$-smooth closed curve such that $z_i \not\in \image(\gamma)$ for all $i$. Then
+ \[
+ \frac{1}{2\pi i} \int_{\gamma}f(z)\;\d z = \sum_{j = 1}^k I(\gamma, z_j) \Res(f; z_j).
+ \]
+\end{thm}
+The Cauchy integral formula and simply-connected Cauchy are special cases of this.
+
+\begin{proof}
+ At each $z_i$, $f$ has a Laurent expansion
+ \[
+ f(z) = \sum_{n \in \Z} c_n^{(i)} (z - z_i)^n,
+ \]
+ valid in some neighbourhood of $z_i$. Let $g_i(z)$ be the principal part, namely
+ \[
+ g_i(z) = \sum_{n = -\infty}^{-1} c_n^{(i)}(z - z_i)^n.
+ \]
+ From the proof of the Laurent series, we know $g_i(z)$ gives a holomorphic function on $\C \setminus \{z_i\}$.
+
+ We now consider $f - g_1 - g_2 - \cdots - g_k$, which is holomorphic on $U \setminus \{z_1, \cdots, z_k\}$, and has a \emph{removable} singularity at each $z_i$. So
+ \[
+ \int_\gamma (f - g_1 - \cdots - g_k)(z)\;\d z = 0,
+ \]
+ by simply-connected Cauchy. Hence we know
+ \[
+ \int_\gamma f(z)\;\d z = \sum_{j = 1}^k \int_\gamma g_j(z)\;\d z.
+ \]
+ For each $j$, we use uniform convergence of the series $\sum_{n \leq -1} c_n^{(j)} (z - z_j)^n$ on compact subsets of $U \setminus \{z_j\}$, and hence on $\gamma$, to write
+ \[
+ \int_\gamma g_j(z)\;\d z = \sum_{n \leq -1} c_n^{(j)} \int_\gamma (z - z_j)^n\;\d z.
+ \]
+ However, for $n \not= -1$, the function $(z - z_j)^n$ has an antiderivative, and hence the integral around $\gamma$ vanishes. So this is equal to
+ \[
+ c_{-1}^{(j)} \int_\gamma \frac{1}{z - z_j}\;\d z.
+ \]
+ But $c_{-1}^{(j)}$ is by definition the residue of $f$ at $z_j$, and the integral is just the integral definition of the winding number (up to a factor of $2\pi i$). So we get
+ \[
+ \int_\gamma f(z)\;\d z = 2\pi i \sum_{j = 1}^k \Res(f; z_j) I(\gamma, z_j).
+ \]
+ So done.
+\end{proof}
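To see the theorem in action numerically (an aside; the function and contour below are arbitrary illustrative choices), take $f(z) = \frac{1}{z(z - 1)(z - 3)}$ and integrate around $|z| = 2$, which winds once around the simple poles at $0$ and $1$ but not around $3$. The residues there are $\frac{1}{3}$ and $-\frac{1}{2}$:

```python
import cmath

def circle_integral(f, center, radius, samples=4000):
    # Integral of f(z) dz over |z - center| = radius by the trapezoidal
    # rule; dz = i(z - center) d theta on the parametrised circle.
    total = 0
    for k in range(samples):
        z = center + radius * cmath.exp(2j * cmath.pi * k / samples)
        total += f(z) * (z - center)
    return total * 2j * cmath.pi / samples

f = lambda z: 1 / (z * (z - 1) * (z - 3))
lhs = circle_integral(f, 0, 2) / (2j * cmath.pi)

res_at_0, res_at_1 = 1 / 3, -1 / 2   # the pole at 3 lies outside the contour
assert abs(lhs - (res_at_0 + res_at_1)) < 1e-8
```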
+
+\subsection{Overview}
+We've done most of the theory we need. In the remaining time, we are going to use these tools to do something useful. In particular, we will use the residue theorem heavily to compute integrals.
+
+But before that, we shall stop and look at what we have done so far.
+
+Our first real interesting result was Cauchy's theorem for a triangle, which had a rather weird hypothesis --- if $f: U \to \C$ is holomorphic and $\Delta \subseteq U$ is a triangle, then
+\[
+ \int_{\partial \Delta} f(z)\;\d z = 0.
+\]
+To prove this, we dissected our triangle into smaller and smaller triangles, and then the result followed from how the numbers and bounds magically fit together.
+
+To accompany this, we had another theorem that used triangles. Suppose $U$ is a star domain and $f: U \to \C$ is continuous. Then if
+\[
+ \int_{\partial \Delta} f(z)\;\d z = 0
+\]
+for all triangles, then there is a holomorphic $F$ with $F'(z) = f(z)$. Here we defined $F$ by
+\[
+ F(z) = \int_{z_0}^z f(w)\;\d w,
+\]
+where $z_0$ is the ``center'' of the star, and we integrate along straight lines. The triangle condition ensures this is well-defined.
+
+These are the parts where we used some geometric insight --- in the first case we thought of subdividing, and in the second we decided to integrate along paths.
+
+These two awkward theorems about triangles fit in perfectly into the convex Cauchy theorem, via the fundamental theorem of calculus. This tells us that if $f: U \to \C$ is holomorphic and $U$ is convex, then
+\[
+ \int_\gamma f(z)\;\d z = 0
+\]
+for all closed $\gamma \subseteq U$.
+
+We then noticed this allows us to deform paths nicely and still preserve the integral. We called these nice deformations \emph{elementary deformations}, and then used it to obtain the Cauchy integral formula, namely
+\[
+ f(w) = \frac{1}{2\pi i} \int_{\partial B(a, \rho)} \frac{f(z)}{z - w}\;\d z
+\]
+for $f: B(a, r) \to \C$, $\rho < r$ and $w \in B(a, \rho)$.
+
+This formula led us to some classical theorems like the Liouville theorem and the maximum principle. We also used the power series trick to prove Taylor's theorem, saying any holomorphic function is locally equal to some power series, which we call the \emph{Taylor series}. In particular, this shows that holomorphic functions are infinitely differentiable, since all power series are.
+
+We then notice that for $U$ a convex domain, if $f: U \to \C$ is continuous and
+\[
+ \int_\gamma f(z)\;\d z = 0
+\]
+for all curves $\gamma$, then $f$ has an antiderivative. Since $f$ is the derivative of its antiderivative (by definition), it is then (infinitely) differentiable. So a function is holomorphic on a simply connected domain if and only if the integral along any closed curve vanishes. Since the latter property is easily shown to be conserved by uniform limits, we know the uniform limit of holomorphic functions is holomorphic.
+
+Then we figured out that we can use the same power series expansion trick to deal with functions with singularities. It's just that we had to include negative powers of $z$. Adding in the ideas of winding numbers and homotopies, we got the residue theorem. We showed that if $U$ is simply connected and $f: U \setminus \{z_1, \cdots, z_k\} \to \C$ is holomorphic, then
+\[
+ \frac{1}{2\pi i} \int_{\gamma} f(z)\;\d z = \sum \Res(f, z_i) I(\gamma, z_i).
+\]
+This will further lead us to Rouch\'e's theorem and the argument principle, to be done later.
+
+Throughout the course, there weren't too many ideas used. Everything was built upon the two ``geometric'' theorems of Cauchy's theorem for triangles and the antiderivative theorem. Afterwards, we repeatedly used the idea of deforming and cutting paths, as well as the power series expansion of $\frac{1}{z - w}$, and that's it.
+
+\subsection{Applications of the residue theorem}
+This section is more accurately described as ``Integrals, integrals, integrals''. Our main objective is to evaluate \emph{real} integrals, but to do so, we will pretend they are complex integrals, and apply the residue theorem.
+
+Before that, we first come up with some tools to compute residues, since we will have to do that quite a lot.
+
+\begin{lemma}
+ Let $f: U \setminus \{a\} \to \C$ be holomorphic with a pole at $a$, i.e.\ $f$ is meromorphic on $U$.
+ \begin{enumerate}
+ \item If the pole is simple, then
+ \[
+ \Res(f, a) = \lim_{z \to a} (z - a) f(z).
+ \]
+ \item If near $a$, we can write
+ \[
+ f(z) = \frac{g(z)}{h(z)},
+ \]
 where $g(a) \not= 0$ and $h$ has a simple zero at $a$, and $g, h$ are holomorphic on $B(a, \varepsilon)$ for some $\varepsilon > 0$, then
+ \[
+ \Res(f, a) = \frac{g(a)}{h'(a)}.
+ \]
+ \item If
+ \[
+ f(z) = \frac{g(z)}{(z - a)^k}
+ \]
+ near $a$, with $g(a) \not= 0$ and $g$ is holomorphic, then
+ \[
+ \Res(f, a) = \frac{g^{(k - 1)}(a)}{(k - 1)!}.
+ \]
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item By definition, if $f$ has a simple pole at $a$, then
+ \[
+ f(z) = \frac{c_{-1}}{(z - a)} + c_0 + c_1(z - a) + \cdots,
+ \]
+ and by definition $c_{-1} = \Res(f, a)$. Then the result is obvious.
+ \item This is basically L'H\^opital's rule. By the previous part, we have
+ \[
+ \Res(f; a) = \lim_{z \to a} (z - a)\frac{g(z)}{h(z)} = g(a) \lim_{z \to a} \frac{z - a}{h(z) - h(a)} = \frac{g(a)}{h'(a)}.
+ \]
+ \item We know the residue $\Res(f; a)$ is the coefficient of $(z - a)^{k - 1}$ in the Taylor series of $g$ at $a$, which is exactly $\frac{1}{(k - 1)!} g^{(k - 1)}(a)$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ We want to compute the integral
+ \[
+ \int_0^{\infty} \frac{1}{1 + x^4}\;\d x.
+ \]
+ We consider the following contour:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -2) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) arc(0:180:2);
+
+ \node [below] at (-2, 0) {$-R$};
+ \node [below] at (2, 0) {$R$};
+
+ \node at (1, 1) {$\times$};
+ \node [below] at (1, 1) {$e^{i \pi /4}$};
+ \node at (-1, 1) {$\times$};
+ \node [below] at (-1, 1) {$e^{3 i \pi /4}$};
+
+ \node at (1, -1) {$\times$};
+ \node at (-1, -1) {$\times$};
+ \end{tikzpicture}
+ \end{center}
 We notice $\frac{1}{1 + z^4}$ has simple poles where $z^4 = -1$, as indicated in the diagram. Note that two of the poles lie in the lower half-plane, outside the contour, so the winding number of $\gamma_R$ about them is $0$, and they do not contribute.
+
+ We can write the integral as
+ \[
+ \int_{\gamma_R} \frac{1}{1 + z^4} \;\d z = \int_{-R}^R \frac{1}{1 + x^4} \;\d x + \int_0^\pi \frac{i Re^{i\theta}}{1 + R^4 e^{4 i \theta}}\;\d \theta.
+ \]
 The first term is something we care about, while the second is something we despise. So we might want to get rid of it. We notice the integrand of the second integral is $O(R^{-3})$. Since we are integrating over an interval of length $\pi$, the whole thing tends to $0$ as $R \to \infty$.
+
+ We also know the left hand side is just
+ \[
+ \int_{\gamma_R} \frac{1}{1 + z^4} \;\d z = 2\pi i(\Res(f, e^{i \pi /4}) + \Res(f, e^{3i \pi /4})).
+ \]
+ So we just have to compute the residues. But our function is of the form given by part (ii) of the lemma above. So we know
+ \[
+ \Res(f, e^{i \pi /4}) = \left.\frac{1}{4z^3}\right|_{z = e^{i \pi /4}} = \frac{1}{4} e^{-3\pi i/4},
+ \]
 and similarly at $e^{3 i\pi /4}$. On the other hand, as $R \to \infty$, the first integral on the right tends to $\int_{-\infty}^\infty \frac{1}{1 + x^4}\;\d x$, which is, by evenness, twice what we want. So
+ \[
+ 2\int_0^\infty \frac{1}{1 + x^4} \;\d x = \int_{-\infty}^\infty \frac{1}{1 + x^4}\;\d x = -\frac{2\pi i}{4} (e^{i \pi/4} + e^{3 \pi i/4}) = \frac{\pi}{\sqrt{2}}.
+ \]
+ Hence our integral is
+ \[
+ \int_0^\infty \frac{1}{1 + x^4}\;\d x = \frac{\pi}{2\sqrt{2}}.
+ \]
+\end{eg}
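To spell out the arithmetic in the final step, the two residues sum to
\begin{align*}
 \Res(f, e^{i \pi/4}) + \Res(f, e^{3i\pi/4}) &= \frac{1}{4}\left(e^{-3\pi i/4} + e^{-9\pi i /4}\right)\\
 &= -\frac{1}{4}\left(e^{i \pi/4} + e^{3 \pi i/4}\right)\\
 &= -\frac{i\sqrt{2}}{4},
\end{align*}
and multiplying by $2\pi i$ gives $\frac{2\pi \sqrt{2}}{4} = \frac{\pi}{\sqrt{2}}$, as above.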
+When computing contour integrals, there are two things we have to decide. First, we need to pick a nice contour to integrate along. Secondly, as we will see in the next example, we have to decide what function to integrate.
+
+\begin{eg}
+ Suppose we want to integrate
+ \[
+ \int_\R \frac{\cos (x)}{1 + x + x^2}\;\d x.
+ \]
 We know $\cos$, as a complex function, is everywhere holomorphic, and $1 + x + x^2$ has two simple zeroes, namely at the primitive cube roots of unity. We pick the same contour, and write $\omega = e^{2\pi i/3}$. Then we have
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) arc(0:180:2);
+
+ \node [below] at (-2, 0) {$-R$};
+ \node [below] at (2, 0) {$R$};
+
+ \node at (-1, 1.414) {$\times$};
+ \node [right] at (-1, 1.414) {$\omega$};
+ \end{tikzpicture}
+ \end{center}
+ Life would be good if $\cos$ were bounded, for the integrand would then be $O(R^{-2})$, and the circular integral vanishes. Unfortunately, at, say, $iR$, $\cos(z)$ is large. So instead, we consider
+ \[
+ f(z) = \frac{e^{iz}}{1 + z + z^2}.
+ \]
+ Now, again by the previous lemma, we get
+ \[
+ \Res(f; \omega) = \frac{e^{i\omega}}{2\omega + 1}.
+ \]
+ On the semicircle, we have
+ \[
 \left|\int_0^\pi f(Re^{i\theta}) Re^{i\theta}\;\d \theta\right| \leq \int_0^\pi \frac{Re^{-R \sin \theta}}{|R^2 e^{2i \theta} + Re^{i\theta} + 1|} \;\d \theta,
+ \]
+ which is $O(R^{-1})$. So this vanishes as $R \to \infty$.
+
+ The remaining is not quite the integral we want, but we can just take the real part. We have
+ \begin{align*}
+ \int_{\R} \frac{\cos x}{1 + x + x^2}\;\d x &= \Re \int_\R f(z)\;\d z\\
+ &= \Re \lim_{R \to \infty} \int_{\gamma_R} f(z)\;\d z\\
+ &= \Re (2\pi i \Res(f, \omega))\\
+ &= \frac{2\pi}{\sqrt{3}} e^{-\sqrt{3}/2}\cos \frac{1}{2} .
+ \end{align*}
+\end{eg}
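To see the last step, write $\omega = -\frac{1}{2} + \frac{\sqrt{3}}{2} i$, so that $2\omega + 1 = \sqrt{3} i$ and $e^{i\omega} = e^{-\sqrt{3}/2} e^{-i/2}$. Then
\[
 2\pi i \Res(f, \omega) = \frac{2\pi i\, e^{-\sqrt{3}/2} e^{-i/2}}{\sqrt{3} i} = \frac{2\pi}{\sqrt{3}} e^{-\sqrt{3}/2} \left(\cos \frac{1}{2} - i \sin \frac{1}{2}\right),
\]
whose real part is the stated answer.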
+
+Another class of integrals that often come up are integrals of trigonometric functions, where we are integrating along the unit circle.
+\begin{eg}
+ Consider the integral
+ \[
+ \int_0^{\pi/2} \frac{1}{1 + \sin^2(t)}\;\d t.
+ \]
+ We use the expression of $\sin$ in terms of the exponential function, namely
+ \[
+ \sin(t) = \frac{e^{it} - e^{-it}}{2i}.
+ \]
+ So if we are on the unit circle, and $z = e^{it}$, then
+ \[
 \sin (t) = \frac{z - z^{-1}}{2i}.
+ \]
+ Moreover, we can check
+ \[
+ \frac{\d z}{\d t} = ie^{it}.
+ \]
+ So
+ \[
+ \d t = \frac{\d z}{iz}.
+ \]
+ Hence we get
+ \begin{align*}
+ \int_0^{\pi/2} \frac{1}{1 + \sin^2(t)} \;\d t &= \frac{1}{4} \int_0^{2\pi} \frac{1}{1 + \sin^2 (t)}\;\d t\\
+ &= \frac{1}{4} \int_{|z| = 1} \frac{1}{1 + \frac{(z - z^{-1})^2}{-4}} \frac{\d z}{iz}\\
+ &= \int_{|z| = 1}\frac{i z}{z^4 - 6z^2 + 1}\;\d z.
+ \end{align*}
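 The last step is just clearing denominators: multiplying the numerator and denominator by $4z^2$,
 \[
  \frac{1}{1 - \frac{(z - z^{-1})^2}{4}} \cdot \frac{1}{iz} = \frac{4z^2}{4z^2 - (z^2 - 1)^2} \cdot \frac{1}{iz} = \frac{4z}{i(-z^4 + 6z^2 - 1)} = \frac{4iz}{z^4 - 6z^2 + 1},
 \]
 and the prefactor $\frac{1}{4}$ then gives the integrand above.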
 The denominator is a quadratic in $z^2$, which we can solve. We find the roots to be $1 \pm \sqrt{2}$ and $-1 \pm \sqrt{2}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [mred, thick, ->-=0.2] circle [radius=1];
+
+ \node at (-0.414, 0) {$\times$};
+ \node at (0.414, 0) {$\times$};
+ \node at (2.414, 0) {$\times$};
+ \node at (-2.414, 0) {$\times$};
+ \end{tikzpicture}
+ \end{center}
 The residues at the two poles inside the contour, $\sqrt{2} - 1$ and $-(\sqrt{2} - 1)$, are each $-\frac{i\sqrt{2}}{16}$. So the integral we want is
 \[
  \int_0^{\pi/2} \frac{1}{1 + \sin^2(t)}\;\d t = 2\pi i\left(\frac{-\sqrt{2}i}{16} + \frac{-\sqrt{2} i}{16}\right) = \frac{\pi}{2\sqrt{2}}.
 \]
+\end{eg}
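The residues quoted can be obtained from part (ii) of the lemma at the start of this section. For instance, at the simple pole $z_0 = \sqrt{2} - 1$, we have
\[
 \Res\left(\frac{iz}{z^4 - 6z^2 + 1}, z_0\right) = \frac{iz_0}{4z_0^3 - 12 z_0} = \frac{i}{4(z_0^2 - 3)} = \frac{i}{-8\sqrt{2}} = -\frac{i\sqrt{2}}{16},
\]
using $z_0^2 = 3 - 2\sqrt{2}$, and the computation at $-z_0$ is identical.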
+Most rational functions of trigonometric functions can be integrated around $|z| = 1$ in this way, using the fact that
+\[
 \sin (kt) = \frac{e^{ikt} - e^{-ikt}}{2i} = \frac{z^k - z^{-k}}{2i},\quad \cos(kt) = \frac{e^{ikt} + e^{-ikt}}{2} = \frac{z^k + z^{-k}}{2}.
+\]
+We now develop a few lemmas that help us evaluate the contributions of certain parts of contours, in order to simplify our work.
+
+\begin{lemma}
+ Let $f: B(a, r) \setminus \{a\} \to \C$ be holomorphic, and suppose $f$ has a simple pole at $a$. We let $\gamma_\varepsilon: [\alpha, \beta] \to \C$ be given by
+ \[
+ t\mapsto a + \varepsilon e^{it}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [dashed] circle [radius=2];
+ \draw [mred, thick, ->-=0.5] (1.414, 1.414) arc(45:120:2) node [pos=0.5, above ]{$\gamma_\varepsilon$};
+ \node [circ] at (0, 0) {};
+ \node [anchor = north west] {$a$};
+
+ \draw (0, 0) -- (1.414, 1.414);
+ \draw (0, 0) -- (-1, 1.732);
+ \draw (0.3, 0) arc(0:120:0.3) node [above] {$\beta$};
+ \draw (0.6, 0) arc(0:45:0.6) node [pos=0.8, right] {$\alpha$};
+
+ \node [anchor = north west] at (2, 0) {$\varepsilon$};
+ \end{tikzpicture}
+ \end{center}
+ Then
+ \[
+ \lim_{\varepsilon \to 0} \int_{\gamma_\varepsilon} f(z)\;\d z = (\beta - \alpha) \cdot i \cdot \Res(f, a).
+ \]
+\end{lemma}
+
+\begin{proof}
+ We can write
+ \[
+ f(z) = \frac{c}{z - a} + g(z)
+ \]
+ near $a$, where $c = \Res(f; a)$, and $g: B(a, \delta) \to \C$ is holomorphic near $a$. We take $\varepsilon < \delta$. Then
+ \[
+ \left|\int_{\gamma_\varepsilon} g(z)\;\d z\right| \leq (\beta - \alpha) \cdot \varepsilon \sup_{z \in \gamma_\varepsilon} |g(z)|.
+ \]
 But $g$ is continuous at $a$, hence bounded near $a$. So this vanishes as $\varepsilon \to 0$. So the remaining integral is
+ \begin{align*}
+ \lim_{\varepsilon \to 0} \int_{\gamma_\varepsilon} \frac{c}{z - a}\;\d z &= c \lim_{\varepsilon \to 0} \int_{\gamma_\varepsilon} \frac{1}{z - a} \;\d z\\
+ &= c \lim_{\varepsilon \to 0} \int_\alpha^\beta \frac{1}{\varepsilon e^{it}} \cdot i\varepsilon e^{it}\;\d t\\
+ &= i(\beta - \alpha) c,
+ \end{align*}
+ as required.
+\end{proof}
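The case we will use most often is a semicircular indentation around a simple pole, i.e.\ $\beta - \alpha = \pi$, for which the lemma gives
\[
 \lim_{\varepsilon \to 0} \int_{\gamma_\varepsilon} f(z)\;\d z = \pm i \pi \Res(f, a),
\]
with the sign determined by the direction of travel along the semicircle.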
+
+A lemma of a similar flavor allows us to consider integrals on expanding semicircles.
+\begin{lemma}[Jordan's lemma]
+ Let $f$ be holomorphic on a neighbourhood of infinity in $\C$, i.e.\ on $\{|z| > r\}$ for some $r > 0$. Assume that $zf(z)$ is bounded in this region. Then for $\alpha > 0$, we have
+ \[
+ \int_{\gamma_R} f(z) e^{i\alpha z}\;\d z \to 0
+ \]
+ as $R \to \infty$, where $\gamma_R(t) = Re^{it}$ for $t \in [0, \pi]$ is the semicircle (which is \emph{not} closed).
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (2, 0) arc(0:180:2) node [pos=0.3, right] {$\gamma_R$};
+
+ \node [anchor = north east] at (-2, 0) {$-R$};
+ \node [anchor = north west] at (2, 0) {$R$};
+ \node [circ] at (-2, 0) {};
+ \node [circ] at (2, 0) {};
+ \end{tikzpicture}
+ \end{center}
+\end{lemma}
In previous cases, we had $f(z) = O(R^{-2})$, and then we could bound the integral simply as $O(R^{-1}) \to 0$. Here we only require $f(z) = O(R^{-1})$. The drawback is that $\int_{\gamma_R} f(z)\;\d z$ alone need not vanish. However, with the extra help from the factor $e^{i\alpha z}$, the integral does vanish.
+
+\begin{proof}
+ By assumption, we have
+ \[
+ |f(z)| \leq \frac{M}{|z|}
+ \]
+ for large $|z|$ and some constant $M > 0$. We also have
+ \[
+ |e^{i \alpha z}| = e^{-R\alpha \sin t}
+ \]
+on $\gamma_R$. To avoid messing with $\sin t$, we note that on $(0, \frac{\pi}{2}]$, the function $\frac{\sin \theta}{\theta}$ is decreasing, since
+\[
+ \frac{\d}{\d \theta} \left(\frac{\sin \theta}{\theta}\right) = \frac{\theta \cos \theta - \sin \theta}{\theta^2} \leq 0.
+\]
Then by comparing the values at the end points, we find
+\[
+ \sin(t) \geq \frac{2t}{\pi}
+\]
+for $t \in [0, \frac{\pi}{2}]$. This gives us the bound
+\[
+ |e^{i\alpha z}| = e^{-R\alpha \sin t} \leq
+ \begin{cases}
  e^{-2R\alpha t/\pi} & 0 \leq t \leq \frac{\pi}{2}\\
  e^{-2R\alpha t'/\pi} & 0 \leq t' = \pi - t \leq \frac{\pi}{2}
+ \end{cases}
+\]
+So we get
+\begin{align*}
 \left|\int_0^{\pi/2} e^{iR\alpha e^{it}} f(Re^{it}) iRe^{it}\;\d t\right| &\leq \int_0^{\pi/2} e^{-2\alpha Rt/\pi} \cdot M \;\d t\\
 &= \frac{M\pi}{2\alpha R} (1 - e^{-\alpha R})\\
+ &\to 0
+\end{align*}
+as $R \to \infty$.
+
+The estimate for
+\[
+ \int_{\pi/2}^\pi f(z) e^{i\alpha z}\;\d z
+\]
+is analogous.
+\end{proof}
+
+\begin{eg}
+ We want to show
+ \[
+ \int_0^\infty \frac{\sin x}{x}\;\d x = \frac{\pi}{2}.
+ \]
+ Note that $\frac{\sin x}{x}$ has a removable singularity at $x = 0$. So everything is fine.
+
+ Our first thought might be to use our usual semi-circular contour that looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) arc(0:180:2) node [pos=0.6, anchor = south west] {$\gamma_R$};
+
+ \node [below] at (-2, 0) {$-R$};
+ \node [below] at (2, 0) {$R$};
+ \end{tikzpicture}
+ \end{center}
+ If we look at this and take the function $\frac{\sin z}{z}$, then we get no control at $iR \in \gamma_R$. So what we would like to do is to replace the sine with an exponential. If we let
+ \[
+ f(z) = \frac{e^{iz}}{z},
+ \]
+ then we now have the problem that $f$ has a simple pole at $0$. So we consider a modified contour
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 3);
+ \draw [mred, thick, ->-=0.09, ->-=0.37, ->-=0.8] (-2, 0) node [below] {$-R$} -- (-0.5, 0) node [below] {$-\varepsilon$} arc(180:0:0.5) node [below] {$\varepsilon$} -- (2, 0) node [below] {$R$} arc(0:180:2) node [pos=0.3, right] {$\gamma_{R, \varepsilon}$};
+
+ \node {$\times$};
+ \end{tikzpicture}
+ \end{center}
+ Now if $\gamma_{R, \varepsilon}$ denotes the modified contour, then the singularity of $\frac{e^{iz}}{z}$ lies outside the contour, and Cauchy's theorem says
+ \[
+ \int_{\gamma_{R, \varepsilon}} f(z)\;\d z = 0.
+ \]
+ Considering the $R$-semicircle $\gamma_R$, and using Jordan's lemma with $\alpha = 1$ and $\frac{1}{z}$ as the function, we know
+ \[
+ \int_{\gamma_R} f(z) \;\d z \to 0
+ \]
+ as $R \to \infty$.
+
 Considering the $\varepsilon$-semicircle $\gamma_\varepsilon$, and using the first lemma with $\Res\left(\frac{e^{iz}}{z}, 0\right) = 1$, we get a contribution of $-i\pi$, where the sign comes from the clockwise orientation. Rearranging, taking imaginary parts, and using the fact that $\frac{\sin x}{x}$ is even, we get the desired result.
+\end{eg}
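In more detail, combining the three limits with $\int_{\gamma_{R, \varepsilon}} f(z)\;\d z = 0$ gives
\[
 \lim_{\substack{\varepsilon \to 0\\ R \to \infty}} \left(\int_{-R}^{-\varepsilon} + \int_\varepsilon^R\right) \frac{e^{ix}}{x}\;\d x = i\pi.
\]
Taking imaginary parts and using that $\frac{\sin x}{x}$ is even, we get $2\int_0^\infty \frac{\sin x}{x}\;\d x = \pi$, as claimed.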
+
+\begin{eg}
+ Suppose we want to evaluate
+ \[
+ \int_{-\infty}^\infty \frac{e^{ax}}{\cosh x}\;\d x,
+ \]
+ where $a \in (-1, 1)$ is a real constant.
+
+ To do this, note that the function
+ \[
+ f(z) = \frac{e^{az}}{\cosh z}
+ \]
+ has simple poles where $z = \left(n + \frac{1}{2}\right) i \pi$ for $n \in \Z$. So if we did as we have done above, then we would run into infinitely many singularities, which is not fun.
+
+ Instead, we note that
+ \[
 \cosh (x + i\pi) = -\cosh x.
+ \]
+ Consider a rectangular contour
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.78] (-2, 0) node [below] {$-R$} -- (2, 0) node [pos=0.7, below] {$\gamma_0$} node [below] {$R$} -- (2, 1.5) node [pos=0.5, right] {$\gamma_{\mathrm{vert}}^+$} -- (-2, 1.5) node [pos=0.3, above] {$\gamma_1$} -- (-2, 0) node [pos=0.5, left] {$\gamma_{\mathrm{vert}}^-$};
+
+ \node at (0, 0.75) {$\times$};
+ \node [left] at (0, 0.75) {$\frac{\pi i}{2}$};
+
+ \node [circ] at (0, 1.5) {};
+ \node [anchor = south east] at (0, 1.5) {$\pi i$};
+ \end{tikzpicture}
+ \end{center}
+ We now enclose only one singularity, namely $\rho = \frac{i \pi }{2}$, where
+ \[
 \Res(f, \rho) = \frac{e^{a \rho}}{\cosh' (\rho)} = \frac{e^{a \pi i/2}}{\sinh(i\pi/2)} = -ie^{a \pi i/2}.
+ \]
+ We first want to see what happens at the edges. We have
+ \[
+ \int_{\gamma_{\mathrm{vert}}^+} f(z)\;\d z = \int_0^\pi \frac{e^{a(R + iy)}}{\cosh (R + iy)} i \;\d y.
+ \]
 Hence we can bound this as
+ \[
+ \left|\int_{\gamma_{\mathrm{vert}}^+} f(z)\;\d z\right| \leq \int_0^\pi \left|\frac{2 e^{aR}}{e^R - e^{-R}}\right| \;\d y \to 0\text{ as }R \to \infty,
+ \]
 since $a < 1$. We can do a similar bound for $\gamma_{\mathrm{vert}}^-$, where we use the fact that $a > -1$.
+
+ Thus, letting $R \to \infty$, we get
+ \[
+ \int_{\R} \frac{e^{ax}}{\cosh x} \;\d x + \int_{+\infty}^{-\infty} \frac{e^{a \pi i} e^{ax}}{\cosh(x + i \pi)} \;\d x= 2\pi i (-i e^{a \pi i/2}).
+ \]
 Using the fact that $\cosh(x + i\pi) = -\cosh(x)$, we get
+ \[
+ \int_{\R} \frac{e^{ax}}{\cosh x} \;\d x = \frac{2 \pi e^{a i \pi/2}}{1 + e^{a \pi i}}= \pi \sec\left(\frac{\pi a}{2}\right).
+ \]
+\end{eg}
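To spell out the final algebra, write $I$ for the integral we want. After substituting $\cosh(x + i\pi) = -\cosh x$ and reversing the limits, the second integral above equals $e^{a\pi i} I$. So
\[
 (1 + e^{a\pi i}) I = 2\pi e^{a\pi i/2},\quad\text{hence}\quad I = \frac{2\pi}{e^{-a \pi i/2} + e^{a\pi i /2}} = \pi\sec\left(\frac{\pi a}{2}\right).
\]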
+
+\begin{eg}
+ We provide a(nother) proof that
+ \[
+ \sum_{n \geq 1} \frac{1}{n^2} = \frac{\pi^2}{6}.
+ \]
 Recall we just avoided having to encircle infinitely many poles by picking a rectangular contour. Here we do the opposite --- we encircle infinitely many poles, and then we can use this to evaluate the infinite sum of residues using contour integrals.
+
+ We consider the function $f(z) = \frac{\pi \cot(\pi z)}{z^2}$, which is holomorphic on $\C$ except for simple poles at $\Z \setminus \{0\}$, and a triple pole at $0$.
+
+ We can check that at $n \in \Z \setminus \{0\}$, we can write
+ \[
+ f(z) = \frac{\pi \cos (\pi z)}{z^2} \cdot \frac{1}{\sin (\pi z)},
+ \]
 where the second factor has a simple zero at $n$, and the first is holomorphic and non-vanishing at $n \not= 0$. Then we can compute
+ \[
+ \Res(f; n) = \frac{\pi \cos (\pi n)}{n^2} \cdot \frac{1}{\pi \cos (\pi n)} = \frac{1}{n^2}.
+ \]
+ Note that the reason why we have those funny $\pi$'s all around the place is so that we can get this nice expression for the residue.
+
+ At $z = 0$, we get
+ \[
 \cot(z) = \left(1 - \frac{z^2}{2} + O(z^4)\right) \left(z - \frac{z^3}{6} + O(z^5)\right)^{-1} = \frac{1}{z} - \frac{z}{3} + O(z^3).
+ \]
+ So we get
+ \[
+ \frac{\pi \cot (\pi z)}{z^2} = \frac{1}{z^3} - \frac{\pi^2}{3z} + \cdots
+ \]
+ So the residue is $-\frac{\pi^2}{3}$. Now we consider the following square contour:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \foreach \x in {-2.5,-2,...,2.5} {
+ \node at (\x, 0) {$\times$};
+ }
+
+ \draw [thick, mred, ->-=0.33] (1.75, 1.75) -- (-1.75, 1.75) -- (-1.75, -1.75) -- (1.75, -1.75) -- cycle node [pos=0.85, right] {$\gamma_N$};
+
+ \node [anchor = south west] at (0, 1.75) {$(N + \frac{1}{2})i$};
+ \node [anchor = north west] at (0, -1.75) {$-(N + \frac{1}{2})i$};
+ \node [anchor = south west] at (1.75, 0) {$N + \frac{1}{2}$};
+ \node [anchor = south east] at (-1.75, 0) {$-(N + \frac{1}{2})$};
+
+ \node [circ] at (0, 1.75) {};
+ \node [circ] at (0, -1.75) {};
+ \node [circ] at (1.75, 0) {};
+ \node [circ] at (-1.75, 0) {};
+ \end{tikzpicture}
+ \end{center}
 Since we don't want the contour itself to pass through singularities, we make the sides of the square pass through $\pm\left(N + \frac{1}{2}\right)$ and $\pm\left(N + \frac{1}{2}\right)i$. Then the residue theorem says
+ \[
+ \int_{\gamma_N}f(z) \;\d z = 2\pi i\left(2 \sum_{n = 1}^N \frac{1}{n^2} - \frac{\pi^2}{3}\right).
+ \]
+ We can thus get the desired series if we can show that
+ \[
 \int_{\gamma_N} f(z)\;\d z \to 0\text{ as }N\to \infty.
+ \]
+ We first note that
+ \begin{align*}
+ \left|\int_{\gamma_N} f(z)\;\d z \right| &\leq \sup_{\gamma_N} \left|\frac{\pi \cot \pi z}{z^2}\right| 4(2N + 1)\\
+ &\leq \sup_{\gamma_N} |\cot \pi z| \frac{4(2N + 1)\pi}{\left(N + \frac{1}{2}\right)^2}\\
+ &= \sup_{\gamma_N} |\cot \pi z| O(N^{-1}).
+ \end{align*}
+ So everything is good if we can show $\sup_{\gamma_N} |\cot \pi z|$ is bounded as $N \to \infty$.
+
+ On the vertical sides, we have
+ \[
 z = \pm\left(N + \frac{1}{2}\right) + iy,
+ \]
+ and thus
+ \[
+ |\cot (\pi z)| = |\tan(i \pi y)| = |\tanh (\pi y)| \leq 1,
+ \]
+ while on the horizontal sides, we have
+ \[
+ z = x \pm i\left(N + \frac{1}{2}\right),
+ \]
+ and
+ \[
 |\cot (\pi z)| \leq \frac{e^{\pi(N + 1/2)} + e^{-\pi(N + 1/2)}}{e^{\pi(N + 1/2)} - e^{-\pi(N + 1/2)}} = \coth\left(\left(N + \frac{1}{2}\right)\pi\right).
+ \]
 While it is not clear at first sight that this is bounded, we notice $x \mapsto \coth x$ is decreasing and positive for $x > 0$. So for all $N \geq 1$, the bound is at most $\coth \frac{3\pi}{2}$. So we win.
+\end{eg}
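Putting everything together, since the contour integral vanishes in the limit,
\[
 0 = \lim_{N \to \infty} \int_{\gamma_N} f(z)\;\d z = 2\pi i \left(2 \sum_{n = 1}^\infty \frac{1}{n^2} - \frac{\pi^2}{3}\right),
\]
and rearranging gives $\sum_{n \geq 1} \frac{1}{n^2} = \frac{\pi^2}{6}$.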
+
+\begin{eg}
+ Suppose we want to compute the integral
+ \[
+ \int_0^\infty \frac{\log x}{1 + x^2}\;\d x.
+ \]
 The point is that to define $\log z$, we have to cut the plane to avoid multi-valuedness. In this case, we might choose to cut it along the negative imaginary axis $i\R_{\leq 0}$, giving a branch of $\log$ for which $\arg(z) \in \left(-\frac{\pi}{2}, \frac{3\pi}{2}\right)$. We also need to avoid running through zero. So we might look at the following contour:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [thick] (0, 0) -- (0, -3);
+
+ \node [circ] at (0, 0) {};
+
 \draw [mred, thick, ->-=0.1, ->-=0.4, ->-=0.7] (-2, 0) node [below] {$-R$} -- (-0.4, 0) node [below] {$-\varepsilon$} arc(180:0:0.4) node [below] {$\varepsilon$} -- (2, 0) node [below] {$R$} arc(0:180:2);
+
+ \node at (0, 1) {$\times$};
+ \node [right] at (0, 1) {$i$};
+ \end{tikzpicture}
+ \end{center}
 On the large semicircular arc of radius $R$, the contribution to the integral is
+ \[
+ |f(z)||\d z| = O\left(R \cdot \frac{\log R}{R^2}\right) = O\left(\frac{\log R}{R}\right) \to 0\text{ as }R \to \infty.
+ \]
 On the small semicircular arc of radius $\varepsilon$, the contribution is
+ \[
+ |f(z)| |\d z| = O(\varepsilon \log \varepsilon) \to 0\text{ as } \varepsilon \to 0.
+ \]
+ Hence, as $\varepsilon \to 0$ and $R \to \infty$, we are left with the integral along the negative real axis. Along the negative real axis, we have
+ \[
+ \log z = \log|z| + i \pi.
+ \]
 So the residue theorem says
 \[
  \int_0^\infty \frac{\log x}{1 + x^2}\;\d x + \int_0^\infty \frac{\log x + i\pi}{1 + x^2}\;\d x = 2\pi i \Res(f; i),
 \]
 where the second integral comes from the negative real axis, substituting $z = -x$.
+ We can compute the residue as
+ \[
+ \Res(f, i) = \frac{\log i}{2i} = \frac{\frac{1}{2}i \pi}{2 i} = \frac{\pi}{4}.
+ \]
+ So we find
+ \[
+ 2 \int_0^\infty \frac{\log x}{1 + x^2}\;\d x + i \pi \int_0^\infty \frac{1}{1 + x^2} \;\d x = \frac{i \pi ^2}{2}.
+ \]
+ Taking the real part of this, we obtain
+ \[
+ \int_0^\infty \frac{\log x}{1 + x^2}\;\d x = 0.
+ \]
+\end{eg}
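Taking the imaginary part instead recovers the familiar fact
\[
 \int_0^\infty \frac{1}{1 + x^2}\;\d x = \frac{\pi}{2},
\]
which serves as a consistency check.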
+In this case, we had a branch cut, and we managed to avoid it by going around our magic contour. Sometimes, it is helpful to run our integral along the branch cut.
+
+\begin{eg}
+ We want to compute
+ \[
+ \int_0^\infty \frac{\sqrt{x}}{x^2 + ax + b} \;\d x,
+ \]
 where $a, b \in \R$. To define $\sqrt{z}$, we need to pick a branch cut. We pick it to lie along the positive real axis, and consider the \emph{keyhole contour}
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [thick] (0, 0) -- (3, 0);
+ \node [circ] at (0, 0) {};
+
+ \draw [mred, thick, ->-=0.1, ->-=0.45, ->-=0.75, ->-=0.84, ->-=0.95] (1.977, 0.3) arc(8.63:351.37:2) -- (0.4, -0.3) arc(323.13:36.87:0.5) -- cycle;
+
+ \end{tikzpicture}
+ \end{center}
+ As usual this has a small circle of radius $\varepsilon$ around the origin, and a large circle of radius $R$. Note that these both avoid the branch cut.
+
+ Again, on the $R$ circle, we have
+ \[
+ |f(z)||\d z| = O\left(\frac{1}{\sqrt{R}}\right) \to 0\text{ as }R \to \infty.
+ \]
+ On the $\varepsilon$-circle, we have
+ \[
+ |f(z)||\d z| = O(\varepsilon^{3/2}) \to 0\text{ as }\varepsilon \to 0.
+ \]
+ Viewing $\sqrt{z} = e^{\frac{1}{2}\log z}$, on the two pieces of the contour along $\R_{\geq 0}$, $\log z$ differs by $2 \pi i$. So $\sqrt{z}$ changes sign. This cancels with the sign change arising from going in the wrong direction. Therefore the residue theorem says
+ \[
+ 2\pi i \sum \text{residues inside contour} = 2 \int_0^\infty \frac{\sqrt{x}}{x^2 + ax + b}\;\d x.
+ \]
+ What the residues are depends on what the quadratic actually is, but we will not go into details.
+\end{eg}
+
\subsection{Rouch\'e's theorem}
We now want to move away from computing integrals, and look at a different application --- Rouch\'e's theorem. Recall one of the first applications of complex analysis is to use Liouville's theorem to prove the fundamental theorem of algebra, and show that every non-constant polynomial has a root. One might wonder --- if we know a bit more about the polynomial, can we say a bit more about how the roots behave?
+
+To do this, recall we said that if $f: B(a; r) \to \C$ is holomorphic, and $f(a) = 0$, then $f$ has a zero of \emph{order $k$} if, locally,
+\[
+ f(z) = (z - a)^k g(z),
+\]
+with $g$ holomorphic and $g(a) \not= 0$.
+
+Analogously, if $f: B(a, r) \setminus \{a\} \to \C$ is holomorphic, and $f$ has at worst a pole at $a$, we can again write
+\[
+ f(z) = (z - a)^k g(z),
+\]
+where now $k \in \Z$ may be negative. Since we like numbers to be positive, we say the order of the zero/pole is $|k|$.
+
+It turns out we can use integrals to help count poles and zeroes.
+
+\begin{thm}[Argument principle]
+ Let $U$ be a simply connected domain, and let $f$ be meromorphic on $U$. Suppose in fact $f$ has finitely many zeroes $z_1, \cdots, z_k$ and finitely many poles $w_1, \cdots, w_\ell$. Let $\gamma$ be a piecewise-$C^1$ closed curve such that $z_i, w_j \not\in \image(\gamma)$ for all $i, j$. Then
+ \[
 I(f \circ \gamma, 0) = \frac{1}{2\pi i}\int_\gamma \frac{f'(z)}{f(z)}\;\d z = \sum_{i = 1}^k \ord(f, z_i) I(\gamma, z_i) - \sum_{j = 1}^\ell \ord(f, w_j) I(\gamma, w_j).
+ \]
+\end{thm}
+Note that the first equality comes from the fact that
+\[
+ I(f \circ \gamma, 0) = \frac{1}{2\pi i} \int_{f \circ \gamma} \frac{\d w}{w} = \frac{1}{2\pi i}\int_\gamma \frac{\d f}{f(z)} = \frac{1}{2\pi i} \int_\gamma \frac{f'(z)}{f(z)}\;\d z.
+\]
In particular, if $\gamma$ is a simple closed curve, then the winding numbers of $\gamma$ about the points $z_i, w_j$ lying in the region bounded by $\gamma$ are all $+1$ (with the right choice of orientation). Then
+\[
+ \text{number of zeroes} - \text{number of poles} = \frac{1}{2\pi} (\text{change in argument of $f$ along $\gamma$}).
+\]
+
+\begin{proof}
+ By the residue theorem, we have
+ \[
+ \frac{1}{2\pi i}\int_\gamma \frac{f'(z)}{f(z)}\;\d z = \sum_{z \in U} \Res\left(\frac{f'}{f}, z\right) I(\gamma, z),
+ \]
 where we sum over all zeroes and poles of $f$. Note that outside these zeroes and poles, the function $\frac{f'(z)}{f(z)}$ is holomorphic.
+
 Now at each $z_j$, if $f(z) = (z - z_j)^k g(z)$, with $g(z_j) \not= 0$, then by direct computation, we get
+ \[
+ \frac{f'(z)}{f(z)} = \frac{k}{z - z_j} + \frac{g'(z)}{g(z)}.
+ \]
+ Since at $z_j$, $g$ is holomorphic and non-zero, we know $\frac{g'(z)}{g(z)}$ is holomorphic near $z_j$. So
+ \[
+ \Res\left(\frac{f'}{f}, z_j\right) = k = \ord(f, z_j).
+ \]
 Analogously, by the same proof, at each $w_j$, we get
+ \[
+ \Res\left(\frac{f'}{f}, w_j\right) = -\ord(f; w_j).
+ \]
+ So done.
+\end{proof}
+This might be the right place to put the following remark --- all the time, we have assumed that a simple closed curve ``bounds a region'', and then we talk about which poles or zeroes are bounded by the curve. While this \emph{seems} obvious, it is not. This is given by the Jordan curve theorem, which is actually hard.
+
+Instead of resorting to this theorem, we can instead define what it means to bound a region in a more convenient way. One can say that for a domain $U$, a closed curve $\gamma \subseteq U$ \emph{bounds} a domain $D \subseteq U$ if
+\[
+ I(\gamma, z) =
+ \begin{cases}
+ +1 & z \in D\\
+ 0 & z \not\in D
+ \end{cases},
+\]
+for a particular choice of orientation on $\gamma$. However, we shall not worry ourselves with this.
+
The main application of the argument principle is Rouch\'e's theorem.
\begin{cor}[Rouch\'e's theorem]
 Let $U$ be a domain and $\gamma$ a closed curve which bounds a domain in $U$ (the key case is when $U$ is simply connected and $\gamma$ is a simple closed curve). Let $f, g$ be holomorphic on $U$, and suppose $|f(z)| > |g(z)|$ for all $z \in \image(\gamma)$. Then $f$ and $f + g$ have the same number of zeroes in the domain bounded by $\gamma$, when counted with multiplicity.
+\end{cor}
+
+\begin{proof}
+ If $|f| > |g|$ on $\gamma$, then $f$ and $f + g$ cannot have zeroes on the curve $\gamma$. We let
+ \[
+ h(z) = \frac{f(z) + g(z)}{f(z)} = 1 + \frac{g(z)}{f(z)}.
+ \]
 This is a natural thing to consider, since zeroes of $f + g$ are zeroes of $h$, while poles of $h$ are zeroes of $f$. Note that by assumption, for all $z \in \gamma$, we have
+ \[
+ h(z) \in B(1, 1) \subseteq \{z: \Re z > 0\}.
+ \]
+ Therefore $h \circ \gamma$ is a closed curve in the half-plane $\{z: \Re z > 0\}$. So $I(h \circ \gamma; 0) = 0$. Then by the argument principle, $h$ must have the same number of zeros as poles in $D$, when counted with multiplicity (note that the winding numbers are all $+1$).
+
+ Thus, as the zeroes of $h$ are the zeroes of $f + g$, and the poles of $h$ are the poles of $f$, the result follows.
+\end{proof}
+
+\begin{eg}
 Consider the function $z^4 + 6z + 3$. This has three roots (with multiplicity) in $\{1 < |z| < 2\}$. To show this, note that on $|z| = 2$, we have
+ \[
+ |z|^4 = 16 > 6|z| + 3 \geq |6z + 3|.
+ \]
+ So if we let $f(z) = z^4$ and $g(z) = 6z + 3$, then $f$ and $f + g$ have the same number of roots in $\{|z| < 2\}$. Hence all four roots lie inside $\{|z| < 2\}$.
+
+ On the other hand, on $|z| = 1$, we have
+ \[
 |6z| = 6 > 4 = |z|^4 + 3 \geq |z^4 + 3|.
+ \]
 So $6z$ and $z^4 + 6z + 3$ have the same number of roots in $\{|z| < 1\}$. So there is exactly one root in there, and the remaining three must lie in $\{1 < |z| < 2\}$ (the bounds above show that $|z|$ cannot be exactly $1$ or $2$). So done.
+\end{eg}
+
+\begin{eg}
+ Let
+ \[
+ P(x) = x^n + a_{n - 1} x^{n - 1} + \cdots + a_1 x + a_0 \in \Z[x],
+ \]
+ and suppose $a_0 \not= 0$. If
+ \[
+ |a_{n - 1}| > 1 + |a_{n - 2}| + \cdots + |a_1| + |a_0|,
+ \]
+ then $P$ is irreducible over $\Z$ (and hence irreducible over $\Q$, by Gauss' lemma from IB Groups, Rings and Modules).
+
+ To show this, we let
+ \begin{align*}
+ f(z) &= a_{n - 1} z^{n - 1},\\
+ g(z) &= z^n + a_{n - 2}z^{n - 2} + \cdots + a_1 z + a_0.
+ \end{align*}
+ Then our hypothesis tells us $|f| > |g|$ on $|z| = 1$.
+
+ So $f$ and $P = f + g$ both have $n - 1$ roots in the open unit disc $\{|z| < 1\}$.
+
 Now if we could factor $P(z) = Q(z)R(z)$, where $Q, R \in \Z[x]$ are non-constant, then since only one root of $P$ lies outside the open unit disc, at least one of $Q, R$ must have all its roots inside it. Say all roots of $Q$ are inside the unit disc. But we assumed $a_0 \not= 0$. So $0$ is not a root of $P$, hence not a root of $Q$. But the product of the roots of $Q$ is, up to sign, the constant coefficient of $Q$: a non-zero integer of absolute value strictly less than $1$. This is a contradiction.
+\end{eg}
+
The argument principle and Rouch\'e's theorem tell us how many roots we have. However, we do not know whether they are distinct. This information is given to us via the local degree theorem. Before we can state it, we have to define the local degree.
+
+\begin{defi}[Local degree]
 Let $f: B(a, r) \to \C$ be holomorphic and non-constant. Then the \emph{local degree} of $f$ at $a$, written $\deg(f, a)$, is the order of the zero of $f(z) - f(a)$ at $a$.
+\end{defi}
+If we take the Taylor expansion of $f$ about $a$, then the local degree is the degree of the first non-zero term after the constant term.
+
+\begin{lemma}
+ The local degree is given by
+ \[
+ \deg (f, a) = I (f \circ \gamma, f(a)),
+ \]
+ where
+ \[
+ \gamma(t) = a + re^{it},
+ \]
+ with $0 \leq t \leq 2\pi$, for $r > 0$ sufficiently small.
+\end{lemma}
+
+\begin{proof}
 Note that by the identity theorem, we know that $f(z) - f(a)$ has an isolated zero at $a$ (since $f$ is non-constant). So for sufficiently small $r$, the function $f(z) - f(a)$ does not vanish on $\overline{B(a, r)} \setminus \{a\}$. If we use this $r$, then $f \circ \gamma$ never hits $f(a)$, and the winding number is well-defined.
+
+ The result then follows directly from the argument principle.
+\end{proof}
+
+\begin{prop}[Local degree theorem]
+ Let $f: B(a, r) \to \C$ be holomorphic and non-constant. Then for $r > 0$ sufficiently small, there is $\varepsilon > 0$ such that for any $w \in B(f(a), \varepsilon) \setminus \{f(a)\}$, the equation $f(z) = w$ has exactly $\deg(f, a)$ distinct solutions in $B(a, r)$.
+\end{prop}
+
+\begin{proof}
+ We pick $r > 0$ such that $f(z) - f(a)$ and $f'(z)$ don't vanish on $B(a, r) \setminus \{a\}$. We let $\gamma(t) = a + re^{it}$. Then $f(a) \not\in \image(f \circ \gamma)$. So there is some $\varepsilon > 0$ such that
+ \[
+ B(f(a), \varepsilon) \cap \image(f \circ \gamma) = \emptyset.
+ \]
 We now let $w \in B(f(a), \varepsilon)$. Then the number of zeros of $f(z) - w$ in $B(a, r)$ is just $I(f \circ \gamma, w)$, by the argument principle. This is equal to $I(f \circ \gamma, f(a)) = \deg(f, a)$, by the invariance of $I(\Gamma, *)$ as we move $*$ within a component of $\C \setminus \image(\Gamma)$.
+
+ Now if $w \not= f(a)$, since $f'(z) \not= 0$ on $B(a, r)\setminus \{a\}$, all roots of $f(z) - w$ must be simple. So there are exactly $\deg (f; a)$ distinct zeros.
+\end{proof}
+
+The local degree theorem says the equation $f(z) = w$ has $\deg(f, a)$ roots for $w$ sufficiently close to $f(a)$. In particular, we know there \emph{are} some roots. So $B(f(a), \varepsilon)$ is contained in the image of $f$. So we get the following result:
+
+\begin{cor}[Open mapping theorem]
+ Let $U$ be a domain and $f: U \to \C$ be holomorphic and non-constant. Then $f$ is an open map, i.e.\ for all open $V \subseteq U$, the image $f(V)$ is open.
+\end{cor}
+
+\begin{proof}
+ This is an immediate consequence of the local degree theorem. It suffices to prove that for every $a \in V$ and $r > 0$ sufficiently small with $B(a, r) \subseteq V$, we can find $\varepsilon > 0$ such that $B(f(a), \varepsilon) \subseteq f(B(a, r))$. This is true by the local degree theorem.
+\end{proof}
+
+Recall that Liouville's theorem says every holomorphic $f: \C \to B(0, 1)$ is constant. However, for any other simply connected domain, we know there are some interesting functions we can write down.
+
+\begin{cor}
+ Let $U\subseteq \C$ be a simply connected domain, and $U \not= \C$. Then there is a non-constant holomorphic function $U \to B(0, 1)$.
+\end{cor}
+This is a weak form of the Riemann mapping theorem, which says that there is a \emph{conformal equivalence} to $B(0, 1)$. This just says there is a map that is not boring.
+
+\begin{proof}
+ We let $q \in \C \setminus U$, and let $\phi(z) = z - q$. So $\phi: U \to \C$ is non-vanishing. It is also clearly holomorphic and non-constant. By an exercise (possibly on the example sheet), there is a holomorphic function $g: U \to \C$ such that $\phi(z) = e^{g(z)}$ for all $z$. In particular, our function $\phi(z) = z - q: U \to \C^*$ can be written as $\phi(z) = h(z)^2$, for some function $h: U \to \C^*$ (by letting $h(z) = e^{\frac{1}{2}g(z)}$).
+
+ We let $y \in h(U)$, and then the open mapping theorem says there is some $r > 0$ with $B(y, r) \subseteq h(U)$. We claim that $B(-y, r) \cap h(U) = \emptyset$: if $h(z) \in B(-y, r)$, then $-h(z) \in B(y, r) \subseteq h(U)$, say $-h(z) = h(z')$. But $h(z_1) = \pm h(z_2)$ implies $\phi(z_1) = \phi(z_2)$, and $\phi$ is clearly injective. So $z = z'$, forcing $h(z) = 0$, which contradicts $h: U \to \C^*$ (note that since $y \not= 0$, we have $B(y, r) \cap B(-y, r) = \emptyset$ for sufficiently small $r$).
+
+ Now define
+ \[
+ f: z \mapsto \frac{r}{2(h(z) + y)}.
+ \]
+ This is a holomorphic function $f: U \to B(0, 1)$, and is non-constant.
+\end{proof}
+This shows the amazing difference between $\C$ and $\C \setminus \{0\}$.
+\end{document}
diff --git a/books/cam/IB_L/complex_methods.tex b/books/cam/IB_L/complex_methods.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8c663c2b7d7b550b00bee6a3aa7a79300f38e5bc
--- /dev/null
+++ b/books/cam/IB_L/complex_methods.tex
@@ -0,0 +1,2816 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Lent}
+\def\nyear {2016}
+\def\nlecturer {R.\ E.\ Hunt}
+\def\ncourse {Complex Methods}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Analytic functions}\\
+Definition of an analytic function. Cauchy-Riemann equations. Analytic functions as conformal mappings; examples. Application to the solutions of Laplace's equation in various domains. Discussion of $\log z$ and $z^a$.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Contour integration and Cauchy's Theorem}\\
+{[}\emph{Proofs of theorems in this section will not be examined in this course.}{]}\\
+Contours, contour integrals. Cauchy's theorem and Cauchy's integral formula. Taylor and Laurent series. Zeros, poles and essential singularities.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Residue calculus}\\
+Residue theorem, calculus of residues. Jordan's lemma. Evaluation of definite integrals by contour integration.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{Fourier and Laplace transforms}\\
+Laplace transform: definition and basic properties; inversion theorem (proof not required); convolution theorem. Examples of inversion of Fourier and Laplace transforms by contour integration. Applications to differential equations.\hspace*{\fill} [4]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In Part IA, we learnt quite a lot about differentiating and integrating real functions. Differentiation was fine, but integration was tedious. Integrals were \emph{very} difficult to evaluate.
+
+In this course, we will study differentiating and integrating complex functions. Here differentiation is nice, and integration is easy. We will show that complex differentiable functions satisfy many things we hoped were true --- a complex differentiable function is automatically infinitely differentiable. Moreover, an everywhere differentiable function must be constant if it is bounded.
+
+On the integration side, we will show that integrals of complex functions can be performed by computing things known as \emph{residues}, which are much easier to compute. We are not actually interested in performing complex integrals. Instead, we will take some difficult real integrals, and pretend they are complex ones.
+
+This is a methods course. By this, we mean we will not focus too much on proofs. We will at best just skim over the proofs. Instead, we focus on \emph{doing things}. We will not waste time proving things people have proved 300 years ago. If you like proofs, you can go to the IB Complex Analysis course, or look them up in relevant books.
+
+\section{Analytic functions}
+\subsection{The complex plane and the Riemann sphere}
+We begin with a review of complex numbers. Any complex number $z \in \C$ can be written in the form $x + iy$, where $x = \Re z$, $y = \Im z$ are real numbers. We can also write it as $r e^{i\theta}$, where
+\begin{defi}[Modulus and argument]
+ The \emph{modulus} and \emph{argument} of a complex number $z = x + iy$ are given by
+ \[
+ r = |z| = \sqrt{x^2 + y^2}, \quad \theta = \arg z,
+ \]
+ where $x = r \cos \theta, y = r \sin \theta$.
+\end{defi}
+The argument is defined only up to multiples of $2\pi$. So we define the following:
+\begin{defi}[Principal value of argument]
+ The \emph{principal value} of the argument is the value of $\theta$ in the range $(-\pi, \pi]$.
+\end{defi}
+We might be tempted to write down the formula
+\[
+ \theta = \tan^{-1} \left(\frac{y}{x}\right),
+\]
+but this does not always give the right answer --- it is correct only if $x > 0$. If $x \leq 0$, then it might be out by $\pm \pi$ (e.g.\ consider $z = 1 + i$ and $z = -1 - i$).
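This pitfall is exactly why numerical libraries provide a two-argument arctangent. As a small illustration (not part of the course), Python's `math.atan2` computes the principal value correctly in all four quadrants, while the naive formula does not:

```python
import math

naive = lambda z: math.atan(z.imag / z.real)    # only valid for Re z > 0

def principal_arg(z):
    # atan2 accounts for the quadrant, giving theta in (-pi, pi]
    return math.atan2(z.imag, z.real)

for z in (1 + 1j, -1 - 1j):
    print(naive(z), principal_arg(z))
# naive gives pi/4 for both points; the true arguments are pi/4 and -3*pi/4
```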
+
+\begin{defi}[Open set]
+ An \emph{open set} $\mathcal{D}$ is one which does not include its boundary. More technically, $\mathcal{D} \subseteq \C$ is open if for all $z_0 \in \mathcal{D}$, there is some $\delta > 0$ such that the disc $|z - z_0| < \delta$ is contained in $\mathcal{D}$.
+\end{defi}
+
+\begin{defi}[Neighbourhood]
+ A \emph{neighbourhood} of a point $z \in \C$ is an open set containing $z$.
+\end{defi}
+
+\subsubsection*{The extended complex plane}
+Often, the complex plane $\C$ itself is not enough. We want to consider the point $\infty$ as well. This forms the extended complex plane.
+\begin{defi}[The extended complex plane]
+ The \emph{extended complex plane} is $\C^* = \C \cup \{\infty\}$. We can reach the ``point at infinity'' by going off in any direction in the plane, and all are equivalent. In particular, there is no concept of $-\infty$. All infinities are the same. Operations with $\infty$ are done in the obvious way.
+\end{defi}
+Sometimes, we \emph{do} write down things like $-\infty$. This does not refer to a different point. Instead, this indicates a \emph{limiting process}. We mean we are approaching this infinity from the direction of the negative real axis. However, we still end up in the same place.
+
+Conceptually, we can visualize this using the \emph{Riemann sphere}, which is a sphere resting on the complex plane with its ``South Pole'' $S$ at $z = 0$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (8, 0) -- (10, 3) -- (2, 3) -- (0, 0);
+ \draw [gray] (1, 1.5) -- (9, 1.5);
+ \draw [gray] (4, 0) -- (6, 3);
+ \draw (5, 2.5) circle [radius=1.2];
+
+ \node [circ] at (5, 1.5) {};
+ \node at (5, 1.5) [right] {$S$};
+
+ \node [circ] at (5, 3.5) {};
+ \node at (5, 3.5) [right] {$N$};
+
+ \node [circ] at (5.5, 2.8) {};
+ \node at (5.5, 2.8) [right] {$P$};
+
+ \draw (5, 3.5) -- (7, 0.7) node [circ] {} node [right] {$z$};
+ \end{tikzpicture}
+\end{center}
+For any point $z \in \C$, we draw a line through the ``North Pole'' $N$ of the sphere to $z$, and note where this line intersects the sphere. This specifies an equivalent point $P$ on the sphere. Then $\infty$ is equivalent to the North Pole of the sphere itself. So the extended complex plane is mapped bijectively to the sphere.
+
+This is a useful way to visualize things, but is not as useful when we actually want to do computations. To investigate properties of $\infty$, we use the substitution $\zeta = \frac{1}{z}$. A function $f(z)$ is said to have a particular property \emph{at $\infty$} if $f(\frac{1}{\zeta})$ has that same property at $\zeta = 0$. This vague notion will be made precise when we have specific examples to play with.
+
+\subsection{Complex differentiation}
+Recall the definition of differentiation for a real function $f(x)$:
+\[
+ f'(x) = \lim_{\delta x \to 0} \frac{f(x + \delta x) - f(x)}{\delta x}.
+\]
+It is implicit that the limit must be the same whichever direction we approach from. For example, consider $|x|$ at $x = 0$. If we approach from the right, i.e.\ $\delta x \to 0^+$, then the limit is $+1$, whereas from the left, i.e.\ $\delta x \to 0^-$, the limit is $-1$. Because these limits are different, we say that $|x|$ is not differentiable at the origin.
+
+This is nothing new, and we already know that. But for complex differentiation, this issue is much more important, since there are many more directions to approach from. We now extend the definition of differentiation to complex numbers:
+\begin{defi}[Complex differentiable function]
+ A complex differentiable function $f: \C \to \C$ is \emph{differentiable} at $z$ if
+ \[
+ f'(z) = \lim_{\delta z \to 0} \frac{f(z + \delta z) - f(z)}{\delta z}
+ \]
+ exists (and is therefore independent of the direction of approach --- but now there are infinitely many possible directions).
+\end{defi}
+This is the same definition as that for a real function. Often, we are not interested in functions that are differentiable at a \emph{point}, since this might allow some rather exotic functions we do not want to consider. Instead, we want the function to be differentiable near the point.
+
+\begin{defi}[Analytic function]
+ We say $f$ is \emph{analytic} at a point $z$ if there exists a neighbourhood of $z$ throughout which $f'$ exists. The terms \emph{regular} and \emph{holomorphic} are also used.
+\end{defi}
+
+\begin{defi}[Entire function]
+ A complex function is \emph{entire} if it is analytic throughout $\C$.
+\end{defi}
+
+The property of analyticity is in fact a surprisingly strong one! For example, two consequences are:
+\begin{enumerate}
+ \item If a function is analytic, then it is differentiable infinitely many times. This is very \emph{very} false for real functions. There are real functions differentiable $N$ times, but no more (e.g.\ by taking a non-differentiable function and integrating it $N$ times).
+ \item A bounded entire function must be a constant.
+\end{enumerate}
+There are many more interesting properties, but these are sufficient to show us that complex differentiation is very different from real differentiation.
+\subsubsection*{The Cauchy-Riemann equations}
+We already know well how to differentiate real functions. Can we use this to determine whether certain complex functions are differentiable? For example, is the function $f(x + iy) = \cos x + i\sin y$ differentiable? In general, given a complex function
+\[
+ f(z) = u(x, y) + iv(x, y),
+\]
+where $z = x + iy$ and $u, v$ are real functions, is there an easy criterion to determine whether $f$ is differentiable?
+
+We suppose that $f$ is differentiable at $z$. We may take $\delta z$ in any direction we like. First, we take it to be real, with $\delta z = \delta x$. Then
+\begin{align*}
+ f'(z) &= \lim_{\delta x \to 0} \frac{f(z + \delta x) - f(z)}{\delta x}\\
+ &= \lim_{\delta x \to 0} \frac{u(x + \delta x, y) + iv(x + \delta x, y) - (u(x, y) + iv(x, y))}{\delta x}\\
+ &= \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}.
+\end{align*}
+What this says is something entirely obvious --- since we are allowed to take the limit in any direction, we can take it in the $x$ direction, and we get the corresponding partial derivative. This is a completely uninteresting point. Instead, let's do the really fascinating thing of taking the limit in the $y$ direction!
+
+Let $\delta z = i \delta y$. Then we can compute
+\begin{align*}
+ f'(z) &= \lim_{\delta y \to 0} \frac{f(z + i\delta y) - f(z)}{i \delta y}\\
+ &= \lim_{\delta y \to 0} \frac{u(x, y + \delta y) + iv(x, y + \delta y) - (u(x, y) + iv(x, y))}{i \delta y}\\
+ &= \frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y}.
+\end{align*}
+By the definition of differentiability, the two results for $f'(z)$ must agree! So we must have
+\[
+ \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} - i \frac{\partial u}{\partial y}.
+\]
+Taking the real and imaginary components, we get
+\begin{prop}[Cauchy-Riemann equations]
+ If $f = u + iv$ is differentiable, then
+ \[
+ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},\quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.
+ \]
+\end{prop}
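The two directional limits computed above can be mimicked with finite differences. The following Python sketch (a numerical illustration, not a proof) checks that for $e^z$ the limits along the $x$ and $iy$ directions agree, and both approximate $e^z$:

```python
import cmath

# finite-difference version of the limit, approaching along a chosen direction
def directional_derivative(f, z, direction, h=1e-6):
    dz = h * direction
    return (f(z + dz) - f(z)) / dz

z = 0.3 + 0.7j
d_real = directional_derivative(cmath.exp, z, 1)    # delta z = delta x
d_imag = directional_derivative(cmath.exp, z, 1j)   # delta z = i delta y
print(abs(d_real - d_imag), abs(d_real - cmath.exp(z)))  # both small, O(h)
```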
+Is the converse true? If these equations hold, does it follow that $f$ is differentiable? This is not always true. This holds only if $u$ and $v$ themselves are differentiable, which is a stronger condition than the mere existence of the partial derivatives, as you may have learnt from IB Analysis II. In particular, this holds if the partial derivatives $u_x, u_y, v_x, v_y$ are continuous (which implies differentiability). So
+\begin{prop}
+ Given a complex function $f = u + iv$, if $u$ and $v$ are real differentiable at a point $z$ and
+ \[
+ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},\quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x},
+ \]
+ then $f$ is differentiable at $z$.
+\end{prop}
+We will not prove this --- proofs are for IB Complex Analysis.
+
+\subsubsection*{Examples of analytic functions}
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $f(z) = z$ is entire, i.e.\ differentiable everywhere. Here $u = x, v = y$. Then the Cauchy-Riemann equations are satisfied everywhere, since
+ \[
+ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} = 1,\quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} = 0,
+ \]
+ and these are clearly continuous. Alternatively, we can prove this directly from the definition.
+ \item $f(z) = e^z = e^x (\cos y + i \sin y)$ is entire since
+ \[
+ \frac{\partial u}{\partial x} = e^x \cos y = \frac{\partial v}{\partial y},\quad \frac{\partial u}{\partial y} = - e^x \sin y = -\frac{\partial v}{\partial x}.
+ \]
+ The derivative is
+ \[
+ f'(z) = \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} = e^x \cos y + i e^x \sin y = e^z,
+ \]
+ as expected.
+ \item $f(z) = z^n$ for $n \in \N$ is entire. This is less straightforward to check. Writing $z = r(\cos \theta + i \sin \theta)$, we obtain
+ \[
+ u = r^n \cos n\theta,\quad v = r^n \sin n\theta.
+ \]
+ We can check the Cauchy-Riemann equations using the chain rule, writing $r = \sqrt{x^2 + y^2}$ and $\tan \theta = \frac{y}{x}$. This takes quite a while, and it's not worth your time. But if you really do so, you will find the derivative to be $nz^{n - 1}$.
+ \item Any rational function, i.e.\ $f(z) = \frac{P(z)}{Q(z)}$ where $P, Q$ are polynomials, is analytic \emph{except} at points where $Q(z) = 0$ (where it is not even defined). For instance,
+ \[
+ f(z) = \frac{z}{z^2 + 1}
+ \]
+ is analytic except at $\pm i$.
+ \item Many standard functions can be extended naturally to complex functions and obey the usual rules for their derivatives. For example,
+ \begin{itemize}
+ \item $\sin z = \frac{e^{iz} - e^{-iz}}{2i}$ is differentiable with derivative $\cos z = \frac{e^{iz} + e^{-iz}}{2}$. We can also write
+ \begin{align*}
+ \sin z &= \sin (x + iy) \\
+ &= \sin x \cos iy + \cos x \sin iy \\
+ &= \sin x \cosh y + i \cos x \sinh y,
+ \end{align*}
+ which is sometimes convenient.
+ \item Similarly $\cos z, \sinh z, \cosh z$ etc. differentiate to what we expect them to differentiate to.
+ \item $\log z = \log|z| + i \arg z$ has derivative $\frac{1}{z}$.
+ \item The product rule, quotient rule and chain rule hold in exactly the same way, which allows us to prove (iii) and (iv) easily.
+ \end{itemize}
+ \end{enumerate}
+\end{eg}
+
+\subsubsection*{Examples of non-analytic functions}
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $f(z) = \Re z$. This has $u = x, v = 0$. But
+ \[
+ \frac{\partial u}{\partial x} = 1\not= 0 = \frac{\partial v}{\partial y}.
+ \]
+ So $\Re z$ is nowhere analytic.
+ \item Consider $f(z) = |z|$. This has $u = \sqrt{x^2 + y^2}, v = 0$. This is thus nowhere analytic.
+ \item The complex conjugate $f(z) = \bar z = z^* = x - iy$ has $u = x, v = -y$. So the Cauchy-Riemann equations don't hold. Hence this is nowhere analytic.
+
+ We could have deduced (ii) from this --- if $|z|$ were analytic, then so would $|z|^2$ be, and hence $\bar{z} = \frac{|z|^2}{z}$ would also be analytic (away from the origin), which is not true.
+ \item We have to be a bit more careful with $f(z) = |z|^2 = x^2 + y^2$. The Cauchy-Riemann equations are satisfied only at the origin. So $f$ is only differentiable at $z = 0$. However, it is not analytic since there is no neighbourhood of $0$ throughout which $f$ is differentiable.
+ \end{enumerate}
+\end{eg}
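The failure in (iii) is visible numerically: for $f(z) = \bar{z}$, the directional limits along the real and imaginary directions are $1$ and $-1$ respectively, so no single derivative can exist. A minimal Python sketch:

```python
# f(z) = conjugate(z): the two directional limits disagree at every point,
# so f is nowhere complex differentiable
def dd(f, z, direction, h=1e-6):
    dz = h * direction
    return (f(z + dz) - f(z)) / dz

conj = lambda z: z.conjugate()
z = 2 + 3j
print(dd(conj, z, 1))    # approx 1, along the real direction
print(dd(conj, z, 1j))   # approx -1, along the imaginary direction
```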
+
+\subsection{Harmonic functions}
+This is the last easy section of the course.
+
+\begin{defi}[Harmonic conjugates]
+ Two functions $u, v$ satisfying the Cauchy-Riemann equations are called \emph{harmonic conjugates}.
+\end{defi}
+
+If we know one, then we can find the other up to a constant. For example, if $u(x, y) = x^2 - y^2$, then $v$ must satisfy
+\[
+ \frac{\partial v}{\partial y} = \frac{\partial u}{\partial x} = 2x.
+\]
+So we must have $v = 2xy + g(x)$ for some function $g(x)$. The other Cauchy-Riemann equation gives
+\[
+ -2y = \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} = -2y - g'(x).
+\]
+This tells us $g'(x) = 0$. So $g$ must be a genuine constant, say $\alpha$. The corresponding analytic function whose real part is $u$ is therefore
+\[
+ f(z) = x^2 - y^2 + 2ixy + i\alpha = (x + iy)^2 + i \alpha = z^2 + i\alpha.
+\]
+Note that in an exam, if we were asked to find the analytic function $f$ with real part $u$ (where $u$ is given), then we \emph{must} express it in terms of $z$, and not $x$ and $y$, or else it is not clear this is indeed analytic.
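The integration steps above can be reproduced with a computer algebra system. A sketch using sympy (assumed available; this is a check, not an exam technique):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2

# v_y = u_x, so integrate u_x with respect to y; sympy omits the "constant" g(x)
v = sp.integrate(sp.diff(u, x), y)           # gives 2*x*y
g_prime = -sp.diff(u, y) - sp.diff(v, x)     # the other CR equation forces g'(x) = 0
print(v, g_prime)

# sanity check at a sample point: u + i*v agrees with z^2
z0 = 1.3 + 0.4j
val = complex((u + sp.I * v).subs({x: z0.real, y: z0.imag}))
print(abs(val - z0**2))                      # essentially 0
```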
+
+On the other hand, if we are given that $f(z) = u + iv$ is analytic, then we can compute
+\begin{align*}
+ \frac{\partial^2 u}{\partial x^2} &= \frac{\partial }{\partial x}\left(\frac{\partial u}{\partial x}\right)\\
+ &= \frac{\partial }{\partial x} \left(\frac{\partial v}{\partial y}\right)\\
+ &= \frac{\partial }{\partial y}\left(\frac{\partial v}{\partial x}\right)\\
+ &= \frac{\partial }{\partial y}\left(- \frac{\partial u}{\partial y}\right)\\
+ &= -\frac{\partial^2 u}{\partial y^2}.
+\end{align*}
+So $u$ satisfies Laplace's equation in two dimensions, i.e.
+\[
+ \nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.
+\]
+Similarly, so does $v$.
+\begin{defi}[Harmonic function]
+ A function satisfying Laplace's equation in an open set is said to be \emph{harmonic}.
+\end{defi}
+
+Thus we have shown the following:
+\begin{prop}
+ The real and imaginary parts of any analytic function are harmonic.
+\end{prop}
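One can spot-check this proposition symbolically, say for the analytic function $z^3$ (a sketch using sympy, assumed available):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.expand((x + sp.I * y)**3)             # the analytic function z^3
u, v = sp.re(f), sp.im(f)                    # x^3 - 3*x*y^2 and 3*x^2*y - y^3

laplacian = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)
print(laplacian(u), laplacian(v))            # both 0: u and v are harmonic
```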
+
+\subsection{Multi-valued functions}
+For $z = re^{i\theta}$, we define $\log z = \log r + i \theta$. There are infinitely many values of $\log z$ --- one for each choice of $\theta$. For example,
+\[
+ \log i = \frac{\pi i}{2} \text{ or }\frac{5\pi i}{2}\text{ or } -\frac{3\pi i}{2}\text{ or }\cdots.
+\]
+This is fine, right? Functions can be multi-valued. Nothing's wrong.
+
+Well, when we write down an expression, it'd better be well-defined. So we really should find some way to deal with this.
+
+This section is really more subtle than it sounds like. It turns out it is non-trivial to deal with these multi-valued functions. We can't just, say, randomly require $\theta$ to be in, say, $\left(0, 2\pi\right]$, or else we will have some continuity problems, as we will later see.
+
+\subsubsection*{Branch points}
+Consider the three curves shown in the diagram.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [->] (1, 1) arc (45:405:1.41) node [anchor = south west] {$C_3$};
+
+ \draw [->] (3, 3) arc(0:360:1) node [right] {$C_1$};
+ \draw [->] (-2, 0) arc(0:360:0.4) node [anchor = south west] {$C_2$};
+ \end{tikzpicture}
+\end{center}
+In $C_1$, we could always choose $\theta$ to be always in the range $\left(0, \frac{\pi}{2}\right)$, and then $\log z$ would be continuous and single-valued going round $C_1$.
+
+On $C_2$, we could choose $\theta \in \left(\frac{\pi}{2}, \frac{3\pi}{2}\right)$ and $\log z$ would again be continuous and single-valued.
+
+However, this doesn't work for $C_3$. Since this encircles the origin, there is no such choice. Whatever we do, $\log z$ cannot be made continuous and single-valued around $C_3$. It must either ``jump'' somewhere, or the value has to increase by $2\pi i$ every time we go round the circle, i.e.\ the function is multi-valued.
+
+We now define what a branch point is. In this case, it is the origin, since that is where all our problems occur.
+\begin{defi}[Branch point]
+ A \emph{branch point} of a function is a point which is impossible to encircle with a curve on which the function is both continuous and single-valued. The function is said to have a \emph{branch point singularity} there.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\log (z - a)$ has a branch point at $z = a$.
+ \item $\log\left(\frac{z - 1}{z + 1}\right) = \log(z - 1) - \log(z + 1)$ has two branch points at $\pm 1$.
+ \item $z^\alpha = r^\alpha e^{i\alpha \theta}$ has a branch point at the origin as well for $\alpha \not\in \Z$ --- consider a circle of radius $r_0$ centered at $0$, and suppose wlog that we start at $\theta = 0$ and go once round anticlockwise. Just as before, $\theta$ must vary continuously to ensure continuity of $e^{i\alpha \theta}$. So as we get back almost to where we started, $\theta$ will approach $2\pi$, and there will be a jump in $\theta$ from $2\pi$ back to $0$. So there will be a jump in $z^\alpha$ from $r_0^{\alpha} e^{2\pi i \alpha}$ to $r_0^\alpha$. So $z^\alpha$ is not continuous if $e^{2\pi i \alpha} \not= 1$, i.e.\ $\alpha$ is not an integer.
+ \item $\log z$ also has a branch point at $\infty$. Recall that to investigate the properties of a function $f(z)$ at infinity, we investigate the property of $f\left(\frac{1}{z}\right)$ at zero. If $\zeta = \frac{1}{z}$, then $\log z = - \log \zeta$, which has a branch point at $\zeta = 0$. Similarly, $z^{\alpha}$ has a branch point at $\infty$ for $\alpha \not\in \Z$.
+ \item The function $\log\left(\frac{z - 1}{z + 1}\right)$ does \emph{not} have a branch point at infinity, since if $\zeta = \frac{1}{z}$, then
+ \[
+ \log\left(\frac{z - 1}{z + 1}\right) = \log\left(\frac{1 - \zeta}{1 + \zeta}\right).
+ \]
+ For $\zeta$ close to zero, $\frac{1 - \zeta}{1 + \zeta}$ remains close to $1$, and therefore well away from the branch point of $\log$ at the origin. So we can encircle $\zeta = 0$ without $\log\left(\frac{1 - \zeta}{1 + \zeta}\right)$ being discontinuous.
+ \end{enumerate}
+\end{eg}
+So we've identified the points where the functions have problems. How do we deal with these problems?
+
+\subsubsection*{Branch cuts}
+If we wish to make $\log z$ continuous and single valued, therefore, we must stop any curve from encircling the origin. We do this by introducing a branch cut from $-\infty$ on the real axis to the origin. No curve is allowed to cross this cut.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \draw [thick,decorate, decoration=zigzag] (-2, 0) -- (0, 0);
+ \node [circ] at (0, 0) {};
+
+ \draw (0, 0) -- (1.5, 1) node [circ] {} node [right] {$z$};
+ \draw (0.4, 0) arc(0:33.69:0.4);
+ \node at (0.4, 0.2) [right] {$\theta$};
+ \end{tikzpicture}
+\end{center}
+Once we've decided where our branch cut is, we can use it to fix on values of $\theta$ lying in the range $(-\pi, \pi]$, and we have defined a \emph{branch} of $\log z$. This branch is single-valued and continuous on any curve $C$ that does not cross the cut. This branch is in fact analytic everywhere, with $\frac{\d}{\d z} \log z = \frac{1}{z}$, \emph{except} on the non-positive real axis, where it is not even continuous.
+
+Note that a \emph{branch cut} is the squiggly line, while a \emph{branch} is a particular choice of the value of $\log z$.
+
+The cut described above is the \emph{canonical} (i.e.\ standard) branch cut for $\log z$. The resulting value of $\log z$ is called the principal value of the logarithm.
+
+What are the values of $\log z$ just above and just below the branch cut? Consider a point on the negative real axis, $z = x < 0$. Just above the cut, at $z = x + i 0^+$, we have $\theta = \pi$. So $\log z = \log |x| + i \pi$. Just below it, at $z = x + i0^-$, we have $\log z = \log |x| - i \pi$. Hence we have a discontinuity of $2\pi i$.
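Python's `cmath.log` happens to implement exactly this principal branch (cut along the negative real axis, $\theta \in (-\pi, \pi]$), so the $2\pi i$ jump can be observed directly:

```python
import cmath, math

# cmath.log uses the canonical cut along the negative real axis
x, eps = -2.0, 1e-9
above = cmath.log(complex(x, +eps))   # theta -> pi as we approach from above
below = cmath.log(complex(x, -eps))   # theta -> -pi from below
print(above.imag, below.imag)         # approx +pi and -pi
print((above - below).imag)           # discontinuity of approx 2*pi
```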
+
+We have picked an arbitrary branch cut and branch. We can pick other branch cuts or branches. Even with the same branch cut, we can still have a different branch --- we can instead require $\theta$ to fall in $(\pi, 3\pi]$. Of course, we can also pick other branch cuts, e.g.\ the non-negative imaginary axis. Any cut that stops curves wrapping around the branch point will do.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \draw [decorate, thick, decoration=zigzag] (0, 0) -- (0, 2);
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+\end{center}
+Here we can choose $\theta \in \left(-\frac{3\pi}{2}, \frac{\pi}{2}\right]$. We can also pick a branch cut like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \draw [decorate, thick, decoration=zigzag] plot coordinates {(0, 0) (0.5, 0.8) (-0.2, 1.5) (0, 2)};
+ \draw [gray] plot [smooth] coordinates {(0, 0) (0.5, 0.8) (-0.2, 1.5) (0, 2)};
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+\end{center}
+The exact choice of $\theta$ is more difficult to write down, but this is an equally valid cut, since it stops curves from encircling the origin.
+
+Exactly the same considerations (and possible branch cuts) apply for $z^\alpha$ (for $\alpha \not\in \Z$).
+
+In practice, whenever a problem requires the use of a branch, it is important to specify it clearly. This can be done in two ways:
+\begin{enumerate}
+ \item Define the function and parameter range explicitly, e.g.
+ \[
+ \log z = \log|z| + i \arg z, \quad \arg z \in (-\pi, \pi].
+ \]
+ \item Specify the location of the branch cut and give the value of the required branch at a single point not on the cut. The values everywhere else are then defined uniquely by continuity. For example, we have $\log z$ with a branch cut along $\R^{\leq 0}$ and $\log 1 = 0$. Of course, we could have defined $\log 1 = 2\pi i$ as well, and this would correspond to picking $\arg z \in (\pi, 3 \pi]$.
+\end{enumerate}
+Either way can be used, but it must be done properly.
+
+\subsubsection*{Riemann surfaces*}
+Instead of this brutal way of introducing a cut and forbidding crossing, Riemann imagined different branches as separate copies of $\C$, all stacked on top of each other but each one joined to the next at the branch cut. This structure is a \emph{Riemann surface}.
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[shift={(0, -0.6)}]
+ \draw [path fading=west] (2, 0.5) .. controls (2.5, 0.625) and (2.5, 1.375) .. (3, 1.5);
+ \draw (3, 1.5) -- (5, 2) -- (4, 5);
+ \draw [thick, decorate, decoration={zigzag, segment length=2mm, amplitude=0.5mm}] (2.5, 1) -- +(-0.4, 1.2);
+ \end{scope}
+ \foreach \x in {0, 1,2,3}{
+ \begin{scope}[shift={(0, 0.6 * \x)}]
+ \draw [fill=white] (4, 4.4) -- (-1, 3) -- (0, 0) -- (2, 0.5) .. controls (2.5, 0.625) and (2.5, 1.375) .. (3, 1.5) -- (5, 2) -- (4, 5);
+ \draw [thick, decorate, decoration={zigzag, segment length=2mm, amplitude=0.5mm}] (2.5, 1) -- +(-0.4, 1.2);
+ \node at (0, 0.1) [anchor = south west] {$\C$};
+ \end{scope}
+ }
+ \begin{scope}[shift={(0, 2.4)}]
+ \fill [white] (3.9, 4.4) -- (-1, 3) -- (0, 0) -- (2, 0.5) .. controls (2.5, 0.625) and (2.5, 1.375) .. (3, 1.5) -- (4.7, 2) -- (3.7, 5);
+ \draw (4, 4.4) -- (-1, 3) -- (0, 0) -- (2, 0.5);
+ \draw [path fading=east] (2, 0.5).. controls (2.5, 0.625) and (2.5, 1.375) .. (3, 1.5);
+ \node at (0, 0.1) [anchor = south west] {$\C$};
+ \draw [thick, decorate, decoration={zigzag, segment length=2mm, amplitude=0.5mm}] (2.5, 1) -- +(-0.4, 1.2) node [circ] {};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+The idea is that traditionally, we are not allowed to cross branch cuts. Here, when we cross a branch cut, we will move to a different copy of $\C$, and this corresponds to a different branch of our function.
+
+We will not say any more about this --- there is a whole Part II course devoted to these, uncreatively named IID Riemann Surfaces.
+
+\subsubsection*{Multiple branch cuts}
+When there is more than one branch point, we may need more than one branch cut. For
+\[
+ f(z) = (z(z - 1))^{\frac{1}{3}},
+\]
+there are two branch points, at $0$ and $1$. So we need two branch cuts. A possibility is shown below. Then no curve can wrap around either $0$ or $1$.
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [decorate, thick, decoration=zigzag] (-2, 0) -- (0, 0);
+ \draw [decorate, thick, decoration=zigzag] (1, 0) -- (2, 0);
+ \draw [->] (0, -1) -- (0, 2);
+ \node [below] at (1, 0) {$1$};
+ \node [anchor = north east] {$0$};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0, 0) {};
+
+ \draw (0, 0) -- (2, 1) node [right] {$z$} node [circ] {} node [pos=0.5, anchor = south east] {$r$};
+ \draw (1, 0) -- (2, 1) node [pos=0.7, anchor = north west] {$r_1$};
+
+ \draw (0.4, 0) arc(0:26.565:0.4);
+ \node at (0.4, 0.14) [right] {$\theta$};
+
+ \draw (1.4, 0) arc (0:45:0.4);
+ \node at (1.36, 0.2) [right] {$\theta_1$};
+ \end{tikzpicture}
+\end{center}
+For any $z$, we write $z = re^{i\theta}$ and $z - 1 = r_1 e^{i\theta_1}$ with $\theta \in (-\pi , \pi]$ and $\theta_1 \in [0, 2\pi)$, and define
+\[
+ f(z) = \sqrt[3]{rr_1} e^{i(\theta + \theta_1)/3}.
+\]
+This is continuous so long as we don't cross either branch cut. This is all nice and simple.
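This two-cut branch is easy to sketch in code (an illustration only), using the stated ranges $\theta \in (-\pi, \pi]$ and $\theta_1 \in [0, 2\pi)$. It is continuous across the segment $(0, 1)$, which lies on neither cut, but jumps across a cut:

```python
import cmath, math

def f(z):
    # branch of (z(z-1))**(1/3) with cuts along (-inf, 0] and [1, inf)
    r, theta = abs(z), cmath.phase(z)                         # theta in (-pi, pi]
    r1, theta1 = abs(z - 1), cmath.phase(z - 1) % (2 * math.pi)  # theta1 in [0, 2*pi)
    return (r * r1) ** (1/3) * cmath.exp(1j * (theta + theta1) / 3)

# continuous across (0, 1), which is not part of either cut...
print(abs(f(0.5 + 1e-9j) - f(0.5 - 1e-9j)))   # tiny
# ...but discontinuous across the cut along [1, infinity)
print(abs(f(1.5 + 1e-9j) - f(1.5 - 1e-9j)))   # order 1
```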
+
+However, sometimes, we need fewer branch cuts than we might think. Consider instead the function
+\[
+ f(z) = \log\left(\frac{z - 1}{z + 1}\right).
+\]
+Writing $z + 1 = r e^{i\theta}$ and $z - 1 = r_1 e^{i \theta_1}$, we can write this as
+\begin{align*}
+ f(z) &= \log (z - 1) - \log(z + 1)\\
+ &= \log(r_1/r) + i(\theta_1 - \theta).
+\end{align*}
+This has branch points at $\pm 1$. We can, of course, pick our branch cut as above. However, notice that these two cuts also make it impossible for $z$ to ``wind around $\infty$'' (e.g.\ moving around a circle of arbitrarily large radius). Yet $\infty$ is not a branch point, and we don't have to make this unnecessary restriction. Instead, we can use the following branch cut:
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [decorate, thick, decoration=zigzag] (-1, 0) -- (1, 0);
+ \draw [->] (0, -1) -- (0, 2);
+ \node [below] at (1, 0) {$1$};
+ \node [below] at (-1, 0) {$-1$};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (-1, 0) {};
+
+ \draw (-1, 0) -- (2, 1) node [right] {$z$} node [circ] {} node [pos=0.5, anchor = south east] {$r$};
+ \draw (1, 0) -- (2, 1) node [pos=0.7, anchor = north west] {$r_1$};
+
+ \draw (-0.6, 0) arc(0:18.435:0.4);
+ \node at (-0.8, 0.1) [above] {$\theta$};
+
+ \draw (1.4, 0) arc (0:45:0.4);
+ \node at (1.4, 0.2) [right] {$\theta_1$};
+ \end{tikzpicture}
+\end{center}
+Drawing this branch cut is not hard. However, picking the values of $\theta, \theta_1$ is trickier. What we really want is $\theta, \theta_1 \in [0, 2\pi)$. This might not look intuitive at first, but we will shortly see why this is the right choice.
+
+Suppose that we are unlawful and cross the branch cut. Then the value of $\theta$ jumps by $2\pi$, while the value of $\theta_1$ varies smoothly. So the value of $f(z)$ jumps. This is expected, since we have a branch cut there. If we pass through the negative real axis on the left of the branch cut, then nothing happens, since $\theta$ and $\theta_1$ both pass smoothly through $\pi$, which is not a point of discontinuity for either.
+
+The interesting part is when we pass through the positive real axis on the right of the branch cut. When we do this, \emph{both} $\theta$ and $\theta_1$ jump by $2\pi$. However, this does not induce a discontinuity in $f(z)$, since $f(z)$ depends only on the difference $\theta_1 - \theta$, which has not experienced a jump.
+
+\subsection{\texorpdfstring{M\"obius}{Mobius} map}
+We are now going to consider a special class of maps, namely the \emph{M\"obius maps}, as defined in IA Groups. While these maps have many different applications, the most important thing we are going to use them for is to define some nice conformal mappings in the next section.
+
+We know from general theory that the M\"obius map
+\[
+ z \mapsto w = \frac{az + b}{cz + d}
+\]
+with $ad - bc \not= 0$ is analytic except at $z = -\frac{d}{c}$. It is useful to consider it as a map $\C^* \to \C^*$, where $\C^* = \C \cup \{\infty\}$, with
+\[
+ -\frac{d}{c} \mapsto \infty,\quad \infty \mapsto \frac{a}{c}.
+\]
+It is then a bijective map between $\C^*$ and itself, with the inverse being
+\[
+ w \mapsto \frac{-d w + b}{cw - a},
+\]
+another M\"obius map. These are all analytic everywhere when considered as a map $\C^* \to \C^*$.
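+We can check directly that this is indeed the inverse: substituting $w = \frac{az + b}{cz + d}$ and multiplying numerator and denominator by $cz + d$ gives
+\[
+ \frac{-d(az + b) + b(cz + d)}{c(az + b) - a(cz + d)} = \frac{(bc - ad)z}{bc - ad} = z,
+\]
+using $ad - bc \not= 0$.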
+
+\begin{defi}[Circline]
+ A \emph{circline} is either a circle or a line.
+\end{defi}
+
+The key property of M\"obius maps is the following:
+\begin{prop}
+ M\"obius maps take circlines to circlines.
+\end{prop}
+Note that if we start with a circle, we might get a circle or a line; if we start with a line, we might get a circle or a line.
+
+\begin{proof}
+ Any circline can be expressed as a circle of Apollonius,
+ \[
+ |z - z_1| = \lambda |z - z_2|,
+ \]
+ where $z_1, z_2 \in \C$ and $\lambda \in \R^+$.
+
+ This was proved in the first example sheet of IA Vectors and Matrices. The case $\lambda = 1$ corresponds to a line, while $\lambda \not= 1$ corresponds to a circle. Substituting $z$ in terms of $w$, we get
+ \[
+ \left|\frac{-dw + b}{cw - a} - z_1\right| = \lambda \left|\frac{-dw + b}{cw - a} - z_2 \right|.
+ \]
+ Rearranging this gives
+ \[
+ |(cz_1 + d) w - (az_1 + b)| = \lambda|(cz_2 + d)w - (az_2 + b)|.\tag{$*$}
+ \]
+ A bit more rearranging gives
+ \[
+ \left|w - \frac{az_1 + b}{cz_1 + d}\right| = \lambda \left|\frac{cz_2 + d}{cz_1 + d}\right|\left|w - \frac{az_2 + b}{cz_2 + d}\right|.
+ \]
+ This is another circle of Apollonius.
+
+ Note that the proof fails if either $cz_1 + d = 0$ or $cz_2 + d = 0$, but then $(*)$ trivially represents a circle.
+\end{proof}
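+For example, consider the image of the line $\Re z = 1$ under the M\"obius map $z \mapsto w = \frac{1}{z}$. Writing $z = 1 + it$ for $t \in \R$, we find
+\[
+ \left|w - \frac{1}{2}\right| = \left|\frac{1}{1 + it} - \frac{1}{2}\right| = \frac{|1 - it|}{2|1 + it|} = \frac{1}{2}.
+\]
+So the image is the circle of radius $\frac{1}{2}$ centred at $\frac{1}{2}$, with the point $w = 0$ on this circle being the image of $z = \infty$.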
+
+Geometrically, it is clear that choosing three distinct points in $\C^*$ uniquely specifies a circline (if one of the points is $\infty$, then we have specified the straight line through the other two points).
+
+Also,
+\begin{prop}
+ Given distinct points $\alpha, \beta, \gamma \in \C^*$ and distinct points $\alpha', \beta', \gamma' \in \C^*$, we can find a M\"obius map which sends $\alpha \mapsto \alpha', \beta \mapsto \beta'$ and $\gamma \mapsto \gamma'$.
+\end{prop}
+
+\begin{proof}
+ Define the M\"obius map
+ \[
+ f_1(z) = \frac{\beta - \gamma}{\beta - \alpha} \frac{z - \alpha}{z - \gamma}.
+ \]
+ By direct inspection, this sends $\alpha \mapsto 0, \beta \mapsto 1$ and $\gamma \mapsto \infty$. Similarly, we let
+ \[
+ f_2(z) = \frac{\beta' - \gamma'}{\beta' - \alpha'} \frac{z - \alpha'}{z - \gamma'}.
+ \]
+ This clearly sends $\alpha' \mapsto 0, \beta' \mapsto 1$ and $\gamma' \mapsto \infty$. Then $f_2^{-1} \circ f_1$ is the required mapping. It is a M\"obius map since M\"obius maps form a group.
+\end{proof}
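+For example, to send $1 \mapsto 0$, $i \mapsto 1$ and $-1 \mapsto \infty$, the first formula already suffices: with $\alpha = 1$, $\beta = i$ and $\gamma = -1$,
+\[
+ f_1(z) = \frac{i + 1}{i - 1}\cdot \frac{z - 1}{z + 1} = -i\,\frac{z - 1}{z + 1},
+\]
+using $\frac{i + 1}{i - 1} = -i$. Up to the rotation by $-i$, this is the map $\frac{z - 1}{z + 1}$ we will meet again when studying conformal maps.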
+
+We can therefore find a M\"obius map taking any given circline to any other, which is convenient.
+
+\subsection{Conformal maps}
+Sometimes, we might be asked to solve a problem on some complicated subspace $U \subseteq \C$. For example, we might need to solve Laplace's equation subject to some boundary conditions. In such cases, it is often convenient to transform our space $U$ into some nicer space $V$, such as the open disk. To do so, we will need a complex function $f$ that sends $U$ to $V$. For the solution on $V$ to transfer back to a solution on $U$, we want $f$ to be differentiable. Moreover, we would like it to have non-vanishing derivative, so that it is at least locally invertible.
+
+\begin{defi}[Conformal map]
+ A \emph{conformal map} $f: U \to V$, where $U, V$ are \emph{open} subsets of $\C$, is one which is analytic with non-zero derivative.
+\end{defi}
+In reality, we would often want the map to be a bijection. We sometimes call these \emph{conformal equivalences}.
+
+Unfortunately, after many hundreds of years, we still haven't managed to agree on what being conformal means. An alternative definition is that a conformal map is one that preserves the angle (in both magnitude and orientation) between intersecting curves.
+
+We shall show that our definition implies this is true; the converse is also true, but the proof is omitted. So the two definitions are equivalent.
+
+\begin{prop}
+ A conformal map preserves the angles between intersecting curves.
+\end{prop}
+
+\begin{proof}
+ Suppose $z_1(t)$ is a curve in $\C$, parameterised by $t \in \R$, which passes through a point $z_0$ when $t = t_1$. Suppose that its tangent there, $z'_1(t_1)$, has a well-defined direction, i.e.\ is non-zero, and the curve makes an angle $\phi = \arg z_1'(t_1)$ to the $x$-axis at $z_0$.
+
+ Consider the image of the curve, $Z_1(t) = f(z_1(t))$. Its tangent direction at $t = t_1$ is
+ \[
+ Z_1'(t_1) = z_1'(t_1) f'(z_1(t_1)) = z_1'(t_1) f'(z_0),
+ \]
+ and therefore makes an angle with the $x$-axis of
+ \[
+ \arg (Z_1'(t_1)) = \arg(z_1'(t_1) f'(z_0)) = \phi + \arg f'(z_0),
+ \]
+ noting that $\arg f'(z_0)$ exists since $f$ is conformal, and hence $f'(z_0) \not= 0$.
+
+ In other words, the tangent direction has been rotated by $\arg f'(z_0)$, and this is independent of the curve we started with.
+
+ Now if $z_2(t)$ is another curve passing through $z_0$, then its tangent direction will also be rotated by $\arg f'(z_0)$. The result then follows.
+\end{proof}
+
+Often, the easiest way to find the image set of a conformal map acting on a set $U$ is first to find the image of its boundary, $\partial U$, which will form the boundary $\partial V$ of $V$. However, this does not reveal which side of $\partial V$ the set $V$ lies on, so we also find the image of a conveniently chosen point within $U$, which will lie within $V$.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item The map $z \mapsto az + b$, where $a, b \in \C$ and $a\not= 0$, is a conformal map. It rotates by $\arg a$, enlarges by $|a|$, and translates by $b$. This is conformal everywhere.
+ \item The map $f(z) = z^2$ is a conformal map from
+ \[
+ U = \left\{z: 0 < |z| < 1, 0 < \arg z < \frac{\pi}{2}\right\}
+ \]
+ to
+ \[
+ V = \{w: 0 < |w| < 1, 0 < \arg w < \pi\}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (0, 0) -- (1.5, 0) arc(0:90:1.5) -- (0, 0);
+ \draw [->] (-1, 0) -- (2, 0);
+ \draw [->] (0, -1) -- (0, 2);
+ \node [below] at (1.5, 0) {$1$};
+ \draw (1.5, 0) arc(0:90:1.5);
+ \node at (0.5, 0.5) {$U$};
+
+ \draw [->] (2.5, 0.5) -- +(1, 0) node [pos=0.5, above] {$f$};
+ \begin{scope}[shift={(6, 0)}];
+ \fill [mblue, opacity=0.5] (0, 0) -- (1.5, 0) arc(0:180:1.5) -- (0, 0);
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -1) -- (0, 2);
+ \node [below] at (1.5, 0) {$1$};
+ \draw (1.5, 0) arc(0:180:1.5);
+ \node at (0.5, 0.5) {$V$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ Note that the right angles between the boundary curves at $z = 1$ and $i$ are preserved, because $f$ is conformal there; but the right angle at $z = 0$ is not preserved because $f$ is not conformal there ($f'(0) = 0$). Fortunately, this does not matter, because $U$ is an \emph{open} set and does not contain $0$.
+ \item How could we conformally map the left-hand half-plane
+ \[
+ U = \{z: \Re z < 0\}
+ \]
+ to a wedge
+ \[
+ V = \left\{w: -\frac{\pi}{4} < \arg w < \frac{\pi}{4}\right\}?
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (-2, 2) rectangle (0, -2);
+ \draw (-2, 0) -- (2, 0);
+ \draw (0, -2) -- (0, 2);
+ \node at (-1, 1) {$U$};
+ \node [circ] {};
+ \draw [thick,decorate, decoration=zigzag] (0, -2) -- (0, 0);
+
+ \draw [->] (2.5, 0.5) -- +(1, 0);
+ \begin{scope}[shift={(6, 0)}];
+ \fill [mblue, opacity=0.5] (2, 2) -- (0, 0) -- (2, -2);
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -1) -- (0, 2);
+ \draw (2, 2) -- (0, 0) -- (2, -2);
+
+ \node at (1.5, 0.5) {$V$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ We need to halve the angle. We saw that $z \mapsto z^2$ doubles the angle, so we might try $z^{\frac{1}{2}}$, for which we need to choose a branch. The branch cut must \emph{not} lie in $U$, since $z^{\frac{1}{2}}$ is not analytic on the branch cut. In particular, the principal branch does not work.
+
+ So we choose a cut along the negative imaginary axis, and define the function by $r e^{i\theta} \mapsto \sqrt{r} e^{i\theta/2}$, where $\theta \in \left(-\frac{\pi}{2}, \frac{3\pi}{2}\right]$. This produces the wedge $\{z': \frac{\pi}{4} < \arg z' < \frac{3\pi}{4}\}$, which is not quite the wedge we want, so we rotate it through $-\frac{\pi}{2}$. The final map is
+ \[
+ f(z) = -i z^{\frac{1}{2}}.
+ \]
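+
+ As a quick check, the point $z = -1 = e^{i\pi} \in U$ maps to $f(-1) = -i e^{i\pi/2} = -i \cdot i = 1$, which lies on the positive real axis, the bisector of the wedge $V$, as symmetry suggests it should.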
+ \item $e^z$ takes rectangles conformally to sectors of annuli:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [fill=mblue, fill opacity=0.5] (0.8, 0.7) rectangle (2.3, 1.5);
+ \node at (1.55, 1.1) {$U$};
+ \draw [dashed] (0.8, 0.7) -- (0, 0.7) node [left] {$iy_1$};
+ \draw [dashed] (0.8, 1.5) -- (0, 1.5) node [left] {$iy_2$};
+ \draw [dashed] (0.8, 0.7) -- (0.8, 0) node [below] {$x_1$};
+ \draw [dashed] (2.3, 0.7) -- (2.3, 0) node [below] {$x_2$};
+
+ \draw [->] (3.5, 1) -- +(1, 0);
+ \begin{scope}[shift={(7.4,1)}, scale=0.8];
+ \draw [fill=mblue, fill opacity=0.5] (0.75, 0.75) arc(45:116.565:1.0607) -- (-0.94868, 1.89737) arc(116.565:45:2.1213) -- cycle;
+
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \draw (0, 0) -- (0.75, 0.75);
+ \draw (0, 0) -- (-0.47434, 0.94868);
+ \draw [dashed] circle [radius=2.1213];
+ \draw [dashed] circle [radius=1.06066];
+
+ \node [anchor = north east] at (1.0606, 0) {$e^{x_1}$};
+ \node [anchor = north west] at (2.1213, 0) {$e^{x_2}$};
+
+ \draw [mblue] (0.2, 0) arc(0:116.565:0.2);
+ \node [mblue] at (0.1, 0.1) [above] {$y_1$};
+ \draw [mred] (0.4, 0) arc(0:45:0.4);
+ \node [mred] at (0.3, 0.2) [right] {$y_2$};
+
+ \node at (0.3, 1.5) {$V$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ With an appropriate choice of branch, $\log z$ does the reverse.
+ \item M\"obius maps (which are conformal equivalences except at the point that is sent to $\infty$) are very useful for taking circles, or parts of them, to straight lines, and vice versa.
+
+ Consider $f(z) = \frac{z - 1}{z + 1}$ acting on the unit disk $U = \{z: |z| < 1\}$. The boundary of $U$ is a circle. The three points $-1, i$ and $+1$ lie on this circle, and are mapped to $\infty$, $i$ and $0$ respectively.
+
+ Since M\"obius maps take circlines to circlines, the image of $\partial U$ is the imaginary axis. Since $f(0) = -1$, we see that the image of $U$ is the left-hand half plane.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] circle [radius=1.5];
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \node at (0.5, 0.5) {$U$};
+
+ \draw [->] (2.5, 0.5) -- +(1, 0);
+ \begin{scope}[shift={(6, 0)}];
+ \fill [mblue, opacity=0.5] (-2, 2) rectangle (0, -2);
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \node at (-1, 1) {$V$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ We can derive this alternatively by noting
+ \[
+ w = \frac{z - 1}{z + 1} \Leftrightarrow z = -\frac{w + 1}{w - 1}.
+ \]
+ So
+ \[
+ |z| < 1 \Leftrightarrow |w + 1| < |w - 1|,
+ \]
+ i.e.\ $w$ is closer to $-1$ than it is to $+1$, which describes precisely the left-hand half plane.
+
+ In fact, this particular map $f(z) = \frac{z - 1}{z + 1}$ can be deployed more generally on quadrants, because it permutes the following $8$ regions of the complex plane:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mred, fill opacity=0.5] (-2, -2) rectangle (2, 2);
+ \fill [white] circle [radius=1.5];
+ \draw [fill=mblue, fill opacity=0.5] circle [radius=1.5];
+ \draw [->] (-2, 0) -- (2, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \node at (0.5, 0.5) {$2$};
+ \node at (-0.5, 0.5) {$3$};
+ \node at (0.5, -0.5) {$6$};
+ \node at (-0.5, -0.5) {$7$};
+
+ \node at (1.5, 1.5) {$1$};
+ \node at (-1.5, 1.5) {$4$};
+ \node at (1.5, -1.5) {$5$};
+ \node at (-1.5, -1.5) {$8$};
+ \end{tikzpicture}
+ \end{center}
+ The map sends $1 \mapsto 2 \mapsto 3 \mapsto 4 \mapsto 1$ and $5 \mapsto 6 \mapsto 7 \mapsto 8 \mapsto 5$. In particular, this agrees with what we had above --- it sends the full disc (regions $2$, $3$, $6$ and $7$) to the left-hand half plane.
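+
+ For instance, $z = 1 + 2i$ lies in region $1$, and
+ \[
+ f(1 + 2i) = \frac{2i}{2 + 2i} = \frac{i}{1 + i} = \frac{1 + i}{2},
+ \]
+ which indeed lies in region $2$.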
+ \item Consider the map $f(z) = \frac{1}{z}$. This is just another M\"obius map! Hence everything we know about M\"obius maps applies to it. In particular, it is useful for acting on vertical and horizontal lines. Details are left for the first example sheet.
+ \end{enumerate}
+\end{eg}
+In practice, complicated conformal maps are usually built up from individual building blocks, each a simple conformal map. The required map is the composition of these. For this to work, we have to note that the composition of conformal maps is conformal, by the chain rule.
+
+\begin{eg}
+ Suppose we want to map the upper half-disc $|z| < 1$, $\Im z > 0$ to the full disc $|z| < 1$. We might want to just do $z \mapsto z^2$. However, this does not work, since the image then omits the non-negative real axis (for example, $z = \frac{1}{2}$ would have no preimage in the upper half-disc). Instead, we need to do something more complicated. We will do this in several steps:
+ \begin{enumerate}
+ \item We apply $f_1(z) = \frac{z - 1}{z + 1}$ to take the half-disc to the second quadrant.
+ \item We now recall that $f_1$ also takes the right-hand half plane to the disc. So we square and rotate to get the right-hand half plane. We apply $f_2(z) = iz^2$.
+ \item We apply $f_3(z) = f_1(z)$ again to obtain the disc.
+ \end{enumerate}
+ Then the desired conformal map is $f_3 \circ f_2 \circ f_1$. You can, theoretically, expand this out and get an explicit expression, but that would be a waste of time.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \fill [mblue, opacity=0.5] (2, 0) arc (0:180:2) -- (2, 0);
+ \draw (2, 0) arc (0:180:2);
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+
+ \draw [->] (4, 0) -- (6, 0) node [pos=0.5, above] {$z \mapsto \frac{z - 1}{z + 1}$};
+ \begin{scope}[shift={(10,0)}]
+ \fill [mblue, opacity=0.5] (-3, 3) rectangle (0, 0);
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \end{scope}
+
+ \begin{scope}[shift={(0,-8)}]
+ \draw [->] (-6, 0) -- (-4, 0) node [pos=0.5, above] {$z \mapsto iz^2$};
+
+ \fill [mblue, opacity=0.5] (0, 3) rectangle (3, -3);
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+
+ \draw [->] (4, 0) -- (6, 0) node [pos=0.5, above] {$z \mapsto \frac{z - 1}{z + 1}$};
+ \end{scope}
+
+ \begin{scope}[shift={(10,-8)}]
+ \fill [mblue, opacity=0.5] circle [radius=2];
+ \draw circle [radius=2];
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
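+
+ As a check, we can follow a single point through the composition. Take $z = \frac{i}{2}$, in the upper half-disc. Then
+ \[
+ f_1\left(\tfrac{i}{2}\right) = \frac{i - 2}{i + 2} = \frac{-3 + 4i}{5},
+ \]
+ which is in the second quadrant;
+ \[
+ f_2\left(\frac{-3 + 4i}{5}\right) = i\left(\frac{-3 + 4i}{5}\right)^2 = \frac{24 - 7i}{25},
+ \]
+ which is in the right-hand half plane; and finally
+ \[
+ f_3\left(\frac{24 - 7i}{25}\right) = \frac{-1 - 7i}{49 - 7i} = -\frac{i}{7},
+ \]
+ which is indeed inside the unit disc.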
+\end{eg}
+
+\subsection{Solving Laplace's equation using conformal maps}
+As we have mentioned, conformal maps are useful for transferring problems from a complicated domain to a simple domain. For example, we can use it to solve Laplace's equation, since solutions to Laplace's equations are given by real and imaginary parts of holomorphic functions.
+
+More concretely, the following algorithm can be used to solve Laplace's equation $\nabla^2 \phi(x, y) = 0$ on a tricky domain $U \subseteq \R^2$ with given Dirichlet boundary conditions on $\partial U$. We now pretend $\R^2$ is actually $\C$, and identify subsets of $\R^2$ with subsets of $\C$ in the obvious manner.
+\begin{enumerate}
+ \item Find a conformal map $f: U \to V$, where $U$ is now considered a subset of $\C$, and $V$ is a ``nice'' domain of our choice. Our aim is to find a harmonic function $\Phi$ in $V$ that satisfies the same boundary conditions as $\phi$.
+ \item Map the boundary conditions on $\partial U$ directly to the equivalent points on $\partial V$.
+ \item Now solve $\nabla^2 \Phi = 0$ in $V$ with the new boundary conditions.
+ \item The required harmonic function $\phi$ in $U$ is then given by
+ \[
+ \phi(x, y) = \Phi(\Re(f(x + iy)), \Im f(x + iy)).
+ \]
+\end{enumerate}
+To prove this works, we can take $\nabla^2$ of this expression, write $f = u + iv$, use the Cauchy-Riemann equations, and expand out the mess.
+
+Alternatively, we perform magic. Note that since $\Phi$ is harmonic, it is the real part of some complex analytic function $F(z) = \Phi(x, y) + i \Psi(x, y)$, where $z = x + iy$. Now $F(f(z))$ is analytic, as it is a composition of analytic functions. So its real part, which is $\Phi(\Re f, \Im f)$, is harmonic.
+
+Let's do an example. In this case, you might be able to solve this directly just by looking at it, using what you've learnt from IB Methods. However, we will do it with this complex methods magic.
+\begin{eg}
+ We want to find a bounded solution of $\nabla^2 \phi = 0$ on the first quadrant of $\R^2$ subject to $\phi(x, 0) = 0$ and $\phi(0, y) = 1$ for $x, y > 0$.
+
+ This is a bit silly, since $U$ is supposed to be a nasty region, while our $U$ is actually quite nice. Nevertheless, we do it anyway, since it makes a good example.
+
+ We choose $f(z) = \log z$, which maps $U$ to the strip $0 < \Im z < \frac{\pi}{2}$.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \fill [mblue, opacity=0.5] (3, 3) rectangle (0, 0);
+ \draw (-3, 0) -- (0, 0);
+ \draw (0, -3) -- (0, 0);
+
+ \node at (1.5, 1.5) {$U$};
+ \node [mred, left] at (0, 1.5) {$1$};
+ \node [mgreen, below] at (1.5, 0) {$0$};
+ \draw [thick, mgreen] (0, 0) -- (3, 0);
+ \draw [thick, mred] (0, 0) -- (0, 3);
+
+ \draw [->] (3.5, 0) -- (5.5, 0) node [pos=0.5, above] {$z \mapsto \log z$};
+ \begin{scope}[shift={(9,0)}]
+ \fill [mblue, opacity=0.5] (-3, 1) rectangle (3, 0);
+
+ \node at (1, 0.5) {$V$};
+ \draw [mgreen, thick] (-3, 1) -- (3, 1);
+ \draw [mred, thick] (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+
+ \node [anchor = south west] at (0, 1) {$i\frac{\pi}{2}$};
+ \node [anchor = north west] at (0, 0) {$0$};
+
+ \node [mgreen, below] at (-1, 0) {$0$};
+ \node [mred, above] at (-1, 1) {$1$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ Recall that we said $\log$ maps an annulus to a rectangle. This is indeed the case here --- $U$ is an annulus with zero inner radius and infinite outer radius; $V$ is an infinitely long rectangle.
+
+ We must now solve $\nabla^2 \Phi = 0$ in $V$ subject to
+ \[
+ \Phi(x, 0) = 0,\quad \Phi\left(x, \frac{\pi}{2}\right) = 1
+ \]
+ for all $x \in \R$. Note that we have these boundary conditions since $f(z)$ takes the positive real axis of $\partial U$ to the line $\Im z = 0$, and the positive imaginary axis to the line $\Im z = \frac{\pi}{2}$.
+
+ By inspection, the solution is
+ \[
+ \Phi(x, y) = \frac{2}{\pi}y.
+ \]
+ Hence,
+ \begin{align*}
+ \phi(x, y) &= \Phi(\Re \log z, \Im \log z)\\
+ &= \frac{2}{\pi} \Im \log z\\
+ &= \frac{2}{\pi} \tan^{-1}\left(\frac{y}{x}\right).
+ \end{align*}
+ Notice this is just the argument $\theta$.
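+
+ We can verify the boundary conditions directly: on the positive real axis, $y = 0$ and $x > 0$, so $\phi = \frac{2}{\pi}\tan^{-1} 0 = 0$, while on the positive imaginary axis $\tan^{-1}(y/x) \to \frac{\pi}{2}$ as $x \to 0^+$, so $\phi = 1$, as required. The solution is also bounded, since $0 \leq \phi \leq 1$ throughout $U$.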
+\end{eg}
+
+\section{Contour integration and Cauchy's theorem}
+In the remainder of the course, we will spend all our time studying integration of complex functions, and see what we can do with it. At first, you might think this is just an obvious generalization of integration of real functions. This is not true. Complex integrals have many nice properties, and it turns out there are some really convenient tricks for evaluating them. In fact, we will learn how to evaluate certain real integrals by pretending they are complex.
+
+\subsection{Contours and integrals}
+With real functions, we can just integrate a function, say, from $0$ to $1$, since there is just one way to get from $0$ to $1$ along the real line. However, in the complex plane, there are many paths we can take from one point to another. Integrating along different paths may produce different results. So we have to specify our path of integration carefully.
+
+\begin{defi}[Curve]
+ A \emph{curve} $\gamma(t)$ is a (continuous) map $\gamma: [0, 1] \to \C$.
+\end{defi}
+
+\begin{defi}[Closed curve]
+ A \emph{closed curve} is a curve $\gamma$ such that $\gamma(0) = \gamma(1)$.
+\end{defi}
+
+\begin{defi}[Simple curve]
+ A \emph{simple curve} is one which does not intersect itself, except at $t = 0, 1$ in the case of a closed curve.
+\end{defi}
+
+\begin{defi}[Contour]
+ A \emph{contour} is a piecewise smooth curve.
+\end{defi}
+Everything we do is going to be about contours. We shall, in an abuse of notation, often use the symbol $\gamma$ to denote both the map \emph{and} its image, namely the actual curve in $\C$ traversed in a particular direction.
+
+\begin{notation}
+ The contour $-\gamma$ is the contour $\gamma$ traversed in the opposite direction. Formally, we say
+ \[
+ (-\gamma)(t) = \gamma(1 - t).
+ \]
+ Given two contours $\gamma_1$ and $\gamma_2$ with $\gamma_1(1) = \gamma_2(0)$, $\gamma_1 + \gamma_2$ denotes the two contours joined end-to-end. Formally,
+ \[
+ (\gamma_1 + \gamma_2)(t) =
+ \begin{cases}
+ \gamma_1(2t) & t < \frac{1}{2}\\
+ \gamma_2(2t - 1) & t \geq \frac{1}{2}
+ \end{cases}.
+ \]
+\end{notation}
+
+\begin{defi}[Contour integral]
+ The \emph{contour integral} $\int_\gamma f(z)\;\d z$ is defined to be the usual real integral
+ \[
+ \int_\gamma f(z)\;\d z= \int_0^1 f(\gamma(t)) \gamma'(t)\;\d t.
+ \]
+\end{defi}
+Alternatively, and equivalently, dissect $[0, 1]$ into $0 = t_0 < t_1 < \cdots < t_N = 1$, and let $z_n = \gamma(t_n)$ for $n = 0, \cdots, N$. We define
+\[
+ \delta t_n = t_{n + 1} - t_n,\quad \delta z_n = z_{n + 1} - z_n.
+\]
+Then
+\[
+ \int_\gamma f(z)\;\d z = \lim_{\Delta \to 0} \sum_{n = 0}^{N - 1} f(z_n) \delta z_n,
+\]
+where
+\[
+ \Delta = \max_{n = 0, \cdots, N - 1} \delta t_n,
+\]
+and we note that $N \to \infty$ as $\Delta \to 0$.
+
+All this says is that the integral is what we expect it to be --- the limit of a sum.
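+For example, take $f(z) = z$ and let $\gamma(t) = (1 + i)t$ be the straight line from $0$ to $1 + i$. Then $\gamma'(t) = 1 + i$, and
+\[
+ \int_\gamma z\;\d z = \int_0^1 (1 + i)t \cdot (1 + i)\;\d t = (1 + i)^2 \int_0^1 t\;\d t = \frac{(1 + i)^2}{2} = i,
+\]
+which agrees with evaluating the antiderivative $\frac{z^2}{2}$ at the endpoints.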
+
+The result of a contour integral between two points in $\C$ may depend on the choice of contour.
+\begin{eg}
+ Consider
+ \[
+ I_1 = \int_{\gamma_1} \frac{\d z}{z},\quad I_2 = \int_{\gamma_2} \frac{\d z}{z},
+ \]
+ where the paths are given by
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \draw [->-=0.7, mred] (-2, 0) arc(180:0:2) node [pos=0.7, anchor = south west] {$\gamma_1$};
+ \draw [->-=0.7, mblue] (-2, 0) arc(180:360:2) node [pos=0.7, anchor = north west] {$\gamma_2$};
+
+ \draw (0, 0) -- (1.414, 1.414);
+ \draw (0.4, 0) arc(0:45:0.4) node [pos=0.8, right] {$\theta$};
+
+ \node [anchor = north west] at (2, 0) {$1$};
+ \node [anchor = north east] at (-2, 0) {$-1$};
+ \node [circ] at (0, 0) {};
+ \node [anchor = north east] {$0$};
+ \end{tikzpicture}
+ \end{center}
+ In both cases, we integrate from $z = -1$ to $+1$ around a unit circle: $\gamma_1$ above, $\gamma_2$ below the real axis. Substitute $z = e^{i\theta}$, $\d z = ie^{i\theta} \;\d \theta$. Then we get
+ \begin{align*}
+ I_1 &= \int_{\pi}^0 \frac{ie^{i\theta}\;\d \theta}{e^{i\theta}} = -i\pi\\
+ I_2 &= \int_{-\pi}^0 \frac{ie^{i\theta}\;\d \theta}{e^{i\theta}} = i\pi.
+ \end{align*}
+ So they can in fact differ.
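+
+ In particular, combining the two,
+ \[
+ \oint_{\gamma_2 - \gamma_1} \frac{\d z}{z} = I_2 - I_1 = 2\pi i,
+ \]
+ where $\gamma_2 - \gamma_1$ traverses the unit circle once anticlockwise. The fact that $\oint \frac{\d z}{z} = 2\pi i$ around the origin will reappear throughout the course.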
+\end{eg}
+
+\subsubsection*{Elementary properties of the integral}
+Contour integrals behave as we would expect them to.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item We write $\gamma_1 + \gamma_2$ for the path obtained by joining $\gamma_1$ and $\gamma_2$. We have
+ \[
+ \int_{\gamma_1 + \gamma_2} f(z)\;\d z = \int_{\gamma_1} f(z)\;\d z + \int_{\gamma_2} f(z)\;\d z.
+ \]
+ Compare this with the equivalent result on the real line:
+ \[
+ \int_a^c f(x)\;\d x = \int_a^b f(x)\;\d x + \int_b^c f(x)\;\d x.
+ \]
+ \item Recall $-\gamma$ is the path obtained from reversing $\gamma$. Then we have
+ \[
+ \int_{-\gamma} f(z)\;\d z = -\int_\gamma f(z)\;\d z.
+ \]
+ Compare this with the real result
+ \[
+ \int_a^b f(x)\;\d x = -\int_b^a f(x)\;\d x.
+ \]
+ \item If $\gamma$ is a contour from $a$ to $b$ in $\C$, then
+ \[
+ \int_\gamma f'(z)\;\d z = f(b) - f(a).
+ \]
+ This looks innocuous. This is just the fundamental theorem of calculus. However, there is some subtlety. This requires $f$ to be differentiable at every point on $\gamma$. In particular, it must not cross a branch cut. For example, our previous example had $\log z$ as the antiderivative of $\frac{1}{z}$. However, this does not imply the integrals along different paths are the same, since we need to pick different branches of $\log$ for different paths, and things become messy.
+
+ \item Integration by substitution and by parts work exactly as for integrals on the real line.
+ \item If $\gamma$ has length $L$ and $|f(z)|$ is bounded by $M$ on $\gamma$, then
+ \[
+ \left|\int_\gamma f(z)\;\d z\right| \leq L M.
+ \]
+ This is since
+ \[
+ \left|\int_\gamma f(z)\;\d z\right| \leq \int_\gamma |f(z)|\;|\d z| \leq M \int_\gamma |\d z| = ML.
+ \]
+ We will be using this result a lot later on.
+ \end{enumerate}
+\end{prop}
+We will not prove these. Again, if you like proofs, go to IB Complex Analysis.
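+As an example of a typical later use of the last estimate: if $\gamma$ is the semicircular arc $z = Re^{i\theta}$ for $0 \leq \theta \leq \pi$, with $R > 1$, then for $f(z) = \frac{1}{z^2 + 1}$ we have $|z^2 + 1| \geq R^2 - 1$ on $\gamma$, and $L = \pi R$, so
+\[
+ \left|\int_\gamma \frac{\d z}{z^2 + 1}\right| \leq \frac{\pi R}{R^2 - 1} \to 0 \text{ as } R \to \infty.
+\]
+Estimates like this will let us evaluate real integrals by closing contours in the complex plane.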
+
+\subsubsection*{Integrals on closed contours}
+If $\gamma$ is a closed contour, then it doesn't matter where we start from on $\gamma$; $\oint_\gamma f(z)\;\d z$ means the same thing in any case, so long as we go all the way round ($\oint$ denotes an integral around a closed contour).
+
+The usual direction of traversal is anticlockwise (the ``positive sense''). If we traverse $\gamma$ in a negative sense (clockwise), then we get negative the previous result. More technically, the positive sense is the direction that keeps the interior of the contour on the left. This ``more technical'' definition might seem pointless --- if you can't tell what anticlockwise is, then you probably can't tell which is the left. However, when we deal with more complicated structures in the future, it turns out it is easier to define what is ``on the left'' than ``anticlockwise''.
+
+\subsubsection*{Simply connected domain}
+\begin{defi}[Simply connected domain]
+ A domain $\mathcal{D}$ (an open subset of $\C$) is \emph{simply connected} if it is connected and every closed curve in $\mathcal{D}$ encloses only points which are also in $\mathcal{D}$.
+\end{defi}
+In other words, it does not have holes. For example, this is not simply-connected:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \draw [fill=white] (0, -0.2113) circle [radius=0.2];
+ \end{tikzpicture}
+\end{center}
+These ``holes'' need not be big holes like this, but can be just individual points at which a function under consideration is singular.
+
+\subsection{Cauchy's theorem}
+We now come to the highlight of the course --- Cauchy's theorem. Most of the things we do will be based upon this single important result.
+
+\begin{thm}[Cauchy's theorem]
+ If $f(z)$ is analytic in a simply-connected domain $\mathcal{D}$, then for every simple closed contour $\gamma$ in $\mathcal{D}$, we have
+ \[
+ \oint_\gamma f(z)\;\d z = 0.
+ \]
+\end{thm}
+This is quite a powerful statement, and will allow us to do a lot! On the other hand, this tells us functions that are analytic everywhere are not too interesting. Instead, we will later look at functions like $\frac{1}{z}$ that have singularities.
+
+\begin{proof}(non-examinable)
+ The proof of this remarkable theorem is simple (with a catch), and follows from the Cauchy-Riemann equations and Green's theorem. Recall that Green's theorem says
+ \[
+ \oint_{\partial S} (P \;\d x + Q\;\d y) = \iint_S \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)\;\d x\;\d y.
+ \]
+ Let $u, v$ be the real and imaginary parts of $f$. Then
+ \begin{align*}
+ \oint_\gamma f(z) \;\d z &= \oint_\gamma (u + iv) (\d x + i\;\d y)\\
+ &= \oint_\gamma (u\;\d x - v\;\d y) + i \oint_\gamma (v \;\d x + u \;\d y)\\
+ &= \iint_S \left(-\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right)\;\d x\;\d y + i\iint_S \left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right)\;\d x \;\d y.
+ \end{align*}
+ But both integrands vanish by the Cauchy-Riemann equations, since $f$ is differentiable throughout $S$. So the result follows.
+\end{proof}
+Actually, this proof requires $u$ and $v$ to have continuous partial derivatives in $S$, otherwise Green's theorem does not apply. We shall see later that in fact $f$ is differentiable infinitely many times, so $u$ and $v$ \emph{do} have continuous partial derivatives. However, our proof of that will utilize Cauchy's theorem! So we are trapped.
+
+Thus a completely different proof (and a very elegant one!) is required if we do not wish to make assumptions about $u$ and $v$. However, we shall not worry about this in this course since it is easy to verify that the functions we use do have continuous partial derivatives. And we are not doing Complex Analysis.
+
+\subsection{Contour deformation}
+One useful consequence of Cauchy's theorem is that we can freely deform contours along regions where $f$ is defined without changing the value of the integral.
+\begin{prop}
+ Suppose that $\gamma_1$ and $\gamma_2$ are contours from $a$ to $b$, and that $f$ is analytic on the contours \emph{and} between the contours. Then
+ \[
+ \int_{\gamma_1}f(z)\;\d z = \int_{\gamma_2} f(z)\;\d z.
+ \]
+\end{prop}
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [anchor = north east] {$a$};
+ \node [circ] at (2, 3) {};
+ \node [anchor = south west] at (2, 3) {$b$};
+
+ \draw [->-=0.5] plot [smooth, tension=1] coordinates {(0, 0) (0.5, 2.5) (2, 3)};
+ \draw [->-=0.5] plot [smooth, tension=1] coordinates {(0, 0) (1.2, 0.3) (2, 3)};
+
+ \node at (1.8, 0.9) {$\gamma_2$};
+ \node at (-0.1, 1.6) {$\gamma_1$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{proof}
+ Suppose first that $\gamma_1$ and $\gamma_2$ do not cross. Then $\gamma_1 - \gamma_2$ is a simple closed contour. So
+ \[
+ \oint_{\gamma_1 - \gamma_2} f(z)\;\d z = 0
+ \]
+ by Cauchy's theorem. Then the result follows.
+
+ If $\gamma_1$ and $\gamma_2$ \emph{do} cross, then dissect them at each crossing point, and apply the previous result to each section.
+\end{proof}
+So we conclude that if $f$ has no singularities, then $\int_a^b f(z)\;\d z$ does not depend on the chosen contour. This explains our earlier example: the two semicircular paths for $\int \frac{\d z}{z}$ gave different answers precisely because $\frac{1}{z}$ has a singularity at $z = 0$, between the two contours.
+
+This result of path independence, and indeed Cauchy's theorem itself, becomes less surprising if we think of $\int f(z)\;\d z$ as a path integral in $\R^2$, because
+\[
+ f(z)\;\d z = (u + iv)(\d x + i\;\d y) = (u + iv) \;\d x + (-v + iu)\;\d y
+\]
+is an exact differential, since
+\[
+ \pd{y} (u + iv) = \pd{x} (-v + iu)
+\]
+from the Cauchy-Riemann equations, since $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$ and $\frac{\partial v}{\partial y} = \frac{\partial u}{\partial x}$.
+
+The same idea of ``moving the contour'' applies to \emph{closed} contours. Suppose that $\gamma_1$ is a closed contour that can be continuously deformed to another one, $\gamma_2$, inside it; and suppose $f$ has no singularities in the region between them.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+ \draw [->] plot [smooth cycle] coordinates {(-2.4, -1.3) (0, -1.3) (1.4, -2) (2.6, -1.7) (2.8, 0.6) (0.6, 1.9) (-3.4, 0.8)};
+
+ \node [right] at (1.4, 0.4) {$\gamma_2$};
+ \node [right] at (2.8, 0.6) {$\gamma_1$};
+ \node at (0.1, 0) {$\times$};
+ \node at (-0.3, 0) {$\times$};
+ \node at (3, 2) {$\times$};
+ \end{tikzpicture}
+\end{center}
+We can instead consider the following contour $\gamma$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [-<-=0.4, -<-=0.8] plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+ \draw [->-=0.3, ->-=0.7, ->-=0.9] plot [smooth cycle] coordinates {(-2.4, -1.3) (0, -1.3) (1.4, -2) (2.6, -1.7) (2.8, 0.6) (0.6, 1.9) (-3.4, 0.8)};
+
+ \node [right] at (2.8, 0.6) {$\gamma$};
+ \node at (0.1, 0) {$\times$};
+ \node at (-0.3, 0) {$\times$};
+ \node at (3, 2) {$\times$};
+
+ \draw [white, very thick] (2.4, 1.07) -- (2.1, 1.3);
+ \draw [white, line width=0.5cm] (1.16, 0.65) -- (1.4, 0.4);
+ \draw [->-=0.7] (2.4, 1.07) -- (1.4, 0.4);
+ \draw [-<-=0.4] (2.1, 1.3) -- (1.16, 0.65);
+ \end{tikzpicture}
+\end{center}
+By Cauchy's theorem, we know $\oint_\gamma f(z)\;\d z = 0$ since $f(z)$ is analytic throughout the region enclosed by $\gamma$. Now we let the distance between the two ``cross-cuts'' tend to zero: those contributions cancel and, in the limit, we have
+\[
+ \oint_{\gamma_1 - \gamma_2} f(z)\;\d z = 0.
+\]
+Hence we know
+\[
+ \oint_{\gamma_1} f(z)\;\d z = \oint_{\gamma_2} f(z)\;\d z.
+\]
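As a quick numerical sanity check (not part of the course), we can verify contour deformation for closed contours directly: integrating $1/z$ around two quite different anticlockwise loops about its singularity gives the same answer. This is a minimal sketch assuming Python with numpy available; the contours chosen (a unit circle and an ellipse) are my own.

```python
import numpy as np

def closed_contour_integral(f, z, dz, n=4000):
    """Approximate the integral of f along the closed contour t -> z(t), t in [0, 2*pi)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.sum(f(z(t)) * dz(t)) * (2 * np.pi / n)

f = lambda z: 1 / z  # analytic everywhere except the singularity at z = 0

# Two different anticlockwise contours enclosing the singularity:
# the unit circle, and an ellipse with semi-axes 3 and 2.
I1 = closed_contour_integral(f, lambda t: np.exp(1j * t),
                             lambda t: 1j * np.exp(1j * t))
I2 = closed_contour_integral(f, lambda t: 3 * np.cos(t) + 2j * np.sin(t),
                             lambda t: -3 * np.sin(t) + 2j * np.cos(t))

print(I1, I2)  # both are numerically 2*pi*i
```

The trapezoidal rule converges extremely fast for smooth periodic integrands, so a few thousand points give essentially machine precision here.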
+
+\subsection{Cauchy's integral formula}
+\begin{thm}[Cauchy's integral formula]
+ Suppose that $f(z)$ is analytic in a domain $\mathcal{D}$ and that $z_0 \in \mathcal{D}$. Then
+ \[
+ f(z_0) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z - z_0}\;\d z
+ \]
+ for any simple closed contour $\gamma$ in $\mathcal{D}$ encircling $z_0$ anticlockwise.
+\end{thm}
+This result is going to be very important in a brief moment, for proving one thing. Afterwards, it will be mostly useless.
+
+\begin{proof}(non-examinable)
+ We let $\gamma_\varepsilon$ be a circle of radius $\varepsilon$ about $z_0$, within $\gamma$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [right] {$z_0$};
+ \draw [->] circle [radius=0.5];
+ \node [right] at (0.5, 0) {$\gamma_\varepsilon$};
+
+ \draw [->] plot [smooth cycle] coordinates {(-2.4, -1.3) (0, -1.3) (1.4, -2) (2.6, -1.7) (2.8, 0.6) (0.6, 1.9) (-3.4, 0.8)};
+ \node [right] at (2.8, 0.6) {$\gamma$};
+ \end{tikzpicture}
+ \end{center}
+ Since $\frac{f(z)}{z - z_0}$ is analytic except when $z = z_0$, we know
+ \[
+ \oint_\gamma \frac{f(z)}{z - z_0} \;\d z = \oint_{\gamma_\varepsilon} \frac{f(z)}{z - z_0}\;\d z.
+ \]
+ We now evaluate the right integral directly. Substituting $z = z_0 + \varepsilon e^{i\theta}$, we get
+ \begin{align*}
+ \oint_{\gamma_\varepsilon} \frac{f(z)}{z - z_0}\;\d z &= \int_0^{2\pi} \frac{f(z_0 + \varepsilon e^{i\theta})}{\varepsilon e^{i\theta}} i\varepsilon e^{i\theta} \;\d \theta\\
+ &= i\int_0^{2\pi} (f(z_0) + O(\varepsilon))\;\d \theta\\
+ &\rightarrow 2\pi i f(z_0)
+ \end{align*}
+ as we take the limit $\varepsilon \to 0$. The result then follows.
+\end{proof}
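Again purely as a sanity check (and certainly not examinable), we can evaluate the right-hand side of Cauchy's integral formula numerically and watch it reproduce $f(z_0)$. This sketch assumes numpy; the choice $f = \exp$ and the point $z_0$ are arbitrary.

```python
import numpy as np

f = np.exp          # an entire function
z0 = 0.3 + 0.2j     # a point strictly inside the contour

# gamma: the unit circle about the origin, traversed anticlockwise
n = 4000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = np.exp(1j * t)
dz = 1j * np.exp(1j * t)

rhs = np.sum(f(z) / (z - z0) * dz) * (2 * np.pi / n) / (2j * np.pi)
print(rhs, np.exp(z0))  # the two values agree
```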
+So, if we know $f$ on $\gamma$, then we know it at all points within $\gamma$. While this seems magical, it is less surprising if we look at it in another way. We can write $f = u + iv$, where $u$ and $v$ are harmonic functions, i.e.\ they satisfy Laplace's equation. Then if we know the values of $u$ and $v$ on $\gamma$, then what we essentially have is Laplace's equation with Dirichlet boundary conditions! Then the fact that this tells us everything about $f$ within the boundary is just the statement that Laplace's equation with Dirichlet boundary conditions has a unique solution!
+
+The difference between this and what we've got in IA Vector Calculus is that Cauchy's integral formula gives an explicit formula for the value of $f(z_0)$, while in IA Vector Calculus, we just know there is one solution, whatever that might be.
+
+Note that this does not hold if $z_0$ does not lie on or inside $\gamma$, since Cauchy's theorem just gives
+\[
+ \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z - z_0} \;\d z = 0.
+\]
+Now, we can differentiate Cauchy's integral formula with respect to $z_0$, and obtain
+\[
+ f'(z_0) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{(z - z_0)^2}\;\d z.
+\]
+We have just taken the differentiation inside the integral sign. This is valid since \st{it's Complex Methods and we don't care} the integrand, both before and after, is a continuous function of both $z$ and $z_0$.
+
+We see that the integrand is still differentiable. So we can differentiate it again, and obtain
+\[
+ f^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_{\gamma} \frac{f(z)}{(z - z_0)^{n + 1}}\;\d z.
+\]
+Hence at any point $z_0$ where $f$ is analytic, \emph{all} its derivatives exist, and we have just found a formula for them. So it is differentiable infinitely many times as advertised.
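The derivative formula can be checked the same way. For $f = \exp$ every derivative at $0$ equals $1$, so the contour integral below should return $1$ for any $n$. A minimal numpy sketch (the helper name is my own):

```python
import numpy as np
from math import factorial

def nth_derivative(f, z0, n, r=1.0, m=4000):
    """Estimate f^{(n)}(z0) from the contour-integral formula, on the circle |z - z0| = r."""
    t = np.linspace(0, 2 * np.pi, m, endpoint=False)
    z = z0 + r * np.exp(1j * t)
    dz = 1j * r * np.exp(1j * t)
    integral = np.sum(f(z) / (z - z0) ** (n + 1) * dz) * (2 * np.pi / m)
    return factorial(n) * integral / (2j * np.pi)

print(nth_derivative(np.exp, 0, 3))  # numerically 1, since exp is its own derivative
```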
+
+A classic example of Cauchy's integral formula is Liouville's theorem.
+\begin{thm}[Liouville's theorem*]
+ Any bounded entire function is a constant.
+\end{thm}
+
+\begin{proof}(non-examinable)
+ Suppose that $|f(z)| \leq M$ for all $z$, and consider a circle of radius $r$ centered at an arbitrary point $z_0 \in \C$. Then
+ \[
+ f'(z_0) = \frac{1}{2\pi i} \oint_{|z - z_0| = r} \frac{f(z)}{(z - z_0)^2}\;\d z.
+ \]
+ Hence we know
+ \[
+ |f'(z_0)| \leq \frac{1}{2\pi} \cdot 2\pi r \cdot \frac{M}{r^2} = \frac{M}{r} \to 0
+ \]
+ as $r \to \infty$. So $f'(z_0) = 0$ for all $z_0 \in \C$. So $f$ is constant.
+\end{proof}
+
+\section{Laurent series and singularities}
+\subsection{Taylor and Laurent series}
+If $f$ is analytic at $z_0$, then it has a Taylor series
+\[
+ f(z) = \sum_{n = 0}^\infty a_n (z - z_0)^n
+\]
+in a neighbourhood of $z_0$. We will prove this as a special case of the coming proposition. Exactly which neighbourhood it applies in depends on the function. Of course, we know the coefficients are given by
+\[
+ a_n = \frac{f^{(n)}(z_0)}{n!},
+\]
+but this doesn't matter. All the standard Taylor series from real analysis apply in $\C$ as well. For example,
+\[
+ e^z = \sum_{n = 0}^\infty \frac{z^n}{n!},
+\]
+and this converges for all $z$. Also, we have
+\[
+ (1 - z)^{-1} = \sum_{n = 0}^\infty z^n.
+\]
+This converges for $|z| < 1$.
+
+But if $f$ has a singularity at $z_0$, we cannot expect such a Taylor series, since it would imply $f$ is non-singular at $z_0$. However, it turns out we can get a series expansion if we allow ourselves to have negative powers of $z$.
+
+\begin{prop}[Laurent series]
+ If $f$ is analytic in an \emph{annulus} $R_1 < |z - z_0| < R_2$, then it has a \emph{Laurent series}
+ \[
+ f(z) = \sum_{n = -\infty}^\infty a_n (z - z_0)^n.
+ \]
+ This is convergent within the annulus. Moreover, the convergence is uniform within compact subsets of the annulus.
+\end{prop}
+
+\begin{proof}(non-examinable)
+ We wlog $z_0 = 0$. Given a $z$ in the annulus, we pick $r_1, r_2$ such that
+ \[
+ R_1 < r_1 < |z| < r_2 < R_2,
+ \]
+ and we let $\gamma_1$ and $\gamma_2$ be the contours $|z| = r_1$, $|z| = r_2$ traversed anticlockwise respectively. We choose $\gamma$ to be the contour shown in the diagram below.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [<-] circle [radius=1];
+ \draw [->] circle [radius=2];
+ \node [circ] {};
+ \node [right] {$z_0$};
+ \draw (0, 0) -- (-0.707, 0.707) node [pos=0.5, anchor = south west] {$r_1$};
+ \draw (0, 0) -- (-2, 0) node [pos=0.7, above] {$r_2$};
+
+ \node [circ] at (1.3, 0.8) {};
+ \node [right] at (1.3, 0.8) {$z$};
+
+ \draw [white, line width=0.5cm] (0.1, 1) -- (-0.1, 1);
+ \draw [white, line width=0.5cm] (0.1, 2) -- (-0.1, 2);
+ \draw [-<-=0.3] (0.1, 1) -- (0.1, 2);
+ \draw [->-=0.6] (-0.1, 1) -- (-0.1, 2);
+
+ \node [right] at (2, 0) {$\gamma$};
+ \end{tikzpicture}
+ \end{center}
+ We now apply Cauchy's integral formula (after a change of notation):
+ \[
+ f(z) = \frac{1}{2\pi i} \oint_\gamma \frac{f(\zeta)}{\zeta - z} \;\d \zeta.
+ \]
+ We let the distance between the cross-cuts tend to zero. Then we get
+ \[
+ f(z) = \frac{1}{2\pi i} \oint_{\gamma_2} \frac{f(\zeta)}{\zeta - z}\;\d \zeta - \frac{1}{2\pi i} \oint_{\gamma_1} \frac{f(\zeta)}{\zeta - z}\;\d \zeta.
+ \]
+ We have to subtract the second integral because it is traversed in the opposite direction. We do the integrals one by one. We have
+ \begin{align*}
+ \oint_{\gamma_1} \frac{f(\zeta)}{\zeta - z}\;\d \zeta &= -\frac{1}{z} \oint_{\gamma_1} \frac{f(\zeta)}{1 - \frac{\zeta}{z}}\;\d \zeta\\
+ \intertext{Taking the Taylor series of $\frac{1}{1 - \frac{\zeta}{z}}$, which is valid since $|\zeta| = r_1 < |z|$ on $\gamma_1$, we obtain}
+ &= -\frac{1}{z}\oint_{\gamma_1} f(\zeta) \sum_{m = 0}^\infty \left(\frac{\zeta}{z}\right)^m \;\d \zeta\\
+ &= -\sum_{m = 0}^\infty z^{-m - 1} \oint_{\gamma_1}f(\zeta) \zeta^m \;\d \zeta.
+ \end{align*}
+ This is valid since $\oint \sum = \sum \oint$ by uniform convergence. So we can deduce
+ \[
+ -\frac{1}{2\pi i} \oint_{\gamma_1} \frac{f(\zeta)}{\zeta - z}\;\d \zeta = \sum_{n = -\infty}^{-1} a_n z^n,
+ \]
+ where
+ \[
+ a_n = \frac{1}{2\pi i}\oint_{\gamma_1} f(\zeta) \zeta^{-n - 1} \;\d \zeta
+ \]
+ for $n < 0$.
+
+ Similarly, we obtain
+ \[
+ \frac{1}{2\pi i} \oint_{\gamma_2} \frac{f(\zeta)}{\zeta - z}\;\d \zeta = \sum_{n = 0}^\infty a_n z^n,
+ \]
+ for the same definition of $a_n$, except $n \geq 0$, by expanding
+ \[
+ \frac{1}{\zeta - z} = \frac{1}{\zeta} \frac{1}{1 - \frac{z}{\zeta}} = \sum_{n = 0}^\infty \frac{z^n}{\zeta^{n + 1}}.
+ \]
+ This is again valid since $|\zeta| = r_2 > |z|$ on $\gamma_2$. Putting these results together, we obtain the Laurent series. The general result then follows by translating the origin by $z_0$.
+
+ We will not prove uniform convergence --- go to IB Complex Analysis.
+\end{proof}
+It can be shown that Laurent series are unique. Then not only is there a unique Laurent series for each annulus, but if we pick two different annuli on which $f$ is analytic (assuming they overlap), they must have the same coefficients. This is because we can restrict the two series to the common intersection of the annuli, where uniqueness requires the coefficients to be the same.
+
+Note that the Taylor series is just a special case of Laurent series, and if $f$ is holomorphic at $z_0$, then our explicit formula for the coefficients plus Cauchy's theorem tells us we have $a_n = 0$ for $n < 0$.
+
+Note, however, that we needed the Taylor series of $\frac{1}{1 - z}$ in order to prove Taylor's theorem.
+
+\begin{eg}
+ Consider $\frac{e^z}{z^3}$. What is its Laurent series about $z_0 = 0$? We already have a Taylor series for $e^z$, and all we need to do is to divide it by $z^3$. So
+ \[
+ \frac{e^z}{z^3} = \sum_{n = 0}^\infty \frac{z^{n - 3}}{n!} = \sum_{n = -3}^{\infty} \frac{z^n}{(n + 3)!}.
+ \]
+\end{eg}
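If sympy is available, we can have it expand this Laurent series for us and confirm the coefficients $a_n = 1/(n+3)!$; this is just a sanity check, not part of the course.

```python
import sympy as sp

z = sp.symbols('z')

# Laurent expansion of e^z / z^3 about z = 0; series() handles the pole
s = sp.series(sp.exp(z) / z**3, z, 0, 3).removeO()
print(s)  # the terms 1/z**3, 1/z**2, 1/(2z), 1/6, z/24, z**2/120
```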
+
+\begin{eg}
+ What is the Laurent series for $e^{1/z}$ about $z_0 = 0$? Again, we just use the Taylor series for $e^z$. We have
+ \[
+ e^{\frac{1}{z}} = 1 + \frac{1}{z} + \frac{1}{2! z^2} + \frac{1}{3! z^3} + \cdots.
+ \]
+ So the coefficients are
+ \[
+ a_n = \frac{1}{(-n)!}\text{ for }n \leq 0.
+ \]
+\end{eg}
+
+\begin{eg}
+ This is a little bit more tricky. Consider
+ \[
+ f(z) = \frac{1}{z - a},
+ \]
+ where $a \in \C$. Then $f$ is analytic in $|z| < |a|$. So it has a Taylor series about $z_0 = 0$ given by
+ \[
+ \frac{1}{z - a} = -\frac{1}{a}\left(1 - \frac{z}{a}\right)^{-1} = -\sum_{n = 0}^\infty \frac{1}{a^{n + 1}} z^n.
+ \]
+ What about in $|z| > |a|$? This is an annulus, that goes from $|a|$ to infinity. So it has a Laurent series. We can find it by
+ \[
+ \frac{1}{z - a} = \frac{1}{z} \left(1 - \frac{a}{z}\right)^{-1} = \sum_{m = 0}^\infty \frac{a^m}{z^{m + 1}} = \sum_{n = -\infty}^{-1} a^{-n - 1} z^n.
+ \]
+\end{eg}
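Numerically, we can watch each expansion converge only in its own region: partial sums of the Taylor series match $1/(z - a)$ inside $|z| < |a|$, and partial sums of the Laurent series match it outside. A small numpy sketch with the concrete (and arbitrary) choice $a = 2$:

```python
import numpy as np

a = 2.0
f = lambda z: 1 / (z - a)

def taylor(z, N=80):
    """Partial sum of -sum z^n / a^(n+1): valid for |z| < |a|."""
    n = np.arange(N)
    return -np.sum(z**n / a**(n + 1))

def laurent(z, N=80):
    """Partial sum of sum a^m / z^(m+1): valid for |z| > |a|."""
    m = np.arange(N)
    return np.sum(a**m / z**(m + 1))

inside, outside = 0.5 + 0.5j, 4.0 - 1.0j
print(taylor(inside), f(inside))     # agree, since |inside| < 2
print(laurent(outside), f(outside))  # agree, since |outside| > 2
```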
+
+\begin{eg}
+ Now consider
+ \[
+ f(z) = \frac{e^z}{z^2 - 1}.
+ \]
+ This has a singularity at $z_0 = 1$ but is analytic in an annulus $0 < |z - z_0| < 2$ (the $2$ comes from the other singularity at $z =-1$). How do we find its Laurent series? This is a standard trick that turns out to be useful --- we write everything in terms of $\zeta = z - z_0$. So
+ \begin{align*}
+ f(z) &= \frac{e^\zeta e^{z_0}}{\zeta(\zeta + 2)} \\
+ &= \frac{e^{z_0}}{2\zeta} e^{\zeta} \left(1 + \frac{1}{2}\zeta\right)^{-1}\\
+ &= \frac{e}{2 \zeta} \left(1 + \zeta + \frac{1}{2!}\zeta^2 + \cdots\right)\left(1 - \frac{1}{2}\zeta + \cdots\right)\\
+ &= \frac{e}{2\zeta} \left(1 + \frac{1}{2}\zeta + \cdots\right)\\
+ &= \frac{e}{2}\left(\frac{1}{z - z_0} + \frac{1}{2} + \cdots\right).
+ \end{align*}
+ This is now a Laurent series, with $a_{-1} = \frac{1}{2}e$, $a_0 = \frac{1}{4}e$ etc.
+\end{eg}
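We can ask sympy to redo this expansion, writing $z = 1 + w$ so that the series is about $w = 0$, and confirm $a_{-1} = e/2$ and $a_0 = e/4$ (a sanity check, assuming sympy is available):

```python
import sympy as sp

z, w = sp.symbols('z w')
f = sp.exp(z) / (z**2 - 1)

# substitute z = 1 + w, then expand about w = 0 to get the Laurent series about z0 = 1
s = sp.series(f.subs(z, 1 + w), w, 0, 2).removeO()
print(s)  # E/(2*w) + E/4 + higher-order terms
```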
+
+\begin{eg}
+ This doesn't seem to work for $f(z) = z^{-1/2}$. The reason is that the required branch cut of $z^{-\frac{1}{2}}$ would pass through any annulus about $z = 0$. So we cannot find an annulus on which $f$ is analytic.
+\end{eg}
+
+The Laurent series converges at least within the annulus, since that's how we constructed it. So how large is its region of convergence really?
+
+It turns out the radius of convergence of a Laurent series is always the distance from $z_0$ to the closest singularity of $f(z)$, for we may always choose $R_2$ to be that distance, and obtain a Laurent series valid all the way to $R_2$ (not inclusive). This Laurent series must be the same series as we would have obtained with a smaller radius, because Laurent series are unique.
+
+While this sounds just like some technicalities, this is actually quite useful. When deriving a Laurent series, we can make any assumptions we like about $z - z_0$ being small. Then even if we have derived a Laurent series for a really small neighbourhood, we automatically know the series is valid up to the other point where $f$ is singular.
+
+\subsection{Zeros}
+Recall that for a polynomial $p(z)$, we can talk about the \emph{order} of its zero at $z = a$ by looking at the largest power of $(z - a)$ dividing $p$. \emph{A priori}, it is not clear how we can do this for general functions. However, given that everything is a Taylor series, we know how to do this for holomorphic functions.
+
+\begin{defi}[Zeros]
+ The \emph{zeros} of an analytic function $f(z)$ are the points $z_0$ where $f(z_0) = 0$. A zero is of \emph{order $N$} if in its Taylor series $\sum_{n = 0}^\infty a_n (z - z_0)^n$, the first non-zero coefficient is $a_N$.
+
+ Alternatively, it is of order $N$ if $0 = f(z_0) = f'(z_0) = \cdots = f^{(N - 1)}(z_0)$, but $f^{(N)}(z_0) \not= 0$.
+\end{defi}
+
+\begin{defi}[Simple zero]
+ A zero of order one is called a \emph{simple zero}.
+\end{defi}
+
+\begin{eg}
+ $z^3 + iz^2 + z + i = (z - i)(z + i)^2$ has a simple zero at $z = i$ and a zero of order $2$ at $z = -i$.
+\end{eg}
+
+\begin{eg}
+ $\sinh z$ has zeros where $\frac{1}{2}(e^z - e^{-z}) = 0$, i.e.\ $e^{2z} = 1$, i.e.\ $z = n \pi i$, where $n \in \Z$. The zeros are all simple, since $\cosh n\pi i = \cos n \pi \not= 0$.
+\end{eg}
+
+\begin{eg}
+ Since $\sinh z$ has a simple zero at $z = \pi i$, we know $\sinh^3 z$ has a zero of order $3$ there. This is since the first term of the Taylor series of $\sinh z$ about $z = \pi i$ has order $1$, and hence the first term of the Taylor series of $\sinh ^3 z$ has order $3$.
+
+ We can also find the Taylor series about $\pi i$ by writing $\zeta = z - \pi i$:
+ \begin{align*}
+ \sinh^3 z &= [\sinh(\zeta + \pi i)]^3\\
+ &= [-\sinh \zeta]^3\\
+ &= -\left(\zeta + \frac{1}{3!}\zeta^3 + \cdots \right)^3\\
+ &= - \zeta^3 - \frac{1}{2} \zeta^5 - \cdots\\
+ &= -(z - \pi i)^3 - \frac{1}{2} (z - \pi i)^5 - \cdots.
+ \end{align*}
+\end{eg}
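sympy reproduces this expansion too, assuming it is available, if we write $z = \pi i + w$ and expand about $w = 0$:

```python
import sympy as sp

w = sp.symbols('w')

# Taylor series of sinh^3 z about z = pi*i, via the substitution z = pi*i + w
s = sp.series(sp.sinh(sp.pi * sp.I + w)**3, w, 0, 6).removeO()
print(sp.expand(s))  # the terms -w**3 and -w**5/2
```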
+\subsection{Classification of singularities}
+The previous section was rather boring --- you've probably seen all of that before. It is just there as a buildup for our study of singularities. These are in some sense the ``opposites'' of zeros.
+
+\begin{defi}[Isolated singularity]
+ Suppose that $f$ has a singularity at $z = z_0$. If there is a neighbourhood of $z_0$ within which $f$ is analytic, except at $z_0$ itself, then $f$ has an \emph{isolated singularity} at $z_0$. If there is no such neighbourhood, then $f$ has an \emph{essential (non-isolated) singularity} at $z_0$.
+\end{defi}
+
+\begin{eg}
+ $\cosech z$ has isolated singularities at $z = n \pi i, n \in \Z$, since $\sinh$ has zeros at these points.
+\end{eg}
+
+\begin{eg}
+ $\cosech \frac{1}{z}$ has isolated singularities at $z = \frac{1}{n \pi i}$, with $n \not= 0$, and an essential non-isolated singularity at $z = 0$ (since there are other arbitrarily close singularities).
+\end{eg}
+
+\begin{eg}
+ $\cosech z$ also has an essential non-isolated singularity at $z = \infty$, since $\cosech \frac{1}{z}$ has an essential non-isolated singularity at $z = 0$.
+\end{eg}
+
+\begin{eg}
+ $\log z$ has a non-isolated singularity at $z = 0$, because it is not analytic at any point on the branch cut. This is normally referred to as a branch point singularity.
+\end{eg}
+
+If $f$ has an isolated singularity at $z_0$, we can find an annulus $0 < |z - z_0| < r$ within which $f$ is analytic, and it therefore has a Laurent series. This gives us a way to classify singularities:
+
+\begin{enumerate}
+ \item Check for a branch point singularity.
+ \item Check for an essential (non-isolated) singularity.
+ \item Otherwise, consider the coefficients of the Laurent series $\sum_{n = -\infty}^\infty a_n (z - z_0)^n$:
+ \begin{enumerate}
+ \item If $a_n = 0$ for all $n < 0$, then $f$ has a \emph{removable singularity} at $z_0$.
+ \item If there is an $N > 0$ such that $a_n = 0$ for all $n < -N$ but $a_{-N} \not= 0$, then $f$ has a \emph{pole} of order $N$ at $z_0$ (for $N = 1, 2, \cdots$, this is also called a \emph{simple pole}, \emph{double pole} etc.).
+ \item If there does not exist such an $N$, then $f$ has an \emph{essential isolated singularity}.
+ \end{enumerate}
+\end{enumerate}
+A removable singularity (one with Laurent series $a_0 + a_1 (z - z_0) + \cdots$) is so called because we can remove the singularity by redefining $f(z_0) = a_0 = \lim\limits_{z \to z_0}f(z)$; then $f$ will become analytic at $z_0$.
+
+Let's look at some examples. In fact, we have 10 examples here.
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\frac{1}{z - i}$ has a simple pole at $z = i$. This is since its Laurent series is, err, $\frac{1}{z - i}$.
+ \item $\frac{\cos z}{z}$ has a singularity at the origin. This has Laurent series
+ \[
+ \frac{\cos z}{z} = z^{-1} - \frac{1}{2}z + \frac{1}{24}z^3 - \cdots,
+ \]
+ and hence it has a simple pole.
+ \item Consider $\frac{z^2}{(z - 1)^3(z - i)^2}$. This has a double pole at $z = i$ and a triple pole at $z = 1$. To show formally that, for instance, there is a double pole at $z = i$, notice first that $\frac{z^2}{(z - 1)^3}$ is analytic at $z = i$. So it has a Taylor series, say,
+ \[
+ b_0 + b_1(z - i) + b_2(z - i)^2 + \cdots
+ \]
+ for some $b_n$. Moreover, since $\frac{z^2}{(z - 1)^3}$ is non-zero at $z = i$, we have $b_0 \not= 0$. Hence
+ \[
+ \frac{z^2}{(z - 1)^3 (z - i)^2} = \frac{b_0}{(z - i)^2} + \frac{b_1}{z - i} + b_2 + \cdots.
+ \]
+ So this has a double pole at $z = i$.
+ \item If $g(z)$ has zero of order $N$ at $z = z_0$, then $\frac{1}{g(z)}$ has a pole of order $N$ there, and vice versa. Hence $\cot z$ has a simple pole at the origin, because $\tan z$ has a simple zero there. To prove the general statement, write
+ \[
+ g(z) = (z - z_0)^N G(z)
+ \]
+ for some $G$ with $G(z_0) \not= 0$. Then $\frac{1}{G(z)}$ has a Taylor series about $z_0$, and then the result follows.
+ \item $z^2$ has a double pole at infinity, since $\frac{1}{\zeta^2}$ has a double pole at $\zeta = 0$.
+ \item $e^{1/z}$ has an essential isolated singularity at $z = 0$ because all the $a_n$'s are non-zero for $n \leq 0$.
+ \item $\sin \frac{1}{z}$ also has an essential isolated singularity at $z = 0$ because (using the standard Taylor series for $\sin$) there are non-zero $a_n$'s for infinitely many negative $n$.
+ \item $f(z) = \frac{e^z - 1}{z}$ has a removable singularity at $z = 0$, because its Laurent series is
+ \[
+ f(z) = 1 + \frac{1}{2!}z + \frac{1}{3!}z^2 + \cdots.
+ \]
+ By defining $f(0) = 1$, we would remove the singularity and obtain an entire function.
+ \item $f(z) = \frac{\sin z}{z}$ is not defined at $z = 0$, but has a removable singularity there; remove it by setting $f(0) = 1$.
+ \item A rational function $f(z) = \frac{P(z)}{Q(z)}$ (where $P, Q$ are polynomials) has a singularity at any point $z_0$ where $Q$ has a zero. Assuming $Q$ has a simple zero, if $P(z_0) = 0$ as well, then the singularity is removable by redefining $f(z_0) = \frac{P'(z_0)}{Q'(z_0)}$ (by L'H\^opital's rule).
+ \end{enumerate}
+\end{eg}
+
+Near an essential isolated singularity of a function $f(z)$, it can be shown that $f$ takes \emph{all} possible complex values (except at most one) in any neighbourhood, however small. For example, $e^{\frac{1}{z}}$ takes all values except zero. We will not prove this. Even in IB Complex Analysis.
+
+\subsection{Residues}
+So far, we've mostly been making lots of definitions. We haven't actually used them to do anything useful. We are almost there. It turns out we can easily evaluate integrals of analytic functions by looking at their Laurent series. Moreover, we don't need the \emph{whole} Laurent series. We just need one of the coefficients.
+
+\begin{defi}[Residue]
+ The \emph{residue} of a function $f$ at an isolated singularity $z_0$ is the coefficient $a_{-1}$ in its Laurent expansion about $z_0$. There is no standard notation, but we shall denote the residue by $\res\limits_{z = z_0}f(z)$.
+\end{defi}
+
+\begin{prop}
+ At a \emph{simple} pole, the residue is given by
+ \[
+ \res_{z = z_0}f(z) = \lim_{z \to z_0} (z - z_0) f(z).
+ \]
+\end{prop}
+
+\begin{proof}
+ We can simply expand the right hand side to obtain
+ \begin{align*}
+ \lim_{z \to z_0} (z - z_0)\left(\frac{a_{-1}}{z - z_0} + a_0 + a_1(z - z_0) + \cdots\right) &= \lim_{z \to z_0} (a_{-1} + a_0(z - z_0) +\cdots) \\
+ &= a_{-1},
+ \end{align*}
+ as required.
+\end{proof}
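As a check on this formula, sympy's `limit` (and its built-in `residue`) both give the residue of $\cot z$ at its simple pole at the origin; this is a sanity check assuming sympy, not part of the course.

```python
import sympy as sp

z = sp.symbols('z')
f = sp.cot(z)  # simple pole at z = 0, since tan has a simple zero there

# residue at a simple pole as lim (z - z0) f(z)
res = sp.limit(z * f, z, 0)
print(res)  # 1

# sympy's built-in residue computation agrees
print(sp.residue(f, z, 0))  # 1
```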
+
+How about for more complicated poles? More generally, at a pole of order $N$, the formula is a bit messier.
+\begin{prop}
+ At a pole of order $N$, the residue is given by
+ \[
+ \lim_{z \to z_0} \frac{1}{(N - 1)!} \frac{\d^{N - 1}}{\d z^{N - 1}} (z - z_0)^N f(z).
+ \]
+\end{prop}
+This can be proved in a similar manner (see example sheet 2).
+
+In practice, a variety of techniques can be used to evaluate residues --- no single technique is optimal for all situations.
+
+\begin{eg}
+ Consider $f(z) = \frac{e^z}{z^3}$. We can find the residue by directly computing the Laurent series about $z = 0$:
+ \[
+ \frac{e^z}{z^3} = z^{-3} + z^{-2} + \frac{1}{2}z^{-1} + \frac{1}{3!} + \cdots.
+ \]
+ Hence the residue is $\frac{1}{2}$.
+
+ Alternatively, we can use the fact that $f$ has a pole of order $3$ at $z = 0$. So we can use the formula to obtain
+ \[
+ \res_{z = 0} f(z) = \lim_{z \to 0} \frac{1}{2!} \frac{\d^2}{\d z^2} (z^3 f(z)) = \lim_{z \to 0} \frac{1}{2} \frac{\d^2}{\d z^2} e^z = \frac{1}{2}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Consider
+ \[
+ g(z) = \frac{e^z}{z^2 - 1}.
+ \]
+ This has a simple pole at $z = 1$. Recall we have found its Laurent series at $z = 1$ to be
+ \[
+ \frac{e^z}{z^2 - 1} = \frac{e}{2}\left(\frac{1}{z - 1} + \frac{1}{2} + \cdots\right).
+ \]
+ So the residue is $\frac{e}{2}$.
+
+ Alternatively, we can use our magic formula to obtain
+ \[
+ \res_{z = 1} g(z) = \lim_{z \to 1} \frac{(z - 1)e^z}{z^2 - 1} = \lim_{z \to 1}\frac{e^z}{z + 1} = \frac{e}{2}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Consider $h(z) = (z^8 - w^8)^{-1}$, for any complex constant $w$. We know this has $8$ simple poles at $z = w e^{n \pi i/4}$ for $n = 0, \cdots, 7$. What is the residue at $z = w$?
+
+ We can try to compute this directly by
+ \begin{align*}
+ \res_{z = w} h(z) &= \lim_{z \to w} \frac{z - w}{(z - w)(z - w e^{i \pi /4}) \cdots (z - w e^{7 \pi i/4})}\\
+ &= \frac{1}{(w - w e^{i \pi /4}) \cdots (w - w e^{7\pi i/4})}\\
+ &= \frac{1}{w^7} \frac{1}{(1 - e^{i\pi/4})\cdots (1 - e^{7 i \pi/4})}.
+ \end{align*}
+ Now we are quite stuck. We don't know what to do with this. We can think really hard about complex numbers and figure out what it should be, but this is difficult. What we should do is to apply L'H\^opital's rule and obtain
+ \[
+ \res_{z = w}h(z) = \lim_{z \to w} \frac{z - w}{z^8 - w^8} = \lim_{z \to w} \frac{1}{8z^7} = \frac{1}{8w^7}.
+ \]
+\end{eg}
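With a concrete value of $w$, sympy confirms the L'H\^opital computation. Here the arbitrary choice is $w = 2$, so the predicted residue is $1/(8 \cdot 2^7) = 1/1024$:

```python
import sympy as sp

z = sp.symbols('z')
w0 = 2  # a concrete choice of the constant w
h = 1 / (z**8 - w0**8)

# L'Hopital-style limit, and sympy's own residue computation
print(sp.limit((z - w0) * h, z, w0))  # 1/1024
print(sp.residue(h, z, w0))          # 1/1024
```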
+
+\begin{eg}
+ Consider the function $(\sinh \pi z)^{-1}$. This has a simple pole at $z = ni$ for all integers $n$ (because the zeros of $\sinh z$ are at $n\pi i$ and are simple). Again, we can compute this by finding the Laurent expansion. However, it turns out it is easier to use our magic formula together with L'H\^opital's rule. We have
+ \[
+ \lim_{z \to ni} \frac{z - ni}{\sinh \pi z} = \lim_{z \to ni} \frac{1}{\pi \cosh \pi z} = \frac{1}{\pi \cosh n\pi i} = \frac{1}{\pi \cos n\pi} = \frac{(-1)^n}{\pi}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Consider the function $(\sinh^3 z)^{-1}$. This time, we find the residue by looking at the Laurent series. We first look at $\sinh^3 z$. This has a zero of order $3$ at $z = \pi i$. Its Taylor series is
+ \[
+ \sinh^3 z = -(z - \pi i)^3 - \frac{1}{2}(z - \pi i)^5 - \cdots.
+ \]
+ Therefore
+ \begin{align*}
+ \frac{1}{\sinh^3 z} &= -(z - \pi i)^{-3} \left(1 + \frac{1}{2}(z - \pi i)^2 + \cdots\right)^{-1}\\
+ &= -(z - \pi i)^{-3}\left(1 - \frac{1}{2}(z - \pi i)^2 + \cdots\right)\\
+ &= -(z - \pi i)^{-3} + \frac{1}{2}(z - \pi i)^{-1} + \cdots
+ \end{align*}
+ Therefore the residue is $\frac{1}{2}$.
+\end{eg}
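The same coefficient drops out of a sympy expansion about $z = \pi i$ (set $z = \pi i + w$ and read off the coefficient of $w^{-1}$); a quick check, assuming sympy is available:

```python
import sympy as sp

w = sp.symbols('w')

# Laurent series of 1/sinh^3 z about z = pi*i, via the substitution z = pi*i + w
s = sp.series(1 / sp.sinh(sp.pi * sp.I + w)**3, w, 0, 1).removeO()
print(s)  # contains the terms -1/w**3 and 1/(2*w)

print(s.coeff(w, -1))  # 1/2, the residue
```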
+So we can compute residues. But this seems a bit odd --- why are we interested in the coefficient $a_{-1}$? Out of the doubly infinite set of coefficients, why $a_{-1}$?
+
+The purpose of this definition is to aid in evaluating integrals $\oint_\gamma f(z)\;\d z$, where $f$ is analytic within the anticlockwise simple closed contour $\gamma$, except for an isolated singularity $z_0$. We let $\gamma_r$ be a circle of radius $r$ centered on $z_0$, lying within $\gamma$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+ \node [circ] at (0, 0) {};
+ \node [right] at (0, 0) {$z_0$};
+ \draw [->] circle [radius=0.5];
+ \node [right] at (0.5, 0) {$\gamma_r$};
+ \end{tikzpicture}
+\end{center}
+Now $f$ has some Laurent series expansion $\sum_{n = -\infty}^\infty a_n (z - z_0)^n$ about $z_0$. Recall that we are allowed to deform contours along regions where $f$ is analytic. So
+\begin{align*}
+ \oint_\gamma f(z)\;\d z &= \oint_{\gamma_r} f(z)\;\d z\\
+ &= \oint_{\gamma_r} \sum_{n = -\infty}^\infty a_n (z - z_0)^n \;\d z\\
+ \intertext{By uniform convergence, we can swap the integral and sum to obtain}
+ &= \sum_{n = -\infty}^\infty a_n \oint_{\gamma_r} (z - z_0)^n \;\d z
+\end{align*}
+We know how to integrate around the circle:
+\begin{align*}
+ \oint_{\gamma_r}(z - z_0)^n \;\d z &= \int_0^{2\pi} r^n e^{in \theta} i r e^{i \theta}\;\d \theta\\
+ &= ir^{n + 1} \int_0^{2\pi} e^{i(n + 1)\theta}\;\d \theta\\
+ &=
+ \begin{cases}
+ 2 \pi i & n = -1\\
+ \frac{r^{n + 1}}{n + 1}\left[ e^{i(n + 1)\theta}\right]_0^{2\pi} = 0, &n \not= -1
+ \end{cases}.
+\end{align*}
+Hence we have
+\[
+ \oint_\gamma f(z)\;\d z = 2\pi i a_{-1} = 2 \pi i \res_{z = z_0}f(z).
+\]
+\begin{thm}
+ Let $\gamma$ be an anticlockwise simple closed contour, and let $f$ be analytic within $\gamma$ except for an isolated singularity $z_0$. Then
+ \[
+ \oint_\gamma f(z)\;\d z = 2\pi i a_{-1} = 2 \pi i \res_{z = z_0}f(z).
+ \]
+\end{thm}
+
+\section{The calculus of residues}
+Nowadays, we use ``calculus'' to mean differentiation and integration. However, historically, the word ``calculus'' just means doing calculations. The word ``calculus'' in the calculus of residues does not refer to differentiation and integration, even though in this case they \emph{are} related, but this is just a coincidence.
+
+\subsection{The residue theorem}
+We are now going to massively generalize the last result we had in the previous section. We are going to consider a function $f$ with \emph{many} singularities, and obtain an analogous formula.
+
+\begin{thm}[Residue theorem]
+ Suppose $f$ is analytic in a simply-connected region except at a finite number of isolated singularities $z_1, \cdots, z_n$, and that a simple closed contour $\gamma$ encircles the singularities anticlockwise. Then
+ \[
+ \oint_\gamma f(z)\;\d z = 2\pi i \sum_{k = 1}^n \res_{z = z_k} f(z).
+ \]
+\end{thm}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.3, ->-=0.7, ->-=0.9] plot [smooth cycle] coordinates {(-2.4, -1.3) (0, -1.3) (1.4, -2) (2.6, -1.7) (2.8, 0.6) (0.6, 1.9) (-3.4, 0.8)};
+ \node [circ] (z1) at (1.8, 0) {};
+ \node [right] at (z1) {$z_1$};
+ \node [circ] (z2) at (0, 1) {};
+ \node [right] at (z2) {$z_2$};
+ \node [circ] (z3) at (-2, -0.5) {};
+ \node [right] at (z3) {$z_3$};
+ \node at (3, 1) {$\gamma$};
+ \end{tikzpicture}
+\end{center}
+Note that we have already proved the case $n = 1$ in the previous section. To prove this, we just need a difficult drawing.
+
+\begin{proof}
+ Consider the following curve $\hat{\gamma}$, consisting of small clockwise circles $\gamma_1, \cdots, \gamma_n$ around each singularity; pairs of cross-cuts, which cancel in the limit as they approach each other; and the large outer curve (which is the same as $\gamma$ in the limit).
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.3, ->-=0.7, ->-=0.9] plot [smooth cycle] coordinates {(-2.4, -1.3) (0, -1.3) (1.4, -2) (2.6, -1.7) (2.8, 0.6) (0.6, 1.9) (-3.4, 0.8)};
+ \node [circ] (z1) at (1.8, 0) {};
+ \node [circ] (z2) at (0, 1) {};
+ \node [circ] (z3) at (-2, -0.5) {};
+
+ \node at (3, 1) {$\hat{\gamma}$};
+ \draw [-<-=0.5] (z1) circle [radius=0.5];
+ \draw [-<-=0.75](z2) circle [radius=0.5];
+ \draw [<-] (z3) circle [radius=0.5];
+
+ \draw [white, line width=0.5cm] (2.91, 0.1) -- (2.92, -0.1);
+ \draw [white, line width=0.5cm] (2.29, 0.1) -- (2.29, -0.1);
+ \draw (2.29, 0.1) -- (2.91, 0.1);
+ \draw (2.29, -0.1) -- (2.92, -0.1);
+
+ \draw [white, line width=0.5cm] (0.1, 1.49) -- (-0.1, 1.49);
+ \draw [white, line width=0.5cm] (0.1, 1.86) -- (-0.1, 1.84);
+ \draw (0.1, 1.49) -- (0.1, 1.86);
+ \draw (-0.1, 1.49) -- (-0.1, 1.84);
+
+ \draw [white, line width=0.5cm] (-3.05, -0.6) -- (-3.18, -0.4);
+ \draw [white, line width=0.5cm] (-2.49, -0.6) -- (-2.49, -0.4);
+ \draw (-2.49, -0.6) -- (-3.05, -0.6);
+ \draw (-2.49, -0.4) -- (-3.18, -0.4);
+
+ \node [right] at (z1) {$z_1$};
+ \node [right] at (z2) {$z_2$};
+ \node [right] at (z3) {$z_3$};
+ \end{tikzpicture}
+ \end{center}
+ Note that $\hat{\gamma}$ encircles no singularities. So $\oint_{\hat{\gamma}}f(z)\;\d z = 0$ by Cauchy's theorem. So in the limit when the cross cuts cancel, we have
+ \[
+ \oint_\gamma f(z)\;\d z + \sum_{k = 1}^n \oint_{\gamma_k} f(z)\;\d z = \oint_{\hat{\gamma}} f(z)\;\d z = 0.
+ \]
+ But from what we did in the previous section, we know
+ \[
+ \oint_{\gamma_k}f(z)\;\d z = -2\pi i \res_{z = z_k} f(z),
+ \]
+ since $\gamma_k$ encircles only one singularity, and we get a negative sign since $\gamma_k$ is a clockwise contour. Then the result follows.
+\end{proof}
+
+This is the key practical result of this course. We are going to be using this very extensively to compute integrals.
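Before diving into applications, here is one more numerical sanity check of the residue theorem itself (assuming numpy; the function is my own choice): $f(z) = z/((z-1)(z+2))$ has simple poles at $1$ and $-2$ with residues $1/3$ and $2/3$, so any anticlockwise contour enclosing both should give $2\pi i$.

```python
import numpy as np

f = lambda z: z / ((z - 1) * (z + 2))  # simple poles at z = 1 and z = -2

# residues 1/3 and 2/3 sum to 1, so the theorem predicts 2*pi*i
n = 8000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = 4 * np.exp(1j * t)      # a circle of radius 4 encloses both poles
dz = 4j * np.exp(1j * t)

I = np.sum(f(z) * dz) * (2 * np.pi / n)
print(I)  # numerically 2*pi*i
```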
+
+\subsection{Applications of the residue theorem}
+We now use the residue theorem to evaluate lots of real integrals.
+\begin{eg}
+ We shall evaluate
+ \[
+ I = \int_0^\infty \frac{\d x}{1 + x^2},
+ \]
+ which we can already do by trigonometric substitution. While it is silly to integrate this with the residue theorem here, since integrating directly is much easier, this technique is a much more general method, and can be used to integrate many other things. On the other hand, our standard tricks easily become useless when we change the integrand a bit, and we need to find a completely different method.
+
+ Consider
+ \[
+ \oint_\gamma \frac{\d z}{1 + z^2},
+ \]
+ where $\gamma$ is the contour shown: from $-R$ to $R$ along the real axis ($\gamma_0$), then returning to $-R$ via a semicircle of radius $R$ in the upper half plane ($\gamma_R$). This is known as ``closing in the upper-half plane''.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) node [pos=0.3, below] {$\gamma_0$} arc(0:180:2) node [pos=0.3, right] {$\gamma_R$};
+
+ \node [below] at (-2, 0) {$-R$};
+ \node [below] at (2, 0) {$R$};
+
+ \node at (0, 1) {$\times$};
+ \node [right] at (0, 1) {$i$};
+ \end{tikzpicture}
+ \end{center}
+ Now we have
+ \[
+ \frac{1}{1 + z^2} = \frac{1}{(z + i)(z - i)}.
+ \]
+ So the only singularity enclosed by $\gamma$ is a simple pole at $z = i$, where the residue is
+ \[
+ \lim_{z \to i} \frac{1}{z + i} = \frac{1}{2i}.
+ \]
+ Hence
+ \[
+ \int_{\gamma_0} \frac{\d z}{1 + z^2} + \int_{\gamma_R} \frac{\d z}{1 + z^2} = \int_\gamma \frac{\d z}{1 + z^2}= 2\pi i \cdot \frac{1}{2i} = \pi.
+ \]
+ Let's now look at the terms individually. We know
+ \[
+ \int_{\gamma_0} \frac{\d z}{1 + z^2} = \int_{-R}^R \frac{\d x}{1 + x^2} \to 2I
+ \]
+ as $R \to \infty$. Also,
+ \[
+ \int_{\gamma_R} \frac{\d z}{1 + z^2} \to 0
+ \]
+ as $R \to \infty$ (see below). So we obtain in the limit
+ \[
+ 2I + 0 = \pi.
+ \]
+ So
+ \[
+ I = \frac{\pi}{2}.
+ \]
+ Finally, we need to show that the integral about $\gamma_R$ vanishes as $R \to \infty$. This is usually a bit tricky. We can use a formal or informal argument. We first do it formally: by the triangle inequality, we know
+ \[
+ |1 + z^2| \geq |1 - |z|^2|.
+ \]
+ On $\gamma_R$, we know $|z| = R$. So for large $R$, we get
+ \[
+ |1 + z^2| \geq |1 - R^2| = R^2 - 1.
+ \]
+ Hence
+ \[
+ \frac{1}{|1 + z^2|} \leq \frac{1}{R^2 - 1}.
+ \]
+ Thus we can bound the integral by
+ \[
+ \left|\int_{\gamma_R} \frac{\d z}{1 + z^2}\right| \leq \pi R \cdot \frac{1}{R^2 - 1} \to 0
+ \]
+ as $R \to \infty$.
+
+ We can also do this informally, by writing
+ \[
+ \left|\int_{\gamma_R} \frac{\d z}{1 + z^2}\right| \leq \pi R \sup_{z \in \gamma_R} \left|\frac{1}{1 + z^2}\right| = \pi R \cdot O(R^{-2}) = O(R^{-1}) \to 0.
+ \]
+\end{eg}
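+As a quick numerical sanity check (purely illustrative, not part of the notes), we can confirm the value $\pi/2$ by mapping $[0, \infty)$ to $[0, 1)$ via $x = u/(1-u)$ and applying Simpson's rule:

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def g(u):
    # Integrand of 1/(1 + x^2) after the substitution x = u/(1-u);
    # since 1/(1 + x^2) ~ 1/x^2 at infinity, the limit at u = 1 is 1.
    if u >= 1.0:
        return 1.0
    x = u / (1.0 - u)
    return 1.0 / (1.0 + x * x) / (1.0 - u) ** 2

I = simpson(g, 0.0, 1.0, 2000)
```

+The substitution keeps the transformed integrand bounded and smooth on $[0, 1]$, so a fixed-step rule converges quickly.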
+This example is not in itself impressive, but the method adapts easily to more difficult integrals. For example, the same argument would allow us to integrate $\frac{1}{1 + x^8}$ with ease.
+
+Note that we could have ``closed in the lower half-plane'' instead.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 1);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) arc(0:-180:2);
+
+ \node [above] at (-2, 0) {$-R$};
+ \node [above] at (2, 0) {$R$};
+
+ \node at (0, -1) {$\times$};
+ \node [left] at (0, -1) {$-i$};
+ \end{tikzpicture}
+\end{center}
+Most of the argument would be unchanged; the residue would now be
+\[
+ \res_{z = -i} \frac{1}{1 + z^2} = -\frac{1}{2i},
+\]
+but the contour is now traversed clockwise. So we collect another minus sign, and obtain the same result.
+
+Let's do more examples.
+\begin{eg}
+ To find the integral
+ \[
+ I = \int_0^\infty \frac{\d x}{(x^2 + a^2)^2},
+ \]
+ where $a > 0$ is a real constant, consider the contour integral
+ \[
+ \int_{\gamma} \frac{\d z}{(z^2 + a^2)^2},
+ \]
+ where $\gamma$ is exactly as above. The only singularity within $\gamma$ is a pole of order $2$ at $z = ia$, at which the residue is
+ \begin{align*}
+ \lim_{z \to ia} \frac{\d}{\d z} \frac{1}{(z + ia)^2} &= \lim_{z \to ia} \frac{-2}{(z + ia)^3} \\
+ &= \frac{-2}{-8ia^3} \\
+ &=-\frac{1}{4}ia^{-3}.
+ \end{align*}
+ We also have to do the integral around $\gamma_R$. This still vanishes as $R \to \infty$, since
+ \[
+ \left|\int_{\gamma_R} \frac{\d z}{(z^2 + a^2)^2}\right| \leq \pi R \cdot O(R^{-4}) = O(R^{-3}) \to 0.
+ \]
+ Therefore
+ \[
+ 2I = 2\pi i \left(-\frac{1}{4}ia^{-3}\right).
+ \]
+ So
+ \[
+ I = \frac{\pi}{4a^3}.
+ \]
+\end{eg}
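+The same numerical check works here (an illustrative aside; the value $a = 2$ is an arbitrary choice): mapping $[0, \infty)$ to $[0, 1)$ via $x = u/(1-u)$ and integrating with Simpson's rule should recover $\pi/(4a^3)$.

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

a = 2.0  # arbitrary test value, a > 0

def g(u):
    # 1/(x^2 + a^2)^2 with x = u/(1-u); the integrand decays like x^{-4},
    # so the transformed integrand tends to 0 as u -> 1.
    if u >= 1.0:
        return 0.0
    x = u / (1.0 - u)
    return 1.0 / ((x * x + a * a) ** 2 * (1.0 - u) ** 2)

I = simpson(g, 0.0, 1.0, 2000)
target = math.pi / (4 * a ** 3)
```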
+
+\begin{eg}
+ Consider
+ \[
+ I = \int_0^\infty \frac{\d x}{1 + x^4}.
+ \]
+ We use the same contour again. There are simple poles of $\frac{1}{1 + z^4}$ at
+ \[
+ e^{\pi i/4}, e^{3\pi i/4}, e^{-\pi i/4}, e^{-3\pi i/4},
+ \]
+ but only the first two are enclosed by the contour.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) node [pos=0.3, below] {$\gamma_0$} arc(0:180:2) node [pos=0.3, right] {$\gamma_R$};
+
+ \node [below] at (-2, 0) {$-R$};
+ \node [below] at (2, 0) {$R$};
+
+ \node at (1, 1) {$\times$};
+ \node at (-1, 1) {$\times$};
+ \end{tikzpicture}
+ \end{center}
+ The residues at these two poles are $-\frac{1}{4}e^{\pi i/4}$ and $+\frac{1}{4} e^{-\pi i/4}$ respectively. Hence
+ \[
+ 2 I = 2\pi i \left(-\frac{1}{4}e^{\pi i/4} + \frac{1}{4} e^{-\pi i/4}\right).
+ \]
+ Working out some algebra, we find
+ \[
+ I = \frac{\pi}{2\sqrt{2}}.
+ \]
+\end{eg}
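+Again, the algebra is easy to verify numerically (an illustrative aside, using the same $x = u/(1-u)$ mapping as before):

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def g(u):
    # 1/(1 + x^4) with x = u/(1-u); since the integrand decays like x^{-4},
    # the transformed integrand tends to 0 as u -> 1.
    if u >= 1.0:
        return 0.0
    x = u / (1.0 - u)
    return 1.0 / (1.0 + x ** 4) / (1.0 - u) ** 2

I = simpson(g, 0.0, 1.0, 2000)
target = math.pi / (2 * math.sqrt(2))
```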
+
+\begin{eg}
+ There is another way of doing the previous integral. Instead, we use this quarter-circle as shown:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.2, ->-=0.5, ->-=0.9] (0, 0) -- (2, 0) node [pos=0.5, below] {$\gamma_0$} arc(0:90:2) node [pos=0.5, right] {$\gamma_1$} -- (0, 0) node [pos=0.5, left] {$\gamma_2$};
+
+ \node [left] at (0, 2) {$iR$};
+ \node [below] at (2, 0) {$R$};
+
+ \node at (1, 1) {$\times$};
+ \end{tikzpicture}
+ \end{center}
+ In words, $\gamma$ consists of the real axis from $0$ to $R$ ($\gamma_0$); the arc of the circle from $R$ to $iR$ ($\gamma_1$); and the imaginary axis from $iR$ to $0$ ($\gamma_2$).
+
+ Now
+ \[
+ \int_{\gamma_0}\frac{\d z}{1 + z^4} \to I\text{ as }R \to \infty,
+ \]
+ and along $\gamma_2$ we substitute $z = iy$ to obtain
+ \[
+ \int_{\gamma_2} \frac{\d z}{1 + z^4} = \int_R^0 \frac{i\;\d y}{1 + y^4} \to -i I\text{ as }R \to \infty.
+ \]
+ Finally, the integral over $\gamma_1$ vanishes as before. We enclose only one pole, which makes the calculation a bit easier than what we did last time. In the limit, we get
+ \[
+ I - iI = 2\pi i \left(-\frac{1}{4} e^{\pi i/4}\right),
+ \]
+ and we again obtain
+ \[
+ I = \frac{\pi}{2 \sqrt{2}}.
+ \]
+\end{eg}
+
+\begin{eg}
+ We now look at trigonometric integrals of the form
+ \[
+ \int_0^{2\pi} f(\sin \theta, \cos \theta)\;\d \theta.
+ \]
+ We substitute
+ \[
+ z = e^{i\theta},\quad \cos \theta = \frac{1}{2}(z + z^{-1}),\quad \sin \theta = \frac{1}{2i}(z - z^{-1}).
+ \]
+ We then end up with a closed contour integral.
+
+ For example, consider the integral
+ \[
+ I = \int_0^{2\pi}\frac{\d \theta}{a + \cos \theta},
+ \]
+ where $a > 1$. We substitute $z = e^{i\theta}$, so that $\d z = iz\;\d \theta$ and $\cos \theta = \frac{1}{2}(z + z^{-1})$. As $\theta$ increases from $0$ to $2\pi$, $z$ moves round the circle $\gamma$ of radius $1$ in the complex plane. Hence
+ \[
+ I = \oint_{\gamma} \frac{(iz)^{-1}\;\d z}{a + \frac{1}{2}(z + z^{-1})} = -2i \oint_{\gamma} \frac{\d z}{z^2 + 2az + 1}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [mred, thick, ->-=0.2] circle [radius=1.8];
+ \node [right] at (1.2726, 1.2726) {$\gamma$};
+
+ \node (z-) at (-2.5, 0) {$\times$};
+ \node [below] at (z-) {$z_-$};
+
+ \node (z+) at (-1.2, 0) {$\times$};
+ \node [below] at (z+) {$z_+$};
+ \end{tikzpicture}
+ \end{center}
+ We now solve the quadratic to obtain the poles, which happen to be
+ \[
+ z_{\pm} = -a \pm \sqrt{a^2 - 1}.
+ \]
+ With some careful thinking, we realize $z_+$ is inside the circle, while $z_-$ is outside.
+
+ To find the residue, we notice the integrand is equal to
+ \[
+ \frac{1}{(z - z_+)(z - z_-)}.
+ \]
+ So the residue at $z = z_+$ is
+ \[
+ \frac{1}{z_+ - z_-} = \frac{1}{2\sqrt{a^2 - 1}}.
+ \]
+ Hence
+ \[
+ I = -2i\left(\frac{2\pi i}{2\sqrt{a^2 - 1}}\right) = \frac{2\pi}{\sqrt{a^2 - 1}}.
+ \]
+\end{eg}
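+Since this integral is over a finite interval, it is especially easy to check numerically (an illustrative aside; $a = 2$ is an arbitrary choice, giving $2\pi/\sqrt{3}$):

```python
import math

a = 2.0  # arbitrary test value, a > 1
n = 4000
h = 2 * math.pi / n

# Composite Simpson's rule for int_0^{2 pi} dtheta / (a + cos(theta)).
s = 1.0 / (a + math.cos(0.0)) + 1.0 / (a + math.cos(2 * math.pi))
for i in range(1, n):
    s += (4 if i % 2 else 2) / (a + math.cos(i * h))
I = s * h / 3

target = 2 * math.pi / math.sqrt(a * a - 1)
```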
+
+\begin{eg}
+ Integrating a function with a branch cut is a bit more tricky, and requires a ``keyhole contour''. Suppose we want to integrate
+ \[
+ I = \int_0^\infty \frac{x^\alpha}{1 + \sqrt{2}x + x^2}\;\d x,
+ \]
+ with $-1 < \alpha < 1$ so that the integral converges. We need a branch cut for $z^{\alpha}$. We take our branch cut to be along the positive real axis, and define
+ \[
+ z^\alpha = r^\alpha e^{i\alpha \theta},
+ \]
+ where $z = re^{i\theta}$ and $0 \leq \theta < 2\pi$. We use the following keyhole contour:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [thick,decorate, decoration=zigzag] (0, 0) -- (3, 0);
+
+ \draw [mred, thick, ->-=0.1, ->-=0.45, ->-=0.75, ->-=0.84, ->-=0.95] (1.977, 0.3) arc(8.63:351.37:2) node [pos=0.15, anchor = south west] {$C_R$} -- (0.4, -0.3) arc(323.13:36.87:0.5) node [pos=0.7, left] {$C_\varepsilon$} -- cycle;
+
+ \node [circ] at (0, 0) {};
+
+ \node at (-1, -1) {$\times$};
+ \node [right] at (-1, -1) {$e^{5\pi i/4}$};
+ \node at (-1, 1) {$\times$};
+ \node [right] at (-1, 1) {$e^{3\pi i/4}$};
+ \end{tikzpicture}
+ \end{center}
+ This consists of a large circle $C_R$ of radius $R$, a small circle $C_\varepsilon$ of radius $\varepsilon$, and the two lines just above and below the branch cut. The idea is to simultaneously take the limit $\varepsilon \to 0$ and $R \to \infty$.
+
+ We have four integrals to work out. The first is
+ \[
+ \int_{C_R} \frac{z^\alpha}{1 + \sqrt{2}z + z^2} \;\d z= O(R^{\alpha - 2}) \cdot 2\pi R = O(R^{\alpha - 1}) \to 0
+ \]
+ as $R \to \infty$. To obtain the contribution from $C_\varepsilon$, we substitute $z = \varepsilon e^{i\theta}$, and obtain
+ \[
+ \int_{2\pi}^0 \frac{\varepsilon^\alpha e^{i\alpha\theta}}{1 + \sqrt{2} \varepsilon e^{i\theta} + \varepsilon^2 e^{2i \theta}} i \varepsilon e^{i\theta}\;\d \theta = O(\varepsilon^{\alpha + 1}) \to 0.
+ \]
+ Finally, we look at the integrals above and below the branch cut. The contribution from just above the branch cut is
+ \[
+ \int_\varepsilon^R \frac{x^{\alpha}}{1 + \sqrt{2}x + x^2}\;\d x \to I.
+ \]
+ Similarly, the integral below is
+ \[
+ \int_R^\varepsilon \frac{x^\alpha e^{2\alpha \pi i}}{1 + \sqrt{2}x + x^2}\;\d x \to -e^{2\alpha \pi i}I.
+ \]
+ So we get
+ \[
+ \oint_\gamma \frac{z^\alpha}{1 + \sqrt{2} z + z^2}\;\d z \to (1 - e^{2\alpha \pi i})I.
+ \]
+ All that remains is to compute the residues. We write the integrand as
+ \[
+ \frac{z^\alpha}{(z - e^{3\pi i/4})(z - e^{5\pi i/4})}.
+ \]
+ So the poles are at $z_0 = e^{3\pi i/4}$ and $z_1 = e^{5\pi i/4}$. The residues are
+ \[
+ \frac{e^{3\alpha \pi i/4}}{\sqrt{2} i},\quad \frac{e^{5\alpha\pi i/4}}{-\sqrt{2}i}
+ \]
+ respectively. Hence we know
+ \[
+ (1 - e^{2\alpha \pi i})I = 2\pi i\left(\frac{e^{3\alpha \pi i/4}}{\sqrt{2} i} + \frac{e^{5\alpha \pi i/4}}{-\sqrt{2} i}\right).
+ \]
+ In other words, we get
+ \[
+ e^{\alpha \pi i}(e^{-\alpha \pi i} - e^{\alpha \pi i})I = \sqrt{2} \pi e^{\alpha \pi i} (e^{-\alpha \pi i/4} - e^{\alpha \pi i/4}).
+ \]
+ Thus we have
+ \[
+ I = \sqrt{2} \pi \frac{\sin(\alpha \pi/4)}{\sin (\alpha\pi)}.
+ \]
+ Note that we labeled the poles as $e^{3\pi i/4}$ and $e^{5 \pi i/4}$. The second is the same point as $e^{-3\pi i/4}$, but it would be wrong to label it like that. We decided at the beginning to pick the branch such that $0 \leq \theta < 2\pi$, and $-3\pi/4$ is \emph{not} in that range. If we wrote it as $e^{-3\pi i/4}$ instead, we might have got the residue as $e^{-3 \alpha \pi i/4}/(-\sqrt{2}i)$, which is not the same as $e^{5 \alpha \pi i/4}/(-\sqrt{2}i)$.
+\end{eg}
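+One way to check this numerically, say for $\alpha = 1/2$ (an illustrative aside, not part of the notes): splitting at $x = 1$, substituting $x \mapsto 1/x$ on $[1, \infty)$, and then setting $x = s^2$ turns the integral into $\int_0^1 \frac{2(s^2 + 1)}{s^4 + \sqrt{2} s^2 + 1}\;\d s$, which is smooth on $[0, 1]$:

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# For alpha = 1/2, the substitutions x -> 1/x on [1, infinity) and x = s^2
# reduce the branch-cut integral to a smooth integral over [0, 1].
g = lambda s: 2 * (s * s + 1) / (s ** 4 + math.sqrt(2) * s * s + 1)
I = simpson(g, 0.0, 1.0, 2000)

alpha = 0.5
target = (math.sqrt(2) * math.pi * math.sin(alpha * math.pi / 4)
          / math.sin(alpha * math.pi))
```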
+Note that the need for a branch cut does \emph{not} mean we must use a keyhole contour. Sometimes, we can get away with the branch cut and contour chosen as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [thick, decorate, decoration=zigzag] (0, 0) -- (0, -3);
+
+ \node [circ] at (0, 0) {};
+
+ \draw [mred, thick, ->-=0.1, ->-=0.4, ->-=0.7] (-2, 0) node [below] {$-R$} -- (-0.4, 0) node [below] {$\varepsilon$} arc(180:0:0.4) node [below] {$\varepsilon$} -- (2, 0) node [below] {$R$} arc(0:180:2);
+
+ \node at (0, 1) {$\times$};
+ \node [right] at (0, 1) {$i$};
+ \end{tikzpicture}
+\end{center}
+
+\subsection{Further applications of the residue theorem using rectangular contours}
+\begin{eg}
+ We want to calculate
+ \[
+ I = \int_{-\infty}^\infty \frac{e^{\alpha x}}{\cosh x}\;\d x,
+ \]
+ where $\alpha \in (-1, 1)$. We notice this has singularities at $z = \left(n + \frac{1}{2}\right)\pi i$ for all integers $n$. So if we used our good old semi-circular contours, then we would run into infinitely many singularities as $R \to \infty$.
+
+ Instead, we abuse the periodic nature of $\cosh$, and consider the following rectangular contour:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.78] (-2, 0) node [below] {$-R$} -- (2, 0) node [pos=0.7, below] {$\gamma_0$} node [below] {$R$} -- (2, 1.5) node [pos=0.5, right] {$\gamma_R^+$} -- (-2, 1.5) node [pos=0.3, above] {$\gamma_1$} -- (-2, 0) node [pos=0.5, left] {$\gamma_R^-$};
+
+ \node at (0, 0.75) {$\times$};
+ \node [left] at (0, 0.75) {$\frac{\pi i}{2}$};
+
+ \node [circ] at (0, 1.5) {};
+ \node [anchor = south east] at (0, 1.5) {$\pi i$};
+ \end{tikzpicture}
+ \end{center}
+ We see that
+ \[
+ \int_{\gamma_0} \frac{e^{\alpha z}}{\cosh z} \;\d z\to I \text{ as }R \to \infty.
+ \]
+ Also,
+ \begin{align*}
+ \int_{\gamma_1} \frac{e^{\alpha z}}{\cosh z} \;\d z &= \int_R^{-R} \frac{e^{\alpha(x + \pi i)}}{\cosh(x + \pi i)}\;\d x\\
+ &= e^{\alpha \pi i} \int_R^{-R} \frac{e^{\alpha x}}{-\cosh x}\;\d x\\
+ &\to e^{\alpha \pi i}I.
+ \end{align*}
+ On $\gamma_R^+$, we have
+ \[
+ \cosh z = \cosh(R + iy) = \cosh R \cos y + i\sinh R \sin y.
+ \]
+ So
+ \[
+ |\cosh z| = \sqrt{\cosh^2 R \cos^2 y + \sinh^2 R \sin^2 y}.
+ \]
+ We now use the formula $\cosh^2 R = 1 + \sinh^2 R$, and magically obtain
+ \[
+ |\cosh z| = \sqrt{\cos^2 y + \sinh^2 R} \geq \sinh R.
+ \]
+ Hence
+ \[
+ \left|\frac{e^{\alpha z}}{\cosh z}\right| \leq \frac{|e^{\alpha R}e^{\alpha i y}|}{\sinh R} = \frac{e^{\alpha R}}{\sinh R} = O(e^{(\alpha - 1)R}) \to 0.
+ \]
+ Hence as $R \to \infty$,
+ \[
+ \int_{\gamma_R^+} \frac{e^{\alpha z}}{\cosh z} \;\d z\to 0.
+ \]
+ Similarly, the integral along $\gamma_R^-$ vanishes.
+
+ Finally, we need to find the residue. The only singularity inside the contour is at $\frac{\pi i}{2}$, where the residue is
+ \[
+ \frac{e^{\alpha \pi i/2}}{\sinh \frac{\pi i}{2}} = -ie^{\alpha \pi i/2}.
+ \]
+ Hence we get
+ \[
+ I ( 1 + e^{\alpha \pi i}) = 2\pi i (-i e^{\alpha \pi i/2}) = 2\pi e^{\alpha \pi i/2}.
+ \]
+ So we find
+ \[
+ I = \frac{\pi}{\cos(\alpha \pi/2)}.
+ \]
+\end{eg}
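+Since the integrand decays like $e^{-(1 - |\alpha|)|x|}$, truncating the range makes this easy to verify numerically (an illustrative aside; $\alpha = 1/2$ is an arbitrary choice):

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

alpha = 0.5  # arbitrary test value in (-1, 1)

# Truncating at |x| = 50 loses only O(e^{-25}), since the integrand
# decays like e^{-(1 - |alpha|)|x|}.
f = lambda x: math.exp(alpha * x) / math.cosh(x)
I = simpson(f, -50.0, 50.0, 8000)
target = math.pi / math.cos(alpha * math.pi / 2)
```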
+These rectangular contours tend to be useful for trigonometric and hyperbolic functions.
+
+\begin{eg}
+ Consider the integral
+ \[
+ \int_\gamma \frac{\cot \pi z}{z^2}\;\d z,
+ \]
+ where $\gamma$ is the square contour shown, with corners at $\left(N + \frac{1}{2}\right)(\pm 1 \pm i)$ for a large integer $N$, chosen so that $\gamma$ avoids the singularities:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \foreach \x in {-2.5,-2,...,2.5} {
+ \node at (\x, 0) {$\times$};
+ }
+
+ \draw [mred, thick] (1.75, 1.75) rectangle (-1.75, -1.75);
+
+ \node [anchor = south west] at (0, 1.75) {$(N + \frac{1}{2})i$};
+ \node [anchor = north west] at (0, -1.75) {$-(N + \frac{1}{2})i$};
+ \node [anchor = south west] at (1.75, 0) {$N + \frac{1}{2}$};
+ \node [anchor = south east] at (-1.75, 0) {$-(N + \frac{1}{2})$};
+
+ \node [circ] at (0, 1.75) {};
+ \node [circ] at (0, -1.75) {};
+ \node [circ] at (1.75, 0) {};
+ \node [circ] at (-1.75, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ There are simple poles at $z = n\in \Z\setminus \{0\}$, with residues $\frac{1}{n^2 \pi}$, and a triple pole at $z = 0$ with residue $-\frac{1}{3}\pi$ (from the Taylor series for $z^2 \tan \pi z$). It turns out the integrals along the sides all vanish as $N \to \infty$ (see later). So we know
+ \[
+ 2\pi i\left(\frac{2}{\pi} \sum_{n = 1}^N \frac{1}{n^2} - \frac{\pi}{3}\right) \to 0
+ \]
+ as $N \to \infty$. In other words,
+ \[
+ \sum_{n = 1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}.
+ \]
+ This is probably the most complicated and most inefficient way of computing this series. However, notice we can easily modify this to find the sum of $\frac{1}{n^4}$, or $\frac{1}{n^6}$, or any other even power we can think of.
+
+ Hence all that remains is to show that the integrals along the sides vanish. On the right-hand side, we can write $z = N + \frac{1}{2} + iy$. Then
+ \[
+ |\cot \pi z| = \left|\cot\left( \left(N + \frac{1}{2}\right)\pi + i \pi y\right)\right| = |-\tan i \pi y| = |\tanh \pi y| \leq 1.
+ \]
+ So $\cot \pi z$ is bounded on the vertical side. Since we are integrating $\frac{\cot \pi z}{z^2}$, the integral vanishes as $N \to \infty$.
+
+ Along the top, we get $z = x + \left(N + \frac{1}{2}\right)i$. This gives
+ \[
+ |\cot \pi z| = \frac{\sqrt{\cosh^2 \left(N + \frac{1}{2}\right) \pi - \sin^2 \pi x}}{\sqrt{\sinh^2 \left(N + \frac{1}{2}\right)\pi + \sin^2 \pi x}} \leq \coth\left(N + \frac{1}{2}\right) \pi \leq \coth \frac{\pi}{2}.
+ \]
+ So again $\cot \pi z$ is bounded on the top side. So again, the integral vanishes as $N \to \infty$.
+\end{eg}
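+The conclusion $\sum 1/n^2 = \pi^2/6$ is easy to check directly from partial sums (an illustrative aside): since the tail satisfies $\sum_{n > N} n^{-2} = \frac{1}{N} - \frac{1}{2N^2} + O(N^{-3})$, adding $1/N$ to the partial sum gives an approximation accurate to about $1/(2N^2)$.

```python
import math

N = 10000
s = sum(1.0 / (n * n) for n in range(1, N + 1))

# Tail correction: sum_{n > N} 1/n^2 = 1/N - 1/(2N^2) + O(N^-3),
# so s + 1/N approximates pi^2/6 to about 1/(2N^2) ~ 5e-9.
approx = s + 1.0 / N
```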
+
+\subsection{Jordan's lemma}
+So far, we have bounded the semi-circular integral in a rather crude way, with
+\[
+ \left|\int_{\gamma_R} f(z) e^{i\lambda z} \;\d z\right| \leq \pi R \sup_{|z| = R} |f(z)|.
+\]
+This works for most cases. However, sometimes we need a more subtle argument.
+\begin{lemma}[Jordan's lemma]
+ Suppose that $f$ is an analytic function, except for a finite number of singularities, and that $f(z) \to 0$ as $|z| \to \infty$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (2, 0) arc(0:180:2) node [pos=0.3, right] {$\gamma_R$};
+ \draw [mblue, thick, ->-=0.3, ->-=0.8] (2, 0) arc(0:-180:2) node [pos=0.3, right] {$\gamma_R'$};
+
+ \node [anchor = north east] at (-2, 0) {$-R$};
+ \node [anchor = north west] at (2, 0) {$R$};
+ \node [circ] at (-2, 0) {};
+ \node [circ] at (2, 0) {};
+
+ \node at (1, 1) {$\times$};
+ \node at (-1, -0.5) {$\times$};
+ \node at (-0.3, 0.6) {$\times$};
+ \end{tikzpicture}
+ \end{center}
+ Then for any real constant $\lambda > 0$, we have
+ \[
+ \int_{\gamma_R} f(z) e^{i\lambda z}\;\d z\to 0
+ \]
+ as $R \to \infty$, where $\gamma_R$ is the semicircle of radius $R$ in the upper half-plane.
+
+ For $\lambda < 0$, the same conclusion holds for the semicircular $\gamma_R'$ in the lower half-plane.
+\end{lemma}
+Such integrals arise frequently in Fourier transforms, as we shall see in the next chapter.
+
+How can we prove this result?
+
+The result is easy to show if in fact $f(z) = o(|z|^{-1})$ as $|z| \to \infty$, i.e.\ $|z| f(z) \to 0$ as $|z|\to \infty$, since $|e^{i\lambda z}| = e^{-\lambda \Im z} \leq 1$ on $\gamma_R$. So
+\[
+ \left|\int_{\gamma_R} f(z) e^{i \lambda z}\;\d z\right| \leq \pi R \cdot o(R^{-1}) = o(1) \to 0.
+\]
+But for functions decaying less rapidly than $o(|z|^{-1})$ (e.g.\ if $f(z) = \frac{1}{z}$), we need to prove Jordan's lemma, which extends the result to any function $f(z)$ that tends to zero at infinity.
+\begin{proof}
+ The proof relies on the fact that for $\theta \in \left[0, \frac{\pi}{2}\right]$, we have
+ \[
+ \sin \theta \geq \frac{2\theta}{\pi}.
+ \]
+ So we get
+ \begin{align*}
+ \left|\int_{\gamma_R} f(z) e^{i\lambda z}\;\d z\right| &= \left|\int_0^\pi f(Re^{i\theta}) e^{i\lambda Re^{i\theta}} i R e^{i\theta}\;\d \theta\right|\\
+ &\leq R\int_0^\pi |f(Re^{i\theta})| \left|e^{i\lambda R e^{i\theta}}\right|\;\d \theta\\
+ &\leq 2R\sup_{z \in \gamma_R} |f(z)| \int_0^{\pi/2} e^{-\lambda R \sin \theta}\;\d \theta\\
+ &\leq 2R\sup_{z \in \gamma_R} |f(z)| \int_0^{\pi/2} e^{-2\lambda R \theta/\pi}\;\d \theta\\
+ &= \frac{\pi}{\lambda}(1 - e^{-\lambda R}) \sup_{z \in \gamma_R}|f(z)|\\
+ &\to 0,
+ \end{align*}
+ as required. The same argument applies to $\gamma_R'$ when $\lambda < 0$.
+\end{proof}
+Note that in most cases, we don't actually need it, but can just bound the integral by $\pi R \sup_{|z| = R} |f(z)|$.
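+The inequality $\sin \theta \geq 2\theta/\pi$ on $[0, \pi/2]$ that the proof relies on, and the resulting bound on the integral, are easy to check numerically (an illustrative aside, with $\lambda R = 10$ chosen arbitrarily):

```python
import math

# Check sin(theta) >= 2*theta/pi on [0, pi/2]
# (equality holds at both endpoints; small tolerance for float rounding).
ok = all(math.sin(t) >= 2 * t / math.pi - 1e-12
         for t in (k * (math.pi / 2) / 1000 for k in range(1001)))

# The bound used in the proof, checked for lambda * R = 10:
# int_0^{pi/2} e^{-lamR sin(theta)} dtheta <= pi/(2 lamR) (1 - e^{-lamR}).
lamR = 10.0
n = 2000
h = (math.pi / 2) / n
s = math.exp(-lamR * math.sin(0.0)) + math.exp(-lamR * math.sin(math.pi / 2))
for i in range(1, n):
    s += (4 if i % 2 else 2) * math.exp(-lamR * math.sin(i * h))
lhs = s * h / 3
rhs = math.pi / (2 * lamR) * (1 - math.exp(-lamR))
```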
+
+\begin{eg}
+ Suppose we want to compute
+ \[
+ I = \int_0^\infty \frac{\cos \alpha x}{1 + x^2}\;\d x,
+ \]
+ where $\alpha$ is a positive real constant. We consider the following contour:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) node [pos=0.3, below] {$\gamma_0$} arc(0:180:2) node [pos=0.3, right] {$\gamma_R$};
+
+ \node [below] at (-2, 0) {$-R$};
+ \node [below] at (2, 0) {$R$};
+
+ \node at (0, 1) {$\times$};
+ \node [right] at (0, 1) {$i$};
+ \end{tikzpicture}
+ \end{center}
+ and we compute
+ \[
+ \Re \int_\gamma \frac{e^{i\alpha z}}{1 + z^2}\;\d z.
+ \]
+ Along $\gamma_0$, we obtain $2I$ as $R \to \infty$. On $\gamma_R$, we do not actually need Jordan's lemma to show that we obtain zero in the limit, but we can still use it to save ink. So
+ \[
+ I = \frac{1}{2} \Re \left(2\pi i\res_{z = i} \frac{e^{i\alpha z}}{1 + z^2}\right) = \frac{1}{2} \Re\left(2\pi i\frac{e^{-\alpha}}{2i}\right) = \frac{1}{2}\pi e^{-\alpha}.
+ \]
+ Note that taking the real part does not do anything here --- the result of the integral is completely real. This is because the imaginary part of the integral along $\gamma_0$ is $\int_{-R}^R \frac{\sin \alpha x}{1 + x^2}\;\d x$, whose integrand is odd, so it vanishes.
+
+ Note that if we had attempted to integrate $\int_\gamma \frac{\cos \alpha z}{1 + z^2}\;\d z$ directly, we would have found that $\int_{\gamma_R} \not\to 0$. In fact, $\cos \alpha z$ grows exponentially as $\Im z \to \infty$.
+\end{eg}
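+A numerical check of this value (an illustrative aside; $\alpha = 1$ is an arbitrary choice): truncating at a large multiple of $\pi$ leaves only an $O(1/L^2)$ oscillatory tail.

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

alpha = 1.0  # arbitrary positive test value

# Truncate at L = 1000*pi; integration by parts shows the tail beyond L
# is O(1/L^2), and it nearly vanishes at a zero of sin.
L = 1000 * math.pi
f = lambda x: math.cos(alpha * x) / (1 + x * x)
I = simpson(f, 0.0, L, 100000)
target = 0.5 * math.pi * math.exp(-alpha)
```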
+
+\begin{eg}
+ We want to find
+ \[
+ I = \int_{-\infty}^\infty \frac{\sin x}{x}\;\d x.
+ \]
+ This time, we \emph{do} require Jordan's lemma. Here we have an extra complication --- while $\frac{\sin x}{x}$ is well-behaved at the origin, to perform the contour integral, we need to integrate $\frac{e^{iz}}{z}$ instead, which \emph{is} singular at the origin.
+
+ Instead, we split the integral in half, and write
+ \begin{align*}
+ \int_{-\infty}^\infty &= \lim_{\substack{\mathllap{\varepsilon} \to \mathrlap{0}\\\mathllap{R}\to\mathrlap{\infty}}} \left(\int_{-R}^{-\varepsilon} \frac{\sin x}{x}\;\d x + \int_\varepsilon^R \frac{\sin x}{x}\;\d x\right)\\
+ &= \Im \lim_{\substack{\mathllap{\varepsilon} \to \mathrlap{0}\\\mathllap{R}\to\mathrlap{\infty}}} \left(\int_{-R}^{-\varepsilon} \frac{e^{ix}}{x}\;\d x + \int_\varepsilon^R \frac{e^{ix}}{x}\;\d x\right)
+ \end{align*}
+ We now let $C$ be the contour from $-R$ to $-\varepsilon$, then round a semi-circle $C_\varepsilon$ to $\varepsilon$, then to $R$, then returning via a semi-circle $C_R$ of radius $R$. Then $C$ misses all our singularities.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 3);
+ \draw [mred, thick, ->-=0.09, ->-=0.37, ->-=0.8] (-2, 0) node [below] {$-R$} -- (-0.5, 0) node [below] {$-\varepsilon$} arc(180:0:0.5) node [pos=0.2, left] {$C_\varepsilon$} node [below] {$\varepsilon$} -- (2, 0) node [below] {$R$} arc(0:180:2) node [pos=0.3, right] {$C_R$};
+
+ \node {$\times$};
+ \end{tikzpicture}
+ \end{center}
+ Since $C$ misses all singularities, we must have
+ \[
+ \int_{-R}^{-\varepsilon} \frac{e^{iz}}{z} \;\d z + \int_\varepsilon^R \frac{e^{iz}}{z} \;\d z = -\int_{C_\varepsilon} \frac{e^{iz}}{z} \;\d z - \int_{C_R} \frac{e^{iz}}{z}\;\d z.
+ \]
+ By Jordan's lemma, the integral around $C_R$ vanishes as $R \to \infty$. On $C_\varepsilon$, we substitute $z = \varepsilon e^{i\theta}$ and $e^{iz} = 1 + O(\varepsilon)$ to obtain
+ \[
+ \int_{C_\varepsilon} \frac{e^{iz}}{z} \;\d z = \int_\pi^0 \frac{1 + O(\varepsilon)}{\varepsilon e^{i\theta}} i\varepsilon e^{i\theta} \;\d \theta = -i\pi + O(\varepsilon).
+ \]
+ Hence, in the limit $\varepsilon \to 0$ and $R \to \infty$, we get
+ \[
+ \int_{-\infty}^\infty \frac{\sin x}{x} \;\d x = \Im(i\pi) = \pi.
+ \]
+ Similarly, we can compute
+ \[
+ \int_{-\infty}^\infty \frac{\sin^2 x}{x^2}\;\d x = \pi.
+ \]
+ Alternatively, we notice that $\frac{\sin z}{z}$ has a removable singularity at the origin. Removing the singularity, the integrand is completely analytic. Therefore the original integral is equivalent to the integral along this path:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 2);
+ \draw [mred, thick, ->-=0.53] (-3, 0) -- (-2, 0) .. controls (-1, 0) and (-1, 0.5) .. (0, 0.5) .. controls (1, 0.5) and (1, 0) .. (2, 0) -- (3, 0);
+ \end{tikzpicture}
+ \end{center}
+ We can then write $\sin z = \frac{e^{iz} - e^{-iz}}{2i}$, and then apply our standard techniques and Jordan's lemma.
+\end{eg}
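+The value $\pi$ can be checked numerically despite the slow decay (an illustrative aside): the partial integrals $\int_0^{k\pi} \frac{\sin x}{x}\;\d x$ alternate around $\pi/2$ with decreasing increments, so consecutive partial integrals bracket the limit.

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

sinc = lambda x: math.sin(x) / x if x != 0.0 else 1.0

# Partial integrals over [0, k*pi] alternate around pi/2 (alternating
# series of period contributions), so consecutive values bracket the
# limit; doubling gives the integral over the whole real line.
def S(k):
    return sum(simpson(sinc, j * math.pi, (j + 1) * math.pi, 20)
               for j in range(k))

lo, hi = sorted((2 * S(200), 2 * S(201)))
```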
+
+\section{Transform theory}
+We are now going to consider two different types of ``transforms''. The first is the Fourier transform, which we already met in IB Methods. The second is the Laplace transform. While the formula for a Laplace transform is completely real, the \emph{inverse} Laplace transform involves a more complicated contour integral, which is why we did not do it in IB Methods. In either case, the new tool of contour integrals allows us to compute \emph{more} transforms.
+
+Apart from that, the two transforms are pretty similar, and most properties of the Fourier transform also carry over to the Laplace transform.
+
+\subsection{Fourier transforms}
+\begin{defi}[Fourier transform]
+ The \emph{Fourier transform} of a function $f(x)$ that decays sufficiently fast as $|x| \to \infty$ is defined as
+ \[
+ \tilde{f}(k) = \int_{-\infty}^\infty f(x)e^{-ikx}\;\d x,
+ \]
+ and the inverse transform is
+ \[
+ f(x) = \frac{1}{2\pi} \int_{-\infty}^\infty \tilde{f}(k) e^{ikx}\;\d k.
+ \]
+\end{defi}
+It is common for the terms $e^{-ikx}$ and $e^{ikx}$ to be swapped around in these definitions. It might even be swapped around by the same author in the same paper --- for some reason, if we have a function in two variables, then it is traditional to transform one variable with $e^{-ikx}$ and the other with $e^{ikx}$, just to confuse people. More rarely, factors of $2\pi$ or $\sqrt{2\pi}$ are rearranged.
+
+Traditionally, if $f$ is a function of position $x$, then the transform variable is called $k$; while if $f$ is a function of time $t$, then it is called $\omega$. You don't have to stick to this notation, if you like being confusing.
+
+In fact, a more precise version of the inverse transform is
+\[
+ \frac{1}{2}(f(x^+) + f(x^-)) = \frac{1}{2\pi} \mathrm{PV} \int_{-\infty}^\infty \tilde{f}(k) e^{ikx}\;\d k.
+\]
+The left-hand side indicates that at a discontinuity, the inverse Fourier transform gives the \emph{average} value. The right-hand side shows that only the \emph{Cauchy principal value} of the integral (denoted $\mathrm{PV} \int$, $\mathrm{P}\int$ or $\dashint$) is required, i.e.\ the limit
+\[
+ \lim_{R \to \infty} \int_{-R}^R \tilde{f}(k) e^{ikx} \;\d k,
+\]
+rather than
+\[
+ \lim_{\substack{\mathllap{R} \to \mathrlap{\infty} \\ \mathllap{S}\to \mathrlap{-\infty}}}\quad \int_S^R\tilde{f}(k) e^{ikx} \;\d k.
+\]
+Several functions have $\mathrm{PV}$ integrals, but not normal ones. For example,
+\[
+ \mathrm{PV} \int_{-\infty}^\infty \frac{x}{1 + x^2}\;\d x = 0,
+\]
+since it is an odd function, but
+\[
+ \int_{-\infty}^\infty \frac{x}{1 + x^2}\;\d x
+\]
+diverges at both $-\infty$ and $\infty$. So the normal proper integral does not exist.
+
+So for the inverse Fourier transform, we only have to care about the Cauchy principal value. This is convenient, because that is exactly what our contour integration methods compute!
+
+\begin{notation}
+ The Fourier transform can also be denoted by $\tilde{f} = \mathcal{F}(f)$ or $\tilde{f}(k) = \mathcal{F}(f)(k)$. In a slight abuse of notation, we often write $\tilde{f}(k) = \mathcal{F}(f(x))$, but this is not correct notation, since $\mathcal{F}$ takes a function as its argument, not the value of a function at a particular point.
+\end{notation}
+
+Note that in the Tripos exam, you are expected to know about all the properties of the Fourier transform you have learned from IB Methods.
+
+We now calculate some Fourier transforms using the calculus of residues.
+\begin{eg}
+ Consider $f(x) = e^{-x^2/2}$. Then
+ \begin{align*}
+ \tilde{f}(k) &= \int_{-\infty}^\infty e^{-x^2/2} e^{-ikx}\;\d x\\
+ &= \int_{-\infty}^\infty e^{-(x + ik)^2/2} e^{-k^2/2}\;\d x\\
+ &= e^{-k^2/2}\int_{-\infty + ik}^{\infty + ik} e^{-z^2/2}\;\d z
+ \end{align*}
+ We create a rectangular contour that looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw [mred, thick, -<-=0.3, -<-=0.78] (-2, 0) node [below] {$-R$} -- (2, 0) node [pos=0.7, below] {$\gamma_1$} node [below] {$R$} -- (2, 1.5) node [pos=0.5, right] {$\gamma_R^+$} -- (-2, 1.5) node [pos=0.3, above] {$\gamma_0$} -- (-2, 0) node [pos=0.5, left] {$\gamma_R^-$};
+
+ \node [circ] at (0, 1.5) {};
+ \node [anchor = south east] at (0, 1.5) {$ik$};
+ \end{tikzpicture}
+ \end{center}
+ The integral we want is the integral along $\gamma_0$ as shown, in the limit as $R \to \infty$. We can show that $\int_{\gamma_R^+} \to 0$ and $\int_{\gamma_R^-} \to 0$. Then we notice there are no singularities inside the contour. So
+ \[
+ \int_{\gamma_0} e^{-z^2/2}\;\d z = -\int_{\gamma_1} e^{-z^2/2}\;\d z,
+ \]
+ in the limit. Since $\gamma_1$ is traversed in the reverse direction, we have
+ \[
+ \tilde{f}(k) = e^{-k^2/2} \int_{-\infty}^\infty e^{-z^2/2}\;\d z = \sqrt{2\pi} e^{-k^2/2},
+ \]
+ using a standard result from real analysis.
+\end{eg}
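+This transform pair can be confirmed numerically (an illustrative aside; $k = 1$ is an arbitrary choice): by symmetry only the cosine part of $e^{-ikx}$ contributes, and the Gaussian is negligible beyond $|x| = 10$.

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

k = 1.0  # arbitrary test value

# e^{-x^2/2} is negligible beyond |x| = 10 (truncation error ~ e^{-50});
# the sine part of e^{-ikx} integrates to zero by symmetry.
f = lambda x: math.exp(-x * x / 2) * math.cos(k * x)
ft = simpson(f, -10.0, 10.0, 4000)
target = math.sqrt(2 * math.pi) * math.exp(-k * k / 2)
```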
+
+When inverting Fourier transforms, we generally use a semicircular contour (in the upper half-plane if $x > 0$, lower if $x < 0$), and apply Jordan's lemma.
+\begin{eg}
+ Consider the real function
+ \[
+ f(x) =
+ \begin{cases}
+ 0 & x < 0\\
+ e^{-ax} & x > 0
+ \end{cases},
+ \]
+ where $a > 0$ is a real constant. The Fourier transform of $f$ is
+ \begin{align*}
+ \tilde{f}(k) &= \int_{-\infty}^\infty f(x) e^{-ikx}\;\d x\\
+ &= \int_0^\infty e^{-ax - ikx}\;\d x\\
+ &= -\frac{1}{a + ik} [e^{-ax - ikx}]_0^\infty\\
+ &= \frac{1}{a + ik}.
+ \end{align*}
+ We shall compute the inverse Fourier transform by evaluating
+ \[
+ \frac{1}{2\pi} \int_{-\infty}^\infty \tilde{f}(k) e^{ikx}\;\d k.
+ \]
+ In the complex plane, we let $\gamma_0$ be the contour from $-R$ to $R$ along the real axis; $\gamma_R$ the semicircle of radius $R$ in the upper half-plane; and $\gamma_R'$ the semicircle in the lower half-plane. We let $\gamma = \gamma_0 + \gamma_R$ and $\gamma' = \gamma_0 + \gamma_R'$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \draw [mred, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) node [pos=0.3, below] {$\gamma_0$} arc(0:180:2) node [pos=0.3, right] {$\gamma_R$};
+ \draw [dashed, mblue, thick, ->-=0.3, ->-=0.8] (-2, 0) -- (2, 0) arc(0:-180:2) node [pos=0.3, right] {$\gamma_R'$};
+
+ \node [anchor = north east] at (-2, 0) {$-R$};
+ \node [anchor = north west] at (2, 0) {$R$};
+
+ \node at (0, 1) {$\times$};
+ \node [right] at (0, 1) {$i$};
+ \end{tikzpicture}
+ \end{center}
+ We see $\tilde{f}(k)$ has only one pole, at $k = ia$, which is a simple pole. So we get
+ \[
+ \oint_\gamma \tilde{f}(k) e^{ikx} \;\d k = 2\pi i \res_{k = ia} \frac{e^{ikx}}{i(k - ia)} = 2\pi e^{-ax},
+ \]
+ while
+ \[
+ \oint_{\gamma'}\tilde{f}(k) e^{ikx} \;\d k = 0.
+ \]
+ Now if $x > 0$, applying Jordan's lemma (with $\lambda = x$) to $\gamma_R$ shows that $\int_{\gamma_R} \tilde{f}(k) e^{ikx}\;\d k \to 0$ as $R \to \infty$. Hence we get
+ \begin{align*}
+ \frac{1}{2\pi} \int_{-\infty}^\infty \tilde{f}(k) e^{ikx}\;\d k &= \frac{1}{2\pi} \lim_{R \to \infty}\int_{\gamma_0}\tilde{f}(k) e^{ikx}\;\d k\\
+ &= \frac{1}{2\pi} \lim_{R \to \infty}\left(\int_\gamma\tilde{f}(k) e^{ikx}\;\d k - \int_{\gamma_R}\tilde{f}(k) e^{ikx}\;\d k\right)\\
+ &= e^{-ax}.
+ \end{align*}
  For $x < 0$, we have to close in the lower half-plane instead. Since $\gamma'$ encloses no singularities, we get
+ \[
+ \frac{1}{2\pi} \int_{-\infty}^\infty \tilde{f}(k) e^{ikx}\;\d k = 0.
+ \]
+ Combining these results, we obtain
+ \[
+ \frac{1}{2\pi} \int_{-\infty}^\infty \tilde{f}(k) e^{ikx}\;\d k =
+ \begin{cases}
+ 0 & x < 0\\
+ e^{-ax} & x > 0
    \end{cases}
  \]
  which is indeed the original function $f(x)$.
+\end{eg}
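As a quick sanity check (an editorial addition, not part of the notes), we can verify the forward transform numerically. The sketch below uses only the Python standard library; the truncation point $X = 60$ and the trapezoidal rule are my choices, picked so that the neglected tail $e^{-aX}$ is negligible.

```python
import cmath

def fourier_transform(a, k, X=60.0, n=200_000):
    """Trapezoidal approximation of the integral of e^{-ax} e^{-ikx}
    over [0, X]; for a > 0 the tail beyond X = 60 is negligible."""
    h = X / n
    total = 0.5 * (1.0 + cmath.exp(-(a + 1j * k) * X))
    for m in range(1, n):
        x = m * h
        total += cmath.exp(-(a + 1j * k) * x)
    return h * total

a, k = 1.0, 2.0
approx = fourier_transform(a, k)
exact = 1 / (a + 1j * k)   # the closed form computed above
assert abs(approx - exact) < 1e-6
```

The agreement confirms $\tilde{f}(k) = 1/(a + ik)$ at the sample point $a = 1$, $k = 2$.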
We've already done Fourier transforms in IB Methods, so we will not spend any more time on them. We move on to the new and exciting topic of \emph{Laplace transforms}.
+
+\subsection{Laplace transform}
+The Fourier transform is a powerful tool for solving differential equations and investigating physical systems, but it has two key restrictions:
+\begin{enumerate}
+ \item Many functions of interest grow exponentially (e.g.\ $e^x$), and so do not have Fourier transforms;
+ \item There is no way of incorporating initial or boundary conditions in the transform variable. When used to solve an ODE, the Fourier transform merely gives a particular integral: there are no arbitrary constants produced by the method.
+\end{enumerate}
So for solving differential equations, the Fourier transform is pretty limited. Coming to our rescue is the \emph{Laplace transform}, which gets around these two restrictions. However, we have to pay a price with a different restriction --- it is only defined for functions $f(t)$ which vanish for $t < 0$ (by convention).
+
+From now on, we shall make this assumption, so that if we refer to the function $f(t) = e^t$ for instance, we really mean $f(t) = e^t H(t)$, where $H(t)$ is the Heaviside step function,
+\[
+ H(t) =
+ \begin{cases}
+ 1 & t > 0\\
+ 0 & t < 0
+ \end{cases}.
+\]
+\begin{defi}[Laplace transform]
+ The \emph{Laplace transform} of a function $f(t)$ such that $f(t) = 0$ for $t < 0$ is defined by
+ \[
+ \hat{f}(p) = \int_0^\infty f(t) e^{-pt}\;\d t.
+ \]
+ This exists for functions that grow no more than exponentially fast.
+\end{defi}
+There is no standard notation for the Laplace transform.
+\begin{notation}
+ We sometimes write
+ \[
+ \hat{f} = \mathcal{L}(f),
+ \]
+ or
+ \[
+ \hat{f}(p) = \mathcal{L}(f(t)).
+ \]
+\end{notation}
+The variable $p$ is also not standard. Sometimes, $s$ is used instead.
+
+Many functions (e.g.\ $t$ and $e^t$) which do not have Fourier transforms do have Laplace transforms.
+
+Note that $\hat{f}(p) = \tilde{f}(-ip)$, where $\tilde{f}$ is the Fourier transform, provided that both transforms exist.
+
+\begin{eg}
+ Let's do some exciting examples.
+ \begin{enumerate}
+ \item
+ \[
+ \mathcal{L}(1) = \int_0^\infty e^{-pt} \;\d t = \frac{1}{p}.
+ \]
+ \item Integrating by parts, we find
+ \[
+ \mathcal{L}(t) = \frac{1}{p^2}.
+ \]
+ \item
+ \[
+ \mathcal{L}(e^{\lambda t}) = \int_0^\infty e^{(\lambda - p)t}\;\d t = \frac{1}{p - \lambda}.
+ \]
+ \item We have
+ \begin{align*}
+ \mathcal{L}(\sin t) &= \mathcal{L}\left(\frac{1}{2i} \left(e^{it} - e^{-it}\right)\right) \\
+ &= \frac{1}{2i} \left(\frac{1}{p - i} - \frac{1}{p + i}\right)\\
+ &= \frac{1}{p^2 + 1}.
+ \end{align*}
+ \end{enumerate}
+\end{eg}
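These four transforms are easy to check numerically. The following sketch (an editorial addition, not part of the lectures) approximates $\int_0^\infty f(t) e^{-pt}\;\d t$ by a trapezoidal sum, truncated at $T = 60$ where the integrand is negligible for the test point $p = 2$:

```python
import math

def laplace(f, p, T=60.0, n=200_000):
    """Trapezoidal approximation of the Laplace transform of f at p."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for m in range(1, n):
        t = m * h
        total += f(t) * math.exp(-p * t)
    return h * total

p, lam = 2.0, 0.5   # test point, with Re p > lam so all integrals converge
assert abs(laplace(lambda t: 1.0, p) - 1 / p) < 1e-5
assert abs(laplace(lambda t: t, p) - 1 / p**2) < 1e-5
assert abs(laplace(lambda t: math.exp(lam * t), p) - 1 / (p - lam)) < 1e-5
assert abs(laplace(math.sin, p) - 1 / (p**2 + 1)) < 1e-5
```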
Note that the integral only converges if $\Re p$ is sufficiently large. For example, in (iii), we require $\Re p > \Re \lambda$. However, once we have calculated $\hat{f}$ in this domain, we can consider it to exist everywhere in the complex $p$-plane, except at singularities (such as at $p = \lambda$ in this example). This process of extending a complex function initially defined in some part of the plane to a larger part is known as \emph{analytic continuation}.
+
So far, we haven't done anything interesting with the Laplace transform, and this is going to continue in the next section!
+\subsection{Elementary properties of the Laplace transform}
We will come up with seven elementary properties of the Laplace transform. The first four properties are easily proved by direct substitution.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Linearity:
+ \[
+ \mathcal{L}(\alpha f + \beta g) = \alpha \mathcal{L}(f) + \beta \mathcal{L}(g).
+ \]
+ \item Translation:
+ \[
+ \mathcal{L}(f(t - t_0)H(t - t_0)) = e^{-pt_0} \hat{f}(p).
+ \]
+ \item Scaling:
+ \[
+ \mathcal{L}(f(\lambda t)) = \frac{1}{\lambda} \hat{f} \left(\frac{p}{\lambda}\right),
+ \]
+ where we require $\lambda > 0$ so that $f(\lambda t)$ vanishes for $t < 0$.
+ \item Shifting:
+ \[
+ \mathcal{L}(e^{p_0 t}f(t)) = \hat{f}(p - p_0).
+ \]
+ \item Transform of a derivative:
+ \[
+ \mathcal{L}(f'(t)) = p \hat{f}(p) - f(0).
+ \]
+ Repeating the process,
+ \[
+ \mathcal{L}(f''(t)) = p \mathcal{L}(f'(t)) - f'(0) = p^2 \hat{f}(p) - p f(0) - f'(0),
+ \]
+ and so on. This is the key fact for solving ODEs using Laplace transforms.
+ \item Derivative of a transform:
+ \[
+ \hat{f}'(p) = \mathcal{L}(-tf(t)).
+ \]
    Of course, the point of this is not that we know what the derivative of $\hat{f}$ is. It is that we know how to find the Laplace transform of $tf(t)$! For example, this lets us find the Laplace transform of $t^2$ with ease.
+
+ In general,
+ \[
+ \hat{f}^{(n)} (p) = \mathcal{L}((-t)^n f(t)).
+ \]
  \item Asymptotic limits:
+ \[
+ p\hat{f}(p) \to
+ \begin{cases}
+ f(0) & \text{as }p \to \infty\\
+ f(\infty) & \text{as }p \to 0
+ \end{cases},
+ \]
+ where the second case requires $f$ to have a limit at $\infty$.
+ \end{enumerate}
+\end{prop}
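Properties (iv), (v) and (vii) can be spot-checked numerically before we prove them. The sketch below (an editorial addition; the trapezoidal approximation and the test points are my choices) uses $f = \cos$ and $f = \sin$, whose transforms we can write down from the examples above:

```python
import math

def laplace(f, p, T=60.0, n=200_000):
    """Trapezoidal approximation of the Laplace transform of f at p."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for m in range(1, n):
        total += f(m * h) * math.exp(-p * m * h)
    return h * total

p, p0 = 2.0, 0.5
# (iv) Shifting: L(e^{p0 t} sin t)(p) = 1/((p - p0)^2 + 1).
shifted = laplace(lambda t: math.exp(p0 * t) * math.sin(t), p)
assert abs(shifted - 1 / ((p - p0)**2 + 1)) < 1e-5

# (v) Transform of a derivative, with f = cos, f' = -sin, f(0) = 1:
lhs = laplace(lambda t: -math.sin(t), p)
rhs = p * laplace(math.cos, p) - 1.0
assert abs(lhs - rhs) < 1e-5

# (vii) p f_hat(p) -> f(0) as p -> infinity, again with f = cos:
assert abs(100.0 * laplace(math.cos, 100.0) - 1.0) < 1e-2
```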
+
+\begin{proof}\leavevmode
+ \begin{enumerate}\setcounter{enumi}{4}
+ \item We have
+ \[
+ \int_0^\infty f'(t) e^{-pt}\;\d t = [f(t) e^{-pt}]^{\infty}_0 + p \int_0^\infty f(t) e^{-pt}\;\d t = p \hat{f}(p) - f(0).
+ \]
+ \item We have
+ \[
+ \hat{f}(p) = \int_0^\infty f(t) e^{-pt}\;\d t.
+ \]
+ Differentiating with respect to $p$, we have
+ \[
+ \hat{f}'(p) = -\int_0^\infty t f(t) e^{-pt}\;\d t.
+ \]
+ \item Using (v), we know
+ \[
+ p\hat{f}(p) = f(0) + \int_0^\infty f'(t) e^{-pt} \;\d t.
+ \]
+ As $p \to \infty$, we know $e^{-pt} \to 0$ for all $t$. So $p \hat{f}(p) \to f(0)$. This proof looks dodgy, but is actually valid since $f'$ grows no more than exponentially fast.
+
+ Similarly, as $p \to 0$, then $e^{-pt} \to 1$. So
+ \[
+ p \hat{f}(p) \to f(0) + \int_0^\infty f'(t) \;\d t = f(\infty).\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ We can compute
+ \[
+ \mathcal{L}(t \sin t) = -\frac{\d}{\d p} \mathcal{L}(\sin t) = -\frac{\d}{\d p} \frac{1}{p^2 + 1} = \frac{2p}{(p^2 + 1)^2}.
+ \]
+\end{eg}
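Again, a quick numerical check of this example (editorial, not from the lectures; same trapezoidal scheme as before):

```python
import math

def laplace(f, p, T=60.0, n=200_000):
    """Trapezoidal approximation of the Laplace transform of f at p."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for m in range(1, n):
        total += f(m * h) * math.exp(-p * m * h)
    return h * total

p = 2.0
approx = laplace(lambda t: t * math.sin(t), p)
exact = 2 * p / (p**2 + 1)**2   # the formula just derived
assert abs(approx - exact) < 1e-5
```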
+
+\subsection{The inverse Laplace transform}
+It's no good trying to find the Laplace transforms of functions if we can't invert them. Given $\hat{f}(p)$, we can calculate $f(t)$ using the \emph{Bromwich inversion formula}.
+\begin{prop}
+ The inverse Laplace transform is given by
+ \[
+ f(t) = \frac{1}{2\pi i} \int_{c - i \infty}^{c + i\infty} \hat{f}(p) e^{pt} \;\d p,
+ \]
+ where $c$ is a real constant such that the \emph{Bromwich inversion contour} $\gamma$ given by $\Re p = c$ lies to the \emph{right} of all the singularities of $\hat{f}(p)$.
+\end{prop}
+
+\begin{proof}
+ Since $f$ has a Laplace transform, it grows no more than exponentially. So we can find a $c \in \R$ such that
+ \[
+ g(t) = f(t) e^{-ct}
+ \]
+ decays at infinity (and is zero for $t < 0$, of course). So $g$ has a Fourier transform, and
+ \[
+ \tilde{g}(\omega) = \int_{-\infty}^\infty f(t) e^{-ct} e^{-i\omega t}\;\d t= \hat{f}(c + i\omega).
+ \]
+ Then we use the Fourier inversion formula to obtain
+ \[
+ g(t) = \frac{1}{2\pi} \int_{-\infty}^\infty \hat{f}(c+ i\omega) e^{i\omega t}\;\d \omega.
+ \]
+ So we make the substitution $p = c + i\omega$, and thus obtain
+ \[
+ f(t) e^{-ct} = \frac{1}{2 \pi i} \int_{c - i\infty}^{c + i\infty} \hat{f}(p) e^{(p - c)t}\;\d p.
+ \]
+ Multiplying both sides by $e^{ct}$, we get the result we were looking for (the requirement that $c$ lies to the right of all singularities is to fix the ``constant of integration'' so that $f(t) = 0$ for all $t < 0$, as we will soon see).
+\end{proof}
+In most cases, we don't have to use the full inversion formula. Instead, we use the following special case:
+\begin{prop}
+ In the case that $\hat{f}(p)$ has only a finite number of isolated singularities $p_k$ for $k = 1, \cdots, n$, and $\hat{f}(p) \to 0$ as $|p| \to \infty$, then
+ \[
+ f(t) = \sum_{k = 1}^n \res_{p = p_k} (\hat{f}(p) e^{pt})
+ \]
+ for $t > 0$, and vanishes for $t < 0$.
+\end{prop}
+
+\begin{proof}
  We first do the case $t < 0$. Consider the contour $\gamma_0 + \gamma_R'$ as shown, which encloses no singularities.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, 0) -- (3.5, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \draw [->-=0.3, ->-=0.8] (1, -2) node [below] {$c - iR$} -- (1, 2) node [pos=0.7, right] {$\gamma_0$} node [above] {$c + iR$} arc(90:-90:2) node [pos=0.3, right] {$\gamma_R'$};
+
+ \node at (-0.3, -0.6) {$\times$};
+ \node at (-1, 0.8) {$\times$};
+ \node at (0.2, 0.4) {$\times$};
+ \end{tikzpicture}
+ \end{center}
+ Now if $\hat{f}(p) = o(|p|^{-1})$ as $|p| \to \infty$, then
+ \[
+ \left|\int_{\gamma_R'} \hat{f}(p) e^{pt}\;\d p\right| \leq \pi R e^{ct} \sup_{p \in \gamma_R'} |\hat{f}(p)| \to 0 \text{ as }R \to \infty.
+ \]
+ Here we used the fact $|e^{pt}| \leq e^{ct}$, which arises from $\Re (pt) \leq ct$, noting that $t < 0$.
+
+ If $\hat{f}$ decays less rapidly at infinity, but still tends to zero there, the same result holds, but we need to use a slight modification of Jordan's lemma. So in either case, the integral
+ \[
+ \int_{\gamma_R'} \hat{f}(p) e^{pt}\;\d p \to 0\text{ as }R\to \infty.
+ \]
  Thus, as $R \to \infty$, we know $\int_{\gamma_0} \to \int_\gamma$. Since the closed contour $\gamma_0 + \gamma_R'$ encloses no singularities, Cauchy's theorem gives $f(t) = 0$ for $t < 0$. This is in agreement with the requirement that functions with Laplace transforms vanish for $t < 0$.
+
+ Here we see why $\gamma$ must lie to the right of all singularities. If not, then the contour would encircle some singularities, and then the integral would no longer be zero.
+
+ When $t > 0$, we close the contour to the left.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (2, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \draw [->-=0.3, ->-=0.8] (1, -2.5) node [below] {$c - iR$} -- (1, 2.5) node [pos=0.7, right] {$\gamma_0$} node [above] {$c + iR$} arc(90:270:2.5) node [pos=0.3, left] {$\gamma_R$};
+
+ \node at (-0.3, -0.6) {$\times$};
+ \node at (-1, 0.8) {$\times$};
+ \node at (0.2, 0.4) {$\times$};
+ \end{tikzpicture}
+ \end{center}
+ This time, our $\gamma$ \emph{does} enclose some singularities. Since there are only finitely many singularities, we enclose all singularities for sufficiently large $R$. Once again, we get $\int_{\gamma_R} \to 0$ as $R \to \infty$. Thus, by the residue theorem, we know
+ \[
+ \int_{\gamma} \hat{f}(p) e^{pt} \;\d p = \lim_{R \to \infty} \int_{\gamma_0}\hat{f}(p) e^{pt}\;\d p = 2\pi i \sum_{k = 1}^n \res_{p = p_k} (\hat{f}(p) e^{pt}).
+ \]
+ So from the Bromwich inversion formula,
+ \[
+ f(t) = \sum_{k = 1}^n \res_{p = p_k} (\hat{f}(p) e^{pt}),
+ \]
+ as required.
+\end{proof}
+
+\begin{eg}
+ We know
+ \[
+ \hat{f}(p) = \frac{1}{p - 1}
+ \]
+ has a pole at $p = 1$. So we must use $c > 1$. We have $\hat{f}(p) \to 0$ as $|p| \to \infty$. So Jordan's lemma applies as above. Hence $f(t) = 0$ for $t < 0$, and for $t > 0$, we have
+ \[
+ f(t) = \res_{p = 1} \left(\frac{e^{pt}}{p - 1}\right) = e^t.
+ \]
+ This agrees with what we computed before.
+\end{eg}
+
+\begin{eg}
+ Consider $\hat{f}(p) = p^{-n}$. This has a pole of order $n$ at $p = 0$. So we pick $c > 0$. Then for $t > 0$, we have
+ \[
+ f(t) = \res_{p = 0}\left(\frac{e^{pt}}{p^n}\right) = \lim_{p \to 0} \left(\frac{1}{(n - 1)!} \frac{\d^{n - 1}}{\d p^{n - 1}} e^{pt}\right) = \frac{t^{n - 1}}{(n - 1)!}.
+ \]
+ This again agrees with what we computed before.
+\end{eg}
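We can corroborate this inversion by checking the forward direction numerically: the transform of $t^{n-1}/(n-1)!$ should be $p^{-n}$. This is an editorial sketch (trapezoidal quadrature, truncation $T = 80$ chosen so the polynomial-times-exponential tail is negligible):

```python
import math

def laplace(f, p, T=80.0, n=200_000):
    """Trapezoidal approximation of the Laplace transform of f at p."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for m in range(1, n):
        total += f(m * h) * math.exp(-p * m * h)
    return h * total

p = 2.0
for k in range(1, 5):   # check p^{-n} for n = 1, ..., 4
    approx = laplace(lambda t, k=k: t**(k - 1) / math.factorial(k - 1), p)
    assert abs(approx - p**(-k)) < 1e-5
```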
+
+\begin{eg}
+ In the case where
+ \[
+ \hat{f}(p) = \frac{e^{-p}}{p},
+ \]
+ then we cannot use the standard result about residues, since $\hat{f}(p)$ does not vanish as $|p| \to \infty$. But we can use the original Bromwich inversion formula to get
+ \begin{align*}
+ f(t) &= \frac{1}{2\pi i} \int_\gamma \frac{e^{-p}}{p} e^{pt}\;\d p\\
+ &= \frac{1}{2\pi i} \int_\gamma \frac{1}{p} e^{pt'}\;\d p,
+ \end{align*}
+ where $t' = t - 1$. Now we can close to the right when $t' < 0$, and to the left when $t' > 0$, picking up the residue from the pole at $p = 0$. Then we get
+ \[
+ f(t) =
+ \begin{cases}
+ 0 & t' < 0\\
+ 1 & t' > 0
+ \end{cases}
+ =
+ \begin{cases}
+ 0 & t< 1\\
+ 1 & t > 1
+ \end{cases} = H(t - 1).
+ \]
+ This again agrees with what we've got before.
+\end{eg}
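The forward direction of this pair is also easy to check numerically: $\mathcal{L}(H(t - 1)) = \int_1^\infty e^{-pt}\;\d t = e^{-p}/p$. A minimal editorial sketch, truncating the integral at $t = 60$:

```python
import math

def integral(f, lo, hi, n=200_000):
    """Trapezoidal approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for m in range(1, n):
        total += f(lo + m * h)
    return h * total

p = 2.0
# L(H(t-1))(p): the integrand vanishes below t = 1, so integrate from 1.
approx = integral(lambda t: math.exp(-p * t), 1.0, 60.0)
exact = math.exp(-p) / p
assert abs(approx - exact) < 1e-6
```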
+
+\begin{eg}
+ If $\hat{f}(p)$ has a branch point (at $p = 0$, say), then we must use a Bromwich keyhole contour as shown.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+ \draw [thick,decorate, decoration=zigzag] (0, 0) -- (-3, 0);
+
+ \draw [mred, thick] (1, 2.5) arc(90:173.11:2.5) -- (-0.4, 0.3) arc (143.13:-143.13:0.5);
+ \draw [mred, thick] (1, 2.5) -- (1, -2.5) arc(-90:-173.11:2.5) -- (-0.4, -0.3);
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\subsection{Solution of differential equations using the Laplace transform}
+The Laplace transform converts ODEs to algebraic equations, and PDEs to ODEs. We will illustrate this by an example.
+ % include worked example
+
+\begin{eg}
+ Consider the differential equation
+ \[
    t\ddot{y} - t\dot{y} + y = 2,
+ \]
+ with $y(0) = 2$ and $\dot{y}(0) = -1$. Note that
+ \[
    \mathcal{L}(t \dot{y}) = -\frac{\d}{\d p} \mathcal{L}(\dot{y}) = -\frac{\d}{\d p} (p\hat{y} - y(0)) = -p\hat{y}' - \hat{y}.
+ \]
+ Similarly, we find
+ \[
+ \mathcal{L}(t\ddot{y}) = -p^2 \hat{y}' - 2p \hat{y} + y(0).
+ \]
+ Substituting and rearranging, we obtain
+ \[
+ p \hat{y}' + 2\hat{y} = \frac{2}{p},
+ \]
+ which is a simpler differential equation. We can solve this using an integrating factor to obtain
+ \[
+ \hat{y} = \frac{2}{p} + \frac{A}{p^2},
+ \]
+ where $A$ is an arbitrary constant. Hence we have
+ \[
+ y = 2 + At,
+ \]
  and the initial condition $\dot{y}(0) = -1$ gives $A = -1$, so $y = 2 - t$.
+\end{eg}
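The two transform identities used in this example can be verified numerically for the solution family $y = 2 + At$. This is an editorial sketch (the test point $p = 2$ and the trapezoidal scheme are my choices); it checks $\mathcal{L}(t\dot{y}) = -p\hat{y}' - \hat{y}$ and that $\hat{y} = 2/p + A/p^2$ satisfies the transformed equation:

```python
import math

def laplace(f, p, T=60.0, n=200_000):
    """Trapezoidal approximation of the Laplace transform of f at p."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for m in range(1, n):
        total += f(m * h) * math.exp(-p * m * h)
    return h * total

A, p = -1.0, 2.0
yhat = lambda q: 2 / q + A / q**2
yhat_prime = lambda q: -2 / q**2 - 2 * A / q**3   # exact derivative of yhat

# For y = 2 + A t we have ydot = A, so L(t ydot) should be -p yhat' - yhat:
lhs = laplace(lambda t: t * A, p)
assert abs(lhs - (-p * yhat_prime(p) - yhat(p))) < 1e-5

# yhat satisfies the transformed equation p yhat' + 2 yhat = 2/p:
assert abs(p * yhat_prime(p) + 2 * yhat(p) - 2 / p) < 1e-12
```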
+
+\begin{eg}
+ A system of ODEs can always be written in the form
+ \[
+ \dot{\mathbf{x}} = M \mathbf{x},\quad \mathbf{x}(0) = \mathbf{x}_0,
+ \]
+ where $\mathbf{x} \in \R^n$ and $M$ is an $n \times n$ matrix. Taking the Laplace transform, we obtain
+ \[
+ p \hat{\mathbf{x}} - \mathbf{x}_0 = M \hat{\mathbf{x}}.
+ \]
+ So we get
+ \[
+ \hat{\mathbf{x}} = (pI - M)^{-1} \mathbf{x}_0.
+ \]
+ This has singularities when $p$ is equal to an eigenvalue of $M$.
+\end{eg}
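For a concrete instance (an editorial check, not from the lectures), take the rotation generator $M$, for which $\mathbf{x}(t) = (\cos t, -\sin t)$ solves $\dot{\mathbf{x}} = M\mathbf{x}$ with $\mathbf{x}_0 = (1, 0)$. The componentwise Laplace transform of this solution should agree with $(pI - M)^{-1}\mathbf{x}_0$:

```python
import math

def laplace(f, p, T=60.0, n=200_000):
    """Trapezoidal approximation of the Laplace transform of f at p."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for m in range(1, n):
        total += f(m * h) * math.exp(-p * m * h)
    return h * total

# M generates rotations; x(t) = (cos t, -sin t) solves xdot = M x, x(0) = (1, 0).
M = [[0.0, 1.0], [-1.0, 0.0]]
x0 = [1.0, 0.0]
p = 2.0

# (pI - M)^{-1} x0 for the 2x2 case, by hand:
a, b = p - M[0][0], -M[0][1]
c, d = -M[1][0], p - M[1][1]
det = a * d - b * c
xhat = [(d * x0[0] - b * x0[1]) / det, (-c * x0[0] + a * x0[1]) / det]

# Compare with the componentwise Laplace transform of the known solution:
assert abs(xhat[0] - laplace(math.cos, p)) < 1e-5
assert abs(xhat[1] - laplace(lambda t: -math.sin(t), p)) < 1e-5
```

The poles of $\hat{\mathbf{x}}$ sit at $p = \pm i$, the eigenvalues of $M$, as claimed.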
+
+\subsection{The convolution theorem for Laplace transforms}
Finally, we recall from IB Methods that the Fourier transform turns convolutions into products, and vice versa. We will now prove an analogous result for Laplace transforms.
+
+\begin{defi}[Convolution]
+ The \emph{convolution} of two functions $f$ and $g$ is defined as
+ \[
+ (f * g)(t) = \int_{-\infty}^\infty f(t - t') g(t') \;\d t'.
+ \]
+\end{defi}
+
+When $f$ and $g$ vanish for negative $t$, this simplifies to
+\[
  (f * g) (t) = \int_0^t f(t - t') g(t') \;\d t'.
+\]
+\begin{thm}[Convolution theorem]
+ The Laplace transform of a convolution is given by
+ \[
+ \mathcal{L}(f * g)(p) = \hat{f}(p) \hat{g}(p).
+ \]
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ \mathcal{L}(f * g)(p) &= \int_0^\infty \left(\int_0^t f(t - t') g(t') \;\d t'\right) e^{-pt} \;\d t\\
+ \intertext{We change the order of integration in the $(t, t')$ plane, and adjust the limits accordingly (see picture below)}
+ &= \int_0^\infty \left(\int_{t'}^\infty f(t - t') g(t') e^{-pt} \;\d t\right) \;\d t'\\
+ \intertext{We substitute $u = t - t'$ to get}
+ &= \int_0^\infty \left(\int_0^\infty f(u) g(t') e^{-pu}e^{-pt'} \;\d u\right)\;\d t'\\
+ &= \int_0^\infty \left(\int_0^\infty f(u) e^{-pu} \;\d u\right) g(t') e^{-pt'} \;\d t'\\
+ &= \hat{f}(p) \hat{g}(p).
+ \end{align*}
  Note the limits of integration are correct, since both orders of integration cover the triangular region $0 \leq t' \leq t$ shown below:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (0, 0) -- (2, 2) -- (2, 0) -- cycle;
+ \draw [->] (0, 0) -- (2, 0) node [right] {$t$};
+ \draw [->] (0, 0) -- (0, 2) node [right] {$t'$};
+ \draw (0, 0) -- (2, 2);
+ \end{tikzpicture}
+ \end{center}
+\end{proof}
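As a final editorial sanity check of the convolution theorem: for $f(t) = e^{-t}$ and $g(t) = e^{-2t}$, direct evaluation of the convolution integral gives $(f * g)(t) = e^{-t} - e^{-2t}$, and its transform should equal $\hat{f}\hat{g} = 1/((p+1)(p+2))$. The test point and quadrature scheme below are my choices:

```python
import math

def laplace(f, p, T=60.0, n=200_000):
    """Trapezoidal approximation of the Laplace transform of f at p."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for m in range(1, n):
        total += f(m * h) * math.exp(-p * m * h)
    return h * total

f = lambda t: math.exp(-t)
g = lambda t: math.exp(-2 * t)
# Direct evaluation of the convolution integral gives e^{-t} - e^{-2t}.
conv = lambda t: math.exp(-t) - math.exp(-2 * t)

p = 2.0
lhs = laplace(conv, p)
rhs = laplace(f, p) * laplace(g, p)   # should equal 1/((p+1)(p+2)) = 1/12
assert abs(lhs - rhs) < 1e-5
assert abs(rhs - 1 / ((p + 1) * (p + 2))) < 1e-5
```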
+\end{document}
diff --git a/books/cam/IB_L/electromagnetism.tex b/books/cam/IB_L/electromagnetism.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5e517c4be7982ee8e1ff35cd8a26898aae69ec01
--- /dev/null
+++ b/books/cam/IB_L/electromagnetism.tex
@@ -0,0 +1,2310 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Lent}
+\def\nyear {2015}
+\def\nlecturer {D.\ Tong}
+\def\ncourse {Electromagnetism}
+\def\nofficial {http://www.damtp.cam.ac.uk/user/tong/justem.html}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Electromagnetism and Relativity}\\
+Review of Special Relativity; tensors and index notation. Lorentz force law. Electromagnetic tensor. Lorentz transformations of electric and magnetic fields. Currents and the conservation of charge. Maxwell equations in relativistic and non-relativistic forms.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Electrostatics}\\
+Gauss's law. Application to spherically symmetric and cylindrically symmetric charge distributions. Point, line and surface charges. Electrostatic potentials; general charge distributions, dipoles. Electrostatic energy. Conductors.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Magnetostatics}\\
+Magnetic fields due to steady currents. Ampre's law. Simple examples. Vector potentials and the Biot-Savart law for general current distributions. Magnetic dipoles. Lorentz force on current distributions and force between current-carrying wires. Ohm's law.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Electrodynamics}\\
+Faraday's law of induction for fixed and moving circuits. Electromagnetic energy and Poynting vector. 4-vector potential, gauge transformations. Plane electromagnetic waves in vacuum, polarization.\hspace*{\fill} [5]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Electromagnetism is one of the four fundamental forces of the universe. Apart from gravity, most daily phenomena can be explained by electromagnetism. The very existence of atoms requires the electric force (plus weird quantum effects) to hold electrons to the nucleus, and molecules are formed again due to electric forces between atoms. More macroscopically, electricity powers all our electrical appliances (by definition), while the magnetic force makes things stick on our fridge. Finally, it is the force that gives rise to \emph{light}, and allows us to see things.
+
While it seems to be responsible for so many effects, the modern classical description of electromagnetism is rather simple. It is captured by merely \emph{four} short and concise equations known as Maxwell's equations. In fact, in the majority of this course, we will simply be exploring different solutions to these equations.
+
Historically, electromagnetism has another significance --- it led to Einstein's discovery of special relativity. At the end of the course, we will look at how Maxwell's equations fit nicely into the framework of relativity. Written relativistically, Maxwell's equations look \emph{much} simpler and more elegant. We will discover that magnetism is \emph{entirely} a relativistic effect, and makes sense only in the context of relativity.
+
As we move to the world of special relativity, not only do we get a better understanding of Maxwell's equations; we also get to understand relativity itself better, since we provide a more formal treatment of special relativity, which becomes a powerful tool for developing theories consistent with relativity.
+
+\section{Preliminaries}
+\subsection{Charge and Current}
The strength of the electromagnetic force experienced by a particle is determined by its \emph{(electric) charge}. The SI unit of charge is the \emph{Coulomb}. In this course, we assume that the charge can be any real number. However, at the fundamental level, charge is quantised. All particles carry charge $q = ne$ for some integer $n$, and the basic unit $e \approx \SI{1.6e-19}{\coulomb}$. For example, the electron has $n = -1$, the proton has $n = +1$, and the neutron has $n = 0$.
+
+Often, it will be more useful to talk about \emph{charge density} $\rho(\mathbf{x}, t)$.
+\begin{defi}[Charge density]
+ The \emph{charge density} is the charge per unit volume. The total charge in a region $V$ is
+ \[
+ Q = \int_V \rho(\mathbf{x}, t)\; \d V
+ \]
+\end{defi}
+When we study charged sheets or lines, the charge density would be charge per unit area or length instead, but this would be clear from context.
+
+The motion of charge is described by the \emph{current} density $\mathbf{J}(\mathbf{x}, t)$.
+\begin{defi}[Current and current density]
+For any surface $S$, the integral
+\[
  I = \int_S \mathbf{J}\cdot \d \mathbf{S}
+\]
+counts the charge per unit time passing through $S$. $I$ is the \emph{current}, and $\mathbf{J}$ is the \emph{current density}, ``current per unit area''.
+\end{defi}
Intuitively, if the charge distribution $\rho (\mathbf{x}, t)$ has velocity $\mathbf{v}(\mathbf{x}, t)$, then (neglecting relativistic effects), we have
+\[
+ \mathbf{J} = \rho \mathbf{v}.
+\]
+
+\begin{eg}
+ A wire is a cylinder of cross-sectional area $A$. Suppose there are $n$ electrons per unit volume. Then
+ \begin{align*}
+ \rho &= nq = -ne\\
+ \mathbf{J} &= nq\mathbf{v}\\
+ I &= nqvA.
+ \end{align*}
+\end{eg}
+
+It is well known that charge is conserved --- we cannot create or destroy charge. However, the conservation of charge does not simply say that ``the total charge in the universe does not change''. We want to rule out scenarios where a charge on Earth disappears, and instantaneously appears on the Moon. So what we really want to say is that charge is conserved locally: if it disappears here, it must have moved to somewhere nearby. Alternatively, charge density can only change due to continuous currents. This is captured by the \emph{continuity equation}:
+\begin{law}[Continuity equation]
+ \[
+ \frac{\partial\rho}{\partial t} + \nabla\cdot \mathbf{J} = 0.
+ \]
+\end{law}
+We can write this into a more intuitive integral form via the divergence theorem.
+
+The charge $Q$ in some region $V$ is defined to be
+\[
+ Q = \int_V \rho \;\d V.
+\]
+So
+\[
  \frac{\d Q}{\d t} = \int_V \frac{\partial\rho}{\partial t}\; \d V = -\int_V \nabla\cdot \mathbf{J}\; \d V = -\int_S \mathbf{J}\cdot \d \mathbf{S}.
+\]
Hence the continuity equation states that the rate of change of the total charge in a volume is minus the total current flowing out through its boundary.
+
+In particular, we can take $V = \R^3$, the whole of space. If there are no currents at infinity, then
+\[
+ \frac{\d Q}{\d t} = 0
+\]
+So the continuity equation implies the conservation of charge.
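The continuity equation is easy to check on a concrete charge distribution. The following is an editorial sketch (the one-dimensional rigidly translating Gaussian blob is my own illustrative choice): for $\rho(x, t) = e^{-(x - vt)^2}$ moving with velocity $v$, the current is $J = \rho v$, and $\partial\rho/\partial t + \partial J/\partial x$ should vanish identically.

```python
import math

# A rigidly translating charge blob: rho(x, t) = exp(-(x - v t)^2), J = v rho.
v = 1.5
rho = lambda x, t: math.exp(-(x - v * t) ** 2)
J = lambda x, t: v * rho(x, t)

# Central finite differences for d(rho)/dt and dJ/dx at a sample point:
x, t, h = 0.7, 0.3, 1e-5
drho_dt = (rho(x, t + h) - rho(x, t - h)) / (2 * h)
dJ_dx = (J(x + h, t) - J(x - h, t)) / (2 * h)

# The continuity equation: d(rho)/dt + dJ/dx = 0.
assert abs(drho_dt + dJ_dx) < 1e-6
```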
+
+\subsection{Forces and Fields}
+In modern physics, we believe that all forces are mediated by \emph{fields} (not to be confused with ``fields'' in algebra, or agriculture). A \emph{field} is a dynamical quantity (i.e.\ a function) that assigns a value to every point in space and time. In electromagnetism, we have two fields:
+\begin{itemize}
+ \item the electric field $\mathbf{E}(\mathbf{x}, t)$;
+ \item the magnetic field $\mathbf{B}(\mathbf{x}, t)$.
+\end{itemize}
+Each of these fields is a vector, i.e.\ it assigns a \emph{vector} to every point in space and time, instead of a single number.
+
+The fields interact with particles in two ways. On the one hand, fields cause particles to move. On the other hand, particles create fields. The first aspect is governed by the Lorentz force law:
+\begin{law}[Lorentz force law]
+\[
+ \mathbf{F} = q(\mathbf{E} + \mathbf{v}\times \mathbf{B})
+\]
+\end{law}
+\noindent while the second aspect is governed by \emph{Maxwell's equations}.
+\begin{law}[Maxwell's Equations]
+ \begin{align*}
+ \nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}\\
+ \nabla \cdot \mathbf{B} &= 0\\
+ \nabla \times \mathbf{E} +\frac{\partial \mathbf{B}}{\partial t} &= 0\\
+ \nabla \times \mathbf{B} - \mu_0\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} &= \mu_0 \mathbf{J},
+ \end{align*}
+ where we have two constants of nature:
+ \begin{itemize}
+ \item $\varepsilon_0 = \SI{8.85e-12}{\per\metre\cubed\per\kilogram\s\squared\coulomb\squared}$ is the electric constant;
    \item $\mu_0 = \SI{4\pi e-7}{\metre\kilogram\per\coulomb\squared}$ is the magnetic constant.
+ \end{itemize}
+ Some prefer to call these constants the ``permittivity of free space'' and ``permeability of free space'' instead. But why bother with these complicated and easily-confused names when we can just call them ``electric constant'' and ``magnetic constant''?
+\end{law}
We've just completed the description of all of (classical, non-relativistic) electromagnetism. The remainder of the course will be spent studying the consequences of and solutions to these equations.
+
+\section{Electrostatics}
+Electrostatics is the study of stationary charges in the absence of magnetic fields. We take $\rho = \rho(\mathbf{x})$, $\mathbf{J} = 0$ and $\mathbf{B} = 0$. We then look for time-independent solutions. In this case, the only relevant equations are
+\begin{align*}
+ \nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}\\
+ \nabla\times \mathbf{E} &= 0,
+\end{align*}
+and the other two equations just give $0 = 0$.
+
+In this chapter, our goal is to find $\mathbf{E}$ for any $\rho$. Of course, we first start with simple, symmetric cases, and then tackle the more general cases later.
+
+\subsection{Gauss' Law}
+Here we transform the first Maxwell's equation into an integral form, known as \emph{Gauss' Law}.
+
+Consider a region $V\subseteq \R^3$ with boundary $S = \partial V$. Then integrating the first equation over the volume $V$ gives
+\[
+ \int_V \nabla\cdot \mathbf{E}\; \d V = \frac{1}{\varepsilon_0} \int_V \rho \;\d V.
+\]
+The divergence theorem gives $\int_V \nabla\cdot \mathbf{E}\;\d V = \int_S \mathbf{E}\cdot \d \mathbf{S}$, and by definition, $Q = \int_V \rho \;\d V$. So we end up with
+\begin{law}[Gauss' law]
+ \[
+ \int_S \mathbf{E}\cdot \d \mathbf{S} =\frac{Q}{\varepsilon_0},
+ \]
+ where $Q$ is the total charge inside $V$.
+\end{law}
+\begin{defi}[Flux through surface]
+ The \emph{flux} of $\mathbf{E}$ through the surface $S$ is defined to be
+ \[
+ \int_S \mathbf{E} \cdot\d \mathbf{S}.
+ \]
+\end{defi}
+
Gauss' law tells us that the flux depends only on the total charge contained inside the surface. In particular, any external charge does not contribute to the total flux. While external charges \emph{do} create fields that pass through the surface, the fields have to enter the volume through one side of the surface and leave through the other. Gauss' law tells us that these two contributions cancel each other out exactly, and the total flux caused by external charges is zero.
+
+From this, we can prove Coulomb's law:
+\begin{eg}[Coulomb's law]
+ We consider a spherically symmetric charge density $\rho (r)$ with $\rho (r) = 0$ for $r > R$, i.e.\ all the charge is contained in a ball of radius $R$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=1];
+ \draw [->] (0, 0) -- (.71, .71) node [anchor=south west] {$R$};
+ \draw [->] (0, 1.1) -- (0, 2);
+ \draw [->] (0, -1.1) -- (0, -2);
+ \draw [->] (1.1, 0) -- (2, 0) node [right] {$E$};
+ \draw [->] (-1.1, 0) -- (-2, 0);
+ \draw [dashed] circle [radius=1.5];
+ \node at (-1.06, 1.06) [anchor=south east] {$S$};
+ \end{tikzpicture}
+ \end{center}
  By symmetry, the field has the same magnitude in all directions and points radially outward. So
+ \[
+ \mathbf{E} = E(r) \hat{\mathbf{r}}.
+ \]
+ This immediately ensures that $\nabla \times \mathbf{E} = 0$.
+
+ Put $S$ to be a sphere of radius $r > R$. Then the total flux is
+ \begin{align*}
+ \int_S \mathbf{E}\cdot \d \mathbf{S} &= \int_S E(r) \hat{\mathbf{r}}\cdot \d \mathbf{S}\\
+ &= E(r) \int_S \hat{\mathbf{r}}\cdot \d \mathbf{S}\\
+ &= E(r)\cdot 4\pi r^2
+ \end{align*}
+ By Gauss' law, we know that this is equal to $\frac{Q}{\varepsilon_0}$. Therefore
+ \[
+ E(r) = \frac{Q}{4\pi \varepsilon_0 r^2}
+ \]
+ and
+ \[
+ \mathbf{E}(r) = \frac{Q}{4\pi\varepsilon_0 r^2}\hat{\mathbf{r}}.
+ \]
+ By the Lorentz force law, the force experienced by a second charge is
+ \[
+ \mathbf{F}(\mathbf{r}) = \frac{Qq}{4\pi\varepsilon_0 r^2}\hat{\mathbf{r}},
+ \]
+ which is Coulomb's law.
+
+ Strictly speaking, this only holds when the charges are not moving. However, for most practical purposes, we can still use this because the corrections required when they are moving are tiny.
+\end{eg}
+
+\begin{eg}
+ Consider a uniform sphere with
+ \[
+ \rho (r) = \begin{cases}
+ \rho & r < R\\
+ 0 & r > R
+ \end{cases}.
+ \]
+ Outside, we know that
+ \[
+ \mathbf{E}(r) = \frac{Q}{4\pi\varepsilon_0 r^2}\hat{\mathbf{r}}
+ \]
+ Now suppose we are inside the sphere.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=1];
+ \draw [->] (0, 0) -- (.71, .71) node [anchor=south west] {$R$};
+ \draw [->] (0, 1.1) -- (0, 2);
+ \draw [->] (0, -1.1) -- (0, -2);
+ \draw [->] (1.1, 0) -- (2, 0) node [right] {$E$};
+ \draw [->] (-1.1, 0) -- (-2, 0);
+ \draw [dashed] circle [radius=0.5];
+ \draw [->] (0, 0) -- (-.353, .353) node [anchor=south east] {$r$};
+ \end{tikzpicture}
+ \end{center}
+ Then
+ \[
+ \int_S \mathbf{E}\cdot \d\mathbf{S} = E(r) 4\pi r^2 = \frac{Q}{\varepsilon_0}\left(\frac{r^3}{R^3}\right)
+ \]
+ So
+ \[
+ \mathbf{E}(r) = \frac{Qr}{4\pi\varepsilon_0 R^3}\hat{\mathbf{r}},
+ \]
+ and the field increases with radius.
+\end{eg}
+
+\begin{eg}[Line charge]
+ Consider an infinite line with uniform charge density \emph{per unit length} $\eta$.
+
+ We use cylindrical polar coordinates:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, -2) -- (0, 2) node [above] {$z$};
+ \draw [->] (0.1, 0) -- (1, 0) node [right] {$r = \sqrt{x^2 + y^2}$};
+
+ \draw [->] (0.1, 1) -- (1, 1) node [right] {$E$};
+ \draw [->] (-0.1, 1) -- (-1, 1);
+ \draw [->] (-0.1, 0) -- (-1, 0);
+ \draw [->] (0.1, -1) -- (1, -1);
+ \draw [->] (-0.1, -1) -- (-1, -1);
+
+ \draw [gray, fill=gray!50!white, fill opacity=0.4] (-0.5, 1.5) arc (180:360:0.5 and 0.2) -- (0.5, -1.5) arc (360:180:0.5 and 0.2) -- cycle;
+ \draw [gray, fill=gray!50!white, fill opacity=0.2] (0, 1.5) circle [x radius=0.5, y radius=0.2];
+ \draw [gray, dashed] (0.5, -1.5) arc (0:180:0.5 and 0.2);
+ \end{tikzpicture}
+ \end{center}
+ By symmetry, the field is radial, i.e.
+ \[
+ \mathbf{E}(r) = E(r) \hat{\mathbf{r}}.
+ \]
+ Pick $S$ to be a cylinder of length $L$ and radius $r$. We know that the end caps do not contribute to the flux since the field lines are perpendicular to the normal. Also, the curved surface has area $2\pi rL$. Then
+ \[
+ \int_S\mathbf{E}\cdot \d\mathbf{S} = E(r)2\pi rL = \frac{\eta L}{\varepsilon_0}.
+ \]
+ So
+ \[
+ \mathbf{E}(r) = \frac{\eta}{2\pi \varepsilon_0 r} \hat{\mathbf{r}}.
+ \]
+ Note that the field varies as $1/r$, not $1/r^2$. Intuitively, this is because we have one more dimension of ``stuff'' compared to the point charge, so the field does not drop as fast.
+\end{eg}
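The $1/r$ law can also be recovered without Gauss' law, by summing Coulomb contributions directly along the wire: the radial component of the field is $\frac{\eta}{4\pi\varepsilon_0}\int_{-\infty}^\infty \frac{r\;\d z}{(r^2 + z^2)^{3/2}}$, and the integral evaluates to $2/r$, reproducing $E = \eta/(2\pi\varepsilon_0 r)$. An editorial numerical sketch (truncation at $|z| = 200$ is my choice; the tail decays like $r/z^3$):

```python
# Direct integration of Coulomb contributions for an infinite line charge:
# I(r) = integral of r dz / (r^2 + z^2)^{3/2} over the whole line,
# which should equal 2/r, reproducing E = eta / (2 pi eps0 r).
def I(r, L=200.0, n=400_000):
    h = 2 * L / n
    f = lambda z: r / (r**2 + z**2) ** 1.5
    total = 0.5 * (f(-L) + f(L))
    for m in range(1, n):
        total += f(-L + m * h)
    return h * total

for r in (0.5, 2.0):
    assert abs(I(r) - 2 / r) < 1e-3
```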
+
+\begin{eg}[Surface charge]
+ Consider an infinite plane $z = 0$, with uniform charge per unit area $\sigma$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-2.5, -.5) -- (1.5, -.5) -- (2.5, .5) -- (-1.5, .5) -- cycle;
+ \draw [->] (0, 0) -- (0, 1);
+ \draw [->] (-1, 0) -- (-1, 1);
+ \draw [->] (1, 0) -- (1, 1);
+
+ \draw [->] (0, -.6) -- (0, -1.2);
+ \draw [->] (1, -.6) -- (1, -1.2);
+ \draw [->] (-1, -.6) -- (-1, -1.2);
+
+ \draw [gray, fill=gray!50!white, fill opacity=0.4] (-0.5, 1.5) arc (180:360:0.5 and 0.2) -- (0.5, -1.5) arc (360:180:0.5 and 0.2) -- cycle;
+ \draw [gray, fill=gray!50!white, fill opacity=0.2] (0, 1.5) circle [x radius=0.5, y radius=0.2];
+ \draw [gray, dashed] (0.5, -1.5) arc (0:180:0.5 and 0.2);
+
+ \draw [dashed] circle [x radius=0.5, y radius=0.2];
+ \end{tikzpicture}
+ \end{center}
  By symmetry, the field points vertically, and the field at the bottom is the opposite of that on top. So we must have
+ \[
+ \mathbf{E} = E(z)\hat{\mathbf{z}}
+ \]
+ with
+ \[
+ E(z) = -E(-z).
+ \]
+ Consider a vertical cylinder of height $2z$ and cross-sectional area $A$. Now only the end caps contribute. So
+ \[
+ \int_S \mathbf{E} \cdot \d \mathbf{S} = E(z) A - E(-z) A =\frac{\sigma A}{\varepsilon _0}.
+ \]
+ So
+ \[
+ E(z) = \frac{\sigma }{2\varepsilon_0}
+ \]
+ and is constant.
+
+ Note that the electric field is discontinuous across the surface. We have
+ \[
+ E(z\to 0+) - E(z\to 0-) = \frac{\sigma}{\varepsilon_0}.
+ \]
+This is a general result, valid for any surface and any $\sigma$. We can prove it by taking a small cylinder across the surface and shrinking its height to zero. Then we find that
+ \[
+ \hat{\mathbf{n}}\cdot \mathbf{E}_+ - \hat{\mathbf{n}}\cdot \mathbf{E}_- = \frac{\sigma}{\varepsilon_0}.
+ \]
+ However, the components of $\mathbf{E}$ tangential to the surface are continuous.
+\end{eg}
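As a numerical sanity check of the infinite-plane result, we can compute the on-axis field of a large uniformly charged disc and watch it approach $\sigma/2\varepsilon_0$. The following Python sketch is illustrative only (units chosen so that $\varepsilon_0 = 1$):

```python
import math

# Illustrative check (units with eps0 = 1): the on-axis field of a uniformly
# charged disc of radius R tends to sigma/(2 eps0) as R grows, reproducing
# the infinite-plane result derived above.
eps0, sigma = 1.0, 1.0

def E_z(z, R, n=100000):
    # E_z = (sigma z / (2 eps0)) * int_0^R rho / (rho^2 + z^2)^(3/2) drho,
    # evaluated with a midpoint Riemann sum.
    h = R / n
    total = sum((i + 0.5) * h * h / (((i + 0.5) * h) ** 2 + z * z) ** 1.5
                for i in range(n))
    return sigma * z / (2 * eps0) * total

plane_limit = sigma / (2 * eps0)
print(E_z(0.5, 500.0))  # close to plane_limit
print(plane_limit)      # 0.5
```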
+
+\subsection{Electrostatic potential}
+In the most general case, we will have to solve both $\nabla \cdot \mathbf{E} = \rho/\varepsilon_0$ and $\nabla \times \mathbf{E} = \mathbf{0}$. However, we already know that the general form of the solution to the second equation is $\mathbf{E} = -\nabla\phi$ for some scalar field $\phi$.
+\begin{defi}[Electrostatic potential]
+ If $\mathbf{E} = -\nabla \phi$, then $\phi$ is the \emph{electrostatic potential}.
+\end{defi}
+Substituting this into the first equation, we obtain
+\[
+ \nabla^2\phi = -\frac{\rho}{\varepsilon_0}.
+\]
+This is the \emph{Poisson equation}, which we have studied in other courses. If we are in the middle of nowhere and $\rho = 0$, then we get the \emph{Laplace equation}.
+
+There are a few interesting things to note about our result:
+\begin{itemize}
+ \item $\phi$ is only defined up to a constant. We usually fix this by insisting $\phi(\mathbf{r}) \to 0$ as $r\to \infty$. This statement seems trivial, but this property of $\phi$ is actually very important and gives rise to a lot of interesting properties. However, we will not have the opportunity to explore this in this course.
+ \item The Poisson equation is linear. So if we have two charges $\rho_1$ and $\rho_2$, then the potential is simply $\phi_1 + \phi_2$ and the field is $\mathbf{E}_1 + \mathbf{E}_2$. This is the \emph{principle of superposition}. Among the four fundamental forces of nature, electromagnetism is the only force with this property.
+\end{itemize}
+\subsubsection{Point charge}
+Consider a point particle with charge $Q$ at the origin. Then
+\[
+ \rho (\mathbf{r}) = Q\delta^3(\mathbf{r}).
+\]
+Here $\delta^3$ is the generalization of the usual delta function for (3D) vectors.
+
+The equation we have to solve is
+\[
+ \nabla^2\phi = -\frac{Q}{\varepsilon_0}\delta^3(\mathbf{r}).
+\]
+Away from the origin $\mathbf{r} = \mathbf{0}$, $\delta^3(\mathbf{r}) = 0$, and we have the Laplace equation. From the IA Vector Calculus course, the general solution is
+\[
+ \phi = \frac{\alpha}{r}\quad \text{for some constant }\alpha.
+\]
+The constant $\alpha$ is determined by the delta function. We integrate the equation over a sphere of radius $r$ centered at the origin. The left hand side gives
+\[
+ \int_V \nabla^2\phi\;\d V = \int_S \nabla\phi\cdot\d \mathbf{S} = \int_S -\frac{\alpha}{r^2} \hat{\mathbf{r}}\cdot \d \mathbf{S} = -4\pi\alpha.
+\]
+The right hand side gives
+\[
+ -\frac{Q}{\varepsilon_0}\int_V\delta^3(r)\; \d V = -\frac{Q}{\varepsilon_0}.
+\]
+So
+\[
+ \alpha = \frac{Q}{4\pi \varepsilon_0}
+\]
+and
+\[
+ \mathbf{E} = -\nabla \phi = \frac{Q}{4\pi\varepsilon_0 r^2}\hat{\mathbf{r}}.
+\]
+This is just what we get from Coulomb's law.
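We can confirm both steps symbolically: $\phi = Q/4\pi\varepsilon_0 r$ is harmonic away from the origin, and $-\nabla \phi$ reproduces Coulomb's law. A minimal sympy sketch (not part of the notes):

```python
import sympy as sp

# Symbolic check (sympy): phi = Q/(4 pi eps0 r) satisfies the Laplace
# equation away from the origin, and -grad phi gives Coulomb's law.
x, y, z, Q, eps0 = sp.symbols('x y z Q epsilon_0', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
phi = Q / (4 * sp.pi * eps0 * r)

laplacian = sum(sp.diff(phi, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian))  # 0

E = [-sp.diff(phi, v) for v in (x, y, z)]
# Compare with Q r_hat / (4 pi eps0 r^2), i.e. Q (x, y, z) / (4 pi eps0 r^3)
expected = [Q * v / (4 * sp.pi * eps0 * r**3) for v in (x, y, z)]
print(all(sp.simplify(a - b) == 0 for a, b in zip(E, expected)))  # True
```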
+\subsubsection{Dipole}
+\begin{defi}[Dipole]
+ A \emph{dipole} consists of two point charges, $+Q$ and $-Q$, at $\mathbf{r} = \mathbf{0}$ and $\mathbf{r} = -\mathbf{d}$ respectively.
+\end{defi}
+
+To find the potential of a dipole, we simply apply the principle of superposition and obtain
+\[
+ \phi = \frac{1}{4\pi\varepsilon_0}\left(\frac{Q}{r} - \frac{Q}{|\mathbf{r} + \mathbf{d}|}\right).
+\]
+This is not a very helpful result, but we can consider the case when we are far, far away, i.e.\ $r \gg d$. To do so, we Taylor expand the second term. For a general $f(\mathbf{r})$, we have
+\[
+ f(\mathbf{r} + \mathbf{d}) = f(\mathbf{r}) + \mathbf{d}\cdot \nabla f(\mathbf{r}) + \frac{1}{2}(\mathbf{d}\cdot \nabla)^2f(\mathbf{r}) + \cdots.
+\]
+Applying to the term we are interested in gives
+\begin{align*}
+ \frac{1}{|\mathbf{r} + \mathbf{d}|} &= \frac{1}{r} + \mathbf{d}\cdot \nabla\left(\frac{1}{r}\right) + \frac{1}{2}(\mathbf{d}\cdot \nabla)^2\left(\frac{1}{r}\right) + \cdots\\
+ &= \frac{1}{r} - \frac{\mathbf{d}\cdot \mathbf{r}}{r^3} - \frac{1}{2}\left(\frac{\mathbf{d}\cdot \mathbf{d}}{r^3} - \frac{3(\mathbf{d}\cdot \mathbf{r})^2}{r^5}\right) + \cdots.
+\end{align*}
+Plugging this into our equation gives
+\[
+ \phi = \frac{Q}{4\pi\varepsilon_0}\left(\frac{1}{r} - \frac{1}{r} + \frac{\mathbf{d}\cdot \mathbf{r}}{r^3} + \cdots\right) \sim \frac{Q}{4\pi\varepsilon_0} \frac{\mathbf{d}\cdot \mathbf{r}}{r^3}.
+\]
+\begin{defi}[Electric dipole moment]
+ We define the \emph{electric dipole moment} to be
+ \[
+ \mathbf{p} = Q\mathbf{d}.
+ \]
+ By convention, it points from the negative charge to the positive charge.
+\end{defi}
+Then
+\[
+ \phi = \frac{\mathbf{p}\cdot \hat{\mathbf{r}}}{4\pi\varepsilon_0 r^2},
+\]
+and
+\[
+ \mathbf{E} = -\nabla\phi = \frac{1}{4\pi\varepsilon_0}\left(\frac{3(\mathbf{p}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{p}}{r^3}\right).
+\]
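We can verify this gradient computation symbolically. The following sympy sketch (an illustration, taking $\mathbf{p}$ along $\hat{\mathbf{z}}$ without loss of generality) checks that $-\nabla\phi$ gives the claimed dipole field:

```python
import sympy as sp

# Symbolic check (sympy): with p = p z_hat, E = -grad(p.r/(4 pi eps0 r^3))
# equals (3 (p.r_hat) r_hat - p) / (4 pi eps0 r^3), as claimed above.
x, y, z, p, eps0 = sp.symbols('x y z p epsilon_0', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
phi = p * z / (4 * sp.pi * eps0 * r**3)   # p.r / (4 pi eps0 r^3)

E = [-sp.diff(phi, v) for v in (x, y, z)]
p_vec = [0, 0, p]
p_dot_rhat = p * z / r
expected = [(3 * p_dot_rhat * v / r - pv) / (4 * sp.pi * eps0 * r**3)
            for v, pv in zip((x, y, z), p_vec)]
print(all(sp.simplify(a - b) == 0 for a, b in zip(E, expected)))  # True
```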
+
+\subsubsection{General charge distribution}
+To find $\phi$ for a general charge distribution $\rho$, we use the Green's function for the Laplacian. The Green's function is defined to be the solution to
+\[
+ \nabla^2 G(\mathbf{r}, \mathbf{r}') = \delta^3(\mathbf{r} - \mathbf{r}').
+\]
+In the section about point charges, we have shown that
+\[
+ G(\mathbf{r}, \mathbf{r}') = -\frac{1}{4\pi}\frac{1}{|\mathbf{r} - \mathbf{r}'|}.
+\]
+We assume all charge is contained in some compact region $V$. Then
+\begin{align*}
+ \phi(\mathbf{r}) &= -\frac{1}{\varepsilon_0}\int_V \rho(\mathbf{r}') G(\mathbf{r}, \mathbf{r}')\;\d^3 \mathbf{r}'\\
+ &= \frac{1}{4\pi\varepsilon_0}\int_V \frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\;\d^3 \mathbf{r}'.
+\end{align*}
+Then
+\begin{align*}
+ \mathbf{E}(\mathbf{r}) &= -\nabla \phi(\mathbf{r})\\
+ &=- \frac{1}{4\pi\varepsilon_0} \int_V \rho(\mathbf{r}')\nabla \left(\frac{1}{|\mathbf{r} - \mathbf{r}'|}\right)\;\d^3 \mathbf{r}'\\
+ &= \frac{1}{4\pi\varepsilon_0}\int_V \rho(\mathbf{r}')\frac{(\mathbf{r} - \mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|^3}\;\d^3 \mathbf{r}'.
+\end{align*}
+So if we plug in a very complicated $\rho$, we get a very complicated $\mathbf{E}$!
+
+However, we can ask what $\phi$ and $\mathbf{E}$ look like very far from $V$, i.e.\ $|\mathbf{r}| \gg |\mathbf{r}'|$.
+
+We again use the Taylor expansion.
+\begin{align*}
+ \frac{1}{|\mathbf{r} - \mathbf{r}'|} &= \frac{1}{r} - \mathbf{r}'\cdot \nabla\left(\frac{1}{r}\right) + \cdots\\
+ &= \frac{1}{r} + \frac{\mathbf{r}\cdot \mathbf{r}'}{r^3} + \cdots.
+\end{align*}
+Then we get
+\begin{align*}
+ \phi(\mathbf{r}) &= \frac{1}{4\pi\varepsilon_0} \int_V \rho(\mathbf{r}')\left(\frac{1}{r} + \frac{\mathbf{r}\cdot \mathbf{r}'}{r^3} + \cdots\right)\;\d^3 \mathbf{r}'\\
+ &= \frac{1}{4\pi\varepsilon_0}\left(\frac{Q}{r} + \frac{\mathbf{p}\cdot \hat{\mathbf{r}}}{r^2} + \cdots\right),
+\end{align*}
+where
+\begin{align*}
+ Q &= \int_V \rho(\mathbf{r}')\;\d V' \\
+ \mathbf{p} &= \int_V \mathbf{r}'\rho(\mathbf{r}')\; \d V'\\
+ \hat{\mathbf{r}} &= \frac{\mathbf{r}}{|\mathbf{r}|}.
+\end{align*}
+So if we have a huge lump of charge, we can consider it to be a point charge $Q$, plus some dipole correction terms.
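We can see this multipole picture numerically: for a random cloud of charges, the exact far-field potential is already captured by the monopole and dipole terms. An illustrative Python sketch (units with $4\pi\varepsilon_0 = 1$):

```python
import math, random

# Numerical illustration (units with 4 pi eps0 = 1): far from a localized
# charge distribution, the exact potential matches the monopole plus dipole
# terms of the expansion above, up to a quadrupole-sized error.
random.seed(0)
charges = [(random.uniform(-1, 1),
            [random.uniform(-0.5, 0.5) for _ in range(3)])
           for _ in range(20)]

Q = sum(q for q, _ in charges)                              # total charge
p = [sum(q * rp[i] for q, rp in charges) for i in range(3)]  # dipole moment

r = [30.0, 10.0, 20.0]                      # far away compared to the cloud
rnorm = math.sqrt(sum(c * c for c in r))

exact = sum(q / math.sqrt(sum((a - b) ** 2 for a, b in zip(r, rp)))
            for q, rp in charges)
approx = Q / rnorm + sum(a * b for a, b in zip(p, r)) / rnorm**3

print(abs(exact - approx))  # small compared with the retained terms
```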
+\subsubsection{Field lines and equipotentials}
+Vectors are usually visualized using arrows, where longer arrows represent larger vectors. However, this is not a practical approach when it comes to visualizing fields, since a field assigns a vector to \emph{every single point in space}, and we don't want to draw infinitely many arrows. Instead, we use field lines.
+
+\begin{defi}[Field line]
+ A \emph{field line} is a continuous line tangent to the electric field $\mathbf{E}$. The density of lines is proportional to $|\mathbf{E}|$.
+\end{defi}
+They begin and end only at charges (and infinity), and never cross.
+
+We can also draw the \emph{equipotentials}.
+\begin{defi}[Equipotentials]
+ \emph{Equipotentials} are surfaces of constant $\phi$. Because $\mathbf{E} = -\nabla \phi$, they are always perpendicular to field lines.
+\end{defi}
+\begin{eg}\leavevmode
+ The field lines for a positive and a negative charge are, respectively,
+ \begin{center}
+ \begin{tikzpicture}
+ \node [draw, circle, inner sep = 0, minimum size=13] {+};
+ \draw [dashed] circle [radius=1.4];
+ \draw [dashed] circle [radius=.7];
+ \draw [->] (0, 0.3) -- (0, 1.5);
+ \draw [->] (0.3, 0) -- (1.5, 0);
+ \draw [->] (0, -0.3) -- (0, -1.5);
+ \draw [->] (-0.3, 0) -- (-1.5, 0);
+ \end{tikzpicture}
+ \begin{tikzpicture}
+ \node [draw, circle, inner sep = 0, minimum size=13] {-};
+ \draw [dashed] circle [radius=1.4];
+ \draw [dashed] circle [radius=.7];
+ \draw [->] (0, 1.5) -- (0, 0.3);
+ \draw [->] (1.5, 0) -- (0.3, 0);
+ \draw [->] (0, -1.5) -- (0, -0.3);
+ \draw [->] (-1.5, 0) -- (-0.3, 0);
+ \end{tikzpicture}
+ \end{center}
+ We can also draw field lines for dipoles:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [draw, circle, inner sep = 0, minimum size=13] (+) {+};
+ \node [draw, circle, inner sep = 0, minimum size=13] (-) at (2, 0) {-};
+ \draw [->-=0.6] (+) -- (-);
+ \draw [->-=0.6] (+) to [bend left=30] (-);
+ \draw [->-=0.6] (+) to [bend right=30] (-);
+ \draw [->-=0.6] (+) to [bend left=60] (-);
+ \draw [->-=0.6] (+) to [bend right=60] (-);
+ \draw [->-=0.7] (+) to [bend right=30] +(-1, 0.5);
+ \draw [->-=0.7] (+) to [bend left=30] +(-1, -0.5);
+ \draw [->-=0.7] (+) to +(-1, 0);
+ \draw [-<-=0.4] (-) to [bend left=30] +(1, 0.5);
+ \draw [-<-=0.4] (-) to [bend right=30] +(1, -0.5);
+ \draw [-<-=0.4] (-) to +(1, 0);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\subsection{Electrostatic energy}
+We want to calculate how much energy is stored in the electric field. Recall from IA Dynamics and Relativity that a particle of charge $q$ in a field $\mathbf{E} = -\nabla \phi$ has potential energy $U(\mathbf{r}) = q\phi(\mathbf{r})$.
+
+$U(\mathbf{r})$ can be thought of as the work done in bringing the particle from infinity, as illustrated below:
+\begin{align*}
+ \text{work done} &= -\int_\infty^\mathbf{r} \mathbf{F}\cdot \d \mathbf{r}\\
+ &= -q\int_\infty^\mathbf{r}\mathbf{E}\cdot \d \mathbf{r} \\
+ &= q\int_\infty^\mathbf{r} \nabla \phi\cdot \d \mathbf{r}\\
+ &= q[\phi(\mathbf{r}) - \phi(\infty)]\\
+ &= U(\mathbf{r})
+\end{align*}
+where we set $\phi(\infty) = 0$.
+
+Now consider $N$ charges $q_i$ at positions $\mathbf{r}_i$. The total potential energy stored is the work done to assemble these particles. Let's put them in one by one.
+\begin{enumerate}
+ \item The first charge is free. The work done is $W_1 = 0$.
+ \item To place the second charge at position $\mathbf{r}_2$ takes work. The work is
+ \[
+ W_2 = \frac{q_1q_2}{4\pi\varepsilon_0}\frac{1}{|\mathbf{r}_1 - \mathbf{r}_2|}.
+ \]
+ \item To place the third charge at position $\mathbf{r}_3$, we do
+ \[
+ W_3 = \frac{q_3}{4\pi\varepsilon_0}\left(\frac{q_1}{|\mathbf{r}_1 - \mathbf{r}_3|} + \frac{q_2}{|\mathbf{r}_2 - \mathbf{r}_3|}\right)
+ \]
+ \item etc.
+\end{enumerate}
+The total work done is
+\[
+ U = \sum_{i = 1}^N W_i = \frac{1}{4\pi\varepsilon_0} \sum_{i < j} \frac{q_iq_j}{|\mathbf{r}_i - \mathbf{r}_j|}.
+\]
+Equivalently,
+\[
+ U = \frac{1}{4\pi\varepsilon_0} \frac{1}{2} \sum_{i \not= j} \frac{q_iq_j}{|\mathbf{r}_i - \mathbf{r}_j|}.
+\]
+We can write this in an alternative form. The potential at point $\mathbf{r}_i$ due to all other particles is
+\[
+ \phi(\mathbf{r}_i) = \frac{1}{4\pi\varepsilon_0}\sum_{j\not= i}\frac{q_j}{|\mathbf{r}_i - \mathbf{r}_j|}.
+\]
+So we can write
+\[
+ U = \frac{1}{2}\sum_{i = 1}^N q_i \phi(\mathbf{r}_i).
+\]
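The pairwise sum and the $\frac{1}{2}\sum q_i \phi(\mathbf{r}_i)$ form can be checked against each other numerically. A small Python sketch (units with $4\pi\varepsilon_0 = 1$):

```python
import math, random

# Quick consistency check (units with 4 pi eps0 = 1): summing the assembly
# work pair by pair agrees with U = (1/2) sum_i q_i phi(r_i).
random.seed(1)
pts = [(random.uniform(-1, 1), [random.uniform(-1, 1) for _ in range(3)])
       for _ in range(6)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

U_pairs = sum(qi * qj / dist(ri, rj)
              for i, (qi, ri) in enumerate(pts)
              for j, (qj, rj) in enumerate(pts) if i < j)

phi = [sum(qj / dist(pts[i][1], rj)
           for j, (qj, rj) in enumerate(pts) if j != i)
       for i in range(len(pts))]
U_half = 0.5 * sum(pts[i][0] * phi[i] for i in range(len(pts)))

print(abs(U_pairs - U_half))  # essentially zero (rounding only)
```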
+There is an obvious generalization to continuous charge distributions:
+\[
+ U = \frac{1}{2} \int \rho (\mathbf{r}) \phi(\mathbf{r}) \;\d^3 \mathbf{r}.
+\]
+Hence we obtain
+\begin{align*}
+ U &= \frac{\varepsilon_0}{2}\int (\nabla\cdot \mathbf{E}) \phi \;\d^3 \mathbf{r}\\
+ &= \frac{\varepsilon_0}{2}\int [\nabla\cdot (\mathbf{E}\phi) - \mathbf{E}\cdot \nabla \phi]\;\d^3 \mathbf{r}.
+\end{align*}
+The first term is a total derivative and vanishes. In the second term, we use the definition $\mathbf{E} = -\nabla \phi$ and obtain
+\begin{prop}
+ \[
+ U = \frac{\varepsilon_0}{2}\int \mathbf{E}\cdot \mathbf{E} \;\d^3 \mathbf{r}.
+ \]
+\end{prop}
+This derivation of potential energy is not satisfactory. The final result shows that the potential energy depends only on the field itself, and not the charges. However, the result was derived using charges and electric potentials --- there should be a way to derive this result directly with the field, and indeed there is. However, this derivation belongs to a different course.
+
+Also, we have waved our hands a lot when generalizing to continuous distributions, which was not entirely correct. If we have a single point particle, the original discrete formula implies that there is no potential energy. However, since the associated field is non-zero, our continuous formula gives a non-zero potential.
+
+This does \emph{not} mean that the final result is wrong. It is correct, but it describes a more sophisticated (and preferred) conception of ``potential energy''. Again, we shall not go into the details in this course.
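For a case where both formulas make sense, a uniformly charged spherical shell, the charge formula and the field-energy formula do agree. An illustrative numerical check (units with $\varepsilon_0 = 1$):

```python
import math

# Consistency check for a uniformly charged spherical shell (eps0 = 1):
# (1/2) Q phi(R) and (eps0/2) int E^2 dV both give Q^2/(8 pi eps0 R).
eps0, Q, R = 1.0, 1.0, 2.0

U_charge = 0.5 * Q * Q / (4 * math.pi * eps0 * R)  # phi is constant on the shell

# Field energy: E = Q/(4 pi eps0 r^2) outside the shell, 0 inside.
n, r_max = 400000, 2000.0
h = (r_max - R) / n
U_field = 0.0
for i in range(n):
    rr = R + (i + 0.5) * h
    E = Q / (4 * math.pi * eps0 * rr**2)
    U_field += 0.5 * eps0 * E**2 * 4 * math.pi * rr**2 * h

print(U_charge)
print(U_field)  # agrees up to truncation of the integral at r_max
```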
+
+\subsection{Conductors}
+\begin{defi}[Conductor]
+ A \emph{conductor} is a region of space which contains lots of charges that are free to move.
+\end{defi}
+
+In electrostatic situations, we must have $\mathbf{E} = 0$ inside a conductor. Otherwise, the charges inside the conductor would keep moving until equilibrium is reached. This observation essentially summarizes the whole section: if we apply an electric field to a conductor, the charges inside move until the external field is canceled out.
+
+From this, we can derive a lot of results. Since $\mathbf{E} = 0$, $\phi$ is constant inside the conductor. Since $\nabla \cdot \mathbf{E} = \rho/\varepsilon_0$, inside the conductor, we must have $\rho = 0$. Hence any net charge must live on the surface.
+
+Also, since $\phi$ is constant inside the conductor, the surface of the conductor is an equipotential. Hence the electric field is perpendicular to the surface. This makes sense since any electric field with components parallel to the surface would cause the charges to move.
+
+Recall also from our previous discussion that across a surface, we have
+\[
+ \hat{\mathbf{n}}\cdot \mathbf{E}_\text{outside} - \hat{\mathbf{n}}\cdot \mathbf{E}_\text{inside} = \frac{\sigma}{\varepsilon_0},
+\]
+where $\sigma$ is the surface charge density. Since $\mathbf{E}_\text{inside} = 0$, we obtain
+\[
+ \mathbf{E}_\text{outside} = \frac{\sigma}{\varepsilon_0}\hat{\mathbf{n}}.
+\]
+This allows us to compute the surface charge given the field, and vice versa.
+
+\begin{eg}
+ Consider a spherical conductor with $Q = 0$. We put a positive plate on the left and a negative plate on the right. This creates a field from the left to the right.
+
+ With the conductor in place, since the electric field lines must be perpendicular to the surface, they have to bend towards the conductor. Since field lines end and start at charges, there must be negative charges at the left and positive charges at the right.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [draw, circle, inner sep = 0, minimum size=1.5cm] (charge) {};
+ \draw [->] (-3, 1.5) -- (3, 1.5);
+ \draw [->] (-3, -1.5) -- (3, -1.5);
+ \draw [<-] (charge) to [bend right=10] (-3, 0.75);
+ \draw [<-] (charge) to [bend left=10] (-3, -0.75);
+ \draw [->] (charge) to [bend left=10] (3, 0.75);
+ \draw [->] (charge) to [bend right=10] (3, -0.75);
+ \draw [->] (-3, 0) -- (charge);
+ \draw [->] (charge) -- (3, 0);
+ \node at (-0.2, 0) [left] {$-$};
+ \node at (0.2, 0) [right] {$+$};
+ \end{tikzpicture}
+ \end{center}
+ We get an induced surface charge.
+\end{eg}
+
+\begin{eg}
+ Suppose we have a conductor that fills all space $x < 0$. We ground it such that $\phi = 0$ throughout the conductor. Then we place a charge $q$ at $x = d > 0$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray!50!white] (-1, 1.5) rectangle (0, -1.5);
+ \node at (-0.5, 0) {$\phi = 0$};
+ \node [draw, circle] (+) at (2, 0) {$+$};
+ \draw [->] (+) -- (0, 0);
+ \draw [->] (+) to [bend right=20] (0, 0.75);
+ \draw [->] (+) to [bend left=20] (0, -0.75);
+ \end{tikzpicture}
+ \end{center}
+ We are looking for a potential that corresponds to a source at $x = d$ and satisfies $\phi = 0$ for $x < 0$. Since the solution to the Poisson equation is unique, we can use the method of images to guess a solution and see if it works --- if it does, we are done.
+
+ To guess a solution, we pretend that we don't have a conductor. Instead, we place a charge $-q$ at $x = -d$. Then by symmetry we will get $\phi = 0$ when $x = 0$. The potential of this pair is
+ \[
+ \phi = \frac{1}{4\pi\varepsilon_0}\left[\frac{q}{\sqrt{(x - d)^2 + y^2 + z^2}} - \frac{q}{\sqrt{(x + d)^2 + y^2 + z^2}}\right].
+ \]
+ To get the solution we want, we ``steal'' part of this potential and declare our potential to be
+ \[
+ \phi =
+ \begin{cases}
+ \frac{1}{4\pi\varepsilon_0}\left[\frac{q}{\sqrt{(x - d)^2 + y^2 + z^2}} - \frac{q}{\sqrt{(x + d)^2 + y^2 + z^2}}\right] & \text{if }x > 0\\
+ 0 & \text{if }x \leq 0
+ \end{cases}
+ \]
+ Using this solution, we can immediately see that it satisfies the Poisson equation both outside and inside the conductor. To complete our solution, we need to find the surface charge required such that the equations are satisfied on the surface as well.
+
+ To do so, we can calculate the electric field near the surface, and use the relation $\sigma = \varepsilon_0 \mathbf{E}_\text{outside}\cdot \hat{\mathbf{n}}$. To find $\sigma$, we only need the component of $\mathbf{E}$ in the $x$ direction:
+ \[
+ E_x = -\frac{\partial \phi}{\partial x} = \frac{q}{4\pi\varepsilon_0}\left(\frac{x - d}{|\mathbf{r} - \mathbf{d}|^3} - \frac{x + d}{|\mathbf{r} + \mathbf{d}|^3}\right)
+ \]
+ for $x > 0$, where $\mathbf{d} = d\hat{\mathbf{x}}$. The induced surface charge density is then given by $\varepsilon_0 E_x$ at $x = 0$:
+ \[
+ \sigma = E_x\varepsilon_0 = -\frac{q}{2\pi}\frac{d}{(d^2 + y^2 + z^2)^{3/2}}.
+ \]
+ The total surface charge is then given by
+ \[
+ \int \sigma \;\d y\;\d z = -q.
+ \]
+\end{eg}
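The integral over the plane can be checked numerically as well. An illustrative Python sketch:

```python
import math

# Numerical check: integrating the induced surface charge density
# sigma = -q d / (2 pi (d^2 + rho^2)^(3/2)) over the whole plane gives -q.
q, d = 1.0, 0.7

n, rho_max = 200000, 5000.0
h = rho_max / n
total = 0.0
for i in range(n):
    rho = (i + 0.5) * h
    sigma = -q * d / (2 * math.pi * (d**2 + rho**2) ** 1.5)
    total += sigma * 2 * math.pi * rho * h   # polar area element

print(total)  # close to -q = -1
```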
+\section{Magnetostatics}
+Charges give rise to electric fields, currents give rise to magnetic fields. In this section, we study the magnetic fields induced by \emph{steady currents}, i.e.\ in situations where $\mathbf{J}\not= 0$ and $\rho = 0$. Again, we look for time-independent solutions.
+
+Since there is no charge, we obtain $\mathbf{E} = 0$. The remaining Maxwell's equations are
+\begin{align*}
+ \nabla \times \mathbf{B} &= \mu_0 \mathbf{J}\\
+ \nabla\cdot \mathbf{B} &= 0
+\end{align*}
+The objective is, given a $\mathbf{J}$, find the resultant $\mathbf{B}$.
+
+Before we start, what does the condition $\rho = 0$ mean? It does \emph{not} mean that there are no charges around. We want charges to be moving around to create current. What it means is that the positive and negative charges balance out exactly, and that they stay balanced: charge does not pile up anywhere. At any specific point in space, the amount of charge entering is the same as the amount of charge leaving. This is the case in many applications. For example, in a wire, all electrons move together at the same rate, and we don't have charge building up at parts of the circuit.
+
+Mathematically, we can obtain the interpretation from the continuity equation:
+\[
+ \frac{\partial\rho}{\partial t} + \nabla \cdot \mathbf{J} = 0.
+\]
+In the case of steady currents, we have $\frac{\partial \rho}{\partial t} = 0$. So
+\[
+ \nabla\cdot \mathbf{J} = 0,
+\]
+which says that there is no net flow in or out of a point.
+\subsection{Ampere's Law}
+Consider a surface $S$ with boundary $C$. Current $\mathbf{J}$ flows through $S$. We now integrate the first equation over the surface $S$ to obtain
+\[
+ \int_S (\nabla\times \mathbf{B})\cdot \d \mathbf{S} = \oint_C \mathbf{B}\cdot \d \mathbf{r} = \mu_0 \int_S \mathbf{J}\cdot \d \mathbf{S}.
+\]
+So
+\begin{law}[Ampere's law]
+ \[
+ \oint_C \mathbf{B}\cdot \d \mathbf{r} = \mu_0 I,
+ \]
+ where $I$ is the current through the surface.
+\end{law}
+
+\begin{eg}[A long straight wire]
+ A wire is a cylinder with current $I$ flowing through it.
+
+ We use cylindrical polar coordinates $(r, \varphi, z)$, where $z$ is along the direction of the current, and $r$ points in the radial direction.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 1.5) circle [x radius=0.5, y radius = 0.15];
+ \draw [dashed] (0.5, -1.5) arc (0: 180:0.5 and 0.15);
+ \draw (-0.5, -1.5) arc (180: 360:0.5 and 0.15);
+ \draw (0.5, 1.5) -- (0.5, -1.5);
+ \draw (-0.5, 1.5) -- (-0.5, -1.5);
+
+ \draw [->] (0, 1.5) -- (0, 2) node [above] {$I$};
+
+ \draw [fill = gray!50!white] circle [x radius = 1, y radius = 0.3];
+ \draw [fill = white] circle [x radius = 0.5, y radius = 0.15];
+ \draw [fill = white, draw = none] (-0.5, 0) rectangle (0.5, 1);
+ \draw (-0.5, 0) -- (-0.5, 1);
+ \draw (0.5, 0) -- (0.5, 1);
+ \node at (1, 0) [right] {$S$};
+
+ \draw [->] (3, 0) -- (3, 1) node [above] {$z$};
+ \draw [->] (3, 0) -- (4, 0) node [right] {$r$};
+ \end{tikzpicture}
+ \end{center}
+ By symmetry, the magnetic field can only depend on the radius, and must lie in the $x,y$ plane. Since we require that $\nabla\cdot \mathbf{B} = 0$, we cannot have a radial component. So the general form is
+ \[
+ \mathbf{B}(\mathbf{r}) = B(r)\hat{\boldsymbol\varphi}.
+ \]
+ To find $B(r)$, we integrate around a horizontal circle of radius $r$ that wraps around the wire. We have
+ \[
+ \oint_C \mathbf{B}\cdot \d \mathbf{r} = B(r)\int_0^{2\pi} r\;\d \varphi = 2\pi r B(r)
+ \]
+ By Ampere's law, we have
+ \[
+ 2\pi rB(r) = \mu_0 I.
+ \]
+ So
+ \[
+ \mathbf{B}(r) = \frac{\mu_0 I}{2\pi r} \hat{\boldsymbol\varphi}.
+ \]
+\end{eg}
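We can sanity-check Ampere's law for this wire field numerically: the line integral of $\mathbf{B} = \frac{\mu_0 I}{2\pi r}\hat{\boldsymbol\varphi}$ around a closed loop is $\mu_0 I$ when the loop encloses the wire and $0$ when it does not. An illustrative sketch (units with $\mu_0 = 1$):

```python
import math

# Numerical check of Ampere's law (mu0 = 1): the line integral of
# B = mu0 I/(2 pi r) phi_hat around a circle is mu0 I if the circle
# encloses the wire (at the origin) and 0 otherwise.
mu0, I = 1.0, 1.0

def B(x, y):
    r2 = x * x + y * y
    pref = mu0 * I / (2 * math.pi * r2)
    return (-pref * y, pref * x)            # azimuthal field

def line_integral(cx, cy, R, n=50000):
    total = 0.0
    for i in range(n):
        t0, t1 = 2 * math.pi * i / n, 2 * math.pi * (i + 1) / n
        tm = 0.5 * (t0 + t1)
        x, y = cx + R * math.cos(tm), cy + R * math.sin(tm)
        dx = R * (math.cos(t1) - math.cos(t0))   # chord of the segment
        dy = R * (math.sin(t1) - math.sin(t0))
        bx, by = B(x, y)
        total += bx * dx + by * dy
    return total

print(line_integral(0.3, 0.0, 1.0))   # encloses the wire: close to mu0 I = 1
print(line_integral(3.0, 0.0, 1.0))   # does not enclose: close to 0
```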
+
+\begin{eg}[Surface current]
+ Consider the plane $z = 0$ with \emph{surface current density} $\mathbf{k}$ (i.e.\ current per unit length).
+ \begin{center}
+ \begin{tikzpicture}[
+ y = {(0.5cm,0.5cm)},
+ z = {(0cm,1cm)}]
+ \draw (-2, -1, 0) -- (2, -1, 0) -- (2, 1, 0) -- (-2, 1, 0) -- cycle;
+ \draw [->] (-1.5, 0.5, 0) -- (1.5, 0.5, 0);
+ \draw [->] (-1.5, -0.5, 0) -- (1.5, -0.5, 0);
+ \draw [->] (-1.5, 0, 0) -- (1.5, 0, 0);
+ \end{tikzpicture}
+ \end{center}
+ Take the $x$-direction to be the direction of the current, and the $z$-direction to be the normal to the plane.
+
+ We can imagine this situation as infinitely many copies of the wire above, placed side by side. Then the magnetic field must point in the $y$-direction. By symmetry, we must have
+ \[
+ \mathbf{B} = -B(z) \hat{\mathbf{y}},
+ \]
+ with $B(z) = -B(-z)$.
+
+ Consider a vertical rectangular loop of length $L$ through the surface
+ \begin{center}
+ \begin{tikzpicture}[
+ y = {(0.5cm,0.5cm)},
+ z = {(0cm,1cm)}]
+ \draw (-2, -1, 0) -- (2, -1, 0) -- (2, 1, 0) -- (-2, 1, 0) -- cycle;
+ \draw [->] (-1.5, 0.5, 0) -- (1.5, 0.5, 0);
+ \draw [->] (-1.5, -0.5, 0) -- (1.5, -0.5, 0);
+ \draw [->] (-1.5, 0, 0) -- (1.5, 0, 0);
+ \draw [red] (0, 0.75, 0) -- (0, 0.75, 0.25) node [above] {$L$} -- (0, -0.75, 0.25) -- (0, -0.75, 0);
+ \draw [dashed, red] (0, -0.75, 0) -- (0, -0.75, -0.25) -- (0, 0.75, -0.25) -- (0, 0.75, 0);
+ \end{tikzpicture}
+ \end{center}
+ Then
+ \[
+ \oint_C \mathbf{B}\cdot \d \mathbf{r} = LB(z) - LB(-z) = \mu_0 kL
+ \]
+ So
+ \[
+ B(z) = \frac{\mu_0 k}{2}\quad\text{ for }z > 0.
+ \]
+ Similar to the electrostatic case, the magnetic field is constant on each side of the plane, and the component parallel to the surface is discontinuous across it. This is a general result, i.e.\ across any surface,
+ \[
+ \hat {\mathbf{n}} \times \mathbf{B}_+ - \hat{\mathbf{n}}\times \mathbf{B}_{-} = \mu_0 \mathbf{k}.
+ \]
+\end{eg}
+
+\begin{eg}[Solenoid]
+ A \emph{solenoid} is a cylindrical surface current, usually made by wrapping a wire around a cylinder.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 1.5) circle [x radius=0.5, y radius = 0.15];
+ \draw [dashed] (0.5, -1.5) arc (0: 180:0.5 and 0.15);
+ \draw (-0.5, -1.5) arc (180: 360:0.5 and 0.15);
+ \draw [red, dashed] (0.5, -1) arc (0: 180:0.5 and 0.15);
+ \draw [red, ->-=0.6] (-0.5, -1) arc (180: 360:0.5 and 0.15);
+ \draw [red, dashed] (0.5, -0.5) arc (0: 180:0.5 and 0.15);
+ \draw [red, ->-=0.6] (-0.5, -0.5) arc (180: 360:0.5 and 0.15);
+ \draw [red, dashed] (0.5, 0) arc (0: 180:0.5 and 0.15);
+ \draw [red, ->-=0.6] (-0.5, 0) arc (180: 360:0.5 and 0.15);
+ \draw [red, dashed] (0.5, 0.5) arc (0: 180:0.5 and 0.15);
+ \draw [red, ->-=0.6] (-0.5, 0.5) arc (180: 360:0.5 and 0.15);
+ \draw [red, dashed] (0.5, 1) arc (0: 180:0.5 and 0.15);
+ \draw [red, ->-=0.6] (-0.5, 1) arc (180: 360:0.5 and 0.15);
+ \draw (0.5, 1.5) -- (0.5, -1.5);
+ \draw (-0.5, 1.5) -- (-0.5, -1.5);
+
+ \draw [->] (3, 0) -- (3, 1) node [above] {$z$};
+ \draw [->] (3, 0) -- (4, 0) node [right] {$r$};
+
+ \draw (0.2, 1) -- (0.8, 1) node[anchor = south west] {$C$};
+ \draw [->-=0.6] (0.8, 1) -- (0.8, -1);
+ \draw (0.8, -1) -- (0.2, -1) -- (0.2, 1);
+ \end{tikzpicture}
+ \end{center}
+ We use cylindrical polar coordinates with $z$ in the direction of the extension of the cylinder. By symmetry, $\mathbf{B} = B(r)\hat{\mathbf{z}}$.
+
+ Away from the cylinder, $\nabla \times \mathbf{B} = 0$. So $\frac{\partial B}{\partial r} = 0$, which means that $B(r)$ is constant outside. Since we know that $\mathbf{B} = \mathbf{0}$ at infinity, $\mathbf{B} = \mathbf{0}$ everywhere outside the cylinder.
+
+ To compute $\mathbf{B}$ inside, use Ampere's law with a curve $C$. Note that only the vertical part (say of length $L$) inside the cylinder contributes to the integral. Then
+ \[
+ \oint_C \mathbf{B}\cdot \d \mathbf{r} = BL = \mu_0 INL,
+ \]
+ where $N$ is the number of wires per unit length and $I$ is the current in each wire (so $INL$ is the total amount of current through the wires).
+
+ So
+ \[
+ B = \mu_0 IN.
+ \]
+\end{eg}
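We can cross-check the solenoid result by superposing circular loops, using the standard on-axis field of a single current loop, $\mu_0 I R^2/2(R^2 + z^2)^{3/2}$ (a standard result, not derived in these notes). An illustrative sketch (units with $\mu_0 = 1$):

```python
import math

# Numerical cross-check (mu0 = 1): a long solenoid built from circular
# loops, each contributing mu0 I R^2 / (2 (R^2 + z^2)^(3/2)) on its axis.
# The field at the centre approaches mu0 I N.
mu0, I, N, R = 1.0, 1.0, 10.0, 0.3
L = 200.0          # half-length of the solenoid
n = 200000
h = 2 * L / n      # each slice carries current I * N * h

B = 0.0
for i in range(n):
    z = -L + (i + 0.5) * h
    B += mu0 * (I * N * h) * R**2 / (2 * (R**2 + z**2) ** 1.5)

print(B)            # close to mu0 * I * N = 10
```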
+
+\subsection{Vector potential}
+For general current distributions $\mathbf{J}$, we also need to solve $\nabla \cdot \mathbf{B} = 0$.
+
+Recall that for the electric case, the equation is $\nabla\cdot \mathbf{E} = \rho/\varepsilon_0$. For the $\mathbf{B}$ field, we have $0$ on the right hand side instead of $\rho/\varepsilon_0$. This is telling us that there are no magnetic monopoles, i.e.\ magnetic charges.
+
+The general solution to this equation is $\mathbf{B} = \nabla \times \mathbf{A}$ for some $\mathbf{A}$.
+\begin{defi}[Vector potential]
+ If $\mathbf{B} = \nabla\times \mathbf{A}$, then $\mathbf{A}$ is the \emph{vector potential}.
+\end{defi}
+The other Maxwell equation then says
+\[
+ \nabla \times \mathbf{B} = -\nabla^2 \mathbf{A} + \nabla(\nabla\cdot \mathbf{A}) = \mu_0 \mathbf{J}.\tag{$*$}
+\]
+This is rather difficult to solve, but it can be made easier by noting that $\mathbf{A}$ is not unique. If $\mathbf{A}$ is a vector potential, then for any function $\chi(\mathbf{x})$,
+\[
+ \mathbf{A}' = \mathbf{A} + \nabla \chi
+\]
+is also a vector potential of $\mathbf{B}$, since $\nabla \times (\mathbf{A} + \nabla \chi) = \nabla \times \mathbf{A}$.
+
+The transformation $\mathbf{A} \mapsto \mathbf{A} + \nabla\chi$ is called a \emph{gauge transformation}.
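Gauge invariance of $\mathbf{B}$ can be checked symbolically for an arbitrary-looking example. A sympy sketch (the particular $\mathbf{A}$ and $\chi$ below are made up for illustration):

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, curl, gradient

# Symbolic check (sympy): the gauge transformation A -> A + grad(chi)
# leaves B = curl A unchanged, since curl grad = 0.
C = CoordSys3D('C')
A = C.x * C.y * C.i + sp.sin(C.z) * C.j + (C.x + C.z**2) * C.k
chi = sp.exp(C.x) * C.y + C.z**3

B1 = curl(A)
B2 = curl(A + gradient(chi))
print(B2 - B1 == Vector.zero)  # True
```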
+
+\begin{defi}[(Coulomb) gauge]
+ Each choice of $\mathbf{A}$ is called a \emph{gauge}. An $\mathbf{A}$ such that $\nabla \cdot \mathbf{A} = 0$ is called a \emph{Coulomb gauge}.
+\end{defi}
+
+\begin{prop}
+ We can always pick $\chi$ such that $\nabla \cdot \mathbf{A}' = 0$.
+\end{prop}
+
+\begin{proof}
+ Suppose that $\mathbf{B} = \nabla \times \mathbf{A}$ with $\nabla \cdot \mathbf{A} = \psi(\mathbf{x})$. Then for any $\mathbf{A}' = \mathbf{A} + \nabla \chi$, we have
+ \[
+ \nabla \cdot \mathbf{A}' = \nabla \cdot \mathbf{A} + \nabla^2 \chi = \psi + \nabla^2\chi.
+ \]
+ So we need a $\chi$ such that $\nabla^2\chi = -\psi$. This is the Poisson equation, which we know always has a solution, e.g.\ via the Green's function. Hence we can find a $\chi$ that works.
+\end{proof}
+
+If $\mathbf{B} = \nabla\times \mathbf{A}$ and $\nabla\cdot \mathbf{A} = 0$, then the Maxwell equation $(*)$ becomes
+\[
+ \nabla^2 \mathbf{A} = -\mu_0 \mathbf{J}.
+\]
+Or, in Cartesian components,
+\[
+ \nabla^2 A_i = -\mu_0 J_i.
+\]
+This is 3 copies of the Poisson equation, which we know how to solve using Green's functions. The solution is
+\[
+ A_i (\mathbf{r}) = \frac{\mu_0}{4\pi}\int \frac{J_i (\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\;\d V',
+\]
+or
+\[
+ \mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int \frac{\mathbf{J}(\mathbf{r'})}{|\mathbf{r} - \mathbf{r}'|}\;\d V',
+\]
+both integrating over $\mathbf{r}'$.
+
+We have randomly written down a solution of $\nabla^2 \mathbf{A} = -\mu_0 \mathbf{J}$. However, this is a solution to Maxwell's equations only if it satisfies the Coulomb gauge condition $\nabla \cdot \mathbf{A} = 0$. Fortunately, it does:
+\begin{align*}
+ \nabla\cdot \mathbf{A}(\mathbf{r}) &= \frac{\mu_0}{4\pi}\int \mathbf{J}(\mathbf{r}')\cdot \nabla\left(\frac{1}{|\mathbf{r} - \mathbf{r}'|}\right)\;\d V'\\
+ &= -\frac{\mu_0}{4\pi}\int \mathbf{J}(\mathbf{r}') \cdot \nabla'\left(\frac{1}{|\mathbf{r} - \mathbf{r}'|}\right)\;\d V'\\
+ \intertext{Here we employed a clever trick --- differentiating $1/|\mathbf{r} - \mathbf{r}'|$ with respect to $\mathbf{r}$ is the negative of differentiating it with respect to $\mathbf{r}'$. Now that we are differentiating against $\mathbf{r}'$, we can integrate by parts to obtain}
+ &= -\frac{\mu_0}{4\pi}\int \left[\nabla'\cdot \left(\frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\right) - \frac{\nabla'\cdot \mathbf{J}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\right]\;\d V'.
+\end{align*}
+Here both terms vanish. We assume that the current is localized in some region of space so that $\mathbf{J} = \mathbf{0}$ on the boundary. Then the first term vanishes since it is a total derivative. The second term vanishes since we assumed that the current is steady ($\nabla\cdot \mathbf{J} = 0$). Hence we have Coulomb gauge.
+
+\begin{law}[Biot-Savart law]
+ The magnetic field is
+ \[
+ \mathbf{B}(\mathbf{r}) = \nabla \times \mathbf{A} = \frac{\mu_0}{4\pi}\int \mathbf{J}(\mathbf{r}')\times \frac{\mathbf{r} - \mathbf{r}'}{|\mathbf{r} - \mathbf{r}'|^3}\;\d V'.
+ \]
+ If the current is localized on a curve, this becomes
+ \[
+ \mathbf{B} = \frac{\mu_0 I}{4\pi}\oint_C \d \mathbf{r}' \times \frac{\mathbf{r} - \mathbf{r}'}{|\mathbf{r} - \mathbf{r}'|^3},
+ \]
+ since $\mathbf{J}(\mathbf{r}')$ is non-zero only on the curve.
+\end{law}
+
+\subsection{Magnetic dipoles}
+According to Maxwell's equations, magnetic monopoles don't exist. However, it turns out that a localized current looks like a dipole from far far away.
+
+\begin{eg}[Current loop]
+ Take a current loop of wire $C$, radius $R$ and current $I$.
+ \begin{center}
+ \begin{tikzpicture}
+ \clip (-2.1, 1.2) rectangle (2.1, -1.2);
+ \draw [->-=0.8] circle [x radius = 1, y radius = 0.5];
+ \draw [->, gray!70!black] (-1, 0) circle [x radius=0.4, y radius = 0.5];
+ \draw [->, gray!70!black] (-1.2, 0) circle [x radius=0.8, y radius = 1.1];
+ \draw [->, gray!70!black] (-1.3, 0) circle [x radius=1.1, y radius = 2];
+ \draw [->-=0.5, gray!70!black] (0, -1.2) -- (0, 1.2);
+ \draw [gray!70!black] (1, 0) circle [x radius=0.4, y radius = 0.5];
+ \draw [gray!70!black] (1.2, 0) circle [x radius=0.8, y radius = 1.1];
+ \draw [gray!70!black] (1.3, 0) circle [x radius=1.1, y radius = 2];
+ \draw [gray!70!black, ->] (0.6, -0.01) -- (0.6, 0);
+ \draw [gray!70!black, ->] (0.4, -0.01) -- (0.4, 0);
+ \draw [gray!70!black, ->] (0.2, -0.01) -- (0.2, 0);
+ \end{tikzpicture}
+ \end{center}
+ Based on the fields generated by a straight wire, we can guess that $\mathbf{B}$ looks like this, but we want to calculate it.
+
+ By the Biot-Savart law, we know that
+ \[
 \mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int\frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\;\d V' = \frac{\mu_0 I}{4\pi}\oint_C \frac{\d \mathbf{r}'}{|\mathbf{r} - \mathbf{r}'|}.
+ \]
 Far from the loop, $|\mathbf{r}'|/|\mathbf{r}|$ is small and we can use the Taylor expansion.
+ \[
+ \frac{1}{|\mathbf{r} - \mathbf{r}'|} = \frac{1}{r} + \frac{\mathbf{r}\cdot \mathbf{r}'}{r^3} + \cdots
+ \]
+ Then
+ \[
+ \mathbf{A}(\mathbf{r}) = \frac{\mu_0 I}{4\pi}\oint_C \left(\frac{1}{r} + \frac{\mathbf{r}\cdot \mathbf{r}'}{r^3} + \cdots\right) \d \mathbf{r}'.
+ \]
 Note that $r$ is constant in the integral, so we can take it out. The first $\frac{1}{r}$ term vanishes because it is constant, and the integral of a constant along a closed loop is $0$. So we only consider the second term.
+
+ We claim that for any constant vector $\mathbf{g}$,
+ \[
+ \oint_C \mathbf{g}\cdot \mathbf{r}' \;\d \mathbf{r}' = \mathbf{S}\times \mathbf{g},
+ \]
 where $\mathbf{S} = \int \d \mathbf{S}$ is the \emph{vector area} of the surface bounded by $C$. It follows from Stokes' theorem applied to $f(\mathbf{r}')\mathbf{c}$ for an arbitrary constant vector $\mathbf{c}$, which gives
 \[
 \oint_C f(\mathbf{r}')\;\d \mathbf{r}' = \int_S \d \mathbf{S}\times \nabla f,
 \]
 taking $f(\mathbf{r}') = \mathbf{g}\cdot \mathbf{r}'$. Then
 \[
 \oint_C \mathbf{g}\cdot \mathbf{r}'\;\d \mathbf{r}' = \int_S \d \mathbf{S}\times \mathbf{g} = \mathbf{S}\times \mathbf{g}.
 \]
+ Using this, we have
+ \[
+ \mathbf{A}(\mathbf{r}) \approx \frac{\mu_0}{4\pi}\frac{\mathbf{m}\times \mathbf{r}}{r^3},
+ \]
+ where
+ \begin{defi}[Magnetic dipole moment] The \emph{magnetic dipole moment} is
+ \[
+ \mathbf{m} = I\mathbf{S}.
+ \]
+ \end{defi}
+ Then
+ \[
+ \mathbf{B} = \nabla\times \mathbf{A} = \frac{\mu_0}{4\pi}\left(\frac{3(\mathbf{m}\cdot \hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m}}{r^3}\right).
+ \]
+ This is the same form as $\mathbf{E}$ for an electric dipole, except that there is no $1/r^2$ leading term here, because we have no magnetic monopoles.
+\end{eg}
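The final curl computation can be verified symbolically (a SymPy sketch; the $\mu_0/4\pi$ prefactor is dropped, since it passes straight through the curl):

```python
import sympy as sp

x, y, z, mx, my, mz = sp.symbols('x y z m_x m_y m_z')
r_vec = sp.Matrix([x, y, z])
m = sp.Matrix([mx, my, mz])
r = sp.sqrt(x**2 + y**2 + z**2)

# Vector potential of a dipole, without the mu_0/(4 pi) prefactor
A = m.cross(r_vec) / r**3

def curl(F):
    # curl in Cartesian components
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

B = curl(A)
# Claimed dipole field: (3 (m . r_hat) r_hat - m) / r^3
expected = (3 * m.dot(r_vec) * r_vec / r**2 - m) / r**3

assert all(sp.simplify(e) == 0 for e in B - expected)
```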
+
+After doing it for a loop, we can do it for a general current distribution:
+\begin{eg}
+ We have
+ \begin{align*}
+ A_i(\mathbf{r}) &= \frac{\mu_0}{4\pi}\int \frac{J_i(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\;\d V'\\
 &= \frac{\mu_0}{4\pi}\int\left(\frac{J_i(\mathbf{r}')}{r} + \frac{J_i(\mathbf{r}')(\mathbf{r}\cdot \mathbf{r}')}{r^3} + \cdots\right)\;\d V'.
+ \end{align*}
+ We will show that the first term vanishes by showing that it is a total derivative. We have
+ \[
+ \partial_j'(J_jr_i') = \underbrace{(\partial_j'J_j)}_{=\nabla\cdot \mathbf{J} = 0}r_i' + \underbrace{(\partial_j'r_i')}_{=\delta_{ij}}J_j = J_i.
+ \]
+ For the second term, we look at
+ \[
+ \partial_j' (J_jr_i'r_k') = (\partial_j'J_j)r_i'r_k' + J_ir_k' + J_kr_i' =J_kr_i' + J_ir_k'.
+ \]
+ Apply this trick to
+ \[
 \int J_ir_jr'_j \;\d V' = \int \frac{r_j}{2}(J_i r_j' - J_j r'_i)\;\d V',
+ \]
 where we discarded a total derivative $\partial_k'(J_kr_i'r_j')$. Putting it back in vector notation,
+ \begin{align*}
 \int J_i\mathbf{r}\cdot \mathbf{r}'\;\d V' &= \int\frac{1}{2}\left(J_i(\mathbf{r}\cdot \mathbf{r}') - r_i(\mathbf{J}\cdot \mathbf{r}')\right)\;\d V'\\
 &= \int \frac{1}{2}\left[\mathbf{r}\times (\mathbf{J}\times \mathbf{r}')\right]_i\;\d V'.
+ \end{align*}
+ So the long-distance vector potential is again
+ \[
+ \mathbf{A} (\mathbf{r}) = \frac{\mu_0}{4\pi}\frac{\mathbf{m}\times \mathbf{r}}{r^3},
+ \]
+ with
+ \begin{defi}[Magnetic dipole moment]
+ \[
+ \mathbf{m} = \frac{1}{2}\int \mathbf{r}'\times \mathbf{J}(\mathbf{r}')\;\d V'.
+ \]
+ \end{defi}
+\end{eg}
+
+\subsection{Magnetic forces}
We've seen that moving charges produce currents, which generate a magnetic field. But we also know that a charge moving in a magnetic field experiences a force $\mathbf{F} = q\mathbf{v}\times \mathbf{B}$. So two currents will exert a force on each other.
+
+\begin{eg}[Two parallel wires]\leavevmode
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.5] (-1, -2) -- (-1, 2);
+ \draw [->-=0.5] (1, -2) -- (1, 2);
+ \draw [->] (-1, -1.5) -- (1, -1.5) node [pos = 0.5, below] {$d$};
+ \draw [->] (1, -1.5) -- (-1, -1.5);
+
+ \draw [->] (3, 0) -- (3, 1) node [above] {$z$};
+ \draw [->] (3, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (3, 0) -- (3.8, 0.6) node [anchor = south west] {$y$};
+ \end{tikzpicture}
+ \end{center}
 We know that the field produced by the first wire is
 \[
 \mathbf{B}_1 = \frac{\mu_0 I_1}{2\pi r}\hat{\varphi}.
+ \]
+ The particles on the second wire will feel a force
+ \[
+ \mathbf{F} = q\mathbf{v}\times \mathbf{B}_1 = q\mathbf{v}\times \left(\frac{\mu_0 I_1}{2\pi d}\right) \hat{\mathbf{y}}.
+ \]
+ But $\mathbf{J}_2 = nq\mathbf{v}$ and $I_2 = J_2 A$, where $n$ is the density of particles and $A$ is the cross-sectional area of the wire. So the number of particles per unit length is $nA$, and the force per unit length is
+ \[
+ nA\mathbf{F} = \frac{\mu_0 I_1I_2}{2\pi d}\hat{\mathbf{z}}\times \hat{\mathbf{y}} = -\mu_0\frac{I_1I_2}{2\pi d}\hat{\mathbf{x}}.
+ \]
+ So if $I_1I_2 > 0$, i.e.\ the currents are in the same direction, the force is attractive. Otherwise the force is repulsive.
+\end{eg}
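Numerically (a sketch with hypothetical currents): for two $1\;\mathrm{A}$ currents $1\;\mathrm{m}$ apart, the force per unit length is the familiar $2\times 10^{-7}\;\mathrm{N/m}$, the value behind the old SI definition of the ampere.

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, in m kg C^-2

def force_per_length(I1, I2, d):
    """Magnitude of the force per unit length between two long
    parallel wires carrying currents I1, I2, a distance d apart."""
    return mu0 * I1 * I2 / (2 * math.pi * d)

f = force_per_length(1.0, 1.0, 1.0)
print(f)   # close to 2e-7 N/m
assert math.isclose(f, 2e-7)
```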
+
+\begin{eg}[General force]
+ A current $\mathbf{J}_1$, localized on some closed curve $C_1$, sets up a magnetic field
+ \[
+ \mathbf{B}(\mathbf{r}) = \frac{\mu_0I_1}{4\pi}\oint_{C_1} \d \mathbf{r}_1 \times \frac{\mathbf{r} - \mathbf{r}_1}{|\mathbf{r} - \mathbf{r}_1|^3}.
+ \]
+ A second current $\mathbf{J}_2$ on $C_2$ experiences the Lorentz force
+ \[
+ \mathbf{F} = \int \mathbf{J}_2(\mathbf{r})\times \mathbf{B}(\mathbf{r})\;\d V.
+ \]
+ While we are integrating over all of space, the current is localized at a curve $C_2$. So
+ \[
+ \mathbf{F} = I_2\oint_{C_2} \d \mathbf{r}_2 \times \mathbf{B}(\mathbf{r}_2).
+ \]
+ Hence
+ \[
+ \mathbf{F} = \frac{\mu_0}{4\pi} I_1 I_2 \oint_{C_1}\oint_{C_2}\d \mathbf{r}_2\times \left(\d \mathbf{r}_1\times \frac{\mathbf{r}_2 - \mathbf{r}_1}{|\mathbf{r}_2 - \mathbf{r}_1|^3}\right).
+ \]
+ For well-separated currents, approximated by $\mathbf{m}_1$ and $\mathbf{m}_2$, we claim that the force can be written as
+ \[
+ \mathbf{F} = \frac{\mu_0}{4\pi}\nabla\left(\frac{3(\mathbf{m}_1\cdot \hat{\mathbf{r}})(\mathbf{m}_2\cdot \hat{\mathbf{r}}) - (\mathbf{m}_1\cdot \mathbf{m}_2)}{r^3}\right),
+ \]
+ whose proof is too complicated to be included.
+\end{eg}
+
+Note that we've spent a whole chapter discussing ``magnets'', but they are nothing like what we stick on our fridges. It turns out that these ``real'' magnets are composed of many tiny aligned microscopic dipoles arising from electron spin. However, obviously we do not care about magnets in real life.
+
+\section{Electrodynamics}
+So far, we have only looked at fields that do not change with time. However, in real life, fields \emph{do} change with time. We will now look at time-dependent $\mathbf{E}$ and $\mathbf{B}$ fields.
+\subsection{Induction}
+We'll explore the Maxwell equation
+\[
+ \nabla\times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} = 0.
+\]
+In short, if the magnetic field changes in time, i.e.\ $\frac{\partial \mathbf{B}}{\partial t} \not = 0$, this creates an $\mathbf{E}$ that accelerates charges, which creates a current in a wire. This process is called induction. Consider a wire, which is a closed curve $C$, with a surface $S$.
+
+We integrate over the surface $S$ to obtain
+\[
+ \int_S (\nabla\times \mathbf{E})\cdot \d \mathbf{S} = -\int_S \frac{\partial \mathbf{B}}{\partial t}\cdot \d \mathbf{S}.
+\]
+By Stokes' theorem and commutativity of integration and differentiation (assuming $S$ and $C$ do not change in time), we have
+\[
+ \int_C \mathbf{E}\cdot \d \mathbf{r} = - \frac{\d }{\d t} \int _S \mathbf{B} \cdot \d \mathbf{S}.
+\]
+\begin{defi}[Electromotive force (emf)]
+ The \emph{electromotive force} (emf) is
+ \[
+ \mathcal{E} = \int_C \mathbf{E}\cdot \d \mathbf{r}.
+ \]
+ Despite the name, this is not a force! We can think of it as the work done on a unit charge moving around the curve, or the ``voltage'' of the system.
+\end{defi}
+
+For convenience we define the quantity
+\begin{defi}[Magnetic flux]
+ The \emph{magnetic flux} is
+ \[
+ \Phi = \int_S \mathbf{B}\cdot \d \mathbf{S}.
+ \]
+\end{defi}
+Then we have
+\begin{law}[Faraday's law of induction]
+ \[
+ \mathcal{E} = -\frac{\d \Phi}{\d t}.
+ \]
+\end{law}
+This says that when we change the magnetic flux through $S$, then a current is induced. In practice, there are many ways we can change the magnetic flux, such as by moving bar magnets or using an electromagnet and turning it on and off.
+
The minus sign has a significance. When we change a magnetic field, an emf is created. This induces a current around the wire. However, we also know that currents produce magnetic fields. The minus sign indicates that the induced magnetic field \emph{opposes} the initial change in magnetic field. If it instead reinforced the change, we would get runaway behaviour and the world would explode. This is known as \emph{Lenz's law}.
+
+\begin{eg}
+ Consider a circular wire with a magnetic field perpendicular to it.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [x radius = 1.5, y radius = 0.5];
+ \draw [->] (0, 0) -- (0, 1);
+ \draw [->] (-0.5, 0) -- (-0.5, 1);
+ \draw [->] (-1, 0) -- (-1, 1);
+ \draw [->] (0.5, 0) -- (0.5, 1);
+ \draw [->] (1, 0) -- (1, 1);
+ \draw (0, -0.6) -- (0, -1);
+ \draw (0.5, -0.55) -- (0.5, -1);
+ \draw (1, -0.45) -- (1, -1);
+ \draw (-0.5, -0.55) -- (-0.5, -1);
+ \draw (-1, -0.45) -- (-1, -1);
+ \end{tikzpicture}
+ \end{center}
+ If we decrease $\mathbf{B}$ such that $\dot{\Phi} < 0$, then $\mathcal{E} > 0$. So the current flows anticlockwise (viewed from above). The current generates its own $\mathbf{B}$. This acts to \emph{increase} $\mathbf{B}$ inside, which counteracts the initial decrease.
+ \begin{center}
+ \begin{tikzpicture}[xscale = 2]
+ \clip (-2.1, 1.2) rectangle (2.1, -1.2);
+ \draw [->-=0.8] circle [x radius = 1, y radius = 0.5];
+ \draw [->, gray!50!black] (-1, 0) circle [x radius=0.4, y radius = 1];
+ \draw [->, gray!50!black] (-1.2, 0) circle [x radius=0.8, y radius = 2.2];
+ \draw [->, gray!50!black] (-1.3, 0) circle [x radius=1.1, y radius = 4];
+ \draw [->-=0.5, gray!50!black] (0, -1.2) -- (0, 1.2);
+ \draw [gray!50!black] (1, 0) circle [x radius=0.4, y radius = 1];
+ \draw [gray!50!black] (1.2, 0) circle [x radius=0.8, y radius = 2.2];
+ \draw [gray!50!black] (1.3, 0) circle [x radius=1.1, y radius = 4];
+ \draw [gray!50!black, ->] (0.6, -0.01) -- (0.6, 0);
+ \draw [gray!50!black, ->] (0.4, -0.01) -- (0.4, 0);
+ \draw [gray!50!black, ->] (0.2, -0.01) -- (0.2, 0);
+ \end{tikzpicture}
+ \end{center}
+ This means you don't get runaway behaviour.
+\end{eg}
There is a related way to induce a current: keep $\mathbf{B}$ fixed and move the wire.
+
+\begin{eg}\leavevmode
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0.4, 1) -- (-0.1, 0) node [pos = 0.4, left] {$d$};
+ \draw [->] (-0.1, 0) -- (0.4, 1);
+ \draw (0.75, 0.5) -- (0.75, -0.3);
+ \draw (1.25, 0.5) -- (1.25, -0.3);
+ \draw (1.75, 0.5) -- (1.75, -0.3);
+ \draw (2.25, 0.5) -- (2.25, -0.3);
+ \draw (2.75, 0.5) -- (2.75, -0.3);
+
+ \draw [fill=gray!50!white, opacity=0.8] (3, 0) -- (0, 0) -- (0.5, 1) -- (3.5, 1);
+ \draw [ultra thick] (1.5, -0.3) -- (2.2, 1.3);
+
+ \draw [->] (0.75, 0.5) -- (0.75, 1.5);
+ \draw [->] (1.25, 0.5) -- (1.25, 1.5);
+ \draw [->] (1.75, 0.5) -- (1.75, 1.5);
+ \draw [->] (2.25, 0.5) -- (2.25, 1.5);
+ \draw [->] (2.75, 0.5) -- (2.75, 1.5);
+ \end{tikzpicture}
+ \end{center}
+ Slide the bar to the left with speed $v$. Each charge $q$ will experience a Lorentz force
+ \[
+ F = qvB,
+ \]
+ in the counterclockwise direction.
+
+ The emf, defined as the work done per unit charge, is
+ \[
+ \mathcal{E} = vBd,
+ \]
+ because work is only done for particles on the bar.
+
+ Meanwhile, the change of flux is
+ \[
+ \frac{\d \Phi}{\d t} = -vBd,
+ \]
 since the area decreases at a rate of $vd$.
+
+ We again have
+ \[
+ \mathcal{E} = -\frac{\d \Phi}{\d t}.
+ \]
+ Note that we obtain the same formula but different physics --- we used Lorentz force law, not Maxwell's equation.
+\end{eg}
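The equality of the two computations can be checked numerically (a sketch with made-up values of $B$, $d$, $v$; the flux is linear in $t$, so a finite difference recovers the derivative essentially exactly):

```python
import math

B, d, v = 0.5, 0.2, 3.0   # hypothetical field (T), bar length (m), speed (m/s)
A0 = 1.0                  # initial enclosed area (m^2)

def flux(t):
    # The enclosed area shrinks at rate v*d as the bar slides inwards
    return B * (A0 - v * d * t)

emf_lorentz = v * B * d   # emf from the Lorentz force argument

# Faraday's law: emf = -dPhi/dt, via a central finite difference
h = 1e-6
emf_faraday = -(flux(1.0 + h) - flux(1.0 - h)) / (2 * h)

assert math.isclose(emf_lorentz, emf_faraday, rel_tol=1e-6)
```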
+
+Now we consider the general case: a moving loop $C(t)$ bounding a surface $S(t)$. As the curve moves, the curve sweeps out a cylinder $S_c$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle [x radius=1cm, y radius=0.3cm];
+ \draw [fill=gray] (0.6, 2) circle [x radius=1cm, y radius=0.3cm] node {$S(t + \delta t)$};
+ \shade [left color=gray!30!white, right color=gray!70!white, opacity=0.7] (-1, 0) arc (180:360:1 and 0.3) -- (1.6, 2) arc (360:180:1 and 0.3) -- (-1, 0);
+
+ \node {$S(t)$};
+ \node at (0.3, 1) {$S_c$};
+ \end{tikzpicture}
+\end{center}
The change in flux is
+\begin{align*}
+ \Phi(t + \delta t) - \Phi(t) &= \int_{S(t + \delta t)} \mathbf{B}(t + \delta t)\cdot \d \mathbf{S} - \int_{S(t)}\mathbf{B}(t)\cdot \d \mathbf{S}\\
 &= \int_{S(t + \delta t)}\left(\mathbf{B}(t) + \delta t\frac{\partial \mathbf{B}}{\partial t}\right)\cdot \d \mathbf{S} - \int_{S(t)}\mathbf{B}(t)\cdot \d \mathbf{S} + O(\delta t^2)\\
+ &= \delta t\int_{S(t)}\frac{\partial \mathbf{B}}{\partial t}\cdot \d \mathbf{S} + \left[\int_{S(t + \delta t)} - \int_{S(t)}\right]\mathbf{B}(t)\cdot \d \mathbf{S} + O(\delta t^2)
+\end{align*}
+We know that $S(t + \delta t)$, $S(t)$ and $S_c$ together form a closed surface. Since $\nabla\cdot \mathbf{B} = 0$, the integral of $\mathbf{B}$ over a closed surface is $0$. So we obtain
+\[
+ \left[\int_{S(t + \delta t)} - \int_{S(t)}\right]\mathbf{B}(t)\cdot \d \mathbf{S} + \int_{S_c}\mathbf{B}(t)\cdot \d \mathbf{S} = 0.
+\]
+Hence we have
+\[
 \Phi(t + \delta t) - \Phi(t) = \delta t\int_{S(t)}\frac{\partial \mathbf{B}}{\partial t}\cdot \d \mathbf{S} - \int_{S_c}\mathbf{B}(t)\cdot \d \mathbf{S} + O(\delta t^2).
+\]
+We can simplify the integral over $S_c$ by writing the surface element as
+\[
+ \d \mathbf{S} = (\d \mathbf{r}\times \mathbf{v})\;\delta t.
+\]
+Then $\mathbf{B}\cdot \d \mathbf{S} = \delta t(\mathbf{v}\times \mathbf{B}) \cdot \d \mathbf{r}$. So
+\[
 \frac{\d \Phi}{\d t} = \lim_{\delta t \to 0}\frac{\delta \Phi}{\delta t} = \int_{S(t)}\frac{\partial \mathbf{B}}{\partial t}\cdot \d \mathbf{S} - \int_{C(t)}(\mathbf{v}\times \mathbf{B})\cdot \d \mathbf{r}.
+\]
+From Maxwell's equation, we know that $\frac{\partial \mathbf{B}}{\partial t} = -\nabla \times \mathbf{E}$. So we have
+\[
 \frac{\d \Phi}{\d t} = -\int_C (\mathbf{E} + \mathbf{v}\times \mathbf{B})\cdot \d \mathbf{r}.
+\]
+Defining the emf as
+\[
 \mathcal{E} = \int_C (\mathbf{E} + \mathbf{v}\times \mathbf{B})\cdot \d \mathbf{r},
+\]
+we obtain the equation
+\[
 \mathcal{E} = -\frac{\d \Phi}{\d t}
+\]
+for the most general case where the curve itself can change.
+\subsection{Magnetostatic energy}
+Suppose that a current $I$ flows along a wire $C$. From magnetostatics, we know that this gives rise to a magnetic field $\mathbf{B}$, and hence a flux $\Phi$ given by
+\[
+ \Phi = \int_S \mathbf{B}\cdot \d \mathbf{S},
+\]
+where $S$ is the surface bounded by $C$.
+\begin{defi}[Inductance]
+ The \emph{inductance} of a curve $C$, defined as
+ \[
+ L = \frac{\Phi}{I},
+ \]
+ is the amount of flux it generates per unit current passing through $C$. This is a property only of the curve $C$.
+\end{defi}
+
Inductance is something engineers care a \emph{lot} about, as they need to create real electric circuits and make things happen. However, we mathematicians find these applications completely pointless and don't actually care about inductance. The only role it will play is in the proof we perform below.
+
+\begin{eg}[The solenoid]
+ Consider a solenoid of length $\ell$ and cross-sectional area $A$ (with $\ell \gg \sqrt{A}$ so we can ignore end effects). We know that
+ \[
+ B = \mu_0 IN,
+ \]
+ where $N$ is the number of turns of wire per unit length and $I$ is the current. The flux through a single turn (pretending it is closed) is
+ \[
+ \Phi_0 = \mu_0 INA.
+ \]
+ So the total flux is
+ \[
+ \Phi = \Phi_0 N\ell = \mu_0 IN^2V,
+ \]
+ where $V$ is the volume, $A\ell$. So
+ \[
+ L = \mu_0 N^2 V.
+ \]
+\end{eg}
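Plugging in some hypothetical numbers (the dimensions below are made up) and cross-checking against $L = \Phi/I$ directly:

```python
import math

mu0 = 4 * math.pi * 1e-7

# Hypothetical solenoid: 10 cm long, 1 cm^2 cross-section, 1000 turns in total
length, A, turns = 0.10, 1e-4, 1000
N = turns / length        # turns per unit length
V = A * length            # volume

L = mu0 * N**2 * V        # inductance from the formula above

# Cross-check from L = Phi / I, with B = mu0 I N and N*length turns
I = 2.0
Phi = (mu0 * I * N) * A * (N * length)
assert math.isclose(L, Phi / I)
print(L)   # roughly 1.3e-3 H
```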
+
+We can use the idea of inductance to compute the energy stored in magnetic fields. The idea is to compute the work done in building up a current.
+
+As we build the current, the change in current results in a change in magnetic field. This produces an induced emf that we need work to oppose. The emf is given by
+\[
+ \mathcal{E} = -\frac{\d \Phi}{\d t} = -L\frac{\d I}{\d t}.
+\]
+This opposes the change in current by Lenz's law. In time $\delta t$, a charge $I\delta t$ flows around $C$. The work done is
+\[
+ \delta W = \mathcal{E} I\delta t = -LI\frac{\d I}{\d t}\delta t.
+\]
+So
+\[
+ \frac{\d W}{\d t} = -LI \frac{\d I}{\d t} = -\frac{1}{2}L\frac{\d I^2}{\d t}.
+\]
+So the work done to build up a current is
+\[
+ W = \frac{1}{2}LI^2 = \frac{1}{2}I\Phi.
+\]
+Note that we dropped the minus sign because we switched from talking about the work done by the emf to the work done to oppose the emf.
+
+This work done is identified with the energy stored in the system. Recall that the vector potential $\mathbf{A}$ is given by $\mathbf{B} = \nabla \times \mathbf{A}$. So
+\begin{align*}
+ U &= \frac{1}{2}I\int_S \mathbf{B}\cdot \d \mathbf{S}\\
+ &= \frac{1}{2}I\int_S(\nabla\times \mathbf{A})\cdot \d \mathbf{S}\\
+ &= \frac{1}{2}I\oint_C \mathbf{A}\cdot \d \mathbf{r}\\
+ &= \frac{1}{2}\int_{\R^3} \mathbf{J}\cdot \mathbf{A}\;\d V\\
+ \intertext{Using Maxwell's equation $\nabla\times \mathbf{B} = \mu_0 \mathbf{J}$, we obtain}
+ &= \frac{1}{2\mu_0}\int (\nabla\times \mathbf{B})\cdot \mathbf{A}\;\d V\\
+ &= \frac{1}{2\mu_0}\int[\nabla\cdot(\mathbf{B}\times \mathbf{A}) + \mathbf{B}\cdot (\nabla\times \mathbf{A})]\;\d V\\
+ \intertext{Assuming that $\mathbf{B}\times \mathbf{A}$ vanishes sufficiently fast at infinity, the integral of the first term vanishes. So we are left with}
+ &= \frac{1}{2\mu_0}\int \mathbf{B}\cdot \mathbf{B}\;\d V.
+\end{align*}
+So
+\begin{prop}
+ The energy stored in a magnetic field is
+ \[
+ U = \frac{1}{2\mu_0}\int \mathbf{B}\cdot \mathbf{B}\;\d V.
+ \]
+\end{prop}
+In general, the energy stored in $\mathbf{E}$ and $\mathbf{B}$ is
+\[
+ U = \int \left(\frac{\varepsilon_0}{2}\mathbf{E}\cdot \mathbf{E} + \frac{1}{2\mu_0}\mathbf{B}\cdot \mathbf{B}\right)\;\d V.
+\]
+Note that while this is true, it does not follow directly from our results for pure magnetic and pure electric fields. It is entirely plausible that when both are present, they interact in weird ways that increases the energy stored. However, it turns out that this does not happen, and this formula is right.
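As a consistency check of the magnetostatic term, the work $\frac{1}{2}LI^2$ done to set up a current in the solenoid example above should equal $\frac{1}{2\mu_0}\int \mathbf{B}\cdot \mathbf{B}\;\d V$ for the uniform field inside it. A quick sketch with hypothetical numbers:

```python
import math

mu0 = 4 * math.pi * 1e-7

# Hypothetical solenoid: N turns per metre, volume V, current I
N, V, I = 1e4, 1e-5, 2.0

W_inductance = 0.5 * (mu0 * N**2 * V) * I**2   # (1/2) L I^2, L = mu0 N^2 V

B = mu0 * I * N                                # uniform field inside
U_field = B**2 / (2 * mu0) * V                 # field energy B^2/(2 mu0) * V

assert math.isclose(W_inductance, U_field)
```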
+
+\subsection{Resistance}
+The story so far is that we change the flux, an emf is produced, and charges are accelerated. In principle, we should be able to compute the current. But accelerating charges are complicated (they emit light). Instead, we invoke a new effect, friction.
+
+In a wire, this is called \emph{resistance}. In most materials, the effect of resistance is that $\mathcal{E}$ is proportional to the speed of the charged particles, rather than the acceleration.
+
+We can think of the particles as accelerating for a very short period of time, and then reaching a terminal velocity. So
+\begin{law}[Ohm's law]
+ \[
+ \mathcal{E} = IR,
+ \]
+\end{law}
+\begin{defi}[Resistance]
+ The \emph{resistance} is the $R$ in Ohm's law.
+\end{defi}
+Note that $\mathcal{E} = \int \mathbf{E}\cdot \d \mathbf{r}$ and $\mathbf{E} = - \nabla\phi$. So $\mathcal{E} = V$, the potential difference. So Ohm's law can also be written as $V = IR$.
+
+\begin{defi}[Resistivity and conductivity]
+For the wire of length $L$ and cross-sectional area $A$, we define the \emph{resistivity} to be
+\[
+ \rho = \frac{AR}{L},
+\]
+and the conductivity is
+\[
+ \sigma = \frac{1}{\rho}.
+\]
+\end{defi}
+These are properties only of the substance and not the actual shape of the wire. Then Ohm's law reads
+\begin{law}[Ohm's law]
+ \[
+ \mathbf{J} = \sigma \mathbf{E}.
+ \]
+\end{law}
We can formally derive Ohm's law by considering the field and the interactions between the electrons and the atoms, but we're not going to do that.
+
+\begin{eg}\leavevmode
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0.4, 1) -- (-0.1, 0) node [pos = 0.4, left] {$d$};
+ \draw [->] (-0.1, 0) -- (0.4, 1);
+ \draw (0.75, 0.5) -- (0.75, -0.3);
+ \draw (1.25, 0.5) -- (1.25, -0.3);
+ \draw (1.75, 0.5) -- (1.75, -0.3);
+ \draw (2.25, 0.5) -- (2.25, -0.3);
+ \draw (2.75, 0.5) -- (2.75, -0.3);
+
+ \draw [fill=gray!50!white, opacity=0.8] (3, 0) -- (0, 0) -- (0.5, 1) -- (3.5, 1);
+ \draw [ultra thick] (1.5, -0.3) -- (2.2, 1.3);
+
+ \draw [->] (0.75, 0.5) -- (0.75, 1.5);
+ \draw [->] (1.25, 0.5) -- (1.25, 1.5);
+ \draw [->] (1.75, 0.5) -- (1.75, 1.5);
+ \draw [->] (2.25, 0.5) -- (2.25, 1.5);
+ \draw [->] (2.75, 0.5) -- (2.75, 1.5);
+
+ \draw [->] (4, 0) -- (4, 1) node [above] {$z$};
+ \draw [->] (4, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (4, 0) -- (4.8, 0.6) node [anchor = south west] {$y$};
+ \end{tikzpicture}
+ \end{center}
+ Suppose the bar moves to the left with speed $v$. Suppose that the sliding bar has resistance $R$, and the remaining parts of the circuit are superconductors with no resistance.
+
+ There are two dynamical variables, the position of the bar $x(t)$, and the current $I(t)$.
+
 If a current $I$ flows, the force on a small segment of the bar of length $\delta y$ is
 \[
 \delta\mathbf{F} = IB\,\delta y\;\hat{\mathbf{y}}\times \hat{\mathbf{z}}.
 \]
 So the total force on the bar of length $\ell$ is
 \[
 \mathbf{F} = IB\ell\hat{\mathbf{x}}.
 \]
+ So
+ \[
+ m\ddot{x} = IB\ell.
+ \]
+ We can compute the emf as
+ \[
+ \mathcal{E} = -\frac{\d \Phi}{\d t} = -B\ell\dot{x}.
+ \]
+ So Ohm's law gives
+ \[
+ IR = -B\ell\dot{x}.
+ \]
+ Hence
+ \[
+ m\ddot{x} = -\frac{B^2\ell^2}{R}\dot{x}.
+ \]
+ Integrating once gives
+ \[
+ \dot{x}(t) = -ve^{-B^2\ell^2t/mR}.
+ \]
+\end{eg}
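We can check this exponential decay against a direct numerical integration of $m\ddot{x} = -\frac{B^2\ell^2}{R}\dot{x}$ (a sketch; all parameter values below are made up):

```python
import math

m, B, ell, R, v = 0.1, 0.5, 0.2, 2.0, 3.0   # hypothetical parameters
k = B**2 * ell**2 / (m * R)                  # decay rate in xdot = -v e^{-kt}

# Explicit Euler integration of m * xddot = -(B^2 l^2 / R) * xdot up to t = 1
dt, xdot = 1e-4, -v
for _ in range(10_000):
    xdot += dt * (-(B**2 * ell**2 / R) * xdot / m)

analytic = -v * math.exp(-k * 1.0)
assert math.isclose(xdot, analytic, rel_tol=1e-3)
```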
+With resistance, we need to do work to keep a constant current. In time $\delta t$, the work needed is
+\[
+ \delta W = \mathcal{E} I\delta t = I^2 R \delta t
+\]
+using Ohm's law. So
+\begin{defi}[Joule heating]
+ \emph{Joule heating} is the energy lost in a circuit due to friction. It is given by
+ \[
+ \frac{\d W}{\d t} = I^2 R.
+ \]
+\end{defi}
+\subsection{Displacement currents}
+Recall that the Maxwell equations are
+\begin{align*}
+ \nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}\\
+ \nabla \cdot \mathbf{B} &= 0\\
+ \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}\\
+ \nabla \times \mathbf{B} &= \mu_0\left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right)
+\end{align*}
+So far we have studied all the equations apart from the $\mu_0\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$ term. Historically this term is called the \emph{displacement current}.
+
The need for this term was discovered purely mathematically: people realized that Maxwell's equations would be inconsistent with charge conservation without it.
+
+Without the term, the last equation is
+\[
+ \nabla \times \mathbf{B} = \mu_0 \mathbf{J}.
+\]
+Take the divergence of the equation to obtain
+\[
+ \mu_0 \nabla\cdot \mathbf{J} = \nabla\cdot (\nabla\times \mathbf{B}) = 0.
+\]
+But charge conservation says that
+\[
+ \dot{\rho} + \nabla\cdot \mathbf{J} = 0.
+\]
These can both hold only if $\dot{\rho} = 0$. But we clearly can change the charge density --- pick up a charge and move it elsewhere! Contradiction.
+
+
+With the new term, taking the divergence yields
+\[
+ \mu_0\left(\nabla\cdot \mathbf{J} + \varepsilon_0 \nabla\cdot \frac{\partial \mathbf{E}}{\partial t}\right) = 0.
+\]
+Since partial derivatives commute, we have
+\[
+ \varepsilon_0\nabla\cdot \frac{\partial \mathbf{E}}{\partial t} = \varepsilon_0 \frac{\partial}{\partial t} (\nabla\cdot \mathbf{E}) = \dot{\rho}
+\]
+by the first Maxwell's equation. So it gives
+\[
+ \nabla\cdot \mathbf{J} + \dot{\rho} = 0.
+\]
+So with the new term, not only is Maxwell's equation consistent with charge conservation --- it actually implies charge conservation.
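The key identity used twice here, $\nabla\cdot(\nabla\times \mathbf{B}) = 0$, can be verified symbolically (a SymPy sketch with three arbitrary smooth components):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Bx = sp.Function('B_x')(x, y, z)
By = sp.Function('B_y')(x, y, z)
Bz = sp.Function('B_z')(x, y, z)

# curl B, written out in Cartesian components
curl = [sp.diff(Bz, y) - sp.diff(By, z),
        sp.diff(Bx, z) - sp.diff(Bz, x),
        sp.diff(By, x) - sp.diff(Bx, y)]

# div(curl B) vanishes identically, since partial derivatives commute
div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
assert sp.simplify(div_curl) == 0
```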
+
+\subsection{Electromagnetic waves}
We now look for solutions to Maxwell's equations in the case where $\rho = 0$ and $\mathbf{J} = \mathbf{0}$, i.e.\ in a vacuum.
+
+Differentiating the fourth equation with respect to time,
+\begin{align*}
+ \mu_0\varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} &= \frac{\partial }{\partial t}(\nabla\times \mathbf{B})\\
 &= \nabla\times \frac{\partial \mathbf{B}}{\partial t}\\
 &= -\nabla\times(\nabla\times \mathbf{E})\\
 &= -\nabla(\underbrace{\nabla\cdot \mathbf{E}}_{ = \rho/\varepsilon_0 = 0}) + \nabla^2 \mathbf{E}\quad \text{ by vector identities}\\
 &= \nabla^2 \mathbf{E}.
+\end{align*}
+So each component of $\mathbf{E}$ obeys the wave equation
+\[
+ \frac{1}{c^2}\frac{\partial^2 \mathbf{E}}{\partial t^2} - \nabla^2 \mathbf{E} = 0.
+\]
+We can do exactly the same thing to show that $\mathbf{B}$ obeys the same equation:
+\[
+ \frac{1}{c^2}\frac{\partial^2 \mathbf{B}}{\partial t^2} - \nabla^2 \mathbf{B} = 0,
+\]
+where the speed of the wave is
+\[
 c = \frac{1}{\sqrt{\mu_0\varepsilon_0}}.
+\]
+Recall from the first lecture that
+\begin{itemize}
+ \item $\varepsilon_0 = \SI{8.85e-12}{\per\metre\cubed\per\kilogram\s\squared\coulomb\squared}$
 \item $\mu_0 = \SI{4\pi e-7}{\metre\kilogram\per\coulomb\squared}$
+\end{itemize}
+So
+\[
 c = \SI{3e8}{\metre\per\second},
+\]
+which is the speed of light!
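Plugging in the numbers quoted above (a quick arithmetic check):

```python
import math

eps0 = 8.854187817e-12   # vacuum permittivity, m^-3 kg^-1 s^2 C^2
mu0 = 4 * math.pi * 1e-7 # vacuum permeability, m kg C^-2

c = 1 / math.sqrt(mu0 * eps0)
print(c)   # roughly 3.00e8 m/s
assert abs(c - 299_792_458) < 1e3
```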
+
+We now look for plane wave solutions which propagate in the $x$ direction, and are independent of $y$ and $z$. So we can write our electric field as
+\[
+ \mathbf{E}(\mathbf{x}) = (E_x(x, t), E_y(x, t), E_z(x, t)).
+\]
Hence any derivatives with respect to $y$ and $z$ are zero. Since we know that $\nabla \cdot \mathbf{E} = 0$, $E_x$ must be constant. We take $E_x = 0$. Without loss of generality, assume $E_z = 0$, i.e.\ the wave propagates in the $x$ direction and oscillates in the $y$ direction. Then we look for solutions of the form
+\[
+ \mathbf{E} = (0, E(x, t), 0),
+\]
+with
+\[
 \frac{1}{c^2}\frac{\partial^2 E}{\partial t^2} - \frac{\partial^2 E}{\partial x^2} = 0.
+\]
+The general solution is
+\[
+ E(x, t) = f(x - ct) + g(x + ct).
+\]
+The most important solutions are the \emph{monochromatic} waves
+\[
+ E = E_0 \sin (kx - \omega t).
+\]
+\begin{defi}[Amplitude, wave number and frequency]\leavevmode
+ \begin{enumerate}
+ \item $E_0$ is the \emph{amplitude}
+ \item $k$ is the \emph{wave number}.
+ \item $\omega$ is the \emph{(angular) frequency}.
+ \end{enumerate}
+ The wave number is related to the wavelength by
+ \[
+ \lambda = \frac{2\pi}{k}.
+ \]
+ Since the wave has to travel at speed $c$, we must have
+ \[
 \omega^2 = c^2 k^2.
+ \]
 So the value of $k$ determines the value of $\omega$, and vice versa.
+\end{defi}
To solve for $\mathbf{B}$, we use
+\[
+ \nabla\times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}.
+\]
So $\mathbf{B} = (0, 0, B)$ for some $B$. Hence the equation gives
+\[
+ \frac{\partial B}{\partial t} = -\frac{\partial E}{\partial x}.
+\]
+So
+\[
+ B = \frac{E_0}{c}\sin(kx - \omega t).
+\]
+Note that this is uniquely determined by $\mathbf{E}$, and we do not get to choose our favorite amplitude, frequency etc for the magnetic component.
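These relations can be checked symbolically (a SymPy sketch):

```python
import sympy as sp

x, t, E0, k, c = sp.symbols('x t E_0 k c', positive=True)
omega = c * k                       # dispersion relation omega = c k

E = E0 * sp.sin(k * x - omega * t)
B = (E0 / c) * sp.sin(k * x - omega * t)

# Faraday's law in this geometry: dB/dt = -dE/dx
assert sp.simplify(sp.diff(B, t) + sp.diff(E, x)) == 0

# Both profiles satisfy the one-dimensional wave equation
assert sp.simplify(sp.diff(E, t, 2) / c**2 - sp.diff(E, x, 2)) == 0
```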
+
+We see that $\mathbf{E}$ and $\mathbf{B}$ oscillate in phase, orthogonal to each other, and orthogonal to the direction of travel. These waves are what we usually consider to be ``light''.
+
+Also note that Maxwell's equations are linear, so we can add up two solutions to get a new one. This is particularly important, since it allows light waves to pass through each other without interfering.
+
It is useful to use complex notation. The most general monochromatic wave takes the form
+\[
+ \mathbf{E} = \mathbf{E}_0 \exp(i(\mathbf{k}\cdot \mathbf{x} - \omega t)),
+\]
+and
+\[
+ \mathbf{B} = \mathbf{B}_0 \exp(i(\mathbf{k}\cdot \mathbf{x} - \omega t)),
+\]
with $\omega^2 = c^2 |\mathbf{k}|^2$.
+\begin{defi}[Wave vector]
+ $\mathbf{k}$ is the \emph{wave vector}, which is real.
+\end{defi}
+The ``actual'' solutions are just the real part of these expressions.
+
+There are some restrictions to the values of $\mathbf{E}_0$ etc due to the Maxwell's equations:
+\begin{align*}
+ \nabla\cdot \mathbf{E} = 0 &\Rightarrow \mathbf{k}\cdot \mathbf{E}_0 = 0\\
+ \nabla\cdot \mathbf{B} = 0 &\Rightarrow \mathbf{k}\cdot \mathbf{B}_0 = 0\\
+ \nabla\times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} &\Rightarrow \mathbf{k}\times \mathbf{E}_0 = \omega \mathbf{B}_0
+\end{align*}
+If $\mathbf{E}_0$ and $\mathbf{B}_0$ are real, then $\mathbf{k}, \mathbf{E}_0/c$ and $\mathbf{B}_0$ form a right-handed orthogonal triad of vectors.
+\begin{defi}[Linearly polarized wave]
+ A solution with real $\mathbf{E}_0, \mathbf{B}_0, \mathbf{k}$ is said to be \emph{linearly polarized}.
+\end{defi}
+This says that the waves oscillate up and down in a fixed plane.
+
+If $\mathbf{E}_0$ and $\mathbf{B}_0$ are complex, then the polarization is not in a fixed direction. If we write
+\[
+ \mathbf{E}_0 = \boldsymbol\alpha + i\boldsymbol\beta
+\]
+for $\boldsymbol\alpha, \boldsymbol\beta\in \R^3$, then the ``real solution'' is
+\[
+ \Re(\mathbf{E}) = \boldsymbol\alpha\cos(\mathbf{k}\cdot \mathbf{x} - \omega t) - \boldsymbol\beta \sin (\mathbf{k}\cdot \mathbf{x} - \omega t).
+\]
Note that $\nabla\cdot \mathbf{E} = 0$ requires that $\mathbf{k}\cdot \boldsymbol\alpha = \mathbf{k}\cdot \boldsymbol\beta = 0$. It is not difficult to see that this traces out an ellipse.
+\begin{defi}[Elliptically polarized wave]
+ If $\mathbf{E}_0$ and $\mathbf{B}_0$ are complex, then it is said to be \emph{elliptically polarized}. In the special case where $|\boldsymbol\alpha| = |\boldsymbol\beta|$ and $\boldsymbol\alpha \cdot \boldsymbol\beta = 0$, this is \emph{circular polarization}.
+\end{defi}
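For the circularly polarized case, the real field has constant magnitude and rotates in a fixed plane. A quick numerical sketch with $\boldsymbol\alpha = \hat{\mathbf{x}}$, $\boldsymbol\beta = \hat{\mathbf{y}}$ (an arbitrary choice of orthogonal, equal-length vectors):

```python
import numpy as np

# Circular polarization: alpha, beta orthogonal with equal lengths,
# both perpendicular to k (taken along z here)
alpha = np.array([1.0, 0.0, 0.0])
beta = np.array([0.0, 1.0, 0.0])

phase = np.linspace(0, 2 * np.pi, 500)   # values of k.x - omega t
E_real = np.outer(np.cos(phase), alpha) - np.outer(np.sin(phase), beta)

# The magnitude of Re(E) is constant: the tip of E traces a circle
mags = np.linalg.norm(E_real, axis=1)
assert np.allclose(mags, 1.0)
```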
+
+We can have a simple application: why metals are shiny.
+
+A metal is a conductor. Suppose the region $x > 0$ is filled with a conductor.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] (0, 2) rectangle (1, -2);
+ \draw [->] (-2, 0) -- (-1, 0) node [pos =0.5, above] {$\mathbf{E}_\mathrm{inc}$};
+ \end{tikzpicture}
+\end{center}
A light wave is incident on the conductor, i.e.\
+\[
 \mathbf{E}_{\mathrm{inc}} = E_0 \hat{\mathbf{y}} \exp(i(kx - \omega t)),
+\]
+with $\omega = ck$.
+
We know that inside a conductor, $\mathbf{E} = \mathbf{0}$, and at the surface, $\mathbf{E}_{\parallel} = \mathbf{0}$. So we need $\mathbf{E} \cdot \hat{\mathbf{y}}|_{x = 0} = 0$.
+
+Then clearly our solution above does not satisfy the boundary conditions!
+
+To achieve the boundary conditions, we add a reflected wave
+\[
+ \mathbf{E}_{\mathrm{ref}} = -E_0 \hat{\mathbf{y}} \exp(i(-kx - \omega t)).
+\]
+Then our total electric field is
+\[
+ \mathbf{E} = \mathbf{E}_{\mathrm{inc}} + \mathbf{E}_{\mathrm{ref}}.
+\]
+Then this is a solution to Maxwell's equations since it is a sum of two solutions, and satisfies $\mathbf{E}\cdot \hat{\mathbf{y}}|_{x = 0} = 0$ as required.
+
+Maxwell's equations says $\nabla\times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$. So
+\begin{align*}
+ \mathbf{B}_{\mathrm{inc}} &= \frac{E_0}{c}\hat{\mathbf{z}} \exp(i(kx - \omega t))\\
+ \mathbf{B}_{\mathrm{ref}} &= \frac{E_0}{c}\hat{\mathbf{z}} \exp(i(-kx - \omega t))
+\end{align*}
+This obeys $\mathbf{B}\cdot \hat{\mathbf{n}} = 0$, where $\hat{\mathbf{n}}$ is the normal to the surface. But we also have
+\[
 \mathbf{B}\cdot \hat{\mathbf{z}}|_{x = 0^-} = \frac{2E_0}{c}e^{-i\omega t}.
+\]
+So there \emph{is} a magnetic field at the surface. However, we know that inside the conductor, we have $\mathbf{B} = 0$. This means that there is a discontinuity across the surface! We know that discontinuity happens when there is a surface current. Using the formula we've previously obtained, we know that the surface current is given by
+\[
+ \mathbf{K} = \pm\frac{2E_0}{\mu_0 c}\hat{\mathbf{y}} e^{-i \omega t}.
+\]
So shining a light onto a metal will cause an oscillating current. We can imagine the process as follows: the incident light hits the conductor and causes an oscillating current, which generates the reflected wave (since accelerating charges generate light --- cf.\ IID Electrodynamics).
+
+We can do the same for light incident at an angle, and prove that the angle of incidence is equal to the angle of reflection.
+
+\subsection{Poynting vector}
+Electromagnetic waves carry energy --- that's how the Sun heats up the Earth! We will compute how much.
+
+The energy stored in a field in a volume $V$ is
+\[
+ U = \int_V \left(\frac{\varepsilon_0}{2}\mathbf{E}\cdot \mathbf{E} + \frac{1}{2\mu_0}\mathbf{B}\cdot \mathbf{B}\right)\;\d V.
+\]
+We have
+\begin{align*}
+ \frac{\d U}{\d t} &= \int_V \left(\varepsilon_0 \mathbf{E}\cdot \frac{\partial \mathbf{E}}{\partial t} + \frac{1}{\mu_0}\mathbf{B}\cdot \frac{\partial \mathbf{B}}{\partial t}\right)\;\d V\\
+ &= \int_V \left(\frac{1}{\mu_0}\mathbf{E}\cdot (\nabla\times \mathbf{B}) - \mathbf{E}\cdot \mathbf{J} - \frac{1}{\mu_0}\mathbf{B}\cdot (\nabla\times \mathbf{E})\right)\;\d V.
+\end{align*}
+But
+\[
+ \mathbf{E}\cdot (\nabla \times \mathbf{B}) - \mathbf{B}\cdot (\nabla\times \mathbf{E}) = \nabla\cdot (\mathbf{E}\times \mathbf{B}),
+\]
+by vector identities. So
+\[
+ \frac{\d U}{\d t} = -\int_V \mathbf{J}\cdot \mathbf{E}\;\d V - \frac{1}{\mu_0}\int_S (\mathbf{E}\times \mathbf{B})\cdot \;\d \mathbf{S}.
+\]
+Recall that the work done on a particle of charge $q$ moving with velocity $\mathbf{v}$ is $\delta W = q\mathbf{v}\cdot \mathbf{E}\;\delta t$. So the $\mathbf{J}\cdot \mathbf{E}$ term is the work done on the charged particles in $V$. We can thus write
+\begin{thm}[Poynting theorem]
+ \[
+ \underbrace{\frac{\d U}{\d t} + \int_V \mathbf{J}\cdot \mathbf{E} \;\d V}_{\text{Total change of energy in $V$ (fields + particles)}} = \underbrace{-\frac{1}{\mu_0}\int_S (\mathbf{E}\times \mathbf{B})\cdot\d \mathbf{S}}_{\text{Energy that escapes through the surface }S}.
+ \]
+\end{thm}
+
+\begin{defi}[Poynting vector]
+ The \emph{Poynting vector} is
+ \[
+ \mathbf{S} = \frac{1}{\mu_0}\mathbf{E}\times \mathbf{B}.
+ \]
+\end{defi}
+The Poynting vector characterizes the energy transfer.
+
+For a linearly polarized wave,
+\begin{align*}
+ \mathbf{E} &= \mathbf{E}_0 \sin (\mathbf{k}\cdot \mathbf{x} - \omega t)\\
+ \mathbf{B} &= \frac{1}{c}(\hat{\mathbf{k}}\times \mathbf{E}_0)\sin (\mathbf{k}\cdot \mathbf{x} - \omega t).
+\end{align*}
+So
+\[
+ \mathbf{S} = \frac{E_0^2}{c\mu_0}\hat{\mathbf{k}} \sin^2(\mathbf{k}\cdot\mathbf{x} - \omega t).
+\]
+The average over $T = \frac{2\pi}{\omega}$ is thus
+\[
+ \bra\mathbf{S}\ket = \frac{E_0^2}{2c\mu_0}\hat{\mathbf{k}}.
+\]
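+This uses the fact that the average of $\sin^2$ over a full period is $\frac{1}{2}$:
+\[
+ \frac{1}{T}\int_0^T \sin^2 (\mathbf{k}\cdot \mathbf{x} - \omega t)\;\d t = \frac{1}{T}\int_0^T \frac{1 - \cos (2(\mathbf{k}\cdot \mathbf{x} - \omega t))}{2}\;\d t = \frac{1}{2},
+\]
+since the cosine term integrates to zero over a full period.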
+\section{Electromagnetism and relativity}
+In this section, we will write Maxwell's equations in the language of relativity, and show that the two are compatible. We will first start with a review of special relativity. We will introduce it in a more formal approach, forgetting about all that fun about time dilation, twins and spaceships.
+
+\subsection{A review of special relativity}
+\subsubsection{A geometric interlude on (co)vectors}
+Let's first look at normal, Euclidean geometry we know from IA Vectors and Matrices. We all know what a vector is. A vector is, roughly, a direction. For example, the velocity of a particle is a vector --- it points in the direction where the particle is moving. On the other hand, position is not quite a vector. It is a vector only after we pick an ``origin'' of our space. Afterwards, we can think of position as a vector pointing in the direction from the origin to where we are.
+
+Perhaps a slightly less familiar notion is that of a \emph{covector}. A covector is some mathematical object that takes in a vector and spits out a number, and it has to do this in a linear way. In other words, given a vector space $V$, a covector is a linear map $V \to \R$.
+
+One prominent example is that the derivative $\d f$ of a function $f: \R^n \to \R$ at (say) the origin is naturally a \emph{co}vector! If you give me a direction, the derivative tells us how fast $f$ changes in that direction. In other words, given a vector $\mathbf{v}$, $\d f(\mathbf{v})$ is the \emph{directional derivative} of $f$ in the direction of $\mathbf{v}$.
+
+But, you say, we were taught that the derivative of $f$ is the gradient, which is a vector. Indeed, you've probably never heard of the word ``covector'' before. If we wanted to compute the directional derivative of $f$ in the direction of $\mathbf{v}$, we would simply compute the gradient $\nabla f$, and then take the dot product $\nabla f \cdot \mathbf{v}$. We don't need to talk about covectors, right?
+
+The key realization is that to make the last statement, we need the notion of a dot product, or \emph{inner product}. Once we have an inner product, it is an easy mathematical fact that every covector $L: V \to \R$ is uniquely of the form
+\[
+ L(\mathbf{v}) = \mathbf{v} \cdot \mathbf{w}
+\]
+for some fixed $\mathbf{w} \in V$, and conversely any vector gives a covector this way. Thus, whenever we have a dot product on our hands, the notion of covector is redundant --- we can just talk about vectors.
+
+In special relativity, we still have an inner product. However, the inner product is not a ``genuine'' inner product. For example, the inner product of a (non-zero) vector with itself might be zero, or even negative! And in this case, we need to be careful about the distinction between a vector and a covector, and it is worth understanding the difference more carefully.
+
+Since we eventually want to do computations, it is extremely useful to look at these in coordinates. Suppose we have picked a basis $\mathbf{e}_1, \cdots, \mathbf{e}_n$. Then by definition of a basis, we can write any vector $\mathbf{v}$ as $\mathbf{v} = \sum v^i \mathbf{e}_i$. These $v^i$ are the coordinates of $\mathbf{v}$ in this coordinate system, and by convention, we write the indices with superscript. We also say they have \emph{upper indices}.
+
+We can also introduce coordinates for \emph{co}vectors. If $L$ is a covector, then we can define its coordinates by
+\[
+ L_i = L(\mathbf{e}_i).
+\]
+By convention, the indices are now written as subscripts, or \emph{lower indices}. Using the summation convention, we have
+\[
+ L(\mathbf{v}) = L(v^i \mathbf{e}_i) = v^i L(\mathbf{e}_i) = v^i L_i.
+\]
+Previously, when we introduced the summation convention, we had a ``rule'' that each index can only appear at most twice, and we sum over it when it is repeated. Here we can refine this rule, and give good meaning to it:
+\begin{rrule}
+ We can only contract an upper index with a lower index.
+\end{rrule}
+The interpretation is that ``contraction'' really just means applying a covector to a vector --- a very natural thing to do. It doesn't make sense to ``apply'' a vector to a vector, or a covector to a covector. It also doesn't make sense to repeat the same index three times, because we can only apply a single covector to a single vector.
+
+It is common to encounter some things that are neither vectors nor covectors, but we still want to apply the summation convention to them. For example, we want to write $\mathbf{v} = v^i \mathbf{e}_i$, even though $\mathbf{e}_i$ is not a covector. It turns out in all cases we encounter, there is one choice of upper or lower indexing that makes it consistent with our summation convention. For example, we should write $\mathbf{e}_i$, not $\mathbf{e}^i$, so that $\mathbf{v} = v^i \mathbf{e}_i$ works out.
+
+We said previously that the existence of an inner product allows us to convert a covector into a vector, and vice versa. So let's see how inner products work in a choice of basis.
+
+If the basis we picked were orthonormal, then for any vectors $\mathbf{v}$ and $\mathbf{w}$, we simply have $\mathbf{v} \cdot \mathbf{w} = \mathbf{v}^T \mathbf{w}$. Alternatively, we have $\mathbf{v} \cdot \mathbf{w} = \sum_i v^i w^i$. If our basis were not orthonormal (which is necessarily the case in SR), we can define the matrix $\boldsymbol\eta$ by
+\[
+ \eta_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j.
+\]
+We will later say that $\boldsymbol\eta$ is a $(0, 2)$-tensor, after we define what that means. The idea is that it takes in \emph{two} vectors, and returns a number (namely their inner product). This justifies our choice to use lower indices for both coordinates. For now, we can argue for this choice by noting that the indices on $\mathbf{e}_i$ and $\mathbf{e}_j$ are already lower.
+
+Using this $\boldsymbol\eta$, we can compute
+\[
+ \mathbf{v} \cdot \mathbf{w} = (v^i \mathbf{e}_i) \cdot (w^j \mathbf{e}_j) = v^i w^j (\mathbf{e}_i \cdot \mathbf{e}_j) = v^i w^j \eta_{ij}.
+\]
+In other words, we have
+\[
+ \mathbf{v} \cdot \mathbf{w} = \mathbf{v}^T \boldsymbol\eta \mathbf{w}.
+\]
+We see that this matrix $\boldsymbol\eta$ encodes all the information about the inner product in this basis. This is known as the \emph{metric}. If we picked an orthonormal basis, then $\boldsymbol\eta$ would be the identity matrix.
+
+Now it is easy to convert a vector into a covector. The covector $(-\cdot \mathbf{w})$ is given by $\mathbf{v} \cdot \mathbf{w} = v^i (w^j\eta_{ij})$. We can then read off the coordinates of the covector to be
+\[
+ w_i = w^j \eta_{ij}.
+\]
+In general, these coordinates $w_i$ are \emph{not} the same as $w^i$ --- they agree only if $\eta_{ij}$ is the identity matrix, i.e.\ the basis is orthonormal. Thus, distinguishing between vectors and covectors now has a practical purpose. Each ``vector'' has two sets of coordinates --- one when you think of it as a vector, and one when you turn it into a covector, and they are different. So the positioning of the indices helps us keep track of which coordinates we are talking about.
+
+We can also turn a covector $w_i$ back to a vector, if we take the inverse of the matrix $\boldsymbol\eta$, which we will write as $\eta^{ij}$. Then
+\[
+ w^i = w_j \eta^{ij}.
+\]
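+For example, if we work in two dimensions and our basis is such that
+\[
+ \boldsymbol\eta =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix},
+\]
+then $w_1 = w^1$ and $w_2 = -w^2$ --- lowering the index flips the sign of the second coordinate. This is exactly the kind of behaviour we will meet in special relativity.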
+\subsubsection{Transformation rules}
+It is often the case that in relativity, we can write down the coordinates of an object in any suitable basis. However, it need not be immediately clear to us whether that object should be a vector or covector, or perhaps neither. Thus, we need a way to identify whether objects are vectors or covectors. To do so, we investigate how the coordinates of vectors and covectors transform when we change basis.
+
+By definition, if we have a change of basis matrix $P$, then the coordinates of a vector transform as $\mathbf{v} \mapsto P \mathbf{v}$, or equivalently
+\[
+ v^i \mapsto v'^i = P^i\!_j v^j.
+\]
+How about covectors? If $L$ is a covector and $\mathbf{v}$ is a vector, then $L(\mathbf{v})$ is a number. In particular, its value does not depend on basis. Thus, we know that the sum $L_i v^i$ must be invariant under any change of basis. Thus, if $L_i \mapsto \tilde{L}_i$, then we know
+\[
+ \tilde{L}_i P^i\!_j v^j = L_j v^j.
+\]
+Thus, we know $\tilde{L}_i P^i\!_j = L_j$. To obtain $\tilde{L}_i$, we have to invert $P^i\!_j$ and multiply $L_j$ with that.
+
+However, our formalism in terms of indices does not provide us with a way of denoting the inverse of a matrix pleasantly. We can avert this problem by focusing on \emph{orthogonal} matrices, i.e.\ the matrices that preserve the metric. We say $P$ is orthogonal if
+\[
+ P^T \eta P = \eta,
+\]
+or equivalently,
+\[
+ P^i\!_j \eta_{ik} P^k\!_\ell = \eta_{j\ell}.
+\]
+This implies the inverse of $P$ has coordinates
+\[
+ P_i{}^j = (\eta^{-1} P^T \eta)_i{}^j = \eta_{i\ell} \eta^{jk} P^\ell\!_k,
+\]
+which is the fancy way of describing the ``transpose'' (which is not the literal transpose unless $\eta = I$). This is just reiterating the fact that the inverse of an orthogonal matrix is its transpose. When we do special relativity, the orthogonal matrices are exactly the Lorentz transformations.
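+Indeed, the orthogonality relation shows that this is the inverse:
+\[
+ P_i{}^j P^i\!_k = \eta^{jm}\left(P^i\!_k \eta_{i\ell} P^\ell\!_m\right) = \eta^{jm}\eta_{km} = \delta^j_k.
+\]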
+
+Thus, we find that if $P$ is orthogonal, then covectors transform as
+\[
+ L_i \mapsto L_i' = P_i{}^j L_j.
+\]
+We can now write down the ``physicists' '' definition of vectors and covectors. Before we do that, we restrict to the case of interest in special relativity. The reason is that we started off this section with the caveat ``in any \emph{suitable} basis''. We shall not bother to explain what ``suitable'' means in general, but just do it in the case of interest.
+
+\subsubsection{Vectors and covectors in SR}
+Recall from IA Dynamics and Relativity that in special relativity, we combine space and time into one single object. For example, the position and time of an event is now packed into a single \emph{4-vector} in spacetime
+\[
+ X^\mu =
+ \begin{pmatrix}
+ ct\\
+ x\\
+ y\\
+ z
+ \end{pmatrix}.
+\]
+Here the index $\mu$ ranges from $0$ to $3$. In special relativity, we use Greek letters (e.g.\ $\mu, \nu, \rho, \sigma$) to denote the indices. If we want to refer to the spatial components ($1, 2, 3$) only, we will use Roman letters (e.g.\ $i, j, k$) to denote them.
+
+As we initially discussed, position is very naturally thought of as a vector, and we will take this as our starting postulate. We will then use the transformation rules to identify whether any other thing should be a vector or a covector.
+
+In the ``standard'' basis, the metric we use is the \term{Minkowski metric}, defined by
+\[
+ \eta_{\mu\nu} =
+ \begin{pmatrix}
+ +1 & 0 & 0 & 0\\
+ 0 & -1 & 0 & 0\\
+ 0 & 0 & -1 & 0\\
+ 0 & 0 & 0 & -1
+ \end{pmatrix}.\tag{$*$}
+\]
+This is \emph{not} positive definite, hence not a genuine inner product. However, it is still invertible, which is what our previous discussion required. This means, for example,
+\[
+ X \cdot X = (ct)^2 - (x^2 + y^2 + z^2),
+\]
+the spacetime interval.
+
+\begin{defi}[Orthonormal basis]
+ An \emph{orthonormal basis} of spacetime is a basis where the metric takes the form $(*)$. An (orthonormal) coordinate system is a choice of orthonormal basis.
+\end{defi}
+
+\begin{defi}[Lorentz transformations]
+ A \emph{Lorentz transformation} is a change-of-basis matrix that preserves the inner product, i.e.\ orthogonal matrices under the Minkowski metric.
+\end{defi}
+Thus, Lorentz transformations send orthonormal bases to orthonormal bases. For example, the familiar Lorentz boost
+\begin{align*}
+ ct' &= \gamma\left(ct - \frac{v}{c}x\right)\\
+ x' &= \gamma\left(x - \frac{v}{c}ct\right)\\
+ y' &= y\\
+ z' &= z
+\end{align*}
+is the Lorentz transformation given by the matrix
+\[
+ \Lambda^\mu\!_\nu =
+ \begin{pmatrix}
+ \gamma & -\gamma v/c & 0 &0\\
+ -\gamma v/c & \gamma & 0 & 0\\
+ 0 & 0 & 1 & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix}.
+\]
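+We can check the orthogonality condition explicitly --- for example, the $00$ entry of $\Lambda^T \eta \Lambda$ is
+\[
+ \gamma^2 - \frac{\gamma^2 v^2}{c^2} = \gamma^2\left(1 - \frac{v^2}{c^2}\right) = 1 = \eta_{00},
+\]
+by the definition of $\gamma$, and the other entries work out similarly.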
+Other examples include rotations of the space dimensions, which are given by matrices of the form
+\[
+ \Lambda^\mu\!_\nu =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & \\
+ 0 & & R \\
+ 0
+ \end{pmatrix},
+\]
+with $R$ a rotation matrix.
+
+We can now write down our practical definition of vectors and covectors.
+\begin{defi}[Vectors and covectors]
+ A vector is an assignment of $4$ numbers $V^\mu, \mu = 0, 1, 2, 3$ to each coordinate system such that under a change of basis by $\Lambda$, the coordinates $V^\mu$ transform as $V^\mu \mapsto \Lambda^\mu\!_\nu V^\nu$.
+
+ A covector is an assignment of $4$ numbers $V_\mu, \mu = 0, 1, 2, 3$ to each coordinate system such that under a change of basis by $\Lambda$, the coordinates $V_\mu$ transform as $V_\mu \mapsto \Lambda_\mu\!^\nu V_\nu$.
+\end{defi}
+
+\begin{eg}
+ By assumption, the position $X^\mu$ is a vector.
+\end{eg}
+
+\begin{eg}
+ Suppose we have a trajectory of a particle $X^\mu(s)$ in spacetime. Then $\frac{\d}{\d s} X^\mu(s)$ is also a vector, by checking it transforms.
+\end{eg}
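+Indeed, since the entries of the change-of-basis matrix are constants, we have
+\[
+ \frac{\d}{\d s} X'^\mu(s) = \frac{\d}{\d s}\left(\Lambda^\mu\!_\nu X^\nu(s)\right) = \Lambda^\mu\!_\nu \frac{\d}{\d s}X^\nu(s),
+\]
+which is exactly the transformation rule for a vector.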
+
+Finally, we would want to be able to talk about tensors. For example, we want to be able to talk about $X^\mu X^\nu$. This is an assignment of $16$ numbers indexed by $\mu, \nu = 0, 1, 2, 3$ that transforms as
+\[
+ X^\mu X^\nu \mapsto \Lambda^\mu\!_\rho \Lambda^\nu\!_\sigma X^\rho X^\sigma.
+\]
+We would also like to talk about $\eta_{\mu\nu}$ as a tensor. We can make the following general definition:
+
+\begin{defi}[Tensor]
+ A \emph{tensor} of type $(m, n)$ is a quantity
+ \[
+ T^{\mu_1\cdots \mu_m}\!_{\nu_1\cdots \nu_n}
+ \]
+ which transforms as
+ \[
+ T'^{\mu_1\cdots \mu_m}\!_{\nu_1\cdots \nu_n} = \Lambda^{\mu_1}\!_{\rho_1} \cdots \Lambda^{\mu_m}\!_{\rho_m}\Lambda_{\nu_1}\!^{\sigma_1}\cdots\Lambda_{\nu_n}\!^{\sigma_n} \times T^{\rho_1\cdots \rho_m}\!_{\sigma_1\cdots \sigma_n}.
+ \]
+\end{defi}
+As we saw, we can change the type of a tensor by raising and lowering indices by contracting with $\eta_{\mu\nu}$ or its inverse. However, the total $n + m$ will not be changed.
+
+Finally, we introduce the $4$-derivative.
+\begin{defi}[4-derivative]
+ The \emph{4-derivative} is
+ \[
+ \partial_\mu = \frac{\partial}{\partial X^\mu} = \left(\frac{1}{c}\frac{\partial}{\partial t}, \nabla\right).
+ \]
+\end{defi}
+As we previously discussed, the derivative ought to be a covector. We can also verify this by explicit computation using the chain rule. Under a transformation $X^\mu \mapsto X'^\mu$, we have
+\begin{align*}
+ \partial_\mu = \frac{\partial}{\partial X^\mu} \mapsto \frac{\partial}{\partial X'^\mu} = \frac{\partial X^\nu}{\partial X'^\mu}\frac{\partial}{\partial X^\nu} = (\Lambda^{-1})^\nu\!_\mu\partial_\nu = \Lambda_\mu\!^\nu \partial_\nu.
+\end{align*}
+
+\subsection{Conserved currents}
+Recall the continuity equation
+\[
+ \frac{\partial\rho}{\partial t} + \nabla\cdot \mathbf{J} = 0.
+\]
+An implication of this was that we cannot have a charge disappear on Earth and appear on the Moon. One way to explain this impossibility is that the charge would have to travel from the Earth to the Moon faster than the speed of light, which relativity forbids. This suggests that we should be able to write the continuity equation relativistically.
+
+We define
+\[
+ J^\mu =
+ \begin{pmatrix}
+ \rho c\\
+ \mathbf{J}
+ \end{pmatrix}
+\]
+Before we do anything with it, we must show that this is a 4-vector, i.e.\ it transforms via left multiplication by $\Lambda^\mu\!_\nu$.
+
+The full verification is left as an exercise. We work out a special case to justify why this is a sensible thing to believe in. Suppose we have a static charge density $\rho_0(\mathbf{x})$ with $\mathbf{J} = 0$. In a frame boosted by $\mathbf{v}$, we want to show that the new current is
+\[
+ J'^\mu = \Lambda^\mu\!_\nu J^\nu =
+ \begin{pmatrix}
+ \gamma \rho_0 c\\
+ -\gamma \rho_0 \mathbf{v}
+ \end{pmatrix}.
+\]
+The new charge density is now $\gamma \rho_0$ instead of $\rho_0$. This is correct since we have Lorentz contraction. As space is contracted, we get more charge per unit volume. The new current is the charge density times the velocity; since the charges move with velocity $-\mathbf{v}$ in the boosted frame, this is $\mathbf{J}' = -\gamma\rho_0 \mathbf{v}$.
+
+With this definition of $J^\mu$, local charge conservation is just
+\[
+ \partial_\mu J^\mu = 0.
+\]
+This is invariant under Lorentz transformation, simply because the indices work out (i.e.\ we match indices up with indices down all the time).
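+Explicitly, under a Lorentz transformation we have
+\[
+ \partial'_\mu J'^\mu = \left(\Lambda_\mu\!^\nu \partial_\nu\right)\left(\Lambda^\mu\!_\rho J^\rho\right) = \delta^\nu_\rho\, \partial_\nu J^\rho = \partial_\mu J^\mu,
+\]
+since $\Lambda_\mu\!^\nu \Lambda^\mu\!_\rho = \delta^\nu_\rho$, the entries $\Lambda_\mu\!^\nu$ being those of the inverse of $\Lambda$.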
+
+We see that once we have the right notation, the laws of physics are so short and simple!
+
+\subsection{Gauge potentials and electromagnetic fields}
+Recall that when solving electrostatic and magnetostatic problems, we introduced the potentials $\phi$ and $\mathbf{A}$ to help solve two of Maxwell's equations:
+\begin{align*}
+ \nabla\times \mathbf{E} = 0 &\Leftrightarrow \mathbf{E} = -\nabla \phi\\
+ \nabla\cdot \mathbf{B} = 0 &\Leftrightarrow \mathbf{B} = \nabla\times \mathbf{A}.
+\end{align*}
+However, in general, we do not have $\nabla \times \mathbf{E} = 0$. Instead, the equations are
+\[
+ \nabla\times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} = 0,\quad \nabla\cdot \mathbf{B} = 0.
+\]
+So we cannot use $\mathbf{E} = -\nabla \phi$ directly if $\mathbf{B}$ changes. Instead, we define $\mathbf{E}$ and $\mathbf{B}$ in terms of $\phi$ and $\mathbf{A}$ as follows:
+\begin{align*}
+ \mathbf{E} &= -\nabla\phi - \frac{\partial \mathbf{A}}{\partial t}\\
+ \mathbf{B} &= \nabla\times \mathbf{A}.
+\end{align*}
+It is important to note that $\phi$ and $\mathbf{A}$ are not unique. We can shift them by
+\begin{align*}
+ \mathbf{A} \mapsto \mathbf{A} + \nabla \chi\\
+ \phi \mapsto \phi - \frac{\partial \chi}{\partial t}
+\end{align*}
+for any function $\chi(\mathbf{x}, t)$. These are known as \emph{gauge transformations}. Plug them into the equations and you will see that you get the same $\mathbf{E}$ and $\mathbf{B}$.
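+Indeed, plugging in, the extra terms cancel:
+\begin{align*}
+ \mathbf{E} &\mapsto -\nabla\left(\phi - \frac{\partial \chi}{\partial t}\right) - \frac{\partial}{\partial t}(\mathbf{A} + \nabla \chi) = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t} = \mathbf{E},\\
+ \mathbf{B} &\mapsto \nabla\times (\mathbf{A} + \nabla \chi) = \nabla\times \mathbf{A} = \mathbf{B},
+\end{align*}
+since partial derivatives commute and $\nabla\times \nabla \chi = \mathbf{0}$.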
+
+Now we have ended up with four objects, one from $\phi$ and three from $\mathbf{A}$, so we can put them into a 4-vector \emph{gauge potential}:
+\[
+ A^\mu =
+ \begin{pmatrix}
+ \phi/c\\
+ \mathbf{A}
+ \end{pmatrix}.
+\]
+We will assume that this makes sense, i.e.\ is a genuine 4-vector.
+
+Now gauge transformations take a really nice form:
+\[
+ A_\mu \mapsto A_\mu - \partial_\mu\chi.
+\]
+Finally, we define the anti-symmetric electromagnetic tensor
+\[
+ F_{\mu\nu} = \partial_\mu A_\nu -\partial_\nu A_\mu.
+\]
+Since this is antisymmetric, the diagonals are all 0, and $F_{\mu\nu} = -F_{\nu\mu}$. So this thing has $(4 \times 4 - 4)/2 = 6$ independent components. So it could encapsulate information about the electric and magnetic fields (and nothing else).
+
+We note that $F$ is invariant under gauge transformations:
+\[
+ F_{\mu\nu} \mapsto F_{\mu\nu} - \partial_\mu\partial_\nu\chi + \partial_\nu\partial_\mu \chi = F_{\mu\nu}.
+\]
+We can compute components of $F_{\mu\nu}$.
+\[
+ F_{01} = \frac{1}{c}\frac{\partial}{\partial t}(-A_x) - \frac{\partial \phi/c}{\partial x} = \frac{E_x}{c}.
+\]
+Note that we have $-A_x$ instead of $A_x$ since $F_{\mu\nu}$ was defined in terms of $A_\mu$ with indices down. We can similarly obtain $F_{02} = E_y/c$ and $F_{03} = E_z/c$.
+
+We also have
+\[
+ F_{12} = \frac{\partial}{\partial x}(-A_y) - \frac{\partial}{\partial y}(-A_x) = -B_z.
+\]
+So
+\[
+ F_{\mu\nu} =
+ \begin{pmatrix}
+ 0 & E_x/c & E_y/c & E_z/c\\
+ -E_x/c & 0 & -B_z & B_y\\
+ -E_y/c & B_z & 0 & -B_x\\
+ -E_z/c & -B_y & B_x & 0
+ \end{pmatrix}
+\]
+Raising both indices, we have
+\[
+ F^{\mu\nu} = \eta^{\mu\rho}\eta^{\nu\sigma} F_{\rho\sigma} =
+ \begin{pmatrix}
+ 0 & -E_x/c & -E_y/c & -E_z/c\\
+ E_x/c & 0 & -B_z & B_y\\
+ E_y/c & B_z & 0 & -B_x\\
+ E_z/c & -B_y & B_x & 0
+ \end{pmatrix}
+\]
+So the electric fields are inverted and the magnetic field is intact. This makes sense, since raising an index flips the sign of the spatial components; the electric field entries carry one spatial index each, while the magnetic field entries carry two, so the magnetic field gets its sign flipped twice and is intact.
+
+Both $F_{\mu\nu}$ and $F^{\mu\nu}$ are tensors. Under a Lorentz transformation, we have
+\[
+ F'^{\mu\nu} = \Lambda^\mu\!_\rho\Lambda^\nu\!_\sigma F^{\rho\sigma}.
+\]
+For example under rotation
+\[
+ \Lambda =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0\\
+ 0 & & R\\
+ 0
+ \end{pmatrix},
+\]
+we find that
+\begin{align*}
+ \mathbf{E}' &= R\mathbf{E}\\
+ \mathbf{B}' &= R\mathbf{B}.
+\end{align*}
+Under a boost by $v$ in the $x$-direction, we have
+\begin{align*}
+ E_x' &= E_x\\
+ E_y' &= \gamma(E_y - vB_z)\\
+ E_z' &= \gamma(E_z + vB_y)\\
+ B_x' &= B_x\\
+ B_y' &= \gamma\left(B_y + \frac{v}{c^2}E_z\right)\\
+ B_z' &= \gamma\left(B_z - \frac{v}{c^2} E_y\right)
+\end{align*}
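+These formulas can be read off from the tensor transformation law. For example,
+\[
+ \frac{E_x'}{c} = F'^{10} = \Lambda^1\!_\rho \Lambda^0\!_\sigma F^{\rho\sigma} = \Lambda^1\!_1 \Lambda^0\!_0 F^{10} + \Lambda^1\!_0 \Lambda^0\!_1 F^{01} = \gamma^2\left(1 - \frac{v^2}{c^2}\right)\frac{E_x}{c} = \frac{E_x}{c},
+\]
+and the remaining components follow from similar computations.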
+
+\begin{eg}[Boosted line charge]
+ An infinite line along the $x$ direction with uniform charge per unit length, $\eta$, has electric field
+ \[
+ \mathbf{E} = \frac{\eta}{2\pi \varepsilon_0 (y^2 + z^2)}
+ \begin{pmatrix}
+ 0\\ y\\ z
+ \end{pmatrix}.
+ \]
+ The magnetic field is $\mathbf{B} = 0$. Plugging this into the equation above, an observer in frame $S'$ boosted with $\mathbf{v} = (v, 0, 0)$, i.e.\ parallel to the wire, sees
+ \[
+ \mathbf{E}' = \frac{\eta\gamma}{2\pi\varepsilon_0(y^2 + z^2)}
+ \begin{pmatrix}
+ 0\\y\\z
+ \end{pmatrix} =
+ \frac{\eta\gamma}{2\pi\varepsilon_0(y'^2 + z'^2)}
+ \begin{pmatrix}
+ 0\\y'\\z'
+ \end{pmatrix}.
+ \]
+ Also,
+ \[
+ \mathbf{B}' = \frac{\eta\gamma v}{2\pi\varepsilon_0 c^2(y^2 + z^2)}
+ \begin{pmatrix}
+ 0\\z\\-y
+ \end{pmatrix} = \frac{\eta\gamma v}{2\pi\varepsilon_0 c^2(y'^2 + z'^2)}
+ \begin{pmatrix}
+ 0\\z'\\-y'
+ \end{pmatrix}.
+ \]
+ In frame $S'$, the charge per unit length is Lorentz contracted to $\gamma\eta$. The magnetic field can be written as
+ \[
+ \mathbf{B}' = \frac{\mu_0 I'}{2\pi \sqrt{y'^2 + z'^2}}\, \hat{\varphi}',
+ \]
+ where $\displaystyle \hat{\varphi}' = \frac{1}{\sqrt{y'^2 + z'^2}} \begin{pmatrix}0\\-z'\\y'\end{pmatrix}$ is the basis vector of cylindrical coordinates, and $I' = -\gamma\eta v$ is the current.
+
+ This is what we calculated from Ampere's law previously. But we didn't \emph{use} Ampere's law here. We used Gauss' law, and then applied a Lorentz boost.
+\end{eg}
+We see that magnetic fields are relativistic effects of electric fields. They are what we get when we apply a Lorentz boost to an electric field. So relativity is not only about \emph{very fast objects}\textsuperscript{TM}. It is there when you stick a magnet onto your fridge!
+
+\begin{eg}[Boosted point charge]
+ A boosted point charge generates a current, but is not the steady current we studied in magnetostatics. As the point charge moves, the current density moves.
+
+ A point charge $Q$, at rest in frame $S$ has
+ \[
+ \mathbf{E} = \frac{Q}{4\pi \varepsilon_0 r^2}\hat{\mathbf{r}} = \frac{Q}{4\pi \varepsilon_0(x^2 + y^2 + z^2)^{3/2}}
+ \begin{pmatrix}
+ x\\y\\z
+ \end{pmatrix},
+ \]
+ and $\mathbf{B} = 0$.
+
+ In frame $S'$, boosted with $\mathbf{v} = (v, 0, 0)$, we have
+ \[
+ \mathbf{E}' = \frac{Q}{4\pi \varepsilon_0(x^2 + y^2 + z^2)^{3/2}}
+ \begin{pmatrix}
+ x\\ \gamma y\\ \gamma z
+ \end{pmatrix}.
+ \]
+ We need to express this in terms of $x', y', z'$, instead of $x, y, z$. Then we get
+ \[
+ \mathbf{E}' = \frac{Q\gamma}{4\pi \varepsilon_0 (\gamma^2(x' + vt')^2 + y'^2 + z'^2)^{3/2}}
+ \begin{pmatrix}
+ x' + vt'\\y'\\z'
+ \end{pmatrix}.
+ \]
+ Suppose that the particle sits at $(-vt', 0, 0)$ in $S'$. Let's look at the electric field at $t'=0$. Then the radial vector is $\mathbf{r}' =
+ \begin{pmatrix}
+ x'\\y'\\z'
+ \end{pmatrix}.
+ $
+ In the denominator, we write
+ \begin{align*}
+ \gamma^2x'^2 + y'^2 + z'^2 &= (\gamma^2 - 1)x'^2 + r'^2\\
+ &= \frac{v^2\gamma^2}{c^2}x'^2 + r'^2\\
+ &= \left(\frac{v^2\gamma^2}{c^2} \cos^2 \theta + 1\right)r'^2\\
+ &= \gamma^2\left(1 - \frac{v^2}{c^2}\sin^2 \theta\right)r'^2
+ \end{align*}
+ where $\theta$ is the angle between the $x'$ axis and $\mathbf{r}'$.
+
+ So
+ \[
+ \mathbf{E}' = \frac{1}{\gamma^2\left(1 - \frac{v^2}{c^2}\sin^2 \theta\right)^{3/2}}\frac{Q}{4\pi \varepsilon_0 r'^2}\hat{\mathbf{r}}'.
+ \]
+ The factor $\gamma^{-2}\left(1 - \frac{v^2}{c^2}\sin^2 \theta\right)^{-3/2}$ squashes the electric field in the direction of motion.
+
+ This result was first discovered by Lorentz from solving Maxwell's equations directly, which led him to discover the Lorentz transformations.
+
+ There is also a magnetic field
+ \[
+ \mathbf{B} = \frac{\mu_0 Q\gamma v}{4\pi(\gamma^2(x' + vt')^2 + y'^2 + z'^2)^{3/2}}
+ \begin{pmatrix}
+ 0\\z'\\-y'
+ \end{pmatrix}.
+ \]
+\end{eg}
+
+\subsubsection*{Lorentz invariants}
+We can ask the question ``are there any combinations of $\mathbf{E}$ and $\mathbf{B}$ that all observers agree on?'' With index notation, all we have to do is to contract all indices such that there are no dangling indices.
+
+It turns out that there are two such possible combinations.
+
+The first thing we might try would be
+\[
+ \frac{1}{2}F_{\mu\nu}F^{\mu\nu} = -\frac{\mathbf{E}^2}{c^2} + \mathbf{B}^2,
+\]
+which works great.
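+We can verify this from the matrices above: the entries with one time index contribute $-2\mathbf{E}^2/c^2$ in total (each pair $F_{0i}F^{0i}$ and $F_{i0}F^{i0}$ picks up one sign flip from raising the spatial index), while the purely spatial entries contribute $2\mathbf{B}^2$. So
+\[
+ F_{\mu\nu}F^{\mu\nu} = -\frac{2\mathbf{E}^2}{c^2} + 2\mathbf{B}^2.
+\]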
+
+To describe the second invariant, we need to introduce a new object in Minkowski space. This is the fully anti-symmetric tensor
+\[
+ \varepsilon^{\mu\nu\rho\sigma} =
+ \begin{cases}
+ +1 & \mu\nu\rho\sigma\text{ is even permutation of 0123}\\
+ -1 & \mu\nu\rho\sigma\text{ is odd permutation of 0123}\\
+ 0 & \text{otherwise}
+ \end{cases}
+\]
+This is analogous to the $\varepsilon_{ijk}$ in $\R^3$, just that this time we are in 4-dimensional spacetime. Under a Lorentz transformation,
+\[
+ \varepsilon'^{\mu\nu\rho\sigma} = \Lambda^\mu\!_\kappa\Lambda^\nu\!_\lambda \Lambda^\rho\!_\alpha \Lambda^\sigma\!_\beta \varepsilon^{\kappa\lambda\alpha\beta}.
+\]
+Since $\varepsilon^{\mu\nu\rho\sigma}$ is fully anti-symmetric, so is $\varepsilon'^{\mu\nu\rho\sigma}$. Similar to what we did in $\R^3$, we can show that the only fully anti-symmetric tensors in Minkowski space are multiples of $\varepsilon^{\mu\nu\rho\sigma}$. So
+\[
+ \varepsilon'^{\mu\nu\rho\sigma} = a \varepsilon^{\mu\nu\rho\sigma}
+\]
+for some constant $a$. To figure out what $a$ is, test a single component:
+\[
+ \varepsilon'^{0123} = \Lambda^0\!_\kappa\Lambda^1\!_\lambda \Lambda^2\!_\alpha \Lambda^3\!_\beta \varepsilon^{\kappa\lambda\alpha\beta} = \det \Lambda.
+\]
+Lorentz transformations have $\det \Lambda = +1$ (rotations, boosts), or $\det \Lambda = -1$ (reflections, time reversal).
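+The fact that $\det \Lambda = \pm 1$ follows by taking determinants of the orthogonality condition:
+\[
+ \det (\Lambda^T \eta \Lambda) = (\det \Lambda)^2 \det \eta = \det \eta,
+\]
+so $(\det \Lambda)^2 = 1$.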
+
+We will restrict to $\Lambda$ such that $\det \Lambda = +1$. Then $a = 1$, and $\varepsilon^{\mu\nu\rho\sigma}$ is invariant. In particular, it is a tensor.
+
+We can finally apply this to electromagnetic fields. The \emph{dual electromagnetic tensor} is defined to be
+\[
+ \tilde{F}^{\mu\nu} = \frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma} =
+ \begin{pmatrix}
+ 0 & -B_x & -B_y & -B_z\\
+ B_x & 0 & E_z/c & -E_y/c\\
+ B_y & -E_z/c & 0 & E_x/c\\
+ B_z & E_y/c & -E_x/c & 0
+ \end{pmatrix}.
+\]
+Why do we have the factor of a half? Consider a single component, say $\tilde{F}^{12}$. It gets contributions from both $F_{03}$ and $F_{30}$, so we need to average the sum to avoid double-counting. $\tilde{F}^{\mu\nu}$ is yet another antisymmetric matrix.
+
+This is obtained from $F^{\mu\nu}$ through the substitution $\mathbf{E} \mapsto c\mathbf{B}$ and $\mathbf{B}\mapsto -\mathbf{E}/c$. Note the minus sign!
+
+Then $\tilde{F}^{\mu\nu}$ is a tensor. We can construct the other Lorentz invariant using $\tilde{F}^{\mu\nu}$. We don't get anything new if we contract this with itself, since $\tilde{F}_{\mu\nu}\tilde{F}^{\mu\nu} = -F_{\mu\nu}F^{\mu\nu}$. Instead, the second Lorentz invariant is
+\[
+ \frac{1}{4}\tilde{F}^{\mu\nu}F_{\mu\nu} = \mathbf{E}\cdot \mathbf{B}/c.
+\]
+\subsection{Maxwell Equations}
+To write out Maxwell's Equations relativistically, we have to write them out in the language of tensors. It turns out that they are
+\begin{align*}
+ \partial_\mu F^{\mu\nu} &= \mu_0 J^\nu\\
+ \partial_\mu \tilde{F}^{\mu\nu} &= 0.
+\end{align*}
+As we said before, if we find the right way of writing equations, they look really simple! We don't have to worry ourselves with where the $c$, $\mu_0$ and $\varepsilon_0$ go!
+
+Note that each law is actually 4 equations, one for each of $\nu = 0, 1, 2, 3$. Under a Lorentz boost, the equations are not invariant individually. Instead, they all transform nicely by left-multiplication by $\Lambda^\nu\!_\rho$.
+
+We now check that these agree with the Maxwell's equations.
+
+First work with the first equation: when $\nu = 0$, we are left with
+\[
+ \partial_i F^{i0} = \mu_0 J^0,
+\]
+where $i$ ranges over $1, 2, 3$. This is equivalent to saying
+\[
+ \nabla\cdot \left(\frac{\mathbf{E}}{c}\right) = \mu_0 \rho c,
+\]
+or
+\[
+ \nabla\cdot \mathbf{E} = c^2 \mu_0 \rho = \frac{\rho}{\varepsilon_0}.
+\]
+When $\nu = i$ for some $i = 1, 2, 3$, we get
+\[
+ \partial _\mu F^{\mu i} = \mu_0 J^i.
+\]
+So after some tedious calculation, we obtain
+\[
+ \frac{1}{c}\frac{\partial}{\partial t}\left(-\frac{\mathbf{E}}{c}\right) + \nabla\times \mathbf{B} = \mu_0 \mathbf{J}.
+\]
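+For instance, the $i = 1$ component of this reads
+\[
+ \partial_0 F^{01} + \partial_2 F^{21} + \partial_3 F^{31} = -\frac{1}{c^2}\frac{\partial E_x}{\partial t} + \frac{\partial B_z}{\partial y} - \frac{\partial B_y}{\partial z} = \mu_0 J^1,
+\]
+which is the $x$-component of the equation above.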
+With the second equation, when $\nu = 0$, we have
+\[
+ \partial_i \tilde{F}^{i0} = 0 \Rightarrow \nabla\cdot \mathbf{B} = 0.
+\]
+$\nu = i$ gives
+\[
+ \partial_\mu \tilde{F}^{\mu i} = 0 \Rightarrow \frac{\partial \mathbf{B}}{\partial t} + \nabla\times \mathbf{E} = \mathbf{0}.
+\]
+So we recover Maxwell's equations. We also see why the $J^\nu$ term appears in the first equation and not the second --- it tells us that there is only electric charge, not magnetic charge.
+
+We can derive the continuity equation from Maxwell's equations here. Since $\partial_\nu\partial_\mu F^{\mu\nu} = 0$ due to anti-symmetry, we must have $\partial_\nu J^\nu = 0$. Recall that we once derived the continuity equation from Maxwell's equations without using relativity, which worked but is not as clean as this.
+
+Finally, we recall the long-forgotten potential $A_\mu$. If we define $F_{\mu\nu}$ in terms of it:
+\[
+ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
+\]
+then the equation $\partial_\mu \tilde{F}^{\mu\nu} = 0$ comes for free since we have
+\[
+ \partial_\mu \tilde{F}^{\mu\nu} = \frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}\partial_{\mu}F_{\rho\sigma} = \frac{1}{2} \varepsilon^{\mu\nu\rho\sigma}\partial_\mu(\partial_\rho A_\sigma - \partial_\sigma A_\rho) = 0
+\]
+by symmetry. This means that we can also write the Maxwell equations as
+\[
+ \partial_\mu F^{\mu\nu} = \mu_0 J^\nu\quad\text{where}\quad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu.
+\]
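We can also verify this mechanically. A small \texttt{sympy} sketch (our own, for illustration) checks that $\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}\partial_\mu(\partial_\rho A_\sigma - \partial_\sigma A_\rho)$ vanishes identically for arbitrary smooth $A_\mu$; the identity uses only the symmetry of second derivatives, so index placement does not matter here:

```python
import sympy as sp

X = sp.symbols('x0 x1 x2 x3')
A = [sp.Function(f'A_{m}')(*X) for m in range(4)]

# F_{mu nu} = partial_mu A_nu - partial_nu A_mu
F = [[sp.diff(A[n], X[m]) - sp.diff(A[m], X[n]) for n in range(4)]
     for m in range(4)]

# (1/2) eps^{mu nu rho sigma} partial_mu F_{rho sigma} = 0 for every nu,
# since symmetric second derivatives are contracted with the antisymmetric eps
for nu in range(4):
    expr = sum(sp.Rational(1, 2) * sp.LeviCivita(mu, nu, r, s)
               * sp.diff(F[r][s], X[mu])
               for mu in range(4) for r in range(4) for s in range(4))
    assert sp.simplify(expr) == 0
```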
+\subsection{The Lorentz force law}
+The final aspect of electromagnetism is the Lorentz force law for a particle with charge $q$ moving with velocity $\mathbf{u}$:
+\[
+ \frac{\d \mathbf{p}}{\d t} = q(\mathbf{E} + \mathbf{u}\times \mathbf{B}).
+\]
+To write this in relativistic form, we use the proper time $\tau$ (time experienced by particle), which obeys
+\[
+ \frac{\d t}{\d \tau} = \gamma(\mathbf{u}) = \frac{1}{\sqrt{1 - u^2/c^2}}.
+\]
+We define the 4-velocity $\displaystyle U = \frac{\d X}{\d \tau} = \gamma\begin{pmatrix}c\\ \mathbf{u}\end{pmatrix}$, and 4-momentum $P = \begin{pmatrix}E/c\\ \mathbf{p}\end{pmatrix}$, where $E$ is the energy. Note that $E$ is the energy while $\mathbf{E}$ is the electric field.
+
+The Lorentz force law can be written as
+\[
+ \frac{\d P^\mu}{\d \tau} = qF^{\mu\nu}U_\nu.
+\]
+We show that this does give our original Lorentz force law:
+
+When $\mu = 1, 2, 3$, we obtain
+\[
+ \frac{\d \mathbf{p}}{\d \tau} = q\gamma (\mathbf{E} + \mathbf{u}\times \mathbf{B}).
+\]
+By the chain rule, since $\frac{\d t}{\d \tau} = \gamma$, we have
+\[
+ \frac{\d \mathbf{p}}{\d t} = q(\mathbf{E}+ \mathbf{u}\times \mathbf{B}).
+\]
+So the good, old Lorentz force law returns. Note that here $\mathbf{p} = m\gamma \mathbf{u}$, the relativistic momentum, not the ordinary momentum.
+
+But how about the $\mu = 0$ component? We get
+\[
+ \frac{\d P^0}{\d \tau} = \frac{1}{c}\frac{\d E}{\d \tau} = \frac{q}{c}\gamma \mathbf{E}\cdot \mathbf{u}.
+\]
+This says that
+\[
+ \frac{\d E}{\d t} = q\mathbf{E}\cdot \mathbf{u},
+\]
+which is our good old formula for the work done by an electric field.
+
+\begin{eg}[Motion in a constant field]
+ Suppose that $\mathbf{E} = (E, 0, 0)$ and $\mathbf{u} = (v, 0, 0)$. Then
+ \[
+ m\frac{\d (\gamma u)}{\d t} = qE.
+ \]
+ So
+ \[
+ m\gamma u = qEt.
+ \]
+ So
+ \[
+ u = \frac{\d x}{\d t} = \frac{qEt}{\sqrt{m^2 + q^2 E^2 t^2/c^2}}.
+ \]
+ Note that $u\to c$ as $t\to \infty$. Then we can solve to find
+ \[
+ x = \frac{mc^2}{qE}\left(\sqrt{1 + \frac{q^2 E^2 t^2}{m^2c^2}} - 1\right).
+ \]
+ For small $t$, $x\approx \frac{qEt^2}{2m}$, as expected.
+\end{eg}
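Since the algebra above is easy to get wrong, here is a quick symbolic check (our own sketch with \texttt{sympy}). It verifies that $x = \frac{mc^2}{qE}\big(\sqrt{1 + q^2E^2t^2/(m^2c^2)} - 1\big)$ satisfies $x(0) = 0$ and $\d x/\d t = u$, that $u \to c$ as $t \to \infty$, and that the small-$t$ limit is the Newtonian $qEt^2/(2m)$:

```python
import sympy as sp

t, m, q, E, c = sp.symbols('t m q E c', positive=True)

u = q*E*t / sp.sqrt(m**2 + q**2*E**2*t**2/c**2)
x = m*c**2/(q*E) * (sp.sqrt(1 + q**2*E**2*t**2/(m**2*c**2)) - 1)

# x(0) = 0 and dx/dt = u, so x is the correct trajectory
assert x.subs(t, 0) == 0
assert sp.simplify(sp.diff(x, t) - u) == 0

# the speed approaches c but never exceeds it
assert sp.limit(u, t, sp.oo) == c

# small-t limit recovers the Newtonian x ~ q E t^2 / (2m)
assert sp.limit(x / t**2, t, 0) == q*E/(2*m)
```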
+
+\begin{eg}[Motion in constant magnetic field]
+ Suppose $\mathbf{B} = (0, 0, B)$. Then we start with
+ \[
+ \frac{\d P^0}{\d \tau} = 0 \Rightarrow E = m\gamma c^2 = \text{constant}.
+ \]
+ So $|\mathbf{u}|$ is constant. Then
+ \[
+ m\frac{\d (\gamma \mathbf{u})}{\d t} = q\mathbf{u}\times \mathbf{B}.
+ \]
+ So
+ \[
+ m\gamma \frac{\d \mathbf{u}}{\d t} = q\mathbf{u}\times \mathbf{B}
+ \]
+ since $|\mathbf{u}|$, and hence $\gamma$, is constant. This is the same equation we saw in Dynamics and Relativity for a particle in a magnetic field, except for the extra $\gamma$ term. The particle goes in circles with frequency
+ \[
+ \omega = \frac{qB}{m\gamma}.
+ \]
+\end{eg}
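To see concretely that circular motion at this frequency solves the equation of motion, here is a short symbolic check (our own sketch with \texttt{sympy}; the circular ansatz for $\mathbf{u}$ is an assumption chosen to match the claim above):

```python
import sympy as sp

t = sp.symbols('t')
q, B, m, u0, gamma = sp.symbols('q B m u_0 gamma', positive=True)

w = q*B/(m*gamma)                                     # claimed frequency
u = sp.Matrix([u0*sp.cos(w*t), -u0*sp.sin(w*t), 0])   # circular ansatz
Bvec = sp.Matrix([0, 0, B])

# equation of motion: m gamma du/dt = q u x B
assert sp.simplify(m*gamma*u.diff(t) - q*u.cross(Bvec)) == sp.zeros(3, 1)

# |u| is constant, consistent with gamma being constant
assert sp.simplify(u.dot(u) - u0**2) == 0
```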
+\end{document}
diff --git a/books/cam/IB_L/fluid_dynamics.tex b/books/cam/IB_L/fluid_dynamics.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ab8403e64f4d371a3b5869f6419468aafbad80ae
--- /dev/null
+++ b/books/cam/IB_L/fluid_dynamics.tex
@@ -0,0 +1,2829 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Lent}
+\def\nyear {2016}
+\def\nlecturer {P.\ F.\ Linden}
+\def\ncourse {Fluid Dynamics}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Parallel viscous flow}\\
+Plane Couette flow, dynamic viscosity. Momentum equation and boundary conditions. Steady flows including Poiseuille flow in a channel. Unsteady flows, kinematic viscosity, brief description of viscous boundary layers (skin depth).\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Kinematics}\\
+Material time derivative. Conservation of mass and the kinematic boundary condition. Incompressibility; streamfunction for two-dimensional flow. Streamlines and path lines.\hspace*{\fill} [2]
+
+\vspace{10pt}
+\noindent\textbf{Dynamics}\\
+Statement of Navier-Stokes momentum equation. Reynolds number. Stagnation-point flow; discussion of viscous boundary layer and pressure field. Conservation of momentum; Euler momentum equation. Bernoulli's equation.
+
+\vspace{5pt}
+\noindent Vorticity, vorticity equation, vortex line stretching, irrotational flow remains irrotational. \hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{Potential flows}\\
+Velocity potential; Laplace's equation, examples of solutions in spherical and cylindrical geometry by separation of variables. Translating sphere. Lift on a cylinder with circulation.
+
+\vspace{5pt}
+\noindent Expression for pressure in time-dependent potential flows with potential forces. Oscillations in a manometer and of a bubble.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Geophysical flows}\\
+Linear water waves: dispersion relation, deep and shallow water, standing waves in a container, Rayleigh-Taylor instability.
+
+\vspace{5pt}
+\noindent Euler equations in a rotating frame. Steady geostrophic flow, pressure as streamfunction. Motion in a shallow layer, hydrostatic assumption, modified continuity equation. Conservation of potential vorticity, Rossby radius of deformation.\hspace*{\fill} [4]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+In real life, we encounter a lot of fluids. For example, there is air and water. These are known as \emph{Newtonian fluids}, whose dynamics follow relatively simple equations. This is fundamentally because they have simple composition --- they are made up of simple molecules. An example of \emph{non-Newtonian} fluid is shampoo, which, despite looking innocent, has long chain molecules with complex properties.
+
+In fluid dynamics, one of the most important things is to distinguish what is a fluid and what is not. For example, air and water are fluids, while the wall is solid. A main distinction is that we can lean on the wall, but not on air and water. In particular, if we apply a force on the wall, it will deform a bit, but then stop. A finite force on a solid will lead to a finite deformation. On the other hand, if we attempt to apply a force onto air or water, it will just move along the direction of force indefinitely. A finite force can lead to infinite deformation.
+
+This is the main difference between solid mechanics and fluid mechanics. In solid mechanics, we look at properties like elasticity. This measures how much deformation we get when we apply a force. In fluid mechanics, we don't measure distances, since they can potentially be infinite. Instead, we would often be interested in the velocity of fluids.
+
+There are many applications of fluid dynamics. On a small scale, the dynamics of fluids in cells is important for biology. On a larger scale, the fluid flow of the mantle affects the movement of tectonic plates, while the dynamics of the atmosphere can be used to explain climate and weather. On an even larger scale, we can use fluid dynamics to analyse the flow of galactic systems in the universe.
+
+\section{Parallel viscous flow}
+\setcounter{subsection}{-1}
+\subsection{Preliminaries}
+This section is called preliminaries, not definitions, because the ``definitions'' we give will be a bit fuzzy. We will (hopefully) get better definitions later on.
+
+We start with an absolutely clear and unambiguous definition of the subject we are going to study.
+\begin{defi}[Fluid]
+ A \emph{fluid} is a material that flows.
+\end{defi}
+
+\begin{eg}
+ Air, water and oil are fluids. These are known as \emph{simple} or \emph{Newtonian} fluids, because they are simple.
+
+ Paint, toothpaste and shampoo are \emph{complex} or \emph{non-Newtonian} fluids, because they are complicated.
+
+ Sand, rice and foams are \emph{granular flows}. These have some fluid-like properties, but are fundamentally made of small granular solids.
+\end{eg}
+
+In this course, we will restrict our attention to Newtonian fluids. Practically speaking, these are fluids for which our equations work. The assumption we will make when deriving our equations will be the following:
+
+\begin{defi}[Newtonian fluids and viscosity]
+ A \emph{Newtonian fluid} is a fluid with a linear relationship between stress and rate of strain. The constant of proportionality is \emph{viscosity}.
+\end{defi}
+This, and other concepts we define here, will become more clear when we start to write down our equations.
+
+\begin{defi}[Stress]
+ \emph{Stress} is force per unit area.
+\end{defi}
+For example, pressure is a stress.
+
+\begin{defi}[Strain]
+ \emph{Strain} is the extension per unit length. The \emph{rate of strain}, $\frac{\d}{\d t}(\mathrm{strain})$, is concerned with gradients of velocity.
+\end{defi}
+As mentioned in the introduction, for fluid dynamics, we tend to study velocities instead of displacements. So we will mostly work with rate of strain instead of strain itself.
+
+These quantities are in fact tensor fields, but we will not treat them as such in this course. We will just consider ``simplified'' cases. For the full-blown treatment with tensor fields, refer to the IID Fluid Dynamics course.
+
+In this course, we are going to make a lot of simplifying assumptions, since the world is too complicated. For example, most of the time, we will make the \emph{inviscid approximation}, where we set the viscosity to $0$. We will also often consider motions in one direction only.
+
+\subsection{Stress}
+We first look at stresses. These are the forces that exist inside the fluid. If we have a boundary, we can classify the stress according to the direction of the force --- whether it is normal to or parallel to the boundary. Note that the boundary can either be an actual physical boundary, or an imaginary surface we cook up in order to compute things.
+
+Suppose we have a fluid with pressure $p$ acting on a surface with unit normal $\mathbf{n}$, pointing \emph{into} the fluid. This causes
+\begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+ \draw [->] (3.1, 0) -- +(0, 1) node [above] {$\mathbf{n}$};
+ \draw [->] (2.9, 0) -- +(0, -1) node [below] {$\tau_p$};
+ \node at (0.5, -0.5) {solid};
+ \node at (0.5, 0.5) {fluid};
+ \node at (6, 1) [left] {pressure $p$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{defi}[Normal stress]
+ The \emph{normal stress} is
+ \[
+ \tau_p = -p\mathbf{n}.
+ \]
+\end{defi}
+The normal stress is present everywhere, as long as we have a fluid (with pressure). However, pressure by itself does not do anything, since pressure acts in all directions, and the net effect cancels out. However, if we have a pressure \emph{gradient}, then this gives an actual force and drives fluid flow. For example, suppose we have a pipe, with the pump on the left:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- +(4, 0);
+ \node at (0, 0.25) [left] {high pressure};
+ \node at (4, 0.25) [right] {low pressure $p_{\mathrm{atm}}$};
+ \draw (0, 0.5) -- +(4, 0);
+ \draw [->] (2.5, -0.5) node [right] {$\nabla p$} -- +(-1, 0);
+ \draw [->] (1.5, -1) -- +(1, 0) node [right] {force $-\nabla p$};
+ \draw (0, 0.25) ellipse (0.05 and 0.25);
+ \draw [dashed] (4, 0) arc (270:90:0.05 and 0.25);
+ \draw (4, 0) arc (270:450:0.05 and 0.25);
+ \end{tikzpicture}
+\end{center}
+Then this gives a \emph{body force} that drives the water from left to right.
+
+We can also have stress in the horizontal direction. Suppose we have two infinite plates with fluid in the middle. We keep the bottom plane at rest, and move the top plate with velocity $U$.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+ \fill [gray, path fading=north] (0, 1) rectangle (6, 2);
+ \draw [->] (2.5, 1.3) -- +(1, 0) node [right] {$U$};
+
+ \draw [<->] (5.7, 0) -- +(0, 1) node [right, pos=0.5] {$h$};
+ \end{tikzpicture}
+\end{center}
+By definition, the stress is the force per unit area. In this case, it is the horizontal force we need to exert to keep the top plate moving.
+
+\begin{defi}[Tangential stress]
+ The \emph{tangential stress} $\tau_s$ is the force (per unit area) required to move the top plate at speed $U$.
+\end{defi}
+This is also the force we need to exert on the bottom plate to keep it still, and the horizontal force needed at any point within the fluid to maintain the velocity gradient.
+
+By definition of a Newtonian fluid, this stress is proportional to the velocity gradient, i.e.
+\begin{law}
+ For a Newtonian fluid, we have
+ \[
+ \tau_s \propto \frac{U}{h}.
+ \]
+\end{law}
+
+\begin{defi}[Dynamic viscosity]
+ The \emph{dynamic viscosity} $\mu$ of the fluid is the constant of proportionality in
+ \[
+ \tau_s = \mu \frac{U}{h}.
+ \]
+\end{defi}
+
+We can try to figure out the dimensions of these quantities:
+\begin{align*}
+ [\tau_s] &= ML^{-1} T^{-2}\\
+ \left[\frac{U}{h}\right] &= T^{-1}\\
+ [\mu] &= ML^{-1} T^{-1}.
+\end{align*}
+In SI units, $\mu$ has unit $\SI{}{\kilo\gram\per\meter\per\second}$.
+
+We have not yet said what the fluid in the middle does. It turns out this is simple: at the bottom, the fluid is at rest, and at the top, the fluid moves with velocity $U$. In between, the speed varies linearly.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+ \fill [gray, path fading=north] (0, 1) rectangle (6, 2);
+ \draw [->] (2.5, 1.3) -- +(1, 0) node [right] {$U$};
+
+ \draw [<->] (5.7, 0) -- +(0, 1) node [right, pos=0.5] {$h$};
+
+ \draw (1, 0) -- (1, 1);
+ \draw (1, 0) -- (2, 1);
+ \foreach \x in {0,0.25,0.5,0.75} {
+ \draw [->] (1, \x) -- (1 + \x, \x);
+ }
+ \node at (1.75, 0.75) [right] {$u$};
+ \end{tikzpicture}
+\end{center}
+We will derive this formally later.
+
+For a general flow, let $u_T(\mathbf{x})$ be the tangential velocity of the fluid at position $\mathbf{x}$. Then the velocity gradient is
+\[
+ \frac{\partial u_T(\mathbf{x})}{\partial \mathbf{n}}.
+\]
+Hence the tangential stress is given by
+\[
+ \tau_s = \mu \frac{\partial u_T(\mathbf{x})}{\partial \mathbf{n}},
+\]
+and is in the direction of the tangential component of velocity. Again, the normal vector $\mathbf{n}$ points \emph{into} the fluid.
+
+\subsection{Steady parallel viscous flow}
+We are first going to consider a very simple type of flow, known as \emph{steady parallel viscous flow}, and derive the equations of motion of it. We first explain what this name means. The word ``viscous'' simply means we do not assume the viscosity vanishes, and is not something new.
+
+\begin{defi}[Steady flow]
+ A \emph{steady flow} is a flow that does not change in time. In other words, all forces balance, and there is no acceleration.
+\end{defi}
+
+\begin{defi}[Parallel flow]
+ A \emph{parallel flow} is a flow where the fluid flows only in one direction (say the $x$ direction), and depends only on the direction perpendicular to a plane (say the $x$-$z$ plane). So the velocity can be written as
+ \[
+ \mathbf{u} = (u(y), 0, 0).
+ \]
+\end{defi}
+These can be conveniently thought of as being two-dimensional, by forgetting the $z$ direction.
+
+Note that our velocity does not depend on the $x$ direction. This can be justified by the assumption that the fluid is incompressible. If we had a velocity gradient in the $x$-direction, then we will have fluid ``piling up'' at certain places, violating incompressibility.
+
+We will give a formal definition of incompressibility later. In reality, though, fluids \emph{are} compressible. For example, sound waves are exactly waves of compression in air, and could not exist if air were incompressible. So we can alternatively state the assumption of incompressibility as ``sound travels at infinite speed''.
+
+Hence, compressibility matters mostly when we are travelling near the speed of sound. If we are moving at low speeds, we can just pretend the fluid is indeed incompressible. This is what we will do in this course, since the speed of sound is in general very high.
+
+For a general steady parallel viscous flow, we can draw a flow profile like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) parabola (1, 2);
+ \foreach \x in {0,0.25,0.5,0.75} {
+ \pgfmathsetmacro\len{sqrt(\x)}
+ \draw [->] (0, 2*\x) -- +(\len, 0);
+ }
+ \draw [->] (0, 0) -- (0, 2) node [above] {$y$};
+ \end{tikzpicture}
+\end{center}
+The lengths of the arrow signify the magnitude of the velocity at that point.
+
+To derive the equations of motion, we can consider a small box in the fluid.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) parabola (1, 2);
+ \foreach \x in {0,0.25,0.5,0.75} {
+ \pgfmathsetmacro\len{sqrt(\x)}
+ \draw [->] (0, 2*\x) -- +(\len, 0);
+ }
+ \draw [->] (0, 0) -- (0, 2) node [above] {$y$};
+ \draw [mred] (-0.5, 0.8) node [anchor = north east] {$x, y$} -- +(2, 0) node [anchor = north west] {$x + \delta x$} -- +(2, 0.3) -- +(0, 0.3) node [anchor = south east] {$y + \delta y$} -- cycle;
+ \end{tikzpicture}
+\end{center}
+We know that this block of fluid moves in the $x$ direction without acceleration. So the total forces of the surrounding environment on the box should vanish.
+
+We first consider the $x$ direction. There are normal stresses at the sides, and tangential stresses at the top and bottom. The sum of forces in the $x$-direction (per unit transverse width) gives
+\[
+ p(x) \delta y - p(x + \delta x) \delta y + \tau_s(y + \delta y) \delta x + \tau_s(y) \delta x = 0.
+\]
+By the definition of $\tau_s$, we can write
+\[
+ \tau_s(y + \delta y) = \mu \frac{\partial u}{\partial y}(y + \delta y),\quad \tau_s (y) = -\mu \frac{\partial u}{\partial y}(y),
+\]
+where the different signs come from the different normals (for a normal pointing downwards, $\frac{\partial}{\partial \mathbf{n}} = \frac{\partial}{\partial(-y)}$).
+
+Dividing by $\delta x \delta y$, we get
+\[
+ \frac{1}{\delta x}(p(x) - p(x + \delta x)) + \mu\frac{1}{\delta y}\left(\frac{\partial u}{\partial y}(y + \delta y) - \frac{\partial u}{\partial y}(y)\right) = 0.
+\]
+Taking the limit as $\delta x, \delta y \to 0$, we end up with the equation of motion
+\[
+ -\frac{\partial p}{\partial x} + \mu \frac{\partial^2 u}{\partial y^2} = 0.
+\]
+Performing similar calculations in the $y$ direction, we obtain
+\[
+ -\frac{\partial p}{\partial y} = 0.
+\]
+In the second equation, we keep the negative sign for consistency, but obviously in this case it is not necessary.
+
+This is the simplest possible case. We can extend this a bit by allowing non-steady flows and external forces on the fluid. Then the velocity is of the form
+\[
+ \mathbf{u} = (u(y, t), 0, 0).
+\]
+Writing the external body force (per unit volume) as $(f_x, f_y, 0)$, we obtain the equations
+\begin{align*}
+ \rho\frac{\partial u}{\partial t} &= -\frac{\partial p}{\partial x} + \mu \frac{\partial^2 u}{\partial y^2} + f_x\\
+ 0 &=-\frac{\partial p}{\partial y} + f_y.
+\end{align*}
+The derivation of these equations is straightforward, and is left as an exercise for the reader on the example sheet. Here $\rho$ is the density, i.e.\ the mass per unit volume. For air, it is approximately $\SI{1}{\kilo\gram\per\meter\cubed}$, and for water, it is approximately $\SI{1000}{\kilo\gram\per\meter\cubed}$.
+
+Along with the equations, we also need boundary conditions. In general, Newtonian fluids satisfy one of the following two boundary conditions:
+
+\begin{enumerate}
+ \item \emph{No-slip condition}: at the boundary, the tangential component of the fluid velocity equals the tangential velocity of the boundary. In particular, if the boundary is stationary, the tangential component of the fluid velocity is zero. This means fluids stick to surfaces:
+ \[
+ u_T = 0.
+ \]
+ \item \emph{Stress condition}: alternatively, a tangential stress $\tau$ is imposed on the fluid. In this case,
+ \[
+ -\mu \frac{\partial u_T}{\partial n} = \tau.
+ \]
+\end{enumerate}
+The no-slip condition is common when we have a fluid-solid boundary, where we think the fluid ``sticks'' to the solid boundary. The stress condition is more common when we have a fluid-fluid boundary (usually liquid and gas), where we require the tangential stresses to match up. In non-parallel flow, we will often require the flow not to penetrate the solid boundary, but this is automatically satisfied in parallel flow.
+
+We are going to look at some examples.
+\begin{eg}[Couette flow]
+ This is flow driven by the motion of a boundary, as we have previously seen.
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+ \fill [gray, path fading=north] (0, 1) rectangle (6, 2);
+ \draw [->] (2.5, 1.3) -- +(1, 0) node [right] {$U$};
+
+ \draw (2, 0) -- (2, 1);
+ \draw (2, 0) -- (3, 1);
+
+ \foreach \x in {0.25,0.5,0.75} {
+ \draw [->] (2, \x) -- +(\x, 0);
+ }
+ \end{tikzpicture}
+ \end{center}
+ We assume that this is a steady flow, and there is no pressure gradient. So our equations give
+ \[
+ \frac{\partial^2 u}{\partial y^2} = 0.
+ \]
+ Moreover, the no-slip condition says $u = 0$ on $y = 0$; $u = U$ on $y = h$. The solution is thus
+ \[
+ u = \frac{U y}{h}.
+ \]
+\end{eg}
+
+\begin{eg}[Poiseuille flow]
+ This is a flow driven by a pressure gradient between stationary boundaries. Again we have two boundaries
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+ \fill [gray, path fading=north] (0, 1) rectangle (6, 2);
+
+ \node [left] at (0, 0.5) {$P_1$};
+ \node [right] at (6, 0.5) {$P_0$};
+
+ \foreach \x in {0.25,0.5,0.75} {
+ \pgfmathsetmacro\len{\x * (1 - \x)}
+ \draw [->] (2, \x) -- +(3 * \len, 0);
+ };
+ \draw [rotate=270, yscale=-1, shift={(-3,-3)}] (2.5, 0.25) parabola (2, 1);% magic
+ \draw [rotate=90, shift={(-2,-3)}] (2.5, 0.25) parabola (2, 1);
+ \draw (2, 0) -- (2, 1);
+ \end{tikzpicture}
+ \end{center}
+ We have a high pressure $P_1$ on the left, and a low pressure $P_0 < P_1$ on the right. We solve this problem again, but we will also include gravity. So the equations of motion become
+ \begin{align*}
+ -\frac{\partial p}{\partial x} + \mu\frac{\partial^2 u}{\partial y^2} &= 0\\
+ -\frac{\partial p}{\partial y} - g\rho &= 0
+ \end{align*}
+ The boundary conditions are $u = 0$ at $y = 0, h$. The second equation implies
+ \[
+ p = -g\rho y + f(x)
+ \]
+ for some function $f$. Substituting into the first gives
+ \[
+ \mu\frac{\partial^2 u}{\partial y^2} = f'(x).
+ \]
+ The left is a function of $y$ only, while the right depends only on $x$. So both must be constant. Since the pressure drops from $P_1$ to $P_0$ along the tube, we get
+ \[
+ \mu\frac{\partial^2 u}{\partial y^2} = f'(x) = -G,\quad\text{where}\quad G = \frac{P_1 - P_0}{L}
+ \]
+ and $L$ is the length of the tube. Then we find
+ \[
+ u = \frac{G}{2 \mu} y(h - y).
+ \]
+ Here the velocity is the greatest at the middle, where $y = \frac{h}{2}$.
+\end{eg}
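Both profiles can be checked directly against the parallel-flow equations. The sketch below (our own, using \texttt{sympy}, with the convention $G = (P_1 - P_0)/L > 0$ so that $\partial p/\partial x = -G$) confirms the Couette and Poiseuille solutions:

```python
import sympy as sp

y, h, U, G, mu = sp.symbols('y h U G mu', positive=True)

u_couette = U*y/h
u_poiseuille = G/(2*mu) * y*(h - y)

# Couette: no pressure gradient, so mu u'' = 0, with u(0) = 0 and u(h) = U
assert sp.diff(u_couette, y, 2) == 0
assert u_couette.subs(y, 0) == 0
assert u_couette.subs(y, h) == U

# Poiseuille: mu u'' = -G, with no-slip on both boundaries
assert sp.simplify(mu*sp.diff(u_poiseuille, y, 2) + G) == 0
assert u_poiseuille.subs(y, 0) == 0
assert sp.simplify(u_poiseuille.subs(y, h)) == 0

# the Poiseuille profile is greatest at the midpoint y = h/2
assert sp.solve(sp.diff(u_poiseuille, y), y) == [h/2]
```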
+
+Since the equations of motion are linear, if we have both a moving boundary and a pressure gradient, we can just add the two solutions up.
+
+\subsection{Derived properties of a flow}
+The velocity and pressure already fully describe the flow. However, there are some other useful quantities we can compute out of these.
+
+The first thing we consider is how much stuff is being transported by the flow.
+\begin{defi}[Volume flux]
+ The \emph{volume flux} is the volume of fluid traversing a cross-section per unit time. This is given by
+ \[
+ q = \int_0^h u(y) \;\d y
+ \]
+ per unit transverse width.
+\end{defi}
+We can calculate this immediately for the two flows.
+
+\begin{eg}
+ For the Couette flow, we have
+ \[
+ q = \int_0^h \frac{Uy}{h}\;\d y = \frac{Uh}{2}.
+ \]
+ For the Poiseuille flow, we have
+ \[
+ q = \int_0^h \frac{G}{2\mu} y (h - y)\;\d y = \frac{Gh^3}{12 \mu}.
+ \]
+\end{eg}
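The two flux integrals are quick to reproduce symbolically (our own sketch with \texttt{sympy}, not part of the course):

```python
import sympy as sp

y, h, U, G, mu = sp.symbols('y h U G mu', positive=True)

# volume flux per unit transverse width: q = integral of u(y) over the gap
q_couette = sp.integrate(U*y/h, (y, 0, h))
q_poiseuille = sp.integrate(G/(2*mu)*y*(h - y), (y, 0, h))

assert sp.simplify(q_couette - U*h/2) == 0
assert sp.simplify(q_poiseuille - G*h**3/(12*mu)) == 0
```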
+
+We can also ask how much ``rotation'' there is in the flow.
+\begin{defi}[Vorticity]
+ The \emph{vorticity} is defined by
+ \[
+ \boldsymbol\omega = \nabla \times \mathbf{u}.
+ \]
+\end{defi}
+In our case, since we have
+\[
+ \mathbf{u} = (u(y, t), 0, 0),
+\]
+we have
+\[
+ \boldsymbol\omega = \left(0, 0, -\frac{\partial u}{\partial y}\right).
+\]
+\begin{eg}
+ For the case of the Couette flow, the vorticity is $\boldsymbol\omega = \left(0, 0, -\frac{U}{h}\right)$. This is a constant, i.e.\ the vorticity is uniform.
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+ \fill [gray, path fading=north] (0, 1) rectangle (6, 2);
+ \draw [->] (2.5, 1.3) -- +(1, 0) node [right] {$U$};
+
+ \draw (2, 0) -- (2, 1);
+ \draw (2, 0) -- (3, 1);
+
+ \foreach \x in {0.25,0.5,0.75} {
+ \draw [->] (2, \x) -- +(\x, 0);
+ }
+ \draw [-latex'] (3.5, 0.75) arc(90:-180:0.25);
+ \node at (3.75, 0.5) [right] {$\omega$};
+ \end{tikzpicture}
+ \end{center}
+ For the case of the Poiseuille flow, we have
+ \[
+ \boldsymbol\omega = \left(0, 0, \frac{G}{\mu}\left(y - \frac{h}{2}\right)\right).
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+ \fill [gray, path fading=north] (0, 1) rectangle (6, 2);
+
+ \node [left] at (0, 0.5) {$P_1$};
+ \node [right] at (6, 0.5) {$P_0$};
+
+ \foreach \x in {0.25,0.5,0.75} {
+ \pgfmathsetmacro\len{\x * (1 - \x)}
+ \draw [->] (2, \x) -- +(3 * \len, 0);
+ };
+ \draw [rotate=270, yscale=-1, shift={(-3,-3)}] (2.5, 0.25) parabola (2, 1);% magic
+ \draw [rotate=90, shift={(-2,-3)}] (2.5, 0.25) parabola (2, 1);
+ \draw (2, 0) -- (2, 1);
+
+ \draw [-latex'] (2.9, 0.2) arc(180:-90:0.1);
+ \node at (3.1, 0.2) [right] {$\omega$};
+
+ \draw [-latex'] (2.9, 0.8) arc(-180:90:0.1);
+ \node at (3.1, 0.8) [right] {$\omega$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+Recall that the tangential stress $\tau_s$ is the tangential force per unit area exerted by the fluid on the surface, given by
+\[
+ \tau_s = \mu \frac{\partial u}{\partial \mathbf{n}},
+\]
+with $\mathbf{n}$ pointing into the fluid.
+
+\begin{eg}
+ For the Couette flow, we have
+ \[
+ \tau_s =
+ \begin{cases}
+ \mu \frac{U}{h} & y = 0\\
+ -\mu \frac{U}{h} & y = h
+ \end{cases}.
+ \]
+ We see that at $y = 0$, the stress is positive, and pulls the surface forward. At $y = h$, it is negative, and the surface is pulled backwards.
+
+ For the Poiseuille flow, we have
+ \[
+ \tau_s =
+ \begin{cases}
+ \frac{Gh}{2} & y = 0\\
+ \frac{Gh}{2} & y = h
+ \end{cases}.
+ \]
+ Both surfaces are pulled forward, and this is independent of the viscosity. This makes sense since the force on the surface is given by the pressure gradient, which is independent of the fluid.
+\end{eg}
+
+\subsection{More examples}
+Since we are probably bored by the Couette and Poiseuille flows, we do another more interesting example.
+\begin{eg}[Gravity-driven flow down a slope]
+ Suppose we have some fluid flowing along a slope.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 1.5) -- (3, 0) -- (2, 0);
+ \fill [mblue, opacity=0.5] (0, 1.5) -- (3, 0) -- (4, 0) -- (0, 2) -- cycle;
+
+ \draw [gray, ->] (1, 1) -- +(0.5, 1) node [above] {$y$};
+ \draw [gray, ->] (1, 1) -- +(1, -0.5) node [below, pos=0.5] {$x$};
+ \draw [->] (3, 2) -- +(0, -0.7) node [pos=0.5, right] {$g$};
+ \draw [->] (1.5, 1) -- (2, 0.75) node [right] {$u$};
+ \node [right] at (4, 1.5) {$p_0$};
+ \node [right, align=left] at (4.5, 1.33) {atmospheric\\ pressure};
+ \draw (2.6, 0) arc(180:153.4349:0.4);
+ \node [left] at (2.66, 0.15) {$\alpha$};
+ \end{tikzpicture}
+ \end{center}
+ Here there is just atmosphere above the fluid, and we assume the fluid flow is steady, i.e.\ $u$ is merely a function of $y$. We further assume that the atmospheric pressure does not vary over the vertical extent of the flow. This is a very good approximation because $\rho_{\mathrm{air}} \ll \rho_{\mathrm{liq}}$.
+
+ Similarly, we assume $\mu_{\mathrm{air}} \ll \mu_{\mathrm{liq}}$. So the air exerts no significant tangential stress. This is known as a free surface.
+
+ We first solve the $y$ momentum equation. The force in the $y$ direction is $-g \rho \cos \alpha$. Hence the equation is
+ \[
+ \frac{\partial p}{\partial y} = - g\rho \cos \alpha.
+ \]
+ Using the fact that $p = p_0$ at the top boundary, we get
+ \[
+ p = p_0 - g\rho \cos \alpha (y - h).
+ \]
+ In particular, $p$ is independent of $x$. In the $x$ component, we get
+ \[
+ \mu \frac{\partial^2 u}{\partial y^2} = - g\rho \sin \alpha.
+ \]
+ The no slip condition gives $u = 0$ when $y = 0$. The other condition is that there is no stress at $y = h$. So we get $\frac{\partial u}{\partial y} = 0$ when $y = h$.
+
+ The solution is thus
+ \[
+ u = \frac{g\rho\sin \alpha}{2 \mu} y(2h - y).
+ \]
+ This is a bit like the Poiseuille flow, with $g\rho \sin \alpha$ as the pressure gradient. But instead of the velocity going to zero at $y = h$, it would reach zero at $y = 2h$ instead. So this is half a Poiseuille flow. % diagram
+
+ It is an exercise for the reader to calculate the volume flux $q$.
+\end{eg}
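As before, we can verify the profile against the momentum equation and boundary conditions (our own \texttt{sympy} sketch; the last assertion gives away the answer to the exercise, so skip it if you want to do the integral yourself):

```python
import sympy as sp

y, h, g, rho, alpha, mu = sp.symbols('y h g rho alpha mu', positive=True)

u = g*rho*sp.sin(alpha)/(2*mu) * y*(2*h - y)

# x-momentum: mu u'' = -g rho sin(alpha)
assert sp.simplify(mu*sp.diff(u, y, 2) + g*rho*sp.sin(alpha)) == 0

# no-slip at the bottom, no tangential stress at the free surface
assert u.subs(y, 0) == 0
assert sp.diff(u, y).subs(y, h) == 0

# volume flux per unit width (the exercise)
q = sp.integrate(u, (y, 0, h))
assert sp.simplify(q - g*rho*sp.sin(alpha)*h**3/(3*mu)) == 0
```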
+
+For a change, we do a case where we have \emph{unsteady} flow.
+\begin{eg}
+ Consider fluid initially at rest in $y > 0$, resting on a flat surface.
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+ \draw [->] (0, 0) -- (0, 1) node [above] {$y$};
+ \draw [->] (0, 0) -- (1 , 0) node [below] {$x$};
+ \end{tikzpicture}
+ \end{center}
+ At time $t = 0$, the boundary $y = 0$ starts to move at constant speed $U$. There is no force and no pressure gradient.
+
+ We use the $x$-momentum equation to get
+ \[
+ \frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial y^2},
+ \]
+ where $\nu = \frac{\mu}{\rho}$. This is clearly the diffusion equation, with the diffusivity $\nu$. We can view this as the diffusion coefficient for motion/momentum/vorticity.
+ \begin{defi}[Kinematic viscosity]
+ The \emph{kinematic viscosity} is
+ \[
+ \nu = \frac{\mu}{\rho}.
+ \]
+ \end{defi}
+ The boundary conditions are $u = 0$ for $t = 0$ and $u \to 0$ as $y \to \infty$ for all $t$. The other boundary condition is obviously $u = U$ when $y = 0$, for $t > 0$.
+
+ You should have learnt how to solve this in, say, IB Methods. We will approach this differently here.
+
+ Before we start, we try to do some dimensional analysis, and try to figure out how far we have to go away from the boundary before we don't feel any significant motion.
+
+ We first list all the terms involved. We are already provided with a velocity $U$. We let $T$ be our time scale. We would like to know how fast the movement of fluid propagates up the $y$ axis. We note that in this case, we don't really have an extrinsic length scale --- in the case where we have two boundaries, the distance between them is a natural length scale to compare with, but here the fluid is infinitely thick. So we need to come up with a characteristic intrinsic length scale somewhat arbitrarily. For example, at any time $T$, we can let $\delta$ be the depth of fluid that has reached a speed of at least $\frac{U}{10}$ (that was completely arbitrary).
+
+ Instead of trying to figure out the dimensions of, say, $\nu$ and trying to match them, we just replace terms in the differential equation with these quantities of the right dimension, since we know our differential equation is dimensionally correct. So we obtain
+ \[
+ \frac{U}{T} \sim \nu \frac{U}{\delta^2}.
+ \]
+ Since the $U$ cancels out, we get
+ \[
+ \delta \sim \sqrt{\nu T}.
+ \]
+ We can figure out approximately how big this is, using the table below:
+ \begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ & $\mu(\SI{}{\kilo\gram\per\meter\per\second})$ & $\rho(\SI{}{\kilo\gram\per\meter\cubed})$ & $\nu(\SI{}{\meter\squared\per\second})$\\
+ \midrule
+ water & $10^{-3}$ & $10^3$ & $10^{-6}$\\
+ air & $10^{-5}$ & $1$ & $10^{-5}$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ These are, in general, very tiny. So we expect to feel nothing when we move slightly away from the boundary.
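As a quick sanity check on these orders of magnitude, we can evaluate $\delta \sim \sqrt{\nu T}$ numerically; the one-hour time scale below is an arbitrary illustrative choice:

```python
import math

# kinematic viscosities from the table above, in m^2/s
nu = {"water": 1e-6, "air": 1e-5}

T = 3600.0  # an arbitrary time scale: one hour, in seconds

# delta ~ sqrt(nu T): how far the motion has diffused after time T, in metres
delta = {fluid: math.sqrt(n * T) for fluid, n in nu.items()}
```

Even after an hour, the motion has diffused only about $\SI{6}{\centi\meter}$ into water and $\SI{19}{\centi\meter}$ into air, consistent with feeling nothing slightly away from the boundary.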
+
+ We now solve the problem properly. In an infinite domain with no extrinsic length scale, the diffusion equation admits a similarity solution. We write
+ \[
+ u(y, t) = U f(\eta),
+ \]
+ where $f(\eta)$ is a dimensionless function of the dimensionless variable $\eta$. To turn $y$ into a dimensionless variable, we have to define
+ \[
+ \eta = \frac{y}{\delta} = \frac{y}{\sqrt{\nu t}}.
+ \]
+ We substitute this form of the solution into the differential equation. Then we get
+ \[
+ -\frac{1}{2} \eta f'(\eta) = f''(\eta),
+ \]
+ with boundary conditions $f = 1$ on $\eta = 0$ and $f \to 0$ as $\eta \to \infty$. The solution is then
+ \[
+ f = \erfc \left(\frac{\eta}{2}\right).
+ \]
+ Hence
+ \[
+ u = U \erfc \left(\frac{y}{2\sqrt{\nu t}}\right).
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0.5, 0) rectangle (5, -1);
+ \draw [->] (0.5, 0) -- (5, 0) node [right] {$U$};
+ \draw [->] (3.3, 0) -- +(0, 1.2) node [right, pos=0.5] {$\delta \sim \sqrt{\nu t}$};
+
+ \draw (1, 0) -- (1, 1.5);
+ \draw (2.5, 0) parabola (1, 1.5);
+ \foreach \y in {0.3,0.6,0.9,1.2} {
+ \pgfmathsetmacro\x{1.5 - sqrt(1.5 * \y)};
+ \draw [-latex'] (1, \y) -- +(\x, 0);
+ }
+ \end{tikzpicture}
+ \end{center}
+ We can compute the magnitude of the tangential stress on the boundary in the above case to be
+ \[
+ |\tau_s| = \mu\left|\frac{\partial u}{\partial y}\right|_{y = 0} = \mu \frac{U}{\sqrt{\nu t}} \cdot \frac{1}{\sqrt{\pi}} \left.e^{-y^{2}/(4\nu t)}\right|_{y = 0} = \frac{\mu U}{\sqrt{\pi \nu t}}.
+ \]
+ Using our previous calculations of the kinematic viscosities, we find that $\frac{\mu}{\sqrt{\nu}}$ is $1$ for water and $10^{-3}$ for air.
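We can also evaluate the similarity solution numerically. The following sketch (with arbitrary illustrative values of $U$ and $t$, and water's viscosity) checks that the velocity has decayed to about half of $U$ at $y = \sqrt{\nu t}$ and is negligible at $10\sqrt{\nu t}$:

```python
import math

U = 1.0       # plate speed in m/s (arbitrary)
nu = 1e-6     # kinematic viscosity of water, m^2/s
t = 100.0     # time since the plate started moving, s

def u(y):
    """Similarity solution u = U erfc(y / (2 sqrt(nu t)))."""
    return U * math.erfc(y / (2 * math.sqrt(nu * t)))

delta = math.sqrt(nu * t)   # the diffusion length scale, 1 cm here
u_wall = u(0.0)             # equals U: the fluid sticks to the plate
u_delta = u(delta)          # roughly U/2, since erfc(1/2) is about 0.48
u_far = u(10 * delta)       # essentially zero far from the plate

# magnitude of the wall stress, mu U / sqrt(pi nu t), taking rho = 1000
mu = 1000.0 * nu
tau = mu * U / math.sqrt(math.pi * nu * t)
```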
+
+ This is significant in, say, the motion of ocean currents. When the wind blows, it drags the water in the ocean along with it, in such a way that the tangential stresses of the two fluids match at the boundary. Since $\frac{\mu}{\sqrt{\nu}}$ is much smaller for air than for water, we see that even if the air blows really quickly, the resultant ocean current is much smaller, say a hundredth to a thousandth of it.
+\end{eg}
+
+\section{Kinematics}
+So far, we have considered a rather specific kind of fluid flow --- parallel flow, and derived the equations of motion from first principles. From now on, we will start to consider more general theory of fluid dynamics. In this section, we are mostly making some definitions and descriptions of flows. In the next chapter, we will take the equations of motion and try to solve them in certain scenarios.
+
+\subsection{Material time derivative}
+We first want to consider the problem of how we can measure the change in a quantity, say $f$. This might be pressure, velocity, temperature, or anything else you can imagine.
+
+The obvious thing to do would be to consider the time derivative
+\[
+ \frac{\partial f}{\partial t}.
+\]
+In physical terms, this would be equivalent to fixing our measurement instrument at a point and measuring the quantity over time. This is known as the \emph{Eulerian picture}.
+
+However, often we want to consider something else. We pretend we are a fluid particle, and move along with the flow. We then measure the change in $f$ along our trajectory. This is known as the \emph{Lagrangian picture}.
+
+Let's look at these two pictures, and see how they relate to each other. Consider a time-dependent field $f(\mathbf{x}, t)$. For example, it might be the pressure of the system, or the temperature of the fluid. Consider a path $\mathbf{x}(t)$ through the field, and we want to know how the field varies as we move along the path.
+
+Along the path $\mathbf{x}(t)$, the chain rule gives
+\begin{align*}
+ \frac{\d f}{\d t}(\mathbf{x}(t), t) &= \frac{\partial f}{\partial x} \frac{\d x}{\d t} + \frac{\partial f}{\partial y}\frac{\d y}{\d t} + \frac{\partial f}{\partial z}\frac{\d z}{\d t} + \frac{\partial f}{\partial t}\\
+ &= \nabla f \cdot \dot{\mathbf{x}} + \frac{\partial f}{\partial t}.
+\end{align*}
+\begin{defi}[Material derivative]
+ If $\mathbf{x}(t)$ is the (Lagrangian) path followed by a fluid particle, then necessarily $\dot{\mathbf{x}}(t) = \mathbf{u}$ by definition. In this case, we write
+ \[
+ \frac{\d f}{\d t} = \frac{\D f}{\D t}.
+ \]
+ This is the \emph{material derivative}.
+
+ In other words, we have
+ \[
+ \frac{\D f}{\D t} = \mathbf{u}\cdot \nabla f + \frac{\partial f}{\partial t}.
+ \]
+\end{defi}
+On the left of this equation, we have the \emph{Lagrangian derivative}, which is the change in $f$ as we follow the path. On the right of this equation, the first term is the \emph{advective derivative}, which is the change due to change in position. The second term is $\frac{\partial f}{\partial t}$, the \emph{Eulerian time derivative}, which is the change at a fixed point.
+
+For example, consider a river that gets wider as we go downstream. We know (from, say, experience) that the flow is faster upstream than downstream. If the motion is steady, then the Eulerian time derivative vanishes, but the Lagrangian derivative does not, since as the fluid goes down the stream, the fluid slows down, and there is a spatial variation.
+
+Often, it is the Lagrangian derivative that is relevant, but the Eulerian time derivative is what we can usually put into differential equations. So we will need both of them, and relate them by the formula above.
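The relation between the two derivatives is easy to verify numerically. In the sketch below (everything in it is an illustrative invention), we take a hypothetical one-dimensional flow $u(x) = x$, whose particle paths are $x(t) = x_0 e^t$, and a field $f(x, t) = xt$, and check that differentiating $f$ along the path agrees with $\frac{\partial f}{\partial t} + u \frac{\partial f}{\partial x}$:

```python
import math

x0 = 1.0                      # release point of the particle

def x_path(t):                # particle path for the flow u(x) = x
    return x0 * math.exp(t)

def f(x, t):                  # a sample scalar field ("temperature")
    return x * t

def material_derivative(x, t):
    # Df/Dt = df/dt + u df/dx, with df/dt = x, df/dx = t, u = x
    return x + x * t

# Lagrangian derivative: numerically differentiate f along the path
t, h = 1.0, 1e-6
lagrangian = (f(x_path(t + h), t + h) - f(x_path(t - h), t - h)) / (2 * h)
eulerian_sum = material_derivative(x_path(t), t)
```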
+
+\subsection{Conservation of mass}
+We can start formulating equations. The first one is the conservation of mass.
+
+We fix an arbitrary region of space $\mathcal{D}$ with boundary $\partial \mathcal{D}$ and outward normal $\mathbf{n}$. We imagine there is some flow through this volume
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, ->-=0.13, ->-=0.4, ->-=0.9] plot [smooth] coordinates {(-3, 0) (-2, -0.3) (-0.5, 0.5) (0.5, 0.5) (2, 1.5)};
+ \draw [mblue, ->-=0.13, ->-=0.4, ->-=0.9] plot [smooth] coordinates {(-3, -1) (-2, -1) (0, -0.5) (2, -1.5)};
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+ \node at (0, 0) {$\mathcal{D}$};
+ \node [above] at (0.3, 1) {$\partial \mathcal{D}$};
+ \draw [->] (1.4, 0.4) -- +(0.4, 0.2666) node [right] {$\mathbf{n}$};
+ \end{tikzpicture}
+\end{center}
+What we want to say is that the change in the mass inside $\mathcal{D}$ is equal to the total flow of fluid through the boundary. We can write this as
+\[
+ \frac{\d }{\d t}\int_{\mathcal{D}} \rho \;\d V = -\int_{\partial \mathcal{D}} \rho \mathbf{u}\cdot \mathbf{n}\;\d S.
+\]
+We have the negative sign since we picked the outward normal, and hence the integral measures the outward flow of fluid.
+
+Since the domain is fixed, we can interchange the derivative and the integral on the left; on the right, we can use the divergence theorem to get
+\[
+ \int_{\mathcal{D}} \left(\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u})\right) \;\d V = 0.
+\]
+Since $\mathcal{D}$ was arbitrary, we must have
+\[
+ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
+\]
+everywhere in space.
+
+This is the general form of a conservation law --- the rate of change of ``stuff'' density plus the divergence of the ``stuff flux'' is constantly zero. Similar conservation laws appear everywhere in physics.
+
+In the conservation equation, we can expand $\nabla \cdot (\rho \mathbf{u})$ to get
+\[
+ \frac{\partial \rho}{\partial t} + \mathbf{u}\cdot \nabla \rho + \rho \nabla \cdot \mathbf{u} = 0.
+\]
+We notice the first term is just the material derivative of $\rho$. So we get
+\[
+ \frac{\D \rho}{\D t} + \rho \nabla \cdot \mathbf{u} = 0.
+\]
+With the conservation of mass, we can now properly say what incompressibility is. What exactly happens when we compress a fluid? When we compress a fixed mass of fluid, its volume decreases, so to conserve mass, its density must increase. If we don't allow changes in density, then the material derivative $\frac{\D \rho}{\D t}$ must vanish. So we define an incompressible fluid as follows:
+\begin{defi}[Incompressible fluid]
+ A fluid is \emph{incompressible} if the density of a fluid particle does not change. This implies
+ \[
+ \frac{\D \rho}{\D t} = 0,
+ \]
+ and hence
+ \[
+ \nabla \cdot \mathbf{u} = 0.
+ \]
+ This is also known as the \emph{continuity equation}.
+\end{defi}
+For parallel flow, $\mathbf{u} = (u, 0, 0)$. So if the flow is incompressible, we must have $\frac{\partial u}{\partial x} = \nabla \cdot \mathbf{u} = 0$. This is why we previously considered $u$ of the form $u = u(y, z, t)$.
+
+Of course, incompressibility is just an approximation. Since we can hear things, everything must be compressible. So what really matters is whether the flow speed is small compared to the speed of sound. If it is relatively small, then incompressibility is a good approximation. In air, the speed of sound is approximately $\SI{340}{\meter\per\second}$. In water, the speed of sound is approximately $\SI{1500}{\meter\per\second}$.
+
+\subsection{Kinematic boundary conditions}
+Suppose our system has a boundary. There are two possible cases --- either the boundary is rigid, like a wall, or it moves with the fluid. In either case, fluids are not allowed to pass through the boundary.
+
+To deal with boundaries, we first suppose it has a velocity $\mathbf{U}$ (which may vary with position if the boundary extends through space). We define a local reference frame moving with velocity $\mathbf{U}$, so that in this frame of reference, the boundary is stationary.
+
+In this frame of reference, the fluid has relative velocity
+\[
+ \mathbf{u}' = \mathbf{u} - \mathbf{U}.
+\]
+As we mentioned, fluids are not allowed to cross the boundary. Let the normal to the boundary be $\mathbf{n}$, then we must have
+\[
+ \mathbf{u}' \cdot \mathbf{n} = 0.
+\]
+In terms of the outside frame, we get
+\[
+ \mathbf{u}\cdot \mathbf{n} = \mathbf{U} \cdot \mathbf{n}.
+\]
+This is the condition we have for the boundary.
+
+If we have a stationary rigid boundary, e.g.\ the wall, then $\mathbf{U} = \mathbf{0}$. So our boundary condition is
+\[
+ \mathbf{u}\cdot \mathbf{n}= 0.
+\]
+Free boundaries are more complicated. A good example of a free material boundary is the surface of a water wave, or interface between two immiscible fluids --- say oil and water.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) sin (1, 0.2) cos (2, 0) sin (3, -0.2) cos (4, 0);
+ \node at (3, 1) {oil};
+ \node at (3, -1) {water};
+
+ \draw [->] (1, 0.2) -- +(0, 0.7) node [above] {$\mathbf{n}$};
+
+ \node [circ] at (3, -0.2) {};
+ \node [above] at (3, -0.2) {$\zeta(x, y, t)$};
+ \end{tikzpicture}
+\end{center}
+We can define the surface by a function
+\[
+ z = \zeta(x, y, t).
+\]
+We can alternatively define the surface as a contour of
+\[
+ F(x, y, z, t) = z - \zeta(x, y, t).
+\]
+Then the surface is defined by $F = 0$. By IA Vector Calculus, the normal satisfies
+\[
+ \mathbf{n} \parallel \nabla F = (-\zeta_x, -\zeta_y, 1).
+\]
+Also, we have
+\[
+ \mathbf{U} = (0, 0, \zeta_t).
+\]
+We now let the velocity of the fluid be
+\[
+ \mathbf{u} = (u, v, w).
+\]
+Then the boundary condition requires
+\[
+ \mathbf{u}\cdot \mathbf{n} = \mathbf{U}\cdot \mathbf{n}.
+\]
+In other words, we have
+\[
+ -u \zeta_x - v\zeta_y + w = \zeta_t.
+\]
+So we get
+\[
+ w = u \zeta_x + v\zeta_y + \zeta_t = \mathbf{u}\cdot \nabla \zeta + \frac{\partial \zeta}{\partial t} = \frac{\D \zeta}{\D t}.
+\]
+So all the boundary condition says is
+\[
+ \frac{\D \zeta}{\D t} = w.
+\]
+Alternatively, since $F$ is a material surface, its material derivative must vanish. So
+\[
+ \frac{\D F}{\D t} = 0,
+\]
+and this just gives the same result as above. We will discuss surface waves towards the end of the course, and then come back and use this condition.
+
+\subsection{Streamfunction for incompressible flow}
+We suppose our fluid is incompressible, i.e.
+\[
+ \nabla \cdot \mathbf{u} = 0.
+\]
+By IA Vector Calculus, this implies there is a vector potential $\mathbf{A}$ such that
+\begin{defi}[Vector potential]
+ A \emph{vector potential} is an $\mathbf{A}$ such that
+ \[
+ \mathbf{u} = \nabla \times \mathbf{A}.
+ \]
+\end{defi}
+
+In the special case where the flow is two dimensional, say
+\[
+ \mathbf{u} = (u(x, y, t), v(x, y, t), 0),
+\]
+we can take $\mathbf{A}$ to be of the form
+\[
+ \mathbf{A} = (0, 0, \psi(x, y, t)).
+\]
+Taking the curl of this, we get
+\[
+ \mathbf{u} = \left(\frac{\partial \psi}{\partial y}, -\frac{\partial \psi}{\partial x}, 0\right).
+\]
+\begin{defi}[Streamfunction]
+ The $\psi$ such that $\mathbf{A} = (0, 0, \psi)$ is the \emph{streamfunction}.
+\end{defi}
+This streamfunction is both physically significant, and mathematically convenient, as we will soon see.
+
+We look at some properties of the streamfunction. The first thing we can do is to look at the contours $\psi = c$. These have normal
+\[
+ \mathbf{n} = \nabla \psi = \left(\psi_x, \psi_y, 0\right).
+\]
+We immediately see that
+\[
+ \mathbf{u}\cdot \mathbf{n} = \frac{\partial \psi}{\partial x} \frac{\partial \psi}{\partial y} - \frac{\partial \psi}{\partial y}\frac{\partial \psi}{\partial x} = 0.
+\]
+So the flow is perpendicular to the normal, i.e.\ tangent to the contours of $\psi$.
+\begin{defi}[Streamlines]
+ The \emph{streamlines} are the contours of the streamfunction $\psi$.
+\end{defi}
+This gives an instantaneous picture of flow.
+
+Note that if the flow is \emph{unsteady}, then the streamlines are \emph{not} particle paths.
+\begin{eg}
+ Consider $\mathbf{u} = (t, 1, 0)$. When $t = 0$, the velocity is purely in the $y$ direction, and the streamlines are also vertical; at $t = 1$, the velocity makes a $45^\circ$ angle with the horizontal, and the streamlines are slanted:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.57] (0, 0) -- +(0, 2);
+ \draw [->-=0.57] (0.5, 0) -- +(0, 2);
+ \draw [->-=0.57] (1, 0) -- +(0, 2);
+ \draw [->-=0.57] (1.5, 0) -- +(0, 2);
+ \draw [->-=0.57] (2, 0) -- +(0, 2);
+
+ \node [below] at (1, -0.5) {$t = 0$};
+
+ \begin{scope}[shift={(4, 0)}]
+ \draw [->-=0.6] (0, 0) -- +(2, 2);
+ \draw [->-=0.6] (0.5, 0) -- +(2, 2);
+ \draw [->-=0.6] (1, 0) -- +(2, 2);
+ \draw [->-=0.6] (1.5, 0) -- +(2, 2);
+ \draw [->-=0.6] (2, 0) -- +(2, 2);
+
+ \node [below] at (1, -0.5) {$t = 1$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ However, \emph{no} particles will actually follow any of these streamlines. For a particle released at $\mathbf{x}_0 = (x_0, y_0)$, we get
+ \[
+ \dot{x}(t) = u = t,\quad \dot{y} = v = 1.
+ \]
+ Hence we get
+ \[
+ x = \frac{1}{2}t^2 + x_0,\quad y = t + y_0.
+ \]
+ Eliminating $t$, we get that the path is given by
+ \[
+ (x - x_0) = \frac{1}{2}(y - y_0)^2.
+ \]
+ So the particle paths are parabolas.
+\end{eg}
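We can confirm the parabolic particle paths by integrating the velocity field numerically (a small sketch; the release point and step size are arbitrary choices):

```python
# integrate the particle equations x' = t, y' = 1 for u = (t, 1, 0)
x0, y0 = 2.0, 3.0
x, y, t = x0, y0, 0.0
dt = 1e-4
for _ in range(20000):           # integrate up to t = 2
    x += (t + dt / 2) * dt       # midpoint rule, exact for x' = t
    y += dt
    t += dt

# the path should satisfy (x - x0) = (y - y0)^2 / 2
path_error = abs((x - x0) - 0.5 * (y - y0) ** 2)
```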
+Typically, we draw streamlines that are ``evenly spaced'', i.e.\ we pick the streamlines $\psi = c_0$, $\psi = c_1$, $\psi = c_2$ etc.\ such that the difference $c_{i + 1} - c_i$ is constant.
+
+Then we know the flow is faster where streamlines are closer together:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [-<-=0.44] (4, 0.5) parabola (0, 1);
+ \draw [-<-=0.44] (4, 0) -- (0, 0);
+ \draw [-<-=0.44] (4, -0.5) parabola (0, -1);
+
+ \node [left] {slow};
+ \node [right] at (4, 0) {fast};
+ \end{tikzpicture}
+\end{center}
+This is because fluid between two streamlines must remain between them. So if the flow is incompressible, to conserve mass, the fluid must move faster where the streamlines are closer together.
+
+We can also consider the volume flux (per unit length in the $z$-direction), crossing any curve from $\mathbf{x}_0$ to $\mathbf{x}_1$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [-<-=0.44] (4, 0.5) parabola (0, 1);
+ \draw [-<-=0.44] (4, 0) -- (0, 0);
+ \draw [-<-=0.44] (4, -0.5) parabola (0, -1);
+
+ \draw [mred, thick] plot [smooth, tension=1] coordinates {(1, 0.78125) (2, 0.3) (3, -0.53125)};
+ \node [left] {slow};
+ \node [right] at (4, 0) {fast};
+
+ \node [circ] at (1, 0.78125) {};
+ \node [above] at (1, 0.78125) {$\mathbf{x}_0$};
+ \node [circ] at (3, -0.53125) {};
+ \node [below] at (3, -0.53125) {$\mathbf{x}_1$};
+
+ \draw [->] (0.5, 0.45) -- (1, 0.4) node [right] {$\mathbf{u}$};
+
+ \end{tikzpicture}
+\end{center}
+Then the volume flux is
+\[
+ q = \int_{\mathbf{x}_0}^{\mathbf{x}_1} \mathbf{u}\cdot \mathbf{n}\;\d \ell.
+\]
+We see that
+\[
+ \mathbf{n}\;\d \ell = (-\d y, \d x).
+\]
+So we can write this as
+\[
+ q = \int_{\mathbf{x}_0}^{\mathbf{x}_1} -\frac{\partial \psi}{\partial y}\;\d y - \frac{\partial \psi}{\partial x}\;\d x = \psi(\mathbf{x}_0) - \psi(\mathbf{x}_1).
+\]
+So the flux depends only on the difference in the value of $\psi$. Hence, for closer streamlines, to maintain the same volume flux, we need a higher speed.
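This flux property is easy to check numerically for a concrete streamfunction. The sketch below (the streamfunction and endpoints are arbitrary choices) computes the line integral along a straight segment and compares it with $\psi(\mathbf{x}_0) - \psi(\mathbf{x}_1)$:

```python
import math

def psi(x, y):                  # an arbitrary sample streamfunction
    return math.sin(x) * math.cos(y)

def velocity(x, y):             # u = dpsi/dy, v = -dpsi/dx
    return -math.sin(x) * math.sin(y), -math.cos(x) * math.cos(y)

a, b = (0.2, 0.1), (1.0, 0.8)   # endpoints x_0 and x_1 of the curve
N = 20000
dx, dy = (b[0] - a[0]) / N, (b[1] - a[1]) / N

q = 0.0
for k in range(N):
    s = (k + 0.5) / N           # midpoint of the k-th sub-segment
    u, v = velocity(a[0] + s * (b[0] - a[0]), a[1] + s * (b[1] - a[1]))
    q += -u * dy + v * dx       # u . n dl, with n dl = (-dy, dx)

flux_error = abs(q - (psi(*a) - psi(*b)))
```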
+
+Also, note that $\psi$ is constant on a stationary rigid boundary, i.e.\ the boundary is a streamline, since the flow is tangential at the boundary. This is a consequence of $\mathbf{u}\cdot \mathbf{n} = 0$. We often choose $\psi = 0$ as our boundary.
+
+Sometimes it is convenient to consider the case when we have plane polars. We embed these in cylindrical polars $(r, \theta, z)$. Then we have
+\[
+ \mathbf{u} = \nabla \times (0, 0, \psi) = \frac{1}{r}
+ \begin{vmatrix}
+ \mathbf{e}_r & r \mathbf{e}_\theta & \mathbf{e}_z\\
+ \partial_r & \partial_\theta & \partial_z\\
+ 0 & 0 & \psi
+ \end{vmatrix}
+ =\left(\frac{1}{r} \frac{\partial \psi}{\partial \theta}, -\frac{\partial \psi}{\partial r}, 0\right).
+\]
+As an exercise, we can verify that $\nabla \cdot \mathbf{u} = 0$. It is convenient to note that in plane polars,
+\[
+ \nabla \cdot \mathbf{u} = \frac{1}{r} \frac{\partial}{\partial r} (r u_r) + \frac{1}{r} \frac{\partial u_\theta}{\partial \theta}.
+\]
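As a numerical version of that exercise, we can take an arbitrary $\psi(r, \theta)$, build $\mathbf{u}$ from it by the formula above, and check by finite differences that the divergence vanishes (everything below is an illustrative sketch):

```python
import math

def psi(r, th):                 # an arbitrary sample streamfunction
    return r * r * math.sin(th)

h = 1e-4                        # finite-difference step

def u_r(r, th):                 # u_r = (1/r) dpsi/dtheta
    return (psi(r, th + h) - psi(r, th - h)) / (2 * h * r)

def u_th(r, th):                # u_theta = -dpsi/dr
    return -(psi(r + h, th) - psi(r - h, th)) / (2 * h)

def divergence(r, th):
    # (1/r) d(r u_r)/dr + (1/r) du_theta/dtheta
    d_r = ((r + h) * u_r(r + h, th) - (r - h) * u_r(r - h, th)) / (2 * h)
    d_th = (u_th(r, th + h) - u_th(r, th - h)) / (2 * h)
    return (d_r + d_th) / r

div = divergence(1.3, 0.7)      # should vanish up to truncation error
```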
+
+\section{Dynamics}
+\subsection{Navier-Stokes equations}
+So far, we haven't done anything useful. We have not yet come up with an equation of motion for fluids, and thus we don't know how fluids actually behave. The equations for parallel viscous flow were a good start, and it turns out the general equation of motion is of a rather similar form.
+
+Unfortunately, the equation is rather complicated and difficult to derive, and we will not derive it in this course. Instead, it is left for the Part II course. However, we will be able to derive some special cases of it later under certain simplifying assumptions.
+\begin{law}[Navier-Stokes equation]
+ \[
+ \rho \frac{\D \mathbf{u}}{\D t} = - \nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f}.
+ \]
+\end{law}
+This is the general equation for fluid motion. The left hand side is mass times acceleration, and the right is the individual forces --- the pressure gradient, viscosity, and the body forces (per unit volume) respectively.
+
+In general, these are very difficult equations to solve because of non-linearity. For example, in the material derivative, we have the term $\mathbf{u} \cdot \nabla \mathbf{u}$.
+
+There are a few things to note:
+\begin{enumerate}
+ \item The acceleration of a fluid particle is the Lagrangian material derivative of the velocity.
+ \item The derivation of the viscous term is complicated, since for each side of the cube, there is one normal direction and two tangential directions. Thus this requires consideration of the \emph{stress tensor}. This is only done in IID Fluid Dynamics.
+ \item In a gravitational field, we just have $\mathbf{f} = \mathbf{g}\rho$. This is the only body force we will consider in this course.
+ \item Note that $\nabla^2 \mathbf{u}$ can be written as
+ \[
+ \nabla^2 \mathbf{u} = \nabla (\nabla \cdot \mathbf{u}) - \nabla \times (\nabla \times \mathbf{u}).
+ \]
+ In an incompressible fluid, this reduces to
+ \[
+ \nabla^2 \mathbf{u} = -\nabla \times (\nabla \times \mathbf{u}) = -\nabla \times \boldsymbol\omega,
+ \]
+ where $\boldsymbol\omega = \nabla \times \mathbf{u}$ is the vorticity. In Cartesian coordinates, for $\mathbf{u} = (u_x, u_y, u_z)$, we have
+ \[
+ \nabla^2 \mathbf{u} = (\nabla^2 u_x, \nabla^2 u_y, \nabla^2 u_z),
+ \]
+ where
+ \[
+ \nabla^2 = \pd[2]{x} + \pd[2]{y} + \pd[2]{z}.
+ \]
+ \item The Navier-Stokes equation reduces to the parallel flow equation in the special case of parallel flow, i.e.\ $\mathbf{u} = (u(y, t), 0, 0)$. This verification is left as an exercise for the reader.
+\end{enumerate}
+
+\subsection{Pressure}
+In the Navier-Stokes equation, we have a pressure term. In general, we classify the pressure into two categories. If there is gravity, then we will get pressure in the fluid due to the weight of fluid above it. This is what we call \emph{hydrostatic pressure}. Technically, this is the pressure in a fluid at rest, i.e.\ when $\mathbf{u} = \mathbf{0}$.
+
+We denote the hydrostatic pressure as $p_H$. To find this, we put in $\mathbf{u} = 0$ into the Navier-Stokes equation to get
+\[
+ \nabla p_H = \mathbf{f} = \rho \mathbf{g}.
+\]
+We can integrate this to obtain
+\[
+ p_H = \rho \mathbf{g}\cdot \mathbf{x} + p_0,
+\]
+where $p_0$ is some arbitrary constant. Usually, we have $\mathbf{g} = (0, 0, -g)$. Then
+\[
+ p_H = p_0 - g \rho z.
+\]
+This exactly says that the hydrostatic pressure is the weight of the fluid above you.
+
+What can we infer from this? Suppose we have a body $\mathcal{D}$ with boundary $\partial \mathcal{D}$ and outward normal $\mathbf{n}$. Then the force due to the pressure is
+\begin{align*}
+ \mathbf{F} &= - \int_{\partial \mathcal{D}} p_H \mathbf{n}\;\d S \\
+ &= - \int_\mathcal{D} \nabla p_H \;\d V\\
+ &= - \int_{\mathcal{D}} \mathbf{g} \rho \;\d V\\
+ &= - \mathbf{g} \int_\mathcal{D} \rho \;\d V\\
+ &= - M\mathbf{g},
+\end{align*}
+where $M$ is the mass of fluid displaced. This is known as \emph{Archimedes' principle}.
+
+In particular, if the body is less dense than the fluid, it will float; if the body is denser than the fluid, it will sink; if the density is the same, then it does not move, and we say it is neutrally buoyant.
+
+This is valid only when nothing is moving, since that was our assumption. Things can be very different when things are moving, which is why planes can fly.
+
+In general, when there is motion, we might expect some other pressure gradient. It can either be some external pressure gradient driving the motion (e.g.\ in the case of Poiseuille flow), or a pressure gradient caused by the flow itself. In either case, we can write
+\[
+ p = p_H + p',
+\]
+where $p_H$ is the hydrostatic pressure, and $p'$ is what caused/results from motion.
+
+We substitute this into the Navier-Stokes equation to obtain
+\[
+ \rho \frac{\D \mathbf{u}}{\D t} = -\nabla p' + \mu \nabla^2 \mathbf{u}.
+\]
+So the hydrostatic pressure term cancels with the gravitational term. What we usually do is drop the ``prime'', and just look at the deviation from hydrostatic pressure. What this means is that gravity no longer plays a role, and we can ignore gravity in any flow in which the density is constant. Then all fluid particles are neutrally buoyant. This is the case in most of the course, except when we consider motion of water waves, since there is a difference in air density and water density.
+
+\subsection{Reynolds number}
+As we mentioned, the Navier-Stokes equation is very difficult to solve. So we want to find some approximations to the equation. We would like to know if we can ignore some terms. For example, if we can neglect the viscous term, then we are left with a first-order equation, not a second-order one.
+
+To do so, we look at the balance of terms, and see if some terms dominate the others. This is done via Reynolds number.
+
+We suppose the flow has a characteristic speed $U$ and an extrinsic length scale $L$, externally imposed by geometry. For example, if we look at the flow between two planes, the characteristic speed can be the maximum (or average) speed of the fluid, and a sensible length scale would be the length between the planes.
+
+Next, we have to define the time scale $T = L/U$. Finally, we suppose pressure \emph{differences} have characteristic magnitude $P$. We are concerned with differences since it is pressure differences that drive the flow.
+
+We are going to take the Navier-Stokes equation, and look at the scales of the terms. Dividing by $\rho$, we get
+\begin{alignat*}{4}
+ \frac{\partial \mathbf{u}}{\partial t} &{}+{}& \mathbf{u}\cdot \nabla \mathbf{u} &{}={}& -\frac{1}{\rho} \nabla p &{}+{}& \nu \nabla^2 \mathbf{u},\\
+ \intertext{where again $\nu = \frac{\mu}{\rho}$. We are going to estimate the size of these terms. We get}
+ \frac{U}{(L/U)} &&U \cdot \frac{U}{L} &&\frac{1}{\rho} \frac{P}{L} &&\nu \frac{U}{L^2}.
+ \intertext{Dividing by $U^2/L$, we get}
+ 1 && 1 && \frac{P}{\rho U^2} && \frac{\nu}{UL}.
+\end{alignat*}
+\begin{defi}[Reynolds number]
+ The \emph{Reynolds number} is
+ \[
+ Re = \frac{UL}{\nu},
+ \]
+ which is a dimensionless number. This is a measure of the ratio of the inertial terms to the viscous terms.
+\end{defi}
+So if $Re$ is very large, then the viscous term is small, and we can probably neglect it. For example, for an aircraft, we have $U \sim 10^4$, $L \sim 10$ and $\nu \sim 10^{-5}$. So the Reynolds number is large, and we can ignore the viscous term. On the other hand, if we have a small slow-moving object in viscous fluid, then $Re$ will be small, and the viscous term is significant.
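The point is quickly made concrete with numbers, using the aircraft estimates from above and, as an invented contrast, a hypothetical micro-organism swimming in water:

```python
def reynolds(U, L, nu):
    """Reynolds number Re = U L / nu."""
    return U * L / nu

# the aircraft estimates from the text: U ~ 1e4, L ~ 10, nu ~ 1e-5
Re_aircraft = reynolds(1e4, 10.0, 1e-5)

# a hypothetical micro-organism: U ~ 1e-4 m/s, L ~ 1e-4 m, in water
Re_microbe = reynolds(1e-4, 1e-4, 1e-6)
```

Here the aircraft has $Re \sim 10^{10} \gg 1$, so viscosity is negligible, while the micro-organism has $Re \sim 10^{-2} \ll 1$ and lives in the viscous regime.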
+
+Note that the pressure always scales to balance the dominant terms in the equation, so as to impose incompressibility, i.e.\ $\nabla \cdot \mathbf{u} = 0$. So we don't have to care about its scale.
+
+In practice, it is the Reynolds number, and not the factors $U, L, \nu$ individually, that determines the behaviour of a flow. For example, even though lava is very very viscous, on a large scale, the flow of lava is just like the flow of water in a river, since they have comparable Reynolds number.
+
+\begin{defi}[Dynamic similarity]
+ Flows with the same geometry and equal Reynolds numbers are said to be dynamically similar.
+\end{defi}
+
+When $Re \ll 1$, the inertia terms are negligible, and we now have
+\[
+ P \sim \frac{\rho\nu U}{L} = \frac{\mu U}{L}.
+\]
+So the pressure balances the shear stress. We can approximate the Navier-Stokes equation by dropping the term on the left hand side, and write
+\[
+ 0 = -\nabla p + \mu \nabla^2 \mathbf{u},
+\]
+with the incompressibility condition
+\[
+ \nabla \cdot \mathbf{u} = 0.
+\]
+These are known as the \emph{Stokes equations}. This is now a linear system, with four equations and four unknowns (three components of $\mathbf{u}$ and the pressure). We find
+\[
+ \mathbf{u} \propto \nabla p,
+\]
+and so the velocity is proportional to the pressure gradient.
+
+When $Re \gg 1$, the viscous terms are negligible on the extrinsic length scale. Then the pressure scales on the momentum flux,
+\[
+ P \sim \rho U^2,
+\]
+and on extrinsic scales, we can approximate Navier-Stokes equations by the \emph{Euler equations}
+\begin{align*}
+ \rho \frac{\D \mathbf{u}}{\D t} &= - \nabla p\\
+ \nabla \cdot \mathbf{u} &= 0.
+\end{align*}
+In this case, the acceleration is proportional to the pressure gradient.
+
+Why do we keep saying ``extrinsic scale''? Note that when we make this approximation, the order of the differential equation drops by $1$. So we can no longer satisfy all boundary conditions. The boundary condition we have to give up is the no-slip condition. So when we make this approximation, we will have to allow the fluid at the boundary to have non-zero velocity.
+
+So when is this approximation valid? It is obviously wrong when we are \emph{at} the boundary. If the velocity gradient near the boundary is relatively large, then we quickly get significant non-zero velocity when we move away from boundary, and hence obtain the ``correct'' answer. So we get problems only at the length scale where the viscous and inertia effects are comparable, i.e.\ at the \emph{intrinsic} length scale.
+
+Since the \emph{intrinsic length scale} $\delta$ is the scale at which the viscous and inertia terms are comparable, we need
+\[
+ \frac{U^2}{\delta} \sim \frac{\nu U}{\delta^2}.
+\]
+So we get
+\[
+ \delta = \frac{\nu}{U}.
+\]
+Alternatively, we have
+\[
+ \frac{\delta}{L} = \frac{\nu}{UL} = \frac{1}{Re}.
+\]
+Thus, for large Reynolds number, the intrinsic length scale, which is the scale in which the viscous and boundary effects matter, is small, and we can ignore them.
+
+For much of the rest of this course, we will ignore viscosity, and consider \emph{inviscid flow}.
+
+\subsection{A case study: stagnation point flow (\texorpdfstring{$\mathbf{u} = \mathbf{0}$}{u = 0})}
+\begin{eg}[Stagnation point flow]
+ We attempt to find a solution to the Navier-Stokes equation in the half-plane $y \geq 0$, subject to the boundary condition $\mathbf{u} \to (Ex, -E y, 0)$ as $y \to \infty$, where $E > 0$ is a constant, and $\mathbf{u} = \mathbf{0}$ on $y = 0$.
+
+ Intuitively, we would expect the flow to look like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (-3, 0) rectangle (3, -0.5);
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 3) node [above] {$y$};
+
+ \draw [->-=0.5, rotate=90](3, 0.2) parabola (0.2, 3);
+ \draw [->-=0.5, xscale=-1, rotate=90](3, 0.2) parabola (0.2, 3);
+
+ \clip (-3, 3) rectangle (3, 0);
+ \draw [->-=0.5, rotate=90](3.3, 0.9) parabola (0.9, 3);
+ \draw [->-=0.5, xscale=-1, rotate=90](3.3, 0.9) parabola (0.9, 3);
+ \end{tikzpicture}
+ \end{center}
+ The boundary conditions as $y \to \infty$ gives us a picture of what is going on at large $y$, as shown in the diagram, but near the boundary, the velocity has to start to vanish. So we would want to solve the equations to see what happens near the boundary.
+
+ Again, this problem does not have any extrinsic length scale. So we can seek a similarity solution in terms of
+ \[
+ \eta = \frac{y}{\delta}, \quad \delta = \sqrt{\frac{\nu}{E}}.
+ \]
+ First of all, we make sure this is the right similarity solution. We show that $\delta$ has dimensions of length:
+ \begin{itemize}
+ \item The dimension of $E$ is $[E] = T^{-1}$
+ \item The dimension of $\nu$ is $[\nu] = L^2 T^{-1}$.
+ \end{itemize}
+ Hence
+ \[
+ \left[\frac{\nu}{E}\right] = L^2,
+ \]
+ and $\delta$ has dimension $L$. Therefore $\eta = \frac{y}{\delta}$ is dimensionless.
+
+ Note that we make $\eta$ a function of $y$ instead of a function of $x$ so that we can impose the boundary condition as $y \to \infty$.
+
+ What would the similarity solution look like? Applying the incompressibility condition $\nabla \cdot \mathbf{u} = 0$, it must be of the form
+ \[
+ \mathbf{u} = (u, v, 0) = (Ex g'(\eta), -E \delta g(\eta), 0).
+ \]
+ To check this is really incompressible, we can find the streamfunction. It turns out it is given by
+ \[
+ \psi = \sqrt{\nu E} x g(\eta).
+ \]
+ We can compute
+ \[
+ u = \frac{\partial \psi}{\partial y} = \sqrt{\nu E} x g'(\eta) \frac{1}{\delta} = Ex g'(\eta),
+ \]
+ and similarly for the $y$ component. So this is really the streamfunction. Therefore we must have $\nabla \cdot \mathbf{u} = 0$. Alternatively, we can simply compute $\nabla \cdot \mathbf{u}$ and find it to be zero.
+
+ Finally, we look at the Navier-Stokes equations. The $x$ and $y$ components are
+ \begin{align*}
+ u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y} &= -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)\\
+ u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y} &= -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu \left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right).
+ \end{align*}
+ Substituting our expression for $u$ and $v$, we get
+ \[
+ Ex g'E g' - E \delta g E x g'' \frac{1}{\delta} = -\frac{1}{\rho} \frac{\partial p}{\partial x} + \nu E xg''' \frac{1}{\delta^2}.
+ \]
+ Some tidying up gives
+ \[
+ E^2 x(g'^2 - gg'') = -\frac{1}{\rho} \frac{\partial p}{\partial x} + E^2 x g'''.
+ \]
+ We can do the same thing for the $y$-component, and get
+ \[
+ E\sqrt{\nu E}gg' = -\frac{1}{\rho} \frac{\partial p}{\partial y} - E\sqrt{\nu E}g''.
+ \]
+ So we've got two equations for $g$ and $p$. The trick is to take the $y$ derivative of the first, and $x$ derivative of the second, and we can use that to eliminate the $p$ terms. Then we have
+ \[
+ g'g'' - gg''' = g^{(4)}.
+ \]
+ So we have a single equation for $g$ that we shall solve.
+
+ We now look at our boundary conditions. The no-slip condition gives $\mathbf{u} = \mathbf{0}$ on $y = 0$, i.e.\ at $\eta = 0$. So $g(0) = g'(0) = 0$.
+
+ As $y \to \infty$, the boundary condition on $\mathbf{u}$ gives $g'(\eta) \to 1$ and $g(\eta) \to \eta$ as $\eta \to \infty$.
+
+ All dimensional variables are absorbed into the scaled variables $g$ and $\eta$. So we only have to solve the ODE once. The far field velocity $\mathbf{u} = (Ex, -E y, 0)$ is reached to a very good approximation when
+ \[
+ \eta \gtrsim 1,\quad y \gtrsim \delta = \sqrt{\frac{\nu}{E}}.
+ \]
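+ The notes do not solve this ODE, but it is easy to do numerically. Observe that the equation says $\frac{\d}{\d \eta}\left(g''' + gg'' - g'^2\right) = 0$, and the far-field conditions fix the constant to $-1$, so one integration gives $g''' + gg'' - g'^2 + 1 = 0$. A minimal pure-Python sketch, shooting on the unknown wall value $g''(0)$ (the bracket and integration range are our own choices):

```python
# Stagnation-point (Hiemenz) profile by shooting, pure Python.
# Once-integrated ODE: g''' + g g'' - g'^2 + 1 = 0,
# with g(0) = g'(0) = 0 and g'(inf) = 1.
# March with RK4 and bisect on the unknown wall value s = g''(0).

def rhs(y):
    g, gp, gpp = y
    return (gp, gpp, gp * gp - 1.0 - g * gpp)

def shoot(s, eta_max=6.0, h=0.01):
    """Return g'(eta_max) for initial data (g, g', g'') = (0, 0, s)."""
    y = (0.0, 0.0, s)
    for _ in range(int(eta_max / h)):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = rhs(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
                  for i in range(3))
        if abs(y[1]) > 10.0:   # wrong shot: solution diverging
            break
    return y[1]

lo, hi = 1.0, 1.5              # bracket for g''(0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:       # undershoots the far-field condition
        lo = mid
    else:
        hi = mid
gpp0 = 0.5 * (lo + hi)
print(round(gpp0, 4))
```

+ The classical value of $g''(0)$ is approximately $1.2326$, which sets the wall shear of the profile sketched below.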
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (5, 0) node [right] {$\eta$};
+ \draw [->] (0, -0.5) -- (0, 2.5) node [above] {$g'(\eta)$};
+ \draw (0, 0) .. controls (1, 0) and (1, 2) .. (3, 2) -- (5, 2);
+ \draw [dashed] (0, 2) -- (3, 2);
+
+ \draw [<->] (0, 1) -- (3, 1) node [pos=0.5, fill=white] {$\eta = O(1)$};
+ \end{tikzpicture}
+ \end{center}
+ We get the horizontal velocity profile as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 3) node [above] {$y$};
+
+ \draw (1, 0) arc(270:360:1 and 1.5) -- (2, 3) node [pos=0.5, right] {$Ex$};
+
+ \draw [dashed] (-1, 1.5) -- (4, 1.5);
+
+ \draw [<->] (3, 0) -- (3, 1.5) node [pos=0.5, fill=white] {$\delta$};
+
+ \foreach \x in {0.55, 1.1} {
+ \pgfmathsetmacro\y{sqrt(1 - (1 - \x/1.5)^2)};
+ \draw [-latex'] (1, \x) -- +(\y, 0);
+ }
+ \end{tikzpicture}
+ \end{center}
+ At the scale $\delta$, we get a Reynolds number of
+ \[
+ Re_\delta = \frac{U\delta}{\nu}\sim O(1).
+ \]
+ This is the \emph{boundary layer}. For a larger extrinsic scale $L \gg \delta$, we get
+ \[
+ Re_L = \frac{UL}{\nu} \gg 1.
+ \]
+ When interested in flow on scales much larger than $\delta$, we ignore the region $y < \delta$ (since it is small), and we imagine a rigid boundary at $y = \delta$ at which the no-slip condition does not apply.
+
+ When $Re_L \gg 1$, we solve the Euler equations, namely
+ \begin{align*}
+ \rho \frac{\D \mathbf{u}}{\D t} &= - \nabla p + \mathbf{f}\\
+ \nabla \cdot \mathbf{u} &= 0.
+ \end{align*}
+ We still have a boundary condition --- we don't allow fluid to flow \emph{through} the boundary. So we require
+ \[
+ \mathbf{u}\cdot \mathbf{n} = 0\quad\text{at a stationary rigid boundary}.
+ \]
+ The no-slip condition is no longer satisfied.
+
+It is an exercise for the reader to show that $\mathbf{u} = (Ex, -Ey, 0)$ satisfies the Euler equations in $y > 0$ with a rigid boundary at $y = 0$, with
+ \[
+ p = p_0 - \frac{1}{2} \rho E^2(x^2 + y^2).
+ \]
+ We can plot the curves of constant pressure, as well as the streamlines:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -1) -- (0, 3) node [above] {$y$};
+ \foreach \r in {0.6, 1.2, 1.8, 2.4} {
+ \draw (\r, 0) arc(0:180:\r);
+ }
+ \node [circ] {};
+ \node [anchor = north west] {$p_0$};
+
+ \draw [mred, ->-=0.5, domain=0.167:3] plot (\x, {0.5/\x});
+ \draw [mred, ->-=0.5, domain=0.5:3] plot (\x, {1.5/\x});
+ \draw [mred, ->-=0.5, domain=0.667:3] plot (\x, {2/\x});
+
+ \draw [mred, ->-=0.5, domain=0.167:3] plot ({-\x}, {0.5/\x});
+ \draw [mred, ->-=0.5, domain=0.5:3] plot ({-\x}, {1.5/\x});
+ \draw [mred, ->-=0.5, domain=0.667:3] plot ({-\x}, {2/\x});
+ \end{tikzpicture}
+ \end{center}
+As the flow enters from the top, the pressure keeps increasing, and this slows down the flow. We say the $y$-pressure gradient is \emph{adverse}. As the flow turns and moves out sideways, the pressure \emph{pushes} the flow. So the $x$-pressure gradient is \emph{favorable}.
+
+ At the origin, the velocity is zero, and this is a \emph{stagnation point}. This is also the point of highest pressure. In general, velocity is high at low pressures and low at high pressures.
+\end{eg}
+
+\subsection{Momentum equation for inviscid (\texorpdfstring{$\nu = 0$}{nu = 0}) incompressible fluid}
+We are now going to derive the Navier-Stokes equation in the special case where viscosity is assumed to vanish. We will derive this by considering the change in momentum.
+
+Consider an arbitrary volume $\mathcal{D}$ with boundary $\partial \mathcal{D}$ and outward pointing normal $\mathbf{n}$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, ->-=0.13, ->-=0.4, ->-=0.9] plot [smooth] coordinates {(-3, 0) (-2, -0.3) (-0.5, 0.5) (0.5, 0.5) (2, 1.5)};
+ \draw [mblue, ->-=0.13, ->-=0.4, ->-=0.9] plot [smooth] coordinates {(-3, -1) (-2, -1) (0, -0.5) (2, -1.5)};
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+ \node at (0, 0) {$\mathcal{D}$};
+ \node [above] at (0.3, 1) {$\partial \mathcal{D}$};
+ \draw [->] (1.4, 0.4) -- +(0.4, 0.2666) node [right] {$\mathbf{n}$};
+ \end{tikzpicture}
+\end{center}
+The total momentum of the fluid in $\mathcal{D}$ can change owing to four things:
+\begin{enumerate}
+ \item Momentum flux across the boundary $\partial \mathcal{D}$;
+ \item Surface pressure forces;
+ \item Body forces;
+ \item Viscous surface forces.
+\end{enumerate}
+We will ignore the last one. We can then write the rate of change of the total momentum as
+\[
+ \frac{\d}{\d t} \int_{\mathcal{D}} \rho \mathbf{u} \;\d V = -\int_{\partial \mathcal{D}} \rho \mathbf{u} (\mathbf{u}\cdot \mathbf{n}) \;\d S - \int_{\partial \mathcal{D}} p \mathbf{n}\;\d S + \int_{\mathcal{D}} \mathbf{f}\;\d V.
+\]
+It is helpful to write this in suffix notation. In this case, the equation becomes
+\[
+ \frac{\d}{\d t} \int_{\mathcal{D}} \rho u_i \;\d V = -\int_{\partial \mathcal{D}} \rho u_i u_j n_j \;\d S - \int_{\partial \mathcal{D}} p n_i \;\d S + \int_{\mathcal{D}} f_i \;\d V.
+\]
+Just as in the case of mass conservation, we can use the divergence theorem to write
+\[
+ \int_{\mathcal{D}} \left(\rho \frac{\partial u_i}{\partial t} + \rho \frac{\partial}{\partial x_j} (u_i u_j)\right)\;\d V = \int_{\mathcal{D}} \left(-\frac{\partial p}{\partial x_i} + f_i\right) \;\d V.
+\]
+Since $\mathcal{D}$ is arbitrary, we must have
+\[
+ \rho \frac{\partial u_i}{\partial t} + \rho u_j \frac{\partial u_i}{\partial x_j} + \rho u_i \frac{\partial u_j}{\partial x_j} = -\frac{\partial p}{\partial x_i} + f_i.
+\]
+The last term on the left contains the divergence of $\mathbf{u}$, which vanishes by incompressibility, and the remaining terms on the left combine into the material derivative of $\mathbf{u}$. So we get
+\begin{prop}[Euler momentum equation]
+ \[
+ \rho \frac{\D \mathbf{u}}{\D t} = - \nabla p + \mathbf{f}.
+ \]
+\end{prop}
+This is just the equation we get from the Navier-Stokes equation by ignoring the viscous terms. However, we were able to derive this directly from momentum conservation.
+
+We can further derive some equations from this.
+
+For conservative forces, we can write $\mathbf{f} = -\nabla \chi$, where $\chi$ is a scalar potential. For example, gravity can be given by $\mathbf{f} = \rho \mathbf{g} = \nabla (\rho \mathbf{g}\cdot \mathbf{x})$ (for $\rho$ constant). So $\chi = - \rho \mathbf{g}\cdot \mathbf{x} = g \rho z$ if $\mathbf{g} = (0, 0, -g)$.
+
+In the case of a steady flow, $\frac{\partial \mathbf{u}}{\partial t}$ vanishes. Then the momentum equation becomes
+\[
+ 0 = -\int_{\partial \mathcal{D}} \rho \mathbf{u} (\mathbf{u} \cdot \mathbf{n}) \;\d S - \int_{\partial \mathcal{D}}p \mathbf{n} \;\d S - \int_{\mathcal{D}} \nabla \chi \;\d V.
+\]
+We can then convert this to
+\begin{prop}[Momentum integral for steady flow]
+ \[
+ \int_{\partial \mathcal{D}} (\rho \mathbf{u} (\mathbf{u}\cdot \mathbf{n}) + p \mathbf{n} + \chi \mathbf{n}) \;\d S = 0.
+ \]
+\end{prop}
+Alternatively, we notice the vector identity
+\[
+ \mathbf{u}\times (\nabla \times \mathbf{u}) = \nabla\left(\frac{1}{2}|\mathbf{u}|^2\right) - \mathbf{u}\cdot \nabla \mathbf{u}.
+\]
+We use this to rewrite the Euler momentum equation as
+\[
+ \rho \frac{\partial \mathbf{u}}{\partial t} + \rho \nabla \left(\frac{1}{2}|\mathbf{u}|^2\right) - \rho \mathbf{u}\times (\nabla \times \mathbf{u}) = -\nabla p - \nabla \chi.
+\]
+Dotting with $\mathbf{u}$, the last term on the left vanishes, and we get
+\begin{prop}[Bernoulli's equation]
+ \[
+ \frac{1}{2}\rho \frac{\partial|\mathbf{u}|^2}{\partial t} = -\mathbf{u}\cdot \nabla \left(\frac{1}{2} \rho |\mathbf{u}|^2 + p + \chi\right).
+ \]
+\end{prop}
+Note also that this tells us that high velocity goes with low pressure, and low velocity goes with high pressure.
+
+In the case where we have a steady flow, we know
+\[
+ H = \frac{1}{2}\rho |\mathbf{u}|^2 + p + \chi
+\]
+is constant along streamlines.
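+Indeed, for a steady flow the time derivative in Bernoulli's equation vanishes, leaving

```latex
\[
  \mathbf{u}\cdot \nabla \left(\frac{1}{2}\rho |\mathbf{u}|^2 + p + \chi\right) = \mathbf{u}\cdot \nabla H = 0,
\]
```

+so $H$ does not change in the direction of the flow.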
+
+Even if the flow is not steady, we can still define the value $H$, and then we can integrate Bernoulli's equation over a volume $\mathcal{D}$ to obtain
+\[
+ \frac{\d}{\d t}\int_{\mathcal{D}} \frac{1}{2}\rho |\mathbf{u}|^2 \;\d V = -\int_{\partial \mathcal{D}} H \mathbf{u}\cdot \mathbf{n} \;\d S.
+\]
+So $H$ is the transportable energy of the flow.
+
+\begin{eg}
+ Consider a pipe with a constriction. Ignore the vertical pipes for the moment --- they are there just so that we can measure the pressure in the fluid, as we will later see.
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (-3, 2) -- (-0.2, 2) -- (-0.2, 2.8) arc(180:360:0.2 and 0.05) -- (0.2, 3) -- (0.2, 2) .. controls (1.2, 2) and (0.8, 1.5) .. (1.8, 1.5) -- (1.8, 2.1) arc(180:360:0.2 and 0.05) -- (2.2, 3) -- (2.2, 1.5) .. controls (3.2, 1.5) and (3, 2) .. (4, 2) -- (4, 0) .. controls (3, 0) and (3.2, 0.5) .. (2.2, 0.5) -- (1.8, 0.5) .. controls (0.8, 0.5) and (1.2, 0) .. (0.2, 0) -- (-3, 0) -- cycle;
+
+ \draw [dashed] (0.2, 2.1) -- (1.8, 2.1);
+ \draw [dashed] (0.2, 2.8) -- (1.8, 2.8);
+
+ \draw [<->] (1, 2.1) -- (1, 2.8) node [pos=0.5, right] {$h$};
+
+ \draw (-3, 2) -- (-0.2, 2) -- (-0.2, 3);
+ \draw (0.2, 3) -- (0.2, 2) .. controls (1.2, 2) and (0.8, 1.5) .. (1.8, 1.5) -- (1.8, 3);
+ \draw (-3, 0) -- (0.2, 0) .. controls (1.2, 0) and (0.8, 0.5) .. (1.8, 0.5) -- (2.2, 0.5) .. controls (3.2, 0.5) and (3, 0) .. (4, 0);
+ \draw (2.2, 3) -- (2.2, 1.5) .. controls (3.2, 1.5) and (3, 2) .. (4, 2);
+
+ \draw [gray, ->-=0.5] (-3, 1) -- (4, 1);
+
+ \node [align=left, left] at (-3, 1) {$U$\\$P$\\$A$};
+ \node [below] at (2, 1) {$u, p, a$};
+
+ \draw [->] (-2.5, 1.5) -- +(1, 0);
+ \draw [->] (-2.5, 0.5) -- +(1, 0);
+ \end{tikzpicture}
+ \end{center}
+ Suppose at the left, we have a uniform speed $U$, area $A$ and pressure $P$. In the middle constricted area, we have speed $u$, area $a$ and pressure $p$.
+
+ By the conservation of mass, we have
+ \[
+ q = UA = ua.
+ \]
+ We apply Bernoulli along the central streamline, using the fact that $H$ is constant along streamlines. We can omit the body force $\chi = \rho gy$, since this is the same at both locations. Then we get
+ \[
+ \frac{1}{2}\rho U^2 + P = \frac{1}{2} \rho u^2 + p.
+ \]
+ Replacing our $U$ with $q/A$ and $u$ with $q/a$, we obtain
+ \[
+ \frac{1}{2} \rho \left(\frac{q}{A}\right)^2 + P = \frac{1}{2} \rho \left(\frac{q}{a}\right)^2 + p.
+ \]
+ Rearranging gives
+ \[
+ \left(\frac{1}{A^2} - \frac{1}{a^2}\right)q^2 = \frac{2(p - P)}{\rho}.
+ \]
+ We see there is a difference in pressure due to the difference in area. This is balanced by the difference in heights $h$. Using the $y$-momentum equation, we get
+ \[
+ \left(\frac{1}{A^2} - \frac{1}{a^2}\right)q^2 = \frac{2(p - P)}{\rho} = -2gh.
+ \]
+ Then we obtain
+ \[
+ q = \sqrt{2gh} \frac{Aa}{\sqrt{A^2 - a^2}}.
+ \]
+ Therefore we can measure $h$ in order to find out the flow rate. This allows us to measure fluid velocity just by creating a constriction and then putting in some pipes.
+\end{eg}
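+The formula is easy to sanity-check numerically; here is a small sketch with made-up meter dimensions (the values below are our own choices, not from the course):

```python
from math import pi, sqrt

# Made-up Venturi-meter dimensions, for illustration only.
g = 9.81              # gravity [m/s^2]
A = pi * 0.05**2      # upstream cross-sectional area [m^2]
a = pi * 0.02**2      # constricted cross-sectional area [m^2]
h = 0.10              # height difference in the side tubes [m]

# Flow rate from q = sqrt(2 g h) A a / sqrt(A^2 - a^2).
q = sqrt(2 * g * h) * A * a / sqrt(A**2 - a**2)

# Consistency with the pressure relation (1/A^2 - 1/a^2) q^2 = -2 g h.
residual = (1 / A**2 - 1 / a**2) * q**2 + 2 * g * h
print(q, residual)    # residual is zero up to rounding
```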
+
+\begin{eg}[Force on a fire hose nozzle]
+ Suppose we have a fire hose nozzle like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0.75) -- (-0.5, 0.75) .. controls (0, 0.75) and (0.5, 0.25) .. (1, 0.25) -- (2, 0.25);
+ \draw (-3, -0.75) -- (-0.5, -0.75) .. controls (0, -0.75) and (0.5, -0.25) .. (1, -0.25) -- (2, -0.25);
+
+ \draw [->] (-3, 0.5) -- +(0.5, 0);
+ \draw [->] (-3, 0) -- +(0.5, 0);
+ \draw [->] (-3, -0.5) -- +(0.5, 0);
+
+ \draw [->] (1.5, 0) -- +(0.5, 0);
+
+ \draw [mred, dashed] (-3, 0.7) -- (-0.5, 0.7) node [pos=0.5, above] {$(2)$}.. controls (0, 0.7) and (0.5, 0.2) .. (1, 0.2) node [pos=0.5, anchor = south west] {$(3)$} -- (2, 0.2) node [pos=0.5, above] {$(4)$} -- (2, -0.2) node [pos=0.5, right] {$(5)$} -- (1, -0.2) .. controls (0.5, -0.2) and (0, -0.7) .. (-0.5, -0.7) -- (-3, -0.7) -- cycle node [pos=0.5, left] {$(1)$};
+
+ \node [align=left] at (-4, 0) {$U$\\$A$\\$P$};
+ \node at (3.7, 0) {$u, a, p = 0$};
+ \end{tikzpicture}
+ \end{center}
+ We consider the steady-flow equation and integrate along the surface indicated above. We integrate each section separately. The end $(1)$ contributes
+ \[
+ \rho U(-U)A - PA.
+ \]
+ On $(2)$, everything vanishes. On $(3)$, the first term vanishes since the velocity is parallel to the surface. Then we get a contribution of
+ \[
+ 0 + \int_{\mathrm{nozzle}} p \mathbf{n}\cdot \hat{\mathbf{x}}\;\d S.\tag{3}
+ \]
+ Similarly, everything in $(4)$ vanishes. Finally, on $(5)$, noting that $p = 0$, we get
+ \[
+ \rho u^2 a.
+ \]
+ By the steady flow equation, we know these all sum to zero. Hence, the force on the nozzle is just
+ \[
+ F = \int_{\mathrm{nozzle}} p\mathbf{n}\cdot \hat{\mathbf{x}}\;\d S = \rho AU^2 - \rho au^2 + PA.
+ \]
+ We can again apply Bernoulli along a streamline in the middle, which says
+ \[
+ \frac{1}{2}\rho U^2 + P = \frac{1}{2} \rho u^2.
+ \]
+ So we can get
+ \[
+ F = \rho AU^2 - \rho au^2 + \frac{1}{2} \rho A(u^2 - U^2) = \frac{1}{2} \rho\frac{A}{a^2}q^2 \left(1 - \frac{a}{A}\right)^2.
+ \]
+ Let's now put some numbers in. Suppose $A = (0.1)^2 \pi\,\si{\meter\squared}$ and $a = (0.05)^2 \pi\,\si{\meter\squared}$. So we get
+ \[
+ \frac{A}{a} = 4.
+ \]
+ A typical fire hose has a flow rate of
+ \[
+ q = \SI{0.01}{\meter\cubed\per\second}.
+ \]
+ So we get
+ \[
+ F = \frac{1}{2} \cdot 1000 \cdot \frac{1600}{\pi} \cdot 10^{-4} \cdot \left(\frac{3}{4}\right)^2 \approx \SI{14}{N}.
+ \]
+\end{eg}
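+As an arithmetic check of the final answer, we can evaluate the force with the numbers from the example:

```python
from math import pi

# Numbers from the fire-hose example above.
rho = 1000.0            # water [kg/m^3]
A = pi * 0.1**2         # hose cross-section [m^2]
a = pi * 0.05**2        # nozzle exit cross-section [m^2]
q = 0.01                # flow rate [m^3/s]

F = 0.5 * rho * (A / a**2) * q**2 * (1 - a / A)**2
print(round(F, 1))      # force on the nozzle [N], about 14.3
```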
+
+\subsection{Linear flows}
+Suppose we have a favorite point $\mathbf{x}_0$. Near the point $\mathbf{x}_0$, it turns out we can break up the flow into three parts --- uniform flow, pure strain, and pure rotation.
+
+To do this, we take the Taylor expansion of $\mathbf{u}$ about $\mathbf{x}_0$:
+\begin{align*}
+ \mathbf{u}(\mathbf{x}) &= \mathbf{u}(\mathbf{x}_0) + (\mathbf{x} - \mathbf{x}_0) \cdot \nabla \mathbf{u}(\mathbf{x}_0) + \cdots\\
+ &= \mathbf{u}_0 + \mathbf{r}\cdot \nabla \mathbf{u}_0,
+\end{align*}
+with $\mathbf{r} = \mathbf{x} - \mathbf{x}_0$ and $\mathbf{u}_0 = \mathbf{u}(\mathbf{x}_0)$. This is a linear approximation to the flow field.
+
+We can do something more about the $\nabla \mathbf{u}$ term. This is a rank-2 tensor, i.e.\ a matrix, and we can split it into its symmetric and antisymmetric parts:
+\[
+ \nabla \mathbf{u} = \frac{\partial u_i}{\partial x_j} = E_{ij} + \Omega_{ij} = E + \Omega,
+\]
+where
+\begin{align*}
+ E_{ij} &= \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right),\\
+ \Omega_{ij} &=\frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} - \frac{\partial u_j}{\partial x_i}\right).
+\end{align*}
+We can write the second part in terms of the vorticity. Recall we have defined the vorticity as
+\[
+ \boldsymbol\omega = \nabla \times \mathbf{u}.
+\]
+Then we have
+\[
+ (\boldsymbol\omega \times \mathbf{r})_i = [(\nabla \times \mathbf{u}) \times \mathbf{r}]_i = r_j \left(\frac{\partial u_i}{\partial x_j} - \frac{\partial u_j}{\partial x_i}\right) = 2 \Omega_{ij}r_j.
+\]
+So we can write
+\[
+ \mathbf{u} = \mathbf{u}_0 + E \mathbf{r} + \frac{1}{2} \boldsymbol\omega \times \mathbf{r}.
+\]
+The first component is uniform flow; the second is the strain field; and the last is the rotation component.
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0,0.5,1,1.5} {
+ \draw [->-=0.6] (\x, 0) -- +(1, 2);
+ }
+ \node at (1.25, -0.5) {uniform flow};
+
+ \begin{scope}[shift={(5, 1)}, scale=0.5]
+ \draw [->-=0.85, -<-=0.15] (-2, -2) -- (2, 2);
+ \draw [->-=0.25, -<-=0.75] (-2, 2) -- (2, -2);
+
+ \draw [->- = 0.3, ->- = 0.88] (-2, 1.2) .. controls (-0.8, 0.3) and (-0.8, -0.3) .. (-2, -1.2);
+ \draw [->- = 0.3, ->- = 0.88] (2, -1.2) .. controls (0.8, -0.3) and (0.8, 0.3) .. (2, 1.2);
+
+ \draw [->- = 0.3, ->- = 0.88] (-1.2, 2) .. controls (-0.3, 0.8) and (0.3, 0.8) .. (1.2, 2);
+ \draw [->- = 0.3, ->- = 0.88] (1.2, -2) .. controls (0.3, -0.8) and (-0.3, -0.8) .. (-1.2, -2);
+
+ \node [circ] at (0, 0) {};
+ \end{scope}
+ \node at (5, -0.5) {pure strain};
+
+ \node [circ] at (8.5, 1) {};
+
+ \draw [->] (9, 1) arc(0:270:0.5);
+
+ \node at (8.5, -0.5) {pure rotation};
+ \end{tikzpicture}
+\end{center}
+Since we have an incompressible fluid, we have $\nabla \cdot \mathbf{u} = 0$. So $E$ has zero trace, i.e.\ if
+\[
+ E =
+ \begin{pmatrix}
+ E_1 & 0 & 0\\
+ 0 & E_2 & 0\\
+ 0 & 0 & E_3
+ \end{pmatrix},
+\]
+then $E_1 + E_2 + E_3 = 0$. This justifies the picture above.
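+The decomposition can be verified numerically. A small pure-Python sketch, using an arbitrary trace-free velocity gradient of our own choosing:

```python
# Split a sample velocity gradient G_ij = du_i/dx_j into its symmetric
# part E (strain) and antisymmetric part W (rotation), and verify
#     G r = E r + (1/2) omega x r.

# An arbitrary trace-free (incompressible) velocity gradient.
G = [[1.0, 2.0, 0.0],
     [0.0, -3.0, 1.0],
     [4.0, 0.0, 2.0]]

E = [[0.5 * (G[i][j] + G[j][i]) for j in range(3)] for i in range(3)]
W = [[0.5 * (G[i][j] - G[j][i]) for j in range(3)] for i in range(3)]

# Vorticity omega = curl u, in terms of the gradient entries.
omega = (G[2][1] - G[1][2], G[0][2] - G[2][0], G[1][0] - G[0][1])

r = (0.3, -0.2, 0.5)   # a displacement from x_0
Gr = [sum(G[i][j] * r[j] for j in range(3)) for i in range(3)]
Er = [sum(E[i][j] * r[j] for j in range(3)) for i in range(3)]
half_wxr = [0.5 * (omega[1] * r[2] - omega[2] * r[1]),
            0.5 * (omega[2] * r[0] - omega[0] * r[2]),
            0.5 * (omega[0] * r[1] - omega[1] * r[0])]

print(sum(E[i][i] for i in range(3)))   # trace of E: 0 (incompressible)
print(all(abs(Gr[i] - Er[i] - half_wxr[i]) < 1e-12 for i in range(3)))
```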
+
+\subsection{Vorticity equation}
+The Navier-Stokes equation tells us how the velocity changes with time. Can we obtain a similar equation for the vorticity? Consider the Navier-Stokes equation for a viscous fluid,
+\[
+ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot \nabla \mathbf{u}\right) = -\nabla p - \nabla \chi + \mu \nabla^2 \mathbf{u}.
+\]
+We use a vector identity
+\[
+ \mathbf{u}\cdot \nabla \mathbf{u} = \frac{1}{2}\nabla |\mathbf{u}|^2 - \mathbf{u}\times \boldsymbol\omega,
+\]
+and take the curl of the above equation to obtain
+\[
+ \frac{\partial \boldsymbol\omega}{\partial t} - \nabla \times (\mathbf{u}\times \boldsymbol\omega) = \nu \nabla^2 \boldsymbol\omega,
+\]
+exploiting the fact that the curl of a gradient vanishes. We now use the fact that
+\[
+ \nabla \times (\mathbf{u}\times \boldsymbol\omega) = (\nabla \cdot \boldsymbol\omega) \mathbf{u} + \boldsymbol\omega \cdot \nabla \mathbf{u} - (\nabla \cdot \mathbf{u}) \boldsymbol\omega - \mathbf{u}\cdot \nabla \boldsymbol\omega.
+\]
+The divergence of a curl vanishes, and so does $\nabla \cdot \mathbf{u}$ by incompressibility. So we get
+\[
+ \frac{\partial \boldsymbol\omega}{\partial t} + \mathbf{u}\cdot \nabla \boldsymbol\omega - \boldsymbol\omega \cdot \nabla \mathbf{u} = \nu \nabla^2 \boldsymbol \omega.
+\]
+Now we use the definition of the material derivative, and rearrange terms to obtain
+\begin{prop}[Vorticity equation]
+ \[
+ \frac{\D \boldsymbol\omega}{\D t} = \boldsymbol\omega \cdot \nabla \mathbf{u} + \nu \nabla^2 \boldsymbol\omega.
+ \]
+\end{prop}
+Hence, the rate of change of vorticity of a fluid particle is caused by $\boldsymbol\omega \cdot \nabla \mathbf{u}$ (amplification by stretching or twisting) and $\nu \nabla^2 \boldsymbol\omega$ (dissipation of vorticity by viscosity). The second term also allows for generation of vorticity at boundaries by the no-slip condition. This will be explained shortly.
+
+Consider an inviscid fluid, where $\nu = 0$. So we are left with
+\[
+ \frac{\D \boldsymbol\omega}{\D t} = \boldsymbol\omega \cdot \nabla \mathbf{u}.
+\]
+So if we take the dot product with $\boldsymbol\omega$, we get
+\begin{align*}
+ \frac{\D}{\D t}\left(\frac{1}{2} |\boldsymbol\omega|^2\right) &= \boldsymbol\omega \cdot (\boldsymbol\omega \cdot \nabla \mathbf{u})\\
+ &= \boldsymbol\omega (E + \Omega) \boldsymbol\omega\\
+ &= \omega_i (E_{ij} + \Omega_{ij}) \omega_j.
+\end{align*}
+Since $\omega_i \omega_j$ is symmetric, while $\Omega_{ij}$ is antisymmetric, the second term vanishes. Since $E$ is symmetric, we can work in its principal axes, where $E$ is diagonal. So we get
+\[
+ \frac{\D}{\D t} \left(\frac{1}{2} |\boldsymbol\omega|^2\right) = E_1 \omega_1^2 + E_2 \omega_2^2 + E_3 \omega_3^2.
+\]
+Wlog, we assume $E_1 > 0$ (possible since the $E_i$'s sum to $0$), and imagine $E_2, E_3 < 0$. So the flow is stretched in the $\mathbf{e}_1$ direction and compressed radially. We consider what happens to a vortex in the direction of the stretching, $\boldsymbol\omega = (\omega_1, 0, 0)$. We then get
+\[
+ \frac{\D}{\D t}\left(\frac{1}{2}\omega_1^2\right) = E_1 \omega_1^2.
+\]
+So the vorticity grows exponentially. This is vorticity amplification by stretching. This is not really unexpected --- as the fluid particles have to get closer to the axis of rotation, they have to rotate faster, by the conservation of angular momentum.
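+Explicitly, if $E_1$ is constant following the particle (an idealization for illustration), the equation $\frac{\D \omega_1}{\D t} = E_1 \omega_1$ integrates to

```latex
\[
  \omega_1(t) = \omega_1(0)\, e^{E_1 t},
\]
```

+which indeed grows exponentially.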
+
+This is in general true --- vorticity increases as the length of a material line increases. To show this, we consider two neighbouring (Lagrangian) fluid particles, $\mathbf{x}_1(t), \mathbf{x}_2(t)$. We let $\delta\boldsymbol\ell(t) = \mathbf{x}_2(t) - \mathbf{x}_1(t)$. Note that
+\[
+ \frac{\D \mathbf{x}_2(t)}{\D t} = \mathbf{u}(\mathbf{x}_2),\quad \frac{\D \mathbf{x}_1(t)}{\D t} = \mathbf{u}(\mathbf{x}_1).
+\]
+Therefore
+\[
+ \frac{\D \delta \boldsymbol\ell(t)}{\D t} = \mathbf{u}(\mathbf{x}_2) - \mathbf{u}(\mathbf{x}_1) = \delta \boldsymbol\ell \cdot \nabla \mathbf{u},
+\]
+by taking a first-order Taylor expansion. This is exactly the same equation as that for $\boldsymbol\omega$ in an inviscid fluid. So vorticity increases as the length of a material line increases.
+
+Note that the vorticity is generated by viscous forces near boundaries.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (0, 0) rectangle (6, -1);
+
+ \draw (2, 0) -- (2, 2);
+ \draw (2, 0) parabola (4, 2);
+
+ \foreach \x in {0.5,1,1.5} {
+ \pgfmathsetmacro\y{sqrt(2 * \x)};
+ \draw [->] (2, \x) -- +(\y, 0);
+ }
+ \draw [-latex'] (4.5, 1.5) arc(90:-180:0.5);
+ \node at (5, 1) [right] {$\omega$};
+ \end{tikzpicture}
+\end{center}
+When we make the inviscid approximation, then we are losing this source of vorticity, and we sometimes have to be careful.
+
+We have so far assumed the density is constant. If the fluid has a non-uniform density $\rho(\mathbf{x})$, then it turns out
+\[
+ \frac{\D \boldsymbol\omega}{\D t} = \boldsymbol\omega \cdot \nabla \mathbf{u} + \frac{1}{\rho^2} \nabla \rho \times \nabla p.
+\]
+This is what happens in convection flow. The difference in density drives a motion of the fluid. For example, if we have a horizontal density gradient and a vertical pressure gradient (e.g.\ we have a heater at the end of a room with air pressure varying according to height), then we get the following vorticity:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 3) -- (0, 0) node [below] {$\nabla p$};
+ \draw [->] (-0.5, 2.5) -- +(3, 0) node [above] {$\nabla \rho$};
+
+ \node at (-3, 1.5) {hot, light};
+ \node at (3, 1.5) {cold, dense};
+
+ \draw [-latex'] (1, 2) arc(90:-180:0.5);
+ \node at (1.5, 1.5) [right] {$\omega$};
+ \node at (1, 1.5) {$\times$};
+ \end{tikzpicture}
+\end{center}
+
+\section{Inviscid irrotational flow}
+From now on, we are going to make a further simplifying assumption. We are going to assume that our flow is incompressible, inviscid, \emph{and} irrotational.
+
+We first check this is a sensible assumption, in that if we start off with an irrotational flow, then the flow will continue being irrotational. Suppose we have $\nabla \times \mathbf{u} = \mathbf{0}$ at $t = 0$, and the fluid is inviscid and homogeneous (i.e.\ $\rho$ is constant). Then by the vorticity equation
+\[
+ \frac{\D \boldsymbol\omega}{\D t} = \boldsymbol\omega \cdot \nabla \mathbf{u} = 0.
+\]
+So the vorticity will keep on being zero.
+
+In this case, we can write $\mathbf{u} = \nabla \phi$ for some $\phi$.
+\begin{defi}[Velocity potential]
+ The \emph{velocity potential} of a velocity $\mathbf{u}$ is a scalar function $\phi$ such that $\mathbf{u} = \nabla \phi$.
+\end{defi}
+Note that since we are lazy, we do not have a negative sign in front of $\nabla \phi$, unlike, say, gravity.
+
+If we are incompressible, then $\nabla \cdot \mathbf{u} = 0$ implies
+\[
+ \nabla^2 \phi = 0.
+\]
+So the potential satisfies Laplace's equation.
+
+\begin{defi}[Potential flow]
+ A \emph{potential flow} is a flow whose velocity potential satisfies Laplace's equation.
+\end{defi}
+
+A key property of Laplace's equation is that it is linear. So we can add two solutions up to get a third solution.
+
+\subsection{Three-dimensional potential flows}
+We will not consider Laplace's equation in full generality. Instead, we will consider some solutions with symmetry. We will thus use spherical coordinates $(r, \theta, \varphi)$. Then
+\[
+ \nabla^2 \phi = \frac{1}{r^2} \pd{r} \left(r^2 \frac{\partial \phi}{\partial r}\right) + \frac{1}{r^2 \sin \theta} \pd{\theta} \left(\sin \theta\frac{\partial \phi}{\partial \theta}\right) + \frac{1}{r^2 \sin^2 \theta} \frac{\partial^2 \phi}{\partial \varphi^2}.
+\]
+It is also useful to know what the gradient is. This is given by
+\[
+ \mathbf{u} = \nabla \phi = \left(\frac{\partial \phi}{\partial r}, \frac{1}{r} \frac{\partial \phi}{\partial \theta}, \frac{1}{r \sin \theta} \frac{\partial \phi}{\partial \varphi}\right).
+\]
+We start off with the simplest case. The simplest thing we can possibly imagine is if $\phi = \phi(r)$ depends only on $r$. So the velocity is purely radial. Then Laplace's equation implies
+\[
+ \frac{\partial}{\partial r}\left(r^2 \frac{\partial \phi}{\partial r}\right) = 0.
+\]
+So we know
+\[
+ \frac{\partial \phi}{\partial r} = \frac{A}{r^2}
+\]
+for some constant $A$. So
+\[
+ \phi = -\frac{A}{r} + B,
+\]
+where $B$ is yet another constant. Since we only care about the gradient $\nabla \phi$, we can wlog $B = 0$. So the only possible velocity potential is
+\[
+ \phi = -\frac{A}{r}.
+\]
+Then the speed, given by $\frac{\partial \phi}{\partial r}$, falls off as $\frac{1}{r^2}$.
+
+What is the physical significance of the factor $A$? Consider the volume flux $q$ across the surface of the sphere $r = a$. Then
+\[
+ q = \int_S \mathbf{u}\cdot \mathbf{n} \;\d S = \int_S u_r \;\d S = \int_S \frac{\partial \phi}{\partial r} \;\d S = \int_S \frac{A}{a^2}\;\d S = 4\pi A.
+\]
+So we can write
+\[
+ \phi = -\frac{q}{4\pi r}.
+\]
+When $q > 0$, this corresponds to a point source of fluid. When $q < 0$, this is a point sink of fluid.
+
+We can also derive this solution directly, using incompressibility. Since the flow is incompressible, the flux through any sphere containing $0$ should be constant. Since the surface area increases as $4\pi r^2$, the velocity must drop as $\frac{1}{r^2}$, in agreement with what we obtained above.
+
+Notice that we have
+\[
+ \nabla^2 \phi = q \delta(\mathbf{x}).
+\]
+So $\phi$ is, up to the constant factor $q$, the Green's function for the Laplacian.
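+We can also verify numerically that $\phi$ is harmonic away from the origin, by evaluating a seven-point finite-difference Laplacian at a test point (the source strength and test point below are arbitrary choices):

```python
# Finite-difference check that phi = -q/(4 pi r) is harmonic away from
# the origin: the 7-point Laplacian should vanish there (up to O(h^2)).
from math import pi, sqrt

q = 2.0   # arbitrary source strength

def phi(x, y, z):
    return -q / (4 * pi * sqrt(x * x + y * y + z * z))

x, y, z, h = 0.7, -0.4, 0.5, 1e-3   # arbitrary test point, step size
lap = (phi(x + h, y, z) + phi(x - h, y, z)
       + phi(x, y + h, z) + phi(x, y - h, z)
       + phi(x, y, z + h) + phi(x, y, z - h)
       - 6 * phi(x, y, z)) / h**2
print(abs(lap))   # essentially zero (truncation error only)
```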
+
+That was not too interesting. We can consider a more general solution, where $\phi$ depends on $r$ and $\theta$ but not $\varphi$. Then Laplace's equation becomes
+\[
+ \nabla^2 \phi = \frac{1}{r^2} \pd{r} \left(r^2 \frac{\partial \phi}{\partial r}\right) + \frac{1}{r^2 \sin \theta} \pd{\theta} \left(\sin \theta\frac{\partial \phi}{\partial \theta}\right) = 0.
+\]
+As we know from IB Methods, we can use Legendre polynomials to write the solution as
+\[
+ \phi = \sum_{n = 0}^\infty (A_n r^n + B_n r^{-n - 1})P_n(\cos \theta).
+\]
+We then have
+\[
+ \mathbf{u} =
+ \left(\frac{\partial \phi}{\partial r}, \frac{1}{r}\frac{\partial \phi}{\partial \theta},0\right).
+\]
+\begin{eg}
+ We can look at uniform flow past a sphere of radius $a$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle [radius=1];
+ \draw [->] (-3, 0.7) -- +(1, 0);
+ \draw [->] (-3, 0) node [left] {$U$} -- +(1, 0);
+ \draw [->] (-3, -0.7) -- +(1, 0);
+
+ \draw [->] (0, 0) -- (1.5, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (1, 1) node [right] {$r$};
+ \draw (0.4, 0) arc(0:45:0.4) node [pos=0.7, right] {$\theta$};
+ \end{tikzpicture}
+ \end{center}
+ We suppose the upstream flow is $\mathbf{u} = U \hat{\mathbf{x}}$. So
+ \[
+ \phi = Ux = Ur\cos \theta.
+ \]
+ So we need to solve
+ \begin{align*}
+ \nabla^2 \phi &= 0 & r &>a\\
+ \phi&\to Ur \cos \theta & r &\to \infty\\
+ \frac{\partial \phi}{\partial r}&= 0 & r&=a.
+ \end{align*}
+ The last condition is there to ensure no fluid flows into the sphere, i.e.\ $\mathbf{u}\cdot \mathbf{n} = 0$, for $\mathbf{n}$ the outward normal.
+
+ Since $P_1(\cos \theta) = \cos \theta$, and the $P_n$ are orthogonal, our boundary conditions at infinity require $\phi$ to be of the form
+ \[
+ \phi = \left(Ar + \frac{B}{r^2}\right) \cos \theta.
+ \]
+ We now just apply the two boundary conditions. The condition that $\phi \to Ur \cos \theta$ tells us $A = U$, and the other condition tells us
+ \[
+ A - \frac{2B}{a^3} = 0.
+ \]
+ So we get
+ \[
+ \phi = U\left(r + \frac{a^3}{2r^2}\right) \cos \theta.
+ \]
+ We can interpret $Ur \cos \theta$ as the uniform flow, and $U \frac{a^3}{2r^2} \cos \theta$ as the dipole response due to the sphere.
+
+ What is the velocity and pressure? We can compute the velocity to be
+ \begin{align*}
+ u_r &= \frac{\partial \phi}{\partial r} = U\left(1 - \frac{a^3}{r^3}\right) \cos \theta\\
+ u_\theta &= \frac{1}{r} \frac{\partial \phi}{\partial \theta} = -U\left(1 + \frac{a^3}{2r^3}\right)\sin \theta.
+ \end{align*}
+ We notice that $u_r = u_\theta = 0$ when $\theta = 0, \pi$ and $r = a$.
+
+ At the top and bottom of the sphere, where $\theta = \pm \frac{\pi}{2}$ and $r = a$, we get
+ \[
+ u_r = 0,\quad u_\theta = \mp \frac{3U}{2}.
+ \]
+ So the flow speed there is $\frac{3U}{2}$, faster than the flow at infinity. This is why it is windier at the top of a hill than below.
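+ These expressions are easy to check by differentiating $\phi$ numerically; a small sketch with arbitrary sample values of $U$ and $a$:

```python
from math import cos, pi

# Differentiate phi = U (r + a^3/(2 r^2)) cos(theta) numerically and
# check u_r = 0 on r = a, and u_theta = -3U/2 at theta = pi/2, r = a.
U, a, h = 2.0, 1.0, 1e-6   # arbitrary sample values, FD step

def phi(r, th):
    return U * (r + a**3 / (2 * r**2)) * cos(th)

def u_r(r, th):
    return (phi(r + h, th) - phi(r - h, th)) / (2 * h)

def u_theta(r, th):
    return (phi(r, th + h) - phi(r, th - h)) / (2 * h) / r

print(u_r(a, 0.7))          # approximately 0 on the sphere
print(u_theta(a, pi / 2))   # approximately -3U/2
```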
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle [radius=1];
+ \draw [->-=0.2, ->-=0.85] (-3, 0) -- (3, 0);
+ \node [circ] at (-1, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [anchor = south west] at (-1, 0) {$A'$};
+ \node [anchor = south east] at (1, 0) {$A$};
+ \node [circ] at (0, -1) {};
+ \node [circ] at (0, 1) {};
+ \node [above] at (0, -1) {$B$};
+ \node [below] at (0, 1) {$B'$};
+
+ \draw [->-=0.23, ->-=0.8] (-3, 0.2) .. controls (-1, 0.2) and (-1, 1.1) .. (0, 1.1) .. controls (1, 1.1) and (1, 0.2) .. (3, 0.2);
+ \draw [->-=0.3, ->-=0.73] (-3, 0.4) .. controls (-1, 0.4) and (-1, 1.2) .. (0, 1.2) .. controls (1, 1.2) and (1, 0.4) .. (3, 0.4);
+ \draw [->-=0.23, ->-=0.8] (-3, -0.2) .. controls (-1, -0.2) and (-1, -1.1) .. (0, -1.1) .. controls (1, -1.1) and (1, -0.2) .. (3, -0.2);
+ \draw [->-=0.3, ->-=0.73] (-3, -0.4) .. controls (-1, -0.4) and (-1, -1.2) .. (0, -1.2) .. controls (1, -1.2) and (1, -0.4) .. (3, -0.4);
+ \end{tikzpicture}
+ \end{center}
+ Note that the streamlines are closer to each other near the top since the velocity is faster.
+
+ To obtain the pressure on the surface of the sphere, we apply Bernoulli's equation on a streamline to the surface $(a, \theta)$.
+
+ Comparing with what happens at infinity, we get
+ \[
+ p_\infty + \frac{1}{2} \rho U^2 = p + \frac{1}{2}\rho U^2 \frac{9}{4} \sin^2 \theta.
+ \]
+ Thus, the pressure at the surface is
+ \[
+ p = p_\infty + \frac{1}{2}\rho U^2 \left(1 - \frac{9}{4} \sin^2 \theta\right).
+ \]
+ At $A$ and $A'$, we have
+ \[
+ p = p_\infty + \frac{1}{2}\rho U^2.
+ \]
+ At $B$ and $B'$, we find
+ \[
+ p = p_\infty - \frac{5}{8} \rho U^2.
+ \]
+ Note that the pressure is a function of $\sin^2 \theta$. So the pressure at the back of the sphere is exactly the same as that at the front. Thus, if we integrate the pressure around the whole surface, we get $0$. So the fluid exerts no net force on the sphere! This is d'Alembert's paradox. This is, of course, because we have ignored viscosity.
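+ We can confirm this numerically by integrating the surface pressure over the sphere (the values of $\rho$, $U$, $a$ and $p_\infty$ below are arbitrary; the axial force vanishes regardless):

```python
from math import sin, cos, pi

# Arbitrary sample values; the result F_x = 0 is independent of them.
rho, U, a, p_inf = 1000.0, 3.0, 0.5, 1.0e5

def p(th):
    # Surface pressure p = p_inf + (1/2) rho U^2 (1 - (9/4) sin^2 theta).
    return p_inf + 0.5 * rho * U**2 * (1.0 - 2.25 * sin(th)**2)

# Axial force F_x = -int p n_x dS, with n_x = cos(theta) and
# dS = 2 pi a^2 sin(theta) dtheta; midpoint-rule quadrature.
N = 200000
Fx = 0.0
for k in range(N):
    th = (k + 0.5) * pi / N
    Fx -= p(th) * cos(th) * 2 * pi * a**2 * sin(th) * (pi / N)
print(abs(Fx))   # tiny compared with rho U^2 a^2
```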
+
+ In practice, for a solid sphere in a viscous fluid at high Reynolds numbers, the flow looks very similar upstream, but at some point after passing the sphere, the flow separates and forms a \emph{wake} behind the sphere. At the wake, the pressure is approximately $p_\infty$. The separation is caused by the adverse pressure gradient at the rear.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle [radius=1];
+ \draw [->-=0.6] (-3, 0) -- (-1, 0);
+
+ \draw (0.707, 0.707) -- +(2, 1);
+ \draw (0.707, -0.707) -- +(2, -1);
+
+ \draw [->-=0.23, ->-=0.8] (-3, 0.2) .. controls (-1, 0.2) and (-1, 1.1) .. (0, 1.1) .. controls (1, 1.1) and (0.707, 0.907) .. (2.707, 1.907);
+ \draw [->-=0.3, ->-=0.73] (-3, 0.4) .. controls (-1, 0.4) and (-1, 1.2) .. (0, 1.2) .. controls (1, 1.2) and (0.707, 1.107) .. (2.707, 2.107);
+ \draw [->-=0.23, ->-=0.8] (-3, -0.2) .. controls (-1, -0.2) and (-1, -1.1) .. (0, -1.1) .. controls (1, -1.1) and (0.707, -0.907) .. (2.707, -1.907);
+ \draw [->-=0.3, ->-=0.73] (-3, -0.4) .. controls (-1, -0.4) and (-1, -1.2) .. (0, -1.2) .. controls (1, -1.2) and (0.707, -1.107) .. (2.707, -2.107);
+
+ \node at (2, 0) {wake};
+ \end{tikzpicture}
+ \end{center}
+ Empirically, we find the force is
+ \[
+ F = C_D \frac{1}{2} \rho U^2\pi a^2,
+ \]
+ where $C_D$ is a dimensionless drag coefficient. This is, in general, a function of the Reynolds number $Re$.
+\end{eg}
+
+\begin{eg}[Rising bubble]
+ Suppose we have a rising bubble, rising at speed $U$. We assume our bubble is small and spherical, and has a free surface. In this case, we do not have to re-calculate the velocity of the fluid. We can change our frame of reference, and suppose the bubble is stationary and the fluid is moving past it at $U$. Then this is exactly what we have calculated above. We can then steal that solution and translate the velocities back by $U$ to recover the solution for the rising bubble.
+
+ Doing all this, we find that the kinetic energy of the fluid is
+ \[
+ \int_{r > a} \frac{1}{2}\rho |\mathbf{u}|^2 \;\d V = \frac{\pi}{3} a^3 \rho U^2 = \frac{1}{2}M_A U^2,
+ \]
+ where
+ \[
+ M_A = \frac{1}{2} \left(\frac{4}{3} \pi a^3 \rho\right) = \frac{1}{2} M_D,
+ \]
+ is the \emph{added mass} of the bubble (and $M_D$ is the mass of the fluid displaced by the bubble).
+
+ Now suppose we raise the bubble by a distance $h$. The change in potential energy of the system is
+ \[
+ \Delta\mathrm{PE} = -M_D gh.
+ \]
+ So
+ \[
+ \frac{1}{2} M_A U^2 - M_D gh = \mathrm{Energy}
+ \]
+ is constant, since we assume there is no dissipation of energy due to viscosity.
+
+ Differentiating this with respect to time, we get
+ \[
+ M_A U \dot{U} = M_D g \dot{h} = M_D gU.
+ \]
+ We can cancel out the $U$'s and use $M_D = 2M_A$ to get
+ \[
+ \dot{U} = 2g.
+ \]
+ So in an inviscid fluid, the bubble rises at twice the acceleration of gravity.
+\end{eg}
+
+\subsection{Potential flow in two dimensions}
+We now consider the case where the world is two-dimensional. We use polar coordinates. Then we have
+\[
+ \nabla^2 \phi = \frac{1}{r} \pd{r} \left(r \frac{\partial \phi}{\partial r}\right) + \frac{1}{r^2} \frac{\partial^2 \phi}{\partial \theta^2}.
+\]
+The gradient is given by
+\[
+ \mathbf{u} = \nabla \phi = \left(\frac{\partial \phi}{\partial r}, \frac{1}{r} \frac{\partial \phi}{\partial \theta}\right).
+\]
+The general solution to Laplace's equation is given by
+\[
+ \phi = A \log r + B \theta + \sum_{n = 1}^\infty (A_n r^n + B_n r^{-n})
+ \begin{cases}
+ \cos n\theta\\
+ \sin n\theta
+ \end{cases}.
+\]
+\begin{eg}[Point source]
+ Suppose we have a point source. We can either solve
+ \[
+ \nabla^2 \phi = q \delta (r),
+ \]
+ where $q$ is the volume flux per unit distance, normal to the plane, or use the conservation of mass to obtain
+ \[
+ 2\pi r u_r = q.
+ \]
+ So
+ \[
+ u_r = \frac{q}{2\pi r}.
+ \]
+ Hence we get
+ \[
+ \phi = \frac{q}{2\pi} \log r.
+ \]
+\end{eg}
+
+\begin{eg}[Point vortex]
+ In this case, we have flow that goes around in circles:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \draw [->] circle [radius=0.5];
+ \draw [->] circle [radius=1];
+ \draw [->] circle [radius=1.5];
+ \end{tikzpicture}
+ \end{center}
+ Since there is no radial velocity, we must have $\frac{\partial \phi}{\partial r} = 0$. So $\phi$ only depends on $\theta$. This corresponds to the solution $\phi = B\theta$.
+
+ To find out the value of $B$, consider the circulation around a loop
+ \[
+ K = \oint_{r = a} \mathbf{u}\cdot \;\d \ell = \int_0^{2\pi} \frac{B}{a} \cdot a \;\d \theta = 2\pi B.
+ \]
+ So we get
+ \[
+ \phi = \frac{K}{2\pi}\theta,\quad u_\theta = \frac{K}{2\pi r}.
+ \]
+ It is an exercise for the reader to show that this flow is indeed irrotational, i.e.\ $\nabla \times \mathbf{u} = 0$ for $r \not= 0$, despite it looking rotational. Moreover, for any (simple) loop $c$, we get
+ \[
+ \oint_c \mathbf{u}\cdot\;\d \ell =
+ \begin{cases}
+ K & \text{the origin is inside $c$}\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ We can interpret this as saying the vorticity vanishes everywhere, except at the singular point at the origin, where the vorticity is infinite. Indeed, we have
+ \[
+ \oint_c \mathbf{u}\cdot \d \ell = \int_S \boldsymbol\omega \cdot \mathbf{n} \;\d S,
+ \]
+ where $S$ is the surface bounded by $c$.
+\end{eg}
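The claimed circulation values can be checked numerically: write $\mathbf{u} = \frac{K}{2\pi r}\hat{\boldsymbol\theta}$ in Cartesian components and evaluate $\oint_c \mathbf{u}\cdot \d\boldsymbol\ell$ around circles that do and do not enclose the origin. The value of $K$ and the loop positions below are arbitrary.

```python
import math

K = 3.0  # hypothetical circulation strength

def u(x, y):
    # point vortex velocity u_theta = K/(2 pi r), in Cartesian components
    r2 = x * x + y * y
    return (-K * y / (2 * math.pi * r2), K * x / (2 * math.pi * r2))

def circulation(cx, cy, radius, n=100000):
    # midpoint-rule evaluation of the line integral around a circle centred at (cx, cy)
    total = 0.0
    for i in range(n):
        t = 2 * math.pi * (i + 0.5) / n
        x, y = cx + radius * math.cos(t), cy + radius * math.sin(t)
        ux, uy = u(x, y)
        # dl = (-r sin t, r cos t) dt, traversing the loop anticlockwise
        total += (ux * -radius * math.sin(t) + uy * radius * math.cos(t)) * 2 * math.pi / n
    return total

c_in = circulation(0.0, 0.0, 1.0)   # loop enclosing the origin
c_out = circulation(3.0, 0.0, 1.0)  # loop not enclosing the origin
print(c_in)   # = K
print(c_out)  # = 0
```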
+
+\begin{eg}[Uniform flow past a cylinder]
+ Consider the flow past a cylinder.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle [radius=1];
+ \draw [->] (-3, 0.7) -- +(1, 0);
+ \draw [->] (-3, 0) node [left] {$U$} -- +(1, 0);
+ \draw [->] (-3, -0.7) -- +(1, 0);
+
+ \draw [->] (0, 0) -- (1.5, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (1, 1) node [right] {$r$};
+ \draw (0.4, 0) arc(0:45:0.4) node [pos=0.7, right] {$\theta$};
+ \end{tikzpicture}
+ \end{center}
+ We need to solve
+ \begin{align*}
+ \nabla^2 \phi &= 0 & r &> a\\
+ \phi &\to U r\cos \theta & r & \to \infty\\
+ \frac{\partial \phi}{\partial r} &= 0 & r &= a.
+ \end{align*}
+ We already have the general solution above. So we just write it down. We find
+ \[
+ \phi = U\left(r + \frac{a^2}{r}\right) \cos \theta + \frac{K}{2\pi}\theta.
+ \]
+ The last term allows for a net circulation $K$ around the cylinder, to account for vorticity in the viscous boundary layer on the surface of the cylinder. We have
+ \begin{align*}
+ u_r &= U\left(1 - \frac{a^2}{r^2}\right) \cos \theta\\
+ u_\theta &= -U \left(1 + \frac{a^2}{r^2}\right) \sin \theta + \frac{K}{2 \pi r}.
+ \end{align*}
+ We can find the streamfunction for this as
+ \[
+ \psi = Ur\sin \theta\left(1 - \frac{a^2}{r^2}\right) - \frac{K}{2\pi} \log r.
+ \]
+ If there is no circulation, i.e.\ $K = 0$, then we get a flow similar to the flow around a sphere. Again, there is no net force on the cylinder, since the flow is symmetric fore and aft, above and below. Again, we get two stagnation points at $A, A'$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.2, ->-=0.85] (-3, 0) -- (3, 0);
+ \draw [fill=gray] circle [radius=1];
+ \node [circ] at (-1, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [anchor = south west] at (-1, 0) {$A'$};
+ \node [anchor = south east] at (1, 0) {$A$};
+
+ \draw [->-=0.23, ->-=0.8] (-3, 0.2) .. controls (-1, 0.2) and (-1, 1.1) .. (0, 1.1) .. controls (1, 1.1) and (1, 0.2) .. (3, 0.2);
+ \draw [->-=0.3, ->-=0.73] (-3, 0.4) .. controls (-1, 0.4) and (-1, 1.2) .. (0, 1.2) .. controls (1, 1.2) and (1, 0.4) .. (3, 0.4);
+ \draw [->-=0.23, ->-=0.8] (-3, -0.2) .. controls (-1, -0.2) and (-1, -1.1) .. (0, -1.1) .. controls (1, -1.1) and (1, -0.2) .. (3, -0.2);
+ \draw [->-=0.3, ->-=0.73] (-3, -0.4) .. controls (-1, -0.4) and (-1, -1.2) .. (0, -1.2) .. controls (1, -1.2) and (1, -0.4) .. (3, -0.4);
+ \end{tikzpicture}
+ \end{center}
+ What happens when $K \not= 0$ is more interesting. We first look at the stagnation points. We get $u_r = 0$ if and only if $r = a$ or $\cos \theta = 0$. For $u_\theta = 0$, when $r = a$, we require
+ \[
+ K = 4 \pi a U \sin \theta.
+ \]
+ So provided $|K| \leq 4 \pi a U$, there is a solution to this problem, and we get stagnation points on the boundary.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle [radius=1];
+ \node [circ] at (-0.707, 0.707) {};
+ \node [circ] at (0.707, 0.707) {};
+
+ \draw [->-=0.5] (-3, 1.1) .. controls (-2, 1.1) .. (-0.707, 0.707);
+ \draw [->-=0.5] (0.707, 0.707) .. controls (2, 1.1) .. (3, 1.1);
+ \draw [->-=0.2, ->-=0.8] (-3, 0.8) .. controls (-1, 0.8) and (-1.5, -1.1) .. (0, -1.1) .. controls (1.5, -1.1) and (1, 0.8) .. (3, 0.8);
+ \draw [->-=0.3, ->-=0.7] (-3, 0.5) .. controls (-1, 0.5) and (-1.5, -1.2) .. (0, -1.2) .. controls (1.5, -1.2) and (1, 0.5) .. (3, 0.5);
+
+ \draw [->-=0.5] (-3, 1.4) .. controls (-2, 1.4) and (-1, 1.15) .. (0, 1.15) .. controls (1, 1.15) and (2, 1.4) .. (3, 1.4);
+ \end{tikzpicture}
+ \end{center}
+ For $|K| > 4 \pi a U$, we do not get a stagnation point on the boundary. However, we still have the stagnation point where $\cos \theta = 0$, i.e.\ $\theta = \pm \frac{\pi}{2}$. Looking at the equation for $u_\theta = 0$, only $\theta = \frac{\pi}{2}$ works.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray] circle [radius=1];
+ \node [circ] at (0, 1.8) {};
+
+ \draw [->-=0.1, ->-=0.95] (0, 1.8) .. controls (-2, 0.8) and (-1.3, -1.2) .. (0, -1.2) .. controls (1.3, -1.2) and (2, 0.8) .. (0, 1.8);
+
+ \draw [->-=0.1, ->-=0.95] (0, 1.4) .. controls (-1.3, 1.4) and (-1.6, -1.1) .. (0, -1.1) .. controls (1.6, -1.1) and (1.3, 1.4) .. (0, 1.4);
+
+ \draw [->-=0.3, ->-=0.8] (-3, 2.3) .. controls (-1.5, 2.3) .. (0, 1.8) .. controls (1.5, 2.3) .. (3, 2.3);
+
+ \draw [->-=0.1, ->-=0.95] (-3, 2) .. controls (-2, 2) and (-1, 1.8) .. (-1, 1.5) .. controls (-1, 1.2) and (-1.4, 0.8) .. (-1.4, 0) arc (180:360:1.4) .. controls (1.4, 0.8) and (1, 1.2) .. (1, 1.5) .. controls (1, 1.8) and (2, 2) .. (3, 2);
+ \draw [->-=0.5] (-3, 2.6) .. controls (-2, 2.6) .. (0, 2.2) .. controls (2, 2.6) and (2, 2.6) .. (3, 2.6);
+ \end{tikzpicture}
+ \end{center}
+ Let's now look at the effect on the cylinder. For steady potential flow, Bernoulli works (i.e.\ $H$ is constant) everywhere, not just along each streamline (see later). So we can calculate the pressure on the surface. Let $p$ be the pressure on the surface. Then we get
+ \[
+ p_\infty + \frac{1}{2} \rho U^2 = p + \frac{1}{2} \rho \left(\frac{K}{2\pi a} - 2U \sin \theta\right)^2.
+ \]
+ So we find
+ \[
+ p = p_\infty + \frac{1}{2} \rho U^2 - \frac{\rho K^2}{8 \pi^2 a^2} + \frac{\rho KU \sin \theta}{\pi a} - 2\rho U^2 \sin^2 \theta.
+ \]
+ We see the pressure is symmetrical fore and aft. So there is no force in the $x$ direction.
+
+ However, we get a transverse force (per unit length) in the $y$-direction. We have
+ \[
+ F_y = -\int_0^{2\pi} p \sin \theta (a \;\d \theta) = -\int_0^{2\pi} \frac{\rho K U}{\pi a} \sin^2 \theta \,a \;\d \theta = - \rho UK,
+ \]
+ where we have dropped all the odd terms, whose integrals vanish. So there is a sideways force in the direction perpendicular to the flow, which is directly proportional to the circulation of the system.
+
+ In general, the \emph{Magnus force} (lift force) resulting from interaction between the flow $\mathbf{U}$ and the vortex $\mathbf{K}$ is
+ \[
+ \mathbf{F} = \rho \mathbf{U}\times \mathbf{K}.
+ \]
+\end{eg}
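As with the sphere, the lift integral can be verified numerically. Using the surface pressure derived above, the transverse force per unit length should come out as $-\rho UK$; the parameter values below are arbitrary illustrative numbers.

```python
import math

# illustrative values (hypothetical)
rho, U, a, K, p_inf = 1000.0, 2.0, 0.5, 3.0, 1.0e5

def p(theta):
    # surface pressure from Bernoulli, with u_theta(a) = K/(2 pi a) - 2 U sin(theta)
    u_theta = K / (2 * math.pi * a) - 2 * U * math.sin(theta)
    return p_inf + 0.5 * rho * U**2 - 0.5 * rho * u_theta**2

# transverse force per unit length: F_y = -(integral) p sin(theta) a dtheta
n = 200000
Fy = sum(-p(2 * math.pi * (i + 0.5) / n) * math.sin(2 * math.pi * (i + 0.5) / n)
         * a * 2 * math.pi / n for i in range(n))
print(Fy, -rho * U * K)  # the two agree
```

Only the $\sin^2\theta$ term in $p\sin\theta$ survives the integration, exactly as in the text.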
+\subsection{Time dependent potential flows}
+Consider the time-dependent Euler equation
+\[
+ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \nabla\left(\frac{1}{2}|\mathbf{u}|^2\right) - \mathbf{u}\times \boldsymbol\omega\right) = -\nabla p - \nabla \chi.
+\]
+We assume that we have a potential flow $\mathbf{u} = \nabla \phi$. So $\boldsymbol\omega = 0$. Then we can write the whole equation as
+\[
+ \nabla \left(\rho \frac{\partial \phi}{\partial t} + \frac{1}{2} \rho |\mathbf{u}|^2 + p + \chi\right) = 0.
+\]
+Thus we can integrate this to obtain
+\[
+ \rho \frac{\partial\phi}{\partial t} + \frac{1}{2} \rho |\nabla \phi|^2 + p + \chi = f(t),
+\]
+where $f(t)$ is a function independent of space. This equation allows us to relate the properties of $\phi$, $p$ etc.\ at different points in space.
+
+\begin{eg}[Oscillations in a manometer]
+ A manometer is a $U$-shaped tube.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 4) -- (0, 0) -- (3, 0) -- (3, 4);
+ \draw (0.5, 4) -- (0.5, 1) -- (2.5, 1) -- (2.5, 4);
+ \draw [dashed] (-0.5, 2.5) -- (3.5, 2.5);
+ \draw [<->] (1.5, 1) -- (1.5, 2.5) node [pos=0.5, fill=white] {$H$};
+ \node [left] at (0, 1) {$y = 0$};
+
+ \fill [mblue, opacity=0.5] (0, 3.3) -- (0, 0) -- (3, 0) -- (3, 1.7) arc(0:-180:0.25 and 0.05) -- (2.5, 1) -- (0.5, 1) -- (0.5, 3.3) arc(0:-180:0.25 and 0.05);
+
+ \draw [<->] (0.8, 2.5) -- (0.8, 3.3) node [pos=0.5, right] {$h$};
+ \end{tikzpicture}
+ \end{center}
+ We use some magic to set it up such that the water level in the left tube is $h$ above the equilibrium position $H$. Then when we release the system, the water levels on both sides will oscillate.
+
+ We can get quite far just by doing dimensional analysis. There are only two parameters $g, H$. Hence the frequency must be proportional to $\sqrt{\frac{g}{H}}$. To get the constant of proportionality, we have to do proper calculations.
+
+ We are going to assume the reservoir at the bottom is large, so velocities there are negligible. So $\phi$ is constant in the reservoir, say $\phi = 0$. We want to figure out the velocity in the left tube, where the fluid moves only vertically. So we have
+ \[
+ \phi = uy = \dot{h}y.
+ \]
+ So we have
+ \[
+ \frac{\partial \phi}{\partial t} = \ddot{h}y.
+ \]
+ On the right hand side, we just have
+ \[
+ \phi = -uy = -\dot{h} y,\quad \frac{\partial \phi}{\partial t} = -\ddot{h}y.
+ \]
+ We now apply the equation from one tube to the other --- we get
+ \begin{multline*}
+ \rho \ddot{h} (H + h) + \frac{1}{2}\rho \dot{h}^2 + p_{\mathrm{atm}} + g\rho(H + h) = f(t) \\
+ = -\rho\ddot{h}(H - h) + \frac{1}{2}\rho \dot{h}^2 + p_{\mathrm{atm}} + g\rho(H - h).
+ \end{multline*}
+ Quite a lot of these terms cancel, and we are left with
+ \[
+ 2 \rho H\ddot{h} + 2g\rho h = 0.
+ \]
+ Simplifying terms, we get
+ \[
+ \ddot{h} + \frac{g}{H} h = 0.
+ \]
+ So this is simple harmonic motion with frequency $\sqrt{\frac{g}{H}}$, as dimensional analysis predicted.
+\end{eg}
+
+\begin{eg}[Oscillations of a bubble]
+ Suppose we have a spherical bubble of radius $a(t)$ in some fluid. Spherically symmetric oscillations induce a flow in the fluid. This satisfies
+ \begin{align*}
+ \nabla^2 \phi &= 0 & r & > a\\
+ \phi&\to 0 & r &\to \infty\\
+ \frac{\partial \phi}{\partial r} &= \dot{a} & r &= a.
+ \end{align*}
+ In spherical polars, we write Laplace's equation as
+ \[
+ \frac{1}{r^2} \frac{\partial}{\partial r} \left(r^2 \frac{\partial \phi}{\partial r}\right) = 0.
+ \]
+ So we have
+ \[
+ \phi = \frac{A(t)}{r},
+ \]
+ and
+ \[
+ u_r = \frac{\partial \phi}{\partial r} = -\frac{A(t)}{r^2}.
+ \]
+ This certainly vanishes as $r \to \infty$. We also know
+ \[
+ -\frac{A}{a^2} = \dot{a}.
+ \]
+ So we have
+ \[
+ \phi = -\frac{a^2\dot{a}}{r}.
+ \]
+ Now we have
+ \[
+ \left.\frac{\partial \phi}{\partial t}\right|_{r = a} = \left.-\frac{2a\dot{a}^2}{r} -\frac{a^2\ddot{a}}{r}\right|_{r = a} = -(a\ddot{a} + 2\dot{a}^2).
+ \]
+ We now consider the pressure on the surface of the bubble.
+
+ We will ignore gravity, and apply the time-dependent Bernoulli equation at the bubble surface and at infinity. Then we get
+ \[
+ -\rho (a\ddot{a} + 2 \dot{a}^2) + \frac{1}{2}\rho \dot{a}^2 + p(a, t) = p_\infty.
+ \]
+ Hence we get
+ \[
+ \rho\left(a\ddot{a} + \frac{3}{2}\dot{a}^2\right) = p(a, t) - p_\infty.
+ \]
+ This is a difficult equation to solve, because it is non-linear. So we assume we have small oscillations about equilibrium, and write
+ \[
+ a = a_0 + \eta(t),
+ \]
+ where $\eta(t) \ll a_0$. Then we can write
+ \[
+ a\ddot{a} + \frac{3}{2} \dot{a}^2 = (a_0 + \eta) \ddot{\eta} + \frac{3}{2} \dot{\eta}^2 = a_0 \ddot{\eta} + O(\eta^2).
+ \]
+ Ignoring second order terms, we get
+ \[
+ \rho a_0 \ddot{\eta} = p(a, t) - p_\infty.
+ \]
+ We also know that $p_\infty$ is the pressure when we are in equilibrium. So $p(a, t) - p_\infty$ is the small change in pressure $\delta p$ caused by the change in volume.
+
+ To relate the change in pressure with the change in volume, we need to know some thermodynamics. We suppose the oscillation is adiabatic, i.e.\ it does not involve any heat exchange. This is valid if the oscillation is fast, since there isn't much time for heat transfer. Then it is a \emph{fact} from thermodynamics that the motion obeys
+ \[
+ PV^\gamma =\text{ constant},
+ \]
+ where
+ \[
+ \gamma = \frac{\text{specific heat under constant pressure}}{\text{specific heat under constant volume}}.
+ \]
+ We can take logs to obtain
+ \[
+ \log p + \gamma \log V = \text{ constant}.
+ \]
+ Then taking small variations of $p$ and $V$ about $p_\infty$ and $V_0 = \frac{4}{3} \pi a_0^3$ gives
+ \[
+ \delta (\log p) + \gamma \delta(\log V) = 0.
+ \]
+ In other words, we have
+ \[
+ \frac{\delta p}{p_\infty} = -\gamma \frac{\delta V}{V_0}.
+ \]
+ Thus we find
+ \[
+ p(a, t) - p_\infty = \delta p = -p_\infty \gamma \frac{\delta V}{V_0} = -3p_\infty\gamma \frac{\eta}{a_0}.
+ \]
+ Thus we get
+ \[
+ \rho a_0 \ddot{\eta} = -\frac{3 \gamma \eta}{a_0} p_\infty.
+ \]
+ This is again simple harmonic motion with frequency
+ \[
+ \omega = \left(\frac{3\gamma p_\infty}{\rho a_0^2}\right)^{1/2}.
+ \]
+ We know all these numbers, so we can put them in. For a $\SI{1}{\centi\meter}$ bubble, we get $\omega \approx \SI{2e3}{\per\second}$. For reference, the human audible range is $20$--$\SI{20000}{\per\second}$. This is why we can hear, say, waves breaking.
+\end{eg}
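We can check the quoted number. On dimensional grounds the frequency is $\omega = \sqrt{3\gamma p_\infty/(\rho a_0^2)}$, and the values below for an air bubble in water are an assumption of this check, not given in the text.

```python
import math

# assumed standard values for an air bubble in water (not given in the text)
gamma = 1.4    # ratio of specific heats for air
p_inf = 1.0e5  # ambient (atmospheric) pressure, Pa
rho = 1.0e3    # density of water, kg/m^3
a0 = 0.01      # bubble radius, m (1 cm)

omega = math.sqrt(3 * gamma * p_inf / (rho * a0**2))
print(omega)  # roughly 2e3 per second, as claimed
```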
+
+\section{Water waves}
+We now consider water waves. We are first going to use dimensional analysis to understand the qualitative behaviour of water waves in deep and shallow water respectively. Afterwards, we will try to solve it properly, and see how we can recover the results from the dimensional analysis.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (-3, 0) rectangle (3, -1);
+
+ \draw [dashed] (-3, 1) -- (3, 1);
+
+ \fill [mblue, opacity=0.5] (-3, 0) -- (-3, 1) sin (-2, 1.2) cos (-1, 1) sin (0, 0.8) cos (1, 1) sin (2, 1.2) cos (3, 1) -- (3, 0) -- cycle;;
+
+ \draw (-3, 1) sin (-2, 1.2) cos (-1, 1) sin (0, 0.8) cos (1, 1) sin (2, 1.2) cos (3, 1);
+
+ \draw [->] (-5, 0.5) -- +(1, 0) node [right] {$x$};
+ \draw [->] (-5, 0.5) -- +(0, 1) node [above] {$z$};
+ \draw [->] (-5, 0.5) -- +(0.866, 0.5) node [right] {$y$};
+
+ \draw (-1.5, 2.3) node [right] {$h(x, y, t)$} edge [out=180, in=90, -latex] (-2, 1.2);
+
+ \draw [<->] (2, 0) -- (2, 1) node [pos=0.5, right] {$H$};
+ \end{tikzpicture}
+\end{center}
+\subsection{Dimensional analysis}
+Consider waves with wave number $k = \frac{2\pi}{\lambda}$, where $\lambda$ is the wavelength, on a layer of water of depth $H$. We suppose the fluid is inviscid. Then the wave speed $c$ depends on $k$, $g$ and $H$. Dimensionally, we can write the answer as
+\[
+ c = \sqrt{gH} f(kH),
+\]
+for some dimensionless function $f$.
+
+Now suppose we have deep water. Then $H \gg \lambda$. Therefore $kH \gg 1$. In this limit, we would expect the speed not to depend on $H$, since $H$ is just too big. The only way this can be true is if
+\[
+ f \propto \frac{1}{\sqrt{kH}}.
+\]
+Then we know
+\[
+ c = \alpha \sqrt{\frac{g}{k}},
+\]
+where $\alpha$ is some dimensionless constant.
+
+What happens near the shore? Here the water is shallow. So we have $kH \ll 1$. Since the wavelength is now so long, the speed should be independent of $k$. So $f$ is a constant, say $\beta$. So
+\[
+ c = \beta\sqrt{gH}.
+\]
+We don't know $\alpha$ and $\beta$. To find them, we need a proper theory, which will also connect up the regimes of deep and shallow water, as we will soon see.
+
+Yet these results can already explain several phenomena we see in daily life. For example, we see that wave fronts are always parallel to the shore, regardless of how the shore is shaped and positioned. This is because if a wave comes in at an angle, the parts further away from the shore move faster (since the water is deeper there), causing the wave front to rotate until it is parallel.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0);
+ \node at (0, -0.5) {shore};
+ \begin{scope}[shift={(0, 2)}]
+ \begin{scope}[rotate=20]
+ \draw (-1.5, 0) -- (1.5, 0);
+ \draw (-1.5, 0.3) -- (1.5, 0.3) node [right] {wave fronts};
+ \draw (-1.5, 0.6) -- (1.5, 0.6);
+
+ \draw [-latex'] (-1, 0) -- (-1, -0.3);
+ \draw [->] (1, 0) -- (1, -0.6);
+ \end{scope}
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+We can also use this to explain why waves break. Near the shore, the water is shallow, and the difference in height between the peaks and troughs of the wave is significant. Hence the peaks travel faster than the troughs, causing the waves to break.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (-3, -1) rectangle (3, -2);
+
+ \fill [mblue, opacity=0.5] (-3, -1) -- (-3, 1) sin (-2, 1.5) cos (-1, 1) sin (0, 0.5) cos (1, 1) sin (2, 1.5) cos (3, 1) -- (3, -1) -- cycle;;
+
+ \draw (-3, 1) sin (-2, 1.5) cos (-1, 1) sin (0, 0.5) cos (1, 1) sin (2, 1.5) cos (3, 1);
+
+ \draw [<->] (-2, -1) -- (-2, 1.5) node [pos=0.5, right] {$H + h$};
+ \draw [<->] (0, -1) -- (0, 0.5) node [pos=0.5, right] {$H - h$};
+
+ \draw [->] (-2, 1.5) -- +(1, 0);
+ \draw [-latex'] (0, 0.5) -- +(0.5, 0);
+ \end{tikzpicture}
+\end{center}
+
+\subsection{Equation and boundary conditions}
+We now try to solve for the actual solution.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (-3, 0) rectangle (3, -1);
+
+ \draw [dashed] (-3, 1) -- (3, 1) node [right] {$z = 0$};
+ \node [right] at (3, 0) {$z = -H$};
+
+ \fill [mblue, opacity=0.5] (-3, 0) -- (-3, 1) sin (-2, 1.2) cos (-1, 1) sin (0, 0.8) cos (1, 1) sin (2, 1.2) cos (3, 1) -- (3, 0) -- cycle;;
+
+ \draw (-3, 1) sin (-2, 1.2) cos (-1, 1) sin (0, 0.8) cos (1, 1) sin (2, 1.2) cos (3, 1);
+
+ \draw (-1.5, 2.3) node [right] {$h(x, y, t)$} edge [out=180, in=90, -latex] (-2, 1.2);
+
+ \draw [<->] (2, 0) -- (2, 1) node [pos=0.5, right] {$H$};
+ \end{tikzpicture}
+\end{center}
+We assume the fluid is inviscid, and the motion starts from rest. Thus the vorticity $\nabla \times \mathbf{u}$ is initially zero, and hence always zero. Together with the incompressibility condition $\nabla \cdot \mathbf{u} = 0$, we end up with Laplace's equation
+\[
+ \nabla^2 \phi = 0.
+\]
+We have some \emph{kinematic boundary conditions}. First of all, there can be no flow through the bottom. So we have
+\[
+ u_z = \frac{\partial \phi}{\partial z} = 0
+\]
+when $z = -H$. At the free surface, we have
+\[
+ u_z = \frac{\partial \phi}{\partial z} = \frac{\D h}{\D t} = \frac{\partial h}{\partial t} + u \frac{\partial h}{\partial x} + v \frac{\partial h}{\partial y}
+\]
+when $z = h$.
+
+We then have the dynamic boundary condition that the pressure at the surface is the atmospheric pressure, i.e.\ at $z = h$, we have
+\[
+ p = p_0 = \text{constant}.
+\]
+We need to relate this to the flow. So we apply the time-dependent Bernoulli equation
+\[
+ \rho \frac{\partial \phi}{\partial t} + \frac{1}{2} \rho|\nabla \phi|^2 + g\rho h + p_0 = f(t)\text{ on }z = h.
+\]
+The equation is not hard, but the boundary conditions are. Apart from them being non-linear, there is this surface $h$ that we know nothing about.
+
+It is impossible to solve these equations just as they are. So we want to make some approximations. We assume that the wave amplitudes are small, i.e.\ that
+\[
+ h \ll H.
+\]
+Moreover, we assume that the waves are relatively flat, so that
+\[
+ \frac{\partial h}{\partial x},\frac{\partial h}{\partial y} \ll 1.
+\]
+We then ignore quadratic terms in small quantities. For example, since the waves are small, the velocities $u$ and $v$ also are. So we ignore $u \frac{\partial h}{\partial x}$ and $v\frac{\partial h}{\partial y}$. Similarly, we ignore the whole of $|\nabla \phi|^2$ in Bernoulli's equation since it is small.
+
+Next, we use Taylor series to write
+\[
+ \left.\frac{\partial \phi}{\partial z}\right|_{z = h} = \left.\frac{\partial \phi}{\partial z}\right|_{z = 0} + h \left.\frac{\partial^2 \phi}{\partial z^2}\right|_{z = 0} + \cdots.
+\]
+Again, we ignore all quadratic terms. So we just approximate
+\[
+ \left.\frac{\partial \phi}{\partial z}\right|_{z = h} = \left.\frac{\partial \phi}{\partial z}\right|_{z = 0}.
+\]
+We are then left with linear water waves. The equations are then
+\begin{align*}
+ \nabla^2 \phi &= 0 & -H < z &\leq 0\\
+ \frac{\partial \phi}{\partial z} &= 0 & z &= -H\\
+ \frac{\partial \phi}{\partial z} &= \frac{\partial h}{\partial t} & z &= 0\\
+ \frac{\partial \phi}{\partial t} + gh &= f(t) & z &= 0.
+\end{align*}
+Note that the last equation is just Bernoulli's equation, after removing the small terms and absorbing the constants and factors into the function $f$.
+
+We now have a nice, straightforward problem. We have a linear equation with linear boundary conditions, which we can solve.
+
+\subsection{Two-dimensional waves (straight crested waves)}
+We are going to further simplify the situation by considering the case where the wave does not depend on $y$. We consider a simple wave form
+\[
+ h = h_0 e^{i(kx - \omega t)}.
+\]
+Using the boundary condition at $z = 0$, we know we must have a solution of the form
+\[
+ \phi = \hat{\phi}(z) e^{i(kx - \omega t)}.
+\]
+Putting this into Laplace's equation, we have
+\[
+ -k^2 \hat{\phi} + \hat{\phi}'' = 0.
+\]
+We notice that the solutions are then of the form
+\[
+ \hat{\phi} = \phi_0 \cosh k(z + H),
+\]
+where the constants are chosen so that $\frac{\partial \phi}{\partial z} = 0$ at $z = -H$.
+
+We now have three unknowns, namely $h_0, \phi_0$ and $\omega$ (we assume $k$ is given, and we want to find waves of this wave number). We use the boundary condition
+\[
+ \frac{\partial \phi}{\partial z} = \frac{\partial h}{\partial t}\text{ at }z = 0.
+\]
+We then get
+\[
+ k \phi_0 \sinh kH = -i \omega h_0.
+\]
+We put in Bernoulli's equation to get
+\[
+ -i\omega \hat{\phi}(z) e^{i(kx - \omega t)} + g h_0 e^{i(kx - \omega t)} = f(t).
+\]
+For this not to depend on $x$, we must have
+\[
+ -i\omega \phi_0 \cosh kH + gh_0 = 0.
+\]
+The trivial solution is of course $h_0 = \phi_0 = 0$. Otherwise, we can solve to get
+\[
+ \omega^2 = gk \tanh kH.
+\]
+This is the \emph{dispersion relation}, and relates the frequency to the wavelength of the wave.
+
+We can use the dispersion relation to find the speed of the wave. This is just
+\[
+ c = \frac{\omega}{k} = \sqrt{\frac{g}{k}\tanh kH}.
+\]
+We can now look at the limits we have previously obtained with large and small $H$.
+
+In deep water (or short waves), we have $kH \gg 1$. We know that as $kH \to \infty$, we get $\tanh kH \to 1$. So we get
+\[
+ c = \sqrt{\frac{g}{k}}.
+\]
+In shallow water, we have $kH \ll 1$. In the limit $kH \to 0$, we get $\tanh kH \to kH$. Then we get
+\[
+ c = \sqrt{gH}.
+\]
+These are exactly as predicted using dimensional analysis, with all the dimensionless constants being $1$.
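A quick numerical check of both limits, straight from the dispersion relation. The values of $g$ and $H$ are arbitrary illustrative numbers.

```python
import math

g, H = 9.81, 10.0  # arbitrary illustrative values

def c(k):
    # wave speed from the dispersion relation omega^2 = g k tanh(k H)
    return math.sqrt((g / k) * math.tanh(k * H))

k_deep = 100.0 / H      # kH = 100 >> 1
k_shallow = 0.001 / H   # kH = 0.001 << 1

print(c(k_deep), math.sqrt(g / k_deep))  # deep water: c -> sqrt(g/k)
print(c(k_shallow), math.sqrt(g * H))    # shallow water: c -> sqrt(gH)
```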
+
+We can now plot how the wave speed varies with $k$.
+\begin{center}
+ \begin{tikzpicture}[xscale=0.5]
+ \draw [->] (-0.5, 0) -- (8, 0) node [right] {$kH$};
+ \draw [->] (0, -0.5) -- (0, 2.5) node [right] {$\frac{c}{\sqrt{gH}}$};
+
+ \node [circ] at (0, 2) {};
+ \node [left] at (0, 2) {$1$};
+
+ \draw [domain=0:8] plot [smooth] (\x, {2/(1 + \x^2/(3 + \x^2/(5 + \x^2/(7 + \x^2/(9 + \x^2/11)))))});
+% continued fraction approximation to tanh x. Cannot use built in tanh x
+% because (1) it does not work for large x; and (2) this avoids division by
+% zero error when evaluating tanh x / x at x = 0 directly.
+ \end{tikzpicture}
+\end{center}
+We see that wave speed decreases monotonically with $k$, and long waves travel faster than short waves.
+
+This means if we start with, say, a square wave, the long components of the wave travel faster than the short components. So the square wave disintegrates as it travels.
+
+Note also that this puts an upper bound on the maximum value of the speed $c$. There can be no wave travelling faster than $\sqrt{gH}$. Thus if you travel faster than $\sqrt{gH}$, all the waves you produce are left behind you. In general, if you have velocity $U$, we can define the \emph{Froude number}
+\[
+ Fr = \frac{U}{\sqrt{gH}}.
+\]
+This is like the Mach number.
+
+For a tsunami, we have
+\[
+ \lambda \sim \SI{400}{\kilo\meter},\quad H \sim \SI{4}{\kilo\meter}.
+\]
+We are thus in the regime of small $kH$, and
+\[
+ c = \sqrt{10 \times 4\times 10^3} = \SI{200}{\meter\per\second}.
+\]
+Note that the speed of the tsunami depends on the depth of the water. So the topography of the bottom of the ocean will affect how tsunamis move. So knowing the topography of the sea bed allows us to predict how tsunamis will move, and can save lives.
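The arithmetic for the tsunami, including a check that we really are in the shallow-water regime:

```python
import math

g = 10.0            # taking g ~ 10 m/s^2, as the text does
H = 4.0e3           # ocean depth, m
wavelength = 4.0e5  # tsunami wavelength, m

kH = (2 * math.pi / wavelength) * H
c = math.sqrt(g * H)
print(kH)  # about 0.06, so kH << 1 and the shallow-water formula applies
print(c)   # 200 m/s
```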
+
+\subsection{Group velocity}
+Now suppose we have two waves travelling closely together with similar wave numbers, e.g.
+\[
+ \sin k_1 x + \sin k_2 x = 2 \sin \left(\frac{k_1 + k_2}{2} x\right) \cos\left(\frac{k_1 - k_2}{2}x\right).
+\]
+Since $k_1$ and $k_2$ are very similar, we know $\frac{k_1 + k_2}{2} \approx k_1$, while $\frac{k_1 - k_2}{2}$ is small, so the cosine term has a long period. We would then expect the waves to look like this:
+\begin{center}
+ \begin{tikzpicture}[xscale=1.5]
+ \draw [semithick, mblue, domain=0:6,samples=600] plot(\x, {0.5 * cos(2000 * \x) * sin (90 * \x)});
+ \draw [semithick, morange] (0, 0) sin (1, 0.5) cos (2, 0) sin (3, -0.5) cos (4, 0) sin (5, 0.5) cos (6, 0);
+ \draw [semithick, mgreen] (0, 0) sin (1, -0.5) cos (2, 0) sin (3, 0.5) cos (4, 0) sin (5, -0.5) cos (6, 0);
+ \end{tikzpicture}
+\end{center}
+So the amplitudes of the waves would fluctuate. We say the wave travels in \emph{groups}. For more details, see IID Fluid Dynamics or IID Asymptotic Methods.
+
+It turns out the ``packets'' don't travel at the same velocity as the waves themselves. The group velocity is given by
+\[
+ c_g = \frac{\partial \omega}{\partial k}.
+\]
+In particular, for deep water waves, where $\omega \sim \sqrt{gk}$, we get
+\[
+ c_g = \frac{1}{2} \sqrt{\frac{g}{k}} = \frac{1}{2}c.
+\]
+This is also the velocity at which energy propagates.
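We can verify $c_g = \frac{1}{2}c$ for deep-water waves by differentiating $\omega = \sqrt{gk}$ numerically; the value of $k$ below is arbitrary.

```python
import math

g = 9.81

def omega(k):
    # deep-water dispersion relation
    return math.sqrt(g * k)

k = 2.0   # arbitrary wave number
dk = 1e-6
c_g = (omega(k + dk) - omega(k - dk)) / (2 * dk)  # numerical d(omega)/dk
c = omega(k) / k                                  # phase speed
print(c_g, c / 2)  # group velocity is half the phase speed
```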
+
+\subsection{Rayleigh-Taylor instability}
+Note that it is possible to turn the problem upside down, and imagine we have water over air:
+\begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (-3, 2) -- (-3, 1) sin (-2, 1.2) cos (-1, 1) sin (0, 0.8) cos (1, 1) sin (2, 1.2) cos (3, 1) -- (3, 2) -- cycle;;
+
+ \draw (-3, 1) sin (-2, 1.2) cos (-1, 1) sin (0, 0.8) cos (1, 1) sin (2, 1.2) cos (3, 1);
+
+ \node at (0, 1.5) {water};
+ \node at (0, 0.5) {air};
+ \end{tikzpicture}
+\end{center}
+We can imagine this as the same scenario as before, but with gravity pulling upwards. So exactly the same equations hold, but with $g$ replaced by $-g$. In deep water, we have
+\[
+ \omega^2 = -gk.
+\]
+So we have
+\[
+ \omega = \pm i\sqrt{gk}.
+\]
+Thus we get
+\[
+ h \propto A e^{\sqrt{gk} t} + B e^{-\sqrt{gk}t}.
+\]
+We thus have an exponentially growing solution. So the system is unstable, and water will fall down. This very interesting fact is known as \emph{Rayleigh-Taylor instability}.
+
+\section{Fluid dynamics on a rotating frame}
+We would like to study fluid dynamics in a rotating frame, because the Earth is rotating (hopefully). So this is particularly relevant if we want to study ocean currents or atmospheric flow.
+
+\subsection{Equations of motion in a rotating frame}
+
+The Lagrangian (particle) acceleration in a rotating frame of reference is given by
+\[
+ \frac{\D \mathbf{u}}{\D t} + 2 \boldsymbol\Omega \times \mathbf{u} + \boldsymbol\Omega \times (\boldsymbol\Omega \times \mathbf{x}),
+\]
+as you might recall from IA Dynamics and Relativity. So we have the equation of motion
+\[
+ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot \nabla \mathbf{u} + 2 \boldsymbol\Omega \times \mathbf{u}\right) = -\nabla p - \rho \boldsymbol\Omega \times (\boldsymbol\Omega \times \mathbf{x}) + \mathbf{g} \rho.
+\]
+This is complicated, so we want to simplify this. We will first compare the centrifugal force with gravity. We have
+\[
+ |\boldsymbol\Omega| = \frac{2\pi}{1\text{ day}} \approx 2\pi \times 10^{-5}\;\si{\per\second}.
+\]
+The largest length scales are around $\SI{e4}{\kilo\meter}$. Compared to gravity $\mathbf{g}$, we have
+\[
+ \frac{|\boldsymbol\Omega \times (\boldsymbol\Omega \times \mathbf{x})|}{|\mathbf{g}|}\leq \frac{(2\pi)^2 \times 10^{-10}\times 10^7}{10} \approx 4 \times 10^{-3}.
+\]
+So the centrifugal term is tiny compared to gravity, and we will ignore it. Alternatively, we can show that $\boldsymbol\Omega \times (\boldsymbol\Omega \times \mathbf{x})$ can be given by a scalar potential, and we can incorporate it into the potential term, but we will not do that.
+
+Next, we want to get rid of the non-linear terms. We consider motions for which
+\[
+ |\mathbf{u}\cdot \nabla \mathbf{u}| \ll |2 \boldsymbol\Omega \times \mathbf{u}|.
+\]
+The scales of these two terms are $U^2/L$ and $\Omega U$ respectively. So we need
+\[
+ R_o = \frac{U}{\Omega L} \ll 1.
+\]
+This is known as the Rossby number. In our atmosphere, we have $U \sim \SI{10}{\meter\per\second}$ and $L \sim \SI{1e3}{\kilo\meter}$. So we get
+\[
+ R_o = \frac{10}{10^6 \cdot 10^{-4}} \approx 0.1.
+\]
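These scale estimates are easy to check numerically. The following is a rough sketch using the round numbers quoted in the text ($g \approx 10$, largest horizontal scale $10^7$ m, atmospheric $U \sim 10$ m/s over $L \sim 10^6$ m); the variable names are illustrative only.

```python
import math

Omega = 2 * math.pi / 86400   # Earth's rotation rate, 2*pi per day, in s^-1
g = 10.0                      # gravitational acceleration, m s^-2
L_max = 1e7                   # largest horizontal scale (~10^4 km), in m

# Centrifugal acceleration vs gravity: |Omega x (Omega x x)| / |g| <= Omega^2 L / g
centrifugal_ratio = Omega**2 * L_max / g

# Rossby number for large-scale atmospheric flow
U, L = 10.0, 1e6
Ro = U / (Omega * L)

print(f"Omega ~ {Omega:.1e} s^-1")              # a few times 1e-5
print(f"centrifugal/gravity ~ {centrifugal_ratio:.0e}")  # a few times 1e-3
print(f"Rossby number ~ {Ro:.2f}")              # of order 0.1
```

Both ratios come out small, which is what justifies dropping the centrifugal and non-linear terms.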
+So we can ignore the non-linear terms. Thus we get
+\begin{prop}[Euler's equation in a rotating frame]
+ \[
+ \frac{\partial \mathbf{u}}{\partial t} + 2 \boldsymbol\Omega \times \mathbf{u} = -\frac{1}{\rho} \nabla p + \mathbf{g}.
+ \]
+\end{prop}
+
+\begin{defi}[Coriolis parameter/planetary vorticity]
+ We conventionally write $2 \boldsymbol\Omega = \mathbf{f}$, and we call this the \emph{Coriolis parameter} or the \emph{planetary vorticity}.
+\end{defi}
+
+Note that since we take the cross product of $\mathbf{f}$ with $\mathbf{u}$, only the component perpendicular to the velocity matters. Assuming that fluid flows along the surface of the earth, we only need the component of $\mathbf{f}$ normal to the surface, namely
+\[
+ f = 2 \Omega \sin \theta,
+\]
+where $\theta$ is the angle from the equator.
+\subsection{Shallow water equations}
+We are now going to derive the shallow water equations, which describe a layer of water that is shallow relative to its horizontal extent. This is actually quite a good approximation for the ocean --- while the Atlantic is around $\SI{4}{\kilo\meter}$ deep, it is several thousand kilometers across. So the ratio is just like a piece of paper. Similarly, this is also a good approximation for the atmosphere.
+
+Suppose we have a shallow layer of depth $z = h(x, y)$ with $p = p_0$ on $z = h$.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (-3, 0) rectangle (3, -1);
+
+ \draw [dashed] (-3, 1) -- (3, 1);
+
+ \fill [mblue, opacity=0.5] (-3, 0) -- (-3, 1) sin (-2, 1.2) cos (-1, 1) sin (0, 0.8) cos (1, 1) sin (2, 1.2) cos (3, 1) -- (3, 0) -- cycle;
+
+ \draw (-3, 1) sin (-2, 1.2) cos (-1, 1) sin (0, 0.8) cos (1, 1) sin (2, 1.2) cos (3, 1);
+
+ \draw (-1.5, 2.3) node [right] {$h(x, y, t)$} edge [out=180, in=90, -latex] (-2, 1.2);
+
+ \draw [->] (4, 1.5) -- +(0, -1) node [below] {$\mathbf{g}$};
+
+ \draw [->] (4.5, 0.5) -- +(0, 1) node [right] {$\frac{1}{2} \mathbf{f}$};
+
+ \draw [-latex'](4.35, 1) arc(135:405:0.2 and 0.1);
+ \end{tikzpicture}
+\end{center}
+We consider motions with horizontal scales $L$ much greater than vertical scales $H$.
+
+We use the fact that the fluid is incompressible, i.e.\ $\nabla \cdot \mathbf{u} = 0$. Writing $\mathbf{u} = (u, v, w)$, we get
+\[
+ \frac{\partial w}{\partial z} = -\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}.
+\]
+The scales of the terms are $W/H$, $U/L$ and $V/L$ respectively. Since $H \ll L$, we know $W \ll U, V$, i.e.\ most of the movement is horizontal, which makes sense, since there isn't much vertical space to move around.
+
+We consider only horizontal velocities, and write
+\[
+ \mathbf{u} = (u, v, 0),
+\]
+and
+\[
+ \mathbf{f} = (0, 0, f).
+\]
+Then from Euler's equations, we get
+\begin{align*}
+ \frac{\partial u}{\partial t} - fv &= -\frac{1}{\rho} \frac{\partial p}{\partial x},\\
+ \frac{\partial v}{\partial t} + fu &= -\frac{1}{\rho} \frac{\partial p}{\partial y},\\
+ 0 &= -\frac{1}{\rho}\frac{\partial p}{\partial z} -g.
+\end{align*}
+From the last equation, plus the boundary conditions, we know
+\[
+ p = p_0 + g\rho(h - z).
+\]
+This is just the hydrostatic balance. We now put this expression into the horizontal components to get
+\begin{align*}
+ \frac{\partial u}{\partial t} - fv &= -g\frac{\partial h}{\partial x},\\
+ \frac{\partial v}{\partial t} + fu &= -g\frac{\partial h}{\partial y}.
+\end{align*}
+Note that the right hand sides are independent of $z$. So the accelerations are independent of $z$.
+
+The initial conditions are usually that $u$ and $v$ are independent of $z$. So we assume that the velocities remain independent of $z$ for all time.
+
+\subsection{Geostrophic balance}
+When we have steady flow, the time derivatives vanish. So we get
+\begin{align*}
+ u &= \pd{y} \left(-\frac{gh}{f}\right) = \pd{y} \left(-\frac{p}{\rho f}\right),\\
+ v &= -\pd{x} \left(-\frac{gh}{f}\right) = -\pd{x} \left(-\frac{p}{\rho f}\right).
+\end{align*}
+Hopefully, this reminds us of streamfunctions. The streamlines are places where $h$ is constant, i.e.\ the surface is of constant height, i.e.\ the pressure is constant.
+
+\begin{defi}[Shallow water streamfunction]
+ The quantity
+ \[
+ \psi = -\frac{gh}{f}
+ \]
+ is the \emph{shallow water streamfunction}.
+\end{defi}
+
+In general, near a low pressure zone, there is a pressure gradient pushing the flow towards the low pressure area. Since the flow moves in circles around the low pressure zone, there is a Coriolis force that balances this force. This is the geostrophic balance.
+\begin{center}
+ \begin{tikzpicture}
+ \node {$L$};
+ \draw [->-=0.75] ellipse (1 and 0.5);
+ \draw [->-=0.75] ellipse (2 and 1);
+ \draw [->-=0.75] ellipse (3 and 1.5);
+ \end{tikzpicture}
+\end{center}
+This is a cyclone. Note that this picture is only valid in the Northern hemisphere. If we are on the other side of the Earth, cyclones go the other way round.
+
+We now look at the continuity equation, i.e.\ the conservation of mass.
+
+We consider a horizontal surface $\mathcal{D}$ in the water. Then we can compute
+\[
+ \frac{\d}{\d t} \int_\mathcal{D} \rho h \;\d V = -\int_{\partial \mathcal{D}} h \rho \mathbf{u}_H \cdot \mathbf{n}\;\d S,
+\]
+where $\mathbf{u}_H$ is the horizontal velocity. Applying the divergence theorem, we get
+\[
+ \int_{\mathcal{D}} \pd{t} (\rho h)\;\d V = -\int_{\mathcal{D}} \nabla_H \cdot (\rho h \mathbf{u}_H)\;\d V,
+\]
+where
+\[
+ \nabla_H = \left(\pd{x}, \pd{y}, 0\right).
+\]
+Since this was an arbitrary surface, we can take the integral away, and we have the continuity equation
+\[
+ \frac{\partial h}{\partial t} + \nabla_H \cdot (\mathbf{u}_H h) = 0.
+\]
+So if there is water flowing into a point (i.e.\ a vertical line), then the height of the surface falls, and vice versa.
+
+We can write this out in full. In Cartesian coordinates:
+\[
+ \frac{\partial h}{\partial t} + \frac{\partial}{\partial x}(uh) + \pd{y}(vh) = 0.
+\]
+To simplify the situation, we suppose we have small oscillations, so we have $h = h_0 + \eta(x, y, t)$, where $\eta \ll h_0$, and write
+\[
+ \mathbf{u} = (u(x, y), v(x, y)).
+\]
+Then we can rewrite our equations of motion as
+\[
+ \frac{\partial \mathbf{u}}{\partial t} + \mathbf{f} \times \mathbf{u} = -g \nabla \eta\tag{$*$}
+\]
+and, ignoring terms like $u\eta, v\eta$, the continuity equation gives
+\[
+ \frac{\partial \eta}{\partial t} + h_0 \nabla \cdot \mathbf{u} = 0.\tag{$\dagger$}
+\]
+Taking the curl of the $(*)$, we get
+\[
+ \frac{\partial \boldsymbol\zeta}{\partial t} + \mathbf{f}\nabla \cdot \mathbf{u} = 0,
+\]
+where
+\[
+ \boldsymbol\zeta = \nabla \times \mathbf{u}.
+\]
+Note that even though we wrote this as a vector equation, really only the $z$-component is non-zero. So we can also view $\boldsymbol\zeta$ and $\mathbf{f}$ as scalars, and get a scalar equation.
+
+We can express $\nabla \cdot \mathbf{u}$ in terms of $\eta$ using $(\dagger)$. So we get
+\[
+ \frac{\partial}{\partial t}\left(\boldsymbol\zeta - \frac{\eta}{h_0}\mathbf{f}\right) = \frac{\partial \mathbf{Q}}{\partial t} = 0,
+\]
+where
+\begin{defi}[Potential vorticity]
+ The \emph{potential vorticity} is
+ \[
+ \mathbf{Q} = \boldsymbol\zeta - \frac{\eta}{h_0}\mathbf{f},
+ \]
+\end{defi}
+and this is conserved.
+
+Hence given any initial condition, we can compute $\mathbf{Q}(x, y, 0) = \mathbf{Q}_0$. Then we have
+\[
+ \mathbf{Q}(x, y, t) = \mathbf{Q}_0
+\]
+for all time.
+
+How can we make use of this? We start by taking the divergence of $(*)$ above to get
+\[
+ \frac{\partial}{\partial t}(\nabla \cdot \mathbf{u}) - \mathbf{f} \cdot \nabla \times \mathbf{u} = - g \nabla^2 \eta,
+\]
+and use $(\dagger)$ to substitute
+\[
+ \nabla \cdot \mathbf{u} = -\frac{1}{h_0} \frac{\partial \eta}{\partial t}.
+\]
+We then get
+\[
+ -\frac{1}{h_0} \frac{\partial^2 \eta}{\partial t^2} - \mathbf{f}\cdot \boldsymbol\zeta = -g\nabla^2 \eta.
+\]
+We now use the conservation of potential vorticity, namely
+\[
+ \boldsymbol\zeta = \mathbf{Q}_0 + \frac{\eta}{h_0}\mathbf{f},
+\]
+to rewrite this as
+\[
+ \frac{\partial^2 \eta}{\partial t^2} - gh_0 \nabla^2 \eta + \mathbf{f}\cdot \mathbf{f} \eta = - h_0 \mathbf{f}\cdot \mathbf{Q}_0.
+\]
+Note that the right hand side is just a constant (in time). So we have a nice differential equation we can solve.
+
+\begin{eg}
+ Suppose we have fluid with mean depth $h_0$, and we start with the following scenario:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (-3, 0) rectangle (3, -1);
+
+ \draw [dashed] (-3, 2) -- (3, 2) node [right] {$h_0$};
+ \fill [mblue, opacity=0.5] (-3, 0) -- (-3, 1.5) -- (0, 1.5) -- (0, 2.5) -- (3, 2.5) -- (3, 0) -- cycle;
+
+ \draw [latex'-latex'] (2, 2) -- +(0, 0.5) node [right, pos=0.5] {$\eta_0$};
+ \draw [latex'-latex'] (-2, 2) -- +(0, -0.5) node [right, pos=0.5] {$\eta_0$};
+
+ \draw [->] (0, 0) -- (0, 3) node [above] {$z$};
+ \end{tikzpicture}
+ \end{center}
+ Due to the differences in height, we have higher pressure on the right and lower pressure on the left.
+
+ If there is no rotation, then the final state is a flat surface with no flow. However, this cannot be the case if there is rotation, since this violates the conservation of $\mathbf{Q}$. So what happens if there is rotation?
+
+ At the beginning, there is no movement. So we have $\boldsymbol\zeta(t = 0) = 0$. Thus we have
+ \[
+ Q_0 =
+ \begin{cases}
+ -\frac{\eta_0}{h_0}f & x > 0\\
+ \frac{\eta_0}{h_0}f & x < 0
+ \end{cases}.
+ \]
+ We seek the final steady state such that
+ \[
+ \frac{\partial \eta}{\partial t} = 0.
+ \]
+ We further assume that the final solution is independent of $y$, so
+ \[
+ \frac{\partial \eta}{\partial y} = 0.
+ \]
+ So $\eta = \eta(x)$ is just a function of $x$. Our equation then says
+ \[
+ \frac{\partial^2 \eta}{\partial x^2} - \frac{f^2}{gh_0} \eta = \frac{f}{g}Q_0 = \mp \frac{f^2}{gh_0}\eta_0.
+ \]
+ It is convenient to define a new variable
+ \[
+ R = \frac{\sqrt{gh_0}}{f},
+ \]
+ which is a length scale. We know $\sqrt{gh_0}$ is the fastest possible wave speed, and thus $R$ is how far a wave can travel in one rotation period. We rewrite our equation as
+ \[
+ \frac{\d^2 \eta}{\d x^2} - \frac{1}{R^2}\eta = \mp \frac{1}{R^2}\eta_0.
+ \]
+ \begin{defi}[Rossby radius of deformation]
+ The length scale
+ \[
+ R = \frac{\sqrt{gh_0}}{f}
+ \]
+ is the \emph{Rossby radius of deformation}.
+ \end{defi}
+ This is the fundamental length scale to use in rotating systems when gravity is involved as well.
+
+ We now impose our boundary conditions. We require $\eta \to \pm \eta_0$ as $x \to \pm \infty$. We also require $\eta$ and $\frac{\d \eta}{\d x}$ to be continuous at $x = 0$.
+
+ The solution is
+ \[
+ \eta = \eta_0
+ \begin{cases}
+ 1 - e^{-x/R} & x > 0\\
+ -(1 - e^{x/R}) & x < 0
+ \end{cases}.
+ \]
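 As a sanity check (not part of the notes), we can verify numerically that this piecewise profile satisfies $\eta'' - \eta/R^2 = \mp \eta_0/R^2$ and is continuously differentiable at $x = 0$, using finite differences. The values of $\eta_0$ and $R$ below are arbitrary.

```python
import math

eta0, R = 1.0, 2.0  # arbitrary amplitude and Rossby radius

def eta(x):
    # the piecewise geostrophic adjustment profile
    if x > 0:
        return eta0 * (1 - math.exp(-x / R))
    return -eta0 * (1 - math.exp(x / R))

def residual(x, h=1e-4):
    # central-difference second derivative minus the ODE right hand side
    d2 = (eta(x + h) - 2 * eta(x) + eta(x - h)) / h**2
    rhs = -eta0 / R**2 if x > 0 else eta0 / R**2
    return d2 - eta(x) / R**2 - rhs

# residual should vanish away from x = 0, on both sides
print(max(abs(residual(x)) for x in [0.5, 1.0, -0.5, -3.0]))
# slope from either side should agree with eta0 / R at x = 0
print((eta(1e-6) - eta(-1e-6)) / 2e-6)
```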
+ We can see that this looks quite different from the non-rotating case. It looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, path fading=south] (-3, 0) rectangle (3, -1);
+ \draw [->] (0, 0) -- (0, 3) node [above] {$z$};
+
+ \draw [dashed] (-3, 2) -- (3, 2) node [right] {$h_0$};
+ \fill [mblue, opacity=0.5] (-3, 0) -- (-3, 1.5) .. controls (1, 1.5) and (-1, 2.5) .. (3, 2.5) -- (3, 0) -- cycle;
+ \end{tikzpicture}
+ \end{center}
+ The horizontal length scale involved is $2R$.
+
+ We now look at the velocities. Using the steady flow equations, we have
+ \begin{align*}
+ u &= -\frac{g}{f} \frac{\partial \eta}{\partial y} = 0\\
+ v &= \frac{g}{f} \frac{\partial\eta}{\partial x} = \eta_0 \sqrt{\frac{g}{h_0}} e^{-|x|/R}.
+ \end{align*}
+ So there is still flow in this system: a flow in the $y$ direction, into the paper. This flow gives a Coriolis force to the right, which balances the pressure gradient of the system.
+
+ The final state is not one of rest, but one with motion in which the Coriolis force balances the pressure gradient. This is geostrophic flow.
+\end{eg}
+
+Going back to our pressure maps, if we have high and low pressure systems, we can have flows that look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \node {L};
+
+ \node at (4, 0) {H};
+
+ \draw [->-=0.5] circle [radius=1];
+ \draw [-<-=0.5] (4, 0) circle [radius=1];
+
+ \draw (1, -1.5) edge [out=60,in=-60, ->-=0.5] (1, 1.5);
+ \draw (3, -1.5) edge [out=120,in=-120, ->-=0.5] (3, 1.5);
+
+ \draw [->-=0.5] (2, -1.5) -- (2, 1.5);
+ \end{tikzpicture}
+\end{center}
+Then the Coriolis force will balance the pressure gradients.
+
+So weather maps describe balanced flows. We can compute the scales here. In the atmosphere, we have approximately
+\[
+ R \approx \frac{\sqrt{10 \cdot 10^3}}{10^{-4}}\si{\meter} = \SI{e6}{\meter} = \SI{1000}{\kilo\meter}.
+\]
+So the scales of cyclones are approximately $\SI{1000}{\kilo\meter}$.
+
+On the other hand, in the ocean, we have
+\[
+ R \approx \frac{\sqrt{10 \cdot 10}}{10^{-4}}\si{\meter} = \SI{e5}{\meter} = \SI{100}{\kilo\meter}.
+\]
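These two estimates can be reproduced with the same round numbers, taking $f \sim \SI{e-4}{\per\second}$ at mid-latitudes; the depths used ($\SI{1}{\kilo\meter}$ for the atmosphere, $\SI{10}{\meter}$ for the ocean) are the effective values quoted above.

```python
import math

# Rossby radius R = sqrt(g * h0) / f with order-of-magnitude inputs
f, g = 1e-4, 10.0  # mid-latitude Coriolis parameter (s^-1), gravity (m s^-2)

R_atmosphere = math.sqrt(g * 1e3) / f   # effective depth h0 ~ 1 km
R_ocean = math.sqrt(g * 10) / f         # effective depth h0 ~ 10 m

print(f"atmosphere: R ~ {R_atmosphere / 1e3:.0f} km")  # ~1000 km
print(f"ocean:      R ~ {R_ocean / 1e3:.0f} km")       # ~100 km
```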
+So ocean scales are much smaller than atmospheric scales.
+\end{document}
diff --git a/books/cam/IB_L/geometry.tex b/books/cam/IB_L/geometry.tex
new file mode 100644
index 0000000000000000000000000000000000000000..37028cab48ad34a57f9e30d172e6cfa8da65f269
--- /dev/null
+++ b/books/cam/IB_L/geometry.tex
@@ -0,0 +1,3067 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Lent}
+\def\nyear {2016}
+\def\nlecturer {A.\ G.\ Kovalev}
+\def\ncourse {Geometry}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Parts of Analysis II will be found useful for this course.}
+\vspace{10pt}
+
+\noindent Groups of rigid motions of Euclidean space. Rotation and reflection groups in two and three dimensions. Lengths of curves.\hspace*{\fill} [2]
+
+\vspace{5pt}
+\noindent Spherical geometry: spherical lines, spherical triangles and the Gauss-Bonnet theorem. Stereographic projection and M\"obius transformations.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Triangulations of the sphere and the torus, Euler number.\hspace*{\fill} [1]
+
+\vspace{5pt}
+\noindent Riemannian metrics on open subsets of the plane. The hyperbolic plane. Poincar\'e models and their metrics. The isometry group. Hyperbolic triangles and the Gauss-Bonnet theorem. The hyperboloid model.\hspace*{\fill} [4]
+
+\vspace{5pt}
+\noindent Embedded surfaces in $\R^3$. The first fundamental form. Length and area. Examples.\hspace*{\fill} [1]
+
+\vspace{5pt}
+\noindent Length and energy. Geodesics for general Riemannian metrics as stationary points of the energy. First variation of the energy and geodesics as solutions of the corresponding Euler-Lagrange equations. Geodesic polar coordinates (informal proof of existence). Surfaces of revolution.\hspace*{\fill} [2]
+
+\vspace{5pt}
+\noindent The second fundamental form and Gaussian curvature. For metrics of the form $du^2 + G(u, v) dv^2$, expression of the curvature as $\sqrt{G_{uu}}/\sqrt{G}$. Abstract smooth surfaces and isometries. Euler numbers and statement of Gauss-Bonnet theorem, examples and applications.\hspace*{\fill} [3]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+In the very beginning, Euclid came up with the axioms of geometry, one of which is the \emph{parallel postulate}. This says that given any point $P$ and a line $\ell$ not containing $P$, there is a unique line through $P$ that does not intersect $\ell$. Unlike the other axioms Euclid had, this was not seen as ``obvious''. For many years, geometers tried hard to prove this axiom from the others, but failed.
+
+Eventually, people realized that this axiom \emph{cannot} be proved from the others. There exist other ``geometries'' in which the other axioms hold, but the parallel postulate fails. This was known as \emph{hyperbolic geometry}. Together with Euclidean geometry and spherical geometry (which is the geometry of the surface of a sphere), these constitute the three classical geometries. We will study these geometries in detail, and see that they actually obey many similar properties, while being strikingly different in other respects.
+
+That is not the end. There is no reason why we have to restrict ourselves to these three types of geometry. In later parts of the course, we will massively generalize the notions we began with and eventually define an \emph{abstract smooth surface}. This covers all three classical geometries, and many more!
+
+\section{Euclidean geometry}
+We are first going to look at Euclidean geometry. Roughly speaking, this is the geometry of the familiar $\R^n$ under the usual inner product. There really isn't much to say, since we are already quite familiar with Euclidean geometry. We will quickly look at isometries of $\R^n$ and curves in $\R^n$. Afterwards, we will try to develop analogous notions in other more complicated geometries.
+
+\subsection{Isometries of the Euclidean plane}
+The purpose of this section is to study maps on $\R^n$ that preserve distances, i.e.\ \emph{isometries} of $\R^n$. Before we begin, we define the notion of distance on $\R^n$ in the usual way.
+
+\begin{defi}[(Standard) inner product]
+ The \emph{(standard) inner product} on $\R^n$ is defined by
+ \[
+ (\mathbf{x}, \mathbf{y}) = \mathbf{x}\cdot \mathbf{y} = \sum_{i = 1}^n x_i y_i.
+ \]
+\end{defi}
+
+\begin{defi}[Euclidean Norm]
+ The \emph{Euclidean norm} of $\mathbf{x} \in \R^n$ is
+ \[
+ \|\mathbf{x}\| = \sqrt{(\mathbf{x}, \mathbf{x})}.
+ \]
+ This defines a metric on $\R^n$ by
+ \[
+ d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|.
+ \]
+\end{defi}
+
+Note that the inner product and the norm both depend on our choice of origin, but the distance does not. In general, we don't like having a choice of origin --- choosing the origin is just to provide a (very) convenient way labelling points. The origin should not be a special point (in theory). In fancy language, we say we view $\R^n$ as an \emph{affine space} instead of a \emph{vector space}.
+
+\begin{defi}[Isometry]
+ A map $f: \R^n \to \R^n$ is an \emph{isometry} of $\R^n$ if
+ \[
+ d(f(\mathbf{x}), f(\mathbf{y})) = d(\mathbf{x}, \mathbf{y})
+ \]
+ for all $\mathbf{x}, \mathbf{y} \in \R^n$.
+\end{defi}
+Note that $f$ is not required to be linear. This is since we are viewing $\R^n$ as an affine space, and linearity only makes sense if we have a specified point as the origin. Nevertheless, we will still view the linear isometries as ``special'' isometries, since they are more convenient to work with, despite not being special fundamentally.
+
+Our current objective is to classify \emph{all} isometries of $\R^n$. We start with the linear isometries. Recall the following definition:
+\begin{defi}[Orthogonal matrix]
+ An $n \times n$ matrix $A$ is \emph{orthogonal} if $AA^T = A^T A = I$. The group of all orthogonal matrices is the orthogonal group $\Or(n)$.
+\end{defi}
+
+In general, for any matrix $A$ and $\mathbf{x}, \mathbf{y} \in \R^n$, we get
+\[
+ (A\mathbf{x}, A \mathbf{y}) = (A\mathbf{x})^T (A \mathbf{y}) = \mathbf{x}^T A^T A \mathbf{y} = (\mathbf{x}, A^T A \mathbf{y}).
+\]
+So $A$ is orthogonal if and only if $(A\mathbf{x}, A\mathbf{y}) = (\mathbf{x}, \mathbf{y})$ for all $\mathbf{x}, \mathbf{y} \in \R^n$.
+
+Recall also that the inner product can be expressed in terms of the norm by
+\[
+ (\mathbf{x}, \mathbf{y}) = \frac{1}{2}(\|\mathbf{x} + \mathbf{y}\|^2 - \|\mathbf{x}\|^2 -\|\mathbf{y}\|^2).
+\]
+So if $A$ preserves norm, then it preserves the inner product, and the converse is obviously true. So $A$ is orthogonal if and only if $\|A\mathbf{x}\| = \|\mathbf{x}\|$ for all $\mathbf{x} \in \R^n$. Hence a matrix is orthogonal if and only if it is a (linear) isometry.
+
+More generally, let
+\[
+ f(\mathbf{x}) = A\mathbf{x} + \mathbf{b}.
+\]
+Then
+\[
+ d(f(\mathbf{x}), f(\mathbf{y})) = \|A(\mathbf{x} - \mathbf{y})\|.
+\]
+So any $f$ of this form is an isometry if and only if $A$ is orthogonal. This is not too surprising. What might not be expected is that all isometries are of this form.
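A quick numerical illustration of this claim, with an arbitrarily chosen rotation $A$ and translation $\mathbf{b}$ (the specific values below are just for the example):

```python
import math

theta = 0.7  # arbitrary rotation angle
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]  # orthogonal (a rotation)
b = [3.0, -1.0]                            # arbitrary translation

def f(x):
    # f(x) = A x + b
    return [A[0][0] * x[0] + A[0][1] * x[1] + b[0],
            A[1][0] * x[0] + A[1][1] * x[1] + b[1]]

def dist(x, y):
    return math.hypot(x[0] - y[0], x[1] - y[1])

x, y = [1.0, 2.0], [-2.0, 0.5]
print(dist(f(x), f(y)), dist(x, y))  # the two distances agree
```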
+
+\begin{thm}
+ Every isometry of $f: \R^n \to \R^n$ is of the form
+ \[
+ f(\mathbf{x}) = A\mathbf{x} + \mathbf{b}.
+ \]
+ for $A$ orthogonal and $\mathbf{b} \in \R^n$.
+\end{thm}
+
+\begin{proof}
+ Let $f$ be an isometry. Let $\mathbf{e}_1, \cdots, \mathbf{e}_n$ be the standard basis of $\R^n$. Let
+ \[
+ \mathbf{b} = f(\mathbf{0}), \quad \mathbf{a}_i = f(\mathbf{e}_i) - \mathbf{b}.
+ \]
+ The idea is to construct our matrix $A$ out of these $\mathbf{a}_i$. For $A$ to be orthogonal, $\{\mathbf{a}_i\}$ must be an orthonormal basis.
+
+ Indeed, we can compute
+ \[
+ \|\mathbf{a}_i\| = \|f(\mathbf{e}_i) - f(\mathbf{0})\| = d(f(\mathbf{e}_i), f(\mathbf{0})) = d(\mathbf{e}_i, \mathbf{0}) = \|\mathbf{e}_i\| = 1.
+ \]
+ For $i \not = j$, we have
+ \begin{align*}
+ (\mathbf{a}_i, \mathbf{a}_j) &= -(\mathbf{a}_i, -\mathbf{a}_j) \\
+ &=-\frac{1}{2}(\|\mathbf{a}_i - \mathbf{a}_j\|^2 - \|\mathbf{a}_i\|^2 - \|\mathbf{a}_j\|^2)\\
+ &= -\frac{1}{2}(\|f(\mathbf{e}_i) - f(\mathbf{e}_j)\|^2 - 2)\\
+ &= -\frac{1}{2}(\|\mathbf{e}_i - \mathbf{e}_j\|^2 - 2)\\
+ &= 0
+ \end{align*}
+ So $\mathbf{a}_i$ and $\mathbf{a}_j$ are orthogonal. In other words, $\{\mathbf{a}_i\}$ forms an orthonormal set. It is an easy result that any orthonormal set must be linearly independent. Since we have found $n$ orthonormal vectors, they form an orthonormal basis.
+
+ Hence, the matrix $A$ with columns given by the column vectors $\mathbf{a}_i$ is an orthogonal matrix. We define a new isometry
+ \[
+ g(\mathbf{x}) = A\mathbf{x} + \mathbf{b}.
+ \]
+ We want to show $f = g$. By construction, we know $g(\mathbf{x}) = f(\mathbf{x})$ is true for $\mathbf{x} = \mathbf{0}, \mathbf{e}_1, \cdots, \mathbf{e}_n$.
+
+ We observe that $g$ is invertible. In particular,
+ \[
+ g^{-1}(\mathbf{x}) = A^{-1}(\mathbf{x} - \mathbf{b}) = A^T \mathbf{x} - A^T\mathbf{b}.
+ \]
+ Moreover, it is an isometry, since $A^T$ is orthogonal (or we can appeal to the more general fact that inverses of isometries are isometries).
+
+ We define
+ \[
+ h = g^{-1}\circ f.
+ \]
+ Since it is a composition of isometries, it is also an isometry. Moreover, it fixes $\mathbf{x} = \mathbf{0}, \mathbf{e}_1, \cdots, \mathbf{e}_n$.
+
+ It thus suffices to prove that $h$ is the identity.
+
+ Let $\mathbf{x} \in \R^n$, and expand it in the basis as
+ \[
+ \mathbf{x} = \sum_{i = 1}^n x_i \mathbf{e}_i.
+ \]
+ Let
+ \[
+ \mathbf{y} = h(\mathbf{x}) = \sum_{i = 1}^n y_i \mathbf{e}_i.
+ \]
+ We can compute
+ \begin{align*}
+ d(\mathbf{x}, \mathbf{e}_i)^2 &= (\mathbf{x} - \mathbf{e}_i, \mathbf{x} - \mathbf{e}_i) = \|\mathbf{x}\|^2 + 1 - 2 x_i\\
+ d(\mathbf{x}, \mathbf{0})^2 &= \|\mathbf{x}\|^2.
+ \end{align*}
+ Similarly, we have
+ \begin{align*}
+ d(\mathbf{y}, \mathbf{e}_i)^2 &= (\mathbf{y} - \mathbf{e}_i, \mathbf{y} - \mathbf{e}_i) = \|\mathbf{y}\|^2 + 1 - 2 y_i\\
+ d(\mathbf{y}, \mathbf{0})^2 &= \|\mathbf{y}\|^2.
+ \end{align*}
+ Since $h$ is an isometry and fixes $\mathbf{0}, \mathbf{e}_1, \cdots, \mathbf{e}_n$, and by definition $h(\mathbf{x}) = \mathbf{y}$, we must have
+ \[
+ d(\mathbf{x}, \mathbf{0}) = d(\mathbf{y}, \mathbf{0}), \quad d(\mathbf{x}, \mathbf{e}_i) = d(\mathbf{y}, \mathbf{e}_i).
+ \]
+ The first equality gives $\|\mathbf{x}\|^2 = \|\mathbf{y}\|^2$, and the others then imply $x_i = y_i$ for all $i$. In other words, $\mathbf{x} = \mathbf{y} = h(\mathbf{x})$. So $h$ is the identity.
+\end{proof}
+
+We now collect all our isometries into a group, for convenience.
+\begin{defi}[Isometry group]
+ The \emph{isometry group} $\Isom(\R^n)$ is the group of all isometries of $\R^n$, which is a group by composition.
+\end{defi}
+
+\begin{eg}[Reflections in an affine hyperplane]
+ Let $H \subseteq \R^n$ be an affine hyperplane given by
+ \[
+ H = \{\mathbf{x} \in \R^n: \mathbf{u} \cdot \mathbf{x} = c\},
+ \]
+ where $\|\mathbf{u}\| = 1$ and $c \in \R$. This is just a natural generalization of a 2-dimensional plane in $\R^3$. Note that unlike a vector subspace, it does not have to contain the origin (since the origin is not a special point).
+
+ Reflection in $H$, written $R_H$, is the map
+ \begin{align*}
+ R_H: \R^n &\to \R^n\\
+ \mathbf{x} &\mapsto \mathbf{x} - 2(\mathbf{x} \cdot \mathbf{u} - c)\mathbf{u}
+ \end{align*}
+ It is an exercise in the example sheet to show that this is indeed an isometry.
+
+ We now check this is indeed what we think a reflection should be. Note that every point in $\R^n$ can be written as $\mathbf{a} + t\mathbf{u}$, where $\mathbf{a} \in H$. Then the reflection should send this point to $\mathbf{a} - t\mathbf{u}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, -2) {};
+ \node [anchor = north east] at (0, -2) {$\mathbf{0}$};
+ \draw [->] (0, -2) -- (3.5, 1) node [right] {$\mathbf{a}$};
+ \draw [->] (3.5, 1) -- +(0, -1.5) node [below] {$\mathbf{a} - t\mathbf{u}$};
+ \draw [fill=mblue, fill opacity=0.7] (0, 0) -- (5, 0) -- (7, 2) node [right] {$H$} -- (2, 2) -- cycle;
+ \draw [->] (3.5, 1) -- +(0, 1.5) node [above] {$\mathbf{a} + t\mathbf{u}$};
+ \end{tikzpicture}
+ \end{center}
+ This is a routine check:
+ \[
+ R_H (\mathbf{a} + t\mathbf{u}) = (\mathbf{a} + t\mathbf{u}) - 2t\mathbf{u} = \mathbf{a} - t\mathbf{u}.
+ \]
+ In particular, we know $R_H$ fixes exactly the points of $H$.
+
+ The converse is also true --- any isometry $S \in \Isom(\R^n)$ that fixes the points in some affine hyperplane $H$ is either the identity or $R_H$.
+
+ To show this, we first want to translate the plane such that it becomes a vector subspace. Then we can use our linear algebra magic. For any $\mathbf{a} \in \R^n$, we can define the translation by $\mathbf{a}$ as
+ \[
+ T_{\mathbf{a}}(\mathbf{x}) = \mathbf{x} + \mathbf{a}.
+ \]
+ This is clearly an isometry.
+
+ We pick an arbitrary $\mathbf{a} \in H$, and let $R = T_{-\mathbf{a}} S T_\mathbf{a} \in \Isom(\R^n)$. Then $R$ fixes exactly $H' = T_{-\mathbf{a}} H$. Since $\mathbf{0} \in H'$, $H'$ is a vector subspace. Indeed, if $H = \{\mathbf{x}: \mathbf{x}\cdot \mathbf{u} = c\}$, then since $\mathbf{a} \in H$ means $c = \mathbf{a}\cdot \mathbf{u}$, we find
+ \[
+ H' = \{\mathbf{x}: \mathbf{x}\cdot \mathbf{u} = 0\}.
+ \]
+ To understand $R$, we already know it fixes everything in $H'$. So we want to see what it does to $\mathbf{u}$. Note that since $R$ is an isometry and fixes the origin, it is in fact an orthogonal map. Hence for any $\mathbf{x} \in H'$, we get
+ \[
+ (R\mathbf{u}, \mathbf{x}) = (R\mathbf{u}, R\mathbf{x}) = (\mathbf{u}, \mathbf{x}) = 0.
+ \]
+ So $R\mathbf{u}$ is also perpendicular to $H'$. Hence $R\mathbf{u} = \lambda \mathbf{u}$ for some $\lambda$. Since $R$ is an isometry, we have $\|R\mathbf{u}\|^2 = 1$. Hence $|\lambda|^2 = 1$, and thus $\lambda = \pm 1$. So either $\lambda = 1$, and $R = \id$; or $\lambda = -1$, and $R = R_{H'}$, as we already know for orthogonal matrices.
+
+ It thus follows that $S = \id_{\R^n}$, or $S$ is the reflection in $H$.
+
+ Thus we find that each reflection $R_H$ is the unique isometry, other than the identity, that fixes $H$ pointwise.
+\end{eg}
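The reflection formula $R_H(\mathbf{x}) = \mathbf{x} - 2(\mathbf{x}\cdot \mathbf{u} - c)\mathbf{u}$ can be tried out numerically. The choices of $\mathbf{u}$, $c$ and the sample point below are arbitrary; here $H$ is the plane $z = 2$ in $\R^3$.

```python
u = [0.0, 0.0, 1.0]   # unit normal to H
c = 2.0               # H = {x : x . u = c}, i.e. the plane z = 2

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def reflect(x):
    # R_H(x) = x - 2 (x . u - c) u
    t = 2 * (dot(x, u) - c)
    return [xi - t * ui for xi, ui in zip(x, u)]

p = [1.0, -4.0, 5.0]
print(reflect(p))            # z-coordinate 5 maps to 2 - (5 - 2) = -1
print(reflect(reflect(p)))   # R_H is an involution, so we get p back
```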
+It is an exercise in the example sheet to show that every isometry of $\R^n$ is a composition of at most $n + 1$ reflections. If the isometry fixes $0$, then $n$ reflections will suffice.
+
+Consider the subgroup of $\Isom(\R^n)$ that fixes $\mathbf{0}$. By our general expression for the general isometry, we know this is the set $\{f(\mathbf{x}) = A\mathbf{x}: A A^T = I\} \cong \Or(n)$, the orthogonal group.
+
+For each $A \in \Or(n)$, we must have $\det(A)^2 = 1$. So $\det A = \pm 1$. We use this to define a further subgroup, the special orthogonal group.
+\begin{defi}[Special orthogonal group]
+ The \emph{special orthogonal group} is the group
+ \[
+ \SO(n) = \{A \in \Or(n):\det A = 1\}.
+ \]
+\end{defi}
+
+We can look at these explicitly for low dimensions.
+\begin{eg}
+ Consider
+ \[
+ A =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \Or(2)
+ \]
+ Orthogonality then requires
+ \[
+ a^2 + c^2 = b^2 + d^2 = 1,\quad ab + cd = 0.
+ \]
+ Now we pick $0 \leq \theta, \varphi \leq 2\pi$ such that
+ \begin{align*}
+ a &= \cos \theta & b &= -\sin \varphi\\
+ c &= \sin \theta & d &= \cos \varphi.
+ \end{align*}
+ Then $ab + cd = 0$ gives $\tan \theta = \tan \varphi$ (if $\cos \theta$ and $\cos \varphi$ are zero, we formally say these are both infinity). So either $\theta = \varphi$ or $\theta = \varphi \pm \pi$. Thus we have
+ \[
+ A=
+ \begin{pmatrix}
+ \cos \theta & -\sin \theta\\
+ \sin \theta & \cos \theta
+ \end{pmatrix}\text{ or }
+ A =
+ \begin{pmatrix}
+ \cos \theta & \sin \theta\\
+ \sin \theta & -\cos \theta
+ \end{pmatrix}
+ \]
+ respectively. In the first case, this is a rotation through $\theta$ about the origin. This has determinant $1$, and hence $A \in \SO(2)$.
+
+ In the second case, this is a reflection in the line $\ell$ at angle $\frac{\theta}{2}$ to the $x$-axis. Then $\det A = -1$ and $A \not\in \SO(2)$.
+
+ So in two dimensions, the orthogonal matrices are either reflections or rotations --- those in $\SO(2)$ are rotations, and the others are reflections.
+\end{eg}
+Before we can move on to three dimensions, we need to have the notion of orientation. We might intuitively know what an orientation is, but it is rather difficult to define orientation formally. The best we can do is to tell whether two given bases of a vector space have ``the same orientation''. Thus, it would make sense to define an orientation as an equivalence class of bases of ``the same orientation''. We formally define it as follows:
+\begin{defi}[Orientation]
+ An \emph{orientation} of a vector space is an equivalence class of bases --- let $\mathbf{v}_1, \cdots, \mathbf{v}_n$ and $\mathbf{v}_1', \cdots, \mathbf{v}_n'$ be two bases and $A$ be the change of basis matrix. We say the two bases are equivalent iff $\det A > 0$. This is an equivalence relation on the bases, and the equivalence classes are the orientations.
+\end{defi}
+
+\begin{defi}[Orientation-preserving isometry]
+ An isometry $f(\mathbf{x}) = A\mathbf{x} + \mathbf{b}$ is \emph{orientation-preserving} if $\det A = 1$. Otherwise, if $\det A = -1$, we say it is \emph{orientation-reversing}.
+\end{defi}
+
+\begin{eg}
+ We now want to look at $\Or(3)$. First focus on the case where $A \in \SO(3)$, i.e.\ $\det A = 1$. Then we can compute
+ \[
+ \det(A - I) = \det(A^T - I) = \det(A)\det(A^T - I) = \det(I - A) = -\det(A - I).
+ \]
+ So $\det (A - I) = 0$, i.e.\ $+1$ is an eigenvalue in $\R$. So there is some $\mathbf{v}_1 \in \R^3$ such that $A\mathbf{v}_1 = \mathbf{v}_1$.
+
+ We set $W = \bra \mathbf{v}_1\ket^{\perp}$. Let $\mathbf{w} \in W$. Then we can compute
+ \[
+ (A\mathbf{w}, \mathbf{v}_1) = (A\mathbf{w}, A\mathbf{v}_1) = (\mathbf{w}, \mathbf{v}_1) = 0.
+ \]
+ So $A\mathbf{w} \in W$. In other words, $W$ is fixed by $A$, and $A|_{W}: W \to W$ is well-defined. Moreover, it is still orthogonal and has determinant $1$. So it is a rotation of the two-dimensional vector space $W$.
+
+ We choose $\{\mathbf{v}_2, \mathbf{v}_3\}$ an orthonormal basis of $W$. Then under the bases $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$, $A$ is represented by
+ \[
+ A =
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & \cos \theta & - \sin \theta\\
+ 0 & \sin \theta & \cos \theta
+ \end{pmatrix}.
+ \]
+ This is the most general orientation-preserving isometry of $\R^3$ that fixes the origin.
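 A small numeric illustration (not in the notes) of the eigenvalue-$1$ argument: composing two coordinate rotations gives a fairly generic $A \in \SO(3)$, and we can recover its fixed axis as a cross product of two rows of $A - I$, since that vector is orthogonal to the rows and hence lies in the kernel.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a, b = 0.9, 0.4  # arbitrary rotation angles
Rz = [[math.cos(a), -math.sin(a), 0], [math.sin(a), math.cos(a), 0], [0, 0, 1]]
Rx = [[1, 0, 0], [0, math.cos(b), -math.sin(b)], [0, math.sin(b), math.cos(b)]]
A = matmul(Rz, Rx)  # a generic element of SO(3)

M = [[A[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]
# axis = row0 x row1 of (A - I); since det(A - I) = 0 this is in ker(A - I)
v = [M[0][1] * M[1][2] - M[0][2] * M[1][1],
     M[0][2] * M[1][0] - M[0][0] * M[1][2],
     M[0][0] * M[1][1] - M[0][1] * M[1][0]]
Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
print(max(abs(Av[i] - v[i]) for i in range(3)))  # A v = v up to rounding
```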
+
+ How about the orientation-reversing ones?
+ Suppose $\det A = -1$. Then $\det(-A) = 1$. So in some orthonormal basis, we can express $A$ as
+ \[
+ -A =
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & \cos \theta & - \sin \theta\\
+ 0 & \sin \theta & \cos \theta
+ \end{pmatrix}.
+ \]
+ So $A$ takes the form
+ \[
+ A =
+ \begin{pmatrix}
+ -1 & 0 & 0\\
+ 0 & \cos \varphi & -\sin \varphi\\
+ 0 & \sin \varphi & \cos \varphi
+ \end{pmatrix},
+ \]
+ where $\varphi = \theta + \pi$. This is a rotated reflection, i.e.\ we first do a reflection, then rotation. In the special case where $\varphi = 0$, this is a pure reflection.
+\end{eg}
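As a quick numerical sanity check (my addition, not part of the notes), we can build a rotation about an arbitrary axis via Rodrigues' formula (the helper names `rodrigues` and `det3` are hypothetical) and confirm the two facts used above: $\det(A - I) = 0$, so $+1$ is an eigenvalue, and the axis direction is fixed by $A$.

```python
import math

def rodrigues(axis, theta):
    """Rotation matrix about a unit axis by angle theta (Rodrigues' formula)."""
    x, y, z = axis
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

n = 1 / math.sqrt(3)
A = rodrigues((n, n, n), 0.7)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
AmI = [[A[i][j] - I[i][j] for j in range(3)] for i in range(3)]
print(abs(det3(AmI)) < 1e-12)    # det(A - I) = 0, so +1 is an eigenvalue

v = [n, n, n]
Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
print(all(abs(Av[i] - v[i]) < 1e-12 for i in range(3)))  # the axis is fixed
```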
+That's all we're going to say about isometries.
+
+\subsection{Curves in \tph{$\R^n$}{Rn}{&\#x211D;n}}
+Next we will briefly look at curves in $\R^n$. This will be very brief, since curves in $\R^n$ aren't too interesting. The most interesting fact might be that the shortest curve between two points is a straight line, but we aren't even proving that, because it is left for the example sheet.
+
+\begin{defi}[Curve]
+ A \emph{curve} $\Gamma$ in $\R^n$ is a continuous map $\Gamma: [a, b] \to \R^n$.
+\end{defi}
+Here we can think of the curve as the trajectory of a particle moving through time. Our main objective of this section is to define the length of a curve. We might want to define the length as
+\[
+ \int_a^b \|\Gamma'(t)\|\;\d t,
+\]
+as is familiar from, say, IA Vector Calculus. However, we can't do this, since our definition of a curve does not require $\Gamma$ to be continuously differentiable. It is merely required to be continuous. Hence we have to define the length in a more roundabout way.
+
+Similar to the definition of the Riemann integral, we consider a dissection $\mathcal{D}: a = t_0 < t_1 < \cdots < t_N = b$ of $[a, b]$, and set $P_i = \Gamma(t_i)$. We define
+\[
+ S_\mathcal{D} = \sum_i \|\overrightarrow{P_i P_{i + 1}}\|.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] (0) at (0, 0) {};
+ \node [circ] (1) at (0.5, 1) {};
+ \node [circ] (2) at (1.3, 0.9) {};
+ \node [circ] (3) at (1.6, 0.2) {};
+ \node [circ] (4) at (2.4, 0.2) {};
+ \node [circ] (5) at (3, 1.2) {};
+ \node [circ] (6) at (5, 0.5) {};
+
+ \node [left] at (0) {$P_0$};
+ \node [left] at (1) {$P_1$};
+ \node [right] at (2) {$P_2$};
+ \node [above] at (5) {$P_{N - 1}$};
+ \node [right] at (6) {$P_N$};
+
+ \draw plot [smooth, tension=0.6] coordinates {(0) (1) (1, 1.6) (2) (3) (2, -0.4) (4) (5) (6)};
+ \draw [mred] (0) -- (1) -- (2) -- (3);
+ \draw [mred, dotted] (3) -- (4);
+ \draw [mred] (4) -- (5) -- (6);
+ \end{tikzpicture}
+\end{center}
+Notice that if we add more points to the dissection, then $S_\mathcal{D}$ will necessarily increase, by the triangle inequality. So it makes sense to define the length as the following supremum:
+\begin{defi}[Length of curve]
+ The length of a curve $\Gamma: [a, b] \to \R^n$ is
+ \[
+ \ell = \sup_{\mathcal{D}} S_{\mathcal{D}},
+ \]
+ if the supremum exists.
+\end{defi}
+Alternatively, if we let
+\[
+ \mathrm{mesh}(\mathcal{D})= \max_i (t_i - t_{i - 1}),
+\]
+then, if $\ell$ exists, we have
+\[
+ \ell = \lim_{\mathrm{mesh}(\mathcal{D}) \to 0} S_{\mathcal{D}}.
+\]
+Note also that by definition, we can write
+\[
+ \ell = \inf\{\tilde{\ell}: \tilde{\ell} \geq S_{\mathcal{D}}\text{ for all }\mathcal{D}\}.
+\]
+The definition by itself isn't too helpful, since there is no nice and easy way to check if the supremum exists. However, differentiability allows us to compute this easily in the expected way.
+\begin{prop}
+ If $\Gamma$ is continuously differentiable (i.e.\ $C^1$), then the length of $\Gamma$ is given by
+ \[
+ \length(\Gamma) = \int_a^b \|\Gamma'(t)\|\;\d t.
+ \]
+\end{prop}
+The proof is just a careful check that the definition of the integral coincides with the definition of length.
+\begin{proof}
+ To simplify notation, we assume $n = 3$. However, the proof works for all possible dimensions. We write
+ \[
+ \Gamma(t) = (f_1(t), f_2(t), f_3(t)).
+ \]
+ For every $s < t$ in $[a, b]$, the mean value theorem tells us
+ \[
+ \frac{f_i(t) - f_i(s)}{t - s} = f'_i (\xi_i)
+ \]
+ for some $\xi_i \in (s, t)$, for all $i = 1, 2, 3$.
+
+ Now note that each $f_i'$ is continuous on a closed, bounded interval, and hence uniformly continuous. For all $\varepsilon > 0$, there is some $\delta > 0$ such that $|t - s| < \delta$ implies
+ \[
+ |f_i'(\xi_i) - f_i'(\xi)| < \frac{\varepsilon}{3}
+ \]
+ for all $\xi \in (s, t)$. Thus, for any $\xi \in (s, t)$, we have
+ \[
+ \left\|\frac{\Gamma(t) - \Gamma(s)}{t - s} - \Gamma'(\xi)\right\| = \left\|\begin{pmatrix}f'_1(\xi_1)\\ f'_2(\xi_2)\\ f'_3(\xi_3)\end{pmatrix} - \begin{pmatrix}f'_1(\xi)\\ f'_2(\xi)\\ f'_3(\xi)\end{pmatrix}\right\| \leq \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.
+ \]
+ In other words,
+ \[
+ \|\Gamma(t) - \Gamma(s) - (t - s) \Gamma'(\xi)\| \leq \varepsilon(t - s).
+ \]
+ We relabel $t = t_i$, $s = t_{i - 1}$ and $\xi = \frac{s + t}{2}$.
+
+ Using the triangle inequality, we have
+ \begin{multline*}
+ (t_i - t_{i - 1}) \left\|\Gamma'\left(\frac{t_i + t_{i - 1}}{2}\right)\right\| - \varepsilon(t_i - t_{i - 1}) \leq \|\Gamma(t_i) - \Gamma(t_{i - 1})\| \\
+ \leq (t_i - t_{i - 1}) \left\|\Gamma'\left(\frac{t_i + t_{i - 1}}{2}\right)\right\| + \varepsilon(t_i - t_{i - 1}).
+ \end{multline*}
+ Summing over all $i$, we obtain
+ \begin{multline*}
+ \sum_i (t_i - t_{i - 1}) \left\|\Gamma'\left(\frac{t_i + t_{i - 1}}{2}\right)\right\| - \varepsilon(b - a) \leq S_{\mathcal{D}}\\
+ \leq \sum_i (t_i - t_{i - 1}) \left\|\Gamma'\left(\frac{t_i + t_{i - 1}}{2}\right)\right\| + \varepsilon(b - a),
+ \end{multline*}
+ which is valid whenever $\mathrm{mesh}(\mathcal{D}) < \delta$.
+
+ Since $\Gamma'$ is continuous, and hence integrable, we know
+ \[
+ \sum_i (t_i - t_{i - 1}) \left\|\Gamma'\left(\frac{t_i + t_{i - 1}}{2}\right)\right\| \to \int_a^b \|\Gamma'(t)\|\;\d t
+ \]
+ as $\mathrm{mesh}(\mathcal{D}) \to 0$, and
+ \[
+ \length(\Gamma) = \lim_{\mathrm{mesh}(\mathcal{D}) \to 0} S_\mathcal{D} = \int_a^b \|\Gamma'(t)\|\;\d t.\qedhere
+ \]
+\end{proof}
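The convergence in the proposition is easy to observe numerically (a sketch of my own, not part of the notes): for the half-circle $\Gamma(t) = (\cos t, \sin t)$ on $[0, \pi]$, the polygonal sums $S_\mathcal{D}$ increase under refinement and tend to $\int_0^\pi \|\Gamma'(t)\|\,\mathrm{d}t = \pi$. The helper name `polygonal_length` is hypothetical.

```python
import math

def gamma(t):
    """The half-circle: a unit-speed curve in the plane."""
    return (math.cos(t), math.sin(t))

def polygonal_length(N, a=0.0, b=math.pi):
    """S_D for the equally spaced dissection of [a, b] into N pieces."""
    ts = [a + (b - a) * i / N for i in range(N + 1)]
    pts = [gamma(t) for t in ts]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# S_D increases with refinement and tends to the integral of ||Gamma'|| = pi.
for N in (4, 16, 64, 256):
    print(N, polygonal_length(N))
```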
+This is all we are going to say about Euclidean space.
+
+\section{Spherical geometry}
+The next thing we are going to study is the geometry on the surface of a sphere. This is a rather sensible thing to study, since it so happens that we all live on something (approximately) spherical. It turns out the geometry of the sphere is very different from that of $\R^2$.
+
+In this section, we will always think of $S^2$ as a subset of $\R^3$ so that we can reuse what we know about $\R^3$.
+\begin{notation}
+ We write $S = S^2 \subseteq \R^3$ for the unit sphere. We write $O = \mathbf{0}$ for the origin, which is the center of the sphere (and not on the sphere).
+\end{notation}
+
+When we live on the sphere, we can no longer use regular lines in $\R^3$, since these do not lie fully on the sphere. Instead, we have a concept of a \emph{spherical line}, also known as a \emph{great circle}.
+\begin{defi}[Great circle]
+ A \emph{great circle} (in $S^2$) is $S^2 \cap (\text{a plane through }O)$. We also call these \emph{(spherical) lines}.
+\end{defi}
+We will also call these \emph{geodesics}, which is a much more general term defined on any surface, and happens to be these great circles in $S^2$.
+
+In $\R^3$, we know that any three points that are not collinear determine a unique plane through them. Hence given any two non-antipodal points $P, Q \in S$, there exists a unique spherical line through $P$ and $Q$.
+
+\begin{defi}[Distance on a sphere]
+ Given $P, Q \in S$, the \emph{distance} $d(P, Q)$ is the length of the shorter of the two (spherical) line segments (i.e.\ arcs) $PQ$ along the respective great circle. When $P$ and $Q$ are antipodal, there are infinitely many line segments between them of the same length, and the distance is $\pi$.
+\end{defi}
+Note that by the definition of the radian, $d(P, Q)$ is the angle between $\overrightarrow{OP}$ and $\overrightarrow{OQ}$, which is also $\cos^{-1}(\mathbf{P}\cdot \mathbf{Q})$ (where $\mathbf{P} = \overrightarrow{OP}$, $\mathbf{Q} = \overrightarrow{OQ}$).
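The formula $d(P, Q) = \cos^{-1}(\mathbf{P}\cdot\mathbf{Q})$ is easy to check on simple points (a small sketch of my own, not part of the notes): two coordinate axes are a quarter great circle apart, and antipodal points are at distance $\pi$.

```python
import math

def d(P, Q):
    """Spherical distance on the unit sphere: the angle between OP and OQ."""
    dot = sum(p * q for p, q in zip(P, Q))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding

print(d((1, 0, 0), (0, 1, 0)))   # quarter of a great circle: pi/2
print(d((1, 0, 0), (-1, 0, 0)))  # antipodal points: pi
```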
+
+\subsection{Triangles on a sphere}
+One main object of study is spherical triangles -- they are defined just like Euclidean triangles, with $AB, AC, BC$ line segments on $S$ of length $<\pi$. The restriction on the lengths is merely for convenience.
+\begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius = 2.5];
+ \pgfpathmoveto{\pgfpoint{-0.86602cm}{-0.5cm}};
+ \pgfpatharcto{2cm}{2cm}{0}{0}{0}{\pgfpoint{0cm}{1cm}}\pgfusepath{stroke};
+ \pgfpathmoveto{\pgfpoint{0cm}{1cm}};
+ \pgfpatharcto{2cm}{2cm}{0}{0}{0}{\pgfpoint{0.86602cm}{-0.5cm}}\pgfusepath{stroke};
+ \pgfpathmoveto{\pgfpoint{0.86602cm}{-0.5cm}};
+ \pgfpatharcto{2cm}{2cm}{0}{0}{0}{\pgfpoint{-0.86602cm}{-0.5cm}}\pgfusepath{stroke}; % improve
+
+ \draw (0, 1) node [above] {$A$} node [circ] {};
+ \draw (-0.86602, -0.5) node [left] {$B$} node [circ] {};
+ \draw (0.86602, -0.5) node [right] {$C$} node [circ] {};
+ \node at (-0.55, 0.5) [left] {$c$};
+ \node at (0.55, 0.5) [right] {$b$};
+ \node at (0, -0.75) [below] {$a$};
+
+ \pgfpathmoveto{\pgfpoint{-0.29cm}{0.77cm}};
+ \pgfpatharcto{0.5cm}{0.5cm}{0}{0}{1}{\pgfpoint{0.29cm}{0.77cm}}\pgfusepath{stroke};
+ \node at (0, 0.5) {$\alpha$};
+
+ \pgfpathmoveto{\pgfpoint{-0.81cm}{-0.15cm}};
+ \pgfpatharcto{0.4cm}{0.4cm}{0}{0}{0}{\pgfpoint{-0.53cm}{-0.63cm}}\pgfusepath{stroke};
+ \node at (-0.4, -0.3) {$\beta$};
+
+ \pgfpathmoveto{\pgfpoint{0.81cm}{-0.15cm}};
+ \pgfpatharcto{0.4cm}{0.4cm}{0}{0}{1}{\pgfpoint{0.53cm}{-0.63cm}}\pgfusepath{stroke};
+ \node at (0.4, -0.3) {$\gamma$};
+ \end{tikzpicture}
+\end{center}
+We will take advantage of the fact that the sphere sits in $\R^3$. We set
+\begin{align*}
+ \mathbf{n}_1 &= \frac{\mathbf{C}\times \mathbf{B}}{\sin a}\\
+ \mathbf{n}_2 &= \frac{\mathbf{A}\times \mathbf{C}}{\sin b}\\
+ \mathbf{n}_3 &= \frac{\mathbf{B}\times \mathbf{A}}{\sin c}.
+\end{align*}
+These are unit normals to the planes $OBC, OAC$ and $OAB$ respectively. They are pointing out of the solid $OABC$.
+
+The angles $\alpha, \beta, \gamma$ are the angles between the planes for the respective sides. Then $0 < \alpha, \beta, \gamma < \pi$. Note that the angle between $\mathbf{n}_2$ and $\mathbf{n}_3$ is $\pi - \alpha$ (not $\alpha$ itself --- if $\alpha = 0$, then the angle between the two normals is $\pi$). So
+\begin{align*}
+ \mathbf{n}_2 \cdot \mathbf{n}_3 &= -\cos \alpha\\
+ \mathbf{n}_3 \cdot \mathbf{n}_1 &= -\cos \beta\\
+ \mathbf{n}_1 \cdot \mathbf{n}_2 &= -\cos \gamma.
+\end{align*}
+We have the following theorem:
+\begin{thm}[Spherical cosine rule]
+ \[
+ \sin a \sin b \cos \gamma = \cos c - \cos a \cos b.
+ \]
+\end{thm}
+
+\begin{proof}
+ We use the fact from IA Vectors and Matrices that
+ \[
+ (\mathbf{C}\times \mathbf{B}) \cdot (\mathbf{A} \times \mathbf{C}) = (\mathbf{A}\cdot \mathbf{C})(\mathbf{B}\cdot \mathbf{C}) - (\mathbf{C} \cdot \mathbf{C})(\mathbf{B}\cdot \mathbf{A}),
+ \]
+ which follows easily from the double-epsilon identity
+ \[
+ \varepsilon_{ijk}\varepsilon_{imn} = \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}.
+ \]
+ In our case, since $\mathbf{C}\cdot \mathbf{C} = 1$, the right hand side is
+ \[
+ (\mathbf{A}\cdot \mathbf{C}) (\mathbf{B}\cdot \mathbf{C}) - (\mathbf{B}\cdot \mathbf{A}).
+ \]
+ Thus we have
+ \begin{align*}
+ -\cos \gamma &= \mathbf{n}_1 \cdot \mathbf{n}_2\\
+ &= \frac{\mathbf{C}\times \mathbf{B}}{\sin a} \cdot \frac{\mathbf{A}\times \mathbf{C}}{\sin b} \\
+ &= \frac{(\mathbf{A}\cdot \mathbf{C})(\mathbf{B}\cdot \mathbf{C}) - (\mathbf{B}\cdot \mathbf{A})}{\sin a \sin b}\\
+ &= \frac{\cos b\cos a - \cos c}{\sin a \sin b}.\qedhere
+ \end{align*}
+\end{proof}
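We can sanity-check the spherical cosine rule numerically (my addition, not part of the notes). The spherical angle $\gamma$ at $C$ equals the angle between the tangent directions to the arcs $CA$ and $CB$; the vertex coordinates and helper names below are hypothetical.

```python
import math

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))
def normalize(u):
    n = norm(u); return tuple(x / n for x in u)
def d(P, Q): return math.acos(max(-1.0, min(1.0, dot(P, Q))))

def angle_at(C, A, B):
    """Spherical angle at vertex C: angle between tangents to the arcs CA, CB."""
    u = [a - dot(A, C) * c for a, c in zip(A, C)]
    v = [b - dot(B, C) * c for b, c in zip(B, C)]
    return math.acos(max(-1.0, min(1.0, dot(u, v) / (norm(u) * norm(v)))))

A = normalize((1, 0.2, 0.1))
B = normalize((0.1, 1, 0.3))
C = normalize((0.2, 0.1, 1))
a, b, c = d(B, C), d(A, C), d(A, B)
gamma = angle_at(C, A, B)
lhs = math.sin(a) * math.sin(b) * math.cos(gamma)
rhs = math.cos(c) - math.cos(a) * math.cos(b)
print(abs(lhs - rhs) < 1e-12)  # sin a sin b cos gamma = cos c - cos a cos b
```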
+
+\begin{cor}[Pythagoras theorem]
+ If $\gamma = \frac{\pi}{2}$, then
+ \[
+ \cos c = \cos a \cos b.
+ \]
+\end{cor}
+
+Analogously, we have a spherical sine rule.
+\begin{thm}[Spherical sine rule]
+ \[
+ \frac{\sin a}{\sin \alpha} = \frac{\sin b}{\sin \beta} = \frac{\sin c}{\sin \gamma}.
+ \]
+\end{thm}
+
+\begin{proof}
+ We use the fact that
+ \[
+ (\mathbf{A}\times \mathbf{C}) \times (\mathbf{C}\times \mathbf{B}) = (\mathbf{C}\cdot (\mathbf{B}\times \mathbf{A}))\mathbf{C},
+ \]
+ which we will again not bother to prove. The left hand side is
+ \[
+ -(\mathbf{n}_1 \times \mathbf{n}_2) \sin a \sin b
+ \]
+ Since the angle between $\mathbf{n}_1$ and $\mathbf{n}_2$ is $\pi - \gamma$, we know $\mathbf{n}_1 \times \mathbf{n}_2 = \mathbf{C}\sin \gamma$. Thus the left hand side is
+ \[
+ -\mathbf{C} \sin a \sin b \sin \gamma.
+ \]
+ Thus we know
+ \[
+ \mathbf{C}\cdot (\mathbf{A} \times \mathbf{B}) = \sin a\sin b \sin \gamma.
+ \]
+ However, since the scalar triple product is cyclic, we know
+ \[
+ \mathbf{C}\cdot (\mathbf{A}\times \mathbf{B}) = \mathbf{A}\cdot (\mathbf{B} \times \mathbf{C}).
+ \]
+ In other words, we have
+ \[
+ \sin a \sin b \sin \gamma = \sin b \sin c \sin \alpha.
+ \]
+ Thus we have
+ \[
+ \frac{\sin \gamma}{\sin c} = \frac{\sin \alpha}{\sin a}.
+ \]
+ Similarly, we know this is equal to $\frac{\sin \beta}{\sin b}$.
+\end{proof}
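The sine rule can be checked on the same kind of concrete triangle (again my addition, with hypothetical vertex coordinates): all three ratios $\sin(\text{side})/\sin(\text{opposite angle})$ should agree.

```python
import math

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def normalize(u):
    n = math.sqrt(dot(u, u)); return tuple(x / n for x in u)
def d(P, Q): return math.acos(max(-1.0, min(1.0, dot(P, Q))))

def angle_at(C, A, B):
    """Spherical angle of the triangle at vertex C."""
    u = [a - dot(A, C) * c for a, c in zip(A, C)]
    v = [b - dot(B, C) * c for b, c in zip(B, C)]
    nu, nv = math.sqrt(dot(u, u)), math.sqrt(dot(v, v))
    return math.acos(max(-1.0, min(1.0, dot(u, v) / (nu * nv))))

A = normalize((1, 0.2, 0.1))
B = normalize((0.1, 1, 0.3))
C = normalize((0.2, 0.1, 1))
a, b, c = d(B, C), d(A, C), d(A, B)
alpha, beta, gamma = angle_at(A, B, C), angle_at(B, A, C), angle_at(C, A, B)
r1 = math.sin(a) / math.sin(alpha)
r2 = math.sin(b) / math.sin(beta)
r3 = math.sin(c) / math.sin(gamma)
print(abs(r1 - r2) < 1e-9 and abs(r2 - r3) < 1e-9)  # the three ratios agree
```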
+
+Recall that for small $a, b, c$, we know
+\[
+ \sin a = a + O(a^3).
+\]
+Similarly,
+\[
+ \cos a = 1 - \frac{a^2}{2} + O(a^4).
+\]
+As we take the limit $a, b, c \to 0$, the spherical sine and cosine rules become the usual Euclidean versions. For example, the cosine rule becomes
+\[
+ ab \cos \gamma = 1 - \frac{c^2}{2} - \left(1 - \frac{a^2}{2}\right)\left(1 - \frac{b^2}{2}\right) + O(\|(a, b, c)\|^3).
+\]
+Rearranging gives
+\[
+ c^2 = a^2 + b^2 - 2ab \cos \gamma + O(\|(a, b, c)\|^3).
+\]
+The sine rule transforms similarly as well. This is what we would expect, since making $a, b, c$ small is equivalent to zooming into the surface of the sphere, and it looks more and more like flat space.
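This Euclidean limit is visible numerically (a sketch of my own, not part of the notes): for a triangle of hypothetical scale $h = 10^{-2}$ near the north pole, the defect $c^2 - (a^2 + b^2 - 2ab\cos\gamma)$ is tiny compared with $c^2 \sim h^2$.

```python
import math

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def normalize(u):
    n = math.sqrt(dot(u, u)); return tuple(x / n for x in u)
def d(P, Q): return math.acos(max(-1.0, min(1.0, dot(P, Q))))

def angle_at(C, A, B):
    u = [a - dot(A, C) * c for a, c in zip(A, C)]
    v = [b - dot(B, C) * c for b, c in zip(B, C)]
    nu, nv = math.sqrt(dot(u, u)), math.sqrt(dot(v, v))
    return math.acos(max(-1.0, min(1.0, dot(u, v) / (nu * nv))))

h = 1e-2  # hypothetical small scale of the triangle
A = normalize((0.0, 0.0, 1.0))
B = normalize((h, 0.0, 1.0))
C = normalize((0.3 * h, 0.8 * h, 1.0))
a, b, c = d(B, C), d(A, C), d(A, B)
gamma = angle_at(C, A, B)
# Defect of the Euclidean cosine rule: higher order in h than c^2 itself.
err = abs(c * c - (a * a + b * b - 2 * a * b * math.cos(gamma)))
print(err < 1e-7, c * c > 1e-5)
```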
+
+Note that if $\gamma = \pi$, it then follows that $C$ is in the line segment given by $AB$. So $c = a + b$. Otherwise, we get
+\[
+ \cos c > \cos a \cos b - \sin a \sin b = \cos(a + b),
+\]
+since $\cos \gamma > -1$. Since $\cos$ is decreasing on $[0, \pi]$, we know
+\[
+ c < a + b.
+\]
+\begin{cor}[Triangle inequality]
+ For any $P, Q, R \in S^2$, we have
+ \[
+ d(P, Q) + d(Q, R) \geq d(P, R),
+ \]
+ with equality if and only if $Q$ lies in the line segment $PR$ of shortest length.
+\end{cor}
+
+\begin{proof}
+ The only case left to check is if $d(P, R) = \pi$, since we do not allow our triangles to have side length $\pi$. But in this case $P$ and $R$ are antipodal, every $Q$ lies on some line through $P$ and $R$, and equality holds.
+\end{proof}
+Thus, we find that $(S^2, d)$ is a metric space.
+
+On $\R^n$, straight lines are curves that minimize distance. Since we are calling spherical lines \emph{lines}, we would expect them to minimize distance as well. This is in fact true.
+\begin{prop}
+ Given a curve $\Gamma$ on $S^2 \subseteq \R^3$ from $P$ to $Q$, we have $\ell = \length(\Gamma) \geq d(P, Q)$. Moreover, if $\ell = d(P, Q)$, then the image of $\Gamma$ is a spherical line segment $PQ$.
+\end{prop}
+
+\begin{proof}
+ Let $\Gamma: [0, 1] \to S$ and $\ell = \length(\Gamma)$. Then for any dissection $\mathcal{D}$ of $[0, 1]$, say $0 = t_0 < \cdots < t_N = 1$, write $P_i = \Gamma(t_i)$. We define
+ \[
+ \tilde{S}_{\mathcal{D}} = \sum_i d(P_{i - 1}, P_i) \geq S_{\mathcal{D}} = \sum_i |\overrightarrow{P_{i - 1}P_i}|,
+ \]
+ where the length in the right hand expression is the distance in Euclidean $3$-space.
+
+ Now suppose $\ell < d(P, Q)$. Then there is some $\varepsilon > 0$ such that $\ell(1 + \varepsilon) < d(P, Q)$.
+
+ Recall from basic trigonometry that if $\theta > 0$, then $\sin \theta < \theta$. Also,
+ \[
+ \frac{\sin \theta}{\theta} \to 1\text{ as }\theta \to 0.
+ \]
+ Thus, for sufficiently small $\theta$, we have
+ \[
+ \theta \leq (1 + \varepsilon) \sin \theta.
+ \]
+ What we really want is the double of this:
+ \[
+ 2\theta \leq (1 + \varepsilon) 2\sin \theta.
+ \]
+ This is useful since these lengths appear in the following diagram:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (0, 0) -- (-2.121, 2.121) -- cycle node [pos=0.55, right] {$2 \sin \theta$};
+ \draw (-3, 0) arc (180:135:3) node [pos=0.55, left] {$2 \theta$};
+
+ \draw (-0.4, 0) arc(180:135:0.4) node [pos=0.7, left] {$2\theta$};
+ \end{tikzpicture}
+ \end{center}
+ This means for $P, Q$ sufficiently close, we have $d(P, Q) \leq (1 + \varepsilon) |\overrightarrow{PQ}|$.
+
+ From Analysis II, we know $\Gamma$ is uniformly continuous on $[0, 1]$. So we can choose $\mathcal{D}$ such that
+ \[
+ d(P_{i - 1}, P_i) \leq (1 + \varepsilon)|\overrightarrow{P_{i - 1}P_i}|
+ \]
+ for all $i$. So we know that for sufficiently fine $\mathcal{D}$,
+ \[
+ \tilde{S}_{\mathcal{D}} \leq (1 + \varepsilon) S_\mathcal{D} < d(P, Q),
+ \]
+ since $S_\mathcal{D} \to \ell$. However, by the triangle inequality $\tilde{S}_\mathcal{D} \geq d(P, Q)$. This is a contradiction. Hence we must have $\ell \geq d(P, Q)$.
+
+ Suppose now $\ell = d(P, Q)$ for some $\Gamma: [0, 1] \to S$, $\ell = \length(\Gamma)$. Then for every $t \in [0, 1]$, we have
+ \begin{align*}
+ d(P, Q) = \ell &= \length \Gamma|_{[0, t]} + \length \Gamma|_{[t, 1]} \\
+ &\geq d(P, \Gamma(t)) + d(\Gamma(t), Q)\\
+ &\geq d(P, Q).
+ \end{align*}
+ Hence we must have equality all along the way, i.e.
+ \[
+ d(P, Q) = d(P, \Gamma(t)) + d(\Gamma(t), Q)
+ \]
+ for all $\Gamma(t)$.
+
+ However, this is possible only if $\Gamma(t)$ lies on the shorter spherical line segment $PQ$, as we have previously proved. So done.
+\end{proof}
+
+Note that if $\Gamma$ is a curve of minimal length from $P$ to $Q$, then the image of $\Gamma$ is a spherical line segment $PQ$. Further, from the proof of this proposition, we know $\length \Gamma|_{[0, t]} = d(P, \Gamma(t))$ for all $t$. So the parametrisation of $\Gamma$ is monotonic. Such a $\Gamma$ is called a \emph{minimizing geodesic}.
+
+Finally, we get to an important theorem whose proof involves complicated pictures. This is known as the \emph{Gauss-Bonnet theorem}. The Gauss-Bonnet theorem is in fact a much more general theorem. However, here we will specialize to the case of the sphere. Later, when doing hyperbolic geometry, we will prove the hyperbolic version of the Gauss-Bonnet theorem. Near the end of the course, when we have developed sufficient machinery, we will be able to \emph{state} the Gauss-Bonnet theorem in its full glory. However, we will not be able to prove the general version.
+
+\begin{prop}[Gauss-Bonnet theorem for $S^2$]
+ If $\Delta$ is a spherical triangle with angles $\alpha, \beta, \gamma$, then
+ \[
+ \area (\Delta) = (\alpha + \beta + \gamma) - \pi.
+ \]
+\end{prop}
+
+\begin{proof}
+ \makeatletter
+ \newcommand\filllunefront[9]{
+ \pgfmathsetmacro{\x@x}{#3 * cos(#4)};
+ \pgfmathsetmacro{\x@y}{#3 * sin(#4)};
+ \pgfmathsetmacro{\y@x}{#3 * cos(#5)};
+ \pgfmathsetmacro{\y@y}{#3 * sin(#5)};
+
+ \color{#8};
+ \pgfsetfillopacity{0.6};
+ \pgfpathmoveto{\pgfpoint{#1}{#2}};
+ \pgfpatharcto{#3 cm}{#6 cm}{#4}{0}{1}{\pgfpoint{-\x@x cm}{-\x@y cm}};
+ \pgfpatharcto{#3 cm}{#3 cm}{0}{0}{#9}{\pgfpoint{-\y@x cm}{-\y@y cm}};
+ \pgfpatharcto{#3 cm}{#7 cm}{#5}{0}{int(1-#9)}{\pgfpoint{#1}{#2}};
+ \pgfusepath{fill};
+
+ \pgfpathmoveto{\pgfpoint{#1}{#2}};
+ \pgfpatharcto{#3 cm}{#6 cm}{#4}{0}{0}{\pgfpoint{\x@x cm}{\x@y cm}};
+ \pgfpatharcto{#3 cm}{#3 cm}{0}{0}{#9}{\pgfpoint{\y@x cm}{\y@y cm}};
+ \pgfpatharcto{#3 cm}{#7 cm}{#5}{0}{#9}{\pgfpoint{#1}{#2}};
+ \pgfusepath{fill};
+ }
+ \newcommand\fillluneback[9]{
+ \pgfmathsetmacro{\x@x}{#3 * cos(#4)};
+ \pgfmathsetmacro{\x@y}{#3 * sin(#4)};
+ \pgfmathsetmacro{\y@x}{#3 * cos(#5)};
+ \pgfmathsetmacro{\y@y}{#3 * sin(#5)};
+
+ \color{#8};
+
+ \pgfsetfillopacity{0.11};
+ \pgfpathmoveto{\pgfpoint{-#1}{-#2}};
+ \pgfpatharcto{#3 cm}{#6 cm}{#4}{0}{0}{\pgfpoint{-\x@x cm}{-\x@y cm}};
+ \pgfpatharcto{#3 cm}{#3 cm}{0}{0}{#9}{\pgfpoint{-\y@x cm}{-\y@y cm}};
+ \pgfpatharcto{#3 cm}{#7 cm}{#5}{0}{#9}{\pgfpoint{-#1}{-#2}};
+ \pgfusepath{fill};
+
+ \pgfpathmoveto{\pgfpoint{-#1}{-#2}};
+ \pgfpatharcto{#3 cm}{#6 cm}{#4}{0}{1}{\pgfpoint{\x@x cm}{\x@y cm}};
+ \pgfpatharcto{#3 cm}{#3 cm}{0}{0}{#9}{\pgfpoint{\y@x cm}{\y@y cm}};
+ \pgfpatharcto{#3 cm}{#7 cm}{#5}{0}{int(1-#9)}{\pgfpoint{-#1}{-#2}};
+ \pgfusepath{fill};
+ }
+ \makeatother
+ We start with the concept of a double lune. A \emph{double lune} with angle $0 < \alpha < \pi$ is the two regions of $S$ cut out by two planes through a pair of antipodal points, where $\alpha$ is the angle between the two planes.
+ \begin{center}
+ \begin{tikzpicture}
+ \pgfdeclarelayer{col}
+ \pgfsetlayers{col,main}
+ \pgfmathsetmacro{\radius}{2.5}
+ \pgfmathsetmacro{\ct}{20}
+ \pgfmathsetmacro{\crad}{1.3}
+ \pgfmathsetmacro{\dt}{70}
+ \pgfmathsetmacro{\drad}{0.5}
+
+ \draw circle [radius = \radius];
+
+ \draw [name path=c1, rotate=\ct] (\radius, 0) arc(0:180:{\radius} and {\crad});
+ \draw [opacity=0.6, name path=d1, rotate=\ct, dashed] (\radius, 0) arc(0:-180:{\radius} and {\crad});
+
+ \draw [name path=c2, rotate=\dt] (\radius, 0) arc(0:180:{\radius} and {\drad});
+ \draw [opacity=0.6, name path=d2, rotate=\dt, dashed] (\radius, 0) arc(0:-180:{\radius} and {\drad});
+
+ \path [name intersections={of=c1 and c2,by=A}];
+ \path [name intersections={of=d1 and d2,by=A'}];
+ \node [circ] at (A) {};
+ \node [circ] at (A') {};
+
+ \node [anchor = south east] at (A) {$A$};
+ \node [anchor = south east] at (A') {$A'$};
+
+ \getCoord{\ax}{\ay}{A};
+ \getCoord{\aax}{\aay}{A'};
+ \begin{pgfonlayer}{col}
+ \filllunefront{\ax}{\ay}{\radius}{\ct}{\dt}{\crad}{\drad}{mblue}{1};
+ \fillluneback{\ax}{\ay}{\radius}{\ct}{\dt}{\crad}{\drad}{mblue}{1};
+ \end{pgfonlayer}
+
+ \end{tikzpicture}
+ \end{center}
+ It is not hard to show that the area of a double lune is $4 \alpha$, since the area of the sphere is $4\pi$.
+
+ Now note that our triangle $\Delta = ABC$ is the intersection of 3 \emph{single} lunes, with each of $A, B, C$ as the pole (in fact we only need two, but it is more convenient to talk about $3$).
+ \begin{center}
+ \begin{tikzpicture}
+ \pgfdeclarelayer{col}
+ \pgfsetlayers{col,main}
+ \pgfmathsetmacro{\radius}{2.5}
+ \pgfmathsetmacro{\ct}{20}
+ \pgfmathsetmacro{\crad}{1.3}
+ \pgfmathsetmacro{\dt}{70}
+ \pgfmathsetmacro{\drad}{0.5}
+ \pgfmathsetmacro{\et}{140}
+ \pgfmathsetmacro{\erad}{0.5}
+
+ \draw circle [radius = \radius];
+
+ \draw [name path=c1, rotate=\ct] (\radius, 0) arc(0:180:{\radius} and {\crad});
+ \draw [opacity=0.6, name path=c2, rotate=\ct, dashed] (\radius, 0) arc(0:-180:{\radius} and {\crad});
+
+ \draw [name path=d1, rotate=\dt] (\radius, 0) arc(0:180:{\radius} and {\drad});
+ \draw [opacity=0.6, name path=d2, rotate=\dt, dashed] (\radius, 0) arc(0:-180:{\radius} and {\drad});
+
+ \draw [name path=e1, rotate=\et] (\radius, 0) arc(0:180:{\radius} and {\erad});
+ \draw [opacity=0.6, name path=e2, rotate=\et, dashed] (\radius, 0) arc(0:-180:{\radius} and {\erad});
+
+ \path [name intersections={of=c1 and d1,by=A}];
+ \path [name intersections={of=c2 and d2,by=A'}];
+ \node [circ] at (A) {};
+ \node [opacity=0.6, circ] at (A') {};
+ \node [anchor = south east] at (A) {$A$};
+ \node [opacity=0.6, anchor = south east] at (A') {$A'$};
+
+ \path [name intersections={of=c1 and e1,by=B}];
+ \path [name intersections={of=c2 and e2,by=B'}];
+ \node [circ] at (B) {};
+ \node [opacity=0.6, circ] at (B') {};
+ \node [left] at (B) {$B$};
+ \node [opacity=0.6, right] at (B') {$B'$};
+
+ \path [name intersections={of=d1 and e1,by=C}];
+ \path [name intersections={of=d2 and e2,by=C'}];
+ \node [circ] at (C) {};
+ \node [opacity=0.6, circ] at (C') {};
+ \node [right] at (C) {$C$};
+ \node [opacity=0.6, right] at (C') {$C'$};
+
+ \getCoord{\ax}{\ay}{A};
+ \getCoord{\bx}{\by}{B};
+ \getCoord{\cx}{\cy}{C};
+
+ \begin{pgfonlayer}{col}
+ \fillluneback{\ax}{\ay}{\radius}{\ct}{\dt}{\crad}{\drad}{mblue}{1};
+ \fillluneback{\bx}{\by}{\radius}{\ct}{180+\et}{\crad}{\erad}{mgreen}{0};
+ \fillluneback{\cx}{\cy}{\radius}{\dt}{\et}{\drad}{\erad}{morange}{1};
+
+ \filllunefront{\ax}{\ay}{\radius}{\ct}{\dt}{\crad}{\drad}{mblue}{1};
+ \filllunefront{\bx}{\by}{\radius}{\ct}{180+\et}{\crad}{\erad}{mgreen}{0};
+ \filllunefront{\cx}{\cy}{\radius}{\dt}{\et}{\drad}{\erad}{morange}{1};
+ \end{pgfonlayer}
+ \end{tikzpicture}
+ \end{center}
+ Therefore $\Delta$ together with its antipodal partner $\Delta'$ is a subset of each of the 3 double lunes with areas $4\alpha, 4\beta, 4\gamma$. Also, the union of all the double lunes covers the whole sphere, and they overlap exactly at $\Delta$ and $\Delta'$.
+ Thus
+ \[
+ 4(\alpha + \beta + \gamma) = 4\pi + 2(\area(\Delta) + \area(\Delta')) = 4\pi + 4 \area(\Delta).\qedhere
+ \]
+\end{proof}
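The theorem is easy to verify on the octant triangle with vertices on the three coordinate axes (my addition, not part of the notes): all three angles are right angles, so the excess is $\pi/2$, which is indeed one eighth of the sphere's area $4\pi$.

```python
import math

def dot(u, v): return sum(x * y for x, y in zip(u, v))

def angle_at(C, A, B):
    """Spherical angle of triangle ABC at the vertex C."""
    u = [a - dot(A, C) * c for a, c in zip(A, C)]
    v = [b - dot(B, C) * c for b, c in zip(B, C)]
    nu, nv = math.sqrt(dot(u, u)), math.sqrt(dot(v, v))
    return math.acos(max(-1.0, min(1.0, dot(u, v) / (nu * nv))))

# Octant triangle: vertices on the coordinate axes, three right angles.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
alpha = angle_at(A, B, C)
beta = angle_at(B, A, C)
gamma = angle_at(C, A, B)
excess = alpha + beta + gamma - math.pi
print(excess, 4 * math.pi / 8)  # both should be pi/2
```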
+This is easily generalized to arbitrary convex $n$-gons on $S^2$ (with $n \geq 3$). Suppose $M$ is such a convex $n$-gon with interior angles $\alpha_1, \cdots, \alpha_n$. Then we have
+\[
+ \area(M) = \sum_{i = 1}^n \alpha_i - (n - 2) \pi.
+\]
+This follows directly from cutting the polygon up into the constituent triangles.
+
+This is very unlike Euclidean space. On $\R^2$, we always have $\alpha + \beta + \gamma = \pi$. Not only is this false on $S^2$, but by measuring the difference, we can tell the area of the triangle. In fact, we can identify triangles up to congruence just by knowing the three angles.
+
+\subsection{M\texorpdfstring{\"o}{o}bius geometry}
+It turns out it is convenient to identify the sphere $S^2$ with the extended complex plane $\C_\infty = \C \cup \{\infty\}$. Then isometries of $S^2$ will translate to a nice class of maps of $\C_\infty$.
+
+We first find a way to identify $S^2$ with $\C_\infty$. We use coordinates $\zeta \in \C_\infty$. We define the \emph{stereographic projection} $\pi: S^2 \to \C_\infty$ by
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (8, 0) -- (10, 3) -- (2, 3) -- (0, 0);
+ \draw [gray] (1, 1.5) -- (9, 1.5);
+ \draw [gray] (4, 0) -- (6, 3);
+ \draw (6.2, 1.5) arc(0:180:1.2);
+ \draw [opacity=0.5] (6.2, 1.5) arc(0:-180:1.2);
+ \draw (6.2, 1.5) arc(0:-180:1.2 and 0.5);
+ \draw [dashed] (6.2, 1.5) arc(0:180:1.2 and 0.5);
+
+ \node [circ] at (5, 2.5) {};
+ \node at (5, 2.5) [right] {$N$};
+
+ \node [circ] at (5.5, 2.17) {};
+ \node at (5.5, 2.17) [right] {$P$};
+
+ \draw (5, 2.5) -- (7, 1.2) node [circ] {} node [right] {$\pi(P)$};
+ \end{tikzpicture}
+\end{center}
+\[
+ \pi(P) = (\text{line } PN)\cap \{z = 0\},
+\]
+which is well defined except where $P = N$, in which case we define $\pi(N) = \infty$.
+
+To give an explicit formula for this, consider the cross-section through the plane $ONP$.
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [anchor = north east] {$O$};
+ \draw circle [radius=2];
+ \draw (-3, 0) -- (5.5, 0);
+ \draw (0, 0) -- (0, 2) node [circ] {} node [above] {$N$};
+ \draw (0, 1.414) -- (1.414, 1.414) node [pos=0.5, below] {$r$} node [right] {$P$} node [circ] {};
+ \draw (0.2, 1.414) -- +(0, 0.2) -- +(-0.2, 0.2);
+
+ \draw [<->] (-0.245, 0) -- +(0, 1.414) node [fill=white, pos=0.5] {$z$};
+ \draw (0, 2) -- (4.8259, 0) node [circ] {} node [above] {$\pi(P)$};
+
+ \draw [<->] (0, -0.245) -- +(4.8259, 0) node [fill=white, pos=0.5] {$R$};
+ \end{tikzpicture}
+\end{center}
+If $P$ has coordinates $(x, y, z)$, then we see that $\pi(P)$ will be a scalar multiple of $x + iy$. To find this factor, we notice that we have two similar triangles, and hence obtain
+\[
+ \frac{r}{R} = \frac{1 - z}{1}.
+\]
+Then we obtain the formula
+\[
+ \pi(x, y, z) = \frac{x + iy}{1 - z}.
+\]
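The formula can be checked directly against the geometric definition (a sketch of my own, not part of the notes): intersecting the line through $N = (0, 0, 1)$ and $P$ with the plane $z = 0$ gives the same point as $(x + iy)/(1 - z)$. The sample point below is hypothetical.

```python
import math

def stereo(P):
    """Stereographic projection from the North pole, as a complex number."""
    x, y, z = P
    return complex(x, y) / (1 - z)

def line_intersection(P):
    """Intersect the line through N = (0,0,1) and P with the plane z = 0."""
    x, y, z = P
    t = 1 / (1 - z)  # N + t(P - N) has third coordinate 1 + t(z - 1) = 0
    return complex(t * x, t * y)

# A hypothetical point on the unit sphere.
P = tuple(c / math.sqrt(0.69) for c in (0.7, 0.2, -0.4))
print(abs(stereo(P) - line_intersection(P)) < 1e-12)
```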
+If we do the projection from the South pole instead, we get a related formula.
+\begin{lemma}
+ If $\pi': S^2 \to \C_{\infty}$ denotes the stereographic projection from the South Pole instead, then
+ \[
+ \pi'(P) = \frac{1}{\overline{\pi(P)}}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $P = (x, y, z)$. Then
+ \[
+ \pi(x, y, z) = \frac{x + iy}{1 - z}.
+ \]
+ Then we have
+ \[
+ \pi'(x, y, z) = \frac{x + iy}{1 + z},
+ \]
+ since we have just flipped the $z$ axis around. So we have
+ \[
+ \overline{\pi(P)}\pi'(P) = \frac{x^2 + y^2}{1 - z^2} =1,
+ \]
+ noting that we have $x^2 + y^2 + z^2 = 1$ since we are on the unit sphere.
+\end{proof}
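The lemma is quick to test numerically (my addition, not part of the notes), e.g.\ on the point $(0.6, 0, 0.8)$, where the two projections land at approximately $3$ and $1/3$ on the real axis, reciprocal conjugates as claimed.

```python
def stereo_N(P):
    """Stereographic projection from the North pole."""
    x, y, z = P
    return complex(x, y) / (1 - z)

def stereo_S(P):
    """Stereographic projection from the South pole."""
    x, y, z = P
    return complex(x, y) / (1 + z)

P = (0.6, 0.0, 0.8)  # on the unit sphere: 0.36 + 0.64 = 1
print(stereo_N(P), stereo_S(P))  # approximately 3 and 1/3
print(abs(stereo_S(P) - 1 / stereo_N(P).conjugate()) < 1e-12)
```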
+
+We can use this to infer that $\pi' \circ \pi^{-1}: \C_{\infty} \to \C_{\infty}$ takes $\zeta \mapsto 1/\bar{\zeta}$, which is the inversion in the unit circle $|\zeta| = 1$.
+
+From IA Groups, we know M\"obius transformations act on $\C_{\infty}$ and form a group $G$ by composition. For each matrix
+\[
+ A =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \GL(2, \C),
+\]
+we get a M\"obius map $\C_\infty \to \C_\infty$ by
+\[
+ \zeta \mapsto \frac{a \zeta + b}{c \zeta + d}.
+\]
+Moreover, composition of M\"obius maps corresponds to multiplication of the matrices.
+
+This is not exactly a bijective map between $G$ and $\GL(2, \C)$. For any $\lambda \in \C^* = \C \setminus \{0\}$, we know $\lambda A$ defines the same M\"obius map as $A$. Conversely, if $A_1$ and $A_2$ give the same M\"obius map, then there is some $\lambda \not= 0$ such that $A_1 = \lambda A_2$.
+
+Hence, we have
+\[
+ G \cong \PGL(2, \C) = \GL(2, \C)/\C^*,
+\]
+where
+\[
+ \C^* \cong \{ \lambda I: \lambda \in \C^*\}.
+\]
+Instead of taking the whole $\GL(2, \C)$ and quotienting out multiples of the identities, we can instead start with $\SL(2, \C)$. Again, $A_1, A_2 \in \SL(2, \C)$ define the same map if and only if $A_1 = \lambda A_2$ for some $\lambda$. What are the possible values of $\lambda$? By definition of the special linear group, we must have
+\[
+ 1 = \det(\lambda A) = \lambda^2 \det A = \lambda^2.
+\]
+So $\lambda = \pm 1$. So each M\"obius map is represented by two matrices, $A$ and $-A$, and we get
+\[
+ G \cong \PSL(2, \C) = \SL(2, \C)/\{\pm 1\}.
+\]
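The fact that $A$ and $-A$ (or more generally $\lambda A$) give the same M\"obius map is easy to see in code (a sketch of my own, with hypothetical matrix entries): the factor $\lambda$ cancels between numerator and denominator.

```python
def mobius(M, z):
    """Apply the Moebius map of a 2x2 complex matrix to z."""
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

A = ((1 + 2j, 3), (-1j, 2))  # hypothetical invertible matrix
negA = tuple(tuple(-x for x in row) for row in A)

for z in (0.5 + 0.5j, 2 - 1j, 3j):
    print(abs(mobius(A, z) - mobius(negA, z)) < 1e-12)
```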
+Now let's think about the sphere. On $S^2$, the rotations $\SO(3)$ act as isometries. In fact, the full isometry group of $S^2$ is $\Or(3)$ (the proof is on the first example sheet). What we would like to show is that rotations of $S^2$ correspond to M\"obius transformations coming from the subgroup $\SU(2) \leq \GL(2, \C)$.
+
+\begin{thm}
+ Via the stereographic projection, every rotation of $S^2$ induces a M\"obius map defined by a matrix in $\SU(2) \subseteq \GL(2, \C)$, where
+ \[
+ \SU(2) = \left\{
+ \begin{pmatrix}
+ a & -b\\
+ \bar{b} & \bar{a}
+ \end{pmatrix}
+ : |a|^2 + |b|^2 = 1
+ \right\}.
+ \]
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Consider $r(\hat{\mathbf{z}}, \theta)$, the rotation about the $z$-axis by $\theta$. This corresponds to the M\"obius map $\zeta \mapsto e^{i\theta} \zeta$, which is given by the unitary matrix
+ \[
+ \begin{pmatrix}
+ e^{i\theta/2} & 0\\
+ 0 & e^{-i\theta/2}
+ \end{pmatrix}.
+ \]
+ \item Consider the rotation $r(\hat{\mathbf{y}}, \frac{\pi}{2})$. This has the matrix
+ \[
+ \begin{pmatrix}
+ 0 & 0 & 1\\
+ 0 & 1 & 0\\
+ -1 & 0 & 0
+ \end{pmatrix}
+ \begin{pmatrix}
+ x\\y\\z
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ z\\y\\-x
+ \end{pmatrix}.
+ \]
+ This corresponds to the map
+ \[
+ \zeta = \frac{x + iy}{1 - z} \mapsto \zeta' = \frac{z + iy}{1 + x}.
+ \]
+ We want to show this is a M\"obius map. To do so, we guess what the M\"obius map should be, and check it works. We can manually compute that $-1 \mapsto \infty$, $1 \mapsto 0$, $i \mapsto i$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2.5];
+ \draw (-2.5, 0) arc(180:360:2.5 and 0.5);
+ \draw [dashed] (-2.5, 0) arc(180:0:2.5 and 0.6);
+
+ \node [circ] at (-2.5, 0) {};
+ \node [circ] at (2.5, 0) {};
+ \node [circ] at (-0.5, -0.49) {};
+ \node [left] at (-2.5, 0) {$1$};
+ \node [right] at (2.5, 0) {$-1$};
+ \node [below] at (-0.5, -0.49) {$i$};
+
+ \node [circ] at (0, -2.5) {};
+ \node [below] at (0, -2.5) {$0$};
+ \node [circ] at (0, 2.5) {};
+ \node [above] at (0, 2.5) {$\infty$};
+ \end{tikzpicture}
+ \end{center}
+ The only M\"obius map that does this is
+ \[
+ \zeta' = \frac{\zeta - 1}{\zeta + 1}.
+ \]
+ We now check:
+ \begin{align*}
+ \frac{\zeta - 1}{\zeta + 1} &= \frac{x + iy - 1 + z}{x + iy + 1 - z}\\
+ &= \frac{x - 1 + z + iy}{x + 1 - (z - iy)}\\
+ &= \frac{(z + iy)(x - 1 + z + iy)}{(x + 1)(z + iy) - (z^2 + y^2)}\\
+ &= \frac{(z + iy)(x - 1 + z + iy)}{(x + 1)(z + iy) + (x^2 - 1)}\\
+ &= \frac{(z + iy)(x - 1 + z + iy)}{(x + 1)(z + iy + x - 1)}\\
+ &= \frac{z + iy}{x + 1}.
+ \end{align*}
+ So done. We finally have to write this in the form of an $\SU(2)$ matrix:
+ \[
+ \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1 & -1\\
+ 1 & 1
+ \end{pmatrix}.
+ \]
+ \item We claim that $\SO(3)$ is generated by $r\left(\hat{\mathbf{y}}, \frac{\pi}{2}\right)$ and $r(\hat{\mathbf{z}}, \theta)$ for $0 \leq \theta < 2\pi$.
+
+ To show this, we observe that $r(\hat{\mathbf{x}}, \varphi) = r(\hat{\mathbf{y}}, \frac{\pi}{2}) r(\hat{\mathbf{z}}, \varphi) r(\hat{\mathbf{y}}, -\frac{\pi}{2})$. Note that we read the composition from right to left. You can convince yourself this is true by taking a physical sphere and trying the rotations. To prove it formally, we can just multiply the matrices out.
+
+ Next, observe that for $\mathbf{v} \in S^2 \subseteq \R^3$, there are some angles $\varphi, \psi$ such that $g = r(\hat{\mathbf{z}}, \psi) r(\hat{\mathbf{x}}, \varphi)$ maps $\mathbf{v}$ to $\hat{\mathbf{x}}$. We can do so by first picking $r(\hat{\mathbf{x}}, \varphi)$ to rotate $\mathbf{v}$ into the $(x, y)$-plane. Then we rotate about the $z$-axis to send it to $\hat{\mathbf{x}}$.
+
+ Then for any $\theta$, we have $r(\mathbf{v}, \theta) = g^{-1} r(\hat{\mathbf{x}}, \theta) g$, and our claim follows by composition.
+ \item Thus, via the stereographic projection, every rotation of $S^2$ corresponds to products of M\"obius transformations of $\C_\infty$ with matrices in $\SU(2)$.\qedhere
+ \end{enumerate}
+\end{proof}
+The key of the proof is step (iii). Apart from enabling us to perform the proof, it exemplifies a useful technique in geometry --- we know how to rotate arbitrary things about the $z$-axis. When we want to rotate things about the $x$-axis instead, we first rotate the sphere to move the $x$-axis to where the $z$-axis used to be, do those rotations, and then rotate it back. In general, we can use some isometries or rotations to move what we want to do to a convenient location.
+
+\begin{thm}
+ The group of rotations $\SO(3)$ acting on $S^2$ corresponds precisely with the subgroup $\PSU(2) = \SU(2)/ \pm 1$ of M\"obius transformations acting on $\C_\infty$.
+\end{thm}
+This is in some sense a converse of the previous theorem. We are saying that for each M\"obius map coming from $\SU(2)$, we can find some rotation of $S^2$ that induces that M\"obius map, and there is exactly one.
+
+\begin{proof}
+ Let $g \in \PSU(2)$ be a M\"obius transformation
+ \[
+ g(z) = \frac{az + b}{\bar{b} z + \bar{a}}.
+ \]
+ Suppose first that $g(0) = 0$. Then $b = 0$, and so $a\bar{a} = 1$. Hence $a = e^{i\theta/2}$ for some $\theta$, and $g(z) = az/\bar{a} = e^{i\theta} z$. So $g$ corresponds to $r(\hat{\mathbf{z}}, \theta)$, as we have previously seen.
+
+ In general, let $g(0) = w \in \C_\infty$. Let $Q \in S^2$ be such that $\pi(Q) = w$. Choose a rotation $A \in \SO(3)$ such that $A(Q) = -\hat{\mathbf{z}}$. By the previous theorem, $A$ induces a M\"obius transformation $\alpha \in \PSU(2)$. By construction, $\alpha(w) = \pi(A(Q)) = \pi(-\hat{\mathbf{z}}) = 0$. So the composition $\alpha \circ g$ fixes $0$, and hence corresponds to some rotation $B = r(\hat{\mathbf{z}}, \theta)$ by the first part. We then see that $g$ corresponds to $A^{-1} B \in \SO(3)$. So done.
+\end{proof}
+Again, we solved an easy case, where $g(0) = 0$, and then performed some rotations to transform any other case into this simple case.
+
+We have now produced a $2$-to-$1$ map
+\[
+ \SU(2) \to \PSU(2) \cong \SO(3).
+\]
+If we treat these groups as topological spaces, this map does something funny.
+
+Suppose we start with a (non-closed) path from $I$ to $-I$ in $\SU(2)$. Applying the map, we get a \emph{closed} loop from $I$ to $I$ in $\SO(3)$.
+
+Hence, in $\SO(3)$, loops behave slightly weirdly. If we go around this loop in $\SO(3)$, we don't really get back to the same place. Instead, we have actually moved from $I$ to $-I$ in $\SU(2)$. It takes \emph{two} full loops to actually get back to $I$. In physics, this corresponds to the idea of spinors.
+
+We can also understand this topologically as follows: since $\SU(2)$ is defined by two complex numbers $a, b \in \C$ such that $|a|^2 + |b|^2 = 1$, writing out the real and imaginary parts of $a$ and $b$ identifies $\SU(2)$ with the three-sphere $S^3 \subseteq \R^4$.
+
+A nice property of $S^3$ is that it is \emph{simply connected}, meaning that any loop in $S^3$ can be shrunk to a point. In other words, given any loop in $S^3$, we can pull and stretch it (continuously) until it becomes the ``loop'' that just stays at a single point.
+
+On the other hand, $\SO(3)$ is not simply connected. We have just constructed a loop in $\SO(3)$ by mapping the path from $I$ to $-I$ in $\SU(2)$. We \emph{cannot} deform this loop until it just sits at a single point, since if we lift it back up to $\SU(2)$, it still has to move from $I$ to $-I$.
+
+The neat thing is that in some sense, $S^3 \cong \SU(2)$ is just ``two copies'' of $\SO(3)$. By duplicating $\SO(3)$, we have produced $\SU(2)$, a simply connected space. Thus we say $\SU(2)$ is a \emph{universal cover} of $\SO(3)$.
+
+We've just been waffling about spaces and loops, and throwing around terms we haven't defined properly. These vague notions will be made precise in the IID Algebraic Topology course, and we will then (maybe) see that $\SU(2)$ is a \emph{universal cover} of $\SO(3)$.
+
+\section{Triangulations and the Euler number}
+We shall now study the idea of triangulations and the Euler number. We aren't going to do much with them in this course --- we will not even prove that the Euler number is a well-defined number. However, we need Euler numbers in order to state the full Gauss-Bonnet theorem at the end, and the idea of triangulations is useful in the IID Algebraic Topology course for defining simplicial homology. More importantly, the discussion of these concepts is required by the schedules. Hence we will get \emph{some} exposure to these concepts in this chapter.
+
+It is convenient to have an example other than the sphere when discussing triangulations and Euler numbers. So we introduce the \emph{torus}.
+\begin{defi}[(Euclidean) torus]
+ The \emph{(Euclidean) torus} is the set $T = \R^2/\Z^2$ of equivalence classes of $(x, y) \in \R^2$ under the equivalence relation
+ \[
+ (x_1, y_1) \sim (x_2, y_2) \Leftrightarrow x_1 - x_2, y_1 - y_2 \in \Z.
+ \]
+\end{defi}
+It is easy to see this is indeed an equivalence relation. Thus a point in $T$ represented by $(x, y)$ is a coset $(x, y) + \Z^2$ of the subgroup $\Z^2 \leq \R^2$.
+
+Of course, there are many ways to represent a point in the torus. However, for any closed square $Q \subseteq \R^2$ with side length $1$, we can obtain $T$ by identifying the sides
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mred, ->-=0.58] (0, 0) -- +(2, 0);
+ \draw [mred, ->-=0.58] (0, 2) -- +(2, 0);
+
+ \draw [mblue, ->>-=0.63] (0, 0) -- +(0, 2);
+ \draw [mblue, ->>-=0.63] (2, 0) -- +(0, 2);
+
+ \node at (1, 1) {$Q$};
+ \end{tikzpicture}
+\end{center}
+We can define a distance $d$ for all $P_1, P_2 \in T$ to be
+\[
+ d(P_1, P_2) = \min\{\|\mathbf{v}_1 - \mathbf{v}_2\|: \mathbf{v}_i \in \R^2, \mathbf{v}_i + \Z^2 = P_i\}.
+\]
+It is not hard to show this definition makes $(T, d)$ into a metric space. This allows us to talk about things like open sets and continuous functions. We will later show that this is not just a metric space, but a \emph{smooth surface}, after we have defined what that means.
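+
+For example (choosing representatives by hand), if $P_1, P_2 \in T$ are represented by $(0, 0)$ and $\left(\frac{3}{4}, 0\right)$ respectively, then
+\[
+ d(P_1, P_2) = \min_{m, n \in \Z} \left\|\left(\tfrac{3}{4} + m, n\right)\right\| = \frac{1}{4},
+\]
+attained by the representative $\left(-\frac{1}{4}, 0\right)$ of $P_2$. The shortest route from $P_1$ to $P_2$ ``wraps around'' the square rather than staying inside it.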
+
+We write $\mathring{Q}$ for the interior of $Q$. Then the natural map $f: \mathring{Q} \to T$ given by $\mathbf{v} \mapsto \mathbf{v} + \Z^2$ is a bijection onto some open set $U \subseteq T$. In fact, $U$ is just $T \setminus \{\text{two circles meeting in 1 point}\}$, where the two circles are the boundary of the square $Q$.
+
+Now given any point in the torus represented by $P + \Z^2$, we can find a square $Q$ such that $P \in \mathring{Q}$. Then $f: \mathring{Q} \to T$ restricted to an open disk about $P$ is an isometry onto its image, a subset of $T$. Thus we say $d$ is a \emph{locally Euclidean metric}.
+
+One can also think of the torus $T$ as the surface of a doughnut, ``embedded'' in Euclidean space $\R^3$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0,0) ellipse (2 and 1.12);
+ \path[rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0) (-.9,0)--(0,-.56)--(.9,0);
+ \draw[rounded corners=28pt] (-1.1,.1)--(0,-.6)--(1.1,.1);
+ \draw[rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0);
+ \end{tikzpicture}
+\end{center}
+Given this, it is natural to define the distance between two points to be the length of the shortest curve between them on the torus. However, this distance function is \emph{not} the same as what we have here. So it is misleading to think of our locally Euclidean torus as a ``doughnut''.
+
+With one more example in our toolkit, we can start doing what we really want to do. The idea of a triangulation is to cut a space $X$ up into many smaller triangles, since we like triangles. However, we would like to first define what a triangle is.
+\begin{defi}[Topological triangle]
+ A \emph{topological triangle} on $X = S^2$ or $T$ (or any metric space $X$) is a subset $R \subseteq X$ equipped with a homeomorphism $R \to \Delta$, where $\Delta$ is a closed Euclidean triangle in $\R^2$.
+\end{defi}
+Note that a spherical triangle is in fact a topological triangle, using the radial projection to the plane $\R^2$ from the center of the sphere.
+\begin{defi}[Topological triangulation]
+ A \emph{topological triangulation} $\tau$ of a metric space $X$ is a finite collection of topological triangles of $X$ which cover $X$ such that
+ \begin{enumerate}
+ \item For every pair of triangles in $\tau$, either they are disjoint, or they meet in exactly one edge, or meet at exactly one vertex.
+ \item Each edge belongs to exactly two triangles.
+ \end{enumerate}
+\end{defi}
+These notions are useful only if the space $X$ is ``two dimensional'' --- there is no way we can triangulate, say $\R^3$, or a line. We can generalize triangulation to allow higher dimensional ``triangles'', namely topological tetrahedrons, and in general, $n$-simplices, and make an analogous definition of triangulation. However, we will not bother ourselves with this.
+
+\begin{defi}[Euler number]
+ The \emph{Euler number} $e = e(X, \tau)$ of a triangulation $\tau$ of $X$ is
+ \[
+ e = F - E + V,
+ \]
+ where $F$ is the number of triangles; $E$ is the number of edges; and $V$ is the number of vertices.
+\end{defi}
+Note that each edge actually belongs to two triangles, but we will only count it once.
+
+There is one important fact about triangulations from algebraic topology, which we will state without proof.
+\begin{thm}
+ The Euler number $e$ is independent of the choice of triangulation.
+\end{thm}
+So the Euler number $e = e(X)$ is a property of the space $X$ itself, not a particular triangulation.
+
+\begin{eg}
+ Consider the following triangulation of the sphere:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2];
+ \draw [name path=eqf] (-2, 0) arc(180:360:2 and 0.5);
+ \draw [dashed, name path=eqb] (-2, 0) arc(180:0:2 and 0.5);
+ \draw [name path=verf] (0, 2) arc(90:270:0.5 and 2);
+ \draw [dashed, name path=verb] (0, 2) arc(90:-90:0.5 and 2);
+ \node [circ] at (2, 0) {};
+ \node [circ] at (0, 2) {};
+ \node [circ] at (-2, 0) {};
+ \node [circ] at (0, -2) {};
+
+ \path [name intersections={of=eqf and verf,by=F}];
+ \node [circ] at (F) {};
+ \path [name intersections={of=eqb and verb,by=B}];
+ \node [circ] at (B) {};
+ \end{tikzpicture}
+ \end{center}
+ This has $8$ faces, $12$ edges and $6$ vertices. So $e = 2$.
+\end{eg}
+
+\begin{eg}
+ Consider the following triangulation of the torus. Be careful not to double count the edges and vertices at the sides, since the sides are glued together.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mred, ->-=0.55] (0, 0) -- +(3, 0);
+ \draw [mred, ->-=0.55] (0, 3) -- +(3, 0);
+
+ \draw [mblue, ->>-=0.6] (0, 0) -- +(0, 3);
+ \draw [mblue, ->>-=0.6] (3, 0) -- +(0, 3);
+ \foreach \x in {1,2} {
+ \draw (\x, 0) -- +(0, 3);
+ \draw (0, \x) -- +(3, 0);
+ }
+ \draw (0, 0) -- (3, 3);
+ \draw (1, 0) -- (3, 2);
+ \draw (2, 0) -- (3, 1);
+ \draw (0, 1) -- (2, 3);
+ \draw (0, 2) -- (1, 3);
+ \end{tikzpicture}
+ \end{center}
+ This has $18$ faces, $27$ edges and $9$ vertices. So $e = 0$.
+\end{eg}
+In both cases, we did not cut up our space with funny, squiggly lines. Instead, we used ``straight'' lines. These triangles are known as \emph{geodesic triangles}.
+
+\begin{defi}[Geodesic triangle]
+ A \emph{geodesic triangle} is a triangle whose sides are geodesics, i.e.\ paths of shortest distance between two points.
+\end{defi}
+In particular, we used spherical triangles in $S^2$ and Euclidean triangles in $\mathring{Q}$. Triangulations made of geodesic triangles are rather nice. They are so nice that we can actually prove something about them!
+\begin{prop}
+ Every geodesic triangulation of $S^2$ (respectively $T$) has $e = 2$ (respectively $e = 0$).
+\end{prop}
+Of course, we know this is true for \emph{any} triangulation, but it is difficult to prove that without algebraic topology.
+
+\begin{proof}
+ For any triangulation $\tau$, we denote the faces by $\Delta_1, \cdots, \Delta_F$, and write $\tau_i = \alpha_i + \beta_i + \gamma_i$ for the sum of the interior angles of $\Delta_i$ (for $i = 1, \cdots, F$).
+
+ Then we have
+ \[
+ \sum \tau_i = 2 \pi V,
+ \]
+ since the total angle around each vertex is $2\pi$. Also, each triangle has three edges, and each edge belongs to exactly two triangles. So $3F = 2E$. We write this in a more convenient form:
+ \[
+ F = 2E - 2F.
+ \]
+ How we continue depends on whether we are on the sphere or the torus.
+ \begin{itemize}
+ \item For the sphere, Gauss-Bonnet for the sphere says the area of $\Delta_i$ is $\tau_i - \pi$. Since the area of the sphere is $4\pi$, we know
+ \begin{align*}
+ 4\pi &= \sum \area(\Delta_i) \\
+ &= \sum (\tau_i - \pi) \\
+ &= 2\pi V - F\pi \\
+ &= 2\pi V - (2E - 2F)\pi \\
+ &= 2\pi (F - E + V).
+ \end{align*}
+ So $F - E + V = 2$.
+ \item For the torus, we have $\tau_i = \pi$ for every face in $\mathring{Q}$. So
+ \[
+ 2\pi V = \sum \tau_i = \pi F.
+ \]
+ So
+ \[
+ 2V = F = 2E - 2F.
+ \]
+ So we get
+ \[
+ 2(F - E + V) = 0,
+ \]
+ as required.\qedhere
+ \end{itemize}
+\end{proof}
+Note that in the definition of triangulation, we decomposed $X$ into topological triangles. We can also use decompositions by topological polygons, but they are slightly more complicated, since we have to worry about convexity. However, apart from this, everything works out well. In particular, the previous proposition also holds, and we have Euler's formula for $S^2$: $V - E + F = 2$ for any polygonal decomposition of $S^2$. This is not hard to prove, and is left as an exercise on the example sheet.
+
+\section{Hyperbolic geometry}
+At the beginning of the course, we studied Euclidean geometry, which was not hard, because we already knew about it. Later on, we studied spherical geometry. That also wasn't too bad, because we can think of $S^2$ concretely as a subset of $\R^3$.
+
+We are next going to study hyperbolic geometry. Historically, hyperbolic geometry was created when people tried to prove Euclid's parallel postulate (that given a line $\ell$ and a point $P \not\in \ell$, there exists a unique line $\ell'$ containing $P$ that does not intersect $\ell$). Instead of proving the parallel postulate, they managed to create a new geometry where this is false, and this is hyperbolic geometry.
+
+Unfortunately, hyperbolic geometry is much more complicated, since we cannot directly visualize it as a subset of $\R^3$. Instead, we need to develop the machinery of a \emph{Riemannian metric} in order to properly describe hyperbolic geometry. In a nutshell, this allows us to take a subset of $\R^2$ and measure distances in it in a funny way.
+
+\subsection{Review of derivatives and chain rule}
+We start by reviewing some facts about taking derivatives, and make explicit the notation we will use.
+
+\begin{defi}[Smooth function]
+ Let $U \subseteq \R^n$ be open, and $f = (f_1, \cdots, f_m): U \to \R^m$. We say $f$ is \emph{smooth} (or $C^\infty$) if each $f_i$ has continuous partial derivatives of each order. In particular, a $C^\infty$ map is differentiable, with continuous first-order partial derivatives.
+\end{defi}
+
+\begin{defi}[Derivative]
+ The \emph{derivative} of a function $f: U \to \R^m$ at a point $a \in U$ is a linear map $\d f_a: \R^n \to \R^m$ (also written as $\D f(a)$ or $f'(a)$) such that
+ \[
+ \lim_{h \to 0} \frac{\|f(a + h) - f(a) - \d f_a \cdot h\|}{\|h\|} = 0,
+ \]
+ where $h \in \R^n$.
+
+ If $m = 1$, then $\d f_a$ is expressed as $\left(\frac{\partial f}{\partial x_1}(a), \cdots, \frac{\partial f}{\partial x_n}(a)\right)$, and the linear map is given by
+ \[
+ (h_1, \cdots, h_n) \mapsto \sum_{i = 1}^n \frac{\partial f}{\partial x_i}(a) h_i,
+ \]
+ i.e.\ the dot product. For a general $m$, this vector becomes a matrix. The \emph{Jacobian matrix} is
+ \[
+ J(f)_a = \left(\frac{\partial f_i}{\partial x_j}(a)\right),
+ \]
+ with the linear map given by matrix multiplication, namely
+ \[
+ h \mapsto J(f)_a \cdot h.
+ \]
+\end{defi}
+
+\begin{eg}
+ Recall that a holomorphic (analytic) function of a complex variable $f: U \subseteq \C \to \C$ has a derivative $f'$, defined by
+ \[
+ \lim_{w \to 0} \frac{|f(z + w) - f(z) - f'(z) w|}{|w|} = 0.
+ \]
+ We let $f'(z) = a + ib$ and $w = h_1 + i h_2$. Then we have
+ \[
+ f'(z) w = ah_1 - bh_2 + i(ah_2 + bh_1).
+ \]
+ We identify $\R^2 = \C$. Then $f: U \subseteq \R^2 \to \R^2$ has a derivative $\d f_z: \R^2 \to \R^2$ given by
+ \[
+ \begin{pmatrix}
+ a & -b\\
+ b & a
+ \end{pmatrix}.
+ \]
+\end{eg}
+
+We're also going to use the chain rule quite a lot. So we shall write it out explicitly.
+\begin{prop}[Chain rule]
+ Let $U \subseteq \R^n$ and $V\subseteq \R^p$. Let $f: U \to \R^m$ and $g: V \to U$ be smooth. Then $f \circ g: V \to \R^m$ is smooth and has a derivative
+ \[
+ \d (f \circ g)_p = (\d f)_{g(p)} \circ (\d g)_p.
+ \]
+ In terms of the Jacobian matrices, we get
+ \[
+ J(f \circ g)_p = J(f)_{g(p)}J(g)_p.
+ \]
+\end{prop}
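+
+As a quick illustration (this example is ours, not from the schedules), take $g: (0, \infty) \times \R \to \R^2$ given by $g(r, \theta) = (r\cos \theta, r \sin \theta)$, and let $f: \R^2 \to \R$ be smooth. Then the chain rule in Jacobian form reads
+\[
+ J(f \circ g)_{(r, \theta)} =
+ \begin{pmatrix}
+ \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y}
+ \end{pmatrix}
+ \begin{pmatrix}
+ \cos \theta & -r \sin \theta\\
+ \sin \theta & r \cos \theta
+ \end{pmatrix},
+\]
+recovering the familiar polar-coordinate formulae $\frac{\partial (f \circ g)}{\partial r} = f_x \cos \theta + f_y \sin \theta$ and $\frac{\partial (f \circ g)}{\partial \theta} = r(f_y \cos \theta - f_x \sin \theta)$, where $f_x, f_y$ are evaluated at $g(r, \theta)$.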
+
+\subsection{Riemannian metrics}
+Finally, we get to the idea of a Riemannian metric. The basic idea of a Riemannian metric is not too unfamiliar. Presumably, we have all seen maps of the Earth, where we try to draw the spherical Earth on a piece of paper, i.e.\ a subset of $\R^2$. However, this does not behave like $\R^2$. You cannot measure distances on Earth by placing a ruler on the map, since distances are distorted. Instead, you have to find the coordinates of the points (e.g.\ the longitude and latitude), and then plug them into some complicated formula. Similarly, straight lines on the map are not really straight (spherical) lines on Earth.
+
+We really should not think of the Earth as a subset of $\R^2$. All we have done is to ``force'' the Earth to live in $\R^2$ to get a convenient way of depicting it, as well as a convenient system of labelling points (in many map projections, the $x$ and $y$ axes are the longitude and latitude).
+
+This is the idea of a Riemannian metric. To describe some complicated surface, we take a subset $U$ of $\R^2$, and define a new way of measuring distances, angles and areas on $U$. All this information is packed into an entity known as the \emph{Riemannian metric}.
+
+\begin{defi}[Riemannian metric]
+ We use coordinates $(u, v) \in \R^2$. We let $V \subseteq \R^2$ be open. Then a Riemannian metric on $V$ is defined by giving $C^{\infty}$ functions $E, F, G: V \to \R$ such that
+ \[
+ \begin{pmatrix}
+ E(P) & F(P)\\
+ F(P) & G(P)
+ \end{pmatrix}
+ \]
+ is a positive definite matrix for all $P \in V$.
+
+ Alternatively, this is a smooth function that gives a $2\times 2$ symmetric positive definite matrix, i.e.\ an inner product $\bra \ph, \ph\ket_P$, for each point in $V$. By definition, if $\mathbf{e}_1, \mathbf{e}_2$ are the standard basis, then
+ \begin{align*}
+ \bra \mathbf{e}_1, \mathbf{e}_1\ket_P &= E(P)\\
+ \bra \mathbf{e}_1, \mathbf{e}_2\ket_P &= F(P)\\
+ \bra \mathbf{e}_2, \mathbf{e}_2\ket_P &= G(P).
+ \end{align*}
+\end{defi}
+\begin{eg}
+ We can pick $E = G = 1$ and $F = 0$. Then this is just the standard Euclidean inner product.
+\end{eg}
+
+As mentioned, we should not imagine $V$ as a subset of $\R^2$. Instead, we should think of it as an abstract two-dimensional surface, with some coordinate system given by a subset of $\R^2$. However, this coordinate system is just a convenient way of labelling points. They do not represent any notion of distance. For example, $(0, 1)$ need not be closer to $(0, 2)$ than to $(7, 0)$. These are just abstract labels.
+
+With this in mind, $V$ does not have any intrinsic notion of distances, angles and areas. However, we do want these notions. We can certainly \emph{write down} things like the difference of two points, or even compute the derivative of a function. However, the numbers we get are not meaningful, since we can easily use a different coordinate system (e.g.\ by scaling the axes) and get a different number. They have to be interpreted with the \emph{Riemannian metric}. This tells us how to measure these things, via an inner product ``that varies with space''. This variation in space is not an oddity arising from us not being able to make up our minds, but a consequence of having ``forced'' our space to lie in $\R^2$. Inside $V$, going from $(0, 1)$ to $(0, 2)$ might be very different from going from $(5, 5)$ to $(6, 5)$, since coordinates don't mean anything. Hence our inner product needs to measure ``going from $(0, 1)$ to $(0, 2)$'' differently from ``going from $(5, 5)$ to $(6, 5)$'', and must vary with space.
+
+We'll soon come to defining how this inner product gives rise to the notion of distance and similar stuff. Before that, we want to understand what we can put into the inner product $\bra\ph, \ph\ket_P$. Obviously these would be vectors in $\R^2$, but where do these vectors come from? What are they supposed to represent?
+
+The answer is ``directions'' (more formally, tangent vectors). For example, $\bra \mathbf{e}_1, \mathbf{e}_1\ket_P$ will tell us how far we actually are going if we move in the direction of $\mathbf{e}_1$ from $P$. Note that we say ``move in the direction of $\mathbf{e}_1$'', not ``move by $\mathbf{e}_1$''. We really should read this as ``if we move by $h\mathbf{e}_1$ for some small $h$, then the distance covered is $h \sqrt{\bra \mathbf{e}_1, \mathbf{e}_1\ket_P}$''. This statement is to be interpreted along the same lines as ``if we vary $x$ by some small $h$, then the value of $f$ will vary by $f'(x) h$''. Notice how the inner product allows us to translate a length in $\R^2$ (namely $\|h\mathbf{e}_1\|_{\mathrm{eucl}} = h$) into the actual length in $V$.
+
+What we needed for this is just the norm induced by the inner product. Since what we have is the whole inner product, we in fact can define more interesting things such as areas and angles. We will formalize these ideas very soon, after getting some more notation out of the way.
+
+Often, instead of specifying the three functions separately, we write the metric as
+\[
+ E\;\d u^2 + 2F \;\d u \;\d v + G \;\d v^2.
+\]
+This notation has some mathematical meaning. We can view the coordinates as smooth functions $u: V \to \R$, $v: V \to \R$. Since they are smooth, they have derivatives. They are linear maps
+\begin{align*}
+ \d u_P: \R^2 &\to \R & \d v_P: \R^2 &\to \R\\
+ (h_1, h_2) &\mapsto h_1 & (h_1, h_2) &\mapsto h_2.
+\end{align*}
+These formulae are valid for all $P \in V$. So we just write $\d u$ and $\d v$ instead. Since they are maps $\R^2 \to \R$, we can view them as vectors in the dual space, $\d u, \d v \in (\R^2)^*$. Moreover, they form a basis for the dual space. In particular, they are the dual basis to the standard basis $\mathbf{e}_1, \mathbf{e}_2$ of $\R^2$.
+
+Then we can consider $\d u^2, \d u\;\d v$ and $\d v^2$ as \emph{bilinear forms} on $\R^2$. For example,
+\begin{align*}
+ \d u^2 (\mathbf{h}, \mathbf{k}) &= \d u(\mathbf{h}) \d u(\mathbf{k})\\
+ \d u \;\d v(\mathbf{h}, \mathbf{k}) &= \frac{1}{2} (\d u(\mathbf{h}) \d v(\mathbf{k}) + \d u(\mathbf{k}) \d v(\mathbf{h}))\\
+ \d v^2 (\mathbf{h}, \mathbf{k}) &= \d v(\mathbf{h}) \d v(\mathbf{k})
+\end{align*}
+These have matrices
+\[
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 0
+ \end{pmatrix},
+ \quad
+ \begin{pmatrix}
+ 0 & \frac{1}{2}\\
+ \frac{1}{2} & 0
+ \end{pmatrix},
+ \quad
+ \begin{pmatrix}
+ 0 & 0\\
+ 0 & 1
+ \end{pmatrix}
+\]
+respectively. Then we indeed have
+\[
+ E \;\d u^2 + 2F \;\d u\;\d v + G \;\d v^2 =
+ \begin{pmatrix}
+ E & F\\
+ F & G
+ \end{pmatrix}.
+\]
+We can now start talking about what this is good for. In standard Euclidean space, we have a notion of length and area. A Riemannian metric also gives a notion of length and area.
+
+\begin{defi}[Length]
+ The \emph{length} of a smooth curve $\gamma = (\gamma_1, \gamma_2): [0, 1] \to V$ is defined as
+ \[
+ \int_0^1 \left(E \dot{\gamma}_1^2 + 2F \dot{\gamma}_1 \dot{\gamma}_2 + G \dot{\gamma}_2^2 \right)^{\frac{1}{2}}\;\d t,
+ \]
+ where $E = E(\gamma_1(t), \gamma_2(t))$ etc. We can also write this as
+ \[
+ \int_0^1 \bra \dot{\gamma}, \dot{\gamma}\ket_{\gamma(t)} ^{\frac{1}{2}} \;\d t.
+ \]
+\end{defi}
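+
+For instance, with the Euclidean metric $E = G = 1$, $F = 0$ and the straight path $\gamma(t) = (at, bt)$, the formula gives
+\[
+ \int_0^1 \left(\dot{\gamma}_1^2 + \dot{\gamma}_2^2\right)^{\frac{1}{2}}\;\d t = \int_0^1 \sqrt{a^2 + b^2}\;\d t = \sqrt{a^2 + b^2},
+\]
+so the definition recovers the usual Euclidean length.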
+\begin{defi}[Area]
+ The \emph{area} of a region $W \subseteq V$ is defined as
+ \[
+ \int_W (EG - F^2)^{\frac{1}{2}} \;\d u\;\d v
+ \]
+ when this integral exists.
+\end{defi}
+In the area formula, what we are integrating is the square root of the determinant of the metric. This determinant is also known as the Gram determinant.
+
+We define the distance between two points $P$ and $Q$ to be the infimum of the lengths of all curves from $P$ to $Q$. It is an exercise on the second example sheet to prove that this is indeed a metric.
+
+\begin{eg}
+ We will not do this in full detail --- the details are to be filled in in the third example sheet.
+
+ Let $V = \R^2$, and define the Riemannian metric by
+ \[
+ \frac{4(\d u^2 + \d v^2)}{(1 + u^2 + v^2)^2}.
+ \]
+ This looks somewhat arbitrary, but we shall see this actually makes sense by identifying $\R^2$ with the sphere by the stereographic projection $\pi: S^2 \setminus \{N\} \to \R^2$.
+
+ For every point $P \in S^2$, the tangent plane to $S^2$ at $P$ is given by $\{\mathbf{x} \in \R^3: \mathbf{x}\cdot \overrightarrow{OP} = 0\}$. Note that we translated it so that $P$ is the origin, so that we can view it as a vector space (points on the tangent plane are points ``from $P$''). Now given any two tangent vectors $\mathbf{x}_1, \mathbf{x}_2 \perp \overrightarrow{OP}$, we can take the inner product $\mathbf{x}_1 \cdot \mathbf{x}_2$ in $\R^3$.
+
+ We want to say this inner product is ``the same as'' the inner product provided by the Riemannian metric on $\R^2$. We cannot just require
+ \[
+ \mathbf{x}_1 \cdot \mathbf{x}_2 = \bra \mathbf{x}_1, \mathbf{x}_2 \ket_{\pi(P)},
+ \]
+ since this makes no sense at all. Apart from the obvious problem that $\mathbf{x}_1, \mathbf{x}_2$ have three components but the Riemannian metric takes in vectors of two components, we know that $\mathbf{x}_1$ and $\mathbf{x}_2$ are vectors tangent to $P \in S^2$, but to apply the Riemannian metric, we need the corresponding tangent vectors at $\pi(P) \in \R^2$. To do so, we act by $\d \pi_P$. So what we want is
+ \[
+ \mathbf{x}_1 \cdot \mathbf{x}_2 = \bra \d \pi_P(\mathbf{x}_1), \d \pi_P(\mathbf{x}_2)\ket_{\pi(P)}.
+ \]
+ Verification of this equality is left as an exercise on the third example sheet. It is helpful to notice
+ \[
+ \pi^{-1}(u, v) = \frac{(2u, 2v, u^2 + v^2 - 1)}{1 + u^2 + v^2}.
+ \]
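+
+ As a quick sanity check, this formula does land on the sphere, since
+ \[
+ (2u)^2 + (2v)^2 + (u^2 + v^2 - 1)^2 = (1 + u^2 + v^2)^2,
+ \]
+ so the right-hand side has unit norm.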
+\end{eg}
+
+In some sense, we say the surface $S^2 \setminus \{N\}$ is ``isometric'' to $\R^2$ via the stereographic projection $\pi$. We can define the notion of isometry between two open sets with Riemannian metrics in general.
+
+\begin{defi}[Isometry]
+ Let $V, \tilde{V} \subseteq \R^2$ be open sets endowed with Riemannian metrics, denoted as $\bra \ph, \ph\ket_P$ and $\bra \ph, \ph\ket^\sim_Q$ for $P \in V, Q \in \tilde{V}$ respectively.
+
+ A diffeomorphism (i.e.\ $C^\infty$ map with $C^\infty$ inverse) $\varphi: V \to \tilde{V}$ is an \emph{isometry} if for every $P \in V$ and $\mathbf{x}, \mathbf{y} \in \R^2$, we get
+ \[
+ \bra \mathbf{x}, \mathbf{y}\ket_P = \bra \d \varphi_P(\mathbf{x}), \d\varphi_P(\mathbf{y})\ket^\sim_{\varphi(P)}.
+ \]
+\end{defi}
+Again, in the definition, $\mathbf{x}$ and $\mathbf{y}$ represent tangent vectors at $P \in V$, and on the right of the equality, we need to apply $\d \varphi_P$ to get tangent vectors at $\varphi(P) \in \tilde{V}$.
+
+How are we sure this indeed is the right definition? We would, at the very least, expect isometries to preserve lengths. Let us check that this is indeed the case. If $\gamma: [0, 1] \to V$ is a $C^\infty$ curve, the composition $\tilde{\gamma} = \varphi \circ \gamma: [0, 1] \to \tilde{V}$ is a path in $\tilde{V}$. We let $P = \gamma(t)$, and hence $\varphi(P) = \tilde{\gamma}(t)$. Then
+\[
+ \bra \tilde{\gamma}'(t), \tilde{\gamma}'(t)\ket_{\tilde{\gamma}(t)}^\sim = \bra \d \varphi_P(\gamma'(t)), \d \varphi_P(\gamma'(t))\ket_{\varphi(P)}^\sim = \bra \gamma'(t), \gamma'(t)\ket_{P}.
+\]
+Integrating, we obtain
+\[
+ \length(\tilde{\gamma}) = \length(\gamma) = \int_0^1 \bra \gamma'(t), \gamma'(t)\ket_{\gamma(t)}^{\frac{1}{2}} \;\d t.
+\]
+\subsection{Two models for the hyperbolic plane}
+That's enough preparation. We can start talking about the hyperbolic plane. We will in fact provide two models of the hyperbolic plane. Each model has its own strengths, and often proving something is significantly easier in one model than in the other.
+
+We start with the disk model.
+\begin{defi}[Poincar\'e disk model]
+ The \emph{(Poincar\'e) disk model} for the hyperbolic plane is given by the unit disk
+ \[
+ D \subseteq \C\cong \R^2,\quad D = \{\zeta \in \C: |\zeta| < 1\},
+ \]
+ and a Riemannian metric on this disk given by
+ \[
+ \frac{4(\d u^2 + \d v^2)}{(1 - u^2 - v^2)^2} = \frac{4 |\d \zeta|^2}{(1 - |\zeta|^2)^2},\tag{$*$}
+ \]
+ where $\zeta = u + iv$.
+\end{defi}
+Note that this is similar to our previous metric for the sphere, but we have $1 - u^2 - v^2$ instead of $1 + u^2 + v^2$.
+
+To interpret the term $|\d \zeta|^2$, we can either formally set $|\d \zeta|^2 = \d u^2 + \d v^2$, or interpret it as the derivative $\d \zeta = \d u + i \d v: \C \to \C$.
+
+We see that $(*)$ is a scaling of the standard Riemannian metric by a factor depending on the polar radius $r = |\zeta|$. The distances are scaled by $\frac{2}{1 - r^2}$, while the areas are scaled by $\frac{4}{(1 - r^2)^2}$. Note, however, that the angles in the hyperbolic disk are the same as those in $\R^2$. This is in general true for metrics that are just scaled versions of the Euclidean metric (exercise).
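+
+For example, the Euclidean segment from $0$ to $r \in (0, 1)$ along the real axis has hyperbolic length
+\[
+ \int_0^r \frac{2\;\d t}{1 - t^2} = \log \frac{1 + r}{1 - r},
+\]
+which tends to $\infty$ as $r \to 1$. So the boundary circle is ``infinitely far away'' from the center.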
+
+Alternatively, we can define the hyperbolic plane with the upper-half plane.
+\begin{defi}[Upper half-plane]
+ The \emph{upper half-plane} is
+ \[
+ H = \{z \in \C: \Im(z) > 0\}.
+ \]
+\end{defi}
+What is the appropriate Riemannian metric to put on the upper half plane? We know $D$ bijects to $H$ via the M\"obius transformation
+\[
+ \varphi: \zeta \in D \mapsto i \frac{1 + \zeta}{1 - \zeta} \in H.
+\]
+This bijection is in fact a conformal equivalence, as defined in IB Complex Analysis/Methods. The idea is to pick a metric on $H$ such that this map is an isometry. Then $H$ together with this Riemannian metric will be the upper half-plane model for the hyperbolic plane.
+
+To avoid confusion, we reserve the letter $z$ for points $z \in H$, with $z = x + iy$, while we use $\zeta$ for points $\zeta \in D$, and write $\zeta = u + iv$. Then we have
+\[
+ z = i\frac{1 + \zeta}{1 - \zeta},\quad \zeta = \frac{z - i}{z + i}.
+\]
+Instead of trying to convert the Riemannian metric on $D$ to $H$, which would be a complete algebraic horror, we first try converting the Euclidean metric. The Euclidean metric on $\R^2 = \C$ is given by
+\[
+ \bra w_1, w_2\ket = \Re(w_1 \overline{w_2}) = \frac{1}{2}(w_1 \bar{w}_2 + \bar{w}_1 w_2).
+\]
+So if $\bra\ph, \ph\ket_{\mathrm{eucl}}$ is the Euclidean metric at $\zeta$, then at $z$ such that $\zeta = \frac{z - i}{z + i}$, we require (by definition of isometry)
+\[
+ \bra w, v\ket_z = \left\bra \frac{\d \zeta}{\d z} w, \frac{\d \zeta}{\d z} v\right\ket_{\mathrm{eucl}} = \left|\frac{\d \zeta}{\d z}\right|^2 \Re(w \bar{v}) = \left|\frac{\d \zeta}{\d z}\right|^2 (w_1 v_1 + w_2 v_2),
+\]
+where $w = w_1 + iw_2, v = v_1 + i v_2$.
+
+Hence, on $H$, we obtain the Riemannian metric
+\[
+ \left|\frac{\d \zeta}{\d z}\right|^2 (\d x^2 + \d y^2).
+\]
+We can compute
+\[
+ \frac{\d \zeta}{\d z} = \frac{1}{z + i} - \frac{z - i}{(z + i)^2} = \frac{2i}{(z + i)^2}.
+\]
+This is what we would get if we had started with the Euclidean metric. If we start with the hyperbolic metric on $D$, we get an additional scaling factor. We can do some computations to get
+\[
+ 1 - |\zeta|^2 = 1 - \frac{|z - i|^2}{|z + i|^2},
+\]
+and hence
+\[
+ \frac{1}{1 - |\zeta|^2} = \frac{|z + i|^2}{|z + i|^2 - |z - i|^2} = \frac{|z + i|^2}{4 \Im z}.
+\]
+Putting all these together, the metric on $H$ corresponding to $\frac{4 |\d \zeta|^2}{(1 - |\zeta|^2)^2}$ on $D$ is
+\[
+ 4 \cdot \frac{4}{|z + i|^4} \cdot \left(\frac{|z + i|^2}{4 \Im z}\right)^2 \cdot |\d z|^2 = \frac{|\d z|^2}{(\Im z)^2} = \frac{\d x^2 + \d y^2}{y^2}.
+\]
+We now use all these ingredients to define the upper half-plane model.
+\begin{defi}[Upper half-plane model]
+ The \emph{upper half-plane} model of the hyperbolic plane is the upper half-plane $H$ with the Riemannian metric
+ \[
+ \frac{\d x^2 + \d y^2}{y^2}.
+ \]
+\end{defi}
+The lengths on $H$ are scaled (from the Euclidean one) by $\frac{1}{y}$, while the areas are scaled by $\frac{1}{y^2}$. Again, the angles are the same.
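We can also sanity-check the whole computation numerically (an illustrative sketch, not part of the notes): transporting the disk metric density $\frac{4}{(1 - |\zeta|^2)^2}$ through $\zeta = \frac{z - i}{z + i}$ should give exactly $\frac{1}{(\Im z)^2}$ at every $z \in H$.

```python
# Illustrative check: the pullback of the hyperbolic disk metric to H
# should have density 1/(Im z)^2.
def density_from_disk(z):
    zeta = (z - 1j) / (z + 1j)
    dzeta_dz = 2j / (z + 1j) ** 2
    return 4 * abs(dzeta_dz) ** 2 / (1 - abs(zeta) ** 2) ** 2

for z in [1j, 2 + 3j, -1 + 0.5j]:
    assert abs(density_from_disk(z) - 1 / z.imag ** 2) < 1e-9
```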
+
+Note that we did not have to go through so much mess in order to define the sphere. This is since we can easily ``embed'' the surface of the sphere in $\R^3$. However, there is no easy surface in $\R^3$ that gives us the hyperbolic plane. As we don't have an actual prototype, we need to rely on the more abstract data of a Riemannian metric in order to work with hyperbolic geometry.
+
+We are next going to study the geometry of $H$. We claim that the following group of M\"obius maps are isometries of $H$:
+\[
+ \PSL(2, \R) = \left\{z \mapsto \frac{az + b}{cz + d}: a, b, c, d \in \R, ad - bc = 1\right\}.
+\]
+Note that the coefficients have to be real, not complex.
+\begin{prop}
+ The elements of $\PSL(2, \R)$ are isometries of $H$; in particular, they preserve the lengths of curves.
+\end{prop}
+
+\begin{proof}
+ It is easy to check that $\PSL(2, \R)$ is generated by
+ \begin{enumerate}
+ \item Translations $z \mapsto z + a$ for $a \in \R$
+ \item Dilations $z \mapsto az$ for $a > 0$
+ \item The single map $z \mapsto -\frac{1}{z}$.
+ \end{enumerate}
+ So it suffices to show each of these preserves the metric $\frac{|\d z|^2}{y^2}$, where $z = x + iy$. The first two are straightforward to check: plugging them into the formula, we see the metric does not change.
+
+ We now look at the last one, given by $z \mapsto -\frac{1}{z}$. The derivative at $z$ is
+ \[
+ f'(z) = \frac{1}{z^2}.
+ \]
+ So we get
+ \[
+ \d z \mapsto \d\left(-\frac{1}{z}\right) = \frac{\d z}{z^2}.
+ \]
+ So
+ \[
+ \left|\d\left(-\frac{1}{z}\right)\right|^2 = \frac{|\d z|^2}{|z|^4}.
+ \]
+ We also have
+ \[
+ \Im\left(-\frac{1}{z}\right) = -\frac{1}{|z|^2} \Im\bar{z} = \frac{\Im z}{|z|^2}.
+ \]
+ So
+ \[
+ \frac{|\d(-1/z)|^2}{\Im(-1/z)^2} = \left(\frac{|\d z|^2}{|z|^4}\right)\big/ \left(\frac{(\Im z)^2}{|z|^4}\right) = \frac{|\d z|^2}{(\Im z)^2}.
+ \]
+ So this is an isometry, as required.
+\end{proof}
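The isometry condition used in this proof, namely $|f'(z)|^2 / (\Im f(z))^2 = 1/(\Im z)^2$ for a holomorphic map $f$ of $H$, is easy to spot-check numerically for each generator (an illustrative sketch, not part of the notes):

```python
# Illustrative spot-check: each generator of PSL(2, R) satisfies the
# isometry condition |f'(z)|^2 / Im(f(z))^2 = 1 / Im(z)^2 on H.
def is_isometric_at(f, fprime, z, tol=1e-12):
    lhs = abs(fprime(z)) ** 2 / f(z).imag ** 2
    return abs(lhs - 1 / z.imag ** 2) < tol

z0 = 1.3 + 0.7j
assert is_isometric_at(lambda z: z + 2.5, lambda z: 1, z0)          # translation
assert is_isometric_at(lambda z: 3 * z, lambda z: 3, z0)            # dilation
assert is_isometric_at(lambda z: -1 / z, lambda z: 1 / z ** 2, z0)  # z -> -1/z
```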
+
+Note that each $z \mapsto az + b$ with $a > 0, b \in \R$ is in $\PSL(2, \R)$. Also, we can use maps of this form to send any point to any other point. So $\PSL(2, \R)$ acts transitively on $H$. Moreover, every element of $\PSL(2, \R)$ maps $\R \cup \{\infty\}$ to itself.
+
+Recall also that each M\"obius transformation preserves circles and lines in the complex plane, as well as angles between circles/lines. In particular, consider the line $L = i\R$, which meets $\R$ perpendicularly, and let $g \in \PSL(2, \R)$. Then the image is either a circle centered at a point in $\R$, or a straight line perpendicular to $\R$.
+
+We let $L^+ = L \cap H = \{it: t > 0\}$. Then $g(L^+)$ is either a vertical half-line or a semi-circle that ends in $\R$.
+
+\begin{defi}[Hyperbolic lines]
+ \emph{Hyperbolic lines} in $H$ are vertical half-lines or semicircles ending in $\R$.
+\end{defi}
+We will now prove some lemmas to justify why we call these hyperbolic lines.
+
+\begin{lemma}
+ Given any two distinct points $z_1, z_2 \in H$, there exists a unique hyperbolic line through $z_1$ and $z_2$.
+\end{lemma}
+
+\begin{proof}
+ This is clear if $\Re z_1 = \Re z_2$ --- we just pick the vertical half-line through them, and it is clear this is the only possible choice.
+
+ Otherwise, if $\Re z_1 \not= \Re z_2$, then we can find the desired circle as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (3, 0) node [above] {$\R$};
+ \node [circ] at (0, 0) {};
+
+ \node [circ] (z1) at (-2, 1) {};
+ \node [circ] (z2) at (1, 2) {};
+ \node [left] at (z1) {$z_1$};
+ \node [above] at (z2) {$z_2$};
+
+ \draw (-0.2, 1.6) -- (-0.1, 1.3) -- (-0.4, 1.2);
+
+ \draw (z1) -- (z2);
+ \draw [dashed] (0.5, -1.5) -- (-1, 3);
+
+ \draw (2.2351, 0) arc(0:180:2.2351);
+ \end{tikzpicture}
+ \end{center}
+ It is also clear this is the only possible choice.
+\end{proof}
+
+\begin{lemma}
+ $\PSL(2, \R)$ acts transitively on the set of hyperbolic lines in $H$.
+\end{lemma}
+
+\begin{proof}
+ It suffices to show that for each hyperbolic line $\ell$, there is some $g \in \PSL(2, \R)$ such that $g(\ell)= L^+$. This is clear when $\ell$ is a vertical half-line, since we can just apply a horizontal translation.
+
+ If it is a semicircle, suppose it has end-points $s < t \in \R$. Then consider
+ \[
+ g(z) = \frac{z - t}{z - s}.
+ \]
+ This has determinant $-s + t > 0$. So $g \in \PSL(2, \R)$. Then $g(t) = 0$ and $g(s) = \infty$. Then we must have $g(\ell) = L^+$, since $g(\ell)$ is a hyperbolic line, and the only hyperbolic lines passing through $\infty$ are the vertical half-lines. So done.
+\end{proof}
+Moreover, we can achieve $g(s) = 0$ and $g(t) = \infty$ by composing with $-\frac{1}{z}$. Also, for any $P \in \ell$ not on the endpoints, we can construct a $g$ such that $g(P) = i \in L^+$, by composing with $z \mapsto az$. So the isometries act transitively on pairs $(\ell, P)$, where $\ell$ is a hyperbolic line and $P \in \ell$.
+
+\begin{defi}[Hyperbolic distance]
+ For points $z_1, z_2 \in H$, the \emph{hyperbolic distance} $\rho(z_1, z_2)$ is the length of the segment $[z_1, z_2] \subseteq \ell$ of the hyperbolic line through $z_1, z_2$ (parametrized monotonically).
+\end{defi}
+Thus $\PSL(2, \R)$ preserves hyperbolic distances. Similar to Euclidean space and the sphere, we show these lines minimize distance.
+
+\begin{prop}
+ If $\gamma: [0, 1] \to H$ is a piecewise $C^1$-smooth curve with $\gamma(0) = z_1, \gamma(1) = z_2$, then $\length(\gamma) \geq \rho(z_1, z_2)$, with equality iff $\gamma$ is a monotonic parametrisation of $[z_1, z_2] \subseteq \ell$, where $\ell$ is the hyperbolic line through $z_1$ and $z_2$.
+\end{prop}
+
+\begin{proof}
+ We pick an isometry $g \in \PSL(2, \R)$ so that $g(\ell) = L^+$. So without loss of generality, we assume $z_1 = iu$ and $z_2 = iv$, with $0 < u < v$.
+
+ We decompose the path as $\gamma(t) = x(t) + iy(t)$. Then we have
+ \begin{align*}
+ \length(\gamma) &= \int_0^1 \frac{1}{y}\sqrt{\dot{x}^2 + \dot{y}^2} \;\d t \\
+ &\geq \int_0^1 \frac{|\dot{y}|}{y}\;\d t\\
+ &\geq \left|\int_0^1 \frac{\dot{y}}{y}\;\d t\right| \\
+ &= [\log y(t)]_0^1\\
+ &= \log\left(\frac{v}{u}\right)
+ \end{align*}
+ This calculation also tells us that $\rho(z_1, z_2) = \log\left(\frac{v}{u}\right)$. So $\length(\gamma) \geq \rho(z_1, z_2)$ with equality if and only if $x(t) = 0$ (hence $\gamma \subseteq L^+$) and $\dot{y} \geq 0$ (hence monotonic).
+\end{proof}
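This proposition lends itself to a numerical illustration (not part of the notes): approximating the hyperbolic length of a discretized path by summing $|\Delta z| / \Im z$ over small steps, the vertical segment from $iu$ to $iv$ has length about $\log(v/u)$, and any detour is strictly longer.

```python
import math

# Illustrative sketch: discrete approximation of hyperbolic length in H.
def hyp_length(points):
    return sum(abs(b - a) / ((a.imag + b.imag) / 2)
               for a, b in zip(points, points[1:]))

u, v, n = 1.0, 3.0, 20000
vertical = [1j * (u + (v - u) * k / n) for k in range(n + 1)]
assert abs(hyp_length(vertical) - math.log(v / u)) < 1e-6

# A path from iu to iv that wanders off the vertical line is longer.
detour = [1j * (u + (v - u) * k / n) + 0.5 * math.sin(math.pi * k / n)
          for k in range(n + 1)]
assert hyp_length(detour) > math.log(v / u)
```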
+
+\begin{cor}[Triangle inequality]
+ Given three points $z_1, z_2, z_3 \in H$, we have
+ \[
+ \rho(z_1, z_3) \leq \rho(z_1, z_2) + \rho(z_2, z_3),
+ \]
+ with equality if and only if $z_2$ lies between $z_1$ and $z_3$.
+\end{cor}
+Hence, $(H, \rho)$ is a metric space.
+
+\subsection{Geometry of the hyperbolic disk}
+So far, we have worked with the upper half-plane model. This is since the upper half-plane model is more convenient for these calculations. However, sometimes the disk model is more convenient. So we also want to understand that as well.
+
+Recall that $\zeta \in D \mapsto z = i\frac{1 + \zeta}{1 - \zeta} \in H$ is an isometry, with an (isometric) inverse $z \in H \mapsto \zeta = \frac{z - i}{z + i} \in D$. Moreover, since these are M\"obius maps, circle-lines are preserved, and angles between the lines are also preserved.
+
+Hence, immediately from previous work on $H$, we know
+\begin{enumerate}
+ \item $\PSL(2, \R) \cong \{\text{M\"obius transformations sending $D$ to itself}\} = G$.
+ \item Hyperbolic lines in $D$ are circle segments meeting $|\zeta| = 1$ orthogonally, including diameters.
+ \item $G$ acts \emph{transitively} on hyperbolic lines in $D$ (and also on pairs consisting of a line and a point on the line).
+ \item The length-minimizing geodesics in $D$ are segments of hyperbolic lines parametrized monotonically.
+\end{enumerate}
+
+We write $\rho$ for the (hyperbolic) distance in $D$.
+
+\begin{lemma}
+ Let $G$ be the set of isometries of the hyperbolic disk. Then
+ \begin{enumerate}
+ \item Rotations $z \mapsto e^{i\theta}z$ (for $\theta \in \R$) are elements of $G$.
+ \item If $a \in D$, then $g(z) = \frac{z - a}{1 - \bar{a} z}$ is in $G$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item This is clearly an isometry: it is a linear map that preserves $|z|$ and $|\d z|$, and hence also the metric
+ \[
+ \frac{4 |\d z|^2}{(1 - |z|^2)^2}.
+ \]
+ \item First, we need to check this indeed maps $D$ to itself. To do this, we first make sure it sends $\{|z| = 1\}$ to itself. If $|z| = 1$, then
+ \[
+ |1 - \bar{a} z| = |\bar{z} (1 - \bar{a} z)| = |\bar{z} - \bar{a}| = |z - a|.
+ \]
+ So
+ \[
+ |g(z)| = 1.
+ \]
+ Finally, it is easy to check $g(a) = 0$. By continuity, $g$ must map $D$ to itself. We can then show it is an isometry by plugging it into the formula.\qedhere
+ \end{enumerate}
+\end{proof}
+It is an exercise on the second example sheet to show that every $g \in G$ is of the form
+\[
+ g(z) = e^{i\theta} \frac{z - a}{1 - \bar{a} z}
+\]
+or
+\[
+ g(z) = e^{i\theta} \frac{\bar{z} - a}{1 - \bar{a} \bar{z}}
+\]
+for some $\theta \in \R$ and $a \in D$.
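Granting this, maps of the first kind can be checked numerically against the conformal isometry condition $|g'(z)|/(1 - |g(z)|^2) = 1/(1 - |z|^2)$ on $D$ (an illustrative sketch; the helper name is made up):

```python
import cmath

# Illustrative check: g(z) = e^{i theta}(z - a)/(1 - conj(a) z) maps D to D
# and preserves the hyperbolic disk metric.
def check_disk_isometry(a, theta, z):
    g = cmath.exp(1j * theta) * (z - a) / (1 - a.conjugate() * z)
    # derivative of the Mobius factor is (1 - |a|^2)/(1 - conj(a) z)^2
    gp = cmath.exp(1j * theta) * (1 - abs(a) ** 2) / (1 - a.conjugate() * z) ** 2
    assert abs(g) < 1                             # image stays inside D
    assert abs(abs(gp) / (1 - abs(g) ** 2) - 1 / (1 - abs(z) ** 2)) < 1e-12
    return True

assert check_disk_isometry(0.5 + 0.2j, 1.1, 0.3 - 0.6j)
assert check_disk_isometry(-0.7j, 0.0, 0.1 + 0.1j)
```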
+
+We shall now use the disk model to do something useful. We start by coming up with an explicit formula for distances in the hyperbolic plane.
+\begin{prop}
+ If $0 \leq r < 1$, then
+ \[
+ \rho(0, r e^{i\theta}) = 2 \tanh^{-1} r.
+ \]
+ In general, for $z_1, z_2 \in D$, we have
+ \[
+ \rho(z_1, z_2) = 2 \tanh^{-1} \left|\frac{z_1 - z_2}{1 - \bar{z_1} z_2}\right|.
+ \]
+\end{prop}
+
+\begin{proof}
+ By the lemma above, we can rotate the hyperbolic disk so that $re^{i\theta}$ is rotated to $r$. So
+ \[
+ \rho(0, r e^{i\theta}) = \rho(0, r).
+ \]
+ We can evaluate this by performing the integral
+ \[
+ \rho(0, r) = \int_0^r \frac{2 \;\d t}{1 - t^2} = 2 \tanh^{-1} r.
+ \]
+ For the general case, we apply the M\"obius transformation
+ \[
+ g(z) = \frac{z - z_1}{1 - \bar{z}_1 z}.
+ \]
+ Then we have
+ \[
+ g(z_1) = 0,\quad g(z_2) = \frac{z_2 - z_1}{1 - \bar{z}_1 z_2} = \left|\frac{z_1 - z_2}{1 - \bar{z_1} z_2}\right| e^{i\theta}.
+ \]
+ So
+ \[
+ \rho(z_1, z_2) = \rho(g(z_1), g(z_2)) = 2 \tanh^{-1} \left|\frac{z_1 - z_2}{1 - \bar{z_1} z_2}\right|.\qedhere
+ \]
+\end{proof}
+Again, we exploited the idea of performing the calculation in an easy case, and then using isometries to move everything else to the easy case. In general, when we have a ``distinguished'' point in the hyperbolic plane, it is often convenient to use the disk model and move that point to $0$ by an isometry.
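The two models can also be played off against each other as a numerical consistency check (illustrative, not part of the notes): mapping $iu, iv \in H$ into $D$ and applying the formula above should recover $\rho(iu, iv) = \log(v/u)$ from the half-plane computation.

```python
import math

# Illustrative consistency check between the two models.
def rho_disk(w1, w2):
    return 2 * math.atanh(abs((w1 - w2) / (1 - w1.conjugate() * w2)))

def to_disk(z):                 # the isometry H -> D
    return (z - 1j) / (z + 1j)

for u, v in [(1.0, 2.0), (0.5, 4.0), (2.0, 7.5)]:
    d = rho_disk(to_disk(1j * u), to_disk(1j * v))
    assert abs(d - math.log(v / u)) < 1e-12
```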
+
+\begin{prop}
+ For every point $P$ and hyperbolic line $\ell$ with $P \not\in \ell$, there is a unique hyperbolic line $\ell'$ with $P \in \ell'$ such that $\ell'$ meets $\ell$ orthogonally, say at $Q$, and $\rho(P, Q) \leq \rho(P, \tilde{Q})$ for all $\tilde{Q} \in \ell$.
+\end{prop}
+This is a familiar fact from Euclidean geometry. To prove this, we again apply the trick of letting $P = 0$.
+
+\begin{proof}
+ wlog, assume $P = 0 \in D$. Note that a hyperbolic line in $D$ (that is not a diameter) is an arc of a Euclidean circle, so it has a Euclidean center, say $C$.
+
+ Since any line through $P$ is a diameter, there is clearly exactly one line through $P$ that intersects $\ell$ perpendicularly (recall angles in $D$ are the same as Euclidean angles).
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.2] circle [radius=2];
+
+ \draw [mgreen] (-3.2, 0) -- (6, 0) node [pos=0.3, above] {$\ell'$};
+
+ \node [circ] {};
+ \node [above] {$P$};
+ \node at (-1, 1) {$D$};
+
+ \node [circ] at (3, 0) {};
+ \node [above] at (3, 0) {$C$};
+ \draw [dashed] (3, 0) circle [radius=1.8];
+ \node [circ] (A) at (1.627, 1.164) {};
+ \node [circ] at (1.627, -1.164) {};
+ \draw [mred, thick] (A) arc(139.726:220.274:1.8) node [pos=0.3, left] {$\ell$};
+
+ \node [circ] at (1.2, 0) {};
+ \node [anchor = south west] at (1.2, 0) {$Q$};
+ \end{tikzpicture}
+ \end{center}
+ It is also clear that $PQ$ minimizes the \emph{Euclidean} distance between $P$ and $\ell$. While Euclidean and hyperbolic distances differ, hyperbolic lines through $P = 0$ are diameters, and along a diameter the hyperbolic distance from $P$ is an increasing function of the Euclidean distance. So $Q$ indeed minimizes the hyperbolic distance as well.
+\end{proof}
+
+How does reflection in hyperbolic lines work? This time, we work in the upper half-plane model, since we have a favorite line $L^+$.
+\begin{lemma}[Hyperbolic reflection]
+ Suppose $g$ is an isometry of the hyperbolic half-plane $H$ and $g$ fixes every point in $L^+ = \{iy: y \in \R^+\}$. Then $g$ is either the identity or $g(z) = -\bar{z}$, i.e.\ it is a reflection in the vertical axis $L^+$.
+\end{lemma}
+Observe we have already proved a similar result in Euclidean geometry, and the spherical version was proven in the first example sheet.
+\begin{proof}
+ For every $P \in H \setminus L^+$, there is a unique line $\ell'$ containing $P$ such that $\ell' \perp L^+$. Let $Q = L^+ \cap \ell'$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, 0) -- (0, 3) node [right] {$L^+$};
+ \draw (2, 0) arc(0:180:2) node [pos=0.75, left] {$\ell'$};
+ \node [circ] (P) at (1.414, 1.414) {};
+ \node at (P) [right] {$P$};
+ \node [circ] (Q) at (0, 2) {};
+ \node at (Q) [anchor = north east] {$Q$};
+ \end{tikzpicture}
+ \end{center}
+ We see $\ell'$ is a semicircle, and by definition of isometry, we must have
+ \[
+ \rho(P, Q) = \rho(g(P), Q).
+ \]
+ Now note that $g(\ell')$ is also a line meeting $L^+$ perpendicularly at $Q$, since $g$ fixes $L^+$ and preserves angles. So we must have $g(\ell') = \ell'$. Then in particular $g(P) \in \ell'$. So we must have $g(P) = P$ or $g(P) = P'$, where $P'$ is the image under reflection in $L^+$.
+
+ Now it suffices to prove that if $g(P) = P$ for some $P \not\in L^+$, then $g$ must be the identity (while if $g(P) = P'$ for all $P$, then $g$ must be given by $g(z) = -\bar{z}$).
+
+ Now suppose $g(P) = P$, and let $A \in H^+$, where $H^+ = \{z \in H: \Re z > 0\}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, 0) -- (0, 3.5) node [right] {$L^+$};
+ \node [circ] (P) at (2, 1.2) {};
+ \node at (P) [right] {$P$};
+ \node [circ] (P') at (-2, 1.2) {};
+ \node at (P') [left] {$P'$};
+
+ \node [circ] (A) at (1.4, 2.4) {};
+ \node at (A) [right] {$A$};
+ \node [circ] (A') at (-1.4, 2.4) {};
+ \node at (A') [left] {$A'$};
+
+ \node [circ] (B) at (0, 2.604) {};
+ \node [anchor = south west] at (B) {$B$};
+ \drawcirculararc(2, 1.2)(0, 2.604)(-1.4, 2.4);
+ \drawcirculararc(1.4, 2.4)(0, 2.604)(-2, 1.2);
+ \end{tikzpicture}
+ \end{center}
+ Now if $g(A) \not= A$, then $g(A) = A'$. Then $\rho(A', P) = \rho(A, P)$. But
+ \[
+ \rho(A', P) = \rho(A', B) + \rho(B, P) = \rho(A, B) + \rho(B, P) > \rho(A, P),
+ \]
+ by the triangle inequality, noting that $B \not\in (AP)$. This is a contradiction. So $g$ must fix everything.
+\end{proof}
+\begin{defi}[Hyperbolic reflection]
+ The map $R: z \in H \mapsto -\bar{z} \in H$ is the \emph{(hyperbolic) reflection in $L^+$}. More generally, given any hyperbolic line $\ell$, let $T$ be the isometry that sends $\ell$ to $L^+$. Then the \emph{(hyperbolic) reflection in $\ell$} is
+ \[
+ R_\ell = T^{-1} RT.
+ \]
+\end{defi}
+Again, we already know how to reflect in $L^+$. So to reflect in another line $\ell$, we move our plane such that $\ell$ becomes $L^+$, do the reflection, and move back.
+
+By the previous lemma, $R_\ell$ is the unique isometry other than the identity that fixes $\ell$ pointwise.
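As a concrete worked example (an illustration, not from the notes): take $\ell$ to be the semicircle $|z| = 1$ in $H$. The earlier lemma gives $T(z) = \frac{z - 1}{z + 1}$ with $T(\ell) = L^+$, and composing out $T^{-1} R T$ yields $R_\ell(z) = 1/\bar{z}$, i.e.\ Euclidean inversion in the unit circle. A short numerical check:

```python
import cmath

# Illustrative check: reflection in the hyperbolic line l = {|z| = 1}.
def T(z):                       # sends l to L+
    return (z - 1) / (z + 1)

def T_inv(w):
    return (1 + w) / (1 - w)

def R_ell(z):                   # T^{-1} o R o T with R(w) = -conj(w)
    return T_inv(-T(z).conjugate())

for t in [0.3, 1.0, 2.2]:       # sample points e^{it} on l
    p = cmath.exp(1j * t)
    assert abs(R_ell(p) - p) < 1e-12              # fixes l pointwise
z = 0.5 + 2j
assert abs(R_ell(R_ell(z)) - z) < 1e-12           # an involution
assert abs(R_ell(z) - 1 / z.conjugate()) < 1e-12  # inversion in |z| = 1
```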
+
+\subsection{Hyperbolic triangles}
+\begin{defi}[Hyperbolic triangle]
+ A \emph{hyperbolic triangle} $ABC$ is the region determined by three hyperbolic line segments $AB, BC$ and $CA$, including extreme cases where some vertices $A, B, C$ are allowed to be ``at infinity''. More precisely, in the half-plane model, we allow them to lie in $\R \cup \{\infty\}$; in the disk model we allow them to lie on the unit circle $|z| = 1$.
+\end{defi}
+We see that if $A$ is ``at infinity'', then the angle at $A$ must be zero.
+
+Recall for a region $R \subseteq H$, we can compute the area of $R$ as
+\[
+ \area(R) = \iint_{R} \frac{\d x\;\d y}{y^2}.
+\]
+Similar to the sphere, we have
+\begin{thm}[Gauss-Bonnet theorem for hyperbolic triangles]
+ For each hyperbolic triangle $\Delta$, say, $ABC$, with angles $\alpha, \beta, \gamma \geq 0$ (note that zero angle is possible), we have
+ \[
+ \area(\Delta) = \pi - (\alpha + \beta + \gamma).
+ \]
+\end{thm}
+
+\begin{proof}
+ First consider the case $\gamma = 0$, so $C$ is ``at infinity''. Recall that we like to use the disk model if we have a distinguished point in the hyperbolic plane. If the distinguished point is at \emph{infinity}, it is often advantageous to use the upper half-plane model instead, since we can take the distinguished point to be $\infty$ itself.
+
+ So we use the upper half-plane model, and wlog $C = \infty$ (applying an element of $\PSL(2, \R)$ if necessary). Then $AC$ and $BC$ are vertical half-lines, and $AB$ is an arc of a semicircle.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (3, 0);
+
+ \draw (-1.5, 0) -- (-1.5, 4) node [above] {$C$};
+ \draw (1.5, 0) -- (1.5, 4) node [above] {$C$};
+
+ \draw (2.2, 0) arc(0:180:2);
+ \node [circ] (O) at (0.2, 0) {};
+ \node [circ] (B) at (1.5, 1.52) {};
+ \node [anchor = south west] at (B) {$B$};
+ \node [circ] (A) at (-1.5, 1.05) {};
+ \node [anchor = south east] at (A) {$A$};
+
+ \draw [dashed] (A) -- (O) -- (B);
+ \draw (0.7, 0) arc(0:49.46:0.5) node [pos=0.7, right] {$\beta$};
+ \draw (0.5, 0) arc(0:148.30:0.3) node [pos=0.8, above] {$\pi - \alpha$};
+
+ \draw [dashed] (A) -- +(0.525, 0.85);
+ \draw (-1.5, 1.45) arc(90:58.30:0.4) node [pos=0.6, above] {$\alpha$};
+
+ \draw [dashed] (B) -- +(-0.76, 0.65);
+ \draw (1.5, 1.82) arc(90:138.2:0.3) node [pos=0.6, above] {$\beta$};
+ \end{tikzpicture}
+ \end{center}
+ We use the transformation $z \mapsto z + a$ (with $a \in \R$) to center the semi-circle at $0$. We then apply $z \mapsto bz$ (with $b > 0$) to make the circle have radius $1$. Thus wlog $AB \subseteq \{x^2 + y^2 = 1\}$.
+
+ Now we have
+ \begin{align*}
+ \area(T) &= \int_{\cos (\pi - \alpha)}^{\cos \beta} \int_{\sqrt{1 - x^2}}^\infty \frac{1}{y^2}\;\d y\;\d x\\
+ &= \int_{\cos (\pi - \alpha)}^{\cos \beta} \frac{1}{\sqrt{1 - x^2}} \;\d x \\
+ &= [-\cos^{-1}(x)]_{\cos(\pi - \alpha)}^{\cos \beta}\\
+ &= \pi - \alpha - \beta,
+ \end{align*}
+ as required.
+
+ In general, we use $H$ again, and we can arrange for $AC$ to lie in a vertical half-line. Also, we can move $AB$ to the circle $x^2 + y^2 = 1$, noting that this transformation keeps $AC$ vertical.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (3, 0);
+
+ \draw (-1.5, 0) -- (-1.5, 4);
+ \draw (1.5, 0) -- (1.5, 4);
+
+ \draw (2.2, 0) arc(0:180:2);
+ \node [circ] (B) at (1.5, 1.52) {};
+ \node [anchor = south west] at (B) {$B$};
+ \node [circ] (A) at (-1.5, 1.05) {};
+ \node [anchor = south east] at (A) {$A$};
+ \draw [dashed] (A) -- +(0.525, 0.85);
+ \draw (-1.5, 1.45) arc(90:58.30:0.4) node [pos=0.6, above] {$\alpha$};
+ \draw [dashed] (B) -- +(-0.76, 0.65);
+
+ \pgfpathmoveto{\pgfpoint{1.5cm}{1.52cm}};
+ \pgfpatharcto{3.025cm}{3.025cm}{0}{0}{1}{\pgfpoint{-1.5cm}{3cm}}\pgfusepath{stroke};
+
+ \node [circ] (C) at (-1.5, 3) {};
+ \node [left] at (C) {$C$};
+
+ \draw [dashed] (C) -- +(1, 0.192535);
+ \draw (-1.5, 2.7) arc(-90:10.9:0.3) node [pos=0.5, below] {$\gamma$};
+
+ \draw [dashed] (B) -- +(-0.76, 1.307);
+
+ \draw (1.5, 1.82) arc(90:120.18:0.3) node [pos=0.7, above] {$\delta$};
+
+ \draw (1.2487, 1.952) arc(120.18:138.2:0.5);
+ \draw [mred] (0.8, 1) node [below] {$\beta$} edge [out=90, in=120, -latex] (1.17, 1.9);
+ \end{tikzpicture}
+ \end{center}
+ We consider $\Delta_1 = AB \infty$ and $\Delta_2 = CB\infty$. Then we can immediately write
+ \begin{align*}
+ \area(\Delta_1) &= \pi - \alpha - (\beta + \delta)\\
+ \area(\Delta_2) &= \pi - \delta - (\pi - \gamma) = \gamma - \delta.
+ \end{align*}
+ So we have
+ \[
+ \area(T) = \area(\Delta_1) - \area(\Delta_2) = \pi - \alpha - \beta - \gamma,
+ \]
+ as required.
+\end{proof}
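The $\gamma = 0$ case can be illustrated numerically (not part of the notes): with $AB$ on the unit circle and vertical sides above $x = \cos(\pi - \alpha)$ and $x = \cos \beta$, a simple midpoint-rule quadrature of the area integral should return roughly $\pi - \alpha - \beta$.

```python
import math

# Illustrative quadrature for the gamma = 0 case of Gauss-Bonnet.
def triangle_area(alpha, beta, n=100000):
    x0, x1 = math.cos(math.pi - alpha), math.cos(beta)
    total = 0.0
    for k in range(n):
        x = x0 + (x1 - x0) * (k + 0.5) / n     # midpoint rule
        total += 1 / math.sqrt(1 - x * x)      # inner dy-integral already done
    return total * (x1 - x0) / n

for alpha, beta in [(math.pi / 3, math.pi / 4), (0.2, 1.1)]:
    assert abs(triangle_area(alpha, beta) - (math.pi - alpha - beta)) < 1e-6
```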
+
+Similar to the spherical case, we have some hyperbolic sine and cosine rules. For example, we have
+\begin{thm}[Hyperbolic cosine rule]
+ In a triangle with sides $a, b, c$ and angles $\alpha, \beta, \gamma$, we have
+ \[
+ \cosh c = \cosh a \cosh b - \sinh a \sinh b \cos \gamma.
+ \]
+\end{thm}
+
+\begin{proof}
+ See example sheet 2.
+\end{proof}
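Assuming the distance formula from the previous section, we can test the cosine rule numerically in the disk model (an illustrative sketch, not part of the notes): put the vertex with angle $\gamma$ at $0$, so that the two sides through it are radii and $\gamma$ is just the Euclidean angle between them.

```python
import math, cmath

# Illustrative numerical test of the hyperbolic cosine rule in D.
def rho(w1, w2):
    return 2 * math.atanh(abs((w1 - w2) / (1 - w1.conjugate() * w2)))

def cosine_rule_holds(r1, r2, gamma, tol=1e-9):
    A, B = complex(r1), r2 * cmath.exp(1j * gamma)   # vertex C sits at 0
    b, a, c = rho(0, A), rho(0, B), rho(A, B)        # b = CA, a = CB, c = AB
    lhs = math.cosh(c)
    rhs = (math.cosh(a) * math.cosh(b)
           - math.sinh(a) * math.sinh(b) * math.cos(gamma))
    return abs(lhs - rhs) < tol

assert cosine_rule_holds(0.5, 0.7, 1.2)
assert cosine_rule_holds(0.2, 0.9, math.pi / 2)
```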
+
+Recall that in $S^2$, any two lines meet (in two points). In the Euclidean plane $\R^2$, any two lines meet (in one point) iff they are not parallel. Before we move on to the hyperbolic case, we first make a definition.
+
+\begin{defi}[Parallel lines]
+ We use the disk model of the hyperbolic plane. Two hyperbolic lines are \emph{parallel} iff they meet only at the boundary of the disk (at $|z| = 1$).
+\end{defi}
+
+\begin{defi}[Ultraparallel lines]
+ Two hyperbolic lines are \emph{ultraparallel} if they don't meet anywhere in $\{|z| \leq 1\}$.
+\end{defi}
+
+In the Euclidean plane, we have the parallel axiom: given a line $\ell$ and $P \not\in \ell$, there exists a unique line $\ell'$ containing $P$ with $\ell \cap \ell' = \emptyset$. This fails in both $S^2$ and the hyperbolic plane --- but for very different reasons! In $S^2$, there are no such parallel lines. In the hyperbolic plane, there are \emph{many} parallel lines. There is a deeper reason for why this is the case, which we will come to at the very end of the course.
+
+\subsection{Hyperboloid model}
+Recall we said there is no way to view the hyperbolic plane as a subset of $\R^3$, and hence we need to mess with Riemannian metrics. However, it turns out we can indeed embed the hyperbolic plane in $\R^3$, if we give $\R^3$ a different metric!
+
+\begin{defi}[Lorentzian inner product]
+ The \emph{Lorentzian inner product} on $\R^3$ has the matrix
+ \[
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & -1
+ \end{pmatrix}
+ \]
+\end{defi}
+This is less arbitrary than it seems. Recall from IB Linear Algebra that we can always pick a basis in which a non-degenerate symmetric bilinear form has diagonal entries $1$ and $-1$. If we further identify $A$ and $-A$ as the ``same'' symmetric bilinear form, then this is the only other possibility left.
+
+Thus, we obtain the quadratic form given by
+\[
+ q(\mathbf{x}) = \bra \mathbf{x}, \mathbf{x}\ket = x^2 + y^2 - z^2.
+\]
+We now define the two-sheeted hyperboloid as
+\[
+ S = \{\mathbf{x} \in \R^3: q(\mathbf{x}) = -1\}.
+\]
+This is given explicitly by the formula
+\[
+ x^2 + y^2 = z^2 - 1.
+\]
+We don't actually need the two sheets. So we define
+\[
+ S^+ = S \cap \{z > 0\}.
+\]
+We let $\pi: S^+ \to D \subseteq \C = \R^2$ be the stereographic projection from $(0, 0, -1)$, given by
+\[
+ \pi(x, y, z) = \frac{x + iy}{1 + z} = u + iv.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (7, 5) arc(0:180:2 and 0.3);
+ \draw (3, 5) .. controls (5, 1.5) .. (7, 5) arc (0:-180:2 and 0.3);
+ \draw (0, 0) -- (8, 0) -- (10, 3) -- (2, 3) -- (0, 0);
+ \draw [gray] (1, 1.5) -- (9, 1.5);
+ \draw [gray] (4, 0) -- (6, 3);
+
+ \node [circ] at (5, 0.63) {};
+
+ \draw (5, 0.63) node [circ] {} -- (6, 4) node [circ] {} node [right] {$P$};
+ \node [circ] at (5.15, 1.14) {};
+ \node [right] at (5.15, 1.14) {$\pi(P)$};
+ \end{tikzpicture}
+\end{center}
+We put $r^2 = u^2 + v^2$. Some calculations then show the following:
+\begin{enumerate}
+ \item We always have $r < 1$, as promised.
+ \item The stereographic projection $\pi$ is invertible with
+ \[
+ \sigma(u, v) = \pi^{-1}(u, v) = \frac{1}{1 - r^2}(2u, 2v, 1 + r^2) \in S^+.
+ \]
+ \item The tangent plane to $S^+$ at $P$ is spanned by
+ \[
+ \sigma_u = \frac{\partial \sigma}{\partial u},\quad \sigma_v = \frac{\partial \sigma}{\partial v}.
+ \]
+ We can explicitly compute these to be
+ \begin{align*}
+ \sigma_u &= \frac{2}{(1 - r^2)^2} (1 + u^2 - v^2, 2uv, 2u),\\
+ \sigma_v &= \frac{2}{(1 - r^2)^2} (2uv, 1 + v^2 - u^2, 2v).
+ \end{align*}
+ We restrict the inner product $\bra \ph, \ph \ket$ to the span of $\sigma_u, \sigma_v$, and we get a symmetric bilinear form assigned to each $u, v \in D$ given by
+ \[
+ E\;\d u^2 + 2F \;\d u\;\d v + G \;\d v^2,
+ \]
+ where
+ \begin{align*}
+ E &= \bra \sigma_u, \sigma_u\ket = \frac{4}{(1 - r^2)^2},\\
+ F &= \bra \sigma_u, \sigma_v\ket = 0,\\
+ G &= \bra \sigma_v, \sigma_v\ket = \frac{4}{(1 - r^2)^2}.
+ \end{align*}
+\end{enumerate}
+We have thus recovered the Poincar\'e disk model of the hyperbolic plane.
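All of these computations are easy to verify numerically (an illustrative sketch, not part of the notes; finite differences stand in for the exact partial derivatives):

```python
# Illustrative check: sigma(u, v) lies on the hyperboloid q(x) = -1, and the
# restricted Lorentzian inner product gives the hyperbolic disk metric.
def sigma(u, v):
    r2 = u * u + v * v
    s = 1 / (1 - r2)
    return (2 * u * s, 2 * v * s, (1 + r2) * s)

def lorentz(p, q):
    return p[0] * q[0] + p[1] * q[1] - p[2] * q[2]

def partials(u, v, h=1e-6):     # central finite differences
    su = tuple((a - b) / (2 * h)
               for a, b in zip(sigma(u + h, v), sigma(u - h, v)))
    sv = tuple((a - b) / (2 * h)
               for a, b in zip(sigma(u, v + h), sigma(u, v - h)))
    return su, sv

u, v = 0.3, -0.4
r2 = u * u + v * v
assert abs(lorentz(sigma(u, v), sigma(u, v)) + 1) < 1e-12    # q = -1
su, sv = partials(u, v)
target = 4 / (1 - r2) ** 2
assert abs(lorentz(su, su) - target) < 1e-4                  # E
assert abs(lorentz(su, sv)) < 1e-4                           # F = 0
assert abs(lorentz(sv, sv) - target) < 1e-4                  # G
```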
+
+\section{Smooth embedded surfaces (in \tph{$\R^3$}{R3}{&\#x211D;3})}
+\subsection{Smooth embedded surfaces}
+So far, we have been studying some specific geometries, namely Euclidean, spherical and hyperbolic geometry. From now on, we go towards greater generality, and study arbitrary surfaces. We will mostly work with surfaces that are smoothly embedded as subsets of $\R^3$, for which we can develop notions parallel to those we have had before, such as Riemannian metrics and lengths. At the very end of the course, we will move away from the needless restriction that the surface is embedded in $\R^3$, and study surfaces that \emph{just are}.
+
+\begin{defi}[Smooth embedded surface]
+ A set $S \subseteq \R^3$ is a \emph{(parametrized) smooth embedded surface} if every point $P \in S$ has an open neighbourhood $U \subseteq S$ (with the subspace topology on $S \subseteq \R^3$) and a map $\sigma: V \to U$ from an open $V \subseteq \R^2$ to $U$ such that if we write $\sigma(u, v) = (x(u, v), y(u, v), z(u, v))$, then
+ \begin{enumerate}
+ \item $\sigma$ is a homeomorphism (i.e.\ a bijection with continuous inverse)
+ \item $\sigma$ is $C^\infty$ (smooth) on $V$ (i.e.\ has continuous partial derivatives of all orders).
+ \item For all $Q \in V$, the partial derivatives $\sigma_u(Q)$ and $\sigma_v(Q)$ are linearly independent.
+ \end{enumerate}
+\end{defi}
+Recall that
+\[
+ \sigma_u(Q) = \frac{\partial \sigma}{\partial u}(Q) =
+ \begin{pmatrix}
+ \frac{\partial x}{\partial u}\\
+ \frac{\partial y}{\partial u}\\
+ \frac{\partial z}{\partial u}
+ \end{pmatrix}
+ (Q)
+ = \d\sigma_Q (\mathbf{e}_1),
+\]
+where $\mathbf{e}_1, \mathbf{e}_2$ is the standard basis of $\R^2$. Similarly, we have
+\[
+ \sigma_v(Q) = \d \sigma_Q(\mathbf{e}_2).
+\]
+We define some terminology.
+\begin{defi}[Smooth coordinates]
+ We say $(u, v)$ are \emph{smooth coordinates} on $U \subseteq S$.
+\end{defi}
+
+\begin{defi}[Tangent space]
+ The subspace of $\R^3$ spanned by $\sigma_u(Q), \sigma_v(Q)$ is the
+ \emph{tangent space} $T_PS$ to $S$ at $P = \sigma(Q)$.
+\end{defi}
+
+\begin{defi}[Smooth parametrisation]
+ The function $\sigma$ is a \emph{smooth parametrisation} of $U \subseteq S$.
+\end{defi}
+
+
+\begin{prop}
+ Let $\sigma: V \to U$ and $\tilde{\sigma}: \tilde{V} \to U$ be two $C^\infty$ parametrisations of a surface. Then the homeomorphism
+ \[
+ \varphi = \sigma^{-1} \circ \tilde{\sigma}: \tilde{V} \to V
+ \]
+ is in fact a diffeomorphism.
+\end{prop}
+This proposition says any two parametrizations of the same surface are compatible.
+
+\begin{proof}
+ Since differentiability is a local property, it suffices to consider $\varphi$ on some small neighbourhood of a point in $\tilde{V}$, whose image under $\varphi$ is a neighbourhood of some point $(u_0, v_0) \in V$. We know $\sigma = \sigma(u, v)$ is differentiable. So it has a Jacobian matrix
+ \[
+ \begin{pmatrix}
+ x_u & x_v\\
+ y_u & y_v\\
+ z_u & z_v
+ \end{pmatrix}.
+ \]
+ By definition, this matrix has rank two at each point. wlog, we assume the first two rows are linearly independent. So
+ \[
+ \det
+ \begin{pmatrix}
+ x_u & x_v\\
+ y_u & y_v
+ \end{pmatrix} \not= 0
+ \]
+ at $(u_0, v_0) \in V$. We define a new function
+ \[
+ F(u, v) =
+ \begin{pmatrix}
+ x(u, v)\\
+ y(u, v)
+ \end{pmatrix}.
+ \]
+ Now the inverse function theorem applies. So $F$ has a local $C^\infty$ inverse, i.e.\ there are open neighbourhoods $(u_0, v_0) \in N$ and $F(u_0, v_0) \in N' \subseteq \R^2$ such that $F: N \to N'$ is a diffeomorphism.
+
+ Writing $\pi: \sigma(N) \to N'$ for the projection $\pi(x, y, z) = (x, y)$, we can put these things in a commutative diagram:
+ \[
+ \begin{tikzcd}
+ & \sigma(N) \ar[d, "\pi"]\\
+ N \ar[r, "F"'] \ar[ru, "\sigma"] & N'
+ \end{tikzcd}.
+ \]
+ We now let $\tilde{N} = \tilde{\sigma}^{-1}(\sigma(N))$ and $\tilde{F} = \pi \circ \tilde{\sigma}$, which is yet again smooth. Then we have the following larger commutative diagram.
+ \[
+ \begin{tikzcd}
+ & \sigma(N) \ar[d, "\pi"]\\
+ N \ar[r, "F"'] \ar[ru, "\sigma"] & N' & \tilde{N} \ar[l, "\tilde{F}"] \ar[lu, "\tilde{\sigma}"']
+ \end{tikzcd}.
+ \]
+ Then we have
+ \[
+ \varphi = \sigma^{-1} \circ \tilde{\sigma} = \sigma^{-1} \circ \pi^{-1} \circ \pi \circ \tilde{\sigma} = F^{-1} \circ \tilde{F},
+ \]
+ which is smooth, since $F^{-1}$ and $\tilde{F}$ are. Hence $\varphi$ is smooth everywhere. By symmetry of the argument, $\varphi^{-1}$ is smooth as well. So this is a diffeomorphism.
+\end{proof}
+
+A more practical result is the following:
+\begin{cor}
+ The tangent plane $T_P S$ is independent of parametrization.
+\end{cor}
+
+\begin{proof}
+ We know
+ \[
+ \tilde{\sigma} (\tilde{u}, \tilde{v}) = \sigma(\varphi_1(\tilde{u}, \tilde{v}), \varphi_2(\tilde{u}, \tilde{v})).
+ \]
+ We can then compute the partial derivatives as
+ \begin{align*}
+ \tilde{\sigma}_{\tilde{u}} &= \varphi_{1, \tilde{u}} \sigma_u + \varphi_{2, \tilde{u}} \sigma_v\\
+ \tilde{\sigma}_{\tilde{v}} &= \varphi_{1, \tilde{v}} \sigma_u + \varphi_{2, \tilde{v}} \sigma_v
+ \end{align*}
+ Here the transformation is related by the Jacobian matrix
+ \[
+ \begin{pmatrix}
+ \varphi_{1, \tilde{u}} & \varphi_{1, \tilde{v}}\\
+ \varphi_{2, \tilde{u}} & \varphi_{2, \tilde{v}}
+ \end{pmatrix} = J(\varphi).
+ \]
+ This is invertible since $\varphi$ is a diffeomorphism. So $(\tilde{\sigma}_{\tilde{u}}, \tilde{\sigma}_{\tilde{v}})$ and $(\sigma_{u}, \sigma_v)$ are different bases of the same two-dimensional vector space. So done.
+\end{proof}
+
+Note that we have
+\[
+ \tilde{\sigma}_{\tilde{u}} \times \tilde{\sigma}_{\tilde{v}} = \det(J(\varphi)) \sigma_u \times \sigma_v.
+\]
+So we can define
+\begin{defi}[Unit normal]
+ The \emph{unit normal} to $S$ at $Q \in S$ is
+ \[
+ N = N_Q = \frac{\sigma_u \times \sigma_v}{\|\sigma_u \times \sigma_v\|},
+ \]
+ which is well-defined up to a sign.
+\end{defi}
+
+Often, instead of a parametrization $\sigma: V \subseteq \R^2 \to U \subseteq S$, we want the function the other way round. We call this a chart.
+\begin{defi}[Chart]
+ Let $S \subseteq \R^3$ be an embedded surface. The map $\theta= \sigma^{-1}: U \subseteq S \to V \subseteq \R^2$ is a \emph{chart}.
+\end{defi}
+
+\begin{eg}
+ Let $S^2 \subseteq \R^3$ be a sphere. The two stereographic projections from $\pm \mathbf{e}_3$ give two charts, whose domains together cover $S^2$.
+\end{eg}
+
+Similar to what we did to the sphere, given a chart $\theta: U \to V \subseteq \R^2$, we can induce a Riemannian metric on $V$. We first get an inner product on the tangent space as follows:
+\begin{defi}[First fundamental form]
+ If $S \subseteq \R^3$ is an embedded surface, then each $T_Q S$ for $Q \in S$ has an inner product from $\R^3$, i.e.\ we have a family of inner products, one for each point. We call this family the \emph{first fundamental form}.
+\end{defi}
+This is a theoretical entity, and is more easily worked with when we have a chart. Suppose we have a parametrization $\sigma: V \to U \subseteq S$, $\mathbf{a}, \mathbf{b} \in \R^2$, and $P \in V$. We can then define
+\[
+ \bra \mathbf{a}, \mathbf{b} \ket_P = \bra \d \sigma_P (\mathbf{a}), \d \sigma_P(\mathbf{b})\ket_{\R^3}.
+\]
+With respect to the standard basis $\mathbf{e}_1, \mathbf{e}_2 \in \R^2$, we can write the first fundamental form as
+\[
+ E \;\d u^2 + 2F\;\d u\;\d v + G \;\d v^2,
+\]
+where
+\begin{align*}
+ E &= \bra \sigma_u, \sigma_u\ket = \bra \mathbf{e}_1, \mathbf{e}_1\ket_P\\
+ F &= \bra \sigma_u, \sigma_v\ket = \bra \mathbf{e}_1, \mathbf{e}_2\ket_P\\
+ G &= \bra \sigma_v, \sigma_v\ket = \bra \mathbf{e}_2, \mathbf{e}_2\ket_P.
+\end{align*}
+Thus, this induces a Riemannian metric on $V$. This is also called the first fundamental form corresponding to $\sigma$. This is what we do in practical examples.
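These coefficients are easy to compute symbolically. As a sketch (in Python with sympy, a tool choice of ours, not of the notes), we can work out $E$, $F$, $G$ for the standard latitude-longitude parametrization of the unit sphere, which is an illustrative choice:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Latitude-longitude parametrization of the unit sphere (poles excluded).
sigma = sp.Matrix([sp.cos(u)*sp.cos(v), sp.cos(u)*sp.sin(v), sp.sin(u)])

sigma_u = sigma.diff(u)
sigma_v = sigma.diff(v)

# Coefficients of the first fundamental form E du^2 + 2F du dv + G dv^2.
E = sp.simplify(sigma_u.dot(sigma_u))  # 1
F = sp.simplify(sigma_u.dot(sigma_v))  # 0
G = sp.simplify(sigma_v.dot(sigma_v))  # cos(u)**2
```

So the induced Riemannian metric on this chart is $\d u^2 + \cos^2 u\; \d v^2$, already of a rather simple diagonal form.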
+
+We will assume the following property, which we are not bothered to prove.
+\begin{prop}
+ If we have two parametrizations related by $\tilde{\sigma} = \sigma \circ \varphi: \tilde{V} \to U$, then $\varphi: \tilde{V} \to V$ is an isometry of Riemannian metrics (on $V$ and $\tilde{V}$).
+\end{prop}
+
+\begin{defi}[Length and energy of curve]
+ Given a smooth curve $\Gamma: [a, b] \to S \subseteq \R^3$, the \emph{length} of $\Gamma$ is
+ \[
+ \length(\Gamma) = \int_a^b \|\Gamma'(t)\| \;\d t.
+ \]
+ The \emph{energy} of the curve is
+ \[
+ \energy(\Gamma) = \int_a^b \|\Gamma'(t)\|^2 \;\d t.
+ \]
+\end{defi}
+We can think of the energy as something like the kinetic energy of a particle along the path, except that we are missing the factor of $\frac{1}{2}m$, because it is annoying.
+
+How does this work with parametrizations? For the sake of simplicity, we assume $\Gamma([a, b]) \subseteq U$ for some parametrization $\sigma: V \to U$. Then we define the new curve
+\[
+ \gamma = \sigma^{-1} \circ \Gamma: [a, b] \to V.
+\]
+This curve has two components, say $\gamma = (\gamma_1, \gamma_2)$. Then we have
+\[
+ \Gamma'(t) = (\d \sigma)_{\gamma(t)}(\dot{\gamma}_1(t) \mathbf{e}_1 + \dot{\gamma}_2(t) \mathbf{e}_2) = \dot{\gamma}_1 \sigma_u + \dot{\gamma}_2 \sigma_v,
+\]
+and thus
+\[
+ \|\Gamma'(t)\| = \bra \dot{\gamma}, \dot{\gamma}\ket_{\gamma(t)}^{\frac{1}{2}} = (E \dot{\gamma}_1^2 + 2F \dot{\gamma}_1 \dot{\gamma}_2 + G \dot{\gamma}_2^2)^{\frac{1}{2}}.
+\]
+So we get
+\[
+ \length \Gamma = \int_a^b (E \dot{\gamma}_1^2 + 2F \dot{\gamma}_1 \dot{\gamma}_2 + G \dot{\gamma}_2^2)^{\frac{1}{2}}\;\d t.
+\]
+Similarly, the energy is given by
+\[
+ \energy \Gamma = \int_a^b (E \dot{\gamma}_1^2 + 2F \dot{\gamma}_1 \dot{\gamma}_2 + G \dot{\gamma}_2^2)\;\d t.
+\]
+This agrees with what we've had for Riemannian metrics.
+\begin{defi}[Area]
+ Given a smooth $C^\infty$ parametrization $\sigma: V \to U \subseteq S \subseteq \R^3$, and a region $T \subseteq U$, we define the \emph{area} of $T$ to be
+ \[
+ \area(T) = \int_{\theta(T)} \sqrt{EG - F^2} \;\d u\;\d v,
+ \]
+ whenever the integral exists (where $\theta = \sigma^{-1}$ is a chart).
+\end{defi}
+
+\begin{prop}
+ The area of $T$ is independent of the choice of parametrization. So it extends to more general subsets $T \subseteq S$, not necessarily living in the image of a parametrization.
+\end{prop}
+
+\begin{proof}
+ Exercise! % fill in
+\end{proof}
+
+Note that in examples, $\sigma(V) = U$ is often a dense subset of $S$. For example, if we work with the sphere, we can easily parametrize everything but the poles. In that case, it suffices to use just this one parametrization $\sigma$ to compute $\area(S)$.
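For instance, the sphere computation can be checked symbolically. In the sketch below (Python with sympy, our choice of tool), the latitude-longitude parametrization misses only the poles, so integrating $\sqrt{EG - F^2}$ over its domain recovers the full area $4\pi$:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Latitude-longitude parametrization of the unit sphere (poles excluded).
sigma = sp.Matrix([sp.cos(u)*sp.cos(v), sp.cos(u)*sp.sin(v), sp.sin(u)])
sigma_u, sigma_v = sigma.diff(u), sigma.diff(v)

E = sigma_u.dot(sigma_u)
F = sigma_u.dot(sigma_v)
G = sigma_v.dot(sigma_v)

EG_F2 = sp.simplify(E*G - F**2)      # cos(u)**2
# sqrt(EG - F^2) = cos u, since cos u > 0 for u in (-pi/2, pi/2).
area = sp.integrate(sp.cos(u), (u, -sp.pi/2, sp.pi/2), (v, 0, 2*sp.pi))
```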
+
+Note also that areas are invariant under isometries.
+\subsection{Geodesics}
+We now come to the important idea of a \emph{geodesic}. We will first define these for Riemannian metrics, and then generalize it to general embedded surfaces.
+
+\begin{defi}[Geodesic]
+ Let $V \subseteq \R_{u, v}^2$ be open, and $E \;\d u^2 + 2F \;\d u\;\d v + G \;\d v^2$ be a Riemannian metric on $V$. We let
+ \[
+ \gamma = (\gamma_1, \gamma_2): [a, b] \to V
+ \]
+ be a smooth curve. We say $\gamma$ is a \emph{geodesic} with respect to the Riemannian metric if it satisfies
+ \begin{align*}
+ \frac{\d}{\d t}(E \dot{\gamma}_1 + F \dot{\gamma}_2) &= \frac{1}{2}(E_u \dot{\gamma}_1^2 + 2 F_u \dot{\gamma}_1\dot{\gamma}_2 + G_u \dot{\gamma}_2^2)\\
+ \frac{\d}{\d t}(F \dot{\gamma}_1 + G \dot{\gamma}_2) &= \frac{1}{2}(E_v \dot{\gamma}_1^2 + 2 F_v \dot{\gamma}_1\dot{\gamma}_2 + G_v \dot{\gamma}_2^2)
+ \end{align*}
+ for all $t \in [a, b]$. These equations are known as the \emph{geodesic ODEs}.
+\end{defi}
+What exactly do these equations mean? We will soon show that these are curves that minimize (more precisely, are stationary points of) energy. To do so, we need to come up with a way of describing what it means for $\gamma$ to minimize energy among all possible curves.
+
+\begin{defi}[Proper variation]
+ Let $\gamma: [a, b] \to V$ be a smooth curve, and let $\gamma(a) = p$ and $\gamma(b) = q$. A \emph{proper variation} of $\gamma$ is a $C^\infty$ map
+ \[
+ h: [a, b] \times (-\varepsilon, \varepsilon) \subseteq \R^2 \to V
+ \]
+ such that
+ \[
+ h(t, 0) = \gamma(t)\text{ for all }t \in [a, b],
+ \]
+ and
+ \[
+ h(a, \tau) = p,\quad h(b, \tau) = q\text{ for all }|\tau| < \varepsilon,
+ \]
+ and that
+ \[
+ \gamma_\tau = h(\ph, \tau): [a, b] \to V
+ \]
+ is a $C^\infty$ curve for all fixed $\tau \in (-\varepsilon, \varepsilon)$.
+\end{defi}
+
+\begin{prop}
+ A smooth curve $\gamma$ satisfies the geodesic ODEs if and only if $\gamma$ is a stationary point of the energy function for all proper variations, i.e.\ if we define the function
+ \[
+ E(\tau) = \energy(\gamma_\tau): (-\varepsilon, \varepsilon) \to \R,
+ \]
+ then
+ \[
+ \left.\frac{\d E}{\d \tau} \right|_{\tau = 0} = 0.
+ \]
+\end{prop}
+
+\begin{proof}
+ We let $\gamma(t) = (u(t), v(t))$. Then we have
+ \[
+ \energy (\gamma) = \int_a^b (E(u, v) \dot{u}^2 + 2F(u, v) \dot{u}\dot{v} + G(u, v) \dot{v}^2) \;\d t = \int_a^b I(u, v, \dot{u}, \dot{v}) \;\d t.
+ \]
+ We consider this as a function of four variables $u, \dot{u}, v, \dot{v}$, which are not necessarily related to one another. From the calculus of variations, we know $\gamma$ is stationary if and only if
+ \[
+ \frac{\d}{\d t} \left(\frac{\partial I}{\partial \dot{u}}\right) = \frac{\partial I}{\partial u},\quad \frac{\d}{\d t} \left(\frac{\partial I}{\partial \dot{v}}\right) = \frac{\partial I}{\partial v}.
+ \]
+ The first equation gives us
+ \[
+ \frac{\d}{\d t} (2 (E\dot{u} + F\dot{v})) = E_u \dot{u}^2 + 2F_u\dot{u}\dot{v} + G_u\dot{v}^2,
+ \]
+ which is exactly the first geodesic ODE (after dividing both sides by $2$). Similarly, the second equation gives the other geodesic ODE. So done.
+\end{proof}
+
+Since the definition of a geodesic involves the derivative only, which is a local property, we can easily generalize the definition to arbitrary embedded surfaces.
+
+\begin{defi}[Geodesic on smooth embedded surface]
+ Let $S \subseteq \R^3$ be an embedded surface. Let $\Gamma: [a, b] \to S$ be a smooth curve in $S$, and suppose there is a parametrization $\sigma: V \to U \subseteq S$ such that $\im \Gamma \subseteq U$. We let $\theta = \sigma^{-1}$ be the corresponding chart.
+
+ We define a new curve in $V$ by
+ \[
+ \gamma = \theta \circ \Gamma : [a, b] \to V.
+ \]
+ Then we say $\Gamma$ is a \emph{geodesic} on $S$ if and only if $\gamma$ is a geodesic with respect to the induced Riemannian metric.
+
+ For a general $\Gamma: [a, b] \to S$, we say $\Gamma$ is a \emph{geodesic} if for each point $t_0 \in [a, b]$, there is a neighbourhood $\tilde{V}$ of $t_0$ such that $\im \Gamma|_{\tilde{V}}$ lies in the domain of some chart, and $\Gamma|_{\tilde{V}}$ is a geodesic in the previous sense.
+\end{defi}
+
+\begin{cor}
+ If a curve $\Gamma$ minimizes the energy among all curves from $P = \Gamma(a)$ to $Q = \Gamma(b)$, then $\Gamma$ is a geodesic.
+\end{cor}
+
+\begin{proof}
+ For any $a_1, b_1$ such that $a \leq a_1 \leq b_1 \leq b$, we let $\Gamma_1 = \Gamma|_{[a_1, b_1]}$. Then $\Gamma_1$ also minimizes the energy between $a_1$ and $b_1$ among all curves between $\Gamma(a_1)$ and $\Gamma(b_1)$.
+
+ If we picked $a_1, b_1$ such that $\Gamma([a_1, b_1]) \subseteq U$ for some parametrized neighbourhood $U$, then $\Gamma_1$ is a geodesic by the previous proposition. Since the parametrized neighbourhoods cover $S$, at each point $t_0 \in [a, b]$, we can find $a_1, b_1$ such that $\Gamma([a_1, b_1]) \subseteq U$. So done.
+\end{proof}
+
+This is good, but we can do better. To do so, we need a lemma.
+
+\begin{lemma}
+ Let $V \subseteq \R^2$ be an open set with a Riemannian metric, and let $P, Q \in V$. Consider $C^\infty$ curves $\gamma: [0, 1] \to V$ such that $\gamma(0) = P$, $\gamma(1) = Q$. Then such a $\gamma$ will minimize the energy (and therefore is a geodesic) if and only if $\gamma$ minimizes the length \emph{and} has constant speed.
+\end{lemma}
+This means being a geodesic is \emph{almost} the same as minimizing length. It's just that to be a geodesic, we have to parametrize it carefully.
+
+\begin{proof}
+ Recall the Cauchy-Schwarz inequality for continuous functions $f, g \in C[0,1]$, which says
+ \[
+ \left(\int_0^1 f(x)g(x)\;\d x\right)^2 \leq \left(\int_0^1 f(x)^2\;\d x\right)\left(\int_0^1 g(x)^2\;\d x\right),
+ \]
+ with equality iff $g = \lambda f$ for some $\lambda \in \R$, or $f = 0$, i.e.\ $g$ and $f$ are linearly dependent.
+
+ We now put $f = 1$ and $g = \|\dot{\gamma}\|$. Then Cauchy-Schwarz says
+ \[
+ (\length \gamma)^2 \leq \energy(\gamma),
+ \]
+ with equality if and only if $\|\dot{\gamma}\|$ is constant, i.e.\ $\gamma$ has constant speed.
+
+ From this, we see that a curve of minimal energy must have constant speed. Then it follows that minimizing energy is the same as minimizing length if we move at constant speed.
+\end{proof}
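The inequality $(\length \gamma)^2 \leq \energy(\gamma)$ is easy to check numerically. Here is a small sketch in Python, using plain Riemann sums and the Euclidean metric $E = G = 1$, $F = 0$, with an illustrative curve of our own choosing:

```python
import math

def length_and_energy(speed, n=100_000):
    # Midpoint-rule approximations of the length and energy integrals
    # over the parameter interval [0, 1].
    h = 1.0 / n
    L = sum(speed((i + 0.5) * h) for i in range(n)) * h
    En = sum(speed((i + 0.5) * h) ** 2 for i in range(n)) * h
    return L, En

# gamma(t) = (t, t^2) in the Euclidean plane, so the speed is
# sqrt(1 + 4t^2); this parametrization does not have constant speed.
L, En = length_and_energy(lambda t: math.sqrt(1 + 4 * t * t))

# Cauchy-Schwarz: length^2 <= energy, strictly here since the speed varies.
assert L * L < En
```

Reparametrizing the same curve at constant speed would turn the inequality into an equality, matching the proof above.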
+
+Is the converse true? Are all geodesics length minimizing? The answer is ``almost''. We have to be careful with our conditions in order for it to be true.
+\begin{prop}
+ A curve $\Gamma$ is a geodesic if and only if it minimizes the energy \emph{locally}, and this happens if it minimizes the length locally and has constant speed.
+
+ Here minimizing a quantity locally means for every $t \in [a, b]$, there is some $\varepsilon > 0$ such that $\Gamma|_{[t - \varepsilon, t + \varepsilon]}$ minimizes the quantity.
+\end{prop}
+We will not prove this. Local minimization is the best we can hope for, since the definition of a geodesic involves differentiation, and derivatives are local properties.
+
+\begin{prop}
+ In fact, the geodesic ODEs imply $\|\Gamma'(t)\|$ is constant.
+\end{prop}
+We will also not prove this, but in the special case of the hyperbolic plane, we can check this directly. This is an exercise on the third example sheet.
+
+A natural question to ask is that if we pick a point $P$ and a tangent direction $\mathbf{a}$, can we find a geodesic through $P$ whose tangent vector at $P$ is $\mathbf{a}$?
+
+In the geodesic equations, if we expand out the derivative, we can write the equation as
+\[
+ \begin{pmatrix}
+ E & F\\
+ F & G
+ \end{pmatrix}
+ \begin{pmatrix}
+ \ddot{\gamma}_1\\
+ \ddot{\gamma}_2
+ \end{pmatrix} = \text{something}.
+\]
+Since the Riemannian metric is positive definite, we can invert the matrix and get an equation of the form
+\[
+ \begin{pmatrix}
+ \ddot{\gamma}_1\\ \ddot{\gamma}_2
+ \end{pmatrix} =
+ H(\gamma_1, \gamma_2, \dot{\gamma}_1, \dot{\gamma}_2)
+\]
+for some function $H$. From the general theory of ODEs in IB Analysis II, subject to some sensible conditions, given any $P = (u_0, v_0) \in V$ and $\mathbf{a} = (p_0, q_0) \in \R^2$, there is a \emph{unique} geodesic curve $\gamma(t)$ defined for $|t| < \varepsilon$ with $\gamma(0) = P$ and $\dot{\gamma}(0) = \mathbf{a}$. In other words, we can choose a point and a direction, and there is then a unique geodesic going that way.
+
+Note that we need the restriction that $\gamma$ is defined only for $|t| < \varepsilon$ since we might run off to the boundary in finite time. So we need not be able to define it for all $t \in \R$.
+
+How is this result useful? We can use the uniqueness part to find geodesics. We can try to find some family of curves $\mathcal{C}$ that are length-minimizing. To prove that we have found \emph{all} of them, we can show that given any point $P \in V$ and direction $\mathbf{a}$, there is some curve in $\mathcal{C}$ through $P$ with direction $\mathbf{a}$.
+
+\begin{eg}
+ Consider the sphere $S^2$. Recall that arcs of great circles are length-minimizing, at least locally. So these are indeed geodesics. Are these all? We know for any $P \in S^2$ and any tangent direction, there exists a unique great circle through $P$ in this direction. So there cannot be any other geodesics on $S^2$, by uniqueness.
+
+ Similarly, we find that hyperbolic lines are precisely the geodesics on the hyperbolic plane.
+\end{eg}
+
+We have defined these geodesics as solutions of certain ODEs. It is possible to show that the solutions of these ODEs depend $C^\infty$-smoothly on the initial conditions. We shall use this to construct around each point $P \in S$ in a surface \emph{geodesic polar coordinates}. The idea is that to specify a point near $P$, we can just say ``go in direction $\theta$, and then move along the corresponding geodesic for time $r$''.
+
+We can make this (slightly) more precise, and provide a quick sketch of how we can do this formally. We let $\psi: U \to V$ be some chart with $P \in U \subseteq S$. We wlog $\psi(P) = 0 \in V \subseteq \R^2$. We denote by $\theta$ the polar angle (coordinate), defined on $V \setminus \{0\}$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 3);
+
+ \draw (0, 0) -- (2, 1) node [circ] {};
+ \draw (0.6, 0) arc(0:26.57:0.6) node [pos=0.7, right] {$\theta$};
+ \end{tikzpicture}
+\end{center}
+Then for any given $\theta$, there is a unique geodesic $\gamma^{\theta}: (-\varepsilon, \varepsilon) \to V$ such that $\gamma^{\theta}(0) = 0$, and $\dot{\gamma}^\theta(0)$ is the unit vector in the $\theta$ direction.
+
+We define
+\[
+ \sigma(r, \theta) = \gamma^\theta(r)
+\]
+whenever this is defined. It is possible to check that $\sigma$ is $C^\infty$-smooth. While we would like to say that $\sigma$ gives us a parametrization, this is not exactly true, since we cannot define $\theta$ continuously. Instead, for each $\theta_0$, we define the region
+\[
+ W_{\theta_0} = \{(r, \theta): 0 < r < \varepsilon, \theta_0 < \theta < \theta_0 + 2\pi\} \subseteq \R^2.
+\]
+Writing $V_0$ for the image of $W_{\theta_0}$ under $\sigma$, the composition
+\[
+ \begin{tikzcd}
+ W_{\theta_0} \ar[r, "\sigma"] & V_0 \ar[r, "\psi^{-1}"] & U_0 \subseteq S
+ \end{tikzcd}
+\]
+is a valid parametrization. Thus $\sigma^{-1} \circ \psi$ is a valid chart.
+
+The coordinates $(r, \theta)$ provided by this chart are the \emph{geodesic polar coordinates}. We have the following lemma:
+
+\begin{lemma}[Gauss' lemma]
+ The geodesic circles $\{r = r_0\} \subseteq W$ are orthogonal to their radii, i.e.\ to $\gamma^\theta$, and the Riemannian metric (first fundamental form) on $W$ is
+ \[
+ \d r^2 + G(r, \theta) \;\d \theta^2.
+ \]
+\end{lemma}
+This is why we like geodesic polar coordinates. Using these, we can put the Riemannian metric into a very simple form.
+
+Of course, this is just a sketch of what really happens, and there are many holes to fill in. For more details, go to IID Differential Geometry.
+
+\begin{defi}[Atlas]
+ An \emph{atlas} is a collection of charts covering the whole surface.
+\end{defi}
+
+The collection of all geodesic polars about all points give us an example. Other interesting atlases are left as an exercise on example sheet 3.
+
+\subsection{Surfaces of revolution}
+So far, we do not have many examples of surfaces. We now describe a nice way of obtaining surfaces --- we obtain a surface $S$ by rotating a plane curve $\eta$ around a line $\ell$. We may wlog assume that coordinates are chosen so that $\ell$ is the $z$-axis, and $\eta$ lies in the $x$-$z$ plane.
+
+More precisely, we let $\eta: (a, b) \to \R^3$, and write
+\[
+ \eta(u) = (f(u), 0, g(u)).
+\]
+Note that it is possible that $a = -\infty$ and/or $b = \infty$.
+
+We require $\|\eta'(u)\| = 1$ for all $u$. This is sometimes known as \emph{parametrization by arclength}. We also require $f(u) > 0$ for all $u$, or else things won't make sense.
+
+Finally, we require that $\eta$ is a homeomorphism to its image. This is more than requiring $\eta$ to be injective. This is to eliminate things like
+\begin{center}
+ \begin{tikzpicture}
+ \path[use as bounding box] (-1.8, -0.7) rectangle (1.8,0.7);
+ \draw [-latex'] (0, 0) .. controls (2, 2) and (2, -2) .. (0, 0);
+ \draw [-latex'] (0, 0) .. controls (-2, -2) and (-2, 2) .. (0, 0);
+ \end{tikzpicture}
+\end{center} % can insert comb
+Then $S$ is the image of the following map:
+\[
+ \sigma(u, v) = (f(u) \cos v, f(u) \sin v, g(u))
+\]
+for $a < u < b$ and $0 \leq v \leq 2\pi$. This is not exactly a parametrization, since it is not injective ($v = 0$ and $v = 2\pi$ give the same points). To rectify this, for each $\alpha \in \R$, we define
+\[
+ \sigma^\alpha: (a, b) \times (\alpha, \alpha + 2\pi) \to S,
+\]
+given by the same formula, and this is a homeomorphism onto the image. The proof of this is left as an exercise for the reader. % complete?
+
+Assuming this, we now show that this is indeed a parametrization. It is evidently smooth, since $f$ and $g$ both are. To show this is a parametrization, we need to show that the partial derivatives are linearly independent, i.e.\ that their cross product is non-zero. We have
+\begin{align*}
+ \sigma_u &= (f' \cos v, f'\sin v, g')\\
+ \sigma_v &= (-f \sin v, f \cos v, 0).
+\end{align*}
+We then compute the cross product as
+\[
+ \sigma_u \times \sigma_v = (-fg' \cos v, -fg' \sin v, ff').
+\]
+So we have
+\[
+ \|\sigma_u \times \sigma_v\|^2 = f^2 (g'^2 + f'^2) = f^2 \not= 0.
+\]
+Thus every $\sigma^\alpha$ is a valid parametrization, and $S$ is a valid embedded surface.
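As a concrete check, this computation can be carried out symbolically for the embedded torus. A sketch in Python with sympy, taking the unit-speed generating curve $\eta(u) = (a + \cos u, 0, \sin u)$ with $a > 1$ (an illustrative choice of ours):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
a = sp.Symbol('a', positive=True)  # assume a > 1, so f = a + cos u > 0

# Unit-speed generating curve: f' = -sin u, g' = cos u, so f'^2 + g'^2 = 1.
f = a + sp.cos(u)
g = sp.sin(u)

sigma = sp.Matrix([f*sp.cos(v), f*sp.sin(v), g])
cross = sigma.diff(u).cross(sigma.diff(v))

# ||sigma_u x sigma_v||^2 = f^2 (g'^2 + f'^2) = f^2, which never vanishes.
norm_sq = sp.simplify(cross.dot(cross))
```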
+
+More generally, we can allow $S$ to be covered by several families of parametrizations of type $\sigma^\alpha$, i.e.\ we can consider more than one curve or more than one axis of rotation. This allows us to obtain, say, $S^2$ or the embedded torus (in the old sense, we cannot view $S^2$ as a surface of revolution in the obvious way, since we will be missing the poles).
+
+\begin{defi}[Parallels]
+ On a surface of revolution, \emph{parallels} are curves of the form
+ \[
+ \gamma(t) = \sigma(u_0, t)\text{ for fixed }u_0.
+ \]
+ \emph{Meridians} are curves of the form
+ \[
+ \gamma(t) = \sigma(t, v_0)\text{ for fixed }v_0.
+ \]
+\end{defi}
+These are generalizations of the notions of longitude and latitude (in some order) on Earth.
+
+In a general surface of revolution, we can compute the first fundamental form with respect to $\sigma$ as
+\begin{align*}
+ E &= \|\sigma_u\|^2 = f'^2 + g'^2 = 1,\\
+ F &= \sigma_u \cdot \sigma_v = 0,\\
+ G &= \|\sigma_v\|^2 = f^2.
+\end{align*}
+So the first fundamental form is of the same simple form as in geodesic polar coordinates.
+
+Putting these explicit expressions into the geodesic formula, we find that the geodesic equations are
+\begin{align*}
+ \ddot{u} &= f \frac{\d f}{\d u} \dot{v}^2\\
+ \frac{\d}{\d t} (f^2 \dot{v}) &= 0.
+\end{align*}
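The second equation says $f^2 \dot{v}$ is conserved along a geodesic (this is Clairaut's relation for surfaces of revolution). We can integrate the expanded system $\ddot{u} = f f' \dot{v}^2$, $\ddot{v} = -2 f' \dot{u}\dot{v}/f$ numerically and watch both $f^2\dot{v}$ and the speed stay constant. A sketch in Python using classical RK4; the sphere-of-revolution profile $f(u) = \sin u$, $g(u) = \cos u$ and the initial data are illustrative choices:

```python
import math

def f(u):  return math.sin(u)   # profile of the unit sphere, u in (0, pi)
def fp(u): return math.cos(u)   # f'

def rhs(y):
    u, v, du, dv = y
    # Geodesic ODEs for the metric du^2 + f(u)^2 dv^2, expanded out.
    return (du, dv, f(u) * fp(u) * dv * dv, -2 * fp(u) * du * dv / f(u))

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k1)))
    k3 = rhs(tuple(a + h / 2 * b for a, b in zip(y, k2)))
    k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h / 6 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

# Start on the equator u = pi/2 with unit speed, tilted off the parallel.
y = (math.pi / 2, 0.0, 0.6, 0.8)
for _ in range(1000):
    y = rk4_step(y, 1e-3)

clairaut = f(y[0])**2 * y[3]                 # f^2 v-dot, should stay 0.8
speed_sq = y[2]**2 + f(y[0])**2 * y[3]**2    # should stay 1
```

The conservation of the speed here also illustrates the earlier (unproven) proposition that the geodesic ODEs force $\|\Gamma'(t)\|$ to be constant.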
+
+\begin{prop}
+ We assume $\|\dot{\gamma}\| = 1$, i.e.\ $\dot{u}^2 + f^2 (u) \dot{v}^2 = 1$.
+ \begin{enumerate}
+ \item Every unit-speed meridian is a geodesic.
+ \item A (unit speed) parallel will be a geodesic if and only if
+ \[
+ \frac{\d f}{\d u} (u_0) = 0,
+ \]
+ i.e.\ $u_0$ is a critical point for $f$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item In a meridian, $v = v_0$ is constant. So the second equation holds. Also, we know $\|\dot{\gamma}\| = |\dot{u}| = 1$. So $\ddot{u} = 0$. So the first geodesic equation is satisfied.
+ \item In a parallel, $u = u_0$ is constant, so $\dot{u} = 0$ and the unit-speed condition gives $f(u_0)^2 \dot{v}^2 = 1$. So
+ \[
+ \dot{v} = \pm \frac{1}{f(u_0)}.
+ \]
+ So the second equation holds. Since $\dot{v}$ and $f$ are non-zero, the first equation is satisfied if and only if $\frac{\d f}{\d u} = 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\subsection{Gaussian curvature}
+We will next consider the notion of curvature. Intuitively, Euclidean space is ``flat'', while the sphere is ``curved''. In this section, we will define a quantity known as the \emph{curvature} that characterizes how curved a surface is.
+
+The definition itself is not too intuitive. So what we will do is that we first study the curvature of curves, which is something we already know from, say, IA Vector Calculus. Afterwards, we will make an analogous definition for surfaces.
+
+\begin{defi}[Curvature of curve]
+ We let $\eta: [0, \ell] \to \R^2$ be a curve parametrized with unit speed, i.e.\ $\|\eta'\| = 1$. The \emph{curvature} $\kappa$ at the point $\eta(s)$ is determined by
+ \[
+ \eta'' = \kappa \mathbf{n},
+ \]
+ where $\mathbf{n}$ is the unit normal, chosen so that $\kappa$ is non-negative.
+\end{defi}
+If $f: [c, d] \to [0, \ell]$ is a smooth function and $f'(t) > 0$ for all $t$, then we can reparametrize our curve to get
+\[
+ \gamma(t) = \eta(f(t)).
+\]
+We can then find
+\[
+ \dot{\gamma}(t) = \frac{\d f}{\d t} \eta'(f(t)).
+\]
+So we have
+\[
+ \|\dot{\gamma}\|^2 =\left(\frac{\d f}{\d t}\right)^2.
+\]
+We also have by definition
+\[
+ \eta''(f(t)) = \kappa \mathbf{n},
+\]
+where $\kappa$ is the curvature at $\gamma(t)$.
+
+On the other hand, Taylor's theorem tells us
+\begin{multline*}
+ \gamma(t + \Delta t) - \gamma(t) = \left(\frac{\d f}{\d t}\right) \eta'(f(t)) \Delta t \\
+ + \frac{1}{2} \left[\left(\frac{\d^2 f}{\d t^2}\right) \eta'(f(t)) + \left(\frac{\d f}{\d t}\right)^2 \eta''(f(t))\right](\Delta t)^2 + \text{higher order terms}.
+\end{multline*}
+Now we know by assumption that
+\[
+ \eta' \cdot \eta' = 1.
+\]
+Differentiating thus gives
+\[
+ \eta' \cdot \eta'' = 0.
+\]
+Hence we get
+\[
+ \eta' \cdot \mathbf{n} = 0.
+\]
+We now take the dot product of the Taylor expansion with $\mathbf{n}$, killing off all the $\eta'$ terms. Then we get
+\[
+ (\gamma(t + \Delta t) - \gamma(t)) \cdot \mathbf{n} = \frac{1}{2} \kappa \|\dot{\gamma}\|^2 (\Delta t)^2 + \cdots,\tag{$*$}
+\]
+where $\kappa$ is the curvature. This is the distance denoted below:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 2) parabola bend (0, 0) (2, 2);
+ \draw (-2, 0) -- (2, 0);
+ \node [circ] at (0, 0) {};
+ \node [below] at (0, 0) {$\gamma(t)$};
+
+ \node [circ] at (1, 0.5) {};
+ \node [right] at (1, 0.5) {$\gamma(t + \Delta t)$};
+
+ \draw [dashed] (1, 0.5) -- (1, 0);
+
+ \draw (1, -1) edge [out=30, in=0, -latex'] (1, 0.25);
+ \node [left] at (1, -1) {$(\gamma(t + \Delta t) - \gamma (t))\cdot \mathbf{n}$};
+ \end{tikzpicture}
+\end{center}
+We can also compute
+\[
+ \|\gamma(t + \Delta t) - \gamma(t)\|^2 = \|\dot{\gamma}\|^2 (\Delta t)^2 + \cdots.\tag{$\dagger$}
+\]
+So we find that $\frac{1}{2}\kappa$ is the ratio of the leading (quadratic) terms of $(*)$ and $(\dagger)$, and is independent of the choice of parametrization.
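For example, for a circle of radius $R$ parametrized by arclength, this definition gives the expected curvature $1/R$. A quick symbolic check (Python with sympy, our choice of tool):

```python
import sympy as sp

s = sp.Symbol('s', real=True)
R = sp.Symbol('R', positive=True)

# Circle of radius R parametrized by arclength: ||eta'(s)|| = 1.
eta = sp.Matrix([R * sp.cos(s / R), R * sp.sin(s / R)])

speed_sq = sp.simplify(eta.diff(s).dot(eta.diff(s)))              # 1
kappa = sp.simplify(sp.sqrt(eta.diff(s, 2).dot(eta.diff(s, 2))))  # 1/R
```

So small circles are sharply curved and large circles nearly flat, as one would hope.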
+
+We now try to apply this thinking to embedded surfaces. We let $\sigma: V \to U \subseteq S$ be a parametrization of a surface $S$ (with $V \subseteq \R^2$ open). We apply Taylor's theorem to $\sigma$ to get
+\begin{multline*}
+ \sigma(u + \Delta u, v + \Delta v) - \sigma(u, v) = \sigma_u \Delta u + \sigma_v \Delta v \\
+ + \frac{1}{2}(\sigma_{uu} (\Delta u)^2 + 2 \sigma_{uv} \Delta u \Delta v + \sigma_{vv} (\Delta v)^2) + \cdots.
+\end{multline*}
+We now measure the deviation from the tangent plane, i.e.
+\[
+ (\sigma(u + \Delta u, v + \Delta v) - \sigma(u, v))\cdot \mathbf{N} = \frac{1}{2}(L(\Delta u)^2 + 2M\Delta u \Delta v + N (\Delta v)^2) + \cdots,
+\]
+where
+\begin{align*}
+ L &= \sigma_{uu} \cdot \mathbf{N},\\
+ M &= \sigma_{uv} \cdot \mathbf{N},\\
+ N &= \sigma_{vv} \cdot \mathbf{N}.
+\end{align*}
+Note that $\mathbf{N}$ and $N$ are different things. $\mathbf{N}$ is the unit normal, while $N$ is the expression given above.
+
+We can also compute
+\[
+ \|\sigma(u + \Delta u, v + \Delta v) - \sigma(u, v)\|^2 = E (\Delta u)^2 + 2F \Delta u \Delta v + G (\Delta v)^2 + \cdots.
+\]
+We now define the second fundamental form as
+\begin{defi}[Second fundamental form]
+ The \emph{second fundamental form} on $V$ with $\sigma: V \to U \subseteq S$ for $S$ is
+ \[
+ L \;\d u^2 + 2M \;\d u \;\d v + N \;\d v^2,
+ \]
+ where
+ \begin{align*}
+ L &= \sigma_{uu} \cdot \mathbf{N}\\
+ M &= \sigma_{uv} \cdot \mathbf{N}\\
+ N &= \sigma_{vv} \cdot \mathbf{N}.
+ \end{align*}
+\end{defi}
+
+\begin{defi}[Gaussian curvature]
+ The \emph{Gaussian curvature} $K$ of a surface $S$ at $P \in S$ is the ratio of the determinants of the two fundamental forms, i.e.
+ \[
+ K = \frac{LN - M^2}{EG - F^2}.
+ \]
+ This is valid since the first fundamental form is positive-definite and in particular has non-zero determinant.
+\end{defi}
+We can imagine that $K$ is a capital $\kappa$, but it looks just like a normal capital $K$.
+
+Note that $K > 0$ means the second fundamental form is definite (i.e.\ either positive definite or negative definite). If $K < 0$, then the second fundamental form is indefinite. If $K = 0$, then the second fundamental form is semi-definite (but not definite).
+
+\begin{eg}
+ Consider the unit sphere $S^2 \subseteq \R^3$. This has $K > 0$ at each point. We can compute this directly, or we can, for the moment, pretend that $F = M = 0$. Then by symmetry, $L$ and $N$ have the same sign, so $K = \frac{LN}{EG} > 0$.
+
+ On the other hand, we can imagine a Pringle crisp (also known as a hyperbolic paraboloid), and this has $K < 0$. More examples are left on the third example sheet. For example we will see that the embedded torus in $\R^3$ has points at which $K > 0$, some where $K < 0$, and others where $K = 0$.
+\end{eg}
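Both claims in the example can be verified symbolically from the definition of $K$. In the sketch below (Python with sympy), the second fundamental form is computed against the unnormalized normal $\mathbf{n} = \sigma_u \times \sigma_v$; dividing by $\mathbf{n}\cdot\mathbf{n}$ once accounts for the normalization and avoids square roots. The torus is the one with generating curve $(2 + \cos u, 0, \sin u)$, an illustrative choice:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

def gauss_curvature(sigma):
    su, sv = sigma.diff(u), sigma.diff(v)
    n = su.cross(sv)                               # unnormalized normal
    E, F, G = su.dot(su), su.dot(sv), sv.dot(sv)   # first fundamental form
    # l = L|n|, m = M|n|, nn = N|n|, so L*N - M^2 = (l*nn - m^2)/(n.n).
    l, m, nn = su.diff(u).dot(n), su.diff(v).dot(n), sv.diff(v).dot(n)
    return sp.simplify((l * nn - m**2) / (n.dot(n) * (E*G - F**2)))

sphere = sp.Matrix([sp.cos(u)*sp.cos(v), sp.cos(u)*sp.sin(v), sp.sin(u)])
K_sphere = gauss_curvature(sphere)   # 1 everywhere

torus = sp.Matrix([(2 + sp.cos(u))*sp.cos(v),
                   (2 + sp.cos(u))*sp.sin(v), sp.sin(u)])
K_torus = gauss_curvature(torus)     # cos(u) / (2 + cos(u))
```

So the torus has $K > 0$ on the outer circle ($u = 0$), $K < 0$ on the inner circle ($u = \pi$), and $K = 0$ on the top and bottom circles ($u = \pm \pi/2$).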
+It can be deduced, as in the case of curves, that $K$ is independent of the choice of parametrization.
+
+Recall that around each point, we can get some nice coordinates where the first fundamental form looks simple. We might expect the second fundamental form to look simple as well. That is indeed true, but we need to do some preparation first.
+\begin{prop}
+ We let
+ \[
+ \mathbf{N} = \frac{\sigma_u \times \sigma_v}{\|\sigma_u \times \sigma_v\|}
+ \]
+ be our unit normal for a surface patch. Then at each point, we have
+ \begin{align*}
+ \mathbf{N}_u &= a \sigma_u + b \sigma_v,\\
+ \mathbf{N}_v &= c \sigma_u + d \sigma_v,
+ \end{align*}
+ where
+ \[
+ -
+ \begin{pmatrix}
+ L & M\\
+ M & N
+ \end{pmatrix}=
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \begin{pmatrix}
+ E & F\\
+ F & G
+ \end{pmatrix}.
+ \]
+ In particular,
+ \[
+ K = ad - bc.
+ \]
+\end{prop}
+
+\begin{proof}
+ Note that
+ \[
+ \mathbf{N} \cdot \mathbf{N} = 1.
+ \]
+ Differentiating gives
+ \[
+ \mathbf{N}\cdot \mathbf{N}_u = 0 = \mathbf{N}\cdot \mathbf{N}_v.
+ \]
+ Since $\mathbf{N}_u$ and $\mathbf{N}_v$ are orthogonal to $\mathbf{N}$, they lie in the tangent plane, which is spanned by $\sigma_u$ and $\sigma_v$. So there are some $a, b, c, d$ such that
+ \begin{align*}
+ \mathbf{N}_u &= a \sigma_u + b \sigma_v\\
+ \mathbf{N}_v &= c \sigma_u + d \sigma_v.
+ \end{align*}
+ By definition of $\sigma_u$, we have
+ \[
+ \mathbf{N}\cdot \sigma_u = 0.
+ \]
+ So differentiating gives
+ \[
+ \mathbf{N}_u \cdot \sigma_u + \mathbf{N}\cdot \sigma_{uu} = 0.
+ \]
+ So we know
+ \[
+ \mathbf{N}_u \cdot \sigma_u = -L.
+ \]
+ Similarly, we find
+ \[
+ \mathbf{N}_u \cdot \sigma_v = -M = \mathbf{N}_v \cdot \sigma_u,\quad \mathbf{N}_v \cdot \sigma_v = -N.
+ \]
+ We dot our original definition of $\mathbf{N}_u, \mathbf{N}_v$ in terms of $a, b, c, d$ with $\sigma_u$ and $\sigma_v$ to obtain
+ \begin{align*}
+ -L &= aE + bF & -M &= aF + bG\\
+ -M &= cE + dF & -N &= cF + dG.
+ \end{align*}
+ Taking determinants, we get the formula for the curvature.
+\end{proof}
+
+If we have nice coordinates on $S$, then we get a nice formula for the Gaussian curvature $K$.
+\begin{thm}
+ Suppose for a parametrization $\sigma: V \to U \subseteq S \subseteq \R^3$, the first fundamental form is given by
+ \[
+ \d u^2 + G (u, v) \;\d v^2
+ \]
+ for some $G \in C^{\infty}(V)$. Then the Gaussian curvature is given by
+ \[
+ K = \frac{-(\sqrt{G})_{uu}}{\sqrt{G}}.
+ \]
+ In particular, we do not need to compute the second fundamental form of the surface.
+\end{thm}
+This is purely a technical result.
+\begin{proof}
+ We set
+ \[
+ \mathbf{e} = \sigma_u,\quad \mathbf{f} = \frac{\sigma_v}{\sqrt{G}}.
+ \]
+ Then $\mathbf{e}$ and $\mathbf{f}$ are unit and orthogonal. We also let $\mathbf{N} = \mathbf{e} \times \mathbf{f}$ be a third unit vector orthogonal to $\mathbf{e}$ and $\mathbf{f}$ so that they form a basis of $\R^3$.
+
+ Using the notation of the previous proposition, we have
+ \begin{align*}
+ \mathbf{N}_u \times \mathbf{N}_v &= (a \sigma_u + b \sigma_v)\times (c \sigma_u + d \sigma_v) \\
+ &= (ad - bc) \sigma_u \times \sigma_v \\
+ &= K \sigma_u \times \sigma_v\\
+ &= K \sqrt{G} \mathbf{e} \times \mathbf{f}\\
+ &= K \sqrt{G} \mathbf{N}.
+ \end{align*}
+ Thus we know
+ \begin{align*}
+ K \sqrt{G} &= (\mathbf{N}_u \times \mathbf{N}_v) \cdot \mathbf{N} \\
+ &= (\mathbf{N}_u \times \mathbf{N}_v) \cdot (\mathbf{e}\times \mathbf{f})\\
+ &= (\mathbf{N}_u \cdot \mathbf{e})(\mathbf{N}_v \cdot \mathbf{f}) - (\mathbf{N}_u \cdot \mathbf{f}) (\mathbf{N}_v \cdot \mathbf{e}).
+ \end{align*}
+ Since $\mathbf{N}\cdot \mathbf{e} = 0$, we know
+ \[
+ \mathbf{N}_u \cdot \mathbf{e} + \mathbf{N}\cdot \mathbf{e}_u = 0.
+ \]
+ Hence to evaluate the expression above, it suffices to compute $\mathbf{N}\cdot \mathbf{e}_u$ instead of $\mathbf{N}_u \cdot \mathbf{e}$.
+
+ Since $\mathbf{e} \cdot \mathbf{e} = 1$, we know
+ \[
+ \mathbf{e}\cdot \mathbf{e}_u = 0 = \mathbf{e}\cdot \mathbf{e}_v.
+ \]
+ So we can write
+ \begin{align*}
+ \mathbf{e}_u &= \alpha \mathbf{f} + \lambda_1 \mathbf{N}\\
+ \mathbf{e}_v &= \beta \mathbf{f} + \lambda_2 \mathbf{N}.
+ \end{align*}
+ Similarly, we have
+ \begin{align*}
+ \mathbf{f}_u &= -\tilde{\alpha}\mathbf{e} + \mu_1 \mathbf{N}\\
+ \mathbf{f}_v &= -\tilde{\beta}\mathbf{e} + \mu_2 \mathbf{N}.
+ \end{align*}
+ Our objective now is to find the coefficients $\mu_i, \lambda_i$, and then
+ \[
+ K\sqrt{G} = \lambda_1 \mu_2 - \lambda_2 \mu_1.
+ \]
+ Since we know $\mathbf{e} \cdot \mathbf{f} = 0$, differentiating gives
+ \begin{align*}
+ \mathbf{e}_u \cdot \mathbf{f} + \mathbf{e}\cdot \mathbf{f}_u &= 0\\
+ \mathbf{e}_v \cdot \mathbf{f} + \mathbf{e} \cdot \mathbf{f}_v &= 0.
+ \end{align*}
+ Thus we get
+ \[
+ \tilde{\alpha} = \alpha,\quad \tilde{\beta} = \beta.
+ \]
+ But we have
+ \[
+ \alpha = \mathbf{e}_u \cdot \mathbf{f} = \sigma_{uu}\cdot \frac{\sigma_v}{\sqrt{G}} = \left((\sigma_u \cdot \sigma_v)_u - \frac{1}{2} (\sigma_u \cdot \sigma_u)_v\right) \frac{1}{\sqrt{G}} = 0,
+ \]
+ since $\sigma_u \cdot \sigma_v = 0, \sigma_u \cdot \sigma_u = 1$. So $\alpha$ vanishes.
+
+ Also, we have
+ \[
+ \beta = \mathbf{e}_v \cdot \mathbf{f} = \sigma_{uv} \cdot \frac{\sigma_v}{\sqrt{G}} = \frac{1}{2} \frac{G_u}{\sqrt{G}} = (\sqrt{G})_u.
+ \]
+ Finally, we can use our equations again to find
+ \begin{align*}
+ \lambda_1 \mu_2 - \lambda_2 \mu_1 &= \mathbf{e}_u \cdot \mathbf{f}_v - \mathbf{e}_v \cdot \mathbf{f}_u \\
+ &= (\mathbf{e}\cdot \mathbf{f}_v)_u - (\mathbf{e}\cdot \mathbf{f}_u)_v \\
+ &= -\tilde{\beta}_u + \tilde{\alpha}_v \\
+ &= -(\sqrt{G})_{uu}.
+ \end{align*}
+ So we have
+ \[
+ K\sqrt{G} = -(\sqrt{G})_{uu},
+ \]
+ as required. Phew.
+\end{proof}
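The formula is easy to exercise on concrete metrics. With $\sqrt{G} = \sin u$ (geodesic polars on the unit sphere) it gives $K = 1$, and with $\sqrt{G} = \sinh u$ (the metric of constant curvature $-1$) it gives $K = -1$. A symbolic sketch (Python with sympy, our choice of tool):

```python
import sympy as sp

u = sp.Symbol('u', real=True)

def K_from_sqrtG(r):
    # First fundamental form du^2 + G dv^2 with sqrt(G) = r(u);
    # the theorem gives K = -(sqrt G)_uu / sqrt G.
    return sp.simplify(-r.diff(u, 2) / r)

K_round = K_from_sqrtG(sp.sin(u))    # unit sphere: K = 1
K_hyp = K_from_sqrtG(sp.sinh(u))     # constant curvature -1
```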
+Observe that, for $\sigma$ as in the previous theorem, $K$ depends only on the first fundamental form, not on the second fundamental form. When Gauss discovered this, he was so impressed that he called it the \emph{Theorema Egregium}, which means
+\begin{cor}[Theorema Egregium]
+ If $S_1$ and $S_2$ have locally isometric charts, then $K$ is locally the same.
+\end{cor}
+
+\begin{proof}
+ We know that this corollary is valid under the assumption of the previous theorem, i.e.\ the existence of a parametrization $\sigma$ of the surface $S$ such that the first fundamental form is
+ \[
+ \d u^2 + G(u, v) \;\d v^2.
+ \]
+ Suitable parametrizations $\sigma$ include, for each point $P \in S$, the geodesic polars $(\rho, \theta)$ around $P$. However, $P$ itself is not in the image of such a parametrization, and there is no guarantee that the geodesic polars around some other point will cover $P$. To solve this problem, we notice that $K$ is a $C^\infty$ function on $S$, and in particular continuous. So we can determine the curvature at $P$ as
+ \[
+ K(P) = \lim_{\rho \to 0} K(\rho, \theta).
+ \]
+ So done.
+
+ Note also that every surface of revolution has such a suitable parametrization, as we have previously explicitly seen.
+\end{proof}
+
+\section{Abstract smooth surfaces}
+While embedded surfaces are quite general surfaces, they do not include, say, the hyperbolic plane. We can generalize our notions by considering surfaces ``without embedding in $\R^3$''. These are known as abstract surfaces.
+\begin{defi}[Abstract smooth surface]
+ An \emph{abstract smooth surface} $S$ is a metric space (or Hausdorff (and second-countable) topological space) equipped with homeomorphisms $\theta_i: \mathcal{U}_i \to V_i$, where $\mathcal{U}_i \subseteq S$ and $V_i \subseteq \R^2$ are open sets such that
+ \begin{enumerate}
+ \item $S = \bigcup_i \mathcal{U}_i$
+ \item For any $i, j$, the transition map
+ \[
+ \phi_{ij} = \theta_j \circ \theta_i^{-1}: \theta_i(\mathcal{U}_i \cap \mathcal{U}_j) \to \theta_j(\mathcal{U}_i \cap \mathcal{U}_j)
+ \]
+ is a diffeomorphism. Note that $\theta_j(\mathcal{U}_i \cap \mathcal{U}_j)$ and $\theta_i (\mathcal{U}_i \cap \mathcal{U}_j)$ are open sets in $\R^2$. So it makes sense to talk about whether the function is a diffeomorphism.
+ \end{enumerate}
+\end{defi}
+Like for embedded surfaces, the maps $\theta_i$ are called \emph{charts}, and the collection of $\theta_i$'s satisfying our conditions is an \emph{atlas} etc.
+
+\begin{defi}[Riemannian metric on abstract surface]
+ A \emph{Riemannian metric} on an abstract surface is given by Riemannian metrics on each $V_i = \theta_i(\mathcal{U}_i)$ subject to the compatibility condition that for all $i, j$, the transition map $\varphi = \phi_{ij}$ is an isometry, i.e.\ for every $P$ in its domain and all $\mathbf{a}, \mathbf{b} \in \R^2$,
+ \[
+ \bra \d \varphi_P(\mathbf{a}), \d \varphi_P(\mathbf{b})\ket_{\varphi(P)} = \bra \mathbf{a}, \mathbf{b}\ket_P.
+ \]
+ Note that on the left, we are computing the Riemannian metric on $V_j$, while on the right, we are computing it on $V_i$.
+\end{defi}
+
+Then we can define lengths, areas, energies on an abstract surface $S$.
+
+It is clear that every embedded surface is an abstract surface, by forgetting that it is embedded in $\R^3$.
+\begin{eg}
+ The three classical geometries are all abstract surfaces.
+ \begin{enumerate}
+ \item The Euclidean space $\R^2$ with $\d x^2 + \d y^2$ is an abstract surface.
+ \item The sphere $S^2 \subseteq \R^3$, being an embedded surface, is an abstract surface with metric
+ \[
+ \frac{4(\d x^2 + \d y^2)}{(1 + x^2 + y^2)^2}.
+ \]
+ \item The hyperbolic disc $D \subseteq \R^2$ is an abstract surface with metric
+ \[
+ \frac{4(\d x^2 + \d y^2)}{(1 - x^2 - y^2)^2},
+ \]
+ and this is isometric to the upper half plane $H$ with metric
+ \[
+ \frac{\d x^2 + \d y^2}{y^2}.
+ \]
+ \end{enumerate}
+\end{eg}
+Note that in the first and last example, it was sufficient to use just one chart to cover every point of the surface, but not for the sphere. Also, in the case of the hyperbolic plane, we can have many different charts, and they are compatible.
+
+Finally, we notice that we really need the notion of abstract surface for the hyperbolic plane, since it cannot be realized as an embedded surface in $\R^3$. The proof is not obvious at all, and is a theorem of Hilbert.
+
+One important thing we can do is to study the curvature of surfaces. Given a $P \in S$, the Riemannian metric (on a chart) around $P$ determines a ``reparametrization'' by geodesic polar coordinates $(\rho, \theta)$, just as for embedded surfaces. Then the metric takes the form
+\[
+ \d \rho^2 + G(\rho, \theta) \;\d \theta^2.
+\]
+We then define the curvature as
+\[
+ K = \frac{-(\sqrt{G})_{\rho\rho}}{\sqrt{G}}.
+\]
+Note that for embedded surfaces, we obtained this formula as a theorem. For abstract surfaces, we take this as a \emph{definition}.
+
+We can check how this works in some familiar examples.
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item In $\R^2$, we use the usual polar coordinates $(\rho, \theta)$, and the metric becomes
+ \[
+ \d \rho^2 + \rho^2 \;\d \theta^2,
+ \]
+ where $x = \rho \cos \theta$ and $y = \rho \sin \theta$. So the curvature is
+ \[
+ \frac{-(\sqrt{G})_{\rho\rho}}{\sqrt{G}} = \frac{-(\rho)_{\rho\rho}}{\rho} = 0.
+ \]
+ So the Euclidean space has zero curvature.
+ \item For the sphere $S$, we use the spherical coordinates, fixing the radius to be $1$. So we specify each point by
+ \[
+ \sigma(\rho, \theta) = (\sin \rho \cos \theta, \sin \rho \sin \theta, \cos \rho).
+ \]
+ Note that $\rho$ is not really the radius in spherical coordinates, but just one of the angle coordinates. We then have the metric
+ \[
+ \d \rho^2 + \sin^2 \rho \;\d \theta^2.
+ \]
+ Then we get
+ \[
+ \sqrt{G} = \sin \rho,
+ \]
+ and $K = 1$.
+ \item For the hyperbolic plane, we use the disk model $D$, and we first express our original metric in polar coordinates of the Euclidean plane to get
+ \[
+ \left(\frac{2}{1 - r^2}\right)^2 (\d r^2 + r^2 \;\d \theta^2).
+ \]
+ This is not geodesic polar coordinates, since $r$ is given by the Euclidean distance, not hyperbolic distance. We will need to put
+ \[
+ \rho = 2\tanh^{-1} r,\quad \d \rho = \frac{2}{1 - r^2}\;\d r.
+ \]
+ Then we have
+ \[
+ r = \tanh \frac{\rho}{2},
+ \]
+ which gives
+ \[
+ \frac{4r^2}{(1 - r^2)^2} = \sinh^2 \rho.
+ \]
+ So we finally get
+ \[
+ \sqrt{G} = \sinh \rho,
+ \]
+ with
+ \[
+ K = -1.
+ \]
+ \end{enumerate}
+\end{eg}
+We see that the three classic geometries are characterized by having constant $0$, $1$ and $-1$ curvatures.
+
+We are almost able to state the Gauss-Bonnet theorem. Before that, we need the notion of triangulations. We notice that our old definition makes sense for (compact) abstract surfaces $S$. So we just use the same definition. We then define the \emph{Euler number} of an abstract surface as
+\[
+ e(S) = F - E + V,
+\]
+as before. Assuming that the Euler number is independent of triangulations, we know that this is invariant under homeomorphisms.
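+
+For example, triangulating the sphere $S^2$ as the (inflated) boundary of a tetrahedron gives $F = 4$, $E = 6$ and $V = 4$, so
+\[
+ e(S^2) = F - E + V = 4 - 6 + 4 = 2.
+\]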
+
+\begin{thm}[Gauss-Bonnet theorem]
+ If the sides of a triangle $ABC \subseteq S$ are geodesic segments, then
+ \[
+ \int_{ABC} K \;\d A = (\alpha + \beta + \gamma) - \pi,
+ \]
+ where $\alpha, \beta, \gamma$ are the angles of the triangle, and $\d A$ is the ``area element'' given by
+ \[
+ \d A = \sqrt{EG - F^2} \;\d u\;\d v,
+ \]
+ on each domain $\mathcal{U} \subseteq S$ of a chart, with $E, F, G$ as in the respective first fundamental form.
+
+ Moreover, if $S$ is a compact surface, then
+ \[
+ \int_S K\;\d A = 2\pi e(S).
+ \]
+\end{thm}
+We will not prove this theorem, but we will make some remarks. Note that we can deduce the second part from the first part. The basic idea is to take a triangulation of $S$, and then use things like each edge belongs to two triangles and each triangle has three edges.
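+
+In slightly more detail, assuming $S$ has a triangulation by geodesic triangles in which the angles around each vertex sum to $2\pi$, summing the first formula over all $F$ triangles gives
+\[
+ \int_S K \;\d A = \sum_{\text{triangles}} \left((\alpha + \beta + \gamma) - \pi\right) = 2\pi V - \pi F.
+\]
+Since each triangle has three edges and each edge lies in exactly two triangles, we have $3F = 2E$, so $\pi F = 2\pi E - 2\pi F$, and hence
+\[
+ \int_S K \;\d A = 2\pi (V - E + F) = 2\pi e(S).
+\]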
+
+This is a genuine generalization of what we previously had for the sphere and hyperbolic plane, as one can easily see.
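+
+For instance, on the unit sphere we have $K = 1$ and total area $4\pi$, so $\int_{S^2} K \;\d A = 4\pi = 2\pi \cdot 2 = 2\pi e(S^2)$, while the first part of the theorem recovers the fact that a spherical triangle with angles $\alpha, \beta, \gamma$ has area $(\alpha + \beta + \gamma) - \pi$.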
+
+Using the Gauss-Bonnet theorem, we can define the curvature $K(P)$ for a point $P \in S$ alternatively by considering triangles containing $P$, and then taking the limit
+\[
+ \lim_{\text{area} \to 0} \frac{(\alpha + \beta + \gamma) - \pi}{\text{area}} = K(P).
+\]
+Finally, we note how this relates to the problem of the parallel postulate we have mentioned previously. The parallel postulate, in some form, states that given a line and a point not on it, there is a unique line through the point and parallel to the line. This holds in Euclidean geometry, but not in hyperbolic or spherical geometry.
+
+It is a fact that this is equivalent to the axiom that the angles of a triangle sum to $\pi$. Thus, the Gauss-Bonnet theorem tells us the parallel postulate is captured by the fact that the curvature of the Euclidean plane is zero everywhere.
+\end{document}
diff --git a/books/cam/IB_L/groups_rings_and_modules.tex b/books/cam/IB_L/groups_rings_and_modules.tex
new file mode 100644
index 0000000000000000000000000000000000000000..335c6ccf0d221057224626fb3ce0afa23490edcb
--- /dev/null
+++ b/books/cam/IB_L/groups_rings_and_modules.tex
@@ -0,0 +1,4591 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Lent}
+\def\nyear {2016}
+\def\nlecturer {O.\ Randal-Williams}
+\def\ncourse {Groups, Rings and Modules}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Groups}\\
+Basic concepts of group theory recalled from Part IA Groups. Normal subgroups, quotient groups and isomorphism theorems. Permutation groups. Groups acting on sets, permutation representations. Conjugacy classes, centralizers and normalizers. The centre of a group. Elementary properties of finite $p$-groups. Examples of finite linear groups and groups arising from geometry. Simplicity of $A_n$.
+
+\vspace{5pt}
+\noindent Sylow subgroups and Sylow theorems. Applications, groups of small order.\hspace*{\fill} [8]
+
+\vspace{10pt}
+\noindent\textbf{Rings}\\
+Definition and examples of rings (commutative, with 1). Ideals, homomorphisms, quotient rings, isomorphism theorems. Prime and maximal ideals. Fields. The characteristic of a field. Field of fractions of an integral domain.
+
+\vspace{5pt}
+\noindent Factorization in rings; units, primes and irreducibles. Unique factorization in principal ideal domains, and in polynomial rings. Gauss' Lemma and Eisenstein's irreducibility criterion.
+
+\vspace{5pt}
+\noindent Rings $\Z[\alpha]$ of algebraic integers as subsets of $\C$ and quotients of $\Z[x]$. Examples of Euclidean domains and uniqueness and non-uniqueness of factorization. Factorization in the ring of Gaussian integers; representation of integers as sums of two squares.
+
+\vspace{5pt}
+\noindent Ideals in polynomial rings. Hilbert basis theorem.\hspace*{\fill} [10]
+
+\vspace{10pt}
+\noindent\textbf{Modules}\\
+Definitions, examples of vector spaces, abelian groups and vector spaces with an endomorphism. Sub-modules, homomorphisms, quotient modules and direct sums. Equivalence of matrices, canonical form. Structure of finitely generated modules over Euclidean domains, applications to abelian groups and Jordan normal form.\hspace*{\fill} [6]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+The course is naturally divided into three sections --- Groups, Rings, and Modules.
+
+In IA Groups, we learnt about some basic properties of groups, and studied several interesting groups in depth. In the first part of this course, we will further develop some general theory of groups. In particular, we will prove two more isomorphism theorems of groups. While we will not find these theorems particularly useful in this course, we will be able to formulate analogous theorems for other algebraic structures such as rings and modules, as we will later find in the course.
+
+In the next part of the course, we will study rings. These are things that behave somewhat like $\Z$, where we can add, subtract, multiply but not (necessarily) divide. While $\Z$ has many nice properties, these are not necessarily available in arbitrary rings. Hence we will classify rings into different types, depending on how many properties of $\Z$ they inherit. We can then try to reconstruct certain IA Numbers and Sets results in these rings, such as unique factorization of numbers into primes and B\'ezout's theorem.
+
+Finally, we move on to modules. The definition of a module is very similar to that of a vector space, except that instead of allowing scalar multiplication by elements of a field, we have scalar multiplication by elements of a ring. It turns out modules are completely unlike vector spaces, and can have much more complicated structures. Perhaps because of this richness, many things turn out to be modules. Using module theory, we will be able to prove certain important theorems such as the classification of finite abelian groups and the Jordan normal form theorem.
+
+\section{Groups}
+\subsection{Basic concepts}
+We will begin by quickly recapping some definitions and results from IA Groups.
+
+\begin{defi}[Group]\index{group}
+ A \emph{group} is a triple $(G, \ph, e)$, where $G$ is a set, $\ph: G\times G \to G$ is a function and $e \in G$ is an element such that
+ \begin{enumerate}
+ \item For all $a, b, c \in G$, we have $(a \cdot b) \cdot c = a \cdot (b \cdot c)$.\hfill (associativity)
+ \item For all $a \in G$, we have $a \cdot e = e \cdot a = a$.\hfill (identity)\index{identity}
+ \item For all $a \in G$, there exists $a^{-1} \in G$ such that $a \cdot a^{-1} = a^{-1} \cdot a = e$.\hfill (inverse)\index{inverse}
+ \end{enumerate}
+\end{defi}
+Some people add a stupid axiom that says $g \cdot h \in G$ for all $g, h \in G$, but this is already implied by saying $\cdot$ is a function to $G$. You can write that down as well, and no one will say you are stupid. But they might secretly \emph{think} so.
+
+\begin{lemma}
+ The inverse of an element is unique.
+\end{lemma}
+
+\begin{proof}
+ Let $a^{-1}, b$ be inverses of $a$. Then
+ \[
+ b = b \cdot e = b \cdot a \cdot a^{-1} = e \cdot a^{-1} = a^{-1}. \qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Subgroup]
+ If $(G, \ph, e)$ is a group and $H \subseteq G$ is a subset, it is a \emph{subgroup} if
+ \begin{enumerate}
+ \item $e \in H$,
+ \item $a, b \in H$ implies $a\cdot b\in H$,
+ \item $\ph: H \times H \to H$ makes $(H, \ph, e)$ a group.
+ \end{enumerate}
+ We write $H \leq G$ if $H$ is a subgroup of $G$.
+\end{defi}
+Note that the last condition in some sense encompasses the first two, but we need the first two conditions to hold before the last statement makes sense at all.
+
+\begin{lemma}
+ $H \subseteq G$ is a subgroup if $H$ is non-empty and for any $h_1, h_2 \in H$, we have $h_1h_2^{-1} \in H$.
+\end{lemma}
+
+\begin{defi}[Abelian group]\index{abelian group}
+ A group $G$ is \emph{abelian} if $a\cdot b = b\cdot a$ for all $a, b\in G$.
+\end{defi}
+
+\begin{eg}
+ We have the following familiar examples of groups:
+ \begin{enumerate}
+ \item $(\Z, +, 0)$, $(\Q, +, 0)$, $(\R, +, 0)$, $(\C, +, 0)$.
+ \item We also have groups of symmetries:
+ \begin{enumerate}
+ \item The symmetric group $S_n$ is the collection of all permutations of $\{1, 2, \cdots, n\}$.
+ \item The dihedral group $D_{2n}$ is the symmetries of a regular $n$-gon.
+ \item The group $\GL_n(\R)$ is the group of invertible $n\times n$ real matrices, which also is the group of invertible $\R$-linear maps from the vector space $\R^n$ to itself.
+ \end{enumerate}
+ \item The alternating group $A_n \leq S_n$.
+ \item The cyclic group $C_n \leq D_{2n}$.
+ \item The special linear group $\SL_n(\R) \leq \GL_n(\R)$, the subgroup of matrices of determinant $1$.
+ \item The Klein-four group $C_2 \times C_2$.
+ \item The quaternions $Q_8 = \{\pm 1, \pm i, \pm j, \pm k\}$ with $ij = k, ji = -k$, $i^2 = j^2 = k^2 = -1$, $(-1)^2 = 1$.
+ \end{enumerate}
+\end{eg}
+
+With groups and subgroups, we can talk about cosets.
+\begin{defi}[Coset]\index{coset}
+ If $H \leq G$, $g \in G$, the \emph{left coset} $gH$ is the set
+ \[
+ gH = \{x \in G: x = g\cdot h\text{ for some }h \in H\}.
+ \]
+\end{defi}
+For example, since $H$ is a subgroup, we know $e \in H$. So for any $g \in G$, we must have $g \in gH$.
+
+The collection of $H$-cosets in $G$ forms a partition of $G$, and furthermore, all $H$-cosets $gH$ are in bijection with $H$ itself, via $h \mapsto gh$. An immediate consequence is
+\begin{thm}[Lagrange's theorem]\index{Lagrange's theorem}
+ Let $G$ be a finite group, and $H \leq G$. Then
+ \[
+ |G| = |H| |G:H|,
+ \]
+ where $|G:H|$ is the number of $H$-cosets in $G$.
+\end{thm}
+We can do exactly the same thing with right cosets and get the same conclusion.
+
+We have implicitly used the following notation:
+\begin{defi}[Order of group]\index{order}
+ The \emph{order} of a group is the number of elements in $G$, written $|G|$.
+\end{defi}
+
+Instead of order of the group, we can ask what the order of an element is.
+\begin{defi}[Order of element]\index{order}
+ The \emph{order} of an element $g \in G$ is the smallest positive $n$ such that $g^n = e$. If there is no such $n$, we say $g$ has infinite order.
+
+ We write $\ord(g) = n$.
+\end{defi}
+
+A basic lemma is as follows:
+\begin{lemma}
+ If $G$ is a finite group and $g \in G$ has order $n$, then $n \mid |G|$.
+\end{lemma}
+
+\begin{proof}
+ Consider the following subset:
+ \[
+ H= \{e, g, g^2, \cdots, g^{n - 1}\}.
+ \]
+ This is a subgroup of $G$, because it is non-empty and $g^rg^{-s} = g^{r - s}$ is on the list (we might have to add $n$ to the power of $g$ to make it positive, but this is fine since $g^n = e$). Moreover, there are no repeats in the list: if $g^i = g^j$, with wlog $i \geq j$, then $g^{i - j} = e$ with $0 \leq i - j < n$. By the minimality of $n$, we must have $i - j = 0$, i.e.\ $i = j$.
+
+ Hence Lagrange's theorem tells us $n = |H| \mid |G|$.
+\end{proof}
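+
+\begin{eg}
+ In $S_3$, the cycle $(1\; 2\; 3)$ has order $3$ and the transposition $(1\; 2)$ has order $2$, both of which divide $|S_3| = 6$. In particular, $S_3$ cannot contain an element of order $4$ or $5$.
+\end{eg}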
+
+\subsection{Normal subgroups, quotients, homomorphisms, isomorphisms}
+We all (hopefully) recall what the definition of a normal subgroup is. However, instead of just stating the definition and proving things about it, we can try to motivate the definition, and see how one could naturally come up with it.
+
+Let $H \leq G$ be a subgroup. The objective is to try to make the collection of cosets
+\[
+ G/H = \{gH: g \in G\}
+\]
+into a group.
+
+Before we do that, we quickly come up with a criterion for when two cosets $gH$ and $g'H$ are equal. Notice that if $gH = g'H$, then $g \in g'H$. So $g = g'\cdot h$ for some $h$. In other words, $(g')^{-1} \cdot g = h \in H$. So if two elements represent the same coset, their ``ratio'' $(g')^{-1}g$ lies in $H$. The argument is also reversible. Hence two elements $g, g'$ represent the same $H$-coset if and only if $(g')^{-1} g \in H$.
+
+Suppose we try to make the set $G/H = \{gH: g \in G\}$ into a group, by the obvious formula
+\[
+ (g_1 H) \cdot (g_2 H) = g_1 g_2 H.
+\]
+However, this doesn't necessarily make sense. If we take a different representative for the same coset, we want to make sure it gives the same answer.
+
+If $g_2 H = g_2' H$, then we know $g_2' = g_2 \cdot h$ for some $h \in H$. So
+\[
+ (g_1 H) \cdot (g_2' H) = g_1 g_2' H = g_1 g_2 h H = g_1g_2 H = (g_1 H) \cdot (g_2 H).
+\]
+So all is good.
+
+What if we change $g_1$? If $g_1H = g_1' H$, then $g_1' = g_1 \cdot h$ for some $h \in H$. So
+\[
+ (g_1' H) \cdot (g_2 H) = g_1' g_2 H = g_1 h g_2 H.
+\]
+Now we are stuck. We would really want the equality
+\[
+ g_1 h g_2 H = g_1 g_2 H
+\]
+to hold. This requires
+\[
+ (g_1g_2)^{-1} g_1 h g_2 \in H.
+\]
+This is equivalent to
+\[
+ g_2^{-1} h g_2 \in H.
+\]
+So for $G/H$ to actually be a group under this operation, we must have, for any $h \in H$ and $g \in G$, the property $g^{-1} h g \in H$ to hold.
+
+This is not necessarily true for an arbitrary $H$. Those nice ones that satisfy this property are known as \emph{normal subgroups}.
+\begin{defi}[Normal subgroup]\index{normal subgroup}
+ A subgroup $H \leq G$ is \emph{normal} if for any $h \in H$ and $g \in G$, we have $g^{-1}h g \in H$. We write $H \lhd G$.
+\end{defi}
+
+This allows us to make the following definition:
+\begin{defi}[Quotient group]\index{quotient group}
+ If $H \lhd G$ is a normal subgroup, then the set $G/H$ of left $H$-cosets forms a group with multiplication
+ \[
+ (g_1 H) \cdot (g_2 H) = g_1 g_2 H.
+ \]
+ with identity $eH = H$. This is known as the \emph{quotient group}.
+\end{defi}
+This is indeed a group. Normality was defined such that this is well-defined. Multiplication is associative since multiplication in $G$ is associative. The inverse of $gH$ is $g^{-1}H$, and $eH$ is easily seen to be the identity.
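+
+\begin{eg}
+ Take $G = (\Z, +, 0)$ and $H = n\Z$ for some $n \geq 1$. Since $\Z$ is abelian, $n\Z$ is automatically normal, and the cosets are $n\Z, 1 + n\Z, \cdots, (n - 1) + n\Z$. The quotient $\Z/n\Z$ is the group of integers under addition modulo $n$, i.e.\ the cyclic group $C_n$.
+\end{eg}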
+
+So far, we've just been looking at groups themselves. We would also like to know how groups interact with each other. In other words, we want to study functions between groups. However, we don't allow \emph{arbitrary} functions, since groups have some structure, and we would like the functions to respect the group structures. These nice functions are known as \emph{homomorphisms}.
+
+\begin{defi}[Homomorphism]\index{homomorphism}
+ If $(G, \ph, e_G)$ and $(H, *, e_H)$ are groups, a function $\phi: G\to H$ is a \emph{homomorphism} if $\phi(e_G) = e_H$, and for $g, g' \in G$, we have
+ \[
+ \phi(g \cdot g') = \phi(g) * \phi(g').
+ \]
+\end{defi}
+If we think carefully, $\phi(e_G) = e_H$ can be derived from the second condition, but it doesn't hurt to put it in as well.
+
+\begin{lemma}
+ If $\phi: G \to H$ is a homomorphism, then
+ \[
+ \phi(g^{-1}) = \phi(g)^{-1}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We compute $\phi(g\cdot g^{-1})$ in two ways. On the one hand, we have
+ \[
+ \phi(g\cdot g^{-1}) = \phi(e) = e.
+ \]
+ On the other hand, we have
+ \[
+ \phi(g\cdot g^{-1}) = \phi(g) * \phi(g^{-1}).
+ \]
+ By the uniqueness of inverse, we must have
+ \[
+ \phi(g^{-1}) = \phi(g)^{-1}. \qedhere
+ \]
+\end{proof}
+
+Given any homomorphism, we can build two groups out of it:
+\begin{defi}[Kernel]\index{kernel}
+ The \emph{kernel} of a homomorphism $\phi: G \to H$ is
+ \[
+ \ker(\phi) = \{g \in G: \phi(g) = e\}.
+ \]
+\end{defi}
+
+\begin{defi}[Image]\index{image}
+ The \emph{image} of a homomorphism $\phi: G \to H$ is
+ \[
+ \im (\phi) = \{h\in H: h = \phi(g) \text{ for some }g \in G\}.
+ \]
+\end{defi}
+
+\begin{lemma}
+ For a homomorphism $\phi: G\to H$, the kernel $\ker (\phi)$ is a \emph{normal subgroup}, and the image $\im (\phi)$ is a subgroup of $H$.
+\end{lemma}
+
+\begin{proof}
+ There is only one possible way we can prove this.
+
+ To see $\ker(\phi)$ is a subgroup, let $g, h \in \ker \phi$. Then
+ \[
+ \phi(g\cdot h^{-1}) = \phi(g) * \phi(h)^{-1} = e * e^{-1} = e.
+ \]
+ So $gh^{-1} \in \ker \phi$. Also, $\phi(e) = e$. So $\ker(\phi)$ is non-empty. So it is a subgroup.
+
+ To show it is normal, let $g \in \ker(\phi)$. Let $x \in G$. We want to show $x^{-1}gx \in \ker(\phi)$. We have
+ \[
+ \phi(x^{-1} gx) = \phi(x^{-1}) * \phi(g) * \phi(x) = \phi(x^{-1}) * \phi(x) = \phi(x^{-1}x) = \phi(e) = e.
+ \]
+ So $x^{-1}gx \in \ker(\phi)$. So $\ker(\phi)$ is normal.
+
+ Also, if $\phi(g), \phi(h) \in \im (\phi)$, then
+ \[
+ \phi(g) * \phi(h)^{-1} = \phi(gh^{-1}) \in \im (\phi).
+ \]
+ Also, $e \in \im(\phi)$. So $\im(\phi)$ is non-empty. So $\im(\phi)$ is a subgroup.
+\end{proof}
+
+\begin{defi}[Isomorphism]\index{isomorphism}
+ An \emph{isomorphism} is a homomorphism that is also a bijection.
+\end{defi}
+
+\begin{defi}[Isomorphic group]\index{isomorphic}
+ Two groups $G$ and $H$ are \emph{isomorphic} if there is an isomorphism between them. We write $G \cong H$.
+\end{defi}
+Usually, we identify two isomorphic groups as being ``the same'', and do not distinguish isomorphic groups.
+
+It is an exercise to show the following:
+\begin{lemma}
+ If $\phi$ is an isomorphism, then the inverse $\phi^{-1}$ is also an isomorphism.
+\end{lemma}
+
+When studying groups, it is often helpful to break the group apart into smaller groups, which are hopefully easier to study. We will have three isomorphism theorems to do so. These isomorphism theorems tell us what happens when we take quotients of different things. Then if a miracle happens, we can patch what we know about the quotients together to get information about the big group. Even if miracles do not happen, these are useful tools to have.
+
+The first isomorphism theorem relates the kernel to the image.
+\begin{thm}[First isomorphism theorem]\index{first isomorphism theorem}
+ Let $\phi: G \to H$ be a homomorphism. Then $\ker(\phi) \lhd G$ and
+ \[
+ \frac{G}{\ker (\phi)} \cong \im (\phi).
+ \]
+\end{thm}
+
+\begin{proof}
+ We have already proved that $\ker (\phi)$ is a normal subgroup. We now have to construct a homomorphism $f : G/\ker(\phi) \to \im (\phi)$, and prove it is an isomorphism.
+
+ Define our function as follows:
+ \begin{align*}
+ f: \frac{G}{\ker (\phi)} &\to \im (\phi)\\
+ g\ker (\phi) &\mapsto \phi(g).
+ \end{align*}
+ We first tackle the obvious problem that this might not be well-defined, since we are picking a representative for the coset. If $g \ker(\phi) = g' \ker(\phi)$, then we know $g^{-1} \cdot g' \in \ker(\phi)$. So $\phi(g^{-1} \cdot g') = e$. So we know
+ \[
+ e = \phi(g^{-1} \cdot g') = \phi(g)^{-1} * \phi(g').
+ \]
+ Multiplying the whole thing by $\phi(g)$ gives $\phi(g) = \phi(g')$. Hence this function is well-defined.
+
+ Next we show it is a homomorphism. To see $f$ is a homomorphism, we have
+ \begin{align*}
+ f(g\ker (\phi) \cdot g'\ker(\phi)) &= f(gg'\ker(\phi)) \\
+ &= \phi(gg') \\
+ &= \phi(g) * \phi(g') \\
+ &= f(g\ker(\phi)) * f(g'\ker(\phi)).
+ \end{align*}
+ So $f$ is a homomorphism. Finally, we show it is a bijection.
+
+ To show it is surjective, let $h \in \im (\phi)$. Then $h = \phi(g)$ for some $g$. So $h = f(g\ker (\phi))$ is in the image of $f$.
+
+ To show injectivity, suppose $f(g\ker (\phi)) = f(g'\ker(\phi))$. So $\phi(g) = \phi(g')$. So $\phi(g^{-1} \cdot g') = e$. Hence $g^{-1} \cdot g' \in \ker(\phi)$, and hence $g \ker(\phi) = g'\ker(\phi)$. So done.
+\end{proof}
+
+Before we move on to further isomorphism theorems, we see how we can use these to identify two groups which are not obviously the same.
+
+\begin{eg}
+ Consider a homomorphism $\phi: \C \to \C\setminus \{0\}$ given by $z \mapsto e^z$. We also know that
+ \[
+ e^{z + w} = e^z e^w.
+ \]
+ This means $\phi$ is a homomorphism if we think of it as $\phi: (\C, +) \to (\C\setminus \{0\}, \times)$.
+
+ What is the image of this homomorphism? The existence of $\log$ shows that $\phi$ is surjective. So $\im \phi = \C\setminus \{0\}$. What about the kernel? It is given by
+ \[
+ \ker(\phi) = \{z \in \C: e^z = 1\} = 2\pi i \Z,
+ \]
+ i.e.\ the set of all integer multiples of $2\pi i$. The conclusion is that
+ \[
+ (\C / (2\pi i \Z), +) \cong (\C\setminus \{0\}, \times).
+ \]
+\end{eg}
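+
+\begin{eg}
+ Similarly, the determinant gives a homomorphism $\det: \GL_n(\R) \to (\R\setminus\{0\}, \times)$, since $\det(AB) = \det(A)\det(B)$. It is surjective (consider diagonal matrices), and its kernel is by definition $\SL_n(\R)$. So the first isomorphism theorem gives
+ \[
+ \GL_n(\R)/\SL_n(\R) \cong \R\setminus\{0\}.
+ \]
+\end{eg}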
+
+The second isomorphism theorem is a slightly more complicated theorem.
+\begin{thm}[Second isomorphism theorem]\index{second isomorphism theorem}
+ Let $H \leq G$ and $K \lhd G$. Then $HK = \{h\cdot k: h \in H, k \in K\}$ is a subgroup of $G$, and $H\cap K \lhd H$. Moreover,
+ \[
+ \frac{HK}{K} \cong \frac{H}{H \cap K}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Let $hk, h'k' \in HK$. Then
+ \[
+ h'k'(hk)^{-1} = h'k' k^{-1}h^{-1} = (h'h^{-1})(hk'k^{-1}h^{-1}).
+ \]
+ The first term is in $H$, while the second term is $k'k^{-1} \in K$ conjugated by $h$, which also has to be in $K$ by normality. So this is something in $H$ times something in $K$, and hence in $HK$. $HK$ also contains $e$, and is hence a subgroup.
+
+ To show $H \cap K \lhd H$, consider $x \in H\cap K$ and $h \in H$. Consider $h^{-1} x h$. Since $x \in K$, the normality of $K$ implies $h^{-1}xh \in K$. Also, since $x, h \in H$, closure implies $h^{-1}xh \in H$. So $h^{-1} x h \in H \cap K$. So $H \cap K \lhd H$.
+
+ Now we can start to prove the second isomorphism theorem. To do so, we apply the first isomorphism theorem to it. Define
+ \begin{align*}
+ \phi: H &\to G/K\\
+ h &\mapsto hK.
+ \end{align*}
+ This is easily seen to be a homomorphism. We apply the first isomorphism theorem to this homomorphism. The image is all $K$-cosets represented by something in $H$, i.e.
+ \[
+ \im (\phi) = \frac{HK}{K}.
+ \]
+ Then the kernel of $\phi$ is
+ \[
+ \ker(\phi) = \{h \in H : hK = eK\} = \{h \in H : h \in K\} = H \cap K.
+ \]
+ So the first isomorphism theorem says
+ \[
+ \frac{H}{H \cap K} \cong \frac{HK}{K}.\qedhere
+ \]
+\end{proof}
+Notice we did more work than we really had to. We could have started by writing down $\phi$ and checked it is a homomorphism. Then since $H \cap K$ is its kernel, it has to be a normal subgroup.
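+
+\begin{eg}
+ Take $G = \Z$ (written additively, so $HK$ becomes $H + K$), with $H = 4\Z$ and $K = 6\Z$. Then $H + K = 4\Z + 6\Z = 2\Z$ and $H \cap K = 12\Z$. The second isomorphism theorem then says
+ \[
+ \frac{2\Z}{6\Z} \cong \frac{4\Z}{12\Z},
+ \]
+ and indeed both sides are cyclic of order $3$.
+\end{eg}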
+
+Before we move on to the third isomorphism theorem, we notice that if $K \lhd G$, then there is a bijection between subgroups of $G/K$ and subgroups of $G$ containing $K$, given by
+\begin{align*}
+ \{\text{subgroups of }G/K\} &\longleftrightarrow\{\text{subgroups of }G\text{ which contain }K\}\\
+ X \leq \frac{G}{K} &\longrightarrow \{g \in G: gK \in X\}\\
+ \frac{L}{K} \leq \frac{G}{K} &\longleftarrow K \lhd L \leq G.
+\end{align*}
+This specializes to the bijection of normal subgroups:
+\[
+ \{\text{normal subgroups of }G/K\} \longleftrightarrow\{\text{normal subgroups of }G\text{ which contain }K\}
+\]
+using the same bijection.
+
+It is an elementary exercise to show that these are inverses of each other. This correspondence will be useful in later times.
+
+\begin{thm}[Third isomorphism theorem]\index{third isomorphism theorem}
+ Let $K \leq L \leq G$ be normal subgroups of $G$. Then
+ \[
+ \frac{G}{K}\big/ \frac{L}{K} \cong \frac{G}{L}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Define the homomorphism
+ \begin{align*}
+ \phi: G/K &\to G/L\\
+ gK &\mapsto gL
+ \end{align*}
+ As always, we have to check this is well-defined. If $gK = g'K$, then $g^{-1}g' \in K \subseteq L$. So $gL = g'L$. This is also a homomorphism since
+ \[
+ \phi(gK \cdot g'K) = \phi(gg'K) = gg'L = (gL) \cdot (g'L) = \phi(gK) \cdot \phi(g'K).
+ \]
+ This clearly is surjective, since any coset $gL$ is the image $\phi(gK)$. So the image is $G/L$. The kernel is then
+ \[
+ \ker(\phi) = \{gK: gL = L\} = \{gK: g \in L\} = \frac{L}{K}.
+ \]
+ So the conclusion follows by the first isomorphism theorem.
+\end{proof}
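+
+\begin{eg}
+ Take $G = \Z$, $L = 4\Z$ and $K = 12\Z$. Then the third isomorphism theorem says
+ \[
+ \frac{\Z/12\Z}{4\Z/12\Z} \cong \frac{\Z}{4\Z},
+ \]
+ i.e.\ quotienting $\Z/12\Z$ by the image of $4\Z$ gives a cyclic group of order $4$.
+\end{eg}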
+
+The general idea of these theorems is to take a group, find a normal subgroup, and then quotient it out. Then hopefully the normal subgroup and the quotient group will be simpler. However, this doesn't always work.
+
+\begin{defi}[Simple group]\index{simple group}
+ A (non-trivial) group $G$ is \emph{simple} if it has no normal subgroups except $\{e\}$ and $G$.
+\end{defi}
+
+In general, simple groups are complicated. However, if we only look at abelian groups, then life is simpler. Note that by commutativity, the normality condition is always trivially satisfied. So \emph{any} subgroup is normal. Hence an abelian group can be simple only if it has no non-trivial subgroups at all.
+
+\begin{lemma}
+ An abelian group is simple if and only if it is isomorphic to the cyclic group $C_p$ for some prime number $p$.
+\end{lemma}
+
+\begin{proof}
+ By Lagrange's theorem, any subgroup of $C_p$ has order dividing $|C_p| = p$. Hence if $p$ is prime, then it has no such divisors, and any subgroup must have order $1$ or $p$, i.e.\ it is either $\{e\}$ or $C_p$ itself. Hence in particular any normal subgroup must be $\{e\}$ or $C_p$. So it is simple.
+
+ Now suppose $G$ is abelian and simple. Let $e \not= g \in G$ be a non-trivial element, and consider $H = \{\cdots, g^{-2}, g^{-1}, e, g, g^2, \cdots\}$. Since $G$ is abelian, conjugation does nothing, and every subgroup is normal. So $H$ is a normal subgroup. As $G$ is simple, $H = \{e\}$ or $H = G$. Since it contains $g \not= e$, it is non-trivial. So we must have $H = G$. So $G$ is cyclic.
+
+ If $G$ is infinite cyclic, then it is isomorphic to $\Z$. But $\Z$ is not simple, since $2\Z \lhd \Z$. So $G$ is a finite cyclic group, i.e.\ $G \cong C_m$ for some finite $m$.
+
+ If $n \mid m$, then $g^{m/n}$ generates a subgroup of $G$ of order $n$. So this is a normal subgroup. Therefore $n$ must be $m$ or $1$. Hence $G$ cannot be simple unless $m$ has no divisors except $1$ and $m$, i.e.\ $m$ is a prime.
+\end{proof}
+
+One reason why simple groups are important is the following:
+\begin{thm}
+ Let $G$ be any finite group. Then there are subgroups
+ \[
+ G = H_1 \rhd H_2 \rhd H_3 \rhd H_4 \rhd \cdots \rhd H_n = \{e\}
+ \]
+ such that $H_i/H_{i + 1}$ is simple.
+\end{thm}
+Note that here we only claim that $H_{i + 1}$ is normal in $H_i$. This does not say that, say, $H_3$ is a normal subgroup of $H_1$.
+
+\begin{proof}
+ If $G$ is simple, let $H_2 = \{e\}$. Then we are done.
+
+ If $G$ is not simple, let $H_2$ be a maximal proper normal subgroup of $G$. We now claim that $G/H_2$ is simple.
+
+ If $G/H_2$ is not simple, it contains a proper non-trivial normal subgroup $L \lhd G/H_2$ such that $L \not= \{e\}, G/H_2$. However, there is a correspondence between normal subgroups of $G/H_2$ and normal subgroups of $G$ containing $H_2$. So $L$ must be $K/H_2$ for some $K \lhd G$ such that $K \geq H_2$. Moreover, since $L$ is non-trivial and not $G/H_2$, we know $K$ is not $G$ or $H_2$. So $K$ is a larger normal subgroup. Contradiction.
+
+ So we have found an $H_2 \lhd G$ such that $G/H_2$ is simple. Iterating this process on $H_2$ gives the desired result. Note that this process eventually stops, as $H_{i + 1} < H_i$, and hence $|H_{i + 1}| < |H_i|$, and all these numbers are finite.
+\end{proof}
+
+\subsection{Actions of permutations}
+When we first motivated groups, we wanted to use them to represent some collection of ``symmetries''. Roughly, a symmetry of a set $X$ is a permutation of $X$, i.e.\ a bijection $X \to X$ that leaves some nice properties unchanged. For example, a symmetry of a square is a permutation of the vertices that leaves the overall shape of the square unchanged.
+
+Instead of just picking some nice permutations, we can consider the group of \emph{all} permutations. We call this the \emph{symmetric group}.
+
+\begin{defi}[Symmetric group]\index{symmetric group}\index{$S_n$}
+ The \emph{symmetric group} $S_n$ is the group of all permutations of $\{1, \cdots, n\}$, i.e.\ the set of all bijections of this set with itself.
+\end{defi}
+
+A convenient way of writing permutations is to use the disjoint cycle notation, such as writing $(1\; 2\; 3)(4\; 5)(6)$ for the permutation that maps
+\begin{align*}
+ 1 &\mapsto 2 & 4 &\mapsto 5\\
+ 2 &\mapsto 3 & 5 &\mapsto 4\\
+ 3 &\mapsto 1 & 6 &\mapsto 6.
+\end{align*}
+Unfortunately, the convention for writing permutations is weird. Since permutations are bijections, and hence functions, they are multiplied the wrong way, i.e.\ $f \circ g$ means first apply $g$, then apply $f$. In particular, $(1\; 2\; 3) (3\; 4)$ requires first applying the second permutation, then the first, and is in fact $(1\; 2\; 3\; 4)$.
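To make the convention concrete, here is a quick Python sketch (not part of the notes; the representation of permutations as dicts is our own) that composes permutations right-to-left and recovers $(1\; 2\; 3)(3\; 4) = (1\; 2\; 3\; 4)$:

```python
# Permutations of {1, ..., n} as dicts mapping each point to its image.
def perm(cycles, n):
    """Build a permutation from a list of cycles; unlisted points are fixed."""
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for i, x in enumerate(cyc):
            p[x] = cyc[(i + 1) % len(cyc)]
    return p

def compose(f, g):
    """f after g: apply g first, then f, matching the convention for functions."""
    return {x: f[g[x]] for x in g}

# (1 2 3)(3 4): apply (3 4) first, then (1 2 3) -- the result is (1 2 3 4)
prod = compose(perm([(1, 2, 3)], 4), perm([(3, 4)], 4))
print(prod)  # {1: 2, 2: 3, 3: 4, 4: 1}
```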
+
+We know that any permutation is a product of transpositions. Hence we make the following definition.
+\begin{defi}[Even and odd permutation]\index{odd permutation}\index{even permutation}
+ A permutation $\sigma \in S_n$ is \emph{even} if it can be written as a product of an even number of transpositions, and \emph{odd} otherwise.
+\end{defi}
+In IA Groups, we spent a lot of time proving this is well-defined, and we are not doing that again (note that this definition by itself \emph{is} well-defined --- if a permutation can be written both as an even number of transpositions and as an odd number of transpositions, the definition says it is even. However, this is not what we really want, since then we cannot immediately conclude that, say, $(1\; 2)$ is odd).
+
+This allows us to define the homomorphism:\index{sign}\index{$\sgn$}
+\begin{align*}
+ \sgn: S_n &\to (\{\pm 1\}, \times)\\
+ \sigma &\mapsto
+ \begin{cases}
+ +1 & \sigma\text{ is even}\\
+ -1 & \sigma\text{ is odd}
+ \end{cases}
+\end{align*}
+
+\begin{defi}[Alternating group]\index{alternating group}\index{$A_n$}
+ The \emph{alternating group} $A_n \leq S_n$ is the subgroup of even permutations, i.e.\ $A_n$ is the kernel of $\sgn$.
+\end{defi}
+
+This immediately tells us $A_n \lhd S_n$, and we can immediately work out its index, since
+\[
+ \frac{S_n}{A_n} \cong \im (\sgn) = \{\pm 1\},
+\]
+unless $n = 1$. So $A_n$ has index $2$ in $S_n$ for all $n \geq 2$.
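As a computational sanity check (a Python sketch of our own, not from the notes), we can compute $\sgn$ via inversion counts and verify that the even permutations make up exactly half of $S_4$:

```python
from itertools import permutations

def sgn(p):
    """Sign via the inversion count; p is a tuple listing the images of 1, ..., n."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

S4 = list(permutations(range(1, 5)))
A4 = [p for p in S4 if sgn(p) == 1]
print(len(S4), len(A4))  # 24 12
```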
+
+More generally, for a set $X$, we can define its symmetric group as follows:
+\begin{defi}[Symmetric group of $X$]\index{symmetric group}\index{$\Sym(X)$}
+ Let $X$ be a set. We write $\Sym(X)$ for the group of all permutations of $X$.
+\end{defi}
+
+However, we don't always want the whole symmetric group. Sometimes, we just want some subgroups of symmetric groups, as in our initial motivation. So we make the following definition.
+
+\begin{defi}[Permutation group]\index{permutation group}
+ A group $G$ is called a \emph{permutation group} if it is a subgroup of $\Sym(X)$ for some $X$, i.e.\ it is given by some, but not necessarily all, permutations of some set.
+
+ We say $G$ is a \emph{permutation group of order $n$} if in addition $|X| = n$.
+\end{defi}
+This is not a very interesting definition in itself, since, as we will soon see, every group is (isomorphic to) a permutation group. However, in some cases, thinking of a group as a permutation group of some object gives us better intuition on what the group is about.
+
+\begin{eg}
+ $S_n$ and $A_n$ are obviously permutation groups. The dihedral group $D_{2n}$ is also a permutation group of order $n$, viewing its elements as permutations of the vertices of a regular $n$-gon.
+\end{eg}
+
+We would next want to recover the idea of a group being a ``permutation''. If $G \leq \Sym(X)$, then each $g \in G$ should be able to give us a permutation of $X$, in a way that is consistent with the group structure. We say the group $G$ \emph{acts} on $X$. In general, we make the following definition:
+
+\begin{defi}[Group action]\index{group action}
+ An \emph{action} of a group $(G, \ph)$ on a set $X$ is a function
+ \[
+ *: G\times X \to X
+ \]
+ such that
+ \begin{enumerate}
+ \item $g_1 * (g_2 * x) = (g_1 \cdot g_2) * x$ for all $g_1, g_2 \in G$ and $x \in X$.
+ \item $e * x = x$ for all $x \in X$.
+ \end{enumerate}
+\end{defi}
+There is another way of defining group actions, which is arguably a better way of thinking about group actions.
+
+\begin{lemma}
+ An action of $G$ on $X$ is equivalent to a homomorphism $\phi: G \to \Sym(X)$.
+\end{lemma}
+Note that the statement by itself is useless, since it doesn't tell us how to translate between the homomorphism and a group action. The important part is the proof.
+\begin{proof}
+ Let $*: G \times X \to X$ be an action. Define $\phi: G \to \Sym(X)$ by sending $g$ to the function $\phi(g) = (g * \ph : X \to X)$. This is indeed a permutation --- $g^{-1} * \ph$ is an inverse since
+ \[
+ \phi(g^{-1})(\phi(g)(x)) = g^{-1} * (g * x) = (g^{-1} \cdot g) * x = e * x = x,
+ \]
+ and a similar argument shows $\phi(g) \circ \phi(g^{-1}) = \id_X$. So $\phi$ is at least a well-defined function.
+
+ To show it is a homomorphism, just note that
+ \[
+ \phi(g_1)(\phi(g_2)(x)) = g_1 * (g_2 * x) = (g_1 \cdot g_2) * x = \phi(g_1 \cdot g_2)(x).
+ \]
+ Since this is true for all $x \in X$, we know $\phi(g_1)\circ \phi(g_2) = \phi(g_1 \cdot g_2)$. Also, $\phi(e)(x) = e * x = x$. So $\phi(e)$ is indeed the identity. Hence $\phi$ is a homomorphism.
+
+ We now do the same thing backwards. Given a homomorphism $\phi: G \to \Sym(X)$, define a function by $g * x = \phi(g)(x)$. We now check it is indeed a group action. Using the definition of a homomorphism, we know
+ \begin{enumerate}
+ \item $g_1 * (g_2 * x) = \phi(g_1)(\phi(g_2)(x)) = (\phi(g_1) \circ \phi(g_2))(x) = \phi(g_1 \cdot g_2)(x) = (g_1 \cdot g_2) * x$.
+ \item $e * x = \phi(e)(x) = \id_X(x) = x$.
+ \end{enumerate}
+ So this homomorphism gives a group action. These two operations are clearly inverses to each other. So group actions of $G$ on $X$ are the same as homomorphisms $G \to \Sym(X)$.
+\end{proof}
+
+\begin{defi}[Permutation representation]\index{permutation representation}
+ A \emph{permutation representation} of a group $G$ is a homomorphism $G \to \Sym (X)$.
+\end{defi}
+We have thus shown that a permutation representation is the same as a group action.
+
+The good thing about thinking of group actions as homomorphisms is that we can use all we know about homomorphisms on them.
+\begin{notation}
+ For an action of $G$ on $X$ given by $\phi: G \to \Sym(X)$, we write $G^X = \im(\phi)$ and $G_X = \ker(\phi)$.
+\end{notation}
+
+The first isomorphism theorem immediately gives
+\begin{prop}
+ $G_X \lhd G$ and $G/G_X \cong G^X$.
+\end{prop}
+
+In particular, if $G_X = \{e\}$ is trivial, then $G \cong G^X \leq \Sym(X)$.
+
+\begin{eg}
+ Let $G$ be the group of symmetries of a cube. Let $X$ be the set of diagonals of the cube.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (2, 2);
+ \draw (2, 0) -- +(1, 0.5);
+ \draw (2, 2) -- +(1, 0.5);
+ \draw (0, 2) -- +(1, 0.5);
+ \draw [dashed] (0, 0) -- +(1, 0.5);
+ \draw (1, 2.5) -- (3, 2.5) -- (3, 0.5);
+ \draw [dashed] (1, 2.5) -- (1, 0.5) -- (3, 0.5);
+
+ \draw [mred] (0, 0) node [circ] {} -- (3, 2.5) node [circ] {};
+ \draw [mblue] (2, 0) node [circ] {} -- (1, 2.5) node [circ] {};
+ \draw [morange] (0, 2) node [circ] {} -- (3, 0.5) node [circ] {};
+ \draw [mgreen] (2, 2) node [circ] {} -- (1, 0.5) node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ Then $G$ acts on $X$, and so we get $\phi: G \to \Sym(X)$. What is its kernel? An element of the kernel must send each diagonal to itself, so it either fixes both endpoints of every diagonal, or swaps them all. So $G_X = \ker(\phi) = \{\id, \text{symmetry that sends each vertex to its opposite}\} \cong C_2$.
+
+ How about the image? We have $G^X = \im(\phi) \leq \Sym(X) \cong S_4$. It is an exercise to show that $\im(\phi) = \Sym(X)$, i.e.\ that $\phi$ is surjective. We are not proving this because this is an exercise in geometry, not group theory. Then the first isomorphism theorem tells us
+ \[
+ G^X \cong G/G_X.
+ \]
+ So
+ \[
+ |G| = |G^X| |G_X| = 4! \cdot 2 = 48.
+ \]
+\end{eg}
+This is an example of how we can use group actions to count elements in a group.
+
+\begin{eg}[Cayley's theorem]\index{Cayley's theorem}
+ For any group $G$, we have an action of $G$ on $G$ itself via
+ \[
+ g * g_1 = gg_1.
+ \]
+ It is trivial to check this is indeed an action. This gives a group homomorphism $\phi: G \to \Sym(G)$. What is its kernel? If $g \in \ker(\phi)$, then it acts trivially on every element. In particular, it acts trivially on the identity. So $g* e = e$, which means $g = e$. So $\ker(\phi) = \{e\}$. By the first isomorphism theorem, we get
+ \[
+ G \cong G/\{e\} \cong \im \phi \leq \Sym(G).
+ \]
+ So we know every group is (isomorphic to) a subgroup of a symmetric group.
+\end{eg}
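Cayley's construction is easy to carry out explicitly. Here is a small Python sketch (our own illustration, using $\Z/6$ under addition as the example group) showing that left translation gives an injective map into the symmetric group of the underlying set:

```python
# Left translation: each g gives the permutation x -> g * x of the underlying set.
# Example group: G = Z/6 under addition (our own choice for illustration).
G = list(range(6))
phi = {g: tuple((g + x) % 6 for x in G) for g in G}

# phi is injective (g * 0 = g recovers g), so G embeds into Sym(G).
print(len(set(phi.values())))  # 6
```

Note that injectivity here is exactly the triviality of the kernel in the proof above.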
+
+\begin{eg}
+ Let $H$ be a subgroup of $G$, and $X = G/H$ be the set of left cosets of $H$. We let $G$ act on $X$ via
+ \[
+ g* g_1 H = gg_1 H.
+ \]
+ It is easy to check this is well-defined and is indeed a group action. So we get $\phi: G \to \Sym(X)$.
+
+ Now consider $G_X = \ker(\phi)$. If $g \in G_X$, then for every $g_1 \in G$, we have $g * g_1 H = g_1H$. This means $g_1^{-1} gg_1 \in H$. In other words, we have
+ \[
+ g \in g_1 H g_1^{-1}.
+ \]
+ This has to happen for \emph{all} $g_1 \in G$. So
+ \[
+ G_X \subseteq \bigcap_{g_1 \in G} g_1 Hg_1^{-1}.
+ \]
+ This argument is completely reversible --- if $g \in \bigcap_{g_1 \in G} g_1 Hg_1^{-1}$, then for each $g_1 \in G$, we know
+ \[
+ g_1^{-1}g g_1 \in H,
+ \]
+ and hence
+ \[
+ gg_1 H = g_1H.
+ \]
+ So
+ \[
+ g* g_1 H = g_1 H
+ \]
+ So $g \in G_X$. Hence we indeed have equality:
+ \[
+ \ker (\phi) = G_X = \bigcap_{g_1 \in G} g_1 Hg_1^{-1}.
+ \]
+ Since this is a kernel, this is a normal subgroup of $G$, and is contained in $H$. Starting with an arbitrary subgroup $H$, this allows us to generate a normal subgroup, and this is indeed the biggest normal subgroup of $G$ that is contained in $H$, if we stare at it long enough.
+\end{eg}
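We can compute this intersection of conjugates (the "core" of $H$) directly for a small example. The following Python sketch (our own, with permutations written as tuples of images of $0, \ldots, n-1$) takes $H$ to be the order-$2$ subgroup of $S_3$ generated by a transposition, and confirms that the largest normal subgroup of $S_3$ inside it is trivial:

```python
from itertools import permutations

def comp(f, g):
    """Apply g first, then f; permutations as tuples of images of 0, ..., n-1."""
    return tuple(f[g[i]] for i in range(len(f)))

def inv(f):
    out = [0] * len(f)
    for i, fi in enumerate(f):
        out[fi] = i
    return tuple(out)

S3 = list(permutations(range(3)))
e = (0, 1, 2)
H = {e, (1, 0, 2)}             # the order-2 subgroup generated by a transposition

core = set(S3)
for g in S3:                   # intersect the conjugates g H g^{-1}
    core &= {comp(comp(g, h), inv(g)) for h in H}
print(core)  # {(0, 1, 2)}
```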
+
+We can use this to prove the following theorem.
+\begin{thm}
+ Let $G$ be a finite group, and $H \leq G$ a subgroup of index $n$. Then there is a normal subgroup $K \lhd G$ with $K \leq H$ such that $G/K$ is isomorphic to a subgroup of $S_n$. Hence $|G/K| \mid n!$ and $|G/K| \geq n$.
+\end{thm}
+
+\begin{proof}
+ We apply the previous example, giving $\phi: G \to \Sym(G/H)$, and let $K$ be the kernel of this homomorphism. We have already shown that $K \leq H$. Then the first isomorphism theorem gives
+ \[
+ G/K \cong \im \phi \leq \Sym(G/H) \cong S_n.
+ \]
+ Then by Lagrange's theorem, we know $|G/K| \mid |S_n| = n!$, and we also have $|G/K| \geq |G/H| = n$.
+\end{proof}
+
+\begin{cor}
+ Let $G$ be a non-abelian simple group. Let $H \leq G$ be a proper subgroup of index $n$. Then $G$ is isomorphic to a subgroup of $A_n$. Moreover, we must have $n \geq 5$, i.e.\ $G$ cannot have a subgroup of index less than $5$.
+\end{cor}
+
+\begin{proof}
+ The action of $G$ on $X = G/H$ gives a homomorphism $\phi: G \to \Sym(X)$. Then $\ker(\phi) \lhd G$. Since $G$ is simple, $\ker(\phi)$ is either $G$ or $\{e\}$. We first show that it cannot be $G$. If $\ker(\phi) = G$, then every element of $G$ acts trivially on $X = G/H$. But if $g \in G\setminus H$, which exists since the index of $H$ is not $1$, then $g * H = gH \not= H$. So $g$ does not act trivially. So the kernel cannot be the whole of $G$. Hence $\ker(\phi) = \{e\}$.
+
+ Thus by the first isomorphism theorem, we get
+ \[
+ G \cong \im (\phi) \leq \Sym(X) \cong S_n.
+ \]
+ We now need to show that $G$ is in fact a subgroup of $A_n$.
+
+ We know $A_n \lhd S_n$. So $\im(\phi) \cap A_n \lhd \im(\phi) \cong G$. As $G$ is simple, $\im (\phi) \cap A_n$ is either $\{e\}$ or $G = \im(\phi)$. We want to show that the second thing happens, i.e.\ the intersection is not the trivial group. We use the second isomorphism theorem. If $\im(\phi) \cap A_n = \{e\}$, then
+ \[
+ \im(\phi) \cong \frac{\im(\phi)}{\im(\phi) \cap A_n} \cong \frac{\im (\phi) A_n}{A_n} \leq \frac{S_n}{A_n} \cong C_2.
+ \]
+ So $G \cong \im (\phi)$ is a subgroup of $C_2$, i.e.\ either $\{e\}$ or $C_2$ itself. Neither of these are non-abelian. So this cannot be the case. So we must have $\im(\phi) \cap A_n = \im(\phi)$, i.e.\ $\im (\phi) \leq A_n$.
+
+ The last part follows from the fact that $S_1, S_2, S_3, S_4$ have no non-abelian simple subgroups, which you can check by going to a quiet room and listing out all their subgroups.
+\end{proof}
+
+Let's recall some old definitions from IA Groups.
+\begin{defi}[Orbit]\index{orbit}
+ If $G$ acts on a set $X$, the \emph{orbit} of $x \in X$ is
+ \[
+ G\cdot x = \{g * x \in X: g \in G\}.
+ \]
+\end{defi}
+
+\begin{defi}[Stabilizer]\index{stabilizer}
+ If $G$ acts on a set $X$, the \emph{stabilizer} of $x \in X$ is
+ \[
+ G_x = \{g \in G: g * x = x\}.
+ \]
+\end{defi}
+
+The main theorem about these concepts is the orbit-stabilizer theorem.
+\begin{thm}[Orbit-stabilizer theorem]\index{orbit-stabilizer theorem}
+ Let $G$ act on $X$. Then for any $x \in X$, there is a bijection between $G\cdot x$ and $G/G_x$, given by $g * x \leftrightarrow g G_x$.
+
+ In particular, if $G$ is finite, it follows that
+ \[
+ |G| = |G_x| |G \cdot x|.
+ \]
+\end{thm}
+It takes some work to show this is well-defined and a bijection, but you've done it in IA Groups. In IA Groups, you probably learnt the second statement instead, but the first version is more general, since it also holds for infinite groups.
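For a concrete check of the counting form (a Python sketch of our own, not from the notes), take $S_4$ acting on $\{0, 1, 2, 3\}$ in the obvious way: the orbit of $0$ has size $4$, its stabilizer has size $6$, and their product is $|S_4| = 24$:

```python
from itertools import permutations

G = list(permutations(range(4)))       # S_4 acting on X = {0, 1, 2, 3}
x = 0
orbit = {g[x] for g in G}              # G . x
stab = [g for g in G if g[x] == x]     # G_x
print(len(G), len(orbit) * len(stab))  # 24 24
```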
+
+\subsection{Conjugacy, centralizers and normalizers}
+We have seen that every group acts on itself by multiplying on the left. A group $G$ can also act on itself in a different way, by conjugation:\index{conjugation}
+\[
+ g * g_1 = g g_1 g^{-1}.
+\]
+Let $\phi: G \to \Sym(G)$ be the associated permutation representation. We know, by definition, that $\phi(g)$ is a bijection from $G$ to $G$ as sets. However, here $G$ is not an arbitrary set, but is a group. A natural question to ask is whether $\phi(g)$ is a homomorphism or not. Indeed, we have
+\[
+ \phi(g) (g_1 \cdot g_2) = g g_1 g_2 g^{-1} = (g g_1 g^{-1})(g g_2 g^{-1}) = \phi(g)(g_1) \phi(g) (g_2).
+\]
+So $\phi(g)$ is a homomorphism from $G$ to $G$. Since $\phi(g)$ is bijective (as in \emph{any} group action), it is in fact an isomorphism.
+
+Thus, for any group $G$, there are many isomorphisms from $G$ to itself, one for every $g \in G$, and they can all be obtained from the conjugation action of $G$ on itself.
+
+We can, of course, take the collection of all isomorphisms of $G$, and form a new group out of it.
+\begin{defi}[Automorphism group]\index{automorphism group}
+ The \emph{automorphism group} of $G$ is
+ \[
+ \Aut(G) = \{f: G \to G: f\text{ is a group isomorphism}\}.
+ \]
+ This is a group under composition, with the identity map as the identity.
+\end{defi}
+This is a subgroup of $\Sym(G)$, and the homomorphism $\phi: G\to \Sym(G)$ by conjugation lands in $\Aut(G)$.
+
+This is pretty fun --- we can use this to cook up some more groups, by taking a group and looking at its automorphism group.
+
+We can also take a group, take its automorphism group, and then take its automorphism group again, and do it again, and see if this process stabilizes, or becomes periodic, or something. This is left as an exercise for the reader.
+
+\begin{defi}[Conjugacy class]\index{conjugacy class}
+ The \emph{conjugacy class} of $g \in G$ is
+ \[
+ \ccl_G(g) = \{hgh^{-1}: h \in G\},
+ \]
+ i.e.\ the orbit of $g \in G$ under the conjugation action.
+\end{defi}
+
+\begin{defi}[Centralizer]\index{centralizer}
+ The \emph{centralizer} of $g \in G$ is
+ \[
+ C_G(g) = \{h \in G: hgh^{-1} = g\},
+ \]
+ i.e.\ the stabilizer of $g$ under the conjugation action. This is alternatively the set of all $h \in G$ that commute with $g$.
+\end{defi}
+
+\begin{defi}[Center]\index{center}
+ The \emph{center} of a group $G$ is
+ \[
+ Z(G) = \{h \in G: hg h^{-1} = g\text{ for all }g \in G\} = \bigcap_{g \in G} C_G(g) = \ker (\phi).
+ \]
+\end{defi}
+These are the elements of the group that commute with everything else.
+
+By the orbit-stabilizer theorem, for each $x \in G$, we obtain a bijection $\ccl(x) \leftrightarrow G/C_G(x)$.
+\begin{prop}
+ Let $G$ be a finite group. Then
+ \[
+ |\ccl(x)| = |G:C_G(x)| = |G|/|C_G(x)|.
+ \]
+\end{prop}
+In particular, the size of each conjugacy class divides the order of the group.
+
+Another useful notion is the normalizer.
+\begin{defi}[Normalizer]\index{normalizer}
+ Let $H \leq G$. The \emph{normalizer} of $H$ in $G$ is
+ \[
+ N_G(H) = \{g \in G: g^{-1}H g = H\}.
+ \]
+\end{defi}
+Note that we certainly have $H \leq N_G(H)$. Even better, $H \lhd N_G(H)$, essentially by definition. This is in fact the biggest subgroup of $G$ in which $H$ is normal.
+
+We are now going to look at conjugacy classes of $S_n$. Recall from IA Groups that permutations in $S_n$ are conjugate if and only if they have the same cycle type when written as products of disjoint cycles. We can think of the cycle types as partitions of $n$. For example, the partition $2, 2, 1$ of $5$ corresponds to the conjugacy class of $(1\; 2)(3\; 4)(5)$. So the conjugacy classes of $S_n$ correspond exactly to the partitions of $n$.
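This correspondence is easy to see computationally. The following Python sketch (our own illustration) extracts the cycle type of each element of $S_4$ and finds exactly the $5$ partitions of $4$:

```python
from itertools import permutations

def cycle_type(p):
    """Sorted cycle lengths of a permutation given as a tuple of images of 0..n-1."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

types = {cycle_type(p) for p in permutations(range(4))}
print(sorted(types))  # [(1, 1, 1, 1), (2, 1, 1), (2, 2), (3, 1), (4,)]
```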
+
+We will use this fact in the proof of the following theorem:
+\begin{thm}
+ The alternating groups $A_n$ are simple for $n \geq 5$ (also for $n = 1, 2, 3$).
+\end{thm}
+The cases in brackets follow from a direct check, since $A_1 \cong A_2 \cong \{e\}$ and $A_3 \cong C_3$, all of which are simple. We can also check manually that $A_4$ has a non-trivial proper normal subgroup, and hence is not simple.
+
+Recall that in IA Groups, we proved that $A_5$ is simple by brute force --- we listed all its conjugacy classes, and saw that they cannot be put together to make a non-trivial proper normal subgroup. This obviously does not generalize easily to higher values of $n$. Hence we need a different approach.
+
+\begin{proof}
+ We start with the following claim:
+ \begin{claim}
+ $A_n$ is generated by $3$-cycles.
+ \end{claim}
+ As any element of $A_n$ is a product of an even number of transpositions, it suffices to show that every product of two transpositions is also a product of $3$-cycles.
+
+ There are three possible cases: let $a, b, c, d$ be distinct. Then
+ \begin{enumerate}
+ \item $(a\; b)(a\; b) = e$.
+ \item $(a\; b)(b\; c) = (a\; b\; c)$.
+ \item $(a\; b)(c\; d) = (a\; c\; b)(a\; c\; d)$.
+ \end{enumerate}
+ So we have shown that every possible product of two transpositions is a product of three-cycles.
+
+ \begin{claim}
+ Let $H \lhd A_n$. If $H$ contains a $3$-cycle, then $H = A_n$.
+ \end{claim}
+ We show that if $H$ contains a $3$-cycle, then \emph{every} $3$-cycle is in $H$. Then we are done since $A_n$ is generated by $3$-cycles. For concreteness, suppose we know $(a\; b\; c) \in H$, and we want to show $(1\; 2\; 3) \in H$.
+
+ Since they have the same cycle type, there is some $\sigma \in S_n$ such that $(a\; b\; c) = \sigma(1\; 2\; 3) \sigma^{-1}$. If $\sigma$ is even, i.e.\ $\sigma \in A_n$, then $(1\; 2\; 3) \in \sigma^{-1}H\sigma = H$ by the normality of $H$, and we are done.
+
+ If $\sigma$ is odd, replace it by $\bar{\sigma} = \sigma \cdot (4\; 5)$. Here is where we use the fact that $n \geq 5$ (we will use it again later). Then we have
+ \[
+ \bar{\sigma} (1\; 2\; 3) \bar{\sigma}^{-1} = \sigma (4\; 5)(1\; 2\; 3)(4\; 5) \sigma^{-1} = \sigma(1\; 2\; 3) \sigma^{-1} = (a\; b\; c),
+ \]
+ using the fact that $(1\; 2\; 3)$ and $(4\; 5)$ commute. Now $\bar{\sigma}$ is even. So $(1\; 2\; 3) \in H$ as above.
+
+ What we've got so far is that if $H \lhd A_n$ contains \emph{any} $3$-cycle, then it is $A_n$. Finally, we have to show that every normal subgroup must contain at least one $3$-cycle.
+ \begin{claim}
+ Let $H \lhd A_n$ be non-trivial. Then $H$ contains a $3$-cycle.
+ \end{claim}
+ We separate this into several cases:
+ \begin{enumerate}
+ \item Suppose $H$ contains an element which can be written in disjoint cycle notation
+ \[
+ \sigma = (1\; 2\; 3 \cdots r) \tau,
+ \]
+ for $r \geq 4$. We now let $\delta = (1\; 2\; 3) \in A_n$. Then by normality of $H$, we know $\delta^{-1} \sigma \delta \in H$. Then $\sigma^{-1}\delta^{-1}\sigma \delta \in H$. Also, we notice that $\tau$ does not contain $1, 2, 3$. So it commutes with $\delta$, and also trivially with $(1\; 2\; 3\; \cdots\; r)$. We can expand this mess to obtain
+ \[
+ \sigma^{-1} \delta^{-1} \sigma\delta = (r\; \cdots \; 2\; 1)(1\; 3\; 2)(1\; 2\; 3\; \cdots \; r)(1\; 2\; 3) = (2\; 3 \; r),
+ \]
+ which is a $3$-cycle. So done.
+
+ The same argument goes through if $\sigma = (a_1 \; a_2 \; \cdots \; a_r) \tau$ for any distinct $a_1, \cdots, a_r$.
+ \item Suppose $H$ contains an element consisting of at least two $3$-cycles in disjoint cycle notation, say
+ \[
+ \sigma = (1\; 2\; 3)(4\; 5\; 6)\tau.
+ \]
+ We now let $\delta = (1\; 2\; 4)$, and again calculate
+ \[
+ \sigma^{-1} \delta^{-1}\sigma\delta = (1\; 3\; 2)(4\; 6\; 5)(1\; 4\; 2)(1\; 2\; 3)(4\; 5\; 6)(1\; 2\; 4) = (1\; 2\; 4\; 3\; 6).
+ \]
+ This is a $5$-cycle, which is necessarily in $H$. By the previous case, we get a $3$-cycle in $H$ too, and hence $H = A_n$.
+ \item Suppose $H$ contains $\sigma = (1\; 2\; 3)\tau$, with $\tau$ a product of disjoint $2$-cycles (if $\tau$ contained anything longer, it would fall into one of the previous two cases). Then $\sigma^2 = (1\; 2\; 3)^2 \tau^2 = (1\; 3\; 2)$, since $\tau^2 = e$, and this is a $3$-cycle in $H$.
+ \item Suppose $H$ contains $\sigma = (1\; 2)(3\; 4)\tau$, where $\tau$ is a product of $2$-cycles. We first let $\delta = (1\; 2\; 3)$ and calculate
+ \[
+ u = \sigma^{-1}\delta^{-1}\sigma\delta = (1\; 2)(3\; 4)(1\; 3\; 2)(1\; 2)(3\; 4)(1\; 2\; 3) = (1\; 4)(2\; 3),
+ \]
+ which is again in $H$. We have landed in the same case, but instead of two transpositions times a mess, we just have two transpositions, which is nicer. Now let
+ \[
+ v = (1\; 5\; 2)u (1\; 2\; 5) = (1\; 3)(4\; 5) \in H.
+ \]
+ Note that we used $n \geq 5$ again. We have yet again landed in the same case. Notice however, that these are not the same transpositions. We multiply
+ \[
+ uv = (1\; 4)(2\; 3)(1\; 3)(4\; 5) = (1\; 2\; 3\; 4\; 5) \in H.
+ \]
+ This is then covered by the first case, and we are done.
+ \end{enumerate}
+ So done. Phew.
+\end{proof}
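The commutator identities in this proof are finite computations, so they can be machine-checked. Here is a Python sketch (our own, not from the notes) verifying the first case, $\sigma^{-1}\delta^{-1}\sigma\delta = (2\; 3\; r)$, for $r = 6$ inside $S_7$:

```python
def perm(cycles, n):
    """Permutation of {1, ..., n} as a dict, built from cycles."""
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for i, x in enumerate(cyc):
            p[x] = cyc[(i + 1) % len(cyc)]
    return p

def compose(*ps):
    """Product of permutations, rightmost applied first."""
    def apply_all(x):
        for p in reversed(ps):
            x = p[x]
        return x
    return {x: apply_all(x) for x in range(1, len(ps[0]) + 1)}

n, r = 7, 6
sigma = perm([tuple(range(1, r + 1))], n)   # the r-cycle (1 2 ... r)
delta = perm([(1, 2, 3)], n)
sigma_inv = {v: k for k, v in sigma.items()}
delta_inv = {v: k for k, v in delta.items()}
comm = compose(sigma_inv, delta_inv, sigma, delta)
print(comm == perm([(2, 3, r)], n))  # True
```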
+
+\subsection{Finite \texorpdfstring{$p$}{p}-groups}
+Note that when studying the orders of groups and subgroups, we always talk about divisibility, since that is what Lagrange's theorem tells us about. We never talk about things like the sum of the orders of two subgroups. When it comes to divisibility, the simplest case would be when the order is a prime, and we have done that already. The next best thing we can hope for is that the order is a power of a prime.
+
+\begin{defi}[$p$-group]\index{$p$-group}
+ A finite group $G$ is a \emph{$p$-group} if $|G| = p^n$ for some prime number $p$ and $n \geq 1$.
+\end{defi}
+
+\begin{thm}
+ If $G$ is a finite $p$-group, then $Z(G) = \{x \in G: xg = gx\text{ for all }g \in G\}$ is non-trivial.
+\end{thm}
+This immediately tells us that for $n \geq 2$, a $p$-group is never simple.
+
+\begin{proof}
+ Let $G$ act on itself by conjugation. The orbits of this action (i.e.\ the conjugacy classes) have order dividing $|G| = p^n$. So it is either a singleton, or its size is divisible by $p$.
+
+ Since the conjugacy classes partition $G$, we know the total size of the conjugacy classes is $|G|$. In particular,
+ \begin{multline*}
+ |G| = \text{number of conjugacy classes of size 1} \\
+ + \sum \text{sizes of all other conjugacy classes}.
+ \end{multline*}
+ We know the second term is divisible by $p$. Also $|G| = p^n$ is divisible by $p$. Hence the number of conjugacy classes of size $1$ is divisible by $p$. We know $\{e\}$ is a conjugacy class of size $1$. So there must be at least $p$ conjugacy classes of size $1$. Since the smallest prime number is $2$, there is a conjugacy class $\{x\} \not= \{e\}$.
+
+ But if $\{x\}$ is a conjugacy class on its own, then by definition $g^{-1}xg = x$ for all $g \in G$, i.e.\ $xg = gx$ for all $g \in G$. So $x \in Z(G)$. So $Z(G)$ is non-trivial.
+\end{proof}
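As a concrete illustration (a Python sketch of our own, not from the notes), we can build the dihedral group of order $8$ --- a $2$-group --- as permutations of the vertices of a square and confirm its center is the non-trivial group $\{e, r^2\}$:

```python
def comp(f, g):
    """Apply g first, then f; permutations of the square's vertices 0..3 as tuples."""
    return tuple(f[g[i]] for i in range(4))

e = (0, 1, 2, 3)
rot = (1, 2, 3, 0)    # rotation by 90 degrees
flip = (0, 3, 2, 1)   # reflection through the diagonal fixing vertices 0 and 2

# Close {rot, flip} under composition to generate the whole dihedral group of order 8.
G, frontier = {e}, {e}
while frontier:
    new = {comp(g, s) for g in frontier for s in (rot, flip)} - G
    G |= new
    frontier = new

center = {g for g in G if all(comp(g, h) == comp(h, g) for h in G)}
print(len(G), sorted(center))  # 8 [(0, 1, 2, 3), (2, 3, 0, 1)]
```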
+The theorem allows us to prove interesting things about $p$-groups by induction --- we can quotient $G$ by $Z(G)$, and get a smaller $p$-group. One way to do this is via the following lemma.
+
+\begin{lemma}
+ For any group $G$, if $G/Z(G)$ is cyclic, then $G$ is abelian.
+
+ In other words, if $G/Z(G)$ is cyclic, then it is in fact trivial, since the center of an abelian group is the abelian group itself.
+\end{lemma}
+
+\begin{proof}
+ Let $g \in G$ be such that the coset $gZ(G)$ generates the cyclic group $G/Z(G)$. Then every coset of $Z(G)$ is of the form $g^rZ(G)$. So every element $x \in G$ must be of the form $g^r z$ for some $z \in Z(G)$ and $r \in \Z$. To show $G$ is abelian, let $\bar{x} = g^{\bar{r}} \bar{z}$ be another element, with $\bar{z} \in Z(G)$ and $\bar{r} \in \Z$. Note that $z$ and $\bar{z}$ are in the center, and hence commute with every element. So we have
+ \[
+ x\bar{x} = g^r z g^{\bar{r}} \bar{z} = g^r g^{\bar{r}} z \bar{z} = g^{\bar{r}}g^r \bar{z} z = g^{\bar{r}}\bar{z} g^r z = \bar{x} x.
+ \]
+ So they commute. So $G$ is abelian.
+\end{proof}
+
+This is a general lemma for groups, but is particularly useful when applied to $p$-groups.
+\begin{cor}
+ If $p$ is prime and $|G| = p^2$, then $G$ is abelian.
+\end{cor}
+
+\begin{proof}
+ Since $Z(G) \leq G$, its order must be $1$, $p$ or $p^2$, and since it is non-trivial, it is $p$ or $p^2$. If it has order $p^2$, then $Z(G) = G$ and the group is abelian. Otherwise, $G/Z(G)$ has order $p^2/p = p$, and is hence cyclic. By the lemma, $G$ is then abelian, so $Z(G) = G$ has order $p^2$, contradicting $|Z(G)| = p$. So $G$ is abelian.
+\end{proof}
+
+\begin{thm}
+ Let $G$ be a group of order $p^a$, where $p$ is a prime number. Then it has a subgroup of order $p^b$ for any $0 \leq b \leq a$.
+\end{thm}
+This means there is a subgroup of every conceivable order. This is not true for general groups. For example, $A_5$ has no subgroup of order $30$, or else that would have index $2$, and hence be a normal subgroup.
+
+\begin{proof}
+ We induct on $a$. If $a = 1$, then $\{e\}, G$ give subgroups of order $p^0$ and $p^1$. So done.
+
+ Now suppose $a > 1$, and we want to construct a subgroup of order $p^b$. If $b = 0$, then this is trivial, namely $\{e\} \leq G$ has order $1$.
+
+ Otherwise, we know $Z(G)$ is non-trivial. So let $x \not= e \in Z(G)$. Since $\ord(x) \mid |G|$, its order is a power of $p$. If it in fact has order $p^c$, then $x^{p^{c - 1}}$ has order $p$. So we can suppose, by renaming, that $x$ has order $p$. We have thus generated a subgroup $\bra x\ket$ of order exactly $p$. Moreover, since $x$ is in the center, $\bra x\ket$ commutes with everything in $G$. So $\bra x\ket$ is in fact a normal subgroup of $G$. This is the point of choosing it in the center. Therefore $G/\bra x\ket$ has order $p^{a - 1}$.
+
+ Since this is a strictly smaller group, we can by induction suppose $G/\bra x\ket$ has a subgroup of every possible order. In particular, it has a subgroup $L$ of order $p^{b - 1}$. By the subgroup correspondence, there is some $K \leq G$ such that $L = K / \bra x\ket$ and $\bra x\ket \lhd K$. But then $K$ has order $p^{b - 1} \cdot p = p^b$. So done.
+\end{proof}
+
+\subsection{Finite abelian groups}
+We now move on to a small section, which is small because we will come back to it later, and actually prove what we claim.
+
+It turns out finite abelian groups are very easy to classify. We can just write down a list of all finite abelian groups. We write down the classification theorem, and then prove it in the last part of the course, where we hit this with a huge sledgehammer.
+
+\begin{thm}[Classification of finite abelian groups]\index{classification of finite abelian groups}
+ Let $G$ be a finite abelian group. Then there exist some $d_1, \cdots, d_r$ such that
+ \[
+ G \cong C_{d_1} \times C_{d_2} \times \cdots \times C_{d_r}.
+ \]
+ Moreover, we can pick $d_i$ such that $d_{i + 1} \mid d_i$ for each $i$, and this expression is unique.
+\end{thm}
+It turns out the best way to prove this is not to think of it as a group, but as a $\Z$-module, which is something we will come to later.
+
+\begin{eg}
+ The abelian groups of order $8$ are $C_8$, $C_4 \times C_2$, $C_2 \times C_2 \times C_2$.
+\end{eg}
+
+Sometimes this is not the most useful form of decomposition. To get a nicer decomposition, we use the following lemma:
+\begin{lemma}
+ If $n$ and $m$ are coprime, then $C_{mn} \cong C_m \times C_n$.
+\end{lemma}
+This is a grown-up version of the Chinese remainder theorem --- it is what the Chinese remainder theorem really says.
+
+\begin{proof}
+ It suffices to find an element of order $nm$ in $C_m \times C_n$. Then since $C_m \times C_n$ has order $nm$, it must be cyclic, and hence isomorphic to $C_{nm}$.
+
+ Let $g \in C_m$ have order $m$ and $h \in C_n$ have order $n$, and consider $(g, h) \in C_m \times C_n$. Suppose the order of $(g, h)$ is $k$. Then $(g, h)^k = (g^k, h^k) = (e, e)$. So the orders of $g$ and $h$ both divide $k$, i.e.\ $m \mid k$ and $n \mid k$. As $m$ and $n$ are coprime, this means that $mn \mid k$.
+
+ As $k = \ord((g, h))$ and $(g, h) \in C_m \times C_n$ is a group of order $mn$, we must have $k \mid nm$. So $k = nm$.
+\end{proof}
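A quick numerical check of the key step (a Python sketch of our own, with $\Z/4 \times \Z/9$ as the example): the smallest $k$ killing both coordinates of $(1, 1)$ is indeed $4 \cdot 9$:

```python
from math import gcd

m, n = 4, 9                      # coprime moduli
assert gcd(m, n) == 1
# Order of (1, 1) in Z/m x Z/n (additive): smallest k with k = 0 mod m and mod n.
k = next(k for k in range(1, m * n + 1) if k % m == 0 and k % n == 0)
print(k)  # 36
```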
+
+\begin{cor}
+ For any finite abelian group $G$, we have
+ \[
+ G \cong C_{d_1} \times C_{d_2} \times \cdots \times C_{d_r},
+ \]
+ where each $d_i$ is some prime power.
+\end{cor}
+
+\begin{proof}
+ From the classification theorem, iteratively apply the previous lemma to break each component up into products of prime powers.
+\end{proof}
+
+As promised, this is short.
+
+\subsection{Sylow theorems}
+We finally get to the big theorem of this part of the course.
+\begin{thm}[Sylow theorems]\index{Sylow theorem}
+ Let $G$ be a finite group of order $p^a \cdot m$, with $p$ a prime and $p \nmid m$. Then
+ \begin{enumerate}
+ \item The set of Sylow $p$-subgroups of $G$, given by
+ \[
+ \Syl_p(G) = \{P \leq G: |P| = p^a\},
+ \]
+ is non-empty. In other words, $G$ has a subgroup of order $p^a$.
+ \item All elements of $\Syl_p(G)$ are conjugate in $G$.
+ \item The number of Sylow $p$-subgroups $n_p = |\Syl_p(G)|$ satisfies $n_p \equiv 1 \pmod p$ and $n_p \mid |G|$ (in fact $n_p \mid m$, since $p$ is not a factor of $n_p$).
+ \end{enumerate}
+\end{thm}
+These are sometimes known as Sylow's first/second/third theorem respectively.
+
+We will not prove this just yet. We first look at how we can apply this theorem. We can use it without knowing how to prove it.
+
+\begin{lemma}
+ If $n_p = 1$, then the Sylow $p$-subgroup is normal in $G$.
+\end{lemma}
+
+\begin{proof}
+ Let $P$ be the unique Sylow $p$-subgroup, and let $g \in G$, and consider $g^{-1}Pg$. Since this is isomorphic to $P$, we must have $|g^{-1} Pg| = p^a$, i.e.\ it is also a Sylow $p$-subgroup. Since there is only one, we must have $P = g^{-1}Pg$. So $P$ is normal.
+\end{proof}
+
+\begin{cor}
+ Let $G$ be a non-abelian simple group. Then $|G| \mid \frac{n_p!}{2}$ for every prime $p$ such that $p \mid |G|$.
+\end{cor}
+
+\begin{proof}
+ The group $G$ acts on $\Syl_p(G)$ by conjugation. So it gives a permutation representation $\phi: G \to \Sym(\Syl_p(G)) \cong S_{n_p}$. We know $\ker \phi \lhd G$. But $G$ is simple. So $\ker(\phi)= \{e\}$ or $G$. We want to show it is not the whole of $G$.
+
+ If we had $G = \ker(\phi)$, then $g^{-1} Pg = P$ for all $g \in G$. Hence $P$ is a normal subgroup. As $G$ is simple, either $P = \{e\}$, or $P = G$. We know $P$ cannot be trivial since $p \mid |G|$. But if $G = P$, then $G$ is a $p$-group, has a non-trivial center, and hence $G$ is not non-abelian simple. So we must have $\ker(\phi) = \{e\}$.
+
+ Then by the first isomorphism theorem, we know $G \cong \im \phi \leq S_{n_p}$. We have proved the theorem without the divide-by-two part. To prove the whole result, we need to show that in fact $\im (\phi) \leq A_{n_p}$. Consider the following composition of homomorphisms:
+ \[
+ \begin{tikzcd}
+ G \ar[r, "\phi"] & S_{n_p} \ar[r, "\sgn"] & \{\pm 1\}.
+ \end{tikzcd}
+ \]
+ If this is surjective, then $\ker (\sgn \circ \phi) \lhd G$ has index $2$ (since the index is the size of the image), and is not the whole of $G$. This means $G$ is not simple (the case $G \cong C_2$ is ruled out since $C_2$ is abelian).
+
+ So the kernel must be the whole $G$, and $\sgn \circ \phi$ is the trivial map. In other words, $\sgn(\phi(g)) = +1$. So $\phi(g) \in A_{n_p}$. So in fact we have
+ \[
+ G \cong \im(\phi) \leq A_{n_p}.
+ \]
+ So we get $|G| \mid \frac{n_p!}{2}$.
+\end{proof}
+
+\begin{eg}
+ Suppose $|G| = 1000$. Then $G$ is not simple. To show this, we need to factorize $1000$. We have $|G| = 2^3 \cdot 5^3$. We pick our favorite prime to be $p = 5$. We know $n_5 \equiv 1 \pmod 5$, and $n_5 \mid 2^3 = 8$. The only number that satisfies this is $n_5 = 1$. So the Sylow $5$-subgroup is normal, and hence $G$ is not simple.
+\end{eg}
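The divisibility constraints on $n_p$ can be checked mechanically. Here is a small illustrative sketch (not part of the notes; the helper name `sylow_counts` is ours):

```python
# Sylow counting for |G| = 1000 = 2^3 * 5^3: n_5 must satisfy
# n_5 ≡ 1 (mod 5) and n_5 | 8. We enumerate the candidates.
def sylow_counts(p, m):
    """Possible values of n_p given |G| = p^a * m: the divisors of m
    that are congruent to 1 mod p."""
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(sylow_counts(5, 8))  # [1] -- the Sylow 5-subgroup is unique, hence normal
```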
+
+\begin{eg}
+ Let $|G| = 132 = 2^2 \cdot 3 \cdot 11$. We want to show this is not simple. So for a contradiction suppose it is.
+
+ We start by looking at $p = 11$. We know $n_{11} \equiv 1 \pmod {11}$. Also $n_{11} \mid 12$. As $G$ is simple, we must have $n_{11} = 12$.
+
+ Now look at $p = 3$. We have $n_3 \equiv 1 \pmod 3$ and $n_3 \mid 44$. Since $G$ is simple, $n_3 \not= 1$, so the possible values of $n_3$ are $4$ and $22$.
+
+ If $n_3 = 4$, then the corollary says $|G| \mid \frac{4!}{2} = 12$, which is of course nonsense. So $n_3 = 22$.
+
+ At this point, we count how many elements of each order there are. This is particularly useful if $p \mid |G|$ but $p^2 \nmid |G|$, i.e.\ the Sylow $p$-subgroups have order $p$ and hence are cyclic.
+
+ As all Sylow $11$-subgroups are disjoint, apart from $\{e\}$, we know there are $12 \cdot (11 - 1) = 120$ elements of order $11$. We do the same thing with the Sylow $3$-subgroups. We need $22 \cdot (3 - 1) = 44$ elements of order $3$. But $120 + 44 = 164 > 132$, which is more elements than the group has. This can't happen. So $G$ cannot be simple, giving the desired contradiction.
+\end{eg}
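The case analysis in this example can be mechanized as a sanity check. A sketch under our own naming (illustrative only):

```python
# |G| = 132 = 2^2 * 3 * 11. Assuming G simple forces n_11 = 12 and n_3 = 22;
# counting elements of order 11 and of order 3 then overshoots |G|.
def sylow_counts(p, m):
    """Divisors of m congruent to 1 mod p, i.e. candidates for n_p."""
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

assert sylow_counts(11, 12) == [1, 12]    # simple => n_11 = 12
assert sylow_counts(3, 44) == [1, 4, 22]  # corollary rules out 4; simple rules out 1
order_11 = 12 * (11 - 1)  # Sylow 11-subgroups are cyclic, pairwise intersecting in {e}
order_3 = 22 * (3 - 1)
print(order_11 + order_3)  # 164 > 132: contradiction
```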
+
+We now get to prove our big theorem. This involves some non-trivial amount of trickery.
+\begin{proof}[Proof of Sylow's theorem]
+ Let $G$ be a finite group with $|G| = p^a m$, and $p \nmid m$.
+ \begin{enumerate}
+ \item We need to show that $\Syl_p(G) \not= \emptyset$, i.e.\ we need to find some subgroup of order $p^a$. As always, we find something clever for $G$ to act on. We let
+ \[
+ \Omega = \{X\text{ sub\emph{set} of }G: |X| = p^a\}.
+ \]
+ We let $G$ act on $\Omega$ by
+ \[
+ g * \{g_1, g_2, \cdots, g_{p^a}\} = \{gg_1, gg_2, \cdots, gg_{p^a}\}.
+ \]
+ Let $\Sigma \subseteq \Omega$ be an orbit.
+
+ We first note that if $\{g_1, \cdots, g_{p^a}\} \in \Sigma$, then by the definition of an orbit, for every $g \in G$,
+ \[
+ gg_1^{-1} * \{g_1, \cdots, g_{p^a}\} = \{g, gg_1^{-1}g_2, \cdots, gg_1^{-1} g_{p^a}\} \in \Sigma.
+ \]
+ The important thing is that this set contains $g$. So for each $g$, $\Sigma$ contains a set $X$ which contains $g$. Since each set $X$ has size $p^a$, we must have
+ \[
+ |\Sigma| \geq \frac{|G|}{p^a} = m.
+ \]
+ Suppose $|\Sigma| = m$. Then the orbit-stabilizer theorem says the stabilizer $H$ of any $\{g_1, \cdots, g_{p^a}\} \in \Sigma$ has index $m$, hence $|H| = p^a$, and thus $H \in \Syl_p(G)$.
+
+ So we need to show that not every orbit $\Sigma$ can have size $> m$. Again, by the orbit-stabilizer, the size of any orbit divides the order of the group, $|G| = p^a m$. So if $|\Sigma| > m$, then $p \mid |\Sigma|$. Suppose we can show that $p \nmid |\Omega|$. Then not every orbit $\Sigma$ can have size $> m$, since $\Omega$ is the disjoint union of all the orbits, and thus we are done.
+
+ So we have to show $p \nmid |\Omega|$. This is just some basic counting. We have
+ \[
+ |\Omega| = \binom{|G|}{p^a} = \binom{p^a m}{p^a} = \prod_{j = 0}^{p^a - 1} \frac{p^a m - j}{p^a - j}.
+ \]
+ Now note that for $0 < j < p^a$, the largest power of $p$ dividing $p^am - j$ is the largest power of $p$ dividing $j$. Similarly, the largest power of $p$ dividing $p^a - j$ is also the largest power of $p$ dividing $j$. So we have the same power of $p$ on top and bottom for each such factor, and they cancel; the $j = 0$ factor is just $m$, which is coprime to $p$. So the result is not divisible by $p$.
+
+ This proof is not straightforward. We first needed the clever idea of letting $G$ act on $\Omega$. But then if we are given this set, the obvious thing to do would be to find something in $\Omega$ that is also a group. This is not what we do. Instead, we find an orbit whose stabilizer is a Sylow $p$-subgroup.
+
+ \item We instead prove something stronger: if $Q \leq G$ is a $p$-subgroup (i.e.\ $|Q| = p^b$, for $b$ not necessarily $a$), and $P \leq G$ is a Sylow $p$-subgroup, then there is a $g \in G$ such that $g^{-1} Qg \leq P$. Applying this to the case where $Q$ is another Sylow $p$-subgroup says there is a $g$ such that $g^{-1}Qg \leq P$, but since $g^{-1}Qg$ has the same size as $P$, they must be equal.
+
+ We let $Q$ act on the set $G/P$ of left cosets of $P$ via
+ \[
+ q * gP = qgP.
+ \]
+ We know the orbits of this action have size dividing $|Q|$, so each is either $1$ or divisible by $p$. But they can't all be divisible by $p$, since $|G/P|$ is coprime to $p$. So at least one of them has size $1$, say $\{gP\}$. In other words, for every $q \in Q$, we have $qgP = gP$. This means $g^{-1}qg \in P$. This holds for every element $q \in Q$. So we have found a $g$ such that $g^{-1}Qg \leq P$.
+
+ \item Finally, we need to show that $n_p \equiv 1 \pmod p$ and $n_p \mid |G|$, where $n_p = |\Syl_p(G)|$.
+
+ The second part is easier --- by Sylow's second theorem, the action of $G$ on $\Syl_p(G)$ by conjugation has one orbit. By the orbit-stabilizer theorem, the size of the orbit, which is $|\Syl_p(G)| = n_p$, divides $|G|$. This proves the second part.
+
+ For the first part, let $P \in \Syl_p(G)$. Consider the action by conjugation of $P$ on $\Syl_p(G)$. Again by the orbit-stabilizer theorem, the orbits each have size $1$ or size divisible by $p$. But we know there is one orbit of size $1$, namely $\{P\}$ itself. To show $n_p = |\Syl_p(G)| \equiv 1\pmod p$, it is enough to show there are no other orbits of size $1$.
+
+ Suppose $\{Q\}$ is an orbit of size $1$. This means for every $x \in P$, we get
+ \[
+ x^{-1} Qx = Q.
+ \]
+ In other words, $P \leq N_G(Q)$. Now $N_G(Q)$ is itself a group, and we can look at its Sylow $p$-subgroups. We know $Q \leq N_G(Q) \leq G$. So $p^a \mid |N_G(Q)| \mid p^a m$. So $p^a$ is the biggest power of $p$ that divides $|N_G(Q)|$. So $Q$ is a Sylow $p$-subgroup of $N_G(Q)$.
+
+ Now we know $P \leq N_G(Q)$ is \emph{also} a Sylow $p$-subgroup of $N_G(Q)$. By Sylow's second theorem, they must be conjugate in $N_G(Q)$. But conjugating anything in $Q$ by something in $N_G(Q)$ does nothing, by definition of $N_G(Q)$. So we must have $P = Q$. So the only orbit of size $1$ is $\{P\}$ itself. So done.\qedhere
+ \end{enumerate}
+\end{proof}
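The key counting fact in the proof of the first theorem, that $p \nmid \binom{p^a m}{p^a}$ when $p \nmid m$, is easy to spot-check numerically. A small sketch (illustrative only):

```python
from math import comb

# Check that p does not divide |Omega| = C(p^a * m, p^a)
# for a few choices with p not dividing m.
for p, a, m in [(2, 2, 3), (3, 1, 4), (5, 2, 2), (7, 1, 10)]:
    assert comb(p**a * m, p**a) % p != 0
print("all checks pass")
```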
+
+This is all the theory of groups we've got. In the remaining time, we will look at some interesting examples of groups.
+\begin{eg}
+ Let $G = \GL_n(\Z/p)$, i.e.\ the set of invertible $n\times n$ matrices with entries in $\Z/p$, the integers modulo $p$. Here $p$ is obviously a prime. When we do rings later, we will study this properly.
+
+ First of all, we would like to know the size of this group. A matrix $A \in \GL_n(\Z/p)$ is the same as $n$ linearly independent vectors in the vector space $(\Z/p)^n$. We can just work out how many there are. This is not too difficult, when you know how.
+
+ We can pick the first vector, which can be anything except zero. So there are $p^n - 1$ ways of choosing the first vector. Next, we need to pick the second vector. This can be anything that is not in the span of the first vector, and this rules out $p$ possibilities. So there are $p^n - p$ ways of choosing the second vector. Continuing this chain of thought, we have
+ \[
+ |\GL_n(\Z/p)| = (p^n - 1)(p^n - p)(p^n - p^2) \cdots (p^n - p^{n - 1}).
+ \]
+ What is a Sylow $p$-subgroup of $\GL_n(\Z/p)$? We first work out what the order of this is. We can factorize that as
+ \[
+ |\GL_n(\Z/p)|= (1 \cdot p \cdot p^2 \cdot \cdots \cdot p^{n - 1}) ((p^n - 1)(p^{n - 1} - 1) \cdots (p - 1)).
+ \]
+ So the largest power of $p$ that divides $|\GL_n(\Z/p)|$ is $p^{\binom{n}{2}}$. Let's find a subgroup of size $p^{\binom{n}{2}}$. We consider matrices of the form
+ \[
+ U = \left\{
+ \begin{pmatrix}
+ 1 & * & * & \cdots & *\\
+ 0 & 1 & * & \cdots & *\\
+ 0 & 0 & 1 & \cdots & *\\
+ \vdots & \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & 0 & \cdots & 1
+ \end{pmatrix} \in \GL_n(\Z/p)
+ \right\}.
+ \]
+ Then we know $|U| = p^{\binom{n}{2}}$ as each $*$ can be chosen to be anything in $\Z/p$, and there are $\binom{n}{2}$ $*$s.
+
+ Is the Sylow $p$-subgroup unique? No. We can take the lower triangular matrices and get another Sylow $p$-subgroup.
+\end{eg}
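For tiny $p$ the order formula can be confirmed by brute force. A sketch (our own code, feasible only for very small $p$):

```python
from itertools import product

# Brute-force count of invertible 2x2 matrices over Z/p, compared with
# the n = 2 case of the formula: (p^2 - 1)(p^2 - p).
def gl2_order(p):
    count = 0
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p != 0:  # over a field, invertible iff det != 0
            count += 1
    return count

for p in [2, 3, 5]:
    assert gl2_order(p) == (p**2 - 1) * (p**2 - p)
print(gl2_order(3))  # 48
```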
+
+\begin{eg}
+ Let's be less ambitious and consider $\GL_2(\Z/p)$. So
+ \[
+ |G| = p(p^2 - 1)(p - 1) = p(p - 1)^2 (p + 1).
+ \]
+ Let $\ell$ be another prime number such that $\ell \mid p - 1$. Suppose the largest power of $\ell$ that divides $|G|$ is $\ell^2$. Can we (explicitly) find a Sylow $\ell$-subgroup?
+
+ First, we want to find an element of order $\ell$. How is $p - 1$ related to $p$ (apart from the obvious way)? We know that
+ \[
+ (\Z/p)^\times = \{x \in \Z/p:(\exists y)\, xy \equiv 1 \pmod p\} \cong C_{p - 1}.
+ \]
+ So as $\ell\mid p - 1$, there is a subgroup $C_\ell \leq C_{p - 1} \cong (\Z/p)^\times$. Then we immediately know where to find a subgroup of order $\ell^2$: we have
+ \[
+ C_\ell \times C_\ell \leq (\Z/p)^\times \times (\Z/p)^\times \leq \GL_2(\Z/p),
+ \]
+ where the final inclusion is the diagonal matrices, identifying
+ \[
+ (a, b) \leftrightarrow
+ \begin{pmatrix}
+ a & 0\\
+ 0 & b
+ \end{pmatrix}.
+ \]
+ So this is a Sylow $\ell$-subgroup.
+\end{eg}
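We can make this concrete for a specific pair of primes. A sketch (our own choice of $p = 7$, $\ell = 3$; here $\ell \mid p - 1$ and $\ell^2$ is the largest power of $\ell$ dividing $|G| = 2016 = 2^5 \cdot 3^2 \cdot 7$):

```python
# The cyclic group (Z/7)^x has a unique subgroup of order 3, namely the
# cube roots of 1 mod 7. Pairs of such roots on the diagonal give the
# order-9 subgroup described above.
p, l = 7, 3
roots = [x for x in range(1, p) if pow(x, l, p) == 1]
print(roots)       # [1, 2, 4]
assert len(roots) == l
diag = [(a, b) for a in roots for b in roots]  # diagonal Sylow l-subgroup
assert len(diag) == l * l
```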
+
+\section{Rings}
+\subsection{Definitions and examples}
+We now move on to something completely different --- rings. In a ring, we are allowed to add, subtract, multiply but not divide. Our canonical example of a ring would be $\Z$, the integers, as studied in IA Numbers and Sets.
+
+In this course, we are only going to consider rings in which multiplication is commutative, since these rings behave like ``number systems'', where we can study number theory. However, some of these rings do not behave like $\Z$. Thus one major goal of this part is to understand the different properties of $\Z$, whether they are present in arbitrary rings, and how different properties relate to one another.
+
+\begin{defi}[Ring]\index{ring}
+ A \emph{ring} is a quintuple $(R, +, \ph, 0_R, 1_R)$ where $0_R, 1_R \in R$, and $+, \ph: R \times R \to R$ are binary operations such that
+ \begin{enumerate}
+ \item $(R, +, 0_R)$ is an abelian group.
+ \item The operation $\ph: R \times R \to R$ satisfies associativity, i.e.
+ \[
+ a\cdot (b\cdot c) = (a \cdot b)\cdot c,
+ \]
+ and identity:
+ \[
+ 1_R \cdot r = r \cdot 1_R = r.
+ \]
+ \item Multiplication distributes over addition, i.e.
+ \begin{align*}
+ r_1 \cdot (r_2 + r_3) &= (r_1 \cdot r_2) + (r_1 \cdot r_3)\\
+ (r_1 + r_2) \cdot r_3 &= (r_1 \cdot r_3) + (r_2 \cdot r_3).
+ \end{align*}
+ \end{enumerate}
+\end{defi}
+
+\begin{notation}
+ If $R$ is a ring and $r \in R$, we write $-r$ for the inverse to $r$ in $(R, +, 0_R)$. This satisfies $r + (-r) = 0_R$. We write $r - s$ to mean $r + (-s)$ etc.
+\end{notation}
+Some people don't insist on the existence of the multiplicative identity, but we will for the purposes of this course.
+
+Since we can add and multiply two elements, by induction, we can add and multiply any finite number of elements. However, the notions of infinite sum and product are undefined. It doesn't make sense to ask if an infinite sum converges.
+
+\begin{defi}[Commutative ring]\index{commutative ring}
+ We say a ring $R$ is \emph{commutative} if $a \cdot b = b \cdot a$ for all $a, b \in R$.
+\end{defi}
+From now onwards, all rings in this course are going to be commutative.
+
+Just as we have groups and subgroups, we also have subrings.
+\begin{defi}[Subring]\index{subring}
+ Let $(R, +, \ph, 0_R, 1_R)$ be a ring, and $S \subseteq R$ be a subset. We say $S$ is a \emph{subring} of $R$ if $0_R, 1_R \in S$, and the operations $+, \ph$ make $S$ into a ring in its own right. In this case we write $S \leq R$.
+\end{defi}
+
+\begin{eg}
+ The familiar number systems are all rings: we have $\Z \leq \Q \leq \R \leq \C$, under the usual $0, 1, +, \ph$.
+\end{eg}
+
+\begin{eg}
+ The set $\Z[i] = \{a + ib: a, b \in \Z\} \leq \C$ is the \emph{Gaussian integers}, which is a ring.
+
+ We also have the ring $\Q[\sqrt{2}] = \{a + b \sqrt{2} \in \R: a, b \in \Q\} \leq \R$.
+\end{eg}
+We will use the square brackets notation quite frequently. It should be clear what it should mean, and we will define it properly later.
+
+In general, elements in a ring do not have inverses. This is not a bad thing. This is what makes rings interesting. For example, the division algorithm would be rather contentless if everything in $\Z$ had an inverse. Fortunately, $\Z$ only has two invertible elements --- $1$ and $-1$. We call these \emph{units}.
+
+\begin{defi}[Unit]\index{unit}
+ An element $u \in R$ is a \emph{unit} if there is another element $v \in R$ such that $u \cdot v = 1_R$.
+\end{defi}
+It is important that this depends on $R$, not just on $u$. For example, $2 \in \Z$ is not a unit, but $2 \in \Q$ is a unit (since $\frac{1}{2}$ is an inverse).
+
+A special case is when (almost) everything is a unit.
+\begin{defi}[Field]\index{field}
+ A \emph{field} is a non-zero ring where every $u \in R$ with $u \not= 0_R$ is a unit.
+\end{defi}
+We will later show that $0_R$ cannot be a unit except in a very degenerate case.
+
+\begin{eg}
+ $\Z$ is not a field, but $\Q, \R, \C$ are all fields.
+
+ Similarly, $\Z[i]$ is not a field, while $\Q[\sqrt{2}]$ is.
+\end{eg}
+
+\begin{eg}
+ Let $R$ be a ring. Then $0_R + 0_R = 0_R$, since this is true in the group $(R, +, 0_R)$. Then for any $r \in R$, we get
+ \[
+ r\cdot (0_R + 0_R) = r\cdot 0_R.
+ \]
+ We now use the fact that multiplication distributes over addition. So
+ \[
+ r \cdot 0_R + r \cdot 0_R = r \cdot 0_R.
+ \]
+ Adding $(-r \cdot 0_R)$ to both sides give
+ \[
+ r \cdot 0_R = 0_R.
+ \]
+ This is true for any element $r \in R$. From this, it follows that if $R \not= \{0\}$, then $1_R \not= 0_R$ --- if they were equal, then take $r \not= 0_R$. So
+ \[
+ r = r \cdot 1_R = r \cdot 0_R = 0_R,
+ \]
+ which is a contradiction.
+\end{eg}
+Note, however, that $\{0\}$ forms a ring (with the only possible operations and identities), the zero ring, albeit a boring one. However, this is often a counterexample to many things.
+
+\begin{defi}[Product of rings]\index{product of rings}
+ Let $R, S$ be rings. Then the \emph{product} $R \times S$ is a ring via
+ \[
+ (r, s) + (r', s') = (r + r', s + s'),\quad (r, s) \cdot (r', s') = (r\cdot r', s \cdot s').
+ \]
+ The zero is $(0_R, 0_S)$ and the one is $(1_R, 1_S)$.
+
+ We can (but won't) check that these indeed are rings.
+\end{defi}
+
+\begin{defi}[Polynomial]\index{polynomial}
+ Let $R$ be a ring. Then a \emph{polynomial} with coefficients in $R$ is an expression
+ \[
+ f = a_0 + a_1X + a_2 X^2 + \cdots + a_n X^n,
+ \]
+ with $a_i \in R$. The symbols $X^i$ are formal symbols.
+\end{defi}
+We identify $f$ and $f + 0_R \cdot X^{n + 1}$ as the same things.
+
+\begin{defi}[Degree of polynomial]\index{degree}
+ The \emph{degree} of a polynomial $f$ is the largest $m$ such that $a_m \not= 0$.
+\end{defi}
+
+\begin{defi}[Monic polynomial]\index{monic polynomial}
+ Let $f$ have degree $m$. If $a_m = 1$, then $f$ is called \emph{monic}.
+\end{defi}
+
+\begin{defi}[Polynomial ring]\index{polynomial ring}
+ We write $R[X]$ for the set of all polynomials with coefficients in $R$. The operations are performed in the obvious way, i.e.\ if $f = a_0 + a_1X + \cdots + a_n X^n$ and $g = b_0 + b_1X + \cdots + b_k X^k$ are polynomials, then
+ \[
+ f + g = \sum_{i = 0}^{\max\{n, k\}} (a_i + b_i) X^i,
+ \]
+ and
+ \[
+ f\cdot g = \sum_{i = 0}^{n + k} \left(\sum_{j = 0}^i a_j b_{i - j}\right) X^i.
+ \]
+ We identify $R$ with the constant polynomials, i.e.\ polynomials $\sum a_i X^i$ with $a_i = 0$ for $i > 0$. In particular, $0_R \in R$ and $1_R \in R$ are the zero and one of $R[X]$.
+\end{defi}
+This is in fact a ring.
+
+Note that a polynomial is just a sequence of numbers, interpreted as the coefficients of some formal symbols. While it does indeed induce a function in the obvious way, we shall not identify the polynomial with the function given by it, since different polynomials can give rise to the same function.
+
+For example, in $\Z/2\Z [X]$, $f = X^2 + X$ is not the zero polynomial, since its coefficients are not zero. However, $f(0) = 0$ and $f(1) = 0$. As a function, this is identically zero. So $f \not= 0$ as a polynomial but $f = 0$ as a function.
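This distinction is easy to see computationally. A tiny sketch (our own code):

```python
# f = X^2 + X in (Z/2)[X] has non-zero coefficients, yet the function it
# induces on Z/2 is identically zero.
coeffs = [0, 1, 1]                       # a_0, a_1, a_2 for f = X + X^2
f = lambda x: (x**2 + x) % 2
assert any(c != 0 for c in coeffs)       # non-zero as a polynomial
assert all(f(x) == 0 for x in (0, 1))    # zero as a function
print("non-zero polynomial, zero function")
```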
+
+\begin{defi}[Power series]\index{power series}
+ We write $R[[X]]$ for the ring of power series on $R$, i.e.
+ \[
+ f = a_0 + a_1 X + a_2 X^2 + \cdots,
+ \]
+ where each $a_i \in R$. This has addition and multiplication the same as for polynomials, but without upper limits.
+\end{defi}
+A power series is very much not a function. We don't talk about whether the sum converges or not, because it is not a sum.
+
+\begin{eg}
+ Is $1 - X \in R[X]$ a unit? For every $g = a_0 + \cdots + a_n X^n$ (with $a_n \not= 0$), we get
+ \[
+ (1 - X)g = \text{stuff} + \cdots - a_n X^{n + 1},
+ \]
+ which is not $1$. So $g$ cannot be the inverse of $(1 - X)$. So $(1 - X)$ is not a unit.
+
+ However, $1 - X \in R[[X]]$ \emph{is} a unit, since
+ \[
+ (1 - X)(1 + X + X^2 + X^3 + \cdots) = 1.
+ \]
+\end{eg}
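We can check the power series identity up to any finite degree, since each coefficient of the product is a finite sum. A sketch (our own truncated-multiplication helper):

```python
# Truncated check that 1 + X + X^2 + ... inverts 1 - X in R[[X]]: the
# product of the truncations is 1 - X^(k+1), so every coefficient up to
# degree k agrees with the power series 1.
def mult(f, g, k):
    """Coefficients of f*g up to degree k (coefficient lists, low degree first)."""
    return [sum(f[j] * g[i - j] for j in range(i + 1)
                if j < len(f) and i - j < len(g)) for i in range(k + 1)]

k = 6
one_minus_X = [1, -1]
geom = [1] * (k + 1)               # 1 + X + ... + X^k
print(mult(one_minus_X, geom, k))  # [1, 0, 0, 0, 0, 0, 0]
```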
+
+\begin{defi}[Laurent polynomials]\index{Laurent polynomial}
+ The \emph{Laurent polynomials} on $R$ is the set $R[X, X^{-1}]$, i.e.\ each element is of the form
+ \[
+ f = \sum_{i \in \Z} a_i X^i
+ \]
+ where $a_i \in R$ and only finitely many $a_i$ are non-zero. The operations are the obvious ones.
+\end{defi}
+We can also think of Laurent series, but we have to be careful. We allow infinitely many positive coefficients, but only finitely many negative ones. Or else, in the formula for multiplication, we will have an infinite sum, which is undefined.
+
+\begin{eg}
+ Let $X$ be a set, and $R$ be a ring. Then the set of all functions on $X$, i.e.\ functions $f: X \to R$, is a ring with ring operations given by
+ \[
+ (f + g)(x) = f(x) + g(x),\quad (f\cdot g)(x) = f(x) \cdot g(x).
+ \]
+ Here zero is the constant function $0$ and one is the constant function $1$.
+
+ Usually, we don't want to consider all functions $X \to R$. Instead, we look at some subrings of this. For example, we can consider the ring of all continuous functions $\R \to \R$. This contains, for example, the polynomial functions, which is just $\R[X]$ (since in $\R$, polynomials \emph{are} functions).
+\end{eg}
+
+\subsection{Homomorphisms, ideals, quotients and isomorphisms}
+Just like groups, we will come up with analogues of homomorphisms, normal subgroups (which are now known as ideals), and quotients.
+
+\begin{defi}[Homomorphism of rings]\index{homomorphism}
+ Let $R, S$ be rings. A function $\phi: R \to S$ is a \emph{ring homomorphism} if it preserves everything we can think of, i.e.
+ \begin{enumerate}
+ \item $\phi(r_1 + r_2) = \phi(r_1) + \phi(r_2)$,
+ \item $\phi(0_R) = 0_S$,
+ \item $\phi(r_1 \cdot r_2) = \phi(r_1) \cdot \phi(r_2)$,
+ \item $\phi(1_R) = 1_S$.
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[Isomorphism of rings]\index{isomorphism}
+ If a homomorphism $\phi: R \to S$ is a bijection, we call it an \emph{isomorphism}.
+\end{defi}
+
+\begin{defi}[Kernel]\index{kernel}
+ The \emph{kernel} of a homomorphism $\phi: R \to S$ is
+ \[
+ \ker (\phi) = \{r \in R: \phi(r) = 0_S\}.
+ \]
+\end{defi}
+
+\begin{defi}[Image]\index{image}
+ The \emph{image} of $\phi: R \to S$ is
+ \[
+ \im(\phi) = \{s \in S: s = \phi(r) \text{ for some }r \in R\}.
+ \]
+\end{defi}
+
+\begin{lemma}
+ A homomorphism $\phi: R \to S$ is injective if and only if $\ker \phi = \{0_R\}$.
+\end{lemma}
+
+\begin{proof}
+ A ring homomorphism is in particular a group homomorphism $\phi: (R, +, 0_R) \to (S, +, 0_S)$ of abelian groups. So this follows from the case of groups.
+\end{proof}
+
+In the group scenario, we had groups, subgroups and \emph{normal} subgroups, which are special subgroups. Here, we have a special kind of subsets of a ring that act like normal subgroups, known as \emph{ideals}.
+
+\begin{defi}[Ideal]\index{ideal}
+ A subset $I \subseteq R$ is an \emph{ideal}, written $I \lhd R$, if
+ \begin{enumerate}
+ \item It is an additive subgroup of $(R, +, 0_R)$, i.e.\ it is closed under addition and additive inverses.\hfill (additive closure)
+ \item If $a \in I$ and $b \in R$, then $a \cdot b \in I$. \hfill (strong closure)
+ \end{enumerate}
+ We say $I$ is a proper ideal if $I \not= R$.
+\end{defi}
+Note that the multiplicative closure is stronger than what we require for subrings --- for subrings, it has to be closed under multiplication by its own elements; for ideals, it has to be closed under multiplication by everything in the world. This is similar to how normal subgroups not only have to be closed under internal multiplication, but also conjugation by external elements.
+
+\begin{lemma}
+ If $\phi: R \to S$ is a homomorphism, then $\ker(\phi)\lhd R$.
+\end{lemma}
+
+\begin{proof}
+ Since $\phi: (R, +, 0_R) \to (S, +, 0_S)$ is a group homomorphism, the kernel is a subgroup of $(R, +, 0_R)$.
+
+ For the second part, let $a \in \ker(\phi)$, $b \in R$. We need to show that their product is in the kernel. We have
+ \[
+ \phi(a\cdot b) = \phi(a) \cdot \phi(b) = 0 \cdot \phi(b) = 0.
+ \]
+ So $a \cdot b \in \ker(\phi)$.
+\end{proof}
+
+\begin{eg}
+ Suppose $I \lhd R$ is an ideal, and $1_R \in I$. Then for any $r \in R$, the axioms entail $1_R \cdot r \in I$. But $1_R \cdot r = r$. So if $1_R \in I$, then $I = R$.
+
+ In other words, every proper ideal does not contain $1$. In particular, every proper ideal is not a subring, since a subring must contain $1$.
+\end{eg}
+We are starting to diverge from groups. In groups, a normal subgroup is a subgroup, but here an ideal is not a subring.
+
+\begin{eg}
+ We can generalize the above a bit. Suppose $I \lhd R$ and $u \in I$ is a unit, i.e.\ there is some $v \in R$ such that $u \cdot v = 1_R$. Then by strong closure, $1_R = u \cdot v\in I$. So $I = R$.
+
+ Hence proper ideals are not allowed to contain any unit at all, not just $1_R$.
+\end{eg}
+
+\begin{eg}
+ Consider the ring $\Z$ of integers. Then every ideal of $\Z$ is of the form
+ \[
+ n\Z = \{\cdots, -2n, -n, 0, n, 2n, \cdots\} \subseteq \Z.
+ \]
+ It is easy to see this is indeed an ideal.
+
+ To show these are all the ideals, let $I \lhd \Z$. If $I = \{0\}$, then $I = 0\Z$. Otherwise, let $n \in \N$ be the smallest positive element of $I$. We want to show in fact $I = n\Z$. Certainly $n\Z \subseteq I$ by strong closure.
+
+ Now let $m \in I$. By the Euclidean algorithm, we can write
+ \[
+ m = q \cdot n + r
+ \]
+ with $0 \leq r < n$. Now $n \in I$, so by strong closure $q \cdot n \in I$; since also $m \in I$, we get $r = m - q\cdot n \in I$. As $n$ is the smallest positive element of $I$, and $r < n$, we must have $r = 0$. So $m = q\cdot n \in n\Z$. So $I \subseteq n\Z$. So $I = n\Z$.
+\end{eg}
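The same Euclidean argument shows that an ideal of $\Z$ generated by two elements is generated by their gcd, e.g.\ $(12, 18) = (6)$. A brute-force sketch over a finite window (our own code):

```python
from math import gcd

# Every Z-linear combination 12r + 18s is a multiple of gcd(12, 18) = 6,
# and 6 itself is such a combination, so (12, 18) = (6) = 6Z.
a, b = 12, 18
g = gcd(a, b)                     # 6
combos = {a * r + b * s for r in range(-20, 21) for s in range(-20, 21)}
multiples = {g * k for k in range(-200, 201)}
assert combos <= multiples        # every combination is a multiple of 6
assert g in combos                # e.g. 6 = 12*(-1) + 18*1
print(g)  # 6
```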
+The key to proving this was that we can perform the Euclidean algorithm on $\Z$. Thus, for any ring $R$ in which we can ``do Euclidean algorithm'', every ideal is of the form $aR = \{a \cdot r: r \in R\}$ for some $a \in R$. We will make this notion precise later.
+
+\begin{defi}[Generator of ideal]\index{generator of ideal}
+ For an element $a \in R$, we write
+ \[
+ (a) = aR = \{a \cdot r: r \in R\} \lhd R.
+ \]
+ This is the \emph{ideal generated by $a$}.
+
+ In general, let $a_1, a_2, \cdots, a_k \in R$, we write
+ \[
+ (a_1, a_2, \cdots, a_k) = \{ a_1 r_1 + \cdots + a_k r_k : r_1, \cdots, r_k \in R\}.
+ \]
+ This is the \emph{ideal generated by $a_1, \cdots, a_k$}.
+\end{defi}
+We can also have ideals generated by infinitely many objects, but we have to be careful, since we cannot have infinite sums.
+\begin{defi}[Generator of ideal]\index{generator of ideal}
+ For $A \subseteq R$ a subset, the \emph{ideal generated by $A$} is
+ \[
+ (A) = \left\{\sum_{a \in A} r_a \cdot a: r_a \in R, \text{ only finitely-many non-zero}\right\}.
+ \]
+\end{defi}
+
+These ideals are rather nice ideals, since they are easy to describe, and often have some nice properties.
+\begin{defi}[Principal ideal]\index{principal ideal}
+ An ideal $I$ is a \emph{principal ideal} if $I = (a)$ for some $a \in R$.
+\end{defi}
+So what we have just shown for $\Z$ is that all ideals are principal. Not all rings are like this. These are special types of rings, which we will study more in depth later.
+
+\begin{eg}
+ Consider the following subset:
+ \[
+ \{f \in \R[X]: \text{ the constant coefficient of }f \text{ is }0\}.
+ \]
+ This is an ideal, as we can check manually (alternatively, it is the kernel of the ``evaluate at $0$'' homomorphism). It turns out this is a principal ideal. In fact, it is $(X)$.
+\end{eg}
+
+We have said ideals are like normal subgroups. The key idea is that we can divide by ideals.
+
+\begin{defi}[Quotient ring]\index{quotient ring}
+ Let $I \lhd R$. The \emph{quotient ring} $R/I$ consists of the (additive) cosets $r + I$ with the zero and one as $0_R + I$ and $1_R + I$, and operations
+ \begin{align*}
+ (r_1 + I) + (r_2 + I) &= (r_1 + r_2) + I\\
+ (r_1 + I) \cdot (r_2 + I) &= r_1r_2 + I.
+ \end{align*}
+\end{defi}
+
+\begin{prop}
+ The quotient ring is a ring, and the function
+ \begin{align*}
+ R &\to R/I\\
+ r &\mapsto r + I
+ \end{align*}
+ is a ring homomorphism.
+\end{prop}
+This is true, because we defined ideals to be those things that can be quotiented by. So we just have to check we made the right definition.
+
+Just as we could have come up with the definition of a normal subgroup by requiring operations on the cosets to be well-defined, we could have come up with the definition of an ideal by requiring the multiplication of cosets to be well-defined, and we would end up with the strong closure property.
+
+\begin{proof}
+ We know the group $(R/I, +, 0_{R/I})$ is well-defined, since $I$ is a (normal) subgroup of $R$. So we only have to check multiplication is well-defined.
+
+ Suppose $r_1 + I = r_1' + I$ and $r_2 + I = r_2' + I$. Then $r_1' - r_1 = a_1 \in I$ and $r_2' - r_2 = a_2 \in I$. So
+ \[
+ r_1' r_2' = (r_1 + a_1) (r_2 + a_2) = r_1 r_2 + r_1a_2 + r_2a_1 + a_1a_2.
+ \]
+ By the strong closure property, the last three objects are in $I$. So $r_1' r_2' + I = r_1r_2 + I$.
+
+ It is easy to check that $0_R + I$ and $1_R + I$ are indeed the zero and one, and the function given is clearly a homomorphism.
+\end{proof}
+
+\begin{eg}
+ We have the ideals $n\Z \lhd \Z$. So we have the quotient rings $\Z / n\Z$. The elements are of the form $m + n\Z$, so they are just
+ \[
+ 0 + n\Z, 1 + n\Z, 2 + n\Z, \cdots, (n - 1) + n\Z.
+ \]
+ Addition and multiplication are just what we are used to --- addition and multiplication modulo $n$.
+\end{eg}
+
+Note that it is easier to come up with ideals than normal subgroups --- we can just pick up random elements, and then take the ideal generated by them.
+\begin{eg}
+ Consider $(X) \lhd \C[X]$. What is $\C[X]/(X)$? Elements are represented by
+ \[
+ a_0 + a_1 X + a_2 X^2 + \cdots + a_n X^n + (X).
+ \]
+ But everything but the first term is in $(X)$. So every such thing is equivalent to $a_0 + (X)$. It is not hard to convince yourself that this representation is unique. So in fact $\C[X]/(X) \cong \C$, with the bijection $a_0 + (X) \leftrightarrow a_0$.
+\end{eg}
+If we want to prove things like this, we have to convince ourselves this representation is unique. We can do that by hand here, but in general, we want to be able to do this properly.
+
+\begin{prop}[Euclidean algorithm for polynomials]\index{Euclidean algorithm for polynomials}
+ Let $\F$ be a field and $f, g \in \F[X]$ with $g \not= 0$. Then there are some $q, r \in \F[X]$ such that
+ \[
+ f = gq + r,
+ \]
+ with $\deg r < \deg g$.
+\end{prop}
+This is like the usual Euclidean algorithm, except that instead of the absolute value, we use the degree to measure how ``big'' the polynomial is.
+
+\begin{proof}
+ Let $\deg (f) = n$. So
+ \[
+ f = \sum_{i = 0}^n a_i X^i,
+ \]
+ and $a_n \not= 0$. Similarly, if $\deg g = m$, then
+ \[
+ g = \sum_{i = 0}^m b_i X^i,
+ \]
+ with $b_m \not= 0$. If $n < m$, we let $q = 0$ and $r = f$, and done.
+
+ Otherwise, suppose $n \geq m$, and proceed by induction on $n$.
+
+ We let
+ \[
+ f_1 = f - a_n b_m^{-1} X^{n - m} g.
+ \]
+ This is possible since $b_m \not= 0$, and $\F$ is a field. Then by construction, the coefficients of $X^n$ cancel out. So $\deg (f_1) < n$.
+
+ If $n = m$, then $\deg (f_1) < n = m$. So we can write
+ \[
+ f = (a_n b_m^{-1} X^{n - m})g + f_1,
+ \]
+ and $\deg(f_1) < \deg(f)$. So done. Otherwise, if $n > m$, then as $\deg(f_1) < n$, by induction, we can find $r_1, q_1$ such that
+ \[
+ f_1 = g q_1 + r_1,
+ \]
+ and $\deg (r_1) < \deg g = m$. Then
+ \[
+ f = a_nb_m^{-1} X^{n - m} g + q_1 g + r_1 = (a_n b_m^{-1}X^{n - m} + q_1) g + r_1.
+ \]
+ So done.
+\end{proof}
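The inductive step of the proof translates directly into an algorithm: keep subtracting a suitable multiple of $g$ to kill the leading term of the remainder. A sketch over $\F = \Q$ (our own representation: coefficient lists, lowest degree first; $g$ must be non-zero):

```python
from fractions import Fraction

# Division algorithm in Q[X], following the proof: repeatedly subtract
# (lead(f)/lead(g)) * X^(deg f - deg g) * g until the remainder has
# degree less than deg g.
def divmod_poly(f, g):
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while g and g[-1] == 0:                # normalize g's degree
        g.pop()
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while any(c != 0 for c in r):
        while r and r[-1] == 0:            # normalize r's degree
            r.pop()
        if len(r) < len(g):
            break                          # deg r < deg g: done
        shift = len(r) - len(g)
        coef = r[-1] / g[-1]               # needs g's leading coeff invertible
        q[shift] += coef
        for i, c in enumerate(g):          # subtract coef * X^shift * g
            r[i + shift] -= coef * c
    return q, r

# f = X^3 + 2X + 1, g = X^2 + 1 gives q = X, r = X + 1.
q, r = divmod_poly([1, 2, 0, 1], [1, 0, 1])
assert q == [0, 1] and r == [1, 1]
```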
+Now that we have a Euclidean algorithm for polynomials, we should be able to show that every ideal of $\F[X]$ is generated by one polynomial. We will not prove it specifically here, but later show that in \emph{general}, in every ring where the Euclidean algorithm is possible, all ideals are principal.
+
+We now look at some applications of the Euclidean algorithm.
+\begin{eg}
+ Consider $\R[X]$, and consider the principal ideal $(X^2 + 1) \lhd \R[X]$. We let $R = \R[X]/(X^2 + 1)$.
+
+ Elements of $R$ are polynomials
+ \[
+ \underbrace{a_0 + a_1X + a_2 X^2 + \cdots + a_n X^n}_{f} + (X^2 + 1).
+ \]
+ By the Euclidean algorithm, we have
+ \[
+ f = q(X^2 + 1) + r,
+ \]
+ with $\deg(r) < 2$, i.e.\ $r = b_0 + b_1 X$. Thus $f + (X^2 + 1) = r + (X^2 + 1)$. So every element of $\R[X]/(X^2 + 1)$ is representable as $a + bX$ for some $a, b \in \R$.
+
+ Is this representation unique? If $a + bX + (X^2 + 1) = a' + b' X + (X^2 + 1)$, then the difference $(a - a') + (b - b')X \in (X^2 + 1)$. So it is $(X^2 + 1)q$ for some $q$. This is possible only if $q = 0$, since for non-zero $q$, we know $(X^2 + 1)q$ has degree at least $2$. So we must have $(a - a') + (b - b')X = 0$. So $a + bX = a' + b'X$. So the representation is unique.
+
+ What we've got is that every element in $R$ is of the form $a + bX$, and $X^2 + 1 = 0$, i.e.\ $X^2 = -1$. This sounds like the complex numbers, just that we are calling it $X$ instead of $i$.
+
+ To show this formally, we define the function
+ \begin{align*}
+ \phi: \R[X]/(X^2 + 1) &\to \C\\
+ a + bX + (X^2 + 1) & \mapsto a + bi.
+ \end{align*}
+ This is well-defined and a bijection. It is also clearly additive. So to prove this is an isomorphism, we have to show it is multiplicative. We check this manually. We have
+ \begin{align*}
+ &\phi((a + bX + (X^2 + 1))(c + dX + (X^2 + 1))) \\
+ ={}& \phi(ac + (ad + bc)X + bdX^2 + (X^2 + 1))\\
+ ={}& \phi((ac - bd) + (ad + bc)X + (X^2 + 1))\\
+ ={}& (ac - bd) + (ad + bc)i\\
+ ={}& (a + bi) (c + di)\\
+ ={}& \phi(a + bX + (X^2 + 1))\phi(c + dX + (X^2 + 1)).
+ \end{align*}
+ So this is indeed an isomorphism.
+\end{eg}
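The multiplication rule extracted in this computation, $(a + bX)(c + dX) = (ac - bd) + (ad + bc)X$ after reducing by $X^2 = -1$, can be compared against complex multiplication directly. A sketch (our own pair representation):

```python
# Multiplication of representatives a + bX in R[X]/(X^2 + 1), checked
# against complex multiplication under the bijection a + bX <-> a + bi.
def mult(x, y):
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)  # bd*X^2 becomes -bd via X^2 = -1

x, y = (1.0, 2.0), (3.0, -1.0)
prod = mult(x, y)
z = complex(*x) * complex(*y)
assert prod == (z.real, z.imag)
print(prod)  # (5.0, 5.0)
```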
+
+This is pretty tedious. Fortunately, we have some helpful results we can use, namely the isomorphism theorems. These are exactly analogous to those for groups.
+\begin{thm}[First isomorphism theorem]\index{first isomorphism theorem}
+ Let $\phi: R \to S$ be a ring homomorphism. Then $\ker (\phi) \lhd R$, and
+ \[
+ \frac{R}{\ker (\phi)} \cong \im(\phi) \leq S.
+ \]
+\end{thm}
+
+\begin{proof}
+ We have already seen $\ker(\phi) \lhd R$. Now define
+ \begin{align*}
+ \Phi: R/\ker(\phi) &\to \im(\phi)\\
+ r + \ker(\phi) &\mapsto \phi(r).
+ \end{align*}
+ This is well-defined, since if $r + \ker(\phi) = r' + \ker(\phi)$, then $r - r' \in \ker(\phi)$. So $\phi(r - r') = 0$. So $\phi(r) = \phi(r')$.
+
+ We don't have to check this is bijective and additive, since that comes for free from the (proof of the) isomorphism theorem of groups. So we just have to check it is multiplicative. To show $\Phi$ is multiplicative, we have
+ \begin{align*}
+ \Phi((r + \ker(\phi))(t + \ker(\phi))) &= \Phi(rt + \ker(\phi)) \\
+ &= \phi(rt) \\
+ &= \phi(r)\phi(t) \\
+ &= \Phi(r + \ker(\phi)) \Phi(t + \ker(\phi)).\qedhere
+ \end{align*}
+\end{proof}
+This is more-or-less the same proof as the one for groups, just that we had a few more things to check.
+
+Since there is the \emph{first} isomorphism theorem, we, obviously, have more coming.
+
+\begin{thm}[Second isomorphism theorem]\index{second isomorphism theorem}
+ Let $R \leq S$ and $J \lhd S$. Then $J \cap R \lhd R$, and
+ \[
+ \frac{R + J}{J} = \{r + J: r \in R\} \leq \frac{S}{J}
+ \]
+ is a subring, and
+ \[
+ \frac{R}{R \cap J} \cong \frac{R + J}{J}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Define the function
+ \begin{align*}
+ \phi: R &\to S/J\\
+ r &\mapsto r + J.
+ \end{align*}
+ Since this is the quotient map, it is a ring homomorphism. The kernel is
+ \[
+ \ker(\phi) = \{r \in R: r + J = 0,\text{ i.e.\ } r \in J\} = R \cap J.
+ \]
+ Then the image is
+ \[
+ \im(\phi) = \{r + J: r \in R\} = \frac{R + J}{J}.
+ \]
+ Then by the first isomorphism theorem, we know $R \cap J \lhd R$, and $\frac{R + J}{J} \leq S/J$, and
+ \[
+ \frac{R}{R \cap J} \cong \frac{R + J}{J}.\qedhere
+ \]
+\end{proof}
+
+Before we get to the third isomorphism theorem, recall we had the subgroup correspondence for groups. Analogously, for $I \lhd R$,
+\begin{align*}
+ \{\text{subrings of }R/I\} &\longleftrightarrow\{\text{subrings of }R\text{ which contain }I\}\\
+ L \leq \frac{R}{I} &\longrightarrow \{x \in R: x + I \in L\}\\
+ \frac{S}{I} \leq \frac{R}{I} &\longleftarrow I \lhd S \leq R.
+\end{align*}
+This is exactly the same formula as for groups.
+
+For groups, we had a correspondence for normal subgroups. Here, we have a correspondence between ideals
+\[
+ \{\text{ideals of }R/I\} \longleftrightarrow\{\text{ideals of }R\text{ which contain }I\}
+\]
+It is important to note here that quotienting in groups and rings have different purposes. In groups, we take quotients so that we have simpler groups to work with. In rings, we often take quotients to get more interesting rings. For example, $\R[X]$ is quite boring, but $\R[X]/(X^2 + 1) \cong \C$ is more interesting. Thus this ideal correspondence allows us to occasionally get interesting ideals from boring ones.
+
+\begin{thm}[Third isomorphism theorem]\index{third isomorphism theorem}
+ Let $I \lhd R$ and $J \lhd R$, and $I \subseteq J$. Then $J / I \lhd R/I$ and
+ \[
+ \left(\frac{R}{I}\right) \big/ \left(\frac{J}{I}\right) \cong \frac{R}{J}.
+ \]
+\end{thm}
+
+\begin{proof}
+ We define the map
+ \begin{align*}
+ \phi: R/I &\to R/J\\
+ r + I &\mapsto r + J.
+ \end{align*}
+ This is well-defined and surjective by the group case. It is also a ring homomorphism, since multiplication in $R/I$ and $R/J$ is defined in ``the same'' way. The kernel is
+ \[
+ \ker(\phi) = \{r + I: r + J = 0,\text{ i.e.\ } r \in J\} = \frac{J}{I}.
+ \]
+ So the result follows from the first isomorphism theorem.
+\end{proof}
+
+Note that for any ring $R$, there is a unique ring homomorphism $\Z \to R$, given by
+\begin{align*}
+ \iota: \Z &\to R\\
+ n \geq 0 &\mapsto \underbrace{1_R + 1_R + \cdots + 1_R}_{n\text{ times}}\\
+ n \leq 0 &\mapsto -(\underbrace{1_R + 1_R + \cdots + 1_R}_{-n\text{ times}})
+\end{align*}
+Any ring homomorphism $\Z \to R$ must be given by this formula, since it must send $1$ to $1_R$, and we can check that this formula indeed defines a homomorphism using distributivity. So the ring homomorphism is unique. In fancy language, we say $\Z$ is the initial object in (the category of) rings.
+
+We then know $\ker(\iota) \lhd \Z$. Thus $\ker(\iota) = n\Z$ for some $n$.
+
+\begin{defi}[Characteristic of ring]\index{characteristic}
+ Let $R$ be a ring, and $\iota: \Z \to R$ be the unique such map. The \emph{characteristic} of $R$ is the unique non-negative $n$ such that $\ker (\iota) = n\Z$.
+\end{defi}
+
+\begin{eg}
+ The rings $\Z, \Q, \R, \C$ all have characteristic $0$. The ring $\Z / n\Z$ has characteristic $n$. In particular, every non-negative integer occurs as the characteristic of some ring.
+\end{eg}
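For a finite ring, the characteristic can be found by literally iterating the map $\iota$: keep adding copies of $1_R$ until the sum first hits $0$. A small Python sketch for $R = \Z/m\Z$ (the function name is ours):

```python
# Sketch: the characteristic of Z/mZ, found by iterating
# n |-> 1_R + ... + 1_R (n times) and spotting the first n > 0 in ker(iota).

def characteristic_mod(m):
    """Characteristic of Z/mZ: least n > 0 with n * 1 = 0 (always m here)."""
    total = 0
    for n in range(1, m + 1):
        total = (total + 1) % m  # add another copy of 1_R
        if total == 0:
            return n
    return 0  # unreachable for m >= 1; kept only for shape

assert characteristic_mod(6) == 6
```

For a characteristic-$0$ ring like $\Z$ no such search terminates, which is exactly the statement $\ker(\iota) = 0\Z$.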
+The notion of the characteristic will not be too useful in this course. However, fields of non-zero characteristic often provide interesting examples and counterexamples to some later theory.
+
+\subsection{Integral domains, field of fractions, maximal and prime ideals}
+Many rings are nothing like $\Z$. For example, in $\Z$, we know that if $a, b \not= 0$, then $ab \not= 0$. However, in, say, $\Z/6\Z$, we have $2, 3 \not= 0$, but $2 \cdot 3 = 0$. Also, $\Z$ has some nice properties, such as every ideal being principal, and every integer having an (essentially) unique factorization. We will now classify rings according to which of these properties they have.
+
+We start with the most fundamental property: that the product of two non-zero elements is non-zero. We will almost exclusively work with rings that satisfy this property.
+\begin{defi}[Integral domain]\index{integral domain}
+ A non-zero ring $R$ is an \emph{integral domain} if for all $a, b \in R$, if $a \cdot b = 0_R$, then $a = 0_R$ or $b = 0_R$.
+\end{defi}
+
+An element that violates this property is known as a \emph{zero divisor}.
+\begin{defi}[Zero divisor]\index{zero divisor}
+ An element $x \in R$ is a \emph{zero divisor} if $x \not = 0$ and there is a $y \not= 0$ such that $x \cdot y = 0 \in R$.
+\end{defi}
+In other words, a ring is an integral domain if it has no zero divisors.
+
+\begin{eg}
+ All fields are integral domains, since if $a \cdot b = 0$ and $b \not= 0$, then $a = a\cdot (b\cdot b^{-1}) = (a \cdot b)\cdot b^{-1} = 0$. Similarly, if $a\not= 0$, then $b = 0$.
+\end{eg}
+
+\begin{eg}
+ A subring of an integral domain is an integral domain, since a zero divisor in the small ring would also be a zero divisor in the big ring.
+\end{eg}
+
+\begin{eg}
+ Immediately, we know $\Z, \Q, \R, \C$ are integral domains, since $\C$ is a field, and the others are subrings of it. The Gaussian integers $\Z[i] \leq \C$ also form an integral domain.
+\end{eg}
+These are the nice rings we like in number theory, since there we can sensibly talk about things like factorization.
+
+It turns out there are no interesting finite integral domains.
+\begin{lemma}
+ Let $R$ be a finite ring which is an integral domain. Then $R$ is a field.
+\end{lemma}
+
+\begin{proof}
+ Let $a \in R$ be non-zero, and consider the map
+ \begin{align*}
+ a \cdot -: R &\to R\\
+ b &\mapsto a \cdot b
+ \end{align*}
+ This map is additive, so to show it is injective, it suffices to show its kernel is trivial. If $r \in \ker (a \cdot -)$, then $a \cdot r = 0$. So $r = 0$, since $R$ is an integral domain and $a \not= 0$. So the kernel is trivial.
+
+ Since $R$ is finite, $a\cdot -$ must also be surjective. In particular, there is an element $b \in R$ such that $a \cdot b = 1_R$. So $a$ has an inverse. Since $a$ was arbitrary, $R$ is a field.
+\end{proof}
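The injective-hence-surjective argument can be watched in action for a concrete finite integral domain. Below is a Python sketch for $\Z/7\Z$ (our own hypothetical example and names): the image of $b \mapsto a \cdot b$ has full size, so it contains $1$, and the preimage of $1$ is the inverse.

```python
# Sketch: in a finite integral domain the map b |-> a*b is injective, hence
# surjective, so 1 lies in its image.  Checked concretely in Z/7Z.

def inverse_via_surjectivity(a, p):
    """Find b with a*b = 1 in Z/pZ by scanning the image of b |-> a*b."""
    image = [(a * b) % p for b in range(p)]
    assert len(set(image)) == p       # injective, hence surjective
    return image.index(1)             # the preimage of 1 is the inverse

for a in range(1, 7):
    b = inverse_via_surjectivity(a, 7)
    assert (a * b) % 7 == 1
```

The same scan on $\Z/6\Z$ with $a = 2$ would fail the full-size assertion, since $2$ is a zero divisor there.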
+
+So far, we know fields are integral domains, and subrings of integral domains are integral domains. We have another good source of integral domains as follows:
+\begin{lemma}
+ Let $R$ be an integral domain. Then $R[X]$ is also an integral domain.
+\end{lemma}
+
+\begin{proof}
+ We need to show that the product of two non-zero elements is non-zero. Let $f, g\in R[X]$ be non-zero, say
+ \begin{align*}
+ f &= a_0 + a_1X + \cdots + a_n X^n \in R[X]\\
+ g &= b_0 + b_1X + \cdots + b_m X^m \in R[X],
+ \end{align*}
+ with $a_n, b_m \not= 0$. Then the coefficient of $X^{n + m}$ in $fg$ is $a_n b_m$. This is non-zero since $R$ is an integral domain. So $fg$ is non-zero. So $R[X]$ is an integral domain.
+\end{proof}
+So, for instance, $\Z[X]$ is an integral domain.
+
+We can also iterate this.
+
+\begin{notation}
+ Write $R[X, Y]$ for $(R[X])[Y]$, the polynomial ring of $R$ in two variables. In general, write $R[X_1, \cdots, X_n] = (\cdots((R[X_1])[X_2]) \cdots )[X_n]$.
+\end{notation}
+
+Then if $R$ is an integral domain, so is $R[X_1, \cdots, X_n]$.
+
+We now mimic the familiar construction of $\Q$ from $\Z$. For any integral domain $R$, we want to construct a field $F$ that consists of ``fractions'' of elements in $R$. Recall that a subring of any field is an integral domain. This construction gives the converse --- every integral domain is a subring of some field.
+
+\begin{defi}[Field of fractions]\index{field of fractions}
+ Let $R$ be an integral domain. A \emph{field of fractions} $F$ of $R$ is a field with the following properties
+ \begin{enumerate}
+ \item $R \leq F$
+ \item Every element of $F$ may be written as $a \cdot b^{-1}$ for some $a, b \in R$ with $b \not= 0$, where $b^{-1}$ means the multiplicative inverse of $b$ in $F$.
+ \end{enumerate}
+\end{defi}
+For example, $\Q$ is the field of fractions of $\Z$.
+
+\begin{thm}
+ Every integral domain has a field of fractions.
+\end{thm}
+
+\begin{proof}
+ The construction is exactly how we construct the rationals from the integers --- as equivalence classes of pairs of integers. We let
+ \[
+ S = \{(a, b) \in R \times R: b \not= 0\}.
+ \]
+ We think of $(a, b) \in S$ as $\frac{a}{b}$. We define the equivalence relation $\sim$ on $S$ by
+ \[
+ (a, b) \sim (c, d) \Leftrightarrow ad = bc.
+ \]
+ We need to show this is indeed an equivalence relation. Symmetry and reflexivity are obvious. To show transitivity, suppose
+ \[
+ (a, b) \sim (c, d),\quad (c, d) \sim (e, f),
+ \]
+ i.e.
+ \[
+ ad = bc,\quad cf = de.
+ \]
+ We multiply the first equation by $f$ and the second by $b$, to obtain
+ \[
+ adf = bcf,\quad bcf = bed.
+ \]
+ Rearranging, we get
+ \[
+ d(af - be) = 0.
+ \]
+ Since $d$ is in the denominator, $d \not= 0$. Since $R$ is an integral domain, we must have $af - be = 0$, i.e.\ $af = be$. So $(a, b) \sim (e, f)$. This is where being an integral domain is important.
+
+ Now let
+ \[
+ F = S/{\sim}
+ \]
+ be the set of equivalence classes. We now want to check this is indeed the field of fractions. We first want to show it is a field. We write $\frac{a}{b} = [(a, b)] \in F$, and define the operations by
+ \begin{align*}
+ \frac{a}{b} + \frac{c}{d} &= \frac{ad + bc}{bd}\\
+ \frac{a}{b}\cdot \frac{c}{d} &= \frac{ac}{bd}.
+ \end{align*}
+ These \emph{are} well-defined, and make $(F, +, \cdot, \frac{0}{1}, \frac{1}{1})$ into a ring. There are many things to check, but those are straightforward, and we will not waste time doing that here.
+
+ Finally, we need to show every non-zero element has an inverse. Let $\frac{a}{b} \not= 0_F$, i.e.\ $\frac{a}{b} \not= \frac{0}{1}$, or $a\cdot 1 \not= b \cdot 0 \in R$, i.e.\ $a \not= 0$. Then $\frac{b}{a} \in F$ is defined, and
+ \[
+ \frac{b}{a} \cdot \frac{a}{b} = \frac{ba}{ba} = 1_F.
+ \]
+ So $\frac{a}{b}$ has a multiplicative inverse. So $F$ is a field.
+
+ We now need to construct a subring of $F$ that is isomorphic to $R$. To do so, we define an injective ring homomorphism $\phi: R \to F$. This is given by
+ \begin{align*}
+ \phi: R &\to F\\
+ r &\mapsto \frac{r}{1}.
+ \end{align*}
+ This is a ring homomorphism, as one can check easily. The kernel is the set of all $r \in R$ such that $\frac{r}{1} = 0$, i.e.\ $r = 0$. So the kernel is trivial, and $\phi$ is injective. Then by the first isomorphism theorem, $R \cong \im(\phi) \subseteq F$.
+
+ Finally, we need to show everything is a quotient of two things in $R$. We have
+ \[
+ \frac{a}{b} = \frac{a}{1} \cdot \frac{1}{b} = \frac{a}{1}\cdot \left(\frac{b}{1}\right)^{-1},
+ \]
+ as required.
+\end{proof}
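The whole construction is small enough to sketch directly in Python for $R = \Z$ (the function names are ours; a full implementation would also quotient by $\sim$, which we skip and instead test via \texttt{equiv}):

```python
# Sketch of the field-of-fractions construction for R = Z: pairs (a, b)
# with b != 0, where (a, b) ~ (c, d) iff ad = bc, and operations as in
# the proof.  We work with representatives and compare using equiv.

def equiv(x, y):
    (a, b), (c, d) = x, y
    return a * d == b * c

def add(x, y):
    (a, b), (c, d) = x, y
    return (a * d + b * c, b * d)

def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c, b * d)

def inv(x):
    a, b = x
    assert a != 0          # only non-zero elements are invertible
    return (b, a)

# (b/a)(a/b) = ba/ba ~ 1/1, as in the proof
x = (3, 4)
assert equiv(mul(inv(x), x), (1, 1))
# + is well-defined on representatives: 1/2 ~ 2/4 give equivalent sums
assert equiv(add((1, 2), (1, 3)), add((2, 4), (1, 3)))
```

The integral-domain hypothesis is hiding in \texttt{equiv}: transitivity of $\sim$ is exactly the cancellation step $d(af - be) = 0 \Rightarrow af = be$ from the proof.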
+This gives us a very useful tool. Since this gives us a field from an integral domain, this allows us to use field techniques to study integral domains. Moreover, we can use this to construct new interesting fields from integral domains.
+
+\begin{eg}
+ Consider the integral domain $\C[X]$. Its field of fractions is the field of all rational functions $\frac{p(X)}{q(X)}$, where $p, q \in \C[X]$.
+\end{eg}
+To some people, it is a shame to think of rings as having elements. Instead, we should think of a ring as a god-like object, and the only things we should ever mention are its ideals. We should also not think of the ideals as containing elements, but just some abstract objects, and all we know is how ideals relate to one another, e.g.\ if one contains the other.
+
+Under this philosophy, we can think of a field as follows:
+\begin{lemma}
+ A (non-zero) ring $R$ is a field if and only if its only ideals are $\{0\}$ and $R$.
+\end{lemma}
+Note that we don't need elements to define the ideals $\{0\}$ and $R$. $\{0\}$ can be defined as the ideal that all other ideals contain, and $R$ is the ideal that contains all other ideals. Alternatively, we can reword this as ``$R$ is a field if and only if it has only two ideals'' to avoid mentioning explicit ideals.
+
+\begin{proof}
+ $(\Rightarrow)$ Let $R$ be a field and $I \lhd R$ with $I \not= \{0\}$. Pick $x \not= 0 \in I$. Then as $x$ is a unit, $1_R = x^{-1}\cdot x \in I$. So $I = R$.
+
+ $(\Leftarrow)$ Suppose $x \not= 0 \in R$. Then $(x)$ is an ideal of $R$. It is not $\{0\}$ since it contains $x$. So $(x) = R$. In other words $1_R \in (x)$. But $(x)$ is defined to be $\{x \cdot y: y \in R\}$. So there is some $u \in R$ such that $x\cdot u = 1_R$. So $x$ is a unit. Since $x$ was arbitrary, $R$ is a field.
+\end{proof}
+This is another reason why fields are special. They have the simplest possible ideal structure.
+
+This motivates the following definition:
+
+\begin{defi}[Maximal ideal]\index{maximal ideal}
+ An ideal $I$ of a ring $R$ is \emph{maximal} if $I \not= R$ and for any ideal $J$ with $I \leq J \leq R$, either $J = I$ or $J = R$.
+\end{defi}
+
+The relation with what we've done above is quite simple. There is an easy way to recognize if an ideal is maximal.
+
+\begin{lemma}
+ An ideal $I \lhd R$ is maximal if and only if $R/I$ is a field.
+\end{lemma}
+
+\begin{proof}
+ $R/I$ is a field if and only if $\{0\}$ and $R/I$ are the only ideals of $R/I$. By the ideal correspondence, this is equivalent to saying that $I$ and $R$ are the only ideals of $R$ which contain $I$, i.e.\ $I$ is maximal. So done.
+\end{proof}
+This is a nice result. This makes a correspondence between properties of ideals $I$ and properties of the quotient $R/I$. Here is another one:
+
+\begin{defi}[Prime ideal]\index{prime ideal}
+ An ideal $I$ of a ring $R$ is \emph{prime} if $I \not= R$ and whenever $a, b \in R$ are such that $a\cdot b \in I$, then $a \in I$ or $b \in I$.
+\end{defi}
+
+This is like the opposite of the property of being an ideal --- being an ideal means that if we have something in the ideal and something outside, the product is always in the ideal. This goes the other way: if the product of two arbitrary things is in the ideal, then one of them must have been in the ideal.
+
+\begin{eg}
+ A non-zero ideal $n\Z \lhd \Z$ is prime if and only if $n$ is a prime.
+
+ To show this, first suppose $n = p$ is a prime, and $a\cdot b \in p\Z$. So $p \mid a\cdot b$. So $p \mid a$ or $p \mid b$, i.e.\ $a \in p\Z$ or $b \in p\Z$.
+
+ For the other direction, suppose $n$ is not a prime. If $n = 1$, then $n\Z = \Z$ is not a proper ideal, hence not prime. Otherwise, $n = pq$ is a composite number ($p, q \not= 1$). Then $n \in n\Z$ but $p \not\in n\Z$ and $q \not\in n\Z$, since $0 < p, q < n$. So $n\Z$ is not prime.
+\end{eg}
+So instead of talking about prime numbers, we can talk about prime ideals instead, because ideals are better than elements.
+
+We prove a result similar to the above:
+\begin{lemma}
+ An ideal $I \lhd R$ is prime if and only if $R/I$ is an integral domain.
+\end{lemma}
+
+\begin{proof}
+ Let $I$ be prime. Let $a + I, b + I \in R/I$, and suppose $(a + I)(b + I) = 0_{R/I}$. By definition, $(a + I)(b + I) = ab + I$. So we must have $ab \in I$. As $I$ is prime, either $a \in I$ or $b \in I$. So $a + I = 0_{R/I}$ or $b + I = 0_{R/I}$. So $R/I$ is an integral domain.
+
+ Conversely, suppose $R/I$ is an integral domain. Let $a, b \in R$ be such that $ab \in I$. Then $(a + I)(b + I) = ab + I = 0_{R/I} \in R/I$. Since $R/I$ is an integral domain, either $a + I = 0_{R/I}$ or $b + I = 0_{R/I}$, i.e.\ $a \in I$ or $b\in I$. So $I$ is a prime ideal.
+\end{proof}
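For $R = \Z$ this lemma can be checked exhaustively for small moduli: $n\Z$ is prime exactly when $\Z/n\Z$ has no zero divisors, which for $n \geq 2$ happens exactly when $n$ is a prime number. A Python sketch (our own test harness, not from the course):

```python
# Sketch: for n >= 2, Z/nZ is an integral domain iff n is prime,
# matching "nZ is a prime ideal iff n is prime" via the lemma.

def is_integral_domain_mod(n):
    """Does Z/nZ (n >= 2) have no zero divisors?"""
    return all((a * b) % n != 0
               for a in range(1, n) for b in range(1, n))

def is_prime(n):
    return n >= 2 and all(n % d != 0 for d in range(2, n))

for n in range(2, 30):
    assert is_integral_domain_mod(n) == is_prime(n)
```

The brute-force check is quadratic in $n$, which is fine for an illustration of the correspondence, not a primality test one would actually use.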
+
+Prime ideals and maximal ideals are the main types of ideals we care about. Note that every field is an integral domain. So we immediately have the following result:
+\begin{prop}
+ Every maximal ideal is a prime ideal.
+\end{prop}
+
+\begin{proof}
+ $I \lhd R$ is maximal implies $R/I$ is a field implies $R/I$ is an integral domain implies $I$ is prime.
+\end{proof}
+The converse is not true. For example, $\{0\} \lhd \Z$ is prime but not maximal. Less stupidly, $(X) \lhd \Z[X, Y]$ is prime but not maximal (since $\Z[X, Y]/(X) \cong \Z[Y]$ is an integral domain but not a field). We can provide a more explicit proof of the proposition, which is essentially the same.
+
+\begin{proof}[Alternative proof]
+ Let $I$ be a maximal ideal, and suppose $a, b \not \in I$ but $ab \in I$. Then by maximality, $I + (a) = I + (b) = R = (1)$. So we can find some $p, q \in R$ and $n, m \in I$ such that $n + ap = m + bq = 1$. Then
+ \[
+ 1 = (n + ap)(m + bq) = nm + apm + bqn + ab pq \in I,
+ \]
+ since $n, m, ab \in I$. This is a contradiction.
+\end{proof}
+
+\begin{lemma}
+ Let $R$ be an integral domain. Then its characteristic is either $0$ or a prime number.
+\end{lemma}
+
+\begin{proof}
+ Consider the unique map $\phi: \Z \to R$, and $\ker(\phi) = n\Z$. Then $n$ is the characteristic of $R$ by definition.
+
+ By the first isomorphism theorem, $\Z/n\Z \cong \im(\phi) \leq R$. So $\Z/n\Z$ is an integral domain. So $n\Z \lhd \Z$ is a prime ideal. So $n = 0$ or $n$ is a prime number.
+\end{proof}
+
+\subsection{Factorization in integral domains}
+We now move on to tackle the problem of factorization in rings. For sanity, we suppose throughout the section that $R$ is an integral domain. We start by making loads of definitions.
+
+\begin{defi}[Unit]\index{unit}
+ An element $a \in R$ is a \emph{unit} if there is a $b \in R$ such that $ab = 1_R$. Equivalently, if the ideal $(a) = R$.
+\end{defi}
+
+\begin{defi}[Division]\index{division}
+ For elements $a, b \in R$, we say $a$ \emph{divides} $b$, written $a \mid b$, if there is a $c \in R$ such that $b = ac$. Equivalently, if $(b) \subseteq (a)$.
+\end{defi}
+
+\begin{defi}[Associates]\index{associates}
+ We say $a, b \in R$ are \emph{associates} if $a = bc$ for some unit $c$. Equivalently, if $(a) = (b)$. Equivalently, if $a \mid b$ and $b \mid a$.
+\end{defi}
+In the integers, this can only happen if $a$ and $b$ differ by a sign, but in more interesting rings, more interesting things can happen.
+
+When considering division in rings, we often consider two associates to be ``the same''. For example, in $\Z$, we can factorize $6$ as
+\[
+ 6 = 2 \cdot 3 = (-2) \cdot (-3),
+\]
+but this does not violate unique factorization, since $2$ and $-2$ are associates (and so are $3$ and $-3$), and we consider these two factorizations to be ``the same''.
+
+\begin{defi}[Irreducible]\index{irreducible}
+ We say $a \in R$ is \emph{irreducible} if $a \not = 0$, $a$ is not a unit, and if $a = xy$, then $x$ or $y$ is a unit.
+\end{defi}
+For integers, being irreducible is the same as being a prime number. However, ``prime'' means something different in general rings.
+
+\begin{defi}[Prime]\index{prime}
+ We say $a \in R$ is \emph{prime} if $a$ is non-zero, not a unit, and whenever $a \mid xy$, either $a \mid x$ or $a \mid y$.
+\end{defi}
+It is important to note all these properties depend on the ring, not just the element itself.
+\begin{eg}
+ $2 \in \Z$ is a prime, but $2 \in \Q$ is not (since it is a unit).
+
+ Similarly, the polynomial $2X \in \Q[X]$ is irreducible (since $2$ is a unit in $\Q[X]$), but $2X \in \Z[X]$ is not irreducible.
+\end{eg}
+
+We have two things called prime, so they had better be related.
+\begin{lemma}
+ A principal ideal $(r)$ is a prime ideal in $R$ if and only if $r = 0$ or $r$ is prime.
+\end{lemma}
+
+\begin{proof}
+ $(\Rightarrow)$ Let $(r)$ be a prime ideal. If $r = 0$, then done. Otherwise, as prime ideals are proper, i.e.\ not the whole ring, $r$ is not a unit. Now suppose $r \mid a \cdot b$. Then $a\cdot b \in (r)$. But $(r)$ is prime. So $a \in (r)$ or $b\in (r)$. So $r \mid a$ or $r \mid b$. So $r$ is prime.
+
+ $(\Leftarrow)$ If $r = 0$, then $(0) = \{0\} \lhd R$, which is prime since $R$ is an integral domain. Otherwise, let $r \not= 0$ be prime. Suppose $a \cdot b \in (r)$. This means $r \mid a\cdot b$. So $r\mid a$ or $r \mid b$. So $a \in (r)$ or $b\in (r)$. So $(r)$ is prime.
+\end{proof}
+
+Note that in $\Z$, prime numbers exactly match the irreducibles, but prime numbers are also prime (surprise!). In general, it is not true that irreducibles are the same as primes. However, one direction is always true.
+
+\begin{lemma}
+ Let $r \in R$ be prime. Then it is irreducible.
+\end{lemma}
+
+\begin{proof}
+ Let $r \in R$ be prime, and suppose $r = ab$. Since $r \mid r = ab$, and $r$ is prime, we must have $r \mid a$ or $r \mid b$. wlog, $r\mid a$. So $a = rc$ for some $c \in R$. So $r = ab = rcb$. Since we are in an integral domain and $r \not= 0$, we can cancel $r$ to get $1 = cb$. So $b$ is a unit.
+\end{proof}
+
+We now do a long interesting example.
+\begin{eg}
+ Let
+ \[
+ R = \Z[\sqrt{-5}] = \{a + b\sqrt{-5}: a, b\in \Z\} \leq \C.
+ \]
+ By definition, it is a subring of a field. So it is an integral domain. What are the units of the ring? There is a nice trick we can use, when things are lying inside $\C$. Consider the function
+ \[
+ N: R \to \Z_{\geq 0}
+ \]
+ given by
+ \[
+ N(a + b\sqrt{-5}) = a^2 + 5b^2.
+ \]
+ It is convenient to think of this as $z \mapsto z\bar{z} = |z|^2$. This is multiplicative: $N(z \cdot w) = N(z) N(w)$. Multiplicativity is a desirable property to have, since it immediately implies all units have norm $1$ --- if $r \cdot s = 1$, then $1 = N(1) = N(rs) = N(r)N(s)$. So $N(r) = N(s) = 1$.
+
+ So to find the units, we need to solve $a^2 + 5b^2 = 1$ for integers $a, b$. The only solutions are $(a, b) = (\pm 1, 0)$. So only $\pm 1 \in R$ can be units, and these obviously are units. So these are all the units.
+
+ Next, we claim $2 \in R$ is irreducible. We again use the norm. Suppose $2 = ab$. Then $4 = N(2) = N(a)N(b)$. Now note that nothing in $R$ has norm $2$, since $a^2 + 5b^2$ can never equal $2$ for integers $a, b \in \Z$. So we must have, wlog, $N(a) = 4, N(b) = 1$. So $b$ must be a unit. Similarly, we see that $3, 1 + \sqrt{-5}, 1 - \sqrt{-5}$ are irreducible (since there is also no element of norm $3$).
+
+ We have four irreducible elements in this ring. Are they prime? No! Note that
+ \[
+ (1 + \sqrt{-5})(1 - \sqrt{-5}) = 6 = 2\cdot 3.
+ \]
+ We now claim $2$ does not divide $1 + \sqrt{-5}$ or $1 - \sqrt{-5}$. So $2$ is not prime.
+
+ To show this, suppose $2 \mid 1 + \sqrt{-5}$. Then $N(2) \mid N(1 + \sqrt{-5})$. But $N(2) = 4$ and $N(1 + \sqrt{-5}) = 6$, and $4 \nmid 6$. Similarly, $N(1 - \sqrt{-5}) = 6$ as well. So $2 \nmid 1 \pm \sqrt{-5}$.
+\end{eg}
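All the norm computations in this example are finite checks, so we can sketch them in Python (pairs $(a, b)$ stand for $a + b\sqrt{-5}$; the names are ours):

```python
# Sketch: computations in Z[sqrt(-5)], elements stored as pairs (a, b).
# N(a + b sqrt(-5)) = a^2 + 5 b^2, and the product uses r^2 = -5.

def norm(x):
    a, b = x
    return a * a + 5 * b * b

def mul(x, y):
    (a, b), (c, d) = x, y
    # (a + b r)(c + d r) = (ac - 5bd) + (ad + bc) r, since r^2 = -5
    return (a * c - 5 * b * d, a * d + b * c)

# any element of norm <= 3 has |a| <= 1 and b = 0, so this range suffices
small = [(a, b) for a in range(-3, 4) for b in range(-1, 2)]
assert {x for x in small if norm(x) == 1} == {(1, 0), (-1, 0)}  # units
assert all(norm(x) not in (2, 3) for x in small)   # nothing of norm 2 or 3

# the two factorizations of 6, and the failed divisibility of norms:
assert mul((1, 1), (1, -1)) == (6, 0)   # (1 + r)(1 - r) = 1 - r^2 = 6
assert mul((2, 0), (3, 0)) == (6, 0)
assert norm((1, 1)) % norm((2, 0)) != 0  # N(2) = 4 does not divide 6
```

The last line is the computation showing $2 \nmid 1 + \sqrt{-5}$, since divisibility in $R$ would force divisibility of norms in $\Z$.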
+There are several life lessons here. First is that primes and irreducibles are not the same thing in general. We've always thought they were the same because we've been living in the fantasy land of the integers. But we need to grow up.
+
+The second one is that factorization into irreducibles is not necessarily unique, since $2\cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5})$ are two factorizations into irreducibles.
+
+However, there is one situation when unique factorizations holds. This is when we have a Euclidean algorithm available.
+
+\begin{defi}[Euclidean domain]\index{Euclidean domain}\index{ED}
+ An integral domain $R$ is a \emph{Euclidean domain} (ED) if there is a \emph{Euclidean function} $\phi: R\setminus \{0\} \to \Z_{\geq 0}$ such that
+ \begin{enumerate}
+ \item $\phi(a \cdot b) \geq \phi(b)$ for all $a, b \not= 0$
+ \item If $a, b\in R$, with $b \not= 0$, then there are $q, r \in R$ such that
+ \[
+ a = b \cdot q + r,
+ \]
+ and either $r = 0$ or $\phi(r) < \phi(b)$.
+ \end{enumerate}
+\end{defi}
+
+What are examples? Every time in this course that we said ``Euclidean algorithm'', we had an example.
+\begin{eg}
+ $\Z$ is a Euclidean domain with $\phi(n) = |n|$.
+\end{eg}
+
+\begin{eg}
+ For any field $\F$, $\F[X]$ is a Euclidean domain with
+ \[
+ \phi(f) = \deg(f).
+ \]
+\end{eg}
+
+\begin{eg}
+ The Gaussian integers $R = \Z[i] \leq \C$ is a Euclidean domain with $\phi(z) = N(z) = |z|^2$. We now check this:
+ \begin{enumerate}
+ \item We have $\phi(zw) = \phi(z)\phi(w) \geq \phi(z)$, since $\phi(w)$ is a positive integer.
+ \item Given $a, b\in \Z[i]$ with $b\not= 0$, we consider the complex number
+ \[
+ \frac{a}{b} \in \C.
+ \]
+ Consider the following complex plane, where the red dots are points in $\Z[i]$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$\Re$};
+ \draw [->] (0, -2.5) -- (0, 2.5) node [above] {$\Im$};
+ \foreach \x in {-2,...,2}{
+ \foreach \y in {-2,...,2}{
+ \node [mred, circ] at (\x, \y) {};
+ }
+ }
+ \node [circ] at (1.6, 0.7) {};
+ \node [right] at (1.6, 0.7) {$\frac{a}{b}$};
+ \end{tikzpicture}
+ \end{center}
+ By looking at the picture, we know that there is some $q \in \Z[i]$ such that $\left|\frac{a}{b} - q\right| < 1$. So we can write
+ \[
+ \frac{a}{b} = q + c
+ \]
+ with $|c| < 1$. Then we have
+ \[
+ a = b\cdot q + \underbrace{b\cdot c}_r.
+ \]
+ We know $r = a - bq \in \Z[i]$, and $\phi(r) = N(bc) = N(b)N(c) < N(b) = \phi(b)$. So done.
+ \end{enumerate}
+ This is not just true for the Gaussian integers. All we really needed was that $R \leq \C$, and that for any $x \in \C$, there is some point in $R$ at distance strictly less than $1$ from $x$. If we draw some more pictures, we will see this is not true for $\Z[\sqrt{-5}]$.
+\end{eg}
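The geometric step in the proof --- round $a/b$ to the nearest lattice point --- is easy to implement. A Python sketch for $\Z[i]$, with Gaussian integers stored as Python complex numbers with integer parts (the function names are ours):

```python
# Sketch of the Euclidean step in Z[i]: q is the nearest point of Z[i]
# to a/b, so |a/b - q| < 1 and the remainder has smaller norm.

def N(z):
    """The norm N(z) = |z|^2 for a Gaussian integer z."""
    return int(z.real) ** 2 + int(z.imag) ** 2

def divmod_gauss(a, b):
    """Return q, r with a = b*q + r and N(r) < N(b)."""
    w = a / b
    q = complex(round(w.real), round(w.imag))  # nearest point of Z[i]
    r = a - b * q                              # r = b * (a/b - q)
    return q, r

a, b = complex(27, 23), complex(8, 1)
q, r = divmod_gauss(a, b)
assert a == b * q + r
assert N(r) < N(b)
```

Rounding each coordinate puts $q$ within $\frac{\sqrt 2}{2} < 1$ of $a/b$, which is even better than the bound the picture gives.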
+
+Before we move on to prove unique factorization, we first derive something we've previously mentioned. Recall we showed that every ideal in $\Z$ is principal, and we proved this by the Euclidean algorithm. So we might expect this to be true in an arbitrary Euclidean domain.
+
+\begin{defi}[Principal ideal domain]\index{principal ideal domain}\index{PID}
+ A ring $R$ is a \emph{principal ideal domain} (PID) if it is an integral domain, and every ideal is a principal ideal, i.e.\ for all $I \lhd R$, there is some $a$ such that $I = (a)$.
+\end{defi}
+
+\begin{eg}
+ $\Z$ is a principal ideal domain.
+\end{eg}
+
+\begin{prop}
+ Let $R$ be a Euclidean domain. Then $R$ is a principal ideal domain.
+\end{prop}
+We have already proved this, just that we did it for a particular Euclidean domain $\Z$. Nonetheless, we shall do it again.
+
+\begin{proof}
+ Let $R$ have a Euclidean function $\phi: R \setminus \{0\} \to \Z_{\geq 0}$. We let $I \lhd R$ be a non-zero ideal, and let $b \in I\setminus \{0\}$ be an element with $\phi(b)$ minimal. Then for any $a \in I$, we write
+ \[
+ a = bq + r,
+ \]
+ with $r = 0$ or $\phi(r) < \phi(b)$. However, any such $r$ must be in $I$ since $r = a - bq \in I$. So we cannot have $\phi(r) < \phi(b)$. So we must have $r = 0$. So $a = bq$. So $a \in (b)$. Since this is true for all $a \in I$, we must have $I \subseteq (b)$. On the other hand, since $b \in I$, we must have $(b) \subseteq I$. So we must have $I = (b)$.
+\end{proof}
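For $R = \Z$ with $\phi(n) = |n|$, the proof is just Euclid's algorithm: the generator of $(x, y)$ is an element of the ideal of minimal $\phi$, and repeated division with remainder finds it. A Python sketch (our own illustration):

```python
# Sketch: the ED => PID proof in action for R = Z.  Each step replaces
# (x, y) by (y, r) with r = x - qy; r stays in the ideal and phi shrinks,
# so we end at a generator of the ideal (x, y).

def generator(x, y):
    """gcd via repeated division with remainder; it generates (x, y)."""
    while y != 0:
        x, y = y, x % y
    return abs(x)

assert generator(12, 18) == 6
# sanity check: every element of (12, 18) is a multiple of 6
assert all((12 * m + 18 * n) % 6 == 0
           for m in range(-5, 6) for n in range(-5, 6))
```

The same loop works verbatim in any Euclidean domain once \texttt{\%} is replaced by that domain's division-with-remainder, e.g.\ the Gaussian-integer one above.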
+This is exactly, word by word, the same proof as we gave for the integers, except we replaced the absolute value with $\phi$.
+
+\begin{eg}
+ $\Z$ is a Euclidean domain, and hence a principal ideal domain. Also, for any field $\F$, $\F[X]$ is a Euclidean domain, hence a principal ideal domain.
+
+ Also, $\Z[i]$ is a Euclidean domain, and hence a principal ideal domain.
+
+ What is a non-example of a principal ideal domain? In $\Z[X]$, the ideal $(2, X) \lhd \Z[X]$ is not a principal ideal. Suppose it were. Then $(2, X) = (f)$. Since $2 \in (2, X) = (f)$, we know $2 = f\cdot g$ for some $g$. So $f$ has degree zero, and hence is constant. So $f = \pm 1$ or $\pm 2$.
+
+ If $f = \pm 1$, since $\pm 1$ are units, then $(f) = \Z[X]$. But $(2, X) \not= \Z[X]$, since, say, $1 \not\in (2, X)$. If $f = \pm 2$, then since $X \in (2, X) = (f)$, we must have $\pm 2 \mid X$, but this is clearly false. So $(2, X)$ cannot be a principal ideal.
+\end{eg}
+
+\begin{eg}
+ Let $A \in M_{n \times n}(\F)$ be an $n \times n$ matrix over a field $\F$. We consider the following set
+ \[
+ I = \{f \in \F[X]: f(A) = 0\}.
+ \]
+ This is an ideal --- if $f, g \in I$, then $(f + g)(A) = f(A) + g(A) = 0$. Similarly, if $f \in I$ and $h \in \F[X]$, then $(fh)(A) = f(A)h(A) = 0$.
+
+ But we know $\F[X]$ is a principal ideal domain. So there must be some $m \in \F[X]$ such that $I = (m)$.
+
+ Suppose $f \in \F[X]$ such that $f(A) = 0$, i.e.\ $f \in I$. Then $m \mid f$. So $m$ is a polynomial that divides all polynomials that kill $A$, i.e.\ $m$ is the \emph{minimal polynomial} of $A$.
+
+ We have just proved that all matrices have minimal polynomials, and that the minimal polynomial divides all other polynomials that kill $A$. Also, the minimal polynomial is unique up to multiplication by units.
+\end{eg}
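The divisibility picture can be seen concretely on a small matrix. Below is a Python sketch for the (hypothetical, our choice) nilpotent matrix $A = \begin{psmallmatrix}0 & 1\\0 & 0\end{psmallmatrix}$: the polynomial $X$ does not kill $A$, $X^2$ does, and every higher power lies in $(X^2)$, consistent with $m = X^2$ generating $I$.

```python
# Sketch: checking membership in I = {f : f(A) = 0} for a 2x2 matrix,
# with f given by its coefficient list [c0, c1, ...] for c0 + c1 X + ...

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def evaluate(coeffs, A):
    """Evaluate f(A) by Horner's rule, using the 2x2 identity for constants."""
    I2 = [[1, 0], [0, 1]]
    result = [[0, 0], [0, 0]]
    for c in reversed(coeffs):
        result = matmul(result, A)
        result = [[result[i][j] + c * I2[i][j] for j in range(2)]
                  for i in range(2)]
    return result

A = [[0, 1], [0, 0]]
zero = [[0, 0], [0, 0]]
assert evaluate([0, 1], A) != zero        # f = X does not kill A
assert evaluate([0, 0, 1], A) == zero     # f = X^2 does, so m = X^2
assert evaluate([0, 0, 0, 1], A) == zero  # X^3 in I, and indeed X^2 | X^3
```

Existence of a non-zero annihilating polynomial is guaranteed here because $I, A, A^2, \ldots$ cannot all be linearly independent in the $4$-dimensional space of $2\times 2$ matrices.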
+
+Let's get further into number theory-like things. For a general ring, we cannot factorize things into irreducibles uniquely. However, in some rings, this is possible.
+\begin{defi}[Unique factorization domain]\index{unique factorization domain}\index{UFD}
+ An integral domain $R$ is a \emph{unique factorization domain} (UFD) if
+ \begin{enumerate}
+ \item Every non-zero non-unit may be written as a product of irreducibles;
+ \item If $p_1 p_2 \cdots p_n = q_1 \cdots q_m$ with $p_i, q_j$ irreducibles, then $n = m$, and they can be reordered such that $p_i$ is an associate of $q_i$.
+ \end{enumerate}
+\end{defi}
+This is a really nice property, and here we can do things we are familiar with in number theory. So how do we know if something is a unique factorization domain?
+
+Our goal is to show that all principal ideal domains are unique factorization domains. To do so, we are going to prove several lemmas that give us some really nice properties of principal ideal domains.
+
+Recall we saw that every prime is an irreducible, but in $\Z[\sqrt{-5}]$, there are some irreducibles that are not prime. However, this cannot happen in principal ideal domains.
+\begin{lemma}
+ Let $R$ be a principal ideal domain. If $p \in R$ is irreducible, then it is prime.
+\end{lemma}
+Note that this is also true for general unique factorization domains, which we can prove directly by unique factorization.
+
+\begin{proof}
+ Let $p \in R$ be irreducible, and suppose $p \mid a\cdot b$. Also, suppose $p \nmid a$. We need to show $p \mid b$.
+
+ Consider the ideal $(p, a) \lhd R$. Since $R$ is a principal ideal domain, there is some $d \in R$ such that $(p, a) = (d)$. So $d \mid p$ and $d \mid a$.
+
+ Since $d \mid p$, there is some $q_1$ such that $p = q_1 d$. As $p$ is irreducible, either $q_1$ or $d$ is a unit.
+
+ If $q_1$ is a unit, then $d = q_1^{-1} p$, and this divides $a$. So $a = q_1^{-1} p x$ for some $x$, i.e.\ $p \mid a$. This is a contradiction, since $p\nmid a$.
+
+ Therefore $d$ is a unit. So $(p, a) = (d) = R$. In particular, $1_R \in (p, a)$. So suppose $1_R = rp + sa$, for some $r, s \in R$. We now take the whole thing and multiply by $b$. Then we get
+ \[
+ b = rpb + sab.
+ \]
+ We observe that $ab$ is divisible by $p$, and so is $p$. So $b$ is divisible by $p$. So done.
+\end{proof}
+This is similar to the argument for integers. For integers, we would say if $p \nmid a$, then $p$ and $a$ are coprime. Therefore there are some $r, s$ such that $1 = rp + sa$. Then we continue the proof as above. Hence what we did in the middle is to do something similar to showing $p$ and $a$ are ``coprime''.
+
+Another nice property of principal ideal domains is the following:
+\begin{lemma}
+ Let $R$ be a principal ideal domain. Let $I_1 \subseteq I_2 \subseteq I_3 \subseteq \cdots$ be a chain of ideals. Then there is some $N\in \N$ such that $I_n = I_{n + 1}$ for all $n \geq N$.
+\end{lemma}
+So in a principal ideal domain, we cannot have an infinite chain of bigger and bigger ideals.
+
+\begin{defi}[Ascending chain condition]\index{ascending chain condition}\index{ACC}
+ A ring satisfies the \emph{ascending chain condition} (ACC) if there is no infinite strictly increasing chain of ideals.
+\end{defi}
+
+\begin{defi}[Noetherian ring]\index{Noetherian ring}
+ A ring that satisfies the ascending chain condition is known as a \emph{Noetherian ring}.
+\end{defi}
+
+So we are proving that every principal ideal domain is Noetherian.
+
+\begin{proof}
+ The obvious thing to do when we have an infinite chain of ideals is to take the union of them. We let
+ \[
+ I = \bigcup_{n = 1}^\infty I_n,
+ \]
+ which is again an ideal, since the $I_n$ form a chain. Since $R$ is a principal ideal domain, $I = (a)$ for some $a \in R$. We know $a \in I = \bigcup_{n = 1}^\infty I_n$. So $a \in I_N$ for some $N$. Then we have
+ \[
+ (a) \subseteq I_N \subseteq I = (a)
+ \]
+ So we must have $I_N = I$. So $I_n = I_N = I$ for all $n \geq N$.
+\end{proof}
+Notice it is not important that $I$ is generated by one element. If, for some reason, we know $I$ is generated by finitely many elements, then the same argument works. So if every ideal is finitely generated, then the ring must be Noetherian. It turns out this is an if-and-only-if --- if you are Noetherian, then every ideal is finitely generated. We will prove this later on in the course.
+
+Finally, we have done the setup, and we can prove the proposition promised.
+\begin{prop}
+ Let $R$ be a principal ideal domain. Then $R$ is a unique factorization domain.
+\end{prop}
+
+\begin{proof}
+ We first need to show any non-zero non-unit $r \in R$ is a product of irreducibles.
+
+ Suppose $r \in R$ cannot be factored as a product of irreducibles. Then it is certainly not irreducible. So we can write $r = r_1 s_1$, with $r_1, s_1$ both non-units. Since $r$ cannot be factored as a product of irreducibles, wlog $r_1$ cannot be factored as a product of irreducibles (if both can, then $r$ would be a product of irreducibles). So we can write $r_1 = r_2 s_2$, with $r_2, s_2$ not units. Again, wlog $r_2$ cannot be factored as a product of irreducibles. We continue this way.
+
+ By assumption, the process does not end, and then we have the following chain of ideals:
+ \[
+ (r) \subseteq (r_1) \subseteq (r_2) \subseteq \cdots \subseteq (r_n) \subseteq \cdots
+ \]
+ But then we have an ascending chain of ideals. By the ascending chain condition, these are all eventually equal, i.e.\ there is some $n$ such that $(r_n) = (r_{n + 1}) = (r_{n + 2}) =\cdots$. In particular, since $(r_n) = (r_{n + 1})$, and $r_n = r_{n + 1} s_{n + 1}$, then $s_{n + 1}$ is a unit. But this is a contradiction, since $s_{n + 1}$ is not a unit. So $r$ must be a product of irreducibles.
+
+ To show uniqueness, we let $p_1p_2 \cdots p_n= q_1 q_2 \cdots q_m$, with $p_i, q_i$ irreducible. So in particular $p_1 \mid q_1 \cdots q_m$. Since $p_1$ is irreducible, it is prime. So $p_1$ divides some $q_i$. We reorder and suppose $p_1 \mid q_1$. So $q_1 = p_1 \cdot a$ for some $a$. But since $q_1$ is irreducible, $a$ must be a unit. So $p_1, q_1$ are associates. Since $R$ is a principal ideal domain, hence integral domain, we can cancel $p_1$ to obtain
+ \[
+ p_2p_3 \cdots p_n = (a q_2) q_3 \cdots q_m.
+ \]
+ We now rename $aq_2$ as $q_2$, so that we in fact have
+ \[
+ p_2p_3 \cdots p_n = q_2 q_3 \cdots q_m.
+ \]
+ We can then continue to show that $p_i$ and $q_i$ are associates for all $i$. This also shows that $n = m$, for if, say, $n > m$, then we would be left with $p_{m + 1} \cdots p_n$ equal to a unit, which is a contradiction since the $p_i$ are irreducible, hence not units.
+\end{proof}
+
+We can now use this to define other familiar notions from number theory.
+\begin{defi}[Greatest common divisor]\index{greatest common divisor}\index{gcd}
+ $d$ is a \emph{greatest common divisor} (gcd) of $a_1,a_2, \cdots, a_n$ if $d \mid a_i$ for all $i$, and if any other $d'$ satisfies $d' \mid a_i$ for all $i$, then $d' \mid d$.
+\end{defi}
+Note that the gcd of a set of elements, if it exists, is not unique. It is only well-defined up to a unit.
+
+This is a definition that says what it means to be a greatest common divisor. However, it does not always have to exist.
+\begin{lemma}
+ Let $R$ be a unique factorization domain. Then greatest common divisors exist, and are unique up to associates.
+\end{lemma}
+
+\begin{proof}
+ We construct the greatest common divisor using the good-old way of prime factorization.
+
+ We let $p_1, p_2, \cdots, p_m$ be a list of all the irreducible factors of the $a_i$, such that no two of these are associates of each other. We now write
+ \[
+ a_i = u_i\prod_{j = 1}^m p_j^{n_{ij}},
+ \]
+ where $n_{ij} \in \N$ and $u_i$ are units. We let
+ \[
+ m_j = \min_i \{n_{ij}\},
+ \]
+ and choose
+ \[
+ d = \prod_{j = 1}^m p_j^{m_j}.
+ \]
+ As, by definition, $m_j \leq n_{ij}$ for all $i$, we know $d \mid a_i$ for all $i$.
+
+ Finally, if $d' \mid a_i$ for all $i$, then we let
+ \[
+ d' = v \prod_{j = 1}^m p_j^{t_j}.
+ \]
+ Then we must have $t_j \leq n_{ij}$ for all $i, j$. So we must have $t_j \leq m_j$ for all $j$. So $d' \mid d$.
+
+ Uniqueness is immediate since any two greatest common divisors have to divide each other.
+\end{proof}
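+For $R = \Z$ this construction is completely explicit. A Python sketch (the helper names are ours), computing the gcd as the product of minimal prime powers exactly as in the proof:
+
+```python
+import math
+from collections import Counter
+
+def factorize(n):
+    """Prime factorization of n >= 1 as a Counter {prime: exponent}."""
+    f, d = Counter(), 2
+    while d * d <= n:
+        while n % d == 0:
+            f[d] += 1
+            n //= d
+        d += 1
+    if n > 1:
+        f[n] += 1
+    return f
+
+def gcd_by_factorization(*nums):
+    """gcd as the product of p^(minimal exponent), as in the proof."""
+    factorizations = [factorize(n) for n in nums]
+    primes = set().union(*factorizations)
+    g = 1
+    for p in primes:
+        g *= p ** min(f[p] for f in factorizations)  # missing keys give 0
+    return g
+
+assert gcd_by_factorization(84, 120, 36) == math.gcd(84, math.gcd(120, 36))
+```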
+
+\subsection{Factorization in polynomial rings}
+Since polynomial rings are a bit more special than general integral domains, we can say a bit more about them.
+
+Recall that for $F$ a field, we know $F[X]$ is a Euclidean domain, hence a principal ideal domain, hence a unique factorization domain. Therefore we know
+\begin{enumerate}
+ \item If $I \lhd F[X]$, then $I = (f)$ for some $f \in F[X]$.
+ \item If $f \in F[X]$, then $f$ is irreducible if and only if $f$ is prime.
+ \item Let $f$ be irreducible, and suppose $(f) \subseteq J \subseteq F[X]$. Then $J = (g)$ for some $g$. Since $(f) \subseteq (g)$, we must have $f = gh$ for some $h$. But $f$ is irreducible. So either $g$ or $h$ is a unit. If $g$ is a unit, then $(g) = F[X]$. If $h$ is a unit, then $(f) = (g)$. So $(f)$ is a maximal ideal. Note that this argument is valid for any PID, not just polynomial rings.
+ \item Let $(f)$ be a prime ideal. Then $f$ is prime. So $f$ is irreducible. So $(f)$ is maximal. But we also know in complete generality that maximal ideals are prime. So in $F[X]$, prime ideals are the same as maximal ideals. Again, this is true for all PIDs in general.
+ \item Thus $f$ is irreducible if and only if $F[X]/(f)$ is a field.
+\end{enumerate}
+To use the last item, we can first show that $F[X]/(f)$ is a field, and then use this to deduce that $f$ is irreducible. But we can also do something more interesting --- find an irreducible $f$, and then generate an interesting field $F[X]/(f)$.
+
+So we want to understand reducibility, i.e.\ we want to know whether we can factorize a polynomial $f$. Firstly, we want to get rid of the trivial case where we just factor out a scalar, e.g.\ $2X^2 + 2 = 2(X^2 + 1) \in \Z[X]$ is a boring factorization.
+
+\begin{defi}[Content]\index{content}
+ Let $R$ be a UFD and $f = a_0 + a_1 X + \cdots + a_n X^n \in R[X]$. The \emph{content} $c(f)$ of $f$ is
+ \[
+ c(f) = \gcd(a_0, a_1, \cdots, a_n) \in R.
+ \]
+\end{defi}
+Again, since the gcd is only defined up to a unit, so is the content.
+
+\begin{defi}[Primitive polynomial]\index{primitive polynomial}
+ A polynomial is \emph{primitive} if $c(f)$ is a unit, i.e.\ the $a_i$ are coprime.
+\end{defi}
+Note that this is the best we can do. We cannot ask for $c(f)$ to be exactly $1$, since the gcd is only well-defined up to a unit.
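+Over $\Z$ we can resolve the unit ambiguity by always taking the positive gcd. A quick sketch (coefficient lists run from the constant term up; the names are ours):
+
+```python
+import math
+from functools import reduce
+
+def content(coeffs):
+    """Content of an integer polynomial, given its coefficient list."""
+    return reduce(math.gcd, coeffs)
+
+def is_primitive(coeffs):
+    return content(coeffs) == 1
+
+assert content([2, 0, 2]) == 2        # 2X^2 + 2 = 2(X^2 + 1)
+assert is_primitive([1, 1, 0, 1])     # X^3 + X + 1
+```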
+
+We now want to prove the following important lemma:
+\begin{lemma}[Gauss' lemma]\index{Gauss' lemma}
+ Let $R$ be a UFD, and $f \in R[X]$ be a primitive polynomial. Then $f$ is reducible in $R[X]$ if and only if $f$ is reducible in $F[X]$, where $F$ is the field of fractions of $R$.
+\end{lemma}
+
+We can't do this right away. We first need some preparation. Before that, we do some examples.
+\begin{eg}
+ Consider $X^3 + X + 1 \in \Z[X]$. This has content $1$ so is primitive. We show it is not reducible in $\Z[X]$, and hence not reducible in $\Q[X]$.
+
+ Suppose $f$ \emph{is} reducible in $\Q[X]$. Then by Gauss' lemma, this is reducible in $\Z[X]$. So we can write
+ \[
+ X^3 + X + 1 = gh,
+ \]
+ for some polynomials $g, h \in \Z[X]$, with $g, h$ not units. But if $g$ and $h$ are not units, then they cannot be constant, since the coefficients of $X^3 + X + 1$ are all $1$ or $0$. So they have degree at least $1$. Since the degrees add up to $3$, we wlog suppose $g$ has degree $1$ and $h$ has degree $2$. So suppose
+ \[
+ g = b_0 + b_1X,\quad h = c_0 + c_1 X + c_2 X^2.
+ \]
+ Multiplying out and equating coefficients, we get
+ \begin{align*}
+ b_0 c_0 &= 1\\
+ c_2 b_1 &= 1
+ \end{align*}
+ So $b_0$ and $b_1$ must be $\pm 1$. So $g$ is either $1 + X, 1 - X, -1 + X$ or $-1 - X$, and hence has $\pm 1$ as a root. But this is a contradiction, since $\pm 1$ is not a root of $X^3 + X + 1$. So $f$ is not reducible in $\Q[X]$. In particular, $f$ has no root in $\Q$.
+\end{eg}
+We see the advantage of using Gauss' lemma --- if we worked in $\Q$ instead, we could have gotten to the step $b_0 c_0 = 1$, and then we can do nothing, since $b_0$ and $c_0$ can be many things if we live in $\Q$.
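+The final step of the example, that $\pm 1$ is not a root, is a one-line check. A sketch in Python:
+
+```python
+# By the equations b0*c0 = 1 and c2*b1 = 1 in Z, any linear factor of
+# X^3 + X + 1 must be one of the four polynomials ±1 ± X, so ±1 would
+# be a root.  Check directly:
+f = lambda x: x**3 + x + 1
+assert f(1) == 3 and f(-1) == -1   # neither value is 0, so no root at ±1
+```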
+
+Now we start working towards proving this.
+\begin{lemma}
+ Let $R$ be a UFD. If $f, g \in R[X]$ are primitive, then so is $fg$.
+\end{lemma}
+
+\begin{proof}
+ We let
+ \begin{align*}
+ f &= a_0 + a_1X + \cdots + a_n X^n,\\
+ g &= b_0 + b_1X + \cdots + b_m X^m,
+ \end{align*}
+ where $a_n, b_m \not= 0$, and $f, g$ are primitive. We want to show that the content of $fg$ is a unit.
+
+ Now suppose $fg$ is not primitive. Then $c(fg)$ is not a unit. Since $R$ is a UFD, we can find an irreducible $p$ which divides $c(fg)$.
+
+ By assumption, $c(f)$ and $c(g)$ are units. So $p\nmid c(f)$ and $p \nmid c(g)$. So suppose $p \mid a_0$, $p \mid a_1$, \ldots, $p \mid a_{k - 1}$ but $p \nmid a_k$. Note it is possible that $k = 0$. Similarly, suppose $p\mid b_0, p \mid b_1, \cdots, p\mid b_{\ell - 1}, p \nmid b_\ell$.
+
+ We look at the coefficient of $X^{k + \ell}$ in $fg$. It is given by
+ \[
+ \sum_{i + j = k + \ell} a_i b_j = a_{k + \ell}b_0 + \cdots + a_{k + 1}b_{\ell - 1} + a_k b_{\ell} + a_{k - 1}b_{\ell + 1} + \cdots + a_0 b_{\ell + k}.
+ \]
+ By assumption, this is divisible by $p$. So
+ \[
+ p \mid \sum_{i + j = k + \ell} a_i b_j.
+ \]
+ However, the sum $a_{k + \ell}b_0 + \cdots + a_{k + 1} b_{\ell - 1}$ is divisible by $p$, as $p \mid b_j$ for $j < \ell$. Similarly, $a_{k - 1}b_{\ell + 1} + \cdots + a_0 b_{\ell + k}$ is divisible by $p$, as $p \mid a_i$ for $i < k$. So we must have $p \mid a_k b_\ell$. As $p$ is irreducible, and hence prime, we must have $p \mid a_k$ or $p \mid b_\ell$. This is a contradiction. So $c(fg)$ must be a unit.
+\end{proof}
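+We can spot-check the lemma over $\Z$ by multiplying random primitive polynomials and computing the content of the product (a sketch; the helper names are ours):
+
+```python
+import math
+from functools import reduce
+from random import randint, seed
+
+def content(coeffs):
+    return reduce(math.gcd, coeffs)
+
+def poly_mul(f, g):
+    """Multiply polynomials given as coefficient lists [a0, a1, ...]."""
+    out = [0] * (len(f) + len(g) - 1)
+    for i, a in enumerate(f):
+        for j, b in enumerate(g):
+            out[i + j] += a * b
+    return out
+
+seed(0)
+for _ in range(100):   # spot-check: primitive * primitive is primitive
+    f = [randint(-9, 9) for _ in range(4)]
+    g = [randint(-9, 9) for _ in range(4)]
+    if content(f) == 1 and content(g) == 1:
+        assert content(poly_mul(f, g)) == 1
+```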
+
+\begin{cor}
+ Let $R$ be a UFD. Then for $f, g \in R[X]$, we have that $c(fg)$ is an associate of $c(f)c(g)$.
+\end{cor}
+Again, we cannot say they are equal, since content is only well-defined up to a unit.
+
+\begin{proof}
+ We can write $f = c(f) f_1$ and $g= c(g) g_1$, with $f_1$ and $g_1$ primitive. Then
+ \[
+ fg = c(f)c(g) f_1g_1.
+ \]
+ Since $f_1g_1$ is primitive, $c(f)c(g)$ is a gcd of the coefficients of $fg$, and so is $c(fg)$, by definition. So they are associates.
+\end{proof}
+
+Finally, we can prove Gauss' lemma.
+\begin{lemma}[Gauss' lemma]
+ Let $R$ be a UFD, and $f \in R[X]$ be a primitive polynomial. Then $f$ is reducible in $R[X]$ if and only if $f$ is reducible in $F[X]$, where $F$ is the field of fractions of $R$.
+\end{lemma}
+
+\begin{proof}
+ We will show that a primitive $f \in R[X]$ is reducible in $R[X]$ if and only if $f$ is reducible in $F[X]$.
+
+ One direction is almost immediately obvious. Let $f = gh$ be a product in $R[X]$ with $g, h$ not units. As $f$ is primitive, so are $g$ and $h$. So both have degree $> 0$. So $g, h$ are not units in $F[X]$. So $f$ is reducible in $F[X]$.
+
+ The other direction is less obvious. We let $f = gh$ in $F[X]$, with $g, h$ not units. So $g$ and $h$ have degree $> 0$, since $F$ is a field. So we can clear denominators by finding $a, b \in R$ such that $(ag), (bh) \in R[X]$ (e.g.\ let $a$ be the product of denominators of coefficients of $g$). Then we get
+ \[
+ ab f = (ag)(bh),
+ \]
+ and this is a factorization in $R[X]$. Here we have to be careful --- $(ag)$ is one thing that lives in $R[X]$, and is not necessarily a product in $R[X]$, since $g$ might not be in $R[X]$. So we should just treat it as a single symbol.
+
+ We now write
+ \begin{align*}
+ (ag) &= c(ag) g_1,\\
+ (bh) &= c(bh) h_1,
+ \end{align*}
+ where $g_1, h_1$ are primitive. So we have
+ \[
+ ab = c(abf) = c((ag)(bh)) = u \cdot c(ag)c(bh),
+ \]
+ where $u \in R$ is a unit, by the previous corollary. But also we have
+ \[
+ abf = c(ag)c(bh) g_1 h_1 = u^{-1}ab g_1 h_1.
+ \]
+ So cancelling $ab$ gives
+ \[
+ f = u^{-1} g_1 h_1 \in R[X].
+ \]
+ So $f$ is reducible in $R[X]$.
+\end{proof}
+If this looks fancy and magical, you can try to do this explicitly in the case where $R = \Z$ and $F = \Q$. Then you will probably get enlightened.
+
+We will now do another proof in a similar manner.
+\begin{prop}
+ Let $R$ be a UFD, and $F$ be its field of fractions. Let $g \in R[X]$ be primitive. We let
+ \[
+ J = (g) \lhd R[X],\quad I = (g) \lhd F[X].
+ \]
+ Then
+ \[
+ J = I \cap R[X].
+ \]
+ In other words, if $f \in R[X]$ and we can write it as $f = gh$, with $h \in F[X]$, then in fact $h \in R[X]$.
+\end{prop}
+
+\begin{proof}
+ The strategy is the same --- we clear denominators in the equation $f = gh$, and then use contents to get that down in $R[X]$.
+
+ We certainly have $J \subseteq I \cap R[X]$. Now let $f \in I \cap R[X]$. So we can write
+ \[
+ f = gh,
+ \]
+ with $h \in F[X]$. So we can choose $b \in R$ such that $bh \in R[X]$. Then we know
+ \[
+ bf = g(bh) \in R[X].
+ \]
+ We let
+ \[
+ (bh) = c(bh) h_1,
+ \]
+ for $h_1 \in R[X]$ primitive. Thus
+ \[
+ bf = c(bh) g h_1.
+ \]
+ Since $g$ is primitive, so is $gh_1$. So $c(bh) = u c(bf)$ for $u$ a unit. But $b f$ is really a product in $R[X]$. So we have
+ \[
+ c(bf) = c(b)c(f) = bc(f).
+ \]
+ So we have
+ \[
+ bf = ubc(f) g h_1.
+ \]
+ Cancelling $b$ gives
+ \[
+ f = g (uc(f)h_1).
+ \]
+ So $g \mid f$ in $R[X]$. So $f \in J$.
+\end{proof}
+
+From this we can get ourselves a large class of UFDs.
+\begin{thm}
+ If $R$ is a UFD, then $R[X]$ is a UFD.
+\end{thm}
+In particular, if $R$ is a UFD, then $R[X_1, \cdots, X_n]$ is also a UFD.
+
+\begin{proof}
+ We know $R[X]$ has a notion of degree. So we will combine this with the fact that $R$ is a UFD.
+
+ Let $f \in R[X]$. We can write $f = c(f) f_1$, with $f_1$ primitive. Firstly, as $R$ is a UFD, we may factor
+ \[
+ c(f) = p_1 p_2 \cdots p_n,
+ \]
+ for $p_i \in R$ irreducible (and also irreducible in $R[X]$). Now we want to deal with $f_1$.
+
+ If $f_1$ is not irreducible, then we can write
+ \[
+ f_1 = f_2 f_3,
+ \]
+ with $f_2, f_3$ both not units. Since $f_1$ is primitive, $f_2, f_3$ also cannot be constants. So we must have $\deg f_2, \deg f_3 > 0$. Also, since $\deg f_2 + \deg f_3 = \deg f_1$, we must have $\deg f_2, \deg f_3 < \deg f_1$. If $f_2, f_3$ are irreducible, then done. Otherwise, keep on going. We will eventually stop since the degrees have to keep on decreasing. So we can write it as
+ \[
+ f_1 = q_1 \cdots q_m,
+ \]
+ with $q_i$ irreducible. So we can write
+ \[
+ f = p_1 p_2 \cdots p_n q_1 q_2 \cdots q_m,
+ \]
+ a product of irreducibles.
+
+ For uniqueness, we first deal with the $p$'s. We note that
+ \[
+ c(f) = p_1 p_2 \cdots p_n
+ \]
+ is a unique factorization of the content, up to reordering and associates, as $R$ is a UFD. So cancelling the content, we only have to show that primitives can be factored uniquely.
+
+ Suppose we have two factorizations
+ \[
+ f_1 = q_1 q_2 \cdots q_m = r_1 r_2 \cdots r_\ell.
+ \]
+ Note that each $q_i$ and each $r_i$ is a factor of the primitive polynomial $f_1$, so are also primitive. Now we do (maybe) the unexpected thing. We let $F$ be the field of fractions of $R$, and consider $q_i, r_i \in F[X]$. Since $F$ is a field, $F[X]$ is a Euclidean domain, hence principal ideal domain, hence unique factorization domain.
+
+ By Gauss' lemma, since the $q_i$ and $r_i$ are irreducible in $R[X]$, they are also irreducible in $F[X]$. As $F[X]$ is a UFD, we find that $\ell = m$, and after reordering, $r_i$ and $q_i$ are associates, say
+ \[
+ r_i = u_i q_i,
+ \]
+ with $u_i \in F[X]$ a unit. What we want to say is that $r_i$ is a unit times $q_i$ in $R[X]$. Firstly, note that $u_i \in F$ as it is a unit. Clearing denominators, we can write
+ \[
+ a_i r_i = b_i q_i \in R[X].
+ \]
+ Taking contents, since $r_i, q_i$ are primitives, we know $a_i$ and $b_i$ are associates, say
+ \[
+ b_i = v_i a_i,
+ \]
+ with $v_i \in R$ a unit. Cancelling $a_i$ on both sides, we know $r_i = v_i q_i $ as required.
+\end{proof}
+The key idea is to use Gauss' lemma to say the reducibility in $R[X]$ is the same as reducibility in $F[X]$, as long as we are primitive. The first part about contents is just to turn everything into primitives.
+
+Note that the last part of the proof is just our previous proposition. We could have applied it, but we decide to spell it out in full for clarity.
+
+\begin{eg}
+ We know $\Z[X]$ is a UFD, and if $R$ is a UFD, then $R[X_1, \cdots, X_n]$ is also a UFD.
+\end{eg}
+
+This is a useful thing to know. In particular, it gives us examples of UFDs that are not PIDs. However, in such rings, we would also like an easy way to determine whether something is reducible. Fortunately, we have the following criterion:
+\begin{prop}[Eisenstein's criterion]\index{Eisenstein's criterion}
+ Let $R$ be a UFD, and let
+ \[
+ f = a_0 + a_1 X + \cdots + a_n X^n \in R[X]
+ \]
+ be primitive with $a_n \not= 0$. Let $p \in R$ be irreducible (hence prime) such that
+ \begin{enumerate}
+ \item $p \nmid a_n$;
+ \item $p \mid a_i$ for all $0 \leq i < n$;
+ \item $p^2 \nmid a_0$.
+ \end{enumerate}
+ Then $f$ is irreducible in $R[X]$, and hence in $F[X]$ (where $F$ is the field of fractions of $R$).
+\end{prop}
+It is important that we work in $R[X]$ all the time, until the end where we apply Gauss' lemma. Otherwise, we cannot possibly apply Eisenstein's criterion since there are no primes in $F$.
+
+\begin{proof}
+ Suppose we have a factorization $f = gh$ with
+ \begin{align*}
+ g &= r_0 + r_1 X + \cdots + r_k X^k\\
+ h &= s_0 + s_1 X + \cdots + s_\ell X^\ell,
+ \end{align*}
+ for $r_k, s_\ell \not= 0$.
+
+ We know $r_k s_\ell = a_n$. Since $p \nmid a_n$, so $p \nmid r_k$ and $p \nmid s_\ell$. We can also look at bottom coefficients. We know $r_0 s_0 = a_0$. We know $p \mid a_0$ and $p^2 \nmid a_0$. So $p$ divides exactly one of $r_0$ and $s_0$. wlog, $p \mid r_0$ and $p \nmid s_0$.
+
+ Now let $j$ be such that
+ \[
+ p \mid r_0,\quad p \mid r_1,\cdots,\quad p \mid r_{j - 1},\quad p \nmid r_j.
+ \]
+ We now look at $a_j$. This is, by definition,
+ \[
+ a_j = r_0 s_j + r_1 s_{j - 1} + \cdots + r_{j - 1} s_1 + r_j s_0.
+ \]
+ We know $r_0, \cdots, r_{j - 1}$ are all divisible by $p$. So
+ \[
+ p \mid r_0 s_j + r_1 s_{j - 1} + \cdots + r_{j - 1} s_1.
+ \]
+ Also, since $p \nmid r_j$ and $p \nmid s_0$, we know $p \nmid r_j s_0$, using the fact that $p$ is prime. So $p \nmid a_j$. So we must have $j = n$.
+
+ We also know that $j \leq k \leq n$. So we must have $j = k = n$. So $\deg g = n$. Hence $\ell = n - k = 0$. So $h$ is a constant. But we also know $f$ is primitive. So $h$ must be a unit. So this is not a proper factorization.
+\end{proof}
+
+\begin{eg}
+ Consider the polynomial $X^n - p \in \Z[X]$ for $p$ a prime. Apply Eisenstein's criterion with $p$, and observe all the conditions hold. This is certainly primitive, since this is monic. So $X^n - p$ is irreducible in $\Z[X]$, hence in $\Q[X]$. In particular, $X^n - p$ has no rational roots, i.e.\ $\sqrt[n]{p}$ is irrational (for $n > 1$).
+\end{eg}
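+The three conditions of Eisenstein's criterion are mechanical to check. A Python sketch (the function is ours; coefficients are listed from the constant term up):
+
+```python
+def eisenstein_applies(coeffs, p):
+    """Check Eisenstein's criterion for a0 + a1 X + ... + an X^n at the prime p."""
+    *lower, a_n = coeffs
+    a_0 = coeffs[0]
+    return (a_n % p != 0                          # p does not divide a_n
+            and all(a % p == 0 for a in lower)    # p divides a_0, ..., a_{n-1}
+            and a_0 % (p * p) != 0)               # p^2 does not divide a_0
+
+# X^5 - 7: coefficients [-7, 0, 0, 0, 0, 1], with the prime 7
+assert eisenstein_applies([-7, 0, 0, 0, 0, 1], 7)
+# X^2 - 4 fails the p^2 condition at p = 2
+assert not eisenstein_applies([-4, 0, 1], 2)
+```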
+
+\begin{eg}
+ Consider a polynomial
+ \[
+ f = X^{p - 1} + X^{p - 2} + \cdots + X^2 + X + 1 \in \Z[X],
+ \]
+ where $p$ is a prime number. If we look at this, we notice Eisenstein's criterion does not apply. What should we do? We observe that
+ \[
+ f = \frac{X^p - 1}{X - 1}.
+ \]
+ So it might be a good idea to let $Y = X - 1$. Then we get a new polynomial
+ \[
+ \hat{f} = \hat{f}(Y) = \frac{(Y + 1)^p - 1}{Y} = Y^{p - 1} + \binom{p}{1} Y^{p - 2} + \binom{p}{2} Y^{p - 3} + \cdots + \binom{p}{p - 1}.
+ \]
+ When we look at it hard enough, we notice Eisenstein's criterion can be applied --- we know $p \mid \binom{p}{i}$ for $1 \leq i \leq p - 1$, but $p^2 \nmid \binom{p}{p - 1} = p$. So $\hat{f}$ is irreducible in $\Z[Y]$.
+
+ Now if we had a factorization
+ \[
+ f(X) = g(X)h(X) \in \Z[X],
+ \]
+ then we get
+ \[
+ \hat{f}(Y) = g(Y + 1)h(Y + 1)
+ \]
+ in $\Z[Y]$. Since $\hat{f}$ is irreducible, one of $g(Y + 1)$ and $h(Y + 1)$ must be a unit, and hence so is $g$ or $h$. So $f$ is irreducible.
+
+ Hence none of the roots of $f$ are rational (but we already know that --- they are not even real!).
+\end{eg}
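+We can verify the divisibility pattern of the shifted coefficients directly, say for $p = 7$ (a quick sketch):
+
+```python
+from math import comb
+
+p = 7
+# Coefficients of ((Y+1)^p - 1)/Y, from the constant term up: the
+# coefficient of Y^(i-1) is C(p, i), for i = 1, ..., p.
+coeffs = [comb(p, i) for i in range(1, p + 1)]
+assert coeffs[-1] == 1                        # leading coefficient: p does not divide 1
+assert all(c % p == 0 for c in coeffs[:-1])   # p divides all other coefficients
+assert coeffs[0] % (p * p) != 0               # constant term C(p, 1) = p, not divisible by p^2
+```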
+
+\subsection{Gaussian integers}
+We've mentioned the Gaussian integers already.
+\begin{defi}[Gaussian integers]\index{Gaussian integers}
+ The \emph{Gaussian integers} form the subring
+ \[
+ \Z[i] = \{a + bi: a, b\in \Z\} \leq \C.
+ \]
+\end{defi}
+
+We have already shown that the norm $N(a + ib) = a^2 + b^2$ is a Euclidean function for $\Z[i]$. So $\Z[i]$ is a Euclidean domain, hence principal ideal domain, hence a unique factorization domain.
+
+Since the units must have norm $1$, they are precisely $\pm 1, \pm i$. What does factorization in $\Z[i]$ look like? What are the primes? We know we are going to get new primes, i.e.\ primes that are not integers, while we will lose some other primes. For example, we have
+\[
+ 2 = (1 + i)(1 - i).
+\]
+So $2$ is not irreducible, hence not prime. However, $3$ is a prime. We have $N(3) = 9$. So if $3 = uv$, with $u, v$ not units, then $9 = N(u)N(v)$, and neither $N(u)$ nor $N(v)$ are $1$. So $N(u) = N(v) = 3$. However, $3 = a^2 + b^2$ has no solutions with $a, b \in \Z$. So there is nothing of norm $3$. So $3$ is irreducible, hence a prime.
+
+Also, $5$ is not prime, since
+\[
+ 5 = (1 + 2i)(1 - 2i).
+\]
+How can we understand which primes stay as primes in the Gaussian integers?
+\begin{prop}
+ A prime number $p \in \Z$ is prime in $\Z[i]$ if and only if $p \not= a^2 + b^2$ for $a, b \in \Z \setminus \{0\}$.
+\end{prop}
+The proof is exactly what we have done so far.
+
+\begin{proof}
+ If $p = a^2 + b^2$, then $p = (a + ib)(a - ib)$. So $p$ is not irreducible.
+
+ Now suppose $p = uv$, with $u, v$ not units. Taking norms, we get $p^2 = N(u) N(v)$. So if $u$ and $v$ are not units, then $N(u) = N(v) = p$. Writing $u = a + ib$, then this says $a^2 + b^2 = p$.
+\end{proof}
+
+So what we have to do is to understand when a prime $p$ can be written as a sum of two squares. We will need the following helpful lemma:
+
+\begin{lemma}
+ Let $p$ be a prime number. Let $\F_p = \Z/p\Z$ be the field with $p$ elements. Let $\F_p^\times = \F_p\setminus \{0\}$ be the group of invertible elements under multiplication. Then $\F_p^\times \cong C_{p - 1}$.
+\end{lemma}
+
+\begin{proof}
+ Certainly $\F_p^\times$ has order $p - 1$, and is abelian. We know from the classification of finite abelian groups that if $\F_p^\times$ is not cyclic, then it must contain a subgroup $C_m \times C_m$ for $m > 1$ (we can write it as $C_d \times C_{d'} \times \cdots$ with $d' \mid d$, and $C_d$ has a subgroup isomorphic to $C_{d'}$).
+
+ We consider the polynomial $X^m - 1 \in \F_p[X]$. Since $\F_p[X]$ is a UFD, $X^m - 1$ factors into irreducibles, and at best these are all linear. So $X^m - 1$ has at most $m$ distinct roots. But if $C_m \times C_m \leq \F_p^\times$, then we can find $m^2$ elements of order dividing $m$. So there are $m^2$ elements of $\F_p$ which are roots of $X^m - 1$. This is a contradiction, since $m^2 > m$. So $\F_p^\times$ is cyclic.
+\end{proof}
+
+This is a funny proof, since we have not found any element that has order $p - 1$.
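+Indeed, the proof is non-constructive, but for any given small prime a generator is easy to find by direct search (a sketch; the helper is ours):
+
+```python
+def is_generator(g, p):
+    """Does g generate the multiplicative group of Z/pZ?"""
+    seen, x = set(), 1
+    for _ in range(p - 1):
+        x = x * g % p
+        seen.add(x)
+    return len(seen) == p - 1
+
+# The lemma guarantees a generator exists for every prime p:
+for p in [3, 5, 7, 11, 13]:
+    assert any(is_generator(g, p) for g in range(1, p))
+```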
+
+\begin{prop}
+ The primes in $\Z[i]$ are, up to associates,
+ \begin{enumerate}
+ \item Prime numbers $p \in \Z \leq \Z[i]$ such that $p\equiv 3 \pmod 4$.
+ \item Gaussian integers $z \in \Z[i]$ with $N(z) = z\bar{z} = p$ for some prime $p$ such that $p = 2$ or $p \equiv 1 \pmod 4$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ We first show these are primes. If $p \equiv 3 \pmod 4$, then $p \not= a^2 + b^2$, since a square number mod $4$ is always $0$ or $1$. So these are primes in $\Z[i]$.
+
+ On the other hand, if $N(z) = p$, and $z = uv$, then $N(u) N(v) = p$. So $N(u)$ is $1$ or $N(v)$ is $1$. So $u$ or $v$ is a unit. Note that we did not use the condition that $p \not\equiv 3 \pmod 4$. This is not needed, since $N(z)$ is always a sum of squares, and hence $N(z)$ cannot be a prime that is $3$ mod $4$.
+
+ Now let $z \in \Z[i]$ be irreducible, hence prime. Then $\bar{z}$ is also irreducible. So $N(z) = z\bar{z}$ is a factorization of $N(z)$ into irreducibles. Let $p \in \Z$ be an ordinary prime number dividing $N(z)$, which exists since $N(z) \not= 1$.
+
+ Now if $p \equiv 3\pmod 4$, then $p$ itself is prime in $\Z[i]$ by the first part of the proof. So $p \mid N(z) = z\bar{z}$. So $p \mid z$ or $p \mid \bar{z}$. Note that if $p \mid \bar{z}$, then $p \mid z$ by taking complex conjugates. So we get $p \mid z$. Since both $p$ and $z$ are irreducible, they must be equal up to associates.
+
+ Otherwise, we get $p = 2$ or $p \equiv 1 \pmod 4$. If $p \equiv 1 \pmod 4$, then $p - 1 = 4k$ for some $k \in \Z$. As $\F_p^\times \cong C_{p - 1} = C_{4k}$, there is a unique element of order $2$ (this is true for any cyclic group of order $4k$ --- think $\Z/4k\Z$). This must be $[-1] \in \F_p$. Now let $a \in \F_p^\times$ be an element of order $4$. Then $a^2$ has order $2$. So $[a^2] = [-1]$.
+
+ This is a complicated way of saying we can find an $a$ such that $p \mid a^2 + 1$. Thus $p \mid (a + i)(a - i)$. In the case where $p = 2$, we know by checking directly that $2 = (1 + i)(1 - i)$.
+
+ In either case, we deduce that $p$ (or $2$) is not prime (hence irreducible), since it clearly does not divide $a \pm i$ (or $1 \pm i$). So we can write $p = z_1 z_2$, for $z_1, z_2 \in \Z[i]$ not units. Now we get
+ \[
+ p^2 = N(p) = N(z_1) N(z_2).
+ \]
+ As the $z_i$ are not units, we know $N(z_1) = N(z_2) = p$. By definition, this means $p = z_1 \bar{z}_1 = z_2 \bar{z}_2$. But also $p = z_1 z_2$. So we must have $\bar{z}_1 = z_2$.
+
+ Finally, we have $p = z_1 \bar{z}_1 \mid N(z) = z\bar{z}$. All these $z$, $z_i$ are irreducible. So $z$ must be an associate of $z_1$ (or maybe $\bar{z}_1$). So in particular $N(z) = p$.
+\end{proof}
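+Up to associates, then, deciding whether an ordinary prime stays prime in $\Z[i]$ reduces to a finite search for a representation $p = a^2 + b^2$. A Python sketch (the helper name is ours):
+
+```python
+from math import isqrt
+
+def is_sum_of_two_squares(p):
+    """Can the prime p be written as a^2 + b^2 with a, b nonzero integers?"""
+    return any(isqrt(p - a * a) ** 2 == p - a * a
+               for a in range(1, isqrt(p) + 1))
+
+# A prime p stays prime in Z[i] exactly when this fails, which for odd
+# primes happens exactly when p ≡ 3 (mod 4):
+for p in [3, 7, 11, 19, 23]:
+    assert not is_sum_of_two_squares(p)
+for p in [2, 5, 13, 17, 29]:
+    assert is_sum_of_two_squares(p)
+```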
+
+\begin{cor}
+ An integer $n \in \Z_{\geq 0}$ may be written as $x^2 + y^2$ (as the sum of two squares) if and only if ``when we write $n = p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}$ as a product of distinct primes, then $p_i \equiv 3 \pmod 4$ implies $n_i$ is even''.
+\end{cor}
+
+We have proved this in the case when $n$ is a prime.
+
+\begin{proof}
+ If $n = x^2 + y^2$, then we have
+ \[
+ n = (x + iy)(x - iy) = N(x + iy).
+ \]
+ Let $z = x + iy$. So we can write $z = \alpha_1 \cdots \alpha_q$ as a product of irreducibles in $\Z[i]$. By the proposition, each $\alpha_i$ is either $\alpha_i = p$ (a genuine prime number with $p \equiv 3\pmod 4$), or $N(\alpha_i) = p$ is a prime number which is either $2$ or $\equiv 1 \pmod 4$. We now take the norm to obtain
+ \[
+ n = x^2 + y^2 = N(z) = N(\alpha_1) N(\alpha_2) \cdots N(\alpha_q).
+ \]
+ Now each $N(\alpha_i)$ is either $p^2$ with $p \equiv 3\pmod 4$, or is just $p$ for $p = 2$ or $p \equiv 1\pmod 4$. So if $p^m$ is the largest power of $p$ that divides $n$, we find that $m$ must be even whenever $p \equiv 3 \pmod 4$.
+
+ Conversely, let $n = p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}$ be a product of distinct primes. Now for each $p_i$, either $p_i \equiv 3 \pmod 4$, and $n_i$ is even, in which case
+ \[
+ p_i^{n_i} = (p_i^2)^{n_i/2} = N(p_i^{n_i/2});
+ \]
+ or $p_i = 2$ or $p_i \equiv 1 \pmod 4$, in which case, the above proof shows that $p_i = N(\alpha_i)$ for some $\alpha_i$. So $p_i^{n_i} = N(\alpha_i^{n_i})$.
+
+ Since the norm is multiplicative, we can write $n$ as the norm of some $z \in \Z[i]$. So
+ \[
+ n = N(z) = N(x + iy) = x^2 + y^2,
+ \]
+ as required.
+\end{proof}
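+The corollary gives a finite test. A Python sketch comparing a brute-force search with the prime-factorization criterion (the helper names are ours):
+
+```python
+from math import isqrt
+from collections import Counter
+
+def factorize(n):
+    """Prime factorization of n >= 1 as a Counter {prime: exponent}."""
+    f, d = Counter(), 2
+    while d * d <= n:
+        while n % d == 0:
+            f[d] += 1
+            n //= d
+        d += 1
+    if n > 1:
+        f[n] += 1
+    return f
+
+def sum_of_two_squares_by_search(n):
+    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))
+
+def sum_of_two_squares_by_criterion(n):
+    """The corollary: every prime ≡ 3 (mod 4) must occur to an even power."""
+    return all(e % 2 == 0 for p, e in factorize(n).items() if p % 4 == 3)
+
+for n in range(1, 500):
+    assert sum_of_two_squares_by_search(n) == sum_of_two_squares_by_criterion(n)
+```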
+
+\begin{eg}
+ We know $65 = 5 \times 13$. Since $5, 13 \equiv 1 \pmod 4$, it is a sum of squares. Moreover, the proof tells us how to find $65$ as the sum of squares. We have to factor $5$ and $13$ in $\Z[i]$. We have
+ \begin{align*}
+ 5 &= (2 + i)(2 - i)\\
+ 13 &= (2 + 3i)(2 - 3i).
+ \end{align*}
+ So we know
+ \[
+ 65 = N(2 + i)N(2 + 3i) = N((2 + i)(2 + 3i)) = N(1 + 8i) = 1^2 + 8^2.
+ \]
+ But there is a choice here. We had to pick which factor is $\alpha$ and which is $\bar{\alpha}$. So we can also write
+ \[
+ 65 = N((2 + i)(2 - 3i)) = N(7 - 4i) = 7^2 + 4^2.
+ \]
+ So not only are we able to write them as sum of squares, but this also gives us many ways of writing $65$ as a sum of squares.
+\end{eg}
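+The two ways of pairing the factors can be checked with exact integer arithmetic, representing $a + bi$ as a pair (a sketch; the helper names are ours):
+
+```python
+def gauss_mul(z, w):
+    """(a + bi)(c + di) in Z[i], with z, w given as (re, im) pairs."""
+    (a, b), (c, d) = z, w
+    return (a * c - b * d, a * d + b * c)
+
+def norm(z):
+    a, b = z
+    return a * a + b * b
+
+assert gauss_mul((2, 1), (2, 3)) == (1, 8)     # 65 = 1^2 + 8^2
+assert gauss_mul((2, 1), (2, -3)) == (7, -4)   # 65 = 7^2 + 4^2
+assert norm((1, 8)) == norm((7, -4)) == 65
+```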
+
+\subsection{Algebraic integers}
+We generalize the idea of Gaussian integers to \emph{algebraic integers}.
+
+\begin{defi}[Algebraic integer]\index{algebraic integer}
+ An $\alpha \in \C$ is called an algebraic integer if it is a root of a monic polynomial in $\Z[X]$, i.e.\ there is a monic $f \in \Z[X]$ such that $f(\alpha) = 0$.
+\end{defi}
+
+We can immediately check that this is a sensible definition --- not all complex numbers are algebraic integers, since there are only countably many polynomials with integer coefficients, hence only countably many algebraic integers, but there are uncountably many complex numbers.
+
+\begin{notation}
+ For $\alpha$ an algebraic integer, we write $\Z[\alpha] \leq \C$ for the smallest subring containing $\alpha$.
+\end{notation}
+This can also be defined for arbitrary complex numbers, but it is less interesting.
+
+We can also construct $\Z[\alpha]$ by taking it as the image of the map $\phi: \Z[X] \to \C$ given by $g \mapsto g(\alpha)$. So we can also write
+\[
+ \Z[\alpha] = \frac{\Z[X]}{I},\quad I = \ker \phi.
+\]
+Note that $I$ is non-zero, since, say, $f \in I$, by definition of an algebraic integer.
+\begin{prop}
+ Let $\alpha \in \C$ be an algebraic integer. Then the ideal
+ \[
+ I = \ker(\phi: \Z[X] \to \C, f \mapsto f(\alpha))
+ \]
+ is principal, and equal to $(f_\alpha)$ for some irreducible monic $f_\alpha$.
+\end{prop}
+This is a non-trivial theorem, since $\Z[X]$ is not a principal ideal domain. So there is no immediate guarantee that $I$ is generated by one polynomial.
+
+\begin{defi}[Minimal polynomial]\index{minimal polynomial}
+ Let $\alpha \in \C$ be an algebraic integer. Then the \emph{minimal polynomial} of $\alpha$ is the irreducible monic polynomial $f_\alpha$ such that $I = \ker(\phi) = (f_\alpha)$.
+\end{defi}
+
+\begin{proof}
+ By definition, there is a monic $f \in \Z[X]$ such that $f(\alpha) = 0$. So $f \in I$. So $I \not= 0$. Now let $f_\alpha \in I$ be such a polynomial of minimal degree. We may suppose that $f_\alpha$ is primitive. We want to show that $I = (f_\alpha)$, and that $f_\alpha$ is irreducible.
+
+ Let $h \in I$. We pretend we are living in $\Q[X]$. Then we have the Euclidean algorithm. So we can write
+ \[
+ h = f_\alpha q + r,
+ \]
+ with $r = 0$ or $\deg r < \deg f_\alpha$. This was done over $\Q[X]$, not $\Z[X]$. We now clear denominators. We multiply by some $a \in \Z$ to get
+ \[
+ ah = f_\alpha (aq) + (ar),
+ \]
+ where now $(aq), (ar) \in \Z[X]$. We now evaluate these polynomials at $\alpha$. Then we have
+ \[
+ a h(\alpha) = f_\alpha(\alpha) aq(\alpha) + ar(\alpha).
+ \]
+ We know $f_\alpha(\alpha) = h(\alpha) = 0$, since $f_\alpha$ and $h$ are both in $I$. So $ar(\alpha) = 0$. So $(ar) \in I$. As $f_\alpha \in I$ has minimal degree, we cannot have $\deg (r) = \deg(ar) < \deg(f_\alpha)$. So we must have $r = 0$.
+
+ Hence we know
+ \[
+ a h = f_\alpha \cdot(aq)
+ \]
+ is a factorization in $\Z[X]$. This is almost right, but we want to factor $h$, not $ah$. Again, taking contents of everything, we get
+ \[
+ a c(h) = c(ah) = c(f_\alpha (aq)) = c(aq),
+ \]
+ as $f_\alpha$ is primitive. In particular, $a \mid c(aq)$. This, by definition of content, means $(aq)$ can be written as $a \bar{q}$, where $\bar{q} \in \Z[X]$. Cancelling, we get $q = \bar{q} \in \Z[X]$. So we know
+ \[
+ h = f_\alpha q \in (f_\alpha).
+ \]
+ So we know $I = (f_\alpha)$.
+
+ To show $f_\alpha$ is irreducible, note that
+ \[
+ \frac{\Z[X]}{(f_\alpha)} \cong \frac{\Z[X]}{\ker \phi} \cong \im (\phi) = \Z[\alpha] \leq \C.
+ \]
+ Since $\C$ is an integral domain, so is $\im(\phi)$. So we know $\Z[X]/(f_\alpha)$ is an integral domain. So $(f_\alpha)$ is prime. So $f_\alpha$ is prime, hence irreducible.
+
+ If this final line looks magical, we can unravel this proof as follows: suppose $f_\alpha = pq$ for some non-units $p, q$. Then since $f_\alpha(\alpha) = 0$, we know $p(\alpha) q(\alpha) = 0$. Since $p(\alpha), q(\alpha) \in \C$, which is an integral domain, we must have, say, $p(\alpha) = 0$. So $p \in I = (f_\alpha)$. But $\deg p < \deg f_\alpha$, so $p$ cannot be a multiple of $f_\alpha$. Contradiction.
+\end{proof}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item We know $\alpha = i$ is an algebraic integer with $f_\alpha = X^2 + 1$.
+ \item Also, $\alpha = \sqrt{2}$ is an algebraic integer with $f_\alpha = X^2 - 2$.
+ \item More interestingly, $\alpha = \frac{1}{2}(1 + \sqrt{-3})$ is an algebraic integer with $f_\alpha = X^2 - X + 1$.
+ \item The polynomial $X^5 - X + d \in \Z[X]$ with $d \in \Z_{\geq 1}$ has precisely one real root $\alpha$, which is an algebraic integer. It is a theorem, which will be proved in IID Galois Theory, that this $\alpha$ cannot be constructed from integers via $+, -, \times, \div, \sqrt[n]{\ph}$. It is also a theorem, found in IID Galois Theory, that degree $5$ is the smallest degree for which this can happen (the proof involves writing down formulas analogous to the quadratic formula for degree $3$ and $4$ polynomials).
+ \end{enumerate}
+\end{eg}
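As a sanity check (no substitute for a proof), the claimed minimal polynomials above can be evaluated numerically. The following Python sketch, with helper names of our own choosing, confirms that each $f_\alpha(\alpha)$ vanishes up to floating-point error; in particular $\frac{1}{2}(1 + \sqrt{-3})$ satisfies $X^2 - X + 1 = 0$.

```python
# Floating-point sanity check of the minimal polynomial examples.
# This is numerical evidence only, not a proof.
import cmath

alpha1 = 1j                          # claimed f_alpha = X^2 + 1
alpha2 = 2 ** 0.5                    # claimed f_alpha = X^2 - 2
alpha3 = (1 + cmath.sqrt(-3)) / 2    # claimed f_alpha = X^2 - X + 1

checks = [
    abs(alpha1 ** 2 + 1),
    abs(alpha2 ** 2 - 2),
    abs(alpha3 ** 2 - alpha3 + 1),
]
assert all(c < 1e-12 for c in checks)
```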
+
+\begin{lemma}
+ Let $\alpha \in \Q$ be an algebraic integer. Then $\alpha \in \Z$.
+\end{lemma}
+
+\begin{proof}
+ Let $f_\alpha \in \Z[X]$ be the minimal polynomial, which is irreducible. In $\Q[X]$, the polynomial $X - \alpha$ must divide $f_\alpha$. However, by Gauss' lemma, we know $f_\alpha$ is also irreducible in $\Q[X]$. So we must have $f_\alpha = X - \alpha \in \Z[X]$. So $\alpha$ is an integer.
+\end{proof}
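This lemma is essentially the rational root theorem for monic polynomials. As an illustrative sketch (a brute-force search of our own, not part of the notes), every rational root the following Python code finds for a monic polynomial over $\Z$ is in fact an integer:

```python
# Brute-force illustration: any rational root of a monic polynomial
# over Z is an integer (helper names are ours).
from fractions import Fraction

def evaluate(coeffs, x):
    """Evaluate a polynomial given by coefficients of X^0, X^1, ... at x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def rational_roots(coeffs, bound=30):
    """Search for rational roots p/q with |p| <= bound and 1 <= q <= bound."""
    return {Fraction(p, q)
            for q in range(1, bound + 1)
            for p in range(-bound, bound + 1)
            if evaluate(coeffs, Fraction(p, q)) == 0}

# X^2 - 3X + 2 is monic: its rational roots 1 and 2 are integers.
roots = rational_roots([2, -3, 1])
assert roots == {Fraction(1), Fraction(2)}
assert all(r.denominator == 1 for r in roots)

# X^2 - 2 is monic: it has no rational roots at all.
assert rational_roots([-2, 0, 1]) == set()
```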
+
+It turns out the collection of all algebraic integers forms a subring of $\C$. This is not at all obvious --- given monic $f, g \in \Z[X]$ such that $f(\alpha) = g(\beta) = 0$, there is no easy way to find a monic $h$ such that $h(\alpha + \beta) = 0$. We will prove this much later on in the course.
+
+\subsection{Noetherian rings}
+We now revisit the idea of Noetherian rings, something we have briefly mentioned when proving that PIDs are UFDs.
+\begin{defi}[Noetherian ring]\index{Noetherian ring}
+ A ring is \emph{Noetherian} if for any chain of ideals
+ \[
+ I_1 \subseteq I_2 \subseteq I_3 \subseteq \cdots,
+ \]
+ there is some $N$ such that $I_N = I_{N + 1} = I_{N + 2} = \cdots$.
+
+ This condition is known as the \emph{ascending chain condition}.
+\end{defi}
+
+\begin{eg}
+ Every finite ring is Noetherian. This is since there are only finitely many possible ideals.
+\end{eg}
+
+\begin{eg}
+ Every field is Noetherian. This is since there are only two possible ideals.
+\end{eg}
+
+\begin{eg}
+ Every principal ideal domain (e.g.\ $\Z$) is Noetherian. This is easy to check directly, but the next proposition will make this utterly trivial.
+\end{eg}
+
+Most rings we know and love are indeed Noetherian. However, we can explicitly construct some non-Noetherian rings.
+\begin{eg}
+ The ring $\Z[X_1, X_2, X_3, \cdots]$ is not Noetherian. This has the chain of strictly increasing ideals
+ \[
+ (X_1) \subsetneq (X_1, X_2) \subsetneq (X_1, X_2, X_3) \subsetneq \cdots.
+ \]
+\end{eg}
+
+We have the following proposition that makes Noetherian rings much more concrete, and makes it obvious why PIDs are Noetherian.
+
+\begin{defi}[Finitely generated ideal]\index{finitely-generated ideal}
+ An ideal $I$ is \emph{finitely generated} if it can be written as $I = (r_1, \cdots, r_n)$ for some $r_1, \cdots, r_n \in R$.
+\end{defi}
+
+\begin{prop}
+ A ring is Noetherian if and only if every ideal is finitely generated.
+\end{prop}
+Every PID trivially satisfies this condition. So we know every PID is Noetherian.
+
+\begin{proof}
+ We start with the easier direction --- from concrete to abstract.
+
+ Suppose every ideal of $R$ is finitely generated. Given the chain $I_1 \subseteq I_2 \subseteq \cdots$, consider the ideal
+ \[
+ I = I_1 \cup I_2 \cup I_3 \cup \cdots.
+ \]
+ This is indeed an ideal, the key point being that the $I_n$ form a nested chain; you will check this manually in example sheet 2.
+
+ We know $I$ is finitely generated, say $I = (r_1, \cdots, r_n)$, with $r_i \in I_{k_i}$. Let
+ \[
+ K = \max_{i = 1, \cdots, n}\{k_i\}.
+ \]
+ Then $r_1, \cdots, r_n \in I_K$. So $I = (r_1, \cdots, r_n) \subseteq I_K \subseteq I$, i.e.\ $I_K = I$. So $I_K = I_{K + 1} = I_{K + 2} = \cdots$.
+
+ To prove the other direction, suppose there is an ideal $I \lhd R$ that is not finitely generated. We pick $r_1 \in I$. Since $I$ is not finitely generated, we know $(r_1) \not= I$. So we can find some $r_2 \in I \setminus (r_1)$.
+
+ Again $(r_1, r_2) \not= I$. So we can find $r_3 \in I\setminus (r_1, r_2)$. We continue on, and then can find an infinite strictly ascending chain
+ \[
+ (r_1) \subsetneq (r_1, r_2) \subsetneq (r_1, r_2, r_3) \subsetneq \cdots.
+ \]
+ So $R$ is not Noetherian.
+\end{proof}
+When we have developed some properties or notions, a natural thing to ask is whether it passes on to subrings and quotients.
+
+If $R$ is Noetherian, does every subring of $R$ have to be Noetherian? The answer is no. For example, since $\Z[X_1, X_2, \cdots]$ is an integral domain, we can take its field of fractions, which is a field, hence Noetherian, but $\Z[X_1, X_2, \cdots]$ is a subring of its field of fractions.
+
+How about quotients?
+\begin{prop}
+ Let $R$ be a Noetherian ring and $I$ be an ideal of $R$. Then $R/I$ is Noetherian.
+\end{prop}
+
+\begin{proof}
+ Whenever we see quotients, we should think of them as the image of a homomorphism. Consider the quotient map
+ \begin{align*}
+ \pi: R&\to R/I\\
+ x &\mapsto x + I.
+ \end{align*}
+ We can prove this result using either finite generation or the ascending chain condition. We go for the former. Let $J \lhd R/I$ be an ideal. We want to show that $J$ is finitely generated. Consider the inverse image $\pi^{-1}(J)$. This is an ideal of $R$, and is hence finitely generated, since $R$ is Noetherian. So $\pi^{-1}(J) = (r_1, \cdots, r_n)$ for some $r_1, \cdots, r_n \in R$. Since $\pi$ is surjective, $J$ is generated by $\pi(r_1), \cdots, \pi(r_n)$. So done.
+\end{proof}
+
+This gives us many examples of Noetherian rings. But there is one important case we have not tackled yet --- polynomial rings. We know $\Z[X]$ is not a PID, since $(2, X)$ is not principal. However, this is finitely generated. So we are not dead. We might try to construct some non-finitely generated ideal, but we are bound to fail. This is since $\Z[X]$ \emph{is} a Noetherian ring. This is a special case of the following powerful theorem:
+
+\begin{thm}[Hilbert basis theorem]\index{Hilbert basis theorem}
+ Let $R$ be a Noetherian ring. Then so is $R[X]$.
+\end{thm}
+Since $\Z$ is Noetherian, we know $\Z[X]$ also is. Hence so is $\Z[X, Y]$ etc.
+
+The Hilbert basis theorem was, surprisingly, proven by Hilbert himself. Before that, there were many mathematicians studying something known as \emph{invariant theory}. The idea is that we have some interesting objects, and we want to look at their symmetries. Often, there are infinitely many possible such symmetries, and one interesting question to ask is whether there is a finite set of symmetries that generate all possible symmetries.
+
+This sounds like an interesting problem, so people devoted much time, writing down funny proofs, showing that the symmetries are finitely generated. However, the collection of such symmetries is often just an ideal of some funny ring. So Hilbert came along and proved the Hilbert basis theorem, and showed once and for all that those rings are Noetherian, and hence the symmetries are finitely generated.
+
+\begin{proof}
+ The proof is not too hard, but we will need to use \emph{both} the ascending chain condition and the fact that all ideals are finitely-generated.
+
+ Let $I \lhd R[X]$ be an ideal. We want to show it is finitely generated. Since we know $R$ is Noetherian, we want to generate some ideals of $R$ from $I$.
+
+ How can we do this? We can do the silly thing of taking all constants of $I$, i.e.\ $I \cap R$. But we can do better: we can consider all linear polynomials in $I$, and take their leading coefficients. A moment's thought shows that this is indeed an ideal.
+
+ In general, for $n = 0, 1, 2, \cdots$, we let
+ \[
+ I_n = \{r \in R: \text{there is some }f \in I\text{ such that }f = r X^n + \cdots\} \cup \{0\}.
+ \]
+ Then it is easy to see, using the strong closure property, that each ideal $I_n$ is an ideal of $R$. Moreover, they form a chain, since if $f \in I$, then $Xf \in I$, by strong closure. So $I_n \subseteq I_{n + 1}$ for all $n$.
+
+ By the ascending chain condition of $R$, we know there is some $N$ such that $I_N = I_{N + 1} = \cdots$. Now for each $0 \leq n \leq N$, since $R$ is Noetherian, we can write
+ \[
+ I_n = (r_1^{(n)}, r_2^{(n)}, \cdots, r^{(n)}_{k(n)}).
+ \]
+ Now for each $r_i^{(n)}$, we choose some $f_i^{(n)} \in I$ with $f^{(n)}_i = r_i^{(n)} X^n + \cdots$.
+
+ We now claim the polynomials $f_i^{(n)}$ for $0 \leq n \leq N$ and $1 \leq i \leq k(n)$ generate $I$.
+
+ Suppose not. We pick $g \in I$ of minimal degree not generated by the $f_i^{(n)}$.
+
+ There are two possible cases. If $\deg g = n \leq N$, suppose
+ \[
+ g = rX^n + \cdots.
+ \]
+ We know $r \in I_n$. So we can write
+ \[
+ r = \sum_i \lambda_i r^{(n)}_i
+ \]
+ for some $\lambda_i \in R$, since that's what generating an ideal means. Then we know
+ \[
+ \sum_i \lambda_i f_i^{(n)} = r X^n + \cdots \in I.
+ \]
+ But if $g$ is not in the span of the $f_i^{(j)}$, then neither is $g - \sum_i \lambda_i f_i^{(n)}$. But this has a lower degree than $g$. This is a contradiction.
+
+ Now suppose $\deg g = n > N$. This might look scary, but it is not, since $I_n = I_N$. So we can repeat the same argument. We write
+ \[
+ g = r X^n + \cdots.
+ \]
+ But we know $r \in I_n = I_N$. So we know
+ \[
+ r = \sum_i \lambda_i r_i^{(N)}.
+ \]
+ Then we know
+ \[
+ X^{n - N}\sum_i \lambda_i f_i^{(N)} = r X^n + \cdots \in I.
+ \]
+ Hence $g - X^{n - N} \sum_i \lambda_i f^{(N)}_i$ has smaller degree than $g$, but is not in the span of the $f_i^{(j)}$. This is again a contradiction, and we are done.
+\end{proof}
+
+As an aside, let $\mathcal{E} \subseteq F[X_1, X_2, \cdots, X_n]$ be any set of polynomials. We view this as a set of equations $f = 0$ for each $f \in \mathcal{E}$. The claim is that to solve the potentially infinite set of equations $\mathcal{E}$, we actually only have to solve finitely many equations.
+
+Consider the ideal $(\mathcal{E}) \lhd F[X_1, \cdots, X_n]$. By the Hilbert basis theorem, there is a finite list $f_1, \cdots, f_k$ such that
+\[
+ (f_1, \cdots, f_k) = (\mathcal{E}).
+\]
+We want to show that we only have to solve $f_i(x) = 0$ for these $f_i$. Given $(\alpha_1, \cdots, \alpha_n) \in F^n$, consider the homomorphism
+\begin{align*}
+ \phi_\alpha: F[X_1, \cdots, X_n] &\to F\\
+ X_i &\mapsto \alpha_i.
+\end{align*}
+Then we know $(\alpha_1, \cdots, \alpha_n) \in F^n$ is a solution to the equations $\mathcal{E}$ if and only if $(\mathcal{E}) \subseteq \ker(\phi_\alpha)$. By our choice of $f_i$, this is true if and only if $(f_1, \cdots, f_k) \subseteq \ker(\phi_\alpha)$. By inspection, this is true if and only if $(\alpha_1, \cdots, \alpha_n)$ is a solution to all of $f_1, \cdots, f_k$. So solving $\mathcal{E}$ is the same as solving $f_1, \cdots, f_k$. This is useful in, say, algebraic geometry.
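As a toy instance of this aside (our own example, not the lecturer's), consider the infinite system $\mathcal{E} = \{X^k - X : k \geq 2\}$ over $\Q$. Since $X^k - X = (X^{k-2} + \cdots + X + 1)(X^2 - X)$, we have $(\mathcal{E}) = (X^2 - X)$, so it suffices to solve the single equation $X^2 - X = 0$:

```python
# Toy instance: the infinite system {x^k - x = 0 : k >= 2} has exactly
# the same solutions as its single generator x^2 - x = 0.
def solves_all(x, max_k=20):
    """Check x against the first few equations of the infinite family."""
    return all(x ** k - x == 0 for k in range(2, max_k + 1))

# Solutions of the generator x^2 - x = 0 among small integers:
solutions_of_generator = [x for x in range(-10, 11) if x ** 2 - x == 0]
assert solutions_of_generator == [0, 1]

# They automatically solve every equation in the family, since
# x^k - x = (x^{k-2} + ... + x + 1)(x^2 - x).
assert all(solves_all(x) for x in solutions_of_generator)
```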
+
+\section{Modules}
+Finally, we are going to look at modules. Recall that to define a vector space, we first pick some base field $\F$. We then defined a vector space to be an abelian group $V$ with an action of $\F$ on $V$ (i.e.\ scalar multiplication) that is compatible with the multiplicative and additive structure of $\F$.
+
+In the definition, we did not at all mention division in $\F$. So in fact we can make the same definition, but allow $\F$ to be a ring instead of a field. We call these \emph{modules}. Unfortunately, most results we prove about vector spaces \emph{do} use the fact that $\F$ is a field. So many linear algebra results do not apply to modules, and modules have much richer structures.
+
+\subsection{Definitions and examples}
+\begin{defi}[Module]\index{module}
+ Let $R$ be a commutative ring. We say a quadruple $(M, +, 0_M, \ph)$ is an \emph{$R$-module} if
+ \begin{enumerate}
+ \item $(M, +, 0_M)$ is an abelian group
+ \item The operation $\ph: R \times M \to M$ satisfies
+ \begin{enumerate}
+ \item $(r_1 + r_2) \cdot m = (r_1 \cdot m) + (r_2 \cdot m)$;
+ \item $r\cdot (m_1 + m_2) = (r\cdot m_1) + (r\cdot m_2)$;
+ \item $r_1 \cdot (r_2 \cdot m) = (r_1 \cdot r_2) \cdot m$; and
+ \item $1_R \cdot m = m$.
+ \end{enumerate}
+ \end{enumerate}
+\end{defi}
+Note that there are two different additions going on --- addition in the ring and addition in the module, and similarly two notions of multiplication. However, it is easy to distinguish them since they operate on different things. If needed, we can make them explicit by writing, say, $+_R$ and $+_M$.
+
+We can imagine modules as rings acting on abelian groups, just as groups can act on sets. Hence we might say ``$R$ acts on $M$'' to mean $M$ is an $R$-module.
+
+\begin{eg}
+ Let $\F$ be a field. An $\F$-module is precisely the same as a vector space over $\F$ (the axioms are the same).
+\end{eg}
+
+\begin{eg}
+ For any ring $R$, we have the $R$-module $R^n = R \times R \times \cdots \times R$ via
+ \[
+ r\cdot (r_1, \cdots, r_n) = (rr_1, \cdots, rr_n),
+ \]
+ using the ring multiplication. This is the same as the definition of the vector space $\F^n$ for fields $\F$.
+\end{eg}
+
+\begin{eg}
+ Let $I \lhd R$ be an ideal. Then it is an $R$-module via
+ \[
+ r \cdot_I a = r\cdot_R a,\quad r_1 +_I r_2 = r_1 +_R r_2.
+ \]
+ Also, $R/I$ is an $R$-module via
+ \[
+ r\cdot_{R/I} (a + I) = (r\cdot_R a) + I.
+ \]
+\end{eg}
+
+\begin{eg}
+ A $\Z$-module is precisely the same as an abelian group. For $A$ an abelian group, we have
+ \begin{align*}
+ \Z \times A &\to A\\
+ (n, a) &\mapsto \underbrace{a + \cdots + a}_{n\text{ times}},
+ \end{align*}
+ where we adopt the notation
+ \[
+ \underbrace{a + \cdots + a}_{-n\text{ times}} = \underbrace{(-a) + \cdots + (-a)}_{n\text{ times}},
+ \]
+ and adding something to itself $0$ times is just $0$.
+
+ This definition is essentially forced upon us, since by the axioms of a module, we must have $(1, a) \mapsto a$. Then we must send, say, $(2, a) = (1 + 1, a) \mapsto a + a$.
+\end{eg}
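The forced $\Z$-action described above is easy to implement. A minimal Python sketch (the abelian group is passed in abstractly via its addition, negation and zero; all names are our own):

```python
# The forced Z-action on an abelian group, given abstractly by its
# addition, negation and zero element.
def z_action(n, a, add, neg, zero):
    """n . a = a + ... + a (n times); negative n acts via inverses."""
    if n < 0:
        return z_action(-n, neg(a), add, neg, zero)
    result = zero
    for _ in range(n):
        result = add(result, a)
    return result

# Example: the abelian group Z/12Z, written additively.
add = lambda x, y: (x + y) % 12
neg = lambda x: (-x) % 12

assert z_action(3, 5, add, neg, 0) == 3    # 5 + 5 + 5 = 15 = 3 (mod 12)
assert z_action(-2, 5, add, neg, 0) == 2   # -(5 + 5) = -10 = 2 (mod 12)
assert z_action(0, 5, add, neg, 0) == 0    # the empty sum is 0
```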
+
+\begin{eg}
+ Let $\F$ be a field and $V$ a vector space over $\F$, and $\alpha: V \to V$ be a linear map. Then $V$ is an $\F[X]$-module via
+ \begin{align*}
+ \F[X] \times V &\to V\\
+ (f, \mathbf{v}) & \mapsto f(\alpha) (\mathbf{v}).
+ \end{align*}
+ This \emph{is} a module.
+
+ Note that we cannot just say that $V$ is an $\F[X]$-module. We have to specify the $\alpha$ as well. Picking a different $\alpha$ will give a different $\F[X]$-module structure.
+\end{eg}
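A small sketch of this module structure, taking $V = \Q^2$ and $\alpha$ the rotation by $90^\circ$ (an example of our own): the action of $f$ is literally ``evaluate $f$ at $\alpha$, then apply the result to $\mathbf{v}$''. Since $\alpha^2 = -\mathrm{id}$, the polynomial $X^2 + 1$ acts as zero.

```python
# F[X]-module structure on V = Q^2 attached to a linear map alpha:
# the polynomial f acts as f(alpha), evaluated on a vector.
from fractions import Fraction

def mat_vec(A, v):
    """Apply a 2x2 matrix to a vector."""
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def act(f, A, v):
    """f . v = f(alpha)(v); f lists coefficients of X^0, X^1, ..."""
    result = [Fraction(0), Fraction(0)]
    power = v[:]                      # alpha^0(v) = v
    for c in f:
        result = [result[i] + c * power[i] for i in range(2)]
        power = mat_vec(A, power)     # pass from alpha^k(v) to alpha^{k+1}(v)
    return result

# alpha = rotation by 90 degrees, so alpha^2 = -id.
A = [[Fraction(0), Fraction(-1)], [Fraction(1), Fraction(0)]]
v = [Fraction(1), Fraction(0)]

# (X^2 + 1) . v = (alpha^2 + id)(v) = 0.
assert act([Fraction(1), Fraction(0), Fraction(1)], A, v) == [Fraction(0), Fraction(0)]
```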
+
+\begin{eg}
+ Let $\phi: R \to S$ be a homomorphism of rings. Then any $S$-module $M$ may be considered as an $R$-module via
+ \begin{align*}
+ R\times M &\to M\\
+ (r, m) &\mapsto \phi(r) \cdot_M m.
+ \end{align*}
+\end{eg}
+
+\begin{defi}[Submodule]\index{submodule}
+ Let $M$ be an $R$-module. A subset $N \subseteq M$ is an \emph{$R$-submodule} if it is a subgroup of $(M, +, 0_M)$, and if $n \in N$ and $r \in R$, then $rn \in N$. We write $N \leq M$.
+\end{defi}
+
+\begin{eg}
+ We know $R$ itself is an $R$-module. Then a subset of $R$ is a submodule if and only if it is an ideal.
+\end{eg}
+
+\begin{eg}
+ A subset of an $\F$-module $V$, where $\F$ is a field, is an $\F$-submodule if and only if it is a vector subspace of $V$.
+\end{eg}
+
+\begin{defi}[Quotient module]
+ Let $N \leq M$ be an $R$-submodule. The \emph{quotient module} $M/N$ is the set of $N$-cosets in $(M, +, 0_M)$, with the $R$-action given by
+ \[
+ r\cdot (m + N) = (r\cdot m) + N.
+ \]
+\end{defi}
+It is easy to check this is well-defined and is indeed a module.
+
+Note that modules are different from rings and groups. In groups, we had subgroups, and we have some really nice ones called normal subgroups. We are only allowed to quotient by normal subgroups. In rings, we have subrings and ideals, which are unrelated objects, and we only quotient by ideals. In modules, we only have submodules, and we can quotient by arbitrary submodules.
+
+\begin{defi}[$R$-module homomorphism and isomorphism]\index{homomorphism}\index{isomorphism}
+ A function $f: M \to N$ between $R$-modules is an \emph{$R$-module homomorphism} if it is a homomorphism of abelian groups, and satisfies
+ \[
+ f(r \cdot m) = r \cdot f(m)
+ \]
+ for all $r \in R$ and $m \in M$.
+
+ An \emph{isomorphism} is a bijective homomorphism, and two $R$-modules are isomorphic if there is an isomorphism between them.
+\end{defi}
+Note that on the left, the multiplication is the action in $M$, while on the right, it is the action in $N$.
+
+\begin{eg}
+ If $\F$ is a field and $V, W$ are $\F$-modules (i.e.\ vector spaces over $\F$), then an $\F$-module homomorphism is precisely an $\F$-linear map.
+\end{eg}
+
+\begin{thm}[First isomorphism theorem]\index{first isomorphism theorem}
+ Let $f: M \to N$ be an $R$-module homomorphism. Then
+ \[
+ \ker f = \{m \in M: f(m) = 0\} \leq M
+ \]
+ is an $R$-submodule of $M$. Similarly,
+ \[
+ \im f = \{f(m): m \in M\}\leq N
+ \]
+ is an $R$-submodule of $N$. Then
+ \[
+ \frac{M}{\ker f} \cong \im f.
+ \]
+\end{thm}
+
+We will not prove this again. The proof is exactly the same.
+
+\begin{thm}[Second isomorphism theorem]\index{second isomorphism theorem}
+ Let $A, B \leq M$. Then
+ \[
+ A + B = \{m \in M: m = a + b\text{ for some } a \in A, b \in B\} \leq M,
+ \]
+ and
+ \[
+ A \cap B \leq M.
+ \]
+ We then have
+ \[
+ \frac{A + B}{A} \cong \frac{B}{A \cap B}.
+ \]
+\end{thm}
+
+\begin{thm}[Third isomorphism theorem]\index{third isomorphism theorem}
+ Let $N \leq L \leq M$. Then we have
+ \[
+ \frac{M}{L} \cong \left(\frac{M}{N}\right)\big/ \left(\frac{L}{N}\right).
+ \]
+\end{thm}
+
+Also, we have a correspondence
+\[
+ \{\text{submodules of }M/N\} \longleftrightarrow\{\text{submodules of }M\text{ which contain }N\}.
+\]
+It is an exercise to see what these mean in the cases where $R$ is a field, and modules are vector spaces.
+
+We now come to a concept that was not present in rings and groups.
+\begin{defi}[Annihilator]\index{annihilator}
+ Let $M$ be an $R$-module, and $m \in M$. The \emph{annihilator} of $m$ is
+ \[
+ \Ann(m) = \{r \in R: r \cdot m = 0\}.
+ \]
+ For any set $S \subseteq M$, we define
+ \[
+ \Ann(S) = \{r \in R: r \cdot m = 0\text{ for all }m \in S\} = \bigcap_{m \in S} \Ann(m).
+ \]
+ In particular, for the module $M$ itself, we have
+ \[
+ \Ann(M) = \{r \in R: r \cdot m = 0\text{ for all }m \in M\} = \bigcap_{m \in M} \Ann(m).
+ \]
+\end{defi}
+
+Note that the annihilator is a subset of $R$. Moreover it is an ideal --- if $r \cdot m = 0$ and $s \cdot m = 0$, then $(r + s) \cdot m = r \cdot m + s \cdot m = 0$. So $r + s \in \Ann(m)$. Moreover, if $r \cdot m = 0$, then also $(sr) \cdot m = s \cdot (r \cdot m) = 0$. So $sr \in \Ann(m)$.
+
+What is this good for? We first note that any $m \in M$ generates a submodule $Rm$ as follows:
+\begin{defi}[Submodule generated by element]
+ Let $M$ be an $R$-module, and $m \in M$. The \emph{submodule generated by $m$} is
+ \[
+ Rm = \{r \cdot m \in M: r \in R\}.
+ \]
+\end{defi}
+
+We consider the $R$-module homomorphism
+\begin{align*}
+ \phi: R &\to M\\
+ r &\mapsto rm.
+\end{align*}
+This is clearly a homomorphism. Then we have
+\begin{align*}
+ Rm &= \im(\phi),\\
+ \Ann(m) &= \ker(\phi).
+\end{align*}
+The conclusion is that
+\[
+ Rm \cong R/\Ann(m).
+\]
+As we mentioned, rings acting on modules is like groups acting on sets. We can think of this as the analogue of the orbit-stabilizer theorem.
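We can see $Rm \cong R/\Ann(m)$ in a concrete case (an example of our own): take $M = \Z/12\Z$ as a $\Z$-module and $m = 8$. Then $\Ann(8) = 3\Z$ and $\Z m = \{0, 4, 8\}$, which indeed has $|\Z/3\Z| = 3$ elements. A quick Python check:

```python
# M = Z/12Z as a Z-module, m = 8: compute the cyclic submodule Zm and a
# generator of Ann(m), and compare sizes as Rm = R/Ann(m) predicts.
from math import gcd

n, m = 12, 8
Zm = {(r * m) % n for r in range(n)}    # the submodule Zm = {0, 4, 8}
ann_generator = n // gcd(n, m)          # Ann(m) = (n / gcd(n, m)) Z = 3Z

assert Zm == {0, 4, 8}
assert ann_generator == 3
assert all((r * m) % n == 0 for r in range(0, 120, ann_generator))
assert len(Zm) == ann_generator         # |Zm| = |Z / Ann(m)| = 3
```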
+
+In general, we can generate a submodule with many elements.
+\begin{defi}[Finitely generated module]\index{finitely generated module}
+ An $R$-module $M$ is \emph{finitely generated} if there is a finite list of elements $m_1, \cdots, m_k$ such that
+ \[
+ M = Rm_1 + Rm_2 + \cdots + Rm_k = \{r_1 m_1 + r_2 m_2 + \cdots + r_k m_k: r_i \in R\}.
+ \]
+\end{defi}
+This is in some sense analogous to the idea of a vector space being finite-dimensional. However, it behaves rather differently.
+
+While this definition is rather concrete, it is often not the most helpful characterization of finitely-generated modules. Instead, we use the following lemma:
+\begin{lemma}
+ An $R$-module $M$ is finitely-generated if and only if there is a surjective $R$-module homomorphism $f: R^k \twoheadrightarrow M$ for some finite $k$.
+\end{lemma}
+
+\begin{proof}
+ If
+ \[
+ M = R m_1 + R m_2 + \cdots + R m_k,
+ \]
+ we define $f: R^k \to M$ by
+ \[
+ (r_1, \cdots, r_k) \mapsto r_1 m_1 + \cdots + r_k m_k.
+ \]
+ It is clear that this is an $R$-module homomorphism, and it is surjective precisely because the $m_i$ generate $M$. So done.
+
+ Conversely, given a surjection $f: R^k \twoheadrightarrow M$, we let
+ \[
+ m_i = f(0, 0, \cdots, 0, 1, 0, \cdots, 0),
+ \]
+ where the $1$ appears in the $i$th position. We now claim that
+ \[
+ M = R m_1 + R m_2 + \cdots + R m_k.
+ \]
+ So let $m \in M$. As $f$ is surjective, we know
+ \[
+ m = f(r_1, r_2, \cdots, r_k)
+ \]
+ for some $r_i$. We then have
+ \begin{align*}
+ &\hphantom{{}={}}f(r_1, r_2, \cdots, r_k)\\
+ &= f((r_1, 0, \cdots, 0) + (0, r_2, 0, \cdots, 0) + \cdots + (0, 0, \cdots, 0, r_k))\\
+ &= f(r_1, 0, \cdots, 0) + f(0, r_2, 0, \cdots, 0) + \cdots + f(0, 0, \cdots, 0, r_k)\\
+ &= r_1 f(1, 0, \cdots, 0) + r_2 f(0, 1, 0, \cdots, 0) + \cdots + r_k f(0, 0, \cdots, 0, 1)\\
+ &= r_1 m_1 + r_2 m_2 + \cdots + r_k m_k.
+ \end{align*}
+ So the $m_i$ generate $M$.
+\end{proof}
+This view is a convenient way of thinking about finitely-generated modules. For example, we can immediately prove the following corollary:
+
+\begin{cor}
+ Let $N \leq M$ and $M$ be finitely-generated. Then $M/N$ is also finitely generated.
+\end{cor}
+
+\begin{proof}
+ Since $M$ is finitely generated, we have some surjection $f: R^k \twoheadrightarrow M$. Moreover, we have the surjective quotient map $q: M \twoheadrightarrow M/N$. Then we get the following composition
+ \[
+ \begin{tikzcd}
+ R^k \ar[r, "f", two heads] & M \ar[r, "q", two heads] & M/N,
+ \end{tikzcd}
+ \]
+ which is a surjection, since it is a composition of surjections. So $M/N$ is finitely generated.
+\end{proof}
+
+It is very tempting to believe that if a module is finitely generated, then its submodules are also finitely generated. It would be very wrong to think so.
+
+\begin{eg}
+ A submodule of a finitely-generated module need not be finitely generated.
+
+ We let $R = \C[X_1, X_2, \cdots]$. We consider the $R$-module $M = R$, which is finitely generated (by $1$). A submodule of the ring is the same as an ideal. Moreover, an ideal is finitely generated as an ideal if and only if it is finitely generated as a module. We pick the submodule
+ \[
+ I = (X_1, X_2, \cdots),
+ \]
+ which we have already shown to be not finitely-generated. So done.
+\end{eg}
+
+\begin{eg}
+ For a complex number $\alpha$, the ring $\Z[\alpha]$ (i.e.\ the smallest subring of $\C$ containing $\alpha$) is finitely generated as a $\Z$-module if and only if $\alpha$ is an algebraic integer.
+
+ Proof is left as an exercise for the reader on the last example sheet. This allows us to prove that algebraic integers are closed under addition and multiplication, since it is easier to argue about whether $\Z[\alpha]$ is finitely generated.
+\end{eg}
+
+\subsection{Direct sums and free modules}
+We've been secretly using the direct sum in many examples, but we shall define it properly now.
+
+\begin{defi}[Direct sum of modules]\index{direct sum}
+ Let $M_1, M_2, \cdots, M_k$ be $R$-modules. The \emph{direct sum} is the $R$-module
+ \[
+ M_1 \oplus M_2 \oplus \cdots \oplus M_k,
+ \]
+ which is the set $M_1 \times M_2 \times \cdots \times M_k$, with addition given by
+ \[
+ (m_1, \cdots, m_k) + (m_1', \cdots, m_k') = (m_1 + m_1', \cdots, m_k + m_k'),
+ \]
+ and the $R$-action given by
+ \[
+ r\cdot (m_1, \cdots, m_k) = (rm_1, \cdots, rm_k).
+ \]
+\end{defi}
+We've been using one example of the direct sum already, namely
+\[
+ R^n = \underbrace{R \oplus R \oplus \cdots \oplus R}_{n\text{ times}}.
+\]
+Recall we said modules are like vector spaces. So we can try to define things like basis and linear independence. However, we will fail massively, since we really can't prove much about them. Still, we can \emph{define} them.
+
+\begin{defi}[Linear independence]\index{linear independence}
+ Let $m_1, \cdots, m_k \in M$. Then $\{m_1, \cdots, m_k\}$ is \emph{linearly independent} if
+ \[
+ \sum_{i = 1}^k r_i m_i = 0
+ \]
+ implies $r_1 = r_2 = \cdots = r_k = 0$.
+\end{defi}
+
+Lots of modules will not have a basis in the sense we are used to. The next best thing would be the following:
+\begin{defi}[Freely generate]\index{freely generated}
+ A subset $S \subseteq M$ generates $M$ \emph{freely} if
+ \begin{enumerate}
+ \item $S$ generates $M$
+ \item Any set function $\psi: S \to N$ to an $R$-module $N$ extends to an $R$-module map $\theta: M \to N$.
+ \end{enumerate}
+\end{defi}
+Note that if $\theta_1, \theta_2$ are two such extensions, we can consider $\theta_1 - \theta_2: M \to N$. Then $\theta_1 - \theta_2$ sends everything in $S$ to $0$. So $S \subseteq \ker (\theta_1 - \theta_2) \leq M$. So the submodule generated by $S$ lies in $\ker (\theta_1 - \theta_2)$ too. But this is by definition $M$. So $M \leq \ker(\theta_1 - \theta_2) \leq M$, i.e.\ equality holds. So $\theta_1 - \theta_2 = 0$. So $\theta_1 = \theta_2$. So any such extension is unique.
+
+Thus, what this definition tells us is that giving a map from $M$ to $N$ is exactly the same thing as giving a function from $S$ to $N$.
+
+\begin{defi}[Free module and basis]\index{free module}\index{basis}
+ An $R$-module is \emph{free} if it is freely generated by some subset $S \subseteq M$, and $S$ is called a \emph{basis}.
+\end{defi}
+We will soon prove that if $R$ is a field, then every module is free. However, if $R$ is not a field, then there are non-free modules.
+\begin{eg}
+ The $\Z$-module $\Z/2\Z$ is not freely generated. Suppose $\Z/2\Z$ were freely generated by some $S \subseteq \Z/2\Z$. Then the only possibility is $S = \{1\}$. So the set function $\{1\} \to \Z$ sending $1 \mapsto 1$ would extend to a homomorphism $\theta: \Z/2\Z \to \Z$. But then $\theta(0) = \theta(1 + 1) = 1 + 1 = 2$, while homomorphisms must send $0$ to $0$. Contradiction. So $\Z/2\Z$ is not freely generated.
+\end{eg}
+
+We now want to formulate free modules in a way more similar to what we do in linear algebra.
+\begin{prop}
+ For a subset $S = \{m_1, \cdots, m_k\} \subseteq M$, the following are equivalent:
+ \begin{enumerate}
+ \item $S$ generates $M$ freely.
+ \item $S$ generates $M$ and the set $S$ is independent.
+ \item Every element of $M$ is \emph{uniquely} expressible as
+ \[
+ r_1 m_1 + r_2 m_2 + \cdots + r_k m_k
+ \]
+ for some $r_i \in R$.
+ \end{enumerate}
+\end{prop}
+\begin{proof}
+ The fact that (ii) and (iii) are equivalent is something we would expect from what we know from linear algebra, and in fact the proof is the same. So we only show that (i) and (ii) are equivalent.
+
+ Let $S$ generate $M$ freely. If $S$ is not independent, then we can write
+ \[
+ r_1 m_1 + \cdots + r_k m_k = 0,
+ \]
+ with $r_i \in R$ and, say, $r_1$ non-zero. We define the set function $\psi: S \to R$ by sending $m_1 \mapsto 1_R$ and $m_i \mapsto 0$ for all $i \not= 1$. As $S$ generates $M$ freely, this extends to an $R$-module homomorphism $\theta: M \to R$.
+
+ By definition of a homomorphism, we can compute
+ \begin{align*}
+ 0 &= \theta(0)\\
+ &= \theta(r_1 m_1 + r_2 m_2 + \cdots + r_k m_k) \\
+ &= r_1\theta(m_1) + r_2 \theta(m_2) + \cdots + r_k \theta(m_k)\\
+ &= r_1.
+ \end{align*}
+ This is a contradiction. So $S$ must be independent.
+
+ To prove the other direction, suppose every element can be uniquely written as $r_1m_1 + \cdots + r_k m_k$. Given any set function $\psi: S \to N$, we define $\theta: M \to N$ by
+ \[
+ \theta(r_1m_1 + \cdots + r_k m_k) = r_1 \psi(m_1) + \cdots + r_k \psi(m_k).
+ \]
+ This is well-defined by uniqueness, and is clearly a homomorphism. So it follows that $S$ generates $M$ freely.
+\end{proof}
+
+\begin{eg}
+ The set $\{2, 3\} \subseteq \Z$ generates $\Z$. However, it does not generate $\Z$ freely, since
+ \[
+ 3\cdot 2 + (-2) \cdot 3 = 0.
+ \]
+ Recall from linear algebra that if a set $S$ spans a \emph{vector space} $V$, and it is not independent, then we can just pick some useless vectors and throw them away in order to get a basis. However, this is no longer the case in modules: neither $2$ nor $3$ alone generates $\Z$.
+\end{eg}
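Both halves of this example can be verified computationally (a sketch of our own, using the extended Euclidean algorithm): a Bézout relation shows $\{2, 3\}$ generates $\Z$, while $3 \cdot 2 + (-2) \cdot 3 = 0$ is the non-trivial relation ruling out freeness.

```python
# Bezout relation for {2, 3} via the extended Euclidean algorithm.
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = x*a + y*b."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = extended_gcd(2, 3)
assert g == 1 and x * 2 + y * 3 == 1   # so 2 and 3 together generate Z
assert 3 * 2 + (-2) * 3 == 0           # the relation: generation is not free
```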
+
+\begin{defi}[Relations]\index{relation}
+ If $M$ is a finitely-generated $R$-module, we have shown that there is a surjective $R$-module homomorphism $\phi: R^k \to M$. We call $\ker(\phi)$ the \emph{relation module} for those generators.
+\end{defi}
+
+\begin{defi}[Finitely presented module]\index{finitely presented module}
+ A finitely-generated module is \emph{finitely presented} if we have a surjective homomorphism $\phi: R^k \to M$ and $\ker \phi$ is finitely generated.
+\end{defi}
+
+Being finitely presented means I can tell you everything about the module with a finite amount of paper. More precisely, if $\{m_1, \cdots, m_k\}$ generate $M$ and $\{n_1, n_2, \cdots, n_\ell\}$ generate $\ker(\phi)$, then each
+\[
+ n_i = (r_{i1}, \cdots, r_{ik})
+\]
+corresponds to the relation
+\[
+ r_{i1}m_1 + r_{i2}m_2 + \cdots + r_{ik}m_k = 0
+\]
+in $M$. So $M$ is the module generated by writing down $R$-linear combinations of $m_1, \cdots, m_k$, and saying two elements are the same if they are related to one another by these relations. Since there are only finitely many generators and finitely many such relations, we can specify the module with a finite amount of information.
+
+A natural question to ask is: if $n \not= m$, are $R^n$ and $R^m$ necessarily non-isomorphic? In vector spaces, they obviously must be different, since basis and dimension are well-defined concepts.
+
+\begin{prop}[Invariance of dimension/rank]
+ Let $R$ be a non-zero ring. If $R^n \cong R^m$ as $R$-modules, then $n = m$.
+\end{prop}
+We know this is true if $R$ is a field. We now want to reduce the case of a general ring to the case of fields.
+
+If $R$ is an integral domain, then we can produce a field by taking the field of fractions, and this might be a good starting point. However, we want to do this for general rings. So we need some more magic.
+
+We will need the following construction:
+
+Let $I \lhd R$ be an ideal, and let $M$ be an $R$-module. We define
+\[
+ IM = \left\{\sum_{j = 1}^n a_j m_j \in M: a_j \in I, m_j \in M, n \in \Z_{\geq 0}\right\} \leq M,
+\]
+the set of finite sums of products $am$ with $a \in I$ and $m \in M$ (we need to allow sums, as the set of products alone need not be closed under addition).
+So we can take the quotient module $M/IM$, which is an $R$-module again.
+
+Now if $b \in I$, then its action on $M/IM$ is
+\[
+ b (m + IM) = bm + IM = IM.
+\]
+So everything in $I$ kills everything in $M/IM$. So we can consider $M/IM$ as an $R/I$ module by
+\[
+ (r + I)\cdot (m + IM) = r\cdot m + IM.
+\]
+So we have proved that
+\begin{prop}
+ If $I\lhd R$ is an ideal and $M$ is an $R$-module, then $M/IM$ is an $R/I$-module in a natural way.
+\end{prop}
+We next need to use the following general fact:
+
+\begin{prop}
+ Every non-zero ring has a maximal ideal.
+\end{prop}
+This is a rather strong statement, since it talks about ``all rings'', and we can have weird rings. We need to use a more subtle argument, namely via Zorn's lemma. You probably haven't seen it before, in which case you might want to skip the proof and just take the lecturer's word on it.
+
+\begin{proof}
+ We observe that an ideal $I \lhd R$ is proper if and only if $1_R \not\in I$. So the union of an increasing chain of proper ideals is still a proper ideal. Then by \emph{Zorn's lemma}, there is a maximal ideal (Zorn's lemma says that if every chain in a partially ordered set has an upper bound, then the set has a maximal element; here the partially ordered set is that of proper ideals, ordered by inclusion).
+\end{proof}
+
+With these two notions, we get
+\begin{prop}[Invariance of dimension/rank]
+ Let $R$ be a non-zero ring. If $R^n \cong R^m$ as $R$-modules, then $n = m$.
+\end{prop}
+
+\begin{proof}
+ Let $I$ be a maximal ideal of $R$. Suppose we have $R^n \cong R^m$. Then we must have
+ \[
+ \frac{R^n}{IR^n} \cong \frac{R^m}{IR^m},
+ \]
+ as $R/I$-modules.
+
+ But staring at it long enough, we figure that
+ \[
+ \frac{R^n}{IR^n} \cong \left(\frac{R}{I}\right)^n,
+ \]
+ and similarly for $m$. Since $R/I$ is a field, the result follows by linear algebra.
+\end{proof}
+The point of this proposition is not the result itself (which is not too interesting), but the general constructions used behind the proof.
+
+\subsection{Matrices over Euclidean domains}
+This is the part of the course where we deliver all our promises about proving the classification of finite abelian groups and Jordan normal forms.
+
+Until further notice, we will assume $R$ is a Euclidean domain, and we write $\phi: R\setminus \{0\} \to \Z_{\geq 0}$ for its Euclidean function. We know that in such a Euclidean domain, the greatest common divisor $\gcd(a, b)$ exists for all $a, b \in R$.
+
+We will consider some matrices with entries in $R$.
+
+\begin{defi}[Elementary row operations]\index{elementary row operations}
+ \emph{Elementary row operations} on an $m \times n$ matrix $A$ with entries in $R$ are operations of the form
+ \begin{enumerate}
+ \item Add $c \in R$ times the $i$th row to the $j$th row. This may be done by multiplying by the following matrix on the left:
+ \[
+ \begin{pmatrix}
+ 1 \\
+ & \ddots \\
+ & & 1 & & c\\
+ & & & \ddots\\
+ & & & & 1\\
+ & & & & & \ddots\\
+ & & & & & & 1
+ \end{pmatrix},
+ \]
+ where $c$ appears in the $i$th column of the $j$th row.
+ \item Swap the $i$th and $j$th rows. This can be done by left-multiplication of the matrix
+ \setcounter{MaxMatrixCols}{11}
+ \[
+ \begin{pmatrix}
+ 1\\
+ & \ddots\\
+ & & 1\\
+ & & & 0 & & & & 1\\
+ & & & & 1\\
+ & & & & & \ddots\\
+ & & & & & & 1\\
+ & & & 1 & & & & 0\\
+ & & & & & & & & 1\\
+ & & & & & & & & & \ddots\\
+ & & & & & & & & & & 1
+ \end{pmatrix}.
+ \]
+ Again, the rows and columns we have messed with are the $i$th and $j$th rows and columns.
+ \item We multiply the $i$th row by a \emph{unit} $c \in R$. We do this via the following matrix:
+ \[
+ \begin{pmatrix}
+ 1 \\
+ & \ddots\\
+ & & 1 \\
+ & & & c\\
+ & & & & 1\\
+ & & & & & \ddots\\
+ & & & & & & 1
+ \end{pmatrix}
+ \]
+ Notice that if $R$ is a field, then we can multiply any row by any non-zero element, since every non-zero element is a unit.
+ \end{enumerate}
+ We also have elementary column operations defined in a similar fashion, corresponding to right multiplication of the matrices. Notice all these matrices are invertible.
+\end{defi}
+
+\begin{defi}[Equivalent matrices]\index{equivalent matrices}
+ Two matrices are \emph{equivalent} if we can get from one to the other via a sequence of such elementary row and column operations.
+\end{defi}
+
+Note that if $A$ and $B$ are equivalent, then we can write
+\[
+ B = QAT^{-1}
+\]
+for some invertible matrices $Q$ and $T$.
+
+The aim of the game is to find, for each matrix, a matrix equivalent to it that is as simple as possible. Recall from IB Linear Algebra that if $R$ is a field, then we can put any matrix into the form
+\[
+ \begin{pmatrix}
+ I_r & 0\\
+ 0 & 0
+ \end{pmatrix}
+\]
+via elementary row and column operations. This is no longer true when working with rings. For example, over $\Z$, we cannot put the matrix
+\[
+ \begin{pmatrix}
+ 2 & 0\\
+ 0 & 0
+ \end{pmatrix}
+\]
+into that form, since no operation can turn the $2$ into a $1$. What we get is the following result:
+\begin{thm}[Smith normal form]\index{Smith normal form}
+ An $m \times n$ matrix over a Euclidean domain $R$ is equivalent to a diagonal matrix
+ \[
+ \begin{pmatrix}
+ d_1\\
+ & d_2\\
+ & & \ddots\\
+ & & & d_r\\
+ & & & & 0\\
+ & & & & & \ddots\\
+ & & & & & & 0
+ \end{pmatrix},
+ \]
+ with the $d_i$ all non-zero and
+ \[
+ d_1 \mid d_2 \mid d_3 \mid \cdots \mid d_r.
+ \]
+\end{thm}
+Note that the divisibility criterion is similar to the classification of finitely-generated abelian groups. In fact, we will derive that as a consequence of the Smith normal form.
+
+\begin{defi}[Invariant factors]\index{invariant factors}
+ The $d_k$ obtained in the Smith normal form are called the \emph{invariant factors} of $A$.
+\end{defi}
+
+We first exhibit the algorithm for producing the Smith normal form with an example over $\Z$.
+\begin{eg}
+ We start with the matrix
+ \[
+ \begin{pmatrix}
+ 3 & 7 & 4\\
+ 1 & -1 & 2\\
+ 3 & 5 & 1
+ \end{pmatrix}.
+ \]
+ We want to move the $1$ to the top-left corner. So we swap the first and second rows to obtain
+ \[
+ \begin{pmatrix}
+ 1 & -1 & 2\\
+ 3 & 7 & 4\\
+ 3 & 5 & 1
+ \end{pmatrix}.
+ \]
+ We then try to eliminate the other entries in the first row by column operations. We add multiples of the first column to the second and third to obtain
+ \[
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 3 & 10 & -2\\
+ 3 & 8 & -5
+ \end{pmatrix}.
+ \]
+ We similarly clear the first column to get
+ \[
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 10 & -2\\
+ 0 & 8 & -5
+ \end{pmatrix}.
+ \]
+ We are left with a $2\times 2$ matrix to fiddle with.
+
+ We swap the second and third columns so that $2$ is in the $2, 2$ entry, and secretly change sign to get
+ \[
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 2 & 10\\
+ 0 & 5 & 8
+ \end{pmatrix}.
+ \]
+ We notice that $\gcd(2, 5) = 1$. So we can use linear combinations to introduce a $1$ at the bottom, subtracting twice the second row from the third:
+ \[
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 2 & 10\\
+ 0 & 1 & -12
+ \end{pmatrix}.
+ \]
+ Swapping rows, we get
+ \[
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 1 & -12\\
+ 0 & 2 & 10
+ \end{pmatrix}.
+ \]
+ We then clear the remaining rows and columns to get
+ \[
+ \begin{pmatrix}
+ 1 & 0 & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & 34
+ \end{pmatrix}.
+ \]
+\end{eg}
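+The reduction just performed can be mechanised. Below is a minimal Python sketch of the algorithm over $\Z$ (so $\phi$ is the absolute value); the function name \texttt{smith\_normal\_form} and the strategy of always moving a smallest non-zero entry into the pivot position are our own choices, not part of the course.

```python
def smith_normal_form(A):
    """Sketch of Smith normal form over the integers, using only the
    elementary row and column operations described above."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    for t in range(min(m, n)):
        while True:
            # move a non-zero entry of smallest |value| in the remaining
            # block to the pivot position (t, t)
            pivot = min(((i, j) for i in range(t, m) for j in range(t, n)
                         if A[i][j] != 0),
                        key=lambda p: abs(A[p[0]][p[1]]), default=None)
            if pivot is None:
                return A                    # the remaining block is zero
            A[t], A[pivot[0]] = A[pivot[0]], A[t]
            for row in A:
                row[t], row[pivot[1]] = row[pivot[1]], row[t]
            if A[t][t] < 0:                 # multiply a row by the unit -1
                A[t] = [-x for x in A[t]]
            d = A[t][t]
            # moves (i) and (ii): subtract multiples of row/column t
            for i in range(t + 1, m):
                q = A[i][t] // d
                for j in range(n):
                    A[i][j] -= q * A[t][j]
            for j in range(t + 1, n):
                q = A[t][j] // d
                for i in range(m):
                    A[i][j] -= q * A[i][t]
            if any(A[i][t] for i in range(t + 1, m)) or \
                    any(A[t][j] for j in range(t + 1, n)):
                continue                    # a remainder survived: pivot shrinks
            # move (iii): make the pivot divide every remaining entry
            bad = next((i for i in range(t + 1, m)
                        for j in range(t + 1, n) if A[i][j] % d != 0), None)
            if bad is None:
                break
            for j in range(n):              # add the offending row to row t
                A[t][j] += A[bad][j]
    return A
```

+On the matrix of the example this returns $\mathrm{diag}(1, 1, 34)$, matching the hand computation.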
+
+\begin{proof}
+ Throughout the process, we will keep calling our matrix $A$, even though it keeps changing in each step, so that we don't have to invent hundreds of names for these matrices.
+
+ If $A = 0$, then done! So suppose $A \not= 0$. So some entry is not zero, say, $A_{ij} \not= 0$. Swapping the $i$th and first row, then $j$th and first column, we arrange that $A_{11} \not= 0$. We now try to reduce $A_{11}$ as much as possible. We have the following two possible moves:
+ \begin{enumerate}
+ \item If there is an $A_{1j}$ not divisible by $A_{11}$, then we can use the Euclidean algorithm to write
+ \[
+ A_{1j} = q A_{11} + r.
+ \]
+ By assumption, $r \not= 0$. So $\phi(r) < \phi(A_{11})$ (where $\phi$ is the Euclidean function).
+
+ So we subtract $q$ copies of the first column from the $j$th column. Then in position $(1, j)$, we now have $r$. We swap the first and $j$th column such that $r$ is in position $(1, 1)$, and we have strictly reduced the value of $\phi$ at the first entry.
+ \item If there is an $A_{i1}$ not divisible by $A_{11}$, we do the same thing, and this again reduces $\phi(A_{11})$.
+ \end{enumerate}
+ We keep performing these until no move is possible. Since the value of $\phi(A_{11})$ strictly decreases every move, we stop after finitely many applications. Then we know that we must have $A_{11}$ dividing all $A_{ij}$ and $A_{i1}$. Now we can just subtract appropriate multiples of the first column from others so that $A_{1j} = 0$ for $j \not= 1$. We do the same thing with rows so that the first row is cleared. Then we have a matrix of the form
+ \[
+ A = \begin{pmatrix}
+ d & 0 & \cdots & 0\\
+ 0 \\
+ \vdots & & C\\
+ 0
+ \end{pmatrix}.
+ \]
+ We would like to say ``do the same thing with $C$'', but that would only produce a diagonal matrix, whose entries need not satisfy the divisibility condition of the Smith normal form. So we need some preparation.
+
+ \begin{enumerate}[resume]
+ \item Suppose there is an entry of $C$ not divisible by $d$, say $A_{ij}$ with $i, j > 1$.
+ \[
+ A =
+ \begin{pmatrix}
+ d & 0 & \cdots & 0 & \cdots & 0\\
+ 0\\
+ \vdots\\
+ 0 & & & A_{ij}\\
+ \vdots &\\
+ 0
+ \end{pmatrix}
+ \]
+ We suppose
+ \[
+ A_{ij} = qd + r,
+ \]
+ with $r \not= 0$ and $\phi(r) < \phi(d)$. We add column $1$ to column $j$, and then subtract $q$ times row $1$ from row $i$. Now we have $r$ in the $(i, j)$th entry, and we want to move it to the $(1, 1)$ position. We swap row $i$ with row $1$, and column $j$ with column $1$, so that $r$ is in the $(1, 1)$th entry, with $\phi(r) < \phi(d)$.
+
+ Now we have messed up the first row and column. So we go back and do (i) and (ii) again until the first row and columns are cleared. Then we get
+ \[
+ A = \begin{pmatrix}
+ d' & 0 & \cdots & 0\\
+ 0 \\
+ 0 & & C'\\
+ 0
+ \end{pmatrix},
+ \]
+ where
+ \[
+ \phi(d') \leq \phi(r) < \phi(d).
+ \]
+ \end{enumerate}
+ As this strictly decreases the value of $\phi(A_{11})$, we can only repeat this finitely many times. When we stop, we will end up with a matrix
+ \[
+ A = \begin{pmatrix}
+ d & 0 & \cdots & 0\\
+ 0 \\
+ \vdots & & C\\
+ 0
+ \end{pmatrix},
+ \]
+ and $d$ divides \emph{every} entry of $C$. Now we apply the entire process to $C$. Notice that none of the allowed operations changes the fact that $d$ divides every entry of $C$.
+
+ So applying this recursively, we obtain a diagonal matrix with the claimed divisibility property.
+\end{proof}
+Note that if we didn't have to care about the divisibility property, we can just do (i) and (ii), and we can get a diagonal matrix. The magic to get to the Smith normal form is (iii).
+
+Recall that the $d_i$ are called the invariant factors. So it would be nice if we can prove that the $d_i$ are indeed invariant. It is not clear from the algorithm that we will always end up with the same $d_i$. Indeed, we can multiply a whole row by $-1$ and get different invariant factors. However, it turns out that these are unique up to multiplication by units.
+
+To study the uniqueness of the invariant factors of a matrix $A$, we relate them to other invariants, which involves \emph{minors}.
+\begin{defi}[Minor]\index{minor}
+ A \emph{$k \times k$ minor} of a matrix $A$ is the determinant of a $k \times k$ sub-matrix of $A$ (i.e.\ a matrix formed by removing all but $k$ rows and all but $k$ columns).
+\end{defi}
+Any given matrix has many minors, since we get to decide which rows and columns to throw away. The idea is to consider the ideal generated by all the minors of the matrix.
+
+\begin{defi}[Fitting ideal]\index{Fitting ideal}
+ For a matrix $A$, the \emph{$k$th Fitting ideal} $\Fit_k(A) \lhd R$ is the ideal generated by the set of all $k \times k$ minors of $A$.
+\end{defi}
+A key property is that equivalent matrices have the same Fitting ideal, even if they might have very different minors.
+
+\begin{lemma}
+ Let $A$ and $B$ be equivalent matrices. Then
+ \[
+ \Fit_k(A) = \Fit_k(B)
+ \]
+ for all $k$.
+\end{lemma}
+
+\begin{proof}
+ It suffices to show that changing $A$ by a row or column operation does not change the Fitting ideal. Since taking the transpose does not change the determinant, i.e.\ $\Fit_k(A) = \Fit_k(A^T)$, it suffices to consider the row operations.
+
+ The most difficult one is taking linear combinations. Let $B$ be the result of adding $c$ times the $i$th row to the $j$th row. Fix a $k \times k$ submatrix $C$ of $A$, and let $C'$ be the corresponding submatrix of $B$. We want to show that $\det C' \in \Fit_k(A)$.
+
+ If the $j$th row is outside of $C$, then the minor $\det C$ is unchanged. If both the $i$th and $j$th rows are in $C$, then the submatrix $C$ changes by a row operation, which does not affect the determinant. These are the boring cases.
+
+ Suppose the $j$th row meets $C$ and the $i$th row does not. Write $(f_1, \cdots, f_k)$ for the entries of the $i$th row of $A$ lying in the columns of $C$. Then $C$ is changed to $C'$, with the $j$th row being
+ \[
+ (C_{j1} + c f_1, C_{j2} + c f_2, \cdots, C_{jk} + c f_k).
+ \]
+ We compute $\det C'$ by expanding along this row. Then we get
+ \[
+ \det C'= \det C + c\det D,
+ \]
+ where $D$ is the matrix obtained by replacing the $j$th row of $C$ with $(f_1, \cdots, f_k)$.
+
+ The point is that $\det C$ is definitely a minor of $A$, and $\det D$ is still a minor of $A$, just another one. Since ideals are closed under addition and under multiplication by ring elements, we know
+ \[
+ \det (C') \in \Fit_k(A).
+ \]
+ The other operations are much simpler. They just follow by standard properties of the effect of swapping rows or multiplying rows on determinants. So after any row operation, the resultant submatrix $C'$ satisfies
+ \[
+ \det (C') \in \Fit_k(A).
+ \]
+ Since this is true for all minors, we must have
+ \[
+ \Fit_k(B) \subseteq \Fit_k(A).
+ \]
+ But row operations are invertible. So we must have
+ \[
+ \Fit_k(A) \subseteq \Fit_k(B)
+ \]
+ as well. So they must be equal. So done.
+\end{proof}
+
+We now notice that if we have a matrix in Smith normal form, say
+\[
+ B =
+ \begin{pmatrix}
+ d_1\\
+ & d_2\\
+ & & \ddots\\
+ & & & d_r\\
+ & & & & 0\\
+ & & & & & \ddots\\
+ & & & & & & 0
+ \end{pmatrix},
+\]
+then we can immediately read off
+\[
+ \Fit_k(B) = (d_1 d_2 \cdots d_k).
+\]
+This is clear once we notice that a $k \times k$ minor of $B$ is non-zero only when its rows and columns are chosen from the first $r$ with the same index sets, in which case it equals $d_{i_1} d_{i_2} \cdots d_{i_k}$ for some $i_1 < i_2 < \cdots < i_k$; and $d_1 d_2 \cdots d_k$ divides every such product by the divisibility condition. So we have
+
+\begin{cor}
+ If $A$ has Smith normal form
+ \[
+ B =
+ \begin{pmatrix}
+ d_1\\
+ & d_2\\
+ & & \ddots\\
+ & & & d_r\\
+ & & & & 0\\
+ & & & & & \ddots\\
+ & & & & & & 0
+ \end{pmatrix},
+ \]
+ then
+ \[
+ \Fit_k(A) = (d_1 d_2 \cdots d_k).
+ \]
+ So $d_k$ is unique up to associates.
+\end{cor}
+This is since we can find $d_k$ by dividing the generator of $\Fit_k(A)$ by the generator of $\Fit_{k - 1}(A)$.
+
+\begin{eg}
+ Consider the matrix in $\Z$:
+ \[
+ A = \begin{pmatrix}
+ 2 & 0\\
+ 0 & 3
+ \end{pmatrix}.
+ \]
+ This is diagonal, but not in Smith normal form. We can potentially apply the algorithm, but that would be messy. We notice that
+ \[
+ \Fit_1(A) = (2, 3) = (1).
+ \]
+ So we know $d_1 = \pm 1$. We can then look at the second Fitting ideal
+ \[
+ \Fit_2(A) = (6).
+ \]
+ So $d_1 d_2 = \pm 6$. So we must have $d_2 = \pm 6$. So the Smith normal form is
+ \[
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 6
+ \end{pmatrix}.
+ \]
+ That was much easier.
+\end{eg}
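+Over $\Z$, the corollary gives a mechanical way to find invariant factors: $\Fit_k(A)$ is generated by the greatest common divisor $g_k$ of all $k \times k$ minors, and $d_k = g_k / g_{k - 1}$. A minimal Python sketch (the helper names \texttt{det} and \texttt{invariant\_factors} are our own):

```python
from itertools import combinations
from math import gcd

def det(M):
    """Determinant by Laplace expansion along the first row
    (fine for the small matrices in these examples)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:]
                                          for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(A):
    """d_k = g_k / g_{k-1}, where g_k >= 0 generates Fit_k(A) over Z,
    i.e. g_k is the gcd of all k x k minors (and g_0 = 1)."""
    m, n = len(A), len(A[0])
    ds, g_prev = [], 1
    for k in range(1, min(m, n) + 1):
        g = 0
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                g = gcd(g, abs(det([[A[i][j] for j in cols] for i in rows])))
        if g == 0:
            break                   # all k x k minors vanish: rank is k - 1
        ds.append(g // g_prev)
        g_prev = g
    return ds

print(invariant_factors([[2, 0], [0, 3]]))   # [1, 6]
```

+Running this on the matrix of the example prints \texttt{[1, 6]}, as computed above.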
+
+We are now going to use Smith normal forms to do things. We will need some preparation, in the form of the following lemma:
+\begin{lemma}
+ Let $R$ be a principal ideal domain. Then any submodule of $R^m$ is generated by at most $m$ elements.
+\end{lemma}
+This is obvious for vector spaces, but is slightly more difficult here.
+
+\begin{proof}
+ Let $N \leq R^m$ be a submodule. Consider the ideal
+ \[
+ I = \{r \in R: (r, r_2, \cdots, r_m) \in N\text{ for some }r_2, \cdots, r_m \in R\}.
+ \]
+ It is clear this is an ideal. Since $R$ is a principal ideal domain, we must have $I = (a)$ for some $a \in R$. We now choose an
+ \[
+ n = (a, a_2, \cdots, a_m) \in N.
+ \]
+ Then for any vector $(r_1, r_2, \cdots, r_m) \in N$, we know that $r_1 \in I$. So $a \mid r_1$. So we can write
+ \[
+ r_1 = ra.
+ \]
+ Then we can form
+ \[
+ (r_1, r_2, \cdots, r_m) - r(a, a_2, \cdots, a_m) = (0, r_2 - ra_2, \cdots, r_m - r a_m)\in N.
+ \]
+ This lies in $N' = N \cap (\{0\} \times R^{m - 1}) \leq R^{m - 1}$. Thus everything in $N$ can be written as a multiple of $n$ plus something in $N'$. But by induction, since $N' \leq R^{m - 1}$, we know $N'$ is generated by at most $m - 1$ elements. So there are $n_2, \cdots, n_m \in N'$ generating $N'$. So $n, n_2, \cdots, n_m$ generate $N$.
+\end{proof}
+
+If we have a submodule of $R^m$, then it has at most $m$ generators. However, these might generate the submodule in a terrible way. The next theorem tells us there is a nice way of finding generators.
+\begin{thm}
+ Let $R$ be a Euclidean domain, and let $N \leq R^m$ be a submodule. Then there exists a basis $v_1, \cdots, v_m$ of $R^m$ such that $N$ is generated by $d_1 v_1, d_2 v_2, \cdots, d_r v_r$ for some $0 \leq r \leq m$ and some $d_i \in R$ such that
+ \[
+ d_1 \mid d_2 \mid \cdots \mid d_r.
+ \]
+\end{thm}
+
+This is not hard, given what we've developed so far.
+\begin{proof}
+ By the previous lemma, $N$ is generated by some elements $x_1, \cdots, x_n$ with $n \leq m$. Each $x_i$ is an element of $R^m$. So we can think of it as a column vector of length $m$, and we can form a matrix
+ \[
+ A =
+ \begin{pmatrix}
+ \uparrow & \uparrow & & \uparrow\\
+ x_1 & x_2 & \cdots & x_n\\
+ \downarrow & \downarrow & & \downarrow
+ \end{pmatrix}.
+ \]
+ We've got an $m \times n$ matrix. So we can put it in Smith normal form! Since $n \leq m$, i.e.\ there are at least as many rows as columns, this is of the form
+ \[
+ \begin{pmatrix}
+ d_1\\
+ & d_2\\
+ & & \ddots\\
+ & & & d_r\\
+ & & & & 0\\
+ & & & & & \ddots\\
+ & & & & & & 0\\
+ & & & & & & 0\\
+ & & & & & & \vdots\\
+ & & & & & & 0\\
+ \end{pmatrix}
+ \]
+ Recall that we got to the Smith normal form by row and column operations. Performing row operations is just changing the basis of $R^m$, while each column operation changes the generators of $N$.
+
+ So what this tells us is that there is a new basis $v_1, \cdots, v_m$ of $R^m$ such that $N$ is generated by $d_1 v_1, \cdots, d_r v_r$. By definition of Smith normal form, the divisibility condition holds.
+\end{proof}
+
+\begin{cor}
+ Let $R$ be a Euclidean domain. A submodule of $R^m$ is free of rank at most $m$. In other words, the submodule of a free module is free, and of a smaller (or equal) rank.
+\end{cor}
+
+\begin{proof}
+ Let $N \leq R^m$ be a submodule. By the above, there is a basis $v_1, \cdots, v_m$ of $R^m$ such that $N$ is generated by $d_1 v_1, \cdots, d_r v_r$ for some $r \leq m$. So it is certainly generated by at most $m$ elements. So we only have to show that $d_1 v_1, \cdots, d_r v_r$ are independent. Suppose $\sum r_i (d_i v_i) = 0$. Since each $d_i \not= 0$ and $R$ is an integral domain, if some $r_i \not= 0$, then some $r_i d_i \not= 0$, contradicting the independence of the basis $v_1, \cdots, v_m$. So all $r_i = 0$, and $d_1 v_1, \cdots, d_r v_r$ generate $N$ \emph{freely}. So
+ \[
+ N \cong R^r.\qedhere
+ \]
+\end{proof}
+Note that this is not true for all rings. For example, $(2, X) \lhd \Z[X]$ is a submodule of $\Z[X]$, but is not isomorphic to $\Z[X]$.
+
+\begin{thm}[Classification of finitely-generated modules over a Euclidean domain]\index{classification of finitely-generated modules over Euclidean domains}
+ Let $R$ be a Euclidean domain, and $M$ be a finitely generated $R$-module. Then
+ \[
+ M \cong \frac{R}{(d_1)} \oplus \frac{R}{(d_2)} \oplus \cdots \oplus\frac{R}{(d_r)} \oplus R \oplus R \oplus \cdots \oplus R
+ \]
+ for some $d_i \not= 0$, and
+ \[
+ d_1 \mid d_2 \mid \cdots \mid d_r.
+ \]
+\end{thm}
+This is either a good or bad thing. If you are pessimistic, this says the world of finitely generated modules is boring, since there are only these modules we already know about. If you are optimistic, this tells you all finitely-generated modules are of this simple form, so we can prove things about them assuming they look like this.
+
+\begin{proof}
+ Since $M$ is finitely-generated, there is a surjection $\phi: R^m \to M$. So by the first isomorphism theorem, we have
+ \[
+ M \cong \frac{R^m}{\ker \phi}.
+ \]
+ Since $\ker \phi$ is a submodule of $R^m$, by the previous theorem, there is a basis $v_1, \cdots, v_m$ of $R^m$ such that $\ker \phi$ is generated by $d_1 v_1, \cdots, d_r v_r$ for $0 \leq r \leq m$ and $d_1 \mid d_2 \mid \cdots \mid d_r$. So we know
+ \[
+ M \cong \frac{R^m}{((d_1, 0, \cdots, 0), (0, d_2, 0, \cdots, 0), \cdots, (0, \cdots, 0, d_r, 0, \cdots, 0))}.
+ \]
+ This is just
+ \[
+ \frac{R}{(d_1)} \oplus \frac{R}{(d_2)} \oplus \cdots \oplus \frac{R}{(d_r)} \oplus R \oplus \cdots \oplus R,
+ \]
+ with $m - r$ copies of $R$.
+\end{proof}
+This is particularly useful in the case where $R = \Z$, where $R$-modules are abelian groups.
+
+\begin{eg}
+ Let $A$ be the abelian group generated by $a, b, c$ with relations
+ \[
+ \begin{alignedat}{4}
+ 2a & {}+{} & 3b & {}+{} & c &= 0,\\
+ a & {}+{} & 2b & & &= 0,\\
+ 5a & {}+{} & 6b & {}+{} & 7c &= 0.
+ \end{alignedat}
+ \]
+ In other words, we have
+ \[
+ A = \frac{\Z^3}{((2, 3, 1), (1, 2, 0), (5, 6, 7))}.
+ \]
+ We would like to get a better description of $A$. It is not even obvious whether this module is the zero module.
+
+ To work out a good description, we consider the matrix whose columns are the relations:
+ \[
+ X =
+ \begin{pmatrix}
+ 2 & 1 & 5\\
+ 3 & 2 & 6\\
+ 1 & 0 & 7
+ \end{pmatrix}.
+ \]
+ To figure out the Smith normal form, we find the Fitting ideals. Since $1$ is an entry of $X$, we have
+ \[
+ \Fit_1(X) = (1).
+ \]
+ So $d_1 = 1$.
+
+ We have to work out the second Fitting ideal. In principle, we have to check all the minors, but we immediately notice
+ \[
+ \begin{vmatrix}
+ 2 & 1\\
+ 3 & 2
+ \end{vmatrix} = 1.
+ \]
+ So $\Fit_2(X) = (1)$, and $d_2 = 1$. Finally, we find
+ \[
+ \Fit_3(X) =
+ \left(\det\begin{pmatrix}
+ 2 & 1 & 5\\
+ 3 & 2 & 6\\
+ 1 & 0 & 7
+ \end{pmatrix}\right) = (3).
+ \]
+ So $d_3 = 3$. So we know
+ \[
+ A \cong \frac{\Z}{(1)} \oplus \frac{\Z}{(1)} \oplus \frac{\Z}{(3)} \cong \frac{\Z}{(3)} \cong C_3.
+ \]
+ If you don't feel like computing determinants, doing row and column reduction is often as quick and straightforward.
+\end{eg}
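+As a sanity check, when the relation matrix $X$ is square with non-zero determinant, the quotient $\Z^n$ by the columns of $X$ is a finite group of order $|\det X| = |d_1 d_2 \cdots d_n|$, since elementary operations change the determinant only by units. So a one-line computation already confirms that $A$ has order $3$ (the helper \texttt{det3} is a hand-rolled $3 \times 3$ determinant, our own naming):

```python
def det3(M):
    """Determinant of a 3x3 integer matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

X = [[2, 1, 5],
     [3, 2, 6],
     [1, 0, 7]]
print(abs(det3(X)))  # 3: the quotient has order 3, so A is C_3
```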
+
+We re-state the previous theorem in the specific case where $R$ is $\Z$, since this is particularly useful.
+\begin{cor}[Classification of finitely-generated abelian groups]\index{classification of finitely-generated abelian groups}
+ Any finitely-generated abelian group is isomorphic to
+ \[
+ C_{d_1} \times \cdots \times C_{d_r} \times C_\infty \times \cdots \times C_\infty,
+ \]
+ where $C_\infty \cong \Z$ is the infinite cyclic group, with
+ \[
+ d_1 \mid d_2 \mid \cdots \mid d_r.
+ \]
+\end{cor}
+
+\begin{proof}
+ Let $R = \Z$, and apply the classification of finitely generated $R$-modules.
+\end{proof}
+
+Note that if the group is finite, then there cannot be any $C_\infty$ factors. So it is just a product of finite cyclic groups.
+\begin{cor}
+ If $A$ is a finite abelian group, then
+ \[
+ A \cong C_{d_1} \times \cdots \times C_{d_r},
+ \]
+ with
+ \[
+ d_1 \mid d_2 \mid \cdots \mid d_r.
+ \]
+\end{cor}
+This is the result we stated at the beginning of the course.
+
+Recall that we also claimed that a finite abelian group decomposes into a product of groups of the form $C_{p^k}$ with $p$ prime, and we said this followed from the Chinese remainder theorem. This is again true in general, and the key ingredient is a module version of the Chinese remainder theorem.
+
+\begin{lemma}[Chinese remainder theorem]\index{Chinese remainder theorem}
+ Let $R$ be a Euclidean domain, and $a, b \in R$ be such that $\gcd(a, b) = 1$. Then
+ \[
+ \frac{R}{(ab)} \cong \frac{R}{(a)} \times \frac{R}{(b)}
+ \]
+ as $R$-modules.
+\end{lemma}
+The proof is just that of the Chinese remainder theorem written in ring language.
+\begin{proof}
+ Consider the $R$-module homomorphism
+ \[
+ \phi: \frac{R}{(a)} \times \frac{R}{(b)} \to \frac{R}{(ab)}
+ \]
+ by
+ \[
+ (r_1 + (a), r_2 + (b)) \mapsto br_1 + ar_2 + (ab).
+ \]
+ To show this is well-defined, suppose
+ \[
+ (r_1 + (a), r_2 + (b)) = (r_1' + (a), r_2' + (b)).
+ \]
+ Then
+ \begin{align*}
+ r_1 &= r_1' + xa\\
+ r_2 &= r_2' + yb.
+ \end{align*}
+ So
+ \[
+ br_1 + ar_2 + (ab) = br_1' + x ab + a r_2' + yab + (ab) = br_1' + ar_2' + (ab).
+ \]
+ So this is indeed well-defined. It is clear that this is a module map, by inspection.
+
+ We now have to show it is surjective and injective. So far, we have not used the hypothesis that $\gcd(a, b) = 1$. Since $\gcd(a, b) = 1$, by the Euclidean algorithm, we can write
+ \[
+ 1 = ax + by
+ \]
+ for some $x, y \in R$. So we have
+ \[
+ \phi(y + (a), x + (b)) = by + ax + (ab) = 1 + (ab).
+ \]
+ So $1 \in \im \phi$. Since this is an $R$-module map, we get
+ \[
+ \phi(r (y + (a), x + (b))) = r \cdot (1 + (ab)) = r + (ab).
+ \]
+ The key fact is that $R/(ab)$ as an $R$-module is generated by $1$. Thus we know $\phi$ is surjective.
+
+ Finally, we have to show it is injective, i.e.\ that the kernel is trivial. Suppose
+ \[
+ \phi(r_1 + (a), r_2 + (b)) = 0 + (ab).
+ \]
+ Then
+ \[
+ br_1 + a r_2 \in (ab).
+ \]
+ So we can write
+ \[
+ br_1 + ar_2 = abx
+ \]
+ for some $x \in R$. Since $a \mid ar_2$ and $a \mid abx$, we know $a \mid br_1$. Since $a$ and $b$ are coprime, unique factorization implies $a \mid r_1$. Similarly, we know $b \mid r_2$. So
+ \[
+ (r_1 + (a), r_2 + (b)) = (0 + (a), 0 + (b)).
+ \]
+ So the kernel is trivial.
+\end{proof}
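+For $R = \Z$ the proof is entirely constructive: the extended Euclidean algorithm produces $x, y$ with $ax + by = 1$, and the formula for $\phi$ then inverts reduction modulo $a$ and $b$. A minimal Python sketch, with $a = 4$, $b = 9$ chosen as sample coprime moduli (helper names are our own):

```python
def bezout(a, b):
    """Extended Euclidean algorithm: (x, y) with a*x + b*y = gcd(a, b)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_x, old_y

a, b = 4, 9                      # coprime, so Z/36 decomposes as Z/4 x Z/9
x, y = bezout(a, b)              # 4*x + 9*y == 1
to_pair = lambda r: (r % a, r % b)
from_pair = lambda r1, r2: (b * y * r1 + a * x * r2) % (a * b)

# the two maps are mutually inverse, exactly as in the proof above
assert all(from_pair(*to_pair(r)) == r for r in range(a * b))
```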
+
+\begin{thm}[Prime decomposition theorem]\index{prime decomposition theorem}
+ Let $R$ be a Euclidean domain, and $M$ be a finitely-generated $R$-module. Then
+ \[
+ M \cong N_1 \oplus N_2 \oplus \cdots \oplus N_t,
+ \]
+ where each $N_i$ is either $R$ or is $R/(p^n)$ for some prime $p \in R$ and some $n \geq 1$.
+\end{thm}
+
+\begin{proof}
+ We already know
+ \[
+ M \cong \frac{R}{(d_1)} \oplus \cdots \oplus \frac{R}{(d_r)} \oplus R \oplus \cdots \oplus R.
+ \]
+ So it suffices to show that each $R/(d_i)$ can be written in that form. We write
+ \[
+ d_i = p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}
+ \]
+ with the $p_j$ distinct primes, so that the $p_j^{n_j}$ are pairwise coprime. So by iterating the lemma, we have
+ \[
+ \frac{R}{(d_i)} \cong \frac{R}{(p_1^{n_1})} \oplus\cdots \oplus \frac{R}{(p_k^{n_k})}.\qedhere
+ \]
+\end{proof}
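+For $R = \Z$, the decomposition of each $\Z/(d_i)$ is read off from the prime factorization of $d_i$. A small Python sketch using trial division (the function name is our own):

```python
def prime_power_parts(d):
    """Return the pairwise coprime prime powers p^n with
    Z/(d) isomorphic to the direct sum of the Z/(p^n)."""
    parts, p = [], 2
    while p * p <= d:
        if d % p == 0:
            q = 1
            while d % p == 0:
                d //= p
                q *= p
            parts.append(q)
        p += 1
    if d > 1:
        parts.append(d)          # whatever is left is itself prime
    return parts

print(prime_power_parts(360))    # [8, 9, 5]
```

+So, for example, $\Z/360\Z \cong \Z/8\Z \oplus \Z/9\Z \oplus \Z/5\Z$.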
+
+\subsection{Modules over \texorpdfstring{$\F[X]$}{F[X]} and normal forms for matrices}
+That was one promise delivered. We next want to consider the Jordan normal form. This is less straightforward, since considering $V$ directly as an $\F$ module would not be too helpful (since that would just be pure linear algebra). Instead, we use the following trick:
+
+For a field $\F$, the polynomial ring $\F[X]$ is a Euclidean domain, so the results of the last few sections apply. If $V$ is a vector space over $\F$, and $\alpha: V \to V$ is a linear map, then we can make $V$ into an $\F[X]$-module via
+\begin{align*}
+ \F[X] \times V &\to V\\
+ (f, v) &\mapsto (f(\alpha))(v).
+\end{align*}
+We write $V_\alpha$ for this $\F[X]$-module.
+
+\begin{lemma}
+ If $V$ is a finite-dimensional vector space, then $V_\alpha$ is a finitely-generated $\F[X]$-module.
+\end{lemma}
+
+\begin{proof}
+ If $\mathbf{v}_1, \cdots, \mathbf{v}_n$ generate $V$ as an $\F$-module, i.e.\ they span $V$ as a vector space over $\F$, then they also generate $V_\alpha$ as an $\F[X]$-module, since $\F \leq \F[X]$.
+\end{proof}
+
+\begin{eg}
+ Suppose $V_\alpha \cong \F[X]/(X^r)$ as $\F[X]$-modules. Then in particular they are isomorphic as $\F$-modules (since being a map of $\F$-modules has fewer requirements than being a map of $\F[X]$-modules).
+
+ Under this isomorphism, the images of $1, X, X^2, \cdots, X^{r - 1} \in \F[X]/(X^r)$ form a vector space basis for $V_\alpha$. In this basis, the action of $X$ has the matrix
+ \[
+ \begin{pmatrix}
+ 0 & 0 & \cdots & 0 & 0\\
+ 1 & 0 & \cdots & 0 & 0\\
+ 0 & 1 & \cdots & 0 & 0\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ 0 & 0 & \cdots & 1 & 0
+ \end{pmatrix}.
+ \]
+ We also know that in $V_\alpha$, the action of $X$ is by definition the linear map $\alpha$. So under this basis, $\alpha$ also has matrix
+ \[
+ \begin{pmatrix}
+ 0 & 0 & \cdots & 0 & 0\\
+ 1 & 0 & \cdots & 0 & 0\\
+ 0 & 1 & \cdots & 0 & 0\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ 0 & 0 & \cdots & 1 & 0
+ \end{pmatrix}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Suppose
+ \[
+ V_\alpha \cong \frac{\F[X]}{((X - \lambda)^r)}
+ \]
+ for some $\lambda \in \F$. Consider the new linear map
+ \[
+ \beta = \alpha - \lambda \cdot \id: V \to V.
+ \]
+ Then $V_\beta \cong \F[Y]/(Y^r)$, for $Y = X - \lambda$. So there is a basis for $V$ so that $\beta$ looks like
+ \[
+ \begin{pmatrix}
+ 0 & 0 & \cdots & 0 & 0\\
+ 1 & 0 & \cdots & 0 & 0\\
+ 0 & 1 & \cdots & 0 & 0\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ 0 & 0 & \cdots & 1 & 0
+ \end{pmatrix}.
+ \]
+ So we know $\alpha$ has matrix
+ \[
+ \begin{pmatrix}
+ \lambda & 0 & \cdots & 0 & 0\\
+ 1 & \lambda & \cdots & 0 & 0\\
+ 0 & 1 & \cdots & 0 & 0\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ 0 & 0 & \cdots & 1 & \lambda
+ \end{pmatrix}
+ \]
+ So it is a Jordan block (except transposed from the usual convention: here the $1$s are below the diagonal rather than above).
+\end{eg}
+
+\begin{eg}
+ Suppose $V_\alpha \cong \F[X]/(f)$ for some polynomial $f$, for
+ \[
+ f = a_0 + a_1X + \cdots + a_{r - 1} X^{r - 1} + X^r.
+ \]
+ This has a basis $1, X, X^2, \cdots, X^{r - 1}$ as well, in which $\alpha$ is
+ \[
+ c(f) =
+ \begin{pmatrix}
+ 0 & 0 & \cdots & 0 & -a_0\\
+ 1 & 0 & \cdots & 0 & -a_1\\
+ 0 & 1 & \cdots & 0 & -a_2\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ 0 & 0 & \cdots & 1 & -a_{r - 1}
+ \end{pmatrix}.
+ \]
+ We call this the \emph{companion matrix} for the monic polynomial $f$.
+\end{eg}
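+We can check numerically that the companion matrix deserves its name: $f(c(f)) = 0$, which reflects the fact that multiplication by $f$ kills $\F[X]/(f)$. A minimal Python sketch over the integers (all helper names are our own):

```python
def companion(coeffs):
    """c(f) for monic f = a_0 + a_1 X + ... + a_{r-1} X^{r-1} + X^r, with
    the 1s below the diagonal as in the notes; coeffs = [a_0, ..., a_{r-1}]."""
    r = len(coeffs)
    C = [[0] * r for _ in range(r)]
    for i in range(1, r):
        C[i][i - 1] = 1
    for i in range(r):
        C[i][r - 1] = -coeffs[i]
    return C

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply_poly(coeffs, C):
    """Evaluate f(C) = a_0 I + a_1 C + ... + C^r for monic f."""
    n = len(C)
    P = [[int(i == j) for j in range(n)] for i in range(n)]   # P = C^0
    result = [[coeffs[0] * P[i][j] for j in range(n)] for i in range(n)]
    for a in coeffs[1:] + [1]:                  # leading coefficient is 1
        P = mat_mul(P, C)
        result = [[result[i][j] + a * P[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# f = 2 - 3X + X^2: f(c(f)) vanishes, as the module picture predicts
C = companion([2, -3])
assert apply_poly([2, -3], C) == [[0, 0], [0, 0]]
```

+Taking $f = X^r$ (all lower coefficients zero) recovers the nilpotent matrix of the first example.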
+These examples cover all the summands that can appear in the decomposition of a finitely generated $\F[X]$-module. Since we have already classified such modules, this allows us to put matrices in a rather nice form.
+
+\begin{thm}[Rational canonical form]\index{rational canonical form}
+ Let $\alpha: V \to V$ be a linear endomorphism of a finite-dimensional vector space over $\F$, and $V_\alpha$ be the associated $\F[X]$-module. Then
+ \[
+ V_\alpha \cong \frac{\F[X]}{(f_1)} \oplus \frac{\F[X]}{(f_2)} \oplus \cdots \oplus \frac{\F[X]}{(f_s)},
+ \]
+ with $f_1 \mid f_2 \mid \cdots \mid f_s$. Thus there is a basis for $V$ in which the matrix for $\alpha$ is the block diagonal
+ \[
+ \begin{pmatrix}
+ c(f_1) & 0 & \cdots & 0\\
+ 0 & c(f_2) & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & c(f_s)
+ \end{pmatrix}
+ \]
+\end{thm}
+This is the sort of theorem whose statement is longer than the proof.
+
+\begin{proof}
+ We already know that $V_\alpha$ is a finitely-generated $\F[X]$-module. By the structure theorem of $\F[X]$-modules, we know
+ \[
+ V_\alpha \cong \frac{\F[X]}{(f_1)} \oplus \frac{\F[X]}{(f_2)} \oplus \cdots \oplus \frac{\F[X]}{(f_s)} \oplus 0.
+ \]
+ We know there are no copies of $\F[X]$, since $V_\alpha = V$ is finite-dimensional over $\F$, but $\F[X]$ is not. The divisibility criterion also follows from the structure theorem. Then the form of the matrix is immediate.
+\end{proof}
+This is really a canonical form. The Jordan normal form is not canonical, since we can move the blocks around. The structure theorem determines the factors $f_i$ up to units, and once we require them to be monic, there is no choice left.
+
+In terms of matrices, this says that if $\alpha$ is represented by a matrix $A \in M_{n, n} (\F)$ in some basis, then $A$ is conjugate to a matrix of the form above.
+
+From the rational canonical form, we can immediately read off the minimal polynomial as $f_s$. This is since if we view $V_\alpha$ as the decomposition above, we find that $f_s(\alpha)$ kills everything in $\frac{\F[X]}{(f_s)}$. It also kills the other factors since $f_i \mid f_s$ for all $i$. So $f_s(\alpha) = 0$. We also know no smaller polynomial kills $V$, since it does not kill $\frac{\F[X]}{(f_s)}$.
+
+Similarly, we find that the characteristic polynomial of $\alpha$ is $f_1 f_2 \cdots f_s$.
+
+Recall we had a different way of decomposing a module over a Euclidean domain, namely the prime decomposition, and this gives us the Jordan normal form.
+
+Before we can use that, we need to know what the primes are. This is why we need to work over $\C$.
+
+\begin{lemma}
+ The prime elements of $\C[X]$ are the $X - \lambda$ for $\lambda \in \C$ (up to multiplication by units).
+\end{lemma}
+
+\begin{proof}
+ Let $f \in \C[X]$. If $f$ is constant, then it is either a unit or $0$. Otherwise, by the fundamental theorem of algebra, it has a root $\lambda$. So it is divisible by $X - \lambda$. So if $f$ is irreducible, it must have degree $1$. And clearly everything of degree $1$ is prime.
+\end{proof}
+
+Applying the prime decomposition theorem to $\C[X]$-modules gives us the Jordan normal form.
+
+\begin{thm}[Jordan normal form]\index{Jordan normal form}
+ Let $\alpha: V \to V$ be an endomorphism of a vector space $V$ over $\C$, and $V_\alpha$ be the associated $\C[X]$-module. Then
+ \[
+ V_\alpha \cong \frac{\C[X]}{((X - \lambda_1)^{a_1})} \oplus \frac{\C[X]}{((X - \lambda_2)^{a_2})} \oplus \cdots \oplus \frac{\C[X]}{((X - \lambda_t)^{a_t})},
+ \]
+ where $\lambda_i \in \C$ do \emph{not} have to be distinct. So there is a basis of $V$ in which $\alpha$ has matrix
+ \[
+ \begin{pmatrix}
+ J_{a_1}(\lambda_1) & & & 0\\
+ & J_{a_2}(\lambda_2)\\
+ & & \ddots\\
+ 0 & & & J_{a_t} (\lambda_t)
+ \end{pmatrix},
+ \]
+ where
+ \[
+ J_m (\lambda) =
+ \begin{pmatrix}
+ \lambda & 0 & \cdots & 0\\
+ 1 & \lambda & \cdots & 0\\
+ \vdots & \ddots & \ddots & \vdots\\
+ 0 & \cdots & 1 & \lambda
+ \end{pmatrix}
+ \]
+ is an $m \times m$ matrix.
+\end{thm}
+
+\begin{proof}
+ Apply the prime decomposition theorem to $V_\alpha$. Then all primes are of the form $X - \lambda$. We then use our second example at the beginning of the chapter to get the form of the matrix.
+\end{proof}
+
+The blocks $J_m(\lambda)$ are called the \emph{Jordan $\lambda$-blocks}. It turns out that the Jordan blocks are unique up to reordering, but it does not immediately follow from what we have so far, and we will not prove it. It is done in the IB Linear Algebra course.
+
+We can also read off the minimal polynomial and characteristic polynomial of $\alpha$. The minimal polynomial is
+\[
+ \prod_{\lambda} (X - \lambda)^{a_\lambda},
+\]
+where $a_\lambda$ is the size of the largest $\lambda$-block. The characteristic polynomial of $\alpha$ is
+\[
+ \prod_{\lambda} (X - \lambda)^{b_\lambda},
+\]
+where $b_\lambda$ is the sum of the sizes of the $\lambda$-blocks. Alternatively, it is
+\[
+ \prod_{i = 1}^t (X - \lambda_i)^{a_i}.
+\]
+From the Jordan normal form, we can also read off another invariant, namely the dimension of the $\lambda$-eigenspace of $\alpha$, which is just the number of $\lambda$-blocks.
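To make this concrete, here is a small numerical sanity check (an addition of ours, using \texttt{numpy}) that assembles a matrix in Jordan normal form out of the blocks $J_2(3)$, $J_1(3)$ and $J_2(5)$, and reads off the invariants described above:

```python
import numpy as np
from numpy.linalg import matrix_rank

def jordan_block(m, lam):
    # lambda on the diagonal and 1s below it, matching the convention above
    return lam * np.eye(m) + np.eye(m, k=-1)

def block_diag(*blocks):
    n = sum(b.shape[0] for b in blocks)
    M = np.zeros((n, n))
    i = 0
    for b in blocks:
        m = b.shape[0]
        M[i:i + m, i:i + m] = b
        i += m
    return M

# lambda-blocks J_2(3), J_1(3) and J_2(5)
J = block_diag(jordan_block(2, 3), jordan_block(1, 3), jordan_block(2, 5))
I = np.eye(5)

# minimal polynomial (X - 3)^2 (X - 5)^2: exponents are the largest block sizes
assert np.allclose((J - 3*I) @ (J - 3*I) @ (J - 5*I) @ (J - 5*I), 0)
# (X - 3)(X - 5)^2 is too small: it does not kill the 2x2 block at 3
assert not np.allclose((J - 3*I) @ (J - 5*I) @ (J - 5*I), 0)

# characteristic polynomial (X - 3)^3 (X - 5)^2: exponents are sums of block sizes
for x in (0.0, 1.0, 2.0, 7.0):
    assert np.isclose(np.linalg.det(x*I - J), (x - 3)**3 * (x - 5)**2)

# number of lambda-blocks = dimension of the lambda-eigenspace
assert 5 - matrix_rank(J - 3*I) == 2
assert 5 - matrix_rank(J - 5*I) == 1
```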
+
+We can also use the idea of viewing $V$ as an $\F[X]$-module to prove the Cayley-Hamilton theorem. In fact, we don't need $\F$ to be a field.
+\begin{thm}[Cayley-Hamilton theorem]\index{Cayley-Hamilton theorem}
+ Let $M$ be a finitely-generated $R$-module, where $R$ is some commutative ring. Let $\alpha: M \to M$ be an $R$-module homomorphism. Let $A$ be a matrix representation of $\alpha$ under some choice of generators, and let $p(t) = \det(tI - A)$. Then $p(\alpha) = 0$.
+\end{thm}
+
+\begin{proof}
+ We consider $M$ as an $R[X]$-module with action given by
+ \[
+ (f(X))(m) = f(\alpha) m.
+ \]
+ Suppose $e_1, \cdots, e_n$ span $M$, and that for all $i$, we have
+ \[
+ \alpha(e_i) = \sum_{j = 1}^n a_{ij} e_j.
+ \]
+ Then
+ \[
+ \sum_{j = 1}^n (X \delta_{ij} - a_{ij}) e_j = 0.
+ \]
+ We write $C$ for the matrix with entries
+ \[
+ c_{ij} = X \delta_{ij} - a_{ij} \in R[X].
+ \]
+ We now use the fact that
+ \[
+ \adj(C) C = \det(C) I,
+ \]
+ which we proved in IB Linear Algebra (and the proof did not assume that the underlying ring is a field). Expanding this out, we get the following equation (in $R[X]$).
+ \[
+ \chi_\alpha (X) I = \det(XI - A) I = (\adj(XI - A)) (XI - A).
+ \]
+ Writing this in components, and multiplying by $e_k$, we have
+ \[
+ \chi_\alpha(X) \delta_{ik}e_k = \sum_{j = 1}^n (\adj(XI - A)_{ij}) (X \delta_{jk} - a_{jk}) e_k.
+ \]
+ Then for each $i$, we sum over $k$ to obtain
+ \[
+ \sum_{k = 1}^n \chi_\alpha(X) \delta_{ik} e_k = \sum_{j, k = 1}^n (\adj(XI - A)_{ij}) (X \delta_{jk} - a_{jk}) e_k = 0,
+ \]
+ by our choice of $a_{ij}$. But the left hand side is just $\chi_{\alpha}(X) e_i$. So $\chi_\alpha(X)$ acts trivially on all of the generators $e_i$. So it in fact acts trivially. So $\chi_\alpha(\alpha)$ is the zero map (since acting by $X$ is the same as acting by $\alpha$, by construction).
+\end{proof}
+Note that if we want to prove this just for matrices, we don't really need the \emph{theory} of rings and modules. It just provides a convenient language to write the proof in.
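As an illustration (ours, not part of the lectures), we can check the theorem numerically for $R = \Z$, where \texttt{numpy} integer arithmetic is exact:

```python
import numpy as np

# An integer matrix, i.e. a Z-module homomorphism Z^3 -> Z^3
A = np.array([[2, 1, 0],
              [0, 3, 1],
              [1, 0, 1]], dtype=np.int64)
I = np.eye(3, dtype=np.int64)

# p(t) = det(tI - A) = t^3 - 6t^2 + 11t - 7, expanded by hand;
# sanity-check the coefficients against the determinant at a few points
coeffs = np.array([1, -6, 11, -7], dtype=np.int64)
for t in (0, 1, 2, 10):
    assert round(np.linalg.det(t * np.eye(3) - A)) == np.polyval(coeffs, t)

# evaluate p(A) by Horner's rule in exact integer arithmetic
P = np.zeros((3, 3), dtype=np.int64)
for c in coeffs:
    P = P @ A + c * I
assert np.array_equal(P, np.zeros((3, 3), dtype=np.int64))  # Cayley-Hamilton
```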
+
+\subsection{Conjugacy of matrices*}
+We are now going to do some fun computations of conjugacy classes of matrices, using what we have got so far.
+\begin{lemma}
+ Let $\alpha, \beta: V \to V$ be two linear maps. Then $V_\alpha\cong V_\beta$ as $\F[X]$-modules if and only if $\alpha$ and $\beta$ are conjugate as linear maps, i.e.\ there is some $\gamma: V \to V$ such that $\alpha = \gamma^{-1}\beta\gamma$.
+\end{lemma}
+This is not a deep theorem; in some sense, it is just a tautology. All we have to do is unwrap what these statements say.
+
+\begin{proof}
+ Let $\gamma: V_\alpha \to V_\beta$ be an $\F[X]$-module isomorphism. Then for $\mathbf{v} \in V$, we notice that $\beta (\mathbf{v})$ is just $X \cdot \mathbf{v}$ in $V_\beta$, and $\alpha(\mathbf{v})$ is just $X \cdot \mathbf{v}$ in $V_\alpha$. So we get
+ \[
+ \beta \circ \gamma(\mathbf{v}) = X \cdot (\gamma(\mathbf{v})) = \gamma(X \cdot \mathbf{v}) = \gamma \circ \alpha(\mathbf{v}),
+ \]
+ using the definition of an $\F[X]$-module homomorphism.
+
+ So we know
+ \[
+ \beta\gamma = \gamma\alpha.
+ \]
+ So
+ \[
+ \alpha = \gamma^{-1}\beta\gamma.
+ \]
+ Conversely, let $\gamma: V \to V$ be a linear isomorphism such that $\gamma^{-1}\beta\gamma = \alpha$. We now claim that $\gamma: V_\alpha \to V_\beta$ is an $\F[X]$-module isomorphism. We just have to check that
+ \begin{align*}
+ \gamma(f \cdot \mathbf{v}) &= \gamma(f(\alpha)(\mathbf{v})) \\
+ &= \gamma((a_0 + a_1 \alpha + \cdots + a_n \alpha^n) (\mathbf{v}))\\
+ &= \gamma(a_0 \mathbf{v}) + \gamma(a_1 \alpha(\mathbf{v})) + \gamma (a_2 \alpha^2 (\mathbf{v})) + \cdots + \gamma(a_n \alpha^n(\mathbf{v}))\\
+ &= (a_0 + a_1 \beta + a_2 \beta^2 + \cdots + a_n \beta^n)(\gamma(\mathbf{v}))\\
+ &= f\cdot \gamma(\mathbf{v}).\qedhere
+ \end{align*}
+\end{proof}
+So classifying linear maps up to conjugation is the same as classifying modules.
+
+We can reinterpret this a little bit, using our classification of finitely-generated modules.
+\begin{cor}
+ There is a bijection between conjugacy classes of $n \times n$ matrices over $\F$ and sequences of monic polynomials $d_1, \cdots, d_r$ such that $d_1 \mid d_2 \mid \cdots \mid d_r$ and $\deg (d_1\cdots d_r) = n$.
+\end{cor}
+
+\begin{eg}
+ Let's classify conjugacy classes in $\GL_2(\F)$, i.e.\ we need to classify $\F[X]$-modules of the form
+ \[
+ \frac{\F[X]}{(d_1)} \oplus \frac{\F[X]}{(d_2)} \oplus \cdots \oplus \frac{\F[X]}{(d_r)}
+ \]
+ which are two-dimensional as $\F$-modules. As we must have $\deg(d_1d_2 \cdots d_r) = 2$, we either have a quadratic thing or two linear things, i.e.\ either
+ \begin{enumerate}
+ \item $r = 1$ and $\deg (d_1) = 2$,
+ \item $r = 2$ and $\deg (d_1) = \deg(d_2) = 1$. In this case, since we have $d_1 \mid d_2$, and they are both monic linear, we must have $d_1 = d_2 = X - \lambda$ for some $\lambda$.
+ \end{enumerate}
+ In the first case, the module is
+ \[
+ \frac{\F[X]}{(d_1)},
+ \]
+ where, say,
+ \[
+ d_1 = X^2 + a_1 X + a_2.
+ \]
+ In the second case, we get
+ \[
+ \frac{\F[X]}{(X - \lambda)} \oplus \frac{\F[X]}{(X - \lambda)}.
+ \]
+ What does this say? In the first case, we use the basis $1, X$, and the linear map has matrix
+ \[
+ \begin{pmatrix}
+ 0 & -a_2\\
+ 1 & -a_1
+ \end{pmatrix}
+ \]
+ In the second case, this is
+ \[
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda
+ \end{pmatrix}.
+ \]
+ Do these cases overlap? Suppose the two of them are conjugate. Then they have the same determinant and same trace. So we know
+ \begin{align*}
+ -a_1 &= 2\lambda\\
+ a_2 &= \lambda^2
+ \end{align*}
+ So in fact our polynomial is
+ \[
+ X^2 + a_1 X + a_2 = X^2 - 2\lambda X + \lambda^2 = (X - \lambda)^2.
+ \]
+ This is just the polynomial of a Jordan block. So the matrix
+ \[
+ \begin{pmatrix}
+ 0 & -a_2\\
+ 1 & -a_1
+ \end{pmatrix}
+ \]
+ is conjugate to the Jordan block
+ \[
+ \begin{pmatrix}
+ \lambda & 0\\
+ 1 & \lambda
+ \end{pmatrix},
+ \]
+ but this is not conjugate to $\lambda I$, e.g.\ by looking at eigenspaces. So these cases are disjoint.
+
+ Note that we have done more work than we really needed, since $\lambda I$ is invariant under conjugation.
+
+ But the first case is not too satisfactory. We can further classify it as follows. If $X^2 + a_1 X + a_2$ is reducible, then it is
+ \[
+ (X - \lambda)(X - \mu)
+ \]
+ for some $\mu, \lambda \in \F$. If $\lambda = \mu$, then the matrix is conjugate to
+ \[
+ \begin{pmatrix}
+ \lambda & 0\\
+ 1 & \lambda
+ \end{pmatrix}
+ \]
+ Otherwise, it is conjugate to
+ \[
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \mu
+ \end{pmatrix}.
+ \]
+ In the case where $X^2 + a_1 X + a_2$ is irreducible, there is nothing we can do in general. However, we can look at some special scenarios and see if there is anything we can do.
+\end{eg}
+
+\begin{eg}
+ Consider $\GL_2(\Z/3)$. We want to classify its conjugacy classes. By the general theory, we know everything is conjugate to
+ \[
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \mu
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ \lambda & 0\\
+ 1 & \lambda
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ 0 & -a_2\\
+ 1 & -a_1
+ \end{pmatrix},
+ \]
+ with
+ \[
+ X^2 + a_1 X + a_2
+ \]
+ irreducible. So we need to figure out what the irreducibles are.
+
+ A reasonable strategy is to guess. Given any quadratic, it is easy to see if it is irreducible, since we can simply check whether it has any roots, and there are just three things to try. However, we can be slightly more clever. We first count how many irreducibles we are expecting, and then find that many of them.
+
+ There are $9$ monic quadratic polynomials in total, since $a_1, a_2 \in \Z/3$. The reducibles are $(X - \lambda)^2$ or $(X - \lambda)(X - \mu)$ with $\lambda \not= \mu$. There are three of each kind. So we have $6$ reducible polynomials, and so $3$ irreducible ones.
+
+ We can then check that
+ \[
+ X^2 + 1,\quad X^2 + X + 2,\quad X^2 + 2X + 2
+ \]
+ are the irreducible polynomials. So every matrix in $\GL_2(\Z/3)$ is conjugate to one of
+ \[
+ \begin{pmatrix}
+ 0 & -1\\
+ 1 & 0
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ 0 & -2\\
+ 1 & -1
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ 0 & -2\\
+ 1 & -2
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \mu
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ \lambda & 0\\
+ 1 & \lambda
+ \end{pmatrix},
+ \]
+ where $\lambda, \mu \in (\Z/3)^\times$ (since the matrix has to be invertible). The number of conjugacy classes of each type are $1, 1, 1, 3, 2$. So there are 8 conjugacy classes. The first three classes have elements of order $4, 8, 8$ respectively, by trying. We notice that the identity matrix has order $1$, and
+ \[
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \mu
+ \end{pmatrix}
+ \]
+ has order $2$ otherwise. Finally, for the last type, we have
+ \[
+ \ord
+ \begin{pmatrix}
+ 1 & 0\\
+ 1 & 1
+ \end{pmatrix} = 3,\quad \ord
+ \begin{pmatrix}
+ 2 & 0\\
+ 1 & 2
+ \end{pmatrix} = 6
+ \]
+ Note that we also have
+ \[
+ |\GL_2(\Z/3)| = 48 = 2^4 \cdot 3.
+ \]
+ Since there is no element of order $16$, the Sylow $2$-subgroup of $\GL_2(\Z/3)$ is not cyclic.
+
+ To construct the Sylow $2$-subgroup, we might start with an element of order $8$, say
+ \[
+ B =
+ \begin{pmatrix}
+ 0 & 1 \\
+ 1 & 2
+ \end{pmatrix}.
+ \]
+ To make a subgroup of order $16$, a sensible guess would be to add an element of order $2$, but that doesn't work, since $B^4$ already gives us the element of order $2$. Instead, we pick
+ \[
+ A =
+ \begin{pmatrix}
+ 0 & 2\\
+ 1 & 0
+ \end{pmatrix}.
+ \]
+ We notice
+ \[
+ A^{-1}BA =
+ \begin{pmatrix}
+ 0 & 1\\
+ 2 & 0
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 2
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0 & 2\\
+ 1 & 0
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1 & 2\\
+ 0 & 2
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0 & 2\\
+ 1 & 0
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 2 & 2\\
+ 2 & 0
+ \end{pmatrix} = B^3.
+ \]
+ So this is a bit like the dihedral group.
+
+ We know that
+ \[
+ \bra B\ket \lhd \bra A, B\ket.
+ \]
+ Also, we know $|\bra B\ket| = 8$. So if we can show that $\bra B \ket$ has index $2$ in $\bra A, B\ket$, then this is the Sylow $2$-subgroup. By the \emph{second isomorphism theorem}, something we have never used in our life, we know
+ \[
+ \frac{\bra A, B\ket}{\bra B\ket} \cong \frac{\bra A\ket}{\bra A \ket \cap \bra B\ket}.
+ \]
+ We can list things out, and then find
+ \[
+ \bra A\ket \cap \bra B\ket = \left\bra
+ \begin{pmatrix}
+ 2 & 0\\
+ 0 & 2
+ \end{pmatrix}\right\ket \cong C_2.
+ \]
+ We also know $\bra A\ket \cong C_4$. So we know
+ \[
+ \frac{|\bra A, B\ket|}{|\bra B\ket|} = 2.
+ \]
+ So $|\bra A, B\ket| = 16$. So this is the Sylow $2$-subgroup. In fact, it is
+ \[
+ \bra A, B\mid A^4 = B^8 = e, A^2 = B^4, A^{-1}BA = B^3\ket.
+ \]
+ We call this the \emph{semi-dihedral group} of order $16$, because it is a bit like a dihedral group.
+
+ Note that finding this subgroup was purely guesswork. There is no method to know that $A$ and $B$ are the right choices.
+\end{eg}
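The computations in this example are easy to verify by machine. Here is a short Python check (our addition, using \texttt{numpy}) of the element orders, the conjugation relation $A^{-1}BA = B^3$, and the size of $\bra A, B\ket$:

```python
import numpy as np

A = np.array([[0, 2], [1, 0]], dtype=np.int64)
B = np.array([[0, 1], [1, 2]], dtype=np.int64)
I = np.eye(2, dtype=np.int64)

def order(M):
    # order of M in GL_2(Z/3)
    P, k = M % 3, 1
    while not np.array_equal(P, I):
        P, k = (P @ M) % 3, k + 1
    return k

def span(gens):
    # size of the subgroup generated by gens: close under
    # right multiplication, starting from the identity
    seen, stack = {I.tobytes()}, [I]
    while stack:
        M = stack.pop()
        for g in gens:
            N = (M @ g) % 3
            if N.tobytes() not in seen:
                seen.add(N.tobytes())
                stack.append(N)
    return len(seen)

assert order(A) == 4 and order(B) == 8
assert order(np.array([[1, 0], [1, 1]])) == 3
assert order(np.array([[2, 0], [1, 2]])) == 6

# A^{-1} = 2A since A^2 = 2I, and conjugation by A sends B to B^3
A_inv = (2 * A) % 3
assert np.array_equal((A_inv @ B @ A) % 3, np.linalg.matrix_power(B, 3) % 3)

assert span([A, B]) == 16  # the Sylow 2-subgroup
assert sum((a*d - b*c) % 3 != 0 for a in range(3) for b in range(3)
           for c in range(3) for d in range(3)) == 48  # |GL_2(Z/3)|
```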
+
+\printindex
+\end{document}
diff --git a/books/cam/IB_L/numerical_analysis.tex b/books/cam/IB_L/numerical_analysis.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1a54a95cb6399c9503207ac4b41e590b6b39c4a0
--- /dev/null
+++ b/books/cam/IB_L/numerical_analysis.tex
@@ -0,0 +1,2554 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Lent}
+\def\nyear {2016}
+\def\nlecturer {G.\ Moore}
+\def\ncourse {Numerical Analysis}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Polynomial approximation}\\
+Interpolation by polynomials. Divided differences of functions and relations to derivatives. Orthogonal polynomials and their recurrence relations. Least squares approximation by polynomials. Gaussian quadrature formulae. Peano kernel theorem and applications.\hspace*{\fill} [6]
+
+\vspace{10pt}
+\noindent\textbf{Computation of ordinary differential equations}\\
+Euler's method and proof of convergence. Multistep methods, including order, the root condition and the concept of convergence. Runge-Kutta schemes. Stiff equations and A-stability.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Systems of equations and least squares calculations}\\
+LU triangular factorization of matrices. Relation to Gaussian elimination. Column pivoting. Factorizations of symmetric and band matrices. The Newton-Raphson method for systems of non-linear algebraic equations. QR factorization of rectangular matrices by Gram-Schmidt, Givens and Householder techniques. Application to linear least squares calculations.\hspace*{\fill} [5]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+Numerical analysis is the study of algorithms. There are many problems we would like algorithms to solve. In this course, we will tackle the problems of polynomial approximation, solving ODEs and solving linear equations. These are all problems that frequently arise when we do (applied) maths.
+
+In general, there are two things we are concerned with --- accuracy and speed. Accuracy is particularly important in the cases of polynomial approximation and solving ODEs, since we are trying to approximate things. We would like to make good approximations of the solution with relatively little work. On the other hand, in the case of solving linear equations, we are more concerned with speed --- our solutions will be exact (up to numerical errors due to finite precision of calculations), but we would like to solve it quickly. We might have to deal with huge systems, and we don't want the computation time to grow too quickly.
+
+In the past, this was an important subject, since they had no computers. The algorithms had to be implemented by hand. It was thus very important to find some practical and efficient methods of computing things, or else it would take them forever to calculate what they wanted. So we wanted quick algorithms that give reasonably accurate results.
+
+Nowadays, this is still an important subject. While we have computers that are much faster at computation, we still want our programs to be fast. We would also want to get really accurate results, since we might be using them to, say, send our rocket to the Moon. Moreover, with more computational power, we might sacrifice efficiency for some other desirable properties. For example, if we are solving for the trajectory of a particle, we might want the solution to satisfy the conservation of energy. This would require some much more complicated and slower algorithms that no one would have considered in the past. Nowadays, with computers, these algorithms become more feasible, and are becoming increasingly more popular.
+
+\section{Polynomial interpolation}
+Polynomials are nice. Writing down a polynomial of degree $n$ involves only $n + 1$ numbers. They are easy to evaluate, integrate and differentiate. So it would be nice if we can approximate things with polynomials. For simplicity, we will only deal with real polynomials.
+
+\begin{notation}
+ We write $P_n[x]$ for the real vector space of polynomials (with real coefficients) having degree $n$ or less.
+\end{notation}
+It is easy to show that $\dim (P_n[x]) = n + 1$.
+
+\subsection{The interpolation problem}
+The idea of polynomial interpolation is that we are given $n + 1$ distinct interpolation points $\{x_i\}_{i = 0}^n \subseteq \R$, and $n + 1$ data values $\{f_i\}_{i = 0}^n \subseteq \R$. The objective is to find a $p \in P_n[x]$ such that
+\[
+ p(x_i) = f_i\quad \text{for}\quad i = 0, \cdots, n.
+\]
+In other words, we want to fit a polynomial through the points $(x_i, f_i)$.
+
+There are many situations where this may come up. For example, we may be given $n + 1$ actual data points, and we want to fit a polynomial through the points. Alternatively, we might have a complicated function $f$, and want to approximate it with a polynomial $p$ such that $p$ and $f$ agree at least at those $n + 1$ points.
+
+The naive way of looking at this is that we try a polynomial
+\[
+ p(x) = a_n x^n + a_{n - 1}x^{n - 1} + \cdots + a_0,
+\]
+and then solve the system of equations
+\[
+ f_i = p(x_i) = a_n x_i^n + a_{n - 1}x_i^{n - 1} + \cdots + a_0.
+\]
+This is a perfectly respectable system of $n + 1$ equations in $n + 1$ unknowns. From linear algebra, we know that in general, such a system is not guaranteed to have a solution, and if the solution exists, it is not guaranteed to be unique.
+
+That was not helpful. So our first goal is to show that in the case of polynomial interpolation, the solution exists and is unique.
+
+\subsection{The Lagrange formula}
+It turns out the problem is not too hard. You can probably figure it out yourself if you lock yourself in a room for a few days (or hours). In fact, we will come up with \emph{two} solutions to the problem.
+
+The first is via the Lagrange cardinal polynomials. These are simple to state, and it is obvious that these work very well. However, practically, this is not the best way to solve the problem, and we will not talk about them much. Instead, we use this solution as a proof of existence of polynomial interpolations. We will then develop our second method on the assumption that polynomial interpolations exist, and find a better way of computing them.
+
+\begin{defi}[Lagrange cardinal polynomials]
+ The \emph{Lagrange cardinal polynomials} with respect to the interpolation points $\{x_i\}_{i = 0}^n$ are, for $k = 0, \cdots, n$,
+ \[
+ \ell_k (x) = \prod_{i = 0, i \not= k}^n \frac{x - x_i}{x_k - x_i}.
+ \]
+\end{defi}
+Note that these polynomials have degree exactly $n$. The significance of these polynomials is that we have $\ell_k(x_i) = 0$ for $i \not= k$, and $\ell_k(x_k) = 1$. In other words, we have
+\[
+ \ell_k(x_j) = \delta_{jk}.
+\]
+This is obvious from definition.
+
+With these cardinal polynomials, we can immediately write down a solution to the interpolation problem.
+\begin{thm}
+ The interpolation problem has exactly one solution.
+\end{thm}
+
+\begin{proof}
+ We define $p \in P_n[x]$ by
+ \[
+ p(x) = \sum_{k = 0}^n f_k \ell_k (x).
+ \]
+ Evaluating at $x_j$ gives
+ \[
+ p(x_j) = \sum_{k = 0}^n f_k \ell_k(x_j) = \sum_{k = 0}^n f_k \delta_{jk} = f_j.
+ \]
+ So we get existence.
+
+ For uniqueness, suppose $p, q \in P_n[x]$ are solutions. Then the difference $r = p - q \in P_n[x]$ satisfies $r(x_j) = 0$ for all $j$, i.e.\ it has $n + 1$ roots. However, a non-zero polynomial of degree $n$ can have at most $n$ roots. So in fact $p - q$ is zero, i.e.\ $p = q$.
+\end{proof}
+
+While this \emph{works}, it is not ideal. If we one day decide we should add one more interpolation point, we would have to recompute all the cardinal polynomials, and that is not fun. Ideally, we would like some way to reuse our previous computations when we have new interpolation points.
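To see the formula in action, here is a direct (if inefficient) Python implementation of interpolation via the cardinal polynomials (ours, not part of the notes); the sample data comes from a cubic, which the degree-$3$ interpolant must reproduce exactly by uniqueness:

```python
def lagrange_interpolate(xs, fs, x):
    # p(x) = sum_k f_k ell_k(x), with ell_k the Lagrange cardinal polynomials
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        ell = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                ell *= (x - xi) / (xk - xi)
        total += fk * ell
    return total

xs = [0.0, 1.0, 2.0, 3.0]
fs = [x**3 - 2*x + 1 for x in xs]

# p interpolates the data ...
assert all(abs(lagrange_interpolate(xs, fs, xi) - fi) < 1e-12
           for xi, fi in zip(xs, fs))
# ... and agrees with the cubic off the grid, by uniqueness
assert abs(lagrange_interpolate(xs, fs, 1.5) - (1.5**3 - 2*1.5 + 1)) < 1e-12
```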
+
+\subsection{The Newton formula}
+The idea of Newton's formula is as follows --- for $k = 0, \cdots, n$, we write $p_k \in P_k[x]$ for the polynomial that satisfies
+\[
+ p_k(x_i) = f_i\quad\text{for}\quad i = 0, \cdots, k.
+\]
+This is the unique polynomial of degree at most $k$ that satisfies the first $k + 1$ conditions, whose existence (and uniqueness) is guaranteed by the previous section. Then we can write
+\[
+ p(x) = p_n(x) = p_0(x) + (p_1(x) - p_0(x)) + \cdots + (p_n(x) - p_{n - 1}(x)).
+\]
+Hence we are done if we have an efficient way of finding the differences $p_k - p_{k - 1}$.
+
+We know that $p_k$ and $p_{k - 1}$ agree on $x_0, \cdots, x_{k - 1}$. So $p_k - p_{k - 1}$ evaluates to $0$ at those points, and we must have
+\[
+ p_k(x) - p_{k - 1}(x) = A_k \prod_{i = 0}^{k - 1}(x - x_i),
+\]
+for some $A_k$ yet to be found out. Then we can write
+\[
+ p(x) = p_n(x) = A_0 + \sum_{k = 1}^n A_k \prod_{i = 0}^{k - 1} (x - x_i).
+\]
+This formula has the advantage that it is built up gradually from the interpolation points one-by-one. If we stop the sum at any point, we have obtained the polynomial that interpolates the data for the first $k$ points (for some $k$). Conversely, if we have a new data point, we just need to add a new term, instead of re-computing everything.
+
+All that remains is to find the coefficients $A_k$. For $k = 0$, we know $A_0$ is the unique constant polynomial that interpolates the point at $x_0$, i.e.\ $A_0 = f_0$.
+
+For the others, we note that in the formula for $p_k - p_{k - 1}$, the number $A_k$ is the coefficient of $x^k$. But $p_{k - 1}$ has no degree $k$ term. So $A_k$ must be the leading coefficient of $p_k$. So we have reduced our problem to finding the leading coefficients of $p_k$.
+
+The solution to this is known as the \emph{Newton divided differences}. We first invent a new notation:
+\[
+ A_k = f[x_0, \cdots, x_k].
+\]
+Note that these coefficients depend only on the first $k$ interpolation points. Moreover, since the labelling of the points $x_0, \cdots, x_k$ is arbitrary, we don't have to start with $x_0$. In general, the coefficient
+\[
+ f[x_j, \cdots, x_k]
+\]
+is the leading coefficient of the unique $q \in P_{k - j}[x]$ such that $q(x_i) = f_i$ for $i = j, \cdots, k$.
+
+While we do not have an explicit formula for what these coefficients are, we can come up with a recurrence relation for these coefficients.
+
+\begin{thm}[Recurrence relation for Newton divided differences]
+ For $0 \leq j < k \leq n$, we have
+ \[
+ f[x_j, \cdots, x_k] = \frac{f[x_{j + 1}, \cdots, x_k] - f[x_j, \cdots, x_{k - 1}]}{x_k - x_j}.
+ \]
+\end{thm}
+
+\begin{proof}
+ The key to proving this is to relate the interpolating polynomials. Let $q_0, q_1 \in P_{k - j - 1}[x]$ and $q_2 \in P_{k - j}[x]$ satisfy
+ \begin{align*}
+ q_0(x_i) &= f_i & i &=j, \cdots, k - 1\\
+ q_1(x_i) &= f_i & i &=j + 1, \cdots, k\\
+ q_2(x_i) &= f_i & i &=j, \cdots, k
+ \end{align*}
+ We now claim that
+ \[
+ q_2(x) = \frac{x - x_j}{x_k - x_j} q_1(x) + \frac{x_k - x}{x_k - x_j} q_0(x).
+ \]
+ We can check directly that the expression on the right correctly interpolates the points $x_i$ for $i = j, \cdots, k$. By uniqueness, the two expressions agree. Since $f[x_j, \cdots, x_k]$, $f[x_{j + 1}, \cdots, x_k]$ and $f[x_j, \cdots, x_{k - 1}]$ are the leading coefficients of $q_2, q_1, q_0$ respectively, the result follows.
+\end{proof}
+Thus the famous Newton divided difference table can be constructed
+\begin{center}
+ \begin{tabular}{cccccc}
+ \toprule
+ $x_i$ & $f_i$ & $f[*, *]$ & $f[*, *, *]$ & $\cdots$ & $f[*, \cdots,*]$\\
+ \midrule
+ $x_0$ & $f[x_0]$\\
+ & & $f[x_0, x_1]$\\
+ $x_1$ & $f[x_1]$ & & $f[x_0, x_1, x_2]$ \\
+ & & $f[x_1, x_2]$ & & $\ddots$\\
+ $x_2$ & $f[x_2]$ & & $f[x_1, x_2, x_3]$ & $\cdots$ & $f[x_0, x_1, \cdots, x_n]$\\
+ & & $f[x_2, x_3]$ & & $\iddots$\\
+ $x_3$ & $f[x_3]$ & & $\iddots$\\
+ $\vdots$ & $\vdots$ & $\iddots$ &\\
+ $x_n$ & $f[x_n]$\\
+ \bottomrule
+ \end{tabular}
+\end{center} % beautify
+From the first $n$ columns, we can find the $(n + 1)$th column using the recurrence relation above. The values of $A_k$ can then be found at the top diagonal, and this is all we really need. However, to compute this diagonal, we will need to compute everything in the table.
+
+In practice, we often need not find the actual interpolating polynomial. If we just want to evaluate $p(\hat{x})$ at some new point $\hat{x}$ using the divided difference table, we can use \emph{Horner's scheme}, given by
+\begin{alltt}
+ S <- f[\(\mathtt{x\sb{0}}\),..., \(\mathtt{x\sb{n}}\)]
+ for k = n - 1,..., 0
+ S <- (\(\mathtt{\hat{x}}\) - \(\mathtt{x\sb{k}}\))S + f[\(\mathtt{x\sb{0}}\),..., \(\mathtt{x\sb{k}}\)]
+ end\end{alltt}
+This only takes $O(n)$ operations.
+
+If an extra data point $\{x_{n + 1}, f_{n + 1}\}$ is added, then we only have to compute an extra diagonal $f[x_k, \cdots, x_{n + 1}]$ for $k =n, \cdots, 0$ in the divided difference table to obtain the new coefficient, and the old results can be reused. This requires $O(n)$ operations. This is less straightforward for Lagrange's method.
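A sketch implementation of the table and of Horner's scheme (ours, not the lecturer's) might look as follows; the test data again comes from a monic cubic, whose top divided difference must be its leading coefficient $1$:

```python
def divided_differences(xs, fs):
    # top diagonal of the table: [f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]]
    n = len(xs)
    col, coeffs = list(fs), [fs[0]]
    for j in range(1, n):
        # col[i] becomes f[x_i, ..., x_{i+j}], via the recurrence relation
        col = [(col[i + 1] - col[i]) / (xs[i + j] - xs[i]) for i in range(n - j)]
        coeffs.append(col[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    # Horner's scheme: O(n) evaluation of the Newton form at x
    S = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        S = (x - xs[k]) * S + coeffs[k]
    return S

xs = [0.0, 1.0, 2.0, 3.0]
fs = [x**3 - 2*x + 1 for x in xs]
coeffs = divided_differences(xs, fs)

assert abs(coeffs[-1] - 1.0) < 1e-12          # A_3 = leading coefficient
assert all(abs(newton_eval(xs, coeffs, xi) - fi) < 1e-12
           for xi, fi in zip(xs, fs))
assert abs(newton_eval(xs, coeffs, 1.5) - (1.5**3 - 2*1.5 + 1)) < 1e-12
```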
+
+\subsection{A useful property of divided differences}
+In the next couple of sections, we are interested in the error of polynomial interpolation. Suppose the data points come from $f_i = f(x_i)$ for some complicated $f$ we want to approximate, and we interpolate $f$ at the $n + 1$ data points $\{x_i\}_{i = 0}^n$ with $p_n$. How does the error $e_n(x) = f(x) - p_n(x)$ depend on $n$ and the choice of interpolation points?
+
+At the interpolation point, the error is necessarily $0$, by definition of interpolation. However, this does not tell us anything about the error elsewhere.
+
+Our ultimate objective is to bound the error $e_n$ by suitable multiples of the derivatives of $f$. We will do this in two steps. We first relate the derivatives to the divided differences, and then relate the error to the divided differences.
+
+The first part is an easy result based on the following purely calculus lemma.
+\begin{lemma}
+ Let $g \in C^m[a, b]$, i.e.\ $g$ has a continuous $m$th derivative. Suppose $g$ is zero at $m + \ell$ distinct points. Then $g^{(m)}$ has at least $\ell$ distinct zeros in $[a, b]$.
+\end{lemma}
+
+\begin{proof}
+ This is a repeated application of Rolle's theorem. We know that between every two zeros of $g$, there is at least one zero of $g' \in C^{m - 1}[a, b]$. So by differentiating once, we have lost at most $1$ zero. So after differentiating $m$ times, $g^{(m)}$ has lost at most $m$ zeros. So it still has at least $\ell$ zeros.
+\end{proof}
+
+\begin{thm}
+ Let $\{x_i\}_{i = 0}^n \in [a, b]$ and $f \in C^n[a, b]$. Then there exists some $\xi \in (a, b)$ such that
+ \[
+ f[x_0, \cdots, x_n] = \frac{1}{n!} f^{(n)}(\xi).
+ \]
+\end{thm}
+
+\begin{proof}
+ Consider $e = f - p_n \in C^n[a, b]$. This has at least $n + 1$ distinct zeros in $[a, b]$. So by the lemma, $e^{(n)} = f^{(n)} - p_n^{(n)}$ must vanish at some $\xi \in (a, b)$. But then $p_n^{(n)} = n! f[x_0, \cdots, x_n]$ constantly. So the result follows.
+\end{proof}
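For instance (a check of ours, not from the notes): for $f(x) = e^x$ on $[0, 1]$ with $n = 3$, the theorem forces $f[x_0, x_1, x_2, x_3] = e^\xi/3!$ for some $\xi \in (0, 1)$, so the value must land strictly between $1/6$ and $e/6$:

```python
import math

def divided_difference(xs, fs):
    # f[x_0, ..., x_n] via the recurrence relation
    n, col = len(xs), list(fs)
    for j in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + j] - xs[i]) for i in range(n - j)]
    return col[0]

xs = [0.0, 0.3, 0.7, 1.0]
dd = divided_difference(xs, [math.exp(x) for x in xs])
assert 1/6 < dd < math.e / 6   # dd = exp(xi)/3! for some xi in (0, 1)
```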
+
+\subsection{Error bounds for polynomial interpolation}
+The actual error bound is not too difficult as well. It turns out the error $e = f - p_n$ is ``like the next term in the Newton's formula''. This vague sentence is made precise in the following theorem:
+\begin{thm}
+ Assume $\{x_i\}_{i = 0}^n \subseteq [a, b]$ and $f \in C[a, b]$. Let $\bar{x} \in [a, b]$ be a non-interpolation point. Then
+ \[
+ e_n(\bar{x}) = f[x_0, x_1, \cdots, x_n, \bar{x}] \omega(\bar{x}),
+ \]
+ where
+ \[
+ \omega(x) = \prod_{i = 0}^n (x - x_i).
+ \]
+\end{thm}
+Note that we forbid the case where $\bar{x}$ is an interpolation point, since it is not clear what the expression $f[x_0, x_1, \cdots, x_n, \bar{x}]$ means. However, if $\bar{x}$ is an interpolation point, then both $e_n(\bar x)$ and $\omega(\bar{x})$ are zero, so there isn't much to say.
+
+\begin{proof}
+ We think of $\bar{x} = x_{n + 1}$ as a new interpolation point so that
+ \[
+ p_{n + 1}(x) - p_n(x) = f[x_0, \cdots, x_n, \bar{x}] \omega(x)
+ \]
+ for all $x \in \R$. In particular, putting $x = \bar{x}$, we have $p_{n + 1}(\bar{x}) = f(\bar{x})$, and we get the result.
+\end{proof}
+
+Combining the two results, we find
+\begin{thm}
+ If in addition $f \in C^{n + 1}[a, b]$, then for each $x \in [a, b]$, we can find $\xi_x \in (a, b)$ such that
+ \[
+ e_n(x) = \frac{1}{(n + 1)!} f^{(n + 1)}(\xi_x) \omega(x)
+ \]
+\end{thm}
+
+\begin{proof}
+ The statement is trivial if $x$ is an interpolation point --- pick arbitrary $\xi_x$, and both sides are zero. Otherwise, this follows directly from the last two theorems.
+\end{proof}
+
+This is an exact result, which is not too useful, since there is no easy constructive way of finding what $\xi_x$ should be. Instead, we usually go for a bound. We introduce the max norm
+\[
+ \|g\|_{\infty} = \max_{t \in [a, b]} |g(t)|.
+\]
+This gives the more useful bound
+\begin{cor}
+ For all $x \in [a, b]$, we have
+ \[
+ |f(x) - p_n(x)| \leq \frac{1}{(n + 1)!} \|f^{(n + 1)}\|_{\infty} |\omega(x)|
+ \]
+\end{cor}
+
+Assuming our function $f$ is fixed, this error bound depends only on $\omega(x)$, which depends on our choice of interpolation points. So can we minimize $\omega(x)$ in some sense by picking some clever interpolation points $\Delta = \{x_i\}_{i = 0}^n$? Here we will have $n$ fixed. So instead, we put $\Delta$ as the subscript. We can write our bound as
+\[
+ \|f - p_{\Delta}\|_{\infty} \leq \frac{1}{(n + 1)!} \|f^{(n + 1)}\|_{\infty} \|\omega_{\Delta}\|_{\infty}.
+\]
+So the objective is to find a $\Delta$ that minimizes $\|\omega_{\Delta}\|_{\infty}$.
+
+For the moment, we focus on the special case where the interval is $[-1, 1]$. The general solution can be obtained by an easy change of variable.
+
+For some magical reasons that hopefully will become clear soon, the optimal choice of $\Delta$ comes from the \emph{Chebyshev polynomials}.
+\begin{defi}[Chebyshev polynomial]
+ The \emph{Chebyshev polynomial} of degree $n$ on $[-1, 1]$ is defined by
+ \[
+ T_n(x) = \cos(n \theta),
+ \]
+ where $x = \cos \theta$ with $\theta\in [0, \pi]$.
+\end{defi}
+So given an $x$, we find the unique $\theta$ that satisfies $x = \cos \theta$, and then find $\cos (n \theta)$. This is in fact a polynomial in disguise, since from trigonometric identities, we know $\cos (n\theta)$ can be expanded as a polynomial in $\cos \theta$ up to degree $n$.
+
+Two key properties of $T_n$ on $[-1, 1]$ are
+\begin{enumerate}
+ \item The maximum absolute value is obtained at
+ \[
+ X_k = \cos\left(\frac{\pi k}{n}\right)
+ \]
+ for $k = 0, \cdots, n$ with
+ \[
+ T_n(X_k) = (-1)^k.
+ \]
+ \item This has $n$ distinct zeros at
+ \[
+ x_k = \cos\left(\frac{2k - 1}{2n}\pi\right).
+ \]
+ for $k = 1, \cdots, n$.
+\end{enumerate}
+When plotted out, the polynomials look like this:
+\begin{center}
+ \begin{tikzpicture}[scale=2]
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$x$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$T_4(x)$};
+
+ \draw [mblue,domain=0:180, samples=40] plot [smooth] ({cos(\x)}, {cos(4*\x)});
+ \draw [dashed] (-1, -1) rectangle (1, 1);
+ \node at (-1, 0) [below] {$-1$};
+ \node at (1, 0) [below] {$1$};
+ \node at (0, 1) [anchor = south west] {$1$};
+ \node at (0, -1) [left] {$-1$};
+ \end{tikzpicture}
+\end{center}
+All that really matters about the Chebyshev polynomials is that the maximum is obtained at $n + 1$ distinct points with alternating sign. The exact form of the polynomial is not really important.
+
+Notice there is an intentional clash between the use of $x_k$ as the zeros and $x_k$ as the interpolation points --- we will show these are indeed the optimal interpolation points.
+
+We first prove a convenient recurrence relation for the Chebyshev polynomials:
+\begin{lemma}[3-term recurrence relation]
+ The Chebyshev polynomials satisfy the recurrence relations
+ \[
+ T_{n + 1}(x) = 2x T_n(x) - T_{n - 1}(x)
+ \]
+ with initial conditions
+ \[
+ T_0(x) = 1,\quad T_1(x) = x.
+ \]
+\end{lemma}
+
+\begin{proof}
+ \[
+ \cos((n + 1) \theta) + \cos((n - 1)\theta) = 2\cos \theta \cos(n\theta).\qedhere
+ \]
+\end{proof}
+This recurrence relation can be useful for many things, but for our purposes, we only use it to show that the leading coefficient of $T_n$ is $2^{n - 1}$ (for $n \geq 1$).
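As a quick sanity check (not part of the course), the recurrence and the leading-coefficient claim can be verified numerically. The following Python sketch represents a polynomial by its list of coefficients in increasing degree:

```python
import math

def cheb_coeffs(n):
    """Coefficients of T_n (lowest degree first), built from the
    recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)."""
    if n == 0:
        return [1.0]
    prev, cur = [1.0], [0.0, 1.0]              # T_0 = 1, T_1 = x
    for _ in range(n - 1):
        nxt = [0.0] + [2.0 * c for c in cur]   # 2x * T_n
        for i, c in enumerate(prev):           # ... minus T_{n-1}
            nxt[i] -= c
        prev, cur = cur, nxt
    return cur

def eval_poly(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

# T_4(x) = 8x^4 - 8x^2 + 1, with leading coefficient 2^3 = 8,
# and cheb_coeffs(n) agrees with cos(n * arccos x) on [-1, 1].
```

For $n \geq 1$, `cheb_coeffs(n)[-1]` comes out as $2^{n - 1}$, matching the claim above.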
+
+\begin{thm}[Minimal property for $n \geq 1$]
 On $[-1, 1]$, among all polynomials $p \in P_n[x]$ with leading coefficient $1$, the choice $p = \frac{1}{2^{n - 1}} T_n$ minimizes $\|p\|_{\infty}$. Thus, the minimum value is $\frac{1}{2^{n - 1}}$.
+\end{thm}
+
+\begin{proof}
+ We proceed by contradiction. Suppose there is a polynomial $q_n \in P_n$ with leading coefficient $1$ such that $\|q_n\|_{\infty} < \frac{1}{2^{n - 1}}$. Define a new polynomial
+ \[
+ r = \frac{1}{2^{n - 1}}T_n - q_n.
+ \]
+ This is, by assumption, non-zero.
+
 Since both polynomials have leading coefficient $1$, the difference must have degree at most $n - 1$, i.e.\ $r \in P_{n - 1}[x]$. Since $\frac{1}{2^{n - 1}}T_n(X_k) = \pm \frac{1}{2^{n - 1}}$, and $|q_n(X_k)| < \frac{1}{2^{n - 1}}$ by assumption, $r$ alternates in sign between these $n + 1$ points. But then by the intermediate value theorem, $r$ has to have at least $n$ zeros. This is a contradiction, since $r$ is a non-zero polynomial of degree at most $n - 1$, and so can have at most $n - 1$ zeros.
+\end{proof}
+
+\begin{cor}
+ Consider
+ \[
 \omega_\Delta = \prod_{i = 0}^n (x - x_i) \in P_{n + 1}[x]
+ \]
+ for any distinct points $\Delta = \{x_i\}_{i = 0}^n \subseteq [-1, 1]$. Then
+ \[
+ \min_{\Delta} \|\omega_{\Delta}\|_{\infty} = \frac{1}{2^n}.
+ \]
+ This minimum is achieved by picking the interpolation points to be the zeros of $T_{n + 1}$, namely
+ \[
+ x_k = \cos\left(\frac{2k + 1}{2n + 2} \pi\right), \quad k = 0, \cdots, n.
+ \]
+\end{cor}
+
+\begin{thm}
+ For $f \in C^{n + 1}[-1, 1]$, the Chebyshev choice of interpolation points gives
+ \[
+ \|f - p_n\|_{\infty} \leq \frac{1}{2^n} \frac{1}{(n + 1)!} \|f^{(n + 1)}\|_{\infty}.
+ \]
+\end{thm}
+Suppose $f$ has as many continuous derivatives as we want. Then as we increase $n$, what happens to the error bounds? The coefficients involve dividing by an exponential \emph{and} a factorial. Hence as long as the higher derivatives of $f$ don't blow up too badly, in general, the error will tend to zero as $n \to \infty$, which makes sense.
+
+The last two results can be easily generalized to arbitrary intervals $[a, b]$, and this is left as an exercise for the reader.
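We can see the advantage of the Chebyshev points numerically. The following hedged Python sketch (an illustration, with the grid resolution an arbitrary choice) estimates $\|\omega_\Delta\|_\infty$ on $[-1, 1]$ for the Chebyshev points versus equally-spaced points:

```python
import math

def omega_sup(nodes, samples=4001):
    """Approximate the sup of |omega(x)| = |prod (x - x_i)| over [-1, 1]
    by sampling on a fine uniform grid."""
    best = 0.0
    for j in range(samples):
        x = -1.0 + 2.0 * j / (samples - 1)
        prod = 1.0
        for xi in nodes:
            prod *= x - xi
        best = max(best, abs(prod))
    return best

n = 7   # interpolate at n + 1 = 8 points
cheb = [math.cos((2 * k + 1) * math.pi / (2 * n + 2)) for k in range(n + 1)]
equi = [-1.0 + 2.0 * k / n for k in range(n + 1)]
# the Chebyshev choice gives roughly 2^{-7}; equispaced points do worse
```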
+
+\section{Orthogonal polynomials}
It turns out the Chebyshev polynomials are just one example of a more general class of polynomials, known as \emph{orthogonal polynomials}. As in linear algebra, we can define a scalar product on the space of polynomials, and then find a basis of orthogonal polynomials of the vector space under this scalar product. We shall show that each set of orthogonal polynomials has a three-term recurrence relation, just like the Chebyshev polynomials.
+
+\subsection{Scalar product}
The scalar products we are interested in are generalizations of the usual scalar product on Euclidean space,
+\[
+ \bra \mathbf{x}, \mathbf{y}\ket = \sum_{i = 1}^n x_i y_i.
+\]
+We want to generalize this to vector spaces of functions and polynomials. We will not provide a formal definition of vector spaces and scalar products on an abstract vector space. Instead, we will just provide some examples of commonly used ones.
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $V = C^s[a, b]$, where $[a, b]$ is a finite interval and $s \geq 0$. Pick a weight function $w(x) \in C(a, b)$ such that $w(x) > 0$ for all $x \in (a, b)$, and $w$ is integrable over $[a, b]$. In particular, we allow $w$ to vanish at the end points, or blow up mildly such that it is still integrable.
+
+ We then define the inner product to be
+ \[
 \bra f, g\ket = \int_a^b w(x) f(x) g(x)\;\d x.
+ \]
+ \item We can allow $[a, b]$ to be infinite, e.g.\ $[0, \infty)$ or even $(-\infty, \infty)$, but we have to be more careful. We first define
+ \[
+ \bra f, g\ket = \int_a^b w(x) f(x) g(x) \;\d x
+ \]
 as before, but we now need more conditions. We require $\int_a^b w(x) x^n\;\d x$ to exist for all $n \geq 0$, since we want to allow polynomials in our vector space. For example, $w(x) = e^{-x}$ on $[0, \infty)$ works, as does $w(x) = e^{-x^2}$ on $(-\infty, \infty)$. These are scalar products for $P_n[x]$ for $n \geq 0$, but we cannot extend this definition to all smooth functions since they might blow up too fast at infinity. We will not go into the technical details, since we are only interested in polynomials, and knowing it works for polynomials suffices.
+ \item We can also have a discrete inner product, defined by
+ \[
+ \bra f, g\ket = \sum_{j = 1}^m w_j f(\xi_j) g(\xi_j)
+ \]
+ with $\{\xi_j\}_{j = 1}^m$ distinct points and $\{w_j\}_{j = 1}^m > 0$. Now we have to restrict ourselves a lot. This is a scalar product for $V = P_{m - 1}[x]$, but not for higher degrees, since a scalar product should satisfy $\bra f, f \ket > 0$ for $f \not= 0$. In particular, we cannot extend this to all smooth functions.
+ \end{enumerate}
+\end{eg}
+With an inner product, we can define orthogonality.
\begin{defi}[Orthogonality]
+ Given a vector space $V$ and an inner product $\bra \ph, \ph\ket$, two vectors $f, g \in V$ are \emph{orthogonal} if $\bra f, g\ket = 0$.
+\end{defi}
+
+\subsection{Orthogonal polynomials}
+\begin{defi}[Orthogonal polynomial]
+ Given a vector space $V$ of polynomials and inner product $\bra \ph, \ph\ket$, we say $p_n \in P_n[x]$ is the \emph{$n$th orthogonal polynomial} if
+ \[
+ \bra p_n, q\ket = 0\text{ for all }q \in P_{n - 1}[x].
+ \]
+ In particular, $\bra p_n, p_m\ket = 0$ for $n \not= m$.
+\end{defi}
+
We said \emph{the} orthogonal polynomial, so we should check that such a polynomial is unique. As stated, it is clearly not, since if $p_n$ satisfies these relations, then so does $\lambda p_n$ for all $\lambda \not= 0$. For uniqueness, we need to impose some scaling. We usually do so by requiring the leading coefficient to be $1$, i.e.\ requiring the polynomial to be monic.
+
+\begin{defi}[Monic polynomial]
+ A polynomial $p \in P_n[x]$ is \emph{monic} if the coefficient of $x^n$ is $1$.
+\end{defi}
+In practice, most famous traditional polynomials are not monic. They have a different scaling imposed. Still, as long as we have \emph{some} scaling, we will have uniqueness.
+
+We will not mess with other scalings, and stick to requiring them to be monic since this is useful for proving things.
+
+\begin{thm}
+ Given a vector space $V$ of functions and an inner product $\bra \ph, \ph \ket$, there exists a unique monic orthogonal polynomial for each degree $n \geq 0$. In addition, $\{p_k\}_{k = 0}^n$ form a basis for $P_n[x]$.
+\end{thm}
+
+\begin{proof}
+ This is a big induction proof over both parts of the theorem. We induct over $n$. For the base case, we pick $p_0(x) = 1$, which is the only degree-zero monic polynomial.
+
 Now suppose we already have $\{p_k\}_{k = 0}^n$ satisfying the induction hypothesis.
+
+ Now pick any monic $q_{n + 1} \in P_{n + 1}[x]$, e.g.\ $x^{n + 1}$. We now construct $p_{n + 1}$ from $q_{n + 1}$ by the Gram-Schmidt process. We define
+ \[
+ p_{n + 1} = q_{n + 1} - \sum_{k = 0}^n \frac{\bra q_{n + 1}, p_k\ket}{\bra p_k, p_k\ket} p_k.
+ \]
+ This is again monic since $q_{n + 1}$ is, and we have
+ \[
+ \bra p_{n + 1}, p_m \ket = 0
+ \]
+ for all $m \leq n$, and hence $\bra p_{n + 1}, p\ket = 0$ for all $p \in P_n[x] = \bra p_0, \cdots,p_n\ket$.
+
 To obtain uniqueness, suppose $p_{n + 1}, \hat{p}_{n + 1} \in P_{n + 1}[x]$ are both monic orthogonal polynomials. Then $r = p_{n + 1} - \hat{p}_{n + 1} \in P_n[x]$. So
+ \[
+ \bra r, r\ket = \bra r, p_{n + 1} - \hat{p}_{n + 1}\ket = \bra r, p_{n + 1}\ket - \bra r, \hat{p}_{n + 1}\ket = 0 - 0 = 0.
+ \]
 So $r = 0$. So $p_{n + 1} = \hat{p}_{n + 1}$.
+
+ Finally, we have to show that $p_0, \cdots, p_{n + 1}$ form a basis for $P_{n + 1}[x]$. Now note that every $p \in P_{n + 1}[x]$ can be written uniquely as
+ \[
+ p = cp_{n + 1} + q,
+ \]
+ where $q \in P_n[x]$. But $\{p_k\}_{k = 0}^n$ is a basis for $P_n[x]$. So $q$ can be uniquely decomposed as a linear combination of $p_0, \cdots, p_n$.
+
+ Alternatively, this follows from the fact that any set of orthogonal vectors must be linearly independent, and since there are $n + 2$ of these vectors and $P_{n + 1}[x]$ has dimension $n + 2$, they must be a basis.
+\end{proof}
+
+In practice, following the proof naively is not the best way of producing the new $p_{n + 1}$. Instead, we can reduce a lot of our work by making a clever choice of $q_{n + 1}$.
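To make the construction concrete, here is a hedged Python sketch of the Gram--Schmidt process above, using the discrete inner product from the earlier example (the points and unit weights are arbitrary choices for illustration, and polynomials are stored as coefficient lists in increasing degree):

```python
def eval_poly(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

def inner(p, q, pts, wts):
    """Discrete inner product <p, q> = sum_j w_j p(xi_j) q(xi_j)."""
    return sum(w * eval_poly(p, x) * eval_poly(q, x) for x, w in zip(pts, wts))

def monic_orthogonal(n, pts, wts):
    """Monic orthogonal polynomials p_0, ..., p_n, taking the monic
    candidate q_k = x^k in each Gram-Schmidt step."""
    polys = [[1.0]]
    for k in range(1, n + 1):
        q = [0.0] * k + [1.0]           # x^k, monic
        for p in polys:                 # subtract projections onto p_0..p_{k-1}
            c = inner(q, p, pts, wts) / inner(p, p, pts, wts)
            for i, a in enumerate(p):
                q[i] -= c * a
        polys.append(q)
    return polys

pts = [-0.9, -0.5, -0.1, 0.3, 0.7]      # m = 5 distinct points
wts = [1.0] * 5                          # a scalar product on P_4[x]
ps = monic_orthogonal(3, pts, wts)
```

Each `ps[k]` stays monic (only lower-degree coefficients are modified), and the pairwise inner products vanish up to rounding.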
+
+\subsection{Three-term recurrence relation}
+Recall that for the Chebyshev polynomials, we obtained a three-term recurrence relation for them using special properties of the cosine. It turns out these recurrence relations exist in general for orthogonal polynomials.
+
+We start by picking $q_{n + 1} = xp_n$ in the previous proof. We now use the fact that
+\[
+ \bra xf, g\ket = \bra f, xg\ket.
+\]
+This is not necessarily true for arbitrary inner products, but for most sensible inner products we will meet in this course, this is true. In particular, it is clearly true for inner products of the form
+\[
+ \bra f, g\ket = \int w(x) f(x) g(x)\;\d x.
+\]
+Assuming this, we obtain the following theorem.
+\begin{thm}
+ Monic orthogonal polynomials are generated by
+ \[
+ p_{k + 1}(x) = (x - \alpha_k)p_k(x) - \beta_k p_{k - 1}(x)
+ \]
+ with initial conditions
+ \[
+ p_0 = 1,\quad p_1(x) = (x - \alpha_0) p_0,
+ \]
+ where
+ \[
+ \alpha_k = \frac{\bra x p_k, p_k\ket}{\bra p_k, p_k\ket},\quad \beta_k = \frac{\bra p_k, p_k\ket}{\bra p_{k - 1}, p_{k - 1}\ket}.
+ \]
+\end{thm}
+
+\begin{proof}
+ By inspection, the $p_1$ given is monic and satisfies
+ \[
+ \bra p_1, p_0\ket = 0.
+ \]
+ Using $q_{n + 1} = x p_n$ in the Gram-Schmidt process gives
+ \begin{align*}
+ p_{n + 1} &= xp_n - \sum_{k = 0}^n \frac{\bra x p_n, p_k\ket}{\bra p_k, p_k\ket} p_k\\
+ p_{n + 1} &= xp_n - \sum_{k = 0}^n \frac{\bra p_n, x p_k\ket}{\bra p_k, p_k\ket} p_k\\
 \intertext{We notice that $\bra p_n, xp_k\ket$ vanishes whenever $x p_k$ has degree less than $n$. So we are left with}
+ &= xp_n - \frac{\bra x p_n, p_n\ket}{\bra p_n, p_n\ket}p_n - \frac{\bra p_n, xp_{n - 1}\ket}{\bra p_{n - 1}, p_{n - 1}\ket} p_{n - 1}\\
+ &= (x - \alpha_n) p_n - \frac{\bra p_n, xp_{n - 1}\ket}{\bra p_{n - 1}, p_{n - 1}\ket} p_{n - 1}.
+ \end{align*}
 Now we notice that $xp_{n - 1}$ is a monic polynomial of degree $n$, so we can write it as $x p_{n - 1} = p_n + q$ for some $q \in P_{n - 1}[x]$. Thus
+ \[
+ \bra p_n, xp_{n - 1}\ket = \bra p_n, p_n + q\ket = \bra p_n, p_n\ket.
+ \]
+ Hence the coefficient of $p_{n - 1}$ is indeed the $\beta$ we defined.
+\end{proof}
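We can check the theorem with the inner product $\bra f, g\ket = \int_{-1}^1 f(x)g(x)\;\d x$ (weight $w = 1$, i.e.\ the Legendre case), where inner products of polynomials can be computed exactly term by term. A hedged Python sketch, storing polynomials as coefficient lists in increasing degree:

```python
def polymul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def inner(p, q):
    """<p, q> = integral of p*q over [-1, 1], exact for polynomials:
    the integral of x^m is 2/(m+1) for even m, and 0 for odd m."""
    return sum(c * 2.0 / (m + 1) for m, c in enumerate(polymul(p, q)) if m % 2 == 0)

def monic_by_recurrence(n):
    """Generate p_0..p_n via p_{k+1} = (x - alpha_k) p_k - beta_k p_{k-1}."""
    polys = [[1.0]]
    p_prev, p = None, [1.0]
    for k in range(n):
        xp = [0.0] + p                              # x * p_k
        alpha = inner(xp, p) / inner(p, p)
        q = list(xp)
        for i, a in enumerate(p):
            q[i] -= alpha * a
        if p_prev is not None:
            beta = inner(p, p) / inner(p_prev, p_prev)
            for i, a in enumerate(p_prev):
                q[i] -= beta * a
        p_prev, p = p, q
        polys.append(p)
    return polys

ps = monic_by_recurrence(3)
# monic Legendre polynomials: 1, x, x^2 - 1/3, x^3 - (3/5) x
```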
+
+\subsection{Examples}
+The four famous examples are the Legendre polynomials, Chebyshev polynomials, Laguerre polynomials and Hermite polynomials. We first look at how the Chebyshev polynomials fit into this framework.
+
+Chebyshev is based on the scalar product defined by
+\[
+ \bra f, g\ket = \int_{-1}^1 \frac{1}{\sqrt{1 - x^2}} f(x) g(x)\;\d x.
+\]
+Note that the weight function blows up mildly at the end, but this is fine since it is still integrable.
+
+This links up with
+\[
+ T_n (x) = \cos(n\theta)
+\]
+for $x = \cos \theta$ via the usual trigonometric substitution. We have
+\begin{align*}
+ \bra T_n, T_m \ket &= \int_0^\pi \frac{1}{\sqrt{1 - \cos^2 \theta}} \cos(n\theta) \cos (m\theta) \sin \theta \;\d \theta\\
+ &= \int_0^\pi \cos(n\theta) \cos (m\theta)\;\d \theta\\
+ &= 0\text{ if }m\not= n.
+\end{align*}
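This orthogonality is easy to check numerically. A hedged sketch in Python, approximating the $\theta$-integral with composite Simpson's rule (the choice of $n = 3$, $m = 5$ and the number of subintervals are arbitrary):

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

# <T_3, T_5> reduces to the integral of cos(3 t) cos(5 t) over [0, pi]
off_diag = simpson(lambda t: math.cos(3 * t) * math.cos(5 * t), 0.0, math.pi)
# <T_3, T_3> reduces to the integral of cos(3 t)^2, which is pi/2
norm_sq = simpson(lambda t: math.cos(3 * t) ** 2, 0.0, math.pi)
```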
+The other orthogonal polynomials come from scalar products of the form
+\[
+ \bra f, g\ket = \int_a^b w(x) f(x) g(x)\;\d x,
+\]
+as described in the table below:
+\begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ Type & Range & Weight\\
+ \midrule
+ Legendre & $[-1, 1]$ & $1$\\
+ Chebyshev & $[-1, 1]$ & $\frac{1}{\sqrt{1 - x^2}}$\\
+ Laguerre & $[0, \infty)$ & $e^{-x}$\\
+ Hermite & $(-\infty, \infty)$ & $e^{-x^2}$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+\subsection{Least-squares polynomial approximation}
+If we want to approximate a function with a polynomial, polynomial interpolation might not be the best idea, since all we do is make sure the polynomial agrees with $f$ at certain points, and then hope it is a good approximation elsewhere. Instead, the idea is to choose a polynomial $p$ in $P_n[x]$ that ``minimizes the error''.
+
+What exactly do we mean by minimizing the error? The error is defined as the function $f - p$. So given an appropriate inner product on the vector space of continuous functions, we want to minimize
+\[
+ \|f - p\|^2 = \bra f - p, f - p\ket.
+\]
+This is usually of the form
+\[
+ \bra f - p, f - p\ket = \int_a^b w(x) [f(x) - p(x)]^2 \;\d x,
+\]
+but we can also use alternative inner products such as
+\[
 \bra f - p, f - p\ket = \sum_{j = 1}^m w_j [f(\xi_j) - p(\xi_j)]^2.
+\]
+Unlike polynomial interpolation, there is no guarantee that the approximation agrees with the function anywhere. Unlike polynomial interpolation, there is some guarantee that the total error is small (or as small as we can get, by definition). In particular, if $f$ is continuous, then the Weierstrass approximation theorem tells us the total error must eventually vanish.
+
+Unsurprisingly, the solution involves the use of the orthogonal polynomials with respect to the corresponding inner products.
+\begin{thm}
 If $\{p_k\}_{k = 0}^n$ are orthogonal polynomials with respect to $\bra \ph, \ph \ket$, then the choice of $c_k$ such that
+ \[
+ p = \sum_{k = 0}^n c_k p_k
+ \]
+ minimizes $\|f - p\|^2$ is given by
+ \[
+ c_k = \frac{\bra f, p_k\ket}{\|p_k\|^2},
+ \]
+ and the formula for the error is
+ \[
+ \|f - p\|^2 = \|f\|^2 - \sum_{k = 0}^n \frac{\bra f, p_k\ket^2}{\|p_k\|^2}.
+ \]
+\end{thm}
+Note that the solution decouples, in the sense that $c_k$ depends only on $f$ and $p_k$. If we want to take one more term, we just need to compute an extra term, and not redo our previous work.
+
Also, we notice that the formula for the error is a positive term $\|f\|^2$ subtracting a lot of squares. As we increase $n$, we subtract more squares, and the error decreases. If we are lucky, the error tends to $0$ as we take $n \to \infty$. Even though we might not know how many terms we need in order to get the error to be sufficiently small, we can just keep adding terms until the computed error is small enough (which is something we would have to do anyway even if we knew what $n$ to take).
+
+\begin{proof}
+ We consider a general polynomial
+ \[
+ p = \sum_{k = 0}^n c_k p_k.
+ \]
+ We substitute this in to obtain
+ \[
+ \bra f - p, f - p\ket = \bra f, f\ket - 2 \sum_{k = 0}^n c_k \bra f, p_k\ket + \sum_{k = 0}^n c_k^2 \|p_k\|^2.
+ \]
+ Note that there are no cross terms between the different coefficients. We minimize this quadratic by setting the partial derivatives to zero:
+ \[
+ 0 = \frac{\partial}{\partial c_k} \bra f - p, f - p\ket = -2 \bra f, p_k\ket + 2c_k \|p_k\|^2.
+ \]
 To check this is indeed a minimum, note that the Hessian matrix is simply $2I$, which is positive definite. So this is really a minimum. So we get the formula for the $c_k$'s as claimed, and substituting the formula for $c_k$ back in gives the error formula.
+\end{proof}
+
+Note that our constructed $p \in P_n[x]$ has a nice property: for $k \leq n$, we have
+\[
+ \bra f - p, p_k\ket = \bra f, p_k\ket - \bra p, p_k\ket = \bra f, p_k\ket - \frac{\bra f, p_k\ket}{\|p_k\|^2} \bra p_k, p_k\ket = 0.
+\]
+Thus for all $q \in P_n[x]$, we have
+\[
+ \bra f - p, q\ket = 0.
+\]
+In particular, this is true when $q = p$, and tells us $\bra f, p\ket = \bra p, p\ket$. Using this to expand $\bra f - p, f - p\ket$ gives
+\[
+ \|f - p\|^2 + \|p\|^2 = \|f\|^2,
+\]
+which is just a glorified Pythagoras theorem.
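As an illustration, here is a hedged Python sketch of least-squares fitting with the discrete inner product $\bra f, g\ket = \sum_j f(\xi_j)g(\xi_j)$ (unit weights, an arbitrary choice). Since this inner product only sees values at the points $\xi_j$, we can represent each polynomial by its vector of values there, and take $q_{k + 1} = x p_k$ as the candidate in each Gram--Schmidt step:

```python
import math

def lstsq_fit(fvals, pts, n):
    """Return the coefficients c_k, the orthogonal basis (as value vectors
    on pts), and the fitted values of p = sum c_k p_k."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    basis, v = [], [1.0] * len(pts)
    for k in range(n + 1):
        v = list(v)
        for b in basis:                        # Gram-Schmidt step
            c = dot(v, b) / dot(b, b)
            v = [a - c * bb for a, bb in zip(v, b)]
        basis.append(v)
        v = [x * a for x, a in zip(pts, v)]    # next candidate: x * p_k
    coeffs = [dot(fvals, b) / dot(b, b) for b in basis]
    fitted = [sum(c * b[j] for c, b in zip(coeffs, basis))
              for j in range(len(pts))]
    return coeffs, basis, fitted

pts = [-1.0 + 0.1 * j for j in range(21)]
fvals = [math.exp(x) for x in pts]
coeffs, basis, fitted = lstsq_fit(fvals, pts, 3)
```

The residual $f - p$ is orthogonal to every $p_k$, and the error formula $\|f - p\|^2 = \|f\|^2 - \sum_k c_k^2 \|p_k\|^2$ holds up to rounding.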
+\section{Approximation of linear functionals}
+\subsection{Linear functionals}
In this chapter, we are going to study approximations of linear functionals. Before we start, it is helpful to define what a linear functional is, and look at certain examples of these.
+
+\begin{defi}[Linear functional]
+ A \emph{linear functional} is a linear mapping $L: V \to \R$, where $V$ is a real vector space of functions.
+\end{defi}
In general, a linear functional is a linear mapping from a vector space to its underlying field of scalars, but for the purposes of this course, we will restrict to this special case.
+
+We usually don't put so much emphasis on the actual vector space $V$. Instead, we provide a formula for $L$, and take $V$ to be the vector space of functions for which the formula makes sense.
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item We can choose some fixed $\xi \in \R$, and define a linear functional by
+ \[
+ L(f) = f(\xi).
+ \]
+ \item Alternatively, for fixed $\eta \in \R$ we can define our functional by
+ \[
+ L(f) = f'(\eta).
+ \]
+ In this case, we need to pick a vector space in which this makes sense, e.g.\ the space of continuously differentiable functions.
+ \item We can define
+ \[
+ L(f) = \int_a^b f(x)\;\d x.
+ \]
+ The set of continuous (or even just integrable) functions defined on $[a, b]$ will be a sensible domain for this linear functional.
 \item Any linear combination of linear functionals is also a linear functional. For example, we can pick some fixed $\alpha, \beta \in \R$, and define
+ \[
+ L(f) = f(\beta) - f(\alpha) - \frac{\beta - \alpha}{2} [f'(\beta) + f'(\alpha)].
+ \]
+ \end{enumerate}
+\end{eg}
The objective of this chapter is to construct approximations to more complicated linear functionals (usually integrals, or possibly derivative point values) in terms of simpler linear functionals (usually point values of $f$ itself).
+
+For example, we might produce an approximation of the form
+\[
+ L(f) \approx \sum_{i = 0}^N a_i f(x_i),
+\]
+where $V = C^p[a, b]$, $p \geq 0$, and $\{x_i\}_{i = 0}^N \subseteq [a, b]$ are distinct points.
+
+How can we choose the coefficients $a_i$ and the points $x_i$ so that our approximation is ``good''?
+
We notice that most of our functionals can be easily evaluated exactly when $f$ is a polynomial. So we might approximate $f$ by a polynomial, and then evaluate the functional exactly on that polynomial.
+
+More precisely, we let $ \{x_i\}_{i = 0}^N \subseteq [a, b]$ be arbitrary points. Then using the Lagrange cardinal polynomials $\ell_i$, we have
+\[
+ f(x) \approx \sum_{i = 0}^N f(x_i) \ell_i(x).
+\]
+Then using linearity, we can approximate
+\[
+ L(f) \approx L\left(\sum_{i = 0}^N f(x_i) \ell_i(x)\right) = \sum_{i = 0}^N L(\ell_i) f(x_i).
+\]
+So we can pick
+\[
+ a_i = L(\ell_i).
+\]
Similar to polynomial interpolation, this formula is exact for $f \in P_N[x]$. But we could do better. If we can freely choose $\{a_i\}_{i = 0}^N$ \emph{and} $\{x_i\}_{i = 0}^N$, then since we now have $2N + 2$ free parameters, we might expect to find an approximation that is exact for $f \in P_{2N + 1}[x]$. This is not always possible, but there are cases when we can. The most famous example is Gaussian quadrature.
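For instance, with $L(f) = \int_{-1}^1 f(x)\;\d x$, the weights $a_i = L(\ell_i)$ can be computed exactly, since each $\ell_i$ is a polynomial. A hedged Python sketch (the nodes are an arbitrary choice for illustration):

```python
def polymul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integrate(p):
    """Exact integral of a polynomial (coefficient list) over [-1, 1]."""
    return sum(c * 2.0 / (i + 1) for i, c in enumerate(p) if i % 2 == 0)

def quad_weights(nodes):
    """a_i = L(ell_i), where ell_i are the Lagrange cardinal polynomials."""
    weights = []
    for i, xi in enumerate(nodes):
        ell = [1.0]
        for j, xj in enumerate(nodes):
            if j != i:   # multiply by (x - x_j) / (x_i - x_j)
                ell = polymul(ell, [-xj / (xi - xj), 1.0 / (xi - xj)])
        weights.append(integrate(ell))
    return weights

nodes = [-1.0, -0.3, 0.2, 1.0]
weights = quad_weights(nodes)
# sum a_i f(x_i) now reproduces the integral exactly for f in P_3[x]
```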
+
+\subsection{Gaussian quadrature}
+The objective of Gaussian quadrature is to approximate integrals of the form
+\[
+ L(f) = \int_a^b w(x) f(x)\;\d x,
+\]
+where $w(x)$ is a weight function that determines a scalar product.
+
+Traditionally, we have a different set of notations used for Gaussian quadrature. So just in this section, we will use some funny notation that is inconsistent with the rest of the course.
+
+We let
+\[
+ \bra f, g\ket = \int_a^b w(x) f(x)g(x)\;\d x
+\]
be a scalar product for $P_\nu[x]$. We will show that we can find \emph{weights}, written $\{b_k\}_{k = 1}^\nu$, and \emph{nodes}, written $\{c_k\}_{k = 1}^\nu \subseteq [a, b]$, such that the approximation
+\[
+ \int_a^b w(x) f(x)\;\d x \approx \sum_{k = 1}^{\nu} b_k f(c_k)
+\]
+is exact for $f \in P_{2\nu - 1}[x]$. The nodes $\{c_k\}_{k = 1}^{\nu}$ will turn out to be the zeros of the orthogonal polynomial $p_\nu$ with respect to the scalar product. The aim of this section is to work this thing out.
+
+We start by showing that this is the best we can achieve.
+\begin{prop}
+ There is no choice of $\nu$ weights and nodes such that the approximation of $\int_a^b w(x) f(x)\;\d x$ is exact for all $f \in P_{2\nu}[x]$.
+\end{prop}
+
+\begin{proof}
+ Define
+ \[
+ q(x) = \prod_{k = 1}^{\nu} (x - c_k) \in P_{\nu}[x].
+ \]
+ Then we know
+ \[
+ \int_a^b w(x) q^2(x)\;\d x > 0,
+ \]
+ since $q^2$ is always non-negative and has finitely many zeros. However,
+ \[
 \sum_{k = 1}^\nu b_k q^2(c_k) = 0.
+ \]
+ So this cannot be exact for $q^2$.
+\end{proof}
+
+Recall that we initially had the idea of doing the approximation by interpolating $f$ at some arbitrary points in $[a, b]$. We call this \emph{ordinary quadrature}.
+\begin{thm}[Ordinary quadrature]
+ For any distinct $\{c_k\}_{k = 1}^\nu \subseteq [a, b]$, let $\{\ell_k\}_{k = 1}^\nu$ be the Lagrange cardinal polynomials with respect to $\{c_k\}_{k = 1}^\nu$. Then by choosing
+ \[
+ b_k = \int_a^b w(x) \ell_k(x) \;\d x,
+ \]
+ the approximation
+ \[
+ L(f) = \int_a^b w(x) f(x)\;\d x \approx \sum_{k = 1}^\nu b_k f(c_k)
+ \]
+ is exact for $f \in P_{\nu - 1}[x]$.
+
+ We call this method ordinary quadrature.
+\end{thm}
This simple idea is how we generate many classical numerical integration techniques, such as the trapezoidal rule. But those are quite inaccurate. It turns out a clever choice of $\{c_k\}$ does much better --- take them to be the zeros of the orthogonal polynomials. However, to do this, we must make sure the roots indeed lie in $[a, b]$. This is what we will prove now --- given any inner product, the roots of the orthogonal polynomials must lie in $(a, b)$.
+
+\begin{thm}
+ For $\nu \geq 1$, the zeros of the orthogonal polynomial $p_\nu$ are real, distinct and lie in $(a, b)$.
+\end{thm}
+We have in fact proved this for a particular case in IB Methods, and the same argument applies.
+
+\begin{proof}
+ First we show there is at least one root. Notice that $p_0 = 1$. Thus for $\nu \geq 1$, by orthogonality, we know
+ \[
+ \int_a^b w(x) p_\nu(x) p_1(x)\;\d x = \int_a^b w(x) p_\nu(x)\;\d x = 0.
+ \]
+ So there is at least one sign change in $(a, b)$. We have already got the result we need for $\nu = 1$, since we only need one zero in $(a, b)$.
+
+ Now for $\nu > 1$, suppose $\{\xi_j\}_{j = 1}^m$ are the places where the sign of $p_\nu$ changes in $(a, b)$ (which is a subset of the roots of $p_\nu$). We define
+ \[
+ q(x) = \prod_{j = 1}^m (x - \xi_j) \in P_m[x].
+ \]
+ Since this changes sign at the same place as $p_\nu$, we know $q p_\nu$ maintains the same sign in $(a, b)$. Now if we had $m < \nu$, then orthogonality gives
+ \[
+ \bra q, p_\nu\ket = \int_a^b w(x)q(x) p_\nu(x)\;\d x = 0,
+ \]
 which is impossible, since $qp_\nu$ does not change sign. Hence we must have $m = \nu$. So $p_\nu$ has $\nu$ distinct zeros in $(a, b)$, and since $\deg p_\nu = \nu$, these are all of its zeros.
+\end{proof}
+
+\begin{thm}
 In the ordinary quadrature, if we pick $\{c_k\}_{k = 1}^\nu$ to be the roots of $p_\nu(x)$, then we get exactness for $f \in P_{2\nu - 1}[x]$. In addition, $\{b_k\}_{k = 1}^\nu$ are all positive.
+\end{thm}
+
+\begin{proof}
+ Let $f \in P_{2 \nu - 1}[x]$. Then by polynomial division, we get
+ \[
+ f = qp_\nu + r,
+ \]
+ where $q, r$ are polynomials of degree at most $\nu - 1$. We apply orthogonality to get
+ \[
+ \int_a^b w(x) f(x)\;\d x = \int_a^b w(x) (q(x) p_\nu(x) + r(x)) \;\d x= \int_a^b w(x) r(x)\;\d x.
+ \]
+ Also, since each $c_k$ is a root of $p_\nu$, we get
+ \[
+ \sum_{k = 1}^\nu b_k f(c_k) = \sum_{k = 1}^\nu b_k (q(c_k) p_\nu(c_k) + r(c_k)) = \sum_{k = 1}^\nu b_k r(c_k).
+ \]
+ But $r$ has degree at most $\nu - 1$, and this formula is exact for polynomials in $P_{\nu - 1}[x]$. Hence we know
+ \[
+ \int_a^b w(x) f(x)\;\d x = \int_a^b w(x) r(x)\;\d x = \sum_{k = 1}^\nu b_k r(c_k) = \sum_{k = 1}^\nu b_k f(c_k).
+ \]
 To show the weights are positive, we simply pick a special $f$. Consider $f \in \{\ell_k^2\}_{k = 1}^\nu \subseteq P_{2\nu - 2}[x]$, for $\ell_k$ the Lagrange cardinal polynomials for $\{c_k\}_{k = 1}^\nu$. Since the quadrature is exact for these, we get
+ \[
+ 0 < \int_a^b w(x)\ell_k^2(x)\;\d x = \sum_{j = 1}^\nu b_j \ell_k^2(c_j) = \sum_{j = 1}^\nu b_j \delta_{jk} = b_k.
+ \]
+ Since this is true for all $k = 1, \cdots, \nu$, we get the desired result.
+\end{proof}
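Putting the pieces together for the Legendre weight $w = 1$ on $[-1, 1]$ with $\nu = 3$: the nodes are the zeros of the (standard) monic orthogonal polynomial $p_3(x) = x^3 - \frac{3}{5}x$, i.e.\ $0$ and $\pm\sqrt{3/5}$, and the weights are $b_k = \int_{-1}^1 \ell_k(x)\;\d x$. A hedged Python sketch checking exactness on $P_5[x]$:

```python
import math

def polymul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integrate(p):
    """Exact integral of a polynomial (coefficient list) over [-1, 1]."""
    return sum(c * 2.0 / (i + 1) for i, c in enumerate(p) if i % 2 == 0)

nodes = [-math.sqrt(0.6), 0.0, math.sqrt(0.6)]   # zeros of x^3 - 3x/5
weights = []
for i, xi in enumerate(nodes):
    ell = [1.0]                 # build the cardinal polynomial ell_i
    for j, xj in enumerate(nodes):
        if j != i:
            ell = polymul(ell, [-xj / (xi - xj), 1.0 / (xi - xj)])
    weights.append(integrate(ell))
# weights come out as 5/9, 8/9, 5/9, all positive
```

The resulting rule integrates every monomial up to degree $5$ exactly, but not $x^6$, in line with the theorems above.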
+
+\section{Expressing errors in terms of derivatives}
+As before, we approximate a linear functional $L$ by
+\[
+ L(f) \approx \sum_{i = 0}^n a_i L_i(f),
+\]
+where $L_i$ are some simpler linear functionals, and suppose this is exact for $f \in P_k[x]$ for some $k \geq 0$.
+
+Hence we know the error
+\[
+ e_L(f) = L(f) - \sum_{i = 0}^n a_i L_i(f) = 0
+\]
whenever $f \in P_k[x]$. We say the error $e_L$ \emph{annihilates} polynomials of degree $k$ or less.
+
+How can we use this property to generate formulae for the error and error bounds? We first start with a rather simple example.
+\begin{eg}
+ Let $L(f) = f(\beta)$. We decide to be silly and approximate $L(f)$ by
+ \[
+ L(f) \approx f(\alpha) + \frac{\beta - \alpha}{2} (f'(\beta) + f'(\alpha)),
+ \]
+ where $\alpha \not= \beta$. This is clearly much easier to evaluate. The error is given by
+ \[
+ e_L(f) = f(\beta) - f(\alpha) - \frac{\beta - \alpha}{2} (f'(\beta) + f'(\alpha)),
+ \]
+ and this vanishes for $f \in P_2[x]$.
+\end{eg}
How can we get a more useful error formula? We can't just use the fact that it annihilates polynomials of degree $k$. We need to introduce something beyond this --- the $(k + 1)$th derivative. We now assume $f \in C^{k + 1}[a, b]$.
+
+Note that so far, everything we've done works if the interval is infinite, as long as the weight function vanishes sufficiently quickly as we go far away. However, for this little bit, we will need to require $[a, b]$ to be finite, since we want to make sure we can take the supremum of our functions.
+
+We now seek an exact error formula in terms of $f^{(k + 1)}$, and bounds of the form
+\[
+ |e_L(f)| \leq c_L \|f^{(k + 1)}\|_{\infty}
+\]
+for some constant $c_L$. Moreover, we want to make $c_L$ as small as possible. We don't want to give a constant of $10$ million, while in reality we can just use $2$.
+
+\begin{defi}[Sharp error bound]
+ The constant $c_L$ is said to be \emph{sharp} if for any $\varepsilon > 0$, there is some $f_{\varepsilon} \in C^{k + 1}[a, b]$ such that
+ \[
 |e_L(f_\varepsilon)| \geq (c_L - \varepsilon)\|f^{(k + 1)}_\varepsilon\|_{\infty}.
+ \]
+\end{defi}
+
This makes precise what we mean by saying $c_L$ ``cannot be improved''. It doesn't say anything about whether $c_L$ can actually be achieved; that depends on the particular form of the question.
+
+To proceed, we need Taylor's theorem with the integral remainder, i.e.
+\[
+ f(x) = f(a) + (x - a) f'(a) + \cdots + \frac{(x - a)^k}{k!} f^{(k)}(a) + \frac{1}{k!}\int_a^x (x - \theta)^k f^{(k + 1)}(\theta)\;\d \theta.
+\]
+This is not really good, since there is an $x$ in the upper limit of the integral. Instead, we write the integral as
+\[
+ \int_a^b (x - \theta)^k_{+} f^{(k + 1)}(\theta)\;\d \theta,
+\]
where $(x - \theta)^k_+$ is the function defined by
+\[
+ (x - \theta)^k_+ =
+ \begin{cases}
+ (x - \theta)^k &x \geq \theta\\
+ 0 & x < \theta.
+ \end{cases}
+\]
+Then if $\lambda$ is a linear functional that annihilates $P_k[x]$, then we have
+\[
+ \lambda(f) = \lambda\left(\frac{1}{k!}\int_a^b (x - \theta)^k_+ f^{(k + 1)}(\theta)\;\d \theta\right)
+\]
+for all $f \in C^{k + 1}[a, b]$.
+
+For \emph{our} linear functionals, we can simplify by taking the $\lambda$ inside the integral sign and obtain
+\[
+ \lambda(f) = \frac{1}{k!} \int_a^b \lambda((x - \theta)_+^k) f^{(k + 1)}(\theta)\;\d \theta,
+\]
noting that $\lambda$ acts on $(x - \theta)_{+}^k \in C^{k - 1}[a, b]$ as a function of $x$, with $\theta$ held constant.
+
+Of course, pure mathematicians will come up with linear functionals for which we cannot move the $\lambda$ inside, but for our linear functionals (point values, derivative point values, integrals etc.), this is valid, as we can verify directly.
+
+Hence we arrive at
+\begin{thm}[Peano kernel theorem]
+ If $\lambda$ annihilates polynomials of degree $k$ or less, then
+ \[
+ \lambda(f) = \frac{1}{k!} \int_a^b K(\theta) f^{(k + 1)}(\theta) \;\d \theta
+ \]
+ for all $f \in C^{k + 1}[a, b]$, where
+\end{thm}
+
+\begin{defi}[Peano kernel]
+ The \emph{Peano kernel} is
+ \[
+ K(\theta) = \lambda ((x - \theta)_+^k).
+ \]
+\end{defi}
+The important thing is that the kernel $K$ is independent of $f$. Taking suprema in different ways, we obtain different forms of bounds:
+\[
+ |\lambda(f)| \leq \frac{1}{k!}
+ \begin{dcases}
+ \int_a^b |K(\theta)|\;\d \theta \|f^{(k + 1)}\|_\infty\\
+ \left(\int_a^b |K(\theta)|^2 \;\d \theta\right)^{\frac{1}{2}}\|f^{(k + 1)}\|_2\\
+ \|K(\theta)\|_\infty \|f^{(k + 1)}\|_1
+ \end{dcases}.
+\]
+Hence we can find the constant $c_L$ for different choices of the norm. When computing $c_L$, don't forget the factor of $\frac{1}{k!}$!
+
+By fiddling with functions a bit, we can show these bounds are indeed sharp.
+
+\begin{eg}
+ Consider our previous example where
+ \[
+ e_L(f) = f(\beta) - f(\alpha) - \frac{\beta - \alpha}{2} (f'(\beta) + f'(\alpha)),
+ \]
+ with exactness up to polynomials of degree $2$. We wlog assume $\alpha < \beta$. Then
+ \[
+ K(\theta) = e_L((x - \theta)_+^2) = (\beta - \theta)_+^2 - (\alpha - \theta)_+^2 - (\beta - \alpha)((\beta - \theta)_+ + (\alpha - \theta)_+).
+ \]
+ Hence we get
+ \[
+ K(\theta) =
+ \begin{cases}
+ 0 & a \leq \theta \leq \alpha\\
+ (\alpha - \theta)(\beta - \theta) & \alpha \leq \theta \leq \beta\\
+ 0 & \beta \leq \theta \leq b.
+ \end{cases}
+ \]
+ Hence we know
+ \[
+ e_L(f) = \frac{1}{2}\int_\alpha^\beta (\alpha - \theta)(\beta - \theta) f'''(\theta) \;\d \theta
+ \]
+ for all $f \in C^3[a, b]$.
+\end{eg}
+
+Note that in this particular case, our function $K(\theta)$ does not change sign on $[a, b]$. Under this extra assumption, we can say a bit more.
+
+First, we note that the bound
+\[
+ |\lambda(f)| \leq \left|\frac{1}{k!} \int_a^b K(\theta) \;\d \theta\right| \|f^{(k + 1)}\|_\infty
+\]
can be achieved by $x^{k + 1}$, since this has constant $(k + 1)$th derivative. Also, we can use the integral mean value theorem to get the formula
+\[
+ \lambda (f) = \frac{1}{k!} \left(\int_a^b K(\theta)\;\d \theta\right) f^{(k + 1)}(\xi),
+\]
+where $\xi \in (a, b)$ depends on $f$. These are occasionally useful.
+
+\begin{eg}
+ Continuing our previous example, we see that $K(\theta) \leq 0$ on $[a, b]$, and
+ \[
+ \int_a^b K(\theta)\;\d \theta = -\frac{1}{6}(\beta - \alpha)^3.
+ \]
+ Hence we have the bound
+ \[
+ |e_L(f)| \leq \frac{1}{12}(\beta - \alpha)^3 \|f'''\|_{\infty},
+ \]
+ and this bound is achieved for $x^3$. We also have
+ \[
+ e_L(f) = -\frac{1}{12}(\beta - \alpha)^3 f'''(\xi)
+ \]
 for some $\xi \in (a, b)$ depending on $f$.
+\end{eg}
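These formulae are easy to check numerically. A hedged Python sketch (an illustration only) with $f = \sin$, $\alpha = 0$, $\beta = 1$, comparing $e_L(f)$ against the kernel integral $\frac{1}{2}\int_\alpha^\beta (\alpha - \theta)(\beta - \theta) f'''(\theta)\;\d\theta$ and the bound above:

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

alpha, beta = 0.0, 1.0

def e_L(f, df):
    """Error functional e_L(f) = f(beta) - f(alpha)
    - (beta - alpha)/2 * (f'(beta) + f'(alpha))."""
    return f(beta) - f(alpha) - (beta - alpha) / 2 * (df(beta) + df(alpha))

# direct error for f = sin
direct = e_L(math.sin, math.cos)
# via the Peano kernel, with f''' = -cos
kernel = 0.5 * simpson(lambda t: (alpha - t) * (beta - t) * (-math.cos(t)),
                       alpha, beta)
# the bound (beta - alpha)^3 / 12 * ||f'''||_inf; here ||f'''||_inf = 1
bound = (beta - alpha) ** 3 / 12.0
```

The two values of the error agree, the bound holds for $\sin$, and $f = x^3$ (with $f''' \equiv 6$) attains the bound exactly.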
+
+Finally, note that Peano's kernel theorem says if $e_L(f) = 0$ for all $f \in P_k[x]$, then we have
+\[
+ e_L(f) = \frac{1}{k!} \int_a^b K(\theta) f^{(k + 1)}(\theta)\;\d \theta
+\]
+for all $f \in C^{k + 1}[a, b]$.
+
+But for any other fixed $j = 0, \cdots, k - 1$, we also have $e_L(f) = 0$ for all $f \in P_j[x]$. So we also know
+\[
+ e_L(f) = \frac{1}{j!} \int_a^b K_j (\theta) f^{(j + 1)}(\theta)\;\d \theta
+\]
+for all $f \in C^{j + 1}[a, b]$. Note that we have a different kernel.
+
In general, this might not be a good idea, since we are throwing information away. Yet, this can be helpful if we are dealing with less smooth functions that do not have $k + 1$ derivatives.
+
+\section{Ordinary differential equations}
+\subsection{Introduction}
+Our next big goal is to solve ordinary differential equations numerically. We will focus on differential equations of the form
+\[
+ \mathbf{y}'(t) = \mathbf{f}(t, \mathbf{y}(t))
+\]
+for $0 \leq t \leq T$, with initial conditions
+\[
+ \mathbf{y}(0) = \mathbf{y}_0.
+\]
The data we are provided is the function $\mathbf{f}: \R \times \R^N \to \R^N$, the ending time $T > 0$, and the initial condition $\mathbf{y}_0 \in \R^N$. What we seek is the function $\mathbf{y}: [0, T] \to \R^N$.
+
+When solving the differential equation numerically, our goal would be to make our numerical solution as close to the true solution as possible. This makes sense only if a ``true'' solution actually exists, and is unique. From IB Analysis II, we know a unique solution to the ODE exists if $\mathbf{f}$ is \emph{Lipschitz}.
+
+\begin{defi}[Lipschitz function]
+ A function $\mathbf{f}: \R \times \R^N \to \R^N$ is \emph{Lipschitz with Lipschitz constant $\lambda \geq 0$} if
+ \[
+ \|\mathbf{f}(t, \mathbf{x}) - \mathbf{f}(t, \hat{\mathbf{x}})\| \leq \lambda \|\mathbf{x} - \hat{\mathbf{x}}\|
+ \]
+ for all $t \in [0, T]$ and $\mathbf{x}, \hat{\mathbf{x}} \in \R^N$.
+
+ A function is \emph{Lipschitz} if it is Lipschitz with Lipschitz constant $\lambda$ for some $\lambda$.
+\end{defi}
+It doesn't really matter what norm we pick. It will just change the $\lambda$. The importance is the existence of a $\lambda$.
+
+A special case is when $\lambda = 0$, i.e.\ $\mathbf{f}$ does not depend on $\mathbf{x}$. In this case, this is just an integration problem, and is usually easy. This is a convenient test case --- if our numerical approximation does not even work for these easy problems, then it's pretty useless.
+
+Being Lipschitz is sufficient for existence and uniqueness of a solution to the differential equation, and hence we can ask if our solution converges to this unique solution. An extra assumption we will often make is that $\mathbf{f}$ can be expanded in a Taylor series to as many degrees as we want, since this is convenient for our analysis.
+
+What exactly does a numerical solution to the ODE consist of? We first choose a small time step $h > 0$, and then construct approximations
+\[
+ \mathbf{y}_n \approx \mathbf{y}(t_n),\quad n = 1, 2, \cdots,
+\]
with $t_n = nh$. In particular, $t_n - t_{n - 1} = h$ is always constant. In practice, the step size $t_n - t_{n - 1}$ is usually not fixed, but allowed to vary from step to step. However, this makes the analysis much more complicated, and we will not consider varying time steps in this course.
+
+If we make $h$ smaller, then we will (probably) make better approximations. However, this is more computationally demanding. So we want to study the behaviour of numerical methods in order to figure out what $h$ we should pick.
+
+\subsection{One-step methods}
+There are many ways we can classify numerical methods. One important classification is one-step versus multi-step methods. In one-step methods, the value of $\mathbf{y}_{n + 1}$ depends only on the previous iteration $t_n$ and $\mathbf{y}_n$. In multi-step methods, we are allowed to look back further in time and use further results.
+
+\begin{defi}[(Explicit) one-step method]
+ A numerical method is \emph{(explicit) one-step} if $\mathbf{y}_{n + 1}$ depends only on $t_n$ and $\mathbf{y}_n$, i.e.
+ \[
+ \mathbf{y}_{n + 1} = \boldsymbol\phi_h(t_n, \mathbf{y}_n)
+ \]
+ for some function $\boldsymbol\phi_h: \R \times \R^N \to \R^N$.
+\end{defi}
+We will later see what ``explicit'' means.
+
+The simplest one-step method one can imagine is \emph{Euler's method}.
+\begin{defi}[Euler's method]
+ \emph{Euler's method} uses the formula
+ \[
+ \mathbf{y}_{n + 1} = \mathbf{y}_n + h\mathbf{f}(t_n, \mathbf{y}_n).
+ \]
+\end{defi}
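A minimal sketch of Euler's method in Python (the function name and the test problem $y' = -y$ are our own choices for illustration):

```python
import math

def euler(f, y0, T, h):
    """Euler's method: y_{n+1} = y_n + h f(t_n, y_n), returning y at t = T."""
    y = y0
    n = round(T / h)
    for k in range(n):
        y += h * f(k * h, y)
    return y

# Test problem y' = -y, y(0) = 1, with exact solution y(t) = exp(-t).
approx = euler(lambda t, y: -y, 1.0, 1.0, 1e-3)
print(abs(approx - math.exp(-1)))   # small error, of order h
```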
+We want to show that this method ``converges''. First of all, we need to make precise the notion of ``convergence''. The Lipschitz condition means there is a unique solution to the differential equation. So we would want the numerical solution to be able to approximate the actual solution to arbitrary accuracy as long as we take a small enough $h$.
+
+\begin{defi}[Convergence of numerical method]
+ For each $h > 0$, we can produce a sequence of discrete values $\mathbf{y}_n$ for $n = 0, \cdots, [T/h]$, where $[T/h]$ is the integer part of $T/h$. A method \emph{converges} if, as $h \to 0$ and $nh \to t$ (hence $n \to \infty$), we get
+ \[
+ \mathbf{y}_n \to \mathbf{y}(t),
+ \]
+ where $\mathbf{y}$ is the true solution to the differential equation. Moreover, we require the convergence to be uniform in $t$.
+\end{defi}
+
+We now prove that Euler's method converges. We will only do this properly for Euler's method, since the algebra quickly becomes tedious and incomprehensible. However, the proof strategy is sufficiently general that it can be adapted to most other methods.
+
+\begin{thm}[Convergence of Euler's method]\leavevmode
+ \begin{enumerate}
+ \item For all $t \in [0, T]$, we have
+ \[
+ \lim_{\substack{\mathllap{h} \to \mathrlap{0}\\\mathllap{nh}\to\mathrlap{t}}}\; \mathbf{y}_n - \mathbf{y}(t) = 0.
+ \]
  \item Let $\lambda$ be the Lipschitz constant of $\mathbf{f}$. Then there exists a $c \geq 0$ such that
+ \[
+ \|\mathbf{e}_n\| \leq ch \frac{e^{\lambda T} - 1}{\lambda}
+ \]
+ for all $0 \leq n \leq [T/h]$, where $\mathbf{e}_n = \mathbf{y}_n - \mathbf{y}(t_n)$.
+ \end{enumerate}
+\end{thm}
+Note that the bound in the second part is uniform. So this immediately gives the first part of the theorem.
+
+\begin{proof}
+ There are two parts to proving this. We first look at the \emph{local truncation error}. This is the error we would get at each step assuming we got the previous steps right. More precisely, we write
+ \[
  \mathbf{y}(t_{n + 1}) = \mathbf{y}(t_n) + h \mathbf{f}(t_n, \mathbf{y}(t_n)) + \mathbf{R}_n,
+ \]
 and $\mathbf{R}_n$ is the local truncation error. For Euler's method, it is easy to get $\mathbf{R}_n$, since $\mathbf{f}(t_n, \mathbf{y}(t_n)) = \mathbf{y}'(t_n)$ by definition. So this is just the Taylor series expansion of $\mathbf{y}$. We can write $\mathbf{R}_n$ as the integral remainder of the Taylor series,
+ \[
  \mathbf{R}_n = \int_{t_n}^{t_{n + 1}} (t_{n + 1} - \theta) \mathbf{y}''(\theta)\;\d \theta.
+ \]
+ By some careful analysis, we get
+ \[
+ \|\mathbf{R}_n\|_\infty \leq ch^2,
+ \]
+ where
+ \[
  c = \frac{1}{2} \|\mathbf{y}''\|_{\infty}.
+ \]
+ This is the easy part, and tends to go rather smoothly even for more complicated methods.
+
+ Once we have bounded the local truncation error, we patch them together to get the actual error. We can write
+ \begin{align*}
+ \mathbf{e}_{n + 1} &= \mathbf{y}_{n + 1} - \mathbf{y}(t_{n + 1})\\
+ &= \mathbf{y}_n + h \mathbf{f}(t_n, \mathbf{y}_n) - \Big(\mathbf{y}(t_n) + h \mathbf{f}(t_n, \mathbf{y}(t_n)) + \mathbf{R}_n\Big)\\
+ &= (\mathbf{y}_n - \mathbf{y}(t_n)) + h \Big(\mathbf{f}(t_n, \mathbf{y}_n) - \mathbf{f}(t_n, \mathbf{y}(t_n))\Big) - \mathbf{R}_n
+ \intertext{Taking the infinity norm, we get}
+ \|\mathbf{e}_{n + 1}\|_{\infty} &\leq \|\mathbf{y}_n - \mathbf{y}(t_n)\|_\infty + h \|\mathbf{f}(t_n, \mathbf{y}_n) - \mathbf{f}(t_n, \mathbf{y}(t_n))\|_{\infty} + \|\mathbf{R}_n\|_\infty\\
+ &\leq \|\mathbf{e}_n\|_\infty + h \lambda \|\mathbf{e}_n\|_{\infty} + ch^2\\
+ &= (1 + \lambda h)\|\mathbf{e}_n\|_\infty + ch^2.
+ \end{align*}
+ This is valid for all $n \geq 0$. We also know $\|\mathbf{e}_0\| = 0$. Doing some algebra, we get
+ \[
+ \|\mathbf{e}_n\|_{\infty} \leq ch^2 \sum_{j = 0}^{n - 1} (1 + h \lambda)^j \leq \frac{ch}{\lambda }\left((1 + h \lambda)^n - 1\right).
+ \]
+ Finally, we have
+ \[
+ (1 + h\lambda) \leq e^{\lambda h},
+ \]
+ since $1 + \lambda h$ is the first two terms of the Taylor series, and the other terms are positive. So
+ \[
+ (1 + h\lambda)^n \leq e^{\lambda h n} \leq e^{\lambda T}.
+ \]
+ So we obtain the bound
+ \[
+ \|\mathbf{e}_n\|_\infty \leq ch \frac{e^{\lambda T} - 1}{\lambda}.
+ \]
+ Then this tends to $0$ as we take $h \to 0$. So the method converges.
+\end{proof}
+This works as long as $\lambda \not= 0$. However, $\lambda = 0$ is the easy case, since it is just integration. We can either check this case directly, or use the fact that $\frac{e^{\lambda T} - 1}{\lambda} \to T$ as $\lambda \to 0$.
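To illustrate the theorem, a small Python experiment (our own setup: the test problem $y' = -y$ on $[0, 1]$, for which the Lipschitz constant is $\lambda = 1$ and $c = \frac{1}{2}\|\mathbf{y}''\|_\infty = \frac{1}{2}$) checks that the global error sits below the bound and scales linearly with $h$:

```python
import math

def euler_error(h):
    """Global error of Euler's method for y' = -y, y(0) = 1, at t = 1."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * (-y)
    return abs(y - math.exp(-1.0))

for h in [0.1, 0.05, 0.025]:
    bound = 0.5 * h * (math.e - 1)      # c h (e^{lambda T} - 1) / lambda
    # the bound holds, and the ratio error/h stays roughly constant (order 1)
    print(euler_error(h) <= bound, euler_error(h) / h)
```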
+
+The same proof strategy works for most numerical methods, but the algebra will be much messier. We will not do those in full. We, however, take note of some useful terminology:
+\begin{defi}[Local truncation error]
+ For a general (multi-step) numerical method
+ \[
+ \mathbf{y}_{n + 1} = \boldsymbol\phi (t_n, \mathbf{y}_0, \mathbf{y}_1, \cdots,\mathbf{y}_n),
+ \]
+ the \emph{local truncation error} is
+ \[
+ \boldsymbol\eta_{n + 1} = \mathbf{y}(t_{n + 1}) - \boldsymbol\phi_n(t_n, \mathbf{y}(t_0), \mathbf{y}(t_1), \cdots, \mathbf{y}(t_n)).
+ \]
+\end{defi}
+This is the error we will make at the $(n + 1)$th step if we had accurate values for the first $n$ steps.
+
+For Euler's method, the local truncation error is just the Taylor series remainder term.
+
+\begin{defi}[Order]
+ The order of a numerical method is the largest $p \geq 1$ such that $\boldsymbol\eta_{n + 1} = O(h^{p + 1})$.
+\end{defi}
+The Euler method has order $1$. Notice that this is one less than the power of the local truncation error, since when we look at the global error, we drop a power, and only have $\mathbf{e}_n \sim h$.
+
+Let's try to get a little bit beyond Euler's method.
+\begin{defi}[$\theta$-method]
+ For $\theta \in [0, 1]$, the \emph{$\theta$-method} is
+ \[
+ \mathbf{y}_{n + 1} = \mathbf{y}_n + h\Big( \theta \mathbf{f}(t_n, \mathbf{y}_n) + (1 - \theta) \mathbf{f}(t_{n + 1}, \mathbf{y}_{n + 1})\Big).
+ \]
+\end{defi}
If we put $\theta = 1$, then we just get Euler's method. The other two most common choices of $\theta$ are $\theta = 0$ (backward Euler) and $\theta = \frac{1}{2}$ (trapezoidal rule).
+
Note that for $\theta \not= 1$, we get an \emph{implicit method}. This is because $\mathbf{y}_{n + 1}$ doesn't just appear on the left hand side of the equality. Our formula for $\mathbf{y}_{n + 1}$ involves $\mathbf{y}_{n + 1}$ itself! This means, in general, unlike the Euler method, we can't just write down the value of $\mathbf{y}_{n + 1}$ given the value of $\mathbf{y}_n$. Instead, we have to treat the formula as $N$ (in general) non-linear equations, and solve them to find $\mathbf{y}_{n + 1}$!
+
+In the past, people did not like to use this, because they didn't have computers, or computers were too slow. It is tedious to have to solve these equations in every step of the method. Nowadays, these are becoming more and more popular because it is getting easier to solve equations, and $\theta$-methods have some huge theoretical advantages (which we do not have time to get into).
+
+We now look at the error of the $\theta$-method. We have
+\begin{align*}
+ \boldsymbol\eta &= \mathbf{y}(t_{n + 1}) - \mathbf{y}(t_n) - h \Big(\theta \mathbf{y}'(t_n) + (1 - \theta)\mathbf{y}'(t_{n + 1})\Big)\\
+ \intertext{We expand all terms about $t_n$ with Taylor's series to obtain}
+ &= \left(\theta - \frac{1}{2}\right) h^2 \mathbf{y}''(t_n) + \left(\frac{1}{2}\theta - \frac{1}{3}\right) h^3 \mathbf{y}'''(t_n) + O(h^4).
+\end{align*}
+We see that $\theta = \frac{1}{2}$ gives us an order $2$ method. Otherwise, we get an order $1$ method.
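For the linear test problem $y' = \lambda y$, the implicit equation can be solved in closed form, so the orders are easy to observe numerically. A Python sketch (the function name is ours):

```python
import math

# Theta-method on y' = lam*y; the implicit step solves in closed form:
#   y_{n+1} = y_n (1 + h*theta*lam) / (1 - h*(1 - theta)*lam).
def theta_method(theta, lam, h, T):
    y = 1.0
    for _ in range(round(T / h)):
        y *= (1 + h * theta * lam) / (1 - h * (1 - theta) * lam)
    return y

lam, T = -1.0, 1.0
for theta in [0.0, 0.5, 1.0]:
    e1 = abs(theta_method(theta, lam, 0.02, T) - math.exp(lam * T))
    e2 = abs(theta_method(theta, lam, 0.01, T) - math.exp(lam * T))
    print(theta, e1 / e2)   # ~2 for theta = 0, 1 (order 1); ~4 for theta = 1/2
```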
+
+\subsection{Multi-step methods}
+We can try to make our methods more efficient by making use of previous values of $\mathbf{y}_n$ instead of the most recent one. One common method is the AB2 method:
+\begin{defi}[2-step Adams-Bashforth method]
+ The \emph{2-step Adams-Bashforth (AB2) method} has
+ \[
+ \mathbf{y}_{n + 2} = \mathbf{y}_{n + 1} + \frac{1}{2}h \left(3 \mathbf{f}(t_{n + 1}, \mathbf{y}_{n + 1}) - \mathbf{f}(t_n, \mathbf{y}_n)\right).
+ \]
+\end{defi}
+This is a particular example of the general class of \emph{Adams-Bashforth methods}.
+
+In general, a multi-step method is defined as follows:
+\begin{defi}[Multi-step method]
 An \emph{$s$-step numerical method} is given by
+ \[
+ \sum_{\ell = 0}^s \rho_\ell \mathbf{y}_{n + \ell} = h \sum_{ \ell = 0}^s \sigma_\ell \mathbf{f}(t_{n + \ell}, \mathbf{y}_{n + \ell}).
+ \]
+ This formula is used to find the value of $\mathbf{y}_{n + s}$ given the others.
+\end{defi}
+One point to note is that we get the same method if we multiply all the constants $\rho_\ell, \sigma_\ell$ by a non-zero constant. By convention, we normalize this by setting $\rho_s = 1$. Then we can alternatively write this as
+\[
+ \mathbf{y}_{n + s} = h \sum_{\ell = 0}^s \sigma_\ell \mathbf{f}(t_{n + \ell}, \mathbf{y}_{n + \ell}) - \sum_{\ell = 0}^{s - 1} \rho_\ell \mathbf{y}_{n + \ell}.
+\]
+This method is an implicit method if $\sigma_s \not= 0$. Otherwise, it is explicit.
+
+Note that this method is \emph{linear} in the sense that the coefficients $\rho_\ell$ and $\sigma_\ell$ appear linearly, outside the $\mathbf{f}$s. Later we will see more complicated numerical methods where these coefficients appear inside the arguments of $\mathbf{f}$.
+
+For multi-step methods, we have a slight problem to solve. In a one-step method, we are given $\mathbf{y}_0$, and this allows us to immediately apply the one-step method to get higher values of $\mathbf{y}_n$. However, for an $s$-step method, we need to use other (possibly $1$-step) method to obtain $\mathbf{y}_1, \cdots, \mathbf{y}_{s - 1}$ before we can get started.
+
+Fortunately, we only need to apply the one-step method a fixed, small number of times, even as $h \to 0$. So the accuracy of the one-step method at the start does not matter too much.
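The startup procedure and the AB2 recurrence can be sketched in Python (names ours; a single Euler step supplies $\mathbf{y}_1$, and the test problem $y' = -y$ is for illustration):

```python
import math

# AB2 on y' = -y over [0, 1], with one Euler step to generate y_1.
def ab2(f, y0, T, h):
    n = round(T / h)
    y = [y0, y0 + h * f(0.0, y0)]      # y_0, then y_1 from Euler's method
    for k in range(1, n):
        y.append(y[k] + h / 2 * (3 * f(k * h, y[k]) - f((k - 1) * h, y[k - 1])))
    return y[n]

approx = ab2(lambda t, y: -y, 1.0, 1.0, 1e-3)
print(abs(approx - math.exp(-1)))      # error of order h^2
```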
+
+We now study the properties of a general multi-step method. The first thing we can talk about is the order:
+\begin{thm}
+ An $s$-step method has order $p$ ($p \geq 1$) if and only if
+ \[
+ \sum_{\ell = 0}^s \rho_\ell = 0
+ \]
+ and
+ \[
+ \sum_{\ell = 0}^s \rho_\ell \ell^k = k\sum_{\ell = 0}^s \sigma_\ell \ell^{k - 1}
+ \]
+ for $k = 1, \cdots, p$, where $0^0 = 1$.
+\end{thm}
+
+This is just a rather technical result coming directly from definition.
+\begin{proof}
+ The local truncation error is
+ \[
+ \sum_{\ell = 0}^s \rho_\ell \mathbf{y}(t_{n + \ell}) - h \sum_{\ell = 0}^s \sigma_\ell \mathbf{y}'(t_{n + \ell}).
+ \]
+ We now expand the $\mathbf{y}$ and $\mathbf{y}'$ about $t_n$, and obtain
+ \[
+ \left(\sum_{\ell = 0}^s \rho_\ell\right) \mathbf{y}(t_n) + \sum_{k = 1}^\infty \frac{h^k}{k!}\left(\sum_{\ell = 0}^s \rho_\ell \ell^k - k \sum_{\ell = 0}^s \sigma_\ell \ell^{k - 1}\right)\mathbf{y}^{(k)}(t_n).
+ \]
+ This is $O(h^{p + 1})$ under the given conditions.
+\end{proof}
+
+\begin{eg}[AB2]
 In the two-step Adams-Bashforth method, we see that the conditions hold for $p = 2$ but not $p = 3$. So the order is $2$.
+\end{eg}
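These conditions are easy to check in exact arithmetic. For AB2 we have $\rho = (0, -1, 1)$ and $\sigma = (-\frac{1}{2}, \frac{3}{2}, 0)$; a short Python check using `fractions`:

```python
from fractions import Fraction as F

# Order conditions for AB2: rho = (0, -1, 1), sigma = (-1/2, 3/2, 0),
# checked in exact rational arithmetic.
rho   = [F(0), F(-1), F(1)]
sigma = [F(-1, 2), F(3, 2), F(0)]

def power(l, k):
    # l**k with the convention 0**0 = 1
    return F(1) if k == 0 else F(l)**k

print(sum(rho) == 0)                  # True: rho(1) = 0
for k in range(1, 4):
    lhs = sum(r * power(l, k) for l, r in enumerate(rho))
    rhs = k * sum(sg * power(l, k - 1) for l, sg in enumerate(sigma))
    print(k, lhs == rhs)              # True for k = 1, 2; False for k = 3
```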
+
Instead of dealing with the coefficients $\rho_\ell$ and $\sigma_\ell$ directly, it is often convenient to pack them together into two polynomials associated with the numerical method. Unsurprisingly, these are
+\[
+ \rho(w) = \sum_{\ell = 0}^s \rho_\ell w^\ell,\quad \sigma(w) = \sum_{\ell = 0}^s \sigma_\ell w^\ell.
+\]
+We can then use this to restate the above theorem.
+\begin{thm}
+ A multi-step method has order $p$ (with $p \geq 1$) if and only if
+ \[
+ \rho(e^x) - x \sigma(e^x) = O(x^{p + 1})
+ \]
+ as $x \to 0$.
+\end{thm}
+
+\begin{proof}
+ We expand
+ \[
+ \rho(e^x) - x \sigma(e^x) = \sum_{\ell = 0}^s \rho_\ell e^{\ell x} - x \sum_{\ell = 0}^s \sigma_\ell e^{\ell x}.
+ \]
+ We now expand the $e^{\ell x}$ in Taylor series about $x = 0$. This comes out as
+ \[
+ \sum_{\ell = 0}^s \rho_\ell + \sum_{k = 1}^\infty \frac{1}{k!} \left(\sum_{\ell = 0}^s \rho_\ell \ell^k - k \sum_{\ell = 0}^s \sigma_\ell \ell^{k - 1}\right) x^k.
+ \]
+ So the result follows.
+\end{proof}
+Note that $\sum_{\ell = 0}^s \rho_\ell = 0$, which is the condition required for the method to even have an order at all, can be expressed as $\rho(1) = 0$.
+
+\begin{eg}[AB2]
+ In the two-step Adams-Bashforth method, we get
+ \[
+ \rho(w) = w^2 - w,\quad \sigma(w) = \frac{3}{2} w - \frac{1}{2}.
+ \]
+ We can immediately check that $\rho(1) = 0$. We also have
+ \[
+ \rho(e^x) - x \sigma(e^x) = \frac{5}{12} x^3 + O(x^4).
+ \]
+ So the order is $2$.
+\end{eg}
+
We've sorted out the order of multi-step methods. The next thing to check is convergence. This is where the difference between one-step and multi-step methods comes in. For one-step methods, we only needed the order to understand convergence. It is a \emph{fact} that a one-step method converges whenever it has an order $p \geq 1$. For multi-step methods, we need an extra condition.
+
+\begin{defi}[Root condition]
+ We say $\rho(w)$ satisfies the \emph{root condition} if all its zeros are bounded by $1$ in size, i.e.\ all roots $w$ satisfy $|w| \leq 1$. Moreover any zero with $|w| = 1$ must be simple.
+\end{defi}
+We can imagine this as saying large roots are bad --- they cannot get past $1$, and we cannot have too many with modulus $1$.
+
+We saw any sensible multi-step method must have $\rho(1) = 0$. So in particular, $1$ must be a simple zero.
+
+\begin{thm}[Dahlquist equivalence theorem]
+ A multi-step method is convergent if and only if
+ \begin{enumerate}
+ \item The order $p$ is at least $1$; and
+ \item The root condition holds.
+ \end{enumerate}
+\end{thm}
+The proof is too difficult to include in this course, or even the Part II version. This is only done in Part III.
+
+\begin{eg}[AB2]
+ Again consider the two-step Adams-Bashforth method. We have seen it has order $p = 2 \geq 1$. So we need to check the root condition. So $\rho(w) = w^2 - w = w(w - 1)$. So it satisfies the root condition.
+\end{eg}
+
+Let's now come up with a sensible strategy for constructing convergent $s$-step methods:
+\begin{enumerate}
+ \item Choose a $\rho$ so that $\rho(1) = 0$ and the root condition holds.
+ \item Choose $\sigma$ to maximize the order, i.e.
+ \[
  \sigma(w) = \frac{\rho(w)}{\log w} +
+ \begin{cases}
+ O(|w - 1|^{s + 1}) & \text{if implicit}\\
+ O(|w - 1|^s) & \text{if explicit}
+ \end{cases}
+ \]
+ We have the two different conditions since for implicit methods, we have one more coefficient to fiddle with, so we can get a higher order.
+\end{enumerate}
+Where does the $\frac{1}{\log w}$ come from? We try to substitute $w = e^x$ (noting that $e^x - 1\sim x$). Then the formula says
+\[
+ \sigma(e^x) = \frac{1}{x} \rho(e^x) +
+ \begin{cases}
+ O(x^{s + 1}) & \text{if implicit}\\
+ O(x^s) & \text{if explicit}
+ \end{cases}.
+\]
+Rearranging gives
+\[
+ \rho(e^x) -x \sigma (e^x) =
+ \begin{cases}
+ O(x^{s + 2}) & \text{if implicit}\\
+ O(x^{s + 1}) & \text{if explicit}
+ \end{cases},
+\]
+which is our order condition. So given any $\rho$, there is only one sensible way to pick $\sigma$. So the key is in picking a good enough $\rho$.
+
The root condition is ``best'' satisfied if $\rho(w) = w^{s - 1}(w - 1)$, i.e.\ all but one of the roots are $0$. Then we have
+\[
+ \mathbf{y}_{n + s} - \mathbf{y}_{n + s - 1} = h \sum_{\ell = 0}^s \sigma_\ell \mathbf{f}(t_{n + \ell}, \mathbf{y}_{n + \ell}),
+\]
where the $\sigma_\ell$ are chosen to maximize order.
+
+\begin{defi}[Adams method]
+ An \emph{Adams method} is a multi-step numerical method with $\rho(w) = w^{s - 1}(w - 1)$.
+\end{defi}
+These can be either explicit or implicit. In different cases, we get different names.
+
+\begin{defi}[Adams-Bashforth method]
+ An \emph{Adams-Bashforth} method is an explicit Adams method.
+\end{defi}
+
+\begin{defi}[Adams-Moulton method]
+ An \emph{Adams-Moulton} method is an implicit Adams method.
+\end{defi}
+
+\begin{eg}
+ We look at the two-step third-order Adams-Moulton method. This is given by
+ \[
  \mathbf{y}_{n + 2} - \mathbf{y}_{n + 1} = h \left(-\frac{1}{12} \mathbf{f}(t_n, \mathbf{y}_n) + \frac{2}{3} \mathbf{f}(t_{n + 1}, \mathbf{y}_{n + 1}) + \frac{5}{12} \mathbf{f}(t_{n + 2}, \mathbf{y}_{n + 2})\right).
+ \]
+ Where do these coefficients come from? We have to first expand our $\frac{\rho(w)}{\log w}$ about $w - 1$:
+ \[
+ \frac{\rho(w)}{\log w} = \frac{w(w - 1)}{\log w} = 1 + \frac{3}{2} (w - 1) + \frac{5}{12}(w - 1)^2 + O(|w - 1|^3).
+ \]
+ These aren't our coefficients of $\sigma$, since what we need to do is to rearrange the first three terms to be expressed in terms of $w$. So we have
+ \[
+ \frac{\rho(w)}{\log w} = -\frac{1}{12} + \frac{2}{3} w + \frac{5}{12}w^2 + O(|w - 1|^3).
+ \]
+\end{eg}
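We can verify this numerically: with $\rho(w) = w^2 - w$ and this $\sigma$, the defect $\rho(e^x) - x\sigma(e^x)$ should scale like $x^4$, confirming order $3$. A small Python check (the function name is ours):

```python
import math

# With rho(w) = w^2 - w and sigma(w) = -1/12 + (2/3) w + (5/12) w^2, the
# defect rho(e^x) - x sigma(e^x) should scale like x^4 (order 3).
def defect(x):
    w = math.exp(x)
    return (w**2 - w) - x * (-1/12 + (2/3) * w + (5/12) * w**2)

for x in [1e-1, 1e-2]:
    print(defect(x) / x**4)   # roughly constant (about -1/24)
```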
+
+Another important class of multi-step methods is constructed in the opposite way --- we choose a particular $\sigma$, and then find the most optimal $\rho$.
+
+\begin{defi}[Backward differentiation method]
+ A \emph{backward differentiation method} has $\sigma (w) = \sigma_s w^s$, i.e.
+ \[
  \sum_{\ell = 0}^s \rho_\ell \mathbf{y}_{n + \ell} = h \sigma_s \mathbf{f}(t_{n + s}, \mathbf{y}_{n + s}).
+ \]
+\end{defi}
+This is a generalization of the one-step backwards Euler method.
+
+Given this $\sigma$, we need to choose the appropriate $\rho$. Fortunately, this can be done quite easily.
+\begin{lemma}
+ An $s$-step backward differentiation method of order $s$ is obtained by choosing
+ \[
+ \rho(w) = \sigma_s \sum_{\ell = 1}^s \frac{1}{\ell} w^{s - \ell}(w - 1)^\ell,
+ \]
+ with $\sigma_s$ chosen such that $\rho_s = 1$, namely
+ \[
+ \sigma_s = \left(\sum_{\ell = 1}^s \frac{1}{\ell}\right)^{-1}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We need to construct $\rho$ so that
+ \[
+ \rho(w) = \sigma_s w^s \log w + O(|w - 1|^{s + 1}).
+ \]
+ This is easy, if we write
+ \begin{align*}
+ \log w &= - \log\left(\frac{1}{w}\right)\\
+ &= -\log \left(1 - \frac{w - 1}{w}\right)\\
+ &= \sum_{\ell = 1}^\infty \frac{1}{\ell} \left(\frac{w - 1}{w}\right)^\ell.
+ \end{align*}
+ Multiplying by $\sigma_s w^s$ gives the desired result.
+\end{proof}
+For this method to be convergent, we need to make sure it does satisfy the root condition. It turns out the root condition is satisfied only for $s \leq 6$. This is not obvious by first sight, but we can certainly verify this manually.
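As a sanity check on the lemma, we can expand its formula for $s = 2$ in exact arithmetic and recover the familiar BDF2 coefficients:

```python
from fractions import Fraction as F
from math import comb

# Expand rho(w) = sigma_s * sum_{l=1}^{s} (1/l) w^{s-l} (w-1)^l for s = 2
# and recover the BDF2 coefficients in exact arithmetic.
s = 2
sigma_s = 1 / sum(F(1, l) for l in range(1, s + 1))   # normalizing constant

coeffs = [F(0)] * (s + 1)                             # rho_0, ..., rho_s
for l in range(1, s + 1):
    for j in range(l + 1):                            # binomial expansion of (w-1)^l
        coeffs[s - l + j] += sigma_s * F(1, l) * comb(l, j) * (-1)**(l - j)

print(sigma_s, coeffs)   # sigma_s = 2/3; coefficients 1/3, -4/3, 1
```

This recovers the well-known BDF2 formula $\mathbf{y}_{n + 2} - \frac{4}{3}\mathbf{y}_{n + 1} + \frac{1}{3}\mathbf{y}_n = \frac{2}{3} h \mathbf{f}(t_{n + 2}, \mathbf{y}_{n + 2})$.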
+
+\subsection{Runge-Kutta methods}
+Finally, we look at Runge-Kutta methods. These methods are very complicated, and are rather tedious to analyse. They have been largely ignored for a long time, until more powerful computers came along and made these methods much more practical. These are used quite a lot nowadays since they have many nice properties.
+
+Runge-Kutta methods can be motivated by Gaussian quadrature, but we will not go into that connection here. Instead, we'll go straight in and work with the method.
+
+\begin{defi}[Runge-Kutta method]
+ General (implicit) $\nu$-stage \emph{Runge-Kutta (RK) methods} have the form
+ \[
+ \mathbf{y}_{n + 1} = \mathbf{y}_n + h \sum_{\ell = 1}^\nu b_\ell \mathbf{k}_\ell,
+ \]
+ where
+ \[
  \mathbf{k}_\ell = \mathbf{f}\left(t_n + c_\ell h, \mathbf{y}_n + h\sum_{j = 1}^\nu a_{\ell j} \mathbf{k}_j\right)
+ \]
+ for $\ell = 1, \cdots, \nu$.
+\end{defi}
+There are a lot of parameters we have to choose. We need to pick
+\[
  \{b_\ell\}_{\ell = 1}^\nu,\quad \{c_\ell\}_{\ell = 1}^\nu,\quad \{a_{\ell j}\}_{\ell, j = 1}^\nu.
+\]
Note that in general, $\{\mathbf{k}_{\ell}\}_{\ell = 1}^\nu$ have to be solved for, since they are defined in terms of one another. However, for certain choices of parameters, we can make this an explicit method. This makes it easier to compute, but we lose some accuracy and flexibility.
+
+Unlike all the other methods we've seen so far, the parameters appear \emph{inside} $\mathbf{f}$. They appear non-linearly inside the functions. This makes the method much more complicated and difficult to analyse using Taylor series. Yet, once we manage to do this properly, these have lots of nice properties. Unfortunately, we will not have time to go into what these properties actually are.
+
+Notice this is a one-step method. So once we get order $p \geq 1$, we will have convergence. So what conditions do we need for a decent order?
+
+This is in general very complicated. However, we can quickly obtain some necessary conditions. We can consider the case where $\mathbf{f}$ is a constant. Then $\mathbf{k}_\ell$ is always that constant. So we must have
+\[
+ \sum_{\ell = 1}^\nu b_\ell = 1.
+\]
+It turns out we also need, for $\ell = 1, \cdots, \nu$,
+\[
+ c_\ell = \sum_{j = 1}^\nu a_{\ell j}.
+\]
+While these are necessary conditions, they are not sufficient. We need other conditions as well, which we shall get to later. It is a fact that the best possible order of a $\nu$-stage Runge-Kutta method is $2 \nu$.
+
+To describe a Runge-Kutta method, a standard notation is to put the coefficients in the \emph{Butcher table}:
+\begin{center}
+ \begin{tabular}{c|ccc}
+ $c_1$ & $a_{11}$ & $\cdots$ & $a_{1\nu}$\\
+ $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$\\
+ $c_\nu$ & $a_{\nu 1}$ & $\cdots$ & $a_{\nu\nu}$\\\hline
+ & $b_1$ & $\cdots$ & $b_\nu$
+ \end{tabular}
+\end{center}
+We sometimes more concisely write it as
+\begin{center}
+ \begin{tabular}{c|ccc}
+ \\
+ $\mathbf{c}$ &\vphantom{a} &{\Large $A$}&\vphantom{a}\\
+ \\\hline
    & &$\mathbf{b}^T$
+ \end{tabular}
+\end{center}
This table allows for a general implicit method. Initially, explicit methods came out first, since they are much easier to compute. In this case, the matrix $A$ is strictly lower triangular, i.e.\ $a_{\ell j} = 0$ whenever $\ell \leq j$.
+
+\begin{eg}
+ The most famous explicit Runge-Kutta method is the 4-stage 4th order one, often called the \emph{classical} Runge-Kutta method. The formula can be given explicitly by
+ \[
+ \mathbf{y}_{n + 1} = \mathbf{y}_n + \frac{h}{6}(\mathbf{k}_1 + 2\mathbf{k}_2 + 2\mathbf{k}_3 + \mathbf{k}_4),
+ \]
+ where
+ \begin{align*}
    \mathbf{k}_1 &= \mathbf{f}(t_n, \mathbf{y}_n)\\
    \mathbf{k}_2 &= \mathbf{f}\left(t_n + \frac{1}{2}h, \mathbf{y}_n + \frac{1}{2}h \mathbf{k}_1\right)\\
    \mathbf{k}_3 &= \mathbf{f}\left(t_n + \frac{1}{2}h, \mathbf{y}_n + \frac{1}{2}h \mathbf{k}_2\right)\\
    \mathbf{k}_4 &= \mathbf{f}\left(t_n + h, \mathbf{y}_n + h \mathbf{k}_3\right).
+ \end{align*}
+ We see that this is an explicit method. We don't need to solve any equations.
+\end{eg}
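The classical method transcribes directly into code. A minimal Python sketch (function name ours), tested on $y' = y$, whose exact solution at $t = 1$ is $e$:

```python
import math

# One step of the classical fourth-order Runge-Kutta method, as above.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test on y' = y, y(0) = 1: one hundred steps of h = 0.01 across [0, 1].
y, h = 1.0, 0.01
for n in range(100):
    y = rk4_step(lambda t, u: u, n * h, y, h)
print(abs(y - math.e))   # tiny error, of order h^4
```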
+Choosing the parameters for the Runge-Kutta method to maximize order is \emph{hard}. Consider the simplest case, the $2$-stage explicit method. The general formula is
+\[
+ \mathbf{y}_{n + 1} = \mathbf{y}_n + h(b_1 \mathbf{k}_1 + b_2 \mathbf{k}_2),
+\]
where
\begin{align*}
    \mathbf{k}_1 &= \mathbf{f}(t_n, \mathbf{y}_n)\\
    \mathbf{k}_2 &= \mathbf{f}(t_n + c_2 h, \mathbf{y}_n + c_2 h\mathbf{k}_1),
\end{align*}
using the condition $a_{21} = c_2$. To analyse this, we insert the true solution into the method. First, we need to insert the true solution of the ODE into the $\mathbf{k}$'s. We get
+\begin{align*}
+ \mathbf{k}_1 &= \mathbf{y}'(t_n)\\
+ \mathbf{k}_2 &= \mathbf{f}(t_n + c_2 h, \mathbf{y}(t_n) + c_2 h \mathbf{y}'(t_n))\\
    &= \mathbf{y}'(t_n) + c_2 h \left(\frac{\partial \mathbf{f}}{\partial t}(t_n, \mathbf{y} (t_n)) + \nabla \mathbf{f}(t_n, \mathbf{y}(t_n)) \mathbf{y}'(t_n)\right) + O(h^2)\\
+ \intertext{Fortunately, we notice that the thing inside the huge brackets is just $\mathbf{y}''(t_n)$. So this is}
+ &= \mathbf{y}'(t_n) + c_2 h \mathbf{y}''(t_n) + O(h^2).
+\end{align*}
Hence, the local truncation error of the Runge-Kutta method is
+\begin{multline*}
+ \mathbf{y}(t_{n + 1}) - \mathbf{y}(t_n) - h(b_1 \mathbf{k}_1 + b_2 \mathbf{k}_2) \\
+ = (1 - b_1 - b_2)h \mathbf{y}'(t_n) + \left(\frac{1}{2} - b_2 c_2\right) h^2 \mathbf{y}''(t_n) + O(h^3).
+\end{multline*}
+Now we see why Runge-Kutta methods are hard to analyse. The coefficients appear non-linearly in this expression. It is still solvable in this case, in the obvious way, but for higher stage methods, this becomes much more complicated.
+
+In this case, we have a $1$-parameter family of order $2$ methods, satisfying
+\[
+ b_1 + b_2 = 1,\quad b_2c_2 = \frac{1}{2}.
+\]
It is easy to check using the simple equation $y' = \lambda y$ that it is not possible to get a higher order method. So as long as our choice of $b_1$, $b_2$ and $c_2$ satisfies these equations, we get a decent order $2$ method.
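For instance, $b_1 = b_2 = \frac{1}{2}$, $c_2 = 1$ (often known as Heun's method) satisfies both conditions; a quick Python check (names ours) confirms second order by halving $h$:

```python
import math

# Heun's method: b1 = b2 = 1/2, c2 = 1 (so a21 = c2 = 1), on y' = lam*y.
def heun(lam, h, T):
    y = 1.0
    for _ in range(round(T / h)):
        k1 = lam * y
        k2 = lam * (y + h * k1)
        y += h / 2 * (k1 + k2)
    return y

e1 = abs(heun(-1.0, 0.02, 1.0) - math.exp(-1))
e2 = abs(heun(-1.0, 0.01, 1.0) - math.exp(-1))
print(e1 / e2)   # ~4: halving h quarters the error, so the order is 2
```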
+
As we have seen, Runge-Kutta methods are really complicated, even in the simplest case. However, they have so many good properties that they are becoming very popular nowadays.
+
+\section{Stiff equations}
+\subsection{Introduction}
Initially, when people were developing numerical methods, they focused mostly on quantitative properties like order and accuracy. This led to the development of many different methods, like multi-step methods and Runge-Kutta methods.
+
+More recently, people started to look at \emph{structural properties}. Often, equations come with some special properties. For example, a differential equation describing the motion of a particle would most probably conserve energy. When we approximate it numerically, we would like the numerical approximation to satisfy conservation of energy as well. This is what recent developments are looking at --- we want to look at whether numerical methods preserve certain nice properties.
+
+We are not going to look at conservation of energy --- this is too complicated for a first course. Instead, we look at the following problem. Suppose we have a system of ODEs for $0 \leq t \leq T$:
+\begin{align*}
+ \mathbf{y}'(t) &= \mathbf{f}(t, \mathbf{y}(t))\\
+ \mathbf{y}(0) &= \mathbf{y}_0.
+\end{align*}
+Suppose $T > 0$ is arbitrary, and
+\[
+ \lim_{t \to \infty} \mathbf{y}(t) = \mathbf{0}.
+\]
+What restriction on $h$ is necessary for a numerical method to satisfy $\lim\limits_{n \to \infty} \mathbf{y}_n = \mathbf{0}$?
+
+This question is still too complicated for us to tackle. It can only be easily solved for linear problems, namely ODEs of the form
+\[
+ \mathbf{y}'(t) = A \mathbf{y}(t),
+\]
for $A \in \R^{N \times N}$.
+
Firstly, for what $A$ do we have $\mathbf{y}(t) \to 0$ as $t \to \infty$? By some basic linear algebra, we know this holds if and only if $\Re(\lambda) < 0$ for all eigenvalues $\lambda$ of $A$. To simplify further, we consider the case where
+\[
+ y'(t) = \lambda y(t),\quad \Re(\lambda) < 0.
+\]
+It should be clear that if $A$ is diagonalizable, then it can be reduced to multiple instances of this case. Otherwise, we need to do some more work, but we'll not do that in this course.
+
+And that, at last, is enough simplification.
+
+\subsection{Linear stability}
This is the easy part of the course, where we just consider the problem $y' = \lambda y$. No matter how complicated our numerical method is, when applied to this problem, it usually becomes really simple.
+
+\begin{defi}[Linear stability domain]
+ If we apply a numerical method to
+ \[
+ y'(t) = \lambda y(t)
+ \]
+ with $y(0) = 1$, $\lambda \in \C$, then its linear stability domain is
+ \[
+ D = \left\{z = h\lambda: \lim_{n \to \infty} y_n = 0\right\}.
+ \]
+\end{defi}
+
+\begin{eg}
+ Consider the Euler method. The discrete solution is
+ \[
+ y_n = (1 + h\lambda)^n.
+ \]
+ Thus we get
+ \[
+ D = \{z \in \C: |1 + z| < 1\}.
+ \]
+ We can visualize this on the complex plane as the open unit ball
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$\Re$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\Im$};
+
+ \draw [fill=mblue, fill opacity=0.5] (-1, 0) circle [radius=1];
+ \node [circ] at (-1, 0) {};
+ \node [below] at (-1, 0) {$-1$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{eg}
 Consider instead the backward Euler method. This method is implicit, but since our problem is simple, we can solve for the $n$th step explicitly. It turns out it is
+ \[
+ y_n = (1 - \lambda h)^{-n}.
+ \]
+ Then we get
+ \[
+ D = \{z \in \C: |1 - z| > 1\}.
+ \]
+ We can visualize this as the region:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (-2.5, 2) rectangle (2.5, -2);
+
+ \draw [fill=white] (1, 0) circle [radius=1];
+ \node [circ] at (1, 0) {};
+ \node [below] at (1, 0) {$1$};
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$\Re$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\Im$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+We make the following definition:
+\begin{defi}[A-stability]
+ A numerical method is \emph{A-stable} if
+ \[
+ \C^- = \{z \in \C: \Re(z) < 0\} \subseteq D.
+ \]
+\end{defi}
In particular, for $\Re(\lambda) < 0$, A-stability means that $y_n$ will tend to $0$ regardless of how large $h$ is.
+
+Hence the backwards Euler method is A-stable, but the Euler method is not.
+
+A-stability is a very strong requirement. It is hard to get A-stability. In particular, Dahlquist proved that no multi-step method of order $p \geq 3$ is A-stable. Moreover, no explicit Runge-Kutta method can be A-stable.
+
+So let's look at some other implicit methods.
+\begin{eg}[Trapezoidal rule]
+ Again consider $y'(t) = \lambda y$, with the trapezoidal rule. Then we can find
+ \[
+ y_n = \left(\frac{1 + h\lambda/2}{1 - h\lambda/2}\right)^n.
+ \]
+ So the linear stability domain is
+ \[
+ D = \left\{z \in \C: \left|\frac{2 + z}{2 - z}\right| < 1 \right\}.
+ \]
+ What this says is that $z$ has to be closer to $-2$ than to $2$. In other words, $D$ is exactly $\C^-$.
+\end{eg}
+
+When testing a numerical method for A-stability in general, complex analysis is helpful. Usually, when applying a numerical method to the problem $y' = \lambda y$, we get
+\[
+ y_n = [r(h\lambda)]^n,
+\]
+where $r$ is some rational function. So
+\[
+ D = \{z \in \C: |r(z)| < 1\}.
+\]
+We want to know if $D$ contains the left half-plane. For more complicated expressions of $r$, like the case of the trapezoidal rule, this is not so obvious. Fortunately, we have the \emph{maximum principle}:
+\begin{thm}[Maximum principle]
+ Let $g$ be analytic and non-constant in an open set $\Omega \subseteq \C$. Then $|g|$ has no maximum in $\Omega$.
+\end{thm}
+Since $|g|$ needs to have a maximum in the closure of $\Omega$, the maximum must occur on the boundary. So to show $|g| \leq 1$ on the region $\Omega$, we only need to show the inequality holds on the boundary $\partial \Omega$.
+
+We try $\Omega = \C^-$. The trick is to first check that $g$ is analytic in $\Omega$, and then check what happens at the boundary. This technique is made clear in the following example:
+
+\begin{eg}
+ Consider
+ \[
+ r(z) = \frac{6 - 2z}{6 - 4z + z^2}.
+ \]
+ This is still pretty simple, but can illustrate how we can use the maximum principle.
+
+ We first check if it is analytic. This certainly has some poles, but they are $2 \pm \sqrt{2} i$, and are in the right-half plane. So this is analytic in $\C^-$.
+
 Next, what happens at the boundary of the left-half plane? Firstly, as $|z| \to \infty$, we find $r(z) \to 0$, since we have a $z^2$ in the denominator. The next part is checking when $z$ is on the imaginary axis, say $z = it$ with $t \in \R$. Then we can check by some messy algebra that
+ \[
+ |r(it)| \leq 1
+ \]
+ for $t \in \R$. Therefore, by the maximum principle, we must have $|r(z)| \leq 1$ for all $z \in \C^-$.
+\end{eg}
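We can also check the boundary behaviour numerically rather than by messy algebra. A small Python sketch (my own, not from the notes) samples $|r(it)|$ along the imaginary axis and checks the decay as $|z| \to \infty$:

```python
import numpy as np

def r(z):
    """The rational function from the example: r(z) = (6 - 2z)/(6 - 4z + z^2)."""
    return (6 - 2 * z) / (6 - 4 * z + z ** 2)

# Sample |r| along the boundary of the left half-plane (the imaginary axis).
# The poles 2 +/- sqrt(2)i are in the right half-plane, so this is safe.
t = np.linspace(-1e3, 1e3, 100001)
boundary_vals = np.abs(r(1j * t))
```

The sampled maximum of $|r(it)|$ is $1$ (attained at $t = 0$), and $|r(z)|$ is tiny far out on the negative real axis, consistent with the maximum principle argument.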
+
+\section{Implementation of ODE methods}
+We just did quite a lot of theory about numerical methods. To end this section, we will look at some more practical sides of ODE methods.
+
+\subsection{Local error estimation}
+The first problem we want to tackle is what $h$ we should pick. Usually, when using numerical analysis software, you will be asked for an error tolerance, and then the software will automatically compute the required $h$ we need. How is this done?
+
+Milne's device is a method for estimating the local truncation error of a multi-step method, and hence changing the step-length $h$ (there are similar techniques for Runge-Kutta methods, but they are more complicated). This uses two multi-step methods of the same order.
+
+To keep things simple, we consider the two-step Adams-Bashforth method. Recall that this is given by
+\[
 \mathbf{y}_{n + 1} = \mathbf{y}_n + \frac{h}{2} (3 \mathbf{f}(t_n, \mathbf{y}_n) - \mathbf{f}(t_{n - 1}, \mathbf{y}_{n - 1})).
+\]
This method has order $2$, with local truncation error
+\[
+ \boldsymbol\eta_{n + 1} = \frac{5}{12}h^3 \mathbf{y}'''(t_n) + O(h^4).
+\]
+The other multi-step method of order $2$ is our old friend, the trapezoidal rule. This is an implicit method
+\[
+ \mathbf{y}_{n + 1} = \mathbf{y}_n + \frac{h}{2} (\mathbf{f}(t_n, \mathbf{y}_n) + \mathbf{f}(t_{n + 1}, \mathbf{y}_{n + 1})).
+\]
+This has a local truncation error of
+\[
+ \boldsymbol\eta_{n + 1} = -\frac{1}{12}h^3 \mathbf{y}'''(t_n) + O(h^4).
+\]
The key to Milne's device is the coefficients of $h^3 \mathbf{y}'''(t_n)$, namely
+\[
+ c_{\mathrm{AB}} = \frac{5}{12},\quad c_{\mathrm{TR}} = -\frac{1}{12}.
+\]
+Since these are two different methods, we get different $\mathbf{y}_{n + 1}$'s. We distinguish these by superscripts, and have
+\begin{align*}
+ \mathbf{y}(t_{n + 1}) - \mathbf{y}_{n + 1}^{\mathrm{AB}} &\simeq c_{\mathrm{AB}} h^3 \mathbf{y}'''(t_n)\\
+ \mathbf{y}(t_{n + 1}) - \mathbf{y}_{n + 1}^{\mathrm{TR}} &\simeq c_{\mathrm{TR}} h^3 \mathbf{y}'''(t_n)
+\end{align*}
+We can now eliminate some terms to obtain
+\[
+ \mathbf{y}(t_{n + 1}) - \mathbf{y}_{n + 1}^{\mathrm{TR}} \simeq \frac{-c_{\mathrm{TR}}}{c_{\mathrm{AB}} - c_{\mathrm{TR}}} (\mathbf{y}_{n + 1}^{\mathrm{AB}} - \mathbf{y}_{n + 1}^{\mathrm{TR}}).
+\]
+In this case, the constant we have is $\frac{1}{6}$. So we can estimate the local truncation error for the trapezoidal rule, without knowing the value of $\mathbf{y}'''$. We can then use this to adjust $h$ accordingly.
+
+The extra work we need for this is to compute numerical approximations with two methods. Usually, we will use a simple, explicit method such as the Adams-Bashforth method as the second method, when we want to approximate the error of a more complicated but nicer method, like the trapezoidal rule.
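As an illustration, here is a Python sketch (hypothetical code, not from the course) of Milne's device on the linear test problem $y' = \lambda y$, where the implicit trapezoidal step has a closed form; the estimate $\frac{1}{6}(y^{\mathrm{AB}} - y^{\mathrm{TR}})$ is compared against the true local error:

```python
import math

def milne_step(lam, h, y_prev, y_curr):
    """One Milne's-device step for the linear test problem y' = lam*y.

    Returns the trapezoidal value y_{n+1} and an estimate of its local
    truncation error: with c_AB = 5/12 and c_TR = -1/12 we have
    -c_TR/(c_AB - c_TR) * (y_AB - y_TR) = (y_AB - y_TR)/6.
    """
    f = lambda y: lam * y
    # Explicit 2-step Adams-Bashforth.
    y_ab = y_curr + h / 2 * (3 * f(y_curr) - f(y_prev))
    # Trapezoidal rule; for f = lam*y the implicit equation has a closed form.
    y_tr = y_curr * (1 + h * lam / 2) / (1 - h * lam / 2)
    return y_tr, (y_ab - y_tr) / 6

# Start from two exact values y(0) = 1 and y(h) = e^{lam*h}.
h, lam = 0.1, -1.0
y_tr, est = milne_step(lam, h, 1.0, math.exp(lam * h))
true_err = math.exp(2 * lam * h) - y_tr   # actual error of the trapezoidal step
```

For these values the estimate agrees with the true error to within about $10\%$, without ever computing $y'''$.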
+
+\subsection{Solving for implicit methods}
+Implicit methods are often more likely to preserve nice properties like conservation of energy. Since we have more computational power nowadays, it is often preferable to use these more complicated methods. When using these implicit methods, we have to come up with some way to solve the equations involved.
+
+As an example, we consider the backward Euler method
+\[
+ \mathbf{y}_{n + 1} = \mathbf{y}_n + h \mathbf{f}(t_{n + 1}, \mathbf{y}_{n + 1}).
+\]
+There are two ways to solve this equation for $\mathbf{y}_{n + 1}$. The simplest method is \emph{functional iteration}. As the name suggests, this method is iterative. So we use superscripts to denote the iterates. In this case, we use the formula
+\[
 \mathbf{y}_{n + 1}^{(k + 1)} = \mathbf{y}_n + h \mathbf{f}(t_{n + 1}, \mathbf{y}_{n + 1}^{(k)}).
+\]
To do this, we need an initial guess $\mathbf{y}_{n + 1}^{(0)}$. Usually, we start with $\mathbf{y}_{n + 1}^{(0)} = \mathbf{y}_n$. Even better, we can use some simpler explicit method to obtain our first guess of $\mathbf{y}_{n + 1}^{(0)}$.
+
+The question, of course, is whether this converges. Fortunately, this converges to a locally unique solution if $\lambda h$ is sufficiently small, where $\lambda$ is the Lipschitz constant of $\mathbf{f}$. For the backward Euler, we will require $\lambda h < 1$. This relies on the contraction mapping theorem, which you may have met in IB Analysis II.
+
Does this matter? Sometimes it does. Usually, we pick an $h$ using accuracy considerations, picking the largest possible $h$ that still gives us the desired accuracy. However, if we use this method, we might need to pick a much smaller $h$ in order for the iteration to converge. This will require us to compute many more steps, and can take a lot of time.
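A minimal Python sketch of functional iteration for the backward Euler step (the function name and stopping rule are my own choices):

```python
def backward_euler_step(f, t_next, y_n, h, tol=1e-12, max_iter=100):
    """Solve y_{n+1} = y_n + h*f(t_{n+1}, y_{n+1}) by functional iteration.

    By the contraction mapping theorem, this converges when h times the
    Lipschitz constant of f is less than 1.  We start from y_n itself.
    """
    y = y_n
    for _ in range(max_iter):
        y_new = y_n + h * f(t_next, y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    raise RuntimeError("functional iteration did not converge")

# For y' = -y the implicit step can be solved by hand, y_{n+1} = y_n/(1 + h),
# so we can check the iteration against the exact answer.
y1 = backward_euler_step(lambda t, y: -y, 0.1, 1.0, 0.1)
```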
+
+An alternative is Newton's method. This is given by the formula
+\begin{align*}
+ (I - hJ^{(k)}) \mathbf{z}^{(k)} &= \mathbf{y}_{n + 1}^{(k)} - (\mathbf{y}_n + h \mathbf{f}(t_{n + 1}, \mathbf{y}_{n + 1}^{(k)}))\\
 \mathbf{y}_{n + 1}^{(k + 1)} &= \mathbf{y}_{n + 1}^{(k)} - \mathbf{z}^{(k)},
+\end{align*}
+where $J^{(k)}$ is the Jacobian matrix
+\[
+ J^{(k)} = \nabla \mathbf{f}(t_{n + 1}, \mathbf{y}_{n + 1}^{(k)}) \in \R^{N \times N}.
+\]
+This requires us to solve for $\mathbf{z}$ in the first equation, but this is a linear system, which we have some efficient methods for solving.
+
+There are several variants to Newton's method. This is the full Newton's method, where we re-compute the Jacobian in every iteration. It is also possible to just use the same Jacobian over and over again. There are some speed gains in solving the equation, but then we will need more iterations before we can get our $\mathbf{y}_{n + 1}$.
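The following Python sketch (my own, assuming a user-supplied Jacobian function \texttt{jac}) implements the full Newton iteration for a backward Euler step, re-forming and solving the linear system at every iteration:

```python
import numpy as np

def backward_euler_newton(f, jac, t_next, y_n, h, tol=1e-12, max_iter=20):
    """Solve y_{n+1} = y_n + h*f(t_{n+1}, y_{n+1}) by Newton's method.

    Each iteration solves the linear system
        (I - h J) z = y^(k) - (y_n + h f(t_{n+1}, y^(k))),
    then sets y^(k+1) = y^(k) - z, with J the Jacobian of f at y^(k).
    """
    y = np.array(y_n, dtype=float)
    I = np.eye(len(y))
    for _ in range(max_iter):
        residual = y - (y_n + h * f(t_next, y))
        z = np.linalg.solve(I - h * jac(t_next, y), residual)
        y = y - z
        if np.linalg.norm(z) < tol:
            return y
    raise RuntimeError("Newton iteration did not converge")

# A stiff linear test problem y' = Ay; for linear f, Newton is exact in one step.
A = np.array([[-100.0, 1.0],
              [0.0, -1.0]])
y1 = backward_euler_newton(lambda t, y: A @ y, lambda t, y: A,
                           0.1, np.array([1.0, 1.0]), 0.1)
```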
+
+\section{Numerical linear algebra}
+In the last part of the course, we study numerical algorithms for solving certain problems in linear algebra. Here we are not so much concerned about accuracy --- the solutions we obtained are theoretically exact. Instead, we will try to find some efficient methods of solving these problems. However, we will occasionally make some comments about accuracy problems that arise from the fact that we only work with finite precision.
+
We start off with the simplest problem in linear algebra: given $A \in \R^{n \times n}$, $\mathbf{b} \in \R^n$, we want to find an $\mathbf{x}$ such that
+\[
+ A\mathbf{x} = \mathbf{b}.
+\]
+We all know about the theory --- if $A$ is non-singular, then there is a unique solution for every possible $\mathbf{b}$. Otherwise, there are no solutions for some $\mathbf{b}$ and infinitely many solutions for other $\mathbf{b}$'s.
+
+Most of the time, we will only consider the case where $A$ is non-singular. However, we will sometimes comment on what happens when $A$ is singular.
+
+\subsection{Triangular matrices}
While matrices in general are big and scary, triangular matrices tend to be much nicer.
+\begin{defi}[Triangular matrix]
+ A matrix $A$ is \emph{upper triangular} if $A_{ij} = 0$ whenever $i > j$. It is \emph{lower triangular} if $A_{ij} = 0$ whenever $i < j$. We usually denote upper triangular matrices as $U$ and lower triangular matrices as $L$.
+\end{defi}
+We can visualize these matrices as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (1, 0) -- (0, 1) -- cycle;
+ \node [below] at (0.5, 0) {$L$};
+ \begin{scope}[shift={(2, 0)}]
+ \draw (1, 1) -- (1, 0) -- (0, 1) -- cycle;
+ \node [below] at (0.5, 0) {$U$};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Why are triangular matrices nice? First of all, it is easy to find the determinants of triangular matrices. We have
+\[
 \det(L) = \prod_{i = 1}^n L_{ii},\quad \det(U) = \prod_{i = 1}^n U_{ii}.
+\]
+In particular, a triangular matrix is non-singular if and only if it has no zero entries in its diagonal.
+
+How do we solve equations involving triangular matrices? Suppose we have a lower triangular matrix equation
+\[
+ \begin{pmatrix}
+ L_{11} & 0 & \cdots & 0\\
 L_{21} & L_{22} & \cdots & 0\\
 \vdots & \vdots & \ddots & \vdots\\
 L_{n1} & L_{n2} & \cdots & L_{nn}
+ \end{pmatrix}
+ \begin{pmatrix}
+ x_1\\
+ x_2\\
+ \vdots\\
+ x_n
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ b_1\\
+ b_2\\
+ \vdots\\
+ b_n
+ \end{pmatrix},
+\]
+or, more visually, we have
+\[
+ \begin{tikzpicture}[baseline=-0.65ex]
+ \draw (0, -0.5) -- (1, -0.5) -- (0, 0.5) -- cycle;
+ \end{tikzpicture}
+ \;\;
+ \begin{tikzpicture}[baseline=-0.65ex]
+ \draw (0, -0.5) rectangle (0.3, 0.5);
+ \end{tikzpicture}
+ \;
+ =
+ \;
+ \begin{tikzpicture}[baseline=-0.65ex]
+ \draw (0, -0.5) rectangle (0.3, 0.5);
+ \end{tikzpicture}
+\]
+There is nothing to solve in the first line. We can immediately write down the value of $x_1$. Substituting this into the second line, we can then solve for $x_2$. In general, we have
+\[
+ x_i = \frac{1}{L_{ii}} \left(b_i - \sum_{j = 1}^{i - 1} L_{ij} x_j\right).
+\]
+This is known as \emph{forward substitution}.
+
+For upper triangular matrices, we can do a similar thing, but we have to solve from the bottom instead. We have
+\[
 x_i = \frac{1}{U_{ii}} \left(b_i - \sum_{j = i + 1}^n U_{ij} x_j\right).
+\]
+This is known as \emph{backwards substitution}.
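Both substitutions are easy to code. A Python sketch (function names are my own) follows the two formulae directly:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve Lx = b for lower triangular L, top row first: O(n^2) operations."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def backward_substitution(U, b):
    """Solve Ux = b for upper triangular U, working from the bottom row up."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[2.0, 0.0],
              [1.0, 3.0]])
U = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x_fwd = forward_substitution(L, np.array([2.0, 7.0]))
x_bwd = backward_substitution(U, np.array([4.0, 6.0]))
```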
+
+This can be used to find the inverse of matrices as well. The solution to $L \mathbf{x}_j = \mathbf{e}_j$ is the $j$th column of $L^{-1}$. It is then easy to see from this that the inverse of a lower triangular matrix is also lower triangular.
+
Similarly, the columns of $U^{-1}$ are given by solving $U\mathbf{x}_j = \mathbf{e}_j$, and we see that $U^{-1}$ is also upper triangular.
+
+It is helpful to analyse how many operations it takes to compute this. If we look carefully, solving $L\mathbf{x} =\mathbf{b}$ or $U\mathbf{x} = \mathbf{b}$ this way requires $O(n^2)$ operations.
+
+\subsection{LU factorization}
In general, we don't always have triangular matrices. The idea is, for every matrix $A$, to find a lower triangular matrix $L$ and an upper triangular matrix $U$ such that $A = LU$.
+\[
+ \begin{tikzpicture}[baseline=-0.65ex]
+ \draw (0, -0.5) rectangle (1, 0.5);
+ \end{tikzpicture}
+ \;
+ =
+ \;
+ \begin{tikzpicture}[baseline=-0.65ex]
+ \draw (0, -0.5) -- (1, -0.5) -- (0, 0.5) -- cycle;
+ \end{tikzpicture}
+ \;\;
+ \begin{tikzpicture}[baseline=-0.65ex]
+ \draw (1, 0.5) -- (1, -0.5) -- (0, 0.5) -- cycle;
+ \end{tikzpicture}
+\]
+If we can do this, then we can solve $A\mathbf{x} = \mathbf{b}$ in two steps --- we first find a $\mathbf{y}$ such that $L\mathbf{y} = \mathbf{b}$. Then we find an $\mathbf{x}$ such that $U\mathbf{x} = \mathbf{y}$. Then
+\[
+ A\mathbf{x} = LU\mathbf{x} = L\mathbf{y} = \mathbf{b}.
+\]
+So what we want is such a factorization. To guarantee uniqueness of the factorization, we require that $L$ is \emph{unit}, i.e.\ the diagonals are all $1$. Otherwise, given any such factorization, we can divide $U$ by some (non-zero) constant and multiply $L$ by the same constant to get another factorization.
+
+\begin{defi}[LU factorization]
+ $A = LU$ is an \emph{LU factorization} if $U$ is upper triangular and $L$ is \emph{unit} lower triangular (i.e.\ the diagonals of $L$ are all $1$).
+\end{defi}
+Note that since $L$ has to be unit, it must be non-singular. However, we still allow $A$ and $U$ to be singular. Note that
+\[
+ \det (A) = \det (L) \det(U) = \det(U).
+\]
+So $A$ is singular if and only if $U$ is.
+
+Unfortunately, even if $A$ is non-singular, it may not have an LU factorization. We take a simple example of
+\[
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 1
+ \end{pmatrix}
+\]
+This is clearly non-singular, with determinant $-1$. However, we can manually check that there is no LU factorization of this.
+
+On the other hand, while we don't really like singular matrices, singular matrices can still have LU factorizations. For example,
+\[
+ \begin{pmatrix}
+ 0 & 0\\
+ 0 & 1
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0 & 0\\
+ 0 & 1
+ \end{pmatrix}
+\]
+is trivially an LU factorization of a singular matrix.
+
+Fortunately, if $A$ is non-singular and has an LU factorization, then this factorization is unique (this is not necessarily true if $A$ is singular. For example, we know
+\[
+ \begin{pmatrix}
+ 0 & 0\\
+ 0 & 1
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ 1 & 0\\
+ a & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0 & 0\\
+ 0 & 1
+ \end{pmatrix}
+\]
+for any real number $a$).
+
+To understand when LU factorizations exist, we try to construct one, and see when it could fail.
+
+We write $L$ in terms of its columns, and $U$ in terms of its rows:
+\[
+ L =
+ \begin{pmatrix}
+ \mathbf{l}_1 & \mathbf{l}_2 & \cdots & \mathbf{l}_n
+ \end{pmatrix},
+ \quad
+ U =
+ \begin{pmatrix}
+ \mathbf{u}_1^T\\
+ \mathbf{u}_2^T\\
+ \vdots\\
+ \mathbf{u}_n^T
+ \end{pmatrix}
+\]
+Clearly, these rows and columns cannot be arbitrary, since $L$ and $U$ are triangular. In particular, $\mathbf{l}_i, \mathbf{u}_i$ must be zero in the first $i - 1$ entries.
+
+Suppose this is an LU factorization of $A$. Then we can write
+\[
+ A = L\cdot U = \mathbf{l}_1 \mathbf{u}_1^T + \mathbf{l}_2 \mathbf{u}_2^T + \cdots + \mathbf{l}_n \mathbf{u}_n^T.
+\]
What do these matrices look like? For each $i$, we know $\mathbf{l}_i$ and $\mathbf{u}_i$ have the first $i - 1$ entries zero. So the first $i - 1$ rows and columns of $\mathbf{l}_i \mathbf{u}_i^T$ are zero. In particular, the first row and column of $A$ only have contributions from $\mathbf{l}_1\mathbf{u}_1^T$, the second row/column only has contributions from $\mathbf{l}_2 \mathbf{u}_2^T$ and $\mathbf{l}_1 \mathbf{u}_1^T$, etc.
+
+The plan is as follows:
+\begin{enumerate}
 \item Obtain $\mathbf{l}_1$ and $\mathbf{u}_1$ from the first row and column of $A$. Since the first entry of $\mathbf{l}_1$ is $1$, $\mathbf{u}_1^T$ is exactly the first row of $A$. We can then obtain $\mathbf{l}_1$ by taking the first column of $A$ and dividing by $U_{11} = A_{11}$.
+
 \item Obtain $\mathbf{l}_2$ and $\mathbf{u}_2$ from the second row and column of $A - \mathbf{l}_1 \mathbf{u}_1^T$ similarly.
+ \item $\cdots$
+ \item Obtain $\mathbf{l}_n$ and $\mathbf{u}_n^T$ from the $n$th row and column of $A - \sum_{i = 1}^{n - 1} \mathbf{l}_i \mathbf{u}_i^T$.
+\end{enumerate}
+We can turn this into an algorithm. We define the intermediate matrices, starting with
+\[
+ A^{(0)} = A.
+\]
+For $k = 1, \cdots, n$, we let
+\begin{align*}
+ U_{kj} &= A_{kj}^{(k - 1)} & j &=k, \cdots, n\\
 L_{ik} &= \frac{A_{ik}^{(k - 1)}}{A_{kk}^{(k - 1)}} & i &= k, \cdots, n\\
+ A_{ij}^{(k)} &= A_{ij}^{(k - 1)} - L_{ik} U_{kj} & i, j &\geq k
+\end{align*}
+When $k = n$, we end up with a zero matrix, and then $U$ and $L$ are completely filled.
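This algorithm translates directly into code. A Python sketch (my own, assuming no zero pivots are encountered) builds $L$ and $U$ by the rank-1 updates above:

```python
import numpy as np

def lu_factorize(A):
    """LU factorization without pivoting, by successive rank-1 updates.

    Breaks down with a division by zero if some pivot A^(k-1)_kk vanishes.
    Returns L (unit lower triangular) and U (upper triangular).
    """
    W = A.astype(float).copy()   # work array, playing the role of A^(k)
    n = W.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        U[k, k:] = W[k, k:]                        # u_k^T: k-th row of A^(k-1)
        L[k:, k] = W[k:, k] / W[k, k]              # l_k: k-th column, scaled
        W[k:, k:] -= np.outer(L[k:, k], U[k, k:])  # A^(k) = A^(k-1) - l_k u_k^T
    return L, U

A = np.array([[2.0, 1.0],
              [4.0, 5.0]])
L, U = lu_factorize(A)
```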
+
We can now see when this will break down. A sufficient condition for $A = LU$ to exist is that $A_{kk}^{(k - 1)} \not= 0$ for all $k$. Since $A_{kk}^{(k - 1)} = U_{kk}$, this sufficient condition ensures $U$, and hence $A$, is non-singular. Conversely, if $A$ is non-singular and an LU factorization exists, then this would always work, since we must have $A_{kk}^{(k - 1)} = U_{kk} \not= 0$. Moreover, the LU factorization must be given by this algorithm. So we get uniqueness.
+
+The problem with this sufficient condition is that most of these coefficients do not appear in the matrix $A$. They are constructed during the algorithm. We don't know easily what they are in terms of the coefficients of $A$. We will later come up with an equivalent condition on our original $A$ that is easier to check.
+
+Note that as long as this method does not break down, we need $O(n^3)$ operations to perform this factorization. Recall we only needed $O(n^2)$ operations to solve the equation after factorization. So the bulk of the work in solving $A\mathbf{x} = \mathbf{b}$ is in doing the LU factorization.
+
+As before, this allows us to find the inverse of $A$ if it is non-singular. In particular, solving $A\mathbf{x}_j = \mathbf{e}_j$ gives the $j$th column of $A^{-1}$. Note that we are solving the system for the same $A$ for each $j$. So we only have to perform the LU factorization once, and then solve $n$ different equations. So in total we need $O(n^3)$ operations.
+
+However, we still have the problem that factorization is not always possible. Requiring that we must factorize $A$ as $LU$ is too restrictive. The idea is to factor something closely related, but not exactly $A$. Instead, we want a factorization
+\[
+ PA = LU,
+\]
+where $P$ is a permutation matrix. Recall a permutation matrix acting on a column vector just permutes the elements of the vector, and $P$ acting on $A$ would just permute the rows of $A$. So we want to factor $A$ up to a permutation of rows.
+
+Our goal now is to extend the previous algorithm to allow permutations of rows, and then we shall show that we will be able to perform this factorization all the time.
+
+Suppose our breakdown occurs at $k = 1$, i.e.\ $A_{11}^{(0)} = A_{11} = 0$. We find a permutation matrix $P_1$ and let it act via $P_1A^{(0)}$. The idea is to look down the first column of $A$, and find a row starting with a non-zero element, say $p$. Then we use $P_1$ to interchange rows $1$ and $p$ such that $P_1A^{(0)}$ has a non-zero top-most entry. For simplicity, we assume we always need a $P_1$, and if $A_{11}^{(0)}$ is non-zero in the first place, we just take $P_1$ to be the identity.
+
+After that, we can carry on. We construct $\mathbf{l}_1$ and $\mathbf{u}_1$ from $P_1A^{(0)}$ as before, and set $A^{(1)} = P_1A^{(0)} - \mathbf{l}_1 \mathbf{u}_1^T$.
+
+But what happens if the first column of $A$ is completely zero? Then no interchange will make the $(1, 1)$ entry non-zero. However, in this case, we don't actually have to do anything. We can immediately find our $\mathbf{l}_1$ and $\mathbf{u}_1$, namely set $\mathbf{l}_1 = \mathbf{e}_1$ (or anything) and let $\mathbf{u}_1^T$ be the first row of $A^{(0)}$. Then this already works. Note however that this corresponds to $A$ (and hence $U$) being singular, and we are not too interested with these.
+
+The later steps are exactly analogous. Suppose we have $A_{kk}^{(k - 1)} = 0$. Again we find a $P_k$ such that $P_k A^{(k - 1)}$ has a non-zero $(k, k)$ entry. We then construct $\mathbf{l}_k$ and $\mathbf{u}_k$ from $P_k A^{(k - 1)}$ and set
+\[
 A^{(k)} = P_k A^{(k - 1)} - \mathbf{l}_k \mathbf{u}_k^T.
+\]
+Again, if the $k$th column of $A^{(k - 1)}$ is completely zero, we set $\mathbf{l}_k = \mathbf{e}_k$ and $\mathbf{u}_k^T$ to be the $k$th row of $A^{(k - 1)}$. But again this implies $A$ and $U$ will be singular.
+
+However, as we do this, the permutation matrices appear all over the place inside the algorithm. It is not immediately clear that we do get a factorization of the form $PA = LU$. Fortunately, keeping track of the interchanges, we \emph{do} have an LU factorization
+\[
+ PA = \tilde{L}U,
+\]
+where $U$ is what we got from the algorithm,
+\[
+ P = P_{n - 1} \cdots P_2 P_1,
+\]
+while $\tilde{L}$ is given by
+\[
+ \tilde{L} =
+ \begin{pmatrix}
+ \tilde{\mathbf{l}}_1 & \cdots & \tilde{\mathbf{l}}_n
+ \end{pmatrix},
 \quad \tilde{\mathbf{l}}_k = P_{n - 1} \cdots P_{k + 1} \mathbf{l}_k.
+\]
+Note that in particular, we have
+\[
+ \tilde{\mathbf{l}}_{n - 1} = \mathbf{l}_{n - 1},\quad \tilde{\mathbf{l}}_n = \mathbf{l}_n.
+\]
One problem we have not considered is the problem of inexact arithmetic. While these formulae are correct mathematically, when we actually implement things, we do them on computers with finite precision. As we go through the algorithm, errors will accumulate, and the error might be amplified to a significant amount by the time we reach the end. We want an algorithm that is insensitive to such errors. In order to work safely in inexact arithmetic, we will put the element of \emph{largest} modulus in the $(k, k)$th position, not just an arbitrary non-zero one, as this minimizes the error when dividing.
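A Python sketch of the resulting algorithm (my own, assuming $A$ is non-singular), picking the largest-modulus pivot at each step and returning $P$, $L$, $U$ with $PA = LU$:

```python
import numpy as np

def lu_partial_pivoting(A):
    """Factorize PA = LU with partial pivoting (assumes A is non-singular).

    At step k we swap into the pivot position the row whose k-th column
    entry has the largest modulus, then eliminate below it.
    """
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    perm = np.arange(n)                       # records the row interchanges
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # largest-modulus pivot
        if p != k:
            U[[k, p], k:] = U[[p, k], k:]     # swap the remaining rows
            L[[k, p], :k] = L[[p, k], :k]     # and the built part of L
            perm[[k, p]] = perm[[p, k]]
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    P = np.eye(n)[perm]                       # (PA)[i] = A[perm[i]]
    return P, L, np.triu(U)

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])    # the earlier example with no plain LU factorization
P, L, U = lu_partial_pivoting(A)
```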
+
+\subsection{\texorpdfstring{$A = LU$}{A = LU} for special \texorpdfstring{$A$}{A}}
+There is one thing we haven't figured out yet. Can we just look at the matrix $A$, and then determine whether it has an LU factorization (that does not involve permutations)? To answer this, we start with some definitions.
+
+\begin{defi}[Leading principal submatrix]
+ The \emph{leading principal submatrices} $A_k \in \R^{k \times k}$ for $k = 1, \cdots, n$ of $A \in \R^{n \times n}$ are
+ \[
+ (A_k)_{ij} = A_{ij},\quad i, j = 1, \cdots, k.
+ \]
+ In other words, we take the first $k$ rows and columns of $A$.
+\end{defi}
+
+\begin{thm}
 A sufficient condition for both the existence and uniqueness of the factorization $A = LU$ is that $\det(A_{k}) \not= 0$ for $k = 1, \cdots, n - 1$.
+\end{thm}
+Note that we don't need $A$ to be non-singular. This is equivalent to the restriction $A_{kk}^{(k - 1)} \not= 0$ for $k = 1, \cdots, n - 1$. Also, this is a \emph{sufficient} condition, not a necessary one.
+
+\begin{proof}
+ Straightforward induction. % include
+\end{proof}
+
+We extend this result a bit:
+\begin{thm}
 If $\det (A_k) \not= 0$ for all $k = 1, \cdots, n$, then $A \in \R^{n \times n}$ has a unique factorization of the form
+ \[
+ A = LD \hat{U},
+ \]
 where $D$ is a non-singular diagonal matrix, and both $L$ and $\hat{U}$ are unit triangular.
+\end{thm}
+
+\begin{proof}
+ From the previous theorem, $A = LU$ exists. Since $A$ is non-singular, $U$ is non-singular. So we can write this as
+ \[
+ U = D\hat{U},
+ \]
 where $D$ consists of the diagonal entries of $U$, and $\hat{U} = D^{-1}U$ is unit upper triangular.
+\end{proof}
+This is not hard, but is rather convenient. In particular, it allows us to factorize symmetric matrices in a nice way.
+
+\begin{thm}
 Let $A \in \R^{n \times n}$ be symmetric and non-singular with $\det(A_k) \not= 0$ for all $k = 1, \cdots, n$. Then there is a unique ``symmetric'' factorization
+ \[
+ A = LDL^T,
+ \]
+ with $L$ unit lower triangular and $D$ diagonal and non-singular.
+\end{thm}
+
+\begin{proof}
+ From the previous theorem, we can factorize $A$ uniquely as
+ \[
+ A = LD\hat{U}.
+ \]
+ We take the transpose to obtain
+ \[
+ A = A^T = \hat{U}^T D L^T.
+ \]
+ This is a factorization of the form ``unit lower''-``diagonal''-``unit upper''. By uniqueness, we must have $\hat{U} = L^T$. So done.
+\end{proof}
+
+We just looked at a special type of matrices, the symmetric matrices. We look at another type of matrices:
+\begin{defi}[Positive definite matrix]
 A matrix $A \in \R^{n\times n}$ is \emph{positive-definite} if
+ \[
+ \mathbf{x}^T A\mathbf{x} > 0
+ \]
 for all $\mathbf{x} \in \R^n$ with $\mathbf{x} \not= \mathbf{0}$.
+\end{defi}
+
+\begin{thm}
+ Let $A \in \R^{n\times n}$ be a positive-definite matrix. Then $\det(A_k) \not= 0$ for all $k = 1, \cdots, n$.
+\end{thm}
+
+\begin{proof}
+ First consider $k = n$. To show $A$ is non-singular, it suffices to show that $A\mathbf{x} = \mathbf{0}$ implies $\mathbf{x} = \mathbf{0}$. But we can multiply the equation by $\mathbf{x}^T$ to obtain $\mathbf{x}^T A\mathbf{x} = 0$. By positive-definiteness, we must have $\mathbf{x} = \mathbf{0}$. So done.
+
 Now suppose $A_k \mathbf{y} = \mathbf{0}$ for $k < n$ and $\mathbf{y} \in \R^k$. Then $\mathbf{y}^T A_k \mathbf{y} = 0$. We obtain a new $\mathbf{x} \in \R^n$ by taking $\mathbf{y}$ and padding it with zeros. Then $\mathbf{x}^T A\mathbf{x} = 0$. By positive-definiteness, we know $\mathbf{x} = \mathbf{0}$. Then in particular $\mathbf{y} = \mathbf{0}$.
+\end{proof}
+
+We are going to use this to prove the most important result in this section. We are going to consider matrices that are symmetric \emph{and} positive-definite.
+
+\begin{thm}
+ A symmetric matrix $A \in \R^{n \times n}$ is \emph{positive-definite} iff we can factor it as
+ \[
+ A = LDL^T,
+ \]
+ where $L$ is unit lower triangular, $D$ is diagonal and $D_{kk} > 0$.
+\end{thm}
+
+\begin{proof}
+ First suppose such a factorization exists, then
+ \[
+ \mathbf{x}^T A\mathbf{x} = \mathbf{x}^T L DL^T \mathbf{x} = (L^T\mathbf{x})^T D(L^T\mathbf{x}).
+ \]
+ We let $\mathbf{y} = L^T\mathbf{x}$. Note that $\mathbf{y} = \mathbf{0}$ if and only if $\mathbf{x} = \mathbf{0}$, since $L$ is invertible. So
+ \[
+ \mathbf{x}^TA\mathbf{x} = \mathbf{y}^T D\mathbf{y} = \sum_{k = 1}^n y_k^2 D_{kk} > 0
+ \]
+ if $\mathbf{y} \not= 0$.
+
+ Now if $A$ is positive definite, it has an LU factorization, and since $A$ is symmetric, we can write it as
+ \[
+ A = LDL^T,
+ \]
 where $L$ is unit lower triangular and $D$ is diagonal. Now we have to show $D_{kk} > 0$. We define $\mathbf{y}_k$ such that $L^T \mathbf{y}_k = \mathbf{e}_k$, which exists since $L$ is invertible. Then clearly $\mathbf{y}_k \not= \mathbf{0}$. Then we have
+ \[
+ D_{kk} = \mathbf{e}_k^T D\mathbf{e}_k = \mathbf{y}_k^T LDL^T \mathbf{y}_k = \mathbf{y}_k^T A\mathbf{y}_k > 0.
+ \]
+ So done.
+\end{proof}
+This is a practical check for symmetric $A$ being positive definite. We can perform this LU factorization, and then check whether the diagonal has positive entries.
+
+\begin{defi}[Cholesky factorization]
+ The \emph{Cholesky factorization} of a symmetric positive-definite matrix $A$ is a factorization of the form
+ \[
+ A = LDL^T,
+ \]
+ with $L$ unit lower triangular and $D$ a positive-definite diagonal matrix.
+\end{defi}
+There is another way of doing this. We let $D^{1/2}$ be the ``square root'' of $D$, by taking the positive square root of the diagonal entries of $D$. Then we have
+\[
+ A = LDL^T = LD^{1/2}D^{1/2}L^T = (LD^{1/2})(LD^{1/2})^T = GG^T,
+\]
+where $G$ is lower triangular with $G_{kk} > 0$. This is another way of presenting this result.
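A Python sketch (my own, assuming $A$ really is symmetric positive-definite) computing this $G$ directly, column by column:

```python
import numpy as np

def cholesky(A):
    """Factor a symmetric positive-definite A as G @ G.T, with G lower
    triangular and G_kk > 0.

    A negative argument to the square root, or a zero pivot, signals that
    A is not positive-definite.
    """
    n = A.shape[0]
    G = np.zeros((n, n))
    for k in range(n):
        # Diagonal entry: whatever of A[k,k] is not yet explained by
        # the previously built columns.
        G[k, k] = np.sqrt(A[k, k] - G[k, :k] @ G[k, :k])
        # Entries below the diagonal in column k.
        G[k + 1:, k] = (A[k + 1:, k] - G[k + 1:, :k] @ G[k, :k]) / G[k, k]
    return G

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
G = cholesky(A)
```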
+
+Finally, we look at the LU factorization of band matrices.
+\begin{defi}[Band matrix]
+ A \emph{band matrix} of \emph{band width} $r$ is a matrix $A$ such that $A_{ij} \not= 0$ implies $|i - j| \leq r$.
+\end{defi}
+For example, a band matrix of band width $0$ is a diagonal matrix; a band matrix of band width $1$ is a tridiagonal matrix.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (2, 2);
+ \draw (0, 1.6) -- (1.6, 0);
+ \draw (0.4, 2) -- (2, 0.4);
+ \end{tikzpicture}
+\end{center}
+The result is
+\begin{prop}
+ If a band matrix $A$ has band width $r$ and an LU factorization $A = LU$, then $L$ and $U$ are both band matrices of width $r$.
+\end{prop}
+
+\begin{proof}
+ Straightforward verification.
+\end{proof}
+
+\section{Linear least squares}
+Finally, we consider a slightly different kind of problem, whose solution involves a different kind of factorization.
+
+The question we are interested in is how we can ``solve''
+\[
+ A\mathbf{x} = \mathbf{b},
+\]
+where $A \in \R^{m \times n}$, with $m > n$, and $\mathbf{b} \in \R^m$ and $\mathbf{x} \in \R^n$.
+
+Of course, this is an over-determined system. In general, this has no solution. So we want to find the ``best'' solution to this system of equations. More precisely, we want to find an $\mathbf{x}^* \in \R^n$ that minimizes $\|A\mathbf{x} - \mathbf{b}\|^2$. Note that we use the Euclidean norm. This is the \emph{least squares problem}.
+
+Why would one be interested in this? Often in, say, statistics, we have some linear model that says the outcome $B$ is related to the input variables $A_1, \cdots, A_n$ by
+\[
+ B = A_1 X_1 + A_2 X_2 + \cdots + A_n X_n + \varepsilon,
+\]
where the $X_i$ are some unknown parameters, and $\varepsilon$ is some random fluctuation. The idea is to do lots of experiments, say $m$ of them, collect the data in $A \in \R^{m \times n}$ and $\mathbf{b} \in \R^m$, and then find the $X_i$'s that predict these results the most accurately, i.e.\ find the $\mathbf{x}$ such that $A\mathbf{x}$ is as close to $\mathbf{b}$ as possible.
+
+So how can we find this $\mathbf{x}^*$? First of all, we find a necessary and sufficient condition for $\mathbf{x}^*$ to be a solution to this.
+\begin{thm}
+ A vector $\mathbf{x}^* \in \R^n$ minimizes $\|A\mathbf{x} - \mathbf{b}\|^2$ if and only if
+ \[
+ A^T(A\mathbf{x}^* - \mathbf{b}) = 0.
+ \]
+\end{thm}
+
+\begin{proof}
+ A solution, by definition, minimizes
+ \begin{align*}
+ f(\mathbf{x}) &= \bra A\mathbf{x} - \mathbf{b}, A\mathbf{x} - \mathbf{b}\ket\\
 &= \mathbf{x}^T A^TA\mathbf{x} - 2\mathbf{x}^T A^T \mathbf{b} + \mathbf{b}^T \mathbf{b}.
+ \end{align*}
+ Then as a function of $\mathbf{x}$, the partial derivatives of this must vanish. We have
+ \[
+ \nabla f(\mathbf{x}) = 2A^T(A\mathbf{x} - \mathbf{b}).
+ \]
+ So a necessary condition is
+ \[
 A^T (A\mathbf{x} - \mathbf{b}) = \mathbf{0}.
+ \]
+ Now suppose our $\mathbf{x}^*$ satisfies $A^T (A\mathbf{x}^* - \mathbf{b}) = 0$. Then for all $\mathbf{x} \in \R^n$, we write $\mathbf{x} = \mathbf{x}^* + \mathbf{y}$, and then we have
+ \begin{align*}
 \|A\mathbf{x} - \mathbf{b}\|^2 &= \|A(\mathbf{x}^* + \mathbf{y}) - \mathbf{b}\|^2\\
 &= \|A \mathbf{x}^* - \mathbf{b}\|^2 + 2\mathbf{y}^T A^T (A\mathbf{x}^* - \mathbf{b}) + \|A\mathbf{y}\|^2\\
 &= \|A \mathbf{x}^* - \mathbf{b}\|^2 + \|A\mathbf{y}\|^2\\
 &\geq \|A \mathbf{x}^* - \mathbf{b}\|^2.
+ \end{align*}
+ So $\mathbf{x}^*$ must minimize the Euclidean norm.
+\end{proof}
+
+\begin{cor}
+ If $A \in \R^{m \times n}$ is a full-rank matrix, then there is a unique solution to the least squares problem.
+\end{cor}
+
+\begin{proof}
+ We know all minimizers are solutions to
+ \[
+ (A^T A)\mathbf{x} = A^T \mathbf{b}.
+ \]
+ The matrix $A$ being full rank means $\mathbf{y} \not= \mathbf{0} \in \R^n$ implies $A\mathbf{y} \not= \mathbf{0} \in \R^m$. Hence $A^T A \in \R^{n \times n}$ is positive definite (and in particular non-singular), since
+ \[
+ \mathbf{x}^TA^T A\mathbf{x} = (A \mathbf{x})^T (A \mathbf{x}) = \|A \mathbf{x}\|^2 > 0
+ \]
+ for $\mathbf{x} \not= \mathbf{0}$. So we can invert $A^T A$ and find a unique solution $\mathbf{x}$.
+\end{proof}
+
+Now to find the minimizing $\mathbf{x}^*$, we need to solve the \emph{normal equations}
+\[
+ A^T A\mathbf{x} = A^T \mathbf{b}.
+\]
+If $A$ has full rank, then $A^T A$ is non-singular, and there is a unique solution. If not, then the general theory of linear equations tells us there are either infinitely many solutions or no solutions. But for this particular form of equations, it turns out we can show the system is always consistent. So there will always be a solution. However, solving these equations directly is not the recommended algorithm for finding $\mathbf{x}^*$, for accuracy reasons. Instead, the recommended methods are based on orthogonal matrices.
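As an illustration of the theory (not an algorithm from the notes, and with all helper names ours), the following pure-Python sketch forms and solves the normal equations for a tiny $3 \times 2$ problem, then checks the optimality condition $A^T(A\mathbf{x}^* - \mathbf{b}) = 0$ from the theorem above.

```python
# Illustrative sketch: form and solve the normal equations A^T A x = A^T b
# for a tiny 3x2 problem, then verify A^T (A x - b) = 0.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

# Fit b ~ x0 + x1 * t at t = 0, 1, 2.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 2.0]

At = transpose(A)
AtA = matmul(At, A)   # [[3, 3], [3, 5]]
Atb = matvec(At, b)   # [5, 6]

# Solve the 2x2 normal equations by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(Atb[0] * AtA[1][1] - Atb[1] * AtA[0][1]) / det,
     (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det]

# The residual A x - b must be orthogonal to the columns of A.
residual = [r - bi for r, bi in zip(matvec(A, x), b)]
grad = matvec(At, residual)   # should be (numerically) zero
```

This is only to make the optimality condition concrete; as the text says, forming $A^T A$ is not the numerically recommended route.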
+
+\begin{defi}[Orthogonal matrix]
+ A square matrix $Q \in \R^{n \times n}$ is orthogonal if $Q^T Q = I$.
+\end{defi}
+
+A key property of the orthogonal matrices is that we have
+\[
+ \|Q\mathbf{x}\| = \|\mathbf{x}\|
+\]
+for all $\mathbf{x} \in \R^n$. This is helpful since what we are trying to do in the least-squares problem involves minimizing the Euclidean norm of something.
+
+The idea is that for any orthogonal matrix $Q$, minimizing $\|A\mathbf{x} - \mathbf{b}\|$ is the same as minimizing $\|QA \mathbf{x} - Q\mathbf{b}\|$. So which $Q$ should we pick? Recall that a simple kind of matrix is a triangular matrix. So we might want to pick $Q$ such that $QA$ is, say, upper triangular.
+
+\begin{defi}[QR factorization]
+ A \emph{QR factorization} of an $m \times n$ matrix $A$ is a factorization of the form
+ \[
+ A = QR,
+ \]
+ where $Q \in \R^{m \times m}$ is an orthogonal matrix, and $R \in \R^{m \times n}$ is an ``upper triangular'' matrix, where ``upper triangular'' means
+ \[
+ R =
+ \begin{pmatrix}
+ R_{11} & R_{12} & \cdots & R_{1n}\\
+ 0 & R_{22} & \cdots & R_{2n}\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & R_{nn}\\
+ 0 & 0 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & 0
+ \end{pmatrix}.
+ \]
+\end{defi}
+Note that every $A \in \R^{m \times n}$ has a QR factorization, as we will soon show, but this is not unique (e.g.\ we can multiply both $Q$ and $R$ by $-1$).
+
+Once we have the QR factorization, we can multiply $\|A\mathbf{x} - \mathbf{b}\|$ by $Q^T = Q^{-1}$, and get an equivalent problem of minimizing $\|R \mathbf{x} - Q^T \mathbf{b}\|$. We will not go into details, but it should be clear that this is not too hard to solve.
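To see why the triangular system really is easy, here is a hedged sketch of back substitution (the function name is ours): with the skinny factorization below, $\tilde{R}$ is square and, for full-rank $A$, non-singular, so the least-squares solution comes from solving $\tilde{R}\mathbf{x} = \tilde{Q}^T\mathbf{b}$.

```python
def back_substitute(R, c):
    """Solve the non-singular upper-triangular system R x = c by back
    substitution, working from the last row upwards."""
    n = len(R)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / R[i][i]
    return x
```

Each row costs one division and at most $n - 1$ multiplications, so the whole solve is $O(n^2)$.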
+
+There is an alternative ``skinny'' QR factorization, with $A = \tilde{Q} \tilde{R}$, where we remove redundant rows and columns such that $\tilde{Q} \in \R^{m \times n}$ and $\tilde{R} \in \R^{n \times n}$. % insert some pictures.
+This is good enough to solve the least squares problem.
+
+We will see that if $A$ has full rank, then the skinny QR is unique up to a sign, and becomes genuinely unique if we require $\tilde{R}_{kk} > 0$ for $k = 1, \cdots, n$.
+
+We shall demonstrate three standard algorithms for QR factorization:
+\begin{enumerate}
+ \item Gram-Schmidt factorization
+ \item Givens rotations
+ \item Householder reflections
+\end{enumerate}
+The latter two are based on some geometric ideas, while the first is based on the familiar Gram-Schmidt process.
+
+\subsubsection*{Gram-Schmidt factorization}
+This targets the skinny version. So we stop writing the tildes above, and just write $Q \in \R^{m \times n}$ and $R \in \R^{n \times n}$.
+
+As we all know, the Gram-Schmidt process orthogonalizes vectors. What we are going to orthogonalize are the columns of $A$. We write
+\[
+ A =
+ \begin{pmatrix}
+ \mathbf{a}_1 & \cdots & \mathbf{a}_n
+ \end{pmatrix},\quad
+ Q =
+ \begin{pmatrix}
+ \mathbf{q}_1 & \cdots & \mathbf{q}_n
+ \end{pmatrix}.
+\]
+By definition of the QR factorization, we need
+\[
+ \mathbf{a}_j = \sum_{i = 1}^j R_{ij} \mathbf{q}_i.
+\]
+This is just done in the usual Gram-Schmidt way.
+\begin{enumerate}
+ \item To construct column $1$, if $\mathbf{a}_1 \not= \mathbf{0}$, then we set
+ \[
+ \mathbf{q}_1 = \frac{\mathbf{a}_1}{\|\mathbf{a}_1\|},\quad R_{11} = \|\mathbf{a}_1\|.
+ \]
+ Note that the only non-unique possibility here is the sign --- we can let $R_{11} = - \|\mathbf{a}_1\|$ and $\mathbf{q}_1 = -\frac{\mathbf{a}_1}{\|\mathbf{a}_1\|}$ instead. But if we require $R_{11} > 0$, then this is fixed.
+
+ In the degenerate case $\mathbf{a}_1 = \mathbf{0}$, we can just set $R_{11} = 0$, and then pick any $\mathbf{q}_1 \in \R^m$ with $\|\mathbf{q}_1\| = 1$.
+ \item For columns $1 < k \leq n$, for $i = 1, \cdots, k - 1$, we set
+ \[
+ R_{ik} = \bra \mathbf{q}_i, \mathbf{a}_k\ket,
+ \]
+ and set
+ \[
+ \mathbf{d}_k = \mathbf{a}_k - \sum_{i = 1}^{k - 1} \mathbf{q}_i \bra \mathbf{q}_i, \mathbf{a}_k\ket.
+ \]
+ If $\mathbf{d}_k \not= \mathbf{0}$, then we set
+ \[
+ \mathbf{q}_k = \frac{\mathbf{d}_k}{\|\mathbf{d}_k\|},
+ \]
+ and
+ \[
+ R_{kk} = \|\mathbf{d}_k\|.
+ \]
+ In the case where $\mathbf{d}_k = \mathbf{0}$, we again set $R_{kk} = 0$, and pick $\mathbf{q}_k$ to be any unit vector orthogonal to $\mathbf{q}_1, \cdots, \mathbf{q}_{k - 1}$.
+\end{enumerate}
+
+In practice, a slightly different algorithm (the modified Gram-Schmidt process) is used, which is (much) superior with inexact arithmetic. The modified Gram-Schmidt process is in fact the same algorithm, but with the operations performed in a different order so as to reduce the accumulation of errors.
+
+However, this is often not an ideal algorithm for large matrices, since there are many divisions and normalizations involved in computing the $\mathbf{q}_i$, and the accumulation of errors will cause the resulting matrix $Q$ to lose orthogonality.
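The steps above can be sketched in a few lines. This is an illustrative pure-Python version of the modified Gram-Schmidt factorization (names ours), assuming $A$ has full rank so that no $\mathbf{d}_k$ vanishes and the degenerate branch is never needed.

```python
import math

def gram_schmidt_qr(A):
    """Skinny QR of an m x n matrix A (given as a list of rows) by the
    modified Gram-Schmidt process.  Assumes full rank, so no d_k vanishes.
    Returns Q as a list of n orthonormal columns and R as an n x n
    upper-triangular matrix."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]  # column access
    R = [[0.0] * n for _ in range(n)]
    Q = []
    for k in range(n):
        d = cols[k]
        for i in range(k):
            # Modified GS: project against the *current* d, not a_k,
            # subtracting each component as soon as q_i is available.
            R[i][k] = sum(qi * di for qi, di in zip(Q[i], d))
            d = [di - R[i][k] * qi for qi, di in zip(Q[i], d)]
        R[k][k] = math.sqrt(sum(di * di for di in d))
        Q.append([di / R[k][k] for di in d])
    return Q, R
```

In exact arithmetic this produces the same $Q$ and $R$ as the classical process; the reordering only matters once rounding errors enter.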
+
+\subsubsection*{Givens rotations}
+This works with the full QR factorization.
+
+Recall that in $\R^2$, a clockwise rotation of $\theta \in [-\pi, \pi]$ is performed by
+\[
+ \begin{pmatrix}
+ \cos \theta & \sin \theta\\
+ -\sin \theta & \cos \theta
+ \end{pmatrix}
+ \begin{pmatrix}
+ \alpha\\ \beta
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ \alpha \cos \theta + \beta \sin \theta\\
+ -\alpha \sin \theta + \beta \cos \theta
+ \end{pmatrix}.
+\]
+By choosing $\theta$ such that
+\[
+ \cos \theta = \frac{\alpha}{\sqrt{\alpha^2 + \beta^2}},\quad \sin \theta = \frac{\beta}{\sqrt{\alpha^2 + \beta^2}},
+\]
+this then becomes
+\[
+ \begin{pmatrix}
+ \sqrt{\alpha^2 + \beta^2}\\
+ 0
+ \end{pmatrix}.
+\]
+Of course, by choosing a slightly different $\theta$, we can make the result zero in the first component and $\sqrt{\alpha^2 + \beta^2}$ in the second.
+
+\begin{defi}[Givens rotation]
+ In $\R^m$, where $m > 2$, we define the \emph{Givens rotation} on $3$ parameters $1 \leq p < q \leq m, \theta \in [-\pi, \pi]$ by
+ \setcounter{MaxMatrixCols}{11}
+ \[
+ \Omega_\theta^{[p, q]} =
+ \begin{pmatrix}
+ 1 &\\
+ & \ddots\\
+ & & 1\\
+ & & & \cos \theta & & & & \sin \theta\\
+ & & & & 1\\
+ & & & & & \ddots\\
+ & & & & & & 1\\
+ & & & -\sin \theta & & & & \cos \theta\\
+ & & & & & & & & 1\\
+ & & & & & & & & & \ddots\\
+ & & & & & & & & & & 1
+ \end{pmatrix} \in \R^{m \times m},
+ \]
+ where the $\sin$ and $\cos$ appear at the $p, q$th rows and columns.
+\end{defi}
+Note that for $\mathbf{y} \in \R^m$, applying $\Omega_\theta^{[p, q]}$ alters only the $p$th and $q$th components of $\mathbf{y}$. More generally, for $B \in \R^{m \times n}$, the product $\Omega_\theta^{[p, q]} B$ alters only the $p$th and $q$th rows of $B$.
+
+Moreover, just like the $\R^2$ case, given a particular $\mathbf{z} \in \R^m$, we can choose $\theta$ such that the $q$th component $(\Omega^{[p, q]}_\theta \mathbf{z})_q = 0$.
+
+Hence, $A \in \R^{m \times n}$ can be transformed into an ``upper triangular'' form by applying $s = mn - \frac{n(n + 1)}{2}$ Givens rotations, since we need to introduce $s$ many zeros. Then
+\[
+ Q_s \cdots Q_1 A = R.
+\]
+How exactly do we do this? Instead of writing down a general algorithm, we illustrate it with a matrix $A \in \R^{4 \times 3}$. The resultant $R$ would look like
+\[
+ R =
+ \begin{pmatrix}
+ R_{11} & R_{12} & R_{13}\\
+ 0 & R_{22} & R_{23}\\
+ 0 & 0 & R_{33}\\
+ 0 & 0 & 0
+ \end{pmatrix}.
+\]
+We will apply the Givens rotations in the following order:
+\[
+ \Omega_{\theta_6}^{[3, 4]}\Omega_{\theta_5}^{[2, 4]} \Omega_{\theta_4}^{[2, 3]} \Omega_{\theta_3}^{[1, 4]} \Omega_{\theta_2}^{[1, 3]} \Omega_{\theta_1}^{[1, 2]} A = R.
+\]
+The matrix $A$ transforms as follows:
+\begin{center}
+ \begin{tikzpicture}[scale=0.35]
+ \begin{scope}
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$\times$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$\times$}; \node at (0, -0.5) {$\times$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$\times$}; \node at (0, -1.5) {$\times$}; \node at (1, -1.5) {$\times$};
+ \draw [->] (2, 0) -- (5, 0) node [pos=0.5, above] {$\Omega_{\theta_1}^{[1, 2]}$};
+ \end{scope}
+
+ \begin{scope}[shift={(7, 0)}]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$0$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$\times$}; \node at (0, -0.5) {$\times$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$\times$}; \node at (0, -1.5) {$\times$}; \node at (1, -1.5) {$\times$};
+ \draw [->] (2, 0) -- (5, 0) node [pos=0.5, above] {$\Omega_{\theta_2}^{[1, 3]}$};
+ \end{scope}
+ \begin{scope}[shift={(14, 0)}]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$0$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$0$}; \node at (0, -0.5) {$\times$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$\times$}; \node at (0, -1.5) {$\times$}; \node at (1, -1.5) {$\times$};
+ \draw [->] (2, 0) -- (5, 0) node [pos=0.5, above] {$\Omega_{\theta_3}^{[1, 4]}$};
+ \end{scope}
+
+ \begin{scope}[shift={(21, 0)}]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$0$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$0$}; \node at (0, -0.5) {$\times$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$0$}; \node at (0, -1.5) {$\times$}; \node at (1, -1.5) {$\times$};
+ \end{scope}
+
+ \begin{scope}[shift={(3.5, -6)}]
+ \draw [->] (-5, 0) -- (-2, 0) node [pos=0.5, above] {$\Omega_{\theta_4}^{[2, 3]}$};
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$0$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$0$}; \node at (0, -0.5) {$0$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$0$}; \node at (0, -1.5) {$\times$}; \node at (1, -1.5) {$\times$};
+ \draw [->] (2, 0) -- (5, 0) node [pos=0.5, above] {$\Omega_{\theta_5}^{[2, 4]}$};
+ \end{scope}
+
+ \begin{scope}[shift={(10.5, -6)}]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$0$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$0$}; \node at (0, -0.5) {$0$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$0$}; \node at (0, -1.5) {$0$}; \node at (1, -1.5) {$\times$};
+ \draw [->] (2, 0) -- (5, 0) node [pos=0.5, above] {$\Omega_{\theta_6}^{[3, 4]}$};
+ \end{scope}
+
+ \begin{scope}[shift={(17.5, -6)}]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$0$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$0$}; \node at (0, -0.5) {$0$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$0$}; \node at (0, -1.5) {$0$}; \node at (1, -1.5) {$0$};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Note that when applying, say, $\Omega_{\theta_4}^{[2, 3]}$, the zeros of the first column get preserved, since $\Omega_{\theta_4}^{[2, 3]}$ only mixes together things in row $2$ and $3$, both of which are zero in the first column. So we are safe.
+
+Note that this gives us something of the form
+\[
+ Q_s \cdots Q_1 A = R.
+\]
+We can obtain a QR factorization by inverting to get
+\[
+ A = Q_1^T \cdots Q_s^T R.
+\]
+However, we don't really need to do this if we just want to solve the least squares problem, since for that we need to multiply by $Q^T$, not $Q$, and $Q^T$ is exactly $Q_s \cdots Q_1$.
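As an illustration (not the notes' own pseudocode), the following sketch reduces $A$ to upper-triangular form in place by Givens rotations, zeroing the subdiagonal entries column by column against the pivot row, and recording the rotations so that the same sequence can later be applied to $\mathbf{b}$ to form $Q^T \mathbf{b}$.

```python
import math

def givens_qr_inplace(A):
    """Reduce A (m x n, list of rows) to upper-triangular form by Givens
    rotations.  Returns the list of (p, q, cos, sin) rotations applied,
    in order, so they can be replayed on a right-hand side b."""
    m, n = len(A), len(A[0])
    rotations = []
    for k in range(n):                  # column to clear
        for q in range(k + 1, m):       # entry (q, k) to zero
            alpha, beta = A[k][k], A[q][k]
            if beta == 0.0:
                continue                # already zero, nothing to do
            r = math.hypot(alpha, beta)
            c, s = alpha / r, beta / r
            # Rotate rows k and q; only these two rows change, and
            # columns < k are zero in both rows, so zeros are preserved.
            for j in range(k, n):
                akj, aqj = A[k][j], A[q][j]
                A[k][j] = c * akj + s * aqj
                A[q][j] = -s * akj + c * aqj
            rotations.append((k, q, c, s))
    return rotations
```

Each rotation touches only two rows, exactly as in the diagrams above, and the diagonal entry of the pivot row grows to the norm of the remaining column segment.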
+
+\subsubsection*{Householder reflections}
+\begin{defi}[Householder reflection]
+ For $\mathbf{u} \not= \mathbf{0} \in \R^m$, we define the \emph{Householder reflection} by
+ \[
+ H_{\mathbf{u}} = I - 2\frac{\mathbf{u}\mathbf{u}^T}{ \mathbf{u}^T \mathbf{u}} \in \R^{m \times m}.
+ \]
+\end{defi}
+Note that this is symmetric, and we can see $H_{\mathbf{u}}^2 = I$. So this is indeed orthogonal.
+
+To show this is a reflection, we resolve $\mathbf{x}$ into components parallel and perpendicular to $\mathbf{u}$, writing $\mathbf{x} = \alpha \mathbf{u} + \mathbf{w} \in \R^m$, where
+\[
+ \alpha = \frac{\mathbf{u}^T \mathbf{x}}{\mathbf{u}^T \mathbf{u}},\quad \mathbf{u}^T \mathbf{w} = 0.
+\]
+Then we have
+\[
+ H_{\mathbf{u}} \mathbf{x} = -\alpha \mathbf{u} + \mathbf{w}.
+\]
+So this is a reflection in the $(m - 1)$-dimensional hyperplane $\mathbf{u}^T \mathbf{y} = 0$.
+
+What is the cost of computing $H_{\mathbf{u}}\mathbf{z}$? This is evaluated as
+\[
+ H_{\mathbf{u}}\mathbf{z} = \mathbf{z} - 2\frac{\mathbf{u}^T \mathbf{z}}{\mathbf{u}^T \mathbf{u}}\mathbf{u}.
+\]
+This only requires $O(m)$ operations, which is nice.
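This cheap application is worth writing down explicitly (the function name is ours; the formula is exactly the one above): we never form the $m \times m$ matrix $H_{\mathbf{u}}$.

```python
def apply_householder(u, z):
    """Compute H_u z = z - 2 (u^T z / u^T u) u in O(m) operations,
    without ever forming the m x m matrix H_u explicitly."""
    coeff = 2.0 * sum(ui * zi for ui, zi in zip(u, z)) / sum(ui * ui for ui in u)
    return [zi - coeff * ui for ui, zi in zip(u, z)]
```

Since $H_{\mathbf{u}}$ is a reflection, applying it twice returns the original vector, which gives a quick sanity check.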
+
+\begin{prop}
+ A matrix $A \in \R^{m \times n}$ can be transformed into upper-triangular form by applying $n$ Householder reflections, namely
+ \[
+ H_n \cdots H_1 A = R,
+ \]
+ where each $H_k$ introduces zeros into column $k$ and leaves the previously created zeros alone.
+\end{prop}
+In general, this requires fewer multiplications than the corresponding sequence of Givens rotations.
+
+To show this, we first prove some lemmas.
+
+\begin{lemma}
+ Let $\mathbf{a}, \mathbf{b} \in \R^m$, with $\mathbf{a} \not= \mathbf{b}$, but $\|\mathbf{a}\| = \|\mathbf{b}\|$. Then if we pick $\mathbf{u} = \mathbf{a} - \mathbf{b}$, then
+ \[
+ H_\mathbf{u} \mathbf{a} = \mathbf{b}.
+ \]
+\end{lemma}
+This is obvious if we draw some pictures in low dimensions.
+\begin{proof}
+ We just do it:
+ \[
+ H_\mathbf{u} \mathbf{a} = \mathbf{a} - \frac{2(\|\mathbf{a}\|^2 - \mathbf{a}^T \mathbf{b})}{\|\mathbf{a}\|^2 - 2\mathbf{a}^T \mathbf{b} + \|\mathbf{b}\|^2} (\mathbf{a} - \mathbf{b}) = \mathbf{a} - (\mathbf{a} - \mathbf{b}) = \mathbf{b},
+ \]
+ where we used the fact that $\|\mathbf{a}\| = \|\mathbf{b}\|$.
+\end{proof}
+We will keep applying the lemma. Since we want to get an upper triangular matrix, it makes sense to pick $\mathbf{b}$ to have many zeroes, and the best option would be to pick $\mathbf{b}$ to be a unit vector.
+
+So we begin our algorithm: we want to clear the first column of $A$. We let $\mathbf{a}$ be the first column of $A$, and assume $\mathbf{a} \in \R^m$ is not already in the correct form, i.e.\ $\mathbf{a}$ is not a multiple of $\mathbf{e}_1$. Then we define
+\[
+ \mathbf{u} = \mathbf{a} \mp \|\mathbf{a}\| \mathbf{e}_1,
+\]
+where either choice of the sign is pure-mathematically valid. However, we will later see that there is one choice that is better when we have to deal with inexact arithmetic. Then we end up with
+\[
+ H_1 \mathbf{a} = H_{\mathbf{u}} \mathbf{a} = \pm \|\mathbf{a}\| \mathbf{e}_1.
+\]
+Now we have
+\[
+ H_1 A =
+ \begin{tikzpicture}[scale=0.35, baseline=-0.65ex]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$0$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$0$}; \node at (0, -0.5) {$\times$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$0$}; \node at (0, -1.5) {$\times$}; \node at (1, -1.5) {$\times$};
+ \end{tikzpicture}.
+\]
+To do the next step, we need to be careful, since we don't want to destroy the previously created zeroes.
+
+\begin{lemma}
+ If the first $k - 1$ components of $\mathbf{u}$ are zero, then
+ \begin{enumerate}
+ \item For every $\mathbf{x} \in \R^m$, $H_{\mathbf{u}} \mathbf{x}$ does not alter the first $k - 1$ components of $\mathbf{x}$.
+ \item If the last $(m - k + 1)$ components of $\mathbf{y} \in \R^m$ are zero, then $H_{\mathbf{u}}\mathbf{y} = \mathbf{y}$.
+ \end{enumerate}
+\end{lemma}
+These are all obvious from definition. All these say is that reflections don't affect components perpendicular to $\mathbf{u}$, and in particular fixes all vectors perpendicular to $\mathbf{u}$.
+
+\begin{lemma}
+ Let $\mathbf{a}, \mathbf{b} \in \R^m$, with
+ \[
+ \begin{pmatrix}
+ a_k\\
+ \vdots\\
+ a_m
+ \end{pmatrix} \not=
+ \begin{pmatrix}
+ b_k\\
+ \vdots\\
+ b_m
+ \end{pmatrix},
+ \]
+ but
+ \[
+ \sum_{j = k}^m a_j^2 = \sum_{j = k}^m b_j^2.
+ \]
+ Suppose we pick
+ \[
+ \mathbf{u} = (0, 0, \cdots, 0, a_k - b_k, \cdots, a_m - b_m)^T.
+ \]
+ Then we have
+ \[
+ H_\mathbf{u} \mathbf{a} = (a_1, \cdots, a_{k - 1}, b_k, \cdots, b_m)^T.
+ \]
+\end{lemma}
+This is a generalization of what we've had before for $k = 1$. Again, the proof is just straightforward verification.
+
+Now we can mess with the second column of $H_1 A$. We let $\mathbf{a}$ be the second column of $H_1 A$, and assume $a_3, \cdots, a_m$ are not all zero, i.e.\ $(0, a_2, \cdots, a_m)^T$ is not a multiple of $\mathbf{e}_2$. We choose
+\[
+ \mathbf{u} = (0, a_2 \mp \gamma, a_3, \cdots, a_m)^T,
+\]
+where
+\[
+ \gamma = \sqrt{\sum_{j = 2}^m a_j^2}.
+\]
+Then
+\[
+ H_\mathbf{u} \mathbf{a} = (a_1, \pm \gamma, 0, \cdots, 0)^T.
+\]
+Also, by the previous lemma, this does not affect anything in the first column and the first row. Now we have
+\[
+ H_2 H_1 A =
+ \begin{tikzpicture}[scale=0.35, baseline=-0.65ex]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \node at (-1, 1.5) {$\times$}; \node at (0, 1.5) {$\times$}; \node at (1, 1.5) {$\times$};
+ \node at (-1, 0.5) {$0$}; \node at (0, 0.5) {$\times$}; \node at (1, 0.5) {$\times$};
+ \node at (-1, -0.5) {$0$}; \node at (0, -0.5) {$0$}; \node at (1, -0.5) {$\times$};
+ \node at (-1, -1.5) {$0$}; \node at (0, -1.5) {$0$}; \node at (1, -1.5) {$\times$};
+ \end{tikzpicture}
+\]
+Suppose we have reached $H_{k - 1} \cdots H_1 A$, where the first $k - 1$ rows are of the correct form, i.e.
+\[
+ H_{k - 1} \cdots H_1 A =
+ \begin{tikzpicture}[baseline=-0.65ex, scale=0.35]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \draw (-1.5, 2) -- (0, 0.5);
+ \draw (0, -2) -- (0, 0.5) -- (1.5, 0.5);
+ \node at (-0.75, -0.75) {$0$};
+ \draw [semithick, decorate, decoration={brace, mirror}, yshift=-2pt] (-1.5, -2) -- (0, -2) node [pos=0.5, below] {$k - 1$};
+ \end{tikzpicture}
+\]
+We consider the $k$th column of $H_{k - 1} \cdots H_1 A$, say $\mathbf{a}$. We assume $(0, \cdots, 0, a_k, \cdots, a_m)^T$ is not a multiple of $\mathbf{e}_k$. Choosing
+\[
+ \mathbf{u} = (0, \cdots, 0, a_k \mp \gamma, a_{k + 1}, \cdots, a_m)^T,\quad \gamma = \sqrt{\sum_{j = k}^m a_j^2},
+\]
+we find that
+\[
+ H_\mathbf{u} \mathbf{a} = (a_1, \cdots, a_{k - 1}, \pm \gamma, 0, \cdots, 0)^T.
+\]
+Now we have
+\[
+ H_k \cdots H_1 A =
+ \begin{tikzpicture}[baseline=-0.65ex, scale=0.35]
+ \draw (-1.5, -2) rectangle (1.5, 2);
+ \draw (-1.5, 2) -- (0.2, 0.3);
+ \draw (0.2, -2) -- (0.2, 0.3) -- (1.5, 0.3);
+ \node at (-0.75, -0.75) {$0$};
+ \draw [semithick, decorate, decoration={brace, mirror}, yshift=-2pt] (-1.5, -2) -- (0.2, -2) node [pos=0.5, below] {$k$};
+ \end{tikzpicture},
+\]
+and $H_k$ did \emph{not} alter the first $k - 1$ rows and columns of $H_{k - 1} \cdots H_1 A$.
+
+There is one thing we have to decide on --- which sign to pick. As mentioned, these do not matter in pure mathematics, but with inexact arithmetic, we should pick the sign in $a_k \mp \gamma$ such that $a_k \mp \gamma$ has maximum magnitude, i.e.\ $a_k \mp \gamma = a_k + \sgn(a_k) \gamma$. It takes some analysis to justify why this is the right choice, but it is not too surprising that there is some choice that is better.
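Putting the pieces together, here is a hedged pure-Python sketch of the Householder reduction with the stable sign choice (names ours). Note that with $u_k = a_k + \sgn(a_k)\gamma$, the diagonal entries of $R$ come out as $\mp\gamma$ rather than $+\gamma$, which is harmless.

```python
import math

def householder_qr_inplace(A):
    """Reduce A (m x n, list of rows, m >= n) to upper-triangular form by
    n Householder reflections, using the sign choice a_k + sgn(a_k)*gamma.
    Returns the reflection vectors u_k (None where a column was already
    clear), so the reflections can be replayed on a right-hand side."""
    m, n = len(A), len(A[0])
    us = []
    for k in range(n):
        gamma = math.sqrt(sum(A[i][k] ** 2 for i in range(k, m)))
        if gamma == 0.0:
            us.append(None)     # lower part of column already zero
            continue
        sign = 1.0 if A[k][k] >= 0.0 else -1.0
        u = [0.0] * m           # first k components zero, as in the lemma
        u[k] = A[k][k] + sign * gamma
        for i in range(k + 1, m):
            u[i] = A[i][k]
        uu = sum(ui * ui for ui in u)
        # Apply H_u to every column; rows 0..k-1 are untouched since the
        # first k components of u vanish.
        for j in range(n):
            coeff = 2.0 * sum(u[i] * A[i][j] for i in range(k, m)) / uu
            for i in range(k, m):
                A[i][j] -= coeff * u[i]
        us.append(u)
    return us
```

Each reflection costs $O(m)$ per column, in line with the cost estimate given earlier.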
+
+So how do Householder and Givens compare? We notice that the Givens method generates zeros one entry at a time, while Householder does it column by column. So in general, the Householder method is superior.
+
+However, for certain matrices with special structure, we might need the extra flexibility of introducing zeros one at a time. For example, if $A$ is a band matrix, then it might be beneficial to remove the few non-zero entries one at a time.
+\end{document}
diff --git a/books/cam/IB_L/statistics.tex b/books/cam/IB_L/statistics.tex
new file mode 100644
index 0000000000000000000000000000000000000000..82c9098affd76f657c2a10b393b60c7a25f799f8
--- /dev/null
+++ b/books/cam/IB_L/statistics.tex
@@ -0,0 +1,2500 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Lent}
+\def\nyear {2015}
+\def\nlecturer {D.\ Spiegelhalter}
+\def\ncourse {Statistics}
+\def\nofficial {http://www.statslab.cam.ac.uk/Dept/People/djsteaching/S1B-15-all-lectures.pdf}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Estimation}\\
+Review of distribution and density functions, parametric families. Examples: binomial, Poisson, gamma. Sufficiency, minimal sufficiency, the Rao-Blackwell theorem. Maximum likelihood estimation. Confidence intervals. Use of prior distributions and Bayesian inference.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Hypothesis testing}\\
+Simple examples of hypothesis testing, null and alternative hypothesis, critical region, size, power, type I and type II errors, Neyman-Pearson lemma. Significance level of outcome. Uniformly most powerful tests. Likelihood ratio, and use of generalised likelihood ratio to construct test statistics for composite hypotheses. Examples, including $t$-tests and $F$-tests. Relationship with confidence intervals. Goodness-of-fit tests and contingency tables.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{Linear models}\\
+Derivation and joint distribution of maximum likelihood estimators, least squares, Gauss-Markov theorem. Testing hypotheses, geometric interpretation. Examples, including simple linear regression and one-way analysis of variance. Use of software.\hspace*{\fill} [7]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Statistics is a set of principles and procedures for gaining and processing quantitative evidence in order to help us make judgements and decisions. In this course, we focus on formal \emph{statistical inference}. In the process, we assume that we have some data generated from some unknown probability model, and we aim to use the data to learn about certain properties of the underlying probability model.
+
+In particular, we perform \emph{parametric inference}. We assume that we have a random variable $X$ that follows a particular known family of distribution (e.g.\ Poisson distribution). However, we do not know the parameters of the distribution. We then attempt to estimate the parameter from the data given.
+
+For example, we might know that $X\sim \Poisson (\mu)$ for some $\mu$, and we want to figure out what $\mu$ is.
+
+Usually we repeat the experiment (or observation) many times. Hence we will have $X_1, X_2, \cdots, X_n$ being iid with the same distribution as $X$. We call the set $\mathbf{X} = (X_1, X_2, \cdots, X_n)$ a \emph{simple random sample}. This is the data we have.
+
+We will use the observed $\mathbf{X} = \mathbf{x}$ to make inferences about the parameter $\theta$, such as
+\begin{itemize}
+ \item giving an estimate $\hat{\theta}(\mathbf{x})$ of the true value of $\theta$;
+ \item giving an interval estimate $(\hat{\theta}_1(\mathbf{x}), \hat{\theta}_2(\mathbf{x}))$ for $\theta$;
+ \item testing a hypothesis about $\theta$, e.g.\ whether $\theta = 0$.
+\end{itemize}
+\section{Estimation}
+\subsection{Estimators}
+The goal of estimation is as follows: we are given iid $X_1, \cdots, X_n$, and we know that their probability density/mass function is $f_X(x; \theta)$ for some unknown $\theta$. We know $f_X$ but not $\theta$. For example, we might know that they follow a Poisson distribution, but we do not know what the mean is. The objective is to estimate the value of $\theta$.
+
+\begin{defi}[Statistic]
+ A \emph{statistic} is an estimate of $\theta$. It is a function $T$ of the data. If we write the data as $\mathbf{x} = (x_1, \cdots, x_n)$, then our estimate is written as $\hat{\theta} = T(\mathbf{x})$. $T(\mathbf{X})$ is an \emph{estimator} of $\theta$.
+
+ The distribution of $T = T(\mathbf{X})$ is the \emph{sampling distribution} of the statistic.
+\end{defi}
+Note that we adopt the convention where capital $\mathbf{X}$ denotes a random variable and $\mathbf{x}$ is an observed value. So $T(\mathbf{X})$ is a random variable and $T(\mathbf{x})$ is a particular value we obtain after experiments.
+
+\begin{eg}
+ Let $X_1, \cdots, X_n$ be iid $N(\mu, 1)$. A possible estimator for $\mu$ is
+ \[
+ T(\mathbf{X}) = \frac{1}{n}\sum X_i.
+ \]
+ Then for any particular observed sample $\mathbf{x}$, our estimate is
+ \[
+ T(\mathbf{x}) = \frac{1}{n}\sum x_i.
+ \]
+ What is the sampling distribution of $T$? Recall from IA Probability that in general, if $X_i \sim N(\mu_i, \sigma_i^2)$, then $\sum X_i \sim N(\sum \mu_i, \sum \sigma_i^2)$, which is something we can prove by considering moment-generating functions.
+
+ So we have $T(\mathbf{X})\sim N(\mu, 1/n)$. Note that by the Central Limit Theorem, even if $X_i$ were not normal, we still have approximately $T(\mathbf{X})\sim N(\mu, 1/n)$ for large values of $n$, but here we get exactly the normal distribution even for small values of $n$.
+\end{eg}
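A quick simulation (illustrative, not part of the notes) confirms the sampling distribution in the example: the empirical mean and variance of $T(\mathbf{X})$ over many repetitions should be close to $\mu$ and $1/n$.

```python
import random
import statistics

# Simulate the sampling distribution of T = sample mean of n iid N(mu, 1).
random.seed(0)
mu, n, reps = 2.0, 25, 20000

ts = []
for _ in range(reps):
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    ts.append(sum(sample) / n)

mean_t = statistics.fmean(ts)     # should be close to mu = 2
var_t = statistics.variance(ts)   # should be close to 1/n = 0.04
```

With $n = 25$ the theoretical sampling distribution is exactly $N(2, 0.04)$, so both empirical summaries should match closely.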
+The estimator $\frac{1}{n}\sum X_i$ we had above is a rather sensible estimator. Of course, we can also have silly estimators such as $T(\mathbf{X}) = X_1$, or even $T(\mathbf{X}) = 0.32$ always.
+
+One way to decide if an estimator is silly is to look at its \emph{bias}.
+\begin{defi}[Bias]
+ Let $\hat{\theta} = T(\mathbf{X})$ be an estimator of $\theta$. The \emph{bias} of $\hat{\theta}$ is the difference between its expected value and true value.
+ \[
+ \bias(\hat{\theta}) = \E_\theta(\hat{\theta}) - \theta.
+ \]
+ Note that the subscript $_\theta$ does not represent the random variable, but the thing we want to estimate. This is inconsistent with the use for, say, the probability mass function.
+
+ An estimator is \emph{unbiased} if it has no bias, i.e.\ $\E_\theta(\hat{\theta}) = \theta$.
+\end{defi}
+To find out $\E_\theta (T)$, we can either find the distribution of $T$ and find its expected value, or evaluate $T$ as a function of $\mathbf{X}$ directly, and find its expected value.
+
+\begin{eg}
+ In the above example, $\E_\mu(T) =\mu$. So $T$ is unbiased for $\mu$.
+\end{eg}
+
+\subsection{Mean squared error}
+Given an estimator, we want to know how good the estimator is. We have just come up with the concept of the \emph{bias} above. However, this is generally not a good measure of how good the estimator is.
+
+For example, if we do 1000 random trials $X_1, \cdots, X_{1000}$, we can pick our estimator as $T(\mathbf{X}) = X_1$. This is an unbiased estimator, but is really bad because we have just wasted the data from the other 999 trials. On the other hand, $T'(\mathbf{X}) = 0.01 + \frac{1}{1000}\sum X_i$ is biased (with a bias of $0.01$), but is in general much more trustworthy than $T$. In fact, at the end of the section, we will construct cases where the only possible unbiased estimator is a completely silly estimator to use.
+
+Instead, a commonly used measure is the \emph{mean squared error}.
+\begin{defi}[Mean squared error]
+ The \emph{mean squared error} of an estimator $\hat{\theta}$ is $\E_\theta[(\hat{\theta} - \theta)^2]$.
+
+ Sometimes, we use the \emph{root mean squared error}, that is the square root of the above.
+\end{defi}
+We can express the mean squared error in terms of the variance and bias:
+\begin{align*}
+ \E_\theta[(\hat{\theta} - \theta)^2] &= \E_\theta[(\hat{\theta} - \E_\theta(\hat{\theta}) + \E_\theta(\hat{\theta}) - \theta)^2]\\
+ &= \E_\theta[(\hat{\theta} - \E_\theta(\hat{\theta}))^2] + [\E_\theta(\hat{\theta}) - \theta]^2 + 2\E_{\theta}[\E_\theta(\hat{\theta}) - \theta]\underbrace{\E_\theta[\hat{\theta} - \E_\theta(\hat{\theta})]}_{0}\\
+ &= \var(\hat{\theta}) + \bias^2(\hat{\theta}).
+\end{align*}
+If we are aiming for a low mean squared error, sometimes it could be preferable to have a biased estimator with a lower variance. This is known as the ``bias-variance trade-off''.
+
+For example, suppose $X\sim \binomial(n, \theta)$, where $n$ is given and $\theta$ is to be determined. The standard estimator is $T_U = X/n$, which is unbiased. $T_U$ has variance
+\[
+ \var_\theta(T_U) = \frac{\var_\theta(X)}{n^2} = \frac{\theta(1 - \theta)}{n}.
+\]
+Hence the mean squared error of the usual estimator is given by
+\[
+ \mse (T_U) = \var_\theta(T_U) + \bias^2(T_U) = \theta(1 - \theta)/n.
+\]
+Consider an alternative estimator
+\[
+ T_B = \frac{X + 1}{n + 2} = w\frac{X}{n} + (1 - w)\frac{1}{2},
+\]
+where $w = n/(n + 2)$. This can be interpreted as a weighted average (by the sample size) of the sample mean and $1/2$. We have
+\[
+ \E_\theta(T_B) - \theta = \frac{n\theta + 1}{n + 2} - \theta = (1 - w)\left(\frac{1}{2} - \theta\right),
+\]
+so $T_B$ is biased. Its variance is given by
+\[
+ \var_\theta(T_B) = \frac{\var_\theta(X)}{(n + 2)^2} = w^2\frac{\theta(1 - \theta)}{n}.
+\]
+Hence the mean squared error is
+\[
+ \mse (T_B) = \var_\theta(T_B) + \bias^2(T_B) = w^2\frac{\theta(1 - \theta)}{n} + (1 - w)^2\left(\frac{1}{2} - \theta\right)^2.
+\]
+We can plot the mean squared error of each estimator for possible values of $\theta$. Here we plot the case where $n = 10$.
+\begin{center}
+ \begin{tikzpicture}[xscale=6,yscale=100]
+ \draw [mred, domain=0:1] plot (\x, {\x * (1 - \x) / 10});
+ \node [mred, above] at (0.5, 0.025) {unbiased estimator};
+ \draw [mblue, domain=0:1] plot (\x, {25 * \x * (1 - \x) / 360 + (1 - 5/6)*(1 - 5/6)*(0.5 - \x)*(0.5 - \x)}) node [right] {biased estimator};
+ \draw [->] (0, 0) -- (1.1, 0) node [right] {$\theta$};
+ \draw [->] (0, 0) -- (0, 0.035) node [above] {mse};
+ \foreach \x in {0, 0.2, 0.4, 0.6, 0.8, 1.0} {
+ \draw (\x, 0) -- (\x, -0.002) node [below] {\x};
+ }
+ \foreach \y in {0, 0.01, 0.02, 0.03} {
+ \draw (0, \y) -- (-.03, \y) node [left] {\y};
+ }
+ \end{tikzpicture}
+\end{center}
+This biased estimator has smaller MSE unless $\theta$ is close to $0$ or $1$.
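The two curves in the plot can be reproduced directly from the formulas above; this short sketch (function names ours) evaluates both mean squared errors at $n = 10$.

```python
# Evaluate the two MSE formulas derived above for n = 10.
n = 10
w = n / (n + 2)

def mse_unbiased(theta):
    # MSE of T_U = X/n: theta(1 - theta)/n.
    return theta * (1 - theta) / n

def mse_biased(theta):
    # MSE of T_B = (X + 1)/(n + 2): variance plus squared bias.
    return w ** 2 * theta * (1 - theta) / n + (1 - w) ** 2 * (0.5 - theta) ** 2
```

Evaluating at a few points shows the biased estimator winning in the middle of the range and losing near the endpoints, exactly as in the plot.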
+
+We see that sometimes biased estimators could give better mean squared errors. In some cases, not only could unbiased estimators be worse --- they could be completely nonsense.
+
+Suppose $X\sim \Poisson(\lambda)$, and for whatever reason, we want to estimate $\theta = [\P(X = 0)]^2 = e^{-2\lambda}$. Then any unbiased estimator $T(X)$ must satisfy $\E_\theta(T(X)) = \theta$, or equivalently,
+\[
+ \E_\lambda(T(X)) = e^{-\lambda}\sum_{x = 0}^\infty T(x) \frac{\lambda^x}{x!} = e^{-2\lambda}.
+\]
+The only function $T$ that can satisfy this equation is $T(X) = (-1)^X$.
+
+Thus the unbiased estimator would estimate $e^{-2\lambda}$ to be $1$ if $X$ is even, and $-1$ if $X$ is odd. This is clearly nonsense.
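We can verify the unbiasedness claim numerically by truncating the Poisson sum; this is only a sanity check of the identity $\sum_x (-1)^x e^{-\lambda}\lambda^x/x! = e^{-2\lambda}$, with the parameter value chosen arbitrarily.

```python
import math

# Truncated-series check that T(X) = (-1)^X is unbiased for e^{-2 lambda}
# when X ~ Poisson(lambda).  Sixty terms are far more than enough for
# lambda around 1, since lambda^x / x! decays super-exponentially.
lam = 1.3
expectation = sum((-1) ** x * math.exp(-lam) * lam ** x / math.factorial(x)
                  for x in range(60))
```

The truncated sum agrees with $e^{-2\lambda}$ to machine precision, even though every individual estimate $\pm 1$ is absurd.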
+
+\subsection{Sufficiency}
+Often, we do experiments just to find out the value of $\theta$. For example, we might want to estimate what proportion of the population supports some political candidate. We are seldom interested in the data points themselves, and just want to learn about the big picture. This leads us to the concept of a \emph{sufficient statistic}. This is a statistic $T(\mathbf{X})$ that contains all information we have about $\theta$ in the sample.
+
+\begin{eg}
 Let $X_1, \cdots, X_n$ be iid $\Bernoulli(\theta)$, so that $\P(X_i = 1) = 1 - \P(X_i = 0) = \theta$ for some $0 < \theta < 1$. So
+ \[
+ f_{\mathbf{X}} (\mathbf{x}\mid \theta) = \prod_{i = 1}^n \theta^{x_i}(1 - \theta)^{1 - x_i} = \theta^{\sum x_i}(1 - \theta)^{n - \sum x_i}.
+ \]
 This depends on the data only through $T(\mathbf{x}) = \sum x_i$, the total number of ones.
+
+ Suppose we are now given that $T(\mathbf{X}) = t$. Then what is the distribution of $\mathbf{X}$? We have
+ \[
+ f_{\mathbf{X}\mid T = t}(\mathbf{x}) = \frac{\P_\theta(\mathbf{X} = \mathbf{x}, T = t)}{\P_\theta (T = t)} = \frac{\P_\theta(\mathbf{X} = \mathbf{x})}{\P_\theta(T = t)}.
+ \]
 where the last equality holds because if $\mathbf{X} = \mathbf{x}$ and $T(\mathbf{x}) = t$, then automatically $T = t$. This is equal to
+ \[
+ \frac{\theta^{\sum x_i}(1 - \theta)^{n - \sum x_i}}{\binom{n}{t}\theta^t (1 - \theta)^{n - t}} = \binom{n}{t}^{-1}.
+ \]
+ So the conditional distribution of $\mathbf{X}$ given $T = t$ does not depend on $\theta$. So if we know $T$, then additional knowledge of $\mathbf{x}$ does not give more information about $\theta$.
+\end{eg}
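This calculation can be verified by brute force for a small sample. The sketch below (Python; $n = 4$, $t = 2$ and the values of $\theta$ are arbitrary choices) enumerates all binary vectors with a given sum and checks that the conditional distribution is uniform whatever $\theta$ is:

```python
import itertools, math

def cond_dist(n, t, theta):
    # P_theta(X = x | sum x_i = t) over binary vectors x of length n
    probs = {x: theta ** t * (1 - theta) ** (n - t)
             for x in itertools.product([0, 1], repeat=n) if sum(x) == t}
    total = sum(probs.values())
    return {x: p / total for x, p in probs.items()}

n, t = 4, 2
for theta in (0.2, 0.5, 0.9):
    d = cond_dist(n, t, theta)
    # each of the C(n, t) arrangements is equally likely, regardless of theta
    assert all(abs(p - 1 / math.comb(n, t)) < 1e-12 for p in d.values())
```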
+
+\begin{defi}[Sufficient statistic]
+ A statistic $T$ is \emph{sufficient} for $\theta$ if the conditional distribution of $\mathbf{X}$ given $T$ does not depend on $\theta$.
+\end{defi}
+
+There is a convenient theorem that allows us to find sufficient statistics.
+\begin{thm}[The factorization criterion]
+ $T$ is sufficient for $\theta$ if and only if
+ \[
+ f_{\mathbf{X}}(\mathbf{x} \mid \theta) = g(T(\mathbf{x}), \theta)h(\mathbf{x})
+ \]
+ for some functions $g$ and $h$.
+\end{thm}
+
+\begin{proof}
+ We first prove the discrete case.
+
+ Suppose $f_\mathbf{X}(\mathbf{x}\mid \theta) = g(T(\mathbf{x}), \theta)h(\mathbf{x})$. If $T(\mathbf{x}) = t$, then
+ \begin{align*}
+ f_{\mathbf{X}\mid T = t}(\mathbf{x}) &= \frac{\P_\theta(\mathbf{X} = \mathbf{x}, T(\mathbf{X}) = t)}{\P_\theta (T = t)}\\
+ &= \frac{g(T(\mathbf{x}), \theta)h(\mathbf{x})}{\sum_{\{\mathbf{y}:T(\mathbf{y}) = t\}}g(T(\mathbf{y}), \theta)h(\mathbf{y})}\\
+ &=\frac{g(t, \theta)h(\mathbf{x})}{g(t, \theta)\sum h(\mathbf{y})}\\
+ &= \frac{h(\mathbf{x})}{\sum h(\mathbf{y})}
+ \end{align*}
+ which doesn't depend on $\theta$. So $T$ is sufficient.
+
+ The continuous case is similar. If $f_\mathbf{X}(\mathbf{x} \mid \theta) = g(T(\mathbf{x}), \theta)h(\mathbf{x})$, and $T(\mathbf{x}) = t$, then
+ \begin{align*}
+ f_{\mathbf{X}\mid T = t}(\mathbf{x}) &= \frac{g(T(\mathbf{x}), \theta) h(\mathbf{x})}{\int_{\mathbf{y}: T(\mathbf{y}) = t} g(T(\mathbf{y}), \theta) h(\mathbf{y})\;\d \mathbf{y}}\\
+ &= \frac{g(t, \theta) h(\mathbf{x})}{g(t, \theta) \int h(\mathbf{y}) \;\d \mathbf{y}}\\
+ &= \frac{h(\mathbf{x})}{\int h(\mathbf{y})\;\d \mathbf{y}},
+ \end{align*}
+ which does not depend on $\theta$.
+
+ Now suppose $T$ is sufficient so that the conditional distribution of $\mathbf{X}\mid T = t$ does not depend on $\theta$. Then
+ \[
+ \P_\theta(\mathbf{X} = \mathbf{x}) = \P_\theta(\mathbf{X} = \mathbf{x}, T=T(\mathbf{x})) = \P_\theta(\mathbf{X} = \mathbf{x}\mid T = T(\mathbf{x}))\P_\theta (T = T(\mathbf{x})).
+ \]
+ The first factor does not depend on $\theta$ by assumption; call it $h(\mathbf{x})$. Let the second factor be $g(t, \theta)$, and so we have the required factorisation.
+\end{proof}
+
+\begin{eg}
+ Continuing the above example,
+ \[
+ f_\mathbf{X}(\mathbf{x}\mid \theta) = \theta^{\sum x_i}(1 - \theta)^{n - \sum x_i}.
+ \]
+ Take $g(t, \theta) = \theta^t(1 - \theta)^{n - t}$ and $h(\mathbf{x}) = 1$ to see that $T(\mathbf{X}) = \sum X_i$ is sufficient for $\theta$.
+\end{eg}
+
+\begin{eg}
+ Let $X_1, \cdots, X_n$ be iid $U[0, \theta]$. Write $1_{[A]}$ for the indicator function of an arbitrary set $A$. We have
+ \[
+ f_\mathbf{X}(\mathbf{x}\mid \theta) = \prod_{i = 1}^n \frac{1}{\theta}1_{[0 \leq x_i\leq \theta]} = \frac{1}{\theta^n}1_{[\max_i x_i \leq \theta]}1_{[\min_i x_i\geq 0]}.
+ \]
+ If we let $T = \max_i x_i$, then we have
+ \[
+ f_\mathbf{X}(\mathbf{x}\mid \theta) = \underbrace{\frac{1}{\theta^n}1_{[t \leq \theta]}}_{g(t, \theta)}\quad \underbrace{1_{[\min_i x_i \geq 0]}}_{h(\mathbf{x})}.
+ \]
+ So $T = \max_i x_i$ is sufficient.
+\end{eg}
+
+Note that sufficient statistics are not unique. If $T$ is sufficient for $\theta$, then so is any 1-1 function of $T$. $\mathbf{X}$ is always sufficient for $\theta$ as well, but it is not of much use. How can we decide if a sufficient statistic is ``good''?
+
+Given any statistic $T$, we can partition the sample space $\mathcal{X}^n$ into sets $\{\mathbf{x}\in \mathcal{X}^n: T(\mathbf{x}) = t\}$. Then after an experiment, instead of recording the actual value of $\mathbf{x}$, we can simply record which partition $\mathbf{x}$ falls into. If there are fewer partitions than possible values of $\mathbf{x}$, then we effectively have less information to store.
+
+If $T$ is sufficient, then this data reduction does not lose any information about $\theta$. The ``best'' sufficient statistic would be one in which we achieve the maximum possible reduction. This is known as the \emph{minimal sufficient statistic}. The formal definition we take is the following:
+
+\begin{defi}[Minimal sufficiency]
 A sufficient statistic $T(\mathbf{X})$ is \emph{minimal} if it is a function of every other sufficient statistic, i.e.\ if $T'(\mathbf{X})$ is also sufficient, then $T'(\mathbf{x}) = T'(\mathbf{y}) \Rightarrow T(\mathbf{x}) = T(\mathbf{y})$.
+\end{defi}
+
+Again, we have a handy theorem for finding minimal sufficient statistics:
+\begin{thm}
+ Suppose $T = T(\mathbf{X})$ is a statistic that satisfies
+ \begin{center}
+ $\displaystyle\frac{f_\mathbf{X}(\mathbf{x}; \theta)}{f_\mathbf{X}(\mathbf{y}; \theta)}$ does not depend on $\theta$ if and only if $T(\mathbf{x}) = T(\mathbf{y})$.
+ \end{center}
+ Then $T$ is minimal sufficient for $\theta$.
+\end{thm}
+\begin{proof}
+ First we have to show sufficiency. We will use the factorization criterion to do so.
+
 Firstly, for each possible $t$, pick a favourite $\mathbf{x}_t$ such that $T(\mathbf{x}_t) = t$.
+
 Now let $\mathbf{x}\in \mathcal{X}^n$ and let $T(\mathbf{x}) = t$. So $T(\mathbf{x}) = T(\mathbf{x}_t)$. By the hypothesis, $\frac{f_\mathbf{X}(\mathbf{x}; \theta)}{f_\mathbf{X}(\mathbf{x}_t; \theta)}$ does not depend on $\theta$. Let this be $h(\mathbf{x})$. Let $g(t, \theta) = f_\mathbf{X}(\mathbf{x}_t; \theta)$. Then
+ \[
+ f_\mathbf{X}(\mathbf{x}; \theta) = f_\mathbf{X}(\mathbf{x}_t; \theta) \frac{f_\mathbf{X}(\mathbf{x}; \theta)}{f_\mathbf{X}(\mathbf{x}_t; \theta)} = g(t, \theta) h(\mathbf{x}).
+ \]
+ So $T$ is sufficient for $\theta$.
+
+ To show that this is minimal, suppose that $S(\mathbf{X})$ is also sufficient. By the factorization criterion, there exist functions $g_S$ and $h_S$ such that
+ \[
+ f_\mathbf{X}(\mathbf{x}; \theta) = g_S(S(\mathbf{x}), \theta) h_S(\mathbf{x}).
+ \]
+ Now suppose that $S(\mathbf{x}) = S(\mathbf{y})$. Then
+ \[
+ \frac{f_\mathbf{X}(\mathbf{x}; \theta)}{f_\mathbf{X}(\mathbf{y}; \theta)} = \frac{g_S(S(\mathbf{x}), \theta)h_S(\mathbf{x})}{g_S(S(\mathbf{y}), \theta)h_S(\mathbf{y})} = \frac{h_S(\mathbf{x})}{h_S(\mathbf{y})}.
+ \]
+ This means that the ratio $\frac{f_\mathbf{X}(\mathbf{x}; \theta)}{f_\mathbf{X}(\mathbf{y}; \theta)}$ does not depend on $\theta$. By the hypothesis, this implies that $T(\mathbf{x}) = T(\mathbf{y})$. So we know that $S(\mathbf{x}) = S(\mathbf{y})$ implies $T(\mathbf{x}) = T(\mathbf{y})$. So $T$ is a function of $S$. So $T$ is minimal sufficient.
+\end{proof}
+
+\begin{eg}
+ Suppose $X_1, \cdots, X_n$ are iid $N(\mu, \sigma^2)$. Then
+ \begin{align*}
+ \frac{f_\mathbf{X}(\mathbf{x}\mid \mu, \sigma^2)}{f_\mathbf{X}(\mathbf{y}\mid \mu, \sigma^2)} &= \frac{(2\pi\sigma^2)^{-n/2}\exp\left\{-\frac{1}{2\sigma^2}\sum_i(x_i - \mu)^2\right\}}{(2\pi\sigma^2)^{-n/2}\exp\left\{-\frac{1}{2\sigma^2}\sum_i(y_i - \mu)^2\right\}}\\
+ &=\exp\left\{-\frac{1}{2\sigma^2}\left(\sum_i x_i^2 - \sum_i y_i^2\right) + \frac{\mu}{\sigma^2}\left(\sum_{i}x_i - \sum_i y_i\right)\right\}.
+ \end{align*}
+ This is a constant function of $(\mu, \sigma^2)$ iff $\sum_i x_i^2 = \sum_i y_i^2$ and $\sum_i x_i = \sum_i y_i$. So $T(\mathbf{X}) = (\sum_i X_i^2, \sum_i X_i)$ is minimal sufficient for $(\mu, \sigma^2)$.
+\end{eg}
+
+As mentioned, minimal sufficient statistics allow us to store the results of our experiments in the most efficient way. It turns out that if we have a sufficient statistic, then we can use it to improve \emph{any} estimator.
+\begin{thm}[Rao-Blackwell Theorem]
+ Let $T$ be a sufficient statistic for $\theta$ and let $\tilde{\theta}$ be an estimator for $\theta$ with $\E(\tilde{\theta}^2) < \infty$ for all $\theta$. Let $\hat{\theta}(\mathbf{x}) = \E[\tilde{\theta}(\mathbf{X})\mid T(\mathbf{X}) = T(\mathbf{x})]$. Then for all $\theta$,
+ \[
+ \E[(\hat{\theta} - \theta)^2] \leq \E[(\tilde{\theta} - \theta)^2].
+ \]
+ The inequality is strict unless $\tilde{\theta}$ is a function of $T$.
+\end{thm}
+Here we have to be careful with our definition of $\hat{\theta}$. It is defined as the expected value of $\tilde{\theta}(\mathbf{X})$, and this could potentially depend on the actual value of $\theta$. Fortunately, since $T$ is sufficient for $\theta$, the conditional distribution of $\mathbf{X}$ given $T = t$ does not depend on $\theta$. Hence $\hat{\theta} = \E[\tilde{\theta}(\mathbf{X})\mid T]$ does not depend on $\theta$, and so is a genuine estimator. In fact, the proof does not use that $T$ is sufficient; we only require it to be sufficient so that we can compute $\hat{\theta}$.
+
+Using this theorem, given any estimator, we can find one that is a function of a sufficient statistic and is at least as good in terms of mean squared error of estimation. Moreover, if the original estimator $\tilde{\theta}$ is unbiased, so is the new $\hat{\theta}$. Also, if $\tilde{\theta}$ is already a function of $T$, then $\hat{\theta} = \tilde{\theta}$.
+
+\begin{proof}
 By the conditional expectation formula, we have $\E(\hat{\theta}) = \E[\E(\tilde{\theta}\mid T)] = \E(\tilde{\theta})$. So they have the same bias.
+
+ By the conditional variance formula,
+ \[
+ \var(\tilde{\theta}) = \E[\var (\tilde{\theta}\mid T)] + \var [\E(\tilde{\theta}\mid T)] = \E[\var(\tilde{\theta}\mid T)] + \var(\hat{\theta}).
+ \]
 Hence $\var(\tilde{\theta}) \geq \var (\hat{\theta})$. So $\mse(\tilde{\theta}) \geq \mse(\hat{\theta})$, with equality only if $\var(\tilde{\theta}\mid T) = 0$, i.e.\ only if $\tilde{\theta}$ is a function of $T$.
+\end{proof}
+
+\begin{eg}
+ Suppose $X_1, \cdots, X_n$ are iid $\Poisson(\lambda)$, and let $\theta = e^{-\lambda}$, which is the probability that $X_1 = 0$. Then
+ \[
+ p_\mathbf{X}(\mathbf{x}\mid \lambda) =\frac{e^{-n\lambda}\lambda^{\sum x_i}}{\prod x_i!}.
+ \]
+ So
+ \[
+ p_\mathbf{X}(\mathbf{x}\mid \theta) = \frac{\theta^n(-\log \theta)^{\sum x_i}}{\prod x_i!}.
+ \]
+ We see that $T = \sum X_i$ is sufficient for $\theta$, and $\sum X_i \sim \Poisson (n\lambda)$.
+
 We start with an easy unbiased estimator of $\theta$, namely $\tilde{\theta} = 1_{[X_1 = 0]}$ (i.e.\ if we observe nothing in the first observation period, we estimate the event to be impossible). Then
+ \begin{align*}
+ \E[\tilde{\theta}\mid T = t] &= \P\left(X_1 = 0 \mid \sum_1^n X_i = t\right)\\
+ &= \frac{\P(X_1 = 0)\P(\sum_2^n X_i = t)}{\P(\sum_1^n X_i = t)}\\
+ &= \left(\frac{n - 1}{n}\right)^t.
+ \end{align*}
 So $\hat{\theta} = (1 - 1/n)^{\sum x_i}$. This is approximately $(1 - 1/n)^{n\bar{x}} \approx e^{-\bar x} = e^{-\hat{\lambda}}$, which makes sense.
+\end{eg}
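A small simulation (Python; the values of $\lambda$, $n$, the seed and the trial count are arbitrary, and the Poisson sampler is Knuth's multiplicative method) illustrates the improvement: the Rao--Blackwellised estimator has a much smaller mean squared error than the crude indicator:

```python
import math, random

random.seed(0)

def poisson(lam):
    # Knuth's multiplicative method for sampling Poisson(lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam, n, trials = 1.0, 10, 20000
theta = math.exp(-lam)                    # quantity being estimated
se_tilde = se_hat = 0.0
for _ in range(trials):
    xs = [poisson(lam) for _ in range(n)]
    tilde = 1.0 if xs[0] == 0 else 0.0    # crude unbiased estimator
    hat = (1 - 1 / n) ** sum(xs)          # Rao-Blackwellised estimator
    se_tilde += (tilde - theta) ** 2
    se_hat += (hat - theta) ** 2
assert se_hat / trials < se_tilde / trials
```

With these values the theoretical mean squared errors are roughly $0.014$ against $0.23$, so the ordering in the final assertion is clear-cut.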
+
+\begin{eg}
 Let $X_1, \cdots, X_n$ be iid $U[0, \theta]$, and suppose that we want to estimate $\theta$. We have shown above that $T = \max X_i$ is sufficient for $\theta$. Let $\tilde{\theta} = 2X_1$, an unbiased estimator (since $\E X_1 = \theta/2$). Then
+ \begin{align*}
+ \E[\tilde{\theta}\mid T = t] &= 2\E[X_1\mid \max X_i = t]\\
+ &= 2\E[X_1\mid \max X_i = t, X_1 = \max X_i]\P(X_1 = \max X_i)\\
+ &\quad\quad+ 2\E[X_1\mid \max X_i = t, X_1 \not= \max X_i] \P(X_1 \not= \max X_i)\\
+ &= 2\left(t\times \frac{1}{n} + \frac{t}{2}\frac{n - 1}{n}\right)\\
+ &= \frac{n + 1}{n}t.
+ \end{align*}
+ So $\hat{\theta} = \frac{n + 1}{n}\max X_i$ is our new estimator.
+\end{eg}
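The same effect can be seen numerically. In this sketch (Python; $\theta$, $n$, the seed and the trial count are arbitrary), the Rao--Blackwellised estimator clearly beats the crude one:

```python
import random

random.seed(1)

theta, n, trials = 3.0, 5, 20000
se_crude = se_rb = 0.0
for _ in range(trials):
    xs = [random.uniform(0, theta) for _ in range(n)]
    crude = 2 * xs[0]                 # unbiased but wasteful
    rb = (n + 1) / n * max(xs)        # conditioned on the sufficient statistic
    se_crude += (crude - theta) ** 2
    se_rb += (rb - theta) ** 2
assert se_rb / trials < se_crude / trials
```

Here the theoretical mean squared errors are $\theta^2/(n(n+2))$ against $\theta^2/3$, so the gap is large.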
+
+\subsection{Likelihood}
+There are many different estimators we can pick, and we have just come up with some criteria to determine whether an estimator is ``good''. However, these do not give us a systematic way of coming up with an estimator to actually use. In practice, we often use the \emph{maximum likelihood estimator}.
+
+Let $X_1, \cdots , X_n$ be random variables with joint pdf/pmf $f_\mathbf{X}(\mathbf{x}\mid \theta)$. We observe $\mathbf{X} = \mathbf{x}$.
+
+\begin{defi}[Likelihood]
+ For any given $\mathbf{x}$, the \emph{likelihood} of $\theta$ is $\like(\theta) = f_\mathbf{X}(\mathbf{x} \mid \theta)$, regarded as a function of $\theta$. The \emph{maximum likelihood estimator} (mle) of $\theta$ is an estimator that picks the value of $\theta$ that maximizes $\like (\theta)$.
+\end{defi}
+Often there is no closed form for the mle, and we have to find $\hat{\theta}$ numerically.
+
+When we can find the mle explicitly, in practice, we often maximize the \emph{log-likelihood} instead of the likelihood. In particular, if $X_1, \cdots, X_n$ are iid, each with pdf/pmf $f_X(x\mid \theta)$, then
+\begin{align*}
+ \like(\theta) &= \prod_{i = 1}^n f_X(x_i\mid \theta),\\
+ \log\like (\theta) &= \sum_{i = 1}^n \log f_X(x_i\mid \theta).
+\end{align*}
+
+\begin{eg}
+ Let $X_1, \cdots, X_n$ be iid $\Bernoulli(p)$. Then
+ \[
+ l(p) =\log\like(p) = \left(\sum x_i\right)\log p + \left(n - \sum x_i\right)\log(1 - p).
+ \]
+ Thus
+ \[
+ \frac{\d l}{\d p} = \frac{\sum x_i}{p} - \frac{n - \sum x_i}{1 - p}.
+ \]
+ This is zero when $p = \sum x_i /n$. So this is the maximum likelihood estimator (and is unbiased).
+\end{eg}
+
+\begin{eg}
+ Let $X_1, \cdots, X_n$ be iid $N(\mu, \sigma^2)$, and we want to estimate $\theta = (\mu, \sigma^2)$. Then
+ \[
+ l(\mu, \sigma^2) = \log\like(\mu, \sigma^2) = -\frac{n}{2}\log (2\pi) - \frac{n}{2}\log (\sigma^2) - \frac{1}{2\sigma^2}\sum (x_i - \mu)^2.
+ \]
+ This is maximized when
+ \[
+ \frac{\partial l}{\partial\mu} = \frac{\partial l}{\partial \sigma^2} = 0.
+ \]
+ We have
+ \[
+ \frac{\partial l}{\partial \mu} = -\frac{1}{\sigma^2}\sum (x_i - \mu), \quad \frac{\partial l}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum (x_i - \mu)^2.
+ \]
 Setting both partial derivatives to zero, the maximum likelihood estimator is $(\hat{\mu}, \hat{\sigma}^2) = (\bar x, S_{xx}/n)$, where $\bar{x} = \frac{1}{n}\sum x_i$ and $S_{xx} = \sum(x_i - \bar{x})^2$.
+
+ We shall see later that $S_{XX}/\sigma^2 = \frac{n\hat{\sigma}^2}{\sigma^2}\sim \chi_{n - 1}^2$, and so $\E(\hat{\sigma}^2) = \frac{(n - 1)\sigma^2}{n}$, i.e.\ $\hat{\sigma}^2$ is biased.
+\end{eg}
+
+\begin{eg}[German tank problem]
 Suppose the American army discovers some German tanks that are sequentially numbered, i.e.\ the first tank is numbered 1, the second is numbered 2, etc. If $\theta$ tanks are produced, we model the number of a randomly discovered tank as $U(0, \theta)$ (approximating the discrete uniform distribution by a continuous one). Suppose we have discovered $n$ tanks whose numbers are $x_1, x_2, \cdots, x_n$, and we want to estimate $\theta$, the total number of tanks produced. We want to find the maximum likelihood estimator.
+
+ Then
+ \[
+ \like(\theta) = \frac{1}{\theta^n}1_{[\max x_i \leq \theta]}1_{[\min x_i\geq 0]}.
+ \]
 So for $\theta \geq \max x_i$, $\like (\theta) = 1/\theta^n$, which is decreasing in $\theta$, while for $\theta < \max x_i$, $\like (\theta) = 0$. Hence the value $\hat{\theta} = \max x_i$ maximizes the likelihood.
+
+ Is $\hat{\theta}$ unbiased? First we need to find the distribution of $\hat{\theta}$. For $0 \leq t \leq \theta$, the cumulative distribution function of $\hat{\theta}$ is
+ \[
 F_{\hat{\theta}}(t) = \P(\hat{\theta} \leq t) = \P(X_i \leq t \text{ for all }i) = (\P(X_i \leq t))^n = \left(\frac{t}{\theta}\right)^n.
+ \]
 Differentiating with respect to $t$, we find the pdf $f_{\hat{\theta}}(t) = \frac{nt^{n - 1}}{\theta^n}$. Hence
+ \[
+ \E(\hat{\theta}) = \int_0^\theta t\frac{nt^{n - 1}}{\theta^n} \;\d t = \frac{n\theta}{n + 1}.
+ \]
+ So $\hat{\theta}$ is biased, but asymptotically unbiased.
+\end{eg}
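We can check the bias numerically (Python; $\theta = 1000$, $n = 5$, the seed and the trial count are arbitrary): the sample mean of $\hat{\theta} = \max X_i$ over many simulations is close to $n\theta/(n+1)$, below the true $\theta$:

```python
import random

random.seed(2)

theta, n, trials = 1000.0, 5, 20000
total = 0.0
for _ in range(trials):
    total += max(random.uniform(0, theta) for _ in range(n))
mean_mle = total / trials
# E[max X_i] = n * theta / (n + 1): biased low, asymptotically unbiased
assert abs(mean_mle - n * theta / (n + 1)) < 10
assert mean_mle < theta
```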
+
+\begin{eg}
+ Smarties come in $k$ equally frequent colours, but suppose we do not know $k$. Assume that we sample with replacement (although this is unhygienic).
+
 Our first four Smarties are Red, Purple, Red, Yellow. Then
+ \begin{align*}
 \like(k)&=\P_k(\text{1st is a new colour})\P_k(\text{2nd is a new colour})\\
+ & \quad\quad\P_k(\text{3rd matches 1st})\P_k(\text{4th is a new colour})\\
+ &= 1\times \frac{k - 1}{k}\times \frac{1}{k}\times \frac{k - 2}{k}\\
+ &= \frac{(k - 1)(k - 2)}{k^3}.
+\end{align*}
The maximum likelihood estimate is $\hat{k} = 5$ (found by trial and error), even though it is not much likelier than neighbouring values.
+\end{eg}
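The trial-and-error maximisation is easy to reproduce (Python; the upper limit of the search range is an arbitrary cap):

```python
def like(k):
    # likelihood of k colours given the sequence Red, Purple, Red, Yellow
    return (k - 1) * (k - 2) / k ** 3

k_hat = max(range(3, 100), key=like)
assert k_hat == 5
# the maximum is shallow: k = 5 is only slightly likelier than its neighbours
assert like(5) / like(4) < 1.03
assert like(5) / like(6) < 1.04
```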
+
+How does the mle relate to sufficient statistics? Suppose that $T$ is sufficient for $\theta$. Then the likelihood is $g(T(\mathbf{x}), \theta)h(\mathbf{x})$, which depends on $\theta$ only through $T(\mathbf{x})$. To maximize this as a function of $\theta$, we only need to maximize $g$. So the mle $\hat{\theta}$ is a function of the sufficient statistic.
+
+Note that if $\phi = h(\theta)$ with $h$ injective, then the mle of $\phi$ is given by $h(\hat{\theta})$. For example, if the mle of the standard deviation $\sigma$ is $\hat{\sigma}$, then the mle of the variance $\sigma^2$ is $\hat{\sigma}^2$. This is rather useful in practice, since we can use this to simplify a lot of computations.
+
+\subsection{Confidence intervals}
+\begin{defi}
+ A $100\gamma\%$ ($0 < \gamma < 1$) \emph{confidence interval} for $\theta$ is a random interval $(A(\mathbf{X}), B(\mathbf{X}))$ such that $\P(A(\mathbf{X}) < \theta < B(\mathbf{X})) = \gamma$, no matter what the true value of $\theta$ may be.
+\end{defi}
+It is also possible to have confidence intervals for vector parameters.
+
+Notice that it is the endpoints of the interval that are random quantities, while $\theta$ is a fixed constant we want to find out.
+
+We can interpret this in terms of repeat sampling. If we calculate $(A(\mathbf{x}), B(\mathbf{x}))$ for a large number of samples $\mathbf{x}$, then approximately $100\gamma\%$ of them will cover the true value of $\theta$.
+
+It is important to know that, having observed some data $\mathbf{x}$ and calculated a $95\%$ confidence interval, we \emph{cannot} say that $\theta$ has a $95\%$ chance of being within the interval. Apart from the standard objection that $\theta$ is a fixed value that either is or is not in the interval, so that we cannot assign probabilities to this event, we will later construct an example where even though we have a $50\%$ confidence interval, we are $100\%$ sure that $\theta$ lies in that interval.
+
+\begin{eg}
+ Suppose $X_1, \cdots, X_n$ are iid $N(\theta, 1)$. Find a $95\%$ confidence interval for $\theta$.
+
+ We know $\bar X \sim N(\theta, \frac{1}{n})$, so that $\sqrt{n}(\bar X - \theta)\sim N(0, 1)$.
+
+ Let $z_1, z_2$ be such that $\Phi(z_2) - \Phi(z_1) = 0.95$, where $\Phi$ is the standard normal distribution function.
+
+ We have $\P[z_1 < \sqrt{n}(\bar X - \theta) < z_2] = 0.95$, which can be rearranged to give
+ \[
+ \P\left[\bar X - \frac{z_2}{\sqrt{n}} < \theta < \bar X - \frac{z_1}{\sqrt{n}}\right] = 0.95.
+ \]
 So we obtain the following $95\%$ confidence interval:
+ \[
+ \left(\bar X - \frac{z_2}{\sqrt{n}}, \bar X - \frac{z_1}{\sqrt{n}}\right).
+ \]
 There are many possible choices for $z_1$ and $z_2$. Since the $N(0, 1)$ density is symmetric, the shortest such interval is obtained by taking $z_2 = z_{0.025} = -z_1$. We could also choose other values, such as $z_1 = -\infty$, $z_2 = 1.64$, but we usually choose symmetric end points.
+\end{eg}
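The repeat-sampling interpretation can be illustrated by simulation (Python; $\theta$, $n$, the seed and the trial count are arbitrary, and $z_{0.025} = 1.96$): about $95\%$ of the simulated intervals cover the true $\theta$:

```python
import random

random.seed(3)

theta, n, trials, z = 2.5, 10, 20000, 1.96
covered = 0
for _ in range(trials):
    xbar = sum(random.gauss(theta, 1) for _ in range(n)) / n
    # the symmetric interval (xbar - z/sqrt(n), xbar + z/sqrt(n))
    covered += xbar - z / n ** 0.5 < theta < xbar + z / n ** 0.5
assert abs(covered / trials - 0.95) < 0.01
```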
+The above example illustrates a common procedure for finding confidence intervals:
+\begin{itemize}
+ \item Find a quantity $R(\mathbf{X}, \theta)$ such that the $\P_\theta$-distribution of $R(\mathbf{X}, \theta)$ does not depend on $\theta$. This is called a \emph{pivot}. In our example, $R(\mathbf{X}, \theta) = \sqrt{n}(\bar X - \theta)$.
+ \item Write down a probability statement of the form $\P_\theta(c_1 < R(\mathbf{X}, \theta) < c_2) = \gamma$.
+ \item Rearrange the inequalities inside $\P(\ldots)$ to find the interval.
+\end{itemize}
+Usually $c_1, c_2$ are percentage points from a known standardised distribution, often equitailed. For example, we pick the $2.5\%$ and $97.5\%$ points for a $95\%$ confidence interval. We could also use, say, the $0\%$ and $95\%$ points, but this generally results in a wider interval.
+
+Note that if $(A(\mathbf{X}), B(\mathbf{X}))$ is a $100\gamma\%$ confidence interval for $\theta$, and $T(\theta)$ is a monotone increasing function of $\theta$, then $(T(A(\mathbf{X})), T(B(\mathbf{X})))$ is a $100\gamma\%$ confidence interval for $T(\theta)$.
+\begin{eg}
+ Suppose $X_1, \cdots, X_{50}$ are iid $N(0, \sigma^2)$. Find a $99\%$ confidence interval for $\sigma^2$.
+
+ We know that $X_i/\sigma \sim N(0, 1)$. So $\displaystyle\frac{1}{\sigma^2}\sum_{i = 1}^{50}X_i^2 \sim \chi^2_{50}$.
+
 So $R(\mathbf{X}, \sigma^2) = \sum_{i = 1}^{50} X_i^2/\sigma^2$ is a pivot.
+
+ Recall that $\chi_n^2(\alpha)$ is the upper $100\alpha\%$ point of $\chi_n^2$, i.e.\
+ \[
+ \P(\chi_n^2 \leq \chi_n^2(\alpha)) = 1 - \alpha.
+ \]
+ So we have $c_1 = \chi_{50}^2(0.995) = 27.99$ and $c_2 = \chi_{50}^2(0.005) = 79.49$.
+
+ So
+ \[
+ \P \left(c_1 < \frac{\sum X_i^2}{\sigma^2} < c_2\right) = 0.99,
+ \]
+ and hence
+ \[
+ \P\left(\frac{\sum X_i^2}{c_2} < \sigma^2 < \frac{\sum X_i^2}{c_1}\right) = 0.99.
+ \]
+ Using the remark above, we know that a $99\%$ confidence interval for $\sigma$ is $\left(\sqrt{\frac{\sum X_i^2}{c_2}}, \sqrt{\frac{\sum X_i^2}{c_1}}\right)$.
+\end{eg}
+
+\begin{eg}
+ Suppose $X_1, \cdots, X_n$ are iid $\Bernoulli(p)$. Find an approximate confidence interval for $p$.
+
+ The mle of $p$ is $\hat p = \sum X_i/n$.
+
+ By the Central Limit theorem, $\hat{p}$ is approximately $N(p, p(1 - p)/n)$ for large $n$.
+
+ So $\displaystyle \frac{\sqrt{n}(\hat{p} - p)}{\sqrt{p(1 - p)}}$ is approximately $N(0, 1)$ for large $n$. So letting $z_{(1-\gamma) / 2}$ be the solution to $\Phi(z_{(1-\gamma) / 2}) - \Phi(-z_{(1-\gamma) / 2}) = 1 - \gamma$, we have
+ \[
+ \P\left(\hat p - z_{(1 - \gamma)/2}\sqrt{\frac{p(1 - p)}{n}} < p < \hat{p} + z_{(1 - \gamma)/2}\sqrt{\frac{p(1 - p)}{n}}\right)\approx \gamma.
+ \]
+ But $p$ is unknown! So we approximate it by $\hat{p}$ to get a confidence interval for $p$ when $n$ is large:
+ \[
+ \P\left(\hat p - z_{(1 - \gamma)/2}\sqrt{\frac{\hat p(1 - \hat p)}{n}} < p < \hat{p} + z_{(1 - \gamma)/2}\sqrt{\frac{\hat p(1 - \hat p)}{n}}\right)\approx \gamma.
+ \]
+ Note that we have made a lot of approximations here, but it would be difficult to do better than this.
+\end{eg}
+
+\begin{eg}
+ Suppose an opinion poll says $20\%$ of the people are going to vote UKIP, based on a random sample of $1,000$ people. What might the true proportion be?
+
+ We assume we have an observation of $x = 200$ from a $\binomial(n, p)$ distribution with $n = 1,000$. Then $\hat p = x/n = 0.2$ is an unbiased estimate, and also the mle.
+
+ Now $\var(X/n) = \frac{p(1 - p)}{n}\approx \frac{\hat p(1 - \hat p)}{n} = 0.00016$. So a $95\%$ confidence interval is
+ \[
 \left(\hat p - 1.96\sqrt{\frac{\hat p(1 - \hat p)}{n}}, \hat p + 1.96\sqrt{\frac{\hat p(1 - \hat p)}{n}}\right) = 0.20 \pm 1.96\times 0.013 = (0.175, 0.225).
+ \]
 If we don't want to make that many approximations, we can note that $p(1 - p)\leq 1/4$ for all $0 \leq p \leq 1$. So a conservative $95\%$ interval is $\hat p \pm 1.96\sqrt{1/(4n)} \approx \hat p \pm 1/\sqrt{n}$. So whatever proportion is reported, it will be ``accurate'' to within about $\pm 1/\sqrt{n}$.
+\end{eg}
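The arithmetic in this example is easy to reproduce (Python):

```python
import math

n, x = 1000, 200
p_hat = x / n
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - half_width, p_hat + half_width
assert abs(lo - 0.175) < 1e-3 and abs(hi - 0.225) < 1e-3
# conservative half-width: p(1-p) <= 1/4 gives 1.96/(2 sqrt(n)) ~ 1/sqrt(n)
assert abs(1.96 * math.sqrt(1 / (4 * n)) - 1 / math.sqrt(n)) < 2e-3
```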
+
+\begin{eg}
+ Suppose $X_1, X_2$ are iid from $U(\theta - 1/2, \theta + 1/2)$. What is a sensible $50\%$ confidence interval for $\theta$?
+
 We know that each $X_i$ is equally likely to be less than $\theta$ or greater than $\theta$. So there is a $50\%$ chance that we get one observation on each side of $\theta$, i.e.\
+ \[
+ \P_\theta(\min(X_1, X_2) \leq \theta \leq \max(X_1, X_2)) = \frac{1}{2}.
+ \]
+ So $(\min(X_1, X_2), \max (X_1, X_2))$ is a 50\% confidence interval for $\theta$.
+
 But suppose that after the experiment we obtain $|x_1 - x_2| \geq \frac{1}{2}$. For example, we might get $x_1 = 0.2$, $x_2 = 0.9$. Since each observation lies within $\frac{1}{2}$ of $\theta$, in this particular case we \emph{know} that $\theta$ \emph{must} lie in $(\min (x_1, x_2), \max(x_1, x_2))$, and we don't have just $50\%$ ``confidence''!
+
 This is why after we calculate a confidence interval, we should not say ``there is $100(1 - \alpha)\%$ chance that $\theta$ lies in here''. The confidence interval just says that ``if we keep making these intervals, $100(1 - \alpha)\%$ of them will contain $\theta$''. But if we have calculated a \emph{particular} confidence interval, the probability that that \emph{particular} interval contains $\theta$ is \emph{not} $100(1 - \alpha)\%$.
+\end{eg}
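A simulation makes the paradox concrete (Python; $\theta = 0$, the seed and the trial count are arbitrary): overall the interval covers $\theta$ half the time, but on the runs where $|x_1 - x_2| \geq \frac{1}{2}$ it covers $\theta$ every single time:

```python
import random

random.seed(4)

theta, trials = 0.0, 50000
cover = far = far_cover = 0
for _ in range(trials):
    x1 = random.uniform(theta - 0.5, theta + 0.5)
    x2 = random.uniform(theta - 0.5, theta + 0.5)
    hit = min(x1, x2) <= theta <= max(x1, x2)
    cover += hit
    if abs(x1 - x2) >= 0.5:
        far += 1
        far_cover += hit
assert abs(cover / trials - 0.5) < 0.01   # unconditional coverage is 50%
assert far > 0 and far_cover == far       # conditional coverage is certain
```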
+
+\subsection{Bayesian estimation}
+So far we have seen the \emph{frequentist} approach to statistical inference, i.e.\ inferential statements about $\theta$ are interpreted in terms of repeat sampling. For example, the percentage confidence in a confidence interval is the long-run proportion of such intervals that will contain $\theta$, not the probability that $\theta$ lies in a particular interval.
+
+In contrast, the Bayesian approach treats $\theta$ as a random variable taking values in $\Theta$. The investigator's information and beliefs about the possible values of $\theta$ before any observation of data are summarised by a \emph{prior distribution} $\pi(\theta)$. When $\mathbf{X} = \mathbf{x}$ are observed, the extra information about $\theta$ is combined with the prior to obtain the \emph{posterior distribution} $\pi(\theta\mid \mathbf{x})$ for $\theta$ given $\mathbf{X} = \mathbf{x}$.
+
+There has been a long-running argument between the two approaches. Recently, things have settled down, and Bayesian methods are seen to be appropriate in a huge number of applications where one seeks to assess a probability about a ``state of the world''. For example, a spam filter will assess the probability that a specific email is spam, even though from a frequentist's point of view this is nonsense: the email either is or is not spam, and it makes no sense to assign a probability to its being spam.
+
+In Bayesian inference, we usually have some \emph{prior} knowledge about the distribution of $\theta$ (e.g.\ between $0$ and $1$). After collecting some data, we will find a \emph{posterior} distribution of $\theta$ given $\mathbf{X} = \mathbf{x}$.
+
+\begin{defi}[Prior and posterior distribution]
 The \emph{prior distribution} of $\theta$ is the probability distribution of the value of $\theta$ before conducting the experiment. We usually write it as $\pi(\theta)$.
+
 The \emph{posterior distribution} of $\theta$ is the probability distribution of the value of $\theta$ given an outcome $\mathbf{x}$ of the experiment. We write it as $\pi(\theta\mid \mathbf{x})$.
+\end{defi}
+By Bayes' theorem, the distributions are related by
+\[
+ \pi(\theta\mid \mathbf{x}) = \frac{f_{\mathbf{X}}(\mathbf{x}\mid \theta)\pi(\theta)}{f_{\mathbf{X}}(\mathbf{x})}.
+\]
+Thus
+\begin{align*}
 \pi(\theta\mid \mathbf{x}) &\propto f_{\mathbf{X}}(\mathbf{x}\mid \theta)\pi(\theta),\\
 \text{posterior} &\propto \text{likelihood}\times\text{prior},
+\end{align*}
+where the constant of proportionality is chosen to make the total mass of the posterior distribution equal to one. Usually, we use this form, instead of attempting to calculate $f_\mathbf{X}(\mathbf{x})$.
+
+It should be clear that the data enters through the likelihood, so the inference is automatically based on any sufficient statistic.
+
+\begin{eg}
+ Suppose I have $3$ coins in my pocket. One is $3:1$ in favour of tails, one is a fair coin, and one is $3:1$ in favour of heads.
+
+ I randomly select one coin and flip it once, observing a head. What is the probability that I have chosen coin 3?
+
+ Let $X = 1$ denote the event that I observe a head, $X = 0$ if a tail. Let $\theta$ denote the probability of a head. So $\theta$ is either 0.25, 0.5 or 0.75.
+
+ Our prior distribution is $\pi(\theta = 0.25) = \pi(\theta = 0.5) = \pi(\theta = 0.75) = 1/3$.
+
+ The probability mass function $f_X(x\mid \theta) = \theta^x(1 - \theta)^{1 - x}$. So we have the following results:
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
 $\theta$ & $\pi(\theta)$ & $f_X(x = 1\mid \theta)$ & $f_X(x = 1\mid \theta)\pi(\theta)$ & $\pi(\theta\mid x = 1)$\\
+ \midrule
+ 0.25 & 0.33 & 0.25 & 0.0825 & 0.167\\
+ 0.50 & 0.33 & 0.50 & 0.1650 & 0.333\\
+ 0.75 & 0.33 & 0.75 & 0.2475 & 0.500\\
+ \midrule
+ Sum & 1.00 & 1.50 & 0.4950 & 1.000\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ So if we observe a head, then there is now a $50\%$ chance that we have picked the third coin.
+\end{eg}
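The table is just Bayes' theorem applied to three values of $\theta$; a few lines of Python reproduce it:

```python
thetas = [0.25, 0.50, 0.75]
prior = [1 / 3, 1 / 3, 1 / 3]
likelihood = thetas                        # f(x = 1 | theta) = theta
unnorm = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]
# posterior probabilities 1/6, 1/3, 1/2, as in the table
assert [round(p, 3) for p in posterior] == [0.167, 0.333, 0.5]
```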
+
+\begin{eg}
+ Suppose we are interested in the true mortality risk $\theta$ in a hospital $H$ which is about to try a new operation. On average in the country, around $10\%$ of the people die, but mortality rates in different hospitals vary from around $3\%$ to around $20\%$. Hospital $H$ has no deaths in their first 10 operations. What should we believe about $\theta$?
+
 Let $X_i = 1$ if the $i$th patient in $H$ dies, and $X_i = 0$ otherwise. Then
 \[
 f_{\mathbf{X}}(\mathbf{x}\mid \theta) = \theta^{\sum x_i}(1 - \theta)^{n - \sum x_i}.
+ \]
 Suppose a priori that $\theta\sim \betaD(a, b)$ for some $a, b > 0$ to be chosen, so that
+ \[
+ \pi(\theta)\propto \theta^{a - 1}(1 - \theta)^{b - 1}.
+ \]
 Then the posterior is
 \[
 \pi(\theta\mid \mathbf{x})\propto f_{\mathbf{X}}(\mathbf{x}\mid \theta)\pi(\theta)\propto \theta^{\sum x_i + a - 1}(1 - \theta)^{n- \sum x_i + b - 1}.
+ \]
+ We recognize this as $\betaD(\sum x_i + a, n - \sum x_i + b)$. So
+ \[
+ \pi(\theta\mid \mathbf{x}) = \frac{\theta^{\sum x_i + a - 1}(1 - \theta)^{n - \sum x_i + b - 1}}{B(\sum x_i + a, n - \sum x_i + b)}.
+ \]
 In practice, we need to find a Beta prior distribution that matches our information from other hospitals. It turns out that the $\betaD(a = 3, b = 27)$ prior distribution has mean $0.1$ and $\P(0.03 < \theta < 0.20) = 0.9$.
+
 Then we observe data $\sum x_i = 0$, $n = 10$. So the posterior is $\betaD(\sum x_i + a, n - \sum x_i + b) = \betaD(3, 37)$. This has a mean of $3/40 = 0.075$.
+
+ This leads to a different conclusion than a frequentist analysis. Since nobody has died so far, the mle is $0$, which does not seem plausible. Using a Bayesian approach, we have a higher mean than $0$ because we take into account the data from other hospitals.
+\end{eg}
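The conjugate update is a one-line computation; this sketch (Python) reproduces the numbers in the example:

```python
a, b = 3, 27                  # prior Beta(3, 27): mean a/(a+b) = 0.1
n, deaths = 10, 0             # ten operations, no deaths
a_post, b_post = a + deaths, b + (n - deaths)
assert (a_post, b_post) == (3, 37)
posterior_mean = a_post / (a_post + b_post)
assert abs(posterior_mean - 0.075) < 1e-12     # 3/40
assert abs(a / (a + b) - 0.1) < 1e-12          # prior mean
```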
+
+For this problem, a beta prior leads to a beta posterior. We say that the beta family is a \emph{conjugate} family of prior distributions for Bernoulli samples.
+
+Suppose that $a = b = 1$ so that $\pi (\theta) = 1$ for $0 < \theta < 1$ --- the uniform distribution. Then the posterior is $\betaD(\sum x_i + 1, n - \sum x_i + 1)$, with properties
+\begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ &mean & mode & variance\\
+ \midrule
+ prior & 1/2 & non-unique & 1/12\\
+ posterior & $\displaystyle \frac{\sum x_i + 1}{n + 2}$ & $\displaystyle \frac{\sum x_i}{n}$ & $\displaystyle\frac{(\sum x_i + 1)(n - \sum x_i + 1)}{(n + 2)^2(n + 3)}$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+Note that the mode of the posterior is the mle.
+
+The posterior mean estimator $\frac{\sum x_i + 1}{n + 2}$ is discussed in Lecture 2, where we showed that this estimator has a smaller mse than the mle for non-extreme values of $\theta$. It is known as Laplace's estimator.
+
+The posterior variance is bounded above by $\frac{1}{4(n + 3)}$, which is smaller than the prior variance of $1/12$, and decreases as $n$ grows.
+
+Again, note that the posterior automatically depends on the data through the sufficient statistic.
+
+After we come up with the posterior distribution, we have to decide what estimator to use. In the case above, we used the posterior mean, but this might not be the best estimator.
+
+To determine what is the ``best'' estimator, we first need a \emph{loss function}. Let $L(\theta, a)$ be the loss incurred in estimating the value of a parameter to be $a$ when the true value is $\theta$.
+
+Common loss functions are quadratic loss $L(\theta, a) = (\theta - a)^2$, absolute error loss $L(\theta, a) = |\theta - a|$, but we can have others.
+
+When our estimate is $a$, the expected posterior loss is
+\[
+ h(a) = \int L(\theta, a)\pi(\theta\mid \mathbf{x})\;\d \theta.
+\]
+\begin{defi}[Bayes estimator]
+ The \emph{Bayes estimator} $\hat{\theta}$ is the estimator that minimises the expected posterior loss.
+\end{defi}
+
+For quadratic loss,
+\[
+ h(a) = \int (a - \theta)^2 \pi(\theta\mid \mathbf{x})\;\d \theta.
+\]
+$h'(a) = 0$ if
+\[
+ \int (a - \theta) \pi(\theta\mid \mathbf{x})\;\d \theta = 0,
+\]
+or
+\[
+ a \int \pi(\theta\mid \mathbf{x})\;\d \theta = \int \theta \pi(\theta\mid \mathbf{x})\;\d \theta.
+\]
+Since $\int \pi (\theta\mid \mathbf{x})\;\d \theta = 1$, the Bayes estimator is $\hat{\theta} = \int \theta\pi(\theta\mid \mathbf{x})\;\d \theta$, the \emph{posterior mean}.
+
+For absolute error loss,
+\begin{align*}
+ h(a) &= \int |\theta - a|\pi(\theta\mid \mathbf{x})\;\d \theta\\
+ &= \int_{-\infty}^a (a - \theta)\pi(\theta\mid \mathbf{x})\;\d \theta + \int_a^\infty (\theta - a)\pi(\theta\mid \mathbf{x})\;\d \theta\\
+ &= a\int_{-\infty}^a \pi(\theta\mid \mathbf{x})\;\d \theta - \int_{-\infty}^a\theta\pi(\theta\mid \mathbf{x})\;\d \theta\\
+ &+ \int_a^\infty \theta\pi(\theta\mid \mathbf{x})\;\d \theta - a\int_a^\infty \pi(\theta\mid \mathbf{x})\;\d \theta.
+\end{align*}
+Now $h'(a) = 0$ if
+\[
+ \int_{-\infty}^a \pi(\theta\mid \mathbf{x})\;\d \theta = \int_a^\infty \pi(\theta\mid \mathbf{x})\;\d \theta.
+\]
+This occurs when each side is $1/2$. So $\hat{\theta}$ is the \emph{posterior median}.
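These two facts can be verified numerically: discretise a posterior, minimise the expected loss $h(a)$ over a grid, and compare the minimisers with the posterior mean and median. The sketch below does this for a hypothetical $\betaD(3, 37)$ posterior.

```python
# Minimise the expected posterior loss h(a) on a grid for a hypothetical
# Beta(3, 37) posterior: quadratic loss should pick out the posterior mean,
# absolute error loss the posterior median.
import numpy as np
from scipy.stats import beta

post = beta(3, 37)
theta = np.linspace(1e-4, 0.5, 4001)       # grid covering essentially all the mass
w = post.pdf(theta)
w /= w.sum()                               # discretised posterior weights

a_grid = np.linspace(0.01, 0.2, 1001)
h_quad = [np.sum(w * (a - theta) ** 2) for a in a_grid]
h_abs = [np.sum(w * np.abs(a - theta)) for a in a_grid]

best_quad = a_grid[np.argmin(h_quad)]      # close to post.mean() = 0.075
best_abs = a_grid[np.argmin(h_abs)]        # close to post.ppf(0.5)
```

Note that for this right-skewed posterior the two estimators genuinely differ: the median sits below the mean.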
+
+\begin{eg}
+ Suppose that $X_1, \cdots , X_n$ are iid $N(\mu, 1)$, and that a priori $\mu\sim N(0, \tau^{-2})$ for some known $\tau$. So $\tau$ measures the certainty of our prior knowledge.
+
+ The posterior is given by
+ \begin{align*}
+ \pi(\mu\mid \mathbf{x})&\propto f_\mathbf{x}(\mathbf{x}\mid \mu)\pi(\mu)\\
+ &\propto \exp\left[-\frac{1}{2}\sum(x_i - \mu)^2\right]\exp\left[-\frac{\mu^2\tau^2}{2}\right]\\
+ &\propto \exp\left[-\frac{1}{2}(n + \tau^2)\left\{\mu - \frac{\sum x_i}{n + \tau^2}\right\}^2\right]
+ \end{align*}
+ since we can regard $n$, $\tau$ and all the $x_i$ as constants in the normalisation term, and then complete the square with respect to $\mu$. So the posterior distribution of $\mu$ given $\mathbf{x}$ is a normal distribution with mean $\sum x_i/(n + \tau^2)$ and variance $1/(n + \tau^2)$.
+
+ The normal density is symmetric, and so the posterior mean and the posterior median have the same value $\sum x_i/(n + \tau^2)$.
+
+ This is the optimal estimator for both quadratic and absolute loss.
+\end{eg}
+
+\begin{eg}
+ Suppose that $X_1, \cdots, X_n$ are iid $\Poisson(\lambda)$ random variables, and $\lambda$ has an exponential distribution with mean $1$. So $\pi(\lambda) = e^{-\lambda}.$
+
+ The posterior distribution is given by
+ \[
+ \pi(\lambda\mid \mathbf{x}) \propto e^{-n\lambda} \lambda^{\sum x_i}e^{-\lambda} = \lambda^{\sum x_i}e^{-(n + 1)\lambda},\quad \lambda > 0,
+ \]
+ which is $\gammaD\left(\sum x_i + 1, n + 1\right)$. Hence under quadratic loss, our estimator is
+ \[
+ \hat{\lambda} = \frac{\sum x_i + 1}{n + 1},
+ \]
+ the posterior mean.
+
+ Under absolute error loss, $\hat{\lambda}$ solves
+ \[
+ \int_0^{\hat{\lambda}} \frac{(n + 1)^{\sum x_i + 1}\lambda^{\sum x_i}e ^{-(n + 1)\lambda}}{\left(\sum x_i\right)!}\;\d \lambda = \frac{1}{2}.
+ \]
+\end{eg}
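With concrete numbers this is straightforward to evaluate. The sketch below assumes hypothetical data $\sum x_i = 10$, $n = 4$, so the posterior is $\gammaD(11, 5)$; note that scipy parametrises the Gamma by shape and scale $= 1/\text{rate}$.

```python
# Poisson-Gamma example with hypothetical data sum x_i = 10, n = 4.
from scipy.stats import gamma

s, n = 10, 4
shape, rate = s + 1, n + 1                 # posterior Gamma(11, 5)
post = gamma(a=shape, scale=1 / rate)      # scipy uses shape and scale = 1/rate

mean_est = post.mean()                     # posterior mean 11/5 = 2.2 (quadratic loss)
median_est = post.ppf(0.5)                 # posterior median (absolute error loss)
```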
+
+\section{Hypothesis testing}
+Often in statistics, we have some \emph{hypothesis} to test. For example, we want to test whether a drug can lower the chance of a heart attack. Often, we will have two hypotheses to compare: the \emph{null hypothesis} states that the drug is useless, while the \emph{alternative hypothesis} states that the drug is useful. Quantitatively, suppose that the chance of heart attack without the drug is $\theta_0$ and the chance with the drug is $\theta$. Then the null hypothesis is $H_0: \theta = \theta_0$, while the alternative hypothesis is $H_1: \theta \not= \theta_0$.
+
+It is important to note that the null hypothesis and the alternative hypothesis are not on an equal footing. By default, we assume the null hypothesis is true, and to reject it we need a \emph{lot} of evidence against it. This is because we consider incorrectly rejecting the null hypothesis to be a much more serious problem than failing to reject it when we should. For example, it is relatively acceptable to reject a drug that is actually useful, but it is terrible to distribute a useless drug to patients. Similarly, it is more serious to deem an innocent person guilty than to declare a guilty person innocent.
+
+In general, let $X_1, \cdots, X_n$ be iid, each taking values in $\mathcal{X}$, each with unknown pdf/pmf $f$. We have two hypotheses, $H_0$ and $H_1$, about $f$. On the basis of data $\mathbf{X} = \mathbf{x}$, we make a choice between the two hypotheses.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item A coin has $\P(\text{Heads}) = \theta$, and is thrown independently $n$ times. We could have $H_0:\theta = \frac{1}{2}$ versus $H_1: \theta = \frac{3}{4}.$
+ \item Suppose $X_1, \cdots, X_n$ are iid discrete random variables. We could have $H_0:$ the distribution is Poisson with unknown mean, and $H_1:$ the distribution is not Poisson.
+ \item General parametric cases: Let $X_1, \cdots , X_n$ be iid with density $f(x\mid \theta)$. $f$ is known while $\theta$ is unknown. Then our hypotheses are $H_0: \theta\in \Theta_0$ and $H_1:\theta\in \Theta_1$, with $\Theta_0\cap \Theta_1 = \emptyset$.
+ \item We could have $H_0: f = f_0$ and $H_1: f = f_1$, where $f_0$ and $f_1$ are densities that are completely specified but do not come from the same parametric family.
+ \end{itemize}
+\end{eg}
+
+\begin{defi}[Simple and composite hypotheses]
+ A \emph{simple hypothesis} $H$ specifies $f$ completely (e.g.\ $H_0: \theta = \frac{1}{2}$). Otherwise, $H$ is a \emph{composite hypothesis}.
+\end{defi}
+
+\subsection{Simple hypotheses}
+\begin{defi}[Critical region]
+ For testing $H_0$ against an alternative hypothesis $H_1$, a test procedure has to partition $\mathcal{X}^n$ into two disjoint exhaustive regions $C$ and $\bar C$, such that if $\mathbf{x}\in C$, then $H_0$ is rejected, and if $\mathbf{x}\in \bar C$, then $H_0$ is not rejected. $C$ is the \emph{critical region}.
+\end{defi}
+
+When performing a test, we may either arrive at a correct conclusion, or make one of the two types of error:
+\begin{defi}[Type I and II error]\leavevmode
+ \begin{enumerate}
+ \item \emph{Type I error}: reject $H_0$ when $H_0$ is true.
+ \item \emph{Type II error}: not rejecting $H_0$ when $H_0$ is false.
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[Size and power]
+When $H_0$ and $H_1$ are both simple, let
+\[
+ \alpha = \P(\text{Type I error}) = \P(\mathbf{X}\in C\mid H_0\text{ is true}).
+\]
+\[
+ \beta = \P(\text{Type II error}) = \P(\mathbf{X}\not\in C\mid H_1\text{ is true}).
+\]
+The \emph{size} of the test is $\alpha$, and $1 - \beta$ is the \emph{power} of the test to detect $H_1$.
+\end{defi}
+
+If we have two simple hypotheses, a relatively straightforward test is the \emph{likelihood ratio test}.
+\begin{defi}[Likelihood]
+ The \emph{likelihood} of a simple hypothesis $H: \theta = \theta^*$ given data $\mathbf{x}$ is
+ \[
+ L_\mathbf{x}(H) = f_\mathbf{X}(\mathbf{x}\mid \theta = \theta^*).
+ \]
+ The \emph{likelihood ratio} of two simple hypotheses $H_0, H_1$ given data $\mathbf{x}$ is
+ \[
+ \Lambda_\mathbf{x}(H_0; H_1) = \frac{L_\mathbf{x}(H_1)}{L_\mathbf{x}(H_0)}.
+ \]
+ A \emph{likelihood ratio test} (LR test) is one where the critical region $C$ is of the form
+ \[
+ C = \{\mathbf{x}: \Lambda_\mathbf{x} (H_0; H_1) > k\}
+ \]
+ for some $k$.
+\end{defi}
+
+It turns out this rather simple test is ``the best'' in the following sense:
+\begin{lemma}[Neyman-Pearson lemma]
+ Suppose $H_0: f = f_0$, $H_1: f = f_1$, where $f_0$ and $f_1$ are continuous densities that are nonzero on the same regions. Then among all tests of size less than or equal to $\alpha$, the test with the largest power is the likelihood ratio test of size $\alpha$.
+\end{lemma}
+
+\begin{proof}
+ Under the likelihood ratio test, our critical region is
+ \[
+ C = \left\{\mathbf{x}: \frac{f_1(\mathbf{x})}{f_0(\mathbf{x})} > k\right\},
+ \]
+ where $k$ is chosen such that $\alpha=\P(\text{reject }H_0\mid H_0) = \P(\mathbf{X}\in C\mid H_0) = \int_C f_0(\mathbf{x})\;\d \mathbf{x}$. The probability of Type II error is given by
+ \[
+ \beta = \P(\mathbf{X}\not\in C\mid f_1) = \int_{\bar C}f_1(\mathbf{x})\;\d \mathbf{x}.
+ \]
+ Let $C^*$ be the critical region of any other test with size less than or equal to $\alpha$. Let $\alpha^* = \P(\mathbf{X} \in C^*\mid f_0)$ and $\beta^* = \P(\mathbf{X}\not\in C^*\mid f_1)$. We want to show $\beta \leq \beta^*$.
+
+ We know $\alpha^* \leq \alpha$, i.e.\
+ \[
+ \int_{C^*}f_0(\mathbf{x})\;\d \mathbf{x}\leq \int_Cf_0(\mathbf{x}) \;\d \mathbf{x}.
+ \]
+ Also, on $C$, we have $f_1(\mathbf{x}) > kf_0(\mathbf{x})$, while on $\bar C$ we have $f_1(\mathbf{x}) \leq kf_0(\mathbf{x})$. So
+ \begin{align*}
+ \int_{\bar C^*\cap C} f_1(\mathbf{x}) \;\d \mathbf{x} &\geq k\int_{\bar C^*\cap C}f_0(\mathbf{x}) \;\d \mathbf{x}\\
+ \int_{\bar C\cap C^*}f_1(\mathbf{x}) \;\d \mathbf{x} &\leq k\int_{\bar C\cap C^*}f_0 (\mathbf{x})\;\d \mathbf{x}.
+ \end{align*}
+ Hence
+ \begin{align*}
+ \beta - \beta^* &= \int_{\bar C}f_1(\mathbf{x}) \;\d \mathbf{x} - \int_{\bar C^*}f_1(\mathbf{x})\;\d \mathbf{x}\\
+ &= \int_{\bar C\cap C^*} f_1(\mathbf{x}) \;\d \mathbf{x} + \int_{\bar C\cap \bar C^*}f_1(\mathbf{x})\;\d \mathbf{x} \\
+ &\quad - \int_{\bar C^*\cap C} f_1(\mathbf{x}) \;\d \mathbf{x} - \int_{\bar C\cap \bar C^*}f_1(\mathbf{x})\;\d \mathbf{x}\\
+ &= \int_{\bar C \cap C^*}f_1(\mathbf{x})\;\d \mathbf{x} - \int_{\bar C^*\cap C}f_1(\mathbf{x})\;\d \mathbf{x}\\
+ &\leq k\int_{\bar C \cap C^*}f_0(\mathbf{x})\;\d \mathbf{x} - k\int_{\bar C^*\cap C}f_0(\mathbf{x})\;\d \mathbf{x}\\
+ &= k\left\{\int_{\bar C\cap C^*}f_0(\mathbf{x})\;\d \mathbf{x} + \int_{C \cap C^*}f_0(\mathbf{x})\;\d \mathbf{x}\right\} \\
+ &\quad- k\left\{\int_{\bar C^*\cap C}f_0(\mathbf{x})\;\d \mathbf{x} + \int_{C\cap C^*}f_0(\mathbf{x})\;\d \mathbf{x}\right\}\\
+ &= k(\alpha^* - \alpha)\\
+ &\leq 0.
+ \end{align*}
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue!50!white] (0, 0) rectangle (2.5, 1);
+ \draw [fill=mred!50!white] (2.5, 0) rectangle (4, 1);
+ \draw (0, 1) rectangle (2.5, 4);
+ \draw [fill=mgreen!50!white] (2.5, 1) rectangle (4, 4);
+
+ \node at (0, 0.5) [left] {$C^*$};
+ \node at (0, 2.5) [left] {$\bar{C^*}$};
+
+ \node at (1.25, 4) [above] {$\bar{C}$};
+ \node at (3.26, 4) [above] {$C$};
+
+ \node [align=center] at (1.25, 0.5) {$C^*\cap \bar C$ \\\small $(f_1 \leq kf_0)$};
+ \node [align=center] at (3.25, 2.5) {$\bar{C^*}\cap C$ \\\small $(f_1 > kf_0)$};
+ \node [align=center] at (3.25, 0.5) {$C^*\cap C$};
+
+ \draw [decorate, decoration={brace}] (2.5, 0) -- (0, 0) node [pos=0.5, below] {$\beta/H_1$};
+ \draw [decorate, decoration={brace}] (4, 0) -- (2.5, 0) node [pos=0.5, below] {$\alpha/H_0$};
+
+ \draw [decorate, decoration={brace}] (4, 1) -- (4, 0) node [pos=0.5, right] {$\alpha^*/H_0$};
+ \draw [decorate, decoration={brace}] (4, 4) -- (4, 1) node [pos=0.5, right] {$\beta^*/H_1$};
+ \end{tikzpicture}
+ \end{center}
+\end{proof}
+Here we assumed that $f_0$ and $f_1$ are continuous densities. However, this assumption is only needed to ensure that a likelihood ratio test of exactly size $\alpha$ exists. Even for non-continuous distributions, the likelihood ratio test is still a good idea. In fact, you will show on the example sheets that for a discrete distribution, as long as a likelihood ratio test of exactly size $\alpha$ exists, the same result holds.
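The lemma can also be illustrated numerically. The sketch below takes a single observation with $H_0: X\sim N(0,1)$ against $H_1: X\sim N(1,1)$ (an assumed toy setup): the likelihood ratio is increasing in $x$, so the size-$0.05$ LR test rejects for $x > z_{0.05}$, and it beats, say, the equally sized test rejecting for $x < -z_{0.05}$.

```python
# Toy illustration of the Neyman-Pearson lemma: one observation X,
# H0: X ~ N(0, 1) vs H1: X ~ N(1, 1).  The LR test rejects for large x.
from scipy.stats import norm

alpha = 0.05
k = norm.ppf(1 - alpha)             # z_alpha: LR test rejects when x > k

power_lr = norm.sf(k, loc=1)        # P(X > k | H1), about 0.26
size_other = norm.cdf(-k)           # the lower-tail test has the same size alpha
power_other = norm.cdf(-k, loc=1)   # but much smaller power, about 0.004
```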
+
+\begin{eg}
+ Suppose $X_1, \cdots, X_n$ are iid $N(\mu, \sigma_0^2)$, where $\sigma_0^2$ is known. We want to find the best size $\alpha$ test of $H_0: \mu = \mu_0$ against $H_1: \mu = \mu_1$, where $\mu_0$ and $\mu_1$ are known fixed values with $\mu_1 > \mu_0$. Then
+ \begin{align*}
+ \Lambda_\mathbf{x}(H_0; H_1) &= \frac{(2\pi\sigma_0^2)^{-n/2}\exp\left(-\frac{1}{2\sigma^2_0}\sum(x_i - \mu_1)^2\right)}{(2\pi\sigma_0^2)^{-n/2}\exp\left(-\frac{1}{2\sigma^2_0}\sum(x_i - \mu_0)^2\right)}\\
+ &= \exp\left(\frac{\mu_1 - \mu_0}{\sigma_0^2}n\bar x + \frac{n(\mu_0^2 - \mu_1^2)}{2\sigma_0^2}\right).
+ \end{align*}
+ This is an increasing function of $\bar x$, so for any $k$, $\Lambda_\mathbf{x} > k\Leftrightarrow \bar x > c$ for some $c$. Hence we reject $H_0$ if $\bar x > c$, where $c$ is chosen such that $\P(\bar X > c \mid H_0) = \alpha$.
+
+ Under $H_0$, $\bar X \sim N(\mu_0, \sigma_0^2/n)$, so $Z = \sqrt{n}(\bar X - \mu_0)/\sigma_0 \sim N(0, 1)$.
+
+ Since $\bar x > c\Leftrightarrow z > c'$ for some $c'$, the size $\alpha$ test rejects $H_0$ if
+ \[
+ z = \frac{\sqrt{n}(\bar x - \mu_0)}{\sigma_0} > z_\alpha.
+ \]
+ For example, suppose $\mu_0 = 5$, $\mu_1 = 6$, $\sigma_0 = 1$, $\alpha = 0.05$, $n = 4$ and $\mathbf{x} = (5.1, 5.5, 4.9, 5.3)$. So $\bar x = 5.2$.
+
+ From tables, $z_{0.05} = 1.645$. We have $z = 0.4$ and this is less than $1.645$. So $\mathbf{x}$ is not in the rejection region.
+
+ We do not reject $H_0$ at the 5\% level and say that the data are consistent with $H_0$.
+
+ Note that this does not mean that we \emph{accept} $H_0$. While we don't have sufficient reason to believe it is false, we also don't have sufficient reason to believe it is true.
+
+ This is called a $z$-\emph{test}.
+\end{eg}
+
+In this example, the LR test rejects $H_0$ if $z > k$ for some constant $k$. The size of such a test is $\alpha = \P(Z > k\mid H_0) = 1 - \Phi(k)$, which decreases as $k$ increases.
+Our observed value $z$ lies in the rejection region iff $z > k$, i.e.\ iff $\alpha > p^* = \P(Z > z\mid H_0)$.
+\begin{defi}[$p$-value]
+ The quantity $p^*$ is called the \emph{$p$-value} of our observed data $\mathbf{x}$. For the example above, $z = 0.4$ and so $p^* = 1 - \Phi(0.4) = 0.3446$.
+\end{defi}
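The numbers in the example can be checked directly; the sketch below reproduces $\bar x$, $z$, the comparison with $z_{0.05}$, and the $p$-value.

```python
# The z-test from the example: mu0 = 5, sigma0 = 1, alpha = 0.05.
import math

from scipy.stats import norm

x = [5.1, 5.5, 4.9, 5.3]
n, mu0, sigma0, alpha = len(x), 5, 1, 0.05

xbar = sum(x) / n                            # 5.2
z = math.sqrt(n) * (xbar - mu0) / sigma0     # 0.4
z_alpha = norm.ppf(1 - alpha)                # 1.645 from tables
reject = z > z_alpha                         # False: data consistent with H0
p_star = norm.sf(z)                          # p-value 1 - Phi(0.4) = 0.3446
```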
+
+In general, the $p$-value is sometimes called the ``observed significance level'' of $\mathbf{x}$. This is the probability under $H_0$ of seeing data that is ``more extreme'' than our observed data $\mathbf{x}$. Extreme observations are viewed as providing evidence against $H_0$.
+
+\subsection{Composite hypotheses}
+For composite hypotheses like $H:\theta \geq 0$, the error probabilities do not have a single value. We define
+\begin{defi}[Power function]
+ The \emph{power function} is
+ \[
+ W(\theta) = \P(\mathbf{X}\in C\mid \theta) = \P(\text{reject }H_0\mid \theta).
+ \]
+\end{defi}
+We want $W(\theta)$ to be small on $H_0$ and large on $H_1$.
+
+\begin{defi}[Size]
+ The \emph{size} of the test is
+ \[
+ \alpha =\sup_{\theta\in \Theta_0}W(\theta).
+ \]
+\end{defi}
+This is the largest probability of a Type I error over $\Theta_0$, i.e.\ the worst case.
+
+For $\theta\in \Theta_1$, $1 - W(\theta) = \P(\text{Type II error}\mid \theta)$.
+
+Sometimes the Neyman-Pearson theory can be extended to one-sided alternatives.
+
+For example, in the previous example, we have shown that the most powerful size $\alpha$ test of $H_0: \mu = \mu_0$ versus $H_1: \mu = \mu_1$ (where $\mu_1 > \mu_0$) is given by
+\[
+ C = \left\{x: \frac{\sqrt{n}(\bar x - \mu_0)}{\sigma_0} > z_\alpha\right\}.
+\]
+The critical region depends on $\mu_0, n, \sigma_0, \alpha$, and the fact that $\mu_1 > \mu_0$. It does \emph{not} depend on the particular value of $\mu_1$. This test is then the uniformly most powerful size $\alpha$ test of $H_0: \mu = \mu_0$ against $H_1: \mu> \mu_0$.
+
+\begin{defi}[Uniformly most powerful test]
+ A test specified by a critical region $C$ is a \emph{uniformly most powerful} (UMP) size $\alpha$ test for testing $H_0:\theta\in \Theta_0$ against $H_1: \theta \in \Theta_1$ if
+ \begin{enumerate}
+ \item $\sup_{\theta\in \Theta_0} W(\theta) = \alpha$.
+ \item For any other test $C^*$ with size $\leq \alpha$ and with power function $W^*$, we have $W(\theta) \geq W^*(\theta)$ for all $\theta\in \Theta_1$.
+ \end{enumerate}
+ Note that a UMP test may not exist. However, the likelihood ratio test often works.
+\end{defi}
+
+\begin{eg}
+ Suppose $X_1, \cdots, X_n$ are iid $N(\mu, \sigma_0^2)$ where $\sigma_0$ is known, and we wish to test $H_0: \mu\leq \mu_0$ against $H_1: \mu > \mu_0$.
+
+ First consider testing $H_0': \mu = \mu_0$ against $H_1': \mu = \mu_1$, where $\mu_1 > \mu_0$. The Neyman-Pearson test of size $\alpha$ of $H_0'$ against $H_1'$ has
+ \[
+ C = \left\{\mathbf{x}: \frac{\sqrt{n}(\bar x - \mu_0)}{\sigma_0} > z_\alpha\right\}.
+ \]
+ We show that $C$ is in fact UMP for the composite hypotheses $H_0$ against $H_1$. For $\mu\in \R$, the power function is
+ \begin{align*}
+ W(\mu) &= \P_\mu(\text{reject } H_0)\\
+ &= \P_\mu\left(\frac{\sqrt{n}(\bar X - \mu_0)}{\sigma_0} > z_\alpha\right)\\
+ &= \P_\mu\left(\frac{\sqrt{n}(\bar X - \mu)}{\sigma_0} > z_\alpha + \frac{\sqrt{n}(\mu_0 - \mu)}{\sigma_0}\right)\\
+ &= 1 - \Phi\left(z_\alpha + \frac{\sqrt{n}(\mu_0 - \mu)}{\sigma_0}\right)
+ \end{align*}
+ To show this is UMP, first note that $W(\mu_0) = \alpha$ (by substituting $\mu = \mu_0$), and that $W(\mu)$ is an increasing function of $\mu$. So
+ \[
+ \sup_{ \mu \leq \mu_0} W(\mu) = \alpha.
+ \]
+ So the first condition is satisfied.
+
+ For the second condition, observe that for any $\mu > \mu_0$, the Neyman-Pearson size $\alpha$ test of $H_0'$ vs $H_1'$ has critical region $C$. Let $C^*$ and $W^*$ belong to any other test of $H_0$ vs $H_1$ of size $\leq \alpha$. Then $C^*$ can be regarded as a test of $H_0'$ vs $H_1'$ of size $\leq \alpha$, and the Neyman-Pearson lemma says that $W^*(\mu_1) \leq W(\mu_1)$. This holds for all $\mu_1 > \mu_0$. So the condition is satisfied and it is UMP.
+\end{eg}
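The power function in this example is easy to tabulate. The sketch below (with assumed values $\mu_0 = 0$, $\sigma_0 = 1$, $n = 25$, $\alpha = 0.05$) checks that $W(\mu_0) = \alpha$ and that $W$ is increasing, so the supremum over $\mu \leq \mu_0$ is indeed $\alpha$.

```python
# Power function W(mu) = 1 - Phi(z_alpha + sqrt(n)(mu0 - mu)/sigma0)
# for assumed values mu0 = 0, sigma0 = 1, n = 25, alpha = 0.05.
import numpy as np
from scipy.stats import norm

mu0, sigma0, n, alpha = 0.0, 1.0, 25, 0.05
z_alpha = norm.ppf(1 - alpha)

def W(mu):
    # norm.sf computes the upper tail 1 - Phi accurately
    return norm.sf(z_alpha + np.sqrt(n) * (mu0 - mu) / sigma0)

mus = np.linspace(-1.0, 1.0, 201)
powers = W(mus)                     # increasing in mu, equal to alpha at mu0
```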
+We now consider likelihood ratio tests for more general situations.
+\begin{defi}[Likelihood of a composite hypothesis]
+ The \emph{likelihood} of a composite hypothesis $H:\theta\in \Theta$ given data $\mathbf{x}$ is defined to be
+ \[
+ L_{\mathbf{x}}(H) = \sup_{\theta\in \Theta}f(\mathbf{x}\mid \theta).
+ \]
+\end{defi}
+So far we have considered disjoint hypotheses $\Theta_0, \Theta_1$. If we are not interested in any specific alternative, it is easier to take $\Theta_1 = \Theta$ rather than $\Theta \setminus\Theta_0$. Then
+\[
+ \Lambda_\mathbf{x} (H_0; H_1) = \frac{L_\mathbf{x}(H_1)}{L_\mathbf{x}(H_0)}=\frac{\sup_{\theta\in \Theta_1}f(\mathbf{x}\mid \theta)}{\sup_{\theta\in \Theta_0}f(\mathbf{x}\mid \theta)} \geq 1,
+\]
+with large values of $\Lambda$ indicating departure from $H_0$.
+
+\begin{eg}
+ Suppose that $X_1, \cdots, X_n$ are iid $N(\mu, \sigma_0^2)$, with $\sigma_0^2$ known, and we wish to test $H_0: \mu = \mu_0$ against $H_1: \mu \not= \mu_0$ (for given constant $\mu_0$). Here $\Theta_0 = \{\mu_0\}$ and $\Theta = \R$.
+
+ For the numerator, we have $\sup_\Theta f(\mathbf{x}\mid \mu) = f(\mathbf{x}\mid \hat{\mu})$, where $\hat{\mu}$ is the mle. We know that $\hat{\mu} = \bar x$. Hence
+ \[
+ \Lambda_\mathbf{x}(H_0; H_1) = \frac{(2\pi\sigma_0^2)^{-n/2}\exp\left(-\frac{1}{2\sigma^2_0}\sum(x_i - \bar x)^2\right)}{(2\pi\sigma_0^2)^{-n/2}\exp\left(-\frac{1}{2\sigma^2_0}\sum(x_i - \mu_0)^2\right)}.
+ \]
+ Then $H_0$ is rejected if $\Lambda_\mathbf{x}$ is large.
+
+ To make our lives easier, we can use the logarithm instead:
+ \[
+ 2\log \Lambda(H_0;H_1) = \frac{1}{\sigma_0^2}\left[\sum (x_i - \mu_0)^2 - \sum (x_i - \bar x)^2\right] = \frac{n}{\sigma_0^2}(\bar x - \mu_0)^2.
+ \]
+ So we can reject $H_0$ if we have
+ \[
+ \left|\frac{\sqrt{n}(\bar x - \mu_0)}{\sigma_0}\right| > c
+ \]
+ for some $c$.
+
+ We know that under $H_0$, $\displaystyle Z = \frac{\sqrt{n}(\bar X - \mu_0)}{\sigma_0}\sim N(0, 1)$. So the size $\alpha$ generalised likelihood test rejects $H_0$ if
+ \[
+ \left|\frac{\sqrt{n}(\bar x - \mu_0)}{\sigma_0}\right| > z_{\alpha/2}.
+ \]
+ Alternatively, since $\displaystyle \frac{n(\bar X - \mu_0)^2}{\sigma_0^2}\sim \chi_1^2$, we reject $H_0$ if
+ \[
+ \frac{n(\bar x - \mu_0)^2}{\sigma_0^2} > \chi_1^2(\alpha),
+ \]
+ (check that $z_{\alpha/2}^2 = \chi_1^2(\alpha)$).
+
+ Note that this is a two-tailed test --- i.e.\ we reject $H_0$ both for high and low values of $\bar x$.
+\end{eg}
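The parenthetical identity $z_{\alpha/2}^2 = \chi_1^2(\alpha)$ is quick to check numerically (a scipy sketch with $\alpha = 0.05$):

```python
# Check z_{alpha/2}^2 = chi^2_1(alpha): squaring a standard normal gives chi^2_1,
# so the two-tailed z test and the chi-squared form of the test coincide.
from scipy.stats import chi2, norm

alpha = 0.05
z_half = norm.ppf(1 - alpha / 2)     # z_{alpha/2}, about 1.96
chi_1 = chi2.ppf(1 - alpha, df=1)    # chi^2_1(0.05), about 3.841
```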
+
+The next theorem allows us to use likelihood ratio tests even when we cannot find the exact relevant null distribution.
+
+First consider the ``size'' or ``dimension'' of our hypotheses: suppose that $H_0$ imposes $p$ independent restrictions on $\Theta$. So for example, if $\Theta = \{\theta: \theta = (\theta_1, \cdots, \theta_k)\}$, and we have
+\begin{itemize}
+ \item $H_0: \theta_{i_1} = a_1, \theta_{i_2} = a_2, \cdots , \theta_{i_p} = a_p$; or
+ \item $H_0: A\theta = \mathbf{b}$ (with $A$ $p\times k$, $\mathbf{b}$ $p\times 1$ given); or
+ \item $H_0: \theta_i = f_i(\varphi), i = 1, \cdots, k$ for some $\varphi = (\varphi_1, \cdots, \varphi_{k - p})$.
+\end{itemize}
+We say $\Theta$ has $k$ free parameters and $\Theta_0$ has $k - p$ free parameters. We write $|\Theta_0| = k - p$ and $|\Theta| = k$.
+
+\begin{thm}[Generalized likelihood ratio theorem]
+ Suppose $\Theta_0 \subseteq \Theta_1$ and $|\Theta_1| - |\Theta_0| = p$. Let $\mathbf{X} = (X_1, \cdots, X_n)$ with all $X_i$ iid. If $H_0$ is true, then as $n\to \infty$,
+ \[
+ 2\log \Lambda_\mathbf{X}(H_0;H_1)\sim \chi_p^2.
+ \]
+ If $H_0$ is not true, then $2\log \Lambda$ tends to be larger. We reject $H_0$ if $2\log \Lambda > c$, where $c = \chi_p^2(\alpha)$ for a test of approximately size $\alpha$.
+\end{thm}
+
+We will not prove this result here. In our example above, $|\Theta_1| - |\Theta_0| = 1$, and we saw that under $H_0$, $2\log \Lambda \sim \chi_1^2$ \emph{exactly} for all $n$ in that particular case, rather than just approximately.
+
+\subsection{Tests of goodness-of-fit and independence}
+\subsubsection{Goodness-of-fit of a fully-specified null distribution}
+So far, we have considered relatively simple cases where we are attempting to figure out, say, the mean. However, in reality, more complicated scenarios arise. For example, we might want to know if a die is fair, i.e.\ if the probability of getting each number is exactly $\frac{1}{6}$. Our null hypothesis would be that $p_1 = p_2 = \cdots = p_6 = \frac{1}{6}$, while the alternative hypothesis allows any possible values of $p_i$.
+
+In general, suppose the observation space $\mathcal{X}$ is partitioned into $k$ sets, and let $p_i$ be the probability that an observation is in set $i$ for $i = 1, \cdots, k$. We want to test ``$H_0:$ the $p_i$'s arise from a fully specified model'' against ``$H_1:$ the $p_i$'s are unrestricted (apart from the obvious $p_i \geq 0, \sum p_i = 1$)''.
+
+\begin{eg}
+ The following table lists the birth months of admissions to Oxford and Cambridge in 2012.
+ \begin{center}
+ \begin{tabular}{cccccccccccc}
+ \toprule
+ Sep & Oct & Nov & Dec & Jan & Feb & Mar & Apr & May & Jun & Jul & Aug\\
+ 470 & 515 & 470 & 457 & 473 & 381 & 466 & 457 & 437 & 396 & 384 & 394\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Is this compatible with a uniform distribution over the year?
+
+ Out of $n$ independent observations, let $N_i$ be the number of observations in the $i$th set. So $(N_1,\cdots, N_k)\sim \multinomial(n; p_1, \cdots, p_k)$.
+
+ For a generalized likelihood ratio test of $H_0$, we need to find the maximised likelihood under $H_0$ and $H_1$.
+
+ Under $H_1$, $\like(p_1, \cdots, p_k) \propto p_1^{n_1}\cdots p_k^{n_k}$. So the log likelihood is $l = \text{constant} + \sum n_i \log p_i$. We want to maximise this subject to $\sum p_i = 1$. Using the Lagrange multiplier, we will find that the mle is $\hat{p_i} = n_i/n$. Also $|\Theta_1| = k - 1$ (not $k$, since they must sum up to 1).
+
+ Under $H_0$, the values of $p_i$ are specified completely, say $p_i = \tilde{p}_i$. So $|\Theta_0| = 0$. Using our formula for $\hat{p_i}$, we find that
+ \[
+ 2\log \Lambda = 2\log\left(\frac{\hat{p}_1^{n_1}\cdots \hat{p}_k^{n_k}}{\tilde{p}_{1}^{n_1}\cdots \tilde{p}_k^{n_k}}\right) = 2\sum n_i \log \left(\frac{n_i}{n\tilde{p}_i}\right)\tag{1}
+ \]
+ Here $|\Theta_1| - |\Theta_0| = k - 1$. So we reject $H_0$ if $2\log \Lambda > \chi_{k - 1}^2(\alpha)$ for an approximate size $\alpha$ test.
+
+ Under $H_0$ (no effect of month of birth), $\tilde{p}_i$ is the proportion of births in month $i$ in 1993/1994 in the whole population --- this is \emph{not} simply proportional to the number of days in each month (or even worse, $\frac{1}{12}$), as there is for example an excess of September births (the ``Christmas effect'').
+
+ With these values of $\tilde{p}_i$, we find
+ \[
+ 2\log \Lambda = 2\sum n_i \log\left(\frac{n_i}{n\tilde{p}_i}\right) = 44.86.
+ \]
+ $\P(\chi_{11}^2 > 44.86) = 3\times 10^{-9}$, which is our $p$-value. Since this is certainly less than 0.001, we can reject $H_0$ at the $0.1\%$ level, or can say the result is ``significant at the $0.1\%$ level''.
+
+ The traditional levels for comparison are $\alpha = 0.05, 0.01, 0.001$, roughly corresponding to ``evidence'', ``strong evidence'' and ``very strong evidence''.
+\end{eg}
+A similar common situation has $H_0: p_i = p_i(\theta)$ for some parameter $\theta$ and $H_1$ as before. Now $|\Theta_0|$ is the number of independent parameters to be estimated under $H_0$.
+
+Under $H_0$, we find the mle $\hat{\theta}$ by maximizing $\sum n_i \log p_i (\theta)$, and then
+\[
+ 2\log \Lambda = 2\log \left(\frac{\hat{p_1}^{n_1}\cdots \hat{p_k}^{n_k}}{p_1(\hat{\theta})^{n_1}\cdots p_k (\hat{\theta})^{n_k}}\right) = 2\sum n_i \log \left(\frac{n_i}{np_i(\hat{\theta})}\right).\tag{2}
+\]
+The degrees of freedom are $k - 1 - |\Theta_0|$.
+
+\subsubsection{Pearson's chi-squared test}
+Notice that the two log likelihood ratios (1) and (2) are of the same form. In general, let $o_i = n_i$ (observed number) and let $e_i = n\tilde{p_i}$ or $np_i(\hat{\theta})$ (expected number). Let $\delta_i = o_i - e_i$. Then
+\begin{align*}
+ 2\log \Lambda &= 2\sum o_i \log \left(\frac{o_i}{e_i}\right)\\
+ &= 2\sum (e_i + \delta_i) \log\left(1 + \frac{\delta_i}{e_i}\right)\\
+ &= 2\sum (e_i + \delta_i) \left(\frac{\delta_i}{e_i} - \frac{\delta_i^2}{2e_i^2} + O(\delta_i^3)\right)\\
+ &= 2\sum \left(\delta_i + \frac{\delta_i^2}{e_i} - \frac{\delta_i^2}{2e_i} + O(\delta_i^3)\right)\\
+ \intertext{We know that $\sum \delta_i = 0$ since $\sum e_i = \sum o_i$. So}
+ &\approx \sum \frac{\delta_i^2}{e_i}\\
+ &= \sum\frac{(o_i - e_i)^2}{e_i}.
+\end{align*}
+This approximation is known as \emph{Pearson's chi-squared statistic}, and the resulting test is \emph{Pearson's chi-squared test}.
+
+\begin{eg}
+ Mendel crossed 556 smooth yellow male peas with wrinkled green peas. From the progeny, let
+ \begin{enumerate}
+ \item $N_1$ be the number of smooth yellow peas,
+ \item $N_2$ be the number of smooth green peas,
+ \item $N_3$ be the number of wrinkled yellow peas,
+ \item $N_4$ be the number of wrinkled green peas.
+ \end{enumerate}
+ We wish to test the goodness of fit of the model
+ \begin{center}
+ $H_0: (p_1, p_2, p_3, p_4) = \left(\frac{9}{16}, \frac{3}{16}, \frac{3}{16}, \frac{1}{16}\right)$.
+ \end{center}
+ Suppose we observe $(n_1, n_2, n_3, n_4) = (315, 108, 102, 31)$.
+
+ We find $(e_1, e_2, e_3, e_4) = (312.75, 104.25, 104.25, 34.75)$. The actual $2\log \Lambda = 0.618$ and the approximation we had is $\sum \frac{(o_i - e_i)^2}{e_i} = 0.604$.
+
+ Here $|\Theta_0| = 0$ and $|\Theta_1| = 4 - 1 = 3$. So we refer our test statistics to $\chi_3^2(\alpha)$.
+
+ Since $\chi_3^2(0.05) = 7.815$, we see that neither value is significant at $5\%$. So there is no evidence against Mendel's theory. In fact, the $p$-value is approximately $\P(\chi_3^2 > 0.6) \approx 0.90$. This is a \emph{really} good fit, so good that people suspect the numbers were not genuine.
+\end{eg}
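Mendel's numbers can be reproduced in a few lines of scipy:

```python
# Mendel's pea data: exact 2 log Lambda and Pearson's approximation,
# both referred to chi^2_3.
import numpy as np
from scipy.stats import chi2

o = np.array([315, 108, 102, 31])
p0 = np.array([9, 3, 3, 1]) / 16
n = o.sum()                              # 556
e = n * p0                               # (312.75, 104.25, 104.25, 34.75)

llr = 2 * np.sum(o * np.log(o / e))      # about 0.618
pearson = np.sum((o - e) ** 2 / e)       # about 0.604
p_value = chi2.sf(llr, df=3)             # about 0.89: a suspiciously good fit
```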
+
+\begin{eg}
+ In a genetics problem, each individual has one of the three possible genotypes, with probabilities $p_1, p_2, p_3$. Suppose we wish to test $H_0: p_i = p_i(\theta)$, where
+ \[
+ p_1(\theta) = \theta^2,\quad p_2(\theta) = 2\theta(1 - \theta), \quad p_3(\theta) = (1 - \theta)^2
+ \]
+ for some $\theta \in (0, 1)$.
+
+ We observe $N_i = n_i$. Under $H_0$, the mle $\hat{\theta}$ is found by maximising
+ \[
+ \sum n_i \log p_i(\theta) = 2n_1 \log \theta + n_2\log(2\theta(1 - \theta)) + 2n_3 \log (1 - \theta).
+ \]
+ We find that $\hat{\theta} = \frac{2n_1 + n_2}{2n}$. Also, $|\Theta_0| = 1$ and $|\Theta_1| = 2$.
+
+ After conducting an experiment, we can substitute $p_i(\hat{\theta})$ into (2), or find the corresponding Pearson's chi-squared statistic, and refer to $\chi_1^2$.
+\end{eg}
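The closed form $\hat\theta = (2n_1 + n_2)/(2n)$ can be checked against a direct numerical maximisation of the log likelihood; the genotype counts below are hypothetical.

```python
# Check the closed-form mle against numerical maximisation,
# for hypothetical genotype counts (n1, n2, n3).
import numpy as np
from scipy.optimize import minimize_scalar

n1, n2, n3 = 40, 44, 16
n = n1 + n2 + n3

def neg_loglik(t):
    # minus the log likelihood: -sum n_i log p_i(theta)
    return -(2 * n1 * np.log(t) + n2 * np.log(2 * t * (1 - t)) + 2 * n3 * np.log(1 - t))

closed_form = (2 * n1 + n2) / (2 * n)          # (2*40 + 44)/200 = 0.62
numeric = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method="bounded").x
```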
+
+\subsubsection{Testing independence in contingency tables}
+\begin{defi}[Contingency table]
+ A \emph{contingency table} is a table in which observations or individuals are classified according to one or more criteria.
+\end{defi}
+
+\begin{eg}
+ 500 people with recent car changes were asked about their previous and new cars. The results are as follows:
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ & & & New car &\\
+ & & Large & Medium & Small\\\midrule
+ \multirow{3}{*}{\rotatebox[origin=c]{90}{Previous}\;\rotatebox[origin=c]{90}{car}}& Large & 56 & 52 & 42\\
+ & Medium & 50 & 83 & 67\\
+ & Small & 18 & 51 & 81\\\bottomrule
+ \end{tabular}
+ \end{center}
+ This is a two-way contingency table: Each person is classified according to the previous car size and new car size.
+\end{eg}
+Consider a two-way contingency table with $r$ rows and $c$ columns. For $i = 1, \cdots, r$ and $j = 1, \cdots, c$, let $p_{ij}$ be the probability that an individual selected from the population under consideration is classified in row $i$ and column $j$ (i.e.\ in the $(i, j)$ cell of the table).
+
+Let $p_{i+} = \P(\text{in row }i)$ and $p_{+j} = \P(\text{in column }j)$. Then we must have $p_{++} = \sum_i \sum_j p_{ij} = 1$.
+
+Suppose a random sample of $n$ individuals is taken, and let $n_{ij}$ be the number of these classified in the $(i, j)$ cell of the table.
+
+Let $n_{i+} = \sum_j n_{ij}$ and $n_{+j} = \sum_i n_{ij}$. So $n_{++} = n$.
+
+We have
+\[
+ (N_{11}, \cdots, N_{1c}, N_{21}, \cdots, N_{rc}) \sim \multinomial (n; p_{11}, \cdots, p_{1c}, p_{21}, \cdots, p_{rc}).
+\]
+We may be interested in testing the null hypothesis that the two classifications are independent. So we test
+\begin{itemize}
+ \item $H_0$: $p_{ij} = p_{i+}p_{+j}$ for all $i, j$, i.e.\ independence of columns and rows.
+ \item $H_1$: $p_{ij}$ are unrestricted.
+\end{itemize}
+Of course we have the usual restrictions like $p_{++} = 1$, $p_{ij} \geq 0$.
+
+Under $H_1$, the mles are $\hat{p}_{ij} = \frac{n_{ij}}{n}$.
+
+Under $H_0$, the mles are $\hat{p}_{i+} = \frac{n_{i+}}{n}$ and $\hat{p}_{+j} = \frac{n_{+j}}{n}$.
+
+Write $o_{ij} = n_{ij}$ and $e_{ij} = n\hat{p}_{i+}\hat{p}_{+j} = n_{i+}n_{+j}/n$.
+
+Then
+\[
+ 2\log \Lambda = 2\sum_{i = 1}^r \sum_{j = 1}^c o_{ij}\log\left(\frac{o_{ij}}{e_{ij}}\right) \approx \sum_{i = 1}^r \sum_{j = 1}^c \frac{(o_{ij} - e_{ij})^2}{e_{ij}},
+\]
+using the same approximation steps as for Pearson's chi-squared statistic.
+
+We have $|\Theta_1| = rc - 1$, because under $H_1$ the $p_{ij}$'s sum to one. Also, $|\Theta_0| = (r - 1) + (c - 1)$ because $p_{1+}, \cdots, p_{r+}$ must satisfy $\sum_i p_{i+} = 1$ and $p_{+1}, \cdots, p_{+c}$ must satisfy $\sum_j p_{+j} = 1$. So
+\[
+ |\Theta_1| - |\Theta_0| = rc - 1 - (r - 1) - (c - 1) = (r - 1)(c - 1).
+\]
+
+\begin{eg}
+ In our previous example, we wish to test $H_0$: the new and previous car sizes are independent. The actual data is:
+ \begin{center}
+ \begin{tabular}{cccccc}
+ \toprule
+ & & & New car &\\
+ & & Large & Medium & Small & \textbf{Total}\\\midrule
+ \multirow{4}{*}{\rotatebox[origin=c]{90}{Previous}\;\rotatebox[origin=c]{90}{car}}& Large & 56 & 52 & 42 & \textbf{150}\\
+ & Medium & 50 & 83 & 67 & \textbf{200}\\
+ & Small & 18 & 51 & 81 & \textbf{150}\\\cmidrule{2-6}
+ & \textit{Total} & \textit{124} & \textit{186} & \textit{190} & \textit{\textbf{500}}\\\bottomrule
+ \end{tabular}
+ \end{center}
+ while the expected values under $H_0$ are
+ \begin{center}
+ \begin{tabular}{cccccc}
+ \toprule
+ & & & New car &\\
+ & & Large & Medium & Small & \textbf{Total}\\\midrule
+ \multirow{4}{*}{\rotatebox[origin=c]{90}{Previous}\;\rotatebox[origin=c]{90}{car}}& Large & 37.2 & 55.8 & 57.0 & \textbf{150}\\
+ & Medium & 49.6 & 74.4 & 76.0 & \textbf{200}\\
+ & Small & 37.2 & 55.8 & 57.0 & \textbf{150}\\\cmidrule{2-6}
+ & \textit{Total} & \textit{124} & \textit{186} & \textit{190} & \textit{\textbf{500}}\\\bottomrule
+ \end{tabular}
+ \end{center}
+ Note the margins are the same. It is quite clear that they do not match well, but we can find the $p$ value to be sure.
+
+ $\displaystyle\sum\sum \frac{(o_{ij} - e_{ij})^2}{e_{ij}} = 36.20$, and the degrees of freedom is $(3 - 1)(3 - 1) = 4$.
+
+ From the tables, $\chi_4^2(0.05) = 9.488$ and $\chi_4^2(0.01) = 13.28$.
+
+ So our observed value of 36.20 is significant at the $1\%$ level, i.e.\ there is strong evidence against $H_0$. So we conclude that the new and previous car sizes are not independent.
+\end{eg}
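The arithmetic in this example is easily checked by machine. The following plain-Python sketch recomputes Pearson's statistic for the table above (the variable names are ours, not part of the course):

```python
# Pearson chi-squared test of independence for the car-size table.
obs = [[56, 52, 42],
       [50, 83, 67],
       [18, 51, 81]]

n = sum(sum(row) for row in obs)                          # grand total n_{++}
row_tot = [sum(row) for row in obs]                       # n_{i+}
col_tot = [sum(row[j] for row in obs) for j in range(3)]  # n_{+j}

# expected counts under H0: e_{ij} = n_{i+} n_{+j} / n
exp = [[row_tot[i] * col_tot[j] / n for j in range(3)] for i in range(3)]

chi2 = sum((obs[i][j] - exp[i][j]) ** 2 / exp[i][j]
           for i in range(3) for j in range(3))
df = (3 - 1) * (3 - 1)
print(round(chi2, 2), df)   # 36.2 on 4 degrees of freedom
```

Since $36.20 > \chi_4^2(0.01) = 13.28$, the test rejects $H_0$ at the $1\%$ level, as in the text.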
+\subsection{Tests of homogeneity, and connections to confidence intervals}
+\subsubsection{Tests of homogeneity}
+\begin{eg}
+ 150 patients were randomly allocated to three groups of 50 patients each. Two groups were given a new drug at different dosage levels, and the third group received a placebo. The responses were as shown in the table below.
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ & Improved & No difference & Worse & \textbf{Total}\\\midrule
+ Placebo & 18 & 17 & 15 & \textbf{50}\\
+ Half dose & 20 & 10 & 20 & \textbf{50}\\
+ Full dose & 25 & 13 & 12& \textbf{50}\\\midrule
+ \textit{Total} & \textit{63} & \textit{40} & \textit{47} & \textbf{\textit{150}}\\\bottomrule
+ \end{tabular}
+ \end{center}
+ Here the row totals are fixed in advance, in contrast to our last section, where the row totals were random variables.
+
+ For the above, we may be interested in testing $H_0:$ the probability of ``improved'' is the same for each of the three treatment groups, and so are the probabilities of ``no difference'' and ``worse'', i.e.\ $H_0$ says that we have homogeneity down the rows.
+\end{eg}
+In general, we have independent observations from $r$ multinomial distributions, each of which has $c$ categories, i.e.\ we observe an $r\times c$ table $(n_{ij})$, for $i = 1, \cdots, r$ and $j = 1, \cdots, c$, where
+\[
+ (N_{i1}, \cdots, N_{ic}) \sim \multinomial(n_{i+}, p_{i1}, \cdots, p_{ic})
+\]
+independently for each $i = 1, \cdots, r$.
+We want to test
+\[
+ H_0: p_{1j} = p_{2j} = \cdots = p_{rj} = p_j,
+\]
+for $j = 1, \cdots, c$, and
+\[
+ H_1: p_{ij}\text{ are unrestricted}.
+\]
+Using $H_1$, for any matrix of probabilities $(p_{ij})$,
+\[
+ \like((p_{ij})) = \prod_{i = 1}^r \frac{n_{i+}!}{n_{i1}!\cdots n_{ic}!}p_{i1}^{n_{i1}} \cdots p_{ic}^{n_{ic}},
+\]
+and
+\[
+ \log\like = \text{constant} + \sum_{i = 1}^r \sum_{j = 1}^c n_{ij}\log p_{ij}.
+\]
+Using Lagrangian methods, we find that $\hat{p}_{ij} = \frac{n_{ij}}{n_{i+}}$.
+
+Under $H_0$,
+\[
+ \log\like = \text{constant} + \sum_{j = 1}^c n_{+j}\log p_j.
+\]
+By Lagrangian methods, we have $\hat{p}_j = \frac{n_{+j}}{n_{++}}$.
+
+Hence
+\[
+ 2\log \Lambda = 2\sum_{i = 1}^{r}\sum_{j = 1}^c n_{ij}\log\left(\frac{\hat{p}_{ij}}{\hat{p}_j}\right) = 2\sum_{i = 1}^r\sum_{j = 1}^c n_{ij}\log\left(\frac{n_{ij}}{n_{i+}n_{+j}/n_{++}}\right),
+\]
+which is the same as what we had last time, when the row totals are unrestricted!
+
+We have $|\Theta_1| = r(c - 1)$ and $|\Theta_0| = c - 1$. So the degrees of freedom is $r(c - 1) - (c - 1) = (r - 1)(c - 1)$, and under $H_0$, $2\log\Lambda$ is approximately $\chi^2_{(r - 1)(c - 1)}$. Again, it is exactly the same as what we had last time!
+
+We reject $H_0$ if $2\log \Lambda > \chi_{(r - 1)(c - 1)}^2 (\alpha)$ for an approximate size $\alpha$ test.
+
+If we let $o_{ij}= n_{ij}, e_{ij} = \frac{n_{i+}n_{+j}}{n_{++}}$, and $\delta_{ij} = o_{ij} - e_{ij}$, using the same approximating steps as for Pearson's chi-squared, we obtain
+\[
+ 2\log \Lambda \approx \sum \frac{(o_{ij} - e_{ij})^2}{e_{ij}}.
+\]
+\begin{eg}
+ Continuing our previous example, our data is
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ & Improved & No difference & Worse & \textbf{Total}\\\midrule
+ Placebo & 18 & 17 & 15 & \textbf{50} \\
+ Half dose & 20 & 10 & 20 & \textbf{50} \\
+ Full dose & 25 & 13 & 12& \textbf{50} \\\midrule
+ \textit{Total} & \textit{63} & \textit{40} & \textit{47} & \textbf{\textit{150}} \\ \bottomrule
+ \end{tabular}
+ \end{center}
+ The expected under $H_0$ is
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ & Improved & No difference & Worse &\textbf{Total}\\\midrule
+ Placebo & 21 & 13.3 & 15.7 & \textbf{50} \\
+ Half dose & 21 & 13.3 & 15.7 & \textbf{50}\\
+ Full dose & 21 & 13.3 & 15.7 & \textbf{50}\\\midrule
+ \textit{Total}& \textit{63} & \textit{40} & \textit{47} & \textbf{\textit{150}}\\ \bottomrule
+ \end{tabular}
+ \end{center}
+ We find $2\log \Lambda = 5.129$, and we refer this to $\chi_4^2$. This is clearly not significant: the mean of $\chi_4^2$ is $4$, so an observed value of $5.129$ is something we would expect to arise solely by chance.
+
+ We can calculate the $p$-value: $\P(\chi_4^2 > 5.129) \approx 0.27$. Also, from tables, $\chi_4^2(0.05) = 9.488$, so our observed value is not significant at $5\%$, and the data are consistent with $H_0$.
+
+ We conclude that there is no evidence for a difference between the drug at the given doses and the placebo.
+
+ For interest,
+ \[
+ \sum\frac{(o_{ij} - e_{ij})^2}{e_{ij}} = 5.173,
+ \]
+ giving the same conclusion.
+\end{eg}
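Both statistics in this example can be recomputed in a few lines of Python; a sketch (our own variable names):

```python
import math

# Likelihood ratio and Pearson statistics for the drug-trial table.
obs = [[18, 17, 15],
       [20, 10, 20],
       [25, 13, 12]]

n = sum(map(sum, obs))
col_tot = [sum(row[j] for row in obs) for j in range(3)]

# e_{ij} = n_{i+} n_{+j} / n_{++}; every row total is 50 here
exp = [[sum(row) * col_tot[j] / n for j in range(3)] for row in obs]

two_log_lambda = 2 * sum(obs[i][j] * math.log(obs[i][j] / exp[i][j])
                         for i in range(3) for j in range(3))
pearson = sum((obs[i][j] - exp[i][j]) ** 2 / exp[i][j]
              for i in range(3) for j in range(3))
print(round(two_log_lambda, 3), round(pearson, 3))   # 5.129 and 5.173
```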
+\subsubsection{Confidence intervals and hypothesis tests}
+Confidence intervals or sets can be obtained by inverting hypothesis tests, and vice versa.
+
+\begin{defi}[Acceptance region]
+ The \emph{acceptance region} $A$ of a test is the complement of the critical region $C$.
+
+ Note that when we say ``acceptance'', we really mean ``non-rejection''! The name is purely for historical reasons.
+\end{defi}
+\begin{thm}[Duality of hypothesis tests and confidence intervals]\leavevmode
+ Suppose $X_1, \cdots, X_n$ have joint pdf $f_\mathbf{X}(\mathbf{x}\mid \theta)$ for $\theta\in \Theta$.
+ \begin{enumerate}
+ \item Suppose that for every $\theta_0\in \Theta$ there is a size $\alpha$ test of $H_0: \theta = \theta_0$. Denote the acceptance region by $A(\theta_0)$. Then the set $I(\mathbf{X}) = \{\theta:\mathbf{X}\in A(\theta)\}$ is a $100(1 - \alpha)\%$ confidence set for $\theta$.
+ \item Suppose $I(\mathbf{X})$ is a $100(1 - \alpha)\%$ confidence set for $\theta$. Then $A(\theta_0) = \{\mathbf{X}: \theta_0 \in I(\mathbf{X})\}$ is an acceptance region for a size $\alpha$ test of $H_0: \theta = \theta_0$.
+ \end{enumerate}
+\end{thm}
+Intuitively, this says that ``confidence intervals'' and ``hypothesis acceptance/rejection'' are the same thing. After gathering some data $\mathbf{X}$, we can produce a, say, $95\%$ confidence interval $(a, b)$. Then if we want to test the hypothesis $H_0: \theta = \theta_0$, we simply have to check whether $\theta_0 \in (a, b)$.
+
+On the other hand, if we have a test for each $H_0: \theta = \theta_0$, then the confidence interval is the set of all $\theta_0$ for which we would accept $H_0: \theta = \theta_0$.
+\begin{proof}
+ First note that $\theta_0\in I(\mathbf{X})$ iff $\mathbf{X}\in A(\theta_0)$.
+
+ For (i), since the test is size $\alpha$, we have
+ \[
+ \P(\text{accept }H_0\mid H_0\text{ is true}) = \P(\mathbf{X}\in A(\theta_0)\mid \theta=\theta_0) = 1 - \alpha.
+ \]
+ And so
+ \[
+ \P(\theta_0\in I(\mathbf{X})\mid \theta = \theta_0) = \P(\mathbf{X}\in A(\theta_0)\mid \theta = \theta_0) = 1 - \alpha.
+ \]
+ For (ii), since $I(\mathbf{X})$ is a $100(1 - \alpha)\%$ confidence set, we have
+ \[
+ \P(\theta_0\in I(\mathbf{X})\mid \theta = \theta_0) = 1- \alpha.
+ \]
+ So
+ \[
+ \P(\mathbf{X}\in A(\theta_0)\mid \theta = \theta_0) = \P(\theta_0\in I(\mathbf{X})\mid \theta = \theta_0) = 1- \alpha.\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Suppose $X_1, \cdots, X_n$ are iid $N(\mu, 1)$ random variables and we want a $95\%$ confidence set for $\mu$.
+
+ One way is to use the theorem and find the confidence set that belongs to the hypothesis test that we found in the previous example. We find a test of size 0.05 of $H_0 : \mu= \mu_0$ against $H_1: \mu\not= \mu_0$ that rejects $H_0$ when $|\sqrt{n}(\bar x - \mu_0)| > 1.96$ (where 1.96 is the upper $2.5\%$ point of $N(0, 1)$).
+
+ Then $I(\mathbf{X}) = \{\mu: \mathbf{X}\in A(\mu)\} = \{\mu:|\sqrt{n}(\bar X - \mu)| < 1.96\}$. So a $95\%$ confidence set for $\mu$ is $(\bar X - 1.96/\sqrt{n}, \bar X + 1.96/\sqrt{n})$.
+\end{eg}
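The duality can be seen concretely: for the $N(\mu, 1)$ example, accepting $H_0: \mu = \mu_0$ is the same event as $\mu_0$ landing in the interval. A small Python check with an invented sample:

```python
import math

x = [0.5, 1.2, -0.3, 0.8, 1.0]       # a made-up sample, treated as N(mu, 1)
n = len(x)
xbar = sum(x) / n

# 95% confidence interval from the text
lo = xbar - 1.96 / math.sqrt(n)
hi = xbar + 1.96 / math.sqrt(n)

for k in range(-10, 21):             # candidate values mu0 = -1.0, -0.9, ..., 2.0
    mu0 = k / 10
    accept = abs(math.sqrt(n) * (xbar - mu0)) < 1.96   # size-0.05 test accepts
    assert accept == (lo < mu0 < hi)                   # iff mu0 is in the interval
```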
+\subsection{Multivariate normal theory}
+\subsubsection{Multivariate normal distribution}
+So far, we have only worked with scalar random variables or a vector of iid random variables. In general, we can have a random (column) vector $\mathbf{X} = (X_1, \cdots, X_n)^T$, where the $X_i$ are correlated.
+
+The mean of this vector is given by
+\[
+ \boldsymbol\mu = \E[\mathbf{X}] = (\E(X_1), \cdots, \E(X_n))^T = (\mu_1, \cdots, \mu_n)^T.
+\]
+Instead of just the variance, we have the covariance matrix
+\[
+ \cov (\mathbf{X}) = \E[(\mathbf{X} - \boldsymbol\mu)(\mathbf{X} - \boldsymbol\mu)^T] = (\cov(X_i, X_j))_{ij},
+\]
+provided they exist, of course.
+
+We can multiply the vector $\mathbf{X}$ by an $m\times n$ matrix $A$. Then we have
+\[
+ \E[A\mathbf{X}] = A\boldsymbol\mu,
+\]
+and
+\[
+ \cov (A\mathbf{X}) = A\cov(\mathbf{X})A^T.\tag{$*$}
+\]
+The last one comes from
+\begin{align*}
+ \cov(A\mathbf{X}) &= \E[(A\mathbf{X} -\E[A\mathbf{X}])(A\mathbf{X} - \E[A\mathbf{X}])^T] \\
+ &= \E[A(\mathbf{X} - \E\mathbf{X})(\mathbf{X} - \E\mathbf{X})^T A^T]\\
+ &= A\E[(\mathbf{X} - \E\mathbf{X})(\mathbf{X} - \E\mathbf{X})^T]A^T.
+\end{align*}
+If we have two random vectors $\mathbf{V}, \mathbf{W}$, we can define the covariance $\cov(\mathbf{V}, \mathbf{W})$ to be a matrix with $(i, j)$th element $\cov(V_i, W_j)$. Then $\cov(A\mathbf{X}, B\mathbf{X}) = A\cov (\mathbf{X}) B^T$.
+
+An important distribution is a \emph{multivariate normal distribution}.
+
+\begin{defi}[Multivariate normal distribution]
+ $\mathbf{X}$ has a \emph{multivariate normal distribution} if, for every $\mathbf{t}\in \R^n$, the random variable $\mathbf{t}^T\mathbf{X}$ (i.e.\ $\mathbf{t}\cdot \mathbf{X}$) has a normal distribution. If $\E[\mathbf{X}] = \boldsymbol\mu$ and $\cov (\mathbf{X}) = \Sigma$, we write $\mathbf{X}\sim N_n(\boldsymbol\mu, \Sigma)$.
+\end{defi}
+
+Note that $\Sigma$ is symmetric and is positive semi-definite because by $(*)$,
+\[
+ \mathbf{t}^T\Sigma \mathbf{t} = \var(\mathbf{t}^T\mathbf{X}) \geq 0.
+\]
+So what is the pdf of a multivariate normal? And what is the moment generating function? Recall that a (univariate) normal $X\sim N(\mu, \sigma^2)$ has density
+\[
+ f_X(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\right),
+\]
+with moment generating function
+\[
+ M_X(s) = \E[e^{sX}] = \exp\left(\mu s + \frac{1}{2}\sigma^2 s^2\right).
+\]
+Hence for any $\mathbf{t}$, the moment generating function of $\mathbf{t}^T\mathbf{X}$ is given by
+\[
+ M_{\mathbf{t}^T \mathbf{X}}(s) = \E[e^{s\mathbf{t}^T\mathbf{X}}] = \exp\left(\mathbf{t}^T\boldsymbol\mu s + \frac{1}{2}\mathbf{t}^T\Sigma \mathbf{t}s^2\right).
+\]
+Hence $\mathbf{X}$ has mgf
+\[
+ M_\mathbf{X}(\mathbf{t}) = \E[e^{\mathbf{t}^T \mathbf{X}}]= M_{\mathbf{t}^T\mathbf{X}}(1) = \exp\left(\mathbf{t}^T\boldsymbol\mu + \frac{1}{2}\mathbf{t}^T \Sigma \mathbf{t}\right).\tag{$\dagger$}
+\]
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If $\mathbf{X} \sim N_n(\boldsymbol\mu, \Sigma)$, and $A$ is an $m\times n$ matrix, then $A\mathbf{X} \sim N_m (A\boldsymbol\mu, A\Sigma A^T)$.
+ \item If $\mathbf{X}\sim N_n(\mathbf{0}, \sigma^2 I)$, then
+ \[
+ \frac{|\mathbf{X}|^2}{\sigma^2} = \frac{\mathbf{X}^T\mathbf{X}}{\sigma^2} = \sum \frac{X_i^2}{\sigma ^2}\sim \chi_n^2.
+ \]
+ Instead of writing $|\mathbf{X}|^2/\sigma^2 \sim \chi_n^2$, we often just say $|\mathbf{X}|^2 \sim \sigma^2 \chi_n^2$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item See example sheet 3.
+ \item Immediate from definition of $\chi_n^2$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{prop}
+ Let $\mathbf{X}\sim N_n(\boldsymbol\mu, \Sigma)$. We split $\mathbf{X}$ up into two parts: $\mathbf{X} =
+ \begin{pmatrix}
+ \mathbf{X}_1\\
+ \mathbf{X}_2
+ \end{pmatrix}$, where $\mathbf{X}_i$ is an $n_i \times 1$ column vector and $n_1 + n_2 = n$.
+
+ Similarly write
+ \[
+ \boldsymbol\mu =
+ \begin{pmatrix}
+ \boldsymbol\mu_1\\
+ \boldsymbol\mu_2
+ \end{pmatrix}
+ ,\quad
+ \Sigma =
+ \begin{pmatrix}
+ \Sigma_{11} & \Sigma_{12}\\
+ \Sigma_{21} & \Sigma_{22}
+ \end{pmatrix},
+ \]
+ where $\Sigma_{ij}$ is an $n_i\times n_j$ matrix.
+
+ Then
+ \begin{enumerate}
+ \item $\mathbf{X}_i \sim N_{n_i}(\boldsymbol\mu_i, \Sigma_{ii})$
+ \item $\mathbf{X}_1$ and $\mathbf{X}_2$ are independent iff $\Sigma_{12} = 0$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item See example sheet 3.
+ \item Note that by symmetry of $\Sigma$, $\Sigma_{12} = 0$ if and only if $\Sigma_{21} = 0$.
+
+ From $(\dagger)$, $M_\mathbf{X}(\mathbf{t}) = \exp(\mathbf{t}^T \boldsymbol\mu + \frac{1}{2}\mathbf{t}^T\Sigma\mathbf{t})$ for each $\mathbf{t}\in \R^n$. We write $\mathbf{t} = \begin{pmatrix}\mathbf{t}_1\\\mathbf{t}_2\end{pmatrix}$. Then the mgf is equal to
+ \[
+ M_\mathbf{X}(\mathbf{t}) = \exp\left(\mathbf{t}_1^T\boldsymbol\mu_1 + \mathbf{t}_2^T\boldsymbol\mu_2 + \frac{1}{2}\mathbf{t}_1^T \Sigma_{11}\mathbf{t}_1 + \frac{1}{2}\mathbf{t}_2^T \Sigma_{22}\mathbf{t}_2 + \frac{1}{2}\mathbf{t}_1^T \Sigma_{12}\mathbf{t}_2 + \frac{1}{2}\mathbf{t}_2^T \Sigma_{21}\mathbf{t}_1\right).
+ \]
+ From (i), we know that $M_{\mathbf{X}_i}(\mathbf{t}_i) = \exp(\mathbf{t}_i^T\boldsymbol\mu_i + \frac{1}{2}\mathbf{t}_i^T \Sigma_{ii}\mathbf{t}_i)$. So $M_\mathbf{X}(\mathbf{t}) = M_{\mathbf{X}_1}(\mathbf{t}_1)M_{\mathbf{X}_2}(\mathbf{t}_2)$ for all $\mathbf{t}$ if and only if $\Sigma_{12} = 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{prop}
+ When $\Sigma$ is positive definite, $\mathbf{X}$ has pdf
+ \[
+ f_\mathbf{X}(\mathbf{x}; \boldsymbol\mu, \Sigma) = \frac{1}{|\Sigma|^{1/2}} \left(\frac{1}{\sqrt{2\pi}}\right)^n \exp\left[-\frac{1}{2}(\mathbf{x} - \boldsymbol\mu)^T\Sigma^{-1}(\mathbf{x} - \boldsymbol\mu)\right].
+ \]
+\end{prop}
+Note that $\Sigma$ is always positive semi-definite. The condition just forbids the case $|\Sigma| = 0$, since that would involve dividing by zero.
+
+\subsubsection{Normal random samples}
+We wish to use our knowledge about multivariate normals to study univariate normal data. In particular, we want to prove the following:
+
+\begin{thm}[Joint distribution of $\bar X$ and $S_{XX}$]
+ Suppose $X_1, \cdots, X_n$ are iid $N(\mu, \sigma^2)$ and $\bar X = \frac{1}{n} \sum X_i$, and $S_{XX} = \sum (X_i - \bar X)^2$. Then
+ \begin{enumerate}
+ \item $\bar X \sim N(\mu, \sigma^2/n)$
+ \item $S_{XX}/\sigma^2 \sim \chi_{n - 1}^2$.
+ \item $\bar X$ and $S_{XX}$ are independent.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ We have $\mathbf{X} \sim N_n(\boldsymbol\mu, \sigma^2I)$, where $\boldsymbol\mu = (\mu, \mu, \cdots, \mu)^T$.
+
+ Let $A$ be an $n\times n$ orthogonal matrix with the first row all $1/\sqrt{n}$ (the other rows are not important). One possible such matrix is
+ \[
+ A =
+ \begin{pmatrix}
+ \frac{1}{\sqrt{n}} & \frac{1}{\sqrt{n}} & \frac{1}{\sqrt{n}} & \frac{1}{\sqrt{n}} & \cdots & \frac{1}{\sqrt{n}}\\
+ \frac{1}{\sqrt{2\times 1}} & \frac{-1}{\sqrt{2\times 1}} & 0 & 0 & \cdots & 0\\
+ \frac{1}{\sqrt{3\times 2}} & \frac{1}{\sqrt{3\times 2}} & \frac{-2}{\sqrt{3\times 2}} & 0 & \cdots & 0\\
+ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
+ \frac{1}{\sqrt{n(n - 1)}} & \frac{1}{\sqrt{n(n - 1)}} & \frac{1}{\sqrt{n(n - 1)}} & \frac{1}{\sqrt{n(n - 1)}} & \cdots & \frac{-(n - 1)}{\sqrt{n(n - 1)}} \\
+ \end{pmatrix}
+ \]
+ Now define $\mathbf{Y} = A\mathbf{X}$. Then
+ \[
+ \mathbf{Y} \sim N_n(A\boldsymbol\mu, A\sigma^2IA^T) = N_n(A\boldsymbol\mu, \sigma^2 I).
+ \]
+ We have
+ \[
+ A\boldsymbol\mu = (\sqrt{n}\mu, 0, \cdots, 0)^T.
+ \]
+ So $Y_1 \sim N(\sqrt{n}\mu, \sigma^2)$ and $Y_i \sim N(0, \sigma^2)$ for $i = 2, \cdots, n$. Also, $Y_1, \cdots, Y_n$ are independent, since the covariance matrix $\sigma^2 I$ has every off-diagonal term $0$.
+
+ But from the definition of $A$, we have
+ \[
+ Y_1 = \frac{1}{\sqrt{n}}\sum_{i = 1}^n X_i = \sqrt{n} \bar X.
+ \]
+ So $\sqrt{n} \bar X \sim N(\sqrt{n}\mu, \sigma^2)$, or $\bar X \sim N(\mu, \sigma^2/n)$. Also
+ \begin{align*}
+ Y_2^2 + \cdots + Y_n^2 &= \mathbf{Y}^T\mathbf{Y} - Y_1^2\\
+ &= \mathbf{X}^TA^TA\mathbf{X} - Y_1^2\\
+ &= \mathbf{X}^T\mathbf{X} - n\bar X^2\\
+ &= \sum_{i = 1}^n X_i^2 - n\bar X^2\\
+ &= \sum_{i = 1}^n (X_i - \bar X)^2 \\
+ &= S_{XX}.
+ \end{align*}
+ So $S_{XX} = Y_2^2 + \cdots + Y^2_n \sim \sigma^2 \chi_{n - 1}^2$.
+
+ Finally, since $Y_1$ and $Y_2, \cdots, Y_n$ are independent, so are $\bar X$ and $S_{XX}$.
+\end{proof}
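The orthogonal matrix used in the proof (the Helmert matrix) can be constructed explicitly, and the identities $Y_1 = \sqrt{n}\bar X$ and $Y_2^2 + \cdots + Y_n^2 = S_{XX}$ verified numerically. A sketch, assuming \texttt{numpy} is available:

```python
import numpy as np

n = 5
A = np.zeros((n, n))
A[0, :] = 1 / np.sqrt(n)              # first row: all entries 1/sqrt(n)
for i in range(1, n):                 # row i: i copies of 1/sqrt(i(i+1)), then -i/sqrt(i(i+1))
    A[i, :i] = 1 / np.sqrt(i * (i + 1))
    A[i, i] = -i / np.sqrt(i * (i + 1))

assert np.allclose(A @ A.T, np.eye(n))           # A is orthogonal

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # any data vector will do
y = A @ x
assert np.isclose(y[0], np.sqrt(n) * x.mean())                       # Y_1 = sqrt(n) xbar
assert np.isclose((y[1:] ** 2).sum(), ((x - x.mean()) ** 2).sum())   # = S_XX
```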
+\subsection{Student's \texorpdfstring{$t$}{t}-distribution}
+\begin{defi}[$t$-distribution]
+ Suppose that $Z$ and $Y$ are independent, $Z\sim N(0, 1)$ and $Y\sim \chi_k^2$. Then
+ \[
+ T = \frac{Z}{\sqrt{Y/k}}
+ \]
+ is said to have a $t$-distribution on $k$ degrees of freedom, and we write $T\sim t_k$.
+\end{defi}
+
+The density of $t_k$ turns out to be
+\[
+ f_T(t) = \frac{\Gamma((k + 1)/2)}{\Gamma(k/2)} \frac{1}{\sqrt{\pi k}}\left(1 + \frac{t^2}{k}\right)^{-(k+1)/2}.
+\]
+This density is symmetric, bell-shaped, and has a maximum at $t = 0$, which is rather like the standard normal density. However, it can be shown that $\P(T > t) > \P(Z > t)$, i.e.\ the $t$-distribution has a ``fatter'' tail. Also, as $k \to \infty$, $t_k$ approaches the standard normal distribution.
+
+\begin{prop}
+ If $k > 1$, then $\E_k(T) = 0$.
+
+ If $k > 2$, then $\var_k(T) = \frac{k}{k - 2}$.
+
+ If $k = 2$, then $\var_k(T) = \infty$.
+
+ In all other cases, the values are undefined. In particular, the $k = 1$ case has undefined mean and variance. This is known as the \emph{Cauchy distribution}.
+\end{prop}
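As a sanity check, the density above can be integrated numerically: its total mass should be $1$ and, for $k = 4$, its variance should be $k/(k - 2) = 2$. A rough Python sketch using a Riemann sum on $[-40, 40]$ (truncating the tails costs a little accuracy in the variance):

```python
import math

def t_density(t, k):
    """Density of the t-distribution on k degrees of freedom."""
    c = math.gamma((k + 1) / 2) / (math.gamma(k / 2) * math.sqrt(math.pi * k))
    return c * (1 + t * t / k) ** (-(k + 1) / 2)

k, h = 4, 0.01
grid = [-40 + i * h for i in range(8001)]                # -40, -39.99, ..., 40
mass = h * sum(t_density(t, k) for t in grid)            # close to 1
var = h * sum(t * t * t_density(t, k) for t in grid)     # close to k/(k-2) = 2
```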
+
+\begin{notation}
+ We write $t_k(\alpha)$ for the upper $100\alpha\%$ point of the $t_k$ distribution, so that $\P(T > t_k(\alpha)) = \alpha$.
+\end{notation}
+
+Why would we define such a weird distribution? The typical application is to study random samples with unknown mean \emph{and} unknown variance.
+
+Let $X_1, \cdots, X_n$ be iid $N(\mu, \sigma^2)$. Then $\bar X \sim N(\mu, \sigma^2/n)$. So $Z = \frac{\sqrt{n}(\bar X - \mu)}{\sigma} \sim N(0, 1)$.
+
+Also, $S_{XX}/\sigma^2 \sim \chi^2_{n - 1}$ and is independent of $\bar X$, and hence $Z$. So
+\[
+ \frac{\sqrt{n}(\bar X - \mu)/\sigma}{\sqrt{S_{XX}/((n - 1)\sigma^2)}} \sim t_{n - 1},
+\]
+or
+\[
+ \frac{\sqrt{n}(\bar X - \mu)}{\sqrt{S_{XX}/(n - 1)}} \sim t_{n - 1}.
+\]
+We write $\tilde{\sigma}^2 = \frac{S_{XX}}{n - 1}$ (note that this is the unbiased estimator). Then a $100(1 - \alpha)\%$ confidence interval for $\mu$ is found from
+\[
+ 1 - \alpha = \P\left(-t_{n - 1}\left(\frac{\alpha}{2}\right) \leq \frac{\sqrt{n}(\bar X - \mu)}{\tilde{\sigma}} \leq t_{n - 1}\left(\frac{\alpha}{2}\right)\right).
+\]
+This has endpoints
+\[
+ \bar X \pm \frac{\tilde{\sigma}}{\sqrt{n}}t_{n - 1}\left(\frac{\alpha}{2}\right).
+\]
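A small worked instance in Python, with an invented sample of size $4$ and the tabled value $t_3(0.025) = 3.182$:

```python
import math

x = [4.2, 3.8, 5.0, 4.6]                 # invented data, modelled as N(mu, sigma^2)
n = len(x)
xbar = sum(x) / n
S_xx = sum((xi - xbar) ** 2 for xi in x)
sigma_tilde = math.sqrt(S_xx / (n - 1))  # square root of the unbiased variance estimator

t_crit = 3.182                           # t_3(0.025), read from tables
half_width = t_crit * sigma_tilde / math.sqrt(n)
lo, hi = xbar - half_width, xbar + half_width
print(round(lo, 2), round(hi, 2))        # the 95% interval, roughly (3.58, 5.22)
```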
+
+\section{Linear models}
+\subsection{Linear models}
+Linear models can be used to explain or model the relationship between a \emph{response} (or \emph{dependent}) variable, and one or more \emph{explanatory} variables (or \emph{covariates} or \emph{predictors}). As the name suggests, we assume the relationship is linear.
+
+\begin{eg}
+ How do motor insurance claim rates (response) depend on the age and sex of the driver, and where they live (explanatory variables)?
+\end{eg}
+
+It is important to note that (unless otherwise specified), we do \emph{not} assume normality in our calculations here.
+
+Suppose we have $p$ covariates $x_j$, and we have $n$ observations $Y_i$. We assume $n > p$, or else we can pick the parameters to fit our data exactly. Then each observation can be written as
+\[
+ Y_i = \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i,\tag{$*$}
+\]
+for $i = 1, \cdots, n$. Here
+\begin{itemize}
+ \item $\beta_1, \cdots, \beta_p$ are unknown, fixed parameters we wish to work out (with $n > p$)
+ \item $x_{i1}, \cdots, x_{ip}$ are the values of the $p$ covariates for the $i$th response (which are all known).
+ \item $\varepsilon_1, \cdots, \varepsilon_n$ are independent (or possibly just uncorrelated) random variables with mean 0 and variance $\sigma^2$.
+\end{itemize}
+We think of the $\beta_j x_{ij}$ terms as the causal effects of $x_{ij}$, and $\varepsilon_i$ as a random fluctuation (error term).
+
+Then we clearly have
+\begin{itemize}
+ \item $\E(Y_i) = \beta_1 x_{i1} + \cdots + \beta_px_{ip}$.
+ \item $\var(Y_i) = \var(\varepsilon_i) = \sigma^2$.
+ \item $Y_1, \cdots, Y_n$ are independent.
+\end{itemize}
+Note that $(*)$ is linear in the parameters $\beta_1, \cdots, \beta_p$. Obviously the real world can be much more complicated. But this is much easier to work with.
+
+\begin{eg}
+ For each of 24 males, the maximum volume of oxygen uptake in the blood and the time taken to run 2 miles (in minutes) were measured. We want to know how the time taken depends on oxygen uptake.
+
+ We might get the results
+ \begin{center}
+ \begin{tabular}{ccccccccc}
+ \toprule
+ Oxygen & 42.3 & 53.1 & 42.1 & 50.1 & 42.5 & 42.5 & 47.8 & 49.9 \\
+ Time & 918 & 805 & 892 & 962 & 968 & 907 & 770 & 743 \\
+ \midrule
+ Oxygen & 36.2 & 49.7 & 41.5 & 46.2 & 48.2 & 43.2 & 51.8 & 53.3\\
+ Time & 1045 & 810 & 927 & 813 & 858 & 860 & 760 & 747\\
+ \midrule
+ Oxygen & 53.3 & 47.2 & 56.9 & 47.8 & 48.7 & 53.7 & 60.6 & 56.7\\
+ Time & 743 & 803 & 683 & 844 & 755 & 700 & 748 & 775\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ For each individual $i$, we let $Y_i$ be the time to run 2 miles, and $x_i$ be the maximum volume of oxygen uptake, $i = 1, \cdots, 24$. We might want to fit a straight line to it. So a possible model is
+ \[
+ Y_i = a + bx_i + \varepsilon_i,
+ \]
+ where $\varepsilon_i$ are independent random variables with variance $\sigma^2$, and $a$ and $b$ are constants.
+\end{eg}
+
+The subscripts in the equation make it tempting to write them as matrices:
+\[
+ \mathbf{Y} =
+ \begin{pmatrix}
+ Y_1\\
+ \vdots\\
+ Y_n
+ \end{pmatrix}, \quad
+ X =
+ \begin{pmatrix}
+ x_{11} & \cdots & x_{1p}\\
+ \vdots & \ddots & \vdots\\
+ x_{n1} & \cdots & x_{np}
+ \end{pmatrix},\quad
+ \boldsymbol\beta =
+ \begin{pmatrix}
+ \beta_1\\
+ \vdots\\
+ \beta_p
+ \end{pmatrix}, \quad
+ \boldsymbol\varepsilon =
+ \begin{pmatrix}
+ \varepsilon_1\\
+ \vdots\\
+ \varepsilon_n
+ \end{pmatrix}
+\]
+Then the equation becomes
+\[
+ \mathbf{Y} = X\boldsymbol\beta + \boldsymbol\varepsilon.\tag{2}
+\]
+We also have
+\begin{itemize}
+ \item $\E(\boldsymbol\varepsilon) = \mathbf{0}$.
+ \item $\cov (\mathbf{Y}) = \sigma^2 I$.
+\end{itemize}
+We assume throughout that $X$ has full rank $p$, i.e.\ the columns are linearly independent, and that the error variance is the same for each observation. We say this is the \emph{homoscedastic} case, as opposed to \emph{heteroscedastic}.
+
+\begin{eg}
+ Continuing our example, we have, in matrix form
+ \[
+ \mathbf{Y} =
+ \begin{pmatrix}
+ Y_1\\
+ \vdots\\
+ Y_{24}
+ \end{pmatrix}, \quad
+ X =
+ \begin{pmatrix}
+ 1 & x_1\\
+ \vdots & \vdots \\
+ 1 & x_{24}
+ \end{pmatrix}, \quad
+ \boldsymbol\beta =
+ \begin{pmatrix}
+ a\\
+ b
+ \end{pmatrix}, \quad
+ \boldsymbol\varepsilon =
+ \begin{pmatrix}
+ \varepsilon_1\\
+ \vdots\\
+ \varepsilon_{24}
+ \end{pmatrix}
+ \]
+ Then
+ \[
+ \mathbf{Y} = X\boldsymbol\beta + \boldsymbol\varepsilon.
+ \]
+\end{eg}
+
+\begin{defi}[Least squares estimator]
+ In a linear model $\mathbf{Y} = X\boldsymbol\beta + \boldsymbol\varepsilon$, the \emph{least squares estimator} $\hat{\boldsymbol\beta}$ of $\boldsymbol\beta$ minimizes
+ \begin{align*}
+ S(\boldsymbol\beta) &= \|\mathbf{Y} - X\boldsymbol\beta\|^2\\
+ &= (\mathbf{Y} - X\boldsymbol\beta)^T(\mathbf{Y} - X\boldsymbol\beta)\\
+ &= \sum_{i = 1}^n (Y_i - x_{ij}\beta_j)^2
+ \end{align*}
+ with implicit summation over $j$.
+
+ If we plot the points on a graph, then the least squares estimator minimizes the (square of the) vertical distance between the points and the line.
+\end{defi}
+
+To minimize it, we want
+\[
+ \left.\frac{\partial S}{\partial \beta_k}\right|_{\boldsymbol\beta = \hat{\boldsymbol\beta}} = 0
+\]
+for all $k$. So
+\[
+ -2 x_{ik}(Y_i - x_{ij}\hat{\beta}_j) = 0
+\]
+for each $k$ (with implicit summation over $i$ and $j$), i.e.\
+\[
+ x_{ik}x_{ij}\hat{\beta}_j = x_{ik}Y_i
+\]
+for all $k$. Putting this back in matrix form, we have
+\begin{prop}
+ The least squares estimator satisfies
+ \[
+ X^TX\hat{\boldsymbol\beta} = X^T\mathbf{Y}.\tag{3}
+ \]
+\end{prop}
+We could also have derived this by completing the square of $(\mathbf{Y} - X\boldsymbol\beta)^T(\mathbf{Y} - X\boldsymbol\beta)$, but that would be more complicated.
+
+In order to find $\hat{\boldsymbol\beta}$, our life would be much easier if $X^TX$ has an inverse. Fortunately, it always does. We assumed that $X$ is of full rank $p$. Then
+\[
+ \mathbf{t}^TX^TX\mathbf{t} = (X\mathbf{t})^T(X\mathbf{t}) = \|X\mathbf{t}\|^2 > 0
+\]
+for $\mathbf{t}\not= \mathbf{0}$ in $\R^p$ (the last inequality is since if there were a $\mathbf{t}$ such that $\|X\mathbf{t}\| = 0$, then we would have produced a linear combination of the columns of $X$ that gives $\mathbf{0}$). So $X^TX$ is positive definite, and hence has an inverse. So
+\[
+ \hat{\boldsymbol\beta} = (X^TX)^{-1} X^T\mathbf{Y},\tag{4}
+\]
+which is linear in $\mathbf{Y}$.
+
+We have
+\[
+ \E(\hat{\boldsymbol\beta}) = (X^TX)^{-1}X^T\E[\mathbf{Y}] = (X^TX)^{-1}X^TX\boldsymbol\beta = \boldsymbol\beta.
+\]
+So $\hat{\boldsymbol\beta}$ is an unbiased estimator for $\boldsymbol\beta$. Also
+\[
+ \cov(\hat{\boldsymbol\beta}) = (X^TX)^{-1}X^T \cov(\mathbf{Y})X(X^TX)^{-1} = \sigma^2(X^TX)^{-1} ,\tag{5}
+\]
+since $\cov \mathbf{Y} = \sigma^2 I$.
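The formula $(4)$ is easy to check against a library implementation. A sketch, assuming \texttt{numpy} is available (the data are invented):

```python
import numpy as np

X = np.column_stack([np.ones(5), np.arange(1.0, 6.0)])   # intercept and one covariate
Y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)      # solves the normal equations (3)
beta_np, *_ = np.linalg.lstsq(X, Y, rcond=None)   # numpy's own least squares

assert np.allclose(beta_hat, beta_np)
print(beta_hat)                                   # approximately [0.13, 0.97]
```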
+
+\subsection{Simple linear regression}
+What we did above was \emph{so} complicated. If we have a simple linear regression model
+\[
+ Y_i = a + bx_i + \varepsilon_i.
+\]
+then we can reparameterise it to
+\[
+ Y_i = a' + b(x_i - \bar x) + \varepsilon_i,\tag{6}
+\]
+where $\bar x = \sum x_i/n$ and $a' = a + b\bar x$. Since $\sum (x_i - \bar x) = 0$, this leads to simplified calculations.
+
+In matrix form,
+\[
+ X =
+ \begin{pmatrix}
+ 1 & (x_1 - \bar x)\\
+ \vdots & \vdots \\
+ 1 & (x_{24} - \bar x)
+ \end{pmatrix}.
+\]
+Since $\sum (x_i - \bar x) = 0$, in $X^TX$, the off-diagonals are all $0$, and we have
+\[
+ X^TX =
+ \begin{pmatrix}
+ n & 0\\
+ 0 & S_{xx}
+ \end{pmatrix},
+\]
+where $S_{xx} = \sum (x_i - \bar x)^2$.
+
+Hence
+\[
+ (X^TX)^{-1} =
+ \begin{pmatrix}
+ \frac{1}{n} & 0\\
+ 0 & \frac{1}{S_{xx}}
+ \end{pmatrix}
+\]
+So
+\[
+ \hat{\boldsymbol\beta} = (X^TX)^{-1}X^T \mathbf{Y} =
+ \begin{pmatrix}
+ \bar Y \\
+ \frac{S_{xY}}{S_{xx}}
+ \end{pmatrix},
+\]
+where $S_{xY} = \sum Y_i(x_i - \bar x)$.
+
+Hence the estimated intercept is $\hat{a}' = \bar y$, and the estimated gradient is
+\begin{align*}
+ \hat{b} &= \frac{S_{xy}}{S_{xx}}\\
+ &= \frac{\sum_i y_i(x_i - \bar x)}{\sum_i (x_i - \bar x)^2}\\
+ &= \frac{\sum_i (y_i - \bar y)(x_i - \bar x)}{\sqrt{\sum_i (x_i - \bar x)^2\sum_i (y_i - \bar y)^2}}\times \sqrt{\frac{S_{yy}}{S_{xx}}}\tag{$*$}\\
+ &= r \times \sqrt{\frac{S_{yy}}{S_{xx}}}.
+\end{align*}
+We have $(*)$ since $\sum \bar y(x_i - \bar x) = 0$, so we can subtract it from the numerator; the remaining factors simply multiply and divide by $\sqrt{S_{yy}}$.
+
+So the gradient is the \emph{Pearson product-moment correlation coefficient} $r$ times the ratio of the empirical standard deviations of the $y$'s and $x$'s (note that the gradient is the same whether the $x$'s are standardised to have mean 0 or not).
+
+Hence we get $\cov(\hat{\boldsymbol\beta}) = (X^TX)^{-1}\sigma^2$, and so from our expression of $(X^TX)^{-1}$,
+\[
+ \var(\hat{a}') = \var (\bar Y) = \frac{\sigma^2}{n}, \quad\var(\hat{b}) = \frac{\sigma^2}{S_{xx}}.
+\]
+Note that these estimators are uncorrelated.
+
+Note also that these are obtained without any explicit distributional assumptions.
+
+\begin{eg}
+ Continuing our previous oxygen/time example, we have $\bar y = 826.5$, $S_{xx} = 783.5 \approx 28.0^2$, $S_{xy} = -10077$, $S_{yy} \approx 444^2$, $r = -0.81$, $\hat b = -12.9$.
+\end{eg}
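These summary statistics can be recomputed directly from the table (values agree with those quoted, up to rounding):

```python
import math

# Oxygen uptake (x) and 2-mile time (y) for the 24 subjects, read off the table.
x = [42.3, 53.1, 42.1, 50.1, 42.5, 42.5, 47.8, 49.9,
     36.2, 49.7, 41.5, 46.2, 48.2, 43.2, 51.8, 53.3,
     53.3, 47.2, 56.9, 47.8, 48.7, 53.7, 60.6, 56.7]
y = [918, 805, 892, 962, 968, 907, 770, 743,
     1045, 810, 927, 813, 858, 860, 760, 747,
     743, 803, 683, 844, 755, 700, 748, 775]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
S_xx = sum((xi - xbar) ** 2 for xi in x)
S_yy = sum((yi - ybar) ** 2 for yi in y)
S_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b_hat = S_xy / S_xx                         # estimated gradient
r = S_xy / math.sqrt(S_xx * S_yy)           # correlation coefficient
print(ybar, round(b_hat, 1), round(r, 2))   # 826.5, about -12.9 and -0.81
```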
+
+\begin{thm}[Gauss Markov theorem]
+ In a full rank linear model, let $\hat{\boldsymbol\beta}$ be the least squares estimator of $\boldsymbol\beta$ and let $\boldsymbol\beta^*$ be any other unbiased estimator for $\boldsymbol\beta$ which is linear in the $Y_i$'s. Then
+ \[
+ \var(\mathbf{t}^T\hat{\boldsymbol\beta}) \leq \var (\mathbf{t}^T\boldsymbol\beta^*).
+ \]
+ for all $\mathbf{t}\in \R^p$. We say that $\hat{\boldsymbol\beta}$ is the \emph{best linear unbiased estimator} of $\boldsymbol\beta$ (BLUE).
+\end{thm}
+
+\begin{proof}
+ Since $\boldsymbol\beta^*$ is linear in the $Y_i$'s, $\boldsymbol\beta^* = A\mathbf{Y}$ for some $p\times n$ matrix $A$.
+
+ Since $\boldsymbol\beta^*$ is an unbiased estimator, we must have $\E[\boldsymbol\beta^*] = \boldsymbol\beta$. However, since $\boldsymbol\beta^* = A\mathbf{Y}$, $\E[\boldsymbol\beta^*] = A\E[\mathbf{Y}] = AX\boldsymbol\beta$. So we must have $\boldsymbol\beta = AX\boldsymbol\beta$. Since this holds for any $\boldsymbol\beta$, we must have $AX = I_p$. Now
+ \begin{align*}
+ \cov(\boldsymbol\beta^*) &= \E[(\boldsymbol\beta^* - \boldsymbol\beta)(\boldsymbol\beta^* - \boldsymbol\beta)^T]\\
+ &= \E[(A\mathbf{Y} - \boldsymbol\beta)(A\mathbf{Y} - \boldsymbol\beta)^T]\\
+ &= \E[(AX\boldsymbol\beta + A\boldsymbol\varepsilon - \boldsymbol\beta)(AX\boldsymbol\beta + A\boldsymbol\varepsilon - \boldsymbol\beta)^T]\\
+ \intertext{Since $AX\boldsymbol\beta = \boldsymbol\beta$, this is equal to}
+ &= \E[A\boldsymbol\varepsilon(A\boldsymbol\varepsilon)^T]\\
+ &= A(\sigma^2I)A^T\\
+ &= \sigma^2 AA^T.
+ \end{align*}
+ Now let $\boldsymbol\beta^* - \hat{\boldsymbol\beta} = (A - (X^TX)^{-1}X^T)\mathbf{Y} = B\mathbf{Y}$, for some $B$. Then
+ \[
+ BX = AX - (X^TX)^{-1}X^TX = I_p - I_p = 0.
+ \]
+ By definition, we have $A\mathbf{Y} = B\mathbf{Y} + (X^TX)^{-1}X^T \mathbf{Y}$, and this is true for all $\mathbf{Y}$. So $A = B + (X^TX)^{-1}X^T$. Hence
+ \begin{align*}
+ \cov(\boldsymbol\beta^*) &= \sigma^2AA^T\\
+ &= \sigma^2(B + (X^TX)^{-1}X^T)(B + (X^TX)^{-1}X^T)^T\\
+ &= \sigma^2(BB^T + (X^TX)^{-1})\\
+ &= \sigma^2 BB^T + \cov(\hat{\boldsymbol\beta}).
+ \end{align*}
+ Note that in the second line, the cross-terms disappear since $BX = 0$.
+
+ So for any $\mathbf{t}\in \R^p$, we have
+ \begin{align*}
+ \var(\mathbf{t}^T\boldsymbol\beta^*) &= \mathbf{t}^T\cov(\boldsymbol\beta^*)\mathbf{t} \\
+ &= \mathbf{t}^T\cov (\hat{\boldsymbol\beta})\mathbf{t} + \mathbf{t}^TBB^T\mathbf{t}\sigma^2\\
+ &= \var(\mathbf{t}^T \hat{\boldsymbol\beta}) + \sigma^2\|B^T\mathbf{t}\|^2\\
+ &\geq \var (\mathbf{t}^T\hat{\boldsymbol\beta}).
+ \end{align*}
+ Taking $\mathbf{t} = (0, \cdots, 1, 0, \cdots, 0)^T$ with a $1$ in the $i$th position, we have
+ \[
+ \var(\hat{\beta_i}) \leq \var(\beta_i^*).\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Fitted values and residuals]
+ $\hat{\mathbf{Y}} = X\hat{\boldsymbol\beta}$ is the \emph{vector of fitted values}. These are what our model says $\mathbf{Y}$ should be.
+
+ $\mathbf{R} = \mathbf{Y} - \hat{\mathbf{Y}}$ is the \emph{vector of residuals}. These are the deviations of our model from reality.
+
+ The \emph{residual sum of squares} is
+ \[
+ \mathrm{RSS} = \|\mathbf{R}\|^2 = \mathbf{R}^T\mathbf{R} = (\mathbf{Y} - X\hat{\boldsymbol\beta})^T(\mathbf{Y} - X\hat{\boldsymbol\beta}).
+ \]
+\end{defi}
+We can give these a geometric interpretation. Note that $X^T\mathbf{R} = X^T(\mathbf{Y} - \hat{\mathbf{Y}}) = X^T\mathbf{Y} - X^TX\hat{\boldsymbol\beta} = 0$ by our formula for $\hat{\boldsymbol\beta}$. So $\mathbf{R}$ is orthogonal to the column space of $X$.
+
+Write $\hat{\mathbf{Y}} = X\hat{\boldsymbol\beta} = X(X^TX)^{-1}X^T\mathbf{Y} = P\mathbf{Y}$, where $P = X(X^TX)^{-1}X^T$. Then $P$ represents an orthogonal projection of $\R^n$ onto the space spanned by the columns of $X$, i.e.\ it projects the actual data $\mathbf{Y}$ to the fitted values $\hat{\mathbf{Y}}$. Then $\mathbf{R}$ is the part of $\mathbf{Y}$ orthogonal to the column space of $X$.
+
+The projection matrix $P$ is idempotent and symmetric, i.e.\ $P^2 = P$ and $P^T = P$.
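These properties are easy to verify numerically. A short Python sketch (with an illustrative $X$ of ours) checks idempotence, symmetry, $\tr(P) = p$ (which will reappear in the rank-equals-trace lemma below), and the orthogonality of the residuals to the column space of $X$:

```python
import numpy as np

# Illustrative design matrix with rank p = 2 (not from the notes).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
n, p = X.shape

P = X @ np.linalg.inv(X.T @ X) @ X.T   # orthogonal projection onto col(X)

assert np.allclose(P @ P, P)           # idempotent: P^2 = P
assert np.allclose(P, P.T)             # symmetric: P^T = P
assert np.isclose(np.trace(P), p)      # rank(P) = tr(P) = p

# Residuals are orthogonal to the column space of X.
Y = np.array([1.0, 3.0, 2.0, 5.0])
R = Y - P @ Y
assert np.allclose(X.T @ R, 0)
```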
+\subsection{Linear models with normal assumptions}
+So far, we have not assumed anything about our variables. In particular, we have not assumed that they are normal. So what further information can we obtain by assuming normality?
+
+\begin{eg}
 Suppose we want to measure the resistivity of silicon wafers. We have five instruments, and five wafers were measured by each instrument (so we have 25 wafers in total). We assume that the silicon wafers are all the same, and want to see whether the instruments are consistent with each other. The results are as follows:
+ \begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
+ & & \multicolumn{5}{c}{Wafer}\\
+ & & 1 & 2 & 3 & 4 & 5\\
+ \midrule
+ \multirow{5}{*}{\rotatebox[origin=c]{90}{Instrument}}
+ & 1 & 130.5 & 112.4 & 118.9 & 125.7 & 134.0\\
+ & 2 & 130.4 & 138.2 & 116.7 & 132.6 & 104.2\\
+ & 3 & 113.0 & 120.5 & 128.9 & 103.4 & 118.1\\
+ & 4 & 128.0 & 117.5 & 114.9 & 114.9 & 98.8\\
+ & 5 & 121.2 & 110.5 & 118.5 & 100.5 & 120.9\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Let $Y_{i,j}$ be the resistivity of the $j$th wafer measured by instrument $i$, where $i, j = 1, \cdots, 5$. A possible model is
+ \[
+ Y_{i,j} = \mu_i + \varepsilon_{i, j},
+ \]
+ where $\varepsilon_{ij}$ are independent random variables such that $\E[\varepsilon_{ij}] = 0$ and $\var(\varepsilon_{ij}) = \sigma^2$, and the $\mu_i$'s are unknown constants.
+
+ This can be written in matrix form, with
+ \[
+ \mathbf{Y} =
+ \begin{pmatrix}
+ Y_{1, 1}\\\vdots\\Y_{1, 5}\\
+ Y_{2, 1}\\\vdots\\Y_{2, 5}\\
+ \vdots\\
+ Y_{5, 1}\\\vdots\\Y_{5, 5}\\
+ \end{pmatrix}, \quad
+ X =
+ \begin{pmatrix}
+ 1 & 0 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 1 & 0 & \cdots & 0\\
+ 0 & 1 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 1 & \cdots& 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & 1\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & 1
+ \end{pmatrix}, \quad
+ \boldsymbol\beta =
+ \begin{pmatrix}
+ \mu_1\\ \mu_2\\ \mu_3\\ \mu_4\\ \mu_5
+ \end{pmatrix}, \quad
+ \boldsymbol\varepsilon =
+ \begin{pmatrix}
+ \varepsilon_{1, 1} \\ \vdots \\ \varepsilon_{1, 5}\\
+ \varepsilon_{2, 1} \\ \vdots \\ \varepsilon_{2, 5}\\ \vdots\\
+ \varepsilon_{5, 1} \\ \vdots \\ \varepsilon_{5, 5}\\
+ \end{pmatrix}
+ \]
+ Then
+ \[
+ \mathbf{Y} = X\boldsymbol\beta + \boldsymbol\varepsilon.
+ \]
+ We have
+ \[
+ X^TX =
+ \begin{pmatrix}
+ 5 & 0 & \cdots & 0\\
+ 0 & 5 & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & 5
+ \end{pmatrix}
+ \]
+ Hence
+ \[
+ (X^TX)^{-1} =
+ \begin{pmatrix}
+ \frac{1}{5} & 0 & \cdots & 0\\
+ 0 & \frac{1}{5} & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & \frac{1}{5}
+ \end{pmatrix}
+ \]
+ So we have
+ \[
+ \hat{\boldsymbol\mu} = (X^TX)^{-1}X^T\mathbf{Y} =
+ \begin{pmatrix}
+ \bar Y_1 \\\vdots\\ \bar Y_5
+ \end{pmatrix}
+ \]
+ The residual sum of squares is
+ \[
+ \mathrm{RSS} = \sum_{i = 1}^5 \sum_{j = 1}^5 (Y_{i, j} - \hat{\mu}_i)^2 = \sum_{i = 1}^5 \sum_{j = 1}^5 (Y_{i, j} - \bar Y_i)^2 = 2170.
+ \]
 This has $n - p = 25 - 5 = 20$ degrees of freedom. We will later see that $\tilde{\sigma} = \sqrt{\mathrm{RSS}/(n - p)} = 10.4$.
+
+ Note that we still haven't used normality!
+\end{eg}
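We can reproduce these numbers directly from the table. The following Python sketch (ours, not part of the notes) recovers $\hat{\mu}_1 = 124.3$, $\mathrm{RSS} \approx 2173$ (i.e.\ $2170$ to three significant figures) and $\tilde{\sigma} \approx 10.4$:

```python
# Resistivity data from the table above: rows are instruments 1 to 5.
y = [
    [130.5, 112.4, 118.9, 125.7, 134.0],
    [130.4, 138.2, 116.7, 132.6, 104.2],
    [113.0, 120.5, 128.9, 103.4, 118.1],
    [128.0, 117.5, 114.9, 114.9,  98.8],
    [121.2, 110.5, 118.5, 100.5, 120.9],
]

mu_hat = [sum(row) / len(row) for row in y]       # group means, Y-bar_i
rss = sum((yij - m) ** 2 for row, m in zip(y, mu_hat) for yij in row)
sigma_tilde = (rss / (25 - 5)) ** 0.5

print(mu_hat[0], rss, sigma_tilde)   # 124.3, about 2173.3, about 10.42
```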
+
+We now make a normal assumption:
+\[
+ \mathbf{Y} = X\boldsymbol\beta + \boldsymbol\varepsilon,\quad \boldsymbol\varepsilon\sim N_n(\mathbf{0}, \sigma^2 I),\quad \rank(X) = p < n.
+\]
+This is a special case of the linear model we just had, so all previous results hold.
+
Since $\mathbf{Y} \sim N_n(X\boldsymbol\beta, \sigma^2 I)$, the log-likelihood is
+\[
+ l(\boldsymbol\beta, \sigma^2) = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log \sigma^2 - \frac{1}{2\sigma^2}S(\boldsymbol\beta),
+\]
+where
+\[
+ S(\boldsymbol\beta) = (\mathbf{Y} - X\boldsymbol\beta)^T(\mathbf{Y} - X\boldsymbol\beta).
+\]
If we want to maximize $l$ with respect to $\boldsymbol\beta$, we have to minimize the only term containing $\boldsymbol\beta$, i.e.\ $S(\boldsymbol\beta)$, since it enters with a negative sign. So
+\begin{prop}
+ Under normal assumptions the maximum likelihood estimator for a linear model is
+ \[
+ \hat{\boldsymbol\beta} = (X^TX)^{-1}X^T\mathbf{Y},
+ \]
+ which is the same as the least squares estimator.
+\end{prop}
This isn't a coincidence! Historically, when Gauss devised the normal distribution, he designed it so that the least squares estimator is the same as the maximum likelihood estimator.
+
+To obtain the MLE for $\sigma^2$, we require
+\[
+ \left.\frac{\partial l}{\partial \sigma^2}\right|_{\hat{\boldsymbol\beta}, \hat{\sigma}^2} = 0,
+\]
+i.e.\
+\[
+ -\frac{n}{2\sigma^2} + \frac{S(\hat{\boldsymbol\beta})}{2 \hat{\sigma}^4} = 0
+\]
+So
+\[
+ \hat{\sigma}^2 = \frac{1}{n}S(\hat{\boldsymbol\beta}) = \frac{1}{n}(\mathbf{Y} - X\hat{\boldsymbol\beta})^T(\mathbf{Y} - X\hat{\boldsymbol\beta}) = \frac{1}{n}\mathrm{RSS}.
+\]
+Our ultimate goal now is to show that $\hat{\boldsymbol\beta}$ and $\hat{\sigma}^2$ are independent. Then we can apply our other standard results such as the $t$-distribution.
+
+First recall that the matrix $P = X(X^TX)^{-1}X^T$ that projects $\mathbf{Y}$ to $\hat{\mathbf{Y}}$ is idempotent and symmetric. We will prove the following properties of it:
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $\mathbf{Z}\sim N_n(\mathbf{0}, \sigma^2 I)$ and $A$ is $n\times n$, symmetric, idempotent with rank $r$, then $\mathbf{Z}^TA\mathbf{Z} \sim \sigma^2 \chi_r^2$.
+ \item For a symmetric idempotent matrix $A$, $\rank(A) = \tr(A)$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Since $A$ is idempotent, $A^2 = A$ by definition. So eigenvalues of $A$ are either 0 or 1 (since $\lambda \mathbf{x} = A\mathbf{x} = A^2 \mathbf{x} = \lambda^2 \mathbf{x}$).
+
+ Since $A$ is also symmetric, it is diagonalizable. So there exists an orthogonal $Q$ such that
+ \[
+ \Lambda = Q^TAQ = \diag(\lambda_1, \cdots, \lambda_n) = \diag(1, \cdots, 1, 0, \cdots, 0)
+ \]
+ with $r$ copies of 1 and $n - r$ copies of $0$.
+
+ Let $\mathbf{W} = Q^T\mathbf{Z}$. So $\mathbf{Z} = Q\mathbf{W}$. Then $\mathbf{W}\sim N_n(\mathbf{0}, \sigma^2 I)$, since $\cov(\mathbf{W}) = Q^T\sigma^2 IQ = \sigma^2 I$. Then
+ \[
 \mathbf{Z}^TA\mathbf{Z} = \mathbf{W}^T Q^TAQ\mathbf{W} = \mathbf{W}^T\Lambda \mathbf{W} = \sum_{i = 1}^r w_i^2 \sim \sigma^2 \chi_r^2,
+ \]
+ \item
+ \begin{align*}
+ \rank (A) &= \rank (\Lambda)\\
+ &= \tr (\Lambda)\\
+ &= \tr (Q^TAQ)\\
+ &= \tr (AQ^TQ)\\
+ &= \tr {A}\qedhere
+ \end{align*}%\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{thm}
+ For the normal linear model $\mathbf{Y}\sim N_n(X\boldsymbol\beta, \sigma^2 I)$,
+ \begin{enumerate}
+ \item $\hat{\boldsymbol\beta} \sim N_p(\boldsymbol\beta, \sigma^2(X^TX)^{-1})$
+ \item $\mathrm{RSS} \sim \sigma^2 \chi_{n - p}^2$, and so $\hat{\sigma}^2 \sim \frac{\sigma^2}{n}\chi_{n -p}^2$.
+ \item $\hat{\boldsymbol\beta}$ and $\hat{\sigma}^2$ are independent.
+ \end{enumerate}
+\end{thm}
+
+The proof is not particularly elegant --- it is just a whole lot of linear algebra!
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item We have $\hat{\boldsymbol\beta} = (X^TX)^{-1}X^T\mathbf{Y}$. Call this $C\mathbf{Y}$ for later use. Then $\hat {\boldsymbol\beta}$ has a normal distribution with mean
+ \[
+ (X^TX)^{-1}X^T(X\boldsymbol\beta) = \boldsymbol\beta
+ \]
+ and covariance
+ \[
+ (X^TX)^{-1}X^T(\sigma^2 I)[(X^TX)^{-1}X^T]^T = \sigma^2(X^TX)^{-1}.
+ \]
+ So
+ \[
+ \hat{\boldsymbol\beta}\sim N_p(\boldsymbol\beta, \sigma^2(X^TX)^{-1}).
+ \]
+ \item
 Our previous lemma says that $\mathbf{Z}^TA\mathbf{Z}\sim \sigma^2 \chi_r^2$. So we pick our $\mathbf{Z}$ and $A$ so that $\mathbf{Z}^TA\mathbf{Z} = \mathrm{RSS}$, and $r$, the rank of $A$, is $n - p$.
+
+ Let $\mathbf{Z} = \mathbf{Y} - X\boldsymbol\beta$ and $A = (I_n - P)$, where $P = X(X^TX)^{-1}X^T$. We first check that the conditions of the lemma hold:
+
+ Since $\mathbf{Y}\sim N_n(X\boldsymbol\beta, \sigma^2 I)$, $\mathbf{Z} = \mathbf{Y} - X\boldsymbol\beta\sim N_n(\mathbf{0}, \sigma^2 I)$.
+
+ Since $P$ is idempotent, $I_n - P$ also is (check!). We also have
+ \[
+ \rank(I_n - P) = \tr(I_n - P) = n - p.
+ \]
+ Therefore the conditions of the lemma hold.
+
+ To get the final useful result, we want to show that the RSS is indeed $\mathbf{Z}^TA\mathbf{Z}$. We simplify the expressions of RSS and $\mathbf{Z}^TA\mathbf{Z}$ and show that they are equal:
+ \[
 \mathbf{Z}^TA\mathbf{Z} = (\mathbf{Y} - X\boldsymbol\beta)^T(I_n - P)(\mathbf{Y} - X\boldsymbol\beta)=\mathbf{Y}^T(I_n - P)\mathbf{Y},
 \]
 using the fact that $(I_n - P)X = \mathbf{0}$.
+
+ Writing $\mathbf{R} = \mathbf{Y} - \hat{\mathbf{Y}} = (I_n - P)\mathbf{Y}$, we have
+ \[
+ \mathrm{RSS} = \mathbf{R}^T\mathbf{R} = \mathbf{Y}^T(I_n - P)\mathbf{Y},
+ \]
+ using the symmetry and idempotence of $I_n - P$.
+
+ Hence $\mathrm{RSS} = \mathbf{Z}^TA\mathbf{Z} \sim \sigma^2\chi^2_{n - p}$. Then
+ \[
+ \hat{\sigma}^2 = \frac{\mathrm{RSS}}{n}\sim \frac{\sigma^2}{n}\chi^2_{n- p}.
+ \]
+ \item Let $V = \begin{pmatrix}\hat{\boldsymbol\beta}\\\mathbf{R}\end{pmatrix} = D\mathbf{Y}$, where $D = \begin{pmatrix}C\\I_n - P\end{pmatrix}$ is a $(p + n)\times n$ matrix.
+
 Since $\mathbf{Y}$ is multivariate normal, $V$ is multivariate normal with
+ \begin{align*}
+ \cov (V) &= D\sigma^2ID^T\\
+ &= \sigma^2
+ \begin{pmatrix}
+ CC^T & C(I_n - P)^T\\
+ (I_n - P)C^T & (I_n - P)(I_n - P)^T
+ \end{pmatrix}\\
+ &=
+ \sigma^2
+ \begin{pmatrix}
+ CC^T & C(I_n - P)\\
+ (I_n - P)C^T & (I_n - P)
+ \end{pmatrix}\\
+ &= \sigma^2
+ \begin{pmatrix}
+ CC^T & 0\\
+ 0 & I_n - P
+ \end{pmatrix}
+ \end{align*}
 Here the off-diagonal blocks vanish since $C(I_n - P) = (X^TX)^{-1}X^T(I_n - P) = (X^TX)^{-1}[(I_n - P)X]^T = 0$, using the symmetry of $I_n - P$ and the fact that $(I_n - P)X = 0$.
+
 Hence $\hat{\boldsymbol\beta}$ and $\mathbf{R}$ are independent since the off-diagonal covariance terms are 0. So $\hat{\boldsymbol\beta}$ and $\mathrm{RSS}=\mathbf{R}^T\mathbf{R}$ are independent. So $\hat{\boldsymbol\beta}$ and $\hat{\sigma}^2$ are independent.\qedhere
+ \end{itemize}
+\end{proof}
+
+From (ii), $\E(\mathrm{RSS}) = \sigma^2(n - p)$. So $\tilde{\sigma}^2 = \frac{\mathrm{RSS}}{n - p}$ is an unbiased estimator of $\sigma^2$. $\tilde{\sigma}$ is often known as the \emph{residual standard error} on $n - p$ degrees of freedom.
+\subsection{The \texorpdfstring{$F$}{F} distribution}
+\begin{defi}[$F$ distribution]
 Suppose $U$ and $V$ are independent with $U\sim \chi_m^2$ and $V\sim \chi_n^2$. Then $X = \frac{U/m}{V/n}$ is said to have an \emph{$F$-distribution} on $m$ and $n$ degrees of freedom. We write $X\sim F_{m, n}$.
+\end{defi}
Since $U$ and $V$ have mean $m$ and $n$ respectively, $U/m$ and $V/n$ are both approximately $1$. So $X$ is often approximately $1$.
+
+It should be very clear from definition that
+\begin{prop}
+ If $X\sim F_{m, n}$, then $1/X\sim F_{n, m}$.
+\end{prop}
+
We write $F_{m, n}(\alpha)$ for the upper $100\alpha\%$ point of the $F_{m, n}$-distribution, so that if $X\sim F_{m, n}$, then $\P(X > F_{m, n}(\alpha)) = \alpha$.
+
Suppose that we have the upper $5\%$ point for all $F_{n, m}$. Using this information, it is easy to find the \emph{lower} $5\%$ point for $F_{m, n}$, since we know that $\P(F_{m, n} < 1/x) = \P(F_{n, m} > x)$; this is where the above proposition comes in useful.
+
Note that it is immediate from the definitions of $t_n$ and $F_{1, n}$ that if $Y\sim t_n$, then $Y^2\sim F_{1, n}$: squaring the standard normal in the numerator gives a $\chi_1^2$ variable, independent of the $\chi_n^2$ in the denominator.
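These distributional facts can be checked by simulation. The Python sketch below is our illustration; it uses the standard fact that $\E[F_{m, n}] = n/(n - 2)$ for $n > 2$, which is close to $1$ for large $n$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, N = 5, 50, 100_000

# F_{m,n} draws built directly from the definition (U/m)/(V/n).
F = (rng.chisquare(m, N) / m) / (rng.chisquare(n, N) / n)

# E[F_{m,n}] = n/(n-2), close to 1 for large n.
assert abs(F.mean() - n / (n - 2)) < 0.02

# t_n squared is F_{1,n}: both have mean n/(n-2).
t = rng.standard_normal(N) / np.sqrt(rng.chisquare(n, N) / n)
assert abs((t ** 2).mean() - n / (n - 2)) < 0.05
```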
+
+\subsection{Inference for \texorpdfstring{$\boldsymbol\beta$}{beta}}
+We know that $\hat{\boldsymbol\beta} \sim N_p(\boldsymbol\beta, \sigma^2(X^TX)^{-1})$. So
+\[
+ \hat{\beta}_j \sim N(\beta_j, \sigma^2(X^TX)^{-1}_{jj}).
+\]
+The \emph{standard error} of $\hat{\beta}_j$ is defined to be
+\[
+ \mathrm{SE}(\hat{\beta}_j) = \sqrt{\tilde{\sigma}^2 (X^TX)_{jj}^{-1}},
+\]
+where $\tilde{\sigma}^2 = \mathrm{RSS}/(n - p)$. Unlike the actual variance $\sigma^2(X^TX)^{-1}_{jj}$, the standard error is calculable from our data.
+
+Then
+\[
+ \frac{\hat{\beta}_j - \beta_j}{\mathrm{SE}(\hat{\beta}_j)} = \frac{\hat{\beta}_j - \beta_j}{\sqrt{\tilde{\sigma}^2(X^TX)^{-1}_{jj}}} = \frac{(\hat{\beta}_j - \beta_j)/\sqrt{\sigma^2(X^TX)_{jj}^{-1}}}{\sqrt{\mathrm{RSS}/((n - p)\sigma^2)}}
+\]
By writing it in this somewhat weird form, we now recognize both the numerator and denominator. The numerator is a standard normal $N(0, 1)$, and the denominator is an independent $\sqrt{\chi^2_{n - p}/(n - p)}$, as we have previously shown. But a standard normal divided by the square root of an independent $\chi^2_{n - p}/(n - p)$ variable is, by definition, $t_{n - p}$. So
+\[
+ \frac{\hat{\beta}_j - \beta_j}{\mathrm{SE}(\hat{\beta}_j)} \sim t_{n - p}.
+\]
+So a $100(1 - \alpha)\%$ confidence interval for $\beta_j$ has end points $\hat{\beta}_j\pm \mathrm{SE}(\hat{\beta}_j)t_{n - p}(\frac{\alpha}{2})$.
+
+In particular, if we want to test $H_0: \beta_j = 0$, we use the fact that under $H_0$, $\frac{\hat{\beta}_j}{\mathrm{SE}(\hat{\beta}_j)} \sim t_{n - p}$.
+
+\subsection{Simple linear regression}
+We can apply our results to the case of simple linear regression. We have
+\[
+ Y_i = a' + b(x_i - \bar x) + \varepsilon_i,
+\]
+where $\bar x = \sum x_i/n$ and $\varepsilon_i$ are iid $N(0, \sigma^2)$ for $i = 1, \cdots, n$.
+
+Then we have
+\begin{align*}
+ \hat{a}' &= \bar Y \sim N\left(a', \frac{\sigma^2}{n}\right)\\
+ \hat{b} &= \frac{S_{xY}}{S_{xx}}\sim N\left(b, \frac{\sigma^2}{S_{xx}}\right)\\
+ \hat{Y}_i &= \hat{a}' + \hat{b}(x_i - \bar x)\\
+ \mathrm{RSS} &= \sum_i (Y_i - \hat{Y_i})^2 \sim \sigma^2 \chi_{n - 2}^2,
+\end{align*}
+and $(\hat{a}', \hat{b})$ and $\hat{\sigma}^2 = \mathrm{RSS}/n$ are independent, as we have previously shown.
+
Note that $\hat{\sigma}^2$ is obtained by dividing RSS by $n$, and is the maximum likelihood estimator. On the other hand, $\tilde{\sigma}^2$ is obtained by dividing RSS by $n - p$, and is an unbiased estimator of $\sigma^2$.
+
+\begin{eg}
+ Using the oxygen/time example, we have seen that
+ \[
+ \tilde{\sigma}^2 = \frac{\mathrm{RSS}}{n - p} = \frac{67968}{24 - 2} = 3089 = 55.6^2.
+ \]
 So the standard error of $\hat{b}$ is
+ \[
+ \mathrm{SE}(\hat{b}) = \sqrt{\tilde{\sigma}^2(X^TX)_{22}^{-1}} = \sqrt{\frac{3089}{S_{xx}}} = \frac{55.6}{28.0} = 1.99.
+ \]
+ So a $95\%$ interval for $b$ has end points
+ \[
 \hat{b}\pm \mathrm{SE}(\hat{b})\times t_{n - p}(0.025) = -12.9\pm 1.99\times t_{22}(0.025) = (-17.0, -8.8),
+ \]
+ using the fact that $t_{22}(0.025) = 2.07$.
+
 Note that this interval does not contain $0$. So if we want to carry out a size $0.05$ test of $H_0: b = 0$ (they are uncorrelated) vs $H_1: b \not= 0$ (they are correlated), the test statistic would be $\frac{\hat{b}}{\mathrm{SE}(\hat{b})} = \frac{-12.9}{1.99} = -6.48$. Then we reject $H_0$ because this is less than $-t_{22}(0.025) = -2.07$.
+\end{eg}
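The arithmetic in this example can be reproduced directly from the quoted summary statistics (a Python sketch of ours; small discrepancies from the rounded figures above are rounding):

```python
import math

# Summary numbers quoted in the oxygen/time example.
n, p = 24, 2
rss, s_xx = 67968, 783.5
b_hat = -12.9
t_crit = 2.07                      # t_22(0.025)

sigma_tilde2 = rss / (n - p)       # about 3089
se_b = math.sqrt(sigma_tilde2 / s_xx)
ci = (b_hat - se_b * t_crit, b_hat + se_b * t_crit)
t_stat = b_hat / se_b

print(se_b, ci, t_stat)   # about 1.99, about (-17.0, -8.8), about -6.5
```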
+\subsection{Expected response at \texorpdfstring{$\mathbf{x}^*$}{x*}}
+After performing the linear regression, we can now make \emph{predictions} from it. Suppose that $\mathbf{x}^*$ is a new vector of values for the explanatory variables.
+
+The expected response at $\mathbf{x}^*$ is $\E[\mathbf{Y}\mid \mathbf{x}^*] = \mathbf{x}^{*T}\boldsymbol\beta$. We estimate this by $\mathbf{x}^{*T}\hat{\boldsymbol\beta}$. Then we have
+\[
+ \mathbf{x}^{*T}(\hat{\boldsymbol\beta} - \boldsymbol\beta)\sim N(0, \mathbf{x}^{*T} \cov(\hat{\boldsymbol\beta})\mathbf{x}^*) = N(0, \sigma^2\mathbf{x}^{*T}(X^TX)^{-1}\mathbf{x}^*).
+\]
+Let $\tau^2 = \mathbf{x}^{*T} (X^TX)^{-1}\mathbf{x}^*$. Then
+\[
+ \frac{\mathbf{x}^{*T}(\hat{\boldsymbol\beta} - \boldsymbol\beta)}{\tilde{\sigma}\tau}\sim t_{n - p}.
+\]
+Then a confidence interval for the expected response $\mathbf{x}^{*T}\boldsymbol\beta$ has end points
+\[
+ \mathbf{x}^{*T}\hat{\boldsymbol\beta} \pm \tilde{\sigma}\tau t_{n - p}\left(\frac{\alpha}{2}\right).
+\]
+\begin{eg}
+ Previous example continued:
+
+ Suppose we wish to estimate the time to run 2 miles for a man with an oxygen take-up measurement of 50. Here $\mathbf{x}^{*T} = (1, 50 - \bar x)$, where $\bar x = 48.6$.
+
+ The estimated expected response at $\mathbf{x}^{*T}$ is
+ \[
 \mathbf{x}^{*T}\hat{\boldsymbol\beta} = \hat{a}' + (50 - 48.6)\times \hat{b} = 826.5 - 1.4\times 12.9 = 808.5,
+ \]
+ which is obtained by plugging $\mathbf{x}^{*T}$ into our fitted line.
+
+ We find
+ \[
 \tau^2 = \mathbf{x}^{*T}(X^TX)^{-1}\mathbf{x}^* = \frac{1}{n} + \frac{(50 - \bar x)^2}{S_{xx}} = \frac{1}{24} + \frac{1.4^2}{783.5} = 0.044 = 0.21^2.
+ \]
 So a $95\%$ confidence interval for the expected response at $x^* = 50$ is
+ \[
 \mathbf{x}^{*T}\hat{\boldsymbol\beta}\pm \tilde{\sigma}\tau t_{n - p}\left(\frac{\alpha}{2}\right) = 808.5 \pm 55.6 \times 0.21 \times 2.07 = (784.3, 832.7).
+ \]
+\end{eg}
+Note that this is the confidence interval for the predicted \emph{expected value}, NOT the confidence interval for the actual obtained value.
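The computation of $\tau^2$ and of the interval half-width can be checked from the summary numbers (a Python sketch of ours; the tiny discrepancies with the rounded figures above are rounding):

```python
import math

# Expected-response interval at x* = 50 from the quoted summary numbers.
n = 24
x_bar, s_xx = 48.6, 783.5
a_hat, b_hat = 826.5, -12.9
sigma_tilde, t_crit = 55.6, 2.07   # t_22(0.025)

fit = a_hat + (50 - x_bar) * b_hat               # about 808.4
tau2 = 1 / n + (50 - x_bar) ** 2 / s_xx          # about 0.044
half_width = sigma_tilde * math.sqrt(tau2) * t_crit
print(fit, tau2, (fit - half_width, fit + half_width))
```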
+
The predicted response at $\mathbf{x}^*$ is $Y^* = \mathbf{x}^{*T}\boldsymbol\beta + \varepsilon^*$, where $\varepsilon^*\sim N(0, \sigma^2)$, and $Y^*$ is independent of $Y_1, \cdots, Y_n$. Here we have more uncertainties in our prediction: $\boldsymbol\beta$ and $\varepsilon^*$.
+
+A $100(1 - \alpha)\%$ \emph{prediction interval} for $Y^*$ is an interval $I(\mathbf{Y})$ such that $\P(Y^*\in I(\mathbf{Y})) = 1 - \alpha$, where the probability is over the joint distribution of $Y^*, Y_1, \cdots, Y_n$. So $I$ is a random function of the past data $\mathbf{Y}$ that outputs an interval.
+
First of all, as above, the predicted \emph{expected} response is $\hat{Y}^* = \mathbf{x}^{*T}\hat{\boldsymbol\beta}$. This is unbiased, since $\hat{Y}^* - Y^* = \mathbf{x}^{*T}(\hat{\boldsymbol\beta} - \boldsymbol\beta) - \varepsilon^*$, and hence
+\[
 \E[\hat{Y}^* - Y^*] = \mathbf{x}^{*T}(\boldsymbol\beta - \boldsymbol\beta) = 0.
+\]
To find the variance, we use the fact that $\mathbf{x}^{*T}(\hat{\boldsymbol\beta} - \boldsymbol\beta)$ and $\varepsilon^*$ are independent, and the variance of the sum of independent variables is the sum of the variances. So
+\begin{align*}
 \var(\hat{Y}^* - Y^*) &= \var (\mathbf{x}^{*T}\hat{\boldsymbol\beta}) + \var (\varepsilon^*)\\
 &= \sigma^2 \mathbf{x}^{*T}(X^TX)^{-1}\mathbf{x}^* + \sigma^2\\
+ &= \sigma^2(\tau^2 + 1).
+\end{align*}
+We can see this as the uncertainty in the regression line $\sigma^2\tau^2$, plus the wobble about the regression line $\sigma^2$. So
+\[
+ \hat{Y}^* - Y^* \sim N(0, \sigma^2(\tau^2 + 1)).
+\]
+We therefore find that
+\[
+ \frac{\hat{Y}^* - Y^*}{\tilde{\sigma}\sqrt{\tau^2 + 1}} \sim t_{n - p}.
+\]
+So the interval with endpoints
+\[
+ \mathbf{x}^{*T}\hat{\boldsymbol\beta} \pm \tilde{\sigma}\sqrt{\tau^2 + 1}t_{n - p}\left(\frac{\alpha}{2}\right)
+\]
is a $100(1 - \alpha)\%$ prediction interval for $Y^*$. We don't call this a confidence interval --- confidence intervals are about estimating parameters of the distribution, while the \emph{prediction} interval is about predicting a future observation.
+
+\begin{eg}
+ A $95\%$ prediction interval for $Y^*$ at $\mathbf{x}^{*T} = (1, (50 - \bar x))$ is
+ \[
 \mathbf{x}^{*T}\hat{\boldsymbol\beta}\pm \tilde{\sigma}\sqrt{\tau^2 + 1}t_{n - p}\left(\frac{\alpha}{2}\right) = 808.5\pm 55.6 \times 1.02 \times 2.07 = (691.1, 925.9).
+ \]
 Note that this is \emph{much} wider than the interval for the expected response! This is since there are three sources of uncertainty: we don't know what $\sigma$ is, what $\hat{b}$ is, and the random $\varepsilon^*$ fluctuation!
+\end{eg}
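A quick side-by-side of the two half-widths, using the rounded figures above, makes the effect of the extra $+1$ under the square root visible (Python sketch, ours):

```python
import math

# Prediction interval versus expected-response interval at x* = 50.
sigma_tilde, t_crit = 55.6, 2.07   # sigma-tilde and t_22(0.025) from above
tau2 = 0.044
fit = 808.5

ci_half = sigma_tilde * math.sqrt(tau2) * t_crit        # regression line only
pi_half = sigma_tilde * math.sqrt(tau2 + 1) * t_crit    # plus the new wobble
print(ci_half, pi_half)   # about 24 versus about 118: the prediction interval is far wider
```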
+
+\begin{eg}
+ Wafer example continued: Suppose we wish to estimate the expected resistivity of a new wafer in the first instrument. Here $\mathbf{x}^{*T} = (1, 0, \cdots, 0)$ (recall that $\mathbf{x}$ is an indicator vector to indicate which instrument is used).
+
+ The estimated response at $\mathbf{x}^{*T}$ is
+ \[
 \mathbf{x}^{*T}\hat{\boldsymbol\mu} = \hat{\mu}_1 = \bar{y}_1 = 124.3.
+ \]
+ We find
+ \[
+ \tau^2 = \mathbf{x}^{*T}(X^TX)^{-1}\mathbf{x}^* = \frac{1}{5}.
+ \]
+ So a $95\%$ confidence interval for $\E[Y_1^*]$ is
+ \[
+ \mathbf{x}^{*T}\hat{\boldsymbol\mu}\pm \tilde{\sigma}\tau t_{n - p}\left(\frac{\alpha}{2}\right) = 124.3 \pm \frac{10.4}{\sqrt{5}}\times 2.09 = (114.6, 134.0).
+ \]
+ Note that we are using an estimate of $\sigma$ obtained from all five instruments. If we had only used the data from the first instrument, $\sigma$ would be estimated as
+ \[
 \tilde{\sigma}_1 = \sqrt{\frac{\sum_{j = 1}^5(y_{1,j} - \bar{y}_1)^2}{5 - 1}} = 8.74.
+ \]
+ The observed $95\%$ confidence interval for $\mu_1$ would have been
+ \[
+ \bar {y_1} \pm \frac{\tilde{\sigma}_1}{\sqrt{5}}t_4\left(\frac{\alpha}{2}\right) = 124.3\pm 3.91\times 2.78 = (113.5, 135.1),
+ \]
 which is \emph{slightly} wider. Usually it is much wider, but in this special case, we get little difference since the data from the first instrument are less spread out than those from the other instruments.
+
+ A $95\%$ prediction interval for $Y_1^*$ at $\mathbf{x}^{*T} = (1, 0, \cdots, 0)$ is
+ \[
 \mathbf{x}^{*T} \hat{\boldsymbol\mu} \pm \tilde{\sigma}\sqrt{\tau^2 + 1} t_{n - p}\left(\frac{\alpha}{2}\right) = 124.3 \pm 10.42 \times 1.1\times 2.09 = (100.5, 148.1).
+ \]
+\end{eg}
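Both intervals can be recomputed from the table (Python sketch, ours; the quoted critical values $t_{20}(0.025) \approx 2.09$ and $t_4(0.025) \approx 2.78$ are taken as given):

```python
import math

y1 = [130.5, 112.4, 118.9, 125.7, 134.0]   # instrument 1 data from the table
y1_bar = sum(y1) / 5                        # 124.3

# Pooled estimate (all 25 wafers, 20 d.f.) versus instrument-1-only (4 d.f.).
sigma_pooled, t20 = 10.42, 2.09
sigma1 = math.sqrt(sum((y - y1_bar) ** 2 for y in y1) / 4)   # about 8.74
t4 = 2.78

ci_pooled = (y1_bar - sigma_pooled / math.sqrt(5) * t20,
             y1_bar + sigma_pooled / math.sqrt(5) * t20)
ci_single = (y1_bar - sigma1 / math.sqrt(5) * t4,
             y1_bar + sigma1 / math.sqrt(5) * t4)
print(ci_pooled, ci_single)   # the single-instrument interval is the wider one
```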
+\subsection{Hypothesis testing}
+\subsubsection{Hypothesis testing}
+In hypothesis testing, we want to know whether certain variables influence the result. If, say, the variable $x_1$ does not influence $Y$, then we must have $\beta_1 = 0$. So the goal is to test the hypothesis $H_0: \beta_1 = 0$ versus $H_1: \beta_1 \not= 0$. We will tackle a more general case, where $\boldsymbol\beta$ can be split into two vectors $\boldsymbol\beta_0$ and $\boldsymbol\beta_1$, and we test if $\boldsymbol\beta_1$ is zero.
+
+We start with an obscure lemma, which might seem pointless at first, but will prove itself useful very soon.
+\begin{lemma}
+ Suppose $\mathbf{Z} \sim N_n(\mathbf{0}, \sigma^2 I_n)$, and $A_1$ and $A_2$ are symmetric, idempotent $n\times n$ matrices with $A_1A_2 = 0$ (i.e.\ they are orthogonal). Then $\mathbf{Z}^T A_1 \mathbf{Z}$ and $\mathbf{Z}^TA_2 \mathbf{Z}$ are independent.
+\end{lemma}
+This is geometrically intuitive, because $A_1$ and $A_2$ being orthogonal means they are concerned about different parts of the vector $\mathbf{Z}$.
+
+\begin{proof}
 Let $\mathbf{W}_i = A_i \mathbf{Z}$ for $i = 1, 2$, and
+ \[
+ \mathbf{W} =
+ \begin{pmatrix}
+ \mathbf{W}_1\\
+ \mathbf{W}_2
+ \end{pmatrix} = \begin{pmatrix} A_1\\ A_2\end{pmatrix}\mathbf{Z}.
+ \]
+ Then
+ \[
+ \mathbf{W}\sim N_{2n}\left(
+ \begin{pmatrix}
+ \mathbf{0}\\
+ \mathbf{0}
+ \end{pmatrix}, \sigma^2
+ \begin{pmatrix}
+ A_1 & 0\\
+ 0 & A_2
+ \end{pmatrix}\right)
+ \]
 since the off-diagonal blocks are $\sigma^2 A_1A_2^T = \sigma^2 A_1A_2 = 0$.
+
+ So $\mathbf{W}_1$ and $\mathbf{W}_2$ are independent, which implies
+ \[
+ \mathbf{W}_1^T \mathbf{W}_1 = \mathbf{Z}^TA_1^TA_1 \mathbf{Z} = \mathbf{Z}^T A_1A_1 \mathbf{Z} = \mathbf{Z}^T A_1 \mathbf{Z}
+ \]
+ and
+ \[
+ \mathbf{W}_2^T \mathbf{W}_2 = \mathbf{Z}^TA_2^TA_2 \mathbf{Z} = \mathbf{Z}^T A_2A_2 \mathbf{Z} = \mathbf{Z}^T A_2 \mathbf{Z}
+ \]
 are independent.
+\end{proof}
+Now we go to hypothesis testing in general linear models:
+
Suppose $\mathop{X}\limits_{n \times p} = \left(\mathop{X_0}\limits_{n \times p_0}\, \mathop{X_1}\limits_{n \times (p - p_0)}\right)$ and $\boldsymbol\beta = \begin{pmatrix}\boldsymbol\beta_0 \\ \boldsymbol\beta_1\end{pmatrix}$, where $\rank (X) = p$ and $\rank(X_0) = p_0$.
+
+We want to test $H_0: \boldsymbol\beta_1 = 0$ against $H_1: \boldsymbol\beta_1 \not= 0$. Under $H_0$, $X_1 \boldsymbol\beta_1$ vanishes and
+\[
 \mathbf{Y} = X_0 \boldsymbol\beta_0 + \boldsymbol\varepsilon.
+\]
+Under $H_0$, the mle of $\boldsymbol\beta_0$ and $\sigma^2$ are
+\begin{align*}
+ \hat{\hat{\boldsymbol\beta}}_0 &= (X_0^TX_0)^{-1}X_0^T \mathbf{Y}\\
 \hat{\hat{\sigma}}^2 &= \frac{\mathrm{RSS}_0}{n} = \frac{1}{n} (\mathbf{Y} - X_0 \hat{\hat{\boldsymbol\beta}}_0)^T(\mathbf{Y} - X_0 \hat{\hat{\boldsymbol\beta}}_0)
+\end{align*}
+and we have previously shown these are independent.
+
+Note that our poor estimators wear two hats instead of one. We adopt the convention that the estimators of the null hypothesis have two hats, while those of the alternative hypothesis have one.
+
+So the fitted values under $H_0$ are
+\[
+ \hat{\hat{\mathbf{Y}}} = X_0(X_0^TX_0)^{-1}X_0^T \mathbf{Y} = P_0 \mathbf{Y},
+\]
+where $P_0 = X_0(X_0^T X_0)^{-1}X_0^T$.
+
+The generalized likelihood ratio test of $H_0$ against $H_1$ is
+\begin{align*}
 \Lambda_\mathbf{Y}(H_0, H_1) &= \frac{\left(2\pi \hat{\sigma}^2\right)^{-n/2}\exp\left(-\frac{1}{2\hat{\sigma}^2}(\mathbf{Y} - X\hat{\boldsymbol\beta})^T(\mathbf{Y} - X\hat{\boldsymbol\beta})\right)}{\left(2\pi \hat{\hat{\sigma}}^2\right)^{-n/2}\exp\left(-\frac{1}{2\hat{\hat{\sigma}}^2}(\mathbf{Y} - X_0\hat{\hat{\boldsymbol\beta}}_0)^T(\mathbf{Y} - X_0\hat{\hat{\boldsymbol\beta}}_0)\right)}\\
+ &= \left(\frac{\hat{\hat{\sigma}}^2}{\hat{\sigma}^2}\right)^{n/2} \\
+ &= \left(\frac{\text{RSS}_0}{\mathrm{RSS}}\right)^{n/2} \\
+ &= \left(1 + \frac{\mathrm{RSS}_0 - \mathrm{RSS}}{\mathrm{RSS}}\right)^{n/2}.
+\end{align*}
+We reject $H_0$ when $2\log \Lambda$ is large, equivalently when $\frac{\mathrm{RSS}_0 - \mathrm{RSS}}{\mathrm{RSS}}$ is large.
+
+Using the results in Lecture 8, under $H_0$, we have
+\[
+ 2\log \Lambda = n\log\left(1 + \frac{\mathrm{RSS}_0 - \mathrm{RSS}}{\mathrm{RSS}}\right),
+\]
which is approximately a $\chi_{p - p_0}^2$ random variable.
+
+This is a good approximation. But we can get an exact null distribution, and get an exact test.
+
+We have previously shown that $\mathrm{RSS} = \mathbf{Y}^T(I_n - P)\mathbf{Y}$, and so
+\[
+ \mathrm{RSS}_0 - \mathrm{RSS} = \mathbf{Y}^T(I_n - P_0)\mathbf{Y} - \mathbf{Y}^T(I_n - P)\mathbf{Y} = \mathbf{Y}^T(P - P_0)\mathbf{Y}.
+\]
+Now both $I_n - P$ and $P - P_0$ are symmetric and idempotent, and therefore $\rank(I_n - P) = n - p$ and
+\[
+ \rank(P - P_0) = \tr(P - P_0) = \tr(P) - \tr(P_0) = \rank(P) - \rank(P_0) = p - p_0.
+\]
+Also,
+\[
+ (I_n - P)(P - P_0) = (I_n - P)P - (I_n - P)P_0 = (P - P^2) - (P_0 - PP_0) = 0.
+\]
+(we have $P^2 = P$ by idempotence, and $PP_0 = P_0$ since after projecting with $P_0$, we are already in the space of $P$, and applying $P$ has no effect)
+
+Finally,
+\begin{align*}
+ \mathbf{Y}^T(I_n - P)\mathbf{Y} &= (\mathbf{Y} - X_0 \boldsymbol\beta_0)^T(I_n - P)(\mathbf{Y} - X_0 \boldsymbol\beta_0)\\
+ \mathbf{Y}^T(P - P_0)\mathbf{Y} &= (\mathbf{Y} - X_0 \boldsymbol\beta_0)^T(P - P_0)(\mathbf{Y} - X_0 \boldsymbol\beta_0)
+\end{align*}
+since $(I_n - P)X_0 = (P - P_0)X_0 = 0$.
+
+If we let $\mathbf{Z} = \mathbf{Y} - X_0 \boldsymbol\beta_0$, $A_1 = I_n - P$, $A_2 = P - P_0$, and apply our previous lemma, and the fact that $\mathbf{Z}^TA_i \mathbf{Z} \sim \sigma^2 \chi_r^2$, then
+\begin{align*}
 \mathrm{RSS} = \mathbf{Y}^T(I_n - P)\mathbf{Y} &\sim \sigma^2\chi_{n - p}^2\\
 \mathrm{RSS}_0 - \mathrm{RSS} = \mathbf{Y}^T(P - P_0)\mathbf{Y} &\sim \sigma^2\chi^2_{p - p_0}
+\end{align*}
+and these random variables are independent.
+
+So under $H_0$,
+\[
 F = \frac{\mathbf{Y}^T(P - P_0)\mathbf{Y}/(p - p_0)}{\mathbf{Y}^T(I_n - P)\mathbf{Y}/(n - p)} = \frac{(\mathrm{RSS}_0 - \mathrm{RSS})/(p - p_0)}{\mathrm{RSS}/(n - p)} \sim F_{p - p_0, n - p}.
+\]
+Hence we reject $H_0$ if $F > F_{p - p_0, n - p}(\alpha)$.
+
+$\mathrm{RSS}_0 - \mathrm{RSS}$ is the reduction in the sum of squares due to fitting $\boldsymbol\beta_1$ in addition to $\boldsymbol\beta_0$.
+\begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ Source of var. & d.f. & sum of squares & mean squares & $F$ statistic\\
+ \midrule
+ Fitted model & $p - p_0$ & $\mathrm{RSS}_0 - \mathrm{RSS}$ & $\frac{\mathrm{RSS}_0 - \mathrm{RSS}}{p - p_0}$ & \multirow{2}{*}{$\frac{(\mathrm{RSS}_0 - \mathrm{RSS})/(p - p_0)}{\mathrm{RSS}/(n - p)}$}\\
+ Residual & $n - p$ & RSS & $\frac{\mathrm{RSS}}{n - p}$\\
+ \midrule
+ Total & $n - p_0$ & $\mathrm{RSS}_0$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+The ratio $\frac{\mathrm{RSS}_0 - \mathrm{RSS}}{\mathrm{RSS}_0}$ is sometimes known as the \emph{proportion of variance explained} by $\boldsymbol\beta_1$, and denoted $R^2$.
+
+\subsubsection{Simple linear regression}
+We assume that
+\[
+ Y_i = a' + b(x_i - \bar x) + \varepsilon_i,
+\]
+where $\bar x = \sum x_i/n$ and $\varepsilon_i$ are $N(0, \sigma^2)$.
+
+Suppose we want to test the hypothesis $H_0: b = 0$, i.e.\ no linear relationship. We have previously seen how to construct a confidence interval, and so we could simply see if it included 0.
+
+Alternatively, under $H_0$, the model is $Y_i \sim N(a', \sigma^2)$, and so $\hat a' = \bar Y$, and the fitted values are $\hat Y_i = \bar Y$.
+
+The observed $\mathrm{RSS}_0$ is therefore
+\[
+ \mathrm{RSS}_0 = \sum_i (y_i - \bar y)^2 = S_{yy}.
+\]
+The fitted sum of squares is therefore
+\[
+ \mathrm{RSS}_0 - \mathrm{RSS} = \sum_i \big((y_i - \bar y)^2 - (y_i - \bar y - \hat{b}(x_i - \bar x))^2\big) = \hat{b}^2 \sum (x_i - \bar x)^2 = \hat{b}^2 S_{xx}.
+\]
+\begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ Source of var. & d.f. & sum of squares & mean squares & $F$ statistic\\
+ \midrule
 Fitted model & 1 & $\mathrm{RSS}_0 - \mathrm{RSS} = \hat{b}^2 S_{xx}$ & $\hat{b}^2 S_{xx}$ & \multirow{2}{*}{$F = \frac{\hat{b}^2 S_{xx}}{\tilde{\sigma}^2}$}\\
 Residual & $n - 2$ & $\mathrm{RSS} = \sum_i (y_i- \hat y_i)^2$ & $\tilde{\sigma}^2$\\
+ \midrule
+ Total & $n - 1$ & $\mathrm{RSS}_0 = \sum_i (y_i - \bar y)^2$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
Note that the proportion of variance explained is $\hat{b}^2S_{xx}/S_{yy} = \frac{S_{xy}^2}{S_{xx}S_{yy}} = r^2$, where $r$ is Pearson's product-moment correlation coefficient
+\[
+ r = \frac{S_{xy}}{\sqrt{S_{xx}S_{yy}}}.
+\]
+We have previously seen that under $H_0$, $\frac{\hat{b}}{\mathrm{SE}(\hat{b})} \sim t_{n - 2}$, where $\mathrm{SE}(\hat{b}) = \tilde{\sigma}/\sqrt{S_{xx}}$. So we let
+\[
+ t = \frac{\hat{b}}{\mathrm{SE}(\hat{b})} = \frac{\hat{b}\sqrt{S_{xx}}}{\tilde{\sigma}}.
+\]
Checking whether $|t| > t_{n - 2}\left(\frac{\alpha}{2}\right)$ is precisely the same as checking whether $t^2 = F > F_{1, n - 2}(\alpha)$, since an $F_{1, n - 2}$ variable is the square of a $t_{n - 2}$ variable.
+
Hence the same conclusion is reached, regardless of whether we use the $t$-distribution or the $F$ statistic derived from an analysis of variance table.
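The identity $t^2 = F$ is exact, not merely asymptotic. A Python sketch on made-up data (ours, not from the notes) computes both statistics from the summary quantities and checks they agree up to floating-point error:

```python
# Exact check that t^2 = F in simple linear regression, on made-up data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.7]
n = len(xs)

x_bar = sum(xs) / n
y_bar = sum(ys) / n
s_xx = sum((x - x_bar) ** 2 for x in xs)
s_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))

b_hat = s_xy / s_xx
rss = sum((y - y_bar - b_hat * (x - x_bar)) ** 2 for x, y in zip(xs, ys))
sigma_tilde2 = rss / (n - 2)

t = b_hat * s_xx ** 0.5 / sigma_tilde2 ** 0.5   # b-hat / SE(b-hat)
F = b_hat ** 2 * s_xx / sigma_tilde2            # from the ANOVA table
assert abs(t ** 2 - F) < 1e-9
```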
+
+\subsubsection{One way analysis of variance with equal numbers in each group}
+Recall that in our wafer example, we made measurements in groups, and want to know if there is a difference between groups. In general, suppose $J$ measurements are taken in each of $I$ groups, and that
+\[
+ Y_{ij} = \mu_i + \varepsilon_{ij},
+\]
+where $\varepsilon_{ij}$ are independent $N(0, \sigma^2)$ random variables, and the $\mu_i$ are unknown constants.
+
+Fitting this model gives
+\[
+ \mathrm{RSS} = \sum_{i = 1}^I\sum_{j = 1}^J (Y_{ij} - \hat{\mu}_i)^2 = \sum_{i = 1}^I\sum_{j = 1}^J (Y_{ij} - \bar Y_{i.})^2
+\]
+on $n - I$ degrees of freedom.
+
+Suppose we want to test the hypothesis $H_0: \mu_i = \mu$, i.e.\ no difference between groups.
+
Under $H_0$, the model is $Y_{ij} \sim N(\mu, \sigma^2)$, and so $\hat{\mu} = \bar Y_{..}$, and the fitted values are $\hat{Y}_{ij} = \bar Y_{..}$.
+
+The observed $\mathrm{RSS}_0$ is therefore
+\[
+ \mathrm{RSS}_0 = \sum_{i,j} (y_{ij} - \bar y_{..})^2.
+\]
+The fitted sum of squares is therefore
+\[
+ \mathrm{RSS}_0 - \mathrm{RSS} = \sum_i \sum_j \big((y_{ij} - \bar y_{..})^2 - (y_{ij} - \bar y_{i.})^2 \big) = J\sum_i (\bar y_{i.} - \bar y_{..})^2.
+\]
+\begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ Source of var. & d.f. & sum of squares & mean squares & $F$ statistic\\
+ \midrule
 Fitted model & $I - 1$ & $J\sum_i (\bar y_{i.} - \bar y_{..})^2$ & $J\sum_i \frac{(\bar y_{i.} - \bar y_{..})^2}{I - 1}$ & \multirow{2}{*}{$J\sum_i \frac{(\bar y_{i.} - \bar y_{..})^2}{(I - 1)\tilde{\sigma}^2}$} \\
+ Residual & $n - I$ & $\sum_i \sum_j(y_{ij} - \bar y_{i.})^2$ & $\tilde{\sigma}^2$\\
+ \midrule
+ Total & $n - 1$ & $\sum_i \sum_j(y_{ij} - \bar y_{..})^2$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
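The decomposition in this table can be verified numerically. Below is a short sketch on simulated data (the group means, sample sizes and seed are made up purely for illustration); it checks the algebraic identity $\mathrm{RSS}_0 - \mathrm{RSS} = J\sum_i (\bar y_{i.} - \bar y_{..})^2$ and computes the resulting $F$ statistic.

```python
import random

random.seed(1)
I, J = 4, 6                          # hypothetical: I groups, J observations each
mu = [10.0, 10.5, 9.0, 11.0]         # made-up group means used to simulate data
y = [[mu[i] + random.gauss(0, 1) for _ in range(J)] for i in range(I)]

ybar_i = [sum(row) / J for row in y]             # group means  \bar y_{i.}
ybar = sum(sum(row) for row in y) / (I * J)      # grand mean   \bar y_{..}

rss = sum((y[i][j] - ybar_i[i]) ** 2 for i in range(I) for j in range(J))
rss0 = sum((y[i][j] - ybar) ** 2 for i in range(I) for j in range(J))
fitted_ss = J * sum((m - ybar) ** 2 for m in ybar_i)

# the identity RSS_0 - RSS = J * sum_i (ybar_i. - ybar..)^2 holds exactly
assert abs((rss0 - rss) - fitted_ss) < 1e-9

sigma_tilde2 = rss / (I * J - I)                 # residual mean square, n - I d.f.
F = fitted_ss / ((I - 1) * sigma_tilde2)         # F statistic on (I - 1, n - I) d.f.
assert F > 0
```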
+\end{document}
diff --git a/books/cam/IB_M/analysis_ii.tex b/books/cam/IB_M/analysis_ii.tex
new file mode 100644
index 0000000000000000000000000000000000000000..fdb4a75b0d0237c44d4981bda5ca4bf7386703ed
--- /dev/null
+++ b/books/cam/IB_M/analysis_ii.tex
@@ -0,0 +1,3417 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Michaelmas}
+\def\nyear {2015}
+\def\nlecturer {N.\ Wickramasekera}
+\def\ncourse {Analysis II}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Uniform convergence}\\
+The general principle of uniform convergence. A uniform limit of continuous functions is continuous. Uniform convergence and termwise integration and differentiation of series of real-valued functions. Local uniform convergence of power series.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Uniform continuity and integration}\\
+Continuous functions on closed bounded intervals are uniformly continuous. Review of basic facts on Riemann integration (from Analysis I). Informal discussion of integration of complex-valued and $\R^n$-valued functions of one variable; proof that $\|\int_a^b f(x) \;\d x\| \leq \int_a^b \|f(x)\|\;\d x$.\hspace*{\fill} [2]
+
+\vspace{10pt}
+\noindent\textbf{$\R^n$ as a normed space}\\
+Definition of a normed space. Examples, including the Euclidean norm on $\R^n$ and the uniform norm on $C[a, b]$. Lipschitz mappings and Lipschitz equivalence of norms. The Bolzano-Weierstrass theorem in $\R^n$. Completeness. Open and closed sets. Continuity for functions between normed spaces. A continuous function on a closed bounded set in $\R^n$ is uniformly continuous and has closed bounded image. All norms on a finite-dimensional space are Lipschitz equivalent.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Differentiation from $\R^m$ to $\R^n$}\\
+Definition of derivative as a linear map; elementary properties, the chain rule. Partial derivatives; continuous partial derivatives imply differentiability. Higher-order derivatives; symmetry of mixed partial derivatives (assumed continuous). Taylor's theorem. The mean value inequality. Path-connectedness for subsets of $\R^n$; a function having zero derivative on a path-connected open subset is constant.\hspace*{\fill} [6]
+
+\vspace{10pt}
+\noindent\textbf{Metric spaces}\\
+Definition and examples. *Metrics used in Geometry*. Limits, continuity, balls, neighbourhoods, open and closed sets.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{The Contraction Mapping Theorem}\\
+The contraction mapping theorem. Applications including the inverse function theorem (proof of continuity of inverse function, statement of differentiability). Picard's solution of differential equations.\hspace*{\fill} [4]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Analysis II is, unsurprisingly, a continuation of IA Analysis I. The key idea in the course is to generalize what we did in Analysis I. The first thing we studied in Analysis I was the convergence of sequences of numbers. Here, we would like to study what it means for a sequence of \emph{functions} to converge (this is \emph{technically} a generalization of what we did before, since a sequence of numbers is just a sequence of functions $f_n: \{0\} \to \R$, but this is not necessarily a helpful way to think about it). It turns out this is non-trivial, and there are many ways in which we can define the convergence of functions, and different notions are useful in different circumstances.
+
+The next thing is the idea of \emph{uniform} continuity. This is a stronger notion than just continuity. Despite being stronger, we will prove an important theorem saying any continuous function on $[0, 1]$ (and in general a closed, bounded subset of $\R$) is uniformly continuous. This does not mean that uniform continuity is a useless notion, even if we are just looking at functions on $[0, 1]$. The \emph{definition} of uniform continuity is much stronger than just continuity, so we now know continuous functions on $[0, 1]$ are really nice, and this allows us to prove many things with ease.
+
+We can also generalize in other directions. Instead of looking at functions, we might want to define convergence for sequences in an \emph{arbitrary} set. Of course, if we are given a set of, say, apples, oranges and pears, we cannot define convergence in a natural way. Instead, we need to give the set some additional structure, such as a \emph{norm} or \emph{metric}. We can then define convergence in a very general setting.
+
+Finally, we will extend the notion of differentiation from functions $\R \to \R$ to general vector functions $\R^n \to \R^m$. This might sound easy --- we have been doing this in IA Vector Calculus all the time. We just need to formalize it a bit, just like what we did in IA Analysis I, right? It turns out differentiation from $\R^n$ to $\R^m$ is much more subtle, and we have to be really careful when we do so, and it takes quite a long while before we can prove that, say, $f(x, y, z) = x^2 e^{3z} \sin (2xy)$ is differentiable.
+
+\section{Uniform convergence}
+In IA Analysis I, we understood what it means for a sequence of real numbers to converge. Suppose instead we have a sequence of functions. In general, let $E$ be any set (not necessarily a subset of $\R$), and $f_n: E\to \R$ for $n = 1, 2, \cdots$ be a sequence of functions. What does it mean for $f_n$ to converge to some other function $f: E \to \R$?
+
+We want this notion of convergence to have properties similar to that of convergence of numbers. For example, a constant sequence $f_n = f$ has to converge to $f$, and convergence should not be affected if we change finitely many terms. It should also act nicely with products and sums.
+
+An obvious first attempt would be to define it in terms of the convergence of numbers.
+\begin{defi}[Pointwise convergence]
+ The sequence $f_n$ converges \emph{pointwise} to $f$ if
+ \[
+ f(x) = \lim_{n\to \infty} f_n(x)
+ \]
+ for all $x$.
+\end{defi}
+This is an easy definition that is simple to check, and has the usual properties of convergence. However, there is a problem. Ideally, we want to deduce properties of $f$ from properties of $f_n$. For example, it would be great if continuity of all $f_n$ implies continuity of $f$, and similarly for integrability and values of derivatives and integrals. However, it turns out we cannot. The notion of pointwise convergence is too weak. We will look at many examples where $f$ fails to preserve the properties of $f_n$.
+
+\begin{eg}
+ Let $f_n: [-1, 1] \to \R$ be defined by $f_n(x) = x^{1/(2n + 1)}$. These are all continuous, but the pointwise limit function is
+ \[
+ f_n(x) \to f(x) =
+ \begin{cases}
+ 1 & 0 < x \leq 1\\
+ 0 & x = 0\\
+ -1 & -1 \leq x < 0
+ \end{cases},
+ \]
+ which is not continuous.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$y$};
+
+ \foreach \n in {1,2,3,4,5,6} {
+ \pgfmathsetmacro\c{40 + 10 * \n};
+ \pgfmathsetmacro\k{1 / (2*\n + 1)};
+ \draw [domain=-3:3, samples=100, mred!\c!white] plot (\x, {\x ^ \k});
+ }
+ \end{tikzpicture}
+ \end{center}
+ Less excitingly, we can let $f_n$ be given by the following graph:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$y$};
+
+ \draw [thick, mred] (-3, -1) -- (-0.7, -1) -- (0.7, 1) -- (3, 1);
+ \draw [dashed] (-0.7, -1) -- (-0.7, 0) node [above] {$-\frac{1}{n}$};
+ \draw [dashed] (0.7, 1) -- (0.7, 0) node [below] {$\frac{1}{n}$};
+ \end{tikzpicture}
+ \end{center}
+ which converges to the same function as above.
+\end{eg}
+
+\begin{eg}
+ Let $f_n: [0, 1] \to \R$ be the piecewise linear function formed by joining $(0, 0), (\frac{1}{n}, n), (\frac{2}{n}, 0)$ and $(1, 0)$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$y$};
+ \draw [thick, mred] (0, 0) -- (0.7, 2) -- (1.4, 0) -- (4, 0);
+ \node [anchor = north east] {$0$};
+ \node at (1.4, 0) [below] {$\frac{2}{n}$};
+ \draw [dashed] (0.7, 0) node [below] {$\frac{1}{n}$} -- (0.7, 2) -- (0, 2) node [left] {$n$};
+ \end{tikzpicture}
+ \end{center}
+ The pointwise limit of this function is $f_n(x) \to f(x) = 0$. However, we have
+ \[
+ \int_0^1 f_n(x)\;\d x = 1\text{ for all }n;\quad \int_0^1 f(x) \;\d x = 0.
+ \]
+ So the limit of the integral is not the integral of the limit.
+\end{eg}
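We can see this numerically. The following is a small sketch using only the standard library (the midpoint rule and step count are our own arbitrary choices): each $f_n$ integrates to $1$, even though the pointwise limit is $0$ everywhere.

```python
def f_n(x, n):
    # the tent function through (0, 0), (1/n, n), (2/n, 0) and (1, 0)
    if x <= 1 / n:
        return n * n * x
    if x <= 2 / n:
        return n * n * (2 / n - x)
    return 0.0

def integral(g, a, b, steps=100_000):
    # crude midpoint-rule approximation of the Riemann integral
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

for n in (5, 50, 500):
    # each triangle has base 2/n and height n, so area 1
    assert abs(integral(lambda x: f_n(x, n), 0.0, 1.0) - 1.0) < 1e-2
```

So $\int_0^1 f_n = 1$ for every $n$, while the integral of the pointwise limit is $0$.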
+
+\begin{eg}
+ Let $f_n: [0, 1] \to \R$ be defined as
+ \[
+ f_n (x) =
+ \begin{cases}
+ 1 & n!x \in \Z\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ Since $f_n$ has finitely many discontinuities, it is Riemann integrable. However, the limit is
+ \[
+ f_n(x) \to f(x) =
+ \begin{cases}
+ 1 & x\in \Q\\
+ 0 & x\not\in \Q
+ \end{cases}
+ \]
+ which is not integrable. So integrability of a function is not preserved by pointwise limits.
+\end{eg}
+This suggests that we need a stronger notion of convergence. Of course, we don't want this notion to be too strong. For example, we could define $f_n \to f$ to mean ``$f_n = f$ for all sufficiently large $n$'', then any property common to $f_n$ is obviously inherited by the limit. However, this is clearly silly since only the most trivial sequences would converge.
+
+Hence we want to find a middle ground between the two cases --- a notion of convergence that is sufficiently strong to preserve most interesting properties, without being too trivial. To do so, we can examine what went wrong in the examples above. In the last example, even though our sequence $f_n$ does indeed tend pointwise to $f$, different points converge at different rates to $f$. For example, at $x = 1$, we already have $f_1(1) = f(1) = 1$. However, at $x = (100!)^{-1}$, $f_{99}(x) = 0$ while $f(x) = 1$. No matter how large $n$ is, we can still find some $x$ where $f_n(x)$ differs a lot from $f(x)$. In other words, if we are given pointwise convergence, there is no guarantee that for very large $n$, $f_n$ will ``look like'' $f$, since there might be some points for which $f_n$ has not started to move towards $f$.
+
+Hence, what we need is for $f_n$ to converge to $f$ at the same pace. This is known as uniform convergence.
+
+\begin{defi}[Uniform convergence]
+ A sequence of functions $f_n: E\to \R$ converges \emph{uniformly} to $f$ if
+ \[
+ (\forall \varepsilon)(\exists N)(\forall x)(\forall n > N)\; |f_n(x) - f(x)| < \varepsilon.
+ \]
+ Alternatively, we can say
+ \[
+ (\forall \varepsilon)(\exists N)(\forall n > N)\; \sup_{x\in E} |f_n(x) - f(x)| < \varepsilon.
+ \]
+\end{defi}
+Note that similar to pointwise convergence, the definition does not require $E$ to be a subset of $\R$. It could as well be the set $\{\text{Winnie},\text{Piglet},\text{Tigger}\}$. However, many of our \emph{theorems} about uniform convergence will require $E$ to be a subset of $\R$, or else we cannot sensibly integrate or differentiate our function.
+
+We can compare this definition with the definition of pointwise convergence:
+\[
+ (\forall \varepsilon)(\forall x)(\exists N)(\forall n > N)\; |f_n(x) - f(x)| < \varepsilon.
+\]
+The only difference is in where the $(\forall x)$ sits, and this is what makes all the difference. Uniform convergence requires that there is an $N$ that works for \emph{every} $x$, while pointwise convergence just requires that for each $x$, we can find an $N$ that works.
+
+It should be clear from definition that if $f_n \to f$ uniformly, then $f_n \to f$ pointwise. We will show that the converse is false:
+\begin{eg}
+ Again consider our first example, where $f_n: [-1, 1] \to \R$ is defined by $f_n(x) = x^{1/(2n + 1)}$. If the uniform limit existed, then it would have to be given by
+ \[
+ f_n(x) \to f(x) =
+ \begin{cases}
+ 1 & 0 < x \leq 1\\
+ 0 & x = 0\\
+ -1 & -1 \leq x < 0
+ \end{cases},
+ \]
+ since uniform convergence implies pointwise convergence.
+
+ We will show that we don't have uniform convergence. Pick $\varepsilon = \frac{1}{4}$. Then for each $n$, $x = 2^{-(2n + 1)}$ will have $f_n(x) = \frac{1}{2}$, $f(x) = 1$. So there is some $x$ such that $|f_n(x) - f(x)| > \varepsilon$. So $f_n \not\to f$ uniformly.
+\end{eg}
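We can confirm this little computation numerically (a sketch; the range of $n$ checked is arbitrary):

```python
# f_n(x) = x^(1/(2n+1)); at x = 2^(-(2n+1)) we get f_n(x) = 1/2,
# while the pointwise limit there is f(x) = 1.
for n in range(1, 20):
    x = 2.0 ** -(2 * n + 1)
    fn = x ** (1.0 / (2 * n + 1))
    assert abs(fn - 0.5) < 1e-9    # f_n(x) = 1/2 up to rounding
    assert abs(fn - 1.0) > 0.25    # so |f_n(x) - f(x)| > 1/4 = epsilon for every n
```

So $\sup_x |f_n(x) - f(x)| \geq \frac{1}{2}$ for every $n$, and the convergence cannot be uniform.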
+
+\begin{eg}
+ Let $f_n: \R \to \R$ be defined by $f_n (x) = \frac{x}{n}$. Then $f_n(x) \to f(x) = 0$ pointwise. However, this convergence is not uniform in $\R$ since $|f_n(x) - f(x)| = \frac{|x|}{n}$, and this can be arbitrarily large for any $n$.
+
+ However, if we restrict $f_n$ to a bounded domain, then the convergence is uniform. Let the domain be $[-a, a]$ for some positive, finite $a$. Then
+ \[
+ \sup_{x \in [-a, a]} |f_n(x) - f(x)| = \sup_{x \in [-a, a]} \frac{|x|}{n} = \frac{a}{n}.
+ \]
+ So given $\varepsilon$, pick $N$ such that $N > \frac{a}{\varepsilon}$, and we are done.
+\end{eg}
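As a sketch of this last step (the values of $a$ and $\varepsilon$ are our own arbitrary choices):

```python
import math

# f_n(x) = x/n on [-a, a]: sup |f_n - 0| = a/n, so any N > a/eps works
a, eps = 10.0, 1e-3
N = math.ceil(a / eps)

for n in range(N + 1, N + 100):
    sup_n = max(abs(x / n) for x in (-a, a))   # the sup is attained at the endpoints
    assert sup_n == a / n
    assert sup_n < eps                          # uniform closeness for all n > N
```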
+
+Recall that for sequences of real numbers, we have both ordinary convergence and Cauchy convergence, which we proved to be equivalent. Then clearly pointwise convergence and pointwise Cauchy convergence of functions are equivalent. We will now look into the case of uniform convergence.
+
+\begin{defi}[Uniformly Cauchy sequence]
+ A sequence $f_n: E\to \R$ of functions is \emph{uniformly Cauchy} if
+ \[
+ (\forall \varepsilon > 0)(\exists N)(\forall m,n > N)\;\sup_{x\in E}|f_n(x) - f_m(x)| < \varepsilon.
+ \]
+\end{defi}
+
+Our first theorem will be that uniform Cauchy convergence and uniform convergence are equivalent.
+
+\begin{thm}
+ Let $f_n: E\to \R$ be a sequence of functions. Then $(f_n)$ converges uniformly if and only if $(f_n)$ is uniformly Cauchy.
+\end{thm}
+
+\begin{proof}
+ First suppose that $f_n \to f$ uniformly. Given $\varepsilon$, we know that there is some $N$ such that
+ \[
+ (\forall n > N)\; \sup_{x\in E} |f_n(x) - f(x)| < \frac{\varepsilon}{2}.
+ \]
+ Then if $n, m > N$, $x\in E$ we have
+ \[
+ |f_n(x) - f_m(x)| \leq |f_n(x) - f(x)| + |f_m(x) - f(x)| < \varepsilon.
+ \]
+ So done.
+
+ Now suppose $(f_n)$ is uniformly Cauchy. Then $(f_n(x))$ is Cauchy for all $x$. So it converges. Let
+ \[
+ f(x) = \lim_{n\to \infty}f_n(x).
+ \]
+ We want to show that $f_n \to f$ uniformly. Given $\varepsilon > 0$, choose $N$ such that whenever $n, m > N$, $x\in E$, we have $|f_n(x) - f_m(x)| < \frac{\varepsilon}{2}$. Letting $m\to \infty$, $f_m(x) \to f(x)$. So we have $|f_n(x) - f(x)| \leq \frac{\varepsilon}{2} < \varepsilon$. So done.
+\end{proof}
+
+This is an important result. If we are given a concrete sequence of functions, then the usual way to show it converges is to compute the pointwise limit and then prove that the convergence is uniform. However, if we are dealing with sequences of functions in general, this is less likely to work. Instead, it is often much easier to show that a sequence of functions is uniformly convergent by showing it is uniformly Cauchy.
+
+We now move on to show that uniform convergence tends to preserve properties of functions.
+\begin{thm}[Uniform convergence and continuity]
+ Let $E \subseteq \R$, $x \in E$ and $f_n, f: E\to \R$. Suppose $f_n \to f$ uniformly, and $f_n$ are continuous at $x$ for all $n$. Then $f$ is also continuous at $x$.
+
+ In particular, if $f_n$ are continuous everywhere, then $f$ is continuous everywhere.
+\end{thm}
+This can be concisely phrased as ``the uniform limit of continuous functions is continuous''.
+
+\begin{proof}
+ Let $\varepsilon > 0$. Choose $N$ such that for all $n \geq N$, we have
+ \[
+ \sup_{y\in E}|f_n(y) - f(y)| < \varepsilon.
+ \]
+ Since $f_N$ is continuous at $x$, there is some $\delta$ such that
+ \[
+ |x - y| < \delta \Rightarrow |f_N(x) - f_N(y)| < \varepsilon.
+ \]
+ Then for each $y$ such that $|x - y| < \delta$, we have
+ \[
+ |f(x) - f(y)| \leq |f(x) - f_N(x)| + |f_N(x) - f_N(y)| + |f_N(y) - f(y)| < 3\varepsilon.\qedhere
+ \]
+\end{proof}
+
+\begin{thm}[Uniform convergence and integrals]
+ Let $f_n, f: [a, b]\to \R$ be Riemann integrable, with $f_n \to f$ uniformly. Then
+ \[
+ \int_a^b f_n(t)\;\d t \to \int_a^b f(t)\;\d t.
+ \]
+\end{thm}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \left|\int_a^b f_n(t) \;\d t - \int_a^b f(t)\;\d t\right| &= \left|\int_a^b f_n(t) - f(t)\;\d t\right|\\
+ &\leq \int_a^b |f_n(t) - f(t)|\;\d t\\
+ &\leq \sup_{t\in [a, b]}|f_n(t) - f(t)| (b - a)\\
+ &\to 0\text{ as }n\to \infty.\qedhere
+ \end{align*}
+\end{proof}
+This is really the easy part. What we would also want to prove is that if each $f_n$ is integrable and $f_n \to f$ uniformly, then $f$ is integrable. This is indeed true, but we will not prove it yet. We will come back to it later, when we study integrability in more detail.
+
+So far so good. However, the relationship between uniform convergence and differentiability is more subtle. The uniform limit of differentiable functions need not be differentiable. Even if it were, the limit of the derivative is not necessarily the same as the derivative of the limit, even if we just want pointwise convergence of the derivative.
+
+\begin{eg}
+ Let $f_n, f: [-1, 1] \to \R$ be defined by
+ \[
+ f_n(x) = |x|^{1 + 1/n}, \quad f(x) = |x|.
+ \]
+ Then $f_n \to f$ uniformly (exercise).
+
+ Each $f_n$ is differentiable --- this is obvious at $x \not= 0$, and at $x = 0$, the derivative is
+ \[
+ f_n'(0) = \lim_{x \to 0}\frac{f_n (x) - f_n(0)}{x} = \lim_{x \to 0} \sgn(x) |x|^{1/n} = 0.
+ \]
+ However, the limit $f$ is not differentiable at $x = 0$.
+\end{eg}
+
+\begin{eg}
+ Let
+ \[
+ f_n(x) = \frac{\sin nx}{\sqrt{n}}
+ \]
+ for all $x\in \R$. Then
+ \[
+ \sup_{x\in \R}|f_n (x)| \leq \frac{1}{\sqrt{n}} \to 0.
+ \]
+ So $f_n \to f = 0$ uniformly in $\R$. However, the derivative is
+ \[
+ f_n'(x) = \sqrt{n} \cos nx,
+ \]
+ which does not converge to $f' = 0$, e.g.\ at $x = 0$.
+\end{eg}
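Numerically, this looks as follows (a sketch; the grid and the finite-difference step are arbitrary choices): the functions themselves become uniformly small, while an estimate of $f_n'(0)$ grows like $\sqrt{n}$.

```python
import math

def f(n, x):
    return math.sin(n * x) / math.sqrt(n)

for n in (1, 100, 10_000):
    # uniform closeness to the zero function: sup |f_n| <= 1/sqrt(n)
    grid = [k * 0.001 for k in range(7000)]
    assert max(abs(f(n, x)) for x in grid) <= 1 / math.sqrt(n) + 1e-15

    # ... but the derivative at 0 blows up: f_n'(0) = sqrt(n)
    h = 1e-8
    deriv = (f(n, h) - f(n, 0.0)) / h          # finite-difference approximation
    assert abs(deriv - math.sqrt(n)) < 1e-2 * math.sqrt(n)
```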
+
+Hence, for differentiability to play nice, we need a condition even stronger than uniform convergence.
+\begin{thm}
+ Let $f_n: [a, b]\to \R$ be a sequence of functions differentiable on $[a, b]$ (at the end points $a$, $b$, this means that the one-sided derivatives exist). Suppose the following holds:
+ \begin{enumerate}
+ \item For some $c\in [a, b]$, $f_n(c)$ converges.
+ \item The sequence of derivatives $(f_n')$ converges uniformly on $[a, b]$.
+ \end{enumerate}
+ Then $(f_n)$ converges uniformly on $[a, b]$, and if $f = \lim f_n$, then $f$ is differentiable with derivative $f'(x) = \lim f_n'(x)$.
+\end{thm}
+Note that we do \emph{not} assume that $f_n'$ are continuous or even Riemann integrable. If they are, then the proof is much easier!
+
+\begin{proof}
+ If we are given a specific sequence of functions and are asked to prove that they converge uniformly, we usually take the pointwise limit and show that the convergence is uniform. However, given a general function, this is usually not helpful. Instead, we can use the Cauchy criterion by showing that the sequence is uniformly Cauchy.
+
+ We want to find an $N$ such that $n, m> N$ implies $\sup|f_n - f_m| < \varepsilon$. We want to relate this to the derivatives. We might want to use the fundamental theorem of calculus for this. However, we don't know that the derivative is integrable! So instead, we go for the mean value theorem.
+
+ Fix $x\in [a, b]$. We apply the mean value theorem to $f_n - f_m$ to get
+ \[
+ (f_n - f_m)(x) - (f_n - f_m)(c) = (x - c)(f'_n - f'_m)(t)
+ \]
+ for some $t$ between $x$ and $c$.
+
+ Taking the supremum and rearranging terms, we obtain
+ \[
+ \sup_{x\in [a, b]}|f_n(x) - f_m(x)| \leq |f_n(c) - f_m(c)| + (b - a)\sup_{t\in [a, b]}|f_n'(t) - f_m'(t)|.
+ \]
+ So given any $\varepsilon$, since $f_n'$ and $f_n(c)$ converge and are hence Cauchy, there is some $N$ such that for any $n, m \geq N$,
+ \[
+ \sup_{t \in [a, b]} |f_n'(t) - f_m'(t)| < \varepsilon, \quad |f_n(c) - f_m(c)| < \varepsilon.
+ \]
+ Hence we obtain
+ \[
+ n, m \geq N \Rightarrow \sup_{x\in [a, b]}|f_n(x) - f_m(x)| < (1 + b - a) \varepsilon.
+ \]
+ So by the Cauchy criterion, we know that $f_n$ converges uniformly. Let $f = \lim f_n$.
+
+ Now we have to check differentiability. Let $f_n' \to h$. For any fixed $y\in [a, b]$, define
+ \[
+ g_n(x) =
+ \begin{cases}
+ \frac{f_n(x) - f_n(y)}{x - y} & x \not= y\\
+ f_n'(y) & x = y
+ \end{cases}
+ \]
+ Then by definition, $f_n$ is differentiable at $y$ iff $g_n$ is continuous at $y$. Also, define
+ \[
+ g(x) =
+ \begin{cases}
+ \frac{f(x) - f(y)}{x - y} & x \not= y\\
+ h(y) & x = y
+ \end{cases}
+ \]
+ Then $f$ is differentiable with derivative $h$ at $y$ iff $g$ is continuous at $y$. However, we know that $g_n \to g$ pointwise on $[a, b]$, and we know that $g_n$ are all continuous. To conclude that $g$ is continuous, we have to show that the convergence is uniform. To show that $g_n$ converges uniformly, we rely on the Cauchy criterion and the mean value theorem.
+
+ For $x \not = y$, we know that
+ \[
+ g_n(x) - g_m(x) = \frac{(f_n - f_m)(x) - (f_n - f_m)(y)}{x - y} = (f'_n - f'_m)(t)
+ \]
+ for some $t$ between $x$ and $y$. This also holds for $x = y$, since $g_n(y) - g_m(y) = f_n'(y) - f_m'(y)$ by definition.
+
+ Let $\varepsilon > 0$. Since $(f_n')$ converges uniformly, there is some $N$ such that for all $x\not= y$, $n, m > N$, we have
+ \[
+ |g_n(x) - g_m(x)| \leq \sup |f'_n - f'_m| < \varepsilon.
+ \]
+ So
+ \[
+ n, m\geq N \Rightarrow \sup_{[a, b]}|g_n - g_m| < \varepsilon,
+ \]
+ i.e.\ $g_n$ converges uniformly. Hence the limit function $g$ is continuous, in particular at $x = y$. So $f$ is differentiable at $y$ and $f'(y) = h(y) = \lim f_n'(y)$.
+\end{proof}
+
+If we assume additionally that $f_n'$ are continuous, then there is an easy proof of this theorem. By the fundamental theorem of calculus, we have
+\[
+ f_n(x) = f_n(c) + \int_c^x f_n'(t)\;\d t.\tag{$*$}
+\]
+Then we get that
+\begin{align*}
+ \sup_{[a, b]}|f_n(x) - f_m(x)| &\leq |f_n(c) - f_m(c)| + \sup_{x\in [a, b]}\left|\int_c^x (f_n'(t) - f_m'(t))\;\d t\right|\\
+ &\leq |f_n(c) - f_m(c)| + (b - a)\sup_{t\in [a, b]}|f_n'(t) - f_m'(t)|\\
+ &< \varepsilon
+\end{align*}
+for sufficiently large $n, m> N$.
+
+So by the Cauchy criterion, $f_n \to f$ uniformly for some function $f: [a, b] \to \R$.
+
+Since the $f_n'$ are continuous, $h = \lim\limits_{n\to \infty} f_n'$ is continuous and hence integrable. Taking the limit of $(*)$, we get
+\[
+ f(x) = f(c) + \int_c^x h(t)\;\d t.
+\]
+Then the fundamental theorem of calculus says that $f$ is differentiable and $f'(x) = h(x) = \lim f_n'(x)$. So done.
+
+Finally, we have a small proposition that can come handy.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Let $f_n, g_n: E\to \R$, be sequences, and $f_n \to f$, $g_n \to g$ uniformly on $E$. Then for any $a, b\in \R$, $af_n + bg_n \to af + bg$ uniformly.
+ \item Let $f_n \to f$ uniformly, and let $g: E\to \R$ be bounded. Then $gf_n: E\to \R$ converges uniformly to $gf$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Easy exercise.
+ \item Say $|g(x)| < M$ for all $x\in E$. Then
+ \[
+ |(gf_n)(x) - (gf)(x)| \leq M |f_n(x) - f(x)|.
+ \]
+ So
+ \[
+ \sup_E |gf_n - gf| \leq M \sup_E |f_n - f| \to 0.\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+Note that (ii) is false without assuming boundedness. An easy example is to take $f_n(x) = \frac{1}{n}$ for all $x\in \R$, and $g(x) = x$. Then $f_n \to 0$ uniformly, but $(gf_n)(x) = \frac{x}{n}$ does not converge uniformly to $0$.
+
+\section{Series of functions}
+\subsection{Convergence of series}
+Recall that in Analysis I, we studied the convergence of a series of numbers. Here we will look at a series of \emph{functions}. The definitions are almost exactly the same.
+\begin{defi}[Convergence of series]
+ Let $g_n: E \to \R$ be a sequence of functions. Then we say the series $\sum_{n = 1}^\infty g_n$ converges at a point $x \in E$ if the sequence of partial sums
+ \[
+ f_n = \sum_{j = 1}^ng_j
+ \]
+ converges at $x$. The series converges uniformly if $f_n$ converges uniformly.
+\end{defi}
+
+\begin{defi}[Absolute convergence]
+ $\sum g_n$ converges \emph{absolutely} at a point $x\in E$ if $\sum |g_n|$ converges at $x$.
+
+ $\sum g_n$ converges \emph{absolutely uniformly} if $\sum |g_n|$ converges uniformly.
+\end{defi}
+
+\begin{prop}
+ Let $g_n: E\to \R$. If $\sum g_n$ converges absolutely uniformly, then $\sum g_n$ converges uniformly.
+\end{prop}
+
+\begin{proof}
+ Again, we don't have a candidate for the limit. So we use the Cauchy criterion.
+
+ Let $f_n = \sum\limits_{j = 1}^n g_j$ and $h_n(x) = \sum\limits_{j = 1}^n |g_j|$ be the partial sums. Then for $n > m$, we have
+ \[
+ |f_n(x) - f_m(x)| = \left|\sum_{j = m + 1}^n g_j(x)\right| \leq \sum_{j = m + 1}^n |g_j(x)| = |h_n(x) - h_m(x)|.
+ \]
+ By hypothesis, we have
+ \[
+ \sup_{x\in E}|h_n(x) - h_m(x)| \to 0\text{ as }n, m\to \infty.
+ \]
+ So we get
+ \[
+ \sup_{x\in E}|f_n(x) - f_m(x)| \to 0\text{ as }n, m\to \infty.
+ \]
+ So the result follows from the Cauchy criterion.
+\end{proof}
+It is important to remember that uniform convergence plus absolute pointwise convergence does not imply absolute uniform convergence.
+
+\begin{eg}
+ Consider the series
+ \[
+ \sum_{n = 1}^\infty \frac{(-1)^n}{n}x^n.
+ \]
+ This converges absolutely for every $x\in [0, 1)$ since it is bounded by the geometric series. In fact, it converges \emph{uniformly} on $[0, 1)$ (see example sheet). However, this does \emph{not} converge absolutely uniformly on $[0, 1)$.
+
+ We can consider the difference in partial sums
+ \[
+ \sum_{j = m}^n \left|\frac{(-1)^j}{j}x^j\right| = \sum_{j = m}^n \frac{1}{j}|x|^j \geq \left(\frac{1}{m} + \frac{1}{m + 1} + \cdots + \frac{1}{n}\right)|x|^n.
+ \]
+ For any $m$, we can make this arbitrarily large: since the harmonic series diverges, we first pick $n$ large enough that $\frac{1}{m} + \frac{1}{m + 1} + \cdots + \frac{1}{n}$ is large, and then take $x$ close enough to $1$ that $|x|^n$ is near $1$. So the supremum over $x$ is unbounded, and the partial sums of $\sum |g_j|$ are not uniformly Cauchy.
+\end{eg}
+
+\begin{thm}[Weierstrass M-test]
+ Let $g_n: E \to \R$ be a sequence of functions. Suppose there is some sequence $M_n$ such that for all $n$, we have
+ \[
+ \sup_{x\in E}|g_n (x)| \leq M_n.
+ \]
+ If $\sum M_n$ converges, then $\sum g_n$ converges absolutely uniformly.
+\end{thm}
+This is in fact a very easy result, and we could just as well reproduce the proof whenever we need it. However, this pattern of proving absolute uniform convergence is so common that we prove it as a test.
+
+\begin{proof}
+ Let $f_n = \sum\limits_{j = 1}^n |g_j|$ be the partial sums. Then for $n > m$, we have
+ \[
+ |f_n(x) - f_m(x)| = \sum_{j = m + 1}^n |g_j(x)| \leq \sum_{j = m + 1}^n M_j.
+ \]
+ Taking supremum, we have
+ \[
+ \sup_{x\in E}|f_n(x) - f_m(x)| \leq \sum_{j = m + 1}^n M_j \to 0\text{ as }n, m\to \infty.
+ \]
+ So done by the Cauchy criterion.
+\end{proof}
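As a numerical illustration of the M-test (a sketch; the series $\sum \cos(nx)/n^2$, the grid and the cutoffs are our own choices, not from the notes): with $M_n = 1/n^2$, the tail $\sum_{n = m + 1}^{N} M_n$ bounds the difference of partial sums uniformly in $x$.

```python
import math

# g_n(x) = cos(n x)/n^2 satisfies sup |g_n| <= M_n = 1/n^2, and sum M_n converges
def partial_sum(x, N):
    return sum(math.cos(n * x) / n ** 2 for n in range(1, N + 1))

xs = [k * 0.01 for k in range(629)]        # grid covering roughly [0, 2*pi]
sup_diff = max(abs(partial_sum(x, 400) - partial_sum(x, 200)) for x in xs)

# uniform Cauchy estimate: the sup is bounded by the tail of sum 1/n^2,
# which is at most the integral bound 1/200
tail = sum(1 / n ** 2 for n in range(201, 401))
assert sup_diff <= tail <= 1 / 200
```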
+
+\subsection{Power series}
+A particularly interesting kind of series is a \emph{power series}. We have already met these in IA Analysis I and proved some results about them. However, our results were pointwise results, discussing how $\sum c_n (x - a)^n$ behaves at a particular point $x$. Here we will quickly look into how the power series behaves as a function of $x$. In particular, we want to know whether it converges absolutely uniformly.
+
+\begin{thm}
+ Let $\sum\limits_{n = 0}^\infty c_n (x - a)^n$ be a real power series. Then there exists a unique number $R\in [0, +\infty]$ (called the radius of convergence) such that
+ \begin{enumerate}
+ \item If $|x - a| < R$, then $\sum c_n (x - a)^n$ converges absolutely.
+ \item If $|x - a| > R$, then $\sum c_n (x - a)^n$ diverges.
+ \item If $R > 0$ and $0 < r < R$, then $\sum c_n (x - a)^n$ converges absolutely uniformly on $[a - r, a + r]$.
+
+ We say that the sum converges locally absolutely uniformly inside the circle of convergence, i.e.\ for every point $y\in (a - R, a + R)$, there is some open interval around $y$ on which the sum converges absolutely uniformly.
+ \end{enumerate}
+ These results hold for complex power series as well, but for concreteness we will just do it for real series.
+\end{thm}
+Note that the first two statements are things we already know from IA Analysis I, and we are not going to prove them.
+\begin{proof}
+ See IA Analysis I for (i) and (ii).
+
+ For (iii), note that from (i), taking $x = a - r$, we know that $\sum |c_n| r^n$ is convergent. But we know that if $x\in [a - r, a + r]$, then
+ \[
+ |c_n (x - a)^n| \leq |c_n| r^n.
+ \]
+ So the result follows from the Weierstrass M-test by taking $M_n = |c_n| r^n$.
+\end{proof}
+Note that uniform convergence need not hold on the entire interval of convergence.
+\begin{eg}
+ Consider $\sum x^n$. This converges for $x\in (-1, 1)$, but uniform convergence fails on $(-1, 1)$ since the tail
+ \[
+ \sum_{j = m}^n x^j \geq (n - m + 1) x^n,
+ \]
+ since each term is at least $x^n$ (for $0 < x < 1$). This is not uniformly small: for fixed $m$, we can make it as large as we like by taking $n$ large and then $x$ close enough to $1$.
+\end{eg}
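A numerical sketch of this failure (the parameter choices are our own): each term of the tail $\sum_{j = m}^n x^j$ is at least $x^n$, so the tail is at least $(n - m + 1)x^n$, and choosing $x = 2^{-1/n}$ makes $x^n = \frac{1}{2}$ while $x \to 1$ as $n$ grows.

```python
m = 10
tails = []
for n in (100, 1000, 10_000):
    x = 2.0 ** (-1.0 / n)                # x^n = 1/2, and x -> 1 as n grows
    tail = sum(x ** j for j in range(m, n + 1))
    # each of the n - m + 1 terms is at least x^n = 1/2
    assert tail >= (n - m + 1) * 0.5 * (1 - 1e-9)
    tails.append(tail)

assert tails[0] < tails[1] < tails[2]    # the suprema of the tails are unbounded
```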
+
+\begin{thm}[Termwise differentiation of power series]
+ Suppose $\sum c_n (x - a)^n$ is a real power series with radius of convergence $R > 0$. Then
+ \begin{enumerate}
+ \item The ``derived series''
+ \[
+ \sum_{n = 1}^\infty n c_n (x - a)^{n - 1}
+ \]
+ has radius of convergence $R$.
+ \item The function defined by $f(x) = \sum c_n (x - a)^n$, $x\in (a - R, a + R)$ is differentiable with derivative $f'(x) = \sum n c_n (x - a)^{n - 1}$ within the (open) circle of convergence.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $R_1$ be the radius of convergence of the derived series. We know
+ \[
+ |c_n(x - a)^n| = |c_n||x - a|^{n - 1}|x - a| \leq |n c_n (x - a)^{n - 1}| |x - a|.
+ \]
+ Hence if the derived series $\sum n c_n(x - a)^{n - 1}$ converges absolutely for some $x$, then so does $\sum c_n (x - a)^n$. So $R_1 \leq R$.
+
+ Suppose that the inequality is strict, i.e.\ $R_1 < R$, then there are $r_1, r$ such that $R_1 < r_1 < r < R$, where $\sum n|c_n| r_1^{n - 1}$ diverges while $\sum |c_n| r^n$ converges. But this cannot be true since $n|c_n| r_1^{n - 1} \leq |c_n| r^n$ for sufficiently large $n$. So we must have $R_1 = R$.
+
+ \item Let $f_n(x) = \sum\limits_{j = 0}^n c_j (x - a)^j$. Then $f_n '(x) = \sum\limits_{j = 1}^n j c_j (x - a)^{j - 1}$. We want to use the result that the derivative of the limit is the limit of the derivatives. This requires that $f_n$ converges at a point, and that $f_n'$ converges uniformly. The first is obviously true, and we know that $f_n'$ converges uniformly on $[a - r, a + r]$ for any $r < R$. So for each $x_0$, there is some interval containing $x_0$ on which $f_n'$ converges uniformly. So on this interval, we know that
+ \[
+ f(x) = \lim_{n \to \infty}f_n (x)
+ \]
+ is differentiable with
+ \[
+ f'(x) = \lim_{n \to \infty}f_n'(x) = \sum_{j = 1}^\infty jc_j (x - a)^{j - 1}.
+ \]
+ In particular,
+ \[
+ f'(x_0) = \sum_{j = 1}^\infty jc_j (x_0 - a)^{j - 1}.
+ \]
+ Since this is true for all $x_0$, the result follows.\qedhere
+ \end{enumerate}
+\end{proof}
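As a concrete sketch of the theorem (the choice $c_j = 1/j!$ and $a = 0$, i.e.\ the exponential series, is our own): the derived series $\sum j c_j x^{j - 1} = \sum x^{j - 1}/(j - 1)!$ is the same series again, so both partial sums should approach $e^x$ anywhere inside the circle of convergence (here $R = \infty$).

```python
import math

def partial(x, N):
    # partial sum of sum x^j / j!
    return sum(x ** j / math.factorial(j) for j in range(N + 1))

def derived_partial(x, N):
    # partial sum of the derived series sum j * x^(j-1) / j!
    return sum(j * x ** (j - 1) / math.factorial(j) for j in range(1, N + 1))

for x in (-2.0, 0.5, 3.0):
    assert abs(partial(x, 30) - math.exp(x)) < 1e-9
    assert abs(derived_partial(x, 30) - math.exp(x)) < 1e-9
```

So the termwise derivative of this power series is again the function it defines, as the identity $f' = f$ for the exponential predicts.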
+
+\section{Uniform continuity and integration}
+\subsection{Uniform continuity}
+Recall that we had a rather weak notion of convergence, known as pointwise convergence, and then promoted it to \emph{uniform convergence}. The process of this promotion was to replace the condition ``for each $x$, we can find an $N$'' with ``we can find an $N$ that works for every $x$''. We are going to do the same for continuity to obtain uniform continuity.
+
+\begin{defi}[Uniform continuity]
+ Let $E \subseteq \R$ and $f: E\to \R$. We say that $f$ is \emph{uniformly continuous} on $E$ if
+ \[
+ (\forall \varepsilon)(\exists \delta > 0)(\forall x)(\forall y)\; |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon.
+ \]
+\end{defi}
+Compare this to the definition of continuity:
+\[
+ (\forall \varepsilon)(\forall x)(\exists \delta > 0)(\forall y)\; |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon.
+\]
+Again, we have shifted the $(\forall x)$ out of the $(\exists \delta)$ quantifier. The difference is that in ordinary continuity, $\delta$ can depend on our choice of $x$, but in uniform continuity, the same $\delta$ has to work for every $x$, i.e.\ $\delta$ depends only on $\varepsilon$. Again, clearly a uniformly continuous function is continuous.
+
+In general, the converse is not true, as we will soon see in two examples. However, the converse is true in a lot of cases.
+\begin{thm}
+ Any continuous function on a closed, bounded interval is uniformly continuous.
+\end{thm}
+
+\begin{proof}
+ We prove this by contradiction. Suppose $f: [a, b] \to \R$ is continuous but not uniformly continuous. Then there is some $\varepsilon > 0$ such that for every $n$, the choice $\delta = \frac{1}{n}$ fails, i.e.\ there are some $x_n, y_n \in [a, b]$ such that $|x_n - y_n| < \frac{1}{n}$ but $|f(x_n) - f(y_n)| \geq \varepsilon$.
+
+ Since we are on a closed, bounded interval, by Bolzano-Weierstrass, $(x_n)$ has a convergent subsequence $(x_{n_i}) \to x$. Then we also have $y_{n_i}\to x$. So by continuity, we must have $f(x_{n_i}) \to f(x)$ and $f(y_{n_i}) \to f(x)$. But $|f(x_{n_i}) - f(y_{n_i})| \geq \varepsilon$ for all $i$. This is a contradiction.
+\end{proof}
+Note that we proved this in the special case where the domain is $[a, b]$ and the codomain is $\R$. In fact, $[a, b]$ can be replaced by any compact metric space; $\R$ by any metric space. This is since all we need is for Bolzano-Weierstrass to hold in the domain, i.e.\ the domain is sequentially compact (ignore this comment if you have not taken IB Metric and Topological Spaces).
+
+Instead of a contradiction, we can also give a direct proof of this statement, using the Heine-Borel theorem, which says that $[a, b]$ is compact.
+
+While this is a nice theorem, in general a continuous function need not be uniformly continuous.
+
+\begin{eg}
+ Consider $f: (0, 1] \to \R$ given by $f(x) = \frac{1}{x}$. This is not uniformly continuous, since when we get very close to $0$, a small change in $x$ produces a large change in $\frac{1}{x}$.
+
+ In particular, for any $\delta < 1$, pick $x < \delta$ and let $y = \frac{x}{2}$. Then $|x - y| = \frac{x}{2} < \delta$ but $|f(x) - f(y)| = \frac{1}{x} > 1$.
+\end{eg}
+
+In this example, the function is unbounded. However, even bounded functions can fail to be uniformly continuous.
+\begin{eg}
+ Let $f: (0, 1] \to \R$, $f(x) = \sin \frac{1}{x}$. We let
+ \[
+ x_n = \frac{1}{2n\pi},\quad y_n = \frac{1}{(2n + \frac{1}{2})\pi}.
+ \]
+ Then we have
+ \[
+ |f(x_n) - f(y_n)| = |0 - 1| = 1,
+ \]
+ while
+ \[
+ |x_n - y_n| = \frac{1}{2n(4n + 1)\pi} \to 0.
+ \]
+\end{eg}
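The failure witnessed by $x_n$ and $y_n$ is easy to observe numerically. The following sketch (an illustration only, not part of the notes) evaluates the pair for a few values of $n$, showing the distances shrinking while the function gap stays at $1$:

```python
import math

# Numeric illustration: for f(x) = sin(1/x) on (0, 1], the points
# x_n = 1/(2n*pi) and y_n = 1/((2n + 1/2)*pi) get arbitrarily close,
# while |f(x_n) - f(y_n)| stays equal to 1. So no single delta works.
def f(x):
    return math.sin(1 / x)

def witness(n):
    x_n = 1 / (2 * n * math.pi)
    y_n = 1 / ((2 * n + 0.5) * math.pi)
    return abs(x_n - y_n), abs(f(x_n) - f(y_n))

# Each entry is (|x_n - y_n|, |f(x_n) - f(y_n)|).
pairs = [witness(n) for n in (1, 10, 1000)]
```
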
+
+\subsection{Applications to Riemann integrability}
+We can apply the idea of uniform continuity to Riemann integration.
+
+We first quickly recap and summarize the things we know about Riemann integrals from IA Analysis I. Let $f: [a, b] \to \R$ be a bounded function, say $m \leq f(x) \leq M$ for all $x\in [a, b]$. Consider a partition of $[a, b]$
+\[
+ P = \{a_0, a_1, \cdots, a_n\},
+\]
+i.e.\ $a = a_0 < a_1 < a_2 < \cdots < a_n = b$. The \emph{upper sum} is defined as
+\[
+ U(P, f) = \sum_{j = 0}^{n - 1} (a_{j + 1} - a_j) \sup_{[a_j, a_{j + 1}]} f.
+\]
+Similarly, the lower sum is defined as
+\[
+ L(P, f) = \sum_{j = 0}^{n - 1} (a_{j + 1} - a_j) \inf_{[a_j, a_{j + 1}]} f.
+\]
+It is clear from definition that
+\[
+ m(b - a) \leq L(P, f) \leq U(P, f) \leq M(b - a).
+\]
+Also, if a partition $P^*$ is a \emph{refinement} of a partition $P$, i.e.\ it contains all the points of $P$ and possibly more, then
+\[
+ L(P, f) \leq L(P^*, f) \leq U(P^*, f) \leq U(P, f).
+\]
+It thus follows that if $P_1$ and $P_2$ are arbitrary partitions, then
+\[
+ L(P_1, f) \leq U(P_2, f),
+\]
+which we can show by finding a partition that is simultaneously a refinement of $P_1$ and $P_2$. The importance of this result is that
+\[
+ \sup_{P} L(P, f) \leq \inf_P U(P, f),
+\]
+where we take extrema over all partitions $P$. We define these to be the \emph{upper and lower integrals}
+\[
+ I^*(f) = \inf_P U(P, f),\quad I_*(f) = \sup_P L(P, f).
+\]
+So we know that
+\[
+ m(b - a) \leq I_*(f) \leq I^*(f) \leq M(b - a).
+\]
+Now given any $\varepsilon > 0$, by definition of the infimum, there is a partition $P_1$ such that
+\[
+ U(P_1, f) < I^*(f) + \frac{\varepsilon}{2}.
+\]
+Similarly, there is a partition $P_2$ such that
+\[
+ L(P_2, f) > I_*(f) - \frac{\varepsilon}{2}.
+\]
+So if we let $P = P_1 \cup P_2$, then $P$ is a refinement of both $P_1$ and $P_2$. So
+\[
+ U(P, f) < I^*(f) + \frac{\varepsilon}{2}
+\]
+and
+\[
+ L(P, f) > I_*(f) - \frac{\varepsilon}{2}.
+\]
+Combining these, we know that
+\[
+ 0 \leq I^*(f) - I_*(f) \leq U(P, f) - L(P, f) < I^*(f) - I_*(f) + \varepsilon.
+\]
+We now define what it means to be Riemann integrable.
+\begin{defi}[Riemann integrability]
+ A bounded function $f: [a, b] \to \R$ is \emph{Riemann integrable} on $[a, b]$ if $I^*(f) = I_*(f)$. We write
+ \[
+ \int_a^b f(x)\;\d x = I^*(f) = I_*(f).
+ \]
+\end{defi}
+
+Then the Riemann criterion says
+\begin{thm}[Riemann criterion for integrability]
+ A bounded function $f: [a, b] \to \R$ is Riemann integrable if and only if for every $\varepsilon > 0$, there is a partition $P$ such that
+ \[
+ U(P, f) - L(P, f) < \varepsilon.
+ \]
+\end{thm}
+That's the end of our recap. Now we have a new theorem.
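The quantities just recapped are easy to experiment with. Below is a small sketch (an illustration; it assumes $f$ is monotone, so the supremum and infimum on each subinterval are attained at the endpoints) computing $U(P, f)$ and $L(P, f)$ for $f(x) = x^2$ on $[0, 1]$ under the uniform partition with $n$ parts, where $U - L = (f(1) - f(0))/n$ satisfies the Riemann criterion as $n$ grows:

```python
# Upper and lower sums for an increasing function on [a, b] with a uniform
# partition of n subintervals. Using endpoint values for sup/inf is valid
# only because f is assumed monotone.
def upper_lower_sums(f, a, b, n):
    pts = [a + (b - a) * j / n for j in range(n + 1)]
    upper = sum((pts[j + 1] - pts[j]) * max(f(pts[j]), f(pts[j + 1]))
                for j in range(n))
    lower = sum((pts[j + 1] - pts[j]) * min(f(pts[j]), f(pts[j + 1]))
                for j in range(n))
    return upper, lower

# For f(x) = x^2 on [0, 1], both sums bracket the integral 1/3,
# and U - L = 1/n.
U, L = upper_lower_sums(lambda x: x * x, 0.0, 1.0, 1000)
```
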
+
+\begin{thm}
+ If $f: [a, b] \to [A, B]$ is integrable and $g: [A, B] \to \R$ is continuous, then $g\circ f: [a, b] \to \R$ is integrable.
+\end{thm}
+
+\begin{proof}
+ Let $\varepsilon > 0$. Since $g$ is continuous, $g$ is uniformly continuous. So we can find $\delta = \delta(\varepsilon) > 0$ such that for any $x, y\in [A, B]$, if $|x - y| < \delta$ then $|g(x) - g(y)| < \varepsilon$.
+
+ Since $f$ is integrable, for arbitrary $\varepsilon' > 0$, we can find a partition $P = \{a = a_0 < a_1 < \cdots < a_n = b\}$ such that
+ \[
+ U(P, f) - L(P, f) = \sum_{j = 0}^{n - 1} (a_{j + 1} - a_j) \left(\sup_{I_j} f - \inf_{I_j} f\right) < \varepsilon', \tag{$*$}
+ \]
+ where we write $I_j = [a_j, a_{j + 1}]$.
+ Our objective is to make $U(P, g\circ f) - L(P, g\circ f)$ small. By uniform continuity of $g$, if $\sup_{I_j} f - \inf_{I_j} f$ is less than $\delta$, then $\sup_{I_j} g\circ f - \inf_{I_j} g\circ f$ will be less than $\varepsilon$. We like these sorts of intervals. So we let
+ \[
+ J = \left\{j: \sup_{I_j} f - \inf_{I_j}f < \delta\right\}.
+ \]
+ We now show properly that these intervals are indeed ``nice''. For any $j \in J$ and all $x, y\in I_j$, we must have
+ \[
+ |f(x) - f(y)| \leq \sup_{z_1, z_2\in I_j} (f(z_1) - f(z_2)) = \sup_{I_j} f - \inf_{I_j} f < \delta.
+ \]
+ Hence, for each $j\in J$ and all $x, y\in I_j$, we know that
+ \[
+ |g\circ f(x) - g\circ f(y)| < \varepsilon.
+ \]
+ Hence, we must have
+ \[
+ \sup_{x, y \in I_j} \Big(g\circ f(x) - g\circ f(y)\Big) \leq \varepsilon.
+ \]
+ So
+ \[
+ \sup_{I_j} g\circ f - \inf_{I_j} g\circ f \leq \varepsilon.
+ \]
+ Hence we know that
+ \begin{align*}
+ U(P, g\circ f) - L(P, g\circ f) &= \sum_{j = 0}^{n - 1} (a_{j + 1} - a_j) \left(\sup_{I_j}g\circ f - \inf_{I_j}g\circ f\right)\\
+ &= \sum_{j \in J} (a_{j + 1} - a_j) \left(\sup_{I_j}g\circ f - \inf_{I_j}g\circ f\right) \\
+ &\quad\quad\quad+ \sum_{j \not\in J} (a_{j + 1} - a_j)\left(\sup_{I_j}g\circ f - \inf_{I_j}g\circ f\right).\\
+ &\leq \varepsilon(b - a) + 2\sup_{[A, B]}|g| \sum_{j\not\in J}(a_{j + 1} - a_j).
+ \end{align*}
+ Hence, it suffices to make $\sum\limits_{j\not\in J} (a_{j + 1} - a_j)$ small. Since $\sup_{I_j} f - \inf_{I_j} f \geq \delta$ for each $j \not\in J$, from $(*)$, we know that we must have
+ \[
+ \sum_{j \not\in J} (a_{j + 1} - a_j) < \frac{\varepsilon'}{\delta},
+ \]
+ or else $U(P, f) - L(P, f) \geq \varepsilon'$. So we can bound
+ \[
+ U(P, g\circ f) - L(P, g\circ f) \leq \varepsilon(b - a) + 2\sup_{[A, B]} |g| \frac{\varepsilon'}{\delta}.
+ \]
+ So if we are given an $\varepsilon$ at the beginning, we can get a $\delta$ by uniform continuity. Afterwards, we pick $\varepsilon' = \varepsilon \delta$. Then we have shown that given any $\varepsilon$, there exists a partition $P$ such that
+ \[
+ U(P, g\circ f) - L(P, g\circ f) < \left((b - a) + 2\sup_{[A, B]} |g|\right) \varepsilon.
+ \]
+ Then the claim follows from the Riemann criterion.
+\end{proof}
+
+As an immediate consequence, we know that any continuous function is integrable, since we can just let $f$ be the identity function, which we can easily show to be integrable.
+
+\begin{cor}
+ A continuous function $g:[a, b] \to \R$ is integrable.
+\end{cor}
+
+\begin{thm}
+ Let $f_n: [a, b] \to \R$ be bounded and integrable for all $n$. If $(f_n)$ converges uniformly to a function $f: [a, b] \to \R$, then $f$ is bounded and integrable.
+\end{thm}
+
+\begin{proof}
+ Let
+ \[
+ c_n = \sup_{[a, b]} |f_n - f|.
+ \]
+ Then uniform convergence says that $c_n \to 0$. By definition, for each $x$, we have
+ \[
+ f_n(x) - c_n \leq f(x) \leq f_n(x) + c_n.
+ \]
+ Since $f_n$ is bounded, this implies that $f$ is bounded by $\sup |f_n| + c_n$. Also, for any $x, y\in [a, b]$, we know
+ \[
+ f(x) - f(y) \leq (f_n(x) - f_n(y)) + 2 c_n.
+ \]
+ Hence for any partition $P$,
+ \[
+ U(P, f) - L(P, f) \leq U(P, f_n) - L(P, f_n) + 2(b - a) c_n.
+ \]
+ So given $\varepsilon > 0$, first choose $n$ such that $2(b - a) c_n < \frac{\varepsilon}{2}$. Then choose $P$ such that $U(P, f_n) - L(P, f_n) < \frac{\varepsilon}{2}$. Then for this partition, $U(P, f) - L(P, f) < \varepsilon$.
+\end{proof}
+
+Most of the theory of Riemann integration extends to vector-valued or complex-valued functions (of a single real variable).
+\begin{defi}[Riemann integrability of vector-valued function]
+ Let $\mathbf{f}: [a, b] \to \R^n$ be a vector-valued function. Write
+ \[
+ \mathbf{f}(x) = (f_1(x), f_2(x), \cdots, f_n(x))
+ \]
+ for all $x\in [a, b]$. Then $\mathbf{f}$ is \emph{Riemann integrable} iff $f_j: [a, b] \to \R$ is integrable for all $j$. The integral is defined as
+ \[
+ \int_a^b \mathbf{f}(x)\;\d x = \left(\int_a^b f_1(x)\;\d x, \cdots, \int_a^b f_n(x)\;\d x\right) \in \R^n.
+ \]
+\end{defi}
+It is easy to see that most basic properties of integrals of real functions extend to the vector-valued case. A less obvious fact is the following.
+\begin{prop}
+ If $\mathbf{f}: [a, b] \to \R^n$ is integrable, then the function $\|\mathbf{f}\|: [a, b] \to \R$ defined by
+ \[
+ \|\mathbf{f}\|(x) = \|\mathbf{f}(x)\| = \sqrt{\sum_{j = 1}^n f_j^2(x)}
+ \]
+ is integrable, and
+ \[
+ \left\|\int_a^b \mathbf{f}(x)\;\d x\right\| \leq \int_a^b \|\mathbf{f}\|(x)\;\d x.
+ \]
+\end{prop}
+This is a rather random result, but we include it here because it will be helpful at some point in time.
+
+\begin{proof}
+ The integrability of $\|\mathbf{f}\|$ is clear since squaring and taking square roots are continuous, and a finite sum of integrable functions is integrable. To show the inequality, we let
+ \[
+ \mathbf{v} = (v_1, \cdots, v_n) = \int_a^b \mathbf{f}(x)\;\d x.
+ \]
+ Then by definition,
+ \[
+ v_j = \int_a^b f_j(x)\;\d x.
+ \]
+ If $\mathbf{v} = \mathbf{0}$, then we are done. Otherwise, we have
+ \begin{align*}
+ \|\mathbf{v}\|^2 &= \sum_{j = 1}^n v_j^2 \\
+ &= \sum_{j = 1}^n v_j \int_a^b f_j(x)\;\d x \\
+ &= \int_a^b \sum_{j = 1}^n (v_j f_j(x))\;\d x\\
+ &= \int_a^b \mathbf{v}\cdot \mathbf{f}(x)\;\d x\\
+ \intertext{Using the Cauchy-Schwarz inequality, we get}
+ &\leq \int_a^b \|\mathbf{v}\| \|\mathbf{f}\|(x)\;\d x\\
+ &= \|\mathbf{v}\|\int_a^b \|\mathbf{f}\|\;\d x.
+ \end{align*}
+ Divide by $\|\mathbf{v}\|$ and we are done.
+\end{proof}
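The inequality can also be checked numerically. The sketch below (an illustration only; both integrals are approximated by midpoint Riemann sums, which suffices since the components are continuous) takes $\mathbf{f}(x) = (\cos x, \sin x)$ on $[0, \pi/2]$, for which $\left\|\int \mathbf{f}\right\| = \sqrt{2}$ while $\int \|\mathbf{f}\| = \pi/2$:

```python
import math

# Approximate the component-wise integral of f: [a, b] -> R^2 and the
# integral of ||f|| by midpoint Riemann sums with n subintervals.
def riemann_vector(f, a, b, n):
    h = (b - a) / n
    comp = [sum(f(a + (j + 0.5) * h)[i] for j in range(n)) * h
            for i in range(2)]
    norm_int = sum(math.hypot(*f(a + (j + 0.5) * h)) for j in range(n)) * h
    return comp, norm_int

comp, norm_int = riemann_vector(lambda x: (math.cos(x), math.sin(x)),
                                0.0, math.pi / 2, 10000)
# ||∫ f|| should come out close to sqrt(2), and ∫ ||f|| close to pi/2.
norm_of_integral = math.hypot(*comp)
```
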
+\subsection{Non-examinable fun*}
+Since there is time left in the lecture, we'll write down a really remarkable result.
+\begin{thm}[Weierstrass Approximation Theorem*]
+ If $f: [0, 1] \to \R$ is continuous, then there exists a sequence of polynomials $(p_n)$ such that $p_n \to f$ uniformly. In fact, the sequence can be given by
+ \[
+ p_n(x) = \sum_{k = 0}^n f\left(\frac{k}{n}\right) \binom{n}{k} x^k(1 - x)^{n - k}.
+ \]
+ These are known as \emph{Bernstein polynomials}.
+\end{thm}
+Of course, there are many different sequences of polynomials converging uniformly to $f$. Apart from the silly examples like adding $\frac{1}{n}$ to each $p_n$, there can also be vastly different ways of constructing such polynomial sequences.
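The Bernstein polynomials are simple enough to evaluate directly from the defining sum. The following sketch (an illustration; the test function $f(x) = |x - \frac{1}{2}|$ and the evaluation grid are my choices, not from the notes) estimates the sup-norm error on a grid and checks that it shrinks as $n$ grows:

```python
import math

# Evaluate the Bernstein polynomial p_n of f at x, straight from the
# formula p_n(x) = sum_k f(k/n) C(n, k) x^k (1 - x)^(n - k).
def bernstein(f, n, x):
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Test function: continuous but not differentiable at 1/2.
f = lambda x: abs(x - 0.5)
grid = [i / 200 for i in range(201)]

# Sup-norm error over the grid for two values of n.
err = {n: max(abs(bernstein(f, n, x) - f(x)) for x in grid)
       for n in (10, 100)}
```
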
+
+\begin{proof}
+ For convenience, let
+ \[
+ p_{n, k}(x) = \binom{n}{k}x^k (1 - x)^{n - k}.
+ \]
+ First we need a few facts about these functions. Clearly, $p_{n, k}(x) \geq 0$ for all $x\in [0, 1]$. Also, by the binomial theorem,
+ \[
+ \sum_{k = 0}^n \binom{n}{k}x^k y^{n - k} = (x + y)^n.
+ \]
+ So we get
+ \[
+ \sum_{k = 0}^n p_{n, k}(x) = 1.
+ \]
+ Differentiating the binomial theorem with respect to $x$ and putting $y = 1 - x$ gives
+ \[
+ \sum_{k = 0}^n \binom{n}{k}k x^{k - 1}(1 - x)^{n - k} = n.
+ \]
+ We multiply by $x$ to obtain
+ \[
+ \sum_{k = 0}^n \binom{n}{k}k x^k(1 - x)^{n - k} = nx.
+ \]
+ In other words,
+ \[
+ \sum_{k = 0}^n kp_{n, k}(x) = nx.
+ \]
+ Differentiating the binomial theorem twice and multiplying by $x^2$ similarly gives
+ \[
+ \sum_{k = 0}^n k(k - 1)p_{n, k}(x) = n(n - 1)x^2.
+ \]
+ Adding these two results gives
+ \[
+ \sum_{k = 0}^n k^2 p_{n, k}(x) = n^2 x^2 + nx(1 - x).
+ \]
+ We will write our results in a rather weird way:
+ \[
+ \sum_{k = 0}^n (nx - k)^2 p_{n, k}(x) = n^2 x^2 - 2nx \cdot nx + n^2 x^2 + nx(1 - x) = nx(1 - x).\tag{$*$}
+ \]
+ This is what we really need.
+
+ Now given $\varepsilon$, since $f$ is continuous, $f$ is uniformly continuous. So pick $\delta$ such that $|f(x) - f(y)| < \varepsilon$ whenever $|x - y| < \delta$.
+
+ Since $\sum p_{n, k}(x) = 1$, $f(x) = \sum p_{n, k}(x) f(x)$. Now for each \emph{fixed} $x$, we can write
+ \begin{align*}
+ |p_n(x) - f(x)| &= \left|\sum_{k = 0}^n \left(f\left(\frac{k}{n}\right) - f(x)\right) p_{n, k}(x)\right|\\
+ &\leq \sum_{k = 0}^n \left|f\left(\frac{k}{n}\right) - f(x)\right| p_{n, k}(x)\\
+ &= \sum_{k: |x - k/n| < \delta}\left(\left|f\left(\frac{k}{n}\right) - f(x)\right| p_{n, k}(x)\right) \\
+ &\quad\quad\quad+ \sum_{k: |x - k/n| \geq \delta}\left(\left|f\left(\frac{k}{n}\right) - f(x)\right| p_{n, k}(x)\right)\\
+ &\leq \varepsilon \sum_{k = 0}^n p_{n, k}(x) + 2\sup_{[0, 1]} |f| \sum_{k: |x - k/n| \geq \delta} p_{n, k}(x)\\
+ &\leq \varepsilon + 2\sup_{[0, 1]} |f|\cdot \frac{1}{\delta^2}\sum_{k: |x - k/n| \geq \delta} \left(x - \frac{k}{n}\right)^2 p_{n, k}(x)\\
+ &\leq \varepsilon + 2\sup_{[0, 1]} |f|\cdot \frac{1}{\delta^2}\sum_{k = 0}^n \left(x - \frac{k}{n}\right)^2 p_{n, k}(x)\\
+ &= \varepsilon + \frac{2\sup|f|}{\delta^2 n^2} nx(1 - x)\\
+ &\leq \varepsilon + \frac{2\sup|f|}{\delta^2 n}
+ \end{align*}
+ Hence given any $\varepsilon$ and $\delta$, we can pick $n$ sufficiently large that $|p_n(x) - f(x)| < 2\varepsilon$. This $n$ is picked independently of $x$. So done.
+\end{proof}
+
+Unrelatedly, we might be interested in the question --- when is a function Riemann integrable? A possible answer is that it satisfies the Riemann integrability criterion, but this is not really helpful. We know that a function is integrable if it is continuous. But it need not be --- it could be discontinuous at finitely many points and still be integrable, and even a function with countably many discontinuities is still integrable. How many points of discontinuity can we accommodate if we want to keep integrability?
+
+To answer this question, we have Lebesgue's theorem. To state this theorem, we need the following definition:
+\begin{defi}[Lebesgue measure zero*]
+ A subset $A\subseteq \R$ is said to have \emph{(Lebesgue) measure zero} if for any $\varepsilon > 0$, there exists a countable (possibly finite) collection of open intervals $I_j$ such that
+ \[
+ A \subseteq \bigcup_{j = 1}^\infty I_j,
+ \]
+ and
+ \[
+ \sum_{j = 1}^\infty | I_j| < \varepsilon.
+ \]
+ Here, $|I_j|$ is defined as the length of the interval, not the cardinality (obviously).
+\end{defi}
+This is a way to characterize ``small'' sets, and in general a rather good way. This will be studied in depth in the IID Probability and Measure course.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item The empty set has measure zero.
+ \item Any finite set has measure zero.
+ \item Any countable set has measure zero. If $A = \{a_0, a_1, \cdots\}$, take
+ \[
+ I_j = \left(a_j - \frac{\varepsilon}{2^{j + 1}}, a_j + \frac{\varepsilon}{2^{j + 1}}\right).
+ \]
+ Then $A$ is contained in the union and the sum of lengths is $\varepsilon$.
+ \item A countable union of sets of measure zero has measure zero, using a similar proof strategy as above.
+ \item Any (non-trivial) interval does not have measure zero.
+ \item The Cantor set, despite being uncountable, has measure zero. The Cantor set is constructed as follows: start with $C_0 = [0, 1]$. Remove the middle third $\left(\frac{1}{3}, \frac{2}{3}\right)$ to obtain $C_1 = \left[0, \frac{1}{3}\right] \cup \left[\frac{2}{3}, 1\right]$. Remove the middle third of each segment to obtain
+ \[
+ C_2 = \left[0, \frac{1}{9}\right] \cup \left[\frac{2}{9}, \frac{3}{9}\right] \cup \left[\frac{6}{9}, \frac{7}{9}\right]\cup \left[\frac{8}{9}, 1\right].
+ \]
+ Continue iteratively by removing the middle thirds of each part. Define
+ \[
+ C = \bigcap_{n = 0}^\infty C_n,
+ \]
+ which is the Cantor set. Since each $C_n$ consists of $2^n$ disjoint closed intervals of length $1/3^n$, the total length of the segments of $C_n$ is $\left(\frac{2}{3}\right)^n \to 0$. So we can cover $C$ by finitely many intervals of arbitrarily small total length. Hence the Cantor set has measure zero.
+
+ It is slightly trickier to show that $C$ is uncountable, and to save time, we are not doing it now.
+ \end{itemize}
+\end{eg}
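The Cantor construction is concrete enough to carry out exactly. A minimal sketch (illustration only), using exact rational arithmetic, builds the intervals of $C_n$ and confirms that their total length is $(2/3)^n$, which tends to $0$:

```python
from fractions import Fraction

# One step of the Cantor construction: replace each closed interval [a, b]
# by its outer thirds [a, a + (b-a)/3] and [b - (b-a)/3, b].
def cantor_step(intervals):
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

intervals = [(Fraction(0), Fraction(1))]
lengths = []
for n in range(1, 9):
    intervals = cantor_step(intervals)
    # Total length of C_n, computed exactly; should equal (2/3)^n.
    lengths.append(sum(b - a for a, b in intervals))
```
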
+Using this definition, we can have the following theorem:
+\begin{thm}[Lebesgue's theorem on the Riemann integral*]
+ Let $f: [a, b] \to \R$ be a bounded function, and let $\mathcal{D}_f$ be the set of points of discontinuities of $f$. Then $f$ is Riemann integrable \emph{if and only if} $\mathcal{D}_f$ has measure zero.
+\end{thm}
+Using this result, many of our earlier theorems follow easily. Apart from the easy ones like the sum and product of integrable functions being integrable, we can also easily show that the composition of a continuous function with an integrable function is integrable, since composing with a continuous function will not introduce more discontinuities.
+
+Similarly, we can show that the uniform limit of integrable functions is integrable, since the set of discontinuities of the uniform limit is contained in the (countable) union of the sets of discontinuities of the functions in the sequence.
+
+The proofs are left as exercises for the reader, on the example sheet.
+
+\section{\tph{$\R^n$}{Rn}{&\#x211D;n} as a normed space}
+\subsection{Normed spaces}
+Our objective is to extend most of the notions we had about functions of a single variable $f: \R \to \R$ to functions of multiple variables $f: \R^n \to \R$. More generally, we want to study functions $f: \Omega \to \R^m$, where $\Omega \subseteq \R^n$. We wish to define analytic notions such as continuity, differentiability and even integrability (even though we are not doing integrability in this course).
+
+In order to do this, we need more structure on $\R^n$. We already know that $\R^n$ is a vector space, which means that we can add, subtract and multiply by scalars. But to do analysis, we need something to replace our notion of $|x - y|$ in $\R$. This is known as a \emph{norm}.
+
+It is useful to define and study this structure in an abstract setting, as opposed to thinking about $\R^n$ specifically. This leads to the general notion of \emph{normed spaces}.
+
+\begin{defi}[Normed space]
+ Let $V$ be a real vector space. A \emph{norm} on $V$ is a function $\|\ph \|: V\to \R$ satisfying
+ \begin{enumerate}
+ \item $\|\mathbf{x}\| \geq 0$ with equality iff $\mathbf{x} = \mathbf{0}$ \hfill (non-negativity)
+ \item $\|\lambda \mathbf{x}\| = |\lambda|\|\mathbf{x}\|$ \hfill (homogeneity)
+ \item $\|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x} \| + \|\mathbf{y}\|$ \hfill (triangle inequality)
+ \end{enumerate}
+ A \emph{normed space} is a pair $(V, \|\ph \|)$. If the norm is understood, we just say $V$ is a normed space. We do have to be slightly careful since there can be multiple norms on a vector space.
+\end{defi}
+Intuitively, $\|\mathbf{x}\|$ is the \emph{length} or \emph{magnitude} of $\mathbf{x}$.
+\begin{eg}
+ We will first look at finite-dimensional spaces. This is typically $\R^n$ with different norms.
+ \begin{itemize}
+ \item Consider $\R^n$, with the Euclidean norm
+ \[
+ \|\mathbf{x}\|_2 = \left(\sum x_i^2\right)^{1/2}.
+ \]
+ This is also known as the \emph{usual norm}. It is easy to check that this is a norm, apart from the triangle inequality. So we'll just do this. We have
+ \begin{align*}
+ \|\mathbf{x} + \mathbf{y}\|^2 &= \sum_{i = 1}^n (x_i + y_i)^2 \\
+ &= \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2 + 2\sum x_i y_i \\
+ &\leq \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2 + 2\|\mathbf{x}\|\|\mathbf{y}\| \\
+ &= (\|\mathbf{x}\| + \|\mathbf{y}\|)^2,
+ \end{align*}
+ where we used the Cauchy-Schwarz inequality. Taking square roots, we are done.
+ \item We can have the following norm on $\R^n$:
+ \[
+ \|\mathbf{x}\|_1 = \sum |x_i|.
+ \]
+ It is easy to check that this is a norm.
+ \item We can also have the following norm on $\R^n$:
+ \[
+ \|\mathbf{x}\|_\infty = \max\{|x_i|: 1 \leq i \leq n\}.
+ \]
+ It is also easy to check that this is a norm.
+ \item In general, we can define the $p$ norm (for $p \geq 1$) by
+ \[
+ \|\mathbf{x}\|_p = \left(\sum |x_i|^p\right)^{1/p}.
+ \]
+ It is, however, not trivial to check the triangle inequality, and we will not do this.
+
+ We can show that as $p\to \infty$, $\|\mathbf{x}\|_p \to \|\mathbf{x}\|_\infty$, which justifies our notation above.
+ \end{itemize}
+ We also have some infinite dimensional examples. Often, we can just extend our notions on $\R^n$ to infinite sequences with some care. We write $\R^\N$ for the set of all infinite real sequences $(x_k)$. This is a vector space with termwise addition and scalar multiplication.
+ \begin{itemize}
+ \item Define
+ \[
+ \ell^1 = \left\{(x_k) \in \R^\N: \sum |x_k| < \infty\right\}.
+ \]
+ This is a linear subspace of $\R^\N$. We define the norm by
+ \[
+ \|(x_k)\|_1 = \|(x_k)\|_{\ell^1} = \sum |x_k|.
+ \]
+ \item Similarly, we can define $\ell^2$ by
+ \[
+ \ell^2 = \left\{(x_k) \in \R^\N: \sum x_k^2 < \infty\right\}.
+ \]
+ The norm is defined by
+ \[
+ \|(x_k)\|_2 = \|(x_k)\|_{\ell^2} = \left(\sum x_k^2\right)^{1/2}.
+ \]
+ We can also write this as
+ \[
+ \|(x_k)\|_{\ell^2} = \lim_{n \to \infty} \|(x_1, \cdots, x_n)\|_2.
+ \]
+ So the triangle inequality for the Euclidean norm implies the triangle inequality for $\ell^2$.
+ \item In general, for $p \geq 1$, we can define
+ \[
+ \ell^p = \left\{(x_k) \in \R^\N: \sum |x_k|^p < \infty\right\}
+ \]
+ with the norm
+ \[
+ \|(x_k)\|_p = \|(x_k)\|_{\ell^p} = \left(\sum |x_k|^p\right)^{1/p}.
+ \]
+ \item Finally, we have $\ell^\infty$, where
+ \[
+ \ell^\infty = \{(x_k)\in \R^\N: \sup |x_k| < \infty\},
+ \]
+ with the norm
+ \[
+ \|(x_k)\|_\infty = \|(x_k)\|_{\ell^\infty} = \sup |x_k|.
+ \]
+ \end{itemize}
+ Finally, we can have examples where we look at function spaces, usually $C([a, b])$, the set of continuous real functions on $[a, b]$.
+ \begin{itemize}
+ \item We can define the $L^1$ norm by
+ \[
+ \|f\|_{L^1} = \|f\|_1 = \int_a^b |f|\;\d x.
+ \]
+ \item We can define $L^2$ similarly by
+ \[
+ \|f\|_{L^2} = \|f\|_2 = \left(\int_a^b f^2 \;\d x\right)^{\frac{1}{2}}.
+ \]
+ \item In general, we can define $L^p$ for $p \geq 1$ by
+ \[
+ \|f\|_{L^p} = \|f\|_p = \left(\int_a^b |f|^p \;\d x\right)^{\frac{1}{p}}.
+ \]
+ \item Finally, we have $L^\infty$ by
+ \[
+ \|f\|_{L^\infty} = \|f\|_\infty = \sup |f|.
+ \]
+ This is also called the \emph{uniform norm}, or the \emph{supremum norm}.
+ \end{itemize}
+\end{eg}
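The finite-dimensional $p$-norms above, and the claim that $\|\mathbf{x}\|_p \to \|\mathbf{x}\|_\infty$ as $p \to \infty$, can be illustrated numerically. A minimal sketch (the test vector is an arbitrary choice of mine):

```python
# The p-norm on R^n, straight from the definition, and the sup norm
# it converges to as p grows.
def p_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1 / p)

def sup_norm(x):
    return max(abs(t) for t in x)

x = [3.0, -4.0, 1.0]
# ||x||_1 = 8, ||x||_2 = sqrt(26), and ||x||_p decreases towards
# ||x||_inf = 4 as p increases.
values = {p: p_norm(x, p) for p in (1, 2, 10, 100)}
```
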
+Later, when we define convergence for general normed space, we will show that convergence under the uniform norm is equivalent to uniform convergence.
+
+To show that $L^2$ is actually a norm, we can use the Cauchy-Schwarz inequality for integrals.
+\begin{lemma}[Cauchy-Schwarz inequality (for integrals)]
+ If $f, g\in C([a, b])$, $f, g \geq 0$, then
+ \[
+ \int_a^b fg\;\d x \leq \left(\int_a^b f^2\;\d x\right)^{1/2}\left(\int_a^b g^2\;\d x\right)^{1/2}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ If $\int_a^b f^2\;\d x = 0$, then $f = 0$ (since $f$ is continuous). So the inequality holds trivially.
+
+ Otherwise, let $A^2 = \int_a^b f^2\;\d x \not= 0$, $B^2 = \int_a^b g^2\;\d x$. Consider the function
+ \[
+ \phi(t) = \int_a^b (g - tf)^2 \;\d x \geq 0
+ \]
+ for every $t$. We can expand this as
+ \[
+ \phi(t) = t^2 A^2 - 2t \int_a^b gf\;\d x + B^2.
+ \]
+ The condition for this quadratic in $t$ to be non-negative is exactly that its discriminant is non-positive, i.e.
+ \[
+ \left(\int_a^b gf\;\d x\right)^2 - A^2 B^2 \leq 0.
+ \]
+ So done.
+\end{proof}
+Note that the way we defined $L^p$ is rather unsatisfactory. To define the $\ell^p$ spaces, we first have the norm defined as a sum, and then $\ell^p$ is the set of all sequences for which the sum converges. However, to define the $L^p$ space, we restrict ourselves to $C([0, 1])$, and then define the norm. Can we just define, say, $L^1$ to be the set of all functions such that $\int_0^1 |f|\;\d x$ exists? We could, but then $\|\ph\|_1$ would no longer be a norm, since if we take the function $f(x) = \begin{cases} 1 & x = 0.5\\ 0 & x \not= 0.5\end{cases}$, then $f$ is integrable with integral $0$, but is not identically zero. So we cannot expand our vector space to be too large. To define $L^p$ properly, we need some more sophisticated notions such as Lebesgue integrability and other fancy stuff, which will be done in the IID Probability and Measure course.
+
+We have just defined many norms on the same space $\R^n$. These norms are clearly not the same, in the sense that for many $\mathbf{x}$, $\|\mathbf{x}\|_1$ and $\|\mathbf{x}\|_2$ have different values. However, it turns out the norms are all ``equivalent'' in some sense. This intuitively means the norms are ``not too different'' from each other, and give rise to the same notions of, say, convergence and completeness.
+
+A precise definition of equivalence is as follows:
+\begin{defi}[Lipschitz equivalence of norms]
+ Let $V$ be a (real) vector space. Two norms $\|\ph \|, \|\ph \|'$ on $V$ are \emph{Lipschitz equivalent} if there are real constants $0 < a < b$ such that
+ \[
+ a \|\mathbf{x}\| \leq \|\mathbf{x}\|' \leq b\|\mathbf{x}\|
+ \]
+ for all $\mathbf{x} \in V$.
+
+ It is easy to show this is indeed an equivalence relation on the set of all norms on $V$.
+\end{defi}
+We will show that if two norms are equivalent, the ``topological'' properties of the space do not depend on which norm we choose. For example, the norms will agree on which sequences are convergent and which functions are continuous.
+
+It is possible to reformulate the notion of equivalence in a more geometric way. To do so, we need some notation:
+\begin{defi}[Open ball]
+ Let $(V, \|\ph \|)$ be a normed space, $\mathbf{a} \in V$, $r > 0$. The \emph{open ball} centered at $\mathbf{a}$ with radius $r$ is
+ \[
+ B_r(\mathbf{a}) = \{\mathbf{x}\in V: \|\mathbf{x} - \mathbf{a}\| < r\}.
+ \]
+\end{defi}
+Then the requirement that $a\|\mathbf{x}\| \leq \|\mathbf{x}\|' \leq b\|\mathbf{x}\|$ for all $\mathbf{x}\in V$ is equivalent to saying
+\[
+ B_{1/b} (\mathbf{0}) \subseteq B_1'(\mathbf{0}) \subseteq B_{1/a}(\mathbf{0}),
+\]
+where $B'$ is the ball with respect to $\|\ph \|'$, while $B$ is the ball with respect to $\|\ph \|$. The actual proof of this equivalence is on the second example sheet.
+
+\begin{eg}
+ Consider $\R^2$. Then the norms $\|\ph \|_\infty$ and $\|\ph \|_2$ are equivalent. This is easy to see using the ball picture:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0);
+ \draw (0, -2) -- (0, 2);
+ \draw [mred] circle [radius=1];
+ \draw [mblue] (-1, -1) rectangle (1, 1);
+ \draw [mblue] (-0.707, -0.707) rectangle (0.707, 0.707);
+ \end{tikzpicture}
+ \end{center}
+ where the blue ones are the balls with respect to $\|\ph \|_\infty$ and the red one is the ball with respect to $\|\ph \|_2$.
+
+ In general, we can consider $\R^n$, again with $\|\ph \|_2$ and $\|\ph \|_\infty$. We have
+ \[
+ \|\mathbf{x}\|_\infty \leq \|\mathbf{x}\|_2 \leq \sqrt{n}\|\mathbf{x}\|_\infty.
+ \]
+\end{eg}
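The displayed inequalities are easy to verify on concrete vectors. The sketch below (an illustration; the test vectors are hand-picked so that each bound is attained) checks $\|\mathbf{x}\|_\infty \leq \|\mathbf{x}\|_2 \leq \sqrt{n}\|\mathbf{x}\|_\infty$ directly:

```python
import math

# The Euclidean and sup norms on R^n, from the definitions.
def norm_2(x):
    return math.sqrt(sum(t * t for t in x))

def norm_inf(x):
    return max(abs(t) for t in x)

# A generic vector, a constant vector (right-hand equality case), and a
# vector supported on one coordinate (left-hand equality case).
vectors = [[1.0, -2.0, 3.0], [0.5] * 10, [-7.0, 0.0, 0.0, 0.0]]
checks = [(norm_inf(v), norm_2(v), math.sqrt(len(v)) * norm_inf(v))
          for v in vectors]
```
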
+These are easy to check manually. However, later we will show that in fact, any two norms on a finite-dimensional vector space are Lipschitz equivalent. Hence it is more interesting to look at infinite dimensional cases.
+
+\begin{eg}
+ Let $V = C([0, 1])$ with the norms
+ \[
+ \|f\|_1 = \int_0^1 |f|\;\d x,\quad \|f\|_\infty = \sup_{[0, 1]}|f|.
+ \]
+ We clearly have the bound
+ \[
+ \|f\|_1 \leq \|f\|_\infty.
+ \]
+ However, there is no constant $b$ such that
+ \[
+ \|f\|_\infty \leq b\|f\|_1
+ \]
+ for all $f$. This is easy to show by constructing a sequence of functions $f_n$ by
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 2) node [above] {$y$};
+ \draw [mred, semithick] (0, 0) -- (0.5, 1.5) -- (1, 0) -- (2.5, 0);
+ \draw [dashed] (0.5, 1.5) -- +(-0.5, 0) node [left] {$1$};
+ \draw [dashed] (0.5, 1.5) -- +(0, -1.5) node [below] {$\frac{1}{n}$};
+ \end{tikzpicture}
+ \end{center}
+ where the width is $\frac{2}{n}$ and the height is $1$. Then $\|f_n\|_\infty = 1$ but $\|f_n\|_1 = \frac{1}{n} \to 0$.
+\end{eg}
+\begin{eg}
+ Similarly, consider the space $\ell^2 = \left\{(x_n): \sum x_n^2 < \infty\right\}$ under the regular $\ell^2$ norm and the $\ell^\infty$ norm. We have
+ \[
+ \|(x_k)\|_\infty \leq \|(x_k)\|_{\ell^2},
+ \]
+ but there is no $b$ such that
+ \[
+ \|(x_k)\|_{\ell^2} \leq b\|(x_k)\|_\infty.
+ \]
+ For example, we can consider the sequence $\mathbf{x}^{(n)} = (1, 1, \cdots, 1, 0, 0, \cdots)$, where the first $n$ terms are $1$. Then $\|\mathbf{x}^{(n)}\|_\infty = 1$, but $\|\mathbf{x}^{(n)}\|_{\ell^2} = \sqrt{n}$ is unbounded.
+\end{eg}
+So far in all our examples, out of the two inequalities, one holds and one does not. Is it possible for both inequalities to not hold? The answer is yes. This is an exercise on the second example sheet as well.
+
+This is all we are going to say about Lipschitz equivalence. We are now going to define convergence, and study the consequences of Lipschitz equivalence to convergence.
+\begin{defi}[Bounded subset]
+ Let $(V, \|\ph \|)$ be a normed space. A subset $E \subseteq V$ is \emph{bounded} if there is some $R > 0$ such that
+ \[
+ E \subseteq B_R(\mathbf{0}).
+ \]
+\end{defi}
+
+\begin{defi}[Convergence of sequence]
+ Let $(V, \|\ph\|)$ be a normed space. A sequence $(\mathbf{x}_k)$ in $V$ \emph{converges to} $\mathbf{x} \in V$ if $\|\mathbf{x}_k - \mathbf{x}\| \to 0$ (as a sequence in $\R$), i.e.
+ \[
+ (\forall \varepsilon > 0)(\exists N)(\forall k \geq N)\; \|\mathbf{x}_k - \mathbf{x}\| < \varepsilon.
+ \]
+\end{defi}
+These two definitions, obviously, depend on the chosen norm, not just the vector space $V$. However, if two norms are equivalent, then they agree on what is bounded and what converges.
+\begin{prop}
+ If $\|\ph \|$ and $\|\ph \|'$ are Lipschitz equivalent norms on a vector space $V$, then
+ \begin{enumerate}
+ \item A subset $E\subseteq V$ is bounded with respect to $\|\ph \|$ if and only if it is bounded with respect to $\|\ph \|'$.
+ \item A sequence $(\mathbf{x}_k)$ converges to $\mathbf{x}$ with respect to $\|\ph \|$ if and only if it converges to $\mathbf{x}$ with respect to $\|\ph \|'$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item This is direct from the definition of equivalence.
+ \item Say we have $a, b$ such that $a\|\mathbf{y}\| \leq \|\mathbf{y}\|' \leq b\|\mathbf{y}\|$ for all $\mathbf{y}$. So
+ \[
+ a\|\mathbf{x}_k - \mathbf{x}\| \leq \|\mathbf{x}_k - \mathbf{x}\|' \leq b\|\mathbf{x}_k - \mathbf{x}\|.
+ \]
+ So $\|\mathbf{x}_k - \mathbf{x}\| \to 0$ if and only if $\|\mathbf{x}_k - \mathbf{x}\|' \to 0$. So done.\qedhere
+ \end{enumerate}
+\end{proof}
+What if the norms are not equivalent? It is not surprising that there are some sequences that converge with respect to one norm but not another. More surprisingly, it is possible that a sequence converges to different limits under different norms. This is, again, on the second example sheet.
+
+We have some easy facts about convergence:
+\begin{prop}
+ Let $(V, \|\ph \|)$ be a normed space. Then
+ \begin{enumerate}
+ \item If $\mathbf{x}_k \to \mathbf{x}$ and $\mathbf{x}_k \to \mathbf{y}$, then $\mathbf{x} = \mathbf{y}$.
+ \item If $\mathbf{x}_k \to \mathbf{x}$, then $a\mathbf{x}_k \to a\mathbf{x}$.
+ \item If $\mathbf{x}_k \to \mathbf{x}$, $\mathbf{y}_k \to \mathbf{y}$, then $\mathbf{x}_k + \mathbf{y}_k \to \mathbf{x} + \mathbf{y}$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $\|\mathbf{x} - \mathbf{y}\| \leq \|\mathbf{x} - \mathbf{x}_k\| + \|\mathbf{x}_k - \mathbf{y}\| \to 0$. So $\|\mathbf{x} - \mathbf{y}\| = 0$. So $\mathbf{x} = \mathbf{y}$.
+ \item $\|a \mathbf{x}_k - a \mathbf{x}\| = |a|\|\mathbf{x}_k - \mathbf{x}\| \to 0$.
+ \item $\|(\mathbf{x}_k + \mathbf{y}_k) - (\mathbf{x} + \mathbf{y})\| \leq \|\mathbf{x}_k - \mathbf{x}\| + \|\mathbf{y}_k - \mathbf{y}\| \to 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{prop}
+ Convergence in $\R^n$ (with respect to, say, the Euclidean norm) is equivalent to coordinate-wise convergence, i.e.\ $\mathbf{x}^{(k)} \to \mathbf{x}$ if and only if $x^{(k)}_j \to x_j$ for all $j$.
+\end{prop}
+
+\begin{proof}
+ Fix $\varepsilon > 0$. Suppose $\mathbf{x}^{(k)} \to \mathbf{x}$. Then there is some $N$ such that for any $k \geq N$, we have
+ \[
+ \|\mathbf{x}^{(k)} - \mathbf{x}\|_2^2 = \sum_{j = 1}^n (x_j^{(k)} - x_j)^2 < \varepsilon^2.
+ \]
+ Hence $|x_j^{(k)} - x_j| < \varepsilon$ for all $j$ whenever $k \geq N$.
+
+ On the other hand, for any fixed $j$, there is some $N_j$ such that $k \geq N_j$ implies $|x_j^{(k)} - x_j| < \frac{\varepsilon}{\sqrt{n}}$. So if $k \geq \max\{N_j: j = 1, \cdots, n\}$, then
+ \[
+ \|\mathbf{x}^{(k)} - \mathbf{x}\|_2 = \left(\sum_{j = 1}^n (x_j^{(k)} - x_j)^2\right)^{\frac{1}{2}} < \varepsilon.
+ \]
+ So done.
+\end{proof}
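The two directions of this equivalence can be spot-checked numerically. The following Python sketch (purely illustrative; the sample sequence $\mathbf{x}^{(k)} = (1/k, 2/k)$ is an assumption, not from the notes) verifies that each coordinate error is dominated by the Euclidean error, and that coordinate errors below $\varepsilon/\sqrt{n}$ force a Euclidean error below $\varepsilon$.

```python
import math

# Hypothetical sample sequence x^(k) = (1/k, 2/k) in R^2, converging to (0, 0).
def x_k(k):
    return (1.0 / k, 2.0 / k)

limit = (0.0, 0.0)

def euclid(u, v):
    # Euclidean distance between two points given as tuples
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Euclidean convergence forces coordinate-wise convergence: each coordinate
# error |x_j^(k) - x_j| is bounded by the Euclidean norm of the difference.
for k in [10, 100, 1000]:
    d = euclid(x_k(k), limit)
    assert all(abs(a - b) <= d for a, b in zip(x_k(k), limit))

# Conversely, if every coordinate error is below eps/sqrt(n) (n = 2 here),
# the Euclidean error is below eps.
eps = 1e-3
k = 3000  # large enough that both coordinate errors are below eps/sqrt(2)
assert all(abs(a - b) < eps / math.sqrt(2) for a, b in zip(x_k(k), limit))
assert euclid(x_k(k), limit) < eps
```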
+
+Another space we would like to understand is the space of continuous functions. It should be clear that uniform convergence is the same as convergence under the uniform norm, hence the name. However, there is no norm such that convergence under the norm is equivalent to pointwise convergence, i.e.\ pointwise convergence is not normable. In fact, it is not even metrizable. However, we will not prove this.
+
+We'll now generalize the Bolzano-Weierstrass theorem to $\R^n$.
+\begin{thm}[Bolzano-Weierstrass theorem in $\R^n$]
+ Any bounded sequence in $\R^n$ (with, say, the Euclidean norm) has a convergent subsequence.
+\end{thm}
+
+\begin{proof}
+ We induct on $n$. The $n = 1$ case is the usual Bolzano-Weierstrass on the real line, which was proved in IA Analysis I.
+
+ Assume the theorem holds in $\R^{n - 1}$, and let $\mathbf{x}^{(k)} = (x^{(k)}_1, \cdots, x_n^{(k)})$ be a bounded sequence in $\R^n$. Then let $\mathbf{y}^{(k)} = (x^{(k)}_1, \cdots, x_{n - 1}^{(k)})$. Since for any $k$, we know that
+ \[
+ \|\mathbf{y}^{(k)}\|^2 + |x_n^{(k)}|^2 = \|\mathbf{x}^{(k)}\|^2,
+ \]
+ it follows that both $(\mathbf{y}^{(k)})$ and $(x_n^{(k)})$ are bounded. So by the induction hypothesis, there is a subsequence $(k_j)$ of $(k)$ and some $\mathbf{y} \in \R^{n - 1}$ such that $\mathbf{y}^{(k_j)} \to \mathbf{y}$. Also, by Bolzano-Weierstrass in $\R$, there is a further subsequence $(x_n^{(k_{j_\ell})})$ of $(x_n^{(k_j)})$ that converges to, say, $y_n \in \R$. Then we know that
+ \[
+ \mathbf{x}^{(k_{j_\ell})} \to (\mathbf{y}, y_n).
+ \]
+ So done.
+\end{proof}
+
+Note that this is generally \emph{not} true for normed spaces. Finite-dimensionality is important for both of these results.
+
+\begin{eg}
+ Consider $(\ell^\infty, \|\ph\|_\infty)$. We let $e^{(k)}_j = \delta_{jk}$ be the sequence with $1$ in the $k$th component and $0$ in other components. Then $e_j^{(k)} \to 0$ for all fixed $j$, and hence $e^{(k)}$ converges componentwise to the zero element $0 = (0, 0, \cdots)$. However, $e^{(k)}$ does not converge to the zero element since $\|e^{(k)} - 0\|_\infty = 1$ for all $k$. Also, this is bounded but does not have a convergent subsequence for the same reasons.
+\end{eg}
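This example can be checked concretely by representing each $e^{(k)}$ sparsely; the sketch below (an illustration, not part of the notes) confirms componentwise convergence to $0$ while the sup-norm distance stays at $1$.

```python
# e^(k): the sequence with 1 in the k-th component and 0 elsewhere,
# represented sparsely as a dict of its non-zero entries.
def e(k):
    return {k: 1}

def sup_dist(x, y):
    # sup-norm distance between two sparsely represented sequences
    keys = set(x) | set(y)
    return max((abs(x.get(j, 0) - y.get(j, 0)) for j in keys), default=0)

zero = {}

# Componentwise: for each fixed j, the j-th entry of e^(k) is 0 once k > j.
for j in range(5):
    assert all(e(k).get(j, 0) == 0 for k in range(j + 1, 20))

# But in the sup norm, e^(k) stays at distance 1 from the zero element for
# every k, and any two distinct e^(k), e^(l) are at distance 1 from each
# other, so no subsequence can be Cauchy, let alone convergent.
assert all(sup_dist(e(k), zero) == 1 for k in range(1, 20))
assert sup_dist(e(3), e(7)) == 1
```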
+
+We know that all finite dimensional vector spaces are isomorphic to $\R^n$ as vector spaces for some $n$, and we will later show that all norms on finite dimensional spaces are equivalent. This means every finite-dimensional normed space satisfies the Bolzano-Weierstrass property. Is the converse true? If a normed vector space satisfies the Bolzano-Weierstrass property, must it be finite dimensional? The answer is yes, and the proof is in the example sheet.
+
+\begin{eg}
+ Let $C([0, 1])$ have the $\|\ph \|_{L^2}$ norm. Consider $f_n(x) = \sin 2n\pi x$. We know that
+ \[
+ \|f_n\|^2_{L^2} = \int_0^1 |f_n|^2 = \frac{1}{2}.
+ \]
+ So it is bounded. However, it doesn't have a convergent subsequence. If it did, say $f_{n_j} \to f$ in $L^2$, then we must have
+ \[
+ \|f_{n_j} - f_{n_{j + 1}}\|^2 \to 0.
+ \]
+ However, by direct calculation, we know that
+ \[
+ \|f_{n_j} - f_{n_{j + 1}}\|^2 = \int_0^1 (\sin 2n_j \pi x - \sin 2n_{j + 1}\pi x)^2 = 1.
+ \]
+\end{eg}
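The value $1$ in the final display can be confirmed by a midpoint-rule approximation of the integral; the Python sketch below is only an illustration, with the frequencies $n = 1, 2$ and $3, 5$ chosen arbitrarily.

```python
import math

def l2_sq_diff(n, m, N=100_000):
    # midpoint-rule approximation of the integral over [0, 1] of
    # (sin(2*pi*n*x) - sin(2*pi*m*x))^2
    h = 1.0 / N
    return sum((math.sin(2 * math.pi * n * (i + 0.5) * h)
                - math.sin(2 * math.pi * m * (i + 0.5) * h)) ** 2
               for i in range(N)) * h

# ||f_n||^2 = 1/2 for every n, while ||f_n - f_m||^2 = 1 for n != m, so the
# sequence (f_n) is bounded in the L^2 norm but has no Cauchy (hence no
# convergent) subsequence.
assert abs(l2_sq_diff(1, 2) - 1.0) < 1e-6
assert abs(l2_sq_diff(3, 5) - 1.0) < 1e-6
assert l2_sq_diff(2, 2) == 0.0  # f_n - f_n = 0
```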
+Note that the same argument shows also that the sequence $(\sin 2n\pi x)$ has no subsequence that converges \emph{pointwise} on $[0, 1]$. To see this, we need the result that if $(f_j)$ is a sequence in $C([0, 1])$ that is uniformly bounded with $f_j \to f$ pointwise, then $f_j$ converges to $f$ under the $L^2$ norm. However, we will not be able to prove this (in a nice way) without Lebesgue integration from IID Probability and Measure.
+
+\subsection{Cauchy sequences and completeness}
+\begin{defi}[Cauchy sequence]
+ Let $(V, \|\ph \|)$ be a normed space. A sequence $(\mathbf{x}^{(k)})$ in $V$ is a \emph{Cauchy sequence} if
+ \[
+ (\forall \varepsilon > 0)(\exists N)(\forall n, m \geq N)\; \|\mathbf{x}^{(n)} - \mathbf{x}^{(m)}\| < \varepsilon.
+ \]
+\end{defi}
+
+\begin{defi}[Complete normed space]
+ A normed space $(V, \|\ph\|)$ is \emph{complete} if every Cauchy sequence converges to an element in $V$.
+\end{defi}
+
+We'll start with some easy facts about Cauchy sequences and complete spaces.
+\begin{prop}
+ Any convergent sequence is Cauchy.
+\end{prop}
+
+\begin{proof}
+ If $\mathbf{x}_k \to \mathbf{x}$, then
+ \[
+ \|\mathbf{x}_k - \mathbf{x}_\ell\| \leq \|\mathbf{x}_k - \mathbf{x}\| + \|\mathbf{x}_\ell - \mathbf{x}\| \to 0 \text{ as }k, \ell \to \infty.\qedhere
+ \]
+\end{proof}
+
+\begin{prop}
+ A Cauchy sequence is bounded.
+\end{prop}
+
+\begin{proof}
+ By definition, there is some $N$ such that for all $n \geq N$, we have $\|\mathbf{x}_N - \mathbf{x}_n\| < 1$. So $\|\mathbf{x}_n\| < 1 + \|\mathbf{x}_N\|$ for $n \geq N$. So, for all $n$,
+ \[
+ \|\mathbf{x}_n\| \leq \max\{\|\mathbf{x}_1\|, \cdots, \|\mathbf{x}_{N - 1}\|, 1 + \|\mathbf{x}_N\|\}.\qedhere
+ \]
+\end{proof}
+
+\begin{prop}
+ If a Cauchy sequence has a subsequence converging to an element $\mathbf{x}$, then the whole sequence converges to $\mathbf{x}$.
+\end{prop}
+
+\begin{proof}
+ Suppose $\mathbf{x}_{k_j} \to \mathbf{x}$. Since $(\mathbf{x}_k)$ is Cauchy, given $\varepsilon > 0$, we can choose an $N$ such that $\|\mathbf{x}_n - \mathbf{x}_m\| < \frac{\varepsilon}{2}$ for all $n, m \geq N$. We can also choose $j_0$ such that $k_{j_0} \geq N$ and $\|\mathbf{x}_{k_{j_0}} - \mathbf{x}\| < \frac{\varepsilon}{2}$. Then for any $n \geq N$, we have
+ \[
+ \|\mathbf{x}_n - \mathbf{x}\| \leq \|\mathbf{x}_n - \mathbf{x}_{k_{j_0}}\| + \|\mathbf{x} - \mathbf{x}_{k_{j_0}}\| < \varepsilon.\qedhere
+ \]
+\end{proof}
+
+\begin{prop}
+ If $\|\ph \|'$ is Lipschitz equivalent to $\|\ph\|$ on $V$, then $(\mathbf{x}_k)$ is Cauchy with respect to $\|\ph \|$ if and only if $(\mathbf{x}_k)$ is Cauchy with respect to $\|\ph\|'$. Also, $(V, \|\ph\|)$ is complete if and only if $(V, \|\ph\|')$ is complete.
+\end{prop}
+
+\begin{proof}
+ This follows directly from the definition.
+\end{proof}
+
+\begin{thm}
+ $\R^n$ (with the Euclidean norm, say) is complete.
+\end{thm}
+
+\begin{proof}
+ The important thing is to know this is true for $n = 1$, which was proved in IA Analysis I.
+
+ If $(\mathbf{x}^{(k)})$ is Cauchy in $\R^n$, then $(x_j^{(k)})$ is a Cauchy sequence of real numbers for each $j \in \{1, \cdots, n\}$. By the completeness of the reals, we know that $x_j^{(k)} \to x_j \in \R$ for some $x_j$. So $\mathbf{x}^{(k)} \to \mathbf{x} = (x_1, \cdots, x_n)$, since convergence in $\R^n$ is equivalent to componentwise convergence.
+\end{proof}
+Note that the spaces $\ell^1, \ell^2, \ell^\infty$ are all complete with respect to the standard norms. Also, $C([0, 1])$ is complete with respect to $\|\ph\|_\infty$, since uniform Cauchy convergence implies uniform convergence, and the uniform limit of continuous functions is continuous. However, $C([0, 1])$ with the $L^1$ or $L^2$ norms are not complete (see example sheet).
+
+The incompleteness of $L^1$ tells us that $C([0, 1])$ is \emph{not large enough} to be complete under the $L^1$ or $L^2$ norm. In fact, the space of Riemann integrable functions, say $\mathcal{R}([0, 1])$, is the natural space for the $L^1$ norm, and of course contains $C([0, 1])$. As we have previously mentioned, this time $\mathcal{R}([0, 1])$ is \emph{too large} for $\|\ph\|_{L^1}$ to be a norm, since $\int_0^1 |f|\;\d x = 0$ does not imply $f = 0$. This is a problem we can solve: we just take equivalence classes of Riemann integrable functions, where $f$ and $g$ are equivalent if $\int_0^1 |f - g|\;\d x = 0$. But even then, $\mathcal{R}([0, 1])/{\sim}$ is not complete under the $L^1$ norm. This is a serious defect of the Riemann integral, which eventually led to the Lebesgue integral; it generalizes the Riemann integral and gives a complete normed space.
+
+Note that when we quotient $\mathcal{R}([0, 1])$ by the equivalence relation $f\sim g$ if $\int_0^1 |f - g|\;\d x = 0$, we are not losing too much information about our functions. For the integral to be zero, $f - g$ cannot be non-zero at any point of continuity. Hence $f$ and $g$ agree at all points of continuity. We also know, by Lebesgue's theorem, that the set of points of discontinuity has Lebesgue measure zero. So $f$ and $g$ disagree on at most a set of Lebesgue measure zero.
+
+\begin{eg}
+ Let
+ \[
+ V = \{(x_n) \in \R^\N: x_j = 0\text{ for all but finitely many }j\}.
+ \]
+ Take the supremum norm $\|\ph\|_\infty$ on $V$. This is a subspace of $\ell^\infty$ (and is sometimes denoted $\ell^0$). Then $(V, \|\ph\|_\infty)$ is \emph{not} complete. We define $x^{(k)} = (1, \frac{1}{2}, \frac{1}{3}, \cdots, \frac{1}{k}, 0, 0, \cdots)$ for $k = 1, 2, 3, \cdots$. Then this is Cauchy, since
+ \[
+ \|x^{(k)} - x^{(\ell)}\| = \frac{1}{\min\{\ell, k\} + 1} \to 0,
+ \]
+ but it is not convergent in $V$. If it converged to some $x \in V$, then $x^{(k)}_j \to x_j$ for each $j$, so we must have $x_j = \frac{1}{j}$; but this sequence is not in $V$.
+
+ We will later show that this is because $V$ is not \emph{closed}, after we define what it means to be closed.
+\end{eg}
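The Cauchy estimate used in this example is easy to confirm computationally; the sketch below (illustrative only) represents each $x^{(k)}$ by its non-zero prefix and uses exact rational arithmetic.

```python
from fractions import Fraction

def x(k):
    # x^(k) = (1, 1/2, ..., 1/k, 0, 0, ...), stored as its non-zero prefix
    return [Fraction(1, j) for j in range(1, k + 1)]

def sup_dist(u, v):
    # sup-norm distance; pad the shorter list with zeros
    n = max(len(u), len(v))
    u = u + [Fraction(0)] * (n - len(u))
    v = v + [Fraction(0)] * (n - len(v))
    return max(abs(a - b) for a, b in zip(u, v))

# ||x^(k) - x^(l)||_infty = 1/(min(k, l) + 1): the largest entry present in
# the longer prefix but missing from the shorter one.
for k, l in [(3, 7), (10, 4), (20, 25)]:
    assert sup_dist(x(k), x(l)) == Fraction(1, min(k, l) + 1)

# So the tail distances tend to 0 (the sequence is Cauchy), but the only
# candidate limit (1, 1/2, 1/3, ...) has infinitely many non-zero entries,
# hence lies outside V.
```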
+
+\begin{defi}[Open set]
+ Let $(V, \|\ph\|)$ be a normed space. A subset $E\subseteq V$ is \emph{open} in $V$ if for any $\mathbf{y} \in E$, there is some $r > 0$ such that
+ \[
+ B_r(\mathbf{y}) = \{\mathbf{x}\in V: \|\mathbf{x} - \mathbf{y}\| < r\} \subseteq E.
+ \]
+\end{defi}
+
+We first check that the open ball is open.
+\begin{prop}
+ $B_r(\mathbf{y})\subseteq V$ is an open subset for all $r > 0$, $\mathbf{y} \in V$.
+\end{prop}
+
+\begin{proof}
+ Let $\mathbf{x} \in B_r(\mathbf{y})$. Let $\rho = r - \|\mathbf{x} - \mathbf{y}\| > 0$. Then $B_\rho(\mathbf{x}) \subseteq B_r(\mathbf{y})$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [dashed, fill=gray!50!white] circle [radius=1];
+ \draw (0, 0) -- (0.707, 0.707);
+ \draw (0.5, 0.5) node [circ] {} node [right] {$\mathbf{x}$};
+ \draw [dashed] (0.5, 0.5) circle [radius=0.293];
+ \node [circ] at (0, 0) {};
+ \node [below] at (0, 0) {$\mathbf{y}$};
+ \end{tikzpicture}
+ \end{center}
+\end{proof}
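The choice $\rho = r - \|\mathbf{x} - \mathbf{y}\|$ in this proof can be probed numerically. The following sketch (random sampling in $\R^2$ with $r = 1$, $\mathbf{y} = \mathbf{0}$; purely illustrative) checks that every sampled point of $B_\rho(\mathbf{x})$ indeed lies in $B_r(\mathbf{y})$.

```python
import math
import random

random.seed(0)

def norm(v):
    # Euclidean norm of a tuple
    return math.sqrt(sum(t * t for t in v))

r = 1.0  # y is taken to be the origin, so norm(x) = ||x - y||

for _ in range(1000):
    # pick x inside B_r(y)
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    if norm(x) >= r:
        continue
    rho = r - norm(x)  # the radius used in the proof
    # pick z near x, keeping only samples that land inside B_rho(x)
    z = tuple(t + random.uniform(-rho, rho) / 2 for t in x)
    if norm(tuple(a - b for a, b in zip(z, x))) < rho:
        # triangle inequality:
        # ||z - y|| <= ||z - x|| + ||x - y|| < rho + ||x - y|| = r
        assert norm(z) < r
```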
+
+\begin{defi}[Limit point]
+ Let $(V, \|\ph\|)$ be a normed space, $E\subseteq V$. A point $\mathbf{y} \in V$ is a \emph{limit point} of $E$ if there is a sequence $(\mathbf{x}_k)$ in $E$ with $\mathbf{x}_k \not= \mathbf{y}$ for all $k$ and $\mathbf{x}_k \to \mathbf{y}$.
+\end{defi}
+(Some people allow $\mathbf{x}_k = \mathbf{y}$, but we will use this definition in this course.)
+
+\begin{eg}
+ Let $V = \R$, $E = (0, 1)$. Then $0, 1$ are limit points of $E$. The set of \emph{all} limit points is $[0, 1]$.
+
+ If $E' = (0, 1) \cup \{2\}$. Then the set of limit points of $E'$ is still $[0, 1]$.
+\end{eg}
+
+There is a nice result characterizing whether a set contains all its limit points.
+\begin{prop}
+ Let $E\subseteq V$. Then $E$ contains all of its limit points if and only if $V\setminus E$ is open in $V$.
+\end{prop}
+
+Using this proposition, we define the following:
+\begin{defi}[Closed set]
+ Let $(V, \|\ph\|)$ be a normed space. Then $E\subseteq V$ is \emph{closed} if $V\setminus E$ is open, i.e.\ $E$ contains all its limit points.
+\end{defi}
+Note that a set can be both open and closed, or neither open nor closed.
+
+Before we prove the proposition, we first have a lemma:
+
+\begin{lemma}
+ Let $(V, \|\ph\|)$ be a normed space, $E$ any subset of $V$. Then a point $\mathbf{y} \in V$ is a limit point of $E$ if and only if
+ \[
+ (B_r(\mathbf{y}) \setminus \{\mathbf{y}\}) \cap E \not= \emptyset
+ \]
+ for every $r > 0$.
+\end{lemma}
+
+\begin{proof}
+ $(\Rightarrow)$ If $\mathbf{y}$ is a limit point of $E$, then there exists a sequence $(\mathbf{x}_k)$ in $E$ with $\mathbf{x}_k \not= \mathbf{y}$ for all $k$ and $\mathbf{x}_k \to \mathbf{y}$. Then for every $r$, for sufficiently large $k$, we have $\mathbf{x}_k \in B_r(\mathbf{y})$. Since $\mathbf{x}_k \not= \mathbf{y}$ and $\mathbf{x}_k \in E$, the result follows.
+
+ $(\Leftarrow)$ For each $k$, let $r = \frac{1}{k}$. By assumption, we have some $\mathbf{x}_k \in (B_{\frac{1}{k}}(\mathbf{y}) \setminus \{\mathbf{y}\}) \cap E$. Then $\mathbf{x}_k \to \mathbf{y}$, $\mathbf{x}_k \not= \mathbf{y}$ and $\mathbf{x}_k \in E$. So $\mathbf{y}$ is a limit point of $E$.
+\end{proof}
+
+Now we can prove our proposition.
+\begin{prop}
+ Let $E\subseteq V$. Then $E$ contains all of its limit points if and only if $V\setminus E$ is open in $V$.
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ Suppose $E$ contains all its limit points. To show $V\setminus E$ is open, we let $\mathbf{y} \in V\setminus E$. So $\mathbf{y}$ is not a limit point of $E$. So for some $r$, we have $(B_r(\mathbf{y})\setminus \{\mathbf{y}\}) \cap E = \emptyset$. Hence it follows that $B_r(\mathbf{y}) \subseteq V\setminus E$ (since $\mathbf{y} \not\in E$).
+
+ $(\Leftarrow)$ Suppose $V\setminus E$ is open. Let $\mathbf{y} \in V\setminus E$. Since $V\setminus E$ is open, there is some $r$ such that $B_r(\mathbf{y}) \subseteq V\setminus E$. By the lemma, $\mathbf{y}$ is not a limit point of $E$. So all limit points of $E$ are in $E$.
+\end{proof}
+
+\subsection{Sequential compactness}
+In general, there are two different notions of compactness --- ``sequential compactness'' and just ``compactness''. However, in normed spaces (and metric spaces, as we will later encounter), these two notions are equivalent. So we will be lazy and just say ``compactness'' as opposed to ``sequential compactness''.
+
+\begin{defi}[(Sequentially) compact set]
+ Let $V$ be a normed vector space. A subset $K\subseteq V$ is said to be \emph{compact} (or \emph{sequentially compact}) if every sequence in $K$ has a subsequence that converges to a point in $K$.
+\end{defi}
+
+There are some facts about compact sets that we can immediately deduce:
+\begin{thm}
+ Let $(V, \|\ph\|)$ be a normed vector space, $K\subseteq V$ a subset. Then
+ \begin{enumerate}
+ \item If $K$ is compact, then $K$ is closed and bounded.
+ \item If $V$ is $\R^n$ (with, say, the Euclidean norm), then if $K$ is closed and bounded, then $K$ is compact.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $K$ be compact. Boundedness is easy: if $K$ is unbounded, then we can generate a sequence $\mathbf{x}_k$ such that $\|\mathbf{x}_k\| \to \infty$. Then this cannot have a convergent subsequence, since any subsequence will also be unbounded, and convergent sequences are bounded. So $K$ must be bounded.
+
+ To show $K$ is closed, let $\mathbf{y}$ be a limit point of $K$. Then there is some $\mathbf{y}_k \in K$ such that $\mathbf{y}_k \to \mathbf{y}$. Then by compactness, there is a subsequence of $\mathbf{y}_k$ converging to some point in $K$. But any subsequence must converge to $\mathbf{y}$. So $\mathbf{y} \in K$.
+
+ \item Let $K$ be closed and bounded. Let $\mathbf{x}_k$ be a sequence in $K$. Since $V = \R^n$ and $K$ is bounded, $(\mathbf{x}_k)$ is a bounded sequence in $\R^n$. So by Bolzano-Weierstrass, this has a convergent subsequence $\mathbf{x}_{k_j}$. By closedness of $K$, we know that the limit is in $K$. So $K$ is compact.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\subsection{Mappings between normed spaces}
+We are now going to look at functions between normed spaces, and see if they are continuous.
+
+Let $(V, \|\ph\|)$, $(V', \|\ph\|')$ be normed spaces, and let $E \subseteq V$ be a subset, and $f: E \to V'$ a mapping (which is just a function, although we reserve the terminology ``function'' or ``functional'' for when $V' = \R$).
+
+\begin{defi}[Continuity of mapping]
+ Let $\mathbf{y} \in E$. We say $f: E \to V'$ is \emph{continuous at} $\mathbf{y}$ if for all $\varepsilon > 0$, there is $\delta > 0$ such that the following holds:
+ \[
+ (\forall \mathbf{x} \in E)\; \|\mathbf{x} - \mathbf{y}\| < \delta \Rightarrow \|f(\mathbf{x}) - f(\mathbf{y})\|' < \varepsilon.
+ \]
+\end{defi}
+Note that $\mathbf{x} \in E$ and $\|\mathbf{x} - \mathbf{y}\| < \delta$ is equivalent to saying $\mathbf{x} \in B_\delta(\mathbf{y}) \cap E$. Similarly, $\|f(\mathbf{x}) - f(\mathbf{y})\| < \varepsilon$ is equivalent to $f(\mathbf{x}) \in B_\varepsilon(f(\mathbf{y}))$. In other words, $\mathbf{x} \in f^{-1}(B_\varepsilon(f(\mathbf{y})))$. So we can rewrite this statement as there is some $\delta > 0$ such that
+\[
+ E \cap B_\delta(\mathbf{y}) \subseteq f^{-1}(B_\varepsilon(f(\mathbf{y}))).
+\]
+We can use this to provide an alternative characterization of continuity.
+\begin{thm}
+ Let $(V, \|\ph\|)$, $(V', \|\ph\|')$ be normed spaces, $E\subseteq V$, $f: E \to V'$. Then $f$ is continuous at $\mathbf{y} \in E$ if and only if for any sequence $\mathbf{y}_k \to \mathbf{y}$ in $E$, we have $f(\mathbf{y}_k) \to f(\mathbf{y})$.
+\end{thm}
+
+\begin{proof}
+ $(\Rightarrow)$ Suppose $f$ is continuous at $\mathbf{y} \in E$, and that $\mathbf{y}_k \to \mathbf{y}$. Given $\varepsilon > 0$, by continuity, there is some $\delta > 0$ such that
+ \[
+ B_\delta(\mathbf{y}) \cap E \subseteq f^{-1}(B_\varepsilon(f(\mathbf{y}))).
+ \]
+ For sufficiently large $k$, $\mathbf{y}_k \in B_\delta(\mathbf{y}) \cap E$. So $f(\mathbf{y}_k) \in B_\varepsilon(f(\mathbf{y}))$, or equivalently,
+ \[
+ \|f(\mathbf{y}_k) - f(\mathbf{y})\|' < \varepsilon.
+ \]
+ So done.
+
+ $(\Leftarrow)$ If $f$ is not continuous at $\mathbf{y}$, then there is some $\varepsilon > 0$ such that for any $k$, we have
+ \[
+ B_{\frac{1}{k}}(\mathbf{y}) \cap E \not\subseteq f^{-1}(B_\varepsilon(f(\mathbf{y}))).
+ \]
+ Choose $\mathbf{y}_k \in (B_{\frac{1}{k}}(\mathbf{y}) \cap E) \setminus f^{-1}(B_{\varepsilon}(f(\mathbf{y})))$. Then $\mathbf{y}_k \to \mathbf{y}$, $\mathbf{y}_k \in E$, but $\|f(\mathbf{y}_k) - f(\mathbf{y})\|' \geq \varepsilon$, contrary to the hypothesis.
+\end{proof}
+
+\begin{defi}[Continuous function]
+ $f: E \to V'$ is \emph{continuous} if $f$ is continuous at every point $\mathbf{y} \in E$.
+\end{defi}
+
+\begin{thm}
+ Let $(V, \|\ph\|)$ and $(V', \|\ph\|')$ be normed spaces, and $K$ a compact subset of $V$, and $f: V \to V'$ a continuous function. Then
+ \begin{enumerate}
+ \item $f(K)$ is compact in $V'$
+ \item $f(K)$ is closed and bounded
+ \item If $V' = \R$, then the function attains its supremum and infimum, i.e.\ there is some $\mathbf{y}_1, \mathbf{y}_2 \in K$ such that
+ \[
+ f(\mathbf{y}_1) = \sup\{f(\mathbf{y}): \mathbf{y} \in K\},\quad f(\mathbf{y}_2) = \inf\{f(\mathbf{y}): \mathbf{y} \in K\}.
+ \]
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $(\mathbf{x}_k)$ be a sequence in $f(K)$ with $\mathbf{x}_k = f(\mathbf{y}_k)$ for some $\mathbf{y}_k \in K$. By compactness of $K$, there is a subsequence $(\mathbf{y}_{k_j})$ such that $\mathbf{y}_{k_j} \to \mathbf{y}$ for some $\mathbf{y} \in K$. By the previous theorem, we know that $f(\mathbf{y}_{k_j}) \to f(\mathbf{y})$. So $\mathbf{x}_{k_j} \to f(\mathbf{y}) \in f(K)$. So $f(K)$ is compact.
+ \item This follows directly from $(i)$, since every compact space is closed and bounded.
+ \item If $F$ is any bounded subset of $\R$, then either $\sup F \in F$ or $\sup F$ is a limit point of $F$ (or both), by definition of the supremum. If $F$ is closed and bounded, then any limit point must be in $F$. So $\sup F \in F$. Applying this fact to $F = f(K)$ gives the desired result, and similarly for infimum.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Finally, we will end the chapter by proving that any two norms on a finite dimensional space are Lipschitz equivalent. The key lemma is the following:
+\begin{lemma}
+ Let $V$ be an $n$-dimensional vector space with a basis $\{\mathbf{v}_1, \cdots, \mathbf{v}_n\}$. Then for any $\mathbf{x} \in V$, write $\mathbf{x} = \sum_{j = 1}^n x_j \mathbf{v}_j$, with $x_j \in \R$. We define the Euclidean norm by
+ \[
+ \|\mathbf{x}\|_2 = \left(\sum x_j^2\right)^{\frac{1}{2}}.
+ \]
+ Then this is a norm, and $S = \{\mathbf{x} \in V: \|\mathbf{x}\|_2 = 1\}$ is compact in $(V, \|\ph\|_2)$.
+\end{lemma}
+After we show this, we can easily show that every other norm is equivalent to this norm.
+
+This is not hard to prove, since we know that the unit sphere in $\R^n$ is compact, and we can just pass our things on to $\R^n$.
+
+\begin{proof}
+ $\|\ph\|_2$ is well-defined since $x_1, \cdots, x_n$ are uniquely determined by $\mathbf{x}$ (by (a certain) definition of basis). It is easy to check that $\|\ph\|_2$ is a norm.
+
+ Given a sequence $(\mathbf{x}^{(k)})$ in $S$, write $\mathbf{x}^{(k)} = \sum_{j = 1}^n x_j^{(k)} \mathbf{v}_j$, and define the following sequence in $\R^n$:
+ \[
+ \tilde{\mathbf{x}}^{(k)} = (x_1^{(k)},\cdots, x_n^{(k)}) \in \tilde{S} = \{\tilde{\mathbf{x}} \in \R^n: \|\tilde{\mathbf{x}}\|_{\mathrm{Euclid}} = 1\}.
+ \]
+ As $\tilde{S}$ is closed and bounded in $\R^n$ under the Euclidean norm, it is compact. Hence there exists a subsequence $(\tilde{\mathbf{x}}^{(k_j)})$ and some $\tilde{\mathbf{x}} \in \tilde{S}$ such that $\|\tilde{\mathbf{x}}^{(k_j)} - \tilde{\mathbf{x}} \|_{\mathrm{Euclid}} \to 0$. This says that $\mathbf{x} = \sum_{j = 1}^n x_j \mathbf{v}_j \in S$, and $\|\mathbf{x}^{(k_j)} - \mathbf{x}\|_2 \to 0$. So done.
+\end{proof}
+
+\begin{thm}
+ Any two norms on a finite dimensional vector space are Lipschitz equivalent.
+\end{thm}
+
+The idea is to pick a basis, and prove that any norm is equivalent to $\|\ph\|_2$.
+
+To show that an arbitrary norm $\|\ph\|$ is equivalent to $\|\ph\|_2$, we have to find constants $a, b > 0$ such that for any $\mathbf{x} \not= \mathbf{0}$, we have
+\[
+ a \|\mathbf{x}\|_2 \leq \|\mathbf{x}\| \leq b\|\mathbf{x}\|_2.
+\]
+We can divide by $\|\mathbf{x}\|_2$ and obtain an equivalent requirement:
+\[
+ a \leq \left\|\frac{\mathbf{x}}{\|\mathbf{x}\|_2}\right\| \leq b.
+\]
+We know that any $\mathbf{x}/\|\mathbf{x}\|_2$ lies in the unit sphere $S = \{\mathbf{x} \in V: \|\mathbf{x}\|_2 = 1\}$. So we want to show that $\|\ph\|$ is bounded above, and bounded below away from $0$, on $S$. Since $S$ is compact, it suffices to show that $\|\ph\|: (S, \|\ph\|_2) \to \R$ is continuous.
+\begin{proof}
+ Fix a basis $\{\mathbf{v}_1, \cdots, \mathbf{v}_n\}$ for $V$, and define $\|\ph\|_2$ as in the lemma above. Then $\|\ph\|_2$ is a norm on $V$, and $S = \{\mathbf{x} \in V: \|\mathbf{x}\|_2 = 1\}$, the unit sphere, is compact by above.
+
+ To show that any two norms are equivalent, it suffices to show that if $\|\ph\|$ is any other norm, then it is equivalent to $\|\ph\|_2$, since equivalence is transitive.
+
+ For any
+ \[
+ \mathbf{x} = \sum_{j = 1}^n x_j \mathbf{v}_j,
+ \]
+ we have
+ \begin{align*}
+ \|\mathbf{x}\| &= \left\|\sum_{j = 1}^n x_j \mathbf{v}_j\right\|\\
+ &\leq \sum |x_j| \|\mathbf{v}_j\|\\
+ &\leq \|\mathbf{x}\|_2 \left(\sum_{j = 1}^n \|\mathbf{v}_j\|^2\right)^{\frac{1}{2}}
+ \end{align*}
+ by the Cauchy-Schwarz inequality. So $\|\mathbf{x}\| \leq b\|\mathbf{x}\|_2$ for $b = \left(\sum \|\mathbf{v}_j\|^2\right)^{\frac{1}{2}}$.
+
+ To find $a$ such that $\|\mathbf{x}\| \geq a\|\mathbf{x}\|_2$, consider $\|\ph\|: (S, \|\ph\|_2)\to \R$. By above, we know that
+ \[
+ \|\mathbf{x} - \mathbf{y}\| \leq b \|\mathbf{x} - \mathbf{y}\|_2.
+ \]
+ By the triangle inequality, we know that $\big|\|\mathbf{x}\| - \|\mathbf{y}\|\big| \leq \|\mathbf{x} - \mathbf{y}\|$. So when $\mathbf{x}$ is close to $\mathbf{y}$ under $\|\ph\|_2$, then $\|\mathbf{x}\|$ and $\|\mathbf{y}\|$ are close. So $\|\ph\|: (S, \|\ph\|_2) \to \R$ is continuous. Since $S$ is compact, there is some $\mathbf{x}_0 \in S$ attaining the infimum $a = \inf_{\mathbf{x} \in S} \|\mathbf{x}\|$. Since $\mathbf{x}_0 \not= \mathbf{0}$, we know that $a = \|\mathbf{x}_0\| > 0$. So $\|\mathbf{x}\| \geq a\|\mathbf{x}\|_2$ for all $\mathbf{x}\in V$.
+\end{proof}
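For a concrete instance (an illustrative assumption, not part of the proof), take $V = \R^2$ with the standard basis and $\|\ph\| = \|\ph\|_1$. The proof's constants are then explicit: $b = \sqrt{\|\mathbf{e}_1\|_1^2 + \|\mathbf{e}_2\|_1^2} = \sqrt{2}$, and $a = \inf_S \|\ph\|_1 = 1$, attained at $(1, 0)$. The sketch below checks the resulting inequalities on random points.

```python
import math
import random

random.seed(1)

def norm1(v):
    # the 1-norm on R^2
    return sum(abs(t) for t in v)

def norm2(v):
    # the Euclidean norm on R^2
    return math.sqrt(sum(t * t for t in v))

# Constants from the proof, for ||.||_1 vs ||.||_2 with the standard basis:
b = math.sqrt(norm1((1, 0)) ** 2 + norm1((0, 1)) ** 2)  # = sqrt(2)
a = 1.0  # infimum of ||.||_1 over the Euclidean unit circle

for _ in range(1000):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    # a * ||x||_2 <= ||x||_1 <= b * ||x||_2 (small slack for rounding)
    assert a * norm2(x) <= norm1(x) + 1e-12
    assert norm1(x) <= b * norm2(x) + 1e-12
```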
+
+The key to the proof is the compactness of the unit sphere of $(V, \|\ph\|_2)$. On the other hand, compactness of the unit sphere also characterizes finite-dimensionality: as you will show on the example sheets, if the unit sphere of a normed space is compact, then the space must be finite-dimensional.
+
+\begin{cor}
+ Let $(V, \|\ph\|)$ be a finite-dimensional normed space.
+ \begin{enumerate}
+ \item The Bolzano-Weierstrass theorem holds for $V$, i.e.\ any bounded sequence in $V$ has a convergent subsequence.
+ \item A subset of $V$ is compact if and only if it is closed and bounded.
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}
+ If a subset is bounded in one norm, then it is bounded in any Lipschitz equivalent norm. Similarly, if it converges to $\mathbf{x}$ in one norm, then it converges to $\mathbf{x}$ in any Lipschitz equivalent norm.
+
+ Since these results hold for the Euclidean norm $\|\ph\|_2$, it follows that they hold for arbitrary finite-dimensional vector spaces.
+\end{proof}
+
+\begin{cor}
+ Any finite-dimensional normed vector space $(V, \|\ph\|)$ is complete.
+\end{cor}
+
+\begin{proof}
+ This is true since if a space is complete in one norm, then it is complete in any Lipschitz equivalent norm, and we know that $\R^n$ under the Euclidean norm is complete.
+\end{proof}
+
+\section{Metric spaces}
+We would like to extend our notions such as convergence, open and closed subsets, compact subsets and continuity from normed spaces to more general sets. Recall that when we defined these notions, we didn't really use the vector space structure of a normed vector space much. Moreover, we mostly defined these things in terms of convergence of sequences. For example, a set is closed if it contains all its limit points, and a set is open if its complement is closed.
+
+So what do we actually need in order to define convergence, and hence all the notions we've been using? Recall we define $\mathbf{x}_k \to \mathbf{x}$ to mean $\|\mathbf{x}_k - \mathbf{x}\| \to 0$ as a sequence in $\R$. What is $\|\mathbf{x}_k - \mathbf{x}\|$ really about? It is measuring the distance between $\mathbf{x}_k$ and $\mathbf{x}$. So what we really need is a measure of distance.
+
+To do so, we can define a distance function $d: V\times V \to \R$ by $d(x, y) = \|x - y\|$. Then we can define $x_k \to x$ to mean $d(x_k, x) \to 0$.
+
+Hence, given \emph{any} function $d: V\times V \to \R$, we can define a notion of ``convergence'' as above. However, we want this to be well-behaved. In particular, we would want the limits of sequences to be unique, and any constant sequence $x_k = x$ should converge to $x$.
+
+We will come up with some restrictions on what $d$ can be based on these requirements.
+
+We can look at our proof of uniqueness of limits (for normed spaces), and see what properties of $d$ we used. Recall that to prove the uniqueness of limits, we first assume that $x_k \to x$ and $x_k \to y$. Then we noticed
+\[
+ \|x - y\| \leq \|x - x_k\| + \|x_k - y\| \to 0,
+\]
+and hence $\|x - y\| = 0$. So $x = y$. We can reformulate this argument in terms of $d$. We first start with
+\[
+ d(x, y) \leq d(x, x_k) + d(x_k, y).
+\]
+To obtain this equation, we are relying on the triangle inequality. So we would want $d$ to satisfy the triangle inequality.
+
+After obtaining this, we know that $d(x_k, y) \to 0$, since this is just the definition of convergence. However, we do not immediately know $d(x, x_k) \to 0$, since we are given a fact about $d(x_k, x)$, not $d(x, x_k)$. Hence we need the property that $d(x_k, x) = d(x, x_k)$. This is symmetry.
+
+Combining this, we know that
+\[
+ d(x, y) \leq 0.
+\]
+From this, we want to say that in fact, $d(x, y) = 0$, and thus $x = y$. Hence we need the property that $d(x, y) \geq 0$ for all $x, y$, and that $d(x, y) = 0$ implies $x = y$.
+
+Finally, to show that a constant sequence has a limit, suppose $x_k = x$ for all $k \in \N$. Then we know that $d(x, x_k) = d(x, x)$ should tend to $0$. So we must have $d(x, x) = 0$ for all $x$.
+
+We will use these properties to define \emph{metric spaces}.
+\subsection{Preliminary definitions}
+\begin{defi}[Metric space]
+ Let $X$ be any set. A \emph{metric} on $X$ is a function $d: X\times X \to \R$ that satisfies
+ \begin{itemize}
+ \item $d(x, y) \geq 0$ with equality iff $x = y$\hfill(non-negativity)
+ \item $d(x, y) = d(y, x)$\hfill(symmetry)
+ \item $d(x, y) \leq d(x, z) + d(z, y)$\hfill(triangle inequality)
+ \end{itemize}
+ The pair $(X, d)$ is called a \emph{metric space}.
+\end{defi}
+We have seen that we can define convergence in terms of a metric. Hence, we can also define open subsets, closed subsets, compact spaces, continuous functions etc. for metric spaces, in a manner consistent with what we had for normed spaces. Moreover, we will show that many of our theorems for normed spaces are also valid in metric spaces.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\R^n$ with the Euclidean \emph{metric} is a metric space, where the metric is defined by
+ \[
+ d(x, y) = \|x - y\| = \sqrt{\sum (x_j - y_j)^2}.
+ \]
+ \item More generally, if $(V, \|\ph\|)$ is a normed space, then $d(x, y) = \|x - y\|$ defines a metric on $V$.
+ \item Discrete metric: let $X$ be any set, and define
+ \[
+ d(x, y) =
+ \begin{cases}
+ 0 & x = y\\
+ 1 & x\not= y
+ \end{cases}.
+ \]
+ \item Given a metric space $(X, d)$, we define
+ \[
+ g(x, y) = \min\{1, d(x, y)\}.
+ \]
+ Then this is a metric on $X$. Similarly,
+ \[
+ h(x, y) = \frac{d(x, y)}{1 + d(x, y)}
+ \]
+ is also a metric on $X$. In both cases, we obtain a \emph{bounded} metric.
+
+ The axioms are easily shown to be satisfied, apart from the triangle inequality. So let's check the triangle inequality for $h$. We'll use a general fact that for numbers $a, c \geq 0, b, d > 0$ we have
+ \[
+ \frac{a}{b} \leq \frac{c}{d} \Leftrightarrow \frac{a}{a + b} \leq \frac{c}{c + d}.
+ \]
+ Based on this fact, we can start with
+ \[
+ d(x, y) \leq d(x, z) + d(z, y).
+ \]
+ Then we obtain
+ \begin{align*}
+ \frac{d(x, y)}{1 + d(x, y)} &\leq \frac{d(x, z) + d(z, y)}{1 + d(x, z) + d(z, y)} \\
+ &= \frac{d(x, z)}{1 + d(x, z) + d(z, y)} + \frac{d(z, y)}{1 + d(x, z) + d(z, y)} \\
+ &\leq \frac{d(x, z)}{1 + d(x, z)} + \frac{d(z, y)}{1 + d(z, y)}.
+ \end{align*}
+ So done.
+ \end{enumerate}
+\end{eg}
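The triangle inequality for $h = d/(1 + d)$ can be spot-checked numerically; the sketch below (purely illustrative) takes the Euclidean metric on $\R$ as the underlying $d$ and tests random triples.

```python
import random

random.seed(2)

def d(x, y):
    # underlying metric: the Euclidean metric on R
    return abs(x - y)

def h(x, y):
    # the bounded metric built from d
    return d(x, y) / (1 + d(x, y))

for _ in range(1000):
    x, y, z = (random.uniform(-100, 100) for _ in range(3))
    # t -> t/(1+t) is increasing, so d(x,y) <= d(x,z) + d(z,y) pushes
    # through to h, exactly as in the estimate above
    assert h(x, y) <= h(x, z) + h(z, y) + 1e-12
    # h is bounded by 1 even though d is unbounded
    assert 0 <= h(x, y) < 1
```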
+
+We can also extend the notion of Lipschitz equivalence to metric spaces.
+\begin{defi}[Lipschitz equivalent metrics]
+ Metrics $d, d'$ on a set $X$ are said to be \emph{Lipschitz equivalent} if there are (positive) constants $A, B$ such that
+ \[
+ A d(x, y) \leq d'(x, y) \leq B d(x, y)
+ \]
+ for all $x, y \in X$.
+\end{defi}
+Clearly, any pair of Lipschitz equivalent norms gives a pair of Lipschitz equivalent metrics. Any metric coming from a norm on $\R^n$ is thus Lipschitz equivalent to the Euclidean metric. We will later show that two Lipschitz equivalent metrics induce the same \emph{topology}. In this sense, Lipschitz equivalent metrics are indistinguishable.
+
+\begin{defi}[Metric subspace]
+ Given a metric space $(X, d)$ and a subset $Y\subseteq X$, the restriction $d|_{Y \times Y}: Y \times Y \to \R$ is a metric on $Y$. This is called the \emph{induced metric} or \emph{subspace metric}.
+\end{defi}
+Note that unlike vector subspaces, we do not require our subsets to have any structure. We can take \emph{any} subset of $X$ and get a metric subspace.
+
+\begin{eg}
+ Any subspace of $\R^n$ is a metric space with the Euclidean metric.
+\end{eg}
+
+\begin{defi}[Convergence]
+ Let $(X, d)$ be a metric space. A sequence $x_n \in X$ is said to \emph{converge} to $x$ if $d(x_n, x) \to 0$ as a real sequence. In other words,
+ \[
+ (\forall \varepsilon > 0)(\exists K)(\forall k > K)\, d(x_k, x) < \varepsilon.
+ \]
+ Alternatively, this says that given any $\varepsilon$, for sufficiently large $k$, we get $x_k \in B_\varepsilon(x)$.
+\end{defi}
+Again, $B_r(a)$ is the open ball centered at $a$ with radius $r$, defined as
+\[
+ B_r(a) = \{x \in X: d(x, a) < r\}.
+\]
+\begin{prop}
+ The limit of a convergent sequence is unique.
+\end{prop}
+
+\begin{proof}
+ Same as that of normed spaces.
+\end{proof}
+Note that notions such as convergence, open and closed subsets and continuity of mappings all make sense in an even more general setting called \emph{topological spaces}. However, in this setting, limits of convergent sequences can fail to be unique. We will not worry ourselves about these since we will just focus on metric spaces.
+
+\subsection{Topology of metric spaces}
+We will define open subsets of a metric space in exactly the same way as we did for normed spaces.
+\begin{defi}[Open subset]
+ Let $(X, d)$ be a metric space. A subset $U\subseteq X$ is \emph{open} if for every $y \in U$, there is some $r > 0$ such that $B_r(y) \subseteq U$.
+\end{defi}
+This means we can write any open $U$ as a union of open balls:
+\[
+ U = \bigcup_{y \in U} B_{r(y)} (y)
+\]
+for appropriate choices of $r(y)$ for every $y$.
+
+It is easy to check that every open ball $B_r(y)$ is an open set. The proof is exactly the same as what we had for normed spaces.
+
+Note that two different metrics $d, d'$ on the same set $X$ may give rise to the same collection of open subsets.
+\begin{eg}
+ Lipschitz equivalent metrics give rise to the same collection of open sets, i.e.\ if $d, d'$ are Lipschitz equivalent, then a subset $U \subseteq X$ is open with respect to $d$ if and only if it is open with respect to $d'$. Proof is left as an easy exercise.
+\end{eg}
+The converse, however, is not necessarily true.
+\begin{eg}
+ Let $X = \R$, $d(x, y) = |x - y|$ and $d'(x, y) = \min\{1, |x - y|\}$. It is easy to check that these are not Lipschitz equivalent, but they induce the same collection of open subsets.
+\end{eg}
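A small numerical illustration of this example (a finite-sample check, not a proof): for radii $r \leq 1$, the open balls of the two metrics coincide, which is why they generate the same open sets, while for larger radii the balls differ.

```python
def d(x, y):   # the usual metric on R
    return abs(x - y)

def d2(x, y):  # the bounded metric min{1, |x - y|}
    return min(1.0, abs(x - y))

# a finite grid of sample points in [-3, 3]
pts = [i / 100.0 for i in range(-300, 301)]
centre = 0.0

def ball(metric, c, r):
    return {x for x in pts if metric(x, c) < r}

# for r <= 1 the two open balls agree ...
same_small = all(ball(d, centre, r) == ball(d2, centre, r)
                 for r in (0.1, 0.5, 1.0))
# ... but for r > 1 they differ: every point is within d2-distance 1 of centre
differ_large = ball(d, centre, 2.0) != ball(d2, centre, 2.0)
```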
+
+\begin{defi}[Topology]
+ Let $(X, d)$ be a metric space. The \emph{topology} on $(X, d)$ is the collection of open subsets of $X$. We say it is the topology induced by the metric.
+\end{defi}
+
+\begin{defi}[Topological notion]
+ A notion or property is said to be a \emph{topological} notion or property if it only depends on the topology, and not the metric.
+\end{defi}
+
+We will introduce a useful terminology before we go on:
+
+\begin{defi}[Neighbourhood]
+ Given a metric space $X$ and a point $x \in X$, a \emph{neighbourhood} of $x$ is an open set containing $x$.
+\end{defi}
+Some people do not require the set to be open. Instead, they require a neighbourhood of $x$ to be a set containing an open set that contains $x$. This is more complicated, and we may as well work with open sets directly.
+
+Clearly, being a neighbourhood is a topological property.
+
+\begin{prop}
+ Let $(X, d)$ be a metric space. Then $x_k \to x$ if and only if for every neighbourhood $V$ of $x$, there exists some $K$ such that $x_k \in V$ for all $k \geq K$. Hence convergence is a topological notion.
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ Suppose $x_k \to x$, and let $V$ be any neighbourhood of $x$. Since $V$ is open, by definition, there exists some $\varepsilon$ such that $B_\varepsilon(x) \subseteq V$. By definition of convergence, there is some $K$ such that $x_k \in B_\varepsilon(x)$ for $k \geq K$. So $x_k \in V$ whenever $k \geq K$.
+
+ $(\Leftarrow)$ Since every open ball is a neighbourhood, this direction follows directly from the definition of convergence.
+\end{proof}
+
+\begin{thm}
+ Let $(X, d)$ be a metric space. Then
+ \begin{enumerate}
+ \item The union of \emph{any} collection of open sets is open
+ \item The intersection of finitely many open sets is open.
+ \item $\emptyset$ and $X$ are open.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $U = \bigcup_\alpha V_\alpha$, where each $V_\alpha$ is open. If $x\in U$, then $x\in V_\alpha$ for some $\alpha$. Since $V_\alpha$ is open, there exists $\delta > 0$ such that $B_\delta(x) \subseteq V_\alpha$. So $B_\delta (x) \subseteq \bigcup_\alpha V_\alpha = U$. So $U$ is open.
+ \item Let $U = \bigcap_{i = 1}^n V_i$, where each $V_i$ is open. If $x\in U$, then $x\in V_i$ for all $i = 1, \cdots, n$. So there exists $\delta_i > 0$ with $B_{\delta_i}(x) \subseteq V_i$. Take $\delta = \min\{\delta_1, \cdots, \delta_n\}$. Then $B_\delta(x) \subseteq V_i$ for all $i$. So $B_\delta(x) \subseteq U$. So $U$ is open.
+ \item $\emptyset$ satisfies the definition of an open subset vacuously. $X$ is open since for any $x$, $B_1(x) \subseteq X$.\qedhere
+ \end{enumerate}
+\end{proof}
+This theorem is not important in this course. However, this will be a key defining property we will use when we define topological spaces in IB Metric and Topological Spaces.
+
+We can now define closed subsets and characterize them using open subsets, in exactly the same way as for normed spaces.
+\begin{defi}[Limit point]
+ Let $(X, d)$ be a metric space and $E\subseteq X$. A point $y \in X$ is a \emph{limit point} of $E$ if there exists a sequence $x_k \in E$, $x_k \not= y$ such that $x_k \to y$.
+\end{defi}
+
+\begin{defi}[Closed subset]
+ A subset $E\subseteq X$ is \emph{closed} if $E$ contains all its limit points.
+\end{defi}
+
+\begin{prop}
+ A subset is closed if and only if its complement is open.
+\end{prop}
+
+\begin{proof}
+ Exactly the same as that of normed spaces. It is useful to observe that $y \in X$ is a limit point of $E$ if and only if $(B_r(y)\setminus \{y\}) \cap E \not= \emptyset$ for all $r > 0$.
+\end{proof}
+
+We can write down an analogous theorem for closed sets:
+\begin{thm}
+ Let $(X, d)$ be a metric space. Then
+ \begin{enumerate}
+ \item The intersection of \emph{any} collection of closed sets is closed
+ \item The union of finitely many closed sets is closed.
+ \item $\emptyset$ and $X$ are closed.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ By taking complements of the result for open subsets.
+\end{proof}
+
+\begin{prop}
+ Let $(X, d)$ be a metric space and $x \in X$. Then the singleton $\{x\}$ is a closed subset, and hence any finite subset is closed.
+\end{prop}
+
+\begin{proof}
+ Let $y \in X \setminus \{x\}$. So $d(x, y) > 0$. Then $B_{d(y, x)}(y) \subseteq X\setminus \{x\}$. So $X\setminus \{x\}$ is open. So $\{x\}$ is closed.
+
+ Alternatively, since $\{x\}$ has no limit points, it contains all its limit points. So it is closed.
+\end{proof}
+
+\subsection{Cauchy sequences and completeness}
+
+\begin{defi}[Cauchy sequence]
+ Let $(X, d)$ be a metric space. A sequence $(x_n)$ in $X$ is \emph{Cauchy} if
+ \[
+ (\forall \varepsilon > 0)(\exists N)(\forall n, m \geq N)\; d(x_n, x_m) < \varepsilon.
+ \]
+\end{defi}
+
+\begin{prop}
+ Let $(X, d)$ be a metric space. Then
+ \begin{enumerate}
+ \item Any convergent sequence is Cauchy.
+ \item If a Cauchy sequence has a convergent subsequence, then the original sequence converges to the same limit.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $x_k \to x$, then
+ \[
+ d(x_m, x_n) \leq d(x_m, x) + d(x_n, x) \to 0
+ \]
+ as $m, n \to \infty$.
+ \item Suppose $x_{k_j} \to x$. Since $(x_k)$ is Cauchy, given $\varepsilon > 0$, we can choose an $N$ such that $d(x_n, x_m) < \frac{\varepsilon}{2}$ for all $n, m \geq N$. We can also choose $j_0$ such that $k_{j_0} \geq N$ and $d(x_{k_{j_0}}, x) < \frac{\varepsilon}{2}$. Then for any $n \geq N$, we have
+ \[
+ d(x_n, x) \leq d(x_n, x_{k_{j_0}}) + d(x, x_{k_{j_0}}) < \varepsilon.\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Complete metric space]
+ A metric space $(X, d)$ is complete if all Cauchy sequences converge to a point in $X$.
+\end{defi}
+
+\begin{eg}
+ Let $X = \R^n$ with the Euclidean metric. Then $X$ is complete.
+\end{eg}
+It is easy to produce incomplete metric spaces. Since arbitrary subsets of metric spaces are subspaces, we can just remove some random elements to make it incomplete.
+
+\begin{eg}
+ Let $X = (0, 1) \subseteq \R$ with the Euclidean metric. Then this is incomplete, since $\left(\frac{1}{k}\right)$ is Cauchy but has no limit in $X$.
+
+ Similarly, $X = \R \setminus \{0\}$ is incomplete. Note, however, that it is possible to construct a metric $d'$ on $X = \R \setminus \{0\}$ such that $d'$ induces the same topology on $X$, but makes $X$ complete. This shows that completeness is not a topological property. The actual construction is left as an exercise on the example sheet.
+\end{eg}
+
+\begin{eg}
+ We can create an easy example of an incomplete metric on $\R^n$. We start by defining $h: \R^n \to \R^n$ by
+ \[
+ h(x) = \frac{x}{1 + \|x\|},
+ \]
+ where $\|\ph\|$ is the Euclidean norm. We can check that this is injective: if $h(x) = h(y)$, taking the norm gives
+ \[
+ \frac{\|x\|}{1 + \|x\|} = \frac{\|y\|}{ 1 + \|y\|}.
+ \]
+ So we must have $\|x\| = \|y\|$. Then $h(x) = h(y)$ gives $\frac{x}{1 + \|x\|} = \frac{y}{1 + \|x\|}$, so $x = y$. Hence $h$ is injective.
+
+ Now we define
+ \[
+ d(x, y) = \|h(x) - h(y)\|.
+ \]
+ It is an easy check that this is a metric on $\R^n$.
+
+ In fact, we can show that $h$ maps $\R^n$ bijectively onto the open unit ball $B_1(0)$, and $h$ is a homeomorphism (i.e.\ continuous bijection with continuous inverse) between $\R^n$ and $B_1(0)$, both with the Euclidean metric.
+
+ To show that this metric is incomplete, we can consider the sequence $x_k = (k - 1) e_1$, where $e_1 = (1, 0, 0, \cdots, 0)$ is the usual basis vector. Then $(x_k)$ is Cauchy in $(\R^n, d)$. To show this, first note that
+ \[
+ h(x_k) = \left(1 - \frac{1}{k}\right) e_1.
+ \]
+ Hence we have
+ \[
+ d(x_n, x_m) = \|h(x_n) - h(x_m)\| = \left|\frac{1}{n} - \frac{1}{m}\right| \to 0.
+ \]
+ So it is Cauchy. To show it does not converge in $(\R^n, d)$, suppose $d(x_k, x) \to 0$ for some $x$. Then since
+ \[
+ d(x_k, x) = \|h(x_k) - h(x)\| \geq \big| \|h(x_k)\| - \|h(x)\|\big|,
+ \]
+ we must have
+ \[
+ \|h(x)\| = \lim_{k \to \infty} \|h(x_k)\| = 1.
+ \]
+ However, $\|h(x)\| = \frac{\|x\|}{1 + \|x\|} < 1$ for every $x \in \R^n$. This is a contradiction.
+\end{eg}
+What is happening in this example is that we are pulling the whole of $\R^n$ into the unit ball. Under this metric, a sequence that ``goes to infinity'' in the usual metric will be Cauchy, but there is nothing at infinity for it to converge to.
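We can illustrate this numerically in the case $n = 1$ (my own sketch, not part of the notes), where $h$ pulls $\R$ into $(-1, 1)$: the $d$-distances along the sequence $x_k = k - 1$ shrink, while $|h(x_k)| \to 1$ is never attained by any point of $\R$.

```python
def h(x):
    # pulls R into the open interval (-1, 1); the n = 1 case of x / (1 + ||x||)
    return x / (1.0 + abs(x))

def d(x, y):
    # the incomplete metric on R from the example
    return abs(h(x) - h(y))

xs = [k - 1 for k in range(1, 10001)]  # x_k = k - 1

# Cauchy in (R, d): d(x_n, x_m) = |1/n - 1/m|, which is tiny for large indices
gap = max(d(xs[n], xs[m]) for n in (5000, 7000) for m in (8000, 9999))

# but ||h(x_k)|| approaches 1, and no point of R has ||h(x)|| = 1
sup_h = max(abs(h(x)) for x in xs)
```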
+
+Suppose we have a complete metric space $(X, d)$. We know that we can form arbitrary subspaces by taking subsets of $X$. When will this be complete? Clearly it has to be closed, since it has to include all its limit points. It turns out that closedness is also a sufficient condition.
+\begin{thm}
+ Let $(X, d)$ be a metric space, $Y \subseteq X$ any subset. Then
+ \begin{enumerate}
+ \item If $(Y, d|_{Y\times Y})$ is complete, then $Y$ is closed in $X$.
+ \item If $(X, d)$ is complete, then $(Y, d|_{Y\times Y})$ is complete if and only if it is closed.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $x \in X$ be a limit point of $Y$. Then there is some sequence $x_k \to x$, where each $x_k \in Y$. Since $(x_k)$ is convergent, it is a Cauchy sequence. Hence it is Cauchy in $Y$. By completeness of $Y$, $(x_k)$ has to converge to some point in $Y$. By uniqueness of limits, this limit must be $x$. So $x \in Y$. So $Y$ contains all its limit points.
+ \item We have just showed that if $Y$ is complete, then it is closed. Now suppose $Y$ is closed. Let $(x_k)$ be a Cauchy sequence in $Y$. Then $(x_k)$ is Cauchy in $X$. Since $X$ is complete, $x_k\to x$ for some $x \in X$. Since $x$ is a limit point of $Y$, we must have $x \in Y$. So $x_k$ converges in $Y$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\subsection{Compactness}
+\begin{defi}[(Sequential) compactness]
+ A metric space $(X, d)$ is \emph{(sequentially) compact} if every sequence in $X$ has a convergent subsequence.
+
+ A subset $K\subseteq X$ is said to be compact if $(K, d|_{K\times K})$ is compact. In other words, $K$ is compact if every sequence in $K$ has a subsequence that converges to some point in $K$.
+\end{defi}
+Note that when we say every sequence has a convergent subsequence, we do not require it to be bounded. This is unlike the statement of the Bolzano-Weierstrass theorem. In particular, $\R$ is \emph{not} compact.
+
+It follows from definition that compactness is a topological property, since it is defined in terms of convergence, and convergence is defined in terms of open sets.
+
+The following theorem relates completeness with compactness.
+\begin{thm}
+ All compact spaces are complete and bounded.
+\end{thm}
+Note that $X$ is bounded iff $X \subseteq B_r(x_0)$ for some $r \in \R, x_0 \in X$ (or $X$ is empty).
+
+\begin{proof}
+ Let $(X, d)$ be a compact metric space. Let $(x_k)$ be Cauchy in $X$. By compactness, it has some convergent subsequence, say $x_{k_j} \to x$. By the proposition above, $x_k \to x$. So $X$ is complete.
+
+ If $(X, d)$ is not bounded, by definition, for any $x_0$, there is a sequence $(x_k)$ such that $d(x_k, x_0) > k$ for every $k$. But then $(x_k)$ cannot have a convergent subsequence. Otherwise, if $x_{k_j} \to x$, then
+ \[
+ d(x_{k_j}, x_0) \leq d(x_{k_j}, x) + d(x, x_0)
+ \]
+ and is bounded, which is a contradiction.
+\end{proof}
+This implies that if $(X, d)$ is a metric space and $E\subseteq X$, and $E$ is compact, then $E$ is bounded, i.e.\ $E \subseteq B_R(x_0)$ for some $x_0 \in X, R > 0$, and $E$ with the subspace metric is complete. Hence $E$ is closed as a subset of $X$.
+
+The converse is not true. For example, recall that in an infinite-dimensional Banach space, the closed unit sphere is complete and bounded, but not compact. Alternatively, we can take $X = \R$ with the metric $d(x, y) = \min\{1, |x - y|\}$. This is clearly bounded (by $1$), and it is easy to check that this is complete. However, this is not compact since the sequence $x_k = k$ has no convergent subsequence.
+
+However, we can strengthen the condition of boundedness to \emph{total} boundedness, and get the equivalence between ``completeness and total boundedness'' and compactness.
+
+\begin{defi}[Totally bounded*]
+ A metric space $(X, d)$ is said to be \emph{totally bounded} if for all $\varepsilon > 0$, there is an integer $N \in \N$ and points $x_1, \cdots, x_N \in X$ such that
+ \[
+ X = \bigcup_{i = 1}^N B_\varepsilon (x_i).
+ \]
+\end{defi}
+It is easy to check that being totally bounded implies being bounded. We then have the following strengthening of the previous theorem.
+\begin{thm}[non-examinable]
+ Let $(X, d)$ be a metric space. Then $X$ is compact if and only if $X$ is complete and totally bounded.
+\end{thm}
+
+\begin{proof}
+ $(\Leftarrow)$ Let $X$ be complete and totally bounded, and let $(y_i)$ be a sequence in $X$. For every $j \in \N$, there exists a finite set of points $E_j$ such that every point of $X$ is within $\frac{1}{j}$ of one of them.
+
+ Now since $E_1$ is finite, there is some $x_1 \in E_1$ such that there are infinitely many $y_i$'s in $B(x_1, 1)$. Pick the first $y_i$ in $B(x_1, 1)$ and call it $y_{i_1}$.
+
+ Now there is some $x_2 \in E_2$ such that there are infinitely many $y_i$'s in $B(x_1, 1) \cap B(x_2, \frac{1}{2})$. Pick the one with smallest value of $i > i_1$, and call this $y_{i_2}$. Continue this process indefinitely.
+
+ This procedure gives a sequence $x_i \in E_i$ and subsequence $(y_{i_k})$, and also
+ \[
+ y_{i_n} \in \bigcap_{j = 1}^n B\left(x_j, \frac{1}{j}\right).
+ \]
+ It is easy to see that $(y_{i_n})$ is Cauchy since if $m > n$, then $d(y_{i_m}, y_{i_n}) < \frac{2}{n}$. By completeness of $X$, this subsequence converges.
+
+ $(\Rightarrow)$ Compactness implying completeness is proved above. Suppose $X$ is not totally bounded. We show it is not compact by constructing a sequence with no Cauchy subsequence.
+
+ Suppose $\varepsilon$ is such that there is no finite set of points $x_1, \cdots, x_N$ with
+ \[
+ X = \bigcup_{i = 1}^N B_\varepsilon (x_i).
+ \]
+ We will construct our sequence iteratively.
+
+ Start by picking an arbitrary $y_1$. Pick $y_2$ such that $d(y_1, y_2) \geq \varepsilon$. This exists or else $B_\varepsilon(y_1)$ covers all of $X$.
+
+ Now given $y_1, \cdots, y_n$ such that $d(y_i, y_j) \geq \varepsilon$ for all $i, j = 1, \cdots, n$, $i \not= j$, we pick $y_{n + 1}$ such that $d(y_{n + 1}, y_j) \geq \varepsilon$ for all $j = 1, \cdots, n$. Again, this exists, or else $\bigcup_{i = 1}^n B_\varepsilon(y_i)$ covers $X$. Then clearly the sequence $(y_n)$ is not Cauchy. So done.
+\end{proof}
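The greedy construction in the second half of the proof can be sketched in code for a finite point cloud in $\R^2$ (an illustration of mine, under the obvious finite-sample caveat): keep picking points at distance at least $\varepsilon$ from all previous picks; since a finite cloud is totally bounded, the process stops, and the chosen points then form an $\varepsilon$-net.

```python
import math
import random

def euclid(p, q):
    return math.dist(p, q)

def greedy_net(points, eps):
    """Greedily pick centres pairwise >= eps apart, as in the proof:
    a point is skipped exactly when it is within eps of a chosen centre."""
    centres = []
    for p in points:
        if all(euclid(p, c) >= eps for c in centres):
            centres.append(p)
    return centres

random.seed(1)
cloud = [(random.random(), random.random()) for _ in range(500)]  # in [0,1]^2
net = greedy_net(cloud, 0.2)

# every point of the cloud lies in some eps-ball around a centre ...
covered = all(min(euclid(p, c) for c in net) < 0.2 for p in cloud)
# ... and the centres are pairwise at least eps apart, as in the proof
separated = all(euclid(a, b) >= 0.2
                for i, a in enumerate(net) for b in net[i + 1:])
```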
+In IID Linear Analysis, we will prove the Arzel\`a-Ascoli theorem, which characterizes the compact subsets of the space $C([a, b])$ in a very concrete way, and is in some sense a strengthening of this result.
+
+\subsection{Continuous functions}
+We are going to look at continuous mappings between metric spaces.
+\begin{defi}[Continuity]
+ Let $(X, d)$ and $(X', d')$ be metric spaces. A function $f: X \to X'$ is \emph{continuous at $y \in X$} if
+ \[
+ (\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)\; d(x, y) < \delta \Rightarrow d'(f(x), f(y)) < \varepsilon.
+ \]
+ This is true if and only if for every $\varepsilon > 0$, there is some $\delta > 0$ such that
+ \[
+ B_\delta(y) \subseteq f^{-1}(B_\varepsilon(f(y))).
+ \]
+ $f$ is \emph{continuous} if $f$ is continuous at each $y \in X$.
+\end{defi}
+
+\begin{defi}[Uniform continuity]
+ $f$ is \emph{uniformly continuous} on $X$ if
+ \[
+ (\forall \varepsilon > 0)(\exists \delta > 0)(\forall x, y \in X)\; d(x, y) < \delta \Rightarrow d'(f(x), f(y)) < \varepsilon.
+ \]
+ This is true if and only if for all $\varepsilon$, there is some $\delta$ such that for \emph{all} $y$, we have
+ \[
+ B_\delta(y) \subseteq f^{-1} (B_\varepsilon(f(y))).
+ \]
+\end{defi}
+
+\begin{defi}[Lipschitz function and Lipschitz constant]
+ $f$ is said to be \emph{Lipschitz} on $X$ if there is some $K \in [0, \infty)$ such that for all $x, y \in X$,
+ \[
+ d'(f(x), f(y)) \leq K d(x, y)
+ \]
+ \emph{Any} such $K$ is called a Lipschitz constant.
+\end{defi}
+It is easy to show
+\[
+ \text{Lipschitz} \Rightarrow \text{uniform continuity} \Rightarrow \text{continuity}.
+\]
+We have seen many examples that continuity does not imply uniform continuity. To show that uniform continuity does not imply Lipschitz, take $X = X' = \R$. We define the metrics as
+\[
+ d(x, y) = \min\{1, |x - y|\}, \quad d'(x, y) = |x - y|.
+\]
+Now consider the function $f: (X, d) \to (X', d')$ defined by $f(x) = x$. We can then check that this is uniformly continuous but not Lipschitz.
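A quick computation backs this up (a sketch, not part of the notes): for the identity map, the ratio $d'(f(x), f(y))/d(x, y)$ is unbounded, so no Lipschitz constant can work, while for $\varepsilon \leq 1$ the choice $\delta = \varepsilon$ witnesses uniform continuity, since $d(x, y) < \delta \leq 1$ forces $|x - y| = d(x, y) < \varepsilon$.

```python
def d(x, y):   # the bounded metric on the domain X
    return min(1.0, abs(x - y))

def d2(x, y):  # the usual metric on the codomain X'
    return abs(x - y)

f = lambda x: x  # the identity map, viewed as f: (X, d) -> (X', d2)

# No Lipschitz constant works: for x >= 1 we have d(x, 0) = 1 while
# d2(f(x), f(0)) = x, so the ratio grows without bound.
ratios = [d2(f(x), f(0)) / d(x, 0) for x in (1.0, 10.0, 100.0, 1000.0)]
```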
+
+Note that the statement that metrics $d$ and $d'$ are Lipschitz equivalent is equivalent to saying the two identity maps $i: (X, d) \to (X, d')$ and $i': (X, d') \to (X, d)$ are Lipschitz, hence the name.
+
+Note also that the metric \emph{itself} is also a Lipschitz map for any metric. Here we are viewing the metric as a function $d: X\times X \to \R$, with the metric on $X\times X$ defined as
+\[
+ \tilde{d}((x_1, y_1), (x_2, y_2)) = d(x_1, x_2) + d(y_1, y_2).
+\]
+This is a consequence of the triangle inequality, since
+\[
+ d(x_1, y_1) \leq d(x_1, x_2) + d(x_2, y_2) + d(y_1, y_2).
+\]
+Moving the middle term to the left gives
+\[
+ d(x_1, y_1) - d(x_2, y_2) \leq \tilde{d}((x_1, y_1), (x_2, y_2))
+\]
+By symmetry, we can swap the roles of $(x_1, y_1)$ and $(x_2, y_2)$ and put in the absolute value to obtain
+\[
+ |d(x_1, y_1) - d(x_2, y_2)| \leq \tilde{d}((x_1, y_1), (x_2, y_2))
+\]
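Numerically, we can spot-check the inequality just derived on random pairs of points of $\R$ with the standard metric (a sanity check of mine, not a proof).

```python
import itertools
import random

random.seed(2)

def d(x, y):           # the metric on X = R
    return abs(x - y)

def d_tilde(p, q):     # the product metric on X x X used above
    return d(p[0], q[0]) + d(p[1], q[1])

pairs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(40)]

# |d(x1, y1) - d(x2, y2)| <= d_tilde((x1, y1), (x2, y2)) on all sampled pairs
lipschitz_ok = all(
    abs(d(*p) - d(*q)) <= d_tilde(p, q) + 1e-12
    for p, q in itertools.product(pairs, repeat=2)
)
```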
+Recall that at the very beginning, we proved that a continuous map from a closed, bounded interval is automatically uniformly continuous. This is true whenever the domain is compact.
+\begin{thm}
+ Let $(X, d)$ be a compact metric space, and let $(X', d')$ be any metric space. If $f: X\to X'$ is continuous, then $f$ is uniformly continuous.
+\end{thm}
+
+This is exactly the same proof as what we had for the $[0, 1]$ case.
+\begin{proof}
+ We prove this by contradiction. Suppose $f: X \to X'$ is not uniformly continuous. Then there is some $\varepsilon > 0$ for which no $\delta$ works. In particular, for each $n$, taking $\delta = \frac{1}{n}$, there are some $x_n, y_n$ such that $d(x_n, y_n) < \frac{1}{n}$ but $d'(f(x_n), f(y_n)) > \varepsilon$.
+
+ By compactness of $X$, $(x_n)$ has a convergent subsequence $x_{n_i} \to x$. Since $d(x_{n_i}, y_{n_i}) < \frac{1}{n_i}$, we also have $y_{n_i}\to x$. So by continuity, we must have $f(x_{n_i}) \to f(x)$ and $f(y_{n_i}) \to f(x)$. But $d'(f(x_{n_i}), f(y_{n_i})) > \varepsilon$ for all $i$. This is a contradiction.
+\end{proof}
+
+In the proof, we have secretly used (part of) the following characterization of continuity:
+\begin{thm}
+ Let $(X, d)$ and $(X', d')$ be metric spaces, and $f: X\to X'$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $f$ is continuous at $y$.
+ \item $f(x_k) \to f(y)$ for every sequence $(x_k)$ in $X$ with $x_k \to y$.
+ \item For every neighbourhood $V$ of $f(y)$, there is a neighbourhood $U$ of $y$ such that $U\subseteq f^{-1}(V)$.
+ \end{enumerate}
+\end{thm}
+Note that the definition of continuity says something like (iii), but with open balls instead of open sets. So this should not be surprising.
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Leftrightarrow$ (ii): The argument for this is the same as for normed spaces.
+ \item (i) $\Rightarrow$ (iii): Let $V$ be a neighbourhood of $f(y)$. Then by definition there is $\varepsilon > 0$ such that $B_\varepsilon (f(y)) \subseteq V$. By continuity of $f$, there is some $\delta$ such that
+ \[
+ B_\delta(y) \subseteq f^{-1}(B_\varepsilon(f(y))) \subseteq f^{-1}(V).
+ \]
+ Set $U = B_\delta(y)$ and we are done.
+ \item (iii) $\Rightarrow$ (i): for any $\varepsilon$, use the hypothesis with $V = B_\varepsilon (f(y))$ to get a neighbourhood $U$ of $y$ such that
+ \[
+ U \subseteq f^{-1}(V) = f^{-1}(B_{\varepsilon}(f(y))).
+ \]
+ Since $U$ is open, there is some $\delta$ such that $B_\delta(y) \subseteq U$. So we get
+ \[
+ B_\delta(y) \subseteq f^{-1}(B_\varepsilon(f(y))).
+ \]
+ So we get continuity.\qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{cor}
+ A function $f: (X, d) \to (X', d')$ is continuous if and only if $f^{-1}(V)$ is open in $X$ whenever $V$ is open in $X'$.
+\end{cor}
+
+\begin{proof}
+ Follows directly from the equivalence of (i) and (iii) in the theorem above.
+\end{proof}
+
+\subsection{The contraction mapping theorem}
+If you have already taken IB Metric and Topological Spaces, then you were probably bored by the above sections, since you've already met them all. Finally, we get to something new. This section comprises just two theorems. The first is the contraction mapping theorem, and we will use it to prove the Picard-Lindel\"of existence theorem. Later, we will prove the inverse function theorem using the contraction mapping theorem. All of these are really powerful and important theorems in analysis. They have many more applications and useful corollaries, but we do not have time to get into those.
+
+\begin{defi}[Contraction mapping]
+ Let $(X, d)$ be a metric space. A mapping $f: X \to X$ is a \emph{contraction} if there exists some $\lambda$ with $0 \leq \lambda < 1$ such that
+ \[
+ d(f(x), f(y)) \leq \lambda d(x, y)
+ \]
+ for all $x, y \in X$.
+\end{defi}
+Note that a contraction mapping is by definition Lipschitz and hence (uniformly) continuous.
+
+\begin{thm}[Contraction mapping theorem]
+ Let $X$ be a (non-empty) complete metric space. If $f: X \to X$ is a contraction, then $f$ has a \emph{unique} fixed point, i.e.\ there is a unique $x$ such that $f(x) = x$.
+
+ Moreover, if $f: X\to X$ is a function such that $f^{(m)}: X\to X$ (i.e.\ $f$ composed with itself $m$ times) is a contraction for some $m$, then $f$ has a unique fixed point.
+\end{thm}
+We can see finding fixed points as the process of solving equations. One important application we will have is to use this to solve \emph{differential} equations.
+
+Note that the theorem is false if we drop the completeness assumption. For example, $f: (0, 1) \to (0, 1)$ defined by $f(x) = \frac{x}{2}$ is clearly a contraction with no fixed point. The theorem is also false if we drop the assumption $\lambda < 1$. In fact, it is not enough to assume $d(f(x), f(y)) < d(x, y)$ for all $x \not= y$. A counterexample is to be found on example sheet 3.
+
+\begin{proof}
+ We first focus on the case where $f$ itself is a contraction.
+
+ Uniqueness is straightforward. By assumption, there is some $0 \leq \lambda < 1$ such that
+ \[
+ d(f(x), f(y)) \leq \lambda d(x, y)
+ \]
+ for all $x, y \in X$. If $x$ and $y$ are both fixed points, then this says
+ \[
+ d(x, y) = d(f(x), f(y)) \leq \lambda d(x, y).
+ \]
+ This is possible only if $d(x, y) = 0$, i.e.\ $x = y$.
+
+ To prove existence, the idea is to pick a point $x_0$ and keep applying $f$. Let $x_0 \in X$. We define the sequence $(x_n)$ inductively by
+ \[
+ x_{n + 1} = f(x_n).
+ \]
+ We first show that this is Cauchy. For any $n \geq 1$, we can compute
+ \[
+ d(x_{n + 1}, x_n) = d(f(x_n), f(x_{n - 1})) \leq \lambda d(x_n, x_{n - 1}) \leq \cdots \leq \lambda^n d(x_1, x_0).
+ \]
+ Since this is true for any $n$, for $m > n$, we have
+ \begin{align*}
+ d(x_m, x_n) &\leq d(x_m, x_{m - 1}) + d(x_{m - 1}, x_{m - 2}) + \cdots + d(x_{n + 1}, x_n) \\
+ &= \sum_{j = n}^{m - 1} d(x_{j + 1}, x_j)\\
+ &\leq \sum_{j = n}^{m - 1} \lambda^j d(x_1, x_0)\\
+ &\leq d(x_1, x_0) \sum_{j = n}^\infty \lambda^j\\
+ &= \frac{\lambda^n}{1 - \lambda} d(x_1, x_0).
+ \end{align*}
+ Note that we have again used the property that $\lambda < 1$.
+
+ This implies $d(x_m, x_n) \to 0$ as $m, n \to \infty$. So this sequence is Cauchy. By the completeness of $X$, there exists some $x \in X$ such that $x_n \to x$. Since $f$ is a contraction, it is continuous. So $f(x_n) \to f(x)$. However, by definition $f(x_n) = x_{n + 1}$. So taking the limit on both sides, we get $f(x) = x$. So $x$ is a fixed point.
+
+ Now suppose that $f^{(m)}$ is a contraction for some $m$. Hence by the first part, there is a unique $x \in X$ such that $f^{(m)}(x) = x$. But then
+ \[
+ f^{(m)}(f(x)) = f^{(m + 1)}(x) = f(f^{(m)}(x)) = f(x).
+ \]
+ So $f(x)$ is also a fixed point of $f^{(m)}$. By uniqueness of fixed points, we must have $f(x) = x$. Since any fixed point of $f$ is clearly a fixed point of $f^{(m)}$ as well, it follows that $x$ is the unique fixed point of $f$.
+\end{proof}
+
+Based on the proof of the theorem, we have the following error estimate in the contraction mapping theorem: for $x_0 \in X$ and $x_n = f(x_{n - 1})$, we showed that for $m > n$, we have
+\[
+ d(x_m, x_n) \leq \frac{\lambda^n}{1 - \lambda}d(x_1, x_0).
+\]
+If $x_n \to x$, taking the limit of the above bound as $m \to \infty$ gives
+\[
+ d(x, x_n) \leq \frac{\lambda^n}{1 - \lambda} d(x_1, x_0).
+\]
+This is valid for all $n$.
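To see the iteration and this error estimate in action, here is a small numerical example of my own (not from the course): $f = \cos$ maps the complete space $[0, 1]$ into itself and is a contraction there with $\lambda = \sin 1 < 1$ (by the mean value theorem), so the iteration converges to the unique fixed point, and the a priori bound above holds at every step.

```python
import math

lam = math.sin(1.0)   # Lipschitz constant of cos on [0, 1]; lam ~ 0.8415 < 1
f = math.cos          # a contraction of the complete space [0, 1]

xs = [0.0]            # x_0 = 0, then x_{n+1} = f(x_n)
for _ in range(100):
    xs.append(f(xs[-1]))

fixed = xs[-1]                    # approximates the unique fixed point
residual = abs(f(fixed) - fixed)  # should be essentially zero

# a priori error estimate: d(x, x_n) <= lam^n / (1 - lam) * d(x_1, x_0)
bound_holds = all(
    abs(fixed - xs[n]) <= lam ** n / (1 - lam) * abs(xs[1] - xs[0]) + 1e-9
    for n in range(60)
)
```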
+
+We are now going to use this to obtain the Picard-Lindel\"of existence theorem for ordinary differential equations. The objective is as follows. Suppose we are given a function
+\[
+ \mathbf{F} = (F_1, F_2, \cdots, F_n): \R\times \R^n \to \R^n.
+\]
+We interpret the $\R$ as time and the $\R^n$ as space.
+
+Given $t_0 \in \R$ and $\mathbf{x}_0 \in \R^n$, we want to know when we can find a solution to the ODE
+\[
+ \frac{\d \mathbf{f}}{\d t} = \mathbf{F}(t, \mathbf{f}(t))
+\]
+subject to $\mathbf{f}(t_0) = \mathbf{x}_0$. We would like this solution to be valid (at least) for all $t$ in some interval $I$ containing $t_0$.
+
+More explicitly, we want to understand when there will be some $\varepsilon > 0$ and a differentiable function $\mathbf{f} = (f_1, \cdots, f_n): (t_0 - \varepsilon, t_0 + \varepsilon) \to \R^n$ (i.e.\ $f_j: (t_0 - \varepsilon, t_0 + \varepsilon) \to \R$ is differentiable for all $j$) satisfying
+\[
+ \frac{\d f_j}{\d t} = F_j(t, f_1(t), \cdots, f_n(t))
+\]
+for all $j = 1, \ldots, n$ and all $t \in (t_0 - \varepsilon, t_0 + \varepsilon)$, with $f_j(t_0) = x_0^{(j)}$ for each $j$.
+
+We can imagine this scenario as a particle moving in $\R^n$, passing through $\mathbf{x}_0$ at time $t_0$. We then ask if there is a trajectory $\mathbf{f}(t)$ such that the velocity of the particle at any time $t$ is given by $\mathbf{F}(t, \mathbf{f}(t))$.
+
+This is a complicated system, since it is a coupled system in many variables. Explicit solutions are usually impossible, but in certain cases, we can prove the existence of a solution. Of course, solutions need not exist for arbitrary $\mathbf{F}$. For example, there will be no solution if $\mathbf{F}$ is everywhere discontinuous, since any derivative is continuous on a dense set of points. The Picard-Lindel\"of existence theorem gives us sufficient conditions for a unique solution to exist.
+
+We will need the following notation:
+\begin{notation}
+ For $\mathbf{x}_0 \in \R^n$, $R > 0$, we let
+ \[
+ \overline{B_R(\mathbf{x}_0)} = \{\mathbf{x} \in \R^n: \|\mathbf{x} - \mathbf{x}_0\|_2 \leq R\}.
+ \]
+\end{notation}
+Then the theorem says
+\begin{thm}[Picard-Lindel\"of existence theorem]
+ Let $\mathbf{x}_0 \in \R^n$, $R > 0$, $a < b$, $t_0 \in [a, b]$. Let $\mathbf{F}: [a, b] \times \overline{B_R(\mathbf{x}_0)} \to \R^n$ be a continuous function satisfying
+ \[
+ \|\mathbf{F}(t, \mathbf{x}) - \mathbf{F}(t, \mathbf{y})\|_2 \leq \kappa\|\mathbf{x} - \mathbf{y}\|_2
+ \]
+ for some fixed $\kappa > 0$ and all $t \in [a, b]$, $\mathbf{x}, \mathbf{y} \in \overline{B_R(\mathbf{x}_0)}$. In other words, $\mathbf{F}(t, \ph): \overline{B_R(\mathbf{x}_0)} \to \R^n$ is Lipschitz on $\overline{B_R(\mathbf{x}_0)}$ with the same Lipschitz constant for every $t$. Then
+ \begin{enumerate}
+ \item There exists an $\varepsilon > 0$ and a unique differentiable function $\mathbf{f}: [t_0 - \varepsilon, t_0 + \varepsilon] \cap [a, b] \to \R^n$ such that
+ \[
+ \frac{\d \mathbf{f}}{\d t} = \mathbf{F}(t, \mathbf{f}(t))\tag{$*$}
+ \]
+ and $\mathbf{f}(t_0) = \mathbf{x}_0$.
+ \item If
+ \[
+ \sup_{[a, b] \times \overline{B_R(\mathbf{x}_0)}} \|\mathbf{F}\|_2 \leq \frac{R}{b - a},
+ \]
+ then there exists a unique differentiable function $\mathbf{f}: [a, b] \to \R^n$ that satisfies the differential equation and boundary conditions above.
+ \end{enumerate}
+\end{thm}
+The case $n = 1$ is already important, special and non-trivial. Even in one dimension, explicit solutions may be very difficult to find, if not impossible. For example,
+\[
+ \frac{\d f}{\d t} = f^2 + \sin f + e^f
+\]
+would be almost impossible to solve. However, the theorem tells us there will be a solution, at least locally.
+
+Note that any differentiable $f$ satisfying the differential equation is automatically continuously differentiable, since the derivative is $\mathbf{F}(t, \mathbf{f}(t))$, which is continuous.
+
+Before we prove the theorem, we first show the requirements are indeed necessary. We first look at the $\varepsilon$ in (i). Without the additional requirement in (ii), there might not exist a solution globally on $[a, b]$. For example, we can consider the $n = 1$ case, where we want to solve
+\[
+ \frac{\d f}{\d t} = f^2,
+\]
+with boundary condition $f(0) = 1$. Our $F(t, f) = f^2$ is a nice, uniformly Lipschitz function on any $[0, b] \times \overline{B_R(1)} = [0, b] \times [1 - R, 1 + R]$. However, we will shortly see that there is no global solution.
+
+If we assume $f \not= 0$, then for all $t \in [0, b]$, the equation is equivalent to
+\[
+ \frac{\d}{\d t}(t + f^{-1}) = 0.
+\]
+So we need $t + f^{-1}$ to be constant. The initial condition tells us this constant is $1$. So we have
+\[
+ f(t) = \frac{1}{1 - t}.
+\]
+Hence the solution on $[0, 1)$ is $\frac{1}{1 - t}$. Any solution on $[0, b]$ must agree with this on $[0, 1)$. So if $b \geq 1$, then there is no solution on $[0, b]$.
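A numerical experiment (explicit Euler, my own illustration rather than anything from the course) agrees with this: on $[0, 0.9]$ the computed solution tracks $\frac{1}{1 - t}$, which already equals $10$ at $t = 0.9$ and blows up as $t \to 1$.

```python
def F(t, x):
    # right-hand side of the ODE f' = f^2
    return x * x

# explicit Euler on [0, 0.9]; the exact solution is f(t) = 1/(1 - t)
h = 1e-5
t, y = 0.0, 1.0              # initial condition f(0) = 1
steps = int(round(0.9 / h))
for _ in range(steps):
    y += h * F(t, y)
    t += h

exact = 1.0 / (1.0 - 0.9)    # = 10; the true solution has no limit as t -> 1
err = abs(y - exact)
```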
+
+The Lipschitz condition is also necessary to guarantee uniqueness. Without this condition, existence of a solution is still guaranteed (this is another theorem, the Cauchy-Peano theorem), but we could have many different solutions. For example, we can consider the differential equation
+\[
+ \frac{\d f}{\d t} = \sqrt{|f|}
+\]
+with $f(0) = 0$. Here $F(t, x) = \sqrt{|x|}$ is not Lipschitz near $x = 0$. It is easy to see that $f = 0$ and $f(t) = \frac{1}{4}t^2$ are both solutions. In fact, for any $\alpha \in [0, b]$, the function
+\[
+ f_\alpha(t) =
+ \begin{cases}
+ 0 & 0 \leq t \leq \alpha\\
+ \frac{1}{4}(t - \alpha)^2 & \alpha \leq t \leq b
+ \end{cases}
+\]
+is also a solution. So we have an infinite number of solutions.
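As a quick numerical sanity check (my own illustration, assuming $b = 2$), we can verify that each $f_\alpha$ satisfies $f' = \sqrt{|f|}$ away from the kink at $t = \alpha$, by comparing a central difference quotient with $\sqrt{|f_\alpha|}$:

```python
# Each f_alpha below solves f' = sqrt(|f|), f(0) = 0, so the solution is far
# from unique without a Lipschitz condition.
import math

def f(alpha, t):
    return 0.0 if t <= alpha else 0.25 * (t - alpha) ** 2

def max_residual(alpha, b=2.0, n=1000):
    # worst gap between a central difference of f_alpha and sqrt(|f_alpha|)
    h = 1e-6
    worst = 0.0
    for k in range(1, n):
        t = b * k / n
        if abs(t - alpha) < 0.01:    # skip points too close to the kink
            continue
        deriv = (f(alpha, t + h) - f(alpha, t - h)) / (2 * h)
        worst = max(worst, abs(deriv - math.sqrt(abs(f(alpha, t)))))
    return worst

print([max_residual(a) for a in (0.0, 0.5, 1.0)])   # all tiny
```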
+
+We are now going to use the contraction mapping theorem to prove this. In general, this is a very useful idea. It is in fact possible to use other fixed point theorems to show the existence of solutions to partial differential equations. This is \emph{much} more difficult, but has many far-reaching important applications to theoretical physics and geometry, say. For these, see Part III courses.
+
+\begin{proof}
+ First, note that (ii) implies (i). We know that
+ \[
+ \sup_{[a, b]\times \overline{B_R(\mathbf{x}_0)}}\|\mathbf{F}\|
+ \]
+ is bounded since it is a continuous function on a compact domain. So we can pick an $\varepsilon$ such that
+ \[
+ 2\varepsilon \leq \frac{R}{\sup_{[a, b]\times \overline{B_R(\mathbf{x}_0)}}\|\mathbf{F}\|}.
+ \]
+ Then writing $[t_0 - \varepsilon, t_0 + \varepsilon] \cap [a, b] = [a_1, b_1]$, we have
+ \[
+ \sup_{[a_1, b_1]\times \overline{B_R(\mathbf{x}_0)}}\|\mathbf{F}\| \leq \sup_{[a, b]\times \overline{B_R(\mathbf{x}_0)}}\|\mathbf{F}\| \leq \frac{R}{2\varepsilon} \leq \frac{R}{b_1 - a_1}.
+ \]
+ So (ii) implies there is a solution on $[t_0 - \varepsilon, t_0 + \varepsilon] \cap [a, b]$. Hence it suffices to prove (ii).
+
+ To apply the contraction mapping theorem, we need to convert this into a fixed point problem. The key is to reformulate the problem as an integral equation. We know that a differentiable $\mathbf{f}: [a, b] \to \R^n$ satisfies the differential equation $(*)$ if and only if $\mathbf{f}: [a, b] \to \overline{B_R(\mathbf{x}_0)}$ is continuous and satisfies
+ \[
+ \mathbf{f}(t) = \mathbf{x}_0 + \int_{t_0}^t \mathbf{F}(s, \mathbf{f}(s))\;\d s
+ \]
+ by the fundamental theorem of calculus. Note that we don't require $\mathbf{f}$ to be differentiable, since if a continuous $\mathbf{f}$ satisfies this equation, it is automatically differentiable by the fundamental theorem of calculus. This is very helpful, since we can now work in the much larger space of continuous functions, where it is easier to find a solution.
+
+ We let $X = C([a, b], \overline{B_R(\mathbf{x}_0)})$. We equip $X$ with the supremum metric
+ \[
+ \|\mathbf{g} - \mathbf{h}\| = \sup_{t \in [a, b]} \|\mathbf{g}(t) - \mathbf{h}(t)\|_2.
+ \]
+ We see that $X$ is a closed subset of the complete metric space $C([a, b] , \R^n)$ (again taken with the supremum metric). So $X$ is complete. For every $\mathbf{g} \in X$, we define a function $T\mathbf{g}: [a, b] \to \R^n$ by
+ \[
+ (T\mathbf{g}) (t) = \mathbf{x}_0 + \int_{t_0}^t \mathbf{F}(s, \mathbf{g}(s))\;\d s.
+ \]
+ Our differential equation is thus
+ \[
+ \mathbf{f} = T\mathbf{f}.
+ \]
+ So we first want to show that $T$ is actually mapping $X \to X$, i.e.\ $T\mathbf{g} \in X$ whenever $\mathbf{g} \in X$, and then prove it is a contraction map.
+
+ We have
+ \begin{align*}
+ \|T\mathbf{g}(t) - \mathbf{x}_0\|_2 &= \left\|\int_{t_0}^t \mathbf{F}(s, \mathbf{g}(s))\;\d s\right\|_2\\
+ &\leq \left|\int_{t_0}^t \|\mathbf{F} (s, \mathbf{g}(s))\|_2\;\d s\right|\\
+ &\leq \sup_{[a, b] \times \overline{B_R(\mathbf{x}_0)}} \|\mathbf{F}\| \cdot |b - a|\\
+ &\leq R.
+ \end{align*}
+ Hence we know that $T\mathbf{g}(t) \in \overline{B}_R(\mathbf{x}_0)$. So $T \mathbf{g} \in X$.
+
+ Next, we need to show this is a contraction. However, it turns out $T$ need not be a contraction. Instead, what we have is that for $\mathbf{g}_1, \mathbf{g}_2 \in X$, we have
+ \begin{align*}
+ \|T \mathbf{g}_1(t) - T\mathbf{g}_2(t)\|_2 &= \left\|\int_{t_0}^t \mathbf{F}(s, \mathbf{g}_1(s)) - \mathbf{F}(s, \mathbf{g}_2(s))\;\d s\right\|_2\\
+ &\leq \left|\int_{t_0}^t \|\mathbf{F}(s, \mathbf{g}_1(s)) - \mathbf{F}(s, \mathbf{g}_2(s))\|_2 \;\d s \right|\\
+ &\leq \kappa (b - a) \|\mathbf{g}_1 - \mathbf{g}_2\|_\infty
+ \end{align*}
+ by the Lipschitz condition on $\mathbf{F}$. If we indeed have
+ \[
+ \kappa (b - a) < 1 \tag{$\dagger$},
+ \]
+ then the contraction mapping theorem gives an $f \in X$ such that
+ \[
+ T\mathbf{f} = \mathbf{f},
+ \]
+ i.e.
+ \[
+ \mathbf{f} = \mathbf{x}_0 + \int_{t_0}^t \mathbf{F}(s, \mathbf{f}(s))\;\d s.
+ \]
+ However, we do not necessarily have $(\dagger)$. There are many ways we can solve this problem. Here, we can solve it by finding an $m$ such that $T^{(m)} = T\circ T \circ \cdots \circ T: X \to X$ is a contraction map. We will in fact show that this map satisfies the bound
+ \[
+ \sup_{t \in [a, b]} \|T^{(m)}\mathbf{g}_1(t) - T^{(m)}\mathbf{g}_2(t)\| \leq \frac{(b - a)^m \kappa^m}{m!} \sup_{t \in [a, b]} \|\mathbf{g}_1(t) - \mathbf{g}_2(t)\|. \tag{$\ddag$}
+ \]
+ The key is the $m!$, since this grows much faster than any exponential. Given this bound, we know that for sufficiently large $m$, we have
+ \[
+ \frac{(b - a)^m \kappa^m}{m!} < 1,
+ \]
+ i.e.\ $T^{(m)}$ is a contraction. So by the contraction mapping theorem, the result holds.
+
+ So it only remains to prove the bound. To prove this, we prove instead the pointwise bound: for any $t \in [a, b]$, we have
+ \[
+ \|T^{(m)}\mathbf{g}_1(t) - T^{(m)}\mathbf{g}_2(t)\|_2 \leq \frac{(|t - t_0|)^m \kappa^m}{m!} \sup_{s \in [t_0, t]}\|\mathbf{g}_1(s) - \mathbf{g}_2(s)\|.
+ \]
+ From this, taking the supremum on the left, we obtain the bound $(\ddag)$.
+
+ To prove this pointwise bound, we induct on $m$. We wlog assume $t > t_0$. We know that for every $m$, the difference is given by
+ \begin{align*}
+ \|T^{(m)}\mathbf{g}_1(t) - T^{(m)}\mathbf{g}_2(t)\|_2 &= \left\|\int_{t_0}^t \mathbf{F}(s, T^{(m - 1)}\mathbf{g}_1(s)) - \mathbf{F}(s, T^{(m - 1)} \mathbf{g}_2(s))\;\d s\right\|_2\\
+ &\leq \kappa\int_{t_0}^t \|T^{(m - 1)} \mathbf{g}_1(s) - T^{(m - 1)} \mathbf{g}_2(s)\|_2 \;\d s.
+ \end{align*}
+ This is true for all $m$. If $m = 1$, then this gives
+ \[
+ \|T \mathbf{g}_1(t) - T \mathbf{g}_2(t)\|_2 \leq \kappa(t - t_0) \sup_{[t_0, t]} \|\mathbf{g}_1 - \mathbf{g}_2\|_2.
+ \]
+ So the base case is done.
+
+ For $m \geq 2$, assume by induction the bound holds with $m - 1$ in place of $m$. Then the bounds give
+ \begin{align*}
+ \| T^{(m)} \mathbf{g}_1(t) - T^{(m)} \mathbf{g}_2(t)\| &\leq \kappa \int_{t_0}^t \frac{\kappa^{m - 1}(s - t_0)^{m - 1}}{(m - 1)!} \sup_{[t_0, s]} \|\mathbf{g}_1 - \mathbf{g}_2\|_2 \;\d s\\
+ &\leq \frac{\kappa^m}{(m - 1)!} \sup_{[t_0, t]} \|\mathbf{g}_1 - \mathbf{g}_2\|_2 \int_{t_0}^t (s - t_0)^{m - 1} \;\d s\\
+ &= \frac{\kappa^m (t - t_0)^m}{m!} \sup_{[t_0, t]} \|\mathbf{g}_1 - \mathbf{g}_2\|_2.
+ \end{align*}
+ So done.
+\end{proof}
+Note that to get the factor of $m!$, we had to actually perform the integral, instead of just bounding $(s - t_0)^{m -1 }$ by $(t - t_0)$. In general, this is a good strategy if we want tight bounds. Instead of bounding
+\[
+ \left|\int_a^b f(x) \;\d x\right| \leq (b - a) \sup |f(x)|,
+\]
+we write $f(x) = g(x) h(x)$, where $h(x)$ is something easily integrable. Then we can have a bound
+\[
+ \left|\int_a^b f(x) \;\d x\right| \leq \sup |g(x)| \int_a^b |h(x)| \;\d x.
+\]
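To make the Picard iteration concrete, here is a small numerical sketch (my own illustration, not from the notes) for the toy problem $f' = f$, $f(0) = 1$ on $[0, 1]$. Here $\kappa(b - a) = 1$, so $T$ itself need not be a contraction, yet the iterates converge to $\exp$, as the $m!$-bound in the proof predicts.

```python
# Picard iteration (T g)(t) = 1 + \int_0^t g(s) ds for f' = f, f(0) = 1,
# discretised on a grid over [0, 1] with the trapezoid rule.
import math

N = 2000                        # grid points on [0, 1]
ts = [k / N for k in range(N + 1)]

def T(g):
    # trapezoid-rule approximation of 1 + \int_0^t g(s) ds on the grid
    out, acc = [1.0], 0.0
    for k in range(1, N + 1):
        acc += (g[k - 1] + g[k]) / (2 * N)
        out.append(1.0 + acc)
    return out

g = [1.0] * (N + 1)             # start from the constant function 1
errs = []
for _ in range(10):
    g = T(g)
    errs.append(max(abs(g[k] - math.exp(ts[k])) for k in range(N + 1)))
print(errs)                     # sup-distance to exp shrinks like 1/m!
```

The grid size and number of iterations are arbitrary choices; starting from a different $g_0$ gives the same limit.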
+
+\section{Differentiation from \tph{$\R^m$}{Rm}{&\#x211D;m} to \tph{$\R^n$}{Rn}{&\#x211D;n}}
+\subsection{Differentiation from \tph{$\R^m$}{Rm}{&\#x211D;m} to \tph{$\R^n$}{Rn}{&\#x211D;n}}
+We are now going to investigate differentiation of functions $f: \R^n \to \R^m$. The hard part is to first come up with a sensible definition of what this means. There is no obvious way to generalize what we had for real functions. After defining it, we will need to do some hard work to come up with easy ways to check if functions are differentiable. Then we can use it to prove some useful results like the mean value inequality. We will always use the usual Euclidean norm.
+
+
+To define differentiation in $\R^n$, we first need a definition of the limit.
+
+\begin{defi}[Limit of function]
+ Let $E \subseteq \R^n$ and $f: E \to \R^m$. Let $\mathbf{a}\in \R^n$ be a limit point of $E$, and let $\mathbf{b} \in \R^m$. We say
+ \[
+ \lim_{\mathbf{x} \to \mathbf{a}} f(\mathbf{x}) = \mathbf{b}
+ \]
+ if for every $\varepsilon > 0$, there is some $\delta > 0$ such that
+ \[
+ (\forall \mathbf{x} \in E)\; 0 < \|\mathbf{x} - \mathbf{a}\| < \delta \Rightarrow \|f(\mathbf{x}) - \mathbf{b}\| < \varepsilon.
+ \]
+\end{defi}
+As in the case of $\R$ in IA Analysis I, we do not impose any requirements on $f$ when $\mathbf{x} = \mathbf{a}$. In particular, we don't assume that $\mathbf{a}$ is in the domain $E$.
+
+We would like a definition of differentiation for functions $f: \R^n \to \R$ (or more generally $\mathbf{f}: \R^n \to \R^m$) that directly extends the familiar definition on the real line. Recall that if $f: (b, c) \to \R$ and $a \in (b, c)$, we say $f$ is \emph{differentiable} if the limit
+\[
+ Df(a) = f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}\tag{$*$}
+\]
+exists (as a real number). This cannot be extended to higher dimensions directly, since $\mathbf{h}$ would become a vector in $\R^n$, and it is not clear what we mean by dividing by a vector. We might try dividing by $\|\mathbf{h}\|$ instead, i.e.\ require that
+\[
+ \lim_{\mathbf{h}\to \mathbf{0}}\frac{f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a})}{\|\mathbf{h}\|}
+\]
+exists. However, this is clearly wrong, since in the case of $n = 1$, this reduces to the existence of the limit
+\[
+ \frac{f(a + h) - f(a)}{|h|},
+\]
+which almost never exists, e.g.\ when $f(x) = x$. It is also possible that this exists while the genuine derivative does not, e.g.\ when $f(x) = |x|$, at $x = 0$. So this is clearly wrong.
+
+Now we are a bit stuck. We need to divide by something, and that thing had better be a scalar. $\|\mathbf{h}\|$ is not exactly what we want. What should we do? The idea is to move $f'(a)$ to the other side of the equation, so that $(*)$ becomes
+\[
+ \lim_{h \to 0} \frac{f(a + h) - f(a) - f'(a) h}{h} = 0.
+\]
+Now if we replace $h$ by $|h|$, nothing changes. So this is equivalent to
+\[
+ \lim_{h \to 0} \frac{f(a + h) - f(a) - f'(a) h}{|h|} = 0.
+\]
+In other words, the function $f$ is differentiable if there is some $A$ such that
+\[
+ \lim_{h \to 0} \frac{f(a + h) - f(a) - A h}{|h|} = 0,
+\]
+and we call $A$ the derivative.
+
+We are now in a good shape to generalize. Note that if $f: \R^n \to \R$ is a real-valued function, then $f(a + h) - f(a)$ is a scalar, but $h$ is a vector. So $A$ is not just a number, but a (row) vector. In general, if our function $\mathbf{f}: \R^n \to \R^m$ is vector-valued, then our $A$ should be an $m \times n$ matrix. Alternatively, $A$ is a linear map from $\R^n$ to $\R^m$.
+
+\begin{defi}[Differentiation in $\R^n$]
+ Let $U\subseteq \R^n$ be open, $\mathbf{f}: \R^n \to \R^m$. We say $\mathbf{f}$ is differentiable at a point $\mathbf{a} \in U$ if there exists a linear map $A: \R^n \to \R^m$ such that
+ \[
+ \lim_{\mathbf{h} \to \mathbf{0}} \frac{\mathbf{f}(\mathbf{a} + \mathbf{h}) - \mathbf{f}(\mathbf{a}) - A \mathbf{h}}{\|\mathbf{h}\|} = 0.
+ \]
+ We call $A$ the \emph{derivative of $\mathbf{f}$ at $\mathbf{a}$}. We write the derivative as $D \mathbf{f}(\mathbf{a})$.
+\end{defi}
+This is equivalent to saying
+\[
+ \lim_{\mathbf{x} \to \mathbf{a}} \frac{\mathbf{f}(\mathbf{x}) - \mathbf{f}(\mathbf{a}) - A (\mathbf{x} - \mathbf{a})}{\|\mathbf{x} - \mathbf{a}\|} = 0.
+\]
+Note that this is completely consistent with our usual definition in the case $n = m = 1$, as we discussed above, since a linear transformation $\alpha: \R \to \R$ is just given by $\alpha(h) = Ah$ for some real $A \in \R$.
+
+One might instead attempt to define differentiability as follows: for any $f: \R^m \to \R$, we say $f$ is differentiable at $x$ if $f$ is differentiable when restricted to any line passing through $x$. However, this is a weaker notion, and we will later see that if we define differentiability this way, then differentiability will no longer imply continuity, which is bad.
+
+Having defined differentiation, we want to show that the derivative is unique.
+\begin{prop}[Uniqueness of derivative]
+ Derivatives are unique.
+\end{prop}
+
+\begin{proof}
+ Suppose $A, B: \R^n \to \R^m$ both satisfy the condition
+ \begin{align*}
+ \lim_{\mathbf{h} \to \mathbf{0}} \frac{\mathbf{f}(\mathbf{a} + \mathbf{h}) - \mathbf{f}(\mathbf{a}) - A \mathbf{h}}{\|\mathbf{h}\|} &= \mathbf{0}\\
+ \lim_{\mathbf{h} \to \mathbf{0}} \frac{\mathbf{f}(\mathbf{a} + \mathbf{h}) - \mathbf{f}(\mathbf{a}) - B \mathbf{h}}{\|\mathbf{h}\|} &= \mathbf{0}.
+ \end{align*}
+ By the triangle inequality, we get
+ \[
+ \|(B - A) \mathbf{h}\| \leq \|\mathbf{f}(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) - A\mathbf{h}\| + \|f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) - B\mathbf{h}\|.
+ \]
+ So
+ \[
+ \frac{\|(B - A)\mathbf{h}\|}{\|\mathbf{h}\|} \to 0
+ \]
+ as $\mathbf{h} \to \mathbf{0}$. Now fix any $\mathbf{u} \not= \mathbf{0}$, and set $\mathbf{h} = t\mathbf{u}$ to get
+ \[
+ \frac{\|(B - A) t\mathbf{u}\|}{\|t\mathbf{u}\|} \to 0
+ \]
+ as $t \to 0$. Since $(B - A)$ is linear, we know
+ \[
+ \frac{\|(B - A) t\mathbf{u}\|}{\|t\mathbf{u}\|} = \frac{\|(B - A) \mathbf{u}\|}{\|\mathbf{u}\|}.
+ \]
+ So $(B - A) \mathbf{u} = \mathbf{0}$ for all $\mathbf{u} \in \R^n$. So $B = A$.
+\end{proof}
+
+\begin{notation}
+ We write $L(\R^n; \R^m)$ for the space of linear maps $A: \R^n \to \R^m$.
+\end{notation}
+So $D\mathbf{f}(\mathbf{a}) \in L(\R^n; \R^m)$.
+
+To avoid having to write limits and divisions all over the place, we have the following convenient notation:
+\begin{notation}[Little $o$ notation]
+ For any function $\alpha: B_r(0) \subseteq \R^n \to \R^m$, write
+ \[
+ \alpha(\mathbf{h}) = o(\mathbf{h})
+ \]
+ if
+ \[
+ \frac{\alpha(\mathbf{h})}{\|\mathbf{h}\|} \to 0\text{ as }\mathbf{h}\to \mathbf{0}.
+ \]
+ In other words, $\alpha\to \mathbf{0}$ faster than $\|\mathbf{h}\|$ as $\mathbf{h}\to \mathbf{0}$.
+
+ Note that officially, $\alpha(\mathbf{h}) = o(\mathbf{h})$ as a whole is a piece of notation, and does not represent equality.
+\end{notation}
+Then the condition for differentiability can be written as: $f: U \to \R^m$ is differentiable at $\mathbf{a} \in U$ if there is some $A$ with
+\[
+ f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) - A\mathbf{h} = o(\mathbf{h}).
+\]
+Alternatively,
+\[
+ f(\mathbf{a} + \mathbf{h}) = f(\mathbf{a}) + A\mathbf{h} + o(\mathbf{h}).
+\]
+Note that we require the domain $U$ of $\mathbf{f}$ to be open, so that for each $\mathbf{a} \in U$, there is a small ball around $\mathbf{a}$ on which $\mathbf{f}$ is defined, and hence $\mathbf{f}(\mathbf{a} + \mathbf{h})$ is defined for all sufficiently small $\mathbf{h}$. We could relax this condition and consider ``one-sided'' derivatives instead, but we will not look into these in this course.
+
+We can interpret the definition of differentiability as saying we can find a ``good'' linear approximation (technically, it is affine, not linear) to the function $\mathbf{f}$ near $\mathbf{a}$.
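Here is a concrete numerical check of the definition (my own example, not from the notes): for $f(x, y) = (x^2 + y, xy)$ the candidate derivative at $\mathbf{a} = (1, 2)$ is the matrix with rows $(2, 1)$ and $(2, 1)$, and the remainder divided by $\|\mathbf{h}\|$ shrinks as $\mathbf{h} \to \mathbf{0}$.

```python
# Check that f(a + h) = f(a) + A h + o(h) for f(x, y) = (x^2 + y, x y) at a = (1, 2).
import math

def f(x, y):
    return (x * x + y, x * y)

a = (1.0, 2.0)
A = ((2.0, 1.0), (2.0, 1.0))    # rows: gradients of x^2 + y and of x y at (1, 2)

def remainder_ratio(h1, h2):
    # || f(a + h) - f(a) - A h || / ||h||, which should tend to 0
    f1, f2 = f(a[0] + h1, a[1] + h2)
    g1, g2 = f(*a)
    r1 = f1 - g1 - (A[0][0] * h1 + A[0][1] * h2)
    r2 = f2 - g2 - (A[1][0] * h1 + A[1][1] * h2)
    return math.hypot(r1, r2) / math.hypot(h1, h2)

ratios = [remainder_ratio(10.0 ** -k, 2 * 10.0 ** -k) for k in range(1, 6)]
print(ratios)                   # roughly 0.1, 0.01, 0.001, ...
```

For this particular $f$ the remainder is $(h_1^2, h_1 h_2)$, so the ratio shrinks linearly in $\|\mathbf{h}\|$.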
+
+While the definition of the derivative is good, it is purely existential. This is unlike the definition of differentiability of real functions, where we are asked to compute an explicit limit --- if the limit exists, that's the derivative. If not, it is not differentiable. In the higher-dimensional world, this is not the case. We have completely no idea where to find the derivative, even if we know it exists. So we would like an explicit formula for it.
+
+The idea is to look at specific ``directions'' instead of finding the general derivative. As always, let $\mathbf{f}: U \to \R^m$ be differentiable at $\mathbf{a} \in U$. Fix some $\mathbf{u} \in \R^n$, take $\mathbf{h} = t \mathbf{u}$ (with $t \in \R$). Assuming $\mathbf{u} \not= \mathbf{0}$, differentiability tells
+\[
+ \lim_{t \to 0} \frac{\mathbf{f}(\mathbf{a} + t\mathbf{u}) - \mathbf{f}(\mathbf{a}) - D \mathbf{f}(\mathbf{a}) (t\mathbf{u})}{\|t \mathbf{u}\|} = 0.
+\]
+This is equivalent to saying
+\[
+ \lim_{t \to 0} \frac{\mathbf{f}(\mathbf{a} + t\mathbf{u}) - \mathbf{f}(\mathbf{a}) - t D \mathbf{f}(\mathbf{a}) \mathbf{u}}{|t|\|\mathbf{u}\|} = 0.
+\]
+Since $\|\mathbf{u}\|$ is fixed, this in turn is equivalent to
+\[
+ \lim_{t \to 0} \frac{\mathbf{f}(\mathbf{a} + t\mathbf{u}) - \mathbf{f}(\mathbf{a}) - t D \mathbf{f}(\mathbf{a}) \mathbf{u}}{t} = 0.
+\]
+This, finally, tells us that
+\[
+ D\mathbf{f}(\mathbf{a}) \mathbf{u} = \lim_{t \to 0} \frac{\mathbf{f}(\mathbf{a} + t\mathbf{u}) - \mathbf{f}(\mathbf{a})}{t}.
+\]
+We derived this assuming $\mathbf{u} \not= \mathbf{0}$, but it is trivially true for $\mathbf{u} = \mathbf{0}$. So this is valid for all $\mathbf{u}$.
+
+This is of the same form as the usual derivative, and it is usually not too difficult to compute this limit. Note, however, that this only says that \emph{if} the derivative exists, then the limit exists and is given by the formula above. Even if the limit exists for all $\mathbf{u}$, we still cannot conclude that the derivative exists.
+
+Regardless, even if the derivative does not exist, this limit is still often a useful notion.
+
+\begin{defi}[Directional derivative]
+ We write
+ \[
+ D_{\mathbf{u}} \mathbf{f}(\mathbf{a}) = \lim_{t\to 0} \frac{\mathbf{f}(\mathbf{a} + t\mathbf{u}) - \mathbf{f}(\mathbf{a})}{t}
+ \]
+ whenever this limit exists. We call $D_{\mathbf{u}} \mathbf{f} (\mathbf{a})$ the \emph{directional derivative} of $\mathbf{f}$ at $\mathbf{a} \in U$ in the direction of $\mathbf{u} \in \R^n$.
+
+ By definition, we have
+ \[
+ D_{\mathbf{u}} \mathbf{f}(\mathbf{a}) = \left.\frac{\d}{\d t}\right|_{t = 0} \mathbf{f}(\mathbf{a} + t\mathbf{u}).
+ \]
+\end{defi}
+
+Often, it is convenient to focus on the special cases where $\mathbf{u} = \mathbf{e}_j$, a member of the standard basis for $\R^n$. This is known as the \emph{partial derivative}. By convention, this is defined for real-valued functions only, but the same definition works for any $\R^m$-valued function.
+
+\begin{defi}[Partial derivative]
+ The \emph{$j$th partial derivative} of $f: U \to \R$ at $\mathbf{a} \in U$ is
+ \[
+ D_{\mathbf{e}_j} f(\mathbf{a}) = \lim_{t \to 0} \frac{f(\mathbf{a} + t \mathbf{e}_j) - f(\mathbf{a})}{t},
+ \]
+ when the limit exists. We often write this as
+ \[
+ D_{\mathbf{e}_j} f(\mathbf{a}) = D_j f(\mathbf{a}) = \frac{\partial f}{\partial x_j}.
+ \]
+\end{defi}
+Note that these definitions do not require differentiability of $\mathbf{f}$ at $\mathbf{a}$. We will see some examples shortly. Before that, we first establish some elementary properties of differentiable functions.
+\begin{prop}
+ Let $U \subseteq \R^n$ be open, $\mathbf{a} \in U$.
+ \begin{enumerate}
+ \item If $\mathbf{f}: U \to \R^m$ is differentiable at $\mathbf{a}$, then $\mathbf{f}$ is continuous at $\mathbf{a}$.
+ \item If we write $\mathbf{f} = (f_1, f_2, \cdots, f_m): U \to \R^m$, where each $f_j: U \to \R$, then $\mathbf{f}$ is differentiable at $\mathbf{a}$ if and only if each $f_j$ is differentiable at $\mathbf{a}$.
+ \item If $\mathbf{f}, \mathbf{g}: U \to \R^m$ are both differentiable at $\mathbf{a}$, then for any $\lambda, \mu \in \R$, the function $\lambda \mathbf{f} + \mu \mathbf{g}$ is differentiable at $\mathbf{a}$ with
+ \[
+ D( \lambda \mathbf{f} + \mu \mathbf{g})(\mathbf{a}) = \lambda D \mathbf{f}(\mathbf{a}) + \mu D \mathbf{g}(\mathbf{a}).
+ \]
+ \item If $A: \R^n \to \R^m$ is a linear map, then $A$ is differentiable at every $\mathbf{a} \in \R^n$ with
+ \[
+ D A(\mathbf{a}) = A.
+ \]
+ \item If $\mathbf{f}$ is differentiable at $\mathbf{a}$, then the directional derivative $D_\mathbf{u} \mathbf{f}(\mathbf{a})$ exists for all $\mathbf{u} \in \R^n$, and in fact
+ \[
+ D_{\mathbf{u}} \mathbf{f}(\mathbf{a}) = D \mathbf{f}(\mathbf{a}) \mathbf{u}.
+ \]
+ \item If $\mathbf{f}$ is differentiable at $\mathbf{a}$, then all partial derivatives $D_j f_i(\mathbf{a})$ exist for $j = 1, \cdots, n$; $i = 1, \cdots, m$, and are given by
+ \[
+ D_j f_i(\mathbf{a}) = D f_i(\mathbf{a}) \mathbf{e}_j.
+ \]
+ \item Let $A = (A_{ij})$ be the matrix representing $D \mathbf{f}(\mathbf{a})$ with respect to the standard bases for $\R^n$ and $\R^m$, i.e.\ for any $\mathbf{h} \in \R^n$,
+ \[
+ D \mathbf{f}(\mathbf{a}) \mathbf{h} = A \mathbf{h}.
+ \]
+ Then $A$ is given by
+ \[
+ A_{ij} = \bra D \mathbf{f}(\mathbf{a}) \mathbf{e}_j, \mathbf{b}_i\ket = D_j f_i(\mathbf{a}),
+ \]
+ where $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ is the standard basis for $\R^n$, and $\{\mathbf{b}_1, \cdots, \mathbf{b}_m\}$ is the standard basis for $\R^m$.
+ \end{enumerate}
+\end{prop}
+The second property is useful, since instead of considering arbitrary $\R^m$-valued functions, we can just look at real-valued functions.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item By definition, if $\mathbf{f}$ is differentiable, then as $\mathbf{h} \to \mathbf{0}$, we know
+ \[
+ \mathbf{f}(\mathbf{a} + \mathbf{h}) - \mathbf{f}(\mathbf{a}) - D \mathbf{f}(\mathbf{a}) \mathbf{h} \to \mathbf{0}.
+ \]
+ Since $D \mathbf{f}(\mathbf{a})\mathbf{h} \to \mathbf{0}$ as well, we must have $\mathbf{f}(\mathbf{a} + \mathbf{h}) \to \mathbf{f}(\mathbf{a})$.
+ \item Exercise on example sheet 4.
+ \item We just have to check this directly. We have
+ \begin{align*}
+ &\frac{(\lambda \mathbf{f} + \mu \mathbf{g})(\mathbf{a} + \mathbf{h}) - (\lambda \mathbf{f} + \mu \mathbf{g})(\mathbf{a}) - (\lambda D \mathbf{f}(\mathbf{a}) + \mu D \mathbf{g}(\mathbf{a}))\mathbf{h}}{\|\mathbf{h}\|}\\
+ &= \lambda \frac{\mathbf{f}(\mathbf{a} + \mathbf{h}) - \mathbf{f}(\mathbf{a}) - D \mathbf{f}(\mathbf{a})\mathbf{h}}{\|\mathbf{h}\|} + \mu\frac{\mathbf{g}(\mathbf{a} + \mathbf{h}) - \mathbf{g}(\mathbf{a}) - D \mathbf{g}(\mathbf{a}) \mathbf{h}}{\|\mathbf{h}\|},
+ \end{align*}
+ which tends to $\mathbf{0}$ as $\mathbf{h} \to \mathbf{0}$. So done.
+ \item Since $A$ is linear, we always have $A (\mathbf{a} + \mathbf{h}) - A (\mathbf{a}) - A \mathbf{h} = \mathbf{0}$ for all $\mathbf{h}$.
+ \item We've proved this in the previous discussion.
+ \item We've proved this in the previous discussion.
+ \item This follows from the general result for linear maps: for any linear map represented by $(A_{ij})_{m \times n}$, we have
+ \[
+ A_{ij} = \bra A \mathbf{e}_j, \mathbf{b}_i\ket.
+ \]
+ Applying this with $A = D \mathbf{f}(\mathbf{a})$, and noting that for any $\mathbf{h} \in \R^n$,
+ \[
+ D \mathbf{f}(\mathbf{a}) \mathbf{h} = (D \mathbf{f}_1 (\mathbf{a}) \mathbf{h}, \cdots, D \mathbf{f}_m (\mathbf{a}) \mathbf{h}).
+ \]
+ So done.\qedhere
+ \end{enumerate}
+\end{proof}
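As a quick numerical sanity check (my own example, not from the notes) of the identity $D_{\mathbf{u}} \mathbf{f}(\mathbf{a}) = D \mathbf{f}(\mathbf{a}) \mathbf{u}$: for the smooth function $f(x, y) = \sin(x)\, y$, the directional derivative computed as a difference quotient agrees with $\nabla f(\mathbf{a}) \cdot \mathbf{u}$ for several directions $\mathbf{u}$.

```python
# Directional derivatives of f(x, y) = sin(x) * y at a = (0.7, 1.3) via
# symmetric difference quotients, compared with grad f(a) . u.
import math

def f(x, y):
    return math.sin(x) * y

a = (0.7, 1.3)
grad = (math.cos(a[0]) * a[1], math.sin(a[0]))   # (df/dx, df/dy) at a

def directional(u, t=1e-6):
    # symmetric difference quotient for D_u f(a)
    return (f(a[0] + t * u[0], a[1] + t * u[1])
            - f(a[0] - t * u[0], a[1] - t * u[1])) / (2 * t)

for u in [(1.0, 0.0), (0.0, 1.0), (3.0, -2.0)]:
    print(u, directional(u), grad[0] * u[0] + grad[1] * u[1])
```

The point $\mathbf{a}$ and the directions are arbitrary choices for illustration.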
+
+The above says differentiability at a point implies the existence of all directional derivatives, which in turn implies the existence of all partial derivatives. Neither of the converse implications holds.
+\begin{eg}
+ Let $f: \R^2 \to \R$ be defined by
+ \[
+ f(x, y) =
+ \begin{cases}
+ 0 & xy = 0\\
+ 1 & xy \not= 0
+ \end{cases}
+ \]
+ Then the partial derivatives are
+ \[
+ \frac{\partial f}{\partial x} (0, 0) = \frac{\partial f}{\partial y} (0, 0) = 0.
+ \]
+ In other directions, say $\mathbf{u} = (1, 1)$, we have
+ \[
+ \frac{f(\mathbf{0} + t \mathbf{u}) - f(\mathbf{0})}{t} = \frac{1}{t}
+ \]
+ which diverges as $t \to 0$. So the directional derivative does not exist.
+\end{eg}
+
+\begin{eg}
+ Let $f: \R^2 \to \R$ be defined by
+ \[
+ f(x, y) =
+ \begin{cases}
+ \frac{x^3}{y} & y \not= 0\\
+ 0 & y = 0
+ \end{cases}
+ \]
+ Then for $\mathbf{u} = (u_1, u_2) \not= \mathbf{0}$ and $t \not= 0$, we can compute
+ \[
+ \frac{f(\mathbf{0} + t\mathbf{u}) - f(\mathbf{0})}{t} =
+ \begin{cases}
+ \frac{t u_1^3}{u_2} & u_2 \not= 0\\
+ 0 & u_2 = 0
+ \end{cases}
+ \]
+ So
+ \[
+ D_{\mathbf{u} } f(\mathbf{0}) = \lim_{t\to 0} \frac{f(\mathbf{0} + t\mathbf{u}) - f(\mathbf{0})}{t} = 0,
+ \]
+ and the directional derivative exists. However, the function is not differentiable at $0$, since it is not even continuous at $0$, as
+ \[
+ f(\delta, \delta^4) = \frac{1}{\delta}
+ \]
+ diverges as $\delta \to 0$.
+\end{eg}
+
+\begin{eg}
+ Let $f: \R^2 \to \R$ be defined by
+ \[
+ f(x, y) =
+ \begin{cases}
+ \frac{x^3}{x^2 + y^2} & (x, y) \not= (0, 0)\\
+ 0 & (x, y) = (0, 0)
+ \end{cases}.
+ \]
+ It is clear that $f$ is continuous at points other than $\mathbf{0}$, and $f$ is also continuous at $\mathbf{0}$ since $|f(x, y)| \leq |x|$. We can compute the partial derivatives as
+ \[
+ \frac{\partial f}{\partial x}(0, 0) = 1,\quad \frac{\partial f}{\partial y} (0, 0) = 0.
+ \]
+ In fact, we can compute the difference quotient in the direction $\mathbf{u} = (u_1, u_2) \not= \mathbf{0}$ to be
+ \[
+ \frac{f(\mathbf{0} + t \mathbf{u}) - f(\mathbf{0})}{t} = \frac{u_1^3}{u_1^2 + u_2^2}.
+ \]
+ So we have
+ \[
+ D_\mathbf{u} f(\mathbf{0}) = \frac{u_1^3}{u_1^2 + u_2^2}.
+ \]
+ We can now immediately conclude that $f$ is not differentiable at $0$, since if it were, then we would have
+ \[
+ D_\mathbf{u} f(\mathbf{0}) = D f(\mathbf{0}) \mathbf{u},
+ \]
+ which should be a linear expression in $\mathbf{u}$, but it is not.
+
+ Alternatively, if $f$ were differentiable, then we have
+ \[
+ D f(\mathbf{0}) \mathbf{h} =
+ \begin{pmatrix}
+ 1 & 0
+ \end{pmatrix}
+ \begin{pmatrix}
+ h_1\\h_2
+ \end{pmatrix} = h_1.
+ \]
+ However, we have
+ \[
+ \frac{f(\mathbf{0} + \mathbf{h}) - f(\mathbf{0}) - D f(\mathbf{0}) \mathbf{h}}{\|\mathbf{h}\|} = \frac{\frac{h_1^3}{h_1^2 + h_2^2} - h_1}{\sqrt{h_1^2 + h_2^2}} = -\frac{h_1 h_2^2}{(h_1^2 + h_2^2)^{3/2}},
+ \]
+ which does not tend to $0$ as $\mathbf{h} \to \mathbf{0}$. For example, if $\mathbf{h} = (t, t)$, this quotient is
+ \[
+ -\frac{1}{2^{3/2}}
+ \]
+ for $t > 0$.
+\end{eg}
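The failure of linearity in this example is easy to check numerically (my own illustration): the difference quotients at the origin give $D_{\mathbf{u}} f(\mathbf{0}) = u_1^3/(u_1^2 + u_2^2)$, and additivity in $\mathbf{u}$ fails.

```python
# Directional derivatives at 0 of f = x^3/(x^2 + y^2): not linear in u,
# so f cannot be differentiable at 0.
def D(u1, u2, t=1e-7):
    # difference quotient (f(t u) - f(0)) / t at the origin
    x, y = t * u1, t * u2
    return (x ** 3 / (x ** 2 + y ** 2)) / t

# additivity fails: D(e1) + D(e2) != D(e1 + e2)
print(D(1, 0), D(0, 1), D(1, 1))
```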
+
+To decide if a function is differentiable, the first step would be to compute the partial derivatives. If they don't exist, then we can immediately know the function is not differentiable. However, if they do, then we have a \emph{candidate} for what the derivative is, and we plug it into the definition to check if it actually is the derivative.
+
+This is a cumbersome thing to do. It turns out that while existence of partial derivatives does not imply differentiability in general, we can get differentiability if we impose some slightly stronger conditions.
+
+\begin{thm}
+ Let $U\subseteq \R^n$ be open, $\mathbf{f}: U \to \R^m$. Let $\mathbf{a} \in U$. Suppose there exists some open ball $B_r(\mathbf{a}) \subseteq U$ such that
+ \begin{enumerate}
+ \item $D_j f_i(\mathbf{x})$ exists for every $\mathbf{x} \in B_r(\mathbf{a})$ and $1 \leq i \leq m$, $1 \leq j \leq n$;
+ \item $D_j f_i$ are continuous at $\mathbf{a}$ for all $1 \leq i \leq m$, $1 \leq j \leq n$.
+ \end{enumerate}
+ Then $\mathbf{f}$ is differentiable at $\mathbf{a}$.
+\end{thm}
+
+\begin{proof}
+ It suffices to prove the result for $m = 1$, by part (ii) of the proposition above. For each $\mathbf{h} = (h_1, \cdots, h_n) \in \R^n$, we have
+ \[
+ f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) = \sum_{j = 1}^n f(\mathbf{a} + h_1 \mathbf{e}_1 + \cdots + h_j \mathbf{e}_j) - f(\mathbf{a} + h_1 \mathbf{e}_1 + \cdots + h_{j - 1} \mathbf{e}_{j - 1}).
+ \]
+ Now for convenience, we can write
+ \[
+ \mathbf{h}^{(j)} = h_1 \mathbf{e}_1 + \cdots + h_j \mathbf{e}_j = (h_1, \cdots, h_j, 0, \cdots, 0).
+ \]
+ Then we have
+ \begin{align*}
+ f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) &= \sum_{j = 1}^n f(\mathbf{a} + \mathbf{h}^{(j)}) - f(\mathbf{a} + \mathbf{h}^{(j - 1)}) \\
+ &= \sum_{j = 1}^n f(\mathbf{a} + \mathbf{h}^{(j - 1)} + h_j \mathbf{e}_j) - f(\mathbf{a} + \mathbf{h}^{(j - 1)}).
+ \end{align*}
+ Note that in each term, we are just moving along the coordinate axes. Since the partial derivatives exist, the mean value theorem of single-variable calculus applied to
+ \[
+ g(t) = f(\mathbf{a} + \mathbf{h}^{(j - 1)} + t \mathbf{e}_j)
+ \]
+ on the interval $t \in [0, h_j]$ allows us to write this as
+ \begin{align*}
+ &f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) \\
+ &= \sum_{j = 1}^n h_j D_j f (\mathbf{a} + \mathbf{h}^{(j - 1)} + \theta_j h_j \mathbf{e}_j)\\
+ &= \sum_{j = 1}^n h_j D_j f(\mathbf{a}) + \sum_{j = 1}^n h_j \Big(D_j f (\mathbf{a} + \mathbf{h}^{(j - 1)} + \theta_j h_j \mathbf{e}_j) - D_j f(\mathbf{a})\Big)
+ \end{align*}
+ for some $\theta_j \in (0, 1)$.
+
+ Note that $D_j f (\mathbf{a} + \mathbf{h}^{(j - 1)} + \theta_j h_j \mathbf{e}_j) - D_j f(\mathbf{a}) \to 0$ as $\mathbf{h} \to 0$ since the partial derivatives are continuous at $\mathbf{a}$. So the second term is $o(\mathbf{h})$. So $f$ is differentiable at $\mathbf{a}$ with
+ \[
+ D f (\mathbf{a})\mathbf{h}= \sum_{j = 1}^n D_j f (\mathbf{a})h_j.\qedhere
+ \]
+\end{proof}
+This is a very useful result. For example, we can now immediately conclude that the function
+\[
+ \begin{pmatrix}
+ x\\y\\z
+ \end{pmatrix}
+ \mapsto
+ \begin{pmatrix}
+ 3x^2 + 4\sin y + e^{6z}\\
+ xyze^{14x}
+ \end{pmatrix}
+\]
+is differentiable everywhere, since it has continuous partial derivatives. This is much better than messing with the definition itself.
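For this map we can also check the hand-computed partial derivatives against finite differences (my own sketch; the sample point is an arbitrary choice):

```python
# The map (x, y, z) -> (3x^2 + 4 sin y + e^(6z), x y z e^(14x)) has continuous
# partial derivatives, computed by hand in jac_exact; central differences agree.
import math

def F(x, y, z):
    return (3 * x * x + 4 * math.sin(y) + math.exp(6 * z),
            x * y * z * math.exp(14 * x))

def jac_exact(x, y, z):
    e = math.exp(14 * x)
    return ((6 * x, 4 * math.cos(y), 6 * math.exp(6 * z)),
            (y * z * e * (1 + 14 * x), x * z * e, x * y * e))

def jac_numeric(x, y, z, h=1e-6):
    p = (x, y, z)
    cols = []
    for j in range(3):
        q1 = list(p); q1[j] += h
        q2 = list(p); q2[j] -= h
        f1, f2 = F(*q1), F(*q2)
        cols.append(((f1[0] - f2[0]) / (2 * h), (f1[1] - f2[1]) / (2 * h)))
    # transpose the columns into the 2 x 3 Jacobian
    return tuple(tuple(cols[j][i] for j in range(3)) for i in range(2))

pt = (0.1, 0.2, 0.3)
print(jac_exact(*pt))
print(jac_numeric(*pt))
```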
+
+\subsection{The operator norm}
+So far, we have only looked at derivatives at a single point. We haven't discussed much about the derivative at, say, a neighbourhood or the whole space. We might want to ask if the derivative is continuous or bounded. However, this is not straightforward, since the derivative is a linear map, and we need to define these notions for functions whose values are linear maps. In particular, we want to understand the map $D\mathbf{f}: B_r(\mathbf{a}) \to L(\R^n; \R^m)$ given by $\mathbf{x} \mapsto D \mathbf{f}(\mathbf{x})$. To do so, we need a metric on the space $L(\R^n; \R^m)$. In fact, we will use a norm.
+
+Let $\mathcal{L} = L(\R^n; \R^m)$. This is a vector space over $\R$, with addition and scalar multiplication defined pointwise. In fact, $\mathcal{L}$ is a subspace of $C(\R^n, \R^m)$.
+
+To prove this, we have to show that all linear maps are continuous. Let $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ be the standard basis for $\R^n$. For
+\[
+ \mathbf{x} = \sum_{j = 1}^n x_j \mathbf{e}_j
+\]
+and $A \in \mathcal{L}$, we have
+\[
+ A (\mathbf{x}) = \sum_{j = 1}^n x_j A \mathbf{e}_j.
+\]
+By Cauchy-Schwarz, we know
+\[
+ \|A(\mathbf{x})\| \leq \sum_{j = 1}^n |x_j| \|A(\mathbf{e}_j)\| \leq \|\mathbf{x}\| \sqrt{\sum_{j = 1}^n \|A (\mathbf{e}_j)\|^2}.
+\]
+So we see $A$ is Lipschitz, and is hence continuous. Alternatively, this follows from the fact that linear maps are differentiable and hence continuous.
+
+We can use this fact to define the norm of linear maps. Since $\mathcal{L}$ is finite-dimensional (it is isomorphic to the space of real $m\times n$ matrices as a vector space, and hence has dimension $mn$), it really doesn't matter which norm we pick, as they are all Lipschitz equivalent. However, a convenient choice is the \emph{operator norm}.
+
+\begin{defi}[Operator norm]
+ The \emph{operator norm} on $\mathcal{L} = L(\R^n; \R^m)$ is defined by
+ \[
+ \|A\| = \sup_{\mathbf{x} \in \R^n: \|\mathbf{x}\| = 1} \|A \mathbf{x}\|.
+ \]
+\end{defi}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\|A\| < \infty$ for all $A \in \mathcal{L}$.
+ \item $\|\ph\|$ is indeed a norm on $\mathcal{L}$.
+ \item
+ \[
+ \|A\| = \sup_{ \R^n\setminus \{0\}} \frac{\|A \mathbf{x}\|}{\|\mathbf{x}\|}.
+ \]
+ \item $\|A \mathbf{x}\| \leq \|A\| \|\mathbf{x}\|$ for all $\mathbf{x} \in \R^n$.
+ \item Let $A \in L(\R^n; \R^m)$ and $B \in L(\R^m; \R^p)$. Then $BA = B\circ A \in L(\R^n; \R^p)$ and
+ \[
+ \|B A\| \leq \|B\| \|A\|.
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item This is since $A$ is continuous and $\{\mathbf{x} \in \R^n: \|\mathbf{x}\| = 1\}$ is compact.
+ \item The only non-trivial part is the triangle inequality. We have
+ \begin{align*}
+ \|A + B\| &= \sup_{\|\mathbf{x}\| = 1} \|A \mathbf{x} + B \mathbf{x}\| \\
+ &\leq \sup_{\|\mathbf{x}\| = 1} (\|A \mathbf{x}\| + \|B \mathbf{x}\|)\\
+ &\leq \sup_{\|\mathbf{x}\| = 1} \|A \mathbf{x}\| + \sup_{\|\mathbf{x}\| = 1} \|B \mathbf{x}\|\\
+ &= \|A\| + \|B\|
+ \end{align*}
+ \item This follows from linearity of $A$, and the fact that for any $\mathbf{x} \not= \mathbf{0}$, we have
+ \[
+ \left\|\frac{\mathbf{x}}{\|\mathbf{x}\|}\right\| = 1.
+ \]
+ \item Immediate from above.
+ \item
+ \[
 \|BA\| = \sup_{\mathbf{x} \in \R^n\setminus \{0\}} \frac{\|BA \mathbf{x}\|}{\|\mathbf{x}\|} \leq \sup_{\mathbf{x} \in \R^n \setminus \{0\}} \frac{\|B\| \|A \mathbf{x}\|}{\|\mathbf{x}\|} = \|B\|\|A\|.\qedhere
 \]
+ \end{enumerate}
+\end{proof}
+
+For certain easy cases, we have a straightforward expression for the operator norm.
+\begin{prop}\leavevmode
+ \begin{enumerate}
 \item If $A \in L(\R, \R^m)$, then $A$ can be written as $Ax = x \mathbf{a}$ for some $\mathbf{a} \in \R^m$. Moreover, $\|A\| = \|\mathbf{a}\|$, where the second norm is the Euclidean norm on $\R^m$.
+ \item If $A \in L(\R^n, \R)$, then $A \mathbf{x} = \mathbf{x} \cdot \mathbf{a}$ for some fixed $\mathbf{a} \in \R^n$. Again, $\|A\| = \|\mathbf{a}\|$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
 \item Set $\mathbf{a} = A(1)$. Then by linearity, we get $A x = x A(1) = x \mathbf{a}$. Then we have
 \[
 \|A x\| = |x| \|\mathbf{a}\|.
 \]
 So for $x \not= 0$, we have
 \[
 \frac{\|A x\|}{|x|} = \|\mathbf{a}\|.
 \]
+ \item Exercise on example sheet 4.\qedhere
+ \end{enumerate}
+\end{proof}
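As a quick sanity check (this example is not from the lectures), we can compute an operator norm directly from the definition in a simple diagonal case.
\begin{eg}
 Let $A \in L(\R^2; \R^2)$ be given by the matrix
 \[
 \begin{pmatrix}
 2 & 0\\
 0 & 3
 \end{pmatrix}.
 \]
 Then for any $\mathbf{x}$ with $\|\mathbf{x}\| = 1$, we have
 \[
 \|A \mathbf{x}\|^2 = 4 x_1^2 + 9 x_2^2 \leq 9(x_1^2 + x_2^2) = 9,
 \]
 with equality when $\mathbf{x} = \mathbf{e}_2$. So $\|A\| = 3$. In general, the operator norm of a diagonal matrix is the largest absolute value of a diagonal entry.
\end{eg}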
+
+\begin{thm}[Chain rule]
+ Let $U \subseteq \R^n$ be open, $\mathbf{a} \in U$, $\mathbf{f}: U \to \R^m$ differentiable at $\mathbf{a}$. Moreover, $V \subseteq \R^m$ is open with $\mathbf{f}(U) \subseteq V$ and $\mathbf{g}: V \to \R^p$ is differentiable at $\mathbf{f}(\mathbf{a})$. Then $\mathbf{g} \circ \mathbf{f}: U \to \R^p$ is differentiable at $\mathbf{a}$, with derivative
+ \[
+ D(\mathbf{g} \circ \mathbf{f})(\mathbf{a}) = D \mathbf{g}(\mathbf{f}(\mathbf{a}))\; D \mathbf{f}(\mathbf{a}).
+ \]
+\end{thm}
+
+\begin{proof}
+ The proof is very easy if we use the little $o$ notation. Let $A = D\mathbf{f}(\mathbf{a})$ and $B = D \mathbf{g}(\mathbf{f}(\mathbf{a}))$. By differentiability of $\mathbf{f}$, we know
+ \begin{align*}
+ \mathbf{f}(\mathbf{a} + \mathbf{h}) &= \mathbf{f}(\mathbf{a}) + A \mathbf{h} + o(\mathbf{h})\\
+ \mathbf{g}(\mathbf{f}(\mathbf{a}) + \mathbf{k}) &= \mathbf{g}(\mathbf{f}(\mathbf{a})) + B\mathbf{k} + o(\mathbf{k})\\
+ \intertext{Now we have}
+ \mathbf{g}\circ \mathbf{f}(\mathbf{a} + \mathbf{h}) &= \mathbf{g}(\mathbf{f}(\mathbf{a}) + \underbrace{A \mathbf{h} + o(\mathbf{h})}_{\mathbf{k}})\\
+ &= \mathbf{g}(\mathbf{f}(\mathbf{a})) + B(A \mathbf{h} + o(\mathbf{h})) + o(A \mathbf{h} + o(\mathbf{h}))\\
+ &= \mathbf{g}\circ \mathbf{f}(\mathbf{a}) + B A \mathbf{h} + B(o(\mathbf{h})) + o(A \mathbf{h} + o(\mathbf{h})).
+ \end{align*}
+ We just have to show the last term is $o(\mathbf{h})$, but this is true since $B$ and $A$ are bounded. By boundedness,
+ \[
+ \|B(o(\mathbf{h}))\| \leq \|B\|\|o (\mathbf{h})\|.
+ \]
+ So $B(o(\mathbf{h})) = o(\mathbf{h})$. Similarly,
+ \[
+ \|A\mathbf{h} + o(\mathbf{h})\| \leq \|A\| \|\mathbf{h}\| + \|o (\mathbf{h})\| \leq (\|A\| + 1)\|\mathbf{h}\|
+ \]
+ for sufficiently small $\|\mathbf{h}\|$. So $o(A \mathbf{h} + o(\mathbf{h}))$ is in fact $o(\mathbf{h})$ as well. Hence
+ \[
+ \mathbf{g}\circ \mathbf{f}(\mathbf{a} + \mathbf{h}) = \mathbf{g} \circ \mathbf{f}(\mathbf{a}) + BA \mathbf{h} + o(\mathbf{h}).\qedhere
+ \]
+\end{proof}
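As a quick illustration (this example is not from the lectures), we can check the chain rule against a direct computation.
\begin{eg}
 Let $\mathbf{f}: \R \to \R^2$ be given by $\mathbf{f}(t) = (\cos t, \sin t)$, and $g: \R^2 \to \R$ by $g(x, y) = x^2 + y^2$. Then $g \circ \mathbf{f}$ is constantly $1$, so its derivative is $0$. Indeed, the chain rule gives
 \[
 D(g \circ \mathbf{f})(t) = D g(\mathbf{f}(t))\; D \mathbf{f}(t) =
 \begin{pmatrix}
 2\cos t & 2 \sin t
 \end{pmatrix}
 \begin{pmatrix}
 -\sin t\\
 \cos t
 \end{pmatrix}
 = 0.
 \]
\end{eg}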
+
+\subsection{Mean value inequalities}
+So far, we have just looked at cases where we assume the function is differentiable at a point. We are now going to assume the function is differentiable in a region, and see what happens to the derivative.
+
+Recall the mean value theorem from single-variable calculus: if $f: [a, b] \to \R$ is continuous on $[a, b]$ and differentiable on $(a, b)$, then
+\[
+ f(b) - f(a) = f'(c) (b - a)
+\]
+for some $c \in (a, b)$. This is our favorite theorem, and we have used it many times in IA Analysis. Here we have an exact equality. However, in general, for vector-valued functions, i.e.\ if we are mapping to $\R^m$, this is no longer true. Instead, we only have an inequality.
+
+We first prove it for the case when the domain is a subset of $\R$, and then reduce the general case to this special case.
+\begin{thm}
+ Let $\mathbf{f}: [a, b] \to \R^m$ be continuous on $[a, b]$ and differentiable on $(a, b)$. Suppose we can find some $M$ such that for all $t \in (a, b)$, we have $\|D\mathbf{f}(t)\| \leq M$. Then
+ \[
+ \|\mathbf{f}(b) - \mathbf{f}(a)\| \leq M(b - a).
+ \]
+\end{thm}
+
+\begin{proof}
+ Let $\mathbf{v} = \mathbf{f}(b) - \mathbf{f}(a)$. We define
+ \[
+ g(t) = \mathbf{v}\cdot \mathbf{f}(t) = \sum_{i = 1}^m v_i f_i(t).
+ \]
+ Since each $f_i$ is differentiable, $g$ is continuous on $[a, b]$ and differentiable on $(a, b)$ with
+ \[
+ g'(t) = \sum v_i f_i' (t).
+ \]
 Hence, by the Cauchy-Schwarz inequality, we know
 \[
 |g'(t)| = \left|\sum_{i = 1}^m v_i f_i'(t)\right| \leq \|\mathbf{v}\| \left(\sum_{i = 1}^m f_i'(t)^2\right)^{1/2} = \|\mathbf{v}\| \|D\mathbf{f} (t)\| \leq M \|\mathbf{v}\|.
+ \]
+ We now apply the mean value theorem to $g$ to get
+ \[
+ g(b) - g(a) = g'(t)(b - a)
+ \]
+ for some $t \in (a, b)$. By definition of $g$, we get
+ \[
+ \mathbf{v} \cdot (\mathbf{f}(b) - \mathbf{f}(a)) = g'(t) (b - a).
+ \]
+ By definition of $\mathbf{v}$, we have
+ \[
+ \|\mathbf{f}(b) - \mathbf{f}(a)\|^2 = |g'(t)(b - a)| \leq (b - a) M \|\mathbf{f} (b) - \mathbf{f}(a)\|.
+ \]
+ If $\mathbf{f}(b) = \mathbf{f}(a)$, then there is nothing to prove. Otherwise, divide by $\|\mathbf{f}(b) - \mathbf{f}(a)\|$ and done.
+\end{proof}
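A simple example (not from the lectures) shows why we can only hope for an inequality, not an equality, in the vector-valued case.
\begin{eg}
 Let $\mathbf{f}: \R \to \R^2$ be given by $\mathbf{f}(t) = (\cos t, \sin t)$. Then $D\mathbf{f}(t)$ sends $h$ to $h(-\sin t, \cos t)$, so $\|D\mathbf{f}(t)\| = \|(-\sin t, \cos t)\| = 1$ for all $t$. The theorem gives
 \[
 \|\mathbf{f}(b) - \mathbf{f}(a)\| \leq b - a,
 \]
 i.e.\ a chord of the circle is no longer than the corresponding arc. An exact mean value equality fails: taking $a = 0$ and $b = 2\pi$, the left-hand side is $0$, but $\|D\mathbf{f}(t)\| = 1$ everywhere.
\end{eg}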
+
+We now apply this to prove the general version.
+\begin{thm}[Mean value inequality]
+ Let $\mathbf{a} \in \R^n$ and $\mathbf{f}: B_r(\mathbf{a}) \to \R^m$ be differentiable on $B_r(\mathbf{a})$ with $\|D\mathbf{f}(\mathbf{x})\| \leq M$ for all $\mathbf{x} \in B_r(\mathbf{a})$. Then
+ \[
 \|\mathbf{f}(\mathbf{b}_1) - \mathbf{f}(\mathbf{b}_2)\| \leq M\|\mathbf{b}_1 - \mathbf{b}_2\|
+ \]
+ for any $\mathbf{b}_1, \mathbf{b}_2 \in B_r(\mathbf{a})$.
+\end{thm}
+
+\begin{proof}
+ We will reduce this to the previous theorem.
+
+ Fix $\mathbf{b}_1, \mathbf{b}_2 \in B_r(\mathbf{a})$. Note that
+ \[
+ t\mathbf{b}_1 + (1 - t) \mathbf{b}_2 \in B_r(\mathbf{a})
+ \]
 for all $t \in [0, 1]$. Now consider $\mathbf{g}: [0, 1] \to \R^m$ defined by
+ \[
+ \mathbf{g}(t) = \mathbf{f}(t \mathbf{b}_1 + (1 - t)\mathbf{b}_2).
+ \]
+ By the chain rule, $\mathbf{g}$ is differentiable and
+ \[
 \mathbf{g}'(t) = D\mathbf{g}(t) = (D\mathbf{f} (t \mathbf{b}_1 + (1 - t) \mathbf{b}_2))(\mathbf{b}_1 - \mathbf{b}_2).
+ \]
+ Therefore
+ \[
+ \|D \mathbf{g}(t)\| \leq \|D \mathbf{f}(t \mathbf{b}_1 + (1 - t)\mathbf{b}_2)\| \|\mathbf{b}_1 - \mathbf{b}_2\| \leq M \|\mathbf{b}_1 - \mathbf{b}_2\|.
+ \]
+ Now we can apply the previous theorem, and get
+ \[
+ \|\mathbf{f}(\mathbf{b}_1) - \mathbf{f}(\mathbf{b}_2)\| = \|\mathbf{g}(1) - \mathbf{g}(0)\| \leq M\|\mathbf{b}_1 - \mathbf{b}_2\|.\qedhere
+ \]
+\end{proof}
+Note that here we worked in a ball. In general, we could have worked in a convex set, since all we need is for $t \mathbf{b}_1 + (1 - t)\mathbf{b}_2$ to be inside the domain.
+
+But with this, we have the following easy corollary.
+\begin{cor}
+ Let $\mathbf{f}: B_r(\mathbf{a}) \subseteq \R^n \to \R^m$ have $D\mathbf{f}(\mathbf{x}) = 0$ for all $\mathbf{x} \in B_r(\mathbf{a})$. Then $\mathbf{f}$ is constant.
+\end{cor}
+
+\begin{proof}
+ Apply the mean value inequality with $M = 0$.
+\end{proof}
+
We would like to extend this corollary: does it still hold for differentiable maps $\mathbf{f}$ with $D\mathbf{f} = 0$ defined on an arbitrary open set $U \subseteq \R^n$?
+
The answer is clearly no. Even for functions $f: U \to \R$, this is not true, since we can take the union of two disjoint intervals $U = (1, 2) \cup (3, 4)$, and define $f(t)$ to be $1$ on $(1, 2)$ and $2$ on $(3, 4)$. Then $Df = 0$ but $f$ is not constant. $f$ is only locally constant on each interval.
+
The problem here is that the domain is disconnected. We cannot connect points in $(1, 2)$ to points in $(3, 4)$ with a path inside the domain. If we could do so, then we would be able to show that $f$ is constant.
+
+\begin{defi}[Path-connected subset]
+ A subset $E \subseteq \R^n$ is \emph{path-connected} if for any $\mathbf{a}, \mathbf{b} \in E$, there is a continuous map $\gamma: [0, 1] \to E$ such that
+ \[
+ \gamma(0) = \mathbf{a}, \quad \gamma(1) = \mathbf{b}.
+ \]
+\end{defi}
+
+\begin{thm}
+ Let $U\subseteq \R^n$ be open and path-connected. Then for any differentiable $\mathbf{f}: U \to \R^m$, if $D\mathbf{f}(\mathbf{x}) = 0$ for all $\mathbf{x} \in U$, then $\mathbf{f}$ is constant on $U$.
+\end{thm}
A naive attempt would be to replace $t\mathbf{b}_1 + (1 - t)\mathbf{b}_2$ in the proof of the mean value inequality with a path $\gamma(t)$. However, this is not a correct proof, since it assumes $\gamma$ is differentiable, which need not be the case. So this doesn't work. We have to think some more.
+
+\begin{proof}
 We are going to use the fact that $\mathbf{f}$ is locally constant. wlog, assume $m = 1$. Given any $\mathbf{a}, \mathbf{b} \in U$, we show that $f(\mathbf{a}) = f(\mathbf{b})$. Let $\gamma: [0, 1] \to U$ be a (continuous) path from $\mathbf{a}$ to $\mathbf{b}$. For any $s \in (0, 1)$, there exists some $\varepsilon > 0$ such that $B_\varepsilon(\gamma(s)) \subseteq U$ since $U$ is open. By continuity of $\gamma$, there is a $\delta > 0$ such that $(s - \delta, s + \delta) \subseteq [0, 1]$ and $\gamma((s - \delta, s + \delta)) \subseteq B_\varepsilon(\gamma(s)) \subseteq U$.
+
 Since $f$ is constant on $B_\varepsilon(\gamma(s))$ by the previous corollary, we know that $g(t) = f \circ \gamma (t)$ is constant on $(s - \delta, s + \delta)$. In particular, $g$ is differentiable at $s$ with derivative $0$. This is true for all $s \in (0, 1)$. So the map $g: [0, 1] \to \R$ has zero derivative on $(0, 1)$ and is continuous on $[0, 1]$. So $g$ is constant. So $g(0) = g(1)$, i.e.\ $f(\mathbf{a}) = f(\mathbf{b})$.
+\end{proof}
+If $\gamma$ were differentiable, then this is much easier, since we can show $g' = 0$ by the chain rule:
+\[
+ g'(t) = D f(\gamma(t)) \gamma'(t).
+\]
+\subsection{Inverse function theorem}
+Now, we get to the inverse function theorem. This is one of the most important theorems of the course. This has many interesting and important consequences, but we will not have time to get to these.
+
+Before we can state the inverse function theorem, we need a definition.
+\begin{defi}[$C^1$ function]
+ Let $U \subseteq \R^n$ be open. We say $\mathbf{f}: U \to \R^m$ is $C^1$ on $U$ if $\mathbf{f}$ is differentiable at each $\mathbf{x} \in U$ and
+ \[
+ D\mathbf{f}: U \to L(\R^n, \R^m)
+ \]
+ is continuous.
+
+ We write $C^1(U)$ or $C^1(U; \R^m)$ for the set of all $C^1$ maps from $U$ to $\R^m$.
+\end{defi}
+
+First we get a convenient alternative characterization of $C^1$.
+\begin{prop}
 Let $U \subseteq \R^n$ be open. Then $\mathbf{f} = (f_1, \cdots, f_m): U \to \R^m$ is $C^1$ on $U$ if and only if the partial derivatives $D_j f_i(\mathbf{x})$ exist for all $\mathbf{x} \in U$, $1 \leq i\leq m$, $1 \leq j \leq n$, and $D_j f_i: U \to \R$ are continuous.
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ Differentiability of $\mathbf{f}$ at $\mathbf{x}$ implies $D_j f_i(\mathbf{x})$ exists and is given by
+ \[
+ D_j f_i (\mathbf{x}) = \bra D\mathbf{f}(\mathbf{x}) \mathbf{e}_j, \mathbf{b}_i\ket,
+ \]
 where $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ and $\{\mathbf{b}_1, \cdots, \mathbf{b}_m\}$ are the standard bases of $\R^n$ and $\R^m$ respectively.
+
+ So we know
+ \[
+ |D_j f_i (\mathbf{x}) - D_j f_i (\mathbf{y})| = |\bra (D \mathbf{f}(\mathbf{x}) - D \mathbf{f}(\mathbf{y})) \mathbf{e}_j, \mathbf{b}_i \ket | \leq \|D \mathbf{f}(\mathbf{x}) - D \mathbf{f}(\mathbf{y})\|
+ \]
+ since $\mathbf{e}_j$ and $\mathbf{b}_i$ are unit vectors. Hence if $D\mathbf{f}$ is continuous, so is $D_j f_i$.
+
 $(\Leftarrow)$ Since the partials exist and are continuous, by our previous theorem, we know that the derivative $D \mathbf{f}$ exists. To show $D\mathbf{f}: U \to L(\R^n; \R^m)$ is continuous, note the following general fact:
+
 For any linear map $A \in L(\R^n; \R^m)$ represented by the matrix $(a_{ij})$, so that $(A\mathbf{h})_i = \sum_j a_{ij} h_j$, and any $\mathbf{x} = (x_1, \cdots, x_n)$, we have
+ \begin{align*}
 \|A \mathbf{x}\|^2 &= \sum_{i = 1}^m \left(\sum_{j = 1}^n a_{ij} x_j\right)^2\\
+ \intertext{By Cauchy-Schwarz, we have}
+ &\leq \sum_{i = 1}^m \left(\sum_{j = 1}^n a_{ij}^2\right) \left(\sum_{j = 1}^n x_j^2\right)\\
+ &= \|\mathbf{x}\|^2 \sum_{i = 1}^m \sum_{j = 1}^n a_{ij}^2.
+ \end{align*}
+ Dividing by $\|\mathbf{x}\|^2$, we know
+ \[
+ \|A\| \leq \sqrt{\sum \sum a_{ij}^2}.
+ \]
+ Applying this to $A = D \mathbf{f}(\mathbf{x}) - D \mathbf{f}(\mathbf{y})$, we get
+ \[
+ \|D \mathbf{f}(\mathbf{x}) - D \mathbf{f}(\mathbf{y})\| \leq \sqrt{\sum\sum (D_j f_i(\mathbf{x}) - D_j f_i (\mathbf{y}))^2}.
+ \]
+ So if all $D_j f_i$ are continuous, then so is $D \mathbf{f}$.
+\end{proof}
+If we do not wish to go through all that algebra to show the inequality
+\[
+ \|A\| \leq \sqrt{\sum \sum a_{ij}^2},
+\]
+we can instead note that $\sqrt{\sum \sum a_{ij}^2}$ is a norm on $L(\R^n, \R^m)$, since it is just the Euclidean norm if we treat the matrix as a vector written in a funny way. So by the equivalence of norms on finite-dimensional vector spaces, there is some $C$ such that
+\[
+ \|A\| \leq C\sqrt{\sum \sum a_{ij}^2},
+\]
+and then the result follows.
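As a quick check (not from the lectures) that the inequality can be strict:
\begin{eg}
 Take $A = I \in L(\R^2; \R^2)$, the identity map. Then $\|A \mathbf{x}\| = \|\mathbf{x}\|$ for all $\mathbf{x}$, so $\|A\| = 1$, while
 \[
 \sqrt{\sum \sum a_{ij}^2} = \sqrt{1 + 0 + 0 + 1} = \sqrt{2}.
 \]
\end{eg}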
+
+Finally, we can get to the inverse function theorem.
+\begin{thm}[Inverse function theorem]
 Let $U \subseteq \R^n$ be open, and $\mathbf{f}: U \to \R^n$ be a $C^1$ map. Let $\mathbf{a} \in U$, and suppose that $D\mathbf{f}(\mathbf{a})$ is invertible as a linear map $\R^n \to \R^n$. Then there exist open sets $V, W \subseteq \R^n$ with $\mathbf{a} \in V$, $\mathbf{f}(\mathbf{a}) \in W$, $V \subseteq U$ such that
+ \[
+ \mathbf{f}|_V: V \to W
+ \]
+ is a bijection. Moreover, the inverse map $\mathbf{f}|_V^{-1}: W \to V$ is also $C^1$.
+\end{thm}
+We have a fancy name for these functions.
+\begin{defi}[Diffeomorphism]
 Let $U, U' \subseteq \R^n$ be open. Then a map $\mathbf{g}: U \to U'$ is a \emph{diffeomorphism} if it is $C^1$ with a $C^1$ inverse.
+\end{defi}
+Note that different people have different definitions for the word ``diffeomorphism''. Some require it to be merely differentiable, while others require it to be \emph{infinitely} differentiable. We will stick with this definition.
+
+Then the inverse function theorem says: if $\mathbf{f}$ is $C^1$ and $D \mathbf{f} (\mathbf{a})$ is invertible, then $\mathbf{f}$ is a local diffeomorphism at $\mathbf{a}$.
+
Before we prove this, we look at the simple case where $n = 1$. Suppose $f'(a) \not= 0$. Then there exists a $\delta$ such that $f'(t) > 0$ or $f'(t) < 0$ for all $t \in (a - \delta, a + \delta)$. So $f|_{(a - \delta, a + \delta)}$ is monotone and hence invertible. This is a triviality. However, it is no longer a triviality even for $n = 2$.
+
+\begin{proof}
+ By replacing $\mathbf{f}$ with $(D\mathbf{f}(\mathbf{a}))^{-1} \mathbf{f}$ (or by rotating our heads and stretching it a bit), we can assume $D\mathbf{f}(\mathbf{a}) = I$, the identity map. By continuity of $D \mathbf{f}$, there exists some $r > 0$ such that
+ \[
+ \|D \mathbf{f}(\mathbf{x}) - I\| < \frac{1}{2}
+ \]
+ for all $\mathbf{x} \in \overline{B_r(\mathbf{a})}$. By shrinking $r$ sufficiently, we can assume $\overline{B_r(\mathbf{a})} \subseteq U$. Let $W = B_{r/2}(\mathbf{f}(\mathbf{a}))$, and let $V = \mathbf{f}^{-1}(W) \cap B_r(\mathbf{a})$.
+
+ That was just our setup. There are three steps to actually proving the theorem.
+ \begin{claim}
+ $V$ is open, and $\mathbf{f}|_{V}: V \to W$ is a bijection.
+ \end{claim}
 Since $\mathbf{f}$ is continuous, $\mathbf{f}^{-1}(W)$ is open. So $V$ is open. To show $\mathbf{f}|_V: V \to W$ is a bijection, we have to show that for each $\mathbf{y} \in W$, there is a \emph{unique} $\mathbf{x} \in V$ such that $\mathbf{f}(\mathbf{x}) = \mathbf{y}$. We are going to use the contraction mapping theorem to prove this. This statement is equivalent to proving that for each $\mathbf{y} \in W$, the map $T(\mathbf{x}) = \mathbf{x} - \mathbf{f}(\mathbf{x}) + \mathbf{y}$ has a unique fixed point $\mathbf{x} \in V$.
+
+ Let $\mathbf{h}(\mathbf{x}) = \mathbf{x} - \mathbf{f}(\mathbf{x})$. Then note that
+ \[
+ D \mathbf{h}(\mathbf{x}) = I - D \mathbf{f}(\mathbf{x}).
+ \]
+ So by our choice of $r$, for every $\mathbf{x} \in \overline{B_r(\mathbf{a})}$, we must have
+ \[
+ \|D \mathbf{h}(\mathbf{x})\| \leq \frac{1}{2}.
+ \]
+ Then for any $\mathbf{x}_1, \mathbf{x}_2 \in \overline{B_r(\mathbf{a})}$, we can use the mean value inequality to estimate
+ \[
+ \|\mathbf{h}(\mathbf{x}_1) - \mathbf{h} (\mathbf{x}_2)\| \leq \frac{1}{2} \|\mathbf{x}_1 - \mathbf{x}_2\|.
+ \]
+ Hence we know
+ \[
+ \|T(\mathbf{x}_1) - T (\mathbf{x}_2)\| = \|\mathbf{h}(\mathbf{x}_1) - \mathbf{h}(\mathbf{x}_2)\| \leq \frac{1}{2}\|\mathbf{x}_1 - \mathbf{x}_2\|.
+ \]
+ Finally, to apply the contraction mapping theorem, we need to pick the right domain for $T$, namely $\overline{B_r(\mathbf{a})}$.
+
+ For any $\mathbf{x}\in \overline{B_r(\mathbf{a})}$, we have
+ \begin{align*}
+ \|T(\mathbf{x}) - \mathbf{a}\| &= \|\mathbf{x} - \mathbf{f}(\mathbf{x}) + \mathbf{y} - \mathbf{a}\|\\
+ &= \|\mathbf{x} - \mathbf{f}(\mathbf{x}) - (\mathbf{a} - \mathbf{f}(\mathbf{a})) + \mathbf{y} - \mathbf{f}(\mathbf{a})\|\\
+ &\leq \|\mathbf{h}(\mathbf{x}) - \mathbf{h}(\mathbf{a})\| + \|\mathbf{y} - \mathbf{f}(\mathbf{a})\|\\
+ &\leq \frac{1}{2} \|\mathbf{x} - \mathbf{a}\| + \|\mathbf{y} - \mathbf{f}(\mathbf{a})\|\\
+ &< \frac{r}{2} + \frac{r}{2}\\
+ &= r.
+ \end{align*}
 So $T: \overline{B_r(\mathbf{a})} \to B_r(\mathbf{a}) \subseteq \overline{B_r(\mathbf{a})}$. Since $\overline{B_r(\mathbf{a})}$ is complete, $T$ has a unique fixed point $\mathbf{x} \in \overline{B_r(\mathbf{a})}$, i.e.\ $T(\mathbf{x}) = \mathbf{x}$. Finally, we need to show $\mathbf{x} \in B_r(\mathbf{a})$, since this is where we want to find our fixed point. But this is true, since $T(\mathbf{x}) \in B_r(\mathbf{a})$ by the above. So we must have $\mathbf{x} \in B_r(\mathbf{a})$. Also, since $\mathbf{f}(\mathbf{x}) = \mathbf{y}$, we know $\mathbf{x} \in \mathbf{f}^{-1}(W)$. So $\mathbf{x} \in V$.
+
+ So we have shown that for each $\mathbf{y} \in W$, there is a unique $\mathbf{x} \in V$ such that $\mathbf{f}(\mathbf{x}) = \mathbf{y}$. So $\mathbf{f}|_V: V \to W$ is a bijection.
+
+ We have done the hard work now. It remains to show that $\mathbf{f}|_V$ is invertible with $C^1$ inverse.
+
+ \begin{claim}
+ The inverse map $\mathbf{g} = \mathbf{f}|_V^{-1}: W \to V$ is Lipschitz (and hence continuous). In fact, we have
+ \[
+ \|\mathbf{g}(\mathbf{y}_1) - \mathbf{g}(\mathbf{y}_2)\| \leq 2 \|\mathbf{y}_1 - \mathbf{y}_2\|.
+ \]
+ \end{claim}
+
 For any $\mathbf{x}_1, \mathbf{x}_2 \in V$, by the triangle inequality, we know
+ \begin{align*}
+ \|\mathbf{x}_1 - \mathbf{x}_2\| - \|\mathbf{f}(\mathbf{x}_1) - \mathbf{f}(\mathbf{x}_2)\| &\leq \|(\mathbf{x}_1 - \mathbf{f}(\mathbf{x}_1)) - (\mathbf{x}_2 - \mathbf{f}(\mathbf{x}_2))\|\\
 &= \|\mathbf{h}(\mathbf{x}_1) - \mathbf{h}(\mathbf{x}_2)\|\\
+ &\leq \frac{1}{2}\|\mathbf{x}_1 - \mathbf{x}_2\|.
+ \end{align*}
+ Hence, we get
+ \[
+ \|\mathbf{x}_1 - \mathbf{x}_2\| \leq 2 \|\mathbf{f}(\mathbf{x}_1) - \mathbf{f}(\mathbf{x}_2)\|.
+ \]
+ Apply this to $\mathbf{x}_1 = \mathbf{g}(\mathbf{y}_1)$ and $\mathbf{x}_2 = \mathbf{g}(\mathbf{y}_2)$, and note that $\mathbf{f}(\mathbf{g}(\mathbf{y}_j)) = \mathbf{y}_j$ to get the desired result.
+
+ \begin{claim}
+ $\mathbf{g}$ is in fact $C^1$, and moreover, for all $\mathbf{y} \in W$,
+ \[
+ D \mathbf{g}(\mathbf{y}) = D \mathbf{f}(\mathbf{g}(\mathbf{y}))^{-1}.\tag{$*$}
+ \]
+ \end{claim}
+ Note that if $\mathbf{g}$ were differentiable, then its derivative must be given by $(*)$, since by definition, we know
+ \[
+ \mathbf{f}(\mathbf{g}(\mathbf{y})) = \mathbf{y},
+ \]
+ and hence the chain rule gives
+ \[
+ D \mathbf{f}(\mathbf{g}(\mathbf{y})) \cdot D \mathbf{g}(\mathbf{y}) = I.
+ \]
 Also, once $(*)$ is established, it follows that $D \mathbf{g}$ is continuous, since it is a composition of continuous functions (the entries of the inverse of a matrix are cofactors divided by the determinant, hence continuous functions of the entries wherever the matrix is invertible). So we only need to check that $D \mathbf{f}(\mathbf{g}(\mathbf{y}))^{-1}$ satisfies the definition of the derivative.
+
+ First we check that $D\mathbf{f}(\mathbf{x})$ is indeed invertible for every $\mathbf{x} \in \overline{B_r(\mathbf{a})}$. We use the fact that
+ \[
+ \|D \mathbf{f}(\mathbf{x}) - I\| \leq \frac{1}{2}.
+ \]
+ If $D\mathbf{f}(\mathbf{x}) \mathbf{v} = \mathbf{0}$, then we have
+ \[
+ \|\mathbf{v}\| = \|D \mathbf{f}(\mathbf{x})\mathbf{v} - \mathbf{v}\| \leq \|D \mathbf{f}(\mathbf{x}) - I\| \|\mathbf{v}\| \leq \frac{1}{2} \|\mathbf{v}\|.
+ \]
+ So we must have $\|\mathbf{v}\| = 0$, i.e.\ $\mathbf{v} = \mathbf{0}$. So $\ker D \mathbf{f}(\mathbf{x}) = \{\mathbf{0}\}$. So $D \mathbf{f}(\mathbf{g}(\mathbf{y}))^{-1}$ exists.
+
+ Let $\mathbf{x} \in V$ be fixed, and $\mathbf{y} = \mathbf{f}(\mathbf{x})$. Let $\mathbf{k}$ be small and
+ \[
+ \mathbf{h} = \mathbf{g}(\mathbf{y} + \mathbf{k}) - \mathbf{g}(\mathbf{y}).
+ \]
+ In other words,
+ \[
+ \mathbf{f}(\mathbf{x} + \mathbf{h}) - \mathbf{f}(\mathbf{x}) = \mathbf{k}.
+ \]
 Since $\mathbf{f}|_V$ is injective, whenever $\mathbf{k} \not= \mathbf{0}$, we have $\mathbf{h} \not= \mathbf{0}$. Since $\mathbf{g}$ is continuous, as $\mathbf{k} \to \mathbf{0}$, $\mathbf{h} \to \mathbf{0}$ as well.
+
+ We have
+ \begin{align*}
+ &\frac{\mathbf{g}(\mathbf{y} + \mathbf{k}) - \mathbf{g}(\mathbf{y}) - D \mathbf{f}(\mathbf{g}(\mathbf{y}))^{-1} \mathbf{k}}{\|\mathbf{k}\|}\\
+ &= \frac{\mathbf{h} - D\mathbf{f}(\mathbf{g}(\mathbf{y}))^{-1} \mathbf{k}}{\|\mathbf{k}\|}\\
+ &= \frac{D \mathbf{f}(\mathbf{x})^{-1}(D \mathbf{f}(\mathbf{x})\mathbf{h} - \mathbf{k})}{\|\mathbf{k}\|}\\
+ &= \frac{-D \mathbf{f}(\mathbf{x})^{-1}(\mathbf{f}(\mathbf{x} + \mathbf{h}) - \mathbf{f}(\mathbf{x}) - D \mathbf{f}(\mathbf{x})\mathbf{h})}{\|\mathbf{k}\|}\\
 &= -D \mathbf{f}(\mathbf{x})^{-1} \left(\frac{\mathbf{f}(\mathbf{x} + \mathbf{h}) - \mathbf{f}(\mathbf{x}) - D \mathbf{f}(\mathbf{x}) \mathbf{h}}{\|\mathbf{h}\|} \cdot \frac{\|\mathbf{h}\|}{\|\mathbf{k}\|}\right)\\
 &= -D \mathbf{f}(\mathbf{x})^{-1} \left(\frac{\mathbf{f}(\mathbf{x} + \mathbf{h}) - \mathbf{f}(\mathbf{x}) - D \mathbf{f}(\mathbf{x}) \mathbf{h}}{\|\mathbf{h}\|} \cdot \frac{\|\mathbf{g}(\mathbf{y} + \mathbf{k}) - \mathbf{g}(\mathbf{y})\|}{\|(\mathbf{y} + \mathbf{k}) - \mathbf{y}\|}\right).
+ \end{align*}
+ As $\mathbf{k} \to \mathbf{0}$, $\mathbf{h} \to \mathbf{0}$. The first factor $-D \mathbf{f}(\mathbf{x})^{-1}$ is fixed; the second factor tends to $\mathbf{0}$ as $\mathbf{h} \to \mathbf{0}$; the third factor is bounded by $2$. So the whole thing tends to $\mathbf{0}$. So done.
+\end{proof}
+Note that in the case where $n = 1$, if $f: (a, b) \to \R$ is $C^1$ with $f'(x) \not= 0$ for every $x$, then $f$ is monotone on the \emph{whole} domain $(a, b)$, and hence $f: (a, b) \to f((a, b))$ is a bijection. In higher dimensions, this is not true. Even if we know that $D \mathbf{f}(\mathbf{x})$ is invertible for all $\mathbf{x} \in U$, we cannot say $\mathbf{f}|_U$ is a bijection. We still only know there is a local inverse.
+
+\begin{eg}
+ Let $U = \R^2$, and $\mathbf{f}: \R^2 \to \R^2$ be given by
+ \[
+ \mathbf{f}(x,y) =
+ \begin{pmatrix}
+ e^x \cos y\\
+ e^x \sin y
+ \end{pmatrix}.
+ \]
+ Then we can directly compute
+ \[
+ D\mathbf{f}(x, y) =
+ \begin{pmatrix}
+ e^x \cos y & -e^x \sin y\\
 e^x \sin y & e^x \cos y
 \end{pmatrix}.
+ \]
+ Then we have
+ \[
 \det (D\mathbf{f}(x, y)) = e^{2x} \not= 0
+ \]
+ for all $(x, y) \in \R^2$. However, by periodicity, we have
+ \[
 \mathbf{f}(x, y + 2n\pi) = \mathbf{f}(x, y)
 \]
 for all integers $n$. So $\mathbf{f}$ is not injective on $\R^2$.
+\end{eg}
+One major application of the inverse function theorem is to prove the \emph{implicit function theorem}. We will not go into details here, but an example of the theorem can be found on example sheet 4.
+
+\subsection{2nd order derivatives}
We've done so much work to understand first derivatives. For real functions, we can immediately say a lot about higher derivatives, since the derivative is just a normal real function again. Here, it is slightly more complicated, since the derivative is a linear operator. However, this is not really a problem, since the space of linear operators is just yet another vector space, so we can essentially use the same definition.
+
+\begin{defi}[2nd derivative]
+ Let $U \subseteq \R^n$ be open, $\mathbf{f}: U \to \R^m$ be differentiable. Then $D\mathbf{f}: U \to L(\R^n; \R^m)$. We say $D\mathbf{f}$ is \emph{differentiable} at $\mathbf{a} \in U$ if there exists $A \in L(\R^n; L(\R^n; \R^m))$ such that
+ \[
+ \lim_{\mathbf{h} \to 0} \frac{1}{\|\mathbf{h}\|} (D\mathbf{f}(\mathbf{a} + \mathbf{h}) - D \mathbf{f}(\mathbf{a}) - A \mathbf{h}) = 0.
+ \]
+\end{defi}
+For this to make sense, we would need to put a norm on $L(\R^n; \R^m)$ (e.g.\ the operator norm), but $A$, if it exists, is independent of the choice of the norm, since all norms are equivalent for a finite-dimensional space.
+
This is, in fact, the same definition as our usual differentiability, since $L(\R^n; \R^m)$ is just a finite-dimensional space, and is isomorphic to $\R^{nm}$. So $D\mathbf{f}$ is differentiable if and only if $D\mathbf{f}: U \to \R^{nm}$ is differentiable with $A \in L(\R^n; \R^{nm})$. This allows us to recycle our previous theorems about differentiability.
+
In particular, differentiability of $D\mathbf{f}$ at $\mathbf{a}$ follows from the existence of the partial derivatives $D_i (D_j f_k)$ in a neighbourhood of $\mathbf{a}$, together with their continuity at $\mathbf{a}$, for all $k = 1, \cdots, m$ and $i, j = 1, \cdots, n$.
+
+\begin{notation}
+ Write
+ \[
+ D_{ij} \mathbf{f}(\mathbf{a}) = D_i(D_j \mathbf{f})(\mathbf{a}) = \frac{\partial^2}{\partial x_i \partial x_j} \mathbf{f}(\mathbf{a}).
+ \]
+\end{notation}
+
+Let's now go back to the initial definition, and try to interpret it. By linear algebra, in general, a linear map $\phi: \R^\ell \to L(\R^n; \R^m)$ induces a bilinear map $\Phi: \R^\ell \times \R^n \to \R^m$ by
+\[
+ \Phi(\mathbf{u}, \mathbf{v}) = \phi(\mathbf{u})(\mathbf{v}) \in \R^m.
+\]
+In particular, we know
+\begin{align*}
+ \Phi(a \mathbf{u} + b \mathbf{v}, \mathbf{w}) &= a \Phi(\mathbf{u}, \mathbf{w}) + b \Phi(\mathbf{v}, \mathbf{w})\\
+ \Phi(\mathbf{u}, a \mathbf{v} + b \mathbf{w}) &= a \Phi(\mathbf{u}, \mathbf{v}) + b \Phi(\mathbf{u}, \mathbf{w}).
+\end{align*}
+Conversely, if $\Phi: \R^\ell \times \R^n \to \R^m$ is bilinear, then $\phi: \R^\ell \to L(\R^n; \R^m)$ defined by
+\[
+ \phi(\mathbf{u}) = (\mathbf{v} \mapsto \Phi(\mathbf{u}, \mathbf{v}))
+\]
is linear. These are clearly inverse operations to each other. So there is a one-to-one correspondence between bilinear maps $\Phi: \R^\ell \times \R^n \to \R^m$ and linear maps $\phi: \R^\ell \to L(\R^n; \R^m)$.
+
+In other words, instead of treating our second derivative as a weird linear map in $L(\R^n; L(\R^n; \R^m))$, we can view it as a bilinear map $\R^n \times \R^n \to \R^m$.
+\begin{notation}
+ We define $D^2 \mathbf{f}(\mathbf{a}) : \R^n \times \R^n \to \R^m$ by
+ \[
+ D^2 \mathbf{f}(\mathbf{a}) (\mathbf{u}, \mathbf{v}) = D(D \mathbf{f})(\mathbf{a})(\mathbf{u})(\mathbf{v}).
+ \]
+\end{notation}
+We know $D^2 \mathbf{f}(\mathbf{a})$ is a bilinear map.
+
+In coordinates, if
+\[
+ \mathbf{u} = \sum_{j = 1}^n u_j \mathbf{e}_j,\quad \mathbf{v} = \sum_{j = 1}^n v_j \mathbf{e}_j,
+\]
+where $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ are the standard basis for $\R^n$, then using bilinearity, we have
+\[
+ D^2 \mathbf{f}(\mathbf{a})(\mathbf{u}, \mathbf{v}) = \sum_{i = 1}^n \sum_{j = 1}^n D^2 \mathbf{f}(\mathbf{a}) (\mathbf{e}_i, \mathbf{e}_j) u_i v_j.
+\]
+This is very similar to the case of first derivatives, where the derivative can be completely specified by the values it takes on the basis vectors.
+
+In the definition of the second derivative, we can again take $\mathbf{h} = t \mathbf{e}_i$. Then we have
+\[
+ \lim_{t \to 0} \frac{D \mathbf{f}(\mathbf{a} + t \mathbf{e}_i) - D \mathbf{f}(\mathbf{a}) - t D(D \mathbf{f})(\mathbf{a})(\mathbf{e}_i)}{t} = 0.
+\]
+Note that the whole thing at the top is a linear map in $L(\R^n; \R^m)$. We can let the whole thing act on $\mathbf{e}_j$, and obtain
+\[
 \lim_{t \to 0} \frac{D \mathbf{f}(\mathbf{a} + t \mathbf{e}_i)(\mathbf{e}_j) - D \mathbf{f}(\mathbf{a}) (\mathbf{e}_j) - t D (D \mathbf{f})(\mathbf{a})(\mathbf{e}_i)(\mathbf{e}_j)}{t} = 0
\]
for all $i, j = 1, \cdots, n$. Taking the $D^2 \mathbf{f}(\mathbf{a})(\mathbf{e}_i, \mathbf{e}_j)$ term to the other side, we know
+\begin{align*}
+ D^2 \mathbf{f}(\mathbf{a}) (\mathbf{e}_i, \mathbf{e}_j) &= \lim_{t \to 0} \frac{D\mathbf{f}(\mathbf{a} + t \mathbf{e}_i)(\mathbf{e}_j) - D \mathbf{f}(\mathbf{a})(\mathbf{e}_j)}{t}\\
+ &= \lim_{t \to 0} \frac{D_{\mathbf{e}_j} \mathbf{f}(\mathbf{a} + t \mathbf{e}_i) - D_{\mathbf{e}_j} \mathbf{f}(\mathbf{a})}{t}\\
+ &= D_{\mathbf{e}_i} D_{\mathbf{e}_j} \mathbf{f}(\mathbf{a}).
+\end{align*}
+In other words, we have
+\[
 D^2 \mathbf{f}(\mathbf{a})(\mathbf{e}_i, \mathbf{e}_j) = \sum_{k = 1}^m D_{ij} f_k(\mathbf{a}) \mathbf{b}_k,
+\]
+where $\{\mathbf{b}_1, \cdots, \mathbf{b}_m\}$ is the standard basis for $\R^m$. So we have
+\[
 D^2 \mathbf{f}(\mathbf{a})(\mathbf{u}, \mathbf{v}) = \sum_{i, j = 1}^n \sum_{k = 1}^m D_{ij} f_k (\mathbf{a}) u_i v_j \mathbf{b}_k.
+\]
+We have been very careful to keep the right order of the partial derivatives. However, in most cases we care about, it doesn't matter.
+\begin{thm}[Symmetry of mixed partials]
+ Let $U \subseteq \R^n$ be open, $\mathbf{f}: U \to \R^m$, $\mathbf{a} \in U$, and $\rho > 0$ such that $B_\rho(\mathbf{a}) \subseteq U$.
+
+ Let $i, j \in \{1, \cdots, n\}$ be fixed and suppose that $D_i D_j \mathbf{f}(\mathbf{x})$ and $D_j D_i \mathbf{f}(\mathbf{x})$ exist for all $\mathbf{x} \in B_\rho(\mathbf{a})$ and are continuous at $\mathbf{a}$. Then in fact
+ \[
+ D_i D_j \mathbf{f}(\mathbf{a}) = D_j D_i \mathbf{f}(\mathbf{a}).
+ \]
+\end{thm}
+
+The proof is quite short, when we know what to do.
+\begin{proof}
+ wlog, assume $m = 1$. If $i = j$, then there is nothing to prove. So assume $i \not =j$.
+
+ Let
+ \[
+ g_{ij}(t) = f(\mathbf{a} + t \mathbf{e}_i + t \mathbf{e}_j) - f(\mathbf{a} + t \mathbf{e}_i) - f(\mathbf{a} + t \mathbf{e}_j) + f(\mathbf{a}).
+ \]
 Then for each fixed $t$ (small enough that all the points involved lie in $B_\rho(\mathbf{a})$), define $\phi: [0, 1] \to \R$ by
+ \[
+ \phi(s) = f(\mathbf{a} + st \mathbf{e}_i + t \mathbf{e}_j) - f(\mathbf{a} + st \mathbf{e}_i).
+ \]
+ Then we get
+ \[
+ g_{ij}(t) = \phi(1) - \phi(0).
+ \]
+ By the mean value theorem and the chain rule, there is some $\theta \in (0, 1)$ such that
+ \[
+ g_{ij}(t) = \phi'(\theta) = t\Big(D_if(\mathbf{a} + \theta t \mathbf{e}_i + t \mathbf{e}_j) - D_if(\mathbf{a} + \theta t \mathbf{e}_i)\Big).
+ \]
 Now applying the mean value theorem to the function
+ \[
+ s \mapsto D_i f(\mathbf{a} + \theta t \mathbf{e}_i + st \mathbf{e}_j),
+ \]
 we find some $\eta \in (0, 1)$ such that
+ \[
+ g_{ij}(t) = t^2 D_j D_i f(\mathbf{a} + \theta t \mathbf{e}_i + \eta t \mathbf{e}_j).
+ \]
+ We can do the same for $g_{ji}$, and find some $\tilde{\theta}, \tilde{\eta}$ such that
+ \[
 g_{ji}(t) = t^2 D_i D_j f(\mathbf{a} + \tilde{\theta} t \mathbf{e}_i + \tilde{\eta} t \mathbf{e}_j).
+ \]
+ Since $g_{ij} = g_{ji}$, we get
+ \[
 t^2 D_j D_i f(\mathbf{a} + \theta t \mathbf{e}_i + \eta t \mathbf{e}_j) = t^2 D_i D_j f(\mathbf{a} + \tilde{\theta} t \mathbf{e}_i + \tilde{\eta} t \mathbf{e}_j).
+ \]
+ Divide by $t^2$, and take the limit as $t \to 0$. By continuity of the partial derivatives, we get
+ \[
+ D_j D_i f(\mathbf{a}) = D_i D_j f(\mathbf{a}).\qedhere
+ \]
+\end{proof}
+This is nice. Whenever the second derivatives are continuous, the order does not matter. We can alternatively state this result as follows:
+
+\begin{prop}
 If $\mathbf{f}: U \to \R^m$ is differentiable in $U$, and the partial derivatives $D_i D_j \mathbf{f}(\mathbf{x})$ exist in a neighbourhood of $\mathbf{a} \in U$ and are continuous at $\mathbf{a}$, then $D \mathbf{f}$ is differentiable at $\mathbf{a}$ and
 \[
 D^2 \mathbf{f} (\mathbf{a})(\mathbf{u}, \mathbf{v}) = \sum_j \sum_i D_i D_j \mathbf{f}(\mathbf{a}) u_i v_j
 \]
 is a symmetric bilinear form.
+\end{prop}
+
+\begin{proof}
+ This follows from the fact that continuity of second partials implies differentiability, and the symmetry of mixed partials.
+\end{proof}
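The continuity hypothesis in the symmetry theorem cannot simply be dropped. A standard counterexample (not given in the lectures) is the following.
\begin{eg}
 Define $f: \R^2 \to \R$ by $f(0, 0) = 0$ and
 \[
 f(x, y) = \frac{xy(x^2 - y^2)}{x^2 + y^2}
 \]
 otherwise. A direct computation gives $D_1 f(0, y) = -y$ for all $y$, and $D_2 f(x, 0) = x$ for all $x$. Hence
 \[
 D_2 D_1 f(0, 0) = -1 \not= 1 = D_1 D_2 f(0, 0).
 \]
 Of course, the mixed partials here fail to be continuous at the origin.
\end{eg}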
+
+Finally, we conclude with a version of Taylor's theorem for multivariable functions.
+
+\begin{thm}[Second-order Taylor's theorem]
+ Let $f: U \to \R$ be $C^2$, i.e.\ $D_i D_j f(\mathbf{x})$ are continuous for all $\mathbf{x} \in U$. Let $\mathbf{a} \in U$ and $B_r(\mathbf{a}) \subseteq U$. Then
+ \[
+ f(\mathbf{a} + \mathbf{h}) = f(\mathbf{a}) + D f(\mathbf{a})\mathbf{h} + \frac{1}{2} D^2 f(\mathbf{a})(\mathbf{h}, \mathbf{h}) + E(\mathbf{h}),
+ \]
+ where $E(\mathbf{h}) = o(\|\mathbf{h}\|^2)$.
+\end{thm}
+
+\begin{proof}
+ Consider the function
+ \[
+ g(t) = f(\mathbf{a} + t\mathbf{h}).
+ \]
+ Then the assumptions tell us $g$ is twice differentiable. By the 1D Taylor's theorem, we know
+ \[
+ g(1) = g(0) + g'(0) + \frac{1}{2} g''(s)
+ \]
+ for some $s \in [0, 1]$.
+
+ In other words,
+ \begin{align*}
+ f(\mathbf{a} + \mathbf{h}) &= f(\mathbf{a}) + Df(\mathbf{a}) \mathbf{h} + \frac{1}{2} D^2 f(\mathbf{a} + s \mathbf{h}) (\mathbf{h}, \mathbf{h})\\
+ &= f(\mathbf{a}) + Df(\mathbf{a}) \mathbf{h} + \frac{1}{2} D^2 f(\mathbf{a}) (\mathbf{h}, \mathbf{h}) + E(\mathbf{h}),
+ \end{align*}
+ where
+ \[
+ E(\mathbf{h}) = \frac{1}{2}\left(D^2 f(\mathbf{a} + s \mathbf{h}) (\mathbf{h}, \mathbf{h}) - D^2 f(\mathbf{a}) (\mathbf{h}, \mathbf{h})\right).
+ \]
+ By definition of the operator norm, we get
+ \[
+ |E(\mathbf{h})| \leq \frac{1}{2} \|D^2 f(\mathbf{a} + s \mathbf{h}) - D^2 f(\mathbf{a})\| \|\mathbf{h}\|^2.
+ \]
+ By continuity of the second derivative, as $\mathbf{h} \to \mathbf{0}$, we get
+ \[
+ \|D^2 f(\mathbf{a} + s \mathbf{h}) - D^2 f(\mathbf{a})\| \to 0.
+ \]
+ So $E(\mathbf{h}) = o(\|\mathbf{h}\|^2)$. So done.
+\end{proof}
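As an illustrative aside, we can watch $E(\mathbf{h})/\|\mathbf{h}\|^2 \to 0$ numerically for a sample function, with the derivative and Hessian at the origin computed by hand:

```python
import math

# Sample function f(x, y) = sin(x) e^y at a = (0, 0).  By hand,
# Df(0, 0) = (1, 0) and the Hessian at the origin is [[0, 1], [1, 0]].
def f(x, y):
    return math.sin(x) * math.exp(y)

def taylor_error(hx, hy):
    linear = 1 * hx + 0 * hy           # Df(0,0) h
    quadratic = 0.5 * (2 * hx * hy)    # (1/2) h^T [[0,1],[1,0]] h
    return f(hx, hy) - (f(0, 0) + linear + quadratic)

# E(h)/|h|^2 along h = (t, t); the ratios shrink with t, as E(h) = o(|h|^2)
ratios = [abs(taylor_error(t, t)) / (2 * t * t) for t in (1e-1, 1e-2, 1e-3)]
print(ratios)
```

Here the error behaves like $t^3/3$ along $\mathbf{h} = (t, t)$, so the ratio decays linearly in $t$.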
+\end{document}
diff --git a/books/cam/IB_M/linear_algebra.tex b/books/cam/IB_M/linear_algebra.tex
new file mode 100644
index 0000000000000000000000000000000000000000..58ea1d64d4446dee65a78225b964149bf24e8f52
--- /dev/null
+++ b/books/cam/IB_M/linear_algebra.tex
@@ -0,0 +1,4652 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Michaelmas}
+\def\nyear {2015}
+\def\nlecturer {S.\ J.\ Wadsley}
+\def\ncourse {Linear Algebra}
+\def\nofficial{https://www.dpmms.cam.ac.uk/~sjw47/LinearAlgebraM15.pdf}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent Definition of a vector space (over $\R$ or $\C$), subspaces, the space spanned by a subset. Linear independence, bases, dimension. Direct sums and complementary subspaces. \hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Linear maps, isomorphisms. Relation between rank and nullity. The space of linear maps from $U$ to $V$, representation by matrices. Change of basis. Row rank and column rank.\hspace*{\fill} [4]
+
+\vspace{5pt}
+\noindent Determinant and trace of a square matrix. Determinant of a product of two matrices and of the inverse matrix. Determinant of an endomorphism. The adjugate matrix.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Eigenvalues and eigenvectors. Diagonal and triangular forms. Characteristic and minimal polynomials. Cayley-Hamilton Theorem over $\C$. Algebraic and geometric multiplicity of eigenvalues. Statement and illustration of Jordan normal form.\hspace*{\fill} [4]
+
+\vspace{5pt}
+\noindent Dual of a finite-dimensional vector space, dual bases and maps. Matrix representation, rank and determinant of dual map.\hspace*{\fill} [2]
+
+\vspace{5pt}
+\noindent Bilinear forms. Matrix representation, change of basis. Symmetric forms and their link with quadratic forms. Diagonalisation of quadratic forms. Law of inertia, classification by rank and signature. Complex Hermitian forms.\hspace*{\fill} [4]
+
+\vspace{5pt}
+\noindent Inner product spaces, orthonormal sets, orthogonal projection, $V = W \oplus W^\bot$. Gram-Schmidt orthogonalisation. Adjoints. Diagonalisation of Hermitian matrices. Orthogonality of eigenvectors and properties of eigenvalues.\hspace*{\fill} [4]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+In IA Vectors and Matrices, we have learnt about vectors (and matrices) in a rather applied way. A vector was just treated as a ``list of numbers'' representing a point in space. We used these to represent lines, conics, planes and many other geometrical notions. A matrix was treated as a ``physical operation'' on vectors that stretches and rotates them. For example, we studied the properties of rotations, reflections and shears of space. We also used matrices to express and solve systems of linear equations. We mostly took a practical approach in the course.
+
+In IB Linear Algebra, we are going to study vectors in an abstract setting. Instead of treating vectors as ``lists of numbers'', we view them as things we can add and scalar-multiply. We will write down axioms governing how these operations should behave, just like how we wrote down the axioms of group theory. Instead of studying matrices as an array of numbers, we instead look at linear maps between vector spaces abstractly.
+
+In the course, we will, of course, prove that this abstract treatment of linear algebra is just ``the same as'' our previous study of ``vectors as a list of numbers''. Indeed, in certain cases, results are much more easily proved by working with matrices (as an array of numbers) instead of abstract linear maps, and we don't shy away from doing so. However, most of the time, looking at these abstractly will provide a much better fundamental understanding of how things work.
+
+\section{Vector spaces}
+\subsection{Definitions and examples}
+\begin{notation}
+ We will use $\F$ to denote an arbitrary field, usually $\R$ or $\C$.
+\end{notation}
+
+Intuitively, a vector space $V$ over a field $\F$ (or an $\F$-vector space) is a space with two operations:
+\begin{itemize}
+ \item We can add two vectors $\mathbf{v}_1, \mathbf{v}_2 \in V$ to obtain $\mathbf{v}_1 + \mathbf{v}_2 \in V$.
+ \item We can multiply a scalar $\lambda \in \F$ with a vector $\mathbf{v}\in V$ to obtain $\lambda \mathbf{v} \in V$.
+\end{itemize}
+
+Of course, these two operations must satisfy certain axioms before we can call it a vector space. However, before going into these details, we first look at a few examples of vector spaces.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\R^n = \{\text{column vectors of length }n\text{ with coefficients in }\R\}$ with the usual addition and scalar multiplication is a vector space.
+
+ An $m\times n$ matrix $A$ with coefficients in $\R$ can be viewed as a linear map from $\R^n$ to $\R^m$ via $\mathbf{v} \mapsto A\mathbf{v}$.
+
+ This is a motivational example for vector spaces. When confused about definitions, we can often think what the definition means in terms of $\R^n$ and matrices to get some intuition.
+
+ \item Let $X$ be a set and define $\R^X = \{f: X\to \R\}$ with addition $(f + g)(x) = f(x) + g(x)$ and scalar multiplication $(\lambda f)(x) = \lambda f(x)$. This is a vector space.
+
+ More generally, if $V$ is a vector space, $X$ is a set, we can define $V^X = \{f: X \to V\}$ with addition and scalar multiplication as above.
+ \item Let $[a, b]\subseteq \R$ be a closed interval, then
+ \[
+ C([a, b], \R) = \{f\in \R^{[a,b]}: f\text{ is continuous}\}
+ \]
+ is a vector space, with operations as above. We also have
+ \[
+ C^{\infty}([a, b], \R) = \{f\in \R^{[a,b]}: f\text{ is infinitely differentiable}\}
+ \]
+ \item The set of $m\times n$ matrices with coefficients in $\R$, with componentwise addition and scalar multiplication, is a vector space.
+ \end{enumerate}
+\end{eg}
+
+Of course, we cannot take a random set, define some random operations called addition and scalar multiplication, and call it a vector space. These operations have to behave sensibly.
+
+\begin{defi}[Vector space]
+ An \emph{$\F$-vector space} is an (additive) abelian group $V$ together with a function $\F \times V \to V$, written $(\lambda, \mathbf{v}) \mapsto \lambda \mathbf{v}$, such that
+ \begin{enumerate}
+ \item $\lambda(\mu \mathbf{v}) = \lambda \mu \mathbf{v}$ for all $\lambda, \mu \in \F$, $\mathbf{v}\in V$ \hfill (associativity)
+ \item $\lambda(\mathbf{u} + \mathbf{v}) = \lambda \mathbf{u} + \lambda \mathbf{v}$ for all $\lambda\in \F$, $\mathbf{u}, \mathbf{v}\in V$\hfill (distributivity in $V$)
+ \item $(\lambda + \mu) \mathbf{v} = \lambda \mathbf{v} + \mu \mathbf{v}$ for all $\lambda, \mu \in \F$, $\mathbf{v}\in V$ \hfill (distributivity in $\F$)
+ \item $1\mathbf{v} = \mathbf{v}$ for all $\mathbf{v}\in V$ \hfill (identity)
+ \end{enumerate}
+
+ We always write $\mathbf{0}$ for the additive identity in $V$, and call this the identity. By abuse of notation, we also write $0$ for the trivial vector space $\{\mathbf{0}\}$.
+\end{defi}
+In a general vector space, there is no notion of ``coordinates'', length, angle or distance. For example, it would be difficult to assign these quantities to the vector space of real-valued continuous functions in $[a, b]$.
+
+From the axioms, there are a few results we can immediately prove.
+\begin{prop}
+ In any vector space $V$, $0\mathbf{v} = \mathbf{0}$ for all $\mathbf{v}\in V$, and $(-1)\mathbf{v} = -\mathbf{v}$, where $-\mathbf{v}$ is the additive inverse of $\mathbf{v}$.
+\end{prop}
+Proof is left as an exercise.
+
+In mathematics, whenever we define ``something'', we would also like to define a ``sub-something''. In the case of vector spaces, this is a subspace.
+\begin{defi}[Subspace]
+ If $V$ is an $\F$-vector space, then $U\subseteq V$ is an ($\F$-linear) \emph{subspace} if
+ \begin{enumerate}
+ \item $\mathbf{u}, \mathbf{v}\in U$ implies $\mathbf{u} + \mathbf{v} \in U$.
+ \item $\mathbf{u}\in U, \lambda \in \F$ implies $\lambda \mathbf{u}\in U$.
+ \item $\mathbf{0}\in U$.
+ \end{enumerate}
+ These conditions can be expressed more concisely as ``$U$ is non-empty and if $\lambda, \mu\in \F, \mathbf{u}, \mathbf{v}\in U$, then $\lambda \mathbf{u} + \mu \mathbf{v}\in U$''.
+
+ Alternatively, $U$ is a subspace of $V$ if it is itself a vector space, inheriting the operations from $V$.
+
+ We sometimes write $U\leq V$ if $U$ is a subspace of $V$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\{(x_1, x_2, x_3) \in \R^3: x_1 + x_2 + x_3 = t\}$ is a subspace of $\R^3$ iff $t = 0$.
+ \item Let $X$ be a set. We define the \emph{support} of $f$ in $\F^X$ to be $\supp(f) = \{x\in X: f(x) \not= 0\}$. Then the set of functions with finite support forms a vector subspace. This is since $\supp (f + g) \subseteq \supp(f) \cup \supp(g)$, $\supp (\lambda f) = \supp (f)$ (for $\lambda \not= 0$) and $\supp (0) = \emptyset$.
+ \end{enumerate}
+\end{eg}
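As a small aside, the first example can be checked mechanically: for $t \neq 0$, closure under addition already fails.

```python
# For t != 0 the plane x1 + x2 + x3 = t is not closed under addition,
# so it fails the subspace axioms; for t = 0 all checks go through.
def in_plane(v, t):
    return sum(v) == t

u, w = (1, 0, 0), (0, 1, 0)                    # both lie on the plane with t = 1
s = tuple(a + b for a, b in zip(u, w))         # their sum has coordinate sum 2
print(in_plane(u, 1), in_plane(w, 1), in_plane(s, 1))   # True True False
```

Note also that $\mathbf{0}$ lies on the plane only when $t = 0$.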
+
+If we have two subspaces $U$ and $V$, there are several things we can do with them. For example, we can take the intersection $U\cap V$. We will shortly show that this will be a subspace. However, taking the union will in general not produce a vector space. Instead, we need the sum:
+
+\begin{defi}[Sum of subspaces]
+ Suppose $U, W$ are subspaces of an $\F$-vector space $V$. The \emph{sum} of $U$ and $W$ is
+ \[
+ U + W = \{\mathbf{u} + \mathbf{w}: \mathbf{u}\in U, \mathbf{w}\in W\}.
+ \]
+\end{defi}
+
+\begin{prop}
+ Let $U, W$ be subspaces of $V$. Then $U + W$ and $U\cap W$ are subspaces.
+\end{prop}
+
+\begin{proof}
+ Let $\mathbf{u}_i + \mathbf{w}_i \in U + W$, $\lambda, \mu\in \F$. Then
+ \[
+ \lambda(\mathbf{u}_1 + \mathbf{w}_1) + \mu(\mathbf{u}_2 + \mathbf{w}_2) = (\lambda\mathbf{u}_1 + \mu\mathbf{u}_2) + (\lambda\mathbf{w}_1 + \mu\mathbf{w}_2) \in U + W.
+ \]
+ Similarly, if $\mathbf{v}_i \in U\cap W$, then $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2\in U$ and $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2\in W$. So $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2\in U\cap W$.
+
+ Both $U\cap W$ and $U + W$ contain $\mathbf{0}$, and are non-empty. So done.
+\end{proof}
+
+In addition to sub-somethings, we often have quotient-somethings as well.
+\begin{defi}[Quotient spaces]
+ Let $V$ be a vector space, and $U\subseteq V$ a subspace. Then the quotient group $V/U$ can be made into a vector space called the \emph{quotient space}, where scalar multiplication is given by $\lambda(\mathbf{v} + U) = (\lambda \mathbf{v}) + U$.
+
+ This is well defined since if $\mathbf{v} + U = \mathbf{w} + U\in V/U$, then $\mathbf{v} - \mathbf{w} \in U$. Hence for $\lambda \in \F$, we have $\lambda \mathbf{v} - \lambda \mathbf{w} \in U$. So $\lambda \mathbf{v} + U = \lambda \mathbf{w} + U$.
+\end{defi}
+\subsection{Linear independence, bases and the Steinitz exchange lemma}
+Recall that in $\R^n$, we had the ``standard basis'' made of vectors of the form $\mathbf{e}_i = (0, \cdots, 0, 1, 0, \cdots, 0)$, with $1$ in the $i$th component and $0$ otherwise. We call this a \emph{basis} because everything in $\R^n$ can be (uniquely) written as a sum of (scalar multiples of) these basis elements. In other words, the whole $\R^n$ is generated by taking sums and multiples of the basis elements.
+
+We would like to capture this idea in general vector spaces. The most important result in this section is that for any vector space $V$, any two bases must contain the same number of elements. This means we can define the ``dimension'' of a vector space as the number of elements in a basis.
+
+While this result sounds rather trivial, it is a very important one. We will in fact prove a slightly stronger statement than what was stated above, and this ensures that the dimension of a vector space is well-behaved. For example, a proper subspace of a vector space has a smaller dimension than the whole space (at least when the dimensions are finite).
+
+This is not the case when we study modules in IB Groups, Rings and Modules, which are generalizations of vector spaces. Not all modules have bases, which makes it difficult to define the dimension. Even for those that do, the behaviour of the ``dimension'' is complicated when, say, we take submodules. The existence and good behaviour of bases and dimension is what makes linear algebra different from the study of modules.
+
+\begin{defi}[Span]
+ Let $V$ be a vector space over $\F$ and $S\subseteq V$. The \emph{span} of $S$ is defined as
+ \[
+ \bra S\ket = \left\{\sum_{i = 1}^n \lambda_i \mathbf{s}_i : \lambda_i \in \F, \mathbf{s}_i \in S, n \geq 0\right\}
+ \]
+ This is the smallest subspace of $V$ containing $S$.
+
+ Note that the sums must be finite. We will not play with infinite sums, since the notion of convergence is not even well defined in a general vector space.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $V = \R^3$ and $S = \left\{\begin{pmatrix}1\\0\\0\end{pmatrix}, \begin{pmatrix}0\\1\\1\end{pmatrix}, \begin{pmatrix}1\\2\\2\end{pmatrix}\right\}$. Then
+ \[
+ \bra S\ket = \left\{
+ \begin{pmatrix}
+ a\\b\\b\\
+ \end{pmatrix}: a, b\in \R
+ \right\}.
+ \]
+ Note that any subset of $S$ of order 2 has the same span as $S$.
+ \item Let $X$ be a set, $x \in X$. Define the function $\delta x: X\to \F$ by
+ \[
+ \delta x(y) =
+ \begin{cases}
+ 1 & y = x\\
+ 0 & y\not= x
+ \end{cases}.
+ \]
+ Then $\bra \delta x: x \in X\ket$ is the set of all functions with finite support.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Spanning set]
+ Let $V$ be a vector space over $\F$ and $S\subseteq V$. $S$ \emph{spans} $V$ if $\bra S\ket = V$.
+\end{defi}
+
+\begin{defi}[Linear independence]
+ Let $V$ be a vector space over $\F$ and $S\subseteq V$. Then $S$ is \emph{linearly independent} if whenever
+ \[
+ \sum_{i = 1}^n \lambda_i \mathbf{s}_i = \mathbf{0}\text{ with } \lambda_i \in \F, \mathbf{s}_1, \mathbf{s}_2, \cdots, \mathbf{s}_n \in S\text{ distinct},
+ \]
+ we must have $\lambda_i = 0$ for all $i$.
+
+ If $S$ is not linearly independent, we say it is \emph{linearly dependent}.
+\end{defi}
+
+\begin{defi}[Basis]
+ Let $V$ be a vector space over $\F$ and $S\subseteq V$. Then $S$ is a \emph{basis} for $V$ if $S$ is linearly independent and spans $V$.
+\end{defi}
+
+\begin{defi}[Finite dimensional]
+ A vector space is \emph{finite dimensional} if there is a finite basis.
+\end{defi}
+Ideally, we would want to define the \emph{dimension} as the number of vectors in the basis. However, we must first show that this is well-defined. It is certainly plausible that a vector space has a basis of size $7$ as well as a basis of size $3$. We must show that this can never happen, which is something we'll do soon.
+
+We will first have an example:
+\begin{eg}
+ Again, let $V = \R^3$ and $S = \left\{\begin{pmatrix}1\\0\\0\end{pmatrix}, \begin{pmatrix}0\\1\\1\end{pmatrix}, \begin{pmatrix}1\\2\\2\end{pmatrix}\right\}$. Then $S$ is linearly dependent since
+ \[
+ 1\begin{pmatrix}1\\0\\0\end{pmatrix} + 2\begin{pmatrix}0\\1\\1\end{pmatrix} + (-1) \begin{pmatrix}1\\2\\2\end{pmatrix} = \mathbf{0}.
+ \]
+ $S$ also does not span $V$ since $\begin{pmatrix}0\\0\\1\end{pmatrix}\not \in \bra S\ket$.
+\end{eg}
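Both claims can be verified mechanically. The sketch below (an aside, using exact row reduction over the rationals) computes ranks: rank $2$ for the three vectors confirms linear dependence, and appending $(0,0,1)$ raises the rank, so it lies outside $\bra S\ket$.

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals, so the answer is exact.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

S = [[1, 0, 0], [0, 1, 1], [1, 2, 2]]    # the three vectors above, as rows
print(rank(S))                            # 2: the three vectors are dependent
print(rank(S + [[0, 0, 1]]))              # 3: so (0,0,1) is not in the span
```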
+
+Note that no linearly independent set can contain $\mathbf{0}$, as $1\cdot \mathbf{0} = \mathbf{0}$. We also have $\bra \emptyset\ket = \{\mathbf{0}\}$ and $\emptyset$ is a basis for this space.
+
+There is an alternative way in which we can define linear independence.
+\begin{lemma}
+ $S\subseteq V$ is linearly dependent if and only if there are distinct $\mathbf{s}_0, \cdots, \mathbf{s}_n \in S$ and $\lambda_1, \cdots, \lambda_n\in \F$ such that
+ \[
+ \sum_{i = 1}^n \lambda_i \mathbf{s}_i = \mathbf{s}_0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ If $S$ is linearly dependent, then there are some $\lambda_1, \cdots, \lambda_n \in \F$ not all zero and $\mathbf{s}_1,\cdots, \mathbf{s}_n \in S$ such that $\sum \lambda_i \mathbf{s}_i = \mathbf{0}$. Wlog, let $\lambda_1\not= 0$. Then
+ \[
+ \mathbf{s}_1 = \sum_{i = 2}^n -\frac{\lambda_i}{\lambda_1} \mathbf{s}_i.
+ \]
+ Conversely, if $\mathbf{s}_0 = \sum_{i = 1}^n \lambda_i \mathbf{s}_i$, then
+ \[
+ (-1)\mathbf{s}_0 + \sum_{i = 1}^n \lambda_i \mathbf{s}_i = \mathbf{0}.
+ \]
+ So $S$ is linearly dependent.
+\end{proof}
+
+This in turn gives an alternative characterization of what it means to be a basis:
+\begin{prop}
+ If $S = \{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ is a subset of $V$ over $\F$, then it is a basis if and only if every $\mathbf{v}\in V$ can be written uniquely as a finite linear combination of elements in $S$, i.e.\ as
+ \[
+ \mathbf{v} = \sum_{i = 1}^n \lambda_i \mathbf{e}_i.
+ \]
+\end{prop}
+
+\begin{proof}
+ We can view this as a combination of two statements: every $\mathbf{v}\in V$ can be written in such a form in at least one way, and in at most one way. We will see that the first part corresponds to $S$ spanning $V$, and the second part corresponds to $S$ being linearly independent.
+
+ In fact, $S$ spanning $V$ is defined exactly to mean that every $\mathbf{v}\in V$ can be written as a finite linear combination of elements of $S$ in at least one way.
+
+ Now suppose that $S$ is linearly independent, and we have
+ \[
+ \mathbf{v} = \sum_{i = 1}^n \lambda_i \mathbf{e}_i = \sum_{i = 1}^n\mu_i \mathbf{e}_i.
+ \]
+ Then we have
+ \[
+ \mathbf{0} = \mathbf{v} - \mathbf{v} = \sum_{i = 1}^n (\lambda_i - \mu_i) \mathbf{e}_i.
+ \]
+ Linear independence implies that $\lambda_i - \mu_i = 0$ for all $i$. Hence $\lambda_i = \mu_i$. So $\mathbf{v}$ can be expressed in a unique way.
+
+ On the other hand, if $S$ is not linearly independent, then we have
+ \[
+ \mathbf{0} = \sum_{i = 1}^n \lambda_i \mathbf{e}_i
+ \]
+ where $\lambda_i \not= 0$ for some $i$. But we also know that
+ \[
+ \mathbf{0} = \sum_{i = 1}^n 0\cdot \mathbf{e}_i.
+ \]
+ So there are two ways to write $\mathbf{0}$ as a linear combination. So done.
+\end{proof}
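As a concrete aside, with the (hypothetical) basis $\{(1, 1), (1, -1)\}$ of $\R^2$, the unique coordinates of a vector come from solving a linear system; uniqueness is exactly the invertibility of the basis matrix.

```python
from fractions import Fraction

# Hypothetical basis of R^2: e1 = (1, 1), e2 = (1, -1).
# Coordinates of v solve l1*e1 + l2*e2 = v; the basis matrix has
# determinant -2 != 0, so Cramer's rule yields the unique solution.
def coords(v):
    det = Fraction(1 * (-1) - 1 * 1)               # det [[1, 1], [1, -1]] = -2
    l1 = Fraction(v[0] * (-1) - 1 * v[1]) / det    # replace first column by v
    l2 = Fraction(1 * v[1] - v[0] * 1) / det       # replace second column by v
    return l1, l2

l1, l2 = coords((3, 1))
print(l1, l2)   # 2 1, since (3, 1) = 2*(1, 1) + 1*(1, -1)
```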
+
+Now we come to the key theorem:
+\begin{thm}[Steinitz exchange lemma]
+ Let $V$ be a vector space over $\F$, and $S = \{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ a finite linearly independent subset of $V$, and $T$ a spanning subset of $V$. Then there is some $T'\subseteq T$ of order $n$ such that $(T\setminus T') \cup S$ still spans $V$. In particular, $|T| \geq n$.
+\end{thm}
+What does this actually say? This says if $T$ is spanning and $S$ is independent, there is a way of grabbing $|S|$ many elements away from $T$ and replace them with $S$, and the result will still be spanning.
+
+In some sense, the final remark is the most important part. It tells us that we cannot have an independent set larger than a spanning set, and most of our corollaries later will only use this remark.
+
+This is sometimes stated in the following alternative way for $|T| < \infty$.
+\begin{cor}
+ Let $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ be a linearly independent subset of $V$, and suppose $\{\mathbf{f}_1, \cdots, \mathbf{f}_m\}$ spans $V$. Then there is a re-ordering of the $\{\mathbf{f}_i\}$ such that $\{\mathbf{e}_1,\cdots, \mathbf{e}_n, \mathbf{f}_{n + 1}, \cdots, \mathbf{f}_m\}$ spans $V$.
+\end{cor}
+
+The proof is going to be slightly technical and notationally daunting. So it helps to give a brief overview of what we are going to do in words first. The idea is to do the replacement one by one. The first one is easy. Start with $\mathbf{e}_1$. Since $T$ is spanning, we can write
+\[
+ \mathbf{e}_1 = \sum \lambda_i \mathbf{t}_i
+\]
+for some $\mathbf{t}_i \in T, \lambda_i \in \F$ non-zero. We then replace $\mathbf{t}_1$ with $\mathbf{e}_1$. The result is still spanning, since the above formula allows us to write $\mathbf{t}_1$ in terms of $\mathbf{e}_1$ and the other $\mathbf{t}_i$.
+
+We continue inductively. For the $r$th element, we again write
+\[
+ \mathbf{e}_r = \sum \lambda_i \mathbf{t}_i.
+\]
+We would like to just pick a random $\mathbf{t}_i$ and replace it with $\mathbf{e}_r$. However, we cannot do this arbitrarily, since the lemma wants us to replace something \emph{in $T$} with $\mathbf{e}_r$. After all that replacement procedure before, some of the $\mathbf{t}_i$ might have actually come from $S$.
+
+This is where the linear independence of $S$ kicks in. While some of the $\mathbf{t}_i$ might be from $S$, we cannot possibly have \emph{all} of them being from $S$, or else this violates the linear independence of $S$. Hence there is something genuinely from $T$, and we can safely replace it with $\mathbf{e}_r$.
+
+We now write this argument properly and formally.
+\begin{proof}
+ Suppose that we have already found $T_r'\subseteq T$ of order $0 \leq r < n$ such that
+ \[
+ T_r = (T\setminus T_r') \cup \{\mathbf{e}_1, \cdots, \mathbf{e}_r\}
+ \]
+ spans $V$.
+
+ (Note that the case $r = 0$ is trivial, since we can take $T_r' = \emptyset$, and the case $r = n$ is the theorem which we want to achieve.)
+
+ Suppose we have these. Since $T_r$ spans $V$, we can write
+ \[
+ \mathbf{e}_{r + 1} = \sum_{i = 1}^k \lambda_i \mathbf{t}_i,\quad \lambda_i \in \F, \mathbf{t}_i \in T_r.
+ \]
+ We know that the $\mathbf{e}_i$ are linearly independent, so the $\mathbf{t}_i$'s with non-zero coefficients cannot all be $\mathbf{e}_i$'s. So there is some $j$ with $\lambda_j \not= 0$ and $\mathbf{t}_j \in (T\setminus T_r')$. We can write this as
+ \[
+ \mathbf{t}_j = \frac{1}{\lambda_j} \mathbf{e}_{r + 1} + \sum_{i \not= j} -\frac{\lambda_i}{\lambda_j} \mathbf{t}_i.
+ \]
+ We let $T_{r + 1}' = T_r' \cup \{\mathbf{t}_j\}$ of order $r + 1$, and
+ \[
+ T_{r + 1} = (T\setminus T_{r + 1}') \cup \{\mathbf{e}_1, \cdots, \mathbf{e}_{r + 1}\} = (T_r \setminus \{\mathbf{t}_j\}) \cup \{\mathbf{e}_{r + 1}\}.
+ \]
+ Since $\mathbf{t}_j$ is in the span of $T_r\cup \{\mathbf{e}_{r + 1}\}$, we have $\mathbf{t}_j \in \bra T_{r + 1}\ket$. So
+ \[
+ V \supseteq \bra T_{r + 1}\ket \supseteq \bra T_r \ket = V.
+ \]
+ So $\bra T_{r + 1}\ket = V$.
+
+ Hence we can inductively find $T_n$.
+\end{proof}
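The replacement procedure in the proof can be simulated. The following sketch (an aside, with hypothetical example sets in $\R^3$) performs the exchange one element at a time, using exact rank computations to check that the set stays spanning:

```python
from fractions import Fraction

def rank(rows):
    # exact rank via row reduction over the rationals
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def spans_R3(vectors):
    return rank(vectors) == 3

# Hypothetical example: S is linearly independent, T spans R^3.
S = [[1, 1, 0], [0, 1, 1]]
T = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]

current, removed = list(T), []
for e in S:
    # As in the proof, find some t genuinely from T whose removal,
    # together with inserting e, leaves a spanning set.
    for t in current:
        if t in T:
            candidate = [v for v in current if v != t] + [e]
            if spans_R3(candidate):
                current, removed = candidate, removed + [t]
                break

print(spans_R3(current), len(removed))   # still spanning, |S| elements swapped out
```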
+
+From this lemma, we can immediately deduce a lot of important corollaries.
+\begin{cor}
+ Suppose $V$ is a vector space over $\F$ with a basis of order $n$. Then
+ \begin{enumerate}
+ \item Every basis of $V$ has order $n$.
+ \item Any linearly independent set of order $n$ is a basis.
+ \item Every spanning set of order $n$ is a basis.
+ \item Every finite spanning set contains a basis.
+ \item Every linearly independent subset of $V$ can be extended to a basis.
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}
+ Let $S = \{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ be the basis for $V$.
+ \begin{enumerate}
+ \item Suppose $T$ is another basis. Since $S$ is independent and $T$ is spanning, $|T| \geq |S|$.
+
+ The other direction is less trivial, since $T$ might be infinite, and Steinitz does not immediately apply. Instead, we argue as follows: since $T$ is linearly independent, every finite subset of $T$ is independent. Also, $S$ is spanning. So every finite subset of $T$ has order at most $|S|$. So $|T| \leq |S|$. So $|T| = |S|$.
+
+ \item Suppose now that $T$ is a linearly independent subset of order $n$, but $\bra T\ket \not= V$. Then there is some $\mathbf{v} \in V\setminus \bra T\ket$. We now show that $T\cup \{\mathbf{v}\}$ is independent. Indeed, if
+ \[
+ \lambda_0 \mathbf{v} + \sum_{i = 1}^m \lambda_i \mathbf{t}_i = \mathbf{0}
+ \]
+ with $\lambda_i \in \F$, $\mathbf{t}_1, \cdots, \mathbf{t}_m\in T$ distinct, then
+ \[
+ \lambda_0 \mathbf{v} = \sum_{i = 1}^m (-\lambda_i) \mathbf{t}_i.
+ \]
+ Then $\lambda_0 \mathbf{v} \in \bra T\ket$. If $\lambda_0 \not= 0$, this would give $\mathbf{v} \in \bra T\ket$, a contradiction. So $\lambda_0 = 0$, and as $T$ is linearly independent, we have $\lambda_1 = \cdots = \lambda_m = 0$. So $T\cup \{\mathbf{v}\}$ is a linearly independent subset of size $> n$. This is a contradiction since $S$ is a spanning set of size $n$.
+
+ \item Let $T$ be a spanning set of order $n$. If $T$ were linearly dependent, then there is some $\mathbf{t}_0, \cdots, \mathbf{t}_m \in T$ distinct and $\lambda_1, \cdots, \lambda_m \in \F$ such that
+ \[
+ \mathbf{t}_0 = \sum \lambda_i \mathbf{t}_i.
+ \]
+ So $\mathbf{t}_0 \in \bra T\setminus \{\mathbf{t}_0\}\ket$, i.e.\ $\bra T\setminus \{\mathbf{t}_0\} \ket = V$. So $T\setminus \{\mathbf{t}_0\}$ is a spanning set of order $n - 1$, which is a contradiction.
+
+ \item Suppose $T$ is any finite spanning set. Let $T' \subseteq T$ be a spanning set of least possible size. This exists because $T$ is finite. If $T'$ has size $n$, then we are done by (iii). Otherwise, by the Steinitz exchange lemma, it has size $|T'| > n$. But then $T'$ must be linearly dependent, since a linearly independent set has size at most $|S| = n$. So there are some $\mathbf{t}_0, \cdots, \mathbf{t}_m \in T'$ distinct and $\lambda_1, \cdots, \lambda_m \in \F$ such that $\mathbf{t}_0 = \sum \lambda_i \mathbf{t}_i$. Then $T'\setminus \{\mathbf{t}_0\}$ is a smaller spanning set. Contradiction.
+
+ \item Suppose $T$ is a linearly independent set. Since $S$ spans, there is some $S' \subseteq S$ of order $|T|$ such that $(S\setminus S')\cup T$ spans $V$ by the Steinitz exchange lemma. So by (ii), $(S\setminus S')\cup T$ is a basis of $V$ containing $T$.\qedhere
+ \end{enumerate}
+\end{proof}
+Note that the last part is where we actually use the full result of Steinitz.
+
+Finally, we can use this to define the dimension.
+\begin{defi}[Dimension]
+ If $V$ is a vector space over $\F$ with a finite basis $S$, then the \emph{dimension} of $V$ is
+ \[
+ \dim V = \dim_{\F}V = |S|.
+ \]
+\end{defi}
+By the corollary, $\dim V$ does not depend on the choice of $S$. However, it does depend on $\F$. For example, $\dim_\C \C = 1$ (since $\{1\}$ is a basis), but $\dim_\R \C = 2$ (since $\{1, i\}$ is a basis).
+
+After defining the dimension, we can prove a few things about dimensions.
+\begin{lemma}
+ If $V$ is a finite dimensional vector space over $\F$ and $U\subseteq V$ is a proper subspace, then $U$ is finite dimensional and $\dim U < \dim V$.
+\end{lemma}
+
+\begin{proof}
+ Every linearly independent subset of $V$ has size at most $\dim V$. So let $S \subseteq U$ be a linearly independent subset of largest size. We want to show that $S$ spans $U$ and $|S| < \dim V$.
+
+ If $\mathbf{v}\in V\setminus \bra S\ket$, then $S\cup \{\mathbf{v}\}$ is linearly independent. So $\mathbf{v}\not\in U$ by maximality of $S$. This means that $\bra S\ket = U$.
+
+ Since $U\not= V$, there is some $\mathbf{v}\in V\setminus U = V\setminus \bra S\ket$. So $S\cup \{\mathbf{v}\}$ is a linearly independent subset of order $|S| + 1$. So $|S| + 1 \leq \dim V$. In particular, $\dim U = |S| < \dim V$.
+\end{proof}
+
+\begin{prop}
+ If $U, W$ are subspaces of a finite dimensional vector space $V$, then
+ \[
+ \dim (U + W) = \dim U + \dim W - \dim (U\cap W).
+ \]
+\end{prop}
+The proof is not hard, as long as we manage to pick the right basis to do the proof. This is our slogan:
+\begin{center}
+ When you choose a basis, always choose the right basis.
+\end{center}
+We need a basis for all four of them, and we want to compare the bases. So we want to pick bases that are compatible.
+
+\begin{proof}
+ Let $R = \{\mathbf{v}_1, \cdots, \mathbf{v}_r\}$ be a basis for $U\cap W$. This is a linearly independent subset of $U$. So we can extend it to be a basis of $U$ by
+ \[
+ S = \{\mathbf{v}_1, \cdots, \mathbf{v}_r, \mathbf{u}_{r + 1}, \cdots, \mathbf{u}_s\}.
+ \]
+ Similarly, for $W$, we can obtain a basis
+ \[
+ T = \{\mathbf{v}_1, \cdots, \mathbf{v}_r, \mathbf{w}_{r + 1}, \cdots, \mathbf{w}_t\}.
+ \]
+ We want to show that $\dim (U + W) = s + t - r$. It is sufficient to prove that $S\cup T$ is a basis for $U + W$.
+
+ We first show spanning. Suppose $\mathbf{u} + \mathbf{w} \in U + W$, $\mathbf{u}\in U, \mathbf{w}\in W$. Then $\mathbf{u}\in \bra S\ket$ and $\mathbf{w}\in \bra T\ket$. So $\mathbf{u} + \mathbf{w} \in \bra S\cup T\ket$. So $U + W = \bra S \cup T\ket$.
+
+ To show linear independence, suppose we have a linear relation
+ \[
+ \sum_{i = 1}^r \lambda_i \mathbf{v}_i + \sum_{j = r + 1}^s \mu_j \mathbf{u}_j + \sum_{k = r + 1}^t \nu_k \mathbf{w}_k = \mathbf{0}.
+ \]
+ So
+ \[
+ \sum \lambda_i \mathbf{v}_i + \sum \mu_j \mathbf{u}_j = - \sum \nu_k \mathbf{w}_k.
+ \]
+ Since the left hand side is something in $U$, and the right hand side is something in $W$, they both lie in $U\cap W$.
+
+ Since $S$ is a basis of $U$, there is only one way of writing the left hand vector as a sum of $\mathbf{v}_i$ and $\mathbf{u}_j$. However, since $R$ is a basis of $U\cap W$, we can write the left hand vector just as a sum of $\mathbf{v}_i$'s. So we must have $\mu_j = 0$ for all $j$. Then we have
+ \[
+ \sum \lambda_i \mathbf{v}_i + \sum \nu_k \mathbf{w}_k = \mathbf{0}.
+ \]
+ Finally, since $T$ is linearly independent, $\lambda_i = \nu_k = 0$ for all $i, k$. So $S\cup T$ is linearly independent.
+\end{proof}
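As an aside, we can confirm the formula on a concrete (hypothetical) example by computing dimensions as exact ranks of spanning vectors:

```python
from fractions import Fraction

def rank(rows):
    # exact rank via row reduction over the rationals
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical subspaces of R^3, given by spanning vectors (as rows):
U = [[1, 0, 0], [0, 1, 0]]   # the x-y plane
W = [[0, 1, 0], [0, 0, 1]]   # the y-z plane; the intersection is
                             # spanned by (0, 1, 0), so it has dimension 1
dim_U, dim_W, dim_sum = rank(U), rank(W), rank(U + W)
print(dim_sum, dim_U + dim_W - 1)   # both sides of dim(U+W) = dim U + dim W - dim(U cap W)
```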
+
+\begin{prop}
+ If $V$ is a finite dimensional vector space over $\F$ and $U\subseteq V$ is a subspace, then
+ \[
+ \dim V = \dim U + \dim V/U.
+ \]
+\end{prop}
+We can view this as a linear algebra version of Lagrange's theorem. Combined with the first isomorphism theorem for vector spaces, this gives the rank-nullity theorem.
+
+\begin{proof}
+ Let $\{\mathbf{u}_1, \cdots, \mathbf{u}_m\}$ be a basis for $U$ and extend this to a basis $\{\mathbf{u}_1, \cdots, \mathbf{u}_m,\allowbreak \mathbf{v}_{m + 1}, \cdots, \mathbf{v}_n\}$ for $V$. We want to show that $\{\mathbf{v}_{m + 1} + U, \cdots, \mathbf{v}_n + U\}$ is a basis for $V/U$.
+
+ It is easy to see that this spans $V/U$. If $\mathbf{v} + U \in V/U$, then we can write
+ \[
+ \mathbf{v} = \sum \lambda_i \mathbf{u}_i + \sum \mu_i \mathbf{v}_i.
+ \]
+ Then
+ \[
+ \mathbf{v} + U = \sum \mu_i (\mathbf{v}_i + U) + \sum \lambda_i (\mathbf{u}_i + U) = \sum \mu_i (\mathbf{v}_i + U).
+ \]
+ So done.
+
+ To show that they are linearly independent, suppose that
+ \[
+ \sum \lambda_i (\mathbf{v}_i + U) = \mathbf{0} + U = U.
+ \]
+ Then this requires
+ \[
+ \sum \lambda_i \mathbf{v}_i \in U.
+ \]
+ Then we can write this as a linear combination of the $\mathbf{u}_i$'s. So
+ \[
+ \sum \lambda_i \mathbf{v}_i = \sum \mu_j \mathbf{u}_j
+ \]
+ for some $\mu_j$. Since $\{\mathbf{u}_1, \cdots, \mathbf{u}_m, \mathbf{v}_{m + 1}, \cdots, \mathbf{v}_n\}$ is a basis for $V$, we must have $\lambda_i = \mu_j = 0$ for all $i, j$. So $\{\mathbf{v}_i + U\}$ is linearly independent.
+\end{proof}
+\subsection{Direct sums}
+We are going to define direct sums in many ways in order to confuse students.
+\begin{defi}[(Internal) direct sum]
+ Suppose $V$ is a vector space over $\F$ and $U, W\subseteq V$ are subspaces. We say that $V$ is the \emph{(internal) direct sum} of $U$ and $W$ if
+ \begin{enumerate}
+ \item $U + W = V$
+ \item $U \cap W = 0$.
+ \end{enumerate}
+ We write $V = U\oplus W$.
+
+ Equivalently, this requires that every $\mathbf{v}\in V$ can be written uniquely as $\mathbf{u} + \mathbf{w}$ with $\mathbf{u}\in U, \mathbf{w}\in W$. We say that $U$ and $W$ are \emph{complementary subspaces} of $V$.
+\end{defi}
+You will show in the example sheets that given any subspace $U \subseteq V$, $U$ must have a complementary subspace in $V$.
+
+\begin{eg}
+ Let $V = \R^2$, and $U = \bra \begin{pmatrix}0\\1\end{pmatrix}\ket$. Then $\bra \begin{pmatrix}1\\1\end{pmatrix}\ket$ and $\bra \begin{pmatrix}1\\0\end{pmatrix}\ket$ are both complementary subspaces to $U$ in $V$.
+\end{eg}
+
+\begin{defi}[(External) direct sum]
+ If $U, W$ are vector spaces over $\F$, the \emph{(external) direct sum} is
+ \[
+ U\oplus W = \{(\mathbf{u}, \mathbf{w}): \mathbf{u}\in U, \mathbf{w}\in W\},
+ \]
+ with addition and scalar multiplication componentwise:
+ \[
+ (\mathbf{u}_1, \mathbf{w}_1) + (\mathbf{u}_2, \mathbf{w}_2) = (\mathbf{u}_1 + \mathbf{u}_2, \mathbf{w}_1 + \mathbf{w}_2),\quad \lambda (\mathbf{u}, \mathbf{w}) = (\lambda \mathbf{u}, \lambda \mathbf{w}).
+ \]
+\end{defi}
+The difference between these two definitions is that the first is decomposing $V$ into smaller spaces, while the second is building a bigger space based on two spaces.
+
+Note, however, that the external direct sum $U\oplus W$ is the internal direct sum of $U$ and $W$ viewed as subspaces of $U\oplus W$, i.e.\ as the internal direct sum of $\{(\mathbf{u}, \mathbf{0}): \mathbf{u}\in U\}$ and $\{(\mathbf{0}, \mathbf{w}): \mathbf{w}\in W\}$. So these two are indeed compatible notions, and this is why we give them the same name and notation.
+
+\begin{defi}[(Multiple) (internal) direct sum]
+ If $U_1, \cdots, U_n\subseteq V$ are subspaces of $V$, then $V$ is the \emph{(internal) direct sum}
+ \[
+ V = U_1 \oplus \cdots \oplus U_n = \bigoplus_{i = 1}^n U_i
+ \]
+ if every $\mathbf{v}\in V$ can be written uniquely as $\mathbf{v} = \sum \mathbf{u}_i$ with $\mathbf{u}_i \in U_i$.
+
+ This can be extended to an infinite sum with the same definition, just noting that the sum $\mathbf{v} = \sum \mathbf{u}_i$ has to be finite.
+\end{defi}
+For more details, see example sheet 1 Q. 10, where we prove in particular that $\dim V = \sum \dim U_i$.
+
+\begin{defi}[(Multiple) (external) direct sum]
+ If $U_1, \cdots, U_n$ are vector spaces over $\F$, the external direct sum is
+ \[
+ U_1 \oplus \cdots \oplus U_n = \bigoplus_{i = 1}^n U_i = \{(\mathbf{u}_1, \cdots, \mathbf{u}_n): \mathbf{u}_i \in U_i\},
+ \]
+ with pointwise operations.
+
+ This can be made into an infinite sum if we require that all but finitely many of the $\mathbf{u}_i$ have to be zero.
+\end{defi}
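+
+\begin{eg}
+ As a concrete instance of both notions, $\F^2 \oplus \F^3 \cong \F^5$ via $((x_1, x_2), (y_1, y_2, y_3)) \mapsto (x_1, x_2, y_1, y_2, y_3)$. Internally, $\F^n = \bigoplus_{i = 1}^n \bra \mathbf{e}_i\ket$, since every vector is uniquely the sum of its coordinates times the standard basis vectors.
+\end{eg}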
+
+\section{Linear maps}
+In mathematics, apart from studying objects, we would like to study functions between objects as well. In particular, we would like to study functions that respect the structure of the objects. With vector spaces, the kinds of functions we are interested in are \emph{linear maps}.
+\subsection{Definitions and examples}
+\begin{defi}[Linear map]
+ Let $U, V$ be vector spaces over $\F$. Then $\alpha: U\to V$ is a \emph{linear map} if
+ \begin{enumerate}
+ \item $\alpha(\mathbf{u}_1 + \mathbf{u}_2) = \alpha(\mathbf{u}_1) + \alpha(\mathbf{u}_2)$ for all $\mathbf{u}_i \in U$.
+ \item $\alpha(\lambda \mathbf{u}) = \lambda \alpha (\mathbf{u})$ for all $\lambda \in \F, \mathbf{u}\in U$.
+ \end{enumerate}
+ We write $\mathcal{L}(U, V)$ for the set of linear maps $U\to V$.
+\end{defi}
+There are a few things we should take note of:
+\begin{itemize}
+ \item If we are lazy, we can combine the two requirements to the single requirement that
+ \[
+ \alpha (\lambda \mathbf{u}_1 + \mu \mathbf{u}_2) = \lambda \alpha(\mathbf{u}_1) + \mu \alpha(\mathbf{u}_2).
+ \]
+ \item It is easy to see that if $\alpha$ is linear, then it is a group homomorphism (if we view vector spaces as groups). In particular, $\alpha (\mathbf{0}) = \mathbf{0}$.
+ \item If we want to stress the field $\F$, we say that $\alpha$ is $\F$-linear. For example, complex conjugation is a map $\C \to \C$ that is $\R$-linear but not $\C$-linear.
+\end{itemize}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $A$ be an $n\times m$ matrix with coefficients in $\F$. We will write $A\in M_{n, m}(\F)$. Then $\alpha: \F^m \to \F^n$ defined by $\mathbf{v}\mapsto A\mathbf{v}$ is linear.
+
+ Recall that matrix multiplication is defined as follows: if $A_{ij}$ is the $ij$th entry of $A$, then the $i$th entry of $A\mathbf{v}$ is $\sum_{j = 1}^m A_{ij}v_j$. So we have
+ \begin{align*}
+ \alpha(\lambda \mathbf{u} + \mu \mathbf{v})_i &= \sum_{j = 1}^m A_{ij}(\lambda \mathbf{u} + \mu \mathbf{v})_j \\
+ &= \lambda \sum_{j = 1}^m A_{ij}u_j + \mu \sum_{j = 1}^m A_{ij} v_j \\
+ &= \lambda \alpha(\mathbf{u})_i + \mu \alpha(\mathbf{v})_i.
+ \end{align*}
+ So $\alpha$ is linear.
+ \item Let $X$ be a set and $g\in \F^X$. Then we define $m_g: \F^X \to \F^X$ by $m_g(f)(x) = g(x) f(x)$. Then $m_g$ is linear. For example, $f(x) \mapsto 2x^2 f(x)$ is linear.
+ \item Integration $I: C([a, b], \R) \to C([a, b], \R)$ defined by $I(f)(x) = \int_a^x f(t) \;\d t$ is linear.
+ \item Differentiation $D: C^\infty ([a, b], \R) \to C^\infty ([a, b], \R)$ defined by $f\mapsto f'$ is linear.
+ \item If $\alpha, \beta\in \mathcal{L}(U, V)$, then $\alpha + \beta$ defined by $(\alpha + \beta)(\mathbf{u}) = \alpha(\mathbf{u}) + \beta(\mathbf{u})$ is linear.
+
+ Also, if $\lambda \in \F$, then $\lambda \alpha$ defined by $(\lambda \alpha)(\mathbf{u}) = \lambda (\alpha (\mathbf{u}))$ is also linear.
+
+ In this way, $\mathcal{L}(U, V)$ is also a vector space over $\F$.
+ \item Composition of linear maps is linear. Using this, we can show that many things are linear, like differentiating twice, or adding and then multiplying linear maps.
+ \end{enumerate}
+\end{eg}
+
+Just like everything else, we want to define isomorphisms.
+\begin{defi}[Isomorphism]
+ We say a linear map $\alpha: U\to V$ is an \emph{isomorphism} if there is some $\beta: V\to U$ (also linear) such that $\alpha \circ \beta = \id_V$ and $\beta\circ \alpha = \id_U$.
+
+ If there exists an isomorphism $U\to V$, we say $U$ and $V$ are \emph{isomorphic}, and write $U\cong V$.
+\end{defi}
+
+\begin{lemma}
+ If $U$ and $V$ are vector spaces over $\F$ and $\alpha: U\to V$, then $\alpha$ is an isomorphism iff $\alpha$ is a bijective linear map.
+\end{lemma}
+
+\begin{proof}
+ If $\alpha$ is an isomorphism, then it is clearly bijective since it has an inverse function.
+
+ Suppose $\alpha$ is a linear bijection. Then as a function, it has an inverse $\beta: V\to U$. We want to show that this is linear. Let $\mathbf{v}_1, \mathbf{v}_2 \in V$, $\lambda, \mu \in \F$. We have
+ \[
+ \alpha \beta(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2) = \lambda \mathbf{v}_1 + \mu \mathbf{v}_2 = \lambda \alpha \beta (\mathbf{v}_1) + \mu \alpha \beta (\mathbf{v}_2) = \alpha (\lambda \beta(\mathbf{v}_1) + \mu \beta (\mathbf{v}_2)).
+ \]
+ Since $\alpha$ is injective, we have
+ \[
+ \beta(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2) = \lambda \beta (\mathbf{v}_1) + \mu \beta (\mathbf{v}_2).
+ \]
+ So $\beta$ is linear.
+\end{proof}
+
+\begin{defi}[Image and kernel]
+ Let $\alpha: U\to V$ be a linear map. Then the \emph{image} of $\alpha$ is
+ \[
+ \im \alpha = \{\alpha (\mathbf{u}): \mathbf{u}\in U\}.
+ \]
+ The \emph{kernel} of $\alpha$ is
+ \[
+ \ker \alpha = \{\mathbf{u}: \alpha (\mathbf{u}) = \mathbf{0}\}.
+ \]
+\end{defi}
+It is easy to show that these are subspaces of $V$ and $U$ respectively.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $A\in M_{m, n}(\F)$ and $\alpha: \F^n \to \F^m$ be the linear map $\mathbf{v}\mapsto A\mathbf{v}$. Then the system of linear equations
+ \[
+ \sum_{j = 1}^n A_{ij}x_j = b_i,\quad 1 \leq i \leq m
+ \]
+ has a solution iff $(b_1, \cdots, b_m) \in \im \alpha$.
+
+ The kernel of $\alpha$ is precisely the set of solutions to $\sum_j A_{ij}x_j = 0$.
+ \item Let $\beta: C^{\infty}(\R, \R) \to C^{\infty}(\R, \R)$ be the map defined by
+ \[
+ \beta(f)(t) = f''(t) + p(t) f'(t) + q(t) f(t),
+ \]
+ where $p, q\in C^{\infty}(\R, \R)$ are fixed.
+
+ If $y \in \im \beta$, then there is a solution (in $C^\infty (\R, \R)$) to the differential equation
+ \[
+ f''(t) + p(t) f'(t) + q(t) f(t) = y(t).
+ \]
+ Similarly, $\ker \beta$ is the set of solutions to the homogeneous equation
+ \[
+ f''(t) + p(t) f'(t) + q(t) f(t) = 0.
+ \]
+ \end{enumerate}
+\end{eg}
+
+If two vector spaces are isomorphic, then it is not too surprising that they have the same dimension, since isomorphic spaces are ``the same''. Indeed this is what we are going to show.
+\begin{prop}
+ Let $\alpha: U\to V$ be an $\F$-linear map. Then
+ \begin{enumerate}
+ \item If $\alpha$ is injective and $S\subseteq U$ is linearly independent, then $\alpha (S)$ is linearly independent in $V$.
+ \item If $\alpha$ is surjective and $S\subseteq U$ spans $U$, then $\alpha (S)$ spans $V$.
+ \item If $\alpha$ is an isomorphism and $S\subseteq U$ is a basis, then $\alpha(S)$ is a basis for $V$.
+ \end{enumerate}
+\end{prop}
+Here (iii) immediately shows that two isomorphic spaces have the same dimension.
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We prove the contrapositive. Suppose that $\alpha$ is injective and $\alpha(S)$ is linearly dependent. Then there are distinct $\mathbf{s}_0, \cdots, \mathbf{s}_n \in S$ and $\lambda_1, \cdots, \lambda_n\in \F$ such that
+ \[
+ \alpha(\mathbf{s}_0) = \sum_{i = 1}^n \lambda_i \alpha(\mathbf{s}_i) = \alpha\left(\sum_{i = 1}^n \lambda_i \mathbf{s}_i\right).
+ \]
+ Since $\alpha$ is injective, we must have
+ \[
+ \mathbf{s}_0 = \sum_{i = 1}^n \lambda_i \mathbf{s}_i.
+ \]
+ This is a non-trivial relation among the elements of $S$ in $U$, since the coefficient of $\mathbf{s}_0$ is $1$. So $S$ is linearly dependent.
+ \item Suppose $\alpha$ is surjective and $S$ spans $U$. Pick $\mathbf{v} \in V$. Then there is some $\mathbf{u}\in U$ such that $\alpha(\mathbf{u}) = \mathbf{v}$. Since $S$ spans $U$, there is some $\mathbf{s}_1, \cdots, \mathbf{s}_n\in S$ and $\lambda_1, \cdots, \lambda_n\in \F$ such that
+ \[
+ \mathbf{u} = \sum_{i = 1}^n \lambda_i \mathbf{s}_i.
+ \]
+ Then
+ \[
+ \mathbf{v} = \alpha (\mathbf{u}) = \sum_{i = 1}^n \lambda_i \alpha (\mathbf{s}_i).
+ \]
+ So $\alpha (S)$ spans $V$.
+ \item Follows immediately from (i) and (ii).\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{cor}
+ If $U$ and $V$ are finite-dimensional vector spaces over $\F$ and $\alpha: U\to V$ is an isomorphism, then $\dim U = \dim V$.
+\end{cor}
+Note that we restrict it to finite-dimensional spaces since we've only shown that dimensions are well-defined for finite dimensional spaces. Otherwise, the proof works just fine for infinite dimensional spaces.
+
+\begin{proof}
+ Let $S$ be a basis for $U$. Then $\alpha(S)$ is a basis for $V$. Since $\alpha$ is injective, $|S| = |\alpha(S)|$. So done.
+\end{proof}
+
+How about the other way round? If two vector spaces have the same dimension, are they necessarily isomorphic? The answer is yes, at least for finite-dimensional ones.
+
+However, we will not just prove that they are isomorphic. We will show that they are isomorphic in \emph{many ways}.
+\begin{prop}
+ Suppose $V$ is a $\F$-vector space of dimension $n < \infty$. Then writing $\mathbf{e}_1,\cdots, \mathbf{e}_n$ for the standard basis of $\F^n$, there is a bijection
+ \[
+ \Phi: \{\text{isomorphisms }\F^n \to V\} \to \{\text{(ordered) bases }(\mathbf{v}_1, \cdots, \mathbf{v}_n)\text{ for }V\},
+ \]
+ defined by
+ \[
+ \alpha \mapsto (\alpha (\mathbf{e}_1), \cdots, \alpha(\mathbf{e}_n)).
+ \]
+\end{prop}
+
+\begin{proof}
+ We first make sure this is indeed a function --- if $\alpha$ is an isomorphism, then from our previous proposition, we know that it sends a basis to a basis. So $(\alpha(\mathbf{e}_1), \cdots, \alpha(\mathbf{e}_n))$ is indeed a basis for $V$.
+
+ We now have to prove injectivity and surjectivity.
+
+ Suppose $\alpha, \beta: \F^n \to V$ are isomorphisms such that $\Phi(\alpha) = \Phi(\beta)$. In other words, $\alpha (\mathbf{e}_i) = \beta(\mathbf{e}_i)$ for all $i$. We want to show that $\alpha = \beta$. We have
+ \[
+ \alpha\left(
+ \begin{pmatrix}
+ x_1\\\vdots\\x_n
+ \end{pmatrix}
+ \right) = \alpha \left(\sum_{i = 1}^n x_i \mathbf{e}_i\right) = \sum x_i \alpha (\mathbf{e}_i) = \sum x_i \beta (\mathbf{e}_i) = \beta\left(
+ \begin{pmatrix}
+ x_1\\\vdots\\x_n
+ \end{pmatrix}\right).
+ \]
+ Hence $\alpha = \beta$.
+
+ Next, suppose that $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ is an ordered basis for $V$. Then define
+ \[
+ \alpha\left(
+ \begin{pmatrix}
+ x_1\\\vdots\\x_n
+ \end{pmatrix}
+ \right) = \sum x_i \mathbf{v}_i.
+ \]
+ It is easy to check that this is well-defined and linear. We also know that $\alpha$ is injective since $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ is linearly independent. So if $\sum x_i \mathbf{v}_i = \sum y_i \mathbf{v}_i$, then $x_i = y_i$. Also, $\alpha$ is surjective since $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ spans $V$. So $\alpha$ is an isomorphism, and by construction $\Phi(\alpha) = (\mathbf{v}_1, \cdots, \mathbf{v}_n)$.
+\end{proof}
+
+\subsection{Linear maps and matrices}
+Recall that our first example of linear maps is matrices acting on $\F^n$. We will show that in fact, \emph{all} linear maps come from matrices. Since we know that all vector spaces are isomorphic to $\F^n$, this means we can represent arbitrary linear maps on vector spaces by matrices.
+
+This is a useful result, since it is sometimes easier to argue about matrices than linear maps.
+
+\begin{prop}
+ Suppose $U, V$ are vector spaces over $\F$ and $S = \{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ is a basis for $U$. Then every function $f: S \to V$ extends uniquely to a linear map $U \to V$.
+\end{prop}
+The slogan is ``to define a linear map, it suffices to define its values on a basis''.
+
+\begin{proof}
+ For uniqueness, first suppose $\alpha, \beta: U \to V$ are linear and extend $f: S \to V$. We have sort-of proved this already just now.
+
+ If $\mathbf{u}\in U$, we can write $\mathbf{u} = \sum_{i = 1}^n u_i \mathbf{e}_i$ with $u_i \in \F$ since $S$ spans. Then
+ \[
+ \alpha (\mathbf{u}) = \alpha\left(\sum u_i \mathbf{e}_i\right) = \sum u_i \alpha (\mathbf{e}_i) = \sum u_i f( \mathbf{e}_i).
+ \]
+ Similarly,
+ \[
+ \beta( \mathbf{u}) = \sum u_i f(\mathbf{e}_i).
+ \]
+ So $\alpha (\mathbf{u}) = \beta(\mathbf{u})$ for every $\mathbf{u}$. So $\alpha = \beta$.
+
+ For existence, if $\mathbf{u} \in U$, we can write $\mathbf{u} = \sum u_i \mathbf{e}_i$ in a unique way. So defining
+ \[
+ \alpha(\mathbf{u}) = \sum u_i f(\mathbf{e}_i)
+ \]
+ is unambiguous. To show linearity, let $\lambda, \mu\in \F$, $\mathbf{u}, \mathbf{v}\in U$. Then
+ \begin{align*}
+ \alpha (\lambda \mathbf{u} + \mu \mathbf{v}) &= \alpha \left(\sum (\lambda u_i + \mu v_i) \mathbf{e}_i\right) \\
+ &= \sum (\lambda u_i + \mu v_i) f(\mathbf{e}_i)\\
+ &= \lambda \left(\sum u_i f(\mathbf{e}_i)\right) + \mu \left(\sum v_i f(\mathbf{e}_i)\right)\\
+ &= \lambda \alpha(\mathbf{u}) + \mu \alpha(\mathbf{v}).
+ \end{align*}
+ Moreover, $\alpha$ does extend $f$.
+\end{proof}
+
+\begin{cor}
+ If $U$ and $V$ are finite-dimensional vector spaces over $\F$ with bases $(\mathbf{e}_1, \cdots, \mathbf{e}_m)$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_n)$ respectively, then there is a bijection
+ \[
+ \Mat_{n, m}(\F) \to \mathcal{L}(U, V),
+ \]
+ sending $A$ to the unique linear map $\alpha$ such that $\alpha(\mathbf{e}_i) = \sum a_{ji} \mathbf{f}_j$.
+\end{cor}
+We can interpret this as follows: the $i$th column of $A$ tells us how to write $\alpha (\mathbf{e}_i)$ in terms of the $\mathbf{f}_j$.
+
+We can also draw a fancy diagram to display this result. Given a basis $\mathbf{e}_1, \cdots, \mathbf{e}_m$, by our bijection, we get an isomorphism $s(\mathbf{e}_i): U\to \F^m$. Similarly, we get an isomorphism $s(\mathbf{f}_i): V\to \F^n$.
+
+Since a matrix is a linear map $A: \F^m \to \F^n$, given a matrix $A$, we can produce a linear map $\alpha: U\to V$ via the following composition
+\[
+ \begin{tikzcd}
+ U \ar[r, "s(\mathbf{e}_i)"] & \F^m \ar[r, "A"] & \F^n \ar[r, "s(\mathbf{f}_i)^{-1}"] & V.
+ \end{tikzcd}
+\]
+We can put this into a square:
+\[
+ \begin{tikzcd}[row sep=large]
+ \F^m \ar[r, "A"] & \F^n\\
+ U \ar[u, "s(\mathbf{e}_i)"] \ar[r, "\alpha"] & V \ar[u, "s(\mathbf{f}_i)"']
+ \end{tikzcd}
+\]
+Then the corollary tells us that every $A$ gives rise to an $\alpha$, and every $\alpha$ corresponds to an $A$, fitting into this diagram.
+\begin{proof}
+ If $\alpha$ is a linear map $U \to V$, then for each $1 \leq i \leq m$, we can write $\alpha(\mathbf{e}_i)$ uniquely as
+ \[
+ \alpha(\mathbf{e}_i) = \sum_{j = 1}^n a_{ji} \mathbf{f}_j
+ \]
+ for some $a_{ji} \in \F$. This gives a matrix $A = (a_{ij})$. The previous proposition tells us that every matrix $A$ arises in this way, and $\alpha$ is determined by $A$.
+\end{proof}
+
+\begin{defi}[Matrix representation]
+ We call the matrix corresponding to a linear map $\alpha\in \mathcal{L}(U, V)$ under the corollary the \emph{matrix representing} $\alpha$ with respect to the bases $(\mathbf{e}_1, \cdots, \mathbf{e}_m)$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_n)$.
+\end{defi}
+
+It is an exercise to show that the bijection $\Mat_{n, m}(\F) \to \mathcal{L}(U, V)$ is an isomorphism of the vector spaces and deduce that $\dim \mathcal{L}(U, V) = (\dim U)(\dim V)$.
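+
+\begin{eg}
+ As a concrete illustration of this correspondence, let $U = V$ be the space of real polynomials of degree at most $2$, with basis $(1, x, x^2)$ on both sides, and let $\alpha$ be differentiation. Then $\alpha(1) = 0$, $\alpha(x) = 1$ and $\alpha(x^2) = 2x$, so reading off the columns, $\alpha$ is represented by
+ \[
+ A =
+ \begin{pmatrix}
+ 0 & 1 & 0\\
+ 0 & 0 & 2\\
+ 0 & 0 & 0
+ \end{pmatrix}.
+ \]
+ For instance, $a + bx + cx^2$ has coordinates $(a, b, c)^T$, and $A(a, b, c)^T = (b, 2c, 0)^T$, which are the coordinates of $b + 2cx$, as expected.
+\end{eg}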
+
+\begin{prop}
+ Suppose $U, V, W$ are finite-dimensional vector spaces over $\F$ with bases $R = (\mathbf{u}_1, \cdots, \mathbf{u}_r)$, $S = (\mathbf{v}_1, \cdots, \mathbf{v}_s)$ and $T = (\mathbf{w}_1, \cdots, \mathbf{w}_t)$ respectively.
+
+ If $\alpha: U\to V$ and $\beta: V\to W$ are linear maps represented by $A$ and $B$ respectively (with respect to $R$, $S$ and $T$), then $\beta\alpha$ is linear and represented by $BA$ with respect to $R$ and $T$.
+\end{prop}
+\[
+ \begin{tikzcd}[row sep=large]
+ \F^r \ar[r, "A"] & \F^s \ar [r, "B"] & \F^t\\
+ U \ar[u, "s(R)"] \ar[r, "\alpha"] & V \ar[u, "s(S)"] \ar [r, "\beta"] & W \ar[u, "s(T)"']
+ \end{tikzcd}
+\]
+\begin{proof}
+ Verifying $\beta\alpha$ is linear is straightforward. Next we write $\beta\alpha(\mathbf{u}_i)$ as a linear combination of $\mathbf{w}_1, \cdots, \mathbf{w}_t$:
+ \begin{align*}
+ \beta\alpha(\mathbf{u}_i) &= \beta\left(\sum_k A_{ki}\mathbf{v}_k\right) \\
+ &= \sum_k A_{ki}\beta(\mathbf{v}_k) \\
+ &= \sum_k A_{ki}\sum_j B_{jk} \mathbf{w}_j \\
+ &= \sum_j \left(\sum_k B_{jk}A_{ki}\right)\mathbf{w}_j\\
+ &= \sum_j (BA)_{ji} \mathbf{w}_j\qedhere
+ \end{align*}
+\end{proof}
+
+\subsection{The first isomorphism theorem and the rank-nullity theorem}
+The main theorem of this section is the \emph{rank-nullity theorem}, which relates the dimensions of the kernel and image of a linear map. This is in fact an easy corollary of a stronger result, known as the \emph{first isomorphism theorem}, which directly relates the kernel and image themselves. The first isomorphism theorem is an exact analogue of that for groups, and should not be unfamiliar. We will also provide another proof that does not involve quotients.
+
+\begin{thm}[First isomorphism theorem]
+ Let $\alpha: U\to V$ be a linear map. Then $\ker \alpha$ and $\im \alpha$ are subspaces of $U$ and $V$ respectively. Moreover, $\alpha$ induces an isomorphism
+ \begin{align*}
+ \bar{\alpha}: U/\ker \alpha &\to \im \alpha\\
+ (\mathbf{u} + \ker \alpha) &\mapsto \alpha(\mathbf{u})
+ \end{align*}
+\end{thm}
+Note that if we view a vector space as an abelian group, then this is exactly the first isomorphism theorem of groups.
+
+\begin{proof}
+ We know that $\mathbf{0} \in \ker \alpha$ and $\mathbf{0}\in \im \alpha$.
+
+ Suppose $\mathbf{u}_1, \mathbf{u}_2 \in \ker \alpha$ and $\lambda_1, \lambda_2\in \F$. Then
+ \[
+ \alpha (\lambda_1 \mathbf{u}_1 + \lambda_2 \mathbf{u}_2) = \lambda_1 \alpha(\mathbf{u}_1) + \lambda_2 \alpha(\mathbf{u}_2) = \mathbf{0}.
+ \]
+ So $\lambda_1 \mathbf{u}_1 + \lambda_2 \mathbf{u}_2 \in \ker \alpha$. So $\ker \alpha$ is a subspace.
+
+ Similarly, if $\alpha(\mathbf{u}_1), \alpha(\mathbf{u}_2) \in \im \alpha$, then $\lambda_1\alpha(\mathbf{u}_1) + \lambda_2 \alpha(\mathbf{u}_2) = \alpha(\lambda_1 \mathbf{u}_1 + \lambda_2 \mathbf{u}_2) \in \im \alpha$. So $\im \alpha$ is a subspace.
+
+ Now by the first isomorphism theorem of groups, $\bar{\alpha}$ is a well-defined isomorphism of groups. So it remains to show that $\bar{\alpha}$ is a linear map. Indeed, we have
+ \[
+ \bar{\alpha}(\lambda(\mathbf{u} + \ker \alpha)) = \alpha (\lambda \mathbf{u}) = \lambda \alpha(\mathbf{u}) = \lambda (\bar{\alpha}(\mathbf{u} + \ker \alpha)).
+ \]
+ So $\bar {\alpha}$ is a linear map.
+\end{proof}
+
+\begin{defi}[Rank and nullity]
+ If $\alpha: U\to V$ is a linear map between finite-dimensional vector spaces over $\F$ (in fact we just need $U$ to be finite-dimensional), the \emph{rank} of $\alpha$ is the number $r(\alpha) = \dim \im \alpha$. The \emph{nullity} of $\alpha$ is the number $n(\alpha) = \dim \ker \alpha$.
+\end{defi}
+
+\begin{cor}[Rank-nullity theorem]
+ If $\alpha: U \to V$ is a linear map and $U$ is finite-dimensional, then
+ \[
+ r(\alpha) + n(\alpha) = \dim U.
+ \]
+\end{cor}
+
+\begin{proof}
+ By the first isomorphism theorem, we know that $U/\ker \alpha \cong \im \alpha$. So we have
+ \[
+ \dim \im \alpha = \dim (U/\ker \alpha) = \dim U - \dim \ker \alpha.
+ \]
+ So the result follows.
+\end{proof}
+
+We can also prove this result without the first isomorphism theorem, and say a bit more in the meantime.
+\begin{prop}
+ If $\alpha: U\to V$ is a linear map between finite-dimensional vector spaces over $\F$, then there are bases $(\mathbf{e}_1, \cdots, \mathbf{e}_m)$ for $U$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_n)$ for $V$ such that $\alpha$ is represented by the matrix
+ \[
+ \begin{pmatrix}
+ I_r & 0\\
+ 0 & 0
+ \end{pmatrix},
+ \]
+ where $r = r(\alpha)$ and $I_r$ is the $r\times r$ identity matrix.
+
+ In particular, $r(\alpha) + n(\alpha) = \dim U$.
+\end{prop}
+
+\begin{proof}
+ Let $\mathbf{e}_{k + 1}, \cdots, \mathbf{e}_m$ be a basis for the kernel of $\alpha$. Then we can extend this to a basis $(\mathbf{e}_1,\cdots, \mathbf{e}_m)$ of $U$.
+
+ Let $\mathbf{f}_i = \alpha(\mathbf{e}_i)$ for $1 \leq i \leq k$. We now show that $(\mathbf{f}_1, \cdots, \mathbf{f}_k)$ is a basis for $\im \alpha$ (and thus $k = r$). We first show that it spans. Suppose $\mathbf{v}\in \im \alpha$. Then we have
+ \[
+ \mathbf{v} = \alpha\left(\sum_{i = 1}^m \lambda_i \mathbf{e}_i\right)
+ \]
+ for some $\lambda_i \in \F$. By linearity, we can write this as
+ \[
+ \mathbf{v} = \sum_{i = 1}^m \lambda_i \alpha(\mathbf{e}_i) = \sum_{i = 1}^k \lambda_i \mathbf{f}_i + \mathbf{0}.
+ \]
+ So $\mathbf{v}\in \bra \mathbf{f}_1, \cdots, \mathbf{f}_k\ket$.
+
+ To show linear independence, suppose that
+ \[
+ \sum_{i = 1}^k \mu_i \mathbf{f}_i = \mathbf{0}.
+ \]
+ So we have
+ \[
+ \alpha \left(\sum_{i = 1}^k \mu_i \mathbf{e}_i\right) = \mathbf{0}.
+ \]
+ So $\sum_{i = 1}^k \mu_i \mathbf{e}_i \in \ker \alpha$. Since $(\mathbf{e}_{k + 1}, \cdots, \mathbf{e}_m)$ is a basis for $\ker \alpha$, we can write
+ \[
+ \sum_{i = 1}^k \mu_i \mathbf{e}_i = \sum_{i = k + 1}^m \mu_i \mathbf{e}_i
+ \]
+ for some $\mu_i$ ($i = k + 1, \cdots, m$). Since $(\mathbf{e}_1, \cdots, \mathbf{e}_m)$ is a basis, we must have $\mu_i = 0$ for all $i$. So they are linearly independent.
+
+ Now we extend $(\mathbf{f}_1, \cdots, \mathbf{f}_k)$ to a basis $(\mathbf{f}_1, \cdots, \mathbf{f}_n)$ for $V$. With respect to these bases, $\alpha$ is represented by the required matrix, since
+ \[
+ \alpha(\mathbf{e}_i) =
+ \begin{cases}
+ \mathbf{f}_i & 1 \leq i \leq k\\
+ \mathbf{0} & k + 1 \leq i \leq m
+ \end{cases}.\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Let
+ \[
+ W = \{x\in \R^5: x_1 + x_2 + x_5 = 0 = x_3 - x_4 - x_5\}.
+ \]
+ What is $\dim W$? Well, it clearly is $3$, but how can we prove it?
+
+ We can consider the map $\alpha: \R^5 \to \R^2$ given by
+ \[
+ \begin{pmatrix}
+ x_1\\\vdots \\ x_5
+ \end{pmatrix}
+ \mapsto
+ \begin{pmatrix}
+ x_1 + x_2 + x_5\\
+ x_3 - x_4 - x_5
+ \end{pmatrix}
+ \]
+ Then $\ker \alpha = W$. So $\dim W = 5 - r(\alpha)$. We know that $\alpha(1, 0, 0, 0, 0) = (1, 0)$ and $\alpha(0, 0, 1, 0, 0) = (0, 1)$. So $r(\alpha) = \dim \im \alpha = 2$. So $\dim W = 3$.
+\end{eg}
+More generally, the rank-nullity theorem tells us that $m$ linear equations in $n$ unknowns have a space of solutions of dimension at least $n - m$.
+
+\begin{eg}
+ Suppose that $U$ and $W$ are subspaces of $V$, all of which are finite-dimensional vector spaces over $\F$. We let
+ \begin{align*}
+ \alpha: U\oplus W &\to V\\
+ (\mathbf{u}, \mathbf{w}) &\mapsto \mathbf{u} + \mathbf{w},
+ \end{align*}
+ where the $\oplus$ is the \emph{external} direct sum. Then $\im \alpha =U + W$ and
+ \[
+ \ker \alpha = \{(\mathbf{u}, -\mathbf{u}): \mathbf{u}\in U\cap W\} \cong U\cap W.
+ \]
+ Then we have
+ \[
+ \dim U + \dim W = \dim (U\oplus W) = r(\alpha) + n(\alpha) = \dim(U + W) + \dim (U\cap W).
+ \]
+ This is a result we've previously obtained through fiddling with bases and horrible stuff.
+\end{eg}
+
+\begin{cor}
+ Suppose $\alpha: U\to V$ is a linear map between vector spaces over $\F$ both of dimension $n < \infty$. Then the following are equivalent
+ \begin{enumerate}
+ \item $\alpha$ is injective;
+ \item $\alpha$ is surjective;
+ \item $\alpha$ is an isomorphism.
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}
+ It is clear that (iii) implies (i) and (ii), and that (i) and (ii) together imply (iii). So it suffices to show that (i) and (ii) are equivalent.
+
+ Note that $\alpha$ is injective iff $n(\alpha) = 0$, and $\alpha$ is surjective iff $r(\alpha) = \dim V = n$. By the rank-nullity theorem, $n(\alpha) + r(\alpha) = n$. So the result follows immediately.
+\end{proof}
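+
+\begin{eg}
+ Finite-dimensionality is essential here. On the space of real sequences, the left shift $(x_1, x_2, x_3, \cdots) \mapsto (x_2, x_3, \cdots)$ is a surjective linear map that is not injective, while the right shift $(x_1, x_2, \cdots)\mapsto (0, x_1, x_2, \cdots)$ is injective but not surjective.
+\end{eg}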
+
+\begin{lemma}
+ Let $A \in M_{n, n}(\F) = M_n(\F)$ be a square matrix. The following are equivalent
+ \begin{enumerate}
+ \item There exists $B\in M_n (\F)$ such that $BA = I_n$.
+ \item There exists $C\in M_n (\F)$ such that $AC = I_n$.
+ \end{enumerate}
+ If these hold, then $B = C$. We call $A$ \emph{invertible} or \emph{non-singular}, and write $A^{-1} = B = C$.
+\end{lemma}
+
+\begin{proof}
+ Let $\alpha, \beta, \gamma, \iota: \F^n \to \F^n$ be the linear maps represented by matrices $A, B, C, I_n$ respectively with respect to the standard basis.
+
+ We note that (i) is equivalent to saying that there exists $\beta$ such that $\beta\alpha = \iota$. This is true iff $\alpha$ is injective, which is true iff $\alpha$ is an isomorphism, which is true iff $\alpha$ has an inverse $\alpha^{-1}$.
+
+ Similarly, (ii) is equivalent to saying that there exists $\gamma$ such that $\alpha\gamma = \iota$. This is true iff $\alpha$ is surjective, which is true iff $\alpha$ is an isomorphism, which is true iff $\alpha$ has an inverse $\alpha^{-1}$.
+
+ So these are the same things, and we have $\beta = \alpha^{-1} = \gamma$.
+\end{proof}
+
+\subsection{Change of basis}
+Suppose we have a linear map $\alpha: U \to V$. Given a basis $\{\mathbf{e}_i\}$ for $U$, and a basis $\{\mathbf{f}_i\}$ for $V$, we can obtain a matrix $A$.
+\[
+ \begin{tikzcd}[row sep=large]
+ U \ar[r, "\alpha"] & V\\
+ \F^m \ar[r, "A"] \ar[u, "(\mathbf{e}_i)"] & \F^n \ar[u, "(\mathbf{f}_i)"']
+ \end{tikzcd}
+\]
+We now want to consider what happens when we have two different bases $\{\mathbf{u}_i\}$ and $\{\mathbf{e}_i\}$ of $U$. These will then give rise to two different maps from $\F^m$ to our space $U$, and the two bases can be related by a change-of-basis map $P$. We can put them in the following diagram:
+\[
+ \begin{tikzcd}[row sep=large]
+ U \ar[r, "\iota_U"] & U\\
+ \F^m \ar[r, "P"] \ar[u, "(\mathbf{u}_i)"] & \F^m \ar[u, "(\mathbf{e}_i)"]
+ \end{tikzcd}
+\]
+where $\iota_U$ is the identity map. If we perform a change of basis for both $U$ and $V$, we can stitch the diagrams together as
+\[
+ \begin{tikzcd}[row sep=large]
+ U \ar[r, "\iota_U"] & U \ar[r, "\alpha"] & V & V\ar[l, "\iota_V"']\\
+ \F^m \ar[r, "P"] \ar[u, "(\mathbf{u}_i)"] \ar [rrr, bend right, "B"'] & \F^m \ar[r, "A"] \ar[u, "(\mathbf{e}_i)"] & \F^n \ar[u, "(\mathbf{f}_i)"] & \F^n \ar[l, "Q"'] \ar[u, "(\mathbf{v}_i)"]
+ \end{tikzcd}
+\]
+Then if we want a matrix representing the map $U\to V$ with respect to bases $(\mathbf{u}_i)$ and $(\mathbf{v}_i)$, we can write it as the composition
+\[
+ B = Q^{-1}AP.
+\]
+We can write this as a theorem:
+\begin{thm}
+ Suppose that $(\mathbf{e}_1, \cdots, \mathbf{e}_m)$ and $(\mathbf{u}_1,\cdots, \mathbf{u}_m)$ are bases for a finite-dimensional vector space $U$ over $\F$, and $(\mathbf{f}_1, \cdots, \mathbf{f}_n)$ and $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ are bases of a finite-dimensional vector space $V$ over $\F$.
+
+ Let $\alpha: U\to V$ be a linear map represented by a matrix $A$ with respect to $(\mathbf{e}_i)$ and $(\mathbf{f}_i)$ and by $B$ with respect to $(\mathbf{u}_i)$ and $(\mathbf{v}_i)$. Then
+ \[
+ B = Q^{-1}AP,
+ \]
+ where $P$ and $Q$ are given by
+ \[
+ \mathbf{u}_i = \sum_{k = 1}^m P_{ki}\mathbf{e}_k,\quad \mathbf{v}_i = \sum_{k = 1}^n Q_{ki}\mathbf{f}_k.
+ \]
+\end{thm}
+Note that one can view $P$ as the matrix representing the identity map $\iota_U$ from $U$ with basis $(\mathbf{u}_i)$ to $U$ with basis $(\mathbf{e}_i)$, and similarly for $Q$. So both are invertible.
+
+\begin{proof}
+ On the one hand, we have
+ \[
+ \alpha(\mathbf{u}_i) = \sum_{j = 1}^n B_{ji}\mathbf{v}_j = \sum_j\sum_\ell B_{ji} Q_{\ell j}\mathbf{f}_\ell = \sum_\ell [QB]_{\ell i}\mathbf{f}_\ell.
+ \]
+ On the other hand, we can write
+ \[
+ \alpha (\mathbf{u}_i) = \alpha \left(\sum_{k = 1}^m P_{ki}\mathbf{e}_k\right) = \sum_{k = 1}^m P_{ki} \sum_\ell A_{\ell k}\mathbf{f}_\ell = \sum_{\ell}[AP]_{\ell i} \mathbf{f}_\ell.
+ \]
+ Since the $\mathbf{f}_\ell$ are linearly independent, we conclude that
+ \[
+ QB = AP.
+ \]
+ Since $Q$ is invertible, we get $B = Q^{-1}AP$.
+\end{proof}
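+
+\begin{eg}
+ We can check the formula on a small example. Let $\alpha: \R^2\to \R^2$ be the projection $(x, y)\mapsto (x, 0)$, represented by $A = \begin{pmatrix}1 & 0\\0 & 0\end{pmatrix}$ with respect to the standard basis on both sides. Take the new basis $\mathbf{u}_1 = \mathbf{e}_1 + \mathbf{e}_2$, $\mathbf{u}_2 = \mathbf{e}_2$ for the domain, and keep the standard basis in the codomain, so that $Q = I$. Then
+ \[
+ P = \begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix},\quad B = Q^{-1}AP = \begin{pmatrix}1 & 0\\0 & 0\end{pmatrix}\begin{pmatrix}1 & 0\\1 & 1\end{pmatrix} = \begin{pmatrix}1 & 0\\0 & 0\end{pmatrix}.
+ \]
+ Indeed, $\alpha(\mathbf{u}_1) = \mathbf{e}_1$ and $\alpha(\mathbf{u}_2) = \mathbf{0}$, which matches the columns of $B$.
+\end{eg}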
+
+\begin{defi}[Equivalent matrices]
+ We say $A, B\in \Mat_{n, m}(\F)$ are \emph{equivalent} if there are invertible matrices $P\in\Mat_{m}(\F)$, $Q\in \Mat_n (\F)$ such that $B = Q^{-1}AP$.
+\end{defi}
+Since $\GL_k(\F) = \{A \in \Mat_k(\F): A\text{ is invertible}\}$ is a group for each $k \geq 1$, this is indeed an equivalence relation. The equivalence classes are orbits under the action of $\GL_m(\F) \times \GL_n(\F)$, given by
+\begin{align*}
+ \GL_m(\F) \times \GL_n(\F)\times \Mat_{n, m}(\F) &\to \Mat_{n, m}(\F)\\
+ (P, Q, A) &\mapsto QAP^{-1}.
+\end{align*}
+Two matrices are equivalent if and only if they represent the same linear map with respect to different bases.
+
+\begin{cor}
+ If $A\in \Mat_{n, m}(\F)$, then there exists invertible matrices $P \in \GL_m(\F), Q\in \GL_n(\F)$ so that
+ \[
+ Q^{-1}AP =
+ \begin{pmatrix}
+ I_r & 0\\
+ 0 & 0
+ \end{pmatrix}
+ \]
+ for some $0 \leq r \leq \min(m, n)$.
+\end{cor}
+This is just a rephrasing of the proposition we had last time. But this tells us there are $\min(m, n) + 1$ orbits of the action above parametrized by $r$.
+
+\begin{defi}[Column and row rank]
+ If $A\in \Mat_{n, m}(\F)$, then
+ \begin{itemize}
+ \item The \emph{column rank} of $A$, written $r(A)$, is the dimension of the subspace of $\F^n$ spanned by the columns of $A$.
+ \item The \emph{row rank} of $A$ is the dimension of the subspace of $\F^m$ spanned by the rows of $A$. Alternatively, it is the column rank of $A^T$, and is hence written $r(A^T)$.
+ \end{itemize}
+\end{defi}
+There is no a priori reason why these should be equal to each other. However, it turns out they are always equal.
+
+Note that if $\alpha: \F^m \to \F^n$ is the linear map represented by $A$ (with respect to the standard basis), then $r(A) = r(\alpha)$, i.e.\ the column rank is the rank. Moreover, since the rank of a map is independent of the basis, equivalent matrices have the same column rank.
+
+\begin{thm}
+ If $A \in \Mat_{n, m}(\F)$, then $r(A) = r(A^T)$, i.e.\ the row rank equals the column rank.
+\end{thm}
+
+\begin{proof}
+ We know that there are some invertible $P, Q$ such that
+ \[
+ Q^{-1}AP =
+ \begin{pmatrix}
+ I_r & 0\\
+ 0 & 0
+ \end{pmatrix},
+ \]
+ where $r = r(A)$. We can transpose this whole equation to obtain
+ \[
+ (Q^{-1}AP)^T = P^T A^T (Q^T)^{-1} =
+ \begin{pmatrix}
+ I_r & 0\\
+ 0 & 0
+ \end{pmatrix}
+ \]
+ So $r(A^T) = r$.
+\end{proof}
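As a quick numerical sanity check of this theorem, the following sketch (using NumPy; the matrix is an arbitrary choice for illustration) computes the rank of a matrix and of its transpose:

```python
import numpy as np

# An arbitrary 3x4 matrix of rank 2: the third row is the sum of the
# first two, so the rows span a 2-dimensional subspace.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 3.],
              [1., 3., 1., 4.]])

col_rank = np.linalg.matrix_rank(A)    # dimension of the column space
row_rank = np.linalg.matrix_rank(A.T)  # dimension of the row space
```

Both come out as $2$, as the theorem predicts.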
+
+\subsection{Elementary matrix operations}
+We are now going to re-prove our corollary that we can find $P, Q$ such that $Q^{-1}AP = \begin{pmatrix} I_r & 0\\ 0 & 0 \end{pmatrix}$ in a way that involves matrices only. This will give a concrete way to find $P$ and $Q$, but is less elegant.
+
+To do so, we need to introduce \emph{elementary matrices}.
+\begin{defi}[Elementary matrices]
+ We call the following matrices of $\GL_n(\F)$ \emph{elementary matrices}:
+ \setcounter{MaxMatrixCols}{11}
+ \[
+ S_{ij}^n =
+ \begin{pmatrix}
+ 1\\
+ & \ddots\\
+ & & 1\\
+ & & & 0 & & & & 1\\
+ & & & & 1\\
+ & & & & & \ddots\\
+ & & & & & & 1\\
+ & & & 1 & & & & 0\\
+ & & & & & & & & 1\\
+ & & & & & & & & & \ddots\\
+ & & & & & & & & & & 1
+ \end{pmatrix}
+ \]
+This is called a reflection, where the rows we changed are the $i$th and $j$th row.
+ \[
+ E_{ij}^n (\lambda) =
+ \begin{pmatrix}
+ 1 \\
+ & \ddots \\
+ & & 1 & & \lambda\\
+ & & & \ddots\\
+ & & & & 1\\
+ & & & & & \ddots\\
+ & & & & & & 1
+ \end{pmatrix}
+ \]
+ This is called a shear, where $\lambda$ appears at the $i,j$th entry.
+ \[
+ T_{i}^n (\lambda) =
+ \begin{pmatrix}
+ 1 \\
+ & \ddots\\
+ & & 1 \\
+ & & & \lambda\\
+ & & & & 1\\
+ & & & & & \ddots\\
+ & & & & & & 1
+ \end{pmatrix}
+ \]
 This is called a dilation (not a shear, despite the similar shape), where $\lambda\not= 0$ appears as the $i$th diagonal entry.
+\end{defi}
Observe that if $A$ is an $m\times n$ matrix, then
+\begin{enumerate}
+ \item $AS_{ij}^n$ is obtained from $A$ by swapping the $i$ and $j$ columns.
 \item $AE_{ij}^n(\lambda)$ is obtained by adding $\lambda\times$ column $i$ to column $j$.
+ \item $AT_i^n (\lambda)$ is obtained from $A$ by rescaling the $i$th column by $\lambda$.
+\end{enumerate}
+Multiplying on the left instead of the right would result in the same operations performed on the rows instead of the columns.
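The three observations above can be checked directly (a sketch with NumPy; the sizes and the value of $\lambda$ are arbitrary choices):

```python
import numpy as np

n = 4
A = np.arange(12.).reshape(3, n)  # an arbitrary 3x4 matrix

# S: identity with rows i=0 and j=2 swapped; right multiplication
# swaps columns 0 and 2 of A
S = np.eye(n)
S[[0, 2]] = S[[2, 0]]

# E(lambda): identity with lambda=5 in the (i, j) = (0, 2) entry;
# right multiplication adds 5 * (column 0) to column 2
E = np.eye(n)
E[0, 2] = 5.

# T(lambda): identity with lambda=3 in the (i, i) = (1, 1) entry;
# right multiplication rescales column 1 by 3
T = np.eye(n)
T[1, 1] = 3.

swapped = A @ S
sheared = A @ E
rescaled = A @ T
```

Multiplying on the left by the analogous $3\times 3$ matrices would instead perform the corresponding row operations.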
+
+\begin{prop}
 If $A\in \Mat_{n, m}(\F)$, then there exist invertible matrices $P \in \GL_m(\F), Q\in \GL_n(\F)$ so that
+ \[
+ Q^{-1}AP =
+ \begin{pmatrix}
+ I_r & 0\\
+ 0 & 0
+ \end{pmatrix}
+ \]
+ for some $0 \leq r \leq \min(m, n)$.
+\end{prop}
+
+We are going to start with $A$, and then apply these operations to get it into this form.
+
+\begin{proof}
 We claim that there are elementary matrices $E_1^n, \cdots, E_a^n$ and $F_1^m, \cdots, F_b^m$ (these $E$ and $F$ are not necessarily shears, but any elementary matrices) such that
 \[
 E_1^n \cdots E_a^n AF_1^m \cdots F_b^m =
 \begin{pmatrix}
 I_r & 0\\
 0 & 0
 \end{pmatrix}
 \]
 This suffices since each $E_i^n \in \GL_n(\F)$ and each $F_j^m \in \GL_m(\F)$. Moreover, to prove the claim, it suffices to find a sequence of elementary row and column operations reducing $A$ to this form.
+
+ If $A = 0$, then done. If not, there is some $i, j$ such that $A_{ij} \not= 0$. By swapping row $1$ and row $i$; and then column $1$ and column $j$, we can assume $A_{11} \not= 0$. By rescaling row $1$ by $\frac{1}{A_{11}}$, we can further assume $A_{11} = 1$.
+
+ Now we can add $-A_{1j}$ times column $1$ to column $j$ for each $j \not= 1$, and then add $-A_{i1}$ times row $1$ to row $i \not= 1$. Then we now have
+ \[
+ A =
+ \begin{pmatrix}
+ 1 & 0 & \cdots & 0\\
+ 0 \\
+ \vdots & & B\\
+ 0 &
+ \end{pmatrix}
+ \]
+ Now $B$ is smaller than $A$. So by induction on the size of $A$, we can reduce $B$ to a matrix of the required form, so done.
+\end{proof}
+It is an exercise to show that the row and column operations do not change the row rank or column rank, and deduce that they are equal.
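The inductive proof above is effectively an algorithm. Here is a minimal sketch of it in NumPy (working over $\R$, with a numerical tolerance standing in for ``non-zero''), which accumulates the row operations into $Q^{-1}$ and the column operations into $P$:

```python
import numpy as np

def reduce_to_block_form(A, tol=1e-12):
    """Reduce A to the block form [[I_r, 0], [0, 0]] by elementary row
    and column operations, as in the inductive proof.  Returns (Qinv, P, r)
    with Qinv @ A @ P equal to that block form.  A minimal sketch over the
    reals, with a tolerance standing in for 'non-zero'."""
    A = A.astype(float).copy()
    n, m = A.shape
    Qinv = np.eye(n)  # accumulates the row operations (left factor Q^{-1})
    P = np.eye(m)     # accumulates the column operations (right factor P)
    r = 0
    while r < min(n, m):
        nonzero = np.argwhere(np.abs(A[r:, r:]) > tol)
        if nonzero.size == 0:
            break                      # remaining block is zero: done
        i, j = nonzero[0] + r
        A[[r, i]] = A[[i, r]]; Qinv[[r, i]] = Qinv[[i, r]]        # row swap
        A[:, [r, j]] = A[:, [j, r]]; P[:, [r, j]] = P[:, [j, r]]  # column swap
        piv = A[r, r]
        A[r] /= piv; Qinv[r] /= piv                               # rescale row r
        for k in range(n):             # clear the rest of column r (row ops)
            if k != r and abs(A[k, r]) > tol:
                c = A[k, r]; A[k] -= c * A[r]; Qinv[k] -= c * Qinv[r]
        for k in range(m):             # clear the rest of row r (column ops)
            if k != r and abs(A[r, k]) > tol:
                c = A[r, k]; A[:, k] -= c * A[:, r]; P[:, k] -= c * P[:, r]
        r += 1
    return Qinv, P, r

# Example: a 3x3 matrix of rank 2 (second row is twice the first)
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
Qinv, P, r = reduce_to_block_form(A)
B = Qinv @ A @ P
target = np.zeros_like(A)
target[:r, :r] = np.eye(r)
```

Since row operations act on the left and column operations on the right, they commute with each other, so the accumulated factors satisfy $Q^{-1}AP = \begin{pmatrix}I_r & 0\\ 0 & 0\end{pmatrix}$ regardless of the order in which the operations were interleaved.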
+
+\section{Duality}
Duality is a principle we will find throughout mathematics. For example, in IB Optimisation, we considered the dual problems of linear programs. Here we will look for the dual of vector spaces. In general, we try to look at our question in a ``mirror'' and hope that the mirror problem is easier to solve than the original problem.
+
At first, the definition of the dual might seem a bit arbitrary and weird. We will try to motivate it using what we will call \emph{annihilators}, though they are useful for much more than motivation. Despite their usefulness, dual spaces can be confusing to work with at times, since the dual space of a vector space $V$ is constructed by considering linear maps on $V$, and when we work with maps on dual spaces, things explode.
+
+\subsection{Dual space}
+To specify a subspace of $\F^n$, we can write down linear equations that its elements satisfy. For example, if we have the subspace $U = \bra \begin{pmatrix} 1\\2\\1 \end{pmatrix}\ket\subseteq \F^3$, we can specify this by saying $\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix} \in U$ if and only if
+\begin{align*}
+ x_1 - x_3 &= 0\\
+ 2x_1 - x_2 &= 0.
+\end{align*}
+However, characterizing a space in terms of equations involves picking some particular equations out of the many possibilities. In general, we do not like making arbitrary choices. Hence the solution is to consider \emph{all} possible such equations. We will show that these form a subspace in some space.
+
+We can interpret these equations in terms of linear maps $\F^n \to \F$. For example $x_1 - x_3 = 0$ if and only if $\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} \in \ker \theta$, where $\theta: \F^3 \to \F$ is defined by $\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} \mapsto x_1 - x_3$.
+
This works well with the vector space operations. If $\theta_1, \theta_2: \F^n \to \F$ vanish on some subspace of $\F^n$, and $\lambda, \mu\in \F$, then $\lambda \theta_1 + \mu \theta_2$ also vanishes on the subspace. So the set of all maps $\F^n \to \F$ that vanish on $U$ forms a vector space.
+
+To formalize this notion, we introduce dual spaces.
+
+\begin{defi}[Dual space]
+ Let $V$ be a vector space over $\F$. The \emph{dual} of $V$ is defined as
+ \[
+ V^* = \mathcal{L}(V, \F) = \{\theta: V \to \F: \theta\text{ linear}\}.
+ \]
+ Elements of $V^*$ are called \emph{linear functionals} or \emph{linear forms}.
+\end{defi}
+By convention, we use Roman letters for elements in $V$, and Greek letters for elements in $V^*$.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
 \item If $V = \R^3$ and $\theta: V\to \R$ is the map $\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} \mapsto x_1 - x_3$, then $\theta \in V^*$.
+ \item Let $V = \F^X$. Then for any fixed $x$, $\theta: V\to \F$ defined by $f \mapsto f(x)$ is in $V^*$.
+ \item Let $V = C([0, 1], \R)$. Then $f \mapsto \int_0^1 f(t)\;\d t \in V^*$.
+ \item The trace $\tr: M_n(\F)\to \F$ defined by $A\mapsto \sum_{i = 1}^n A_{ii}$ is in $M_n(\F)^*$.
+ \end{itemize}
+\end{eg}
+
It turns out it is rather easy to specify what the dual space looks like, at least in the case where $V$ is finite dimensional.
+\begin{lemma}
 If $V$ is a finite-dimensional vector space over $\F$ with basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$, then there is a basis $(\varepsilon_1, \cdots, \varepsilon_n)$ for $V^*$ (called the \emph{dual basis} to $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$) such that
+ \[
+ \varepsilon_i(\mathbf{e}_j) = \delta_{ij}.
+ \]
+\end{lemma}
+
+\begin{proof}
 Since linear maps are characterized by their values on a basis, there exist unique choices for $\varepsilon_1, \cdots, \varepsilon_n \in V^*$. Now we show that $(\varepsilon_1, \cdots, \varepsilon_n)$ is a basis.
+
+ Suppose $\theta \in V^*$. We show that we can write it uniquely as a combination of $\varepsilon_1, \cdots, \varepsilon_n$. We have $\theta = \sum_{i = 1}^n \lambda_i \varepsilon_i$ if and only if $\theta(\mathbf{e}_j) = \sum_{i = 1}^n \lambda_i \varepsilon_i(\mathbf{e}_j)$ (for all $j$) if and only if $\lambda_j = \theta(\mathbf{e}_j)$. So we have uniqueness and existence.
+\end{proof}
+
+\begin{cor}
+ If $V$ is finite dimensional, then $\dim V = \dim V^*$.
+\end{cor}
+When $V$ is not finite dimensional, this need not be true. However, we know that the dimension of $V^*$ is at least as big as that of $V$, since the above gives a set of $\dim V$ many independent vectors in $V^*$. In fact for any infinite dimensional vector space, $\dim V^*$ is strictly larger than $\dim V$, if we manage to define dimensions for infinite-dimensional vector spaces.
+
It helps to come up with a more concrete example of what dual spaces look like. Consider the vector space $\F^n$, where we treat each element as a column vector (with respect to the standard basis). Then we can regard elements of $V^*$ as just row vectors $(a_1, \cdots, a_n) = \sum_{j = 1}^n a_j\varepsilon_j$ with respect to the dual basis. We have
+\[
 \left(\sum a_j \varepsilon_j\right)\left(\sum x_i\mathbf{e}_i\right) = \sum_{i, j} a_j x_i \delta_{ij} = \sum_{i = 1}^n a_i x_i =
+ \begin{pmatrix}
+ a_1 & \cdots & a_n
+ \end{pmatrix}
+ \begin{pmatrix}
+ x_1\\\vdots\\x_n
+ \end{pmatrix}.
+\]
+This is exactly what we want.
+
+Now what happens when we change basis? How will the dual basis change?
+\begin{prop}
 Let $V$ be a finite-dimensional vector space over $\F$ with bases $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_n)$, and let $P$ be the change of basis matrix so that
+ \[
+ \mathbf{f}_i = \sum_{k = 1}^n P_{ki}\mathbf{e}_k.
+ \]
+ Let $(\varepsilon_1, \cdots, \varepsilon_n)$ and $(\eta_1, \cdots, \eta_n)$ be the corresponding dual bases so that
+ \[
+ \varepsilon_i (\mathbf{e}_j) = \delta_{ij} = \eta_i (\mathbf{f}_j).
+ \]
+ Then the change of basis matrix from $(\varepsilon_1, \cdots, \varepsilon_n)$ to $(\eta_1, \cdots, \eta_n)$ is $(P^{-1})^T$, i.e.
+ \[
+ \varepsilon_i = \sum_{\ell = 1}^n P_{\ell i}^T \eta_\ell.
+ \]
+\end{prop}
+
+\begin{proof}
+ For convenience, write $Q = P^{-1}$ so that
+ \[
+ \mathbf{e}_j = \sum_{k = 1}^n Q_{kj}\mathbf{f}_k.
+ \]
+ So we can compute
+ \begin{align*}
+ \left(\sum_{\ell = 1}^n P_{i\ell}\eta_\ell\right)(\mathbf{e}_j) &= \left(\sum_{\ell = 1}^n P_{i\ell}\eta_\ell\right)\left(\sum_{k = 1}^n Q_{kj}\mathbf{f}_k\right)\\
+ &= \sum_{k, \ell} P_{i\ell}\delta_{\ell k} Q_{kj}\\
+ &= \sum_{k, \ell} P_{i\ell} Q_{\ell j}\\
+ &= [PQ]_{ij}\\
+ &= \delta_{ij}.
+ \end{align*}
+ So $\varepsilon_i = \sum_{\ell = 1}^n P_{\ell i}^T \eta_\ell$.
+\end{proof}
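We can verify this numerically by representing functionals as row vectors: with the standard basis of $\F^3$, the dual basis is the rows of the identity matrix, and the basis dual to the columns of $P$ is the rows of $P^{-1}$. The following sketch (with an arbitrarily chosen invertible $P$) checks that the combination $\sum_\ell P_{i\ell}\eta_\ell$ recovers $\varepsilon_i$:

```python
import numpy as np

# Functionals on F^3 are row vectors; the standard dual basis
# (epsilon_1, epsilon_2, epsilon_3) is the rows of the identity.
P = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])  # arbitrary invertible change-of-basis matrix
f = P                         # column i of P is the new basis vector f_i
eta = np.linalg.inv(P)        # row l of P^{-1} is eta_l: (P^{-1} P)_{lj} = delta_lj

# The proposition: epsilon_i = sum_l P[i, l] * eta_l.  Stacking these
# linear combinations as rows gives exactly the matrix product P @ eta,
# which should be the identity, i.e. the standard dual basis.
eps = P @ eta
```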
+
+Now we'll return to our original motivation, and think how we can define subspaces of $V^*$ in terms of subspaces of $V$, and vice versa.
+
+\begin{defi}[Annihilator]
+ Let $U\subseteq V$. Then the \emph{annihilator} of $U$ is
+ \[
+ U^0 = \{\theta\in V^* : \theta(\mathbf{u}) = 0, \forall \mathbf{u}\in U\}.
+ \]
+ If $W \subseteq V^*$, then the \emph{annihilator} of $W$ is
+ \[
+ W^0 = \{\mathbf{v}\in V: \theta(\mathbf{v}) = 0,\forall \theta \in W\}.
+ \]
+\end{defi}
+One might object that $W^0$ should be a subset of $V^{**}$ and not $V$. We will later show that there is a canonical isomorphism between $V^{**}$ and $V$, and this will all make sense.
+
+\begin{eg}
 Consider $\R^3$ with standard basis $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$; $(\R^3)^*$ with dual basis $(\varepsilon_1, \varepsilon_2, \varepsilon_3)$. If $U = \bra \mathbf{e}_1 + 2\mathbf{e}_2 + \mathbf{e}_3\ket$ and $W = \bra \varepsilon_1 - \varepsilon_3, 2\varepsilon_1 - \varepsilon_2\ket $, then $U^0 = W$ and $W^0 = U$.
+\end{eg}
+We see that the dimension of $U$ and $U^0$ add up to three, which is the dimension of $\R^3$. This is typical.
+
+\begin{prop}
+ Let $V$ be a vector space over $\F$ and $U$ a subspace. Then
+ \[
+ \dim U + \dim U^0 = \dim V.
+ \]
+\end{prop}
+We are going to prove this in many ways.
+\begin{proof}
+ Let $(\mathbf{e}_1, \cdots, \mathbf{e}_k)$ be a basis for $U$ and extend to $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ a basis for $V$. Consider the dual basis for $V^*$, say $(\varepsilon_1, \cdots, \varepsilon_n)$. Then we will show that
+ \[
+ U^0 = \bra \varepsilon_{k + 1}, \cdots, \varepsilon_{n}\ket.
+ \]
+ So $\dim U^0 = n - k$ as required. This is easy to prove --- if $j > k$, then $\varepsilon_j(\mathbf{e}_i) = 0$ for all $ i \leq k$. So $\varepsilon_{k + 1}, \cdots, \varepsilon_n \in U^0$. On the other hand, suppose $\theta \in U^0$. Then we can write
+ \[
+ \theta = \sum_{j = 1}^n \lambda_j \varepsilon_j.
+ \]
+ But then $0 = \theta (\mathbf{e}_i) = \lambda_i$ for $i \leq k$. So done.
+\end{proof}
+
+\begin{proof}
 Consider the restriction map $V^* \to U^*$, given by $\theta \mapsto \theta|_U$. This is obviously linear. Since every linear map $U \to \F$ can be extended to $V\to \F$, this is a surjection. Moreover, the kernel is $U^0$. So by the rank-nullity theorem,
+ \[
+ \dim V^* = \dim U^0 + \dim U^*.
+ \]
+ Since $\dim V^* = \dim V$ and $\dim U^* = \dim U$, we're done.
+\end{proof}
+
+\begin{proof}
+ We can show that $U^0 \simeq (V/U)^*$, and then deduce the result. Details are left as an exercise.
+\end{proof}
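For the example $U = \bra \mathbf{e}_1 + 2\mathbf{e}_2 + \mathbf{e}_3\ket \subseteq \R^3$ from before, the proposition can be checked numerically: writing functionals as row vectors, $U^0$ is the null space of the $1\times 3$ matrix whose row spans $U$. A sketch using the SVD to extract that null space:

```python
import numpy as np

# U = <(1, 2, 1)> in R^3.  A functional, written as a row vector a,
# lies in U^0 iff a . u = 0 for the spanning vector u.
M = np.array([[1., 2., 1.]])         # rows span U

# the rows of Vt beyond the rank index span the null space of M
_, s, Vt = np.linalg.svd(M)
rank = int((s > 1e-12).sum())
U0_basis = Vt[rank:]                 # rows form a basis of U^0

dim_U, dim_U0 = rank, U0_basis.shape[0]  # should sum to dim V = 3
```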
+
+\subsection{Dual maps}
+Since linear algebra is the study of vector spaces and linear maps between them, after dualizing vector spaces, we should be able to dualize linear maps as well. If we have a map $\alpha: V \to W$, then after dualizing, the map will go \emph{the other direction}, i.e.\ $\alpha^*: W^* \to V^*$. This is a characteristic common to most dualization processes in mathematics.
+
+\begin{defi}[Dual map]
+ Let $V, W$ be vector spaces over $\F$ and $\alpha: V\to W \in \mathcal{L}(V, W)$. The \emph{dual map} to $\alpha$, written $\alpha^*: W^* \to V^*$ is given by $\theta \mapsto \theta \circ \alpha$. Since the composite of linear maps is linear, $\alpha^*(\theta) \in V^*$. So this is a genuine map.
+\end{defi}
+
+\begin{prop}
+ Let $\alpha \in \mathcal{L}(V, W)$ be a linear map. Then $\alpha^* \in \mathcal{L}(W^*, V^*)$ is a linear map.
+\end{prop}
+This is \emph{not} the same as what we remarked at the end of the definition of the dual map. What we remarked was that given any $\theta$, $\alpha^*(\theta)$ is a linear map. What we want to show here is that $\alpha^*$ itself as a map $W^* \to V^*$ is linear.
+
+\begin{proof}
+ Let $\lambda, \mu \in \F$ and $\theta_1, \theta_2 \in W^*$. We want to show
+ \[
+ \alpha^*(\lambda \theta_1 + \mu \theta_2) = \lambda \alpha^*(\theta_1) + \mu \alpha^*(\theta_2).
+ \]
+ To show this, we show that for every $\mathbf{v} \in V$, the left and right give the same result. We have
+ \begin{align*}
+ \alpha^*(\lambda \theta_1 + \mu \theta_2)(\mathbf{v}) &= (\lambda \theta_1 + \mu \theta_2)(\alpha \mathbf{v}) \\
+ &= \lambda \theta_1 (\alpha (\mathbf{v})) + \mu \theta_2 (\alpha(\mathbf{v})) \\
+ &= (\lambda \alpha^*(\theta_1)+ \mu \alpha^*(\theta_2))(\mathbf{v}).
+ \end{align*}
+ So $\alpha^* \in \mathcal{L}(W^*, V^*)$.
+\end{proof}
+
+What happens to the matrices when we take the dual map? The answer is that we get the transpose.
+\begin{prop}
+ Let $V, W$ be finite-dimensional vector spaces over $\F$ and $\alpha: V\to W$ be a linear map. Let $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ be a basis for $V$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_m)$ be a basis for $W$; $(\varepsilon_1, \cdots, \varepsilon_n)$ and $(\eta_1, \cdots, \eta_m)$ the corresponding dual bases.
+
+ Suppose $\alpha$ is represented by $A$ with respect to $(\mathbf{e}_i)$ and $(\mathbf{f}_i)$ for $V$ and $W$. Then $\alpha^*$ is represented by $A^T$ with respect to the corresponding dual bases.
+\end{prop}
+
+\begin{proof}
+ We are given that
+ \[
+ \alpha (\mathbf{e}_i) = \sum_{k = 1}^m A_{ki}\mathbf{f}_k.
+ \]
+ We must compute $\alpha^*(\eta_i)$. To do so, we evaluate it at $\mathbf{e}_j$. We have
+ \[
+ \alpha^*(\eta_i)(\mathbf{e}_j) = \eta_i(\alpha(\mathbf{e}_j)) = \eta_i\left(\sum_{k = 1}^m A_{kj}\mathbf{f}_k\right) = \sum_{k = 1}^m A_{kj} \delta_{ik} = A_{ij}.
+ \]
+ We can also write this as
+ \[
+ \alpha^*(\eta_i)(\mathbf{e}_j) = \sum_{k = 1}^n A_{ik} \varepsilon_k (\mathbf{e}_j).
+ \]
+ Since this is true for all $j$, we have
+ \[
 \alpha^*(\eta_i) = \sum_{k = 1}^n A_{ik}\varepsilon_k = \sum_{k = 1}^n A_{ki}^T \varepsilon_k.
+ \]
+ So done.
+\end{proof}
+
+Note that if $\alpha: U\to V$ and $\beta: V\to W$, $\theta \in W^*$, then
+\[
+ (\beta\alpha)^*(\theta) = \theta\beta\alpha = \alpha^*(\theta\beta) = \alpha^*(\beta^*(\theta)).
+\]
+So we have $(\beta\alpha)^* = \alpha^*\beta^*$. This is obviously true for the finite-dimensional case, since that's how transposes of matrices work.
+
+Similarly, if $\alpha, \beta: U \to V$, then $(\lambda\alpha + \mu\beta)^* = \lambda\alpha^* + \mu\beta^*$.
+
+What happens when we change basis? If $B = Q^{-1}AP$ for some invertible $P$ and $Q$, then
+\[
+ B^T = (Q^{-1}AP)^T = P^TA^T(Q^{-1})^T = ((P^{-1})^T)^{-1} A^T (Q^{-1})^T.
+\]
+So in the dual space, we conjugate by the dual of the change-of-basis matrices.
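As a numerical check of this transformation law (with arbitrarily chosen invertible $P$, $Q$ and a random $A$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4)).astype(float)  # alpha: F^4 -> F^3
P = np.eye(4) + np.triu(np.ones((4, 4)), 1)         # invertible (unitriangular)
Q = np.eye(3) + np.tril(np.ones((3, 3)), -1)        # invertible

B = np.linalg.inv(Q) @ A @ P                        # change of basis for alpha

# the dual map's matrix B^T should be A^T conjugated by the dual
# change-of-basis matrices (P^{-1})^T and (Q^{-1})^T
lhs = B.T
rhs = np.linalg.inv(np.linalg.inv(P).T) @ A.T @ np.linalg.inv(Q).T
```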
+
+As we said, we can use dualization to translate problems about a vector space to its dual. The following lemma gives us some good tools to do so:
+\begin{lemma}
+ Let $\alpha \in \mathcal{L}(V, W)$ with $V, W$ finite dimensional vector spaces over $\F$. Then
+ \begin{enumerate}
+ \item $\ker \alpha^* = (\im \alpha)^0$.
+ \item $r(\alpha) = r(\alpha^*)$ (which is another proof that row rank is equal to column rank).
+ \item $\im \alpha^* = (\ker \alpha)^0$.
+ \end{enumerate}
+\end{lemma}
+At first sight, (i) and (iii) look quite similar. However, (i) is almost trivial to prove, but (iii) is rather hard.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $\theta \in W^*$, then
+ \begin{align*}
+ \theta \in \ker \alpha^* &\Leftrightarrow \alpha^*(\theta) = 0 \\
+ &\Leftrightarrow (\forall \mathbf{v}\in V)\; \theta \alpha(\mathbf{v}) = 0 \\
+ &\Leftrightarrow (\forall \mathbf{w}\in \im \alpha)\; \theta(\mathbf{w}) = 0\\
+ &\Leftrightarrow \theta \in (\im \alpha)^0.
+ \end{align*}
+ \item As $\im \alpha \leq W$, we've seen that
+ \[
+ \dim \im \alpha + \dim (\im \alpha)^0 = \dim W.
+ \]
+ Using (i), we see
+ \[
+ n(\alpha^*) = \dim (\im \alpha)^0.
+ \]
+ So
+ \[
+ r(\alpha) + n(\alpha^*) = \dim W = \dim W^*.
+ \]
+ By the rank-nullity theorem, we have $r(\alpha) = r(\alpha^*)$.
 \item The proof in (i) doesn't quite work here. We can only show that one includes the other. To draw the conclusion, we will show that the two spaces have the same dimension, and hence must be equal.
+
+ Let $\theta \in \im \alpha^*$. Then $\theta = \phi \alpha$ for some $\phi \in W^*$. If $\mathbf{v}\in \ker\alpha$, then
+ \[
 \theta(\mathbf{v}) = \phi(\alpha(\mathbf{v})) = \phi(\mathbf{0}) = 0.
+ \]
+ So $\im \alpha^* \subseteq (\ker\alpha)^0$.
+
+ But we know
+ \[
+ \dim (\ker \alpha)^0 + \dim \ker \alpha = \dim V,
+ \]
+ So we have
+ \[
+ \dim (\ker \alpha)^0 = \dim V - n(\alpha) = r(\alpha) = r(\alpha^*) = \dim \im \alpha^*.
+ \]
+ Hence we must have $\im \alpha^* = (\ker \alpha)^0$.\qedhere
+ \end{enumerate}
+\end{proof}
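Parts (i) and (ii) can be seen concretely in coordinates: $\alpha^*$ is represented by $A^T$, so $n(\alpha^*)$ should equal $\dim W - r(\alpha) = \dim(\im \alpha)^0$. A quick check with an arbitrary rank-$2$ matrix:

```python
import numpy as np

# alpha: F^4 -> F^3 represented by a 3x4 matrix of rank 2
# (the third row is the sum of the first two)
A = np.array([[1., 0., 2., 0.],
              [0., 1., 3., 0.],
              [1., 1., 5., 0.]])

r = np.linalg.matrix_rank(A)        # r(alpha)
r_dual = np.linalg.matrix_rank(A.T) # r(alpha*), since alpha* has matrix A^T

dim_W, dim_V = A.shape              # W = F^3, V = F^4
nullity_dual = dim_W - r_dual       # n(alpha*) by rank-nullity on W*
dim_im_alpha0 = dim_W - r           # dim (im alpha)^0
```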
+
Not only do we want to get from $V$ to $V^*$, we want to get back from $V^*$ to $V$. We can take the dual of $V^*$ to get $V^{**}$. We already know that $V^{**}$ is isomorphic to $V$, since $V^*$ is isomorphic to $V$ already. However, the isomorphism between $V^*$ and $V$ is not ``natural''. To define such an isomorphism, we needed to pick a basis for $V$ and consider a dual basis. If we picked a different basis, we would get a different isomorphism. There is no natural, canonical, uniquely-defined isomorphism between $V$ and $V^*$.
+
+However, this is not the case when we want to construct an isomorphism $V \to V^{**}$. The construction of this isomorphism is obvious once we think hard what $V^{**}$ actually means. Unwrapping the definition, we know $V^{**} = \mathcal{L}(V^*, \F)$. Our isomorphism has to produce something in $V^{**}$ given any $\mathbf{v} \in V$. This is equivalent to saying given any $\mathbf{v} \in V$ and a function $\theta \in V^*$, produce something in $\F$.
+
+This is easy, by definition $\theta \in V^*$ is just a linear map $V \to \F$. So given $\mathbf{v}$ and $\theta$, we just return $\theta(\mathbf{v})$. We now just have to show that this is linear and is bijective.
+\begin{lemma}
+ Let $V$ be a vector space over $\F$. Then there is a linear map $\ev: V\to (V^*)^*$ given by
+ \[
+ \ev (\mathbf{v})(\theta) = \theta(\mathbf{v}).
+ \]
+ We call this the \emph{evaluation} map.
+\end{lemma}
+We call this a ``canonical'' map since this does not require picking a particular basis of the vector spaces. It is in some sense a ``natural'' map.
+
+\begin{proof}
+ We first show that $\ev(\mathbf{v}) \in V^{**}$ for all $\mathbf{v}\in V$, i.e.\ $\ev (\mathbf{v})$ is linear for any $\mathbf{v}$. For any $\lambda, \mu\in \F$, $\theta_1, \theta_2 \in V^*$, then for $\mathbf{v} \in V$, we have
+ \begin{align*}
+ \ev(\mathbf{v})(\lambda\theta_1 + \mu\theta_2) &= (\lambda\theta_1 + \mu\theta_2)(\mathbf{v}) \\
+ &= \lambda\theta_1(\mathbf{v}) + \mu\theta_2(\mathbf{v}) \\
+ &= \lambda\ev(\mathbf{v})(\theta_1) + \mu \ev(\mathbf{v})(\theta_2).
+ \end{align*}
+ So done. Now we show that $\ev$ itself is linear. Let $\lambda, \mu\in \F$, $\mathbf{v}_1, \mathbf{v}_2 \in V$. We want to show
+ \[
+ \ev(\lambda\mathbf{v}_1 + \mu \mathbf{v}_2) = \lambda \ev (\mathbf{v}_1) + \mu \ev(\mathbf{v}_2).
+ \]
+ To show these are equal, pick $\theta \in V^*$. Then
+ \begin{align*}
+ \ev(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2)(\theta) &= \theta(\lambda\mathbf{v}_1 + \mu \mathbf{v}_2) \\
+ &= \lambda\theta(\mathbf{v}_1) + \mu \theta(\mathbf{v}_2) \\
+ &= \lambda \ev(\mathbf{v}_1)(\theta) + \mu \ev(\mathbf{v}_2)(\theta) \\
+ &= (\lambda \ev(\mathbf{v}_1) + \mu \ev(\mathbf{v}_2))(\theta).
+ \end{align*}
+ So done.
+\end{proof}
+In the special case where $V$ is finite-dimensional, this is an isomorphism.
+\begin{lemma}
+ If $V$ is finite-dimensional, then $\ev: V \to V^{**}$ is an isomorphism.
+\end{lemma}
+This is very false for infinite dimensional spaces. In fact, this is true \emph{only} for finite-dimensional vector spaces (assuming the axiom of choice), and some (weird) people use this as the definition of finite-dimensional vector spaces.
+
+\begin{proof}
 We first show it is injective. Suppose $\ev(\mathbf{v}) = \mathbf{0}$ for some $\mathbf{v}\in V$. Then $\theta (\mathbf{v}) = \ev (\mathbf{v})(\theta) = 0$ for all $\theta \in V^*$. So $\dim \bra \mathbf{v}\ket^0 = \dim V^* = \dim V$. So $\dim \bra \mathbf{v} \ket = 0$. So $\mathbf{v} = \mathbf{0}$. So $\ev$ is injective. Since $V$ and $V^{**}$ have the same dimension, this is also surjective. So done.
+\end{proof}
+From now on, we will just pretend that $V$ and $V^{**}$ are the same thing, at least when $V$ is finite dimensional.
+
+Note that this lemma does not just say that $V$ is isomorphic to $V^{**}$ (we already know that since they have the same dimension). This says there is a completely canonical way to choose the isomorphism.
+
+In general, if $V$ is infinite dimensional, then $\ev$ is injective, but not surjective. So we can think of $V$ as a subspace of $V^{**}$ in a canonical way.
+
+\begin{lemma}
 Let $V, W$ be finite-dimensional vector spaces over $\F$, and identify $V$ with $V^{**}$ and $W$ with $W^{**}$ via the evaluation maps. Then we have
+ \begin{enumerate}
+ \item If $U\leq V$, then $U^{00} = U$.
+ \item If $\alpha\in \mathcal{L}(V, W)$, then $\alpha^{**} = \alpha$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $\mathbf{u} \in U$. Then $\mathbf{u}(\theta) = \theta(\mathbf{u}) = 0$ for all $\theta \in U^0$. So $\mathbf{u}$ annihilates everything in $U^0$. So $\mathbf{u} \in U^{00}$. So $U \subseteq U^{00}$. We also know that
+ \[
+ \dim U = \dim V - \dim U^0 = \dim V - (\dim V - \dim U^{00}) = \dim U^{00}.
+ \]
+ So we must have $U = U^{00}$.
+ \item The proof of this is basically --- the transpose of the transpose is the original matrix. The only work we have to do is to show that the dual of the dual basis is the original basis.
+
 Let $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ be a basis for $V$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_m)$ be a basis for $W$, and let $(\varepsilon_1, \cdots, \varepsilon_n)$ and $(\eta_1, \cdots, \eta_m)$ be the corresponding dual bases. We know that
+ \[
+ \mathbf{e}_i(\varepsilon_j) = \delta_{ij} = \varepsilon_j(\mathbf{e}_i),\quad \mathbf{f}_i(\eta_j) = \delta_{ij} = \eta_j(\mathbf{f}_i).
+ \]
+ So $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ is dual to $(\varepsilon_1, \cdots, \varepsilon_n)$, and similarly for $\mathbf{f}$ and $\eta$.
+
+ If $\alpha$ is represented by $A$, then $\alpha^*$ is represented by $A^T$. So $\alpha^{**}$ is represented by $(A^T)^T = A$. So done.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{prop}
 Let $V$ be a finite-dimensional vector space over $\F$, and let $U_1$, $U_2$ be subspaces of $V$. Then we have
+ \begin{enumerate}
+ \item $(U_1 + U_2)^0 = U_1^0 \cap U_2^0$
+ \item $(U_1 \cap U_2)^0 = U_1^0 + U_2^0$
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Suppose $\theta \in V^*$. Then
+ \begin{align*}
+ \theta \in (U_1 + U_2)^0 &\Leftrightarrow \theta (\mathbf{u}_1 + \mathbf{u}_2) = 0\text{ for all }\mathbf{u}_i \in U_i\\
+ &\Leftrightarrow \theta (\mathbf{u}) = 0\text{ for all }\mathbf{u} \in U_1 \cup U_2\\
+ &\Leftrightarrow \theta \in U_1^0 \cap U_2^0.
+ \end{align*}
+ \item We have
+ \[
+ (U_1 \cap U_2)^0 = ((U_1^0)^0 \cap (U_2^0)^0)^0 = (U_1^0 + U_2^0)^{00} = U_1^0 + U_2^0.
+ \]
+ So done.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\section{Bilinear forms I}
+\label{sec:bilin1}
+So far, we have been looking at linear things only. This can get quite boring. For a change, we look at \emph{bi}linear maps instead. In this chapter, we will look at bilinear forms in general. It turns out there isn't much we can say about them, and hence this chapter is rather short. Later, in Chapter~\ref{sec:bilin2}, we will study some special kinds of bilinear forms which are more interesting.
+
+\begin{defi}[Bilinear form]
+ Let $V, W$ be vector spaces over $\F$. Then a function $\phi: V\times W \to \F$ is a \emph{bilinear form} if it is linear in each variable, i.e.\ for each $\mathbf{v} \in V$, $\phi(\mathbf{v}, \ph): W \to \F$ is linear; for each $\mathbf{w} \in W$, $\phi(\ph, \mathbf{w}): V\to \F$ is linear.
+\end{defi}
+
+\begin{eg}
+ The map defined by
+ \begin{align*}
+ V\times V^* &\to \F\\
+ (\mathbf{v}, \theta) &\mapsto \theta(\mathbf{v}) = \ev(\mathbf{v})(\theta)
+ \end{align*}
+ is a bilinear form.
+\end{eg}
+
+\begin{eg}
 Let $V = W = \F^n$. Then the function $(\mathbf{v}, \mathbf{w}) \mapsto \sum_{i = 1}^n v_i w_i$ is bilinear.
+\end{eg}
+
+\begin{eg}
+ If $V = W = C([0, 1], \R)$, then
+ \[
 (f, g) \mapsto \int_0^1 fg \;\d t
+ \]
+ is a bilinear form.
+\end{eg}
+
+\begin{eg}
+ Let $A \in \Mat_{m, n}(\F)$. Then
+ \begin{align*}
+ \phi: \F^m \times \F^n &\to \F\\
+ (\mathbf{v}, \mathbf{w}) &\mapsto \mathbf{v}^T A\mathbf{w}
+ \end{align*}
+ is bilinear. Note that the (real) dot product is the special case of this, where $n = m$ and $A = I$.
+\end{eg}
+In fact, this is the most general form of bilinear forms on finite-dimensional vector spaces.
+
+\begin{defi}[Matrix representing bilinear form]
 Let $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ be a basis for $V$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_m)$ be a basis for $W$, and let $\psi: V\times W \to \F$ be a bilinear form. Then the \emph{matrix $A$ representing $\psi$} with respect to the bases is defined to be
+ \[
+ A_{ij} = \psi(\mathbf{e}_i, \mathbf{f}_j).
+ \]
+\end{defi}
+Note that if $\mathbf{v} = \sum \lambda_i \mathbf{e}_i$ and $\mathbf{w} = \sum \mu_j \mathbf{f}_j$, then by linearity, we get
+\begin{align*}
+ \psi(\mathbf{v}, \mathbf{w}) &= \psi\left(\sum \lambda_i \mathbf{e}_i, \mathbf{w}\right) \\
+ &= \sum_i \lambda_i \psi(\mathbf{e}_i, \mathbf{w})\\
+ &= \sum_i \lambda_i \psi\left(\mathbf{e}_i, \sum \mu_j \mathbf{f}_j\right)\\
+ &= \sum_{i, j} \lambda_i \mu_j \psi(\mathbf{e}_i, \mathbf{f}_j)\\
+ &= \lambda^T A \mu.
+\end{align*}
+So $\psi$ is determined by $A$.
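For a concrete check that $\psi(\mathbf{v}, \mathbf{w}) = \lambda^T A\mu$, we can pick an arbitrary matrix $A$ and coefficient vectors and compare the bilinear expansion with the matrix product:

```python
import numpy as np

# A[i, j] = psi(e_i, f_j); the entries are arbitrary choices
A = np.array([[1., 2.],
              [0., 1.],
              [3., 0.]])

lam = np.array([1., -1., 2.])  # v = sum_i lam_i e_i
mu = np.array([3., 1.])        # w = sum_j mu_j f_j

# expanding by bilinearity: psi(v, w) = sum_{i,j} lam_i mu_j A[i, j]
by_expansion = sum(lam[i] * mu[j] * A[i, j]
                   for i in range(3) for j in range(2))
by_matrix = lam @ A @ mu       # lambda^T A mu
```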
+
We have identified linear maps with matrices, and we have identified bilinear forms with matrices. However, you shouldn't think linear maps \emph{are} bilinear forms. They are, obviously, two different things. In fact, the matrices representing linear maps and bilinear forms transform differently when we change basis.
+
+\begin{prop}
 Suppose $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ are bases for $V$ such that
+ \[
+ \mathbf{v}_i = \sum P_{ki}\mathbf{e}_k\text{ for all }i = 1,\cdots, n;
+ \]
+ and $(\mathbf{f}_1, \cdots, \mathbf{f}_m)$ and $(\mathbf{w}_1, \cdots, \mathbf{w}_m)$ are bases for $W$ such that
+ \[
 \mathbf{w}_j = \sum Q_{\ell j} \mathbf{f}_\ell\text{ for all }j = 1, \cdots, m.
+ \]
+ Let $\psi: V\times W \to \F$ be a bilinear form represented by $A$ with respect to $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_m)$, and by $B$ with respect to the bases $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ and $(\mathbf{w}_1, \cdots, \mathbf{w}_m)$. Then
+ \[
+ B = P^T AQ.
+ \]
+\end{prop}
+The difference with the transformation laws of matrices is this time we are taking \emph{transposes}, not \emph{inverses}.
+
+\begin{proof}
+ We have
+ \begin{align*}
 B_{ij} &= \psi(\mathbf{v}_i, \mathbf{w}_j)\\
 &= \psi\left(\sum P_{ki}\mathbf{e}_k, \sum Q_{\ell j}\mathbf{f}_\ell\right)\\
 &= \sum P_{ki}Q_{\ell j}\psi(\mathbf{e}_k, \mathbf{f}_\ell)\\
+ &= \sum_{k, \ell} P^T_{ik} A_{k\ell} Q_{\ell j}\\
+ &= (P^T AQ)_{ij}.\qedhere
+ \end{align*}
+\end{proof}
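We can sanity-check the transformation law $B = P^TAQ$ numerically: compute each $B_{ij} = \psi(\mathbf{v}_i, \mathbf{w}_j)$ directly from the coordinate columns of $\mathbf{v}_i$ and $\mathbf{w}_j$, and compare (the matrices here are arbitrary choices):

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [0., 1., 1.]])   # psi on F^2 x F^3 w.r.t. (e_i), (f_j)
P = np.array([[1., 1.],
              [0., 1.]])       # v_i = sum_k P[k, i] e_k
Q = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [0., 1., 1.]])   # w_j = sum_l Q[l, j] f_l

# direct computation: B[i, j] = psi(v_i, w_j) = (column i of P)^T A (column j of Q)
B_direct = np.array([[P[:, i] @ A @ Q[:, j] for j in range(3)]
                     for i in range(2)])
B_formula = P.T @ A @ Q
```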
+Note that while the transformation laws for bilinear forms and linear maps are different, we still get that two matrices are representing the same bilinear form with respect to different bases if and only if they are equivalent, since if $B = P^{-1} AQ$, then $B = ((P^{-1})^T)^T AQ$.
+
+If we are given a bilinear form $\psi: V\times W \to \F$, we immediately get two linear maps:
+\[
+ \psi_L: V\to W^*,\quad \psi_R: W \to V^*,
+\]
+defined by $\psi_L(\mathbf{v}) = \psi(\mathbf{v}, \ph)$ and $\psi_R(\mathbf{w}) = \psi(\ph, \mathbf{w})$.
+
For example, if $\psi: V\times V^* \to \F$ is defined by $(\mathbf{v}, \theta) \mapsto \theta(\mathbf{v})$, then $\psi_L: V\to V^{**}$ is the evaluation map. On the other hand, $\psi_R: V^* \to V^*$ is the identity map.
+
+\begin{lemma}
 Let $(\varepsilon_1,\cdots, \varepsilon_n)$ be the basis for $V^*$ dual to the basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ of $V$, and $(\eta_1,\cdots, \eta_m)$ the basis for $W^*$ dual to $(\mathbf{f}_1, \cdots, \mathbf{f}_m)$ of $W$.
+
+ If $A$ represents $\psi$ with respect to $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_m)$, then $A$ also represents $\psi_R$ with respect to $(\mathbf{f}_1,\cdots, \mathbf{f}_m)$ and $(\varepsilon_1, \cdots, \varepsilon_n)$; and $A^T$ represents $\psi_L$ with respect to $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\eta_1, \cdots, \eta_m)$.
+\end{lemma}
+
+\begin{proof}
+ We just have to compute
+ \[
+ \psi_L(\mathbf{e}_i)(\mathbf{f}_j) = A_{ij} = \left(\sum A_{i\ell} \eta_\ell\right) (\mathbf{f}_j).
+ \]
+ So we get
+ \[
+ \psi_L(\mathbf{e}_i) = \sum A_{\ell i}^T\eta_\ell.
+ \]
+ So $A^T$ represents $\psi_L$.
+
+ We also have
+ \[
+ \psi_R(\mathbf{f}_j)(\mathbf{e}_i) = A_{ij}.
+ \]
+ So
+ \[
+ \psi_R(\mathbf{f}_j) = \sum A_{kj}\varepsilon_k.\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Left and right kernel]
 The kernel of $\psi_L$ is the \emph{left kernel} of $\psi$, while the kernel of $\psi_R$ is the \emph{right kernel} of $\psi$.
+\end{defi}
Then by definition, $\mathbf{v}$ is in the left kernel if and only if $\psi(\mathbf{v}, \mathbf{w}) = 0$ for all $\mathbf{w} \in W$.
+
+More generally, if $T\subseteq V$, then we write
+\[
+ T^\bot = \{\mathbf{w} \in W: \psi(\mathbf{t}, \mathbf{w}) = 0\text{ for all }\mathbf{t} \in T\}.
+\]
+Similarly, if $U\subseteq W$, then we write
+\[
+ ^\bot U = \{\mathbf{v} \in V: \psi(\mathbf{v}, \mathbf{u}) = 0\text{ for all }\mathbf{u}\in U\}.
+\]
+In particular, $V^\bot = \ker \psi_R$ and $^\bot W = \ker \psi_L$.
+
+If we have a non-trivial left (or right) kernel, then in some sense, some elements in $V$ (or $W$) are ``useless'', and we don't like these.
+\begin{defi}[Non-degenerate bilinear form]
+ $\psi$ is \emph{non-degenerate} if the left and right kernels are both trivial. We say $\psi$ is \emph{degenerate} otherwise.
+\end{defi}
+
+\begin{defi}[Rank of bilinear form]
+ If $\psi: V\times W\to \F$ is a bilinear form on finite-dimensional vector spaces $V$ and $W$, then the \emph{rank} of $\psi$ is the rank of any matrix representing $\psi$. This is well-defined since $r(P^T AQ) = r(A)$ if $P$ and $Q$ are invertible.
+
+ Alternatively, it is the rank of $\psi_L$ (or $\psi_R$).
+\end{defi}
+
+\begin{lemma}
+ Let $V$ and $W$ be finite-dimensional vector spaces over $\F$ with bases $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_m)$ respectively.
+
+ Let $\psi: V\times W \to \F$ be a bilinear form represented by $A$ with respect to these bases. Then $\psi$ is non-degenerate if and only if $A$ is (square and) invertible. In particular, if $\psi$ is non-degenerate, then $V$ and $W$ have the same dimension.
+\end{lemma}
+We can understand this as saying if there are too many things in $V$ (or $W$), then some of them are bound to be useless.
+
+\begin{proof}
+ Since $\psi_R$ and $\psi_L$ are represented by $A$ and $A^T$ (in some order), they both have trivial kernel if and only if $n(A) = n(A^T) = 0$, i.e.\ $r(A) = \dim V$ and $r(A^T) = \dim W$. This holds exactly when $\dim V = \dim W$ and $A$ has full rank, i.e.\ the corresponding linear map is bijective. So done.
+\end{proof}
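To make the correspondence concrete, here is a small Python sketch (the names `psi`, `A`, `M` are ours, not from the notes): a bilinear form represented by a matrix $A$ is $\psi(\mathbf{v}, \mathbf{w}) = \mathbf{v}^T A \mathbf{w}$, and a singular $A$ gives a degenerate form with a non-trivial right kernel.

```python
from itertools import product

def psi(A, v, w):
    """Bilinear form represented by the matrix A: psi(v, w) = v^T A w."""
    n = len(A)
    return sum(v[i] * A[i][j] * w[j] for i in range(n) for j in range(n))

# A singular matrix gives a degenerate form: here w = (0, 1) lies in the
# right kernel, i.e. psi(v, w) = 0 for every v.
A = [[1, 0],
     [0, 0]]
assert all(psi(A, v, (0, 1)) == 0 for v in product(range(-2, 3), repeat=2))

# By contrast, the form ((a, c), (b, d)) |-> ad - bc is represented by an
# invertible matrix, so it is non-degenerate.
M = [[0, 1],
     [-1, 0]]
assert psi(M, (1, 0), (0, 1)) == 1   # ad - bc with a = d = 1, b = c = 0
```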
+
+\begin{eg}
+ The map
+ \begin{align*}
+ \F^2 \times \F^2 &\to \F\\
+ \begin{pmatrix}
+ a\\c
+ \end{pmatrix},
+ \begin{pmatrix}
+ b\\d
+ \end{pmatrix} &\mapsto ad - bc
+ \end{align*}
+ is a bilinear form. This, obviously, corresponds to the determinant of a 2-by-2 matrix. We have $\psi(\mathbf{v}, \mathbf{w}) = -\psi(\mathbf{w}, \mathbf{v})$ for all $\mathbf{v}, \mathbf{w}\in \F^2$.
+\end{eg}
+
+\section{Determinants of matrices}
+We probably all know what the determinant is. Here we are going to give a slightly more abstract definition, and spend quite a lot of time trying to motivate this definition.
+
+Recall that $S_n$ is the group of permutations of $\{1, \cdots, n\}$, and there is a unique group homomorphism $\varepsilon: S_n \to \{\pm 1\}$ such that $\varepsilon(\sigma) = 1$ if $\sigma$ can be written as a product of an even number of transpositions; $\varepsilon(\sigma) = -1$ if $\sigma$ can be written as a product of an odd number of transpositions. It is proved in IA Groups that this is well-defined.
+
+\begin{defi}[Determinant]
+ Let $A \in \Mat_{n, n}(\F)$. Its \emph{determinant} is
+ \[
+ \det A = \sum_{\sigma \in S_n} \varepsilon(\sigma) \prod_{i = 1}^n A_{i \sigma(i)}.
+ \]
+\end{defi}
+This is a big scary definition. Hence, we will spend the first half of the chapter trying to understand what this really means, and how it behaves. We will eventually prove a formula that is useful for computing the determinant, which is probably how you were first exposed to the determinant.
+
+\begin{eg}
+ If $n = 2$, then $S_2 = \{\id, (1\; 2)\}$. So
+ \[
+ \det A = A_{11}A_{22} - A_{12} A_{21}.
+ \]
+ When $n = 3$, then $S_3$ has 6 elements, and
+ \begin{align*}
+ \det A &= A_{11}A_{22}A_{33} + A_{12}A_{23}A_{31} + A_{13}A_{21}A_{32}\\
+ &\quad - A_{11}A_{23}A_{32} - A_{22}A_{31}A_{13} - A_{33}A_{12}A_{21}.
+ \end{align*}
+\end{eg}
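The definition translates directly into code. Below is a Python sketch (our own names) that computes $\det A$ by summing over all $n!$ permutations, with $\varepsilon(\sigma)$ obtained by counting inversions:

```python
from itertools import permutations
from math import prod

def sign(p):
    """epsilon(p): +1 or -1 according to the parity of the number of inversions."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def leibniz_det(A):
    """det A = sum over all permutations of sign times a product of entries
    (0-indexed version of the definition above)."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# n = 2 recovers A11 A22 - A12 A21
assert leibniz_det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3
```

For an upper triangular matrix only $\sigma = \id$ contributes, so the sum collapses to the product of the diagonal entries, as the next lemma shows in general.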
+
+We will first prove a few easy and useful lemmas about the determinant.
+\begin{lemma}
+ $\det A = \det A^T$.
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ \det A^T &= \sum_{\sigma \in S_n} \varepsilon(\sigma) \prod_{i = 1}^n A_{\sigma(i)i}\\
+ &= \sum_{\sigma \in S_n} \varepsilon (\sigma) \prod_{j = 1}^n A_{j \sigma^{-1}(j)}\\
+ &= \sum_{\tau \in S_n} \varepsilon (\tau^{-1}) \prod_{j = 1}^n A_{j \tau (j)}\\
+ \intertext{Since $\varepsilon(\tau) = \varepsilon(\tau^{-1})$, we get}
+ &= \sum_{\tau \in S_n} \varepsilon (\tau) \prod_{j = 1}^n A_{j \tau (j)}\\
+ &= \det A.\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{lemma}
+ If $A$ is an upper triangular matrix, i.e.
+ \[
+ A =
+ \begin{pmatrix}
+ a_{11} & a_{12} & \cdots & a_{1n}\\
+ 0 & a_{22} & \cdots & a_{2n}\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & a_{nn}
+ \end{pmatrix}
+ \]
+ Then
+ \[
+ \det A = \prod_{i = 1}^n a_{ii}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We have
+ \[
+ \det A = \sum_{\sigma \in S_n} \varepsilon(\sigma) \prod_{i = 1}^n A_{i\sigma (i)}.
+ \]
+ But $A_{i \sigma(i)} = 0$ whenever $i > \sigma(i)$. So
+ \[
+ \prod_{i = 1}^n A_{i\sigma(i)} = 0
+ \]
+ if there is some $i \in \{1, \cdots, n\}$ such that $i > \sigma(i)$.
+
+ However, the only permutation in which $i \leq \sigma(i)$ for all $i$ is the identity. So the only thing that contributes in the sum is $\sigma = \id$. So
+ \[
+ \det A = \prod_{i = 1}^n A_{ii}.\qedhere
+ \]
+\end{proof}
+To motivate this definition, we need a notion of volume. How can we define \emph{volume} on a vector space? It should be clear that the ``volume'' cannot be uniquely determined, since it depends on what units we are using. For example, saying the volume is ``$1$'' is meaningless unless we provide the units, e.g.\ $\SI{}{\centi\meter\cubed}$. So we have an axiomatic definition for what it means for something to denote a ``volume''.
+
+\begin{defi}[Volume form]
+ A \emph{volume form} on $\F^n$ is a function $d: \F^n \times \cdots \times \F^n \to \F$ that is
+ \begin{enumerate}
+ \item Multilinear, i.e.\ for all $i$ and all $\mathbf{v}_1, \cdots, \mathbf{v}_{i - 1}, \mathbf{v}_{i + 1}, \cdots, \mathbf{v}_n \in \F^n$, we have
+ \[
+ d (\mathbf{v}_1, \cdots, \mathbf{v}_{i - 1}, \ph, \mathbf{v}_{i + 1}, \cdots, \mathbf{v}_n) \in (\F^n)^*.
+ \]
+ \item Alternating, i.e.\ if $\mathbf{v}_i = \mathbf{v}_j$ for some $i \not= j$, then
+ \[
+ d(\mathbf{v}_1, \cdots, \mathbf{v}_n) = 0.
+ \]
+ \end{enumerate}
+\end{defi}
+We should think of $d(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ as the $n$-dimensional volume of the parallelepiped spanned by $\mathbf{v}_1, \cdots, \mathbf{v}_n$.
+
+We can view $A \in \Mat_n(\F)$ as $n$-many vectors in $\F^n$ by considering its columns $A = (A^{(1)}\; A^{(2)}\; \cdots \; A^{(n)})$, with $A^{(i)} \in \F^n$. Then we have
+\begin{lemma}
+ $\det A$ is a volume form.
+\end{lemma}
+
+\begin{proof}
+ To see that $\det$ is multilinear, it is sufficient to show that each
+ \[
+ \prod_{i = 1}^n A_{i \sigma(i)}
+ \]
+ is multilinear for all $\sigma \in S_n$, since linear combinations of multilinear forms are multilinear. But each such product contains precisely one entry from each column, and so is multilinear.
+
+ To show it is alternating, suppose now there are some $k, \ell$ distinct such that $A^{(k)} = A^{(\ell)}$. We let $\tau$ be the transposition $(k\; \ell)$. By Lagrange's theorem, we can write
+ \[
+ S_n = A_n \amalg \tau A_n,
+ \]
+ where $A_n = \ker \varepsilon$ and $\amalg$ is the disjoint union. We also know that
+ \[
+ \sum_{\sigma \in A_n} \prod_{i = 1}^n A_{i \sigma (i)} = \sum_{\sigma \in A_n} \prod_{i = 1}^n A_{i, \tau\sigma(i)},
+ \]
+ since if $\sigma(i)$ is not $k$ or $\ell$, then $\tau$ does nothing; if $\sigma(i)$ is $k$ or $\ell$, then $\tau$ just swaps them around, but $A^{(k)} = A^{(\ell)}$. So we get
+ \[
+ \sum_{\sigma \in A_n} \prod_{i = 1}^n A_{i \sigma (i)} = \sum_{\sigma' \in \tau A_n} \prod_{i = 1}^n A_{i\sigma'(i)}.
+ \]
+ Hence
+ \[
+ \det A = \text{LHS} - \text{RHS} = 0.
+ \]
+ So done.
+\end{proof}
+We have shown that determinants are volume forms, but is this the only volume form? Well obviously not, since $2 \det A$ is also a valid volume form. However, in some sense, all volume forms are ``derived'' from the determinant. Before we show that, we need the following
+
+\begin{lemma}
+ Let $d$ be a volume form on $\F^n$. Then swapping two entries changes the sign, i.e.
+ \[
+ d (\mathbf{v}_1, \cdots, \mathbf{v}_i, \cdots, \mathbf{v}_j,\cdots, \mathbf{v}_n) = -d(\mathbf{v}_1, \cdots, \mathbf{v}_j, \cdots, \mathbf{v}_i, \cdots, \mathbf{v}_n).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Since $d$ is multilinear and alternating, we have
+ \begin{align*}
+ 0 &= d(\mathbf{v}_1, \cdots, \mathbf{v}_i + \mathbf{v}_j, \cdots, \mathbf{v}_i + \mathbf{v}_j, \cdots,\mathbf{v}_n) \\
+ &= d(\mathbf{v}_1, \cdots, \mathbf{v}_i, \cdots, \mathbf{v}_i, \cdots, \mathbf{v}_n)\\
+ &\quad\quad+ d(\mathbf{v}_1, \cdots, \mathbf{v}_i, \cdots, \mathbf{v}_j, \cdots, \mathbf{v}_n)\\
+ &\quad\quad+ d(\mathbf{v}_1, \cdots, \mathbf{v}_j, \cdots, \mathbf{v}_i, \cdots, \mathbf{v}_n)\\
+ &\quad\quad+ d(\mathbf{v}_1, \cdots, \mathbf{v}_j, \cdots, \mathbf{v}_j, \cdots, \mathbf{v}_n)\\
+ &= d(\mathbf{v}_1, \cdots, \mathbf{v}_i, \cdots, \mathbf{v}_j, \cdots, \mathbf{v}_n)\\
+ &\quad\quad+ d(\mathbf{v}_1, \cdots, \mathbf{v}_j, \cdots, \mathbf{v}_i, \cdots, \mathbf{v}_n).
+ \end{align*}
+ So done.
+\end{proof}
+
+\begin{cor}
+ If $\sigma \in S_n$, then
+ \[
+ d(\mathbf{v}_{\sigma(1)}, \cdots, \mathbf{v}_{\sigma(n)}) = \varepsilon(\sigma) d(\mathbf{v}_1,\cdots, \mathbf{v}_n)
+ \]
+ for any $\mathbf{v}_i \in \F^n$.
+\end{cor}
+
+\begin{thm}
+ Let $d$ be any volume form on $\F^n$, and let $A = (A^{(1)}\;\cdots \;A^{(n)}) \in \Mat_n(\F)$. Then
+ \[
+ d(A^{(1)}, \cdots, A^{(n)}) = (\det A) d(\mathbf{e}_1, \cdots, \mathbf{e}_n),
+ \]
+ where $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ is the standard basis.
+\end{thm}
+
+\begin{proof}
+ We can compute
+ \begin{align*}
+ d(A^{(1)}, \cdots, A^{(n)}) &= d\left(\sum_{i = 1}^n A_{i1} \mathbf{e}_i, A^{(2)}, \cdots, A^{(n)}\right)\\
+ &= \sum_{i = 1}^n A_{i1} d(\mathbf{e}_i, A^{(2)}, \cdots, A^{(n)})\\
+ &= \sum_{i, j = 1}^n A_{i1}A_{j2}d(\mathbf{e}_i, \mathbf{e}_j, A^{(3)}, \cdots, A^{(n)})\\
+ &= \sum_{i_1, \cdots, i_n} d(\mathbf{e}_{i_1}, \cdots, \mathbf{e}_{i_n})\prod_{j = 1}^n A_{i_j j}.
+ \end{align*}
+ We know that lots of these are zero, since if $i_k = i_j$ for some $k, j$, then the term is zero. So we are just summing over distinct tuples, i.e.\ when there is some $\sigma$ such that $i_j = \sigma(j)$. So we get
+ \[
+ d(A^{(1)}, \cdots, A^{(n)}) = \sum_{\sigma \in S_n} d(\mathbf{e}_{\sigma(1)}, \cdots, \mathbf{e}_{\sigma(n)})\prod_{j = 1}^n A_{\sigma(j)j}.
+ \]
+ However, by our corollary up there, this is just
+ \[
+ d(A^{(1)}, \cdots, A^{(n)}) = \sum_{\sigma \in S_n} \varepsilon(\sigma) d(\mathbf{e}_1, \cdots, \mathbf{e}_n) \prod_{j = 1}^n A_{\sigma(j)j} = (\det A) d(\mathbf{e}_1, \cdots, \mathbf{e}_n).
+ \]
+ So done.
+\end{proof}
+We can rewrite the formula as
+\[
+ d (A \mathbf{e}_1, \cdots, A\mathbf{e}_n) = (\det A)d(\mathbf{e}_1, \cdots, \mathbf{e}_n).
+\]
+It is not hard to see that the same proof gives for any $\mathbf{v}_1, \cdots, \mathbf{v}_n$, we have
+\[
+ d(A\mathbf{v}_1, \cdots, A\mathbf{v}_n) = (\det A)d(\mathbf{v}_1, \cdots, \mathbf{v}_n).
+\]
+So we know that $\det A$ is the volume rescaling factor of an arbitrary parallelepiped, and this is true for \emph{any} volume form $d$.
+
+\begin{thm}
+ Let $A, B \in \Mat_n(\F)$. Then $\det(AB) = \det(A)\det(B)$.
+\end{thm}
+
+\begin{proof}
+ Let $d$ be a non-zero volume form on $\F^n$ (e.g.\ the ``determinant''). Then we can compute
+ \[
+ d(AB\mathbf{e}_1,\cdots , AB\mathbf{e}_n) = (\det AB) d(\mathbf{e}_1,\cdots, \mathbf{e}_n),
+ \]
+ but we also have
+ \[
+ d(AB\mathbf{e}_1, \cdots, AB\mathbf{e}_n) = (\det A) d(B\mathbf{e}_1, \cdots, B\mathbf{e}_n) = (\det A)(\det B)d(\mathbf{e}_1, \cdots, \mathbf{e}_n).
+ \]
+ Since $d$ is non-zero, we must have $\det AB = \det A \det B$.
+\end{proof}
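A quick numerical sanity check of multiplicativity, using the explicit $2\times 2$ formula (a sketch; the helper names are ours):

```python
def det2(M):
    """det of a 2x2 matrix: A11 A22 - A12 A21."""
    (a, b), (c, d) = M
    return a * d - b * c

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]
assert det2(matmul2(A, B)) == det2(A) * det2(B)   # det(AB) = det A det B
```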
+
+\begin{cor}
+ If $A \in \Mat_n(\F)$ is invertible, then $\det A \not= 0$. In fact, when $A$ is invertible, then $\det (A^{-1}) = (\det A)^{-1}$.
+\end{cor}
+
+\begin{proof}
+ We have
+ \[
+ 1 = \det I = \det(AA^{-1}) = \det A\det A^{-1}.
+ \]
+ So done.
+\end{proof}
+
+\begin{defi}[Singular matrices]
+ A matrix $A$ is \emph{singular} if $\det A = 0$. Otherwise, it is \emph{non-singular}.
+\end{defi}
+We have just shown that if $\det A = 0$, then $A$ is not invertible. Is the converse true? If $\det A \not= 0$, then can we conclude that $A$ is invertible? The answer is yes. We are now going to prove it in an abstract and clean way. We will later prove this fact again by constructing an explicit formula for the inverse, which involves dividing by the determinant. So if the determinant is non-zero, then we know an inverse exists.
+
+\begin{thm}
+ Let $A \in \Mat_n(\F)$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $A$ is invertible.
+ \item $\det A \not= 0$.
+ \item $r(A) = n$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ We have proved that (i) $\Rightarrow$ (ii) above, and the rank-nullity theorem implies (iii) $\Rightarrow$ (i). We will prove (ii) $\Rightarrow$ (iii). In fact we will show the contrapositive. Suppose $r(A) < n$. By the rank-nullity theorem, $n(A) > 0$. So there is some non-zero $\mathbf{x} = \begin{pmatrix}\lambda_1\\\vdots\\\lambda_n\end{pmatrix}$ such that $A\mathbf{x} = \mathbf{0}$. Pick $k$ such that $\lambda_k \not= 0$. We define $B$ as follows:
+ \[
+ B =
+ \begin{pmatrix}
+ 1 & & & \lambda_1\\
+ & \ddots & & \vdots\\
+ & & 1 & \lambda_{k - 1}\\
+ & & & \lambda_k\\
+ & & & \lambda_{k + 1} & 1\\
+ & & & \vdots & & \ddots\\
+ & & & \lambda_n & & & 1
+ \end{pmatrix}
+ \]
+ So $AB$ has the $k$th column identically zero. So $\det(AB) = 0$. So it is sufficient to prove that $\det (B) \not= 0$. But $\det B = \lambda_k \not= 0$. So done.
+\end{proof}
+
+We are now going to come up with an alternative formula for the determinant (which is probably the one you are familiar with). To do so, we introduce the following notation:
+\begin{notation}
+ Write $\hat{A}_{ij}$ for the matrix obtained from $A$ by deleting the $i$th row and $j$th column.
+\end{notation}
+
+\begin{lemma}
+ Let $A \in \Mat_n(\F)$. Then
+ \begin{enumerate}
+ \item We can expand $\det A$ along the $j$th column by
+ \[
+ \det A = \sum_{i = 1}^n (-1)^{i + j} A_{ij} \det \hat{A}_{ij}.
+ \]
+ \item We can expand $\det A$ along the $i$th row by
+ \[
+ \det A = \sum_{j = 1}^n (-1)^{i + j} A_{ij} \det \hat{A}_{ij}.
+ \]
+ \end{enumerate}
+\end{lemma}
+We could prove this directly from the definition, but that is messy and scary, so let's use volume forms instead.
+
+\begin{proof}
+ Since $\det A = \det A^T$, (i) and (ii) are equivalent. So it suffices to prove just one of them. We have
+ \[
+ \det A = d(A^{(1)}, \cdots, A^{(n)}),
+ \]
+ where $d$ is the volume form induced by the determinant. Then we can write this as
+ \begin{align*}
+ \det A &= d\left(A^{(1)}, \cdots, \sum_{i = 1}^n A_{ij} \mathbf{e}_i, \cdots, A^{(n)}\right)\\
+ &= \sum_{i = 1}^n A_{ij} d(A^{(1)}, \cdots, \mathbf{e}_i, \cdots, A^{(n)})
+ \end{align*}
+ The volume form on the right is the determinant of a matrix with the $j$th column replaced with $\mathbf{e}_i$. We can move our columns around so that our matrix becomes
+ \[
+ B =
+ \begin{pmatrix}
+ \hat{A}_{ij} & 0\\
+ \mathrm{stuff} & 1
+ \end{pmatrix}
+ \]
+ We get that $\det B = \det \hat{A}_{ij}$, since the only permutations that contribute are those that send $n$ to $n$. In the row and column swapping, we have made $n - j$ column transpositions and $n - i$ row transpositions. So we have
+ \begin{align*}
+ \det A &= \sum_{i = 1}^n A_{ij} (-1)^{n - j} (-1)^{n - i}\det B\\
+ &= \sum_{i = 1}^n A_{ij} (-1)^{i + j} \det \hat{A}_{ij}.\qedhere
+ \end{align*}
+\end{proof}
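This expansion gives the usual recursive algorithm for computing determinants. A Python sketch (0-indexed, expanding along the first row; the names are ours):

```python
def minor(A, i, j):
    """The matrix A-hat_{ij}: delete row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    # 0-indexed, so the sign (-1)^(i + j) becomes (-1)^j for the first row
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

assert det([[1, 2], [3, 4]]) == -2
```

Like the permutation sum, this takes time growing like $n!$, so it is only sensible for small matrices; in practice one uses Gaussian elimination instead.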
+This is not only useful for computing determinants, but also computing inverses.
+
+\begin{defi}[Adjugate matrix]
+ Let $A \in \Mat_n(\F)$. The \emph{adjugate matrix} of $A$, written $\adj A$, is the $n\times n$ matrix such that $(\adj A)_{ij} = (-1)^{i + j} \det\hat{A}_{ji}$.
+\end{defi}
+
+The relevance is the following result:
+\begin{thm}
+ If $A \in \Mat_n(\F)$, then $A(\adj A) = (\det A) I_n = (\adj A)A$. In particular, if $\det A \not= 0$, then
+ \[
+ A^{-1} = \frac{1}{\det A}\adj A.
+ \]
+\end{thm}
+Note that this is \emph{not} an efficient way to compute the inverse.
+
+\begin{proof}
+ We compute
+ \[
+ [(\adj A)A]_{jk} = \sum_{i = 1}^n (\adj A)_{ji} A_{ik} = \sum_{i = 1}^n (-1)^{i + j} \det\hat{A}_{ij} A_{ik}.\tag{$*$}
+ \]
+ So if $j = k$, then $[(\adj A)A]_{jk} = \det A$ by the lemma.
+
+ Otherwise, if $j \not= k$, consider the matrix $B$ obtained from $A$ by replacing the $j$th column by the $k$th column. Then the right hand side of $(*)$ is just $\det B$ by the lemma. But we know that if two columns are the same, the determinant is zero. So the right hand side of $(*)$ is zero. So
+ \[
+ [(\adj A)A]_{jk} = \det A \delta_{jk}.
+ \]
+ The calculation for $A(\adj A) = (\det A) I_n$ can be done in a similar manner, or by considering $(A\adj A)^T = (\adj A)^T A^T = (\adj (A^T)) A^T = (\det A) I_n$.
+\end{proof}
+Note that the coefficients of $(\adj A)$ are just given by polynomials in the entries of $A$, and so is the determinant. So if $A$ is invertible, then its inverse is given by a rational function (i.e.\ ratio of two polynomials) in the entries of $A$.
+
+This is very useful theoretically, but not computationally, since the polynomials are very large. There are better ways computationally, such as Gaussian elimination.
+
+We'll end with a useful trick for computing determinants.
+\begin{lemma}
+ Let $A, B$ be square matrices. Then for any $C$, we have
+ \begin{align*}
+ \det
+ \begin{pmatrix}
+ A & C\\
+ 0 & B
+ \end{pmatrix}
+ = (\det A) (\det B).
+ \end{align*}
+\end{lemma}
+
+\begin{proof}
+ Suppose $A\in \Mat_k(\F)$, and $B\in \Mat_{\ell}(\F)$, so $C \in \Mat_{k, \ell}(\F)$. Let
+ \[
+ X =
+ \begin{pmatrix}
+ A & C\\
+ 0 & B
+ \end{pmatrix}.
+ \]
+ Then by definition, we have
+ \[
+ \det X = \sum_{\sigma \in S_{k + \ell}}\varepsilon(\sigma) \prod_{i = 1}^{k + \ell} X_{i\sigma(i)}.
+ \]
+ If $j \leq k$ and $i > k$, then $X_{ij} = 0$. We only want to sum over permutations $\sigma$ such that $\sigma(i) > k$ if $i > k$. So we are permuting the last $\ell$ things among themselves, and hence the first $k$ things among themselves. So we can decompose this into $\sigma = \sigma_1 \sigma_2$, where $\sigma_1$ is a permutation of $\{1, \cdots, k\}$ and fixes the remaining things, while $\sigma_2$ fixes $\{1, \cdots, k\}$, and permutes the remaining. Then
+ \begin{align*}
+ \det X &= \sum_{\sigma = \sigma_1\sigma_2}\varepsilon(\sigma_1\sigma_2) \prod_{i = 1}^k X_{i\sigma_1(i)} \prod_{j = 1}^\ell X_{k + j\; \sigma_2(k + j)}\\
+ &= \left(\sum_{\sigma_1 \in S_k} \varepsilon(\sigma_1) \prod_{i = 1}^k A_{i\sigma_1(i)}\right)\left(\sum_{\sigma_2 \in S_\ell} \varepsilon(\sigma_2) \prod_{j = 1}^\ell B_{j\sigma_2(j)}\right)\\
+ &= (\det A)(\det B).\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{cor}
+ \[
+ \det
+ \begin{pmatrix}
+ A_1 & & & \mathrm{stuff}\\
+ & A_2\\
+ & & \ddots\\
+ 0 & & & A_n
+ \end{pmatrix} = \prod_{i = 1}^n \det A_i
+ \]
+\end{cor}
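A small numerical check of the block triangular formula, reusing the permutation-sum definition (a self-contained sketch with our own names):

```python
from itertools import permutations
from math import prod

def det(A):
    """Permutation-sum determinant, as in the definition."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += (-1 if inv % 2 else 1) * prod(A[i][p[i]] for i in range(n))
    return total

# X = [[A, C], [0, B]] with A a 2x2 block, B a 1x1 block, C arbitrary
X = [[1, 2, 9],
     [3, 4, 8],
     [0, 0, 5]]
assert det(X) == det([[1, 2], [3, 4]]) * det([[5]])
```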
+
+\section{Endomorphisms}
+Endomorphisms are linear maps from a vector space $V$ to itself. One might wonder --- why would we want to study these linear maps in particular, when we can just work with arbitrary linear maps from any space to any other space?
+
+When we work with arbitrary linear maps, we are free to choose any basis for the domain, and any basis for the co-domain, since it doesn't make sense to require they have the ``same'' basis. Then we proved that by choosing the right bases, we can put matrices into a nice form with only $1$'s in the diagonal.
+
+However, when working with endomorphisms, we can require ourselves to use the same basis for the domain and co-domain, and there is much more we can say. One major objective is to classify all matrices up to similarity, where two matrices are similar if they represent the same endomorphism under different bases.
+
+\subsection{Invariants}
+\begin{defi}[Endomorphism]
+ Let $V$ be a (finite-dimensional) vector space over $\F$. An \emph{endomorphism} of $V$ is a linear map $\alpha: V \to V$. We write $\End(V)$ for the $\F$-vector space of all such linear maps, and $I$ for the identity map $V \to V$.
+\end{defi}
+When we think about matrices representing an endomorphism of $V$, we'll use the same basis for the domain and the range. We are going to study some properties of these endomorphisms that are not dependent on the basis we pick, known as \emph{invariants}.
+
+\begin{lemma}
+ Suppose $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_n)$ are bases for $V$ and $\alpha \in \End(V)$. If $A$ represents $\alpha$ with respect to $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $B$ represents $\alpha$ with respect to $(\mathbf{f}_1,\cdots, \mathbf{f}_n)$, then
+ \[
+ B = P^{-1}AP,
+ \]
+ where $P$ is given by
+ \[
+ \mathbf{f}_i = \sum_{j = 1}^n P_{ji}\mathbf{e}_j.
+ \]
+\end{lemma}
+
+\begin{proof}
+ This is merely a special case of an earlier more general result for arbitrary maps and spaces.
+\end{proof}
+
+\begin{defi}[Similar matrices]
+ We say matrices $A$ and $B$ are \emph{similar} or \emph{conjugate} if there is some $P$ invertible such that $B = P^{-1}AP$.
+\end{defi}
+
+Recall that $\GL_n(\F)$ is the group of invertible $n\times n$ matrices. $\GL_n(\F)$ acts on $\Mat_n(\F)$ by conjugation:
+\[
+ (P, A) \mapsto PAP^{-1}.
+\]
+We are conjugating it this way so that the associativity axiom holds (otherwise we get a \emph{right} action instead of a \emph{left} action). Then $A$ and $B$ are similar iff they are in the same orbit. Since orbits always partition the set, this is an equivalence relation.
+
+Our main goal is to classify the orbits, i.e.\ find a ``nice'' representative for each orbit.
+
+Our initial strategy is to identify basis-independent invariants for endomorphisms. For example, we will show that the rank, trace, determinant and characteristic polynomial are all such invariants.
+
+Recall that the trace of a matrix $A \in \Mat_n(\F)$ is the sum of the diagonal elements:
+\begin{defi}[Trace]
+ The \emph{trace} of a matrix $A \in \Mat_n(\F)$ is defined by
+ \[
+ \tr A = \sum_{i = 1}^n A_{ii}.
+ \]
+\end{defi}
+
+We want to show that the trace is an invariant. In fact, we will show a stronger statement (as well as the corresponding statement for determinants):
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $A \in \Mat_{m, n}(\F)$ and $B\in \Mat_{n, m}(\F)$, then
+ \[
+ \tr AB = \tr BA.
+ \]
+ \item If $A, B \in \Mat_n(\F)$ are similar, then $\tr A = \tr B$.
+ \item If $A, B \in \Mat_n(\F)$ are similar, then $\det A = \det B$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We have
+ \[
+ \tr AB = \sum_{i = 1}^m (AB)_{ii} = \sum_{i = 1}^m \sum_{j = 1}^n A_{ij}B_{ji} = \sum_{j = 1}^n \sum_{i = 1}^m B_{ji}A_{ij} = \tr BA.
+ \]
+ \item Suppose $B = P^{-1}AP$. Then we have
+ \[
+ \tr B = \tr (P^{-1}(AP)) = \tr ((AP)P^{-1}) = \tr A.
+ \]
+ \item We have
+ \[
+ \det (P^{-1}AP) = \det P^{-1} \det A \det P = (\det P)^{-1} \det A \det P = \det A.\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
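These invariance properties are easy to check numerically. A sketch with hand-picked $2\times 2$ matrices (helper names ours; the inverse of $P$ is computed by hand):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(A):
    """Sum of the diagonal entries."""
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
assert tr(matmul(A, B)) == tr(matmul(B, A))       # tr AB = tr BA

P = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]                          # P^{-1}, computed by hand
similar = matmul(Pinv, matmul(A, P))              # P^{-1} A P
assert tr(similar) == tr(A)                       # trace is a similarity invariant
```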
+This allows us to define the trace and determinant of an \emph{endomorphism}.
+\begin{defi}[Trace and determinant of endomorphism]
+ Let $\alpha \in \End(V)$, and $A$ be a matrix representing $\alpha$ under any basis. Then the \emph{trace} of $\alpha$ is $\tr \alpha = \tr A$, and the \emph{determinant} is $\det \alpha = \det A$.
+\end{defi}
+The lemma tells us that the determinant and trace are well-defined. We can also define the determinant without reference to a basis, by defining more general volume forms and define the determinant as a scaling factor.
+
+The trace is slightly more tricky to define without basis, but in IB Analysis II example sheet 4, you will find that it is the directional derivative of the determinant at the origin.
+
+To talk about the characteristic polynomial, we need to know what eigenvalues are.
+\begin{defi}[Eigenvalue and eigenvector]
+ Let $\alpha \in \End(V)$. Then $\lambda \in \F$ is an \emph{eigenvalue} (or E-value) if there is some $\mathbf{v} \in V\setminus \{0\}$ such that $\alpha \mathbf{v} = \lambda \mathbf{v}$.
+
+ $\mathbf{v}$ is an \emph{eigenvector} if $\alpha(\mathbf{v}) = \lambda \mathbf{v}$ for some $\lambda \in \F$.
+
+ When $\lambda \in \F$, the \emph{$\lambda$-eigenspace}, written $E_\alpha(\lambda)$ or $E(\lambda)$ is the subspace of $V$ containing all the $\lambda$-eigenvectors, i.e.
+ \[
+ E_\alpha(\lambda) = \ker (\lambda \iota - \alpha),
+ \]
+ where $\iota$ is the identity function.
+\end{defi}
+\begin{defi}[Characteristic polynomial]
+ The \emph{characteristic polynomial} of $\alpha$ is defined by
+ \[
+ \chi_\alpha(t) = \det (t\iota - \alpha).
+ \]
+\end{defi}
+You might be used to the definition $\chi_\alpha(t) = \det(\alpha - t \iota)$ instead. These two definitions are obviously equivalent up to a factor of $-1$, but this definition has an advantage that $\chi_\alpha(t)$ is always monic, i.e.\ the leading coefficient is $1$. However, when doing computations in reality, we often use $\det (\alpha - t\iota)$ instead, since it is easier to negate $t \iota$ than $\alpha$.
+
+We know that $\lambda$ is an eigenvalue of $\alpha$ iff $n(\alpha - \lambda \iota) > 0$ iff $r(\alpha - \lambda \iota) < \dim V$ iff $\chi_\alpha(\lambda) = \det(\lambda \iota - \alpha) = 0$. So the eigenvalues are precisely the roots of the characteristic polynomial.
+
+If $A \in \Mat_n(\F)$, we can define $\chi_A(t) = \det (tI - A)$.
+\begin{lemma}
+ If $A$ and $B$ are similar, then they have the same characteristic polynomial.
+\end{lemma}
+
+\begin{proof}
+ \[
+ \det (tI - P^{-1}AP) = \det(P^{-1}(tI - A)P) = \det(tI - A).\qedhere
+ \]
+\end{proof}
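Since each entry of $tI - A$ is a polynomial of degree at most $1$, the permutation-sum definition of the determinant computes $\chi_A$ directly if we multiply out polynomials. A Python sketch (coefficient lists, lowest degree first; all names ours); for $2\times 2$ matrices it recovers $\chi_A(t) = t^2 - (\tr A)t + \det A$:

```python
from itertools import permutations

def padd(f, g):
    """Add two polynomials given as coefficient lists (lowest degree first)."""
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
            for i in range(n)]

def pmul(f, g):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def charpoly(A):
    """chi_A(t) = det(tI - A), via the permutation sum with polynomial entries."""
    n = len(A)
    # entry (i, j) of tI - A: the constant -A_ij off-diagonal, t - A_ii on it
    M = [[[-A[i][j]] if i != j else [-A[i][j], 1] for j in range(n)]
         for i in range(n)]
    total = [0]
    for p in permutations(range(n)):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = [1]
        for i in range(n):
            term = pmul(term, M[i][p[i]])
        total = padd(total, [(-1 if inv % 2 else 1) * c for c in term])
    return total

A = [[1, 2], [3, 4]]
assert charpoly(A) == [1 * 4 - 2 * 3, -(1 + 4), 1]   # [det A, -tr A, 1]
```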
+
+\begin{lemma}
+ Let $\alpha \in \End(V)$ and $\lambda_1, \cdots, \lambda_k$ distinct eigenvalues of $\alpha$. Then
+ \[
+ E(\lambda_1) + \cdots + E(\lambda_k) = \bigoplus_{i = 1}^k E(\lambda_i)
+ \]
+ is a direct sum.
+\end{lemma}
+
+\begin{proof}
+ Suppose
+ \[
+ \sum_{i = 1}^k \mathbf{x}_i = \sum_{i = 1}^k \mathbf{y}_i,
+ \]
+ with $\mathbf{x}_i, \mathbf{y}_i \in E(\lambda_i)$. We want to show that they are equal. We are going to find some clever map that tells us what $\mathbf{x}_i$ and $\mathbf{y}_i$ are. Consider $\beta_j \in \End(V)$ defined by
+ \[
+ \beta_j = \prod_{r \not= j} (\alpha - \lambda_r \iota).
+ \]
+ Then
+ \begin{align*}
+ \beta_j\left(\sum_{i = 1}^k \mathbf{x}_i\right) &= \sum_{i = 1}^k \prod_{r \not= j}(\alpha - \lambda_{r}\iota)(\mathbf{x}_i)\\
+ &= \sum_{i = 1}^k \prod_{r\not= j} (\lambda_i - \lambda_r)(\mathbf{x}_i).
+ \intertext{Each summand is zero unless $i = j$. So this is equal to}
+ \beta_j\left(\sum_{i = 1}^k \mathbf{x}_i\right) &= \prod_{r \not= j}(\lambda_j - \lambda_r) (\mathbf{x}_j).
+ \end{align*}
+ Similarly, we obtain
+ \[
+ \beta_j\left(\sum_{i = 1}^k \mathbf{y}_i\right) = \prod_{r \not= j}(\lambda_j - \lambda_r) (\mathbf{y}_j).
+ \]
+ Since we know that $\sum \mathbf{x}_i = \sum \mathbf{y}_i$, we must have
+ \[
+ \prod_{r \not= j}(\lambda_j - \lambda_r) \mathbf{x}_j = \prod_{r \not= j} (\lambda_j- \lambda_r)\mathbf{y}_j.
+ \]
+ Since we know that $\prod_{r \not= j} (\lambda_j - \lambda_r) \not= 0$, we must have $\mathbf{x}_j = \mathbf{y}_j$ for all $j$.
+
+ So the expression of each vector as a sum $\sum \mathbf{x}_i$ of eigenvectors is unique.
+\end{proof}
+The proof shows that any set of non-zero eigenvectors with distinct eigenvalues is linearly independent.
+
+\begin{defi}[Diagonalizable]
+ We say $\alpha \in \End(V)$ is diagonalizable if there is some basis for $V$ such that $\alpha$ is represented by a diagonal matrix, i.e.\ all terms not on the diagonal are zero.
+\end{defi}
+These are in some sense the nice matrices we like to work with.
+
+\begin{thm}
+ Let $\alpha \in \End(V)$ and $\lambda_1, \cdots, \lambda_k$ be distinct eigenvalues of $\alpha$. Write $E_i$ for $E(\lambda_i)$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\alpha$ is diagonalizable.
+ \item $V$ has a basis of eigenvectors for $\alpha$.
+ \item $V = \bigoplus_{i = 1}^k E_i$.
+ \item $\dim V = \sum_{i = 1}^k \dim E_i$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Leftrightarrow$ (ii): Suppose $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ is a basis for $V$. Then
+ \[
+ \alpha(\mathbf{e}_i) = \sum_{j = 1}^n A_{ji} \mathbf{e}_j,
+ \]
+ where $A$ represents $\alpha$. Then $A$ is diagonal iff each $\mathbf{e}_i$ is an eigenvector. So done.
+ \item (ii) $\Leftrightarrow$ (iii): It is clear that (ii) is true iff $\sum E_i = V$, but we know that this must be a direct sum. So done.
+ \item (iii) $\Leftrightarrow$ (iv): This follows from example sheet 1 Q10, which says that $V = \bigoplus_{i = 1}^k E_i$ iff the bases for $E_i$ are disjoint and their union is a basis of $V$.\qedhere
+ \end{itemize}
+\end{proof}
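A concrete instance of the theorem: the matrix below has eigenvectors $(1, 0)$ and $(1, 1)$ with distinct eigenvalues $2$ and $3$, which form a basis, so conjugating by the matrix $P$ whose columns are these eigenvectors diagonalizes it (a sketch; $P^{-1}$ is computed by hand):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1],
     [0, 3]]          # eigenvectors (1, 0) and (1, 1), eigenvalues 2 and 3
P = [[1, 1],          # columns of P are the eigenvectors
     [0, 1]]
Pinv = [[1, -1],      # inverse of P, computed by hand
        [0, 1]]
D = matmul(Pinv, matmul(A, P))
assert D == [[2, 0], [0, 3]]   # diagonal, with the eigenvalues on the diagonal
```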
+\subsection{The minimal polynomial}
+\subsubsection{Aside on polynomials}
+Before we talk about minimal polynomials, we first talk about polynomials in general.
+
+\begin{defi}[Polynomial]
+ A \emph{polynomial} over $\F$ is an object of the form
+ \[
+ f(t) = a_m t^m + a_{m - 1}t^{m - 1} + \cdots + a_1 t + a_0,
+ \]
+ with $m \geq 0, a_0, \cdots, a_m \in \F$.
+
+ We write $\F[t]$ for the set of polynomials over $\F$.
+\end{defi}
+Note that we don't identify a polynomial $f$ with the corresponding function it represents. For example, if $\F = \Z/p\Z$, then $t^p$ and $t$ are different polynomials, even though they define the same function (by Fermat's little theorem/Lagrange's theorem). Two polynomials are equal if and only if they have the same coefficients.
+
+However, we will later see that if $\F$ is $\R$ or $\C$, then polynomials are equal if and only if they represent the same function, and this distinction is not as important.
+
+\begin{defi}[Degree]
+ Let $f \in \F[t]$. Then the \emph{degree} of $f$, written $\deg f$ is the largest $n$ such that $a_n \not= 0$. In particular, $\deg 0 = -\infty$.
+\end{defi}
+
+Notice that $\deg (fg) = \deg f + \deg g$ and $\deg (f + g) \leq \max\{\deg f, \deg g\}$.
+
+\begin{lemma}[Polynomial division]
+ If $f, g \in \F[t]$ (and $g \not= 0$), then there exist $q, r \in \F[t]$ with $\deg r < \deg g$ such that
+ \[
+ f = qg + r.
+ \]
+\end{lemma}
+Proof is omitted.
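The omitted proof is the usual long-division algorithm, which is easy to carry out in code. A sketch over a field, using exact rational arithmetic and coefficient lists with lowest degree first (all names ours):

```python
from fractions import Fraction

def polydiv(f, g):
    """Divide f by g, returning (q, r) with f = q*g + r and deg r < deg g.
    Polynomials are coefficient lists, lowest degree first; g must be non-zero,
    and coefficients should support exact division (e.g. Fractions)."""
    f = f[:]                       # working copy; will be reduced to the remainder
    n, m = len(f) - 1, len(g) - 1
    q = [0] * max(n - m + 1, 1)
    for k in range(n - m, -1, -1):
        q[k] = f[m + k] / g[m]     # leading coefficient of the current remainder
        for i in range(m + 1):
            f[i + k] -= q[k] * g[i]
    return q, f[:m]

# f(t) = t^3 + 2t + 1 divided by g(t) = t^2 + 1 gives q = t, r = t + 1
f = [Fraction(c) for c in [1, 2, 0, 1]]
g = [Fraction(c) for c in [1, 0, 1]]
assert polydiv(f, g) == ([0, 1], [1, 1])
```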
+
+\begin{lemma}
+ If $\lambda \in \F$ is a root of $f$, i.e.\ $f(\lambda) = 0$, then there is some $g$ such that
+ \[
+ f(t) = (t - \lambda) g(t).
+ \]
+\end{lemma}
+
+\begin{proof}
+ By polynomial division, let
+ \[
+ f(t) = (t - \lambda)g(t) + r(t)
+ \]
+ for some $g(t), r(t) \in \F[t]$ with $\deg r < \deg (t - \lambda) = 1$. So $r$ has to be constant, i.e.\ $r(t) = a_0$ for some $a_0 \in \F$. Now evaluate this at $\lambda$. So
+ \[
+ 0 = f(\lambda) = (\lambda - \lambda)g(\lambda) + r(\lambda) = a_0.
+ \]
+ So $a_0 = 0$. So $r = 0$. So done.
+\end{proof}
+
+\begin{defi}[Multiplicity of a root]
+ Let $f \in \F[t]$ and $\lambda$ a root of $f$. We say $\lambda$ has \emph{multiplicity} $k$ if $(t - \lambda)^k$ is a factor of $f$ but $(t - \lambda)^{k + 1}$ is not, i.e.
+ \[
+ f(t) = (t - \lambda)^k g(t)
+ \]
+ for some $g(t) \in \F[t]$ with $g(\lambda) \not= 0$.
+\end{defi}
+
+We can use the last lemma and induction to show that any non-zero $f \in \F[t]$ can be written as
+\[
+ f = g(t) \prod_{i = 1}^k (t - \lambda_i)^{a_i},
+\]
+where $\lambda_1, \cdots, \lambda_k$ are all distinct, $a_i \geq 1$, and $g$ is a polynomial with no roots in $\F$.
+
+Hence we obtain the following:
+\begin{lemma}
+ A non-zero polynomial $f \in \F[t]$ has at most $\deg f$ roots, counted with multiplicity.
+\end{lemma}
+
+\begin{cor}
+ Let $f, g \in \F[t]$ have degree less than $n$. If there are $n$ distinct values $\lambda_1, \cdots, \lambda_n$ such that $f(\lambda_i) = g(\lambda_i)$ for all $i$, then $f = g$.
+\end{cor}
+
+\begin{proof}
+ The polynomial $f - g$ has degree less than $n$, but vanishes at the $n$ distinct points $\lambda_1, \cdots, \lambda_n$. By the previous lemma, $f - g = 0$. So $f = g$.
+\end{proof}
+In particular, if $\F$ is infinite (e.g.\ $\R$ or $\C$), then two polynomials are equal if and only if they represent the same function.
+
+The fundamental theorem of algebra states that every non-constant polynomial over $\C$ has a root in $\C$; we will not prove it here. It follows that every polynomial over $\C$ of degree $n > 0$ has precisely $n$ roots, counted with multiplicity, since if we write $f(t) = g(t)\prod (t - \lambda_i)^{a_i}$ and $g$ has no roots, then $g$ is constant. So the number of roots is $\sum a_i = \deg f$, counted with multiplicity.
+
+It also follows that every polynomial over $\R$ factors into linear polynomials and quadratic polynomials with no real roots (since complex roots of real polynomials come in complex conjugate pairs).
+
+\subsubsection{Minimal polynomial}
+
+\begin{notation}
+ Given $f(t) = \sum_{i = 0}^m a_i t^i \in \F[t]$, $A \in \Mat_n(\F)$ and $\alpha \in \End(V)$, we can write
+ \[
+ f(A) = \sum_{i = 0}^m a_i A^i,\quad f(\alpha) = \sum_{i = 0}^m a_i \alpha^i
+ \]
+ where $A^0 = I$ and $\alpha^0 = \iota$.
+\end{notation}
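Evaluating a polynomial at a matrix is a direct translation of this notation. A sketch (all names ours); the final assertion previews the next theorem: $p(t) = (t - 2)(t - 3)$, a product of distinct linear factors, annihilates a matrix whose eigenvalues are $2$ and $3$.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_poly(coeffs, A):
    """f(A) = sum_i a_i A^i with A^0 = I; coeffs lowest degree first."""
    n = len(A)
    result = [[0] * n for _ in range(n)]
    power = [[int(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    for a in coeffs:
        result = [[result[i][j] + a * power[i][j] for j in range(n)]
                  for i in range(n)]
        power = matmul(power, A)
    return result

# p(t) = t^2 - 5t + 6 = (t - 2)(t - 3) kills A, whose eigenvalues are 2 and 3
A = [[2, 1],
     [0, 3]]
assert mat_poly([6, -5, 1], A) == [[0, 0], [0, 0]]
```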
+
+\begin{thm}[Diagonalizability theorem]
+ Suppose $\alpha \in \End(V)$. Then $\alpha$ is diagonalizable if and only if there exists non-zero $p(t) \in \F[t]$ such that $p(\alpha) = 0$, and $p(t)$ can be factored as a product of \emph{distinct} linear factors.
+\end{thm}
+
+\begin{proof}
+ Suppose $\alpha$ is diagonalizable. Let $\lambda_1, \cdots, \lambda_k$ be the distinct eigenvalues of $\alpha$. We have
+ \[
+ V = \bigoplus_{i = 1}^k E(\lambda_i).
+ \]
+ So each $\mathbf{v} \in V$ can be written (uniquely) as
+ \[
+ \mathbf{v} = \sum_{i = 1}^k \mathbf{v}_i \text{ with }\alpha(\mathbf{v}_i) = \lambda_i \mathbf{v}_i.
+ \]
+ Now let
+ \[
+ p(t) = \prod_{i = 1}^k (t - \lambda_i).
+ \]
+ Then for any $\mathbf{v}$, we get
+ \[
+ p(\alpha) (\mathbf{v}) = \sum_{i = 1}^k p(\alpha) (\mathbf{v}_i) = \sum_{i = 1}^k p(\lambda_i) \mathbf{v}_i = \mathbf{0}.
+ \]
+ So $p(\alpha) = 0$. By construction, $p$ has distinct linear factors.
+
+ Conversely, suppose we have our polynomial
+ \[
+ p(t) = \prod_{i = 1}^k (t - \lambda_i),
+ \]
+ with $\lambda_1, \cdots, \lambda_k \in \F$ distinct, and $p(\alpha) = 0$ (we can wlog assume $p$ is monic, i.e.\ the leading coefficient is $1$). We will show that
+ \[
+ V = \sum_{i = 1}^k E_\alpha(\lambda_i).
+ \]
+ In other words, we want to show that for all $\mathbf{v} \in V$, there is some $\mathbf{v}_i \in E_\alpha(\lambda_i)$ for $i = 1, \cdots, k$ such that $\mathbf{v} = \sum \mathbf{v}_i$.
+
 To find these $\mathbf{v}_i$, we let
+ \[
+ q_j(t) = \prod_{i \not= j} \frac{t - \lambda_i}{\lambda_j - \lambda_i}.
+ \]
+ This is a polynomial of degree $k - 1$, and $q_j(\lambda_i) = \delta_{ij}$.
+
+ Now consider
+ \[
+ q(t) = \sum_{i = 1}^k q_i(t).
+ \]
 We still have $\deg q \leq k - 1$, but $q(\lambda_i) = 1$ for each $i$. So $q - 1$ is a polynomial of degree at most $k - 1$ with at least $k$ distinct roots, and hence $q = 1$.
+
+ Let $\pi_j: V\to V$ be given by $\pi_j = q_j(\alpha)$. Then the above says that
+ \[
+ \sum_{j = 1}^k \pi_j = \iota.
+ \]
+ Hence given $\mathbf{v} \in V$, we know that $\mathbf{v} = \sum \pi_j \mathbf{v}$.
+
+ We now check that $\pi_j \mathbf{v} \in E_\alpha (\lambda_j)$. This is true since
+ \[
 (\alpha - \lambda_j\iota) \pi_j \mathbf{v} =\frac{1}{\prod_{i \not= j}(\lambda_j - \lambda_i)} \prod_{i = 1}^k (\alpha - \lambda_i \iota) (\mathbf{v}) = \frac{1}{\prod_{i \not= j}(\lambda_j - \lambda_i)} p(\alpha) (\mathbf{v}) = \mathbf{0}.
+ \]
 So, writing $\mathbf{v}_j = \pi_j \mathbf{v}$, we have
 \[
 \alpha \mathbf{v}_j = \lambda_j \mathbf{v}_j.
 \]
 So done.
+\end{proof}
In the above proof, if $\mathbf{v} \in E_\alpha(\lambda_i)$, then $\pi_j(\mathbf{v}) = \delta_{ij}\mathbf{v}$. So $\pi_i$ is a projection onto $E_\alpha(\lambda_i)$.
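The $q_j$ above are just the Lagrange basis polynomials, and the key identities $q_j(\lambda_i) = \delta_{ij}$ and $\sum_j q_j = 1$ are easy to check; here is a sympy sketch with arbitrarily chosen distinct values standing in for the eigenvalues:

```python
# Sketch of the Lagrange polynomials q_j used in the proof (sympy;
# the distinct values below are chosen arbitrarily).
from functools import reduce
import operator
from sympy import symbols, simplify, Rational

t = symbols('t')
lams = [1, 2, 4]                       # stand-ins for distinct eigenvalues

def q(j):
    factors = [(t - lams[i]) / Rational(lams[j] - lams[i])
               for i in range(len(lams)) if i != j]
    return reduce(operator.mul, factors, 1)

# q_j(lambda_i) = delta_ij:
print([[q(j).subs(t, lams[i]) for j in range(3)] for i in range(3)])
# sum_j q_j = 1, since both sides have degree <= k - 1 and agree at k points:
print(simplify(sum(q(j) for j in range(3))))   # 1
```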
+
+\begin{defi}[Minimal polynomial]
+ The \emph{minimal polynomial} of $\alpha \in \End(V)$ is the non-zero monic polynomial $M_\alpha(t)$ of least degree such that $M_\alpha(\alpha) = 0$.
+\end{defi}
+The monic requirement is just for things to look nice, since we can always divide by the leading coefficient of a polynomial to get a monic version.
+
+Note that if $A$ represents $\alpha$, then for all $p \in \F[t]$, $p(A)$ represents $p(\alpha)$. Thus $p(\alpha)$ is zero iff $p(A) = 0$. So the minimal polynomial of $\alpha$ is the minimal polynomial of $A$ if we define $M_A$ analogously.
+
+There are two things we want to know --- whether the minimal polynomial exists, and whether it is unique.
+
+Existence is always guaranteed in finite-dimensional cases. If $\dim V = n < \infty$, then $\dim \End(V) = n^2$. So $\iota, \alpha, \alpha^2, \cdots, \alpha^{n^2}$ are linearly dependent. So there are some $\lambda_0, \cdots, \lambda_{n^2} \in \F$ not all zero such that
+\[
+ \sum_{i = 0}^{n^2} \lambda_i \alpha^i = 0.
+\]
+So $\deg M_\alpha \leq n^2$. So we must have a minimal polynomial.
+
+To show that the minimal polynomial is unique, we will prove the following stronger characterization of the minimal polynomial:
+\begin{lemma}
+ Let $\alpha \in \End(V)$, and $p \in \F[t]$. Then $p(\alpha) = 0$ if and only if $M_\alpha(t)$ is a factor of $p(t)$. In particular, $M_\alpha$ is unique.
+\end{lemma}
+
+\begin{proof}
+ For all such $p$, we can write $p(t) = q(t) M_\alpha(t) + r(t)$ for some $r$ of degree less than $\deg M_\alpha$. Then
+ \[
+ p(\alpha) = q(\alpha) M_\alpha(\alpha) + r(\alpha).
+ \]
 So $p(\alpha) = 0$ if and only if $r(\alpha) = 0$. But $\deg r < \deg M_\alpha$, so by the minimality of $M_\alpha$, we have $r(\alpha) = 0$ if and only if $r = 0$. Hence $p(\alpha) = 0$ iff $M_\alpha(t) \mid p(t)$.
+
+ So if $M_1$ and $M_2$ are both minimal polynomials for $\alpha$, then $M_1 \mid M_2$ and $M_2 \mid M_1$. So $M_2$ is just a scalar multiple of $M_1$. But since $M_1$ and $M_2$ are monic, they must be equal.
+\end{proof}
+
+\begin{eg}
+ Let $V = \F^2$, and consider the following matrices:
+ \[
+ A =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 1
+ \end{pmatrix},\quad
+ B =
+ \begin{pmatrix}
+ 1 & 1\\
+ 0 & 1
+ \end{pmatrix}.
+ \]
+ Consider the polynomial $p(t) = (t - 1)^2$. We can compute $p(A) = p(B) = 0$. So $M_A(t)$ and $M_B(t)$ are factors of $(t - 1)^2$. There aren't many factors of $(t - 1)^2$. So the minimal polynomials are either $(t - 1)$ or $(t - 1)^2$. Since $A - I = 0$ and $B - I \not= 0$, the minimal polynomial of $A$ is $t - 1$ and the minimal polynomial of $B$ is $(t - 1)^2$.
+\end{eg}
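We can check this example mechanically (a sympy sketch):

```python
# Checking the example: p(t) = (t - 1)^2 kills both A and B, while
# (t - 1) kills A but not B (sympy sketch).
from sympy import Matrix, eye, zeros

A = eye(2)
B = Matrix([[1, 1], [0, 1]])
I2 = eye(2)

assert (A - I2)**2 == zeros(2) and (B - I2)**2 == zeros(2)  # p(A) = p(B) = 0
assert A - I2 == zeros(2)      # so M_A(t) = t - 1
assert B - I2 != zeros(2)      # so M_B(t) = (t - 1)^2
```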
+
+We can now re-state our diagonalizability theorem.
+\begin{thm}[Diagonalizability theorem 2.0]
 Let $\alpha \in \End(V)$. Then $\alpha$ is diagonalizable if and only if $M_\alpha(t)$ is a product of distinct linear factors.
+\end{thm}
+
+\begin{proof}
+ $(\Leftarrow)$ This follows directly from the previous diagonalizability theorem.
+
+ $(\Rightarrow)$ Suppose $\alpha$ is diagonalizable. Then there is some $p\in \F[t]$ non-zero such that $p(\alpha) = 0$ and $p$ is a product of distinct linear factors. Since $M_\alpha$ divides $p$, $M_\alpha$ also has distinct linear factors.
+\end{proof}
+
+\begin{thm}
+ Let $\alpha, \beta \in \End(V)$ be both diagonalizable. Then $\alpha$ and $\beta$ are simultaneously diagonalizable (i.e.\ there exists a basis with respect to which both are diagonal) if and only if $\alpha\beta = \beta\alpha$.
+\end{thm}
+This is important in quantum mechanics. This means that if two operators do not commute, then they do not have a common eigenbasis. Hence we have the uncertainty principle.
+
+\begin{proof}
+ $(\Rightarrow)$ If there exists a basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ for $V$ such that $\alpha$ and $\beta$ are represented by $A$ and $B$ respectively, with both diagonal, then by direct computation, $AB = BA$. But $AB$ represents $\alpha\beta$ and $BA$ represents $\beta\alpha$. So $\alpha\beta = \beta\alpha$.
+
+ $(\Leftarrow)$ Suppose $\alpha\beta = \beta\alpha$. The idea is to consider each eigenspace of $\alpha$ individually, and then diagonalize $\beta$ in each of the eigenspaces. Since $\alpha$ is diagonalizable, we can write
+ \[
+ V = \bigoplus_{i = 1}^k E_\alpha(\lambda_i),
+ \]
 where the $\lambda_i$ are the distinct eigenvalues of $\alpha$. We write $E_i$ for $E_\alpha (\lambda_i)$. We want to show that $\beta$ sends $E_i$ to itself, i.e.\ $\beta(E_i) \subseteq E_i$. Let $\mathbf{v} \in E_i$. Then we want $\beta(\mathbf{v})$ to be in $E_i$. This is true since
+ \[
+ \alpha(\beta(\mathbf{v})) = \beta(\alpha(\mathbf{v})) = \beta(\lambda_i \mathbf{v}) = \lambda_i \beta(\mathbf{v}).
+ \]
+ So $\beta(\mathbf{v})$ is an eigenvector of $\alpha$ with eigenvalue $\lambda_i$.
+
+ Now we can view $\beta|_{E_i} \in \End(E_i)$. Note that
+ \[
+ M_\beta(\beta|_{E_i}) = M_\beta(\beta)|_{E_i} = 0.
+ \]
 Since $M_\beta(t)$ is a product of distinct linear factors, it follows that $\beta|_{E_i}$ is diagonalizable. So we can choose a basis $B_i$ of eigenvectors for $\beta|_{E_i}$. We can do this for \emph{all} $i$.
+
+ Then since $V$ is a direct sum of the $E_i$'s, we know that $B = \bigcup_{i = 1}^k B_i$ is a basis for $V$ consisting of eigenvectors for both $\alpha$ and $\beta$. So done.
+\end{proof}
+
+\subsection{The Cayley-Hamilton theorem}
+We will first state the theorem, and then prove it later.
+
+Recall that $\chi_\alpha(t) = \det (t\iota - \alpha)$ for $\alpha \in \End(V)$. Our main theorem of the section (as you might have guessed from the title) is
+\begin{thm}[Cayley-Hamilton theorem]
+ Let $V$ be a finite-dimensional vector space and $\alpha \in \End(V)$. Then $\chi_\alpha(\alpha) = 0$, i.e.\ $M_\alpha(t) \mid \chi_\alpha(t)$. In particular, $\deg M_\alpha \leq n$.
+\end{thm}
+We will not prove this yet, but just talk about it first. It is tempting to prove this by substituting $t = \alpha$ into $\det(t\iota - \alpha)$ and get $\det (\alpha - \alpha) = 0$, but this is meaningless, since what the statement $\chi_\alpha(t) = \det (t\iota - \alpha)$ tells us to do is to expand the determinant of the matrix
+\[
+ \begin{pmatrix}
+ t - a_{11} & a_{12} & \cdots & a_{1n}\\
+ a_{21} & t - a_{22} & \cdots & a_{2n}\\
+ \vdots & \vdots & \ddots & \vdots\\
+ a_{n1} & a_{n2} & \cdots & t - a_{nn}
+ \end{pmatrix}
+\]
to obtain a polynomial, and we clearly cannot substitute $t = A$ in this expression. However, as we will see later, this idea can be made to work with a bit more care.
+
+Note also that if $\rho(t) \in \F[t]$ and
+\[
+ A =
+ \begin{pmatrix}
+ \lambda_1 &\\
+ & \ddots\\
+ & & \lambda_n
+ \end{pmatrix},
+\]
+then
+\[
+ \rho(A) =
+ \begin{pmatrix}
+ \rho(\lambda_1) &\\
+ & \ddots\\
+ & & \rho(\lambda_n)
+ \end{pmatrix}.
+\]
For such $A$, we have $\chi_A(t) = \prod_{i = 1}^n (t - \lambda_i)$, and it follows that $\chi_A(A) = 0$. So if $\alpha$ is diagonalizable, then the theorem is clear.
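This computation is easy to verify numerically; here is a sympy sketch with an arbitrarily chosen diagonal matrix and polynomial:

```python
# Sketch: for diagonal A, rho(A) is diagonal with entries rho(lambda_i),
# and chi_A(A) = prod (A - lambda_i I) = 0 (sympy; entries arbitrary).
from sympy import diag, eye, zeros

A = diag(2, 3, 5)

rho = lambda X: X**2 - X                    # an arbitrary polynomial
assert rho(A) == diag(2, 6, 20)             # rho(2), rho(3), rho(5)

chiA = (A - 2*eye(3)) * (A - 3*eye(3)) * (A - 5*eye(3))
assert chiA == zeros(3)                     # chi_A(A) = 0
```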
+
+This was easy. Diagonalizable matrices are nice. The next best thing we can look at is upper-triangular matrices.
+\begin{defi}[Triangulable]
+ An endomorphism $\alpha \in \End(V)$ is \emph{triangulable} if there is a basis for $V$ such that $\alpha$ is represented by an upper triangular matrix
+ \[
+ \begin{pmatrix}
+ a_{11} & a_{12} & \cdots & a_{1n}\\
+ 0 & a_{22} & \cdots & a_{2n}\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & a_{nn}
+ \end{pmatrix}.
+ \]
+\end{defi}
+
+We have a similar lemma telling us when matrices are triangulable.
+
+\begin{lemma}
+ An endomorphism $\alpha$ is triangulable if and only if $\chi_\alpha(t)$ can be written as a product of linear factors, not necessarily distinct. In particular, if $\F = \C$ (or any algebraically closed field), then every endomorphism is triangulable.
+\end{lemma}
+
+\begin{proof}
+ Suppose that $\alpha$ is triangulable and represented by
+ \[
+ \begin{pmatrix}
+ \lambda_1 & * & \cdots & *\\
+ 0 & \lambda_2 & \cdots & *\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & \lambda_n
+ \end{pmatrix}.
+ \]
+ Then
+ \[
+ \chi_\alpha(t) = \det
+ \begin{pmatrix}
+ t - \lambda_1 & * & \cdots & *\\
+ 0 & t - \lambda_2 & \cdots & *\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & t - \lambda_n
+ \end{pmatrix} =
+ \prod_{i = 1}^n (t - \lambda_i).
+ \]
+ So it is a product of linear factors.
+
+ We are going to prove the converse by induction on the dimension of our space. The base case $\dim V = 1$ is trivial, since every $1\times 1$ matrix is already upper triangular.
+
+ Suppose $\alpha \in \End(V)$ and the result holds for all spaces of dimensions $< \dim V$, and $\chi_\alpha$ is a product of linear factors. In particular, $\chi_\alpha(t)$ has a root, say $\lambda \in \F$.
+
+ Now let $U = E(\lambda) \not= 0$, and let $W$ be a complementary subspace to $U$ in $V$, i.e.\ $V = U \oplus W$. Let $\mathbf{u}_1, \cdots, \mathbf{u}_r$ be a basis for $U$ and $\mathbf{w}_{r + 1}, \cdots, \mathbf{w}_n$ be a basis for $W$ so that $\mathbf{u}_1, \cdots, \mathbf{u}_{r}, \mathbf{w}_{r + 1}, \cdots, \mathbf{w}_n$ is a basis for $V$, and $\alpha$ is represented by
+ \[
+ \begin{pmatrix}
+ \lambda I_r & \text{stuff}\\
+ 0 & B
+ \end{pmatrix}
+ \]
+ We know $\chi_\alpha(t) = (t - \lambda)^r \chi_B(t)$. So $\chi_B(t)$ is also a product of linear factors. We let $\beta: W\to W$ be the map defined by $B$ with respect to $\mathbf{w}_{r + 1}, \cdots, \mathbf{w}_n$.
+
 (Note that $\beta$ is in general not $\alpha|_W$, since $\alpha$ does not necessarily map $W$ to $W$. However, we can say that $(\alpha - \beta) (\mathbf{w}) \in U$ for all $\mathbf{w}\in W$. This can be much more elegantly expressed in terms of quotient spaces, but unfortunately that is not officially part of the course.)
+
+ Since $\dim W < \dim V$, there is a basis $\mathbf{v}_{r + 1}, \cdots, \mathbf{v}_n$ for $W$ such that $\beta$ is represented by $C$, which is upper triangular.
+
+ For $j = 1, \cdots, n - r$, we have
+ \[
+ \alpha(\mathbf{v}_{j + r}) = \mathbf{u} + \sum_{k = 1}^{n - r} C_{kj} \mathbf{v}_{k + r}
+ \]
+ for some $\mathbf{u} \in U$. So $\alpha$ is represented by
+ \[
+ \begin{pmatrix}
+ \lambda I_r & \text{stuff}\\
+ 0 & C
+ \end{pmatrix}
+ \]
+ with respect to $(\mathbf{u}_1, \cdots, \mathbf{u}_r, \mathbf{v}_{r + 1}, \cdots, \mathbf{v}_n)$, which is upper triangular.
+\end{proof}
+
+\begin{eg}
+ Consider the real rotation matrix
+ \[
+ \begin{pmatrix}
+ \cos \theta & \sin \theta\\
+ -\sin \theta & \cos \theta
+ \end{pmatrix}.
+ \]
+ This is \emph{not} similar to a real upper triangular matrix (if $\theta$ is not an integer multiple of $\pi$). This is since the eigenvalues are $e^{\pm i\theta}$ and are not real. On the other hand, as a complex matrix, it is triangulable, and in fact diagonalizable since the eigenvalues are distinct.
+\end{eg}
+For this reason, in the rest of the section, we are mostly going to work in $\C$. We can now prove the Cayley-Hamilton theorem.
+
+\begin{thm}[Cayley-Hamilton theorem]
+ Let $V$ be a finite-dimensional vector space and $\alpha \in \End(V)$. Then $\chi_\alpha(\alpha) = 0$, i.e.\ $M_\alpha(t) \mid \chi_\alpha(t)$. In particular, $\deg M_\alpha \leq n$.
+\end{thm}
+
+\begin{proof}
 In this proof, we will work over $\C$. By the lemma, we can choose a basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ for $V$ such that $\alpha$ is represented by an upper triangular matrix
+ \[
+ A =
+ \begin{pmatrix}
+ \lambda_1 & * & \cdots & *\\
+ 0 & \lambda_2 & \cdots & *\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & \lambda_n
+ \end{pmatrix}.
+ \]
+ We must prove that
+ \[
+ \chi_\alpha(\alpha) = \chi_A(\alpha) = \prod_{i = 1}^n (\alpha - \lambda_i \iota) = 0.
+ \]
+ Write $V_j = \bra \mathbf{e}_1, \cdots, \mathbf{e}_j\ket$. So we have the inclusions
+ \[
+ V_0 = 0 \subseteq V_1 \subseteq \cdots \subseteq V_{n - 1} \subseteq V_n = V.
+ \]
+ We also know that $\dim V_j = j$. This increasing sequence is known as a \emph{flag}.
+
+ Now note that since $A$ is upper-triangular, we get
+ \[
+ \alpha(\mathbf{e}_i) = \sum_{k = 1}^i A_{ki} \mathbf{e}_k \in V_i.
+ \]
+ So $\alpha (V_j)\subseteq V_j$ for all $j = 0, \cdots, n$.
+
+ Moreover, we have
+ \[
 (\alpha - \lambda_j\iota)(\mathbf{e}_j) = \sum_{k = 1}^{j - 1} A_{kj}\mathbf{e}_k \in V_{j - 1}
+ \]
+ for all $j = 1, \cdots, n$. So every time we apply one of these things, we get to a smaller space. Hence by induction on $n - j$, we have
+ \[
+ \prod_{i = j}^n (\alpha - \lambda_i \iota) (V_n) \subseteq V_{j - 1}.
+ \]
+ In particular, when $j = 1$, we get
+ \[
+ \prod_{i = 1}^n (\alpha - \lambda_i \iota) (V) \subseteq V_0 = 0.
+ \]
+ So $\chi_\alpha(\alpha) = 0$ as required.
+
 Note that if our field $\F$ is not $\C$ but just a subfield of $\C$, say $\R$, we can regard the matrix as a complex matrix and run exactly the same proof.
+\end{proof}
+We can see this proof more ``visually'' as follows: for simplicity of expression, we suppose $n = 4$. In the basis where $\alpha$ is upper-triangular, the matrices $A - \lambda_i I$ look like this
+\begin{align*}
+ A - \lambda_1 I &=
+ \begin{pmatrix}
+ 0 & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}&
+ A - \lambda_2 I &=
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}\\
+ A - \lambda_3 I &=
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & 0 & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}&
+ A - \lambda_4 I &=
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix}
+\end{align*}
+Then we just multiply out directly (from the right):
+\begin{align*}
+ \prod_{i = 1}^4 (A - \lambda_i I) &=
+ \begin{pmatrix}
+ 0 & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & 0 & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix}\\
+ &=
+ \begin{pmatrix}
+ 0 & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix}\\
+ &=
+ \begin{pmatrix}
+ 0 & * & * & *\\
+ 0 & * & * & *\\
+ 0 & 0 & * & *\\
+ 0 & 0 & 0 & *
+ \end{pmatrix}
+ \begin{pmatrix}
+ * & * & * & *\\
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix}\\
+ &=
+ \begin{pmatrix}
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix}.
+\end{align*}
+This is exactly what we showed in the proof --- after multiplying out the first $k$ elements of the product (counting from the right), the image is contained in the span of the first $n - k$ basis vectors.
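The collapsing product can also be checked numerically; below is a sympy sketch with an arbitrarily chosen $4\times 4$ upper triangular matrix:

```python
# Numerical check of the picture above (sympy; the upper triangular
# matrix and hence its diagonal entries are chosen arbitrarily).
from sympy import Matrix, eye, zeros

A = Matrix([[1, 7, 2, 5],
            [0, 2, 3, 1],
            [0, 0, 3, 4],
            [0, 0, 0, 4]])

P = eye(4)
for lam in [1, 2, 3, 4]:      # the diagonal entries of A
    P = P * (A - lam * eye(4))
assert P == zeros(4)          # the product of all four factors vanishes
```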
+\begin{proof}
 We'll now prove the theorem again; this proof is essentially a formalization of the ``nonsense proof'' where we just substitute $t = \alpha$ into $\det (t\iota - \alpha)$.
+
+ Let $\alpha$ be represented by $A$, and $B = tI - A$. Then
+ \[
+ B\adj B = \det B I_n = \chi_\alpha(t) I_n.
+ \]
+ But we know that $\adj B$ is a matrix with entries in $\F[t]$ of degree at most $n - 1$. So we can write
+ \[
+ \adj B = B_{n - 1}t^{n - 1} + B_{n - 2}t^{n - 2} + \cdots + B_0,
+ \]
+ with $B_i \in \Mat_n (\F)$. We can also write
+ \[
+ \chi_\alpha(t) = t^n + a_{n - 1}t^{n - 1} + \cdots + a_0.
+ \]
+ Then we get the result
+ \[
+ (tI_n - A) (B_{n - 1}t^{n - 1} + B_{n - 2} t^{n - 2} + \cdots + B_0) = (t^n + a_{n - 1} t^{n - 1} + \cdots + a_0)I_n.
+ \]
 We would like to just throw in $t = A$ and get the desired result, but in all these derivations, $t$ is a scalar, and $tI_n - A$ is the matrix
+ \[
+ \begin{pmatrix}
+ t - a_{11} & a_{12} & \cdots & a_{1n}\\
+ a_{21} & t - a_{22} & \cdots & a_{2n}\\
+ \vdots & \vdots & \ddots & \vdots\\
+ a_{n1} & a_{n2} & \cdots & t - a_{nn}
+ \end{pmatrix}
+ \]
+ It doesn't make sense to put our $A$ in there.
+
 However, what we \emph{can} do is to note that since this is true for all values of $t$, the coefficients on both sides must be equal. Equating coefficients of $t^k$, we have
+ \begin{align*}
+ -A B_0 &= a_0I_n\\
+ B_0 - AB_1 &= a_1I_n\\
+ &\vdots\\
+ B_{n - 2} - AB_{n - 1} &= a_{n - 1}I_n\\
 B_{n - 1} &= I_n
+ \end{align*}
 We now multiply each row by a suitable power of $A$ to obtain
+ \begin{align*}
+ -A B_0 &= a_0 I_n\\
+ A B_0 - A^2B_1 &= a_1 A\\
+ &\;\;\vdots\\
+ A^{n - 1}B_{n - 2} - A^n B_{n - 1} &= a_{n - 1} A^{n - 1}\\
 A^{n}B_{n - 1} &= A^n.
+ \end{align*}
+ Summing this up then gives $\chi_\alpha(A) = 0$.
+\end{proof}
+This proof suggests that we \emph{really} ought to be able to just substitute in $t = \alpha$ and be done. In fact, we can do this, after we develop sufficient machinery. This will be done in the IB Groups, Rings and Modules course.
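We can mimic the coefficient bookkeeping in code: take the coefficients of $\chi_A$ and evaluate the polynomial at the matrix $A$ itself by Horner's rule. This is a sympy sketch with an arbitrarily chosen matrix:

```python
# Evaluating chi_A at the matrix A via Horner's rule (sympy sketch;
# matrix chosen arbitrarily). Cayley-Hamilton says the result is 0.
from sympy import Matrix, symbols, eye, zeros

t = symbols('t')
A = Matrix([[1, 2], [3, 4]])

coeffs = A.charpoly(t).all_coeffs()    # [1, -5, -2] for this A
result = zeros(2)
for c in coeffs:
    result = result * A + c * eye(2)   # Horner step
assert result == zeros(2)              # chi_A(A) = 0
```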
+
+\begin{lemma}
+ Let $\alpha \in \End(V), \lambda \in \F$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\lambda$ is an eigenvalue of $\alpha$.
+ \item $\lambda$ is a root of $\chi_\alpha(t)$.
+ \item $\lambda$ is a root of $M_\alpha(t)$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
 \item (i) $\Leftrightarrow$ (ii): $\lambda$ is an eigenvalue of $\alpha$ if and only if $(\alpha - \lambda \iota)(\mathbf{v}) = \mathbf{0}$ has a non-trivial solution, iff $\det(\alpha - \lambda \iota) = 0$, i.e.\ $\chi_\alpha(\lambda) = 0$.
+ \item (iii) $\Rightarrow$ (ii): This follows from Cayley-Hamilton theorem since $M_\alpha \mid \chi_\alpha$.
+ \item (i) $\Rightarrow$ (iii): Let $\lambda$ be an eigenvalue, and $\mathbf{v}$ be a corresponding eigenvector. Then by definition of $M_\alpha$, we have
+ \[
+ M_\alpha(\alpha)(\mathbf{v}) = 0(\mathbf{v}) = 0.
+ \]
+ We also know that
+ \[
+ M_\alpha(\alpha)(\mathbf{v}) = M_\alpha(\lambda)\mathbf{v}.
+ \]
+ Since $\mathbf{v}$ is non-zero, we must have $M_\alpha(\lambda) =0$.
+
+ \item (iii) $\Rightarrow$ (i): This is not necessary since it follows from the above, but we could as well do it explicitly. Suppose $\lambda$ is a root of $M_\alpha(t)$. Then $M_\alpha(t) = (t - \lambda) g(t)$ for some $g \in \F[t]$. But $\deg g < \deg M_\alpha$. Hence by minimality of $M_\alpha$, we must have $g(\alpha) \not= 0$. So there is some $\mathbf{v}\in V$ such that $g(\alpha)(\mathbf{v}) \not=0$. Then
+ \[
+ (\alpha - \lambda\iota)g(\alpha)(\mathbf{v}) = M_\alpha(\alpha)\mathbf{v} = 0.
+ \]
+ So we must have $\alpha (g(\alpha)(\mathbf{v})) = \lambda g(\alpha)(\mathbf{v})$. So $g(\alpha)(\mathbf{v}) \in E_\alpha(\lambda) \setminus \{0\}$. So (i) holds.\qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{eg}
+ What is the minimal polynomial of
+ \[
+ A =
+ \begin{pmatrix}
+ 1 & 0 & -2\\
+ 0 & 1 & 1\\
+ 0 & 0 & 2
+ \end{pmatrix}?
+ \]
+ We can compute $\chi_A(t) = (t - 1)^2 (t - 2)$. So we know that the minimal polynomial is one of $(t - 1)^2(t - 2)$ and $(t - 1)(t - 2)$.
+
+ By direct and boring computations, we can find $(A - I)(A - 2I) = 0$. So we know that $M_A(t) = (t - 1)(t - 2)$. So $A$ is diagonalizable.
+\end{eg}
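The ``direct and boring computations'' can be delegated to a computer; here is a sympy sketch checking the example:

```python
# Checking the example above (sympy sketch).
from sympy import Matrix, eye, zeros

A = Matrix([[1, 0, -2],
            [0, 1,  1],
            [0, 0,  2]])
I3 = eye(3)

assert (A - I3)**2 * (A - 2*I3) == zeros(3)  # chi_A(A) = 0
assert (A - I3) * (A - 2*I3) == zeros(3)     # so M_A(t) = (t - 1)(t - 2)
assert A - I3 != zeros(3) and A - 2*I3 != zeros(3)  # no smaller factor works
```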
+\subsection{Multiplicities of eigenvalues and Jordan normal form}
+We will want to put our matrices in their ``Jordan normal forms'', which is a unique form for each equivalence class of similar matrices. The following properties will help determine which Jordan normal form a matrix can have.
\begin{defi}[Algebraic and geometric multiplicity]
+ Let $\alpha \in \End (V)$ and $\lambda$ an eigenvalue of $\alpha$. Then
+ \begin{enumerate}
+ \item The \emph{algebraic multiplicity} of $\lambda$, written $a_\lambda$, is the multiplicity of $\lambda$ as a root of $\chi_\alpha(t)$.
+ \item The \emph{geometric multiplicity} of $\lambda$, written $g_\lambda$, is the dimension of the corresponding eigenspace, $\dim E_\alpha(\lambda)$.
 \item $c_\lambda$ is the multiplicity of $\lambda$ as a root of the minimal polynomial $M_\alpha(t)$.
+ \end{enumerate}
+\end{defi}
+
+We will look at some extreme examples:
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Let
+ \[
+ A =
+ \begin{pmatrix}
+ \lambda & 1 & \cdots & 0\\
+ 0 & \lambda & \ddots & \vdots\\
+ \vdots & \vdots & \ddots & 1\\
+ 0 & 0 & \cdots & \lambda
+ \end{pmatrix}.
+ \]
+ We will later show that $a_\lambda = n = c_\lambda$ and $g_\lambda = 1$.
+ \item Consider $A = \lambda I$. Then $a_\lambda = g_\lambda = n$, $c_\lambda = 1$.
+ \end{itemize}
+\end{eg}
+
+\begin{lemma}
+ If $\lambda$ is an eigenvalue of $\alpha$, then
+ \begin{enumerate}
+ \item $1 \leq g_\lambda \leq a_\lambda$
+ \item $1 \leq c_\lambda \leq a_\lambda$.
+ \end{enumerate}
+\end{lemma}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item The first inequality is easy. If $\lambda$ is an eigenvalue, then $E(\lambda) \not= 0$. So $g_\lambda = \dim E(\lambda) \geq 1$. To prove the other inequality, if $\mathbf{v}_1, \cdots, \mathbf{v}_g$ is a basis for $E(\lambda)$, then we can extend it to a basis for $V$, and then $\alpha$ is represented by
+ \[
+ \begin{pmatrix}
+ \lambda I_g & *\\
+ 0 & B
+ \end{pmatrix}
+ \]
 So $\chi_\alpha(t) = (t - \lambda)^g \chi_B(t)$. So $a_\lambda \geq g = g_\lambda$.
 \item This is straightforward since $M_\alpha(\lambda) = 0$ implies $1 \leq c_\lambda$, and since $M_\alpha(t) \mid \chi_\alpha(t)$, we know that $c_\lambda \leq a_\lambda$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{lemma}
+ Suppose $\F = \C$ and $\alpha \in \End(V)$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\alpha$ is diagonalizable.
+ \item $g_\lambda = a_\lambda$ for all eigenvalues of $\alpha$.
+ \item $c_\lambda = 1$ for all $\lambda$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Leftrightarrow$ (ii): $\alpha$ is diagonalizable iff $\dim V = \sum \dim E_\alpha (\lambda_i)$. But this is equivalent to
+ \[
+ \dim V = \sum g_{\lambda_i} \leq \sum a_{\lambda_i} = \deg \chi_\alpha = \dim V.
+ \]
+ So we must have $\sum g_{\lambda_i} = \sum a_{\lambda_i}$. Since each $g_{\lambda_i}$ is at most $a_{\lambda_i}$, they must be individually equal.
+
+ \item (i) $\Leftrightarrow$ (iii): $\alpha$ is diagonalizable if and only if $M_\alpha(t)$ is a product of distinct linear factors if and only if $c_\lambda = 1$ for all eigenvalues $\lambda$.\qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{defi}[Jordan normal form]
 We say $A \in \Mat_n(\C)$ is in \emph{Jordan normal form} if it is a block diagonal matrix of the form
+ \[
+ \begin{pmatrix}
+ J_{n_1}(\lambda_1) & & & 0\\
+ & J_{n_2}(\lambda_2)\\
 & & \ddots\\
+ 0 & & & J_{n_k} (\lambda_k)
+ \end{pmatrix}
+ \]
+ where $k \geq 1$, $n_1, \cdots, n_k \in \N$ such that $n = \sum n_i$, $\lambda_1, \cdots, \lambda_k$ not necessarily distinct, and
+ \[
+ J_m (\lambda) =
+ \begin{pmatrix}
+ \lambda & 1 & \cdots & 0\\
+ 0 & \lambda & \ddots & \vdots\\
+ \vdots & \vdots & \ddots & 1\\
+ 0 & 0 & \cdots & \lambda
+ \end{pmatrix}
+ \]
+ is an $m \times m$ matrix. Note that $J_m(\lambda) = \lambda I_m + J_m(0)$.
+\end{defi}
+For example, it might look something like
+\[\arraycolsep=1.4pt
+ \begin{pmatrix}
+ \lambda_1 & 1\\
+ & \lambda_1 & 1\\
+ & & \lambda_1 & 0\\
+ & & & \lambda_2 & 0\\
+ & & & & \lambda_3 & 1\\
+ & & & & & \lambda_3 & 0\\
+ & & & & & & \ddots & \ddots\\
+ & & & & & & & \lambda_n & 1\\
+ & & & & & & & & \lambda_n
+ \end{pmatrix}
+\]
+with all other entries zero.
+
+Then we have the following theorem:
+\begin{thm}[Jordan normal form theorem]
+ Every matrix $A \in \Mat_n(\C)$ is similar to a matrix in Jordan normal form. Moreover, this Jordan normal form matrix is unique up to permutation of the blocks.
+\end{thm}
+This is a complete solution to the classification problem of matrices, at least in $\C$. We will not prove this completely. We will only prove the uniqueness part, and then reduce the existence part to a special form of endomorphisms. The remainder of the proof is left for IB Groups, Rings and Modules.
+
+We can rephrase this result using linear maps. If $\alpha \in \End(V)$ is an endomorphism of a finite-dimensional vector space $V$ over $\C$, then the theorem says there exists a basis such that $\alpha$ is represented by a matrix in Jordan normal form, and this is unique as before.
+
+Note that the permutation thing is necessary, since if two matrices in Jordan normal form differ only by a rearrangement of blocks, then they are similar, by permuting the basis.
+
+\begin{eg}
+ Every $2\times 2$ matrix in Jordan normal form is one of the three types:
+ \begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ Jordan normal form & $\chi_A$ & $M_A$\\
+ \midrule
+ $\begin{pmatrix} \lambda & 0\\ 0 & \lambda \end{pmatrix}$ & $(t - \lambda)^2$ & $(t - \lambda)$\\\addlinespace
+ $\begin{pmatrix} \lambda & 0\\ 0 & \mu \end{pmatrix}$ & $(t - \lambda)(t - \mu)$ & $(t - \lambda)(t - \mu)$\\\addlinespace
+ $\begin{pmatrix} \lambda & 1\\ 0 & \lambda \end{pmatrix}$ & $(t - \lambda)^2$ & $(t - \lambda)^2$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ with $\lambda, \mu \in \C$ distinct. We see that $M_A$ determines the Jordan normal form of $A$, but $\chi_A$ does not.
+
+ Every $3\times 3$ matrix in Jordan normal form is one of the six types. Here $\lambda_1, \lambda_2$ and $\lambda_3$ are distinct complex numbers.
+ \begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
+ Jordan normal form & $\chi_A$ & $M_A$\\
+ \midrule
+ $\begin{pmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3 \end{pmatrix}$ & $(t - \lambda_1)(t - \lambda_2)(t- \lambda_3)$ & $(t - \lambda_1)(t - \lambda_2)(t- \lambda_3)$\\\addlinespace
+ $\begin{pmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_1 & 0\\ 0 & 0 & \lambda_2 \end{pmatrix}$ & $(t - \lambda_1)^2 (t - \lambda_2)$ & $(t - \lambda_1) (t - \lambda_2)$\\\addlinespace
+ $\begin{pmatrix} \lambda_1 & 1 & 0\\ 0 & \lambda_1 & 0\\ 0 & 0 & \lambda_2 \end{pmatrix}$ & $(t - \lambda_1)^2 (t - \lambda_2)$ & $(t - \lambda_1)^2 (t - \lambda_2)$\\\addlinespace
+ $\begin{pmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_1 & 0\\ 0 & 0 & \lambda_1 \end{pmatrix}$ & $(t - \lambda_1)^3$ & $(t - \lambda_1)$\\\addlinespace
+ $\begin{pmatrix} \lambda_1 & 1 & 0\\ 0 & \lambda_1 & 0\\ 0 & 0 & \lambda_1 \end{pmatrix}$ & $(t - \lambda_1)^3$ & $(t - \lambda_1)^2$\\\addlinespace
+ $\begin{pmatrix} \lambda_1 & 1 & 0\\ 0 & \lambda_1 & 1\\ 0 & 0 & \lambda_1 \end{pmatrix}$ & $(t - \lambda_1)^3$ & $(t - \lambda_1)^3$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Notice that $\chi_A$ and $M_A$ together determine the Jordan normal form of a $3\times 3$ complex matrix. We do indeed need $\chi_A$ in the second case, since if we are given $M_A = (t - \lambda_1)(t - \lambda_2)$, we know one of the roots is double, but not which one.
+\end{eg}
+
In general, though, even $\chi_A$ and $M_A$ together do not suffice. For example, the block diagonal matrices with blocks $J_2(\lambda), J_2(\lambda)$ and with blocks $J_2(\lambda), J_1(\lambda), J_1(\lambda)$ (in the notation above) have the same characteristic and minimal polynomials, but are not similar.
+
+We now want to understand the Jordan normal blocks better. Recall the definition
+\[
+ J_n(\lambda) =
+ \begin{pmatrix}
+ \lambda & 1 & \cdots & 0\\
+ 0 & \lambda & \ddots & \vdots\\
+ \vdots & \vdots & \ddots & 1\\
+ 0 & 0 & \cdots & \lambda
+ \end{pmatrix} = \lambda I_n + J_n(0).
+\]
+If $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ is the standard basis for $\C^n$, we have
+\[
+ J_n(0)(\mathbf{e}_1) = 0, \quad J_n(0) (\mathbf{e}_i) = \mathbf{e}_{i - 1}\text{ for } 2\leq i \leq n.
+\]
+Thus we know
+\[
+ J_n(0)^k(\mathbf{e}_i) =
+ \begin{cases}
+ 0 & i \leq k\\
+ \mathbf{e}_{i - k} & k < i \leq n
+ \end{cases}
+\]
+In other words, for $k < n$, we have
+\[
+ (J_n(\lambda) - \lambda I)^k = J_n(0)^k =
+ \begin{pmatrix}
+ 0 & I_{n - k}\\
+ 0 & 0
+ \end{pmatrix}.
+\]
+If $k \geq n$, then we have $(J_n(\lambda) - \lambda I)^k = 0$.
+Hence we can summarize this as
+\[
+ n((J_m(\lambda) - \lambda I_m)^r) = \min\{r, m\}.
+\]
+Note that if $A = J_n(\lambda)$, then $\chi_A(t) = M_A(t) = (t - \lambda)^n$. So $\lambda$ is the only eigenvalue of $A$. So $a_\lambda = c_\lambda = n$. We also know that $n(A - \lambda I) = n - r(A - \lambda I) = 1$. So $g_\lambda = 1$.
+
+Recall that a general Jordan normal form is a block diagonal matrix of Jordan blocks. We have just studied individual Jordan blocks. Next, we want to look at some properties of block diagonal matrices in general. If $A$ is the block diagonal matrix
+\[
+ A =
+ \begin{pmatrix}
+ A_1\\
+ & A_2\\
+ & & \ddots\\
+ & & & A_k
+ \end{pmatrix},
+\]
+then
+\[
 \chi_A(t) = \prod_{i = 1}^k \chi_{A_i}(t).
+\]
+Moreover, if $\rho \in \F[t]$, then
+\[
+ \rho (A) =
+ \begin{pmatrix}
+ \rho(A_1)\\
+ & \rho(A_2)\\
+ & & \ddots\\
+ & & & \rho(A_k)
+ \end{pmatrix}.
+\]
+Hence
+\[
+ M_A(t) = \lcm (M_{A_1}(t), \cdots, M_{A_k} (t)).
+\]
+By the rank-nullity theorem, we have
+\[
+ n(\rho(A)) = \sum_{i = 1}^k n(\rho(A_i)).
+\]
+Thus if $A$ is in Jordan normal form, we get the following:
+\begin{itemize}
+ \item $g_\lambda$ is the number of Jordan blocks in $A$ with eigenvalue $\lambda$.
+ \item $a_\lambda$ is the sum of sizes of the Jordan blocks of $A$ with eigenvalue $\lambda$.
+ \item $c_\lambda$ is the size of the largest Jordan block with eigenvalue $\lambda$.
+\end{itemize}
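These three readings can be confirmed on a small example (sympy sketch; the blocks $J_2(3)$, $J_1(3)$, $J_2(5)$ are chosen arbitrarily):

```python
# Reading g_lambda, a_lambda, c_lambda off a Jordan normal form
# (sympy sketch; blocks J_2(3), J_1(3), J_2(5) chosen arbitrarily).
from sympy import Matrix, diag, eye, symbols, roots

A = diag(Matrix([[3, 1], [0, 3]]), Matrix([[3]]), Matrix([[5, 1], [0, 5]]))
t = symbols('t')

g3 = 5 - (A - 3*eye(5)).rank()
assert g3 == 2                    # two Jordan blocks with eigenvalue 3

a3 = roots(A.charpoly(t).as_expr(), t)[3]
assert a3 == 3                    # total size of the blocks with eigenvalue 3

nullities = [5 - ((A - 3*eye(5))**r).rank() for r in range(1, 4)]
assert nullities == [2, 3, 3]     # stabilizes at r = 2: largest block size 2
```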
+
+We are now going to prove the uniqueness part of Jordan normal form theorem.
+\begin{thm}
+ Let $\alpha \in \End(V)$, and $A$ in Jordan normal form representing $\alpha$. Then the number of Jordan blocks $J_n(\lambda)$ in $A$ with $n \geq r$ is
+ \[
+ n((\alpha - \lambda\iota)^r) - n((\alpha - \lambda\iota)^{r - 1}).
+ \]
+\end{thm}
This is clearly independent of the choice of basis. Also, given this information, we can figure out how many Jordan blocks there are of size exactly $n$ by doing the right subtraction. Hence this tells us that Jordan normal forms are unique up to permutation of blocks.
+
+\begin{proof}
+ We work blockwise for
+ \[
+ A =
+ \begin{pmatrix}
+ J_{n_1}(\lambda_1)\\
+ & J_{n_2}(\lambda_2)\\
 & & \ddots\\
+ & & & J_{n_k} (\lambda_k)
+ \end{pmatrix}.
+ \]
+ We have previously computed
+ \[
+ n((J_m(\lambda) - \lambda I_m)^r) =
+ \begin{cases}
+ r & r \leq m\\
+ m & r > m
+ \end{cases}.
+ \]
+ Hence we know
+ \[
+ n((J_m(\lambda) - \lambda I_m)^r) - n((J_m(\lambda) - \lambda I_m)^{r - 1}) =
+ \begin{cases}
+ 1 & r \leq m\\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ It is also easy to see that for $\mu \not= \lambda$,
+ \[
+ n((J_m(\mu) - \lambda I_m)^r) = n(J_m(\mu - \lambda)^r) = 0.
+ \]
+ Adding up for each block, for $r \geq 1$, we have
+ \[
+ n((\alpha - \lambda \iota)^r) - n((\alpha - \lambda \iota)^{r - 1}) =\text{ number of Jordan blocks }J_n(\lambda)\text{ with } n \geq r.\qedhere
+ \]
+\end{proof}
+We can interpret this result as follows: if $r \leq m$, when we take an additional power of $J_m(\lambda) - \lambda I_m$, we get from $\begin{pmatrix}0 & I_{m - r}\\ 0 & 0\end{pmatrix}$ to $\begin{pmatrix}0 & I_{m - r - 1}\\ 0 & 0\end{pmatrix}$. So we kill off one more column in the matrix, and the nullity increases by one. This happens until $(J_m(\lambda) - \lambda I_m)^r = 0$, after which increasing the power no longer affects the matrix. So when we look at the difference in nullity, we are counting the number of blocks that are affected by the increase in power, which is the number of blocks of size at least $r$.
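This counting is easy to check numerically. The following is a quick sanity check (an aside, not part of the course), using NumPy and a sample block diagonal matrix built from $J_3(2)$, $J_2(2)$ and $J_1(5)$; the nullity differences should count the Jordan blocks with eigenvalue $2$ of size at least $r$.

```python
import numpy as np

def jordan_block(lam, m):
    # m x m Jordan block with eigenvalue lam
    return lam * np.eye(m) + np.diag(np.ones(m - 1), 1)

def block_diag(*blocks):
    # assemble a block diagonal matrix from square blocks
    n = sum(b.shape[0] for b in blocks)
    A = np.zeros((n, n))
    i = 0
    for b in blocks:
        m = b.shape[0]
        A[i:i + m, i:i + m] = b
        i += m
    return A

def nullity(M):
    return M.shape[1] - np.linalg.matrix_rank(M)

# Sample matrix: blocks J_3(2), J_2(2), J_1(5), so for lambda = 2 there
# are 2 blocks of size >= 1, 2 of size >= 2 and 1 of size >= 3.
A = block_diag(jordan_block(2, 3), jordan_block(2, 2), jordan_block(5, 1))
B = A - 2 * np.eye(6)
counts = [nullity(np.linalg.matrix_power(B, r))
          - nullity(np.linalg.matrix_power(B, r - 1))
          for r in (1, 2, 3)]
print(counts)  # [2, 2, 1]
```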
+
+We have now proved uniqueness, but existence is not yet clear. To show this, we will reduce it to the case where there is exactly one eigenvalue. This reduction is easy if the matrix is diagonalizable, because we can decompose the matrix into each eigenspace and then work in the corresponding eigenspace. In general, we need to work with ``generalized eigenspaces''.
+
+\begin{thm}[Generalized eigenspace decomposition]
+ Let $V$ be a finite-dimensional vector space over $\C$, and let $\alpha \in \End(V)$. Suppose that
+ \[
+ M_\alpha(t) = \prod_{i = 1}^k (t - \lambda_i)^{c_i},
+ \]
+ with $\lambda_1, \cdots, \lambda_k \in \C$ distinct. Then
+ \[
+ V = V_1 \oplus \cdots \oplus V_k,
+ \]
+ where $V_i = \ker((\alpha - \lambda_i \iota)^{c_i})$ is the \emph{generalized eigenspace}.
+\end{thm}
+This allows us to decompose $V$ into a block diagonal matrix, and then each block will only have one eigenvalue.
+
+Note that if $c_1 = \cdots = c_k = 1$, then we recover the diagonalizability theorem. Hence, it is not surprising that the proof of this is similar to the diagonalizability theorem. We will again prove this by constructing projection maps to each of the $V_i$.
+
+\begin{proof}
+ Let
+ \[
+ p_j(t) = \prod_{i \not= j} (t - \lambda_i)^{c_i}.
+ \]
+ Then $p_1, \cdots, p_k$ have no common factors, i.e.\ they are coprime. Thus by Euclid's algorithm, there exist $q_1, \cdots, q_k \in \C[t]$ such that
+ \[
+ \sum p_i q_i = 1.
+ \]
+ We now define the endomorphism
+ \[
+ \pi_j = q_j(\alpha) p_j(\alpha)
+ \]
+ for $j = 1, \cdots, k$.
+
+ Then $\sum \pi_j = \iota$. Since $M_\alpha(\alpha) = 0$ and $M_\alpha(t) = (t - \lambda_j)^{c_j} p_j(t)$, we get
+ \[
+ (\alpha - \lambda_j \iota)^{c_j} \pi_j = 0.
+ \]
+ So $\im \pi_j \subseteq V_j$.
+
+ Now suppose $\mathbf{v} \in V$. Then
+ \[
+ \mathbf{v} = \iota (\mathbf{v}) = \sum_{j = 1}^k \pi_j (\mathbf{v}) \in \sum V_j.
+ \]
+ So
+ \[
+ V = \sum V_j.
+ \]
+ To show this is a direct sum, note that $\pi_i \pi_j = 0$, since the product contains $M_\alpha(\alpha)$ as a factor. So
+ \[
+ \pi_i = \iota \pi_i = \left(\sum \pi_j\right) \pi_i = \pi_i^2.
+ \]
+ So each $\pi_i$ is a projection, and $\pi_i|_{V_i} = \iota_{V_i}$. So if $\mathbf{v} = \sum \mathbf{v}_i$ with $\mathbf{v}_i \in V_i$, then applying $\pi_i$ to both sides gives $\mathbf{v}_i = \pi_i (\mathbf{v})$. Hence there is a unique way of writing $\mathbf{v}$ as a sum of things in $V_i$. So $V = \bigoplus V_j$ as claimed.
+\end{proof}
+Note that we didn't really use the fact that the vector space is over $\C$, except to get that the minimal polynomial is a product of linear factors. In fact, for arbitrary vector spaces, if the minimal polynomial of a matrix is a product of linear factors, then it can be put into Jordan normal form. The converse is also true --- if it can be put into Jordan normal form, then the minimal polynomial is a product of linear factors, since we've seen that a necessary and sufficient condition for the minimal polynomial to be a product of linear factors is for there to be a basis in which the matrix is upper triangular.
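As a quick numerical illustration (a sketch, not part of the notes), take the sample matrix below, whose minimal polynomial is $(t - 1)^2 (t - 2)$: the dimensions of the generalized eigenspaces $\ker (A - I)^2$ and $\ker (A - 2I)$ add up to $\dim V$, as the theorem predicts.

```python
import numpy as np

A = np.array([[3., -2., 0.],
              [1.,  0., 0.],
              [1.,  0., 1.]])
I = np.eye(3)

def nullity(M):
    return M.shape[1] - np.linalg.matrix_rank(M)

# For this matrix, M_A(t) = (t - 1)^2 (t - 2), so the generalized
# eigenspaces are ker (A - I)^2 and ker (A - 2I).
d1 = nullity(np.linalg.matrix_power(A - I, 2))
d2 = nullity(A - 2 * I)
print(d1, d2)  # 2 1, and 2 + 1 = 3 = dim V
```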
+
+Using this theorem, by restricting $\alpha$ to its generalized eigenspaces, we can reduce the existence part of the Jordan normal form theorem to the case $M_\alpha(t) = (t - \lambda)^c$. Further by replacing $\alpha$ by $\alpha - \lambda \iota$, we can reduce this to the case where $0$ is the only eigenvalue.
+
+\begin{defi}[Nilpotent]
+ We say $\alpha \in \End(V)$ is nilpotent if there is some $r$ such that $\alpha^r = 0$.
+\end{defi}
+Over $\C$, $\alpha$ is nilpotent if and only if the only eigenvalue of $\alpha$ is $0$. This is since $\alpha$ is nilpotent if and only if the minimal polynomial is $t^r$ for some $r$.
+
+We've now reduced the problem of classifying complex endomorphisms to that of classifying nilpotent endomorphisms. This is the point where we stop. For the remainder of the proof, see IB Groups, Rings and Modules. There is in fact an elementary proof of it, and we skip it not because it's hard, but because we don't have time.
+
+\begin{eg}
+ Let
+ \[
+ A =
+ \begin{pmatrix}
+ 3 & -2 & 0\\
+ 1 & 0 & 0\\
+ 1 & 0 & 1
+ \end{pmatrix}
+ \]
+ We know we can find the Jordan normal form by just computing the minimal polynomial and characteristic polynomial. But we can do better and try to find a $P$ such that $P^{-1}AP$ is in Jordan normal form.
+
+ We first compute the eigenvalues of $A$. The characteristic polynomial is
+ \[
+ \det \begin{pmatrix}
+ t - 3 & 2 & 0\\
+ -1 & t & 0\\
+ -1 & 0 & t - 1
+ \end{pmatrix} = (t - 1)((t - 3)t + 2) = (t - 1)^2 (t - 2).
+ \]
+ We now compute the eigenspaces of $A$. We have
+ \[
+ A - I =
+ \begin{pmatrix}
+ 2 & -2 & 0\\
+ 1 & -1 & 0\\
+ 1 & 0 & 0
+ \end{pmatrix}
+ \]
+ We see this has rank $2$ and hence nullity $1$, and the eigenspace is the kernel
+ \[
+ E_A(1) = \left\bra
+ \begin{pmatrix}
+ 0\\0\\1
+ \end{pmatrix}\right\ket
+ \]
+ We can also compute the other eigenspace. We have
+ \[
+ A - 2I =
+ \begin{pmatrix}
+ 1 & -2 & 0\\
+ 1 & -2 & 0\\
+ 1 & 0 & -1
+ \end{pmatrix}
+ \]
+ This has rank $2$ and
+ \[
+ E_A(2) = \left\bra
+ \begin{pmatrix}
+ 2 \\ 1 \\ 2
+ \end{pmatrix}\right\ket.
+ \]
+ Since
+ \[
+ \dim E_A(1) + \dim E_A(2) = 2 < 3,
+ \]
+ this is not diagonalizable. So the minimal polynomial must also be $M_A(t) = \chi_A(t) = (t - 1)^2 (t - 2)$. From the classification last time, we know that $A$ is similar to
+ \[
+ \begin{pmatrix}
+ 1 & 1 & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & 2
+ \end{pmatrix}
+ \]
+ We now want to compute a basis that transforms $A$ to this. We want a basis $(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3)$ of $\C^3$ such that
+ \[
+ A\mathbf{v}_1 = \mathbf{v}_1,\quad A\mathbf{v}_2 = \mathbf{v}_1 + \mathbf{v}_2,\quad A \mathbf{v}_3 = 2\mathbf{v}_3.
+ \]
+ Equivalently, we have
+ \[
+ (A - I)\mathbf{v}_1 = \mathbf{0},\quad (A - I)\mathbf{v}_2 = \mathbf{v}_1, \quad (A - 2I)\mathbf{v}_3 = \mathbf{0}.
+ \]
+ There is an obvious choice of $\mathbf{v}_3$, namely the eigenvector of eigenvalue $2$.
+
+ To find $\mathbf{v}_1$ and $\mathbf{v}_2$, the idea is to find some $\mathbf{v}_2$ such that $(A - I) \mathbf{v}_2 \not= \mathbf{0}$ but $(A - I)^2 \mathbf{v}_2 = \mathbf{0}$. Then we can let $\mathbf{v}_1 = (A - I) \mathbf{v}_2$.
+
+ We can compute the kernel of $(A - I)^2$. We have
+ \[
+ (A - I)^2 =
+ \begin{pmatrix}
+ 2 & -2 & 0\\
+ 1 & -1 & 0\\
+ 2 & -2 & 0
+ \end{pmatrix}
+ \]
+ The kernel of this is
+ \[
+ \ker (A - I)^2 = \left\bra
+ \begin{pmatrix}
+ 0 \\ 0 \\ 1
+ \end{pmatrix},
+ \begin{pmatrix}
+ 1 \\ 1 \\ 0
+ \end{pmatrix}
+ \right\ket.
+ \]
+ We need to pick our $\mathbf{v}_2$ in this kernel but not in the kernel of $A - I$ (which is the eigenspace $E_A(1)$ we have computed above). So we can take
+ \[
+ \mathbf{v}_2 =
+ \begin{pmatrix}
+ 1\\1\\0
+ \end{pmatrix},\quad
+ \mathbf{v}_1 =
+ \begin{pmatrix}
+ 0\\0\\1
+ \end{pmatrix},\quad
+ \mathbf{v}_3 =
+ \begin{pmatrix}
+ 2\\1\\2
+ \end{pmatrix}.
+ \]
+ Hence we have
+ \[
+ P =
+ \begin{pmatrix}
+ 0 & 1 & 2\\
+ 0 & 1 & 1\\
+ 1 & 0 & 2
+ \end{pmatrix}
+ \]
+ and
+ \[
+ P^{-1} AP = \begin{pmatrix}
+ 1 & 1 & 0\\
+ 0 & 1 & 0\\
+ 0 & 0 & 2
+ \end{pmatrix}.
+ \]
+\end{eg}
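We can sanity-check this example numerically (a quick aside, assuming NumPy is available): with $P$ as above, $P^{-1}AP$ should come out as the Jordan normal form we claimed.

```python
import numpy as np

A = np.array([[3., -2., 0.],
              [1.,  0., 0.],
              [1.,  0., 1.]])
P = np.array([[0., 1., 2.],
              [0., 1., 1.],
              [1., 0., 2.]])
# P^{-1} A P, computed without explicitly inverting P
J = np.linalg.solve(P, A @ P)
print(np.round(J))
```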
+
+\section{Bilinear forms II}
+\label{sec:bilin2}
+In Chapter~\ref{sec:bilin1}, we have looked at bilinear forms in general. Here, we want to look at bilinear forms on a single space, since often there is just one space we are interested in. We are also not looking into general bilinear forms on a single space, but just those that are symmetric.
+
+\subsection{Symmetric bilinear forms and quadratic forms}
+\begin{defi}[Symmetric bilinear form]
+ Let $V$ be a vector space over $\F$. A bilinear form $\phi: V \times V \to \F$ is \emph{symmetric} if
+ \[
+ \phi(\mathbf{v}, \mathbf{w}) = \phi(\mathbf{w}, \mathbf{v})
+ \]
+ for all $\mathbf{v}, \mathbf{w} \in V$.
+\end{defi}
+
+\begin{eg}
+ If $S \in \Mat_n(\F)$ is a symmetric matrix, i.e.\ $S^T = S$, the bilinear form $\phi: \F^n \times \F^n \to \F$ defined by
+ \[
+ \phi(\mathbf{x}, \mathbf{y}) = \mathbf{x}^T S\mathbf{y} = \sum_{i, j = 1}^n x_i S_{ij} y_j
+ \]
+ is a symmetric bilinear form.
+\end{eg}
+This example is typical in the following sense:
+
+\begin{lemma}
+ Let $V$ be a finite-dimensional vector space over $\F$, and $\phi: V\times V \to \F$ a bilinear form. Let $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ be a basis for $V$, and let $M$ be the matrix representing $\phi$ with respect to this basis, i.e.\ $M_{ij} = \phi(\mathbf{e}_i, \mathbf{e}_j)$. Then $\phi$ is symmetric if and only if $M$ is symmetric.
+\end{lemma}
+
+\begin{proof}
+ If $\phi$ is symmetric, then
+ \[
+ M_{ij} = \phi(\mathbf{e}_i, \mathbf{e}_j) = \phi(\mathbf{e}_j, \mathbf{e}_i) = M_{ji}.
+ \]
+ So $M^T = M$. So $M$ is symmetric.
+
+ If $M$ is symmetric, then
+ \begin{align*}
+ \phi(\mathbf{x}, \mathbf{y}) &= \phi\left(\sum x_i \mathbf{e}_i, \sum y_j \mathbf{e}_j\right)\\
+ &= \sum_{i, j} x_i M_{ij} y_j\\
+ &= \sum_{i, j} y_j M_{ji} x_i\\
+ &= \phi(\mathbf{y}, \mathbf{x}).\qedhere
+ \end{align*}
+\end{proof}
+
+We are going to see what happens when we change basis. As in the case of endomorphisms, we need to apply the change of basis in the same way to both sides.
+
+\begin{lemma}
+ Let $V$ be a finite-dimensional vector space, and $\phi: V\times V \to \F$ a bilinear form. Let $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\mathbf{f}_1, \cdots, \mathbf{f}_n)$ be bases of $V$ such that
+ \[
+ \mathbf{f}_i = \sum_{k = 1}^n P_{ki} \mathbf{e}_k.
+ \]
+ If $A$ represents $\phi$ with respect to $(\mathbf{e}_i)$ and $B$ represents $\phi$ with respect to $(\mathbf{f}_i)$, then
+ \[
+ B = P^T AP.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Special case of general formula proven before.
+\end{proof}
+
+This motivates the following definition:
+\begin{defi}[Congruent matrices]
+ Two square matrices $A, B$ are \emph{congruent} if there exists some invertible $P$ such that
+ \[
+ B = P^T AP.
+ \]
+\end{defi}
+It is easy to see that congruence is an equivalence relation. Two matrices are congruent if and only if they represent the same bilinear form with respect to different bases.
+
+Thus, to classify (symmetric) bilinear forms is the same as classifying (symmetric) matrices up to congruence.
+
+Before we do the classification, we first look at quadratic forms, which are something derived from bilinear forms.
+
+\begin{defi}[Quadratic form]
+ A \emph{function} $q: V \to \F$ is a \emph{quadratic form} if there exists some bilinear form $\phi$ such that
+ \[
+ q(\mathbf{v}) = \phi(\mathbf{v}, \mathbf{v})
+ \]
+ for all $\mathbf{v} \in V$.
+\end{defi}
+Note that quadratic forms are \emph{not} linear maps (they are quadratic).
+
+\begin{eg}
+ Let $V = \R^2$ and $\phi$ be represented by $A$ with respect to the standard basis. Then
+ \[
+ q\left(
+ \begin{pmatrix}
+ x\\y
+ \end{pmatrix}\right) =
+ \begin{pmatrix}
+ x & y
+ \end{pmatrix}
+ \begin{pmatrix}
+ A_{11} & A_{12}\\
+ A_{21} & A_{22}
+ \end{pmatrix}
+ \begin{pmatrix}
+ x\\y
+ \end{pmatrix}
+ = A_{11} x^2 + (A_{12} + A_{21}) xy + A_{22}y^2.
+ \]
+\end{eg}
+Notice that if $A$ is replaced by the symmetric matrix
+\[
+ \frac{1}{2}(A + A^T),
+\]
+then we get a different $\phi$, but the same $q$. This is in fact true in general.
+
+\begin{prop}[Polarization identity]
+ Suppose that $\Char \F \not= 2$, i.e.\ $1 + 1 \not= 0$ in $\F$ (e.g.\ if $\F$ is $\R$ or $\C$). If $q: V\to \F$ is a quadratic form, then there exists a \emph{unique} symmetric bilinear form $\phi: V \times V\to \F$ such that
+ \[
+ q(\mathbf{v}) = \phi(\mathbf{v}, \mathbf{v}).
+ \]
+\end{prop}
+
+\begin{proof}
+ Let $\psi: V \times V\to \F$ be a bilinear form such that $\psi(\mathbf{v}, \mathbf{v}) = q(\mathbf{v})$. We define $\phi: V \times V\to \F$ by
+ \[
+ \phi(\mathbf{v}, \mathbf{w}) = \frac{1}{2}(\psi(\mathbf{v}, \mathbf{w}) + \psi(\mathbf{w}, \mathbf{v}))
+ \]
+ for all $\mathbf{v}, \mathbf{w} \in V$. This is clearly a bilinear form, and it is also clearly symmetric and satisfies the condition we want. So we have proved the existence part.
+
+ To prove uniqueness, we want to find out the values of $\phi(\mathbf{v}, \mathbf{w})$ in terms of what $q$ tells us. Suppose $\phi$ is such a symmetric bilinear form. We compute
+ \begin{align*}
+ q(\mathbf{v} + \mathbf{w}) &= \phi(\mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w}) \\
+ &= \phi(\mathbf{v}, \mathbf{v}) + \phi(\mathbf{v}, \mathbf{w}) + \phi(\mathbf{w}, \mathbf{v}) + \phi(\mathbf{w}, \mathbf{w})\\
+ &= q(\mathbf{v}) + 2\phi(\mathbf{v}, \mathbf{w}) + q(\mathbf{w}).
+ \end{align*}
+ So we have
+ \[
+ \phi(\mathbf{v}, \mathbf{w}) = \frac{1}{2}(q(\mathbf{v} + \mathbf{w}) - q(\mathbf{v}) - q(\mathbf{w})).
+ \]
+ So it is determined by $q$, and hence unique.
+\end{proof}
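The recovery formula is easy to verify numerically. Below is a small sketch (an aside, not part of the notes) over $\R$, with a random symmetric matrix standing in for $\phi$.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
S = (S + S.T) / 2        # symmetric, so phi is a symmetric bilinear form

phi = lambda v, w: v @ S @ w
q = lambda v: phi(v, v)  # the associated quadratic form

v = rng.standard_normal(4)
w = rng.standard_normal(4)
# phi is recovered from q alone, as in the proof
ok = np.isclose(phi(v, w), (q(v + w) - q(v) - q(w)) / 2)
print(ok)  # True
```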
+
+\begin{thm}
+ Let $V$ be a finite-dimensional vector space over $\F$, and $\phi: V\times V \to \F$ a symmetric bilinear form. Then there exists a basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ for $V$ such that $\phi$ is represented by a diagonal matrix with respect to this basis.
+\end{thm}
+This tells us classifying symmetric bilinear forms is easier than classifying endomorphisms, since for endomorphisms, even over $\C$, we cannot always make it diagonal, but we can for bilinear forms over arbitrary fields.
+
+\begin{proof}
+ We induct over $n = \dim V$. The cases $n = 0$ and $n = 1$ are trivial, since all matrices are diagonal.
+
+ Suppose we have proven the result for all spaces of dimension less than $n$. First consider the case where $\phi(\mathbf{v}, \mathbf{v}) = 0$ for all $\mathbf{v} \in V$. We want to show that we must have $\phi = 0$. This follows from the polarization identity, since this $\phi$ induces the zero quadratic form, and we know that there is a unique bilinear form that induces the zero quadratic form. Since we know that the zero bilinear form also induces the zero quadratic form, we must have $\phi = 0$. Then $\phi$ will be represented by the zero matrix with respect to any basis, which is trivially diagonal.
+
+ If not, pick $\mathbf{e}_1 \in V$ such that $\phi(\mathbf{e}_1, \mathbf{e}_1) \not= 0$. Let
+ \[
+ U = \ker \phi(\mathbf{e}_1, \ph) = \{\mathbf{u} \in V: \phi(\mathbf{e}_1, \mathbf{u}) = 0\}.
+ \]
+ Since $\phi(\mathbf{e}_1, \ph) \in V^* \setminus \{0\}$, we know that $\dim U = n - 1$ by the rank-nullity theorem.
+
+ Our objective is to find other basis elements $\mathbf{e}_2, \cdots, \mathbf{e}_n$ such that $\phi(\mathbf{e}_1, \mathbf{e}_j) = 0$ for all $j > 1$. For this to happen, we need to find them inside $U$.
+
+ Now consider $\phi|_{U\times U}: U\times U \to \F$, a symmetric bilinear form. By the induction hypothesis, we can find a basis $\mathbf{e}_2, \cdots, \mathbf{e}_n$ for $U$ such that $\phi|_{U\times U}$ is represented by a diagonal matrix with respect to this basis.
+
+ Now by construction, $\phi(\mathbf{e}_i, \mathbf{e}_j) = 0$ for all $1 \leq i \not= j \leq n$ and $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ is a basis for $V$. So we're done.
+\end{proof}
+
+\begin{eg}
+ Let $q$ be a quadratic form on $\R^3$ given by
+ \[
+ q\left(
+ \begin{pmatrix}
+ x\\y\\z
+ \end{pmatrix}\right) = x^2 + y^2 + z^2 + 2xy + 4yz + 6xz.
+ \]
+ We want to find a basis $\mathbf{f}_1, \mathbf{f}_2, \mathbf{f}_3$ for $\R^3$ such that $q$ is of the form
+ \[
+ q(a\mathbf{f}_1 + b\mathbf{f}_2 + c\mathbf{f}_3) = \lambda a^2 + \mu b^2 + \nu c^2
+ \]
+ for some $\lambda, \mu, \nu \in \R$.
+
+ There are two ways to do this. The first way is to follow the proof we just had. We first find our symmetric bilinear form. It is the bilinear form represented by the matrix
+ \[
+ A =
+ \begin{pmatrix}
+ 1 & 1 & 3\\
+ 1 & 1 & 2\\
+ 3 & 2 & 1
+ \end{pmatrix}.
+ \]
+ We then find $\mathbf{f}_1$ such that $\phi(\mathbf{f}_1, \mathbf{f}_1) \not= 0$. We note that $q(\mathbf{e}_1) = 1\not= 0$. So we pick
+ \[
+ \mathbf{f}_1 = \mathbf{e}_1 =
+ \begin{pmatrix}
+ 1\\0\\0
+ \end{pmatrix}.
+ \]
+ Then
+ \[
+ \phi(\mathbf{e}_1, \mathbf{v}) =
+ \begin{pmatrix}
+ 1 & 0 & 0
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & 1 & 3\\
+ 1 & 1 & 2\\
+ 3 & 2 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ v_1\\v_2\\v_3
+ \end{pmatrix} = v_1 + v_2 + 3v_3.
+ \]
+ Next we need to pick our $\mathbf{f}_2$. Since it is in the kernel of $\phi(\mathbf{f}_1, \ph)$, it must satisfy
+ \[
+ \phi(\mathbf{f}_1, \mathbf{f}_2) = 0.
+ \]
+ To continue our proof inductively, we also have to pick an $\mathbf{f}_2$ such that
+ \[
+ \quad \phi(\mathbf{f}_2, \mathbf{f}_2) \not= 0.
+ \]
+ For example, we can pick
+ \[
+ \mathbf{f}_2 =
+ \begin{pmatrix}
+ 3\\0\\-1
+ \end{pmatrix}.
+ \]
+ Then we have $q(\mathbf{f}_2) = -8$.
+
+ Then we have
+ \[
+ \phi(\mathbf{f}_2, \mathbf{v}) =
+ \begin{pmatrix}
+ 3 & 0 & -1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & 1 & 3\\
+ 1 & 1 & 2\\
+ 3 & 2 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ v_1\\v_2\\v_3
+ \end{pmatrix} = v_2 + 8v_3.
+ \]
+ Finally, we want $\phi(\mathbf{f}_1, \mathbf{f}_3) = \phi(\mathbf{f}_2, \mathbf{f}_3) = 0$. Then
+ \[
+ \mathbf{f}_3 =
+ \begin{pmatrix}
+ 5 \\ -8 \\1
+ \end{pmatrix}
+ \]
+ works. We have $q(\mathbf{f}_3) = 8$.
+
+ With these basis elements, we have
+ \begin{align*}
+ q(a \mathbf{f}_1 + b\mathbf{f}_2 + c \mathbf{f}_3) &= \phi(a \mathbf{f}_1 + b \mathbf{f}_2 + c \mathbf{f}_3, a \mathbf{f}_1 + b \mathbf{f}_2 + c \mathbf{f}_3) \\
+ &= a^2 q (\mathbf{f}_1) + b^2 q(\mathbf{f}_2) + c^2 q(\mathbf{f}_3) \\
+ &= a^2 - 8b^2 + 8c^2.
+ \end{align*}
+ Alternatively, we can solve the problem by completing the square. We have
+ \begin{align*}
+ x^2 + y^2 + z^2 + 2xy + 4yz + 6xz &= (x + y + 3z)^2 - 2yz - 8z^2\\
+ &= (x + y + 3z)^2 - 8\left(z + \frac{y}{8}\right)^2 + \frac{1}{8}y^2.
+ \end{align*}
+ We now see
+ \[
+ \phi\left(
+ \begin{pmatrix}
+ x\\y\\z
+ \end{pmatrix},
+ \begin{pmatrix}
+ x'\\y'\\z'
+ \end{pmatrix}\right) = (x + y + 3z)(x' + y' + 3z') - 8\left(z + \frac{y}{8}\right)\left(z' + \frac{y'}{8}\right) + \frac{1}{8}yy'.
+ \]
+ Why do we know this? This is clearly a symmetric bilinear form, and this also clearly induces the $q$ given above. By uniqueness, we know that this is the right symmetric bilinear form.
+
+ We now use this form to find our $\mathbf{f}_1, \mathbf{f}_2, \mathbf{f}_3$ such that $\phi(\mathbf{f}_i, \mathbf{f}_j) = 0$ for $i \not= j$.
+
+ To do so, we just solve the equations
+ \begin{align*}
+ x + y + 3z &= 1\\
+ z + \frac{1}{8}y &= 0\\
+ y &= 0.
+ \end{align*}
+ This gives our first vector as
+ \[
+ \mathbf{f}_1 =
+ \begin{pmatrix}
+ 1\\0\\0
+ \end{pmatrix}.
+ \]
+ We then solve
+ \begin{align*}
+ x + y + 3z &= 0\\
+ z + \frac{1}{8}y &= 1\\
+ y &= 0.
+ \end{align*}
+ So we have
+ \[
+ \mathbf{f}_2 =
+ \begin{pmatrix}
+ -3\\0\\1
+ \end{pmatrix}.
+ \]
+ Finally, we solve
+ \begin{align*}
+ x + y + 3z &= 0\\
+ z + \frac{1}{8}y &= 0\\
+ y &= 1.
+ \end{align*}
+ This gives
+ \[
+ \mathbf{f}_3 =
+ \begin{pmatrix}
+ -5/8\\ 1 \\ -1/8
+ \end{pmatrix}.
+ \]
+ Then we can see that the result follows, and we get
+ \[
+ q(a\mathbf{f}_1 + b\mathbf{f}_2 + c\mathbf{f}_3) = a^2 - 8b^2 + \frac{1}{8}c^2.
+ \]
+\end{eg}
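Either basis can be checked by a congruence computation. Here is a quick NumPy check (an aside) for the first basis $\mathbf{f}_1 = (1, 0, 0)$, $\mathbf{f}_2 = (3, 0, -1)$, $\mathbf{f}_3 = (5, -8, 1)$: putting them as the columns of $P$, the matrix $P^T A P$ should be $\operatorname{diag}(1, -8, 8)$.

```python
import numpy as np

A = np.array([[1., 1., 3.],
              [1., 1., 2.],
              [3., 2., 1.]])
# Columns are f1, f2, f3 from the first method
P = np.array([[1.,  3.,  5.],
              [0.,  0., -8.],
              [0., -1.,  1.]])
B = P.T @ A @ P  # congruence: the form in the new basis
print(np.round(B))  # diag(1, -8, 8)
```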
+We see that the diagonal matrix we get is not unique. We can re-scale our basis by any constant, and get an equivalent expression.
+
+\begin{thm}
+ Let $\phi$ be a symmetric bilinear form on a finite-dimensional complex vector space $V$. Then there exists a basis $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ for $V$ such that $\phi$ is represented by
+ \[
+ \begin{pmatrix}
+ I_r & 0\\
+ 0 & 0
+ \end{pmatrix}
+ \]
+ with respect to this basis, where $r = r(\phi)$.
+\end{thm}
+
+\begin{proof}
+ We've already shown that there exists a basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ such that $\phi(\mathbf{e}_i, \mathbf{e}_j) = \lambda_i \delta_{ij}$ for some $\lambda_i$. By reordering the $\mathbf{e}_i$, we can assume that $\lambda_1, \cdots, \lambda_r \not= 0$ and $\lambda_{r + 1},\cdots, \lambda_n = 0$.
+
+ For each $1 \leq i \leq r$, there exists some $\mu_i$ such that $\mu_i^2 = \lambda_i$. For $r + 1 \leq i \leq n$, we let $\mu_i = 1$ (or anything non-zero). We define
+ \[
+ \mathbf{v}_i = \frac{\mathbf{e}_i}{\mu_i}.
+ \]
+ Then
+ \[
+ \phi(\mathbf{v}_i, \mathbf{v}_j) = \frac{1}{\mu_i \mu_j} \phi(\mathbf{e}_i, \mathbf{e}_j) =
+ \begin{cases}
+ 0 & i\not= j\text{ or }i = j > r\\
+ 1 & i = j \leq r.
+ \end{cases}
+ \]
+ So done.
+\end{proof}
+Note that it follows that for the corresponding quadratic form $q$, we have
+\[
+ q\left(\sum_{i = 1}^n a_i \mathbf{v}_i\right) = \sum_{i = 1}^r a_i^2.
+\]
+\begin{cor}
+ Every symmetric $A \in \Mat_n(\C)$ is congruent to a unique matrix of the form
+ \[
+ \begin{pmatrix}
+ I_r & 0\\
+ 0 & 0
+ \end{pmatrix}.
+ \]
+\end{cor}
+Now this theorem is a bit too strong, and we are going to fix that next lecture, by talking about Hermitian forms and sesquilinear forms. Before that, we do the equivalent result for real vector spaces.
+\begin{thm}
+ Let $\phi$ be a symmetric bilinear form on a finite-dimensional vector space $V$ over $\R$. Then there exists a basis $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ for $V$ such that $\phi$ is represented by
+ \[
+ \begin{pmatrix}
+ I_p\\
+ & -I_q\\
+ & & 0
+ \end{pmatrix},
+ \]
+ with $p + q = r(\phi)$, $p, q \geq 0$. Equivalently, the corresponding quadratic form is given by
+ \[
+ q\left(\sum_{i = 1}^n a_i \mathbf{v}_i\right) = \sum_{i = 1}^p a_i^2 - \sum_{j = p + 1}^{p + q} a_j^2.
+ \]
+\end{thm}
+Note that we have seen these things in special relativity, where the Minkowski inner product is given by the symmetric bilinear form represented by
+\[
+ \begin{pmatrix}
+ -1 & 0 & 0 & 0 \\
+ 0 & 1 & 0 & 0 \\
+ 0 & 0 & 1 & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix},
+\]
+in units where $c = 1$.
+
+\begin{proof}
+ We've already shown that there exists a basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ such that $\phi(\mathbf{e}_i, \mathbf{e}_j) = \lambda_i \delta_{ij}$ for some $\lambda_1, \cdots, \lambda_n \in \R$. By reordering, we may assume
+ \[
+ \begin{cases}
+ \lambda_i > 0 & 1 \leq i \leq p\\
+ \lambda_i < 0 & p + 1 \leq i \leq r\\
+ \lambda_i = 0 & i > r
+ \end{cases}
+ \]
+ We let $\mu_i$ be defined by
+ \[
+ \mu_i =
+ \begin{cases}
+ \sqrt{\lambda_i} & 1 \leq i \leq p\\
+ \sqrt{-\lambda_i} & p + 1 \leq i \leq r\\
+ 1 & i > r
+ \end{cases}
+ \]
+ Defining
+ \[
+ \mathbf{v}_i = \frac{1}{\mu_i}\mathbf{e}_i,
+ \]
+ we find that $\phi$ is indeed represented by
+ \[
+ \begin{pmatrix}
+ I_p\\
+ & -I_q\\
+ & & 0
+ \end{pmatrix}.\qedhere
+ \]
+\end{proof}
+We will later show that this form is indeed unique. Before that, we will have a few definitions, that really only make sense over $\R$.
+
+\begin{defi}[Positive/negative (semi-)definite]
+ Let $\phi$ be a symmetric bilinear form on a finite-dimensional real vector space $V$. We say
+ \begin{enumerate}
+ \item $\phi$ is \emph{positive definite} if $\phi(\mathbf{v}, \mathbf{v}) > 0$ for all $\mathbf{v} \in V\setminus \{0\}$.
+ \item $\phi$ is \emph{positive semi-definite} if $\phi(\mathbf{v}, \mathbf{v}) \geq 0$ for all $\mathbf{v} \in V$.
+ \item $\phi$ is \emph{negative definite} if $\phi(\mathbf{v}, \mathbf{v}) < 0$ for all $\mathbf{v} \in V\setminus \{0\}$.
+ \item $\phi$ is \emph{negative semi-definite} if $\phi(\mathbf{v}, \mathbf{v}) \leq 0$ for all $\mathbf{v} \in V$.
+ \end{enumerate}
+\end{defi}
+
+We are going to use these notions to prove uniqueness. It is easy to see that if $p = 0$ and $q = n$, then $\phi$ is negative definite; if $p = 0$ and $q \not= n$, then $\phi$ is negative semi-definite, etc.
+
+\begin{eg}
+ Let $\phi$ be a symmetric bilinear form on $\R^n$ represented by
+ \[
+ \begin{pmatrix}
+ I_p & 0\\
+ 0 & 0_{n - p}
+ \end{pmatrix}.
+ \]
+ Then $\phi$ is positive semi-definite. $\phi$ is positive definite if and only if $n = p$.
+
+ If instead $\phi$ is represented by
+ \[
+ \begin{pmatrix}
+ -I_p & 0\\
+ 0 & 0_{n - p}
+ \end{pmatrix},
+ \]
+ then $\phi$ is negative semi-definite. $\phi$ is negative definite precisely if $n = p$.
+\end{eg}
+
+We are going to use this to prove the uniqueness part of our previous theorem.
+\begin{thm}[Sylvester's law of inertia]
+ Let $\phi$ be a symmetric bilinear form on a finite-dimensional real vector space $V$. Then there exist unique non-negative integers $p, q$ such that $\phi$ is represented by
+ \[
+ \begin{pmatrix}
+ I_p & 0 & 0\\
+ 0 & -I_q & 0\\
+ 0 & 0 & 0
+ \end{pmatrix}
+ \]
+ with respect to some basis.
+\end{thm}
+
+\begin{proof}
+ We have already proved the existence part, and we just have to prove uniqueness. To do so, we characterize $p$ and $q$ in a basis-independent way. We already know that $p + q = r(\phi)$ does not depend on the basis. So it suffices to show $p$ is unique.
+
+ To see that $p$ is unique, we show that $p$ is the largest dimension of a subspace $P \subseteq V$ such that $\phi|_{P\times P}$ is positive definite.
+
+ First we show we can find such a $P$. Suppose $\phi$ is represented by
+ \[
+ \begin{pmatrix}
+ I_p & 0 & 0\\
+ 0 & -I_q & 0\\
+ 0 & 0 & 0
+ \end{pmatrix}
+ \]
+ with respect to $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$. Then $\phi$ restricted to $\bra \mathbf{e}_1, \cdots, \mathbf{e}_p\ket$ is represented by $I_p$ with respect to $\mathbf{e}_1, \cdots, \mathbf{e}_p$. So $\phi$ restricted to this is positive definite.
+
+ Now suppose $P$ is any subspace of $V$ such that $\phi|_{P \times P}$ is positive definite. To show $P$ has dimension at most $p$, we find a subspace $Q$ of dimension $n - p$ that intersects $P$ trivially.
+
+ Let $Q = \bra \mathbf{e}_{p + 1}, \cdots, \mathbf{e}_n\ket$. Then $\phi$ restricted to $Q\times Q$ is represented by
+ \[
+ \begin{pmatrix}
+ -I_q& 0\\
+ 0 & 0
+ \end{pmatrix}.
+ \]
+ Now if $\mathbf{v} \in P\cap Q \setminus \{0\}$, then $\phi(\mathbf{v}, \mathbf{v}) > 0$ since $\mathbf{v}\in P\setminus \{0\}$ and $\phi(\mathbf{v}, \mathbf{v}) \leq 0$ since $\mathbf{v}\in Q$, which is a contradiction. So $P\cap Q = 0$.
+
+ We have
+ \[
+ \dim V \geq \dim (P + Q) = \dim P + \dim Q = \dim P + (n - p).
+ \]
+ Rearranging gives
+ \[
+ \dim P \leq p.
+\]
+ A similar argument shows that $q$ is the maximal dimension of a subspace $Q\subseteq V$ such that $\phi|_{Q\times Q}$ is negative definite.
+\end{proof}
+
+\begin{defi}[Signature]
+ The \emph{signature} of a bilinear form $\phi$ is the number $p - q$, where $p$ and $q$ are as above.
+\end{defi}
+Of course, we can recover $p$ and $q$ from the signature and the rank of $\phi$.
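Computationally, $p$ and $q$ can be read off from eigenvalue signs: a real symmetric matrix is orthogonally diagonalizable, and an orthogonal $P$ satisfies $P^T = P^{-1}$, so the diagonalization is in particular a congruence. A small sketch (an aside), using a sample real symmetric matrix:

```python
import numpy as np

A = np.array([[1., 1., 3.],
              [1., 1., 2.],
              [3., 2., 1.]])
# A real symmetric matrix is orthogonally diagonalizable, and for an
# orthogonal P we have P^T = P^{-1}, so diagonalizing is also a
# congruence.  By Sylvester's law, counting eigenvalue signs gives p, q.
evals = np.linalg.eigvalsh(A)
p = int(np.sum(evals > 1e-10))
q = int(np.sum(evals < -1e-10))
print(p, q, p - q)  # p = 2, q = 1, signature 1
```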
+
+\begin{cor}
+ Every real symmetric matrix is congruent to precisely one matrix of the form
+ \[
+ \begin{pmatrix}
+ I_p & 0 & 0\\
+ 0 & -I_q & 0\\
+ 0 & 0 & 0
+ \end{pmatrix}.
+ \]
+\end{cor}
+\subsection{Hermitian form}
+The above result was nice for real vector spaces. However, if $\phi$ is a bilinear form on a $\C$-vector space $V$, then $\phi(i\mathbf{v}, i\mathbf{v}) = -\phi(\mathbf{v}, \mathbf{v})$. So there can be no good notion of positive definiteness for complex bilinear forms. To make them work for complex vector spaces, we need to modify the definition slightly to obtain Hermitian forms.
+
+\begin{defi}[Sesquilinear form]
+ Let $V, W$ be complex vector spaces. Then a \emph{sesquilinear form} is a function $\phi: V \times W \to \C$ such that
+ \begin{enumerate}
+ \item $\phi(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2, \mathbf{w}) = \bar{\lambda} \phi(\mathbf{v}_1, \mathbf{w}) + \bar{\mu}\phi(\mathbf{v}_2, \mathbf{w})$.
+ \item $\phi(\mathbf{v}, \lambda \mathbf{w}_1 + \mu \mathbf{w}_2) = \lambda \phi(\mathbf{v}, \mathbf{w}_1) + \mu \phi(\mathbf{v}, \mathbf{w}_2)$
+ \end{enumerate}
+ for all $\mathbf{v}, \mathbf{v}_1, \mathbf{v}_2 \in V$, $\mathbf{w}, \mathbf{w}_1, \mathbf{w}_2 \in W$ and $\lambda, \mu \in \C$.
+\end{defi}
+Note that some people have an opposite definition, where we have linearity in the first argument and conjugate linearity in the second.
+
+These are called sesquilinear since ``sesqui'' means ``one and a half'', and this is linear in the second argument and ``half linear'' in the first.
+
+Alternatively, to define a sesquilinear form, we can define a new complex vector space $\bar{V}$ structure on $V$ by taking the same abelian group (i.e.\ the same underlying set and addition), but with the scalar multiplication $\C \times \bar{V} \to \bar{V}$ defined as
+\[
+ (\lambda, \mathbf{v}) \mapsto \bar{\lambda} \mathbf{v}.
+\]
+Then a sesquilinear form on $V\times W$ is a bilinear form on $\bar{V} \times W$. Alternatively, this is a linear map $W \to \bar{V}^*$.
+
+\begin{defi}[Representation of sesquilinear form]
+ Let $V, W$ be finite-dimensional complex vector spaces with basis $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ and $(\mathbf{w}_1, \cdots, \mathbf{w}_m)$ respectively, and $\phi: V \times W \to \C$ be a sesquilinear form. Then the matrix representing $\phi$ with respect to these bases is
+ \[
+ A_{ij} = \phi(\mathbf{v}_i, \mathbf{w}_j).
+ \]
+ for $1 \leq i \leq n, 1 \leq j \leq m$.
+\end{defi}
+As usual, this determines the whole sesquilinear form. This follows from the analogous fact for the bilinear form $\bar{V} \times W \to \C$. Let $\mathbf{v} = \sum \lambda_i \mathbf{v}_i$ and $\mathbf{w} = \sum \mu_j \mathbf{w}_j$. Then we have
+\[
+ \phi(\mathbf{v}, \mathbf{w}) = \sum_{i, j} \overline{\lambda}_i \mu_j \phi(\mathbf{v}_i, \mathbf{w}_j) = \lambda^\dagger A \mu.
+\]
+We now want the right definition of symmetric sesquilinear form. We cannot just require $\phi(\mathbf{v}, \mathbf{w}) = \phi(\mathbf{w}, \mathbf{v})$, since $\phi$ is linear in the second variable and conjugate linear on the first variable. So in particular, if $\phi(\mathbf{v}, \mathbf{w}) \not= 0$, we have $\phi(i \mathbf{v}, \mathbf{w}) \not= \phi(\mathbf{v}, i\mathbf{w})$.
+
+\begin{defi}[Hermitian sesquilinear form]
+ A sesquilinear form on $V\times V$ is \emph{Hermitian} if
+ \[
+ \phi(\mathbf{v}, \mathbf{w}) = \overline{\phi(\mathbf{w}, \mathbf{v})}.
+ \]
+\end{defi}
+Note that if $\phi$ is Hermitian, then $\phi(\mathbf{v}, \mathbf{v}) = \overline{\phi(\mathbf{v}, \mathbf{v})} \in \R$ for any $\mathbf{v} \in V$. So it makes sense to ask if it is positive or negative. Moreover, for any complex number $\lambda$, we have
+\[
+ \phi(\lambda \mathbf{v}, \lambda \mathbf{v}) = |\lambda|^2 \phi(\mathbf{v}, \mathbf{v}).
+\]
+So multiplying by a scalar does not change the sign. So it makes sense to talk about positive (semi-)definite and negative (semi-)definite Hermitian forms.
+
+We will prove results analogous to what we had for real symmetric bilinear forms.
+\begin{lemma}
+ Let $\phi: V \times V\to \C$ be a sesquilinear form on a finite-dimensional vector space over $\C$, and $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ a basis for $V$. Then $\phi$ is Hermitian if and only if the matrix $A$ representing $\phi$ is Hermitian (i.e.\ $A = A^\dagger$).
+\end{lemma}
+
+\begin{proof}
+ If $\phi$ is Hermitian, then
+ \[
+ A_{ij} = \phi(\mathbf{e}_i, \mathbf{e}_j) = \overline{\phi(\mathbf{e}_j, \mathbf{e}_i)} = A^{\dagger}_{ij}.
+ \]
+ If $A$ is Hermitian, then
+ \[
+ \phi\left(\sum \lambda_i \mathbf{e}_i, \sum \mu_j \mathbf{e}_j\right) = \lambda^\dagger A \mu = \overline{\mu^\dagger A^\dagger \lambda} = \overline{\phi\left(\sum \mu_j \mathbf{e}_j, \sum \lambda_i \mathbf{e}_i\right)}.
+ \]
+ So done.
+\end{proof}
+
+\begin{prop}[Change of basis]
+ Let $\phi$ be a Hermitian form on a finite-dimensional vector space $V$, and let $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ and $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ be bases for $V$ such that
+ \[
+ \mathbf{v}_i = \sum_{k = 1}^n P_{ki} \mathbf{e}_k;
+ \]
+ and $A, B$ represent $\phi$ with respect to $(\mathbf{e}_1, \ldots, \mathbf{e}_n)$ and $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ respectively. Then
+ \[
+ B = P^\dagger AP.
+ \]
+\end{prop}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ B_{ij} &= \phi(\mathbf{v}_i, \mathbf{v}_j) \\
+ &= \phi\left(\sum P_{ki} \mathbf{e}_k, \sum P_{\ell j} \mathbf{e}_\ell\right)\\
+ &= \sum_{k, \ell = 1}^n \bar{P}_{ki} P_{\ell j} A_{k\ell}\\
+ &= (P^\dagger AP)_{ij}.\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{lemma}[Polarization identity (again)]
+ A Hermitian form $\phi$ on $V$ is determined by the function $\psi: \mathbf{v} \mapsto \phi(\mathbf{v}, \mathbf{v})$.
+\end{lemma}
+
+The proof this time is slightly more involved.
+\begin{proof}
+ We have the following:
+ \begin{align*}
+ \psi(\mathbf{x} + \mathbf{y}) &= \phi(\mathbf{x}, \mathbf{x}) + \phi(\mathbf{x}, \mathbf{y}) + \phi(\mathbf{y}, \mathbf{x}) + \phi(\mathbf{y}, \mathbf{y})\\
+ -\psi(\mathbf{x} - \mathbf{y}) &= -\phi(\mathbf{x}, \mathbf{x}) + \phi(\mathbf{x}, \mathbf{y}) + \phi(\mathbf{y}, \mathbf{x}) - \phi(\mathbf{y}, \mathbf{y})\\
+ i\psi(\mathbf{x} -i\mathbf{y}) &= i\phi(\mathbf{x}, \mathbf{x}) + \phi(\mathbf{x}, \mathbf{y}) - \phi(\mathbf{y}, \mathbf{x}) +i\phi(\mathbf{y}, \mathbf{y})\\
+ -i\psi(\mathbf{x} +i\mathbf{y}) &=-i\phi(\mathbf{x}, \mathbf{x}) + \phi(\mathbf{x}, \mathbf{y}) - \phi(\mathbf{y}, \mathbf{x}) -i\phi(\mathbf{y}, \mathbf{y})
+ \end{align*}
+ So
+ \[
+ \phi(\mathbf{x}, \mathbf{y}) = \frac{1}{4}(\psi(\mathbf{x} + \mathbf{y}) - \psi(\mathbf{x} - \mathbf{y}) + i\psi(\mathbf{x} - i\mathbf{y}) - i \psi(\mathbf{x} + i\mathbf{y})).\qedhere
+ \]
+\end{proof}
+
+\begin{thm}[Hermitian form of Sylvester's law of inertia]
+ Let $V$ be a finite-dimensional complex vector space and $\phi$ a Hermitian form on $V$. Then there exist unique non-negative integers $p$ and $q$ such that $\phi$ is represented by
+ \[
+ \begin{pmatrix}
+ I_p & 0 & 0\\
+ 0 & -I_q & 0\\
+ 0 & 0 & 0
+ \end{pmatrix}
+ \]
+ with respect to some basis.
+\end{thm}
+
+\begin{proof}
+ Same as for symmetric forms over $\R$.
+\end{proof}
+
+\section{Inner product spaces}
+Welcome to the last chapter, where we discuss inner products. Technically, an inner product is just a special case of a positive-definite symmetric bilinear or Hermitian form. However, they are usually much more interesting and useful. Many familiar notions such as orthogonality only make sense when we have an inner product.
+
+In this chapter, we will adopt the convention that $\F$ always means either $\R$ or $\C$, since working with other fields doesn't make much sense here.
+
+\subsection{Definitions and basic properties}
+\begin{defi}[Inner product space]
+ Let $V$ be a vector space. An \emph{inner product} on $V$ is a positive-definite symmetric bilinear/hermitian form. We usually write $(x, y)$ instead of $\phi(x, y)$.
+
+ A vector space equipped with an inner product is an \emph{inner product space}.
+\end{defi}
+We will see that if we have an inner product, then we can define lengths and distances in a sensible way.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\R^n$ or $\C^n$ with the usual inner product
+ \[
+ (x, y) = \sum_{i = 1}^n \bar{x}_i y_i
+ \]
+ forms an inner product space.
+
+ By Sylvester's law of inertia, this is in some sense the only inner product on a finite-dimensional space, since every inner product is represented by $I$ with respect to some basis. However, it is often more convenient to work with general inner products abstractly, without fixing such a basis.
+ \item Let $C([0, 1], \F)$ be the vector space of real/complex valued continuous functions on $[0, 1]$. Then the following is an inner product:
+ \[
+ (f, g) = \int_0^1 \bar{f}(t) g(t)\;\d t.
+ \]
+ \item More generally, for any $w: [0, 1] \to \R^+$ continuous, we can define the inner product on $C([0, 1], \F)$ as
+ \[
+ (f, g) = \int_0^1 w(t) \bar{f}(t) g(t) \;\d t.
+ \]
+ \end{enumerate}
+\end{eg}
+If $V$ is an inner product space, we can define a norm on $V$ by
+\[
+ \|\mathbf{v}\| = \sqrt{(\mathbf{v}, \mathbf{v})}.
+\]
+This is just the usual notion of norm on $\R^n$ and $\C^n$. This gives the notion of length in inner product spaces. Note that $\|\mathbf{v}\| \geq 0$, with equality if and only if $\mathbf{v} = \mathbf{0}$.
+
+Note also that the norm $\|\ph\|$ determines the inner product by the polarization identity.
+
+We want to see that this indeed satisfies the definition of a norm, as you might have seen from Analysis II. To prove this, we need to prove the Cauchy-Schwarz inequality.
+
+\begin{thm}[Cauchy-Schwarz inequality]
+ Let $V$ be an inner product space and $\mathbf{v}, \mathbf{w} \in V$. Then
+ \[
+ |(\mathbf{v}, \mathbf{w})| \leq \|\mathbf{v}\| \|\mathbf{w}\|.
+ \]
+\end{thm}
+
+\begin{proof}
+ If $\mathbf{w} = \mathbf{0}$, then this is trivial. Otherwise, since the inner product is positive definite, for any $\lambda$, we get
+ \[
+ 0 \leq (\mathbf{v} - \lambda \mathbf{w}, \mathbf{v} - \lambda \mathbf{w}) = (\mathbf{v}, \mathbf{v}) - \bar{\lambda} (\mathbf{w}, \mathbf{v}) - \lambda (\mathbf{v}, \mathbf{w}) + |\lambda|^2 (\mathbf{w}, \mathbf{w}).
+ \]
+ We now pick a clever value of $\lambda$. We let
+ \[
+ \lambda = \frac{(\mathbf{w}, \mathbf{v})}{(\mathbf{w}, \mathbf{w})}.
+ \]
+ Then we get
+ \[
+ 0 \leq (\mathbf{v}, \mathbf{v}) - \frac{|(\mathbf{w}, \mathbf{v})|^2}{(\mathbf{w}, \mathbf{w})} - \frac{|(\mathbf{w}, \mathbf{v})|^2}{(\mathbf{w}, \mathbf{w})} + \frac{|(\mathbf{w}, \mathbf{v})|^2}{(\mathbf{w}, \mathbf{w})}.
+ \]
+ So we get
+ \[
+ |(\mathbf{w}, \mathbf{v})|^2 \leq (\mathbf{v}, \mathbf{v}) (\mathbf{w}, \mathbf{w}).
+ \]
+ So done.
+\end{proof}
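+Even in concrete spaces, this inequality is useful. For example, applying it to the integral inner product gives a standard estimate:
+\begin{eg}
+ In $C([0, 1], \R)$ with the inner product $(f, g) = \int_0^1 f(t) g(t)\;\d t$, the Cauchy-Schwarz inequality says
+ \[
+ \left(\int_0^1 f(t) g(t)\;\d t\right)^2 \leq \left(\int_0^1 f(t)^2\;\d t\right)\left(\int_0^1 g(t)^2 \;\d t\right).
+ \]
+ In particular, taking $g = 1$, we get
+ \[
+ \left(\int_0^1 f(t)\;\d t\right)^2 \leq \int_0^1 f(t)^2\;\d t
+ \]
+ for any continuous $f$.
+\end{eg}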
+With this, we can prove the triangle inequality.
+
+\begin{cor}[Triangle inequality]
+ Let $V$ be an inner product space and $\mathbf{v}, \mathbf{w} \in V$. Then
+ \[
+ \|\mathbf{v} + \mathbf{w}\| \leq \|\mathbf{v}\| + \|\mathbf{w}\|.
+ \]
+\end{cor}
+
+\begin{proof}
+ We compute
+ \begin{align*}
+ \|\mathbf{v} + \mathbf{w}\|^2 &= (\mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w}) \\
+ &= (\mathbf{v}, \mathbf{v}) + (\mathbf{v}, \mathbf{w}) + (\mathbf{w}, \mathbf{v}) + (\mathbf{w}, \mathbf{w})\\
+ &= \|\mathbf{v}\|^2 + 2\operatorname{Re}(\mathbf{v}, \mathbf{w}) + \|\mathbf{w}\|^2\\
+ &\leq \|\mathbf{v}\|^2 + 2 \|\mathbf{v} \| \|\mathbf{w}\| + \|\mathbf{w}\|^2\\
+ &= (\|\mathbf{v}\| + \|\mathbf{w}\|)^2.
+ \end{align*}
+ So done.
+\end{proof}
+
+The next thing we do is to define orthogonality. This generalizes the notion of being ``perpendicular''.
+
+\begin{defi}[Orthogonal vectors]
+ Let $V$ be an inner product space. Then $\mathbf{v}, \mathbf{w} \in V$ are \emph{orthogonal} if $(\mathbf{v}, \mathbf{w}) = 0$.
+\end{defi}
+
+\begin{defi}[Orthonormal set]
+ Let $V$ be an inner product space. A set $\{\mathbf{v}_i: i \in I\}$ is an \emph{orthonormal set} if for any $i, j \in I$, we have
+ \[
+ (\mathbf{v}_i, \mathbf{v}_j) = \delta_{ij}.
+ \]
+\end{defi}
+It should be clear that an orthonormal set must be linearly independent: if $\sum \lambda_i \mathbf{v}_i = \mathbf{0}$, then taking the inner product with $\mathbf{v}_j$ gives $\lambda_j = 0$.
+
+\begin{defi}[Orthonormal basis]
+ Let $V$ be an inner product space. A subset of $V$ is an \emph{orthonormal basis} if it is an orthonormal set and is a basis.
+\end{defi}
+
+In an inner product space, we almost always want orthonormal bases only. If we pick a basis, we should pick an orthonormal one.
+
+However, we do not know there is always an orthonormal basis, even in the finite-dimensional case. Also, given an orthonormal set, we would like to extend it to an orthonormal basis. This is what we will do later.
+
+Before that, we first note that given an orthonormal basis, it is easy to find the coordinates of any vector in this basis. Suppose $V$ is a finite-dimensional inner product space with an orthonormal basis $\mathbf{v}_1, \cdots, \mathbf{v}_n$. Given
+\[
+ \mathbf{v} = \sum_{i = 1}^n \lambda_i \mathbf{v}_i,
+\]
+we have
+\[
+ (\mathbf{v}_j, \mathbf{v}) = \sum_{i = 1}^n \lambda_i (\mathbf{v}_j, \mathbf{v}_i) = \lambda_j.
+\]
+So $\mathbf{v} \in V$ can always be written as
+\[
+ \sum_{i = 1}^n (\mathbf{v}_i, \mathbf{v}) \mathbf{v}_i.
+\]
+\begin{lemma}[Parseval's identity]
+ Let $V$ be a finite-dimensional inner product space with an orthonormal basis $\mathbf{v}_1, \cdots, \mathbf{v}_n$, and $\mathbf{v}, \mathbf{w} \in V$. Then
+ \[
+ (\mathbf{v}, \mathbf{w}) = \sum_{i = 1}^n \overline{(\mathbf{v}_i, \mathbf{v})} (\mathbf{v}_i, \mathbf{w}).
+ \]
+ In particular,
+ \[
+ \|\mathbf{v}\|^2 = \sum_{i = 1}^n |(\mathbf{v}_i, \mathbf{v})|^2.
+ \]
+\end{lemma}
+This is something we've seen in IB Methods, for infinite-dimensional spaces. However, we will only care about finite-dimensional ones now.
+
+\begin{proof}
+ \begin{align*}
+ (\mathbf{v}, \mathbf{w}) &= \left(\sum_{i = 1}^n (\mathbf{v}_i, \mathbf{v}) \mathbf{v}_i, \sum_{j = 1}^n (\mathbf{v}_j, \mathbf{w}) \mathbf{v}_j\right)\\
+ &= \sum_{i, j = 1}^n \overline{(\mathbf{v}_i, \mathbf{v})} (\mathbf{v}_j, \mathbf{w}) (\mathbf{v}_i, \mathbf{v}_j)\\
+ &= \sum_{i, j = 1}^n \overline{(\mathbf{v}_i, \mathbf{v})} (\mathbf{v}_j, \mathbf{w}) \delta_{ij}\\
+ &= \sum_{i = 1}^n \overline{(\mathbf{v}_i, \mathbf{v})} (\mathbf{v}_i, \mathbf{w}).\qedhere
+ \end{align*}
+\end{proof}
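+Let us verify this in a simple case:
+\begin{eg}
+ In $\R^2$, take the orthonormal basis $\mathbf{v}_1 = \frac{1}{\sqrt{2}}(1, 1)$, $\mathbf{v}_2 = \frac{1}{\sqrt{2}}(1, -1)$, and let $\mathbf{v} = (3, 4)$. Then
+ \[
+ (\mathbf{v}_1, \mathbf{v}) = \frac{7}{\sqrt{2}},\quad (\mathbf{v}_2, \mathbf{v}) = -\frac{1}{\sqrt{2}},
+ \]
+ and indeed
+ \[
+ |(\mathbf{v}_1, \mathbf{v})|^2 + |(\mathbf{v}_2, \mathbf{v})|^2 = \frac{49}{2} + \frac{1}{2} = 25 = \|\mathbf{v}\|^2.
+ \]
+\end{eg}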
+
+\subsection{Gram-Schmidt orthogonalization}
+As mentioned, we want to make sure every vector space has an orthonormal basis, and we can extend any orthonormal set to an orthonormal basis, at least in the case of finite-dimensional vector spaces. The idea is to start with an arbitrary basis, which we know exists, and produce an orthonormal basis out of it. The way to do this is the Gram-Schmidt process.
+\begin{thm}[Gram-Schmidt process]
+ Let $V$ be an inner product space and $\mathbf{e}_1, \mathbf{e}_2, \cdots$ a linearly independent set. Then we can construct an orthonormal set $\mathbf{v}_1, \mathbf{v}_2, \cdots$ with the property that
+ \[
+ \bra \mathbf{v}_1, \cdots, \mathbf{v}_k\ket = \bra \mathbf{e}_1, \cdots, \mathbf{e}_k\ket
+ \]
+ for every $k$.
+\end{thm}
+Note that we are not requiring the set to be finite. We are just requiring it to be countable.
+
+\begin{proof}
+ We construct it iteratively, and prove this by induction on $k$. The base case $k = 0$ is contentless.
+
+ Suppose we have already found $\mathbf{v}_1, \cdots, \mathbf{v}_k$ that satisfies the properties. We define
+ \[
+ \mathbf{u}_{k + 1} = \mathbf{e}_{k + 1} - \sum_{i = 1}^k (\mathbf{v}_i, \mathbf{e}_{k + 1}) \mathbf{v}_i.
+ \]
+ We want to prove that this is orthogonal to all the other $\mathbf{v}_i$'s for $i \leq k$. We have
+ \[
+ (\mathbf{v}_j, \mathbf{u}_{k + 1}) = (\mathbf{v}_j, \mathbf{e}_{k + 1}) - \sum_{i = 1}^k (\mathbf{v}_i, \mathbf{e}_{k + 1}) \delta_{ij} = (\mathbf{v}_j, \mathbf{e}_{k + 1}) - (\mathbf{v}_j, \mathbf{e}_{k + 1}) = 0.
+ \]
+ So it is orthogonal.
+
+ We want to argue that $\mathbf{u}_{k + 1}$ is non-zero. Note that
+ \[
+ \bra \mathbf{v}_1, \cdots, \mathbf{v}_k, \mathbf{u}_{k + 1}\ket = \bra \mathbf{v}_1, \cdots, \mathbf{v}_k, \mathbf{e}_{k + 1}\ket
+ \]
+ since we can recover $\mathbf{e}_{k + 1}$ from $\mathbf{v}_1, \cdots, \mathbf{v}_k$ and $\mathbf{u}_{k + 1}$ by construction. We also know
+ \[
+ \bra \mathbf{v}_1, \cdots, \mathbf{v}_k, \mathbf{e}_{k + 1}\ket = \bra \mathbf{e}_1, \cdots, \mathbf{e}_k, \mathbf{e}_{k + 1}\ket
+ \]
+ by assumption. We know $\bra \mathbf{e}_1, \cdots, \mathbf{e}_k, \mathbf{e}_{k + 1}\ket$ has dimension $k + 1$ since the $\mathbf{e}_i$ are linearly independent. So we must have $\mathbf{u}_{k + 1}$ non-zero, or else $\{\mathbf{v}_1, \cdots, \mathbf{v}_k\}$ would be a set of $k$ vectors spanning a space of dimension $k + 1$, which is clearly nonsense.
+
+ Therefore, we can define
+ \[
+ \mathbf{v}_{k + 1} = \frac{\mathbf{u}_{k + 1}}{\|\mathbf{u}_{k + 1}\|}.
+ \]
+ Then $\mathbf{v}_1, \cdots, \mathbf{v}_{k + 1}$ is orthonormal and $\bra \mathbf{v}_1, \cdots, \mathbf{v}_{k + 1}\ket = \bra \mathbf{e}_1, \cdots, \mathbf{e}_{k + 1}\ket$ as required.
+\end{proof}
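+Let us run the process on a small example to see it in action:
+\begin{eg}
+ Take $V = \R^3$ with the usual inner product, and $\mathbf{e}_1 = (1, 1, 0)$, $\mathbf{e}_2 = (1, 0, 1)$. First we normalize:
+ \[
+ \mathbf{v}_1 = \frac{\mathbf{e}_1}{\|\mathbf{e}_1\|} = \frac{1}{\sqrt{2}}(1, 1, 0).
+ \]
+ Then we subtract off the component of $\mathbf{e}_2$ along $\mathbf{v}_1$:
+ \[
+ \mathbf{u}_2 = \mathbf{e}_2 - (\mathbf{v}_1, \mathbf{e}_2) \mathbf{v}_1 = (1, 0, 1) - \frac{1}{2}(1, 1, 0) = \left(\frac{1}{2}, -\frac{1}{2}, 1\right).
+ \]
+ Normalizing gives
+ \[
+ \mathbf{v}_2 = \frac{\mathbf{u}_2}{\|\mathbf{u}_2\|} = \frac{1}{\sqrt{6}}(1, -1, 2).
+ \]
+ We can check directly that $(\mathbf{v}_1, \mathbf{v}_2) = 0$ and $\bra \mathbf{v}_1, \mathbf{v}_2\ket = \bra \mathbf{e}_1, \mathbf{e}_2\ket$.
+\end{eg}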
+
+\begin{cor}
+ If $V$ is a finite-dimensional inner product space, then any orthonormal set can be extended to an orthonormal basis.
+\end{cor}
+
+\begin{proof}
+ Let $\mathbf{v}_1, \cdots, \mathbf{v}_k$ be an orthonormal set. Since this is linearly independent, we can extend it to a basis $(\mathbf{v}_1,\cdots, \mathbf{v}_k, \mathbf{x}_{k + 1}, \cdots, \mathbf{x}_n)$.
+
+ We now apply the Gram-Schmidt process to this basis to get an orthonormal basis of $V$, say $(\mathbf{u}_1, \cdots, \mathbf{u}_n)$. Moreover, we can check that the process does not modify our $\mathbf{v}_1, \cdots, \mathbf{v}_k$, i.e.\ $\mathbf{u}_i = \mathbf{v}_i$ for $1 \leq i \leq k$. So done.
+\end{proof}
+
+\begin{defi}[Orthogonal internal direct sum]
+ Let $V$ be an inner product space and $V_1, V_2 \leq V$. Then $V$ is the \emph{orthogonal internal direct sum} of $V_1$ and $V_2$ if it is a direct sum and $V_1$ and $V_2$ are orthogonal. More precisely, we require
+ \begin{enumerate}
+ \item $V = V_1 + V_2$
+ \item $V_1 \cap V_2 = 0$
+ \item $(\mathbf{v}_1, \mathbf{v}_2) = 0$ for all $\mathbf{v}_1 \in V_1$ and $\mathbf{v}_2 \in V_2$.
+ \end{enumerate}
+ Note that condition (iii) implies (ii), but we write it for the sake of explicitness.
+
+ We write $V = V_1 \perp V_2$.
+\end{defi}
+
+\begin{defi}[Orthogonal complement]
+ If $W \leq V$ is a subspace of an inner product space $V$, then the \emph{orthogonal complement} of $W$ in $V$ is the subspace
+ \[
+ W^\perp = \{\mathbf{v} \in V: (\mathbf{v}, \mathbf{w}) = 0, \forall \mathbf{w} \in W\}.
+ \]
+\end{defi}
+It is true that the orthogonal complement is a complement and orthogonal, i.e.\ $V$ is the orthogonal direct sum of $W$ and $W^\perp$.
+
+\begin{prop}
+ Let $V$ be a finite-dimensional inner product space, and $W \leq V$. Then
+ \[
+ V = W \perp W^\perp.
+ \]
+\end{prop}
+
+\begin{proof}
+ There are three things to prove, and we know (iii) implies (ii). Also, (iii) is obvious by definition of $W^\perp$. So it remains to prove (i), i.e.\ $V = W + W^\perp$.
+
+ Let $\mathbf{w}_1, \cdots, \mathbf{w}_k$ be an orthonormal basis for $W$, and pick $\mathbf{v}\in V$. Now let
+ \[
+ \mathbf{w} = \sum_{i = 1}^k (\mathbf{w}_i, \mathbf{v}) \mathbf{w}_i.
+ \]
+ Clearly, we have $\mathbf{w} \in W$. So we need to show $\mathbf{v} - \mathbf{w} \in W^\perp$. For each $j$, we can compute
+ \begin{align*}
+ (\mathbf{w}_j, \mathbf{v} - \mathbf{w}) &= (\mathbf{w}_j, \mathbf{v}) - \sum_{i = 1}^k (\mathbf{w}_i, \mathbf{v})(\mathbf{w}_j, \mathbf{w}_i)\\
+ &= (\mathbf{w}_j, \mathbf{v}) - \sum_{i = 1}^k (\mathbf{w}_i, \mathbf{v}) \delta_{ij}\\
+ &= 0.
+ \end{align*}
+ Hence for any $\lambda_j$, we have
+ \[
+ \left(\sum \lambda_j \mathbf{w}_j, \mathbf{v} - \mathbf{w}\right) = 0.
+ \]
+ So we have $\mathbf{v}-\mathbf{w} \in W^\perp$. So done.
+\end{proof}
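+A quick example of this decomposition:
+\begin{eg}
+ Let $V = \R^3$ with the usual inner product, and $W = \bra (1, 1, 1)\ket$. Then
+ \[
+ W^\perp = \{\mathbf{x} \in \R^3: x_1 + x_2 + x_3 = 0\}.
+ \]
+ Given, say, $\mathbf{v} = (1, 2, 3)$, taking the orthonormal basis $\mathbf{w}_1 = \frac{1}{\sqrt{3}}(1, 1, 1)$ of $W$, the construction in the proof gives
+ \[
+ \mathbf{w} = (\mathbf{w}_1, \mathbf{v}) \mathbf{w}_1 = (2, 2, 2),\quad \mathbf{v} - \mathbf{w} = (-1, 0, 1) \in W^\perp.
+ \]
+ So $\mathbf{v}$ decomposes as $(2, 2, 2) + (-1, 0, 1)$.
+\end{eg}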
+
+\begin{defi}[Orthogonal external direct sum]
+ Let $V_1, V_2$ be inner product spaces. The \emph{orthogonal external direct sum} of $V_1$ and $V_2$ is the vector space $V_1 \oplus V_2$ with the inner product defined by
+ \[
+ (\mathbf{v}_1 + \mathbf{v}_2, \mathbf{w}_1 + \mathbf{w}_2) = (\mathbf{v}_1, \mathbf{w}_1) + (\mathbf{v}_2, \mathbf{w}_2),
+ \]
+ with $\mathbf{v}_1, \mathbf{w}_1 \in V_1$, $\mathbf{v}_2, \mathbf{w}_2 \in V_2$.
+
+ Here we write $\mathbf{v}_1 + \mathbf{v}_2 \in V_1 \oplus V_2$ instead of $(\mathbf{v}_1, \mathbf{v}_2)$ to avoid confusion.
+\end{defi}
+This external direct sum is equivalent to the internal direct sum of $\{(\mathbf{v}_1, \mathbf{0}): \mathbf{v}_1 \in V_1\}$ and $\{(\mathbf{0}, \mathbf{v}_2): \mathbf{v}_2 \in V_2\}$.
+
+\begin{prop}
+ Let $V$ be a finite-dimensional inner product space and $W \leq V$. Let $(\mathbf{e}_1, \cdots, \mathbf{e}_k)$ be an orthonormal basis of $W$. Let $\pi$ be the orthogonal projection of $V$ onto $W$, i.e.\ $\pi: V \to W$ is a function that satisfies $\ker \pi = W^\perp$, $\pi|_W = \id$. Then
+ \begin{enumerate}
+ \item $\pi$ is given by the formula
+ \[
+ \pi(\mathbf{v}) = \sum_{i = 1}^k (\mathbf{e}_i, \mathbf{v}) \mathbf{e}_i.
+ \]
+ \item For all $\mathbf{v}\in V, \mathbf{w} \in W$, we have
+ \[
+ \|\mathbf{v} - \pi(\mathbf{v})\| \leq \|\mathbf{v} - \mathbf{w}\|,
+ \]
+ with equality if and only if $\pi(\mathbf{v}) = \mathbf{w}$. This says $\pi(\mathbf{v})$ is the point on $W$ that is closest to $\mathbf{v}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (5, 0) -- (7, 2.5) -- (2, 2.5) -- cycle;
+ \draw [->] (3, 1.25) -- (3, 3) node [above] {$W^\perp$};
+ \draw [dashed] (4, 1.25) node [left] {$\mathbf{w}$} -- (5, 3) node [above] {$\mathbf{v}$};
+ \draw [->] (5, 3) -- (5, 1.25) node [right] {$\pi(\mathbf{v})$};
+ \draw [dashed] (4, 1.25) -- (5, 1.25);
+ \node at (5, 3) [circ] {};
+ \node at (4, 1.25) [circ] {};
+ \node at (5, 1.25) [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $\mathbf{v} \in V$, and define
+ \[
+ \mathbf{w} = \sum_{i = 1}^k (\mathbf{e}_i, \mathbf{v}) \mathbf{e}_i.
+ \]
+ We want to show this is $\pi(\mathbf{v})$. We need to show $\mathbf{v} - \mathbf{w} \in W^\perp$. We can compute
+ \[
+ (\mathbf{e}_j, \mathbf{v} - \mathbf{w}) = (\mathbf{e}_j, \mathbf{v}) - \sum_{i = 1}^k (\mathbf{e}_i, \mathbf{v}) (\mathbf{e}_j, \mathbf{e}_i) = 0.
+ \]
+ So $\mathbf{v} - \mathbf{w}$ is orthogonal to every basis vector of $W$, i.e.\ $\mathbf{v} - \mathbf{w} \in W^\perp$. So
+ \[
+ \pi(\mathbf{v}) = \pi(\mathbf{w}) + \pi(\mathbf{v} - \mathbf{w}) = \mathbf{w}
+ \]
+ as required.
+ \item This is just Pythagoras' theorem. Note that if $\mathbf{x}$ and $\mathbf{y}$ are orthogonal, then
+ \begin{align*}
+ \|\mathbf{x} + \mathbf{y}\|^2 &= (\mathbf{x} + \mathbf{y}, \mathbf{x} + \mathbf{y}) \\
+ &= (\mathbf{x}, \mathbf{x}) + (\mathbf{x}, \mathbf{y}) + (\mathbf{y}, \mathbf{x}) + (\mathbf{y}, \mathbf{y})\\
+ &= \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2.
+ \end{align*}
+ We apply this to our projection. For any $\mathbf{w} \in W$, we have
+ \[
+ \|\mathbf{v} - \mathbf{w}\|^2 = \|\mathbf{v} - \pi (\mathbf{v})\|^2 + \|\pi(\mathbf{v}) - \mathbf{w}\|^2 \geq \|\mathbf{v} - \pi(\mathbf{v})\|^2
+ \]
+ with equality if and only if $\|\pi(\mathbf{v}) - \mathbf{w}\| = 0$, i.e.\ $\pi(\mathbf{v}) = \mathbf{w}$.\qedhere
+ \end{enumerate}
+\end{proof}
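+Here is the formula at work in a simple case:
+\begin{eg}
+ Let $V = \R^3$ with the usual inner product and $W = \bra \mathbf{e}_1\ket$, where $\mathbf{e}_1 = \frac{1}{\sqrt{2}}(1, 0, 1)$. For $\mathbf{v} = (1, 2, 3)$, we get $(\mathbf{e}_1, \mathbf{v}) = \frac{4}{\sqrt{2}} = 2\sqrt{2}$. So
+ \[
+ \pi(\mathbf{v}) = 2\sqrt{2}\,\mathbf{e}_1 = (2, 0, 2).
+ \]
+ Then $\mathbf{v} - \pi(\mathbf{v}) = (-1, 2, 1)$ is indeed orthogonal to $\mathbf{e}_1$, and $\|\mathbf{v} - \pi(\mathbf{v})\| = \sqrt{6}$ is the distance from $\mathbf{v}$ to the line $W$.
+\end{eg}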
+
+\subsection{Adjoints, orthogonal and unitary maps}
+\subsubsection*{Adjoints}
+\begin{lemma}
+ Let $V$ and $W$ be finite-dimensional inner product spaces and $\alpha: V \to W$ a linear map. Then there exists a unique linear map $\alpha^*: W \to V$ such that
+ \[
+ (\alpha \mathbf{v}, \mathbf{w}) = (\mathbf{v}, \alpha^* \mathbf{w})\tag{$*$}
+ \]
+ for all $\mathbf{v} \in V$, $\mathbf{w} \in W$.
+\end{lemma}
+
+\begin{proof}
+ There are two parts. We have to prove existence and uniqueness. We'll first prove it concretely using matrices, and then provide a conceptual reason of what this means.
+
+ Let $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ and $(\mathbf{w}_1, \cdots, \mathbf{w}_m)$ be orthonormal bases for $V$ and $W$ respectively. Suppose $\alpha$ is represented by $A$.
+
+ To show uniqueness, suppose $\alpha^*: W \to V$ satisfies $(\alpha \mathbf{v}, \mathbf{w}) = (\mathbf{v}, \alpha^* \mathbf{w})$ for all $\mathbf{v} \in V$, $\mathbf{w} \in W$, then for all $i, j$, by definition, we know
+ \begin{align*}
+ (\mathbf{v}_i, \alpha^*(\mathbf{w}_j)) &= (\alpha(\mathbf{v}_i), \mathbf{w}_j) \\
+ &= \left(\sum_k A_{ki} \mathbf{w}_k, \mathbf{w}_j\right)\\
+ &= \sum_k \bar{A}_{ki} (\mathbf{w}_k, \mathbf{w}_j) = \bar{A}_{ji}.
+ \end{align*}
+ So we get
+ \[
+ \alpha^*(\mathbf{w}_j) = \sum_i (\mathbf{v}_i, \alpha^*(\mathbf{w}_j)) \mathbf{v}_i = \sum_i \bar{A}_{ji} \mathbf{v}_i.
+ \]
+ Hence $\alpha^*$ must be represented by $A^\dagger$. So $\alpha^*$ is unique.
+
+ To show existence, all we have to do is to show $A^\dagger$ indeed works. Now let $\alpha^*$ be represented by $A^\dagger$. We can compute the two sides of $(*)$ for arbitrary $\mathbf{v}, \mathbf{w}$. We have
+ \begin{align*}
+ \left(\alpha\left(\sum \lambda_i \mathbf{v}_i\right), \sum \mu_j \mathbf{w}_j\right) &= \sum_{i, j} \bar{\lambda}_i \mu_j (\alpha(\mathbf{v}_i), \mathbf{w}_j)\\
+ &= \sum_{i, j} \bar{\lambda}_i \mu_j \left(\sum_k A_{ki} \mathbf{w}_k, \mathbf{w}_j\right)\\
+ &= \sum_{i, j} \bar{\lambda}_i \bar{A}_{ji} \mu_j.
+ \end{align*}
+ We can compute the other side and get
+ \begin{align*}
+ \left(\sum \lambda_i \mathbf{v}_i, \alpha^*\left(\sum \mu_j \mathbf{w}_j\right)\right) &= \sum_{i, j} \bar{\lambda}_i \mu_j \left(\mathbf{v}_i, \sum_k A^{\dagger}_{kj} \mathbf{v}_k\right)\\
+ &= \sum_{i, j} \bar{\lambda}_i \bar{A}_{ji} \mu_j.
+ \end{align*}
+ So done.
+\end{proof}
+What does this mean, conceptually? Note that the inner product on $V$ defines an isomorphism $V \to \bar{V}^*$ by $\mathbf{v} \mapsto (\ph,\mathbf{v})$. Similarly, we have an isomorphism $W \to \bar{W}^*$. We can then put them in the following diagram:
+\[
+ \begin{tikzcd}
+ V \ar[r, "\alpha"] \ar[d, "\cong"] & W \ar[d, "\cong"]\\
+ \bar{V}^* & \bar{W}^* \ar[l, dashed, "\alpha^*"]
+ \end{tikzcd}
+\]
+Then $\alpha^*$ is what fills in the dashed arrow. So $\alpha^*$ is in some sense the ``dual'' of the map $\alpha$.
+
+\begin{defi}[Adjoint]
+ We call the map $\alpha^*$ the \emph{adjoint} of $\alpha$.
+\end{defi}
+We have just seen that if $\alpha$ is represented by $A$ with respect to some orthonormal bases, then $\alpha^*$ is represented by $A^\dagger$.
+
+\begin{defi}[Self-adjoint]
+ Let $V$ be an inner product space, and $\alpha \in \End(V)$. Then $\alpha$ is \emph{self-adjoint} if $\alpha = \alpha^*$, i.e.
+ \[
+ (\alpha(\mathbf{v}), \mathbf{w}) = (\mathbf{v}, \alpha(\mathbf{w}))
+ \]
+ for all $\mathbf{v}, \mathbf{w}$.
+\end{defi}
+Thus if $V = \R^n$ with the usual inner product, then $A \in \Mat_n(\R)$ is self-adjoint if and only if it is symmetric, i.e.\ $A = A^T$. If $V = \C^n$ with the usual inner product, then $A \in \Mat_n(\C)$ is self-adjoint if and only if $A$ is Hermitian, i.e.\ $A = A^\dagger$.
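+For example:
+\begin{eg}
+ The matrix
+ \[
+ A =
+ \begin{pmatrix}
+ 2 & 1 + i\\
+ 1 - i & 3
+ \end{pmatrix}
+ \]
+ satisfies $A = A^\dagger$, so it is a self-adjoint endomorphism of $\C^2$. Its characteristic polynomial is $t^2 - 5t + 4 = (t - 1)(t - 4)$, so its eigenvalues are $1$ and $4$, both real.
+\end{eg}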
+
+Self-adjoint endomorphisms are important, as you may have noticed from IB Quantum Mechanics. We will later see that these have real eigenvalues with an orthonormal basis of eigenvectors.
+
+\subsubsection*{Orthogonal maps}
+Another important class of endomorphisms is those that preserve lengths. We will first do this for real vector spaces, since the real and complex versions have different names.
+\begin{defi}[Orthogonal endomorphism]
+ Let $V$ be a real inner product space. Then $\alpha \in \End(V)$ is \emph{orthogonal} if
+ \[
+ (\alpha(\mathbf{v}), \alpha(\mathbf{w})) = (\mathbf{v}, \mathbf{w})
+ \]
+ for all $\mathbf{v}, \mathbf{w} \in V$.
+\end{defi}
+By the polarization identity, $\alpha$ is orthogonal if and only if $\|\alpha(\mathbf{v})\| = \|\mathbf{v}\|$ for all $\mathbf{v} \in V$.
+
+A real square matrix (as an endomorphism of $\R^n$ with the usual inner product) is orthogonal if and only if its columns are an orthonormal set.
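+For example, rotations of the plane are orthogonal:
+\begin{eg}
+ For any $\theta$, the rotation matrix
+ \[
+ R_\theta =
+ \begin{pmatrix}
+ \cos\theta & -\sin\theta\\
+ \sin\theta & \cos\theta
+ \end{pmatrix}
+ \]
+ has orthonormal columns, and indeed $R_\theta^T R_\theta = I$. So rotations of $\R^2$ preserve the usual inner product, as we would expect geometrically.
+\end{eg}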
+
+There is also an alternative way of characterizing these orthogonal maps.
+
+\begin{lemma}
+ Let $V$ be a finite-dimensional real inner product space and $\alpha \in \End(V)$. Then $\alpha$ is orthogonal if and only if $\alpha$ is invertible and $\alpha^{-1} = \alpha^*$.
+\end{lemma}
+
+\begin{proof}
+ $(\Leftarrow)$ Suppose $\alpha^{-1} = \alpha^*$. Then for any $\mathbf{v} \in V$, we have
+ \[
+ (\alpha \mathbf{v}, \alpha \mathbf{v}) = (\mathbf{v}, \alpha^* \alpha \mathbf{v}) = (\mathbf{v}, \alpha^{-1} \alpha \mathbf{v}) = (\mathbf{v}, \mathbf{v}).
+ \]
+ So $\|\alpha \mathbf{v}\| = \|\mathbf{v}\|$ for all $\mathbf{v}$, and hence $\alpha$ is orthogonal.
+ $(\Rightarrow)$ If $\alpha$ is orthogonal and $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ is an orthonormal basis for $V$, then for $1 \leq i, j \leq n$, we have
+ \[
+ \delta_{ij} = (\mathbf{v}_i, \mathbf{v}_j) = (\alpha \mathbf{v}_i, \alpha \mathbf{v}_j) = (\mathbf{v}_i, \alpha^* \alpha \mathbf{v}_j).
+ \]
+ So we know
+ \[
+ \alpha^* \alpha (\mathbf{v}_j) = \sum_{i = 1}^n (\mathbf{v}_i, \alpha^* \alpha \mathbf{v}_j) \mathbf{v}_i = \mathbf{v}_j.
+ \]
+ So by linearity of $\alpha^* \alpha$, we know $\alpha^* \alpha = \id_V$. So $\alpha^* = \alpha^{-1}$.
+\end{proof}
+
+\begin{cor}
+ $\alpha \in \End(V)$ is orthogonal if and only if $\alpha$ is represented by an orthogonal matrix, i.e.\ a matrix $A$ such that $A^T A = AA^T = I$, with respect to any orthonormal basis.
+\end{cor}
+
+\begin{proof}
+ Let $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$ be an orthonormal basis for $V$, and suppose $\alpha$ is represented by $A$. Then $\alpha^*$ is represented by $A^T$. So $\alpha^{-1} = \alpha^*$ if and only if $AA^T = A^T A = I$.
+\end{proof}
+
+\begin{defi}[Orthogonal group]
+ Let $V$ be a real inner product space. Then the \emph{orthogonal group} of $V$ is
+ \[
+ \Or(V) = \{\alpha \in \End(V): \alpha\text{ is orthogonal}\}.
+ \]
+\end{defi}
+It follows from the fact that $\alpha^* = \alpha^{-1}$ that $\alpha$ is invertible, and it is clear from definition that $\Or(V)$ is closed under multiplication and inverses. So this is indeed a group.
+
+\begin{prop}
+ Let $V$ be a finite-dimensional real inner product space with an orthonormal basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$. Then there is a bijection
+ \begin{align*}
+ \Or(V) &\to \{\text{orthonormal bases for }V\}\\
+ \alpha &\mapsto (\alpha(\mathbf{e}_1), \cdots, \alpha(\mathbf{e}_n)).
+ \end{align*}
+\end{prop}
+This is analogous to our result for general vector spaces and general bases, where we replace $\Or(V)$ with $\GL(V)$.
+
+\begin{proof}
+ Same as the case for general vector spaces and general bases.
+\end{proof}
+
+\subsubsection*{Unitary maps}
+We are going to study the complex version of orthogonal maps, known as \emph{unitary maps}. The proofs are almost always identical to the real case, and we will not write the proofs again.
+
+\begin{defi}[Unitary map]
+ Let $V$ be a finite-dimensional complex inner product space. Then $\alpha \in \End(V)$ is \emph{unitary} if
+ \[
+ (\alpha(\mathbf{v}), \alpha(\mathbf{w})) = (\mathbf{v}, \mathbf{w})
+ \]
+ for all $\mathbf{v}, \mathbf{w} \in V$.
+\end{defi}
+By the polarization identity, $\alpha$ is unitary if and only if $\|\alpha(\mathbf{v})\| = \|\mathbf{v}\|$ for all $\mathbf{v} \in V$.
+
+\begin{lemma}
+ Let $V$ be a finite-dimensional complex inner product space and $\alpha \in \End(V)$. Then $\alpha$ is unitary if and only if $\alpha$ is invertible and $\alpha^* = \alpha^{-1}$.
+\end{lemma}
+
+\begin{cor}
+ $\alpha \in \End(V)$ is unitary if and only if $\alpha$ is represented by a unitary matrix $A$ with respect to any orthonormal basis, i.e.\ $A^{-1} = A^\dagger$.
+\end{cor}
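+For example:
+\begin{eg}
+ The matrix
+ \[
+ A = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1 & i\\
+ i & 1
+ \end{pmatrix}
+ \]
+ satisfies
+ \[
+ A^\dagger A = \frac{1}{2}
+ \begin{pmatrix}
+ 1 & -i\\
+ -i & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & i\\
+ i & 1
+ \end{pmatrix} = I.
+ \]
+ So $A$ is unitary, and hence gives a unitary endomorphism of $\C^2$ with the usual inner product.
+\end{eg}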
+
+\begin{defi}[Unitary group]
+ Let $V$ be a finite-dimensional complex inner product space. Then the \emph{unitary group} of $V$ is
+ \[
+ U(V) = \{\alpha \in \End(V): \alpha\text{ is unitary}\}.
+ \]
+\end{defi}
+
+\begin{prop}
+ Let $V$ be a finite-dimensional complex inner product space with an orthonormal basis $(\mathbf{e}_1, \cdots, \mathbf{e}_n)$. Then there is a bijection
+ \begin{align*}
+ U(V) &\to \{\text{orthonormal bases of } V\}\\
+ \alpha &\mapsto (\alpha(\mathbf{e}_1), \cdots, \alpha (\mathbf{e}_n)).
+ \end{align*}
+\end{prop}
+
+\subsection{Spectral theory}
+We are going to classify matrices in inner product spaces. Recall that for general vector spaces, what we effectively did was to find the orbits of the conjugation action of $\GL(V)$ on $\Mat_n(\F)$. If we have inner product spaces, we will want to look at the action of $\Or(V)$ or $U(V)$ on $\Mat_n(\F)$. In a more human language, instead of allowing arbitrary basis transformations, we only allow transforming between orthonormal bases.
+
+We are not going to classify all endomorphisms, but just self-adjoint and orthogonal/unitary ones.
+
+\begin{lemma}
+ Let $V$ be a finite-dimensional inner product space, and $\alpha \in \End(V)$ self-adjoint. Then
+ \begin{enumerate}
+ \item $\alpha$ has a real eigenvalue, and all eigenvalues of $\alpha$ are real.
+ \item Eigenvectors of $\alpha$ with distinct eigenvalues are orthogonal.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}
+ We are going to do real and complex cases separately.
+ \begin{enumerate}
+ \item Suppose first $V$ is a complex inner product space. Then by the fundamental theorem of algebra, $\alpha$ has an eigenvalue, say $\lambda$. We pick $\mathbf{v} \in V\setminus \{0\}$ such that $\alpha \mathbf{v} = \lambda \mathbf{v}$. Then
+ \[
+ \bar{\lambda}(\mathbf{v}, \mathbf{v}) = (\lambda \mathbf{v}, \mathbf{v}) = (\alpha \mathbf{v}, \mathbf{v}) = (\mathbf{v}, \alpha \mathbf{v}) = (\mathbf{v}, \lambda \mathbf{v}) = \lambda (\mathbf{v}, \mathbf{v}).
+ \]
+ Since $\mathbf{v}\not= \mathbf{0}$, we know $(\mathbf{v}, \mathbf{v})\not= 0$. So $\lambda = \bar{\lambda}$.
+
+ For the real case, we pretend we are in the complex case. Let $\mathbf{e}_1, \cdots, \mathbf{e}_n$ be an orthonormal basis for $V$. Then $\alpha$ is represented by a symmetric matrix $A$ (with respect to this basis). Since real symmetric matrices are Hermitian viewed as complex matrices, this gives a self-adjoint endomorphism of $\C^n$. By the complex case, $A$ has real eigenvalues only. But the eigenvalues of $A$ are the eigenvalues of $\alpha$ and $M_A(t) = M_\alpha(t)$. So done.
+
+ Alternatively, we can prove this without reducing to the complex case. We know every irreducible factor of $M_\alpha(t)$ in $\R[t]$ must have degree $1$ or $2$, since the roots are either real or come in complex conjugate pairs. Suppose $f(t)$ were an irreducible factor of degree $2$. Then
+ \[
+ \left(\frac{M_\alpha}{f}\right)(\alpha) \not= 0
+ \]
+ since $\frac{M_\alpha}{f}$ has degree less than the minimal polynomial. So there is some $\mathbf{w} \in V$ such that
+ \[
+ \mathbf{v} = \left(\frac{M_\alpha}{f}\right)(\alpha)(\mathbf{w}) \not= \mathbf{0}.
+ \]
+ Then $f(\alpha)(\mathbf{v}) = M_\alpha(\alpha)(\mathbf{w}) = \mathbf{0}$. Let $U = \bra \mathbf{v}, \alpha (\mathbf{v})\ket$. Then this is an $\alpha$-invariant subspace of $V$, since $f(\alpha)(\mathbf{v}) = \mathbf{0}$ and $f$ has degree $2$.
+
+ Now $\alpha|_U \in \End(U)$ is self-adjoint. So if $(\mathbf{e}_1, \mathbf{e}_2)$ is an orthonormal basis of $U$, then $\alpha|_U$ is represented by a real symmetric matrix, say
+ \[
+ \begin{pmatrix}
+ a & b\\
+ b & c
+ \end{pmatrix}
+ \]
+ But then $\chi_{\alpha|_U}(t) = (t - a)(t - c) - b^2$, which has real roots, since its discriminant is $(a - c)^2 + 4b^2 \geq 0$. This is a contradiction, since $M_{\alpha|_U} = f$, but $f$ is irreducible.
+ \item Now suppose $\alpha \mathbf{v} = \lambda \mathbf{v}$, $\alpha \mathbf{w} = \mu \mathbf{w}$ and $\lambda \not= \mu$. We need to show $(\mathbf{v}, \mathbf{w}) = 0$. We know
+ \[
+ (\alpha \mathbf{v}, \mathbf{w}) = (\mathbf{v}, \alpha \mathbf{w})
+ \]
+ by definition. This then gives
+ \[
+ \lambda (\mathbf{v}, \mathbf{w}) = \mu (\mathbf{v}, \mathbf{w})
+ \]
+ Since $\lambda \not= \mu$, we must have $(\mathbf{v}, \mathbf{w}) = 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{thm}
+ Let $V$ be a finite-dimensional inner product space, and $\alpha \in \End(V)$ self-adjoint. Then $V$ has an orthonormal basis of eigenvectors of $\alpha$.
+\end{thm}
+
+\begin{proof}
+ By the previous lemma, $\alpha$ has a real eigenvalue, say $\lambda$. Then we can find an eigenvector $\mathbf{v} \in V\setminus \{0\}$ such that $\alpha \mathbf{v} = \lambda \mathbf{v}$.
+
+ Let $U = \bra \mathbf{v}\ket ^\perp$. Then we can write
+ \[
+ V = \bra \mathbf{v}\ket \perp U.
+ \]
+ We now want to prove $\alpha$ sends $U$ into $U$. Suppose $\mathbf{u} \in U$. Then
+ \[
+ (\mathbf{v}, \alpha (\mathbf{u})) = (\alpha \mathbf{v}, \mathbf{u}) = \lambda (\mathbf{v}, \mathbf{u}) = 0.
+ \]
+ So $\alpha (\mathbf{u}) \in \bra \mathbf{v}\ket^\perp = U$. So $\alpha|_U \in \End(U)$ and is self-adjoint.
+
+ By induction on $\dim V$, $U$ has an orthonormal basis $(\mathbf{v}_2, \cdots, \mathbf{v}_n)$ consisting of eigenvectors of $\alpha$. Now let
+ \[
+ \mathbf{v}_1 = \frac{\mathbf{v}}{\|\mathbf{v}\|}.
+ \]
+ Then $(\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_n)$ is an orthonormal basis of eigenvectors for $\alpha$.
+\end{proof}
+
+\begin{cor}
+ Let $V$ be a finite-dimensional vector space and $\alpha$ self-adjoint. Then $V$ is the orthogonal (internal) direct sum of its $\alpha$-eigenspaces.
+\end{cor}
+
+\begin{cor}
+ Let $A \in \Mat_n(\R)$ be symmetric. Then there exists an orthogonal matrix $P$ such that $P^TAP = P^{-1}AP$ is diagonal.
+\end{cor}
+
+\begin{proof}
+ Let $(\ph, \ph)$ be the standard inner product on $\R^n$. Then $A$ is self-adjoint as an endomorphism of $\R^n$. So $\R^n$ has an orthonormal basis of eigenvectors for $A$, say $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$. Taking $P = (\mathbf{v}_1\; \mathbf{v}_2\; \cdots \; \mathbf{v}_n)$ gives the result.
+\end{proof}
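As a numerical sanity check (not part of the notes), numpy's `eigh` computes exactly such an orthonormal eigenbasis of a symmetric matrix; the matrix `A` below is an arbitrary illustrative choice.

```python
import numpy as np

# An arbitrary symmetric matrix (illustrative, not from the text).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh returns the eigenvalues and an orthonormal basis of eigenvectors
# of a symmetric (or Hermitian) matrix; the eigenvectors are the columns of P.
eigenvalues, P = np.linalg.eigh(A)

# P is orthogonal, so P^T A P = P^{-1} A P, and it is diagonal.
assert np.allclose(P.T @ P, np.eye(3))
assert np.allclose(P.T @ A @ P, np.diag(eigenvalues))
```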
+
+\begin{cor}
+ Let $V$ be a finite-dimensional real inner product space and $\psi: V\times V \to \R$ a symmetric bilinear form. Then there exists an orthonormal basis $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ for $V$ with respect to which $\psi$ is represented by a diagonal matrix.
+\end{cor}
+
+\begin{proof}
+ Let $(\mathbf{u}_1, \cdots, \mathbf{u}_n)$ be any orthonormal basis for $V$. Then $\psi$ is represented by a symmetric matrix $A$. Then there exists an orthogonal matrix $P$ such that $P^T AP$ is diagonal. Now let $\mathbf{v}_i = \sum P_{ki} \mathbf{u}_k$. Then $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ is an orthonormal basis since
+ \begin{align*}
+ (\mathbf{v}_i, \mathbf{v}_j) &= \left(\sum P_{ki} \mathbf{u}_k, \sum P_{\ell j} \mathbf{u}_\ell\right) \\
+ &= \sum P_{ik}^T (\mathbf{u}_k, \mathbf{u}_\ell) P_{\ell j}\\
+ &= [P^T P]_{ij}\\
+ &= \delta_{ij}.
+ \end{align*}
+ Also, $\psi$ is represented by $P^T AP$ with respect to $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$.
+\end{proof}
+Note that the diagonal values of $P^T AP$ are just the eigenvalues of $A$. So the signature of $\psi$ is just the number of positive eigenvalues of $A$ minus the number of negative eigenvalues of $A$.
+
+\begin{cor}
+ Let $V$ be a finite-dimensional real vector space and $\phi, \psi$ symmetric bilinear forms on $V$ such that $\phi$ is positive-definite. Then we can find a basis $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ for $V$ such that both $\phi$ and $\psi$ are represented by diagonal matrices with respect to this basis.
+\end{cor}
+
+\begin{proof}
+ We use $\phi$ to define an inner product. Choose an orthonormal basis $(\mathbf{v}_1, \cdots, \mathbf{v}_n)$ for $V$ (equipped with $\phi$) with respect to which $\psi$ is diagonal. Then $\phi$ is represented by $I$ with respect to this basis, since $\phi(\mathbf{v}_i, \mathbf{v}_j) = \delta_{ij}$. So done.
+\end{proof}
+
+\begin{cor}
+ If $A, B \in \Mat_n(\R)$ are symmetric and $A$ is positive definite (i.e.\ $\mathbf{v}^T A \mathbf{v} > 0$ for all $\mathbf{v} \in \R^n \setminus \{0\}$), then there exists an invertible matrix $Q$ such that $Q^T AQ$ and $Q^T BQ$ are both diagonal.
+\end{cor}
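A sketch of how one might realize this corollary numerically, following the proof idea: first use an eigenbasis of $A$ to turn $A$ into the identity, then orthogonally diagonalize what $B$ has become. The matrices here are hypothetical examples, with $A$ positive definite.

```python
import numpy as np

# Hypothetical symmetric matrices; A is positive definite.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 2.0], [2.0, -1.0]])

# Step 1: an orthonormal eigenbasis of A, rescaled so that A becomes I.
lam, R = np.linalg.eigh(A)            # A = R diag(lam) R^T with lam > 0
Q1 = R @ np.diag(1.0 / np.sqrt(lam))  # Q1^T A Q1 = I

# Step 2: Q1^T B Q1 is still symmetric; diagonalize it with an orthogonal S.
mu, S = np.linalg.eigh(Q1.T @ B @ Q1)

Q = Q1 @ S  # invertible, and diagonalizes both forms simultaneously
assert np.allclose(Q.T @ A @ Q, np.eye(2))
assert np.allclose(Q.T @ B @ Q, np.diag(mu))
```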
+
+We can deduce similar results for complex finite-dimensional vector spaces, with the same proofs. In particular,
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If $A \in \Mat_n(\C)$ is Hermitian, then there exists a unitary matrix $U \in \Mat_n(\C)$ such that
+ \[
+ U^{-1}AU = U^\dagger AU
+ \]
+ is diagonal.
+ \item If $\psi$ is a Hermitian form on a finite-dimensional complex inner product space $V$, then there is an orthonormal basis for $V$ diagonalizing $\psi$.
+ \item If $\phi, \psi$ are Hermitian forms on a finite-dimensional complex vector space and $\phi$ is positive definite, then there exists a basis for which $\phi$ and $\psi$ are diagonalized.
+ \item Let $A, B \in \Mat_n(\C)$ be Hermitian, and $A$ positive definite (i.e.\ $\mathbf{v}^\dagger A \mathbf{v} > 0$ for all $\mathbf{v} \in \C^n \setminus \{0\}$). Then there exists some invertible $Q$ such that $Q^\dagger AQ$ and $Q^\dagger BQ$ are diagonal.
+ \end{enumerate}
+\end{prop}
+
+That's all for self-adjoint matrices. How about unitary matrices?
+\begin{thm}
+ Let $V$ be a finite-dimensional complex vector space and $\alpha \in U(V)$ be unitary. Then $V$ has an orthonormal basis of $\alpha$ eigenvectors.
+\end{thm}
+
+\begin{proof}
+ By the fundamental theorem of algebra, there exists $\mathbf{v} \in V\setminus \{0\}$ and $\lambda \in \C$ such that $\alpha \mathbf{v} = \lambda \mathbf{v}$. Now consider $W = \bra \mathbf{v}\ket^\perp$. Then
+ \[
+ V = W \perp \bra \mathbf{v}\ket.
+ \]
+ We want to show $\alpha$ restricts to a (unitary) endomorphism of $W$. Let $\mathbf{w} \in W$. We need to show $\alpha(\mathbf{w})$ is orthogonal to $\mathbf{v}$. We have
+ \[
+ (\alpha \mathbf{w}, \mathbf{v}) = (\mathbf{w}, \alpha^{-1}\mathbf{v}) = (\mathbf{w}, \lambda^{-1} \mathbf{v}) = 0.
+ \]
+ So $\alpha(\mathbf{w}) \in W$ and $\alpha|_W \in \End(W)$. Also, $\alpha|_W$ is unitary since $\alpha$ is. So by induction on $\dim V$, $W$ has an orthonormal basis of $\alpha$ eigenvectors. If we add $\mathbf{v}/\|\mathbf{v}\|$ to this basis, we get an orthonormal basis of $V$ itself consisting of $\alpha$ eigenvectors.
+\end{proof}
+This theorem and the analogous one for self-adjoint endomorphisms have a common generalization, at least for complex inner product spaces. The key fact that leads to the existence of an orthonormal basis of eigenvectors is that $\alpha$ and $\alpha^*$ commute. This is clearly a necessary condition, since if $\alpha$ is diagonalizable, then $\alpha^*$ is diagonal in the same basis (since it is just the transpose (and conjugate)), and hence they commute. It turns out this is also a sufficient condition, as you will show in example sheet 4.
+
+However, we cannot generalize this in the real orthogonal case. For example,
+\[
+ \begin{pmatrix}
+ \cos \theta & \sin \theta\\
+ - \sin \theta & \cos \theta
+ \end{pmatrix} \in O(\R^2)
+\]
+cannot be diagonalized (if $\theta \not\in \pi \Z$). However, in example sheet 4, you will find a classification of $O(V)$, and you will see that the above counterexample is the worst that can happen in some sense.
+\end{document}
diff --git a/books/cam/IB_M/markov_chains.tex b/books/cam/IB_M/markov_chains.tex
new file mode 100644
index 0000000000000000000000000000000000000000..91dd76d8ae75134f878de90adca3e270a6d13590
--- /dev/null
+++ b/books/cam/IB_M/markov_chains.tex
@@ -0,0 +1,1665 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Michaelmas}
+\def\nyear {2015}
+\def\nlecturer {G.\ R.\ Grimmett}
+\def\ncourse {Markov Chains}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Discrete-time chains}\\
+Definition and basic properties, the transition matrix. Calculation of $n$-step transition probabilities. Communicating classes, closed classes, absorption, irreducibility. Calculation of hitting probabilities and mean hitting times; survival probability for birth and death chains. Stopping times and statement of the strong Markov property.\hspace*{\fill} [5]
+
+\vspace{5pt}
+\noindent Recurrence and transience; equivalence of transience and summability of $n$-step transition probabilities; equivalence of recurrence and certainty of return. Recurrence as a class property, relation with closed classes. Simple random walks in dimensions one, two and three.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Invariant distributions, statement of existence and uniqueness up to constant multiples. Mean return time, positive recurrence; equivalence of positive recurrence and the existence of an invariant distribution. Convergence to equilibrium for irreducible, positive recurrent, aperiodic chains *and proof by coupling*. Long-run proportion of time spent in a given state.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Time reversal, detailed balance, reversibility, random walk on a graph.\hspace*{\fill} [1]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+So far, in IA Probability, we have always dealt with a single random variable, or several independent ones, and we knew how to handle them. However, in real life, variables often \emph{are} dependent, and things become much more difficult.
+
+In general, there are many ways in which variables can be dependent. Their dependence can be very complicated, or very simple. If we are just told two variables are dependent, we have no idea what we can do with them.
+
+This is similar to our study of functions. We can develop theories about continuous functions, increasing functions, or differentiable functions, but if we are just given a random function without assuming anything about it, there really isn't much we can do.
+
+Hence, in this course, we are just going to study a particular kind of dependent variables, known as \emph{Markov chains}. In fact, in IA Probability, we have already encountered some of these. One prominent example is the random walk, in which the next position depends on the previous position. This gives us some dependent random variables, but they are dependent in a very simple way.
+
+In reality, a random walk is too simple a model to describe the world. We need something more general, and these are Markov chains. These, by definition, are sequences of random variables that satisfy the \emph{Markov assumption}. This assumption, intuitively, says that the future depends only upon the current state, and not on how we got to the current state. It turns out that just given this assumption, we can prove a lot about these chains.
+
+\section{Markov chains}
+\subsection{The Markov property}
+We start with the definition of a Markov chain.
+\begin{defi}[Markov chain]
+ Let $X = (X_0, X_1, \cdots)$ be a sequence of random variables taking values in some set $S$, the \emph{state space}. We assume that $S$ is countable (which could be finite).
+
+ We say $X$ has the \emph{Markov property} if for all $n\geq 0, i_0,\cdots, i_{n + 1}\in S$, we have
+ \[
+ \P(X_{n + 1} = i_{n + 1} \mid X_0 = i_0, \cdots, X_n = i_n) = \P(X_{n + 1} = i_{n + 1} \mid X_n = i_n).
+ \]
+ If $X$ has the Markov property, we call it a \emph{Markov chain}.
+
+ We say that a Markov chain $X$ is \emph{homogeneous} if the conditional probabilities $\P(X_{n + 1} = j \mid X_n = i)$ do not depend on $n$.
+\end{defi}
+All our chains $X$ will be Markov and homogeneous unless otherwise specified.
+
+Since the state space $S$ is countable, we usually label the states by integers $i \in \N$.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item A random walk is a Markov chain.
+ \item The branching process is a Markov chain.
+ \end{enumerate}
+\end{eg}
+
+In general, to fully specify a (homogeneous) Markov chain, we will need two items:
+\begin{enumerate}
+ \item The initial distribution $\lambda_i = \P(X_0 = i)$. We can write this as a vector $\lambda = (\lambda_i: i \in S)$.
+ \item The transition probabilities $p_{i, j} = \P(X_{n + 1} = j \mid X_n = i)$. We can write this as a matrix $P = (p_{i, j})_{i, j\in S}$.
+\end{enumerate}
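To make the pair $(\lambda, P)$ concrete, here is a minimal simulation sketch in Python (the two-state chain and the seed are arbitrary choices, not from the notes): draw $X_0$ from $\lambda$, then draw each next state from the row of $P$ indexed by the current state.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, arbitrary

# A hypothetical two-state chain: initial distribution lam and a
# transition matrix P whose rows sum to 1.
lam = np.array([0.5, 0.5])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def simulate(lam, P, n_steps):
    """Sample X_0, ..., X_{n_steps}: draw X_0 from lam, then each X_{k+1}
    from row X_k of P -- the Markov property in action."""
    path = [int(rng.choice(len(lam), p=lam))]
    for _ in range(n_steps):
        path.append(int(rng.choice(P.shape[1], p=P[path[-1]])))
    return path

path = simulate(lam, P, 20)
print(path)  # a list of 21 states, each 0 or 1
```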
+
+We will start by proving a few properties of $\lambda$ and $P$. These let us know whether an arbitrary pair $(\lambda, P)$ of a vector and a matrix actually specifies a Markov chain.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\lambda$ is a \emph{distribution}, i.e.\ $\lambda_i \geq 0, \sum_i \lambda_i = 1$.
+ \item $P$ is a \emph{stochastic matrix}, i.e.\ $p_{i, j} \geq 0$ and $\sum_j p_{i, j} = 1$ for all $i$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Obvious since $\lambda$ is a probability distribution.
+ \item $p_{i, j} \geq 0$ since $p_{ij}$ is a probability. We also have
+ \[
+ \sum_j p_{i,j} = \sum_j \P(X_{1} = j \mid X_0 = i) = 1
+ \]
+ since $\P(X_1 = \ph \mid X_0 = i)$ is a probability distribution function.\qedhere
+ \end{enumerate}
+\end{proof}
+Note that we only require each row sum to be $1$; the column sums need not be.
+
+We will prove another seemingly obvious fact.
+\begin{thm}
+ Let $\lambda$ be a distribution (on $S$) and $P$ a stochastic matrix. The sequence $X = (X_0, X_1, \cdots)$ is a Markov chain with initial distribution $\lambda$ and transition matrix $P$ iff
+ \[
+ \P(X_0 = i_0, X_1 = i_1, \cdots, X_n = i_n) = \lambda_{i_0}p_{i_0, i_1}p_{i_1, i_2}\cdots p_{i_{n - 1}, i_n}\tag{$*$}
+ \]
+ for all $n, i_0, \cdots, i_n$.
+\end{thm}
+\begin{proof}
+ Let $A_k$ be the event $X_k = i_k$. Then we can write $(*)$ as
+ \[
+ \P(A_0\cap A_1\cap\cdots \cap A_n) = \lambda_{i_0}p_{i_0, i_1}p_{i_1, i_2}\cdots p_{i_{n - 1}, i_n}. \tag{$*$}
+ \]
+ We first assume that $X$ is a Markov chain. We prove $(*)$ by induction on $n$.
+
+ When $n = 0$, $(*)$ says $\P(A_0) = \lambda_{i_0}$. This is true by definition of $\lambda$.
+
+ Assume that it is true for all $n < N$. Then
+ \begin{align*}
+ \P(A_0 \cap A_1 \cap \cdots \cap A_N) &= \P(A_0,\cdots, A_{N - 1})\P(A_0, \cdots, A_N \mid A_0, \cdots, A_{N - 1})\\
+ &= \lambda_{i_0} p_{i_0, i_1}\cdots p_{i_{N - 2}, i_{N - 1}} \P(A_{N} \mid A_0,\cdots, A_{N - 1})\\
+ &= \lambda_{i_0} p_{i_0, i_1}\cdots p_{i_{N - 2}, i_{N - 1}} \P(A_{N} \mid A_{N - 1})\\
+ &= \lambda_{i_0} p_{i_0, i_1}\cdots p_{i_{N - 2}, i_{N - 1}} p_{i_{N - 1}, i_N}.
+ \end{align*}
+ So it is true for $N$ as well. Hence we are done by induction.
+
+ Conversely, suppose that ($*$) holds. Then for $n = 0$, we have $\P(X_0 = i_0) = \lambda_{i_0}$. Otherwise,
+ \begin{align*}
+ \P(X_n = i_n\mid X_0 = i_0, \cdots, X_{n - 1} = i_{n - 1}) &= \P(A_n \mid A_0 \cap \cdots\cap A_{n - 1})\\
+ &= \frac{\P(A_0\cap \cdots \cap A_n)}{\P(A_0\cap \cdots \cap A_{n - 1})}\\
+ &= p_{i_{n - 1}, i_n},
+ \end{align*}
+ which is independent of $i_0, \cdots, i_{n - 2}$. So this is Markov.
+\end{proof}
+
+Often, we do not use the Markov property directly. Instead, we use the following:
+\begin{thm}[Extended Markov property]
+ Let $X$ be a Markov chain. For $n \geq 0$, any $H$ given in terms of the past $\{X_i: i < n\}$, and any $F$ given in terms of the future $\{X_i: i > n\}$, we have
+ \[
+ \P(F\mid X_n = i, H) = \P(F\mid X_n = i).
+ \]
+\end{thm}
+To prove this, we need to stitch together many instances of the Markov property. The actual proof is omitted.
+
+\subsection{Transition probability}
+Recall that we can specify the dynamics of a Markov chain by the \emph{one-step} transition probability,
+\[
+ p_{i, j} = \P(X_{n + 1} = j\mid X_n = i).
+\]
+However, we don't always want to take 1 step. We might want to take 2 steps, 3 steps, or, in general, $n$ steps. Hence, we define
+\begin{defi}[$n$-step transition probability]
+ The $n$-step transition probability from $i$ to $j$ is
+ \[
+ p_{i, j}(n) = \P(X_n = j\mid X_0 = i).
+ \]
+\end{defi}
+How do we compute these probabilities? The idea is to break this down into smaller parts. We want to express $p_{i, j}(m + n)$ in terms of $n$-step and $m$-step transition probabilities. Then we can iteratively break down an arbitrary $p_{i, j}(n)$ into expressions involving the one-step transition probabilities only.
+
+To compute $p_{i, j}(m + n)$, we can think of this as a two-step process. We first go from $i$ to some unknown point $k$ in $m$ steps, and then travel from $k$ to $j$ in $n$ more steps. To find the probability to get from $i$ to $j$, we consider all possible routes from $i$ to $j$, and sum up all the probability of the paths. We have
+\begin{align*}
+ p_{i, j}(m + n) &= \P(X_{m + n} = j\mid X_0 = i)\\
+ &= \sum_k \P(X_{m + n} = j\mid X_m = k, X_0 = i)\P(X_m = k\mid X_0 = i)\\
+ &= \sum_k \P(X_{m + n} = j\mid X_m = k)\P(X_m = k \mid X_0 = i)\\
+ &= \sum_k p_{i, k}(m)p_{k, j}(n).
+\end{align*}
+Thus we get
+\begin{thm}[Chapman-Kolmogorov equation]
+ \[
+ p_{i, j}(m + n) = \sum_{k\in S} p_{i, k}(m) p_{k, j}(n).
+ \]
+\end{thm}
+This formula is suspiciously familiar. It is just matrix multiplication!
+
+\begin{notation}
+ Write $P(m) = (p_{i, j}(m))_{i, j\in S}$.
+\end{notation}
+Then we have
+\[
+ P(m + n) = P(m)P(n).
+\]
+In particular, we have
+\[
+ P(n) = P(1)P(n - 1) = \cdots = P(1)^n = P^n.
+\]
+This allows us to easily compute the $n$-step transition probability by matrix multiplication.
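A quick numerical check of the Chapman-Kolmogorov identity $P(m + n) = P(m)P(n)$, using a hypothetical two-state stochastic matrix:

```python
import numpy as np

# A hypothetical stochastic matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Chapman-Kolmogorov: P(m + n) = P(m) P(n), i.e. P(n) = P^n.
P5 = np.linalg.matrix_power(P, 5)
P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)
assert np.allclose(P5, P2 @ P3)

# Each P^n is again a stochastic matrix: its rows still sum to 1.
assert np.allclose(P5.sum(axis=1), 1.0)
```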
+
+\begin{eg}
+ Let $S = \{1, 2\}$, with
+ \[
+ P =
+ \begin{pmatrix}
+ 1 - \alpha & \alpha\\
+ \beta & 1 - \beta
+ \end{pmatrix}
+ \]
+ We assume $0 < \alpha, \beta < 1$. We want to find the $n$-step transition probability.
+
+ We can achieve this via diagonalization. We can write $P$ as
+ \[
+ P = U^{-1}
+ \begin{pmatrix}
+ \kappa_1 & 0\\
+ 0 & \kappa_2
+ \end{pmatrix}U,
+ \]
+ where the $\kappa_i$ are eigenvalues of $P$, and $U$ is composed of the eigenvectors.
+
+ To find the eigenvalues, we calculate
+ \[
+ \det (P - \lambda I) = (1 - \alpha - \lambda)(1 - \beta - \lambda) - \alpha\beta = 0.
+ \]
+ We solve this to obtain
+ \[
+ \kappa_1 = 1,\quad \kappa_2 = 1 - \alpha - \beta.
+ \]
+ Usually, the next thing to do would be to find the eigenvectors to obtain $U$. However, here we can cheat a bit and not do that. Using the diagonalization of $P$, we have
+ \[
+ P^n = U^{-1}
+ \begin{pmatrix}
+ \kappa_1^n & 0\\
+ 0 & \kappa_2^n
+ \end{pmatrix}U.
+ \]
+ We can now attempt to compute $p_{1, 2}(n)$. We know that it must be of the form
+ \[
+ p_{1, 2}(n) = A\kappa_1^n + B\kappa_2^n = A + B(1 - \alpha - \beta)^n
+ \]
+ where $A$ and $B$ are constants coming from $U$ and $U^{-1}$. However, we know that
+ \[
+ p_{1, 2}(0) = 0,\quad p_{1, 2}(1) = \alpha.
+ \]
+ So we obtain
+ \begin{align*}
+ A + B &= 0\\
+ A + B(1 - \alpha - \beta) &= \alpha.
+ \end{align*}
+ This is something we can solve, and obtain
+ \[
+ p_{1, 2}(n) = \frac{\alpha}{\alpha + \beta}(1 - (1 - \alpha - \beta)^n) = 1 - p_{1, 1}(n).
+ \]
+ How about $p_{2, 1}$ and $p_{2, 2}$? Well, we don't need any additional work. We can obtain these simply by interchanging $\alpha$ and $\beta$. So we obtain
+ \[
+ P^n = \frac{1}{\alpha + \beta}
+ \begin{pmatrix}
+ \beta + \alpha(1 - \alpha - \beta)^n & \alpha - \alpha(1 - \alpha - \beta)^n \\
+ \beta - \beta(1 - \beta - \alpha)^n & \alpha + \beta(1 - \beta - \alpha)^n \\
+ \end{pmatrix}
+ \]
+ What happens as $n\to \infty$? We can take the limit and obtain
+ \[
+ P^n \to \frac{1}{\alpha + \beta}
+ \begin{pmatrix}
+ \beta & \alpha\\
+ \beta & \alpha
+ \end{pmatrix}
+ \]
+ We see that the two rows are the same. This means that as time goes on, where we end up does not depend on where we started. We will later (near the end of the course) see that this is generally true for most Markov chains.
+
+ Alternatively, we can solve this by a difference equation. The recurrence relation is given by
+ \[
+ p_{1, 1}(n + 1) = p_{1, 1}(n)(1 - \alpha) + p_{1, 2}(n)\beta.
+ \]
+ Writing in terms of $p_{1, 1}$ only, we have
+ \[
+ p_{1, 1}(n + 1) = p_{1, 1}(n)(1 - \alpha) + (1 - p_{1, 1}(n))\beta.
+ \]
+ We can solve this as we have done in IA Differential Equations.
+\end{eg}
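We can check the closed form against direct matrix powers numerically (the particular values of $\alpha$ and $\beta$ are arbitrary):

```python
import numpy as np

alpha, beta = 0.3, 0.5   # arbitrary values in (0, 1)
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

# Compare the closed form for p_{1,2}(n) with (P^n)_{1,2}
# (0-indexed: state 1 is row 0, state 2 is column 1).
for n in range(12):
    Pn = np.linalg.matrix_power(P, n)
    closed = (alpha / (alpha + beta)) * (1 - (1 - alpha - beta) ** n)
    assert np.isclose(Pn[0, 1], closed)

# As n grows, both rows converge to the same limit (beta, alpha)/(alpha + beta).
P_inf = np.linalg.matrix_power(P, 200)
assert np.allclose(P_inf, np.array([[beta, alpha], [beta, alpha]]) / (alpha + beta))
```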
+We saw that the Chapman-Kolmogorov equation can be concisely stated as a rule about matrix multiplication. In general, many statements about Markov chains can be formulated in the language of linear algebra naturally.
+
+For example, let $X_0$ have distribution $\lambda$. What is the distribution of $X_1$? By definition, it is
+\[
+ \P(X_1 = j) = \sum_i \P(X_1 = j\mid X_0 = i)\P(X_0 = i) = \sum_i \lambda_i p_{i, j}.
+\]
+Hence this has a distribution $\lambda P$, where $\lambda$ is treated as a row vector. Similarly, $X_n$ has the distribution $\lambda P^n$.
+
+In fact, historically, the theory of Markov chains was initially developed as a branch of linear algebra, and a lot of the proofs were just linear algebra manipulations. However, nowadays, we often look at it as a branch of probability theory instead, and this is what we will do in this course. So don't be scared if you hate linear algebra.
+
+\section{Classification of chains and states}
+\subsection{Communicating classes}
+Suppose we have a Markov chain $X$ over some state space $S$. While we would usually expect the different states in $S$ to be mutually interacting, it is possible that we have a state $i \in S$ that can never be reached, or we might get stuck in some state $j \in S$ and can never leave. These are usually less interesting. Hence we would like to rule out these scenarios, and focus on what we call \emph{irreducible chains}, where we can freely move between different states.
+
+We start with some elementary definitions.
+\begin{defi}[Leading to and communicate]
+ Suppose we have two states $i, j\in S$. We write $i \to j$ ($i$ \emph{leads to} $j$) if there is some $n \geq 0$ such that $p_{i, j}(n) > 0$, i.e.\ it is possible for us to get from $i$ to $j$ (in multiple steps). Note that we allow $n = 0$. So we always have $i \to i$.
+
+ We write $i \leftrightarrow j$ if $i \to j$ and $j \to i$. If $i \leftrightarrow j$, we say $i$ and $j$ \emph{communicate}.
+\end{defi}
+
+\begin{prop}
+ $\leftrightarrow$ is an equivalence relation.
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Reflexive: we have $i \leftrightarrow i$ since $p_{i, i}(0) = 1$.
+ \item Symmetric: trivial by definition.
+ \item Transitive: suppose $i \to j$ and $j \to k$. Since $i \to j$, there is some $m > 0$ such that $p_{i, j}(m) > 0$. Since $j \to k$, there is some $n$ such that $p_{j, k}(n) > 0$. Then $p_{i, k}(m + n) = \sum_{r} p_{i, r}(m)p_{r, k}(n) \geq p_{i, j}(m)p_{j, k}(n) > 0$. So $i \to k$.
+
+ Similarly, if $j \to i$ and $k \to j$, then $k \to i$. So $i \leftrightarrow j$ and $j \leftrightarrow k$ implies that $i \leftrightarrow k$.\qedhere
+ \end{enumerate}
+\end{proof}
+So we have an equivalence relation, and we know what to do with equivalence relations. We form equivalence classes!
+\begin{defi}[Communicating classes]
+ The equivalence classes of $\leftrightarrow$ are \emph{communicating classes}.
+\end{defi}
+We have to be careful with these communicating classes. Different communicating classes are not completely isolated. Within a communicating class $A$, of course we can move between any two states. However, it is also possible that we can escape from a class $A$ to a different class $B$. It is just that after going to $B$, we cannot return to class $A$. From $B$, we might be able to get to another class $C$. We can jump around all the time, but (if there are finitely many communicating classes) eventually we have to stop when we have visited every class. Then we are bound to stay in that class.
+
+Since we are eventually going to be stuck in that class anyway, often, we can just consider this final communicating class and ignore the others. So wlog we can assume that the chain only has one communicating class.
+
+\begin{defi}[Irreducible chain]
+ A Markov chain is \emph{irreducible} if there is a unique communicating class.
+\end{defi}
+From now on, we will mostly care about irreducible chains only.
+
+More generally, we call a subset \emph{closed} if we cannot escape from it.
+\begin{defi}[Closed]
+ A subset $C\subseteq S$ is \emph{closed} if $p_{i, j} = 0$ for all $i \in C, j\not\in C$.
+\end{defi}
+
+\begin{prop}
+ A subset $C$ is closed iff ``$i\in C, i\to j$ implies $j \in C$''.
+\end{prop}
+
+\begin{proof}
+ Assume $C$ is closed. Let $i \in C, i \to j$. Since $i \to j$, there is some $m$ such that $p_{i, j}(m) > 0$. Expanding the Chapman-Kolmogorov equation, we have
+ \[
+ p_{i, j}(m) = \sum_{i_1, \cdots, i_{m - 1}} p_{i, i_1}p_{i_1, i_2}\cdots p_{i_{m - 1},j} > 0.
+ \]
+ So there is some route $i, i_1, \cdots, i_{m - 1}, j$ such that $p_{i, i_1}, p_{i_1, i_2}, \cdots, p_{i_{m - 1}, j} > 0$. Since $p_{i, i_1} > 0$, we have $i_1\in C$. Since $p_{i_1,i_2} > 0$, we have $i_2\in C$. By induction, we get that $j \in C$.
+
+ To prove the other direction, assume that ``$i\in C, i \to j$ implies $j\in C$''. So for any $i\in C, j\not\in C$, then $i\not\to j$. So in particular $p_{i, j} = 0$.
+\end{proof}
+
+\begin{eg}
+ Consider $S = \{1, 2, 3, 4, 5, 6\}$ with transition matrix
+ \[
+ P =
+ \begin{pmatrix}
+ \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0\\
+ 0 & 0 & 1 & 0 & 0 & 0\\
+ \frac{1}{3} & 0 & 0 & \frac{1}{3} & \frac{1}{3} & 0\\
+ 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0\\
+ 0 & 0 & 0 & 0 & 0 & 1\\
+ 0 & 0 & 0 & 0 & 1 & 0\\
+ \end{pmatrix}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \node [mstate] (1) at (0, 0) {$1$};
+ \node [mstate] (2) at (1, 1.4) {$2$};
+ \node [mstate] (3) at (2, 0) {$3$};
+ \node [mstate] (4) at (3, -1.4) {$4$};
+ \node [mstate] (5) at (4, 0) {$5$};
+ \node [mstate] (6) at (6, 0) {$6$};
+ \draw (1) edge [loop left, ->] (1);
+ \draw (1) edge [->] (2);
+ \draw (2) edge [->] (3);
+ \draw (3) edge [->] (1);
+ \draw (3) edge [->] (4);
+ \draw (3) edge [->] (5);
+ \draw (4) edge [->] (5);
+ \draw (4) edge [loop below, ->] (4);
+ \draw (5) edge [bend left, ->] (6);
+ \draw (6) edge [bend left, ->] (5);
+ \end{tikzpicture}
+ \end{center}
+ We see that the communicating classes are $\{1, 2, 3\}$, $\{4\}$, $\{5, 6\}$, where $\{5, 6\}$ is closed.
+\end{eg}
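Communicating classes can also be found mechanically: build the reachability relation $i \to j$ from the support of $P$, take its transitive closure, and intersect with its transpose. A sketch for the example above:

```python
import numpy as np

# Transition matrix of the six-state example above.
P = np.array([
    [1/2, 1/2, 0,   0,   0,   0],
    [0,   0,   1,   0,   0,   0],
    [1/3, 0,   0,   1/3, 1/3, 0],
    [0,   0,   0,   1/2, 1/2, 0],
    [0,   0,   0,   0,   0,   1],
    [0,   0,   0,   0,   1,   0],
])
n = P.shape[0]

# reach[i, j] is True iff i -> j, i.e. p_{i,j}(m) > 0 for some m >= 0.
reach = (np.eye(n) + (P > 0)) > 0
for _ in range(n):  # repeated squaring gives the transitive closure
    reach = (reach.astype(int) @ reach.astype(int)) > 0

# i and j communicate iff both i -> j and j -> i.
communicate = reach & reach.T
classes = sorted({tuple(int(k) + 1 for k in np.flatnonzero(communicate[i]))
                  for i in range(n)})
print(classes)  # [(1, 2, 3), (4,), (5, 6)], in the 1-indexed labels of the text
```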
+
+\subsection{Recurrence or transience}
+The major focus of this chapter is recurrence and transience. This was something that came up when we discussed random walks in IA Probability --- given a random walk, say starting at $0$, what is the probability that we will return to $0$ later on? Recurrence and transience give a qualitative way of answering this question. As we mentioned, we will mostly focus on irreducible chains. So by definition there is always a non-zero probability of returning to $0$. Hence the question we want to ask is whether we are going to return to $0$ with certainty, i.e.\ with probability $1$. If we are bound to return, then we say the state is recurrent. Otherwise, we say it is transient.
+
+It should be clear that this notion is usually interesting only for an infinite state space. If we have an infinite state space, we might get transience because we are very likely to drift away to a place far, far away and never return. However, in a finite state space, this can't happen. Transience can occur only if we get stuck in a place and can't leave, i.e.\ we are not in an irreducible state space. These are not interesting.
+
+\begin{notation}
+ For convenience, we will introduce some notation. We write
+ \[
+ \P_i(A) = \P(A\mid X_0 = i),
+ \]
+ and
+ \[
+ \E_i(Z) = \E(Z\mid X_0 = i).
+ \]
+\end{notation}
+
+Suppose we start from $i$, and randomly wander around. Eventually, we may or may not get to $j$. If we do, there is a time at which we first reach $j$. We call this the \emph{first passage time}.
+
+\begin{defi}[First passage time and probability]
+ The \emph{first passage time} of $j \in S$ starting from $i$ is
+ \[
+ T_j = \min\{n \geq 1: X_n = j\}.
+ \]
+ Note that this implicitly depends on $i$. Here we require $n \geq 1$. Otherwise $T_i$ would always be $0$.
+
+ The \emph{first passage probability} is
+ \[
+ f_{ij}(n) = \P_i(T_j = n).
+ \]
+\end{defi}
+
+\begin{defi}[Recurrent state]
+ A state $i\in S$ is \emph{recurrent} (or \emph{persistent}) if
+ \[
+ \P_i (T_i < \infty) = 1,
+ \]
+ i.e.\ we will eventually get back to the state. Otherwise, we call the state \emph{transient}.
+\end{defi}
+Note that transient does \emph{not} mean we don't get back. It's just that we are not sure that we will get back. We can show that if a state is recurrent, then the probability that we return to $i$ infinitely many times is also $1$.
+
+Our current objective is to show the following characterization of recurrence.
+\begin{thm}
+ $i$ is recurrent iff $\sum_n p_{i, i}(n) = \infty$.
+\end{thm}
+The technique to prove this would be to use generating functions. We need to first decide what sequence to work with. For any fixed $i, j$, consider the sequence $p_{ij}(n)$ as a sequence in $n$. Then we define
+\[
+ P_{i, j}(s) = \sum_{n = 0}^\infty p_{i, j}(n) s^n.
+\]
+We also define
+\[
+ F_{i, j}(s) = \sum_{n = 0}^\infty f_{i, j}(n) s^n,
+\]
+where $f_{i, j}$ is our first passage probability. For the sake of clarity, we make it explicit that $p_{i, j}(0) = \delta_{i, j}$, and $f_{i, j}(0) = 0$.
+
+Our proof would be heavily based on the result below:
+\begin{thm}
+ \[
+ P_{i, j}(s) = \delta_{i, j} + F_{i, j}(s)P_{j, j}(s),
+ \]
+ for $-1 < s \leq 1$.
+\end{thm}
+
+\begin{proof}
+ Using the law of total probability,
+ \begin{align*}
+ p_{i, j}(n) &= \sum_{m = 1}^n \P_i(X_n = j\mid T_j = m) \P_i(T_j = m) \\
+ \intertext{Using the Markov property, we can write this as}
+ &= \sum_{m = 1}^n \P(X_n = j\mid X_m = j) \P_i(T_j = m)\\
+ &= \sum_{m = 1}^n p_{j, j}(n - m) f_{i, j}(m).
+ \end{align*}
+ We can multiply through by $s^n$ and sum over all $n$ to obtain
+ \[
+ \sum_{n = 1}^\infty p_{i, j}(n) s^n = \sum_{n = 1}^\infty \sum_{m = 1}^n p_{j, j}(n - m)s^{n - m} f_{i, j}(m)s^m.
+ \]
+ The left hand side is \emph{almost} the generating function $P_{i, j}(s)$, except that we are missing an $n = 0$ term, which is $p_{i, j}(0) = \delta_{i, j}$. The right hand side is the ``convolution'' of the power series $P_{j, j}(s)$ and $F_{i, j}(s)$, which we can write as the product $P_{j, j}(s) F_{i, j}(s)$. So
+ \[
+ P_{i, j}(s) - \delta_{i, j} = P_{j, j}(s) F_{i, j}(s).\qedhere
+ \]
+\end{proof}
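The identity can be verified at the level of coefficients, $p_{i, j}(n) = \sum_{m = 1}^n f_{i, j}(m) p_{j, j}(n - m)$ for $n \geq 1$, on a small chain. The first passage probabilities are computed by conditioning on the first step, $f_{i, j}(n) = \sum_{k \neq j} p_{i, k} f_{k, j}(n - 1)$ (a standard recursion, not stated in the notes); the chain itself is a hypothetical example.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])  # hypothetical two-state chain
N = 30
i, j = 0, 1

# n-step transition probabilities: p_{i,j}(n) = (P^n)_{i,j}.
p = [np.linalg.matrix_power(P, n) for n in range(N)]

# First passage probabilities f_{i,j}(n), conditioning on the first step:
# f_{i,j}(1) = p_{i,j}, and f_{i,j}(n) = sum_{k != j} p_{i,k} f_{k,j}(n-1).
f = np.zeros((N, 2, 2))   # f[0] stays 0, matching f_{i,j}(0) = 0
f[1] = P
for n in range(2, N):
    for a in range(2):
        for b in range(2):
            f[n, a, b] = sum(P[a, k] * f[n - 1, k, b] for k in range(2) if k != b)

# Coefficient form of P_{i,j}(s) = delta_{i,j} + F_{i,j}(s) P_{j,j}(s):
for n in range(1, N):
    conv = sum(f[m, i, j] * p[n - m][j, j] for m in range(1, n + 1))
    assert np.isclose(p[n][i, j], conv)
```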
+
+Before we actually prove our theorem, we need one helpful result from Analysis that allows us to deal with power series nicely.
+\begin{lemma}[Abel's lemma]
+ Let $u_1, u_2, \cdots$ be real numbers such that $U(s) = \sum_{n} u_n s^n$ converges for $0 < s < 1$. Then
+ \[
+ \lim_{s\to 1^-} U(s) = \sum_n u_n.
+ \]
+\end{lemma}
+The proof is an exercise in analysis, which happens to be on the first example sheet of Analysis II.
+
+We now prove the theorem we initially wanted to show.
+\begin{thm}
+ $i$ is recurrent iff $\sum_n p_{ii}(n) = \infty$.
+\end{thm}
+
+\begin{proof}
+ Using $j = i$ in the above formula, for $0 < s < 1$, we have
+ \[
+ P_{i, i}(s) = \frac{1}{1 - F_{i, i} (s)}.
+ \]
+ Here we need to be careful that we are not dividing by $0$. This would be a problem if $F_{ii}(s) = 1$. By definition, we have
+ \[
+ F_{i, i}(s) = \sum_{n = 1}^\infty f_{i, i}(n) s^n.
+ \]
+ Also, by definition of $f_{ii}$, we have
+ \[
+ F_{i, i}(1) = \sum_n f_{i, i}(n) = \P(\text{ever returning to }i) \leq 1.
+ \]
+ So for $|s| < 1$, $F_{i, i}(s) < 1$. So we are not dividing by zero. Now we use our original equation
+ \[
+ P_{i, i}(s) = \frac{1}{1 - F_{i, i} (s)},
+ \]
+ and take the limit as $s \to 1$. By Abel's lemma, we know that the left hand side is
+ \[
+ \lim_{s \to 1}P_{i, i}(s) = P_{i, i}(1) = \sum_n p_{i, i}(n).
+ \]
+ The other side is
+ \[
+ \lim_{s \to 1}\frac{1}{1 - F_{i, i}(s)} = \frac{1}{1 - \sum f_{i, i}(n)}.
+ \]
+ Hence we have
+ \[
+ \sum_n p_{i, i}(n) = \frac{1}{1 - \sum f_{i, i}(n)}.
+ \]
+ Since $\sum f_{i, i}(n)$ is the probability of ever returning, the probability of ever returning is 1 if and only if $\sum_n p_{i, i}(n) = \infty$.
+\end{proof}
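As a preview of P\'olya's theorem below: for the simple symmetric random walk on $\Z$, one has $p_{0, 0}(2n) = \binom{2n}{n}/4^n$, which decays like $1/\sqrt{\pi n}$, so the partial sums grow without bound and the criterion gives recurrence. The term-by-term recursion below is just a computational convenience, not from the notes.

```python
# Simple symmetric random walk on Z: p_{0,0}(2n) = C(2n, n) / 4^n.
# The criterion says 0 is recurrent iff the sum of these terms diverges.
def partial_sum(N):
    total, term = 0.0, 1.0  # term = C(2n, n) / 4^n, starting at n = 0
    for n in range(N):
        term *= (2 * n + 1) / (2 * n + 2)  # ratio of consecutive terms
        total += term
    return total

print(partial_sum(100), partial_sum(100000))
# the partial sums keep growing (roughly like 2 sqrt(N / pi)): no finite limit
```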
+
+Using this result, we can check if a state is recurrent. However, a Markov chain has many states, and it would be tedious to check every state individually. Thus we have the following helpful result.
+\begin{thm}
+ Let $C$ be a communicating class. Then
+ \begin{enumerate}
+ \item Either every state in $C$ is recurrent, or every state is transient.
+ \item If $C$ contains a recurrent state, then $C$ is closed.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $i \leftrightarrow j$ and $i \not =j$. Then by definition of communicating, there is some $m$ such that $p_{i, j}(m) = \alpha > 0$, and some $n$ such that $p_{j, i}(n) = \beta > 0$. So for each $k$, we have
+ \[
+ p_{i, i}(m + k + n) \geq p_{i, j}(m) p_{j, j}(k) p_{j, i}(n) = \alpha\beta p_{j, j}(k).
+ \]
+ So if $\sum_k p_{j, j}(k) = \infty$, then $\sum_r p_{i, i}(r) = \infty$. So $j$ recurrent implies $i$ recurrent. Similarly, $i$ recurrent implies $j$ recurrent.
+ \item If $C$ is not closed, then there is a non-zero probability that we leave the class and never get back. So the states are not recurrent.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Note that there is a profound difference between a finite state space and an infinite state space. A finite state space can be represented by a finite matrix, and we are all very familiar with finite matrices. We can use everything we know about finite matrices from IA Vectors and Matrices. However, infinite matrices are weirder.
+
+For example, any finite transition matrix $P$ has an eigenvalue of $1$. This is since the row sums of a transition matrix are always $1$. So if we multiply $P$ by $\mathbf{e} = (1, 1, \cdots, 1)$, then we get $\mathbf{e}$ again. However, this is not true for infinite matrices, since we don't usually allow arbitrary infinite vectors. To avoid getting infinitely large numbers when multiplying vectors and matrices, we usually restrict our focus to vectors $\mathbf{x}$ such that $\sum x_i^2$ is finite. In this case the vector $\mathbf{e}$ is not allowed, and the transition matrix need not have eigenvalue $1$.
+
+Another thing about a finite state space is that probability ``cannot escape''. Each step of a Markov chain gives a probability distribution on the state space, and we can imagine the progression of the chain as a flow of probabilities around the state space. If we have a finite state space, then all the probability flow must be contained within our finite state space. However, if we have an infinite state space, then probabilities can just drift away to infinity.
+
+More concretely, we get the following result about finite state spaces.
+\begin{thm}
+ In a finite state space,
+ \begin{enumerate}
+ \item There exists at least one recurrent state.
+ \item If the chain is irreducible, every state is recurrent.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ (ii) follows immediately from (i): in an irreducible chain, all states communicate, so they are either all transient or all recurrent, and (i) provides at least one recurrent state. So we just have to prove (i).
+
+ We first fix an arbitrary $i$. Recall that
+ \[
+ P_{i, j}(s) = \delta_{i, j} + P_{j, j}(s) F_{i, j}(s).
+ \]
+ If $j$ is transient, then $\sum_n p_{j, j}(n) = P_{j, j}(1) < \infty$. Also, $F_{i, j}(1)$ is the probability of ever reaching $j$ from $i$, and is hence finite as well. So we have $P_{i, j}(1) < \infty$. By Abel's lemma, $P_{i, j}(1)$ is given by
+ \[
+ P_{i, j}(1) = \sum_n p_{i, j}(n).
+ \]
+ Since this is finite, we must have $p_{i, j}(n)\to 0$.
+
+ Since we know that
+ \[
+ \sum_{j\in S}p_{i, j}(n) = 1,
+ \]
+ if every state is transient, then since the state space is finite, the sum has finitely many terms, each tending to $0$. So $\sum_{j \in S} p_{i, j}(n) \to 0$ as $n\to \infty$, contradicting the fact that the sum is always $1$. So we must have a recurrent state.
+\end{proof}
+
+\begin{thm}[P\'olya's theorem]
+ Consider $\Z^d = \{(x_1, x_2, \cdots, x_d): x_i \in \Z\}$. This generates a graph with $x$ adjacent to $y$ if $|x - y| = 1$, where $|\ph |$ is the Euclidean norm.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \node at (-4, 0) {$d = 1$};
+ \draw (-2.5, 0) -- (2.5, 0);
+ \foreach \x in {-2,-1,...,2} {
+ \node [circ] at (\x, 0) {};
+ }
+ \begin{scope}[shift={(0, -3.5)}]
+ \node at (-4, 2) {$d = 2$};
+ \foreach \x in {-2, -1,...,2} {
+ \foreach \y in {-2,-1,...,2} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-2, -1,...,2} {
+ \draw (\x, -2.5) -- (\x, 2.5);
+ \draw (-2.5, \x) -- (2.5, \x);
+ }
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ Consider a random walk in $\Z^d$. At each step, it moves to a neighbour, each chosen with equal probability, i.e.
+ \[
+ \P(X_{n + 1} = j\mid X_n = i) =
+ \begin{cases}
+ \frac{1}{2d} & |j - i| = 1\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ This is an irreducible chain, since it is possible to get from one point to any other point. So the points are either all recurrent or all transient.
+
+ The theorem says this is recurrent iff $d = 1$ or $2$.
+\end{thm}
+Intuitively, it makes sense that we get recurrence only in low dimensions, since with more dimensions, it is easier to get lost.
+
+\begin{proof}
+ We will start with the case $d = 1$. We want to show that $\sum p_{0, 0}(n) = \infty$; then we know the origin is recurrent. We can simplify this a bit: it is impossible to get back to the origin in an odd number of steps, so we can instead consider $\sum p_{0, 0}(2n)$. Conveniently, we can write down this expression immediately. To return to the origin after $2n$ steps, we need to have made $n$ steps to the left and $n$ steps to the right, in any order. So we have
+ \[
+ p_{0, 0}(2n) = \P(n\text{ steps left}, n\text{ steps right}) = \binom{2n}{n} \left(\frac{1}{2}\right)^{2n}.
+ \]
+ To decide whether this sum converges, it is not helpful to work with binomial coefficients and factorials directly. So we use Stirling's formula $n! \simeq \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$. If we plug this in, we get
+ \[
+ p_{0, 0}(2n) \sim \frac{1}{\sqrt{\pi n}}.
+ \]
+ This tends to $0$, but really slowly, more slowly even than the terms $\frac{1}{n}$ of the (divergent) harmonic series. So we have $\sum p_{0, 0}(2n) = \infty$.
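This divergence can be checked numerically. A quick sketch (my own code, combining the exact formula for $p_{0,0}(2n)$ with the Stirling estimate):

```python
# Numerical sanity check (not part of the notes): the exact return
# probabilities p_{0,0}(2n) = C(2n, n) / 4^n approach the Stirling
# estimate 1 / sqrt(pi * n), and the partial sums keep growing.
from math import comb, pi, sqrt

def p00(n):
    """Probability the 1D walk is back at 0 after 2n steps."""
    return comb(2 * n, n) / 4 ** n

# The ratio p00(n) * sqrt(pi * n) should approach 1.
assert abs(p00(1000) * sqrt(pi * 1000) - 1) < 1e-3

# Partial sums of p00(n) grow like sqrt(N), hinting at divergence.
partial = [sum(p00(n) for n in range(1, N)) for N in (100, 400, 1600)]
assert partial[0] < partial[1] < partial[2]
```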
+
+ In the $d = 2$ case, suppose after $2n$ steps, I have taken $r$ steps right, $\ell$ steps left, $u$ steps up and $d$ steps down. We must have $r + \ell + u + d = 2n$, and we need $r = \ell$ and $u = d$ to return to the origin. So we let $r = \ell = m$ and $u = d = n - m$. So we get
+ \begin{align*}
+ p_{0, 0}(2n) &= \left(\frac{1}{4}\right)^{2n} \sum_{m = 0}^n \binom{2n}{m, m, n - m, n - m} \\
+ &= \left(\frac{1}{4}\right)^{2n} \sum_{m = 0}^n \frac{(2n)!}{(m!)^2 ((n - m)!)^2} \\
+ &= \left(\frac{1}{4}\right)^{2n} \binom{2n}{n}\sum_{m = 0}^n \left(\frac{n!}{m!(n - m)!}\right)^2\\
+ &= \left(\frac{1}{4}\right)^{2n} \binom{2n}{n}\sum_{m = 0}^n \binom{n}{m}\binom{n}{n - m}\\
+ \intertext{We now use a well-known identity (proved in IA Numbers and Sets) to obtain}
+ &= \left(\frac{1}{4}\right)^{2n} \binom{2n}{n} \binom{2n}{n}\\
+ &= \left[\binom{2n}{n} \left(\frac{1}{2}\right)^{2n}\right]^2\\
+ &\sim \frac{1}{\pi n}.
+ \end{align*}
+ So the sum diverges. So this is recurrent. Note that the two-dimensional probability turns out to be the square of the one-dimensional probability. This is not a coincidence, and we will explain this after the proof. However, this does not extend to higher dimensions.
+
+ In the $d = 3$ case, we have
+ \[
+ p_{0, 0}(2n) = \left(\frac{1}{6}\right)^{2n}\sum_{i + j + k = n}\frac{(2n)!}{(i!j!k!)^2}.
+ \]
+ This time, there is no neat combinatorial formula. Since we want to show this is summable, we can try to bound this from above. We have
+ \begin{align*}
+ p_{0, 0}(2n) &= \left(\frac{1}{6}\right)^{2n} \binom{2n}{n} \sum \left(\frac{n!}{i!j!k!}\right)^2\\
+ &= \left(\frac{1}{2}\right)^{2n} \binom{2n}{n} \sum \left(\frac{n!}{3^n i!j!k!}\right)^2\\
+ \intertext{Why do we write it like this? We are going to use the identity $\displaystyle \sum_{i + j + k = n} \frac{n!}{3^n i!j!k!} = 1$. Where does this come from? Suppose we have three urns, and throw $n$ balls into it. Then the probability of getting $i$ balls in the first, $j$ in the second and $k$ in the third is exactly $\frac{n!}{3^n i!j!k!}$. Summing over all possible combinations of $i$, $j$ and $k$ gives the total probability of getting in any configuration, which is $1$. So we can bound this by}
+ &\leq \left(\frac{1}{2}\right)^{2n}\binom{2n}{n} \max\left(\frac{n!}{3^n i!j!k!}\right)\sum \frac{n!}{3^n i!j!k!}\\
+ &= \left(\frac{1}{2}\right)^{2n}\binom{2n}{n} \max\left(\frac{n!}{3^n i!j!k!}\right)
+ \intertext{To find the maximum, we could replace the factorials by gamma functions and use Lagrange multipliers. Instead, we just argue that the maximum is achieved when $i$, $j$ and $k$ are as close to each other as possible. So we get}
+ &\leq \left(\frac{1}{2}\right)^{2n}\binom{2n}{n} \frac{n!}{3^n}\left(\frac{1}{\lfloor n/3\rfloor!}\right)^3\\
+ &\leq Cn^{-3/2}
+ \end{align*}
+ for some constant $C$, using Stirling's formula. So $\sum p_{0,0}(2n) < \infty$ and the chain is transient. The proof for higher dimensions is similar.
+\end{proof}
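The $d = 3$ bound can also be checked numerically. This is my own code (not from the lectures), evaluating the multinomial expression above and confirming that $p_{0,0}(2n)\, n^{3/2}$ stays bounded, so the series converges:

```python
# Illustrative check of the d = 3 bound (my own code, using the
# multinomial expression above): p_{0,0}(2n) * n^{3/2} stays bounded,
# so the series sum p_{0,0}(2n) converges and the walk is transient.
from math import comb, factorial

def p00_3d(n):
    """p_{0,0}(2n) for the simple random walk on Z^3."""
    c = comb(2 * n, n) / 4 ** n            # (1/2)^{2n} C(2n, n)
    s = 0.0
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            m = factorial(n) // (factorial(i) * factorial(j) * factorial(k))
            s += (m / 3 ** n) ** 2         # (n! / (3^n i! j! k!))^2
    return c * s

assert abs(p00_3d(1) - 1 / 6) < 1e-12      # return in two steps: 6 * (1/6)^2

vals = [p00_3d(n) * n ** 1.5 for n in (10, 20, 30)]
assert 0 < max(vals) < 1                   # bounded, so the sum converges
```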
+
+Let's get back to why the two dimensional probability is the square of the one-dimensional probability. This square might remind us of independence. However, it is obviously not true that horizontal movement and vertical movement are independent --- if we go sideways in one step, then we cannot move vertically. So we need something more sophisticated.
+
+We write $X_n = (A_n, B_n)$. What we do is to rotate our space: we record our coordinates in a pair of axes rotated by $45^\circ$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (4, 0) node [right] {$A$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$B$};
+
+ \node [circ] at (3, 2) {};
+ \node [right] at (3, 2) {$(A_n, B_n)$};
+
+ \draw [mred, ->] (-1, -1) -- (3, 3) node [anchor = south west] {$V$};
+ \draw [mred, ->] (-1, 1) -- (3, -3) node [anchor = north west] {$U$};
+
+ \draw [mred, dashed] (3, 2) -- (0.5, -0.5) node [anchor = north east] {$U_n/\sqrt{2}$};
+ \draw [mred, dashed] (3, 2) -- (2.5, 2.5) node [anchor = south east] {$V_n/\sqrt{2}$};
+ \end{tikzpicture}
+\end{center}
+We can define the new coordinates as
+\begin{align*}
+ U_n &= A_n - B_n\\
+ V_n &= A_n + B_n
+\end{align*}
+In each step, exactly one of $A_n$ and $B_n$ changes by $1$. So $U_n$ and $V_n$ \emph{both} change by $1$. Moreover, they are independent, since each of the four possible moves corresponds to exactly one of the four combinations of $U_n$ and $V_n$ increasing or decreasing by $1$, each with probability $\frac{1}{4}$. So we have
+\begin{align*}
+ p_{0, 0}(2n) &= \P(A_{2n} = B_{2n} = 0)\\
+ &= \P(U_{2n} = V_{2n} = 0)\\
+ &= \P(U_{2n} = 0)\P (V_{2n} = 0)\\
+ &= \left[\binom{2n}{n}\left(\frac{1}{2}\right)^{2n}\right]^2.
+\end{align*}
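The rotation argument can be confirmed directly (my own code): the multinomial sum for the two-dimensional return probability equals the square of the one-dimensional one.

```python
# A direct check (my own code) that the two-dimensional return
# probability is the square of the one-dimensional one, as the
# rotation argument predicts.
from math import comb, factorial

def p00_2d(n):
    """p_{0,0}(2n) on Z^2 via the multinomial sum."""
    total = sum(factorial(2 * n) // (factorial(m) ** 2 * factorial(n - m) ** 2)
                for m in range(n + 1))
    return total / 4 ** (2 * n)

def p00_1d(n):
    return comb(2 * n, n) / 4 ** n

for n in (1, 2, 5, 10):
    assert abs(p00_2d(n) - p00_1d(n) ** 2) < 1e-12
```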
+\subsection{Hitting probabilities}
+Recurrence and transience tell us whether we are going to return to the original state with (almost) certainty. Often, we would like to know something more quantitative. What is the actual probability of returning to the state $i$? If we do return, what is the expected time until we return?
+
+We can formulate this in a more general setting. Let $S$ be our state space, and let $A \subseteq S$. We want to know how likely and how long it takes for us to reach $A$. For example, if we are in a casino, we want to win, say, a million, and don't want to go bankrupt. So we would like to know the probability of reaching $A = \{1\text{ million}\}$ and $A = \{0\}$.
+
+\begin{defi}[Hitting time]
+ The \emph{hitting time} of $A \subseteq S$ is the random variable $H^A = \min\{n \geq 0: X_n \in A\}$. In particular, if we start in $A$, then $H^A = 0$. We also have
+ \[
+ h_i^A = \P_i(H^A < \infty) = \P_i(\text{ever reach }A).
+ \]
+\end{defi}
+
+To determine hitting times, we mostly rely on the following result:
+\begin{thm}
+ The vector $(h_i^A: i \in S)$ satisfies
+ \[
+ h_i^A =
+ \begin{cases}
+ 1 & i \in A\\
+ \sum_{j \in S}p_{i, j}h_j^A & i \not \in A
+ \end{cases},
+ \]
+ and is \emph{minimal} in that for any non-negative solution $(x_i: i \in S)$ to these equations, we have $h_i^A \leq x_i$ for all $i$.
+\end{thm}
+It is easy to show that $h_i^A$ satisfies the formula given, but it takes some more work to show that $h_i^A$ is minimal. Recall, however, that we have proved a similar result for random walks in IA Probability, and the proof is more-or-less the same.
+
+\begin{proof}
+ By definition, $h_i^A = 1$ if $i \in A$. Otherwise, we have
+ \[
+ h_i^A = \P_i(H^A < \infty) = \sum_{j\in S} \P_i(H^A < \infty \mid X_1 = j)p_{i, j} = \sum_{j\in S}h_{j}^A p_{i, j}.
+ \]
+ So $h_i^A$ is indeed a solution to the equations.
+
+ To show that $h_i^A$ is the minimal solution, suppose $x = (x_i: i \in S)$ is a non-negative solution, i.e.
+ \[
+ x_i =
+ \begin{cases}
+ 1 & i \in A\\
+ \sum_{j \in S}p_{i, j}x_j & i \not \in A
+ \end{cases}.
+ \]
+ If $i \in A$, we have $h_i^A = x_i = 1$. Otherwise, we can write
+ \begin{align*}
+ x_i &= \sum_j p_{i, j}x_j \\
+ &= \sum_{j \in A}p_{i, j}x_j + \sum_{j \not\in A}p_{i, j}x_j \\
+ &= \sum_{j \in A}p_{i, j} + \sum_{j \not\in A}p_{i, j}x_j \\
+ &\geq \sum_{j \in A}p_{i, j}\\
+ &= \P_i(H^A = 1).
+ \end{align*}
+ By iterating this process, we can write
+ \begin{align*}
+ x_i &= \sum_{j \in A} p_{i, j} + \sum_{j \not\in A} p_{i, j}\left(\sum_k p_{j, k} x_k\right) \\
+ &= \sum_{j \in A} p_{i, j} + \sum_{j \not\in A} p_{i, j}\left(\sum_{k\in A} p_{j, k} + \sum_{k \not\in A} p_{j, k}x_k\right)\\
+ &\geq \P_i (H^A = 1) + \sum_{j \not \in A, k \in A}p_{i, j}p_{j, k}\\
+ &= \P_i(H^A = 1) + \P_i(H^A = 2)\\
+ &= \P_i (H^A \leq 2).
+ \end{align*}
+ By induction, we obtain
+ \[
+ x_i \geq \P_i(H^A \leq n)
+ \]
+ for all $n$. Taking the limit as $n \to \infty$, we get
+ \[
+ x_i \geq \P_i(H^A < \infty) = h_i^A.
+ \]
+ So $h_i^A$ is minimal.
+\end{proof}
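The minimality can be made concrete with a small sketch (the four-state chain is my own toy example, not from the lectures): iterating the equations starting from $0$ produces exactly the bounds $\P_i(H^A \leq n)$ from the proof, which increase to the minimal solution.

```python
# A toy example of mine (not from the lectures) illustrating
# minimality: iterating the equations from 0 gives the bounds
# P_i(H^A <= n) from the proof, which increase to h_i^A.
# States 0..3; A = {0}; state 3 is absorbing, so the walk can escape
# to 3 and never hit 0, and the minimal solution has h_i < 1.
P = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
A = {0}

h = [1.0 if i in A else 0.0 for i in range(4)]
for _ in range(200):
    h = [1.0 if i in A else sum(P[i][j] * h[j] for j in range(4))
         for i in range(4)]

# This is the symmetric gambler's ruin on {0,...,3}, so h_1 = 2/3,
# h_2 = 1/3 and h_3 = 0 (non-minimal solutions would have x_3 > 0).
assert abs(h[1] - 2 / 3) < 1e-9
assert abs(h[2] - 1 / 3) < 1e-9
assert h[3] == 0.0
```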
+The next question we want to ask is how long it will take for us to hit $A$. We want to find $\E_i(H^A) = k_i^A$. Note that we have to be careful --- if there is a chance that we never hit $A$, then $H^A$ could be infinite, and $\E_i(H^A) = \infty$. This occurs if $h_i^A < 1$. So often we are only interested in the case where $h_i^A = 1$ (note that $h_i^A = 1$ does \emph{not} imply that $k_i^A < \infty$. It is merely a necessary condition).
+
+We get a similar result characterizing the expected hitting time.
+\begin{thm}
+ $(k_i^A: i \in S)$ is the minimal non-negative solution to
+ \[
+ k_i^A =
+ \begin{cases}
+ 0 & i \in A\\
+ 1 + \sum_j p_{i, j}k_j^A & i \not\in A.
+ \end{cases}
+ \]
+\end{thm}
+Note that we have this ``$1 +$'' since when we move from $i$ to $j$, one step has already passed.
+
+The proof is almost the same as the proof we had above.
+\begin{proof}
+ The proof that $(k_i^A)$ satisfies the equations is the same as before.
+
+ Now let $(y_i : i\in S)$ be a non-negative solution. We show that $y_i \geq k_i^A$.
+
+ If $i \in A$, we get $y_i = k_i^A = 0$. Otherwise, suppose $i\not\in A$. Then we have
+ \begin{align*}
+ y_i &= 1 + \sum_j p_{i, j}y_j\\
+ &= 1 + \sum_{j \in A}p_{i, j}y_j + \sum_{j\not\in A}p_{i, j}y_j\\
+ &= 1 + \sum_{j\not\in A}p_{i, j}y_j\\
+ &= 1 + \sum_{j \not\in A}p_{i, j}\left(1 + \sum_{k\not\in A} p_{j, k}y_k\right)\\
+ &\geq 1 + \sum_{j \not\in A} p_{i, j}\\
+ &= \P_i(H^A \geq 1) + \P_i(H^A \geq 2).
+ \end{align*}
+ By induction, we know that
+ \[
+ y_i \geq \P_i (H^A \geq 1) + \cdots + \P_i(H^A \geq n)
+ \]
+ for all $n$. Let $n\to \infty$. Then we get
+ \[
+ y_i \geq \sum_{m \geq 1}\P_i(H^A \geq m) = \sum_{m \geq 1} m\P_i(H^A = m)= k_i^A.\qedhere
+ \]
+\end{proof}
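The same iteration works for expected hitting times (again a toy example of mine): iterating $k \mapsto 1 + Pk$ off $A$ starting from $0$ yields the partial sums $\P_i(H^A \geq 1) + \cdots + \P_i(H^A \geq n)$ from the proof, which increase to $k_i^A$.

```python
# Toy example of mine: compute k_i^A by iterating k <- 1 + P k off A
# starting from 0; the n-th iterate is P_i(H^A >= 1) + ... +
# P_i(H^A >= n), which increases to k_i^A.
# Symmetric walk on {0,1,2,3} with A = {0, 3}; the known answer is
# k_i = i(3 - i).
P = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
A = {0, 3}

k = [0.0] * 4
for _ in range(200):
    k = [0.0 if i in A else 1.0 + sum(P[i][j] * k[j] for j in range(4))
         for i in range(4)]

assert abs(k[1] - 2.0) < 1e-9 and abs(k[2] - 2.0) < 1e-9
```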
+
+\begin{eg}[Gambler's ruin]
+ This time, we will consider a random walk on $\N$. In each step, we either move to the right with probability $p$, or to the left with probability $q = 1 - p$. What is the probability of ever hitting $0$ from a given initial point? In other words, we want to find $h_i = h_i^{\{0\}}$.
+
+ We know $h_i$ is the minimal solution to
+ \[
+ h_i =
+ \begin{cases}
+ 1 & i = 0\\
+ qh_{i - 1} + ph_{i + 1} & i \not= 0.
+ \end{cases}
+ \]
+ What are the solutions to these equations? We can view this as a difference equation
+ \[
+ ph_{i + 1} - h_i + qh_{i - 1} = 0,\quad i \geq 1.
+ \]
+ with the boundary condition that $h_0 = 1$. We all know how to solve difference equations, so let's just jump to the solution.
+
+ If $p \not= q$, i.e.\ $p \not= \frac{1}{2}$, then the solution has the form
+ \[
+ h_i = A + B\left(\frac{q}{p}\right)^i
+ \]
+ for $i \geq 0$. If $p < q$, then for large $i$, $\left(\frac{q}{p}\right)^i$ is very large and blows up. However, since $h_i$ is a probability, it can never blow up. So we must have $B = 0$. So $h_i$ is constant. Since $h_0 = 1$, we have $h_i = 1$ for all $i$. So we always get to $0$.
+
+ If $p > q$, since $h_0 = 1$, we have $A + B = 1$. So
+ \[
+ h_i = \left(\frac{q}{p}\right)^i + A\left(1 - \left(\frac{q}{p}\right)^i\right).
+ \]
+ This is in fact a solution for all $A$. So we want to find the smallest solution.
+
+ As $i \to\infty$, we get $h_i \to A$. Since $h_i \geq 0$, we know that $A \geq 0$. Subject to this constraint, the minimum is attained when $A = 0$ (since $(q/p)^i$ and $(1 - (q/p)^i)$ are both positive). So we have
+ \[
+ h_i = \left(\frac{q}{p}\right)^i.
+ \]
+ There is another way to solve this. We can give ourselves a ceiling, i.e.\ we also stop when we hit some $k > 0$, which gives the second boundary condition $h_k = 0$. We now have two boundary conditions and can find a unique solution. Then we take the limit as $k \to \infty$. This is the approach taken in IA Probability.
+
+ Here if $p = q$, then by the same arguments, we get $h_i = 1$ for all $i$.
+\end{eg}
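The ceiling approach can be sketched numerically (my own code; the shooting trick is just one way of imposing the two boundary conditions, and the parameters are arbitrary choices of mine):

```python
# Numerical sketch (my own code) of the ceiling approach: impose
# h_0 = 1 and h_K = 0 at a ceiling K, solve the two-point boundary
# problem, and let K grow towards the minimal solution (q/p)^i.
def ruin_prob(i0, K, p, q):
    """Solve p h_{i+1} - h_i + q h_{i-1} = 0 with h_0 = 1, h_K = 0,
    by linear shooting: h_K is an affine function of the unknown h_1."""
    def shoot(t):
        h = [1.0, t]
        for i in range(1, K):
            h.append((h[i] - q * h[i - 1]) / p)
        return h
    a, b = shoot(0.0)[K], shoot(1.0)[K]
    t = -a / (b - a)          # the value of h_1 that makes h_K = 0
    return shoot(t)[i0]

p, q = 0.6, 0.4
# As the ceiling K grows, the answer approaches (q/p)^i.
approx = [ruin_prob(3, K, p, q) for K in (10, 20, 40)]
assert abs(approx[-1] - (q / p) ** 3) < 1e-6
```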
+
+\begin{eg}[Birth-death chain]
+ Let $(p_i: i \geq 1)$ be an arbitrary sequence such that $p_i \in (0, 1)$. We let $q_i = 1 - p_i$. We let $\N$ be our state space and define the transition probabilities to be
+ \[
+ p_{i, i + 1} = p_i,\quad p_{i, i - 1} = q_i.
+ \]
+ This is a more general case of the random walk --- in the random walk we have a constant $p_i$ sequence.
+
+ This is a general model for population growth, where the change in population depends on what the current population is. Here each ``step'' does not correspond to a unit of time, since births and deaths occur rather randomly. Instead, we just make a ``step'' whenever some birth or death occurs, regardless of when it occurs.
+
+ Here, if we have no people left, then it is impossible for us to reproduce and get more population. So we have
+ \[
+ p_{0, 0} = 1.
+ \]
+ We say $0$ is \emph{absorbing} in that $\{0\}$ is closed. We let $h_i = h_i^{\{0\}}$. We know that
+ \[
+ h_0 = 1,\quad p_i h_{i + 1} - h_i + q_i h_{i - 1} = 0,\quad i \geq 1.
+ \]
+ This is no longer a constant-coefficient difference equation, since the coefficients depend on the index $i$. To solve this, we need magic. We rewrite this as
+ \begin{align*}
+ p_i h_{i + 1} - h_i + q_i h_{i - 1} &= p_i h_{i + 1} - (p_i + q_i) h_i + q_i h_{i - 1} \\
+ &= p_i (h_{i + 1} - h_i) - q_i(h_i - h_{i - 1}).
+ \end{align*}
+ We let $u_i = h_{i - 1} - h_i$ (picking $h_i - h_{i - 1}$ might seem more natural, but this definition makes $u_i$ positive). Then our equation becomes
+ \[
+ u_{i + 1} = \frac{q_i}{p_i} u_i.
+ \]
+ We can iterate this to become
+ \[
+ u_{i + 1} = \left(\frac{q_i}{p_i}\right)\left(\frac{q_{i - 1}}{p_{i - 1}}\right) \cdots \left(\frac{q_1}{p_1}\right) u_1.
+ \]
+ We let
+ \[
+ \gamma_i = \frac{q_1q_2\cdots q_i}{p_1p_2\cdots p_i}.
+ \]
+ Then we get $u_{i + 1} = \gamma_i u_1$. For convenience, we let $\gamma_0 = 1$. Now we want to retrieve our $h_i$. We can do this by summing the equation $u_i = h_{i - 1} - h_i$. So we get
+ \[
+ h_0 - h_i = u_1 + u_2 + \cdots + u_i.
+ \]
+ Using the fact that $h_0 = 1$, we get
+ \[
+ h_i = 1 - u_1(\gamma_0 + \gamma_1 + \cdots + \gamma_{i - 1}).
+ \]
+ Here we have a parameter $u_1$, and we need to find out what this is. Our theorem tells us that the correct value of $u_1$ is the one that minimizes $h_i$. This all depends on the value of
+ \[
+ S = \sum_{i = 0}^\infty \gamma_i.
+ \]
+ By the law of excluded middle, $S$ either diverges or converges. If $S = \infty$, then we must have $u_1 = 0$, since otherwise $h_i$ becomes negative for large $i$, but we know that $0 \leq h_i \leq 1$. So in this case $h_i = 1$ for all $i$. If $S$ is finite, then $u_1$ can be non-zero. We know that the $\gamma_i$ are all positive. So to minimize $h_i$, we need to maximize $u_1$. We cannot make $u_1$ arbitrarily large, since this will make $h_i$ negative. To find the maximum possible value of $u_1$, we take the limit as $i \to\infty$. Then we know that the maximum value of $u_1$ satisfies
+ \[
+ 0 = 1 - u_1 S.
+ \]
+ In other words, $u_1 = 1/S$. So we have
+ \[
+ h_i = \frac{\sum_{k = i}^\infty \gamma_k}{\sum_{k = 0}^\infty \gamma_k}.
+ \]
+\end{eg}
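The final formula is easy to evaluate numerically. A quick sketch (my own toy parameters): with constant $p_i$ the chain reduces to the gambler's ruin, so $\gamma_k = (q/p)^k$ and the formula should reproduce $h_i = (q/p)^i$.

```python
# Numerical sketch of the birth-death formula (my own toy
# parameters): with constant p_i = 0.6 the chain is just the
# gambler's ruin, so gamma_k = (q/p)^k and h_i should be (q/p)^i.
p_const, q_const = 0.6, 0.4

N = 200                       # truncation point; gamma_k decays geometrically
gamma = [1.0]                 # gamma_0 = 1
for i in range(1, N):
    gamma.append(gamma[-1] * q_const / p_const)

S = sum(gamma)

def h(i):
    """h_i = (sum_{k >= i} gamma_k) / (sum_{k >= 0} gamma_k), truncated at N."""
    return sum(gamma[i:]) / S

assert abs(h(0) - 1.0) < 1e-12
assert abs(h(3) - (q_const / p_const) ** 3) < 1e-9
```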
+
+\subsection{The strong Markov property and applications}
+We are going to state the \emph{strong} Markov property and see applications of it. Before this, we should know what the \emph{weak} Markov property is. We have, in fact, already seen the weak Markov property. It's just that we called it the ``Markov property'' instead.
+
+In probability, we often have ``strong'' and ``weak'' versions of things. For example, we have the strong and weak law of large numbers. The difference is that the weak versions are expressed in terms of probabilities, while the strong versions are expressed in terms of random variables.
+
+Initially, when people first started developing probability theory, they just talked about probability distributions like the Poisson distribution or the normal distribution. However, later it turned out it is often nicer to talk about random variables instead. After messing with random variables, we can just take expectations or evaluate probabilities to get the corresponding statement about probability distributions. Hence usually the ``strong'' versions imply the ``weak'' version, but not the other way round.
+
+In this case, recall that we defined the Markov property in terms of the probabilities at some fixed time. We have some fixed time $t$, and we want to know the probabilities of events after $t$ in terms of what happened before $t$. In the strong Markov property, we will instead allow the time to be a random variable $T$, and say something slightly more involved. However, we will not allow $T$ to be any random variable, but just some nice ones.
+
+\begin{defi}[Stopping time]
+ Let $X$ be a Markov chain. A random variable $T$ (which is a function $\Omega \to \N\cup \{\infty\}$) is a \emph{stopping time} for the chain $X = (X_n)$ if for $n \geq 0$, the event $\{T = n\}$ is given in terms of $X_0, \cdots, X_n$.
+\end{defi}
+For example, suppose we are in a casino and gambling. We let $X_n$ be the amount of money we have at time $n$. Then we can set our stopping time as ``the time when we have $\$10$ left''. This is a stopping time, in the sense that we can use this as a guide to when to stop --- it is certainly possible to set yourself a guide that you should leave the casino when you have $\$10$ left. However, it does not make sense to say ``I will leave if the next game will make me bankrupt'', since there is no way to tell if the next game will make you bankrupt (it certainly will not if you win the game!). Hence this is not a stopping time.
+
+\begin{eg}
+ The hitting time $H^A$ is a stopping time. This is since $\{H^A = n\} = \{X_i \not\in A\text{ for }i < n\} \cap \{X_n \in A\}$. We also know that $H^A + 1$ is a stopping time, since the event $\{H^A + 1 = n\}$ depends only on $X_i$ for $i \leq n - 1$. However, $H^A - 1$ is in general not a stopping time, since the event $\{H^A - 1 = n\}$ depends on $X_{n + 1}$.
+\end{eg}
+
+We can now state the strong Markov property, which is expressed in a rather complicated manner but can be useful at times.
+\begin{thm}[Strong Markov property]
+ Let $X$ be a Markov chain with transition matrix $P$, and let $T$ be a stopping time for $X$. Given $T < \infty$ and $X_T = i$, the chain $(X_{T + k}: k \geq 0)$ is a Markov chain with transition matrix $P$ started at $i$, and this Markov chain is independent of $X_0, \cdots, X_T$.
+\end{thm}
+Proof is omitted.
+
+\begin{eg}[Gambler's ruin]
+ Again, this is the Markov chain taking values on the non-negative integers, moving to the right with probability $p$ and left with probability $q = 1 - p$. $0$ is an absorbing state, since we have no money left to bet if we are broke.
+
+ Instead of computing the probability of hitting zero, we want to find the time it takes to get to $0$, i.e.
+ \[
+ H = \inf\{n \geq 0: X_n = 0\}.
+ \]
+ Note that the infimum of the empty set is $+\infty$, i.e.\ if we never hit zero, we say it takes infinite time. What is the distribution of $H$? We define the generating function
+ \[
+ G_i(s) = \E_i(s^H) = \sum_{n = 0}^\infty s^n \P_i(H = n),\quad |s| < 1.
+ \]
+ Note that we need the requirement $|s| < 1$, since it is possible that $H$ is infinite, and we do not want to worry about whether the sum converges when $s = 1$. For $|s| < 1$, however, we know that it does converge.
+
+ We have
+ \[
+ G_1(s) = \E_1(s^H) = p\E_1(s^H\mid X_1 = 2) + q\E_1(s^H\mid X_1 = 0).
+ \]
+ How can we simplify this? The second term is easy, since if $X_1 = 0$, then we must have $H = 1$. So $\E_1(s^H\mid X_1 = 0) = s$. The first term is more tricky. We are now at $2$. To get to $0$, we need to pass through $1$. So the time needed to get to $0$ is the time to get from $2$ to $1$ (say $H'$), plus the time to get from $1$ to $0$ (say $H''$). We know that $H'$ and $H''$ have the same distribution as $H$, and by the strong Markov property, they are independent. So
+ \[
+ G_1 = p\E_1(s^{H' + H'' + 1}) + qs = psG_1^2 + qs.\tag{$*$}
+ \]
+ Solving this, we get two solutions
+ \[
+ G_1 (s) = \frac{1 \pm \sqrt{1 - 4pqs^2}}{2ps}.
+ \]
+ We have to be careful here. This result says that for each value of $s$, $G_1(s)$ is \emph{either} $\frac{1 + \sqrt{1 - 4pqs^2}}{2ps}$ \emph{or} $\frac{1 - \sqrt{1 - 4pqs^2}}{2ps}$. It does \emph{not} say that there is some consistent choice of $+$ or $-$ that works for everything.
+
+ However, we know that if we suddenly change the sign, then $G_1(s)$ will be discontinuous at that point, but $G_1$, being a power series, has to be continuous. So the solution must be either $+$ for all $s$, or $-$ for all $s$.
+
+ To determine the sign, we can look at what happens when $s = 0$. We see that the numerator becomes $1 \pm 1$, while the denominator is $0$. We know that $G$ converges at $s = 0$. Hence the numerator must be $0$. So we must pick $-$, i.e.
+ \[
+ G_1 (s) = \frac{1 - \sqrt{1 - 4pqs^2}}{2ps}.
+ \]
+ We can find $\P_1(H = k)$ by expanding the Taylor series.
+
+ What is the probability of ever hitting $0$? This is
+ \[
+ \P_1(H < \infty) = \sum_{n = 1}^\infty \P_1(H = n) = \lim_{s\to 1} G_1(s) = \frac{1 - \sqrt{1 - 4pq}}{2p}.
+ \]
+ We can rewrite this using the fact that $q = 1 - p$. So $1 - 4pq = 1 - 4p(1 - p) = (1 - 2p)^2 = |q - p|^2$. So we can write
+ \[
+ \P_1(H < \infty) = \frac{1 - |p - q|}{2p} =
+ \begin{cases}
+ 1 & p \leq q\\
+ \frac{q}{p} & p > q
+ \end{cases}.
+ \]
+ Using this, we can also find $\mu = \E_1(H)$. Firstly, if $p > q$, then it is possible that $H = \infty$. So $\mu = \infty$. If $p \leq q$, we can find $\mu$ by differentiating $G_1(s)$ and evaluating at $s = 1$. Doing this directly would result in horrible and messy algebra, which we want to avoid. Instead, we can differentiate $(*)$ and obtain
+ \[
+ G_1' = pG_1^2 + 2ps G_1G_1' + q.
+ \]
+ We can rewrite this as
+ \[
+ G_1'(s) = \frac{pG_1(s)^2 + q}{1 - 2ps G_1(s)}.
+ \]
+ By Abel's lemma, we have
+ \[
+ \mu = \lim_{s \to 1}G_1'(s) =
+ \begin{cases}
+ \infty & p = \frac{1}{2}\\
+ \frac{1}{q - p} & p < \frac{1}{2}
+ \end{cases}.
+ \]
+\end{eg}
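The closed form for $G_1(s)$ can be checked numerically (my own code; the parameters $p$, $q$, $s$ are arbitrary choices of mine): compute $\P_1(H = n)$ by dynamic programming on the walk with $0$ absorbing, and compare the truncated power series with the formula.

```python
# Numerical check of the closed form for G_1(s) (my own code; the
# parameters p, q, s are arbitrary choices): compute P_1(H = n) by
# dynamic programming with 0 absorbing, and compare the truncated
# power series with the formula.
from math import sqrt

p, q, s = 0.3, 0.7, 0.9

M = 200                        # cap the state space; the walk drifts
                               # left here, so mass beyond M is negligible
dist = [0.0] * (M + 1)         # dist[i] = P(X_{n-1} = i, H > n - 1)
dist[1] = 1.0                  # start at 1
G = 0.0
for n in range(1, 400):
    G += s ** n * q * dist[1]  # first hit 0 exactly at step n
    new = [0.0] * (M + 1)
    for i in range(1, M):
        new[i + 1] += p * dist[i]
        if i > 1:
            new[i - 1] += q * dist[i]
    dist = new

closed = (1 - sqrt(1 - 4 * p * q * s ** 2)) / (2 * p * s)
assert abs(G - closed) < 1e-6
```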
+
+\subsection{Further classification of states}
+So far, we have classified chains as, say, irreducible or reducible, and we have seen that states can be recurrent or transient. By definition, a state is recurrent if the probability of returning to it is $1$. However, we can further classify recurrent states. Even if a state is recurrent, it is possible that the expected time of returning is infinite. So while we will eventually return to the original state, this is likely to take a long, long time. The opposite behaviour is also possible --- the original state might be very attracting, and we are likely to return quickly. It turns out this distinction can affect the long-term behaviour of the chain.
+
+First we have the following proposition, which tells us that if a state is recurrent, then we are expected to return to it infinitely many times.
+\begin{thm}
+ Suppose $X_0 = i$. Let $V_i = |\{n \geq 1: X_n = i\}|$. Let $f_{i, i} = \P_i(T_i < \infty)$. Then
+ \[
+ \P_i (V_i = r) = f_{i, i}^r (1 - f_{i, i}),
+ \]
+ since we have to return $r$ times, each with probability $f_{i, i}$, and then never return.
+
+ Hence, if $f_{i, i} = 1$, then $\P_i(V_i = r) = 0$ for all finite $r$. So $\P_i(V_i = \infty) = 1$, i.e.\ we return infinitely often. Otherwise, $V_i$ has a genuine geometric distribution, and we get $\P_i(V_i < \infty) = 1$.
+\end{thm}
+
+\begin{proof}
+ Exercise, using the strong Markov property.
+\end{proof}
+
+\begin{defi}[Mean recurrence time]
+ Let $T_i$ be the returning time to a state $i$. Then the \emph{mean recurrence time} of $i$ is
+ \[
+ \mu_i = \E_i(T_i) =
+ \begin{cases}
+ \infty & i\text{ transient}\\
+ \sum_{n = 1}^\infty n f_{i, i}(n) & i\text{ recurrent}
+ \end{cases}
+ \]
+\end{defi}
+
+\begin{defi}[Null and positive state]
+ If $i$ is recurrent, we call $i$ a \emph{null state} if $\mu_i = \infty$. Otherwise $i$ is \emph{non-null} or \emph{positive}.
+\end{defi}
+
+This is mostly all we care about. However, there is still one more technical consideration. Recall that in the random walk starting from the origin, we can only return to the origin after an even number of steps. This causes problems for a lot of our future results. For example, we will later look at the ``convergence'' of Markov chains. If we are very likely to return to $0$ after an even number of steps, but it is impossible to return after an odd number of steps, we don't get convergence. Hence we would like to prevent this from happening.
+
+\begin{defi}[Period]
+ The \emph{period} of a state $i$ is $d_i = \gcd\{n \geq 1: p_{i, i}(n)> 0\}$.
+
+ A state is \emph{aperiodic} if $d_i = 1$.
+\end{defi}
+In general, we like aperiodic states. This is not a very severe restriction. For example, in the random walk, we can get rid of periodicity by saying there is a very small chance of staying at the same spot in each step. We can make this chance so small that we can ignore its existence for most practical purposes, but it will help us get rid of the technical problem of periodicity.
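The period and the holding-probability trick can both be seen on a small example (my own toy chain, not from the lectures), computing $d_i = \gcd\{n \geq 1 : p_{i,i}(n) > 0\}$ from powers of $P$.

```python
# Toy example of mine: compute the period d_i = gcd{n >= 1 :
# p_{i,i}(n) > 0} from powers of the transition matrix.
from math import gcd
from functools import reduce

# 3-cycle 0 -> 1 -> 2 -> 0, so every state has period 3.
P = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, i, nmax=30):
    times, Q = [], P
    for n in range(1, nmax + 1):   # Q = P^n at step n
        if Q[i][i] > 0:
            times.append(n)
        Q = matmul(Q, P)
    return reduce(gcd, times)

assert period(P, 0) == 3

# A tiny holding probability makes the chain aperiodic.
eps = 0.01
P2 = [[eps if i == j else (1 - eps) * P[i][j] for j in range(3)]
      for i in range(3)]
assert period(P2, 0) == 1
```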
+
+\begin{defi}[Ergodic state]
+ A state $i$ is \emph{ergodic} if it is aperiodic and positive recurrent.
+\end{defi}
+These are the really nice states.
+
+Recall that recurrence is a class property --- if two states are in the same communicating class, then they are either both recurrent, or both transient. Is this true for the properties above as well? The answer is yes.
+
+\begin{thm}
+ If $i \leftrightarrow j$ are communicating, then
+ \begin{enumerate}
+ \item $d_i = d_j$.
+ \item $i$ is recurrent iff $j$ is recurrent.
+ \item $i$ is positive recurrent iff $j$ is positive recurrent.
+ \item $i$ is ergodic iff $j$ is ergodic.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Assume $i \leftrightarrow j$. Then there are $m, n \geq 1$ with $p_{i, j}(m), p_{j, i}(n) > 0$. By the Chapman-Kolmogorov equation, we know that
+ \[
+ p_{i, i}(m + r + n) \geq p_{i, j}(m)p_{j, j}(r)p_{j, i}(n) \geq \alpha p_{j, j}(r),
+ \]
+ where $\alpha = p_{i, j}(m) p_{j, i}(n) > 0$. Now let $D_j = \{r \geq 1: p_{j, j}(r) > 0\}$. Then by definition, $d_j = \gcd D_j$.
+
+ We have just shown that if $r \in D_j$, then we have $m + r + n \in D_i$. We also know that $m + n \in D_i$, since $p_{i, i}(m + n) \geq p_{i, j}(m)p_{j, i}(n) > 0$. Hence for any $r \in D_j$, we know that $d_i \mid m + r + n$, and also $d_i \mid m + n$. So $d_i \mid r$. Hence $\gcd D_i \mid \gcd D_j$. By symmetry, $\gcd D_j \mid \gcd D_i$ as well. So $\gcd D_i = \gcd D_j$.
+ \item Proved before.
+ \item This is deferred to a later time.
+ \item Follows directly from (i), (ii) and (iii) by definition.\qedhere
+ \end{enumerate}
+\end{proof}
+
+We also have the following proposition we will use later on:
+\begin{prop}
+ If the chain is irreducible and $j \in S$ is recurrent, then
+ \[
+ \P(X_n = j\text{ for some }n \geq 1) = 1,
+ \]
+ regardless of the distribution of $X_0$.
+\end{prop}
+Note that this is not just the definition of recurrence, since recurrence says that if we start at $j$, then we will return to $j$. Here we are saying that wherever we start, we will eventually visit $j$.
+
+\begin{proof}
+ Let
+ \[
+ f_{i, j} = \P_i(X_n = j\text{ for some }n \geq 1).
+ \]
+ Since $j \to i$, there exists a \emph{least} integer $m \geq 1$ with $p_{j, i}(m) > 0$. Since $m$ is least, we know that
+ \[
+ p_{j, i}(m) = \P_j(X_m = i, X_r \not= j\text{ for }r < m).
+ \]
+ This is since we cannot visit $j$ in the path, or else we can truncate our path and get a shorter path from $j$ to $i$. Then
+ \[
+ p_{j, i}(m)(1 - f_{i, j}) \leq 1 - f_{j, j}.
+ \]
+ This is since the left hand side is the probability that we first go from $j$ to $i$ in $m$ steps, and then never go from $i$ to $j$ again; while the right is just the probability of never returning to $j$ starting from $j$; and we know that it is easier to just not get back to $j$ than to go to $i$ in exactly $m$ steps and never returning to $j$. Hence if $f_{j, j} = 1$, then $f_{i, j} = 1$.
+
+ Now let $\lambda_k = \P(X_0 = k)$ be our initial distribution. Then
+ \[
+ \P(X_n = j\text{ for some }n \geq 1) = \sum_i \lambda_i \P_i (X_n =j\text{ for some }n \geq 1) = \sum_i \lambda_i f_{i, j} = 1.\qedhere
+ \]
+\end{proof}
+
+\section{Long-run behaviour}
+\subsection{Invariant distributions}
+We want to look at what happens in the long run. Recall that at the very beginning of the course, we calculated the transition probabilities of the two-state Markov chain, and saw that in the long run, as $n \to \infty$, the probability distribution of the $X_n$ will ``converge'' to some particular distribution. Moreover, this limit does not depend on where we started. We'd like to see if this is true for all Markov chains.
+
+First of all, we want to make it clear what we mean by the chain ``converging'' to something. When we are dealing with real sequences $x_n$, we have a precise definition of what it means for $x_n \to x$. How can we define the convergence of a sequence of random variables $X_n$? These are not proper numbers, so we cannot just apply our usual definitions.
+
+For the purposes of our investigation of Markov chains here, it turns out that the right way to think about convergence is to look at the probability mass function. For each $k \in S$, we ask if $\P(X_n = k)$ converges to anything.
+
+In most cases, $\P(X_n = k)$ converges to \emph{something}. Hence, this is not an interesting question to ask. What we would \emph{really} want to know is whether the limit is a probability mass function. It is, for example, possible that $\P(X_n = k) \to 0$ for all $k$, and the result is not a distribution.
+
+From Analysis, we know there are, in general, two ways to prove something converges --- we either ``guess'' a limit and then prove it does indeed converge to that limit, or we prove the sequence is Cauchy. It would be rather silly to prove that these probabilities form a Cauchy sequence, so we will attempt to guess the limit. The limit will be known as an \emph{invariant distribution}, for reasons that will become obvious shortly.
+
+The main focus of this section is to study the existence and properties of invariant distributions, and we will provide sufficient conditions for convergence to occur in the next.
+
+Recall that if we have a starting distribution $\lambda$, then the distribution of the $n$th step is given by $\lambda P^n$. We have the following trivial identity:
+\[
+ \lambda P^{n + 1} = (\lambda P^n) P.
+\]
+If the distribution converges, then we have $\lambda P^n \to \pi$ for some $\pi$, and also $\lambda P^{n + 1} \to \pi$. Hence the limit $\pi$ satisfies
+\[
+ \pi P = \pi.
+\]
+We call these \emph{invariant distributions}.
+
+\begin{defi}[Invariant distribution]
+ Let $X_j$ be a Markov chain with transition probabilities $P$. The distribution $\pi = (\pi_k: k \in S)$ is an \emph{invariant distribution} if
+ \begin{enumerate}
+ \item $\pi_k \geq 0$, $\sum_k \pi_k = 1$.
+ \item $\pi = \pi P$.
+ \end{enumerate}
+ The first condition just ensures that this is a genuine distribution.
+
+ An invariant distribution is also known as an invariant measure, equilibrium distribution or steady-state distribution.
+\end{defi}
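For concreteness, we can check the definition numerically on the two-state chain from the start of the course. Everything below is an illustrative sketch: the values of $a$ and $b$ are arbitrary choices, and we use the closed-form invariant distribution $(b, a)/(a + b)$ of the two-state chain.

```python
import numpy as np

# Two-state chain: from state 0 move to 1 with probability a,
# from state 1 move to 0 with probability b (illustrative values).
a, b = 0.3, 0.6
P = np.array([[1 - a, a],
              [b, 1 - b]])

# For the two-state chain the invariant distribution is known in
# closed form: pi = (b, a) / (a + b).
pi = np.array([b, a]) / (a + b)

print(pi @ P)  # equal to pi itself: one step leaves the distribution unchanged
```

The same idea works for any finite chain: an invariant distribution is a left eigenvector of $P$ with eigenvalue $1$, normalised to sum to $1$.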
+
+\begin{thm}
+ Consider an irreducible Markov chain. Then
+ \begin{enumerate}
+ \item There exists an invariant distribution if some state is positive recurrent.
+ \item If there is an invariant distribution $\pi$, then every state is positive recurrent, and
+ \[
+ \pi_i = \frac{1}{\mu_i}
+ \]
+ for $i \in S$, where $\mu_i$ is the mean recurrence time of $i$. In particular, $\pi$ is unique.
+ \end{enumerate}
+\end{thm}
+Note how we worded the first statement. Recall that we once stated that if one state is positive recurrent, then all states are positive recurrent, and then said we would defer the proof for later on. This is where we actually prove it. In (i), we show that if some state is positive recurrent, then it has an invariant distribution. Then (ii) tells us if there is an invariant distribution, then \emph{all} states are positive recurrent. Hence one state being positive recurrent implies all states being positive recurrent.
+
+Now where did the formula for $\pi$ come from? We can first think what $\pi_i$ should be. By definition, we should know that for large $m$, $\P(X_m = i) \sim \pi_i$. This means that if we run the chain for a very long time, we would expect to be in state $i$ a proportion $\pi_i$ of the time. On the other hand, the mean recurrence time tells us that we are expected to (re)-visit $i$ every $\mu_i$ steps. So it makes sense that $\pi_i = 1/\mu_i$.
+
+To put this on a more solid ground and actually prove it, we would like to look at some time intervals. For example, we might ask how many times we will hit $i$ in 100 steps. This is not a good thing to do, because we are not given where we are starting, and this probability can depend a lot on where the starting point is.
+
+It turns out the natural thing to do is not to use a fixed time interval, but use a \emph{random} time interval. In particular, we fix a state $k$, and look at the time interval between two consecutive visits of $k$.
+
+We start by letting $X_0 = k$. Let $W_i$ denote the number of visits to $i$ before the next visit to $k$. Formally, we have
+\[
+ W_i = \sum_{m = 1}^\infty 1(X_m = i, m \leq T_k),
+\]
+where $T_k$ is the recurrence time of $k$ and $1$ is the indicator function. In particular, $W_i = 1$ for $i = k$ (if $T_k$ is finite). We can also write this as
+\[
+ W_i = \sum_{m = 1}^{T_k} 1(X_m = i).
+\]
+This is a random variable. So we can look at its expectation. We define
+\[
+ \rho_i = \E_k(W_i).
+\]
+We will show that this $\rho$ is \emph{almost} our $\pi_i$, up to a constant.
+\begin{prop}
+ For an irreducible recurrent chain and $k \in S$, let $\rho = (\rho_i: i \in S)$ be defined as above by
+ \[
+ \rho_i = \E_k(W_i),\quad W_i = \sum_{m = 1}^\infty 1(X_m = i, T_k \geq m).
+ \]
+ Then
+ \begin{enumerate}
+ \item $\rho_k = 1$
+ \item $\sum_i \rho_i = \mu_k$
+ \item $\rho = \rho P$
+ \item $0 < \rho_i < \infty$ for all $i \in S$.
+ \end{enumerate}
+\end{prop}
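Before proving this, we can sanity-check it by simulation: run many independent excursions from $k$ back to $k$, and average the visit counts. Everything below (the $3\times 3$ transition matrix and the number of trials) is an illustrative sketch, not from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small irreducible chain (any finite irreducible chain is recurrent,
# in fact positive recurrent); the matrix is an arbitrary choice.
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.4, 0.6, 0.0]])
k = 0                      # reference state

trials = 20_000
counts = np.zeros(3)       # total visits to each state during excursions
steps = 0                  # total excursion length
for _ in range(trials):
    x = k
    while True:            # one excursion: k -> ... -> first return to k
        x = rng.choice(3, p=P[x])
        counts[x] += 1
        steps += 1
        if x == k:
            break

rho = counts / trials      # estimates rho_i = E_k(W_i)
mu_k = steps / trials      # estimates the mean recurrence time of k
```

For these estimates, $\rho_k = 1$ and $\sum_i \rho_i = \mu_k$ hold exactly (each excursion visits $k$ once and contributes its length to both sides), while $\rho = \rho P$ holds up to Monte Carlo error.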
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item This follows from definition of $\rho_i$, since for $m < T_k$, $X_m \not= k$.
+ \item Note that $\sum_i W_i = T_k$, since in each step we hit exactly one thing. We have
+ \begin{align*}
+ \sum_i \rho_i &= \sum_i \E_k(W_i)\\
+ &= \E_k\left(\sum_i W_i\right)\\
+ &= \E_k(T_k) \\
+ &= \mu_k.
+ \end{align*}
+ Note that we secretly swapped the sum and expectation, which is in general bad because both are potentially infinite sums. However, there is a theorem (monotone convergence) that tells us this is okay whenever the summands are non-negative, which is left as an Analysis exercise.
+ \item We have
+ \begin{align*}
+ \rho_j &= \E_k(W_j)\\
+ &= \E_k\left(\sum_{m \geq 1}1(X_m = j, T_k \geq m)\right) \\
+ &= \sum_{m\geq 1}\P_k(X_m = j, T_k \geq m)\\
+ &= \sum_{m \geq 1}\sum_{i \in S} \P_k(X_m = j \mid X_{m - 1} = i, T_k \geq m) \P_k(X_{m - 1} = i, T_k \geq m)\\
+ \intertext{We now use the Markov property. Note that $T_k \geq m$ means $X_1, \cdots, X_{m - 1}$ are all not $k$. The Markov property thus tells us the condition $T_k \geq m$ is useless. So we are left with}
+ &= \sum_{m \geq 1}\sum_{i \in S} \P_k(X_m = j \mid X_{m - 1} = i) \P_k(X_{m - 1} = i, T_k \geq m)\\
+ &= \sum_{m \geq 1}\sum_{i \in S} p_{i, j} \P_k(X_{m - 1} = i, T_k \geq m)\\
+ &= \sum_{i \in S} p_{i, j} \sum_{m \geq 1} \P_k(X_{m - 1} = i, T_k \geq m)
+ \end{align*}
+ The last sum looks very much like $\rho_i$, but the indices are slightly off. We shall have faith in ourselves, and show that this is indeed equal to $\rho_i$.
+
+ First we let $r = m - 1$, and get
+ \[
+ \sum_{m \geq 1} \P_k (X_{m - 1} = i, T_k \geq m) = \sum_{r = 0}^\infty \P_k(X_r = i, T_k \geq r + 1).
+ \]
+ Of course this does not fix the problem. We will look at the different possible cases. First, if $i = k$, then the $r = 0$ term is $1$, since $T_k \geq 1$ is always true by definition and $X_0 = k$ by construction. On the other hand, the other terms are all zero, since it is impossible for the return time to be greater than or equal to $r + 1$ if we are at $k$ at time $r$. So the sum is $1$, which is $\rho_k$.
+
+ In the case where $i \not= k$, first note that when $r = 0$ we know that $X_0 = k \not= i$. So the term is zero. For $r \geq 1$, we know that if $X_r = i$ and $T_k \geq r$, then we must also have $T_k \geq r + 1$, since it is impossible for the return time to $k$ to be exactly $r$ if we are not at $k$ at time $r$. So $\P_k(X_r = i , T_k \geq r + 1) = \P_k(X_r = i, T_k \geq r)$. So indeed we have
+ \[
+ \sum_{m \geq 1} \P_k (X_{m - 1} = i, T_k \geq m) = \rho_i.
+ \]
+ Hence we get
+ \[
+ \rho_j = \sum_{i \in S} p_{ij} \rho_i.
+ \]
+ So done.
+
+ \item To show that $0 < \rho_i < \infty$, first fix our $i$, and note that $\rho_k = 1$. We know that $\rho = \rho P = \rho P^n$ for $n \geq 1$. So by expanding the matrix sum, we know that for any $m, n$,
+ \begin{align*}
+ \rho_i &\geq \rho_k p_{k, i}(n)\\
+ \rho_k &\geq \rho_i p_{i, k}(m)
+ \end{align*}
+ By irreducibility, we now choose $m, n$ such that $p_{i, k}(m), p_{k, i}(n) > 0$. So we have
+ \[
+ \rho_k p_{k, i}(n) \leq \rho_i \leq \frac{\rho_k}{p_{i, k}(m)}
+ \]
+ Since $\rho_k = 1$, the result follows.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Now we can prove our initial theorem.
+\begin{thm}
+ Consider an irreducible Markov chain. Then
+ \begin{enumerate}
+ \item There exists an invariant distribution if and only if some state is positive recurrent.
+ \item If there is an invariant distribution $\pi$, then every state is positive recurrent, and
+ \[
+ \pi_i = \frac{1}{\mu_i}
+ \]
+ for $i \in S$, where $\mu_i$ is the mean recurrence time of $i$. In particular, $\pi$ is unique.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $k$ be a positive recurrent state, so that $\mu_k < \infty$. Then by the previous proposition,
+ \[
+ \pi_i = \frac{\rho_i}{\mu_k}
+ \]
+ satisfies $\pi_i \geq 0$ with $\sum_i \pi_i = \frac{1}{\mu_k} \sum_i \rho_i = 1$, and $\pi P = \frac{1}{\mu_k} \rho P = \frac{1}{\mu_k} \rho = \pi$. So $\pi$ is an invariant distribution.
+ \item Let $\pi$ be an invariant distribution. We first show that all entries are non-zero. For all $n$, we have
+ \[
+ \pi = \pi P^n.
+ \]
+ Hence for all $i, j \in S$, $n \in \N$, we have
+ \[
+ \pi_i \geq \pi_j p_{j,i}(n).\tag{$*$}
+ \]
+ Since $\sum_i \pi_i = 1$, there is some $k$ such that $\pi_k > 0$.
+
+ By $(*)$ with $j = k$, we know that
+ \[
+ \pi_i \geq \pi_k p_{k,i}(n) > 0
+ \]
+ for some $n$, by irreducibility. So $\pi_i > 0$ for all $i$.
+
+ Now we show that all states are positive recurrent. So we need to rule out the cases of transience and null recurrence.
+
+ So assume all states are transient. Then $p_{j, i}(n) \to 0$ as $n \to \infty$ for all $i, j\in S$. However, we know that
+ \[
+ \pi_i = \sum_j \pi_j p_{j, i}(n).
+ \]
+ If our state space is finite, then since $p_{j, i}(n) \to 0$, the sum tends to $0$, and we reach a contradiction, since $\pi_i$ is non-zero. If we have a countably infinite set, we have to be more careful. We have a huge state space $S$, and we don't know how to work with it. So we approximate it by a finite $F$, and split $S$ into $F$ and $S\setminus F$. So we get
+ \begin{align*}
+ 0 &\leq \sum_j \pi_j p_{j, i}(n)\\
+ &= \sum_{j \in F} \pi_j p_{j, i}(n) + \sum_{j \not\in F} \pi_j p_{j, i}(n) \\
+ &\leq \sum_{j \in F}p_{j, i}(n) + \sum_{j \not\in F}\pi_j\\
+ &\to \sum_{j \not\in F}\pi_j
+ \end{align*}
+ as we take the limit $n \to \infty$. We now want to take the limit as $F \to S$. We know that $\sum_{j \in S} \pi_j = 1$. So as we put more and more things into $F$, $\sum_{j \not\in F} \pi_j \to 0$. So $\sum_j \pi_j p_{j, i}(n) \to 0$. So we get the desired contradiction. Hence we know that all states are recurrent.
+
+ To rule out the case of null recurrence, recall that in the previous discussion, we said that we ``should'' have $\pi_i \mu_i = 1$. So we attempt to prove this. Then this would imply that $\mu_i$ is finite, since $\pi_i > 0$.
+
+ By definition $\mu_i = \E_i(T_i)$, and we have the general formula
+ \[
+ \E(N) = \sum_{n = 1}^\infty \P(N \geq n).
+ \]
+ So we get
+ \[
+ \pi_i \mu_i = \sum_{n = 1}^\infty \pi_i\P_i (T_i \geq n).
+ \]
+ Note that $\P_i$ is a probability conditional on starting at $i$. So to work with the expression $\pi_i \P_i (T_i \geq n)$, it is helpful to let $\pi_i$ be the probability of starting at $i$. So suppose $X_0$ has distribution $\pi$. Then
+ \[
+ \pi_i \mu_i = \sum_{n = 1}^\infty \P(T_i \geq n, X_0 = i).
+ \]
+ Let's work out what the terms are. What is the first term? It is
+ \[
+ \P(T_i \geq 1, X_0 = i) = \P(X_0 = i) = \pi_i,
+ \]
+ since we know that we always have $T_i \geq 1$ by definition.
+
+ For other $n \geq 2$, we want to compute $\P(T_i \geq n, X_0 = i)$. This is the probability of starting at $i$, and then not returning to $i$ in the next $n - 1$ steps. So we have
+ \[
+ \P(T_i \geq n, X_0 = i) = \P(X_0 = i, X_m \not= i\text{ for } 1 \leq m \leq n - 1)
+ \]
+ Note that all the expressions on the right look rather similar, except that the first term is $=i$ while the others are $\not= i$. We can make them look more similar by writing
+ \begin{align*}
+ \P(T_i \geq n, X_0 = i) &= \P(X_m \not= i\text{ for } 1\leq m \leq n - 1) \\
+ &\quad\quad- \P(X_m \not= i\text{ for } 0\leq m \leq n - 1)
+ \end{align*}
+ What can we do now? The trick here is to use invariance. Since we started with an invariant distribution, the chain is in that distribution at every time. Looking at the time interval $1 \leq m \leq n - 1$ is thus the same as looking at $0 \leq m \leq n - 2$. In other words, the sequence $(X_0, \cdots, X_{n - 2})$ has the same distribution as $(X_1, \cdots, X_{n - 1})$. So we can write the expression as
+ \[
+ \P(T_i \geq n, X_0 = i) = a_{n - 2} - a_{n - 1},
+ \]
+ where
+ \[
+ a_r = \P(X_m \not= i \text{ for }0 \leq m \leq r).
+ \]
+ Now we are summing differences, and when we sum differences everything cancels term by term. Then we have
+ \[
+ \pi_i \mu_i = \pi_i + (a_0 - a_1) + (a_1 - a_2) + \cdots
+ \]
+ Note that we cannot do the cancellation directly, since this is an infinite sum, and infinity behaves weirdly. We have to look at a finite truncation, do the cancellation, and take the limit. So we have
+ \begin{align*}
+ \pi_i \mu_i &= \lim_{N \to \infty} [\pi_i + (a_0 - a_1) + (a_1 - a_2) + \cdots + (a_{N - 2} - a_{N - 1})]\\
+ &= \lim_{N\to \infty} [\pi_i + a_0 - a_{N - 1}]\\
+ &= \pi_i + a_0 - \lim_{N \to \infty} a_N.
+ \end{align*}
+ What is each term? $\pi_i$ is the probability that $X_0 = i$, and $a_0$ is the probability that $X_0 \not= i$. So we know that $\pi_i + a_0 = 1$. What about $\lim a_N$? We know that
+ \[
+ \lim_{N\to \infty} a_N = \P(X_m \not= i \text{ for all }m).
+ \]
+ Since the state is recurrent, the probability of never visiting $i$ is $0$. So we get
+ \[
+ \pi_i \mu_i = 1.
+ \]
+ Since $\pi_i > 0$, we get $\mu_i = \frac{1}{\pi_i} < \infty$ for all $i$. Hence we have positive recurrence. We have also proved the formula we wanted.\qedhere
+ \end{enumerate}
+\end{proof}
+\subsection{Convergence to equilibrium}
+So far, we have discussed that if a chain converged, then it must converge to an invariant distribution. We then proved that the chain has a (unique) invariant distribution if and only if it is positive recurrent.
+
+Now, we want to understand when convergence actually occurs.
+\begin{thm}
+ Consider a Markov chain that is irreducible, positive recurrent and aperiodic. Then
+ \[
+ p_{i,k}(n) \to \pi_k
+ \]
+ as $n \to \infty$, where $\pi$ is the unique invariant distribution.
+\end{thm}
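Numerically, the theorem says that every row of $P^n$ converges to the same vector $\pi$. A quick sketch (the $3\times 3$ matrix is an arbitrary illustrative choice; it is irreducible, and aperiodic since $p_{0,0} > 0$):

```python
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])

# p_{i,k}(n) is the (i, k) entry of P^n; for large n all rows agree.
Pn = np.linalg.matrix_power(P, 50)

# Invariant distribution: left eigenvector of P for the eigenvalue 1,
# normalised to sum to 1.  For this matrix pi works out to (12, 15, 16)/43.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
```

Already after $50$ steps the rows of $P^n$ agree with $\pi$ to machine precision here, since the second-largest eigenvalue of this matrix has modulus well below $1$.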
+We will prove this by ``coupling''. The idea of coupling is that here we have two sets of probabilities, and we want to prove some relations between them. The first step is to move our attention to random variables, by considering random variables that give rise to these probability distributions. In other words, we look at the Markov chains themselves instead of the probabilities. In general, random variables are nicer to work with, since they are functions, not discrete, unrelated numbers.
+
+However, we have a problem since we get two random variables, but they are completely unrelated. This is bad. So we will need to do some ``coupling'' to correlate the two random variables together.
+
+\begin{proof}(non-examinable)
+ The idea of the proof is to show that for any $i, j, k \in S$, we have $p_{i, k}(n) - p_{j, k}(n) \to 0$ as $n \to \infty$. Then we can argue that no matter where we start, we will tend to the same distribution. Since the chain started in the invariant distribution $\pi$ stays in $\pi$ forever, this common limit can only be $\pi$.
+
+ As mentioned, instead of working with probability distributions, we will work with the chains themselves. In particular, we have \emph{two} Markov chains, and we imagine one starts at $i$ and the other starts at $j$. To do so, we define the pair $Z = (X, Y)$ of \emph{two} independent chains, with $X = (X_n)$ and $Y = (Y_n)$ each having the state space $S$ and transition matrix $P$.
+
+ We can let $Z = (Z_n)$, where $Z_n = (X_n, Y_n)$ is a Markov chain on state space $S^2$. This has transition probabilities
+ \[
+ p_{ij, k\ell} = p_{i, k} p_{j, \ell}
+ \]
+ by independence of the chains. We would like to apply theorems to $Z$, so we need to make sure it has nice properties. First, we want to check that $Z$ is irreducible. We have
+ \[
+ p_{ij, k\ell}(n) = p_{i,k} (n) p_{j,\ell}(n).
+ \]
+ We want this to be strictly positive for some $n$. We know that there is $m$ such that $p_{i,k}(m) > 0$, and some $r$ such that $p_{j,\ell}(r) > 0$. However, what we need is an $n$ that makes them \emph{simultaneously} positive. We can indeed find such an $n$: since the chain is irreducible and aperiodic, a standard number-theoretic argument shows that $p_{i, k}(n) > 0$ for all sufficiently large $n$, and similarly for $p_{j, \ell}(n)$. So any large enough $n$ works for both.
+
+ Now we want to show positive recurrence. We know that $X$, and hence $Y$ is positive recurrent. By our previous theorem, there is a unique invariant distribution $\pi$ for $P$. It is then easy to check that $Z$ has invariant distribution
+ \[
+ \nu = (\nu_{ij}: ij \in S^2)
+ \]
+ given by
+ \[
+ \nu_{i, j} = \pi_i \pi_j.
+ \]
+ This works because $X$ and $Y$ are independent. So $Z$ is also positive recurrent.
+
+ So $Z$ is nice.
+
+ The next step is to couple the two chains together. The idea is to fix some state $s \in S$, and let $T$ be the earliest time at which $X_n = Y_n = s$. Because of recurrence, we can always find such a $T$. After this time $T$, $X$ and $Y$ behave under the exact same distribution.
+
+ We define
+ \[
+ T = \inf\{n: Z_n = (X_n, Y_n) = (s, s)\}.
+ \]
+ We have
+ \begin{align*}
+ p_{i, k}(n) &= \P_i(X_n = k)\\
+ &= \P_{ij}(X_n = k)\\
+ &= \P_{ij} (X_n = k, T \leq n) + \P_{ij} (X_n = k, T > n)\\
+ \intertext{Note that if $T \leq n$, then at time $T$, $X_T = Y_T$. Thus the evolution of $X$ and $Y$ after time $T$ is equal. So this is equal to}
+ &= \P_{ij}(Y_n = k, T \leq n) + \P_{ij} (X_n = k, T > n)\\
+ &\leq \P_{ij}(Y_n = k) + \P_{ij}(T > n)\\
+ &= p_{j,k}(n) + \P_{ij}(T > n).
+ \end{align*}
+ Hence we know that
+ \[
+ |p_{i, k}(n) - p_{j, k}(n)| \leq \P_{ij} (T > n).
+ \]
+ As $n \to \infty$, we know that $\P_{ij}(T > n) \to 0$ since $Z$ is recurrent. So
+ \[
+ |p_{i, k}(n) - p_{j, k}(n)| \to 0.
+ \]
+ With this result, we can prove what we want. First, by the invariance of $\pi$, we have
+ \[
+ \pi = \pi P^n
+ \]
+ for all $n$. So we can write
+ \[
+ \pi_k = \sum_j \pi_j p_{j, k}(n).
+ \]
+ Hence we have
+ \[
+ |\pi_k - p_{i, k}(n)| = \left|\sum_j \pi_j (p_{j, k}(n) - p_{i, k}(n))\right| \leq \sum_j \pi_j |p_{j, k}(n) - p_{i, k}(n)|.
+ \]
+ We know that each individual $|p_{j, k}(n) - p_{i, k}(n)|$ tends to zero. So by bounded convergence, we know
+ \[
+ \pi_k - p_{i, k}(n) \to 0.
+ \]
+ So done.
+\end{proof}
+What happens when we have a \emph{null} recurrent case? We would still be able to prove the result about $p_{i, k}(n) \to p_{j, k}(n)$, since $T$ is finite by recurrence. However, we do not have a $\pi$ to make the last step.
+
+Recall that we motivated our definition of $\pi_i$ as the proportion of time we spend in state $i$. Can we prove that this is indeed the case?
+
+More concretely, we let
+\[
+ V_i(n) = |\{m \leq n: X_m = i\}|.
+\]
+We thus want to know what happens to $V_i(n)/n$ as $n \to \infty$. We think this should tend to $\pi_i$.
+
+Note that technically, this is not a well-formed question, since we don't exactly know how convergence of random variables should be defined. Nevertheless, we can give an informal proof of this result.
+
+The idea is to look at the average time between successive visits. We assume $X_0 = i$. We let $T_m$ be the time of the $m$th return to $i$. In particular, $T_0 = 0$. We define $U_m = T_m - T_{m - 1}$. These are iid by the strong Markov property, and have mean $\mu_i$ by definition of $\mu_i$.
+
+Hence, by the law of large numbers,
+\[
+ \frac{1}{m} T_m = \frac{1}{m} \sum_{r = 1}^m U_r \to \E [U_1] = \mu_i. \tag{$*$}
+\]
+We now want to look at $V_i$. If we stare at them hard enough, we see that $V_i(n) \geq k$ if and only if $T_k \leq n$, since we have made at least $k$ (re)visits to $i$ by time $n$ precisely when the $k$th return happens no later than time $n$. We can write an equivalent statement by letting $k$ be a real number. We denote by $\lceil x\rceil$ the least integer greater than $x$. Then we have
+\[
+ V_i(n) \geq x \Leftrightarrow T_{\lceil x\rceil} \leq n.
+\]
+Putting a funny value of $x$ in, we get
+\[
+ \frac{V_i(n)}{n} \geq \frac{A}{\mu_i} \Leftrightarrow \frac{1}{n} T_{\lceil An/\mu_i\rceil} \leq 1.
+\]
+However, using $(*)$, we know that
+\[
+ \frac{T_{An/\mu_i}}{An/\mu_i} \to \mu_i.
+\]
+Multiply both sides by $A/\mu_i$ to get
+\[
+ \frac{A}{\mu_i} \frac{T_{An/\mu_i}}{An/\mu_i} = \frac{T_{An/\mu_i}}{n} \to \frac{A \mu_i}{\mu_i} = A.
+\]
+So if $A < 1$, the event $\frac{1}{n}T_{\lceil An/\mu_i\rceil} \leq 1$ occurs with probability tending to $1$; if $A > 1$, it occurs with probability tending to $0$. So in some sense,
+\[
+ \frac{V_i(n)}{n} \to \frac{1}{\mu_i} = \pi_i.
+\]
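We can test this informal conclusion by simulating a chain and recording the fraction of time spent in each state. The chain below is an illustrative example whose invariant distribution works out, by solving $\pi = \pi P$ by hand, to be $(12, 15, 16)/43$; the run length is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative irreducible aperiodic chain with pi = (12, 15, 16)/43.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
pi = np.array([12, 15, 16]) / 43

n_steps = 50_000
visits = np.zeros(3)
x = 0
for _ in range(n_steps):
    x = rng.choice(3, p=P[x])
    visits[x] += 1

frac = visits / n_steps    # V_i(n) / n: fraction of time spent in each state
# frac agrees with pi up to Monte Carlo error of order 1/sqrt(n_steps).
```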
+\section{Time reversal}
+Physicists have a hard time dealing with time reversal. They cannot find a decent explanation for why we can move in all directions in space, but can only move forward in time. Some physicists mumble something about entropy and pretend they understand the problem of time reversal, but they don't. However, in the world of Markov chain, time reversal is something we understand well.
+
+Suppose we have a Markov chain $X = (X_0, \cdots, X_N)$. We define a new Markov chain by $Y_k = X_{N - k}$. Then $Y = (X_N, X_{N - 1}, \cdots, X_0)$. When is this a Markov chain? It turns out this is the case if $X_0$ has the invariant distribution.
+
+\begin{thm}
+ Let $X$ be positive recurrent, irreducible with invariant distribution $\pi$. Suppose that $X_0$ has distribution $\pi$. Then $Y$ defined by
+ \[
+ Y_{k} = X_{N - k}
+ \]
+ is a Markov chain with transition matrix $\hat{P} = (\hat{p}_{i, j}: i, j \in S)$, where
+ \[
+ \hat{p}_{i, j} = \left(\frac{\pi_j}{\pi_i}\right) p_{j, i}.
+ \]
+ Also $\pi$ is invariant for $\hat{P}$.
+\end{thm}
+Most of the results here should not be surprising, apart from the fact that $Y$ is Markov. Since $Y$ is just $X$ reversed, the transition matrix of $Y$ is just the transpose of the transition matrix of $X$, with some factors to get the normalization right. Also, it is not surprising that $\pi$ is invariant for $\hat{P}$, since each $X_i$, and hence $Y_i$ has distribution $\pi$ by assumption.
+
+\begin{proof}
+ First we show that $\hat{P}$ is a stochastic matrix. We clearly have $\hat{p}_{i,j} \geq 0$. Also, for each $i$, we have
+ \[
+ \sum_j \hat{p}_{i, j} = \frac{1}{\pi_i} \sum_j \pi_j p_{j, i} = \frac{1}{\pi_i} \pi_i = 1,
+ \]
+ using the fact that $\pi = \pi P$.
+
+ We now show $\pi$ is invariant for $\hat{P}$: We have
+ \[
+ \sum_i \pi_i \hat{p}_{i, j} = \sum_i \pi_j p_{j, i} = \pi_j
+ \]
+ since $P$ is a stochastic matrix and $\sum_i p_{ji} = 1$.
+
+ Note that our formula for $\hat{p}_{i,j}$ gives
+ \[
+ \pi_i \hat{p}_{i, j} = p_{j, i} \pi_j.
+ \]
+ Now we have to show that $Y$ is a Markov chain. We have
+ \begin{align*}
+ \P(Y_0 = i_0, \cdots, Y_k = i_k) &= \P(X_{N - k} = i_k, X_{N - k + 1} = i_{k - 1}, \cdots, X_N = i_0)\\
+ &= \pi_{i_k} p_{i_k, i_{k - 1}} p_{i_{k - 1}, i_{k - 2}} \cdots p_{i_1, i_0}\\
+ &= (\pi_{i_k} p_{i_k, i_{k - 1}}) p_{i_{k - 1}, i_{k - 2}} \cdots p_{i_1, i_0}\\
+ &= \hat{p}_{i_{k - 1}, i_k} (\pi_{i_{k - 1}} p_{i_{k - 1}, i_{k - 2}}) \cdots p_{i_1, i_0}\\
+ &= \cdots\\
+ &= \pi_{i_0} \hat{p}_{i_0, i_1} \hat{p}_{i_1, i_2} \cdots \hat{p}_{i_{k - 1}, i_k}.
+ \end{align*}
+ So $Y$ is a Markov chain.
+\end{proof}
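In matrix language, $\hat{P} = \operatorname{diag}(\pi)^{-1} P^T \operatorname{diag}(\pi)$, and the claims of the theorem become one-line numerical checks. A sketch with an illustrative matrix (whose invariant distribution $(12, 15, 16)/43$ can be verified directly):

```python
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
pi = np.array([12, 15, 16]) / 43
assert np.allclose(pi @ P, pi)           # pi is invariant for P

# p_hat[i, j] = (pi[j] / pi[i]) * p[j, i]
P_hat = (P.T * pi[None, :]) / pi[:, None]

# P_hat has non-negative entries with rows summing to 1 (stochastic),
# and pi is invariant for P_hat too, exactly as the theorem claims.
```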
+
+Just because the reverse of a chain is a Markov chain does not mean it is ``reversible''. In physics, we say a process is reversible only if the dynamics of moving forwards in time is the same as the dynamics of moving backwards in time. Hence we have the following definition.
+
+\begin{defi}[Reversible chain]
+ An irreducible Markov chain $X = (X_0, \cdots, X_N)$ in its invariant distribution $\pi$ is \emph{reversible} if its reversal has the same transition probabilities as $X$ does, i.e.
+ \[
+ \pi_i p_{i, j} = \pi_j p_{j, i}
+ \]
+ for all $i, j \in S$.
+
+ This equation is known as the \emph{detailed balance equation}. In general, if $\lambda$ is a distribution that satisfies
+ \[
+ \lambda_i p_{i, j} = \lambda_j p_{j, i},
+ \]
+ we say $(P, \lambda)$ is in \emph{detailed balance}.
+\end{defi}
+Note that most chains are \emph{not} reversible, just like most functions are not continuous. However, if we know reversibility, then we have one powerful piece of information. The nice thing about this is that it is very easy to check if the above holds --- we just have to compute $\pi$, and check the equation directly.
+
+In fact, there is an even easier check. We don't have to start by finding $\pi$, but just some $\lambda$ that is in detailed balance.
+
+\begin{prop}
+ Let $P$ be the transition matrix of an irreducible Markov chain $X$. Suppose $(P, \lambda)$ is in detailed balance. Then $\lambda$ is the \emph{unique} invariant distribution and the chain is reversible (when $X_0$ has distribution $\lambda$).
+\end{prop}
+
+This is a much better criterion. To find $\pi$, we need to solve
+\[
+ \pi_i = \sum_j \pi_j p_{j, i},
+\]
+and this has a big scary sum on the right. However, to find the $\lambda$, we just need to solve
+\[
+ \lambda_i p_{i, j} = \lambda_j p_{j, i},
+\]
+and there is no sum involved. So this is indeed a helpful result.
+
+\begin{proof}
+ It suffices to show that $\lambda$ is invariant. Then it is automatically unique and the chain is by definition reversible. This is easy to check. We have
+ \[
+ \sum_j \lambda_j p_{j, i} = \sum_j \lambda_i p_{i, j} = \lambda_i \sum_j p_{i, j} = \lambda_i.
+ \]
+ So $\lambda$ is invariant.
+\end{proof}
+This gives a really quick route to computing invariant distributions.
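As a quick numerical illustration of the proposition, we can build a chain satisfying detailed balance by construction and confirm that $\lambda$ is then invariant. Both the matrix and $\lambda$ below are illustrative constructions, not from the course.

```python
import numpy as np

lam = np.array([1, 2, 3]) / 6

# A chain built so that lam_i p_{i,j} = lam_j p_{j,i} holds by hand:
# e.g. lam_0 * 0.2 = lam_1 * 0.1 = 1/30.  (Rows sum to 1.)
P = np.array([[0.6,    0.2,   0.2    ],
              [0.1,    0.65,  0.25   ],
              [1 / 15, 1 / 6, 23 / 30]])

# Detailed balance is equivalent to diag(lam) P being symmetric.
flow = lam[:, None] * P
assert np.allclose(flow, flow.T)

# As the proposition says, detailed balance forces lam P = lam.
```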
+
+\begin{eg}[Birth-death chain with immigration]
+ Recall our birth-death chain, where at each state $i > 0$, we move to $i + 1$ with probability $p_i$ and to $i - 1$ with probability $q_i = 1 - p_i$. When we are at $0$, we are dead and no longer change.
+
+ We wouldn't be able to apply our previous result to this scenario, since $0$ is an absorbing state, and this chain is obviously not irreducible, let alone positive recurrent. Hence we make a slight modification to our scenario --- if we have population $0$, we allow ourselves to have a probability $p_0$ of having an immigrant and get to $1$, or probability $q_0 = 1 - p_0$ that we stay at $0$.
+
+ This is sort of a ``physical'' process, so it would not be surprising if this is reversible. So we can try to find a solution to the detailed balance equation. If it works, we would have solved it quickly. If not, we have just wasted a minute or two. We need to solve
+ \[
+ \lambda_i p_{i, j} = \lambda_j p_{j, i}.
+ \]
+ Note that this is automatically satisfied if $j$ and $i$ differ by at least $2$, since both sides are zero. So we only look at the case where $j = i + 1$ (the case $j = i - 1$ is the same equation with the roles of $i$ and $j$ swapped). So the only equation we have to satisfy is
+ \[
+ \lambda_i p_i = \lambda_{i + 1}q_{i + 1}
+ \]
+ for all $i$. This is just a recursive formula for $\lambda_i$, and we can solve to get
+ \[
+ \lambda_i = \frac{p_{i - 1}}{q_i} \lambda_{i - 1} = \cdots = \left(\frac{p_{i - 1}}{q_i} \frac{p_{i - 2}}{q_{i - 1}} \cdots \frac{p_0}{q_1}\right)\lambda_0.
+ \]
+ We can call the term in the brackets
+ \[
+ \rho_i = \left(\frac{p_{i - 1}}{q_i} \frac{p_{i - 2}}{q_{i - 1}} \cdots \frac{p_0}{q_1}\right).
+ \]
+ For $\lambda_i$ to be a distribution, we need
+ \[
+ 1 = \sum_i \lambda_i = \lambda_0 \sum_i \rho_i.
+ \]
+ Thus if
+ \[
+ \sum \rho_i < \infty,
+ \]
+ then we can pick
+ \[
+ \lambda_0 = \frac{1}{\sum \rho_i}
+ \]
+ and $\lambda$ is a distribution. Hence this is the unique invariant distribution.
+
+ If it diverges, the method fails, and we need to use our more traditional methods to check recurrence and transience.
+\end{eg}
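As a concrete instance, we can take constant rates $p_i = p$ for all $i$ (so $q_i = 1 - p$) with $p < \frac{1}{2}$; then $\rho_i = (p/q)^i$ and $\sum_i \rho_i$ is a convergent geometric series. The following sketch checks the detailed balance relation numerically; the value $p = 0.4$ and the truncation level are arbitrary choices.

```python
import numpy as np

p = 0.4
q = 1 - p
N = 60                      # truncate the infinite state space for the numerics
i = np.arange(N)

rho = (p / q) ** i          # rho_i = (p_{i-1} ... p_0) / (q_i ... q_1)
lam = rho / rho.sum()       # normalised invariant distribution

# Detailed balance between neighbours: lam_i * p = lam_{i+1} * q.
# The geometric series gives sum rho_i = 1 / (1 - p/q) = 3 here.
```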
+
+\begin{eg}[Random walk on a finite graph]
+ A graph is a collection of points with edges between them. For example, the following is a graph:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, -1.5) {};
+ \node [circ] at (-1, -0.5) {};
+ \draw (0, 0) -- (2, 0) -- (1, -1.5) -- (0, 0);
+
+ \draw (1, -1.5) -- (-1, -0.5);
+
+ \draw (0, 0) -- (-1, -0.5);
+ \draw (-3, -0.5) -- (-1, -0.5) -- (-2, -1.6) -- cycle;
+ \node [circ] at (-3, -0.5) {};
+ \node [circ] at (-1, -0.5) {};
+ \node [circ] at (-2, -1.6) {};
+
+ \draw (-1.5, -0.2) node [circ] {} -- (-1.2, 1) node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ More precisely, a graph is a pair $G = (V, E)$, where $E$ contains distinct unordered pairs of distinct vertices $(u, v)$, drawn as edges from $u$ to $v$.
+
+ Note that the restriction of distinct pairs and distinct vertices are there to prevent loops and parallel edges, and the fact that the pairs are unordered means our edges don't have orientations.
+
+ A graph $G$ is connected if for all $u, v \in V$, there exists a path along the edges from $u$ to $v$.
+
+ Let $G = (V, E)$ be a connected graph with $|V| < \infty$. Let $X = (X_n)$ be a random walk on $G$. Here we live on the vertices, and on each step, we move to an adjacent vertex. More precisely, if $X_n = x$, then $X_{n + 1}$ is chosen uniformly at random from the set of neighbours of $x$, i.e.\ the set $\{y \in V: (x, y) \in E\}$, independently of the past. This is a Markov chain.
+
+ For example, our previous simple symmetric random walks on $\Z$ or $\Z^d$ are random walks on graphs (despite the graphs not being finite). Our transition probabilities are
+ \[
+ p_{i, j} =
+ \begin{cases}
+ 0 & j\text{ is not a neighbour of }i\\
+ \frac{1}{d_i} & j\text{ is a neighbour of }i
+ \end{cases},
+ \]
+ where $d_i$ is the number of neighbours of $i$, commonly known as the \emph{degree} of $i$.
+
+ By connectivity, the Markov chain is irreducible. Since it is finite, it is recurrent, and in fact positive recurrent.
+
+ This process is a rather ``physical'' process, and again we would expect it to be reversible. So let's try to solve the detailed balance equation
+ \[
+ \lambda_i p_{i, j} = \lambda_j p_{j, i}.
+ \]
+ If $j$ is not a neighbour of $i$, then both sides are zero, and it is trivially balanced. Otherwise, the equation becomes
+ \[
+ \lambda_i \frac{1}{d_i} = \lambda_j \frac{1}{d_j}.
+ \]
+ The solution is \emph{obvious}: take $\lambda_i = d_i$. In fact, $\lambda_i = c d_i$ works for any constant $c > 0$. So we pick $c$ such that this is a distribution, i.e.
+ \[
+ 1 = \sum_i \lambda_i = c \sum_i d_i.
+ \]
+ We now note that since each edge adds $1$ to the degrees of each vertex on the two ends, $\sum d_i$ is just twice the number of edges. So the equation gives
+ \[
+ 1 = 2c |E|.
+ \]
+ Hence we get
+ \[
+ c = \frac{1}{2|E|}.
+ \]
+ Hence, our invariant distribution is
+ \[
+ \lambda_i = \frac{d_i}{2|E|}.
+ \]
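The formula $\lambda_i = d_i/2|E|$ is immediate to verify on any small graph. A sketch with an arbitrary connected graph on four vertices (the edge list is an illustrative choice):

```python
import numpy as np

# Edge list of a small connected graph (illustrative).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1

deg = A.sum(axis=1)            # degrees d_i; here (2, 2, 3, 1)
P = A / deg[:, None]           # uniform step to a random neighbour
lam = deg / (2 * len(edges))   # claimed invariant distribution d_i / 2|E|

# lam satisfies detailed balance: lam_i p_{i,j} = 1/(2|E|) whenever
# i ~ j, which is symmetric in i and j, so lam P = lam.
```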
+ Let's look at a specific scenario.
+
+ Suppose we have a knight on the chessboard. In each step, the allowed moves are:
+ \begin{itemize}
+ \item Move two steps horizontally, then one step vertically;
+ \item Move two steps vertically, then one step horizontally.
+ \end{itemize}
+ For example, if the knight is in the center of the board (red dot), then the possible moves are indicated with blue crosses:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \foreach \x in {0,1,...,8} {
+ \draw (0, \x) -- (8, \x);
+ \draw (\x, 0) -- (\x, 8);
+ }
+ \node [mred, circ] at (3.5, 4.5) {};
+
+ \node [mblue] at (5.5, 5.5) {$\times$};
+ \node [mblue] at (5.5, 3.5) {$\times$};
+ \node [mblue] at (1.5, 5.5) {$\times$};
+ \node [mblue] at (1.5, 3.5) {$\times$};
+ \node [mblue] at (4.5, 6.5) {$\times$};
+ \node [mblue] at (4.5, 2.5) {$\times$};
+ \node [mblue] at (2.5, 6.5) {$\times$};
+ \node [mblue] at (2.5, 2.5) {$\times$};
+ \end{tikzpicture}
+ \end{center}
+ At each epoch of time, our erratic knight follows a legal move chosen uniformly from the set of possible moves. Hence we have a Markov chain derived from the chessboard. What is his invariant distribution? We can compute the number of possible moves from each position:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \foreach \x in {0,1,...,8} {
+ \draw (0, \x) -- (8, \x);
+ \draw (\x, 0) -- (\x, 8);
+ }
+ \node at (7.5, 7.5) {$2$};
+ \node at (7.5, 6.5) {$3$};
+ \node at (6.5, 7.5) {$3$};
+ \node at (6.5, 6.5) {$4$};
+
+ \node at (7.5, 5.5) {$4$};
+ \node at (7.5, 4.5) {$4$};
+ \node at (5.5, 7.5) {$4$};
+ \node at (4.5, 7.5) {$4$};
+
+ \node at (6.5, 5.5) {$6$};
+ \node at (6.5, 4.5) {$6$};
+ \node at (4.5, 6.5) {$6$};
+ \node at (5.5, 6.5) {$6$};
+
+ \node at (4.5, 4.5) {$8$};
+ \node at (4.5, 5.5) {$8$};
+ \node at (5.5, 4.5) {$8$};
+ \node at (5.5, 5.5) {$8$};
+ \end{tikzpicture}
+ \end{center}
+ The sum of degrees is
+ \[
+ \sum_i d_i = 336.
+ \]
+ So the invariant distribution at, say, the corner is
+ \[
+ \pi_{\mathrm{corner}} = \frac{2}{336}.
+ \]
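+ The degree table and the total of $336$ are easy to verify by brute force; a short sketch:
```python
# Recompute the knight's-move degrees on the 8x8 board, the total degree,
# and the invariant probability of a corner square.
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def degree(x, y):
    """Number of legal knight moves from square (x, y)."""
    return sum(0 <= x + dx < 8 and 0 <= y + dy < 8 for dx, dy in moves)

total = sum(degree(x, y) for x in range(8) for y in range(8))
assert total == 336                    # sum of degrees = 2|E|
assert degree(0, 0) == 2               # a corner square
pi_corner = degree(0, 0) / total       # = 2/336 = 1/168
```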
+\end{eg}
+\end{document}
diff --git a/books/cam/IB_M/methods.tex b/books/cam/IB_M/methods.tex
new file mode 100644
index 0000000000000000000000000000000000000000..dfde8eca728ee5fabf03f65b0448a5a67ad15bc9
--- /dev/null
+++ b/books/cam/IB_M/methods.tex
@@ -0,0 +1,3676 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Michaelmas}
+\def\nyear {2015}
+\def\nlecturer {D.\ B.\ Skinner}
+\def\ncourse {Methods}
+\def\nofficial {http://www.damtp.cam.ac.uk/user/dbs26/1Bmethods.html}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Self-adjoint ODEs}\\
+Periodic functions. Fourier series: definition and simple properties; Parseval's theorem. Equations of second order. Self-adjoint differential operators. The Sturm-Liouville equation; eigenfunctions and eigenvalues; reality of eigenvalues and orthogonality of eigenfunctions; eigenfunction expansions (Fourier series as prototype), approximation in mean square, statement of completeness.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{PDEs on bounded domains: separation of variables}\\
+Physical basis of Laplace's equation, the wave equation and the diffusion equation. General method of separation of variables in Cartesian, cylindrical and spherical coordinates. Legendre's equation: derivation, solutions including explicit forms of $P_0$, $P_1$ and $P_2$, orthogonality. Bessel's equation of integer order as an example of a self-adjoint eigenvalue problem with non-trivial weight.
+
+\vspace{5pt}
+\noindent Examples including potentials on rectangular and circular domains and on a spherical domain (axisymmetric case only), waves on a finite string and heat flow down a semi-infinite rod.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Inhomogeneous ODEs: Green's functions}\\
+Properties of the Dirac delta function. Initial value problems and forced problems with two fixed end points; solution using Green's functions. Eigenfunction expansions of the delta function and Green's functions.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{Fourier transforms}\\
+Fourier transforms: definition and simple properties; inversion and convolution theorems. The discrete Fourier transform. Examples of application to linear systems. Relationship of transfer function to Green's function for initial value problems.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{PDEs on unbounded domains}\\
+Classification of PDEs in two independent variables. Well posedness. Solution by the method of characteristics. Green's functions for PDEs in 1, 2 and 3 independent variables; fundamental solutions of the wave equation, Laplace's equation and the diffusion equation. The method of images. Application to the forced wave equation, Poisson's equation and forced diffusion equation. Transient solutions of diffusion problems: the error function.\hspace*{\fill} [6]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In the previous courses, the (partial) differential equations we have seen are mostly linear. For example, we have Laplace's equation:
+\[
+ \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = 0,
+\]
+and the heat equation:
+\[
+ \frac{\partial \phi}{\partial t} = \kappa \left(\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi }{\partial y^2}\right).
+\]
+The Schr\"odinger equation in quantum mechanics is also linear:
+\[
+ i\hbar \frac{\partial \Phi}{\partial t}= -\frac{\hbar^2}{2m}\frac{\partial^2 \Phi}{\partial x^2} + V(x) \Phi.
+\]
+\]
+By being linear, these equations have the property that if $\phi_1, \phi_2$ are solutions, then so are $\lambda_1 \phi_1 + \lambda_2 \phi_2$ (for any constants $\lambda_i$).
+
+Why are all these linear? In general, if we just randomly write down a differential equation, most likely it is not going to be linear. So where did the linearity of equations of physics come from?
+
+The answer is that the real world is \emph{not} linear in general. However, often we are not looking for a completely accurate and precise description of the universe. When we have low energy/speed/whatever, we can often quite accurately approximate reality by a linear equation. For example, the equations of general relativity are very complicated and nowhere near linear, but for small masses and velocities, they reduce to Newton's law of gravitation, which is linear.
+
+The only exception to this seems to be Schr\"odinger's equation. While there are many theories and equations that superseded the Schr\"odinger equation, these are all still linear in nature. It seems that linearity is the thing that underpins quantum mechanics.
+
+Due to the prevalence of linear equations, it is rather important that we understand these equations well, and this is the primary objective of the course.
+
+\section{Vector spaces}
+When dealing with functions and differential equations, we will often think of the space of functions as a vector space. In many cases, we will try to find a ``basis'' for our space of functions, and expand our functions in terms of the basis. Under different situations, we would want to use a different basis for our space. For example, when dealing with periodic functions, we will want to pick basis elements that are themselves periodic. In other situations, these basis elements would not be that helpful.
+
+A familiar example would be the Taylor series, where we try to approximate a function $f$ by
+\[
+ f(x) = \sum_{n = 0}^\infty \frac{f^{(n)}(0)}{n!} x^n.
+\]
+Here we are thinking of $\{x^n: n \in \N\}$ as the basis of our space, and trying to approximate an arbitrary function as a sum of the basis elements. When writing the function $f$ as a sum like this, it is of course important to consider whether the sum converges, and when it does, whether it actually converges back to $f$.
+
+Another issue of concern is if we have a general set of basis functions $\{y_n\}$, how can we find the coefficients $c_n$ such that $f(x) = \sum c_n y_n(x)$? This is the bit where linear algebra comes in. Finding these coefficients is something we understand well in linear algebra, and we will attempt to borrow the results and apply them to our space of functions.
+
+Another concept we would want to borrow is eigenvalues and eigenfunctions, as well as self-adjoint (``Hermitian'') operators. As we go along the course, we will see some close connections between functions and vector spaces, and we can often get inspirations from linear algebra.
+
+Of course, there is no guarantee that the results from linear algebra would apply directly, since most of our linear algebra results were about finite bases and finite linear sums. However, it is often a good starting point, and usually works when dealing with sufficiently nice functions.
+
+We start with some preliminary definitions, which should be familiar from IA Vectors and Matrices and/or IB Linear Algebra.
+\begin{defi}[Vector space]
+ A \emph{vector space} over $\C$ (or $\R$) is a set $V$ with an operation $+$ which obeys
+ \begin{enumerate}
+ \item $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$\hfill (commutativity)
+ \item $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$\hfill (associativity)
+ \item There is some $\mathbf{0}\in V$ such that $\mathbf{0} + \mathbf{u} = \mathbf{u}$ for all $\mathbf{u}$\hfill (identity)
+ \end{enumerate}
+ We can also multiply vectors by a scalar $\lambda\in \C$, and this satisfies
+ \begin{enumerate}
+ \item $\lambda(\mu \mathbf{v}) = (\lambda \mu) \mathbf{v}$ \hfill (associativity)
+ \item $\lambda(\mathbf{u} + \mathbf{v}) = \lambda \mathbf{u} + \lambda \mathbf{v}$ \hfill (distributivity in $V$)
+ \item $(\lambda + \mu)\mathbf{u} = \lambda \mathbf{u} + \mu \mathbf{u}$ \hfill (distributivity in $\C$)
+ \item $1\mathbf{v} = \mathbf{v}$ \hfill (identity)
+ \end{enumerate}
+\end{defi}
+Often, we wouldn't have \emph{just} a vector space. We usually give them some additional structure, such as an inner product.
+\begin{defi}[Inner product]
+ An \emph{inner product} on $V$ is a map $(\ph, \ph): V\times V \to \C$ that satisfies
+ \begin{enumerate}
+ \item $(\mathbf{u}, \lambda \mathbf{v}) = \lambda (\mathbf{u}, \mathbf{v})$ \hfill(linearity in second argument)
+ \item $(\mathbf{u}, \mathbf{v} + \mathbf{w}) = (\mathbf{u}, \mathbf{v}) + (\mathbf{u}, \mathbf{w})$ \hfill (additivity)
+ \item $(\mathbf{u}, \mathbf{v}) = (\mathbf{v}, \mathbf{u})^*$ \hfill (conjugate symmetry)
+ \item $(\mathbf{u}, \mathbf{u}) \geq 0$, with equality iff $\mathbf{u} = \mathbf{0}$ \hfill (positivity)
+ \end{enumerate}
+ Note that the positivity condition makes sense since conjugate symmetry entails that $(\mathbf{u}, \mathbf{u}) \in \R$.
+
+ The inner product in turn defines a norm $\|\mathbf{u}\| = \sqrt{(\mathbf{u}, \mathbf{u})}$ that provides the notion of length and distance.
+\end{defi}
+It is important to note that we only have linearity in the \emph{second argument}. For the first argument, we have $(\lambda \mathbf{u}, \mathbf{v}) = (\mathbf{v}, \lambda \mathbf{u})^* = \lambda^* (\mathbf{v}, \mathbf{u})^* = \lambda^* (\mathbf{u}, \mathbf{v})$.
+
+\begin{defi}[Basis]
+ A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_n\}$ form a \emph{basis} of $V$ iff any $\mathbf{u}\in V$ can be uniquely written as a linear combination
+ \[
+ \mathbf{u} = \sum_{i = 1}^n \lambda_i \mathbf{v}_i
+ \]
+ for some scalars $\lambda_i$. The \emph{dimension} of a vector space is the number of basis vectors in its basis.
+
+ A basis is \emph{orthogonal} (with respect to the inner product) if $(\mathbf{v}_i, \mathbf{v}_j) = 0$ whenever $i\not = j$.
+
+ A basis is \emph{orthonormal} (with respect to the inner product) if it is orthogonal and $(\mathbf{v}_i, \mathbf{v}_i) = 1$ for all $i$.
+\end{defi}
+Orthonormal bases are the nice bases, and these are what we want to work with.
+
+Given an orthonormal basis, we can use the inner product to find the expansion of any $\mathbf{u}\in V$ in terms of the basis, for if
+\[
+ \mathbf{u} = \sum_i \lambda_i \mathbf{v}_i,
+\]
+taking the inner product with $\mathbf{v}_j$ gives
+\[
+ (\mathbf{v}_j, \mathbf{u}) = \left(\mathbf{v}_j, \sum_i \lambda_i \mathbf{v}_i\right) = \sum_i \lambda_i (\mathbf{v}_j, \mathbf{v}_i) = \lambda_j,
+\]
+using additivity and linearity. Hence we get the general formula
+\[
+ \lambda_i = (\mathbf{v}_i, \mathbf{u}).
+\]
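+This coefficient-extraction trick is worth seeing concretely; a sketch using a random orthonormal basis of $\C^4$ (an arbitrary choice for illustration):
```python
import numpy as np

# For an orthonormal basis {v_i}, the expansion coefficients of u are
# lambda_i = (v_i, u), with the inner product conjugate-linear in the first
# argument (np.vdot conjugates its first argument, matching this convention).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
basis = Q.T                      # rows are orthonormal vectors v_i

u = rng.normal(size=4) + 1j * rng.normal(size=4)
lam = np.array([np.vdot(v, u) for v in basis])

# Reconstruct u from the coefficients:
u_reconstructed = sum(l * v for l, v in zip(lam, basis))
assert np.allclose(u_reconstructed, u)
```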
+We have seen all these so far in IA Vectors and Matrices, where a vector is a list of finitely many numbers. However, \emph{functions} can also be thought of as elements of an (infinite dimensional) vector space.
+
+Suppose we have $f, g: \Sigma \to \C$ for some domain $\Sigma$. Then we can define the sum $f + g$ by $(f + g)(x) = f(x) + g(x)$. Given a scalar $\lambda$, we can also define $(\lambda f)(x) = \lambda f(x)$.
+
+This also makes intuitive sense. We can simply view a function as a list of numbers, where we list out the values of $f$ at each point. The list could be infinite, but it is a list nonetheless.
+
+Most of the time, we don't want to look at the set of \emph{all} functions. That would be too huge and uninteresting. A natural class of functions to consider would be the set of solutions to some particular differential equation. However, this doesn't always work. For this class to actually be a vector space, the sum of two solutions and any scalar multiple of a solution must also be solutions. This is exactly the requirement that the differential equation is linear. Hence, the set of solutions to a linear differential equation forms a vector space. Linearity pops up again.
+
+Now what about the inner product? A natural definition is
+\[
+ (f, g) = \int_{\Sigma} f(x)^* g(x) \;\d \mu,
+\]
+where $\mu$ is some measure. For example, we could integrate against $\d x$, or against $\d (x^2) = 2x\;\d x$. This measure specifies how much weighting we give to each point $x$.
+
+Why does this definition make sense? Recall that the usual inner product on finite-dimensional vector spaces is $\sum v_i^* w_i$, where we sum over the products of corresponding components of $v$ and $w$. We just said we can think of the function $f$ as a list of all its values, and this integral is just the sum of the products of the components of $f$ and $g$.
+
+\begin{eg}
+ Let $\Sigma = [a, b]$. Then we could take
+ \[
+ (f, g) = \int_a^b f(x)^* g(x)\;\d x.
+ \]
+ Alternatively, let $\Sigma = D^2 \subseteq \R^2$ be the unit disk. Then we could have
+ \[
+ (f, g) = \int_0^1 \int_0^{2\pi}f(r, \theta)^* g(r, \theta)\;\d \theta\; r\;\d r
+ \]
+\end{eg}
+Note that we were careful and said that $\d \mu$ is ``some measure''. Here we are integrating against $\d\theta\;r\;\d r$. We will later see cases where this can be even more complicated.
+
+If $\Sigma$ has a boundary, we will often want to restrict our functions to take particular values on the boundary, known as boundary conditions. Often, we want the boundary conditions to preserve linearity. We call these nice boundary conditions \emph{homogeneous} conditions.
+
+\begin{defi}[Homogeneous boundary conditions]
+ A boundary condition is \emph{homogeneous} if whenever $f$ and $g$ satisfy the boundary conditions, then so does $\lambda f + \mu g$ for any $\lambda, \mu \in \C$ (or $\R$).
+\end{defi}
+\begin{eg}
+ Let $\Sigma = [a, b]$. We could require that $f(a) + 7 f'(b) = 0$, or maybe $f(a) + 3 f''(a) = 0$. These are examples of homogeneous boundary conditions. On the other hand, the requirement $f(a) = 1$ is \emph{not} homogeneous.
+\end{eg}
+
+\section{Fourier series}
+The first type of functions we will consider is periodic functions.
+\begin{defi}[Periodic function]
+ A function $f$ is \emph{periodic} if there is some fixed $R$ such that $f(x + R) = f(x)$ for all $x$.
+
+ However, it is often much more convenient to think of this as a function $f: S^1 \to \C$ from the unit circle to $\C$, parametrized by an angle $\theta$ ranging from $0$ to $2\pi$.
+\end{defi}
+Now why do we care about periodic functions? Apart from the fact that genuine periodic functions exist, we can also use them to model functions on a compact domain. For example, if we have a function defined on $[0, 1]$, we can pretend it is a periodic function on $\R$ by copying it to the intervals $[1, 2]$, $[2, 3]$ etc.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.3, 0) -- (2.3, 0) node [right] {$x$};
+ \draw [mred, thick] plot [smooth] coordinates {(0, 0.5) (0.7, -0.3) (1.6, 1) (2, 0.5)};
+
+ \draw [->] (2.8, 0.5) -- (3.8, 0.5);
+ \draw [->] (4.1, 0) -- (10.9, 0) node [right] {$x$};
+ \foreach \x in {4.5, 6.5, 8.5} {
+ \draw [mred, thick] plot [smooth] coordinates {(\x, 0.5) (\x + 0.7, -0.3) (\x + 1.6, 1) (\x + 2, 0.5)};
+ }
+ \end{tikzpicture}
+\end{center}
+\subsection{Fourier series}
+As mentioned in the previous chapter, we want to find a set of ``basis functions'' for periodic functions. We could go with the simplest periodic functions we know of --- the exponential functions $e^{in\theta}$. These have period $2\pi$, and are rather easy to work with: we all know how to integrate and differentiate the exponential function.
+
+More importantly, this set of basis functions is orthogonal. We have
+\[
+ (e^{im \theta}, e^{in\theta}) = \int_{-\pi}^{\pi} e^{-im\theta} e^{in\theta}\;\d \theta = \int_{-\pi}^\pi e^{i(n - m)\theta}\;\d \theta =
+ \begin{cases}
+ 2\pi & n = m\\
+ 0 & n\not= m
+ \end{cases} = 2\pi \delta_{nm}
+\]
+We can normalize these to get a set of orthonormal functions $\left\{\frac{1}{\sqrt{2\pi}} e^{in\theta}: n\in \Z\right\}$.
+
+Fourier's idea was to use this as a basis for \emph{any} periodic function. Fourier claimed that any $f: S^1 \to \C$ can be expanded in this basis:
+\[
+ f(\theta) = \sum_{n \in \Z}\hat{f}_n e^{in\theta},
+\]
+where
+\[
+ \hat{f}_n = \frac{1}{2\pi} (e^{in\theta}, f) = \frac{1}{2\pi}\int_{-\pi}^\pi e^{-in\theta} f(\theta)\;\d \theta.
+\]
+These really should be defined as $f(\theta) = \sum \hat{f}_n \frac{e^{in\theta}}{\sqrt{2\pi}}$ with $\hat{f}_n = \left(\frac{e^{in\theta}}{\sqrt{2\pi}}, f\right)$, but for convenience reasons, we move all the constant factors to the $\hat{f}_n$ coefficients.
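+
+These coefficient integrals are easy to approximate numerically with a Riemann sum; a sketch (the test function $\cos\theta$, whose only nonzero coefficients should be $\hat{f}_{\pm 1} = \frac{1}{2}$, is an arbitrary choice):
```python
import numpy as np

# Approximate f_hat_n = (1/2pi) * int_{-pi}^{pi} e^{-in theta} f(theta) d theta
# by a Riemann sum on a uniform grid over [-pi, pi).
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

def fhat(f, n):
    return np.mean(np.exp(-1j * n * theta) * f(theta))

# cos(theta) = (e^{i theta} + e^{-i theta})/2, so f_hat_{+1} = f_hat_{-1} = 1/2.
assert abs(fhat(np.cos, 1) - 0.5) < 1e-9
assert abs(fhat(np.cos, -1) - 0.5) < 1e-9
assert abs(fhat(np.cos, 3)) < 1e-9
```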
+
+We can consider the special case where $f: S^1 \to \R$ is a real function. We might want to make our expansion look a bit more ``real''. We get
+\[
+ (\hat{f}_n)^* = \left(\frac{1}{2\pi}\int_{-\pi}^\pi e^{-in\theta}f(\theta)\;\d \theta\right)^* = \frac{1}{2\pi}\int_{-\pi}^{\pi}e^{in\theta}f(\theta)\;\d \theta = \hat{f}_{-n}.
+\]
+So we can replace our Fourier series by
+\[
+ f(\theta) = \hat{f}_0 + \sum_{n = 1}^\infty\left(\hat{f}_n e^{in\theta} + \hat{f}_{-n}e^{-in\theta}\right) = \hat{f}_0 + \sum_{n = 1}^\infty \left(\hat{f}_n e^{in\theta} + \hat{f}_n^* e^{-in\theta}\right).
+\]
+Setting $\displaystyle \hat{f}_n = \frac{a_n - ib_n}{2}$, we can write this as
+\begin{align*}
+ f(\theta) &= \hat{f}_0 + \sum_{n = 1}^\infty (a_n \cos n\theta + b_n \sin n\theta)\\
+ &= \frac{a_0}{2} + \sum_{n = 1}^\infty (a_n \cos n\theta + b_n \sin n\theta).
+\end{align*}
+Here the coefficients are
+\[
+ a_n = \frac{1}{\pi}\int_{-\pi}^\pi \cos n\theta f(\theta) \;\d \theta,\quad b_n = \frac{1}{\pi}\int_{-\pi}^\pi \sin n\theta f(\theta) \;\d \theta.
+\]
+This is an alternative formulation of the Fourier series in terms of $\sin$ and $\cos$.
+
+So when given a real function, which expansion should we use? It depends. If our function is odd (or even), it would be useful to pick the sine/cosine expansion, since the cosine (or sine) terms will simply disappear. On the other hand, if we want to stick our function into a differential equation, exponential functions are usually more helpful.
+
+\subsection{Convergence of Fourier series}
+When Fourier first proposed the idea of a Fourier series, people didn't really believe in him. How can we be sure that the infinite series actually converges? It turns out that in many cases, they don't.
+
+To investigate the convergence of the series, we define the \emph{partial Fourier sum} as
+\[
+ S_n f = \sum_{m = -n}^n \hat{f}_m e^{im\theta}.
+\]
+The question we want to answer is whether $S_n f$ ``converges'' to $f$. Here we have to be careful with what we mean by convergence. As we (might) have seen in Analysis, there are many ways of defining convergence of functions. If we have a ``norm'' on the space of functions, we can define convergence to mean $\lim\limits_{n\to \infty}\|S_n f - f\| = 0$. Our ``norm'' can be defined as
+\[
+ \|S_n f - f\|^2 = \frac{1}{2\pi}\int_{-\pi}^\pi |S_n f(\theta) - f(\theta)|^2 \;\d \theta.
+\]
+However, this doesn't really work if $S_n f$ and $f$ can be arbitrary functions, since this need not define a genuine norm. Indeed, if $\lim\limits_{n \to \infty}S_n f$ differs from $f$ on finitely or countably many points, the integral will still be zero. In particular, $\lim S_n f$ and $f$ can differ at all rational points, and this definition would still say that $S_n f$ converges to $f$.
+
+Hence another possible definition of convergence is to require
+\[
+ \lim_{n \to \infty}S_n f(\theta) - f(\theta) = 0
+\]
+for all $\theta$. This is known as \emph{pointwise convergence}. However, this is often too weak a notion. We can ask for more, and require that the \emph{rate} of convergence is independent of $\theta$. This is known as uniform convergence, and is defined by
+\[
+ \lim_{n \to \infty}\sup_{\theta} |S_n f(\theta) - f(\theta)| = 0.
+\]
+Of course, with different definitions of convergence, we can get different answers to whether it converges. Unfortunately, even if we manage to get our definition of convergence right, it is still difficult to come up with a criterion to decide if a function has a convergent Fourier series.
+
+It is not difficult to prove a general criterion for convergence in norm, but we will not do that, since that is analysis. If interested, one should take the IID Analysis of Functions course. In this course, instead of trying to come up with something general, let's look at an example instead.
+
+\begin{eg}
+ Consider the sawtooth function $f(\theta) = \theta$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3.2, 0) -- (3.2, 0);
+
+ \draw [thick, mblue] (-3, -1) -- (-1, 1);
+ \draw [dashed, mblue] (-1, 1) -- (-1, -1);
+ \draw [thick, mblue] (-1, -1) -- (1, 1);
+ \draw [dashed, mblue] (1, 1) -- (1, -1);
+ \draw [thick, mblue] (1, -1) -- (3, 1);
+
+ \node at (-3, 0) [below] {$-3\pi$};
+ \node at (-1, 0) [below] {$-\pi$};
+ \node at (1, 0) [below] {$\pi$};
+ \node at (3, 0) [below] {$3\pi$};
+ \end{tikzpicture}
+ \end{center}
+ Note that this function is discontinuous at odd multiples of $\pi$.
+
+ The Fourier coefficients (for $n \not= 0$) are
+ \[
+ \hat{f}_n = \frac{1}{2\pi}\int_{-\pi}^\pi e^{-in\theta}\theta\;\d \theta = \left[-\frac{1}{2\pi i n}e^{-in\theta}\theta\right]_{-\pi}^\pi + \frac{1}{2\pi i n}\int_{-\pi}^\pi e^{-in\theta}\;\d \theta = \frac{(-1)^{n + 1}}{in}.
+ \]
+ We also have $\hat{f}_0 = 0$.
+
+ Hence we have
+ \[
+ \theta = \sum_{n \not= 0}\frac{(-1)^{n + 1}}{in}e^{in\theta}.
+ \]
+ It turns out that this series converges to the sawtooth for all $\theta \not= (2m + 1)\pi$, i.e.\ everywhere that the sawtooth is continuous.
+
+ Let's look explicitly at the case where $\theta = \pi$. Here the $n$ and $-n$ terms of the series cancel in pairs, so every partial sum is zero, and the Fourier series converges to $0$. This is the \emph{average} value of $\lim\limits_{\varepsilon \to 0} f(\pi \pm \varepsilon)$.
+
+ This is typical. At an isolated discontinuity, the Fourier series is the average of the limiting values of the original function as we approach from either side.
+\end{eg}
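+A numerical check of this behaviour, using partial sums of the series just computed, evaluated at a point of continuity and at the jump:
```python
import numpy as np

# Partial sums S_N(theta) of the sawtooth series
#   theta = sum_{n != 0} (-1)^{n+1}/(i n) e^{i n theta}.
def S(N, theta):
    return sum((-1) ** (n + 1) / (1j * n) * np.exp(1j * n * theta)
               for n in range(-N, N + 1) if n != 0)

# At theta = 1 (a point of continuity) the partial sums approach f(1) = 1 ...
assert abs(S(2000, 1.0) - 1.0) < 1e-2
# ... while at the jump theta = pi the n and -n terms cancel in pairs, so
# every partial sum vanishes, matching the average of the one-sided limits, 0.
assert abs(S(2000, np.pi)) < 1e-9
```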
+\subsection{Differentiability and Fourier series}
+Integration is a smoothing operator. Take, say, the step function
+\[
+ \Theta(x) =
+ \begin{cases}
+ 1 & x > 0\\
+ 0 & x < 0
+ \end{cases}.
+\]
+Then the integral is given by
+\[
+ \int \Theta(x)\;\d x =
+ \begin{cases}
+ x & x > 0\\
+ 0 & x < 0
+ \end{cases}.
+\]
+This is now a continuous function. If we integrate it again, we make the positive side look like a quadratic curve, and it is now differentiable.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (2, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 2.5) node [above] {$y$};
+ \draw [mblue, thick] (-2, 0) -- (0, 0);
+ \draw [mblue, thick] (0, 2) -- (2, 2);
+
+ \draw [->] (2.5, 1) -- (3.5, 1) node [above, pos=0.5] {$\int\d x$};
+
+ \draw [->] (4, 0) -- (8, 0) node [right] {$x$};
+ \draw [->] (6, -0.5) -- (6, 2.5) node [above] {$y$};
+ \draw [mred, thick] (4, 0) -- (6, 0);
+ \draw [mred, thick] (6, 0) -- (8, 2);
+ \end{tikzpicture}
+\end{center}
+On the other hand, when we differentiate a function, we generally make it worse. For example, when we differentiate the continuous function $\int \Theta (x)\;\d x$, we obtain the discontinuous function $\Theta(x)$. If we attempt to differentiate this again, we get the $\delta$-(non)-function.
+
+Hence, it is often helpful to characterize the ``smoothness'' of a function by how many times we can differentiate it. It turns out this is rather relevant to the behaviour of the Fourier series.
+
+Suppose that we have a function that is itself continuous and whose first $m - 1$ derivatives are also continuous, but $f^{(m)}$ has isolated discontinuities at $\{\theta_1, \theta_2, \theta_3, \cdots, \theta_r\}$.
+
+We can look at the $k$th Fourier coefficient:
+\begin{align*}
+ \hat{f}_k &= \frac{1}{2\pi}\int_{-\pi}^\pi e^{-ik\theta}f(\theta) \;\d \theta\\
+ &= \left[-\frac{1}{2\pi i k}e^{-ik\theta}f(\theta)\right]^{\pi}_{-\pi} + \frac{1}{2\pi i k}\int_{-\pi}^\pi e^{-ik\theta}f'(\theta)\;\d \theta\\
+ \intertext{The first term vanishes since $f(\theta)$ is continuous and takes the same value at $\pi$ and $-\pi$. So we are left with}
+ &= \frac{1}{2\pi i k}\int_{-\pi}^\pi e^{-ik\theta}f'(\theta)\;\d \theta\\
+ &= \cdots\\
+ &= \frac{1}{(ik)^m}\frac{1}{2\pi} \int_{-\pi}^\pi e^{-ik\theta} f^{(m)}(\theta)\;\d \theta.
+ \intertext{Now we have to be careful, since $f^{(m)}$ is no longer continuous. However provided that $f^{(m)}$ is everywhere finite, we can approximate this by removing small strips $(\theta_i - \varepsilon, \theta_i + \varepsilon)$ from our domain of integration, and take the limit $\varepsilon \to 0$. We can write this as}
+ &= \lim_{\varepsilon \to 0}\frac{1}{(ik)^m}\frac{1}{2\pi}\left(\int_{-\pi}^{\theta_1 - \varepsilon} + \int_{\theta_1 + \varepsilon}^{\theta_2 - \varepsilon} + \cdots + \int_{\theta_r + \varepsilon}^{\pi}\right) e^{-ik\theta} f^{(m)}(\theta)\;\d \theta\\
+ &= \frac{1}{(ik)^{m + 1}} \frac{1}{2\pi}\left[\sum_{s = 1}^r e^{-ik\theta_s}\left(f^{(m)}(\theta_s^+) - f^{(m)}(\theta_s^-)\right) + \int_{(-\pi, \pi)\setminus \{\theta_1, \cdots, \theta_r\}} e^{-ik\theta} f^{(m + 1)}(\theta) \;\d \theta\right].
+\end{align*}
+We have to stop here, since $f^{(m + 1)}$ may no longer be well-behaved. So $\hat{f}_k$ decays like $\left(\frac{1}{k}\right)^{m + 1}$ if our function and its first $m - 1$ derivatives are continuous. This means that if a function is more differentiable, then its Fourier coefficients decay more quickly.
+
+This makes sense. If we have a rather smooth function, then we would expect the first few Fourier terms (with low frequency) to account for most of the variation of $f$. Hence the coefficients decay really quickly.
+
+However, if the function is jiggly and bumps around all the time, we would expect to need some higher frequency terms to account for the minute variation. Hence the terms would not decay away that quickly. So in general, if we can differentiate it more times, then the terms should decay quicker.
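+
+This decay heuristic can be observed numerically. A sketch comparing a discontinuous square wave ($\hat{f}_k \sim 1/k$) with its continuous integral, a triangle wave ($\hat{f}_k \sim 1/k^2$); both test functions are stand-in examples, not from the notes:
```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 8192, endpoint=False)
square = np.sign(theta)      # discontinuous: coefficients decay like 1/k
triangle = np.abs(theta)     # continuous (an antiderivative of the square
                             # wave, up to a constant): decay like 1/k^2

def fhat(values, k):
    # Riemann-sum approximation of (1/2pi) * int e^{-ik theta} f d theta
    return abs(np.mean(np.exp(-1j * k * theta) * values))

# Compare |f_hat_k| at k = 11 and k = 33 (odd k, where both are nonzero):
# tripling k should divide the coefficients by ~3 and ~9 respectively.
r_square = fhat(square, 11) / fhat(square, 33)
r_triangle = fhat(triangle, 11) / fhat(triangle, 33)
assert 2.8 < r_square < 3.2
assert 8.5 < r_triangle < 9.5
```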
+
+An important result, which is in some sense a Linear Algebra result, is \emph{Parseval's theorem}.
+\begin{thm}[Parseval's theorem]
+ \[
+ (f, f) = \int_{-\pi}^\pi |f(\theta)|^2 \;\d \theta = 2\pi \sum_{n\in \Z}|\hat{f}_n|^2
+ \]
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ (f, f) &= \int_{-\pi}^\pi |f(\theta)|^2 \;\d \theta\\
+ &= \int_{-\pi}^{\pi} \left(\sum_{m \in \Z} \hat{f}_m^* e^{-im\theta}\right)\left(\sum_{n \in \Z} \hat{f}_n e^{in\theta}\right)\;\d \theta\\
+ &= \sum_{m, n\in \Z}\hat{f}_m^* \hat{f}_n \int_{-\pi}^\pi e^{i(n - m)\theta}\;\d \theta\\
+ &= 2\pi \sum_{m, n\in \Z}\hat{f}_m^* \hat{f}_n \delta_{mn}\\
+ &= 2\pi \sum_{n\in \Z}|\hat{f}_n|^2\qedhere
+ \end{align*}
+\end{proof}
+Note that this is not a fully rigorous proof, since we assumed not only that the Fourier series converges to the function, but also that we could commute the infinite sums. However, for the purposes of an applied course, this is sufficient.
+
+Last time, we computed that the sawtooth function $f(\theta) = \theta$ has Fourier coefficients
+\[
+ \hat{f}_0 = 0,\quad \hat{f}_n = \frac{i(-1)^{n}}{n}\text{ for }n\not= 0.
+\]
+But why do we care? It turns out this has some applications in number theory. You might have heard of the Riemann $\zeta$-function, defined by
+\[
+ \zeta(s) = \sum_{n = 1}^\infty \frac{1}{n^s}.
+\]
+We will show that this obeys the property that for any $m$, $\zeta(2m) = \pi^{2m}q$ for some $q\in \Q$. This may not be obvious at first sight. So let's apply Parseval's theorem for the sawtooth defined by $f(\theta) = \theta$. By direct computation, we know that
+\[
+ (f, f) = \int_{-\pi}^\pi \theta^2 \;\d \theta = \frac{2\pi^3}{3}.
+\]
+However, by Parseval's theorem, we know that
+\[
+ (f, f) = 2\pi \sum_{n \in \Z}|\hat{f}_n|^2 = 4\pi \sum_{n = 1}^\infty \frac{1}{n^2}.
+\]
+Putting these together, we learn that
+\[
+ \sum_{n = 1}^\infty \frac{1}{n^2} = \zeta(2) = \frac{\pi^2}{6}.
+\]
+We have just done it for the case where $m = 1$. But if we integrate the sawtooth function repeatedly, then we can get the general result for all $m$.
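+
+The $\zeta(2)$ identity can be sanity-checked numerically; the tail of the partial sum beyond $N$ lies between $\frac{1}{N+1}$ and $\frac{1}{N}$:
```python
import numpy as np

# Partial sums of zeta(2) = sum 1/n^2 against the Parseval value pi^2/6.
N = 100_000
partial = np.sum(1.0 / np.arange(1, N + 1, dtype=float) ** 2)
assert abs(partial - np.pi ** 2 / 6) < 1.0 / N   # tail bound
```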
+
+At this point, we might ask, why are we choosing these $e^{im\theta}$ as our basis? Surely there are many different bases we could use. For example, in finite dimensions, we can just choose random vectors (as long as they are linearly independent) to get a basis. However, in practice, we don't pick them randomly. We often choose a basis that reveals the symmetry of a system. For example, if we have a rotationally symmetric system, we would like to use polar coordinates. Similarly, if we have periodic functions, then $e^{im\theta}$ is often a good choice of basis.
+
+\section{Sturm-Liouville Theory}
+\subsection{Sturm-Liouville operators}
+In finite dimensions, we often consider \emph{linear maps} $M: V\to W$. If $\{\mathbf{v}_i\}$ is a basis for $V$ and $\{\mathbf{w}_i\}$ is a basis for $W$, then we can represent the map by a matrix with entries
+\[
+ M_{ai} = (\mathbf{w}_a, M\mathbf{v}_i).
+\]
+A map $M: V\to V$ is called \emph{self-adjoint} if $M^\dagger = M$ as matrices. However, it is not obvious how we can extend this notion to arbitrary maps between arbitrary vector spaces (with an inner product) when they cannot be represented by a matrix.
+
+Instead, we make the following definitions:
+\begin{defi}[Adjoint and self-adjoint]
+ The \emph{adjoint} $B$ of a map $A: V\to V$ is a map such that
+ \[
+ (B\mathbf{u}, \mathbf{v}) = (\mathbf{u}, A\mathbf{v})
+ \]
+ for all vectors $\mathbf{u}, \mathbf{v}\in V$. A map is then \emph{self-adjoint} if
+ \[
+ (M\mathbf{u}, \mathbf{v}) = (\mathbf{u}, M\mathbf{v}).
+ \]
+\end{defi}
+Self-adjoint matrices come with a natural basis. Recall that the \emph{eigenvalues} of a matrix are the roots of $\det(M - \lambda I) = 0$. The \emph{eigenvector} $\mathbf{v}_i$ corresponding to an eigenvalue $\lambda_i$ is defined by $M\mathbf{v}_i = \lambda_i \mathbf{v}_i$.
+
+In general, eigenvalues can be any complex number. However, self-adjoint maps have \emph{real} eigenvalues. Suppose
+\[
+ M\mathbf{v}_i = \lambda_i \mathbf{v}_i.
+\]
+Then we have
+\[
+ \lambda_i (\mathbf{v}_i, \mathbf{v}_i) = (\mathbf{v}_i, M\mathbf{v}_i) = (M\mathbf{v}_i, \mathbf{v}_i) = \lambda_i^* (\mathbf{v}_i, \mathbf{v}_i).
+\]
+So $\lambda_i = \lambda_i^*$.
+
+Furthermore, eigenvectors with distinct eigenvalues are orthogonal with respect to the inner product. Suppose that
+\[
+ M \mathbf{v}_i = \lambda_i \mathbf{v}_i,\quad M \mathbf{v}_j = \lambda_j \mathbf{v}_j.
+\]
+Then
+\[
+ \lambda_i (\mathbf{v}_j, \mathbf{v}_i) = (\mathbf{v}_j, M\mathbf{v}_i) = (M\mathbf{v}_j, \mathbf{v}_i) = \lambda_j^*(\mathbf{v}_j, \mathbf{v}_i) = \lambda_j(\mathbf{v}_j, \mathbf{v}_i),
+\]
+using that $\lambda_j$ is real. Since $\lambda_i \not= \lambda_j$, we must have $(\mathbf{v}_j, \mathbf{v}_i) = 0$.
+
+Knowing the eigenvalues and eigenvectors gives a neat way to solve linear equations of the form
+\[
+ M \mathbf{u} = \mathbf{f}.
+\]
+Here we are given $M$ and $\mathbf{f}$, and want to find $\mathbf{u}$. Of course, the answer is $\mathbf{u} = M^{-1}\mathbf{f}$. However, if we expand $\mathbf{u} = \sum u_i \mathbf{v}_i$ and $\mathbf{f} = \sum f_i \mathbf{v}_i$ in terms of the eigenvectors, we obtain
+\[
+ M\mathbf{u} = M\sum u_i \mathbf{v}_i = \sum u_i \lambda_i \mathbf{v}_i.
+\]
+Hence we have
+\[
+ \sum u_i \lambda_i \mathbf{v}_i = \sum f_i \mathbf{v}_i.
+\]
+Taking the inner product with $\mathbf{v}_j$, we know that
+\[
+ u_j = \frac{f_j}{\lambda_j}.
+\]
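+The same recipe, as a sketch in finite dimensions (a random symmetric matrix, shifted so its eigenvalues are comfortably nonzero):
```python
import numpy as np

# Solve M u = f for self-adjoint M by expanding in the orthonormal eigenbasis:
# u = sum_j (f_j / lambda_j) v_j, where f_j = (v_j, f).
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
M = A + A.T + 8 * np.eye(4)      # real symmetric = self-adjoint
f = rng.normal(size=4)

lam, V = np.linalg.eigh(M)       # columns of V: orthonormal eigenvectors
coeffs = V.T @ f                 # f_j = (v_j, f)
u = V @ (coeffs / lam)           # u_j = f_j / lambda_j

assert np.allclose(M @ u, f)
assert np.allclose(u, np.linalg.solve(M, f))
```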
+So far, these are all things from IA Vectors and Matrices. Sturm-Liouville theory is the infinite-dimensional analogue.
+
+In our vector space of differentiable functions, our ``matrices'' would be linear differential operators $\mathcal{L}$. For example, we could have
+\[
+ \mathcal{L} = A_p(x) \frac{\d^p}{\d x^p} + A_{p - 1}(x) \frac{\d^{p - 1}}{\d x^{p - 1}} + \cdots + A_1(x)\frac{\d}{\d x} + A_0(x).
+\]
+It is an easy check that this is in fact linear.
+
+We say $\mathcal{L}$ has order $p$ if the highest derivative that appears is $\frac{\d^p}{\d x^p}$.
+
+In most applications, we will be interested in the case $p = 2$. When will our $\mathcal{L}$ be self-adjoint?
+
+In the $p = 2$ case, we have
+\begin{align*}
+ \mathcal{L} y &= P \frac{\d^2 y}{\d x^2} + R \frac{\d y}{\d x} - Q y \\
+ &= P\left[\frac{\d^2 y}{\d x^2} + \frac{R}{P}\frac{\d y}{\d x} - \frac{Q}{P}y\right] \\
+ &= P\left[e^{-\int \frac{R}{P}\;\d x}\frac{\d}{\d x}\left(e^{\int \frac{R}{P}\;\d x}\frac{\d y}{\d x}\right)- \frac{Q}{P}y\right]\\
+ \intertext{Let $p = \exp\left(\int \frac{R}{P}\;\d x\right)$. Then we can write this as}
+ &= P p^{-1}\left[\frac{\d}{\d x}\left(p\frac{\d y}{\d x}\right) - \frac{Q}{P}p y\right].
+\end{align*}
+We further define $q = \frac{Q}{P}p$. We also drop a factor of $Pp^{-1}$. Then we are left with
+\[
+ \mathcal{L} = \frac{\d}{\d x}\left(p(x)\frac{\d}{\d x}\right) - q(x).
+\]
This is the \emph{Sturm-Liouville form} of the operator. Now let's compute $(f, \mathcal{L}g)$. We integrate by parts twice to obtain
+\begin{align*}
+ (f, \mathcal{L}g) &= \int_a^b f^*\left(\frac{\d}{\d x}\left(p \frac{\d g}{\d x}\right) - qg\right)\;\d x \\
+ &= [f^* pg']_a^b - \int_a^b \left(\frac{\d f^*}{\d x}p\frac{\d g}{\d x} + f^* qg\right)\;\d x\\
+ &= [f^*pg' - f'^*pg]_a^b + \int_a^b \left(\frac{\d}{\d x}\left(p\frac{\d f^*}{\d x}\right) - qf^*\right)g\;\d x\\
+ &= [(f^*g' - f'^*g)p]_a^b + (\mathcal{L}f, g),
+\end{align*}
+assuming that $p, q$ are real.
+
So 2nd order linear differential operators are self-adjoint with respect to this inner product if $p, q$ are real and the boundary terms vanish. When do the boundary terms vanish? One possibility is when $p(a) = p(b) = 0$; another is when $p$ is periodic (with the right period) and we constrain $f$ and $g$ to be periodic.
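As a quick sanity check of the rewriting above, we can verify symbolically that $P p^{-1}$ times the Sturm--Liouville form reproduces $Py'' + Ry' - Qy$. The coefficient functions below are an illustrative, made-up choice:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Illustrative (made-up) coefficients: P = x, R = 1, Q = x.
P, R, Q = x, sp.Integer(1), x
p = sp.exp(sp.integrate(R/P, x))   # p = exp(log x) = x
q = Q/P * p                        # q = x

original = P*sp.diff(y(x), x, 2) + R*sp.diff(y(x), x) - Q*y(x)
sl = sp.diff(p*sp.diff(y(x), x), x) - q*y(x)   # Sturm-Liouville form
residual = sp.expand(original - (P/p)*sl)      # should vanish identically
```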
+
+\begin{eg}
+ We can consider a simple case, where
+ \[
+ \mathcal{L} = \frac{\d^2}{\d x^2}.
+ \]
+ Here we have $p = 1, q = 0$. If we ask for functions to be periodic on $[a, b]$, then
+ \[
+ \int_a^b f^*\frac{\d^2 g}{\d x^2}\;\d x = \int_a^b \frac{\d^2 f^*}{\d x^2}g\;\d x.
+ \]
 Note that it is important that we have a \emph{second-order} differential operator. If it were first-order, we would pick up a minus sign, since we would integrate by parts only once.
+\end{eg}
+
+Just as in finite dimensions, self-adjoint operators have eigenfunctions and eigenvalues with special properties. First, we define a more sophisticated inner product.
+\begin{defi}[Inner product with weight]
+ An \emph{inner product with weight} $w$, written $(\ph, \ph)_w$, is defined by
+ \[
+ (f, g)_w = \int_a^b f^*(x) g(x) w(x)\;\d x,
+ \]
+ where $w$ is real, non-negative, and has only finitely many zeroes.
+\end{defi}
Why do we want a weight $w(x)$? In the future, we might want to work with the unit disk, instead of a square in $\R^2$. When we want to use polar coordinates, we will have to integrate with $r\;\d r\;\d \theta$, instead of just $\d r\;\d \theta$. Hence we need a weight of $r$. Also, we allow the weight to have finitely many zeroes, so that it can vanish at the origin, where $r = 0$.
+
+Why can't we have more zeroes? We want the inner product to keep the property that $(f, f)_w = 0$ iff $f = 0$ (for continuous $f$). If $w$ is zero at too many places, then the inner product could be zero without $f$ being zero.
+
+We now define what it means to be an eigenfunction.
+\begin{defi}[Eigenfunction with weight]
+ An \emph{eigenfunction with weight} $w$ of $\mathcal{L}$ is a function $y: [a, b] \to \C$ obeying the differential equation
+ \[
+ \mathcal{L} y = \lambda w y,
+ \]
+ where $\lambda\in \C$ is the eigenvalue.
+\end{defi}
This might be strange at first sight. It seems like we can take any nonsense $y$, apply $\mathcal{L}$, to get some nonsense $\mathcal{L} y$. But then it is fine, since we can write it as some nonsense $w$ times our original $y$. So any function is an eigenfunction? No! There are many constraints $w$ has to satisfy, like being positive, real and having finitely many zeroes. It turns out this severely restricts what $y$ can be, so not everything will be an eigenfunction. In fact we can develop this theory without the weight function $w$. However, weight functions are much more convenient when, say, dealing with the unit disk.
+
+\begin{prop}
+ The eigenvalues of a Sturm-Liouville operator are real.
+\end{prop}
+
+\begin{proof}
+ Suppose $\mathcal{L} y_i = \lambda_i w y_i$. Then
+ \[
+ \lambda_i (y_i, y_i)_w = \lambda_i (y_i, w y_i) = (y_i, \mathcal{L} y_i) = (\mathcal{L} y_i, y_i) = (\lambda_i w y_i, y_i) = \lambda_i^* (y_i, y_i)_w.
+ \]
+ Since $(y_i, y_i)_w \not= 0$, we have $\lambda_i = \lambda_i^*$.
+\end{proof}
+Note that the first and last terms use the weighted inner product, but the middle terms use the unweighted inner product.
+
+\begin{prop}
+ Eigenfunctions with different eigenvalues (but same weight) are orthogonal.
+\end{prop}
+
+\begin{proof}
+ Let $\mathcal{L} y_i = \lambda_i w y_i$ and $\mathcal{L} y_j = \lambda_j w y_j$. Then
+ \[
+ \lambda_i (y_j, y_i)_w = (y_j, \mathcal{L} y_i) = (\mathcal{L} y_j, y_i) = \lambda_j (y_j, y_i)_w.
+ \]
+ Since $\lambda_i \not= \lambda_j$, we must have $(y_j, y_i)_w = 0$.
+\end{proof}
+
+Those were pretty straightforward manipulations. However, the main results of Sturm--Liouville theory are significantly harder, and we will not prove them. We shall just state them and explore some examples.
+\begin{thm}
+ On a \emph{compact} domain, the eigenvalues $\lambda_1, \lambda_2, \cdots$ form a countably infinite sequence and are discrete.
+\end{thm}
+
+This will be a rather helpful result in quantum mechanics, since in quantum mechanics, the possible values of, say, the energy are the eigenvalues of the Hamiltonian operator. Then this result says that the possible values of the energy are discrete and form an infinite sequence.
+
Note here the word \emph{compact}. In quantum mechanics, if we restrict a particle to a well $[0, 1]$, then it will have quantized energy levels since the domain is compact. However, if the particle is free, then it can have any energy at all since we no longer have a compact domain. Similarly, angular momentum is quantized, since it describes rotations, which take values in $S^1$, which is compact.
+
+\begin{thm}
+ The eigenfunctions are complete: any function $f: [a, b] \to \C$ (obeying appropriate boundary conditions) can be expanded as
+ \[
+ f(x) = \sum_n \hat{f}_n y_n(x),
+ \]
+ where
+ \[
+ \hat{f}_n = \int y^*_n(x) f(x) w(x)\;\d x.
+ \]
+\end{thm}
+
+\begin{eg}
 Let $[a, b] = [-L, L]$, $\mathcal{L} = \frac{\d^2}{\d x^2}$, $w = 1$, restricting to periodic functions. Then our eigenfunctions obey
 \[
 \frac{\d^2 y_n}{\d x^2} = \lambda_n y_n(x).
 \]
 The eigenfunctions are
+ \[
+ y_n(x) = \exp\left(\frac{in\pi x}{L}\right)
+ \]
+ with eigenvalues
+ \[
+ \lambda_n = - \frac{n^2 \pi^2}{L^2}
+ \]
+ for $n \in \Z$. This is just the Fourier series!
+\end{eg}
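We can verify the eigenvalue relation and the orthogonality of these modes symbolically; a small sketch with a couple of sample modes:

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)

def y(n):
    """Fourier mode exp(i n pi x / L) on [-L, L]."""
    return sp.exp(sp.I*n*sp.pi*x/L)

# Eigenvalue relation y_n'' = lambda_n y_n with lambda_n = -n^2 pi^2 / L^2
n = 2   # a sample mode
eig_residual = sp.simplify(sp.diff(y(n), x, 2) + n**2*sp.pi**2/L**2 * y(n))

# Distinct modes are orthogonal over a period; equal modes give norm 2L
cross = sp.simplify(sp.integrate(sp.conjugate(y(2))*y(3), (x, -L, L)))
norm = sp.simplify(sp.integrate(sp.conjugate(y(2))*y(2), (x, -L, L)))
```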
+
+\begin{eg}[Hermite polynomials]
+ We are going to cheat a little bit and pick our domain to be $\R$. We want to study the differential equation
+ \[
 \frac{1}{2}H'' - xH' = -\lambda H,
+ \]
+ with $H: \R \to \C$. We want to put this in Sturm-Liouville form. We have
+ \[
+ p(x) = \exp\left(-\int_0^x 2t \;\d t\right) = e^{-x^2},
+ \]
+ ignoring constant factors. Then $q(x) = 0$. We can rewrite this as
+ \[
+ \frac{\d}{\d x}\left(e^{-x^2}\frac{\d H}{\d x}\right) = -2\lambda e^{-x^2} H(x).
+ \]
+ So we take our weight function to be $w(x) = e^{-x^2}$.
+
+ We now ask that $H(x)$ grows at most polynomially as $|x| \to \infty$. In particular, we want $e^{-x^2}H(x)^2 \to 0$. This ensures that the boundary terms from integration by parts vanish at the infinite boundary, so that our Sturm-Liouville operator is self-adjoint.
+
+ The eigenfunctions turn out to be
+ \[
+ H_n(x) = (-1)^n e^{x^2}\frac{\d^n}{\d x^n}\left(e^{-x^2}\right).
+ \]
+ These are known as the \emph{Hermite polynomials}. Note that these are indeed polynomials. When we differentiate the $e^{-x^2}$ term many times, we get a lot of things from the product rule, but they will always keep an $e^{-x^2}$, which will ultimately cancel with $e^{x^2}$.
+\end{eg}
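A small symbolic check, using the Sturm--Liouville form $(e^{-x^2}H')' = -2\lambda e^{-x^2}H$ above with $\lambda = 3$:

```python
import sympy as sp

x = sp.symbols('x', real=True)

def hermite_poly(n):
    """H_n(x) = (-1)^n e^{x^2} d^n/dx^n e^{-x^2}."""
    return sp.expand((-1)**n * sp.exp(x**2) * sp.diff(sp.exp(-x**2), x, n))

H3 = hermite_poly(3)          # the polynomial 8x^3 - 12x

# Sturm-Liouville relation (e^{-x^2} H')' = -2*lambda*e^{-x^2} H, lambda = 3
w = sp.exp(-x**2)
residual = sp.simplify(sp.diff(w*sp.diff(H3, x), x) + 6*w*H3)

# Orthogonality with weight e^{-x^2}: (H_2, H_3)_w = 0
ip = sp.integrate(hermite_poly(2)*H3*w, (x, -sp.oo, sp.oo))
```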
+
Just as for matrices, we can use the eigenfunction expansion to solve forced differential equations. For example, we might want to solve
+\[
+ \mathcal{L} g = f(x),
+\]
where $f(x)$ is a forcing term. Writing $f(x) = w(x)F(x)$, we can express this as
+\[
+ \mathcal{L} g = w(x) F(x).
+\]
+We expand our $g$ as
+\[
+ g(x) = \sum_{n \in \Z}\hat{g}_n y_n(x).
+\]
+Then by linearity,
+\[
+ \mathcal{L}g = \sum_{n \in \Z}\hat{g}_n \mathcal{L} y_n(x) = \sum_{n \in \Z}\hat{g}_n \lambda_n w(x)y_n(x).
+\]
+We can also expand our forcing function as
+\[
+ w(x) F(x) = w(x) \left(\sum_{n \in \Z}\hat{F}_n y_n(x)\right).
+\]
Taking the (regular) inner product with $y_m(x)$ and using the orthogonality of the eigenfunctions (with respect to the weight), we obtain
\[
 \hat{g}_m \lambda_m = \hat{F}_m.
+\]
+This tells us that
+\[
+ \hat{g}_m = \frac{\hat{F}_m}{\lambda_m}.
+\]
+So we have
+\[
+ g(x) = \sum_{n \in \Z}\frac{\hat{F}_n}{\lambda_n}y_n(x),
+\]
+provided all $\lambda_n$ are non-zero.
+
+This is a systematic way of solving forced differential equations. We used to solve these by ``being smart''. We just looked at the forcing term and tried to guess what would work. Unsurprisingly, this approach does not succeed all the time. Thus it is helpful to have a systematic way of solving the equations.
+
+It is often helpful to rewrite this into another form, using the fact that $\hat{F}_n = (y_n, F)_w$. So we have
+\[
+ g(x) = \sum_{n \in \Z}\frac{1}{\lambda_n}(y_n, F)_w y_n(x) = \int_a^b \sum_{n \in \Z}\frac{1}{\lambda_n} y_n^*(t) y_n(x) w(t) F(t)\;\d t.
+\]
+Note that we swapped the sum and the integral, which is in general a dangerous thing to do, but we don't really care because this is an applied course. We can further write the above as
+\[
+ g(x) = \int_a^b G(x, t) F(t) w(t)\;\d t,
+\]
+where $G(x, t)$ is the infinite sum
+\[
+ G(x, t) = \sum_{n \in \Z}\frac{1}{\lambda_n}y_n^*(t) y_n(x).
+\]
+We call this the Green's function. Note that this depends on $\lambda_n$ and $y_n$ only. It depends on the differential operator $\mathcal{L}$, but not the forcing term $F$. We can think of this as something like the ``inverse matrix'', which we can use to solve the forced differential equation for any forcing term.
+
+Recall that for a matrix, the inverse exists if the determinant is non-zero, which is true if the eigenvalues are all non-zero. Similarly, here a necessary condition for the Green's function to exist is that all the eigenvalues are non-zero.
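Here is a minimal numerical sketch of this procedure, with $\mathcal{L} = \frac{\d^2}{\d x^2}$ on $[0, \pi]$, Dirichlet conditions (so the boundary terms vanish), normalised eigenfunctions $y_n = \sqrt{2/\pi}\sin nx$, eigenvalues $\lambda_n = -n^2$, weight $w = 1$, and a made-up forcing term:

```python
import numpy as np

# L = d^2/dx^2 on [0, pi] with Dirichlet conditions, weight w = 1.
# Normalised eigenfunctions y_n = sqrt(2/pi) sin(nx), eigenvalues -n^2.
x = np.linspace(0, np.pi, 2001)
dx = x[1] - x[0]
F = np.sin(3*x) + 2*np.sin(5*x)        # an arbitrary forcing term

g = np.zeros_like(x)
for n in range(1, 20):
    y_n = np.sqrt(2/np.pi) * np.sin(n*x)
    F_hat = np.sum(y_n * F) * dx       # (y_n, F)_w by quadrature
    g += (F_hat / (-n**2)) * y_n       # g_hat_n = F_hat_n / lambda_n

# Exact solution of g'' = F with g(0) = g(pi) = 0
exact = -np.sin(3*x)/9 - 2*np.sin(5*x)/25
err = np.max(np.abs(g - exact))
```

The truncated eigenfunction sum agrees with the exact solution to quadrature accuracy.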
+
+We now have a second version of Parseval's theorem.
+\begin{thm}[Parseval's theorem II]
+ \[
+ (f, f)_w = \sum_{n \in \Z}|\hat{f}_n|^2.
+ \]
+\end{thm}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ (f, f)_w &= \int_\Omega f^*(x) f(x) w(x) \;\d x\\
+ &= \sum_{n, m \in \Z}\int_\Omega \hat{f}^*_n y_n^*(x) \hat{f}_m y_m(x)w(x)\;\d x\\
+ &= \sum_{n, m\in \Z}\hat{f}^*_n \hat{f}_m (y_n, y_m)_w \\
+ &= \sum_{n \in \Z}|\hat{f}_n|^2.\qedhere
+ \end{align*}
+\end{proof}
+
+\subsection{Least squares approximation}
+So far, we have expanded functions in terms of infinite series. However, in real life, when we ask a computer to do this for us, it is incapable of storing and calculating an infinite series. So we need to truncate it at some point.
+
+Suppose we approximate some function $f: \Omega \to \C$ by a \emph{finite} set of eigenfunctions $y_n(x)$. Suppose we write the approximation $g$ as
+\[
+ g(x) = \sum_{k = 1}^n c_k y_k(x).
+\]
+The objective here is to figure out what values of the coefficients $c_k$ are ``the best'', i.e.\ can make $g$ represent $f$ as closely as possible.
+
+Firstly, we have to make it explicit what we mean by ``as closely as possible''. Here we take this to mean that we want to minimize the $w$-norm $(f - g, f - g)_w$. By linearity of the norm, we know that
+\[
+ (f - g, f - g)_w = (f, f)_w + (g, g)_w - (f, g)_w - (g, f)_w.
+\]
+To minimize this norm, we want
+\[
+ 0 = \frac{\partial}{\partial c_j} (f - g, f - g)_w = \frac{\partial}{\partial c_j}[(f, f)_w + (g, g)_w - (f, g)_w - (g, f)_w].
+\]
We know that the $(f, f)_w$ term vanishes since it does not depend on $c_j$. Expanding our definition of $g$, and taking the eigenfunctions $y_k$ to be orthonormal, we get
\[
 0 = \frac{\partial}{\partial c_j}\left(\sum_{k = 1}^n |c_k|^2 - \sum_{k = 1}^n \hat{f}_k^* c_k - \sum_{k = 1}^n c_k^* \hat{f}_k\right) = c^*_j - \hat{f}_j^*.
+\]
+Note that here we are treating $c_j^*$ and $c_j$ as distinct quantities. So when we vary $c_j$, $c_j^*$ is unchanged. To formally justify this treatment, we can vary the real and imaginary parts separately.
+
+Hence, the extremum is achieved at $c_j^* = \hat{f}_j^*$. Similarly, varying with respect to $c_j^*$, we get that $c_j = \hat{f}_j$.
+
To check that this is indeed a minimum, we can look at the second derivatives
+\[
 \frac{\partial^2}{\partial c_i \partial c_j} (f - g, f - g)_w = \frac{\partial^2}{\partial c_i^* \partial c_j^*} (f - g, f - g)_w = 0,
+\]
+while
+\[
+ \frac{\partial^2}{\partial c_i^* \partial c_j} (f - g, f - g)_w = \delta_{ij} \geq 0.
+\]
+Hence this is indeed a minimum.
+
+Thus we know that $(f - g, f - g)_w$ is minimized over all $g(x)$ when
+\[
+ c_k = \hat{f}_k = (y_k, f)_w.
+\]
+These are exactly the coefficients in our infinite expansion. Hence if we truncate our series at an arbitrary point, it is still the best approximation we can get.
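We can see this optimality numerically: truncating the (orthonormal) sine expansion of a sample function and then perturbing the coefficients only ever increases the error. The function and number of modes below are chosen arbitrarily:

```python
import numpy as np

# Approximate f(x) = x(pi - x) on [0, pi] by 3 orthonormal sine modes
# y_k = sqrt(2/pi) sin(kx), weight w = 1.
x = np.linspace(0, np.pi, 2001)
dx = x[1] - x[0]
f = x * (np.pi - x)

Y = np.sqrt(2/np.pi) * np.sin(np.outer(np.arange(1, 4), x))  # rows y_1..y_3
c_best = Y @ f * dx                                          # c_k = (y_k, f)_w

def sq_error(c):
    """Discretised (f - g, f - g)_w for g = sum_k c_k y_k."""
    g = c @ Y
    return np.sum((f - g)**2) * dx

base = sq_error(c_best)

# Every random perturbation of the coefficients increases the error.
rng = np.random.default_rng(0)
worse = all(sq_error(c_best + 0.1*rng.standard_normal(3)) > base
            for _ in range(5))
```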
+
+\section{Partial differential equations}
+So far, we have come up with a lot of general theory about functions and stuff. We are going to look into some applications of these. In this section, we will develop the general technique of \emph{separation of variables}, and apply this to many many problems.
+
+\subsection{Laplace's equation}
+\begin{defi}[Laplace's equation]
 \emph{Laplace's equation} on $\R^n$ says that a (twice-differentiable) function $\phi: \R^n \to \C$ obeys
+ \[
+ \nabla^2 \phi = 0,
+ \]
+ where
+ \[
+ \nabla^2 = \sum_{i = 1}^n \frac{\partial^2}{\partial x_i^2}.
+ \]
+\end{defi}
+This equation arises in many situations. For example, if we have a conservative force $\mathbf{F} = -\nabla \phi$, and we are in a region where the force obeys the source-free Gauss' law $\nabla\cdot \mathbf{F} = 0$ (e.g.\ when there is no electric charge in an electromagnetic field), then we get Laplace's equation.
+
+It also arises as a special case of the heat equation $\frac{\partial \phi}{\partial t} = \kappa \nabla^2 \phi$, and wave equation $\frac{\partial^2 \phi}{\partial t^2} = c^2 \nabla^2 \phi$, when there is no time dependence.
+
+\begin{defi}[Harmonic functions]
+ Functions that obey Laplace's equation are often called \emph{harmonic functions}.
+\end{defi}
+
+\begin{prop}
+ Let $\Omega$ be a compact domain, and $\partial \Omega$ be its boundary. Let $ f: \partial \Omega\to \R$. Then there is a unique solution to $\nabla^2 \phi = 0$ on $\Omega$ with $\phi|_{\partial \Omega} = f$.
+\end{prop}
+
+\begin{proof}
+ Suppose $\phi_1$ and $\phi_2$ are both solutions such that $\phi_1|_{\partial \Omega} = \phi_2|_{\partial \Omega} = f$. Then let $\Phi = \phi_1 - \phi_2$. So $\Phi = 0$ on the boundary. So we have
+ \[
 0 = \int_\Omega \Phi \nabla^2 \Phi \;\d^n x = -\int_\Omega (\nabla \Phi) \cdot (\nabla \Phi)\;\d^n x + \int_{\partial \Omega} \Phi \nabla \Phi\cdot \mathbf{n} \;\d^{n -1 }x.
+ \]
+ We note that the second term vanishes since $\Phi = 0$ on the boundary. So we have
+ \[
 0 = -\int_\Omega (\nabla \Phi) \cdot (\nabla \Phi)\;\d^n x.
+ \]
+ However, we know that $(\nabla \Phi) \cdot (\nabla \Phi) \geq 0$ with equality iff $\nabla \Phi = 0$. Hence $\Phi$ is constant throughout $\Omega$. Since at the boundary, $\Phi = 0$, we have $\Phi = 0$ throughout, i.e.\ $\phi_1 = \phi_2$.
+\end{proof}
+
Asking that $\phi|_{\partial \Omega} = f(x)$ is a \emph{Dirichlet} boundary condition. We can instead ask $\mathbf{n}\cdot \nabla \phi|_{\partial \Omega} = g$, a \emph{Neumann} condition. In this case, the solution is unique up to the addition of a constant.
+
+These we've all seen in IA Vector Calculus, but this time in arbitrary dimensions.
+
+\subsection{Laplace's equation in the unit disk in \texorpdfstring{$\R^2$}{R2}}
+Let $\Omega = \{(x, y) \in \R^2: x^2 + y^2 \leq 1\}$. Then Laplace's equation becomes
+\[
 0 = \nabla^2 \phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = 4\frac{\partial^2 \phi}{\partial z \partial \bar{z}},
+\]
+where we define $z = x + iy, \bar z = x - iy$. Then the general solution is
+\[
+ \phi(z, \bar z) = \psi(z) + \chi(\bar z)
+\]
+for some $\psi, \chi$.
+
+Suppose we want a solution obeying the Dirichlet boundary condition $\phi(z, \bar z) |_{\partial \Omega} = f(\theta)$, where the boundary $\partial \Omega$ is the unit circle. Since the domain is the unit circle $S^1$, we can Fourier-expand it! We can write
+\[
+ f(\theta) = \sum_{n \in \Z}\hat{f}_n e^{in\theta} = \hat{f}_0 + \sum_{n = 1}^\infty \hat{f}_n e^{in\theta} + \sum_{n = 1}^\infty \hat{f}_{-n} e^{-in\theta}.
+\]
However, we know that $z = re^{i\theta}$ and $\bar{z} = re^{-i\theta}$. So on the boundary, we know that $z = e^{i\theta}$ and $\bar z = e^{-i\theta}$. So we can write
\[
 f(\theta) = \hat{f}_0 + \sum_{n = 1}^\infty (\hat{f}_n z^n + \hat{f}_{-n}\bar{z}^n).
\]
This is defined for $|z| = 1$. Now we can define $\phi$ on the unit disk by
+This is defined for $|z| = 1$. Now we can define $\phi$ on the unit disk by
+\[
+ \phi(z, \bar z) = \hat{f}_0 + \sum_{n = 1}^\infty \hat{f}_n z^n + \sum_{n = 1}^\infty \hat{f}_{-n}\bar{z}^n.
+\]
+It is clear that this obeys the boundary conditions by construction. Also, $\phi(z, \bar{z})$ certainly converges when $|z| < 1$ if the series for $f(\theta)$ converged on $\partial \Omega$. Also, $\phi(z, \bar z)$ is of the form $\psi(z) + \chi(\bar z)$. So it is a solution to Laplace's equation on the unit disk. Since the solution is unique, we're done!
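A concrete illustration with made-up boundary data $f(\theta) = 3\cos\theta + \sin 2\theta$, whose only non-zero Fourier coefficients are $\hat{f}_{\pm 1} = \frac{3}{2}$, $\hat{f}_2 = -\frac{i}{2}$ and $\hat{f}_{-2} = \frac{i}{2}$:

```python
import sympy as sp

x, y, t = sp.symbols('x y theta', real=True)
z = x + sp.I*y
zb = x - sp.I*y

# phi = fhat_1 z + fhat_{-1} zbar + fhat_2 z^2 + fhat_{-2} zbar^2
phi = sp.expand(sp.Rational(3, 2)*(z + zb) - sp.I/2*(z**2 - zb**2))

# phi is harmonic ...
laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)

# ... and restricts to f(theta) on the unit circle
boundary = phi.subs({x: sp.cos(t), y: sp.sin(t)})
match = sp.simplify(boundary - 3*sp.cos(t) - sp.sin(2*t))
```

Here `phi` expands to the real polynomial $3x + 2xy$, of the form $\psi(z) + \chi(\bar z)$.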
+
+\subsection{Separation of variables}
+Unfortunately, the use of complex variables is very special to the case where $\Omega \subseteq \R^2$. In higher dimensions, we proceed differently.
+
+We let $\Omega = \{(x, y, z) \in \R^3: 0 \leq x \leq a, 0 \leq y \leq b, z \geq 0\}$. We want the following boundary conditions:
+\begin{align*}
+ \psi(0, y, z) &= \psi(a, y, z) = 0\\
+ \psi(x, 0, z) &= \psi(x, b, z) = 0\\
+ \lim_{z\to \infty} \psi(x, y, z) &= 0\\
+ \psi(x, y, 0) &= f(x, y),
+\end{align*}
+where $f$ is a given function. In other words, we want our function to be $f$ when $z = 0$ and vanish at the boundaries.
+
+The first step is to look for a solution of $\nabla^2 \psi(x, y, z) = 0$ of the form
+\[
+ \psi(x, y, z) = X(x)Y(y)Z(z).
+\]
+Then we have
+\[
+ 0 = \nabla^2 \psi = YZX'' + XZY'' + XYZ'' = XYZ \left(\frac{X''}{X} + \frac{Y''}{Y} + \frac{Z''}{Z}\right).
+\]
+As long as $\psi \not= 0$, we can divide through by $\psi$ and obtain
+\[
+ \frac{X''}{X} + \frac{Y''}{Y} + \frac{Z''}{Z} = 0.
+\]
+The key point is each term depends on only \emph{one} of the variables $(x, y, z)$. If we vary, say, $x$, then $\frac{Y''}{Y} + \frac{Z''}{Z}$ does not change. For the total sum to be $0$, $\frac{X''}{X}$ must be constant. So each term has to be separately constant. We can thus write
+\[
+ X'' = -\lambda X,\quad Y'' = -\mu Y,\quad Z'' = (\lambda + \mu) Z.
+\]
+The signs before $\lambda$ and $\mu$ are there just to make our life easier later on.
+
+We can solve these to obtain
+\begin{align*}
+ X &= a\sin \sqrt{\lambda} x + b \cos \sqrt{\lambda} x,\\
 Y &= c\sin \sqrt{\mu} y + d \cos \sqrt{\mu} y,\\
+ Z &= g\exp(\sqrt{\lambda + \mu} z) + h\exp(-\sqrt{\lambda + \mu} z).
+\end{align*}
+We now impose the homogeneous boundary conditions, i.e.\ the conditions that $\psi$ vanishes at the walls and at infinity.
+\begin{itemize}
+ \item At $x = 0$, we need $\psi(0, y, z) = 0$. So $b = 0$.
+ \item At $x = a$, we need $\psi(a, y, z) = 0$. So $\lambda = \left(\frac{n\pi}{a}\right)^2$.
+ \item At $y = 0$, we need $\psi(x, 0, z) = 0$. So $d = 0$.
+ \item At $y = b$, we need $\psi(x, b, z) = 0$. So $\mu = \left(\frac{m\pi}{b}\right)^2$.
+ \item As $z \to \infty$, $\psi(x, y, z) \to 0$. So $g = 0$.
+\end{itemize}
+Here we have $n, m \in \N$.
+
+So we found a solution
+\[
+ \psi(x, y, z) = A_{n, m}\sin \left(\frac{n\pi x}{a}\right) \sin \left(\frac{m\pi y}{b}\right) \exp(-s_{n, m} z),
+\]
+where $A_{n, m}$ is an arbitrary constant, and
+\[
+ s_{n, m}^2 = \left(\frac{n^2}{a^2} + \frac{m^2}{b^2}\right)\pi^2.
+\]
+This obeys the homogeneous boundary conditions for any $n, m \in \N$ but not the inhomogeneous condition at $z = 0$.
+
+By linearity, the general solution obeying the homogeneous boundary conditions is
+\[
 \psi(x, y, z) = \sum_{n, m = 1}^{\infty} A_{n, m} \sin\left(\frac{n\pi x}{a}\right) \sin \left(\frac{m\pi y}{b}\right) \exp(-s_{n, m} z).
+\]
+The final step is to impose the inhomogeneous boundary condition $\psi(x, y, 0) = f(x, y)$. In other words, we need
+\[
+ \sum_{n, m = 1}^{\infty} A_{n, m} \sin\left(\frac{n\pi x}{a}\right) \sin \left(\frac{m\pi y}{b}\right) = f(x, y). \tag{$\dagger$}
+\]
+The objective is thus to find the $A_{n, m}$ coefficients.
+
+We can use the orthogonality relation we've previously had:
+\[
+ \int_0^a \sin\left(\frac{k\pi x}{a}\right) \sin \left(\frac{n\pi x}{a}\right)\;\d x = \delta_{k, n} \frac{a}{2}.
+\]
+So we multiply $(\dagger)$ by $\sin \left(\frac{k\pi x}{a}\right)$ and integrate:
+\[
+ \sum_{n, m = 1}^\infty A_{n, m}\int_0^a \sin \left(\frac{k\pi x}{a}\right) \sin \left(\frac{n\pi x}{a}\right)\;\d x \sin\left(\frac{m\pi y}{b}\right) = \int_0^a \sin\left(\frac{k \pi x}{a}\right)f(x, y) \;\d x.
+\]
+Using the orthogonality relation, we have
+\[
+ \frac{a}{2}\sum_{m = 1}^\infty A_{k, m} \sin\left(\frac{m\pi y}{b}\right) = \int_0^a \sin\left(\frac{k\pi x}{a}\right) f(x, y)\;\d x.
+\]
+We can perform the same trick again, and obtain
+\[
+ \frac{ab}{4}A_{k, j} = \int_{[0, a]\times [0, b]}\sin \left(\frac{k\pi x}{a}\right) \sin\left(\frac{j\pi y}{b}\right) f(x, y)\;\d x\;\d y.\tag{$*$}
+\]
+So we found the solution
+\[
 \psi(x, y, z) = \sum_{n, m = 1}^{\infty} A_{n, m} \sin\left(\frac{n\pi x}{a}\right) \sin \left(\frac{m\pi y}{b}\right) \exp(-s_{n, m} z),
+\]
where $A_{n, m}$ is given by $(*)$. Since we have shown that there is a unique solution to Laplace's equation obeying Dirichlet boundary conditions, we're done.
+
Note that if we imposed a boundary condition at finite $z$, say $0 \leq z \leq c$, then both exponentials $\exp (\pm \sqrt{\lambda + \mu} z)$ would have contributed. Similarly, if $\psi$ does not vanish at the other boundaries, then the $\cos$ terms would also contribute.
+
+In general, to actually find our $\psi$, we have to do the horrible integral for $A_{m, n}$, and this is not always easy (since integrals are \emph{hard}). We will just look at a simple example.
+\begin{eg}
+ Suppose $f(x, y) = 1$. Then
+ \[
+ A_{m, n} = \frac{4}{ab} \int_0^a \sin \left(\frac{n\pi x}{a}\right)\;\d x \int_0^ b \sin \left(\frac{m\pi y}{b}\right)\;\d y =
+ \begin{cases}
+ \frac{16}{\pi^2 mn} & n, m\text{ both odd}\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ Hence we have
+ \[
+ \psi(x, y, z) = \frac{16}{\pi^2}\sum_{n, m\text{ odd}} \frac{1}{nm}\sin\left(\frac{n\pi x}{a}\right) \sin \left(\frac{m\pi y}{b}\right) \exp(-s_{m, n}z).
+ \]
+\end{eg}
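We can confirm these coefficients symbolically, evaluating the defining integrals for a few sample indices:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)

def A(n, m):
    """Coefficient A_{n,m} for boundary data f(x, y) = 1."""
    Ix = sp.integrate(sp.sin(n*sp.pi*x/a), (x, 0, a))
    Iy = sp.integrate(sp.sin(m*sp.pi*y/b), (y, 0, b))
    return sp.simplify(4/(a*b) * Ix * Iy)

A11 = A(1, 1)   # expect 16/pi^2
A21 = A(2, 1)   # expect 0, since one index is even
A33 = A(3, 3)   # expect 16/(9 pi^2)
```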
+
+\subsection{Laplace's equation in spherical polar coordinates}
+\subsubsection{Laplace's equation in spherical polar coordinates}
+
+We'll often also be interested in taking
+\[
 \Omega = \{(x, y, z)\in \R^3: \sqrt{x^2 + y^2 + z^2} \leq a\}.
+\]
+Since our domain is spherical, it makes sense to use some coordinate system with spherical symmetry. We use spherical coordinates $(r, \theta, \phi)$, where
+\[
+ x = r\sin \theta \cos \phi,\quad y = r\sin \theta \sin \phi,\quad z = r\cos \theta.
+\]
+The Laplacian is
+\[
+ \nabla^2 = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial}{\partial r}\right) + \frac{1}{r^2 \sin \theta} \frac{\partial}{\partial \theta}\left(\sin \theta \frac{\partial}{\partial \theta}\right) + \frac{1}{r^2 \sin^2 \theta}\frac{\partial^2}{\partial\phi^2}.
+\]
+Similarly, the volume element is
+\[
+ \d V = \d x\;\d y\;\d z = r^2 \sin \theta \;\d r\;\d \theta \;\d \phi.
+\]
+In this course, we'll only look at \emph{axisymmetric solutions}, where $\psi(r, \theta, \phi) = \psi(r, \theta)$ does not depend on $\phi$.
+
+We perform separation of variables, where we look for any set of solution of $\nabla^2 \psi = 0$ inside $\Omega$ where $\psi(r, \theta) = R(r) \Theta(\theta)$.
+
+Laplace's equation then becomes
+\[
+ \frac{\psi}{r^2}\left[\frac{1}{R}\frac{\d}{\d r}(r^2 R') + \frac{1}{\Theta \sin \theta}\frac{\d}{\d \theta}\left(\sin \theta \frac{\d \Theta}{\d \theta}\right)\right] = 0.
+\]
+Similarly, since each term depends only on one variable, each must separately be constant. So we have
+\[
+ \frac{\d}{\d r}\left(r^2 \frac{\d R}{\d r}\right) = \lambda R,\quad \frac{\d}{\d \theta}\left(\sin \theta \frac{\d \Theta}{\d \theta}\right) = -\lambda \sin \theta \Theta.
+\]
+Note that both these equations are eigenfunction equations for some Sturm-Liouville operator and the $\Theta(\theta)$ equation has a non-trivial weight function $w(\theta) = \sin \theta$.
+\subsubsection{Legendre Polynomials}
The angular equation is known as \emph{Legendre's equation}. It's standard to set $x = \cos \theta$ (which is \emph{not} the Cartesian coordinate, but just a new variable). Then we have
+\[
+ \frac{\d}{\d \theta} = -\sin \theta \frac{\d}{\d x}.
+\]
So Legendre's equation becomes
+\[
+ -\sin \theta \frac{\d}{\d x}\left[\sin \theta (-\sin \theta) \frac{\d \Theta}{\d x}\right]+ \lambda \sin \theta \Theta = 0.
+\]
+Equivalently, in terms of $x$, we have
+\[
+ \frac{\d}{\d x}\left[(1 - x^2) \frac{\d \Theta}{\d x}\right] = -\lambda \Theta.
+\]
+Note that since $0 \leq \theta \leq \pi$, the domain of $x$ is $-1 \leq x \leq 1$.
+
+This operator is a Sturm--Liouville operator with
+\[
+ p(x) = 1 - x^2,\quad q(x) = 0.
+\]
+For the Sturm--Liouville operator to be self-adjoint, we had
+\[
+ (g, \mathcal{L} f) = (\mathcal{L}g, f) + [p(x) (g^* f' - g'^* f)]^{1}_{-1}.
+\]
We want the boundary term to vanish. Since $p(x) = 1- x^2$ vanishes at our boundary $x = \pm 1$, the Sturm-Liouville operator is self-adjoint provided our function $\Theta(x)$ is regular at $x = \pm 1$. Hence we look for solutions of Legendre's equation inside $(-1, 1)$ that remain regular at $x = \pm 1$. We can try a power series
+\[
+ \Theta(x) = \sum_{n = 0}^\infty a_n x^n.
+\]
Legendre's equation becomes
+\[
+ (1 - x^2) \sum_{n = 0}^\infty a_n n(n - 1)x^{n - 2} - 2\sum_{n = 0}^\infty a_n nx^n + \lambda \sum_{n = 0}^\infty a_n x^n = 0.
+\]
For this to hold for all $x\in (-1, 1)$, the coefficient of each power of $x$ must vanish separately. So
\[
 a_n(\lambda - n(n + 1)) + a_{n + 2}(n + 2)(n + 1) = 0.
\]
+This requires that
+\[
+ a_{n + 2} = \frac{n(n + 1) - \lambda}{(n + 2)(n + 1)} a_n.
+\]
This equation relates $a_{n + 2}$ to $a_n$. So we can choose $a_0$ and $a_1$ freely. So we get two linearly independent solutions $\Theta_0(x)$ and $\Theta_1(x)$, where they satisfy
+\[
+ \Theta_0(-x) = \Theta_0(x), \quad \Theta_1(-x) = -\Theta_1(x).
+\]
+In particular, we can expand our recurrence formula to obtain
+\begin{align*}
+ \Theta_0(x) &= a_0 \left[1 - \frac{\lambda}{2} x^2 - \frac{(6 - \lambda)\lambda}{4!}x^4 + \cdots\right]\\
 \Theta_1(x) &= a_1 \left[x + \frac{(2 - \lambda)}{3!}x^3 + \frac{(12 - \lambda)(2 - \lambda)}{5!}x^5 + \cdots\right].
+\end{align*}
We now consider the boundary conditions. We know that $\Theta(x)$ must be regular (i.e.\ finite) at $x = \pm 1$. This seems innocent. However, we have
+\[
+ \lim_{n \to \infty}\frac{a_{n + 2}}{a_n} = 1.
+\]
+This is fine if we are inside $(-1, 1)$, since the power series will still converge. However, \emph{at} $\pm 1$, the ratio test is not conclusive and does not guarantee convergence. In fact, more sophisticated convergence tests show that the infinite series would diverge on the boundary! To avoid this problem, we need to choose $\lambda$ such that the series truncates. That is, if we set $\lambda = \ell(\ell + 1)$ for some $\ell \in \N$, then our power series will truncate. Then since we are left with a finite polynomial, it of course converges.
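We can watch the truncation happen by running the recurrence with $\lambda = \ell(\ell + 1)$; a short sketch:

```python
from fractions import Fraction

def legendre_series_coeffs(ell):
    """Coefficients a_0, a_1, ... from a_{n+2} = (n(n+1) - lam)/((n+2)(n+1)) a_n
    with lam = ell(ell + 1), seeded with a_0 = 1 (ell even) or a_1 = 1 (ell odd)."""
    lam = ell*(ell + 1)
    a = [Fraction(0)]*(ell + 3)
    a[ell % 2] = Fraction(1)
    for n in range(ell + 1):
        a[n + 2] = Fraction(n*(n + 1) - lam, (n + 2)*(n + 1)) * a[n]
    return a

c2 = legendre_series_coeffs(2)   # gives 1 - 3x^2, proportional to P_2
c3 = legendre_series_coeffs(3)   # gives x - (5/3)x^3, proportional to P_3
```

In both cases the coefficient $a_{\ell + 2}$ (and hence everything beyond) vanishes, so the series is a polynomial of degree $\ell$.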
+
+In this case, the finiteness boundary condition fixes our possible values of eigenvalues. This is how quantization occurs in quantum mechanics. In this case, this process is why angular momentum is quantized.
+
+The resulting polynomial solutions $P_\ell(x)$ are called \emph{Legendre polynomials}. For example, we have
+\begin{align*}
+ P_0(x) &= 1\\
+ P_1(x) &= x\\
+ P_2(x) &= \frac{1}{2}(3x^2 - 1)\\
+ P_3(x) &= \frac{1}{2}(5x^3 - 3x),
+\end{align*}
+where the overall normalization is chosen to fix $P_\ell(1) = 1$.
+
+It turns out that
+\[
+ P_\ell(x) = \frac{1}{2^\ell \ell!} \frac{\d^\ell}{\d x^\ell} (x^2 - 1)^\ell.
+\]
+The constants in front are just to ensure normalization. We now check that this indeed gives the desired normalization:
+\begin{align*}
 P_\ell(1) &= \left.\frac{1}{2^\ell \ell!} \frac{\d^\ell}{\d x^\ell}[(x - 1)^\ell (x + 1)^\ell]\right|_{x = 1}\\
 &= \frac{1}{2^\ell \ell!} \left[\ell!(x + 1)^\ell + (x - 1)(\text{stuff})\right]_{x = 1}\\
 &= 1.
+\end{align*}
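We can check this formula against the list above (sympy's built-in \texttt{legendre} supplies the reference polynomials):

```python
import sympy as sp

x = sp.symbols('x')

def rodrigues(ell):
    """P_ell(x) = d^ell/dx^ell (x^2 - 1)^ell / (2^ell ell!)."""
    return sp.expand(sp.diff((x**2 - 1)**ell, x, ell) / (2**ell * sp.factorial(ell)))

polys = [rodrigues(ell) for ell in range(5)]
matches = all(sp.expand(polys[ell] - sp.legendre(ell, x)) == 0 for ell in range(5))
at_one = [polys[ell].subs(x, 1) for ell in range(5)]   # normalization P_ell(1)
```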
+We have previously shown that the eigenfunctions of Sturm-Liouville operators are orthogonal. Let's see that directly now for this operator. The weight function is $w(x) = 1$.
+
+We first prove a short lemma: for $0 \leq k \leq \ell$, we have
+\[
+ \frac{\d^k}{\d x^k}(x^2 - 1)^\ell = Q_{\ell, k}(x^2 - 1)^{\ell - k}
+\]
+for some degree $k$ polynomial $Q_{\ell, k}(x)$.
+
+We show this by induction. This is trivially true when $k = 0$. We have
+\begin{align*}
 \frac{\d^{k + 1}}{\d x^{k + 1}} (x^2 - 1)^\ell &= \frac{\d}{\d x}\left[Q_{\ell, k}(x) (x^2 - 1)^{\ell - k}\right] \\
+ &= Q_{\ell, k}' (x^2 - 1)^{\ell - k} + 2(\ell - k)Q_{\ell, k} x(x^2 - 1)^{\ell - k - 1}\\
 &= (x^2 - 1)^{\ell - k - 1}\Big[Q_{\ell, k}'(x^2 - 1) + 2(\ell - k)Q_{\ell, k} x\Big].
+\end{align*}
+Since $Q_{\ell, k}$ has degree $k$, $Q_{\ell, k}'$ has degree $k - 1$. So the right bunch of stuff has degree $k + 1$.
+
+Now we can show orthogonality. We have
+\begin{align*}
+ (P_\ell, P_{\ell'}) &= \int_{-1}^1 P_\ell(x) P_{\ell'}(x) \;\d x\\
+ &= \frac{1}{2^\ell \ell!}\int_{-1}^1 \frac{\d^\ell}{\d x^\ell} (x^2 - 1)^\ell P_{\ell'}(x)\;\d x\\
 &= \frac{1}{2^\ell \ell!}\left[\frac{\d^{\ell - 1}}{\d x^{\ell - 1}} (x^2 - 1)^\ell P_{\ell'}(x)\right]_{-1}^1 - \frac{1}{2^\ell \ell!} \int_{-1}^1 \frac{\d^{\ell - 1}}{\d x^{\ell - 1}}(x^2 - 1)^\ell \frac{\d P_{\ell'}}{\d x}\;\d x\\
+ &= - \frac{1}{2^\ell \ell!} \int_{-1}^1 \frac{\d^{\ell - 1}}{\d x^{\ell - 1}}(x^2 - 1)^\ell \frac{\d P_{\ell'}}{\d x}\;\d x.
+\end{align*}
+Note that the first term disappears since the $(\ell - 1)$\textsuperscript{th} derivative of $(x^2 - 1)^\ell$ still has a factor of $x^2 - 1$. So integration by parts allows us to transfer the derivative from $x^2 - 1$ to $P_{\ell'}$.
+
+Now if $\ell \not= \ell'$, we can wlog assume $\ell' < \ell$. Then we can integrate by parts $\ell'$ times until we get the $\ell'$\textsuperscript{th} derivative of $P_{\ell'}$, which is zero.
+
+In fact, we can show that
+\[
+ (P_\ell, P_{\ell'}) = \frac{2\delta_{\ell\ell'}}{2\ell + 1}.
+\]
Hence the $P_\ell(x)$ are orthogonal polynomials.
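We can verify both the orthogonality and the normalisation $\frac{2}{2\ell + 1}$ numerically with Gauss--Legendre quadrature, which is exact for these polynomial products:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# 20-point Gauss-Legendre quadrature integrates polynomials of degree
# up to 39 exactly, more than enough for products of P_0, ..., P_4.
nodes, weights = leggauss(20)

def ip(l1, l2):
    """(P_l1, P_l2) on [-1, 1] by quadrature."""
    return float(np.sum(weights * Legendre.basis(l1)(nodes)
                                * Legendre.basis(l2)(nodes)))

off_diag = max(abs(ip(l1, l2))
               for l1 in range(5) for l2 in range(5) if l1 != l2)
norms = [ip(l, l) for l in range(5)]   # expect 2/(2l + 1)
```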
+
+By the fundamental theorem of algebra, we know that $P_\ell(x)$ has $\ell$ roots. In fact, they are always real, and lie in $(-1, 1)$.
+
Suppose not. Suppose only $m < \ell$ roots lie in $(-1, 1)$. Then let $Q_m(x) = \prod_{r = 1}^m (x - x_r)$, where $\{x_1, x_2, \cdots, x_m\}$ are these $m$ roots. Consider the polynomial $P_{\ell}(x)Q_m(x)$. If we factorize this, we get $\prod_{r = m + 1}^\ell (x - x_r)\prod_{r = 1}^m (x - x_r)^2$, where $x_{m + 1}, \cdots, x_\ell$ are the remaining roots. The first factors have roots outside $(-1, 1)$, and hence do not change sign in $(-1, 1)$. The latter factors are always non-negative. Hence for some appropriate sign, we have
+\[
+ \pm\int_{-1}^1 P_\ell(x) Q_m(x) \;\d x > 0.
+\]
+However, we can expand
+\[
 Q_m(x) = \sum_{r = 0}^m q_r P_r(x),
+\]
+but $(P_\ell, P_r) = 0$ for all $r < \ell$. This is a contradiction.
+
+\subsubsection{Solution to radial part}
+In our original equation $\nabla^2 \phi = 0$, we now set $\Theta(\theta) = P_\ell(\cos \theta)$. The radial equation becomes
+\[
+ (r^2 R')' = \ell(\ell + 1)R.
+\]
+Trying a solution of the form $R(r) = r^\alpha$, we find that $\alpha(\alpha + 1) = \ell(\ell + 1)$. So the solution is $\alpha = \ell$ or $-(\ell + 1)$. Hence
+\[
+ \phi(r, \theta) = \sum \left(a_\ell r^\ell + \frac{b_\ell}{r^{\ell + 1}}\right) P_\ell(\cos \theta)
+\]
+is the general solution to the Laplace equation $\nabla^2 \phi = 0$ on the domain. If we want regularity at $r = 0$, then we need $b_\ell = 0$ for all $\ell$.
+
+The $a_\ell$ are fixed if we require $\phi(a, \theta) = f(\theta)$ on $\partial \Omega$, i.e.\ when $r = a$.
+
+We expand
+\[
+ f(\theta) = \sum_{\ell = 0}^\infty F_\ell P_\ell(\cos \theta),
+\]
+where
+\[
+ F_\ell = \frac{2\ell + 1}{2}(P_\ell, f) = \frac{2\ell + 1}{2}\int_0^\pi P_{\ell}(\cos \theta) f(\theta) \sin \theta \;\d \theta.
+\]
+So
+\[
+ \phi(r, \theta) = \sum_{\ell = 0}^\infty F_\ell \left(\frac{r}{a}\right)^\ell P_\ell(\cos \theta).
+\]
+\subsection{Multipole expansions for Laplace's equation}
+One can quickly check that
+\[
+ \phi(\mathbf{r}) = \frac{1}{|\mathbf{r} - \mathbf{r}'|}
+\]
+solves Laplace's equation $\nabla^2 \phi = 0$ for all $\mathbf{r}\in \R^3 \setminus \{\mathbf{r}'\}$. For example, if $\mathbf{r}' = \hat{\mathbf{k}}$, where $\hat{\mathbf{k}}$ is the unit vector in the $z$ direction, then
+\[
+ \frac{1}{|\mathbf{r} - \hat{\mathbf{k}}|} = \frac{1}{\sqrt{r^2 + 1 - 2r \cos \theta}} = \sum_{\ell = 0}^\infty c_\ell r^\ell P_\ell(\cos \theta).
+\]
+To find these coefficients, we can employ a little trick. Since $P_\ell(1) = 1$, at $\theta = 0$, we have
+\[
+ \sum_{\ell = 0}^\infty c_\ell r^\ell = \frac{1}{1 - r} = \sum_{\ell = 0}^\infty r^\ell.
+\]
+So all the coefficients must be $1$. This is valid for $r < 1$.
+
+More generally, we have
+\[
+ \frac{1}{|\mathbf{r} - \mathbf{r}'|} = \frac{1}{r'} \sum_{\ell = 0}^\infty \left(\frac{r}{r'}\right)^\ell P_\ell(\hat{\mathbf{r}}\cdot \hat{\mathbf{r}}').
+\]
+This is called the \emph{multipole expansion}, and is valid when $r < r'$. Thus
+\[
+ \frac{1}{|\mathbf{r} - \mathbf{r}'|} = \frac{1}{r'} + \frac{r}{r'^2}\hat{\mathbf{r}}\cdot \hat{\mathbf{r}}' + \cdots.
+\]
+The first term $\frac{1}{r'}$ is known as the \emph{monopole}, and the second term is known as the \emph{dipole}, in analogy to charges in electromagnetism. The monopole term is what we get due to a single charge, and the dipole term is what arises from a pair of nearby opposite charges.
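+The claim that all the coefficients $c_\ell$ equal $1$ for $r < 1$ can be checked against the closed form directly. A quick numerical sketch (the evaluation point and truncation order are our choices):
+
+```python
+import numpy as np
+from numpy.polynomial import legendre as leg
+
+r, mu = 0.5, 0.3                     # r < 1, mu = cos(theta)
+exact = 1 / np.sqrt(r**2 + 1 - 2 * r * mu)
+
+# partial sum of sum_l r^l P_l(mu), i.e. legval with coefficients r^l
+nmax = 40
+series = leg.legval(mu, r ** np.arange(nmax + 1))
+assert abs(series - exact) < 1e-10
+```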
+
+\subsection{Laplace's equation in cylindrical coordinates}
+We let
+\[
+ \Omega = \{(r, \theta, z) \in \R^3: r \leq a, z \geq 0\}.
+\]
+In cylindrical polars, we have
+\[
+ \nabla^2 \phi = \frac{1}{r}\frac{\partial}{\partial r}\left(r \frac{\partial \phi}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 \phi}{\partial \theta^2} + \frac{\partial^2 \phi}{\partial z^2} = 0.
+\]
+We look for a solution that is regular inside $\Omega$ and obeys the boundary conditions
+\begin{align*}
+ \phi(a, \theta, z) &= 0\\
+ \phi(r, \theta, 0) &= f(r, \theta)\\
+ \lim_{z\to \infty} \phi(r, \theta, z) &= 0,
+\end{align*}
+where $f$ is some fixed function.
+
+Again, we start by separation of variables. We try
+\[
+ \phi(r, \theta, z) = R(r) \Theta(\theta)Z(z).
+\]
+Then we get
+\[
+ \frac{1}{rR} (rR')' + \frac{1}{r^2}\frac{\Theta''}{\Theta} + \frac{Z''}{Z} = 0.
+\]
+So we immediately know that
+\[
+ Z'' = \mu Z
+\]
+for some constant $\mu$. We now replace $Z''/Z$ by $\mu$, and multiply by $r^2$ to obtain
+\[
+ \frac{r}{R}(rR')' + \frac{\Theta''}{\Theta} + \mu r^2 = 0.
+\]
+So we know that
+\[
+ \Theta'' = -\lambda \Theta.
+\]
+Then we are left with
+\[
+ r^2 R'' + rR' + (\mu r^2 - \lambda)R = 0.
+\]
+Since we require periodicity in $\theta$, we have
+\[
+ \Theta(\theta) = a_n \sin n\theta + b_n \cos n\theta, \quad \lambda = n^2,\quad n \in \N.
+\]
+Since we want the solution to decay as $z \to \infty$, we have
+\[
+ Z(z) = c_\mu e^{-\sqrt{\mu}z}.
+\]
+The radial equation is of Sturm-Liouville type since it can be written as
+\[
+ \frac{\d}{\d r}\left(r \frac{\d R}{\d r}\right) - \frac{n^2}{r}R = -\mu rR.
+\]
+Here we have
+\[
+ p(r) = r, q(r) = -\frac{n^2}{r}, w(r) = r.
+\]
+Introducing $x = r\sqrt{\mu}$, we can rewrite this as
+\[
+ x^2 \frac{\d^2 R}{\d x^2} + x\frac{\d R}{\d x} + (x^2 - n^2)R = 0.
+\]
+This is Bessel's equation. Note that this is actually a whole family of differential equations, one for each $n$. The $n$ is not the eigenvalue we are trying to figure out in this equation. It is already fixed by the $\Theta$ equation. The actual eigenvalue we are finding out is $\mu$, not $n$.
+
+Since Bessel's equation is second-order, there are two independent solutions for each $n$, namely $J_n(x)$ and $Y_n(x)$. These are called Bessel functions of order $n$ of the first ($J_n$) and second ($Y_n$) kind. We will not study these functions in detail, but just state some useful properties of them.
+
+ % include plot of Bessel's functions
+The $J_n$'s are all regular at the origin. In particular, as $x \to 0$, we have $J_n (x) \sim x^n$. They look like decaying sine waves, but the zeroes are not regularly spaced.
+
+On the other hand, the $Y_n$'s are similar but singular at the origin. As $x \to 0$, $Y_0(x) \sim \ln x$, while $Y_n(x) \sim x^{-n}$.
+
+Now we can write our separable solution as
+\[
+ \phi(r, \theta, z) = (a_n \sin n \theta + b_n \cos n\theta) e^{-\sqrt{\mu}z} [c_{\mu, n}J_n (r\sqrt{\mu}) + d_{\mu, n}Y_n (r\sqrt{\mu})].
+\]
+Since we want regularity at $r = 0$, we don't want to have the $Y_n$ terms. So $d_{\mu, n} = 0$.
+
+We now impose our first boundary condition $\phi(a, \theta, z) = 0$. This demands $J_n(a \sqrt{\mu}) = 0$. So we must have
+\[
+ \sqrt{\mu} = \frac{k_{ni}}{a},
+\]
+where $k_{ni}$ is the $i$\textsuperscript{th} root of $J_n(x)$ (since there is not much pattern to these roots, this is the best description we can give!).
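+While the roots $k_{ni}$ have no closed form, they are well tabulated. A quick check, assuming SciPy is available:
+
+```python
+import numpy as np
+from scipy.special import jv, jn_zeros
+
+# first five positive zeros of J_0: no simple closed form,
+# though consecutive zeros are asymptotically pi apart
+k0 = jn_zeros(0, 5)   # approx [2.405, 5.520, 8.654, 11.792, 14.931]
+assert np.all(np.abs(jv(0, k0)) < 1e-10)
+assert abs(np.diff(k0)[-1] - np.pi) < 0.01
+```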
+
+So our general solution obeying the homogeneous conditions is
+\[
+ \phi(r, \theta, z) = \sum_{n = 0}^\infty \sum_{i \in \text{roots}} (A_{ni} \sin n \theta + B_{ni}\cos n\theta) \exp\left(-\frac{k_{ni}}{a}z\right) J_n \left(\frac{k_{ni}r}{a}\right).
+\]
+We finally impose the inhomogeneous boundary condition $\phi(r, \theta, 0) = f(r, \theta)$. To do this, we use the orthogonality condition
+\[
+ \int_0^a J_n\left(\frac{k_{nj}r}{a}\right)J_n\left(\frac{k_{ni}r}{a}\right) r\;\d r = \frac{a^2}{2}\delta_{ij}[J_n'(k_{ni})]^2,
+\]
+which is proved in the example sheets. Note that we have a weight function $r$ here. Also, remember this is the orthogonality relation for different roots of Bessel functions of the \emph{same order}. It does not relate Bessel functions of different orders.
+
+We now multiply our general solution at $z = 0$ by $\cos m\theta$ and integrate over $\theta$ to obtain
+\[
+ \frac{1}{\pi}\int_{-\pi}^\pi f(r, \theta) \cos m\theta \;\d \theta = \sum_{i \in \text{roots}} B_{mi} J_m\left(\frac{k_{mi}r}{a}\right).
+\]
+Now we just have Bessel functions of a single order $m$. So we can use the orthogonality relation for the Bessel functions to obtain
+\[
+ B_{mj} = \frac{1}{[J'_m(k_{mj})]^2}\frac{2}{\pi a^2}\int_0^a \int_{-\pi}^\pi \cos m\theta\, J_m\left(\frac{k_{mj}r}{a}\right) f(r, \theta)r \;\d r \;\d \theta.
+\]
+How can we possibly perform this integral? We don't know what $J_m$ looks like or what the roots $k_{mj}$ are, and yet we have to multiply all these complicated things together and integrate! Often, we just don't. We ask our computers to do it numerically for us. There are some cases where we can get an explicit answer, but these are rare.
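+For instance, a sketch of such a numerical computation for a single coefficient $B_{mj}$, with a made-up boundary function $f$ (both $f$ and the parameter choices below are hypothetical), might look like:
+
+```python
+import numpy as np
+from scipy import integrate
+from scipy.special import jv, jvp, jn_zeros
+
+a, m, j = 1.0, 1, 1                  # drum radius, order, root index (assumed)
+k = jn_zeros(m, j)[-1]               # k_{mj}: the j-th positive root of J_m
+
+def f(r, theta):                     # hypothetical boundary data
+    return r * (1 - r) * np.cos(theta)
+
+def integrand(theta, r):
+    return np.cos(m * theta) * jv(m, k * r / a) * f(r, theta) * r
+
+# inner integral over theta in (-pi, pi), outer over r in (0, a)
+val, _ = integrate.dblquad(integrand, 0, a, -np.pi, np.pi)
+B_mj = 2 * val / (np.pi * a**2 * jvp(m, k)**2)
+print(B_mj)
+```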
+
+\subsection{The heat equation}
+We choose $\Omega = \R^n \times[0, \infty)$. We treat $\R^n$ as the ``space'' variable, and $[0, \infty)$ as our time variable.
+\begin{defi}[Heat equation]
+ The \emph{heat equation} for a function $\phi: \Omega \to \C$ is
+ \[
+ \frac{\partial \phi}{\partial t} = \kappa \nabla^2 \phi,
+ \]
+ where $\kappa > 0$ is the \emph{diffusion constant}.
+\end{defi}
+We can think of this as a function $\phi$ representing the ``temperature'' at different points in space and time, and this equation says the rate of change of temperature is proportional to $\nabla^2 \phi$.
+
+Our previous study of Laplace's equation comes in handy. When our system is in equilibrium and does not change with time, we are left with Laplace's equation.
+
+Let's first look at some interesting properties of the heat equation.
+
+The first important property of the heat equation is that the total ``heat'' is conserved under evolution by the heat equation, i.e.
+\[
+ \frac{\d}{\d t}\int_{\R^n} \phi(\mathbf{x}, t)\;\d^n x = 0,
+\]
+since we can write this as
+\[
+ \frac{\d}{\d t}\int_{\R^n} \phi(\mathbf{x}, t)\;\d^n x = \int_{\R^n} \frac{\partial \phi}{\partial t}\; \d^n x = \kappa \int_{\R^n} \nabla^2 \phi \;\d^n x = 0,
+\]
+provided $\nabla \phi(\mathbf{x}, t) \to 0$ as $|\mathbf{x}|\to \infty$.
+
+In particular, on a compact space, i.e.\ $\Omega = M\times [0, \infty)$ with $M$ compact, there is no boundary term and we have
+\[
+ \frac{\d}{\d t}\int_M \phi \;\d\mu = 0.
+\]
+The second useful property is if $\phi(\mathbf{x}, t)$ solves the heat equation, so do $\phi_1(\mathbf{x}, t) = \phi(\mathbf{x} - \mathbf{x}_0, t - t_0)$, and $\phi_2(\mathbf{x}, t) = A \phi(\lambda \mathbf{x}, \lambda^2 t)$.
+
+Let's try to choose $A$ such that
+\[
+ \int_{\R^n}\phi_2(\mathbf{x}, t) \;\d^n x = \int_{\R^n}\phi(\mathbf{x}, t) \;\d^n x.
+\]
+We have
+\[
+ \int_{\R^n} \phi_2(\mathbf{x}, t)\;\d^n x = A\int_{\R^n}\phi(\lambda \mathbf{x}, \lambda^2 t)\;\d^n x = A\lambda^{-n} \int_{\R^n}\phi(\mathbf{y}, \lambda^2 t)\;\d^n y,
+\]
+where we substituted $\mathbf{y} = \lambda \mathbf{x}$.
+
+So if we set $A = \lambda^n$, then the total heat in $\phi_2$ at time $t$ is the same as the total heat in $\phi$ at time $\lambda^2 t$. But the total heat is constant in time. So the total heat is the same at all times.
+
+Note again that if $\frac{\partial\phi}{\partial t} = \kappa \nabla^2 \phi$ holds for $\phi(\mathbf{x}, t)$ on $\Omega\times [0, \infty)$, then it also holds for $\lambda^n \phi(\lambda \mathbf{x}, \lambda^2 t)$. This means that nothing is lost by letting $\lambda = \frac{1}{\sqrt{\kappa t}}$ and considering solutions of the form % dodgy
+\[
+ \phi(\mathbf{x}, t) = \frac{1}{(\kappa t)^{n/2}} F\left(\frac{\mathbf{x}}{\sqrt{\kappa t}}\right) = \frac{1}{(\kappa t)^{n/2}}F(\boldsymbol\eta),
+\]
+where
+\[
+ \boldsymbol\eta = \frac{\mathbf{x}}{\sqrt{\kappa t}}.
+\]
+For example, in $1 + 1$ dimensions, we can look for a solution of the form $\frac{1}{\sqrt{\kappa t}} F(\eta)$. We have
+\[
+ \frac{\partial\phi}{\partial t} = \frac{\partial}{\partial t}\left(\frac{1}{\sqrt{\kappa t}} F(\eta)\right) = \frac{-1}{2\sqrt{\kappa t^3}} F(\eta) + \frac{1}{\sqrt{\kappa t}} \frac{\partial \eta}{\partial t}F'(\eta) = \frac{-1}{2\sqrt{\kappa t^3}}[F + \eta F'].
+\]
+On the other side of the equation, we have
+\[
+ \kappa\frac{\partial^2}{\partial x^2} \left(\frac{1}{\sqrt{\kappa t}} F(\eta)\right) = \frac{\kappa}{\sqrt{\kappa t}} \frac{\partial}{\partial x}\left(\frac{\partial \eta}{\partial x} F'\right) = \frac{1}{\sqrt{\kappa t^3}}F''.
+\]
+So the equation becomes
+\[
+ 0 = 2F'' + \eta F' + F = (2F' + \eta F)'.
+\]
+So we have
+\[
+ 2F' + \eta F = \text{const},
+\]
+and the boundary conditions require that the constant is zero. So we get
+\[
+ F' = \frac{-\eta}{2}F.
+\]
+We can solve this to get
+\[
+ F(\eta) = a \exp\left(\frac{-\eta^2}{4}\right).
+\]
+By convention, we now normalize our solution $\phi(x, t)$ by requiring
+\[
+ \int_{-\infty}^\infty \phi(x, t)\;\d x = 1.
+\]
+This gives
+\[
+ a = \frac{1}{\sqrt{4\pi}},
+\]
+and the full solution is
+\[
+ \phi(x, t) = \frac{1}{\sqrt{4\pi \kappa t}}\exp\left(- \frac{x^2}{4\kappa t}\right).
+\]
+More generally, on $\R^n \times [0, \infty)$, we have the solution
+\[
+ \phi(\mathbf{x}, t) = \frac{1}{(4\pi \kappa(t - t_0))^{n/2}} \exp\left(\frac{-|\mathbf{x} - \mathbf{x}_0|^2}{4\kappa(t - t_0)}\right).
+\]
+At any fixed time $t > t_0$, the solution is a Gaussian centered on $\mathbf{x}_0$ with standard deviation $\sqrt{2\kappa (t - t_0)}$, and in one dimension the height of the peak at $x = x_0$ is $\sqrt{\frac{1}{4\pi \kappa (t - t_0)}}$.
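+As a sanity check, the fundamental solution can be verified numerically to carry unit total heat at all times while its peak decays (grid and sample times below are our choices):
+
+```python
+import numpy as np
+
+def heat_kernel(x, t, kappa=1.0):
+    # 1D fundamental solution of the heat equation
+    return np.exp(-x**2 / (4 * kappa * t)) / np.sqrt(4 * np.pi * kappa * t)
+
+x = np.linspace(-50, 50, 200001)
+dx = x[1] - x[0]
+for t in (0.1, 1.0, 10.0):
+    assert abs(np.sum(heat_kernel(x, t)) * dx - 1) < 1e-6   # heat conserved
+
+# the solution flattens: the peak height decreases with t
+assert heat_kernel(0, 10.0) < heat_kernel(0, 1.0) < heat_kernel(0, 0.1)
+```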
+
+We see that the solution gets ``flatter'' as $t$ increases. This is a general property of the evolution by the heat equation. Suppose we have an eigenfunction $y(\mathbf{x})$ of the Laplacian on $\Omega$, i.e.\ $\nabla^2 y = -\lambda y$. We want to show that in certain cases, $\lambda$ must be positive. Consider
+\[
+ - \lambda \int_\Omega |y|^2 \d^n x = \int_\Omega y^*(x) \nabla^2 y(x)\;\d^n x = \int_{\partial\Omega} y^* \mathbf{n}\cdot \nabla y \;\d^{n - 1}x - \int_\Omega |\nabla y|^2 \;\d^n x.
+\]
+If there is no boundary term, e.g.\ when $\Omega$ is closed and compact (e.g.\ a sphere), then we get
+\[
+ \lambda = \frac{\int_\Omega |\nabla y|^2\;\d^n x}{\int_\Omega |y|^2 \;\d^n x}.
+\]
+Hence the eigenvalues are all positive. What does this mean for heat flow? Suppose $\phi(\mathbf{x}, t)$ obeys the heat equation for $t > 0$. At time $t = 0$, we have $\phi(\mathbf{x}, 0) = f(\mathbf{x})$. We can write the solution (somehow) as
+\[
+ \phi(\mathbf{x}, t) = e^{t\nabla^2} \phi(\mathbf{x}, 0),
+\]
+where we (for convenience) let $\kappa = 1$. But we can expand our initial condition
+\[
+ \phi(\mathbf{x}, 0) = f(\mathbf{x}) = \sum_I c_I y_I(\mathbf{x})
+\]
+in a complete set $\{y_I(\mathbf{x})\}$ of eigenfunctions for $\nabla^2$.
+
+By linearity, we have
+\begin{align*}
+ \phi(\mathbf{x}, t) &= e^{t \nabla^2}\left(\sum_I c_I y_I(\mathbf{x})\right) \\
+ &= \sum_I c_I e^{t \nabla^2 }y_I(\mathbf{x}) \\
+ &= \sum_I c_I e^{-\lambda_I t}y_I(\mathbf{x}) \\
+ &= \sum_I c_I(t) y_I(\mathbf{x}),
+\end{align*}
+where $c_I(t) = c_I e^{-\lambda_I t}$. Since the $\lambda_I$ are all positive, we know that the $c_I(t)$ decay exponentially with time. In particular, the coefficients corresponding to the largest eigenvalues $|\lambda_I|$ decay most rapidly. We also know that eigenfunctions with larger eigenvalues are in general more ``spiky''. So these smooth out rather quickly.
+
+When people first came up with the heat equation to describe heat loss, they were slightly skeptical --- this flattening out caused by the heat equation is not time-reversible, while Newton's laws of motion are. How could this smoothing out arise from reversible equations?
+
+Einstein came up with an example where the heat equation arises naturally from seemingly reversible laws, namely Brownian motion. The idea is that a particle jumps around randomly in space, where the movement is independent of time and position.
+
+Let the probability that a dust particle moves through a step $y$ over a time $\Delta t$ be $p(y, \Delta t)$. For any fixed $\Delta t$, we must have
+\[
+ \int_{-\infty}^\infty p(y, \Delta t)\;\d y = 1.
+\]
+We also assume that $p(y, \Delta t)$ is independent of time, and of the location of the dust particle. We also assume $p(y, \Delta t)$ is strongly peaked around $y = 0$ and $p(y, \Delta t) = p(-y, \Delta t)$. Now let $P(x, t)$ be the probability that the dust particle is located at $x$ at time $t$. Then at time $t + \Delta t$, we have
+\[
+ P(x, t + \Delta t) = \int_{-\infty}^\infty P(x - y, t) p(y, \Delta t)\;\d y.
+\]
+For $P(x - y, t)$ sufficiently smooth, we can write
+\[
+ P(x - y, t) \approx P(x, t)- y\frac{\partial P}{\partial x}(x, t) + \frac{y^2}{2}\frac{\partial^2 P}{\partial x^2}(x, t).
+\]
+So we get
+\[
+ P(x, t + \Delta t) \approx P(x, t) - \frac{\partial P}{\partial x}(x, t) \bra y \ket + \frac{1}{2} \bra y^2\ket \frac{\partial^2 P}{\partial x^2}(x, t) + \cdots,
+\]
+where
+\[
+ \bra y^r\ket = \int_{-\infty}^\infty y^r p(y, \Delta t)\;\d y.
+\]
+Since $p$ is even, we expect $\bra y^r\ket$ to vanish when $r$ is odd. Also, since $p$ is strongly peaked at $y = 0$, we expect the higher order terms to be small. So we can write
+\[
+ P(x, t + \Delta t) - P(x, t) \approx \frac{1}{2}\bra y^2\ket \frac{\partial^2 P}{\partial x^2}.
+\]
+Suppose that as we take the limit $\Delta t \to 0$, we get $\frac{\bra y^2\ket }{2 \Delta t} \to \kappa$ for some $\kappa$. Then this becomes the heat equation
+\[
+ \frac{\partial P}{\partial t} = \kappa \frac{\partial^2 P}{\partial x^2}.
+\]
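+Einstein's argument can be illustrated by direct simulation: many independent walkers taking symmetric, position- and time-independent steps spread out with variance $2\kappa t$, exactly as the heat equation predicts. A sketch (the step distribution and all parameters are our choices):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n_walkers, n_steps, dt = 50_000, 200, 0.01
+step_var = 0.02              # <y^2> per step, so kappa = step_var / (2 dt) = 1
+
+# symmetric, position- and time-independent steps (Gaussian for convenience)
+x = np.zeros(n_walkers)
+for _ in range(n_steps):
+    x += rng.normal(0.0, np.sqrt(step_var), n_walkers)
+
+kappa = step_var / (2 * dt)
+t = n_steps * dt             # t = 2
+# heat-equation prediction: positions spread with variance 2 * kappa * t
+assert abs(x.var() / (2 * kappa * t) - 1) < 0.05
+```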
+\begin{prop}
+ Suppose $\phi: \Omega\times [0, \infty) \to \R$ satisfies the heat equation $\frac{\partial \phi}{\partial t} = \kappa \nabla^2 \phi$, and obeys
+ \begin{itemize}
+ \item Initial condition $\phi(\mathbf{x}, 0) = f(\mathbf{x})$ for all $\mathbf{x} \in \Omega$;
+ \item Boundary condition $\phi(\mathbf{x}, t) |_{\partial \Omega} = g(\mathbf{x}, t)$ for all $t \in [0, \infty)$.
+ \end{itemize}
+ Then $\phi(\mathbf{x}, t)$ is unique.
+\end{prop}
+
+\begin{proof}
+ Suppose $\phi_1$ and $\phi_2$ are both solutions. Then define $\Phi = \phi_1 - \phi_2$ and
+ \[
+ E(t) = \frac{1}{2}\int_\Omega \Phi^2 \;\d V.
+ \]
+ Then we know that $E(t) \geq 0$. Since $\phi_1, \phi_2$ both obey the heat equation, so does $\Phi$. Also, on the boundary and at $t = 0$, we know that $\Phi = 0$. We have
+ \begin{align*}
+ \frac{\d E}{\d t} &= \int_\Omega \Phi \frac{\partial \Phi}{\partial t}\;\d V\\
+ &= \kappa \int_\Omega \Phi \nabla^2 \Phi\;\d V\\
+ &= \kappa \int_{\partial \Omega} \Phi \nabla \Phi \cdot \d \mathbf{S} - \kappa \int_\Omega (\nabla \Phi)^2\;\d V\\
+ &= - \kappa \int_\Omega (\nabla \Phi)^2\;\d V\\
+ &\leq 0.
+ \end{align*}
+ So we know that $E$ decreases with time but is always non-negative. We also know that at time $t = 0$, $E = \Phi = 0$. So $E = 0$ always. So $\Phi = 0$ always. So $\phi_1 = \phi_2$.
+\end{proof}
+
+\begin{eg}[Heat conduction in uniform medium]
+ Suppose we are on Earth, and the Sun heats up the surface through sunlight, maintaining the soil at some fixed temperature. However, this temperature varies with time as we move through the day-night cycle and the seasons of the year.
+
+ We let $\phi(x, t)$ be the temperature of the soil as a function of the depth $x$, defined on $\R^{\geq 0} \times [0, \infty)$. Then it obeys the heat equation with conditions
+ \begin{enumerate}
+ \item $\phi(0, t) = \phi_0 + A \cos \left(\frac{2\pi t}{t_D}\right) + B\cos \left(\frac{2\pi t}{t_Y}\right)$.
+ \item $\phi(x, t) \to $ const as $x \to \infty$.
+ \end{enumerate}
+ We know that
+ \[
+ \frac{\partial \phi}{\partial t} = K \frac{\partial^2 \phi}{\partial x^2}.
+ \]
+ We try the separation of variables
+ \[
+ \phi(x, t) = T(t) X(x).
+ \]
+ Then we get the equations
+ \[
+ T' = \lambda T,\quad X'' = \frac{\lambda}{K}X.
+ \]
+ From the boundary condition, we know that the solution will be oscillatory in time. So we let $\lambda$ be imaginary, and set $\lambda = i \omega$. So we have
+ \[
+ \phi(x, t) = e^{i\omega t}\left(a_\omega e^{-\sqrt{i\omega/K}x} + b_\omega e^{\sqrt{i\omega/K}x}\right).
+ \]
+ Note that we have
+ \[
+ \sqrt{\frac{i\omega}{K}} =
+ \begin{cases}
+ (1 + i) \sqrt{\frac{|\omega|}{2K}} & \omega > 0\\
+ (i - 1) \sqrt{\frac{|\omega|}{2K}} & \omega < 0
+ \end{cases}
+ \]
+ Since $\phi(x, t) \to $ constant as $x \to \infty$, we don't want our $\phi$ to blow up. So if $\omega < 0$, we need $a_\omega = 0$. Otherwise, we need $b_\omega = 0$.
+
+ To match up at $x = 0$, we just want terms with
+ \[
+ |\omega| = \omega_D = \frac{2\pi}{t_D},\quad |\omega| = \omega_Y = \frac{2\pi}{t_Y}.
+ \]
+ So we can write our solution as
+ \begin{align*}
+ \phi(x, t) = \phi_0 &+ A \exp\left(-\sqrt{\frac{\omega_D}{2K}}x\right)\cos\left(\omega_Dt - \sqrt{\frac{\omega_D}{2K}}x\right) \\
+ &+ B \exp\left(-\sqrt{\frac{\omega_Y}{2K}}x\right)\cos\left(\omega_Yt - \sqrt{\frac{\omega_Y}{2K}}x\right).
+ \end{align*}
+ We can notice a few things. Firstly, as we go further down, the effect of the sun decays, and the temperature is more stable. Also, the effect of the day-night cycle decays more quickly than the annual cycle, which makes sense. We also see that while the temperature does fluctuate with the same frequency as the day-night/annual cycle, as we go down, there is a phase shift. This is helpful since we can store things underground and make them cool in summer, warm in winter.
+\end{eg}
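+To get a feel for the decay lengths, note that the amplitude at depth $x$ is suppressed by $\exp\left(-\sqrt{\omega/2K}\,x\right)$, so the natural length scale is $\sqrt{2K/\omega}$. With a typical soil diffusivity (an assumed value, not from the notes), the day-night signal dies off within tens of centimetres while the annual one reaches a few metres:
+
+```python
+import numpy as np
+
+K = 1e-6                          # assumed soil thermal diffusivity, m^2/s
+day, year = 86400.0, 365.25 * 86400.0
+
+def skin_depth(period):
+    # length scale over which exp(-sqrt(omega/2K) x) decays by 1/e
+    omega = 2 * np.pi / period
+    return np.sqrt(2 * K / omega)
+
+print(skin_depth(day))            # about 0.17 m: the daily cycle barely penetrates
+print(skin_depth(year))           # about 3.2 m: the annual cycle reaches metres
+assert np.isclose(skin_depth(year) / skin_depth(day), np.sqrt(365.25))
+```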
+
+\begin{eg}[Heat conduction in a finite rod]
+ We now have a finite rod of length $2L$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [thick] (-2, 0) -- (2, 0);
+ \node [circ] at (0, 0) {};
+ \node [below] at (0, 0) {$x = 0$};
+
+ \node [circ] at (-2, 0) {};
+ \node [below] at (-2, 0) {$x = -L$};
+
+ \node [circ] at (2, 0) {};
+ \node [below] at (2, 0) {$x = L$};
+ \end{tikzpicture}
+ \end{center}
+ We have the initial conditions
+ \[
+ \phi(x, 0) = \Theta(x) =
+ \begin{cases}
+ 1 & 0 < x < L\\
+ 0 & -L < x < 0
+ \end{cases}
+ \]
+ and the boundary conditions
+ \[
+ \phi(-L, t) = 0,\quad \phi(L, t) = 1
+ \]
+ So we start with a step temperature, and then maintain the two ends at fixed temperatures $0$ and $1$.
+
+ We are going to do separation of variables, but we note that all our boundary and initial conditions are inhomogeneous. This is not helpful. So we use a little trick. We first look for \emph{any} solution satisfying the boundary conditions $\phi_S(-L, t) = 0$, $\phi_S(L, t) = 1$. For example, we can look for time-independent solutions $\phi_S(x, t) = \phi_S(x)$. Then we need $\frac{\d^2 \phi_S}{\d x^2} = 0$. So we get
+ \[
+ \phi_S(x) = \frac{x + L}{2L}.
+ \]
+ By linearity, $\psi(x, t) = \phi(x, t) - \phi_S(x)$ obeys the heat equation with the conditions
+ \[
+ \psi(-L, t) = \psi(L, t) = 0,
+ \]
+ which is homogeneous! Our initial condition now becomes
+ \[
+ \psi(x, 0) = \Theta(x) - \frac{x + L}{2L}.
+ \]
+ We now perform separation of variables for
+ \[
+ \psi(x, t) = X(x) T(t).
+ \]
+ Then we obtain the equations
+ \[
+ T' = -\kappa \lambda T,\quad X'' = - \lambda X.
+ \]
+ Then we have
+ \[
+ \psi(x, t) = \left[a\sin (\sqrt{\lambda} x) + b\cos (\sqrt{\lambda}x)\right] e^{-\kappa \lambda t}.
+ \]
+ Since the initial condition is odd, we can eliminate all the $\cos$ terms. Our boundary conditions also require
+ \[
+ \lambda = \frac{n^2 \pi^2}{L^2}, \quad n = 1, 2, \cdots.
+ \]
+ So we have
+ \[
+ \phi(x, t) = \phi_S(x) + \sum_{n = 1}^\infty a_n \sin\left(\frac{n\pi x}{L}\right) \exp\left(-\frac{\kappa n^2 \pi^2}{L^2}t\right),
+ \]
+ where $a_n$ are the Fourier coefficients
+ \[
+ a_n = \frac{1}{L}\int_{-L}^L \left[\Theta(x) - \frac{x + L}{2L}\right] \sin\left(\frac{n\pi x}{L}\right)\;\d x = \frac{1}{n\pi}.
+ \]
+\end{eg}
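+The coefficients $a_n = \frac{1}{n\pi}$ can be confirmed by direct quadrature; a short sketch:
+
+```python
+import numpy as np
+from scipy import integrate
+
+L = 1.0
+
+def psi0(x):
+    # step initial data minus the steady-state profile (x + L) / (2L)
+    step = 1.0 if x > 0 else 0.0
+    return step - (x + L) / (2 * L)
+
+for n in range(1, 6):
+    val, _ = integrate.quad(lambda x: psi0(x) * np.sin(n * np.pi * x / L),
+                            -L, L, points=[0.0])
+    assert abs(val / L - 1 / (n * np.pi)) < 1e-9
+```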
+
+\begin{eg}[Cooling of a uniform sphere]
+ Once upon a time, Charles Darwin went around the Earth, looked at species, and decided that evolution happened. When he came up with his theory, it worked quite well, except that there was one worry. Was there enough time on Earth for evolution to occur?
+
+ This calculation was done by Kelvin. He knew well that the Earth started as a ball of molten rock, and obviously life couldn't have evolved when the world was still molten rock. So he would want to know how long it took for the Earth to cool down to its current temperature, and if that was sufficient for evolution to occur.
+
+ We can, unsurprisingly, use the heat equation. We assume that at time $t = 0$, the temperature is $\phi_0$, the melting temperature of rock. We also assume that space is cold, and take the temperature on the boundary of the Earth to be $0$. We then solve the heat equation on a sphere (or ball), and see how much time it takes for the Earth to cool down to its present temperature.
+
+ We let $\Omega = \{(x, y, z) \in \R^3: r \leq R\}$, and we want a solution $\phi(r, t)$ of the heat equation that is spherically symmetric and obeys the conditions
+ \begin{itemize}
+ \item $\phi(R, t) = 0$,
+ \item $\phi(r, 0) = \phi_0$.
+ \end{itemize}
+ We can write the heat equation as
+ \[
+ \frac{\partial \phi}{\partial t} = \kappa \nabla^2 \phi = \frac{\kappa}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial \phi}{\partial r}\right).
+ \]
+ Again, we do the separation of variables.
+ \[
+ \phi(r, t) = R(r) T(t).
+ \]
+ So we get
+ \[
+ T' = -\lambda^2 \kappa T,\quad \frac{\d}{\d r}\left(r^2 \frac{\d R}{\d r}\right) = -\lambda^2 r^2 R.
+ \]
+ We can simplify this a bit by letting $R(r) = \frac{S(r)}{r}$, then our radial equation just becomes
+ \[
+ S'' = -\lambda^2 S.
+ \]
+ We can solve this to get
+ \[
+ R(r) = A_\lambda \frac{\sin \lambda r}{r} + B_\lambda \frac{\cos \lambda r}{r}.
+ \]
+ We want a regular solution at $r = 0$. So we need $B_\lambda = 0$. Also, the boundary condition $\phi(R, t) = 0$ gives
+ \[
+ \lambda = \frac{n\pi}{R},\quad n = 1, 2, \cdots
+ \]
+ So we get
+ \[
+ \phi(r, t) = \sum_{n \in \Z} \frac{A_n}{r} \sin\left(\frac{n\pi r}{R}\right) \exp\left(\frac{-\kappa n^2 \pi^2 t}{R^2}\right),
+ \]
+ where our coefficients are
+ \[
+ A_n = (-1)^{n + 1}\frac{\phi_0 R}{n\pi}.
+ \]
+ We know that the Earth isn't just a cold piece of rock. There are still volcanoes. So we know many terms still contribute to the sum nowadays. This is rather difficult to work with. So we instead look at the temperature gradient
+ \[
+ \frac{\partial \phi}{\partial r} = \frac{\phi_0}{r} \sum_{n \in \Z} (-1)^{n + 1} \cos \left(\frac{n\pi r}{R}\right) \exp\left(\frac{-\kappa n^2 \pi^2 t}{R^2}\right) + \sin\text{stuff}.
+ \]
+ We evaluate this at the surface of the Earth, $r = R$. So we get the gradient
+ \[
+ \left.\frac{\partial\phi}{\partial r}\right|_R = -\frac{\phi_0}{R} \sum_{n\in \Z} \exp\left(-\frac{\kappa n^2 \pi^2 t}{R^2}\right) \approx -\frac{\phi_0}{R} \int_{-\infty}^\infty \exp\left(-\frac{\kappa \pi^2 y^2 t}{R^2}\right)\;\d y = -\phi_0 \sqrt{\frac{1}{\pi \kappa t}}.
+ \]
+ So the age of the Earth is approximately
+ \[
+ t \approx \left(\frac{\phi_0}{V}\right)^2 \frac{1}{\pi \kappa},
+ \]
+ where
+ \[
+ V = \left.\frac{\partial \phi}{\partial r}\right|_R.
+ \]
+ We can plug the numbers in, and find that the Earth is about 100 million years old. This is not enough time for evolution.
+
+ Later on, people discovered that radioactive decay of elements inside the Earth produces heat, and provides an additional source of heat. So problem solved! The current estimate of the age of the Earth is around 4.5 billion years, and evolution did have enough time to occur.
+\end{eg}
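+Inverting the surface gradient $|V| \approx \phi_0/\sqrt{\pi \kappa t}$ gives an age of order $(\phi_0/V)^2 / (\pi\kappa)$. Plugging in rough values of the sort Kelvin had available (all three numbers below are assumptions for illustration):
+
+```python
+import numpy as np
+
+# rough values for illustration only (none are from the notes):
+phi0 = 2000.0          # K, temperature of molten rock
+V = 25e-3              # K/m, observed near-surface temperature gradient
+kappa = 1e-6           # m^2/s, thermal diffusivity of rock
+
+t = (phi0 / V) ** 2 / (np.pi * kappa)      # seconds
+years = t / (365.25 * 24 * 3600)
+print(f"{years:.1e}")                      # of order 10^7 -- 10^8 years
+assert 1e7 < years < 1e9
+```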
+
+\subsection{The wave equation}
+Consider a string $x\in [0, L]$ undergoing small oscillations described by $\phi(x, t)$.
+\begin{center}
+ \begin{tikzpicture}[xscale=0.75]
+ \draw (0, 0) sin (2, 1) cos (4, 0) sin (6, -1) cos (8, 0) sin (10, 1) cos (12, 0);
+ \draw [dashed] (0, 0) -- (12, 0);
+
+ \draw [->] (2, 0) -- (2, 1) node [pos=0.5, right] {$\phi(x, t)$};
+ \node [circ] at (8.666, 0.5) {};
+ \node [left] at (8.666, 0.5) {$A$};
+ \node [circ] at (9.333, 0.866) {};
+ \node [above] at (9.333, 0.866) {$B$};
+ \draw [dashed] (8.666, 0.5) -- +(0.7, 0);
+ \draw (8.666, 0.5) +(0.3, 0) arc(0:30:0.3);
+ \node [right] at (8.85, 0.6) {\tiny$\theta\!_A$};
+
+ \draw [dashed] (9.333, 0.866) -- +(0.7, 0);
+ \draw (9.333, 0.866) +(0.3, 0) arc (0:15:0.3);
+ \node [right] at (9.5, 0.7) {\tiny$\theta\!_B$};
+ \end{tikzpicture}
+\end{center} % improve
+Consider two points $A$, $B$ separated by a small distance $\delta x$. Let $T_A$ ($T_B$) be the outward pointing tension tangent to the string at $A$ ($B$). Since there is no sideways ($x$) motion, there is no net horizontal force. So
+\[
+ T_A\cos \theta_A = T_B \cos \theta_B = T.\tag{$*$}
+\]
+If the string has mass per unit length $\mu$, then in the vertical direction, Newton's second law gives
+\[
+ \mu \delta x \frac{\partial^2 \phi}{\partial t^2} = T_B \sin \theta_B - T_A \sin \theta_A.
+\]
+We now divide everything by $T$, noting the relation $(*)$, and get
+\begin{align*}
+ \mu \frac{\delta x}{T}\frac{\partial^2 \phi}{\partial t^2} &= \frac{T_B \sin \theta_B}{T_B\cos \theta_B} - \frac{T_A \sin \theta_A}{T_A \cos \theta_A}\\
+ &= \tan \theta_B - \tan \theta_A\\
+ &= \left.\frac{\partial\phi}{\partial x}\right|_B - \left.\frac{\partial\phi}{\partial x}\right|_A\\
+ &\approx \delta x\frac{\partial^2 \phi}{\partial x^2}.
+\end{align*}
+Taking the limit $\delta x \to 0$ and setting $c^2 = T/\mu$, we have that $\phi(x, t)$ obeys the wave equation
+\[
+ \frac{1}{c^2} \frac{\partial^2 \phi}{\partial t^2} = \frac{\partial^2 \phi}{\partial x^2}.
+\]
+From IA Differential Equations, we've learnt that the solution can be written as $f(x - ct) + g(x + ct)$. However, this does not extend to higher dimensions, but separation of variables does. So let's do that.
+
+Assume that the string is fixed at both ends. Then $\phi(0, t) = \phi(L, t) = 0$ for all $t$. Then we can perform separation of variables, and the general solution can be written as
+\[
+ \phi(x, t) = \sum_{n = 1}^\infty \sin\left(\frac{n \pi x}{L}\right)\left[A_n \cos\left(\frac{n \pi c t}{L}\right) + B_n \sin\left(\frac{n\pi ct}{L}\right)\right].
+\]
+The coefficients $A_n$ are fixed by the initial profile $\phi(x, 0)$ of the string, while the coefficients $B_n$ are fixed by the initial string velocity $\frac{\partial \phi}{\partial t} (x, 0)$. Note that we need \emph{two} sets of initial conditions, since the wave equation is second-order in time.
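+To see how the $A_n$ arise in practice, consider a string plucked into a triangle at its midpoint and released from rest (so all $B_n = 0$); the coefficients computed by quadrature match the well-known closed form $A_n = \frac{8h}{\pi^2 n^2}\sin\frac{n\pi}{2}$ for the plucked string. A sketch (parameters are our choices):
+
+```python
+import numpy as np
+from scipy import integrate
+
+Lstr, h = 1.0, 0.1                 # string length and pluck height (assumed)
+
+def profile(x):
+    # triangular pluck at the midpoint, released from rest (all B_n = 0)
+    return 2 * h * min(x, Lstr - x) / Lstr
+
+def A(n):
+    val, _ = integrate.quad(lambda x: profile(x) * np.sin(n * np.pi * x / Lstr),
+                            0, Lstr, points=[Lstr / 2])
+    return 2 * val / Lstr
+
+# matches the known closed form for the plucked string
+for n in range(1, 8):
+    assert abs(A(n) - 8 * h * np.sin(n * np.pi / 2) / (np.pi * n) ** 2) < 1e-10
+```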
+
+\subsubsection*{Energetics and uniqueness}
+An oscillating string carries some \emph{energy}. The \emph{kinetic energy} of a small element $\delta x$ of the string is
+\[
+ \frac{1}{2}\mu \delta x\left(\frac{\partial \phi}{\partial t}\right)^2.
+\]
+The \emph{total kinetic energy} of the string is hence the integral
+\[
+ K(t) = \frac{\mu}{2}\int_0^L \left(\frac{\partial \phi}{\partial t}\right)^2 \;\d x.
+\]
+The string also has potential energy due to tension. The \emph{potential energy} of a small element $\delta x$ of the string is
+\begin{align*}
+ T\times \text{extension} &= T(\delta s - \delta x) \\
+ &= T(\sqrt{\delta x^2 + \delta \phi^2} - \delta x)\\
+ &\approx T\delta x\left(1 + \frac{1}{2}\left(\frac{\delta \phi}{\delta x}\right)^2 + \cdots\right) - T \delta x\\
+ &= \frac{T}{2}\delta x \left(\frac{\delta \phi}{\delta x}\right)^2.
+\end{align*}
+Hence the total potential energy of the string is
+\[
+ V(t) = \frac{\mu}{2}\int_0^L c^2 \left(\frac{\partial \phi}{\partial x}\right)^2 \;\d x,
+\]
+using the definition of $c^2$.
+
+Using our series expansion for $\phi$, we get
+\begin{align*}
+ K(t) &= \frac{\mu \pi^2 c^2}{4L}\sum_{n = 1}^\infty n^2 \left[A_n \sin \left(\frac{n\pi c t}{L}\right) - B_n \cos \left(\frac{n\pi ct}{L}\right)\right]^2\\
+ V(t) &= \frac{\mu\pi^2 c^2}{4L} \sum_{n = 1}^\infty n^2 \left[A_n \cos\left(\frac{n\pi ct}{L}\right) + B_n \sin\left(\frac{n\pi ct}{L}\right)\right]^2
+\end{align*}
+The \emph{total energy} is
+\[
+ E(t) = \frac{\mu\pi^2 c^2}{4L}\sum_{n = 1}^\infty n^2 (A_n^2 + B_n^2).
+\]
+What can we see here? Our solution is essentially an (infinite) sum of independent harmonic oscillators, one for each $n$. The period of the fundamental mode ($n = 1$) is $\frac{2\pi}{\omega} = 2\pi \cdot \frac{L}{\pi c} = \frac{2L}{c}$. Thus, averaging over a period, the average kinetic energy is
+\[
+ \bar K = \frac{c}{2L} \int_0^{2L/c} K(t)\;\d t = \bar V = \frac{c}{2L}\int_0^{2L/c} V(t)\;\d t = \frac{E}{2}.
+\]
+Hence we have an equipartition of the energy between the kinetic and potential energy.
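+The equipartition claim is easy to verify for a single mode by averaging $K$ and $V$ over one period (parameters chosen arbitrarily):
+
+```python
+import numpy as np
+from scipy import integrate
+
+mu_, c, L = 1.0, 1.0, 1.0          # string parameters (assumed)
+n, A_n = 1, 1.0                    # a single mode with B_n = 0
+C = mu_ * np.pi**2 * c**2 / (4 * L)
+period = 2 * L / c
+
+K = lambda t: C * n**2 * (A_n * np.sin(n * np.pi * c * t / L)) ** 2
+V = lambda t: C * n**2 * (A_n * np.cos(n * np.pi * c * t / L)) ** 2
+E = C * n**2 * A_n**2              # total energy, constant in time
+
+Kbar = integrate.quad(K, 0, period)[0] / period
+Vbar = integrate.quad(V, 0, period)[0] / period
+assert abs(Kbar - E / 2) < 1e-10 and abs(Vbar - E / 2) < 1e-10
+```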
+
+The energy also allows us to prove a uniqueness theorem. First we show that energy is conserved in general.
+
+\begin{prop}
+ Suppose $\phi: \Omega \times [0, \infty) \to \R$ obeys the wave equation $\frac{\partial^2 \phi}{\partial t^2} = c^2 \nabla^2 \phi$ inside $\Omega \times (0, \infty)$, and is fixed at the boundary. Then $E$ is constant.
+\end{prop}
+
+\begin{proof}
+ By definition, we have
+ \[
+ \frac{\d E}{\d t} = \int_\Omega \frac{\partial^2 \phi}{\partial t^2}\frac{\partial \phi}{\partial t} + c^2 \nabla \left(\frac{\partial \phi}{\partial t}\right)\cdot \nabla \phi \;\d V.
+ \]
+ We integrate by parts in the second term to obtain
+ \[
+ \frac{\d E}{\d t} = \int_\Omega \frac{\partial \phi}{\partial t}\left(\frac{\partial^2 \phi}{\partial t^2} - c^2 \nabla^2 \phi\right)\;\d V + c^2 \int_{\partial \Omega} \frac{\partial \phi}{\partial t}\nabla \phi \cdot \d \mathbf{S}.
+ \]
+ Since $\frac{\partial^2 \phi}{\partial t^2} = c^2 \nabla^2 \phi$ by the wave equation, and $\frac{\partial \phi}{\partial t}$ vanishes on $\partial \Omega$ since $\phi$ is fixed there, we know that
+ \[
+ \frac{\d E}{\d t} = 0.\qedhere
+ \]
+\end{proof}
+
+\begin{prop}
+ Suppose $\phi: \Omega \times [0, \infty) \to \R$ obeys the wave equation $\frac{\partial^2 \phi}{\partial t^2} = c^2 \nabla^2 \phi$ inside $\Omega \times (0, \infty)$, and obeys, for some $f, g, h$,
+ \begin{enumerate}
+ \item $\phi(x, 0) = f(x)$;
+ \item $\frac{\partial \phi}{\partial t}(x, 0) = g(x)$; and
+ \item $\phi|_{\partial \Omega\times [0, \infty)} = h(x)$.
+ \end{enumerate}
+ Then $\phi$ is unique.
+\end{prop}
+\begin{proof}
+ Suppose $\phi_1$ and $\phi_2$ are two such solutions. Then $\psi = \phi_1 - \phi_2$ obeys the wave equation
+ \[
+ \frac{\partial^2 \psi}{\partial t^2} = c^2 \nabla^2 \psi,
+ \]
+ and
+ \[
+ \psi|_{\partial \Omega \times [0, \infty)} = \psi|_{\Omega \times \{0\}} = \left.\frac{\partial \psi}{\partial t}\right|_{\Omega \times \{0\}} = 0.
+ \]
+ We let
+ \[
+ E_\psi(t) = \frac{1}{2}\int_\Omega \left[\left(\frac{\partial \psi}{\partial t}\right)^2 + c^2 \nabla \psi \cdot \nabla \psi\right] \;\d V.
+ \]
+ Then since $\psi$ obeys the wave equation with fixed boundary conditions, we know $E_\psi$ is constant.
+
+ Initially, at $t = 0$, we know that $\psi = \frac{\partial \psi}{\partial t} = 0$. So $E_\psi(0) = 0$. At time $t$, we have
+ \[
+ E_\psi = \frac{1}{2}\int_\Omega \left(\frac{\partial \psi}{\partial t}\right)^2 + c^2 (\nabla \psi)\cdot (\nabla \psi) \;\d V = 0.
+ \]
+ Hence, since the integrand is a sum of squares, we must have $\frac{\partial \psi}{\partial t} = 0$ everywhere. So $\psi$ is constant in time. Since it is $0$ at the beginning, it is always $0$.
+\end{proof}
+
+\begin{eg}
+ Consider $\Omega = \{(x, y) \in \R^2 : x^2 + y^2 \leq 1\}$, and let $\phi(r, \theta, t)$ solve
+ \[
+ \frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2} = \nabla^2 \phi = \frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial \phi}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2\phi}{\partial \theta^2},
+ \]
+ with the boundary condition $\phi|_{\partial \Omega} = 0$. We can imagine this as a drum, where the membrane can freely oscillate with the boundary fixed.
+
+ Separating variables with $\phi(r, \theta, t) = T(t) R(r) \Theta(\theta)$, we get
+ \[
+ T'' = - c^2 \lambda T,\quad \Theta'' = -\mu \Theta,\quad r(rR')' + (r^2\lambda - \mu)R = 0.
+ \]
+ Then as before, $T$ and $\Theta$ are both sine and cosine waves. Since we are in polar coordinates, we need $\phi(t, r, \theta + 2\pi) = \phi(t, r, \theta)$. So we must have $\mu = m^2$ for some $m \in \N$. Then the radial equation becomes
+ \[
+ r(rR')' + (r^2 \lambda - m^2) R = 0,
+ \]
+ which is Bessel's equation of order $m$. So we have
+ \[
+ R(r) = a_m J_m(\sqrt{\lambda} r) + b_m Y_m(\sqrt{\lambda} r).
+ \]
+ Since we want regularity at $r = 0$, we need $b_m = 0$ for all $m$. To satisfy the boundary condition $\phi|_{\partial \Omega} = 0$, we must choose $\sqrt{\lambda} = k_{mi}$, where $k_{mi}$ is the $i$th root of $J_m$.
+
+ Hence the general solution is
+ \begin{align*}
+ \phi(t, r, \theta) &= \sum_{i = 0}^\infty [A_{0i}\sin (k_{0i} ct) + B_{0i}\cos(k_{0i}ct)] J_0 (k_{0i}r)\\
+ &\quad\quad +\sum_{m = 1}^\infty\sum_{i = 0}^\infty [A_{mi}\cos (m \theta) + B_{mi}\sin (m\theta)]\sin (k_{mi}ct) J_m (k_{mi} r)\\
+ &\quad\quad +\sum_{m = 1}^\infty\sum_{i = 0}^\infty [C_{mi}\cos (m \theta) + D_{mi}\sin (m\theta)]\cos (k_{mi}ct) J_m (k_{mi} r)
+ \end{align*}
+ For example, suppose we have the initial conditions $\phi(0, r, \theta) = 0$, $\partial_t \phi(0, r, \theta) = g(r)$. So we start with a flat surface and suddenly hit it with some force. By symmetry, we must have $A_{mi}, B_{mi}, C_{mi}, D_{mi} = 0$ for $m \not= 0$. If this does not seem obvious, we can perform some integrals with the orthogonality relations to prove this.
+
+ At $t = 0$, we need $\phi = 0$. So we must have $B_{0i} = 0$. So we are left with
+ \[
+ \phi = \sum_{i = 0}^\infty A_{0i} \sin (k_{0i} ct) J_0(k_{0i}r).
+ \]
+ We can differentiate this with respect to $t$, set $t = 0$, multiply by $J_0(k_{0j} r)\, r$ and integrate to obtain
+ \[
+ \int_0^1 \sum_{i = 0}^\infty k_{0i}c A_{0i}J_0(k_{0i}r) J_0(k_{0j} r)r\;\d r = \int_0^1 g(r) J_0(k_{0j}r)r \;\d r.
+ \]
+ Using the orthogonality relations for $J_0$ from the example sheet, we get
+ \[
+ A_{0i} = \frac{2}{ck_{0i}} \frac{1}{[J_0'(k_{0i})]^2} \int_0^1 g(r)J_0(k_{0i} r) r\;\d r.
+ \]
+ Note that the frequencies come from the roots of the Bessel function, and are not evenly spaced. This is different from, say, string instruments, where the frequencies are evenly spaced. So drums sound different from strings.
+\end{eg}
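Since the series for $J_0$ converges everywhere, the first few roots $k_{0i}$ can be found with nothing more than bisection. The sketch below (series truncation and bisection brackets are ad hoc choices) illustrates the uneven spacing of the drum's frequencies:

```python
import math

def J0(x):
    # Bessel J_0 from its everywhere-convergent Taylor series:
    # J_0(x) = sum_k (-1)^k (x/2)^{2k} / (k!)^2.
    s, term = 0.0, 1.0
    for k in range(60):
        s += term
        term *= -(x / 2) ** 2 / ((k + 1) ** 2)
    return s

def bisect(f, a, b, tol=1e-12):
    # Plain bisection; assumes f changes sign on [a, b].
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

k01 = bisect(J0, 2, 3)   # first root of J_0, approx 2.4048
k02 = bisect(J0, 5, 6)   # second root, approx 5.5201
ratio = k02 / k01        # approx 2.295, not 2
```

The ratio $k_{02}/k_{01} \approx 2.295$ is not $2$, so the second radial mode is not an octave above the fundamental, unlike on a string.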
+
+So far, we have used separation of variables to solve our differential equations. This is quite good, as it worked in our examples, but there are a few issues with it. Of course, we have the problem of whether it converges or not, but there is a more fundamental problem.
+
+To perform separation of variables, we need to pick a good coordinate system, such that the boundary conditions come in a nice form. We were able to perform separation of variables because we can find some coordinates that fit our boundary conditions well. However, in real life, most things are not nice. Our domain might have a weird shape, and we cannot easily find good coordinates for it.
+
+Hence, instead of solving things directly, we want to ask some more general questions about them. In particular, Kac asked the question ``can you hear the shape of a drum?'' --- suppose we know all the frequencies of the modes of oscillation on some domain $\Omega$. Can we know what $\Omega$ is like?
+
+The answer is no, and we can explicitly construct two distinct drums that sound the same. Fortunately, we get an affirmative answer if we restrict ourselves a bit. If we require $\Omega$ to be convex and have a real analytic boundary, then yes! In fact, we have the following result: let $N(\lambda_0)$ be the number of eigenvalues less than $\lambda_0$. Then we can show that
+\[
+ 4\pi \lim_{\lambda_0 \to \infty} \frac{N(\lambda_0)}{\lambda_0} = \mathrm{Area}(\Omega).
+\]
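This asymptotic can be sanity-checked on a domain where the spectrum is known exactly. For the unit square (convex, though its boundary is not analytic; the asymptotic itself holds much more generally), the Dirichlet eigenvalues are $\pi^2(m^2 + n^2)$ with $m, n \geq 1$, so counting eigenvalues reduces to counting lattice points. A sketch, with an arbitrary cutoff $\lambda_0$:

```python
import math

# Dirichlet eigenvalues of the unit square are pi^2 (m^2 + n^2), m, n >= 1,
# so N(lambda_0) is a lattice-point count in a quarter disc.
lam0 = 40000.0                    # arbitrary cutoff
R2 = lam0 / math.pi ** 2          # count m^2 + n^2 <= R2
R = math.isqrt(int(R2)) + 1
N = sum(1 for m in range(1, R + 1) for n in range(1, R + 1)
        if m * m + n * n <= R2)

weyl = lam0 / (4 * math.pi)       # prediction: Area * lambda_0 / (4 pi), Area = 1
ratio = N / weyl
```

The ratio approaches $1$ only slowly, since the error is a boundary term of order $\sqrt{\lambda_0}$.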
+
+\section{Distributions}
+\subsection{Distributions}
+When performing separation of variables, we also have the problem of convergence. To perform separation of variables, we first find some particular solutions of the form, say, $X(x)Y(y)Z(z)$. We know that these solve, say, the wave equation. However, what we do next is take an \emph{infinite} sum of these functions. First of all, how can we be sure that this converges at all? Even if it did, how do we know that the sum satisfies the differential equation? As we have seen in Fourier series, an infinite sum of continuous functions can be discontinuous. If it is not even continuous, how can we say it is a solution of a differential equation?
+
+Hence, at first people were rather skeptical of this method. They thought that while these methods worked, they only work on a small, restricted domain of problems. However, later people realized that this method is indeed rather general, as long as we allow some more ``generalized'' functions that are not functions in the usual sense. These are known as distributions.
+
+To define a distribution, we first pick a class of ``nice'' test functions, where ``nice'' means we can do whatever we want to them (e.g.\ differentiate, integrate etc.) A main example is the set of infinitely smooth functions supported on some compact set $K\subseteq \Omega$, written $C^{\infty}_{\mathrm{cpt}}(\Omega)$ or $D(\Omega)$. For example, we can have the bump function defined by
+\[
+ \phi(x) =
+ \begin{cases}
+ e^{-1/(1- x^2)} & |x| < 1\\
+ 0 & \text{otherwise}.
+ \end{cases}
+\]
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw [->] (-2, 0) -- (2, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 1);
+
+ \draw [semithick, domain=-0.999:0.999, mblue] plot (\x, {exp (-1 / (1 - \x * \x))});
+ \draw [semithick, mblue] (-1.5, 0) -- (-1, 0);
+ \draw [semithick, mblue] (1, 0) -- (1.5, 0);
+ \end{tikzpicture}
+\end{center}
+We now define a distribution $T$ to be a linear map $T:D(\Omega) \to \R$. For those who are doing IB Linear Algebra, this is the dual space of the space of (``nice'') real-valued functions. For $\phi \in D(\Omega)$, we often write its image under $T$ as $T[\phi]$.
+
+\begin{eg}
+ The simplest example is just an ordinary function that is integrable over any compact region. Then we define the distribution $T_f$ as
+ \[
+ T_f[\phi] = \int_\Omega f(x) \phi(x) \;\d x.
+ \]
+ Note that this is a linear map since integration is linear (and multiplication is commutative and distributes over addition). Hence every function ``is'' a distribution.
+\end{eg}
+Of course, this would not be interesting if we only had ordinary functions. The most important example of a distribution (that is not an ordinary function) is the Dirac delta ``function''.
+\begin{defi}[Dirac-delta]
+ The \emph{Dirac-delta} is a distribution defined by
+ \[
+ \delta[\phi] = \phi(0).
+ \]
+\end{defi}
+By analogy with the first case, we often abuse notation and write
+\[
+ \delta[\phi] = \int_\Omega \delta(x) \phi(x) \;\d x,
+\]
+and pretend $\delta(x)$ is an actual function. Of course, it is not really a function, i.e.\ it is not a map $\Omega \to \R$. If it were, then we must have $\delta(x) = 0$ whenever $x \not = 0$, since $\delta[\phi] = \phi(0)$ only depends on what happens at $0$. But then this integral will just give $0$ if $\delta(0) \in \R$. Some people like to think of this as a function that is zero everywhere else and ``infinitely large'' at the origin. Formally, though, we have to think of it as a distribution.
+
+Although distributions can be arbitrarily singular and insane, we can nonetheless define \emph{all} their derivatives, defined by
+\[
+ T'[\phi] = -T[\phi'].
+\]
+This is motivated by the case of regular functions, where we get, after integrating by parts,
+\[
+ \int_\Omega f'(x) \phi(x)\;\d x = -\int_\Omega f(x)\phi'(x) \;\d x,
+\]
+with no boundary terms since we have a compact support. Since $\phi$ is infinitely differentiable, we can take arbitrary derivatives of distributions.
+
+So even though distributions can be crazy and singular, everything can be differentiated. This is good.
+
+Generalized functions can occur as limits of sequences of normal functions. For example, the family of functions
+\[
+ G_n(x) = \frac{n}{\sqrt{\pi}} \exp(-n^2 x^2)
+\]
+are smooth for any finite $n$, and $G_n[\phi] \to \delta[\phi]$ for any $\phi$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3.25, 0) node [right] {$t$};
+ \draw [->, use as bounding box] (0, 0) -- (0, 4) node [above] {$D$};
+
+ \draw [semithick, mblue, domain=-3:3, samples = 100] plot (\x, { 1.6 * exp( - \x * \x)});
+ \draw [semithick, morange, domain=-3:3, samples = 100] plot (\x, { 3.2 * exp( - 4 * \x * \x)});
+ \end{tikzpicture}
+\end{center}
+It thus makes sense to define
+\[
+ \delta'[\phi] = -\delta[\phi'] = -\phi'(0),
+\]
+as this is the limit of the sequence
+\[
+ \lim_{n \to \infty}\int_\Omega G'_n(x) \phi(x)\;\d x.
+\]
+It is often convenient to think of $\delta(x)$ as $\lim\limits_{n \to \infty} G_n(x)$, and $\delta'(x) = \lim\limits_{n \to \infty} G'_n(x)$ etc, despite the fact that these limits do not exist as functions.
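This limit can be checked numerically. For the test function $\phi = \cos$, the integral $\int G_n(x)\cos x\;\d x$ is exactly $e^{-1/(4n^2)}$, so we can watch the quadrature values approach $\phi(0) = 1$; the grid and cutoff below are ad hoc choices:

```python
import math

def G(n, x):
    # The Gaussian family approximating delta.
    return n / math.sqrt(math.pi) * math.exp(-n ** 2 * x ** 2)

def pair(n, phi, a=-8.0, b=8.0, steps=16000):
    # Trapezoidal estimate of integral G_n(x) phi(x) dx.
    h = (b - a) / steps
    s = sum(G(n, a + i * h) * phi(a + i * h) for i in range(steps + 1))
    s -= 0.5 * (G(n, a) * phi(a) + G(n, b) * phi(b))
    return s * h

vals = [pair(n, math.cos) for n in (1, 4, 16)]
# exact values are exp(-1/(4 n^2)), tending to cos(0) = 1
```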
+
+We can look at some properties of $\delta(x)$:
+\begin{itemize}
+ \item Translation:
+ \[
+ \int_{-\infty}^\infty \delta(x - a) \phi(x)\;\d x = \int_{-\infty}^\infty \delta(y) \phi(y + a)\;\d y = \phi(a).
+ \]
+ \item Scaling:
+ \[
+ \int_{-\infty}^\infty \delta(cx) \phi(x)\;\d x = \int_{-\infty}^\infty \delta(y) \phi\left(\frac{y}{c}\right) \frac{\d y}{|c|} = \frac{1}{|c|}\phi(0).
+ \]
+ \item These are both special cases of the following: suppose $f(x)$ is a continuously differentiable function with isolated simple zeros at $x_i$. Then near any of its zeros $x_i$, we have $f(x) \approx (x - x_i) \left.\frac{\partial f}{\partial x}\right|_{x_i}$. Then
+ \begin{align*}
+ \int_{-\infty}^\infty \delta(f(x)) \phi(x)\;\d x &= \sum_{i = 1}^n \int_{-\infty}^\infty \delta\left((x - x_i)\left.\frac{\partial f}{\partial x}\right|_{x_i}\right) \phi(x)\;\d x \\
+ &= \sum_{i = 1}^n \frac{1}{|f'(x_i)|} \phi(x_i).
+ \end{align*}
+\end{itemize}
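The last identity can also be tested by replacing $\delta$ with $G_n$ for large $n$. With the hypothetical choices $f(x) = x^2 - 1$ (simple zeros at $\pm 1$, with $|f'(\pm 1)| = 2$) and $\phi(x) = x + 2$, the identity predicts $(\phi(1) + \phi(-1))/2 = 2$:

```python
import math

def G(n, x):
    # Smooth approximation to delta.
    return n / math.sqrt(math.pi) * math.exp(-n ** 2 * x ** 2)

f = lambda x: x * x - 1      # simple zeros at +-1 with |f'(+-1)| = 2
phi = lambda x: x + 2        # arbitrary smooth test function

# Trapezoidal estimate of integral G_n(f(x)) phi(x) dx for large n.
n, a, b, steps = 40, -2.0, 2.0, 20000
h = (b - a) / steps
val = sum(G(n, f(a + i * h)) * phi(a + i * h) for i in range(steps + 1))
val -= 0.5 * (G(n, f(a)) * phi(a) + G(n, f(b)) * phi(b))
val *= h

expected = (phi(1) + phi(-1)) / 2   # = 2 by the composition rule
```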
+We can also expand the $\delta$-function in a basis of eigenfunctions. Suppose we live in the interval $[-L, L]$, and write a Fourier expansion
+\[
+ \delta(x) = \sum_{n \in \Z}\hat{\delta}_n e^{in\pi x/L}
+\]
+with
+\[
+ \hat{\delta}_n = \frac{1}{2L} \int_{-L}^L e^{-in\pi x/L}\delta(x)\;\d x = \frac{1}{2L}.
+\]
+So we have
+\[
+ \delta(x) = \frac{1}{2L} \sum_{n \in \Z} e^{in\pi x/L}.
+\]
+This \emph{does} make sense as a distribution. We let
+\[
+ S_N \delta(x) = \frac{1}{2L} \sum_{n = -N}^N e^{in \pi x/L}.
+\]
+Then
+\begin{align*}
+ \lim_{N \to \infty} \int_{-L}^L S_N \delta(x) \phi(x)\;\d x &= \lim_{N \to \infty} \frac{1}{2L}\int_{-L}^L \sum_{n = -N}^N e^{in\pi x/L}\phi(x) \;\d x\\
+ &= \lim_{N \to \infty} \sum_{n = -N}^N \left[\frac{1}{2L}\int_{-L}^L e^{in\pi x/L} \phi(x) \;\d x\right]\\
+ &= \lim_{N\to \infty} \sum_{n = -N}^N \hat{\phi}_{-n}\\
+ &= \lim_{N \to \infty}\sum_{n = -N}^N \hat{\phi}_n e^{in\pi 0/L}\\
+ &= \phi(0),
+\end{align*}
+since the Fourier series of the smooth function $\phi(x)$ \emph{does} converge for all $x \in [-L, L]$.
+
+We can equally well expand $\delta(x)$ in terms of any other set of orthonormal eigenfunctions. Let $\{y_n(x)\}$ be a complete set of eigenfunctions on $[a, b]$ that are orthogonal with respect to a weight function $w(x)$. Then we can write
+\[
+ \delta (x - \xi) = \sum_{n} c_n y_n(x),
+\]
+with
+\[
+ c_n = \int_a^b y_n^*(x) \delta(x - \xi) w(x)\;\d x = y_n^*(\xi) w(\xi).
+\]
+So
+\[
+ \delta(x - \xi) = w(\xi) \sum_n y_n^* (\xi) y_n(x) = w(x) \sum_n y_n^*(\xi) y_n(x),
+\]
+using the fact that
+\[
+ \delta(x - \xi) = \frac{w(\xi)}{w(x)} \delta(x - \xi).
+\]
+\subsection{Green's functions}
+One of the main uses of the $\delta$ function is the Green's function. Suppose we wish to solve the second-order ordinary differential equation $\mathcal{L}y = f$, where $f(x)$ is a bounded forcing term, and $\mathcal{L}$ is a second-order linear differential operator, say
+\[
+ \mathcal{L} = \alpha(x) \frac{\partial^2}{\partial x^2} + \beta(x) \frac{\partial}{\partial x} + \gamma(x).
+\]
+As a first step, we try to find the Green's function for $\mathcal{L}$, which we shall write as $G(x, \xi)$, which obeys
+\[
+ \mathcal{L} G(x, \xi) = \delta(x - \xi).
+\]
+Given $G(x, \xi)$, we can then define
+\[
+ y(x) = \int_{a}^b G(x, \xi) f(\xi) \;\d \xi.
+\]
+Then we have
+\[
+ \mathcal{L}y = \int_a^b \mathcal{L} G(x, \xi) f(\xi)\;\d \xi = \int_a^b \delta(x - \xi) f(\xi) \;\d \xi = f(x).
+\]
+We can formally say $y = \mathcal{L}^{-1} f$; the Green's function realizes the inverse $\mathcal{L}^{-1}$ concretely as an integral operator.
+
+Hence if we can solve for Green's function, then we have solved the differential equation, at least in terms of an integral.
+
+To find the Green's function $G(x, \xi)$ obeying the boundary conditions
+\[
+ G(a, \xi) = G(b, \xi) = 0,
+\]
+first note that $\mathcal{L} G = 0$ for all $x \in [a, \xi) \cup (\xi, b]$, i.e.\ everywhere except $\xi$ itself. Thus we must be able to expand it in a basis of solutions in these two regions.
+
+Suppose $\{y_1(x), y_2(x)\}$ form a basis of solutions to $\mathcal{L}G = 0$ everywhere on $[a, b]$, with boundary conditions $y_1(a) = 0, y_2(b) = 0$. Then we must have
+\[
+ G(x, \xi) =
+ \begin{cases}
+ A(\xi) y_1(x) & a \leq x < \xi\\
+ B(\xi) y_2(x) & \xi < x \leq b
+ \end{cases}
+\]
+So we have a whole family of solutions. To fix the coefficients, we must decide how to join these solutions together over $x = \xi$.
+
+If $G(x, \xi)$ were discontinuous at $x = \xi$, then $\partial_x G|_{x = \xi}$ would involve a $\delta$ function, while $\partial^2_x G|_{x = \xi}$ would involve the derivative of the $\delta$ function. This is not good, since nothing in $\mathcal{L}G = \delta(x - \xi)$ would balance a $\delta'$. So $G(x, \xi)$ must be everywhere continuous. Hence we require
+\[
+ A(\xi) y_1 (\xi) = B(\xi) y_2(\xi).\tag{$*$}
+\]
+Now integrate over a small region $(\xi - \varepsilon, \xi + \varepsilon)$ surrounding $\xi$. Then we have
+\[
+ \int_{\xi - \varepsilon}^{\xi + \varepsilon} \left[\alpha(x) \frac{\d^2 G}{\d x^2} + \beta(x) \frac{\d G}{\d x} + \gamma(x) G\right]\;\d x = \int_{\xi - \varepsilon}^{\xi + \varepsilon} \delta(x - \xi)\;\d x = 1.
+\]
+By continuity of $G$, we know that the $\gamma G$ term does not contribute. While $G'$ is discontinuous, it is still finite. So the $\beta G'$ term also does not contribute. So we have
+\[
+ \lim_{\varepsilon \to 0} \int_{\xi - \varepsilon}^{\xi + \varepsilon} \alpha G'' \;\d x = 1.
+\]
+We now integrate by parts to obtain
+\[
+ \lim_{\varepsilon \to 0} [\alpha G']^{\xi + \varepsilon}_{\xi - \varepsilon} + \int_{\xi - \varepsilon}^{\xi + \varepsilon} \alpha' G' \;\d x = 1.
+\]
+Again, by finiteness of $G'$, the integral does not contribute. So we know that
+\[
+ \alpha(\xi)\left(\left.\frac{\partial G}{\partial x}\right|_{\xi^+} - \left.\frac{\partial G}{\partial x}\right|_{\xi^-}\right) = 1.
+\]
+Hence we obtain
+\[
+ B(\xi) y'_2 (\xi) - A(\xi) y_1'(\xi) = \frac{1}{\alpha(\xi)}.
+\]
+Together with $(*)$, we know that
+\[
+ A(\xi) = \frac{y_2(\xi)}{\alpha(\xi) W(\xi)},\quad B(\xi) = \frac{y_1(\xi)}{\alpha(\xi)W(\xi)},
+\]
+where $W$ is the \emph{Wronskian}
+\[
+ W = y_1 y_2' - y_2y_1'.
+\]
+Hence, we know that
+\[
+ G(x, \xi) = \frac{1}{\alpha(\xi)W(\xi)}
+ \begin{cases}
+ y_2(\xi) y_1(x) & a \leq x < \xi\\
+ y_1(\xi) y_2(x) & \xi < x \leq b.
+ \end{cases}
+\]
+Using the step function $\Theta$, we can write this as
+\[
+ G(x, \xi) = \frac{1}{\alpha(\xi)W(\xi)} [\Theta (\xi - x) y_2(\xi)y_1(x) + \Theta(x - \xi)y_1(\xi)y_2(x)].
+\]
+So our general solution is
+\begin{align*}
+ y(x) &= \int_a^b G(x, \xi) f(\xi)\;\d \xi\\
+ &= \int_x^b \frac{f(\xi)}{\alpha(\xi)W(\xi)} y_2(\xi) y_1(x)\;\d \xi + \int_a^x \frac{f(\xi)}{\alpha(\xi)W(\xi)}y_1(\xi) y_2(x)\;\d \xi.
+\end{align*}
+
+\begin{eg}
+ Consider
+ \[
+ \mathcal{L}y= -y'' - y = f
+ \]
+ for $x \in (0, 1)$ with $y(0) = y(1) = 0$. We choose our basis solution to satisfy $y'' = -y$ as $\{\sin x, \sin (1 - x)\}$. Then we can compute the Wronskian
+ \[
+ W(x) = -\sin x \cos(1 - x) - \sin (1 - x) \cos x = -\sin 1.
+ \]
+ Our Green's function is
+ \[
+ G(x, \xi) = \frac{1}{\sin 1}[\Theta(\xi - x) \sin (1 - \xi) \sin x + \Theta(x - \xi) \sin \xi \sin (1 - x)].
+ \]
+ Hence we get
+ \[
+ y(x) = \sin x \int_{x}^1 \frac{\sin (1 - \xi)}{ \sin 1}f(\xi) \;\d \xi + \sin(1 - x)\int_0^x \frac{\sin \xi}{\sin 1}f(\xi)\;\d \xi.
+ \]
+\end{eg}
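This example can be verified numerically. With the hypothetical forcing $f = 1$, the problem $-y'' - y = 1$, $y(0) = y(1) = 0$ has the closed-form solution $y = \cos x + \frac{1 - \cos 1}{\sin 1}\sin x - 1$, which the Green's function integral should reproduce:

```python
import math

f = lambda x: 1.0            # hypothetical forcing

def trap(g, a, b, steps=2000):
    # Trapezoidal rule for the integral of g over [a, b].
    if b <= a:
        return 0.0
    h = (b - a) / steps
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, steps)))

def y_green(x):
    # y(x) = sin x * int_x^1 sin(1-xi) f / sin 1 + sin(1-x) * int_0^x sin xi f / sin 1
    s1 = math.sin(1)
    return (math.sin(x) * trap(lambda t: math.sin(1 - t) * f(t) / s1, x, 1)
            + math.sin(1 - x) * trap(lambda t: math.sin(t) * f(t) / s1, 0, x))

def y_exact(x):
    # closed-form solution of -y'' - y = 1, y(0) = y(1) = 0
    return math.cos(x) + (1 - math.cos(1)) / math.sin(1) * math.sin(x) - 1

err = max(abs(y_green(k / 10) - y_exact(k / 10)) for k in range(11))
```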
+
+\begin{eg}
+ Suppose we have a string with mass per unit length $\mu(x)$ and held under a tension $T$. In the presence of gravity, Newton's second law gives
+ \[
+ \mu\frac{\partial^2 y}{\partial t^2} = T \frac{\partial^2 y}{\partial x^2} + \mu g.
+ \]
+ We look for the steady state shape of the string, with $\dot{y} = 0$, assuming $y(0, t) = y(L, t) = 0$. So we get
+ \[
+ \frac{\partial^2 y}{\partial x^2} = -\frac{\mu(x)}{T}g.
+ \]
+ We look for a Green's function obeying
+ \[
+ \frac{\partial^2 y}{\partial x^2} = - \delta(x - \xi).
+ \]
+ This can be interpreted as the contribution of a pointlike mass located at $x = \xi$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [gray] (-0.5, 0) -- (4.5, 0);
+ \node [circ] at (0, 0) {};
+ \node [circ] at (4, 0) {};
+ \draw (0, 0) -- (1, -1) -- (4, 0);
+ \node [circ] at (1, -1) {};
+ \node [above] at (0, 0) {$0$};
+ \node [above] at (4, 0) {$L$};
+ \node [above] at (1, 0) {$\xi$};
+ \draw [dashed] (1, 0) -- (1, -1);
+ \draw [->] (1, -1) -- +(0, -0.5) node [below] {$y$};
+ \end{tikzpicture}
+ \end{center}
+ The homogeneous equation $y'' = 0$ gives
+ \[
+ y = Ax + B(x - L).
+ \]
+ So we get
+ \[
+ G(x, \xi) =
+ \begin{cases}
+ A(\xi) x & 0 \leq x < \xi\\
+ B(\xi)(x - L) & \xi < x \leq L.
+ \end{cases}
+ \]
+ Continuity at $x = \xi$ gives
+ \[
+ A(\xi)\xi = B(\xi) (\xi -L).
+ \]
+ The jump condition on the derivative gives
+ \[
+ A(\xi) - B(\xi) = 1.
+ \]
+ We can solve these to get
+ \[
+ A(\xi) = \frac{L - \xi}{L},\quad B(\xi) = -\frac{\xi}{L}.
+ \]
+ Hence the Green's function is
+ \[
+ G(x, \xi) = \frac{L - \xi}{L}x\, \Theta(\xi - x) - \frac{\xi}{L}(x - L) \Theta(x - \xi).
+ \]
+ Notice that the slope of the first piece is $\frac{L - \xi}{L} > 0$, since $\xi$ is always less than $L$, while the slope of the second piece is $-\frac{\xi}{L} < 0$, since $\xi$ is always positive. So the displacement grows linearly up to the point mass at $x = \xi$ and decreases linearly back to zero at $x = L$, as in the figure.
+
+ For the general case, we can just substitute this into the integral. We can give this integral a physical interpretation. We can think that we have many pointlike particles $m_i$ at small separations $\Delta x$ along the string. So we get
+ \[
+ y(x) = \sum_{i = 1}^n G(x, x_i) \frac{m_i g}{T},
+ \]
+ and in the limit $\Delta x \to 0$ and $n \to \infty$, we get
+ \[
+ y(x) \to \frac{g}{T} \int_0^L G(x, \xi) \mu(\xi)\;\d \xi.
+ \]
+\end{eg}
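As a check, for constant $\mu$ the sag must be the parabola $y = \mu g x(L - x)/(2T)$ solving $y'' = -\mu g/T$ directly. The sketch below rebuilds $G$ from the continuity and jump conditions and compares (all constants set to $1$, a hypothetical choice):

```python
# All physical constants set to 1 (hypothetical choices); mu is constant.
L, mu, g, T = 1.0, 1.0, 1.0, 1.0

def G(x, xi):
    # From the text's conditions: continuity A xi = B (xi - L) and
    # jump A - B = 1 give B = -xi/L and A = 1 + B = (L - xi)/L.
    B = -xi / L
    A = 1.0 + B
    return A * x if x < xi else B * (x - L)

def y(x, steps=4000):
    # y(x) = (g/T) * integral_0^L G(x, xi) mu dxi, trapezoidal rule.
    h = L / steps
    s = sum(G(x, i * h) for i in range(steps + 1))
    s -= 0.5 * (G(x, 0.0) + G(x, L))
    return (g / T) * mu * s * h

exact = lambda x: mu * g * x * (L - x) / (2 * T)   # parabola solving y'' = -mu g/T
err = max(abs(y(0.1 * k) - exact(0.1 * k)) for k in range(11))
```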
+\subsection{Green's functions for initial value problems}
+Consider $\mathcal{L}y = f(t)$ subject to $y(t = t_0) = 0$ and $y'(t = t_0) = 0$. So instead of having two boundaries, we have one boundary and restrict both the value and the derivative at this boundary.
+
+As before, let $y_1(t), y_2(t)$ be \emph{any} basis of solutions to $\mathcal{L} y = 0$. The Green's function obeys
+\[
+ \mathcal{L}(G) = \delta(t - \tau).
+\]
+We can write our Green's function as
+\[
+ G(t, \tau) =
+ \begin{cases}
+ A(\tau) y_1(t) + B(\tau) y_2(t) & t_0 \leq t < \tau\\
+ C(\tau) y_1(t) + D(\tau) y_2(t) & t > \tau.
+ \end{cases}
+\]
+Our initial conditions require that
+\[
+ \begin{pmatrix}
+ y_1(t_0) & y_2(t_0)\\
+ y_1'(t_0) & y_2'(t_0)
+ \end{pmatrix}
+ \begin{pmatrix}
+ A\\
+ B
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ 0\\0
+ \end{pmatrix}
+\]
+However, we know that the matrix is non-singular (by definition of $y_1, y_2$ being a basis). So we must have $A, B = 0$ (which makes sense, since $G = 0$ is obviously a solution for $t_0 \leq t < \tau$).
+
+Our continuity and jump conditions now require
+\begin{align*}
+ 0 &= C(\tau) y_1(\tau) + D(\tau) y_2(\tau)\\
+ \frac{1}{\alpha(\tau)} &= C(\tau) y_1'(\tau) + D(\tau) y_2'(\tau).
+\end{align*}
+We can solve this to get $C(\tau)$ and $D(\tau)$. Then we get
+\[
+ y(t) = \int_{t_0}^\infty G(t, \tau) f(\tau)\;\d \tau = \int_{t_0}^t G(t, \tau) f(\tau) \;\d \tau,
+\]
+since we know that when $\tau > t$, the Green's function $G(t, \tau) = 0$. Thus the solution $y(t)$ depends on the forcing term $f$ only for times $\tau < t$. This expresses causality!
+
+\begin{eg}
+ For example, suppose we have $\ddot{y} + y = f(t)$ with $y(0) = \dot{y}(0) = 0$. Then we have
+ \[
+ G(t, \tau) = \Theta(t - \tau) [C(\tau) \cos(t - \tau) + D(\tau) \sin (t - \tau)]
+ \]
+ for some $C(\tau), D(\tau)$. The continuity and jump conditions gives $D(\tau) = 1, C(\tau) = 0$. So we get
+ \[
+ G(t, \tau) = \Theta(t - \tau) \sin (t - \tau).
+ \]
+ So we get
+ \[
+ y(t) = \int_0^t \sin (t - \tau) f(\tau) \;\d \tau.
+ \]
+\end{eg}
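We can check this causal formula numerically: for the hypothetical forcing $f(\tau) = 1$, the integral $\int_0^t \sin(t - \tau)\;\d \tau$ equals $1 - \cos t$, which indeed solves $\ddot y + y = 1$ with $y(0) = \dot y(0) = 0$:

```python
import math

def y(t, steps=4000):
    # y(t) = integral_0^t sin(t - tau) f(tau) dtau with f = 1, trapezoidal rule.
    if t == 0:
        return 0.0
    h = t / steps
    s = sum(math.sin(t - i * h) for i in range(steps + 1))
    s -= 0.5 * (math.sin(t) + math.sin(0.0))
    return s * h

err = max(abs(y(t) - (1 - math.cos(t))) for t in (0.5, 1.0, 2.0, 3.0))
```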
+
+\section{Fourier transforms}
+\subsection{The Fourier transform}
+So far, we have considered writing a function $f: S^1 \to \C$ (or a periodic function) as a Fourier sum
+\[
+ f(\theta) = \sum_{n \in \Z} \hat{f}_n e^{in\theta},\quad \hat{f}_n = \frac{1}{2\pi}\int_{-\pi}^\pi e^{-in\theta}f(\theta)\;\d \theta.
+\]
+What if we don't have a periodic function, but just an arbitrary function $f:\R \to \C$? Can we still do Fourier analysis? Yes!
+
+To start with, instead of $\hat{f}_n = \frac{1}{2\pi}\int_{-\pi}^\pi e^{-in\theta }f(\theta)\;\d \theta$, we define the Fourier transform as follows:
+\begin{defi}[Fourier transform]
+ The \emph{Fourier transform} of an (absolutely integrable) function $f: \R \to \C$ is defined as
+ \[
+ \tilde{f}(k) = \int_{-\infty}^\infty e^{-ikx}f(x)\;\d x
+ \]
+ for \emph{all} $k \in \R$. We will also write $\tilde{f}(k) = \mathcal{F}[f(x)]$.
+\end{defi}
+Note that for any $k$, we have
+\[
+ |\tilde{f}(k)| = \left|\int_{-\infty}^\infty e^{-ikx}f(x)\;\d x\right| \leq \int_{-\infty}^\infty |e^{-ikx} f(x)|\;\d x = \int_{-\infty}^\infty |f(x)|\;\d x.
+\]
+Since we have assumed that our function is absolutely integrable, this is finite, and the definition makes sense.
+
+We can look at a few basic properties of the Fourier transform.
+\begin{enumerate}
+ \item Linearity: if $f, g: \R \to \C$ are absolutely integrable and $c_1, c_2$ are constants, then
+ \[
+ \mathcal{F}[c_1 f(x) + c_2 g(x)] = c_1 \mathcal{F}[f(x)] + c_2 \mathcal{F}[g(x)].
+ \]
+ This follows directly from the linearity of the integral.
+ \item Translation:
+ \begin{align*}
+ \mathcal{F}[f(x - a)] &= \int_{\R}e^{-ikx} f(x - a)\;\d x \\
+ \intertext{Let $y = x - a$. Then we have}
+ &= \int_{\R} e^{-ik(y + a)}f (y) \;\d y \\
+ &= e^{-ika} \int_{\R} e^{-iky}f(y)\;\d y \\
+ &= e^{-ika}\mathcal{F}[f(x)].
+ \end{align*}
+ So a translation in $x$-space becomes a re-phasing in $k$-space.
+ \item Re-phasing:
+ \begin{align*}
+ \mathcal{F}[e^{-i\ell x}f(x)] &= \int_{-\infty}^\infty e^{-ikx} e^{-i\ell x} f(x)\;\d x\\
+ &= \int_{-\infty}^\infty e^{-i(k + \ell)x} f(x)\;\d x\\
+ &= \tilde{f}(k + \ell),
+ \end{align*}
+ where $\ell \in \R$ and $\tilde{f}(k) = \mathcal{F}[f(x)]$.
+ \item Scaling:
+ \begin{align*}
+ \mathcal{F}[f(cx)] &= \int_{-\infty}^\infty e^{-ikx}f(cx)\;\d x\\
+ &= \int_{-\infty}^\infty e^{-iky/c} f(y) \frac{\d y}{|c|}\\
+ &= \frac{1}{|c|} \tilde{f}\left(\frac{k}{c}\right).
+ \end{align*}
+ Note that we have to take the absolute value of $c$, since if we replace $x$ by $y/c$ and $c$ is negative, then we will flip the bounds of the integral. The extra minus sign then turns $c$ into $-c = |c|$.
+ \item Convolutions: for functions $f, g: \R \to \C$, we define the \emph{convolution} as
+ \[
+ f * g (x) = \int_{-\infty}^\infty f(x - y) g(y)\;\d y.
+ \]
+ We then have
+ \begin{align*}
+ \mathcal{F}[f * g(x)] &= \int_{-\infty}^\infty e^{-ikx} \left[\int_{-\infty}^{\infty} f(x - y)g(y) \;\d y\right] \;\d x\\
+ &= \int_{\R^2} e^{-ik(x - y)} f(x - y) e^{-iky}g(y) \;\d y \;\d x\\
+ &= \int_{\R} e^{-iku} f(u) \;\d u \int_{\R} e^{-iky} g(y)\;\d y\\
+ &= \mathcal{F}[f] \mathcal{F}[g],
+ \end{align*}
+ where $u = x - y$. So the Fourier transform of a convolution is the product of individual Fourier transforms.
+ \item The most useful property of the Fourier transform is that it ``turns differentiation into multiplication''. Integrating by parts, we have
+ \begin{align*}
+ \mathcal{F}[f'(x)] &= \int_{-\infty}^\infty e^{-ikx} \frac{\d f}{\d x}\;\d x\\
+ &= \int_{-\infty}^\infty - \frac{\d}{\d x} (e^{-ikx}) f(x)\;\d x\\
+ \intertext{Note that we don't have any boundary terms since for the function to be absolutely integrable, it has to decay to zero as we go to infinity. So we have}
+ &= ik \int_{-\infty}^\infty e^{-ikx} f(x)\;\d x\\
+ &= ik \mathcal{F}[f(x)]
+ \end{align*}
+ Conversely,
+ \[
+ \mathcal{F}[xf(x)] = \int_{-\infty}^\infty e^{-ikx} xf(x)\;\d x = i \frac{\d}{\d k}\int_{-\infty}^{\infty} e^{-ikx} f(x)\;\d x = i\tilde{f}'(k).
+ \]
+\end{enumerate}
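These properties can be spot-checked with a Riemann-sum approximation to the transform. The sketch below verifies $\mathcal{F}[f'] = ik\mathcal{F}[f]$ on the Gaussian $f(x) = e^{-x^2/2}$, whose transform $\sqrt{2\pi}\,e^{-k^2/2}$ is known in closed form (truncation and step size are ad hoc choices):

```python
import math, cmath

def ft(f, k, a=-10.0, b=10.0, steps=4000):
    # Trapezoidal approximation to integral e^{-ikx} f(x) dx.
    h = (b - a) / steps
    s = sum(cmath.exp(-1j * k * (a + i * h)) * f(a + i * h)
            for i in range(steps + 1))
    s -= 0.5 * (cmath.exp(-1j * k * a) * f(a) + cmath.exp(-1j * k * b) * f(b))
    return s * h

f = lambda x: math.exp(-x * x / 2)        # Gaussian
fp = lambda x: -x * math.exp(-x * x / 2)  # its derivative

k = 1.3                                   # arbitrary wavenumber
lhs = ft(fp, k)                           # F[f'](k)
rhs = 1j * k * ft(f, k)                   # ik F[f](k)
exact = 1j * k * math.sqrt(2 * math.pi) * math.exp(-k * k / 2)
```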
+These are useful results, since if we have a complicated function to find the Fourier transform of, we can use these to break it apart into smaller pieces and hopefully make our lives easier.
+
+Suppose we have a differential equation
+\[
+ \mathcal{L}(\partial) y = f,
+\]
+where
+\[
+ \mathcal{L}(\partial) = \sum_{r = 0}^p c_r \frac{\d^r}{\d x^r}
+\]
+is a differential operator of $p$th order with constant coefficients.
+
+Taking the Fourier transform of both sides of the equation, we find
+\[
+ \mathcal{F}[\mathcal{L}(\partial)y] = \mathcal{F}[f(x)] = \tilde{f}(k).
+\]
+The interesting part is the left hand side, since the Fourier transform turns differentiation into multiplication. The left hand side becomes
+\[
+ c_0 \tilde{y}(k) + c_1 ik\tilde{y}(k) + c_2 (ik)^2 \tilde{y}(k) + \cdots + c_p (ik)^p \tilde{y}(k) = \mathcal{L}(ik) \tilde{y}(k).
+\]
+Here $\mathcal{L}(ik)$ is a polynomial in $ik$. Thus taking the Fourier transform has changed our ordinary differential equation into the algebraic equation
+\[
+ \mathcal{L}(ik) \tilde{y}(k) = \tilde{f}(k).
+\]
+Since $\mathcal{L}(ik)$ is just multiplication by a polynomial, we can immediately get
+\[
+ \tilde{y}(k) = \frac{\tilde{f}(k)}{\mathcal{L}(ik)}.
+\]
+\begin{eg}
+ For $\phi: \R^n \to \C$, suppose we have the equation
+ \[
+ \nabla^2 \phi - m^2 \phi = \rho(x),
+ \]
+ where
+ \[
+ \nabla^2 = \sum_{i = 1}^n \frac{\partial^2}{\partial x_i^2}
+ \]
+ is the Laplacian on $\R^n$.
+
+ We define the $n$ dimensional Fourier transform by
+ \[
+ \tilde{\phi} (\mathbf{k}) = \int_{\R^n} e^{-i\mathbf{k}\cdot \mathbf{x}} \phi(\mathbf{x})\;\d^n x = \mathcal{F}[\phi(\mathbf{x})]
+ \]
+ where now $\mathbf{k} \in \R^n$ is an $n$-dimensional vector. So we get
+ \[
+ \mathcal{F}[\nabla^2 \phi - m^2 \phi] = \mathcal{F}[\rho] = \tilde{\rho}(\mathbf{k}).
+ \]
+ The first term is
+ \begin{align*}
+ \int_{\R^n} e^{-i\mathbf{k}\cdot \mathbf{x}} \nabla\cdot \nabla \phi \;\d ^n x &= -\int_{\R^n} \nabla (e^{-i\mathbf{k}\cdot \mathbf{x}}) \cdot \nabla \phi \;\d^n x\\
+ &= \int_{\R^n} \nabla^2 (e^{-i\mathbf{k}\cdot \mathbf{x}}) \phi \;\d^n x\\
+ &= -\mathbf{k}\cdot \mathbf{k} \int_{\R^n} e^{-i\mathbf{k}\cdot \mathbf{x}} \phi(\mathbf{x})\;\d^n x\\
+ &= -\mathbf{k}\cdot \mathbf{k} \tilde{\phi} (\mathbf{k}).
+ \end{align*}
+ So our equation becomes
+ \[
+ -\mathbf{k}\cdot \mathbf{k} \tilde{\phi}(\mathbf{k}) - m^2 \tilde{\phi}(\mathbf{k}) = \tilde{\rho}(\mathbf{k}).
+ \]
+ So we get
+ \[
+ \tilde{\phi}(\mathbf{k}) = -\frac{\tilde{\rho}(\mathbf{k})}{|\mathbf{k}|^2 + m^2}.
+ \]
+\end{eg}
+So differential equations are trivial in $k$ space. The problem is, of course, that we want our solution in $x$ space, not $k$ space. So we need to find a way to restore our function back to $x$ space.
+
+\subsection{The Fourier inversion theorem}
+We need to be able to express our original function $f(x)$ in terms of its Fourier transform $\tilde{f}(k)$. Recall that in the periodic case where $f(x) = f(x + L)$, we have
+\[
+ f(x) = \sum_{n \in \Z}\hat{f}_n e^{2inx\pi /L}.
+\]
+We can try to obtain a similar expression for a non-periodic function $f: \R \to \C$ by taking the limit $L \to \infty$. Recall that our coefficients are defined by
+\[
+ \hat{f}_n = \frac{1}{L} \int_{-L/2}^{L/2} e^{-2 in\pi x/L} f(x)\;\d x.
+\]
+Now define
+\[
+ \Delta k = \frac{2\pi }{L}.
+\]
+So we have
+\[
+ f(x) = \sum_{n \in \Z} e^{inx \Delta k} \frac{\Delta k}{2 \pi}\int_{-L/2}^{L/2} e^{-iun \Delta k} f(u) \;\d u
+\]
+As we take the limit $L \to \infty$, we can approximate
+\[
+ \int_{-L/2}^{L/2} e^{-ixn\Delta k} f(x) \;\d x \approx \int_{-\infty}^\infty e^{-ix (n \Delta k)} f(x)\;\d x = \tilde{f}(n \Delta k).
+\]
+Then for a non-periodic function, we have
+\[
+ f(x) = \lim_{\Delta k \to 0} \sum_{n \in \Z} \frac{\Delta k}{2\pi} e^{in \Delta k x} \tilde{f} (n \Delta k) = \frac{1}{2\pi} \int_{-\infty}^\infty e^{ik x} \tilde{f}(k) \;\d k.
+\]
+So
+\[
+ f(x) = \mathcal{F}^{-1}[\tilde{f}(k)] = \frac{1}{2\pi} \int_{-\infty}^\infty e^{ik x} \tilde{f}(k) \;\d k.
+\]
+This is a really dodgy argument. We first took the limit as $L \to \infty$ to turn the integral from $\int_{-L/2}^{L/2}$ to $\int_{-\infty}^\infty$. \emph{Then} we take the limit as $\Delta k \to 0$. However, we can't really do this, since $\Delta k$ is defined to be $2\pi /L$, and both limits have to be taken at the same time. So we should really just take this as a heuristic argument for why this works, instead of a formal proof.
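Dodgy or not, the inversion formula can be tested numerically: transform a Gaussian on a truncated grid, apply the inverse integral, and compare with the original function (all truncations and step sizes below are ad hoc choices):

```python
import math, cmath

def ft(f, k, a=-8.0, b=8.0, steps=800):
    # Trapezoidal approximation to the Fourier transform at wavenumber k.
    h = (b - a) / steps
    s = sum(cmath.exp(-1j * k * (a + i * h)) * f(a + i * h)
            for i in range(steps + 1))
    s -= 0.5 * (cmath.exp(-1j * k * a) * f(a) + cmath.exp(-1j * k * b) * f(b))
    return s * h

f = lambda x: math.exp(-x * x)            # transform is sqrt(pi) e^{-k^2/4}

hk = 0.08
ks = [-8.0 + i * hk for i in range(201)]  # truncated k-grid
ftab = [ft(f, k) for k in ks]             # sample the transform once

def f_back(x):
    # (1/2 pi) * integral e^{ikx} f~(k) dk, again by the trapezoidal rule.
    s = sum(cmath.exp(1j * k * x) * F for k, F in zip(ks, ftab))
    s -= 0.5 * (cmath.exp(1j * ks[0] * x) * ftab[0]
                + cmath.exp(1j * ks[-1] * x) * ftab[-1])
    return (s * hk / (2 * math.pi)).real

err = max(abs(f_back(x) - f(x)) for x in (-2.0, -1.0, 0.0, 0.5, 1.7))
```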
+
+Nevertheless, note that the inverse Fourier transform looks very similar to the Fourier transform itself. The only differences are a factor of $\frac{1}{2\pi}$, and the sign of the exponent. We can get rid of the first difference by evenly distributing the factor among the Fourier transform and inverse Fourier transform. For example, we can instead define
+\[
+ \mathcal{F}[f(x)] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{-ikx} f(x)\;\d x.
+\]
+Then $\mathcal{F}^{-1}$ looks just like $\mathcal{F}$ apart from having the wrong sign.
+
+However, we will stick to our original definition so that we don't have to deal with messy square roots. Hence we have the duality
+\[
+ \mathcal{F}[f(x)] = 2\pi\, \mathcal{F}^{-1}[f(-x)].
+\]
+This is useful because it means we can use our knowledge of Fourier transforms to compute inverse Fourier transforms.
+
+Note that this does not occur in the case of the Fourier \emph{series}. In the Fourier series, we obtain the coefficients by evaluating an integral, and restore the original function by taking a discrete sum. These operations are not symmetric.
+
+\begin{eg}
+ Recall that we defined the convolution as
+ \[
+ f * g(x) = \int_{-\infty}^\infty f(x - y) g(y)\;\d y.
+ \]
+ We then computed
+ \[
 \mathcal{F}[f * g(x)] = \tilde{f}(k) \tilde{g}(k).
+ \]
+ It follows by definition of the inverse that
+ \[
+ \mathcal{F}^{-1}[\tilde{f}(k) \tilde{g}(k)] = f * g(x).
+ \]
+ Therefore, we know that
+ \[
 \mathcal{F}[f(x) g(x)] = \frac{1}{2\pi} \int_{-\infty}^\infty \tilde{f}(k - \ell) \tilde{g}(\ell)\;\d \ell = \frac{1}{2\pi} \tilde{f} * \tilde{g} (k).
+ \]
+\end{eg}
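Since everything becomes discrete later on anyway, we can sanity-check the convolution theorem numerically in its discrete form, where the transform of a \emph{circular} convolution equals the pointwise product of transforms. This is a sketch using NumPy (not part of the course); note that \texttt{np.fft.fft} uses the convention without a $1/N$ factor.

```python
import numpy as np

# Discrete analogue of F[f * g] = f~ g~: the DFT of a *circular*
# convolution equals the pointwise product of the DFTs.
rng = np.random.default_rng(0)
f = rng.standard_normal(8)
g = rng.standard_normal(8)
N = len(f)

# circular convolution (f * g)[n] = sum_m f[(n - m) mod N] g[m]
conv = np.array([sum(f[(n - m) % N] * g[m] for m in range(N))
                 for n in range(N)])

lhs = np.fft.fft(conv)              # transform of the convolution
rhs = np.fft.fft(f) * np.fft.fft(g) # product of the transforms
max_err = np.max(np.abs(lhs - rhs))
```

The two sides agree to rounding error, mirroring the continuous statement above.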
+
+\begin{eg}
+ Last time we saw that if $\nabla^2 \phi - m^2 \phi = \rho(x)$, then
+ \[
+ \tilde{\phi}(\mathbf{k}) = -\frac{\tilde{\rho}(\mathbf{k})}{|\mathbf{k}|^2 + m^2}.
+ \]
+ Hence we can retrieve our $\phi$ as
+ \[
 \phi(\mathbf{x}) = \mathcal{F}^{-1}[\tilde{\phi}(\mathbf{k})] = -\frac{1}{(2\pi)^n} \int_{\R^n} e^{i\mathbf{k}\cdot \mathbf{x}} \frac{\tilde{\rho}(\mathbf{k})}{|\mathbf{k}|^2 + m^2}\;\d^n k.
+ \]
+ Note that we have $(2\pi)^n$ instead of $2\pi$ since we get a factor of $2\pi$ for each dimension (and the negative sign was just brought down from the original expression).
+
 Since we are taking the inverse Fourier transform of a product, we know that the result is just a convolution of $\mathcal{F}^{-1}[\tilde{\rho}(\mathbf{k})] = \rho(\mathbf{x})$ and $\mathcal{F}^{-1}\left[\frac{1}{|\mathbf{k}|^2 + m^2}\right]$.
+
+ Recall that when we worked with Green's function, if we have a forcing term, then the final solution is the convolution of our Green's function with the forcing term. Here we get a similar expression in higher dimensions.
+\end{eg}
+
+\subsection{Parseval's theorem for Fourier transforms}
+\begin{thm}[Parseval's theorem (again)]
+ Suppose $f, g: \R \to \C$ are sufficiently well-behaved that $\tilde{f}$ and $\tilde{g}$ exist and we indeed have $\mathcal{F}^{-1}[\tilde{f}] = f, \mathcal{F}^{-1}[\tilde{g}] = g$. Then
+ \[
+ (f, g) = \int_\R f^*(x) g(x)\;\d x = \frac{1}{2\pi}(\tilde{f}, \tilde{g}).
+ \]
+ In particular, if $f = g$, then
+ \[
+ \|f\|^2 = \frac{1}{2\pi} \|\tilde{f}\|^2.
+ \]
+\end{thm}
So taking the Fourier transform preserves the $L^2$ norm (up to a constant factor of $\frac{1}{\sqrt{2\pi}}$).
+\begin{proof}
+ \begin{align*}
+ (f, g) &= \int_\R f^*(x) g(x)\;\d x\\
 &= \int_{-\infty}^\infty f^*(x) \left[\frac{1}{2\pi}\int_{-\infty}^\infty e^{ikx} \tilde{g}(k) \;\d k\right]\;\d x\\
+ &= \frac{1}{2\pi} \int_{-\infty}^\infty \left[\int_{-\infty}^\infty f^*(x) e^{ikx} \;\d x\right] \tilde{g}(k)\;\d k\\
+ &= \frac{1}{2\pi} \int_{-\infty}^\infty \left[\int_{-\infty}^\infty f(x) e^{-ikx} \;\d x\right]^* \tilde{g}(k)\;\d k\\
+ &= \frac{1}{2\pi} \int_{-\infty}^\infty \tilde{f}^*(k) \tilde{g}(k) \;\d k\\
+ &= \frac{1}{2\pi}(\tilde{f}, \tilde{g}).\qedhere
+ \end{align*}
+\end{proof}
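The discrete analogue of Parseval's theorem can be checked directly: with NumPy's DFT convention, $\sum_j |f_j|^2 = \frac{1}{N}\sum_m |\tilde{f}_m|^2$, mirroring $\|f\|^2 = \frac{1}{2\pi}\|\tilde{f}\|^2$. This is an illustrative sketch, not part of the course.

```python
import numpy as np

# Discrete Parseval: sum |f|^2 = (1/N) sum |fft(f)|^2 under np.fft's
# convention (no 1/N in the forward transform).
rng = np.random.default_rng(1)
f = rng.standard_normal(16) + 1j * rng.standard_normal(16)

ft = np.fft.fft(f)
lhs = np.sum(np.abs(f) ** 2)
rhs = np.sum(np.abs(ft) ** 2) / len(f)
```

Here `lhs` and `rhs` agree to rounding error, so the DFT preserves the norm up to the expected constant.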
+\subsection{A word of warning}
+Suppose $f(x)$ is defined by
+\[
+ f(x) =
+ \begin{cases}
+ 1 & |x| < 1\\
+ 0 & |x| \geq 1.
+ \end{cases}
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 2) node [above] {$y$};
+
+ \draw [mred, semithick] (-2.5, 0) -- (-1, 0) -- (-1, 1) -- (1, 1) -- (1, 0) -- (2.5, 0);
+ \end{tikzpicture}
+\end{center}
+This function looks rather innocent. Sure it has discontinuities at $\pm 1$, but these are not too absurd, and we are just integrating. This is certainly absolutely integrable. We can easily compute the Fourier transform as
+\[
+ \tilde{f}(k) = \int_{-\infty}^\infty e^{-ikx}f(x)\;\d x = \int_{-1}^1 e^{-ikx} \;\d x = \frac{2\sin k}{k}.
+\]
Is our original function equal to the integral $\mathcal{F}^{-1}[\tilde{f}(k)]$? We have
+\[
+ \mathcal{F}^{-1}\left[\frac{2 \sin k}{k}\right] = \frac{1}{\pi} \int_{-\infty}^\infty e^{ikx} \frac{\sin k}{k} \;\d k.
+\]
+This is hard to do. So let's first see if this function is absolutely integrable. We have
+\begin{align*}
 \int_{-\infty}^\infty \left|e^{ikx} \frac{\sin k}{k}\right|\;\d k &= \int_{-\infty}^\infty \left|\frac{\sin k}{k}\right|\;\d k\\
+ &= 2 \int_0^\infty \left|\frac{\sin k}{k}\right|\;\d k\\
+ &\geq 2 \sum_{n = 0}^N \int_{(n + 1/4) \pi}^{(n + 3/4)\pi} \left|\frac{\sin k}{k}\right|\;\d k.
+\end{align*}
+ % can insert diagram
+The idea here is instead of looking at the integral over the whole real line, we just pick out segments of it. In these small segments, we know that $|\sin k| \geq \frac{1}{\sqrt{2}}$. So we can bound this by
+\begin{align*}
 \int_{-\infty}^\infty \left|e^{ikx} \frac{\sin k}{k}\right|\;\d k &\geq 2 \sum_{n = 0}^N \frac{1}{\sqrt{2}} \int_{(n + 1/4) \pi}^{(n + 3/4)\pi} \frac{\d k}{k}\\
+ &\geq \sqrt{2} \sum_{n = 0}^N \int_{(n + 1/4) \pi}^{(n + 3/4)\pi} \frac{\d k}{(n + 1)\pi}\\
+ &= \frac{\sqrt{2}}{2} \sum_{n = 0}^N \frac{1}{n + 1},
+\end{align*}
+and this diverges as $N \to \infty$. So we don't have absolute integrability. So we have to be careful when we are doing these things. Well, in theory. We actually won't. We'll just ignore this issue.
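We can also see this divergence numerically. The following sketch (not part of the course; the cutoffs are arbitrary) approximates $\int_0^K \left|\frac{\sin k}{k}\right| \d k$ by a Riemann sum and watches it grow, roughly like $\frac{2}{\pi}\log K$.

```python
import numpy as np

# Riemann-sum evidence (a numerical sketch, not a proof) that
# int_0^K |sin k / k| dk grows without bound, roughly like (2/pi) log K.
def abs_sinc_integral(K, dk=1e-3):
    k = np.arange(dk, K, dk)   # skip k = 0, where |sin k / k| -> 1 anyway
    return np.sum(np.abs(np.sin(k) / k)) * dk

I10 = abs_sinc_integral(10.0)
I100 = abs_sinc_integral(100.0)
I1000 = abs_sinc_integral(1000.0)
```

Each tenfold increase of the cutoff adds roughly the same amount, exactly the harmonic-sum behaviour of the lower bound above.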
+
+\subsection{Fourier transformation of distributions}
+We have
+\[
 \mathcal{F}[\delta(x)] = \int_{-\infty}^\infty e^{-ikx} \delta (x)\;\d x = 1.
+\]
+Hence we have
+\[
+ \mathcal{F}^{-1}[1] = \frac{1}{2\pi} \int_{-\infty}^\infty e^{ikx}\;\d k = \delta (x).
+\]
+Of course, it is extremely hard to make sense of this integral. It quite obviously diverges as a normal integral, but we can just have faith and believe this makes sense as long as we are talking about distributions.
+
+Similarly, from our rules of translations, we get
+\[
+ \mathcal{F}[\delta(x - a)] = e^{-ika}
+\]
+and
+\[
 \mathcal{F}[e^{i\ell x}] = 2\pi \delta(k - \ell).
\]
Hence we get
+\[
+ \mathcal{F}[\cos (\ell x)] = \mathcal{F}\left[\frac{1}{2}( e^{i\ell x} + e^{-i\ell x})\right] = \frac{1}{2} \mathcal{F}[e^{i\ell x}] + \frac{1}{2} \mathcal{F}[e^{-i\ell x}] = \pi [\delta (k - \ell) + \delta(k + \ell)].
+\]
+We see that highly localized functions in $x$-space have very spread-out behaviour in $k$-space, and vice versa.
+
+\subsection{Linear systems and response functions}
+Suppose we have an amplifier that modifies an input signal $I(t)$ to produce an output $O(t)$. Typically, amplifiers work by modifying the amplitudes and phases of specific frequencies in the output. By Fourier's inversion theorem, we know
+\[
+ I(t) = \frac{1}{2\pi} \int e^{i\omega t} \tilde{I}(\omega) \;\d \omega.
+\]
+This $\tilde{I}(\omega)$ is the \emph{resolution} of $I(t)$.
+
+We specify what the amplifier does by the \emph{transfer function} $\tilde{R}(\omega)$ such that the output is given by
+\[
+ O(t) = \frac{1}{2\pi} \int_{-\infty}^\infty e^{i\omega t} \tilde{R}(\omega) \tilde{I}(\omega)\;\d \omega.
+\]
+Since this $\tilde{R}(\omega) \tilde{I}(\omega)$ is a product, on computing $O(t) = \mathcal{F}^{-1}[\tilde{R}(\omega) \tilde{I}(\omega)]$, we obtain a \emph{convolution}
+\[
+ O(t) = \int_{-\infty}^\infty R(t - u) I(u)\;\d u,
+\]
+where
+\[
+ R(t) = \frac{1}{2\pi} \int_{-\infty}^\infty e^{i\omega t} \tilde{R}(\omega) \;\d \omega
+\]
+is the \emph{response function}. By plugging it directly into the equation above, we see that $R(t)$ is the response to an input signal $I(t) = \delta(t)$.
+
+Now note that causality implies that the amplifier cannot ``respond'' before any input has been given. So we must have $R(t) = 0$ for all $t < 0$.
+
+Assume that we only start providing input at $t = 0$. Then
+\[
+ O(t) = \int_{-\infty}^\infty R(t - u) I(u)\;\d u = \int_0^t R(t - u)I(u)\;\d u.
+\]
+This is exactly the same form of solution as we found for initial value problems with the response function $R(t)$ playing the role of the Green's function.
+
+\subsection{General form of transfer function}
+To model the situation, suppose the amplifier's operation is described by the ordinary differential equation
+\[
+ I(t) = \mathcal{L}_m O(t),
+\]
+where
+\[
+ \mathcal{L}_m = \sum_{i = 0}^m a_i \frac{\d^i}{\d t^i}
+\]
+with $a_i \in \C$. In other words, we have an $m$th order ordinary differential equation with constant coefficients, and the input is the forcing function. Notice that we are not saying that $O(t)$ can be expressed as a function of derivatives of $I(t)$. Instead, $I(t)$ influences $O(t)$ by being the forcing term the differential equation has to satisfy. This makes sense because when we send an input into the amplifier, this input wave ``pushes'' the amplifier, and the amplifier is forced to react in some way to match the input.
+
+Taking the Fourier transform and using $\mathcal{F}\left[\frac{\d O}{\d t}\right] = i\omega \tilde{O}(\omega)$, we have
+\[
+ \tilde{I}(\omega) = \sum_{j = 0}^m a_j (i\omega)^j \tilde{O}(\omega).
+\]
+So we get
+\[
+ \tilde{O}(\omega) = \frac{\tilde{I}(\omega)}{a_0 + i\omega a_1 + \cdots + (i\omega)^m a_m}.
+\]
+Hence, the transfer function is just
+\[
+ \tilde{R}(\omega) = \frac{1}{a_0 + i\omega a_1 + \cdots + (i\omega)^m a_m}.
+\]
The denominator is an $m$th order polynomial. By the fundamental theorem of algebra, it has $m$ roots counted with multiplicity, say with distinct roots $c_j \in \C$ for $j = 1, \cdots, J$, where $c_j$ has multiplicity $k_j$ (so $k_1 + \cdots + k_J = m$). Then we can write our transfer function as
+\[
+ \tilde{R}(\omega) = \frac{1}{a_m} \prod_{j = 1}^J \frac{1}{(i\omega - c_j)^{k_j}}.
+\]
+By repeated use of partial fractions, we can write this as
+\[
+ \tilde{R}(\omega) = \sum_{j = 1}^J \sum_{r = 1}^{k_j} \frac{\Gamma_{rj}}{(i\omega - c_j)^r}
+\]
+for some constants $\Gamma_{rj} \in \C$.
+
+By linearity of the (inverse) Fourier transform, we can find $O(t)$ if we know the inverse Fourier transform of all functions of the form
+\[
+ \frac{1}{(i\omega - \alpha)^p}.
+\]
+To compute this, there are many possible approaches. We can try to evaluate the integral directly, learn some fancy tricks from complex analysis, or cheat, and let the lecturer tell you the answer. Consider the function
+\[
+ h_0(t) =
+ \begin{cases}
+ e^{\alpha t} & t > 0\\
+ 0 & \text{otherwise}
+ \end{cases}
+\]
+The Fourier transform is then
+\[
 \tilde{h}_0(\omega) = \int_{-\infty}^\infty e^{-i\omega t} h_0(t) \;\d t = \int_0^\infty e^{(\alpha - i \omega)t} \;\d t = \frac{1}{i\omega - \alpha}
+\]
+provided $\Re(\alpha) < 0$ (so that $e^{(\alpha - i\omega) t} \to 0$ as $t \to \infty$). Similarly, we can let
+\[
+ h_1(t) =
+ \begin{cases}
+ te^{\alpha t} & t > 0\\
+ 0 & \text{otherwise}
+ \end{cases}
+\]
+Then we get
+\[
+ \tilde{h}_1(\omega) = \mathcal{F}[t h_0(t)] = i\frac{\d}{\d \omega} \mathcal{F}[h_0(t)] = \frac{1}{(i \omega - \alpha)^2}.
+\]
+Proceeding inductively, we can define
+\[
+ h_p(t) =
+ \begin{cases}
+ \frac{t^p}{p!} e^{\alpha t} & t > 0\\
+ 0 & \text{otherwise}
+ \end{cases},
+\]
+and this has Fourier transform
+\[
+ \tilde{h}_p (\omega) = \frac{1}{(i \omega - \alpha)^{p + 1}},
+\]
again provided $\Re(\alpha) < 0$. So the response function is a linear combination of these functions $h_p(t)$ (if any of the roots $c_j$ have non-negative real part, then it turns out the system is unstable, and is better analysed by the Laplace transform). We see that the response function does indeed vanish for all $t < 0$. In fact, each term (except $h_0$) rises from zero at $t = 0$ to some maximum before eventually decaying exponentially.
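We can verify the formula $\tilde{h}_p(\omega) = (i\omega - \alpha)^{-(p+1)}$ numerically. In this sketch (not part of the course), $\alpha = -1$ and $\omega = 2$ are arbitrary test values, and the integral over $t > 0$ is approximated by the trapezoidal rule on $[0, 40]$, beyond which $e^{\alpha t}$ is negligible.

```python
import math
import numpy as np

# Check F[h_p] = 1/(i w - alpha)^(p+1) for h_p(t) = t^p e^{alpha t}/p! (t > 0),
# with Re(alpha) < 0 so the integral converges.
alpha, omega = -1.0, 2.0
t = np.linspace(0.0, 40.0, 400001)
dt = t[1] - t[0]

errs = []
for p in (0, 1, 2):
    h_p = t ** p * np.exp(alpha * t) / math.factorial(p)
    integrand = np.exp(-1j * omega * t) * h_p
    numeric = np.sum(integrand[1:] + integrand[:-1]) * dt / 2  # trapezoid rule
    exact = 1.0 / (1j * omega - alpha) ** (p + 1)
    errs.append(abs(numeric - exact))
max_err = max(errs)
```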
+
+Fourier transforms can also be used to solve ordinary differential equations on the whole real line $\R$, provided the functions are sufficiently well-behaved. For example, suppose $y: \R \to \R$ solves
+\[
+ y'' - A^2 y = -f(x)
+\]
+with $y$ and $y' \to 0$ as $|x| \to \infty$. Taking the Fourier transform, we get
+\[
+ \tilde{y}(k) = \frac{\tilde{f}(k)}{k^2 + A^2}.
+\]
+Since this is a product, the inverse Fourier transform will be a convolution of $f(x)$ and an object whose Fourier transform is $\frac{1}{k^2 + A^2}$. Again, we'll cheat. Consider
+\[
 h(x) = \frac{1}{2\mu} e^{-\mu|x|}
+\]
+with $\mu > 0$. Then
+\begin{align*}
 \tilde{h}(k) &= \frac{1}{2\mu} \int_{-\infty}^\infty e^{-ikx} e^{-\mu |x|}\;\d x\\
+ &= \frac{1}{\mu} \Re \int_0^\infty e^{-(\mu + ik)x}\;\d x \\
+ &= \frac{1}{\mu} \Re \frac{1}{ik + \mu} \\
+ &= \frac{1}{\mu^2 + k^2}.
+\end{align*}
+Therefore we get
+\[
+ y(x) = \frac{1}{2A} \int_{-\infty}^\infty e^{-A|x - u|} f(u)\;\d u.
+\]
+So far, we have done Fourier analysis over some abelian groups. For example, we've done it over $S^1$, which is an abelian group under multiplication, and $\R$, which is an abelian group under addition. We will next look at Fourier analysis over another abelian group, $\Z_m$, known as the discrete Fourier transform.
+
+\subsection{The discrete Fourier transform}
+Recall that the Fourier transform is defined as
+\[
+ \tilde{f}(k) = \int_\R e^{-ikx} f(x)\;\d x.
+\]
+To find the Fourier transform, we have to know how to perform the integral. If we cannot do the integral, then we cannot do the Fourier transform. However, it is usually difficult to perform integrals for even slightly more complicated functions.
+
A more serious problem is that we usually don't even have a closed form of the function $f$ with which to perform the integral. In real life, $f$ might be some radio signal, and we are just given data specifying the value of $f$ at various points. There is no hope that we can do the integral exactly.
+
+Hence, we first give ourselves a simplifying assumption. We suppose that $f$ is mostly concentrated in some finite region $[-R, S]$. More precisely, $|f(x)|\ll 1$ for $x \not\in [-R, S]$ for some $R, S > 0$. Then we can approximate our integral as
+\[
 \tilde{f}(k) \approx \int_{-R}^S e^{-ikx} f(x)\;\d x.
+\]
+Afterwards, we perform the integral numerically. Suppose we ``sample'' $f(x)$ at
+\[
+ x = x_j = -R + j\frac{R + S}{N}
+\]
+for $N$ a large integer and $j = 0, 1, \cdots, N - 1$. Then
+\[
+ \tilde{f}(k) \approx \frac{R + S}{N} \sum_{j = 0}^{N - 1} f(x_j) e^{-ik x_j}.
+\]
+This is just the Riemann sum.
+
+Similarly, our computer can only store the result $\tilde{f}(k)$ for some finite list of $k$. Let's choose these to be at
+\[
+ k = k_m = \frac{2\pi m}{R + S}.
+\]
+Then after some cancellation,
+\begin{align*}
 \tilde{f}(k_m) &\approx \frac{R + S}{N} e^{\frac{2\pi i m R}{R + S}} \sum_{j = 0}^{N - 1} f(x_j) e^{-\frac{2\pi i}{N} jm}\\
+ &= (R + S) e^{\frac{2\pi i m R}{R + S}} \left[\frac{1}{N} \sum_{j = 0}^{N - 1} f(x_j) \omega^{-jm}\right],
+\end{align*}
+where
+\[
+ \omega = e^{\frac{2\pi i}{N}}
+\]
+is an $N$th root of unity. We call the thing in the brackets
+\[
+ F(m) = \frac{1}{N} \sum_{j = 0}^{N - 1} f(x_j) \omega^{-jm}.
+\]
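In fact, the coefficients $F(m)$ are, up to the $1/N$ normalisation, exactly what library FFT routines compute. The following sketch (not part of the course) evaluates the sum directly from the definition and compares it against NumPy's \texttt{np.fft.fft}, which uses the convention without the $1/N$ factor.

```python
import numpy as np

# F(m) = (1/N) sum_j f(x_j) w^{-jm}, with w = e^{2 pi i / N}, compared
# against np.fft.fft (which omits the 1/N normalisation).
rng = np.random.default_rng(2)
N = 16
samples = rng.standard_normal(N)   # plays the role of f(x_j)

omega = np.exp(2j * np.pi / N)
F = np.array([np.sum(samples * omega ** (-np.arange(N) * m)) / N
              for m in range(N)])

max_err = np.max(np.abs(F - np.fft.fft(samples) / N))
```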
+Of course, we've thrown away lots of information about our original function $f(x)$, since we made approximations all over the place. For example, we have already lost all knowledge of structures varying more rapidly than our sampling interval $\frac{R + S}{N}$. Also, $F(m + N) = F(m)$, since $\omega^N = 1$. So we have ``forced'' some periodicity into our function $F$, while $\tilde{f}(k)$ itself was not necessarily periodic.
+
+For the usual Fourier transform, we were able to re-construct our original function from the $\tilde{f}(k)$, but here we clearly cannot. However, if we know the $F(m)$ for all $m = 0, 1, \cdots, N - 1$, then we \emph{can} reconstruct the exact values of $f(x_j)$ for all $j$ by just solving linear equations. To make this more precise, we want to put what we're doing into the linear algebra framework we've been using. Let $G = \{1, \omega, \omega^2, \cdots, \omega^{N - 1}\}$. For our purposes below, we can just treat this as a discrete set, and $\omega^i$ are just meaningless symbols that happen to visually look like the powers of our $N$th roots of unity.
+
+Consider a function $g: G \to \C$ defined by
+\[
+ g(\omega^j) = f(x_j).
+\]
+This is actually nothing but just a new notation for $f(x_j)$. Then using this new notation, we have
+\[
+ F(m) = \frac{1}{N} \sum_{j = 0}^{N - 1} \omega^{-jm} g(\omega^j).
+\]
+The space of \emph{all} functions $g: G \to \C$ is a finite-dimensional vector space, isomorphic to $\C^N$. This has an inner product
+\[
+ (f, g) = \frac{1}{N} \sum_{j = 0}^{N - 1} f^*(\omega^j) g(\omega^j).
+\]
+This inner product obeys
+\begin{align*}
+ (g, f) &= (f, g)^*\\
+ (f, c_1 g_1 + c_2 g_2) &= c_1 (f, g_1) + c_2 (f, g_2)\\
 (f, f) &\geq 0\text{ with equality iff }f(\omega^j) = 0\text{ for all }j
+\end{align*}
Now let $e_m: G \to \C$ be the function
+\[
+ e_m(\omega^j) = \omega^{jm}.
+\]
+Then the set of functions $\{e_m\}$ for $m = 0, \cdots, N - 1$ is an orthonormal basis with respect to our inner product. To show this, we can compute
+\begin{align*}
+ (e_m, e_m) &= \frac{1}{N} \sum_{j = 0}^{N - 1} e_m^*(\omega^j) e_m(\omega^j)\\
+ &= \frac{1}{N} \sum_{j = 0}^{N - 1} \omega^{-jm} \omega^{jm}\\
+ &= \frac{1}{N} \sum_{j = 0}^{N - 1} 1\\
+ &= 1.
+\end{align*}
+For $n \not =m$, we have
+\begin{align*}
 (e_n, e_m) &= \frac{1}{N} \sum_{j = 0}^{N - 1} e_n^* (\omega^j) e_m(\omega^j)\\
+ &= \frac{1}{N} \sum_{j = 0}^{N - 1} \omega^{j(m - n)}\\
+ &= \frac{1}{N} \frac{1 - \omega^{(m - n)N}}{1 - \omega^{m - n}}.
+\end{align*}
+Since $m - n$ is an integer and $\omega$ is an $N$th root of unity, we know that $\omega^{(m - n)N} = 1$. So the numerator is $0$. However, since $n \not= m$, $m - n\not=0$. So the denominator is non-zero. So we get $0$. Hence
+\[
+ (e_n, e_m) = 0 .
+\]
+We can now rewrite our $F(m)$ as
+\[
+ F(m) = \frac{1}{N}\sum_{j = 0}^{N - 1}\omega^{-jm} f(x_j) = \frac{1}{N}\sum_{j = 0}^{N - 1} e_m^* (\omega^j) g(\omega^j) = (e_m, g).
+\]
+Hence we can expand our $g$ as
+\[
+ g = \sum_{m = 0}^{N - 1} (e_m, g) e_m = \sum_{m = 0}^{N - 1} F(m) e_m.
+\]
+Writing $f$ instead of $g$, we recover the formula
+\[
+ f(x_j) = g(\omega^j) = \sum_{m = 0}^{N - 1} F(m) e_m(\omega^j).
+\]
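Since everything is a finite sum, this inversion is exact, and we can check it directly. The sketch below (not part of the course; the data is random) builds the matrix of basis values $e_m(\omega^j)$, verifies orthonormality, computes $F(m) = (e_m, g)$, and reconstructs $g$.

```python
import numpy as np

# Round trip: F(m) = (e_m, g), then g = sum_m F(m) e_m.
rng = np.random.default_rng(3)
N = 8
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # the values g(w^j)

idx = np.arange(N)
E = np.exp(2j * np.pi * np.outer(idx, idx) / N)  # E[m, j] = e_m(w^j) = w^{jm}

# orthonormality: (e_n, e_m) = delta_{nm}
gram_err = np.max(np.abs(E.conj() @ E.T / N - np.eye(N)))

F = E.conj() @ g / N     # F(m) = (1/N) sum_j w^{-jm} g(w^j)
g_back = E.T @ F         # g(w^j) = sum_m F(m) w^{jm}
max_err = np.max(np.abs(g_back - g))
```

Both the Gram matrix error and the reconstruction error are at the level of rounding, confirming the inversion formula.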
+If we forget about our $f$s and just look at the $g$, what we have effectively done is take the Fourier transform of functions taking values on $G = \{1, \omega, \cdots, \omega^{N - 1}\}\cong \Z_N$.
+
+This is exactly analogous to what we did for the Fourier transform, except that everything is now discrete, and we don't have to worry about convergence since these are finite sums.
+
+\subsection{The fast Fourier transform*}
+What we've said so far is we've defined
+\[
+ F(m) = \frac{1}{N} \sum_{j = 0}^{N - 1} \omega^{-mj}g(\omega^j).
+\]
+To compute this directly, even given the values of $\omega^{-mj}$ for all $j, m$, this takes $N - 1$ additions and $N + 1$ multiplications. This is $2N$ operations for a \emph{single} value of $m$. To know $F(m)$ for all $m$, this takes $2N^2$ operations.
+
This is a problem. Historically, during the Cold War, especially after the Cuban missile crisis, people feared that the world would one day descend into a huge nuclear war, and something had to be done to prevent this. Thus the countries decided to draw up a treaty to ensure that nuclear tests were no longer performed.
+
However, the obvious problem is that we can't easily check whether other countries are doing nuclear tests. Nuclear tests are easy to spot if the test is done above ground, since there will be a huge explosion. However, if it is done underground, it is much harder to detect.
+
They then came up with the idea of placing seismometers all around the world, and recording vibrations in the ground. To distinguish normal crust movements from nuclear tests, they wanted to take the Fourier transform and analyse the frequencies of the vibrations. However, they had a \emph{large} amount of data, with $N$ of the order of a few million. So $2N^2$ would be a huge number that the computers of the time could not handle. So they needed a trick to perform Fourier transforms quickly. This is known as the \emph{fast Fourier transform}. Note that this is nothing new mathematically. It is entirely a computational trick.
+
+Now suppose $N = 2M$. We can write
+\begin{align*}
+ F(m) &= \frac{1}{2M} \sum_{j = 0}^{2M - 1} \omega^{-jm} g(\omega^j) \\
 &= \frac{1}{2}\left[\frac{1}{M}\sum_{k = 0}^{M - 1} \left(\omega^{-2km} g(\omega^{2k}) + \omega^{-(2k + 1)m} g(\omega^{2k + 1})\right)\right].
+\end{align*}
We now let $\eta = \omega^2$, which is an $M$th root of unity, and define
+\[
+ G(\eta^k) = g(\omega^{2k}),\quad H(\eta^k) = g(\omega^{2k + 1}).
+\]
+We then have
+\begin{align*}
+ F(m) &= \frac{1}{2} \left[\frac{1}{M} \sum_{k = 0}^{M - 1} \eta^{-km} G(\eta^k) + \frac{\omega^{-m}}{M} \sum_{k = 0}^{M - 1} \eta^{-km} H(\eta^k)\right]\\
+ &= \frac{1}{2}[\tilde{G}(m) + \omega^{-m} \tilde{H}(m)].
+\end{align*}
Suppose we are given the values of $\tilde{G}(m)$, $\tilde{H}(m)$ and $\omega^{-m}$ for all $m \in \{0, \cdots, N - 1\}$. Then we can compute $F(m)$ using $3$ operations per value of $m$. So this takes $3 \times N = 6M$ operations for all $m$.
+
We can compute $\omega^{-m}$ for all $m$ using at most $2M$ operations (each $\omega^{-m}$ is the previous one multiplied by $\omega^{-1}$), and suppose it takes $P_M$ operations to compute $\tilde{G}(m)$ for all $m$. Then the number of operations needed to compute $F(m)$ is
+\[
+ P_{2M} = 2 P_M + 6M + 2M.
+\]
+Now let $N = 2^n$. Then by iterating this, we can find
+\[
+ P_N \leq 4N \log_2 N \ll N^2.
+\]
+So by breaking the Fourier transform apart, we are able to greatly reduce the number of operations needed to compute the Fourier transform. This is used nowadays everywhere for everything.
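The halving recursion above translates almost line-by-line into code. Here is a minimal recursive radix-2 sketch (not the in-place algorithm used in practice, and the normalisation follows our $1/N$ convention), checked against a library FFT.

```python
import numpy as np

# Radix-2 FFT implementing F(m) = (1/2)[G~(m) + w^{-m} H~(m)],
# with F(m + M) = (1/2)[G~(m) - w^{-m} H~(m)] since w^M = -1.
def fft_rec(g):
    """F(m) = (1/N) sum_j w^{-jm} g_j, for N a power of two."""
    N = len(g)
    if N == 1:
        return np.asarray(g, dtype=complex)
    G = fft_rec(g[0::2])               # even samples: an M = N/2 problem
    H = fft_rec(g[1::2])               # odd samples
    w_m = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # w^{-m}
    return 0.5 * np.concatenate([G + w_m * H, G - w_m * H])

rng = np.random.default_rng(4)
g = rng.standard_normal(16)
max_err = np.max(np.abs(fft_rec(g) - np.fft.fft(g) / len(g)))
```

Each level of recursion does $O(N)$ work and there are $\log_2 N$ levels, giving the $O(N \log N)$ count derived above.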
+\section{More partial differential equations}
+\subsection{Well-posedness}
+Recall that to find a unique solution of Laplace's equation $\nabla^2 \phi = 0$ on a bounded domain $\Omega \subseteq \R^n$, we imposed the Dirichlet boundary condition $\phi|_{\partial \Omega} = f$ or the Neumann boundary condition $\mathbf{n}\cdot \nabla \phi|_{\partial \Omega} = g$.
+
For the heat equation $\frac{\partial \phi}{\partial t} = \kappa \nabla^2 \phi$ on $\Omega \times [0, \infty)$, we asked for $\phi|_{\partial \Omega \times [0, \infty)} = f$ and also $\phi|_{\Omega \times \{0\}} = g$.
+
+For the wave equation $\frac{\partial^2 \phi}{\partial t^2} = c^2 \nabla^2 \phi$ on $\Omega \times [0, \infty)$, we imposed $\phi|_{\partial \Omega \times [0, \infty)} = f$, $\phi|_{\Omega \times \{0\}} = g$ and also $\partial_t \phi|_{\Omega \times \{0\}} = h$.
+
+All the boundary and initial conditions restrict the value of $\phi$ on some co-dimension 1 surface, i.e.\ a surface whose dimension is one less than the domain. This is known as \emph{Cauchy data} for our partial differential equation.
+
+\begin{defi}[Well-posed problem]
 A partial differential equation problem is said to be well-posed if, given its Cauchy data,
+ \begin{enumerate}
+ \item A solution exists;
+ \item The solution is unique;
+ \item A ``small change'' in the Cauchy data leads to a ``small change'' in the solution.
+ \end{enumerate}
+\end{defi}
To understand the last condition, we need to make it clear what we mean by ``small change''. To do this properly, we need to impose some topology on our space of functions, which involves technicalities we will not go into. Instead, we can look at a simple example.
+
Suppose we have the heat equation $\partial_t \phi = \kappa \nabla^2 \phi$. We know that whatever starting condition we have, the solution quickly smooths out with time. Any spikiness of the initial conditions gets exponentially suppressed. Hence this is a well-posed problem --- changing the initial condition slightly will result in a similar solution.
+
+However, if we take the heat equation but run it backwards in time, we get a non-well-posed problem. If we provide a tiny, small change in the ``ending condition'', as we go back in time, this perturbation grows \emph{exponentially}, and the result could vary wildly.
+
Another example is as follows: consider Laplace's equation $\nabla^2 \phi = 0$ on the upper half plane $(x, y) \in \R \times \R^{\geq 0}$ subject to the boundary conditions
+\[
+ \phi(x, 0) = 0,\quad \partial_y \phi(x, 0) = g(x).
+\]
+If we take $g(x) = 0$, then $\phi(x, y) = 0$ is the unique solution, obviously.
+
+However, if we take $g(x) = \frac{\sin Ax}{A}$, then we get the unique solution
+\[
+ \phi(x, y) = \frac{\sin (Ax) \sinh (Ay)}{A^2}.
+\]
+So far so good. However, now consider the limit as $A \to \infty$. Then
+\[
 g(x) = \frac{\sin (Ax)}{A} \to 0
+\]
for all $x \in \R$. However, at the special point $x = \frac{\pi}{2A}$, we get
+\[
 \phi\left(\frac{\pi}{2A}, y\right) = \frac{\sinh (Ay)}{A^2} \sim \frac{e^{Ay}}{2A^2},
+\]
+which is unbounded. So as we take the limit as our boundary conditions $g(x) \to 0$, we get an unbounded solution.
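We can see this instability numerically. In the sketch below (not part of the course; the values of $A$ and the height $y = 1$ are arbitrary), the size of the boundary data shrinks as $A$ grows while the peak of the solution at that height explodes.

```python
import numpy as np

# Boundary data g(x) = sin(Ax)/A has sup norm 1/A -> 0, yet the solution
# phi = sin(Ax) sinh(Ay)/A^2 at x = pi/(2A), y = 1 blows up with A.
y = 1.0
A = np.array([5.0, 10.0, 20.0])

g_size = 1.0 / A                     # sup_x |g(x)|
phi_peak = np.sinh(A * y) / A ** 2   # phi at x = pi/(2A)
```

Doubling $A$ halves the data but multiplies the solution's peak enormously: exactly the failure of condition (iii) for well-posedness.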
+
+How can we make sure this doesn't happen? We're first going to look at first order equations.
+
+\subsection{Method of characteristics}
+\subsubsection*{Curves in the plane}
+Suppose we have a curve on the plane $\mathbf{x}: \R \to \R^2$ given by $s\mapsto (x(s), y(s))$.
+
+\begin{defi}[Tangent vector]
+ The tangent vector to a smooth curve $C$ given by $\mathbf{x}: \R \to \R^2$ with $\mathbf{x}(s) = (x(s), y(s))$ is
+ \[
+ \left(\frac{\d x}{\d s}, \frac{\d y}{\d s}\right).
+ \]
+\end{defi}
+
+If we have some quantity $\phi: \R^2 \to \R$, then its value along the curve $C$ is just $\phi(x(s), y(s))$.
+
A vector field $\mathbf{V}(x, y): \R^2 \to \R^2$ defines a family of curves. To imagine this, suppose we are living in a river. The vector field tells us how the water flows at each point. We can then obtain a curve by starting at a point and flowing along with the water. More precisely,
+\begin{defi}[Integral curve]
+ Let $\mathbf{V}(x, y): \R^2 \to \R^2$ be a vector field. The \emph{integral curves} associated to $\mathbf{V}$ are curves whose tangent $\left(\frac{\d x}{\d s}, \frac{\d y}{\d s}\right)$ is just $\mathbf{V}(x, y)$.
+\end{defi}
+For sufficiently regular vector fields, we can fill the space with different curves. We will parametrize which curve we are on by the parameter $t$. More precisely, we have a curve $B = (x(t), y(t))$ that is \emph{transverse} (i.e.\ nowhere parallel) to our family of curves, and we can label the members of our family by the value of $t$ at which they intersect $B$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw rectangle (3, 3);
+
+ \draw plot [smooth] coordinates {(0, 1) (0.7, 1.3) (0.9, 1.2) (1.6, 1.5) (2.3, 1) (3, 0.9)};
+ \draw plot [smooth] coordinates {(0, 1.3) (0.7, 1.6) (0.9, 1.6) (1.6, 1.8) (2.3, 1.2) (3, 1.2)};
+ \draw plot [smooth] coordinates {(0, 1.7) (0.9, 1.9) (1.6, 2.2) (2.3, 1.7) (3, 1.8)};
+ \draw plot [smooth] coordinates {(0, 0.8) (0.7, 1.1) (0.9, 0.9) (1.6, 1.3) (2.3, 0.8) (3, 0.6)};
+
+ \node [right] at (3, 1.8) {$C_4$};
+ \node [right] at (3, 1.2) {$C_3$};
+ \node [right] at (3, 0.9) {$C_2$};
+ \node [right] at (3, 0.6) {$C_1$};
+
+ \draw [mred] plot [smooth] coordinates {(1, 0) (1.3, 1.4) (1.2, 1.9) (1.4, 3)};
+
+ \node [mred, right] at (1.3, 2.4) {$B(t)$};
+ \end{tikzpicture}
+\end{center}
+We can thus label our family of curves by $(x(s, t), y(s, t))$. If the Jacobian
+\[
+ J = \frac{\partial x}{\partial s} \frac{\partial y}{\partial t} - \frac{\partial x}{\partial t} \frac{\partial y}{\partial s} \not= 0,
+\]
+then we can invert this to find $(s, t)$ as a function of $(x, y)$, i.e.\ at any point, we know which curve we are on, and how far along the curve we are.
+
+This means we now have a new coordinate system $(s, t)$ for our points in $\R^2$. It turns out by picking the right vector field $\mathbf{V}$, we can make differential equations much easier to solve in this new coordinate system.
+
+\subsubsection*{The method of characteristics}
+Suppose $\phi: \R^2 \to \R$ obeys
+\[
+ a(x, y) \frac{\partial \phi}{\partial x} + b(x, y) \frac{\partial \phi}{\partial y} = 0.
+\]
+We can define our vector field as
+\[
+ \mathbf{V}(x, y) =
+ \begin{pmatrix}
+ a(x, y)\\
+ b(x, y)
+ \end{pmatrix}.
+\]
+Then we can write the differential equation as
+\[
+ \mathbf{V}\cdot \nabla \phi = 0.
+\]
+Along any particular integral curve of $\mathbf{V}$, we have
+\[
+ \frac{\partial \phi}{\partial s} = \frac{\d x(s)}{\d s} \frac{\partial \phi}{\partial x} + \frac{\d y(s)}{\d s} \frac{\partial \phi}{\partial y} = \mathbf{V}\cdot \nabla \phi,
+\]
+where the integral curves of $\mathbf{V}$ are determined by
+\[
+ \left.\frac{\partial x}{\partial s}\right|_t = a(x, y),\quad \left.\frac{\partial y}{\partial s}\right|_t = b(x, y).
+\]
+This is known as the characteristic equation.
+
+Hence our partial differential equation just becomes the equation
+\[
+ \left.\frac{\partial \phi}{\partial s}\right|_t = 0.
+\]
+To get ourselves a well-posed problem, we have to specify our boundary data along a transverse curve $B$. We pick our transverse curve as $s = 0$, and we suppose we are given the initial data
+\[
+ \phi(0, t) = h(t).
+\]
+Since $\phi$ does not vary with $s$, our solution automatically is
+\[
+ \phi(s, t) = h(t).
+\]
+Then $\phi(x, y)$ is given by inverting the characteristic equations to find $t(x, y)$.
+
+We do a few examples to make this clear.
+
+\begin{eg}[Trivial]
+ Suppose $\phi$ obeys
+ \[
+ \left.\frac{\partial \phi}{\partial x}\right|_y = 0.
+ \]
+ We are given the boundary condition
+ \[
+ \phi(0, y) = f(y).
+ \]
 The solution is obvious, since the differential equation tells us $\phi$ is just a function of $y$, so the solution is simply $\phi(x, y) = f(y)$. However, we can try to use the method of characteristics to practice our skills. Our vector field is given by
+ \[
+ \mathbf{V} =
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ \frac{\d x}{\d s}\\
+ \frac{\d y}{\d s}
+ \end{pmatrix}.
+ \]
+ Hence, we get
+ \[
+ \frac{\d x}{\d s} = 1,\quad \frac{\d y}{\d s} = 0.
+ \]
+ So we have
+ \[
+ x = s + c,\quad y = d.
+ \]
+ Our initial curve is given by $y = t, x = 0$. Since we want this to be the curve $s = 0$, we find our integral curves to be
+ \[
+ x = s,\quad y = t.
+ \]
+ Now for each fixed $t$, we have to solve
+ \[
+ \left.\frac{\partial \phi}{\partial s}\right|_t = 0,\quad \phi(s, t) = h(t) = f(t).
+ \]
+ So we know that
+ \[
+ \phi(x, y) = f(y).
+ \]
+\end{eg}
+
+\begin{eg}
+ Consider the equation
+ \[
+ e^x \frac{\partial \phi}{\partial x} + \frac{\partial \phi}{\partial y} = 0.
+ \]
+ with the boundary condition
+ \[
+ \phi(x, 0) = \cosh x.
+ \]
+ We find that the vector field is
+ \[
+ \mathbf{V} =
+ \begin{pmatrix}
+ e^x\\
+ 1
+ \end{pmatrix}
+ \]
+ This gives
+ \[
+ \frac{\d x}{\d s} = e^x,\quad \frac{\d y}{\d s} = 1.
+ \]
+ Thus we have
+ \[
+ e^{-x} = -s + c,\quad y = s + d.
+ \]
+ We pick our curve as $x = t, y = 0$. So our relation becomes
+ \[
+ e^{-x} = -s + e^{-t},\quad y = s.
+ \]
+ We thus have
+ \[
+ \left.\frac{\partial \phi}{\partial s}\right|_t = 0,\quad \phi(s, t) = \phi(0, t) = \cosh(t) = \cosh[-\ln (y + e^{-x})]
+ \]
+ So done.
+\end{eg}
+
+\begin{eg}
+ Let $\phi: \R^2 \to \R$ solve the inhomogeneous partial differential equation
+ \[
+ \partial_x \phi + 2 \partial_y \phi = y e^x
+ \]
+ with $\phi(x, x) = \sin x$.
+
+ We can still use the method of characteristics. We have
+ \[
+ \mathbf{u} =
+ \begin{pmatrix}
+ 1\\2
+ \end{pmatrix}.
+ \]
+ So the characteristic curves obey
+ \[
+ \frac{\d x}{\d s} = 1, \quad \frac{\d y}{\d s} = 2.
+ \]
+ This gives
+ \[
+ x = s + t,\quad y = 2s + t
+ \]
+ so that $x = y = t$ at $s = 0$. We can invert this to obtain the relations
+ \[
+ s = y - x, \quad t = 2x - y.
+ \]
+ The partial differential equation now becomes
+ \[
+ \left.\frac{\d \phi}{\d s}\right|_t = \mathbf{u}\cdot \nabla \phi = y e^x = (2s + t)e^{s + t}
+ \]
+ Note that this is just an ordinary differential equation in $s$ because $t$ is held constant. We have the boundary conditions
+ \[
+ \phi(s = 0, t) = \sin t.
+ \]
+ So we get
+ \[
+ \phi(x(s, t), y(s, t)) = (2 - t)e^t(1 - e^s) + \sin t + 2s e^{s + t}.
+ \]
+ Putting it in terms of $x$ and $y$, we have
+ \[
+ \phi(x, y) = (2 - 2x + y) e^{2x - y} + \sin (2x - y) + (y - 2) e^x.
+ \]
+\end{eg}
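As a quick check of this final answer (a numerical sketch, not part of the course; the sample points are arbitrary), we can verify the PDE residual by central finite differences and the boundary condition $\phi(x, x) = \sin x$ directly.

```python
import numpy as np

# Check phi_x + 2 phi_y = y e^x and phi(x, x) = sin x for the solution found
# by the method of characteristics.
def phi(x, y):
    return ((2 - 2 * x + y) * np.exp(2 * x - y)
            + np.sin(2 * x - y) + (y - 2) * np.exp(x))

h = 1e-5
residuals = []
for x, y in [(0.3, -0.7), (1.0, 0.5), (-0.4, 1.2)]:
    phi_x = (phi(x + h, y) - phi(x - h, y)) / (2 * h)  # central difference
    phi_y = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    residuals.append(abs(phi_x + 2 * phi_y - y * np.exp(x)))

bc_err = max(abs(phi(x, x) - np.sin(x)) for x in (-1.0, 0.0, 2.0))
```

Both the PDE residual and the boundary error vanish to within finite-difference accuracy.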
+We see that our Cauchy data $\phi(x, x) = \sin x$ should be specified on a curve $B$ that intersects each characteristic curve exactly once.
+
If we tried to use a curve $B$ that intersects the characteristics multiple times, then if we want a solution \emph{at all}, we cannot specify data freely along $B$: its values at the points of intersection have to be compatible with the partial differential equation.
+
+For example, if we have a homogeneous equation, we saw the solution will be constant along the same characteristic. Hence if our Cauchy data requires the solution to take different values on the same characteristic, we will have no solution.
+
+Moreover, even if the data is compatible, if $B$ does not intersect all the characteristics, we will not get a unique solution, since the solution is not fixed along the characteristics that $B$ misses.
+
+On the other hand, if $B$ intersects each characteristic curve transversely, the problem is well-posed. So there exists a unique solution, at least in the neighbourhood of $B$. Notice that data is never transferred between characteristic curves.
+
+\subsection{Characteristics for 2nd order partial differential equations}
+Whether the method of characteristics works for a 2nd order partial differential equation, or even higher order ones, depends on the ``type'' of the differential equation.
+
+To classify these differential equations, we need to define
+
+\begin{defi}[Symbol and principal part]
+ Let $\mathcal{L}$ be the general $2$nd order differential operator on $\R^n$. We can write it as
+ \[
+ \mathcal{L} = \sum_{i, j = 1}^n a^{ij}(x) \frac{\partial^2}{\partial x^i \partial x^j} + \sum_{i = 1}^n b^i(x) \frac{\partial}{\partial x^i} + c(x),
+ \]
+ where $a^{ij}(x), b^i(x), c(x) \in \R$ and $a^{ij} = a^{ji}$ (wlog).
+
+ We define the \emph{symbol} $\sigma(\mathbf{k}, x)$ of $\mathcal{L}$ to be
+ \[
+ \sigma(\mathbf{k}, x) = \sum_{i, j = 1}^n a^{ij} (x) k_i k_j + \sum_{i = 1}^n b^i(x) k_i + c(x).
+ \]
+ So we just replace the derivatives $\frac{\partial}{\partial x^i}$ with the variables $k_i$.
+
+ The \emph{principal part} of the symbol is the leading term
+ \[
+ \sigma^p(\mathbf{k}, x) = \sum_{i, j = 1}^n a^{ij} (x) k_i k_j.
+ \]
+\end{defi}
+
+\begin{eg}
+ If $\mathcal{L} = \nabla^2$, then
+ \[
+ \sigma(\mathbf{k}, x) = \sigma^p(\mathbf{k}, x) = \sum_{i = 1}^n (k_i)^2.
+ \]
+ If $\mathcal{L}$ is the heat operator (where we think of the last coordinate $x^n$ as the time, and others as space), then the operator is given by
+ \[
+ \mathcal{L} = \frac{\partial}{\partial x^n} - \sum_{i = 1}^{n - 1}\frac{\partial^2}{\partial (x^i)^2}.
+ \]
+ The symbol is then
+ \[
+ \sigma(\mathbf{k}, x) = k_n - \sum_{i = 1}^{n - 1} (k_i)^2,
+ \]
+ and the principal part is
+ \[
+ \sigma^p (\mathbf{k}, x) = -\sum_{i = 1}^{n - 1}(k_i)^2.
+ \]
+\end{eg}
+Note that the symbol is closely related to the Fourier transform of the differential operator, since both turn differentiation into multiplication. Indeed, they are equal if the coefficients are constant. However, we define this symbol for arbitrary differential operators with non-constant coefficients.
+
+In general, for each $x$, we can write
+\[
+ \sigma^p (\mathbf{k}, x) = \mathbf{k}^T A(x)\mathbf{k},
+\]
+where $A(x)$ has elements $a^{ij}(x)$. Recall that a real symmetric matrix (such as $A$) has all real eigenvalues. So we define the following:
+
+\begin{defi}[Elliptic, hyperbolic, ultra-hyperbolic and parabolic differential operators]
+ Let $\mathcal{L}$ be a differential operator. We say $\mathcal{L}$ is
+ \begin{itemize}
+ \item \emph{elliptic at $x$} if all eigenvalues of $A(x)$ have the same sign. Equivalently, if $\sigma^p(\ph, x)$ is a definite quadratic form;
+ \item \emph{hyperbolic at $x$} if all but one of the eigenvalues of $A(x)$ have the same sign;
+ \item \emph{ultra-hyperbolic at $x$} if $A(x)$ has more than one eigenvalue of each sign;
+ \item \emph{parabolic at $x$} if $A(x)$ has a zero eigenvalue, i.e.\ $\sigma^p(\ph, x)$ is degenerate.
+ \end{itemize}
+ We say $\mathcal{L}$ is elliptic if $\mathcal{L}$ is elliptic at all $x$, and similarly for the other terms.
+\end{defi}
+These names are supposed to remind us of conic sections, and indeed if we think of $\mathbf{k}^T A\mathbf{k}$ as an equation in $\mathbf{k}$, then we get a conic section.
+
+\begin{eg}
+ Let
+ \[
+ \mathcal{L} = a(x, y) \frac{\partial^2}{\partial x^2} + 2b(x, y) \frac{\partial^2}{\partial x \partial y} + c(x, y) \frac{\partial^2}{\partial y^2} + d(x, y) \frac{\partial}{\partial x} + e(x, y) \frac{\partial}{\partial y} + f(x, y).
+ \]
+ Then the principal part of the symbol is
+ \[
+ \sigma^p(\mathbf{k}, x) =
+ \begin{pmatrix}
+ k_x & k_y
+ \end{pmatrix}
+ \begin{pmatrix}
+ a(x, y) & b(x, y)\\
+ b(x, y) & c(x, y)
+ \end{pmatrix}
+ \begin{pmatrix}
+ k_x \\ k_y
+ \end{pmatrix}
+ \]
+ Then $\mathcal{L}$ is elliptic at $x$ if $b^2 - ac < 0$; hyperbolic if $b^2 - ac > 0$; and parabolic if $b^2 - ac = 0$.
+
+ Note that since we only have two dimensions and hence only two eigenvalues, we cannot possibly have an ultra-hyperbolic equation.
+\end{eg}
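The classification is easy to mechanize: at each point we just inspect the signs of the eigenvalues of $A(x)$. A minimal numpy sketch (the tolerance for treating an eigenvalue as zero is our own choice, for illustration):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the symmetric coefficient matrix A(x) at a point
    by the signs of its eigenvalues."""
    evals = np.linalg.eigvalsh(A)
    pos = int(np.sum(evals > tol))
    neg = int(np.sum(evals < -tol))
    zero = len(evals) - pos - neg
    if zero > 0:
        return 'parabolic'
    if pos == len(evals) or neg == len(evals):
        return 'elliptic'
    if min(pos, neg) == 1:
        return 'hyperbolic'
    return 'ultra-hyperbolic'

# Laplacian: A = I, all eigenvalues positive
assert classify(np.eye(2)) == 'elliptic'
# Wave operator on R^{1,1}: A = diag(1, -c^2)
assert classify(np.diag([1.0, -4.0])) == 'hyperbolic'
# Heat operator: no second time derivative, so a zero eigenvalue
assert classify(np.diag([0.0, -1.0])) == 'parabolic'
```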
+
+\subsubsection*{Characteristic surfaces}
+\begin{defi}[Characteristic surface]
+ Given a differential operator $\mathcal{L}$, let
+ \[
+ f(x^1, x^2, \cdots, x^n) = 0
+ \]
+ define a surface $C \subseteq \R^n$. We say $C$ is \emph{characteristic} if
+ \[
+ \sum_{i, j = 1}^n a^{ij}(x) \frac{\partial f}{\partial x^i} \frac{\partial f}{\partial x^j} = (\nabla f)^T A (\nabla f) = \sigma^p(\nabla f, x) = 0.
+ \]
+ In the case where we only have two dimensions, a characteristic surface is just a curve.
+\end{defi}
+We see that the characteristic equation restricts what values $\nabla f$ can take. Recall that $\nabla f$ is the normal to the surface $C$. So in general, at any point, we can find what the normal of the surface should be, and stitch these together to form the full characteristic surfaces.
+
+For an elliptic operator, all the eigenvalues of $A$ have the same sign. So there are no non-trivial real solutions to this equation $(\nabla f)^T A (\nabla f) = 0$. Consequently, elliptic operators have no real characteristics. So the method of characteristics would not be of any use when studying, say, Laplace's equation, at least if we want to stay in the realm of real numbers.
+
+If $\mathcal{L}$ is parabolic, we for simplicity assume that $A$ has exactly one zero eigenvalue, with corresponding unit eigenvector $\mathbf{n}$ (normalized such that $\mathbf{n}\cdot \mathbf{n} = 1$), and that the other eigenvalues all have the same sign. This is the case when, say, there are just two dimensions. Since $A$ is symmetric, we have $A \mathbf{n} = \mathbf{0}$ and $\mathbf{n}^T A = \mathbf{0}^T$.
+
+For any $\nabla f$, we can always decompose it as
+\[
+ \nabla f = \mathbf{n} (\mathbf{n} \cdot \nabla f) + [\nabla f - \mathbf{n}(\mathbf{n}\cdot \nabla f)].
+\]
+This is a relation that is trivially true, since we just add and subtract the same thing. Note, however, that the first term points along $\mathbf{n}$, while the latter term is orthogonal to $\mathbf{n}$. To save some writing, we adopt the notation
+\[
+ \nabla_\perp f = \nabla f - \mathbf{n}(\mathbf{n} \cdot \nabla f).
+\]
+So we have
+\[
+ \nabla f = \mathbf{n}(\mathbf{n}\cdot \nabla f) + \nabla_\perp f.
+\]
+Then we can compute
+\begin{align*}
+ (\nabla f)^T A(\nabla f) &= [\mathbf{n}(\mathbf{n}\cdot \nabla f) + \nabla_\perp f]^T A[\mathbf{n}(\mathbf{n}\cdot \nabla f) + \nabla_\perp f]\\
+ &= (\nabla_\perp f)^T A(\nabla_\perp f).
+\end{align*}
+Then by assumption, $(\nabla_\perp f)^T A(\nabla_\perp f)$ is definite. So just as in the elliptic case, there are no non-trivial solutions. Hence, if $f$ defines a characteristic surface, then $\nabla_\perp f = 0$. In other words, $\nabla f$ is parallel to $\mathbf{n}$. So at any point, the normal to a characteristic surface must be $\mathbf{n}$, and there is only one possible characteristic.
+
+If $\mathcal{L}$ is hyperbolic, we assume all but one of the eigenvalues are positive, and let $-\lambda$ be the unique negative eigenvalue. We let $\mathbf{n}$ be the corresponding unit eigenvector, normalized such that $\mathbf{n}\cdot \mathbf{n} = 1$. We say $f$ is characteristic if
+\begin{align*}
+ 0 &= (\nabla f)^T A(\nabla f) \\
+ &= [\mathbf{n}(\mathbf{n}\cdot \nabla f) + \nabla_\perp f]^T A[\mathbf{n}(\mathbf{n}\cdot \nabla f) + \nabla_\perp f]\\
+ &= -\lambda (\mathbf{n}\cdot \nabla f)^2 + (\nabla_\perp f)^T A(\nabla_\perp f).
+\end{align*}
+Consequently, for this to be a characteristic, we need
+\[
+ \mathbf{n}\cdot \nabla f = \pm \sqrt{\frac{(\nabla_\perp f)^T A(\nabla_\perp f)}{\lambda}}.
+\]
+So there are two choices for $\mathbf{n}\cdot \nabla f$, given any $\nabla_\perp f$. So hyperbolic equations have two characteristic surfaces through any point. % why?
+
+This is not too helpful in general. However, in the case where we have two dimensions, we can find the characteristic curves explicitly. Suppose our curve is given by $f(x, y) = 0$. We can write $y = y(x)$. Then since $f$ is constant along each characteristic, by the chain rule, we know
+\[
+ 0 = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y} \frac{\d y}{\d x}.
+\]
+Hence, we can compute
+\[
+ \frac{\d y}{\d x} = -\frac{-b \pm \sqrt{b^2 - ac}}{a}.
+\]
+We now see explicitly how the type of the differential equation influences the number of characteristics --- if $b^2 - ac > 0$, then we obtain two distinct differential equations and obtain two solutions; if $b^2 - ac = 0$, then we only have one equation; if $b^2 - ac < 0$, then there are no real characteristics.
+
+\begin{eg}
+ Consider
+ \[
+ \partial_y^2 \phi - xy \partial_x^2 \phi = 0
+ \]
+ on $\R^2$. Then $a = -xy, b = 0, c = 1$. So $b^2 - ac = xy$. So the type is elliptic if $xy < 0$, hyperbolic if $xy > 0$, and parabolic if $xy = 0$.
+
+ In the regions where it is hyperbolic, we find
+ \[
+ \frac{-b \pm \sqrt{b^2 - ac}}{a} = \pm \frac{1}{\sqrt{xy}}.
+ \]
+ Hence the two characteristics are given by
+ \[
+ \frac{\d y}{\d x} = \pm \frac{1}{\sqrt{xy}}.
+ \]
+ This has a solution
+ \[
+ \frac{1}{3}y^{3/2}\pm x^{1/2} = c.
+ \]
+ We now let
+ \begin{align*}
+ u &= \frac{1}{3}y^{3/2} + x^{1/2},\\
+ v &= \frac{1}{3}y^{3/2} - x^{1/2}.
+ \end{align*}
+ Then the equation becomes
+ \[
+ \frac{\partial^2 \phi}{\partial u\partial v} + \text{lower order terms} = 0.
+ \]
+\end{eg}
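We can confirm that the curves $\frac{1}{3}y^{3/2} \pm x^{1/2} = c$ really are characteristics by substituting $f$ into the characteristic equation $a f_x^2 + 2b f_x f_y + c f_y^2 = 0$. A sympy sketch, restricted to the hyperbolic region $xy > 0$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)  # stay in the hyperbolic region xy > 0

# Coefficients of  -x*y*phi_xx + phi_yy = 0  (a = -xy, b = 0, c = 1)
a, b, c = -x*y, 0, 1

for sign in (1, -1):
    f = y**sp.Rational(3, 2)/3 + sign*sp.sqrt(x)
    sigma_p = (a*sp.diff(f, x)**2 + 2*b*sp.diff(f, x)*sp.diff(f, y)
               + c*sp.diff(f, y)**2)
    assert sp.simplify(sigma_p) == 0  # f = const defines a characteristic
```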
+
+\begin{eg}
+ Consider the wave equation
+ \[
+ \frac{\partial^2 \phi}{\partial t^2} - c^2 \frac{\partial^2 \phi}{\partial x^2} = 0
+ \]
+ on $\R^{1, 1}$. Then the equation is hyperbolic everywhere, and the characteristic curves are $x \pm ct = $ const. Let's look for a solution to the wave equation that obeys
+ \[
+ \phi(x, 0) = f(x),\quad \partial_t \phi(x, 0) = g(x).
+ \]
+ Now put $u = x - ct, v = x + ct$. Then the wave equation becomes
+ \[
+ \frac{\partial^2 \phi}{\partial u \partial v} = 0.
+ \]
+ So the general solution to this is
+ \[
+ \phi(x, t) = G(u) + H(v) = G(x - ct) + H(x + ct).
+ \]
+ The initial conditions now fix these functions
+ \[
+ f(x) = G(x) + H(x),\quad g(x) = -c G'(x) + c H'(x).
+ \]
+ Solving these, we find
+ \[
+ \phi(x, t) = \frac{1}{2}[f(x - ct) + f(x + ct)] + \frac{1}{2c}\int_{x - ct}^{x + ct} g(y)\;\d y.
+ \]
+ This is d'Alembert's solution to the $1 + 1$ dimensional wave equation.
+\end{eg}
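D'Alembert's formula can be verified directly for concrete initial data. The sketch below (sympy, with sample $f$, $g$ and $c = 2$ chosen purely for illustration) checks both the wave equation and the initial conditions:

```python
import sympy as sp

x, t, y = sp.symbols('x t y', real=True)
c = 2  # sample wave speed

# Sample initial data (chosen for illustration)
f = sp.exp(-x**2)
g = sp.sin(x)

# d'Alembert's solution
phi = (f.subs(x, x - c*t) + f.subs(x, x + c*t))/2 \
    + sp.integrate(g.subs(x, y), (y, x - c*t, x + c*t))/(2*c)

# It solves the wave equation ...
assert sp.simplify(sp.diff(phi, t, 2) - c**2*sp.diff(phi, x, 2)) == 0
# ... and matches the initial data
assert sp.simplify(phi.subs(t, 0) - f) == 0
assert sp.simplify(sp.diff(phi, t).subs(t, 0) - g) == 0
```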
+Note that the value of $\phi$ at any point $(x, t)$ is \emph{completely} determined by $f, g$ in the interval $[x - ct, x + ct]$. This is known as the \emph{(past) domain of dependence} of the solution at $(x, t)$, written $D^-(x, t)$. Similarly, at any time $t$, the initial data at $(x_0, 0)$ only affects the solution within the region $x_0 - ct \leq x \leq x_0 + ct$. This is the \emph{range of influence} of data at $x_0$, written $D^+(x_0)$.
+\begin{center}
+ \begin{tikzpicture}
+ \node at (-1.5, 2) [circ] {};
+ \node at (-1.5, 2) [above] {$p$};
+ \fill [mgreen, opacity=0.5] (-3.5, 0) -- (-1.5, 2) -- (0.5, 0) -- cycle;
+ \draw (-3.5, 0) -- (-1.5, 2) -- (0.5, 0);
+
+ \node at (-1.5, 0.667) {$D^-(p)$};
+
+ \draw [mred] (-4, 0) -- (4, 0) node [right] {$B$};
+ \draw (1.5, 0) -- (-0.5, 2);
+ \draw (2, 0) -- (4, 2);
+
+ \fill [morange, opacity=0.5] (1.5, 0) -- (-0.5, 2) -- (4, 2) -- (2, 0) -- cycle;
+ \draw [mblue, thick] (1.5, 0) -- (2, 0) node [below, pos=0.5] {$S$};
+ \node at (1.75, 1) {$D^+(S)$};
+ \end{tikzpicture}
+\end{center}
+We see that disturbances in the wave equation propagate with speed $c$.
+
+\subsection{Green's functions for PDEs on \texorpdfstring{$\R^n$}{Rn}}
+\subsubsection*{Green's functions for the heat equation}
+Suppose $\phi: \R^n \times [0, \infty) \to \R$ solves the heat equation
+\[
+ \partial_t \phi = D \nabla^2 \phi,
+\]
+where $D$ is the diffusion constant, subject to
+\[
+ \phi|_{\R^n \times \{0\}} = f.
+\]
+To solve this, we take the Fourier transform of the heat equation in the spatial variables. For simplicity of notation, we take $n = 1$. Writing $\mathcal{F}[\phi(x, t)] = \tilde{\phi}(k, t)$, we get
+\[
+ \partial_t \tilde{\phi}(k ,t) = -D k^2 \tilde{\phi}(k, t)
+\]
+with the boundary conditions
+\[
+ \tilde{\phi}(k, 0) = \tilde{f}(k).
+\]
+Note that this is just an ordinary differential equation in $t$, with $k$ entering only as a fixed parameter. So we have
+\[
+ \tilde{\phi}(k, t) = \tilde{f}(k) e^{-Dk^2 t}.
+\]
+We now take the inverse Fourier transform and get
+\[
+ \phi(x, t) = \frac{1}{2\pi} \int_\R e^{ikx}\left[\tilde{f}(k) e^{-D k^2 t} \right]\;\d k
+\]
+This is the inverse Fourier transform of a product. This thus gives the convolution
+\[
+ \phi(x, t) = f * \mathcal{F}^{-1}[e^{-Dk^2 t}].
+\]
+So we are done if we can find the inverse Fourier transform of the Gaussian. This is an easy exercise on example sheet 3, where we find
+\[
+ \mathcal{F}[e^{-a^2 x^2}] = \frac{\sqrt{\pi}}{a} e^{-k^2/4a^2}.
+\]
+Setting
+\[
+ a^2 = \frac{1}{4Dt},
+\]
+we get
+\[
+ \mathcal{F}^{-1}[e^{-Dk^2 t}] = \frac{1}{\sqrt{4\pi Dt}} \exp\left(-\frac{x^2}{4Dt}\right)
+\]
+We shall call this $S_1(x, t)$, where the subscript $1$ tells us we are in $1 + 1$ dimensions. This is known as the \emph{fundamental solution} of the heat equation. We then get
+\[
+ \phi(x, t) = \int_{-\infty}^\infty f(y) S_1(x - y, t) \;\d y
+\]
+\begin{eg}
+ Suppose our initial data is
+ \[
+ f(x) = \phi_0\delta(x).
+ \]
+ So we start with a really cold room, with a huge spike in temperature at the middle of the room. Then we get
+ \[
+ \phi(x, t) = \frac{\phi_0}{\sqrt{4\pi Dt}} \exp\left(-\frac{x^2}{4Dt}\right).
+ \]
+ What this shows, as we've seen before, is that if we start with a delta function, then as time evolves, we get a Gaussian that gets shorter and broader.
+
+ Now note that if we start with a delta function, then at $t = 0$, the solution is zero everywhere outside the origin. However, after any infinitesimally small time $t$, $\phi$ becomes non-zero everywhere, instantly. Unlike the wave equation, information travels instantly to all of space, i.e.\ heat propagates arbitrarily fast according to this equation (of course in reality, it doesn't). This, fundamentally, is because the heat equation is parabolic, and only has one characteristic.
+\end{eg}
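It is a quick symbolic exercise to confirm that $S_1$ does solve the heat equation for $t > 0$, and that the total heat $\int S_1 \;\d x$ equals $1$ at every time. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)
t, D = sp.symbols('t D', positive=True)

# Fundamental solution of the 1D heat equation
S1 = sp.exp(-x**2/(4*D*t))/sp.sqrt(4*sp.pi*D*t)

# It satisfies  S1_t = D S1_xx  for t > 0 ...
assert sp.simplify(sp.diff(S1, t) - D*sp.diff(S1, x, 2)) == 0

# ... and carries unit total heat at every time
assert sp.simplify(sp.integrate(S1, (x, -sp.oo, sp.oo))) == 1
```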
+
+Now suppose instead $\phi$ satisfies the inhomogeneous, forced, heat equation
+\[
+ \partial_t \phi - D \nabla^2 \phi = F(x, t),
+\]
+but with homogeneous initial conditions $\phi|_{t = 0} = 0$. Physically, this represents having an external heat source somewhere, starting at zero.
+
+Note that if we can solve this, then we have completely solved the heat equation. If we have an inhomogeneous equation \emph{and} an inhomogeneous initial condition, then we can solve the forced problem with homogeneous initial conditions to get $\phi_F$, and solve the unforced equation with the inhomogeneous initial condition to get $\phi_H$. Then the sum $\phi = \phi_F + \phi_H$ solves the full problem.
+
+As before, we take the Fourier transform of the forced equation with respect to the spatial variables. As before, we will just do it in the case of one spatial dimension. We find
+\[
+ \partial_t \tilde{\phi}(k, t) + Dk^2 \tilde{\phi}(k, t) = \tilde{F}(k, t),
+\]
+with the initial condition
+\[
+ \tilde{\phi}(k, 0) = 0.
+\]
+As before, we have reduced this to a first-order ordinary differential equation in $t$. Using an integrating factor, we can rewrite this as
+\[
+ \pd{t} [e^{Dk^2 t} \tilde{\phi}(k, t)] = e^{Dk^2 t} \tilde{F}(k, t).
+\]
+The solution is then
+\[
+ \tilde{\phi}(k, t) = e^{-Dk^2 t} \int_0^t e^{Dk^2 u} \tilde{F}(k, u)\;\d u.
+\]
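We can check this integrating-factor solution against the original ordinary differential equation. The sympy sketch below uses a sample transformed forcing $\tilde{F}(k, u) = \sin u$, chosen purely for illustration:

```python
import sympy as sp

u = sp.symbols('u', real=True)
t, D, k = sp.symbols('t D k', positive=True)

# Sample transformed forcing (hypothetical, for illustration)
F = sp.sin(u)

# Integrating-factor solution with phi(k, 0) = 0
phi = sp.exp(-D*k**2*t)*sp.integrate(sp.exp(D*k**2*u)*F, (u, 0, t))

# It solves  phi_t + D k^2 phi = F(t)  with vanishing initial data
assert sp.simplify(sp.diff(phi, t) + D*k**2*phi - sp.sin(t)) == 0
assert sp.simplify(phi.subs(t, 0)) == 0
```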
+We define the Green's function $G(x, t; y, \tau)$ to be the solution to
+\[
+ [\partial_t - D \nabla_x^2] G(x, t; y, \tau) = \delta(x - y) \delta(t - \tau).
+\]
+So the Fourier transform with respect to $x$ gives
+\[
+ \tilde{G}(k, t, y, \tau) = e^{-Dk^2 t} \int_0^t e^{D k^2 u}e^{iky}\delta(t - \tau) \;\d u,
+\]
+where $e^{iky}$ is just the Fourier transform of $\delta(x - y)$. This is equal to
+\[
+ \tilde{G}(k, t; y, \tau) =
+ \begin{cases}
+ 0 & t < \tau\\
+ e^{-iky} e^{-Dk^2 (t - \tau)} & t > \tau
+ \end{cases}
+ = \Theta(t - \tau) e^{-iky} e^{-Dk^2 (t - \tau)}.
+\]
+Reverting the Fourier transform, we get
+\[
+ G(x, t; y, \tau) = \frac{\Theta(t - \tau)}{2\pi} \int_\R e^{ik(x - y)} e^{-Dk^2(t - \tau)}\;\d k.
+\]
+This integral is just the inverse Fourier transform of the Gaussian with a phase shift. So we end up with
+\[
+ G(x, t; y, \tau) = \frac{\Theta(t - \tau)}{\sqrt{4\pi D(t- \tau)}} \exp\left(-\frac{(x - y)^2}{4D (t - \tau)}\right) = \Theta(t - \tau) S_1 (x - y; t - \tau).
+\]
+The solution we seek is then
+\[
+ \phi(x, t) = \int_0^t \int_\R F(y, \tau) G(x, t; y, \tau)\;\d y\;\d \tau.
+\]
+It is interesting that the solution to the forced equation involves the same function $S_1(x, t)$ as the homogeneous equation with inhomogeneous boundary conditions.
+
+In general, $S_n(\mathbf{x}, t)$ solves
+\[
+ \frac{\partial S_n}{\partial t} - D \nabla^2 S_n = 0
+\]
+with boundary conditions $S_n(\mathbf{x}, 0) = \delta^{(n)}(\mathbf{x} - \mathbf{y})$, and we can find
+\[
+ S_n(\mathbf{x}, t) = \frac{1}{(4\pi D t)^{n/2}} \exp\left(-\frac{|\mathbf{x} - \mathbf{y}|^2}{4 Dt}\right).
+\]
+Then in general, given an initial condition $\phi|_{t = 0}= f(\mathbf{x})$, the solution is
+\[
+ \phi(\mathbf{x}, t) = \int f(\mathbf{y}) S(\mathbf{x} - \mathbf{y}, t)\;\d^n y.
+\]
+Similarly, $G_n(\mathbf{x}, t; \mathbf{y}, \tau)$ solves
+\[
+ \frac{\partial G_n}{\partial t} - D \nabla^2 G_n = \delta(t - \tau) \delta^{(n)} (\mathbf{x} - \mathbf{y}),
+\]
+with the boundary condition $G_n(\mathbf{x}, 0; \mathbf{y}, \tau) = 0$. The solution is
+\[
+ G(\mathbf{x}, t; \mathbf{y}, \tau) = \Theta(t - \tau) S_n (\mathbf{x} - \mathbf{y}, t - \tau).
+\]
+Given our Green's function, the general solution to the forced heat equation
+\[
+ \frac{\partial \phi}{\partial t} - D \nabla^2 \phi = F(\mathbf{x}, t),\quad \phi(\mathbf{x}, 0) = 0
+\]
+is just
+\begin{align*}
+ \phi(\mathbf{x}, t) &= \int_0^\infty \int_{\R^n} F(\mathbf{y}, \tau) G(\mathbf{x}, t; \mathbf{y}, \tau) \;\d^n y \;\d \tau \\
+ &= \int_0^t \int_{\R^n} F(\mathbf{y}, \tau) G(\mathbf{x}, t; \mathbf{y}, \tau) \;\d^n y \;\d \tau.
+\end{align*}
+Duhamel noticed that we can write this as
+\[
+ \phi(\mathbf{x}, t) = \int_0^t \phi_F(\mathbf{x}, t; \tau)\;\d \tau,
+\]
+where
+\[
+ \phi_F = \int_{\R^n} F(\mathbf{y}, \tau) S_n(\mathbf{x} - \mathbf{y}, t - \tau)\;\d^n y
+\]
+solves the homogeneous heat equation with $\phi_F |_{t = \tau} = F(\mathbf{x}, \tau)$.
+
+Hence in general, we can think of the forcing term as providing a whole sequence of ``initial conditions'' for all $t > 0$. We then integrate over the times at which these conditions were imposed to find the full solution to the forced problem. This interpretation is called \emph{Duhamel's principle}.
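A simple sanity check of Duhamel's principle (a sympy sketch with constant forcing $F = 1$, chosen for illustration): each slice $\phi_F$ is then identically $1$, so the principle predicts $\phi = t$, which indeed solves $\partial_t \phi - D\nabla^2 \phi = 1$ with $\phi(\mathbf{x}, 0) = 0$.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
t, D, s = sp.symbols('t D s', positive=True)  # s plays the role of t - tau > 0

S1 = sp.exp(-(x - y)**2/(4*D*s))/sp.sqrt(4*sp.pi*D*s)

# With F = 1, each slice is phi_F(x, t; tau) = integral of S1 over y = 1
phi_F = sp.simplify(sp.integrate(S1, (y, -sp.oo, sp.oo)))
assert phi_F == 1

# Duhamel then gives phi = integral_0^t 1 dtau = t, solving the forced equation
phi = t
assert sp.diff(phi, t) - D*sp.diff(phi, x, 2) == 1
```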
+\subsubsection*{Green's functions for the wave equation}
+Suppose $\phi: \R^n \times [0, \infty) \to \C$ solves the inhomogeneous wave equation
+\[
+ \frac{\partial^2 \phi}{\partial t^2} - c^2 \nabla^2 \phi = F(\mathbf{x}, t)
+\]
+with
+\[
+ \phi(\mathbf{x}, 0) = \pd{t} \phi(\mathbf{x}, 0) = 0.
+\]
+We look for a Green's function $G_n(\mathbf{x}, t; \mathbf{y}, \tau)$ that solves
+\[
+ \frac{\partial^2 G_n}{\partial t^2} - c^2 \nabla^2 G_n = \delta(t - \tau) \delta^{(n)}(\mathbf{x} - \mathbf{y}) \tag{$*$}
+\]
+with the same initial conditions
+\[
+ G_n(\mathbf{x}, 0, \mathbf{y}, \tau) = \pd{t} G_n(\mathbf{x}, 0, \mathbf{y}, \tau) = 0.
+\]
+Just as before, we take the Fourier transform of this equation with respect to the spatial variables $\mathbf{x}$. We get
+\[
+ \frac{\partial^2 \tilde{G}_n}{\partial t^2} + c^2 |\mathbf{k}|^2 \tilde{G}_n = \delta(t - \tau) e^{-i\mathbf{k} \cdot \mathbf{y}},
+\]
+where $\tilde{G}_n = \tilde{G}_n(\mathbf{k}, t, \mathbf{y}, \tau)$.
+
+This is just an ordinary differential equation from the point of view of $t$, and is of the same type of initial value problem that we studied earlier, and the solution is
+\[
+ \tilde{G}_n (\mathbf{k}, t, \mathbf{y}, \tau) = \Theta(t - \tau) e^{-i\mathbf{k} \cdot \mathbf{y}} \frac{\sin |\mathbf{k}| c (t - \tau)}{|\mathbf{k}| c}.
+\]
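For fixed $\mathbf{k}$ this is just a forced simple harmonic oscillator, and we can verify the claimed solution piecewise: for $t > \tau$ it solves the homogeneous equation, and the delta function forces a unit jump in $\partial_t \tilde{G}_n$ across $t = \tau$. A sympy sketch, dropping the constant phase $e^{-i\mathbf{k}\cdot\mathbf{y}}$ and writing $k = |\mathbf{k}|$:

```python
import sympy as sp

t, tau, c, k = sp.symbols('t tau c k', positive=True)

# For t > tau (phase factor stripped off), the claimed solution is
G = sp.sin(k*c*(t - tau))/(k*c)

# It solves the homogeneous oscillator equation ...
assert sp.simplify(sp.diff(G, t, 2) + c**2*k**2*G) == 0

# ... with G = 0 and dG/dt = 1 at t = tau, the jump dictated by the delta
assert G.subs(t, tau) == 0
assert sp.diff(G, t).subs(t, tau) == 1
```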
+To recover the Green's function itself, we have to compute the inverse Fourier transform, and find
+\[
+ G_n(\mathbf{x}, t; \mathbf{y}, \tau) = \frac{1}{(2\pi)^n} \int_{\R^n} e^{i\mathbf{k}\cdot \mathbf{x}} \Theta(t - \tau) e^{-i\mathbf{k}\cdot \mathbf{y}}\frac{\sin |\mathbf{k}| c(t - \tau)}{|\mathbf{k}| c} \;\d^n k.
+\]
+Unlike the case of the heat equation, the form of the answer we get here does depend on the number of spatial dimensions $n$. For definiteness, we look at the case where $n = 3$, since our world (probably) has three dimensions. Then our Green's function is
+\[
+ G(\mathbf{x}, t; \mathbf{y}, \tau) = \frac{\Theta(t - \tau)}{(2\pi)^3 c} \int_{\R^3} e^{i\mathbf{k}\cdot (\mathbf{x} - \mathbf{y})} \frac{\sin |\mathbf{k}|c(t - \tau)}{|\mathbf{k}|} \;\d^3 k.
+\]
+We use spherical polar coordinates with the $z$-axis in $k$-space aligned along the direction of $\mathbf{x} - \mathbf{y}$. Hence $\mathbf{k}\cdot (\mathbf{x} - \mathbf{y}) = kr \cos \theta$, where $r = |\mathbf{x} - \mathbf{y}|$ and $k = |\mathbf{k}|$.
+
+Note that nothing in our integral depends on $\varphi$, so we can pull out a factor of $2\pi$, and get
+\begin{align*}
+ G(\mathbf{x}, t; \mathbf{y}, \tau) &= \frac{\Theta(t - \tau)}{(2 \pi)^2 c} \int_0^\infty \int_0^\pi e^{ik r\cos \theta} \frac{\sin kc(t - \tau)}{k} k^2 \sin \theta \;\d \theta \;\d k\\
+ \intertext{The next integral to do is the $\theta$ integral, which is straightforward since it is an exact differential. Setting $\alpha = c(t - \tau)$, we get}
+ &= \frac{\Theta(t - \tau)}{(2\pi)^2 c} \int_0^\infty \left[\frac{e^{ikr} - e^{-ikr}}{ikr}\right] \frac{\sin kc(t - \tau)}{k} k^2 \;\d k\\
+ &= \frac{\Theta(t - \tau)}{(2\pi)^2 icr} \left[\int_0^\infty e^{ikr} \sin k\alpha\;\d k - \int_0^\infty e^{-ikr} \sin k\alpha\;\d k\right]\\
+ &= \frac{\Theta(t - \tau)}{2\pi i c r} \left[\frac{1}{2\pi} \int_{-\infty}^\infty e^{ikr} \sin k\alpha \;\d k\right]\\
+ &= \frac{\Theta(t - \tau)}{2\pi i c r} \mathcal{F}^{-1}[\sin k \alpha].
+\end{align*}
+Now recall $\mathcal{F}[\delta(x - \alpha)] = e^{-ik\alpha}$. So
+\[
+ \mathcal{F}^{-1}[\sin k\alpha] = \mathcal{F}^{-1} \left[\frac{e^{ik\alpha} - e^{-ik\alpha}}{2 i}\right] = \frac{1}{2i} [\delta(x + \alpha) - \delta(x - \alpha)].
+\]
+Hence our Green's function is
+\[
+ G(\mathbf{x}, t; \mathbf{y}, \tau) = -\frac{\Theta(t - \tau)}{4\pi c |\mathbf{x} - \mathbf{y}|}\Big[\delta\big(|\mathbf{x} - \mathbf{y}| + c(t - \tau)\big) - \delta\big(|\mathbf{x} - \mathbf{y}| - c(t - \tau)\big)\Big].
+\]
+Now we look at our delta functions. The step function is non-zero only if $t > \tau$. Hence $|\mathbf{x} - \mathbf{y}| + c(t - \tau)$ is always positive, and $\delta(|\mathbf{x} - \mathbf{y}| + c(t - \tau))$ does not contribute. On the other hand, $\delta(|\mathbf{x} - \mathbf{y}| - c(t - \tau))$ is non-zero only if $t > \tau$, in which case $\Theta(t - \tau) = 1$. So we can write our Green's function as
+\[
+ G(\mathbf{x}, t; \mathbf{y}, \tau) = \frac{1}{4\pi c} \frac{1}{|\mathbf{x} - \mathbf{y}|} \delta(|\mathbf{x} - \mathbf{y}| - c(t - \tau)).
+\]
+As always, given our Green's function, the general solution to the forced equation
+\[
+ \frac{\partial^2 \phi}{\partial t^2} - c^2 \nabla^2 \phi = F(\mathbf{x}, t)
+\]
+is
+\[
+ \phi(\mathbf{x}, t) = \int_0^\infty \int_{\R^3} \frac{F(\mathbf{y}, \tau)}{4\pi c|\mathbf{x} - \mathbf{y}|} \delta(|\mathbf{x} - \mathbf{y}| - c(t - \tau)) \;\d^3 y\;\d \tau.
+\]
+We can use the delta function to do one of the integrals. It is up to us which integral we do, but we pick the time integral to do. Then we get
+\[
+ \phi(\mathbf{x}, t) = \frac{1}{4\pi c^2} \int_{\R^3} \frac{F(\mathbf{y}, t_{\mathrm{ret}})}{|\mathbf{x} - \mathbf{y}|} \;\d^3 y,
+\]
+where
+\[
+ t_{\mathrm{ret}} = t - \frac{|\mathbf{x} - \mathbf{y}|}{c}.
+\]
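The $\tau$-integral against the delta function is worth spelling out: viewed as a function of $\tau$, the argument $|\mathbf{x} - \mathbf{y}| - c(t - \tau)$ has a single root at $\tau = t_{\mathrm{ret}}$ and slope $c$, so the delta contributes a factor $1/c$, which is where the extra power of $c$ in the prefactor comes from. A sympy sketch with a sample integrand $\tau^2$ (chosen for illustration):

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
t, r, c = sp.symbols('t r c', positive=True)

# Integrating f(tau) = tau^2 against delta(r - c(t - tau)) picks out the
# retarded time tau = t - r/c, with a factor 1/c from the linear argument
expr = sp.integrate(sp.DiracDelta(r - c*(t - tau))*tau**2,
                    (tau, -sp.oo, sp.oo))
assert sp.simplify(expr - (t - r/c)**2/c) == 0
```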
+This shows that the effect of the forcing term at some point $\mathbf{y} \in \R^3$ affects the solution $\phi$ at some other point $\mathbf{x}$ not instantaneously, but only after time $|\mathbf{x} - \mathbf{y}|/c$ has elapsed. This is just as we saw for characteristics. This, again, tells us that information travels at speed $c$.
+
+Also, we see that the effect of the forcing term gets weaker and weaker as we move further away from the source. This dispersion depends on the number of dimensions of the space. As we spread out in a three-dimensional space, the ``energy'' from the forcing term has to be distributed over a larger sphere, and the effect diminishes. On the contrary, in one-dimensional space, there is no spreading out to do, and we don't have this reduction. In fact, in one dimension, we get
+\[
+ \phi(x, t) = \int_0^t \int_\R F(y, \tau) \frac{\Theta(c(t - \tau) - |x - y|)}{2c} \;\d y \;\d \tau.
+\]
+We see that there is now no suppression factor in the denominator, as expected.
+
+\subsection{Poisson's equation}
+Let $\phi: \R^3 \to \R$ satisfy Poisson's equation
+\[
+ \nabla^2 \phi = -F,
+\]
+where $F(\mathbf{x})$ is a forcing term.
+
+The fundamental solution to this equation is defined to be $G_3(\mathbf{x}, \mathbf{y})$, where
+\[
+ \nabla^2 G_3(\mathbf{x}, \mathbf{y}) = \delta^{(3)}(\mathbf{x} - \mathbf{y}).
+\]
+By rotational symmetry, $G_3(\mathbf{x}, \mathbf{y}) = G_3(|\mathbf{x} - \mathbf{y}|)$. Integrating over a ball
+\[
+ B_r = \{|\mathbf{x} - \mathbf{y}| \leq r, \mathbf{x} \in \R^3\},
+\]
+we have
+\begin{align*}
+ 1 &= \int_{B_r} \nabla^2 G_3\;\d V \\
+ &= \int_{\partial B_r} \mathbf{n}\cdot \nabla G_3 \;\d S\\
+ &= \int_{S^2} \frac{\d G_3}{\d r} r^2 \sin \theta \;\d \theta \;\d \phi\\
+ &= 4\pi r^2 \frac{\d G_3}{\d r}.
+\end{align*}
+So we know
+\[
+ \frac{\d G_3}{\d r} = \frac{1}{4\pi r^2},
+\]
+and hence
+\[
+ G_3(\mathbf{x}, \mathbf{y}) = -\frac{1}{4\pi |\mathbf{x} - \mathbf{y}|} + c.
+\]
+We often set $c = 0$ such that
+\[
+ \lim_{|\mathbf{x}| \to \infty} G_3 = 0.
+\]
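As a check, the following sympy sketch confirms that $G_3$ is harmonic away from the origin, and that its radial derivative is $1/(4\pi r^2)$ as derived above:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# The free-space Green's function for the Laplacian on R^3 (with c = 0)
G3 = -1/(4*sp.pi*r)

# Away from the origin, G3 is harmonic
lap = sp.diff(G3, x, 2) + sp.diff(G3, y, 2) + sp.diff(G3, z, 2)
assert sp.simplify(lap) == 0

# Its radial derivative reproduces dG3/dr = 1/(4 pi r^2)
rho = sp.symbols('rho', positive=True)
assert sp.simplify(sp.diff(-1/(4*sp.pi*rho), rho) - 1/(4*sp.pi*rho**2)) == 0
```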
+\subsubsection*{Green's identities}
+To make use of this fundamental solution in solving Poisson's equation, we first obtain some useful identities.
+
+Suppose $\phi, \psi: \R^3 \to \R$ are both smooth everywhere in some region $\Omega \subseteq \R^3$ with boundary $\partial \Omega$. Then
+\[
+ \int_{\partial \Omega} \phi\mathbf{n} \cdot \nabla \psi\;\d S = \int_\Omega \nabla\cdot (\phi \nabla \psi) \;\d V = \int_\Omega \phi \nabla^2 \psi + (\nabla \phi)\cdot (\nabla \psi)\;\d V.
+\]
+So we get
+\begin{prop}[Green's first identity]
+\[
+ \int_{\partial \Omega} \phi\mathbf{n} \cdot \nabla \psi\;\d S = \int_\Omega \phi \nabla^2 \psi + (\nabla \phi)\cdot (\nabla \psi)\;\d V.
+\]
+\end{prop}
+Of course, this is just an easy consequence of the divergence theorem, but when Green first came up with this, the divergence theorem did not yet exist.
+
+Similarly, we obtain
+\[
+ \int_{\partial \Omega} \psi\mathbf{n} \cdot \nabla \phi\;\d S = \int_\Omega \psi \nabla^2 \phi + (\nabla \phi)\cdot (\nabla \psi)\;\d V.
+\]
+Subtracting these two equations gives
+\begin{prop}[Green's second identity]
+ \[
+ \int_\Omega \phi \nabla^2 \psi - \psi \nabla^2 \phi \;\d V = \int_{\partial \Omega} \phi \mathbf{n}\cdot \nabla \psi - \psi \mathbf{n}\cdot \nabla \phi\;\d S.
+ \]
+\end{prop}
+Why is this useful? On the left, we have things like $\nabla^2 \psi$ and $\nabla^2 \phi$. These are things we are given by Poisson's or Laplace's equation. On the right, we have things on the boundary, and these are often the boundary conditions we are given. So this can be rather helpful when we have to solve these equations.
+
+\subsubsection*{Using the Green's function}
+We wish to apply this result to the case $\psi = G_3(|\mathbf{x} - \mathbf{y}|)$. However, recall that when deriving Green's identity, we assumed $\phi$ and $\psi$ are smooth everywhere in our domain. However, our Green's function is singular at $\mathbf{x} = \mathbf{y}$, but we want to integrate over this region as well. So we need to do this carefully. Because of the singularity, we want to take
+\[
+ \Omega = B_r - B_\varepsilon = \{\mathbf{x}\in \R^3: \varepsilon \leq |\mathbf{x} - \mathbf{y}| \leq r\}.
+\]
+In other words, we remove a small region of radius $\varepsilon$ centered on $\mathbf{y}$ from the domain.
+
+In this choice of $\Omega$, it is completely safe to use Green's identity, since our Green's function is certainly regular everywhere in this $\Omega$.
+First note that since $\nabla^2 G_3 = 0$ everywhere except at $\mathbf{x} = \mathbf{y}$, we get
+\[
+ \int_\Omega \phi \nabla^2 G_3 - G_3 \nabla^2 \phi \;\d V = - \int_\Omega G_3 \nabla^2 \phi \;\d V
+\]
+Then Green's second identity gives
+\begin{align*}
+ - \int_\Omega G_3 \nabla^2 \phi \;\d V &= \int_{S_r^2} \phi(\mathbf{n}\cdot \nabla G_3) - G_3(\mathbf{n}\cdot \nabla \phi)\;\d S \\
+ &\quad+ \int_{S_\varepsilon^2} \phi(\mathbf{n}\cdot \nabla G_3) - G_3(\mathbf{n}\cdot \nabla \phi)\;\d S
+\end{align*}
+Note that on the inner boundary, we have $\mathbf{n} = - \hat{\mathbf{r}}$. Also, at $S_\varepsilon^2$, we have
+\[
+ G_3|_{S_\varepsilon^2} = -\frac{1}{4\pi \varepsilon},\quad \left.\frac{\d G_3}{\d r}\right|_{S_\varepsilon^2} = \frac{1}{4\pi \varepsilon^2}.
+\]
+So the inner boundary terms are
+\begin{align*}
+ &\int_{S_\varepsilon^2} \Big(\phi(\mathbf{n}\cdot \nabla G_3) - G_3(\mathbf{n}\cdot \nabla \phi)\Big) \varepsilon^2 \sin \theta\;\d \theta\;\d \phi \\
+ &= -\frac{\varepsilon^2}{4 \pi \varepsilon^2} \int_{S_\varepsilon^2} \phi \sin \theta \;\d \theta \;\d \phi + \frac{\varepsilon^2}{4 \pi \varepsilon} \int_{S_\varepsilon^2} (\mathbf{n}\cdot \nabla \phi) \sin \theta \;\d \theta \;\d \phi\\
+ \intertext{Now the final integral is bounded by the assumption that $\phi$ is everywhere smooth. So as we take the limit $\varepsilon \to 0$, the final term vanishes. In the first term, the $\varepsilon$'s cancel. So we are left with}
+ &= -\frac{1}{4\pi} \int_{S_\varepsilon^2} \phi \;\d \Omega\\
+ &= -\bar{\phi} \\
+ &\to -\phi (\mathbf{y})
+\end{align*}
+where $\bar{\phi}$ is the average value of $\phi$ on the sphere.
+
+Now suppose $\nabla^2 \phi = -F$. Then this gives
+\begin{prop}[Green's third identity]
+ \[
+ \phi (\mathbf{y}) = \int_{\partial \Omega} \phi(\mathbf{n}\cdot \nabla G_3) - G_3(\mathbf{n}\cdot \nabla \phi) \;\d S - \int_\Omega G_3 (\mathbf{x}, \mathbf{y}) F(\mathbf{x})\;\d^3 x.
+ \]
+\end{prop}
+This expresses $\phi$ at any point $\mathbf{y}$ in $\Omega$ in terms of the fundamental solution $G_3$, the forcing term $F$ and boundary data. In particular, if the boundary values of $\phi$ and $\mathbf{n} \cdot \nabla \phi$ vanish as we take $r \to \infty$, then we have
+\[
+ \phi(\mathbf{y}) = - \int_{\R^3} G_3(\mathbf{x}, \mathbf{y}) F(\mathbf{x})\;\d^3 x.
+\]
+So the fundamental solution \emph{is} the Green's function for Poisson's equation on $\R^3$.
+
+However, there is a puzzle. Suppose $F = 0$. So $\nabla^2 \phi = 0$. Then Green's identity says
+\[
+ \phi(\mathbf{y}) = \int_{S_r^2} \left(\phi \frac{\d G_3}{\d r} - G_3 \frac{\d \phi}{\d r}\right)\;\d S.
+\]
+But we know there is a unique solution to Laplace's equation on every bounded domain once we specify the boundary value $\phi|_{\partial \Omega}$, or a unique-up-to-constant solution if we specify the boundary value of $\mathbf{n}\cdot \nabla \phi|_{\partial \Omega}$.
+
+However, to get $\phi(\mathbf{y})$ using Green's identity, we need to know \emph{both} $\phi$ and $\mathbf{n} \cdot \nabla \phi$ on the boundary. This is too much.
+
+Green's third identity \emph{is} a valid relation obeyed by solutions to Poisson's equation, but it is not constructive. We cannot specify $\phi$ \emph{and} $\mathbf{n}\cdot \nabla \phi$ freely. What we would like is a formula for $\phi$ given, say, just the value of $\phi$ on the boundary.
+
+\subsubsection*{Dirichlet Green's function}
+To overcome this (in the Dirichlet case), we seek to modify $G_3$ via
+\[
+ G_3 \to G = G_3 + H(\mathbf{x}, \mathbf{y}),
+\]
+where $\nabla^2 H = 0$ everywhere in $\Omega$, $H$ is regular throughout $\Omega$, and $G|_{\partial \Omega} = 0$. In other words, we find some $H$ that does not affect the relations on $G$ when acted on by $\nabla^2$, but now our $G$ will have boundary value $0$. We will find this $H$ later, but given such an $H$, we replace $G_3$ with $G - H$ in Green's third identity, and see that all the $H$ terms fall out, i.e.\ $G$ also satisfies Green's third identity. So
+\begin{align*}
+ \phi(\mathbf{y}) &= \int_{\partial\Omega} \left[\phi \mathbf{n}\cdot \nabla G - G \mathbf{n}\cdot \nabla \phi\right]\;\d S - \int FG\;\d V\\
+ &= \int_{\partial\Omega} \phi \mathbf{n}\cdot \nabla G\;\d S - \int FG\;\d V.
+\end{align*}
+So as long as we find $H$, we can express the value of $\phi(\mathbf{y})$ in terms of the values of $\phi$ on the boundary. Similarly, if we are given a Neumann condition, i.e.\ the value of $\mathbf{n} \cdot \nabla \phi$ on the boundary, we have to find an $H$ that kills off $\mathbf{n}\cdot \nabla G$ on the boundary, and get a similar result.
+
+In general, finding a harmonic function $H$, i.e.\ one with $\nabla^2 H = 0$, satisfying
+\[
+ H|_{\partial \Omega} = \left.\frac{1}{4\pi |\mathbf{x} - \mathbf{x}_0|}\right|_{\partial \Omega}
+\]
+is a difficult problem. However, the \emph{method of images} allows us to solve this in some special cases with lots of symmetry.
+
+\begin{eg}
+ Suppose
+ \[
+ \Omega = \{(x, y, z) \in \R^3: z \geq 0\}.
+ \]
+ We wish to find a solution to $\nabla^2 \phi = -F$ in $\Omega$ with $\phi \to 0$ rapidly as $|\mathbf{x}| \to \infty$, subject to the boundary condition $\phi(x, y, 0) = g(x, y)$.
+
+ The fundamental solution
+ \[
+ G_3(\mathbf{x}, \mathbf{x}_0) = -\frac{1}{4\pi} \frac{1}{|\mathbf{x} - \mathbf{x}_0|}
+ \]
+ obeys all the conditions we need except
+ \[
+ G_3|_{z = 0} = -\frac{1}{4\pi} \frac{1}{[(x - x_0)^2 + (y - y_0)^2 + z_0^2]^{1/2}} \not= 0.
+ \]
+ However, let $\mathbf{x}_0^R$ be the point $(x_0, y_0, -z_0)$. This is the reflection of $\mathbf{x}_0$ in the boundary plane $z = 0$. Since the point $\mathbf{x}_0^R$ is outside our domain, $G_3(\mathbf{x}, \mathbf{x}_0^R)$ obeys
+ \[
+ \nabla^2 G_3(\mathbf{x}, \mathbf{x}_0^R) = 0
+ \]
+ for all $\mathbf{x} \in \Omega$, and also
+ \[
+ G_3(\mathbf{x}, \mathbf{x}_0^R)|_{z = 0} = G_3(\mathbf{x}, \mathbf{x}_0)|_{z = 0}.
+ \]
+ Hence we take
+ \[
+ G(\mathbf{x}, \mathbf{x}_0) = G_3(\mathbf{x}, \mathbf{x}_0) - G_3(\mathbf{x}, \mathbf{x}_0^R).
+ \]
+ The outward pointing normal to $\Omega$ at $z = 0$ is $\mathbf{n} = -\hat{\mathbf{z}}$. Hence we have
+ \begin{align*}
+ \mathbf{n} \cdot \nabla G|_{z = 0} &= \frac{1}{4\pi} \left[\frac{-(z - z_0)}{|\mathbf{x} - \mathbf{x}_0|^3} - \frac{-(z + z_0)}{|\mathbf{x} - \mathbf{x}_0^R|^3}\right]_{z = 0} \\
+ &= \frac{1}{2\pi} \frac{z_0}{[(x - x_0)^2 + (y - y_0)^2 + z_0^2]^{3/2}}.
+ \end{align*}
+ Therefore our solution is
+ \begin{align*}
+ \phi(\mathbf{x}_0) &= \frac{1}{4\pi} \int_\Omega \left[\frac{1}{|\mathbf{x} - \mathbf{x}_0|} - \frac{1}{|\mathbf{x} - \mathbf{x}_0^R|}\right] F(\mathbf{x})\;\d^3 x \\
+ &\quad + \frac{z_0}{2\pi} \int_{\R^2} \frac{g(x, y)}{[(x - x_0)^2 + (y - y_0)^2 + z_0^2]^{3/2}} \;\d x \;\d y.
+ \end{align*}
+ What have we actually done here? The Green's function $G_3(\mathbf{x}, \mathbf{x}_0)$ in some sense represents a ``charge'' at $\mathbf{x}_0$. We can imagine that the term $G_3(\mathbf{x}, \mathbf{x}_0^R)$ represents the contribution to our solution from a point charge of opposite sign located at $\mathbf{x}_0^R$. Then by symmetry, $G$ is zero at $z = 0$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0) node [right] {$z = 0$};
+ \node [circ] at (0, 1.5) {};
+ \node at (0, 1.5) [above] {$\mathbf{x}_0$};
+ \node [circ] at (0, -1.5) {};
+ \node at (0, -1.5) [below] {$\mathbf{x}_0^R$};
+ \end{tikzpicture}
+ \end{center}
+ Our solution for $\phi(\mathbf{x}_0)$ remains valid if we extend it to all $\mathbf{x}_0 \in \R^3$, where it solves the problem with the two mirror charge distributions. But this is irrelevant for our present purposes. We are only concerned with finding a solution in the region $z \geq 0$; the image charge is just an artificial device we use in order to solve the problem.
+\end{eg}
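As a quick numerical sanity check (not part of the original notes; the specific coordinates below are arbitrary illustrative choices), the following Python sketch verifies two facts used above: the image-charge Green's function vanishes on the plane $z = 0$, and the boundary kernel $\frac{z_0}{2\pi}[(x - x_0)^2 + (y - y_0)^2 + z_0^2]^{-3/2}$ integrates to $1$ over the plane for any $z_0 > 0$, computed here by a radial midpoint-rule quadrature.

```python
import math

def G3(x, y, z, x0, y0, z0):
    # free-space fundamental solution -1/(4 pi |x - x0|)
    r = math.sqrt((x - x0)**2 + (y - y0)**2 + (z - z0)**2)
    return -1.0 / (4 * math.pi * r)

def G(x, y, z, x0, y0, z0):
    # Dirichlet Green's function for z >= 0: subtract the image at (x0, y0, -z0)
    return G3(x, y, z, x0, y0, z0) - G3(x, y, z, x0, y0, -z0)

# G should vanish on the boundary plane z = 0
g_boundary = G(0.3, -1.2, 0.0, 0.5, 0.7, 2.0)

# the boundary kernel should integrate to 1 over the plane; in radial
# coordinates the integral is int_0^inf z0 * r / (r^2 + z0^2)^{3/2} dr
z0, r_max, n = 1.5, 1e4, 100_000
dr = r_max / n
mass = sum(z0 * ((i + 0.5) * dr) / (((i + 0.5) * dr)**2 + z0 * z0)**1.5 * dr
           for i in range(n))
```

By symmetry the two distances agree exactly on $z = 0$, so `g_boundary` is zero to machine precision, and `mass` differs from $1$ only by the quadrature truncation.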
+
+\begin{eg}
+ Suppose a chimney produces smoke such that the density $\phi(\mathbf{x}, t)$ of smoke obeys
+ \[
+ \partial_t \phi - D \nabla^2 \phi = F(\mathbf{x}, t).
+ \]
+ The left side is just the heat equation, modelling the diffusion of smoke, while the forcing term on the right describes the production of smoke by the chimney.
+
+ If this were a problem for $\mathbf{x} \in \R^3$, then the solution is
+ \[
+ \phi(\mathbf{x}, t) = \int_0^t \int_{\R^3} F(\mathbf{y}, \tau)S_3(\mathbf{x} - \mathbf{y}, t - \tau)\;\d^3 \mathbf{y}\;\d \tau,
+ \]
+ where
+ \[
+ S_3(\mathbf{x} - \mathbf{y}, t - \tau) = \frac{1}{[4\pi D (t - \tau)]^{3/2}} \exp\left(-\frac{|\mathbf{x} - \mathbf{y}|^2}{4D (t - \tau)}\right).
+ \]
+ This is true only if the smoke can diffuse in all of $\R^3$. However, this is not true for our current circumstances, since smoke does not diffuse into the ground.
+
+ To account for this, we should find a Green's function that obeys
+ \[
+ \mathbf{n}\cdot \nabla G|_{z = 0} = 0.
+ \]
+ This says that no smoke diffuses into the ground.
+
+ This is achieved by picking
+ \[
+ G(\mathbf{x}, t; \mathbf{y}, \tau) = \Theta(t - \tau) [S_3(\mathbf{x} - \mathbf{y}, t - \tau) + S_3(\mathbf{x} - \mathbf{y}^R, t - \tau)].
+ \]
+ We can directly check that this obeys
+ \[
+  \partial_t G - D \nabla^2 G = \delta (t - \tau) \delta^3(\mathbf{x} - \mathbf{y})
+ \]
+ when $\mathbf{x} \in \Omega$, and also
+ \[
+  \mathbf{n}\cdot \nabla G|_{z = 0} = 0.
+ \]
+ Hence the smoke density is given by
+ \[
+  \phi(\mathbf{x}, t) = \int_0^t \int_\Omega F(\mathbf{y}, \tau) [S_3(\mathbf{x} - \mathbf{y}, t - \tau) + S_3(\mathbf{x} - \mathbf{y}^R, t - \tau)]\;\d^3 y\;\d \tau.
+ \]
+ We can think of the second term as the contribution from a ``mirror chimney''. Without a mirror chimney, we will have smoke flowing into the ground. With a mirror chimney, we will have equal amounts of mirror smoke flowing up from the ground, so there is no net flow. Of course, there are no mirror chimneys in reality. These are just artifacts we use to find the solution we want.
+\end{eg}
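The claim that the image term enforces the no-flux condition can be checked directly. In this hedged Python sketch (the source location, times and $D = 1$ are arbitrary choices), we build the free-space heat kernel $S_3$ and confirm by a central finite difference that $\partial_z [S_3(\mathbf{x} - \mathbf{y}) + S_3(\mathbf{x} - \mathbf{y}^R)]$ vanishes on the ground $z = 0$.

```python
import math

def S3(dx, dy, dz, t, D=1.0):
    # free-space heat kernel in three dimensions
    r2 = dx * dx + dy * dy + dz * dz
    return math.exp(-r2 / (4 * D * t)) / (4 * math.pi * D * t)**1.5

def G(x, y, z, t, src=(0.3, -0.2, 0.7), tau=0.1):
    # Neumann Green's function: source plus its image below the ground
    sx, sy, sz = src
    return (S3(x - sx, y - sy, z - sz, t - tau)
            + S3(x - sx, y - sy, z + sz, t - tau))

# normal derivative at the ground z = 0, by central differences
h = 1e-6
dGdz = (G(0.1, 0.4, h, 0.6) - G(0.1, 0.4, -h, 0.6)) / (2 * h)
```

Since $G$ is even in $z$ about the plane, the two samples agree exactly and the computed normal derivative is zero up to rounding.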
+
+\begin{eg}
+ Suppose we want to solve the wave equation in the region $(x, t)$ such that $x > 0$ with boundary conditions
+ \[
+ \phi(x, 0) = b(x),\quad \partial_t(x, 0) = 0,\quad \partial_x \phi(0, t) = 0.
+ \]
+ On $\R^{1, 1}$ d'Alembert's solution gives
+ \[
+  \phi(x, t) = \frac{1}{2}[b(x - ct) + b(x + ct)].
+ \]
+ This is not what we want, since eventually we will have a wave moving past the $x = 0$ line.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mred, semithick] (-3, 0) -- (3, 0);
+ \draw [dashed] (0, -0.5) -- (0, 2) node [right] {$x = 0$};
+ \draw [domain=-1:1,samples=50, mblue] plot ({\x - 1.5}, {1.5 * exp(-7 * \x * \x)});
+
+ \draw [->] (-1.1, 0.8) -- +(0.5, 0);
+ \end{tikzpicture}
+ \end{center}
+ To compensate for this, we introduce a mirror wave moving in the opposite direction, such that as they pass through each other at $x = 0$, there is no net flow across the boundary.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mred, semithick] (-3, 0) -- (3, 0);
+ \draw [dashed] (0, -0.5) -- (0, 2) node [right] {$x = 0$};
+
+ \draw [domain=-1:1,samples=50, mblue] plot ({\x - 1.5}, {1.5 * exp(-7 * \x * \x)});
+ \draw [->] (-1.1, 0.8) -- +(0.5, 0);
+
+ \begin{scope}[xscale=-1]
+ \draw [dashed, domain=-1:1,samples=50, mblue] plot ({\x - 1.5}, {1.5 * exp(-7 * \x * \x)});
+ \draw [dashed, ->] (-1.1, 0.8) -- +(0.5, 0);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ More precisely, we include a \emph{mirror initial condition} $\phi(x, 0) = b(x) + b(-x)$, where we set $b(x) = 0$ when $x < 0$. In the region $x > 0$ we are interested in, only the $b(x)$ term will contribute. In the $x < 0$ region, only the $b(-x)$ term will contribute. Then the solution is
+ \[
+  \phi(x, t) = \frac{1}{2}[b(x - ct) + b(x + ct) + b(-x - ct) + b(-x + ct)].
+ \]
+\end{eg}
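As a check on the image construction (a hedged sketch, not from the notes; the pulse $b$ and speed $c$ are arbitrary choices, with $b$ negligible for $x < 0$), the following Python snippet verifies that the mirrored solution satisfies the Neumann condition $\partial_x \phi(0, t) = 0$ at all sampled times.

```python
import math

c = 1.0

def b(x):
    # initial pulse centred well inside x > 0, so b is negligible for x < 0
    return math.exp(-8 * (x - 2.0)**2)

def phi(x, t):
    # d'Alembert solution for the mirrored initial data b(x) + b(-x)
    return 0.5 * (b(x - c*t) + b(x + c*t) + b(-x - c*t) + b(-x + c*t))

# phi is even in x, so its x-derivative should vanish at x = 0 for all t
h = 1e-6
slopes = [(phi(h, t) - phi(-h, t)) / (2 * h) for t in (0.0, 0.5, 2.0, 3.5)]
```

The solution is even in $x$ by construction, so the central differences are zero up to rounding, while at $t = 0$ it reproduces the original pulse in the physical region.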
+\end{document}
diff --git a/books/cam/IB_M/quantum_mechanics.tex b/books/cam/IB_M/quantum_mechanics.tex
new file mode 100644
index 0000000000000000000000000000000000000000..77ad32b39014de9511148dcefe48eddcfd118fde
--- /dev/null
+++ b/books/cam/IB_M/quantum_mechanics.tex
@@ -0,0 +1,2600 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IB}
+\def\nterm {Michaelmas}
+\def\nyear {2015}
+\def\nlecturer {J.\ M.\ Evans}
+\def\ncourse {Quantum Mechanics}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\textbf{Physical background}\\
+Photoelectric effect. Electrons in atoms and line spectra. Particle diffraction.\hspace*{\fill} [1]
+
+\vspace{10pt}
+\noindent\textbf{Schr\"odinger equation and solutions}\\
+De Broglie waves. Schr\"odinger equation. Superposition principle. Probability interpretation, density and current.\hspace*{\fill} [2]
+
+\vspace{5pt}
+\noindent Stationary states. Free particle, Gaussian wave packet. Motion in 1-dimensional potentials, parity. Potential step, square well and barrier. Harmonic oscillator.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{Observables and expectation values}\\
+Position and momentum operators and expectation values. Canonical commutation relations. Uncertainty principle.\hspace*{\fill} [2]
+
+\vspace{5pt}
+\noindent Observables and Hermitian operators. Eigenvalues and eigenfunctions. Formula for expectation value.\hspace*{\fill} [2]
+
+\vspace{10pt}
+\noindent\textbf{Hydrogen atom}\\
+Spherically symmetric wave functions for spherical well and hydrogen atom.
+
+\vspace{5pt}
+\noindent Orbital angular momentum operators. General solution of hydrogen atom.\hspace*{\fill} [5]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+Quantum mechanics (QM) is a radical generalization of classical physics. Profound new features of quantum mechanics include
+\begin{enumerate}
+ \item \emph{Quantisation} --- Quantities such as energy are often restricted to a discrete set of values, or appear in definite amounts, called \emph{quanta}.
+ \item \emph{Wave-particle duality} --- Classical concepts of particles and waves become merged in quantum mechanics. They are different aspects of a single entity. So an electron will no longer be thought of as a ``particle'', but as an entity that has properties of both particles and waves.
+ \item \emph{Probability and uncertainty} --- Predictions in quantum mechanics involve probability in a fundamental way. This probability does not arise from our lack of knowledge of the system, but is a genuine uncertainty in reality. In particular, there are limits to what we can ask about a physical system, even in principle. For example, the Heisenberg uncertainty principle entails that we cannot accurately know both the position \emph{and} momentum of a particle.
+\end{enumerate}
+Quantum mechanics also involves a new fundamental constant $h$ or $\hbar = \frac{h}{2\pi}$. The dimension of this is
+\[
+ [h] = ML^2T^{-1} = [\text{energy}]\times [\text{time}] = [\text{position}] \times [\text{momentum}].
+\]
+We can think of this constant as representing the ``strength'' of quantum effects. Despite having these new profound features, we expect to recover classical physics when we take the limit $\hbar \to 0$.
+
+Historically, there are a few experiments that led to the development of quantum mechanics.
+\subsection{Light quanta}
+In quantum mechanics, light (or electromagnetic waves) consists of quanta called photons. We can think of them as waves that come in discrete ``packets'' that behave like particles.
+
+In particular, photons behave like particles with energy $E = h \nu = \hbar \omega$, where $\nu$ is the frequency and $\omega = 2\pi \nu$ is the angular frequency. However, we usually don't care about $\nu$ and just call $\omega$ the frequency.
+
+Similarly, the momentum is given by $p = h/\lambda = \hbar k$, where $\lambda$ is the wavelength and $k = 2\pi/\lambda$ is the wave number.
+
+For electromagnetic waves, the speed is $c = \omega/k = \nu\lambda$. This is consistent with the fact that photons are massless particles, since we have
+\[
+ E = cp,
+\]
+as entailed by special relativity.
+
+Historically, the concept of quanta was introduced by Planck. At that time, the classical laws of physics could not accurately describe the spectrum of black-body radiation. In particular, they predicted that a black body would emit an infinite amount of energy through radiation, which is clearly nonsense. Using the concept of quanta and the energy-frequency relation, Planck was able to derive a spectrum of ``black-body'' radiation consistent with experimental results. However, it was not yet clear that light physically comes in quanta; it could have just been a mathematical trick to derive the desired result.
+
+The physical reality of photons was clarified by Einstein in explaining the \emph{photo-electric effect}.
+
+When we shine some light (or electromagnetic radiation $\gamma$) of frequency $\omega$ onto certain metals, this can cause an emission of electrons ($e$). We can measure the maximum kinetic energy $K$ of these electrons.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0);
+ \draw [decorate, decoration={snake}] (-2, 2) -- (0, 0) node [pos = 0.5, anchor=south west] {$\gamma$};
+ \draw [->] (0, 0) -- (2, 2) node [right] {$e$};
+ \end{tikzpicture}
+\end{center}
+Experiments show that
+\begin{enumerate}
+ \item The number of electrons emitted depends on the intensity (brightness) of the light, but not the frequency.
+ \item The kinetic energy $K$ depends only (linearly) on the frequency but not the intensity.
+ \item For $\omega < \omega_0$ (for some critical value $\omega_0$), \emph{no} electrons are emitted at all.
+\end{enumerate}
+This is hard to understand classically, but is exactly as expected if each electron emitted is due to the impact with a single photon. If $W$ is the energy required to liberate an electron, then we would expect $K = \hbar \omega - W$ by the conservation of energy. We will have no emission if $\omega < \omega_0 = W/\hbar$.
+
+\subsection{Bohr model of the atom}
+When we heat atoms up to make them emit light; or shine light at atoms so that they absorb light, we will find that light is emitted and absorbed at very specific frequencies, known as the emission and absorption spectra. This suggests that the inner structure of atoms is discrete.
+
+However, this is not the case in the classical model. In the classical model, the simplest atom, hydrogen, consists of an electron with charge $-e$ and mass $m$, orbiting a proton of charge $+e$ and mass $m_p \gg m$ fixed at the origin.
+
+The potential energy is
+\[
+ V(r) = -\frac{e^2}{4\pi\varepsilon_0}\frac{1}{r},
+\]
+and the dynamics of the electron is governed by Newton's laws of motion, just as we derived the orbits of planets under the gravitational potential in IA Dynamics and Relativity. This model implies that the angular momentum $\mathbf{L}$ is constant, and so is the energy $E = \frac{1}{2}mv^2 + V(r)$.
+
+This is not a very satisfactory model for the atom. First of all, it cannot explain the discrete emission and absorption spectra. More importantly, while this model seems like a mini solar system, electromagnetism behaves differently from gravitation. To maintain a circular orbit, an acceleration has to be applied onto the electron. Indeed, the force is given by
+\[
+ F = \frac{mv^2}{r} = \frac{e^2}{4\pi \varepsilon_0}\frac{1}{r^2}.
+\]
+Accelerating particles emit radiation and lose energy. So according to classical electrodynamics, the electron will just decay into the proton and atoms will implode.
+
+The solution to this problem is to simply declare that this cannot happen. Bohr proposed the \emph{Bohr quantization conditions}, which restrict the classical orbits by requiring that the angular momentum can only take values
+\[
+ L = mrv = n\hbar
+\]
+for $n = 1, 2, \cdots$. Using these, together with the force equation, we can solve $r$ and $v$ completely for each $n$ and obtain
+\begin{align*}
+ r_n &= \frac{4\pi \varepsilon_0}{me^2}\hbar^2 n^2\\
+ v_n &= \frac{e^2}{4\pi \varepsilon_0}\frac{1}{\hbar n}\\
+ E_n &= -\frac{1}{2}m\left(\frac{e^2}{4\pi \varepsilon_0 \hbar}\right)^2 \frac{1}{n^2}.
+\end{align*}
+Now we assume that the electron can make transitions between different energy levels $n$ and $m > n$, accompanied by emission or absorption of a photon of frequency $\omega$ given by
+\[
+ E = \hbar \omega = E_n - E_m = \frac{1}{2}m \left(\frac{e^2}{4\pi \varepsilon_0 \hbar}\right)^2\left(\frac{1}{n^2} - \frac{1}{m^2}\right).
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-1, 0) node [left] {$0$} -- (1, 0);
+ \draw (-1, -0.5) node [left] {$E_m$} -- (1, -0.5);
+ \draw (-1, -1.5) node [left] {$E_n$} -- (1, -1.5);
+ \draw [->] (0, -0.5) -- (0, -1.5);
+ \draw [decorate, decoration={snake}] (0, -1) -- (2, -1) node [right] {$\gamma$};
+ \end{tikzpicture}
+\end{center}
+This model explains a \emph{vast} amount of experimental data. This also gives an estimate of the size of a hydrogen atom:
+\[
+ r_1 = \left(\frac{4\pi \varepsilon_0}{me^2}\right) \hbar^2 \approx \SI{5.29e-11}{\meter},
+\]
+known as the \emph{Bohr radius}.
+
+While the model fits experiments very well, it does not provide a good explanation for why the radius/angular momentum should be quantized. It simply asserts this fact and then magically produces the desired results. Thus, we would like a better understanding of why angular momentum should be quantized.
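To see that the formulas above reproduce the quoted Bohr radius, here is a short numerical check (not part of the original notes; the constants are standard CODATA values, rounded). It also evaluates the ground-state energy $E_1$, which comes out to the familiar $-13.6$ electronvolts.

```python
import math

# SI constants (CODATA values, rounded)
hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F / m

# Bohr radius r_1 and ground-state energy E_1 from the formulas above (n = 1)
r1 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
E1 = -0.5 * m_e * (e**2 / (4 * math.pi * eps0 * hbar))**2   # in joules
E1_eV = E1 / e
```

The radius evaluates to about $\SI{5.29e-11}{\meter}$, matching the value quoted above, and $E_1 \approx \SI{-13.6}{\electronvolt}$ is the hydrogen ionisation energy.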
+
+\subsection{Matter waves}
+The relations
+\begin{align*}
+ E &= h\nu = \hbar \omega\\
+ p &= \frac{h}{\lambda} = \hbar k
+\end{align*}
+are used to associate particle properties (energy and momentum) to waves. They can also be used the other way round --- to associate wave properties (frequency and wave number) to particles. Moreover, these apply to non-relativistic particles such as electrons (as well as relativistic photons). This $\lambda$ is known as the \emph{de Broglie wavelength}.
+
+Of course, nothing prevents us from assigning arbitrary numbers to our particles. So an immediate question to ask is --- is there any physical significance to the ``frequency'' and ``wavenumber'' of particles? Or maybe particles in fact \emph{are} waves?
+
+Recall that the quantization of the Bohr model requires that
+\[
+ L = rp = n\hbar.
+\]
+Using the relations above, this is equivalent to requiring that
+\[
+ n\lambda = 2\pi r.
+\]
+This is exactly the statement that the circumference of the orbit is an integer multiple of the wavelength. This is the condition we need for a standing wave to form on the circumference. This looks promising as an explanation for the quantization relation.
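The equivalence of the two conditions is a two-line algebraic fact: $L = rp = n\hbar$ together with $\lambda = h/p$ gives $n\lambda = nh/p = 2\pi r$. The following small Python check confirms this for an arbitrary choice of $n$ and $r$ (illustrative values only).

```python
import math

h = 6.62607015e-34        # Planck constant, J s
hbar = h / (2 * math.pi)

n, r = 3, 2.0e-10         # arbitrary quantum number and orbit radius
p = n * hbar / r          # Bohr condition L = r p = n hbar
lam = h / p               # de Broglie wavelength

circumference = 2 * math.pi * r
```

Exactly $n$ wavelengths fit around the orbit, independent of the chosen $r$.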
+
+But in reality, do electrons actually behave like waves? If electrons really are waves, then they should exhibit the usual behaviour of waves, such as diffraction and interference.
+
+We can repeat our favorite double-slit experiment on electrons. We have a sinusoidal wave incident on some barrier with narrow openings as shown:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (4, -2) -- (4, 2);
+
+ \foreach \x in {-1, -2, -3} {
+ \draw (\x, -2) -- (\x, 2);
+ }
+
+ \draw [<->] (-2, -1.7) -- (-3, -1.7) node [below, pos=0.5] {$\lambda$};
+ \draw [->] (-3, 2.3) -- (-1.5, 2.3) node [right] {wave};
+
+ \draw (0, 0.55) -- (4, 1.5);
+ \draw (0, -0.55) -- (4, 1.5);
+ \draw [domain=-2:2,samples=50, mblue] plot [smooth] ({4 + 1.5 * exp(-\x * \x) * (cos (200 * \x))^2}, \x);
+ \draw [->, align=left] (4, 0) -- (6, 0) node [right] {density of\\ electrons};
+
+ \draw [dashed] (0, 0.55) -- (0.34, -0.375);
+ \node at (0.20, -0.45) [below] {$\delta$};
+
+ \draw [mred, fill=white] (-0.05, -0.5) rectangle (0.05, 0.5);
+ \draw [mred, fill=white] (-0.05, 0.6) rectangle (0.05, 2);
+ \draw [mred, fill=white] (-0.05, -0.6) rectangle (0.05, -2);
+ \end{tikzpicture}
+\end{center}
+At different points, depending on the difference $\delta$ in path length, we may have constructive interference (large amplitude) or destructive interference (no amplitude). In particular, constructive interference occurs if $\delta = n\lambda$, and destructive if $\delta = (n + \frac{1}{2})\lambda$.
+
+Not only does this experiment allow us to verify if something is a wave. We can also figure out its wavelength $\lambda$ by experiment.
+
+Practically, the actual experiment for electrons is slightly more complicated. Since the wavelength of an electron is rather small, to obtain the diffraction pattern, we cannot just poke holes in sheets. Instead, we need to use crystals as our diffraction grating. Nevertheless, this shows that electrons do diffract, and the wavelength \emph{is} the de Broglie wavelength.
+
+This also has a conceptual importance. For regular waves, diffraction is something we can make sense of. However, here we are talking about electrons. We know that if we fire many many electrons, the distribution will follow the pattern described above. But what if we just fire a single electron? On \emph{average}, it should still follow the distribution. However, for this individual electron, we cannot know where it will actually land. We can only provide a probability distribution of where it will end up. In quantum mechanics, everything is inherently probabilistic.
+
+As we have seen, quantum mechanics is vastly different from classical mechanics. This is unlike special relativity, where we are just making adjustments to Newtonian mechanics. In fact, in IA Dynamics and Relativity, we just ``derived'' special relativity by assuming the principle of relativity and that the speed of light is independent of the observer. This is not something we can do for quantum mechanics --- what we are going to do is just come up with some theory and then show (or claim) that it agrees with experiment.
+
+\section{Wavefunctions and the Schr\texorpdfstring{\"o}{o}dinger equation}
+The time evolution of particles in quantum mechanics is governed by the \emph{Schr\"odinger equation}, which we will come to shortly. In general, this is a difficult equation to solve, and there aren't many interesting cases where we are able to provide a full solution.
+
+Hence, to begin with, we will concentrate on quantum mechanics in one (spatial) dimension only, as the maths is much simpler and diagrams are easier to draw, and we can focus more on the physical content.
+
+\subsection{Particle state and probability}
+Classically, a point particle in 1 dimension has a definitive position $x$ (and momentum $p$) at each time. To completely specify a particle, it suffices to write down these two numbers. In quantum mechanics, this is much more complicated. Instead, a particle has a \emph{state} at each time, specified by a complex-valued \emph{wavefunction} $\psi(x)$.
+
+The physical content of the wavefunction is as follows: if $\psi$ is appropriately normalized, then when we measure the position of a particle, we get a result $x$ with probability density function $|\psi(x)|^2$, i.e.\ the probability that the position is found in $[x, x + \delta x]$ (for small $\delta x$) is given by $|\psi(x)|^2 \delta x$. Alternatively, the probability of finding it in an interval $[a, b]$ is given by
+\[
+ \P(\text{particle position in }[a, b]) = \int_a^b |\psi(x)|^2 \;\d x.
+\]
+What do we mean by ``appropriately normalized''? From our equation above, we see that we require
+\[
+ \int_{-\infty}^\infty |\psi(x)|^2\;\d x = 1,
+\]
+since this is the total probability of finding the particle anywhere at all. This is the required normalization condition.
+
+\begin{eg}[Gaussian wavefunction]
+ We define
+ \[
+ \psi(x) = C e^{-\frac{(x - c)^2}{2\alpha}},
+ \]
+ where $c$ is real and $C$ could be complex.
+ \begin{center}
+ \begin{tikzpicture}[yscale=1.5]
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, 1.3) -- (0, 0) node [below] {$c$};
+ \draw [domain=-3:3,samples=50, mblue] plot (\x, {exp(-\x * \x)});
+ \end{tikzpicture}
+ \end{center}
+ We have
+ \[
+ \int_{-\infty}^\infty |\psi(x)|^2 \;\d x = |C|^2 \int_{-\infty}^{\infty}e^{-\frac{(x - c)^2}{\alpha}} \;\d x = |C|^2 (\alpha \pi)^{\frac{1}{2}} = 1.
+ \]
+ So for normalization, we need to pick $C = (1/\alpha \pi)^{1/4}$ (up to a multiple of $e^{i\theta}$).
+
+ If $\alpha$ is small, then we have a sharp peak around $x = c$. If $\alpha$ is large, it is more spread out.
+\end{eg}
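The normalization constant can be checked numerically. This is a hedged Python sketch (the values of $\alpha$ and $c$ are arbitrary): with $C = (\alpha\pi)^{-1/4}$, a midpoint-rule integral of $|\psi|^2$ over a wide interval returns $1$.

```python
import math

alpha, c = 0.7, 1.3
C = (alpha * math.pi) ** -0.25   # claimed normalization constant

def psi_sq(x):
    # |psi(x)|^2 for the Gaussian wavefunction
    return C * C * math.exp(-(x - c)**2 / alpha)

# midpoint-rule integral over a wide interval around the peak
lo, hi, n = c - 20.0, c + 20.0, 100_000
dx = (hi - lo) / n
total = sum(psi_sq(lo + (i + 0.5) * dx) * dx for i in range(n))
```

The Gaussian decays so fast that truncating at $\pm 20$ of the centre costs nothing visible at this precision.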
+While it is nice to have a normalized wavefunction, it is often inconvenient to deal exclusively with normalized wavefunctions, or else we will have a lot of ugly-looking constants floating all around. As a matter of fact, we can always restore normalization at the end of the calculation. So we often don't bother.
+
+If we do not care about normalization, then for any (non-zero) $\lambda$, $\psi(x)$ and $\lambda \psi(x)$ represent the same quantum state (since they give the same probabilities). In practice, we usually refer to either of these as ``the state''. If we like fancy words, we can thus think of the states as equivalence classes of wavefunctions under the equivalence relation $\psi \sim \phi$ if $\phi = \lambda \psi$ for some non-zero $\lambda$.
+
+What we do require, then, is not that the wavefunction is normalized, but \emph{normalizable}, i.e.
+\[
+ \int_{-\infty}^\infty |\psi(x)|^2 \;\d x < \infty.
+\]
+We will very soon encounter wavefunctions that are \emph{not} normalizable. Mathematically, these are useful things to have, but we have to be more careful when interpreting these things physically.
+
+A characteristic property of quantum mechanics is that if $\psi_1(x)$ and $\psi_2(x)$ are wavefunctions for a particle, then $\psi_1(x) + \psi_2 (x)$ is also a possible particle state (ignoring normalization), provided the result is non-zero. This is the principle of superposition, and arises from the fact that the equations of quantum mechanics are linear.
+
+\begin{eg}[Superposition of Gaussian wavefunctions]
+ Take
+ \[
+ \psi(x) = B\left(\exp\left(\frac{-(x - c)^2}{2\alpha}\right) + \exp\left(-\frac{x^2}{2\beta}\right)\right).
+ \]
+ Then the resultant distribution would be something like
+ \begin{center}
+ \begin{tikzpicture}[xscale=0.75, yscale=1.5]
+ \draw (-3, 0) -- (9, 0);
+ \draw [domain=-3:9,samples=80, mblue] plot (\x, {exp(-\x * \x) + exp(-(\x - 6)^2/4)});
+ \end{tikzpicture}
+ \end{center}
+ We choose $B$ so that $\psi$ is a normalized wavefunction for a single particle. Note that this is \emph{not} two particles at two different positions. It is \emph{one} particle that is ``spread out'' at two different positions.
+\end{eg}
+In some cases, the configuration space of the particle may be restricted. For example, we might require $ -\frac{\ell}{2} \leq x \leq \frac{\ell}{2}$ with some boundary conditions at the edges. Then the normalization condition would involve integrating not over $(-\infty, \infty)$, but over $[-\frac{\ell}{2}, \frac{\ell}{2}]$.
+
+\subsection{Operators}
+We know that the square of the wavefunction gives the probability distribution of the \emph{position} of the particle. How about other information such as the momentum and energy? It turns out that all the information about the particle is contained in the wavefunction (which is why we call it the ``state'' of the particle).
+
+We call each property of the particle which we can measure an \emph{observable}. Each observable is represented by an \emph{operator} acting on $\psi(x)$. For example, the position is represented by the operator $\hat{x} = x$. This means that $(\hat{x} \psi)(x) = x\psi(x)$. We can list a few other operators:
+\begin{center}
+ \begin{tabular}{rll}
+ position & $\hat{x} = x$ & $\hat{x} \psi = x\psi(x)$\\
+ momentum & $\hat{p} = -i\hbar \frac{\partial}{\partial x}$ & $\hat{p}\psi = -i\hbar \psi'(x)$\\
+ energy & $H = \frac{\hat{p}^2}{2m} + V(\hat{x})$ & $H\psi = -\frac{\hbar^2}{2m} \psi''(x) + V(x)\psi(x)$
+ \end{tabular}
+\end{center}
+The final $H$ is called the Hamiltonian, where $m$ is the mass and $V$ is the potential. We see that the Hamiltonian is just the sum of the kinetic energy $\frac{p^2}{2m}$ and the potential energy $V$. There will be more insight into why the operators are defined like this in IIC Classical Dynamics and IID Principles of Quantum Mechanics.
+
+Note that we put hats on $\hat{x}$ and $\hat{p}$ to make it explicit that these are operators, as opposed to the classical quantities position and momentum. Otherwise, the definition $\hat{x} = x$ would look silly.
+
+How do these operators relate to the actual physical properties? In general, when we measure an observable, the result is not certain. Results are randomly distributed according to some probability distribution, which we will describe in full detail later.
+
+However, a \emph{definite} result is obtained if and only if $\psi$ is an eigenstate, or eigenfunction, of the operator. In this case, the result of the measurement is the associated eigenvalue. For example, we have
+\[
+ \hat{p} \psi = p\psi
+\]
+if and only if $\psi$ is a state with definite momentum $p$. Similarly,
+\[
+ H\psi = E\psi
+\]
+if and only if $\psi$ has definite energy $E$.
+
+Here we are starting to see why quantization occurs in quantum mechanics. Since the only possible values of $E$ and $p$ are the eigenvalues, if the operators have a discrete set of eigenvalues, then we can only have discrete values of $p$ and $E$.
+
+\begin{eg}
+ Let
+ \[
+ \psi(x) = Ce^{ikx}.
+ \]
+ This has a wavelength of $\lambda = 2\pi/k$. This is a momentum eigenstate, since we have
+ \[
+  \hat{p}\psi = -i\hbar \psi' = (\hbar k)\psi.
+ \]
+ So we know that the momentum eigenvalue is $p = \hbar k$. This looks encouraging!
+
+ Note that if there is no potential, i.e.\ $V = 0$, then
+ \[
+ H\psi = \frac{\hat{p}^2}{2m}\psi = -\frac{\hbar^2}{2m}\psi'' = \frac{\hbar^2 k^2}{2m}\psi.
+ \]
+ So the energy eigenvalue is
+ \[
+ E = \frac{\hbar^2 k^2}{2m}.
+ \]
+\end{eg}
+Note, however, that our wavefunction has $|\psi(x)|^2 = |C|^2$, which is a constant. So this wavefunction is not normalizable on the whole line. However, if we restrict ourselves to some finite domain $-\frac{\ell}{2} \leq x \leq \frac{\ell}{2}$, then we can normalize by picking $C= \frac{1}{\sqrt{\ell}}$.
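We can confirm the eigenvalue numerically. In this hedged sketch (units with $\hbar = 1$ and an arbitrary $k$), a central finite difference approximates $\psi'$, and the ratio $-i\hbar\psi'/\psi$ comes out equal to $\hbar k$, independent of the sample point.

```python
import cmath

hbar, k = 1.0, 2.5

def psi(x):
    # momentum eigenstate e^{ikx} (unnormalized)
    return cmath.exp(1j * k * x)

# apply p = -i hbar d/dx via a central finite difference
h = 1e-6
x0 = 0.37
dpsi = (psi(x0 + h) - psi(x0 - h)) / (2 * h)
p_eig = (-1j * hbar * dpsi) / psi(x0)   # should equal hbar * k
```

The truncation error of the central difference is $O(k^3 h^2)$, far below the tolerance used here.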
+
+\begin{eg}
+ Consider the Gaussian distribution
+ \[
+ \psi(x) = C\exp\left(-\frac{x^2}{2\alpha}\right).
+ \]
+ We get
+ \[
+ \hat{p}\psi(x) = -i\hbar \psi'(x) \not= p\psi(x)
+ \]
+ for any number $p$. So this is not an eigenfunction of the momentum.
+
+ However, if we consider the harmonic oscillator with potential
+ \[
+ V(x) = \frac{1}{2}Kx^2,
+ \]
+ then this $\psi(x)$ is an eigenfunction of the Hamiltonian operator, provided we picked the right $\alpha$. We have
+ \[
+ H\psi = -\frac{\hbar^2}{2m}\psi'' + \frac{1}{2}Kx^2 \psi = E\psi
+ \]
+ when $\alpha^2 = \frac{\hbar^2}{Km}$. Then the energy is $E = \frac{\hbar}{2}\sqrt{\frac{K}{m}}$. This is to be verified on the example sheet.
+\end{eg}
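Since the verification is deferred to the example sheet, here is an independent numerical check (a hedged sketch in units $\hbar = m = 1$ with an arbitrary $K$): with $\alpha = \hbar/\sqrt{Km}$, the ratio $H\psi/\psi$ computed by finite differences is constant across sample points and equals $\frac{\hbar}{2}\sqrt{K/m}$.

```python
import math

hbar, m, K = 1.0, 1.0, 2.0
alpha = hbar / math.sqrt(K * m)            # from alpha^2 = hbar^2 / (K m)
E_expected = 0.5 * hbar * math.sqrt(K / m)

def psi(x):
    # Gaussian trial eigenfunction exp(-x^2 / (2 alpha))
    return math.exp(-x * x / (2 * alpha))

def H_psi(x, h=1e-5):
    # H = -(hbar^2 / 2m) d^2/dx^2 + (1/2) K x^2, derivative by differences
    d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / (h * h)
    return -hbar**2 / (2 * m) * d2 + 0.5 * K * x * x * psi(x)

# H psi / psi should be the same constant E at every point
ratios = [H_psi(x) / psi(x) for x in (-1.0, 0.0, 0.8, 1.5)]
```

That the ratio is position-independent is precisely the statement that $\psi$ is an eigenfunction; with the wrong $\alpha$ the $x^2$ terms would fail to cancel and the ratio would vary with $x$.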
+Despite being a simple system, the harmonic oscillator is \emph{incredibly} useful in theoretical physics. We will hence solve this completely later.
+
+\begin{defi}[Time-independent Schr\"odinger equation]
+ The \emph{time-independent Schr\"odinger equation} is the energy eigenvalue equation
+ \[
+ H\psi = E\psi,
+ \]
+ or
+ \[
+ -\frac{\hbar^2}{2m}\psi'' + V(x) \psi = E\psi.
+ \]
+\end{defi}
+This is in general what determines how the system behaves. In particular, the eigenvalues $E$ are precisely the allowed energy values.
+
+\subsection{Time evolution of wavefunctions}
+So far, everything is instantaneous. The wavefunction specifies the state \emph{at a particular time}, and the eigenvalues are the properties of the system \emph{at that particular time}. However, this is quantum \emph{mechanics}, or quantum \emph{dynamics}. We should be looking at how things \emph{change}. We want to know how the wavefunction changes with time. This is what we will get to now.
+
+\subsubsection*{Time-dependent Schr\"odinger equation}
+We will write $\Psi$ instead of $\psi$ to indicate that we are looking at the time-dependent wavefunction. The evolution of this $\Psi(x, t)$ is described by the time-dependent Schr\"odinger equation.
+\begin{defi}[Time-dependent Schr\"odinger equation]
+ For a time-dependent wavefunction $\Psi(x, t)$, the \emph{time-dependent Schr\"odinger equation} is
+ \[
+ i\hbar \frac{\partial \Psi}{\partial t} = H\Psi.\tag{$*$}
+ \]
+\end{defi}
+For a particle in a potential $V(x)$, this reads
+\[
+ i\hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} + V(x) \Psi.
+\]
+While this looks rather scary, it isn't really that bad. First of all, it is linear. So the sums and multiples of solutions are also solutions. It is also first-order in time. So if we know the wavefunction $\Psi(x, t_0)$ at a particular time $t_0$, then this determines the whole function $\Psi(x, t)$.
+
+This is similar to classical dynamics, where knowing the potential $V$ (and hence the Hamiltonian $H$) completely specifies how the system evolves with time. However, this is in some ways different from classical dynamics. Newton's second law is second-order in time, while this is first-order in time. This is significant: because our equation is first-order in time, the current wavefunction alone determines its entire future evolution.
+
+Yet, this difference is just an illusion. The wavefunction is the \emph{state} of the particle, and not just the ``position''. Instead, we can think of it as capturing the position \emph{and} momentum. Indeed, if we write the equations of classical dynamics in terms of position and momentum, they are first-order in time.
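+Concretely, for a classical particle with Hamiltonian $H = \frac{p^2}{2m} + V(x)$, Hamilton's equations are
+\[
+ \dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m},\quad \dot{p} = -\frac{\partial H}{\partial x} = -V'(x),
+\]
+which are first-order in time: the pair $(x, p)$ at one instant determines the whole trajectory.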
+
+\subsubsection*{Stationary states}
+It is not a coincidence that the time-independent Schr\"odinger equation and the time-dependent Schr\"odinger equation are named so similarly (and it is also not an attempt to confuse students).
+
+We perform separation of variables, and consider a special class of solutions $\Psi(x, t) = T(t) \psi(x)$, where $\Psi(x, 0) = \psi(x)$ (i.e.\ $T(0) = 1$). If $\psi$ satisfies the time-independent Schr\"odinger equation
+\[
+ H\psi = E\psi,
+\]
+then since $H$ does not involve time derivatives, we know $\Psi$ is an energy eigenstate at each fixed $t$, i.e.
+\[
+ H\Psi = E\Psi.
+\]
+So if we want this $\Psi$ to satisfy the Schr\"odinger equation, we must have
+\[
+ i\hbar \dot{T} = ET.
+\]
+The solution is obvious:
+\[
+ T(t) = \exp\left(-\frac{iEt}{\hbar}\right).
+\]
+We can write our full solution as
+\[
+ \Psi(x, t) = \psi(x) \exp\left(-\frac{iEt}{\hbar}\right).
+\]
+Note that the frequency is $\omega = \frac{E}{\hbar}$. So we recover the energy-frequency relation we met previously.
+
+\begin{defi}[Stationary state]
+ A \emph{stationary state} is a state of the form
+ \[
+ \Psi(x, t) = \psi(x) \exp\left(-\frac{iEt}{\hbar}\right),
+ \]
+ where $\psi(x)$ is an eigenfunction of the Hamiltonian with eigenvalue $E$. This term is also sometimes applied to $\psi$ instead.
+\end{defi}
+While the stationary states seem to be a rather peculiar class of solutions that would rarely correspond to an actual physical state in reality, they are in fact very important in quantum mechanics. The reason is that the stationary states form a \emph{basis} of the state space. In other words, every possible state can be written as a (possibly infinite) linear combination of stationary states. Hence, by understanding the stationary states, we can understand a lot about a quantum system.
+
+\subsubsection*{Conservation of probability}
+Note that for a stationary state, we have
+\[
+ |\Psi(x, t)|^2 = |\psi(x)|^2,
+\]
+which is independent of time. While this exact time-independence is special to stationary states, probability is still conserved locally for general states.
+
+Consider a general $\Psi(x, t)$ obeying the time-dependent Schr\"odinger equation.
+
+\begin{prop}
+ The probability density
+ \[
+ P(x, t) = |\Psi(x, t)|^2
+ \]
+ obeys a conservation equation
+ \[
+ \frac{\partial P}{\partial t} = - \frac{\partial j}{\partial x},
+ \]
+ where
+ \[
+ j(x, t) = -\frac{i\hbar}{2m} \left(\Psi^*\frac{\d \Psi}{\d x} - \frac{\d \Psi^*}{\d x} \Psi\right)
+ \]
+ is the \emph{probability current}.
+\end{prop}
+Since $\Psi^* \Psi'$ is the complex conjugate of $\Psi'^* \Psi$, we know that $\Psi^*\Psi' - \Psi'^* \Psi$ is imaginary. So multiplying by $i$ ensures that $j(x, t)$ is real, which is a good thing since $P$ is also real.
+
+\begin{proof}
+ This is straightforward from the Schr\"odinger equation and its complex conjugate. We have
+ \begin{align*}
+ \frac{\partial P}{\partial t} &= \Psi^* \frac{\partial \Psi}{\partial t} + \frac{\partial \Psi^*}{\partial t} \Psi\\
+ &= \Psi^* \frac{i\hbar }{2m}\Psi'' - \frac{i\hbar}{2m}\Psi''^* \Psi\\
+ \intertext{where the two $V$ terms cancel each other out, assuming $V$ is real}
+ &= -\frac{\partial j}{\partial x}.\qedhere
+ \end{align*}
+\end{proof}
+The important thing here is not the specific form of $j$, but that $\frac{\partial P}{\partial t}$ can be written as the space derivative of some quantity. This implies that the probability that we find the particle in $[a, b]$ at fixed time $t$ changes as
+\[
+ \frac{\d}{\d t}\int_a^b |\Psi(x, t)|^2 \;\d x = \int_a^b -\frac{\partial j}{\partial x}(x, t)\;\d x = j(a, t) - j(b, t).
+\]
+We can think of the final term as the probability current flowing in and out of the interval at its endpoints.
+
+In particular, consider a normalizable state with $\Psi, \Psi', j \to 0$ as $x \to \pm\infty$ for fixed $t$. Taking $a \to -\infty$ and $b\to +\infty$, we have
+\[
+ \frac{\d}{\d t}\int_{-\infty}^\infty |\Psi(x, t)|^2 \;\d x = 0.
+\]
+What does this tell us? This tells us that if $\Psi(x, 0)$ is normalized, $\Psi(x, t)$ is normalized for all $t$. Hence we know that for each fixed $t$, $|\Psi(x, t)|^2$ is a probability distribution. So what this really says is that the probability interpretation is consistent with the time evolution.
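We can also check this conservation numerically. The following is a minimal sketch (my own illustration, not from the course, with $\hbar = m = 1$ and an arbitrary harmonic potential): it evolves a normalized Gaussian wavepacket under a Crank--Nicolson discretization of the time-dependent Schr\"odinger equation and confirms that the norm does not drift.

```python
import numpy as np

# Sketch: hbar = m = 1; grid, time step and potential are illustrative.
N, L, dt, steps = 400, 20.0, 0.01, 200
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Discretised H = -(1/2) d^2/dx^2 + V(x) using central differences.
V = 0.5 * x**2
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

# Normalised Gaussian initial state, displaced from the origin.
psi = np.exp(-(x - 2.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Crank-Nicolson step: (1 + i dt H / 2) psi_new = (1 - i dt H / 2) psi_old.
A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H
for _ in range(steps):
    psi = np.linalg.solve(A, B @ psi)

norm = np.sum(np.abs(psi) ** 2) * dx
print(norm)  # stays equal to 1 up to rounding error
```

Crank--Nicolson is chosen here because its update map is a Cayley transform of the Hermitian matrix $H$, hence unitary, mirroring the exact conservation law above.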
+
+\section{Some examples in one dimension}
+\subsection{Introduction}
+In general, we are going to consider the energy eigenvalue problem for a particle in 1 dimension in a potential $V(x)$, i.e.
+\[
+ H\psi = -\frac{\hbar^2}{2m}\psi'' + V(x) \psi = E\psi.
+\]
+In other words, we want to find the allowed energy eigenvalues.
+
+This is a hard problem in general. However, we can consider the really easy case where $V(x) = U$, where $U$ is a constant. Then we can easily write down solutions.
+
+If $U > E$, then the Schr\"odinger equation is equivalent to
+\[
+ \psi'' - \kappa^2 \psi = 0,
+\]
+where $\kappa$ is such that $U - E = \frac{\hbar^2 \kappa^2}{2m}$. We take wlog $\kappa > 0$. The solution is then
+\[
+ \psi = Ae^{\kappa x} + Be^{-\kappa x}.
+\]
+On the other hand, if $U < E$, then the Schr\"odinger equation says
+\[
+ \psi'' + k^2 \psi = 0,
+\]
+where $k$ is picked such that $E - U = \frac{\hbar^2 k^2}{2m}$. The solutions are
+\[
+ \psi = Ae^{ikx} + Be^{-ikx}.
+\]
+Note that these new constants are merely there to simplify our equations. They generally need not have physical meanings.
+
+Now why are we interested in cases where the potential is constant? Wouldn't that be just equivalent to a free particle? This is indeed true. However, knowing these solutions allows us to study piecewise flat potentials such as steps, wells and barriers.
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 1.5) node [above] {$V$};
+ \draw [mred, semithick] (-1.5, 0) -- (0, 0) -- (0, 1) -- (1, 1);
+ \end{scope}
+
+ \begin{scope}[shift={(4, 0)}]
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 1.5) node [above] {$V$};
+ \draw [mred, semithick] (-1.5, 1) -- (-0.5, 1) -- (-0.5, 0) -- (0.5, 0) -- (0.5, 1) -- (1.5, 1);
+ \end{scope}
+
+ \begin{scope}[shift={(8, 0)}]
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 1.5) node [above] {$V$};
+ \draw [mred, semithick] (-1.5, 0) -- (-0.5, 0) -- (-0.5, 1) -- (0.5, 1) -- (0.5, 0) -- (1.5, 0);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Here a finite discontinuity in $V$ is allowed. In this case, we can have $\psi, \psi'$ continuous and $\psi''$ discontinuous. Then the discontinuity of $\psi''$ cancels that of $V$, and the Schr\"odinger equation holds everywhere.
+
+In this chapter, we will seek normalizable solutions with
+\[
+ \int_{-\infty}^\infty |\psi(x)|^2 \;\d x < \infty.
+\]
+This requires that $\psi(x) \to 0$ as $x \to \pm\infty$. We see that on the unbounded segments at the two ends, we want decaying exponentials $e^{-\kappa x}$ instead of oscillating exponentials $e^{-ikx}$.
+
+\subsection{Infinite well --- particle in a box}
+The simplest case to consider is the infinite well. Here the potential is infinite outside the region $[-a, a]$, and we have much less to think about. For $|x| > a$, we must have $\psi(x) = 0$, or else $V(x) \psi(x)$ would be infinite.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 2) node [above] {$V$};
+ \draw [mred, semithick, ->] (-1.5, 0) node [below] {$-a$} -- (-1.5, 2);
+ \draw [mred, semithick, ->] (1.5, 0) node [below] {$a$} -- (1.5, 2);
+ \draw [mred, semithick] (-1.5, 0) -- (1.5, 0);
+ \end{tikzpicture}
+\end{center}
+\[
+ V(x) =
+ \begin{cases}
+ 0 & |x| \leq a\\
+ \infty & |x| > a.
+ \end{cases}
+\]
+We require $\psi = 0$ for $|x| > a$ and $\psi$ continuous at $x = \pm a$. Within $|x| < a$, the Schr\"odinger equation is
+\[
+ -\frac{\hbar^2}{2m}\psi'' = E\psi.
+\]
+We simplify this to become
+\[
+ \psi'' + k^2 \psi = 0,
+\]
+where
+\[
+ E = \frac{\hbar^2 k^2}{2m}.
+\]
+Here, instead of working with the complex exponentials, we use $\sin$ and $\cos$ since we know well when these vanish. The general solution is thus
+\[
+ \psi = A\cos kx + B\sin kx.
+\]
+Our boundary conditions require that $\psi$ vanishes at $x = \pm a$. So we need
+\[
+ A \cos ka \pm B\sin ka = 0.
+\]
+In other words, we require
+\[
+ A\cos ka = B\sin ka = 0.
+\]
+Since $\sin ka$ and $\cos ka$ cannot be simultaneously $0$, either $A = 0$ or $B = 0$. So the two possibilities are
+\begin{enumerate}
+ \item $B = 0$ and $ka = n\pi/2$ with $n = 1, 3, \cdots$
+ \item $A = 0$ and $ka = n\pi/2$ with $n = 2, 4, \cdots$
+\end{enumerate}
+Hence the allowed energy levels are
+\[
+ E_n = \frac{\hbar^2 \pi^2}{8ma^2} n^2,
+\]
+where $n = 1, 2, \cdots$, and the wavefunctions are
+\[
+ \psi_n(x) = \left(\frac{1}{a}\right)^{\frac{1}{2}}
+ \begin{cases}
+ \cos \frac{n\pi x}{2a} & n\text{ odd}\\
+ \sin \frac{n\pi x}{2a} & n\text{ even}
+ \end{cases}.
+\]
+These are normalized so that $\int_{-a}^a |\psi_n(x)|^2 \;\d x = 1$.
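As a quick numerical cross-check (an illustration of mine, in units with $\hbar = m = a = 1$, so that $E_n = \pi^2 n^2/8$), we can diagonalize a finite-difference version of the Hamiltonian with the boundary conditions $\psi(\pm a) = 0$ built in:

```python
import numpy as np

# Sketch: hbar = m = a = 1, so the exact levels are E_n = pi^2 n^2 / 8.
N = 1000
x = np.linspace(-1.0, 1.0, N + 2)[1:-1]   # interior grid points only
dx = x[1] - x[0]

# Dropping the endpoints enforces psi(+-a) = 0 (Dirichlet conditions).
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E = np.linalg.eigvalsh(H)[:4]             # four lowest eigenvalues
E_exact = np.pi**2 * np.arange(1, 5) ** 2 / 8
print(E, E_exact)                          # agree to O(dx^2)
```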
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \node [right] at (-2.5, 1.5) {$\psi_1$:};
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$x$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$V$};
+ \node [anchor = north east] at (-1.5, 0) {$-a$};
+ \node [anchor = north west] at (1.5, 0) {$a$};
+ \draw [mred] (-1.5, -1.5) -- (-1.5, 1.5);
+ \draw [mred] (1.5, -1.5) -- (1.5, 1.5);
+
+ \draw [mblue, semithick, domain=-1.5:1.5, samples=50] plot (\x, {cos (60 * \x)});
+ \end{scope}
+
+ \begin{scope}[shift={(6, 0)}]
+ \node [right] at (-2.5, 1.5) {$\psi_2$:};
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$x$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$V$};
+ \node [anchor = north east] at (-1.5, 0) {$-a$};
+ \node [anchor = north west] at (1.5, 0) {$a$};
+ \draw [mred] (-1.5, -1.5) -- (-1.5, 1.5);
+ \draw [mred] (1.5, -1.5) -- (1.5, 1.5);
+
+ \draw [mblue, semithick, domain=-1.5:1.5, samples=50] plot (\x, {sin (120 * \x)});
+ \end{scope}
+
+ \begin{scope}[shift={(0, -4)}]
+ \node [right] at (-2.5, 1.5) {$\psi_3$:};
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$x$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$V$};
+ \node [anchor = north east] at (-1.5, 0) {$-a$};
+ \node [anchor = north west] at (1.5, 0) {$a$};
+ \draw [mred] (-1.5, -1.5) -- (-1.5, 1.5);
+ \draw [mred] (1.5, -1.5) -- (1.5, 1.5);
+
+ \draw [mblue, semithick, domain=-1.5:1.5, samples=50] plot (\x, {cos (180 * \x)});
+ \end{scope}
+
+ \begin{scope}[shift={(6, -4)}]
+ \node [right] at (-2.5, 1.5) {$\psi_4$:};
+ \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$x$};
+ \draw [->] (0, -1.5) -- (0, 1.5) node [above] {$V$};
+ \node [anchor = north east] at (-1.5, 0) {$-a$};
+ \node [anchor = north west] at (1.5, 0) {$a$};
+ \draw [mred] (-1.5, -1.5) -- (-1.5, 1.5);
+ \draw [mred] (1.5, -1.5) -- (1.5, 1.5);
+
+ \draw [mblue, semithick, domain=-1.5:1.5, samples=50] plot (\x, {sin (240 * \x)});
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+This was a rather simple and nice example. We have an infinite well, and the particle is well-contained inside the box. The solutions just look like standing waves on a string with two fixed end points --- something we (hopefully) are familiar with.
+
+Note that $\psi_n(-x) = (-1)^{n + 1}\psi_n(x)$. We will see that this is a general feature of energy eigenfunctions of a symmetric potential. This is known as \emph{parity}.
+\subsection{Parity}
+Consider the Schr\"odinger equation for a particle of mass $m$
+\[
+ H\psi = -\frac{\hbar^2}{2m}\psi'' + V(x) \psi = E\psi.
+\]
+with potential
+\[
+ V(x) = V(-x).
+\]
+By changing variables $x \to -x$, we see that $\psi(x)$ is an eigenfunction of $H$ with energy $E$ if and only if $\psi(-x)$ is an eigenfunction of $H$ with energy $E$. There are two possibilities:
+\begin{enumerate}
+ \item If $\psi(x)$ and $\psi(-x)$ represent the same quantum state, this can only happen if $\psi(-x) = \eta \psi(x)$ for some constant $\eta$. Since this is true for all $x$, we can do this twice and get
+ \[
+ \psi(x) = \eta \psi(-x) = \eta^2 \psi(x).
+ \]
+ So we get that $\eta = \pm 1$ and $\psi(-x) = \pm \psi(x)$. We call $\eta$ the \emph{parity}, and say $\psi$ has even/odd parity if $\eta$ is $+1/-1$ respectively.
+
+ For example, in our particle in a box, our states $\psi_n$ have parity $(-1)^{n + 1}$.
+ \item If $\psi(x)$ and $\psi(-x)$ represent different quantum states, then we can still take linear combinations
+ \[
+ \psi_\pm (x) = \alpha(\psi(x) \pm \psi(-x)),
+ \]
+ and these are also eigenstates with energy eigenvalue $E$, where $\alpha$ is for normalization. Then by construction, $\psi_\pm (-x) = \pm \psi_\pm(x)$ and have parity $\eta = \pm 1$.
+\end{enumerate}
+Hence, if we are given a potential with reflective symmetry $V(-x) = V(x)$, then we can restrict our attention and just look for solutions with definite parity.
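Case (ii) is easy to see concretely: on a symmetric grid, any function splits into even and odd parts. A small sketch (the sample function here is an arbitrary choice of mine, taking $\alpha = \frac{1}{2}$):

```python
import numpy as np

# Sketch: split a function with no definite parity into even/odd parts.
x = np.linspace(-5, 5, 1001)           # symmetric grid, so psi[::-1] is psi(-x)
psi = np.exp(-(x - 1.0) ** 2)          # arbitrary function, no definite parity
psi_plus = 0.5 * (psi + psi[::-1])     # alpha (psi(x) + psi(-x)) with alpha = 1/2
psi_minus = 0.5 * (psi - psi[::-1])

print(np.allclose(psi_plus, psi_plus[::-1]))     # True: even parity
print(np.allclose(psi_minus, -psi_minus[::-1]))  # True: odd parity
print(np.allclose(psi_plus + psi_minus, psi))    # True: they recover psi
```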
+\subsection{Potential well}
+We will consider a potential that looks like this:
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$x$};
+ \draw [->] (0, -1.5) -- (0, 0.5) node [above] {$V$};
+ \draw [mred, semithick] (-1.5, 0) -- (-0.5, 0) -- (-0.5, -1) node [below] {$-a$} -- (0.5, -1) node [below] {$a$} -- (0.5, 0) -- (1.5, 0);
+
+ \draw [dashed] (0.5, -1) -- (1.5, -1) node [right] {$-U$};
+ \end{tikzpicture}
+\end{center}
+The potential is given by
+\[
+ V(x) =
+ \begin{cases}
+ -U & |x| < a\\
+ 0 & |x| \geq a
+ \end{cases}
+\]
+for some constant $U > 0$. Classically, this is not very interesting. If the energy $E < 0$, then the particle is contained in the well. Otherwise it is free to move around. However, in quantum mechanics, this is much more interesting.
+
+We want to seek energy levels for a particle of mass $m$, defined by the Schr\"odinger equation
+\[
+ H\psi = -\frac{\hbar^2}{2m}\psi'' + V(x) \psi = E\psi.
+\]
+For energies in the range
+\[
+ -U < E < 0,
+\]
+we set
+\[
+ U + E = \frac{\hbar^2 k^2}{2m} > 0,\quad E = -\frac{\hbar^2 \kappa^2}{2m},
+\]
+where $k, \kappa > 0$ are new real constants. Note that these coefficients are not independent, since $U$ is given and fixed. So they must satisfy
+\[
+ k^2 + \kappa^2 = \frac{2mU}{\hbar^2}.
+\]
+Using these constants, the Schr\"odinger equation becomes
+\[
+ \begin{cases}
+ \psi'' + k^2 \psi = 0 & |x| < a\\
+ \psi'' - \kappa^2 \psi = 0 & |x| > a.
+ \end{cases}
+\]
+As we previously said, we want the Schr\"odinger equation to hold even at the discontinuities. So we need $\psi$ and $\psi'$ to be continuous at $x = \pm a$.
+
+We first consider the even parity solutions $\psi(-x) = \psi(x)$. We can write our solution as
+\[
+ \psi =
+ \begin{cases}
+ A \cos kx & |x| < a\\
+ B e^{-\kappa |x|} & |x| > a\\
+ \end{cases}
+\]
+We match $\psi$ and $\psi'$ at $x = a$. So we need
+\begin{align*}
+ A\cos ka &= Be^{-\kappa a}\\
+ -Ak\sin ka &= -\kappa Be^{-\kappa a}.
+\end{align*}
+By parity, there is no additional information from $x = -a$.
+
+We can divide the equations to obtain
+\[
+ k \tan ka = \kappa.
+\]
+This is still not something we can solve easily. To find when solutions exist, it is convenient to introduce
+\[
+ \xi = ak, \quad \eta = a\kappa,
+\]
+where these two constants are dimensionless and positive. Note that this $\eta$ has nothing to do with parity. It's just that we have run out of letters to use. Hence the solution we need are solutions to
+\[
+ \eta = \xi \tan \xi.
+\]
+Also, our initial conditions on $k$ and $\kappa$ require
+\[
+ \xi^2 + \eta^2 = \frac{2ma^2 U}{\hbar^2}.
+\]
+We can look for solutions by plotting these two equations. We first plot the curve $\eta = \xi \tan \xi$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$\xi$};
+ \draw [->] (0, 0) -- (0, 4) node [right] {$\eta$};
+ \begin{scope}[yscale=2]
+ \clip (0, 0) rectangle (6, 2);
+ \foreach \a in {0,1,...,5} {
+ \pgfmathsetmacro\b{\a + 0.499};
+ \draw [domain=\a:\b] plot [smooth] (\x, {\x * min(4, tan(180*\x))});
+ }
+ \end{scope}
+ \foreach \a in {0,1,...,5} {
+ \pgfmathsetmacro\n{\a * 2 + 1};
+ \draw [dashed] (\a + 0.5, 4) -- (\a + 0.5, 0) node [below] {$\frac{\pgfmathprintnumber{\n} \pi}{2}$};
+ }
+ \end{tikzpicture}
+\end{center}
+The other equation is the equation of a circle. Depending on the size of the constant $2ma^2 U/\hbar^2$, there will be a different number of points of intersections.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$\xi$};
+ \draw [->] (0, 0) -- (0, 4) node [right] {$\eta$};
+ \begin{scope}[yscale=2]
+ \clip (0, 0) rectangle (6, 2);
+ \foreach \a in {0,1,...,5} {
+ \pgfmathsetmacro\b{\a + 0.499};
+ \draw [domain=\a:\b] plot [smooth] (\x, {\x * min(4, tan(180*\x))});
+ }
+ \end{scope}
+ \draw [mred, dashed] (2.3, 0) arc (0:90:2.3);
+ \draw [mred, dashed] (1.5, 0) arc (0:90:1.5);
+ \end{tikzpicture}
+\end{center}
+So there will be a different number of solutions depending on the value of $2ma^2 U/\hbar^2$. In particular, if
+\[
+ (n - 1)\pi < \left(\frac{2mUa^2}{\hbar^2}\right)^{1/2} < n\pi,
+\]
+then we have exactly $n$ even parity solutions (for $n \geq 1$).
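We can confirm this count numerically. The sketch below (the value of $R^2 = 2mUa^2/\hbar^2$ is an illustrative choice of mine) bisects for the intersections of $\eta = \xi\tan\xi$ with the circle, using the fact that $\xi \tan \xi$ increases from $0$ towards $+\infty$ on each branch $[k\pi, k\pi + \pi/2)$:

```python
import numpy as np

def even_bound_states(R, tol=1e-10):
    """Roots of xi * tan(xi) = sqrt(R^2 - xi^2) with xi, eta > 0."""
    f = lambda xi: xi * np.tan(xi) - np.sqrt(max(R**2 - xi**2, 0.0))
    roots = []
    k = 0
    while k * np.pi < R:
        # On [k pi, k pi + pi/2), xi tan(xi) rises from 0 towards infinity
        # while the circle side falls, so there is exactly one crossing
        # whenever the branch starts inside the circle.
        lo = k * np.pi + 1e-9                      # f(lo) < 0
        hi = min(k * np.pi + np.pi / 2, R) - 1e-9  # f(hi) > 0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        roots.append(0.5 * (lo + hi))
        k += 1
    return roots

R = 10.0                      # R^2 = 2 m U a^2 / hbar^2, illustrative
roots = even_bound_states(R)
print(len(roots))             # floor(R/pi) + 1 = 4 even-parity levels
```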
+
+We can do exactly the same thing for odd parity eigenstates\ldots on example sheet 1.
+
+For $E > 0$ or $E < -U$, we will end up finding non-normalizable solutions. What is more interesting, though, is to look at the solutions we have now. We can compare what we've got with what we would expect classically.
+
+Classically, any value of $E$ in the range $-U < E < 0$ is allowed, and the motion is deeply uninteresting. The particle just goes back and forth inside the well, and is \emph{strictly confined} in $-a \leq x \leq a$.
+
+Quantum mechanically, there is just a discrete, \emph{finite} set of allowed energies. What is more surprising is that while $\psi$ decays exponentially outside the well, it is non-zero there! This means there is in theory a non-zero probability of finding the particle outside the well. We say such particles are \emph{bound} in the potential, even though they are not strictly confined to the well.
+
+\subsection{The harmonic oscillator}
+So far in our examples, the quantization (mathematically) comes from us requiring continuity at the boundaries. In the harmonic oscillator, it arises in a different way.
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw [->] (-1.5, 0) -- (1.5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 1.5) node [above] {$V$};
+ \draw [mred, semithick] (-1, 1.5) parabola bend (0, 0) (1, 1.5);
+ \end{tikzpicture}
+\end{center}
+This is a harmonic oscillator of mass $m$ with
+\[
+ V(x) = \frac{1}{2}m\omega^2 x^2.
+\]
+Classically, this has a motion of $x = A \cos \omega (t - t_0)$, which is something we (hopefully) know well too.
+
+This is a \emph{really} important example. First of all, we can solve it, which is a good thing. More importantly, any smooth potential can be approximated by a harmonic oscillator near an equilibrium $x_0$, since
+\[
+ V(x) = V(x_0) + \frac{1}{2}V''(x_0)(x - x_0)^2 + \cdots.
+\]
+Systems with many degrees of freedom, such as crystals, can also be treated as collections of independent oscillators by considering the normal modes. If we apply this to the electromagnetic field, we get photons! So it is very important to understand the quantum mechanical oscillator.
+
+We are going to seek all normalizable solutions to the time-independent Schr\"odinger equation
+\[
+ H\psi = -\frac{\hbar^2}{2m}\psi'' + \frac{1}{2}m\omega^2 x^2 \psi = E\psi.
+\]
+To simplify the constants, we define
+\[
+ y = \left(\frac{m\omega}{\hbar}\right)^{\frac{1}{2}}x,\quad \mathcal{E} = \frac{2E}{\hbar \omega},
+\]
+both of which are dimensionless. Then we are left with
+\[
+ -\frac{\d^2 \psi}{\d y^2} + y^2\psi = \mathcal{E} \psi.
+\]
+We can consider the behaviour for $y^2 \gg \mathcal{E}$. For large $y$, the $y^2 \psi$ term will be large, and so we want the $\psi''$ term to offset it. We might want to try the Gaussian $e^{-y^2/2}$, and when we differentiate it twice, we would have brought down a factor of $y^2$. So we can wlog set
+\[
+ \psi = f(y) e^{-\frac{1}{2}y^2}.
+\]
+Then the Schr\"odinger equation gives
+\[
+ \frac{\d^2 f}{\d y^2} - 2y\frac{\d f}{\d y} + (\mathcal{E} - 1) f = 0.
+\]
+This is known as \emph{Hermite's equation}. We try a series solution
+\[
+ f(y) = \sum_{r \geq 0} a_r y^r,
+\]
+and substitute in to get
+\[
+ \sum_{r \geq 0} \big( (r + 2)(r + 1)a_{r + 2} + (\mathcal{E} - 1 - 2r)a_r\big) y^r = 0.
+\]
+This holds if and only if
+\[
+ a_{r + 2} = \frac{2 r + 1 - \mathcal{E}}{(r + 2)(r + 1)} a_r,\quad r \geq 0.
+\]
+We can choose $a_0$ and $a_1$ independently, and can get two linearly independent solutions. Each solution involves either all even or all odd powers.
+
+However, we have a problem. We want normalizable solutions. So we want to make sure our function does not explode at large $y$. Note that it is okay if $f(y)$ is \emph{quite} large, since our $\psi$ is suppressed by the $e^{-\frac{1}{2}y^2}$ terms, but we cannot grow \emph{too} big.
+
+We look at these two solutions individually. To examine the behaviour of $f(y)$ when $y$ is large, observe that unless the coefficients vanish, we get
+\[
+ a_{p}/a_{p - 2} \sim \frac{2}{p}.
+\]
+This matches the coefficients of $y^\alpha e^{y^2}$ for some power $\alpha$ (e.g.\ $\sum_{p \geq 0} \frac{y^{2p}}{p!}$). This is bad, since our $\psi$ will then grow as $e^{\frac{1}{2}y^2}$, and cannot be normalized.
+
+Hence, we get normalizable $\psi$ if and only if the series for $f$ terminates to give a polynomial. This occurs iff $\mathcal{E} = 2n + 1$ for some non-negative integer $n$. Note that for each $n$, only one of the two independent solutions is normalizable. So for each $\mathcal{E}$, we get exactly one solution.
+
+So for $n$ even, we have
+\[
+ a_{r + 2} = \frac{2r - 2n}{(r + 2)(r + 1)} a_r
+\]
+for $r$ even, and $a_r = 0$ for $r$ odd, and the other way round when $n$ is odd.
+
+The solutions are thus $f(y) = h_n(y)$, where $h_n$ is a polynomial of degree $n$ with $h_n(-y) = (-1)^n h_n(y)$.
+
+For example, we have
+\begin{align*}
+ h_0(y) &= a_0\\
+ h_1(y) &= a_1 y\\
+ h_2(y) &= a_0(1 - 2y^2)\\
+ h_3(y) &= a_1\left(y - \frac{2}{3}y^3\right).
+\end{align*}
+These are known as the \emph{Hermite polynomials}. We have now solved our harmonic oscillator. With the constant restored, the possible energy eigenvalues are
+\[
+ E_n = \hbar \omega \left(n + \frac{1}{2}\right),
+\]
+for $n = 0, 1, 2, \cdots$.
+
+The wavefunctions are
+\[
+ \psi_n(x) = h_n \left(\left(\frac{m\omega}{\hbar}\right)^{\frac{1}{2}} x\right) \exp\left(-\frac{1}{2}\frac{m\omega}{\hbar} x^2\right),
+\]
+where normalization fixes $a_0$ and $a_1$.
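The recurrence makes the Hermite polynomials easy to generate. A sketch (taking $a_0 = 1$ or $a_1 = 1$ as an arbitrary normalization) that reproduces the polynomials listed above and checks Hermite's equation with $\mathcal{E} = 2n + 1$:

```python
import numpy as np

def hermite_coeffs(n):
    # Build h_n from a_{r+2} = (2r - 2n)/((r+2)(r+1)) a_r, terminating at degree n.
    a = np.zeros(n + 1)
    a[n % 2] = 1.0                    # a_0 = 1 (n even) or a_1 = 1 (n odd)
    for r in range(n % 2, n - 1, 2):
        a[r + 2] = (2 * r - 2 * n) / ((r + 2) * (r + 1)) * a[r]
    return a                          # a[r] is the coefficient of y^r

print(hermite_coeffs(2))   # coefficients of 1 - 2 y^2
print(hermite_coeffs(3))   # coefficients of y - (2/3) y^3

# Verify Hermite's equation f'' - 2y f' + 2n f = 0 at sample points for n = 4.
n = 4
f = np.polynomial.Polynomial(hermite_coeffs(n))
for y in [-1.3, 0.2, 2.7]:
    print(abs(f.deriv(2)(y) - 2 * y * f.deriv(1)(y) + 2 * n * f(y)) < 1e-9)
```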
+
+As we said in the beginning, harmonic oscillators are everywhere. It turns out quantised electromagnetic fields correspond to sums of quantised harmonic oscillators, with
+\[
+ E_n - E_0 = n\hbar \omega.
+\]
+This is equivalent to saying the $n$th state contains $n$ photons, each of energy $\hbar \omega$.
+
+\section{Expectation and uncertainty}
+So far, all we've been doing is solving Schr\"odinger's equation, which is arguably not very fun. In this chapter, we will look at some theoretical foundations of quantum mechanics. Most importantly, we will learn how to extract information from the wavefunction $\psi$. Two important concepts will be the expectation and uncertainty. We will then prove two important theorems about these --- Ehrenfest's theorem and the Heisenberg uncertainty principle.
+
+\subsection{Inner products and expectation values}
+\subsubsection*{Definitions}
+\begin{defi}[Inner product]
+ Let $\psi(x)$ and $\phi(x)$ be normalizable wavefunctions at some fixed time (not necessarily stationary states). We define the complex \emph{inner product} by
+ \[
+ (\phi, \psi) = \int_{-\infty}^\infty \phi(x)^* \psi (x)\;\d x.
+ \]
+\end{defi}
+Note that for any complex number $\alpha$, we have
+\[
+ (\phi, \alpha \psi) = \alpha(\phi, \psi) = (\alpha^* \phi, \psi).
+\]
+Also, we have
+\[
+ (\phi, \psi) = (\psi, \phi)^*.
+\]
+These are just the usual properties of an inner product.
+
+\begin{defi}[Norm]
+ The \emph{norm} of a wavefunction $\psi$, written $\|\psi\|$, is defined by
+ \[
+ \|\psi\|^2 = (\psi, \psi) = \int_{-\infty}^\infty |\psi(x)|^2\;\d x.
+ \]
+\end{defi}
+This ensures the norm is real and positive.
+
+Suppose we have a normalized state $\psi$, i.e.\ $\|\psi\| = 1$. We then define the expectation values of observables as follows.
+\begin{defi}[Expectation value]
+ The \emph{expectation value} of any observable $H$ on the state $\psi$ is
+ \[
+ \bra H \ket_\psi = (\psi, H\psi).
+ \]
+\end{defi}
+For example, for the position, we have
+\[
+ \bra \hat{x} \ket_\psi = (\psi, \hat{x}\psi) = \int_{-\infty}^\infty x|\psi(x)|^2 \;\d x.
+\]
+Similarly, for the momentum, we have
+\[
+ \bra \hat{p} \ket_{\psi} = (\psi, \hat{p}\psi) = \int_{-\infty}^\infty \psi^* (-i\hbar \psi')\;\d x.
+\]
+How are we supposed to interpret this thing? So far, all we have said about operators is that if you are an eigenstate, then measuring that property will give a definite value. However, the point of quantum mechanics is that things are waves. We can add them together to get superpositions. Then the sum of two eigenstates will not be an eigenstate, and does not have definite, say, momentum. This formula tells us what the \emph{average value} of any state is.
+
+This is our new assumption of quantum mechanics --- the expectation value is the \emph{mean} or \emph{average} of results obtained by measuring the observable many times, with the system prepared in state $\psi$ before each measurement.
+
+Note that this is valid for any operator. In particular, we can take any function of our existing operators. One important example is the uncertainty:
+\begin{defi}[Uncertainty]
+ The \emph{uncertainty} in position $(\Delta x)_\psi$ and momentum $(\Delta p)_\psi$ are defined by
+ \[
+ (\Delta x)_\psi^2 = \bra (\hat{x} - \bra \hat{x}\ket_\psi)^2 \ket_\psi = \bra \hat{x}^2\ket_\psi - \bra \hat{x}\ket^2_\psi,
+ \]
+ with exactly the same expression for momentum:
+ \[
+ (\Delta p)_\psi^2 = \bra (\hat{p} - \bra \hat{p}\ket_\psi)^2 \ket_\psi = \bra \hat{p}^2\ket_\psi - \bra \hat{p}\ket^2_\psi.
+ \]
+\end{defi}
+We will later show that these quantities $(\Delta x)_\psi^2$ and $(\Delta p)_\psi^2$ are indeed real and positive, so that this actually makes sense.
+
+\subsubsection*{Hermitian operators}
+The expectation values defined can be shown to be real for $\hat{x}$ and $\hat{p}$ specifically, by manually fiddling with stuff. We can generalize this result to a large class of operators known as \emph{Hermitian operators}.
+\begin{defi}[Hermitian operator]
+ An operator $Q$ is \emph{Hermitian} iff for all normalizable $\phi, \psi$, we have
+ \[
+ (\phi, Q\psi) = (Q\phi, \psi).
+ \]
+ In other words, we have
+ \[
+ \int \phi^* Q\psi \;\d x= \int(Q\phi)^* \psi \;\d x.
+ \]
+\end{defi}
+In particular, this implies that
+\[
+ (\psi, Q\psi) = (Q\psi, \psi) = (\psi, Q\psi)^*.
+\]
+So $(\psi, Q\psi)$ is real, i.e.\ $\bra Q\ket_\psi$ is real.
+\begin{prop}
+ The operators $\hat{x}$, $\hat{p}$ and $H$ are all Hermitian (for real potentials).
+\end{prop}
+
+\begin{proof}
+ We do $\hat{x}$ first: we want to show $(\phi, \hat{x} \psi) = (\hat{x}\phi, \psi)$. This statement is equivalent to
+ \[
+ \int_{-\infty}^\infty \phi(x)^* x\psi (x)\;\d x = \int_{-\infty}^\infty (x\phi(x))^* \psi(x)\;\d x.
+ \]
+ Since position is real, this is true.
+
+ To show that $\hat{p}$ is Hermitian, we want to show $(\phi, \hat{p} \psi) = (\hat{p}\phi, \psi)$. This is equivalent to saying
+ \[
+ \int_{-\infty}^\infty \phi^*(-i\hbar \psi')\;\d x = \int_{-\infty}^\infty (i\hbar \phi')^*\psi \;\d x.
+ \]
+ This works by integrating by parts: the difference of the two terms is
+ \[
+ -i\hbar [\phi^*\psi]_{-\infty}^\infty = 0
+ \]
+ since $\phi, \psi$ are normalizable.
+
+ To show that $H$ is Hermitian, we want to show $(\phi, H\psi) = (H\phi, \psi)$, where
+ \[
+ H = -\frac{\hbar^2}{2m}\frac{\d^2}{\d x^2} + V(x).
+ \]
+ To show this, it suffices to consider the kinetic and potential terms separately. For the kinetic energy, we just need to show that $(\phi, \psi'') = (\phi'', \psi)$. This is true since we can integrate by parts twice to obtain
+ \[
+ (\phi, \psi'') = -(\phi', \psi') = (\phi'', \psi).
+ \]
+ For the potential term, we have
+ \[
+ (\phi, V(\hat{x})\psi) = (\phi, V(x) \psi) = (V(x)\phi, \psi) = (V(\hat{x})\phi, \psi).
+ \]
+ So $H$ is Hermitian, as claimed.
+\end{proof}
+Thus we know that $\bra \hat{x}\ket_\psi, \bra \hat{p}\ket_\psi, \bra H\ket_\psi$ are all real.
+
+Furthermore, observe that
+\[
+ X = \hat{x} - \alpha,\quad P = \hat{p} - \beta
+\]
+are (similarly) Hermitian for any real $\alpha, \beta$. Hence
+\[
+ (\psi, X^2 \psi) = (\psi, X(X\psi)) = (X\psi, X\psi) = \|X\psi\|^2 \geq 0.
+\]
+Similarly, we have
+\[
+ (\psi, P^2 \psi) = (\psi, P(P\psi)) = (P\psi, P\psi) = \|P\psi\|^2 \geq 0.
+\]
+If we choose $\alpha = \bra \hat{x}\ket_\psi$ and $\beta = \bra \hat{p}\ket_\psi$, the expressions above say that $(\Delta x)^2_\psi$ and $(\Delta p)^2_\psi$ are indeed real and positive.
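These expectation values are also easy to evaluate numerically. A sketch (with $\hbar = 1$; the wavepacket parameters are illustrative choices of mine) computes $\bra \hat{x}\ket_\psi$, $\bra \hat{p}\ket_\psi$ and both uncertainties for a Gaussian wavepacket, for which the product $(\Delta x)_\psi (\Delta p)_\psi$ equals $\hbar/2$:

```python
import numpy as np

# Sketch: hbar = 1; sigma, x0, k0 are illustrative parameters.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
sigma, x0, k0 = 1.5, 2.0, 3.0

# Normalised Gaussian wavepacket: mean position x0, mean momentum k0.
psi = (np.pi * sigma**2) ** -0.25 * np.exp(
    -((x - x0) ** 2) / (2 * sigma**2) + 1j * k0 * x)

def expect(op_psi):
    # (psi, op psi) approximated by a Riemann sum on the grid.
    return np.real(np.sum(np.conj(psi) * op_psi) * dx)

p_psi = -1j * np.gradient(psi, dx)                  # p psi = -i psi'
p2_psi = -np.gradient(np.gradient(psi, dx), dx)     # p^2 psi = -psi''

x_mean, p_mean = expect(x * psi), expect(p_psi)
dx_unc = np.sqrt(expect(x**2 * psi) - x_mean**2)
dp_unc = np.sqrt(expect(p2_psi) - p_mean**2)
print(x_mean, p_mean, dx_unc * dp_unc)   # roughly 2.0, 3.0 and 0.5
```

The product $\hbar/2$ is the minimum allowed by the uncertainty relation discussed in the final subsection.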
+\subsubsection*{Cauchy-Schwarz inequality}
+We are going to end with a small section on a technical result that will come handy later on.
+\begin{prop}[Cauchy-Schwarz inequality]
+ If $\psi$ and $\phi$ are any normalizable states, then
+ \[
+ \|\psi\|\|\phi\| \geq |(\psi, \phi)|.
+ \]
+\end{prop}
+
+\begin{proof}
+ Consider
+ \begin{align*}
+ \|\psi + \lambda \phi\|^2 &= (\psi + \lambda \phi, \psi + \lambda \phi) \\
+ &= (\psi, \psi) + \lambda(\psi, \phi) + \lambda^*(\phi, \psi) + |\lambda|^2 (\phi, \phi) \geq 0.
+ \end{align*}
+ This is true for any complex $\lambda$. Set
+ \[
+ \lambda = -\frac{(\phi, \psi)}{\|\phi\|^2}
+ \]
+ which is always well-defined since $\phi$ is normalizable, and then the above equation becomes
+ \[
+ \|\psi\|^2 - \frac{|(\psi, \phi)|^2}{\|\phi\|^2} \geq 0.
+ \]
+ So done.
+\end{proof}
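For discretized wavefunctions the inner product becomes a finite sum over grid values, and the inequality is easy to test directly (the random vectors below are illustrative data, not from the course):

```python
import numpy as np

# Sketch: Cauchy-Schwarz for discretised states, with random complex data.
rng = np.random.default_rng(0)
psi = rng.normal(size=100) + 1j * rng.normal(size=100)
phi = rng.normal(size=100) + 1j * rng.normal(size=100)

inner = np.vdot(phi, psi)                    # (phi, psi) = sum phi^* psi
lhs = np.linalg.norm(psi) * np.linalg.norm(phi)
print(lhs >= abs(inner))                     # True

# Equality holds exactly when psi is proportional to phi.
print(np.isclose(np.linalg.norm(2j * phi) * np.linalg.norm(phi),
                 abs(np.vdot(phi, 2j * phi))))   # True
```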
+\subsection{Ehrenfest's theorem}
+We will show that, in fact, quantum mechanics is like classical mechanics.
+
+Again, consider a normalizable state $\Psi(x, t)$ satisfying the time-dependent Schr\"odinger equation, i.e.
+\[
+ i\hbar \dot{\Psi} = H\Psi = \left(\frac{\hat{p}^2}{2m} + V(\hat{x})\right) \Psi.
+\]
+Classically, we are used to $x$ and $p$ changing in time. Here, however, $\hat{x}$ and $\hat{p}$ are fixed in time, while the states change with time. What \emph{does} change with time is the expectation values. The expectation values
+\[
+ \bra \hat{x}\ket_\Psi = (\Psi, \hat{x}\Psi),\quad \bra \hat{p}\ket_\Psi = (\Psi, \hat{p}\Psi)
+\]
+depend on $t$ through $\Psi$. Ehrenfest's theorem states the following:
+\begin{thm}[Ehrenfest's theorem]
+ \begin{align*}
+ \frac{\d}{\d t}\bra \hat{x}\ket_\Psi &= \frac{1}{m}\bra \hat{p}\ket_\Psi\\
+ \frac{\d}{\d t}\bra \hat{p}\ket_\Psi &= -\bra V'(\hat{x})\ket_\Psi.
+ \end{align*}
+\end{thm}
+These are the quantum counterparts to the classical equations of motion.
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \frac{\d}{\d t}\bra \hat{x}\ket_\Psi &= (\dot{\Psi}, \hat{x}\Psi) + (\Psi, \hat{x}\dot{\Psi})\\
+ &= \left(\frac{1}{i\hbar}H\Psi, \hat{x}\Psi\right) + \left(\Psi, \hat{x}\left(\frac{1}{i\hbar}H\right)\Psi\right)\\
+ \intertext{Since $H$ is Hermitian, we can move it around and get}
+ &= -\frac{1}{i\hbar}(\Psi, H(\hat{x}\Psi)) + \frac{1}{i\hbar}(\Psi, \hat{x}(H\Psi))\\
+ &= \frac{1}{i\hbar}(\Psi, (\hat{x} H - H\hat{x}) \Psi).
+ \end{align*}
+ But we know
+ \[
+ (\hat{x}H - H\hat{x})\Psi = -\frac{\hbar^2}{2m}(x\Psi'' - (x\Psi)'') + (xV\Psi - Vx\Psi) = \frac{\hbar^2}{m}\Psi' = \frac{i\hbar}{m}\hat{p}\Psi.
+ \]
+ So done.
+
+ The second part is similar. We have
+ \begin{align*}
+ \frac{\d}{\d t}\bra \hat{p}\ket_\Psi &= (\dot{\Psi}, \hat{p}\Psi) + (\Psi, \hat{p}\dot{\Psi})\\
+ &= \left(\frac{1}{i\hbar}H\Psi, \hat{p}\Psi\right) + \left(\Psi, \hat{p}\left(\frac{1}{i\hbar}H\right)\Psi\right)\\
+ \intertext{Since $H$ is Hermitian, we can move it around and get}
+ &= -\frac{1}{i\hbar}(\Psi, H(\hat{p}\Psi)) + \frac{1}{i\hbar}(\Psi, \hat{p}(H\Psi))\\
+ &= \frac{1}{i\hbar}(\Psi, (\hat{p} H - H\hat{p}) \Psi).
+ \end{align*}
+ Again, we can compute
+ \begin{align*}
+ (\hat{p}H - H\hat{p})\Psi &= -i\hbar \left(\frac{-\hbar^2}{2m}\right)((\Psi'')' - (\Psi')'') - i\hbar ((V(x)\Psi)' - V(x) \Psi') \\
+ &= -i\hbar V'(x) \Psi.
+ \end{align*}
+ So done.
+\end{proof}
+
+Note that in general, quantum mechanics can be portrayed in different ``pictures''. In this course, we will be using the Schr\"odinger picture all the time, in which the operators are time-independent, and the states evolve in time. An alternative picture is the Heisenberg picture, in which states are fixed in time, and all the time dependence lies in the operators. When written in this way, quantum mechanics is even more like classical mechanics. This will be explored more in depth in IID Principles of Quantum Mechanics.
+
+\subsection{Heisenberg's uncertainty principle}
+We will show that, in fact, quantum mechanics is not like classical mechanics.
+
+\subsubsection*{Statement}
+The statement of the uncertainty principle (or relation) is
+\begin{thm}[Heisenberg's uncertainty principle]
+ If $\psi$ is any normalized state (at any fixed time), then
+ \[
+ (\Delta x)_\psi (\Delta p)_\psi \geq \frac{\hbar}{2}.
+ \]
+\end{thm}
+
+\begin{eg}
+ Consider the normalized Gaussian
+ \[
+ \psi(x) = \left(\frac{1}{\alpha \pi}\right)^{\frac{1}{4}} e^{-x^2/2\alpha}.
+ \]
+ We find that
+ \[
+ \bra \hat{x}\ket_\psi = \bra \hat{p}\ket_\psi = 0,
+ \]
+ and also
+ \[
+ (\Delta x)_\psi^2 = \frac{\alpha}{2},\quad (\Delta p)_\psi^2 = \frac{\hbar^2}{2\alpha}.
+ \]
+ So we get
+ \[
+ (\Delta x)_\psi(\Delta p)_\psi = \frac{\hbar}{2}.
+ \]
+ We see that a small $\alpha$ corresponds to $\psi$ sharply peaked around $x = 0$, i.e.\ it has a rather definite position, but has a large uncertainty in momentum. On the other hand, if $\alpha$ is large, $\psi$ is spread out in position but has a small uncertainty in momentum.
+\end{eg}
+Recall that for $\alpha = \frac{\hbar}{m\omega}$, this is the lowest energy eigenstate for harmonic oscillator with
+\[
+ H = \frac{1}{2m}\hat{p}^2 + \frac{1}{2}m\omega^2 \hat{x}^2,
+\]
+with eigenvalue $\frac{1}{2}\hbar \omega$. We can use the uncertainty principle to understand why the minimum energy is $\frac{1}{2}\hbar \omega$ rather than $0$. If the energy were exactly zero, the particle would just sit at the bottom of the potential well, with both a definite position and a definite (zero) momentum. This would violate the uncertainty principle, so the ground state energy must be non-zero.
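We can check the Gaussian example above numerically, computing $(\Delta x)_\psi$ by a Riemann sum and $(\Delta p)_\psi$ with finite differences. This is only an illustrative sketch: we set $\hbar = 1$ and pick an arbitrary value of $\alpha$.

```python
import math

# Check (Delta x)(Delta p) = hbar/2 for the normalized Gaussian
# psi(x) = (1/(alpha*pi))^(1/4) exp(-x^2 / (2*alpha)), in units hbar = 1.
hbar = 1.0
alpha = 0.7          # arbitrary width parameter
dx = 1e-3
xs = [i * dx for i in range(-8000, 8001)]

norm = (1.0 / (alpha * math.pi)) ** 0.25
psi = [norm * math.exp(-x * x / (2 * alpha)) for x in xs]

# <x> = 0 by symmetry; <x^2> as a Riemann sum
x2 = sum(x * x * p * p for x, p in zip(xs, psi)) * dx

# <p^2> = -hbar^2 * integral of psi * psi'', via central differences
p2 = 0.0
for i in range(1, len(xs) - 1):
    d2 = (psi[i + 1] - 2 * psi[i] + psi[i - 1]) / dx**2
    p2 += -hbar**2 * psi[i] * d2 * dx

dx_psi = math.sqrt(x2)   # (Delta x)_psi, since <x> = 0
dp_psi = math.sqrt(p2)   # (Delta p)_psi, since <p> = 0
```

The computed uncertainties match $(\Delta x)^2_\psi = \alpha/2$ and $(\Delta p)^2_\psi = \hbar^2/2\alpha$ up to discretization error, saturating the bound $\hbar/2$.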
+
+To understand where the uncertainty principle comes from, we have to understand commutation relations.
+\subsubsection*{Commutation relations}
+We are going to define a strange thing called the \emph{commutator}. At first sight, this seems like an arbitrary definition, but it turns out to be a really important concept in quantum mechanics.
+
+\begin{defi}[Commutator]
+ Let $Q$ and $S$ be operators. Then the \emph{commutator} is denoted and defined by
+ \[
+ [Q, S] = QS - SQ.
+ \]
+ This is a measure of the lack of commutativity of the two operators.
+
+ In particular, the commutator of position and momentum is
+ \[
+ [\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar.
+ \]
+\end{defi}
+This relation results from a simple application of the product rule:
+\[
+ (\hat{x}\hat{p} - \hat{p}\hat{x}) \psi = -i\hbar x\psi' + i\hbar(x \psi)' = i\hbar \psi.
+\]
+Note that if $\alpha$ and $\beta$ are any real constants, then the operators
+\[
+ X = \hat{x} - \alpha,\quad P = \hat{p} - \beta
+\]
+also obey
+\[
+ [X, P] = i\hbar.
+\]
+This is something we will use when proving the uncertainty principle.
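We can also see the relation $[\hat{x}, \hat{p}] = i\hbar$ at work numerically: applying $\hat{x}\hat{p} - \hat{p}\hat{x}$ to a sample wavefunction, with the derivative approximated by central differences, returns $i\hbar\psi$ up to discretization error. This is a sketch, with $\hbar = 1$ and an arbitrary test function.

```python
import cmath

# Verify [x, p] psi = i*hbar*psi numerically (hbar = 1): apply
# x(p psi) - p(x psi), with p = -i*hbar d/dx via central differences,
# to a sample wavefunction.
hbar = 1.0
dx = 1e-3
xs = [i * dx for i in range(-4000, 4001)]
psi = [cmath.exp(-x * x + 0.5j * x) for x in xs]

def deriv(f):
    """Central-difference derivative; endpoints are dropped."""
    return [(f[i + 1] - f[i - 1]) / (2 * dx) for i in range(1, len(f) - 1)]

p_psi = [-1j * hbar * d for d in deriv(psi)]
x_p_psi = [x * v for x, v in zip(xs[1:-1], p_psi)]

x_psi = [x * v for x, v in zip(xs, psi)]
p_x_psi = [-1j * hbar * d for d in deriv(x_psi)]

commutator = [a - b for a, b in zip(x_p_psi, p_x_psi)]
# compare with i*hbar*psi on the interior grid points
max_err = max(abs(c - 1j * hbar * p)
              for c, p in zip(commutator, psi[1:-1]))
```

The discrete commutator agrees with $i\hbar\psi$ to second order in the grid spacing.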
+
+Recall that when we proved Ehrenfest's theorem, the last step was to calculate the commutator:
+\begin{align*}
+ [\hat{x}, H] &= \hat{x}H - H\hat{x} = \frac{i\hbar}{m}\hat{p}\\
+ [\hat{p}, H] &= \hat{p}H - H\hat{p} = -i\hbar V'(\hat{x}).
+\end{align*}
+Since $H$ is defined in terms of $\hat{p}$ and $\hat{x}$, we can indeed find these relations just using the commutator relations between $\hat{x}$ and $\hat{p}$ (plus a few more basic facts about commutators).
+
+Commutator relations are important in quantum mechanics. When we first defined the momentum operator $\hat{p}$ as $-i\hbar \frac{\partial}{\partial x}$, you might have wondered where this weird definition came from. This definition naturally comes up if we require that $\hat{x}$ is ``multiply by $x$'' (so that the delta function $\delta(x - x_0)$ is the eigenstate with definite position $x_0$), and that $\hat{x}$ and $\hat{p}$ satisfy this commutator relation. With these requirements, $\hat{p}$ \emph{must} be defined as that derivative.
+
+Then one might ask, why would we want $\hat{x}$ and $\hat{p}$ to satisfy this commutator relation? It turns out that in classical dynamics, there is something similar known as the \emph{Poisson bracket} $\{\ph, \ph\}$, where we have $\{x, p\} = 1$. To get from classical dynamics to quantum mechanics, we just have to promote our Poisson brackets to commutator brackets, and multiply by $i\hbar$.
+
+\subsubsection*{Proof of uncertainty principle}
+\begin{proof}[Proof of uncertainty principle]
+ Choose $\alpha = \bra \hat{x}\ket _\psi$ and $\beta = \bra \hat{p}\ket_\psi$, and define
+ \[
+ X = \hat{x} - \alpha,\quad P = \hat{p} - \beta.
+ \]
+ Then we have
+ \begin{align*}
+ (\Delta x)_\psi^2 &= (\psi, X^2 \psi) = (X\psi, X\psi) = \|X\psi\|^2\\
+ (\Delta p)_\psi^2 &= (\psi, P^2 \psi) = (P\psi, P\psi) = \|P\psi\|^2
+ \end{align*}
+ Then we have
+ \begin{align*}
+ (\Delta x)_\psi (\Delta p)_\psi &= \|X\psi\|\|P\psi\|\\
+ &\geq |(X \psi, P \psi)|\\
+ &\geq |\im(X \psi, P \psi)|\\
+ &= \left|\frac{1}{2i}\Big[(X\psi, P\psi) - (P\psi, X\psi)\Big]\right|\\
+ &= \left|\frac{1}{2i}\Big[(\psi, XP\psi) - (\psi, PX\psi)\Big]\right|\\
+ &= \left|\frac{1}{2i}(\psi, [X, P]\psi)\right|\\
+ &= \left|\frac{\hbar}{2}(\psi, \psi)\right|\\
+ &= \frac{\hbar}{2}.
+ \end{align*}
+ So done.
+\end{proof}
+\section{More results in one dimension}
+\subsection{Gaussian wavepackets}
+When we solve Schr\"odinger's equation, what we get is a ``wave'' that represents the probability of finding our thing at each position. However, in real life, we don't think of particles as being randomly distributed over different places. Instead, particles are localized to some small regions of space.
+
+These would be represented by wavefunctions in which most of the distribution is concentrated in some small region:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [semithick] (-3, 0) -- (3, 0);
+
+ \draw [domain=-1:1,samples=50, mblue] plot ({2 * \x}, {1.5 * exp(-7 * \x * \x)});
+ \end{tikzpicture}
+\end{center}
+These are known as \emph{wavepackets}.
+\begin{defi}[Wavepacket]
+ A wavefunction localised in space (about some point, on some scale) is usually called a \emph{wavepacket}.
+\end{defi}
+This is a rather loose definition, and can refer to anything that is localized in space. Often, we would like to consider a particular kind of wavepacket, known as a \emph{Gaussian wavepacket}.
+
+\begin{defi}[Gaussian wavepacket]
+ A \emph{Gaussian wavepacket} is given by
+ \[
+ \Psi_0(x, t) = \left(\frac{\alpha}{\pi}\right)^{1/4} \frac{1}{\gamma(t)^{1/2}} e^{-x^2/2\gamma(t)},
+ \]
+for some $\gamma(t)$.
+\end{defi}
+These are particularly nice wavefunctions. For example, we can show that for a Gaussian wavepacket, $(\Delta x)_\psi (\Delta p)_\psi = \frac{\hbar}{2}$ exactly, and uncertainty is minimized.
+
+The Gaussian wavepacket is a solution of the time-dependent Schr\"odinger equation (with $V = 0$) for
+\[
+ \gamma(t) = \alpha + \frac{i\hbar}{m} t.
+\]
+Substituting this $\gamma(t)$ into our equation, we find that the probability density is
+\[
+ P_0(x, t) = |\Psi_0(x, t)|^2 = \left(\frac{\alpha}{\pi}\right)^{1/2} \frac{1}{|\gamma(t)|} e^{-\alpha x^2/|\gamma(t)|^2},
+\]
+which is peaked at $x = 0$. This corresponds to a particle at rest at the origin, spreading out with time.
+
+A related solution to the time-dependent Schr\"odinger equation with $V = 0$ is a moving particle:
+\[
+ \Psi_u(x, t) = \Psi_0(x - ut, t) \exp\left(i\frac{mu}{\hbar} x\right) \exp\left(-i \frac{mu^2}{2\hbar}t\right).
+\]
+The probability density resulting from this solution is
+\[
+ P_u(x, t) = |\Psi_u(x, t)|^2 = P_0(x - ut, t).
+\]
+So this corresponds to a particle moving with velocity $u$. Furthermore, we get
+\[
+ \bra \hat{p}\ket_{\Psi_u} = mu.
+\]
+This agrees with the classical momentum, mass $\times$ velocity.
+
+We see that wavepackets do indeed behave like particles, in the sense that we can set them moving and the quantum momentum of these objects does indeed behave like the classical momentum. In fact, we will soon attempt to send them to a brick wall and see what happens.
+
+In the limit $\alpha \to \infty$, our particle becomes more and more spread out in space. The uncertainty in the position becomes larger and larger, while the momentum becomes more and more definite. Then the wavefunction above resembles something like
+\[
+ \Psi(x, t) = Ce^{ikx}e^{-iEt/\hbar},
+\]
+which is a momentum eigenstate with $\hbar k = mu$ and energy $E = \frac{1}{2}mu^2 = \frac{\hbar^2 k^2}{2m}$. Note, however, that this is not normalizable.
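We can verify directly that this plane wave solves the free time-dependent Schr\"odinger equation, comparing both sides via finite differences. This is a sketch with $\hbar = m = 1$ and arbitrary choices of $k$ and $C$.

```python
import cmath

# Check that Psi = C e^{ikx} e^{-iEt/hbar} with E = hbar^2 k^2 / (2m)
# solves i*hbar dPsi/dt = -(hbar^2/2m) d^2Psi/dx^2, using finite
# differences at a sample point (hbar = m = 1).
hbar = m = 1.0
k, C = 2.0, 1.5
E = hbar**2 * k**2 / (2 * m)

def Psi(x, t):
    return C * cmath.exp(1j * k * x) * cmath.exp(-1j * E * t / hbar)

x0, t0, h = 0.3, 0.7, 1e-4
dt_Psi = (Psi(x0, t0 + h) - Psi(x0, t0 - h)) / (2 * h)
d2x_Psi = (Psi(x0 + h, t0) - 2 * Psi(x0, t0) + Psi(x0 - h, t0)) / h**2

lhs = 1j * hbar * dt_Psi
rhs = -hbar**2 / (2 * m) * d2x_Psi
residual = abs(lhs - rhs)
```

Note that $|\Psi| = |C|$ everywhere, which is exactly why the state is not normalizable.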
+
+\subsection{Scattering}
+Consider the time-dependent Schr\"odinger equation with a potential barrier. We would like to send a wavepacket towards the barrier and see what happens.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mred, semithick] (-3, 0) -- (2, 0) -- (2, 2) -- (3, 2) -- (3, 0) -- (5, 0);
+
+ \draw [dashed] (-1.5, -0.5) -- (-1.5, 2);
+ \draw [domain=-1:1,samples=50, mblue] plot ({\x - 1.5}, {1.5 * exp(-7 * \x * \x)});
+
+ \draw [->] (-1.5, 0.8) -- +(0.5, 0) node [right] {$u$};
+
+ \node [right] at (-1.5, 1.6) {$\Psi$};
+ \end{tikzpicture}
+\end{center}
+Classically, we would expect the particle to either pass through the barrier or get reflected. However, in quantum mechanics, we would expect it to ``partly'' pass through and ``partly'' get reflected. So the resultant wave is something like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mred, semithick] (-3, 0) -- (2, 0) -- (2, 2) -- (3, 2) -- (3, 0) -- (5, 0);
+
+ \draw [dashed] (-1, -0.5) -- (-1, 2);
+ \draw [domain=-1:1,samples=50, mblue] plot ({\x - 1}, {1 * exp(-7 * \x * \x)});
+ \draw [->] (-1, 0.5) -- +(-0.5, 0);
+ \node [right] at (-1, 1.1) {$A\Psi_{\mathrm{ref}}$};
+
+
+ \draw [dashed] (4, -0.5) -- (4, 2);
+ \draw [domain=-1:1,samples=50, mblue] plot ({\x + 4}, {0.8 * exp(-7 * \x * \x)});
+ \draw [->] (4, 0.4) -- +(0.5, 0);
+ \node [right] at (4, 0.9) {$B\Psi_{\mathrm{tr}}$};
+
+ \end{tikzpicture}
+\end{center}
+Here $\Psi$, $\Psi_{\mathrm{ref}}$ and $\Psi_{\mathrm{tr}}$ are normalized wavefunctions, and
+\[
+ P_{\mathrm{ref}} = |A|^2,\quad P_{\mathrm{tr}} = |B|^2.
+\]
+are the probabilities of reflection and transmission respectively.
+
+This is generally hard to solve. Scattering problems are much simpler to solve for momentum eigenstates of the form $e^{ikx}$. However, these are not normalizable wavefunctions, and despite being mathematically convenient, we are not allowed to use them directly, since they do not make sense physically. These, in some sense, represent particles that are ``infinitely spread out'' and can appear anywhere in the universe with equal probability, which doesn't really make sense.
+
+There are two ways we can get around this problem. We know that we can construct normalized momentum eigenstates for a single particle confined in a box $-\frac{\ell}{2} \leq x \leq \frac{\ell}{2}$, namely
+\[
+ \psi(x) = \frac{1}{\sqrt{\ell}} e^{ikx},
+\]
+where the periodic boundary conditions require $\psi(x + \ell) = \psi(x)$, i.e.\ $k = \frac{2\pi n}{\ell}$ for some integer $n$. After calculations have been done, the box can be removed by taking the limit $\ell \to \infty$.
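The box normalization is easy to check numerically: with $k = 2\pi n/\ell$, the state satisfies the periodic boundary condition, and $|\psi|^2$ integrates to $1$ over the box. The values of $\ell$ and $n$ below are arbitrary choices for illustration.

```python
import cmath
import math

# Box-normalized momentum eigenstate psi(x) = e^{ikx}/sqrt(l), with
# k = 2*pi*n/l: check normalization over the box and periodicity.
l = 5.0
n = 3
k = 2 * math.pi * n / l

def psi(x):
    return cmath.exp(1j * k * x) / math.sqrt(l)

# Riemann sum of |psi|^2 over [-l/2, l/2]
steps = 10000
dx = l / steps
norm_sq = sum(abs(psi(-l / 2 + i * dx)) ** 2 for i in range(steps)) * dx

# periodic boundary condition psi(x + l) = psi(x)
periodicity_err = abs(psi(0.37 + l) - psi(0.37))
```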
+
+Identical results are obtained more conveniently by allowing $\Psi(x, t)$ to represent \emph{beams} of infinitely many particles, with $|\Psi(x, t)|^2$ being the density of the number of particles (per unit length) at $x, t$. When we do this, instead of having one particle and watching it evolve, we constantly send in particles so that the system does not appear to change with time. This allows us to find \emph{steady states}. Mathematically, this corresponds to finding solutions to the Schr\"odinger equation that do not change with time. To determine, say, the probability of reflection, roughly speaking, we look at the proportion of particles moving left compared to the proportion of particles moving right in this steady state.
+
+In principle, this interpretation is obtained by considering a constant stream of wavepackets and using some limiting/averaging procedure, but we usually don't care about these formalities.
+
+For these particle beams, $\Psi(x, t)$ is bounded, but no longer normalizable. Recall that for a single particle, the probability current was defined as
+\[
+ j(x, t) = -\frac{i\hbar}{2m}(\Psi^* \Psi' - \Psi \Psi'^*).
+\]
+If we have a particle beam instead of a particle, and $\Psi$ is the particle density instead of the probability distribution, $j$ now represents the \emph{flux} of particles at $x, t$, i.e.\ the number of particles passing the point $x$ in unit time.
+
+Recall that a stationary state of energy $E$ is of the form $\Psi(x, t) = \psi(x) e^{-iEt/\hbar}$. We have
+\[
+ |\Psi(x, t)|^2 = |\psi(x)|^2,
+\]
+and
+\[
+ j(x, t) = -\frac{i\hbar}{2m}(\psi^* \psi' - \psi\psi'^*).
+\]
+Often, when solving a scattering problem, the solution will involve sums of momentum eigenstates. So it helps to understand these better.
+
+Our momentum eigenstates are
+\[
+ \psi(x) = Ce^{ikx},
+\]
+which are solutions to the time-independent Schr\"odinger equation with $V = 0$ with $E = \frac{\hbar^2 k^2}{2m}$.
+
+Applying the momentum operator, we find that $p = \hbar k$ is the momentum of each particle in the beam, and $|\psi(x)|^2 = |C|^2$ is the density of particles in the beam. We can also evaluate the current to be
+\[
+ j = \frac{\hbar k}{m}|C|^2.
+\]
+This makes sense. $\frac{\hbar k}{m} = \frac{p}{m}$ is the velocity of the particles, and $|C|^2$ is how many particles we have. So this still roughly corresponds to what we used to have classically.
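We can confirm the flux formula by evaluating the current $j$ for a plane-wave beam with a finite-difference derivative. This is a sketch with $\hbar = m = 1$ and arbitrary $k$ and $C$.

```python
import cmath

# Check j = (hbar*k/m) |C|^2 for a beam psi = C e^{ikx}, using
# j = -(i*hbar/2m)(psi* psi' - psi psi'*) with a central-difference
# derivative (hbar = m = 1).
hbar = m = 1.0
k, C = 1.3, 2.0 - 1.0j
dx = 1e-5

def psi(x):
    return C * cmath.exp(1j * k * x)

x0 = 0.4
dpsi = (psi(x0 + dx) - psi(x0 - dx)) / (2 * dx)
p = psi(x0)
j = (-1j * hbar / (2 * m)) * (p.conjugate() * dpsi - p * dpsi.conjugate())
expected = hbar * k / m * abs(C) ** 2
```

The current comes out real (up to rounding), as it must for a physical flux.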
+
+In scattering problems, we will seek the transmitted and reflected flux $j_{\mathrm{tr}}$, $j_{\mathrm{ref}}$ in terms of the incident flux $j_{\mathrm{inc}}$, and the probabilities for transmission and reflection are then given by
+\[
+ P_{\mathrm{tr}} = \frac{|j_{\mathrm{tr}}|}{|j_{\mathrm{inc}}|},\quad P_{\mathrm{ref}} = \frac{|j_{\mathrm{ref}}|}{|j_{\mathrm{inc}}|}.
+\]
+\subsection{Potential step}
+Consider the time-independent Schr\"odinger equation for a step potential
+\[
+ V(x) =
+ \begin{cases}
+ 0 & x \leq 0\\
+ U & x > 0
+ \end{cases},
+\]
+where $U > 0$ is a constant. The Schr\"odinger equation is, in case you've forgotten,
+\[
+ \frac{-\hbar^2}{2m}\psi'' + V(x) \psi = E\psi.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2, 0) -- (2, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 1.5) node [above] {$V(x)$};
+ \draw [thick, mred] (-2, 0) -- (0, 0) -- (0, 1) -- (2, 1);
+ \end{tikzpicture}
+\end{center}
+We require $\psi$ and $\psi'$ to be continuous at $x = 0$.
+
+We can consider two different cases:
+\begin{enumerate}
+ \item $0 < E < U$: We apply the standard method, introducing constants $k, \kappa > 0$ such that
+ \[
+ E = \frac{\hbar^2 k^2}{2m},\quad U - E = \frac{\hbar^2 \kappa^2}{2m}.
+ \]
+ Then the Schr\"odinger equation becomes
+ \[
+ \begin{cases}
+ \psi'' + k^2 \psi = 0 & x < 0\\
+ \psi'' - \kappa^2 \psi = 0 & x > 0
+ \end{cases}
+ \]
+ The solutions are $\psi = I e^{ikx} + Re^{-ikx}$ for $x < 0$, and $\psi = C e^{-\kappa x}$ for $x > 0$ (since $\psi$ has to be bounded).
+
+ Since $\psi$ and $\psi'$ are continuous at $x = 0$, we have the equations
+ \[
+ \begin{cases}
+ I + R = C\\
+ ik I - ik R = - \kappa C
+ \end{cases}.
+ \]
+ So we have
+ \[
+ R = \frac{k - i \kappa}{k + i \kappa} I,\quad C = \frac{2k}{k + i\kappa} I.
+ \]
+ If $x < 0$, $\psi(x)$ is a superposition of beams (momentum eigenstates) with $|I|^2$ particles per unit length in the incident part, and $|R|^2$ particles per unit length in the reflected part, with $p = \pm \hbar k$. The current is
+ \[
+ j = j_{\mathrm{inc}} + j_{\mathrm{ref}} = |I|^2 \frac{\hbar k}{m} - |R|^2 \frac{\hbar k}{m}.
+ \]
+ The probability of reflection is
+ \[
+ P_{\mathrm{ref}} = \frac{|j_{\mathrm{ref}}|}{|j_{\mathrm{inc}}|} = \frac{|R|^2}{|I|^2} = 1,
+ \]
+ which makes sense.
+
+ On the right hand side, we have $j = 0$. So $P_{\mathrm{tr}} = 0$. However, $|\psi(x)|^2 \not= 0$ in this classically forbidden region.
+
+ \item $E > U$: This time, we set
+ \[
+ E = \frac{\hbar^2 k^2}{2m},\quad E - U = \frac{\hbar^2 \kappa^2}{2m},
+ \]
+ with $k, \kappa > 0$. Then the Schr\"odinger equation becomes
+ \[
+ \begin{cases}
+ \psi'' + k^2 \psi = 0 & x < 0\\
+ \psi'' + \kappa^2 \psi = 0 & x > 0
+ \end{cases}
+ \]
+ Then we find $\psi = I e^{ikx} + Re^{-ikx}$ on $x < 0$, with $\psi = T e^{i\kappa x}$ on $x > 0$. Note that it is in principle possible to get an $e^{-i\kappa x}$ term on $x > 0$, but this would correspond to sending in a particle from the right. We, by choice, assume there is no such term.
+
+ We now match $\psi$ and $\psi'$ at $x = 0$. Then we get the equations
+ \[
+ \begin{cases}
+ I + R = T\\
+ ikI - ikR = i\kappa T.
+ \end{cases}
+ \]
+ We can solve these to obtain
+ \[
+ R = \frac{k - \kappa}{k + \kappa}I,\quad T = \frac{2k}{k + \kappa} I.
+ \]
+ Our flux on the left is now
+ \[
+ j = j_{\mathrm{inc}} + j_{\mathrm{ref}} = |I|^2 \frac{\hbar k}{m} - |R|^2 \frac{\hbar k}{m},
+ \]
+ while the flux on the right is
+ \[
+ j = j_{\mathrm{tr}} = |T|^2 \frac{\hbar \kappa}{m}.
+ \]
+ The probability of reflection is
+ \[
+ P_{\mathrm{ref}} = \frac{|j_{\mathrm{ref}}|}{|j_{\mathrm{inc}}|} = \frac{|R|^2}{|I|^2} = \left(\frac{k - \kappa}{k + \kappa}\right)^2,
+ \]
+ while the probability of transmission is
+ \[
+ P_{\mathrm{tr}} = \frac{|j_{\mathrm{tr}}|}{|j_{\mathrm{inc}}|} = \frac{|T|^2 \kappa}{|I|^2 k} = \frac{4k\kappa}{ (k + \kappa)^2}.
+ \]
+ Note that $P_{\mathrm{ref}} + P_{\mathrm{tr}} = 1$.
+
+ Classically, we would expect all particles to be transmitted, since they all have sufficient energy. However, quantum mechanically, there is still a probability of being reflected.
+\end{enumerate}
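The algebra for the $E > U$ case is easy to check numerically: the matching conditions at $x = 0$ hold, and the reflection and transmission probabilities sum to $1$. The energies below are arbitrary choices, and we set $\hbar = m = 1$.

```python
import math

# Potential step with E > U (hbar = m = 1): check the matching
# conditions and P_ref + P_tr = 1.
hbar = m = 1.0
E, U = 3.0, 1.0                          # arbitrary, with E > U > 0
k = math.sqrt(2 * m * E) / hbar          # wavenumber for x < 0
kappa = math.sqrt(2 * m * (E - U)) / hbar  # wavenumber for x > 0

I = 1.0
R = (k - kappa) / (k + kappa) * I
T = 2 * k / (k + kappa) * I

# residuals of the matching conditions at x = 0
continuity_err = abs((I + R) - T)            # psi continuous
derivative_err = abs(k * (I - R) - kappa * T)  # psi' continuous

P_ref = (abs(R) / abs(I)) ** 2
P_tr = (abs(T) ** 2 * kappa) / (abs(I) ** 2 * k)
```

Note that $P_{\mathrm{tr}}$ carries the factor $\kappa/k$ from the fluxes; using $|T|^2/|I|^2$ alone would not conserve probability.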
+
+\subsection{Potential barrier}
+So far, the results were \emph{slightly} weird. We just showed that however high energy we have, there is still a non-zero probability of being reflected by the potential step. However, things are \emph{really} weird when we have a potential barrier.
+
+Consider the following potential:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (5.5, 0) node [right] {$x$};
+ \draw [->] (2, 0) -- (2, 3) node [above] {$V$};
+ \node [left] at (2, 2) {$U$};
+ \node [below] at (2, 0) {$0$};
+
+ \node [below] at (3, 0) {$a$};
+ \draw [mred, semithick] (-3, 0) -- (2, 0) -- (2, 2) -- (3, 2) -- (3, 0) -- (5, 0);
+ \end{tikzpicture}
+\end{center}
+We can write this as
+\[
+ V(x) =
+ \begin{cases}
+ 0 & x \leq 0\\
+ U & 0 < x < a\\
+ 0 & x \geq a
+ \end{cases}
+\]
+We consider a stationary state with energy $E$ with $0 < E < U$. We set the constants
+\[
+ E = \frac{\hbar^2 k^2}{2m},\quad U - E = \frac{\hbar^2\kappa^2}{2m}.
+\]
+Then the Schr\"odinger equations become
+\begin{align*}
+ \psi'' + k^2 \psi &= 0 && x < 0\\
+ \psi'' - \kappa^2 \psi &= 0 && 0 < x < a\\
+ \psi'' + k^2 \psi &= 0 && x > a
+\end{align*}
+So we get
+\begin{align*}
+ \psi &= I e^{ikx} + Re^{-ikx}&& x < 0\\
+ \psi &= Ae^{\kappa x} + Be^{-\kappa x}&& 0 < x < a\\
+ \psi &= Te^{ikx} && x > a
+\end{align*}
+Matching $\psi$ and $\psi'$ at $x = 0$ and $a$ gives the equations
+\begin{align*}
+ I + R &= A + B\\
+ ik(I - R) &= \kappa (A - B)\\
+ A e^{\kappa a} + Be^{-\kappa a} &= Te^{ika}\\
+ \kappa(Ae^{\kappa a} - Be^{-\kappa a}) &= ik Te^{ika}.
+\end{align*}
+We can solve these to obtain
+\begin{align*}
+ I + \frac{\kappa - ik}{\kappa + ik}R &= Te^{ika} e^{-\kappa a}\\
+ I + \frac{\kappa + ik}{\kappa - ik}R &= Te^{ika} e^{\kappa a}.
+\end{align*}
+After \st{lots of} \emph{some} algebra, we obtain
+\begin{align*}
+ T &= I e^{-ika}\left(\cosh \kappa a - i\frac{k^2 - \kappa^2}{2k\kappa} \sinh \kappa a\right)^{-1}
+\end{align*}
+To interpret this, we use the currents
+\[
+ j = j_{\mathrm{inc}} + j_{\mathrm{ref}} = (|I|^2 - |R|^2) \frac{\hbar k}{m}
+\]
+for $x < 0$. On the other hand, we have
+\[
+ j = j_{\mathrm{tr}} = |T|^2 \frac{\hbar k}{m}
+\]
+for $x > a$. We can use these to find the transmission probability, and it turns out to be
+\[
+ P_{\mathrm{tr}} = \frac{|j_{\mathrm{tr}}|}{|j_{\mathrm{inc}}|} = \frac{|T|^2}{|I|^2} = \left[1 + \frac{U^2}{4E(U - E)} \sinh^2 \kappa a\right]^{-1}.
+\]
+This demonstrates \emph{quantum tunneling}. There is a non-zero probability that the particles can pass through the potential barrier even though it classically does not have enough energy. In particular, for $\kappa a \gg 1$, the probability of tunneling decays as $e^{-2\kappa a}$. This is important, since it allows certain reactions with high potential barrier to occur in practice even if the reactants do not classically have enough energy to overcome it.
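We can illustrate the exponential decay numerically: for $\kappa a \gg 1$, doubling the barrier width multiplies $P_{\mathrm{tr}}$ by approximately $e^{-2\kappa a}$. The energies below are arbitrary choices, and we set $\hbar = m = 1$.

```python
import math

# Tunneling through a square barrier (hbar = m = 1):
# P_tr(a) = [1 + U^2/(4E(U-E)) sinh^2(kappa*a)]^(-1),
# which decays like exp(-2*kappa*a) for kappa*a >> 1.
hbar = m = 1.0
E, U = 1.0, 2.0                      # arbitrary, with 0 < E < U
kappa = math.sqrt(2 * m * (U - E)) / hbar

def P_tr(a):
    return 1.0 / (1.0 + U**2 / (4 * E * (U - E)) * math.sinh(kappa * a)**2)

# deep in the kappa*a >> 1 regime, doubling the width should
# multiply P_tr by roughly exp(-2*kappa*a)
a = 10.0
ratio = P_tr(2 * a) / P_tr(a)
predicted = math.exp(-2 * kappa * a)
```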
+
+\subsection{General features of stationary states}
+We are going to end the chapter by looking at the difference between bound states and scattering states in general.
+
+Consider the time-independent Schr\"odinger equation for a particle of mass $m$
+\[
+ H\psi = -\frac{\hbar^2}{2m}\psi'' + V(x) \psi = E\psi,
+\]
+with the potential $V(x) \to 0$ as $x \to \pm \infty$. This is a second-order ordinary differential equation for a complex function $\psi$, and hence there are two complex constants in the general solution. However, since the equation is linear, one of these constants merely rescales $\psi \mapsto \lambda \psi$, which gives no change in physical state. So we have just one constant to play with.
+
+As $|x| \to \infty$, our equation simply becomes
+\[
+ -\frac{\hbar^2}{2m} \psi'' = E\psi.
+\]
+So we get
+\[
+ \psi \sim
+ \begin{cases}
+ Ae^{ikx} + Be^{-ikx} & E = \frac{\hbar^2 k^2}{2m} > 0\\
+ Ae^{\kappa x} + Be^{-\kappa x} & E = -\frac{\hbar^2 \kappa^2}{2m} < 0.
+ \end{cases}
+\]
+So we get two kinds of stationary states depending on the sign of $E$. These correspond to bound states and scattering states.
+
+\subsubsection*{Bound state solutions, \texorpdfstring{$E < 0$}{E \lt 0}}
+If we want $\psi$ to be normalizable, then there are 2 boundary conditions for $\psi$:
+\[
+ \psi \sim
+ \begin{cases}
+ Ae^{\kappa x} & x \to -\infty\\
+ B e^{-\kappa x} & x \to +\infty
+ \end{cases}
+\]
+This is an overdetermined system, since we have too many boundary conditions. Solutions exist only when we are lucky, and only for certain values of $E$. So bound state energy levels are quantized. We may find several bound states or none.
+
+\begin{defi}[Ground and excited states]
+ The lowest energy eigenstate is called the \emph{ground state}. Eigenstates with higher energies are called \emph{excited states}.
+\end{defi}
+
+\subsubsection*{Scattering state solutions, \texorpdfstring{$E > 0$}{E \gt 0}}
+Now $\psi$ is not normalizable, but bounded. We can view this as a particle beam, and the boundary conditions determine the direction of the incoming beam. So we have
+\[
+ \psi \sim
+ \begin{cases}
+ I e^{ikx} + R e^{-ikx} & x \to -\infty\\
+ Te^{ikx} & x \to +\infty
+ \end{cases}
+\]
+This is no longer overdetermined, since we have more free constants. For any $E > 0$ there is a solution (after imposing a condition on one complex constant), which gives
+\[
+ j \sim
+ \begin{cases}
+ j_{\mathrm{inc}} + j_{\mathrm{ref}}\\
+ j_{\mathrm{tr}}
+ \end{cases}
+ =
+ \begin{cases}
+ |I|^2 \frac{\hbar k}{m} - |R|^2 \frac{\hbar k}{m} & x \to -\infty\\
+ |T|^2 \frac{\hbar k}{m} & x \to +\infty
+ \end{cases}
+\]
+We also get the reflection and transmission probabilities
+\begin{align*}
+ P_{\mathrm{ref}} &= |A_{\mathrm{ref}}|^2 = \frac{|j_{\mathrm{ref}}|}{|j_{\mathrm{inc}}|}\\
+ P_{\mathrm{tr}} &= |A_{\mathrm{tr}}|^2 = \frac{|j_{\mathrm{tr}}|}{|j_{\mathrm{inc}}|},
+\end{align*}
+where
+\begin{align*}
+ A_{\mathrm{ref}}(k) &= \frac{R}{I} \\
+ A_{\mathrm{tr}}(k) &= \frac{T}{I}
+\end{align*}
+are the reflection and transmission \emph{amplitudes}. In quantum mechanics, ``amplitude'' generally refers to things that give probabilities when squared.
+
+\section{Axioms for quantum mechanics}
+We are going to sum up everything we've had so far into a big picture. To do so, we will state a few axioms of quantum mechanics and see how they relate to what we have done so far.
+
+Our axioms (or postulates) will be marked by bullet points in this chapter, as we go through them one by one.
+
+\subsection{States and observables}
+\begin{itemize}
+ \item States of a quantum system correspond to non-zero elements of a complex vector space $V$ (which has nothing to do with the potential), with $\psi$ and $\alpha \psi$ physically equivalent for all $\alpha \in \C\setminus \{0\}$.
+
+ Furthermore, there is a complex inner product $(\phi, \psi)$ defined on $V$ satisfying
+ \begin{align*}
+ (\phi, \alpha_1 \psi_1 + \alpha_2 \psi_2) &= \alpha_1(\phi, \psi_1) + \alpha_2(\phi, \psi_2)\\
+ (\beta_1 \phi_1 + \beta_2 \phi_2, \psi) &= \beta_1^*(\phi_1, \psi) + \beta_2^*(\phi_2, \psi)\\
+ (\phi, \psi) &= (\psi, \phi)^*\\
+ \|\psi\|^2 &= (\psi, \psi) \geq 0\\
+ \|\psi\| &= 0 \text{ if and only if } \psi = 0.
+ \end{align*}
+ Note that this is just the linear algebra definition of what a complex inner product should satisfy.
+
+ An operator $A$ is a \emph{linear map} $V \to V$ satisfying
+ \[
+ A(\alpha \psi + \beta \phi) = \alpha A\psi + \beta A\phi.
+ \]
+ For any operator $A$, the \emph{Hermitian conjugate} or \emph{adjoint}, denoted $A^{\dagger}$ is defined to be the unique operator satisfying
+ \[
+ (\phi, A^{\dagger}\psi) = (A \phi, \psi).
+ \]
+ An operator $Q$ is called \emph{Hermitian} or \emph{self-adjoint} if
+ \[
+ Q^{\dagger} = Q.
+ \]
+ A state $\chi \not= 0$ is an \emph{eigenstate} of $Q$ with \emph{eigenvalue} $\lambda$ if
+ \[
+ Q \chi = \lambda \chi.
+ \]
+ The set of all eigenvalues of $Q$ is called the \emph{spectrum} of $Q$.
+
+ A measurable quantity, or \emph{observable}, in a quantum system corresponds to a Hermitian operator.
+\end{itemize}
+So far, we have worked with the vector space of functions (that are sufficiently smooth), and used the integral as the inner product. However, in general, we can work with arbitrary complex vector spaces and arbitrary inner products. This will become necessary when we, say, study the spin of electrons in IID Principles of Quantum Mechanics.
+
+Even in the case of the vector space of (sufficiently smooth) functions, we will only work with this formalism informally. When we try to make things precise, we will encounter a lot of subtleties such as operators not being well-defined everywhere.
+
+\subsection{Measurements}
+\subsubsection*{Key results for Hermitian operators}
+Our next set of axioms relates Hermitian operators to physical measurements. Before we state them, we first prove some purely mathematical results about Hermitian operators.
+\begin{prop}
+ Let $Q$ be Hermitian (an observable), i.e.\ $Q^\dagger = Q$. Then
+ \begin{enumerate}
+ \item Eigenvalues of $Q$ are real.
+ \item Eigenstates of $Q$ with different eigenvalues are orthogonal (with respect to the complex inner product).
+ \item Any state can be written as a (possibly infinite) linear combination of eigenstates of $Q$, i.e.\ eigenstates of $Q$ provide a basis for $V$. Alternatively, the set of eigenstates is \emph{complete}.
+ \end{enumerate}
+\end{prop}
+Note that the last property is not actually necessarily true. For example, the position and momentum operators do not have eigenstates at all, since the ``eigenstates'' are not normalizable. However, for many of the operators we care about, this is indeed true, and everything we say in this section only applies to operators for which this holds.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Since $Q$ is Hermitian, we have, by definition,
+ \[
+ (\chi, Q\chi) = (Q\chi, \chi).
+ \]
+ Let $\chi$ be an eigenvector with eigenvalue $\lambda$, i.e.\ $Q\chi = \lambda \chi$. Then we have
+ \[
+ (\chi, \lambda \chi) = (\lambda \chi, \chi).
+ \]
+ So we get
+ \[
+ \lambda (\chi, \chi) = \lambda^*(\chi, \chi).
+ \]
+ Since $(\chi, \chi)\not= 0$, we must have $\lambda = \lambda^*$. So $\lambda$ is real.
+ \item Let $Q \chi = \lambda \chi$ and $Q \phi = \mu \phi$. Then we have
+ \[
+ (\phi, Q\chi) = (Q\phi, \chi).
+ \]
+ So we have
+ \[
+ (\phi, \lambda\chi) = (\mu \phi, \chi).
+ \]
+ In other words,
+ \[
+ \lambda (\phi, \chi) = \mu^* (\phi, \chi) = \mu (\phi, \chi).
+ \]
+ Since $\lambda \not= \mu$ by assumption, we know that $(\phi, \chi) = 0$.
+ \item We will not attempt to justify this, or discuss issues of convergence.\qedhere
+ \end{enumerate}
+\end{proof}
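For a finite-dimensional illustration of the first two parts, we can work out a concrete $2 \times 2$ Hermitian matrix by hand: its eigenvalues are real by construction, and eigenvectors with distinct eigenvalues are orthogonal. The entries below are arbitrary choices.

```python
import math

# Q = [[a, b], [conj(b), c]] with a, c real is Hermitian: check that
# its eigenvalues are real and its eigenvectors are orthogonal.
a, c = 1.0, 3.0
b = 0.5 + 0.25j

# eigenvalues from the characteristic polynomial: real, since the
# discriminant (a-c)^2/4 + |b|^2 is non-negative
mean = (a + c) / 2
disc = math.sqrt(((a - c) / 2) ** 2 + abs(b) ** 2)
lam1, lam2 = mean - disc, mean + disc

# eigenvectors: (Q - lam) v = 0 gives v = (b, lam - a) up to scale
v1 = (b, lam1 - a)
v2 = (b, lam2 - a)

def apply_Q(v):
    return (a * v[0] + b * v[1], b.conjugate() * v[0] + c * v[1])

def inner(u, v):
    """Complex inner product, conjugate-linear in the first slot."""
    return sum(x.conjugate() * y for x, y in zip(u, v))

# residual of the eigenvalue equation, and the overlap of eigenvectors
res1 = max(abs(apply_Q(v1)[i] - lam1 * v1[i]) for i in range(2))
overlap = inner(v1, v2)
```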
+
+\subsubsection*{Measurement axioms}
+Consider an observable $Q$ with discrete spectrum (i.e.\ eigenvalues) and normalized eigenstates. From the results we just had, any given state can be written
+\[
+ \psi = \sum_n \alpha_n \chi_n,
+\]
+with $Q\chi_n = \lambda_n \chi_n$, and $(\chi_m, \chi_n) = \delta_{mn}$. This means we can calculate $\alpha_n$ by
+\[
+ \alpha_n = (\chi_n, \psi).
+\]
+We can assume also that $\alpha_n \not= 0$ since we can just pretend the terms with coefficient $0$ do not exist, and that the corresponding $\lambda_n$ are distinct by choosing an appropriate basis. Then
+\begin{itemize}
+ \item The outcome of a measurement is some eigenvalue of $Q$.
+ \item The probability of obtaining $\lambda_n$ is
+ \[
+ P_n = |\alpha_n|^2,
+ \]
+ where $\alpha_n = (\chi_n, \psi)$ is the \emph{amplitude}.
+ \item The measurement is instantaneous and forces the system into the state $\chi_n$. This is the new state immediately after the measurement is made.
+\end{itemize}
+The last statement is rather weird. However, if you think hard, this must be the case. If we measure the state of the system, and get, say, $3$, then if we measure it immediately afterwards, we would expect to get the result $3$ again with certainty, instead of being randomly distributed like the original state. So after a measurement, the system must be forced into the corresponding eigenstate.
+
+Note that these axioms are consistent in the following sense. If $\psi$ is normalized, then
+\begin{align*}
+ (\psi, \psi) &= \left(\sum \alpha_m \chi_m, \sum \alpha_n \chi_n\right)\\
+ &= \sum_{m, n} \alpha_m^* \alpha_n (\chi_m, \chi_n)\\
+ &= \sum_{m, n} \alpha_m^* \alpha_n \delta_{mn}\\
+ &= \sum_n |\alpha_n|^2\\
+ &= \sum_n P_n\\
+ &= 1
+\end{align*}
+So if the state is normalized, then the sum of probabilities is indeed $1$.
+
+\begin{eg}
+ Consider the harmonic oscillator, and consider the operator $Q = H$ with eigenfunctions $\chi_n = \psi_n$ and eigenvalues
+ \[
+ \lambda_n = E_n = \hbar \omega \left(n + \frac{1}{2}\right).
+ \]
+ Suppose we have prepared our system in the state
+ \[
+ \psi = \frac{1}{\sqrt{6}}(\psi_0 + 2\psi_1 - i \psi_4).
+ \]
+ Then the coefficients are
+ \[
+ \alpha_0 = \frac{1}{\sqrt{6}},\quad \alpha_1 = \frac{2}{\sqrt{6}},\quad \alpha_4 = \frac{-i}{\sqrt{6}}.
+ \]
+ This is normalized since
+ \[
+ \|\psi\|^2 = |\alpha_0|^2 + |\alpha_1|^2 + |\alpha_4|^2 = 1.
+ \]
+ Measuring the energy gives the following probabilities:
+ \begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ \textbf{Energy} & \textbf{Probability}\\
+ \midrule
+ $\displaystyle E_0 = \frac{1}{2} \hbar \omega$ & $\displaystyle P_0 = \frac{1}{6}$\\\addlinespace
+ $\displaystyle E_1 = \frac{3}{2} \hbar \omega$ & $\displaystyle P_1 = \frac{2}{3}$\\\addlinespace
+ $\displaystyle E_4 = \frac{9}{2} \hbar \omega$ & $\displaystyle P_4 = \frac{1}{6}$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ If a measurement gives $E_1$, then $\psi_1$ is the new state immediately after measurement.
+\end{eg}
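As a quick numerical sanity check of the measurement axioms (a Python sketch, not part of the notes), we can recompute the probabilities in this example directly from the amplitudes:

```python
from math import sqrt

# Amplitudes of psi = (psi_0 + 2 psi_1 - i psi_4)/sqrt(6)
alphas = {0: 1/sqrt(6), 1: 2/sqrt(6), 4: -1j/sqrt(6)}

# Measurement axiom: P_n = |alpha_n|^2
probs = {n: abs(a)**2 for n, a in alphas.items()}
total = sum(probs.values())   # should be 1 for a normalized state
```

This recovers $P_0 = \frac{1}{6}$, $P_1 = \frac{2}{3}$, $P_4 = \frac{1}{6}$, summing to $1$ as the consistency check above demands.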
+
+\subsubsection*{Expressions for expectation and uncertainty}
+We can use these probability amplitudes to express the expectation and uncertainty of a state in a more familiar way.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item The expectation value of $Q$ in state $\psi$ is
+ \[
+ \bra Q\ket_\psi = (\psi, Q\psi) = \sum \lambda_n P_n,
+ \]
+ with notation as in the previous part.
+ \item The uncertainty $(\Delta Q)_\psi$ is given by
+ \[
+ (\Delta Q)^2_\psi = \bra (Q - \bra Q\ket_\psi)^2\ket_\psi = \bra Q^2\ket_\psi - \bra Q\ket_\psi^2 = \sum_n (\lambda_n - \bra Q\ket_\psi)^2 P_n.
+ \]
+ \end{enumerate}
+\end{prop}
+From this, we see that $\psi$ is an eigenstate of $Q$ with eigenvalue $\lambda$ if and only if $\bra Q\ket_\psi = \lambda$ and $(\Delta Q)_\psi = 0$.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $\psi = \sum \alpha_n \chi_n$. Then
+ \[
+ Q \psi = \sum \alpha_n \lambda_n \chi_n.
+ \]
+ So we have
+ \[
+    (\psi, Q\psi) = \sum_{m, n}(\alpha_m \chi_m, \alpha_n \lambda_n \chi_n) = \sum_n \alpha_n^* \alpha_n \lambda_n = \sum \lambda_n P_n.
+ \]
+ \item This is a direct check using the first expression.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ Consider the harmonic oscillator as we had in the previous example. Then the expectation value is
+ \[
+ \bra H\ket_\psi = \sum_n E_n P_n = \left(\frac{1}{2} \cdot \frac{1}{6} + \frac{3}{2} \cdot \frac{2}{3} + \frac{9}{2}\cdot \frac{1}{6}\right)\hbar \omega = \frac{11}{6}\hbar \omega.
+ \]
+\end{eg}
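The arithmetic is easy to check numerically. The following Python snippet (working in units $\hbar\omega = 1$; not part of the notes) recovers $\frac{11}{6}$:

```python
# Energies E_n = (n + 1/2) hbar omega; work in units hbar*omega = 1
probs = {0: 1/6, 1: 2/3, 4: 1/6}        # from the measurement example
expectation = sum((n + 0.5)*p for n, p in probs.items())
```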
+Note that the measurement axiom tells us that after measurement, the system is then forced into the eigenstate. So when we said that we interpret the expectation value as the ``average result for many measurements'', we do not mean measuring a single system many many times. Instead, we prepare a lot of copies of the system in state $\psi$, and measure each of them once.
+
+\subsection{Evolution in time}
+\begin{itemize}
+ \item The state of a quantum system $\Psi(t)$ obeys the Schr\"odinger equation
+ \[
+ i\hbar \dot{\Psi} = H\Psi,
+ \]
+ where $H$ is a Hermitian operator, the \emph{Hamiltonian}; this holds at all times except at the instant a measurement is made.
+\end{itemize}
+In principle, this is all there is in quantum mechanics. However, for practical purposes, it is helpful to note the following:
+
+\subsubsection*{Stationary states}
+Consider the energy eigenstates with
+\[
+ H\psi_n = E_n \psi_n,\quad (\psi_m, \psi_n) = \delta_{mn}.
+\]
+Then we have certain simple solutions of the Schr\"odinger equation of the form
+\[
+ \Psi_n = \psi_n e^{-iE_n t/\hbar}.
+\]
+In general, given an initial state
+\[
+ \Psi(0) = \sum_n \alpha_n \psi_n,
+\]
+since the Schr\"odinger equation is linear, we can get the following solution for all time:
+\[
+ \Psi(t) = \sum_n \alpha_n e^{-iE_n t/\hbar} \psi_n.
+\]
+\begin{eg}
+ Consider again the harmonic oscillator with initial state
+ \[
+ \Psi(0) = \frac{1}{\sqrt{6}}(\psi_0 + 2\psi_1 - i \psi_4).
+ \]
+ Then $\Psi(t)$ is given by
+ \[
+ \Psi(t) = \frac{1}{\sqrt{6}} (\psi_0 e^{-i\omega t/2} + 2\psi_1e^{-3i\omega t/2} - i \psi_4 e^{-9i\omega t/2}).
+ \]
+\end{eg}
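One useful consequence: each phase $e^{-iE_n t/\hbar}$ has unit modulus, so the energy measurement probabilities $|\alpha_n|^2$ are constant in time. A small Python sketch (units $\hbar = \omega = 1$; not part of the notes) confirms this for the state above:

```python
from math import sqrt
from cmath import exp

hbar = omega = 1.0
alphas = {0: 1/sqrt(6), 1: 2/sqrt(6), 4: -1j/sqrt(6)}
E = {n: hbar*omega*(n + 0.5) for n in alphas}

def coeff(n, t):
    """Coefficient of psi_n in Psi(t): alpha_n exp(-i E_n t / hbar)."""
    return alphas[n]*exp(-1j*E[n]*t/hbar)

# the phases have unit modulus, so these probabilities never change
p_initial = {n: abs(coeff(n, 0.0))**2 for n in alphas}
p_later = {n: abs(coeff(n, 7.3))**2 for n in alphas}
```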
+
+\subsubsection*{Ehrenfest's theorem (general form)}
+\begin{thm}[Ehrenfest's theorem]
+ If $Q$ is any operator with no explicit time dependence, then
+ \[
+ i\hbar \frac{\d}{\d t}\bra Q\ket_\Psi = \bra [Q, H]\ket_\Psi,
+ \]
+ where
+ \[
+ [Q, H] = QH - HQ
+ \]
+ is the commutator.
+\end{thm}
+
+\begin{proof}
+ If $Q$ does not have time dependence, then
+ \begin{align*}
+ i\hbar \frac{\d}{\d t}(\Psi, Q\Psi) &= (-i\hbar \dot\Psi, Q\Psi) + (\Psi, Q i\hbar \dot\Psi)\\
+ &= (-H\Psi, Q\Psi) + (\Psi, QH \Psi)\\
+ &= (\Psi, (QH - HQ)\Psi)\\
+ &= (\Psi, [Q, H] \Psi).\qedhere
+ \end{align*}
+\end{proof}
+If $Q$ has explicit time dependence, then we have an extra term on the right, and have
+\[
+ i\hbar \frac{\d}{\d t}\bra Q\ket_\Psi = \bra [Q, H]\ket_\Psi + i\hbar \bra \dot{Q} \ket_\Psi.
+\]
+These general versions correspond to the classical equations of motion in the Hamiltonian formalism, which you will (hopefully) study in IIC Classical Dynamics.
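Ehrenfest's theorem can be illustrated on a hypothetical two-level toy model (a Python sketch in units $\hbar = 1$; the matrices $H$, $Q$ and the amplitudes are invented for illustration, not from the notes). We compare $i\hbar \frac{\d}{\d t}\bra Q\ket$, computed by a finite difference, with $\bra [Q, H]\ket$:

```python
from cmath import exp

hbar = 1.0

# Toy two-level system: H diagonal with energies E1, E2, and a
# Hermitian observable Q (illustrative choices)
E1, E2 = 1.0, 2.5
Q = [[0, 1], [1, 0]]

a, b = 0.6, 0.8                     # real amplitudes, a^2 + b^2 = 1

def psi(t):
    # for diagonal H, each energy eigenstate just picks up a phase
    return [a*exp(-1j*E1*t/hbar), b*exp(-1j*E2*t/hbar)]

def expval(A, v):
    # (v, A v) with the convention (phi, psi) = sum_i phi_i^* psi_i
    w = [A[0][0]*v[0] + A[0][1]*v[1],
         A[1][0]*v[0] + A[1][1]*v[1]]
    return v[0].conjugate()*w[0] + v[1].conjugate()*w[1]

# [Q, H] for diagonal H has entries Q_ij (E_j - E_i)
C = [[0, Q[0][1]*(E2 - E1)],
     [Q[1][0]*(E1 - E2), 0]]

t, h = 0.3, 1e-6
lhs = 1j*hbar*(expval(Q, psi(t + h)).real
               - expval(Q, psi(t - h)).real)/(2*h)
rhs = expval(C, psi(t))
```

Both sides come out equal (up to finite-difference error), as the theorem asserts.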
+
+\subsection{Discrete and continuous spectra}
+In stating the measurement axioms, we have assumed that our spectrum of eigenvalues of $Q$ was discrete, and we got nice results about measurements. However, for certain systems, the spectra of $\hat{p}$ and $H$ may be continuous. This is the same problem when we are faced with non-normalizable momentum wavefunctions.
+
+To solve this problem, we can make the spectra discrete by a technical device: we put the system in a ``box'' of length $\ell$ with suitable boundary conditions on $\psi(x)$. We can then take $\ell \to \infty$ at the end of the calculation. We've discussed this to some extent for momentum eigenstates. We shall revisit that scenario in the more formal framework we've developed.
+
+\begin{eg}
+ Consider $\psi(x)$ with periodic boundary conditions $\psi(x + \ell) = \psi(x)$. So we can restrict to
+ \[
+ -\frac{\ell}{2} \leq x \leq \frac{\ell}{2}.
+ \]
+ We compare with the general axioms, where
+ \[
+ Q = \hat{p} = -i\hbar \frac{\d}{\d x}.
+ \]
+ The eigenstates are
+ \[
+ \chi_n(x) = \frac{1}{\sqrt{\ell}} e^{i k_n x},\quad k_n = \frac{2\pi n}{\ell}.
+ \]
+ Now we have discrete eigenvalues as before given by
+ \[
+ \lambda_n = \hbar k_n.
+ \]
+ We know that the states are orthonormal on $-\frac{\ell}{2} \leq x \leq \frac{\ell}{2}$, i.e.
+ \[
+    (\chi_m, \chi_n) = \int_{-\frac{\ell}{2}}^{\frac{\ell}{2}} \chi_m(x)^* \chi_n(x)\;\d x = \delta_{mn}.
+ \]
+ We can thus expand our solution in terms of the eigenstates to get a complex Fourier series
+ \[
+ \psi(x) = \sum_n \alpha_n \chi_n(x),
+ \]
+ where the amplitudes are given by
+ \[
+ \alpha_n = (\chi_n, \psi).
+ \]
+  When we take the limit $\ell \to \infty$, the Fourier series becomes a Fourier integral.
+\end{eg}
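The orthonormality relation $(\chi_m, \chi_n) = \delta_{mn}$ for the box eigenstates is easy to verify numerically. A minimal stdlib Python sketch (the box length and grid size are arbitrary choices):

```python
from cmath import exp
from math import pi

l = 2.0            # box length (arbitrary)
N = 256            # uniform grid points over one period
dx = l/N

def chi(n, x):
    """Normalized momentum eigenstate, k_n = 2 pi n / l."""
    return exp(2j*pi*n*x/l)/l**0.5

def inner(m, n):
    # (chi_m, chi_n) approximated by a Riemann sum on the periodic grid
    return dx*sum(chi(m, -l/2 + j*dx).conjugate()*chi(n, -l/2 + j*dx)
                  for j in range(N))
```

On a uniform periodic grid the Riemann sum of $e^{i(k_n - k_m)x}$ vanishes exactly for $m \not= n$, so the check is essentially exact.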
+
+There is another approach to this problem (which is non-examinable). We can also extend from discrete to continuous spectra as follows. We replace the discrete label $n$ with some continuous label $\xi$. Then we have the equation
+\[
+ Q \chi_\xi = \lambda_\xi \chi_\xi.
+\]
+These are eigenstates with an orthonormality condition
+\[
+ (\chi_\xi, \chi_\eta) = \delta(\xi - \eta),
+\]
+where we replaced our old $\delta_{mn}$ with $\delta(\xi - \eta)$, the Dirac delta function. To perform the expansion in eigenstates, the discrete sum becomes an integral. We have
+\[
+ \psi = \int \alpha_\xi \chi_\xi \;\d \xi,
+\]
+where the coefficients are
+\[
+ \alpha_\xi = (\chi_\xi, \psi).
+\]
+In the discrete case, $|\alpha_n|^2$ is the probability mass function. The obvious generalization here would be to let $|\alpha_\xi|^2$ be our probability density function. More precisely,
+\[
+ \int_a^b |\alpha_\xi|^2 \;\d \xi = \text{probability that the result corresponds to }a \leq \xi \leq b.
+\]
+\begin{eg}
+ Consider the particle in one dimension with position as our operator. We will see that this is just our previous interpretation of the wavefunction.
+
+ Let our operator be $Q = \hat{x}$. Then our eigenstates are
+ \[
+ \chi_\xi (x) = \delta(x - \xi).
+ \]
+ The corresponding eigenvalue is
+ \[
+ \lambda_\xi = \xi.
+ \]
+ This is true since
+ \[
+ \hat{x} \chi_\xi(x) = x \delta (x - \xi) = \xi \delta(x - \xi) = \xi \chi_\xi(x),
+ \]
+ since $\delta(x - \xi)$ is non-zero only when $x = \xi$.
+
+  With these eigenstates, we can expand $\psi$ as
+ \[
+ \psi(x) = \int \alpha_\xi \chi_\xi(x)\;\d \xi = \int \alpha_\xi \delta(x - \xi) \;\d \xi = \alpha_x.
+ \]
+ So our coefficients are given by $\alpha_\xi = \psi(\xi)$. So
+ \[
+ \int_a^b |\psi(\xi)|^2 \;\d \xi
+ \]
+ is indeed the probability of measuring the particle to be in $a \leq \xi \leq b$. So we recover our original interpretation of the wavefunction.
+\end{eg}
+
+We see that as long as we are happy to work with generalized functions like delta functions, things become much nicer and clearer. Of course, we have to be more careful and study distributions properly if we want to do this.
+
+\subsection{Degeneracy and simultaneous measurements}
+\subsubsection*{Degeneracy}
+\begin{defi}[Degeneracy]
+ For any observable $Q$, the number of linearly independent eigenstates with eigenvalue $\lambda$ is the \emph{degeneracy} of the eigenvalue. In other words, the degeneracy is the dimension of the eigenspace
+ \[
+ V_\lambda = \{\psi: Q \psi = \lambda \psi\}.
+ \]
+ An eigenvalue is \emph{non-degenerate} if the degeneracy is exactly $1$, and is \emph{degenerate} if the degeneracy is more than $1$.
+
+ We say two states are \emph{degenerate} if they have the same eigenvalue.
+\end{defi}
+In one dimension, the energy bound states are always non-degenerate, as we have seen in example sheet 2. However, in three dimensions, energy bound states may be degenerate. If $\lambda$ is degenerate, then there is a large freedom in choosing an orthonormal basis for $V_\lambda$. Physically, we cannot distinguish degenerate states by measuring $Q$ alone. So we would like to measure something \emph{else} as well to distinguish these eigenstates. When can we do this? We have previously seen that we cannot simultaneously measure position and momentum. It turns out the criterion for whether we can do this is simple.
+
+\subsubsection*{Commuting observables}
+Recall that after performing a measurement, the state is forced into the corresponding eigenstate. Hence, to simultaneously measure two observables $A$ and $B$, we need the state to be simultaneously an eigenstate of $A$ and $B$. In other words, simultaneous measurement is possible if and only if there is a basis for $V$ consisting of simultaneous or joint eigenstates $\chi_n$ with
+\[
+ A \chi_n = \lambda_n \chi_n,\quad B \chi_n = \mu_n \chi_n.
+\]
+Our measurement axioms imply that if the state is in $\chi_n$, then measuring $A$, $B$ in rapid succession, in any order, will give definite results $\lambda_n$ and $\mu_n$ respectively (assuming the time interval between each pair of measurements is short enough that we can neglect the evolution of state in time).
+
+From IB Linear Algebra, a necessary and sufficient condition for $A$ and $B$ to be simultaneously measurable (i.e.\ simultaneously diagonalizable) is for $A$ and $B$ to commute, i.e.
+\[
+ [A, B] = AB - BA = 0.
+\]
+This is consistent with a ``generalized uncertainty relation''
+\[
+ (\Delta A)_\psi (\Delta B)_\psi \geq \frac{1}{2} |\bra [A, B]\ket_\psi|,
+\]
+since if we have a state that is simultaneously an eigenstate for $A$ and $B$, then the uncertainties on the left would vanish. So $\bra [A, B]\ket_\psi = 0$. The proof of this relation is on example sheet 3.
+
+This will be a technique we will use to tackle degeneracy in general. If we have a system where, say, the energy is degenerate, we will attempt to find another operator that commutes with $H$, and try to further classify the underlying states. When dealing with the hydrogen atom later, we will use the angular momentum to separate the degenerate energy eigenstates.
+
+\section{Quantum mechanics in three dimensions}
+Finally, we will move on and discuss quantum systems in three dimensions, since we (probably) live in a three-dimensional world. These problems are usually substantially more difficult, since we have to deal with partial derivatives. Also, unlike the case of one dimension, we will run into degenerate states, where multiple eigenstates have the same energy. We thus have to find some other observables (that commute with the Hamiltonian) in order to fully classify the eigenstates.
+
+\subsection{Introduction}
+To begin with, we translate everything we've had for the one-dimensional world into the three-dimensional setting.
+
+A quantum state of a particle in three dimensions is given by a wavefunction $\psi(\mathbf{x})$ at fixed time, or $\Psi(\mathbf{x}, t)$ for a state evolving in time. The inner product is defined as
+\[
+ (\varphi, \psi) = \int \varphi(\mathbf{x})^* \psi(\mathbf{x})\;\d^3 x.
+\]
+We adopt the convention that no limits on the integral means integrating over all space. If $\psi$ is normalized, i.e.
+\[
+ \|\psi\|^2 = (\psi, \psi) = \int |\psi(\mathbf{x})|^2 \;\d^3 x = 1,
+\]
+then the probability of measuring the particle to be inside a small volume $\delta V$ (containing $\mathbf{x}$) is
+\[
+ |\psi(\mathbf{x})|^2\; \delta V.
+\]
+The position and momentum are Hermitian operators
+\[
+ \hat{\mathbf{x}} = (\hat{x}_1, \hat{x}_2, \hat{x}_3),\quad \hat{x}_i \psi = x_i \psi
+\]
+and
+\[
+ \hat{\mathbf{p}} = (\hat{p}_1, \hat{p}_2, \hat{p}_3) = -i\hbar \nabla = -i\hbar \left(\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \frac{\partial}{\partial x_3}\right).
+\]
+We have the canonical commutation relations
+\[
+ [\hat{x}_i, \hat{p}_j] = i\hbar \delta_{ij},\quad [\hat{x}_i, \hat{x}_j] = [\hat{p}_i, \hat{p}_j] = 0.
+\]
+We see that position and momentum in different directions do not come into conflict. We can have a definite position in one direction and a definite momentum in another direction. The uncertainty principle only kicks in when we look at the position and momentum in the same direction. In particular, we have
+\[
+ (\Delta x_i)(\Delta p_j) \geq \frac{\hbar}{2} \delta_{ij}.
+\]
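The relation $[\hat{x}, \hat{p}] = i\hbar$ can be checked numerically by applying the operators to a test wavefunction. A Python sketch (units $\hbar = 1$; the Gaussian test function and evaluation point are arbitrary choices) using finite differences for $\d/\d x$:

```python
from math import exp

hbar = 1.0
h = 1e-5                       # finite-difference step

def psi(x):
    """Test wavefunction (a Gaussian)."""
    return exp(-x*x)

def ddx(f, x):
    """Central-difference derivative."""
    return (f(x + h) - f(x - h))/(2*h)

def p_hat(f):
    """Momentum operator p = -i hbar d/dx, applied numerically."""
    return lambda x: -1j*hbar*ddx(f, x)

def x_hat(f):
    """Position operator: (x f)(x) = x f(x)."""
    return lambda x: x*f(x)

x0 = 0.7
# [x, p] psi = x(p psi) - p(x psi); should equal i hbar psi
comm = x_hat(p_hat(psi))(x0) - p_hat(x_hat(psi))(x0)
```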
+Similar to what we did in classical mechanics, we assume our particles are \emph{structureless}.
+\begin{defi}[Structureless particle]
+ A \emph{structureless particle} is one for which all observables can be written in terms of position and momentum.
+\end{defi}
+In reality, many particles are not structureless, and (at least) possess an additional quantity known as ``spin''. We will not study these in this course, but only mention it briefly near the end. Instead, we will just pretend all particles are structureless for simplicity.
+
+The Hamiltonian for a structureless particle in a potential $V$ is
+\[
+ H = \frac{\hat{\mathbf{p}}^2}{2m} + V(\hat{\mathbf{x}}) = -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{x}).
+\]
+The time-dependent Schr\"odinger equation is
+\[
+ i \hbar \frac{\partial \Psi}{\partial t} = H \Psi = -\frac{\hbar^2}{2m} \nabla^2\Psi + V(\mathbf{x})\Psi.
+\]
+The probability current is defined as
+\[
+ \mathbf{j} = -\frac{i\hbar}{2m} (\Psi^* \nabla \Psi - \Psi \nabla \Psi^*).
+\]
+This probability current obeys the conservation equation
+\[
+ \frac{\partial}{\partial t}|\Psi(\mathbf{x}, t)|^2 = -\nabla \cdot \mathbf{j}.
+\]
+This implies that for any fixed volume $V$,
+\[
+ \frac{\d}{\d t}\int_V |\Psi(\mathbf{x}, t)|^2 \;\d^3 x = -\int_V \nabla \cdot \mathbf{j}\;\d^3 x = -\int_{\partial V} \mathbf{j}\cdot \d \mathbf{S},
+\]
+and this is true for any fixed volume $V$ with boundary $\partial V$. So if
+\[
+ |\Psi(\mathbf{x}, t)| \to 0
+\]
+sufficiently rapidly as $|\mathbf{x}| \to \infty$, then the boundary term disappears and
+\[
+ \frac{\d}{\d t}\int |\Psi(\mathbf{x}, t)|^2 \;\d^3 x = 0.
+\]
+This is the conservation of probability (or normalization).
+
+\subsection{Separable eigenstate solutions}
+How do we solve the Schr\"odinger equation in three dimensions? Things are more complicated, since we have partial derivatives. The problems we can solve are often those that reduce to one-dimensional problems, using the symmetry of the system. For example, we will solve the hydrogen atom by exploiting the fact that the potential is spherically symmetric. To do so, we will use the method of separation of variables, as we've seen in the IB Methods course.
+
+Consider the simpler case where we only have two dimensions. The time-independent Schr\"odinger equation then gives
+\[
+ H\psi = -\frac{\hbar^2}{2m} \left(\frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2}\right)\psi + V(x_1, x_2)\psi = E\psi.
+\]
+We are going to consider potentials of the form
+\[
+ V(x_1, x_2) = U_1(x_1) + U_2(x_2).
+\]
+The Hamiltonian then splits into
+\[
+ H = H_1 + H_2,
+\]
+where
+\[
+ H_i = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x_i^2} + U_i(x_i).
+\]
+We look for separable solutions of the form
+\[
+ \psi = \chi_1(x_1) \chi_2(x_2).
+\]
+The Schr\"odinger equation, upon division by $\psi = \chi_1\chi_2$, then gives
+\[
+ \left(-\frac{\hbar^2}{2m}\frac{\chi_1''}{\chi_1} + U_1\right) + \left(-\frac{\hbar^2}{2m}\frac{\chi_2''}{\chi_2} + U_2\right) = E.
+\]
+Since each term is independent of $x_2$ and $x_1$ respectively, we have
+\[
+ H_1 \chi_1 = E_1 \chi_1, \quad H_2 \chi_2 = E_2 \chi_2,
+\]
+with
+\[
+ E_1 + E_2 = E.
+\]
+This is the usual separation of variables, but here we can interpret this physically --- in this scenario, the two dimensions are de-coupled, and we can treat them separately. The individual $E_1$ and $E_2$ are just the contributions from each component to the energy. Thus the process of separation of variables can be seen as looking for joint or simultaneous eigenstates for $H_1$ and $H_2$, noting the fact that $[H_1, H_2] = 0$.
+
+\begin{eg}
+ Consider the harmonic oscillator with
+ \[
+    U_i(x_i) = \frac{1}{2}m\omega^2 x_i^2.
+ \]
+ This corresponds to saying
+ \[
+ V = \frac{1}{2} m\omega^2 \|\mathbf{x}\|^2 = \frac{1}{2}m \omega^2 (x_1^2 + x_2^2).
+ \]
+ Now we have
+ \[
+ H_i = H_0(\hat{x}_i, \hat{p}_i),
+ \]
+ with $H_0$ the usual harmonic oscillator Hamiltonian.
+
+ Using the previous results for the one-dimensional harmonic oscillator, we have
+ \[
+ \chi_1 = \psi_{n_1}(x_1),\quad \chi_2 = \psi_{n_2}(x_2),
+ \]
+ where $\psi_i$ is the $i$th eigenstate of the one-dimensional harmonic oscillator, and $n_1, n_2 = 0, 1, 2, \cdots$.
+
+ The corresponding energies are
+ \[
+ E_i = \hbar \omega\left(n_i + \frac{1}{2}\right).
+ \]
+ The energy eigenvalues of the two-dimensional oscillator are thus
+ \[
+    E = \hbar \omega\left(n_1 + n_2 + 1\right)
+ \]
+ for
+ \[
+ \psi(x_1, x_2) = \psi_{n_1}(x_1) \psi_{n_2}(x_2).
+ \]
+ We have the following energies and states:
+ \begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+      \textbf{State} & \textbf{Energy} & \textbf{Possible states}\\
+ \midrule
+ Ground state & $E = \hbar \omega$ & $\psi = \psi_0(x_1)\psi_0(x_2)$\\\addlinespace
+ \multirow{2}{*}{1st excited state} & \multirow{2}{*}{$E = 2\hbar \omega$} & $\psi = \psi_1(x_1) \psi_0(x_2)$\\
+ & & $\psi = \psi_0(x_1)\psi_1(x_2)$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We see there is a degeneracy of $2$ for the first excited state.
+\end{eg}
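In general, the level $E = \hbar\omega(N + 1)$ of the two-dimensional oscillator has degeneracy $N + 1$, since it is reached by $(n_1, n_2) = (0, N), (1, N - 1), \cdots, (N, 0)$. A short Python enumeration (not part of the notes) confirms this:

```python
from collections import Counter

# E = hbar omega (n1 + n2 + 1); count states per level (hbar*omega = 1)
N_max = 10
degeneracy = Counter(n1 + n2 + 1
                     for n1 in range(N_max)
                     for n2 in range(N_max)
                     if n1 + n2 < N_max)
# the level with E = N + 1 should contain exactly N + 1 states
```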
+
+Separable solutions also arise naturally when $H$ (i.e.\ the potential $V$) has some symmetry. For example, if the potential is spherically symmetric, we can find solutions
+\[
+ \psi(\mathbf{x}) = R(r) Y(\theta, \phi),
+\]
+where $r, \theta, \phi$ are the spherical polars.
+\subsection{Angular momentum}
+Recall that in IA Dynamics and Relativity, we also solved a spherically symmetric system in classical dynamics. We obtained beautiful, exact solutions for the orbits of a particle in a gravitational potential in terms of conic sections.
+
+To do so, we came up with a conserved quantity known as the \emph{angular momentum}. Using the fact that this is conserved, we were able to make a lot of calculations and derive many results. We can also use this conserved quantity to understand, say, why the Earth doesn't just fall into the sun directly.
+
+To understand spherically symmetric potentials in quantum mechanics, it is also helpful to understand the angular momentum.
+\subsubsection*{Definitions}
+
+\begin{defi}[Angular momentum]
+ The \emph{angular momentum} is a vector of operators
+ \[
+ \mathbf{L} = \hat{\mathbf{x}}\wedge \hat{\mathbf{p}} = -i\hbar \mathbf{x} \wedge \nabla.
+ \]
+ In components, this is given by
+ \[
+ L_i = \varepsilon_{ijk} \hat{x}_j\hat{p}_k = -i\hbar \varepsilon_{ijk} x_j \frac{\partial}{\partial x_k}.
+ \]
+ For example, we have
+ \[
+ L_3 = \hat{x}_1 \hat{p}_2 - \hat{x}_2 \hat{p}_1 = -i\hbar \left(x_1 \frac{\partial}{\partial x_2} - x_2 \frac{\partial}{\partial x_1}\right).
+ \]
+\end{defi}
+Note that this is just the same definition as in classical dynamics, but with everything promoted to operators.
+
+These operators are Hermitian, i.e.
+\[
+ L_i^\dagger = L_i,
+\]
+since $\hat{x}_i$ and $\hat{p}_j$ are themselves Hermitian, and noting the fact that $\hat{x}_i$ and $\hat{p}_j$ commute whenever $i \not= j$.
+
+Each component of $\mathbf{L}$ is the angular momentum in one direction. We can also consider the length of $\mathbf{L}$, which is the \emph{total} angular momentum.
+
+\begin{defi}[Total angular momentum]
+ The \emph{total angular momentum} operator is
+ \[
+    \mathbf{L}^2 = L_i L_i = L_1^2 + L_2^2 + L_3^2.
+ \]
+\end{defi}
+Again, this is Hermitian, and hence an observable.
+
+\subsubsection*{Commutation relations}
+We have the following important commutation relations for angular momentum:
+\[
+ [L_i, L_j] = i\hbar \varepsilon_{ijk} L_k.\tag{i}
+\]
+For example,
+\[
+ [L_1, L_2] = i\hbar L_3.
+\]
+Recall that in classical dynamics, an important result is that the angular momentum is conserved in \emph{all directions}. However, we know we can't do this in quantum mechanics, since the angular momentum operators do not commute, and we cannot measure all of them. This is why we have this $\mathbf{L}^2$. It captures the total angular momentum, and \emph{commutes with the angular momentum operators}:
+\[
+ [\mathbf{L}^2, L_i] = 0\tag{ii}
+\]
+for all $i$.
+
+Finally, we also have the following commutation relations.
+\[
+ [L_i, \hat{x}_j] = i\hbar \varepsilon_{ijk} \hat{x}_k,\quad [L_i, \hat{p}_j] = i\hbar \varepsilon_{ijk} \hat{p}_k.\tag{iii}
+\]
+These are rather important results. $L_i$ and $\mathbf{L}^2$ are observables, but (i) implies we cannot simultaneously measure, say $L_1$ and $L_2$. The best that can be done is to measure $\mathbf{L}^2$ and $L_3$, say. This is possible by (ii). Finally, (iii) allows computation of the commutator of $L_i$ with any function of $\hat{x}_i$ and $\hat{p}_i$.
+
+First, we want to prove these commutation relations.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $[L_i, L_j] = i\hbar \varepsilon_{ijk} L_k$.
+ \item $[\mathbf{L}^2, L_i] = 0$.
+ \item $[L_i, \hat{x}_j] = i\hbar \varepsilon_{ijk} \hat{x}_k$ and $[L_i, \hat{p}_j] = i\hbar \varepsilon_{ijk} \hat{p}_k$
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We first do a direct approach, and look at specific indices instead of general $i$ and $j$. We have
+ \begin{align*}
+ L_1L_2 &= (-i\hbar)^2 \left(x_2 \pd{x_3} - x_3 \pd{x_2}\right)\left(x_3 \pd{x_1} - x_1 \pd{x_3}\right)\\
+ &= -\hbar^2 \left(x_2 \pd{x_3} x_3 \pd{x_1} - x_1x_2 \pd[2]{x_3} - x_3^2\frac{\partial^2}{\partial x_2\partial x_1} + x_3 x_1 \frac{\partial^2}{\partial x_2 \partial x_3}\right)
+ \end{align*}
+ Now note that
+ \[
+      x_2 \pd{x_3} x_3 \pd{x_1} = x_2 x_3 \frac{\partial^2}{\partial x_3 \partial x_1} + x_2 \pd{x_1}.
+ \]
+ Similarly, we can compute $L_2L_1$ and find
+ \[
+ [L_1, L_2] = -\hbar^2 \left(x_2 \pd{x_1} - x_1 \pd{x_2}\right) = i\hbar L_3,
+ \]
+ where all the double derivatives cancel. So done.
+
+ Alternatively, we can do this in an abstract way. We have
+ \begin{align*}
+ L_i L_j &= \varepsilon_{iar} \hat{x}_a \hat{p}_r \varepsilon_{jbs} \hat{x}_b \hat{p}_s\\
+ &= \varepsilon_{iar} \varepsilon_{jbs} (\hat{x}_a \hat{p}_r \hat{x}_b \hat{p}_s)\\
+ &= \varepsilon_{iar} \varepsilon_{jbs} (\hat{x}_a (\hat{x}_b \hat{p}_r - [\hat{p}_r, \hat{x}_b]) \hat{p}_s)\\
+ &= \varepsilon_{iar} \varepsilon_{jbs} (\hat{x}_a \hat{x}_b \hat{p}_r \hat{p}_s - i\hbar \delta_{br} \hat{x}_a \hat{p}_s)
+ \end{align*}
+ Similarly, we have
+ \[
+ L_j L_i = \varepsilon_{iar} \varepsilon_{jbs} (\hat{x}_b \hat{x}_a \hat{p}_s \hat{p}_r - i\hbar \delta_{as} \hat{x}_b \hat{p}_r)
+ \]
+ Then the commutator is
+ \begin{align*}
+ L_i L_j - L_j L_i &= -i\hbar \varepsilon_{iar} \varepsilon_{jbs}(\delta_{br} \hat{x}_a \hat{p}_s - \delta_{as}\hat{x}_b \hat{p}_r)\\
+ &= -i\hbar (\varepsilon_{iab} \varepsilon_{jbs} \hat{x}_a \hat{p}_s - \varepsilon_{iar} \varepsilon_{jba} \hat{x}_b\hat{p}_r)\\
+ &= -i\hbar ((\delta_{is} \delta_{ja} - \delta_{ij} \delta_{as})\hat{x}_a \hat{p}_s - (\delta_{ib} \delta_{rj} - \delta_{ij}\delta_{rb}) \hat{x}_b \hat{p}_r)\\
+ &= i\hbar (\hat{x}_i \hat{p}_j - \hat{x}_j \hat{p}_i)\\
+ &= i\hbar \varepsilon_{ijk} L_k.
+ \end{align*}
+ So done.
+  \item This follows from (i) using the \emph{Leibniz property}:
+ \[
+ [A, BC] = [A, B] C + B[A, C].
+ \]
+ This property can be proved by directly expanding both sides, and the proof is uninteresting.
+
+ Using this, we get
+ \begin{align*}
+ [L_i, \mathbf{L}^2] &= [L_i, L_j L_j]\\
+ &= [L_i, L_j] L_j + L_j [L_i, L_j]\\
+ &= i\hbar \varepsilon_{ijk} (L_k L_j + L_j L_k)\\
+ &= 0
+ \end{align*}
+ where we get $0$ since we are contracting the antisymmetric tensor $\varepsilon_{ijk}$ with the symmetric tensor $L_k L_j + L_j L_k$.
+  \item We will use the Leibniz property again, but this time we have it the other way round:
+ \[
+ [AB, C] = [A, C]B + A[B, C].
+ \]
+ This follows immediately from the previous version since $[A, B] = -[B, A]$. Then we can compute
+ \begin{align*}
+ [L_i, \hat{x}_j] &= \varepsilon_{iab} [\hat{x}_a \hat{p}_b, \hat{x}_j]\\
+ &= \varepsilon_{iab} ([\hat{x}_a, \hat{x}_j] \hat{p}_b + \hat{x}_a [\hat{p}_b, \hat{x}_j])\\
+ &= \varepsilon_{iab} \hat{x}_a (-i\hbar \delta_{bj})\\
+ &= i\hbar \varepsilon_{ija} \hat{x}_a
+ \end{align*}
+ as claimed.
+
+ We also have
+ \begin{align*}
+ [L_i, \hat{p}_j] &= \varepsilon_{iab} [\hat{x}_a \hat{p}_b, \hat{p}_j]\\
+ &= \varepsilon_{iab} ([\hat{x}_a, \hat{p}_j] \hat{p}_b + \hat{x}_a [\hat{p}_b, \hat{p}_j])\\
+ &= \varepsilon_{iab} (i\hbar \delta_{aj} \hat{p}_b)\\
+ &= i\hbar \varepsilon_{ijb} \hat{p}_b.\qedhere
+ \end{align*}%\qedhere
+ \end{enumerate}
+\end{proof}
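The relation $[L_1, L_2] = i\hbar L_3$ can also be verified concretely in a finite-dimensional representation. Below is a Python sketch (units $\hbar = 1$; not part of the notes) using the spin-$1$ matrices $(L_i)_{jk} = -i\hbar \varepsilon_{ijk}$, which satisfy the same algebra:

```python
hbar = 1.0

def eps(i, j, k):
    """Levi-Civita symbol on indices 0, 1, 2."""
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((i, j, k), 0)

# Spin-1 (adjoint) matrices: (L_i)_{jk} = -i hbar eps_{ijk}
L = [[[-1j*hbar*eps(i, j, k) for k in range(3)] for j in range(3)]
     for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

C12 = comm(L[0], L[1])        # should equal i hbar L_3 entrywise
```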
+\subsubsection*{Spherical polars and spherical harmonics}
+Recall that angular momentum is something coming from rotation. So let's work with something with spherical symmetry. We will exploit the symmetry and work with spherical polar coordinates. To do so, we first need to convert everything we had so far from Cartesian coordinates to polar coordinates.
+
+We define our spherical polar coordinates $(r, \theta, \varphi)$ by the usual relations
+\begin{align*}
+ x_1 &= r \sin \theta \cos \varphi\\
+ x_2 &= r \sin \theta \sin \varphi\\
+ x_3 &= r\cos \theta.
+\end{align*}
+If we express our angular momentum operators as differential operators, then we can write them entirely in terms of $r, \theta$ and $\varphi$ using the chain rule. The formula for $L_3$ will be rather simple, since $x_3$ is our axis of rotation. However, those for $L_1$ and $L_2$ will be much more complicated. Instead of writing them out directly, we instead write down the formula for $L_{\pm} = L_1 \pm i L_2$. A routine application of the chain rule gives
+\begin{align*}
+ L_3 &= -i\hbar \pd{\varphi}\\
+ L_{\pm} = L_1 \pm i L_2 &= \pm \hbar e^{\pm i\varphi} \left(\pd{\theta} \pm i\cot \theta \pd{\varphi} \right)\\
+ \mathbf{L}^2 &= -\hbar^2 \left(\frac{1}{\sin \theta} \pd{\theta} \sin \theta\pd{\theta} + \frac{1}{\sin^2 \theta} \pd[2]{\varphi}\right).
+\end{align*}
+Note these operators involve only $\theta$ and $\varphi$. Furthermore, the expression for $\mathbf{L}^2$ is something we have all seen before --- we have
+\[
+ \nabla^2 = \frac{1}{r}\pd[2]{r} r - \frac{1}{\hbar^2 r^2} \mathbf{L}^2.
+\]
+Since we know
+\[
+ [L_3 , \mathbf{L}^2] = 0,
+\]
+there are simultaneous eigenfunctions of these operators, which we shall call $Y_{\ell m}(\theta, \varphi)$, with $\ell = 0, 1, 2, \cdots$ and $m = 0, \pm 1, \pm 2, \cdots, \pm \ell$. These have eigenvalues $\hbar m$ for $L_3$ and $\hbar^2 \ell(\ell + 1)$ for $\mathbf{L}^2$.
+
+In general, we have
+\[
+ Y_{\ell m} = \text{const } e^{im\varphi} P_{\ell}^m (\cos \theta),
+\]
+where $P_{\ell}^m$ is the \emph{associated Legendre function}. For the simplest case $m = 0$, we have
+\[
+ Y_{\ell 0} = \text{const } P_\ell(\cos \theta),
+\]
+where $P_\ell$ is the Legendre polynomial.
+
+Note that these details are not important. The important thing to take away is that there are solutions $Y_{\ell m}(\theta, \varphi)$, which we can happily use and be confident that someone out there understands these well.
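For the curious, the $m = 0$ case rests on the Legendre orthogonality relation $\int_{-1}^1 P_\ell(x) P_{\ell'}(x)\;\d x = \frac{2}{2\ell + 1}\delta_{\ell \ell'}$, which fixes the normalization constants. It is easy to verify numerically (a stdlib Python sketch; the particular $\ell$ values checked are arbitrary):

```python
def legendre(l, x):
    """Legendre polynomial P_l(x) via the Bonnet recurrence."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2*n + 1)*x*p - n*p_prev)/(n + 1)
    return p

def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n even)."""
    h = (b - a)/n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2)*f(a + i*h)
    return s*h/3

# orthogonality on [-1, 1]: integral of P_l P_m is 2/(2l+1) delta_lm
overlap = simpson(lambda x: legendre(2, x)*legendre(3, x), -1, 1)
norm2 = simpson(lambda x: legendre(2, x)**2, -1, 1)
```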
+
+\subsection{Joint eigenstates for a spherically symmetric potential}
+Unfortunately, it is completely standard to use $m$ to denote the eigenvalue of $L_3$, as we did above. Hence, to avoid confusion, we write the mass as $\mu$ instead. We consider a particle of mass $\mu$ in a potential $V(r)$ with spherical symmetry. The Hamiltonian is
+\[
+ H = \frac{1}{2\mu}\hat{p}^2 + V = -\frac{\hbar^2}{2\mu} \nabla^2 + V(r).
+\]
+We have seen that we can write $H$ as
+\[
+ H = -\frac{\hbar^2}{2\mu}\frac{1}{r}\pd[2]{r} r + \frac{1}{2 \mu} \frac{1}{r^2} \mathbf{L}^2 + V(r).
+\]
+The first thing we want to check is that
+\[
+ [L_i, H] = [\mathbf{L}^2, H] = 0.
+\]
+This implies we can use the eigenvalues of $H$, $\mathbf{L}^2$ and $L_3$ to label our solutions to the equation.
+
+We check this using Cartesian coordinates. The kinetic term is
+\begin{align*}
+ [L_i, \hat{\mathbf{p}}^2]&= [L_i, \hat{p}_j \hat{p}_j]\\
+ &= [L_i, \hat{p}_j] \hat{p}_j + \hat{p}_j [L_i, \hat{p}_j]\\
+ &= i\hbar \varepsilon_{ijk} (\hat{p}_k\hat{p}_j + \hat{p}_j \hat{p}_k)\\
+ &= 0
+\end{align*}
+since we are contracting an antisymmetric tensor with a symmetric term. We can also compute the commutator with the potential term
+\begin{align*}
+ [L_i, V(r)] &= -i\hbar \varepsilon_{ijk}\left(x_j \frac{\partial V}{\partial x_k}\right)\\
+ &= -i\hbar \varepsilon_{ijk} x_j \frac{x_k}{r} V'(r)\\
+ &= 0,
+\end{align*}
+using the fact that
+\[
+ \frac{\partial r}{\partial x_i} = \frac{x_i}{r}.
+\]
+Now that $H$, $\mathbf{L}^2$ and $L_3$ are a commuting set of observables, we have the joint eigenstates
+\[
+ \psi(\mathbf{x}) = R(r) Y_{\ell m} (\theta, \varphi),
+\]
+and we have
+\begin{align*}
+ \mathbf{L}^2 Y_{\ell m} &= \hbar^2 \ell (\ell + 1) Y_{\ell m} & \ell &= 0, 1, 2, \cdots\\
+ L_3 Y_{\ell m} &= \hbar m Y_{\ell m} & m &= 0, \pm 1, \pm 2, \cdots, \pm \ell.
+\end{align*}
+The numbers $\ell$ and $m$ are usually known as the angular momentum quantum numbers. Note that $\ell = 0$ is the special case where we have a spherically symmetric solution.
+
+Finally, we solve the Schr\"odinger equation
+\[
+ H \psi = E \psi
+\]
+to obtain
+\[
+ -\frac{\hbar^2}{2 \mu} \frac{1}{r} \frac{\d^2}{\d r^2} (rR) + \frac{\hbar^2}{2 \mu r^2 } \ell(\ell + 1) R + VR = ER.
+\]
+This is now an ordinary differential equation in $R$. We can interpret the terms as the radial kinetic energy, angular kinetic energy, the potential energy and the energy eigenvalue respectively. Note that similar to what we did in classical dynamics, under a spherically symmetric potential, we can replace the angular part of the motion with an ``effective potential'' $\frac{\hbar^2}{2\mu r^2} \ell(\ell + 1)$.
+
+We often call $R(r)$ the \emph{radial part} of the wavefunction, defined on $r \geq 0$. Often, it is convenient to work with $\chi(r) = r R(r)$, which is sometimes called the \emph{radial wavefunction}. Multiplying the equation for $R$ by $r$, we obtain
+\[
+ -\frac{\hbar^2}{2 \mu} \chi'' + \frac{\hbar^2 \ell(\ell + 1)}{2 \mu r^2} \chi + V \chi = E \chi.
+\]
+This is known as the \emph{radial Schr\"odinger equation}.
+
+This has to obey some boundary conditions. Since we want $R$ to be finite as $r \to 0$, we must have $\chi = 0$ at $r = 0$. Moreover, the normalization condition is now
+\[
+ 1 = \int |\psi(\mathbf{x})|^2 \;\d^3 x = \int |R(r)|^2 r^2 \;\d r \int |Y_{\ell m}(\theta, \varphi)|^2 \sin \theta \;\d \theta \;\d \varphi.
+\]
+Hence, $\psi$ is normalizable if and only if
+\[
+ \int_0^\infty |R(r)|^2 r^2 \;\d r < \infty.
+\]
+Alternatively, this requires
+\[
+ \int_0^\infty |\chi(r)|^2 \;\d r < \infty.
+\]
+
+\begin{eg}[Three-dimensional well]
+ We now plot our potential as a function of $r$:
+ \begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw [->] (-0.5, 0) -- (4, 0) node [right] {$r$};
+ \draw [->] (0, -1.5) -- (0, 0.5) node [above] {$V$};
+ \draw [mred, semithick] (0, -1) -- (2, -1) -- (2, 0) -- (4, 0);
+ \node at (0, -1) [left] {$-U$};
+
+ \node [above] at (2, 0) {$a$};
+ \end{tikzpicture}
+ \end{center}
+ This is described by
+ \[
+ V(r) =
+ \begin{cases}
+ 0 & r \geq a\\
+ -U & r < a
+ \end{cases},
+ \]
+ where $U > 0$ is a constant.
+
+ We now look for bound state solutions to the Schr\"odinger equation with $-U < E < 0$, with total angular momentum quantum number $\ell$.
+
+ For $r < a$, our radial wavefunction $\chi$ obeys
+ \[
+ \chi'' - \frac{\ell(\ell + 1)}{r^2} \chi + k^2 \chi = 0,
+ \]
+ where $k$ is a new constant obeying
+ \[
+ U + E = \frac{\hbar^2 k^2}{2 \mu}.
+ \]
+ For $r \geq a$, we have
+ \[
+ \chi'' - \frac{\ell(\ell + 1)}{r^2} \chi - \kappa^2 \chi = 0,
+ \]
+ with $\kappa$ obeying
+ \[
+ E = -\frac{\hbar^2 \kappa^2}{2\mu}.
+ \]
+ We can solve in each region and match $\chi, \chi'$ at $r = a$, with boundary condition $\chi(0) = 0$. Note that given this boundary condition, solving this is equivalent to solving it for the whole $\R$ but requiring the solution to be odd.
+
+ Solving this for general $\ell$ is slightly complicated. So we shall look at some particular examples.
+
+ For $\ell = 0$, we have no angular term, and we have done this before. The general solution is
+ \[
+ \chi(r) =
+ \begin{cases}
+ A \sin k r & r < a\\
+ B e^{-\kappa r} & r > a
+ \end{cases}
+ \]
+ Matching $\chi$ and $\chi'$ at $r = a$ determines the values of $k, \kappa$ and hence $E$.
+
+ For $\ell = 1$, it turns out the solution is just
+ \[
+ \chi(r) =
+ \begin{cases}
+ A\left(\cos kr - \frac{1}{kr} \sin kr\right) & r < a\\
+ B\left(1 + \frac{1}{kr}\right) e^{-\kappa r} & r > a
+ \end{cases}.
+ \]
+ After matching, the solution is
+ \[
+ \psi(r) = R(r) Y_{1m}(\theta, \varphi) = \frac{\chi (r)}{r} Y_{1m}(\theta, \varphi),
+ \]
+ where $m$ can take values $m = 0, \pm 1$.
+
+ Solutions for general $\ell$ involve spherical Bessel functions, and are studied in more depth in the IID Applications of Quantum Mechanics course.
+\end{eg}
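For $\ell = 0$, the matching conditions reduce to the transcendental equation $k \cot(ka) = -\kappa$, together with the constraint $k^2 + \kappa^2 = 2\mu U/\hbar^2$. As a rough numerical illustration (a sketch only, in units where $\hbar^2/2\mu = 1$ and $a = 1$, with an assumed well depth $U = 10$), the ground state can be found by bisection:

```python
from math import cos, sin, sqrt, pi

# Units with hbar^2/(2 mu) = 1 and well radius a = 1, so k^2 + kappa^2 = U.
U = 10.0  # assumed well depth, chosen deep enough to hold a bound state

def f(k):
    # Residual of the l = 0 matching condition  k cot(ka) = -kappa.
    return k * cos(k) / sin(k) + sqrt(U - k * k)

# f > 0 just above pi/2 and f -> -infinity as k -> pi, so bisect there.
lo, hi = 1.6, 3.1
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
k = 0.5 * (lo + hi)
E = k * k - U  # ground state energy, satisfying -U < E < 0
print(k, E)    # k is about 2.32, E about -4.6
```

Making the well shallower eventually loses the bound state: unlike in one dimension, a three-dimensional well needs a minimum depth ($\sqrt{U} > \pi/2$ in these units) before any bound state exists.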
+
+\section{The hydrogen atom}
+Recall that at the beginning of the course, we said one of the main motivations for quantum mechanics was to study how atoms worked and where the spectrum of the hydrogen atom came from. This is just an application of what we have done for general spherically symmetric potentials, but this is so important that we give it a separate chapter.
+
+\subsection{Introduction}
+Consider an electron moving in a Coulomb potential
+\[
+ V(r) = -\frac{e^2}{4\pi \varepsilon_0} \frac{1}{r}.
+\]
+This potential is due to a proton stationary at $r = 0$. We follow results from the last section of the last chapter, and set the mass $\mu = m_e$, the electron mass. The joint energy eigenstates of $H, \mathbf{L}^2$ and $L_3$ are of the form
+\[
+ \phi(\mathbf{x}) = R(r) Y_{\ell m} (\theta, \varphi)
+\]
+for $\ell = 0, 1, \cdots$ and $m = 0, \pm 1, \cdots, \pm \ell$.
+
+The radial part of the Schr\"odinger equation can be written
+\[
+ R'' + \frac{2}{r} R' - \frac{\ell(\ell + 1)}{r^2}R + \frac{2 \lambda}{r} R = \kappa^2 R,\tag{$*$}
+\]
+with
+\[
+ \lambda = \frac{m_e e^2}{4\pi \varepsilon_0 \hbar^2},\quad E = \frac{-\hbar^2 \kappa^2}{2 m_e}.
+\]
+Note that here we work directly with $R$ instead of $\chi$, as this turns out to be easier later on.
+
+The goal of this chapter is to understand \emph{all} the (normalizable) solutions to this equation $(*)$.
+
+As in the case of the harmonic oscillator, the trick to solve this is to see what happens for large $r$, and ``guess'' a common factor of the solutions. In the case of the harmonic oscillator, we guessed the solution should have $e^{-y^2/2}$ as a factor. Here, for large $r$, we get
+\[
+ R'' \sim \kappa^2 R.
+\]
+This implies $R \sim e^{-\kappa r}$ for large $r$.
+
+For small $r$, we by assumption know that $R$ is finite, while $R'$ and $R''$ could potentially go crazy. So we multiply by $r^2$ and discard the $rR$ and $r^2R$ terms to get
+\[
+ r^2 R'' + 2r R' - \ell(\ell + 1) R \sim 0.
+\]
+This gives the solution $R \sim r^\ell$.
+
+We'll be bold and try a solution of the form
+\[
+ R(r) = C r^\ell e^{-\kappa r}.
+\]
+When we substitute this in, we will get three kinds of terms. The $r^\ell e^{-\kappa r}$ terms match, and so do the terms of the form $r^{\ell - 2} e^{-\kappa r}$. Finally, we see the $r^{\ell - 1}e^{-\kappa r}$ terms match if and only if
+\[
+ 2(\ell r^{\ell - 1})(- \kappa e^{- \kappa r}) + 2 (r^{\ell - 1})(- \kappa e^{- \kappa r}) + 2\lambda r^{\ell - 1} e^{-\kappa r} = 0.
+\]
+When we simplify this mess, we see this holds if and only if
+\[
+ (\ell + 1) \kappa = \lambda.
+\]
+Hence, for any integer $n = \ell + 1 = 1, 2, 3, \cdots$, there are bound states with energies
+\[
+ E_n = -\frac{\hbar^2}{2 m_e} \frac{\lambda^2}{n^2} = -\frac{1}{2}m_e \left(\frac{e^2}{4 \pi \varepsilon_0 \hbar}\right)^2 \frac{1}{n^2}.
+\]
+These are exactly the energy levels of the Bohr model, now derived within the framework of quantum mechanics. However, there is a slight difference. In our model, the total angular momentum eigenvalue is
+\[
+ \hbar^2 \ell(\ell + 1) = \hbar^2 n(n - 1),
+\]
+which is not what the Bohr model predicted.
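As a sanity check, we can put numbers into this formula; this is purely an illustration, with CODATA values typed in by hand. The ground state energy should come out to the familiar $-13.6\text{ eV}$:

```python
from math import pi

# CODATA 2018 values (SI units), typed in by hand
m_e  = 9.1093837015e-31   # electron mass / kg
e    = 1.602176634e-19    # elementary charge / C
eps0 = 8.8541878128e-12   # vacuum permittivity / F m^-1
hbar = 1.054571817e-34    # reduced Planck constant / J s

def E_n(n):
    """E_n = -(1/2) m_e (e^2 / (4 pi eps0 hbar))^2 / n^2, converted to eV."""
    return -0.5 * m_e * (e**2 / (4 * pi * eps0 * hbar))**2 / n**2 / e

print(E_n(1))  # approximately -13.6 eV, the hydrogen ground state
```

The $1/n^2$ scaling means, for example, that $E_1/E_4 = 16$, which is what produces the characteristic spacing of the hydrogen spectral series.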
+
+Nevertheless, this is not the full solution. For each energy level, this only gives one possible angular momentum, but we know that there can be many possible angular momenta for each energy level. So there is more work to be done.
+
+\subsection{General solution}
+We guessed our solution $r^\ell e^{-\kappa r}$ above by looking at the asymptotic behaviour at large and small $r$. We then managed to show that this is \emph{one} solution of the hydrogen atom. We are greedy, and we want \emph{all} solutions.
+
+Similar to what we did for the harmonic oscillator, we guess that our general solution is of the form
+\[
+ R(r) = e^{-\kappa r} f(r).
+\]
+Putting it in, we obtain
+\[
+ f'' + \frac{2}{r} f' - \frac{\ell(\ell + 1)}{r^2}f = 2\left(\kappa f' + (\kappa - \lambda)\frac{f}{r}\right).
+\]
+We immediately see an advantage of this substitution --- now each side of the equality is equidimensional, and equidimensionality makes our life much easier when seeking series solutions. This equation has a regular singular point at $r = 0$, and hence we guess a solution of the form
+\[
+ f(r) = \sum_{p = 0}^\infty a_p r^{p + \sigma},\quad a_0 \not= 0.
+\]
+Then substitution gives
+\[
+ \sum_{p \geq 0} ((p + \sigma)(p + \sigma + 1) - \ell(\ell + 1)) a_p r^{p + \sigma - 2} = \sum_{p \geq 0} 2(\kappa(p + \sigma + 1)- \lambda)a_p r^{p + \sigma - 1}.
+\]
+The lowest term gives us the indicial equation
+\[
+ \sigma(\sigma + 1) - \ell(\ell + 1) = (\sigma - \ell)(\sigma + \ell + 1) = 0.
+\]
+So either $\sigma = \ell$ or $\sigma = -(\ell + 1)$. We discard the $\sigma = -(\ell + 1)$ solution since this would make $f$ and hence $R$ singular at $r = 0$. So we have $\sigma = \ell$.
+
+Given this, the coefficients are then determined by
+\[
+ a_p = \frac{2(\kappa (p + \ell) - \lambda)}{p(p + 2\ell + 1)} a_{p - 1}, \quad p \geq 1.
+\]
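With $\sigma = \ell$ fixed, the recurrence is easy to iterate. A quick check (a sketch using exact rational arithmetic, in units where $\lambda = 1$, so that termination requires $\kappa = 1/n$) confirms that the series stops precisely at $p = n - \ell$:

```python
from fractions import Fraction

def radial_coeffs(n, ell, terms=10):
    """a_p for f(r) = sum_p a_p r^(p + ell), in units where lambda = 1, kappa = 1/n."""
    kappa, lam = Fraction(1, n), Fraction(1)
    a = [Fraction(1)]  # normalize a_0 = 1
    for p in range(1, terms):
        a.append(2 * (kappa * (p + ell) - lam) * a[-1] / (p * (p + 2 * ell + 1)))
    return a

# n = 2, ell = 0: the series stops after the r^1 term, giving f(r) = 1 - r/2,
# i.e. R_20 proportional to (1 - r/2) e^(-r/2) -- the hydrogen 2s state.
print(radial_coeffs(2, 0)[:4])  # coefficients 1, -1/2, 0, 0
```

Running the same function with $n = 3$, $\ell = 0$ reproduces the 3s polynomial $1 - \frac{2}{3}r + \frac{2}{27}r^2$, matching the associated Laguerre polynomials mentioned below.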
+Similar to the harmonic oscillator, we now observe that, unless the series terminates, we have
+\[
+ \frac{a_p}{a_{p - 1}} \sim \frac{2\kappa}{p}
+\]
+as $p \to \infty$, which matches the behaviour of $r^\alpha e^{2 \kappa r}$ (for some $\alpha$). So $R(r)$ is normalizable only if the series terminates. Hence the possible values of $\lambda$ are
+\[
+ \kappa n = \lambda
+\]
+for some $n \geq \ell + 1$. So the resulting energy levels are exactly those we found before:
+\[
+ E_n = -\frac{\hbar^2}{2 m_e} \kappa^2 = -\frac{\hbar^2}{2 m_e} \frac{\lambda^2}{n^2} = -\frac{1}{2}m_e\left(\frac{e^2}{4 \pi \varepsilon_0 \hbar}\right)^2 \frac{1}{n^2}.
+\]
+for $n = 1, 2, 3, \cdots$. This $n$ is called the \emph{principal quantum number}.
+
+For any given $n$, the possible angular momentum quantum numbers are
+\begin{align*}
+ \ell &= 0, 1, 2, 3, \cdots, n - 1\\
+ m &= 0, \pm 1, \pm 2, \cdots, \pm \ell.
+\end{align*}
+The simultaneous eigenstates are then
+\[
+ \psi_{n\ell m} (\mathbf{x}) = R_{n\ell}(r)Y_{\ell m}(\theta, \varphi),
+\]
+with
+\[
+ R_{n\ell}(r) = r^\ell g_{n\ell}(r)e^{-\lambda r/n},
+\]
+where $g_{n\ell}(r)$ are (proportional to) the \emph{associated Laguerre polynomials}.
+
+In general, the ``shape'' of the probability distribution of an electron state depends on $r$ through $R_{n\ell}$ and on $\theta, \varphi$ through $Y_{\ell m}$. For $\ell = 0$, we have spherically symmetric solutions
+\[
+ \psi_{n00}(\mathbf{x}) = g_{n0}(r) e^{-\lambda r/n}.
+\]
+This is very different from the Bohr model, which says the energy levels depend only on the angular momentum and nothing else. Here we can have many different angular momenta for each energy level, and can even have no angular momentum at all.
+
+The degeneracy of each energy level $E_n$ is
+\[
+ \sum_{\ell = 0}^{n - 1} \sum_{m = -\ell}^\ell 1 = \sum_{\ell = 0}^{n - 1} (2\ell + 1) = n^2.
+\]
+If you study quantum mechanics in more depth, you will find that the degeneracy of energy eigenstates reflects the symmetries in the Coulomb potential. Moreover, the fact that we have $n^2$ degenerate states implies that there is a hidden symmetry, in addition to the obvious $SO(3)$ rotational symmetry, since $SO(3)$ alone would give rise to far fewer degenerate states.
+
+So. We have solved the hydrogen atom.
+\subsection{Comments}
+Is this it? Not really. When we solved the hydrogen atom, we made a lot of simplifying assumptions. It is worth revisiting these assumptions and see if they are actually significant.
+
+\subsubsection*{Assumptions in the treatment of the hydrogen atom}
+One thing we assumed was that the proton is stationary at the origin and the electron moves around it. We also took the mass to be $\mu = m_e$. More accurately, we can consider the motion relative to the center of mass of the system, and we should take the mass as the reduced mass
+\[
+ \mu = \frac{m_e m_p}{m_e + m_p},
+\]
+just as in classical mechanics. However, the proton is much heavier than the electron, so the reduced mass is very close to the electron mass. Hence, what we've got is actually a good approximation. In principle, we can take this into account, and doing so changes the energy levels very slightly.
+
+What else? The entire treatment of quantum mechanics is non-relativistic. We can work a bit harder and solve the hydrogen atom relativistically, but the corrections are also small. These are rather moot problems. There are larger problems.
+
+\subsubsection*{Spin}
+We have always assumed that particles are structureless, namely that we can completely specify the properties of a particle by its position and momentum. However, it turns out electrons (and protons and neutrons) have an additional internal degree of freedom called \emph{spin}. This is a form of angular momentum, but with $\ell = \frac{1}{2}$ and $m = \pm \frac{1}{2}$. This cannot be due to orbital motion, since orbital motion has integer values of $\ell$ for well-behaved wavefunctions. However, we still call it angular momentum, since angular momentum is conserved only if we take these into account as well.
+
+The result is that for each set of quantum numbers $n, \ell, m$, there are \emph{two} possible spin states, and the total degeneracy of level $E_n$ is then $2n^2$. This agrees with what we know from chemistry.
+
+\subsubsection*{Many electron atoms}
+So far, we have been looking at a hydrogen atom, with just one proton and one electron. What if we had more electrons? Consider a nucleus at the origin with charge $+Ze$, where $Z$ is the atomic number. It has $Z$ electrons orbiting it, with positions $\mathbf{x}_a$ for $a = 1, \cdots, Z$.
+
+We can write down the Schr\"odinger equation for these particles, and it looks rather complicated, since electrons not only interact with the nucleus, but with other electrons as well.
+
+So, to begin, we first ignore electron-electron interactions. Then the solutions can be written down immediately:
+\[
+ \psi(\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_Z) = \psi_1(\mathbf{x}_1) \psi_2(\mathbf{x}_2) \cdots \psi_Z(\mathbf{x}_Z),
+\]
+where $\psi_i$ is any solution for the hydrogen atom, scaled appropriately by $e^2 \mapsto Z e^2$ to account for the larger charge of the nucleus. The energy is then
+\[
+ E = E_1 + E_2 + \cdots + E_Z.
+\]
+We can next add in the electron-electron interactions terms, and find a more accurate equation for $\psi$ using perturbation theory, which you will come across in IID Principles of Quantum Mechanics.
+
+However, there is an additional constraint on this. \emph{Fermi--Dirac statistics}, or the \emph{Pauli exclusion principle}, states that no two identical fermions (such as electrons) can occupy the same state.
+
+In other words, if we attempt to construct a multi-electron atom, we cannot put everything into the ground state. We are forced to put some electrons in higher energy states. This is the origin of chemical reactivity, which depends on the occupancy of the energy levels.
+\begin{itemize}
+ \item For $n = 1$, we have $2n^2 = 2$ electron states. This is full for $Z = 2$, and this is helium.
+ \item For $n = 2$, we have $2n^2 = 8$ electron states. Hence the first two energy levels are full for $Z = 10$, and this is neon.
+\end{itemize}
+These are rather stable elements, since to give them an additional electron, we must put it in a higher energy level, which costs a lot of energy.
+
+We also expect reactive atoms when the number of electrons is one more or less than the full energy levels. These include hydrogen ($Z = 1$), lithium ($Z = 3$), fluorine ($Z = 9$) and sodium ($Z = 11$).
+
+This is a recognizable sketch of the periodic table. However, for $n = 3$ and above, this model does not hold well. At these energy levels, electron-electron interactions become important, and the world is not so simple.
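This naive counting can be tabulated in a couple of lines (a sketch; the model ignores electron-electron interactions entirely). It predicts closed shells at $Z = 2, 10, 28$, whereas the actual third noble gas is argon with $Z = 18$, quantifying the breakdown just described:

```python
from itertools import accumulate

# Each level n holds 2 n^2 electrons in this naive, non-interacting model.
shell_sizes = [2 * n * n for n in range(1, 4)]   # 2, 8, 18
closed_Z = list(accumulate(shell_sizes))         # cumulative totals

print(closed_Z)  # [2, 10, 28]; the real third noble gas is argon, Z = 18
```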
+\end{document}
diff --git a/books/cam/III_E/classical_and_quantum_solitons.tex b/books/cam/III_E/classical_and_quantum_solitons.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bca06e964eca1d06026e6aac9a14a73663a99809
--- /dev/null
+++ b/books/cam/III_E/classical_and_quantum_solitons.tex
@@ -0,0 +1,2810 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Easter}
+\def\nyear {2017}
+\def\nlecturer {N.\ S.\ Manton and D.\ Stuart}
+\def\ncourse {Classical and Quantum Solitons}
+\def\nisofficial {}
+
+\input{header}
+
+\usepackage[compat=1.1.0]{tikz-feynman}
+\tikzfeynmanset{/tikzfeynman/momentum/arrow shorten = 0.3}
+\tikzfeynmanset{/tikzfeynman/warn luatex = false}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+Solitons are solutions of classical field equations with particle-like properties. They are localised in space, have finite energy and are stable against decay into radiation. The stability usually has a topological explanation. After quantisation, they give rise to new particle states in the underlying quantum field theory that are not seen in perturbation theory. We will focus mainly on kink solitons in one space dimension, on gauge theory vortices in two dimensions, and on Skyrmions in three dimensions.
+
+\subsubsection*{Pre-requisites}
+This course assumes you have taken Quantum Field Theory and Symmetries, Fields and Particles. The small amount of topology that is needed will be developed during the course.
+\subsubsection*{Reference}
+N.\ Manton and P.\ Sutcliffe, \emph{Topological Solitons}, CUP, 2004
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Given a classical field theory, if we want to ``quantize'' it, we find the vacuum of the theory, and then do perturbation theory around this vacuum. If there are multiple vacua, we arbitrarily pick one vacuum, and then expand around it.
+
+However, these field theories with multiple vacua often contain \emph{soliton} solutions. These are localized, smooth solutions of the classical field equations, and they ``connect multiple vacua''. To quantize these soliton solutions, we fix such a soliton, and use it as the ``background''. We then do perturbation theory around these solutions, but this is rather tricky to do. Thus, in a lot of the course, we will just look at the classical part of the theory.
+
+Recall that when quantizing our field theories in perturbation theory, we obtain particles in the quantum theory, despite the classical theory being completely about fields. It turns out solitons also behave like particles, and they are a \emph{new} type of particle. These are non-perturbative phenomena. If we want to do the quantum field theory properly, we have to include these solitons in the quantum field theory. In general this is hard, and so we are not going to develop this a lot.
+
+What does it mean to say that solitons are like particles? In relativistic field theories, we find these solitons have a classical energy. We define the ``mass'' $M$ of the soliton to be the energy in the ``rest frame''. Since this is relativistic, we can do a Lorentz boost, and we obtain a moving soliton. Then we obtain an energy-momentum relation of the form
+\[
+ E^2 - \mathbf{P} \cdot \mathbf{P} = M^2.
+\]
+This is a Lorentz-invariant property of the soliton. Together with the fact that the soliton is localized, this is a justification for thinking of them as particles.
+
+These particles differ from the particles of perturbative quantum fields, as they have rather different properties. Interesting solitons have a \emph{topological} character different from the classical vacuum. Thus, at least naively, they cannot be understood perturbatively.
+
+There are also non-relativistic solitons, but they usually don't have interpretations as particles. These appear, for example, as defects in solids. We will not be interested in these much.
+
+What kinds of theories have solitons? To obtain solitons, we definitely need a non-linear field structure and/or non-linear equations. Thus, free field theories with quadratic Lagrangians such as Maxwell theory do not have solitons. We need interaction terms.
+
+Note that in QFT, we dealt with interactions using the interaction picture. We split the Hamiltonian into a ``free field'' part, which we solve exactly, and the ``interaction'' part. However, to quantize solitons, we need to solve the full interacting field equations \emph{exactly}.
+
+Having interactions is not enough for solitons to appear. To obtain solitons, we also need some non-trivial vacuum topology. In other words, we need more than one vacuum. This usually comes from symmetry breaking, and often gauge symmetries are involved.
+
+In this course, we will focus on three types of solitons.
+\begin{itemize}
+ \item In one (space) dimension, we have kinks. We will spend $4$ lectures on this.
+
+ \item In two dimensions, we have vortices. We will spend $6$ lectures on this.
+
+ \item In three dimensions, there are monopoles and Skyrmions. We will only study Skyrmions, and will spend $6$ lectures on these.
+\end{itemize}
+These examples are all relativistic. Non-relativistic solitons include \emph{domain walls}, which occur in ferromagnets, and two-dimensional ``baby'' Skyrmions, which are seen in exotic magnets, but we will not study these.
+
+In general, solitons appear in all sorts of different actual, physical scenarios such as in condensed matter physics, optical fibers, superconductors and exotic magnets. ``Cosmic strings'' have also been studied. Since we are mathematicians, we probably will not put much focus on these actual applications. However, we can talk a bit more about Skyrmions.
+
+Skyrmions are solitons in an \emph{effective field theory} of interacting pions, which are thought to be the most important hadrons because they are the lightest. This happens in spite of the lack of a gauge symmetry. While pions have no baryon number, the associated solitons have a topological charge identified with baryon number. This baryon number is conserved for topological reasons.
+
+Note that in QCD, baryon number is conserved because quark number is conserved. Experiments have tried extremely hard to find proton decay, a process that would change baryon number, but no such decay has been observed. We have very high experimental certainty that baryon number is conserved. And if baryon number is topological, then this is a very good reason for the conservation of baryon number.
+
+Skyrmions give a model of low-energy interactions of baryons. This leads to an (approximate) theory of nucleons (proton and neutron) and larger nuclei, which are bound states of any number of protons and neutrons.
+
+For these ideas to work out well, we need to eventually do quantization. For example, Skyrmions by themselves do not have spin. We need to quantize the theory before spins come out. Also, Skyrmions cannot distinguish between protons and neutrons. These differences only come up after we quantize.
+
+\section{\tph{$\phi^4$}{phi4}{ϕ4} kinks}
+\subsection{Kink solutions}
+In this section, we are going to study \term{$\phi^4$ kinks}\index{kink}\index{kink!$\phi^4$}. These occur in $1 + 1$ dimensions, and involve a single scalar field $\phi(x, t)$. In higher dimensions, we often need many fields to obtain solitons, but in the case of 1 dimension, we can get away with a single field.
+
+In general, the \term{Lagrangian density} of such a scalar field theory is of the form
+\[
+ \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - U(\phi)
+\]
+for some potential $U(\phi)$ polynomial in $\phi$. Note that in $1 + 1$ dimensions, any such theory is renormalizable. Here we will choose the Minkowski metric to be
+\[
+ \eta^{\mu\nu} =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix},
+\]
+with $\mu, \nu = 0, 1$. Then the \term{Lagrangian} is given by
+\[
+ L = \int_{-\infty}^\infty \mathcal{L}\;\d x = \int_{-\infty}^\infty \left(\frac{1}{2} \partial_\mu \phi \partial^\mu \phi - U(\phi)\right)\;\d x,
+\]
+and the \term{action} is
+\[
+ S[\phi] = \int L\;\d t = \int \mathcal{L} \;\d x\;\d t.
+\]
+
+There is a non-linearity in the field equations due to a potential $U(\phi)$ with \emph{multiple vacua}. We need multiple vacua to obtain a soliton. The kink stability comes from the \emph{topology}. It is very simple here, and just comes from counting the discrete, distinct vacua.
+
+As usual, we will write
+\[
+ \dot{\phi} = \frac{\partial \phi}{\partial t},\quad \phi' = \frac{\partial \phi}{\partial x}.
+\]
+Often it is convenient to (non-relativistically) split the Lagrangian as
+\[
+ L = T - V,
+\]
+where
+\[
+ T = \int \frac{1}{2} \dot{\phi}^2\;\d x,\quad V = \int \left(\frac{1}{2} \phi'^2 + U(\phi)\right)\;\d x.
+\]
+In higher dimensions, we separate out $\partial_\mu \phi$ into $\dot{\phi}$ and $\nabla \phi$.
+
+The classical field equation comes from the condition that $S[\phi]$ is stationary under variations of $\phi$. By a standard manipulation, the field equation turns out to be
+\[
+ \partial_\mu \partial^\mu \phi + \frac{\d U}{\d \phi} = 0.
+\]
+This is an example of a Klein--Gordon type of field equation, but is non-linear if $U$ is not quadratic. It is known as the \emph{non-linear Klein--Gordon equation}\index{Klein--Gordon equation!non-linear}.
+
+We are interested in a soliton that is a static solution. For a static field, the time derivatives can be dropped, and this equation becomes
+\[
+ \frac{\d^2 \phi}{\d x^2} = \frac{\d U}{\d \phi}.
+\]
+Of course, the important part is the choice of $U$! In $\phi^4$ theory, we choose
+\[
+ U(\phi) = \frac{1}{2} (1 - \phi^2)^2.
+\]
+This is mathematically the simplest version, because we set all coupling constants to $1$.
+
+The importance of this $U$ is that it has two minima:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, -1) -- (3, -1) node [right] {$\phi$};
+ \draw [->] (0, -1.5) -- (0, 3) node [above] {$U(\phi)$};
+
+ \draw [mblue, thick, domain=-2.4:2.4, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 - (\x/2)^2)});
+
+ \node at (-1.414, -1) [below] {$-1$};
+ \node at (1.414, -1) [below] {$1$};
+ \end{tikzpicture}
+\end{center}
+The two classical vacua are
+\[
+ \phi(x) \equiv 1,\quad \phi(x) \equiv -1.
+\]
+This is, of course, not the only possible choice. We can, for example, include some parameters and set
+\[
+ U(\phi) = \lambda(m^2 - \phi^2)^2.
+\]
+If we are more adventurous, we can talk about a $\phi^6$ theory with
+\[
+ U(\phi) = \lambda \phi^2 (m^2 - \phi^2)^2.
+\]
+In this case, we have $3$ minima, instead of $2$. Even braver people can choose
+\[
+ U(\phi) = 1 - \cos \phi.
+\]
+This has \emph{infinitely} many minima. The field equation involves a $\sin \phi$ term, and hence this theory is called the \term{sine-Gordon theory} (a pun on the name Klein--Gordon, of course).
+
+The sine-Gordon theory is a special case. While it seems like the most complicated potential so far, it is actually \emph{integrable}\index{integrability}. This implies we can find explicit exact solutions involving multiple, interacting solitons in a rather easy way. However, integrable systems are the topic of another course, namely IID Integrable Systems.
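We can at least verify one of these exact solutions numerically. The standard static sine-Gordon kink (quoted here without derivation) is $\phi(x) = 4 \tan^{-1} e^x$, which interpolates between the vacua $0$ and $2\pi$ and satisfies the first-order equation $\phi' = 2 \sin(\phi/2)$, whose square gives the static field equation:

```python
from math import atan, exp, sin, cosh, pi

# The standard static sine-Gordon kink (quoted, not derived here):
# phi(x) = 4 arctan(e^x), interpolating between the vacua 0 and 2 pi.
def phi(x):
    return 4 * atan(exp(x))

def phi_prime(x):
    return 2 / cosh(x)  # derivative of 4 arctan(e^x) is 2 sech x

# Check the first-order equation phi' = 2 sin(phi/2) on a grid of points.
for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert abs(phi_prime(x) - 2 * sin(phi(x) / 2)) < 1e-12
print("kink satisfies the first-order equation")
```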
+
+For now, we will focus on our simplistic $\phi^4$ theory. As mentioned, there are two vacuum field configurations, both of zero energy. We will in general use the term ``\term{field configuration}'' to refer to fields at a given time that are not necessarily solutions to the classical field equation, but in this case, the vacua are indeed solutions.
+
+If we wanted to quantize this $\phi^4$ theory, then we have to pick one of the vacua and do perturbation theory around it. This is known as \term{spontaneous symmetry breaking}. Of course, by symmetry, we obtain the same quantum theory regardless of which vacuum we expand around.
+
+However, as we mentioned, when we want to study solitons, we have to involve \emph{both} vacua. We want to consider solutions that ``connect'' these two vacua. In other words, we are looking for solutions that look like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\phi$};
+
+ \draw [mblue, thick, domain=-3:3] plot [smooth] (\x, {tanh(1.5*(\x + 1))});
+
+ \draw [dashed] (-3, 1) -- (3, 1);
+ \draw [dashed] (-3, -1) -- (3, -1);
+
+ \node [circ] at (-1, 0) {};
+ \node [anchor = north west] at (-1, 0) {$a$};
+ \end{tikzpicture}
+\end{center}
+This is known as a \term{kink solution}.
+
+To actually find such a solution, we need the full field equation, given by
+\[
+ \frac{\d^2 \phi}{\d x^2} = -2 (1 - \phi^2) \phi.
+\]
+Instead of solving this directly, we will find the kink solutions by considering the energy, since this method generalizes better.
+
+We will work with a general potential $U$ with minimum value $0$. From Noether's theorem, we obtain a conserved energy
+\[
+ E = \int \left(\frac{1}{2} \dot{\phi}^2 + \frac{1}{2} \phi'^2 + U(\phi)\right)\;\d x.
+\]
+For a static field, we drop the $\dot{\phi}^2$ term. Then this is just the $V$ appearing in the Lagrangian. By definition, the field equation tells us the field is a stationary point of this energy. To find the kink solution, we will in fact find a \emph{minimum} of the energy.
+
+Of course, the global minimum is attained when we have a vacuum field, in which case $E = 0$. However, this is the global minimum only if we don't impose any boundary conditions. In our case, the kinks satisfy the boundary conditions ``$\phi(\infty) = 1$'', ``$\phi(-\infty) = -1$'' (interpreted in terms of limits, of course). The kinks will minimize energy subject to these boundary conditions.
+
+These boundary conditions are important, because they are ``topological''. Eventually, we will want to understand the dynamics of solitons, so we will want to consider fields that evolve with time. From physical considerations, for any fixed $t$, the field $\phi(x, t)$ must satisfy $\phi(x, t) \to \text{vacuum}$ as $x \to \pm \infty$, or else the field will have infinite energy. However, the set of vacua of our potential $U$ is discrete. Thus, if $\phi$ is to evolve continuously with time, the boundary conditions must not evolve with time! At least, this is what we expect classically. Who knows what weird tunnelling can happen in quantum field theory.
+
+So from now on, we fix some boundary conditions $\phi(\infty)$ and $\phi(-\infty)$, and focus on fields that satisfy these boundary conditions. The trick is to write the potential in the form
+\[
+ U (\phi) = \frac{1}{2} \left(\frac{\d W (\phi)}{\d \phi}\right)^2.
+\]
+If $U$ is non-negative, then we can always find $W$ in principle --- we take the square root and then integrate it. However, in practice, this is useful only if we can find a simple form for $W$. Let's assume we've done that. Then we can write
+\begin{align*}
+ E &= \frac{1}{2} \int \left(\phi'^2 + \left(\frac{\d W}{\d \phi}\right)^2 \right)\;\d x\\
+ &= \frac{1}{2} \int \left(\phi' \mp \frac{\d W}{\d \phi}\right)^2\;\d x \pm \int \frac{\d W}{\d \phi} \frac{\d \phi}{\d x} \;\d x\\
+ &= \frac{1}{2} \int \left(\phi' \mp \frac{\d W}{\d \phi}\right)^2\;\d x \pm \int \d W\\
+ &= \frac{1}{2} \int \left(\phi' \mp \frac{\d W}{\d \phi}\right)^2\;\d x \pm (W(\phi(\infty)) - W(\phi(-\infty))).
+\end{align*}
+The second term depends purely on the boundary conditions, which we have fixed. Thus, we can minimize energy if we can make the first term vanish! Note that when completing the square, the choice of the signs is arbitrary. However, if we want to set the first term to be $0$, the second term had better be non-negative, since the energy itself is non-negative! Hence, we will pick the sign such that the second term is $\geq 0$, and then the energy is minimized when
+\[
+ \phi' = \pm \frac{\d W}{\d \phi}.
+\]
+In this case, we have
+\[
+ E = \pm (W(\phi(\infty)) - W(\phi(-\infty))).
+\]
+These are known as the \term{Bogomolny equation} and the \term{Bogomolny energy bound}. Note that if we picked the other sign, then we cannot solve the differential equation $\phi' = \pm \frac{\d W}{\d \phi}$, because we know the energy must be non-negative.
+
+For the $\phi^4$ kink, we have
+\[
+ \frac{\d W}{\d \phi} = 1 - \phi^2.
+\]
+So we pick
+\[
+ W = \phi - \frac{1}{3} \phi^3.
+\]
+So when $\phi = \pm 1$, we have $W = \pm \frac{2}{3}$. We need to choose the $+$ sign, and then we know the energy (and hence mass) of the kink is
+\[
+ E \equiv M = \frac{4}{3}.
+\]
+We now solve for $\phi$. The equation we have is
+\[
+ \phi' = 1 - \phi^2.
+\]
+Rearranging gives
+\[
+ \frac{1}{1 - \phi^2} \d \phi = \d x,
+\]
+which we can integrate to give
+\[
+ \phi (x) = \tanh (x - a).
+\]
+This $a$ is an arbitrary constant of integration, labelling the intersection of the graph of $\phi$ with the $x$-axis. We think of this as the ``\emph{location}'' of the kink.
+
+Note that there is not a unique solution, which is not unexpected by translation invariance. Instead, the solutions are labelled by a parameter $a$. This is known as a \term{modulus} of the solution. In general, there can be multiple moduli, and the space of all possible values of the moduli of static solitons is known as the \term{moduli space}. In the case of a kink, the moduli space is just $\R$.
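The mass $M = \frac{4}{3}$ can also be checked directly. As a quick sketch (plain trapezoidal rule, nothing beyond the standard library), integrating the static energy density of $\phi(x) = \tanh x$ recovers the Bogomolny value:

```python
from math import tanh, cosh

# Static energy density of the phi^4 kink phi(x) = tanh(x):
# (1/2) phi'^2 + (1/2)(1 - phi^2)^2, with phi' = sech^2 x.
def density(x):
    p, dp = tanh(x), 1 / cosh(x) ** 2
    return 0.5 * dp * dp + 0.5 * (1 - p * p) ** 2

# Trapezoidal rule on [-10, 10]; the density decays like e^(-4|x|),
# so the truncated tails are negligible.
N, L = 20000, 10.0
h = 2 * L / N
E = h * (0.5 * (density(-L) + density(L))
         + sum(density(-L + i * h) for i in range(1, N)))
print(E)  # close to 4/3, the kink mass M
```

On the Bogomolny solution the two halves of the density are equal pointwise ($\frac{1}{2}\phi'^2 = U$), so the energy is concentrated in a region of width of order $1$ around $x = a$, consistent with the particle interpretation.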
+
+Is this solution stable? We obtained this kink solution by minimizing the energy within this topological class of solutions (i.e.\ among all solutions with the prescribed boundary conditions). Since a field cannot change the boundary conditions during evolution, it follows that the kink must be stable.
+
+Are there other soliton solutions to the field equations? The solutions are determined by the boundary conditions. Thus, we can classify all soliton solutions by counting all possible combinations of the boundary conditions. We have, of course, two vacuum solutions $\phi \equiv 1$ and $\phi \equiv -1$. There is also an \term{anti-kink}\index{kink!anti-} solution obtained by inverting the kink:
+\[
+ \phi(x) = - \tanh (x - b).
+\]
+This also has energy $\frac{4}{3}$.
+
+\subsection{Dynamic kink}
+We now want to look at kinks that move. Given what we have done so far, this is trivial. Our theory is Lorentz invariant, so we simply apply a Lorentz boost. Then we obtain a field
+\[
+ \phi(x, t) = \tanh \gamma (x - vt),
+\]
+where, as usual
+\[
+ \gamma = (1 - v^2)^{-1/2}.
+\]
+But this isn't all. Notice that for small $v$, we can approximate the solution simply by
+\[
+ \phi(x, t) = \tanh (x - vt).
+\]
+This looks like a kink solution with a modulus that varies with time slowly. This is known as the \term{adiabatic} point of view.
+
+More generally, let's consider a ``moving kink'' field
+\[
+ \phi(x, t) = \tanh (x - a(t))
+\]
+for some function $a(t)$. In general, this is not a solution to the field equation, but if $\dot{a}$ is small, then it is ``approximately a solution''.
+
+We can now explicitly compute that
+\[
+ \dot{\phi} = - \frac{\d a}{\d t} \phi'.
+\]
+Let's consider fields of this type, and look at the Lagrangian of the field theory. The kinetic term is given by
+\[
+ T = \int \frac{1}{2} \dot{\phi}^2\;\d x = \frac{1}{2} \left(\frac{\d a}{\d t}\right)^2 \int \phi'^2 \;\d x = \frac{1}{2} M \left(\frac{\d a}{\d t}\right)^2.
+\]
+To derive this result, we had to perform the integral $\int \phi'^2 \;\d x$, and if we do that horrible integral, we will find a value that happens to be equal to $M = \frac{4}{3}$. Of course, this is not a coincidence. Alternatively, we can appeal to Lorentz invariance to see that the result of the integration is manifestly $M$.
+
+The remaining part of the Lagrangian is less interesting. Since it does not involve taking time derivatives, the time variation of $a$ is not seen by it, and we simply have a constant
+\[
+ V = \frac{4}{3}.
+\]
+Then the original field Lagrangian becomes a particle Lagrangian
+\[
+ L = \frac{1}{2}M \dot{a}^2 - \frac{4}{3}.
+\]
+
+Note that when we first formulated the field theory, the action principle required us to find a field that extremizes the action \emph{among all fields}. However, what we are doing now is to restrict to the set of kink solutions only, and then when we solve the variational problem arising from this Lagrangian, we are extremizing the action among fields of the form $\tanh (x - a(t))$. We can think of this as motion in a ``valley'' in the field configuration space. In general, these solutions will not also extremize the action among all fields. However, as we said, it will do so ``approximately'' if $\dot{a}$ is small.
+
+We can obtain an effective equation of motion
+\[
+ M \ddot{a} = 0,
+\]
+which is an equation of motion for the variable $a(t)$ \emph{in the moduli space}.
+
+Of course, the solution is just given by
+\[
+ a(t) = vt + \mathrm{const},
+\]
+where $v$ is an arbitrary constant, which we interpret as the velocity. In this formulation, we do not have any restrictions on $v$, because we took the ``non-relativistic approximation''. This approximation breaks down when $v$ is large.
+
+There is a geometric interpretation to this. We can view the equation of motion $M\ddot{a} = 0$ as the \emph{geodesic equation} in the moduli space $\R$, and we can think of the coefficient $M$ as specifying a Riemannian metric on the moduli space. In this case, the metric is (a scalar multiple of) the usual Euclidean metric $(\d a)^2$.
+
+This seems like a complicated way of describing such a simple system, but this picture generalizes to higher-dimensional systems and allows us to analyze multi-soliton dynamics, in particular, the dynamics of vortices and monopoles.
+
+
+We might ask ourselves if there are multi-kinks in our theory. There aren't in the $\phi^4$ theory, because we saw that the solutions are classified by the boundary conditions, and we have already enumerated all the possible boundary conditions. In more complicated theories like sine-Gordon theory, multiple kinks are possible.
+
+However, while we cannot have two kinks in $\phi^4$ theory, we can have a kink followed by an anti-kink, or more of these pairs. This actually lies in the ``vacuum sector'' of the theory, but it still looks like it's made up of kinks and anti-kinks, and it is interesting to study these.
+
+\subsection{Soliton interactions}
+We now want to study interactions between kinks and anti-kinks, and see how they cause each other to move. So far, we were able to label the position of the particle by its ``center'' $a$, and thus we can sensibly talk about how this center moves. However, this center is well-defined only in the very special case of a pure kink or anti-kink, where we can use symmetry to identify the center. If there is some perturbation, or if we have a kink and an anti-kink, it is less clear what should be considered the center.
+
+Fortunately, we can still talk about the momentum of the field, even if we don't have a well-defined center. Indeed, since our theory has translation invariance, Noether's theorem gives us a conserved charge which is interpreted as the momentum.
+
+Recall that for a single scalar field in $1 + 1$ dimensions, the Lagrangian density can be written in the form
+\[
+ \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - U(\phi).
+\]
+Applying Noether's theorem to the translation symmetry, we obtain the \term{energy-momentum tensor}
+\[
+ T^\mu_\nu = \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} \partial_\nu \phi - \delta^\mu_\nu \mathcal{L} = \partial^\mu \phi \partial_\nu \phi - \delta^\mu_\nu \mathcal{L}.
+\]
+Fixing a time and integrating over all space, we obtain the conserved energy and conserved momentum. These are
+\begin{align*}
+ E &= \int_{-\infty}^\infty T^0\!_0 \;\d x = \int_{-\infty}^\infty \left(\frac{1}{2}\dot{\phi}^2 + \frac{1}{2} \phi'^2 + U(\phi)\right)\;\d x,\\
+ P &= - \int_{-\infty}^\infty T^0\!_1 \;\d x = - \int_{-\infty}^\infty \dot{\phi} \phi' \;\d x.
+\end{align*}
+We now focus on our moving kink in the adiabatic approximation of the $\phi^4$ theory. Then the field is given by
+\[
+ \phi = \tanh (x - a(t)).
+\]
+Doing another horrible integral, we find that the momentum is just
+\[
+ P = M \dot{a}.
+\]
+This is just as we would expect for a particle with mass $M$!
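+This can be checked without doing the horrible integral analytically: since $\dot{\phi} = -\dot{a} \phi'$, we have $P = \dot{a} \int \phi'^2 \;\d x = M \dot{a}$, and the integral can be evaluated numerically. A minimal sketch using NumPy (the grid and the value of $\dot{a}$ are our own choices):

```python
import numpy as np

# Adiabatic moving kink phi = tanh(x - a(t)), with phi-dot = -a-dot * phi'.
adot = 0.1                             # slow modulus velocity
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
dphi_dx = 1 / np.cosh(x)**2            # phi'
dphi_dt = -adot * dphi_dx              # phi-dot

# P = -int phi-dot phi' dx should equal M a-dot with M = 4/3.
P = -np.sum(dphi_dt * dphi_dx) * dx
```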
+
+Now suppose what we have is instead a kink-antikink configuration
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\phi$};
+
+ \draw [mblue, thick] plot [smooth] coordinates {(-3.0,-0.99933) (-2.9,-0.99851) (-2.8,-0.99668) (-2.7,-0.99263) (-2.6,-0.98367) (-2.5,-0.96403) (-2.4,-0.92167) (-2.3,-0.83365) (-2.2,-0.66404) (-2.1,-0.37995) (-2.0,-0.00000) (-1.9,0.37995) (-1.8,0.66404) (-1.7,0.83365) (-1.6,0.92167) (-1.5,0.96403) (-1.4,0.98367) (-1.3,0.99263) (-1.2,0.99668) (-1.1,0.99851) (-1.0,0.99933) (-0.9,0.99970) (-0.8,0.99986) (-0.7,0.99994) (-0.6,0.99997) (-0.5,0.99999) (-0.4,0.99999) (-0.3,1.00000) (-0.2,1.00000) (-0.1,1.00000) (0.0,1.00000) (0.1,1.00000) (0.2,1.00000) (0.3,1.00000) (0.4,0.99999) (0.5,0.99999) (0.6,0.99997) (0.7,0.99994) (0.8,0.99986) (0.9,0.99970) (1.0,0.99933) (1.1,0.99851) (1.2,0.99668) (1.3,0.99263) (1.4,0.98367) (1.5,0.96403) (1.6,0.92167) (1.7,0.83365) (1.8,0.66404) (1.9,0.37995) (2.0,-0.00000) (2.1,-0.37995) (2.2,-0.66404) (2.3,-0.83365) (2.4,-0.92167) (2.5,-0.96403) (2.6,-0.98367) (2.7,-0.99263) (2.8,-0.99668) (2.9,-0.99851) (3.0,-0.99933)};
+ % map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (tanh (4 * (x + 2)) - tanh (4 * (x - 2)) - 1) "")) [-3,-2.9..3]
+
+ \draw [dashed] (-3, 1) -- (3, 1);
+ \draw [dashed] (-3, -1) -- (3, -1);
+
+ \node [circ] at (-2, 0) {};
+ \node [anchor = north west] at (-2, 0) {$-a$};
+
+ \node [circ] at (2, 0) {};
+ \node [anchor = north east] at (2, 0) {$a$};
+ \end{tikzpicture}
+\end{center}
+
+Here we have to make the crucial assumption that our kinks are well-separated. Matters get a lot worse when they get close to each other, and it is difficult to learn anything about them analytically. However, by making appropriate approximations, we can understand well-separated kink-antikink configurations.
+
+When the kink and anti-kink are far apart, we first pick a point $b$ lying in-between the kink and the anti-kink:
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\phi$};
+
+ \draw [mblue, thick] plot [smooth] coordinates {(-3.0,-0.99933) (-2.9,-0.99851) (-2.8,-0.99668) (-2.7,-0.99263) (-2.6,-0.98367) (-2.5,-0.96403) (-2.4,-0.92167) (-2.3,-0.83365) (-2.2,-0.66404) (-2.1,-0.37995) (-2.0,-0.00000) (-1.9,0.37995) (-1.8,0.66404) (-1.7,0.83365) (-1.6,0.92167) (-1.5,0.96403) (-1.4,0.98367) (-1.3,0.99263) (-1.2,0.99668) (-1.1,0.99851) (-1.0,0.99933) (-0.9,0.99970) (-0.8,0.99986) (-0.7,0.99994) (-0.6,0.99997) (-0.5,0.99999) (-0.4,0.99999) (-0.3,1.00000) (-0.2,1.00000) (-0.1,1.00000) (0.0,1.00000) (0.1,1.00000) (0.2,1.00000) (0.3,1.00000) (0.4,0.99999) (0.5,0.99999) (0.6,0.99997) (0.7,0.99994) (0.8,0.99986) (0.9,0.99970) (1.0,0.99933) (1.1,0.99851) (1.2,0.99668) (1.3,0.99263) (1.4,0.98367) (1.5,0.96403) (1.6,0.92167) (1.7,0.83365) (1.8,0.66404) (1.9,0.37995) (2.0,-0.00000) (2.1,-0.37995) (2.2,-0.66404) (2.3,-0.83365) (2.4,-0.92167) (2.5,-0.96403) (2.6,-0.98367) (2.7,-0.99263) (2.8,-0.99668) (2.9,-0.99851) (3.0,-0.99933)};
+
+ \draw [dashed] (-3, 1) -- (3, 1);
+ \draw [dashed] (-3, -1) -- (3, -1);
+
+ \node [circ] at (-2, 0) {};
+ \node [anchor = north west] at (-2, 0) {$-a$};
+
+ \node [circ] at (2, 0) {};
+ \node [anchor = north east] at (2, 0) {$a$};
+
+ \draw [dashed] (-0.3, -2) -- (-0.3, 2);
+ \node [circ] at (-0.3, 0) {};
+ \node [anchor = north east] at (-0.3, 0) {$b$};
+ \end{tikzpicture}
+\end{center}
+
+The choice of $b$ is arbitrary, but we should choose it so that it is far away from both kinks. We will later see that, at least to first order, the result of our computations does not depend on which $b$ we choose. We will declare that the part to the left of $b$ belongs to the kink, and the part to the right of $b$ belongs to the anti-kink. Then by integrating the energy-momentum tensor in these two regions, we can obtain the momentum of the kink and the anti-kink separately.
+
+We will focus on the kink only. Its momentum is given by\index{kink!forces}
+\[
+ P = - \int_{-\infty}^b T^0\!_1 \;\d x = - \int_{-\infty}^b \dot{\phi} \phi' \;\d x.
+\]
+Since $T^\mu_\nu$ is conserved, we know $\partial_\mu T^\mu\!_\nu = 0$. So we find
+\[
+ \frac{\partial}{\partial t} T^0\!_1 + \frac{\partial}{\partial x}T^1\!_1 = 0.
+\]
+By Newton's second law, the force $F$ on the kink is given by the rate of change of the momentum:
+\begin{align*}
+ F &= \frac{\d}{\d t} P \\
+ &= -\int_{-\infty}^b \frac{\partial}{\partial t} T^0\!_1\;\d x \\
+ &= \int_{-\infty}^b \frac{\partial}{\partial x} T^1\!_1\;\d x\\
+ &= \left.T^1\!_1\right|_b\\
+ &= \left(-\frac{1}{2} \dot{\phi}^2 - \frac{1}{2} \phi'^2 + U\right)_b.
+\end{align*}
+Note that there is no contribution at the $-\infty$ end because it is vacuum and $T^1\!_1$ vanishes.
+
+But we want to actually work out what this is. To do so, we need to be more precise about what our initial configuration is. In this theory, we can obtain it just by adding a kink to an anti-kink. The obvious guess is that it should be
+\[
+ \phi(x) \overset{?}{=} \tanh(x + a) - \tanh(x - a),
+\]
+but this has the wrong boundary condition. It vanishes on both the left and the right. So we actually want to subtract $1$, and obtain
+\[
+ \phi(x) = \tanh(x + a) - \tanh(x - a) - 1 \equiv \phi_1 + \phi_2 - 1.
+\]
+Note that since our equation of motion is not linear, this is in general not a genuine solution! However, because the kink and anti-kink are well-separated, it is approximately a solution. There is no hope that it will be anywhere near a solution when the kink and anti-kink are close together, though!
+
+Before we move on to compute $\dot{\phi}$ and $\phi'$ explicitly and plug in numbers, we first make some simplifications and approximations. First, we restrict our attention to fields that are initially at rest. So we have $\dot{\phi} = 0$ at $t = 0$. Of course, the force will cause the kinks to move, but we shall, for now, ignore what happens when they start moving.
+
+That gets rid of one term. Next, we notice that we only care about the expression when evaluated at $b$. Here we have $\phi_2 - 1 \approx 0$. So we can try to expand the expression to first order in $\phi_2 - 1$ (and hence $\phi_2'$), and this gives
+\[
+ F = \left(-\frac{1}{2}\phi_1'^2 + U(\phi_1) - \phi_1' \phi_2' + (\phi_2 - 1) \frac{\d U}{\d \phi}(\phi_1)\right)_b.
+\]
+We have a zeroth order term $-\frac{1}{2} \phi_1'^2 + U(\phi_1)$. We claim that this must vanish. One way to see this is that this term corresponds to the force when there is no anti-kink $\phi_2$. Since the kink does not exert a force on itself, this must vanish!
+
+Analytically, we can deduce this from the Bogomolny equation, which says for any kink solution $\phi$, we have
+\[
+ \phi' = \frac{\d W}{\d \phi}.
+\]
+It then follows that
+\[
+ \frac{1}{2} \phi'^2 = \frac{1}{2} \left(\frac{\d W}{\d \phi}\right)^2 = U(\phi).
+\]
+Alternatively, we can just compute it directly! In any case, convince yourself that it indeed vanishes in your favorite way, and then move on.
+
+Finally, we note that the field equations tell us
+\[
+ \frac{\d U}{\d \phi}(\phi_1) = \phi_1''.
+\]
+So we can write the force as
+\[
+ F = \big(-\phi_1' \phi_2' + (\phi_2 - 1) \phi_1''\big)_b.
+\]
+That's about all the simplifications we can make without getting our hands dirty. We might think we should plug in the $\tanh$ terms and compute, but that is \emph{too} dirty. Instead, we use asymptotic expressions of kinks and anti-kinks far from their centers. Using the definition of $\tanh$, we have
+\[
+ \phi_1 = \tanh(x + a) = \frac{1 - e^{-2(x + a)}}{1 + e^{-2(x + a)}} \approx 1 - 2e^{-2(x + a)}.
+\]
+This is valid for $x \gg -a$, i.e.\ to the right of the kink. The constant factor of $2$ in front of the exponential is called the \term{amplitude} of the tail. We will later see that the $2$ appearing in the exponent has the interpretation of the mass of the field $\phi$.
+
+For $\phi_2$, take the approximation that $x \ll a$. Then
+\[
+ \phi_2 - 1= -\tanh(x - a) - 1 \approx -2 e^{2(x - a)}.
+\]
+We assume that our $b$ satisfies both of these conditions. These are obviously easy to differentiate once or twice. Doing this, we obtain
+\[
+ -\phi_1' \phi_2' = (-4e^{-2(x + a)})(-4 e^{2(x - a)}) = 16 e^{-4a}.
+\]
+Note that this is independent of $x$. In the formula, the $x$ will turn into a $b$, and we see that this part of the force is independent of $b$. Similarly, the other term is
+\[
+ (\phi_2 - 1) \phi_1'' = (-2e^{2(x - a)}) (-8 e^{-2(x + a)}) = 16 e^{-4a}.
+\]
+Therefore we find
+\[
+ F = 32 e^{-4a},
+\]
+and as promised, this is independent of the precise position of the cutoff $b$ we chose.
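+We can confirm both the value and the (approximate) $b$-independence numerically, by evaluating $T^1\!_1 = -\frac{1}{2}\phi'^2 + U(\phi)$ at a few cut points $b$ for the superposed configuration at rest. A sketch using NumPy; the choices of $a$, sample points, and tolerance are ours:

```python
import numpy as np

a = 5.0   # half-separation; kink at -a, anti-kink at +a, well-separated

def force_at(b):
    # F = T^1_1 at the cut: -1/2 phi'^2 + U(phi), with phi-dot = 0 initially
    phi = np.tanh(b + a) - np.tanh(b - a) - 1
    dphi = 1 / np.cosh(b + a)**2 - 1 / np.cosh(b - a)**2
    U = 0.5 * (1 - phi**2)**2
    return -0.5 * dphi**2 + U

predicted = 32 * np.exp(-4 * a)
# the result should be (approximately) independent of where we cut
vals = [force_at(b) for b in (-1.0, 0.0, 1.0)]
```

The agreement is only to first order in the tails, so the residual $b$-dependence is small but nonzero.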
+
+We can write this in a slightly more physical form. Our initial configuration was symmetric about $x = 0$, but in reality, only the separation matters. We write the separation of the pair as $s = 2a$. Then we have
+\[
+ F = 32 e^{-2s}.
+\]
+What is the interpretation of the factor of $2$? Recall that our potential was given by
+\[
+ U(\phi) = \frac{1}{2}(1 - \phi^2)^2.
+\]
+We can do perturbation theory around one of the vacua, say $\phi = 1$. Thus, we set $\phi = 1 + \eta$, and then expanding gives us
+\[
+ U(1 + \eta) \approx \frac{1}{2} (-2\eta)^2 = \frac{1}{2}m^2 \eta^2,
+\]
+where $m = 2$. This is the same ``2'' that goes into the exponent in the force.
+
+What about the constant factor of $32$? Recall that when we expanded the kink solution, we saw that the amplitude $A$ of the tail was $A = 2$. It turns out if we re-did our theory and put back the different possible parameters, we will find that the force is given by
+\[
+ F = 2 m^2 A^2 e^{-ms}.
+\]
+This is an interesting and important phenomenon. The mass $m$ was the \emph{perturbative} mass of the field. It is something we obtain by perturbation theory. However, the same mass appears in the force between the solitons, which is a non-perturbative phenomenon!
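+As a consistency check, we can extract $m$ numerically from the quadratic behaviour of $U$ near the vacuum, and confirm that $2m^2 A^2 e^{-ms}$ with $m = 2$ and $A = 2$ reproduces $F = 32 e^{-2s}$. A sketch with NumPy; the step $\eta$ and the tolerances are our own choices:

```python
import numpy as np

# Extract the meson mass from U(1 + eta) ~ (1/2) m^2 eta^2.
eta = 1e-4
U = 0.5 * (1 - (1 + eta)**2)**2
m = np.sqrt(2 * U) / eta               # should be close to 2

# With tail amplitude A = 2 and separation s = 2a,
# the general formula 2 m^2 A^2 e^{-m s} reproduces F = 32 e^{-2s}.
A, s = 2.0, 5.0
F_general = 2 * m**2 * A**2 * np.exp(-m * s)
F_explicit = 32 * np.exp(-2 * s)
```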
+
+This is perhaps not too surprising. After all, when we tried to understand the soliton interactions, we took the approximation that $\phi_1$ and $\phi_2$ are close to $1$ at $b$. Thus, we are in some sense perturbing around the vacuum $\phi \equiv 1$.
+
+%The perturbative field theory has a meson of mass $2 \hbar$. You might have not noticed this $\hbar$ when doing quantum field theory, because we set $\hbar = 1$. But this makes sense, because in free field theory, we decompose the quantum field into a lot of harmonic oscillators, and the energy of the harmonic oscillator was $\hbar\omega\left(n + \frac{1}{2}\right)$. So the $\hbar$ should be here.
+%
+%However, in our soliton, we do \emph{not} have an $\hbar$. That was not a mistake. We have only been working classically.
+%
+%The soliton has mass $M = \frac{4}{3}$, and this is much larger than the meson mass. Note that one should choose $\hbar$ to be small for perturbation theory to be justified. However, soliton is non-perturbative and has larger mass than the meson).
+
+We can interpret the force between the kink and anti-kink diagrammatically. From the quantum field theory point of view, we can think of this force as being due to meson exchange, and we can try to invent a Feynman diagram calculus that involves solitons and mesons. This is a bit controversial, but at least heuristically, we can introduce new propagators representing solitons, using double lines, and draw the interaction as
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i);
+ \vertex [right=of i] (m);
+ \vertex [above right=of m] (f1);
+ \vertex [below right=of m] (f2) {$\bar{K}$};
+ \vertex [above left=of i] (s1);
+ \vertex [below left=of i] (s2) {$K$};
+
+ \diagram*{
+ (i) -- [scalar] (m), % label ``meson''
+ (f2) -- [double distance=1.5pt] (m) -- [double distance=1.5pt] (f1), % make these double lines
+ (s1) -- [double distance=1.5pt] (i) -- [double distance=1.5pt] (s2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+%This is an effective diagram leading to a Yukawa force in $1 + 1$ dimensions, which decays exponentially with separation $s$.
+%
+%The amplitude of the tail of the of the soliton kink is $A = 2$. The factor of $32$ in the force ultimately comes from the mass being $m = 2$. More generally, one can show that if we put in parameters into our theory, we have
+%\[
+% F = 2 m^2 A^2 e^{-ms}.
+%\]
+
+So what happens to this kink-antikink pair? The force we derived was positive. So the kink is made to move to the right. By symmetry, we expect the anti-kink to move towards the left. They will collide!
+
+What happens when they collide? All our analysis so far assumed the kinks were well-separated, so everything breaks down. We can only understand this phenomenon numerically. After doing some numerical simulations, we see that there are two regimes:
+\begin{itemize}
+ \item If the kinks are moving slowly, then they will annihilate into \emph{meson radiation}.
+ \item If the kinks are moving very quickly, then they often bounce off each other.
+\end{itemize}
+
+\subsection{Quantization of kink motion}
+We now briefly talk about how to quantize kinks. The most naive way of doing so is pretty straightforward: we use the moduli space approximation, and then we have a very simple kink Lagrangian
+\[
+ L = \frac{1}{2} M \dot{a}^2.
+\]
+This is just a free particle moving in $\R$ with mass $M$. This $a$ is known as the \term{collective coordinate} of the kink. Quantizing a free particle is very straightforward. It is just IB Quantum Mechanics. For completeness, we will briefly outline this procedure.
+
+We first put the system in Hamiltonian form. The conjugate momentum to $a$ is given by
+\[
+ P = M \dot{a}.
+\]
+Then the Hamiltonian is given by
+\[
+ H = P \dot{a} - L = \frac{1}{2M} P^2.
+\]
+Then to quantize, we replace $P$ by the operator $-i\hbar \frac{\partial}{\partial a}$. In this case, the quantum Hamiltonian is given by
+\[
+ H = - \frac{\hbar^2}{2M} \frac{\partial^2}{\partial a^2}.
+\]
+A wavefunction is a function of $a$ and $t$, and this is just ordinary QM for a single particle.
+
+As usual, the stationary states are given by
+\[
+ \psi(a) = e^{i\kappa a},
+\]
+and the momentum and energy (eigenvalues) are
+\[
+ P = \hbar \kappa,\quad H = E = \frac{\hbar^2 \kappa^2}{2M} = \frac{P^2}{2M}.
+\]
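+We can confirm these eigenvalue relations numerically by applying finite-difference versions of $P$ and $H$ to $\psi(a) = e^{i\kappa a}$. A minimal sketch using NumPy; the value of $\kappa$, the grid, and the units are our own choices:

```python
import numpy as np

hbar, M, kappa = 1.0, 4/3, 2.5
a = np.linspace(-1, 1, 2001)
h = a[1] - a[0]
psi = np.exp(1j * kappa * a)

# P = -i hbar d/da and H = -(hbar^2/2M) d^2/da^2, via central differences
dpsi = (psi[2:] - psi[:-2]) / (2 * h)
d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h**2
P_numeric = np.abs(-1j * hbar * dpsi / psi[1:-1]).max()
E_numeric = np.abs(-hbar**2 / (2 * M) * d2psi / psi[1:-1]).max()

E_exact = hbar**2 * kappa**2 / (2 * M)
```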
+
+Is this actually ``correct''? Morally speaking, we really should quantize the complete $1 + 1$ dimensional field theory. What would this look like?
+
+In normal quantum field theory, we consider perturbations around a vacuum solution, say $\phi \equiv 1$, and we obtain mesons. Here if we want to quantize the kink solution, we should consider field oscillations around the kink. Then the solution contains both a kink and a meson. These mesons give rise to quantum corrections to the kink mass $M$.
+
+Should we be worried about these quantum corrections? Unsurprisingly, it turns out these quantum corrections are of the order of the meson mass. So we should not be worried when the meson mass is small.
+
+Meson-kink scattering can also be studied in the full quantum theory. To first approximation, since the kink is heavy, mesons are reflected or transmitted with some probabilities, while the momentum of the kink is unchanged. But when we work to higher orders, then of course the kink will move as a result. This is all rather complicated.
+
+For more details, see Rajaraman's \emph{Solitons and Instantons}, or Weinberg's \emph{Classical Solutions in Quantum Field Theory}.
+
+The thing that is really hard to understand in the quantum field theory is kink-antikink pair production. This happens in meson collisions when the mesons are very fast, and the theory is highly relativistic. What we have done so far is perturbative and makes the non-relativistic approximation to get the adiabatic picture. It is \emph{very} difficult to understand the highly relativistic regime.
+
+\subsection{Sine-Gordon kinks}
+We end the section by briefly talking about kinks in a different theory, namely the \term{sine-Gordon theory}. In this theory, kinks are often known as \emph{solitons}\index{sine-Gordon soliton} instead.
+
+The sine-Gordon theory is given by the potential
+\[
+ U(\phi) = 1 - \cos \phi.
+\]
+Again, we suppress coupling constants, but it is possible to add them back.
+
+The potential looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$\phi$};
+ \draw [->] (0, -0.5) -- (0, 2) node [above] {$U(\phi)$};
+
+ \draw [mblue, thick] (0, 0) cos (0.3, 0.7) sin (0.6, 1.4) cos (0.9, 0.7) sin (1.2, 0) cos (1.5, 0.7) sin (1.8, 1.4) cos (2.1, 0.7) sin (2.4, 0) cos (2.7, 0.7) sin (3, 1.4);;
+ \draw [mblue, thick, xscale=-1] (0, 0) cos (0.3, 0.7) sin (0.6, 1.4) cos (0.9, 0.7) sin (1.2, 0) cos (1.5, 0.7) sin (1.8, 1.4) cos (2.1, 0.7) sin (2.4, 0) cos (2.7, 0.7) sin (3, 1.4);;
+
+ \node at (1.2, 0) [below] {$2\pi$};
+ \node at (2.4, 0) [below] {$4\pi$};
+ \node at (-1.2, 0) [below] {$-2\pi$};
+ \node at (-2.4, 0) [below] {$-4\pi$};
+ \end{tikzpicture}
+\end{center}
+Now there are \emph{infinitely many} distinct vacua. In this case, we find we need to pick $W$ such that
+\[
+ \frac{\d W}{\d \phi} = 2 \sin \frac{1}{2}\phi.
+\]
+
+\subsubsection*{Static sine-Gordon kinks}
+To find the static kinks in the sine-Gordon theory, we again look at the Bogomolny equation. We have to solve
+\[
+ \frac{\d \phi}{\d x} = 2 \sin \frac{1}{2}\phi.
+\]
+This can be solved by separation of variables. This involves integrating a $\csc$, and ultimately gives us the solution
+\[
+ \phi(x) = 4 \tan^{-1} e^{x - a}.
+\]
+We can check that this solution interpolates between $0$ and $2\pi$.
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 2.5) node [above] {$\phi$};
+
+ \draw [mblue, thick] plot [smooth] coordinates {(-4.0,0.00000) (-3.9,0.00000) (-3.8,0.00000) (-3.7,0.00001) (-3.6,0.00001) (-3.5,0.00001) (-3.4,0.00001) (-3.3,0.00002) (-3.2,0.00002) (-3.1,0.00003) (-3.0,0.00005) (-2.9,0.00006) (-2.8,0.00008) (-2.7,0.00011) (-2.6,0.00015) (-2.5,0.00020) (-2.4,0.00027) (-2.3,0.00037) (-2.2,0.00050) (-2.1,0.00068) (-2.0,0.00091) (-1.9,0.00123) (-1.8,0.00166) (-1.7,0.00224) (-1.6,0.00303) (-1.5,0.00409) (-1.4,0.00552) (-1.3,0.00745) (-1.2,0.01005) (-1.1,0.01357) (-1.0,0.01831) (-0.9,0.02472) (-0.8,0.03336) (-0.7,0.04502) (-0.6,0.06074) (-0.5,0.08190) (-0.4,0.11035) (-0.3,0.14847) (-0.2,0.19922) (-0.1,0.26607) (0.0,0.35251) (0.1,0.46091) (0.2,0.59053) (0.3,0.73548) (0.4,0.88474) (0.5,1.02559) (0.6,1.14850) (0.7,1.24946) (0.8,1.32902) (0.9,1.39011) (1.0,1.43628) (1.1,1.47087) (1.2,1.49666) (1.3,1.51583) (1.4,1.53006) (1.5,1.54061) (1.6,1.54843) (1.7,1.55423) (1.8,1.55852) (1.9,1.56170) (2.0,1.56406) (2.1,1.56580) (2.2,1.56710) (2.3,1.56806) (2.4,1.56877) (2.5,1.56929) (2.6,1.56968) (2.7,1.56997) (2.8,1.57019) (2.9,1.57034) (3.0,1.57046) (3.1,1.57055) (3.2,1.57061) (3.3,1.57066) (3.4,1.57070) (3.5,1.57072) (3.6,1.57074) (3.7,1.57076) (3.8,1.57077) (3.9,1.57077) (4.0,1.57078)};
+ % map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (atan ( exp (3*x - 1))) "")) [-4,-3.9..4]
+
+ \draw [dashed] (-4, 1.57078) -- (4, 1.57078);
+
+ \node [left] at (-4, 0) {$0$};
+ \node [right] at (4, 1.57078) {$2\pi$};
+ \node [circ] at (0.33, 0.785398) {};
+ \draw [dashed] (0.33, 0.785398) -- (0.33, 0) node [below] {$a$};
+ \end{tikzpicture}
+\end{center}
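+Again we can check numerically that this profile solves the Bogomolny equation, interpolates between $0$ and $2\pi$, and has the Bogomolny energy $W(2\pi) - W(0) = 8$, using $W = -4 \cos \frac{1}{2}\phi$ (which satisfies $\frac{\d W}{\d \phi} = 2 \sin \frac{1}{2}\phi$ above). A sketch with NumPy:

```python
import numpy as np

# Sine-Gordon kink phi(x) = 4 arctan(e^{x - a}); take a = 0.
x = np.linspace(-15, 15, 30001)
dx = x[1] - x[0]
phi = 4 * np.arctan(np.exp(x))
dphi = 2 / np.cosh(x)                  # phi' = 2 sech(x), by direct differentiation

# Bogomolny equation: phi' = 2 sin(phi/2)
residual = np.max(np.abs(dphi - 2 * np.sin(phi / 2)))

# Boundary values and energy E = int (1/2 phi'^2 + 1 - cos phi) dx
left, right = phi[0], phi[-1]
E = np.sum(0.5 * dphi**2 + 1 - np.cos(phi)) * dx
```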
+
+Unlike the $\phi^4$ theory, dynamical multi-kink solutions exist here and can be derived \emph{exactly}. One of the earlier ways to do so was via B\"acklund transforms, but that was very complicated. People later invented better methods, but they are still not very straightforward. Nevertheless, it can be done. Ultimately, this is due to the sine-Gordon equation being \emph{integrable}. For more details, refer to the IID Integrable Systems course.
+
+\begin{eg}
+ There is a two-kink solution
+ \[
+ \phi (x, t) = 4 \tan^{-1} \left(\frac{v \sinh \gamma x}{ \cosh \gamma vt}\right),
+ \]
+ where, as usual, we have
+ \[
+ \gamma = (1 - v^2)^{-1/2}.
+ \]
+ For $v = 0.01$, this looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -2.5) -- (0, 2.5) node [above] {$\phi$};
+
+ \draw [mblue, thick] plot [smooth] coordinates {(-4.0,-1.56957) (-3.9,-1.56914) (-3.8,-1.56856) (-3.7,-1.56778) (-3.6,-1.56672) (-3.5,-1.56529) (-3.4,-1.56337) (-3.3,-1.56077) (-3.2,-1.55726) (-3.1,-1.55252) (-3.0,-1.54613) (-2.9,-1.53751) (-2.8,-1.52587) (-2.7,-1.51019) (-2.6,-1.48906) (-2.5,-1.46067) (-2.4,-1.42263) (-2.3,-1.37197) (-2.2,-1.30524) (-2.1,-1.21893) (-2.0,-1.11067) (-1.9,-0.98117) (-1.8,-0.83628) (-1.7,-0.68699) (-1.6,-0.54603) (-1.5,-0.42296) (-1.4,-0.32183) (-1.3,-0.24211) (-1.2,-0.18089) (-1.1,-0.13458) (-1.0,-0.09986) (-0.9,-0.07394) (-0.8,-0.05461) (-0.7,-0.04020) (-0.6,-0.02942) (-0.5,-0.02129) (-0.4,-0.01509) (-0.3,-0.01027) (-0.2,-0.00637) (-0.1,-0.00305) (0.0,0.00000) (0.1,0.00305) (0.2,0.00637) (0.3,0.01027) (0.4,0.01509) (0.5,0.02129) (0.6,0.02942) (0.7,0.04020) (0.8,0.05461) (0.9,0.07394) (1.0,0.09986) (1.1,0.13458) (1.2,0.18089) (1.3,0.24211) (1.4,0.32183) (1.5,0.42296) (1.6,0.54603) (1.7,0.68699) (1.8,0.83628) (1.9,0.98117) (2.0,1.11067) (2.1,1.21893) (2.2,1.30524) (2.3,1.37197) (2.4,1.42263) (2.5,1.46067) (2.6,1.48906) (2.7,1.51019) (2.8,1.52587) (2.9,1.53751) (3.0,1.54613) (3.1,1.55252) (3.2,1.55726) (3.3,1.56077) (3.4,1.56337) (3.5,1.56529) (3.6,1.56672) (3.7,1.56778) (3.8,1.56856) (3.9,1.56914) (4.0,1.56957)};
+ \draw [mred, thick] plot [smooth] coordinates {(-4.0,-1.53726) (-3.9,-1.52555) (-3.8,-1.50975) (-3.7,-1.48847) (-3.6,-1.45988) (-3.5,-1.42157) (-3.4,-1.37057) (-3.3,-1.30340) (-3.2,-1.21659) (-3.1,-1.10779) (-3.0,-0.97784) (-2.9,-0.83269) (-2.8,-0.68347) (-2.7,-0.54286) (-2.6,-0.42031) (-2.5,-0.31974) (-2.4,-0.24053) (-2.3,-0.17975) (-2.2,-0.13381) (-2.1,-0.09939) (-2.0,-0.07374) (-1.9,-0.05467) (-1.8,-0.04052) (-1.7,-0.03002) (-1.6,-0.02224) (-1.5,-0.01648) (-1.4,-0.01221) (-1.3,-0.00904) (-1.2,-0.00670) (-1.1,-0.00496) (-1.0,-0.00367) (-0.9,-0.00271) (-0.8,-0.00200) (-0.7,-0.00147) (-0.6,-0.00108) (-0.5,-0.00078) (-0.4,-0.00055) (-0.3,-0.00038) (-0.2,-0.00023) (-0.1,-0.00011) (0.0,0.00000) (0.1,0.00011) (0.2,0.00023) (0.3,0.00038) (0.4,0.00055) (0.5,0.00078) (0.6,0.00108) (0.7,0.00147) (0.8,0.00200) (0.9,0.00271) (1.0,0.00367) (1.1,0.00496) (1.2,0.00670) (1.3,0.00904) (1.4,0.01221) (1.5,0.01648) (1.6,0.02224) (1.7,0.03002) (1.8,0.04052) (1.9,0.05467) (2.0,0.07374) (2.1,0.09939) (2.2,0.13381) (2.3,0.17975) (2.4,0.24053) (2.5,0.31974) (2.6,0.42031) (2.7,0.54286) (2.8,0.68347) (2.9,0.83269) (3.0,0.97784) (3.1,1.10779) (3.2,1.21659) (3.3,1.30340) (3.4,1.37057) (3.5,1.42157) (3.6,1.45988) (3.7,1.48847) (3.8,1.50975) (3.9,1.52555) (4.0,1.53726)};
+ % let v = 0.01 in let g = sqrt (1 / (1 - v * v)) in let t = 400 in map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (atan (v * sinh (g * x * 3) / cosh (v * g * t))) "")) [-4,-3.9..4]
+
+ \draw [dashed] (-4, 1.57078) -- (4, 1.57078);
+ \draw [dashed] (-4, -1.57078) -- (4, -1.57078);
+
+ \node [right] at (4, 1.57078) {$2\pi$};
+ \node [left] at (-4, -1.57078) {$-2\pi$};
+
+ \node [mblue, left] at (1.73, 0.785) {\small$t = 0$};
+ \node [mred, right] at (2.85, 0.785) {\small$t = \pm400$};
+ \end{tikzpicture}
+ \end{center}
+
+ Note that since $\phi(x, t) = \phi(x, -t)$, we see that this solution involves two solitons at first approaching each other, and then later bouncing off. Thus, the two kinks \emph{repel} each other. When we did kinks in $\phi^4$ theory, we saw that a kink and an anti-kink attracted, but here there are two kinks, which is qualitatively different.
+
We can again compute the force just as we did in the $\phi^4$ theory, but alternatively, since we have a full, exact solution, we can work it out directly from the solution! The answers, fortunately, agree. If we do the computations, we find that the distance of closest approach is $\sim 2 \log \left(\frac{2}{v}\right)$ if $v$ is small.
+\end{eg}
+
+There are some important comments to make. In the sine-Gordon theory, we can have very complicated interactions between kinks and anti-kinks, and these can connect vastly different vacua. However, \emph{static} solutions must join $2n\pi$ and $2(n \pm 1)\pi$ for some $n$, because if we want to join vacua further apart, we will have more than one kink, and they necessarily interact.
+
+If we have multiple kinks and anti-kinks, then each of these things can have their own velocity, and we might expect some very complicated interaction between them, such as annihilation and pair production. But remarkably, the interactions are \emph{not} complicated. If we try to do numerical simulations, or use the exact solutions, we see that we do not have energy loss due to ``radiation''. Instead, the solitons remain very well-structured and retain their identities. This, again, is due to the theory being integrable.
+
+\subsubsection*{Topology of the sine-Gordon equation}
+There are also a lot of interesting things we can talk about without going into details about what the solutions look like.
+
+The important realization is that our potential is periodic in $\phi$. For the sine-Gordon theory, it is much better to think of this as a field modulo $2\pi$, i.e.\ as a function
+\[
+ \phi: \R \to S^1.
+\]
+In this language, the boundary condition is that $\phi(x) = 0 \bmod 2\pi$ as $x \to \pm \infty$. Thus, instead of thinking of the kink as joining two vacua, we can think of it as ``winding around the circle'' instead.
+
+We can go further. Since the boundary conditions of $\phi$ are now the same on two sides, we can join the ends of the domain $\R$ together, and we can think of $\phi$ as a map
+\[
+ \phi: S^1 \to S^1
+\]
+instead. This is a \term{compactification} of space.
+
+Topologically, such maps are classified by their \term{winding number}, or the \term{degree}, which we denote \term{$Q$}. This is a topological (homotopy) invariant of a map, and is preserved under continuous deformations of the field. Thus, it is preserved under time evolution of the field.
+
+Intuitively, the winding number is just how many times we go around the circle. There are multiple (equivalent) ways of making this precise.
+
+The first way, which is the naive way, is purely topological. We simply have to go back to the first picture, where we regard $\phi$ as a real value. Suppose the boundary values are
+\[
+ \phi(-\infty) = 2 n_- \pi,\quad \phi(\infty) = 2 n_+ \pi.
+\]
+Then we set the winding number to be $Q = n_+ - n_-$.
+
+Topologically, we are using the fact that $\R$ is the \term{universal covering space}\index{covering space} of the circle, and thus we are really looking at the induced map on the fundamental group of the circle.
+
+\begin{eg}
+ As we saw, a single kink has $Q = 1$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 2.5) node [above] {$\phi$};
+
+ \draw [mblue, thick] plot [smooth] coordinates {(-4.0,0.00000) (-3.9,0.00000) (-3.8,0.00000) (-3.7,0.00001) (-3.6,0.00001) (-3.5,0.00001) (-3.4,0.00001) (-3.3,0.00002) (-3.2,0.00002) (-3.1,0.00003) (-3.0,0.00005) (-2.9,0.00006) (-2.8,0.00008) (-2.7,0.00011) (-2.6,0.00015) (-2.5,0.00020) (-2.4,0.00027) (-2.3,0.00037) (-2.2,0.00050) (-2.1,0.00068) (-2.0,0.00091) (-1.9,0.00123) (-1.8,0.00166) (-1.7,0.00224) (-1.6,0.00303) (-1.5,0.00409) (-1.4,0.00552) (-1.3,0.00745) (-1.2,0.01005) (-1.1,0.01357) (-1.0,0.01831) (-0.9,0.02472) (-0.8,0.03336) (-0.7,0.04502) (-0.6,0.06074) (-0.5,0.08190) (-0.4,0.11035) (-0.3,0.14847) (-0.2,0.19922) (-0.1,0.26607) (0.0,0.35251) (0.1,0.46091) (0.2,0.59053) (0.3,0.73548) (0.4,0.88474) (0.5,1.02559) (0.6,1.14850) (0.7,1.24946) (0.8,1.32902) (0.9,1.39011) (1.0,1.43628) (1.1,1.47087) (1.2,1.49666) (1.3,1.51583) (1.4,1.53006) (1.5,1.54061) (1.6,1.54843) (1.7,1.55423) (1.8,1.55852) (1.9,1.56170) (2.0,1.56406) (2.1,1.56580) (2.2,1.56710) (2.3,1.56806) (2.4,1.56877) (2.5,1.56929) (2.6,1.56968) (2.7,1.56997) (2.8,1.57019) (2.9,1.57034) (3.0,1.57046) (3.1,1.57055) (3.2,1.57061) (3.3,1.57066) (3.4,1.57070) (3.5,1.57072) (3.6,1.57074) (3.7,1.57076) (3.8,1.57077) (3.9,1.57077) (4.0,1.57078)};
+ % map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (atan ( exp (3*x - 1))) "")) [-4,-3.9..4]
+
+ \draw [dashed] (-4, 1.57078) -- (4, 1.57078);
+
+ \node [left] at (-4, 0) {$0$};
+ \node [right] at (4, 1.57078) {$2\pi$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
Thus, we can think of $Q$ as the \emph{net soliton number}.
+
+
+But this construction we presented is rather specific to maps from $S^1$ to $S^1$. We want something more general that can be used for more complicated systems. We can do this in a more ``physics'' way. We note that there is a \emph{topological} current
+\[
+ j^\mu = \frac{1}{2\pi} \varepsilon^{\mu\nu} \partial_\nu \phi,
+\]
+where $\varepsilon^{\mu\nu}$ is the anti-symmetric tensor in $1 + 1$ dimensions, chosen so that $\varepsilon^{01} = 1$.
+
+In components, this is just
+\[
+ j^\mu = \frac{1}{2\pi} (\partial_x \phi, - \partial_t \phi).
+\]
+This is conserved because of the symmetry of mixed partial derivatives, so that
+\[
+ \partial_\mu j^\mu = \frac{1}{2\pi} \varepsilon^{\mu\nu} \partial_\mu \partial_\nu \phi = 0.
+\]
+As usual, a current induces a conserved charge
+\[
+ Q = \int_{-\infty}^\infty j^0 \;\d x = \frac{1}{2\pi} \int_{-\infty}^\infty \partial_x \phi \;\d x = \frac{1}{2\pi} (\phi(\infty) - \phi(-\infty)) = n_+ - n_-,
+\]
+which is the formula we had earlier.
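
As a quick numerical sanity check (a sketch in Python, not part of the notes themselves), we can evaluate this integral for the basic kink $\phi(x) = 4 \tan^{-1} e^x$, whose charge should come out as the integer $1$:

```python
import math

# Numerical sketch: for the sine-Gordon kink phi(x) = 4 arctan(e^x), verify
# that Q = (1/2pi) * integral of dphi/dx over R equals 1.
def dphi(x):
    # d/dx [4 arctan(e^x)] = 2 sech(x)
    return 2.0 / math.cosh(x)

L, n = 20.0, 4000
h = 2 * L / n
# trapezoidal rule; the integrand decays exponentially, so [-20, 20] suffices
integral = sum(0.5 * (dphi(-L + i * h) + dphi(-L + (i + 1) * h)) * h
               for i in range(n))
Q = integral / (2 * math.pi)
print(Q)  # approximately 1
```

The trapezoidal rule converges extremely quickly here because the integrand and all its derivatives are exponentially small at the ends of the interval.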
+
+Note that all these properties do not depend on $\phi$ satisfying any field equations! It is completely topological.
+
+Finally, there is also a differential geometry way of defining $Q$. We note that the target space $S^1$ has a normalized volume form $\omega$ so that
+\[
+ \int_{S^1} \omega = 1.
+\]
+For example, we can take
+\[
+ \omega = \frac{1}{2\pi}\;\d \phi.
+\]
+Now, given a mapping $\phi: \R \to S^1$, we can pull back the volume form to obtain
+\[
+ \phi^* \omega = \frac{1}{2\pi} \frac{\d \phi}{\d x} \;\d x.
+\]
+We can then define the degree of the map to be
+\[
+ Q = \int \phi^* \omega = \frac{1}{2\pi}\int_{-\infty}^\infty \frac{\d \phi}{\d x}\;\d x.
+\]
+This is exactly the same as the formula we obtained using the current!
+
Note that even though the volume form is normalized on $S^1$ and has integral $1$, its pullback need not integrate to $1$. We can imagine this as saying that if we wind around the circle $n$ times, then we pull back $n$ ``copies'' of the volume form, and so the integral will be $n$ times the integral over $S^1$.
+
+We saw that these three definitions gave the same result, and different definitions have different benefits. For example, in the last two formulations, it is not \emph{a priori} clear that the winding number has to be an integer, while this is clear in the first formulation.
+
+\section{Vortices}
+We are now going to start studying \emph{vortices}. These are topological solitons in two space dimensions. While we mostly studied $\phi^4$ kinks last time, what we are going to do here is more similar to the sine-Gordon theory than the $\phi^4$ theory, as it is largely topological in nature.
+
+A lot of the computations we perform in the section are much cleaner when presented using the language of differential forms. However, we shall try our best to provide alternative versions in coordinates for the easily terrified.
+
+\subsection{Topological background}
+\subsubsection*{Sine-Gordon kinks}
+We now review what we just did for sine-Gordon kinks, and then try to develop some analogous ideas in higher dimension. The sine-Gordon equation is given by
+\[
+ \frac{\partial^2 \theta}{\partial t^2} - \frac{\partial^2 \theta}{\partial x^2} + \sin \theta = 0.
+\]
+
+We want to choose boundary conditions so that the energy has a chance to be finite. The first part is, of course, to figure out what the energy is. The energy-momentum conservation equation given by Noether's theorem is
+\[
+ \partial_t \left(\frac{\theta_t^2 + \theta_x^2}{2} + (1 - \cos \theta)\right) + \partial_x (-\theta_t \theta_x) = \partial_\mu P^\mu = 0.
+\]
+The energy we will be considering is thus
+\[
+ E = \int_{\R} P^0 \;\d x = \int_\R \left(\frac{\theta_t^2 + \theta_x^2}{2} + (1 - \cos \theta)\right)\;\d x.
+\]
+Thus, to obtain finite energy, we will want $\theta(x) \to 2n_{\pm} \pi$ for some integers $n_{\pm}$ as $x \to \pm \infty$. What is the significance of this $n_{\pm}$?
+\begin{eg}
+ Consider the basic kink
+ \[
+ \theta_K(x) = 4\tan^{-1} e^x.
+ \]
+ Picking the standard branch of $\tan^{-1}$, this kink looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0) node [right] {$x$};
      \draw [->] (0, -0.5) -- (0, 2.5) node [above] {$\theta$};
+
+ \draw [mblue, thick] plot [smooth] coordinates {(-4.0,0.00001) (-3.9,0.00001) (-3.8,0.00001) (-3.7,0.00002) (-3.6,0.00002) (-3.5,0.00003) (-3.4,0.00004) (-3.3,0.00005) (-3.2,0.00007) (-3.1,0.00009) (-3.0,0.00012) (-2.9,0.00017) (-2.8,0.00022) (-2.7,0.00030) (-2.6,0.00041) (-2.5,0.00055) (-2.4,0.00075) (-2.3,0.00101) (-2.2,0.00136) (-2.1,0.00184) (-2.0,0.00248) (-1.9,0.00335) (-1.8,0.00452) (-1.7,0.00610) (-1.6,0.00823) (-1.5,0.01111) (-1.4,0.01499) (-1.3,0.02024) (-1.2,0.02732) (-1.1,0.03687) (-1.0,0.04975) (-0.9,0.06710) (-0.8,0.09047) (-0.7,0.12185) (-0.6,0.16382) (-0.5,0.21953) (-0.4,0.29255) (-0.3,0.38616) (-0.2,0.50193) (-0.1,0.63760) (0.0,0.78540) (0.1,0.93320) (0.2,1.06887) (0.3,1.18464) (0.4,1.27824) (0.5,1.35126) (0.6,1.40698) (0.7,1.44895) (0.8,1.48033) (0.9,1.50369) (1.0,1.52105) (1.1,1.53393) (1.2,1.54348) (1.3,1.55056) (1.4,1.55580) (1.5,1.55969) (1.6,1.56257) (1.7,1.56470) (1.8,1.56628) (1.9,1.56745) (2.0,1.56832) (2.1,1.56896) (2.2,1.56944) (2.3,1.56979) (2.4,1.57005) (2.5,1.57024) (2.6,1.57039) (2.7,1.57049) (2.8,1.57057) (2.9,1.57063) (3.0,1.57067) (3.1,1.57070) (3.2,1.57073) (3.3,1.57075) (3.4,1.57076) (3.5,1.57077) (3.6,1.57078) (3.7,1.57078) (3.8,1.57079) (3.9,1.57079) (4.0,1.57079)};
      % map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (atan ( exp (3*x))) "")) [-4,-3.9..4]
+
+ \draw [dashed] (-4, 1.57078) -- (4, 1.57078);
+
+ \node [left] at (-4, 0) {$0$};
+ \node [right] at (4, 1.57078) {$2\pi$};
+ \end{tikzpicture}
+ \end{center}
+ This goes from $\theta = 0$ to $\theta = 2\pi$.
+\end{eg}
+
+To better understand this, we can think of $\theta$ as an angular variable, i.e.\ we identify $\theta \sim \theta + 2n\pi$ for any $n \in \Z$. This is a sensible thing because the energy density and the equation etc.\ are unchanged when we shift everything by $2n\pi$. Thus, $\theta$ is not taking values in $\R$, but in $\R/2\pi\Z \cong S^1$.
+
+Thus, for each fixed $t$, our field $\theta$ is a map
+\[
+ \theta: \R \to S^1.
+\]
+The number $Q = n_+ - n_-$ equals the number of times $\theta$ covers the circle $S^1$ on going from $x = -\infty$ to $x = +\infty$. This is the winding number, which is interpreted as the topological charge.
+
+As we previously discussed, we can express this topological charge as the integral of some current. We can write
+\[
+ Q = \frac{1}{2\pi} \int_{\theta(\R)} \d \theta = \frac{1}{2\pi} \int_{-\infty}^\infty \frac{\d \theta}{\d x}\;\d x.
+\]
+Note that this formula automatically takes into account the orientation. This is the form that will lead to generalization in higher dimensions.
+
+This function $\frac{\d \theta}{\d x}$ appearing in the integral has the interpretation as a topological charge density. Note that there is a topological conservation law
+\[
+ \partial_\mu j^\mu = \frac{\partial j^0}{\partial t} + \frac{\partial j^1}{\partial x} = 0,
+\]
+where
+\[
+ j^0 = \theta_x,\quad j^1 = -\theta_t.
+\]
+This conservation law is not a consequence of the field equations, but merely a mathematical identity, namely the commutation of partial derivatives.
+
+\subsubsection*{Two dimensions}
+For the sine-Gordon kink, the target space was a circle $S^1$. Now, we are concerned with the \term{unit disk}\index{$D$}
+\[
  D = \{(x^1, x^2): |\mathbf{x}|^2 \leq 1\} \subseteq \R^2.
+\]
+We will then consider fields
+\[
+ \Phi: D \to D.
+\]
+In the case of a sine-Gordon kink, we still cared about moving solitons. However, here we will mostly work with static solutions, and study fields at a fixed time. Thus, there is no time variable appearing.
+
+Using the canonical isomorphism $\R^2 \cong \C$, we can think of the target space as the unit disk in the complex plane, and write the field as
+\[
+ \Phi = \Phi^1 + i \Phi^2.
+\]
+However, we will usually view the $D$ in the domain as a real space instead.
+
+We will impose some boundary conditions. We pick any function $\chi: S^1 \to \R$, and consider
+\[
+ g = e^{i\chi}: S^1 \to S^1 = \partial D \subseteq D.
+\]
+Here $g$ is a genuine function, and has to be single-valued. So $\chi$ must be single-valued modulo $2\pi$. We then require
+\[
  \Phi|_{\partial D} = g = e^{i \chi}.
+\]
+In particular, $\Phi$ must send the boundary to the boundary.
+
+Now the target space $D$ has a canonical choice of measure $\d \Phi^1 \wedge \d \Phi^2$. Then we can expect the new topological charge to be given by
+\[
+ Q = \frac{1}{\pi} \int_D \d \Phi^1 \wedge \d \Phi^2 = \frac{1}{\pi} \int_D \det
+ \begin{pmatrix}
+ \frac{\partial \Phi^1}{\partial x^1} & \frac{\partial \Phi^1}{\partial x^2}\\
+ \frac{\partial \Phi^2}{\partial x^1} & \frac{\partial \Phi^2}{\partial x^2}
+ \end{pmatrix}\;\d x^1 \wedge \d x^2.
+\]
+Thus, the charge density is given by
+\[
+ j^0 = \frac{1}{2} \varepsilon_{ab} \varepsilon_{ij} \frac{\partial \Phi^a}{\partial x^i} \frac{\partial \Phi^b}{\partial x^j}.
+\]
+Crucially, it turns out this charge density is a total derivative, i.e.\ we have
+\[
+ j^0 = \frac{\partial V^i}{\partial x^i}
+\]
+for some function $V$. It is not immediately obvious this is the case. However, we can in fact pick
+\[
+ V^i = \frac{1}{2} \varepsilon_{ab}\varepsilon^{ij} \Phi^a \frac{\partial \Phi^b}{\partial x^j}.
+\]
+To see this actually works, we need to use the anti-symmetry of $\varepsilon^{ij}$ and observe that
+\[
+ \varepsilon^{ij} \frac{\partial^2 \Phi^b}{\partial x^i \partial x^j} = 0.
+\]
+Equivalently, using the language of differential forms, we view the charge density $j^0$ as the $2$-form
+\[
+ j^0 = \d \Phi^1 \wedge \d \Phi^2 = \d (\Phi^1\;\d \Phi^2) = \frac{1}{2} \d (\Phi^1 \;\d \Phi^2 - \Phi^2 \;\d \Phi^1).
+\]
+By the divergence theorem, we find that
+\[
+ Q = \frac{1}{2\pi}\oint_{\partial D} \Phi^1 \;\d \Phi^2 - \Phi^2 \;\d \Phi^1.
+\]
+We then use that on the boundary,
+\[
+ \Phi^1 = \cos \chi,\quad \Phi^2 = \sin \chi,
+\]
+so
+\[
+ Q = \frac{1}{2\pi} \oint_{\partial D} (\cos^2 \chi + \sin^2 \chi) \;\d \chi = \frac{1}{2\pi} \oint_{\partial D} \;\d \chi = N.
+\]
+Thus, the charge is just the winding number of $g$!
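
To make this concrete, here is a small numerical sketch (an illustration, not from the notes): discretizing the boundary integral $\frac{1}{2\pi} \oint (\Phi^1 \;\d \Phi^2 - \Phi^2 \;\d \Phi^1)$ for the boundary map $g(\theta) = e^{iN\theta}$ recovers the integer $N$:

```python
import math

# Approximate (1/2pi) * boundary integral of (Phi^1 dPhi^2 - Phi^2 dPhi^1)
# for g(theta) = e^{i N theta}, using a fine polygonal discretization.
def winding(N, n=2000):
    total = 0.0
    for i in range(n):
        t0 = 2 * math.pi * i / n
        t1 = 2 * math.pi * (i + 1) / n
        p0 = complex(math.cos(N * t0), math.sin(N * t0))
        p1 = complex(math.cos(N * t1), math.sin(N * t1))
        pm, dp = (p0 + p1) / 2, p1 - p0   # midpoint of Phi, increment dPhi
        total += pm.real * dp.imag - pm.imag * dp.real
    return total / (2 * math.pi)

for N in (1, 2, -3):
    print(N, round(winding(N)))  # recovers N itself
```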
+
+%But what actually is the degree telling us? Let's fix a boundary condition $\Phi|_{\partial D} = g = e^{i\chi}$, where $\chi: \partial D \cong S^1 \to \R$ is a real function. Since $g$ is now a map from the boundary circle to the boundary circle. So $g$ itself has got a winding number,
+%\[
+% \frac{1}{2\pi} \int_0^{2\pi} \frac{\d \chi}{\d \theta} \;\d \theta.
+%\]
+%Let's relate the degree of $\Phi$ to the winding number of $g$. To do so, we will need the fact that the Jacobian determinant is a total derivative, so that we can apply Green's identity. Thus, we want to write $j^0$ as
+%\[
+% j^0 = \frac{\partial V^i}{\partial x^i}
+%\]
+%for some $V^i$. It might not be immediately obvious how we can pick such a $V^i$, but notice that
+%\[
+% \varepsilon_{ij} \frac{\partial^2 \Phi^b}{\partial x^i \partial x^j} = 0
+%\]
+%by anti-symmetry. So we can in fact pick
+%\[
+% V^i = \frac{1}{2} \varepsilon_{ab}\varepsilon_{ij} \Phi^a \frac{\partial \Phi^b}{\partial x^j}.
+%\]
+%Then in this case, we have
+%\[
+% \int_D j^0 \;\d x = \int \left(\frac{\partial V^1}{\partial x^1} + \frac{\partial V^2}{\partial x^2}\right)\;\d x^1 \;\d x^2 = \oint_{\partial D} V^1 \;\d x^2 - V^2 \;\d x^1.
+%\]
+%On the boundary, we know $\Phi^1 = \cos \chi$ and $\Phi^2 = \sin \chi$. Substituting that in, we can calculate $V^1$ and $V^2$ on the boundary, and then work out the integral to be
+%\[
+% \int_D j\;\d x = \frac{1}{2} \oint_D \;\d \chi = \pi N,
+%\]
+%where $N$ is the winding number of $g: S^1 \to S^1$.
+%
+%This is a general phenomenon. Often, the appearance of anti-symmetric combinations of derivatives will allow us to reduce complicated quantities into total derivatives. Then this will allow us to relate things that \emph{a priori} depends on the value of $\Phi$ everywhere into something that depends only on the boundary values.
+%The next interesting thing to notice is that the charge $Q$ is invariant under scaling of the domain. If we scale $x$ by $R$, then $\d x^1 \wedge \d x^2$ gets scaled up by $R^2$, but the Jacobian gets scaled down by $R^2$. Thus, an interesting thing to do is to take take $R \to \infty$. Then the domain becomes all of $\R^2$.
+
+Now notice that our derivation didn't really depend on our domain being $D$. It could have been any region bounded by a simple closed curve in $\R^2$. In particular, we can take it to be a disk $D_R$ of arbitrary radius $R$.
+
+What we are \emph{actually} interested in is a field
+\[
+ \Phi: \R^2 \to D.
+\]
+We then impose asymptotic boundary conditions
+\[
+ \Phi \sim g = e^{i\chi}
+\]
+as $|x| \to \infty$. We can still define the charge or degree by
+\[
  Q = \frac{1}{\pi} \int_{\R^2} j^0 \;\d x^1 \;\d x^2 = \frac{1}{\pi} \lim_{R \to \infty}\int_{D_R} j^0 \;\d x^1 \;\d x^2.
+\]
+This is then again the winding number of $g$.
+%So our field is now a mapping $\Phi: \R^2 \to D$. We then set the asymptotic boundary condition $\Phi \sim g = e^{i \chi}$ as $|x| \to \infty$. For these maps, we have the same formula for degree, namely
+%\[
+% Q = \frac{1}{\pi} \int_{\R^2} \frac{1}{2} \varepsilon_{ab} \varepsilon_{ij} \partial_i \Phi^a \partial_j \Phi^b \;\d x^1 \;\d x^2 = \frac{1}{2\pi} \lim_{R\to \infty} \oint_{|x| = R} \d \chi \in \Z.
+%\]
+%This is going to be our basic topological quantity for the vortices. Its existence, and being non-zero, is what gives rise to a stable vortex.
+
+It is convenient to rewrite this in terms of an inner product. $\R^2$ itself has an inner product, and under the identification $\R^2 \cong \C$, the inner product can be written as
+\[
+ (a, b) = \frac{\bar{a} b + a \bar{b}}{2}.
+\]
This expression allows calculations to be done efficiently if one notes that for real numbers $a$ and $b$, we have
+\[
+ (a, b) = (ai, bi) = ab, \quad (ai, b) = (a, bi) = 0.
+\]
+In particular, we can evaluate
+\[
+ (i \Phi, \d \Phi) = (i \Phi^1 - \Phi^2, \d \Phi^1 + i \d \Phi^2) = \Phi^1\;\d \Phi^2 - \Phi^2\;\d \Phi^1.
+\]
+This is just (twice) the current $V$ we found earlier! So we can write our charge as
+\[
+ Q = \frac{1}{2\pi} \lim_{R \to \infty} \oint_{|x| = R} (i \Phi, \d \Phi).
+\]
+We will refer to $(i \Phi, \d \Phi)$ as the current, and the corresponding charge density is $j^0 = \frac{1}{2} \d (i \Phi, \d \Phi)$.
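
As a small check of the algebra (a sketch; the identity is also easy to verify by hand), we can confirm numerically that $(i \Phi, \d \Phi) = \Phi^1 \;\d \Phi^2 - \Phi^2 \;\d \Phi^1$ at a sample point:

```python
# Sample values are arbitrary (hypothetical); any complex numbers work.
def ip(a, b):
    # (a, b) = (conj(a) b + a conj(b)) / 2, which equals Re(conj(a) b)
    return ((a.conjugate() * b + a * b.conjugate()) / 2).real

Phi, dPhi = complex(0.3, -0.7), complex(1.2, 0.5)
lhs = ip(1j * Phi, dPhi)
rhs = Phi.real * dPhi.imag - Phi.imag * dPhi.real
print(abs(lhs - rhs) < 1e-12)  # True
```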
+
+%We can rewrite this in terms of complex numbers. Instead of viewing a $\Phi$ as a function valued in $D \subseteq \R^2$, we view it as a function $\Phi: D \to \C$. We already implicitly did so when we wrote the boundary condition as $e^{i \chi}$. In terms of complex numbers, we can write the standard inner product on $\R^2$ as
+%\[
+% (a, b) = \frac{\bar{a} b + a \bar{b}}{2}.
+%\]
+%Then we can write the integral for the charge as
+%\[
+% \oint_{\partial D}\;\d \chi = \oint_{\partial D} (i\Phi, \d \Phi).
+%\]
+%This formulation is sometimes more convenient for calculations.
+
This current is actually a familiar object from quantum mechanics: recall that for the Schr\"odinger equation
\[
  i \frac{\partial \Phi}{\partial t} = - \frac{1}{2m} \Delta \Phi + V(x) \Phi,\quad \Delta = \nabla^2,
\]
the probability $\int |\Phi|^2 \;\d x$ is conserved. The differential form of the probability conservation law is
+\[
+ \frac{1}{2} \partial_t (\Phi, \Phi) + \frac{1}{2m} \nabla \cdot (i \Phi, \nabla \Phi) = 0.
+\]
+What appears in the flux term here is just the topological current!
+
+%The final comment, which we will not actually use, is that if we have a holomorphic function $f$, then consider the integral
+%\[
+% \frac{1}{2\pi i} \oint_C \frac{f'(z)}{f(z) - w}\;\d z.
+%\]
+%Using the residue theorem, we can see that this is the number of times $f$ takes value $w$, counted with multiplicity, within the area bounded by $C$. This is the \term{local degree} of the mapping. Indeed,
+%\[
+% \Res\left(\frac{f'(z)}{f(z) - w}, z_i\right) = n_i,
+%\]
+%where $f(z) - w = (z - z_i)^{n_i'} h(z)$, with $h(z_i) \not= 0$.
+%
+%But we can think about this in a different way. The integrand is an exact differential
+%\[
+% \frac{1}{2\pi i} [\log (f(z) - w)],
+%\]
+%and if we think about this carefully, this the change in argument of $f(z) - w$ around the curve.
+
+\subsection{Global \texorpdfstring{$U(1)$}{U(1)} Ginzburg--Landau vortices}
+We now put the theory into use. We are going to study \emph{Ginzburg--Landau vortices}\index{Ginzburg--Landau!vortex}. Our previous discussion involved a function taking values in the unit disk $D$. We will not impose such a restriction on our vortices. However, we will later see that any solution must take values in $D$.
+
+The potential energy of the Ginzburg--Landau theory is given by
+\[
  V(\Phi) = \frac{1}{2} \int_{\R^2} \left((\nabla \Phi, \nabla \Phi) + \frac{\lambda}{4} (1 - (\Phi, \Phi))^2 \right)\;\d x^1 \;\d x^2,
+\]
+where $\lambda > 0$ is some constant.
+
+Note that the inner product is invariant under phase rotation, i.e.
+\[
+ (e^{i\chi} a, e^{i\chi}b) = (a, b)
+\]
+for $\chi \in \R$. So in particular, the potential satisfies
+\[
+ V(e^{i\chi}\Phi) = V(\Phi).
+\]
+Thus, our theory has a \term{global $\U(1)$ symmetry}\index{$\U(1)$ symmetry!global}.
+
+The Euler--Lagrange equation of this theory says
+\[
+ - \Delta \Phi = \frac{\lambda}{2}(1 - |\Phi|^2) \Phi.
+\]
+This is the \emph{ungauged Ginzburg--Landau equation}\index{Ginzburg--Landau!ungauged}.
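
To see where this equation comes from, here is a sketch of the standard first-variation computation (details left to the reader): for a smooth, compactly supported variation $\varphi$, we have
\[
  \left.\frac{\d}{\d \varepsilon}\right|_{\varepsilon = 0} V(\Phi + \varepsilon \varphi) = \int_{\R^2} \left((\nabla \Phi, \nabla \varphi) - \frac{\lambda}{2} (1 - |\Phi|^2) (\Phi, \varphi)\right)\;\d x^1 \;\d x^2,
\]
and integrating by parts in the first term shows that this vanishes for all $\varphi$ precisely when $-\Delta \Phi = \frac{\lambda}{2}(1 - |\Phi|^2)\Phi$.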
+
+To justify the fact that our $\Phi$ takes values in $D$, we use the following lemma:
+\begin{lemma}
+ Assume $\Phi$ is a smooth solution of the ungauged Ginzburg--Landau equation in some domain. Then at any interior maximum point $x_*$ of $|\Phi|$, we have $|\Phi(x_*)| \leq 1$.
+\end{lemma}
+
+\begin{proof}
+ Consider the function
+ \[
+ W(x) = 1 - |\Phi(x)|^2.
+ \]
+ Then we want to show that $W \geq 0$ when $W$ is minimized. We note that if $W$ is at a minimum, then the Hessian matrix must have non-negative eigenvalues. So, taking the trace, we must have $\Delta W(x_*) \geq 0$. Now we can compute $\Delta W$ directly. We have
+ \begin{align*}
+ \nabla W &= -2 (\Phi, \nabla \Phi)\\
+ \Delta W &= \nabla^2 W \\
+ &= - 2(\Phi, \Delta \Phi) - 2(\nabla \Phi, \nabla \Phi)\\
+ &= \lambda |\Phi|^2 W - 2 |\nabla \Phi|^2.
+ \end{align*}
+ Thus, we can rearrange this to say
+ \[
+ 2 |\nabla \Phi|^2 + \Delta W = \lambda |\Phi|^2 W.
+ \]
  But clearly $2 |\nabla \Phi|^2 \geq 0$ everywhere, and we showed that $\Delta W(x_*) \geq 0$. So we must have $|\Phi(x_*)|^2 W(x_*) \geq 0$. If $\Phi(x_*) = 0$, then $W(x_*) = 1 > 0$. Otherwise, we can divide by $\lambda |\Phi(x_*)|^2$ to conclude that $W(x_*) \geq 0$.
+\end{proof}
+
+By itself, this doesn't force $|\Phi| \in [0, 1]$, since we could imagine $|\Phi|$ having no maximum. However, if we prescribe boundary conditions such that $|\Phi| = 1$ on the boundary, then this would indeed imply that $|\Phi| \leq 1$ everywhere. Often, we can think of $\Phi$ as some ``complex order parameter'', in which case the condition $|\Phi| \leq 1$ is very natural.
+
+The objects we are interested in are \emph{vortices}.
+
+\begin{defi}[Ginzburg--Landau vortex]\index{Ginzburg--Landau!vortex}
+ A global \emph{Ginzburg--Landau vortex} of charge $N > 0$ is a (smooth) solution of the ungauged Ginzburg--Landau equation of the form
+ \[
+ \Phi = f_N(r) e^{iN\theta}
+ \]
+ in polar coordinates $(r, \theta)$. Moreover, we require that $f_N(r) \to 1$ as $r \to \infty$.
+\end{defi}
+Note that for $\Phi$ to be a smooth solution, we must have $f_N(0) = 0$. In fact, a bit more analysis shows that we must have $f_N = O(r^N)$ as $r \to 0$. Solutions for $f_N$ do exist, and they look roughly like this:
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (5, 0) node [right] {$r$};
+ \draw [->] (0, -0.5) -- (0, 3) node [above] {$f_N$};
+
+ \draw [mblue, thick, domain=0:5] plot [smooth] (\x, {2 * \x^4 / (1 + \x^4)});
+
+ \draw [dashed] (-1, 2) -- (5, 2);
+
+ \end{tikzpicture}
+\end{center}
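
While we will not carry out the analysis here, reducing the PDE to an ODE for the profile is a routine substitution (sketched, not verified in detail): inserting $\Phi = f_N(r) e^{iN\theta}$ into the ungauged Ginzburg--Landau equation, with $\Delta = \partial_r^2 + \frac{1}{r} \partial_r + \frac{1}{r^2} \partial_\theta^2$ in polar coordinates, gives
\[
  f_N'' + \frac{1}{r} f_N' - \frac{N^2}{r^2} f_N + \frac{\lambda}{2} (1 - f_N^2) f_N = 0,
\]
and the profile pictured above is obtained by solving this equation (numerically) subject to $f_N(0) = 0$ and $f_N \to 1$ as $r \to \infty$.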
+
+In the case of $N = 1$, we can visualize the field $\Phi$ as a vector field on $\C$. Then it looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \foreach \t in {0,30,60,90,120,150,180,210,240,270,300,330,360} {
+ \begin{scope}[rotate=\t]
+ \foreach \x in {0.5, 1, 1.5, 2, 2.5} {
+ \pgfmathsetmacro\arlen{0.5 * (\x^4 / (1 + \x^4))^2}
+ \draw [-latex'] (\x, 0) -- +(\arlen, 0);
+ }
+ \end{scope}
+ }
+ \end{tikzpicture}
+\end{center}
+This is known as a \emph{$2$-dimensional hedgehog}.
+
For general $N$, it might be more instructive to look at what the current looks like. Recall that the current is defined by $(i\Phi, \d \Phi)$. We can write this more explicitly as
+\begin{align*}
+ (i\Phi, \d \Phi) &= (if_N e^{iN\theta}, (\d f_N) e^{iN\theta} + i f_N N\;\d \theta e^{iN\theta})\\
+ &= (if_N,\d f_N + i f_N N\;\d \theta).
+\end{align*}
+We note that $if_N$ and $\d f_N$ are orthogonal, while $i f_N$ and $i f_N N \d \theta$ are parallel. So the final result is
+\[
+ (i\Phi, \d \Phi) = f_N^2 N\;\d \theta.
+\]
So the current just looks like this:
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \foreach \t in {0,30,60,90,120,150,180,210,240,270,300,330,360} {
+ \begin{scope}[rotate=\t]
+ \foreach \x in {0.5, 1, 1.5, 2, 2.5} {
+ \pgfmathsetmacro\arlen{0.5 * (\x^4 / (1 + \x^4))^2}
+ \draw [-latex'] (\x, 0) -- +(0, \arlen);
+ }
+ \end{scope}
+ }
+ \end{tikzpicture}
+\end{center}
+
As $|x| \to \infty$, we have $f_N \to 1$. So the winding number is given as before, and we can compute it to be
+\[
+ \frac{1}{2\pi} \lim_{R \to \infty} \oint (i \Phi, \d \Phi) = \frac{1}{2\pi} \lim_{R \to \infty} \oint f_N^2 N \;\d \theta = N.
+\]
+The winding number of these systems is a discrete quantity, and can make the vortex stable.
+
+This theory looks good so far. However, it turns out this model has a problem --- the energy is infinite! We can expand out $V(f_N e^{iN \theta})$, and see it is a sum of a few non-negative terms. We will focus on the $\frac{\partial}{\partial \theta}$ term. We obtain
\begin{align*}
  V(f_N e^{iN\theta}) &\geq \frac{1}{2} \int \frac{1}{r^2} \left|\frac{\partial \Phi}{\partial \theta}\right|^2 r\;\d r\;\d \theta\\
  &= \frac{N^2}{2} \int \frac{1}{r^2} f_N^2 r\;\d r\;\d \theta \\
  &= \pi N^2 \int_0^\infty \frac{f_N^2}{r}\;\d r.
\end{align*}
+Since $f_N \to 1$ as $r \to \infty$, we see that the integral diverges logarithmically.
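
To see the logarithmic divergence concretely, here is a small numerical sketch. It uses a model profile $f(r) = r/\sqrt{1 + r^2}$ with the right boundary behaviour (an assumption for illustration, not the true $f_N$):

```python
import math

# Model profile f(r) = r / sqrt(1 + r^2) satisfies f(0) = 0 and f -> 1.
# The angular energy integral of f^2 / r over [r0, R] then grows like log R.
def angular_energy(R, r0=1e-3, n=100_000):
    h = (R - r0) / n
    total = 0.0
    for i in range(n):           # midpoint rule for the integral of f^2 / r
        r = r0 + (i + 0.5) * h
        f = r / math.sqrt(1 + r * r)
        total += (f * f / r) * h
    return total

# successive decades of R each add roughly log(10) ~ 2.3 to the integral
print(angular_energy(100) - angular_energy(10))
print(angular_energy(1000) - angular_energy(100))
```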
+
+This is undesirable physically. To understand heuristically why this occurs, decompose $\d \Phi$ into two components --- a mode parallel to $\Phi$ and a mode perpendicular to $\Phi$. For a vortex solution these correspond to the radial and angular modes respectively. We will argue that for fluctuations the parallel mode is massive, while the perpendicular mode is massless. Now given that we just saw that the energy divergence of the vortex arises from the angular part of the energy, we see that it is the massless mode that leads to problems. We will see below that in gauge theories, the Higgs mechanism serves to make all modes massive, thus allowing for finite energy vortices.
+
+We can see the difference between massless and massive modes very explicitly in a different setting, corresponding to Yukawa mesons. Consider the equation
+\[
+ -\Delta u + M^2 u = f.
+\]
+Working in three dimensions, the solution is given by
+\[
+ u(x) = \frac{1}{4\pi} \int \frac{e^{-M|x - y|}}{|x - y|} f(y)\;\d y.
+\]
+Thus, the Green's function is
+\[
+ G(x) = \frac{e^{-M|x|}}{4\pi|x|}.
+\]
If the system is massless, i.e.\ $M = 0$, then this decays as $\frac{1}{|x|}$. However, if the system is massive with $M > 0$, then this decays exponentially as $|x| \to \infty$. In the nonlinear setting, the exponential decay characteristic of massive fundamental particles can ensure that the energy density decays quickly enough for the total energy of the solution to be finite.
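
We can check this Green's function with a quick finite-difference sketch (an illustration, not from the notes): away from the origin, $G$ should satisfy the radial form of $-\Delta u + M^2 u = 0$ in three dimensions, namely $u'' + \frac{2}{r} u' = M^2 u$.

```python
import math

# Finite-difference check that G(r) = e^{-Mr} / (4 pi r) satisfies
# u'' + (2/r) u' = M^2 u away from the origin.
M, r, h = 2.0, 1.5, 1e-4

def G(s):
    return math.exp(-M * s) / (4 * math.pi * s)

second = (G(r + h) - 2 * G(r) + G(r - h)) / (h * h)   # central difference u''
first = (G(r + h) - G(r - h)) / (2 * h)               # central difference u'
residual = second + (2 / r) * first - M * M * G(r)
print(abs(residual) < 1e-6)  # True
```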
+
+So how do we figure out the massive and massless modes? We do not have a genuine decomposition of $\Phi$ itself into ``parallel'' and ``perpendicular'' modes, because what is parallel and what is perpendicular depends on the local value of $\Phi$.
+
+Thus, to make sense of this, we have to consider small fluctuations around a fixed configuration $\Phi$. We suppose $\Phi$ is a solution to the field equations. Then $\frac{\delta V}{\delta \Phi} = 0$. Thus, for small variations $\Phi \mapsto \Phi + \varepsilon \varphi$, we have
+\[
  V(\Phi + \varepsilon \varphi) = V(\Phi) + \frac{\varepsilon^2}{2} \int \left(|\nabla \varphi|^2 + \lambda (\varphi, \Phi)^2 - \frac{\lambda}{2}(1 - |\Phi|^2)|\varphi|^2\right) \;\d x + O(\varepsilon^3).
+\]
+Ultimately, we are interested in the asymptotic behaviour $|x| \to \infty$, in which case $1 - |\Phi|^2 \to 0$. Moreover, $|\Phi| \to 1$ implies $(\varphi, \Phi)$ becomes a projection along the direction of $\Phi$. Then the quadratic part of the potential energy density for fluctuations becomes approximately
+\[
+ |\nabla \varphi|^2 + \lambda |\varphi^{\mathrm{parallel}}|^2
+\]
+for large $x$. Thus, for $\lambda > 0$, the parallel mode is massive, with corresponding ``Yukawa'' mass parameter $M = \sqrt{\lambda}$, while the perpendicular mode is massless. The presence of the massless mode is liable to produce a soliton with slow algebraic decay at spatial infinity, and hence infinite total energy. This is all slightly heuristic, but is a good way to think about the issues. When we study vortices that are gauged, i.e.\ coupled to the electromagnetic field, we will see that the Higgs mechanism renders all components massive, and this problem does not arise.
+
+%
+%The divergence of energy arises because if we consider fluctuations $\Phi + \varepsilon \varphi$, and look at the energy
+%\[
+% V(\Phi + \varepsilon \varphi) = V(\Phi) + \varepsilon \int \frac{\delta V}{\delta \Phi} (\varphi) + \varepsilon^2 \int |\Delta \varphi|^2 + \text{something}.
+%\]
+%The first order term vanishes because $V$ solves the equations of motion.
+%
+%We look carefully at what the ``something'' is. We have
+%\[
+% (1 - (\Phi + \varepsilon \varphi, \Phi + \varepsilon \varphi))^2 = (1 - |\Phi|^2 - 2 \varepsilon (\varphi, \Phi) - \varepsilon ^2 |\varphi|^2)^2.
+%\]
+%The order $\varepsilon^2$ part is given by
+%\[
+% 4 \varepsilon^2 (\varphi, \Phi)^2 - 2 \varepsilon^2 (1 - |\phi|^2)|\varphi|^2
+%\]
+%So we find
+%\[
+% V(\Phi + \varepsilon \varphi) = V(\Phi) + \varepsilon^2 \int \left(|\nabla \varphi|^2 + \lambda (\varphi, \Phi)^2 - \frac{2\lambda}{4}(1 - |\Phi|^2)|\varphi|^2\right) \;\d x + O(\varepsilon^3).
+%\]
+%We now remember something about Green's functions, and the difference between massless and massive fields. We consider the ordinary Poisson equation
+%\[
+% - \Delta u = f.
+%\]
+%We can write down the solution to this in terms of Green's functions. In three dimensions, this is
+%\[
+% u(x) = \frac{1}{4\pi} \int \frac{f(y)}{|x - y|}\;\d y.
+%\]
+%This Green's function $\frac{1}{4\pi|x|}$ decays very slowly with $x$, and this is a sign that we have a massless field.
+%
+%If we change our integral equation instead to
+%\[
+% (-\Delta u + M^2 u) = f,
+%\]
+%then we find that
+%\[
+% u(x) + \frac{1}{4\pi} \int_{\R^3} \frac{e^{-M|x - y|}}{|x - y|} f(y) \;\d y.
+%\]
+%The Green's function is now exponentially decaying with $|x|$ if $M > 0$. This is characteristic for the behaviour of static Green's functions for massive fields.
+%
+%We now look at our change in $V$ above. We look at the quadratic fluctuation energy as $|x| \to \infty$. For $x$ very large, we have $(1 - |\Phi|^2) \to 0$ by the boundary conditions. Then we are left with the first two terms
+%\[
+% |\nabla \varphi|^2 + \lambda (\Phi, \varphi)^2.
+%\]
+%What does this mean? $\Phi$ and $\varphi$ are complex numbers, and in the second term, we are essentially projecting in the direction to $\Phi$. We write
+%\[
+% \varphi = \varphi^{||} + \varphi^{\perp}.
+%\]
+%Then we have
+%\[
+% \varphi^{||} \frac{(\Phi, \varphi)\Phi}{|\Phi|^2}.
+%\]
+%Thus, heuristically, this term is
+%\[
+% |\nabla \varphi^{||}|^2 + \lambda |\varphi^{||}|^2 + |\Delta \varphi^\perp|^2.
+%\]
+%So the parallel part behaves like a massive particle with quick decay, and the perpendicular part behaves like a massless particle with slow decay.
+
+\subsection{Abelian Higgs/Gauged Ginzburg--Landau vortices}
+We now consider a theory where the complex scalar field $\Phi$ is coupled to a magnetic field. This is a $\U(1)$ gauge theory, with gauge potential given by a smooth real $1$-form
+\[
+ A = A_1 \;\d x^1 + A_2 \;\d x^2.
+\]
+The coupling between $\Phi$ and $A$ is given by \term{minimal coupling}: this is enacted by introduction of the \term{covariant derivative}\index{$\D$}\index{$\D_A$}
+\[
+ \D \Phi = \D_A \Phi = \d \Phi - i A \Phi = \sum_{j = 1}^2 \D_j \Phi \;\d x^j.
+\]
+To proceed, it is convenient to have a list of identities involving the covariant derivative.
+\begin{prop}
+ If $f$ is a smooth real-valued function, and $\Phi$ and $\Psi$ are smooth complex scalar fields, then
+ \begin{align*}
+ \D (f \Phi) &= (\d f) \; \Phi + f\;\D \Phi,\\
+ \d (\Phi, \Psi) &= (\D \Phi, \Psi) + (\Phi, \D \Psi).
+ \end{align*}
+ (Here $(\ph)$ is the real inner product defined above.) In coordinates, these take the form
+ \begin{align*}
+ \D_j (f \Phi) &= (\partial_j f)\;\Phi + f\; \D_j \Phi\\
+ \partial_j (\Phi, \Psi) &= (\D_j \Phi, \Psi) + (\Phi, \D_j \Psi).
+ \end{align*}
+\end{prop}
+The proofs just involve writing all terms out. The first rule is a version of the Leibniz rule, while the second, called unitarity, is analogous to the fact that if $V, W$ are smooth vector fields on a Riemannian manifold, then
+\[
+ \partial_k g(V, W) = g(\nabla_k V, W) + g(V, \nabla_k W)
+\]
+for the Levi-Civita connection $\nabla$ associated to a Riemannian metric $g$.
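Both identities are quick to verify by writing $\Phi = \Phi_1 + i \Phi_2$ and expanding. As an illustrative sketch (assuming \texttt{sympy}; the component-wise encoding is our own choice, not from the notes), they can be checked symbolically:

```python
# Symbolic check of the Leibniz rule and unitarity for D_j = d_j - i A_j,
# working with the real and imaginary components of the fields.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
X = (x1, x2)
f = sp.Function('f')(x1, x2)  # a smooth real-valued function
A = (sp.Function('A1')(x1, x2), sp.Function('A2')(x1, x2))

def field(name):
    # a complex scalar Phi = Phi1 + i Phi2, stored as its two real components
    return (sp.Function(name + '1')(x1, x2), sp.Function(name + '2')(x1, x2))

def D(j, phi):
    # covariant derivative on (re, im) components:
    # -i A (phi1 + i phi2) = A phi2 - i A phi1
    return (sp.diff(phi[0], X[j]) + A[j] * phi[1],
            sp.diff(phi[1], X[j]) - A[j] * phi[0])

def inner(phi, psi):
    # the real inner product (Phi, Psi)
    return phi[0] * psi[0] + phi[1] * psi[1]

Phi, Psi = field('Phi'), field('Psi')

# Leibniz rule: D_j(f Phi) = (d_j f) Phi + f D_j Phi, componentwise
leibniz = [sp.simplify(D(j, (f * Phi[0], f * Phi[1]))[k]
                       - (sp.diff(f, X[j]) * Phi[k] + f * D(j, Phi)[k]))
           for j in (0, 1) for k in (0, 1)]

# unitarity: d_j (Phi, Psi) = (D_j Phi, Psi) + (Phi, D_j Psi)
unitarity = [sp.simplify(sp.diff(inner(Phi, Psi), X[j])
                         - inner(D(j, Phi), Psi) - inner(Phi, D(j, Psi)))
             for j in (0, 1)]
print(leibniz, unitarity)  # all zero
```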
+
+The curvature term is given by the magnetic field.
+\begin{defi}[Magnetic field/curvature]\index{magnetic field}\index{curvature}
+ The \emph{magnetic field}, or \emph{curvature} is given by
+ \[
+ B = \partial_1 A_2 - \partial_2 A_1.
+ \]
+ We can alternatively think of it as the 2-form
+ \[
+ F = \d A = B \;\d x^1 \wedge\d x^2.
+ \]
+\end{defi}
+The formulation in terms of differential forms is convenient for computations, because we don't have to constrain ourselves to working in Cartesian coordinates --- for example, polar coordinates may be more convenient.
+
+\begin{prop}
+ If $\Phi$ is a smooth scalar field, then
+ \[
+ (\D_1 \D_2 - \D_2 \D_1) \Phi = - i B \Phi.
+ \]
+\end{prop}
+
+The proof is again a direct computation. Alternatively, we can express this without coordinates. We can extend $\D$ to act on $p$-forms by letting $A$ act as $A \wedge$. Then this result says
+\begin{prop}
+ \[
+ \D \D \Phi = -i F \Phi.
+ \]
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \D \D \Phi &= (\d - iA) (\d \Phi - iA\Phi)\\
+ &= \d^2 \Phi - i \d (A \Phi) - iA\;\d \Phi - A \wedge A\;\Phi\\
+ &= -i \d (A \Phi) - i A \;\d \Phi\\
+ &= -i \d A\; \Phi + i A \;\d \Phi - i A\;\d \Phi\\
+ &= -i (\d A)\; \Phi\\
+ &= -i F \Phi.\qedhere
+ \end{align*}
+\end{proof}
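The coordinate computation behind the first of these propositions is mechanical, and can also be checked symbolically. A small \texttt{sympy} sketch (illustrative only) verifies $(\D_1 \D_2 - \D_2 \D_1)\Phi = -iB\Phi$ for arbitrary smooth $A_1, A_2, \Phi$:

```python
# Verify the curvature identity (D_1 D_2 - D_2 D_1) Phi = -i B Phi
# with B = d_1 A_2 - d_2 A_1, for arbitrary smooth fields.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
A1 = sp.Function('A1')(x1, x2)
A2 = sp.Function('A2')(x1, x2)
Phi = sp.Function('Phi')(x1, x2)  # complex scalar field

def D(j, phi):
    # covariant derivative D_j = d_j - i A_j
    Aj, xj = (A1, A2)[j], (x1, x2)[j]
    return sp.diff(phi, xj) - sp.I * Aj * phi

B = sp.diff(A2, x1) - sp.diff(A1, x2)
commutator = sp.simplify(D(0, D(1, Phi)) - D(1, D(0, Phi)) + sp.I * B * Phi)
print(commutator)  # 0
```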
+
+The point of introducing the covariant derivative is that we can turn the global $\U(1)$ invariance into a local one. Previously, we had a global $\U(1)$ symmetry, where our field is unchanged when we replace $\Phi \mapsto \Phi e^{i \chi}$ for some constant $\chi \in \R$. With the covariant derivative, we can promote this to a \emph{gauge} symmetry.
+
+Consider the simultaneous \term{gauge transformations}
+\begin{align*}
+ \Phi(x) &\mapsto e^{i\chi(x)} \Phi(x)\\
+ A(x) &\mapsto A(x) + \d \chi(x).
+\end{align*}
+Then the covariant derivative of $\Phi$ transforms as
+\begin{align*}
+ (\d - iA) \Phi &\mapsto (\d - i (A + \d \chi)) (\Phi e^{i \chi}) \\
+ &= (\d \Phi + i \Phi \d \chi - i (A + \d \chi) \Phi) e^{i\chi} \\
+ &= e^{i \chi} (\d - iA) \Phi.
+\end{align*}
+Similarly, the magnetic field is also invariant under gauge transformations.
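These transformation properties are also easy to confirm symbolically. The following sketch (again assuming \texttt{sympy}; illustrative only) checks both the covariance of $\D \Phi$ and the invariance of $B$ for arbitrary smooth $\chi$:

```python
# Check that Phi -> e^{i chi} Phi, A -> A + d chi sends
# D Phi -> e^{i chi} D Phi and leaves B unchanged.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
chi = sp.Function('chi')(x1, x2)  # smooth real gauge parameter
A1 = sp.Function('A1')(x1, x2)
A2 = sp.Function('A2')(x1, x2)
Phi = sp.Function('Phi')(x1, x2)

def D(j, phi, A):
    # covariant derivative with gauge potential A = (A_1, A_2)
    return sp.diff(phi, (x1, x2)[j]) - sp.I * A[j] * phi

A = (A1, A2)
A_new = (A1 + sp.diff(chi, x1), A2 + sp.diff(chi, x2))
Phi_new = sp.exp(sp.I * chi) * Phi

# covariance of the covariant derivative: D'_j Phi' = e^{i chi} D_j Phi
covariance = [sp.simplify(D(j, Phi_new, A_new) - sp.exp(sp.I * chi) * D(j, Phi, A))
              for j in (0, 1)]

def curl(a):
    return sp.diff(a[1], x1) - sp.diff(a[0], x2)

# invariance of the magnetic field: B' = B
B_invariance = sp.simplify(curl(A_new) - curl(A))
print(covariance, B_invariance)  # [0, 0] 0
```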
+
+As a consequence, we can write down energy functionals that are invariant under these gauge transformations. In particular, we have (using the real inner product defined above)
+\[
+ (\D \Phi, \D \Phi) \mapsto (e^{i\chi} \D \Phi, e^{i\chi} \D \Phi) = (\D \Phi, \D \Phi).
+\]
+So we can now write down the \emph{gauged Ginzburg--Landau energy}\index{Ginzburg--Landau!gauged}
+\[
+ V_\lambda(A, \Phi) = \frac{1}{2} \int_{\R^2} \left(B^2 + |\D \Phi|^2 + \frac{\lambda}{4} (1 - |\Phi|^2)^2\right)\;\d^2 x.
+\]
+This is then manifestly gauge invariant.
+
+As before, the equations of motion are given by the Euler--Lagrange equations. Varying $\Phi$, we obtain
+\[
+ - (\D_1^2 + \D_2^2) \Phi - \frac{\lambda}{2} (1 - |\Phi|^2)\Phi = 0.
+\]
+This is just like the previous vortex equation in the ungauged case, but since we have the covariant derivative, this is now coupled to the gauge potential $A$. The equations of motion satisfied by $A$ are
+\begin{align*}
+ \partial_2 B &= (i \Phi, \D_1 \Phi)\\
+ -\partial_1 B &= (i \Phi, \D_2 \Phi).
+\end{align*}
These are similar to one of Maxwell's equations --- the one relating the curl of the magnetic field to the current.
+
+It is again an exercise to derive these. We refer to the complete system as the gauged Ginzburg--Landau, or Abelian Higgs equations. In deriving them, it is helpful to use the previous identities such as
+\[
+ \partial_j (\Phi, \Psi) = (\D_j \Phi, \Psi) + (\Phi, \D_j \Psi).
+\]
+So we get the integration by parts formula
+\[
+ \int_{\R^2} (\D_j \Phi, \Psi)\;\d^2 x = - \int (\Phi, \D_j \Psi)\;\d^2 x
+\]
+under suitable boundary conditions.
+
+\begin{lemma}
+ Assume $\Phi$ is a smooth solution of the gauged Ginzburg--Landau equation in some domain. Then at any interior maximum point $x_*$ of $|\Phi|$, we have $|\Phi(x_*)| \leq 1$.
+\end{lemma}
+
+\begin{proof}
+ Consider the function
+ \[
+ W(x) = 1 - |\Phi(x)|^2.
+ \]
+ Then we want to show that $W \geq 0$ when $W$ is minimized. We note that if $W$ is at a minimum, then the Hessian matrix must have non-negative eigenvalues. So, taking the trace, we must have $\Delta W(x_*) \geq 0$. Now we can compute $\Delta W$ directly. We have
 \begin{align*}
 \partial_j W &= -2 (\Phi, \D_j \Phi)\\
 \Delta W &= \partial_j \partial_j W \\
 &= - 2(\Phi, \D_j \D_j \Phi) - 2(\D_j \Phi, \D_j \Phi)\\
 &= \lambda |\Phi|^2 W - 2 |\D \Phi|^2,
 \end{align*}
 using the Euler--Lagrange equation for $\Phi$ in the last step. Thus, we can rearrange this to say
 \[
 2 |\D \Phi|^2 + \Delta W = \lambda |\Phi|^2 W.
 \]
 But clearly $2 |\D \Phi|^2 \geq 0$ everywhere, and we showed that $\Delta W(x_*) \geq 0$. So we must have $\lambda |\Phi(x_*)|^2 W(x_*) \geq 0$, and hence $W(x_*) \geq 0$. (If $\Phi(x_*) = 0$, then $W(x_*) = 1 \geq 0$ trivially.)
+\end{proof}
+
+As before, this suggests we interpret $|\Phi|$ as an order parameter. This model was first used to describe superconductors. The matter can either be in a ``normal'' phase or a superconducting phase, and $|\Phi|$ measures how much we are in the superconducting phase.
+
+Thus, in our model, far away from the vortices, we have $|\Phi| \approx 1$, and so we are mostly in the superconducting phase. The vortices represent a breakdown of the superconductivity. At the core of the vortices, we have $|\Phi| = 0$, and we are left with completely normal matter. Usually, this happens when we have a strong magnetic field. In general, a magnetic field cannot penetrate the superconductor (the ``Meissner effect''), but if it is strong enough, it will cause such breakdown in the superconductivity along vortex ``tubes''.
+
+\subsubsection*{Radial vortices}
+Similar to the ungauged case, for $\lambda > 0$, there exist vortex solutions of the form
+\begin{align*}
+ \Phi &= f_N(r) e^{iN\theta}\\
+ A &= N \alpha_N(r) \;\d \theta.
+\end{align*}
+The boundary conditions are $f_N, \alpha_N \to 1$ as $r \to \infty$ and $f_N, \alpha_N\to 0$ as $r \to 0$.
+
+Let's say a few words about why these are sensible boundary conditions from the point of view of energy. We want
+\[
+ \lambda \int_{\R^2} (1 - |\Phi|^2)^2 < \infty,
+\]
+and this is possible only for $f_N \to 1$ as $r \to \infty$. What is less obvious is that we also need $\alpha_N \to 1$. We note that we have
+\[
+ \D_\theta \Phi = \frac{\partial \Phi}{\partial \theta} - i A_\theta \Phi = (iN f_N - iN \alpha_N f_N) e^{iN\theta}.
+\]
+We want this to approach $0$ as $r \to \infty$. Since $f_N \to 1$, we also need $\alpha_N \to 1$.
+
+The boundary conditions at $0$ can be justified as before, so that the functions are regular at $0$.
+
+\subsubsection*{Topological charge and magnetic flux}
+Let's calculate the topological charge. We have (assuming sufficiently rapid approach to the asymptotic values as $r \to \infty$)
+\begin{align*}
+ Q &= \frac{1}{\pi} \int_{\R^2} j^0(\Phi)\;\d^2 x \\
+ &= \lim_{R \to \infty} \frac{1}{2\pi} \oint_{|x| = R} (i \Phi, \d \Phi) \\
+ &= \lim_{R \to \infty} \frac{1}{2\pi} \oint (if_N e^{iN\theta}, iN f_N e^{iN\theta})\;\d \theta\\
+ &= \frac{1}{2\pi} \cdot N \lim_{R \to \infty} \int_0^{2\pi} f_N^2 \;\d \theta\\
+ &= N.
+\end{align*}
+Previously, we understood $N$ as the ``winding number'', and it measures how ``twisted'' our field was. However, we shall see shortly that there is an alternative interpretation of this $N$. Previously, in the sine-Gordon theory, we could think of $N$ as the number of kinks present. Similarly, here we can think of this $N$-vortex as a superposition of $N$ vortices at the origin. In the case of $\lambda = 1$, we will see that there are static solutions involving multiple vortices placed at different points in space.
+
+We can compute the magnetic field and total flux as well. It is convenient to use the $\d A$ definition, as we are not working in Cartesian coordinates. We have
+\[
+ \d A = N \alpha_N'(r) \;\d r \wedge \d \theta = \frac{1}{r} N \alpha_N'\;\d x^1 \wedge \d x^2.
+\]
+Thus it follows that
+\[
+ B = \partial_1 A_2 - \partial_2 A_1 = \frac{N}{r} \alpha_N'.
+\]
Working slightly more generally, suppose we are given a smooth finite energy configuration $(A, \Phi)$ such that $|\Phi|^2 \to 1$ and $r|A|$ is bounded as $r \to \infty$, and also that $\D_\theta \Phi = o(r^{-1})$. Then we find that
+\[
+ (\d \Phi)_\theta = i A_\theta \Phi + o(r^{-1}).
+\]
+Then if we integrate around a circular contour, since only the angular part contributes, we obtain
+\[
+ \oint_{|x| = R} (i \Phi, \d \Phi) = \oint_{|x| = R} (i \Phi, i A_\theta \Phi)\;\d \theta + o(1) = \oint_{|x| = R} A + o(1).
+\]
+Note that here we are explicitly viewing $A$ as a differential form so that we can integrate it. We can then note that $|x| = R$ is the boundary of the disk $|x| \leq R$. So we can apply Stokes' theorem and obtain
+\[
 \oint_{|x| = R} A = \int_{|x| \leq R} \d A = \int_{|x| \leq R} B\;\d^2 x.
+\]
+%
+%Now suppose that the covariant derivative
+%\[
+% \D_\theta \Phi = (\nabla_A)_\theta \Phi = \left(\frac{\partial \Phi}{\partial \theta} - A_\theta \Phi\right) = (iN f_n - iN \alpha_N f_N) e^{iN\theta}
+%\]
+%is rapidly decreasing as $r \to \infty$, then we know
+%\[
+% A_\theta - \left(i \Phi, \frac{\partial \Phi}{\partial \theta}\right) \to 0.
+%\]
+%quickly as $r \to \infty$. Therefore we obtain
+%\[
+% \oint_{|x| = R} (i \Phi, \d \Phi) = \oint_{|x| = R} \left(i \Phi, \frac{\partial \Phi}{\partial \theta}\right)\;\d \theta = \oint_{|x| = R} A_\theta \;\d \theta + o(1).
+%\]
+%We now apply Stokes theorem. We note that $A$ only has a $\theta$ direction. So we have
+%\[
+% A_\theta \;\d \theta = A_1 \;\d x^1 + A_2\;\d x^2.
+%\]
+%Thus we obtain
+%\[
+% \oint_{|x| = R} A_1 \;\d x^1 + A_2\;\d x^2 = \int_{|x| \leq R} (\partial_1 A_2 - \partial_2 A_1)\;\d x^1 \wedge \d x^2 = \int_{|x| \leq R} B\;\d x^1 \;\d x^2.
+%\]
+Now we let $R \to \infty$ to obtain
+\[
 \lim_{R \to \infty} \int_{|x| \leq R} B\;\d^2 x = \lim_{R \to \infty} \oint_{|x| = R} A = \lim_{R \to \infty} \oint_{|x| = R} (i \Phi, \d \Phi) = 2 \pi Q.
+\]
+
+Physically, what this tells us then is that there is a relation between the topological winding number and the magnetic flux. This is a common property of topological gauge theories. In mathematics, this is already well known --- it is the fact that we can compute characteristic classes of vector bundles by integrating the curvature, as discovered by Chern.
+
+\subsubsection*{Behaviour of vortices as $r \to 0$}
+We saw earlier that for reasons of regularity, it was necessary that $f_N, \alpha_N \to 0$ as $r \to 0$.
+
+But actually, we must have $f_N \sim r^N$ as $r \to 0$. This has as a consequence that $\Phi \sim (r e^{i\theta})^N = z^N$. So the local appearance of the vortex is the zero of an analytic function with multiplicity $N$.
+
To see this, we compute that the Euler--Lagrange equation for $f_N$ is
+\[
+ -f_N''(r) - \frac{1}{r} f_N'(r) + \frac{(N - \alpha_N)^2}{r^2} f_N = \frac{\lambda}{2} f_N(1 - f_N^2).
+\]
Near $r = 0$, we have $\alpha_N \to 0$, so $(N - \alpha_N)^2 \approx N^2$; moreover, the right-hand side stays bounded, while the terms on the left-hand side scale like $\frac{1}{r^2}$. So we can approximate the equation locally by
\[
 - f_N'' - \frac{1}{r} f_N' + \frac{N^2}{r^2} f_N \approx 0.
\]
This approximating equation is homogeneous of Euler type, and the solutions are just
+\[
+ f_N = r^{\pm N}.
+\]
+So for regularity, we want the one that $\to 0$, so $f_N \sim r^N$ as $r \to 0$.
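One can confirm symbolically that $r^{\pm N}$ are exactly the solutions of the approximating equation (a small \texttt{sympy} sketch, illustrative only):

```python
# Verify that r^{+N} and r^{-N} annihilate the approximate radial operator
# -f'' - f'/r + N^2 f / r^2 near the origin.
import sympy as sp

r, N = sp.symbols('r N', positive=True)

def euler_operator(f):
    # the approximate radial operator near r = 0
    return -sp.diff(f, r, 2) - sp.diff(f, r) / r + N**2 * f / r**2

residuals = [sp.simplify(euler_operator(r**N)), sp.simplify(euler_operator(r**(-N)))]
print(residuals)  # [0, 0]
```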
+
+\subsection{Bogomolny/self-dual vortices and Taubes' theorem}
+As mentioned, we can think of the radial vortex solution as a collection of $N$ vortices all superposed at the origin. Is it possible to have separated vortices all over the plane? Naively, we would expect that the vortices exert forces on each other, and so we don't get a static solution. However, it turns out that in the $\lambda = 1$ case, there do exist static solutions corresponding to vortices at arbitrary locations on the plane.
+
+This is not obvious, and the proof requires some serious analysis. We will not do the analysis, which requires use of Sobolev spaces and PDE theory. However, we will do all the non-hard-analysis part. In particular, we will obtain Bogomolny bounds as we did in the sine-Gordon case, and reduce the problem to finding solutions of a single scalar PDE, which can be understood with tools from calculus of variations and elliptic PDEs.
+
+Recall that for the sine-Gordon kinks, we needed to solve
+\[
+ \theta'' = \sin \theta,
+\]
+with boundary conditions $\theta(x) \to 0$ or $2\pi$ as $x \to \pm \infty$. The only solutions we found were
+\[
+ \theta_K (x - X)
+\]
+for any $X \in \R$. This $X$ is interpreted as the location of the kink. So the moduli space of solutions is $\mathcal{M} = \R$.
+
+We shall get a similar but more interesting description for the $\lambda = 1$ vortices. This time, the moduli space will be $\C^N$, given by $N$ complex parameters describing the solutions.
+
+\begin{thm}[Taubes' theorem]
+ For $\lambda = 1$, the space of (gauge equivalence classes of) solutions of the Euler--Lagrange equations $\delta V_1 = 0$ with winding number $N$ is $\mathcal{M} \cong \C^N$.
+
 To be precise, given $N \in \N$ and an unordered set of points $\{Z_1, \cdots, Z_N\}$, there exists a smooth solution $A(x; Z_1, \cdots, Z_N)$ and $\Phi(x; Z_1, \cdots, Z_N)$ which solves the Euler--Lagrange equations $\delta V_1 = 0$, and also the so-called \term{Bogomolny equations}
+ \[
+ \D_1 \Phi + i \D_2 \Phi = 0, \quad B = \frac{1}{2} (1 - |\Phi|^2).
+ \]
+ Moreover, $\Phi$ has exactly $N$ zeroes $Z_1, \cdots, Z_N$ counted with multiplicity, where (using the complex coordinates $z = x_1 + ix_2$)
+ \[
+ \Phi(x; Z_1, \cdots, Z_N) \sim c_j (z - Z_j)^{n_j}
+ \]
+ as $z \to Z_j$, where $n_j = |\{k: Z_k = Z_j\}|$ is the multiplicity and $c_j$ is a nonzero complex number.
+
+ This is the unique such solution up to gauge equivalence. Furthermore,
+ \[
 V_1(A(\ph; Z_1, \cdots, Z_N), \Phi(\ph; Z_1, \cdots, Z_N)) = \pi N\tag{$*$}
+ \]
+ and
+ \[
+ \frac{1}{2\pi} \int_{\R^2} B\;\d^2 x = N = \text{winding number}.
+ \]
+ Finally, this gives all finite energy solutions of the gauged Ginzburg--Landau equations.
+\end{thm}
Note that it is not immediately clear from our description that the moduli space is $\C^N$. It looks more like $\C^N$ quotiented out by the action of the permutation group $S_N$. However, the resulting quotient is still isomorphic to $\C^N$. (However, it is important for various purposes to remember this quotient structure, and to use holomorphic coordinates which are invariant under the action of the permutation group --- the elementary symmetric polynomials in $\{Z_1, \ldots, Z_N\}$.)
+
+There is a lot to be said about this theorem. The equation $(*)$ tells us the energy is just a function of the number of particles, and does not depend on where they are. This means there is no force between the vortices. In situations like this, it is said that the Bogomolny bound is saturated. The final statement suggests that the topology is what is driving the existence of the vortices, as we have already seen. The reader will find it useful to work out the corresponding result in the case of negative winding number (in which case the holomorphicity condition becomes anti-holomorphicity, and the sign of the magnetic field is reversed in the Bogomolny equations).
+
+Note that the Euler--Lagrange equations themselves are second-order equations. However, the Bogomolny equations are \emph{first order}. In general, this is a signature that suggests that interesting mathematical structures are present.
+
+We'll discuss three crucial ingredients in this theorem, but we will not complete the proof, which involves more analysis than is a pre-requisite for this course. The proof can be found in Chapter 3 of Jaffe and Taubes's \emph{Vortices and Monopoles}.
+
+\subsubsection*{Holomorphic structure}
+When there are Bogomolny equations, there is often some complex analysis lurking behind. We can explicitly write the first Bogomolny equation as
+\[
+ \D_1 \Phi + i \D_2 \Phi = \frac{\partial \Phi}{\partial x^1} + i\frac{\partial \Phi}{\partial x^2} - i(A_1 + i A_2) \Phi = 0.
+\]
+Recall that in complex analysis, holomorphic functions can be characterized as complex-valued functions which are continuously differentiable (in the real sense) and also satisfy the Cauchy--Riemann equations
+\[
+ \frac{\partial f}{\partial \bar{z}} = \frac{1}{2} \left(\frac{\partial f}{\partial x^1} + i \frac{\partial f}{\partial x^2}\right) = 0.
+\]
So we think of the first Bogomolny equation as the covariant Cauchy--Riemann equations. It is possible to convert this into the standard Cauchy--Riemann equations to deduce the local behaviour of $\Phi$ near its zeroes.
+
+To do so, we write
+\[
+ \Phi = e^{\omega} f.
+\]
+Then
+\begin{align*}
+ \frac{\partial f}{\partial \bar{z}} &= e^{-\omega} \left(\frac{\partial \Phi}{\partial \bar{z}} - \frac{\partial \omega}{\partial \bar{z}} \Phi\right)\\
 &= e^{-\omega} \left(\frac{i(A_1 + i A_2)}{2} - \frac{\partial \omega}{\partial \bar{z}}\right)\Phi.
+\end{align*}
+This is equal to $0$ if $\omega$ satisfies
+\[
+ \frac{\partial \omega}{\partial \bar{z}} = i \frac{A_1 + iA_2}{2}.
+\]
+So the question is --- can we solve this? It turns out we can always solve this equation, and there is an explicit formula for the solution. In general, if $\beta$ is smooth, then the equation
+\[
 \frac{\partial \omega}{\partial \bar{z}} = \beta
+\]
+has a smooth solution in the disc $\{z: |z| < r\}$, given by
+\[
+ \omega(z, \bar{z}) = \frac{1}{2\pi i} \int_{|w| < r} \frac{\beta(w)}{w - z}\;\d w \wedge \d \bar{w}.
+\]
A proof can be found on page 5 of Griffiths and Harris, \emph{Principles of Algebraic Geometry}. So we can write
+\[
+ \Phi = e^{\omega} f
+\]
where $f$ is holomorphic. Since $e^{\omega}$ is never zero, we can apply all our knowledge of holomorphic functions to $f$, and deduce that $\Phi$ has isolated zeroes, where $\Phi \sim c_j(z - Z_j)^{n_j}$ for some positive integer $n_j$ and nonzero constant $c_j$.
+
+\subsubsection*{The Bogomolny equations}
We'll now show that $V_1(A, \Phi) \geq \pi N$, with equality precisely when $(A, \Phi)$ satisfies the Bogomolny equations, i.e.
+\[
+ \D_1 \Phi + i \D_2 \Phi = 0,\quad B = \frac{1}{2} (1 - |\Phi|^2).
+\]
+We first consider the simpler case of the sine-Gordon equation. As in the $\phi^4$ kinks, to find soliton solutions, we write the energy as
+\begin{align*}
+ E &= \int_{-\infty}^\infty \left(\frac{1}{2} \theta_x^2 + (1 - \cos \theta)\right) \;\d x \\
+ &= \frac{1}{2} \int_{-\infty}^\infty \left(\theta_x^2 + 4 \sin^2 \frac{\theta}{2}\right) \;\d x\\
+ &= \frac{1}{2} \int_{-\infty}^\infty \left(\left(\theta_x - 2 \sin \frac{\theta}{2}\right)^2 + 4 \theta_x \sin \frac{\theta}{2}\right)\;\d x\\
+ &= \frac{1}{2} \int_{-\infty}^\infty \left(\theta_x - 2 \sin \frac{\theta}{2}\right)^2\;\d x + \int_{-\infty}^\infty \frac{\partial}{\partial x}\left(-4 \cos \frac{\theta}{2}\right)\;\d x\\
+ &= \frac{1}{2}\int_{-\infty}^\infty \left(\theta_x - 2 \sin \frac{\theta}{2}\right)^2\;\d x + \left(-4\cos \frac{\theta(+\infty)}{2} + 4 \cos \frac{\theta(-\infty)}{2}\right).
+\end{align*}
We then use the kink asymptotic boundary conditions to obtain, say, $\theta(+\infty) = 2\pi$ and $\theta(-\infty) = 0$. So the boundary term gives $8$. Thus, subject to these boundary conditions, we can write the sine-Gordon energy as
+\[
+ E = \frac{1}{2}\int_{-\infty}^\infty \left(\theta_x - 2 \sin \frac{\theta}{2}\right)^2\;\d x + 8.
+\]
+Thus, if we try to minimize the energy, then we know the minimum is at least $8$, and if we could solve the first-order equation $\theta_x = 2 \sin \frac{\theta}{2}$, then the minimum would be exactly $8$. The solution we found does satisfy this first-order equation. Moreover, the solutions are all of the form
+\[
+ \theta(x) = \theta_K(x - X),\quad \theta_K(x) = 4 \arctan e^x.
+\]
+Thus, we have shown that the minimum energy is $8$, and the minimizers are all of this form, parameterized by $X \in \R$.
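It is straightforward to confirm numerically that the kink saturates the bound (an illustrative sketch using \texttt{numpy} and \texttt{scipy}):

```python
# Check that theta_K(x) = 4 arctan(e^x) satisfies the first-order equation
# theta_x = 2 sin(theta/2), and that its energy equals the topological bound 8.
import numpy as np
from scipy.integrate import quad

def theta_K(x):
    return 4 * np.arctan(np.exp(x))

def theta_K_x(x):
    return 2 / np.cosh(x)  # derivative of 4 arctan(e^x)

# the first-order (Bogomolny) equation holds pointwise
xs = np.linspace(-5, 5, 101)
bogomolny_residual = np.max(np.abs(theta_K_x(xs) - 2 * np.sin(theta_K(xs) / 2)))

def energy_density(x):
    return 0.5 * theta_K_x(x)**2 + 1 - np.cos(theta_K(x))

E, _ = quad(energy_density, -np.inf, np.inf)
print(bogomolny_residual, E)  # residual is tiny, E is close to 8
```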
+
+We want to do something similar for the Ginzburg--Landau theory. In order to make use of the discussion above of the winding number $N$, we will make the same standing assumptions as used in that discussion, but it is possible to generalize the conclusion of the following result to arbitrary finite energy configurations with an appropriate formulation of the winding number.
+\begin{lemma}
+ We have
+ \[
+ V_1(A, \Phi) = \frac{1}{2} \int_{\R^2} \left(\left(B - \frac{1}{2}(1 - |\Phi|^2)\right)^2 + 4 |\bar{\partial}_A \Phi|^2\right)\;\d^2 x + \pi N,
+ \]
+ where
+ \[
+ \bar{\partial}_A \Phi = \frac{1}{2} (\D_1 \Phi + i \D_2 \Phi).
+ \]
+\end{lemma}
+It is clear that the desired result follows from this.
+
+\begin{proof}
+ We complete the square and obtain
+ \[
+ V_1(A, \Phi) = \frac{1}{2} \int \left(\left(B - \frac{1}{2}(1 - |\Phi|^2)\right)^2 + B(1 - |\Phi|^2) + |\D_1\Phi|^2 + |\D_2 \Phi|^2\right)\;\d^2 x.
+ \]
+ We now dissect the terms one by one. We first use the definition of $B \;\d x^1 \wedge \d x^2= \d A$ and integration by parts to obtain
+ \[
+ \int_{\R^2} (1 - |\Phi|^2) \;\d A = - \int_{\R^2} \d (1 - |\Phi|^2) \wedge A = 2 \int_{\R^2} (\Phi, \D \Phi) \wedge A.
+ \]
+ Alternatively, we can explicitly write
+ \begin{align*}
+ \int_{\R^2} B(1 - |\Phi|^2) \;\d^2 x &= \int_{\R^2} (\partial_1 A_2 - \partial_2 A_1) (1 - |\Phi|^2)\;\d ^2x\\
+ &= \int_{\R^2} (A_2 \partial_1 |\Phi|^2 - A_1 \partial_2 |\Phi|^2) \;\d^2 x \\
 &= 2\int_{\R^2} \left(A_2 (\Phi, \D_1 \Phi) - A_1 (\Phi, \D_2 \Phi)\right)\;\d^2 x.
+ \end{align*}
+ Ultimately, we want to obtain something that looks like $|\bar{\partial}_A \Phi|^2$. We can write this out as
+ \[
+ (\D_1 \Phi + i \D_2 \Phi, \D_1 \Phi + i \D_2 \Phi) = |\D_1 \Phi|^2 + |\D_2 \Phi|^2 + 2(\D_1 \Phi, i \D_2 \Phi).
+ \]
+ We note that $i\Phi$ and $\Phi$ are always orthogonal, and $A_i$ is always a real coefficient. So we can write
+ \begin{align*}
+ (\D_1 \Phi, i \D_2 \Phi) &= (\partial_1 \Phi - i A_1 \Phi, i \partial_2 \Phi + A_2 \Phi) \\
+ &= (\partial_1 \Phi, i \partial _2 \Phi) + A_2 (\Phi, \partial_1 \Phi) - A_1 (\Phi, \partial_2 \Phi).
+ \end{align*}
+ We now use again the fact that $(\Phi, i\Phi) = 0$ to replace the usual derivatives with the covariant derivatives. So we have
+ \[
+ (\D_1 \Phi, i \D_2 \Phi) = (\partial_1 \Phi, i \partial_2 \Phi) + A_2 (\Phi, \D_1 \Phi) - A_1 (\Phi, \D_2 \Phi).
+ \]
 This tells us we have
 \[
 \int \left(B(1 - |\Phi|^2) + |\D_1 \Phi|^2 + |\D_2 \Phi|^2\right)\;\d^2 x = \int \left(4 |\bar{\partial}_A \Phi|^2 - 2 (\partial_1 \Phi, i \partial_2 \Phi)\right)\;\d^2 x.
 \]
 It then remains to show that $(\partial_1 \Phi, i \partial_2 \Phi) = -j^0(\Phi)$, since then the remaining term integrates to $\frac{1}{2} \int 2 j^0(\Phi)\;\d^2 x = \pi Q = \pi N$. But we just write
+ \begin{align*}
+ (\partial_1 \Phi_1 + i \partial_1 \Phi_2, - \partial_2 \Phi_2 + i \partial_2 \Phi_1) &= - (\partial_1 \Phi_1, \partial_2 \Phi_2) + (\partial_1 \Phi_2, \partial_2 \Phi_1)\\
+ &= -j^0(\Phi)\\
+ &= - \det
+ \begin{pmatrix}
+ \partial_1 \Phi_1 & \partial_2 \Phi_1\\
+ \partial_1 \Phi_2 & \partial_2 \Phi_2
+ \end{pmatrix}
+ \end{align*}
+ Then we are done.
+\end{proof}
+
+\begin{cor}
+ For any $(A, \Phi)$ with winding number $N$, we always have $V_1(A, \Phi) \geq \pi N$, and those $(A, \Phi)$ that achieve this bound are exactly those that satisfy
+ \[
+ \bar{\partial}_A \Phi = 0,\quad B = \frac{1}{2} (1 - |\Phi|^2).
+ \]
+\end{cor}
+
+\subsubsection*{Reduction to scalar equation}
+The remaining part of Taubes' theorem is to prove the existence of solutions to these equations, and that they are classified by $N$ unordered complex numbers. This is the main analytic content of the theorem.
+
+To do so, we reduce the two Bogomolny equations into a scalar equation. Note that we have
+\[
+ \D_1 \Phi + i \D_2 \Phi = (\partial_1 \Phi + i \partial_2 \Phi) - i (A_1 + i A_2) \Phi = 0.
+\]
+So we can write
+\[
+ A_1 + i A_2 = - i (\partial_1 + i \partial_2) \log \Phi.
+\]
+Thus, once we've got $\Phi$, we can get $A_1$ and $A_2$.
+
+The next step is to use gauge invariance. Under gauge invariance, we can fix the phase of $\Phi$ to anything we want. We write
+\[
+ \Phi = e^{\frac{1}{2} (u + i \theta)}.
+\]
+Then $|\Phi|^2 = e^u$.
+
+We might think we can get rid of $\theta$ completely. However, this is not quite true, since the argument is not well-defined at a zero of $\Phi$, and in general we cannot get rid of $\theta$ by a smooth gauge transformation. But since
+\[
+ \Phi \sim c_j (z - Z_j)^{n_j}
+\]
+near $Z_j$, we expect we can make $\theta$ look like
+\[
+ \theta = 2 \sum_{j = 1}^N \arg (z - Z_j).
+\]
+We will assume we can indeed do so. Then we have
+\[
+ A_1 = \frac{1}{2}(\partial_2 u + \partial_1 \theta),\quad A_2 = - \frac{1}{2} (\partial_1 u - \partial_2 \theta).
+\]
+We have now solved for $A$ using the first Bogomolny equation. We then use this to work out $B$ and obtain a scalar equation for $u$ by the second Bogomolny equation.
+
+\begin{thm}
+ In the above situation, the Bogomolny equation $B = \frac{1}{2} (1 - |\Phi|^2)$ is equivalent to the scalar equation for $u$
+ \[
+ -\Delta u + (e^u - 1) = -4\pi \sum_{j = 1}^N \delta_{Z_j}.
+ \]
+ This is known as \term{Taubes' equation}.
+\end{thm}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ B &= \partial_1 A_2 - \partial_2 A_1 \\
+ &= -\frac{1}{2} \partial_1^2 u - \frac{1}{2} \partial_2^2 u + \frac{1}{2} (\partial_1 \partial_2 - \partial_2 \partial_1) \theta\\
+ &= - \frac{1}{2} \Delta u + \frac{1}{2} (\partial_1 \partial_2 - \partial_2 \partial_1) \theta.
+ \end{align*}
+ We might think the second term vanishes identically, but that is not true. Our $\theta$ has some singularities, and so that expression is not going to vanish at the singularities. The precise statement is that $(\partial_1 \partial_2 - \partial_2 \partial_1) \theta$ is a distribution supported at the points $Z_j$.
+
+ To figure out what it is, we have to integrate:
+ \begin{align*}
+ \int_{|z - Z_j| \leq \varepsilon} (\partial_1 \partial_2 - \partial_2 \partial_1) \theta \;\d^2 x &= \int_{|z - Z_j| = \varepsilon} \partial_1 \theta \;\d x^1 + \partial_2 \theta \;\d x^2 \\
+ &= \oint_{|z - Z_j| = \varepsilon} \;\d \theta = 4\pi n_j,
+ \end{align*}
+ where $n_j$ is the multiplicity of the zero. Thus, we deduce that
+ \[
+ (\partial_1 \partial_2 - \partial_2 \partial_1) \theta = 4\pi \sum_j n_j \delta_{Z_j} = 4\pi \sum_{j = 1}^N \delta_{Z_j},
+ \]
+ since the $Z_j$ are listed with multiplicity. Substituting this into the expression for $B$ above recovers exactly Taubes' equation.
+\end{proof}
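+
+The key step in the proof, $\oint \d \theta = 4\pi n_j$ around a simple zero, is easy to confirm numerically (a small sketch, not from the notes; the location $Z$ and radius are arbitrary):
+
+```python
+import numpy as np
+
+# theta = 2 * arg(z - Z) around a circle of radius eps about a simple zero.
+Z, eps = 0.3 + 0.1j, 0.01
+t = np.linspace(0.0, 2.0 * np.pi, 2001)
+z = Z + eps * np.exp(1j * t)
+theta = 2.0 * np.angle(z - Z)
+
+# Total change of theta, unwrapping the jumps across the branch cut.
+winding = np.diff(np.unwrap(theta)).sum()
+print(winding / np.pi)  # -> 4.0 (up to rounding)
+```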
+We can think of this $u$ as a non-linear combination of fundamental solutions to the Laplacian. Near the $\delta$ functions, the $e^u - 1$ term doesn't contribute much, and the solution looks like the familiar fundamental solutions to the Laplacian with logarithmic singularities. However, far away from the singularities, $e^u - 1$ forces $u$ to tend to $0$, instead of growing to infinity.
+
+Taubes proved that this equation has a unique solution, which is smooth on $\R^2 \setminus \{Z_j\}$, with logarithmic singularities at $Z_j$, and such that $u \to 0$ as $|z| \to \infty$. Also, $u < 0$.
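+
+Although Taubes' equation has no closed-form solution, the $N = 1$ radial case is easy to explore numerically. Writing $u = v + 2 \log r$ removes the singular source, leaving the regular problem $v'' + v'/r = r^2 e^v - 1$ with $v'(0) = 0$, and we can shoot on $v(0)$ to select the solution with $u \to 0$ at infinity. The following sketch is not from the notes; the bisection bracket, step size and stopping criteria are ad hoc choices:
+
+```python
+import math
+
+R, h = 10.0, 0.005
+
+def rhs(r, v, w):  # v' = w,  w' = r^2 e^v - 1 - w/r
+    return w, r * r * math.exp(v) - 1.0 - w / r
+
+def shoot(a):
+    r, v, w = 1e-9, a, 0.0
+    while r < R:
+        # one classical RK4 step for (v, w)
+        k1v, k1w = rhs(r, v, w)
+        k2v, k2w = rhs(r + h / 2, v + h * k1v / 2, w + h * k1w / 2)
+        k3v, k3w = rhs(r + h / 2, v + h * k2v / 2, w + h * k2w / 2)
+        k4v, k4w = rhs(r + h, v + h * k3v, w + h * k3w)
+        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
+        w += h * (k1w + 2 * k2w + 2 * k3w + k4w) / 6
+        r += h
+        u, du = v + 2 * math.log(r), w + 2 / r
+        if u > 0.0:
+            return 1, u    # overshot zero: v(0) too large
+        if du < 0.0 and u < -0.01:
+            return -1, u   # u turned back down: v(0) too small
+    return 0, u            # u increased towards 0 all the way to R
+
+lo, hi = -10.0, 2.0        # ad hoc bisection bracket for a = v(0)
+for _ in range(60):
+    mid = 0.5 * (lo + hi)
+    if shoot(mid)[0] == 1:
+        hi = mid
+    else:
+        lo = mid
+
+status, u_end = shoot(0.5 * (lo + hi))
+print(status, u_end)  # status 0, u(R) small and negative
+```
+
+The resulting profile is negative, increasing, and logarithmically singular at the origin, as in Taubes' theorem.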
+
+It is an exercise to check that the Bogomolny equations imply the second-order Euler--Lagrange equations.
+
+For example, differentiating the second Bogomolny equation and using the first gives
+\[
+ \partial_1 B = - (\Phi, \D_1 \Phi) = (\Phi, i \D_2 \Phi).
+\]
+We can similarly do this for the sine-Gordon theory.
+
+\subsection{Physics of vortices}
+Recall we began with the ungauged Ginzburg--Landau theory, and realized the solitons didn't have finite energy. We then added a gauge field, and the problem went away --- we argued the coupling to the gauge field ``gave mass'' to the transverse component, thus allowing the existence of finite energy soliton solutions. In the book of Jaffe and Taubes there are results on the exponential decay of gauge invariant combinations of the fields which are another expression of this effect --- the Higgs mechanism. However, there is a useful and complementary way of understanding how gauge fields assist in stabilizing finite energy configurations against collapse, and this doesn't require \emph{any} detailed information about the theory at all --- only scaling. We now consider this technique, which is known either as the Derrick or the Pohozaev argument.
+
+Suppose we work in $d$ space dimensions. Then a general scalar field $\Phi: \R^d \to \R^\ell$ has energy functional given by
+\[
+ \int_{\R^d} \left(\frac{1}{2} |\nabla \Phi|^2 + U(\Phi)\right)\;\d^d x.
+\]
+for some $U$. In the following we consider smooth finite energy configurations for which the energy is stationary. To be precise, we require that the energy is stationary with respect to variations induced by rescaling of space (as is made explicit in the proof); we just refer to these configurations as stationary points.
+
+\begin{thm}[Derrick's scaling argument]\index{Derrick's scaling argument}
+ Consider a field theory in $d$-dimensions with energy functional
+ \[
+ E[\Phi] = \int_{\R^d}\left(\frac{1}{2} |\nabla \Phi|^2 + U(\Phi)\right)\;\d^d x = T + W,
+ \]
+ with $T$ the integral of the gradient term and $W$ the integral of the term involving $U$.
+ \begin{itemize}
+ \item If $d = 1$, then any stationary point must satisfy
+ \[
+ T = W.
+ \]
+ \item If $d = 2$, then all stationary points satisfy $W = 0$.
+ \item If $d \geq 3$, then all stationary points have $T = W = 0$, i.e.\ $\Phi$ is constant.
+ \end{itemize}
+\end{thm}
+This forbids the existence of solitons in high dimensions for this type of energy functional.
+
+\begin{proof}
+ Suppose $\Phi$ were such a stationary point. Then for any variation $\Phi_\lambda$ of $\Phi$ such that $\Phi = \Phi_1$, we have
+ \[
+ \left.\frac{\d}{\d \lambda}\right|_{\lambda = 1}E[\Phi_\lambda] = 0.
+ \]
+ Consider the particular variation given by
+ \[
+ \Phi_\lambda(\mathbf{x}) = \Phi(\lambda \mathbf{x}).
+ \]
+ Then we have
+ \[
+ W[\Phi_\lambda] = \int_{\R^d} U(\Phi_\lambda(\mathbf{x})) \;\d^d x = \lambda^{-d} \int_{\R^d} U(\Phi(\lambda \mathbf{x}))\;\d^d (\lambda x) = \lambda^{-d} W[\Phi].
+ \]
+ On the other hand, since $T$ contains two derivatives, scaling the derivatives as well gives us
+ \[
+ T[\Phi_\lambda] = \lambda^{2 -d} T[\Phi].
+ \]
+ Thus, we find
+ \[
+ E[\Phi_\lambda] = \lambda^{2 - d}T[\Phi] + \lambda^{-d} W[\Phi].
+ \]
+ Differentiating and setting $\lambda = 1$, we see that we need
+ \[
+ (2 - d) T[\Phi] - d W[\Phi] = 0.
+ \]
+ Then the results in different dimensions follow.
+\end{proof}
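+
+The scaling relation at the heart of the proof is easy to check numerically (a small sketch, not from the notes, with arbitrary stand-in values for the integrals $T$ and $W$):
+
+```python
+# E(lam) = lam**(2 - d) * T + lam**(-d) * W should have derivative
+# (2 - d) * T - d * W at lam = 1.
+def dE(d, T, W, eps=1e-6):
+    E = lambda lam: lam ** (2 - d) * T + lam ** (-d) * W
+    return (E(1 + eps) - E(1 - eps)) / (2 * eps)  # central difference
+
+T, W = 2.0, 5.0  # arbitrary positive stand-ins
+for d in (1, 2, 3):
+    print(d, dE(d, T, W), (2 - d) * T - d * W)  # the two columns agree
+```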
+%We can look at vortices for general $\lambda > 0$. We shall begin by some high-level argument involving \term{Derrick's scaling argument}. For scalar fields $\phi: \R^d \to \R^\ell$, suppose we have an energy functional
+%\[
+% \int_{\R^d} \left(\frac{1}{2} |\Delta \phi|^2 + U(\phi)\right)\;\d^d x.
+%\]
+%We assume that the potential $U \geq 0$, and that there exists a finite-energy smooth equilibrium $\phi$. In other words, if $\phi_\lambda$ is a smooth family of field configurations with $\phi_\lambda |_{\lambda = 1} = \phi$, then
+%\[
+% \left.\frac{\d}{\d \lambda} V(\phi_\lambda) \right|_{\lambda = 1} = 0.
+%\]
+%We now want to ask if these things exist.
+%
+%We choose our variation by
+%\[
+% \phi_\lambda(x) = \phi(\lambda x).
+%\]
+%We can evaluate
+%\begin{align*}
+% V(\phi_\lambda) &= \int_{\R^d} \left(\frac{1}{2} \lambda^2|(\Delta \phi)(\lambda x)|^2 + U(\phi_\lambda)\right)\;\d^d x\\
+% &= \int_{\R^d} \left(\frac{1}{2} \lambda^2 |\Delta \phi(y)|^2 + U(\phi(y))\right)\;\lambda^{-d} \;\d^d y,
+%\end{align*}
+%where we substituted in $y = \lambda x$. We write
+%\[
+% T = \int_{\R^d} \frac{1}{2} |\Delta \phi|^2 \;\d^d x,\quad W = \int_{\R^d} U(\phi)\;\d^d x.
+%\]
+%Then we find that
+%\[
+% V(\phi_\lambda) = \lambda^{2 - d} T + \lambda^{-d} W.
+%\]
+%Here $T$ and $W$ are constants completely determined by the soliton solution, and is in particular independent of $\lambda$. So we find
+%\[
+% \left.\frac{\d}{\d \lambda} (V(\phi_\lambda))\right|_{\lambda = 1} = (2 - d) T - d W = 0.
+%\]
+%Now if we want interesting, non-vacuum solutions, then $T$ and $W$ are non-zero.
+%
+%If $d = 1$, then there is no problem. We just get that we must have
+%\[
+% T = W.
+%\]
+%However, if $d \geq 3$, then we immediately get an impossibility, as both coefficients are negative. So this happens only if $T = W = 0$. So $\phi$ is constant.
+%
+%If $d = 2$, then this is a bit border-line. Then the kinetic energy term doesn't matter. Then we get a solution only if $W = 0$. There are some interesting solitons satisfying this, if the vacuum states are degenerate enough.
+
+The $d = 2$ case is borderline. We can still get interesting soliton theories if the space of classical vacua $\{\Phi: W(\Phi) = 0\}$ is sufficiently large.
+\begin{eg}
+ In $d = 2$, we can take $\ell = 3$ and
+ \[
+ W(\phi) = (1 - |\phi|^2)^2.
+ \]
+ Then the set $W = 0$ is given by the unit sphere $S^2 \subseteq \R^3$. With $\phi$ constrained to this $2$-sphere, this is a \term{$\sigma$-model}, and there is a large class of such maps $\phi(x)$ which minimize the energy (for a fixed topology) --- in fact they are just rational functions when stereographic projection is used to introduce complex coordinates.
+\end{eg}
+
+Derrick's scaling argument is not only a mathematical trick. We can also interpret it physically. Increasing $\lambda$ corresponds to ``collapsing'' down the field. Then we see that in $d \geq 3$, both the gradient and potential terms favour collapsing of the field. However, in $d = 1$, the gradient term wants the field to expand, and the potential term wants the field to collapse. If these two energies agree, then these forces perfectly balance, and one can hope that stationary solitons exist.
+
+We will eventually want to work with theories in higher dimensions, and Derrick's scaling argument shows that for scalar theories with energy functionals as above this isn't going to be successful for three or more dimensions, and places strong restrictions in two dimensions. There are different ways to get around Derrick's argument. In the Skyrme model, which we are going to study in the next chapter, there are no gauge fields, but instead we will have some higher powers of derivative terms. In particular, by introducing fourth powers of derivatives in the energy density, we will have a term that scales as $\lambda^{4 - d}$, and this acts to stabilize scalar field solitons in three dimensions.
+
+Now let's see how gauge theory provides a way around Derrick's argument without having to depart from employing only energy densities which are quadratic in the derivatives (as is highly desirable for quantization). To understand this, we need to know how gauge fields transform under spatial rescaling.
+
+One way to figure this out is to insist that the covariant derivative $\D_j \Phi_\lambda$ must scale as a whole in the same way that ordinary derivatives scale in scalar field theory. Since
+\[
+ \partial_j \Phi_\lambda = \lambda (\partial_j \Phi)(\lambda x),
+\]
+we would want $A_j$ to scale as
+\[
+ (A_j)_\lambda(x) = \lambda A_j(\lambda x).
+\]
+We can also see this more geometrically. Consider the function
+\begin{align*}
+ \chi_\lambda: \R^d &\to \R^d\\
+ x &\mapsto \lambda x.
+\end{align*}
+Then our previous transformations were just given by pulling back along $\chi_\lambda$. Since $A$ is a $1$-form, it pulls back as
+\[
+ \chi_\lambda^* (A_j \;\d x^j) = \lambda A_j (\lambda x) \;\d x^j.
+\]
+Then since $B = \d A$, the curvature must scale as
+\[
+ B_\lambda(x) = \lambda^2 B(\lambda x).
+\]
+Thus, we can obtain a gauged version of Derrick's scaling argument.
+
+Since we don't want to develop gauge theory in higher dimensions, we will restrict our attention to the Ginzburg--Landau model. Since we already used the letter $\lambda$, we will denote the scaling parameter by $\mu$. We have
+\begin{align*}
+ V_\lambda(A_\mu, \Phi_\mu) &= \frac{1}{2} \int \left(\mu^4 B(\mu x)^2 + \mu^2 |\D \Phi(\mu x)|^2 + \frac{\lambda}{4} (1 - |\Phi(\mu x)|^2)^2\right) \frac{1}{\mu^2}\;\d^2 (\mu x)\\
+ &= \frac{1}{2} \int \left(\mu^2 B^2(y) + |\D \Phi(y)|^2 + \frac{\lambda}{4 \mu^2} (1 - |\Phi(y)|^2)^2\right)\;\d^2 y.
+\end{align*}
+Again, the gradient term is scale invariant, but the magnetic field term counteracts the potential term. We can find the derivative to be
+\[
+ \left.\frac{\d}{\d \mu}\right|_{\mu = 1} V_\lambda(A_\mu, \Phi_\mu) = \int \left(B^2 - \frac{\lambda}{4}(1 - |\Phi|^2)^2\right)\;\d^2 y.
+\]
+Thus, for a soliton, we must have
+\[
+ \int B^2\;\d^2 x = \frac{\lambda}{4} \int_{\R^2} (1 - |\Phi|^2)^2 \;\d^2 x.
+\]
+Such solutions exist, and we see that this is because they are stabilized by the magnetic field energy in the sense that a collapse of the configuration induced by rescaling would be resisted by the increase of magnetic energy which such a collapse would produce. Note that in the case $\lambda = 1$, this equation is just the integral form of one of the Bogomolny equations!
+
+
+\subsubsection*{Scattering of vortices}
+Derrick's scaling argument suggests that vortices can exist if $\lambda > 0$. However, as we previously discussed, for $\lambda \not= 1$, there are forces between vortices in general, and we don't get static, separated vortices. By doing numerical simulations, we find that when $\lambda < 1$, the vortices attract. When $\lambda > 1$, the vortices repel. Thus, when $\lambda > 1$, the symmetric vortices with $N > 1$ are unstable, as they want to break up into multiple single vortices.
+
+We can talk a bit more about the $\lambda = 1$ case, where we have static multi-vortices. For example, for $N = 2$, the solutions are parametrized by pairs of points in $\C$, up to equivalence
+\[
+ (Z_1, Z_2) \sim (Z_2, Z_1).
+\]
+We said the moduli space is $\C^2$, and this is indeed true. However, $Z_1$ and $Z_2$ are not good coordinates for this space. Instead, good coordinates on the moduli space $\mathcal{M} = \mathcal{M}_2$ are given by some functions symmetric in $Z_1$ and $Z_2$. One valid choice is
+\[
+ Q = Z_1 + Z_2,\quad P = Z_1 Z_2.
+\]
+In general, for vortex number $N$, we should use the \term{elementary symmetric polynomials} in $Z_1, \cdots, Z_N$ as our coordinates.
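+
+Concretely (a small sketch, not from the notes, with hypothetical positions): the elementary symmetric polynomials are unchanged under relabelling of the $Z_j$, which is exactly why they, unlike the $Z_j$ themselves, are single-valued coordinates on the moduli space:
+
+```python
+from itertools import combinations, permutations
+from functools import reduce
+from operator import mul
+
+def e(k, zs):
+    # k-th elementary symmetric polynomial of the numbers zs
+    return sum(reduce(mul, c, 1) for c in combinations(zs, k))
+
+Z = [1 + 2j, -0.5j]        # hypothetical vortex positions, N = 2
+Q, P = e(1, Z), e(2, Z)    # Q = Z1 + Z2, P = Z1 * Z2
+for p in permutations(Z):  # relabelling the vortices changes nothing
+    assert e(1, p) == Q and e(2, p) == P
+print(Q, P)  # -> (1+1.5j) (1-0.5j)
+```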
+
+Now suppose we set our vortices in motion. For convenience, we fix the center of mass so that $Q(t) = 0$. We can then write $P$ as
+\[
+ P = - Z_1^2.
+\]
+If we do some simulations, we find that in a head-on collision, after they collide, the vortices scatter off at $90^\circ$. This is the \term{$90^\circ$ scattering phenomenon}, and holds for other $\lambda$ as well.
+
+In terms of our coordinates, $Z_1^2$ is smoothly evolving from a negative to a positive value, going through $0$. This corresponds to $Z_1 \mapsto \pm i Z_1$, $Z_2 = - Z_1$. Note that at the point when they collide, we lose track of which vortex is which.
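+
+The $90^\circ$ scattering is transparent in these coordinates. A toy illustration (not a field simulation; the trajectory of $P$ through $0$ is put in by hand):
+
+```python
+import cmath
+
+# If P = -Z_1**2 passes smoothly through 0 along the real axis, the
+# positions +-Z_1 jump from the real axis to the imaginary axis.
+def positions(P):
+    Z1 = cmath.sqrt(-P)
+    return Z1, -Z1
+
+before = positions(-1.0)  # P < 0: vortices at +-1
+after = positions(+1.0)   # P > 0: vortices at +-i
+turn = cmath.phase(after[0]) - cmath.phase(before[0])
+print(before, after, turn)  # turn is pi/2
+```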
+
+Similar to the $\phi^4$ kinks, we can obtain effective Lagrangians for the dynamics of these vortices. However, this is much more complicated.
+
+\subsection{Jackiw--Pi vortices}
+So far, we have been thinking about electromagnetism, using the abelian Higgs model. There is a different system that is useful in condensed matter physics. We look at vortices in Chern--Simons--Higgs theory. This has a different Lagrangian which is not Lorentz invariant --- instead of having the Maxwell curvature term, we have the Chern--Simons Lagrangian term. We again work in two space dimensions, with the Lagrangian density given by
+\[
+ \mathcal{L} = \frac{\kappa}{4} \varepsilon^{\mu\nu\lambda} A_\mu F_{\nu\lambda} - (i \Phi, \D_0 \Phi) - \frac{1}{2} | \D \Phi|^2 + \frac{1}{2\kappa} |\Phi|^4,
+\]
+where $\kappa$ is a constant,
+\[
+ F_{\nu \lambda} = \partial_\nu A_\lambda - \partial_\lambda A_\nu
+\]
+is the electromagnetic field and, as before,
+\[
+ \D_0 \Phi = \frac{\partial \Phi}{\partial t} - i A_0 \Phi.
+\]
+The first term is the \term{Chern--Simons term}, while the rest is the Schr\"odinger Lagrangian density with a $|\Phi|^4$ potential term.
+
+Varying with respect to $\Phi$, the Euler--Lagrange equation gives the Schr\"odinger equation
+\[
+ i \D_0 \Phi + \frac{1}{2} \D_j^2 \Phi + \frac{1}{\kappa} |\Phi|^2 \Phi = 0.
+\]
+If we take the variation with respect to $A_0$ instead, then we have
+\[
+ \kappa B + |\Phi|^2 = 0.
+\]
+This is unusual --- it looks more like a constraint than an evolution equation, and is a characteristic feature of Chern--Simons theories.
+
+The other equations give conditions like
+\begin{align*}
+ \partial_1 A_0 &= \partial_0 A_1 + \frac{1}{\kappa} (i \Phi, \D_2 \Phi)\\
+ \partial_2 A_0 &= \partial_0 A_2 - \frac{1}{\kappa} (i \Phi, \D_1 \Phi).
+\end{align*}
+This is peculiar compared to Maxwell theory --- the equations relate the current directly to the electromagnetic field, rather than its derivative.
+
+For static solutions, we need
+\[
+ \partial_i A_0 = \frac{1}{\kappa} \varepsilon_{ij} (i \Phi, \D_j \Phi).
+\]
+To solve this, we assume the field again satisfies the covariant anti-holomorphicity condition
+\[
+ \D_j \Phi - i \varepsilon_{jk} \D_k \Phi = 0.
+\]
+Then we can write
+\[
+ \partial_1 A_0 = +\frac{1}{\kappa} (i \Phi, \D_2 \Phi) = - \frac{1}{\kappa} \partial_1 \frac{|\Phi|^2}{2},
+\]
+and similarly for the derivative in the second coordinate direction. We can then integrate these to obtain
+\[
+ A_0 = -\frac{|\Phi|^2}{ 2 \kappa}.
+\]
+
+We can then look at the other two equations, and see how we can solve those. For static fields, the Schr\"odinger equation becomes
+\[
+ A_0 \Phi + \frac{1}{2} \D_j^2 \Phi + \frac{1}{\kappa} |\Phi|^2 \Phi = 0.
+\]
+Substituting in $A_0$, we obtain
+\[
+ \D_j^2 \Phi = -\frac{1}{\kappa} |\Phi|^2 \Phi.
+\]
+Let's then see if this makes sense. We need to check whether this is consistent with the anti-holomorphicity condition. The answer is yes, provided we also use the constraint equation $\kappa B + |\Phi|^2 = 0$. We calculate
+\[
+ \D_j^2 \Phi = \D_1^2 \Phi + \D_2^2 \Phi = i (\D_1 \D_2 - \D_2 \D_1) \Phi = + B \Phi = -\frac{1}{\kappa} |\Phi|^2 \Phi,
+\]
+exactly what we wanted.
+
+So the conclusion (check as an exercise) is that we can generate vortex solutions by solving
+\begin{align*}
+ \D_j \Phi - i \varepsilon_{jk} \D_k \Phi &= 0\\
+ \kappa B + |\Phi|^2 &= 0.
+\end{align*}
+As in Taubes' theorem, there is a reduction to a scalar equation, which in this case is solvable explicitly:
+\[
+ \Delta \log |\Phi| = -\frac{1}{\kappa} |\Phi|^2.
+\]
+Setting $\rho = |\Phi|^2$, this becomes Liouville's equation
+\[
+ \Delta \log \rho = -\frac{2}{\kappa}\rho,
+\]
+which can in fact be solved in terms of rational functions --- see for example Chapter 5 of the book \emph{Solitons in Field Theory and Nonlinear Analysis} by Y. Yang. (As in Taubes' theorem, there is a corresponding statement providing solutions via holomorphic rather than anti-holomorphic conditions.)
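+
+For instance (a sketch, not from the notes; the solution formula is the standard one from the Liouville-equation literature), for holomorphic $f$ the function $\rho = 4\kappa |f'|^2/(1 + |f|^2)^2$ solves the equation above, which we can confirm by finite differences for $f(z) = z$:
+
+```python
+import numpy as np
+
+kappa, h = 1.5, 0.005
+x = np.arange(-1.0, 1.0 + h / 2, h)
+X, Y = np.meshgrid(x, x, indexing="ij")
+r2 = X**2 + Y**2
+rho = 4.0 * kappa / (1.0 + r2) ** 2  # f(z) = z, so |f'| = 1
+
+# Five-point Laplacian of log(rho) on the interior of the grid.
+g = np.log(rho)
+lap = (g[2:, 1:-1] + g[:-2, 1:-1] + g[1:-1, 2:] + g[1:-1, :-2]
+       - 4.0 * g[1:-1, 1:-1]) / h**2
+
+err = np.abs(lap + (2.0 / kappa) * rho[1:-1, 1:-1]).max()
+print(err)  # zero up to O(h^2) discretisation error
+```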
+
+\section{Skyrmions}
+
+We now move on to one dimension higher, and study \emph{Skyrmions}. In recent years, there has been a lot of interest in what people call ``Skyrmions'', but what they are studying is a 2-dimensional variant of the original idea of Skyrmions. These occur in certain exotic magnetic systems. But instead, we are going to study the original Skyrmions as discovered by Skyrme, which have applications to nuclear physics.
+
+With details to be filled in soon, hadronic physics exhibits an (approximate) spontaneously broken \emph{chiral symmetry} $\frac{\SU(2)_L \times \SU(2)_R}{\Z_2} \cong \SO(4)$, where the unbroken group is the (diagonal) $\SO(3)$ of isospin, whose elements are the images of the pairs $(g, g) \in \SU(2) \times \SU(2)$.
+
+This symmetry is captured in various effective field theories of pions (which are the approximate Goldstone bosons) and heavier mesons. It is also a feature of QCD with very light $u$ and $d$ quarks.
+
+The special feature of Skyrmion theory is that we describe nucleons as solitons in the effective field theory. Skyrme's original idea was that nucleons and bigger nuclei can be modelled by classical approximations to some ``condensates'' of pion fields. To explain the conservation of baryon number, the classical field equations have soliton solutions (Skyrmions) with an integer topological charge. This topological charge is then identified with what is known, physically, as the baryon number.
+
+\subsection{Skyrme field and its topology}
+Before we begin talking about the Skyrme field, we first discuss the symmetry group this theory enjoys. Before symmetry breaking, our theory has a symmetry group
+\[
+ \frac{\SU(2) \times \SU(2)}{\{\pm (\mathbf{1}, \mathbf{1})\}} = \frac{\SU(2) \times \SU(2)}{\Z_2}.
+\]
+
+This might look like a rather odd symmetry group to work with. We can begin by understanding the $\SU(2) \times \SU(2)$ part of the symmetry group. This group acts naturally on $\SU(2)$ again, by
+\[
+ (A, B) \cdot U = AUB^{-1}.
+\]
+However, we notice that the pair $(-\mathbf{1}, -\mathbf{1}) \in \SU(2) \times \SU(2)$ acts trivially. So the true symmetry group is the quotient by the subgroup generated by this element. One can check that after this quotienting, the action is faithful.
+
+In the Skyrme model, the field will be valued in $\SU(2)$. It is convenient to introduce coordinates for our Skyrme field\index{Skyrme field}. As usual, we let $\boldsymbol\tau$ be the Pauli matrices, and $\mathbf{1}$ be the unit matrix. Then we can write the Skyrme field as
+\[
+ U(\mathbf{x}, t) = \sigma(\mathbf{x}, t) \mathbf{1} + i \boldsymbol\pi(\mathbf{x}, t) \cdot \boldsymbol\tau.
+\]
+However, the values of $\sigma$ and $\boldsymbol\pi$ cannot be arbitrary. For $U$ to actually lie in $\SU(2)$, we need the coefficients to satisfy
+\[
+ \sigma, \pi_i \in \R,\quad \sigma^2 + \boldsymbol\pi \cdot \boldsymbol\pi = 1.
+\]
+This is a \emph{non-linear} constraint, and defines what is known as a non-linear \term{$\sigma$-model}\index{non-linear $\sigma$-model}.
+
+From this constraint, we see that geometrically, we can identify $\SU(2)$ with $S^3$. We can also see this directly, by writing
+\[
+ \SU(2) =\left\{
+ \begin{pmatrix}
+ \alpha & \beta\\
+ -\bar{\beta} & \bar{\alpha}
+ \end{pmatrix} \in M_2(\C): |\alpha|^2 + |\beta|^2 = 1\right\},
+\]
+and this gives a natural embedding of $\SU(2)$ into $\C^2 \cong \R^4$ as the unit sphere, by sending the matrix to $(\alpha, \beta)$.
+
+One can check that the action we wrote down acts by isometries of the induced metric on $S^3$. Thus, we obtain an inclusion
+\[
+ \frac{\SU(2) \times \SU(2)}{\Z_2} \hookrightarrow \SO(4),
+\]
+which happens to be a surjection.
+
+Our theory will undergo spontaneous symmetry breaking, and the canonical choice of vacuum will be $U = \mathbf{1}$. Equivalently, this is when $\sigma = 1$ and $\boldsymbol\pi = \mathbf{0}$. We see that the stabilizer of $\mathbf{1}$ is given by the diagonal
+\[
+ \Delta : \SU(2) \rightarrow \frac{\SU(2) \times \SU(2)}{\Z_2},
+\]
+since $A\mathbf{1}B^{-1} = \mathbf{1}$ if and only if $A = B$.
+
+Geometrically, if we view $\frac{\SU(2) \times \SU(2)}{\Z_2} \cong \SO(4)$, then it is obvious that the stabilizer of $\mathbf{1}$ is the copy of $\SO(3) \subseteq \SO(4)$ that fixes the $\mathbf{1}$ axis. Indeed, the image of the diagonal $\Delta$ is $\SU(2)/\{\pm \mathbf{1}\} \cong \SO(3)$.
+
+%Note that $\SU(2) \times \SU(2)$ acts naturally on $\SU(2)$ itself, by left and right multiplication. Since we want to capture this symmetry, we will construct the Skyrme fields as $\SU(2)$-valued matrices.
+%
+%We let $\boldsymbol\tau$ be the Pauli matrices. Then we can write the Skyrme field as
+%\[
+% U(\mathbf{x}, t) = \sigma(\mathbf{x}, t) \mathbf{1}_2 + i \boldsymbol\pi(x, t) \cdot \boldsymbol\tau.
+%\]
+%For this to lie in $\SU(2)$, we need the coefficients to satisfy
+%\[
+% \sigma, \pi_i \in \R,\quad \sigma^2 + \boldsymbol\pi \cdot \boldsymbol\pi = 1.
+%\]
+%This is a \emph{non-linear} constraint. This condition is a sum of four squares. So geometrically, we are talking about the unit three sphere $S^3 \subseteq \R^4$. Indeed, $\SU(2)$ is isomorphic to the $3$-sphere. This is known as a \term{$\sigma$-model}.
+%
+%Since we want a $\SU(2) \times \SU(2)$ symmetry, our Lagrangian has to be invariant under left and right multiplication of $\SU(2)$. Under the identification of $\SU(2) \cong \S^3$ geometrically, this is just the usual $\SO(4)$ action on $S^3$. If we want this $\SO(4)$ to be our isometry group, then there is a natural Riemannian metric to put on the sphere, namely the metric induced by $\R^4$.
+
+Note that in our theory, for any choice of $\boldsymbol\pi$, there are at most two possible choices of $\sigma$. Thus, despite there being four variables, there are only three degrees of freedom. Geometrically, this is saying that $\SU(2) \cong S^3$ is a three-dimensional manifold.
+
+This has some physical significance. We are using the $\boldsymbol\pi$ fields to model pions. We have seen and observed pions a lot. We know they exist. However, as far as we can tell, there is no ``$\sigma$ meson'', and this can be explained by the fact that $\sigma$ isn't a genuine degree of freedom in our theory.
+
+Let's now try to build a Lagrangian for our field $U$. We will want to introduce derivative terms. From a mathematical point of view, the quantity $\partial_\mu U$ isn't a very nice thing to work with. It is a quantity that lives in the tangent space $T_U \SU(2)$, and in particular, the space it lives in depends on the value of $U$.
+
+What we want to do is to pull this back to $T_\mathbf{1} \SU(2) = \su(2)$. To do so, we multiply by $U^{-1}$. We write
+\[
+ R_\mu = (\partial_\mu U)U^{-1},
+\]
+which is known as the \term{right current}. For practical, computational purposes, it is convenient to note that
+\[
+ U^{-1} = \sigma \mathbf{1} - i \boldsymbol\pi \cdot \boldsymbol\tau.
+\]
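+
+A quick numerical check of this parametrization (a sketch using numpy; the particular values of $\boldsymbol\pi$ are arbitrary):
+
+```python
+import numpy as np
+
+# Pauli matrices and the 2x2 identity.
+tau = [np.array([[0, 1], [1, 0]], dtype=complex),
+       np.array([[0, -1j], [1j, 0]], dtype=complex),
+       np.array([[1, 0], [0, -1]], dtype=complex)]
+I2 = np.eye(2, dtype=complex)
+
+pvec = np.array([0.2, -0.4, 0.5])        # arbitrary pion field values
+sigma = np.sqrt(1.0 - pvec @ pvec)       # enforce sigma^2 + pi.pi = 1
+
+U = sigma * I2 + 1j * sum(p * t for p, t in zip(pvec, tau))
+Uinv = sigma * I2 - 1j * sum(p * t for p, t in zip(pvec, tau))
+
+print(np.allclose(U @ Uinv, I2),          # inverse is as claimed
+      np.isclose(np.linalg.det(U), 1.0),  # det U = sigma^2 + pi.pi = 1
+      np.allclose(U.conj().T @ U, I2))    # U is unitary
+```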
+Using the $(+, -, -, -)$ metric signature, we can now write the Skyrme Lagrangian density as
+\[
+ \mathcal{L} =-\frac{F_{\pi}^2 }{16} \Tr \left( R_\mu R^\mu\right) +\frac{1}{32e^2}\Tr\left([R_\mu, R_\nu][R^\mu, R^\nu]\right) - \frac{1}{8} F_\pi^2 m_\pi^2 \Tr(\mathbf{1} - U).
+\]
+The three terms are referred to as the $\sigma$-model term, the Skyrme term and the pion mass term respectively.
+
+The first term is the obvious kinetic term. The second term might seem a bit mysterious, but we \emph{must} have it (or some variant of it). By Derrick's scaling argument, we cannot have solitons if we just have the first term. We need a term with a higher power of derivatives to make solitons feasible.
+
+There are really only two possible terms with four derivatives. The alternative is the square of the first term. However, Skyrme rejected that option, because the resulting Lagrangian would contain four time derivatives. From a classical point of view, this is bad, because to specify the initial configuration, we would need not only the initial field, but also its first three time derivatives. This doesn't happen with the Skyrme term, because the commutator vanishes when $\mu = \nu$, so each piece is at most quadratic in time derivatives.
+
+Now note that the first two terms have an exact chiral symmetry, i.e.\ they are invariant under the $\SO(4)$ action previously described. In the absence of the final term, this symmetry would be spontaneously broken by a choice of vacuum. As described before, there is a conventional choice $U = \mathbf{1}$. After this spontaneous symmetry breaking, we are left with an isospin $\SU(2)$ symmetry\index{isospin symmetry}. This isospin symmetry rotates the $\boldsymbol\pi$ fields among themselves.
+
+The role of the extra term is that now the vacuum \emph{has} to be the identity matrix. The symmetry is now \emph{explicitly broken}. This might not be immediately obvious, but this is because the pion mass term is linear in $\sigma$ and is minimized when $\sigma = 1$. Note that this theory is still invariant under the isospin $\SU(2)$ symmetry. Since the isospin symmetry is not broken, all pions have the same mass. In reality, the pion masses are very close, but not exactly equal, and we can attribute the difference in mass as due to electromagnetic effects. In terms of the $\boldsymbol\pi$ fields we defined, the physical pions are given by
+\[
+ \pi^{\pm} = \pi_1 \pm i \pi_2,\quad \pi^0 = \pi_3.
+\]
+It is convenient to draw the target space $\SU(2)$ as
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw circle [radius=1];
+
+ \node [circ] at (0, -1) {};
+ \node [below] at (0, -1) {$\sigma = 1$};
+
+ \node [circ] at (0, 1) {};
+ \node [above] at (0, 1) {$\sigma = -1$};
+
+ \draw [->] (2.5, -0.7) -- (2.5, 0.7) node [pos=0.5, right, align=left] {Potential\\ energy};
+
+ \draw [dashed] (1, 0) arc(0:180:1 and 0.3);
+ \draw (1, 0) arc(0:-180:1 and 0.3);
+ \node [right, align=left] at (1, 0) {$\sigma = 0$\\ $|\boldsymbol\pi| = 1$};
+
+ \node at (-0.707, 0.707) [anchor = south east] {$S^3 \cong \SU(2)$};
+ \end{tikzpicture}
+\end{center}
+$\sigma = -1$ is the \emph{anti-vacuum}. We will see that $\sigma$ takes the value $-1$ at the core of the Skyrmion.
+
+In the Skyrme Lagrangian, we have three free parameters. This is rather few for an effective field theory, but we can reduce the number further by picking appropriate coefficients. We introduce an energy unit $\frac{F_\pi}{4e}$ and length unit $\frac{2}{eF_\pi}$. Setting these to $1$, there is one parameter left, which is the dimensionless pion mass. In these units, we have
+\[
+ L = \int \left(-\frac{1}{2}\Tr(R_\mu R^\mu) + \frac{1}{16} \Tr([R_\mu, R_\nu][R^\mu, R^\nu]) - m^2 \Tr(\mathbf{1} - U)\right)\;\d^3 x.
+\]
+In this notation, we have
+\[
+ m = \frac{2m_\pi}{e F_\pi}.
+\]
+In general, we will think of $m$ as being ``small''.
+
+Let's see what happens if we in fact put $m = 0$. In this case, the lack of mass term means we no longer have the boundary condition that $U \to 1$ at infinity. Hence, we need to manually impose this condition.
+
+Deriving the Euler--Lagrange equations is slightly messy, since we have to vary $U$ while staying inside the group $\SU(2)$. Thus, we vary $U$ multiplicatively,
+\[
+ U \mapsto U (\mathbf{1} + \varepsilon V)
+\]
+for some $V \in \su(2)$. We then have to figure out how $R$ varies, do some differentiation, and then the Euler--Lagrange equations turn out to be
+\[
+ \partial_\mu \left(R^\mu + \frac{1}{4} [R^\nu, [R_\nu, R^\mu]]\right) = 0.
+\]
+For static fields, the energy is given by
+\[
+ E = \int \left(-\frac{1}{2} \Tr(R_i R_i) - \frac{1}{16} \Tr([R_i, R_j] [R_i, R_j])\right)\;\d^3 x \equiv E_2 + E_4,
+\]
+where we sum $i$ and $j$ from $1$ to $3$. This is a sum of two terms --- the first is quadratic in derivatives while the second is quartic.
+
+Note that the trace functional on $\su(2)$ is negative definite. So the energy is actually positive.
+
+We can again run Derrick's theorem.
+\begin{thm}[Derrick's theorem]
+ We have $E_2 = E_4$ for any finite-energy static solution for $m = 0$ Skyrmions.
+\end{thm}
+
+\begin{proof}
+ Suppose $U(\mathbf{x})$ minimizes $E = E_2 + E_4$. We rescale this solution, and define
+ \[
+ \tilde{U}(\mathbf{x}) = U(\lambda \mathbf{x}).
+ \]
+ Since $U$ is a solution, the energy is stationary with respect to $\lambda$ at $\lambda = 1$.
+
+ Differentiating the rescaled field, we obtain
+ \[
+ \partial_i \tilde{U}(\mathbf{x}) = \lambda (\partial_i U)(\lambda \mathbf{x}).
+ \]
+ Therefore we find
+ \[
+ \tilde{R}_i(\mathbf{x}) = \lambda R_i(\lambda \mathbf{x}),
+ \]
+ and therefore
+ \begin{align*}
+ \tilde{E}_2 &= - \frac{1}{2} \int \Tr(\tilde{R}_i \tilde{R}_i) \;\d^3 x\\
+ &= - \frac{1}{2} \lambda^2 \int \Tr(R_i R_i)(\lambda \mathbf{x})\;\d^3 x\\
+ &= - \frac{1}{2} \frac{1}{\lambda} \int \Tr(R_i R_i) (\lambda \mathbf{x})\;\d^3 (\lambda x)\\
+ &= \frac{1}{\lambda} E_2.
+ \end{align*}
+ Similarly,
+ \[
+ \tilde{E}_4 = \lambda E_4.
+ \]
+ So we find that
+ \[
+ \tilde{E} = \frac{1}{\lambda} E_2 + \lambda E_4.
+ \]
+ But this function has to have a minimum at $\lambda = 1$. So the derivative with respect to $\lambda$ must vanish there, requiring
+ \[
+ 0 = \left.\frac{\d \tilde{E}}{\d \lambda}\right|_{\lambda = 1} = - E_2 + E_4.
+ \]
+ Thus we must have $E_2 = E_4$.
+\end{proof}
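+
+The virial relation can also be read as fixing the natural size of a soliton: for a trial configuration with energies $E_2, E_4$ at unit scale, $\tilde{E}(\lambda) = E_2/\lambda + \lambda E_4$ is minimized at $\lambda_* = \sqrt{E_2/E_4}$, where the two rescaled contributions balance. A numerical illustration (arbitrary stand-in values, not from the notes):
+
+```python
+import math
+
+E2, E4 = 3.0, 0.75  # arbitrary positive stand-ins
+E = lambda lam: E2 / lam + lam * E4
+
+lam_star = math.sqrt(E2 / E4)  # analytic minimiser
+lam_best = min((0.01 * k for k in range(1, 1001)), key=E)  # crude scan
+print(lam_star, lam_best)  # both close to 2.0
+```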
+
+We see that we must have a four-derivative term in order to stabilize the soliton. If we have the mass term as well, then the argument is slightly more complicated, and we get some more complicated relation between the energies.
+
+% Note that the Skyrmions, which we will find later, have a definite scale size of order $1$.
+
+\subsubsection*{Baryon number}
+Recall that our field is a function $U: \R^3 \to \SU(2)$. Since we have a boundary condition $U = \mathbf{1}$ at infinity, we can imagine compactifying $\R^3$ into $S^3$, where the point at infinity is sent to $\mathbf{1}$.
+
+On the other hand, we know that $\SU(2)$ is isomorphic to $S^3$. Geometrically, we think of the space and $\SU(2)$ as ``different'' $S^3$. We should think of $\SU(2)$ as the ``unit sphere'', and write it as $S^3_1$. However, we can think of the compactification of $\R^3$ as a sphere with ``infinite radius'', so we denote it as $S^3_\infty$. Of course, topologically, they are the same.
+
+So the field is a map
+\[
+ U: S^3_\infty \to S^3_1.
+\]
+This map has a degree. There are many ways we can think about the degree. For example, we can think of this as the homotopy class of this map, which is an element of $\pi_3(S^3) \cong \Z$. Equivalently, we can think of it as the map induced on the homology or cohomology of $S^3$.
+
+While $U$ evolves with time, because the degree is a discrete quantity, it has to be independent of time (alternatively, the degree of the map is homotopy invariant).
+
+There is an explicit integral formula for the degree. We will not derive this, but it is
+\[
+ B = - \frac{1}{24\pi^2} \int \varepsilon_{ijk} \Tr(R_i R_j R_k) \;\d^3 x.
+\]
+The normalization $24\pi^2 = 2\pi^2 \times 6 \times 2$ comes from the volume $2\pi^2$ of the unit three-sphere, a factor of $6$ from the anti-symmetrization over $i, j, k$, and a factor of $2$ from the trace of a product of Pauli matrices. We identify $B$ with the conserved, physical baryon number.
+
+If we were to derive this, then we would have to pull back a normalized volume form from $S_1^3$ and then integrate over all space. In this formula, we chose to use the $\SO(4)$-invariant volume form, but in general, we can pull back any normalized volume form.
+
+Locally, near $\sigma = 1$, this volume form is actually
+\[
+ \frac{1}{2\pi^2}\;\d \pi^1 \wedge\d \pi^2 \wedge\d \pi^3.
+\]
+\subsubsection*{Faddeev--Bogomolny bound}
+There is a nice result analogous to the Bogomolny energy bound we saw for kinks and vortices, known as the \term{Faddeev--Bogomolny bound}. We can write the static energy as
+\[
+ E = \int - \frac{1}{2} \Tr\left(\left(R_i \mp \frac{1}{4} \varepsilon_{ijk}[R_j, R_k]\right)^2\right)\;\d^3 x \pm 12 \pi^2 B.
+\]
+This bound is true for both sign choices. However, to get the strongest result, we should choose the sign such that $\pm 12 \pi^2 B > 0$. Then we find
+\[
+ E \geq 12 \pi^2 |B|.
+\]
+By symmetry, it suffices to consider the case $B > 0$, which is what we will do.
+
+The Bogomolny equation for $B > 0$ should be
+\[
+ R_i - \frac{1}{4} \varepsilon_{ijk} [R_j, R_k] = 0.
+\]
+However, it turns out this equation has \emph{no} non-vacuum solution.
+
+Roughly, the argument goes as follows --- by careful inspection, for this to vanish, whenever $R_i$ is non-zero, the three vectors $R_1, R_2, R_3$ must form an orthonormal frame in $\su(2)$. So $U$ must be an isometry. But this isn't possible, because the spheres have ``different radii''.
+
+Therefore, true Skyrmions with $B > 0$ satisfy
+\[
+ E > 12 \pi^2 B.
+\]
+We get a lower bound, but the actual energy is always greater than this lower bound. It is quite interesting to look at the energies of true solutions numerically, and their energy is indeed bigger.
+
+\subsection{Skyrmion solutions}
+The simplest Skyrmion solution has baryon number $B = 1$. We will continue to set $m = 0$.
+\subsubsection*{$B = 1$ hedgehog Skyrmion}
+Consider the spherically symmetric function
+\[
+ U(\mathbf{x}) = \cos f(r) \mathbf{1} + i \sin f(r) \hat{\mathbf{x}} \cdot \boldsymbol\tau.
+\]
+This is manifestly in $\SU(2)$, because $\cos^2 f + \sin^2 f = 1$. This is known as a hedgehog, because the unit pion field is $\hat{\mathbf{x}}$, which points radially outwards. We need some boundary conditions. We need $U \to \mathbf{1}$ at $\infty$. On the other hand, we will see that we need $U \to -\mathbf{1}$ at the origin to get baryon number $1$. So $f \to \pi$ as $r \to 0$, and $f \to 0$ as $r \to \infty$. So it looks roughly like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$r$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$f$};
+
+ \draw [mblue, thick] (0, 2) .. controls (1, 1) and (3, 0.2) .. (5, 0.2);
+ \node [circ] at (0, 2) {};
+ \node [left] at (0, 2) {$\pi$};
+ \end{tikzpicture}
+\end{center}
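The claim that this ansatz is manifestly in $\SU(2)$ can be verified symbolically. The following is a quick sketch using Python's sympy, writing $\hat{\mathbf{x}}$ in spherical angles, which checks $U U^\dagger = \mathbf{1}$ and $\det U = 1$:

```python
import sympy as sp

# Pauli matrices tau_1, tau_2, tau_3
tau1 = sp.Matrix([[0, 1], [1, 0]])
tau2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
tau3 = sp.Matrix([[1, 0], [0, -1]])

f, th, ph = sp.symbols('f theta phi', real=True)

# unit radial vector x-hat in spherical angles
n1, n2, n3 = sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)

# hedgehog ansatz U = cos f 1 + i sin f (x-hat . tau)
U = sp.cos(f)*sp.eye(2) + sp.I*sp.sin(f)*(n1*tau1 + n2*tau2 + n3*tau3)

unitary_defect = sp.simplify(U*U.H - sp.eye(2))   # should be the zero matrix
det_U = sp.simplify(U.det())                      # should be 1
```

Both checks reduce to $\cos^2 f + \sin^2 f = 1$ together with $\hat{\mathbf{x}} \cdot \hat{\mathbf{x}} = 1$, exactly as the text says.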
+After some hard work, we find that the energy is given by
+\[
+ E = 4\pi \int_0^\infty \left(f'^2 + \frac{2 \sin^2 f}{r^2}(1 + f'^2) + \frac{\sin^4 f}{r^4}\right) r^2\;\d r.
+\]
+From this, we can obtain a second-order ordinary differential equation in $f$, which is not simple. Solutions have to be found numerically. This is a sad truth about Skyrmions. Even in the simplest $B = 1, m = 0$ case, we don't have an analytic expression for what $f$ looks like. Numerically, the energy is given by
+\[
+ E = 1.232 \times 12\pi^2.
+\]
+To compute the baryon number of this solution, we plug our solution into the formula, and obtain
+\[
+ B = - \frac{1}{2\pi^2} \int_0^\infty \frac{\sin^2 f}{r^2} \frac{\d f}{\d r} \cdot 4\pi r^2 \;\d r.
+\]
+We can interpret $\frac{\d f}{\d r}$ as the radial contribution to $B$, while there are two factors of $\frac{\sin f}{r}$ coming from the angular contribution due to the $i \sin f(r) \hat{\mathbf{x}} \cdot \boldsymbol\tau$ term.
+
+But the integrand is just an exact differential, so the integral simplifies to
+\[
+ B = \frac{1}{\pi} \int_0^\pi 2 \sin^2 f \;\d f.
+\]
+Note that we have lost a sign, because of the change of limits. We can integrate this directly, and just get
+\[
+ B = \frac{1}{\pi}\left(f - \frac{1}{2} \sin 2f\right)^\pi_0 = 1,
+\]
+as promised.
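The final integral is elementary, and is easy to confirm symbolically (plain sympy, nothing Skyrme-specific):

```python
import sympy as sp

f = sp.symbols('f', real=True)

# antiderivative of 2 sin^2 f is f - (1/2) sin 2f, up to a constant
F = sp.integrate(2*sp.sin(f)**2, f)

# B = (1/pi) * integral of 2 sin^2 f from 0 to pi
B = sp.integrate(2*sp.sin(f)**2, (f, 0, sp.pi))/sp.pi
```

The definite integral evaluates to $\pi$, so $B = 1$ as claimed.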
+
+Intuitively, we see that in this integral, $f$ goes from $0$ to $\pi$, and we can think of this as the field $U$ wrapping around the sphere $S^3$ once.
+
+\subsubsection*{More hedgehogs}
+We can consider a more general hedgehog with the same ansatz, but with the boundary conditions
+\[
+ f(0) = n\pi,\quad f(\infty) = 0.
+\]
+In this case, the same computation gives us $B = n$. So in principle, this gives a Skyrmion of any baryon number. The solutions do exist. However, it turns out they have extremely high energy, and are nowhere near minimizing the energy. In fact, the energy increases much faster than $n$ itself, because the Skyrmion ``onion'' structure highly distorts each $B = 1$ Skyrmion. Unsurprisingly, these solutions are unstable.
+
+This is not what we want in hadronic physics, where we expect the energy to scale approximately linearly with $n$. In fact, since baryons attract, we expect the solution for $B = n$ to have less energy than $n$ times the $B = 1$ energy.
+
+We can easily get energies approximately $n$ times the $B = 1$ energy simply by having very separated baryons, and then since they attract, when they move towards each other, we get even lower energies.
+
+% Finding Skyrmions is hard. Up to today, we only have explicit Skyrmions up to $B \approx 25$, and we don't know how the picture looks like in general.
+
+\subsubsection*{A better strategy --- rational map approximation}
+So far, we have been looking at solutions that depend very simply on angle. This means all the ``winding'' happens in the radial direction. In fact, it is a better idea to wind more in the \emph{angular} direction.
+
+In the case of the $B = 1$ hedgehog, the field looks roughly like this:
+\begin{center}
+ \begin{tikzpicture}
+ \fill [morange, opacity=0.25] (0, -2) rectangle (4, 2);
+
+ \foreach \r in {0.5, 1, 1.5} {
+ \fill [morange, opacity=0.3] (2, 0) circle [radius=\r];
+ }
+ \foreach \r in {0.5, 1, 1.5} {
+ \draw [opacity=0.9](2, 0) circle [radius=\r];
+ }
+ \draw [thick] (0, -2) rectangle (4, 2);
+
+ \node [right] at (4, 1.8) {$\infty$};
+
+ \draw (9, 0) circle [radius=2];
+
+ \fill [morange, opacity=0.25] (9, 0) circle [radius=2];
+
+ \foreach \y in {-1.3, -0.2, 0.9} {
+ \pgfmathsetmacro\len{sqrt(4 - (\y)^2)};
+ \pgfmathsetmacro\ht{\len * 0.1};
+ \pgfmathsetmacro\x{\len + 9};
+ \pgfmathsetmacro\sang{270 - acos(-\y/2)};
+ \pgfmathsetmacro\eang{270 + acos(-\y/2)};
+
+ \fill [morange, opacity=0.05] (\x, \y) arc(0:180:{\len} and \ht) arc(\sang:\eang:2);
+ \fill [morange, opacity=0.25] (\x, \y) arc(0:-180:{\len} and \ht) arc(\sang:\eang:2);
+ }
+ \foreach \y in {-1.3, -0.2, 0.9} {
+ \pgfmathsetmacro\len{sqrt(4 - (\y)^2)};
+ \pgfmathsetmacro\ht{\len * 0.1};
+ \pgfmathsetmacro\x{\len + 9};
+
+ \draw [dashed, opacity=0.9] (\x, \y) arc(0:180:{\len} and \ht);
+ \draw [opacity=0.9] (\x, \y) arc(0:-180:{\len} and \ht);
+ }
+
+ \node [circ] at (2, 0) {};
+ \draw [opacity=0.5] (2, 0) edge [bend right, ->] (9, -2);
+ \draw [opacity=0.5] (2.5, 0) edge [bend right=20, ->] (8.8, -1.44);
+ \draw [opacity=0.5] (3, 0) edge [bend right=10, ->] (8.6, -0.39);
+ \draw [opacity=0.5] (3.5, 0) edge [bend right=5, ->] (8.6, 0.75);
+
+ \draw [opacity=0.5] (4, 1.4) edge [bend left=10, ->] (9, 2);
+
+ \node [circ] at (9, 2) {};
+ \node [above] at (9, 2) {$\sigma = 1$};
+ \node [circ] at (9, -2) {};
+ \node [below] at (9, -2) {$\sigma = -1$};
+ \end{tikzpicture}
+\end{center}
+
+In our $B > 1$ spherically symmetric hedgehogs, we wrapped radially around the sphere many times, and it turns out that was not a good idea.
+
+It is better to use a similar radial configuration as the $B = 1$ hedgehog, but introduce more angular twists. We can think of the above $B = 1$ solution as follows --- we slice up our domain $\R^3$ (or rather, $S^3$ since we include the point at infinity) into $2$-spheres of constant radius, say
+\[
+ S^3 = \bigcup_{r \in [0, \infty]} S_r^2.
+\]
+On the other hand, we can also slice up the $S^3$ in the codomain into constant $\sigma$ levels, which are also $2$-spheres:
+\[
+ S^3 = \bigcup_{\sigma} S_\sigma^2.
+\]
+Then the function $f(r)$ we had tells us we should map the $2$-sphere $S_r^2$ into the $2$-sphere $S_{\cos f(r)}^2$. What we did, essentially, was to map $S_r^2$ to $S_{\cos f(r)}^2$ via the ``identity map''. This gave a spherically symmetric hedgehog solution.
+
+But we don't have to do this! Pick any function $R: S^2 \to S^2$. Then we can construct the map $\Sigma_{\cos f} R$ that sends $S_r^2$ to $S_{\cos f(r)}^2$ via the map $R$. For simplicity, we will use the same $R$ for all $r$. If we did this, then we obtain a non-trivial map $\Sigma_{\cos f} R: S^3 \to S^3$.
+
+Since $R$ itself is a map from a sphere to a sphere (but one dimension lower), $R$ also has a degree. It turns out this degree is the same as the degree of the induced map $\Sigma_{\cos f} R$! So to produce higher baryon number hedgehogs, we simply have to find maps $R: S^2 \to S^2$ of higher degree.
+
+Fortunately, this is easier than maps between $3$-spheres, because a $2$-sphere is a Riemann surface. We can then use complex coordinates to work on $2$-spheres. By complex analysis, any complex holomorphic map between $2$-spheres is given by a \term{rational map}.
+
+Pick any rational function $R_k(z)$ of degree $k$. This is a map $S^2 \to S^2$. We use coordinates $r, z$, where $r \in \R^+$ and $z \in \C_\infty = \C \cup \{\infty\} \cong S^2$. Then we can look at generalized hedgehogs
+\[
+ U(\mathbf{x}) = \cos f(r) \mathbf{1} + i \sin f(r) \hat{\mathbf{n}}_{R_k(z)} \cdot \boldsymbol\tau,
+\]
+with $f(0) = \pi$, $f(\infty) = 0$, and $\hat{\mathbf{n}}_{R_k}$ is the normalized pion field $\hat{\boldsymbol\pi}$, given by the unit vector obtained from $R_k(z)$ if we view $S^2$ as a subset of $\R^3$ in the usual way. Explicitly,
+\[
+ \hat{\mathbf{n}}_R = \frac{1}{1 + |R|^2} (\bar{R} + R, i (\bar{R} - R), 1 - |R|^2).
+\]
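To make this concrete, here is a small numerical check (using numpy; the sample points and test angles are arbitrary choices) that $\hat{\mathbf{n}}_R$ is a unit vector for any $R$, and that $R_1(z) = z$, with the standard identification $z = \tan(\theta/2)\, e^{i\phi}$ of the domain sphere, reproduces the radial direction $\hat{\mathbf{x}}$ of the hedgehog:

```python
import numpy as np

def n_hat(R):
    """Unit pion vector on S^2 from the stereographic coordinate R."""
    R = complex(R)
    d = 1 + abs(R)**2
    # (R-bar + R, i(R-bar - R), 1 - |R|^2) / (1 + |R|^2)
    return np.array([2*R.real, 2*R.imag, 1 - abs(R)**2]) / d

# n_hat lands on the unit sphere for any R
samples = [0, 0.3 + 0.7j, -2.1j, 5.0]
norms = [np.linalg.norm(n_hat(R)) for R in samples]

# R_1(z) = z gives the hedgehog direction x-hat
th, ph = 0.8, 1.3   # arbitrary test angles
z = np.tan(th/2)*np.exp(1j*ph)
xhat = np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])
```

Note also that $R = 0$ gives $\hat{\mathbf{n}} = (0, 0, 1)$, the north pole, consistent with the usual embedding of $S^2$ in $\R^3$.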
+This construction is in some sense a separation of variables, where we separate the radial and angular dependence of the field.
+
+Note that even if we find a minimum among this class of fields, it is not a true minimum energy Skyrmion. However, it gets quite close, and is much better than our previous attempt.
+
+There is quite a lot of freedom in this construction, since we are free to pick $f(r)$, as well as the rational function $R_k(z)$. The geometric degree $k$ of $R_k(z)$ we care about is the same as the algebraic degree\index{degree!algebraic}\index{degree!rational function} of $R_k(z)$. Precisely, if we write
+\[
+ R_k(z) = \frac{p_k(z)}{q_k(z)},
+\]
+where $p_k$ and $q_k$ are coprime, then the algebraic degree of $R_k$ is the maximum of the degrees of $p_k$ and $q_k$ as polynomials. Since there are finitely many coefficients for $p_k$ and $q_k$, this is a finite-dimensional problem, which is much easier than solving for arbitrary functions. We will talk more about the degree later.
+
+Numerically, we find that minimal energy fields are obtained with
+\begin{align*}
+ R_1(z) &= z & R_2(z) &= z^2\\
+ R_3(z) &= \frac{\sqrt{3}i z^2 - 1}{z^3 - \sqrt{3}i z} & R_4(z) &= \frac{z^4 + 2\sqrt{3} i z^2 + 1}{z^4 - 2\sqrt{3} i z^2 + 1}.
+\end{align*}
+The true minimal energy Skyrmions have also been found numerically, and are very similar to the optimal rational map fields. In fact, the search for the true minima often starts from a rational map field.
+\begin{center}
+ \includegraphics[width=\textwidth]{images/B1-8-Skyrmions.pdf}
+
+ {Constant energy density surfaces of Skyrmions up to baryon number 8 \\(for $m = 0$), by R.\ A.\ Battye and P.\ M.\ Sutcliffe}
+\end{center}
+
+We observe that
+\begin{itemize}
+ \item for $B = 1$, we recover the hedgehog solution.
+ \item for $B = 2$, our solution has an axial symmetry.
+ \item for $B = 3$, the solution might seem rather strange, but it is in fact the unique solution in degree $3$ with \emph{tetrahedral} symmetry.
+ \item for $B = 4$, the solution has cubic symmetry.
+\end{itemize}
+In each case, these are the unique rational maps with such high symmetry. It turns out, even though these are not the exact Skyrmion solutions, the exact solutions enjoy the same symmetries.
+
+The function $f$ can be found numerically, and depends on $B$.
+
+Geometrically, what we are doing is that we are viewing $S^3$ as the \emph{suspension} $\Sigma S^2$, and our construction of $\Sigma_{\cos f} R$ from $R$ is just the suspension of maps. The fact that degree is preserved by suspension can be viewed as an example of the fact that homology is \emph{stable}.
+
+\subsubsection*{More on rational maps}
+Why does the algebraic degree of $R_k$ agree with the geometric degree? One way of characterizing the geometric degree is by counting pre-images. Consider a generic point $c$ in the target $2$-sphere, and consider the equation
+\[
+ R_k(z) = \frac{p_k(z)}{q_k(z)} = c
+\]
+for a generic $c$. We can rearrange this to say
+\[
+ p_k - c q_k = 0.
+\]
+For a \emph{generic} $c$, the $z^k$ terms do not cancel, so this is a polynomial equation of degree $k$. Also, generically, this equation doesn't have repeated roots, so has exactly $k$ solutions. So the number of points in the pre-image of a generic $c$ is $k$.
+
+Because $R_k$ is a holomorphic map, it is automatically orientation preserving (and in fact conformal). So each of these $k$ points contributes $+1$ to the degree.
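We can see this preimage count numerically. The sketch below (numpy; the target point $c$ is an arbitrary generic choice) finds the preimages of $c$ under the degree-$3$ map $R_3$ listed earlier by solving $p_3 - c\, q_3 = 0$:

```python
import numpy as np

# R_3(z) = (sqrt(3) i z^2 - 1) / (z^3 - sqrt(3) i z), the degree-3 map from above
p = np.array([0, np.sqrt(3)*1j, 0, -1])   # numerator coefficients, highest power first
q = np.array([1, 0, -np.sqrt(3)*1j, 0])   # denominator coefficients

c = 0.4 - 0.9j   # an arbitrary generic target point

# preimages of c are the roots of p(z) - c q(z) = 0
roots = np.roots(p - c*q)

# each preimage should map back to c under R_3
values = np.polyval(p, roots) / np.polyval(q, roots)
```

For a generic $c$, the leading $z^3$ coefficient $-c$ does not vanish, so there are exactly $3$ preimages, as the counting argument predicts.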
+
+In the pictures above, we saw that the Skyrmions have some ``hollow polyhedral'' structures. How can we understand these?
+
+The holes turn out to be zeroes of the baryon density, and are where energy density is small. At the center of the holes, the angular derivatives of the Skyrme field $U$ are zero, but the radial derivative is not. % This explains both these points. Using expressions for the baryon density - baryon density is product of two things while energy is sum of squares?
+
+We can find these holes precisely in the rational map approximation. This allows us to find the symmetry of the system. They occur where the derivative $\frac{\d R_k}{\d z} = 0$. Since $R = \frac{p}{q}$, we can rewrite this requirement as
+\[
+ W(z) = p'(z) q(z) - q'(z) p(z) = 0.
+\]
+$W(z)$ is known as the \term{Wronskian}.
+
+A quick algebraic manipulation shows that $W$ has degree at most $2k - 2$, and generically, it is indeed $2k - 2$. This degree is the number of holes in the Skyrmion.
+
+We can look at the pictures of our examples to see that this is indeed the case.
+
+\begin{eg}
+ For
+ \[
+ R_4(z) = \frac{z^4 + 2\sqrt{3} i z^2 + 1}{z^4 - 2\sqrt{3} i z^2 + 1},
+ \]
+ the Wronskian is
+ \[
+ W(z) = (4z^3 + 4 \sqrt{3}i z)(z^4 - 2\sqrt{3} i z^2 + 1) - (z^4 + 2 \sqrt{3} i z^2 + 1) (4z^3 - 4 \sqrt{3} i z).
+ \]
+ The highest degree $z^7$ terms cancel. But there isn't any $z^6$ term anywhere either. Thus $W$ turns out to be a degree $5$ polynomial. It is given by
+ \[
+ W(z) = - 8 \sqrt{3}i (z^5 - z).
+ \]
+ We can easily list the roots --- they are $z = 0, 1, i, -1, -i$.
+
+ Generically, we expect there to be $6$ roots. It turns out the Wronskian has a zero at $\infty$ as well. To see this more rigorously, we can rotate the Riemann sphere a bit by a M\"obius map, and then see there are $6$ finite roots. Looking back at our previous picture, there are indeed $6$ holes when $B = 4$.
+\end{eg}
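The Wronskian computation in the example is easy to reproduce with sympy:

```python
import sympy as sp

z = sp.symbols('z')

# R_4 = p/q from the example
p = z**4 + 2*sp.sqrt(3)*sp.I*z**2 + 1
q = z**4 - 2*sp.sqrt(3)*sp.I*z**2 + 1

# Wronskian W = p' q - q' p; the z^7 and z^6 terms cancel
W = sp.expand(sp.diff(p, z)*q - sp.diff(q, z)*p)

# finite roots: 0, 1, -1, i, -i, each simple
finite_roots = sp.roots(W, z)
```

This confirms $W(z) = -8\sqrt{3} i (z^5 - z)$, of degree $5$ rather than the generic $2k - 2 = 6$, with the sixth zero at $\infty$.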
+
+% How about energy? We can plot the energies of the energies of the Skyrmions as the baryon number increases:
+
+% insert plot
+
+It turns out that although the rational map approximation is a good way to find solutions for $B$ up to $\approx 20$, the resulting Skyrmions are hollow, with $U = -1$ at the center. This is not a good model for larger nuclei, especially when we introduce non-zero pion mass.
+
+For $m \approx 1$, there are better, less hollow Skyrmions when $B \geq 8$.
+
+\subsection{Other Skyrmion structures}
+There are other ways of trying to get Skyrmion solutions.
+
+\subsubsection*{Product Ansatz}
+Suppose $U_1(\mathbf{x})$ and $U_2(\mathbf{x})$ are Skyrmions of baryon numbers $B_1$ and $B_2$. Since the target space is a group $\SU(2)$, we can take the product
+\[
+ U(\mathbf{x}) = U_1(\mathbf{x}) U_2(\mathbf{x}).
+\]
+Then the baryon number is $B = B_1 + B_2$. To see this, we can consider the product when $U_1$ and $U_2$ are well-separated, i.e.\ consider $U_1(\mathbf{x} - \mathbf{a}) U_2(\mathbf{x})$ with $|\mathbf{a}|$ large. Then we can see the baryon number easily because the baryons are well-separated. We can then vary $\mathbf{a}$ continuously to $0$, and $B$ doesn't change as we make this continuous deformation. So we are done. Alternatively, this follows from an Eckmann--Hilton argument.
+
+This can help us find Skyrmions with baryon number $B$ starting with $B$ well-separated $B = 1$ hedgehogs. Of course, this will not be energy-minimizing, but we can numerically improve the field by letting the separation vary.
+
+It turns out this is not a good way to find Skyrmions. In general, it doesn't give good approximations to the Skyrmion solutions. They tend to lack the desired symmetry, and this boils down to the problem that the product is not commutative, i.e.\ $U_1 U_2 \not= U_2 U_1$. Thus, we cannot expect to be able to approximate symmetric things with a product ansatz.
+
+The product ansatz can also be used for several $B = 4$ subunits to construct configurations with baryon number $4n$ for $n \in \Z$. For example, the following is a $B = 31$ Skyrmion:
+\begin{center}
+ \includegraphics[clip, width=0.3\textwidth, trim=4cm 4cm 4cm 4cm]{images/B-31-Skyrmion.pdf}
+
+ B = 31 Skyrmion by P.\ H.\ C.\ Lau and N.\ S.\ Manton
+\end{center}
+This is obtained by putting eight $B = 4$ Skyrmions side by side, and then cutting off a corner.
+
+This strategy tends to work quite well. With this idea, we can in fact find Skyrmion solutions with baryon number infinity! We can form an infinite cubic crystal out of $B = 4$ subunits. For $m = 0$, the energy per baryon is $\approx 1.038 \times 12 \pi^2$. This is very close to the lower bound!
+
+We can also do other interesting things. In the picture below, on the left, we have a usual $B = 7$ Skyrmion. On the right, we have deformed the Skyrmion into what looks like a $B = 4$ Skyrmion and a $B = 3$ Skyrmion. This is a cluster structure, and it turns out this deformation doesn't cost a lot of energy. This two-cluster system can be used as a model of the lithium-7 nucleus.
+\begin{center}
+ \includegraphics[clip, width=0.4\textwidth]{images/B-7-Cluster.pdf}
+
+ B = 7 Skyrmions by C.\ J.\ Halcrow
+\end{center}
+
+\subsection{Asymptotic field and forces for \texorpdfstring{$B = 1$}{B = 1} hedgehogs}
+We now consider what happens when we put different $B = 1$ hedgehogs next to each other. To understand this, we look at the profile function $f$, for $m = 0$. For large $r$, this has the asymptotic form
+\[
+ f(r) \sim \frac{C}{r^2}.
+\]
+To obtain this, we linearize the differential equation for $f$ and see how it behaves as $r \to \infty$ and $f \to 0$. The linearized equation doesn't determine the coefficient $C$, but the full equation and boundary condition at $r = 0$ do. This has to be worked out numerically, and we find that $C \approx 2.16$.
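We can check the asymptotic form directly. Assuming the large-$r$ linearization of the profile equation is the Euler equation $r^2 f'' + 2 r f' - 2f = 0$ (dropping all terms beyond linear order in $f$), a short sympy computation exhibits the indicial exponents $1$ and $-2$, so the decaying solution is $f = C/r^2$:

```python
import sympy as sp

r, C = sp.symbols('r C', positive=True)
n = sp.symbols('n')

# substituting f = r^n into r^2 f'' + 2 r f' - 2 f = 0 gives the indicial equation
indicial = sp.expand((r**2*sp.diff(r**n, r, 2) + 2*r*sp.diff(r**n, r) - 2*r**n)/r**n)
exponents = sp.solve(indicial, n)   # the growing and decaying powers

# the decaying solution f = C/r^2 satisfies the linearized equation exactly
f = C/r**2
residual = sp.simplify(r**2*sp.diff(f, r, 2) + 2*r*sp.diff(f, r) - 2*f)
```

The indicial equation is $n^2 + n - 2 = 0$, with roots $1$ and $-2$; only the $r^{-2}$ branch is compatible with $f \to 0$ at infinity.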
+
+Thus, since $\sigma \sim 1$ at large $r$, we find $\boldsymbol\pi \sim C \frac{\mathbf{x}}{r^3}$. So the $B = 1$ hedgehog asymptotically looks like three pion dipoles. Each pion field itself has an axis, but because we have three of them, the whole solution is spherically symmetric.
+
+We can roughly sketch the Skyrmion as
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue] (-0.949, 0) -- (0.949, 0) node [pos=-0.1] {$-$} node [pos=1.1] {$+$};
+ \draw [mgreen] (0, -0.949) -- (0, 0.949) node [pos=-0.1] {$-$} node [pos=1.1] {$+$};
+ \draw [mred] (-0.9, -0.3) -- (0.9,0.3) node [pos=-0.1] {$-$} node [pos=1.1] {$+$};
+ \end{tikzpicture}
+\end{center}
+Note that unlike in electromagnetism, scalar dipoles attract if oppositely oriented. This is because the fields have low gradient. So the lowest energy arrangement of two $B = 1$ Skyrmions while they are separated is
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue] (-0.949, 0) -- (0.949, 0) node [pos=-0.1] {$-$} node [pos=1.1] {$+$};
+ \draw [mgreen] (0, -0.949) -- (0, 0.949) node [pos=-0.1] {$-$} node [pos=1.1] {$+$};
+ \draw [mred] (-0.9, -0.3) -- (0.9,0.3) node [pos=-0.1] {$-$} node [pos=1.1] {$+$};
+
+ \begin{scope}[shift={(5, 0)}]
+ \draw [mblue] (-0.949, 0) -- (0.949, 0) node [pos=-0.1] {$+$} node [pos=1.1] {$-$};
+ \draw [mgreen] (0, -0.949) -- (0, 0.949) node [pos=-0.1] {$-$} node [pos=1.1] {$+$};
+ \draw [mred] (-0.9, -0.3) -- (0.9,0.3) node [pos=-0.1] {$+$} node [pos=1.1] {$-$};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+The right-hand Skyrmion is rotated by $180^\circ$ about a line perpendicular to the line separating the Skyrmions.
+
+These two Skyrmions attract! So two Skyrmions in this ``attractive channel'' can merge to form the $B = 2$ torus, which is the true minimal energy solution.
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mgreen] (0, -0.3) -- (0, 1.5) node [above] {$+$};
+ \draw [mgreen] (0, -1.5) node [below] {$-$} -- (0, -1.1);
+ \draw [mgreen, opacity=0.3] (0, -0.3) -- (0, -1.1);
+
+ \draw (0,0) ellipse (2 and 1.12);
+ \path[rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0) (-.9,0)--(0,-.56)--(.9,0);
+ \draw[rounded corners=28pt] (-1.1,.1)--(0,-.6)--(1.1,.1);
+ \draw[rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0);
+
+ \node [mblue] at (0, 0.7) {$\boldsymbol+$};
+ \node [mblue] at (0, -0.7) {$\boldsymbol+$};
+ \node [mblue] at (1.5, 0) {$\boldsymbol-$};
+ \node [mblue] at (-1.5, 0) {$\boldsymbol-$};
+
+ \node [mred] at (0.9, 0.5) {$\boldsymbol-$};
+ \node [mred] at (-0.9, -0.5) {$\boldsymbol-$};
+ \node [mred] at (0.9, -0.5) {$\boldsymbol+$};
+ \node [mred] at (-0.9, 0.5) {$\boldsymbol+$};
+ \end{tikzpicture}
+\end{center}
+
+The blue and the red fields have no net dipole, even before they merge. There is only a quadrupole. However, the field has a strong net green dipole. The whole field has toroidal symmetry, and these symmetries are important if we want to think about quantum states and the possible spins these kinds of Skyrmions could have.
+
+For $B = 4$ fields, we can begin with the arrangement
+\begin{center}
+ \begin{tikzpicture}
+ \draw [gray, opacity=0.5] (0, 0) rectangle (2, 2);
+ \draw [gray, opacity=0.5] (2, 0) -- (2.9, 0.3) -- (2.9, 2.3) -- (2, 2);
+ \draw [gray, opacity=0.5] (0, 2) -- (0.9, 2.3) -- (2.9, 2.3);
+ \draw [dashed, gray, opacity=0.5] (0, 0) -- (0.9, 0.3) -- (2.9, 0.3);
+ \draw [dashed, gray, opacity=0.5] (0.9, 0.3) -- (0.9, 2.3);
+
+% \draw [mblue] (-0.2, 0) node [left] {\tiny$-$} -- (0.2, 0) node [right] {\tiny$+$};
+% \draw [mgreen] (0, -0.2) node [below] {\tiny$-$} -- (0, 0.2) node [above] {\tiny$+$};
+% \draw [mred] (-0.15, -0.05) node [left] {\tiny$-$} -- (0.15,0.05) node [right] {\tiny$+$};
+%
+% \begin{scope}[shift={(0.9, 2.3)}]
+% \draw [mblue] (-0.2, 0) node [left] {\tiny$-$} -- (0.2, 0) node [right] {\tiny$+$};
+% \draw [mgreen] (0, -0.2) node [below] {\tiny$+$} -- (0, 0.2) node [above] {\tiny$-$};
+% \draw [mred] (-0.15, -0.05) node [left] {\tiny$+$} -- (0.15,0.05) node [right] {\tiny$-$};
+% \end{scope}
+%
+% \begin{scope}[shift={(2, 2)}]
+% \draw [mblue] (-0.2, 0) node [left] {\tiny$+$} -- (0.2, 0) node [right] {\tiny$-$};
+% \draw [mgreen] (0, -0.2) node [below] {\tiny$+$} -- (0, 0.2) node [above] {\tiny$-$};
+% \draw [mred] (-0.15, -0.05) node [left] {\tiny$-$} -- (0.15,0.05) node [right] {\tiny$+$};
+% \end{scope}
+%
+% \begin{scope}[shift={(2.9, 0.3)}]
+% \draw [mblue] (-0.2, 0) node [left] {\tiny$+$} -- (0.2, 0) node [right] {\tiny$-$};
+% \draw [mgreen] (0, -0.2) node [below] {\tiny$-$} -- (0, 0.2) node [above] {\tiny$+$};
+% \draw [mred] (-0.15, -0.07) node [left] {\tiny$+$} -- (0.15,0.05) node [right] {\tiny$-$};
+% \end{scope}
+ \begin{scope}[scale=0.3]
+ \draw [mblue] (-0.949, 0) -- (0.949, 0) node [pos=-0.2] {\tiny$-$} node [pos=1.2] {\tiny$+$};
+ \draw [mgreen] (0, -0.949) -- (0, 0.949) node [pos=-0.2] {\tiny$-$} node [pos=1.2] {\tiny$+$};
+ \draw [mred] (-0.9, -0.3) -- (0.9,0.3) node [pos=-0.2] {\tiny$-$} node [pos=1.2] {\tiny$+$};
+ \end{scope}
+
+ \begin{scope}[shift={(0.9, 2.3)}, scale=0.3]
+ \draw [mblue] (-0.949, 0) -- (0.949, 0) node [pos=-0.2] {\tiny$-$} node [pos=1.2] {\tiny$+$};
+ \draw [mgreen] (0, -0.949) -- (0, 0.949) node [pos=-0.2] {\tiny$+$} node [pos=1.2] {\tiny$-$};
+ \draw [mred] (-0.9, -0.3) -- (0.9,0.3) node [pos=-0.2] {\tiny$+$} node [pos=1.2] {\tiny$-$};
+ \end{scope}
+
+ \begin{scope}[shift={(2, 2)}, scale=0.3]
+ \draw [mblue] (-0.949, 0) -- (0.949, 0) node [pos=-0.2] {\tiny$+$} node [pos=1.2] {\tiny$-$};
+ \draw [mgreen] (0, -0.949) -- (0, 0.949) node [pos=-0.2] {\tiny$+$} node [pos=1.2] {\tiny$-$};
+ \draw [mred] (-0.9, -0.3) -- (0.9,0.3) node [pos=-0.2] {\tiny$-$} node [pos=1.2] {\tiny$+$};
+ \end{scope}
+
+ \begin{scope}[shift={(2.9, 0.3)}, scale=0.3]
+ \draw [mblue] (-0.949, 0) -- (0.949, 0) node [pos=-0.2] {\tiny$+$} node [pos=1.2] {\tiny$-$};
+ \draw [mgreen] (0, -0.949) -- (0, 0.949) node [pos=-0.2] {\tiny$-$} node [pos=1.2] {\tiny$+$};
+ \draw [mred] (-0.9, -0.3) -- (0.9,0.3) node [pos=-0.2] {\tiny$+$} node [pos=1.2] {\tiny$-$};
+ \end{scope}
+
+ \end{tikzpicture}
+\end{center}
+To obtain the orientations, we begin with the bottom-left, and then obtain the others by rotating along the axis perpendicular to the faces of the cube shown here.
+
+This configuration only has a tetrahedral structure. However, the Skyrmions can merge to form a cubic $B = 4$ Skyrmion.
+
+\subsection{Fermionic quantization of the \texorpdfstring{$B = 1$}{B = 1} hedgehog}
+We begin by quantizing the $B = 1$ Skyrmion. The naive way to do this is to view the Skyrmion as a rigid body, and quantize it. However, if we do this, then we will find that the result must have integer spin. But this is not what we want, since we want Skyrmions to model protons and neutrons, which have half-integer spin. In other words, the naive quantization makes the Skyrmion bosonic.
+
+In general, if we want to take the Skyrme model as a low energy effective field theory of QCD with an odd number of colours, then we must require the $B = 1$ Skyrmion to be in a fermionic quantum state, with half-integer spin.
+
+As a field theory, the configuration space for a baryon number $B$ Skyrme field is $\Maps_B(\R^3 \to \SU(2))$, with appropriate boundary conditions. These are all topologically the same as $\Maps_0(\R^3 \to \SU(2))$, because if we fix a single element $U_0 \in \Maps_B(\R^3 \to \SU(2))$, then multiplication by $U_0$ gives us a homeomorphism between the two spaces. Since we imposed the vacuum boundary condition, this space is also the same as $\Maps_0(S^3 \to S^3)$.
+
+This space is \emph{not} simply connected. In fact, it has a first homotopy group of
+\[
+ \pi_1(\Maps_0(S^3 \to S^3)) = \pi_1(\Omega^3 S^3) = \pi_4(S^3) = \Z_2.
+\]
+Thus, $\Maps_0(S^3 \to S^3)$ has a universal double cover. In our theory, the wavefunctions on $\Maps(S^3 \to S^3)$ are not single-valued, but are well-defined functions on the double cover. The wavefunction $\Psi$ changes sign after going around a non-contractible loop in the configuration space. It can be shown that this is not just a choice, but required in a low-energy version of QCD.
+
+This has some basic consequences:
+\begin{enumerate}
+ \item If we rotate a $1$-Skyrmion by $2\pi$, then $\Psi$ changes sign.
+
+ \item $\Psi$ also changes sign when one exchanges two 1-Skyrmions (without rotating them in the process). This was shown by Finkelstein and Rubinstein.
+
+ This links spin with statistics. If we quantized Skyrmions as bosons, then both (i) and (ii) do not happen. Thus, in this theory, we obtain the spin-statistics theorem \emph{from topology}.
+ \item In general, if $B$ is odd, then rotation by $2\pi$ is a non-contractible loop, while if $B$ is even, then it is contractible. Thus, spin is half-integer if $B$ is odd, and integer if $B$ is even.
+
+ \item There is another feature of the Skyrme model. So far, our rotations are spatial rotations. We can also rotate the value of the pion field, i.e.\ rotate the target $3$-sphere. This is \term{isospin rotation}\index{isospin}. This behaves similarly to above. Thus, isospin is half-integer if $B$ is odd, and integer if $B$ is even.
+\end{enumerate}
+
+\subsection{Rigid body quantization}
+We now make the \emph{rigid body approximation}. We allow the Skyrmion to translate, rotate and isorotate rigidly, but do not allow it to deform. This is a low-energy approximation, and we have reduced the infinite dimensional space of configurations to a finite-dimensional space. The group acting is
+\[
+ (\mathrm{translation}) \times (\mathrm{rotation}) \times (\mathrm{isorotation}).
+\]
+The translation part is trivial. The Skyrmion just gets the ability to move, and thus gains momentum. So we are going to ignore it. The remaining group acting is $\SO(3) \times \SO(3)$. This is a bit subtle. We have $\pi_1(\SO(3)) = \Z_2$, so we would expect $\pi_1(\SO(3) \times \SO(3)) = (\Z_2)^2$. However, in the full theory, we only have a single $\Z_2$. So we need to identify a loop in the first $\SO(3)$ with a loop in the second $\SO(3)$.
+
+Our wavefunction is thus a function on some cover of $\SO(3) \times \SO(3)$. While $\SO(3) \times \SO(3)$ is the symmetry group of the full theory, for a particular classical Skyrmion solution, the orbit is smaller than the whole group. This is because the Skyrmion often is invariant under some subgroup of $\SO(3) \times \SO(3)$. For example, the $B = 3$ solution has a tetrahedral symmetry. Then we require our wavefunction to be invariant under this group up to a sign.
+
+For a single $\SO(3)$, we have a rigid-body wavefunction $\bket{J\; L_3\; J_3}$, where $J$ is the spin, $L_3$ is the third component of spin relative to body axes, and $J_3$ is the third component of spin relative to space axes. Since we have two copies of $\SO(3)$, our wavefunction can be represented by
+\[
+ \bket{J\; L_3\; J_3} \otimes \bket{I\; K_3\; I_3}.
+\]
+$I_3$ is the isospin we are physically familiar with. The values of $J_3$ and $I_3$ are not constrained, i.e.\ they take all the standard $(2J + 1)(2I + 1)$ values. Thus, we are going to suppress these labels.
+
+However, the symmetry of the Skyrmion places constraints on the body projections, and not all values of $J$ and $I$ are allowed.
+
+\begin{eg}
+ For the $B = 1$ hedgehog, there is a lot of symmetry. Given an axis $\hat{\mathbf{n}}$ and an angle $\alpha$, we know that classically, if we rotate and isorotate by the same axis and angle, then the wavefunction is unchanged. So we must have
+ \[
+ e^{i \alpha \hat{\mathbf{n}} \cdot \mathbf{L}} e^{i \alpha \hat{\mathbf{n}} \cdot \mathbf{K}} \bket{\Psi} = \bket{\Psi}.
+ \]
+ It follows, by considering $\alpha$ small, and all $\hat{\mathbf{n}}$, that
+ \[
+ (\mathbf{L} + \mathbf{K}) \bket{\Psi} = 0.
+ \]
+ So the ``grand spin'' $\mathbf{L} + \mathbf{K}$ must vanish. So $\mathbf{L} \cdot \mathbf{L} = \mathbf{K} \cdot \mathbf{K}$. Recall that $\mathbf{L} \cdot \mathbf{L} = \mathbf{J} \cdot \mathbf{J}$ and $\mathbf{K} \cdot \mathbf{K} = \mathbf{I} \cdot \mathbf{I}$. So it follows that we must have $J = I$.
+
+ Since $1$ is an odd number, for any axis $\hat{\mathbf{n}}$, we must also have
+ \[
+ e^{i 2\pi \hat{\mathbf{n}} \cdot \mathbf{L}} \bket{\Psi} = - \bket{\Psi}.
+ \]
+ So $I$ and $J$ must be half-integer.
+
+ Thus, the allowed states are
+ \[
+ J = I = \frac{n}{2}
+ \]
+ for some $n \in 1 + 2\Z$. If we work out the formula for the energy in terms of the spin, we see that it increases with $J$. It turns out the system is highly unstable if $n \geq 5$, and so $\frac{1}{2}$ and $\frac{3}{2}$ are the only physically allowed values.
+
+ The $J = I = \frac{1}{2}$ states correspond to $p$ and $n$, with spin $\frac{1}{2}$. The $J = I = \frac{3}{2}$ states correspond to the $\Delta^{++}, \Delta^+, \Delta^0$ and $\Delta^-$ baryons, with spin $\frac{3}{2}$.
+\end{eg}
+
+\begin{eg}
+ If $B = 2$, then we have a toroidal symmetry. This still has one continuous $\SO(2)$ symmetry. Our first constraint becomes
+ \[
+ e^{i \alpha L_3} e^{i 2\alpha K_3} \bket{\Psi} = \bket{\Psi}.
+ \]
+ Note that we have $2 \alpha$ instead of $\alpha$. If we look at our previous picture, we see that when we rotate the space around by $2\pi$, the pion field rotates by $4\pi$.
+
+ There is another discrete symmetry, given by turning the torus upside down. This gives
+ \[
+ e^{i \pi L_1} e^{i \pi K_1} \bket{\Psi} = -\bket{\Psi}.
+ \]
+ The sign is not obvious from what we've said so far, but it is correct. Since we have an even baryon number, the allowed states have integer spin and isospin.
+
+ States with isospin $0$ are most interesting, and have lowest energy. Then the $K$ operators act trivially, and we have
+ \[
+ e^{i \alpha L_3} \bket{\Psi} = \bket{\Psi},\quad e^{i \pi L_1} \bket{\Psi} =- \bket{\Psi}.
+ \]
+ We have reduced the problem to one involving only body-fixed spin projection, because the first equation tells us
+ \[
+ L_3 \bket{\Psi} = 0.
+ \]
+ Thus the allowed states are $\bket{J, L_3 = 0}$.
+
+ The second constraint requires $J$ to be odd. So the lowest energy states with zero isospin are $\bket{1, 0}$ and $\bket{3, 0}$. In particular, there are no spin $0$ states. The state $\bket{1, 0}$ represents the deuteron. This is a spin $1$, isospin $0$ bound state of $p$ and $n$. This is a success.
+
+ The $\bket{3, 0}$ states have too high energy to be stable, but there is some evidence for a spin $3$ dibaryon resonance that decays into two $\Delta$'s.
+
+ There is also a $2$-nucleon resonance with $I = 1$ and $J = 0$, but this is not a bound state. This is also allowed by the Skyrme model.
+\end{eg}
+
+\begin{eg}
+ For $B = 4$, we have cubic symmetry. The symmetry group is rather complicated, and imposes various constraints on our theory. But these constraints always involve $+$ signs on the right-hand side. The lowest allowed state is $\bket{0, 0} \otimes \bket{0, 0}$, which agrees with the $\alpha$-particle. The next $I = 0$ state is
+ \[
+ \left(\bket{4, 4} + \sqrt{\frac{14}{5}} \bket{4, 0} + \bket{4, -4}\right) \otimes \bket{0, 0}
+ \]
+ involving a combination of $L_3$ values. This is an excited state with spin $4$, and would have rather high energy. Unfortunately, we haven't seen this experimentally. It is, however, a success of the Skyrme model that there are no $J = 1, 2, 3$ states with isospin $0$.
+\end{eg}
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/advanced_quantum_field_theory.tex b/books/cam/III_L/advanced_quantum_field_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b99c3a46dfc9f37af98af412628b5b19381caa72
--- /dev/null
+++ b/books/cam/III_L/advanced_quantum_field_theory.tex
@@ -0,0 +1,5822 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2017}
+\def\nlecturer {D.\ B.\ Skinner}
+\def\ncourse {Advanced Quantum Field Theory}
+
+\input{header}
+
+\makeatletter
+\newcommand*{\textoverline}[1]{$\overline{\hbox{#1}}\m@th$}
+\makeatother
+\usepackage[compat=1.1.0]{tikz-feynman}
+\tikzfeynmanset{/tikzfeynman/momentum/arrow shorten = 0.3}
+\tikzfeynmanset{/tikzfeynman/warn luatex = false}
+
+\usepackage{pgfplots}
+
+\DeclareRobustCommand{\ghost}{\tikz[baseline=.1em,scale=.5]{
+ \draw [fill=yellow] (0, 0) -- (0, 0.45) arc (+180:0:0.3) -- (0.6, 0);
+ \draw [decorate, decoration={snake, amplitude=0.7, segment length=2.3}] (0.6, 0) -- (0, 0);
+ \fill[black] (0.18, 0.45) circle [radius=0.05];
+ \fill[black] (0.44, 0.45) circle [radius=0.05];
+}}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+Quantum Field Theory (QFT) provides the most profound description of Nature we currently possess. As well as being the basic theoretical framework for describing elementary particles and their interactions (excluding gravity), QFT also plays a major role in areas of physics and mathematics as diverse as string theory, condensed matter physics, topology and geometry, astrophysics and cosmology.
+
+This course builds on the Michaelmas Quantum Field Theory course, using techniques of path integrals and functional methods to study quantum gauge theories. Gauge Theories are a generalisation of electrodynamics and form the backbone of the Standard Model --- our best theory encompassing all particle physics. In a gauge theory, fields have an infinitely redundant description; we can transform the fields by a different element of a Lie Group at every point in space-time and yet still describe the same physics. Quantising a gauge theory requires us to eliminate this infinite redundancy. In the path integral approach, this is done using tools such as ghost fields and BRST symmetry. We discuss the construction of gauge theories and their most important observables, Wilson Loops. Time permitting, we will explore the possibility that a classical symmetry may be broken by quantum effects. Such anomalies have many important consequences, from constraints on interactions between matter and gauge fields, to the ability to actually render a QFT inconsistent.
+
+A further major component of the course is to study Renormalisation. Wilson's picture of Renormalisation is one of the deepest insights into QFT --- it explains why we can do physics at all! The essential point is that the physics we see depends on the scale at which we look. In QFT, this dependence is governed by evolution along the Renormalisation Group (RG) flow. The course explores renormalisation systematically, from the use of dimensional regularisation in perturbative loop integrals, to the difficulties inherent in trying to construct a quantum field theory of gravity. We discuss the various possible behaviours of a QFT under RG flow, showing in particular that the coupling constant of a non-Abelian gauge theory can effectively become small at high energies. Known as ``asymptotic freedom'', this phenomenon revolutionised our understanding of the strong interactions. We introduce the notion of an Effective Field Theory that describes the low energy limit of a more fundamental theory and helps parametrise possible departures from this low energy approximation. From a modern perspective, the Standard Model itself appears to be but an effective field theory.
+
+\subsubsection*{Pre-requisites}
+Knowledge of the Michaelmas term Quantum Field Theory course will be assumed. Familiarity with the course Symmetries, Fields and Particles would be very helpful.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+\subsection{What is quantum field theory}
+What is Quantum Field Theory? The first answer we might give is --- it's just a quantum version of a field theory (duh!). We know the world is described by fields, e.g.\ electromagnetic fields, and the world is quantum. So naturally we want to find a quantum version of field theory. Indeed, this is what drove the study of quantum field theory historically.
+
+But there are other things we can use quantum field theory for, and they are not so obviously fields. In a lot of condensed matter applications, we can use quantum field theory techniques to study, say, vibrations of a crystal, and its quanta are known as ``phonons''. More generally, we can use quantum field theory to study things like phase transitions (e.g.\ boiling water). This is not an obvious place to expect quantum field theory to show up, but it does. Some people even use quantum field theory techniques to study the spread of diseases in a population!
+
+We can also use quantum field theory to study problems in mathematics! QFT is used to study knot invariants, which are ways to assign labels to different types of knots to figure out if they are the same. Donaldson won a Fields Medal for showing that there are inequivalent differentiable structures on $\R^4$, and this used techniques coming from quantum Yang--Mills theory as well.
+
+These are not things we would traditionally think of as quantum field theory. In this course, we are not going to be calculating, say, Higgs corrections to $\gamma\gamma$-interactions. Instead, we try to understand better this machinery known as quantum field theory.
+
+\subsection{Building a quantum field theory}
+We now try to give a very brief outline of how one does quantum field theory. Roughly, we follow the steps below:
+\begin{enumerate}
+ \item We pick a space to represent our ``universe''. This will always be a manifold, but we usually impose additional structure on it.
+ \begin{itemize}
+ \item In particle physics, we often pick the manifold to be a $4$-dimensional pseudo-Riemannian manifold of signature $+---$. Usually, we in fact pick $(M, g) = (\R^4, \eta)$ where $\eta$ is the usual Minkowski metric.
+ \item In condensed matter physics, we often choose $(M, g) = (\R^3, \delta)$, where $\delta$ is the flat Euclidean metric.
+ \item In string theory, we have fields living on a Riemann surface $\Sigma$ (e.g.\ sphere, torus). Instead of specifying a metric, we only specify the conformal equivalence class $[g]$ of the metric, i.e.\ the metric up to a scalar factor.
+ \item In QFT for knots, we pick $M$ to be some oriented $3$-manifold, e.g.\ $S^3$, but with \emph{no} metric.
+ \item In this course, we will usually take $(M, g) = (\R^d, \delta)$ for some $d$, where $\delta$ is again the flat, Euclidean metric.
+
+ We might think this is a very sensible and easy choice, because we are all used to working in $(\R^d, \delta)$. However, mathematically, this space is non-compact, and it will lead to a lot of annoying things happening.
+ \end{itemize}
+ \item We pick some fields. The simplest choice is just a function $\phi: M \to \R$ or $\C$. This is a \term{scalar field}. Slightly more generally, we can also have $\phi: M \to N$ for some other manifold $N$. We call $N$ the \term{target space}.
+
+ For example, quantum mechanics is a quantum field theory. Here we choose $M$ to be some interval $M = I = [0, 1]$, which we think of as time, and the field is a map $\phi: I \to \R^3$. We think of this as a path in $\R^3$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (1.5, 0) node [circ] {};
+
+ \draw (3, 0) rectangle (5, -2);
+ \draw (3, 0) -- (4, 0.5) -- (6, 0.5) -- (5, 0);
+ \draw (6, 0.5) -- (6, -1.5) -- (5, -2);
+ \draw [dashed] (4, 0.5) -- (4, -1.5) -- (6, -1.5);
+ \draw [dashed] (3, -2) -- (4, -1.5);
+
+ \draw (0.75, -0.2) edge [out=270, in=180] (4.1, -1);
+
+ \node at (2, -1.4) {$\phi$};
+
+ \draw [->] (4.099, -1) -- (4.1, -1);
+
+ \draw [thick, mred](3.7, -1.4) .. controls (4.4, -1.4) and (3.9, -0.7) .. (5.1, -0.5);
+ \end{tikzpicture}
+ \end{center}
+ In string theory, we often have fields $\phi: \Sigma \to N$, where $N$ is a \emph{Calabi--Yau manifold}.
+
+ In pion physics, $\pi(x)$ describes a map $\phi: (\R^4, \eta) \to G/H$, where $G$ and $H$ are Lie groups.
+
+ Of course, we can go beyond scalar fields. We can also have fields with non-zero spin such as fermions or gauge fields, e.g.\ a connection on a principal $G$-bundle, as we will figure out later. These are fields that carry a non-trivial representation of the Lorentz group. There is a whole load of things we can choose for our field.
+
+ Whatever field we choose, we let $\mathcal{C}$\index{$\mathcal{C}$} be the space of all field configurations, i.e.\ a point $\phi \in \mathcal{C}$ represents a picture of our field across all of $M$. Thus $\mathcal{C}$ is some form of function space and will typically be infinite dimensional. This infinite-dimensionality is what makes QFT hard, and also what makes it interesting.
+
+ \item We choose an action. An \emph{action} is just a function $S: \mathcal{C} \to \R$. You tell me what the field looks like, and I will give you a number, the value of the action. We often choose our action to be local, in the sense that we assume there exists some function $\mathcal{L}(\phi, \partial \phi, \cdots)$ such that
+ \[
+ S[\phi] = \int_M\d^4 x \sqrt{g}\; \mathcal{L}(\phi(x), \partial\phi (x), \cdots).
+ \]
+ The physics motivation behind this choice is obvious --- we don't want what is happening over here to depend on what is happening at the far side of Pluto. However, this assumption is actually rather suspicious, and we will revisit this later.
+
+ For example, we have certainly met actions that look like
+ \[
+ S[\phi] = \int \d^4 x\; \left(\frac{1}{2} (\partial \phi)^2 + \frac{m^2}{2} \phi^2 + \frac{\lambda}{4!} \phi^4\right),
+ \]
+ and for gauge fields we might have seen
+ \[
+ S[A] = \frac{1}{4} \int \d^4 x\; F_{\mu\nu}F^{\mu\nu}.
+ \]
+ If we have a coupled fermion field, we might have
+ \[
+ S[A, \psi] = \frac{1}{4} \int \d^4 x\; F_{\mu\nu}F^{\mu\nu} + \bar\psi (\slashed{D} + m) \psi.
+ \]
+ But recall when we first encountered Lagrangians in classical dynamics, we worked with \emph{lots} of different Lagrangians. We can do whatever thing we like, make the particle roll down the hill and jump into space etc, and we get to deal with a whole family of different Lagrangians. But when we come to quantum field theory, the choices seem to be rather restrictive. Why can't we choose something like
+ \[
+ S[A] = \int F^2 + F^4 + \cosh(F^2) + \cdots?
+ \]
+ It turns out we can, and in fact we \emph{must}. We will have to work with something much more complicated.
+
+ But then what were we doing in the QFT course? Did we just waste time coming up with tools that just work with these very specific examples? It turns out not. We will see that there are very good reasons to study these specific actions.
+ \item What do we compute? In this course, the main object we'll study is the \term{partition function}
+ \[
+ \mathcal{Z} = \int_\mathcal{C} \D \phi\; e^{-S[\phi]/\hbar},
+ \]
+ which is some sort of integral over the space of all fields. Note that the minus sign in the exponential is for the Euclidean signature. If we are not Euclidean, we get some $i$'s instead.
+
+ We will see that the factor of $e^{-S[\phi]/\hbar}$ means that as $\hbar \to 0$, the dominant contribution to the partition function comes from stationary points of $S[\phi]$ over $\mathcal{C}$, and this starts to bring us back to the classical theory of fields. The effect of $e^{-S[\phi]/\hbar}$ is to try to suppress ``wild'' contributions to $\mathcal{Z}$, e.g.\ where $\phi$ is varying very rapidly or $\phi$ takes very large values.
+
+ Heuristically, just as in statistical physics, we have a competition between the two factors $\D \phi$ and $e^{-S[\phi]/\hbar}$. The action part tries to suppress crazy things, but how well this happens depends on how much crazy things are happening, which is measured by the measure $\D \phi$. We can think of this $\D \phi$ as ``entropy''.
+
+ However, the problem is, the measure $\D \phi$ on $\mathcal{C}$ doesn't actually exist! Understanding what we mean by this path integral, and what this measure actually is, and how we can actually compute this thing, and how this has got to do with the canonical quantization operators we had previously, is the main focus of the first part of this course.
+\end{enumerate}
+
+\section{QFT in zero dimensions}
+We start from the simplest case we can think of, namely quantum field theory in zero dimensions. This might seem like an absurd thing to study --- the universe is far from being zero-dimensional. However, it turns out this is the case where we can make sense of the theory mathematically. Thus, it is important to study 0-dimensional field theories and understand what is going on.
+
+There are two reasons for this. As mentioned, we cannot actually define the path integral in higher dimensions. Thus, if we were to do this ``properly'', we would have to define it as the limit of something that is effectively a zero-dimensional quantum field theory. The other reason is that in this course, we are not going to study higher-dimensional path integrals ``rigorously''. What we are going to do is study zero-dimensional field theories rigorously, and then assume that analogous results hold for higher-dimensional field theories.
+
+One drawback of this approach is that what we do in this section will have little physical content or motivation. We will just assume that the partition function is something we are interested in, without actually relating it to any physical processes. In the next chapter, on one-dimensional field theories, we are going to see why this is an interesting thing to study.
+
+Let's begin. In $d = 0$, if our universe $M$ is connected, then the only choice of $M$ is $\{\mathrm{pt}\}$. There is no possibility for a field to have spin, because the Lorentz group is trivial. Our fields are scalar, and the simplest choice is just a single field $\phi: \{\mathrm{pt}\} \to \R$, i.e.\ just a real variable. Similarly, we simply have $\mathcal{C} \cong \R$. This is \emph{not} an infinite-dimensional space.
+
+The action is just a normal function $S: \mathcal{C} \cong \R \to \R$ of one real variable. The path integral measure $\D \phi$ can be taken to just be the standard (Lebesgue) measure $\d \phi$ on $\R$. So our partition function is just
+\[
+ \mathcal{Z} = \int_\R \d \phi\; e^{-S(\phi)/\hbar},
+\]
+where we assume $S$ is chosen so that this converges. This happens if $S$ grows sufficiently quickly as $\phi \to \pm \infty$.
+
+More generally, we may wish to compute \term{correlation functions}, i.e.\ we pick another function $f(\phi)$ and compute the \term{expectation}\index{$\bra f(\phi)\ket$}
+\[
+ \bra f(\phi)\ket = \frac{1}{\mathcal{Z}} \int \d \phi\; f(\phi) e^{-S(\phi)/\hbar}.
+\]
+Again, we can pick whatever $f$ we like as long as the integral converges. In this case, $\frac{1}{\mathcal{Z}} e^{-S(\phi)/\hbar}$ is a probability distribution on $\R$, and as the name suggests, $\bra f(\phi)\ket$ is just the expectation value of $f$ in this distribution. Later on, when we study quantum field theory in higher dimensions, we can define more complicated $f$ by evaluating $\phi$ at different points, and we can use this to figure out how the field at different points relate to each other.
+
+Our action is taken to have a series expansion in $\phi$, so in particular we can write
+\[
+ S(\phi) = \frac{m^2 \phi^2}{2} + \sum_{n = 3}^N g_n \frac{\phi^n}{n!}.
+\]
+We didn't put in a constant term, as it would just give us a constant factor in $\mathcal{Z}$. We could have included linear terms, but we shall not. The important thing is that $N$ has to be even (with $g_N > 0$), so that $S(\phi) \to +\infty$ as $\phi \to \pm\infty$ and the integral converges.
+
+Now the partition function is a function of all these terms: $\mathcal{Z} = \mathcal{Z}(m^2, g_n)$. Similarly, $\bra f\ket$ is again a function of $m^2$ and $g_n$, and possibly other things used to define $f$ itself.
+
+Note that \emph{nothing} depends on the field, because we are integrating over all possible fields.
+
+\subsection{Free theories}
+We consider the simplest possible QFT. These QFTs are free, and so $S(\phi)$ is at most quadratic. Classically, this implies the equations of motion are linear, and so there is superposition, and thus the particles do not interact.
+
+Let $\phi: \{\mathrm{pt}\} \to \R^n$ be a field with coordinates $\phi^a$, and define
+\[
+ S(\phi) = \frac{1}{2} M(\phi, \phi) = \frac{1}{2} M_{ab} \phi^a \phi^b,
+\]
+where $M: \R^n \times \R^n \to \R$ is a positive-definite symmetric matrix. Then the partition function $\mathcal{Z}(M)$ is just a Gaussian integral:
+\[
+ \mathcal{Z}(M) = \int_{\R^n} \d^n \phi\; e^{- \frac{1}{2\hbar} M(\phi, \phi)} = \frac{(2 \pi \hbar)^{n/2}}{\sqrt{\det{M}}}.
+\]
+Indeed, to compute this integral, since $M$ is symmetric, there exists an orthogonal transformation $O: \R^n \to \R^n$ that diagonalizes it. The measure $\d^n \phi$ is invariant under orthogonal transformations. So in terms of the eigenvectors of $M$, this just reduces to a product of $n$ 1D Gaussian integrals of this type, and this is a standard integral:
+\[
+ \int \d \chi\; e^{-m\chi^2/2\hbar} = \sqrt{\frac{2\pi \hbar}{m}}.
+\]
+In our case, $m > 0$ runs over all eigenvalues of $M$, and the product of eigenvalues is exactly $\det M$.
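As a quick numerical sanity check of the determinant formula (not something from the lectures), we can brute-force the integral for a concrete $2 \times 2$ positive-definite matrix, with $\hbar = 1$ as an illustrative choice:

```python
import numpy as np

# Check Z(M) = (2*pi*hbar)^(n/2) / sqrt(det M) for an illustrative
# 2x2 positive-definite matrix, with hbar = 1 (our own choice).
hbar = 1.0
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # symmetric, positive definite, det M = 5

# Brute-force the Gaussian integral on a grid; the tail beyond
# |phi^a| = 8 is negligible for these eigenvalues.
h = 0.01
xs = np.arange(-8.0, 8.0, h)
X, Y = np.meshgrid(xs, xs)
quad_form = M[0, 0]*X**2 + 2*M[0, 1]*X*Y + M[1, 1]*Y**2  # M(phi, phi)
Z_numeric = np.sum(np.exp(-quad_form/(2*hbar))) * h**2

Z_exact = (2*np.pi*hbar)**(2/2) / np.sqrt(np.linalg.det(M))
print(Z_numeric, Z_exact)  # both approximately 2.80993
```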
+
+A small generalization is useful. We let
+\[
+ S(\phi) = \frac{1}{2} M(\phi, \phi) + J(\phi),
+\]
+where $J: \R^n \to \R$ is some linear map (we can think of $J$ as a (co)vector, and also write $J(\phi) = J \cdot \phi$). $J$ is a \emph{source} in the classical case. Then in this theory, we have
+\[
+ \mathcal{Z}(M, J) = \int_{\R^n} \d^n \phi\; \exp\left(-\frac{1}{\hbar}\left(\frac{1}{2} M(\phi, \phi) + J(\phi)\right)\right).
+\]
+To do this integral, we complete the square by letting $\tilde{\phi} = \phi + M^{-1}J$. In other words,
+\[
+ \tilde{\phi}^a = \phi^a + (M^{-1})^{ab} J_b.
+\]
+The inverse exists because $M$ is assumed to be positive definite. We can now complete the square to find
+\begin{align*}
+ \mathcal{Z}(M, J) &= \int_{\R^n} \d^n \tilde{\phi} \;\exp\left(\frac{-1}{2\hbar}M(\tilde{\phi}, \tilde{\phi}) + \frac{1}{2\hbar}M^{-1}(J, J)\right)\\
+ &= \exp\left(\frac{1}{2\hbar}M^{-1}(J, J)\right)\int_{\R^n} \d^n \tilde{\phi} \;\exp\left(\frac{-1}{2\hbar}M(\tilde{\phi}, \tilde{\phi})\right)\\
+ &= \exp\left(\frac{1}{2\hbar}M^{-1}(J, J)\right) \frac{(2\pi \hbar)^{n/2}}{\sqrt{\det M}}.
+\end{align*}
+In the long run, we really don't care about the case with a source. However, we will use this general case to compute some correlation functions.
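Continuing the same kind of toy example, the completed-square formula for $\mathcal{Z}(M, J)$ can also be checked numerically; the matrix, the source $J$, and $\hbar = 1$ below are our own illustrative choices:

```python
import numpy as np

# Check Z(M, J) = exp(M^{-1}(J,J)/(2 hbar)) (2 pi hbar)^{n/2}/sqrt(det M)
# for an illustrative 2x2 example with hbar = 1 (our own numbers).
hbar = 1.0
M = np.array([[2.0, 1.0], [1.0, 3.0]])
J = np.array([1.0, 0.0])
Minv = np.linalg.inv(M)

h = 0.01
xs = np.arange(-10.0, 10.0, h)
X, Y = np.meshgrid(xs, xs)
S = 0.5*(M[0, 0]*X**2 + 2*M[0, 1]*X*Y + M[1, 1]*Y**2) + J[0]*X + J[1]*Y
Z_numeric = np.sum(np.exp(-S/hbar)) * h**2

Z_exact = np.exp(J @ Minv @ J/(2*hbar)) * (2*np.pi*hbar)**(2/2)/np.sqrt(np.linalg.det(M))
print(Z_numeric, Z_exact)  # both approximately 3.793
```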
+
+We return to the case without a source, and let $P: \R^n \to \R$ be a polynomial. We want to compute
+\[
+ \bra P(\phi)\ket = \frac{1}{\mathcal{Z}(M)} \int_{\R^n} \d^n \phi\; P(\phi) \exp\left(-\frac{1}{2\hbar} M(\phi, \phi)\right).
+\]
+By linearity, it suffices to consider the case where $P$ is just a monomial, so
+\[
+ P(\phi) = \prod_{i = 1}^m (\ell_i(\phi)),
+\]
+for $\ell_i : \R^n \to \R$ linear maps. Now if $m$ is odd, then clearly $\bra P(\phi)\ket = 0$, since this is an integral of an odd function. When $m = 2k$, then we have
+\[
+ \bra P(\phi) \ket = \frac{1}{\mathcal{Z}(M)} \int \d^n \phi\; (\ell_1\cdot \phi) \cdots (\ell_{2k} \cdot \phi) \exp\left(-\frac{1}{2\hbar}M (\phi, \phi) - \frac{J \cdot \phi}{\hbar}\right).
+\]
+Here we are eventually going to set $J = 0$, but for the time being, we will be silly and put the source there. The relevance is that we can then think of our factors $\ell_i \cdot \phi$ as derivatives with respect to $J$:
+\begin{align*}
+ \bra P(\phi)\ket &= \frac{(-\hbar)^{2k}}{\mathcal{Z}(M)} \int \d^n \phi \; \prod_{i = 1}^{2k} \left(\ell_i \cdot \frac{\partial}{\partial J}\right) \exp\left(-\frac{1}{2\hbar} M(\phi, \phi) - \frac{J\cdot \phi}{\hbar}\right)\\
+ \intertext{Since the integral is absolutely convergent, we can move the derivative out of the integral, and get}
+ &= \frac{(-\hbar)^{2k}}{\mathcal{Z}(M)} \prod_{i = 1}^{2k} \left(\ell_i \cdot \frac{\partial}{\partial J}\right) \int \d^n \phi \; \exp\left(-\frac{1}{2\hbar} M(\phi, \phi) - \frac{J\cdot \phi}{\hbar}\right)\\
+ &= \hbar^{2k} \prod_{i = 1}^{2k} \left(\ell_i \cdot \frac{\partial}{\partial J}\right) \exp\left(\frac{1}{2\hbar}M^{-1}(J, J)\right).
+\end{align*}
+When each derivative $\ell_i \cdot \frac{\partial}{\partial J}$ acts on the exponential, we obtain a factor of
+\[
+ \frac{1}{\hbar} M^{-1}(J, \ell_i)
+\]
+in front. At the end, we are going to set $J = 0$. So we get a non-zero contribution if and only if exactly half (i.e.\ $k$) of the derivatives act on the exponential, and the other $k$ act on the factors in front to get rid of the $J$'s.
+
+We let $\sigma$ denote a (complete) pairing of the set $\{1, \cdots, 2k\}$, and $\Pi_{2k}$ be the set of all such pairings. For example, if we have $k = 2$, then the possible pairings are $\{(1, 2), (3, 4)\}$, $\{(1, 3), (2, 4)\}$ and $\{(1, 4), (2, 3)\}$:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \draw (0, 0) -- (0, 1);
+ \draw (1, 0) -- (1, 1);
+
+ \begin{scope}[shift={(3, 0)}]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \draw (0, 0) -- (1, 0);
+ \draw (0, 1) -- (1, 1);
+ \end{scope}
+
+ \begin{scope}[shift={(6, 0)}]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \draw (0, 0) -- (1, 1);
+ \draw (1, 0) -- (0, 1);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+In general, we have
+\[
+ |\Pi_{2k}| = \frac{(2k)!}{2^k k!},
+\]
+and we have
+\begin{thm}[Wick's theorem]\index{Wick's theorem}
+ For a monomial
+ \[
+ P(\phi) = \prod_{i = 1}^{2k} \ell_i(\phi),
+ \]
+ we have
+ \[
+ \bra P(\phi)\ket = \hbar^{k}\sum_{\sigma \in \Pi_{2k}} \prod_{i \in \{1, \cdots, 2k\}/\sigma} M^{-1}(\ell_i, \ell_{\sigma(i)}),
+ \]
+ where the notation $\{1, \cdots, 2k\}/\sigma$ says that the product runs over each pair $\{i, \sigma(i)\}$ only once, rather than once for $(i, \sigma(i))$ and again for $(\sigma(i), i)$.
+\end{thm}
+This is in fact the version of Wick's theorem for this 0d QFT, and $M^{-1}$ plays the role of the propagator.
+
+For example, we have
+\[
+ \bra \ell_1(\phi) \ell_2(\phi)\ket = \hbar M^{-1} (\ell_1, \ell_2).
+\]
+We can represent this by the diagram
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+
+ \node [above] at (0, 0) {$1$};
+ \node [above] at (2, 0) {$2$};
+ \draw (0, 0) -- (2, 0) node [pos=0.5, above] {$M^{-1}$};
+ \end{tikzpicture}
+\end{center}
+Similarly, we have
+\begin{multline*}
+ \bra \ell_1(\phi) \cdots \ell_4(\phi)\ket = \hbar^2 \Big(M^{-1}(\ell_1, \ell_2) M^{-1}(\ell_3, \ell_4) \\
+ + M^{-1}(\ell_1, \ell_3) M^{-1}(\ell_2, \ell_4) + M^{-1}(\ell_1, \ell_4) M^{-1}(\ell_2, \ell_3)\Big).
+\end{multline*}
+Note that we have now reduced the problem of computing an integral for the correlation function into a purely combinatorial problem of counting the number of ways to pair things up.
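Both the pairing count $|\Pi_{2k}| = (2k)!/(2^k k!)$ and Wick's theorem itself are easy to check numerically. The sketch below uses a $2 \times 2$ matrix $M$ and four linear maps $\ell_i$ of our own choosing, with $\hbar = 1$:

```python
import numpy as np

def pairings(items):
    """Enumerate all complete pairings of an even-sized list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        remaining = rest[:i] + rest[i+1:]
        for tail in pairings(remaining):
            yield [(first, rest[i])] + tail

# |Pi_{2k}| = (2k)!/(2^k k!): 3 pairings for 2k = 4, 15 for 2k = 6.
assert len(list(pairings(list(range(4))))) == 3
assert len(list(pairings(list(range(6))))) == 15

# Wick's theorem for a 4-point function, hbar = 1 (illustrative numbers).
M = np.array([[2.0, 1.0], [1.0, 3.0]])
Minv = np.linalg.inv(M)
ells = [np.array(v) for v in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, -1.0)]]

wick = sum(np.prod([ells[i] @ Minv @ ells[j] for i, j in sigma])
           for sigma in pairings(list(range(4))))

# Compare with a direct numerical evaluation of the correlation function.
h = 0.02
xs = np.arange(-8.0, 8.0, h)
X, Y = np.meshgrid(xs, xs)
weight = np.exp(-(M[0, 0]*X**2 + 2*M[0, 1]*X*Y + M[1, 1]*Y**2)/2)
P = X * Y * (X + Y) * (X - Y)   # the product of the ell_i . phi
numeric = np.sum(P * weight) / np.sum(weight)
print(wick, numeric)  # both approximately -0.12
```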
+
+\subsection{Interacting theories}
+Physically interesting theories contain interactions, i.e.\ $S(\phi)$ is non-quadratic. Typically, if we are trying to compute
+\[
+ \int \d^n \phi\; P(\phi) \exp\left(-\frac{S(\phi)}{\hbar}\right),
+\]
+these integrals involve transcendental functions and, except in very special circumstances, are just too hard. We usually cannot perform these integrals analytically.
+
+Naturally, in perturbation theory, we would thus want to approximate
+\[
+ \mathcal{Z} = \int \d^n \phi\; \exp\left(-\frac{S(\phi)}{\hbar}\right)
+\]
+by some series. Unfortunately, the integral very likely diverges if $\hbar < 0$, as usually $S(\phi) \to \infty$ as $\phi \to \pm\infty$. So it can't have a Taylor series expansion around $\hbar = 0$, as such expansions have to be valid in a disk in the complex plane. The best we can hope for is an asymptotic series.
+
+Recall that a series
+\[
+ \sum_{n = 0}^\infty f_n(\hbar)
+\]
+is an \emph{asymptotic series} for $\mathcal{Z}(\hbar)$ if for any fixed $N \in \N$, writing $\mathcal{Z}_N(\hbar)$ for the sum of the first $N$ terms on the RHS, we have
+\[
+ \lim_{\hbar \to 0^+} \frac{|\mathcal{Z}(\hbar) - \mathcal{Z}_N(\hbar)|}{\hbar^N} = 0.
+\]
+Thus as $\hbar \to 0^+$, we get an arbitrarily good approximation to what we really wanted from any finite number of terms. But the series will in general diverge if we try to fix $\hbar \in \R_{> 0}$, and include increasingly many terms. Most of the expansions we do in quantum field theories are of this nature.
+
+We will assume standard results about asymptotic series. Suppose $S(\phi)$ is smooth with a global minimum at $\phi = \phi_0 \in \R^n$, where the Hessian
+\[
+ \left.\frac{\partial^2 S}{\partial \phi^a \partial \phi^b}\right|_{\phi_0}
+\]
+is positive definite. Then by Laplace's method/Watson's lemma, we have an asymptotic series of the form
+\[
+ \mathcal{Z}(\hbar) \sim (2\pi \hbar)^{n/2} \frac{\exp\left(-\frac{S(\phi_0)}{\hbar}\right)}{\sqrt{\det \partial_a \partial_b S(\phi_0)}} \left(1 + A \hbar + B \hbar^2 + \cdots\right).
+\]
+We will not prove this, but the proof is available in any standard asymptotic methods textbook. The leading term involves the action evaluated on the classical solution $\phi_0$, and is known as the \term{semiclassical term}. The remaining terms are called the \term{quantum correction}.
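As an illustration (with an action of our own choosing), we can watch the Laplace asymptotics emerge numerically: for $S(\phi) = \phi^2/2 + \phi^4/4!$ we have $\phi_0 = 0$, $S(\phi_0) = 0$ and $S''(\phi_0) = 1$, so the prediction is $\mathcal{Z}(\hbar) \sim \sqrt{2\pi\hbar}\,(1 - \hbar/8 + \cdots)$:

```python
import numpy as np

# Toy check of the Laplace/semiclassical expansion for
# S(phi) = phi^2/2 + phi^4/4!  (our own example).
def Z(hbar):
    h = 1e-4
    phi = np.arange(-5.0, 5.0, h)
    S = phi**2/2 + phi**4/24
    return np.sum(np.exp(-S/hbar)) * h

for hbar in [0.1, 0.01, 0.001]:
    # The ratio tends to 1 as hbar -> 0, with leading correction -hbar/8.
    print(hbar, Z(hbar) / np.sqrt(2*np.pi*hbar))
```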
+
+In Quantum Field Theory last term, the tree diagrams we worked with were just about calculating the leading term. So we weren't actually doing \emph{quantum} field theory.
+
+\begin{eg}
+ Let's consider a single scalar field $\phi$ with action
+ \[
+ S(\phi) = \frac{m^2}{2}\phi^2 + \frac{\lambda}{4!} \phi^4,
+ \]
+ where $m^2, \lambda > 0$. The action has a unique global minimum at $\phi_0 = 0$. The action evaluated at $\phi_0 = 0$ vanishes, and $\partial^2 S = m^2$. So the leading term in the asymptotic expansion of $\mathcal{Z}(\hbar, m, \lambda)$ is
+ \[
+ \frac{(2\pi \hbar)^{1/2}}{m}.
+ \]
+ Further, we can find the whole series expansion by
+ \begin{align*}
+ \mathcal{Z}(\hbar, m, \lambda) &= \int_\R \d \phi \; \exp\left(\frac{-1}{\hbar}\left(\frac{m^2}{2} \phi^2 + \frac{\lambda}{4!}\phi^4\right)\right)\\
+ &= \frac{\sqrt{2\hbar}}{m} \int \d \tilde{\phi} \exp\left(-\tilde{\phi}^2 \right) \exp\left( - \frac{4\lambda \hbar}{4! m^4} \tilde{\phi}^4\right)\\
+ &\sim \frac{\sqrt{2\hbar}}{m} \int \d \tilde{\phi}\; e^{- \tilde{\phi}^2}\sum_{n = 0}^N \frac{1}{n!} \left(\frac{-4\lambda \hbar}{4! m^4}\right)^n \tilde{\phi}^{4n}\\
+ &= \frac{\sqrt{2\hbar}}{m} \sum_{n = 0}^N \left(\frac{-4\lambda \hbar}{4! m^4}\right)^n \frac{1}{n!} \int \d \tilde{\phi}\;e^{- \tilde{\phi}^2}\tilde{\phi}^{4n}\\
+ &= \frac{\sqrt{2\hbar}}{m} \sum_{n = 0}^N \left(\frac{-4\lambda \hbar}{4! m^4}\right)^n \frac{1}{n!} \Gamma\left(2n + \frac{1}{2}\right).
+ \end{align*}
+ The last line uses the definition of the Gamma function.
+
+ We can plug in the value of the $\Gamma$ function to get
+ \begin{align*}
+ \mathcal{Z}(\hbar, m, \lambda) &\sim \frac{\sqrt{2\pi\hbar}}{m} \sum_{n = 0}^N \left(\frac{-\hbar \lambda}{m^4}\right)^n \frac{1}{(4!)^n n!} \frac{(4n)!}{4^n (2n)!}\\
+ &= \frac{\sqrt{2\pi \hbar}}{m} \left(1 - \frac{\hbar\lambda}{8m^4} + \frac{35}{384} \frac{\hbar^2 \lambda^2}{m^8} + \cdots\right).
+ \end{align*}
+ We will get to understand these coefficients much more in terms of Feynman diagrams later.
+
+ Note that apart from the factor $\frac{\sqrt{2\pi \hbar}}{m}$, the series depends on $(\hbar, \lambda)$ only through the product $\hbar \lambda$. So we can equally view it as an asymptotic series in the coupling constant $\lambda$. This view has the benefit that we can allow ourselves to set $\hbar = 1$, which is what we are usually going to do.
+\end{eg}
+As we emphasized previously, we get an asymptotic series, not a Taylor series. Why is this so?
+
+When we first started our series expansion, we wrote $\exp(-S(\phi)/\hbar)$ as a series in $\lambda$. This is absolutely fine, as $\exp$ is a very well-behaved function when it comes to power series. The problem comes when we want to interchange the integral with the sum. We know this is allowed only if the sum is absolutely convergent, but it is not, as the integral does not converge for negative $\hbar$. Thus, the best we can do is that for any $N$, we truncate the sum at $N$, and then do the exchange. This is legal since finite sums always commute with integrals.
+
+We can in fact see the divergence of the asymptotic series for finite $(\hbar \lambda) > 0$ from the series itself, using Stirling's approximation. Recall we have
+\[
+ n! \sim e^{n \log n}.
+\]
+So via a straightforward manipulation, we have
+\[
+ \frac{1}{(4!)^n n!} \frac{(4n)!}{4^n(2n)!} \sim e^{n \log n}.
+\]
+So the coefficients grow faster than exponentially, and thus the series has vanishing radius of convergence.
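We can make this concrete with a small check of our own (not from the lectures): the coefficient of $(-\hbar\lambda/m^4)^n$ in the series is $c_n = (4n)!/((4!)^n\, 4^n\, (2n)!\, n!)$, and its consecutive ratio grows roughly linearly in $n$:

```python
from math import factorial, log

# c_n is the coefficient of (-hbar*lambda/m^4)^n in the series above.
def c(n):
    return factorial(4*n) / (24**n * 4**n * factorial(2*n) * factorial(n))

assert abs(c(1) - 1/8) < 1e-12          # matches the -hbar*lambda/(8 m^4) term

# The term ratio c_{n+1}/c_n grows without bound (roughly like (2/3) n),
# so the series diverges for any fixed hbar*lambda > 0.
ratios = [c(n + 1)/c(n) for n in range(1, 30)]
assert all(b > a for a, b in zip(ratios, ratios[1:]))

# log c_n grows like n log n, matching the Stirling estimate in the text.
print(log(c(30)) / (30 * log(30)))  # order 1
```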
+
+We can actually plot out the partial sums. For example, we can pick $\hbar = m = 1$, $\lambda = 0.1$, and let $\mathcal{Z}_n$ be the $n$th partial sum of the series. We can start plotting this for $n$ up to $40$ (the red line is the true value):
+\begin{center}
+ \begin{tikzpicture}
+ \begin{axis}[
+ xlabel={$n$},
+ ylabel={$\mathcal{Z}_n$},
+ width=0.95\textwidth,
+ height=0.6\textwidth,
+ axis lines=left
+ ]
+ \addplot [no markers, mblue, thick] plot coordinates {(0,2.506628275) (1,2.4752954215625005) (2,2.477580108792318) (3,2.4772659642982178) (4,2.477329774898582) (5,2.477312599211984) (6,2.4773183602235305) (7,2.477316045531391) (8,2.4773171293377625) (9,2.4773165498024112) (10,2.4773168982480414) (11,2.477316665554994) (12,2.4773168364389506) (13,2.4773166995674734) (14,2.477316818311627) (15,2.477316707384797) (16,2.4773168183982883) (17,2.47731669990227) (18,2.4773168342800465) (19,2.4773166729383083) (20,2.477316877405349) (21,2.4773166046609214) (22,2.477316986658088) (23,2.47731642618797) (24,2.4773172858673957) (25,2.4773159099504745) (26,2.4773182038068415) (27,2.4773142267271684) (28,2.477321387246062) (29,2.477308017857696) (30,2.477333870912449) (31,2.4772821543783246) (32,2.4773890554172873) (33,2.4771609593746824) (34,2.4776628545272517) (35,2.4765250462664925) (36,2.4791803272805764) (37,2.4728067557923787) (38,2.4885303286995017) (39,2.448692237728089) (40,2.5522837236571903)};
+
+ \addplot [mred, no markers] plot coordinates {(0, 2.47732) (40, 2.47732)};
+
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+
+After the initial few terms, it seems like the sum has converged, and indeed to the true value. Up to around $n = 30$, there is no visible change, and indeed the differences are of order $\sim 10^{-7}$. However, after around $n = 33$, the sum starts to change drastically.
+
+We can try to continue plotting. Since the values become ridiculously large, we have to plot this in a logarithmic scale, and thus we plot $|\mathcal{Z}_n|$ instead. Do note that for sufficiently large $n$, the actual sign flips every time we increment $n$! This thing actually diverges really badly.
+
+\begin{center}
+ \begin{tikzpicture}
+ \begin{semilogyaxis}[
+ xlabel={$n$},
+ ylabel={$|\mathcal{Z}_n|$},
+ width=0.95\textwidth,
+ height=0.6\textwidth,
+ axis lines=left
+ ]
+
+ \addplot [no markers, mblue, thick] plot coordinates {(0,2.506628275) (1,2.4752954215625005) (2,2.477580108792318) (3,2.4772659642982178) (4,2.477329774898582) (5,2.477312599211984) (6,2.4773183602235305) (7,2.477316045531391) (8,2.4773171293377625) (9,2.4773165498024112) (10,2.4773168982480414) (11,2.477316665554994) (12,2.4773168364389506) (13,2.4773166995674734) (14,2.477316818311627) (15,2.477316707384797) (16,2.4773168183982883) (17,2.47731669990227) (18,2.4773168342800465) (19,2.4773166729383083) (20,2.477316877405349) (21,2.4773166046609214) (22,2.477316986658088) (23,2.47731642618797) (24,2.4773172858673957) (25,2.4773159099504745) (26,2.4773182038068415) (27,2.4773142267271684) (28,2.477321387246062) (29,2.477308017857696) (30,2.477333870912449) (31,2.4772821543783246) (32,2.4773890554172873) (33,2.4771609593746824) (34,2.4776628545272517) (35,2.4765250462664925) (36,2.4791803272805764) (37,2.4728067557923787) (38,2.4885303286995017) (39,2.448692237728089) (40,2.5522837236571903) (41,2.276008178409487) (42,3.0312435602845316) (43,0.9163649458652577) (44,6.9796177916553654) (45,10.807608126230553) (46,42.558903112730995) (47,121.11259124115857) (48,391.76738051934996) (49,1249.5793658417674) (50,4112.563675624475) (51,13762.560713341785) (52,47017.15911404468) (53,163700.204460477) (54,580883.2573369935) (55,2099786.4286479456) (56,7729934.117066026) (57,2.8969844894070018e7) (58,1.104972247833786e8) (59,4.2880499274407196e8) (60,1.692562750825886e9) (61,6.79334292995877e9) (62,2.7717717706646297e10) (63,1.149348470239874e11) (64,4.842337866737232e11) (65,2.072334275173758e12) (66,9.006611524661889e12) (67,3.9742816962114914e13)};
+
+ \addplot [mred, no markers] plot coordinates {(0, 2.47732) (67, 2.47732)};
+ \end{semilogyaxis}
+ \end{tikzpicture}
+\end{center}
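Both plots can be reproduced with a short Python sketch (ours, using the same parameter choice $\hbar = m = 1$, $\lambda = 0.1$); the ``true value'' is obtained by direct numerical quadrature:

```python
import math

hbar, m, lam = 1.0, 1.0, 0.1   # the parameter choice used in the plots

def true_value(h=1e-4, L=10.0):
    # trapezoid-rule quadrature of
    # Z = int dphi exp(-(m^2 phi^2/2 + lam phi^4/4!)/hbar)
    xs = [-L + i * h for i in range(int(2 * L / h) + 1)]
    w = [math.exp(-(m**2 * x * x / 2 + lam * x**4 / 24) / hbar) for x in xs]
    return h * (sum(w) - 0.5 * (w[0] + w[-1]))

def partial_sum(N):
    # Z_N: truncation of the asymptotic series, using the free moments
    # <phi^{4n}>_free = (4n-1)!! (hbar/m^2)^{2n}
    s = 0.0
    for n in range(N + 1):
        dfact = math.prod(range(4 * n - 1, 0, -2))     # (4n-1)!!
        s += (-lam * hbar / m**4)**n * dfact / (24**n * math.factorial(n))
    return s * math.sqrt(2 * math.pi * hbar / m**2)
```

The partial sums agree with the quadrature value to about $10^{-7}$ near $N = 20$, and blow up past $N \approx 40$, as in the plots.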
+
+We can also see other weird phenomena here. Suppose we decided that we have $m^2 < 0$ instead (which is a rather weird thing to do). Then it is clear that $\mathcal{Z}(m^2, \lambda)$ still exists, as the $\phi^4$ term eventually dominates, and so what happens with the $\phi^2$ term doesn't really matter. However, the asymptotic series is invalid in this region.
+
+Indeed, if we look at the fourth line of our derivation, each term in the asymptotic series will be obtained by integrating $e^{\tilde\phi^2} \tilde\phi^{4n}$, instead of integrating $e^{-\tilde\phi^2} \tilde\phi^{4n}$, and each of these integrals would diverge.
+
+Fundamentally, this is because when $m^2 < 0$, the global minima of $S(\phi)$ are now at
+\[
+ \phi_0 = \pm \sqrt{-\frac{6m^2}{\lambda}}.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$\phi$};
+ \draw [->] (0, -1.2) -- (0, 3) node [above] {$S(\phi)$};
+
+ \draw [mblue, thick, domain=-2.4:2.4, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 - (\x/2)^2)});
+
+ \draw [dashed] (1.414, 0) node [above] {$\phi_0$} -- (1.414, -1);
+ \end{tikzpicture}
+\end{center}
+Our old minimum $\phi = 0$ is now a (local) maximum! This is the wrong point to expand about! Fields with $m^2 < 0$ are called \term{tachyons}, and these are always signs of some form of instability.
+
+Actions whose minima occur at non-zero field values are often associated with \term{spontaneous symmetry breaking}. Our original theory has a $\Z/2$ symmetry, namely under the transformation $\phi \leftrightarrow -\phi$. But once we pick a minimum $\phi_0$ to expand about, we have broken the symmetry. This is an interesting kind of symmetry breaking because the asymmetry doesn't really exist in our theory. It's just our arbitrary choice of minimum that breaks it.
+
+\subsection{Feynman diagrams}
+We now try to understand the terms in the asymptotic series in terms of Feynman diagrams. The easy bits are the powers of
+\[
+ \left(\frac{\hbar \lambda}{m^4}\right)^n.
+\]
+This combination is essentially fixed by dimensional analysis. What we really want to understand are the combinatorial factors, given by
+\[
+ \frac{1}{(4!)^n n!} \times \frac{(4n)!}{4^n (2n)!}.
+\]
+To understand this, we can write the path integral in a different way. We have
+\begin{align*}
+ \mathcal{Z}(\hbar, m, \lambda) &= \int \d \phi\; \exp\left(-\frac{1}{\hbar} \left(\frac{m^2}{2} \phi^2 + \frac{\lambda}{4!}\phi^4\right)\right)\\
+ &= \int \d \phi\; \sum_{n = 0}^\infty \frac{1}{n!} \left(-\frac{\lambda}{4! \hbar}\right)^n \phi^{4n} \exp\left(-\frac{m^2}{2\hbar} \phi^2\right)\\
+ &\sim \sum_{n = 0}^\infty \frac{1}{(4!)^nn!} \left(\frac{-\lambda}{\hbar}\right)^n \bra \phi^{4n}\ket_{\mathrm{free}}\mathcal{Z}_{\mathrm{free}}.
+\end{align*}
+Thus, we can apply our previous Wick's theorem to compute this! We now see that there is a factor of $\frac{1}{(4!)^nn!}$ coming from expanding the exponential, and the factor of $\frac{(4n)!}{4^n(2n)!}$ came from Wick's theorem --- it is the number of ways to pair up $4n$ vertices together. There will be a factor of $\hbar^{2n}$ coming from Wick's theorem when evaluating $\bra \phi^{4n}\ket$, so the expansion is left with a $\hbar^n$ factor.
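The free-theory ingredient here is that the $2p$-th moment of a Gaussian equals the number of pairings, $\bra \phi^{2p}\ket_{\mathrm{free}} = (2p - 1)!!$ in units where the variance is $1$. A small Python sketch (our own numerical check) confirms this:

```python
import math

def wick_moment(p):
    # number of ways to pair up 2p fields: (2p - 1)!!
    return math.prod(range(2 * p - 1, 0, -2))

def numeric_moment(p, h=1e-3, L=12.0):
    # trapezoid-rule evaluation of E[phi^{2p}] for a standard Gaussian
    xs = [-L + i * h for i in range(int(2 * L / h) + 1)]
    w = [x**(2 * p) * math.exp(-x * x / 2) for x in xs]
    return h * (sum(w) - 0.5 * (w[0] + w[-1])) / math.sqrt(2 * math.pi)
```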
+
+Let's try to compute the order $\lambda$ term. By Wick's theorem, we should consider diagrams with 4 \emph{external} vertices, joined by straight lines. There are three of these:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \draw (0, 0) -- (0, 1);
+ \draw (1, 0) -- (1, 1);
+
+ \begin{scope}[shift={(3, 0)}]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \draw (0, 0) -- (1, 0);
+ \draw (0, 1) -- (1, 1);
+ \end{scope}
+
+ \begin{scope}[shift={(6, 0)}]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \draw (0, 0) -- (1, 1);
+ \draw (1, 0) -- (0, 1);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+But we want to think of these as interaction vertices of valency $4$. So instead of drawing them this way, we ``group'' the four vertices together. For example, the first one becomes
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \draw (0, 0) -- (0, 1);
+ \draw (1, 0) -- (1, 1);
+
+ \draw [-latex', mred] (0.1, 0.1) -- (0.4, 0.4);
+ \draw [-latex', mred] (0.1, 0.9) -- (0.4, 0.6);
+ \draw [-latex', mred] (0.9, 0.1) -- (0.6, 0.4);
+ \draw [-latex', mred] (0.9, 0.9) -- (0.6, 0.6);
+
+ \node at (2, 0.5) {$\Rightarrow$};
+ \begin{scope}[shift={(4, 0.5)}]
+ \node [circ] at (0, 0) {};
+
+ \draw (0.5, 0) ellipse (0.5 and 0.3);
+ \draw (-0.5, 0) ellipse (0.5 and 0.3);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Similarly, at order $\lambda^2$, we have a diagram
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+
+ \node [circ] at (2.5, 0) {};
+ \node [circ] at (3.5, 0) {};
+ \node [circ] at (3.5, 1) {};
+ \node [circ] at (2.5, 1) {};
+
+ \draw (1, 0) -- (2.5, 0);
+ \draw (1, 1) -- (2.5, 1);
+
+ \draw (0, 0) edge [bend right] (3.5, 0);
+ \draw (0, 1) edge [bend left] (3.5, 1);
+
+ \draw [-latex', mred] (0.1, 0.1) -- (0.4, 0.4);
+ \draw [-latex', mred] (0.1, 0.9) -- (0.4, 0.6);
+ \draw [-latex', mred] (0.9, 0.1) -- (0.6, 0.4);
+ \draw [-latex', mred] (0.9, 0.9) -- (0.6, 0.6);
+
+ \begin{scope}[shift={(2.5, 0)}]
+ \draw [-latex', mred] (0.1, 0.1) -- (0.4, 0.4);
+ \draw [-latex', mred] (0.1, 0.9) -- (0.4, 0.6);
+ \draw [-latex', mred] (0.9, 0.1) -- (0.6, 0.4);
+ \draw [-latex', mred] (0.9, 0.9) -- (0.6, 0.6);
+ \end{scope}
+
+ \node at (5, 0.5) {$\Rightarrow$};
+ \begin{scope}[shift={(6.5, 0.5)}]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1.5, 0) {};
+ \draw (0.75, 0) ellipse (0.75 and 0.7);
+ \draw (0.75, 0) ellipse (0.75 and 0.4);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Note that the way we are counting diagrams in Wick's theorem means we consider the following diagrams to be distinct:
+\begin{center}
+\begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+
+ \draw (0, 0.5) ellipse (0.3 and 0.5);
+ \draw (0, -0.5) ellipse (0.3 and 0.5);
+
+ \begin{scope}[shift={(2.5, 0)}]
+ \node [circ] at (0, 0) {};
+
+ \draw (0.5, 0) ellipse (0.5 and 0.3);
+ \draw (-0.5, 0) ellipse (0.5 and 0.3);
+ \end{scope}
+
+ \begin{scope}[shift={(5, 0.2)}, rotate=135]
+ \draw (0, 0.3) ellipse (0.5 and 0.3);
+ \draw [ultra thick, white] (-0.1, 0) ellipse (0.1 and 0.3);
+ \draw (-0.1, 0) ellipse (0.1 and 0.3);
+
+ \node [circ] at (0, 0) {};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+However, \emph{topologically}, these are the same diagrams, and we don't want to count them separately. Combinatorially, these diagrams can be obtained from each other by permuting the outgoing edges at each vertex, or by permuting the vertices. In other words, in the ``expanded view'', when we group our vertices as
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \draw [mred] (-0.2, -0.2) rectangle (1.2, 1.2);
+
+ \node [circ] at (2.5, 0) {};
+ \node [circ] at (3.5, 0) {};
+ \node [circ] at (3.5, 1) {};
+ \node [circ] at (2.5, 1) {};
+ \draw [mred] (2.3, -0.2) rectangle (3.7, 1.2);
+
+ \node [circ] at (5, 0) {};
+ \node [circ] at (6, 0) {};
+ \node [circ] at (6, 1) {};
+ \node [circ] at (5, 1) {};
+ \draw [mred] (4.8, -0.2) rectangle (6.2, 1.2);
+ \end{tikzpicture}
+\end{center}
+we are allowed to permute the vertices within each block, or permute the blocks themselves. We let $D_n$ be the set of all graphs, and $G_n$ be the group consisting of these ``allowed'' permutations. This $G_n$ is a semi-direct product $(S_4)^n \rtimes S_n$. Then by some combinatorics,
+\[
+ |D_n| = \frac{(4n)!}{4^n(2n)!},\quad |G_n| = (4!)^n n!.
+\]
+Recall that $|D_n|$ is the number of pairings Wick's theorem gives us, and we now see that $\frac{1}{|G_n|}$ happens to be the factor we obtained from expanding the exponential. Of course, this isn't a coincidence --- we chose to put $\frac{\lambda}{4!}$ instead of, say, $\frac{\lambda}{4}$ in front of the $\phi^4$ term so that this works out.
+
+We can now write the asymptotic series as
+\[
+ \frac{\mathcal{Z}(m^2, \lambda)}{\mathcal{Z}(m^2, 0)} \sim \sum_{n = 0}^N \frac{|D_n|}{|G_n|} \left(\frac{-\hbar \lambda}{m^4}\right)^n.
+\]
+It turns out a bit of pure mathematics will allow us to express this in terms of the graphs up to topological equivalence. By construction, two graphs are topologically equivalent if they can be related to each other via $G_n$. In other words, the graphs correspond to the set of \term{orbits} $\mathcal{O}_n$ of $D_n$ under $G_n$. For $n = 1$, there is only one graph up to topological equivalence, so $|\mathcal{O}_1| = 1$. It is not hard to see that $|\mathcal{O}_2| = 3$.
+
+In general, each graph $\Gamma$ has some \emph{automorphisms} that fix the graph. For example, in the following graph:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+
+ \node [below] at (0, 0) {\small $3$};
+ \node [below] at (1, 0) {\small $4$};
+ \node [above] at (1, 1) {\small $2$};
+ \node [above] at (0, 1) {\small $1$};
+
+ \draw (0, 0) -- (0, 1);
+ \draw (1, 0) -- (1, 1);
+ \end{tikzpicture}
+\end{center}
+we see that the graph is preserved by the permutations $(1\; 3)$, $(2\; 4)$ and $(1\; 2)(3\; 4)$. In general, the \term{automorphism group} \term{$\Aut(\Gamma)$} is the subgroup of $G_n$ consisting of all permutations that preserve the graph $\Gamma$. In this case, $\Aut(\Gamma)$ is generated by the above three permutations, and $|\Aut(\Gamma)| = 8$.
+
+Recall from IA Groups that the \emph{orbit-stabilizer theorem} tells us
+\[
+ \frac{|D_n|}{|G_n|} = \sum_{\Gamma \in \mathcal{O}_n} \frac{1}{|\Aut(\Gamma)|}.
+\]
+Thus, if we write
+\[
+ \mathcal{O} = \bigcup_{n \in \N} \mathcal{O}_n,
+\]
+then we have
+\[
+ \frac{\mathcal{Z}(m^2, \lambda)}{\mathcal{Z}(m^2, 0)} \sim \sum_{\Gamma \in \mathcal{O}} \frac{1}{|\Aut(\Gamma)|} \left(\frac{-\hbar \lambda}{m^4}\right)^n,
+\]
+where the $n$ appearing in the exponent is the number of vertices in the graph $\Gamma$.
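For $n = 2$, everything above can be verified by brute force. The Python sketch below (an illustration; the encoding of pairings as sets of half-edges is our own) enumerates $D_2$, acts with $G_2 = (S_4)^2 \rtimes S_2$, and recovers $|\mathcal{O}_2| = 3$, the automorphism group orders $16$, $48$ and $128$, and the orbit-stabilizer identity:

```python
from itertools import permutations, product

n = 2                                    # two phi^4 vertices
half_edges = [(v, s) for v in range(n) for s in range(4)]

def matchings(elems):
    # enumerate all perfect matchings (Wick pairings) of a list of half-edges
    if not elems:
        yield frozenset()
        return
    a, rest = elems[0], elems[1:]
    for i, b in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield m | {frozenset({a, b})}

D = list(matchings(half_edges))

# G_n = (S_4)^n x| S_n: permute the four legs at each vertex, and the vertices
G = [(vp, sp) for vp in permutations(range(n))
     for sp in product(permutations(range(4)), repeat=n)]

def act(g, m):
    vp, sp = g
    relabel = lambda h: (vp[h[0]], sp[h[0]][h[1]])
    return frozenset(frozenset(map(relabel, pair)) for pair in m)

orbits, remaining = [], set(D)
while remaining:
    orb = {act(g, next(iter(remaining))) for g in G}
    orbits.append(orb)
    remaining -= orb
```

The three orbit sizes $9 + 24 + 72 = 105$ partition the Wick pairings, and dividing $|G_2| = 1152$ by each orbit size gives the automorphism group orders.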
+
+The last loose end to tie up is the factor of $\left(\frac{-\hbar \lambda}{m^4}\right)^n$ appearing in the sum. While this is a pretty straightforward thing to come up with in this case, for more complicated fields with more interactions, we need a better way of figuring out what factor to put there.
+
+If we look back at how we derived this factor, we had a factor of $-\frac{\lambda}{\hbar}$ coming from each interaction vertex, whereas when computing the correlation functions $\bra \phi^{4n}\ket$, every edge contributes a factor of $\hbar m^{-2}$ (since, in the language we expressed Wick's theorem, we had $M = m^2$). Thus, we can imagine this as saying we have a propagator
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 0) node [pos=0.5, below] {$\hbar/m^2$} node [pos=0.5, above] { };
+ \end{tikzpicture}
+\end{center}
+and a vertex
+\begin{center}
+ \begin{tikzpicture}[scale=0.707]
+ \draw (0, 0) -- (2, 2);
+ \draw (0, 2) -- (2, 0);
+ \node [circ] at (1, 1) {};
+ \node [right] at (2, 1) {$-\lambda /\hbar$};
+ \end{tikzpicture}
+\end{center}
+Note that the negative sign appears because we have $e^{-S}$ in ``Euclidean'' QFT. In Minkowski spacetime, we have a factor of $-i$ instead.
+
+After all this work, we can now expand the partition function as
+\[\arraycolsep=0.25em\def\arraystretch{2.2}
+ \begin{array}{ccccccccccccc}
+ \displaystyle\frac{\mathcal{Z}(m^2, \lambda)}{\mathcal{Z}(m^2, 0)} & \sim &
+ \begin{tikzpicture}[eqpic]
+ \node {$\emptyset$};
+ \end{tikzpicture} &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+
+ \draw (0, 0.5) ellipse (0.3 and 0.5);
+ \draw (0, -0.5) ellipse (0.3 and 0.5);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1.5, 0) {};
+ \draw (0.75, 0) ellipse (0.75 and 0.7);
+ \draw (0.75, 0) ellipse (0.75 and 0.4);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \draw (-0.2, 0) ellipse (0.2 and 0.5);
+ \draw (0.5, 0) ellipse (0.5 and 0.3);
+ \draw (1.2, 0) ellipse (0.2 and 0.5);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+
+ \draw (0, 0.5) ellipse (0.3 and 0.5);
+ \draw (0, -0.5) ellipse (0.3 and 0.5);
+
+ \node [circ] at (0.7, 0) {};
+
+ \draw (0.7, 0.5) ellipse (0.3 and 0.5);
+ \draw (0.7, -0.5) ellipse (0.3 and 0.5);
+ \end{tikzpicture}
+ &+&
+ \cdots\\
+ && 1 &+& \displaystyle\frac{-\lambda \hbar}{m^4} \frac{1}{8} &+& \displaystyle\frac{\lambda^2 \hbar^2}{m^8} \frac{1}{48} &+& \displaystyle\frac{\lambda^2 \hbar^2}{m^8} \frac{1}{16} &+& \displaystyle\frac{\lambda^2 \hbar^2}{m^8} \frac{1}{128} &+& \cdots
+\end{array}
+\]
+
+In general, if we have a theory with several fields with propagators of value $\frac{1}{P_i}$, and many different interactions with coupling constants $\lambda_\alpha$, then
+\[
+ \frac{\mathcal{Z}(\{\lambda_\alpha\})}{ \mathcal{Z}(\{0\})} \sim \sum_{\Gamma \in \mathcal{O}} \frac{1}{|\Aut \Gamma|} \frac{\prod_\alpha \lambda_\alpha^{|v_\alpha(\Gamma)|}}{\prod_i |P_i|^{|e_i(\Gamma)|}} \hbar^{E(\Gamma) - V(\Gamma)},
+\]
+where
+\begin{itemize}
+ \item $e_i(\Gamma)$ is the number of edges of type $i$ in $\Gamma$;
+ \item $v_\alpha(\Gamma)$ is the number of vertices of type $\alpha$ in $\Gamma$;
+ \item $V(\Gamma)$ is the number of vertices in total; and
+ \item $E(\Gamma)$ the total number of edges.
+\end{itemize}
+There are a few comments we can make. Usually, counting \emph{all} graphs is rather tedious. Instead, we can consider the quantity
+\[
+ \log (\mathcal{Z}/\mathcal{Z}_0).
+\]
+This is the sum of all \emph{connected} graphs. For example, we can see that the term for the double figure-of-eight is just the square of the term for a single figure-of-eight, times a factor of $\frac{1}{2!}$, and one can convince oneself that this generalizes to arbitrary combinations.
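We can verify this exponentiation at order $\lambda^2$ with exact arithmetic, using the coefficients computed above (in powers of $x = -\hbar\lambda/m^4$: $\frac{1}{8}$ from the figure-of-eight, and $\frac{1}{48} + \frac{1}{16}$ from the two connected two-vertex graphs):

```python
from fractions import Fraction

# coefficients of the connected-diagram sum log(Z/Z_0) = c1*x + c2*x^2 + ...,
# where x = -hbar*lambda/m^4
c1 = Fraction(1, 8)                      # figure-of-eight
c2 = Fraction(1, 48) + Fraction(1, 16)   # the two connected two-vertex graphs

# order-x^2 coefficient of exp(c1*x + c2*x^2 + ...)
full2 = c2 + c1**2 / 2
```

The cross term $c_1^2/2$ is exactly the $\frac{1}{128}$ of the double figure-of-eight, and the total reproduces $|D_2|/|G_2| = \frac{105}{1152}$.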
+
+In this case, we can simplify the exponent of $\hbar$. In general, by Euler's theorem, we have
+\[
+ E - V = L - C,
+\]
+where $L$ is the number of loops in the graph, and $C$ is the number of connected components. In the case of $\log (\mathcal{Z}/\mathcal{Z}_0)$, we have $C = 1$. Then the factor of $\hbar$ is just $\hbar^{L - 1}$. Thus, the more loops we have, the less it contributes to the sum.
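The identity $L = E - V + C$ is easy to check mechanically. In the sketch below (our own encoding of diagrams as multigraph edge lists), an edge closes a loop exactly when its endpoints are already connected, which computes $L$ independently of the formula:

```python
def loops(num_vertices, edges):
    # count loops with a union-find spanning forest: an edge closes a
    # loop exactly when its endpoints are already in the same component
    parent = list(range(num_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    L = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            L += 1          # self-loops and repeated edges count here too
        else:
            parent[ru] = rv
    C = len({find(v) for v in range(num_vertices)})
    return L, C

# figure-eight: one vertex, two self-loops; dumbbell: two vertices,
# a self-loop each, joined by a double edge
fig8 = loops(1, [(0, 0), (0, 0)])
dumbbell = loops(2, [(0, 0), (1, 1), (0, 1), (0, 1)])
```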
+
+It is not difficult to see that if we want to compute correlation functions, say $\bra \phi^2\ket$, then we should consider graphs with two external vertices. Note that in this case, when computing the combinatorial factors, the automorphisms of a graph are required to fix the external vertices. To see this mathematically, note that the factors in
+\[
+ \frac{1}{|G_n|} = \frac{1}{(4!)^n n!}
+\]
+came from Taylor expanding $\exp$ and the coefficient $\frac{1}{4!}$ of $\phi^4$. We do not get these kinds of factors from $\phi^2$ when computing
+\[
+ \int \d \phi\; \phi^2 e^{-S[\phi]}.
+\]
+Thus, we should consider the group of automorphisms that fix the external vertices to get the right factor of $\frac{1}{|G_n|}$.
+
+Finally, note that Feynman diagrams don't let us compute exact answers in general. Even if we had some magical means to compute all terms in the expansion using Feynman diagrams, if we try to sum all of them up, it will diverge! We only have an asymptotic series, not a Taylor series. However, it turns out in some special theories, the leading term is the \emph{exact} answer! The tail terms somehow manage to all cancel each other out. For example, this happens for supersymmetric theories, but we will not go into details.
+
+\subsection{An effective theory}
+We now play with another toy theory. Suppose we have two scalar fields $\phi, \chi \in \R$, and consider the action
+\[
+ S(\phi, \chi) = \frac{m^2}{2}\phi^2 + \frac{M^2}{2} \chi^2 + \frac{\lambda}{4} \phi^2 \chi^2.
+\]
+For convenience, we will set $\hbar = 1$. We have Feynman rules
+\begin{center}
+ \makecenter{
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 0) node [pos=0.5, below] {$1/m^2$} node [pos=0.5, above] { };
+ \end{tikzpicture}}\quad\quad
+ \makecenter{\begin{tikzpicture}
+ \draw [dashed] (0, 0) -- (2, 0) node [pos=0.5, below] {$1/M^2$} node [pos=0.5, above] { };
+ \end{tikzpicture}}
+\end{center}
+with a vertex
+\begin{center}
+ \begin{tikzpicture}[scale=0.707]
+ \draw (0, 0) -- (2, 2);
+ \draw [dashed] (0, 2) -- (2, 0);
+ \node [circ] at (1, 1) {};
+ \node [right] at (2, 1) {$-\lambda$};
+ \end{tikzpicture}
+\end{center}
+We can use these to compute correlation functions and expectation values. For example, we might want to compute
+\[
+ \log (\mathcal{Z}/\mathcal{Z}_0).
+\]
+We have a diagram that looks like
+\[\everymath{\displaystyle}\arraycolsep=0.17em\def\arraystretch{2.2}
+ \begin{array}{ccccccccccc}
+ \displaystyle\log \left(\frac{\mathcal{Z}(m^2, \lambda)}{\mathcal{Z}(m^2, 0)}\right) & \sim &
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+
+ \draw [dashed] (0, 0.5) ellipse (0.3 and 0.5);
+ \draw (0, -0.5) ellipse (0.3 and 0.5);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \draw (-0.2, 0) ellipse (0.2 and 0.5);
+ \draw [dashed] (0.5, 0) ellipse (0.5 and 0.3);
+ \draw (1.2, 0) ellipse (0.2 and 0.5);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \draw [dashed] (-0.2, 0) ellipse (0.2 and 0.5);
+ \draw (0.5, 0) ellipse (0.5 and 0.3);
+ \draw [dashed] (1.2, 0) ellipse (0.2 and 0.5);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1.5, 0) {};
+ \draw [dashed] (0.75, 0) ellipse (0.75 and 0.7);
+ \draw (0.75, 0) ellipse (0.75 and 0.4);
+ \end{tikzpicture}
+ &+&
+ \cdots\\
+ && \frac{-\lambda}{4m^2 M^2} &+& \frac{\lambda^2}{16 m^4 M^4} &+& \frac{\lambda^2}{16 m^4 M^4} &+& \frac{\lambda^2}{8 m^4 M^4} &+& \cdots\\
+\end{array}
+\]
+We can also try to compute $\bra \phi^2\ket$. To do so, we consider diagrams with two vertices connected to a solid line:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (2, 0);
+ \end{tikzpicture}
+\end{center}
+The relevant diagrams are
+\[\everymath{\displaystyle}\arraycolsep=0.3em\def\arraystretch{2.2}
+ \begin{array}{ccccccccccccc}
+ \displaystyle \bra \phi^2 \ket &\sim&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+
+ \node [circ] at (0, -1) {};
+
+ \draw [dashed] (0.4, -1) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+
+ \node [circ] at (0, -0.667) {};
+ \node [circ] at (0, -1.333) {};
+
+ \draw [dashed] (0.4, -0.667) ellipse (0.4 and 0.25);
+ \draw [dashed] (0.4, -1.333) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+
+ \node [circ] at (0, -0.667) {};
+ \node [circ] at (0, -1.333) {};
+
+ \draw [dashed] (0, -1) ellipse (0.333 and 0.333);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+ \node [circ] at (0, -1) {};
+
+ \draw [dashed] (0.4, -1) ellipse (0.4 and 0.25);
+ \node [circ] at (0.8, -1) {};
+
+ \draw (1.2, -1) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+ &+&
+ \cdots\\
+ && \frac{1}{m^2} &+& \frac{-\lambda}{2m^4 M^2} &+& \frac{\lambda^2}{4m^6 M^4} &+& \frac{\lambda^2}{2m^6 M^4} &+& \frac{\lambda^2}{4m^6 M^4} &+& \cdots\\
+\end{array}
+\]
+Let's arrive at this result in a different way. Suppose we think of $\chi$ as ``heavy'', so we cannot access it directly using our experimental equipment. In particular, if we're only interested in the correlation functions that depend only on $\phi$, then we could try to ``integrate out'' $\chi$ first.
+
+Suppose we have a function $f(\phi)$, and consider the integral
+\[
+ \int_{\R^2} \d \phi\;\d \chi\; f(\phi) e^{-S(\phi, \chi)/\hbar} = \int_\R \d \phi \left(f(\phi) \int_\R \d \chi\;e^{-S(\phi, \chi)/\hbar}\right).
+\]
+We define the \term{effective action} for $\phi$, $S_{\mathrm{eff}}(\phi)$ by
+\[
+ S_{\mathrm{eff}}(\phi) = -\hbar \log \left(\int_\R \d \chi\; e^{-S(\phi, \chi)/\hbar}\right).
+\]
+Then the above integral becomes
+\[
+ \int_\R \d \phi\; f(\phi) e^{-S_{\mathrm{eff}}(\phi)/\hbar}.
+\]
+So doing any computation with $\phi$ only is equivalent to pretending $\chi$ doesn't exist, and using this effective action instead.
+
+
+In general, the integral is very difficult to compute, and we can only find an asymptotic series for the effective action. However, in this very particular example, we have chosen $S$ such that it is only quadratic in $\chi$, and one can compute to get
+\[
+ \int_\R \d \chi\; e^{-S(\phi, \chi)/\hbar} = e^{-m^2 \phi^2/2\hbar} \sqrt{\frac{2\pi \hbar}{M^2 + \lambda \phi^2/2}}.
+\]
+Therefore we have
+\begin{align*}
+ S_{\mathrm{eff}}(\phi) &= \frac{m^2}{2}\phi^2 + \frac{\hbar}{2} \log \left(1 + \frac{\lambda \phi^2}{2M^2}\right) + \frac{\hbar}{2} \log \left(\frac{M^2}{2\pi \hbar}\right)\\
+ &= \left(\frac{m^2}{2} + \frac{\hbar \lambda}{4M^2}\right)\phi^2 - \frac{\hbar \lambda^2}{16 M^4} \phi^4 + \frac{\hbar \lambda^3}{48 M^6} \phi^6 + \cdots + \frac{\hbar}{2} \log \left(\frac{M^2}{2\pi \hbar}\right)\\
+ &= \frac{m_{\mathrm{eff}}^2}{2} \phi^2 + \frac{\lambda_4}{4!} \phi^4 + \frac{\lambda_6}{6!} \phi^6 + \cdots + \frac{\hbar}{2} \log \left(\frac{M^2}{2\pi \hbar}\right),
+\end{align*}
+where
+\[
+ \lambda_{2k} = (-1)^{k + 1} \frac{\hbar (2k)!}{2^{k + 1}k} \frac{\lambda^k}{M^{2k}}.
+\]
+We see that once we integrated out $\chi$, we have generated an \emph{infinite series} of new interactions for $\phi$ in $S_{\mathrm{eff}} (\phi)$. Moreover, the mass also shifted as
+\[
+ m^2 \mapsto m^2_{\mathrm{eff}} = m^2 + \frac{\hbar \lambda}{2M^2}.
+\]
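Since the $\chi$ integral is Gaussian, the closed form above is easy to confirm by quadrature. In this Python sketch the parameter values are illustrative choices of ours:

```python
import math

hbar, m2, M2, lam = 1.0, 1.0, 4.0, 0.3   # illustrative parameter values

def S(phi, chi):
    return m2 / 2 * phi**2 + M2 / 2 * chi**2 + lam / 4 * phi**2 * chi**2

def chi_integral(phi, h=1e-3, L=8.0):
    # trapezoid-rule evaluation of int dchi exp(-S(phi, chi)/hbar)
    xs = [-L + i * h for i in range(int(2 * L / h) + 1)]
    w = [math.exp(-S(phi, x) / hbar) for x in xs]
    return h * (sum(w) - 0.5 * (w[0] + w[-1]))

def closed_form(phi):
    # the Gaussian-in-chi result quoted above
    return math.exp(-m2 * phi**2 / (2 * hbar)) \
        * math.sqrt(2 * math.pi * hbar / (M2 + lam * phi**2 / 2))
```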
+It is important to notice that the new vertices generated are \emph{quantum effects}. They vanish as $\hbar \to 0$ (they happen to be linear in $\hbar$ here, but that is just a coincidence). They are also suppressed by powers of $\frac{1}{M^2}$. So if $\chi$ is very very heavy, we might think these new couplings have very tiny effects.
+
+This is a very useful technique. If the universe is very complicated with particles at very high energies, then it would be rather hopeless to try to account for all of the effects of the particles we cannot see. In fact, it is impossible to know about these very high energy particles, as their energies are too high to reach. All we can know about them is their incarnation in terms of these induced couplings on lower energy fields.
+
+A few things to note:
+\begin{itemize}
+ \item The original action had a $\Z_2 \times \Z_2$ symmetry, given by $(\phi, \chi) \mapsto (\pm \phi, \pm \chi)$. This symmetry is preserved, and we do not generate any vertices with odd powers of $\phi$.
+ \item The effective action $S_{\mathrm{eff}}(\phi)$ also contains a field-independent term
+ \[
+ \frac{\hbar}{2} \log \left(\frac{M^2}{2\pi \hbar}\right).
+ \]
+ This plays no role in correlation functions $\bra f(\phi)\ket$. Typically, we are just going to drop it. However, this is one of the biggest problems in physics. This term contributes to the cosmological constant, and the scales that are relevant to this term are much larger than the actual observed cosmological constant. Thus, to obtain the observed cosmological constant, we must have some magic cancelling of these cosmological constants.
+ \item In this case, passing to the effective action produced a lot of \emph{new} interactions. However, if we already had a complicated theory, e.g.\ when we started off with an effective action, and then decided to integrate out more things, then instead of introducing new interactions, the effect of this becomes \emph{shifting} the coupling constants, just as we shifted the mass term.
+\end{itemize}
+
+In general, how can we compute the effective potential? We are still computing an integral of the form
+\[
+ \int_\R \d \chi\; e^{-S(\phi, \chi)/\hbar},
+\]
+so our previous technique of using Feynman diagrams should still work. We will treat $\phi$ as a constant in the theory. So instead of having a vertex that looks like
+\begin{center}
+ \begin{tikzpicture}[scale=0.707]
+ \draw (0, 0) -- (2, 2);
+ \draw [dashed] (0, 2) -- (2, 0);
+ \node [circ] at (1, 1) {};
+ \node [right] at (2, 1) {$-\lambda$};
+ \end{tikzpicture}
+\end{center}
+we drop all the $\phi$ lines and are left with
+\begin{center}
+ \begin{tikzpicture}[scale=0.707]
+ \draw [dashed] (0, 2) -- (2, 0);
+ \node [circ] at (1, 1) {};
+ \node [right] at (2, 1) {$-\lambda \phi^2$};
+ \end{tikzpicture}
+\end{center}
+But this would be rather confusing. So instead, we still draw the solid $\phi$ lines, but whenever such a line appears, we terminate it immediately, and each such termination contributes a factor of $\phi$:
+\begin{center}
+ \begin{tikzpicture}[scale=0.707]
+ \draw (0, 0) node [circ, mblue] {} -- (2, 2) node [circ, mblue] {};
+ \draw [dashed] (0, 2) -- (2, 0);
+ \node [circ] at (1, 1) {};
+ \node [right] at (2, 1) {$-\lambda \phi^2$};
+ \end{tikzpicture}
+\end{center}
+For our accounting purposes, we will say the internal vertex contributes a factor of $-\frac{\lambda}{2}$ (since the automorphism group of this vertex has order $2$, and the action had $\frac{\lambda}{4}$), and each terminating blue vertex contributes a factor of $\phi$.
+
+Since we have a ``constant'' term $\frac{m^2}{2} \phi^2$ as well, we need to add one more diagram to account for that. We allow a single edge of the form
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [mblue, circ] {} -- (2, 0) node [mblue, circ] {} node [pos=0.5, below] {$-m^2$} node [pos=0.5, above] { };
+ \node [circ, mgreen] at (1, 0) {};
+ \end{tikzpicture}
+\end{center}
+With these ingredients, we can compute the effective potential as follows:
+\[\everymath{\displaystyle}\arraycolsep=0.3em\def\arraystretch{2.2}
+ \begin{array}{ccccccccccc}
+ \displaystyle -S_{\mathrm{eff}}(\phi) &\sim&
+ \begin{tikzpicture}[eqpic]
+ \node [circ, mblue] at (0, 0) {};
+ \node [circ, mblue] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ, mblue] at (0, 0) {};
+ \node [circ, mblue] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+
+ \node [circ] at (0, -1) {};
+
+ \draw [dashed] (0.4, -1) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) node [circ, mblue] {} -- +(0, -1) node [circ] {} -- +(0, -2) node [circ, mblue] {};
+ \draw (0.8, 0) node [circ, mblue] {} -- +(0, -1) node [circ] {} -- +(0, -2) node [circ, mblue] {};
+ \draw [dashed] (0.4, -1) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) node [circ, mblue] {} -- +(0, -1) node [circ] {} -- +(0, -2) node [circ, mblue] {};
+ \draw (0.8, 0) node [circ, mblue] {} -- +(0, -1) node [circ] {} -- +(0, -2) node [circ, mblue] {};
+ \draw (0.4, 0.25) node [circ, mblue] {} -- +(0, -1) node [circ] {} -- +(0, -2) node [circ, mblue] {};
+ \draw [dashed] (0.4, -1) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+ &+&
+ \cdots\\
+ && \frac{-m^2}{2}\phi^2 &+& \frac{-\lambda}{4 M^2} \phi^2 &+& \frac{\lambda^2 \phi^4}{16 M^4} &+& \frac{- \lambda^3 \phi^6}{48 M^6} &+& \cdots
+\end{array}
+\]
+These are just the terms we've previously found. There are, again, a few things to note:
+\begin{itemize}
+ \item The diagram expansion is pretty straightforward in this case, because we started with a rather simple interacting theory. More complicated examples will have diagram expansions that are actually interesting.
+ \item We only sum over connected diagrams, as $S_{\mathrm{eff}}$ is the logarithm of the integral.
+ \item We see that the new/shifted couplings in $S_{\mathrm{eff}} (\phi)$ are generated by \emph{loops} of $\chi$ fields.
+ \item When we computed the effective action directly, we had a ``cosmological constant'' term $\frac{\hbar}{2} \log\left(\frac{M^2}{2 \pi \hbar}\right)$, but this doesn't appear in the Feynman diagram calculations. This is expected, since when we developed our Feynman rules, what it computed for us was things of the form $\log (\mathcal{Z}/\mathcal{Z}_0)$, and the $\mathcal{Z}_0$ term is that ``cosmological constant''.
+\end{itemize}
+
+\begin{eg}
+ We can use the effective potential to compute $\bra \phi^2 \ket$:
+ \[\everymath{\displaystyle}\arraycolsep=0.3em\def\arraystretch{2.2}
+ \begin{array}{ccccccc}
+ \displaystyle \bra \phi^2 \ket &\sim&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, -2) {};
+ \draw (0, 0) -- (0, -2);
+
+ \node [circ] at (0, -1) {};
+
+ \draw (0.4, -1) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+ &+&
+ \cdots\\
+ && \frac{1}{m_{\mathrm{eff}}^2} &+& \frac{-\lambda}{2m^6_{\mathrm{eff}}} &+& \cdots
+ \end{array}
+ \]
+ We can see that this agrees with our earlier calculation correct to order $\lambda^2$.
+\end{eg}
+
+At this moment, this is not incredibly impressive, since it takes a lot of work to compute $S_{\mathrm{eff}}(\phi)$. But the computation of $S_{\mathrm{eff}}(\phi)$ can be reused to compute any correlation involving $\phi$. So if we do many computations with $\phi$, then we can save time.
+
+But the point is not that this saves work. The point is that this is what we do when we do physics. We can never know if there is some really high energy field we can't reach with our experimental apparatus. Thus, we can only assume that the actions we discover experimentally are the effective actions coming from integrating out some high energy fields.
+
+\subsection{Fermions}
+So far, we have been dealing with Bosons. These particles obey Bose statistics, so different fields $\phi$ commute with each other. However, we want to study Fermions as well. This would involve variables that anti-commute with each other. When we did canonical quantization, this didn't pose any problem to our theory, because fields were represented by operators, and operators can obey any commutation or anti-commutation relations we wanted them to. However, in the path integral approach, our fields are actual numbers, and they are forced to commute.
+
+So how can we take care of fermions in our theory? We cannot actually use path integrals, as these involve quantities that commute. Instead, we will now treat the fields as formal symbols.
+
+The theory is rather trivial if there is only one field. So let's say we have many fields $\theta_1, \cdots, \theta_n$. Then a ``function'' of these fields will be a formal polynomial expression in the symbols $\theta_1, \cdots, \theta_n$ subject to the relations $\theta_i \theta_j = - \theta_j \theta_i$. So, for example, the action might be
+\[
+ S[\theta_1, \theta_2] = \frac{1}{2} m^2 \theta_1 \theta_2 = - \frac{1}{2}m^2 \theta_2 \theta_1.
+\]
+Now the path integral is defined as
+\[
+ \int \d \theta_1 \cdots \d \theta_n \; e^{-S[\theta_1, \ldots, \theta_n]}.
+\]
+There are two things we need to make sense of --- the exponential, and the integral.
+
+The exponential is easy. In general, for any analytic function $f: \R \to \R$ (or $f: \C \to \C$), we can write it as a power series
+\[
+ f(x) = \sum_{i = 0}^\infty a_i x^i.
+\]
+Then for any polynomial $p$ in the fields, we can similarly define
+\[
+ f(p) = \sum_{i = 0}^\infty a_i p^i.
+\]
+Crucially, this expression is a \emph{finite} sum, since we have only finitely many fields, and any monomial in the fields of degree greater than the number of fields must vanish by anti-commutativity.
+
+\begin{eg}
+ \begin{align*}
+ e^{\theta_1 + \theta_2} &= 1 + \theta_1 + \theta_2 + \frac{1}{2}(\theta_1 + \theta_2)^2 + \cdots\\
+ &= 1 + \theta_1 + \theta_2 + \frac{1}{2}(\theta_1 \theta_2 + \theta_2 \theta_1)\\
+ &= 1 + \theta_1 + \theta_2
+ \end{align*}
+\end{eg}
+
+How about integration? Since our fields are just formal symbols, and do not represent real/complex numbers, it doesn't make sense to actually integrate them. However, we can still define a linear functional from the space of all polynomials in the fields to the reals (or complexes) that is going to act as integration.
+
+If we have a single field $\theta$, then the most general polynomial expression in $\theta$ is $a + b \theta$. It turns out the correct definition for the ``integral'' is
+\[
+ \int \d \theta\; (a + b \theta) = b.
+\]
+This is known as the \term{Berezin integral}.
+
+How can we generalize this to more fields? Heuristically, the rule is that $\int \d \theta$ is an expression that should anti-commute with all fields. Thus, for example,
+\begin{align*}
+ \int \d \theta_1 \; \d \theta_2\; (3\theta_2 + 2 \theta_1 \theta_2) &= \int \d \theta_1 \left(3 \int \d \theta_2\; \theta_2 + 2 \int \d \theta_2 \; \theta_1 \theta_2\right)\\
+ &= \int \d \theta_1 \left(3 - 2 \theta_1 \int \d \theta_2 \; \theta_2\right)\\
+ &= \int \d \theta_1 (3 - 2 \theta_1)\\
+ &= -2.
+\end{align*}
+On the other hand, we have
+\begin{align*}
+ \int \d \theta_1 \; \d \theta_2\; (3\theta_2 + 2 \theta_2 \theta_1) &= \int \d \theta_1 \left(3 \int \d \theta_2\; \theta_2 + 2 \int \d \theta_2 \; \theta_2 \theta_1\right)\\
+ &= \int \d \theta_1 \left(3 + 2 \left(\int \d \theta_2 \; \theta_2\right) \theta_1\right)\\
+ &= \int \d \theta_1 (3 - 2 \theta_1)\\
+ &= 2.
+\end{align*}
+Formally speaking, we can define the integral by
+\[
+ \int \d \theta_1 \cdots \d \theta_n\; \theta_n \cdots \theta_1 = 1,
+\]
+sending all other monomials to $0$, and extending linearly.
+
+When actually constructing a fermion field, we want an action that is ``sensible''. So we would want a mass term $\frac{1}{2} m^2 \theta^2$ in the action. But for anti-commuting variables, this must vanish!
+
+The solution is to imagine that our fields are \emph{complex}, and we have two fields $\theta$ and $\bar{\theta}$. Of course, formally, these are just two formal symbols, and bear no relation to each other. However, we will think of them as being complex conjugates to each other. We can then define the action to be
+\[
+ S[\bar{\theta}, \theta] = \frac{1}{2} m^2 \bar{\theta} \theta.
+\]
+Then the partition function is
+\[
+ \mathcal{Z} = \int \d \theta\; \d \bar{\theta}\; e^{-S(\bar{\theta}, \theta)}.
+\]
+Similar to the previous computations, we can evaluate this to be
+\begin{align*}
+ \mathcal{Z} &= \int \d \theta \;\d \bar{\theta}\; e^{-S(\bar{\theta}, \theta)}\\
+ &= \int \d \theta \left(\int \d \bar{\theta}\; \left(1 - \frac{1}{2} m^2 \bar{\theta} \theta\right)\right)\\
+ &= \int \d \theta \left(\int \d \bar{\theta} - \frac{1}{2} m^2 \left(\int \d \bar{\theta}\; \bar{\theta}\right) \theta \right)\\
+ &= \int \d \theta \left(- \frac{1}{2} m^2 \theta\right)\\
+ &= -\frac{1}{2} m^2.
+\end{align*}
+
+We will need the following formula:
+\begin{prop}
+ For an invertible $n \times n$ matrix $B$ and $\eta_i, \bar{\eta}_i, \theta^i, \bar{\theta}^i$ independent fermionic variables for $i = 1,\ldots, n$, we have
+ \[
+ \mathcal{Z}(\eta, \bar{\eta}) = \int \d^n \theta\; \d^n \bar{\theta}\;\exp\left(\bar{\theta}^i B_{ij} \theta^j + \bar{\eta}_i \theta^i + \bar{\theta}^i \eta_i\right) = \det B \exp\left(\bar{\eta}_i (B^{-1})^{ij} \eta_j\right).
+ \]
+ In particular, we have
+ \[
+ \mathcal{Z} = \mathcal{Z}(0, 0) = \det B.
+ \]
+\end{prop}
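These Grassmann manipulations are purely combinatorial, so they can be checked by machine. The following is a minimal Python sketch (all names are our own, and the ordering of the multi-variable measure, which only affects the overall sign, is our choice): a Grassmann polynomial is a dict from sorted tuples of generator indices to coefficients, and the Berezin integral anticommutes $\theta_i$ to the front and strips it, with the rightmost differential acting first, as in the examples above.

```python
import math

def gmul(p, q):
    """Product of Grassmann polynomials: dicts {sorted index tuple: coeff}."""
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            if set(a) & set(b):
                continue                      # theta_i^2 = 0
            lst, sign = list(a + b), 1
            for i in range(len(lst)):         # bubble sort; each swap flips sign
                for j in range(len(lst) - 1):
                    if lst[j] > lst[j + 1]:
                        lst[j], lst[j + 1] = lst[j + 1], lst[j]
                        sign = -sign
            key = tuple(lst)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

def gadd(p, q):
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def gexp(p, n_gen):
    """exp(p) as a finite sum: monomials of degree > n_gen vanish."""
    result, power = {(): 1.0}, {(): 1.0}
    for i in range(1, n_gen + 1):
        power = gmul(power, p)
        result = gadd(result, {k: v / math.factorial(i) for k, v in power.items()})
    return result

def berezin(p, i):
    """d(theta_i): anticommute theta_i to the front, then strip it."""
    out = {}
    for k, c in p.items():
        if i in k:
            pos = k.index(i)
            out_k = k[:pos] + k[pos + 1:]
            out[out_k] = out.get(out_k, 0) + (-1) ** pos * c
    return out

def integrate(p, measure):
    """Iterated Berezin integral; rightmost differential acts first."""
    for i in reversed(measure):
        p = berezin(p, i)
    return p.get((), 0)

t1, t2 = {(0,): 1.0}, {(1,): 1.0}             # theta_1, theta_2

# e^{theta_1 + theta_2} = 1 + theta_1 + theta_2, since (theta_1 + theta_2)^2 = 0
exp_example = gexp(gadd(t1, t2), 2)

# int d(theta_1) d(theta_2) (3 theta_2 + 2 theta_1 theta_2) = -2
worked_example = integrate(gadd({(1,): 3.0}, {(0, 1): 2.0}), [0, 1])

# Z = int d(theta) d(thetabar) exp(-m^2 thetabar theta / 2) = -m^2/2, with m^2 = 4
m2 = 4.0
tbt = gmul({(1,): 1.0}, {(0,): 1.0})          # thetabar theta; theta = 0, thetabar = 1
Z = integrate(gexp({k: -0.5 * m2 * v for k, v in tbt.items()}, 2), [0, 1])

# Gaussian formula for n = 2: the integral of exp(thetabar_i B_ij theta_j)
# gives det B, with the measure ordered as d(theta_1) d(thetabar_1)
# d(theta_2) d(thetabar_2); other orderings differ by an overall sign.
B = [[2.0, 1.0], [1.0, 3.0]]
bilinear = {}
for i in range(2):
    for j in range(2):
        bilinear = gadd(bilinear, gmul({(2 + i,): B[i][j]}, {(j,): 1.0}))
detB = integrate(gexp(bilinear, 4), [0, 2, 1, 3])  # expect 2*3 - 1*1 = 5
```

Running this reproduces $e^{\theta_1 + \theta_2} = 1 + \theta_1 + \theta_2$, the worked integral $-2$, the partition function $-\frac{1}{2}m^2$, and $\det B$ for a $2 \times 2$ matrix.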
+As before, for any function $f$, we can again define the correlation function
+\[
+ \bra f(\bar{\theta}, \theta)\ket = \frac{1}{\mathcal{Z}(0, 0)} \int \d^n \theta\; \d^n \bar{\theta}\; e^{-S(\bar{\theta}, \theta)} f(\bar{\theta}, \theta).
+\]
+
+Note that usually, $S$ contains only even-degree terms, in which case it doesn't matter whether we place $f$ on the left or the right of the exponential.
+
+It is an exercise on the first example sheet to prove these, derive Feynman rules, and investigate further examples.
+\section{QFT in one dimension (i.e.\ QM)}
+\subsection{Quantum mechanics}
+In $d = 1$, there are two possible connected, compact manifolds $M$ --- we can have $M = S^1$ or $M = I = [0, 1]$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (1.5, 0) node [circ] {} node [pos=0.5, above] {$M = I$};
+
+ \draw (3, 0) rectangle (5, -2);
+ \draw (3, 0) -- (4, 0.5) -- (6, 0.5) -- (5, 0);
+ \draw (6, 0.5) -- (6, -1.5) -- (5, -2);
+ \draw [dashed] (4, 0.5) -- (4, -1.5) -- (6, -1.5);
+ \draw [dashed] (3, -2) -- (4, -1.5);
+
+ \draw (0.75, -0.2) edge [out=270, in=180] (4.1, -1);
+
+ \node at (2, -1.4) {$\phi$};
+
+ \draw [->] (4.099, -1) -- (4.1, -1);
+
+ \draw [thick, mred](3.7, -1.4) .. controls (4.4, -1.4) and (3.9, -0.7) .. (5.1, -0.5);
+ \end{tikzpicture}
+\end{center}
+We will mostly be considering the case where $M = I$. In this case, we need to specify boundary conditions on the field in the path integral, and we will see that this corresponds to providing start and end points to compute matrix elements.
+
+We let $t \in [0, 1]$ be the worldline coordinate parametrizing the field, and we write our field as
+\[
+ x: I \to N
+\]
+for some Riemannian manifold $(N, g)$, which we call the \term{target manifold}. If $U\subseteq N$ has coordinates $x^a$ with $a = 1, \cdots, n = \dim (N)$, then we usually write $x^a(t)$ for the coordinates of $x(t)$. The standard choice of action is
+\[
+ S[x] = \int_I \left(\frac{1}{2} g(\dot{x}, \dot{x}) + V(x)\right)\;\d t,
+\]
+where $\dot{x}$ is as usual the time derivative, and $V(x)$ is some potential term.
+
+We call this theory a \term{non-linear $\sigma$-model}\index{$\sigma$-model!non-linear}\index{sigma model!non-linear}. This is called $\sigma$-model because when people first wrote this theory down, they used $\sigma$ for the name of the field, and the name stuck. This is non-linear because the term $g(\dot{x}, \dot{x})$ can be very complicated when the metric $g$ is not flat. Note that $+V(x)$ is the correct sign for a Euclidean worldline.
+
+Classically, we look for the extrema of $S[x]$ (for fixed end points), and the solutions are given by the solutions to
+\[
+ \frac{\d^2 x^a}{\d t^2} + \Gamma^a\!_{bc} \dot{x}^b \dot{x}^c = g^{ab} \frac{\partial V}{\partial x^b}.
+\]
+The left-hand side is just the familiar geodesic equation we know from general relativity, and the right-hand side corresponds to some non-gravitational force.
+
+In the case of zero-dimensional quantum field theory, we just wrote down the partition function
+\[
+ \mathcal{Z} = \int e^{-S},
+\]
+and assumed it was an interesting thing to calculate. There wasn't really a better option because it is difficult to give a physical interpretation to a zero-dimensional quantum field theory. In this case, we will see that path integrals naturally arise when we try to do quantum mechanics.
+
+To do quantum mechanics, we first pick a Hilbert space $\mathcal{H}$. We usually take it as
+\[
+ \mathcal{H} = L^2(N, \d \mu),
+\]
+the space of square-integrable functions on $N$.
+
+To describe dynamics, we pick a Hamiltonian operator $H: \mathcal{H} \to \mathcal{H}$, with the usual choice being
+\[
+ H = \frac{1}{2} \Delta + V,
+\]
+where the \term{Laplacian} $\Delta$\index{$\Delta$} is given by
+\[
+ \Delta = \frac{1}{\sqrt{g}} \partial_a(\sqrt{g} g^{ab} \partial_b).
+\]
+As usual, the $\sqrt{g}$ refers to the square root of the determinant of $g$.
+
+We will work in the Heisenberg picture. Under this set up, the amplitude for a particle to travel from $x \in N$ to $y \in N$ in time $T$ is given by the \term{heat kernel}\index{$K_T(y, x)$}
+\[
+ K_T(y, x) = \brak{y} e^{-HT}\bket{x}.
+\]
+Note that we have $e^{-HT}$ instead of $e^{-iHT}$ because we are working in a Euclidean world. Strictly speaking, this doesn't really make sense, because $\bket{x}$ and $\bket{y}$ are not genuine elements of $\mathcal{H} = L^2(N, \d \mu)$. They are $\delta$-functions, which aren't functions. There are some ways to fix this problem. One way is to see that the above suggests $K_T$ satisfies
+\[
+ \frac{\partial}{\partial t} K_t(y, x) + H K_t(y, x) = 0,
+\]
+where we view $x$ as a fixed parameter, and $K_t$ is a function of $y$, so that $H$ can act on $K_t$. The boundary condition is
+\[
+ \lim_{t \to 0} K_t(y, x) = \delta (y - x),
+\]
+and this uniquely specifies $K_t$. So we can \emph{define} $K_t$ to be the unique solution to this problem.
+
+We can reconnect this back to the usual quantum mechanics by replacing the Euclidean time $T$ with $it$, and the above equation gives us the \term{Schr\"odinger equation}
+\[
+ i \frac{\partial K_t}{\partial t} (y, x) = H K_t(y, x).
+\]
+We first focus on the case where $V = 0$. In the case where $(N, g) = (\R^n, \delta)$, we know from, say, IB Methods, that the solution is given by
+\[
+ K_t(y, x) = \frac{1}{(2\pi t)^{n/2}} \exp\left(-\frac{|x - y|^2}{2t}\right).
+\]
+For an arbitrary Riemannian manifold $(N, g)$, it is in general very hard to write down a closed-form expression for $K_t$. However, we can find the asymptotic form
+\[
+ \lim_{t \to 0} K_t(y, x) \sim \frac{a(x)}{(2\pi t)^{n/2}} \exp\left(- \frac{d(y, x)^2}{2t}\right),
+\]
+where $d(x, y)$ is the geodesic distance between $x$ and $y$, and $a(x)$ is some invariant of our manifold built from (integrals of) polynomials of the Riemann curvature that isn't too important.
+
+Here comes the magic. We notice that since
+\[
+ \mathbb{I} = \int \d^n z\; \bket{z} \brak{z}
+\]
+is the identity operator, we can write
+\begin{align*}
+ K_{t_1 + t_2} (y, x) &= \brak{y} e^{-(t_1 + t_2)H} \bket{x}\\
+ &= \int \d^n z\; \brak{y} e^{-t_1H} \bket{z}\brak{z} e^{-t_2H} \bket{x} \\
+ &= \int \d^n z\; K_{t_2}(y, z) K_{t_1}(z, x).
+\end{align*}
+For flat space, this just reduces to the formula for the convolution of Gaussians. This is the \term{concatenation property} of the heat kernel.
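Since everything is explicit in flat space, the concatenation property can be verified numerically. Here is a small self-contained check (our own sketch, not part of the notes) comparing $K_{t_1 + t_2}(y, x)$ with a trapezoidal approximation of $\int \d z\; K_{t_2}(y, z) K_{t_1}(z, x)$ in one dimension:

```python
import math

def heat_kernel(t, y, x):
    """Flat 1D heat kernel K_t(y, x) = exp(-(y - x)^2 / 2t) / sqrt(2 pi t)."""
    return math.exp(-(y - x) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

def concatenated(t1, t2, y, x, lo=-10.0, hi=10.0, n=4001):
    """Trapezoidal approximation of  int dz K_{t2}(y, z) K_{t1}(z, x)."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        z = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * heat_kernel(t2, y, z) * heat_kernel(t1, z, x)
    return total * h

exact = heat_kernel(0.8, 1.0, 0.0)           # K_{t1 + t2} directly
approx = concatenated(0.3, 0.5, 1.0, 0.0)    # via the intermediate point z
```

The two numbers agree to within the quadrature error, which is tiny here because the integrand is a smooth Gaussian.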
+
+Using this many times, we can break up our previous heat kernel by setting
+\[
+ \Delta t = \frac{T}{N}
+\]
+for some large $N \in \N$. Then we have
+\[
+ K_T(y, x) = \int \prod_{i = 1}^{N - 1} \d^n x_i\; \prod_{j = 1}^{N} K_{\Delta t} (x_j, x_{j - 1}),
+\]
+where we conveniently set $x_0 = x$ and $x_N = y$.
+
+The purpose of introducing these small time steps is that we can now use the asymptotic form of $K_t(y, x)$. We can now say
+\begin{multline*}
+ \brak{y_1} e^{-HT} \bket{y_0} \\
+ = \lim_{N \to \infty} \left(\frac{1}{2\pi \Delta t}\right)^{nN/2} \int \prod_{i = 1}^{N - 1} \d^n x_i\; a(x_i) \exp\left(- \frac{\Delta t}{2}\sum_{i = 0}^{N - 1}\left(\frac{d(x_{i + 1}, x_i)}{\Delta t}\right)^2\right).
+\end{multline*}
+This looks more-or-less like a path integral! We now dubiously introduce the path integral measure
+\[
+ \D x \overset{?}{\equiv} \lim_{N \to \infty} \left(\frac{1}{2\pi \Delta t}\right)^{nN/2} \prod_{i = 1}^{N - 1} \d^n x_i\; a(x_i),
+\]
+and also assume our map $x(t)$ is at least once-continuously differentiable, so that
+\begin{multline*}
+ \lim_{N \to \infty} \prod_{i = 0}^{N - 1} \exp\left(-\frac{\Delta t}{2} \left(\frac{d(x_{i + 1}, x_i)}{\Delta t}\right)^2 \right)\\
+ \overset{?}{\equiv} \exp\left(-\frac{1}{2} \int \d t\; g(\dot{x}, \dot{x})\right) = \exp(-S[x]).
+\end{multline*}
+Assuming these things actually make sense (we'll later figure out they don't), we can write
+\[
+ \brak{y_1} e^{-HT} \bket{y_0} = \int_{\mathcal{C}_T[y_1, y_0]} \D x\; e^{-S[x]},
+\]
+where $\mathcal{C}_T[y_1, y_0]$ is the space of ``all'' maps $I \to N$ such that $x(0) = y_0$ and $x(1) = y_1$.
+
+Now consider an arbitrary $V \not= 0$. Then we can write
+\[
+ H = H_0 + V(x),
+\]
+where $H_0$ is the free Hamiltonian. Then we note that for small $t$, we have
+\[
+ e^{-Ht} = e^{-H_0t}e^{-V(x)t} + o(t).
+\]
+Thus, for small $t$, we have
+\begin{align*}
+ K_t(y, x) &= \brak{y} e^{-Ht}\bket{x}\\
+ &\approx \brak{y}e^{-H_0t} e^{-V(x) t}\bket{x}\\
+ &\sim \frac{a(x)}{(2\pi t)^{n/2}} \exp\left(- \left(\frac{1}{2} \left(\frac{d(y, x)}{t}\right)^2 + V(x)\right) t\right).
+\end{align*}
+Then repeating the above derivations will again give us
+\[
+ \brak{y_1} e^{-HT} \bket{y_0} = \int_{\mathcal{C}_T[y_1, y_0]} \D x\; e^{-S[x]}.
+\]
+
+Before we move on to express other things in terms of path integrals, and then realize our assumptions are all wrong, we make a small comment on the phenomena we see here.
+
+Notice that the states $\bket{y_0} \in \mathcal{H}$ and $\brak{y_1} \in \mathcal{H}^*$ we used to evaluate our propagator here arise as boundary conditions on the map $x$. This is a general phenomenon. The co-dimension-$1$ subspaces (i.e.\ subspaces of $M$ of one dimension lower than $M$) are associated to states in our Hilbert space $\mathcal{H}$. Indeed, when we did quantum field theory via canonical quantization, the states corresponded to the state of the universe at some fixed time, which is a co-dimension 1 slice of spacetime.
+
+\subsubsection*{The partition function}
+We can naturally interpret a lot of the things we meet in quantum mechanics via path integrals. In statistical physics, we often called the quantity $\Tr_\mathcal{H} (e^{-HT})$ the \term{partition function}. Here we can compute it as
+\[
+ \Tr_\mathcal{H}(e^{-HT}) = \int \d^n y\; \brak{y}e^{-HT} \bket{y}.
+\]
+Using our path integral formulation, we can write this as
+\[
+ \Tr_\mathcal{H} (e^{-HT}) = \int \d^n y \int_{\mathcal{C}_T[y, y]} \D x\; e^{-S} = \int_{\mathcal{C}_{S^1}} \D x\; e^{-S},
+\]
+where we integrate over all circles. This is the partition function $\mathcal{Z}(S^1, (N, g, V))$ of our theory. If one is worried about convergence issues, then we would have some problems in this definition. If we work in flat space, then $K_T(y, y) = \brak{y}e^{-HT}\bket{y}$ is independent of $y$. So when we integrate over all $y$, the answer diverges (as long as $K_T(y, y)$ is non-zero). However, we would get a finite result if we had a compact manifold instead.
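As a concrete check of the operator side of this formula (a sketch under our own discretization choices): for $N = \R$ and $V(x) = \frac{1}{2}x^2$, we have $\Tr_\mathcal{H}(e^{-HT}) = \sum_n e^{-E_n T}$ with $E_n = n + \frac{1}{2}$ in the usual conventions, so the top eigenvalue of one discretized Euclidean time step is $e^{-E_0 \Delta t}$ with $E_0 = \frac{1}{2}$. Power iteration on the transfer matrix recovers this:

```python
import math

dt, L, n = 0.05, 4.0, 101                  # Euclidean time step, box half-width, grid size
dz = 2 * L / (n - 1)
xs = [-L + i * dz for i in range(n)]

def k0(y, x):
    """Flat free heat kernel for one time step dt."""
    return math.exp(-(y - x) ** 2 / (2 * dt)) / math.sqrt(2 * math.pi * dt)

def V(x):
    return 0.5 * x * x                     # harmonic potential

# One Euclidean time step e^{-H dt}, symmetrized Trotter splitting,
# discretized as an n x n transfer matrix (dz is the integration measure).
A = [[math.exp(-0.5 * (V(xi) + V(xj)) * dt) * k0(xi, xj) * dz for xj in xs]
     for xi in xs]

# Power iteration: the top eigenvalue of A approximates e^{-E0 dt}.
v = [1.0] * n
for _ in range(300):
    w = [sum(row[j] * v[j] for j in range(n)) for row in A]
    norm = math.sqrt(sum(c * c for c in w))
    v = [c / norm for c in w]
lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n)) for i in range(n))
E0 = -math.log(lam) / dt                   # ground state energy; should be near 1/2
```

The Trotter and grid errors are small here, so `E0` comes out close to $\frac{1}{2}$; for large $T$, $\Tr_\mathcal{H}(e^{-HT})$ is dominated by $e^{-E_0 T}$.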
+
+\subsubsection*{Correlation functions}
+More interestingly, we can do correlation functions. We will begin by considering the simplest choice --- local operators.
+\begin{defi}[Local operator]\index{local operator}
+ A \emph{local operator} $\mathcal{O}(t)$ is one which depends on the values of the fields and finitely many derivatives just at one point $t \in M$.
+\end{defi}
+We further restrict to local operators that do not depend on derivatives. These are given by functions $\mathcal{O}: N \to \R$, and then by pullback we obtain an operator $\mathcal{O}(x(t))$.
+
+Suppose the corresponding quantum operator is $\hat{\mathcal{O}} = \mathcal{O}(\hat{x})$, characterized by the property
+\[
+ \hat{\mathcal{O}}\bket{x} = \mathcal{O}(x) \bket{x}.
+\]
+If we want to evaluate this at time $t$, then we would compute
+\[
+ \brak{y_1} \hat{\mathcal{O}}(t) \bket{y_0} = \brak{y_1} e^{-H(T - t)} \hat{\mathcal{O}} e^{-Ht} \bket{y_0}.
+\]
+But, inserting a complete set of states, this is equal to
+\begin{multline*}
+ \int \d^n x\; \brak{y_1} e^{-H(T - t)}\bket{x} \brak{x}\hat{\mathcal{O}} e^{-Ht} \bket{y_0} \\
+ = \int \d^n x\; \mathcal{O}(x) \brak{y_1} e^{-H(T - t)} \bket{x} \brak{x} e^{-Ht} \bket{y_0}.
+\end{multline*}
+Simplifying notation a bit, we can write
+\[
+ \brak{y_1} \hat{\mathcal{O}}(t) \bket{y_0} = \int \d^n x_t\; \mathcal{O}(x_t) \int\limits_{\mathcal{C}_{[T, t]} [y_1, x_t]} \D x\; e^{-S[x]} \int\limits_{\mathcal{C}_{[t, 0]}[x_t, y_0]} \D x\; e^{-S[x]}.
+\]
+But this is just the same as
+\[
+ \int\limits_{\mathcal{C}_{[T, 0]}[y_1, y_0]} \D x\; \mathcal{O}(x(t)) e^{-S[x]}.
+\]
+
+More generally, suppose we have a sequence of operators $\mathcal{O}_n, \cdots, \mathcal{O}_1$ we want to evaluate at times $T > t_n > t_{n - 1} > \cdots > t_1 > 0$, then by the same argument, we find
+\[
+ \brak{y_1} e^{-HT}\hat{\mathcal{O}}_n(t_n)\cdots\hat{\mathcal{O}}_1(t_1) \bket{y_0} = \int_{\mathcal{C}_{[0, T]}[y_0, y_1]} \D x\; \mathcal{O}_n(x(t_n)) \cdots \mathcal{O}_1(x(t_1)) e^{-S[x]}.
+\]
+Note that it is crucial that the operators are ordered in this way, as one would see by actually trying to prove this. Indeed, the $\hat{\mathcal{O}}_i$ are operators, but the objects in the path integral, i.e.\ the $\mathcal{O}_i(x(t_i))$, are just functions. Multiplication of operators does not commute, but multiplication of functions does. In general, if $t_1, \cdots, t_n \in (0, T)$ is a collection of times, then we have
+\[
+ \int \D x \prod_{i = 1}^n \mathcal{O}_i(x(t_i)) e^{-S[x]} = \brak{y_1} e^{-HT}\mathcal{T} \prod_{i = 1}^n \hat{\mathcal{O}}_i(t_i) \bket{y_0},
+\]
+where $\mathcal{T}$ denotes the time ordering operator. For example, for $n = 2$, we have
+\[
+ \mathcal{T} \lbrack \hat{\mathcal{O}}_1(t_1) \hat{\mathcal{O}}_2(t_2)\rbrack = \Theta(t_2 - t_1) \hat{\mathcal{O}}_2(t_2) \hat{\mathcal{O}}_1(t_1) + \Theta(t_1 - t_2) \hat{\mathcal{O}}_1(t_1) \hat{\mathcal{O}}_2(t_2),
+\]
+where $\Theta$ is the step function.
+
+It is interesting to note that in this path integral formulation, we see that we get non-trivial correlation between operators at different times only because of the kinetic (derivative) term in the action. Indeed, for a free theory, the discretized version of the path integral looked like
+\[
+ S_{\mathrm{kin}}[x] = \sum_i \frac{1}{2} \left(\frac{x_{i + 1} - x_i}{\Delta t}\right)^2 \Delta t.
+\]
+Now if we don't have this term, and the action is purely potential:
+\[
+ S_{\mathrm{pot}}[x] = \sum_i V(x_i),
+\]
+then the discretized path integral would have factorized into a product of integrals at these sampling points $x_i$. It would follow that for any operators $\{\mathcal{O}_i\}$ that depend only on position, we have
+\[
+ \left\bra \prod_i \mathcal{O}_i(x(t_i))\right\ket = \prod_i \left\bra \mathcal{O}_i(x(t_i))\right\ket,
+\]
+and this is incredibly boring. When we work with higher dimensional universes, the corresponding result shows that if there are no derivative terms in the action, then events at one position in spacetime have nothing to do with events at any other position.
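The factorization is easy to see explicitly in a toy model with two time steps (our own sketch): with only potential terms, the connected correlator $\bra x_1 x_2 \ket - \bra x_1 \ket \bra x_2 \ket$ vanishes identically, while adding the discretized kinetic link $\frac{1}{2}(x_2 - x_1)^2$ makes it equal to the Gaussian covariance, here $\frac{1}{3}$:

```python
import math

def moments(kinetic):
    """Two-time-slice toy model: S = V(x1) + V(x2) + optional kinetic link,
    with V(x) = x^2/2 and Delta t = 1, integrated on a grid."""
    lo, hi, n = -6.0, 6.0, 121
    h = (hi - lo) / (n - 1)
    Z = m1 = m2 = m12 = 0.0
    for i in range(n):
        x1 = lo + i * h
        for j in range(n):
            x2 = lo + j * h
            s = 0.5 * x1 * x1 + 0.5 * x2 * x2
            if kinetic:
                s += 0.5 * (x2 - x1) ** 2    # discretized derivative term
            w = math.exp(-s)
            Z += w
            m1 += x1 * w
            m2 += x2 * w
            m12 += x1 * x2 * w
    return m1 / Z, m2 / Z, m12 / Z

a1, a2, a12 = moments(False)
connected_potential_only = a12 - a1 * a2      # weight factorizes: zero
b1, b2, b12 = moments(True)
connected_with_kinetic = b12 - b1 * b2        # Gaussian covariance: 1/3
```

For the coupled Gaussian the precision matrix is $\begin{psmallmatrix} 2 & -1 \\ -1 & 2\end{psmallmatrix}$, whose inverse has off-diagonal entry $\frac{1}{3}$, which is what the quadrature reproduces.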
+
+We have already seen this phenomenon when we did quantum field theory with perturbation theory --- the interactions between different times and places are given by propagators, and these propagators arise from the kinetic terms in the Lagrangian.
+
+ % Note that in the case of QM, it might not be clear what we are computing when we write $\brak{y_1} \mathcal{O}(t) \bket{y_0}$. But, when we do full-blown quantum field theory, we could, for example, take $\bket{y_1} = \bket{y_0} = \bket{\Omega}$, the vacuum, and $\mathcal{O}(t) = \phi(x) \phi(y)$. Then this two-point correlator is just the propagator we are familiar with!
+\subsubsection*{Derivative terms}
+We now move on to consider more general functions of the field and its derivative. Consider operators $\mathcal{O}_i(x, \dot{x}, \cdots)$. We might expect that the value of this operator is related to path integrals of the form
+\[
+ \int \D x\; \mathcal{O}_1(x, \dot{x}, \cdots)|_{t_1} \mathcal{O}_2(x, \dot{x}, \cdots)|_{t_2} e^{-S[x]}
+\]
+But this can't be right. We were told that one of the most important properties of quantum mechanics is that operators do not commute. In particular, for $p_i = \dot{x}_i$, we had the renowned commutator relation
+\[
+ [\hat{x}^i, \hat{p}_j] = \delta^i_j.
+\]
+But in this path integral formulation, we feed in \emph{functions} to the path integral, and it knows nothing about how we order $x$ and $\dot{x}$ in the operators $\mathcal{O}_i$. So what can we do?
+
+The answer to this is really really important. The answer is that path integrals don't work.
+\subsubsection*{The path integral measure}
+Recall that to express our correlation functions as path integrals, we had to take the limits
+\[
+ \D x \overset{?}{=} \lim_{N \to \infty} \frac{1}{(2\pi \Delta t)^{nN/2}}\prod_{i = 1}^{N - 1} \d^n x_i a(x_i),
+\]
+and also
+\[
+ S[x] \overset{?}{=} \lim_{N \to \infty}\sum_{i = 0}^{N - 1} \frac{1}{2} \left(\frac{x_{i + 1} - x_i}{\Delta t}\right)^2 \Delta t.
+\]
+Do these actually make sense?
+
+What we are trying to do with these expressions is that we are trying to \emph{regularize} our path integral, i.e.\ find a finite-dimensional approximation of the path integral. For quantum field theory in higher dimensions, this is essentially a lattice regularization.
+
+Before we move on and try to see if this makes sense, we look at another way of regularizing our path integral. To do so, we decompose our field into Fourier modes:
+\[
+ x^a(t) = \sum_{k \in \Z} x_k^a e^{-2\pi i k t/T},
+\]
+and then we can obtain a regularized form of the action as
+\[
+ S_N[x] = \sum_{k = -N}^N \frac{1}{2} k^2 |x_k|^2.
+\]
+Under this decomposition, we can take the regularized path integral measure to be
+\[
+ \D_N x = \prod_{|k| \leq N} \d^n x_k.
+\]
+This is analogous to high-energy cutoff regularization. Now the natural question to ask is --- do the limits
+\[
+ \lim_{N \to \infty} \int \D_N x,\quad \lim_{N \to \infty} S_N[x]
+\]
+exist?
+
+The answer is, again: NO! This is in fact not a failure of our particular way of taking limits. It is a general fact of life that we cannot have a Lebesgue measure on an infinite-dimensional inner product space (i.e.\ vector space with an inner product).
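Before proving this, we can see the phenomenon in the mode regularization above (a one-off numerical sketch of our own). Each mode contributes $\int \d x_k\; e^{-(k^2 + m^2) x_k^2/2} = \sqrt{2\pi/(k^2 + m^2)}$, so $\log \mathcal{Z}_N \to -\infty$ as the cutoff $N \to \infty$; however, ratios of regularized integrals at two different masses do converge, to a closed form obtainable from $\prod_{k \geq 1}(1 + a^2/k^2) = \sinh(\pi a)/(\pi a)$:

```python
import math

def log_Z(N, m2):
    """log of the mode-regularized Gaussian integral: each Fourier mode k
    contributes  int dx_k exp(-(k^2 + m2) x_k^2 / 2) = sqrt(2 pi/(k^2 + m2))."""
    return sum(0.5 * math.log(2 * math.pi / (k * k + m2))
               for k in range(1, N + 1))

# The regularized partition function itself has no limit:
# log Z_N decreases without bound as the cutoff N grows.
tails = [log_Z(N, 1.0) for N in (10, 100, 1000)]

# But ratios of regularized integrals converge as the cutoff is removed,
# here to (1/2) log( a sinh(pi b) / (b sinh(pi a)) ) for masses a and b.
a, b = 1.0, 2.0
ratio = log_Z(100000, a * a) - log_Z(100000, b * b)
exact = 0.5 * math.log(a * math.sinh(math.pi * b) / (b * math.sinh(math.pi * a)))
```

This is a baby version of what we always do in practice: only suitably normalized ratios of regularized path integrals have a chance of surviving the limit.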
+
+Recall the following definition:
+\begin{defi}[Lebesgue measure]\index{Lebesgue measure}
+ A \emph{Lebesgue measure} $\d \mu$ on an inner product space $V$ obeys the following properties
+ \begin{itemize}
+ \item For all non-empty open subsets $U \subseteq V$, we have
+ \[
+ \vol(U) = \int_U \d \mu > 0.
+ \]
+ \item If $U'$ is obtained by translating $U$, then
+ \[
+ \vol(U') = \vol(U).
+ \]
+ \item Every $x \in V$ is contained in at least one open neighbourhood $U_x$ with finite volume.
+ \end{itemize}
+\end{defi}
+Note that we don't really need the inner product structure in this definition. We just need it to tell us what the word ``open'' means.
+
+We now prove that there cannot be any Lebesgue measure on an infinite dimensional inner product space.
+
+We first consider the case of a finite-dimensional inner product space. Any such inner product space is isomorphic to $\R^D$ for some $D$. Write $C(L)$ for an open hypercube of side length $L$. By translation invariance, any two such hypercubes have the same volume.
+
+Now we note that given any such hypercube, we can cut it up into $2^D$ hypercubes of side length $L/2$:
+\begin{center}
+ \makecenter{\begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (2, 0) node [circ] {} node [pos=0.5, circ] {};
+ \end{tikzpicture}}
+ \quad\quad
+ \makecenter{\begin{tikzpicture}
+ \draw (0, 0) rectangle (2, 2);
+ \draw [gray] (1, 0) -- (1, 2);
+ \draw [gray] (0, 1) -- (2, 1);
+ \end{tikzpicture}}
+ \quad\quad
+ \makecenter{\begin{tikzpicture}
+ \draw [thick] (0, 0) rectangle (2, -2);
+ \draw [thick] (0, 0) -- (0.8, 0.5) -- (2.8, 0.5) -- (2, 0);
+ \draw [thick] (2.8, 0.5) -- (2.8, -1.5) -- (2, -2);
+ \draw [dashed, thick] (0.8, 0.5) -- (0.8, -1.5) -- (2.8, -1.5);
+ \draw [dashed, thick] (0, -2) -- (0.8, -1.5);
+
+ \draw [gray] (1, 0) -- (1, -2);
+ \draw [gray] (0, -1) -- (2, -1);
+
+ \draw [gray] (1.8, 0.5) -- (1.8, -1.5);
+ \draw [gray] (0.8, -0.5) -- (2.8, -0.5);
+
+ \draw [gray] (1, 0) -- (1.8, 0.5);
+ \draw [gray] (1, -2) -- (1.8, -1.5);
+
+ \draw [gray] (2, -1) -- (2.8, -0.5);
+ \draw [gray] (0, -1) -- (0.8, -0.5);
+
+ \draw [gray] (0.4, -0.75) -- (2.4, -0.75);
+
+ \draw [gray] (1.4, 0.25) -- (1.4, -1.75);
+
+ \draw [gray] (0.4, 0.25) -- (2.4, 0.25) -- (2.4, -1.75);
+ \draw [gray] (0.4, 0.25) -- (0.4, -1.75) -- (2.4, -1.75);
+
+ \draw [gray] (1, -1) -- (1.8, -0.5);
+ \end{tikzpicture}}
+\end{center}
+Then since $C(L)$ contains $2^D$ disjoint copies of $C(L/2)$ (note that it is not exactly the union of them, since we are missing some boundary points), we know that
+\[
+ \vol(C(L)) \geq \sum_{i = 1}^{2^D} \vol(C(L/2)) = 2^D \vol(C(L/2)).
+\]
+Now in the case of an infinite dimensional vector space, $C(L)$ will contain \emph{infinitely many} copies of $C(L/2)$. So since $\vol(C(L/2))$ must be non-zero, as it is open, we know $\vol(C(L))$ must be infinite, and this is true for any $L$. Since any open set must contain some open hypercube, it follows that all open sets have infinite measure, and we are dead.
+
+\begin{thm}
+ There are no Lebesgue measures on an infinite dimensional inner product space.
+\end{thm}
+
+This means whenever we do path integrals, we need to understand that we are not actually doing an integral in the usual sense, but we are just using a shorthand for the limit of the discretized integral
+\[
+ \lim_{N \to \infty} \left(\frac{1}{2\pi \Delta t}\right)^{nN/2} \int \prod_{i = 1}^{N - 1} \d^n x_i\; \exp\left(- \frac{1}{2}\sum_{i = 0}^{N - 1}\left(\frac{|x_{i + 1}- x_i|}{\Delta t}\right)^2 \Delta t\right)
+\]
+as a whole. In particular, we cannot expect the familiar properties of integrals to always hold for our path integrals.
+
+If we just forget about this problem and start to do path integrals, then we would essentially be writing down nonsense. We can follow perfectly logical steps and prove things, but the output will still be nonsense. Then we would have to try to invent some new nonsense to make sense of the nonsense. This was, in fact, how renormalization was invented! But as we will see, that is not what renormalization really is about.
+
+Note that we showed that the measure $\D x$ doesn't exist, but what we really need wasn't $\D x$. What we really needed was
+\[
+ \int \D x\; e^{-S[x]}.
+\]
+This is no longer translation invariant, so it is conceivable that it exists. Indeed, in the case of a 1D quantum field theory, it does, and is known as the \term{Wiener measure}.
+
+In higher dimensions, we are less certain. We know it doesn't exist for QED, and we believe it does not exist for the standard model. However, we believe that it does exist for Yang--Mills theory in four dimensions.
+
+\subsubsection*{Non-commutativity in QM}
+Now we know that the path integral measure doesn't exist, and this will solve our problem with non-commutativity. Indeed, as we analyze the discretization of the path integral, the fact that
+\[
+ [\hat{x}, \hat{p}] \not= 0
+\]
+will fall out naturally.
+
+Again, consider a free theory, and pick times
+\[
+ T > t_+ > t > t_- > 0.
+\]
+We will consider
+\begin{align*}
+ \int \D x\; x(t) \dot{x}(t_-) e^{-S} &= \brak{y_1} e^{-H(T - t)} \hat{x} e^{-H(t - t_-)} \hat{p} e^{-H t_-} \bket{y_0},\\
+ \int \D x\; x(t) \dot{x}(t_+) e^{-S} &= \brak{y_1} e^{-H(T - t_+)} \hat{p} e^{-H (t_+ - t)} \hat{x} e^{-Ht}\bket{y_0}.
+\end{align*}
+As we take the limit $t_{\pm} \to t$, the difference of the right hand sides becomes
+\[
+ \brak{y_1} e^{-H(T - t)}[\hat{x}, \hat{p}] e^{-H t}\bket{y_0} = \brak{y_1} e^{-HT} \bket{y_0} \not= 0.
+\]
+On the other hand, in the continuum path integral, the limit seems to give the same expression in both cases, and the difference vanishes, naively. The problem is that we \emph{need} to regularize. We cannot just bring two operators together in time and expect it to behave well. We saw that the path integral was sensitive to the time-ordering of the operators, so we expect something ``discrete'' to happen when the times cross. This is just like in perturbation theory, when we bring two events together in time, we have to worry that the propagators become singular.
+
+Normally, we would have something like
+\[
+ x(t) \dot{x}(t_-) - x(t) \dot{x}(t_+) = x_t \left(\frac{x_{t_-} - x_{t_- - \delta t}}{\delta t}\right) - x_t \left(\frac{x_{t_+ + \delta t} - x_{t_+}}{ \delta t}\right).
+\]
+In the regularized integral, we can keep increasing $t_-$ and decreasing $t_+$, until we get to the point
+\[
+ x_t \left(\frac{x_t - x_{t - \Delta t}}{\Delta t}\right) - x_t \left(\frac{x_{t + \Delta t} - x_t}{ \Delta t}\right).
+\]
+Now that $t_{\pm}$ have hit $t$, we need to look carefully what happens to the individual heat kernels. In general, we stop taking the limit as soon as any part of the discretized derivative touches $x_t$. The part of the integral that depends on $x_t$ looks like
+\[
+ \int \d^n x_t\; K_{\Delta t}(x_{t + \Delta t}, x_t) x_t \left(\frac{x_t - x_{t - \Delta t}}{\Delta t} - \frac{x_{t + \Delta t} - x_t}{ \Delta t}\right) K_{\Delta t} (x_t, x_{t - \Delta t}).
+\]
+Using the fact that
+\[
+ K_{\Delta t}(x_t, x_{t - \Delta t}) \sim \exp\left(- \frac{(x_t - x_{t - \Delta t})^2}{2\Delta t}\right),
+\]
+we can write the integral as
+\[
+ -\int \d^n x_t \; x_t \frac{\partial}{\partial x_t} \Big(K_{\Delta t}(x_{t + \Delta t}, x_t) K_{\Delta t}(x_t, x_{t - \Delta t})\Big).
+\]
+Now integrating by parts, we get that this is equal to
+\[
+ \int \d^n x_t\; K_{\Delta t} (x_{t + \Delta t}, x_t) K_{\Delta t} (x_t, x_{t - \Delta t}) = K_{2 \Delta t} (x_{t + \Delta t}, x_{t - \Delta t}).
+\]
+So we get the same as in the operator approach.
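In one dimension with the flat kernel, this point-splitting computation can be verified numerically (our own sketch): the integral over $x_t$ against the difference of the two discretized derivatives really does reproduce $K_{2\Delta t}(x_{t + \Delta t}, x_{t - \Delta t})$.

```python
import math

dt = 0.1
x_plus, x_minus = 0.3, -0.2     # x_{t + dt} and x_{t - dt}, arbitrary choices

def K(t, y, x):
    """Flat 1D heat kernel."""
    return math.exp(-(y - x) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

def integrand(xt):
    fwd = (x_plus - xt) / dt    # forward difference at t
    bwd = (xt - x_minus) / dt   # backward difference at t
    return K(dt, x_plus, xt) * xt * (bwd - fwd) * K(dt, xt, x_minus)

# trapezoidal rule over the intermediate point x_t
lo, hi, n = -10.0, 10.0, 40001
h = (hi - lo) / (n - 1)
total = 0.0
for i in range(n):
    xt = lo + i * h
    w = 0.5 if i in (0, n - 1) else 1.0
    total += w * integrand(xt)
integral = total * h

expected = K(2 * dt, x_plus, x_minus)
```

The agreement is exact up to quadrature error, matching the operator-ordering computation above.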
+
+%\subsubsection*{``All'' paths}
+%We now have a conundrum --- we said that to get non-commutativity, we relied on the path integral measure not existing, and then we see that non-commutativity appears naturally when we regularize. However, we also said that the path integral measure exists in the case of 1D quantum field theories. So how do we get $[x, \dot{x}] \not= 0$?
+%
+%The answer is that $\dot{x}$ doesn't exist! When we constructed the space of all paths, a path is essentially a limit of piecewise linear functions, and there is no reason to believe such a path is even differentiable. Instead, the space $\mathcal{C}_T[y_1, y_0]$ consists of all \emph{continuous} paths from $y_0$ to $y_1$. This space consists of mostly nowhere-differentiable functions, and it doesn't make sense to talk about the derivative of such functions.
+
+\subsection{Feynman rules}
+Consider a theory with a single field $x: S^1 \to \R$, and action
+\[
+ S[x] = \int_{S^1} \d t\; \left(\frac{1}{2} \dot{x}^2 + \frac{1}{2}m^2 x^2 + \frac{\lambda}{4!} x^4\right).
+\]
+We pick $S^1$ as our universe so that we don't have to worry about boundary conditions. Then the path integral for the partition function is
+\begin{align*}
+ \frac{\mathcal{Z}}{\mathcal{Z}_0} &= \frac{1}{\mathcal{Z}_0}\int_{S^1} \D x\; e^{-S[x]}\\
+ &\sim \frac{1}{\mathcal{Z}_0}\sum_{n = 0}^N \int \prod_{i = 1}^n \d t_i\; \int_{S^1} \D x\; e^{-S_{\mathrm{free}}[x]} \frac{(-\lambda)^n}{(4!)^n n!} \prod_{i = 1}^n x(t_i)^4\\
+ &= \sum_{n = 0}^N \int \prod_{i = 1}^n \d t_i\; \frac{(-\lambda)^n}{(4!)^n n!}\left\bra \prod_{i = 1}^n x(t_i)^4\right\ket_{\mathrm{free}}
+\end{align*}
+So we have again reduced the problem to computing the correlators of the free theory.
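In zero dimensions the analogous statement can be tested directly: expanding $e^{-S}$ gives the asymptotic series $\sum_n \frac{(-\lambda)^n}{(4!)^n n!} \bra x^{4n}\ket_{\mathrm{free}}$, and its truncations track the exact integral. A numerical sketch (the values $m^2 = 1$, $\lambda = 0.1$ and the truncation order are toy choices):

```python
import math

m2, lam = 1.0, 0.1    # toy values of m^2 and lambda

# exact Z/Z_0 = int e^{-m^2 x^2/2 - lam x^4/4!} dx / int e^{-m^2 x^2/2} dx
xs = [-8 + 16*i/40000 for i in range(40001)]
h = xs[1] - xs[0]
num = sum(math.exp(-m2*x*x/2 - lam*x**4/24) for x in xs) * h
den = sum(math.exp(-m2*x*x/2) for x in xs) * h
exact = num / den

def double_fact(k):
    # (2k-1)!!, the number of Wick pairings of 2k fields
    r = 1
    for j in range(1, 2*k, 2):
        r *= j
    return r

# truncated series sum_n (-lam)^n / (4!^n n!) <x^{4n}>, with <x^{4n}> = (4n-1)!!/m^{4n}
series = sum((-lam)**n / (24**n * math.factorial(n)) * double_fact(2*n) / m2**(2*n)
             for n in range(6))
assert abs(series - exact) < 1e-4
```

Since the series comes from expanding $e^{-u}$ with $|R_N| \le u^N/N!$, the truncation error is bounded by the first omitted term, which is what makes the comparison above safe at small $\lambda$.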
+%
+%Again, to compute these, we add a source term, and consider
+%\[
+% \mathcal{Z}(\{J_i\}) = \int \D x\; \exp\left(\int \d t\; \left(\frac{1}{2} \dot{x}^2 + \frac{1}{2}m^2 x^2\right)\right).
+%\]
+
+Instead of trying to compute these correlators directly, we instead move to momentum space. We write
+\[
+ x(t) = \sum_{k \in \Z} x_k e^{-i k t}.
+\]
+For the sake of brevity (or rather, laziness of the author), we shall omit all factors of $2\pi$. Using orthogonality relations, we have
+\[
+ S[x] = \sum_{k \in \Z} \frac{1}{2}(k^2 + m^2) |x_k|^2 + \sum_{k_1, k_2, k_3, k_4 \in \Z}\delta(k_1 + k_2 + k_3 + k_4) \frac{\lambda}{4!}x_{k_1} x_{k_2} x_{k_3} x_{k_4}.
+\]
+Note that here the $\delta$ is the ``discrete'' version, so it is $1$ if the argument vanishes, and $0$ otherwise.
+
+Thus, we may equivalently represent
+\[
+ \frac{\mathcal{Z}}{\mathcal{Z}_0} \sim \sum_{n = 0}^N \sum_{\{k_j^{(i)}\}} \frac{(-\lambda)^n}{4!^n n!}\prod_{i = 1}^n \delta(k_1^{(i)} + k_2^{(i)} + k_3^{(i)} + k_4^{(i)})\left\bra \prod_{i = 1}^n \prod_{j = 1}^4 x_{k_j^{(i)}}\right\ket_{\mathrm{free}}.
+\]
+This time, we are summing over momentum-space correlators. But in momentum space, the free part of the action just looks like countably many free, decoupled $0$-dimensional fields! Moreover, each correlator involves only finitely many of these fields. So we can reuse our results for the $0$-dimensional field, i.e.\ we can compute these correlators using Feynman diagrams! This time, the propagators have value $\frac{1}{k^2 + m^2}$.
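The statement that the free action diagonalizes into decoupled modes with propagator $\frac{1}{k^2 + m^2}$ can be checked on a lattice stand-in for the circle: plane waves are eigenvectors of the discretized $-\frac{\d^2}{\d t^2} + m^2$, with eigenvalues tending to $k^2 + m^2$, so the inverse (the propagator) tends to $\frac{1}{k^2 + m^2}$. A sketch (lattice size and tolerance are arbitrary choices):

```python
import cmath, math

N = 256                 # lattice sites on the circle, spacing a = 2 pi / N
a = 2*math.pi/N
m2 = 1.0

def apply_op(v):
    # (-d^2/dt^2 + m^2) via the standard 3-point stencil on a periodic lattice
    return [(-v[(i+1) % N] + 2*v[i] - v[(i-1) % N])/a**2 + m2*v[i]
            for i in range(N)]

for k in [0, 1, 2, 5]:
    v = [cmath.exp(-1j*k*a*i) for i in range(N)]
    w = apply_op(v)
    eig = (w[0]/v[0]).real
    # lattice eigenvalue (2 - 2 cos ka)/a^2 + m^2 -> k^2 + m^2 as a -> 0
    assert abs(eig - (k*k + m2)) < 0.05
```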
+
+If we are working over a non-compact space, then we have a Fourier transform instead of a Fourier series, and we get
+\[
+ \frac{\mathcal{Z}}{\mathcal{Z}_0} \sim \sum_{n = 0}^N \int \prod_{i = 1}^n \prod_{j = 1}^4 \d k_j^{(i)} \frac{\lambda^n}{4!^n n!}\prod_{i = 1}^n \delta(k_1^{(i)} + k_2^{(i)} + k_3^{(i)} + k_4^{(i)}) \left\bra \prod_{i = 1}^n \prod_{j = 1}^4 x(k_j^{(i)})\right\ket_{\mathrm{free}}.
+\]
+\subsection{Effective quantum field theory}
+We now see what happens when we try to obtain effective field theories in 1 dimension. Suppose we have two real-valued fields $x, y: S^1 \to \R$. We pick the circle as our universe so that we won't have to worry about boundary conditions. We pick the action
+\[
+ S[x, y] = \int_{S^1} \left(\frac{1}{2} \dot{x}^2 + \frac{1}{2} \dot{y}^2 + \frac{1}{2} m^2 x^2 + \frac{1}{2}M^2 y^2 + \frac{\lambda}{4} x^2y^2 \right)\;\d t.
+\]
+As in the zero-dimensional case, we have Feynman rules
+\begin{center}
+ \makecenter{
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 0) node [pos=0.5, below] {$1/(k^2 + m^2)$} node [pos=0.5, above] { };
+ \end{tikzpicture}}\quad\quad
+ \makecenter{\begin{tikzpicture}
+ \draw [dashed] (0, 0) -- (2, 0) node [pos=0.5, below] {$1/(k^2 + M^2)$} node [pos=0.5, above] { };
+ \end{tikzpicture}}\quad\quad
+ \makecenter{\begin{tikzpicture}[scale=0.707]
+ \draw (0, 0) -- (2, 2);
+ \draw [dashed] (0, 2) -- (2, 0);
+ \node [circ] at (1, 1) {};
+ \node [right] at (2, 1) {$-\lambda$};
+ \end{tikzpicture}}
+\end{center}
+As in the case of zero-dimensional QFT, if we are only interested in the correlations involving $x(t)$, then we can integrate out the field $y(t)$ first. The $y$ path integral is
+\[
+ \int \D y\; \exp\left(-\frac{1}{2} \int_{S^1} y\left(-\frac{\d^2}{\d t^2} + M^2 + \frac{\lambda x^2}{2}\right) y \;\d t\right),
+\]
+where we integrated by parts to turn $\dot{y}^2$ to $y \ddot{y}$.
+
+We start doing dubious things. Recall that we previously found that for a bilinear operator $M: \R^n \times \R^n \to \R$, we have
+\[
+ \int_{\R^n} \d^n x\; \exp\left(- \frac{1}{2} M(x, x)\right) = \frac{(2 \pi)^{n/2}}{\sqrt{\det{M}}}.
+\]
+Now, we can view our previous integral just as a Gaussian integral over the operator
+\[
+ (y, \tilde{y}) \mapsto \int_{S^1} y\left(-\frac{\d^2}{\d t^2} + M^2 + \frac{\lambda x^2}{2}\right) \tilde{y} \;\d t\tag{$*$}
+\]
+on the vector space of fields. Thus, (ignoring the factors of $(2\pi)^{n/2}$) we can formally write the integral as
+\[
+ \det \left(-\frac{\d^2}{\d t^2} + M^2 + \frac{\lambda x^2}{2}\right)^{-1/2}.
+\]
+$S_{\mathrm{eff}}[x]$ thus looks like
+\[
+ S_{\mathrm{eff}}[x] = \int_{S^1} \frac{1}{2}(\dot{x}^2 + m^2 x^2) \;\d t + \frac{1}{2} \log \det \left(-\frac{\d^2}{\d t^2} + M^2 + \frac{\lambda x^2}{2}\right)
+\]
+We now continue with our formal manipulations. Note that $\log \det = \tr \log$, since $\det$ is the product of eigenvalues and $\tr$ is the sum of them. Then if we factor our operators as
+\[
+ -\frac{\d^2}{\d t^2} + M^2 + \frac{\lambda x^2}{2} = \left(-\frac{\d^2}{\d t^2} + M^2\right)\left(1 - \lambda \left(\frac{\d^2}{\d t^2} - M^2\right)^{-1} \frac{x^2}{2}\right),
+\]
+then we can write the last term in the effective potential as
+\[
+ \frac{1}{2} \tr \log \left(-\frac{\d^2}{\d t^2} + M^2\right) + \frac{1}{2} \tr \log \left(1 - \lambda \left(\frac{\d^2}{\d t^2} - M^2\right)^{-1} \frac{x^2}{2}\right)
+\]
+The first term is field independent, so we might as well drop it. We now look carefully at the second term. The next dodgy step to take is to realize we know what the inverse of the differential operator
+\[
+ \frac{\d^2}{\d t^2} - M^2
+\]
+is. It is just the Green's function! More precisely, it is the convolution with the Green's function. In other words, it is given by the function $G(t, t')$ such that
+\[
+ \left(\frac{\d^2}{\d t^2} - M^2 \right) G(t, t') = \delta(t - t').
+\]
+Equivalently, up to a sign, this is the propagator of the $y$ field. If we actually try to solve this on a circle of circumference $T$, we find that
+\[
+ G(t, t') = -\frac{1}{2M} \sum_{n \in \Z} \exp\left(-M \left|t - t' + nT\right|\right).
+\]
+We don't actually need this formula. The part that will be important is that it is $\sim \frac{1}{M}$.
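On the infinite line, the corresponding kernel is a single decaying exponential, and the defining equation $\left(\frac{\d^2}{\d t^2} - M^2\right)G = \delta$ can be checked in the distributional sense by pairing against a smooth test function. A numerical sketch (the test function and quadrature grid are arbitrary choices):

```python
import math

M = 3.0
# Green's function of d^2/dt^2 - M^2 on the whole line
G = lambda t: -math.exp(-M*abs(t)) / (2*M)

f  = lambda t: math.exp(-t*t)                 # smooth, rapidly decaying test function
f2 = lambda t: (4*t*t - 2) * math.exp(-t*t)   # its second derivative

# pairing: int G(t) (f''(t) - M^2 f(t)) dt should reproduce f(0) = 1
ts = [-10 + 20*i/200000 for i in range(200001)]
h = ts[1] - ts[0]
val = sum(G(t) * (f2(t) - M*M*f(t)) for t in ts) * h
assert abs(val - 1.0) < 1e-3

# and the magnitude at coincident points is of order 1/M
assert abs(abs(G(0)) - 1/(2*M)) < 1e-12
```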
+
+We now try to evaluate the effective potential. When we expand
+\[
+ \log \left(1 - \lambda G(t, t') \frac{x^2}{2}\right),
+\]
+the first term in the expansion is
+\[
+ -\lambda G(t, t') \frac{x^2}{2}.
+\]
+What does it mean to take the trace of this? We pick a basis for the space we are working on, say $\{\delta(t - t_0): t_0 \in S^1\}$. Then the trace is given by
+\[
+ \int_{t_0 \in S^1} \d t_0\; \int_{t \in S^1}\d t\; \delta(t - t_0) \left(\int_{t' \in S^1} \d t'\; (-\lambda) G(t, t') \frac{x^2(t')}{2} \delta(t' - t_0)\right).
+\]
+We can dissect this slowly. The rightmost integral is nothing but the definition of how $G$ acts by convolution. Then the next $t$ integral is the definition of how bilinear forms act, as in $(*)$. Finally, the integral over $t_0$ is summing over all basis vectors, which is what gives us the trace. This simplifies rather significantly to
+\[
+ -\frac{\lambda}{2} \int_{t \in S^1} G(t, t) x^2(t) \;\d t.
+\]
+In general, we find that we obtain
+\begin{multline*}
+ \tr \log \left(1 - \lambda G(t, t') \frac{x^2}{2}\right) \\
+ = - \frac{\lambda}{2} \int_{S^1}G(t, t) x^2(t) \;\d t - \frac{\lambda^2}{8} \int_{S^1 \times S^1} \d t\; \d t'\; G(t', t) x^2(t) G(t, t') x^2(t') - \cdots
+\end{multline*}
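The expansion behind this, $\tr \log (1 - A) = -\sum_{n \ge 1} \frac{1}{n}\tr (A^n)$, can be sanity-checked on a finite-dimensional stand-in, where $\tr \log = \log \det$ is an honest matrix identity (the matrix below is an arbitrary small example):

```python
import math

# tr log(1 - A) = log det(1 - A); compare against the series -sum_n tr(A^n)/n
A = [[0.10, 0.03],
     [0.02, 0.08]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

tr = lambda X: X[0][0] + X[1][1]

B = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]   # B = 1 - A
exact = math.log(B[0][0]*B[1][1] - B[0][1]*B[1][0])      # log det(1 - A)

series, P = 0.0, [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 40):
    P = matmul(P, A)          # P = A^n
    series -= tr(P)/n
assert abs(series - exact) < 1e-12
```

The series converges here because the eigenvalues of $A$ are small; in the field theory the manipulation is formal, which is part of what makes it "dubious".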
+
+These terms in the effective field theory are \emph{non-local}! They involve integrating over many different points in $S^1$. In fact, we should have expected this non-locality from the corresponding Feynman diagrams. The first term corresponds to
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ, mblue] at (0, 0) {};
+ \node [above] at (0, 0) {$x(t)$};
+ \node [circ, mblue] at (0, -2) {};
+ \node [below] at (0, -2) {$x(t)$};
+ \draw (0, 0) -- (0, -2);
+
+ \node [circ] at (0, -1) {};
+
+ \draw [dashed] (0.4, -1) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+\end{center}
+Here $G(t, t)$ corresponds to the $y$ field propagator, and the $-\frac{\lambda}{2}$ comes from the vertex.
+
+The second diagram we have looks like this:
+\begin{center}
+ \begin{tikzpicture}[eqpic]
+ \draw (-0.2, 0) node [circ, mblue] {} node [above] {$x(t)$} -- +(0.2, -1) node [circ] {} -- +(0, -2) node [circ, mblue] {} node [below] {$x(t)$};
+ \draw (1, 0) node [circ, mblue] {} node [above] {$x(t')$} -- +(-0.2, -1) node [circ] {} -- +(0, -2) node [circ, mblue] {} node [below] {$x(t')$};
+ \draw [dashed] (0.4, -1) ellipse (0.4 and 0.25);
+ \end{tikzpicture}
+\end{center}
+We see that the first diagram is local, as there is just one vertex at time $t$. But in the second diagram, we use the propagators to allow the $x$ at time $t$ to talk to $x$ at time $t'$. This is non-local!
+
+Non-locality is generic. Whenever we integrate out our fields, we get non-local terms. But non-locality is terrible in physics. It means that the equations of motion we get, even in the classical limit, are going to be integral differential equations, not just normal differential equations. For a particle to figure out what it should do here, it needs to know what is happening in the far side of the universe!
+
+To make progress, we note that if $M$ is very large, then we would expect $G(t, t')$ to be highly suppressed for $t \not= t'$. So we can try to expand around $t = t'$. Recall that the second term is given by
+\[
+ \int \d t\; \d t'\; G(t, t')^2 x^2(t) x^2(t').
+\]
+We can expand $x^2(t')$ about $t' = t$ as
+\[
+ x^2 (t') = x^2 (t) + 2x(t) \dot{x}(t) (t' - t) + \left(\dot{x}^2(t) + x(t) \ddot{x}(t)\right) (t - t')^2 + \cdots.
+\]
+Using the fact that $G(t, t')$ depends on $t'$ only through $M(t' - t)$, by dimensional analysis, we get an expansion that looks like
+\[
+ \frac{1}{M^2} \int \d t\; \frac{\alpha}{M} x^4(t) + \frac{\beta}{M^3} \left(x^2 \dot{x}^2 + x^3 \ddot{x}\right) + \frac{\gamma}{M^5} (\text{4-derivative terms}) + \cdots
+\]
+Here $\alpha, \beta, \gamma$ are dimensionless quantities.
+
+Every extra derivative is thus accompanied by a further power of $\frac{1}{M}$. So provided $x(t)$ is slowly varying on scales of order $\frac{1}{M}$, we may hope to truncate the series.
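The suppression can be made quantitative in a toy version: $G(t, t')^2$ decays like $e^{-2M|t - t'|}$, so each extra power of $(t' - t)^2$ in the Taylor expansion costs a factor of order $1/M^2$ once the $t'$ integral is done. A numerical sketch of these moments (the value of $M$ and the grid are arbitrary):

```python
import math

def moment(M, k):
    # trapezoidal int_0^inf s^k e^{-2 M s} ds, doubled for the full line
    n, b = 200000, 40.0/M
    h = b/n
    vals = [(i*h)**k * math.exp(-2*M*i*h) for i in range(n+1)]
    return 2*(sum(vals) - 0.5*(vals[0] + vals[-1]))*h

M = 4.0
m0 = moment(M, 0)   # int e^{-2M|s|} ds    = 1/M
m2 = moment(M, 2)   # int s^2 e^{-2M|s|} ds = 1/(2 M^3)
assert abs(m0 - 1/M) < 1e-6
# each factor of (t'-t)^2 costs 1/(2M^2) relative to the previous term
assert abs(m2/m0 - 1/(2*M*M)) < 1e-6
```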
+
+Thus, at energies $E \ll M$, our theory looks approximately local. So as long as we only use our low-energy approximation to answer low-energy questions, we are fine. However, if we try to take our low-energy theory and try to extrapolate it to higher and higher energies, up to $E \sim M$, it is going to be nonsense. In particular, it becomes non-unitary, and probability is not preserved.
+
+This makes sense. By truncating the series at the first term, we are ignoring all the higher interactions governed by the $y$ fields. By ignoring them, we are ignoring some events that have non-zero probability of happening, and thus we would expect probability not to be conserved.
+
+There are two famous examples of this. The first is weak interactions. At very low energies, weak interactions are responsible for $\beta$-decay. The effective action contains a quartic interaction
+\[
+ \int \d^4 x\; \bar\psi_e n \nu_e p \; G_{\mathrm{weak}}.
+\]
+This coupling constant $G_{\mathrm{weak}}$ has mass dimension $-1$. At low energies, this gives a perfectly good description of $\beta$-decay. However, it is suspicious: a coupling constant with negative mass dimension suggests it came from integrating some fields out.
+
+At high energies, we find that this \emph{4-Fermi theory} becomes non-unitary, and $G_{\mathrm{weak}}$ is revealed as an approximation to a $W$-boson propagator. Instead of an interaction that looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 2);
+ \draw (0, 2) -- (2, 0);
+ \node [circ] at (1, 1) {};
+ \end{tikzpicture}
+\end{center}
+what we really have is
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (0.75, 1) -- (0, 2);
+
+ \draw [decorate, decoration={snake}](0.75, 1) -- (2.25, 1) node [pos=0.5, above=0.3em] {$W$};
+ \draw (3, 2) -- (2.25, 1) -- (3, 0);
+ \node [circ] at (0.75, 1) {};
+ \node [circ] at (2.25, 1) {};
+ \end{tikzpicture}
+\end{center}
+There are many other theories we can write down whose coupling constants have negative mass dimension, the most famous being general relativity.
+
+\subsection{Quantum gravity in one dimension}
+In quantum gravity, we also include a (path) integral over all metrics on our spacetime, up to diffeomorphism (isometric) invariance. We also sum over all possible topologies of $M$. In $d = 1$ (and $d = 2$ for string theory), we can just do this.
+
+In $d = 1$, a metric $g$ only has one component $g_{tt}(t) = e(t)$. There is no curvature, and the only diffeomorphism invariant of this metric is the total length
+\[
+ T = \oint e(t) \;\d t.
+\]
+So the instruction to integrate over all metrics modulo diffeomorphism is just the instruction to integrate over all possible lengths of the worldline $T \in (0, \infty)$, which is easy. Let's look at that.
+
+The path integral is given by
+\[
+ \int_0^\infty \d T \;\int_{\mathcal{C}_{[0, T]} [y, x]} \D x\; e^{-S[x]},
+\]
+where as usual
+\[
+ S[x] = \frac{1}{2} \int_0^T \dot{x}^2 \;\d t.
+\]
+Just for fun, we will include a ``cosmological constant'' term into our action, so that we instead have
+\[
+ S[x] = \int_0^T \left(\frac{1}{2}\dot{x}^2 + \frac{m^2}{2}\right)\d t.
+\]
+The reason for this will be revealed soon.
+
+We can think of the path integral as the heat kernel, so we can write it as
+\begin{align*}
+ \int_0^\infty\; \d T\; \brak{y} e^{-HT} \bket{x} &= \int_0^\infty \d T \int \frac{\d^n p\; \d^n q}{ (2\pi)^n} \braket{y}{q} \brak{q} e^{-HT}\bket{p} \braket{p}{x}\\
+ &= \int_0^\infty \d T \int \frac{\d^n p\; \d ^n q}{(2\pi)^n} e^{ip\cdot x - i q \cdot y} e^{-T(p^2 + m^2)/2} \delta^n(p - q)\\
+ &= \int_0^\infty \d T \int \frac{\d^n p}{(2\pi)^n} e^{ip\cdot (x - y)} e^{-T(p^2 + m^2)/2}\\
+ &= 2 \int \frac{\d^n p}{(2\pi)^n} \frac{e^{ip\cdot (x - y)}}{p^2 + m^2}\\
+ &= 2 D(x, y),
+\end{align*}
+where $D(x, y)$ is the propagator for a scalar field on the \emph{target space} $\R^n$ with action
+\[
+ S[\Phi] = \int \d^n x \; \frac{1}{2} (\partial \Phi)^2 + \frac{m^2}{2} \Phi^2.
+\]
+So a $1$-dimensional quantum gravity theory with values in $\R^n$ is equivalent to (or at least has deep connections to) a scalar field theory on $\R^n$.
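The analytic input in this computation is the Schwinger-parameter identity $\int_0^\infty e^{-T(p^2 + m^2)/2}\,\d T = \frac{2}{p^2 + m^2}$, which a quick quadrature confirms (the sample values of $p^2$ and the grid are arbitrary):

```python
import math

m2 = 1.0
for p2 in [0.5, 1.0, 4.0]:
    w = (p2 + m2)/2
    # trapezoidal int_0^inf e^{-T w} dT, truncated where the tail is negligible
    n, b = 200000, 50.0/w
    h = b/n
    vals = [math.exp(-w*i*h) for i in range(n+1)]
    integral = (sum(vals) - 0.5*(vals[0] + vals[-1])) * h
    # the T integral turns the worldline heat kernel into the propagator
    assert abs(integral - 2/(p2 + m2)) < 1e-5
```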
+
+How about interactions? So far, we have been taking rather unexciting 1-dimensional manifolds as our universe, and there are only two possible choices. If we allow singularities in our manifolds, then we would allow \emph{graphs} instead of just a line and a circle. Quantum gravity then says we should not only integrate over all possible lengths, but also all possible graphs.
+
+For example, to compute correlation functions such as $\bra \Phi(x_1) \cdots \Phi(x_n)\ket$ in $\Phi^4$ theory on $\R^n$, say, we consider all $4$-valent graphs with $n$ external legs, one ending at each of the $x_i$, and then we proceed just as in quantum gravity.
+
+For example, we get a contribution to $\bra \Phi(x) \Phi(y)\ket$ from the graph
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} node [left] {$x$} -- (2, 0) node [circ] {} node [right] {$y$};
+
+ \node [circ] at (1, 0) {};
+ \draw (1, 0.4) ellipse (0.2 and 0.4);
+ \end{tikzpicture}
+\end{center}
+The contribution to the quantum gravity expression is
+\[
+ \int\limits_{\mathclap{z \in \R^n}}\d^n z \int\limits_{\mathclap{[0, \infty)^3}} \d T_1\, \d T_2\, \d T_3\, \int\limits_{\mathclap{\mathcal{C}_{T_1}[z, x]}} \D x\, e^{-S_{T_1}[x]} \int\limits_{\mathclap{C_{T_2}[z, z]}} \D x\, e^{-S_{T_2}[x]} \int\limits_{\mathclap{C_{T_3}[y, z]}} \D x\, e^{-S_{T_3}[x]},
+\]
+where
+\[
+ S_T[x] = \frac{1}{2} \int_0^T \dot{x}^2 \;\d t + \frac{m^2}{2}\int_0^T \d t.
+\]
+We should think of the second term as the ``cosmological constant'', while the 1D integrals over $T_i$'s are the ``1d quantum gravity'' part of the path integral (also known as the \term{Schwinger parameters} for the graph).
+
+We can write this as
+\begin{align*}
+ &\hphantom{{}={}} \int \d^n z \; \d T_1 \;\d T_2 \;\d T_3\; \brak{z} e^{-HT_1} \bket{x} \brak{z}e^{-HT_2} \bket{z} \brak{y} e^{-H T_3} \bket{z}.\\
+ \intertext{Inserting a complete set of eigenstates between the position states and the time evolution operators, we get}
+ &= \int \frac{\d^n p\; \d^n \ell\;\d^n q}{(2\pi)^{3n}} \frac{e^{ip\cdot(x - z)}}{p^2 + m^2} \frac{e^{iq\cdot(y - z)}}{q^2 + m^2} \frac{e^{i\ell\cdot(z - z)}}{\ell^2 + m^2}\\
+ &= \int \frac{\d^n p\; \d^n \ell}{(2\pi)^{2n}} \frac{e^{ip\cdot (x - y)}}{(p^2 + m^2)^2} \frac{1}{\ell^2 + m^2}.
+\end{align*}
+This is exactly what we would have expected if we viewed the above diagram as a Feynman diagram:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} node [left] {$x$} -- (2, 0) node [circ] {} node [right] {$y$};
+
+ \node at (0.5, 0) [below] {$p$};
+ \node at (1.5, 0) [below] {$p$};
+
+ \node [circ] at (1, 0) {};
+ \draw (1, 0.4) ellipse (0.2 and 0.4);
+ \node at (1.2, 0.4) [right] {$\ell$};
+ \end{tikzpicture}
+\end{center}
+This is the \term{worldline perspective} on QFT, and it was indeed Feynman's original approach to QFT.
+
+\section{Symmetries of the path integral}
+From now on, we will work with quantum field theory in general, and impose no restrictions on the dimension of our universe. The first subject to study is the notion of symmetries.
+
+We first review what we had in classical field theory. In classical field theory, Noether's theorem relates symmetries to conservation laws. For simplicity, we will work with the case of a flat space.
+
+Suppose we had a variation
+\[
+ \delta \phi = \varepsilon f(\phi, \partial \phi)
+\]
+of the field. The most common case is when $f(\phi, \partial \phi)$ depends on $\phi$ only locally, in which case we can think of the transformation as being generated by the vector
+\[
+ V_f = \int_M \d^d x \; f(\phi, \partial \phi) \frac{\delta}{\delta \phi(x)}
+\]
+acting on the ``space of fields''.
+
+If the function $S[\phi]$ is invariant under $V_f$ when $\varepsilon$ is constant, then for general $\varepsilon(x)$, we must have
+\[
+ \delta S = \int \d^d x\; j^\mu(x)\; \partial_\mu \varepsilon,
+\]
+for some field-dependent current $j_\mu(x)$ (we can actually find an explicit expression for $j_\mu$). If we choose $\varepsilon(x)$ to have compact support, then we can write
+\[
+ \delta S = - \int \d^d x\; (\partial^\mu j_\mu)\;\varepsilon(x).
+\]
+On \emph{solutions} of the field equation, we know the action is stationary under arbitrary variations. So $\delta S = 0$. Since $\varepsilon(x)$ was arbitrary, we must have
+\[
+ \partial^\mu j_\mu = 0.
+\]
+So we know that $j_\mu$ is a \term{conserved current}.
+
+Given any such conserved current, we can define the \term{charge} $Q[N]$ associated to an (oriented) co-dimension $1$ hypersurface $N$ as
+\[
+ Q[N] = \int_N n^\mu j_\mu\; \d^{d - 1}x,
+\]
+where $n^\mu$ is the normal vector to $N$. Usually, $N$ is a time slice, and the normal points in the future direction.
+
+Now if $N_0$ and $N_1$ are two such hypersurfaces bounding a region $M' \subseteq M$, then by Stokes' theorem, we have
+\[
+ Q[N_0] - Q[N_1] = \left(\int_{N_0} - \int_{N_1}\right) n^\mu j_\mu \;\d^{d - 1}x = \int_{M'} (\partial^\mu j_\mu)\;\d^d x = 0.
+\]
+So we find
+\[
+ Q[N_0] = Q[N_1].
+\]
+This is the conservation of charge!
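In the mechanical case ($d = 1$, time translations), the conserved charge is the energy $E = \frac{1}{2}\dot{x}^2 + V(x)$, and conservation can be watched along a numerically integrated trajectory. A sketch with a toy anharmonic potential (the potential and step size are arbitrary; velocity Verlet is chosen because it respects conservation laws well):

```python
# Time translations are a symmetry of a particle in a time-independent potential;
# the Noether charge is the energy E = v^2/2 + V(x).
V  = lambda x: 0.5*x*x + 0.25*x**4      # toy anharmonic potential
Vp = lambda x: x + x**3                 # V'(x)

x, v, dt = 1.0, 0.0, 1e-3
E0 = 0.5*v*v + V(x)
for _ in range(100000):                 # velocity Verlet for x'' = -V'(x)
    v -= 0.5*dt*Vp(x)
    x += dt*v
    v -= 0.5*dt*Vp(x)
E1 = 0.5*v*v + V(x)
assert abs(E1 - E0) < 1e-4              # E is conserved along the solution
```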
+
+\subsection{Ward identities}
+The derivation of Noether's theorem used the classical equation of motion. But in a quantum theory, the equation of motion no longer holds. We must re-examine what happens in the quantum theory.
+
+Suppose a transformation $\phi \mapsto \phi'$ of the fields has the property that
+\[
+ \D \phi\; e^{-S[\phi]} = \D \phi'\; e^{-S[\phi']}.
+\]
+In theory, the whole expression $\D \phi\; e^{-S[\phi]}$ is what is important in the quantum theory. In practice, we often look for symmetries where $\D \phi$ and $e^{-S[\phi]}$ are separately preserved. In fact, what we will do is look for a symmetry that preserves $S[\phi]$, and then try to find a regularization of $\D \phi$ that is manifestly invariant under the symmetry.
+
+Often, it is \emph{not} the case that the obvious choice of regularization of $\D \phi$ is manifestly invariant under the symmetry. For example, we might have a $S[\phi]$ that is rotationally invariant. However, if we regularize the path integral measure by picking a lattice and sampling $\phi$ on different points, it is rarely the case that this lattice is rotationally invariant.
+
+In general, there are two possibilities:
+\begin{enumerate}
+ \item The symmetry could be restored in the limit. This typically means there exists a regularized path integral measure manifestly invariant under this symmetry, but we just didn't use it. For example, rotational invariance is not manifestly present in lattice regularization, but it is when we do the cut-off regularization.
+ \item It could be that the symmetry is not restored. It is said to be \emph{anomalous}\index{anomalous symmetry}, i.e.\ broken in the quantum theory. In this case, there can be no invariant path integral measure. An example is scale invariance in QED, if the mass of the electron is $0$.
+\end{enumerate}
+Sometimes, it can be hard to tell. For now, let's just assume we are in a situation where $\D \phi = \D \phi'$ when $\varepsilon$ is constant. Then for any $\varepsilon(x)$, we clearly have
+\[
+ \mathcal{Z} = \int \D \phi\; e^{-S [\phi]} = \int \D \phi' e^{-S[\phi']},
+\]
+since this is just renaming of variables. But using the fact that the measure is invariant when $\varepsilon$ is constant, we can expand the right-hand integral in $\varepsilon$, and again argue that it must be of the form
+\[
+ \int \D \phi'\; e^{-S[\phi']} = \int \D \phi\; e^{-S[\phi]} \left(1 - \int_M j_\mu \partial^\mu \varepsilon\; \d^d x\right)
+\]
+to first order in $\varepsilon$. Note that in general, $j_\mu$ can receive contributions from $S[\phi]$ and $\D \phi$. But if it doesn't receive any contribution from $\D \phi$, then it would just be the classical current.
+
+Using this expansion, we deduce that we must have
+\[
+ \int \D \phi \; e^{-S[\phi]} \int_M j_\mu \partial^\mu \varepsilon \;\d^d x = 0.
+\]
+Integrating by parts, and using the definition of the expectation, we can write this as
+\[
+ -\int_M \varepsilon\; \partial^\mu \bra j_\mu (x)\ket\; \d^d x = 0
+\]
+for any $\varepsilon$ with compact support. Note that we dropped a normalization factor of $\mathcal{Z}$ in the definition of $\bra j_\mu(x)\ket$, because $\mathcal{Z}$ times zero is still zero. So we know that $\bra j_\mu (x)\ket$ is a conserved current, just as we had classically.
+
+\subsubsection*{Symmetries of correlation functions}
+Having a current is nice, but we want to say something about actual observable quantities, i.e.\ we want to look at how symmetries manifest themselves in correlation functions. Let's look at what we might expect. For example, if our theory is translation invariant, we might expect, say
+\[
+ \bra \phi(x) \phi(y)\ket = \bra \phi(x - a) \phi(y - a)\ket
+\]
+for any $a$. This is indeed the case.
+
+Suppose we have an operator $\mathcal{O}(\phi)$. Then under a transformation $\phi \mapsto \phi'$, our operator transforms as
+\[
+ \mathcal{O}(\phi) \mapsto \mathcal{O}(\phi').
+\]
+By definition, the correlation function is defined by
+\[
+ \bra \mathcal{O}(\phi)\ket = \frac{1}{\mathcal{Z}} \int \D \phi\;e^{-S[\phi]} \mathcal{O}(\phi).
+\]
+Note that despite the appearance of $\phi$ on the left, it is not a free variable. For example, the correlation $\bra \phi(x_1) \phi(x_2)\ket$ is not a function of $\phi$.
+
+We suppose the transformation $\phi \mapsto \phi'$ is a symmetry. By a trivial renaming of variables, we have
+\begin{align*}
+ \bra \mathcal{O}(\phi)\ket &= \frac{1}{\mathcal{Z}} \int \D \phi'\; e^{-S[\phi']} \mathcal{O} (\phi')\\
+ \intertext{By assumption, the function $\D \phi\;e^{-S[\phi]}$ is invariant under the transformation. So this is equal to}
+ &= \frac{1}{\mathcal{Z}} \int \D \phi\; e^{-S[\phi]} \mathcal{O}(\phi')\\
+ &= \bra \mathcal{O}(\phi')\ket.
+\end{align*}
+This is, of course, not surprising. To make this slightly more concrete, we look at an example.
+
+%Now we further suppose this transformation is a symmetry. By definition, we can write
+%\[
+% \bra \mathcal{O}_1(x_1) \cdots \mathcal{O}_n(x_n)\ket = \frac{1}{\mathcal{Z}} \int \D \phi e^{-S[\phi]} \mathcal{O}_1(\phi(x_1)) \cdots \mathcal{O}_n(\phi(x_n)).
+%\]
+%Trivially, by renaming of variables, this is equal to
+%\[
+% \frac{1}{\mathcal{Z}} \int \D \phi' e^{-S[\phi']} \mathcal{O}_1(\phi'(x_1)) \cdots \mathcal{O}_n(\phi'(x_n)).
+%\]
+%Using the invariance of the measure, this is equal to
+%\[
+% \frac{1}{\mathcal{Z}} \int \D \phi\;e^{-S[\phi]} \mathcal{O}_1(\phi'(x_1) \cdots \mathcal{O}_n (\phi'(x_n)).
+%\]
+%Consequently, we find
+%\[
+% \bra \mathcal{O}_1(\phi(x_n)) \cdots \mathcal{O}_n(\phi(x_n))\ket = \bra \mathcal{O}_1(\phi'(x_n)) \cdots \mathcal{O}_n(\phi'(x_n))\ket.
+%\]
+%So the correlation function of the original and transformed operators are the agree \emph{in the same theory}.
+
+\begin{eg}
+ Consider $(M, g) = (\R^4, \delta)$, and consider spatial translation $x \mapsto x' = x - a$ for a constant vector $a$. A scalar field $\phi$ transforms as
+ \[
+ \phi(x) \mapsto \phi'(x) = \phi(x - a).
+ \]
+ In most cases, this is a symmetry.
+
+ We suppose $\mathcal{O}(\phi)$ can be written as
+ \[
+ \mathcal{O}(\phi) = \mathcal{O}_1(\phi(x_1)) \cdots \mathcal{O}_n(\phi(x_n)),
+ \]
+ where $\mathcal{O}_i$ depends only on the value of $\phi$ at $x_i$. A canonical example is when $\mathcal{O}(\phi) = \phi(x_1) \cdots \phi(x_n)$ is an $n$-point correlation function.
+
+ Then the above result tells us that
+ \[
+ \bra \mathcal{O}_1(\phi(x_1)) \cdots \mathcal{O}_n(\phi(x_n))\ket = \bra \mathcal{O}_1(\phi(x_1 - a)) \cdots \mathcal{O}_n(\phi(x_n - a))\ket
+ \]
+ So the correlation depends only on the separations $x_i - x_j$. We can obtain similar results if the action and measure are rotationally or Lorentz invariant.
+% So the scalar operators transform as $\mathcal{O}(\phi(x)) \mapsto \mathcal{O}(\phi(x - a))$. In shorthand, we write $\mathcal{O}(x) \mapsto \mathcal{O}(x - a)$.
+%
+% Thus, we find that.
+% \[
+% \bra \mathcal{O}_1(x_1) \cdots \mathcal{O}_n(x_n)\ket = \bra \mathcal{O}_1(x_1 - a) \cdots \mathcal{O}_n(x_n - a)\ket.
+% \]
+% Hence, the correlation function must depend only on the separation $x_{ij} = x_i - x_j$. Similarly, if the action and measure are rotationally or Lorentz invariant, then under a transformation $L$, we find
+% \[
+% \bra \mathcal{O}(x_1) \cdots \mathcal{O}_n(x_n)\ket = \bra \mathcal{O}_1(L x_1) \cdots \mathcal{O}_n (L x_n)\ket.
+% \]
+% So the correlation only depends on the Lorentz invariant separations $x_{ij}^2$.
+\end{eg}
+
+\begin{eg}
+ Suppose we have a complex field $\phi$, and we have a transformation $\phi \mapsto \phi' = e^{i\alpha}\phi$ for some constant $\alpha \in \R/2\pi \Z$. Then the conjugate field transforms as
+ \[
+ \bar\phi \mapsto \bar\phi' = e^{-i\alpha} \bar\phi.
+ \]
+ Suppose this transformation preserves the action and measure. For example, the measure will be preserved if we integrate over the same number of $\phi$ and $\bar\phi$ modes. Consider the operators
+ \[
+ \mathcal{O}_i (\phi, \bar\phi) = \phi(x_i)^{s_i} \bar\phi(x_i)^{r_i}.
+ \]
+ Then the operators transform as
+ \[
+  \mathcal{O}_i(\phi, \bar\phi) \mapsto \mathcal{O}_i (\phi', \bar\phi') = e^{i\alpha (s_i - r_i)} \mathcal{O}_i(\phi, \bar\phi).
+ \]
+ So symmetry entails
+ \[
+  \left\bra \prod_{i = 1}^m \mathcal{O}_i(x_i)\right\ket = \exp\left(i \alpha \sum_{i = 1}^m (s_i - r_i)\right) \left\bra \prod_{i = 1}^m \mathcal{O}_i(x_i)\right\ket.
+ \]
+ Since this is true for all $\alpha$, the correlator must vanish unless
+ \[
+ \sum_{i = 1}^m r_i = \sum_{i = 1}^m s_i.
+ \]
+ So we need the same number of $\phi$ and $\bar\phi$ insertions in total.
+
+ We can interpret this in terms of Feynman diagrams --- each propagator joins up a $\phi$ and $\bar\phi$. So if we don't have equal number of $\phi$ and $\bar\phi$, then we can't draw any Feynman diagrams at all! So the correlator must vanish.
+\end{eg}
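The zero-dimensional version of this selection rule is easy to simulate: for a complex Gaussian "field" $\phi$, correlators with unequal numbers of $\phi$ and $\bar\phi$ vanish. A Monte Carlo sketch (sample count and tolerances are arbitrary):

```python
import random

random.seed(0)
N = 200000
# zero-dimensional complex Gaussian field: phi = (a + ib)/sqrt(2), a, b ~ N(0, 1)
samples = [complex(random.gauss(0, 1), random.gauss(0, 1)) / 2**0.5
           for _ in range(N)]

avg = lambda f: sum(f(z) for z in samples) / N
# the U(1) symmetry phi -> e^{i alpha} phi forces <phi^2> = 0,
# while <phi phibar> = 1 has equal numbers of phi and phibar and survives
assert abs(avg(lambda z: z*z)) < 0.02
assert abs(avg(lambda z: z*z.conjugate()) - 1.0) < 0.02
```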
+
+%\begin{eg}
+% Consider $(M, g) = (\R^4, \delta)$, and consider spacial translation $x \mapsto x' = x - a$ for a constant vector $a$. A scalar field $\phi$ transforms as
+% \[
+% \phi(x) \mapsto \phi'(x) = \phi(x - a).
+% \]
+% In most cases, this is a symmetry. So the scalar operators transform as $\mathcal{O}(\phi(x)) \mapsto \mathcal{O}(\phi(x - a))$. In shorthand, we write $\mathcal{O}(x) \mapsto \mathcal{O}(x - a)$.
+%
+% Thus, we find that.
+% \[
+% \bra \mathcal{O}_1(x_1) \cdots \mathcal{O}_n(x_n)\ket = \bra \mathcal{O}_1(x_1 - a) \cdots \mathcal{O}_n(x_n - a)\ket.
+% \]
+% Hence, the correlation function must depend only on the separation $x_{ij} = x_i - x_j$. Similarly, if the action and measure are rotationally or Lorentz invariant, then under a transformation $L$, we find
+% \[
+% \bra \mathcal{O}(x_1) \cdots \mathcal{O}_n(x_n)\ket = \bra \mathcal{O}_1(L x_1) \cdots \mathcal{O}_n (L x_n)\ket.
+% \]
+% So the correlation only depends on the Lorentz invariant separations $x_{ij}^2$.
+%\end{eg}
+
+\subsubsection*{Ward identity for correlators}
+What we've done with correlators was rather expected, and in some sense trivial. Let's try to do something more interesting.
+
+Again, consider an operator that depends only on the value of $\phi$ at finitely many points, say
+\[
+ \mathcal{O}(\phi) = \mathcal{O}_1(\phi(x_1)) \cdots \mathcal{O}_n(\phi(x_n)).
+\]
+As before, we will write the operators as $\mathcal{O}_i(x_i)$ when there is no confusion.
+
+As in the derivation of Noether's theorem, suppose we have an infinitesimal transformation, with $\phi \mapsto \phi + \varepsilon \delta \phi$ that is a symmetry when $\varepsilon$ is constant. Then for general $\varepsilon(x)$, we have
+\begin{align*}
+ &\hphantom{={}} \int \D \phi\; e^{-S[\phi]} \mathcal{O}_1(\phi(x_1)) \cdots \mathcal{O}_n(\phi(x_n)) \\
+ &= \int \D \phi'\; e^{-S[\phi']} \mathcal{O}_1(\phi'(x_1)) \cdots \mathcal{O}_n(\phi'(x_n))\\
+ &= \int \D \phi\; e^{-S[\phi]} \left(1 - \int j^\mu (x) \partial_\mu \varepsilon(x) \;\d x\right)\left(\mathcal{O}_1(\phi(x_1)) \cdots \mathcal{O}_n(\phi(x_n))\vphantom{ + \sum_{i = 1}^n \varepsilon(x_i) \delta \mathcal{O}(x_i) \prod_{j \not= i} \mathcal{O}_j(x_j)}\right.\\
+ &\hphantom{= \int \D \phi e^{-S[\phi]} \left(1 - \int j^\mu (x) \partial_\mu \varepsilon(x) \;\d x\right)}\left.+ \sum_{i = 1}^n \varepsilon(x_i) \delta \mathcal{O}_i(x_i) \prod_{j \not= i} \mathcal{O}_j(x_j)\right),
+\end{align*}
+where
+\[
+ \delta \mathcal{O}_i(x_i) = \frac{\partial \mathcal{O}_i}{\partial \phi}\, \delta \phi(x_i).
+\]
+Again, the zeroth order piece of the correlation function cancels, and to lowest non-trivial order, we find that we must have
+\[
+ \int \partial^\mu \varepsilon(x) \left\bra j_\mu(x) \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket\;\d^d x = \sum_{i = 1}^n \varepsilon(x_i) \left\bra \delta \mathcal{O}_i(x_i) \prod_{j \not= i} \mathcal{O}_j(x_j)\right\ket.
+\]
+On the left, we can again integrate by parts to shift the derivative to the current term. On the right, we want to write it in the form $\int \varepsilon(x) \cdots \;\d^d x$, so that we can get rid of the $\varepsilon$ term. To do so, we introduce some $\delta$-functions. Then we obtain
+\begin{multline*}
+ \int \varepsilon(x) \partial^\mu \left\bra j_\mu(x) \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket\;\d^d x \\
+ = - \sum_{i = 1}^n \int \varepsilon(x) \delta^d(x - x_i) \left\bra \delta \mathcal{O}_i(x_i) \prod_{j \not= i} \mathcal{O}_j(x_j)\right\ket \;\d^d x.
+\end{multline*}
+Since this holds for arbitrary $\varepsilon(x)$ (with compact support), we must have
+\[
+ \partial^\mu \left\bra j_\mu (x) \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket = -\sum_{i = 1}^n \delta^d(x - x_i)\left\bra \delta \mathcal{O}_i(x_i) \prod_{j \not= i} \mathcal{O}_j(x_j)\right\ket.
+\]
+This is the \term{Ward identity} for correlation functions. It says that the vector field
+\[
+ f_\mu (x, x_i) = \left\bra j_\mu (x) \prod_i \mathcal{O}_i (x_i)\right\ket
+\]
+is divergence free except at the insertions $x_i$.
+
+This allows us to recover the previous invariance of correlations. Suppose $M$ is compact without boundary. We then integrate the Ward identity over all $M$. By Stokes' theorem, we know the integral of any divergence term vanishes. So we obtain
+\[
+ 0 = \int_M \partial^\mu f_\mu(x, x_i)\;\d^d x = -\sum_{i = 1}^n \left\bra \delta \mathcal{O}_i(x_i) \prod_{j \not= i} \mathcal{O}_j(x_j)\right\ket = -\delta \left\bra \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket.
+\]
+Of course, this is what we would obtain if we set $\varepsilon(x) \equiv 1$ above.
+
+That was nothing new, but suppose $M' \subseteq M$ is a region with boundary $N_1 - N_0$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [thick, mblue] (0, 0) -- (2, 2) -- (4, 0) -- (2, -2) -- cycle;
+
+ \draw (0, 0) edge [mred, thick, bend left, fill=morange, fill opacity=0.3] (4, 0);
+ \draw (0, 0) edge [mred, thick, bend right, fill=morange, fill opacity=0.3] (4, 0);
+ \node at (2, 0.6) [above] {$N_1$};
+ \node at (2, -0.6) [below] {$N_0$};
+ \node at (2, 0) {$M'$};
+ \end{tikzpicture}
+\end{center}
+Let's see what integrating the Ward identity over $M'$ gives us. The left hand side gives us
+\begin{align*}
+ \int_{M'} \partial^\mu \left\bra j_\mu (x) \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket \;\d^d x &= \int_{N_1 - N_0} n^\mu\left\bra j_\mu (x) \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket\;\d^{d - 1}x\\
+ &= \left\bra Q[N_1] \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket - \left\bra Q[N_0] \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket.
+\end{align*}
+The right hand side of the Ward identity just gives us the sum over all insertion points inside $M'$:
+\[
+ \left\bra Q[N_1] \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket - \left\bra Q[N_0] \prod_{i = 1}^n \mathcal{O}_i(x_i)\right\ket = - \sum_{x_i \in M'} \left\bra \delta \mathcal{O}_i(x_i) \prod_{j \not= i} \mathcal{O}_j(x_j)\right\ket.
+\]
+In particular, if $M'$ contains only one point, say $x_1$, and we choose the region to be infinitesimally thin, then in the canonical picture, we have
+\[
+ \brak{\Omega}\mathcal{T} [\hat{Q}, \hat{\mathcal{O}}_1(x_1)] \prod_{j = 2}^n \hat{\mathcal{O}}_j (x_j)\bket{\Omega} = - \brak{\Omega} \mathcal{T} \left\{\delta \hat{\mathcal{O}}_1(x_1) \prod_{j = 2}^n \hat{\mathcal{O}}_j(x_j)\right\}\bket{\Omega},
+\]
+where $\bket{\Omega}$ is some (vacuum) state. So in the canonical picture, we find that
+\[
+ \delta \hat{\mathcal{O}} = -[\hat{Q}, \hat{\mathcal{O}}].
+\]
+So we see that the change of $\hat{\mathcal{O}}$ under some transformation is given by the commutator with the charge operator.
+
+\subsection{The Ward--Takahashi identity}
+We focus on an important example of this. The QED action is given by
+\[
+ S[A, \psi] = \int \d^d x\; \left(\frac{1}{4} F_{\mu\nu}F^{\mu\nu} + i \bar\psi \slashed\D \psi + m \bar\psi \psi\right).
+\]
+This is invariant under the global transformations
+\[
+ \psi (x) \mapsto e^{i\alpha} \psi(x),\quad A_\mu(x) \mapsto A_\mu(x)
+\]
+for constant $\alpha \in \R/2\pi \Z$. The path integral measure is also invariant under this transformation provided we integrate over equal numbers of $\psi$ and $\bar\psi$ modes in the regularization.
+
+In this case, the \emph{classical} current is given by
+\[
+ j^\mu (x) = \bar\psi (x) \gamma^\mu \psi(x).
+\]
+We will assume that $\D \psi \D \bar{\psi}$ is invariant under a position-dependent $\alpha$. This is a reasonable assumption to make, if we want our measure to be gauge invariant. In this case, the classical current is also the quantum current.
+
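+To derive the Ward identity, we let $\alpha$ depend on position, so that $\psi(x) \mapsto e^{i\alpha(x)} \psi(x)$ while $A_\mu$ is unchanged. Note that this is \emph{not} a gauge transformation, since we do not transform the photon field. Under this transformation, the classical action is no longer invariant, but changes by
+\[
+ \delta S = \int j^\mu \partial_\mu \alpha \;\d^4 x.
+\]
+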
+Noting that the infinitesimal change in $\psi$ is just (proportional to) $\psi$ itself, the Ward identity applied to $\bra\psi(x_1) \bar\psi(x_2)\ket$ gives
+\[
+ \partial^\mu \bra j_\mu(x) \psi(x_1) \bar\psi(x_2)\ket = - \delta^4(x - x_1) \bra \psi(x_1) \bar\psi(x_2)\ket + \delta^4(x - x_2) \bra \psi(x_1) \bar\psi(x_2)\ket.
+\]
+We now try to understand what these individual terms mean. We first understand the correlators $\bra \psi(x_1) \bar\psi(x_2)\ket$.
+
+Recall that when we did perturbation theory, the propagator was defined as the Fourier transform of the \emph{free theory} correlator $\bra \psi(x_1) \bar\psi(x_2)\ket$. This is given by
+\begin{align*}
+ D(k_1, k_2) &= \int \d^4 x_1 \; \d^4 x_2\; e^{ik_1 \cdot x_1} e^{-ik_2 \cdot x_2} \bra \psi(x_1) \bar\psi(x_2)\ket\\
+ &= \int \d^4 y \;\d^4 x_2\; e^{i(k_1 - k_2) \cdot x_2} e^{ik_1 \cdot y} \bra \psi(y) \bar\psi(0)\ket\\
+ &= \delta^4(k_1 - k_2) \int \d^4 y\; e^{ik_1\cdot y} \bra \psi(y) \bar\psi(0)\ket.
+\end{align*}
+
+Thus, we can interpret the interacting correlator $\bra \psi(x_1) \bar\psi(x_2)\ket$ as the propagator with ``quantum corrections'' due to the interacting field.
+\begin{defi}[Exact propagator]\index{exact propagator}
+ The \emph{exact (electron) propagator} is defined by
+ \[
+ S(k) = \int \d^4 y\; e^{ik\cdot y} \bra \psi(y) \bar\psi(0)\ket,
+ \]
+ evaluated in the full, interacting theory.
+\end{defi}
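+In this notation, the computation above says that
+\[
+ D(k_1, k_2) = \delta^4(k_1 - k_2)\, S(k_1),
+\]
+now with the correlator evaluated in the full interacting theory.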
+
+Usually, we don't want to evaluate this directly. Just as we could compute the sum over \emph{all} diagrams by summing over connected diagrams only and then taking $\exp$, there is a useful organizing notion here as well, namely that of a \emph{one-particle irreducible graph}.
+
+\begin{defi}[One-particle irreducible graph]\index{one-particle irreducible graph}\index{1PI}
+ A \emph{one-particle irreducible graph} for $\bra \psi \bar\psi\ket$ is a connected Feynman diagram (in momentum space) with two external vertices $\bar\psi$ and $\psi$ such that the graph cannot be disconnected by the removal of one internal line.
+\end{defi}
+This definition is rather abstract, but we can look at some examples to see what this actually means.
+\begin{eg}
+ The following are one-particle irreducible graphs:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [white, decorate, decoration={snake}] (0.8, 0) arc(180:360:0.7) node [pos=0.5, above=0.5em] {$\gamma$};
+ \draw (0, 0) node [circ] {} node [above] {$\bar\psi$} -- (3, 0) node [circ] {} node [above] {$\psi$};
+ \draw [decorate, decoration={snake}] (0.8, 0) arc(180:0:0.7) node [pos=0.5, above=0.5em] {$\gamma$};
+ \end{tikzpicture}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} node [above] {$\bar\psi$} -- (3, 0) node [circ] {} node [above] {$\psi$};
+ \draw [decorate, decoration={snake}] (0.5, 0) arc(180:0:0.7);
+ \draw [decorate, decoration={snake}] (1.2, 0) arc(180:360:0.7);
+ \end{tikzpicture}
+ \end{center}
+ while the following is not:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} node [above] {$\bar\psi$} -- (4.7, 0) node [circ] {} node [above] {$\psi$};
+ \draw [decorate, decoration={snake}] (0.5, 0) arc(180:0:0.7);
+ \draw [decorate, decoration={snake}] (2.8, 0) arc(180:0:0.7);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+We will write
+\[
+ \begin{tikzpicture}[eqpic]
+ \draw [fill=white] (1, 0) circle [radius=0.3] node {1PI};
+ \end{tikzpicture}
+ =
+ \Sigma(\slashed{k})
+\]
+for the sum of all contributions due to one-particle irreducible graphs. This is known as the \term{electron self-energy}\index{self-energy!electron}. Note that we do not include the contributions of the propagators connecting us to the external legs. It is not difficult to see that any Feynman diagram with external vertices $\bar\psi$, $\psi$ is just a bunch of 1PI's joined together. Thus, we can expand
+\[\arraycolsep=0.25em\def\arraystretch{2.2}
+ \begin{array}{ccccc}
+ S(k) & \sim &
+ \begin{tikzpicture}[eqpic]
+
+ \draw [fill, white] (0.7, 0.3) rectangle (1.3, -0.3);
+ \draw (0, 0) node [circ] {} node [above] {$\bar\psi$} -- (2, 0) node [circ] {} node [above] {$\psi$};
+ \draw [opacity=0] (0, 0) node [circ] {} node [below] {$\bar\psi$} -- (2, 0) node [circ] {} node [below] {$\psi$};
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) node [circ] {} node [above] {$\bar\psi$} -- (2, 0) node [circ] {} node [above] {$\psi$};
+ \draw [opacity=0] (0, 0) node [circ] {} node [below] {$\bar\psi$} -- (2, 0) node [circ] {} node [below] {$\psi$};
+
+ \draw [fill] (0.7, 0.3) rectangle (1.3, -0.3);
+ \end{tikzpicture}\\
+ & & \displaystyle\frac{1}{i \slashed{k} + m} &+& \text{quantum corrections}.
+\end{array}
+\]
+with the quantum corrections given by
+\[
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) node [circ] {} node [above] {$\bar\psi$} -- (2, 0) node [circ] {} node [above] {$\psi$};
+
+ \draw [fill] (0.7, 0.3) rectangle (1.3, -0.3);
+ \draw [opacity=0] (0, 0) node [circ] {} node [below] {$\bar\psi$} -- (2, 0) node [circ] {} node [below] {$\psi$};
+ \end{tikzpicture}
+ =
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) node [circ] {} node [above] {$\bar\psi$} -- (2, 0) node [circ] {} node [above] {$\psi$};
+
+ \draw [fill=white] (1, 0) circle [radius=0.3] node {1PI};
+ \draw [opacity=0] (0, 0) node [circ] {} node [below] {$\bar\psi$} -- (2, 0) node [circ] {} node [below] {$\psi$};
+ \end{tikzpicture}
+ +
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) node [circ] {} node [above] {$\bar\psi$} -- (3, 0) node [circ] {} node [above] {$\psi$};
+
+ \draw [fill=white] (1, 0) circle [radius=0.3] node {1PI};
+ \draw [fill=white] (2, 0) circle [radius=0.3] node {1PI};
+ \draw [opacity=0] (0, 0) node [circ] {} node [below] {$\bar\psi$} -- (3, 0) node [circ] {} node [below] {$\psi$};
+ \end{tikzpicture}
+ + \cdots
+\]
+This sum is easy to perform. The diagram with $n$ 1PI's has contributions from $n$ 1PI factors and $n + 1$ propagators. Also, momentum conservation forces them all to carry the same momentum. So we simply have a geometric series
+\begin{align*}
+ S(k) &\sim \frac{1}{i\slashed{k} + m} + \frac{1}{i\slashed{k} + m} \Sigma(\slashed{k}) \frac{1}{i\slashed{k} + m} + \frac{1}{i\slashed{k} + m} \Sigma(\slashed{k}) \frac{1}{i\slashed{k} + m} \Sigma(\slashed{k}) \frac{1}{i\slashed{k} + m} + \cdots\\
+ &= \frac{1}{i\slashed{k} + m- \Sigma(\slashed{k})}.
+\end{align*}
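+Explicitly, writing $S_0 = (i\slashed{k} + m)^{-1}$ for the free propagator, this is a geometric series of matrices in spinor space, and the resummation
+\[
+ \sum_{n = 0}^\infty S_0 \left(\Sigma(\slashed{k}) S_0\right)^n = S_0 \left(1 - \Sigma(\slashed{k}) S_0\right)^{-1} = \left(S_0^{-1} - \Sigma(\slashed{k})\right)^{-1}
+\]
+holds at least as an identity of formal power series in $\Sigma$.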
+
+We can interpret this result as saying that integrating out the virtual photons shifts the kinetic term by $\Sigma(\slashed{k})$.
+
+We now move on to study the other term. It was the expectation
+\[
+ \bra j_\mu(x) \psi(x_1) \bar\psi(x_2)\ket.
+\]
+We note that, using the definition of the covariant derivative $\slashed\D$, our classical action can be written as
+\[
+ S[A, \psi] = \int \d^d x\; \left(\frac{1}{4} F_{\mu\nu}F^{\mu\nu} + i \bar\psi \slashed \partial \psi + j^\mu A_\mu + m \bar\psi \psi\right).
+\]
+In position space, this gives interaction vertices of the form
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1.5, -0.5) {};
+ \node [circ] at (3, 0) {};
+ \node [above] at (0, 0) {$x_1$};
+ \node [above] at (1.5, -0.5) {$x$};
+ \node [above] at (3, 0) {$x_2$};
+
+ \draw [/tikzfeynman/fermion] (0, 0) -- (1.5, -0.5);
+ \draw [/tikzfeynman/fermion] (1.5, -0.5) -- (3, 0);
+ \draw [/tikzfeynman/photon] (1.5, -0.5) -- (1.5, -2);
+ \end{tikzpicture}
+\end{center}
+Again, we want to consider quantum corrections to this interaction vertex. It turns out the interesting correlation function is exactly $\bra j_\mu(x) \psi(x_1) \bar\psi(x_2)\ket$.
+
+This might seem a bit odd. Why do we not just look at the vertex itself, and just consider $\bra j_\mu\ket$? Looking at $\bra \psi j_\mu \bar\psi\ket$ instead corresponds to including the propagators coming from the external legs. The point is that there can be photons that stretch \emph{across} the vertex, looking like
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1.5, -0.5) {};
+ \node [circ] at (3, 0) {};
+ \node [above] at (0, 0) {$x_1$};
+ \node [above] at (1.5, -0.5) {$x$};
+ \node [above] at (3, 0) {$x_2$};
+
+ \draw [/tikzfeynman/fermion] (0, 0) -- (1.5, -0.5);
+ \draw [/tikzfeynman/fermion] (1.5, -0.5) -- (3, 0);
+ \draw [/tikzfeynman/photon] (1.5, -0.5) -- (1.5, -2);
+ \draw (0.75, -0.25) edge [bend left, /tikzfeynman/photon] (2.25, -0.25);
+ \end{tikzpicture}
+\end{center}
+So when doing computations, we must involve the two external electron propagators as well. (We do not include the photon propagator; we explore what happens when we do so on the example sheet.)
+
+We again take the Fourier transform of the correlator, and define
+\begin{defi}[Exact electromagnetic vertex]\index{exact electromagnetic vertex}
+ The \emph{exact electromagnetic vertex} $\Gamma_\mu(k_1, k_2)$ is defined by
+ \begin{multline*}
+ \delta^4(p + k_1 - k_2) S(k_1) \Gamma_\mu(k_1, k_2) S(k_2) \\
+ = \int \d^4 x\; \d^4 x_1\; \d^4 x_2\; \bra j_\mu(x) \psi(x_1) \bar\psi(x_2)\ket e^{ip\cdot x} e^{ik_1 \cdot x_1} e^{-ik_2 \cdot x_2}.
+ \end{multline*}
+\end{defi}
+Note that we divided out the $S(k_1)$ and $S(k_2)$ in the definition of $\Gamma_\mu(k_1, k_2)$, because ultimately, we are really just interested in the vertex itself.
+
+Can we figure out what this $\Gamma$ is? Up to first order, we have
+\[
+ \bra \psi(x_1) j_\mu(x) \bar\psi(x_2)\ket \sim \bra \psi(x_1) \bar\psi(x)\ket \gamma_\mu \bra \psi(x) \bar\psi(x_2)\ket + \text{quantum corrections}.
+\]
+So in momentum space, after dividing out by the exact propagators, we obtain
+\[
+ \Gamma_\mu(k_1, k_2) = \gamma_\mu + \text{quantum corrections}.
+\]
+This first order term corresponds to diagrams that do \emph{not} include photons going across the two propagators, and just corresponds to the classical $\gamma_\mu$ inside the definition of $j_\mu$. The quantum corrections are the interesting parts.
+
+In the case of the exact electron propagator, we had this clever idea of one-particle irreducible graphs that allowed us to simplify the propagator computations. Do we have a similar clever idea here? Unfortunately, we don't. But we don't have to! The Ward identity relates the exact vertex to the exact electron propagator.
+
+Taking the Fourier transform of the Ward identity, and dropping some $\delta$-functions, we obtain
+\[
+ (k_1 - k_2)_\mu S(k_1) \Gamma^\mu (k_1, k_2) S(k_2) = i S(k_1) - i S(k_2).
+\]
+Recall that $S(k_i)$ are matrices in spinor space, and we wrote them as $\frac{1}{\cdots}$. So it is easy to invert them, and we find
+\begin{align*}
+ (k_1 - k_2)_\mu \Gamma^\mu(k_1, k_2) &= i S^{-1}(k_2) - i S^{-1}(k_1)\\
+ &= -i (i \slashed{k}_1 + m - \Sigma(\slashed{k}_1) - i \slashed{k}_2 - m + \Sigma(\slashed{k}_2))\\
+ &= (k_1 - k_2)_\mu \gamma^\mu + i (\Sigma(\slashed{k}_1) - \Sigma(\slashed{k}_2)).
+\end{align*}
+This gives us an explicit expression for the quantum corrections of the exact vertex $\Gamma^\mu$ in terms of the quantum corrections of the exact propagator $S(k)$.
+
+Note that very little of this calculation relied on what field we actually worked with. We could have included more fields in the theory, and everything would still go through. We might obtain a different value of $\Sigma(\slashed{k})$, but this relation between the quantum corrections of $\Gamma^\mu$ and the quantum corrections of $S$ still holds.
+
+What is the ``philosophical'' meaning of this? Recall that the contributions to the propagator come from the $\bar\psi \slashed{\partial} \psi$ term, while the contributions to the vertex come from the $\bar\psi \slashed{A} \psi$ term. The fact that their quantum corrections are correlated in such a simple way suggests that our quantum theory treats the $\bar\psi \slashed{D} \psi$ term as a whole, so that both pieces receive the ``same'' quantum corrections. In other words, the quantum theory respects gauge transformations. When we first studied QED, we didn't understand renormalization very well, and the Ward identity provided a sanity check that we hadn't messed up the gauge invariance of our theory when regularizing.
+
+How could we have messed up? In the derivations, there was one crucial assumption we made, namely that $\D \psi \D \bar\psi$ is invariant under \emph{position-dependent} transformations $\psi(x) \mapsto e^{i\alpha(x)} \psi(x)$. This was needed for $j_\mu$ to be the classical current. This is true if we regularized by sampling our field at different points in space, as long as we included the same number of $\psi$ and $\bar\psi$ terms.
+
+However, historically, this is not what we used. Instead, we imposed cutoffs on the Fourier modes, asking $k^2 \leq \Lambda_0$. This is \emph{not} compatible with arbitrary changes $\psi(x) \mapsto e^{i\alpha(x)} \psi(x)$: the Fourier transform of a product is the convolution of the Fourier transforms, so picking a wild $\alpha$ introduces really high frequency modes into $\psi$.
+
+
+
+
+\section{Wilsonian renormalization}
+\subsection{Background setting}
+We are now going to study renormalization. Most of the time, we will assume we are talking about a real scalar field $\varphi$, but the ideas and results are completely general.
+
+Suppose we did some experiments, obtained some results, and figured that we are probably working with a quantum field theory described by some action. But we didn't test our theory at arbitrarily high energies. We don't know what ``real physics'' looks like at high energy scales. So we can't really write down a theory that we can expect to be valid up to arbitrarily high energies.
+
+However, we have previously seen that ``integrating out'' high energy particles has the same effect as just modifying the coupling constants of our theory. Similarly, even with a single fixed field $\varphi$, we can integrate out the high energy modes of the field $\varphi$, and obtain an effective theory. Suppose we integrate out all modes with $k^2 \geq \Lambda_0$, and obtain an effective action
+\[
+ S_{\Lambda_0} [\varphi] = \int_M \d^d x\; \left[\frac{1}{2}(\partial \varphi)^2 + \sum_i g_i \mathcal{O}_i(\varphi , \partial\varphi)\right].
+\]
+This, by definition, means the partition function of the theory is now given by
+\[
+ \mathcal{Z} = \int_{C^\infty(M)_{\leq \Lambda_0}} \D \varphi\;e^{-S_{\Lambda_0}[\varphi]},
+\]
+where \term{$C^\infty(M)_{\leq \Lambda_0}$} denotes the space of all functions on $M$ consisting of sums (integrals) of eigenmodes of the Laplacian with eigenvalues $\leq \Lambda_0$ (in ``layman'' terms, these are fields with momentum $k^2 \leq \Lambda_0$). This effective action can answer questions about ``low energy physics'', at scales $< \Lambda_0$, which we can use to test our theory against experiments.
+
+Note that in the case of a compact universe, the Laplacian has discrete eigenvalues. For example, if we work on a flat torus (equivalently, $\R^n$ with periodic boundary conditions), then the possible eigenvalues of the Laplacian lie on a lattice. Then after imposing a cutoff, there are only finitely many energy modes, and we have successfully regularized the theory into something that makes mathematical sense.
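+\begin{eg}
+ On a flat torus of side length $L$, the eigenmodes of the Laplacian are the Fourier modes $e^{ik\cdot x}$ with $k = \frac{2\pi n}{L}$ for $n \in \Z^d$, with eigenvalue $k^2$. The cutoff $k^2 \leq \Lambda_0$ thus keeps only the lattice points with $|n| \leq \frac{L \sqrt{\Lambda_0}}{2\pi}$, of which there are finitely many, of order $(L^2 \Lambda_0)^{d/2}$.
+\end{eg}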
+
+In the case of a non-compact universe, this doesn't happen. But still, in perturbation theory, this theory will give finite answers, not infinite ones. The loop integrals have a finite cutoff, and thus give finite answers. Of course, we are not saying that summing over \emph{all} Feynman diagrams will give a finite answer --- this doesn't happen even for $0$-dimensional QFTs. (This isn't exactly true. There are further subtleties due to ``infrared divergences''. We'll mostly ignore these, as they are not really problems related to renormalization.)
+
+Either way, we have managed to find ourselves a theory that we are reasonably confident in, and gives us the correct predictions in our experiments.
+
+Now suppose that, 10 years later, we get the money and build a bigger accelerator, so that we can test our theories at higher energy scales. We can then try to write down an effective action at this new energy scale. Of course, it will be a different action, since we have changed the energy scale. However, the two actions are not unrelated! Indeed, they must give the same answers for our ``low energy'' experiments. Thus, we would like to understand how the action changes when we change this energy scale.
+
+In general, the coupling constants are a function of the energy scale $\Lambda_0$, and we will write them as $g_i(\Lambda_0)$. The most general (local) action can be written as
+\[
+ S_{\Lambda_0} [\varphi] = \int_M \d^d x\; \left[\frac{1}{2}(\partial \varphi)^2 + \sum_i g_i(\Lambda_0) \Lambda^{d - d_i}_0 \mathcal{O}_i(\varphi , \partial\varphi)\right],
+\]
where the $\mathcal{O}_i(\varphi, \partial \varphi)$ are monomials in the fields and their derivatives, and $d_i$ is the mass dimension $[\mathcal{O}_i]$. Note that this expression assumes that the kinetic term does not depend on $\Lambda_0$. This is generally not the case, and we will address this issue later.
+
+We inserted the factor of $\Lambda_0^{d - d_i}$ such that the coupling constants $g_i(\Lambda_0)$ are dimensionless. This is useful, as we are going to use dimensional analysis a lot. However, this has the slight disadvantage that even without the effects of integrating out fields, the coupling constants $g_i$ must change as we change the energy scale $\Lambda_0$.
+
+To do this, we need to actually figure out the mass dimension of our operators $\mathcal{O}_i$. Thus, we need to figure out the dimensions of $\varphi$ and $\partial$. We know that $S$ itself is dimensionless, since we want to stick it into an exponential. Thus, any term appearing in the integrand must have mass dimension $d$ (as the measure has mass dimension $-d$).
+
+By looking at the kinetic term in the Lagrangian, we deduce that we must have
+\[
+ [(\partial \varphi)^2] = d.
+\]
+as we have to eventually integrate it over space.
+
Also, we know that $[\partial_\mu] = 1$, so $[(\partial \varphi)^2] = 2 + 2[\varphi]$. Thus, we must have
+\begin{prop}
+ \[
+ [\partial_\mu] = 1,\quad [\varphi] = \frac{d - 2}{2}.
+ \]
+\end{prop}
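With these, the classical dimension $d_i$ of any monomial can be read off directly. For instance:

```latex
\[
  d_{\varphi^4} = [\varphi^4] = 4 \cdot \frac{d - 2}{2} = 2(d - 2),
  \qquad
  d_{\varphi^2 (\partial \varphi)^2} = [\varphi^2 (\partial \varphi)^2] = 2(d - 2) + 2.
\]
```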
+
+%\begin{eg}
+% If $\mathcal{O}(\varphi, \partial \varphi) = \varphi^4$, then the mass dimension is
+% \[
+% d_{\varphi^4} = [\varphi^4] = 2 (d - 2).
+% \]
+% On the other hand, if $\mathcal{O}(\varphi, \partial \varphi) = \varphi^2 (\partial\varphi)^2$, then
+% \[
+% d_{\varphi^2 (\partial \varphi)^2} = [\varphi^2 (\partial \varphi)^2] = 2 (d - 2) + 2.
+% \]
+%\end{eg}
+%We now construct our regularized path integral.
+%\begin{notation}
+% We write
+% \[
+% C^\infty(M)_{\leq \Lambda_0}
+% \]
+% for the space of all functions on $M$ that can be written as the sum of eigenmodes of the Laplacian with eigenvalue $\leq \Lambda_0^2$, i.e.\ modes of ``energy'' $\leq \Lambda_0$.
+%\end{notation}
+%We can then define a regularized path integral by
+%\[
+% \mathcal{Z}(\Lambda_0, g_i(\Lambda_0)) = \int_{C^\infty(M)_{\leq \Lambda_0}} \D \varphi\;e^{-S_{\Lambda_0}[\varphi]},
+%\]
+%Note that mathematically, once we have fixed our manifold $M$, this is just an ordinary function of the $\Lambda_0$ and the $g_i$. We can disregard physics completely, and are free to plug in any values of $\Lambda_0$ and $g_i$, and try to compute this integral.
+%
+%Does this regularized path integral actually exist? If $M$ is compact, then $C^\infty(M)_{\leq \Lambda_0}$ is finite-dimensional, and the measure $\D \varphi$ really exists. For example, if $M = T^d = \R^d/L\Z^d$, so that each direction has period $L$, then we can write
+%\[
+% \varphi(x) = \sum_{\mathbf{n} \in \Z^d} \hat{\varphi}_\mathbf{n} e^{2\pi i \mathbf{n}\cdot \mathbf{x}/L},
+%\]
+%and we integrate over modes where
+%\[
+% \mathbf{n}^2 \leq \Lambda_0^2 \left(\frac{L}{2\pi}\right)^2.
+%\]
+%It is an exercise for the reader to show that there are only finitely many $d$-tuples $\mathbf{n}$ that obeys this.
+%
+%However, if $M$ is non-compact, e.g.\ $\R^4$, then there are further subtleties here due to ``infrared divergence''. We'll mostly ignore these, as they are not really problems related to renormalization. When developing the theory, we can just assume we are working with a compact space $M$, and everything is fine. However, when actually doing computations, we will assume (or pretend) our theory still makes sense in non-compact cases.
+
+\subsection{Integrating out modes}
Suppose, for some magical reason, we know exactly what the theory at the energy scale $\Lambda_0$ is, i.e.\ we know the coupling coefficients $g_i(\Lambda_0)$. What happens when we integrate out some high energy modes?
+
+We pick some $\Lambda < \Lambda_0$, and split our field $\varphi$ into ``low'' and ``high'' energy modes as follows:
+\begin{align*}
 \varphi(x) &= \int_{|\mathbf{p}| \leq \Lambda_0} \frac{\d^d \mathbf{p}}{ (2\pi)^d}\;\tilde{\varphi}(\mathbf{p}) e^{i\mathbf{p}\cdot \mathbf{x}}\\
 &= \int_{0 \leq |\mathbf{p}| \leq \Lambda} \frac{\d^d \mathbf{p}}{ (2\pi)^d}\;\tilde{\varphi}(\mathbf{p}) e^{i\mathbf{p}\cdot \mathbf{x}} + \int_{\Lambda < |\mathbf{p}| \leq \Lambda_0} \frac{\d^d \mathbf{p}}{ (2\pi)^d}\;\tilde{\varphi}(\mathbf{p}) e^{i\mathbf{p}\cdot \mathbf{x}}.
+\end{align*}
+We thus define
+\begin{align*}
 \phi(x) &= \int_{0 \leq |\mathbf{p}| \leq \Lambda} \frac{\d^d \mathbf{p}}{ (2\pi)^d}\;\tilde{\varphi}(\mathbf{p}) e^{i\mathbf{p}\cdot \mathbf{x}} \\
 \chi(x) &= \int_{\Lambda < |\mathbf{p}| \leq \Lambda_0} \frac{\d^d \mathbf{p}}{ (2\pi)^d}\;\tilde{\varphi}(\mathbf{p}) e^{i\mathbf{p}\cdot \mathbf{x}},
+\end{align*}
+and so
+\[
+ \varphi(x) = \phi(x) + \chi(x).
+\]
Let's consider the effective theory we obtain by integrating out $\chi$. As before, we define the scale-$\Lambda$ \term{effective action}
\[
 S_\Lambda[\phi] = -\hbar \log \left[\int_{C^\infty(M)_{\Lambda < |\mathbf{p}| \leq \Lambda_0}} \D \chi\; e^{-S_{\Lambda_0}[\phi + \chi]/\hbar}\right]. \tag{$*$}
\]
+Of course, this can be done for any $\Lambda < \Lambda_0$, and so defines a map from $[0, \Lambda_0]$ to the ``space of all actions''. More generally, for any $\varepsilon$, this procedure allows us to take a scale $\Lambda$ action and produce a scale $\Lambda - \varepsilon$ effective action from it. This is somewhat like a group (or monoid) action on the ``space of all actions'', and thus the equation $(*)$ is known as the \term{Wilsonian renormalization group equation}\index{renormalization group equation}.
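Concretely, the (semi)group property is the statement that integrating out in stages agrees with integrating out all at once: splitting the modes between $\Lambda$ and $\Lambda_0$ as $\chi = \chi' + \chi''$, with $\chi'$ supported on $\Lambda < |\mathbf{p}| \leq \Lambda'$ and $\chi''$ on $\Lambda' < |\mathbf{p}| \leq \Lambda_0$, we can perform the $\chi''$ integral first, so that (schematically)

```latex
\[
  e^{-S_\Lambda[\phi]/\hbar}
  = \int_{C^\infty(M)_{\Lambda < |\mathbf{p}| \leq \Lambda'}} \D \chi'\;
    e^{-S_{\Lambda'}[\phi + \chi']/\hbar},
\]
```

i.e.\ the scale-$\Lambda$ action obtained from $S_{\Lambda'}$ agrees with the one obtained directly from $S_{\Lambda_0}$.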
+
+Just as we saw in low-dimensional examples, when we do this, the coupling constants of the interactions will shift. For each $\Lambda < \Lambda_0$, we can define the shifted coefficients $g_i(\Lambda)$, as well as $Z_\Lambda$ and $\delta m^2$, by the equation
+\[
+ S_\Lambda[\phi] = \int_M \d^d x \left[\frac{Z_\Lambda}{2} (\partial \phi)^2 + \sum_i \Lambda^{d - d_i} Z_\Lambda^{n_i/2}g_i(\Lambda) \mathcal{O}_i (\phi, \partial \phi)\right],
+\]
+where $n_i$ is the number of times $\phi$ or $\partial \phi$ appears in $\mathcal{O}_i$.
+
+Note that we normalized the $g_i(\Lambda)$ in terms of the new $\Lambda$ and $Z_\Lambda$. So even if, by some miracle, our couplings receive no new corrections, the coefficients still transform by
+\[
+ g_i(\Lambda) = \left(\frac{\Lambda_0}{\Lambda}\right)^{d - d_i} g_i(\Lambda_0).
+\]
The factor $Z_\Lambda$ accounts for the fact that there could be new contributions to the kinetic term for $\phi$. This factor\index{$Z_\Lambda$} is called the \term{wavefunction renormalization}. It is not to be confused with the partition function, which we denote by a calligraphic $\mathcal{Z}$ instead. We will explore these in more detail later.
+
+We define
+\[
+ \mathcal{Z}(\Lambda, g_i(\Lambda)) = \int_{C^\infty(M)_{\leq \Lambda}} \D \varphi\; e^{-S_{\Lambda}[\varphi]/\hbar}.
+\]
+Then by construction, we must have
+\[
+ \mathcal{Z}(\Lambda_0, g_i(\Lambda_0)) = \mathcal{Z}(\Lambda, g_i(\Lambda))
+\]
+for all $\Lambda < \Lambda_0$. This is a completely trivial fact, because we obtained $\mathcal{Z}(\Lambda, g_i(\Lambda))$ simply by doing part of the integral and leaving the others intact.
+
+We will assume that $\mathcal{Z}$ varies continuously with $\Lambda$ (which is actually not the case when the allowed modes are discrete, but whatever). It is then convenient to write the above expression infinitesimally, by taking the derivative. Instead of the usual $\frac{\d}{\d \Lambda}$, it is more convenient to talk about the operator $\Lambda \frac{\d}{\d \Lambda}$ instead, as this is a dimensionless operator.
+
+Differentiating the above equation, we obtain
+\[
+ \Lambda \frac{\d \mathcal{Z}}{\d \Lambda} (\Lambda, g_i(\Lambda)) = \Lambda \left.\frac{\partial \mathcal{Z}}{\partial \Lambda}\right|_{g_i} + \sum_i \left.\frac{\partial \mathcal{Z}}{\partial g_i}\right|_{\Lambda} \Lambda \frac{\partial g_i}{\partial \Lambda} = 0. \tag{$\dagger$}
+\]
+This is the \term{Callan-Symanzik equation}\index{Callan-Symanzik equation!partition function} for the partition function.
+
+It is convenient to refer to the following object:
+\begin{defi}[Beta function]\index{$\beta_i$}\index{beta-function}
+ The \emph{beta function} of the coupling $g_i$ is
+ \[
+ \beta_i (g_j) = \Lambda \frac{\partial g_i}{\partial \Lambda}.
+ \]
+\end{defi}
+
+As mentioned before, even if our coupling constants magically receive no corrections, they will still change. Thus, it is convenient to separate out the boring part, and write
+\[
+ \beta_i(g_i) = (d_i - d) g_i + \beta_i^{\mathrm{quantum}} (\{g_j\}).
+\]
Notice that perturbatively, the $\beta_i^{\mathrm{quantum}} (\{g_j\})$ come from loop diagrams built from the modes we integrate out. So generically, we expect them to depend on all the other coupling constants.
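The classical part of this flow can be checked numerically. Below is a minimal sketch (the helper name is ours, not from the notes) that integrates $\Lambda \frac{\partial g_i}{\partial \Lambda} = (d_i - d) g_i$ in steps of $t = \log \Lambda$ and compares against the exact power law $g_i(\Lambda) = (\Lambda/\Lambda_0)^{d_i - d} g_i(\Lambda_0)$.

```python
import math

def run_coupling(g0, d, d_i, lam0, lam, steps=10000):
    """Euler-integrate the classical RG equation Lambda dg/dLambda = (d_i - d) g
    from the scale lam0 down to lam, in equal steps of t = log(Lambda)."""
    dt = (math.log(lam) - math.log(lam0)) / steps
    g = g0
    for _ in range(steps):
        g += (d_i - d) * g * dt  # dg = (d_i - d) g dt
    return g

# In d = 4: the quartic coupling (d_i = 4) is marginal and does not run
# classically, while the mass term (d_i = 2) is relevant and grows as we
# flow towards the infrared.
g_marginal = run_coupling(0.1, d=4, d_i=4, lam0=100.0, lam=1.0)
g_relevant = run_coupling(0.1, d=4, d_i=2, lam0=100.0, lam=1.0)
exact = 0.1 * (1.0 / 100.0) ** (2 - 4)  # = 1000.0
print(g_marginal)         # stays at 0.1
print(g_relevant, exact)  # Euler result lands within about 1% of 1000.0
```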
+
+We will later need the following definition, which at this point is rather unmotivated:
+\begin{defi}[Anomalous dimension]
 The \term{anomalous dimension} of $\phi$ is defined by
 \[
 \gamma_\phi = - \frac{1}{2} \Lambda \frac{\partial \log Z_\Lambda}{\partial \Lambda}.
 \]
+\end{defi}
+
+Of course, at any given scale, we can absorb, say, $Z_\Lambda$ by defining a new field
+\[
+ \varphi(x) = \sqrt{Z_\Lambda} \phi
+\]
+so as to give $\varphi(x)$ canonically normalized kinetic terms. Of course, if we do this at any particular scale, and then try to integrate out more modes, then the coefficient will re-appear.
+
+\subsection{Correlation functions and anomalous dimensions}
+Let's say we now want to compute correlation functions. We will write
+\[
 S_\Lambda[\phi, g_i] = \int_M \d^d x\; \left[\frac{1}{2}(\partial \phi)^2 + \frac{m^2}{2} \phi^2 + \sum_i g_i \Lambda^{d - d_i} \mathcal{O}_i(\phi , \partial\phi)\right],
+\]
where, as before, we will assume $m^2$ is one of the $g_i$. Note that the action of the $\phi$ we produced by integrating out modes is \emph{not} $S_\Lambda[\phi, g_i(\Lambda)]$, because we had the factor of $Z_\Lambda$ sticking out in the action. Instead, it is given by $S_\Lambda[Z_\Lambda^{1/2} \phi, g_i(\Lambda)]$.
+
+Now we can write a general $n$-point correlation function as
+\[
+ \bra \phi(x_1) \cdots \phi(x_n)\ket = \frac{1}{\mathcal{Z}} \int_{\leq \Lambda} \D \phi\; e^{-S_\Lambda[Z_\Lambda^{1/2} \phi, g_i(\Lambda)]} \phi(x_1) \cdots \phi(x_n).
+\]
+We can invent a canonically normalized field
+\[
+ \varphi(x) = \sqrt{Z_\Lambda} \phi,
+\]
+so that the kinetic term looks right. Then defining
+\[
+ \bra \varphi(x_1) \cdots \varphi(x_n)\ket = \frac{1}{\mathcal{Z}}\int_{\leq \Lambda} \D \phi\; e^{-S_\Lambda[\varphi, g_i(\Lambda)]} \varphi(x_1) \cdots \varphi(x_n),
+\]
+we find
+\[
+ \bra \phi(x_1) \cdots \phi(x_n)\ket = Z_\Lambda^{-n/2} \bra \varphi(x_1) \cdots \varphi(x_n)\ket.
+\]
+Note that we don't have to worry about the factor of $Z_\Lambda^{1/2}$ coming from the scaling of the path integral measure, as the partition function $\mathcal{Z}$ is scaled by the same amount.
+\begin{defi}[$\Gamma_\Lambda^{(n)}$]\index{$\Gamma_\Lambda^{(n)}$}
+ We write
+ \[
+ \Gamma_\Lambda^{(n)} (\{x_i\}, g_i) = \frac{1}{\mathcal{Z}}\int_{\leq \Lambda} \D \phi\; e^{-S_\Lambda[\phi, g_i]} \phi(x_1) \cdots \phi(x_n) = \bra \varphi(x_1) \cdots \varphi(x_n)\ket.
+ \]
+\end{defi}
+Now suppose $0 < s < 1$, and that we've chosen to insert fields only with energies $< s \Lambda$. Then we should equally be able to compute the correlator using the low energy theory $S_{s\Lambda}$. We're then going to find
+\[
 Z^{-n/2}_{s\Lambda} \Gamma_{s\Lambda}^{(n)} (x_1, \cdots, x_n, g_i(s\Lambda)) = Z_\Lambda^{-n/2} \Gamma^{(n)}_{\Lambda} (x_1, \cdots, x_n, g_i(\Lambda)).
+\]
Differentiating this with respect to $s$ and setting $s = 1$, we find
\[
 \left(\Lambda \frac{\partial}{\partial \Lambda} + \beta_i \frac{\partial}{\partial g_i} + n \gamma_\phi\right) \Gamma_\Lambda^{(n)} ( \{x_i\}, g_i(\Lambda)) = 0.
+\]
+This is the \term{Callan-Symanzik equation}\index{Callan-Symanzik equation!correlation function} for the correlation functions.
+
There is an alternative way of thinking about what happens when we change $\Lambda$. We will assume we work over $\R^d$, so that it makes sense to \emph{scale} our universe (on a general Riemannian manifold, we could achieve the same effect by scaling the metric). The coordinates change by $x \mapsto sx$. How does $\Gamma_\Lambda(x_1, \ldots, x_n, g_i)$ relate to $\Gamma_\Lambda(s x_1, \ldots, s x_n, g_i)$?
+
+We unwrap the definitions
+\[
+ \Gamma_\Lambda^{(n)} (\{s x_i\}, g_i) = \frac{1}{\mathcal{Z}}\int_{\leq \Lambda} \D \phi\; e^{-S_\Lambda[\phi, g_i]} \phi(s x_1) \cdots \phi(s x_n)
+\]
We make the substitution $\varphi(x) = a \phi(sx)$, with a constant $a$ to be chosen later so that things work out. Again, we don't have to worry about how $\D \phi$ transforms. However, this change of variables does scale the Fourier modes, so the cutoff for $\varphi$ is in fact $s\Lambda$. How does $S_\Lambda[\phi, g_i]$ transform? Using the chain rule, we have
+\begin{align*}
 S_{s\Lambda}[\varphi, g_i] &= \int_M \d^d x\; \left[\frac{1}{2}(\partial \varphi)^2 + \frac{m^2}{2} \varphi^2 + \sum_i g_i (s\Lambda)^{d - d_i} \mathcal{O}_i(\varphi , \partial\varphi)\right]\\
+ \intertext{Putting in the definition of $\varphi$, and substituting $y = sx$, we have}
 &= s^{-d} \int_M \d^d y\left[\frac{1}{2}a^2 s^2 (\partial \phi)^2 + \frac{m^2}{2} a^2 \phi^2 + \sum_i g_i (s\Lambda)^{d - d_i} \mathcal{O}_i(a \phi, as \partial\phi)\right],
+\end{align*}
+where all fields are evaluated at $y$. We want this to be equal to $S_\Lambda[\phi, g_i]$. By looking at the kinetic term, we know that we need
+\[
+ a = s^{(d - 2)/2}.
+\]
By a careful analysis, we see that the other terms also work out (or we know they must, by dimensional analysis). So we have
+\begin{align*}
+ \Gamma_\Lambda^{(n)} (\{s x_i\}, g_i) &= \frac{1}{\mathcal{Z}}\int_{\leq \Lambda} \D \phi\; e^{-S_\Lambda[\phi, g_i]} \phi(s x_1) \cdots \phi(s x_n)\\
 &= \frac{1}{\mathcal{Z}}\int_{\leq s\Lambda} \D \varphi\; e^{-S_{s\Lambda}[\varphi, g_i]} s^{(2 - d)n/2} \varphi(x_1) \cdots \varphi(x_n)\\
 &= s^{(2 - d) n/2}\Gamma_{s\Lambda}^{(n)} (\{x_i\}, g_i).
+\end{align*}
+%
+%Now let's consider a \emph{rescaling} $x \mapsto s x$. The action is invariant provided we change
+%\[
+% \int \d^d x\; (\partial \phi)^2 \mapsto s^{d - 2} \int \d^d x (\partial \phi')^2.
+%\]
+%So we choose
+%\[
+% \phi(sx) = s^{(2 - d)/2} \phi(x).
+%\]
+%Similarly, our cutoff needs to transform as
+%\[
+% \Lambda \mapsto s^{-1}\Lambda.
+%\]
+%With this scaling, we have
+Thus, we can write
+\begin{align*}
 \Gamma_\Lambda^{(n)} (x_1, \cdots, x_n, g_i(\Lambda)) &= \left(\frac{Z_\Lambda}{Z_{s\Lambda}}\right)^{n/2} \Gamma^{(n)}_{s\Lambda} (x_1, \cdots, x_n, g_i (s\Lambda))\\
 &= \left(\frac{Z_\Lambda s^{d - 2}}{Z_{s\Lambda}}\right)^{n/2} \Gamma_\Lambda^{(n)} (s x_1, \cdots, s x_n, g_i(s\Lambda)).
+\end{align*}
+Note that in the second step, we don't change the values of the $g_i$! We are just changing units for measuring things. We are not integrating out modes.
+
+Equivalently, if $y_i = sx_i$, then what we found is that
+\[
 \Gamma_\Lambda^{(n)} \left(\frac{y_1}{s}, \cdots, \frac{y_n}{s}, g_i(\Lambda)\right) = \left(\frac{Z_\Lambda s^{d - 2}}{Z_{s\Lambda}}\right)^{n/2} \Gamma_\Lambda^{(n)} (y_1, \cdots, y_n, g_i(s\Lambda)).
+\]
+What does this equation say? Here we are cutting off the $\Gamma$ at the same energy level. As we reduce $s$, the right hand side has the positions fixed, while on the left hand side, the points get further and further apart. So on the left hand side, as $s \to 0$, we are probing the theory at longer and longer distances. Thus, what we have found is that ``zooming out'' in our theory is the same as flowing down the couplings $g_i(s\Lambda)$ to a scale appropriate for the low energy theory.
+
Infinitesimally, let $s = 1 - \delta s$, with $0 < \delta s \ll 1$. Then the rescaling factor per field is
\[
 \left(\frac{Z_\Lambda (1 - \delta s)^{d - 2}}{Z_{(1 - \delta s)\Lambda}}\right)^{1/2} \approx 1 - \left(\frac{d - 2}{2} + \gamma_\phi\right) \delta s,
\]
where $\gamma_\phi$ is the anomalous dimension of $\phi$ we defined before.
+
Classically, we'd expect this correlation function $ \bra \phi(s x_1) \cdots \phi(s x_n)\ket$ to scale with $s$ as
\[
 s^{n(d - 2)/2},
\]
since that's what dimensional analysis would tell us. But quantum mechanically, we see that there is a correction given by $\gamma_\phi$, and the correlation function really scales as $s^{n \Delta_\phi}$, where
\[
 \Delta_\phi = \frac{d - 2}{2} + \gamma_\phi.
\]
+So the dependence of the correlation on the distance is not just what we expect from dimensional analysis, but it gains a quantum correction factor. Thus, we say $\gamma_\phi$ is the ``anomalous dimension'' of the field.
+
+\subsection{Renormalization group flow}
+We now study the \emph{renormalization group flow}. In other words, we want to understand how the coupling constants actually change as we move to the infrared, i.e.\ take $\Lambda \to 0$. The actual computations are difficult, so in this section, we are going to understand the scenario rather qualitatively and geometrically.
+
We can imagine that there is a configuration space whose points are the possible combinations of the $g_i$, and as we take $\Lambda \to 0$, we trace out a trajectory in this configuration space. We want to understand what these trajectories look like.
+
+As in most of physics, we start at an equilibrium point.
+\begin{defi}[Critical point]\index{critical point}
+ A \emph{critical point} is a point in the configuration space, i.e.\ a choice of couplings $g_i = g_i^*$ such that $\beta_i(g_i^*) = 0$.
+\end{defi}
+One such example of a critical point is the Gaussian theory, with all couplings, including the mass term, vanishing. Since there are no interactions at all, nothing happens when we integrate out modes. It is certainly imaginable that there are other critical points. We might have a theory where the classical dimensions of all couplings are zero, and also by a miracle, all quantum corrections vanish. This happens, for example, in some supersymmetric theories. Alternatively, the classical dimensions are non-zero, but the quantum corrections happen to exactly compensate the effect of the classical dimension.
+
+In either case, we have some couplings $g_i^*$ that are independent of scale, and thus the anomalous dimension $\gamma_\phi(g_i^*) = \gamma_\phi^*$ would also be independent of scale. This has important consequences.
+\begin{eg}
+ At a critical point, the renormalization group equation for a two-point function becomes
+ \[
+ 0 = \left(\Lambda \frac{\partial}{\partial \Lambda} + \beta_i(g_i^*) \frac{\partial}{\partial g_i} + 2 \gamma_\phi (g_i^*)\right) \Gamma_\Lambda^{(2)} (x, y).
+ \]
+ But the $\beta$-function is zero, and $\gamma_\phi$ is independent of scale. So
+ \[
+ \Lambda \frac{\partial}{\partial \Lambda} \Gamma^{(2)}_\Lambda(x, y) = -2 \gamma_\phi^* \Gamma_\Lambda^{(2)}(x, y).
+ \]
+ On the other hand, on dimensional grounds, $\Gamma$ must be of the form
+ \[
+ \Gamma_\Lambda^{(2)} (x, y, g_i^*) = f(\Lambda|x - y|, g_i^*) \Lambda^{d - 2}
+ \]
+ for some function $f$. Feeding this into the RG equation, we find that $\Gamma$ must be of the form
+ \[
+ \Gamma_\Lambda^{(2)} (x, y, g_i^*) = \frac{\Lambda^{d - 2} c(g_i^*)}{ \Lambda^{2 \Delta_\phi} |x - y|^{2 \Delta_\phi}} \propto \frac{c(g_i^*)}{|x - y|^{2 \Delta_\phi}},
+ \]
 where $c(g_i^*)$ is some constant independent of the points. This is an example of what we were saying before. This scales as $|x - y|^{-2 \Delta_\phi}$, instead of $|x - y|^{2 - d}$, and the anomalous dimension provides the necessary correction.
+\end{eg}
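To spell out the last step of the example (a sketch, writing $u = \Lambda|x - y|$): feeding the ansatz into the RG equation gives a first-order ODE for $f$,

```latex
\[
  \Lambda \frac{\partial}{\partial \Lambda}\left[f(u)\, \Lambda^{d - 2}\right]
  = \left[u f'(u) + (d - 2) f(u)\right] \Lambda^{d - 2}
  = -2 \gamma_\phi^* f(u)\, \Lambda^{d - 2},
\]
```

so $u f'(u) = -(d - 2 + 2\gamma_\phi^*) f(u) = -2 \Delta_\phi f(u)$, which is solved by $f(u) = c(g_i^*)\, u^{-2\Delta_\phi}$.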
+
+Now a Gaussian universe is pretty boring. What happens when we start close to a critical point? As in, say, IA Differential Equations, we can try to Taylor expand, and look at the second derivatives to understand the behaviour of the system. This corresponds to Taylor-expanding the $\beta$-function, which is by itself the first derivative.
+
+We set our couplings to be
+\[
+ g_i = g_i^* + \delta g_i.
+\]
+Then we can write
+\[
 \left.\Lambda \frac{\partial g_i}{\partial \Lambda} \right|_{g_i^* + \delta g_i} = B_{ij} (\{g_k^*\}) \delta g_j + O(\delta g^2),
+\]
+where $B_{ij}$ is (sort of) the Hessian matrix, which is an infinite dimensional matrix.
+
As in IA Differential Equations, we consider the eigenvectors of $B_{ij}$. Suppose we have an ``eigencoupling'' $\sigma_i$. Classically, we expect
+\[
+ g_i(\Lambda) = \left(\frac{\Lambda}{\Lambda_0}\right)^{d_i - d} g_i(\Lambda_0),
+\]
+and so $\delta g_j = \delta_{ij}$ gives an eigenvector with eigenvalue $d_i - d$. In the fully quantum case, we will write the eigenvalue as $\Delta_i - d$, and we define
+\[
+ \gamma_i = \Delta_i - d_i
+\]
to be the \term{anomalous dimension}\index{anomalous dimension!operator} of the operator. Since $\sigma_i$ is an eigencoupling, we find that
+\[
+ \Lambda \frac{\partial \sigma_i}{\partial \Lambda} = (\Delta_i - d) \sigma_i.
+\]
+Consequently, we find
+\[
+ \sigma_i(\Lambda) = \left(\frac{\Lambda}{\Lambda_0}\right)^{\Delta_i - d} \sigma_i(\Lambda_0)
+\]
+to this order.
+
+Suppose $\Delta_i > d$. Then as we lower the cutoff from $\Lambda_0$ to $0$, we find that $\sigma_i(\Lambda) \to 0$ exponentially. So we flow back to the theory at $g_i^*$ as we move to lower energies. These operators are called \emph{irrelevant}\index{irrelevant operator}.
+
+Assuming that quantum corrections do not play a very large role near the critical point, we know that there must be infinitely many such operators, as we can always increase $d_i$, hence $\Delta_i$ by adding more derivatives or fields (for $d > 2$). So we know the critical surface is infinite dimensional.
+
On the other hand, if $\Delta_i < d$, then $\sigma_i(\Lambda)$ increases as we go to the infrared. These operators hence become \emph{more} significant. These are called \term{relevant operators}. There are only finitely many such relevant operators, at least for $d > 2$. Any RG trajectory emanating from $g_i^*$ is called a \term{critical trajectory}.
+
We can draw a picture. The \term{critical surface $C$} consists of (the span of) all irrelevant modes, and is typically infinite-dimensional with finite codimension:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, thick] (0, 4) -- (4, 3) .. controls (4.5, 2) and (3.5, 0) .. (4, -1) -- (0, 0) .. controls (-0.5, 1) and (0.5, 3) .. (0, 4);
+
+ \node [circ, mred] at (2, 1.5) {};
+
+ \draw (0.5, 1.75) edge [->, dashed, out=20, in=166] (1.8, 1.55);
 \draw (3.5, 1.3) edge [->, dashed, out=200, in=-14] (2.2, 1.45);
+
+ \draw (3, 3) edge [->, dashed, out=270, in=45] (2.1, 1.6);
+ \draw (1, 3.5) edge [->, dashed, out=290, in=110] (1.9, 1.65);
+
+ \draw (1, 0) edge [->, dashed, out=60, in=225] (1.8, 1.4);
+
+ \draw (3.4, -0.5) edge [->, dashed, out=110, in=-70] (2.1, 1.4);
+
+ \draw [thick, mred, ->-=0.5] (2, 1.5) -- (-1, 0);
+
+ \draw [thick, ->-=0.6] (0.5, 3) .. controls (1.3, 1.3) .. (-1.1, 0.15);
+
+ \draw [thick, ->-=0.63] (3.7, 1) .. controls (2, 1.2) .. (-0.9, -0.1);
+ \draw [thick, ->-=0.6] (2.4, -0.2) .. controls (2, 1) .. (-0.8, -0.2);
+ \end{tikzpicture}
+\end{center}
+A generic QFT will start at scale $\Lambda_0$ with both relevant and irrelevant operators turned on. As we flow along the RG trajectory, we focus towards the critical trajectory. This focusing is called \term{universality}.
+
This is, in fact, the reason we can do physics! We don't know the detailed microscopic information about the universe. Further, there are infinitely many coupling constants that can be non-zero. But at low energies, we don't need to know them! Most of them are irrelevant, and at low energies, only the relevant operators matter, and there are only finitely many of them.
+
+One thing we left out in our discussion is \term{marginal operators}, i.e.\ those with $\Delta_i = d$. To lowest order, these are unchanged under RG flow, but we have to examine higher order corrections to decide whether these operators are \term{marginally relevant} or \term{marginally irrelevant}, or perhaps \term{exactly marginal}\index{marginally!exactly}.
+
Marginally relevant or marginally irrelevant operators may stay roughly constant for long periods of RG evolution. Because of this, these operators are often important phenomenologically. We'll see that most of the couplings we see in QED and QCD, apart from mass terms, are marginal operators, at least to lowest order.
+
+If we use the classical dimension in place of $\Delta_i$, it is straightforward to figure out what the relevant and marginal operators are. We know that in $d$ dimensions, the mass dimension $[\phi] = \frac{d - 2}{2}$ for a scalar field, and $[\partial] = 1$. Focusing on the even operators only, we find that the only ones are
+\begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ Dimension $d$ & Relevant operators & Marginal operators\\
+ \midrule
+ $2$ & $\phi^{2k}$ for all $k > 0$ & $(\partial \phi)^2$, $\phi^{2k} (\partial \phi)^2$ for all $k > 0$\\
+ $3$ & $\phi^{2k}$ for $k = 1, 2$ & $(\partial \phi)^2$, $\phi^6$\\
+ $4$ & $\phi^2$ & $(\partial \phi)^2$, $\phi^4$\\
+ $> 4$ & $\phi^2$ & $(\partial \phi)^2$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+Of course, there are infinitely many irrelevant operators, and we do not attempt to write them out.
+
+Thus, with the exception of $d = 2$, we see that there is a short, finite list of relevant and marginal operators, at least if we just use the classical dimension.
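Since this classification is pure dimension counting, it can be reproduced mechanically. A small sketch (function names ours): classify the monomial built from $n_\phi$ fields and $n_\partial$ derivatives using $d_i = n_\phi \cdot \frac{d - 2}{2} + n_\partial$.

```python
from fractions import Fraction

def classical_dimension(d, n_phi, n_deriv):
    """Mass dimension of a monomial with n_phi fields and n_deriv derivatives,
    using [phi] = (d - 2)/2 and [d/dx] = 1 (kept exact with Fraction)."""
    return n_phi * Fraction(d - 2, 2) + n_deriv

def classify(d, n_phi, n_deriv):
    """Relevant / marginal / irrelevant according to how d_i compares to d."""
    d_i = classical_dimension(d, n_phi, n_deriv)
    if d_i < d:
        return "relevant"
    elif d_i == d:
        return "marginal"
    return "irrelevant"

# Reproduce some entries of the table above:
print(classify(4, 2, 0))  # phi^2 in d = 4: relevant
print(classify(4, 4, 0))  # phi^4 in d = 4: marginal
print(classify(3, 6, 0))  # phi^6 in d = 3: marginal
print(classify(2, 8, 2))  # phi^6 (d phi)^2 in d = 2: marginal
```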
+
+Note that here we ignored all quantum corrections, and that sort-of defeats the purpose of doing renormalization. In general, the eigen-operators will not be simple monomials, and can in fact look very complicated!
+
+\subsection{Taking the continuum limit}
+So far, we assumed we started with an effective theory at a high energy scale $\Lambda_0$, and studied the behaviour of the couplings as we flow down to low energies. This is pretty much what we do in, say, condensed matter physics. We have some detailed description of the system, and then we want to know what happens when we zoom out. Since we have a fixed lattice of atoms, there is a natural energy scale $\Lambda_0$ to cut off at, based on the spacing and phonon modes of the lattice.
+
+However, in high energy physics, we want to do the opposite. We instead want to use what we know about the low energy version of the system, and then project and figure out what the high energy theory is. In other words, we are trying to take the continuum limit $\Lambda_0 \to \infty$.
+
What do we actually mean by that? Suppose our theory is defined at a critical point $g_i^*$ and some cutoff $\Lambda_0$, so that the path integral is
\[
 \mathcal{Z}(\Lambda_0, g_i^*) = \int_{C^\infty(M)_{\leq \Lambda_0}} \D \varphi\;e^{-S_{\Lambda_0}[\varphi, g_i^*]}.
\]
No matter what value of $\Lambda_0$ we pick (while keeping the $g_i^*$ fixed), we are going to get the same path integral, and obtain the same answers, as that is what ``critical point'' means. In particular, we are free to take the limit $\Lambda_0 \to \infty$, and then we are now integrating over ``all paths''.
+
What if we don't start at a critical point? Suppose we start somewhere on the critical surface, $\{g_i\}$. We keep the same constants, but raise the value of $\Lambda_0$. What does the effective theory at a scale $\Lambda$ look like? As we increase $\Lambda_0$, the amount of ``energy scale'' we have to flow down through to get to $\Lambda$ increases. So as we raise $\Lambda_0$, the coupling constants at scale $\Lambda$ flow towards the critical point. In the continuum limit $\Lambda_0 \to \infty$, we end up at a critical point, namely a \term{conformal field theory}. This is perhaps a Gaussian, which is not very interesting, but at least we got something.
+
However, suppose our theory has some relevant operators turned on. Then as we take the limit $\Lambda_0 \to \infty$, the coupling constants of our theory diverge! This sounds bad.
+
It might seem a bit weird that we fix the constants and raise the values of $\Lambda_0$. However, sometimes, this is a reasonable thing to do. For example, if we think in terms of the ``probing distances'' of the theory, as we previously discussed, then this is equivalent to taking the same theory but ``zooming out'' and probing it at larger and larger distances. It turns out, when we do perturbation theory, the ``naive'' thing to do is to do exactly this. Of course, we now know that the right thing to do is to change our couplings as we raise $\Lambda_0$, so as to give the same physical predictions at any fixed scale $\Lambda < \Lambda_0$. In other words, we are trying to trace the renormalization group flow backwards to see where we came from! This is what we are going to study in the next chapter.
+
+%If we think carefully, what we want to do is not to fix our coupling constants and then taking $\Lambda_0 \to \infty$. As we increase $\Lambda_0$, we want to change our coupling constants accordingly so as to give the same result at our experimental scale $\Lambda \ll \Lambda_0$. In other words, we want to run our renormalization group flow backwards!
+%
+%However, suppose we begin at an experimental scale $\Lambda \ll \Lambda_0$, with the theory that has non-zero values for relevant couplings. Then we appear to lose control of this theory at $\Lambda_0 \to \infty$. We are going further and further away from the critical point. The way to obtain a finite limit is to \emph{tune} the initial values of the couplings $g_i(\Lambda_0)$.
+%
+%Suppose the RG trajectory passes closets to the critical point at some scale $\mu$. % insert picture
+%On dimensional grounds, we must have
+%\[
+% \mu = \Lambda_0 f(g_i(\Lambda_0))
+%\]
+%for some function $f$. We want to tune the values of $g_i(\Lambda_0)$ so that
+%\[
+% \lim_{\Lambda_0 \to \infty} \Lambda_0 f(g_i(\Lambda_0))
+%\]
+%is finite. Note that $f(g_i(\Lambda_0)) = 0$ defines the critical surface.
+%This is what we are going to do --- we have a fixed action $S_{\Lambda_0}[\phi, g_i]$. Practically speaking, changing the $g_i$ as we change the energy scale is annoying. Instead, we introduce a \emph{counterterm action} $S^{CT}$ that has an \emph{explicit} dependence on $\Lambda_0$ i.e.\ we are allowed to set it to any value according to our our own likings at different energy scales.
+%
+%Then we can write the general action as
+%\[
+% S_{\Lambda_0}^{\mathrm{eff}} [\phi] = S_{\Lambda_0}[\phi, g_i] + \hbar S^{CT}[\phi, g_i, \Lambda_0],
+%\]
+%We will come up with some rules for how to choose our counterterms, but the end goal is to choose them so that
+%
+%To achieve this, we modify the action by introducing counter-terms, i.e.
+%\[
+% S_{\Lambda_0}^{\mathrm{eff}} [\phi] = S_{\Lambda_0}[\phi, g_i(\Lambda_0)] + \hbar S^{CT}[\phi, g_i(\Lambda_0), \Lambda_0],
+%\]
+%where $S^{CT}$ is an action that \emph{explicitly} depends on $\Lambda_0$. What is the point of this thing? In the way we've been talking about quantum field theory, the original action we started with allowed an arbitrary set of coupling terms. By adding $S^{CT}$, we haven't introduced any new operators. All we have done is that we have modified couplings in the original action \emph{by hand}. We will do this in a clever way. These should be chosen such that
+%\[
+% \lim_{\Lambda_0 \to \infty}\left[ \int_{C^\infty(M)_{[\Lambda, \Lambda_0]}} \D \chi \; \exp\left(- \frac{1}{\hbar}S_{\Lambda_0}^{\mathrm{eff}}[\phi + \chi, g_i(\Lambda_0)] - S^{CT}[\phi + \chi, \Lambda_0]\right)\right] % check first eff
+%\]
+%is finite.
+%
+%Given how we have presented renormalization so far, it might have seemed absurd that we initially wanted to keep the constants fixed and then take the limit $\Lambda_0 \to \infty$. But when we do quantum field theory perturbatively, this turns out to be a rather natural thing to do. We will see this when we return to these ideas later on in the next chapter, where we actually do the computations.
+%
+%So in practice, we compute the path integral perturbatively. If we evaluate a 1-loop diagram ($O(\hbar^0)$) using the vertices and propagators from the original action, we will get an answer that depends on $\Lambda_0$, as we integrate modes only up to $\Lambda_0$. This result typically diverges at $\Lambda_0 \to \infty$.
+%
+%The vertices in $S^{CT}$ provide further contributions to the loop process, and we tune these \emph{by hand} so that the sum is finite as $\Lambda_0 \to \infty$.
+%
+%For example, we might imagine we have a field that looks like this
+%\begin{center}
+% \begin{tikzpicture}
+% \draw (0, 0) node [left] {$\phi$} -- (1, 0) node [circ] {};
+% \draw (1.5, 0) circle [radius=0.5]; % insert chi above, momentum k + p, show direction
+% \draw (2, 0) node [circ] {} -- (3, 0) node [right] {$\phi$}; % momentum k
+% \end{tikzpicture}
+%\end{center}
+%The loop integral is typically a divergent integral. The idea is to introduce a counterterm
+%\begin{center}
+% \begin{tikzpicture}
+% \draw (0, 0) node [left] {$\phi$} -- (3, 0) node [right] {$\phi$};
+% \node at (1.5, 0) {$\times$};
+% \end{tikzpicture}
+%\end{center}
+%that depends explicitly on $\Lambda_0$, and this will cancel off the divergent loop integral. This is just changing our coupling constants, and this corresponds to changing our starting point in the RG flow.
+%
+%Now it's time to actually do this.
+
+\subsection{Calculating RG evolution}
+We now want to actually compute the RG evolution of a theory. To do so, we of course need to make some simplifying assumptions, and we will also leave out a lot of details. We note that (in $d > 2$) the only marginal or relevant operator involving derivatives is the kinetic term $(\partial \varphi)^2$. This suggests we can find a simple truncation of the RG evolution by restricting to actions of the form
+\[
+ S[\varphi] = \int \d^d x\; \left(\frac{1}{2} (\partial \varphi)^2 + V(\varphi)\right),
+\]
+and write the potential as
+\[
+ V(\varphi) = \sum_k\Lambda^{d - k(d - 2)} \frac{g_{2k}}{(2k)!} \varphi^{2k}.
+\]
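+The powers of $\Lambda$ here are fixed by dimensional analysis: since $[\varphi] = \frac{d - 2}{2}$, we have
+\[
+ [\varphi^{2k}] = 2k \cdot \frac{d - 2}{2} = k(d - 2),
+\]
+so the prefactor $\Lambda^{d - k(d - 2)}$ carries exactly the dimension needed for each term in $V$ to have dimension $d$, and the couplings $g_{2k}$ are dimensionless.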
+In other words, we leave out higher terms that involve derivatives. This is known as the \term{local potential approximation} (\term{LPA}). As before, we split our field into low and high energy modes,
+\[
+ \varphi = \phi + \chi,
+\]
+and we want to compute the effective action for $\phi$:
+\[
+ S_{\Lambda}^{\mathrm{eff}} [\phi] = - \hbar \log \int_{C^\infty(M)_{[\Lambda, \Lambda_0]}} \D \chi\; e^{-S[\phi + \chi]}.
+\]
+This is still a very complicated path integral to do. To make progress, we assume we lower the cutoff just infinitesimally, $\Lambda = \Lambda_0 - \delta \Lambda$. The action at scale $\Lambda$ now becomes
+\[
+ S[\phi + \chi] = S[\phi] + \int \d^d x\; \left[\frac{1}{2} (\partial \chi)^2 + \frac{1}{2} \chi^2 V''(\phi) + \frac{1}{3!} \chi^3 V'''(\phi) + \cdots\right],
+\]
+where the terms linear in $\chi$ may be dropped: the kinetic cross-term vanishes because $\phi$ and $\chi$ have disjoint Fourier support, and the remaining linear term $\chi V'(\phi)$ contributes only at higher order in $\delta \Lambda$.
+
+Since we're just doing the path integral over modes with energies in $(\Lambda - \delta \Lambda, \Lambda]$, each loop integral takes the form
+\[
+ \int_{\Lambda - \delta \Lambda \leq |p| \leq \Lambda} \d^d p\; \cdots = \Lambda^{d - 1} \delta \Lambda \int_{S^{d - 1}} \d \Omega\; \cdots,
+\]
+where $\d \Omega$ denotes the integral over the unit $(d - 1)$-sphere. Since each loop integral comes with a factor of $\delta \Lambda$, to leading order in $\delta \Lambda$ we need only consider $1$-loop diagrams.
+
+A connected graph with $E$ edges, $L$ loops and $V_i$ vertices of $\chi$-valency $i$ (and arbitrary valency in $\phi$) obeys
+\[
+ L - 1 = E - \sum_{i = 2}^\infty V_i.
+\]
+Note that by assumption, there are no single-$\chi$ vertices.
+
+Also, every edge contributes to two vertices, as there are no $\chi$ loose ends. On the other hand, each vertex of type $i$ has $i$ many $\chi$ lines. So we have
+\[
+ 2E = \sum_i i V_i.
+\]
+Combining these two formulae, we have
+\[
+ L = 1 + \sum_{i = 2}^\infty \frac{(i - 2)}{2} V_i.
+\]
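+Indeed, the second relation gives $E = \frac{1}{2} \sum_i i V_i$, and substituting this into the first relation yields
+\[
+ L = 1 + \sum_{i = 2}^\infty \left(\frac{i}{2} - 1\right) V_i = 1 + \sum_{i = 2}^\infty \frac{(i - 2)}{2} V_i.
+\]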
+The number on the right is non-negative with equality iff $V_i = 0$ for all $i \geq 3$. Hence, for $1$-loop diagrams, we only need to consider vertices with precisely two $\chi$-lines attached. Thus, all the contributions look like
+\begin{center}
+ \makecenter{\begin{tikzpicture}
+ \draw [dashed] circle [radius=0.5];
+ \node [circ] at (-0.5, 0) {};
+
+ \foreach \y in {-1, -0.6, -0.2, 0.2, 0.6, 1} {
+ \draw (-1.5, \y) -- (-0.5, 0);
+ }
+ \end{tikzpicture}},\quad
+ \makecenter{\begin{tikzpicture}
+ \draw [dashed] circle [radius=0.5];
+ \node [circ] at (-0.5, 0) {};
+ \node [circ] at (0.5, 0) {};
+
+ \foreach \y in {-1, -0.6, -0.2, 0.2, 0.6, 1} {
+ \draw (-1.5, \y) -- (-0.5, 0);
+ \draw (1.5, \y) -- (0.5, 0);
+ }
+ \end{tikzpicture}},\quad
+ \makecenter{\begin{tikzpicture}
+ \draw [dashed] circle [radius=0.5];
+ \foreach \rot in {0,120,240} {
+ \begin{scope}[rotate=\rot]
+ \node [circ] at (-0.5, 0) {};
+ \foreach \y in {-1, -0.6, -0.2, 0.2, 0.6, 1} {
+ \draw (-1.5, \y) -- (-0.5, 0);
+ }
+ \end{scope}
+ }
+ \end{tikzpicture}},\quad
+ \ldots
+\end{center}
+We can thus truncate the action as
+\[
+ S[\phi + \chi] - S[\phi] = \int \d^d x\;\left[ \frac{1}{2} (\partial \chi)^2 + \frac{1}{2} \chi^2 V''(\phi)\right].
+\]
+This is still not very feasible to compute. We are only going to do this integral in a very specific case, where $\phi$ is chosen to be constant.
+
+We use the fact that the Fourier modes of $\chi$ only live between $\Lambda - \delta \Lambda < |p| < \Lambda$. Then taking the Fourier transform and doing the integral in momentum space, we have
+\begin{align*}
+ S[\phi + \chi] - S[\phi] &= \int_{\Lambda - \delta \Lambda < |p| \leq \Lambda} \frac{\d^d p}{2 (2\pi)^d}\; \tilde{\chi}(-p) (p^2 + V''(\phi)) \tilde{\chi}(p)\\
+ &= \frac{\Lambda^{d - 1} \delta \Lambda}{2(2\pi)^d} [\Lambda^2 + V''(\phi)] \int_{S^{d - 1}} \d \Omega\;\tilde{\chi}(-\Lambda \hat{p}) \tilde{\chi}(\Lambda \hat{p}).
+\end{align*}
+The $\chi$ path integral is finite if we work on a compact space, say $T^d$ with side length $L$, in which case there are only finitely many Fourier modes. Then the momenta are $p_\mu = \frac{2\pi}{L} n_\mu$, and the path integral is just a product of Gaussian integrals. Going through the computations, we find
+\[
+ e^{-\delta_\Lambda S} = \int \D \chi\; e^{-(S[\phi + \chi] - S[\phi])} = C \left(\frac{\pi}{\Lambda^2 + V''(\phi)}\right)^{N/2},
+\]
+where $N$ is the number of $\chi$ modes in our shell of radius $\Lambda$ and thickness $\delta \Lambda$, and $C$ is some constant. From our previous formula, we see that it is just
+\[
+ N = \vol(S^{d - 1}) \Lambda^{d - 1} \delta \Lambda \cdot \left(\frac{L}{2\pi}\right)^d \equiv 2a \Lambda^{d - 1}\, \delta \Lambda\, L^d,
+\]
+where
+\[
+ a = \frac{\vol(S^{d - 1})}{2(2\pi)^d} = \frac{1}{(4\pi)^{d/2} \Gamma(d/2)}.
+\]
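+To see where the logarithm comes from, take $-\log$ of the Gaussian result: the $\phi$-dependent part of the change in the action is
+\[
+ \delta_\Lambda S = -\log \left[C \left(\frac{\pi}{\Lambda^2 + V''(\phi)}\right)^{N/2}\right] = \frac{N}{2} \log (\Lambda^2 + V''(\phi)) + \text{const}.
+\]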
+Consequently, up to field-independent numerical factors, integrating out $\chi$ leads to a change in the effective action
+\[
+ \frac{\delta_\Lambda S^{\mathrm{eff}}}{\delta \Lambda} = a \log (\Lambda^2 + V''(\phi)) \Lambda^{d - 1} L^d.
+\]
+This diverges as $L \to \infty$! This is an \term{infrared divergence}, and it can be traced to our simplifying assumption that $\phi$ is constant everywhere. More generally, it is not unreasonable to believe that we have
+\[
+ \frac{\delta_\Lambda S^{\mathrm{eff}}}{\delta \Lambda} = a \Lambda^{d - 1}\int \d^d x\; \log (\Lambda^2 + V''(\phi)).
+\]
+This isn't quite the exact result, but it is correct within the local potential approximation.
+
+Now we can write down the $\beta$-function. As expected, integrating out some modes has led to new terms. We have
+\[
+ \Lambda \frac{\d g_{2k}}{\d \Lambda} = [k (d - 2) - d] g_{2k} - a \Lambda^{k(d - 2)} \left.\frac{\partial^{2k}}{\partial \phi^{2k}} \log (\Lambda^2 + V''(\phi))\right|_{\phi = 0}.
+\]
+As before, the first term does not come from quantum corrections; it is just due to us rescaling our normalization factors. On the other hand, the $2k$th derivative is just a fancy way to extract the coefficient of $\phi^{2k}$ in the expansion of the logarithm.
+
+We can actually compute these things!
+\begin{eg}
+ In the first few cases, we find that
+ \begin{align*}
+ \Lambda \frac{\d g_2}{\d \Lambda} &= - 2g_2 - \frac{a g_4}{1 + g_2}\\
+ \Lambda \frac{\d g_4}{\d \Lambda} &= (d - 4) g_4 - \frac{ag_6}{(1 + g_2)} + \frac{3a g_4^2}{(1 + g_2)^2}\\
+ \Lambda \frac{\d g_6}{\d \Lambda} &= (2d - 6) g_6 - \frac{a g_8}{(1 + g_2)} + \frac{15 a g_4 g_6}{(1 + g_2)^2} - \frac{30 a g_4^3}{(1 + g_2)^3}.
+ \end{align*}
+\end{eg}
+Note that the first term on the right hand side is just the classical behaviour of the dimensionless couplings. It has nothing to do with the $\chi$ field. The remaining terms are quantum corrections ($\sim \hbar$), and each comes from specific Feynman diagrams. For example,
+\[
+ \frac{a g_4}{1 + g_2}
+\]
+involves one quartic vertex, and this comes from the diagram
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] circle [radius=0.5];
+ \node [circ] at (0.5, 0) {};
+ \draw (1.5, 1) -- (0.5, 0) -- (1.5, -1);
+ \end{tikzpicture}
+\end{center}
+There is one $\chi$ propagator, and this gives rise to the single factor of $1 + g_2$ in the denominator.
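+Analytically, this term can be extracted from the general formula for the $\beta$-function: since the potential is even, $V'''(0) = 0$, and so
+\[
+ \left.\frac{\partial^2}{\partial \phi^2} \log (\Lambda^2 + V''(\phi))\right|_{\phi = 0} = \frac{V''''(0)}{\Lambda^2 + V''(0)} = \frac{\Lambda^{d - 2(d - 2)} g_4}{\Lambda^2 (1 + g_2)}.
+\]
+Multiplying by $-a \Lambda^{d - 2}$, all powers of $\Lambda$ cancel, leaving the quoted $-\frac{a g_4}{1 + g_2}$.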
+
+The first term in $\beta_4$ comes from
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] circle [radius=0.5];
+ \node [circ] at (0.5, 0) {};
+ \draw (1.5, 1) -- (0.5, 0) -- (1.5, -1);
+ \draw (1.5, 0.4) -- (0.5, 0) -- (1.5, -0.4);
+ \end{tikzpicture}
+\end{center}
+The second term comes from
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] circle [radius=0.5];
+ \node [circ] at (0.5, 0) {};
+ \draw (1.5, 1) -- (0.5, 0) -- (1.5, -1);
+ \node [circ] at (-0.5, 0) {};
+ \draw (-1.5, 1) -- (-0.5, 0) -- (-1.5, -1);
+ \end{tikzpicture}
+\end{center}
+Note that $g_2$ is just the dimensionless mass coupling of $\chi$,
+\[
+ g_2 = \frac{m^2}{\Lambda^2}.
+\]
+At least perturbatively, we expect this to be a relevant coupling. It increases as $\Lambda \to 0$. Consequently, at scales $\Lambda \ll m$, the quantum corrections to these $\beta$-functions are strongly suppressed! This makes sense!
+
+\subsubsection*{The Gaussian fixed point}
+From the formulae we derived, we see that the only critical point we find is the Gaussian one, with $g_{2k}^* = 0$ for all $k$. Since there are no vertices at all, no quantum corrections can arise. This is just the free theory.
+
+In a neighbourhood of this critical point, we can expand the $\beta$-functions in lowest order in $\delta g_i = g_i - g_i^*$. We have
+\[
+ \beta_{2k} = \Lambda \frac{\partial g_{2k}}{\partial \Lambda} = [k(d - 2) - d] g_{2k} - a g_{2k + 2}.
+\]
+Writing this linearized $\beta$-function as
+\[
+ \beta_{2i} = B_{ij}g_{2j},
+\]
+we see that $B_{ij}$ is upper triangular, and hence its eigenvalues are just the diagonal entries, which are
+\[
+ k(d - 2) - d = 2k - 4,
+\]
+in four dimensions.
+
+So vertices $\phi^{2k}$ with $k \geq 3$ are \emph{irrelevant}. If we turn them on at some scale, they become negligible as we fall towards the infrared. The mass term $g_2$ is relevant, as we said before, so even a small mass becomes increasingly significant in the infrared. Of course, we are making these conclusions based on a rather perturbative way of computing the $\beta$ function, and our predictions about what happens at the far infrared should be taken with a grain of salt. However, we can go back to check our formulation, and see that our conclusion still holds.
+
+The interesting term is $\phi^4$, which is marginal in $d = 4$ to lowest order. This means we have to go to higher order. To next non-trivial order, we have
+\[
+ \Lambda \frac{\d g_4}{\d \Lambda} = 3a g_4^2 + O(g_4^2g_2),
+\]
+where we neglected $g_6$ as it is irrelevant. Using the specific value of $a$ in $d = 4$, we find that, to this order,
+\[
+ \frac{1}{g_4(\Lambda)} = C - \frac{3}{16 \pi^2} \log \Lambda.
+\]
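+Indeed, since
+\[
+ \Lambda \frac{\d}{\d \Lambda} \left(\frac{1}{g_4}\right) = -\frac{1}{g_4^2} \Lambda \frac{\d g_4}{\d \Lambda} = -3a,
+\]
+integrating with respect to $\log \Lambda$ and inserting the $d = 4$ value $a = \frac{1}{16\pi^2}$ gives the claimed running.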
+Equivalently, we have
+\[
+ g_4(\Lambda) = \frac{16 \pi^2}{3}\left(\log \left(\frac{\mu}{\Lambda}\right)\right)^{-1}
+\]
+for some scale $\mu$. If we have no higher order terms, then for the theory to make sense, we must have $g_4 > 0$. This implies that we must pick $\mu > \Lambda$.
+
+How does this coefficient run? Our coupling $g_4$ is marginal to leading order, and consequently it doesn't run as some \emph{power} of $\Lambda$. It runs only \emph{logarithmically} in $\Lambda$.
+
+Thus the coupling is marginally irrelevant. In the infrared limit, as we take $\Lambda \to 0$, we find $g_4 \to 0$, and so we flow towards the Gaussian fixed point. This is rather boring.
+
+On the other hand, if we take $\Lambda \to \infty$, then as $\Lambda$ approaches $\mu$, the logarithm $\log(\mu/\Lambda)$ tends to zero and $g_4$ diverges. So our perturbation theory breaks down! Notice that we are not saying that our theory goes out of control as $\Lambda \to \infty$. This perturbation theory breaks down at the \emph{finite} energy scale $\Lambda = \mu$!
+
+Recall that last term, we were studying $\phi^4$ theory. We didn't really run into trouble, because we only worked at tree level (and hence weren't doing \emph{quantum} field theory). But if we actually try to do higher loop integrals, then everything breaks down. The $\phi^4$ theory doesn't actually exist.
+
+\subsubsection*{The Wilson--Fisher critical point}
+Last time, we ignored all higher derivative terms, and we found, disappointingly, that the only fixed point we could find was the free theory. This was bad.
+
+Wilson and Fisher, motivated by condensed matter physics rather than fundamental physics, found another, non-trivial fixed point. What they did was rather peculiar: they set the dimension to $d = 4 - \varepsilon$ for some small value of $\varepsilon$. This might seem rather absurd, because non-integral dimensions do not exist (unless we want to talk about fractals, but doing physics on fractals is hard), but we can still do the manipulations formally, and see if we get anything sensible.
+
+They proved that there exists a new fixed point with
+\begin{align*}
+ g_2^* &= -\frac{1}{6} \varepsilon + O(\varepsilon^2)\\
+ g_4^* &= \frac{\varepsilon}{3a} + O(\varepsilon^2)\\
+ g_{2k}^* &\sim O(\varepsilon^k).
+\end{align*}
+for $k \geq 3$. To study the behaviour near this critical point, we again expand
+\[
+ g_i = g_i^* + \delta g_i
+\]
+in the $\beta$-function we found earlier to lowest non-trivial order, this time expanding around the Wilson--Fisher fixed point.
+
+If we do this, then in the $(g_2, g_4)$ subspace, we find that
+\[
+ \Lambda \frac{\partial}{\partial \Lambda}
+ \begin{pmatrix}
+ \delta g_2\\
+ \delta g_4
+ \end{pmatrix} =
+ \begin{pmatrix}
+ \frac{\varepsilon}{3} - 2 & -a \left( 1+ \frac{\varepsilon}{6}\right)\\
+ 0 & \varepsilon
+ \end{pmatrix}
+ \begin{pmatrix}
+ \delta g_2\\
+ \delta g_4
+ \end{pmatrix}.
+\]
+The eigenvalues are $\frac{\varepsilon}{3} - 2$ and $\varepsilon$, with corresponding eigenvectors
+\[
+ \sigma_1=
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix},\quad
+ \sigma_2 =
+ \begin{pmatrix}
+ -a\left(3 + \frac{\varepsilon}{2}\right)\\
+ 2(3 + \varepsilon)
+ \end{pmatrix}
+\]
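+One can check directly that $\sigma_2$ is an eigenvector with eigenvalue $\varepsilon$: applying the matrix above, the first component is
+\[
+ \left(\frac{\varepsilon}{3} - 2\right)\left(-a\left(3 + \frac{\varepsilon}{2}\right)\right) - a\left(1 + \frac{\varepsilon}{6}\right) \cdot 2(3 + \varepsilon) = -a\left(3\varepsilon + \frac{\varepsilon^2}{2}\right) = \varepsilon \cdot \left(-a\left(3 + \frac{\varepsilon}{2}\right)\right),
+\]
+while the second component trivially picks up a factor of $\varepsilon$.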
+Notice that while the mass term itself is an eigenvector, the quartic coupling is not! Using the asymptotic expansion
+\[
+ \Gamma\left(-\frac{\varepsilon}{2}\right) \sim -\frac{2}{\varepsilon} - \gamma + O(\varepsilon),
+\]
+where $\gamma \approx 0.577$ is the Euler--Mascheroni constant, plus the fact that $\Gamma(x + 1) = x \Gamma(x)$, we find that in $d = 4 - \varepsilon$, we have
+\[
+ a = \left.\frac{1}{(4\pi)^{d/2}} \frac{1}{\Gamma(d/2)} \right|_{d = 4 - \varepsilon} \sim \frac{1}{16 \pi^2} + \frac{\varepsilon}{32\pi^2} (1 - \gamma + \log 4\pi) + O(\varepsilon^2).
+\]
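+Explicitly, using $\Gamma\left(2 - \frac{\varepsilon}{2}\right) = \left(1 - \frac{\varepsilon}{2}\right)\left(-\frac{\varepsilon}{2}\right) \Gamma\left(-\frac{\varepsilon}{2}\right)$, we find
+\begin{align*}
+ \Gamma\left(2 - \frac{\varepsilon}{2}\right) &\sim \left(1 - \frac{\varepsilon}{2}\right)\left(1 + \frac{\gamma \varepsilon}{2}\right) \sim 1 - \frac{\varepsilon}{2}(1 - \gamma),\\
+ (4\pi)^{-d/2} &= \frac{(4\pi)^{\varepsilon/2}}{16\pi^2} \sim \frac{1}{16\pi^2}\left(1 + \frac{\varepsilon}{2} \log 4\pi\right),
+\end{align*}
+and multiplying these expansions out reproduces the formula for $a$.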
+Since $\varepsilon$ is small, we know that the eigenvalue of $\sigma_1$ is negative. This means it is a relevant operator. On the other hand, $\sigma_2$ is an irrelevant operator. We thus have the following picture of the RG flow:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$g_4$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$g_2$};
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, -1) {};
+
+ \draw [thick, mred, ->-=0.5] (0, 0) -- (0, 3);
+ \draw [thick, mred, ->-=0.5] (0, 0) -- (0, -3);
+
+ \draw [thick, mred, ->-=0.5] (2, -1) -- (2, 3);
+ \draw [thick, mred, ->-=0.5] (2, -1) -- (2, -3);
+
+ \draw [->-=0.5, thick, mblue] (0, 0) -- (2, -1);
+ \draw [->-=0.5, thick, mblue] (5, -2.5) -- (2, -1); % insert other flows
+
+ \node at (1, 1.5) {I};
+ \node at (1, -1.5) {II};
+
+ \node at (3, 1) {III};
+ \node at (3, -2.5) {IV};
+
+ \draw [->-=0.5, thick, morange] (0, 0) .. controls (1.8, -0.5) .. (1.8, 3);
+ \draw [->-=0.5, thick, morange] (0, 0) .. controls (1.8, -1.5) .. (1.8, -3);
+
+ \draw [->-=0.5, thick, morange] (5, -2.3) .. controls (2.2, -0.8) .. (2.2, 3);
+ \draw [->-=0.5, thick, morange] (5, -2.7) .. controls (2.2, -1.5) .. (2.2, -3);
+ \end{tikzpicture}
+\end{center}
+We see that theories in region I are massless and free in the deep UV, but flow to become massive and interacting in the IR. Theories in region II behave similarly, except that now the mass coefficient is negative. Consequently, $\phi = 0$ is a \emph{local maximum} of the effective potential. These theories tend to exhibit spontaneous symmetry breaking.
+
+Finally, theories in III and IV do not have a sensible continuum limit as both couplings increase without bound. So at least within perturbation theory, these theories don't exist. They can only manifest themselves as effective field theories.
+
+\section{Perturbative renormalization}
+\subsection{Cutoff regularization}
+Our discussion of renormalization has been theoretical so far. Historically, this was not what people were studying when doing quantum field theory. Instead, what they did was that they had to evaluate integrals in Feynman diagrams, and the results happened to be infinite!
+
+For example, consider the scalar $\phi^4$ theory in $d = 4$ dimensions, with action given by
+\[
+ S[\phi] = \int \left(\frac{1}{2}(\partial \phi)^2 + \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4!} \phi^4\right)\;\d^4 x.
+\]
+We want to compute the two point function $\bra \phi \phi\ket$. This has, of course, the tree level diagram given by just the propagator. There is also a $1$-loop diagram given by
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.3, ->-=0.8] (0, 0) node [left] {$\phi$} -- (3, 0) node [right] {$\phi$} node [pos=0.25, below] {$k$} node [pos=0.75, below] {$k$};
+
+ \draw (1.5, 0.5) circle [radius=0.5];
+ \node [right] at (2, 0.5) {$p$};
+ \node [circ] at (1.5, 0) {};
+
+ \draw [->] (2, 0.65) -- +(0, 0.001);
+ \end{tikzpicture}
+\end{center}
+We will ignore the propagators coming from the external legs, and just look at the loop integral. The diagram has a symmetry factor of $2$, and thus the loop integral is given by
+\[
+ -\frac{\lambda}{2(2\pi)^4} \int \frac{\d^4 p}{p^2 + m^2}.
+\]
+This integral diverges. Indeed, we can integrate out the angular components, and this becomes
+\[
+ - \frac{\lambda}{2(2\pi)^4} \vol(S^3) \int \frac{|p|^3\; \d |p|}{|p|^2 + m^2}.
+\]
+The integrand tends to infinity as we take $p \to \infty$, so this clearly diverges.
+
+Well, this is bad, isn't it? In light of what we have been discussing so far, what we should do is to not view $S$ as the action of a ``continuum theory'', but instead just as an action with some cutoff $k^2 \leq \Lambda_0$. Then when doing the loop integral, instead of integrating over all $p$, we should integrate over all $p$ such that $p^2 \leq \Lambda_0$. This certainly gives a finite answer.
+
+But we want to take the continuum limit, and so we want to take $\Lambda_0 \to \infty$. Of course the loop integral will diverge if we fix our coupling constants. So we might think, based on what we learnt previously, that we should tune the coupling constants as we go.
+
+This is actually very hard, because we have no idea what is the ``correct'' way to tune them. Historically, and practically, what people did was just to introduce some random terms to cancel off the infinity.
+
+The idea is to introduce a \term{counterterm action}
+\[
+ S^{CT}[\phi, \Lambda] = \hbar \int \left[\frac{\delta Z}{2} (\partial \phi)^2 + \frac{\delta m^2}{2} \phi^2 + \frac{\delta \lambda}{4!} \phi^4\right] \d^4 x,
+\]
+where $\delta Z$, $\delta m^2$ and $\delta \lambda$ are some functions of $\Lambda$ to be determined. We then set the full action at scale $\Lambda$ to be
+\[
+ S_\Lambda[\phi] = S[\phi] + S^{CT}[\phi, \Lambda].
+\]
+This action depends on $\Lambda$. Then for any physical quantity $\bra \mathcal{O}\ket$ we are interested in, we take it to be
+\[
+ \bra \mathcal{O}\ket = \lim_{\Lambda \to \infty} \Big(\text{$\bra \mathcal{O}\ket$ computed with cutoff $\Lambda$ and action $S_\Lambda$}\Big).
+\]
+Note that in the counterterm action, we included a factor of $\hbar$ in front of everything. This means in perturbation theory, the tree-level contributions from $S^{CT}$ would be of the same order as the 1-loop diagrams in $S$.
+
+For example, in the above 1-loop diagram, we obtain further contributions to the quadratic terms, given by
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [left] {$\phi$} -- (3, 0) node [right] {$\phi$};
+ \draw [fill=white] (1.5, 0) circle [radius=0.11];
+ \node at (1.5, 0) {$\times$};
+ \node [above] at (1.5, 0.11) {$k^2 \delta Z$};
+ \end{tikzpicture}
+ \quad\quad
+ \begin{tikzpicture}
+ \draw (0, 0) node [left] {$\phi$} -- (3, 0) node [right] {$\phi$};
+ \draw [fill=white] (1.5, 0) circle [radius=0.11];
+ \node at (1.5, 0) {$\times$};
+ \node [above] at (1.5, 0.11) {$\delta m^2$};
+ \end{tikzpicture}
+\end{center}
+We first evaluate the original loop integral properly. We use the mathematical fact that $\vol(S^3) = 2\pi^2$. Then the integral is
+\begin{align*}
+ \begin{tikzpicture}[eqpic]
+ \draw [dashed] (0, 0) -- (3, 0);
+ \draw (1.5, 0.5) circle [radius=0.5];
+ \node [right] at (2, 0.5) {$p$};
+ \node [circ] at (1.5, 0) {};
+ \end{tikzpicture} &=
+ -\frac{\lambda}{16 \pi^2} \int_0^{\Lambda_0} \frac{p^3 \;\d p}{p^2 + m^2}\\
+ &= -\frac{\lambda m^2}{32 \pi^2} \int_0^{\Lambda_0^2/m^2} \frac{x\;\d x}{1 + x}\\
+ &= -\frac{\lambda}{32 \pi^2} \left[\Lambda_0^2 - m^2 \log \left(1 + \frac{\Lambda_0^2}{m^2}\right)\right],
+\end{align*}
+where we substituted $x = p^2/m^2$ in the middle.
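+The elementary integral used in the last step is
+\[
+ \int_0^X \frac{x\; \d x}{1 + x} = \int_0^X \left(1 - \frac{1}{1 + x}\right) \d x = X - \log(1 + X), \qquad X = \frac{\Lambda_0^2}{m^2}.
+\]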
+
+Including these counter-terms, the total $1$-loop correction to the inverse propagator $k^2 + m^2$ is
+\[
+ \frac{\lambda}{32 \pi^2} \left[\Lambda_0^2 - m^2 \log \left(1 + \frac{\Lambda_0^2}{m^2}\right)\right] + k^2 \delta Z + \delta m^2.
+\]
+The objective is, of course, to pick $\delta Z$, $\delta m^2$, $\delta \lambda$ so that we always get finite answers in the limit. There are many ways to pick these quantities, and of course, different ways will give different answers. However, we can fix some prescription for how to pick them, and this gives us a well-defined theory. Any such prescription is known as a \term{renormalization scheme}.
+
+It is important that we describe it this way. Each individual loop integral still diverges as we take $\Lambda \to \infty$, as we didn't change it. Instead, for each fixed $\Lambda$, we have to add up, say, all the $1$-loop contributions, and then \emph{after} adding up, we take the limit $\Lambda \to \infty$. Then we do get a finite answer.
+
+%\subsection{General idea}
+%Recall that somewhere in the middle of the previous chapter, we were talking about taking the continuum limit $\Lambda \to 0$. How does this manifest itself in a perturbative framework?
+%
+%Consider the scalar theory
+%\[
+% S[\phi] = \int \left(\frac{1}{2}(\partial \phi)^2 + \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4!} \phi^4\right)\;\d^4 x
+%\]
+%in $d = 4$. Then we have a $1$-loop diagram
+%\begin{center}
+% \begin{tikzpicture}
+% \draw [->-=0.3, ->-=0.8] (0, 0) node [left] {$\phi$} -- (3, 0) node [right] {$\phi$} node [pos=0.25, below] {$k$} node [pos=0.75, below] {$k$};
+%
+% \draw (1.5, 0.5) circle [radius=0.5];
+% \node [right] at (2, 0.5) {$p$};
+% \node [circ] at (1.5, 0) {};
+%
+% \draw [->] (2, 0.65) -- +(0, 0.001);
+% \end{tikzpicture}
+%\end{center}
+%This diagram has a symmetry factor of $2$, and the loop integral would be
+%\[
+% -\frac{\lambda}{2(2\pi)^4} \int \frac{\d^4 p}{p^2 + m^2}.
+%\]
+%This integral diverges, which is bad.
+%
+%What we previously said was that we impose a cutoff. This corresponds to doing the integral
+%\[
+% -\frac{\lambda}{2(2\pi)^4} \int_{|p| < \Lambda_0} \frac{\d^4 p}{p^2 + m^2}
+%\]
+%instead. We can do this integral explicitly.
+%%
+%%Perturbatively, we evaluate things by looking at Feynman diagrams. We have a $1$-loop diagram that looks like this
+%%When we did renormalization previously, we looked at what happens infinitesimally, when we lowered the energy level a bit. Consequently, we only had to consider 1-loop diagrams. Historically, when people tried to do renormalization, they had a theory which they wanted to take the continuum limit of. But they realized when they tried to do loop integrals, it diverged.
+%%
+%%
+%%Consider the scalar theory
+%%\[
+%% S_{\Lambda_0}[\phi] = \int \left(\frac{1}{2}(\partial \phi)^2 + \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4!} \phi^4\right)\;\d^4 x,
+%%\]
+%%in dimensions $d = 4$. We expect that in perturbation theory, i.e.\ near the Gaussian fixed point, we find that $m^2$ is relevant and $\lambda$ is marginally irrelevant.
+%%
+%%The quadratic terms in the action see corrections from diagrams with $2$ external $\phi$ fields. At $1$-loop, the only such diagram is
+%%\begin{center}
+%% \begin{tikzpicture}
+%% \draw (0, 0) node [left] {$\phi$} -- (3, 0) node [right] {$\phi$};
+%%
+%% \draw [dashed] (1.5, 0.5) circle [radius=0.5];
+%% \node [right] at (2, 0.5) {$p$};
+%% \node [circ] at (1.5, 0) {};
+%% \end{tikzpicture}
+%%\end{center}
+%%This has a symmetry factor of $2$, and this gives
+%%\[
+%% -\frac{\lambda}{2(2\pi)^4} \int_{|p| \leq \Lambda_0} \frac{\d^4 p}{p^2 + m^2}
+%%\]
+%%if we integrate out modes with $|p| \leq \Lambda_0$.
+%Using the fact that $\vol(S^3) = 2\pi^2$, this is given by
+%\begin{align*}
+% \frac{-\lambda \vol(S^3)}{2(2\pi)^4} \int_0^{\Lambda_0} \frac{p^3 \;\d p}{p^2 + m^2} &= -\frac{\lambda m^2}{32 \pi^2} \int_0^{\Lambda_0^2/m^2} \frac{x \;\d x}{1 + x} \\
+% &= \frac{\lambda}{32 \pi^2} \left[\Lambda_0^2 - m^2 \log \left(1 + \frac{\Lambda_0^2}{m^2}\right)\right],
+%\end{align*}
+%where we substituted $x = p^2/m^2$ in the middle.
+%
+%As expected, this diverges as $\Lambda_0 \to \infty$. To obtain a finite limit in the continuum, we introduce counterterms. We define the counterterm action
+%\subsection{Renormalization schemes}
+
+\subsubsection*{On-shell renormalization scheme}
+We will study one renormalization scheme, known as the \term{on-shell renormalization scheme}. Consider the exact momentum space propagator
+\[
+ \int \d^4 x\; e^{ik\cdot x} \bra \phi(x) \phi(0)\ket.
+\]
+Classically, this is just given by
+\[
+ \frac{1}{k^2 + m^2},
+\]
+where $m^2$ is the original mass term in $S[\phi]$.
+
+In the on-shell renormalization scheme, we pick our counterterms such that the exact momentum space propagator satisfies the following two properties:
+\begin{itemize}
+ \item It has a simple pole when $k^2 = - m^2_{\mathrm{phys}}$, where $m^2_{\mathrm{phys}}$ is the physical mass of the particle; and
+ \item The residue at this pole is $1$.
+\end{itemize}
+Note that we are viewing $k^2$ as a single variable when considering poles.
+
+To find the right values of $\delta m^2$ and $\delta Z$, we recall the sum of one-particle irreducible graphs, which we write as
+\[
+ \Pi(k^2) =
+ \begin{tikzpicture}[eqpic]
+ \draw [dashed] (0, 0) -- (2, 0) node [pos=0.2, above] {$k$};
+ \draw [fill=white] (1, 0) circle [radius=0.3] node {1PI};
+ \end{tikzpicture},
+\]
+where the dashed line indicates that we do \emph{not} include the propagator contributions. For example, this 1PI includes graphs of the form
+\begin{center}
+ \makecenter{\begin{tikzpicture}
+ \draw [->-=0.3, ->-=0.8, dashed] (0, 0) -- (3, 0) node [pos=0.25, below] {$k$} node [pos=0.75, below] {$k$};
+ \draw (1.5, 0.5) circle [radius=0.5];
+ \node [circ] at (1.5, 0) {};
+ \end{tikzpicture}}
+ \quad
+ \makecenter{\begin{tikzpicture}
+ \draw [->-=0.3, ->-=0.8, dashed] (0, 0) -- (3, 0) node [pos=0.25, below] {$k$} node [pos=0.75, below] {$k$};
+ \draw (1.5, 0.5) circle [radius=0.5];
+ \draw (1.5, 1.5) circle [radius=0.5];
+ \node [circ] at (1.5, 0) {};
+ \node [circ] at (1.5, 1) {};
+ \end{tikzpicture}}
+ \quad
+ \makecenter{\begin{tikzpicture}
+ \draw [dashed] (0, 0) -- (3, 0);
+ \draw (1.5, 0) circle [radius=0.6];
+ \node [circ] at (0.9, 0) {};
+ \node [circ] at (2.1, 0) {};
+ \end{tikzpicture}}
+\end{center}
+as well as counterterm contributions
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (0, 0) -- (3, 0);
+ \draw [fill=white] (1.5, 0) circle [radius=0.11];
+ \node at (1.5, 0) {$\times$};
+ \node [above] at (1.5, 0.11) {$k^2 \delta Z$};
+ \end{tikzpicture}
+ \quad\quad
+ \begin{tikzpicture}
+ \draw [dashed] (0, 0) -- (3, 0);
+ \draw [fill=white] (1.5, 0) circle [radius=0.11];
+ \node at (1.5, 0) {$\times$};
+ \node [above] at (1.5, 0.11) {$\delta m^2$};
+ \end{tikzpicture}
+\end{center}
+Then the exact momentum propagator is
+\begin{align*}
+ &\hphantom{={}}\Delta(k^2)\\
+ &=
+ \begin{tikzpicture}[eqpic]
+ \draw [white] (0.7, 0.3) rectangle (1.3, -0.3);
+ \draw (0, 0) node [circ] {} node [above] {$\phi$} -- (2, 0) node [circ] {} node [above] {$\phi$} node [pos=0.5, above] {$k$};
+ \end{tikzpicture}
+ +
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) node [circ] {} node [above] {$\phi$} -- (2, 0) node [circ] {} node [above] {$\phi$};
+ \draw [fill=white] (1, 0) circle [radius=0.3] node {1PI};
+ \end{tikzpicture}
+ +
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) node [circ] {} node [above] {$\phi$} -- (3, 0) node [circ] {} node [above] {$\phi$};
+ \draw [fill=white] (1, 0) circle [radius=0.3] node {1PI};
+ \draw [fill=white] (2, 0) circle [radius=0.3] node {1PI};
+ \end{tikzpicture}
+ + \cdots\\
+ &= \frac{1}{k^2 + m^2} - \frac{1}{k^2 + m^2} \Pi(k^2) \frac{1}{k^2 + m^2} \\
+ &\hphantom{\frac{1}{k^2 + m^2} - \frac{1}{k^2 + m^2} \Pi(k^2)} + \frac{1}{k^2 + m^2} \Pi(k^2) \frac{1}{k^2 + m^2} \Pi(k^2) \frac{1}{k^2 + m^2} + \cdots\\
+ &= \frac{1}{k^2 + m^2 + \Pi(k^2)}.
+\end{align*}
+The negative sign arises because we are working in Euclidean signature with path integrals weighted by $e^{-S}$.
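+Formally, the geometric series sums (for $|\Pi(k^2)| < |k^2 + m^2|$, and elsewhere by analytic continuation) as
+\[
+ \Delta(k^2) = \frac{1}{k^2 + m^2} \sum_{n = 0}^\infty \left(\frac{-\Pi(k^2)}{k^2 + m^2}\right)^n = \frac{1}{k^2 + m^2} \cdot \frac{1}{1 + \frac{\Pi(k^2)}{k^2 + m^2}} = \frac{1}{k^2 + m^2 + \Pi(k^2)}.
+\]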
+
+Thus, if we choose our original parameter $m^2$ to be the measured $m_\mathrm{phys}^2$, then in the on-shell scheme, we want
+\[
+ \Pi(-m_{\mathrm{phys}}^2) = 0,
+\]
+and also
+\[
+ \left.\frac{\partial}{\partial k^2} \Pi(k^2)\right|_{k^2 = - m_{\mathrm{phys}}^2} = 0.
+\]
+To 1-loop, the computations at the beginning of the chapter tell us that
+\[
+ \Pi(k^2) = \delta m^2 + k^2 \delta Z + \frac{\lambda}{32\pi^2} \left[\Lambda_0^2 - m^2 \log \left(1 + \frac{\Lambda_0^2}{m^2}\right)\right].
+\]
+Note that the $1$-loop contribution is independent of $k$. This is visible in our unique $1$-loop diagram: the loop integral does not involve the external momentum $k$ in any way.
+
+We see that the second condition forces $\delta Z = 0$, and then we must have
+\begin{align*}
+ \delta Z &= O(\lambda^2)\\
+ \delta m^2 &= -\frac{\lambda}{32 \pi^2} \left(\Lambda_0^2 - m^2 \log \left(1 + \frac{\Lambda_0^2}{m^2}\right)\right) + O(\lambda^2).
+\end{align*}
+Here, to 1-loop, we don't need wavefunction renormalization, but this is merely a coincidence, not a general phenomenon.
+
+Of course, if we consider higher loop diagrams, then we have further corrections to the counterterms.
+\subsection{Dimensional regularization}
+People soon realized this was a terrible way to get rid of the infinities. Doing integrals from $0$ to $\Lambda_0$ is usually much harder than integrals from $0$ to $\infty$. Moreover, in gauge theory, it is (at least naively) incompatible with gauge invariance. Indeed, say in $\U(1)$ gauge theory, a transformation
+\[
+ \psi(x) \to e^{i\lambda(x)} \psi(x)
+\]
+can potentially introduce a lot of high energy modes. So our theory will not be gauge invariant.
+
+For these reasons, people invented a different way to get rid of infinite integrals. This is known as \emph{dimensional regularization}. This method of getting rid of infinities doesn't fit into the ideas we've previously been discussing. It is just magic. Moreover, this method only works perturbatively --- it tells us how to get rid of infinities in loops. It doesn't give any definition of a regularized path integral measure, or describe any full, coherent non-perturbative theory that predicts the results.
+
Yet, this method avoids all the problems we mentioned above, and is rather easy to use. Hence, we will mostly use dimensional regularization in the rest of the course.
+
+To do dimensional regularization, we will study our theory in an \emph{arbitrary} dimension $d$, and do the integrals of loop calculations. For certain dimensions, the integral will converge, and give us a sensible answer. For others, it won't. In particular, for $d = 4$, it probably won't (or else we have nothing to do!).
+
After obtaining the results for such integrals, we attempt to analytically continue them as functions of $d$. Of course, the analytic continuation is non-unique (e.g.\ we can add a multiple of $\sin(\pi d)$ and still get the same result for integer $d$), but there is often an ``obvious'' choice. This does not solve our problem yet --- this analytic continuation tends to still have a pole at $d = 4$. However, after doing this analytic continuation, it becomes clearer how we are supposed to get rid of the pole.
+
+Note that we are not in any way suggesting the universe has a non-integer dimension, or that non-integer dimensions even makes any sense at all. This is just a mathematical tool to get rid of infinities.
+
+Let's actually do it. Consider the same theory as before, but in arbitrary dimensions:
+\[
+ S[\phi] = \int \d^d x\; \left(\frac{1}{2} (\partial \phi)^2 + \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4!} \phi^4\right).
+\]
+In $d$ dimensions, this $\lambda$ is no longer dimensionless. We have
+\[
+ [\phi] = \frac{d - 2}{2}.
+\]
+So for the action to be dimensionless, we must have
+\[
+ [\lambda] = 4 - d.
+\]
+Thus, we write
+\[
+ \lambda = \mu^{4 - d} g(\mu)
+\]
+for some arbitrary mass scale $\mu$. This $\mu$ is not a cutoff, since we are not going to impose one. It is just some arbitrary mass scale, so that $g(\mu)$ is now dimensionless.
+
+We can then compute the loop integral
+\begin{align*}
+ \begin{tikzpicture}[eqpic]
+ \draw [dashed] (0, 0) -- (3, 0);
+ \draw (1.5, 0.5) circle [radius=0.5];
+ \node [right] at (2, 0.5) {$p$};
+ \node [circ] at (1.5, 0) {};
+ \end{tikzpicture} &=
+ \frac{1}{2} g\mu^{4 - d} \int \frac{\d^d p}{(2\pi)^d} \frac{1}{p^2 + m^2} \\
+ &= \frac{g \mu^{4 - d}}{2(2\pi)^d} \vol(S^{d - 1}) \int_0^\infty \frac{p^{d - 1}\;\d p}{p^2 + m^2}.
+\end{align*}
+We note the mathematical fact that
+\[
+ \vol(S^{d - 1}) = \frac{2\pi^{d/2}}{\Gamma(d/2)}.
+\]
While $S^{d - 1}$ does not make sense when $d$ is not an integer, the right-hand expression does. So replacing the volume with this expression, we can analytically continue to all $d$. The remaining radial integral evaluates to
\begin{align*}
 \mu^{4 - d}\int_0^\infty \frac{p^{d - 1}\; \d p}{p^2 + m^2} &= \frac{1}{2} \mu^{4 - d} \int_0^\infty \frac{(p^2)^{d/2 - 1}\;\d p^2}{p^2 + m^2} \\
 &= \frac{m^2}{2} \left(\frac{\mu}{m}\right)^{4 - d} \Gamma\left(\frac{d}{2}\right)\Gamma\left(1 - \frac{d}{2}\right).
\end{align*}
+The detailed computations are entirely uninteresting, but if one were to do this manually, it is helpful to note that
+\[
+ \int_0^1 u^{s - 1}(1 - u)^{t - 1} \;\d u = \frac{\Gamma(s) \Gamma(t)}{\Gamma(s + t)}.
+\]
+The appearance of $\Gamma$-functions is typical in dimensional regularization.
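This identity is easy to verify numerically. The following is an illustrative sketch (not part of the notes), comparing a naive midpoint-rule quadrature against the $\Gamma$-function expression for sample values $s = 2.5$, $t = 3.5$:

```python
import math

# Numerical check of the Beta-function identity used above:
#   ∫_0^1 u^{s-1} (1-u)^{t-1} du = Γ(s)Γ(t)/Γ(s+t)
def beta_numeric(s, t, n=200000):
    # composite midpoint rule on [0, 1]
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += u**(s - 1) * (1 - u)**(t - 1)
    return total * h

def beta_gamma(s, t):
    return math.gamma(s) * math.gamma(t) / math.gamma(s + t)

s, t = 2.5, 3.5
print(abs(beta_numeric(s, t) - beta_gamma(s, t)) < 1e-6)
```

For $s, t > 1$ the integrand is bounded, so this naive quadrature suffices.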
+
+Combining all factors, we find that
+\[
+ \begin{tikzpicture}[eqpic]
+ \draw [dashed] (0, 0) -- (3, 0);
+
+ \draw (1.5, 0.5) circle [radius=0.5];
+ \node [right] at (2, 0.5) {$p$};
+ \node [circ] at (1.5, 0) {};
+ \end{tikzpicture} =
+ \frac{gm^2}{2(4\pi)^{d/2}}\left(\frac{\mu}{m}\right)^{4 - d} \Gamma\left(1 - \frac{d}{2}\right).
+\]
+This formula makes sense for any value of $d$ in the complex plane. Let's see what happens when we try to approach $d = 4$. We set $d = 4 - \varepsilon$, and use the Laurent series
+\begin{align*}
+ \Gamma(\varepsilon) &= \frac{1}{\varepsilon} - \gamma + O(\varepsilon)\\
 x^{\varepsilon/2} &= 1 + \frac{\varepsilon}{2} \log x + O(\varepsilon^2),
+\end{align*}
+plus the following usual property of the $\Gamma$ function:
+\[
+ \Gamma(x + 1) = x \Gamma(x).
+\]
+Then, as $d \to 4$, we can asymptotically expand
+\[
+ \begin{tikzpicture}[eqpic]
+ \draw [dashed] (0, 0) -- (3, 0);
+
+ \draw (1.5, 0.5) circle [radius=0.5];
+ \node [right] at (2, 0.5) {$p$};
+ \node [circ] at (1.5, 0) {};
+ \end{tikzpicture} =
+ -\frac{gm^2}{32\pi^2} \left(\frac{2}{\varepsilon} - \gamma + \log\left(\frac{4\pi \mu^2}{m^2}\right) + O(\varepsilon)\right).
+\]
+
+Unsurprisingly, this diverges as $\varepsilon \to 0$, as $\Gamma$ has a (simple) pole at $-1$. The pole in $\frac{1}{\varepsilon}$ reflects the divergence of this loop integral as $\Lambda_0 \to \infty$ in the cutoff regularization.
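One can check this pole structure numerically. The following sketch (with hypothetical values for $g$, $m$ and $\mu$) confirms that the exact dimensionally regularized expression has residue $-gm^2/16\pi^2$ at $\varepsilon = 0$:

```python
import math

# I(ε) = g m²/(2 (4π)^{d/2}) (μ/m)^{4-d} Γ(1 - d/2) with d = 4 - ε
# should behave as -(g m²/16π²)/ε + O(1) as ε → 0.
g, m, mu = 0.1, 1.0, 2.0   # hypothetical values, for illustration only

def tadpole(eps):
    d = 4.0 - eps
    return g * m**2 / (2 * (4 * math.pi)**(d / 2)) \
        * (mu / m)**(4 - d) * math.gamma(1 - d / 2)

residue = -g * m**2 / (16 * math.pi**2)
eps = 1e-6
print(abs(eps * tadpole(eps) - residue) < 1e-7)
```

Multiplying by $\varepsilon$ kills the finite part, so the limit isolates the coefficient of the $\frac{1}{\varepsilon}$ pole.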
+
We need to obtain a finite limit as $d \to 4$ by adding counterterms. This time, the counterterms do not depend on a cutoff, because there isn't one. They are also not (explicitly) dependent on the mass scale $\mu$, because $\mu$ was arbitrary. Instead, they are now functions of $\varepsilon$.
+
+So again, we introduce a new term
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [left] {$\phi$} -- (3, 0) node [right] {$\phi$};
+ \draw [fill=white] (1.5, 0) circle [radius=0.11];
+ \node at (1.5, 0) {$\times$};
+ \node [above] at (1.5, 0.11) {$\delta m^2$};
+ \end{tikzpicture}
+\end{center}
Again, we need to make a choice of this, i.e.\ we need to pick a renormalization scheme. We could again use on-shell renormalization, but that is available whether or not we use dimensional regularization. Dimensional regularization instead suggests two more convenient schemes:
+\begin{enumerate}
+ \item \emph{Minimal subtraction}\index{minimal subtraction} (\term{MS}): we choose
+ \[
+ \delta m^2 = - \frac{gm^2}{16\pi^2\varepsilon}
+ \]
+ so as to cancel just the pole.
+ \item \emph{Modified minimal subtraction}\index{modified minimal subtraction}\index{minimal subtraction!modified} (\emph{\textoverline{MS}}): We set
+ \[
+ \delta m^2 = - \frac{gm^2}{32\pi^2} \left(\frac{2}{\varepsilon} - \gamma + \log 4\pi\right)
+ \]
+ to get rid of some pesky constants, because no one likes the Euler--Mascheroni constant.
+\end{enumerate}
+In practice, we are mostly going to use the \textoverline{MS} scheme, because we really, really, hate the Euler--Mascheroni constant.
+
+Note that at the end, after subtracting off these counter-terms and taking $\varepsilon \to 0$, there is still an explicit $\mu$ dependence! In this case, we are left with
+\[
 -\frac{gm^2}{32\pi^2} \log \left(\frac{\mu^2}{m^2}\right).
+\]
+Of course, the actual physical predictions of the theory must not depend on $\mu$, because $\mu$ was an arbitrary mass scale. This means $g$ must genuinely be a function of $\mu$, just like when we did renormalization!
+
What is the physical interpretation of this? We might think that since $\mu$ is arbitrary, it has no significance. However, it turns out that when we do actual computations with perturbation theory, the quantum corrections tend to look like $\log \left(\frac{\Lambda^2}{\mu^2}\right)$, where $\Lambda$ is the relevant ``energy scale''. Thus, if we want perturbation theory to work well (i.e.\ the quantum corrections to be small), we must pick $\mu$ to be close to the ``energy scale'' of the scenario we are interested in. Thus, we can still think of $g(\mu)$ as the coupling constant of the theory ``at scale $\mu$''.
+
+\subsection{Renormalization of the \tph{$\phi^4$}{phi4}{ϕ4} coupling}
+We now try to renormalize the $\phi^4$ coupling. At $1$-loop, in momentum space, we receive contributions from
+\begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=0.5];
+ \draw (-1.5, 1) node [left] {$x_1$} -- (-0.5, 0) node [pos=0.5, anchor=south west] {\scriptsize $\!\!k_1$} node [circ] {} -- (-1.5, -1) node [left] {$x_2$};
+ \node at (-1.2, -0.4) {\scriptsize $k_2$};
+ \draw (1.5, 1) node [right] {$x_4$} -- (0.5, 0) node [circ] {} -- (1.5, -1) node [right] {$x_3$};
+ \node [above] at (0, 0.5) {\scriptsize$p$};
+ \node [below] at (0, -0.5) {\scriptsize $p\! +\! k_1\! +\! k_2$};
+
+ \draw [->] (0.13, -0.5) -- +(0.0001, 0);
+ \draw [<-] (-0.13, 0.5) -- +(0.0001, 0);
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \draw circle [radius=0.5];
+ \draw (-1.5, 1) -- (-0.5, 0) node [circ] {};
+ \draw (1.5, 1) -- (0.5, 0) node [circ] {};
+
+ \draw (-1.5, -1) edge [out=0,in=-45] (0.5, 0);
+ \draw (1.5, -1) edge [out=180,in=225] (-0.5, 0);
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \draw circle [radius=0.5];
+ \draw (-1.5, 1) -- (-0.5, 0) node [circ] {};
+ \draw (1.5, -1) -- (0.5, 0) node [circ] {};
+
+ \draw (-1.5, -1) edge [out=0,in=-45] (0.5, 0);
+ \draw (1.5, 1) edge [out=180,in=135] (-0.5, 0);
+ \end{tikzpicture}
+\end{center}
+and also a counter-term:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (-1, 1) -- (1, -1);
+ \draw [dashed] (-1, -1) -- (1, 1);
+ \draw [fill=white] circle [radius=0.13];
+ \node {$\times$};
+ \end{tikzpicture}
+\end{center}
+We first do the first loop integral. It is given by
+\[
 \frac{g^2 \mu^{4 - d}}{2} \int \frac{\d^d p}{(2\pi)^d} \frac{1}{p^2 + m^2} \frac{1}{(p + k_1 + k_2)^2 + m^2}.
+\]
+This is a complicated beast. Unlike the loop integral we did for the propagator, this loop integral knows about the external momenta $k_1$, $k_2$. We can imagine ourselves expanding the integrand in $k_1$ and $k_2$, and then the result involves some factors of $k_1$ and $k_2$. If we invert the Fourier transform to get back to position space, then multiplication by $k_i$ becomes differentiation. So these gives contributions to terms of the form, say, $(\partial \phi)^2 \phi^2$, in addition to the $\phi^4$, which is what we really care about.
+
One can check that only the $\phi^4$ contribution is divergent in $d = 4$. This reflects the fact that the higher operators are all irrelevant.
+
+So we focus on the contribution to $\phi^4$. This is $k_i$-independent, and is given by the leading part of the integral:
+\[
 \frac{g^2 \mu^{4 - d}}{2(2\pi)^d} \int \frac{\d^d p}{(p^2 + m^2)^2} = \frac{1}{2} \frac{g^2}{(4\pi)^{d/2}} \left(\frac{\mu}{m}\right)^{4 - d} \Gamma\left(2 - \frac{d}{2}\right).
+\]
+How about the other two loop integrals? They give different integrals, but they differ only in where the $k_i$ appear in the denominator. So up to leading order, they are the same thing. So we just multiply our result by $3$, and find that the loop contributions are
+\begin{multline*}
+ - \delta \lambda + \frac{3 g^2}{2(4\pi)^{d/2}} \left(\frac{\mu}{m}\right)^{4 - d} \Gamma\left(2 - \frac{d}{2}\right) \\
+ \sim - \delta \lambda + \frac{3g^2}{32 \pi^2} \left(\frac{2}{\varepsilon} - \gamma + \log\left(\frac{4 \pi \mu^2}{m^2}\right)\right) + O(\varepsilon).
+\end{multline*}
+Therefore, in the \textoverline{MS} scheme, we choose
+\[
+ \delta \lambda = \frac{3g^2}{32 \pi^2}\left(\frac{2}{\varepsilon} - \gamma + \log 4\pi\right),
+\]
+and so up to $O(\hbar)$, the loop contribution to the $\phi^4$ coupling is
+\[
+ \frac{3g^2}{32 \pi^2} \log \left(\frac{\mu^2}{m^2}\right).
+\]
+%Recall that $\mu$ is an arbitrary mass scale we had when we defined our dimensionless couplings. If we varied the scale $\mu$ at which we defined our original theory, then to keep the same physics, we should vary our initial coupling $g(\mu)$ as
+%\[
+% \beta(g) = \mu \frac{\partial g}{\partial \mu} = \frac{3g^2}{16 \pi^2} > 0.
+%\]
+%This is a positive $\beta$-function. So the coupling $\lambda$ which was classically marginal is in fact marginally irrelevant. Recall that we found exactly the same $\beta$ function using LPA.
+%
+%The fact that this theory is marginally irrelevant means if we keep lowering our theory, then we approach the free theory. On the other hand, the theory doesn't exist in the continuum.
+So in $\lambda \phi^4$ theory, to subleading order, with an (arbitrary) dimensional regularization scale $\mu$, we have % this is an explanation of what we did previously.
+\[\arraycolsep=0.25em\def\arraystretch{2.2}
+ \begin{array}{ccccccc} % fix
+ \begin{tikzpicture}[eqpic, scale=0.5]
+ \draw [dashed] (-1, 1) -- (1, -1);
+ \draw [dashed] (-1, -1) -- (1, 1);
+ \draw [fill=black] circle [radius=0.4];
+ \end{tikzpicture}
+ & \sim &
+ \begin{tikzpicture}[eqpic, scale=0.5]
+ \draw [dashed] (-1, 1) -- (1, -1);
+ \draw [dashed] (-1, -1) -- (1, 1);
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+ &+&
+ \begin{tikzpicture}[eqpic, scale=0.5]
+ \draw circle [radius=0.5];
+ \draw [dashed] (-1.5, 1) -- (-0.5, 0) node [circ] {} -- (-1.5, -1);
+ \draw [dashed] (1.5, 1) -- (0.5, 0) node [circ] {} -- (1.5, -1);
+ \end{tikzpicture}
+ +\text{two more} +
+ \begin{tikzpicture}[eqpic, scale=0.5]
+ \draw [dashed] (-1, 1) -- (1, -1);
+ \draw [dashed] (-1, -1) -- (1, 1);
+ \draw [fill=white] circle [radius=0.26];
+ \node {$\times$};
+ \end{tikzpicture}
+ &+& \cdots\\
+ & & - \displaystyle\frac{g}{\hbar} &+& \displaystyle\frac{3g^2}{32\pi^2}\log \left(\frac{\mu^2}{m^2}\right) &+& O(\hbar)
+\end{array}
+\]
+Now note that nothing physical (such as this $4$-point function) can depend on our arbitrary scale $\mu$. Consequently, the coupling $g(\mu)$ must run so that
+\[
+ \mu \frac{\partial}{\partial \mu} \left[-\frac{g}{\hbar} + \frac{3g^2}{32\pi^2} \log\left(\frac{\mu^2}{m^2}\right) + O(\hbar)\right] = 0.
+\]
+This tells us that we must have
+\[
 \beta(g) = \frac{3 g^2 \hbar}{16\pi^2}.
+\]
+Note that this is the same $\beta$-function as we had when we did local potential approximation!
+
+We can solve this equation for the coupling $g(\mu)$, and find that the couplings at scales $\mu$ and $\mu'$ are related by
+\[
+ \frac{1}{g(\mu)} = \frac{1}{g(\mu')} + \frac{3}{16 \pi^2} \log \left(\frac{\mu'}{\mu}\right).
+\]
+Thus, if we find that at energy scale $\mu$, the coupling takes value $g_0$, then at the scale
+\[
+ \mu' = \mu e^{16 \pi^2/(3 g_0)},
+\]
+the coupling $g(\mu')$ diverges. Our theory breaks down in the UV!
+
+This is to be taken with a pinch of salt, because we are just doing perturbation theory, with a semi-vague interpretation of what the running of $g$ signifies. So claiming that $g(\mu')$ diverges only says something about our perturbative approximation, and not the theory itself. Unfortunately, more sophisticated non-perturbative analysis of the theory also suggests the theory doesn't exist.
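As a sanity check on the running, we can integrate the $\beta$-function numerically and compare with the closed-form solution above. The coupling and scale ratio below are hypothetical, chosen to stay well below the Landau pole:

```python
import math

# Integrate dg/dlogμ = (3/16π²) g² with RK4 and compare against the closed
# form 1/g(μ) = 1/g(μ') + (3/16π²) log(μ'/μ) from the notes.
b = 3.0 / (16.0 * math.pi**2)

def run_coupling(g0, log_ratio, steps=10000):
    """Evolve g from scale μ to μ' with log(μ'/μ) = log_ratio."""
    g, h = g0, log_ratio / steps
    for _ in range(steps):
        k1 = b * g**2
        k2 = b * (g + 0.5 * h * k1)**2
        k3 = b * (g + 0.5 * h * k2)**2
        k4 = b * (g + h * k3)**2
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return g

g0, L = 0.5, 5.0                      # hypothetical g(μ) and log(μ'/μ)
g_exact = 1.0 / (1.0 / g0 - b * L)    # closed-form running
print(abs(run_coupling(g0, L) - g_exact) < 1e-8)
```

The closed-form solution blows up when $\log(\mu'/\mu)$ reaches $16\pi^2/(3 g_0)$, which is the Landau-pole behaviour described above.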
+
+\subsection{Renormalization of QED}
+That was pretty disappointing. How about the other theory we studied in QFT, namely QED? Does it exist?
+
We again try dimensional regularization. This will be slightly subtle, because in QED, we have both spacetime and a spinor space. In genuine QED, both of these have dimension $4$. If we were to do this properly, we would have to change the dimensions of \emph{both} of them to $d$, and then do the computations. It turns out, however, that it is okay to just keep working with $4$-dimensional spinors. We can think of this as picking a slightly different renormalization scheme than \textoverline{MS}.
+
+In $d$ dimensions, the classical action for QED in Euclidean signature is
+\[
+ S[A, \psi] = \int \d^d x\; \left(\frac{1}{4 e^2} F_{\mu\nu}F^{\mu\nu} + \bar\psi \slashed \D \psi + m \bar\psi\psi\right),
+\]
+where
+\[
+ \slashed \D\psi = \gamma^\mu (\partial_\mu + i A_\mu) \psi.
+\]
+Note that in the Euclidean signature, we have lost a factor of $i$, and also we have
+\[
+ \{\gamma^\mu, \gamma^\nu\} = 2 \delta^{\mu\nu}.
+\]
+To do perturbation theory, we'd like the photon kinetic term to be canonically normalized. So we introduce
+\[
+ A_\mu^{\mathrm{new}} = \frac{1}{e} A_\mu^{\mathrm{old}},
+\]
+and then
+\[
+ S[A^\mathrm{new}, \psi] = \int \d^d x\; \left(\frac{1}{4} F_{\mu\nu}F^{\mu\nu} + \bar\psi (\slashed\partial + i e \slashed{A}) \psi + m \bar\psi \psi\right).
+\]
+The original photon field necessarily has $[A^{\mathrm{old}}] = 1$, as it goes together with the derivative. So in $d$ dimensions, we have
+\[
+ [e] = \frac{4 - d}{2}.
+\]
+Thus, we find
+\[
+ [A^{\mathrm{new}}] = [A^{\mathrm{old}}] - [e] = \frac{d - 2}{2}.
+\]
+From now on, unless otherwise specified, we will use $A^{\mathrm{new}}$, and just call it $A$.
+
+As before, we introduce a dimensionless coupling constant in terms of an arbitrary scale $\mu$ by
+\[
+ e^2 = \mu^{4 - d} g^2(\mu).
+\]
+Let's consider the exact photon propagator in momentum space, i.e.
+\[
+ \Delta_{\mu\nu}(q) = \int \d^d x\; e^{iq \cdot x} \bra A_\mu (x) A_\nu(0)\ket
+\]
+in Lorenz gauge $\partial^\mu A_\mu = 0$.
+
+We can expand this perturbatively as
+\[
+ \begin{tikzpicture}[eqpic]
+ \draw [decorate, decoration={snake}] (0, 0) node [circ] {} -- (2, 0) node [circ] {};
+ \end{tikzpicture}
+ +
+ \begin{tikzpicture}[eqpic]
+ \draw [decorate, decoration=snake](0, 0) node [circ] {} -- (2, 0) node [circ] {};
+
+ \draw [fill=white] (1, 0) circle [radius=0.3] node {1PI};
+ \end{tikzpicture}
+ +
+ \begin{tikzpicture}[eqpic]
+ \draw [decorate, decoration=snake](0, 0) node [circ] {} -- (3, 0) node [circ] {};
+
+ \draw [fill=white] (1, 0) circle [radius=0.3] node {1PI};
+ \draw [fill=white] (2, 0) circle [radius=0.3] node {1PI};
+ \end{tikzpicture}
+ + \cdots
+\]
+The first term is the classical propagator
+\[
+ \Delta_{\mu\nu}^0(q) = \frac{1}{q^2} \left(\delta_{\mu\nu} - \frac{q_\mu q_\nu}{q^2}\right),
+\]
+and then as before, we can write the whole thing as
+\[
+ \Delta_{\mu\nu}(q) = \Delta^0_{\mu\nu}(q) + \Delta^{0\rho}_\mu(q) \Pi_\rho^\sigma(q) \Delta^0_{\sigma\nu}(q) + \Delta_\mu^{0\rho} \Pi_\rho^\sigma \Delta_\sigma^{0\lambda} \Pi_\lambda^\kappa \Delta^0_{\kappa\nu} + \cdots,
+\]
+where $-\Pi_{\rho\sigma}(q)$ is the photon self-energy, given by the one-particle irreducible graphs.
+
+We will postpone the computation of the self-energy for the moment, and just quote that the result is
+\[
+ \Pi_\rho^\sigma(q) = q^2 \left(\delta_\rho^\sigma - \frac{q_\rho q^\sigma}{q^2}\right) \pi(q^2)
+\]
+for some scalar function $\pi(q^2)$. This operator
+\[
+ P_\rho^\sigma = \left(\delta_\rho^\sigma - \frac{q_\rho q^\sigma}{q^2}\right)
+\]
+is a projection operator onto transverse polarizations. In particular, like any projection operator, it is idempotent:
+\[
+ P_\rho^\sigma P_\sigma^\lambda = P_\rho^\lambda.
+\]
This allows us to simplify the expression for the exact propagator, and write it as
+\[
+ \Delta_{\mu\nu}(q) = \Delta^0_{\mu\nu}(1 + \pi(q^2) + \pi^2(q^2) + \cdots) = \frac{\Delta_{\mu\nu}^0}{1 - \pi(q^2)}.
+\]
+Just as the classical propagator came from the kinetic term
+\[
 S_{\mathrm{kin}} = \frac{1}{4} \int F_{\mu\nu}F^{\mu\nu} \;\d^d x = \frac{1}{2} \int q^2 \left(\delta_{\mu\nu} - \frac{q_\mu q_\nu}{q^2}\right) \tilde{A}^\mu (-q) \tilde{A}^\nu(q)\;\d^d q,
+\]
+so too our exact propagator is what we'd get from an action whose quadratic term is
+\[
+ S_{\mathrm{quant}} = \frac{1}{2} \int (1 - \pi(q^2)) q^2 \left(\delta_{\mu\nu} - \frac{q_\mu q_\nu}{q^2}\right) \tilde{A}^\mu (-q) \tilde{A}^\nu(q) \;\d^d q.
+\]
+Expanding $\pi(q^2)$ around $q^2 = 0$, the leading term just corrects the kinetic term of the photon. So we have
+\[
+ S_{\mathrm{quant}} \sim \frac{1}{4} (1 - \pi(0)) \int F_{\mu\nu}F^{\mu\nu} \;\d^d x + \text{higher derivative terms}.
+\]
+One can check that the higher derivative terms will be irrelevant in $d = 4$.
+
+\subsubsection*{Computing the photon self-energy}
+We now actually compute the self energy. This is mostly just doing some horrible integrals.
+
+To leading order, using the classical action (i.e.\ not including the counterterms), we have
+\begin{align*}
+ \Pi_{\mathrm{1-loop}}^{\rho\sigma} &=
+ \begin{tikzpicture}[eqpic]
+ \draw circle [radius=0.5];
+ \draw [decorate, decoration={snake}] (-1.5, 0) node [left] {$A^\sigma$} -- (-0.5, 0) node [circ] {} node [pos=0.5, above] {$q$};
+ \draw [decorate, decoration={snake}] (0.5, 0) node [circ] {} -- (1.5, 0) node [right] {$A^\rho$};
+ \draw [->] (0.1, -0.5) -- +(0.0001, 0);
+ \node [below] at (0, -0.5) {$p - q$};
+ \draw [->] (-0.1, 0.5) -- +(-0.0001, 0);
+ \node [above] at (0, 0.5) {$p$};
+ \end{tikzpicture}\\
 &= g^2 \mu^{4 - d} \int \frac{\d^d p}{(2\pi)^d} \Tr\left(\frac{(-i\slashed p + m)\gamma^\rho}{p^2 + m^2} \frac{(-i(\slashed p - \slashed q) + m)\gamma^\sigma}{(p - q)^2 + m^2}\right),
+\end{align*}
where we take the trace to take into account the fact that we are summing over all possible spins running in the loop.
+
+To compute this, we need a whole series of tricks. They are just tricks. We first need the partial fraction identity
+\[
+ \frac{1}{AB} = \frac{1}{B - A} \left[\frac{1}{A} - \frac{1}{B}\right] = \int_0^1 \frac{\d x}{((1 - x) A + xB)^2}.
+\]
+Applying this to the two denominators in our propagators gives
+\begin{align*}
+ \frac{1}{(p^2 + m^2)((p - q)^2 + m^2)} &= \int_0^1 \frac{\d x}{((p^2 + m^2)(1 - x) + ((p - q)^2 + m^2)x)^2} \\
 &= \int_0^1 \frac{\d x}{(p^2 + m^2 - 2x p\cdot q + q^2x)^2}\\
 &= \int_0^1 \frac{\d x}{((p - xq)^2 + m^2 + q^2 x (1 - x))^2}.
+\end{align*}
+Letting $p' = p - qx$, and then dropping the prime, our loop integral becomes
+\[
 \frac{g^2 \mu^{4 - d}}{(2\pi)^d} \int \d^d p\int_0^1\d x\; \frac{\tr ((-i(\slashed p + \slashed q x) + m)\gamma^\rho (-i(\slashed p - \slashed q(1 - x)) + m)\gamma^\sigma)}{ (p^2 + \Delta)^2},
+\]
+where $\Delta = m^2 + q^2 x(1 - x)$.
+
+We next do the trace over Dirac spinor indices. As mentioned, if we are working in $d$ dimensions, then we should be honest and work with spinors in $d$ dimensions, but in this case, we'll get away with just pretending it is four dimensions.
+
+We have
+\[
 \tr(\gamma^\rho \gamma^\sigma) = 4 \delta^{\rho\sigma},\quad \tr (\gamma^\mu \gamma^\rho \gamma^\nu \gamma^\sigma) = 4 (\delta^{\mu\rho}\delta^{\nu\sigma} - \delta^{\mu\nu} \delta^{\rho\sigma} + \delta^{\mu\sigma}\delta^{\rho\nu}).
+\]
+Then the huge trace expression we have just becomes
+\begin{multline*}
+ 4\Big(-(p + qx)^\rho (p - q(1 - x))^\sigma + (p + qx) \cdot (p - q (1 - x)) \delta^{\rho\sigma} \\
+ - (p + qx)^\sigma(p - q(1 - x))^\rho + m^2 \delta^{\rho\sigma}\Big).
+\end{multline*}
Whenever $d \in \N$, the integral certainly vanishes if any component of $p$ appears an odd number of times in the numerator. Consequently, in the numerator, we can replace
+\[
+ p^\rho p^\sigma \mapsto \frac{p^2}{d} \delta^{\rho\sigma}.
+\]
+Similarly, we have
+\[
 p^\mu p^\rho p^\nu p^\sigma \mapsto \frac{(p^2)^2}{d(d + 2)} (\delta^{\mu\nu} \delta^{\rho\sigma} + \delta^{\mu\rho} \delta^{\nu\sigma} + \delta^{\mu\sigma}\delta^{\rho\nu}).
+\]
+The integrals are given in terms of $\Gamma$-functions, and we obtain
+\begin{multline*}
+ \Pi^{\rho\sigma}_{\mathrm{1-loop}}(q) = \frac{-4g^2 \mu^{4 - d}}{(4\pi)^{d/2}} \Gamma\left(2 - \frac{d}{2}\right) \int_0^1 \d x\;\frac{1}{\Delta^{2 - d/2}}\\
+ \Big(\delta^{\rho\sigma} (-m^2 + x(1 - x)q^2) + \delta^{\rho\sigma}(m^2 + x(1 - x)q^2) - 2x(1 - x) q^\rho q^\sigma\Big).
+\end{multline*}
+And, looking carefully, we find that the $m^2$ terms cancel, and the result is
+\[
+ \Pi_{\mathrm{1-loop}}^{\rho\sigma}(q) = q^2 \left(\delta^{\rho\sigma} - \frac{q^\rho q^\sigma}{q^2}\right) \pi_{\mathrm{1-loop}}(q^2),
+\]
+where
+\[
+ \pi_{\mathrm{1-loop}}(q^2) = \frac{-8 g^2(\mu)}{(4\pi)^{d/2}} \Gamma\left(2 - \frac{d}{2}\right) \int_0^1 \d x\; x(1 - x) \left(\frac{\mu^2}{\Delta}\right)^{2 - d/2}.
+\]
The key point is that this diverges when $d = 4$, because $\Gamma\left(2 - \frac{d}{2}\right)$ has a pole there, and so we need to introduce counterterms.
+
+We set $d = 4 - \varepsilon$, and $\varepsilon \to 0^+$. We introduce a counterterm
+\[
+ S^{\mathrm{CT}}[A, \psi, \varepsilon] = \int \left(\frac{1}{4} \delta Z_3 F_{\mu\nu}F^{\mu\nu} + \delta Z_2 \bar\psi \slashed\D \psi + \delta M \bar\psi \psi\right)\;\d^d x.
+\]
+Note that the counterterms multiply gauge invariant contributions. We do not have separate counterterms for $\bar{\psi} \slashed\partial \psi$ and $\bar\psi \slashed{A} \psi$. We can argue that this must be the case because our theory is gauge invariant, or if we want to do it properly, we can justify this using the Ward identities. This is important, because if we did this by imposing cutoffs, then this property doesn't hold.
+
+For $\Pi_{\mu\nu}^{\mathrm{1-loop}}$, the appropriate counterterm is $\delta Z_3$. As $\varepsilon \to 0^+$, we have
+\[
 \pi^{\mathrm{1-loop}}(q^2) \sim - \frac{g^2(\mu)}{2\pi^2} \int_0^1 \d x\; x(1 - x) \left(\frac{2}{\varepsilon} - \gamma + \log \left(\frac{4\pi \mu^2}{\Delta}\right) + O(\varepsilon)\right).
+\]
+The counterterm
+\[
+ \begin{tikzpicture}[eqpic]
+ \draw (0, 0) -- (3, 0);
+ \draw [fill=white] (1.5, 0) circle [radius=0.11];
+ \node at (1.5, 0) {$\times$};
+ \end{tikzpicture}
+ = -\frac{\delta Z_3}{4} \int F_{\mu\nu} F^{\mu\nu} \;\d^4 x,
+\]
+and must be chosen to remove the $\frac{1}{\varepsilon}$ singularity. In the \textoverline{MS} scheme, we also remove the $- \gamma + \log 4\pi$ piece, because we hate them. So what is left is
+\[
+ \pi^{\overline{\mathrm{MS}}}(q^2) = + \frac{g^2(\mu)}{2\pi^2} \int_0^1 \d x\; x(1 - x) \log \left(\frac{m^2 + q^2 x(1 - x)}{\mu^2}\right).
+\]
+Then this is finite in $d = 4$. As we previously described, this $1$-loop correction contains the term $\sim \log \left(\frac{m^2 + q^2}{\mu^2}\right)$. So it is small when $m^2 + q^2 \sim \mu^2$.
+
+Notice that the $\log$ term has a branch point when
+\[
+ m^2 + q^2x(1 - x) = 0.
+\]
+For $x \in [0, 1]$, we have
+\[
+ x (1 - x) \in \left[0, \frac{1}{4}\right].
+\]
+So the branch cut is inaccessible with real Euclidean momenta. But in Lorentzian signature, we have
+\[
+ q^2 = \mathbf{q}^2 - E^2,
+\]
+so the branch cut occurs when
+\[
+ (E^2 - \mathbf{q}^2) x(1 - x) \geq m^2,
+\]
which can be reached whenever $E^2 - \mathbf{q}^2 \geq (2m)^2$. This is exactly the threshold energy for creating a \emph{real} $e^+ e^-$ pair.
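A quick numerical scan (illustrative only, in units with $m = 1$) confirms that the branch cut becomes accessible exactly at this threshold:

```python
# The branch point m² + q² x(1-x) = 0 is reachable for some x in [0, 1]
# exactly when the Lorentzian invariant s = E² - |q|² satisfies s ≥ (2m)²,
# since x(1-x) has maximum 1/4 on [0, 1].
def branch_cut_reached(s, m=1.0, n=10001):
    """Scan x in [0, 1] for m² - s·x(1-x) ≤ 0 (here q² = -s)."""
    for i in range(n):
        x = i / (n - 1)
        if m**2 - s * x * (1 - x) <= 0:
            return True
    return False

print(branch_cut_reached(3.9))  # just below threshold (2m)² = 4: False
print(branch_cut_reached(4.1))  # just above threshold: True
```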
+
+\subsubsection*{The QED \tph{$\beta$}{beta}{β}-function}
+To relate this ``one-loop'' exact photon propagator to the $\beta$-function for the electromagnetic coupling, we rescale back to $A_\mu^{\mathrm{old}} = e A_\mu^{\mathrm{new}}$, where we have
+\begin{align*}
+ S^{(2)}_{\mathrm{eff}} [A^{\mathrm{old}}] &= \frac{1}{4g^2} (1 - \pi(0)) \int F_{\mu\nu}F^{\mu\nu} \;\d^4 z\\
 &= \frac{1}{4} \left(\frac{1}{g^2(\mu)} - \frac{1}{2\pi^2}\int_0^1 \d x\; x(1 - x) \log \left(\frac{m^2}{\mu^2}\right)\right) \int F_{\mu\nu}F^{\mu\nu} \;\d^4 z.
+\end{align*}
+We call the spacetime parameter $z$ instead of $x$ to avoid confusing it with the Feynman parameter $x$.
+
+Since nothing physical can depend on the arbitrary scale $\mu$, we must have
+\[
+ 0 = \mu \frac{\partial}{\partial \mu} \left[\frac{1}{g^2(\mu)} - \frac{1}{2\pi^2} \int_0^1 \d x\; x(1 - x) \log\left(\frac{m^2}{\mu^2}\right)\right].
+\]
+Solving this, we find that to lowest order, we have
+\[
 \beta(g) = \frac{g^3}{12 \pi^2}.
+\]
This integral is easy to do because we are evaluating at $\pi(0)$, and then the $\log$ term does not depend on $x$.
+
+This tells us that the running couplings are given by
+\[
+ \frac{1}{g^2(\mu)} = \frac{1}{g^2(\mu')} + \frac{1}{6\pi^2} \log\left(\frac{\mu'}{\mu}\right).
+\]
+Now suppose $\mu \sim m_e$, where we measure
+\[
+ \frac{g^2(m_e)}{4\pi} \approx \frac{1}{137},
+\]
+the fine structure constant. Then there exists a scale $\mu'$ given by
+\[
 \mu' = m_e e^{6\pi^2/g^2(m_e)} \sim \SI{e286}{\electronvolt},
+\]
+where $g^2(\mu')$ diverges! This is known as a \term{Landau pole}.
+
+Yet again, the belief is that pure QED does not exist as a continuum quantum field theory. Of course, what we have shown is that our perturbation theory breaks down, but it turns out more sophisticated regularizations also break down.
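The arithmetic behind this estimate is a one-liner; the inputs are the measured fine structure constant quoted above and the electron mass, $m_e \approx \SI{5.11e5}{\electronvolt}$:

```python
import math

# Landau-pole scale estimate: μ' = m_e · exp(6π²/g²(m_e)), with g²/4π ≈ 1/137.
g2 = 4 * math.pi / 137
log_ratio = 6 * math.pi**2 / g2           # log(μ'/m_e) ≈ 646
mu_prime_eV = 5.11e5 * math.exp(log_ratio)
print(round(math.log10(mu_prime_eV)))     # ≈ 286 decades of eV: absurdly far UV
```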
+
+\subsubsection*{Physics of vacuum polarization}
Despite QED not existing, we can still say something interesting about it. Classically, when we do electromagnetism in a dielectric material, the electromagnetic field causes the dielectric to become slightly polarized, which results in a shift in the effective electromagnetic potential. We shall see that in QED, the \emph{vacuum} itself will act as such a dielectric material, due to the effects of virtual electron--positron pairs.
+
Consider the scattering of two (distinguishable) Dirac spinors of charges $e_1$ and $e_2$. The $S$-matrix for the process is given by
+\[
+ S(1\; 2 \to 1'\;2') = - \frac{e_1e_2}{4\pi} \delta^4(p_1 + p_2 - p_{1'} - p_{2'})\bar{u}_{1'} \gamma^\mu u_1 \Delta_{\mu\nu}(q) \bar{u}_{2'} \gamma^\nu u_2,
+\]
+where $q = p_1 - p_1'$ is the momentum of the photon propagator.
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (cl);
+ \vertex [right=of cl] (cr);
+ \vertex [above left=of cl] (tl) {$1'$};
+ \vertex [below left=of cl] (bl) {$1$};
+ \vertex [above right=of cr] (tr) {$2'$};
+ \vertex [below right=of cr] (br) {$2$};
+ \diagram*{
+ (bl) -- [fermion, momentum'=$p_1$] (cl) -- [fermion, momentum'=$p_1'$] (tl),
+ (br) -- [fermion, momentum=$p_2$] (cr) -- [fermion, momentum=$p_2'$] (tr),
+ (cl) -- [photon] (cr)
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+Here we are including the exact photon propagator in the diagram. It is given by
+\[
+ \frac{\Delta_{\mu\nu}^0(q)}{1 - \pi(q^2)},
+\]
+and so we can write
+\[
+ \approx \frac{-e_1 e_2}{4\pi} \delta^{(4)} (p_1 + p_2 - p_{1'} - p_{2'}) \bar{u}_{1'} \gamma^\mu u_1 \Delta_{\mu\nu}^0 \bar{u}_{2'} \gamma^\nu u_2 (1 + \pi(q^2) + \cdots).
+\]
+So the quantum corrections modify the classical one by a factor of $(1 + \pi(q^2) + \cdots)$. To evaluate this better, we note that
+\[
 \bar{u}_{1'} \gamma^\mu u_1 \Delta_{\mu\nu}^0\bar{u}_{2'} \gamma^\nu u_2 = \bar{u}_{1'} \gamma^\mu u_1 \bar{u}_{2'} \gamma_\mu u_2 \frac{1}{q^2}.
+\]
In the non-relativistic limit, we expect $|q^0| \ll |\mathbf{q}|$, so that $q^2 \approx |\mathbf{q}|^2$, and
\[
 \bar{u}_{1'}\gamma^\mu u_1 \approx
 \begin{pmatrix}
 i \delta_{m_1 m_1'}\\
 \mathbf{0}
 \end{pmatrix},
\]
where $m_1, m_{1'}$ are the $\sigma_3$ (spin) angular momentum quantum numbers. This tells us that, in this limit, the spin of each particle is conserved.
+
+Consequently, in this non-relativistic limit, we have
+\[
+ S(1\;2 \to 1'\; 2') \approx \frac{e_1 e_2}{4\pi |\mathbf{q}|^2} \delta^{(4)} (p_1 + p_2 - p_{1'} - p_{2'}) (1 + \pi (|\mathbf{q}|^2)) \delta_{m_1 m_1'} \delta_{m_2 m_2'}.
+\]
+This is what we would get using the Born approximation if we had a potential of
+\[
+ V(\mathbf{r}) = e_1 e_2 \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \left(\frac{1 + \pi(|\mathbf{q}|^2)}{ |\mathbf{q}|^2}\right) e^{i\mathbf{q}\cdot \mathbf{r}}.
+\]
In particular, if we drop the $\pi(|\mathbf{q}|^2)$ piece, then this is just the Coulomb potential.
+
+In the regime $|\mathbf{q}|^2 \ll m_e^2$, picking $\mu = m_e$, we obtain
+\begin{align*}
+ \pi(|\mathbf{q}|^2) &= \pi(0) + \frac{g^2(\mu)}{2\pi^2} \int_0^1 \d x\; x(1 - x) \log \left(1 + \frac{x(1 - x)|\mathbf{q}|^2}{m^2}\right)\\
+ &\approx \pi(0) + \frac{g^2(\mu)}{60\pi^2} \frac{|\mathbf{q}|^2}{m^2}.
+\end{align*}
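The coefficient in the second line comes from $\int_0^1 x^2(1 - x)^2 \,\d x = 1/30$, which combines with the prefactor $1/2\pi^2$ to give $1/60\pi^2$. A quick numerical check of this small-$|\mathbf{q}|^2$ expansion:

```python
import math

# Check ∫_0^1 x(1-x) log(1 + t·x(1-x)) dx ≈ t/30 for small t, which is the
# origin of the g²|q|²/(60π² m²) term in π(|q|²).
def pi_integral(t, n=100000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x * (1 - x) * math.log(1 + t * x * (1 - x))
    return total * h

t = 1e-4
print(abs(pi_integral(t) - t / 30) < 1e-9)
```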
+Then we find
+\begin{align*}
+ V(\mathbf{r}) &\approx e_1 e_2 \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \left(\frac{1 + \pi(0)}{\mathbf{q}^2} + \frac{g^2}{60 \pi^2 m^2} + \cdots\right) e^{i\mathbf{q}\cdot \mathbf{r}}\\
+ &= e_1 e_2\left(\frac{ 1 + \pi(0)}{4\pi r} + \frac{g^2}{60 \pi^2 m^2} \delta^3(\mathbf{r})\right).
+\end{align*}
So we get a short-range modification to the Coulomb potential. We obtained a $\delta$-function precisely because we made the assumption $|\mathbf{q}|^2 \ll m_e^2$. If we did more accurate calculations, then we don't get a $\delta$-function, but we still get a contribution that is exponentially suppressed as we move away from $0$.
+
+This modification of the Coulomb force is attributed to ``\term{screening}''. It leads to a measured shift in the energy levels of $\ell = 0$ bound states of hydrogen. We can interpret this as saying that the vacuum itself is some sort of dielectric medium, and this is the effect of loop diagrams that look like
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (cl);
+ \vertex [right=of cl] (cr);
+ \vertex [above left=of cl] (tl) {$1'$};
+ \vertex [below left=of cl] (bl) {$1$};
+ \vertex [above right=of cr] (tr) {$2'$};
+ \vertex [below right=of cr] (br) {$2$};
+ \diagram*{
+ (bl) -- [fermion, momentum'=$p_1$] (cl) -- [fermion, momentum'=$p_1'$] (tl),
+ (br) -- [fermion, momentum=$p_2$] (cr) -- [fermion, momentum=$p_2'$] (tr),
+ (cl) -- [photon] (cr)
+ };
+ \end{feynman}
+ \draw [fill=white] (0.75, 0) circle [radius=0.3];
+ \end{tikzpicture}
+\end{center}
+The idea is that the (genuine) charges $e_1$ and $e_2$ polarize the vacuum, which causes virtual particles to pop in and out to repel or attract other particles.
+
+So far, the conclusions of this course are pretty distressing. We looked at $\phi^4$ theory, and we discovered it doesn't exist. We then looked at QED, and it doesn't exist. Now, we are going to look at Yang--Mills theory, which is believed to exist.
+
+%\separator
+%When we did the loop integral
+%\begin{center}
+% \begin{tikzpicture}
+% \draw circle [radius=0.5];
+% \draw [decorate, decoration={snake}] (-1.5, 0) node [left] {$A^\sigma$} -- (-0.5, 0) node [circ] {} node [pos=0.5, above] {$q$};
+% \draw [decorate, decoration={snake}] (0.5, 0) node [circ] {} -- (1.5, 0) node [right] {$A^\rho$};
+% \draw [->] (0.1, -0.5) -- +(0.0001, 0);
+% \node [below] at (0, -0.5) {$p - q$};
+% \draw [->] (-0.1, 0.5) -- +(-0.0001, 0);
+% \node [above] at (0, 0.5) {$p$};
+% \end{tikzpicture}
+%\end{center}
+%we had the integral
+%\[
+% -(-ie)^2 \int \tr \left(\gamma^\mu \frac{1}{i\slashed p + m} \gamma^\nu \frac{1}{i\slashed p}m\right)\frac{\d^4 p}{(2\pi)^d}.
+%\]
+%The negative sign at the beginning comes from Fermions running around the loop. There are two (equivalent) ways to understand where this came from.
+%
+%Suppose we have
+%\[
+% \bra\psi(x) \bar\psi(y)\ket = S(x, y)
+%\]
+%is the Dirac propagator in space-time. Then we have
+%\[
+% \bra \psi(x) \bar{\psi}(y)\ket = - \bra \bar{\psi(y)}\psi(x)\ket,
+%\]
+%since $\psi$ is anti-commuting. Therefore, in position space, we'd get a term
+%\[
+% (-ie)^2 \int \d^d x\; \bar\psi \slashed A \psi(x) \int \d^d y \bar\psi \slashed A \psi(y)
+%\]
+%from expanding the fermionic term. But joining the electron fields with propagators, we must connect $\bar{\psi} (x)$ with $\psi(y)$ and $\psi(x)$ with $\bar{\psi}(y)$. But one of these terms is out of order. So we got a minus sign. More generally, if we have
+%\[
+% \bar\psi \slashed{A} \psi(x_1) \bar\psi\slashed{A} \psi(x_2) \cdots \bar{\psi} \slashed{A} \psi(x_n),
+%\]
+%if we want to join these $\psi$'s in a loop, then we can connect $\psi(x_1)$ with $\bar\psi(x_2)$, $\psi(x_1)$ with $\bar{\psi}(x_2)$ etc, and the last one will be mismatched. So we always get a minus sign.
+%
+%Alternatively, if we perform the path integral over $\psi \bar\psi$, we get (formally)
+%\[
+% i \int \D A \D \bar\psi \D \psi e^{-S[A, \psi, \bar\psi} = \int \D A \det (\slashed{\D} + m) e^{-\frac{1}{4} \int F^2\;\d^d x},
+%\]
+%since we are just integrating Gaussians. So we get an effective action for the gauge field for $A$ by
+%\[
+% S_{\mathrm{eff}}[A] = \frac{1}{4} \int F^2 \d^d x - \log \det (\slashed D + m).
+%\]
+%This negative sign appears because we are multiplying by $\det (\slashed D + m)$ instead of $(\det (\slashed D + m))^{-1}$ when we have Fermions. The perturbatively, we have
+%\begin{align*}
+% S_{\mathrm{eff}}[A] &= \frac{1}{5} F^2 \;\d^d x - \tr \log (\slashed D + m)\\
+% &= \log (\slashed{\partial} + m) + \sum_{n = 1}^\infty \frac{1}{n} \left(ie\slashed{A} \frac{1}{\slashed{\partial} + m} ie\slashed{A} \frac{1}{\slashed{\partial} + m} \cdots \frac{1}{\slashed{\partial} + m}\right).
+%\end{align*}
+
+\section{Non-abelian gauge theory}
+\subsection{Bundles, connections and curvature}
+\subsubsection*{Vector bundles}
+To do non-abelian gauge theory (or even abelian gauge theory, really), we need to know some differential geometry. In this section, vector spaces can be either real or complex --- our theory works fine with either.
+
+So far, we had a universe $M$, and our fields took values in some vector space $V$. Then a field is just a (smooth/continuous/whatever) function $f: M \to V$. However, this requires the field to take values in the \emph{same} vector space at each point. It turns out it is a more natural thing to assign a vector space $V_x$ for each $x \in M$, and then a field $\phi$ would be a function
+\[
+ \phi: M \to \coprod_{x \in M} V_x \equiv E,
+\]
+where $\coprod$ means, as usual, the disjoint union of the vector spaces. Of course, the point of doing so is that we shall require
+\[
+ \phi(x) \in V_x \text{ for each $x$}.\tag{$*$}
+\]
+This will open a can of worms, but is needed to do gauge theory properly.
+
+We are not saying that each $x \in M$ will be assigned a completely random vector space, e.g.\ one would get $\R^3$ and another would get $\R^{144169}$. In fact, they will all be isomorphic to some fixed space $V$. So what do we actually achieve? While it is true that $V_x \cong V$ for all $x$, there is no canonical \emph{choice} of such an isomorphism. We will later see that picking such an isomorphism corresponds to picking a \emph{gauge} of the field.
+
+Now if we just have a bunch of vector spaces $V_x$ for each $x \in M$, then we lose the ability to talk about whether a field is differentiable or not, let alone taking the derivative of a field. So we want to ``glue'' them together in some way. We can write
+\[
+ E = \{(x, v): x \in M, v \in V_x\},
+\]
+and we require $E$ to be a \emph{manifold} as well. We call these \emph{vector bundles}. There are some technical conditions we want to impose on $E$ so that it is actually possible to work on it.
+
+There is a canonical projection map $\pi: E \to M$ that sends $(x, v)$ to $x$. Then we have $V_x = \pi^{-1}(\{x\})$. A natural requirement is that we require this map $\pi$ to be smooth. Otherwise, we might find someone making $E$ into a manifold in a really stupid way.
+
+We now introduce a convenient terminology.
+\begin{defi}[Section]\index{section}
+ Let $p: E \to M$ be a map between manifolds. A \emph{section} is a smooth map $s: M \to E$ such that $p \circ s = \id_M$.
+\end{defi}
+Now we can rewrite the condition $(*)$ as saying $\phi$ is a section of the map $\pi: E \to M$. In general, our fields will usually be sections of some vector bundle.
+
+\begin{eg}
+ Let $M$ be any manifold, and $V$ be any vector space. Then $M \times V$ is a manifold in a natural way, with a natural projection $\pi_1: M \times V \to M$. This is called the \term{trivial bundle}.
+\end{eg}
+
+Now trivial bundles are very easy to work with, since a section of the trivial bundle is in a very natural correspondence with maps $M \to V$. So we know exactly how to work with them.
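The defining property $p \circ s = \id_M$ of a section, and the correspondence between sections of a product bundle $M \times V$ and maps $M \to V$, can be illustrated with a tiny sketch; the set $M$, the fiber $V = \R$ and the map $\phi$ below are all arbitrary illustrative choices:

```python
# Toy model of a trivial bundle E = M × V, with points of E represented
# as pairs (x, v).  M is modelled by a finite list of sample points.
M = [0.0, 0.5, 1.0, 1.5]

def p(point):
    # projection E -> M, forgetting the fiber coordinate
    x, v = point
    return x

def s(x):
    # a section x -> (x, φ(x)), here with the arbitrary choice φ(x) = x**2
    return (x, x**2)

# The defining property of a section: projecting back down is the identity.
assert all(p(s(x)) == x for x in M)
```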
+
+The final condition we want to impose on our vector bundles is not that they are trivial, or else we have achieved absolutely nothing. What we want to require is that near every point, the vector bundle looks trivial.
+
+\begin{defi}[Vector bundle]\index{vector bundle}
+ Let $M$ be a manifold, and $V$ a vector space. A \emph{vector bundle} over $M$ with \term{typical fiber}\index{fiber} $V$ is a manifold $E$ with a map $\pi: E \to M$ such that for all $x \in M$, the fiber $E_x = \pi^{-1}(\{x\})$ is a vector space isomorphic to $V$.
+
+ Moreover, we require that for each $x \in M$, there exists an open neighbourhood $U$ of $x$, and a diffeomorphism $\Phi: U \times V \to \pi^{-1}(U)$ such that $\pi(\Phi(y, v)) = y$ for all $y$, and $\Phi(y, \ph): \{y\} \times V \to E_y$ is a \emph{linear isomorphism} of vector spaces.
+
+ Such a $\Phi$ is called a \term{local trivialization} of $E$, and $U$ is called a \term{trivializing neighbourhood}.
+\end{defi}
+By definition, each point $x$ is contained in some trivializing neighbourhood. Thus, we can find a \term{trivializing cover} $\{U_\alpha\}$ with a trivialization on each $U_\alpha$ such that $\bigcup U_\alpha = M$.
+
+There are some philosophical remarks we can make here. On $\R^n$, every bundle is isomorphic to a trivial bundle. If we only care about Euclidean (or Minkowski) space, then it seems like we are not actually achieving much. But morally, requiring that a bundle is just locally trivial, and not globally trivial in some sense tells us everything we do is ``local''. Indeed, we are mere mortals who can at best observe the observable universe. We cannot ``see'' anything outside of the observable universe, and in particular, it is impossible to know whether bundles remain trivial if we go out of the observable universe. Even if we observe that everything we find resembles a trivial bundle, we are only morally allowed to conclude that we have a \emph{locally} trivial bundle, and are not allowed to conclude anything about the global geometry of the universe.
+
+Another reason for thinking about bundles instead of just trivial ones is that while every bundle over $\R^n$ is globally trivial, the \emph{choice} of trivialization is not canonical, and there is a lot of choice to be made. Usually, when we have a vector space $V$ and want to identify it with $\R^n$, then we just pick a basis for it. But now if we have a vector bundle, then we have to pick a basis at \emph{each point} in space. This is a lot of arbitrary choices to be made, and it is often more natural to study the universe without making such choices. Working on a vector bundle in general also prevents us from saying things that depend on the way we trivialize our bundle, and so we ``force'' ourselves to say sensible things only.
+
+Recall that for a trivial bundle, a section of the bundle is just a map $M \to V$. Thus, for a general vector bundle, if we have a local trivialization $\Phi$ on $U$, then under the identification given by $\Phi$, a section defined on $U$ can alternatively be written as a map $\phi: U \to V$, which we may write, in coordinates, as $\phi^a(x)$, for $a = 1, \cdots, \dim V$. Note that this is only valid in the neighbourhood $U$, and also depends on the $\Phi$ we pick.
+
+\begin{eg}
+ Let $M$ be any manifold. Then the \term{tangent bundle}
+ \[
+ TM = \coprod_{x \in M} T_x M \to M
+ \]
+ is a vector bundle. Similarly, the \term{cotangent bundle}
+ \[
+ T^*M = \coprod_{x \in M} T_x^* M \to M
+ \]
+ is a vector bundle.
+
+ Recall that given vector spaces $V$ and $W$, we can form new vector spaces by taking the \term{direct sum} $V \oplus W$ and the \term{tensor product} $V \otimes W$. There is a natural way to extend this to vector bundles, so that if $E, F \to M$ are vector bundles, then $E \oplus F$ and $E \otimes F$ are vector bundles with fibers $(E \oplus F)_x = E_x \oplus F_x$ and $(E \otimes F)_x = E_x \otimes F_x$. It is an exercise for the reader to actually construct these bundles. We can similarly extend the notion of exterior product $\exterior^p V$ to vector bundles.
+
+ In particular, applying these to $TM$ and $T^*M$ gives us tensor product bundles of the form $(TM)^{\otimes n} \otimes (T^*M)^{\otimes m}$, whose sections are tensor fields.
+
+ In more familiar notation, (in a local trivialization) we write sections of the tangent bundle as $X^\mu(x)$, and sections of the cotangent bundle as $Y_\mu(x)$. Sections of the tensor product bundle are written as $X^{\mu_1 \cdots \mu_n}{}_{\nu_1 \cdots \nu_m}(x)$.
+\end{eg}
+
+\begin{eg}
+ There is exactly one non-trivial vector bundle we can visualize. Consider the circle $S^1$:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [thick, mblue, dashed] (1.5, 0) arc(0:180:1.5 and 0.3);
+ \draw [thick, mblue] (-1.5, 0) arc(180:360:1.5 and 0.3);
+ \end{tikzpicture}
+ \end{center}
+ Let's consider \term{line bundles} on $S^1$, i.e.\ vector bundles with fiber $\cong \R$. There is of course the trivial bundle, and it looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [thick, mblue, dashed] (1.5, 0) arc(0:180:1.5 and 0.3);
+ \draw [thick, mblue] (-1.5, 0) arc(180:360:1.5 and 0.3);
+ \foreach \theta in {7,27,...,167} {
+ \pgfmathsetmacro\x{1.5*cos(\theta)};
+ \pgfmathsetmacro\y{0.3*sin(\theta)};
+ \draw [dashed, shift={(\x, \y)}] (0, -0.5) -- (0, 0.5);
+ }
+ \foreach \theta in {187,207,...,347} {
+ \pgfmathsetmacro\x{1.5*cos(\theta)};
+ \pgfmathsetmacro\y{0.3*sin(\theta)};
+ \draw [shift={(\x, \y)}] (0, -0.5) -- (0, 0.5);
+ }
+ \end{tikzpicture}
+ \end{center}
+ However, we can also introduce a ``twist'' into this bundle, and obtain the \term{M\"obius band}:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [thick, mblue, dashed] (1.5, 0) arc(0:180:1.5 and 0.3);
+ \draw [thick, mblue] (-1.5, 0) arc(180:360:1.5 and 0.3);
+ \foreach \theta in {7,27,...,167} {
+ \pgfmathsetmacro\x{1.5*cos(\theta)};
+ \pgfmathsetmacro\y{0.3*sin(\theta)};
+ \draw [dashed, shift={(\x, \y)}, rotate={\theta/2}] (0, -0.5) -- (0, 0.5);
+ }
+ \foreach \theta in {187,207,...,347} {
+ \pgfmathsetmacro\x{1.5*cos(\theta)};
+ \pgfmathsetmacro\y{0.3*sin(\theta)};
+ \draw [shift={(\x, \y)}, rotate={\theta/2}] (0, -0.5) -- (0, 0.5);
+ }
+ \end{tikzpicture}
+ \end{center}
+ This is an example of a non-trivial line bundle. How do we know it is non-trivial? The trivial bundle obviously has a nowhere-vanishing section. However, if we stare at the M\"obius band hard enough, we see that any section of the M\"obius band must vanish somewhere. Thus, this cannot be the trivial line bundle.
+
+ In fact, it is a general theorem that a line bundle has a nowhere-vanishing section if and only if it is trivial.
+\end{eg}
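The vanishing claim can be made concrete. In a suitable presentation, a section of the M\"obius bundle is a continuous function $f$ on $[0, 1]$ with the ``twisted'' boundary condition $f(1) = -f(0)$, so the intermediate value theorem forces a zero whenever $f(0) \neq 0$. A quick numerical illustration, with an arbitrary choice of such an $f$:

```python
import math

def f(t):
    # An arbitrary illustrative section obeying the twisted boundary
    # condition f(1) = -f(0): cos(πt) flips sign once around the circle.
    return math.cos(math.pi * t) * (2.0 + math.sin(2 * math.pi * t))

# Check the twisted boundary condition.
assert abs(f(1.0) + f(0.0)) < 1e-12

# Scan for a sign change, which certifies a zero of the section
# (here near t = 1/2, where cos(πt) vanishes).
ts = [i / 1000 for i in range(1001)]
has_zero = any(f(a) * f(b) <= 0 for a, b in zip(ts, ts[1:]))
assert has_zero
```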
+
+We introduce a bit more notation which we will use later.
+
+\begin{notation}
+ Let $E \to M$ be a vector bundle. Then we write \term{$\Omega^0_M(E)$} for the vector space of sections of $E \to M$. Locally, we can write an element of this as $X^a$, for $a = 1, \cdots, \dim E_x$.
+
+ More generally, we write \term{$\Omega^p_M(E)$} for sections of $E \otimes \exterior^pT^*M \to M$, where $\exterior^p T^*M$ is the bundle of $p$-forms on $M$. Elements can locally be written as $X^a_{\mu_1 \ldots \mu_p}$.
+
+ If $V$ is a vector space, then $\Omega^p_M(V)$ is a shorthand for $\Omega^p_M(V \times M)$. % explain this better.
+\end{notation}
+
+Let's return to the definition of a vector bundle. Suppose we had two trivializing neighbourhoods $U_\alpha$ and $U_\beta$, and that they have a non-trivial intersection $U_\alpha \cap U_\beta$. We can then compare the two trivializations on $U_\alpha$ and $U_\beta$:
+\[
+ \begin{tikzcd}
+ (U_\alpha \cap U_\beta) \times V \ar[r, "\Phi_\alpha"] & \pi^{-1}(U_\alpha \cap U_\beta) & (U_\alpha \cap U_\beta) \times V \ar[l, "\Phi_\beta"'].
+ \end{tikzcd}
+\]
+Composing the maps gives us a map
+\[
+ t_{\alpha\beta} = \Phi_\alpha^{-1} \circ \Phi_\beta: (U_\alpha \cap U_\beta) \times V \to (U_\alpha \cap U_\beta) \times V
+\]
+that restricts to a linear isomorphism on each fiber. Thus, this is equivalently a map $U_\alpha \cap U_\beta \to \GL(V)$. This is called the \term{transition function}.
+
+These transition functions satisfy some compatibility conditions. Whenever $x \in U_\alpha \cap U_\beta \cap U_\gamma$, we have
+\[
+ t_{\alpha\beta}(x) \cdot t_{\beta\gamma}(x) = t_{\alpha\gamma}(x).
+\]
+Note that on the left, what we have is the (group) multiplication of $\GL(V)$. These also satisfy the boring condition $t_{\alpha\alpha} = \id$. These are collectively known as the \term{cocycle conditions}.
+
+\begin{ex}
+ Convince yourself that it is possible to reconstruct the whole vector bundle just from the knowledge of the transition functions. Moreover, given any cover $\{U_\alpha\}$ of $M$ and functions $t_{\alpha\beta}: U_\alpha \cap U_\beta \to \GL(V)$ satisfying the cocycle conditions, we can construct a vector bundle with these transition functions.
+\end{ex}
+This exercise is crucial. It is left as an exercise, instead of being spelt out explicitly, because it is much easier to imagine what is going on geometrically in your head than writing it down in words. The idea is that the transition functions tell us how we can glue different local trivializations together to get a vector bundle.
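As a sanity check of the cocycle conditions, one can verify them numerically for a simple family of $\GL(2, \R)$-valued transition functions built from rotation matrices. The functions $h_\alpha$ below are arbitrary illustrative choices; cocycles of this ``telescoping'' form automatically satisfy the conditions (and in fact glue up to a trivial bundle):

```python
import numpy as np

def R(theta):
    # rotation matrix in GL(2, R)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Arbitrary functions h_alpha on (a model of) the overlaps; setting
# t_ab(x) = R(h_a(x) - h_b(x)) makes the angles telescope in products.
h = {"a": lambda x: x**2, "b": lambda x: np.sin(x), "g": lambda x: 3 * x}

def t(i, j, x):
    return R(h[i](x) - h[j](x))

for x in [0.0, 0.7, 2.5]:
    # cocycle condition t_ab(x) t_bg(x) = t_ag(x)
    assert np.allclose(t("a", "b", x) @ t("b", "g", x), t("a", "g", x))
    # and the boring condition t_aa(x) = id
    assert np.allclose(t("a", "a", x), np.eye(2))
```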
+
+Now we want to do better than this. For example, suppose we have $V = \R^n$, which comes with the Euclidean inner product. Then we want the local trivializations to respect this inner product, i.e.\ they are given by orthogonal maps, rather than just linear isomorphisms. It turns out this is equivalent to requiring that the transition functions $t_{\alpha\beta}$ actually land in $\Or(n)$ instead of just $\GL(n, \R)$. More generally, we can have the following definition:
+
+\begin{defi}[$G$-bundle]\index{$G$-bundle}
+ Let $V$ be a vector space, and $G \leq \GL(V)$ be a Lie subgroup. Then a $G$-bundle over $M$ is a vector bundle over $M$ with fiber $V$, equipped with a trivializing cover such that the transition functions take value in $G$.
+\end{defi}
+Note that it is possible to turn a vector bundle into a $G$-bundle in many different ways. So the trivializing cover is part of the \emph{data} needed to specify the $G$-bundle.
+
+We can further generalize this a bit. Instead of picking a subgroup $G \leq \GL(V)$, we pick an arbitrary Lie group $G$ with a representation on $V$. The difference is that now certain elements of $G$ are allowed to act trivially on $V$.
+
+\begin{defi}[$G$-bundle]\index{$G$-bundle}
+ Let $G$ be a Lie group, $V$ a vector space, and $\rho: G \to \GL(V)$ a representation. Then a $G$-bundle consists of the following data:
+ \begin{enumerate}
+ \item A vector bundle $E \to M$.
+ \item A trivializing cover $\{U_\alpha\}$ with transition functions $t_{\alpha\beta}$.
+ \item A collection of maps $\varphi_{\alpha\beta}: U_{\alpha} \cap U_\beta \to G$ satisfying the cocycle conditions such that $t_{\alpha\beta} = \rho \circ \varphi_{\alpha\beta}$.
+ \end{enumerate}
+\end{defi}
+Note that to specify a $G$-bundle, we require specifying an element $\varphi_{\alpha\beta}(x) \in G$ for each $x \in M$, instead of just the induced action $\rho(\varphi_{\alpha\beta}(x)) \in \GL(V)$. This is crucial for our story. We are requiring more information than just how the elements in $V$ transform. Of course, this makes no difference if the representation $\rho$ is faithful (i.e.\ injective), but makes a huge difference when $\rho$ is the trivial representation.
+
+We previously noted that it is possible to recover the whole vector bundle just from the transition functions. Consequently, the information in (i) and (ii) are actually redundant, as we can recover $t_{\alpha\beta}$ from $\varphi_{\alpha\beta}$ by composing with $\rho$. Thus, a $G$-bundle is equivalently a cover $\{U_\alpha\}$ of $M$, and maps $\varphi_{\alpha\beta}: U_\alpha \cap U_\beta \to G$ satisfying the cocycle condition.
+
+Note that this equivalent formulation \emph{does not} mention $\rho$ or $V$ at all!
+
+\begin{eg}
+ Every $n$-dimensional vector bundle is naturally a $\GL(n)$-bundle --- we take $\rho$ to be the identity map, and $\varphi_{\alpha\beta} = t_{\alpha\beta}$.
+\end{eg}
+
+\subsubsection*{Principal \texorpdfstring{$G$}{G}-bundles}
+We are halfway through our voyage into differential geometry. I promise this really has something to do with physics.
+
+We just saw that a $G$-bundle can be purely specified by the transition functions, without mentioning the representation or fibers at all. In some sense, these transition functions encode the ``pure twisting'' part of the bundle. Given this ``pure twisting'' information, and any object $V$ with a representation of $G$ on $V$, we can construct a bundle with fiber $V$, twisted according to this prescription.
+
+This is what we are going to do with gauge theory. The \term{gauge group} is the group $G$, and the gauge business is encoded in these ``twisting'' information. Traditionally, a field is a function $M \to V$ for some vector space $V$. To do gauge coupling, we pick a representation of $G$ on $V$. Then the twisting information allows us to construct a vector bundle over $M$ with fiber $V$. Then gauge-coupled fields now correspond to \emph{sections} of this bundle. Picking different local trivializations of the vector bundle corresponds to picking different gauges, and the transition functions are the gauge transformations!
+
+But really, we prefer to work with some geometric object, instead of some collection of transition functions. We want to find the most ``natural'' object for $G$ to act on. It turns out the most natural object with a $G$-action is not a vector space. It is just $G$ itself!
+
+\begin{defi}[Principal $G$-bundle]\index{principal $G$-bundle}
+ Let $G$ be a Lie group, and $M$ a manifold. A \emph{principal $G$-bundle} is a map $\pi: P \to M$ such that $\pi^{-1}(\{x\}) \cong G$ for each $x \in M$. Moreover, $\pi: P \to M$ is locally trivial, i.e.\ it locally looks like $U \times G$, and transition functions are given by left-multiplication by an element of $G$.
+
+ More precisely, we are given an open cover $\{U_\alpha\}$ of $M$ and diffeomorphisms
+ \[
+ \Phi_\alpha: U_\alpha \times G \to \pi^{-1}(U_\alpha)
+ \]
+ satisfying $\pi(\Phi_\alpha(x, g)) = x$, such that the transition function
+ \[
+ \Phi_\alpha^{-1} \circ \Phi_\beta: (U_\alpha \cap U_\beta) \times G \to (U_\alpha \cap U_\beta) \times G
+ \]
+ is of the form
+ \[
+ (x, g) \mapsto (x, t_{\alpha\beta}(x) \cdot g)
+ \]
+ for some $t_{\alpha\beta}: U_\alpha \cap U_\beta \to G$.
+\end{defi}
+
+\begin{thm}
+ Given a principal $G$-bundle $\pi: P \to M$ and a representation $\rho: G \to \GL(V)$, there is a canonical way of producing a $G$-bundle $E \to M$ with fiber $V$. This is called the \term{associated bundle}.
+
+ Conversely, given a $G$-bundle $E\to M$ with fiber $V$, there is a canonical way of producing a principal $G$-bundle out of it, and these procedures are mutual inverses.
+
+ Moreover, this gives a correspondence between local trivializations of $P \to M$ and local trivializations of $E \to M$.
+\end{thm}
+Note that since each fiber of $P \to M$ is a group, and trivializations are required to respect this group structure, a local trivialization is in fact equivalent to a local section of the bundle, where we set the section to be the identity.
+
+\begin{proof}
+ If the expression
+ \[
+ P \times_G V \to M
+ \]
+ makes any sense to you, then this proves the first part directly. Otherwise, just note that both a principal $G$-bundle and a $G$-bundle with fiber $V$ can be specified just by the transition functions, which do not make any reference to what the fibers look like.
+
+ The proof is actually slightly less trivial than this, because the same vector bundle can have many different choices of trivializing covers, which give us different transition functions. While these different transition functions patch to give the same vector bundle by assumption, it is not immediate that they must give the same principal $G$-bundle as well, or vice versa.
+
+ The way to fix this problem is to figure out explicitly when two collections of transition functions give the same vector bundle or principal bundle, and the answer is that this is true precisely if they are \term{cohomologous}. Thus, the precise statement needed to prove this is that both principal $G$-bundles and $G$-bundles with fiber $V$ biject naturally with the \term{first \v{C}ech cohomology group} of $M$ with coefficients in $G$.
+\end{proof}
+
+We now get to the physics part of the story. To specify a \emph{gauge theory} with gauge group $G$, we supplement our universe $M$ with a principal $G$-bundle $\pi: P \to M$. In QED, the gauge group is $\U(1)$, and in QCD, the gauge group is $\SU(3)$. In the standard model, for some unknown reason, the gauge group is
+\[
+ G = \SU(3) \times \SU(2) \times \U(1).
+\]
+Normally, a field with values in a vector space $V$ is given by a smooth map $\phi: M \to V$. To do gauge coupling, we pick a representation $\rho: G \to \GL(V)$, and then form the bundle associated to $P \to M$. A field is now a section of this associated bundle.
+
+\begin{eg}
+ In Maxwell theory, we have $G = \U(1)$. A complex scalar field is a function $\phi: M \to \C$. The vector space $\C$ has a very natural action of $\U(1)$ given by left-multiplication.
+
+ We pick our universe to be $M = \R^4$, and then the bundle is trivial. However, we can consider two different trivializations defined on the whole of $M$. Then we have a transition function $t: M \to \U(1)$, say $t(x) = e^{i \theta(x)}$. Then under this change of trivialization, the field would transform as
+ \[
+ \phi(x) \mapsto e^{i\theta(x)} \phi(x).
+ \]
+ This is just the usual gauge transformation we've seen all the time!
+\end{eg}
+
+\begin{eg}
+ Recall that a vector bundle $E \to M$ with fiber $\R^n$ is naturally a $\GL(n)$-bundle. In this case, there is a rather concrete description of the principal $\GL(n)$-bundle that gives rise to $E$.
+
+ At each point $x \in M$, we let $\mathrm{Fr}(E_x)$ be the set of all ordered bases of $E_x$. We can biject this set with $\GL(n)$ as follows --- we first fix a basis $\{\mathbf{e}_i\}$ of $E_x$. Then given any other basis $\{\mathbf{f}_i\}$, there is a unique matrix in $\GL(n)$ that sends $\{\mathbf{e}_i\}$ to $\{\mathbf{f}_i\}$. This gives us a map $\mathrm{Fr}(E_x) \to \GL(n)$, which is easily seen to be a bijection. This gives a topology on $\mathrm{Fr}(E_x)$.
+
+ The map constructed above obviously depends on the basis $\{\mathbf{e}_i\}$ chosen. Indeed, changing the $\mathbf{e}_i$ corresponds to composing this map with right-multiplication by some element of $\GL(n)$. However, we see that at least the induced smooth structure on $\mathrm{Fr}(E_x)$ is well-defined, since right-multiplication by an element of $\GL(n)$ is a diffeomorphism.
+
+ We can now consider the disjoint union of all such $\mathrm{Fr}(E_x)$. To make this into a principal $\GL(n)$-bundle, we need to construct local trivializations. Given any trivialization of $E$ on some neighbourhood $U$, we have a smooth choice of basis on each fiber, since we have bijected the fiber with $\R^n$, and $\R^n$ has a standard basis. Thus, performing the above procedure, we have a choice of bijection between $\mathrm{Fr}(E_x)$ and $\GL(n)$. If we pick a different trivialization, then the choice of bijection differs by some right-multiplication.
+
+ This is almost a principal $\GL(n)$-bundle, but it isn't quite so --- to obtain a principal $\GL(n)$-bundle, we want the transition functions to be given by \emph{left} multiplication. To solve this problem, when we identified $\mathrm{Fr}(E_x)$ with $\GL(n)$ back then, we should have sent $\{\mathbf{f}_i\}$ to the \emph{inverse} of the matrix that sends $\{\mathbf{e}_i\}$ to $\{\mathbf{f}_i\}$.
+
+ In fact, we should have expected this. Recall from linear algebra that under a change of basis, if the \emph{coordinates} of elements transform by $A$, then the basis vectors themselves transform by $A^{-1}$. So if we want our principal $\GL(n)$-bundle to have the same transition functions as $E$, we need this inverse. One can now readily check that this construction does indeed have the same transition functions as $E$. This bundle is known as the \emph{frame bundle}, and is denoted $\mathrm{Fr}(E)$.
+
+ Note that specifying trivializations already gives a smooth structure on $\pi: \mathrm{Fr}(E) \to M$. Indeed, on each local trivialization on $U$, we have a bijection between $\pi^{-1}(U)$ and $U \times \GL(n)$, and this gives a chart on $\pi^{-1}(U)$. The fact that transition functions are given by smooth maps $U \to \GL(n)$ ensures the different charts are compatible.
+
+ Recall that we previously said there is a bijection between a section of a principal $G$-bundle and a trivialization of the associated bundle. This is very clearly true in this case --- a section is literally a choice of basis on each fiber!
+\end{eg}
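The linear-algebra fact quoted above --- coordinates transform by $A$ while basis vectors transform by $A^{-1}$ --- can be checked directly; the matrices below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of E are the basis vectors e_i of a fibre; the coordinates of a
# vector v in this basis are c = E^{-1} v, i.e. v = E c.
E = rng.normal(size=(3, 3))
v = rng.normal(size=3)
c = np.linalg.solve(E, v)

# Change basis so that coordinates transform by A: c' = A c.  For the
# geometric vector v to stay the same, the basis matrix must transform
# with the inverse: E' = E A^{-1}.
A = rng.normal(size=(3, 3))
c_new = A @ c
E_new = E @ np.linalg.inv(A)

assert np.allclose(E_new @ c_new, v)   # same vector v in the new basis
```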
+
+\subsubsection*{Connection}
+Let's go back to the general picture of vector bundles, and forget about the structure group $G$ for the moment. Consider a general vector bundle $\pi: E \to M$, and a section $s: M \to E$. We would like to be able to talk about derivatives of this section. However, the ``obvious'' attempt looking like
+\[
+ \frac{s(x + \varepsilon) - s(x)}{|\varepsilon|}
+\]
+doesn't really make sense, because $s(x + \varepsilon)$ and $s(x)$ live in different vector spaces, namely $E_{x + \varepsilon}$ and $E_x$.
+
+We have encountered this problem in General Relativity already, where we realized the ``obvious'' derivative of, say, a vector field on the universe doesn't make sense. We figured that what we needed was a \emph{connection}, and it turns out the metric on $M$ gives us a canonical choice of the connection, namely the \term{Levi-Civita connection}.
+
+We can still formulate the notion of a connection for a general vector bundle, but this time, there isn't a canonical choice of connection. It is some additional data we have to supply.
+
+Before we meet the full, abstract definition, we first look at an example of a connection.
+\begin{eg}
+ Consider a trivial bundle $M \times V \to M$. Then the space of sections $\Omega^0_M(V)$ is canonically isomorphic to the space of maps $M \to V$. This we know how to differentiate. There is a canonical map $\d: \Omega^0_M(V) \to \Omega^1_M(V)$ sending $f \mapsto \d f$, where for any vector $X \in T_p M$, we have
+ \[
+ \d f(X) = \frac{\partial f}{\partial X} \in V.
+ \]
+ This is a one-form with values in $V$ (or $M \times V$) because it takes in a vector $X$ and returns an element of $V$.
+
+ In coordinates, we can write this as
+ \[
+ \d f = \frac{\partial f}{\partial x^\mu} \;\d x^\mu.
+ \]
+\end{eg}
+
+We can now define a connection:
+\begin{defi}[Connection]\index{connection}
+ A \emph{connection} \index{$\nabla$} is a linear map $\nabla: \Omega_M^0(E) \to \Omega_M^1(E)$ satisfying
+ \begin{enumerate}
+ \item Linearity:
+ \[
+ \nabla(\alpha_1 s_1 + \alpha_2 s_2) = \alpha_1 (\nabla s_1) + \alpha_2( \nabla s_2)
+ \]
+ for all $s_1, s_2 \in \Omega_M^0(E)$ and $\alpha_1, \alpha_2$ constants.
+ \item Leibniz property:
+ \[
+ \nabla (fs) = (\d f) s + f (\nabla s)
+ \]
+ for all $s \in \Omega^0_M(E)$ and $f \in C^\infty(M)$, where $\d f$ is the usual exterior derivative of a function, given in local coordinates by
+ \[
+ \d f = \frac{\partial f}{\partial x^\mu} \d x^\mu.
+ \]
+ \end{enumerate}
+ Given a vector field $V$ on $M$, the \term{covariant derivative} of a section in the direction of $V$ is the map
+ \[
+ \nabla_V: \Omega^0_M(E) \to \Omega^0_M(E)
+ \]
+ defined by
+ \[
+ \nabla_V s = V \lrcorner \nabla s = V^\mu \nabla_\mu s.
+ \]
+\end{defi}
+In physics settings, the connection is usually written as $\D_\mu$.
+
+Consider any two connections $\nabla, \nabla'$. Their difference is \emph{not} another connection. Instead, for any $f \in C^\infty(M)$ and $s \in \Omega^0_M(E)$, we have
+\[
+ (\nabla' - \nabla)(fs) = f (\nabla' - \nabla)(s).
+\]
+So in fact the difference is a map $\Omega^0_M(E) \to \Omega_M^1(E)$ that is linear over functions in $C^\infty(M)$. Equivalently, it is some element of $\Omega^1_M(\End(E))$, i.e.\ some matrix-valued $1$-form $A_\mu (x) \in \End(E_x)$.
+
+In particular, consider any open set $U \subseteq M$ equipped with a local trivialization. Then after picking the trivialization, we can treat the bundle on $U$ as a trivial one, and our previous example showed that we had a ``trivial'' connection given by $\d$. Then any other connection $\nabla$ can be expressed as
+\[
+ \nabla s = \d s + A s
+\]
+for some $A \in \Omega^1_U(\End(V))$, where the particular $A$ depends on our trivialization. This is called the \term{connection 1-form}, or the \term{gauge field}. In the case where $E$ is the tangent bundle, this is also known as the \term{Christoffel symbols}.
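To make the Leibniz property concrete for a connection of the form $\nabla s = \d s + As$, here is a minimal numerical sketch on a rank-2 trivial bundle over $\R$; the particular $A$, $s$ and $f$ are arbitrary illustrative choices, and derivatives are approximated by central finite differences:

```python
import numpy as np

def A(x):
    # arbitrary matrix-valued connection 1-form component A(x)
    return np.array([[0.0, x], [-x, 1.0]])

def s(x):
    # arbitrary section of the rank-2 bundle
    return np.array([np.sin(x), np.cos(2 * x)])

def f(x):
    # arbitrary smooth function on the base
    return np.exp(-x) + x**2

def d(u, x, eps=1e-6):
    # central finite-difference approximation to the derivative
    return (u(x + eps) - u(x - eps)) / (2 * eps)

def nabla(u, x):
    # covariant derivative: ∇u = du + A u
    return d(u, x) + A(x) @ u(x)

# Check the Leibniz rule ∇(f s) = (df) s + f ∇s at a sample point.
x0 = 0.3
lhs = nabla(lambda x: f(x) * s(x), x0)
rhs = d(f, x0) * s(x0) + f(x0) * nabla(s, x0)
assert np.allclose(lhs, rhs, atol=1e-6)
```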
+
+This was all on a general vector bundle. But the case we are really interested in is that of a $G$-bundle. Of course, we can still say the same words as above, as any $G$-bundle is also a vector bundle. But can we say a bit more about what the connection looks like? We recall that specifying a $G$-bundle with fiber $V$ is equivalent to specifying a principal $G$-bundle. What we would like is some notion of ``connection on a principal $G$-bundle''.
+
+\begin{thm}
+ There exists a notion of a connection on a principal $G$-bundle. Locally on a trivializing neighbourhood $U$, the connection $1$-form is an element $A_\mu(x) \in \Omega^1_U(\mathfrak{g})$, where $\mathfrak{g}$ is the Lie algebra of $G$.
+
+ Every connection on a principal $G$-bundle induces a connection on any associated vector bundle. On local trivializations, the connection on the associated vector bundle has the ``same'' connection $1$-form $A_\mu(x)$, where $A_\mu(x)$ is regarded as an element of $\End(V)$ by the action of $G$ on the vector space.
+\end{thm}
+
+Readers interested in the precise construction of a connection on a principal $G$-bundle should consult a textbook on differential geometry. Our previous work doesn't apply because $G$ is not a vector space.
+
+It is useful to know how the connection transforms under a change of local trivialization. For simplicity of notation, suppose we are working on two trivializations on the same open set $U$, with the transition map given by $g: U \to G$. We write $A$ and $A'$ for the connection $1$-forms on the two trivializations. Then for a section $s$ expressed in the first coordinates, we have
+\[
+ g \cdot (\d s + As) = (\d + A')(g\cdot s).
+\]
Expanding $(\d + A')(g\cdot s) = (\d g) s + g\, \d s + A' g s$ and comparing the two sides, we find that
\[
 A' = g A g^{-1} - (\d g)\, g^{-1}.
\]
+This expression makes sense if $G$ is a matrix Lie group, and so we can canonically identify both $G$ and $\mathfrak{g}$ as subsets of $\GL(n, \R)$ for some $n$. Then we know what it means to multiply them together. For a more general Lie group, we have to replace the first term by the adjoint representation of $G$ on $\mathfrak{g}$, and the second by the \term{Maurer--Cartan form}.
+
+\begin{eg}
 In the $\U(1)$ case, our transition functions $g_{\alpha\beta}$ are just multiplication by complex numbers. So if $g = e^{-i\lambda}$, then $(\d g) g^{-1} = -i\, \d \lambda$, and we have
 \[
 A_{\beta} = -(\d g) g^{-1} + g A_\alpha g^{-1} = i\, \d \lambda + A_\alpha = i(\d \lambda - iA_\alpha).
 \]
 Note that since $\U(1)$ is abelian, the conjugation by $g$ has no effect on $A_\alpha$. This is one of the reasons why abelian gauge theory is so much simpler than the non-abelian case.
+\end{eg}
+
+\subsubsection*{Minimal coupling}
+So how do we do gauge coupling? Suppose we had an ``ordinary'' scalar field $\psi : M \to \C$ on our manifold, and we have the usual action
+\[
 S[\psi] = \int \frac{1}{2} |\partial \psi|^2 + \frac{1}{2} m^2 |\psi|^2 + \cdots.
+\]
+We previously said that to do gauge theory, we pick a representation of our gauge group $G = \U(1)$ on $\C$, which we can take to be the ``obvious'' action by multiplication. Then given a principal $\U(1)$-bundle $P \to M$, we can form the associated vector bundle, and now our field is a section of this bundle.
+
+But how about the action? As mentioned, the $\partial \psi$ term no longer makes sense. But in the presence of a connection, we can just replace $\partial$ with $\nabla$! Now, the action is given by
+\[
 S[\psi] = \int \frac{1}{2} |\nabla \psi|^2 + \frac{1}{2} m^2 |\psi|^2 + \cdots.
+\]
+This is known as \term{minimal coupling}.
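As a quick numerical sanity check (not part of the original notes), we can see gauge covariance of the minimally coupled derivative on a one-dimensional grid. We use the convention $\psi \mapsto g\psi$, $A \mapsto gAg^{-1} - (\d g)g^{-1}$, so for $g = e^{-i\lambda}$ the $\U(1)$ gauge field shifts by $A \mapsto A + i\,\d\lambda$; all field profiles below are arbitrary choices.

```python
import numpy as np

# Periodic grid on [0, 2*pi); all field profiles below are arbitrary choices.
n = 8000
x = np.linspace(0, 2*np.pi, n, endpoint=False)
dx = 2*np.pi / n

def ddx(f):
    # central difference on a periodic grid
    return (np.roll(f, -1) - np.roll(f, 1)) / (2*dx)

psi = np.exp(1j*np.sin(x)) * (2 + np.cos(3*x))  # charged scalar field
A   = 1j * np.cos(2*x)                          # u(1)-valued gauge field
lam = 0.7 * np.sin(x)                           # gauge parameter

D_psi = ddx(psi) + A * psi                      # covariant derivative: grad psi + A psi

# gauge transform: psi -> g psi, A -> A - (dg) g^{-1} = A + i d(lam) for g = e^{-i lam}
g = np.exp(-1j*lam)
D_psi_new = ddx(g*psi) + (A + 1j*ddx(lam)) * (g*psi)

# The covariant derivative transforms covariantly, so |D psi|^2 is gauge invariant.
assert np.max(np.abs(D_psi_new - g*D_psi)) < 1e-3
assert np.max(np.abs(np.abs(D_psi_new)**2 - np.abs(D_psi)**2)) < 1e-3
```

The tolerance absorbs the finite-difference discretization error; with an ordinary derivative in place of $\nabla$, the check fails.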
+
+At this point, one should note that the machinery of principal $G$-bundles was necessary. We only ever have \emph{one} principal $G$-bundle $P \to M$, and a single connection on it. If we have multiple fields, then we use the \emph{same} connection on all of them, via the mechanism of associated bundles. Physically, this is important --- this means different charged particles couple to the same electromagnetic field! This wouldn't be possible if we only worked with vector bundles; we wouldn't be able to compare the connections on different vector bundles.
+
+%To specify how these $V_y$ glue together to form an $E$, it suffices to describe the local trivializations. We have to specify a bunch of trivializing neighbourhoods $U_\alpha$, such that $\pi^{-1}(U_\alpha)$ is isomorphic to $U_\alpha \times V$. The requirement dictates that
+%\[
+% M = \bigcup_\alpha U_\alpha,
+%\]
+%so that each point lies in some of these trivializing neighbourhoods.
+%
+%They trivialization maps are functions $\Phi_\alpha: U_\alpha \times V \to \pi^{-1}(U_\alpha)$. For each $y \in U_\alpha$, this is a map $\Phi_\alpha(y,\ph): \{y\} \times V \to V_y \cong V$. But this is just a linear isomorphism $V \to V$, or equivalently an element of $\GL(V)$. So we can think of $\Phi_\alpha$ as a map $U_\alpha \to \GL(V)$. We need to provide such maps for each $U_\alpha$, and, subject to certain compatibility condition we will not go into detail, this gives us a vector bundle.
+%
+%This is not entirely satisfactory. Usually, our vector space $V$ is not just a vector space. It might have an inner product, for example. Then we want the local trivializations not just to be linear isomorphisms, but orthogonal or unitary maps. In other words, we want the $\Phi_\alpha$ to induce maps $U_\alpha \to \Or(V)$ or $\U(V)$, instead of just $\GL(V)$. More generally, we can make the following definition:
+%\begin{defi}[Vector bundle with structure group]\index{vector bundle!structure group}\index{structure group!vector bundle}
+% A vector bundle with \emph{structure group} $G \leq \GL(V)$ (or a \term{$G$-bundle}) is a vector bundle $E \to M$ with typical fiber $V$, equipped with a collection of local trivializations $\{U_\alpha\}$ such that the corresponding maps $U_\alpha \to \GL(V)$ actually have image lying in $G$.
+%
+% Even more generally, given a Lie group $G$ and a representation $\rho$ on $V$, a $G$-bundle consists of an open cover $\{U_\alpha\}$ of $M$, and maps $\Phi_\alpha: U_\alpha \to G$, such that the maps $\rho \circ \Phi_\alpha: U_\alpha \to \GL(V)$ specify a vector bundle over $M$ with fiber $V$.
+%\end{defi}
+%The second definition is what we really want. Note that we worded the definition very carefully. The $G$-bundle consists of maps $U_\alpha \to G$, and only composing with the representation $\rho$ gives us the data of the local trivialization. This is crucial. For example, $G$ might have the \emph{trivial} representation on $V$. Then as a vector bundle, the trivialization maps are required to be trivial, and so the resulting vector bundle must be trivial. However, as a $G$-bundle, the maps $U_\alpha \to G$ can be very complicated!
+
+%\subsection{Principal bundles}
+%We are first going to talk about the \emph{geometry} lying being the classical theory of non-abelian gauge theory. Only after that, we will do it quantumly.
+%
+%Non-abelian gauge theories are based on saying the world is a \term{principal $G$-bundle} $\pi: P \to M$. For a fixed Lie group $G$, this is a manifold $P$ together with a projection map $\pi: P \to M$ to another manifold $M$, where for each $x \in M$, $\pi^{-1}(\{x\}) \cong G$.
+%
+%We should think of this as assigning a Lie group $G$ to each point of $M$, and the collection of all these Lie groups is $P$. The Lie group $G$ is often called the \term{structure group} (usually in mathematics) or the \term{gauge group} (usually in physics). The name ``gauge group'' is slightly ambiguous, because sometimes ``gauge group'' refers to something else.
+%
+%For example, Maxwell theory has $G = \U(1)$, while the standard model, for some unknown reason, has
+%\[
+% G = \SU(3) \times \SU(2) \times \U(1).
+%\]
+%There is a right action of $G$ on $P$ that preserves the fibers, i.e.\ given $g \in G$, and we fix an isomorphism $\pi^{-1}(\{x\}) \cong G$, then this is simply $p \mapsto pg$. % insert local triviality conditions
+%
+%The simplest example is to take $M = S^1$ and $G = \R$. Then there are two possible $\R$-bundles. One is just the cylinder $S^1 \times \R$, and the other is the M\"obius strip. % insert pictures.
+%
+%For any point $x \in M$, we can find an open set $U \subseteq M$ containing $x$ such that we have a \emph{local trivialization} $\Phi$
+%\[
+% \Phi: P|_{\pi^{-1}(U)} \cong U \times G.
+%\]
+%Suppose, $\{U_\alpha\}$ are a collection of open sets, and that we are given trivializations $\Phi_\alpha$ on each $U_\alpha$. We can write this as
+%\[
+% \Phi_\alpha: p \mapsto (x, \phi_\alpha(p)) \in U_\alpha \times G,
+%\]
+%Similarly, we have
+%\[
+% \Phi_\beta: p' \mapsto (x, \phi_\beta(p')) \in U_\beta \times G.
+%\]
+%Now suppose $U_\alpha \cap U_\beta \not= \emptyset$. Then on this intersection, we can compare our two trivializations. Since $\phi_\alpha(p)$ and $\phi_\beta(p) \in G$, we must have
+%\[
+% \phi_\beta(p) = \phi_\alpha(p) t_{\alpha\beta}
+%\]
+%for some $t_{\alpha\beta} \in G$. We require this $t_{\alpha\beta}$ depends only on $\pi(p)$, and not $p$ itself.
+%
+%If we want to compare these trivializations throughout $U_\alpha \cap U_\beta$, then our $t_{\alpha\beta}$ may need to vary with $x \in U_\alpha$. We define a \emph{transition function} $T_{\alpha\beta}: U_\alpha \cap U_\beta \to G$ by $x \mapsto t_{\alpha\beta}(x)$ obeying the following properties:
+%\[
+% t_{\alpha\beta}(x)^{-1} = t_{\beta\alpha}(x)
+%\]
+%for all $x \in U_\alpha \cap U_\beta$. We also have
+%\[
+% t_{\alpha\beta}(x) t_{\beta\gamma}(x) = t_{\alpha\gamma}(x)
+%\]
+%for all $x \in U_\alpha \cap U_\beta \cap U_\gamma$. We also have $t_{\alpha\alpha}(x) = e$ for all $x$.
+%
+%In physics, the most common application is when $M \cong \R^n$, and all the $U_\alpha$ are the whole space $M$. Still, we can have different trivializations in different regions. So we are just comparing different trivializations of $\pi^{-1}(\R^n) \cong \R^n \times G$. For example, in electromagnetism, we have
+%\[
+% M \cong \R^{3, 1},\quad G = \U(1).
+%\]
+%Then our local trivializations are a choice of \emph{gauge}, and these transition functions are gauge transformations. They allow us to compare the values at different gauges.
+%
+%On the other hand, in GR, we take $G = \GL(n, \R)$ if $\dim M = n$. Then the transition functions just allow us to change coordinates.
+%
+%In Symmetries, Fields, and Particles, we met the idea that given a Lie group, we want to study its representations. Recall that a \emph{representation} is a continuous group homomorphism $\rho: G \to \GL_r(\C)$, where
+%\[
+% \rho(gh) = \rho(g) \rho(h),\quad \rho(e) = I.
+%\]
+%In a principal bundle, we don't just have one Lie group. We have a vector space at each point in spacetime. So given a representation, we get a vector space at every point in spacetime. This gives us a \term{vector bundle} of rank $r$.
+%
+%This is a map $\pi: E \to M$ such that we have local trivializations
+%\[
+% \Phi_\alpha: E|_{\pi^{-1}(U_\alpha)} \to U \times \C^r.
+%\]
+%The transition functions $T_{\alpha\beta}: U_\alpha \cap U_\beta \to M_r(\C)$. If the original $P$ has structure group $G \subseteq \GL_r(\C)$, then the transition functions preserve some extra structure. For example, if $G = \U(r)$, then the transition functions on our vector bundle preserve the obvious inner product
+%\[
+% \bra z_1, z_2\ket = \sum_{i = 1}^r \bar{z}_1^i z_2^i.
+%\]
+%If $G = \SU(r)$, then transition functions also have unit determinant.
+%
+%Charged matter is a section of a vector bundle $\pi: E \to M$.
+%\begin{defi}[Section]\index{section}
+% A \emph{section} of a bundle $\pi: E \to M$ is a (smooth) map $s: M \to E$ that obeys $\pi \circ s = \id$.
+%\end{defi}
+%This corresponds to assigning to each point in $M$ a value in its fiber. % maybe insert a picture.
+%
+%\begin{eg}
+% In electromagnetism, a scalar field $\phi$ of charge $q$ isn't really a function on $M$, but rather a section of a vector bundle of rank $1$ over $M$. This is because under a gauge transform, i.e.\ change of local trivialization, we have
+% \[
+% \phi(x) \mapsto e^{i\theta \lambda(x)} \phi(x),
+% \]
+% which is the transformation behaviour of a section.
+%\end{eg}
+%
+%\begin{eg}
+% In QCD, we have a gauge group $G = \SU(3)$, and quarks and anti-quarks lie in the fundamental and anti-fundamental representations respectively. A quark field $q(x) \in E|_x \cong \C^3$ transforms under gauge transformations by
+% \[
+% q(x) \mapsto \rho(t(x)) q(x),
+% \]
+% where $\rho$ is the (anti-)fundamental representation and $t(x)$ is the gauge transformation.
+%\end{eg}
+%
+%\subsection{Connections and covariant derivatives}
+%On a vector bundle, we need a new notion of how to differentiate a section $s: M \to E$ of a vector bundle, because for any finite $\varepsilon$, the expression
+%\[
+% \frac{s(x + \varepsilon) - s(x)}{|\varepsilon|}
+%\]
+%doesn't really make sense, because $s(x + \varepsilon)$ and $s(x)$ live in different vector spaces, namely $E|_{x + \varepsilon}$ and $E|_x$.
+%
+%Let $\Omega^0_M(E)$ be the space of smooth sections of $E$, and let $\Omega^1_M(E)$ be the space of all smooth $1$-forms with values in $E$, i.e.\ sections of $E \otimes T^* M$. Elements in $\Omega^0_M(E)$ look like $s(x)$, while elements in $\Omega^1_M(E)$ looks like $\omega_\mu(x)$.
+%
+%\begin{defi}[Connection]\index{connection}
+% A \emph{connection} \index{$\nabla$} is a linear map $\nabla: \Omega_M^0(E) \to \Omega_M^1(E)$ satisfying
+% \begin{enumerate}
+% \item Linearity:
+% \[
+% \nabla(\alpha_1 s_1 + \alpha_2 s_2) = \alpha_1 (\nabla s_1) + \alpha_2( \nabla s_2)
+% \]
+% for all $s_1, s_2 \in \Omega_M^0(E)$ and $\alpha_1, \alpha_2$ constants.
+% \item Leibnitz property:
+% \[
+% \nabla (fs) = (\d f) s + f (\nabla S)
+% \]
+% for all $s \in \Omega^0_M(E)$ and $f \in C^\infty(M)$, where, $\d f$ is the usual exterior derivative of a function, given in local coordinates by
+% \[
+% \d f = \frac{\partial f}{\partial x^\mu} \d x^\mu.
+% \]
+% \end{enumerate}
+%\end{defi}
+%
+%Given a vector field $V$ on $M$, the \emph{covariant derivative} of a section in the direction of $V$ is the map
+%\[
+% \nabla_V: \Omega^0_M(E) \to \Omega^0_M(E)
+%\]
+%defined by
+%\[
+% \nabla_V s = v \lrcorner \nabla s = v^\mu \nabla_\mu s.
+%\]
+%There is a lot of freedom in choosing a covariant derivative, but the standard result is that given any two connections $\nabla, \nabla'$, the difference satisfies
+%\[
+% (\nabla' - \nabla)(fs) = f (\nabla' - \nabla)(s).
+%\]
+%So in fact is a map $\Omega^0_M(E) \to \Omega_M^1(E)$ that is linear over functions in $C^\infty(M)$. Hence it must be some element of $\Omega^1_M(\End(E))$, i.e.\ some matrix-valued $1$-form $(A_\mu (x))^a\!_b$.
+%
+%In particular, in an open set $U \subseteq M$ with a trivialization $\Phi$, we have a map
+%\begin{align*}
+% \Phi \circ s: U &\to U \times \C^r\\
+% x &\mapsto (x, s_\Phi(x))
+%\end{align*}
+%Then we can think $s_\phi(x)$ as just a collection of $r$ functions, and we could define a ``trivial'' connection just as $\Delta = \d$.
+%
+%Then any other connection $\nabla$ can be expressed as
+%\[
+% \nabla s = \d s + A s
+%\]
+%for some $A \in \Omega^1_M(\End(E))$, where the particular $A$ depends on our trivialization. This is called the \term{connection 1-form}, or the \term{gauge field}.
+%
+%Suppose $U_\alpha \cap U_\beta \not= \emptyset$, with trivializations $\Phi_\alpha, \Phi_\beta$, and let $g_{\alpha\beta}: U_\alpha \cap U_\beta \to \End(\C^r)$ denote the transition functions.
+%
+%Given a section $s$ of $E$, we have
+%\[
+% s_\alpha = g_{\alpha\beta} s_\beta.
+%\]
+%Also, by definition, we have
+%\[
+% (\nabla s)_\alpha = g_{\alpha\beta} (\nabla s)_\beta.
+%\]
+%From these definitions, it follows that
+%\[
+% A_\alpha = - g_{\alpha\beta} \d (g_{\alpha\beta})^{-1} + g_{\alpha \beta} A g_{\alpha\beta}^{-1}.
+%\]
+%This is the transformation law of the gauge field.
+%
+%In the rank 1 case, our transition functions $g_{\alpha\beta}$ are just multiplication by complex numbers. So
+%\[
+% A_{\beta} = g \d g^{-1} = g A_\alpha g^{-1} = g \d g^{-1} + A = i(\d \lambda - iA)
+%\]
+%if $g = e^{-i \lambda}$.
+
+\subsubsection*{Curvature}
+Given a connection $\nabla$, we can extend it to a map $\Omega^p_M(E) \to \Omega^{p + 1}_M(E)$ by requiring it to satisfy the conditions
+\begin{align*}
+ \nabla (\alpha_1 s_1 + \alpha_2 s_2) &= \alpha_1( \nabla s_1) + \alpha_2 (\nabla s_2),\\
 \nabla (\omega \wedge s) &= (\d \omega) \wedge s + (-1)^{\deg \omega} \omega \wedge \nabla s,
+\end{align*}
+whenever $\omega \in \Omega^q(M)$ and $s \in \Omega_M^{p - q}(E)$.
+
We can think of $\nabla$ as a ``covariant generalization'' of the de Rham operator $\d$. However, while the ordinary $\d$ is nilpotent, i.e.\ $\d^2 = 0$, this is not necessarily the case for $\nabla$.
+
+What kind of object is $\nabla^2$, then? We can compute
+\begin{align*}
+ \nabla^2 (\omega \wedge s) &= \nabla(\d \omega \wedge s + (-1)^q \omega \wedge \nabla s)\\
+ &= \d^2 \omega \wedge s + (-1)^{q + 1}\d \omega \wedge \nabla s + (-1)^q \d \omega \wedge \nabla s + (-1)^{2q} \omega \wedge \nabla^2 s\\
+ &= \omega \wedge \nabla^2 s.
+\end{align*}
+So we find that $\nabla^2$ is in fact a map $\Omega^q_M(E) \to \Omega^{q + 2}_M(E)$ that is linear over any forms! Specializing to the case of $q = 0$ only, we can write $\nabla^2$ as
+\[
+ \nabla^2(s) = F_\nabla s,
+\]
+for some $F_\nabla \in \Omega^2_M(\End(E))$. It is an easy exercise to check that the same formula works for all $q$. In local coordinates, we can write
+\[
+ F_\nabla = \frac{1}{2} (F_{\mu\nu}(x))^a\!_b\; \d x^\mu \wedge \d x^\nu.
+\]
+Since we have
+\[
+ \nabla s = \d s + A s,
+\]
+we find that
+\[
+ \nabla^2 s = \nabla(\d s + A s) = \d^2 s + \d (As) + A(\d s + A s) = (\d A + A \wedge A) s.
+\]
+Note that by $A\wedge A$, we mean, locally in coordinates, $A^a\!_b \wedge A^b\!_c$, which is still a form with values in $\End(V)$.
+
+Thus, locally, we have
+\begin{align*}
+ F &= \d A + A \wedge A \\
+ &= (\partial_\mu A_\nu + A_\mu A_\nu)\; \d x^\mu \wedge \d x^\nu\\
+ &= \frac{1}{2} (\partial_\mu A_\nu - \partial_\nu A_\mu + A_\mu A_\nu - A_\nu A_\mu)\;\d x^\mu \wedge \d x^\nu\\
+ &= \frac{1}{2} (\partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu])\; \d x^\mu \wedge \d x^\nu
+\end{align*}
Of course, in the case of $\U(1)$ theory, the bracket vanishes, and this is just the usual field strength tensor. Unsurprisingly, this is what will go into the Lagrangian for a general gauge theory.
+
Crucially, in the non-abelian theory, the bracket term is non-zero. This is important. Our $F$ is no longer linear in $A$. When we do Yang--Mills later, the action will contain an $F^2$ term, and then expanding this out will give $A^3$ and $A^4$ terms. This causes interaction of the gauge field with itself, even without the presence of matter!
+
+We end by noting a seemingly-innocent identity. Note that we can compute
+\[
+ \nabla^3 s = \nabla(\nabla^2 s) = \nabla(F s) = (\nabla F)\wedge s + F\wedge (\nabla s).
+\]
+On the other hand, we also have
+\[
+ \nabla^3 s = \nabla^2 (\nabla s) = F \wedge \nabla s.
+\]
+These two ways of thinking about $\nabla^3$ must be consistent. This implies we have
+\[
+ \nabla (F_\nabla) \equiv 0.
+\]
+This is known as the \term{Bianchi identity}.
+
+%We can compute this explicitly on local coordinates. On $U$, we have
+%\begin{align*}
+% \nabla F|_U &= (\d + A)(\d A + A\wedge A)\\
+% &= \d(\d A + A \wedge A) + A \wedge (\d A + A \wedge A) - (\d A + A \wedge A) \wedge A\\
+% &= \d^2 A + \d A \wedge A - A \wedge \d A + A \wedge \d A + A^3 - \d A \wedge A - A^3\\
+% &= 0
+%\end{align*}
+\subsection{Yang--Mills theory}
+At the classical level, Yang--Mills is an example of non-abelian gauge theory defined by the action % Chern-Simon theory?
+\[
+ S[\nabla] = \frac{1}{2g^2_{YM}} \int_M (F_{\mu\nu}, F^{\mu\nu}) \;\sqrt{g} \d^d x,
+\]
where $(\ph, \ph)$ denotes the Killing form on the Lie algebra $\mathfrak{g}$ of the gauge group, and $g^2_{YM}$ is the coupling constant. For flat space, we have $\sqrt{g} = 1$, and we will drop that factor.
+
+For example, if $G = \SU(n)$, we usually choose a basis such that
+\[
+ (t^a, t^b) = \frac{1}{2} \delta^{ab},
+\]
+and on a local $U \subseteq M$, we have
+\[
+ S[\nabla] = \frac{1}{4 g^2_{YM}} \int F_{\mu\nu}^a F^{b, \mu\nu} \delta_{ab} \; \d^d x,
+\]
+with
+\[
+ F_{\mu\nu}^a = \partial_\mu A^a_\nu - \partial_\nu A_\mu^a + f^a_{bc} A_\mu^b A_\nu^c.
+\]
Thus, Yang--Mills theory is the natural generalization of Maxwell theory to the non-abelian case.
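As a concrete sanity check (not in the original notes), we can verify for $G = \SU(2)$, where $f^a_{bc} = \varepsilon_{abc}$, that the matrix commutator $[A_\mu, A_\nu]$ appearing in $F$ indeed has components $f^a_{bc} A^b_\mu A^c_\nu$, using the generators $t^a = -i\sigma^a/2$:

```python
import numpy as np

# su(2) generators t^a = -i sigma^a / 2, which satisfy [t^a, t^b] = eps_{abc} t^c
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
t = [-0.5j * s for s in sigma]

# structure constants f^a_{bc} = eps_{abc} (totally antisymmetric)
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1, -1

rng = np.random.default_rng(0)
A_mu, A_nu = rng.standard_normal(3), rng.standard_normal(3)  # components A^b_mu, A^c_nu at a point

# matrix commutator [A_mu, A_nu] with A_mu = A^b_mu t^b ...
M = sum(A_mu[b]*t[b] for b in range(3))
N = sum(A_nu[c]*t[c] for c in range(3))
comm = M @ N - N @ M

# ... equals f^a_{bc} A^b_mu A^c_nu contracted back with t^a
comp = sum(eps[a, b, c] * A_mu[b] * A_nu[c] * t[a]
           for a in range(3) for b in range(3) for c in range(3))
assert np.allclose(comm, comp)
```

This is exactly the translation between the matrix form $F = \d A + A \wedge A$ and the component form with structure constants above.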
+
+Note that the action is treated as a function of the connection, and not the curvature, just like in Maxwell's theory. Schematically, we have
+\[
+ F^2 \sim (\d A + A^2)^2 \sim (\d A)^2 + A^2 \;\d A + A^4.
+\]
+So as mentioned, there are non-trivial interactions even in the absence of charged matter. This self-interaction is proportional to the structure constants $f^a_{bc}$. So in the abelian case, these do not appear.
+
+At the level of the classical field equations, if we vary our connection by $\nabla \mapsto \nabla + \delta a$, where $\delta a$ is a matrix-valued $1$-form, then
+\[
 \delta F_\nabla = F_{\nabla + \delta a} - F_{\nabla} = \nabla_{[\mu} \delta a_{\nu]}\;\d x^\mu \wedge \d x^\nu.
+\]
+In other words,
+\[
 \delta F_{\mu\nu} = \partial_{[\mu} \delta a_{\nu]} + [A_{[\mu}, \delta a_{\nu]}].
+\]
Extremizing with respect to these variations, we require
\[
 0 = \delta S[\nabla] = \frac{1}{g_{YM}^2} \int (\delta F_{\mu\nu}, F^{\mu\nu}) \;\d^d x = \frac{1}{g_{YM}^2} \int (\nabla_\mu \delta a_\nu, F^{\mu\nu})\;\d^d x
\]
for all variations $\delta a$. Integrating by parts moves $\nabla_\mu$ onto $F^{\mu\nu}$.
+So we get the \term{Yang--Mills equation}
+\[
+ \nabla^\mu F_{\mu\nu} = \partial^\mu F_{\mu\nu} + [A^\mu, F_{\mu\nu}] = 0.
+\]
+This is just like Gauss' equation. Recall we also had the Bianchi identity
+\[
+ \nabla \wedge F = 0,
+\]
+which gives
+\[
+ \nabla_\mu F_{\nu\lambda} + \nabla_\nu F_{\lambda\mu} + \nabla_\lambda F_{\mu\nu} = 0,
+\]
+similar to Maxwell's equations.
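The component form of the Bianchi identity can also be checked directly by computer algebra. The following sketch (not part of the notes) builds $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu]$ for an arbitrary polynomial $\mathfrak{su}(2)$-valued gauge field, and verifies that the cyclic sum vanishes identically:

```python
import sympy as sp

x = sp.symbols('x0:3')  # three coordinates suffice for the cyclic identity

# su(2) generators t^a = -i sigma^a / 2
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
t = [-sp.I*s/2 for s in (s1, s2, s3)]

# an arbitrary polynomial gauge field A_mu = A^a_mu(x) t^a
polys = [[x[0]*x[1], x[2]**2, x[0]],
         [x[1]**2, x[0]*x[2], x[1]],
         [x[2], x[0]**2, x[1]*x[2]]]
A = [sum((polys[mu][a]*t[a] for a in range(3)), sp.zeros(2, 2)) for mu in range(3)]

d = lambda M, mu: M.applyfunc(lambda e: sp.diff(e, x[mu]))   # partial d_mu, entrywise
comm = lambda P, Q: P*Q - Q*P

# field strength F_{mu nu} = d_mu A_nu - d_nu A_mu + [A_mu, A_nu]
F = [[d(A[n], m) - d(A[m], n) + comm(A[m], A[n]) for n in range(3)] for m in range(3)]

covd = lambda m, M: d(M, m) + comm(A[m], M)   # nabla_mu on End(V)-valued objects

# Bianchi: nabla_mu F_{nu lam} + nabla_nu F_{lam mu} + nabla_lam F_{mu nu} = 0
bianchi = covd(0, F[1][2]) + covd(1, F[2][0]) + covd(2, F[0][1])
assert bianchi.applyfunc(sp.expand) == sp.zeros(2, 2)
```

The derivative cross-terms cancel pairwise and the cubic terms vanish by the Jacobi identity, just as in the abstract proof via $\nabla^3$.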
+
But \emph{unlike} Maxwell's equations, these are non-linear PDEs for $A$. We no longer have the principle of superposition. This is much more similar to general relativity. In general relativity, we had some non-linear PDEs we had to solve for the metric or the connection.
+
+We all know some solutions to Einstein's field equations, say black holes and the Schwarzschild metric. We also know many solutions to Maxwell's equations. But most people can't write down a non-trivial solution to Yang--Mills equations.
+
This is not because Yang--Mills is harder to solve. If you ask anyone who does numerics, Yang--Mills is much more pleasant to work with. The real reason is that electromagnetism and general relativity have been around for quite a long time, and we have had a lot of time to understand the solutions. Moreover, these solutions have very direct relations to things we can observe at everyday energy scales. However, this is not true for Yang--Mills. It doesn't really describe everyday phenomena, and thus fewer people care.
+
Note that the action contains no mass terms for $A$, i.e.\ there is no $A^2$ term. So $A$ should describe a massless particle, and this gives rise to a long-range force, just like the Coulomb or gravitational forces. When Yang and Mills first introduced this theory, Pauli objected to it, because we would be introducing some new long-range force, but we don't see any!
+
To sort-of explain this, note that the coupling constant $g^2_{YM}$ plays no role in the classical (pure) theory. Of course, it will play a role if we couple it to matter. However, in the quantum theory, $g^2_{YM}$ appears together with $\hbar$ as $g^2_{YM} \hbar$. So the classical theory is a reasonable approximation to the physics only if $g^2_{YM} \sim 0$. Skipping ahead of the story, we will see that $g^2_{YM}$ is marginally relevant. So at low energies, the classical theory is not a good approximation for the actual quantum theory.
+
+So let's look at quantum Yang--Mills theory.
+\subsection{Quantum Yang--Mills theory}
+Our first thought to construct a path integral for Yang--Mills may be to compute the partition function
+\[
+ \mathcal{Z}_{\mathrm{naive}} \overset{?}{=} \int_\mathcal{A} \D A\; e^{-S_{YM}[A]},
+\]
+where $\mathcal{A}$ is the ``space of all connections'' on some fixed principal $G$-bundle $P \to M$. If we were more sophisticated, we might try to sum over all possible principal $G$-bundles.
+
+But this is not really correct. We claim that no matter how hard we try to make sense of path integrals, this integral must diverge.
+
We first think about what the space $\mathcal{A}$ looks like. For any two connections $\nabla, \nabla'$, it is straightforward to check that
+\[
+ \nabla^t = t \nabla + (1 - t) \nabla'
+\]
is a connection on $P$. Consequently, we can find a path between any two connections on $P$. Furthermore, we saw before that the difference $\nabla' - \nabla \in \Omega_M^1 (\mathfrak{g})$. This says that $\mathcal{A}$ is an (infinite-dimensional) affine space modelled on $\Omega^1_M(\mathfrak{g}) \cong T_\nabla \mathcal{A}$. This is like a vector space, but there is no preferred origin $0$. There is even a flat metric on $\mathcal{A}$, given by
+\[
+ \d s^2 _\mathcal{A} = \int_M (\delta A_\mu, \delta A^\mu)\;\d^d x,
+\]
+i.e.\ given any two tangent vectors $a_1, a_2 \in \Omega_M^1(\mathfrak{g}) \cong T_\nabla \mathcal{A}$, we have an inner product
+\[
+ \bra a_1, a_2 \ket_\nabla = \int_M (a_{1 \mu}, a^{\mu}_2) \;\d^d x.
+\]
+Importantly, this is independent of the choice of $\nabla$.
+
All this is to say that $\mathcal{A}$ is nice and simple. Heuristically, we imagine the path integral measure is the natural ``$L^2$'' measure on $\mathcal{A}$ as an affine space. Of course, this measure doesn't exist, because $\mathcal{A}$ is infinite-dimensional. But the idea is that this is just the same as working with a scalar field.
+
Despite this niceness, $S_{YM}[\nabla]$ is degenerate along gauge orbits. The action is invariant under gauge transformations (i.e.\ automorphisms of the principal $G$-bundle), and so we are counting each connection \emph{infinitely} many times. In fact, the group $\mathcal{G}$ of all gauge transformations is an infinite-dimensional space that is (locally) $\Maps(M, G)$, and even for compact $M$ and $G$, the volume of this space diverges.
+
+Instead, we should take the integral over all connections modulo gauge transformations:
+\[
 \mathcal{Z}_{YM} = \int_{\mathcal{A}/\mathcal{G}} \d \mu\; e^{-S_{YM}[\nabla]/\hbar},
+\]
+where $\mathcal{A}/\mathcal{G}$ is the space of all connections modulo gauge transformation, and $\d \mu$ is some sort of measure. Note that this means there is \emph{no such thing} as ``gauge symmetry'' in nature. We have quotiented out by the gauge transformations in the path integral. Rather, gauge transformations are a redundancy in our description.
+
+But we now have a more complicated problem. We have \emph{no idea} what the space $\mathcal{A}/\mathcal{G}$ looks like. Thus, even formally, we don't understand what $\d \mu$ on this space should be.
+
+In electromagnetism, we handled this problem by ``picking a gauge''. We are going to do exactly the same here, but the non-linearities of our non-abelian theory means this is more involved. We need to summon ghosts.
+
+\subsection{Faddeev--Popov ghosts \texorpdfstring{\ghost}{}}
+To understand how to do this, we will first consider a particular finite-dimensional example. Suppose we have a field $(x, y): \{\mathrm{pt}\} \to \R^2$ on a zero-dimensional universe and an action $S[x, y]$. For simplicity, we will often ignore the existence of the origin in $\R^2$.
+
+The partition function we are interested in is
+\[
+ \int_{\R^2}\d x\; \d y\; e^{-S[x, y]}.
+\]
+Suppose the action is rotationally invariant. Then we can write the integral as
+\[
+ \int_{\R^2} \d x\; \d y\; e^{-S[x, y]} = \int_0^{2\pi} \d \theta \int_0^\infty r \;\d r\; e^{-S[r]} = 2\pi \int_0^\infty r \;\d r\; e^{-S[r]}.
+\]
+We can try to formulate this result in more abstract terms. Our space $\R^2$ of fields has an action of the group $\SO(2)$ by rotation. The quotient/orbit space of this action is
+\[
+ \frac{\R^2 \setminus \{0\}}{\SO(2)} \cong \R_{> 0}.
+\]
+Then what we have done is that we have replaced the integral over the whole of $\R^2$, namely the $(x, y)$ integral, with an integral over the orbit space $\R_{>0}$, namely the $r$ integral. There are two particularly important things to notice:
+\begin{itemize}
+ \item The measure on $\R_{>0}$ is not the ``obvious'' measure $\d r$. In general, we have to do some work to figure out what the correct measure is.
+ \item We had a factor of $2\pi$ sticking out at the front, which corresponds to the ``volume'' of the group $\SO(2)$.
+\end{itemize}
+In general, how do we figure out what the quotient space $\R_{>0}$ is, and how do we find the right measure to integrate against? The idea, as we have always done, is to ``pick a gauge''.
+
+We do so via a gauge fixing function. We specify a function $f: \R^2 \to \R$, and then our gauge condition will be $f(x) = 0$. In other words, the ``space of gauge orbits'' will be
+\[
+ C = \{\mathbf{x} \in \R^2: f(\mathbf{x}) = 0\}
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -2) rectangle (2, 2);
+ \node [circ] {};
+
+ \draw (0, 0) .. controls (1, 0.5) and (0, 1) .. (2, 1.7) node [pos=0.6, left] {\small $f(\mathbf{x}) = 0$};
+ \end{tikzpicture}
+\end{center}
+For this to work out well, we need the following two conditions:
+\begin{enumerate}
 \item For each $\mathbf{x} \in \R^2$, there exists some $R \in \SO(2)$ such that $f(R \mathbf{x}) = 0$.
 \item $f$ is non-degenerate. Technically, we require that for any $\mathbf{x}$ such that $f(\mathbf{x}) = 0$, we have
 \[
 \Delta_f (\mathbf{x}) = \left.\frac{\partial}{\partial \theta} f(R_\theta(\mathbf{x}))\right|_{\theta = 0} \not= 0,
 \]
 where $R_\theta$ is rotation by $\theta$.
+\end{enumerate}
+The first condition is an obvious requirement --- our function $f$ does pick out a representative for each gauge orbit. The second condition is technical, but also crucial. We want the curve to pick out a unique point in each gauge orbit. This prevents a gauge orbit from looking like
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -2) rectangle (2, 2);
+ \node [circ] {};
+
+ \draw (0, 0) .. controls (2, 0.5) and (-1.5, -0.5) .. (1, 2);
+
+ \draw [dashed] circle [radius=0.5];
+ \end{tikzpicture}
+\end{center}
+where the dashed circle intersects the curved line three times. For any curve that looks like this, we would have a vanishing $\Delta_f(x)$ at the turning points of the curve. The non-degeneracy condition forces the curve to always move radially outwards, so that we pick out a good gauge representative.
+
It is important to note that the non-degeneracy condition in general does not guarantee that each gauge orbit has a unique representative. In fact, it forces each gauge orbit to have at least two representatives. Indeed, if we consider a simple gauge fixing function $f(x, y) = x$, then this is non-degenerate, but the zero set looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -2) rectangle (2, 2);
+ \node [circ] {};
+
+ \draw (-2, 0) -- (2, 0);
+
+ \draw [dashed] circle [radius=1];
+ \end{tikzpicture}
+\end{center}
+It is an easy exercise with the intermediate value theorem to show that there must be at least two representatives in each gauge orbit (one will need to use the non-degeneracy condition).
+
+This is not a particularly huge problem, since we are just double counting each gauge orbit, and we know how to divide by $2$. Let's stick with this and move on.
+
To integrate over the gauge orbit, it is natural to try the integral
+\[
+ \int_{\R^2}\d x\; \d y\; \delta(f(x)) e^{-S(x, y)}.
+\]
+Then the $\delta$-function restricts us to the curve $C$, known as the \term{gauge slice}.
+
However, this has a problem. We would want our result to not depend on how we choose our gauge. But this integral depends not only on $C$; it in fact depends on $f$ as well!
+
+To see this, we can simply replace $f$ by $cf$ for some constant $c \in \R$. Then
+\[
+ \delta(f(x)) \mapsto \delta(cf(x)) = \frac{1}{|c|} \delta(f(x)).
+\]
+So our integral changes. It turns out the trick is to include the factor of $\Delta_f$ we previously defined. Consider the new integral
+\[
+ \int_{\R^2}\d x\; \d y\; \delta(f(\mathbf{x})) |\Delta_f(\mathbf{x})| e^{-S(\mathbf{x})}.\tag{$*$}
+\]
+To analyze how this depends on $f$, we pretend that the zero set $C$ of $f$ actually looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -2) rectangle (2, 2);
+ \node [circ] {};
+
+ \draw (0, 0) .. controls (1, 0.5) and (0, 1) .. (2, 1.7);
+ \end{tikzpicture}
+\end{center}
+rather than
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -2) rectangle (2, 2);
+ \node [circ] {};
+
+ \draw (0, 0) .. controls (1, 0.5) and (0, 1) .. (2, 1.7);
+ \draw (0, 0) .. controls (-1, -0.5) and (0, -1) .. (-2, -1.7);
+ \end{tikzpicture}
+\end{center}
+Of course, we know the former is impossible, and the zero set must look like the latter. However, the value of the integral $(*)$ depends only on how $f$ behaves locally near the zero set, and so we may analyze each ``branch'' separately, pretending it looks like the former. This will make it much easier to say what we want to say.
+
+\begin{thm}\leavevmode
+ The integral $(*)$ is independent of the choice of $f$ and $C$.
+\end{thm}
+
+\begin{proof}
+ We first note that if we replace $f(\mathbf{x})$ by $c(r) f(\mathbf{x})$ for some function $c(r) > 0$ of the radius, then we have
+ \[
+ \delta(cf) = \frac{1}{c} \delta(f),\quad |\Delta_{cf}(\mathbf{x})| = c(r) |\Delta_f(\mathbf{x})|,
+ \]
+ and so the integral doesn't change.
+
+ Next, suppose we replace $f$ with some $\tilde{f}$, but they have the same zero set. Now notice that $\delta(f)$ and $|\Delta_f|$ depend only on the first-order behaviour of $f$ at $C$. In particular, it depends only on $\frac{\partial f}{\partial \theta}$ on $C$. So for all practical purposes, changing $f$ to $\tilde{f}$ is equivalent to multiplying $f$ by the ratio of their derivatives. So changing the function $f$ while keeping $C$ fixed doesn't affect the value of $(*)$.
+
+ Finally, suppose we have two arbitrary $f$ and $\tilde{f}$, with potentially different zero sets. Now for each value of $r$, we pick a rotation $R_{\theta(r)} \in \SO(2)$ such that
+ \[
+ \tilde{f}(\mathbf{x}) \propto f(R_{\theta(r)} \mathbf{x}).
+ \]
+ By the previous part, we can rescale $f$ or $\tilde{f}$, and assume we in fact have equality.
+
+ We let $\mathbf{x}' = R_{\theta(r)}\mathbf{x}$. Now since the action only depends on the radius, it in particular is invariant under the action of $R_{\theta(r)}$. The measure $\d x\;\d y$ is also invariant, which is particularly clear if we write it as $\d \theta\; r\;\d r$ instead. Then we have
+ \begin{align*}
+ \int_{\R^2}\d x\; \d y\; \delta(f(\mathbf{x})) |\Delta_f(\mathbf{x})| e^{-S(\mathbf{x})} &= \int_{\R^2}\d x'\; \d y'\; \delta(f(\mathbf{x}')) |\Delta_f(\mathbf{x}')| e^{-S(\mathbf{x}')}\\
+ &= \int_{\R^2}\d x'\; \d y'\; \delta(\tilde{f}(\mathbf{x})) |\Delta_{\tilde{f}}(\mathbf{x})| e^{-S(\mathbf{x}')}\\
+ &= \int_{\R^2}\d x\; \d y\; \delta(\tilde{f}(\mathbf{x})) |\Delta_{\tilde{f}}(\mathbf{x})| e^{-S(\mathbf{x})}\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ We choose $C$ to be the $x$-axis with $f(\mathbf{x}) = y$. Then under a rotation
+ \[
+ f(\mathbf{x}) = y \mapsto y \cos \theta - x \sin \theta,
+ \]
+ we have
+ \[
+ \Delta_f (\mathbf{x}) = -x.
+ \]
+ So we have
+ \begin{align*}
+ \int_{\R^2} \d x\; \d y\; \delta(f) |\Delta_f (\mathbf{x})| e^{-S(x, y)} &= \int_{\R^2} \d x\;\d y\; \delta(y) |x| e^{-S(x, y)}\\
+ &= \int_{-\infty}^\infty \d x\; |x| e^{-S(x, 0)}\\
+ &= 2 \int_0^\infty \d x\; x\, e^{-S(x, 0)}\\
+ &= 2 \int_0^\infty r\;\d r\;e^{-S(r)}.
+ \end{align*}
+\end{eg}
+So this gives us back the original integral
+\[
+ \int_0^\infty r\;\d r\; e^{-S(r)}
+\]
+along the gauge orbits we wanted, except for the factor of $2$. As we mentioned previously, this is because our gauge fixing condition actually specifies two points on each gauge orbit, instead of one. This is known as the \term{Gribov ambiguity}.
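+We can check this numerically for a concrete rotationally invariant action, say $S(r) = r^2/2$ (a choice made purely for illustration; the helper names below are ours). With the gauge slice $f(\mathbf{x}) = y$, so that $|\Delta_f| = |x|$, the gauge-fixed integral should be exactly twice the integral along the gauge orbits:

```python
import math

def fp_integral(n=200_000, cutoff=10.0):
    """Gauge-fixed integral: after doing the delta(y) integral, this is
    int |x| exp(-x^2/2) dx over the real line (midpoint Riemann sum)."""
    h = 2 * cutoff / n
    return sum(abs(-cutoff + (i + 0.5) * h)
               * math.exp(-(-cutoff + (i + 0.5) * h) ** 2 / 2)
               for i in range(n)) * h

def orbit_integral(n=200_000, cutoff=10.0):
    """Integral along the gauge orbits: int_0^inf r exp(-r^2/2) dr."""
    h = cutoff / n
    return sum((i + 0.5) * h * math.exp(-((i + 0.5) * h) ** 2 / 2)
               for i in range(n)) * h
```

+The two agree up to the factor of $2$, which is the Gribov double counting made quantitative.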
+
+When we do perturbation theory later on, we will not be sensitive to this global factor, because in perturbation theory, we only try to understand the curve locally, and the choice of gauge is locally unique.
+
+The advantage of looking at the integral
+\[
+ \int_{\R^2}\d x\;\d y\; \delta(f) \Delta_f e^{-S(x, y)}
+\]
+is that it only refers to functions and measures on the full space $\R^2$, which we understand well.
+
+More generally, suppose we have a (well-understood) space $X$ with a measure $\d \mu$. We then have a Lie group $G$ acting on $X$. Suppose locally (near the identity), we can parametrize elements of $G$ by parameters $\theta^a$ for $a = 1, \cdots, \dim G$. We write $R_\theta$ for the corresponding element of $G$ (technically, we are passing on to the Lie algebra level).
+
+To do gauge fixing, we now need many gauge fixing functions, say $f^a$, again with $a = 1, \cdots, \dim G$. We then let
+\[
+ \Delta_f = \det\left(\left.\frac{\partial f^a(R_\theta \mathbf{x})}{\partial \theta^b}\right|_{\theta = 0}\right).
+\]
+This is known as the \term{Faddeev--Popov determinant}.
+
+Then if we have a function $e^{-S[\mathbf{x}]}$ that is invariant under the action of $G$, to integrate over the gauge orbits, we integrate
+\[
+ \int_X \d \mu\; |\Delta_f| \prod_{a = 1}^{\dim G} \delta(f^a(x)) e^{-S[\mathbf{x}]}.
+\]
+Now in Yang--Mills, our spaces and groups are no longer finite-dimensional, and nothing makes sense. Well, we can manipulate expressions formally. Suppose we have some gauge fixing condition $f$. Then the expression we want is
+\[
+ \mathcal{Z} = \int_{\mathcal{A}/\mathcal{G}} \D \mu \; e^{-S_{YM}} = \int_\mathcal{A} \D A\; \delta[f] |\Delta_f(A)| e^{-S_{YM}[A]}.
+\]
+Suppose the gauge group is $G$, with Lie algebra $\mathfrak{g}$. We will assume the gauge fixing condition is pointwise, i.e.\ we have functions $f^a: \mathfrak{g} \to \mathfrak{g}$, and the gauge fixing condition is
+\[
+ f(A(x)) = 0\text{ for all } x\in M.
+\]
+Then writing $n = \dim \mathfrak{g}$, we can write
+\[
+ \delta[f] = \prod_{x \in M} \delta^{(n)}(f(A(x))).
+\]
+We don't really know what to do with these formal expressions. Our strategy is to write this integral in terms of more ``usual'' path integrals, and then we can use our usual tools of perturbation theory to evaluate the integral.
+
+We first take care of the $\delta[f]$ term. We introduce a new bosonic field $h \in \Omega^0_M(\mathfrak{g})$, and then we can write
+\[
+ \delta[f] = \int \D h\; \exp\left(- \int i h^a(x) f^a(A(x))\;\d^d x\right).
+\]
+This $h$ acts as a ``Lagrange multiplier'' that enforces the condition $f(A(x)) = 0$, and we can justify this by comparing to the familiar result (with the factor of $2\pi$ absorbed into the normalization of the measure) that
+\[
+ \int e^{ip\cdot x}\;\d p = \delta(x).
+\]
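+As a quick sanity check of this representation, we can regularize the $p$-integral with a cutoff $\Lambda$, so that $\frac{1}{2\pi}\int_{-\Lambda}^{\Lambda} e^{ipx}\;\d p = \sin(\Lambda x)/(\pi x)$, and verify numerically that smearing a test function against this kernel recovers its value at $0$ as $\Lambda$ grows. A minimal sketch (helper names are ours):

```python
import math

def delta_reg(x, lam):
    """Cutoff-regularized delta: (1/2pi) * int_{-lam}^{lam} e^{ipx} dp."""
    if abs(x) < 1e-12:
        return lam / math.pi  # limit of sin(lam*x)/(pi*x) as x -> 0
    return math.sin(lam * x) / (math.pi * x)

def smear(g, lam, cutoff=20.0, n=400_000):
    """Midpoint Riemann sum of int delta_reg(x) g(x) dx; -> g(0) as lam grows."""
    h = 2 * cutoff / n
    return sum(delta_reg(-cutoff + (i + 0.5) * h, lam)
               * g(-cutoff + (i + 0.5) * h)
               for i in range(n)) * h
```

+For a Gaussian test function $g(x) = e^{-x^2}$, the smeared value is $\operatorname{erf}(\Lambda/2)$, which converges to $g(0) = 1$ extremely quickly.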
+To take care of the determinant term, we wanted to have
+\[
+ \Delta_f = \det \frac{\delta f^a[A^\lambda(x)]}{\delta \lambda^b (y)},
+\]
+where $\lambda^a(y)$ are our gauge parameters, and $A^\lambda$ is the gauge-transformed field.
+
+Now recall that for a finite-dimensional $n \times n$ matrix $M$, we have
+\[
+ \det (M) = \int \d^n c\; \d^n \bar{c}\; e^{\bar{c} M c},
+\]
+where $c, \bar{c}$ are $n$-dimensional \emph{fermion} fields. Thus, in our case, we can write the Faddeev--Popov determinant as the path integral
+\[
+ \Delta_f = \int \D c\; \D \bar{c}\;\exp \left(\int_{M \times M} \d^d x\;\d^d y\; \bar{c}_a(x) \frac{\delta f^a(A^\lambda(x))}{\delta \lambda^b(y)} c^b(y)\right),
+\]
+where $c, \bar{c}$ are \emph{fermionic} scalars, again valued in $\mathfrak{g}$ under the adjoint action. Since we assumed that $f^a$ is local, i.e.\ $f^a(A)(x)$ is a function of $A(x)$ and its derivative at $x$ only, we can simply write this as
+\[
+ \Delta_f = \int \D c\; \D \bar{c}\; \exp \left(\int_M \d^d x\; \bar{c}_a (x) \frac{\delta f^a(A^\lambda)}{\delta \lambda^b}(x) c^b(x)\right).
+\]
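+The finite-dimensional identity $\det(M) = \int \d^n c\; \d^n \bar{c}\; e^{\bar{c}Mc}$ can be checked explicitly by expanding the exponential in a Grassmann algebra and reading off the coefficient of the top monomial $\bar{c}_1 c^1 \bar{c}_2 c^2 \cdots$, which is what the Berezin integral extracts (with that normalization of the measure). A sketch in Python (helper names are ours):

```python
def gmul(x, y):
    """Product in a Grassmann algebra; elements are dicts mapping a sorted
    tuple of generator indices to a coefficient."""
    out = {}
    for mx, cx in x.items():
        for my, cy in y.items():
            if set(mx) & set(my):
                continue  # any repeated generator squares to zero
            merged = list(mx) + list(my)
            inversions = sum(1 for i in range(len(merged))
                             for j in range(i + 1, len(merged))
                             if merged[i] > merged[j])
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + (-1) ** inversions * cx * cy
    return out

def gexp(a, ngen):
    """exp(a) for a Grassmann element; the series terminates by nilpotency."""
    result, power, fact = {(): 1.0}, {(): 1.0}, 1.0
    for k in range(1, ngen + 1):
        power = gmul(power, a)
        fact *= k
        for key, c in power.items():
            result[key] = result.get(key, 0) + c / fact
    return result

def berezin_det(M):
    """Coefficient of the top monomial cbar_1 c_1 cbar_2 c_2 ... in
    exp(cbar M c); this should equal det M."""
    n = len(M)
    S = {}  # cbar_i M_ij c_j, with generator 2i = cbar_{i+1}, 2j+1 = c_{j+1}
    for i in range(n):
        for j in range(n):
            for key, c in gmul({(2 * i,): 1.0}, {(2 * j + 1,): M[i][j]}).items():
                S[key] = S.get(key, 0) + c
    return gexp(S, 2 * n).get(tuple(range(2 * n)), 0.0)
```

+For a $2\times 2$ matrix this reproduces $M_{11}M_{22} - M_{12}M_{21}$, and the same bookkeeping of signs works for any $n$.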
+The fermionic fields $c$ and $\bar{c}$ are known as \term{ghosts} and \term{anti-ghosts} respectively. We might find these a bit strange, since they are spin $0$ fermionic fields, which violates the spin-statistics theorem. Indeed, if we do canonical quantization with them, then we find that we don't get a Hilbert space, as we get states with negative norm! Fortunately, there is a subspace of \emph{gauge invariant} states that do not involve the ghosts, and the inner product is positive definite on this subspace. When we focus on these states, and on operators that do not involve ghosts, then we are left with a good, unitary theory. These $c$ and $\bar{c}$ aren't ``genuine'' fields; they are just there to get rid of the extra degrees of freedom we don't want. The ``physically meaningful'' theory is supposed to happen in $\mathcal{A}/\mathcal{G}$, where no ghosts exist.
+
+With all factors included, the full action is given by
+\[
+ S[A, \bar{c}, c, h] = \int \d^d x\; \left(\frac{1}{4g_{YM}^2} F_{\mu\nu}^a F^{a, \mu\nu} + ih^a f^a(A) - \bar{c}_a \frac{\delta f^a(A^\lambda)}{\delta \lambda^b}c^b\right),
+\]
+and the path integral is given by
+\[
+ \mathcal{Z} = \int \D A\; \D c\; \D \bar{c}\;\D h\; \exp\Big({-S}[A, \bar{c}, c, h]\Big).
+\]
+\begin{eg}
+ We often pick Lorenz gauge $f^a(A) = \partial^\mu A_\mu^a$. Under a gauge transformation, we have $A \mapsto A^\lambda = A + \nabla \lambda$. More explicitly, we have
+ \[
+ (A^\lambda)^a\!_\mu = A^a\!_\mu + \partial_\mu \lambda^a + f^a\!_{bc} A_\mu^b \lambda^c.
+ \]
+ So the matrix appearing in the Fadeev--Popov determinant is
+ \[
+ \frac{\delta f^a(A^\lambda)}{\delta \lambda^b} = \partial^\mu\nabla_\mu.
+ \]
+ Thus, the full Yang--Mills action is given by
+ \[
+ S[A, \bar{c}, c, h] =\int \d^d x\; \left( \frac{1}{4g^2_{YM}} F_{\mu\nu}^a F^{a,\mu\nu} + i h^a \partial^\mu A_\mu^a - \bar{c}^a \partial^\mu \nabla_\mu c^a\right).
+ \]
+\end{eg}
+
+Why do we have to do this? Why didn't we have to bother when we did electrodynamics?
+
+If we did this for an abelian gauge theory, the structure constants would vanish, and all the $f^a\!_{bc}$ terms would disappear. The ghost kinetic operator would then not involve the gauge field, so the path integral over the ghosts would be completely independent of the gauge field, and as long as we work in a fixed gauge, we can ignore the ghosts. However, in a non-abelian gauge theory, this is not true. We cannot just impose a gauge and get away with it. We need to put back the Jacobian to take into account the change of variables, and this is where the ghosts come in.
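+The contrast can be made concrete. For $G = \SU(2)$ in the fundamental representation, $t^a = \sigma^a/2$ with $\sigma^a$ the Pauli matrices, and $[t^a, t^b] = i f^{abc} t^c$ gives $f^{abc} = \varepsilon^{abc}$, whereas for an abelian group all structure constants vanish. A quick extraction, assuming the standard normalization $\mathrm{tr}(t^a t^b) = \frac{1}{2}\delta^{ab}$ so that $f^{abc} = -2i\, \mathrm{tr}([t^a, t^b] t^c)$ (helper names are ours):

```python
# Pauli matrices as nested lists of complex numbers
SIGMA = [
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]
T = [[[x / 2 for x in row] for row in s] for s in SIGMA]  # t^a = sigma^a / 2

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def f(a, b, c):
    """Structure constant f^{abc} = -2i tr([t^a, t^b] t^c)."""
    return (-2j * trace(mul(comm(T[a], T[b]), T[c]))).real
```

+The extracted constants are totally antisymmetric with $f^{123} = 1$, i.e.\ $f^{abc} = \varepsilon^{abc}$, and it is exactly these terms that couple the ghosts to the gauge field.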
+
+The benefit of doing all this work is that it now looks very familiar. It seems like something we can tackle using Feynman rules with perturbation theory.
+
+\subsection{BRST symmetry and cohomology}
+In the Faddeev--Popov construction, we have introduced some gauge-fixing terms, and so naturally, the Lagrangian is no longer gauge invariant. However, we would like to be able to use gauge symmetries to understand our theory. For example, we would expect gauge symmetry to restrict the possible terms generated by the renormalization group flow, but we now can't do that.
+
+It turns out that our action still has a less obvious symmetry arising from gauge invariance, known as \term{BRST symmetry}. This was discovered by Becchi, Rouet and Stora, and also independently by Tyutin.
+
+To describe this symmetry, we are going to construct a BRST operator. Since we want to prove things about this operator, we have to be more precise about what space it acts on.
+
+We let $B$ be the (complex) space of all polynomial functions in the fields and their derivatives. More precisely, it is defined recursively as follows:
+\begin{itemize}
+ \item Let $\Psi$ be any of $\{A_\mu, c^a, \bar{c}^a, h^a\}$, and $\partial_\alpha$ be any differential operator (e.g.\ $\partial_1 \partial_3^2$). Then $\partial_\alpha \Psi \in B$.
+ \item Any complex $C^\infty$ function on $M$ is in $B$.
+ \item If $a, b \in B$, then $a + b, ab \in B$.
+\end{itemize}
+We impose the obvious commutativity relations based on fermionic and bosonic statistics. Note that by construction, this space only contains polynomial functions in the fields. This is what we are going to assume when we try to prove things, and makes things much more convenient because we can now induct on the degree of the polynomial.
+
+However, for a general gauge-fixing function, we cannot guarantee that the Yang--Mills Lagrangian lives in $B$ (even though for the Lorenz gauge, it does). What we do can be extended to allow for more general functions of the fields, but this is difficult to make precise and we will not do that.
+
+This $B$ is in fact a $\Z/2\Z$-\term{graded algebra}, or a \term{superalgebra}, i.e.\ we can decompose it as
+\[
+ B = B_0 \oplus B_1,
+\]
+where $B_0, B_1$ are vector subspaces of $B$. Here $B_0$ contains the ``purely bosonic'' terms, while $B_1$ contains the purely fermionic terms. These satisfy
+\[
+ B_s B_t \subseteq B_{s + t},
+\]
+where the subscripts are taken modulo $2$. Moreover, if $y \in B_s$ and $x \in B_t$, then we have the (graded-)commutativity relation
+\[
+ yx = (-1)^{st} xy.
+\]
+If $x$ belongs to one of $B_0$ or $B_1$, we say it has \emph{definite statistics}, and we write $|x| = s$ if $x \in B_s$.
+
+\begin{defi}[BRST operator]\index{BRST operator}
+ The \emph{BRST operator} $\mathcal{Q}$ is defined by
+ \begin{align*}
+ \mathcal{Q} A_\mu &= \nabla_\mu c & \mathcal{Q} \bar{c} &= ih\\
+ \mathcal{Q} c &= - \frac{1}{2}[c, c] & \mathcal{Q} h &= 0.
+ \end{align*}
+ This extends to an operator on $B$ by sending all constants to $0$, and for $f, g \in B$ of definite statistics, we set
+ \[
+ \mathcal{Q} (fg) = (-1)^{|f|} f \mathcal{Q} g + (\mathcal{Q} f) g,\quad \mathcal{Q} (\partial_\mu f) = \partial_\mu \mathcal{Q} f.
+ \]
+ In other words, $\mathcal{Q}$ is a \term{graded derivation}.
+\end{defi}
+There are a couple of things to note:
+\begin{itemize}
+ \item Even though we like to think of $\bar{c}$ as the ``complex conjugate'' of $c$, formally speaking, they are unrelated variables, and so we have no obligation to make $\mathcal{Q} \bar{c}$ related to $\mathcal{Q} c$.
+ \item The expression $[c, c]$ is defined by
+ \[
+ [c, c]^a = f_{bc}^a c^b c^c,
+ \]
+ where $f_{bc}^a$ are the structure constants. This is non-zero, even though the Lie bracket is anti-commutative, because the ghosts are fermions.
+ \item The operator $\mathcal{Q}$ exchanges fermionic variables with bosonic variables. Thus, we can think of this as a ``fermionic operator''.
+ \item It is an exercise to see that $\mathcal{Q}$ is well-defined.
+\end{itemize}
+
+We will soon see that this gives rise to a symmetry of the Yang--Mills action. To do so, we first need the following fact:
+\begin{thm}
+ We have $\mathcal{Q}^2 = 0$.
+\end{thm}
+
+\begin{proof}
+ We first check that for any field $\Psi$, we have $\mathcal{Q}^2 \Psi = 0$.
+ \begin{itemize}
+ \item This is trivial for $h$.
+ \item We have
+ \[
+ \mathcal{Q}^2 \bar{c} = \mathcal{Q}i h = 0.
+ \]
+ \item Note that for fermionic $a, b$, we have $[a, b] = [b, a]$. So
+ \[
+ \mathcal{Q}^2 c = -\frac{1}{2} \mathcal{Q} [c, c] = -\frac{1}{2} ([\mathcal{Q} c, c] + [c, \mathcal{Q} c]) = -[\mathcal{Q}c, c] = \frac{1}{2} [[c, c], c].
+ \]
+ It is an exercise to carefully go through the anti-commutativity and see that the Jacobi identity implies this vanishes.
+ \item Noting that $G$ acts on the ghost fields by the adjoint action, we have
+ \begin{align*}
+ \mathcal{Q}^2 A_\mu &= \mathcal{Q} \nabla_\mu c \\
+ &= \nabla_\mu (\mathcal{Q} c) + [\mathcal{Q} A_\mu, c]\\
+ &= - \frac{1}{2} \nabla_\mu [c, c] + [\nabla_\mu c, c]\\
+ &= - \frac{1}{2} ([\nabla_\mu c, c] + [c, \nabla_\mu c]) + [\nabla_\mu c, c]\\
+ &= 0.
+ \end{align*}
+ \end{itemize}
+ To conclude the proof, it suffices to show that if $a, b \in B$ are elements of definite statistics such that $\mathcal{Q}^2 a = \mathcal{Q}^2 b = 0$, then $\mathcal{Q}^2 ab = 0$. Then we are done by induction and linearity. Using the fact that $|\mathcal{Q} a| = |a| + 1 \pmod 2$, we have
+ \[
+ \mathcal{Q}^2 (ab) = (\mathcal{Q}^2 a) b + a \mathcal{Q}^2 b + (-1)^{|a|} (\mathcal{Q} a) (\mathcal{Q} b) + (-1)^{|a| + 1} (\mathcal{Q} a) (\mathcal{Q} b) = 0.\qedhere
+ \]
+\end{proof}
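+The Jacobi-identity exercise can be checked concretely for $\mathfrak{su}(2)$, where the structure constants are $f^a\!_{bc} = \varepsilon_{abc}$: representing the ghost components $c^a$ as Grassmann generators, one verifies directly that $[c, c] \neq 0$ while $[[c, c], c] = 0$. A sketch (helper names are ours):

```python
import itertools

def gmul(x, y):
    """Grassmann product; elements are {sorted generator tuple: coefficient}."""
    out = {}
    for mx, cx in x.items():
        for my, cy in y.items():
            if set(mx) & set(my):
                continue  # repeated generators square to zero
            merged = list(mx) + list(my)
            inv = sum(1 for i in range(len(merged))
                      for j in range(i + 1, len(merged)) if merged[i] > merged[j])
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + (-1) ** inv * cx * cy
    return out

def eps(a, b, c):
    """Totally antisymmetric epsilon_{abc} with eps_{012} = 1."""
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

# ghost components c^0, c^1, c^2 as Grassmann generators 0, 1, 2
c = [{(a,): 1} for a in range(3)]

def bracket(u, v):
    """[u, v]^a = eps_{abc} u^b v^c for g-valued Grassmann elements."""
    out = []
    for a in range(3):
        acc = {}
        for b, d in itertools.product(range(3), repeat=2):
            if eps(a, b, d):
                for key, val in gmul(u[b], v[d]).items():
                    acc[key] = acc.get(key, 0) + eps(a, b, d) * val
        out.append(acc)
    return out

double = bracket(bracket(c, c), c)  # [[c, c], c], component by component
```

+One finds $[c, c]^1 = 2 c^2 c^3$ and cyclic (non-zero, because the ghosts anticommute), while every component of $[[c, c], c]$ cancels identically.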
+
+We can now introduce some terminology.
+\begin{defi}[BRST exact]\index{BRST exact}\index{exact!BRST}
+ We say $a \in B$ is \emph{BRST exact} if $a = \mathcal{Q} b$ for some $b \in B$.
+\end{defi}
+
+\begin{defi}[BRST closed]\index{BRST closed}\index{closed!BRST}
+ We say $a \in B$ is \emph{BRST closed} if $\mathcal{Q} a = 0$.
+\end{defi}
+By the previous theorem, we know that all BRST exact elements are BRST closed, but the converse need not be true.
+
+In the canonical picture, as we saw in Michaelmas QFT, the ``Hilbert space'' of this quantum theory is not actually a Hilbert space --- some states have negative norm, and some non-zero states have zero norm. In order for things to work, we need to first restrict to a subspace of non-negative norm, and then quotient out by states of zero norm, and this gives us the space of ``physical'' states.
+
+The BRST operator gives rise to an analogous operator in the canonical picture, which we shall denote $\hat{\mathcal{Q}}$. It turns out that the space of non-negative norm states is exactly the states $\bket{\psi}$ such that $\hat{\mathcal{Q}} \bket{\psi} = 0$. We can think of this as saying the physical states must be BRST-invariant. Moreover, the zero norm states are exactly those of the form $\hat{\mathcal{Q}} \bket{\phi}$. So we can write
+\[
+ \mathcal{H}_{\mathrm{phys}} = \frac{\text{BRST closed states}}{\text{BRST exact states}}.
+\]
+This is known as the \term{cohomology} of the operator $\hat{\mathcal{Q}}$. We will not justify this or further discuss this, as the canonical picture is not really the main focus of this course or section.
+
+Let's return to our main focus, which was to find a symmetry of the Yang--Mills action.
+\begin{thm}
+ The Yang--Mills Lagrangian is BRST closed. In other words, $\mathcal{Q} \mathcal{L} = 0$.
+\end{thm}
+
+This theorem is often phrased in terms of the operator $\delta = \varepsilon \mathcal{Q}$, where $\varepsilon$ is a new Grassmannian variable. Since $\varepsilon^2 = 0$, we know $(\delta a)(\delta b) = 0$ for any $a, b$. So the map sending $a$ to $a + \delta a$ is well-defined (or rather, respects multiplication), because
+\[
+ ab + \delta(ab) = (a + \delta a)(b + \delta b).
+\]
+So this $\delta$ behaves like an infinitesimal transformation with $\varepsilon$ ``small''. This is not true for $\mathcal{Q}$ itself.
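+The role of $\varepsilon^2 = 0$ can be made concrete with dual numbers, ignoring the fermionic grading (so this is only a bosonic caricature of the Grassmann parameter):

```python
class Dual:
    """Numbers a + b*eps with eps**2 = 0, mimicking the nilpotent parameter."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps

    def __add__(self, other):
        return Dual(self.re + other.re, self.eps + other.eps)

    def __mul__(self, other):
        # the eps*eps cross term drops out, exactly as (delta a)(delta b) = 0
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)
```

+For instance, with $a = 2 + 3\varepsilon$ and $b = 5 + 7\varepsilon$, the product has $\varepsilon$-part $2 \cdot 7 + 3 \cdot 5$, which is the Leibniz rule $\delta(ab) = a\, \delta b + (\delta a)\, b$ in action.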
+
+Then the theorem says this infinitesimal transformation is in fact a symmetry of the action.
+
+\begin{proof}
+ We first look at the $(F_{\mu\nu}, F^{\mu\nu})$ piece of $\mathcal{L}$. We notice that for the purposes of this term, since $\delta A$ is bosonic, the BRST transformation for $A$ looks just like a gauge transformation $A_\mu \mapsto A_\mu + \nabla_\mu \lambda$. So the usual (explicit) proof that this is invariant under gauge transformations shows that this is also invariant under BRST transformations.
+
+ We now look at the remaining terms. We claim that it is not just BRST closed, but in fact BRST exact. Indeed, it is just
+ \[
+ \mathcal{Q} (\bar{c}^a f^a[A]) = i h^a f^a[A] - \bar{c}^a \frac{\delta f}{\delta \lambda} c.\qedhere
+ \] % check factors
+\end{proof}
+
+So we have found a symmetry of the action, and thus if we regularize the path integral measure appropriately so that it is invariant under BRST symmetries (e.g.\ in dimensional regularization), then BRST symmetry will in fact be a symmetry of the full quantum theory.
+
+Now what does this actually tell us? We first note the following general fact:
+\begin{lemma}
+ Suppose $\delta$ is some operator such that $\Phi \mapsto \Phi + \delta \Phi$ is a symmetry. Then for all $\mathcal{O}$, we have
+ \[
+ \bra \delta \mathcal{O}\ket = 0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ \[
+ \bra \mathcal{O}(\Phi)\ket = \bra \mathcal{O}(\Phi')\ket = \bra \mathcal{O}(\Phi) + \delta \mathcal{O}\ket.\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ Adding BRST exact terms to the Lagrangian does not affect the expectation of BRST invariant functions.
+\end{cor}
+
+\begin{proof}
+ Suppose we add $\mathcal{Q} g$ to the Lagrangian, and $\mathcal{O}$ is BRST invariant. Then the change in $\bra \mathcal{O}\ket$ is
+ \[
+ \left\bra \int \mathcal{Q} g(x)\;\d^d x\;\mathcal{O}\right\ket = \int \bra \mathcal{Q} (g \mathcal{O})\ket \;\d^d x = 0.
+ \]
+ If we want to argue about this more carefully, we should use $\varepsilon \mathcal{Q}$ instead of $\mathcal{Q}$.
+\end{proof}
+
+This has some pretty important consequences. For example, any gauge invariant term that involves only $A$ is BRST invariant. Also, changing the gauge-fixing function just corresponds to changing the Lagrangian by a BRST exact term, $\mathcal{Q} (\bar{c}^a f^a[A])$. So this implies that as long as we only care about gauge-invariant quantities, then all correlation functions are completely independent of the choice of $f$.
+
+\subsection{Feynman rules for Yang--Mills}
+In general, we cannot get rid of the ghost fields. However, we can use the previous corollary to get rid of the $h$ field. We add the BRST exact term
+\[
+ -i\frac{\xi}{2} \mathcal{Q} (\bar{c}^a h^a) = \frac{\xi}{2} h^a h^a
+\]
+to the Lagrangian, where $\xi$ is some arbitrary constant. Then we can complete the square, and obtain
+\[
+ i h^a f^a[A] + \frac{\xi}{2} h^a h^a = \frac{\xi}{2} \left(h^a + \frac{i}{\xi} f^a[A]\right)^2 + \frac{1}{2\xi} f^a[A] f^a[A].
+\]
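+The algebra of completing the square is easy to get wrong, so here is a trivial numeric verification of the identity above for random real $h$, $f$ and $\xi > 0$ (an illustration only; helper names are ours):

```python
import random

def lhs(h, f, xi):
    return 1j * h * f + xi / 2 * h * h

def rhs(h, f, xi):
    return xi / 2 * (h + 1j * f / xi) ** 2 + f * f / (2 * xi)

random.seed(0)
for _ in range(1000):
    h, f, xi = random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(0.1, 5)
    assert abs(lhs(h, f, xi) - rhs(h, f, xi)) < 1e-9
```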
+Moreover, in the path integral, for each fixed $A$, we should have
+\[
+ \int \D h\; \exp \left(-\frac{\xi}{2} \left(h^a + \frac{i}{\xi} f^a[A]\right)^2\right) = \int \D h\; \exp \left(-\frac{\xi}{2} (h^a)^2\right),
+\]
+since we are just shifting all $h$ by a constant. Thus, if we are not interested in correlation functions involving $h$, then we can simply factor out and integrate out $h$, and it no longer exists.
+%
+%there is a trick we can use to get rid of the $h$ field. Instead of imposing $f[A] = 0$ by the term
+%\[
+% h_a f^a[A]\;\d^d x,
+%\]
+%we can instead ``smear'' the gauge condition by including a quadratic piece in $h$, i.e.\ we take
+%\[
+% S[h, \ph] = \int \d^d x\; h_a (\partial^\mu A_\mu^a) + \frac{\xi}{2} \int \d^d x\; h^a h_a
+%\]
+%for some constant $\xi$. Integrating out $h$ now yields a contribution
+%\[
+% - \frac{1}{2\xi} \int\d^d x\; (\partial^\mu A_\mu^a) (\partial^\nu A_\nu^a).
+%\]
+%So the path integral is a ``Gaussian'' peaked at $\partial^\mu A_\mu = 0$, rather than localized at it. The real justification for why we are allowed to include this term is part of the BRST story, which we shall not go into. Heuristically, we can justify this as follows --- in the finite-dimensional example, we saw that any gauge-fixing curve would do the job, so we might as well ``averaging out'' over many gauge fixing curves.
+%
+The complete gauge-fixed Yang--Mills action in Lorenz gauge, with coupling to fermions $\psi$, is then
+\begin{multline*}
+ S[A, \bar{c}, c, \bar\psi, \psi]\\
+ = \int\d^d x\; \left(\frac{1}{4} F_{\mu\nu}^a F^{\mu\nu,a} + \frac{1}{2\xi} (\partial^\mu A_\mu^a) (\partial^\nu A_\nu^a) - \bar{c}^a \partial^\mu \nabla_\mu c^a + \bar\psi (\slashed\nabla + m)\psi\right), % check sign of \xi term
+\end{multline*}
+where we absorbed the factor of $g_{YM}$ into $A$.
+
+Using this new action, we can now write down our Feynman rules. We have three propagators
+\begin{align*}
+ \begin{tikzpicture}[eqpic]
+ \begin{feynman}
+ \vertex (l);
+ \vertex [right=of l] (r);
+ \diagram*{
+ (l) -- [gluon, momentum=$p$] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture} &= \D_{\mu\nu}^{ab} (p) = - \frac{\delta^{ab}}{p^2} \left(\delta_{\mu\nu} - (1 - \xi) \frac{p_\mu p_\nu}{p^2}\right)\\
+ \begin{tikzpicture}[eqpic]
+ \begin{feynman}
+ \vertex (l);
+ \vertex [right=of l] (r);
+ \diagram*{
+ (l) -- [fermion, momentum=$p$] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture} &= S(p) = \frac{1}{i\slashed{p} + m}\\
+ \begin{tikzpicture}[eqpic]
+ \begin{feynman}
+ \vertex (l) {$\bar{c}$};
+ \vertex [right=of l] (r) {$c$};
+ \diagram*{
+ (l) -- [scalar, momentum=$p$] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture} &= C^{ab}(p) = \frac{\delta^{ab}}{p^2}
+\end{align*}
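+One can check that the gluon propagator written above indeed inverts the gauge-fixed quadratic operator. Stripping the colour factor $\delta^{ab}$ and the overall sign convention, and working in Euclidean signature, the momentum-space kinetic operator is $K_{\mu\nu} = p^2 \delta_{\mu\nu} - (1 - \frac{1}{\xi}) p_\mu p_\nu$, and multiplying by $D_{\mu\nu} = \frac{1}{p^2}\left(\delta_{\mu\nu} - (1 - \xi)\frac{p_\mu p_\nu}{p^2}\right)$ gives the identity. A numeric sketch (helper names are ours):

```python
def dk_product(p, xi):
    """D_{mu nu} K^{nu lambda} for the gauge-fixed gluon kinetic term
    (Euclidean, colour factors and overall signs stripped); should be 1."""
    d = len(p)
    p2 = sum(x * x for x in p)
    D = [[(float(m == n) - (1 - xi) * p[m] * p[n] / p2) / p2 for n in range(d)]
         for m in range(d)]
    K = [[p2 * float(m == n) - (1 - 1 / xi) * p[m] * p[n] for n in range(d)]
         for m in range(d)]
    return [[sum(D[m][k] * K[k][n] for k in range(d)) for n in range(d)]
            for m in range(d)]
```

+The $p_\mu p_\nu$ pieces cancel for every value of $\xi$, which is why the gauge parameter drops out of gauge-invariant quantities.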
+We also have interaction vertices given by
+{\allowdisplaybreaks
+\begin{align*}
+ \begin{tikzpicture}[eqpic]
+ \begin{feynman}
+ \vertex (c);
+ \vertex [above left=0.5cm and 1cm of c] (tl) {$A_\mu^a$};
+ \vertex [above right=0.5cm and 1cm of c] (tr) {$A_\nu^b$};
+ \vertex [below=1.12cm of c] (b) {$A_\lambda^c$};
+ \diagram*{
+ (c) -- [gluon, momentum=$p$] (tl),
+ (c) -- [gluon, momentum'=$q$] (tr),
+ (c) -- [gluon, momentum=$k$] (b),
+ };
+ \end{feynman}
+ \end{tikzpicture} &= g_{YM} f^{abc} \left((k - p)_\lambda \delta_{\mu\nu} + (p - q)_\mu \delta_{\nu\lambda} + (q - k)_\nu \delta_{\mu\lambda}\right)\\
+ \begin{tikzpicture}[eqpic]
+ \begin{feynman}
+ \vertex (c);
+ \vertex [above left=of c] (tl) {$A_\mu^a$};
+ \vertex [above right=of c] (tr) {$A_\nu^b$};
+ \vertex [below left=of c] (bl) {$A_\lambda^c$};
+ \vertex [below right=of c] (br) {$A_\sigma^d$};
+ \diagram*{
+ (c) -- [gluon] (tl),
+ (c) -- [gluon] (tr),
+ (c) -- [gluon] (bl),
+ (c) -- [gluon] (br),
+ };
+ \end{feynman}
+ \end{tikzpicture} &=
+ \begin{tikzpicture}[eqpic]
+ \node at (0, 0.5) {$-g_{YM}^2 f^{abe}f^{cde} (\delta_{\mu\lambda} \delta_{\nu\sigma} - \delta_{\mu\sigma} \delta_{\nu\lambda})$};
+ \node at (0, 0) {$- g_{YM}^2 f^{ace} f^{bde} (\delta_{\mu\nu} \delta_{\sigma\lambda} - \delta_{\mu\sigma} \delta_{\nu\lambda})$};
+ \node at (0, -0.5) {$- g^2_{YM} f^{ade} f^{bce} (\delta_{\mu\nu} \delta_{\sigma\lambda} - \delta_{\mu\lambda} \delta_{\nu\sigma})$};
+ \end{tikzpicture}\\
+ \begin{tikzpicture}[eqpic]
+ \begin{feynman}
+ \vertex (c);
+ \vertex [above left=0.5cm and 1cm of c] (tl) {$\bar{c}^b$};
+ \vertex [above right=0.5cm and 1cm of c] (tr) {$c^c$};
+ \vertex [below=1.12cm of c] (b) {$A_\mu^a(p)$};
+ \diagram*{
+ (c) -- [anti charged scalar] (tl),
+ (c) -- [charged scalar] (tr),
+ (c) -- [gluon, momentum=$p$] (b),
+ };
+ \end{feynman}
+ \end{tikzpicture} &= g_{YM} f^{abc} p_\mu\\
+ \begin{tikzpicture}[eqpic]
+ \begin{feynman}
+ \vertex (c);
+ \vertex [above left=0.5cm and 1cm of c] (tl) {$\bar{\psi}$};
+ \vertex [above right=0.5cm and 1cm of c] (tr) {$\psi$};
+ \vertex [below=1.12cm of c] (b) {$A_\mu^a(p)$};
+ \diagram*{
+ (c) -- [anti fermion] (tl),
+ (c) -- [fermion] (tr),
+ (c) -- [gluon] (b),
+ };
+ \end{feynman}
+ \end{tikzpicture} &= -g_{YM} \gamma^\mu t^a_f.
+\end{align*}
+}
+
+What is the point of writing all this out? The point is not to use them. The point is to realize these are horrible! It is a complete pain to work with these Feynman rules. For example, a $gg \to ggg$ scattering process involves $\sim 10000$ terms when we expand it in terms of Feynman diagrams, at \emph{tree level}! Perturbation theory is \emph{not} going to be the right way to think about Yang--Mills. And GR is only worse.
+
+Perhaps this is a sign that our theory is wrong. Surely a ``correct'' theory must look nice. But when we try to do actual computations, despite these horrific expressions, the end results tend to be very nice. So it's not really Yang--Mills' fault. It's just perturbation theory that is bad.
+
+Indeed, there is no reason to expect perturbation theory to look good. We formulated Yang--Mills in terms of a very nice geometric picture, about principal $G$-bundles. From this perspective, everything is very natural and simple. However, to do perturbation theory, we need to pick a trivialization, and then work with this object $A$. The curvature $F_{\mu\nu}$ was a natural geometric object to study, but breaking it up into $\d A + A \wedge A$ is not. The individual terms that give rise to the interaction vertices have no geometric meaning --- only $\nabla$ and $F$ do. The brutal butchering of the connection and curvature into these non-gauge-invariant terms is bound to make our theory look messy.
+
+Then what \emph{is} a good way to do Yang--Mills? We don't know. One rather successful approach is to use lattice regularization, and use a computer to actually compute partition functions and correlations directly. But this is very computationally intensive, and it's difficult to do complicated computations with this. Some other people try to understand Yang--Mills in terms of string theory, or twistor theory. But we don't really know.
+
+The only thing we can do now is to work with this mess.
+
+\subsection{Renormalization of Yang--Mills theory}
+To conclude the course, we compute the $\beta$-function of Yang--Mills theory. We are mostly interested in the case of $\SU(N)$. While a lot of the derivations (or lack thereof) we do are general, every now and then, we use the assumption that $G = \SU(N)$, and that the fermions live in the fundamental representation.
+
+Usually, to do perturbation theory, we expand around a vacuum, where all fields take value $0$. However, this statement doesn't make much sense for gauge theory, because $A_\mu$ is only defined up to gauge transformations, and it doesn't make sense to ``set it to zero''. Instead, what we do is fix some classical solution $A_\mu^0$ to the Yang--Mills equation, and take this as the ``vacuum'' for $A_\mu$. This is known as the \term{background field}. We write
+\[
+ A_\mu = A_\mu^0 + a_\mu.
+\]
+Whenever we write $\nabla_\mu$, we will be using the background field, so that
+\[
+ \nabla_\mu = \partial_\mu + A^0_\mu.
+\]
+We can compute
+\begin{align*}
+ F_{\mu\nu} &= \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu]\\
+ &= F_{\mu\nu}^0 + \partial_\mu a_\nu - \partial_\nu a_\mu + [a_\mu, a_\nu] + [A_\mu^0, a_\nu] + [a_\mu, A_\nu^0]\\
+ &= F_{\mu\nu}^0 + \nabla_\mu a_\nu - \nabla_\nu a_\mu + [a_\mu, a_\nu].
+\end{align*}
+
+Thus, if we compute the partition function, we would expect to obtain something of the form
+\[
+ \mathcal{Z} \equiv e^{-S_{\mathrm{eff}}[A]} = \exp\left(-\frac{1}{2g_{YM}^2} \int (F_{\mu\nu}^0, F^{0, \mu\nu})\;\d^d x\right) \text{(something)}.
+\]
+A priori, the ``something'' will be a function of $A$, and also the energy scale $\mu$. Then since the result shouldn't actually depend on $\mu$, this allows us to compute the $\beta$-function. % say something about this
+
+We will work in Feynman gauge, where we pick $\xi = 1$. So we have
+\begin{multline*}
+ S[a, \bar{c}, c, \bar\psi, \psi] = \int\;\d^d x\; \left(\frac{1}{4g^2}(F_{\mu\nu}^0 + \nabla_{[\mu}a_{\nu]} + [a_\mu, a_\nu])^2\right. \\
+ \left.\vphantom{\frac{1}{4g^2}}+ \frac{1}{2g^2} (\partial^\mu A_\mu + \partial^\mu a_\mu)^2 - \bar{c}\partial^\mu \nabla_\mu c - \bar{c} \partial^\mu a_\mu c + \bar\psi (\slashed \nabla + m) \psi + \bar\psi \slashed a \psi\right).
+\end{multline*}
+This allows us to rewrite the original action in terms of the new field $a_\mu$. We will only compute the $\beta$-function up to 1-loop, and it turns out this implies we don't need to know about the whole action. We claim that we only need to know about quadratic terms.
+
+Let $L$ be the number of loops, $V$ the number of vertices, and $P$ the number of propagators. Then, restricting to connected diagrams, Euler's theorem says
+\[
+ P - V = L - 1.
+\]
+By restricting to 1-loop diagrams, this means we only care about diagrams with $P = V$.
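+The relation $P - V = L - 1$ is just Euler's formula for connected graphs: the number of independent loops is the number of propagators left over after building a spanning tree. A quick union-find sketch (helper names are ours):

```python
def loop_count(num_vertices, edges):
    """Independent loops of a connected multigraph: each edge that joins
    two already-connected vertices closes one loop."""
    parent = list(range(num_vertices))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u

    loops = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            loops += 1
        else:
            parent[ru] = rv
    return loops
```

+For instance, a one-loop diagram with two vertices joined by two propagators has $P = V = 2$, consistent with $L = P - V + 1 = 1$.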
+
+For each node $i$, we let $n_{q, i}$ be the number of ``quantum'' legs coming out of the vertex, i.e.\ we do not count background fields. Then since each propagator connects two vertices, we must have
+\[
+ 2P = \sum_{\text{vertices $i$}} n_{q, i}.
+\]
+Also, almost by definition, we have
+\[
+ V = \sum_{\text{vertices $i$}} 1.
+\]
+So this implies we only care about diagrams with
+\[
+ 0 = \sum (n_{q, i} - 2).
+\]
+It can be argued that since $A^0_\mu$ satisfies the Yang--Mills equations, the terms linear in the fluctuations vanish after integrating by parts, as they are proportional to the classical equations of motion. So it suffices to restrict to the quadratic terms.
+
+Restricting to the quadratic terms, for each of $c, \psi, a$, we have a term that looks like, say
+\[
+ \int \D c \;\D \bar{c}\; e^{\int \d^d x\; \bar{c} \Delta c}
+\]
+for some operator $\Delta$. Then the path integral will give $\det \Delta$. If the field is a boson, then we obtain $\frac{1}{\det \Delta}$ instead, but ultimately, the goal is to figure out what this $\Delta$ is.
+
+Note that each particle comes with a representation of $\SO(d)$ (or rather, the spin group) and the gauge group $G$. For our purposes, all we need to know about the representations of $\SO(d)$ is the spin, which may be $0$ (trivial), $\frac{1}{2}$ (spinor) or $1$ (vector). We will refer to the representation of $G$ as ``$R$'', and the spin as $j$. We then define the operator
+\[
+ \Delta_{R, j} = \nabla^2 + 2 \left(\frac{1}{2} F_{\mu\nu}^a J^{\mu\nu}_{(j)}\right) t_R^a,
+\]
+where $\{t_R^a\}$ are the images of the generators of $\mathfrak{g}$ in the representation, and $J^{\mu\nu}_{(j)}$ are the generators of $\so(d)$ in the spin $j$ representation. In particular,
+\[
+ J_{(0)}^{\mu\nu} = 0,\quad J_{(\frac{1}{2})}^{\mu\nu} = S^{\mu\nu} = \frac{1}{4} [\gamma^\mu, \gamma^\nu],\quad (J_{(1)}^{\rho\sigma})_{\mu\nu} = i(\delta^\rho_\mu \delta^\sigma_\nu - \delta^\rho_\nu \delta^\sigma_\mu).
+\]
+For simplicity, we will assume the fermion masses $m = 0$. Then we claim that (up to a constant factor), the $\Delta$ of the $c$, $\psi$ and $a$ fields are just $\Delta_{\mathrm{adj}, 0}$, $\sqrt{\Delta_{R, \frac{1}{2}}}$ and $\Delta_{\mathrm{adj}, 1}$ respectively. This is just a computation, which we shall omit. % include; what about the constant factor?
+
+Thus, if there are $n$ flavours of massless Dirac fermions, then we find that we have
+\[
+ \mathcal{Z} = \exp\left(-\frac{1}{2g_{YM}^2} \int (F_{\mu\nu}^0, F^{0, \mu\nu})\;\d^d x\right) \frac{(\det \Delta_{\mathrm{adj}, 0}) (\det \Delta_{R, \frac{1}{2}})^{n/2}}{(\det \Delta_{\mathrm{adj}, 1})^{1/2}}.
+\]
+We are going to view these extra terms as being quantum corrections to the effective action of $A_\mu$, and we will look at the corrections to the coupling $g_{YM}^2$ they induce. Thus, we are ultimately interested in the logarithm of these extra terms.
+
+%We first look at the quadratic terms in $a_\mu$. This is given by
+%\begin{align*}
+% &\hphantom{={}} -\frac{1}{4g^2}\Big((\nabla_\mu a_\nu - \nabla_\nu a_\mu)^2 + 2 (F^0)^{\mu\nu} [a_\mu, a_\nu] + 2 (\partial^\mu a_\mu)^2\Big)\\
+% &= -\frac{1}{4g^2}\Big(-a^\nu \nabla^2 a_\nu - a^\mu \nabla^2 a_\mu + 2a^\nu \nabla^\mu \nabla_\nu a_\mu\Big)
+%\end{align*}
+%\separator
+%
+%To compute the $\beta$-function of Yang--Mills, we expand the action around a classical solution $A_\mu$ with all other fields $\psi, \bar\psi, \bar{c}, c = 0$. We set
+%\[
+% A^{\mathrm{full}}_\mu = A_\mu + a_\mu,
+%\]
+%so that
+%\[
+% \nabla = \nabla^0 + a.
+%\]
+%We then have
+%\[
+% F_{\mu\nu} = F_{\mu\nu}^0 + \nabla_{[\mu} A_{\nu]} + [a_\mu, a_\nu].
+%\]
+%We treat $(a, \psi, \bar{\psi}, \bar{c}, c)$ as fluctuations, and compute the path integral to $1$-loop accuracy. Again, since we always have $P - V = L - 1$, at 1-loop, we must have $P = V$, and since only the fluctuations propagator, we also have
+%\[
+% 2P = \sum_{\text{vertices $i$}} n_{q, i},
+%\]
+%where $n_{q, i}$ is the number of vertices at the node $i$, and $q$ says we just count over the quantum (non-background) fields. So we know that
+%\[
+% 0 = \sum (n_{q, i} - 2).
+%\]
+%If the background field $A_\mu^0$ satisfies the Yang--Mills equation, then we can ignore all vertices with $n_{q, i} = 1$, since the sum of all these terms is proportional to the field equations. So we must have $n_{q, i} = 2$ for all $i$. So at $1$-loop, we only need to consider vertices that are quadratic. We saw exactly the same situation when we were looking at local potentials previously.
+%
+%We'll also work in ``background field gauge'', where $\nabla^\mu a_\mu = 0$. The $\nabla^\mu$ uses the background term $A_\mu^0$. In Feynman gauge ($\xi = 1$), we have
+%\begin{align*}
+% &\phantom{={}}S^{\mathrm{quad}} [a, \bar{c}, c, \bar\psi, \psi]\\
+% &= - \frac{1}{4 g_{YM}^2} \int \d^d x\; \left((\nabla_\mu a_\nu^a - \nabla_\nu a_\mu^a)^2 - 2 F_{\nabla^0}^{\mu\nu} [a_\mu, a_\nu] + 2 (\nabla^{0, \mu}_{a_\mu} \nabla^{0, \nu} a_\nu)\right) \\
+% &\hphantom{={}}+ \int\ d^d x\; \bar{c} \nabla^\mu \nabla_\mu^0 c + \int \d^d x\; \bar\psi (i \slashed{\Delta}^0 + m)\psi.
+%\end{align*}
+%Let's first look at the fluctuations in $a_\mu$, we have % all derivatives are wrt background gauge field
+%\begin{align*}
+% 4g_{YM} \mathcal{L}^{\mathrm{quad}} &= (\nabla_\mu a_\nu - \nabla_\nu a_\mu)^2 - 2F^{\mu\nu} [a_\mu, a_\nu] + 2 (\nabla^\mu a_\mu)^2\\
+% &= a_\mu \left[-\delta^{\mu\nu} \nabla^2 + \nabla^\nu \nabla^\mu - \nabla^\mu \nabla^\nu \right] a_\nu - 2 F^{\mu\nu} [a_\mu, a_\nu]\\
+% &= a_\mu^a \left(- \delta^{\mu\nu} ((\nabla)^2)^c_a + 2 \left(\frac{1}{2} F_{\rho\sigma}^b \mathcal{J}^{\rho\sigma}\right)^{\mu\nu} (t_b)^c_a\right) a_{\nu, c},
+%\end{align*}
+%where
+%\[
+% (\mathcal{J}^{\rho\sigma})_{\mu\nu} = i \left(\delta^\rho_\mu \delta^\sigma_\nu - \delta^\rho_\nu \delta^\sigma_\mu\right)
+%\]
+%are the $\SO(d)$ generators acting on the vector representation, and
+%\[
+% (t_b)^c_a = f^c_{ba}
+%\]
+%are the structure constants of $g$ generated by the adjoint representation.
+%
+%Similarly, for the Dirac field $\psi$, we get $\det(\slashed{\nabla} + m)$, and it will be helpful to use $(\det M)^2 = \det (M^2)$ to write this as follows:
+%\begin{align*}
+% \slashed{\nabla}^2 &= \gamma^\mu \gamma^\nu \nabla_\mu \nabla_\nu \\
+% &= \left(\frac{1}{2} [\gamma^\mu, \gamma^\nu] + \frac{1}{2} \{\gamma^\mu \gamma^\nu\}\right) \nabla_\mu \nabla_\nu\\
+% &= \nabla^2 - \frac{1}{4} [\gamma^\mu \gamma^\nu] (\nabla_\mu \nabla_\nu - \nabla_\nu \nabla_\mu)\\
+% &= \nabla^2 + 2\left(\frac{1}{2} F_{\mu\nu}^a S^{\mu\nu} (t^a_R)\right),
+%\end{align*}
+%where
+%\[
+% S^{\mu\nu} = \frac{1}{4} [\gamma^\mu, \gamma^\nu].
+%\]
+%Finally, the ghost Lagrangian is easier. it is just
+%\[
+% \mathcal{L}_{\mathrm{ghosts}} = \bar{c}_a (\nabla^2 c)^a.
+%\]
+%All together, integrating out these fluctuations gives, to one-loop accuracy,
+%\[
+% \frac{(\det \Delta_{R, 1/2})^{n/2}}{ (\det \Delta_{\mathrm{adj}, 1})^{1/2}} (\det \Delta_{\mathrm{adj}, 0}),
+%\]
+%where
+%\[ % point is that we got every operator in the same form.
+% \Delta_{R, j} = \nabla^2 + 2\left(\frac{1}{2} F_{\mu\nu}^a J^{\mu\nu}_{(j)}\right)t^a_R,
+%\]
+%and we have $n$ flavours of massless Dirac fermions. We can think of the second term as the ``magnetic moment''.
+%
+%Everything here depends on the background gauge field. We now see what this looks like in terms of the background gauge field. The effective action for the background field $\nabla^{(0)} = \partial + A$ is
+%\[
+% e^{-S_{\mathrm{eff}}[A]} = \exp\left(- \frac{1}{4g_{YM}^2} \int (F_{\mu\nu}, F^{\mu\nu})\;\d^d x\right) \frac{(\det \Delta_{\mathrm{adj}, 0}) (\det \Delta_{R, \frac{1}{2}})^{n/2}}{(\det \Delta_{\mathrm{adj}, 1})^{1/2}},
+%\]
+
+To proceed, we write out $\Delta$ explicitly:
+\[
+ \Delta_{R, j} = - \partial^2 + \underbrace{(\partial^\mu A_\mu^a + A_\mu^a \partial^\mu) t^a_{(R)}}_{\Delta^{(1)}} + \underbrace{A^{\mu, a} A_\mu^b t^a_R t^b_R}_{\Delta^{(2)}} + \underbrace{2 \left(\frac{1}{2} F^a_{\mu\nu} J^{\mu\nu}_{(j)}\right) t^a_{(R)}}_{\Delta^{(J)}}.
+\]
+Then we can write the logarithm as
+\begin{align*}
+ \log \det \Delta_{R, j} &= \log \det (- \partial^2 + \Delta^{(1)} + \Delta^{(2)} + \Delta^{(J)})\\
+ &= \log \det (- \partial^2) + \tr \log (1 - (\partial^2)^{-1} (\Delta^{(1)} + \Delta^{(2)} + \Delta^{(J)})).
+\end{align*}
+Again, the first term is a constant, and we will ignore it. We want to know the correction to the coupling $\frac{1}{4 g_{YM}^2}$. Since everything is covariant with respect to the background field $A^0$, it is enough to compute the quadratic terms in $A_\mu$; the remaining terms are then fixed by covariance.
+
+We now see what quadratic terms we obtain in the series expansion of $\log$. In the case of $G = \SU(N)$, we have $\tr t^a = \tr J_{\mu\nu} = 0$. So certain cross terms vanish, and the only quadratic terms are
+\[
+ \log \det \Delta_{R, j} \sim \underbrace{\vphantom{\frac{1}{2}}\tr (- \partial^{-2} \Delta^{(2)})}_{(a)} + \underbrace{\frac{1}{2} \tr (\partial^{-2} \Delta^{(1)} \partial^{-2} \Delta^{(1)})}_{(b)} + \underbrace{\frac{1}{2} \tr (\partial^{-2} \Delta^{(J)} \partial^{-2} \Delta^{(J)})}_{(c)}.
+\]
+Explicitly, these terms are given by
+\begin{align*}
+ (a) &= \int \frac{\d^d k}{(2\pi)^d} A_\mu^a(k) A_\nu^b(-k) \int \frac{\d^d p}{(2\pi)^d} \frac{\tr_R(t^a t^b)}{p^2} \delta^{\mu\nu} d(j)\\
+ (b) &= \int \frac{\d^d k}{(2\pi)^d} \frac{\d^d p}{(2\pi)^d} \frac{(k + 2p)_\mu (k + 2p)_\nu\tr(t^a t^b) A_\mu^a(k) A_\nu^b(-k)}{p^2 (p + k)^2}\\
+ (c) &= \int \frac{\d^d k}{(2\pi)^d} \frac{\d^d p}{(2\pi)^d} A_\mu^a(k) A_\nu^b(-k) (k^2 \delta_{\mu\nu} - k_\mu k_\nu) \frac{1}{p^2(p + k)^2} C(j) \tr (t^a t^b).
+\end{align*}
+Here $d(j)$ is the number of spin components of the field, and $C(j)$ is the constant defined by $\tr_j (J_{\mu\nu} J_{\kappa\lambda}) = C(j) (\delta_{\mu\kappa} \delta_{\nu\lambda} - \delta_{\mu\lambda} \delta_{\nu\kappa})$. Explicitly, they are given by
+\begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ & scalar & Dirac & 4-vector\\
+ \midrule
+ $d(j)$ & 1 & 4 & 4\\
+ $C(j)$ & 0 & 1 & 2\\
+ \bottomrule
+ \end{tabular}
+\end{center}
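To illustrate the table, the entries for the $4$-vector can be checked directly: the generators $J^{\rho\sigma}_{(1)}$ are $4 \times 4$ matrices (so $d(1) = 4$), and $C(j)$ is fixed by the trace normalization $\tr_j (J_{\mu\nu} J_{\kappa\lambda}) = C(j) (\delta_{\mu\kappa} \delta_{\nu\lambda} - \delta_{\mu\lambda} \delta_{\nu\kappa})$. A small numerical sketch:

```python
import itertools
import numpy as np

d = 4  # Euclidean spacetime dimension

def J(rho, sigma):
    """Vector-representation generator (J^{rho sigma})_{mu nu}
    = i (delta^rho_mu delta^sigma_nu - delta^rho_nu delta^sigma_mu)."""
    M = np.zeros((d, d), dtype=complex)
    M[rho, sigma] += 1j
    M[sigma, rho] += -1j
    return M

def delta(a, b):
    return 1.0 if a == b else 0.0

# Check tr(J^{mu nu} J^{kappa lambda})
#   = C(j) (delta_{mu kappa} delta_{nu lambda} - delta_{mu lambda} delta_{nu kappa})
# with C(1) = 2, as in the table.
C_vector = 2
for mu, nu, kappa, lam in itertools.product(range(d), repeat=4):
    lhs = np.trace(J(mu, nu) @ J(kappa, lam)).real
    rhs = C_vector * (delta(mu, kappa) * delta(nu, lam)
                      - delta(mu, lam) * delta(nu, kappa))
    assert abs(lhs - rhs) < 1e-12
```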
+
+In terms of Feynman diagrams, we can interpret $(a)$ as being given by the loop diagram
+\begin{center}
+ \begin{tikzpicture}
+ \draw [/tikzfeynman/gluon, /tikzfeynman/momentum'=$k$] (0, 0) node [left] {$A_\mu^a$} -- (2, 0.5);
+ \draw [/tikzfeynman/gluon] (2, 0.5) -- (4, 0) node [right] {$A_\nu^b$};
+ \draw [/tikzfeynman/fermion] (2, 1) circle [radius=0.5];
+ \node [left] at (1.5, 1) {$p$};
+ \end{tikzpicture}
+\end{center}
+while $(b)$ and $(c)$ are given by diagrams that look like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [/tikzfeynman/gluon, /tikzfeynman/momentum'=$k$] (0, 0) node [left] {$A_\mu^a$} -- (2, 0);
+ \draw [/tikzfeynman/gluon] (3, 0) -- (5, 0) node [right] {$A_\nu^b$};
+ \draw [/tikzfeynman/fermion] (3, 0) arc (0:180:0.5);
+ \draw (2, 0) arc (180:360:0.5);
+ \end{tikzpicture}
+\end{center}
+
+We now define the quantity $C(R)$ by
+\[
+ \tr_R (t^a t^b) = C(R) \delta^{ab},
+\]
+where the trace is to be taken in the representation $R$. For $G = \SU(N)$, we have
+\[
+ C(\mathrm{adj}) = N,\quad C(\mathrm{fund}) = \frac{1}{2}.
+\]
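For $G = \SU(2)$ these values can be verified directly, taking $t^a = \frac{1}{2}\sigma^a$ in the fundamental and $(t^a)_{bc} = -i\epsilon_{abc}$ in the adjoint (a standard choice of generators):

```python
import numpy as np

# Check tr_R(t^a t^b) = C(R) delta^{ab} for G = SU(2):
# C(fund) = 1/2 and C(adj) = N = 2.

# Fundamental representation: t^a = sigma^a / 2 (Pauli matrices).
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]]),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
t_fund = [s / 2 for s in sigma]

def eps(a, b, c):
    """Levi-Civita symbol for indices 0, 1, 2."""
    return ((a - b) * (b - c) * (c - a)) / 2

# Adjoint representation: (t^a)_{bc} = -i eps_{abc}.
t_adj = [np.array([[-1j * eps(a, b, c) for c in range(3)] for b in range(3)])
         for a in range(3)]

def C(reps):
    """Read off C(R) from tr(t^0 t^0) (no sum over the adjoint index)."""
    return np.trace(reps[0] @ reps[0]).real

assert abs(C(t_fund) - 0.5) < 1e-12   # C(fund) = 1/2
assert abs(C(t_adj) - 2.0) < 1e-12    # C(adj) = N = 2

# The trace is also diagonal in the adjoint indices, as required.
for a in range(3):
    for b in range(3):
        val = np.trace(t_fund[a] @ t_fund[b])
        assert abs(val - (0.5 if a == b else 0.0)) < 1e-12
```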
+Then, evaluating all those integrals, we find that, in dimensional regularization, we have
+\[
+ \log \det \Delta_{R, j} = \frac{\Gamma(2 - \frac{d}{2})}{4} \int \d^d x\; \mu^{4 - d}(-\partial^2)^{\frac{d - 4}{2}} F_{\mu\nu}^a F^{a, \mu\nu} \cdot \frac{C(R)}{(4\pi)^2} \left(\frac{d(j)}{3} - 4 C(j)\right).
+\]
+%We see that $\Delta^{(2)}$ is already quadratic. So we only need it up to linear order, and this contributes
+%\[
+% \tr (-(\partial^2)^{-1} \Delta^{(2)}).
+%\]
+%The $\Delta^{(1)}$ term is first-order. So we need the square term, and this contributes
+%\[
+% -\frac{1}{2} \tr ((-\partial^2)^{-1} \Delta^{(1)} (-\partial^2)^{-1} \Delta^{(1)}).
+%\]
+%Similarly, the last term contributes
+%\[
+% - \frac{1}{2} \tr ((-\partial^2)^{-1} \Delta^{(J)} (-\partial^2)^{-1} \Delta^{(J)})
+%\]
+%All other cross terms vanish, at least in the case $G = \SU(N)$, using the fact that $\tr t^a = \tr J_{\mu\nu} = 0$ for $J \in \so(d)$.
+%
+%The first term can be interpreted as the Feynman diagram
+%\begin{center}
+% \begin{tikzpicture}
+% \begin{feynman}
+% \vertex (l);
+% \node [left] at (l){$A_\mu^a(k)$};
+% \vertex [right=of l] (c);
+% \vertex [above=1cm of c] (t);
+% \vertex [right=of c] (r) {$A_\nu^b(-k)$};
+%
+% \diagram*{
+% (l) -- [gluon, momentum'=$k$] (c) -- [gluon] (r),
+% (c) -- [half left, momentum=$p$] (t) -- [half left] (c), % loop
+% };
+% \end{feynman}
+% \end{tikzpicture}
+%\end{center}
+%The second term is
+%\begin{center}
+% \begin{tikzpicture}
+% \begin{feynman}
+% \vertex (l);
+% \node [left] at (l){$A_\mu^a(k)$};
+% \vertex [right=of l] (c1);
+% \vertex [right=of c1] (c2);
+% \vertex [right=of c2] (r) {$A_\nu^b(-k)$};
+%
+% \diagram*{
+% (l) -- [gluon, momentum'=$k$] (c1),
+% (c2)-- [gluon] (r),
+% (c1) -- [half left, momentum=$p + k$] (c2),
+% (c1) -- [half right, reversed momentum'=$p$] (c2),
+% };
+% \end{feynman}
+% \end{tikzpicture}
+%\end{center}
+%The last one looks the same, but we have a different vertex structure, depending on what is going on in $J$.
+%
+%The first term gives
+%\[
+% \tr ((-\partial^2)^{-1} \Delta^{(2)}) = \int \frac{\d^d k}{(2\pi)^d} A_\mu^a(k) A_\nu^b(-k) \int \frac{\d^d p}{(2\pi)^d} \frac{\tr_R(t^a t^b)}{p^2} \delta^{\mu\nu} d(j),
+%\]
+%where
+%\[
+% d(j) =
+% \begin{cases}
+% 1 & \mathrm{scalar}\\
+% 4 & \mathrm{dirac}\\
+% 4 & \text{4-vector}
+% \end{cases}
+%\]
+%denotes the number of spin components running around the loop. Similarly,
+%\begin{multline*}
+% \frac{1}{2} \tr ((-\partial^2)^{-1} \Delta^{(1)} (-\partial^2)^{-1} \Delta^{(1)}) \\
+% = \int \frac{\d^d k}{(2\pi)^d} \frac{\d^d p}{(2\pi)^d} \frac{(k + 2p)_\mu (k + 2p)_\nu\tr(t^a t^b) A_\mu^a(k) A_\nu^b(k)}{p^2 (p + k)^2}
+%\end{multline*}
+%In dimensional regularization, these two diagrams combine to give
+%\[
+% \int\frac{\d^d k}{(2\pi)^d} A^a_\mu(k) A^bb_\nu(-k) (k^2 \delta_{\mu\nu} - k_\mu k_\nu) \pi(k^2),
+%\]
+%where
+%\[ % details in Osborn notes.
+% \pi(k^2) = - \frac{1}{d - 1} d(j) \tr_R (t^a t^b) \frac{\Gamma\left(2 - \frac{d}{2}\right) \Gamma\left(\frac{d}{2} - 1\right)^2}{\Gamma(d - 2)} \frac{(k^2)^{(d - 4)/2}}{2(\pi)^{d/2}}.
+%\]
+%We have
+%\[
+% \tr_R (t^a t^b) = C(R) \delta^{ab},
+%\]
+%where $C(R)$ is the index of the representation. For $G = \SU(N)$, we have $C(\mathrm{adj}) = N$ and $C(\mathrm{fund}) = \frac{1}{2}$.
+%
+%Finally, we have
+%\begin{multline*}
+% \frac{1}{2} \tr ((-\partial^2) \Delta^{(J)} (-\partial^2) \Delta^{(J)}) \\
+% = \int \frac{\d^d k}{(2\pi)^d} \frac{\d^d p}{(2\pi)^d} A_\mu^a(k) A_\nu^b(-k) (k^2 \delta_{\mu\nu} - k_\mu k_\nu) \frac{1}{p^2(p + k)^2} C(j) \tr (t^a t^b),
+%\end{multline*}
+%where $C(j)$ is defined by
+%\[
+% \tr_j (J_{\mu\nu} J_{\kappa\lambda}) = C(j) (\delta_{\mu\kappa} \delta_{\nu\lambda} - \delta_{\mu\lambda} \delta_{\nu\kappa}).
+%\]
+%Explicitly, we have
+%\[
+% C(j) =
+% \begin{cases}
+% 0 & \mathrm{scalar}\\
+% 1 & \mathrm{Dirac}\\
+% 2 & \mathrm{4-vector}
+% \end{cases}
+%\]
+%Performing this again in dimensional regularization, we have
+%\[
+% \frac{1}{2} \int \frac{\d^d k}{(2\pi)^d} A_\mu^a (k) A_\nu^a (k^2 \delta_{\mu\nu} - k_\mu k_\nu) (k^2)^{(d - 4)/2} \Gamma\left(2 - \frac{d}{2}\right) \frac{4}{(4\pi)^{d/2}} C(R) C(j).
+%\]
+Thus, combining all the pieces, we obtain % return to this
+\begin{multline*}
+ S_{\mathrm{eff}}[A] = \frac{1}{4 g_{YM}^2}\int \d^d x\; F_{\mu\nu}^a F^{a, \mu\nu} \\
+ - \frac{\Gamma\left(2 - \frac{d}{2}\right)}{4} \int\d^d x\; \mu^{4 - d}(-\partial^2)^{\frac{d - 4}{2}} F_{\mu\nu}^a F^{\mu\nu}_a \times \left(\frac{1}{2} C_{\mathrm{adj}, 1} - C_{\mathrm{adj}, 0} - \frac{n}{2} C_{R, 1/2}\right),
+\end{multline*}
+where
+\[
+ C_{R, j} = \frac{C(R)}{(4\pi)^2} \left(\frac{d(j)}{3} - 4 C(j)\right).
+\]
+Explicitly, we have
+\[
+ C_{R, j} = \frac{C(R)}{(4\pi)^2} \times
+ \begin{cases}
+ \frac{1}{3} & \mathrm{scalars}\\
+ -\frac{8}{3} & \mathrm{Dirac}\\
+ -\frac{20}{3} & \mathrm{vectors}
+ \end{cases}.
+\]
+As always, the $\Gamma$ function diverges as we take $d \to 4$, and we need to remove the divergence using counterterms. In the \textoverline{MS} scheme with scale $\mu$ (with $g_{YM}^2 = \mu^{4 - d} g^2(\mu)$), we are left with a logarithmic dependence on $\mu^2$. Requiring the physics to be independent of $\mu$ gives the condition
+\[
+ \mu\frac{\d}{\d \mu}\left(\frac{1}{g^2(\mu)} + \log \left(\frac{\mu^2}{\text{something}}\right)\left(\frac{1}{2} C_{\mathrm{adj}, 1} - C_{\mathrm{adj}, 0} - \frac{n}{2} C_{R, \frac{1}{2}}\right)\right) = 0.
+\]
+So
+\[
+ -\frac{2}{g^3(\mu)} \beta(g) + 2\left(\frac{1}{2} C_{\mathrm{adj}, 1} - C_{\mathrm{adj}, 0} - \frac{n}{2} C_{R, \frac{1}{2}}\right) = 0.
+\]
+In other words, we find
+\begin{align*}
+ \beta(g) &= g^3(\mu) \left(\frac{1}{2} C_{\mathrm{adj}, 1} - C_{\mathrm{adj}, 0} - \frac{n}{2} C_{R, \frac{1}{2}}\right)\\
+ &= - \frac{g^3(\mu)}{(4\pi)^2} \left(\frac{11}{3} C(\mathrm{adj}) - \frac{4n}{3} C(R)\right)\\
+ &= - \frac{g^3}{(4\pi)^2} \left(\frac{11}{3} N - \frac{2n}{3}\right).
+\end{align*}
+
+Thus, for $n$ sufficiently small (explicitly, for $n < \frac{11N}{2}$), the $\beta$-function is negative! Hence, at least for small values of $g$, where we can trust perturbation theory, the coupling \emph{increases} as $\Lambda \to 0$, and decreases as $\Lambda \to \infty$.
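Concretely, writing $\beta(g) = -\frac{g^3}{(4\pi)^2} b_0$ with $b_0 = \frac{11}{3}N - \frac{2n}{3}$, asymptotic freedom holds precisely when $n < \frac{11N}{2}$. A quick numerical check, with illustrative flavour counts:

```python
from fractions import Fraction

# One-loop coefficient b0 = (11/3) N - (2/3) n for SU(N) Yang--Mills
# with n fundamental Dirac flavours; beta(g) = -g^3 b0 / (4 pi)^2,
# so b0 > 0 means the coupling decreases at high energies.

def b0(N, n):
    return Fraction(11 * N, 3) - Fraction(2 * n, 3)

assert b0(3, 6) == 7     # e.g. N = 3 with 6 flavours: asymptotically free
assert b0(3, 16) > 0     # still free just below the threshold n = 11 N / 2
assert b0(3, 17) < 0     # asymptotic freedom lost above it
```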
+
+The first consequence is that this theory has a sensible continuum limit! The coupling is large at low energies, and after a long story, this is supposed to lead to the confinement of quarks.
+
+In fact, in 1973, Coleman and Gross showed that the \emph{only} non-trivial QFTs in $d = 4$ with a continuum limit are non-abelian gauge theories. The proof was by \emph{exhaustion}! They considered the most general kind of QFT one can write down, computed the $\beta$-functions, and found that the only theories with this property were non-abelian gauge theories.
+
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/algebras.tex b/books/cam/III_L/algebras.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5e0aebb329da062860ad4899682ce956a209f2d6
--- /dev/null
+++ b/books/cam/III_L/algebras.tex
@@ -0,0 +1,3703 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2017}
+\def\nlecturer {C.\ J.\ B.\ Brookes}
+\def\ncourse {Algebras}
+
+\input{header}
+
+\newcommand\GKdim{\mathrm{GK}\mdash\mathrm{dim}}
+\DeclareMathOperator\Dim{Dim}
+\newcommand\HH{H\!H}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+The aim of the course is to give an introduction to algebras. The emphasis will be on non-commutative examples that arise in representation theory (of groups and Lie algebras) and the theory of algebraic D-modules, though you will learn something about commutative algebras in passing.
+
+Topics we discuss include:
+
+\begin{itemize}
+ \item Artinian algebras. Examples, group algebras of finite groups, crossed products. Structure theory. Artin--Wedderburn theorem. Projective modules. Blocks. $K_0$.
+
+ \item Noetherian algebras. Examples, quantum plane and quantum torus, differential operator algebras, enveloping algebras of finite dimensional Lie algebras. Structure theory. Injective hulls, uniform dimension and Goldie's theorem.
+
+ \item Hochschild chain and cochain complexes. Hochschild homology and cohomology. Gerstenhaber algebras.
+
+ \item Deformation of algebras.
+
+ \item Coalgebras, bialgebras and Hopf algebras.
+\end{itemize}
+
+\subsubsection*{Pre-requisites}
+It will be assumed that you have attended a first course on ring theory, e.g.\ IB Groups, Rings and Modules. Experience of other algebraic courses such as II Representation Theory, Galois Theory or Number Fields, or III Lie algebras will be helpful but not necessary.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+We start with the definition of an algebra. Throughout the course, $k$ will be a field.
+\begin{defi}[$k$-algebra]\index{$k$-algebra}\index{algebra}
+ A (unital) associative $k$-algebra is a $k$-vector space $A$ together with a linear map $m: A \otimes A \to A$, called the product map, and a linear map $u: k \to A$, called the unit map, such that
+ \begin{itemize}
+ \item The product induced by $m$ is associative.
+ \item $u(1)$ is the identity of the multiplication.
+ \end{itemize}
+\end{defi}
+In particular, we don't require the product to be commutative. We usually write $m(x \otimes y)$ as $xy$.
+
+\begin{eg}
+ Let $K/k$ be a finite field extension. Then $K$ is a (commutative) $k$-algebra.
+\end{eg}
+
+\begin{eg}
+ The $n\times n$ matrices $M_n(k)$ over $k$ form a non-commutative $k$-algebra.
+\end{eg}
+
+\begin{eg}
+ The quaternions $\H$ form an $\R$-algebra, with an $\R$-basis $1, i, j, k$, and multiplication given by
+ \[
+ i^2 = j^2 = k^2 = -1,\quad ij = k,\quad ji = -k.
+ \]
+ This is in fact a \term{division algebra} (or \term{skew field}), i.e.\ the non-zero elements have multiplicative inverses.
+\end{eg}
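The division algebra structure of $\H$ is easy to check in a concrete model, embedding $\H$ into the $2 \times 2$ complex matrices (one standard choice of basis matrices):

```python
import numpy as np

# The quaternions as a real subalgebra of M_2(C), with basis 1, i, j, k.
one = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = np.array([[0, 1j], [1j, 0]])

# Defining relations: i^2 = j^2 = k^2 = -1, ij = k, ji = -k.
assert np.allclose(i @ i, -one)
assert np.allclose(j @ j, -one)
assert np.allclose(k @ k, -one)
assert np.allclose(i @ j, k)
assert np.allclose(j @ i, -k)

# Every non-zero quaternion q = a + bi + cj + dk is invertible:
# its inverse is the conjugate divided by the norm a^2 + b^2 + c^2 + d^2.
a, b, c, d = 1.0, 2.0, -1.0, 0.5
q = a * one + b * i + c * j + d * k
q_inv = (a * one - b * i - c * j - d * k) / (a**2 + b**2 + c**2 + d**2)
assert np.allclose(q @ q_inv, one)
```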
+
+\begin{eg}
+ Let $G$ be a finite group. Then the group algebra
+ \[
+ kG = \left\{\sum \lambda_g g: g \in G, \lambda_g \in k\right\}
+ \]
+ with the obvious multiplication induced by the group operation is a $k$-algebra.
+
+ These are the associative algebras underlying the representation theory of finite groups.
+\end{eg}
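The multiplication in $kG$ is concretely a convolution over the group: writing an element as a coefficient vector $(\lambda_g)_{g \in G}$, the product has coefficients $(\lambda\mu)_g = \sum_h \lambda_h \mu_{h^{-1}g}$. A minimal sketch for $G = \Z/3$ (written additively, over floats standing in for $k = \Q$):

```python
# The group algebra kG for G = Z/3, with elements stored as coefficient
# vectors indexed by group elements, and multiplication by convolution
# over the group operation.

n = 3  # G = Z/n

def mult(x, y):
    """Product in kG: (x * y)_g = sum over g1 + g2 = g of x_{g1} y_{g2}."""
    z = [0.0] * n
    for g1 in range(n):
        for g2 in range(n):
            z[(g1 + g2) % n] += x[g1] * y[g2]
    return z

e = [1.0, 0.0, 0.0]  # the identity of G, which is the unit of kG
g = [0.0, 1.0, 0.0]  # a generator of G

assert mult(e, g) == g           # unit law
assert mult(g, mult(g, g)) == e  # g^3 = e inside kG
```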
+
+Most of the time, we will just care about algebras that are finite-dimensional as $k$-vector spaces. However, often what we need for the proofs to work is not that the algebra is finite-dimensional, but just that it is \emph{Artinian}. These algebras are defined by some finiteness condition on the ideals.
+
+\begin{defi}[Ideal]\index{ideal}
+ A \emph{left ideal} of $A$ is a $k$-subspace $I$ of $A$ such that if $x \in A$ and $y \in I$, then $xy \in I$. A \emph{right ideal} is one where we require $yx \in I$ instead. An \emph{ideal} is something that is both a left ideal and a right ideal.
+\end{defi}
+Since the multiplication is not necessarily commutative, we have to make the distinction between left and right things. Most of the time, we just talk about the left case, as the other case is entirely analogous.
+
+The definition we want is the following:
+\begin{defi}[Artinian algebra]\index{Artinian algebra}\index{algebra!Artinian}
+ An algebra $A$ is \emph{left Artinian} if it satisfies the \term{descending chain condition} (\term{DCC}) on left ideals, i.e.\ if we have a descending chain of left ideals
+ \[
+ I_1 \geq I_2 \geq I_3 \geq \cdots,
+ \]
+ then there is some $N$ such that $I_{N + m} = I_{N}$ for all $m \geq 0$.
+
+ We say an algebra is \emph{Artinian} if it is both left and right Artinian.
+\end{defi}
+
+\begin{eg}
+ Any finite-dimensional algebra is Artinian.
+\end{eg}
+
+The main classification theorem for Artinian algebras we will prove is the following result:
+\begin{thm}[Artin--Wedderburn theorem]\index{Artin--Wedderburn theorem}
+ Let $A$ be a left-Artinian algebra such that the intersection of the maximal left ideals is zero. Then $A$ is the direct sum of finitely many matrix algebras over division algebras.
+\end{thm}
+When we actually get to the theorem, we will rewrite this in a way that seems a bit more natural.
+
+One familiar application of this theorem is in representation theory. The group algebra of a finite group is finite-dimensional, and in particular Artinian. We will later see that Maschke's theorem is equivalent to saying that the hypothesis of the theorem holds. So this puts a very strong constraint on what the group algebra looks like.
+
+After studying Artinian rings, we'll talk about Noetherian algebras.
+\begin{defi}[Noetherian algebra]\index{Noetherian algebra}\index{algebra!Noetherian}
+ An algebra is \emph{left Noetherian} if it satisfies the \term{ascending chain condition} (\term{ACC}) on left ideals, i.e.\ if
+ \[
+ I_1 \leq I_2 \leq I_3 \leq \cdots
+ \]
+ is an ascending chain of left ideals, then there is some $N$ such that $I_{N + m} = I_N$ for all $m \geq 0$.
+
+ Similarly, we can define right Noetherian algebras, and say an algebra is \emph{Noetherian} if it is both left and right Noetherian.
+\end{defi}
+
+We can again look at some examples.
+\begin{eg}
+ Again all finite-dimensional algebras are Noetherian.
+\end{eg}
+
+\begin{eg}
+ In the commutative case, Hilbert's basis theorem tells us a polynomial algebra $k[X_1, \cdots, X_n]$ in finitely many variables is Noetherian. Similarly, the power series rings $k[[X_1, \cdots, X_n]]$ are Noetherian.
+\end{eg}
+
+\begin{eg}
+ The \term{universal enveloping algebra} of a finite-dimensional Lie algebra is the (associative!) algebra that underpins the representation theory of the Lie algebra.
+\end{eg}
+
+\begin{eg}
+ Some differential operator algebras are Noetherian. We assume $\Char k = 0$. Consider the polynomial ring $k[X]$. We have operators ``multiplication by $X$'' and ``differentiate with respect to $X$'' on $k[X]$. We can form the algebra $k[X, \frac{\partial}{\partial X}]$ of differential operators on $k[X]$, with multiplication given by the composition of operators. This is called the \term{Weyl algebra} $A_1$. We will show that this is a non-commutative Noetherian algebra.
+\end{eg}
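The defining relation of $A_1$ is the commutation relation $[\frac{\partial}{\partial X}, X] = 1$ as operators on $k[X]$, which one can check concretely (polynomials stored as coefficient lists, a representation chosen here purely for illustration):

```python
# The Weyl algebra relation [D, X] = 1 acting on k[X], with a polynomial
# a_0 + a_1 X + ... stored as the coefficient list [a_0, a_1, ...].

def X(p):
    """Multiplication by X: shifts all coefficients up by one degree."""
    return [0.0] + list(p)

def D(p):
    """Differentiation with respect to X."""
    return [i * c for i, c in enumerate(p)][1:]

def commutator(p):
    """[D, X] applied to p, padding with zeros so lengths match."""
    a, b = D(X(p)), X(D(p))
    m = max(len(a), len(b))
    a = a + [0.0] * (m - len(a))
    b = b + [0.0] * (m - len(b))
    return [x - y for x, y in zip(a, b)]

p = [1.0, 2.0, 3.0]  # 1 + 2X + 3X^2
assert commutator(p) == p  # [D, X] acts as the identity operator
```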
+
+\begin{eg}
+ Some group algebras are Noetherian. Clearly all group algebras of finite groups are Noetherian, but the group algebras of certain infinite groups are Noetherian. For example, we can take
+ \[
+ G = \left\{
+ \begin{pmatrix}
+ 1 & \lambda & \mu\\
+ 0 & 1 & \nu\\
+ 0 & 0 & 0
+ \end{pmatrix}: \lambda, \mu, \nu \in \Z
+ \right\},
+ \]
+ and this is Noetherian. However, we shall not prove this.
+\end{eg}
+
+We will see that all left Artinian algebras are left Noetherian. While there is a general theory of non-commutative Noetherian algebras, it is not as useful as the theory of \emph{commutative} Noetherian algebras.
+
+In the commutative case, we often look at $\Spec A$, the set of prime ideals of $A$. However, in the non-commutative case there are sometimes very few prime ideals, and so $\Spec$ is not going to keep us busy.
+\begin{eg}
+ In the Weyl algebra $A_1$, the only prime ideals are $0$ and $A_1$.
+\end{eg}
+
+We will prove a theorem of Goldie:
+\begin{thm}[Goldie's theorem]\index{Goldie's theorem}
+ Let $A$ be a right Noetherian algebra with no non-zero ideals all of whose elements are nilpotent. Then $A$ embeds in a finite direct sum of matrix algebras over division algebras.
+\end{thm}
+
+Some types of Noetherian algebras can be thought of as non-commutative polynomial algebras and non-commutative power series, i.e.\ they are \emph{deformations} of the analogous commutative algebra. For example, we say $A_1$ is a deformation of the polynomial algebra $k[X, Y]$, where instead of having $XY - YX = 0$, we have $XY - YX = 1$. This also applies to enveloping algebras and Iwasawa algebras. We will study when one can deform the multiplication so that it remains associative, and this is bound up with the cohomology theory of associative algebras --- \emph{Hochschild cohomology}. The Hochschild complex has rich algebraic structure, and this will allow us to understand how we can deform the algebra.
+
+At the end, we shall quickly talk about bialgebras and Hopf algebras. In a bialgebra, one also has a comultiplication map $A \to A \otimes A$, which in representation theory is crucial in saying how to regard a tensor product of two representations as a representation.
+
+%We finish with a little bit of history. In 1890's, Hilbert proved some big theorems in complex polynomial algebras. In 1920's, Noether came and abstracted out the key properties that made Hilbert's work work, and came up with the notion of the Noetherian property. In 1930s and 1940s, the development of standard commutative algebra. In 1945 came Hochschild, and in 1960s came Goldi.e.\ In 1960 we had Gerstenhaber, who told us about the structure of the Hochschild complex. In the 1960s and 1970s, the study of enveloping algebras of differential operators started, and from the 1980s we started studying quantum groups, whose motivation came from mathematical physics, but got the algebraists excited.
+
+\section{Artinian algebras}
+\subsection{Artinian algebras}
+We continue writing down some definitions. We already defined left and right Artinian algebras in the introduction. Most examples we'll meet are in fact finite-dimensional vector spaces over $k$. However, there exist some more perverse examples:
+
+\begin{eg}
+ Let
+ \[
+ A = \left\{
+ \begin{pmatrix}
+ r & s\\
+ 0 & t
+ \end{pmatrix}: r \in \Q, s, t \in \R
+ \right\}
+ \]
+ Then $A$ is right Artinian but not left Artinian as a $\Q$-algebra. To see that it is not left Artinian, note that there is an ideal
+ \[
+ I = \left\{
+ \begin{pmatrix}
+ 0 & s\\
+ 0 & 0
+ \end{pmatrix}: s \in \R
+ \right\} \cong \R
+ \]
+ of $A$, and a matrix $\begin{pmatrix}r & s\\0 & t\end{pmatrix}$ acts on this on the left by sending $\begin{pmatrix}0 & s'\\ 0 & 0\end{pmatrix}$ to $\begin{pmatrix}0 & rs'\\ 0 & 0\end{pmatrix}$. So left ideals of $A$ contained in $I$ correspond to $\Q$-subspaces of $\R$, and since $\R$ is an infinite-dimensional $\Q$-vector space, there is an infinite strictly descending chain of left ideals contained in $I$.
+
+ The fact that it is right Artinian is a direct verification. Indeed, it is not difficult to enumerate all the right ideals, which is left as an exercise for the reader.
+\end{eg}
+
+As in the case of commutative algebra, we can study the modules of an algebra.
+\begin{defi}[Module]\index{module}\index{bimodule}
+ Let $A$ be an algebra. A \emph{left $A$-module} is a $k$-vector space $M$ and a bilinear map
+ \[
+ \begin{tikzcd}[cdmap]
+ A \otimes M \ar[r] & M\\
+ a \otimes m \ar[r, maps to] & am
+ \end{tikzcd}
+ \]
+ such that $(ab)m = a(bm)$ for all $a, b \in A$ and $m \in M$. Right $A$-modules are defined similarly.
+
+ An \emph{$A\mdash A$-bimodule} is a vector space $M$ that is both a left $A$-module and a right $A$-module, such that the two actions commute --- for $a, b \in A$ and $x \in M$, we have
+ \[
+ a(xb) = (ax)b.
+ \]
+\end{defi}
+
+\begin{eg}
+ The algebra $A$ itself is a left $A$-module. We write this as $_A A$, and call this the \term{left regular representation}. Similarly, the right action is denoted $A_A$. These two actions are compatible by associativity, so $A$ is an $A\mdash A$-bimodule.
+\end{eg}
+
+If we write $\End_k(A)$ for the $k$-linear maps $A \to A$, then $\End_k(A)$ is naturally a $k$-algebra by composition, and we have a $k$-algebra homomorphism $A \to \End_k(A)$ that sends $a \in A$ to multiplication by $a$ on the left. However, if we want to multiply on the right instead, we no longer get a $k$-algebra homomorphism $A \to \End_k(A)$. Instead, it is a map $A \to \End_k(A)^\op$, where
+\begin{defi}[Opposite algebra]\index{opposite algebra}\index{algebra!opposite}\index{$A^\op$}
+ Let $A$ be a $k$-algebra. We define the \emph{opposite algebra} $A^\op$ to be the algebra with the same underlying vector space, but with multiplication given by
+ \[
+ x \cdot y = yx.
+ \]
+ Here on the left we have the multiplication in $A^\op$ and on the right we have the multiplication in $A$.
+\end{defi}
+In general, a left $A$-module is a right $A^\op$-module.
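For a concrete instance of this, when $A = M_n(k)$ the transpose map is an isomorphism $A \to A^\op$, since $(xy)^T = y^T x^T$: it turns left multiplication into right multiplication. A small numerical sketch:

```python
import numpy as np

# The transpose map M_n(k) -> M_n(k)^op intertwines the two products,
# since (x y)^T = y^T x^T.

rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 3, 3))

def op_mult(x, y):
    """Multiplication in A^op: x . y := y x."""
    return y @ x

# Transpose sends the product in A to the product in A^op.
assert np.allclose((x @ y).T, op_mult(x.T, y.T))
```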
+
+As in the case of ring theory, we can talk about prime ideals. However, we will adopt a slightly different definition:
+\begin{defi}[Prime ideal]\index{prime ideal}\index{ideal!prime}
+ An ideal $P$ is \emph{prime} if it is a proper ideal, and if $I$ and $J$ are ideals with $IJ \subseteq P$, then either $I \subseteq P$ or $J \subseteq P$.
+\end{defi}
+It is an exercise to check that this coincides in the commutative case with the definition using elements.
+
+\begin{defi}[Annihilator]\index{annihilator}
+ Let $M$ be a left $A$-module and $m \in M$. We define the \emph{annihilators} to be
+ \begin{align*}
+ \Ann(m) &= \{a \in A: am = 0\}\\
+ \Ann(M) &= \{a \in A: am = 0\text{ for all }m \in M\} = \bigcap_{m \in M} \Ann(m).
+ \end{align*}
+\end{defi}
+Note that $\Ann(m)$ is a left ideal of $A$, and is in fact the kernel of the $A$-module homomorphism $A \to M$ given by $x \mapsto xm$. We'll denote the image of this map by $Am$, a left submodule of $M$, and we have
+\[
+ \frac{A}{\Ann(m)} \cong Am.
+\]
+On the other hand, it is easy to see that $\Ann(M)$ is in fact a (two-sided) ideal.
+
+\begin{defi}[Simple module]\index{simple module}\index{irreducible module}\index{module!simple}\index{module!irreducible}
+ A non-zero module $M$ is \emph{simple} or \emph{irreducible} if the only submodules of $M$ are $0$ and $M$.
+\end{defi}
+
+It is easy to see that
+\begin{prop}
+ Let $A$ be an algebra and $I$ a left ideal. Then $I$ is a maximal left ideal iff $A/I$ is simple.
+\end{prop}
+
+\begin{eg}
+ $\Ann(m)$ is a maximal left ideal iff $Am$ is irreducible.
+\end{eg}
+
+\begin{prop}
+ Let $A$ be an algebra and $M$ a simple module. Then $M \cong A/I$ for some (maximal) left ideal $I$ of $A$.
+\end{prop}
+
+\begin{proof}
+ Pick an arbitrary non-zero element $m \in M$, and define the $A$-module homomorphism $\varphi: A \to M$ by $\varphi(a) = am$. Then the image is a non-zero submodule, and hence must be $M$. Then by the first isomorphism theorem, we have $M \cong A/\ker \varphi$.
+\end{proof}
+
+Before we start doing anything, we note the following convenient lemma:
+\begin{lemma}
+ Let $M$ be a non-zero finitely-generated $A$-module. Then $M$ has a maximal proper submodule $M'$.
+\end{lemma}
+
+\begin{proof}
+ Let $m_1, \cdots, m_k \in M$ be a minimal generating set. Then in particular $N = \bra m_1, \cdots, m_{k - 1}\ket$ is a proper submodule of $M$. Moreover, a submodule of $M$ containing $N$ is proper iff it does not contain $m_k$, and this property is preserved under increasing unions. So by Zorn's lemma, there is a maximal proper submodule.
+\end{proof}
+
+\begin{defi}[Jacobson radical]\index{Jacobson radical}\index{$J(A)$}
+ The \emph{Jacobson radical} $J(A)$ of $A$ is the intersection of all maximal left ideals of $A$.
+\end{defi}
+This is in fact an ideal, and not just a left one, because
+\[
+ J(A) = \bigcap \{\text{maximal left ideals}\} = \bigcap_{m \in M, M\text{ simple}} \Ann(m) = \bigcap_{M\text{ simple}} \Ann(M),
+\]
+which we have established is an ideal. However, it is not yet clear that this is independent of us saying ``left'' instead of ``right''. In fact, it is, and this follows from the Nakayama lemma:
+
+\begin{lemma}[Nakayama lemma]\index{Nakayama lemma}
+ The following are equivalent for a left ideal $I$ of $A$.
+ \begin{enumerate}
+ \item $I \leq J(A)$.
+ \item For any finitely-generated left $A$-module $M$, if $IM = M$, then $M = 0$, where $IM$ is the module generated by elements of the form $am$, with $a \in I$ and $m \in M$.
+ \item $G = \{1 + a: a \in I\} = 1 + I$ is a subgroup of the unit group of $A$.
+ \end{enumerate}
+\end{lemma}
+In particular, this shows that the Jacobson radical is the largest ideal satisfying (iii), which is something that does not depend on handedness.
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): Suppose $I \leq J(A)$ and $M \not= 0$ is a finitely-generated $A$-module. We show that $IM \lneq M$.
+
+ Let $N$ be a maximal submodule of $M$. Then $M/N$ is a simple module, so for any $\bar{m} \in M/N$, we know $\Ann(\bar{m})$ is a maximal left ideal. So $J(A) \leq \Ann(M/N)$. So $IM \leq J(A) M \leq N \lneq M$.
+ \item (ii) $\Rightarrow$ (iii): Assume (ii). We let $x \in I$ and set $y = 1 + x$. Hence $1 = y - x \in Ay + I$. Since $Ay + I$ is a left ideal, we know $Ay + I = A$. In other words, we know
+ \[
+ I \left(\frac{A}{Ay}\right) = \frac{A}{Ay}.
+ \]
+ Now using (ii) on the finitely-generated module $A/Ay$ (it is in fact generated by $1$), we know that $A/Ay = 0$. So $A = Ay$. So there exists $z \in A$ such that $1 = zy = z(1 + x)$. So $(1 + x)$ has a left inverse, and this left inverse $z$ lies in $G$, since we can write $z = 1 - zx$. So $G$ is a subgroup of the unit group of $A$.
+
+ \item (iii) $\Rightarrow$ (i): Suppose $I_1$ is a maximal left ideal of $A$. Let $x \in I$. If $x \not \in I_1$, then $I_1 + Ax = A$ by maximality of $I_1$. So $1 = y + zx$ for some $y \in I_1$ and $z \in A$. So $y = 1 - zx \in G$. So $y$ is invertible. But $y \in I_1$. So $I_1 = A$. This is a contradiction. So $x \in I_1$, i.e.\ $I \leq I_1$, and this is true for all maximal left ideals $I_1$. Hence $I \leq J(A)$.\qedhere
+ \end{itemize}
+\end{proof}
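+
+As a quick illustration of how the lemma can be used to compute Jacobson radicals, here is a standard example:
+\begin{eg}
+ Let $A$ be the algebra of upper-triangular $2 \times 2$ matrices over $k$, and let $N \leq A$ be the set of strictly upper-triangular matrices. Then $N$ is an ideal with $N^2 = 0$, so every element of $1 + N$ has its inverse in $1 + N$, and $1 + N$ is a subgroup of the unit group of $A$. Hence $N \leq J(A)$ by the lemma. Conversely, the two maximal left ideals of $A$ obtained as preimages of the maximal ideals of $A/N \cong k \oplus k$ intersect in $N$, so $J(A) \leq N$. Therefore $J(A) = N$.
+\end{eg}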
+
+We now come to the important definition:
+\begin{defi}[Semisimple algebra]\index{semisimple algebra}\index{algebra!semisimple}
+ An algebra is \emph{semisimple} if $J(A) = 0$.
+\end{defi}
+We will very soon see that for Artinian algebras, being semi-simple is equivalent to a few other very nice properties, such as being completely reducible. For now, we shall content ourselves with some examples.
+
+\begin{eg}
+ For any $A$, we know $A/J(A)$ is always semisimple.
+\end{eg}
+
+We can also define
+\begin{defi}[Simple algebra]\index{simple algebra}\index{algebra!simple}
+ An algebra is \emph{simple} if the only ideals are $0$ and $A$.
+\end{defi}
+It is trivially true that any simple algebra is semi-simple --- the Jacobson radical is an ideal, and it is not $A$. A particularly important example is the following:
+
+\begin{eg}
+ Consider $M_n(k)$. We let $e_i$ be the matrix with $1$ in the $(i, i)$th entry and zero everywhere else. This is idempotent, i.e.\ $e_i^2 = e_i$. It is also straightforward to check that
+ \[
+ A e_i =
+ \left\{
+ \begin{pmatrix}
+ 0 & \cdots & 0 & a_1 & 0 & \cdots & 0\\
+ 0 & \cdots & 0 & a_2 & 0 & \cdots & 0\\
+ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\
+ 0 & \cdots & 0 & a_n & 0 & \cdots & 0
+ \end{pmatrix}
+ \right\}
+ \]
+ The non-zero column is, of course, the $i$th column. Similarly, $e_i A$ is the matrices that are zero apart from in the $i$th row. These are in fact all left and all right ideals respectively. So the only ideals are $0$ and $A$.
+
+ As a left $A$-module, we can decompose $A$ as
+ \[
+ _A A = \bigoplus_{i = 1}^n A e_i,
+ \]
+ which is a decomposition into \emph{simple} modules.
+\end{eg}
+
+\begin{defi}[Completely reducible]\index{completely reducible}
+ A module $M$ of $A$ is \emph{completely reducible} if it is a sum of simple submodules.
+\end{defi}
+Here in the definition, we said ``sum'' instead of ``direct sum'', but the following proposition shows it doesn't matter:
+
+\begin{prop}
+ Let $M$ be an $A$-module. Then the following are equivalent:
+ \begin{enumerate}
+ \item $M$ is completely reducible.
+ \item $M$ is the direct sum of simple modules.
+ \item Every submodule of $M$ has a \term{complement}, i.e.\ for any submodule $N$ of $M$, there is a complement $N'$ such that $M = N \oplus N'$.
+ \end{enumerate}
+\end{prop}
+Often, the last condition is the most useful in practice.
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): Let $M$ be completely reducible, and consider the set
+ \[
+ \left\{\{S_\alpha \leq M\} : S_\alpha\text{ are simple},\; \sum S_\alpha\text{ is a direct sum}\right\}.
+ \]
+ Notice this set is closed under increasing unions, since the property of being a direct sum is only checked on finitely many elements. So by Zorn's lemma, it has a maximal element, and we let $N$ be the sum of the $S_\alpha$ in this maximal element.
+
+ Suppose this were not all of $M$. Then, since $M$ is a sum of simple modules, there is some simple $S \leq M$ such that $S \not\subseteq N$. Then $S \cap N \subsetneq S$, so by simplicity they intersect trivially. So $N + S$ is a direct sum, contradicting maximality. So we must have $N = M$, and $M$ is the direct sum of simple modules.
+ \item (ii) $\Rightarrow$ (i) is trivial.
+ \item (i) $\Rightarrow$ (iii): Let $N \leq M$ be a submodule, and consider
+ \[
+ \left\{\{S_\alpha \leq M\} : S_\alpha\text{ are simple},\; N + \sum S_\alpha\text{ is a direct sum}\right\}.
+ \]
+ Again this set has a maximal element by Zorn's lemma, and we let $P$ be the direct sum of those $S_\alpha$. If $P \oplus N$ is not all of $M$, then since $M$ is a sum of simple modules, we can pick a simple $S \leq M$ not contained in $P \oplus N$. Then $S \cap (P \oplus N) = 0$ by simplicity, so $N \oplus P \oplus S$ is a direct sum, contradicting maximality.
+ \item (iii) $\Rightarrow$ (i): It suffices to show that if $N < M$ is a proper submodule, then there exists a simple submodule of $M$ that intersects $N$ trivially. Indeed, we can then take $N$ to be the sum of all simple submodules of $M$, and this forces $N = M$.
+
+ To do so, pick an $x \not \in N$, and let $P$ be a submodule of $M$ maximal among those satisfying $P \cap N = 0$ and $x \not \in N \oplus P$. Then $N \oplus P$ is a proper submodule of $M$. Let $S$ be a complement. We claim $S$ is simple.
+
+ If not, we can find a proper non-zero submodule $S'$ of $S$. Let $Q$ be a complement of $N \oplus P \oplus S'$. Then we can write
+ \[
+ \begin{array}{ccccccccc}
+ M &=& N &\oplus& P &\oplus& S' &\oplus& Q\\
+ x &=& n &+& p &+& s&+& q
+ \end{array}.
+ \]
+ By assumption, $s$ and $q$ are not both zero. We wlog assume $s$ is non-zero. Then $P \oplus Q$ is a larger submodule satisfying $(P \oplus Q) \cap N = 0$ and $x \not \in N \oplus (P \oplus Q)$. This is a contradiction. So $S$ is simple, and we are done.\qedhere
+ \end{itemize}
+\end{proof}
+
+Using these different characterizations, we can prove that completely reducible modules are closed under the familiar operations.
+\begin{prop}
+ Sums, submodules and quotients of completely reducible modules are completely reducible.
+\end{prop}
+
+\begin{proof}
+ It is clear by definition that sums of completely reducible modules are completely reducible.
+
+ Next consider quotients. Let $M$ be completely reducible, say $M = \sum_\alpha S_\alpha$ with each $S_\alpha$ simple, and let $q: M \to M/N$ be the quotient map. Then $M/N = \sum_\alpha q(S_\alpha)$, and each $q(S_\alpha)$ is either zero or simple, since the kernel of $q|_{S_\alpha}$ is a submodule of the simple module $S_\alpha$. So $M/N$ is a sum of simple modules.
+
+ Finally, to see that submodules are completely reducible, let $M$ be completely reducible and $N \leq M$. By the previous proposition, $N$ has a complement, i.e.\ we can write
+ \[
+ M = N \oplus P
+ \]
+ for some $P$. Then $N \cong M/P$, which is completely reducible by the quotient case.
+\end{proof}
+
+We will show that every left Artinian algebra is completely reducible over itself iff it is semi-simple. We can in fact prove a more general fact for $A$-modules. To do so, we need a generalization of the Jacobson radical.
+
+\begin{defi}[Radical]\index{radical}
+ For a module $M$, we write $\Rad(M)$ for the intersection of maximal submodules of $M$, and call it the \emph{radical} of $M$.
+\end{defi}
+Thus, we have $\Rad(_A A) = J(A) = \Rad(A_A)$.
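+
+\begin{eg}
+ Let $A = k[x]/(x^n)$, regarded as a module over itself. The submodules of $_AA$ are exactly the ideals $(x^i)$ for $0 \leq i \leq n$, so the unique maximal submodule is $(x)$, and
+ \[
+ \Rad(_AA) = J(A) = (x).
+ \]
+\end{eg}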
+
+\begin{prop}
+ Let $M$ be an $A$-module satisfying the descending chain condition on submodules. Then $M$ is completely reducible iff $\Rad(M) = 0$.
+\end{prop}
+
+\begin{proof}
+ It is clear that if $M$ is completely reducible, then $\Rad(M) = 0$. Indeed, we can write
+ \[
+ M = \bigoplus_{\alpha \in A} S_\alpha,
+ \]
+ where each $S_\alpha$ is simple. Each $\bigoplus_{\beta \not= \alpha} S_\beta$ is then a maximal submodule, since the corresponding quotient is the simple module $S_\alpha$. Hence
+ \[
+ \Rad(M) \leq \bigcap_{\alpha \in A} \left(\bigoplus_{\beta \in A \setminus \{\alpha\}} S_\beta\right) = \{0\}.
+ \]
+ Conversely, if $\Rad(M) = 0$, we note that since $M$ satisfies the descending chain condition on submodules, there must be a \emph{finite} collection $M_1, \cdots, M_n$ of maximal submodules whose intersection vanish. Then consider the map
+ \[
+ \begin{tikzcd}[cdmap]
+ M \ar[r] & \displaystyle\bigoplus_{i = 1}^n \frac{M}{M_i}\\
+ x \ar[r, maps to]& (x + M_1, x + M_2, \cdots, x + M_n)
+ \end{tikzcd}
+ \]
+ The kernel of this map is the intersection of the $M_i$, which is trivial. So this embeds $M$ as a submodule of $\bigoplus \frac{M}{M_i}$. But each $\frac{M}{M_i}$ is simple, so $M$ is a submodule of a completely reducible module, hence completely reducible.
+\end{proof}
+
+\begin{cor}
+ If $A$ is a semi-simple left Artinian algebra, then $_AA$ is completely reducible.
+\end{cor}
+
+\begin{cor}
+ If $A$ is a semi-simple left Artinian algebra, then every left $A$-module is completely reducible.
+\end{cor}
+
+\begin{proof}
+ Every $A$-module $M$ is a quotient of a direct sum of copies of $_AA$. Explicitly, we have a map
+ \[
+ \begin{tikzcd}[cdmap]
+ \displaystyle\bigoplus_{m \in M} {}_A A \ar[r] & M\\
+ (a_m) \ar[r, maps to] & \sum a_m m
+ \end{tikzcd}
+ \]
+ Then this map is clearly surjective, and thus $M$ is a quotient of $\bigoplus_M {}_AA$.
+\end{proof}
+
+If $A$ is not semi-simple, then it turns out it is rather easy to figure out the radical of $M$, at least if $M$ is finitely-generated.
+\begin{lemma}
+ Let $A$ be left Artinian, and $M$ a finitely generated left $A$-module, then $J(A) M = \Rad(M)$.
+\end{lemma}
+
+\begin{proof}
+ Let $M'$ be a maximal submodule of $M$. Then $M/M'$ is simple, and is in fact $A/I$ for some maximal left ideal $I$. Then we have
+ \[
+ J(A) \left(\frac{M}{M'}\right) = 0,
+ \]
+ since $J(A) \leq I$. Therefore $J(A) M \leq M'$. So $J(A)M \leq \Rad(M)$.
+
+ Conversely, we know $\frac{M}{J(A) M}$ is an $A/J(A)$-module, and is hence completely reducible as $A/J(A)$ is semi-simple (and left Artinian). Since an $A$-submodule of $\frac{M}{J(A) M}$ is the same as an $A/J(A)$-submodule, it follows that it is completely reducible as an $A$-module as well. So
+ \[
+ \Rad\left(\frac{M}{J(A)M}\right) = 0,
+ \]
+ and hence $\Rad(M) \leq J(A) M$.
+\end{proof}
+
+\begin{prop}
+ Let $A$ be left Artinian. Then
+ \begin{enumerate}
+ \item $J(A)$ is nilpotent, i.e.\ there exists some $r$ such that $J(A)^r = 0$.
+ \item If $M$ is a finitely-generated left $A$-module, then it is both left Artinian and left Noetherian.
+ \item $A$ is left Noetherian.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Since $A$ is left-Artinian, and $\{J(A)^r: r \in \N\}$ is a descending chain of ideals, it must eventually be constant. So $J(A)^r = J(A)^{r + 1}$ for some $r$. If this is non-zero, then again using the descending chain condition, we see there is a left ideal $I$ with $J(A)^r I \not= 0$ that is minimal with this property (one such ideal exists, say $J(A)$ itself).
+
+ Now pick $x \in I$ with $J(A)^r x \not = 0$. Since $J(A)^{2r} = J(A)^r$, it follows that $J(A)^r (J(A)^r x) \not= 0$. But $J(A)^r x$ is a left ideal contained in $I$, so by minimality, $J(A)^r x = I$. So there exists some $a \in J(A)^r$ with $x = ax$. So
+ \[
+ (1 - a) x = 0.
+ \]
+ But $1 - a$ is a unit. So $x = 0$. This is a contradiction. So $J(A)^r = 0$.
+ \item Let $M_i = J(A)^i M$. Then $M_i/M_{i + 1}$ is annihilated by $J(A)$, and hence completely reducible (it is a module over semi-simple $A/J(A)$). Since $M$ is a finitely generated left $A$-module for a left Artinian algebra, it satisfies the descending chain condition for submodules (exercise), and hence so does $M_i/M_{i + 1}$. % fill in exercise
+
+ So we know $M_i/M_{i + 1}$ is a finite sum of simple modules, and therefore satisfies the ascending chain condition. So each $M_i/M_{i + 1}$ is left Noetherian, and hence so is $M$ (exercise).
+
+ \item Follows from (ii) since $A$ is a finitely-generated left $A$-module.\qedhere
+ \end{enumerate}
+\end{proof}
+
+%A last little lemma before doing Artin--Wedderburn:
+%\begin{lemma}
+% Let $A$ be left Artinian and suppose $J(A)$ is finitely-generated as a left $A$-module, and $_AA$ is completely reducible, then $A$ is semi-simple.
+%\end{lemma}
+%
+%\begin{proof}
+% If $_AA$ is completely reducible, then $J(A)$ has a complement. Write
+% \[
+% _A A = J(A) \oplus \frac{_AA}{J(A)}.
+% \]
+% Multiplying on the left by $J(A)$ gives $J(A) = J(A)^2$. Since $J(A)$ is nilpotent, this implies we must have $J(A) = 0$. % or by nakamyama?
+%\end{proof}
+
+
+\subsection{Artin--Wedderburn theorem}
+We are going to state the Artin--Wedderburn theorem for right (as opposed to left) things, because this makes the notation easier for us.
+\begin{thm}[Artin--Wedderburn theorem]\index{Artin--Wedderburn theorem}
+ Let $A$ be a semisimple right Artinian algebra. Then
+ \[
+ A = \bigoplus_{i = 1}^r M_{n_i}(D_i),
+ \]
+ for some division algebras $D_i$, and these factors are uniquely determined.
+
+ $A$ has exactly $r$ isomorphism classes of simple (right) modules $S_i$, and
+ \[
+ \End_A (S_i) = \{\text{$A$-module homomorphisms $S_i \to S_i$}\} \cong D_i,
+ \]
+ and
+ \[
+ \dim_{D_i}(S_i) = n_i.
+ \]
+ If $A$ is simple, then $r = 1$.
+\end{thm}
+If we had the left version instead, then we need to insert $\op$'s somewhere.
+
+Artin--Wedderburn is an easy consequence of a few simple lemmas. The key idea that leads to Artin--Wedderburn is the observation that the map $A \to \End_A(A_A)$ sending $a$ to left-multiplication by $a$ is an isomorphism of algebras. So we need to first understand endomorphism algebras, starting with Schur's lemma.
+
+\begin{lemma}[Schur's lemma]\index{Schur's lemma}
+ Let $M_1, M_2$ be simple right $A$-modules. Then either $M_1 \cong M_2$, or $\Hom_A(M_1, M_2) = 0$. If $M$ is a simple $A$-module, then $\End_A(M)$ is a division algebra.
+\end{lemma}
+
+\begin{proof}
+ A non-zero $A$-module homomorphism $M_1 \to M_2$ must be injective, as the kernel is a submodule. Similarly, it must be surjective, since the image is a non-zero submodule. So it is an isomorphism, and in particular has an inverse. So the last part follows as well.
+\end{proof}
+
+As mentioned, we are going to exploit the isomorphism $A_A \cong \End_A(A_A)$. This is easy to see directly, but we can prove a slightly more general result, for the sake of it:
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $M$ is a right $A$-module and $e$ is an idempotent in $A$, i.e.\ $e^2 = e$, then $Me \cong \Hom_A(eA, M)$.
+ \item We have
+ \[
+ eAe \cong \End_A(eA).
+ \]
+ In particular, we can take $e = 1$, and recover $\End_A(A_A) \cong A$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We define maps
+ \[
+ \begin{tikzcd}[cdmap]
+ me \ar[r, maps to] & (ex \mapsto mex)\\
+ Me \ar[r, "f_1", yshift=2] & \Hom(eA, M) \ar[l, "f_2", yshift=-2]\\
+ \alpha(e) & \alpha \ar[l, maps to]
+ \end{tikzcd}
+ \]
+ We note that $\alpha(e) = \alpha(e^2) = \alpha(e) e \in Me$. So this is well-defined. By inspection, these maps are inverse to each other. So we are done.
+
+ Note that we might worry that we have to pick representatives $me$ and $ex$ for the map $f_1$, but in fact we can also write it as $f_1(a)(y) = ay$, since $e$ is idempotent. So we are safe.
+ \item Immediate from above by putting $M = eA$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{lemma}
+ Let $M$ be a completely reducible right $A$-module. We write
+ \[
+ M = \bigoplus S_i^{n_i},
+ \]
+ where $\{S_i\}$ are distinct simple $A$-modules. Write $D_i = \End_A(S_i)$, which we already know is a division algebra. Then
+ \[
+ \End_A (S_i^{n_i}) \cong M_{n_i} (D_i),
+ \]
+ and
+ \[
+ \End_A(M) \cong \bigoplus_i M_{n_i} (D_i).
+ \]
+\end{lemma}
+
+\begin{proof}
+ The result for $\End_A(S_i^{n_i})$ is just the familiar fact that a homomorphism $S^n \to S^m$ is given by an $m \times n$ matrix of maps $S \to S$ (in the case of vector spaces over a field $k$, we have $\End(k) \cong k$, so they are matrices with entries in $k$). Then by Schur's lemma, we have
+ \[
+ \End_A(M) = \bigoplus_i \End_A(S_i^{n_i}) \cong \bigoplus_i M_{n_i}(D_i).\qedhere
+ \]
+\end{proof}
+
+We now prove Artin--Wedderburn.
+\begin{proof}[Proof of Artin--Wedderburn]
+ If $A$ is semi-simple, then it is completely reducible as a right $A$-module. So we have
+ \[
+ A \cong \End(A_A) \cong \bigoplus M_{n_i}(D_i).
+ \]
+ We now decompose each $M_{n_i}(D_i)$ into a sum of simple modules. We know each $M_{n_i}(D_i)$ is a non-trivial $M_{n_i}(D_i)$-module in the usual way, and the action of the other summands on it is trivial. We can simply decompose each $M_{n_i}(D_i)$ as the sum of submodules of the form
+ \[
+ \left\{
+ \begin{pmatrix}
+ 0 & 0 & \cdots & 0 & 0\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ 0 & 0 & \cdots & 0 & 0\\
+ a_1 & a_2 & \cdots & a_{n_i - 1} & a_{n_i}\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ 0 & 0 & \cdots & 0 & 0 \\
+ \end{pmatrix}
+ \right\}
+ \]
+ and there are $n_i$ components. We immediately see that if we write $S_i$ for this submodule, then we have
+ \[
+ \dim_{D_i}(S_i) = n_i.
+ \]
+ Finally, we have to show that every simple module $S$ of $A$ is one of the $S_i$. We simply have to note that if $S$ is a simple $A$-module, then there is a non-trivial map $f: A \to S$ (say by picking $x \in S$ and defining $f(a) = xa$). Then in the decomposition of $A$ into a direct sum of simple submodules, there must be one factor $S_i$ such that $f|_{S_i}$ is non-trivial. Then by Schur's lemma, this is in fact an isomorphism $S_i \cong S$.
+\end{proof}
+
+This was for semi-simple algebras. For a general right Artinian algebra, we know that $A/J(A)$ is semi-simple and inherits the Artinian property. Then Artin--Wedderburn applies to $A/J(A)$.
+
+Some of us might be scared of division algebras. Sometimes, we can get away with not talking about them. If $A$ is not just Artinian, but finite-dimensional, then so are the $D_i$.
+
+Now pick an arbitrary $x \in D_i$. Then the sub-algebra of $D_i$ generated by $x$ is commutative. So it is in fact a subfield, and finite-dimensionality means it is algebraic over $k$. Now if we assume that $k$ is algebraically closed, then $x$ must live in $k$. So we've shown that these $D_i$ must be $k$ itself. Thus we get
+
+\begin{cor}
+ If $k$ is algebraically closed and $A$ is a finite-dimensional semi-simple $k$-algebra, then
+ \[
+ A \cong \bigoplus M_{n_i}(k).
+ \]
+\end{cor}
+This is true, for example, when $k = \C$.
+
+We shall end this section by applying our results to group algebras. Recall the following definition:
+\begin{defi}[Group algebra]\index{group algebra}\index{$kG$}
+ Let $G$ be a group and $k$ a field. The \emph{group algebra} of $G$ over $k$ is
+ \[
+ kG = \left\{\sum \lambda_g g: g \in G, \lambda_g \in k\right\}.
+ \]
+ This has a bilinear multiplication given by the obvious formula
+ \[
+ (\lambda_g g) (\mu_h h) = \lambda_g \mu_h (gh).
+ \]
+\end{defi}
+
+The first thing to note is that group algebras are almost always semi-simple.
+\begin{thm}[Maschke's theorem]\index{Maschke's theorem}
+ Let $G$ be a finite group, and suppose $\Char k$ does not divide $|G|$, so that $|G|$ is invertible in $k$. Then $kG$ is semi-simple.
+\end{thm}
+
+\begin{proof}
+ We show that any submodule $V$ of a $kG$-module $U$ has a complement. Let $\pi: U \to V$ be any $k$-vector space projection, and define a new map
+ \[
+ \pi' = \frac{1}{|G|} \sum_{g \in G} g\pi g^{-1}: U \to V.
+ \]
+ It is easy to see that this is a $kG$-module homomorphism $U \to V$, and is a projection. So we have
+ \[
+ U = V \oplus \ker \pi',
+ \]
+ and this gives a $kG$-module complement.
+\end{proof}
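+
+Combining Maschke's theorem with the previous corollary, if $G$ is a finite group, then $\C G \cong \bigoplus_i M_{n_i}(\C)$, and comparing dimensions gives $\sum n_i^2 = |G|$.
+\begin{eg}
+ Take $G = S_3$, so that $\sum n_i^2 = 6$. Since $S_3$ is non-abelian, $\C S_3$ is non-commutative, which rules out a sum of six copies of $\C$. The only remaining way to write $6$ as a sum of squares gives
+ \[
+ \C S_3 \cong \C \oplus \C \oplus M_2(\C).
+ \]
+\end{eg}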
+
+There is a converse to Maschke's theorem:
+\begin{thm}
+ Let $G$ be finite and $kG$ semi-simple. Then $\Char k \nmid |G|$.
+\end{thm}
+
+\begin{proof}
+ We note that there is a simple $kG$-module $S$, given by the trivial module. This is a one-dimensional $k$-vector space. We have
+ \[
+ D = \End_{kG}(S) = k.
+ \]
+ Now suppose $kG$ is semi-simple. Then by Artin--Wedderburn, the multiplicity of $S$ as a summand of $kG$ is $\dim_D S = 1$, i.e.\ there is only one summand of $S$ in $kG$.
+
+ Consider the following two ideals of $kG$: we let
+ \[
+ I_1 = \left\{\sum \lambda_g g \in kG: \sum \lambda_g = 0\right\}.
+ \]
+ This is in fact a two-sided ideal of $kG$. We also have the one-dimensional two-sided ideal spanned by the sum of all the group elements, which lies in the center of the algebra:
+ \[
+ I_2 = \left\{\lambda \sum g \in kG: \lambda \in k\right\}.
+ \]
+ Now if $\Char k \mid |G|$, then $I_2 \subseteq I_1$. So we can write
+ \[
+ kG = \frac{kG}{I_1} \oplus I_1 = \frac{kG}{I_1} \oplus I_2 \oplus \cdots.
+ \]
+ But we know $G$ acts trivially on both $\frac{kG}{I_1}$ and $I_2$, and each has dimension $1$ over $k$. So the trivial module appears at least twice as a summand of $kG$, which is a contradiction. So we must have $\Char k \nmid |G|$.
+\end{proof}
+
+We can do a bit more of representation theory. Recall that when $k$ is algebraically closed and has characteristic zero, then the number of simple $kG$-modules is the number of conjugacy classes of $G$. There is a more general result for a general characteristic $p$ field:
+\begin{thm}
+ Let $k$ be algebraically closed of characteristic $p$, and $G$ be finite. Then the number of simple $kG$ modules (up to isomorphism) is equal to the number of conjugacy classes of elements of order not divisible by $p$. These are known as the \term{$p$-regular elements}.
+\end{thm}
+
+We immediately deduce that
+\begin{cor}
+ If $|G| = p^r$ for some $r$ and $p$ is prime, then the trivial module is the only simple $kG$ module, when $\Char k = p$.
+\end{cor}
+Note that we can prove this directly rather than using the theorem, by showing that $I = \ker (kG \to k)$ is a nilpotent ideal, and annihilates all simple modules. % exercise
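+
+A minimal concrete instance of the corollary:
+\begin{eg}
+ Let $\Char k = p$ and $G = \Z/p\Z$, generated by $g$. Sending $g \mapsto x$ gives
+ \[
+ kG \cong k[x]/(x^p - 1) = k[x]/((x - 1)^p),
+ \]
+ using $x^p - 1 = (x - 1)^p$ in characteristic $p$. Here the ideal $(x - 1)$ is nilpotent and $kG/(x - 1) \cong k$, so the trivial module is indeed the only simple $kG$-module.
+\end{eg}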
+
+\begin{proof}[Proof sketch of theorem]
+ The number of simple $kG$-modules is just the number of simple $kG/J(kG)$-modules, as $J(kG)$ acts trivially on every simple module. There is a useful trick to figure out the number of simple $A$-modules for a given semi-simple $A$. Suppose we have a decomposition
+ \[
+ A \cong \bigoplus_{i = 1}^r M_{n_i}(k).
+ \]
+ Then we know $r$ is the number of simple $A$-modules. We now consider $[A, A]$, the $k$-subspace generated by elements of the form $xy - yx$. Then we see that
+ \[
+ \frac{A}{[A, A]} \cong \bigoplus_{i = 1}^r \frac{M_{n_i}(k)}{[M_{n_i}(k), M_{n_i}(k)]}.
+ \]
+ Now by linear algebra, we know $[M_{n_i}(k), M_{n_i}(k)]$ is the trace zero matrices, and so we know
+ \[
+ \dim_k \frac{M_{n_i}(k)}{[M_{n_i}(k), M_{n_i}(k)]} = 1.
+ \]
+ Hence we know
+ \[
+ \dim \frac{A}{[A, A]} = r.
+ \]
+ Thus we need to compute
+ \[
+ \dim_k \frac{kG/J(kG)}{[kG/J(kG), kG/J(kG)]}.
+ \]
+ We then note the following facts:
+ \begin{enumerate}
+ \item For a general algebra $A$, we have
+ \[
+ \frac{A/J(A)}{[A/J(A), A/J(A)]} \cong \frac{A}{[A, A] + J(A)}.
+ \]
+ \item Let $g_1, \cdots, g_m$ be conjugacy class representatives of $G$. Then
+ \[
+ \{g_i + [kG, kG]\}
+ \]
+ forms a $k$-vector space basis of $kG/[kG, kG]$. % exercise!
+ \item If $g_1, \cdots, g_r$ is a set of representatives of $p$-regular conjugacy classes, then
+ \[
+ \left\{g_i + \Big([kG, kG] + J(kG)\Big)\right\}
+ \]
+ form a basis of $kG/([kG, kG] + J(kG))$. % not exercise!
+ \end{enumerate}
+ Hence the result follows.
+\end{proof}
+One may find it useful to note that $[kG, kG] + J(kG)$ consists of the elements $x \in kG$ such that $x^{p^s} \in [kG, kG]$ for some $s$.
+
+In this proof, we look at $A/[A, A]$. However, the usual proof of the result in characteristic zero looks at the center $Z(A)$ instead. The relation between these two objects is that the first is the $0$th Hochschild \emph{homology} group of $A$, while the second is the $0$th Hochschild \emph{cohomology} group of $A$.
+
+\subsection{Crossed products}
+Number theorists are often interested in representations of Galois groups and $kG$-modules where $k$ is an algebraic number field, e.g.\ $\Q$. In this case, the $D_i$'s appearing in Artin--Wedderburn may be non-commutative.
+
+We have already met one case of a non-commutative division ring, namely the quaternions $\H$. This is in fact an example of a general construction.
+
+\begin{defi}[Crossed product]\index{crossed product}
+ The \emph{crossed product} of a $k$-algebra $B$ and a group $G$ is specified by the following data:
+ \begin{itemize}
+ \item A group homomorphism $\phi: G \to \Aut_k(B)$, written
+ \[
+ \phi_g(\lambda) = \lambda^g;
+ \]
+ \item A function
+ \[
+ \Psi(g, h): G \times G \to B.
+ \]
+ \end{itemize}
+ The crossed product algebra has underlying set
+ \[
+ \left\{\sum_{g \in G} \lambda_g g: \lambda_g \in B\right\},
+ \]
+ with operation defined by
+ \[
+ \lambda g \cdot \mu h = \lambda \mu^g \Psi(g, h) (gh).
+ \]
+ The function $\Psi$ is required to be such that the resulting product is associative.
+\end{defi}
+We should think of the $\mu^g$ as specifying what happens when we move $g$ past $\mu$, and then $\Psi(g, h) (gh)$ as the product of $g$ and $h$ in the crossed product.
+
+Usually, we take $B = K$, a Galois extension of $k$, and $G = \Gal(K/k)$. Then the action $\phi_g$ is the natural action of $G$ on the elements of $K$, and we restrict to maps $\Psi: G \times G \to K^\times$ only.
+
+\begin{eg}
+ Consider $B = K = \C$, and $k = \R$. Then $G = \Gal(\C/\R) \cong \Z/2\Z = \{e, g\}$, where $g$ is complex conjugation. The elements of $\H$ are of the form
+ \[
+ \lambda_e e + \lambda_g g,
+ \]
+ where $\lambda_e, \lambda_g \in \C$, and we will write
+ \[
+ 1 \cdot g = g,\quad i \cdot g = k,\quad 1 \cdot e = 1,\quad i \cdot e = i.
+ \]
+ Now we want to impose
+ \[
+ -1 = j^2 = 1g \cdot 1g = \Psi(g, g) e.
+ \]
+ So we set $\Psi(g, g) = -1$. We can similarly work out what we want the other values of $\Psi$ to be. % complete?
+\end{eg}
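+
+For $e$ to act as the multiplicative identity, we are forced to take $\Psi(e, e) = \Psi(e, g) = \Psi(g, e) = 1$, and together with $\Psi(g, g) = -1$ this recovers the quaternion relations. For example,
+\[
+ ji = (1 g) \cdot (i e) = 1 \cdot i^g \cdot \Psi(g, e)\, (ge) = (-i) g = -k,
+\]
+as expected in $\H$.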
+Note that in general, crossed products need not be division algebras.
+
+The crossed product is not just a $k$-algebra. It has a natural structure of a \term{$G$-graded algebra}, in the sense that we can write it as a direct sum
+\[
+ BG = \bigoplus_{g \in G} Bg,
+\]
+and we have $Bg_1 \cdot Bg_2 \subseteq B g_1 g_2$.
+
+Focusing on the case where $K/k$ is a Galois extension, we use the notation $(K, G, \Psi)$, where $\Psi: G \times G \to K^\times$. Associativity of these crossed products is equivalent to a \term{$2$-cocycle condition} on $\Psi$, which you will be asked to make precise on the first example sheet.
+
+Two crossed products $(K, G, \Psi_1)$ and $(K, G, \Psi_2)$ are isomorphic iff the map
+\[
+ \begin{tikzcd}[cdmap]
+ G \times G \ar[r] & K^\times\\
+ (g, h) \ar[r, maps to] & \Psi_1(g, h) (\Psi_2(g, h))^{-1}
+ \end{tikzcd}
+\]
+satisfies a $2$-coboundary condition, which is again left for the first example sheet. Therefore the second (group) cohomology
+\[
+ \frac{\{\text{$2$-cocycles}: G \times G \to K^\times\}}{\{\text{$2$-coboundaries}: G \times G \to K^\times\}}
+\]
+determines the isomorphism classes of (associative) crossed products $(K, G, \Psi)$.
+
+\begin{defi}[Central simple algebra]\index{central simple algebra}\index{algebra!central simple}
+ A \emph{central simple $k$-algebra} is a finite-dimensional $k$-algebra which is a simple algebra, and whose center is $Z(A) = k$.
+\end{defi}
+Note that the center of a simple algebra is necessarily a field: if $z \in Z(A)$ is non-zero, then $zA$ is a non-zero (two-sided) ideal, hence $zA = A$, so $z$ is invertible, and one checks that $z^{-1}$ is again central. Hence any simple $k$-algebra can be made into a central simple algebra simply by enlarging the base field to this center.
+
+\begin{eg}
+ $M_n(k)$ is a central simple algebra.
+\end{eg}
+
+The point of talking about these is the following result:
+\begin{fact}
+ Any central simple $k$-algebra is of the form $M_n(D)$ for some division algebra $D$ which is also a central simple $k$-algebra, and is a crossed product $(K, G, \Psi)$.
+\end{fact}
+Note that when $K = \C$ and $k = \R$, then the second cohomology group has $2$ elements, and we get that the only central simple $\R$-algebras are $M_n(\R)$ or $M_n(\H)$.
+
+For amusement, we also note the following theorem:
+
+\begin{fact}[Wedderburn]
+ Every finite division algebra is a field.
+\end{fact}
+
+\subsection{Projectives and blocks}
+In general, if $A$ is not semi-simple, then it is not possible to decompose $A$ as a direct sum of simple modules. However, what we can do is to decompose it as a direct sum of indecomposable projectives.
+
+We begin with the definition of a projective module.
+
+\begin{defi}[Projective module]\index{projective module}\index{module!projective}
+ An $A$-module $P$ is \emph{projective} if given modules $M$ and $M'$ and maps
+ \[
+ \begin{tikzcd}
+ & P \ar[d, "\alpha"]\\
+ M' \ar[r, two heads, "\theta"] & M \ar[r] & 0
+ \end{tikzcd},
+ \]
+ then there exists a map $\beta: P \to M'$ such that the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ & P \ar[d, "\alpha"] \ar[ld, dashed, "\beta"']\\
+ M' \ar[r, two heads, "\theta"] & M \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ Equivalently, if we have a short exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & N \ar[r, hook] & M' \ar[r, two heads] & M \ar[r] & 0,
+ \end{tikzcd}
+ \]
+ then the sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \Hom(P, N) \ar[r, hook] & \Hom(P, M') \ar[r, two heads] & \Hom(P, M) \ar[r] & 0
+ \end{tikzcd}
+ \]
+ is exact.
+\end{defi}
+Note that we get exactness at $\Hom(P, N)$ and $\Hom(P, M')$ for any $P$ at all. Projective means it is also exact at $\Hom(P, M)$.
+
+\begin{eg}
+ Free modules are always projective.
+\end{eg}
+In general, projective modules are ``like'' free modules. We all know that free modules are nice, and most of the time, when we want to prove things about free modules, we are just using the property that they are projective. It is also possible to understand projective modules in an algebro-geometric way --- they are ``locally free'' modules.
+
+It is convenient to characterize projective modules as follows:
+\begin{lemma}
+ The following are equivalent:
+ \begin{enumerate}
+ \item $P$ is projective.
+ \item Every surjective map $\phi: M \to P$ splits, i.e.
+ \[
+ M \cong \ker \phi \oplus N
+ \]
+ where $N \cong P$.
+ \item $P$ is a direct summand of a free module.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): Consider the following lifting problem:
+ \[
+ \begin{tikzcd}
+ & P \ar[d, equals] \ar[dl, dashed]\\
+ M \ar[r, two heads, "\phi"] & P \ar[r] & 0
+ \end{tikzcd},
+ \]
+ The lifting gives an embedding of $P$ into $M$ that complements $\ker \phi$ (by the splitting lemma, or by checking it directly).
+ \item (ii) $\Rightarrow$ (iii): Every module admits a surjection from a free module (e.g.\ the free module on the elements of $P$). By (ii), this surjection splits, exhibiting $P$ as a direct summand of a free module.
+ \item (iii) $\Rightarrow$ (i): Since free modules are projective, it suffices to show that direct summands of projectives are projective. Suppose $P$ is projective, and
+ \[
+ P \cong A \oplus B.
+ \]
+ Then any diagram
+ \[
+ \begin{tikzcd}
+ & A \ar[d, "\alpha"]\\
+ M' \ar[r, two heads, "\theta"] & M \ar[r] & 0
+ \end{tikzcd},
+ \]
+ can be extended to a diagram
+ \[
+ \begin{tikzcd}
+ & A \oplus B \ar[d, "\tilde{\alpha}"]\\
+ M' \ar[r, two heads, "\theta"] & M \ar[r] & 0
+ \end{tikzcd},
+ \]
+ by sending $B$ to $0$. Then since $A \oplus B \cong P$ is projective, we obtain a lifting $A \oplus B \to M'$, and restricting to $A$ gives the desired lifting.\qedhere
+ \end{itemize}
+\end{proof}
+
+Our objective is to understand the direct summands of a general Artinian $k$-algebra $A$, not necessarily semi-simple. Since $A$ is itself a free $A$-module, we know these direct summands are always projective.
+
+Since $A$ is not necessarily semi-simple, it is in general impossible to decompose it as a direct sum of simples. What we can do, though, is to decompose it as a direct sum of \emph{indecomposable modules}.
+
+\begin{defi}[Indecomposable]\index{indecomposable}
+ A non-zero module $M$ is indecomposable if $M$ cannot be expressed as the direct sum of two non-zero submodules.
+\end{defi}
+Note that since $A$ is (left) Artinian, it can always be decomposed as a finite sum of indecomposable (left) submodules. Sometimes, we are also interested in decomposing $A$ as a sum of (two-sided) ideals. These are called blocks.
+
+\begin{defi}[Block]\index{block}
+ The \emph{blocks} are the direct summands of $A$ that are indecomposable as ideals.
+\end{defi}
+
+\begin{eg}
+ If $A$ is semi-simple Artinian, then Artin-Wedderburn tells us
+ \[
+ A = \bigoplus M_{n_i}(D_i),
+ \]
+ and the $M_{n_i}(D_i)$ are the blocks.
+\end{eg}
+
+We already know that every Artinian module can be decomposed as a direct sum of indecomposables. The first question to ask is whether this is unique. We note the following definitions:
+
+\begin{defi}[Local algebra]\index{local algebra}\index{algebra!local}
 An algebra is \emph{local} if it has a unique maximal left ideal; this ideal is then $J(A)$, and it is also the unique maximal right ideal.
+\end{defi}
+If so, then $A/J(A)$ is a division algebra. This name, of course, comes from algebraic geometry (cf.\ local rings).
+
+\begin{defi}[Unique decomposition property]\index{unique decomposition property}
+ A module $M$ has the \emph{unique decomposition property} if $M$ is a finite direct sum of indecomposable modules, and if
+ \[
+ M = \bigoplus_{i = 1}^m M_i = \bigoplus_{i = 1}^n M_i',
+ \]
 then $n = m$, and, after reordering, $M_i \cong M_i'$.
+\end{defi}
+
+We want to prove that $A$ as an $A$-module always has the unique decomposition property. The first step is the following criterion for determining unique decomposition property.
+
+\begin{thm}[Krull--Schmidt theorem]\index{Krull--Schmidt theorem}
 Suppose $M$ is a finite direct sum of indecomposable $A$-modules $M_i$, with each $\End(M_i)$ local. Then $M$ has the unique decomposition property.
+\end{thm}
+
+\begin{proof} % return to this later
+ Let
+ \[
+ M = \bigoplus_{i = 1}^m M_i = \bigoplus_{i = 1}^n M_i'.
+ \]
 We prove this by induction on $m$. If $m = 1$, then $M$ is indecomposable, so we must have $n = 1$ as well, and the result follows.
+
+ For $m > 1$, we consider the maps
+ \[
+ \begin{tikzcd}
+ \alpha_i: M_i' \ar[r, hook] & M \ar[r, two heads] & M_1\\
+ \beta_i: M_1 \ar[r, hook] & M \ar[r, two heads] & M_i'
+ \end{tikzcd}
+ \]
+ We observe that
+ \[
+ \id_{M_1} = \sum_{i = 1}^n \alpha_i \circ \beta_i: M_1 \to M_1.
+ \]
 Since $\End_A(M_1)$ is local, we know some $\alpha_i \circ \beta_i$ must be invertible, i.e.\ a unit, as they cannot all lie in the Jacobson radical. We may wlog assume $\alpha_1 \circ \beta_1$ is a unit. Then $\beta_1$ is a split injection, so its image is a direct summand of $M_1'$; since $M_1'$ is indecomposable and $\beta_1 \not= 0$, it follows that $\beta_1$ is an isomorphism, and hence so is $\alpha_1$. So $M_1 \cong M_1'$.
+
 Consider the map $\phi = \id - \theta$, where
+ \[
+ \begin{tikzcd}
+ \theta: M \ar[r, two heads] & M_1 \ar[r, "\alpha_1^{-1}"] & M_1' \ar[r, hook] & M \ar[r, two heads] & \bigoplus_{i = 2}^m M_i \ar[r, hook] & M.
+ \end{tikzcd}
+ \]
+ Then $\phi(M_1') = M_1$. So $\phi|_{M_1'}$ looks like $\alpha_1$. Also
+ \[
+ \phi\left(\bigoplus_{i = 2}^m M_i\right) = \bigoplus_{i = 2}^m M_i,
+ \]
 So $\phi|_{\bigoplus_{i = 2}^m M_i}$ looks like the identity map. So in particular, we see that $\phi$ is surjective. However, if $\phi(x) = 0$, this says $x = \theta(x)$, so
+ \[
+ x \in \bigoplus_{i = 2}^m M_i.
+ \]
 But then $\theta(x) = 0$, and thus $x = 0$. So $\phi$ is an automorphism of $M$ with $\phi(M_1') = M_1$. So $\phi$ induces an isomorphism
 \[
 \bigoplus_{i = 2}^m M_i \cong \frac{M}{M_1} \cong \frac{M}{M_1'} \cong \bigoplus_{i = 2}^n M_i',
+ \]
+ and so we are done by induction.
+\end{proof}
+
+Now it remains to prove that the endomorphism rings are local. Recall the following result from linear algebra.
+\begin{lemma}[Fitting]
+ Suppose $M$ is a module with both the ACC and DCC on submodules, and let $f \in \End_A(M)$. Then for large enough $n$, we have
+ \[
+ M = \im f^n \oplus \ker f^n.
+ \]
+\end{lemma}
+
+\begin{proof}
+ By ACC and DCC, we may choose $n$ large enough so that
+ \[
+ f^n: f^n(M) \to f^{2n}(M)
+ \]
+ is an isomorphism, as if we keep iterating $f$, the image is a descending chain and the kernel is an ascending chain, and these have to terminate.
+
+ If $m \in M$, then we can write
+ \[
+ f^n(m) = f^{2n}(m_1)
+ \]
+ for some $m_1$. Then
+ \[
+ m = f^n(m_1) + (m - f^n(m_1)) \in \im f^n + \ker f^n,
+ \]
+ and also
+ \[
+ \im f^n \cap \ker f^n = \ker (f^n: f^n(M) \to f^{2n}(M)) = 0.
+ \]
+ So done.
+\end{proof}
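Fitting's lemma can be illustrated numerically. The following sketch (not from the notes; the matrix is an arbitrary illustrative choice) takes an endomorphism of $k^3$ that is invertible on one coordinate and nilpotent on the rest, and checks that $M = \im f^2 \oplus \ker f^2$:

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# f acts on k^3: invertible on span{e1}, nilpotent on span{e2, e3}
f = [[2, 0, 0],
     [0, 0, 1],
     [0, 0, 0]]

f2 = matmul(f, f)
f4 = matmul(f2, f2)

# the images stabilise at n = 2: both f^2 and f^4 have image span{e1}
assert f2[0][0] != 0
assert all(f2[i][j] == 0 for i in range(3) for j in (1, 2))
assert all(f4[i][j] == 0 for i in range(3) for j in (1, 2))

# kernel of f^2 is span{e2, e3}, so every vector splits as image + kernel part
v = [3, 5, 7]
im_part, ker_part = [v[0], 0, 0], [0, v[1], v[2]]
assert [a + b for a, b in zip(im_part, ker_part)] == v
```

Iterating $f$ one more time changes the image by an isomorphism only, which is exactly the stabilisation used in the proof.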
+
+\begin{lemma}
+ Suppose $M$ is an indecomposable module satisfying ACC and DCC on submodules. Then $B = \End_A(M)$ is local.
+\end{lemma}
+
+\begin{proof}
+ Choose a maximal left ideal of $B$, say $I$. It's enough to show that if $x \not \in I$, then $x$ is left invertible. By maximality of $I$, we know $B = Bx + I$. We write
+ \[
+ 1 = \lambda x + y,
+ \]
+ for some $\lambda \in B$ and $y \in I$. Since $y \in I$, it has no left inverse. So it is not an isomorphism. By Fitting's lemma and the indecomposability of $M$, we see that $y^m = 0$ for some $m$. Thus
+ \[
+ (1 + y + y^2 + \cdots + y^{m - 1}) \lambda x = (1 + y + \cdots + y^{m - 1})(1 - y) = 1.
+ \]
+ So $x$ is left invertible.
+\end{proof}
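The closing computation is just the geometric series for $(1 - y)^{-1}$, truncated by nilpotency. A quick check with a concrete nilpotent matrix (an illustrative choice, not from the notes):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
y = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # strictly upper triangular: y^3 = 0, so m = 3

# s = 1 + y + y^2
s = matadd(matadd(I, y), matmul(y, y))
one_minus_y = [[i - b for i, b in zip(ri, ry)] for ri, ry in zip(I, y)]

# (1 + y + y^2)(1 - y) = 1 - y^3 = 1
assert matmul(s, one_minus_y) == I
```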
+
+\begin{cor}
+ Let $A$ be a left Artinian algebra. Then $A$ has the unique decomposition property.
+\end{cor}
+
+\begin{proof}
 We know $A$ satisfies ACC and DCC on left ideals. So $_A A$ is a finite direct sum of indecomposables, each of which again satisfies ACC and DCC, and so has local endomorphism algebra by the previous lemma. The Krull--Schmidt theorem then applies.
+\end{proof}
+
+So if $A$ is an Artinian algebra, we know $A$ can be uniquely decomposed as a direct sum of indecomposable projectives,
+\[
+ A = \bigoplus P_j.
+\]
+For convenience, we will work with right Artinian algebras and right modules instead of left ones. It turns out that instead of studying projectives in $A$, we can study idempotent elements instead.
+
+Recall that $\End(A_A) \cong A$. The projection onto $P_j$ is achieved by left multiplication by an idempotent $e_j$,
+\[
+ P_j = e_j A.
+\]
The fact that $A$ decomposes as a \emph{direct sum} of the $P_j$ translates to the condition
+\[
+ \sum e_j = 1,\quad e_i e_j = 0
+\]
+for $i \not= j$.
+\begin{defi}[Orthogonal idempotent]\index{orthogonal idempotents}\index{idempotent!orthogonal}
+ A collection of idempotents $\{e_i\}$ is \emph{orthogonal} if $e_i e_j = 0$ for $i \not= j$.
+\end{defi}
+
+The indecomposability of $P_j$ is equivalent to $e_j$ being primitive:
+\begin{defi}[Primitive idempotent]\index{idempotent!primitive}\index{primitive idempotent}
+ An idempotent is \emph{primitive} if it cannot be expressed as a sum
+ \[
+ e = e' + e'',
+ \]
+ where $e', e''$ are orthogonal idempotents, both non-zero.
+\end{defi}
We see that giving a direct sum decomposition of $A$ is equivalent to finding an orthogonal collection of primitive idempotents that sum to $1$. This is rather useful, because idempotents are easier to move around than projectives.
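For instance, in $M_2(k)$ the diagonal matrix units are orthogonal primitive idempotents summing to $1$, and the corresponding summands $e_i A$ are the row spaces. A small sketch of this (illustrative, not from the notes):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

e1 = [[1, 0], [0, 0]]   # matrix unit E_11
e2 = [[0, 0], [0, 1]]   # matrix unit E_22

# idempotent, orthogonal, and summing to the identity
assert matmul(e1, e1) == e1 and matmul(e2, e2) == e2
assert matmul(e1, e2) == [[0, 0], [0, 0]] == matmul(e2, e1)
assert [[a + b for a, b in zip(r1, r2)]
        for r1, r2 in zip(e1, e2)] == [[1, 0], [0, 1]]

# P_i = e_i A: multiplying on the left by e1 keeps the first row,
# so P_1 is the first row space and A = P_1 ⊕ P_2
a = [[1, 2], [3, 4]]
assert matmul(e1, a) == [[1, 2], [0, 0]]
assert matmul(e2, a) == [[0, 0], [3, 4]]
```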
+
+Our current plan is as follows --- given an Artinian algebra $A$, we can quotient out by $J(A)$, and obtain a semi-simple algebra $A/J(A)$. By Artin--Wedderburn, we know how we can decompose $A/J(A)$, and we hope to be able to lift this decomposition to one of $A$. The point of talking about idempotents instead is that we know what it means to lift elements.
+\begin{prop}
+ Let $N$ be a nilpotent ideal in $A$, and let $f$ be an idempotent of $A/N \equiv \bar{A}$. Then there is an idempotent $e \in A$ with $f = \bar{e}$.
+\end{prop}
+In particular, we know $J(A)$ is nilpotent, and this proposition applies. The proof involves a bit of magic.
+\begin{proof}
+ We consider the quotients $A/N^i$ for $i \geq 1$. We will lift the idempotents successively as we increase $i$, and since $N$ is nilpotent, repeating this process will eventually land us in $A$.
+
+ Suppose we have found an idempotent $f_{i - 1} \in A/N^{i - 1}$ with $\bar{f}_{i - 1} = f$. We want to find $f_i \in A/N^i$ such that $\bar{f}_i = f$.
+
 For $i > 1$, we let $x$ be an element of $A/N^i$ with image $f_{i - 1}$ in $A/N^{i - 1}$. Then since $x^2 - x$ vanishes in $A/N^{i - 1}$, we know $x^2 - x \in N^{i - 1}/N^i$. Then in particular,
+ \[
+ (x^2 - x)^2 = 0 \in A/N^i.\tag{$\dagger$}
+ \]
+ We let
+ \[
+ f_i = 3x^2 - 2x^3.
+ \]
 Then by a direct computation using $(\dagger)$, we find $f_i^2 = f_i$, and $f_i$ has image $3f_{i - 1}^2 - 2 f_{i - 1}^3 = f_{i - 1}$ in $A/N^{i - 1}$, using that $f_{i - 1}$ is idempotent (alternatively, in characteristic $p$, we can use $f_i = x^p$). Since $N^k = 0$ for some $k$, this process gives us what we want.
+\end{proof}
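The ``magic'' formula can be tested in a toy commutative example (not from the notes): in $A = \Z/36$ with nilpotent ideal $N = (6)$, so $N^2 = 0$, the class of $3$ is idempotent mod $6$ but not mod $36$, and a single application of $3x^2 - 2x^3$ produces a genuine idempotent lifting it.

```python
n, N = 36, 6          # A = Z/36, nilpotent ideal N = (6) with N^2 = 0
x = 3                 # idempotent mod 6 (3*3 = 9 ≡ 3 mod 6), but not mod 36

assert (x * x - x) % N == 0          # x^2 - x lies in N
assert ((x * x - x) ** 2) % n == 0   # so (x^2 - x)^2 = 0 in A, as in (†)
assert (x * x - x) % n != 0          # x itself is not idempotent in A

f = (3 * x**2 - 2 * x**3) % n        # the lifting formula
assert (f * f - f) % n == 0          # f is idempotent in A
assert (f - x) % N == 0              # and f ≡ x mod N
print(f)                             # → 9
```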
+
+Just being able to lift idempotents is not good enough. We want to lift decompositions as projective indecomposables. So we need to do better.
+\begin{cor}
+ Let $N$ be a nilpotent ideal of $A$. Let
+ \[
+ \bar{1} = f_1 + \cdots + f_r
+ \]
+ with $\{f_i\}$ orthogonal primitive idempotents in $A/N$. Then we can write
+ \[
+ 1 = e_1 + \cdots + e_r,
+ \]
+ with $\{e_i\}$ orthogonal primitive idempotents in $A$, and $\bar{e}_i = f_i$.
+\end{cor}
+
+\begin{proof}
+ We define a sequence $e_i' \in A$ inductively. We set
+ \[
+ e_1' = 1.
+ \]
 Then for each $i > 1$, we pick $e_i'$ an idempotent lift of $f_i + \cdots + f_r$ to $e_{i - 1}' A e_{i - 1}'$, which is possible since by inductive hypothesis we know that $f_i + \cdots + f_r$ lies in the image of $e_{i - 1}' A e_{i - 1}'$ in $A/N$. Then
+ \[
+ e_i' e_{i + 1}' = e_{i + 1}' = e_{i + 1}' e_i'.
+ \]
+ We let
+ \[
+ e_i = e_i'- e_{i + 1}'.
+ \]
+ Then
+ \[
+ \bar{e}_i = f_i.
+ \]
+ Also, if $j > i$, then
+ \[
+ e_j = e_{i + 1}' e_j e_{i + 1}',
+ \]
+ and so
+ \[
+ e_i e_j = (e_i' - e_{i + 1}') e_{i + 1}' e_j e_{i + 1}' = 0.
+ \]
+ Similarly $e_j e_i = 0$.
+\end{proof}
+
+We now apply this lifting of idempotents to $N = J(A)$, which we know is nilpotent. We know $A/N$ is the direct sum of simple modules, and thus the decomposition corresponds to
+\[
+ \bar{1} = f_1 + \cdots + f_t \in A/J(A),
+\]
+and these $f_i$ are orthogonal primitive idempotents. Idempotent lifting then gives
+\[
+ 1 = e_1 + \cdots + e_t \in A,
+\]
+and these are orthogonal primitive idempotents. So we can write
+\[
 A = \bigoplus e_i A = \bigoplus P_i,
\]
where $P_i = e_i A$ are indecomposable projectives, and $P_i/P_i J(A) = S_i$ is simple. By Krull--Schmidt, any indecomposable projective is isomorphic to one of these $P_i$.
+
+The final piece of the picture is to figure out when two indecomposable projectives lie in the same block. Recall that if $M$ is a right $A$-module and $e$ is idempotent, then
+\[
+ Me \cong \Hom_A(eA, M).
+\]
+In particular, if $M = fA$ for some idempotent $f$, then
+\[
+ \Hom(eA, fA) \cong fAe.
+\]
+However, if $e$ and $f$ are in different blocks, say $B_1$ and $B_2$, then
+\[
 fAe \subseteq B_1 \cap B_2 = 0,
+\]
+since $B_1$ and $B_2$ are (two-sided!) ideals. So we know
+\[
+ \Hom(eA, fA) = 0.
+\]
+So if $\Hom(eA, fA) \not= 0$, then they are in the same block. The existence of a homomorphism can alternatively be expressed in terms of composition factors.
+
+We have seen that each indecomposable projective $P$ has a simple ``top''
+\[
+ P/PJ(A) \cong S.
+\]
+\begin{defi}[Composition factor]\index{composition factor}
 A simple module $S$ is a composition factor of a module $M$ if there are submodules $M_1 \leq M_2 \leq M$ with
+ \[
+ M_2/M_1 \cong S.
+ \]
+\end{defi}
+Suppose $S$ is a composition factor of a module $M$. Then we have a diagram
+\[
+ \begin{tikzcd}
+ & P \ar[d] \ar[dl, dashed]\\
+ M_2 \ar[r, two heads ] & S \ar[r] & 0
+ \end{tikzcd}
+\]
+So by definition of projectivity, we obtain a non-zero diagonal map $P \to M_2 \leq M$ as shown.
+
+\begin{lemma}
+ Let $P$ be an indecomposable projective, and $M$ an $A$-module. Then $\Hom(P, M) \not= 0$ iff $P/P J(A)$ is a composition factor of $M$.
+\end{lemma}
+
+\begin{proof}
+ We have proven $\Rightarrow$. Conversely, suppose there is a non-zero map $f: P \to M$. Then it factors as
+ \[
+ S = \frac{P}{PJ(A)} \to \frac{\im f}{(\im f)J(A)}.
+ \]
+ Now we cannot have $\im f = (\im f)J(A)$, or else we have $\im f = (\im f)J(A)^n = 0$ for sufficiently large $n$ since $J(A)$ is nilpotent. So this map must be injective, hence an isomorphism. So this exhibits $S$ as a composition factor of $M$.
+\end{proof}
+
+We define a (directed) graph whose vertices are labelled by indecomposable projectives, and there is an edge $P_1 \to P_2$ if the top $S_1$ of $P_1$ is a composition factor of $P_2$.
+\begin{thm}
+ Indecomposable projectives $P_1$ and $P_2$ are in the same block if and only if they lie in the same connected component of the graph.
+\end{thm}
+
+\begin{proof}
 It is clear that if $P_1$ and $P_2$ are in the same connected component, then they are in the same block.
+
+ Conversely, consider a connected component $X$, and consider
+ \[
+ I = \bigoplus_{P \in X} P.
+ \]
+ We show that this is in fact a left ideal, hence an ideal. Consider any $x \in A$. Then for each $P \in X$, left-multiplication gives a map $P \to A$, and if we decompose
+ \[
+ A = \bigoplus P_i,
+ \]
 then this can be expressed as a sum of maps $f_i: P \to P_i$. Now such a map can be non-zero only if the simple top of $P$ is a composition factor of $P_i$, i.e.\ only if there is an edge from $P$ to $P_i$. So if $f_i \not= 0$, then $P_i \in X$. So left-multiplication by $x$ maps $I$ to itself, and it follows that $I$ is an ideal.
+\end{proof}
+
+\subsection{\tph{$K_0$}{K0}{K0}}
+We now briefly talk about the notion of $K_0$.
+\begin{defi}[$K_0$]\index{$K_0$}
 For any associative $k$-algebra $A$, consider the free abelian group with basis labelled by the isomorphism classes $[P]$ of finitely-generated projective $A$-modules, and impose the relations
 \[
 [P_1] + [P_2] = [P_1 \oplus P_2],
 \]
 i.e.\ quotient the free abelian group by the subgroup generated by the elements
 \[
 [P_1] + [P_2] - [P_1 \oplus P_2].
 \]
 The resulting abelian group is $K_0(A)$.
+\end{defi}
+
+\begin{eg}
 If $A$ is an Artinian algebra, then we know that any finitely-generated projective is a direct sum of indecomposable projectives, and this decomposition is unique by Krull--Schmidt. So
+ \[
+ K_0(A) = \left\{\parbox{7cm}{\centering abelian group generated by the isomorphism classes of indecomposable projectives}\right\}.
+ \]
+ So $K_0(A) \cong \Z^r$, where $r$ is the number of isomorphism classes of indecomposable projectives, which is the number of isomorphism classes of simple modules.
+
+ Here we're using the fact that two indecomposable projectives are isomorphic iff their simple tops are isomorphic.
+\end{eg}
+
+It turns out there is a canonical map $K_0(A) \to A/[A, A]$. Recall we have met $A/[A, A]$ when we were talking about the number of simple modules. We remarked that it was the $0$th Hochschild homology group, and when $A = kG$, there is a $k$-basis of $A/[A, A]$ given by $g_i + [A, A]$, where $g_i$ are conjugacy class representatives.
+
+To construct this canonical map, we first look at the trace map
+\[
+ M_n(A) \to A/[A, A].
+\]
+This is a $k$-linear map, invariant under conjugation. We also note that the canonical inclusion
+\begin{align*}
+ M_n(A) &\hookrightarrow M_{n + 1}(A)\\
+ X & \mapsto
+ \begin{pmatrix}
+ X & 0\\
+ 0 & 0
+ \end{pmatrix}
+\end{align*}
+is compatible with the trace map. We observe that the trace induces an isomorphism
+\[
+ \frac{M_n(A)}{[M_n(A), M_n(A)]} \to \frac{A}{[A, A]},
+\]
+by linear algebra.
+
Now if $P$ is a finitely generated projective, then it is a direct summand of some $A^n$. Thus we can write
+\[
+ A^n = P \oplus Q,
+\]
for $P, Q$ projective. Moreover, projection onto $P$ corresponds to an idempotent $e$ in $M_n(A) = \End_A(A^n)$, with
\[
 P = e(A^n),
\]
and we have
+\[
+ \End_A(P) = e M_n(A) e.
+\]
+Any other choice of idempotent yields an idempotent $e_1$ conjugate to $e$ in $M_{2n}(A)$. % exercise
+
+Therefore the trace of an endomorphism of $P$ is well-defined in $A/[A, A]$, independent of the choice of $e$. Thus we have a trace map
+\[
+ \End_A(P) \to A/[A, A].
+\]
+In particular, the trace of the identity map on $P$ is the trace of $e$. We call this the \emph{trace of $P$}\index{trace!of projective}.
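When $A = k$ is commutative, $[A, A] = 0$ and the trace of $P$ is the ordinary matrix trace of the projection $e$, which recovers $\dim P$ and is unchanged when $e$ is replaced by a conjugate idempotent. A small sanity check (illustrative, not from the notes):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

e = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]   # projection onto P = e(k^3), dim P = 2
assert matmul(e, e) == e
assert trace(e) == 2                    # tr P = dim P

# conjugating e by an invertible g gives another idempotent with the same trace
g     = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]
g_inv = [[1, 0, -1], [0, 1, 0], [0, 0, 1]]
e1 = matmul(matmul(g, e), g_inv)
assert matmul(e1, e1) == e1 and trace(e1) == 2
```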
+
+Note that if we have finitely generated projectives $P_1$ and $P_2$, then we have
+\begin{align*}
+ P_1 \oplus Q_1 &= A^n\\
+ P_2 \oplus Q_2 &= A^m
+\end{align*}
+Then we have
+\[
 (P_1 \oplus P_2) \oplus (Q_1 \oplus Q_2) = A^{m + n}.
+\]
+So we deduce that
+\[
+ \tr (P_1 \oplus P_2) = \tr P_1 + \tr P_2.
+\]
\begin{defi}[Hattori--Stallings trace map]
+ The map $K_0(A) \to A/[A, A]$ induced by the trace is the \term{Hattori--Stallings trace map}.
+\end{defi}
+
+\begin{eg}
+ Let $A = kG$, and $G$ be finite. Then $A/[A, A]$ is a $k$-vector space with basis labelled by a set of conjugacy class representatives $\{g_i\}$. Then we know, for a finitely generated projective $P$, we can write
+ \[
+ \tr P = \sum r_P(g_i) g_i,
+ \]
 where the $r_P(g_i)$ may be regarded as class functions. On the other hand, $P$ may be regarded as a $k$-vector space.
+
 So there is a trace map
 \[
 \End_k(P) \to k,
 \]
 and also the ``character'' $\chi_P: G \to k$, where $\chi_P(g) = \tr g$. Hattori proved that if $C_G(g)$ is the centralizer of $g \in G$, then
 \[
 \chi_P(g) = |C_G(g)| r_P(g^{-1}).\tag{$*$}
+ \]
 If $\Char k = 0$ and $k$ is algebraically closed, then we know $kG$ is semi-simple. So every finitely generated projective is a direct sum of simples, and
+ \[
+ K_0(kG) \cong \Z^r
+ \]
+ with $r$ the number of simples, and $(*)$ implies that the trace map
+ \[
+ \Z^r \cong K_0(kG) \to \frac{kG}{[kG, kG]} \cong k^r
+ \]
+ is the natural inclusion.
+\end{eg}
+This is the start of the theory of algebraic $K$-theory, which is a homology theory telling us about the endomorphisms of free $A$-modules. We can define $K_1(A)$ to be the abelianization of
+\[
+ \GL(A) = \lim_{n \to \infty} \GL_n(A).
+\]
+$K_2(A)$ tells us something about the relations required if you express $\GL(A)$ in terms of generators and relations. We're being deliberately vague. These groups are very hard to compute.
+
+Just as we saw in the $i = 0$ case, there are canonical maps
+\[
+ K_i(A) \to \HH_i(A),
+\]
+where $\HH_*$ is the Hochschild homology. The $i = 1$ case is called the \term{Dennis trace map}. These are analogous to the \emph{Chern maps} in topology.
+
+\section{Noetherian algebras}
+\subsection{Noetherian algebras}
+In the introduction, we met the definition of Noetherian algebras.
+\begin{defi}[Noetherian algebra]\index{Noetherian algebra}\index{algebra!Noetherian}
+ An algebra is \emph{left Noetherian} if it satisfies the \term{ascending chain condition} (\term{ACC}) on left ideals, i.e.\ if
+ \[
+ I_1 \leq I_2 \leq I_3 \leq \cdots
+ \]
+ is an ascending chain of left ideals, then there is some $N$ such that $I_{N + m} = I_N$ for all $m \geq 0$.
+
 We define \emph{right Noetherian} similarly using right ideals, and we say an algebra is \emph{Noetherian} if it is both left and right Noetherian.
+\end{defi}
+We've also met a few examples. Here we are going to meet lots more. In fact, most of this first section is about establishing tools to show that certain algebras are Noetherian.
+
+One source of Noetherian algebras is via constructing polynomial and power series rings. Recall that in IB Groups, Rings and Modules, we proved the Hilbert basis theorem:
+\begin{thm}[Hilbert basis theorem]\index{Hilbert basis theorem}
+ If $A$ is Noetherian, then $A[X]$ is Noetherian.
+\end{thm}
+Note that our proof did not depend on $A$ being commutative. The same proof works for non-commutative rings. In particular, this tells us $k[X_1, \cdots, X_n]$ is Noetherian.
+
+It is also true that power series rings of Noetherian algebras are also Noetherian. The proof is very similar, but for completeness, we will spell it out completely.
+\begin{thm}
 Let $A$ be left Noetherian. Then $A[[X]]$ is left Noetherian.
+\end{thm}
+
+\begin{proof}
 Let $I$ be a left ideal of $A[[X]]$. We'll show that $I$ is finitely generated, which implies the result. Let
+ \[
+ J_r = \{a: \text{there exists an element of $I$ of the form }aX^r + \text{higher degree terms}\}.
+ \]
+ We note that $J_r$ is a left ideal of $A$, and also note that
+ \[
+ J_0 \leq J_1 \leq J_2 \leq J_3 \leq \cdots,
+ \]
 as we can always multiply by $X$. Since $A$ is left Noetherian, this chain terminates at $J_N$ for some $N$. Also, $J_0, J_1, J_2, \cdots, J_N$ are all finitely generated left ideals. We suppose $a_{i1}, \cdots, a_{is_i}$ generate $J_i$ for $i = 0, 1, \cdots, N$. These correspond to elements
+ \[
 f_{ij}(X) = a_{ij} X^i + \text{higher degree terms} \in I.
+ \]
+ We show that this finite collection generates $I$ as a left ideal. Take $f(X) \in I$, and suppose it looks like
+ \[
+ b_n X^n + \text{higher terms},
+ \]
+ with $b_n \not = 0$.
+
 Suppose $n < N$. Then $b_n \in J_n$, and so we can write
+ \[
+ b_n = \sum c_{nj} a_{nj}.
+ \]
+ So
+ \[
+ f(X) - \sum c_{nj} f_{nj}(X) \in I
+ \]
+ has zero coefficient for $X^n$, and all other terms are of higher degree.
+
 Repeating the process, we may thus wlog assume $n \geq N$. We get $f(X)$ of the form $d_N X^N + $ higher degree terms. The same process gives
+ \[
+ f(X) - \sum c_{Nj} f_{Nj}(X)
+ \]
+ with terms of degree $N + 1$ or higher. We can repeat this yet again, using the fact $J_N = J_{N + 1}$, so we obtain
+ \[
 f(X) - \sum c_{Nj} f_{Nj}(X) - \sum d_{N+1, j} X f_{Nj}(X) + \cdots.
+ \]
+ So we find
+ \[
+ f(X) = \sum e_j(X) f_{Nj}(X)
+ \]
 for some $e_j(X) \in A[[X]]$, obtained in the limit of this process. So $f$ is in the left ideal generated by our finite collection, and hence $I$ is finitely generated.
+\end{proof}
+
+\begin{eg}
 It is straightforward to see that quotients of Noetherian algebras are Noetherian. Thus, algebra images of $A[X]$ and $A[[X]]$ are also Noetherian.
+
+ For example, finitely-generated commutative $k$-algebras are always Noetherian. Indeed, if we have a generating set $x_i$ of $A$ as a $k$-algebra, then there is an algebra homomorphism
+ \[
+ \begin{tikzcd}[cdmap]
+ k[X_1, \cdots, X_n] \ar[r] & A\\
+ X_i \ar[r, maps to] & x_i
+ \end{tikzcd}
+ \]
+\end{eg}
+
+We also saw previously that
+\begin{eg}
+ Any Artinian algebra is Noetherian.
+\end{eg}
+
+The next two examples we are going to see are less obviously Noetherian, and proving that they are Noetherian takes some work.
+
+\begin{defi}[$n$th Weyl algebra]\index{Weyl algebra}\index{$A_n(k)$}\index{$A_n$}
+ The \emph{$n$th Weyl algebra} $A_n(k)$ is the algebra generated by $X_1, \cdots, X_n, Y_1, \cdots, Y_n$ with relations
+ \[
+ Y_i X_i - X_i Y_i = 1,
+ \]
+ for all $i$, and everything else commutes.
+\end{defi}
+
+This algebra acts on the polynomial algebra $k[X_1, \cdots, X_n]$ with $X_i$ acting by left multiplication and $Y_i = \frac{\partial}{\partial X_i}$. Thus $k[X_1, \cdots, X_n]$ is a left $A_n(k)$ module. This is the prototype for thinking about differential algebras, and $D$-modules in general (which we will not talk about).
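The defining relation $Y_i X_i - X_i Y_i = 1$ can be checked directly in this representation, with one-variable polynomials stored as coefficient lists (a sketch, not from the notes):

```python
def X(p):
    """Multiplication by x: shift coefficients up one degree."""
    return [0] + p

def Y(p):
    """d/dx on a coefficient list."""
    return [i * c for i, c in enumerate(p)][1:]

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

p = [5, 0, 2, 7]                     # 5 + 2x^2 + 7x^3
assert sub(Y(X(p)), X(Y(p))) == p    # (YX - XY)(p) = p, i.e. YX - XY = 1
```

This is just the product rule: $\frac{\d}{\d x}(x p) - x \frac{\d p}{\d x} = p$.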
+
+The other example we have is the universal enveloping algebra of a Lie algebra.
+\begin{defi}[Universal enveloping algebra]\index{universal enveloping algebra}\index{Lie algebra!universal enveloping algebra}
+ Let $\mathfrak{g}$ be a Lie algebra over $k$, and take a $k$-vector space basis $x_1, \cdots, x_n$. We form an associative algebra with generators $x_1, \cdots, x_n$ with relations
+ \[
+ x_i x_j - x_j x_i = [x_i, x_j],
+ \]
+ and this is the \emph{universal enveloping algebra} $\mathcal{U}(\mathfrak{g})$.
+\end{defi}
+
+\begin{eg}
+ If $\mathfrak{g}$ is abelian, i.e.\ $[x_i, x_j] = 0$ in $\mathfrak{g}$, then the enveloping algebra is the polynomial algebra in $x_1, \cdots, x_n$.
+\end{eg}
+
+\begin{eg}
 If $\mathfrak{g} = \sl_2(k)$, then we have a basis
 \[
 e =
 \begin{pmatrix}
 0 & 1\\
 0 & 0
 \end{pmatrix},\quad
 f =
 \begin{pmatrix}
 0 & 0\\
 1 & 0
 \end{pmatrix},\quad
 h =
 \begin{pmatrix}
 1 & 0\\
 0 & -1
 \end{pmatrix}.
 \]
+ They satisfy
+ \[
+ [e, f] = h,\quad [h, e] = 2e,\quad [h, f] = -2f,
+ \]
+\end{eg}
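These relations are immediate $2 \times 2$ matrix computations (a quick check, not part of the notes):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    """Commutator [A, B] = AB - BA."""
    AB, BA = matmul(A, B), matmul(B, A)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

e = [[0, 1], [0, 0]]
f = [[0, 0], [1, 0]]
h = [[1, 0], [0, -1]]

assert bracket(e, f) == h
assert bracket(h, e) == [[0, 2], [0, 0]]    # = 2e
assert bracket(h, f) == [[0, 0], [-2, 0]]   # = -2f
```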
+
To prove that $A_n(k)$ and $\mathcal{U}(\mathfrak{g})$ are Noetherian, we need some machinery involving a bit of ``deformation theory''. The main strategy is to make use of a natural \emph{filtration} of the algebra.
+
+\begin{defi}[Filtered algebra]\index{filtered algebra}\index{algebra!filtered}\index{$\Z$-filtered algebra}
 A \emph{($\Z$-)filtered algebra} is an algebra $A$ equipped with a collection of $k$-vector spaces
+ \[
+ \cdots \leq A_{-1} \leq A_0 \leq A_1 \leq A_2 \leq \cdots
+ \]
+ such that $A_i \cdot A_j \subseteq A_{i + j}$ for all $i, j \in \Z$, and $1 \in A_0$.
+\end{defi}
+For example a polynomial ring is naturally filtered by the degree of the polynomial.
+
+The definition above was rather general, and often, we prefer to talk about more well-behaved filtrations.
+\begin{defi}[Exhaustive filtration]\index{filtration!exhaustive}\index{exhaustive filtration}
+ A filtration is \emph{exhaustive} if $\bigcup A_i = A$.
+\end{defi}
+
+\begin{defi}[Separated filtration]\index{separated filtration}\index{filtration!separated}
+ A filtration is \emph{separated} if $\bigcap A_i = \{0\}$.
+\end{defi}
+Unless otherwise specified, our filtrations are exhaustive and separated.
+
+For the moment, we will mostly be interested in positive filtrations.
+
+\begin{defi}[Positive filtration]\index{positive filtration}\index{filtration!positive}
+ A filtration is \emph{positive} if $A_i = 0$ for $i < 0$.
+\end{defi}
+
+Our canonical source of filtrations is the following construction:
+\begin{eg}
+ If $A$ is an algebra generated by $x_1, \cdots, x_n$, say, we can set
+ \begin{itemize}
+ \item $A_0$ is the $k$-span of $1$
+ \item $A_1$ is the $k$-span of $1, x_1, \cdots, x_n$
 \item $A_2$ is the $k$-span of $1, x_1, \cdots, x_n, x_i x_j$ for $i, j \in \{1, \cdots, n\}$.
+ \end{itemize}
 In general, $A_r$ is the span of all (non-commutative) polynomial expressions in the generators of degree $\leq r$.
+\end{eg}
+Of course, the filtration depends on the choice of the generating set.
+
+Often, to understand a filtered algebra, we consider a nicer object, known as the \emph{associated graded algebra}.
+\begin{defi}[Associated graded algebra]\index{associated graded algebra}
+ Given a filtration of $A$, the \emph{associated graded algebra} is the vector space direct sum
+ \[
+ \gr A = \bigoplus \frac{A_i}{A_{i - 1}}.
+ \]
+ This is given the structure of an algebra by defining multiplication by
+ \[
+ (a + A_{i - 1}) (b + A_{j - 1}) = ab + A_{i + j - 1} \in \frac{A_{i + j}}{A_{i + j - 1}}.
+ \]
+\end{defi}
+In our example of a finitely-generated algebra, the graded algebra is generated by $x_1 + A_0, \cdots, x_n + A_0 \in A_1/A_0$.
+
+The associated graded algebra has the natural structure of a graded algebra:
+\begin{defi}[Graded algebra]\index{graded algebra}\index{algebra!graded}\index{$\Z$-graded algebra}
+ A ($\Z$-)\emph{graded algebra} is an algebra $B$ that is of the form
+ \[
+ B = \bigoplus_{i \in \Z} B_i,
+ \]
+ where $B_i$ are $k$-subspaces, and $B_i B_j \subseteq B_{i + j}$. The $B_i$'s are called the \term{homogeneous components}\index{graded algebra!homogeneous components}\index{algebra!homogeneous components}.
+
+ A \term{graded ideal}\index{ideal!graded} is an ideal of the form
+ \[
+ \bigoplus J_i,
+ \]
+ where $J_i$ is a subspace of $B_i$, and similarly for left and right ideals.
+\end{defi}
+
+There is an intermediate object between a filtered algebra and its associated graded algebra, known as the \emph{Rees algebra}.
+\begin{defi}[Rees algebra]\index{Rees algebra}\index{filtered algebra!Rees algebra}\index{algebra!Rees algebra}
+ Let $A$ be a filtered algebra with filtration $\{A_i\}$. Then the \emph{Rees algebra} $\Rees(A)$ is the subalgebra $\bigoplus A_i T^i$ of the Laurent polynomial algebra $A[T, T^{-1}]$ (where $T$ commutes with $A$).
+\end{defi}
+
+Since $1 \in A_0 \subseteq A_1$, we know $T \in \Rees(A)$. The key observation is that
+\begin{itemize}
+ \item $\Rees(A)/(T) \cong \gr A$.
+ \item $\Rees(A)/(1 - T) \cong A$.
+\end{itemize}
+
+Since $A_n(k)$ and $\mathcal{U}(\mathfrak{g})$ are finitely-generated algebras, they come with a natural filtering induced by the generating set. It turns out, in both cases, the associated graded algebras are pretty simple.
+\begin{eg}
+ Let $A = A_n(k)$, with generating set $X_1, \cdots, X_n$ and $Y_1, \cdots, Y_n$. We take the filtration as for a finitely-generated algebra. Now observe that if $a_i \in A_i$, and $a_j \in A_j$, then
+ \[
+ a_i a_j - a_j a_i \in A_{i + j - 2}.
+ \]
+ So we see that $\gr A$ is commutative, and in fact
+ \[
+ \gr A_n(k) \cong k[\bar{X}_1, \cdots, \bar{X}_n, \bar{Y}_1, \cdots, \bar{Y}_n],
+ \]
+ where $\bar{X}_i$, $\bar{Y}_i$ are the images of $X_i$ and $Y_i$ in $A_1/A_0$ respectively. This is not hard to prove, but is rather messy. It requires a careful induction.
+\end{eg}
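The degree drop $a_i a_j - a_j a_i \in A_{i + j - 2}$ can be seen concretely in the representation on polynomials: for instance $[Y^2, X^2] = 4XY + 2$, an element of degree $2 = (2 + 2) - 2$. A numerical check in the $n = 1$ case, with polynomials as coefficient lists (a sketch, not from the notes):

```python
def X(p):
    return [0] + p                                  # multiplication by x

def Y(p):
    return [i * c for i, c in enumerate(p)][1:]     # d/dx

def sub(p, q):
    n = max(len(p), len(q))
    return [a - b for a, b in zip(p + [0] * (n - len(p)),
                                  q + [0] * (n - len(q)))]

p = [1, 2, 3]                                       # 1 + 2x + 3x^2

# [Y^2, X^2] applied to p
comm = sub(Y(Y(X(X(p)))), X(X(Y(Y(p)))))

# expected: (4x d/dx + 2) p -- an operator of degree 2, not 4
four_x_dp = X([4 * c for c in Y(p)])                # 4x p'
two_p = [2 * c for c in p]
n = max(len(four_x_dp), len(two_p))
expected = [a + b for a, b in zip(four_x_dp + [0] * (n - len(four_x_dp)),
                                  two_p + [0] * (n - len(two_p)))]
assert comm == expected
```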
+
+\begin{eg}
+ Let $\mathfrak{g}$ be a Lie algebra, and consider $A = \mathcal{U}(\mathfrak{g})$. This has generating set $x_1, \cdots, x_n$, which is a vector space basis for $\mathfrak{g}$. Again using the filtration for finitely-generated algebras, we get that if $a_i \in A_i$ and $a_j \in A_j$, then
+ \[
+ a_i a_j - a_j a_i \in A_{i + j - 1}.
+ \]
+ So again $\gr A$ is commutative. In fact, we have
+ \[
+ \gr A \cong k[\bar{x}_1, \cdots, \bar{x}_n].
+ \]
 The fact that this is a polynomial algebra is essentially the \term{Poincar\'e--Birkhoff--Witt theorem}, which gives a $k$-vector space basis of ordered monomials for $\mathcal{U}(\mathfrak{g})$.
+\end{eg}
+
+In both cases, we find that $\gr A$ are finitely-generated and commutative, and therefore Noetherian. We want to use this fact to deduce something about $A$ itself.
+
+\begin{lemma}
+ Let $A$ be a positively filtered algebra. If $\gr A$ is Noetherian, then $A$ is left Noetherian.
+\end{lemma}
The same argument applied to right ideals shows that $A$ is also right Noetherian.
+
+\begin{proof}
+ Given a left ideal $I$ of $A$, we can form
+ \[
+ \gr I = \bigoplus \frac{I \cap A_i}{I \cap A_{i - 1}},
+ \]
+ where $I$ is filtered by $\{I \cap A_i\}$. By the isomorphism theorem, we know
+ \[
+ \frac{I \cap A_i}{I \cap A_{i - 1}} \cong \frac{I \cap A_i + A_{i - 1}}{A_{i - 1}} \subseteq \frac{A_i}{A_{i - 1}}.
+ \]
+ Then $\gr I$ is a left graded ideal of $\gr A$.
+
+ Now suppose we have a strictly ascending chain
+ \[
+ I_1 < I_2 < \cdots
+ \]
+ of left ideals. Since we have a positive filtration, for some $A_i$, we have $I_1 \cap A_i \subsetneq I_2 \cap A_i$ and $I_1 \cap A_{i - 1} = I_2 \cap A_{i - 1}$. Thus
+ \[
+ \gr I_1 \subsetneq \gr I_2 \subsetneq \gr I_3 \subsetneq \cdots.
+ \]
 This contradicts the assumption that $\gr A$ is Noetherian. So $A$ must be left Noetherian.
+\end{proof}
Positivity of the filtration is what guarantees that the transition from equality to strict inequality occurs at some finite stage. If we have a $\Z$-filtered algebra instead, then we need to impose some completeness assumption, but we will not go into that.
+
+\begin{cor}
+ $A_n(k)$ and $\mathcal{U}(\mathfrak{g})$ are left/right Noetherian.
+\end{cor}
+
+\begin{proof}
+ $\gr A_n(k)$ and $\gr \mathcal{U}(\mathfrak{g})$ are commutative and finitely generated algebras.
+\end{proof}
+
+Note that there is an alternative filtration for $A_n(k)$ yielding a commutative associated graded algebra, by setting $A_0 = k[X_1, \cdots, X_n]$ and
+\[
+ A_1 = k[X_1, \cdots, X_n] + \sum_{j = 1}^n k[X_1, \cdots, X_n] Y_j,
+\]
i.e.\ linear terms in the $Y$, and then keep on going. Essentially, we are filtering on the degrees of the $Y_i$ only. This also gives a polynomial algebra as an associated graded algebra. The main difference is that when we take the commutator, we don't go down by two degrees, but only one. Later, we will see this is advantageous when we want to get a Poisson bracket on the associated graded algebra.
+
+We can look at further examples of Noetherian algebras.
+\begin{eg}
+ The \term{quantum plane} \term{$k_q[X, Y]$} has generators $X$ and $Y$, with relation
+ \[
+ XY = q YX
+ \]
+ for some $q \in k^\times$. This thing behaves differently depending on whether $q$ is a root of unity or not.
+\end{eg}
+This quantum plane first appeared in mathematical physics.
+\begin{eg}
+ The \term{quantum torus} \term{$k_q[X, X^{-1}, Y, Y^{-1}]$} has generators $X$, $X^{-1}$, $Y$, $Y^{-1}$ with relations
+ \[
+ XX^{-1} = YY^{-1} = 1,\quad XY = q YX.
+ \]
+\end{eg}
+The word ``quantum'' in this context is usually thrown around a lot, and doesn't really mean much apart from non-commutativity, and there is very little connection with actual physics.
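As a quick sanity check (an illustration, not from the notes): when $q$ is a primitive $n$-th root of unity, the quantum plane relation $XY = qYX$ has a finite-dimensional representation by the classical clock and shift matrices, which also represents the quantum torus since both matrices are invertible. A short numerical verification:

```python
import numpy as np

n = 5
q = np.exp(2j * np.pi / n)  # a primitive n-th root of unity

# Clock matrix X = diag(1, q, ..., q^{n-1}) and shift matrix Y: e_j -> e_{j+1 mod n}
X = np.diag(q ** np.arange(n))
Y = np.roll(np.eye(n), 1, axis=0)

# The quantum plane relation XY = q YX holds for this pair,
# and X, Y are invertible, so the quantum torus acts as well
assert np.allclose(X @ Y, q * (Y @ X))
assert abs(np.linalg.det(X)) > 0 and abs(np.linalg.det(Y)) > 0
```

When $q$ is not a root of unity, no such finite-dimensional representation exists, reflecting the different behaviour mentioned above.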
+
+These algebras are both left and right Noetherian. We cannot prove these by filtering, as we just did. We will need a version of Hilbert's basis theorem which allows twisting of the coefficients. This is left as an exercise on the second example sheet.
+
+In the examples of $A_n(k)$ and $\mathcal{U}(\mathfrak{g})$, the associated graded algebras are commutative. However, it turns out we can still capture the non-commutativity of the original algebra by some extra structure on the associated graded algebra.
+
+So suppose $A$ is a (positively) filtered algebra whose associated graded algebra $\gr A$ is commutative. Recall that the filtration has a corresponding Rees algebra, and we saw that $\Rees A / (T) \cong \gr A$. Since $\gr A$ is commutative, this means
+\[
+ [\Rees A, \Rees A] \subseteq (T).
+\]
This induces a map $\Rees (A) \times \Rees (A) \to (T)/(T^2)$, sending $(r, s) \mapsto [r, s] + (T^2)$. Quotienting out by $(T)$ in each factor, this gives a map
+\[
+ \gr A \times \gr A \to \frac{(T)}{(T^2)}.
+\]
+We can in fact identify the right hand side with $\gr A$ as well. Indeed, the map
+\[
+ \begin{tikzcd}[column sep=large]
+ \gr A \cong \displaystyle\frac{\Rees(A)}{(T)} \ar[r, "\text{mult. by $T$}"] & \displaystyle\frac{(T)}{(T^2)}
+ \end{tikzcd},
+\]
+is an isomorphism of $\gr A \cong \Rees A/(T)$-modules. We then have a bracket
+\[
+ \begin{tikzcd}[cdmap]
+ \{\ph, \ph\}: \gr A \times \gr A \ar[r] & \gr A\\
+ (\bar{r}, \bar{s}) \ar[r, maps to] & \{r, s\}
+ \end{tikzcd}.
+\]
Note that in our original filtration of the Weyl algebra $A_n(k)$, since the commutator brings us down by two degrees, this bracket vanishes identically, but the alternative filtration does give a non-zero $\{\ph, \ph\}$.
+
+This $\{\ph, \ph\}$ is an example of a \emph{Poisson bracket}.
+
+\begin{defi}[Poisson algebra]\index{Poisson algebra}
+ An associative algebra $B$ is a \emph{Poisson algebra} if there is a $k$-bilinear bracket $\{\ph, \ph\}: B \times B \to B$ such that
+ \begin{itemize}
+ \item $B$ is a Lie algebra under $\{\ph, \ph\}$, i.e.
+ \[
+ \{r, s\} = - \{s, r\}
+ \]
+ and
+ \[
+ \{\{r, s\}, t\} + \{\{s, t\}, r\} + \{\{t, r\}, s\} = 0.
+ \]
 \item We have the \term{Leibniz rule}
+ \[
+ \{r, st\} = s\{r, t\} + \{r, s\} t.
+ \]
+ \end{itemize}
+\end{defi}
+The second condition says $\{r, \ph\}: B \to B$ is a derivation.
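For a concrete instance (an illustration, not from the notes): the classical bracket $\{f, g\} = \partial_x f\, \partial_y g - \partial_y f\, \partial_x g$ on the polynomial algebra in two variables, which up to normalization is the bracket obtained on $\gr A_1(k)$ with the alternative filtration, satisfies all the Poisson algebra axioms. A sympy check on sample polynomials:

```python
import sympy as sp

x, y = sp.symbols('x y')

def pb(f, g):
    # Classical Poisson bracket on polynomials in x, y
    return sp.expand(sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x))

f, g, h = x**2 * y, x + y**3, x * y

# Antisymmetry: {f, g} = -{g, f}
assert sp.expand(pb(f, g) + pb(g, f)) == 0
# Jacobi identity
assert sp.expand(pb(pb(f, g), h) + pb(pb(g, h), f) + pb(pb(h, f), g)) == 0
# Leibniz rule: {f, gh} = g{f, h} + {f, g}h
assert sp.expand(pb(f, g * h) - (g * pb(f, h) + pb(f, g) * h)) == 0
```

Of course, checking the identities on particular polynomials does not prove them; here they hold identically by the product and chain rules.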
+
+\subsection{More on \tph{$A_n(k)$}{An(k)}{An(k)} and \texorpdfstring{$\mathcal{U}(\mathfrak{g})$}{U(g)}}
+Our goal now is to study modules of $A_n(k)$. The first result tells us we must focus on infinite dimensional modules.
+\begin{lemma}
+ Suppose $\Char k = 0$. Then $A_n(k)$ has no non-zero modules that are finite-dimensional $k$-vector spaces.
+\end{lemma}
+
+\begin{proof}
+ Suppose $M$ is a finite-dimensional module. Then we've got an algebra homomorphism $\theta: A_n(k) \to \End_k(M) \cong M_m(k)$, where $m = \dim_k M$.
+
+ In $A_n(k)$, we have
+ \[
+ Y_1 X_1 - X_1 Y_1 = 1.
+ \]
+ Applying the trace map, we know
+ \[
+ \tr(\theta(Y_1) \theta(X_1) - \theta(X_1) \theta(Y_1)) = \tr I = m.
+ \]
+ But since the trace is cyclic, the left hand side vanishes. So $m = 0$. So $M$ is trivial.
+\end{proof}
+A similar argument works for the quantum torus, but using determinants instead.
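The trace argument can be seen concretely in a small numerical illustration (not from the notes): the trace of any commutator of square matrices vanishes, so $\theta(Y_1)\theta(X_1) - \theta(X_1)\theta(Y_1) = I$ is impossible in $M_m(k)$ for $m > 0$ when $\Char k = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))
B = rng.standard_normal((m, m))

# tr(AB - BA) = 0 since the trace is cyclic...
assert abs(np.trace(A @ B - B @ A)) < 1e-12
# ...but tr(I) = m, so AB - BA = I has no solution for m > 0
assert np.trace(np.eye(m)) == m
```

In characteristic $p$, the conclusion fails since $\tr I = m$ can vanish mod $p$, which is why the lemma assumes $\Char k = 0$.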
+
+We're going to make use of our associated graded algebras from last time, which are isomorphic to polynomial algebras. Given a filtration $\{A_i\}$ of $A$, we may filter a module with generating set $\mathcal{S}$ by setting
+\[
+ M_i = A_i \mathcal{S}.
+\]
+Note that
+\[
+ A_j M_i \subseteq M_{i + j},
+\]
+which allows us to form an \term{associated graded module}\index{$\gr M$}
+\[
 \gr M = \bigoplus \frac{M_i}{M_{i - 1}}.
+\]
+This is a graded $\gr A$-module, which is finitely-generated if $M$ is. So we've got a finitely-generated graded module over a graded commutative algebra.
+
+To understand this further, we prove some results about graded modules over commutative algebras, which is going to apply to our $\gr A$ and $\gr M$.
+\begin{defi}[Poincar\'e series]\index{Poincar\'e series}
+ Let $V$ be a graded module over a graded algebra $S$, say
+ \[
+ V = \bigoplus_{i = 0}^\infty V_i.
+ \]
+ Then the \emph{Poincar\'e series} is
+ \[
+ P(V, t) = \sum_{i = 0}^\infty (\dim V_i) t^i.
+ \]
+\end{defi}
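For example (a sympy illustration using the standard grading): for $S = V = k[X_1, \cdots, X_n]$ with each $X_i$ of degree $1$, the degree-$i$ homogeneous component has dimension $\binom{i + n - 1}{n - 1}$, and the Poincar\'e series is $1/(1 - t)^n$, which is of the rational form in the theorem below with $f = 1$:

```python
import sympy as sp

t = sp.symbols('t')
n, N = 3, 10  # n variables; compare coefficients up to degree N

# dimension of the degree-i homogeneous component of k[X_1, ..., X_n]
dims = [sp.binomial(i + n - 1, n - 1) for i in range(N + 1)]

# Hilbert-Serre form: P(V, t) = f(t) / prod(1 - t^{k_i}) with f = 1, all k_i = 1
P = 1 / (1 - t)**n
series = sp.series(P, t, 0, N + 1).removeO()
assert [series.coeff(t, i) for i in range(N + 1)] == dims
```

This only compares finitely many coefficients, but the identity holds in general by the binomial series.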
+
+\begin{thm}[Hilbert-Serre theorem]\index{Hilbert-Serre theorem}
+ The Poincar\'e series $P(V, t)$ of a finitely-generated graded module
+ \[
+ V = \bigoplus_{i = 0}^\infty V_i
+ \]
 over a finitely-generated commutative algebra
+ \[
+ S = \bigoplus_{i = 0}^\infty S_i
+ \]
+ with homogeneous generating set $x_1, \cdots, x_m$ is a rational function of the form
+ \[
+ \frac{f(t)}{\prod(1 - t^{k_i})},
+ \]
+ where $f(t) \in \Z[t]$ and $k_i$ is the degree of the generator $x_i$.
+\end{thm}
+
+\begin{proof}
+ We induct on the number $m$ of generators. If $m = 0$, then $S = S_0 = k$, and $V$ is therefore a finite-dimensional $k$-vector space. So $P(V, t)$ is a polynomial.
+
 Now suppose $m > 0$, and assume the theorem is true for algebras with at most $m - 1$ homogeneous generators. Multiplication by $x_m$ gives an exact sequence of graded modules
 \[
 0 \to K \to V \overset{x_m}{\longrightarrow} V \to C \to 0,
 \]
 where $K$ is the kernel and $C$ the cokernel of multiplication by $x_m$. Both are finitely-generated graded modules killed by $x_m$, and hence modules over the subalgebra generated by $x_1, \cdots, x_{m - 1}$. Comparing dimensions degree by degree and summing, we get
 \[
 (1 - t^{k_m}) P(V, t) = P(C, t) - t^{k_m} P(K, t) + g(t)
 \]
 for some polynomial $g(t) \in \Z[t]$, and the result follows by induction.
\end{proof}

Now suppose all the generators have degree $k_i = 1$, and write
\[
 P(V, t) = \frac{f(t)}{(1 - t)^d}
\]
with $f(1) \not= 0$, so that $d$ is the order of the pole of $P(V, t)$ at $t = 1$. Then there is a polynomial $\phi(t) \in \Q[t]$ with $\dim V_i = \phi(i)$ for all sufficiently large $i$. In fact, we have
\[
 \phi(t) = \frac{f(1)}{(d - 1)!} t^{d - 1} + \text{lower degree terms}.
\]
Since $f(1) \not= 0$, this has degree $d - 1$.

This implies that
\[
 \chi(i) = \sum_{j = 0}^i \dim V_j
\]
agrees, for all large $i$, with a polynomial $\chi(t) \in \Q[t]$ of degree $d$.
+This $\phi(t)$ is the \term{Hilbert polynomial}, and $\chi(t)$ the \term{Samuel polynomial}. Some people call $\chi(t)$ the Hilbert polynomial instead, though.
+
We now want to apply this to our cases of $\gr A$, where $A = A_n(k)$ or $\mathcal{U}(\mathfrak{g})$, filtered as before. Then we deduce that, for sufficiently large $i$,
\[
 \sum_{j = 0}^i \dim \frac{M_j}{M_{j - 1}} = \chi(i)
\]
for a polynomial $\chi(t) \in \Q[t]$. But we also know
+\[
+ \sum_{j = 0}^i \dim \frac{M_j}{M_{j - 1}} = \dim M_i.
+\]
+We are now in a position to make a definition.
+\begin{defi}[Gelfand-Kirillov dimension]\index{Gelfand-Kirillov dimension}\index{$\GKdim(M)$}\index{$d(M)$}
+ Let $A = A_n(k)$ or $\mathcal{U}(\mathfrak{g})$ and $M$ a finitely-generated $A$-module, filtered as before. Then the \emph{Gelfand-Kirillov dimension} $d(M)$ of $M$ is the degree of the Samuel polynomial of $\gr M$ as a $\gr A$-module.
+\end{defi}
+This makes sense because $\gr A$ is a commutative algebra in this case. \emph{A priori}, it seems like this depends on our choice of filtering on $M$, but actually, it doesn't. For a more general algebra, we can define the dimension as below:
+
+\begin{defi}[Gelfand-Kirillov dimension]\index{Gelfand-Kirillov dimension}\index{$\GKdim(M)$}\index{$d(M)$}
 Let $A$ be a finitely-generated $k$-algebra, filtered as before, and let $M$ be a finitely-generated $A$-module, filtered as before. Then the GK-dimension of $M$ is
+ \[
+ d(M) = \limsup_{n \to \infty} \frac{\log (\dim M_n)}{\log n}.
+ \]
+\end{defi}
+In the case of $A = A_n(k)$ or $\mathcal{U}(\mathfrak{g})$, this matches the previous definition. Again, this does not actually depend on the choice of generating sets.
+
Recall we showed that no non-zero $A_n(k)$-module $M$ can have finite dimension as a $k$-vector space. So we know $d(M) > 0$. Also, we know that $d(M)$ is an integer in the cases $A = A_n(k)$ or $\mathcal{U}(\mathfrak{g})$, since it is the degree of a polynomial. However, for a general algebra $A$ viewed as a module over itself, we can get non-integral values. In fact, the values we can get are $0$, $1$, $2$, and then any real number $\geq 2$ (in particular, nothing strictly between $1$ and $2$ occurs, by Bergman's gap theorem). We can also have $d(M) = \infty$, when $\dim M_n$ grows faster than any polynomial.
+
+\begin{eg}
 If $A = kG$ for a finitely-generated group $G$, then we have $\GKdim(kG) < \infty$ iff $G$ has a subgroup $H$ of finite index with $H$ embedding into the group of upper unitriangular integral matrices, i.e.\ matrices of the form
+ \[
+ \begin{pmatrix}
+ 1 & * & \cdots & *\\
+ 0 & 1 & \cdots & *\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & 1
+ \end{pmatrix}.
+ \]
 This is a theorem of Gromov, and is quite hard to prove.
+\end{eg}
+
+\begin{eg}
+ We have $\GKdim(A)= 0$ iff $A$ is finite-dimensional as a $k$-vector space.
+
+ We have
+ \[
+ \GKdim(k[X]) = 1,
+ \]
+ and in general
+ \[
+ \GKdim(k[X_1, \cdots, X_n]) = n.
+ \]
 Indeed, the space of polynomials of degree at most $m$ in $n$ variables has dimension
 \[
 \dim_k M_m = \binom{m + n}{n}.
 \]
 So we have
 \[
 \chi(t) = \binom{t + n}{n}.
 \]
+ This is of degree $n$, with leading coefficient $\frac{1}{n!}$.
+\end{eg}
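The $\limsup$ formula can be checked numerically for this example (a quick Python illustration): with $\dim M_m = \binom{m + n}{n} \sim m^n/n!$, the ratio $\log(\dim M_m)/\log m$ creeps up to $n$, albeit slowly, since the $\log n!$ correction only decays like $1/\log m$:

```python
from math import comb, log

n = 3
# dim M_m for k[X_1, ..., X_n] filtered by total degree <= m
dim = lambda m: comb(m + n, n)

estimates = [log(dim(m)) / log(m) for m in (10, 10**6, 10**12)]
# The estimates increase towards the GK dimension n = 3
assert estimates[0] < estimates[1] < estimates[2] < n
assert n - estimates[2] < 0.1
```

The slow convergence is why the $\limsup$ is the right thing to take in the definition.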
+
+We can make the following definition, which we will not use again:
+\begin{defi}[Multiplicity]\index{multiplicity}
+ Let $A$ be a commutative algebra, and $M$ an $A$-module. The \emph{multiplicity} of $M$ with $d(M) = d$ is
+ \[
+ d! \times \text{leading coefficient of $\chi(t)$}.
+ \]
+\end{defi}
+On the second example sheet, we will see that the multiplicity is integral.
+
+We continue looking at more examples.
+\begin{eg}
+ We have $d(A_n(k)) = 2n$, and $d(\mathcal{U}(\mathfrak{g})) = \dim_k \mathfrak{g}$. Here we are using the fact that the associated graded algebras are polynomial algebras.
+\end{eg}
+
+\begin{eg}
+ We met $k[X_1, \cdots, X_n]$ as the ``canonical'' $A_n(k)$-module. The filtration of the module matches the one we used when thinking about the polynomial algebra as a module over itself. So we get
+ \[
+ d(k[X_1, \cdots, X_n]) = n.
+ \]
+\end{eg}
+
+\begin{lemma}
+ Let $M$ be a finitely-generated $A_n$-module. Then $d(M) \leq 2n$.
+\end{lemma}
+
+\begin{proof}
+ Take generators $m_1, \cdots, m_s$ of $M$. Then there is a surjective filtered module homomorphism
+ \[
+ \begin{tikzcd}[cdmap]
+ A_n \oplus \cdots \oplus A_n \ar[r] & M\\
+ (a_1, \cdots, a_s) \ar[r, maps to] & \sum a_i m_i
+ \end{tikzcd}
+ \]
+ It is easy to see that quotients can only reduce dimension, so
+ \[
+ \GKdim(M) \leq d(A_n \oplus \cdots \oplus A_n).
+ \]
+ But
+ \[
+ \chi_{A_n \oplus \cdots \oplus A_n} = s \chi_{A_n}
+ \]
+ has degree $2n$.
+\end{proof}
+
+More interestingly, we have the following result:
+\begin{thm}[Bernstein's inequality]\index{Bernstein's inequality}
+ Let $M$ be a non-zero finitely-generated $A_n(k)$-module, and $\Char k = 0$. Then
+ \[
+ d(M) \geq n.
+ \]
+\end{thm}
+
+%\begin{cor}
+% If $M$ is an $A_n(k)$-module, and $\Char k = 0$, then
+% \[
+% d(M) \geq n.
+% \]
+%\end{cor}
+%
+\begin{defi}[Holonomic module]\index{holonomic module}
 An $A_n(k)$-module $M$ is \emph{holonomic} iff $d(M) = n$.
+\end{defi}
If we have a holonomic module, then we can quotient by a maximal submodule, and get a simple holonomic module. For a long time, people thought all simple modules were holonomic, until someone discovered a simple module that is not. In fact, most simple modules are not holonomic, but we somehow managed to believe otherwise.
+
+\begin{proof}
+ Take a generating set and form the canonical filtrations $\{A_i\}$ of $A_n(k)$ and $\{M_i\}$ of $M$. We let $\chi(t)$ be the Samuel polynomial. Then for large enough $i$, we have
+ \[
+ \chi(i) = \dim M_i.
+ \]
+ We claim that
+ \[
+ \dim A_i \leq \dim \Hom_k(M_i, M_{2i}) = \dim M_i \times \dim M_{2i}.
+ \]
+ Assuming this, for large enough $i$, we have
+ \[
+ \dim A_i \leq \chi(i) \chi(2i).
+ \]
+ But we know
+ \[
 \dim A_i = \binom{i + 2n}{2n},
+ \]
+ which is a polynomial of degree $2n$. But $\chi(t) \chi(2t)$ is a polynomial of degree $2 d(M)$. So we get that
+ \[
+ n \leq d(M).
+ \]
+ So it remains to prove the claim. It suffices to prove that the natural map
+ \[
+ A_i \to \Hom_k (M_i, M_{2i}),
+ \]
+ given by multiplication is injective.
+
+ So we want to show that if $a \in A_i \not= 0$, then $a M_i \not= 0$. We prove this by induction on $i$. When $i = 0$, then $A_0 = k$, and $M_0$ is a finite-dimensional $k$-vector space. Then the result is obvious.
+
 If $i > 0$, we suppose the result is true for smaller $i$. Let $a \in A_i$ be non-zero. If $a M_i = 0$, then certainly $a \not\in k$. We express
+ \[
+ a = \sum c_{\boldsymbol\alpha\boldsymbol\beta} X_1^{\alpha_1} X_2^{\alpha_2} \cdots X_n^{\alpha_n} Y_1^{\beta_1} \cdots Y_n^{\beta_n},
+ \]
+ where $\boldsymbol\alpha = (\alpha_1, \cdots, \alpha_n)$, $\boldsymbol\beta = (\beta_1, \cdots, \beta_n)$, and $c_{\boldsymbol\alpha, \boldsymbol\beta} \in k$.
+
 If possible, pick a $j$ such that $c_{\boldsymbol\alpha, \boldsymbol\beta} \not= 0$ for some $\boldsymbol\alpha, \boldsymbol\beta$ with $\alpha_j \not= 0$ (this happens when there is an $X$ involved). Then
+ \[
+ [Y_j, a] = \sum \alpha_j c_{\boldsymbol\alpha, \boldsymbol\beta} X_1^{\alpha_1} \cdots X_j^{\alpha_j - 1} \cdots X_n^{\alpha_n} Y_1^{\beta_1} \cdots Y_n^{\beta_n},
+ \]
+ and this is non-zero, and lives in $A_{i - 1}$.
+
+ If $a M_i = 0$, then certainly $a M_{i - 1} = 0$. Hence
+ \[
+ [Y_j, a] M_{i - 1} = (Y_j a - a Y_j) M_{i - 1} = 0,
+ \]
 using the fact that $Y_j M_{i - 1} \subseteq M_i$. This contradicts the induction hypothesis applied to the non-zero element $[Y_j, a] \in A_{i - 1}$.
+
+ If $a$ only has $Y$'s involved, then we do something similar using $[X_j, a]$.
+\end{proof}
+There is also a geometric way of doing this.
+
+We take $k = \C$. We know $\gr A_n$ is a polynomial algebra
+\[
+ \gr A_n = k[\bar{X}_1, \cdots, \bar{X}_n, \bar{Y}_1, \cdots, \bar{Y}_n],
+\]
+which may be viewed as the coordinate algebra of the cotangent bundle on affine $n$-space $\C^n$. The points of this correspond to the maximal ideals of $\gr A_n$. If $I$ is a left ideal of $A_n(\C)$, then we can form $\gr I$ and we can consider the set of maximal ideals containing it. This gives us the \term{characteristic variety} $\mathrm{Ch}(A_n/I)$.
+
We saw that there was a Poisson bracket on $\gr A_n$, and this may be used to define a skew-symmetric form on the tangent space at any point of the cotangent bundle. In this case, this is a non-degenerate skew-symmetric form.
+
+We can consider the tangent space $U$ of $\mathrm{Ch} (A_n/I)$ at a non-singular point, and there's a theorem of Gabber (1981) which says that $U \supseteq U^\perp$, where $\perp$ is with respect to the skew-symmetric form. By non-degeneracy, we must have $\dim U \geq n$, and we also know that
+\[
+ \dim \mathrm{Ch}(A_n/I) = d(A_n/I).
+\]
+So we find that $d(A_n/I) \geq n$.
+
+In the case of $A = U(\mathfrak{g})$, we can think of $\gr A$ as the coordinate algebra on $\mathfrak{g}^*$, the vector space dual of $\mathfrak{g}$. The Poisson bracket leads to a skew-symmetric form on tangent spaces at points of $\mathfrak{g}^*$. In this case, we don't necessarily get non-degeneracy. However, on $\mathfrak{g}$, we have the adjoint action of the corresponding Lie group $G$, and this induces a co-adjoint action on $\mathfrak{g}^*$. Thus $\mathfrak{g}^*$ is a disjoint union of orbits. If we consider the induced skew-symmetric form on tangent spaces of orbits (at non-singular points), then it is non-degenerate.
+
+\subsection{Injective modules and Goldie's theorem}
+The goal of this section is to prove Goldie's theorem.
+\begin{thm}[Goldie's theorem]
+ Let $A$ be a right Noetherian algebra with no non-zero ideals all of whose elements are nilpotent. Then $A$ embeds in a finite direct sum of matrix algebras over division algebras.
+\end{thm}
The outline of the proof is as follows --- given any $A$, we embed $A$ in an ``injective hull'' $E(A)$. We would then like to imitate what we did in Artin--Wedderburn and decompose $\End(E(A))$ into a direct sum of matrix algebras over division algebras. This is not quite possible; we first have to quotient $\End(E(A))$ by some ideal $I$.
+
On the other hand, we do not actually have an embedding of $A \cong \End_A(A)$ into $\End(E(A))$. Instead, what we have is only a homomorphism $\End_A(A) \to \End(E(A))/I$, where we quotient out by the same ideal $I$. So our two problems happen to cancel each other out.
+
+We will then prove that the kernel of this map contains only nilpotent elements, and then our hypothesis implies this homomorphism is indeed an embedding.
+
+We begin by first constructing the injective hull. This is going to involve talking about injective modules, which are dual to the notion of projective modules.
+\begin{defi}[Injective module]\index{injective module}\index{module!injective}
+ An $A$-module $E$ is \emph{injective} if for every diagram of $A$-module maps
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & M \ar[r, "\theta", tail] \ar[d, "\phi"] & N \ar[ld, "\psi", dashed]\\
+ & E
+ \end{tikzcd},
+ \]
+ such that $\theta$ is injective, there exists a map $\psi$ that makes the diagram commute. Equivalently, $\Hom(\ph, E)$ is an exact functor.
+\end{defi}
+
+\begin{eg}
+ Take $A = k$. Then all $k$-vector spaces are injective $k$-modules.
+\end{eg}
+
+\begin{eg}
+ Take $A = k[X]$. Then $k(X)$ is an injective $k[X]$-module.
+\end{eg}
+
+\begin{lemma}
 Every direct summand of an injective module is injective, and a direct product of injective modules is injective.
+\end{lemma}
+
+\begin{proof}
+ Same as proof for projective modules.
+\end{proof}
+
+\begin{lemma}
+ Every $A$-module may be embedded in an injective module.
+\end{lemma}
+We say the category of $A$-modules has \term{enough injectives}. The dual result for projectives was immediate, as free modules are projective.
+
+\begin{proof}
+ Let $M$ be a right $A$-module. Then $\Hom_k(A, M)$ is a right $A$-module via
+ \[
+ (fa)(x) = f(ax).
+ \]
+ We claim that $\Hom_k(A, M)$ is an injective module. Suppose we have
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & M_1 \ar[r, "\theta"] \ar[d, "\phi"] & N_1\\
+ & \Hom_k(A, M)
+ \end{tikzcd}
+ \]
+ We consider the $k$-module diagram
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & M_1 \ar[r, "\theta"] \ar[d, "\alpha"] & N_1 \ar[ld, dashed, "\beta"]\\
+ & M
+ \end{tikzcd}
+ \]
+ where $\alpha(m_1) = \phi(m_1)(1)$. Since $M$ is injective as a $k$-module, we can find the $\beta$ such that $\alpha = \beta \theta$. We define $\psi: N_1 \to \Hom_k(A, M)$ by
+ \[
+ \psi(n_1)(x) = \beta(n_1 x).
+ \]
 It is straightforward to check that this does the trick. Also, we have an embedding $M \hookrightarrow \Hom_k(A, M)$ given by $m \mapsto (\phi_m: x \mapsto mx)$.
+\end{proof}
+The category theorist will write the proof in a line as
+\[
+ \Hom_A(\ph, \Hom_k(A, M)) \cong \Hom_k(\ph \otimes_A A , M) \cong \Hom_k(\ph, M),
+\]
+which is exact since $M$ is injective as a $k$-module.
+
Note that neither the construction of $\Hom_k(A, M)$, nor the proof that it is injective, requires the right $A$-module structure of $M$. All we need is that $M$ is an injective $k$-module.
+
+\begin{lemma}
+ An $A$-module is injective iff it is a direct summand of every extension of itself.
+\end{lemma}
+
+\begin{proof}
+ Suppose $E$ is injective and $E'$ is an extension of $E$. Then we can form the diagram
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & E \ar[r] \ar[d, "\mathrm{id}"] & E' \ar[dl, dashed, "\psi"]\\
+ & E
+ \end{tikzcd},
+ \]
+ and then by injectivity, we can find $\psi$. So
+ \[
+ E' = E \oplus \ker \psi.
+ \]
+ Conversely, suppose $E$ is a direct summand of every extension. But by the previous lemma, we can embed $E$ in an injective $E'$. This implies that $E$ is a direct summand of $E'$, and hence injective.
+\end{proof}
+
+There is some sort of ``smallest'' injective a module embeds into, and this is called the \emph{injective hull}, or \emph{injective envelope}. This is why our injectives are called $E$. The ``smallness'' will be captured by the fact that it is an essential extension.
+
+\begin{defi}[Essential submodule]\index{essential submodule}\index{submodule!essential}
+ An \emph{essential submodule} $M$ of an $A$-module $N$ is one where $M \cap V \not= \{0\}$ for every non-zero submodule $V$ of $N$. We say $N$ is an \term{essential extension}\index{extension!essential} of $M$.
+\end{defi}
+
+\begin{lemma}
+ An essential extension of an essential extension is essential.
+\end{lemma}
+
+\begin{proof}
+ Suppose $M < E < F$ are essential extensions. Then given $N \leq F$, we know $N \cap E \not= \{0\}$, and this is a submodule of $E$. So $(N \cap E) \cap M = N \cap M \not= 0$. So $F$ is an essential extension of $M$.
+\end{proof}
+
+\begin{lemma}
+ A maximal essential extension is an injective module.
+\end{lemma}
+Such maximal things exist by Zorn's lemma.
+
+\begin{proof}
+ Let $E$ be a maximal essential extension of $M$, and consider any embedding $E \hookrightarrow F$. We shall show that $E$ is a direct summand of $F$. Let $S$ be the set of all non-zero submodules $V$ of $F$ with $V \cap E = \{0\}$. We apply Zorn's lemma to get a maximal such module, say $V_1$.
+
+ Then $E$ embeds into $F/V_1$ as an essential submodule. By transitivity of essential extensions, $F/V_1$ is an essential extension of $M$, but $E$ is maximal. So $E \cong F/V_1$. In other words,
+ \[
+ F = E \oplus V_1.\qedhere
+ \]
+\end{proof}
+We can now make the following definition:
+\begin{defi}[Injective hull]\index{injective hull}\index{injective envelope}
+ A maximal essential extension of $M$ is the \emph{injective hull} (or \emph{injective envelope}) of $M$, written $E(M)$.
+\end{defi}
+
+\begin{prop}
+ Let $M$ be an $A$-module, with an inclusion $M \hookrightarrow I$ into an injective module. Then this extends to an inclusion $E(M) \hookrightarrow I$.
+\end{prop}
+
+\begin{proof}
+ By injectivity, we can fill in the diagram
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & M \ar[r, hook] \ar[d, hook] & E(M) \ar[dl, "\psi", dashed]\\
+ & I
+ \end{tikzcd}.
+ \]
 We know $\psi$ restricts to the identity on $M$. So $\ker \psi \cap M = \{0\}$. Since $E(M)$ is an essential extension of $M$, we must have $\ker \psi = 0$. So $E(M)$ embeds into $I$.
+\end{proof}
+
+\begin{prop}
+ Suppose $E$ is an injective essential extension of $M$. Then $E \cong E(M)$. In particular, any two injective hulls are isomorphic.
+\end{prop}
+
+\begin{proof}
+ By the previous lemma, $E(M)$ embeds into $E$. But $E(M)$ is a maximal essential extension. So this forces $E = E(M)$.
+\end{proof}
+
+Using what we have got, it is not hard to see that
+\begin{prop}
+ \[
+ E(M_1 \oplus M_2) = E(M_1) \oplus E(M_2).
+ \]
+\end{prop}
+
+\begin{proof}
+ We know that $E(M_1) \oplus E(M_2)$ is also injective (since finite direct sums are the same as direct products), and also $M_1 \oplus M_2$ embeds in $E(M_1) \oplus E(M_2)$. So it suffices to prove this extension is essential.
+
 Let $V \leq E(M_1) \oplus E(M_2)$ be non-zero. If $V \leq E(M_2)$, then $V \cap M_2 \not= 0$ by essentiality, and we are done. So wlog the image $\bar{V}$ of $V$ in
 \[
 \frac{E(M_1) \oplus E(M_2)}{E(M_2)} \cong E(M_1)
 \]
 is non-zero. Since $M_1 \subseteq E(M_1)$ is essential, we know
 \[
 M_1 \cap \bar{V} \not= 0.
 \]
 So there is some $m_1 + m_2 \in V$ with $m_1 \in M_1$ non-zero and $m_2 \in E(M_2)$. Now consider
 \[
 \{m \in E(M_2): am_1 + m \in V\text{ for some }a \in A\}.
 \]
 This is a submodule of $E(M_2)$ containing $m_2$. If it is zero, then $m_1 \in V \cap M_1$ and we are done. Otherwise, by essentiality it contains a non-zero element $n$ of $M_2$, and then $am_1 + n$ is a non-zero element of $V \cap (M_1 \oplus M_2)$.
+\end{proof}
+
+The next two examples of injective hulls will be stated without proof: % insert proof?
+\begin{eg}
+ Take $A = k[X]$, and $M = k[X]$. Then $E(M) = k(X)$.
+\end{eg}
+
+\begin{eg}
+ Let $A = k[X]$ and $V = k$ be the trivial module, where $X$ acts by $0$. Then
+ \[
+ E(V) = \frac{k[X, X^{-1}]}{X k[X]},
+ \]
+ which is a quotient of $A$-modules. We note $V$ embeds in this as
+ \[
+ V \cong \frac{k[X]}{X k[X]} \hookrightarrow \frac{k[X, X^{-1}]}{X k[X]}.
+ \]
+\end{eg}
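A quick way to see why this extension is essential (a toy computation in plain Python, not from the notes): representing an element of the quotient by its dictionary of non-positive exponents, repeatedly applying $X$ maps any non-zero element onto a non-zero element of the copy of $V$:

```python
# Elements of k[X, X^{-1}] / X k[X] as {exponent: coefficient} with exponents <= 0
def x_act(f):
    # Multiply by X: shift exponents up, then discard positive-degree terms
    return {e + 1: c for e, c in f.items() if e + 1 <= 0 and c != 0}

f = {-3: 2, -1: 5, 0: 7}   # the class of 2X^{-3} + 5X^{-1} + 7
for _ in range(3):
    f = x_act(f)
# X^3 . f = 2, a non-zero element of the copy of V: the extension is essential
assert f == {0: 2}
# and X . 1 = X = 0 in the quotient, matching the trivial action on V
assert x_act(f) == {}
```

In general, if the lowest exponent appearing in $f$ is $-d$, then $X^d \cdot f$ is a non-zero scalar, so every non-zero submodule meets the image of $V$.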
+
+%\begin{eg}
+% Let $A$ be a finite-dimensional $k$-algebra. If $P$ is a left $A$-module, then $P$ is indecomposable projective iff $P^*$ is indecomposable injective $A$-module.
+%
+% Indeed, if $P$ is projective, then it is a summand of $_AA$, and if $P^*$ is injective, then it is a summand of $(_AA)^*$
+%
+% If $A$ is Frobenius, then $(_AA)^* \cong A_A$. Then $P^*$ is a projective module. So in this case, injectives are the same as projectives. % what is Frobenius?
+%\end{eg}
+% fix this
+
+\begin{defi}[Uniform module]\index{uniform module}\index{module!uniform}
+ A non-zero module $V$ is \emph{uniform} if given non-zero submodules $V_1, V_2$, then $V_1 \cap V_2 \not= \{0\}$.
+\end{defi}
+
+\begin{lemma}
+ $V$ is uniform iff $E(V)$ is indecomposable.
+\end{lemma}
+
+\begin{proof}
+ Suppose $E(V) = A \oplus B$, with $A, B$ non-zero. Then $V \cap A \not= \{0\}$ and $V \cap B \not= \{0\}$ since the extension is essential. So we have two non-zero submodules of $V$ that intersect trivially.
+
+ Conversely, suppose $V$ is not uniform, and let $V_1, V_2$ be non-zero submodules that intersect trivially. By Zorn's lemma, we suppose these are maximal submodules that intersect trivially. We claim
+ \[
 E(V_1) \oplus E(V_2) = E(V_1 \oplus V_2) = E(V).
+ \]
+ To prove this, it suffices to show that $V$ is an essential extension of $V_1 \oplus V_2$, so that $E(V)$ is an injective hull of $V_1 \oplus V_2$.
+
 Let $W \leq V$ be non-zero. If $W \cap (V_1 \oplus V_2) = 0$, then $(V_1, V_2 \oplus W)$ is a larger pair of submodules with trivial intersection, which is not possible. So we are done.
+\end{proof}
+
+\begin{defi}[Domain]\index{domain}
+ An algebra is a \emph{domain} if $xy = 0$ implies $x = 0$ or $y = 0$.
+\end{defi}
+This is just the definition of an integral domain, but when we have non-commutative algebras, we usually leave out the word ``integral''.
+
+To show that the algebras we know and love are indeed domains, we again do some deformation.
+\begin{lemma}
+ Let $A$ be a filtered algebra, which is exhaustive and separated. Then if $\gr A$ is a domain, then so is $A$.
+\end{lemma}
+
+\begin{proof}
+ Let $x \in A_i \setminus A_{i - 1}$, and $y \in A_j \setminus A_{j - 1}$. We can find such $i, j$ for any elements $x, y \in A$ because the filtration is exhaustive and separated. Then we have
+ \begin{align*}
+ \bar{x} &= x + A_{i - 1} \not= 0 \in A_i/A_{i - 1}\\
+ \bar{y} &= y + A_{j - 1} \not= 0 \in A_j/A_{j - 1}.
+ \end{align*}
+ If $\gr A$ is a domain, then we deduce $\bar{x}\bar{y} \not= 0$. So we deduce that $xy \not \in A_{i + j - 1}$. In particular, $xy \not = 0$.
+\end{proof}
+
+\begin{cor}
+ $A_n(k)$ and $\mathcal{U}(\mathfrak{g})$ are domains.
+\end{cor}
+
+\begin{lemma}
+ Let $A$ be a right Noetherian domain. Then $A_A$ is uniform, i.e.\ $E(A_A)$ is indecomposable.
+\end{lemma}
+
+\begin{proof}
+ Suppose not, and so there are $xA$ and $yA$ non-zero such that $xA \cap yA = \{0\}$. So $xA \oplus yA$ is a direct sum.
+
 But $A$ is a domain and so $yA \cong A$ as a right $A$-module. Thus $yxA \oplus y^2A$ is a direct sum living inside $yA$. Further decomposing $y^2A$, we find that
+ \[
+ xA \oplus yx A \oplus y^2 xA \oplus \cdots \oplus y^n xA
+ \]
 is a direct sum of non-zero submodules. Letting $n \to \infty$, the partial sums give an infinite strictly ascending chain, which is a contradiction.
+\end{proof}
+
+Recall that when we proved Artin--Wedderburn, we needed to use Krull--Schmidt, which told us the decomposition is unique up to re-ordering. That relied on the endomorphism algebra being local. We need something similar here.
+
+\begin{lemma}
+ Let $E$ be an indecomposable injective right module. Then $\End_A(E)$ is a local algebra, with the unique maximal ideal given by
+ \[
+ I = \{f \in \End(E): \ker f\text{ is essential}\}.
+ \]
+\end{lemma}
+Note that since $E$ is indecomposable injective, given any non-zero $V \leq E$, we know $E(V)$ embeds into, and hence is a direct summand of $E$. Hence $E(V) = E$. So $\ker f$ being essential is the same as saying $\ker f$ being non-zero. However, this description of the ideal will be useful later on.
+
+\begin{proof}
+ Let $f: E \to E$ and $\ker f = \{0\}$. Then $f(E)$ is an injective module, and so is a direct summand of $E$. But $E$ is indecomposable. So $f$ is surjective. So it is an isomorphism, and hence invertible. So it remains to show that
+ \[
+ I = \{f \in \End(E): \ker f\text{ is essential}\}
+ \]
+ is an ideal.
+
+ If $\ker f$ and $\ker g$ are essential, then $\ker (f + g) \geq \ker f \cap \ker g$, and the intersection of essential submodules is essential. So $\ker (f + g)$ is also essential.
+
 Also, if $\ker g$ is essential and $f$ is arbitrary, then $\ker (f \circ g) \geq \ker g$ and $\ker (g \circ f) \geq f^{-1}(\ker g)$, and both are essential. So $I$ is an ideal. Since every element not in $I$ is invertible, $I$ is the unique maximal ideal, and $\End_A(E)$ is local.
+\end{proof}
+
+The point of this lemma is to allow us to use Krull--Schmidt.
+
+\begin{lemma}
+ Let $M$ be a non-zero Noetherian module. Then $M$ is an essential extension of a direct sum of uniform submodules $N_1, \cdots, N_r$. Thus
+ \[
 E(M) \cong E(N_1) \oplus \cdots \oplus E(N_r)
+ \]
+ is a direct sum of finitely many indecomposables.
+
+ This decomposition is unique up to re-ordering (and isomorphism).
+\end{lemma}
+
+\begin{proof}
 We first show any non-zero Noetherian module contains a uniform one. Suppose not, so $M$ is in particular not uniform. Then it contains non-zero $V_1, V_2'$ with $V_1 \cap V_2' = 0$. But $V_2'$ is not uniform by assumption. So it contains non-zero $V_2$ and $V_3'$ with zero intersection. Repeating this, the partial sums
 \[
 V_1 \subsetneq V_1 \oplus V_2 \subsetneq V_1 \oplus V_2 \oplus V_3 \subsetneq \cdots
 \]
 form a strictly ascending chain of submodules of $M$, which is a contradiction.
+
+ Now for non-zero Noetherian $M$, pick $N_1$ uniform in $M$. Either $N_1$ is essential in $M$, and we're done, or there is some $N_2'$ non-zero with $N_1 \cap N_2' = 0$. We pick $N_2$ uniform in $N_2'$. Then either $N_1 \oplus N_2$ is essential, or\ldots
+
+ And we are done since $M$ is Noetherian. Taking injective hulls, we get
+ \[
+ E(M) = E(N_1) \oplus \cdots \oplus E(N_r),
+ \]
+ and we are done by Krull--Schmidt and the previous lemma.
+\end{proof}
+
+This is the crucial lemma, which isn't really hard. This allows us to define yet another dimension for Noetherian algebras.
+\begin{defi}[Uniform dimension]\index{uniform dimension}\index{dimension!uniform}
+ The \emph{uniform dimension}, or \term{Goldie rank} of $M$ is the number of indecomposable direct summands of $E(M)$.
+\end{defi}
+This is analogous to vector space dimensions in some ways.
+
+\begin{eg}
+ The Goldie rank of domains is $1$, as we showed $A_A$ is uniform. This is true for $A_n(k)$ and $\mathcal{U}(\mathfrak{g})$.
+\end{eg}
+
+\begin{lemma}
+ Let $E_1, \cdots, E_r$ be indecomposable injectives. Put $E = E_1 \oplus \cdots \oplus E_r$. Let $I = \{f \in \End_A(E): \ker f\text{ is essential}\}$. This is an ideal, and then
+ \[
+ \End_A(E)/I \cong M_{n_1}(D_1) \oplus \cdots \oplus M_{n_s}(D_s)
+ \]
+ for some division algebras $D_i$.
+\end{lemma}
+
+\begin{proof}
 We write the decomposition instead as
 \[
 E = E_1^{n_1} \oplus \cdots \oplus E_s^{n_s},
 \]
 where now the $E_i$ are pairwise non-isomorphic. Then as in basic linear algebra, we know elements of $\End(E)$ can be written as an $s \times s$ matrix whose $(i, j)$th entry is an element of $\Hom(E_i^{n_i}, E_j^{n_j})$.

 Now note that if $E_i \not \cong E_j$, then the kernel of any map $E_i \to E_j$ is essential in $E_i$. So quotienting out by $I$ kills all of these ``off-diagonal'' entries.

 Also $\Hom(E_i^{n_i}, E_i^{n_i}) = M_{n_i}(\End(E_i))$, and so quotienting out by $I$ gives $M_{n_i}(\End(E_i)/\{\text{essential kernel}\}) \cong M_{n_i}(D_i)$, where
 \[
 D_i \cong \frac{\End(E_i)}{\{\text{maps with essential kernel}\}},
 \]
 which is a division algebra, since the maps with essential kernel form the unique maximal ideal of the local algebra $\End(E_i)$.
+\end{proof}
+
The final ingredient in the proof of Goldie's theorem is the following lemma:
+\begin{lemma}
+ If $A$ is a right Noetherian algebra, then any $f: A_A \to A_A$ with $\ker f$ essential in $A_A$ is nilpotent.
+\end{lemma}
+
+\begin{proof}
+ Consider
+ \[
+ 0 < \ker f \leq \ker f^2 \leq \cdots.
+ \]
+ Suppose $f$ is not nilpotent. We claim that this is a strictly increasing chain. Indeed, for all $n$, we have $f^n(A_A) \not= 0$. Since $\ker f$ is essential, we know
+ \[
+ f^n(A_A) \cap \ker f \not= \{0\}.
+ \]
 This forces $\ker f^{n + 1} > \ker f^n$ for all $n$, so the chain is strictly increasing, contradicting the fact that $A$ is right Noetherian.
+\end{proof}
+
+We can now prove Goldie's theorem.
+\begin{thm}[Goldie's theorem]\index{Goldie's theorem}
+ Let $A$ be a right Noetherian algebra with no non-zero ideals all of whose elements are nilpotent. Then $A$ embeds in a finite direct sum of matrix algebras over division algebras.
+\end{thm}
+
+\begin{proof}
+ As usual, we have a map
+ \[
+ \begin{tikzcd}[cdmap]
+ A \ar[r] & \End_A(A_A)\\
+ x \ar[r, maps to] & \text{left multiplication by $x$}
+ \end{tikzcd}
+ \]
+ For a map $A_A \to A_A$, it lifts to a map $E(A_A) \to E(A_A)$ by injectivity:
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_A \ar[d, "f"] \ar[r, "\theta"] & E(A_A) \ar[d, dashed, "f'"]\\
+ & A_A \ar[r, "\theta"] & E(A_A)
+ \end{tikzcd}
+ \]
 We can complete the diagram to give a map $f': E(A_A) \to E(A_A)$, which restricts to $f$ on $A_A$. This is not necessarily unique. However, if we have two lifts $f'$ and $f''$, then the difference $f' - f''$ has $A_A$ in the kernel, and hence has an essential kernel. So it lies in $I$. Thus, we obtain a well-defined composite of maps
+ \[
+ \begin{tikzcd}
+ A_A \ar[r] & \End_A(A_A) \ar[r] & \End(E(A_A))/I
+ \end{tikzcd}.
+ \]
 The kernel of this composite consists of those elements of $A$ whose left multiplication has essential kernel. This is an ideal all of whose elements are nilpotent (by the previous lemma). By assumption, any such ideal vanishes. So we have an embedding of $A$ in $\End(E(A_A))/I$, which we know to be a direct sum of matrix algebras over division rings.
+\end{proof}
Goldie didn't present it like this. This work on injective modules is due to Matlis.
+
We saw that (right Noetherian) domains had Goldie rank $1$. So we get that $\End(E(A))/I \cong D$ for some division algebra $D$. So by Goldie's theorem, a right Noetherian domain embeds in a division algebra. In particular, this is true for $A_n(k)$ and $\mathcal{U}(\mathfrak{g})$.
+
+\section{Hochschild homology and cohomology}
+\subsection{Introduction}
+We now move on to talk about Hochschild (co)homology. We will mostly talk about Hochschild cohomology, as that is the one that is interesting. Roughly speaking, given a $k$-algebra $A$ and an $A\mdash A$-bimodule $M$, Hochschild cohomology is an infinite sequence of $k$-vector spaces $HH^n(A, M)$ indexed by $n \in \N$ associated to the data. While there is in theory an infinite number of such vector spaces, we are mostly going to focus on the cases of $n = 0, 1, 2$, and we will see that these groups can be interpreted as things we are already familiar with.
+
+The construction of these Hochschild cohomology groups might seem a bit arbitrary. It is possible to justify these \emph{a priori} using the general theory of homological algebra and/or model categories. On the other hand, Hochschild cohomology is sometimes used as motivation for the general theory of homological algebra and/or model categories. Either way, we are not going to develop these general frameworks, but are going to justify Hochschild cohomology in terms of its practical utility.
+
Unsurprisingly, Hochschild (co)homology was first developed by Hochschild in 1945, albeit only working with algebras of finite (vector space) dimension. It was introduced to give a cohomological interpretation and generalization of some results of Wedderburn. Later in 1962/1963, Gerstenhaber saw how Hochschild cohomology was relevant to the deformations of algebras. More recently, it's been realized that the Hochschild cochain complex has additional algebraic structure, which allows yet more information about deformations.
+
+As mentioned, we will work with $A\mdash A$-bimodules over an algebra $A$. If our algebra has an augmentation, i.e.\ a ring map to $k$, then we can have a decent theory that works with left or right modules. However, for the sake of simplicity, we shall just work with bimodules to make our lives easier.
+
Recall that an $A\mdash A$-bimodule is a $k$-vector space with compatible left and right $A$-actions. For example, $A$ is an $A\mdash A$-bimodule, and we sometimes write it as $_AA_A$ to emphasize this. More generally, we can view $A^{\otimes (n + 2)} = A \otimes_k \cdots \otimes_k A$ as an $A\mdash A$-bimodule by
+\[
+ x(a_0 \otimes a_1 \otimes \cdots \otimes a_{n + 1})y = (x a_0) \otimes a_1 \otimes \cdots \otimes (a_{n + 1} y).
+\]
+The crucial property of this is that for any $n \geq 0$, the bimodule $A^{\otimes (n + 2)}$ is a free $A\mdash A$-bimodule. For example, $A \otimes_k A$ is free on a single generator $1 \otimes_k 1$, whereas if $\{x_i\}$ is a $k$-basis of $A$, then $A \otimes_k A \otimes_k A$ is free on $\{1 \otimes_k x_i \otimes_k 1\}$.
+
+The general theory of homological algebra says we should be interested in such free things.
+\begin{defi}[Free resolution]\index{free resolution}\index{resolution!free}
+ Let $A$ be an algebra and $M$ an $A\mdash A$-bimodule. A \emph{free resolution} of $M$ is an exact sequence of the form
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r, "d_2"] & F_2 \ar[r, "d_1"] & F_1 \ar[r, "d_0"] & F_0 \ar[r] & M
+ \end{tikzcd},
+ \]
+ where each $F_n$ is a free $A\mdash A$-bimodule.
+\end{defi}
+More generally, we can consider a \term{projective resolution}\index{resolution!projective} instead, where we allow the bimodules to be projective. In this course, we are only interested in one particular free resolution:
+
+\begin{defi}[Hochschild chain complex]\index{Hochschild chain complex}\index{chain complex!Hochschild}
 Let $A$ be a $k$-algebra with multiplication map $\mu: A \otimes A \to A$. The \emph{Hochschild chain complex} is
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r, "d_1"] & A \otimes_k A \otimes_k A \ar[r, "d_0"] & A \otimes_k A \ar[r, "\mu"] & A \ar[r] & 0.
+ \end{tikzcd}
+ \]
+ We refer to $A^{\otimes_k(n + 2)}$ as the \term{degree $n$} term. The differential $d: A^{\otimes_k (n + 3)} \to A^{\otimes_k(n + 2)}$ is given by
+ \[
 d(a_0 \otimes_k \cdots \otimes_k a_{n + 2}) = \sum_{i = 0}^{n + 1} (-1)^i a_0 \otimes_k \cdots \otimes_k a_i a_{i + 1} \otimes_k \cdots \otimes_k a_{n + 2}.
+ \]
+\end{defi}
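As a quick sanity check, at the bottom of the complex we have $\mu \circ d = 0$ for the differential $d: A^{\otimes 3} \to A^{\otimes 2}$:
\[
  \mu(d(a_0 \otimes a_1 \otimes a_2)) = \mu(a_0 a_1 \otimes a_2 - a_0 \otimes a_1 a_2) = (a_0 a_1) a_2 - a_0 (a_1 a_2) = 0
\]
by associativity. The same telescoping cancellation gives $d^2 = 0$ in all higher degrees.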
This is a free resolution of $_AA_A$ (the exactness is merely a computation, and we shall leave that as an exercise to the reader). In a nutshell, given an $A\mdash A$-bimodule $M$, its Hochschild homology and cohomology are obtained by applying $\ph\otimes_{A\mdash A} M$ and $\Hom_{A\mdash A}(\ph, M)$ to the Hochschild chain complex, and then taking the homology and cohomology of the resulting chain complex. We shall explore in more detail what this means.
+
+It is a general theorem that we could have applied the functors $\ph \otimes_{A \mdash A} M$ and $\Hom_{A \mdash A} (\ph, M)$ to \emph{any} projective resolution of $_AA_A$ and take the (co)homology, and the resulting vector spaces will be the same. However, we will not prove that, and will just always stick to this standard free resolution all the time.
+
+\subsection{Cohomology}
+As mentioned, the construction of Hochschild cohomology involves applying $\Hom_{A\mdash A}(\ph, M)$ to the Hochschild chain complex, and looking at the terms $\Hom_{A\mdash A}(A^{\otimes(n + 2)}, M)$. This is usually not very convenient to manipulate, as it involves talking about bimodule homomorphisms. However, we note that $A^{\otimes (n + 2)}$ is a free $A\mdash A$-bimodule generated by a basis of $A^{\otimes n}$. Thus, there is a canonical isomorphism
+\[
+ \Hom_{A \mdash A}(A^{\otimes (n + 2)}, M)\cong \Hom_k(A^{\otimes n}, M),
+\]
+and $k$-linear maps are much simpler to work with.
+
+\begin{defi}[Hochschild cochain complex]\index{Hochschild cochain complex}\index{cochain complex!Hochschild}
+ The \emph{Hochschild cochain complex} of an $A\mdash A$-bimodule $M$ is what we obtain by applying $\Hom_{A\mdash A}(\ph, M)$ to the Hochschild chain complex of $A$. Explicitly, we can write it as
+ \[
+ \begin{tikzcd}
+ \Hom_k(k, M) \ar[r, "\delta_0"] & \Hom_k(A, M) \ar[r, "\delta_1"] & \Hom_k(A \otimes A, M) \ar[r] & \cdots,
+ \end{tikzcd}
+ \]
+ where
+ \begin{align*}
+ (\delta_0 f)(a) &= a f(1) - f(1) a\\
+ (\delta_1 f)(a_1 \otimes a_2) &= a_1 f(a_2) - f(a_1 a_2) + f(a_1) a_2\\
+ (\delta_2 f)(a_1 \otimes a_2 \otimes a_3) &= a_1 f(a_2 \otimes a_3) - f(a_1a_2 \otimes a_3) \\
+ &\hphantom{={}}+ f(a_1 \otimes a_2 a_3) - f(a_1 \otimes a_2)a_3\\
+ (\delta_{n - 1} f)(a_1 \otimes \cdots \otimes a_n) &= a_1 f(a_2 \otimes \cdots \otimes a_n)\\
+ &\hphantom{={}}+ \sum_{i = 1}^n (-1)^i f(a_1 \otimes \cdots \otimes a_i a_{i + 1} \otimes \cdots \otimes a_n) \\
+ &\hphantom{={}}+ (-1)^{n + 1} f(a_1 \otimes \cdots \otimes a_{n - 1})a_n
+ \end{align*}
+ The reason the end ones look different is that we replaced $\Hom_{A\mdash A}(A^{\otimes (n + 2)}, M)$ with $\Hom_k(A^{\otimes n}, M)$.
+\end{defi}
+
+The crucial observation is that the exactness of the Hochschild chain complex, and in particular the fact that $d^2 = 0$, implies $\im \delta_{n - 1} \subseteq \ker \delta_n$.
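For instance, in the lowest degree, we can check $\delta_1 \circ \delta_0 = 0$ by hand: writing $m = f(1)$ for $f \in \Hom_k(k, M)$, we have
\begin{align*}
  (\delta_1 \delta_0 f)(a_1 \otimes a_2) &= a_1(a_2 m - m a_2) - \big((a_1 a_2) m - m (a_1 a_2)\big) + (a_1 m - m a_1) a_2\\
  &= 0,
\end{align*}
since the six terms cancel in pairs.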
+\begin{defi}[Cocycles]\index{cocycles}
+ The \emph{cocycles} are the elements in $\ker \delta_n$.
+\end{defi}
+
+\begin{defi}[Coboundaries]\index{coboundaries}
+ The \emph{coboundaries} are the elements in $\im \delta_n$.
+\end{defi}
+These names came from algebraic topology.
+
+\begin{defi}[Hochschild cohomology groups]\index{Hochschild cohomology groups}\index{$\HH^n(A, M)$}
+ We define
+ \begin{align*}
+ \HH^0(A, M) &= \ker \delta_0\\
 \HH^n(A, M) &= \frac{\ker \delta_n}{\im \delta_{n - 1}}
+ \end{align*}
+ These are $k$-vector spaces.
+\end{defi}
If we do not want to single out $\HH^0$, we can extend the Hochschild cochain complex to the left by $0$ and set $\delta_n = 0$ for $n < 0$ (or equivalently extend the Hochschild chain complex to the right similarly). Then
+\[
+ \HH^0(A, M) = \frac{\ker \delta_0}{\im \delta_{-1}} = \ker \delta_0.
+\]
+The first thing we should ask ourselves is when the cohomology groups vanish. There are two scenarios where we can immediately tell that the (higher) cohomology groups vanish.
+
+\begin{lemma}
+ Let $M$ be an injective bimodule. Then $\HH^n(A, M) = 0$ for all $n \geq 1$.
+\end{lemma}
+
+\begin{proof}
+ $\Hom_{A\mdash A}(\ph, M)$ is exact.
+\end{proof}
+
+\begin{lemma}
+ If $_AA_A$ is a projective bimodule, then $\HH^n(A, M) = 0$ for all $M$ and all $n \geq 1$.
+\end{lemma}
+If we believed the previous remark that we could compute Hochschild cohomology with any projective resolution, then this result is immediate --- indeed, we can use $\cdots \to 0 \to 0 \to A \to A \to 0$ as the projective resolution. However, since we don't want to prove such general results, we shall provide an explicit computation.
+
+\begin{proof}
+ If $_AA_A$ is projective, then all $A^{\otimes n}$ are projective. At each degree $n$, we can split up the Hochschild chain complex as the short exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \displaystyle\frac{A^{\otimes(n + 3)}}{\ker d_n} \ar[r, hook, "d_n"] & A^{\otimes(n + 2)} \ar[r, two heads, "d_{n - 1}"] & \im d_{n - 1} \ar[r] & 0
+ \end{tikzcd}
+ \]
 Now $\im d_{n - 1}$ is a submodule of $A^{\otimes (n + 1)}$, and is hence projective. So we have
+ \[
+ A^{\otimes(n + 2)} \cong \frac{A^{\otimes(n + 3)}}{\ker d_n}\oplus \im d_{n - 1},
+ \]
+ and we can write the Hochschild chain complex at $n$ as
+ \[
+ \begin{tikzcd}[row sep=tiny]
+ \displaystyle \ker d_n \oplus \frac{A^{\otimes(n + 3)}}{\ker d_n} \ar[r, "d_n"] & \displaystyle \frac{A^{\otimes(n + 3)}}{\ker d_n}\oplus \im d_{n - 1} \ar[r, "d_{n - 1}"] & \displaystyle \frac{A^{\otimes(n + 1)}}{\im d_{n - 1}} \oplus \im d_{n - 1}\\
+ (a, b) \ar[r, maps to] & (b, 0)\\
+ & (c, d) \ar[r, maps to] & (0, d)
+ \end{tikzcd}
+ \]
+ Now $\Hom(\ph, M)$ certainly preserves the exactness of this, and so the Hochschild cochain complex is also exact. So we have $\HH^n(A, M) = 0$ for $n \geq 1$.
+\end{proof}
+This is a rather strong result. By knowing something about $A$ itself, we deduce that the Hochschild cohomology of \emph{any} bimodule whatsoever must vanish.
+
+Of course, it is not true that $\HH^n(A, M)$ vanishes in general for $n \geq 1$, or else we would have a rather boring theory. In general, we define
+\begin{defi}[Dimension]\index{dimension}
+ The \emph{dimension} of an algebra $A$ is
+ \[
+ \Dim A = \sup\{n: \HH^n(A, M) \not= 0\text{ for some $A\mdash A$-bimodule $M$}\}.
+ \]
+ This can be infinite if such a sup does not exist.
+\end{defi}
Thus, we showed that if $_AA_A$ embeds as a direct summand in $A \otimes A$, then $\Dim A = 0$.
+
\begin{defi}[$k$-separable]\index{separable algebra}\index{$k$-separable algebra}\index{algebra!$k$-separable}
+ An algebra $A$ is \emph{$k$-separable} if $_AA_A$ embeds as a direct summand of $A \otimes A$.
+\end{defi}
Since $A \otimes A$ is a free $A\mdash A$-bimodule, this condition is equivalent to $A$ being projective. However, there is some point in writing the definition like this. Note that an $A\mdash A$-bimodule is equivalently a left $A \otimes A^{\op}$-module. Then $_AA_A$ is a direct summand of $A \otimes A$ if and only if there is a \term{separating idempotent} $e \in A \otimes A^\op$ so that $_AA_A$, viewed as an $A \otimes A^{\op}$-module, is $(A \otimes A^\op)e$.
+
This is technically convenient, because it is often easier to write down a separating idempotent than to prove directly that $A$ is projective.
+
Note that whether we write $A \otimes A^\op$ or $A \otimes A$ is merely a matter of convention. They have the same underlying set. The notation $A \otimes A$ is more convenient when we take higher powers, but we can think of $A \otimes A^\op$ as taking $A$ as a left-$A$ right-$k$ module and $A^\op$ as a left-$k$ right-$A$ module, and tensoring them gives an $A\mdash A$-bimodule.
+
+We just proved that separable algebras have dimension $0$. Conversely, we have
+\begin{lemma}
+ If $\Dim A = 0$, then $A$ is separable.
+\end{lemma}
+
+\begin{proof}
+ Note that there is a short exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \ker \mu \ar[r] & A \otimes A \ar[r, "\mu"] & A \ar[r] & 0
+ \end{tikzcd}
+ \]
+ If we can show this splits, then $A$ is a direct summand of $A \otimes A$. To do so, we need to find a map $A \otimes A \to \ker \mu$ that restricts to the identity on $\ker \mu$.
+
+ To do so, we look at the first few terms of the Hochschild chain complex
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r, "d"] & \im d \oplus \ker \mu \ar[r] & A \otimes A \ar[r, "\mu"] & A \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ By assumption, for any $M$, applying $\Hom_{A\mdash A} (\ph, M)$ to the chain complex gives an exact sequence. Omitting the annoying $_{A \mdash A}$ subscript, this sequence looks like
+ \begin{multline*}
+ 0 \longrightarrow \Hom(A, M) \overset{\mu^*}{\longrightarrow} \Hom(A \otimes A, M)\\
+ \overset{(*)}{\longrightarrow} \Hom(\ker \mu, M) \oplus \Hom(\im d, M) \overset{d^*}{\longrightarrow} \cdots
+ \end{multline*}
+ Now $d^*$ sends $\Hom(\ker \mu, M)$ to zero. So $\Hom(\ker \mu, M)$ must be in the image of $(*)$. So the map
+ \[
+ \Hom(A \otimes A, M) \longrightarrow \Hom(\ker \mu, M)
+ \]
+ must be surjective. This is true for any $M$. In particular, we can pick $M = \ker \mu$. Then the identity map $\id_{\ker \mu}$ lifts to a map $A \otimes A \to \ker \mu$ whose restriction to $\ker \mu$ is the identity. So we are done.
+\end{proof}
+
+\begin{eg}
+ $M_n(k)$ is separable. It suffices to write down the separating idempotent. We let $E_{ij}$ be the elementary matrix with $1$ in the $(i,j)$th slot and $0$ otherwise. We fix $j$, and then
+ \[
+ \sum_i E_{ij} \otimes E_{ji} \in A\otimes A^\op
+ \]
+ is a separating idempotent.
+\end{eg}
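Let us check this, recalling that the multiplication on $A \otimes A^\op$ is given by $(a \otimes b)(c \otimes d) = ac \otimes db$. Writing $e = \sum_i E_{ij} \otimes E_{ji}$, we have
\[
  e^2 = \sum_{i, k} E_{ij} E_{kj} \otimes E_{jk} E_{ji} = \sum_i E_{ij} \otimes E_{ji} = e,
\]
since $E_{ij} E_{kj} = \delta_{jk} E_{ij}$. Moreover, the multiplication map sends $e$ to $\sum_i E_{ij} E_{ji} = \sum_i E_{ii} = 1$.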
+
+\begin{eg}
+ Let $A = kG$ with $\Char k \nmid |G|$. Then $A \otimes A^\op = kG \otimes (kG)^\op \cong kG \otimes kG$. But this is just isomorphic to $k(G \times G)$, which is again semi-simple.
+
 Thus, as a bimodule, $A \otimes A$ is completely reducible. So its quotient $_AA_A$ is a direct summand (of bimodules). So we know that whenever $\Char k \nmid |G|$, then $kG$ is $k$-separable.
+\end{eg}
+
+The obvious question is --- is this notion actually a generalization of separable field extensions? This is indeed the case.
+\begin{fact} % insert a proof
+ Let $L/K$ be a finite field extension. Then $L$ is separable as a $K$-algebra iff $L/K$ is a separable field extension.
+\end{fact}
However, $k$-separable algebras must be finite-dimensional $k$-vector spaces. So this doesn't carry over to the infinite case.
+
In the remainder of the chapter, we will study what Hochschild cohomology in low degrees means. We start with $\HH^0$. The next two propositions follow from just unwrapping the definitions:
+\begin{prop}
+ We have
+ \[
+ \HH^0(A, M) = \{m \in M : am - ma = 0\text{ for all }a \in A\}.
+ \]
+ In particular, $\HH^0(A, A)$ is the center of $A$.
+\end{prop}
+
+\begin{prop}
+ \[
+ \ker \delta_1 = \{f \in \Hom_k(A, M): f(a_1 a_2) = a_1 f(a_2) + f(a_1) a_2\}.
+ \]
+ These are the \term{derivations} from $A$ to $M$. We write this as \term{$\Der(A, M)$}.
+
+ On the other hand,
+ \[
+ \im \delta_0 = \{f \in \Hom_k(A, M): f(a) = am - ma \text{ for some }m \in M\}.
+ \]
+ These are called the \term{inner derivations} from $A$ to $M$. So
+ \[
+ \HH^1(A, M) = \frac{\Der(A, M)}{\mathrm{InnDer}(A, M)}.
+ \]
+ Setting $A = M$, we get the derivations and inner derivations of $A$.
+\end{prop}
+
+\begin{eg}
+ If $A = k[x]$, and $\Char k = 0$, then
+ \[
 \Der A = \left\{p(X) \frac{\d}{\d X} : p(X) \in k[X]\right\},
+ \]
+ and there are no (non-trivial) inner derivations because $A$ is commutative. So we find
+ \[
+ \HH^1(k[X], k[X]) \cong k[X].
+ \]
+\end{eg}
In general, $\Der(A)$ forms a Lie algebra, since
+\[
+ D_1 D_2 - D_2 D_1 \in \End_k(A)
+\]
+is in fact a derivation if $D_1$ and $D_2$ are.
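Indeed, if $D_1$ and $D_2$ are derivations, then
\begin{align*}
  D_1 D_2(ab) &= D_1(D_2(a) b + a D_2(b))\\
  &= D_1 D_2(a)\, b + D_2(a) D_1(b) + D_1(a) D_2(b) + a\, D_1 D_2(b),
\end{align*}
and subtracting the same expression with $D_1$ and $D_2$ interchanged, the cross terms cancel, leaving
\[
  [D_1, D_2](ab) = [D_1, D_2](a)\, b + a\, [D_1, D_2](b).
\]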
+
+There is another way we can think about derivations, which is via semi-direct products.
+
+\begin{defi}[Semi-direct product]\index{semi-direct product}\index{$A \ltimes M$}
+ Let $M$ be an $A\mdash A$-bimodule. We can form the semi-direct product of $A$ and $M$, written $A \ltimes M$, which is an algebra with elements $(a, m) \in A \times M$, and multiplication given by
+ \[
+ (a_1, m_1) \cdot (a_2, m_2) = (a_1 a_2, a_1 m_2 + m_1 a_2).
+ \]
+ Addition is given by the obvious one.
+
+ Alternatively, we can write
+ \[
+ A \ltimes M \cong A + M\varepsilon,
+ \]
+ where $\varepsilon$ commutes with everything and $\varepsilon^2 = 0$. Then $M\varepsilon$ forms an ideal with $(M\varepsilon)^2 = 0$.
+\end{defi}
+In particular, we can look at $A \ltimes A \cong A + A \varepsilon$. This is often written as $A[\varepsilon]$\index{$A[\varepsilon]$}.
+
+Previously, we saw first cohomology can be understood in terms of derivations. We can formulate derivations in terms of this semi-direct product.
+\begin{lemma}
+ We have
+ \[
+ \Der_k(A, M) \cong \{\text{algebra complements to $M$ in $A \ltimes M$ isomorphic to $A$}\}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ A complement to $M$ is an embedded copy of $A$ in $A \ltimes M$,
+ \[
+ \begin{tikzcd}[cdmap]
+ A \ar[r] & A \ltimes M\\
+ a \ar[r, maps to] & (a, D_a)
+ \end{tikzcd}
+ \]
+ The function $A \to M$ given by $a \mapsto D_a$ is a derivation, since under the embedding, we have
+ \[
+ ab \mapsto (ab, a D_b + D_a b).
+ \]
+ Conversely, a derivation $f: A \to M$ gives an embedding of $A$ in $A \ltimes M$ given by $a \mapsto (a, f(a))$.
+\end{proof}
+
+We can further rewrite this characterization in terms of automorphisms of the semi-direct product. This allows us to identify inner derivations as well.
+
+\begin{lemma}
+ We have
+ \[
+ \Der(A, M) \cong \left\{\parbox{6cm}{\centering automorphisms of $A \ltimes M$ of the form $a \mapsto a + f(a) \varepsilon$, $m\varepsilon \mapsto m\varepsilon$}\right\},
+ \]
+ where we view $A \ltimes M \cong A + M \varepsilon$.
+
+ Moreover, the inner derivations correspond to automorphisms achieved by conjugation by $1 + m\varepsilon$, which is a unit with inverse $1 - m \varepsilon$.
+\end{lemma}
+The proof is a direct check.
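For the last part, using that $\varepsilon$ is central and $\varepsilon^2 = 0$, we compute
\[
  (1 + m\varepsilon) a (1 - m\varepsilon) = a + (ma - am)\varepsilon,
\]
so conjugation by $1 + m\varepsilon$ is the automorphism $a \mapsto a + f(a)\varepsilon$ with $f(a) = ma - am$, which is the inner derivation associated to $-m$.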
+
+This applies in particular when we pick $M = {_AA_A}$, and the Lie algebra of derivations of $A$ may be thought of as the set of infinitesimal automorphisms.
+
+\separator
+
+Let's now consider $\HH^2(A, M)$. This is to be understood in terms of extensions, of which the ``trivial'' example is the semi-direct product.
+
+\begin{defi}[Extension]\index{extension}
 Let $A$ be an algebra and $M$ an $A\mdash A$-bimodule. An \emph{extension} of $A$ by $M$ is a $k$-algebra $B$ containing a 2-sided ideal $I$ such that
+ \begin{itemize}
+ \item $I^2 = 0$;
+ \item $B/I \cong A$; and
+ \item $M \cong I$ as an $A\mdash A$-bimodule.
+ \end{itemize}
 Note that since $I^2 = 0$, the left and right multiplication in $B$ induces an $A\mdash A$-bimodule structure on $I$, rather than just a $B\mdash B$-bimodule structure.
+\end{defi}
+
+We let $\pi: B \to A$ be the canonical quotient map. Then we have a short exact sequence
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & I \ar[r] & B \ar[r] & A \ar[r] & 0
+ \end{tikzcd}.
+\]
+Then two extensions $B_1$ and $B_2$ are isomorphic if there is a $k$-algebra isomorphism $\theta: B_1 \to B_2$ such that the following diagram commutes:
+\[
+ \begin{tikzcd}[row sep=small]
+ & & B_1 \ar[rd] \ar[dd, dashed, "\theta"]\\
+ 0 \ar[r] & I \ar[ur] \ar[dr] & & A \ar[r] & 0\\
+ & & B_2 \ar[ur]
+ \end{tikzcd}.
+\]
+Note that the semi-direct product is such an extension, called the \term{split extension}.
+
+\begin{prop}
 There is a bijection between $\HH^2(A, M)$ and the isomorphism classes of extensions of $A$ by $M$.
+\end{prop}
+This is something that should be familiar if we know about group cohomology.
+
+\begin{proof}
+ Let $B$ be an extension with, as usual, $\pi: B \to A$, $I = M = \ker \pi$, $I^2 = 0$. We now try to produce a cocycle from this.
+
+ Let $\rho$ be any $k$-linear map $A \to B$ such that $\pi(\rho(a)) = a$. This is possible since $\pi$ is surjective. Equivalently, $\rho(\pi(b)) = b \bmod I$. We define a $k$-linear map
+ \[
+ f_\rho: A \otimes A \to I \cong M
+ \]
+ by
+ \[
+ a_1 \otimes a_2 \mapsto \rho(a_1) \rho(a_2) - \rho(a_1 a_2).
+ \]
+ Note that the image lies in $I$ since
+ \[
+ \rho(a_1) \rho(a_2) \equiv \rho(a_1 a_2) \pmod I.
+ \]
+ It is a routine check that $f_\rho$ is a $2$-cocycle, i.e.\ it lies in $\ker \delta_2$.
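Indeed, recalling that the action of $A$ on $I$ is given by multiplication by lifts under $\rho$, the computation telescopes:
 \begin{align*}
 (\delta_2 f_\rho)(a_1 \otimes a_2 \otimes a_3) &= \rho(a_1)\big(\rho(a_2)\rho(a_3) - \rho(a_2 a_3)\big) - \big(\rho(a_1 a_2)\rho(a_3) - \rho(a_1 a_2 a_3)\big)\\
 &\hphantom{={}} + \big(\rho(a_1)\rho(a_2 a_3) - \rho(a_1 a_2 a_3)\big) - \big(\rho(a_1)\rho(a_2) - \rho(a_1 a_2)\big)\rho(a_3)\\
 &= 0.
 \end{align*}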
+
+ If we replace $\rho$ by any other $\rho'$, we get $f_{\rho'}$, and we have
+ \begin{align*}
+ &\hphantom{={}}f_\rho(a_1 \otimes a_2) - f_{\rho'} (a_1 \otimes a_2) \\
+ &= \rho(a_1) (\rho(a_2) - \rho'(a_2)) - (\rho(a_1 a_2) - \rho'(a_1 a_2)) + (\rho(a_1) - \rho'(a_1))\rho'(a_2)\\
+ &= a_1 \cdot (\rho(a_2) - \rho'(a_2)) - (\rho(a_1 a_2) - \rho'(a_1 a_2)) + (\rho(a_1) - \rho'(a_1)) \cdot a_2,
+ \end{align*}
 where $\cdot$ denotes the $A\mdash A$-bimodule action on $I$. Thus, we find
+ \[
+ f_\rho - f_{\rho'} = \delta_1(\rho - \rho'),
+ \]
+ noting that $\rho - \rho'$ actually maps to $I$.
+
+ So we obtain a map from the isomorphism classes of extensions to the second cohomology group.
+
+ Conversely, given an $A\mdash A$-bimodule $M$ and a $2$-cocycle $f: A \otimes A \to M$, we let
+ \[
+ B_f = A \oplus M
+ \]
+ as $k$-vector spaces. We define the multiplication map
+ \[
+ (a_1, m_1)(a_2, m_2) = (a_1 a_2, a_1 m_2 + m_1 a_2 + f(a_1 \otimes a_2)).
+ \]
 This is associative precisely because of the 2-cocycle condition. The map $(a, m) \mapsto a$ yields a homomorphism $\pi: B_f \to A$, with kernel $I$ a two-sided ideal of $B_f$ satisfying $I^2 = 0$. Moreover, $I \cong M$ as an $A\mdash A$-bimodule. Taking $\rho: A \to B_f$ given by $\rho(a) = (a, 0)$ recovers the $2$-cocycle we started with.
+
 Finally, let $f'$ be another $2$-cocycle cohomologous to $f$. Then there is a linear map $\tau: A \to M$ with
+ \[
+ f - f' = \delta_1 \tau.
+ \]
+ That is,
+ \[
 f(a_1 \otimes a_2) = f'(a_1 \otimes a_2) + a_1 \cdot \tau(a_2) - \tau(a_1 a_2) + \tau(a_1) \cdot a_2.
+ \]
 Then consider the map $B_f \to B_{f'}$ given by
+ \[
+ (a, m) \mapsto (a, m + \tau(a)).
+ \]
+ One then checks this is an isomorphism of extensions. And then we are done.
+\end{proof}
In the proof, we see that the zero cohomology class corresponds to the semi-direct product.
+
+\begin{cor}
+ If $\HH^2(A, M) = 0$, then all extensions are split.
+\end{cor}
+
+We now prove some theorems that appear to be trivialities. However, they are trivial only because we now have the machinery of Hochschild cohomology. When they were first proved, such machinery did not exist, and they were written in a form that seemed less trivial.
+\begin{thm}[Wedderburn, Malcev]
+ Let $B$ be a $k$-algebra satisfying
+ \begin{itemize}
+ \item $\Dim (B/J(B)) \leq 1$.
+ \item $J(B)^2 = 0$
+ \end{itemize}
 Then there is a subalgebra $A \cong B/J(B)$ of $B$ such that
+ \[
+ B = A \ltimes J(B).
+ \]
+ Furthermore, if $\Dim (B/J(B)) = 0$, then any two such subalgebras $A, A'$ are conjugate, i.e.\ there is some $x \in J(B)$ such that
+ \[
+ A' = (1 + x) A (1 + x)^{-1}.
+ \]
+ Notice that $1 + x$ is a unit in $B$.
+\end{thm}
+In fact, the same result holds if we only require $J(B)$ to be nilpotent. This follows from an induction argument using this as a base case, which is messy and not really interesting.
+
+\begin{proof}
+ We have $J(B)^2 = 0$. Since we know $\Dim (B/J(B)) \leq 1$, we must have
+ \[
+ \HH^2(A, J(B)) = 0
+ \]
+ where
+ \[
+ A \cong \frac{B}{J(B)}.
+ \]
 Note that we regard $J(B)$ as an $A\mdash A$-bimodule here. So we know that all extensions of $A$ by $J(B)$ are split, i.e.\ semi-direct products, as required.
+
 Furthermore, if $\Dim (B/J(B)) = 0$, then we know $\HH^1(A, J(B)) = 0$. So by our older lemmas, we see that complements are all conjugate, as required.
+\end{proof}
+
+\begin{cor}
+ If $k$ is algebraically closed and $\dim_k B < \infty$, then there is a subalgebra $A$ of $B$ such that
+ \[
+ A \cong \frac{B}{J(B)},
+ \]
+ and
+ \[
+ B = A \ltimes J(B).
+ \]
+ Moreover, $A$ is unique up to conjugation by units of the form $1 + x$ with $x \in J(B)$.
+\end{cor}
+
+\begin{proof}
 We need to show that $\Dim (A) = 0$. But we know $B/J(B)$ is a semi-simple $k$-algebra of finite dimension, and in particular is Artinian. So by Artin--Wedderburn, we know $B/J(B)$ is a direct sum of matrix algebras over $k$ (since $k$ is algebraically closed and $\dim_k(B/J(B)) < \infty$).
+
+ We have previously observed that $M_n(k)$ is $k$-separable. Since $k$-separability behaves well under direct sums, we know $B/J(B)$ is $k$-separable, hence has dimension zero.
+
 It is a general fact that $J(B)$ is nilpotent when $B$ is finite-dimensional, so the version of the theorem for nilpotent $J(B)$ applies.
+\end{proof}
+
+\subsection{Star products}
We are now going to study some deformation theory. Suppose $A$ is a $k$-algebra. We write $V$ for the underlying vector space of $A$. Then there is a natural algebra structure on $V \otimes_k k[[t]] = V[[t]]$, which we may write as $A[[t]]$. However, we might wish to consider more interesting algebra structures on this vector space. Of course, we don't want to completely forget the algebra structure on $A$. So we make the following definition:
+
+\begin{defi}[Star product]\index{star product}
+ Let $A$ be a $k$-algebra, and let $V$ be the underlying vector space. A \emph{star product} is an associative $k[[t]]$-bilinear product on $V[[t]]$ that reduces to the multiplication on $A$ when we set $t = 0$.
+\end{defi}
+
+Can we produce non-trivial star products? It seems difficult, because when we write down an attempt, we need to make sure it is in fact associative, and that might take quite a bit of work. One example we have already seen is the following:
+\begin{eg} % return to this
+ Given a filtered $k$-algebra $A'$, we formed the Rees algebra associated with the filtration, and it embeds as a vector space in $(\gr A')[[t]]$. Thus we get a product on $(\gr A')[[t]]$.
+
 There are two cases we are most interested in --- when $A' = A_n(k)$ or $A' = \mathcal{U}(\mathfrak{g})$. We saw that $\gr A'$ was actually a (commutative) polynomial algebra. However, the product on the Rees algebra is non-commutative. So the $*$-product will be non-commutative.
+\end{eg}
+% We are in the world of quantization here. We started with a commutative algebra $\gr A'$, and ended up with a non-commutative star product. For algebraists, this is what ``quantization'' means. Often, if we want to stress the ``quantum'' point of view, we may use $q$ instead of $t$.
+
+In general, the availability of star products is largely controlled by the Hochschild cohomology of $A$. To understand this, let's see what we actually need to specify to get a star product. Since we required the product to be a $k[[t]]$-bilinear map
+\[
+ f: V[[t]] \times V[[t]] \to V[[t]],
+\]
all we need to do is to specify the products of elements of $V = A$. Let $a, b \in V = A$. We write\index{$F_i$}
+\[
+ f(a, b) = ab + t F_1(a, b) + t^2 F_2(a, b) + \cdots.
+\]
+Because of bilinearity, we know $F_i$ are $k$-bilinear maps, and so correspond to $k$-linear maps $V \otimes V \to V$. For convenience, we will write
+\[
+ F_0(a, b) = ab.
+\]
+The only non-trivial requirement $f$ has to satisfy is associativity:
+\[
+ f(f(a, b), c) = f(a, f(b, c)).
+\]
+What condition does this force on our $F_i$? By looking at coefficients of $t$, this implies that for all $\lambda = 0, 1, 2, \cdots$, we have
+\[
+ \sum_{\substack{m + n = \lambda\\m, n \geq 0}} \Big(F_m(F_n(a, b), c) - F_m(a, F_n(b, c))\Big) = 0.\tag{$*$}
+\]
+For $\lambda = 0$, we are just getting the associativity of the original multiplication on $A$. When $\lambda = 1$, then this says
+\[
+ a F_1(b, c) - F_1(ab, c) + F_1(a, bc) - F_1(a, b)c = 0.
+\]
All this says is that $F_1$ is a $2$-cocycle! This is not surprising. Indeed, we saw a while ago that working mod $t^2$, the extensions of $A$ by $_AA_A$ are governed by $\HH^2$. Thus, we will refer to $2$-cocycles as \term{infinitesimal deformations}\index{deformation!infinitesimal} in this context.
+
Note that given an arbitrary $2$-cocycle $A \otimes A \to A$, it may not be possible to produce a star product with the given $2$-cocycle as $F_1$.
+\begin{defi}[Integrable $2$-cocycle]\index{integrable $2$-cocycle}\index{$2$-cocycle!integrable}\index{cocycle!integrable}
+ Let $f: A \otimes A \to A$ be a $2$-cocycle. Then it is \emph{integrable} if it is the $F_1$ of a star product.
+\end{defi}
+We would like to know when a $2$-cocycle is integrable. Let's rewrite $(*)$ as $(\dagger_\lambda)$:
+\[
+ \sum_{\substack{m + n = \lambda\\m, n > 0}}\Big(F_m(F_n(a, b), c) - F_m(a, F_n(b, c))\Big) = (\delta_2 F_\lambda)(a, b, c).\tag{$\dagger_\lambda$}
+\]
+Here we are identifying $F_\lambda$ with the corresponding $k$-linear map $A \otimes A \to A$.
+
+For $\lambda = 2$, this says
+\[
 F_1(F_1(a, b), c) - F_1(a, F_1(b, c)) = (\delta_2 F_2)(a, b, c).
+\]
+If $F_1$ is a $2$-cocycle, then one can check that the LHS gives a $3$-cocycle. If $F_1$ is integrable, then the LHS has to be equal to the RHS, and so must be a coboundary, and thus has cohomology class zero in $\HH^3(A, A)$.
+
In fact, if $F_1, \cdots, F_{\lambda - 1}$ satisfy $(\dagger_1), \cdots, (\dagger_{\lambda - 1})$, then the LHS of $(\dagger_\lambda)$ is also a $3$-cocycle. If it is a coboundary, and we have defined $F_1, \cdots, F_{\lambda - 1}$, then we can define $F_\lambda$ such that $(\dagger_\lambda)$ holds. However, if it is not a coboundary, then we get stuck, and we see that our choice of $F_1, \cdots, F_{\lambda - 1}$ does not lead to a $*$-product.
+
+The $3$-cocycle appearing on the LHS of $(\dagger_\lambda)$ is an \term{obstruction} to integrability.
+
+If, however, they are always coboundaries, then we can inductively define $F_1, F_2, \cdots$ to give a $*$-product. Thus we have proved
+\begin{thm}[Gerstenhaber]
+ If $\HH^3(A, A) = 0$, then all infinitesimal deformations are integrable.
+\end{thm}
+Of course, even if $\HH^3(A, A) \not= 0$, we can still get $*$-products, but we need to pick our $F_1$ more carefully.
+
+Now after producing star products, we want to know if they are equivalent.
+
\begin{defi}[Equivalence of star products]\index{equivalence of star products}\index{star product!equivalence}
+ Two star products $f$ and $g$ are \emph{equivalent} on $V \otimes k[[t]]$ if there is a $k[[t]]$-linear automorphism $\Phi$ of $V[[t]]$ of the form
+ \[
+ \Phi(a) = a + t \phi_1(a) + t^2 \phi_2(a) + \cdots
+ \]
 such that
+ \[
+ f(a, b) = \Phi^{-1} g(\Phi(a), \Phi(b)).
+ \]
+ Equivalently, the following diagram has to commute:
+ \[
+ \begin{tikzcd}
+ V[[t]] \otimes V[[t]] \ar[r, "f"] \ar[d, "\Phi \otimes \Phi"] & V[[t]] \ar[d, "\Phi"]\\
+ V[[t]] \otimes V[[t]] \ar[r, "g"] & V[[t]]
+ \end{tikzcd}
+ \]
+ Star products equivalent to the usual product on $A \otimes k[[t]]$ are called \emph{trivial}\index{trivial!star product}\index{star product!trivial}.
+\end{defi}
+
+\begin{thm}[Gerstenhaber]
+ Any non-trivial star product $f$ is equivalent to one of the form
+ \[
+ g(a, b) = ab + t^n G_n(a, b) + t^{n + 1} G_{n + 1}(a, b) + \cdots,
+ \]
+ where $G_n$ is a $2$-cocycle and not a coboundary. In particular, if $\HH^2(A, A) = 0$, then any star product is trivial.
+\end{thm}
+
+\begin{proof}
+ Suppose as usual
+ \[
+ f(a, b) = ab + t F_1(a, b) + t^2 F_2(a, b) + \cdots,
+ \]
 and suppose $F_1 = \cdots = F_{n - 1} = 0$. Then it follows from $(\dagger)$ that
+ \[
+ \delta_2 F_n = 0.
+ \]
+ If $F_n$ is a coboundary, then we can write
+ \[
+ F_n = - \delta \phi_n
+ \]
+ for some $\phi_n: A \to A$. We set
+ \[
+ \Phi_n(a) = a + t^n \phi_n(a).
+ \]
+ Then we can compute that
+ \[
+ \Phi_n^{-1}(f (\Phi_n(a), \Phi_n(b)))
+ \]
+ is of the form
+ \[
+ ab + t^{n + 1}G_{n + 1}(a, b) + \cdots.
+ \]
 So we have managed to remove a further term, and we can keep going until the first non-zero term is not a coboundary.
+
+ Suppose this never stops. Then $f$ is trivial --- we are using that $\cdots \circ \Phi_{n + 2} \circ \Phi_{n + 1} \circ \Phi_n$ converges in the automorphism ring, since we are adding terms of higher and higher degree.
+\end{proof}
+
+We saw that derivations can be thought of as infinitesimal automorphisms. One can similarly consider $k[[t]]$-linear maps of the form
+\[
+ \Phi(a) = a + t \phi_1(a) + t^2 \phi_2(a) + \cdots
+\]
+and consider whether they define automorphisms of $A \otimes k[[t]]$. Working modulo $t^2$, we have already done this problem --- we are just considering automorphisms of $A[\varepsilon]$, and we saw that these automorphisms correspond to derivations.
+
+\begin{defi}[Integrable derivation]\index{derivation!integrable}\index{integrable!derivation}
+ We say a derivation is \emph{integrable} if there is an automorphism of $A \otimes k[[t]]$ that gives the derivation when we mod $t^2$.
+\end{defi}
+In this case, the obstructions are $2$-cocycles which are not coboundaries.
+
+\begin{thm}[Gerstenhaber]
+ Suppose $\HH^2(A, A) = 0$. Then all derivations are integrable.
+\end{thm}
+The proof is an exercise on the third example sheet.
+
+We haven't had many examples so far, because Hochschild cohomology is difficult to compute. But we can indeed do some examples.
+\begin{eg}
+ Let $A = k[x]$. Since $A$ is commutative, we have
+ \[
+ \HH^0(A, A) = A.
+ \]
 Since $A$ is commutative, all inner derivations of $A$ are zero. So we have
+ \[
+ \HH^1(A, A) = \Der A = \left\{f(x) \frac{\d}{\d x}: f(x) \in k[x]\right\}.
+ \]
+ For any $i > 1$, we have
+ \[
+ \HH^i(A, A) = 0.
+ \]
+ So we have
+ \[
+ \Dim(A) = 1.
+ \]
+ We can do this by explicit calculation. If we look at our Hochschild chain complex, we had a short exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \ker \mu \ar[r] & A \otimes A \ar[r] & A \ar[r] & 0
+ \end{tikzcd}\tag{$*$}
+ \]
+ and thus we have a map
+ \[
+ \begin{tikzcd}
+ A \otimes A \otimes A \ar[r, "d"] & A \otimes A
+ \end{tikzcd}
+ \]
 whose image is $\ker \mu$.
+
 The point is that $\ker \mu$ is a projective $A\mdash A$-bimodule. This implies $\HH^i(A, M) = 0$ for $i \geq 2$, in the same way we showed that $\HH^i(A, M) = 0$ for $i \geq 1$ when $_AA_A$ is a projective $A\mdash A$-bimodule. In particular, $\HH^i(A, A) = 0$ for $i \geq 2$.
+
+ To show that $\ker \mu$ is projective, we notice that
+ \[
+ A \otimes A = k[X] \otimes_k k[X] \cong k[X, Y].
+ \]
+ So the short exact sequence $(*)$ becomes
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & (X - Y)k[X, Y] \ar[r] & k[X, Y] \ar[r] & k[X] \ar[r] & 0
+ \end{tikzcd}.
+ \]
 So $(X - Y)k[X, Y]$ is a free $k[X, Y]$-module, and hence projective.
+
+ We can therefore use our theorems to see that any extension of $k[X]$ by $k[X]$ is split, and any $*$-product is trivial. We also get that any derivation is integrable.
+\end{eg}
+
+\begin{eg}
+ If we take $A = k[X_1, X_2]$, then again this is commutative, and
+ \begin{align*}
+ \HH^0(A, A) &= A\\
+ \HH^1(A, A) &= \Der A.
+ \end{align*}
+ We will talk about $\HH^2$ later, and similarly
+ \[
+ \HH^i(A, A) = 0
+ \]
+ for $i \geq 3$.
+\end{eg}
From this, we see that we may have star products other than the trivial ones, and in fact we know we have some, because we have one arising from the Rees algebra of $A_1(k)$. But we know that any infinitesimal deformation yields a star product. So there are many more.
+
+\subsection{Gerstenhaber algebra}
+We now want to understand the equations $(\dagger)$ better. To do so, we consider the graded vector space
+\[
 \HH^\Cdot (A, A) = \bigoplus_{n = 0}^\infty \HH^n(A, A),
+\]
as a whole. It turns out this has the structure of a \emph{Gerstenhaber algebra}.
+
The first structure to introduce is the cup product. Cup products are a standard tool in cohomology theories. We will write
+\[
+ S^n(A, A) = \Hom_k(A^{\otimes n}, A) = \Hom_{A\mdash A} (A^{\otimes (n + 2)}, {_AA_A}).
+\]
+The Hochschild chain complex is then the graded chain complex $S^{\Cdot}(A, A)$.
+
+\begin{defi}[Cup product]\index{cup product}
+ The \emph{cup product}
+ \[
+ \smile: S^m (A, A) \otimes S^n(A, A) \to S^{m + n}(A, A)
+ \]
+ is defined by
+ \[
+ (f \smile g)(a_1 \otimes \cdots \otimes a_m \otimes b_1 \otimes \cdots \otimes b_n) = f(a_1 \otimes \cdots \otimes a_m) \cdot g(b_1 \otimes \cdots \otimes b_n),
+ \]
+ where $a_i, b_j \in A$.
+\end{defi}
+Under this product, $S^\Cdot(A, A)$ becomes an associative graded algebra.
+
+Observe that
+\[
 \delta(f \smile g) = \delta f \smile g + (-1)^{m} f \smile \delta g
\]
for $f \in S^m(A, A)$ and $g \in S^n(A, A)$. So we say $\delta$ is a (left) \term{graded derivation}\index{derivation!graded} of the graded algebra $S^\Cdot(A, A)$. In homological (graded) algebra, we often use the same terminology, but with suitable sign changes depending on the degrees.
+
+Note that the cocycles are closed under $\smile$. So cup product induces a product on $\HH^\Cdot(A, A)$. If $f \in S^m(A, A)$ and $g \in S^n(A, A)$, and both are cocycles, then
+\[
+ (-1)^m(g \smile f - (-1)^{mn} (f \smile g)) = \delta (f \circ g),
+\]
+where $f \circ g$ is defined as follows: we set\index{$f \circ_i g$}
+\begin{multline*}
+ f \circ_i g (a_1 \otimes \cdots \otimes a_{i - 1} \otimes b_1 \otimes \cdots \otimes b_n \otimes a_{i + 1} \cdots \otimes a_m)\\
+ = f(a_1 \otimes \cdots \otimes a_{i - 1} \otimes g(b_1 \otimes \cdots \otimes b_n) \otimes a_{i + 1} \otimes \cdots \otimes a_m).
+\end{multline*}
+Then we define\index{$f \circ g$}\index{circle product}
+\[
+ f \circ g = \sum_{i = 1}^m (-1)^{(n - 1)(i - 1)} f \circ_i g.
+\]
This product $\circ$ is not associative, but it gives a \term{pre-Lie structure} to $S^\Cdot (A, A)$.
+
+\begin{defi}[Gerstenhaber bracket]\index{Gerstenhaber bracket}
+ The \emph{Gerstenhaber bracket} is
+ \[
 [f, g] = f \circ g - (-1)^{(n + 1)(m + 1)} g \circ f,
 \]
 where $f \in S^m(A, A)$ and $g \in S^n(A, A)$.
+\end{defi}
This defines a graded Lie algebra structure on the Hochschild chain complex, but notice that we have a degree shift by $1$. It is a graded Lie algebra on $S^{\Cdot + 1}(A, A)$.
+
+Of course, we really should define what a graded Lie algebra is.
+\begin{defi}[Graded Lie algebra]\index{graded Lie algebra}\index{Lie algebra!graded}
+ A \emph{graded Lie algebra} is a vector space
+ \[
+ L = \bigoplus L_i
+ \]
+ with a bilinear bracket $[\ph,\ph] : L \times L \to L$ such that
+ \begin{itemize}
+ \item $[L_i, L_j] \subseteq L_{i + j}$;
 \item $[f, g] = -(-1)^{mn} [g, f]$; and
+ \item The \term{graded Jacobi identity}\index{Jacobi identity!graded} holds:
+ \[
+ (-1)^{mp} [[f, g], h] + (-1)^{mn} [[g, h], f] + (-1)^{np} [[h, f] ,g] = 0
+ \]
+ \end{itemize}
+ where $f \in L_m$, $g \in L_n$, $h \in L_p$.
+\end{defi}
+
+In fact, $S^{\Cdot + 1}(A, A)$ is a \term{differential graded Lie algebra} under the Gerstenhaber bracket.
+
+\begin{lemma}
+ The cup product on $\HH^\Cdot(A, A)$ is graded commutative, i.e.
+ \[
 f \smile g = (-1)^{mn} (g \smile f)
 \]
 for $f \in \HH^m(A, A)$ and $g \in \HH^n(A, A)$.
+\end{lemma}
+
+\begin{proof}
 We previously ``noticed'' that
 \[
 (-1)^m(g \smile f - (-1)^{mn} (f \smile g)) = \delta (f \circ g).
 \]
 So if $f$ and $g$ are cocycles, the difference $g \smile f - (-1)^{mn} f \smile g$ is a coboundary, and hence vanishes in cohomology.
+\end{proof}
+
+\begin{defi}[Gerstenhaber algebra]\index{Gerstenhaber algebra}
+ A \emph{Gerstenhaber algebra} is a graded vector space
+ \[
+ H = \bigoplus H^i
+ \]
+ with $H^{\Cdot + 1}$ a graded Lie algebra with respect to a bracket $[\ph, \ph]: H^m \times H^n \to H^{m + n - 1}$, and an associative product $\smile: H^m \times H^n \to H^{m + n}$ which is graded commutative, such that if $f \in H^m$, then $[f, \ph]$ acts as a degree $m - 1$ graded derivation of $\smile$:
+ \[
 [f, g \smile h] = [f, g] \smile h + (-1)^{(m - 1)n} g \smile [f, h]
+ \]
+ if $g \in H^n$.
+\end{defi}
+This is analogous to the definition of a Poisson algebra. We've seen that $\HH^\Cdot(A, A)$ is an example of a Gerstenhaber algebra.
+
+We can look at what happens in low degrees. We know that $H^0$ is a commutative $k$-algebra, and $\smile: H^0 \times H^1 \to H^1$ is a module action.
+
+Also, $H^1$ is a Lie algebra, and $[\ph, \ph]: H^1 \times H^0 \to H^0$ is a Lie module action, i.e.\ $H^0$ gives us a Lie algebra representation of $H^1$. In other words, the corresponding map $[\ph, \ph]: H^1 \to \End_k(H^0)$ gives us a map of Lie algebras $H^1 \to \Der(H^0)$.
+
+The prototype Gerstenhaber algebra is the exterior algebra $\bigwedge \Der A$ for a commutative algebra $A$ (with $A$ in degree $0$).
+
+Explicitly, to define the exterior product over $A$, we first consider the tensor product over $A$ of two $A$-modules $V$ and $W$, defined by
+\[
 V \otimes_A W = \frac{V \otimes_k W}{\bra av \otimes w - v \otimes aw\ket}.
+\]
+The exterior product is then
+\[
+ V \wedge_A V = \frac{V \otimes_A V}{\bra v \otimes v: v \in V\ket}.
+\]
+The product is given by the wedge, and the \term{Schouten bracket} is given by
+\begin{multline*}
 [\lambda_1 \wedge \cdots \wedge \lambda_m, \lambda_1' \wedge \cdots \wedge \lambda_n'] \\
 = (-1)^{(m - 1)(n - 1)} \sum_{i, j} (-1)^{i + j} [\lambda_i, \lambda_j'] \wedge \underbrace{\lambda_1 \wedge \cdots \wedge \lambda_m}_{\text{$i$th missing}} \wedge \underbrace{\lambda_1' \wedge \cdots \wedge \lambda_n'}_{\text{$j$th missing}}.
+\end{multline*}
+For any Gerstenhaber algebra $H = \bigoplus H^i$, there is a canonical homomorphism of Gerstenhaber algebras
+\[
+ \bigwedge_{H^0}H^1 \to H.
+\]
\begin{thm}[Hochschild--Kostant--Rosenberg (HKR) theorem]\index{Hochschild--Kostant--Rosenberg theorem}\index{HKR theorem}
+ If $A$ is a ``smooth'' commutative $k$-algebra, and $\Char k = 0$, then the canonical map
+ \[
+ \bigwedge_A (\Der A) \to \HH^*(A, A)
+ \]
+ is an isomorphism of Gerstenhaber algebras.
+\end{thm}
+We will not say what ``smooth'' means, but this applies if $A = k[X_1, \cdots, X_n]$, or if $k = \C$ or $\R$ and $A$ is appropriate functions on smooth manifolds or algebraic varieties.
+
When this was first stated in the 1960s, only the algebra structure was considered, not the Lie algebra structure.
+
+\begin{eg}
+ Let $A = k[X, Y]$, with $\Char k = 0$. Then $\HH^0(A, A) = A$ and
+ \[
 \HH^1(A, A) = \Der A \cong \left\{p(X, Y) \frac{\partial}{\partial X} + q(X, Y) \frac{\partial}{\partial Y}: p, q \in A\right\}.
+ \]
+ So we have
+ \[
+ \HH^2(A, A) = \Der A \wedge_A \Der A,
+ \]
 which is generated as an $A$-module by $\frac{\partial}{\partial X} \wedge \frac{\partial}{\partial Y}$. Then
+ \[
 \HH^i(A, A) = 0\text{ for all }i \geq 3.
+ \]
+\end{eg}
+
We can now go back to talk about star products. Recall that we considered possible star products on $V \otimes_k k[[t]]$, where $V$ is the underlying vector space of the algebra $A$. We found that associativity of the star product was encapsulated by the equations $(\dagger_\lambda)$. Collectively, these are equivalent to the following statement.
+\begin{defi}[Maurer--Cartan equation]\index{Maurer--Cartan equation}
+ The \emph{Maurer--Cartan equation} is
+ \[
+ \delta f + \frac{1}{2} [f, f]_{\mathrm{Gerst}} = 0
+ \]
+ for the element
+ \[
+ f = \sum t^\lambda F_\lambda,
+ \]
+ where $F_0(a, b) = ab$.
+\end{defi}
+When we write $[\ph, \ph]_{\mathrm{Gerst}}$, we really mean the $k[[t]]$-linear extension of the Gerstenhaber bracket.
+
+If we want to think of things in cohomology instead, then we are looking at things modulo coboundaries. For the graded Lie algebra $\bigwedge^{\Cdot + 1}(\Der A)$, the Maurer--Cartan elements, i.e.\ solutions of the Maurer--Cartan equation, are the \term{formal Poisson structures}. They are formal power series of the form
+\[
 \Pi = \sum t^i \pi_i
+\]
+for $\pi_i \in \Der A \wedge \Der A$, satisfying
+\[
+ [\Pi, \Pi] = 0.
+\]
+There is a deep theorem of Kontsevich from the early 2000's which implies
+
+\begin{thm}[Kontsevich]
+ There is a bijection
+ \[
+ \left\{\parbox{3cm}{\centering equivalence classes of star products}\right\} \longleftrightarrow \left\{\parbox{3cm}{\centering classes of formal Poisson structures}\right\}
+ \]
+ This applies for smooth algebras in $\Char 0$, and in particular for polynomial algebras $A = k[X_1, \cdots, X_n]$.
+\end{thm}
+This is a difficult theorem, and the first proof appeared in 2002.
+
+An unnamed lecturer once tried to give a Part III course with this theorem as the punchline, but the course ended up lasting 2 terms, and they never reached the punchline.
+
+\subsection{Hochschild homology}
+We don't really have much to say about Hochschild homology, but we are morally obliged to at least write down the definition.
+
+To do Hochschild homology, we apply $\ph \otimes_{A\mdash A} M$ for an $A\mdash A$-bimodule $M$ to the Hochschild chain complex.
+\[
+ \begin{tikzcd}
+ \cdots \ar[r, "d"] & A \otimes_k A \otimes_k A \ar[r, "d"] & A \otimes_k A \ar[r, "\mu"] & A \ar[r] & 0.
 \end{tikzcd}
+\]
We will ignore the $\to A \to 0$ bit. We need to consider what $\ph\otimes_{A\mdash A} \ph$ means. If we have bimodules $V$ and $W$, we can regard $V$ as a right $A \otimes A^{\op}$-module, and $W$ as a left $A \otimes A^{\op}$-module. We let
+\[
+ B = A \otimes A^{\op},
+\]
+and then we just consider
+\[
 V \otimes_B W = \frac{V \otimes_k W}{\bra v x \otimes w - v \otimes xw: x \in B\ket} = \frac{V \otimes_k W}{\bra ava' \otimes w - v \otimes a' w a \ket}.
+\]
+Thus we have
+\[
+ \begin{tikzcd}
+ \cdots \ar[r, "b_1"] & (A \otimes_k A \otimes_k A) \otimes_{A\mdash A} M \ar[r, "b_0"] & (A \otimes_k A) \otimes_{A \mdash A}M \cong M
 \end{tikzcd}
+\]
+\begin{defi}[Hochschild homology]\index{Hochschild homology}
+ The \emph{Hochschild homology} groups are
+ \begin{align*}
+ \HH_0 (A, M) &= \frac{M}{\im b_0}\\
+ \HH_i(A, M) &= \frac{\ker b_{i - 1}}{\im b_i}
+ \end{align*}
+ for $i > 0$.
+\end{defi}
+
+A long time ago, we counted the number of simple $kG$-modules for $k$ algebraically closed of characteristic $p$ when $G$ is finite. In the proof, we used $\frac{A}{[A, A]}$, and we pointed out this is $\HH_0(A, A)$.
+
+\begin{lemma}
+ \[
+ \HH_0(A, M) = \frac{M}{\bra xm - mx: m \in M, x \in A\ket}.
+ \]
+ In particular,
+ \[
+ \HH_0(A, A) = \frac{A}{[A, A]}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Exercise.
+\end{proof}
+
+\section{Coalgebras, bialgebras and Hopf algebras}
+We are almost at the end of the course. So let's define an algebra.
+\begin{defi}[Algebra]\index{algebra}\index{$k$-algebra}
+ A \emph{$k$-algebra} is a $k$-vector space $A$ and $k$-linear maps
+ \begin{align*}
+ \mu: A \otimes A &\to A & u: k &\to A\\
+ x \otimes y &\mapsto xy & \lambda &\mapsto \lambda I
+ \end{align*}
+ called the \term{multiplication}/\term{product} and \term{unit} such that the following two diagrams commute:
+ \[
+ \begin{tikzcd}
+ A \otimes A \otimes A \ar[r, "\mu \otimes \id"] \ar[d, "\id \otimes \mu"] & A \otimes A \ar[d, "\mu"]\\
+ A \otimes A \ar[r, "\mu"] & A
+ \end{tikzcd}
+ \quad
+ \begin{tikzcd}
+ k \otimes A \ar[rd, "\cong"] \ar[r, "u \otimes \id"] & A \otimes A \ar[d, "\mu"] & A \otimes k \ar[l, "\id \otimes u"] \ar[ld, "\cong"]\\
+ & A
+ \end{tikzcd}
+ \]
+ These encode \term{associativity} and \term{identity} respectively.
+\end{defi}
+Of course, the point wasn't to actually define an algebra. The point is to define a \emph{co}algebra, whose definition is entirely dual.
+
+\begin{defi}[Coalgebra]\index{coalgebra}
+ A \emph{coalgebra} is a $k$-vector space $C$ and $k$-linear maps
+ \begin{align*}
+ \Delta: C &\to C \otimes C & \varepsilon: C &\to k
+ \end{align*}
+ called \term{comultiplication}/\term{coproduct} and \term{counit} respectively, such that the following diagrams commute:
+ \[
+ \begin{tikzcd}
+ C \otimes C \otimes C & C \otimes C \ar[l, "\id \otimes \Delta"]\\
+ C \otimes C \ar[u, "\Delta \otimes \id"] & C \ar[l, "\Delta"] \ar[u, "\Delta"]
+ \end{tikzcd}
+ \quad
+ \begin{tikzcd}
+ k \otimes C & C \otimes C \ar[l, "\varepsilon \otimes \id"']\ar[r, "\id \otimes \varepsilon"] & C \otimes k\\
 & C \ar[u, "\Delta"] \ar[ru, "\cong"'] \ar[ul, "\cong"]
+ \end{tikzcd}
+ \]
 These encode \term{coassociativity} and \term{coidentity} respectively.
+
+ A \term{morphism}\index{morphism!coalgebras} of coalgebras $f: C \to D$ is a $k$-linear map such that the following diagrams commute:
+ \[
+ \begin{tikzcd}
+ C \ar[d, "\Delta"] \ar[r, "f"] & D \ar[d, "\Delta"]\\
+ C \otimes C \ar[r, "f \otimes f"] & D \otimes D
+ \end{tikzcd}
+ \quad
+ \begin{tikzcd}
+ C \ar[d, "\varepsilon"] \ar[r, "f"] & D \ar[d, "\varepsilon"]\\
+ k \ar[r, equals] & k
+ \end{tikzcd}
+ \]
+ A subspace $I$ of $C$ is a \term{co-ideal} if $\Delta(I) \leq C \otimes I + I \otimes C$, and $\varepsilon(I) = 0$. In this case, $C/I$ inherits a coproduct and counit.
+
 A \term{cocommutative coalgebra}\index{coalgebra!cocommutative} is one for which $\tau \circ \Delta = \Delta$, where $\tau: V \otimes W \to W \otimes V$, given by $v \otimes w \mapsto w \otimes v$, is the ``twist map''.
+\end{defi}
+It might be slightly difficult to get one's head around what a coalgebra actually is. It, of course, helps to look at some examples, and we will shortly do so. It also helps to know that for our purposes, we don't really care about coalgebras \emph{per se}, but things that are both algebras and coalgebras, in a compatible way.
+
+There is a very natural reason to be interested in such things. Recall that when doing representation theory of groups, we can take the tensor product of two representations and get a new representation. Similarly, we can take the dual of a representation and get a new representation.
+
If we try to do this for representations (i.e.\ modules) of general algebras, we see that this is not possible. What is missing is that in fact, the algebras $kG$ and $\mathcal{U}(\mathfrak{g})$ also have the structure of coalgebras. In fact, they are \emph{Hopf algebras}, which we will define soon.
+
+We shall now write down some coalgebra structures on $kG$ and $\mathcal{U}(\mathfrak{g})$.
+\begin{eg}
+ If $G$ is a group, then $kG$ is a co-algebra, with
+ \begin{align*}
+ \Delta(g) &= g \otimes g\\
 \varepsilon\left(\sum \lambda_g g\right) &= \sum \lambda_g.
+ \end{align*}
+ We should think of the specification $\Delta (g) = g \otimes g$ as saying that our groups act diagonally on the tensor products of representations. More precisely, if $V, W$ are representations and $v \in V, w \in W$, then $g$ acts on $v \otimes w$ by
+ \[
+ \Delta(g) \cdot (v \otimes w) = (g \otimes g) \cdot (v \otimes w) = (gv) \otimes (gw).
+ \]
+\end{eg}
+
+\begin{eg}
+ For a Lie algebra $\mathfrak{g}$ over $k$, the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ is a co-algebra with
+ \[
+ \Delta(x) = x \otimes 1 + 1 \otimes x
+ \]
+ for $x \in \mathfrak{g}$, and we extend this by making it an algebra homomorphism.
+
+ To define $\varepsilon$, we note that elements of $\mathcal{U}(\mathfrak{g})$ are uniquely of the form
+ \[
+ \lambda + \sum \lambda_{i_1, \ldots, i_n} x_1^{i_1} \cdots x_n^{i_n},
+ \]
+ where $\{x_i\}$ is a basis of $\mathfrak{g}$ (the PBW theorem). Then we define
+ \[
+ \varepsilon\left(\lambda + \sum \lambda_{i_1, \ldots, i_n} x_1^{i_1} \cdots x_n^{i_n}\right) = \lambda.
+ \]
+ This time, the specification of $\Delta$ is telling us that if $X \in \mathfrak{g}$ and $v, w$ are elements of a representation of $\mathfrak{g}$, then $X$ acts on the tensor product by
+ \[
+ \Delta(X) \cdot (v \otimes w) = Xv \otimes w + v \otimes Xw.
+ \]
+\end{eg}
+
+\begin{eg}
+ Consider
+ \[
+ \mathcal{O}(M_n(k)) = k[X_{ij}: 1 \leq i, j\leq n],
+ \]
+ the polynomial functions on $n \times n$ matrices, where $X_{ij}$ denotes the $ij$th entry. Then we define
+ \[
 \Delta (X_{ij}) = \sum_{\ell = 1}^n X_{i\ell} \otimes X_{\ell j},
+ \]
+ and
+ \[
+ \varepsilon(X_{ij}) = \delta_{ij}.
+ \]
+ These are again algebra maps.
+
 We can also talk about $\mathcal{O}(\GL_n(k))$ and $\mathcal{O}(\SL_n(k))$. The determinant formula gives an element $D \in \mathcal{O}(M_n(k))$. Then $\mathcal{O}(\GL_n(k))$ is obtained by adding a formal inverse to $D$ in $\mathcal{O}(M_n(k))$, and $\mathcal{O}(\SL_n(k))$ is obtained by quotienting $\mathcal{O}(\GL_n(k))$ by the bi-ideal $\bra D - 1\ket$.
+
 From an algebraic geometry point of view, these are the coordinate algebras of the varieties $M_n(k)$, $\GL_n(k)$ and $\SL_n(k)$.
+\end{eg}
+This is dual to matrix multiplication.
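Concretely, viewing $X_{ij}$ as the function sending a matrix to its $(i, j)$th entry, the comultiplication encodes the formula for the entries of a product: for $P, Q \in M_n(k)$,
\[
 X_{ij}(PQ) = \sum_{\ell = 1}^n X_{i\ell}(P)\, X_{\ell j}(Q),
\]
which is exactly $\Delta(X_{ij})$ evaluated at the pair $(P, Q)$, while $\varepsilon(X_{ij}) = \delta_{ij}$ is evaluation at the identity matrix.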
+
+We have seen that we like things that are both algebras and coalgebras, compatibly. These are known as \emph{bialgebras}.
+\begin{defi}[Bialgebra]\index{bialgebra}
 A \emph{bialgebra} is a $k$-vector space $B$ and maps $\mu, u, \Delta, \varepsilon$ such that
+ \begin{enumerate}
+ \item $(B, \mu, u)$ is an algebra.
+ \item $(B, \Delta, \varepsilon)$ is a coalgebra.
+ \item $\Delta$ and $\varepsilon$ are algebra morphisms.
+ \item $\mu$ and $u$ are coalgebra morphisms.
+ \end{enumerate}
+\end{defi}
+Being a bialgebra means we can take tensor products of modules and still get modules. If we want to take duals as well, then it turns out the right notion is that of a Hopf algebra:
+\begin{defi}[Hopf algebra]\index{Hopf algebra}
+ A bialgebra $(H, \mu, u, \Delta, \varepsilon)$ is a \emph{Hopf algebra} if there is an \term{antipode} $S: H \to H$ that is a $k$-linear map such that
+ \[
+ \mu \circ (S \otimes \id) \circ \Delta = \mu \circ (\id \otimes S) \circ \Delta = u \circ \varepsilon.
+ \]
+\end{defi}
+
+\begin{eg}
+ $kG$ is a Hopf algebra with $S(g) = g^{-1}$.
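 The antipode axiom is easy to verify on the basis: for $g \in G$,
 \[
 \mu \circ (S \otimes \id) \circ \Delta(g) = S(g) g = g^{-1} g = 1 = u(\varepsilon(g)),
 \]
 and similarly for $\mu \circ (\id \otimes S) \circ \Delta$.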
+\end{eg}
+
+\begin{eg}
 $\mathcal{U}(\mathfrak{g})$ is a Hopf algebra with $S(x) = -x$ for $x \in \mathfrak{g}$.
+\end{eg}
+
+%\begin{eg}
+% The coordinate algebra $\mathcal{O}(M_n(k))$ is a Hopf algebra in the following way --- we let $D$ be the determinant of a matrix, which is an element of $\mathcal{O}(M_n(k))$. Then $\varepsilon(D) = 1$ and $\Delta(D) = D \otimes D$. So $D - 1$ generates a bi-ideal. We define
+% \[
+% \mathcal{O}(\SL_n(k)) = \frac{\mathcal{O}(\GL_n(k))}{\bra D - 1\ket}.
+% \]
+% This is a Hopf algebra with $SX_{ij}$ being the $ij$th entry of $(X_{ij})^{-1}$ (modulo $D - 1$).
+%\end{eg}
+
Note that our examples are all commutative or co-commutative. The term \term{quantum groups} usually refers to non-commutative, non-co-commutative Hopf algebras. These are neither quantum nor groups.
+
+As usual, we write $V^*$ for $\Hom_k(V, k)$, and we note that if we have $\alpha: V \to W$, then this induces a dual map $\alpha^*: W^* \to V^*$.
+
+\begin{lemma}
+ If $C$ is a coalgebra, then $C^*$ is an algebra with multiplication $\Delta^*$ (that is, $\Delta^*|_{C^* \otimes C^*}$) and unit $\varepsilon^*$. If $C$ is co-commutative, then $C^*$ is commutative.
+\end{lemma}
+
+However, if an algebra $A$ is infinite dimensional as a $k$-vector space, then $A^*$ may not be a coalgebra. The problem is that $(A^* \otimes A^*)$ is a proper subspace of $(A \otimes A)^*$, and $\mu^*$ of an infinite dimensional $A$ need not take values in $A^* \otimes A^*$. However, all is fine for finite dimensional $A$, or if $A$ is graded with finite dimensional components, where we can form a graded dual.
+
+In general, for a Hopf algebra $H$, one can define the \term{Hopf dual},
+\[
+ H^0 = \{f \in H^*: \ker f \text{ contains an ideal of finite codimension}\}.
+\]
+\begin{eg}
 Let $G$ be a finite group. Then $(kG)^*$ is a commutative Hopf algebra, which is non-co-commutative if $G$ is non-abelian.
+
+ Let $\{g\}$ be the canonical basis for $kG$, and $\{\phi_g\}$ be the dual basis of $(kG)^*$. Then
+ \[
+ \Delta(\phi_g) = \sum_{h_1 h_2 = g} \phi_{h_1} \otimes \phi_{h_2}.
+ \]
+\end{eg}
+There is an easy way of producing non-commutative non-co-commutative Hopf algebras --- we take a non-commutative Hopf algebra and a non-co-commutative Hopf algebra, and take the tensor product of them, but this is silly.
+
The easiest non-trivial example of a non-commutative non-co-commutative Hopf algebra is the \emph{Drinfeld double}, or \emph{quantum double}, which is a general construction from a finite-dimensional Hopf algebra.
+
+\begin{defi}[Drinfeld double]\index{Drinfeld double}\index{quantum double}
+ Let $G$ be a finite group. We define
+ \[
+ D(G) = (kG)^* \otimes_k kG
+ \]
+ as a vector space, and the algebra structure is given by the crossed product $(kG)^* \rtimes G$, where $G$ acts on $(kG)^*$ by
+ \[
+ f^g(x) = f(gxg^{-1}).
+ \]
+ Then the product is given by
+ \[
+ (f_1 \otimes g_1) (f_2 \otimes g_2) = f_1 f_2^{g_1^{-1}} \otimes g_1 g_2.
+ \]
+ The coalgebra structure is the tensor of the two coalgebras $(kG)^*$ and $kG$, with
+ \[
+ \Delta (\phi_g \otimes h) = \sum_{g_1 g_2 = g} \phi_{g_1} \otimes h \otimes \phi_{g_2} \otimes h.
+ \]
+ $D(G)$ is \term{quasitriangular}, i.e.\ there is an invertible element $R$ of $D(G) \otimes D(G)$ such that
+ \[
+ R \Delta (x) R^{-1} = \tau (\Delta(x)),
+ \]
+ where $\tau$ is the twist map. This is given by
+ \begin{align*}
+ R &= \sum_g (\phi_g \otimes 1) \otimes (1 \otimes g)\\
 R^{-1} &= \sum_g (\phi_g \otimes 1) \otimes (1 \otimes g^{-1}).
+ \end{align*}
 The equation $R \Delta R^{-1} = \tau \Delta$ results in an isomorphism between $U \otimes V$ and $V \otimes U$ for $D(G)$-modules $U$ and $V$, given by the flip map followed by the action of $R$.
+\end{defi}
+If $G$ is non-abelian, then this is non-commutative and non-co-commutative. The point of defining this is that the representations of $D(G)$ correspond to the $G$-equivariant $k$-vector bundles on $G$.
+
+As we said, this is a general construction.
+\begin{thm}[Mastnak, Witherspoon (2008)]
+ The bialgebra cohomology $H^{\Cdot}_{bi}(H, H)$ for a finite-dimensional Hopf algebra is equal to $\HH^{\Cdot}(D(H), k)$, where $k$ is the trivial module, and $D(H)$ is the Drinfeld double.
+\end{thm}
+\separator
+
+In 1990, Gerstenhaber and Schack defined bialgebra cohomology, and proved results about deformations of bialgebras analogous to our results from the previous chapter for algebras. In particular, one can consider infinitesimal deformations, and up to equivalence, these correspond to elements of the $2$nd cohomology group.
+
+There is also the question as to whether an infinitesimal deformation is integrable to give a bialgebra structure on $V \otimes k[[t]]$, where $V$ is the underlying vector space of the bialgebra.
+
+\begin{thm}[Gerstenhaber--Schack]
 Every deformation is equivalent to one where the unit and counit are unchanged. Also, deformation preserves the existence of an antipode, though the antipode itself may change.
+\end{thm}
+
+\begin{thm}[Gerstenhaber--Schack]
+ All deformations of $\mathcal{O}(M_n(k))$ or $\mathcal{O}(\SL_n(k))$ are equivalent to one in which the comultiplication is unchanged.
+\end{thm}
+
We now try to deform $\mathcal{O}(M_2(k))$. By the previous theorems, we only have to change the multiplication. Consider $\mathcal{O}_q(M_2(k))$ defined by
+\begin{align*}
+ X_{12} X_{11} &= q X_{11} X_{12}\\
+ X_{22} X_{12} &= q X_{12} X_{22}\\
+ X_{21} X_{11} &= q X_{11} X_{21}\\
+ X_{22} X_{21} &= q X_{21} X_{22}\\
+ X_{21} X_{12} &= X_{12} X_{21}\\
+ X_{11} X_{22} - X_{22} X_{11} &= (q^{-1} - q) X_{12} X_{21}.
+\end{align*}
+We define the \emph{quantum determinant}
+\[
+ \det_q = X_{11} X_{22} - q^{-1} X_{12} X_{21} = X_{22} X_{11} - q X_{12} X_{21}.
+\]
+Then
+\[
+ \Delta(\det_q) = \det_q \otimes \det_q,\quad \varepsilon (\det_q) = 1.
+\]
+Then we define
+\[
 \mathcal{O}_q(\SL_2(k)) = \frac{\mathcal{O}_q(M_2(k))}{(\det_q - 1)},
+\]
+where we are quotienting by the 2-sided ideal. It is possible to define an antipode, given by
+\begin{align*}
+ S(X_{11}) &= X_{22}\\
+ S(X_{12}) &= -q X_{12}\\
+ S(X_{21}) &= -q^{-1} X_{21}\\
+ S(X_{22}) &= X_{11},
+\end{align*}
+and this gives a non-commutative and non-co-commutative Hopf algebra. This is an example that we pulled out of a hat. But there is a general construction due to Faddeev--Reshetikhin--Takhtajan (1988) via $R$-matrices, which are a way of producing a $k$-linear map
+\[
+ V \otimes V \to V \otimes V,
+\]
where $V$ is a finite-dimensional vector space.
+
We take a basis $e_1, \cdots, e_n$ of $V$, and thus a basis $e_i \otimes e_j$ of $V \otimes V$. We write $R_{ij}^{\ell m}$ for the matrix of $R$, defined by
+\[
+ R(e_i \otimes e_j) = \sum_{\ell, m} R_{ij}^{\ell m} e_\ell \otimes e_m.
+\]
+The rows are indexed by pairs $(\ell, m)$, and the columns by pairs $(i, j)$, which are put in lexicographic order.
+
+The action of $R$ on $V \otimes V$ induces 3 different actions on $V \otimes V \otimes V$. For $s, t \in \{1, 2, 3\}$, we let $R_{st}$ be the invertible map $V \otimes V \otimes V \to V \otimes V \otimes V$ which acts like $R$ on the $s$th and $t$th components, and identity on the other. So for example,
+\[
 R_{12} (e_i \otimes e_j \otimes v) = \sum_{\ell, m} R_{ij}^{\ell m} e_{\ell} \otimes e_m \otimes v.
+\]
+\begin{defi}[Yang--Baxter equation]
+ $R$ satisfies the \term{quantum Yang--Baxter equation} (\term{QYBE}) if
+ \[
+ R_{12} R_{13} R_{23} = R_{23} R_{13} R_{12}
+ \]
+ and the braided form of QYBE (\term{braid equation}) if
+ \[
+ R_{12} R_{23} R_{12} = R_{23} R_{12} R_{23}.
+ \]
+\end{defi}
Note that $R$ satisfies the QYBE if and only if $R\tau$ satisfies the braid equation. Solutions of either equation are called $R$-matrices.
+
+\begin{eg}
 The identity map and the twist map $\tau$ satisfy both.
+
+ Take $V$ to be $2$-dimensional, and $R$ to be the map
+ \[
+ R_{ij}^{\ell m} =
+ \begin{pmatrix}
+ q & 0 & 0 & 0\\
+ 0 & 1 & 0 & 0\\
+ 0 & q - q^{-1} & 1 & 0\\
+ 0 & 0 & 0 & q
+ \end{pmatrix},
+ \]
 where $0 \not= q \in k$. Thus, we have
+ \begin{align*}
 R(e_1 \otimes e_1) &= q e_1 \otimes e_1\\
+ R(e_2 \otimes e_1) &= e_2 \otimes e_1\\
+ R(e_1 \otimes e_2) &= e_1 \otimes e_2 + (q - q^{-1}) e_2 \otimes e_1\\
+ R(e_2 \otimes e_2) &= q e_2 \otimes e_2,
+ \end{align*}
+ and this satisfies QYBE. Similarly,
+ \[
+ (R \tau)^{\ell m}_{ij} =
+ \begin{pmatrix}
+ q & 0 & 0 & 0\\
+ 0 & 0 & 1 & 0\\
+ 0 & 1 & q - q^{-1} & 0\\
+ 0 & 0 & 0 & q
+ \end{pmatrix}
+ \]
+ satisfies the braid equation.
+\end{eg}
+We now define the general construction.
+\begin{defi}[$R$-symmetric algebra]\index{$R$-symmetric algebra}
+ Given the tensor algebra
+ \[
+ T(V) = \bigoplus_{n = 0}^\infty V^{\otimes n},
+ \]
+ we form the $R$-symmetric algebra
+ \[
+ S_{R}(V) = \frac{T(V)}{\bra z - R(z): z \in V \otimes V\ket}.
+ \]
+\end{defi}
+\begin{eg}
+ If $R$ is the identity, then $S_R(V) = T(V)$.
+\end{eg}
+\begin{eg}
+ If $R = \tau$, then $S_R(V)$ is the usual symmetric algebra.
+\end{eg}
+
+\begin{eg}
+ The \term{quantum plane} $\mathcal{O}_q(k^2)$ can be written as $S_R(V)$ with
+ \begin{align*}
+ R(e_1 \otimes e_2) &= q e_2 \otimes e_1\\
+ R(e_1 \otimes e_1) &= e_1 \otimes e_1\\
+ R(e_2 \otimes e_1) &= q^{-1} e_1 \otimes e_2\\
+ R(e_2 \otimes e_2) &= e_2 \otimes e_2.
+ \end{align*}
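 Writing $x$ and $y$ for the images of $e_1$ and $e_2$ in $S_R(V)$ (our notation), the only non-trivial relations $z - R(z)$ give $xy = q yx$: the relations from $e_1 \otimes e_1$ and $e_2 \otimes e_2$ are vacuous, and $e_2 \otimes e_1 - q^{-1} e_1 \otimes e_2$ gives the same relation again. So
 \[
 \mathcal{O}_q(k^2) \cong \frac{k\bra x, y\ket}{(xy - qyx)},
 \]
 the free algebra on $x, y$ modulo the ideal generated by $xy - qyx$.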
+\end{eg}
+Generally, given a $V$ which is finite-dimensional as a vector space, we can identify $(V \otimes V)^*$ with $V^* \otimes V^*$.
+
+We set $E = V \otimes V^* \cong \End_k(V) \cong M_n(k)$. We define $R_{13}$ and $R_{24}^*: E \otimes E \to E \otimes E$, where $R_{13}$ acts like $R$ on terms $1$ and $3$ in $E = V \otimes V^* \otimes V \otimes V^*$, and identity on the rest; $R_{24}^*$ acts like $R^*$ on terms $2$ and $4$.
+
+\begin{defi}[Coordinate algebra of quantum matrices]\index{coordinate algebra associated with quantum matrices}
+ The coordinate algebra of quantum matrices associated with $R$ is
+ \[
+ \frac{T(E)}{ \bra R_{13}(z) - R_{24}^*(z): z \in E \otimes E\ket} = S_T(E),
+ \]
+ where
+ \[
+ T = R_{24}^* R_{13}^{-1}.
+ \]
+ The coalgebra structure remains the same as for $\mathcal{O}(M_n(k))$. For the comodule structures, we write $E_i$ for the image of $e_i$ in $S_R(V)$, and similarly $F_j$ for the image of $f_j$. Then the coactions are given by
+ \begin{align*}
+ E_i &\mapsto \sum_{j = 1}^n X_{ij} \otimes E_j\\
+ F_j &\mapsto \sum_{i = 1}^n F_i \otimes X_{ij}.
+ \end{align*}
+\end{defi}
+This is the general construction we were after.
+
+\begin{eg}
+ We have
+ \[
+ \mathcal{O}_q(M_2(k)) = A_{R\tau}(V)
+ \]
+ for
+ \[
+ R_{ij}^{\ell m} =
+ \begin{pmatrix}
+ q & 0 & 0 & 0\\
+ 0 & 1 & 0 & 0\\
+ 0 & q - q^{-1} & 1 & 0\\
+ 0 & 0 & 0 & q
+ \end{pmatrix}.
+ \]
+\end{eg}
+\printindex
+\end{document}
diff --git a/books/cam/III_L/logic.tex b/books/cam/III_L/logic.tex
new file mode 100644
index 0000000000000000000000000000000000000000..209307ae01973977c26e632a9da83d22da1afd08
--- /dev/null
+++ b/books/cam/III_L/logic.tex
@@ -0,0 +1,3022 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2017}
+\def\nlecturer {T.\ E.\ Forster}
+\def\ncourse {Logic}
+
+\input{header}
+
+\usepackage{bussproofs}
+\newcommand\intro[1]{\RightLabel{\scriptsize#1-int}}
+\newcommand\intron[2]{\RightLabel{\scriptsize#1-int (#2)}}
+\newcommand\elim[1]{\RightLabel{\scriptsize#1-elim}}
+
+\newcommand\last{\mathsf{last}}
+\newcommand\butlast{\mathsf{butlast}}
+\newcommand\head{\mathsf{head}}
+\newcommand\tail{\mathsf{tail}}
+\newcommand\proj{\mathsf{proj}}
+\newcommand\pair{\mathsf{pair}}
+\newcommand\unpair{\mathsf{unpair}}
+\newcommand\plus{\mathsf{plus}}
+\newcommand\mult{\mathsf{mult}}
+\newcommand\lexp{\mathsf{exp}}
+\newcommand\K{\mathbb{K}}
+\renewcommand\succ{\mathsf{succ}}
+\newcommand\clet{\mathsf{let}\;}
+\newcommand\cin{\;\mathsf{in}\;}
+\newcommand\cid{\;\mathsf{id}\;}
+\newcommand\cif{{\color{blue} \mathsf{if}\;}}
+\newcommand\cthen{{\color{blue}\;\mathsf{then}\;}}
+\newcommand\celse{{\color{blue}\;\mathsf{else}\;}}
+\newcommand\con{\mathrm{con}}
+\newcommand\ctrue{\mathsf{true}}
+\newcommand\cons{\mathsf{cons}}
+\newcommand\cfalse{\mathsf{false}}
+\newcommand\ciszero{\mathsf{iszero}}
+\newcommand\cfst{\mathsf{fst}}
+\newcommand\csnd{\mathsf{snd}}
+\newcommand\cnil{\mathsf{nil}}
+\newcommand\cfact{\mathsf{fact}}
+\newcommand\cmap{\mathsf{map}}
+\newcommand\cnull{\mathsf{null}}
+\newcommand\metafact{\mathsf{metafact}}
+\newcommand\Yc{\mathsf{Y}}
+\newcommand\cfind{\mathsf{find}}
+\newenvironment{bprooftree}
+ {\leavevmode\hbox\bgroup}
+ {\DisplayProof\egroup}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+This course is the sequel to the Part II courses in Set Theory and Logic and in Automata and Formal Languages lectured in 2015-6. (It is already being referred to informally as ``Son of ST\&L and Automata \& Formal Languages''). Because of the advent of that second course this Part III course no longer covers elementary computability in the way that its predecessor (``Computability and Logic'') did, and this is reflected in the change in title. It will say less about Set Theory than one would expect from a course entitled `Logic'; this is because in Lent term Benedikt L\"owe will be lecturing a course entitled `Topics in Set Theory' and I do not wish to tread on his toes. Material likely to be covered include: advanced topics in first-order logic (Natural Deduction, Sequent Calculus, Cut-elimination, Interpolation, Skolemisation, Completeness and Undecidability of First-Order Logic, Curry-Howard, Possible world semantics, G\"odel's Negative Interpretation, Generalised quantifiers\ldots); Advanced Computability ($\lambda$-representability of computable functions, Tennenbaum's theorem, Friedberg-Muchnik, Baker-Gill-Solovay\ldots); Model theory background (ultraproducts, Los's theorem, elementary embeddings, omitting types, categoricity, saturation, Ehrenfeucht-Mostowski theorem\ldots); Logical combinatorics (Paris-Harrington, WQO and BQO theory at least as far as Kruskal's theorem on wellquasiorderings of trees\ldots). This is a new syllabus and may change in the coming months. It is entirely in order for students to contact the lecturer for updates.
+
+\subsubsection*{Pre-requisites}
+The obvious prerequisites from last year's Part II are Professor Johnstone's Set Theory and Logic and Dr Chiodo's Automata and Formal Languages, and I would like to assume that everybody coming to my lectures is on top of all the material lectured in those courses. This aspiration is less unreasonable than it may sound, since in 2016-7 both these courses are being lectured the term before this one, in Michaelmas; indeed supervisions for Part III students attending them can be arranged if needed: contact me or your director of studies. I am lecturing Part II Set Theory and Logic and I am even going to be issuing a ``Sheet 5'' for Set Theory and Logic, of material likely to be of interest to people who are thinking of pursuing this material at Part III. Attending these two Part II courses in Michaelmas is a course of action that may appeal particularly to students from outside Cambridge.
+}
+\tableofcontents
+
+\section{Proof theory and constructive logic}
+\subsection{Natural deduction}
+The first person to have the notion of ``proof'' as a mathematical notion was probably G\"odel, who needed it to write down the incompleteness theorem. The notion of proof he had was very unintuitive: such proofs are not easy to manipulate, but they are easy to reason about.
+
+In later years, people came up with more ``natural'' ways of defining proofs, and this is called \emph{natural deduction}. In the formalism we learnt in IID Logic and Set Theory, we had only three axioms and one rule of inference. In natural deduction, we instead have many rules of deduction.
+
+We write our rules in the following form:
+\[
+ \begin{bprooftree}
+ \AxiomC{$A$}
+ \AxiomC{$B$}
+ \intro{$\wedge$}
+ \BinaryInfC{$A \wedge B$}
+ \end{bprooftree}
+\]
+This says if we know $A$ and $B$ are true, then we can conclude $A \wedge B$. We call the things above the line the \term{premises}, and the thing below the line the \term{conclusion}. We can write out the other rules as follows:
+\begin{center}
+\begin{tabular}{cc}
+ \begin{bprooftree}
+ \AxiomC{$A$}
+ \AxiomC{$B$}
+ \intro{$\wedge$}
+ \BinaryInfC{$A \wedge B$}
+ \end{bprooftree} &
+ \begin{bprooftree}
+ \AxiomC{$A \wedge B$}
+ \elim{$\wedge$}
+ \UnaryInfC{$A$}
+ \end{bprooftree}
+ \begin{bprooftree}
+ \AxiomC{$A \wedge B$}
+ \elim{$\wedge$}
+ \UnaryInfC{$B$}
+ \end{bprooftree}\\[2em]
+ \begin{bprooftree}
+ \AxiomC{$A$}
+ \intro{$\vee$}
+ \UnaryInfC{$A \vee B$}
+ \end{bprooftree}
+ \begin{bprooftree}
+ \AxiomC{$B$}
+ \intro{$\vee$}
+ \UnaryInfC{$A \vee B$}
+ \end{bprooftree}\\[2em]
+ &
+ \begin{bprooftree}
+ \AxiomC{$A$}
+ \AxiomC{$A \to B$}
+ \elim{$\to$}
+ \BinaryInfC{$B$}
+ \end{bprooftree}
+\end{tabular}
+\end{center}
+Here we are separating these rules into two kinds --- the first column is the \term{introduction rules}. These tell us how we can \emph{introduce} a $\wedge$ or $\vee$ into our conclusions. The second column is the \term{elimination rules}. These tell us how we can \emph{eliminate} the $\wedge$ or $\to$ from our premises.
+
+In general, we can think of these rules as ``LEGO pieces'', which we can piece together to form ``LEGO assemblies'', i.e.\ proofs.
+\begin{eg}
+ For example, we might have a proof that looks like
+ \begin{prooftree}
+ \AxiomC{$A$}
+ \intro{$\vee$}
+ \UnaryInfC{$A \vee B$}
+ \AxiomC{$A \vee B \to C$}
+ \elim{$\to$}
+ \BinaryInfC{$C$}
+ \end{prooftree}
+ This corresponds to a proof that we can prove $C$ from $A$ and $A \vee B \to C$. Note that sometimes we are lazy and don't specify the rules we are using.
+\end{eg}
+Instead of trying to formally describe how we can put these rules together to form a proof, we will work through some examples as we go, and it should become clear.
+
+We see that some rules are missing from the table: there is no introduction rule for $\to$, and no elimination rule for $\vee$.
+
+We work with $\to$ first. How can we prove $A \to B$? To do so, we assume $A$, and then try to prove $B$. If we can do so, then we have proved $A \to B$. But we cannot express this in the form of our previous rules. Instead, what we want is some ``function'' that takes proof trees to proof trees.
+
+The actual rule is as follows: suppose we have a derivation that looks like
+\begin{prooftree}
+ \AxiomC{$A$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+\end{prooftree}
+This is a proof of $C$ under the assumption $A$. The $\to$-introduction rule says we can take this and turn it into a proof of $A \to C$.
+\begin{prooftree}
+ \AxiomC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$A \to C$}
+\end{prooftree}
+This rule is not a LEGO piece. Instead, it is a magic wand that turns a LEGO assembly into a LEGO assembly.
+
+But we do not want magic wands in our proofs. We want to figure out some more static way of writing this rule. We decided that it should look like this:
+\begin{prooftree}
+ \AxiomC{$[A]$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \intro{$\to$}
+ \UnaryInfC{$A \to C$}
+\end{prooftree}
+Here the brackets denote that we have given up our assumption $A$ to obtain the conclusion $A \to C$. After doing so, we are no longer assuming $A$. When we work with complicated proofs, it is easy to lose track of where each assumption is discharged. So we label them, and write this as, say
+\begin{prooftree}
+ \AxiomC{$[A]^1$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$A \to C$}
+\end{prooftree}
+
+\begin{eg}
+ We can transform our previous example to say
+ \begin{prooftree}
+ \AxiomC{$[A]^1$}
+ \intro{$\vee$}
+ \UnaryInfC{$A \vee B$}
+ \AxiomC{$A \vee B \to C$}
+ \elim{$\to$}
+ \BinaryInfC{$C$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$A \to C$}
+ \end{prooftree}
+ Originally, we had a proof of $C$ from $A$ and $A \vee B \to C$. Now what we have is a proof that $A \vee B \to C$ implies $A \to C$.
+\end{eg}
+
+Next, we need an elimination rule of $A \vee B$. What should this be? Suppose we proved \emph{both} that $A$ proves $C$, \emph{and} $B$ proves $C$. Then if we know $A \vee B$, then we know $C$ must be true.
+
+In other words, if we have
+\begin{prooftree}
+ \AxiomC{$A \vee B$}
+ \AxiomC{$A$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \AxiomC{$B$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \TrinaryInfC{}
+\end{prooftree}
+then we can deduce $C$. We write this as
+\begin{prooftree}
+ \AxiomC{$A \vee B$}
+ \AxiomC{$[A]$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \AxiomC{$[B]$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \elim{$\vee$}
+ \TrinaryInfC{$C$}
+\end{prooftree}
+There is an obvious generalization to many disjunctions:
+\begin{prooftree}
+ \AxiomC{$A_1 \vee \cdots\vee A_n$}
+ \AxiomC{$[A_1]$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \AxiomC{$[A_n]$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \elim{$\vee$}
+ \TrinaryInfC{$C$}
+\end{prooftree}
+How about when we have an empty disjunction? The empty disjunction is just false. So this gives the rule
+\begin{prooftree}
+ \AxiomC{$\bot$}
+ \UnaryInfC{$B$}
+\end{prooftree}
+In other words, assuming falsehood, we can prove anything. This is known as \term{ex falso sequitur quodlibet}.
+
+Note that we did not provide any rules for talking about negation. We do not need to do so, because we just take $\neg A$ to be $A \to \bot$.
+
+So far, what we have described is \term{constructive propositional logic}. What is missing? We cannot use our system to prove the \term{law of excluded middle}, $A \vee \neg A$, or the \term{law of double negation} $\neg \neg A \to A$. It is not difficult to convince ourselves that it is impossible to prove these using the rules we have described above.
+
+To obtain \term{classical propositional logic}, we need just one more rule, which is
+\begin{prooftree}
+ \AxiomC{$[A\to \bot]$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$\bot$}
+ \UnaryInfC{$A$}
+\end{prooftree}
+If we add this, then we get classical propositional calculus, and it is a theorem that any truth-table-tautology (i.e.\ propositions that are always true for all possible values of $A, B, C$ etc) can be proved using natural deduction with the law of excluded middle.
+
+\begin{eg}
+ Suppose we want to prove
+ \[
+ (A \to (B \to C)) \to ((A \to B) \to (A \to C))
+ \]
+ How can we possibly prove this? The only way we can obtain this is to get something of the form
+ \begin{prooftree}
+ \AxiomC{$[A \to (B \to C)]$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$(A \to B) \to (A \to C)$}
+ \intro{$\to$}
+ \UnaryInfC{$(A \to (B \to C)) \to ((A \to B) \to (A \to C))$}
+ \end{prooftree}
+ Now the only way we can get the second-to-last conclusion is
+ \begin{prooftree}
+ \AxiomC{$A \to B$}
+ \AxiomC{$[A]$}
+ \noLine
+ \BinaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$C$}
+ \UnaryInfC{$A \to C$}
+ \end{prooftree}
+ and then discharging the assumption $A \to B$ gives us $(A \to B) \to (A \to C)$. At this point we might see how we can patch these together to get a proof:
+ \begin{prooftree}
+ \AxiomC{$[A]^3$}
+ \AxiomC{$[A \to (B \to C)]^1$}
+ \BinaryInfC{$B \to C$}
+
+ \AxiomC{$[A \to B]^2$}
+ \AxiomC{$[A]^3$}
+ \BinaryInfC{$B$}
+ \BinaryInfC{$C$}
+ \intron{$\to$}{3}
+ \UnaryInfC{$A \to C$}
+ \intron{$\to$}{2}
+ \UnaryInfC{$(A \to B) \to (A \to C)$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$(A \to (B \to C)) \to ((A \to B) \to (A \to C))$}
+ \end{prooftree}
+ Note that when we did $\to$-int (3), we consumed two copies of $A$ in one go. This is allowed, as an assumption doesn't become false after we use it once.
+
+ However, some people study logical systems that do \emph{not} allow it, and demand that assumptions can only be used once. These are known as \term{resource logics}, one prominent example of which is \term{linear logic}.
+\end{eg}
+
+\begin{eg}
+ Suppose we want to prove $A \to (B \to A)$. We have a proof tree
+ \begin{prooftree}
+ \AxiomC{$[A]^1$}
+ \AxiomC{$[B]^2$}
+ \BinaryInfC{$A$}
+ \intron{$\to$}{2}
+ \UnaryInfC{$B \to A$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$A \to (B \to A)$}
+ \end{prooftree}
+ How did we manage to do the step from $A\;\; B$ to $A$? One can prove it as follows:
+ \begin{prooftree}
+ \AxiomC{$A$}
+ \AxiomC{$B$}
+ \intro{$\wedge$}
+ \BinaryInfC{$A\wedge B$}
+ \elim{$\wedge$}
+ \UnaryInfC{$A$}
+ \end{prooftree}
+ but this is slightly unpleasant. It is redundant, and also breaks some nice properties of natural deduction we are going to prove later, e.g.\ the subformula property. So instead, we just put an additional \term{weakening rule} that just says we can drop any assumptions we like at any time.
+\end{eg}
+
+\begin{eg}
+ Suppose we wanted to prove
+ \[
+ (A \to (B \vee C)) \to ((A \to B) \vee (A \to C)).
+ \]
+ This is indeed a truth-table tautology, as we can write out the truth table and see this is always true.
+
+ If we try very hard, we will find that we cannot prove it without using the law of excluded middle. In fact, it is not valid in constructive logic. Intuitively, the reason is that, assuming $A$ is true, which of $B$ or $C$ holds can depend on \emph{why} $A$ is true, and so it is impossible to directly prove that either $A \to B$ is true or $A \to C$ is true.
+
+ Of course, this is true in classical logic.
+\end{eg}
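Truth-table checks like the one in the last example are easy to automate. Here is a minimal brute-force sketch in Python (the helper `imp` for material implication is ours):

```python
from itertools import product

def imp(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# (A -> (B v C)) -> ((A -> B) v (A -> C))
def formula(a, b, c):
    return imp(imp(a, b or c), imp(a, b) or imp(a, c))

# check every row of the truth table
assert all(formula(a, b, c) for a, b, c in product((False, True), repeat=3))
```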
+
+\begin{ex}
+ Prove the following:
+ \begin{itemize}
+ \item $(P \to Q) \to ((Q \to R) \to (P \to R))$
+ \item $(A \to C) \to ((A \wedge B) \to C)$
+ \item $((A \vee B) \to C) \to (A \to C)$
+ \item $P \to (\neg P \to Q)$
+ \item $A \to (A \to A)$
+ \item $(P \vee Q) \to ((( P \to R) \wedge (Q \to S)) \to R \vee S)$
+ \item $(P \wedge Q) \to ((( P \to R) \vee (Q \to S)) \to R \vee S)$
+ \item $A \to ((((A \to B) \to B) \to C) \to C)$
+ \item $((((P \to Q) \to P) \to P ) \to Q) \to Q$
+ \end{itemize}
+ The last two are hard.
+\end{ex}
+
+Instead of thinking of natural deduction as an actual logical theory, we often think of it as a ``platform'' that allows us to describe different logical theories, which we can do by introducing other connectives.
+
+\begin{eg}
+ For example, we might want to describe first-order logic. We can give introduction rules for $\exists$ as follows:
+ \begin{prooftree}
+ \AxiomC {$\varphi(t)$}
+ \intro{$\exists$}
+ \UnaryInfC{$\exists_x\varphi(x)$}
+ \end{prooftree}
+ Similarly, we can have an elimination rule for $\forall$ easily:
+ \begin{prooftree}
+ \AxiomC{$\forall_x \varphi(x)$}
+ \elim{$\forall$}
+ \UnaryInfC{$\varphi(t)$}
+ \end{prooftree}
+ The introduction rule for $\forall$ is a bit more complicated. The rule again looks something like
+ \begin{prooftree}
+ \AxiomC {$\varphi(x)$}
+ \intro{$\forall$}
+ \UnaryInfC{$\forall_x\varphi(x)$}
+ \end{prooftree}
+ but for such a proof to be valid, the $x$ must be ``arbitrary''. So we need the additional condition on this rule that $x$ does not occur free in any assumption that the proof of $\varphi(x)$ depends on.
+
+ Finally, suppose we know $\exists_x \varphi(x)$. How can we use this to deduce some $p$? Suppose we know that $\varphi(x)$ implies $p$, and also $p$ does not depend on $x$. Then just knowing that there exists some $x$ such that $\varphi(x)$, we know $p$ must be true.
+ \begin{prooftree}
+ \AxiomC{$\exists_x \varphi(x)$}
+ \AxiomC{$\varphi(x)$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$p$}
+ \elim{$\exists$}
+ \BinaryInfC{$p$}
+ \end{prooftree}
+ Again we need $x$ to be ``arbitrary'': it must not occur free in $p$, nor in any undischarged assumption other than $\varphi(x)$.
+\end{eg}
+
+\begin{eg}
+ We can also use natural deduction for ``naive set theory'', which we will later find to be inconsistent. We can give an $\in$-introduction rule
+ \begin{prooftree}
+ \AxiomC{$\varphi(x)$}
+ \intro{$\in$}
+ \UnaryInfC{$x \in \{y : \varphi(y)\}$}
+ \end{prooftree}
+ and a similar elimination rule
+ \begin{prooftree}
+ \AxiomC{$x \in \{y: \varphi (y)\}$}
+ \elim{$\in$}
+ \UnaryInfC{$\varphi(x)$}
+ \end{prooftree}
+\end{eg}
+
+One cannot simply write down whatever introduction and elimination rules one likes and expect to get a sensible theory. For example, suppose we introduce a connective $\&$ with the following silly rules:
+\begin{center}
+ \begin{bprooftree}
+ \AxiomC{$A$}
+ \intro{$\&$}
+ \UnaryInfC{$A \& B$}
+ \end{bprooftree}
+ \begin{bprooftree}
+ \AxiomC{$A \& B$}
+ \elim{$\&$}
+ \UnaryInfC{$B$}
+ \end{bprooftree}
+\end{center}
+then this would immediately give us a theory that proves anything we like.
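+Explicitly, from any $A$ we can already prove, these rules yield an arbitrary $B$:
+\begin{prooftree}
+ \AxiomC{$A$}
+ \intro{$\&$}
+ \UnaryInfC{$A \& B$}
+ \elim{$\&$}
+ \UnaryInfC{$B$}
+\end{prooftree}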
+
+There are in fact two problems with our introduction and elimination rules. The first is that once we use our $A$ to deduce $A \& B$, it is completely gone, and we cannot recover anything about $A$ from $A \& B$. The second problem is that, obviously, proving $A \& B$ doesn't really involve $B$ at all, but we can use $A \& B$ to deduce something about $B$.
+
+In general, to obtain a sensible natural deduction theory, we need harmony.
+\begin{defi}[Harmony]\index{harmony}
+ We say rules for a connective $\$ $ are \term{harmonious} if $\phi \$ \psi$ is the strongest assertion you can deduce from the assumptions in the rule of $\$ $-introduction, and $\phi \$ \psi$ is the weakest thing that implies the conclusion of the $\$ $-elimination rule.
+\end{defi}
+We will not make the notion more precise than this.
+
+\begin{eg}
+ The introduction and elimination rules for $\in$ we presented are certainly harmonious, because the conclusions and assumptions are in fact equivalent.
+
+ One can also check that the introduction and elimination rules for $\wedge$ and $\vee$ are harmonious.
+\end{eg}
+
+\begin{eg}[Russell's paradox]
+ In naive set theory, consider
+ \[
+ R = \{x: x \in x \to \bot\}.
+ \]
+ Then we can have the deductions
+ \begin{prooftree}
+ \AxiomC{$R \in R$}
+ \AxiomC{$R \in R$}
+ \elim{$\in$}
+ \UnaryInfC{$R \in R \to \bot$}
+ \elim{$\to$}
+ \BinaryInfC{$\bot$}
+ \end{prooftree}
+ We can then move on to use this to deduce that $R \in R$:
+ \begin{prooftree}
+ \AxiomC{$[R \in R]^1$}
+ \AxiomC{$[R \in R]^1$}
+ \elim{$\in$}
+ \UnaryInfC{$R \in R \to \bot$}
+ \elim{$\to$}
+ \BinaryInfC{$\bot$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$R \in R \to \bot$}
+ \intro{$\in$}
+ \UnaryInfC{$R \in R$}
+ \end{prooftree}
+ Now we have proved $R \in R$. But we previously had a proof that $R \in R$ gives a contradiction. So what we are going to do is to make two copies of this proof:
+ \begin{prooftree}
+ \AxiomC{$[R \in R]^1$}
+ \AxiomC{$[R \in R]^1$}
+ \elim{$\in$}
+ \UnaryInfC{$R \in R \to \bot$}
+ \elim{$\to$}
+ \BinaryInfC{$\bot$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$R \in R \to \bot$}
+ \intro{$\in$}
+ \UnaryInfC{$R \in R$}
+ \AxiomC{$[R \in R]^2$}
+ \AxiomC{$[R \in R]^2$}
+ \elim{$\in$}
+ \UnaryInfC{$R \in R \to \bot$}
+ \elim{$\to$}
+ \BinaryInfC{$\bot$}
+ \intron{$\to$}{2}
+ \UnaryInfC{$R \in R \to \bot$}
+ \elim{$\to$}
+ \BinaryInfC{$\bot$}
+ \end{prooftree}
+ So we showed that we have an inconsistent theory.
+
+ Note that we didn't use a lot of ``logical power'' here. We didn't use the law of excluded middle, so this contradiction manifests itself even in the case of constructive logic. Moreover, we didn't really use many axioms of naive set theory. We didn't even need things like extensionality for this to work.
+\end{eg}
+
+Now we notice that this proof is rather weird. We first had an $\to$-introduction (2), and then subsequently eliminated it immediately. In general, if we have a derivation of the form
+\begin{prooftree}
+ \AxiomC{$P$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$Q$}
+ \intro{$\to$}
+ \UnaryInfC{$P \to Q$}
+ \AxiomC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$P$}
+ \elim{$\to$}
+ \BinaryInfC{$Q$}
+\end{prooftree}
+We can just move the second column to the top of the first to get
+\begin{prooftree}
+ \AxiomC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$Q$}
+\end{prooftree}
+
+In general,
+\begin{defi}[Maximal formula]\index{maximal formula}\index{formula}
+ We say a formula in a derivation is \emph{maximal} iff it is both the conclusion of an occurrence of an introduction rule, \emph{and} the major premise of an occurrence of the elimination rule for the same connective.
+\end{defi}
+
+Here it is important that we are talking about the \emph{major} premise. In a deduction
+\begin{prooftree}
+ \AxiomC{$A \to B$}
+ \AxiomC{$A$}
+ \elim{$\to$}
+ \BinaryInfC{$B$}
+\end{prooftree}
+we call $A \to B$ the \emph{major premise}, and $A$ the \emph{minor premise}. In the case of $\wedge$ and $\vee$, the premises play symmetric roles, and we do not need such a distinction.
+
+This distinction is necessary. Consider a deduction of the form
+\begin{prooftree}
+ \AxiomC{$(A \to B) \to C$}
+ \AxiomC{$[A]$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$B$}
+ \intro{$\to$}
+ \UnaryInfC{$A \to B$}
+ \elim{$\to$}
+ \BinaryInfC{$C$}
+\end{prooftree}
+Here $A \to B$ is the conclusion of an $\to$-introduction and also a premise of an $\to$-elimination, but only the \emph{minor} premise. There is no way to simplify away such a step, so we do not count $A \to B$ as maximal.
+
+One can prove that any derivation in propositional logic can be converted to one that does not contain maximal formulae. However, if we try to eliminate the maximal formula in our proof of inconsistency of naive set theory, we will find that we will end up with the same proof!
+
+\subsection{Curry--Howard correspondence}
+Return to the days when we did IID Logic and Set Theory. Instead of natural deduction, we bootstrapped our system with two axioms $K$ and $S$:
+\begin{itemize}
+ \item[$K$:] $A\to(B\to A)$
+ \item[$S$:] $[A\to(B\to C)]\to[(A\to B)\to(A \to C)]$
+\end{itemize}
+It is a fact that $\{K, S\}$ axiomatizes the conditional ($\to$) fragment of \emph{constructive} propositional logic. To axiomatize the conditional fragment of \emph{classical} logic, we need a new law, \term{Peirce's law}:
+\[
+ ((A \to B) \to A) \to A.
+\]
+In particular, we cannot prove Peirce's law from $K$ and $S$.
+
+How can we go about proving that Peirce's law is unprovable? The most elementary way of doing so would be to try to allow for more truth values, such that $K$, $S$ and modus ponens hold, but Peirce's law does not. It turns out three truth values are sufficient. We consider the following truth table for $\to$:
+\begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ $\to$ & 1 & 2 & 3\\
+ \midrule
+ 1 & 1 & 2 & 3\\
+ 2 & 1 & 1 & 3\\
+ 3 & 1 & 1 & 1\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+Here we interpret $1$ as ``true'', $3$ as ``false'', and $2$ as ``maybe''.
+
+By writing out the truth tables, one can verify that the formulas $K$ and $S$ always take truth value $1$. Also, if $A$ and $A \to B$ are both $1$, then so is $B$. So by induction on the structure of the proof, we know that anything deduced from $K$ and $S$ and \emph{modus ponens} will have truth value $1$.
+
+However, if we put $A = 2$ and $B = 3$, then $((A \to B) \to A) \to A$ has value $2$, not $1$. So it cannot possibly be deduced from $K$ and $S$.
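This verification is mechanical, and easy to carry out by machine. A brute-force sketch in Python (the encoding of the table as a nested dictionary is ours):

```python
from itertools import product

# 1 = "true", 3 = "false", 2 = "maybe"; imp[a][b] is the value of a -> b,
# transcribing the truth table above
imp = {1: {1: 1, 2: 2, 3: 3},
       2: {1: 1, 2: 1, 3: 3},
       3: {1: 1, 2: 1, 3: 1}}

def K(a, b):                # A -> (B -> A)
    return imp[a][imp[b][a]]

def S(a, b, c):             # (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    return imp[imp[a][imp[b][c]]][imp[imp[a][b]][imp[a][c]]]

def peirce(a, b):           # ((A -> B) -> A) -> A
    return imp[imp[imp[a][b]][a]][a]

# K and S take value 1 on every assignment, and modus ponens preserves
# value 1, so everything deducible from them takes value 1 ...
assert all(K(a, b) == 1 for a, b in product((1, 2, 3), repeat=2))
assert all(S(a, b, c) == 1 for a, b, c in product((1, 2, 3), repeat=3))

# ... but Peirce's law takes value 2 at A = 2, B = 3
assert peirce(2, 3) == 2
```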
+
+This is, of course, a very unsatisfactory answer. The truth table was just pulled out of a hat, and we just checked to see that Peirce's law doesn't hold. It doesn't tell us where the truth table came from, and \emph{why} we can't prove Peirce's law.
+
+We will answer the second question first using the \emph{Curry--Howard correspondence}. Then we will see how we can come up with this truth table using \emph{possible world semantics} in the next chapter.
+
+Consider the piece of syntax
+\[
+ A \to B.
+\]
+So far, we have viewed this as a logical formula --- $A$ and $B$ are formulae, and $\to$ denotes implication. However, in mathematics, there is another common use of this notation --- we can think of $A$ and $B$ as sets, and $A \to B$ is the set of all functions from $A$ to $B$. The idea of the Curry--Howard correspondence is to view implication as functions between sets.
+
+While this might seem like a bizarre thing to do, it actually makes sense. Given a formula $A$, we can view $A$ as a ``set'', whose ``elements'' are the proofs of $A$. Now if we know $A \to B$, this means any proof of $A$ becomes a proof of $B$. In other words, we have a function $A \to B$. As we move on, we will see that this idea of viewing a formula $A$ as the set of all proofs of $A$ gives a very close correspondence between the logical connectives and set-theoretic constructions.
+
+This is also known as \term{propositions as types}, and it is common to call $A$, $B$ etc ``types'' instead of ``sets''.
+
+Under this correspondence, \emph{modus ponens} has a very natural interpretation. We can just think of the rule
+\begin{prooftree}
+ \AxiomC{$A$}
+ \AxiomC{$A \to B$}
+ \BinaryInfC{$B$}
+\end{prooftree}
+as function application! If we have an element $a\in A$, and an element $f \in A \to B$, then we have $f(a) \in B$. Now it is awkward to write $f \in A \to B$ instead of $f: A \to B$. So in fact, we will write $f: A \to B$ instead. Moreover, we will use the colon instead of $\in$ for \emph{all} types. So we can write
+\begin{prooftree}
+ \AxiomC{$x: A$}
+ \AxiomC{$f: A \to B$}
+ \BinaryInfC{$f(x): B$}
+\end{prooftree}
+Often, we will just write $fx$ instead of $f(x)$. If we write $fxy$, then it means $(fx)y$, and we will put in brackets whenever a different grouping is intended.
+
+\begin{eg}
+ What is the interpretation of $K$ under the Curry--Howard correspondence? It corresponds to a function
+ \[
+ A \to (B \to A).
+ \]
+ Let's try to think of a function that does this. Given an element $a: A$, we need to find a function $B \to A$. Since the only thing we know about is $a$, we could return the constant function that always returns $a$. So for each $a : A$, we define $K(a)$ to be the constant function with value $a$.
+
+ This is indeed where the name $K$ came from. It's just that the inventor was German, and so it is not $C$.
+\end{eg}
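Under this reading, $K$ is a program we can actually run. A sketch in Python (untyped, so the types live only in our heads; the term for $S$, which the text does not derive here, follows the same recipe of reading the axiom as a function):

```python
# K : A -> (B -> A) -- takes a, returns the constant function with value a
K = lambda a: lambda b: a

# S : (A -> (B -> C)) -> ((A -> B) -> (A -> C))
# -- applies f and g to the same argument, then applies the results
S = lambda f: lambda g: lambda x: f(x)(g(x))

assert K(3)(5) == 3

# The classic check: S K K behaves as the identity, S K K x = K x (K x) = x,
# mirroring the derivation of A -> A from the axioms K and S.
identity = S(K)(K)
assert identity(42) == 42
```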
+What about conjunction? To prove $A \wedge B$, we need to prove $A$ and $B$. In other words, an element of $A \wedge B$ should be an element of the Cartesian product $A \times B$.
+
+We can write the introduction rule as
+\begin{prooftree}
+ \AxiomC{$x: A$}
+ \AxiomC{$y: B$}
+ \BinaryInfC{$\bra x, y\ket: A \wedge B$}
+\end{prooftree}
+Here we are using the angled brackets to denote ordered pairs. The elimination rules are just given by the projections $\fst: A \times B \to A$ and $\snd: A \times B \to B$:
+\begin{center}
+ \begin{bprooftree}
+ \AxiomC{$x: A \wedge B$}
+ \UnaryInfC{$\fst(x): A$}
+ \end{bprooftree}
+ \begin{bprooftree}
+ \AxiomC{$x: A \wedge B$}
+ \UnaryInfC{$\snd(x): B$}
+ \end{bprooftree}
+\end{center}
+Some people write $\fst$ and $\snd$ as $\pi_1$ and $\pi_2$ instead.
+
+We are not going to talk about the rules for ``or'', because they are horrendous. But if we think hard about it, then $A \vee B$ corresponds to the disjoint union of $A$ and $B$. % include
+
+So far, we only know how to decorate our LEGO pieces. When we assemble them into proofs, we can get more complicated trees. Suppose we had the tree
+\begin{prooftree}
+ \AxiomC{$[A \wedge B]^1$}
+ \elim{$\wedge$}
+ \UnaryInfC{$B$}
+ \AxiomC{$[A \wedge B]^1$}
+ \elim{$\wedge$}
+ \UnaryInfC{$A$}
+ \intro{$\wedge$}
+ \BinaryInfC{$B \wedge A$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$(A \wedge B) \to (B \wedge A)$}
+\end{prooftree}
+We now try to decorate this tree:
+\begin{prooftree}
+ \AxiomC{$x: [A \wedge B]^1$}
+ \elim{$\wedge$}
+ \UnaryInfC{$\snd(x): B$}
+ \AxiomC{$x: [A \wedge B]^1$}
+ \elim{$\wedge$}
+ \UnaryInfC{$\fst(x): A$}
+ \intro{$\wedge$}
+ \BinaryInfC{$\bra \snd(x), \fst(x)\ket : B \wedge A$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$(A \wedge B) \to (B \wedge A)$}
+\end{prooftree}
+Note that both occurrences of $[A \wedge B]^1$ are decorated with the same letter $x$. This is because they are discharged at the same time, so we are using the same instance of $A \wedge B$ in the statement $(A \wedge B) \to (B \wedge A)$.
+
+Now the remaining question to answer is how we are going to decorate the $\to$-introduction. We want some notation that denotes the function that takes $x$ and returns $\bra \snd(x), \fst(x)\ket$. More generally, if we have an expression $\varphi(x)$ in $x$, we want a notation that denotes the function that takes in $x$ and returns $\varphi(x)$.
+
+The convention is to denote this using a $\lambda$:\index{$\lambda$}
+\[
+ \lambda x. \varphi(x).
+\]
+These expressions we produce are known as \term{$\lambda$ expressions}\index{lambda expressions}.
+
+For example, in the previous case, the final result is
+\[
+ \lambda x. \bra \snd(x), \fst(x)\ket : (A \wedge B) \to (B \wedge A).
+\]
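Decorated proofs are literally programs. Modelling pairs as Python tuples (our encoding), the proof term above can be executed:

```python
# projections fst, snd on pairs-as-tuples
fst = lambda p: p[0]
snd = lambda p: p[1]

# lambda x. <snd(x), fst(x)> : (A and B) -> (B and A)
swap = lambda x: (snd(x), fst(x))

assert swap((1, "two")) == ("two", 1)
```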
+The general decoration rule for $\to$-introduction is
+\begin{prooftree}
+ \AxiomC{$x: [A]^1$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$\varphi(x): C$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$\lambda x. \varphi(x): A \to C$}
+\end{prooftree}
+Recall that previously we had a rule for getting rid of maximal formulae. Given a tree of the form
+\begin{prooftree}
+ \AxiomC{$P$}
+ \noLine
+ \UnaryInfC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$Q$}
+ \intro{$\to$}
+ \UnaryInfC{$P \to Q$}
+ \AxiomC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$P$}
+ \elim{$\to$}
+ \BinaryInfC{$Q$}
+\end{prooftree}
+We can just move the second column to the top of the first to get
+\begin{prooftree}
+ \AxiomC{$\rvdots$}
+ \noLine
+ \UnaryInfC{$Q$}
+\end{prooftree}
+This conversion corresponds to the conversion
+\[
+ (\lambda x. \varphi(x)) a \rightsquigarrow \varphi(a),
+\]
+which is just applying the function! This process is known as \term{$\beta$-reduction}\index{beta-reduction}.
+
+Since there is $\beta$-reduction, there is, of course, also \term{$\alpha$-reduction}\index{alpha-reduction}, but this is boring. It is just re-lettering $\lambda x. \phi(x)$ to $\lambda y. \phi(y)$. There will also be an $\eta$-reduction, and we will talk about that later.
+
+In the following exercise/example, note that we have a convention that if we write $A \to B \to C$, then we mean $A \to (B \to C)$.
+\begin{ex}
+ Find $\lambda$-expressions that encode proofs of
+ \begin{enumerate}
+ \item $((A \to B) \wedge (C \to D)) \to ((A \wedge C) \to (B \wedge D))$
+ \item $(A \to B) \to (B \to C) \to (A \to C)$
+ \item $((A \to B) \to A) \to (A \to B) \to B$
+ \item $(A \to B \to C) \to (B \to A \to C)$
+ \item $(A \to B \to C) \to ((A \wedge B) \to C)$
+ \item $(B \wedge A \to C) \to (A \to B \to C)$
+ \end{enumerate}
+ Note that statements of the form $A \to B \to C$ are to be interpreted as $A \to (B \to C)$.
+\end{ex}
+
+\begin{proof}[Solutions]\leavevmode
+ \begin{enumerate}
+ \item $\lambda f. \lambda x. \bra \fst(f)(\fst(x)), \snd(f)(\snd(x))\ket$
+ \item $\lambda f. \lambda g. \lambda a. g(fa)$
+ \item $\lambda f. \lambda g. g(fg)$
+ \item $\lambda f. \lambda b. \lambda a. (fa)b$
+ \item $\lambda f. \lambda x. f(\fst x)(\snd x)$
+ \item $\lambda f. \lambda a. \lambda b. f\bra b, a\ket$
+ \end{enumerate}
+ One can write out the corresponding trees explicitly. For example, (iii) can be done by
+ \begin{prooftree}
+ \AxiomC{$g: [A \to B]^1$}
+ \AxiomC{$f: [(A \to B) \to A]^2$}
+ \elim{$\to$}
+ \BinaryInfC{$fg: A$}
+ \AxiomC{$g: [A \to B]^1$}
+ \elim{$\to$}
+ \BinaryInfC{$g(fg): B$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$\lambda g. g(fg): (A \to B) \to B$}
+ \intron{$\to$}{2}
+ \UnaryInfC{$\lambda f. \lambda g. g(fg): ((A \to B) \to A) \to (A \to B) \to B$}
+ \end{prooftree}
+ Note that we always decorate assumptions with a single variable, say $f$, even if they are very complicated. For example, if we have an assumption $A \wedge B$, it might be tempting to decorate it as $\bra a, b\ket$, but we do not.
+\end{proof}
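+Some of these $\lambda$-terms can be transcribed into Python and sanity-checked on concrete functions. This is a sketch only; the particular test functions are arbitrary choices of ours.
+
```python
# (ii) (A -> B) -> (B -> C) -> (A -> C): composition.
compose = lambda f: lambda g: lambda a: g(f(a))

# (iv) (A -> B -> C) -> (B -> A -> C): swapping arguments.
flip = lambda f: lambda b: lambda a: f(a)(b)

# (v) (A -> B -> C) -> ((A and B) -> C): uncurrying.
uncurry = lambda f: lambda x: f(x[0])(x[1])

inc = lambda n: n + 1
double = lambda n: 2 * n
print(compose(inc)(double)(3))                 # 2 * (3 + 1) = 8
print(flip(lambda a: lambda b: a - b)(1)(5))   # 5 - 1 = 4
print(uncurry(lambda a: lambda b: a + b)((2, 3)))  # 5
```
+Each term is well-typed for \emph{any} choice of $A$, $B$, $C$, which reflects the fact that the proofs are uniform in the propositions involved.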
+
+Now what we have is that for any proof tree, we obtain a $\lambda$ term. But a formula can have many different proofs, and so it can be decorated by many different $\lambda$ terms.
+\begin{eg}
+ Consider
+ \begin{prooftree}
+ \AxiomC{$x: [A]^1$}
+ \AxiomC{$f: [A \to A]^2$}
+ \elim{$\to$}
+ \BinaryInfC{$fx: A$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$\lambda x. fx: A \to A$}
+ \intron{$\to$}{2}
+ \UnaryInfC{$\lambda f. \lambda x. fx: (A \to A) \to (A \to A)$}
+ \end{prooftree}
+ This is really just the identity function!
+
+ We can also try something slightly more complicated.
+ \begin{prooftree}
+ \AxiomC{$x: [A]^1$}
+ \AxiomC{$f: [A \to A]^2$}
+ \elim{$\to$}
+ \BinaryInfC{$f x: A$}
+ \AxiomC{$f: [A \to A]^2$}
+ \elim{$\to$}
+ \BinaryInfC{$f(f x): A$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$\lambda x. f(f x): A \to A$}
+ \intron{$\to$}{2}
+ \UnaryInfC{$\lambda f. \lambda x. f(f x): (A \to A) \to (A \to A)$}
+ \end{prooftree}
+ This is another $\lambda$ term for $(A \to A) \to (A \to A)$. This says given any $f$, we return the function that does $f$ twice. These are two genuinely different functions, and thus these two $\lambda$ terms correspond to two different \emph{proofs} of $(A \to A) \to (A \to A)$.
+\end{eg}
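+In Python, the two proofs are visibly different functions (a sketch):
+
```python
# First proof: lambda f. lambda x. f x -- essentially the identity.
once = lambda f: lambda x: f(x)

# Second proof: lambda f. lambda x. f(f x) -- apply f twice.
twice = lambda f: lambda x: f(f(x))

inc = lambda n: n + 1
print(once(inc)(0))   # 1
print(twice(inc)(0))  # 2
```
+Distinct proofs of the same proposition yield observably distinct programs, which is the heart of the Curry--Howard correspondence.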
+
+Note that given any proof tree, we can produce some $\lambda$ expression representing the proof rather systematically. However, given any $\lambda$ expression, it is difficult to figure out what it is a proof of, or if it represents a proof at all. For example, if we write down
+\[
+ \lambda x. xx,
+\]
+then we have no idea what this can possibly mean.
+
+The problem is that we don't know what formula, or \emph{type} each variable is representing. The expression
+\[
+ \lambda f. \lambda x. f (f x)
+\]
+is confusing, because we need to guess that $x$ is probably just some arbitrary variable of type $A$, and that $f$ is probably a function $A \to A$. Instead, we can make these types explicit:
+\[
+ \lambda f_{A \to A}. \lambda x_A. f_{A \to A} (f_{A \to A} x_A).
+\]
+Given such a term, we can systematically deduce that it is of type $(A \to A) \to (A \to A)$, i.e.\ it encodes a proof of $(A \to A) \to (A \to A)$. Such an expression with explicit types is known as a \term{typed $\lambda$ term}\index{typed lambda term}, and one without types is known as an \term{untyped $\lambda$ term}\index{untyped lambda term}.
+
+Moreover, it is also straightforward to reconstruct the proof tree from this expression. Of course, we need to make sure the typing of the expression is valid. For example, we cannot write
+\[
+ f_{A \to A} x_B,
+\]
+since $f$ is not a function from $B$. Assuming we restrict to these valid terms, we have established a correspondence
+\begin{significant}
+ Proofs in propositional natural deduction are in a $1$-to-$1$ correspondence with typed $\lambda$ terms.
+\end{significant}
+That is, provided we also write down the corresponding decoration rules for $\vee$.
+
+Finally, what about $\bot$? To what set does the proposition $\bot$ correspond? The proposition $\bot$ should have \emph{no} proofs. So we would think $\bot$ should correspond to $\emptyset$.
+
+In propositional calculus, the defining property of $\bot$ is that we can prove any proposition we like from $\bot$. Indeed, the empty set is a set that admits a function to any set $X$ whatsoever, and in fact this function can be uniformly defined for all sets, by the empty function. We again see that the ideas of propositional calculus correspond nicely with the properties of sets.
+
+\begin{eg}
+ Consider the law of excluded middle:
+ \[
+ ((A \to \bot) \to \bot) \to A.
+ \]
+ Can we write down a $\lambda$ expression for this?
+
+ We will argue rather informally. If there is a $\lambda$ expression of this type, then it should define the function ``uniformly'', without regard to what $A$ is. Let us see what this function could be.
+ \begin{itemize}
+ \item If $A$ is empty, then $A \to \bot$ contains exactly the empty function. So $(A\to \bot) \to \bot$ is the empty set, since there is no function from a non-empty set to the empty set. And there is a function $\emptyset \to A$, namely the empty function again.
+ \item If $A$ is non-empty, then $A \to \bot$ contains no functions, so $(A \to \bot) \to \bot$ contains the empty function. So $((A \to \bot) \to \bot) \to A$ is given by exactly an element of $A$.
+ \end{itemize}
+ We see that to construct such a function, we need to first know if $A$ is empty or not. If it is non-empty, then we have to arbitrarily pick an element of $A$. Thus, there is no way we can uniformly construct such a function without knowing anything about $A$.
+\end{eg}
+
+\begin{prop}
+ We cannot prove $((A \to B) \to A) \to A$ in natural deduction without the law of excluded middle.
+\end{prop}
+
+ % prove this
+\begin{proof}
+ Suppose there were a $\lambda$ term $P: ((A \to B) \to A) \to A$.
+
+ We pick
+ \[
+ B = \{0, 1\}, \quad A = \{0, 1, 2, 3, 4\}.
+ \]
+ In this setting, any function $f: A \to B$ identifies a distinguished member of $B$, namely the one that is hit by more members of $A$ (there are no ties, since $|A| = 5$ is odd). We know $B \subseteq A$, so this is an element in $A$. So we have a function $F: (A \to B) \to A$. Then $P(F)$ is a member of $A$. We claim that in fact $P(F) \in B$.
+
+ Indeed, write $P(F) = a \in A$. Let $\pi$ be a $3$-cycle moving everything in $A \setminus B$ and fixing $B$. This induces an action by conjugation on functions between $A$ and $B$. We have
+ \[
+ \pi(P(F)) = \pi(a).
+ \]
+ But since $P$ is given by a $\lambda$ term, it cannot distinguish between the different members of $A \setminus B$. So we have
+ \[
+ P(\pi(F)) = \pi(a).
+ \]
+ We now claim that $\pi(F) = F$. By construction, $\pi (F) = \pi^{-1} \circ F \circ \pi$. Then for all $f: A \to B$, we have
+ \begin{multline*}
+ \pi(F)(f) = (\pi^{-1} \circ F \circ \pi)(f) = (\pi^{-1} \circ F)(\pi(f)) \\
+ = \pi^{-1} (F(\pi(f))) = \pi^{-1}(F(f)) = F(f).
+ \end{multline*}
+ The only equality that isn't obvious from unravelling the definitions is $F(\pi(f)) = F(f)$ (for the last step, note that $\pi^{-1}(F(f)) = F(f)$ since $F(f) \in B$ and $\pi$ fixes $B$ pointwise). Indeed, we have $\pi(f) = \pi^{-1} \circ f \circ \pi = f \circ \pi$, since $\pi$ fixes $B$. But $F(f\circ \pi) = F(f)$, because our explicit construction of $F$ depends only on how many times $f$ attains each value, and precomposing with a permutation does not change this. So $\pi(F) = F$. So this implies $a = \pi(a)$, which means $a \in B$. So we always have $P(F) \in B$.
+
+ But this means we have found a way of uniformly picking a distinguished element out of a two-membered set. Indeed, given any such set $B$, we pick any three other objects, and combine them with $B$ to obtain $A$. Then we just apply $P(F)$. This is clearly nonsense.
+\end{proof}
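+The symmetry argument can be verified by brute force. Here is a sketch of the check only (not of the proof): with $B = \{0, 1\}$ and $A = \{0, 1, 2, 3, 4\}$, the majority function $F$ satisfies $F(f \circ \pi) = F(f)$ for every $f: A \to B$; the dictionary encoding of functions and the particular $3$-cycle are our own choices.
+
```python
from itertools import product

B = [0, 1]
A = [0, 1, 2, 3, 4]

def F(f):
    # The distinguished member of B: the value hit by more members of A.
    # |A| = 5 is odd, so there are no ties.
    return max(B, key=lambda b: sum(1 for a in A if f[a] == b))

# A 3-cycle moving everything in A \ B = {2, 3, 4} and fixing B pointwise.
pi = {0: 0, 1: 1, 2: 3, 3: 4, 4: 2}

# Since pi fixes B pointwise, conjugation sends f to f o pi.
for values in product(B, repeat=len(A)):
    f = dict(zip(A, values))
    f_pi = {a: f[pi[a]] for a in A}
    assert F(f_pi) == F(f)
print("pi(F) = F for all 32 functions f: A -> B")
```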
+
+\subsection{Possible world semantics}
+For classical logic, semantics was easy. We had a collection of propositions $P$, and a model is just a function $P \to \{\top, \bot\}$. We can then define truth in this model recursively in the obvious way.
+
+It turns out that to get a sensible model theory of constructive logic, we need to consider \emph{many} such objects.
+
+\begin{defi}[Possible world semantics]\index{possible world semantics}
+ Let $P$ be a collection of propositions. A \term{world}\index{possible world semantics!world} is a subset $w \subseteq P$. A \term{model}\index{possible world semantics!model} is a collection $W$ of worlds, and a partial order $\geq$ on $W$ called \term{accessibility}\index{possible world semantics!accessibility}, satisfying the \term{persistence}\index{possible world semantics!persistence} property:
+ \begin{itemize}
+ \item If $p \in P$ is such that $p \in w$ and $w' \geq w$, then $p \in w'$.
+ \end{itemize}
+ Given any proposition $\varphi$, we define the relation $w \vDash \varphi$ by
+ \begin{itemize}
+ \item $w \not\vDash \bot$
+ \item If $\varphi$ is \emph{atomic} (and not $\bot$), then $w \vDash \varphi$ iff $\varphi \in w$.
+ \item $w \vDash \varphi \wedge \psi$ iff $w \vDash \varphi$ and $w \vDash \psi$.
+ \item $w \vDash \varphi \vee \psi$ iff $w \vDash \varphi$ or $w \vDash \psi$.
+ \item $w \vDash \varphi \to \psi$ iff (for all $w' \geq w$, if $w' \vDash \varphi$, then $w' \vDash \psi$).
+ \end{itemize}
+ We will say that $w$ ``believes'' $\varphi$ if $w \vDash \varphi$, and that $w$ ``sees'' $w'$ if $w' \geq w$.
+
+ We also require that there is a designated minimum element under $\leq$, known as the \term{root world}. We can think of a possible world model as a poset decorated with worlds, and such a poset is called a \term{frame}.
+\end{defi}
+All but the last rule are obvious. The idea is that each world can have some beliefs, namely the subset $w \subseteq P$. The important idea is that if a world $w$ does not believe in $p$, it does not mean it believes that $p$ is false. It is just that $w$ remains ignorant about whether $p$ is true.
+
+Now we have $w' \geq w$ if $w'$ is a more knowledgeable version of $w$ that has more beliefs. Alternatively, $w'$ is a world whose beliefs are compatible with those of $w$. It is easy to show by induction that if $\varphi$ is \emph{any formula} that $w$ believes, and $w' \geq w$, then $w'$ also believes $\varphi$. We can then interpret the rule for implication as saying ``$w$ believes $p \to q$ iff in every thinkable world compatible with $w$ where $p$ is true, $q$ is also true''.
+
+Note that if we only have one world, then we recover models of classical logic.
+
+What happens to negation? By definition, we have $w \vDash \neg A$ iff $w \vDash A \to \bot$. Expanding the definition, believing that $\neg A$ means there is no possible world compatible with $w$ in which $A$ is true. This is much stronger than just saying $w \not\vDash A$, which just says we do not hold any opinion on whether $A$ is true.
+
+\begin{eg}
+ We want to construct a world $w$ where $w \not \vDash A\vee \neg A$. So we need a $w$ that neither believes $A$ nor believes $\neg A$.
+
+ We can achieve the former by just making the world not believe in $A$. To make the world not believe $\neg A$, it needs to be able to see a world $w'$ such that $w' \vDash A$. So we can consider the following model:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [mstate] (1) at (0, 0) {$w'$};
+ \node [right=10pt] at (1) {$\vDash A$};
+
+ \node [mstate] (2) at (0, -2) {$w$};
+ \node [right=10pt] at (2) {$\not\vDash A$};
+
+ \draw (1) edge (2);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
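+This little model can be checked mechanically. The following sketch implements the forcing clauses above; the tuple encoding of formulae is our own convention, and only the connectives we need are included.
+
```python
# Worlds, the accessibility relation (listing each w' >= w), and beliefs.
above = {"w": ["w", "w'"], "w'": ["w'"]}  # reflexive; w sees w'
believes = {"w": set(), "w'": {"A"}}      # only w' believes A

def forces(w, phi):
    # The forcing clauses, with formulae encoded as tuples.
    op = phi[0]
    if op == "bot":
        return False
    if op == "atom":
        return phi[1] in believes[w]
    if op == "and":
        return forces(w, phi[1]) and forces(w, phi[2])
    if op == "or":
        return forces(w, phi[1]) or forces(w, phi[2])
    if op == "imp":
        return all(not forces(v, phi[1]) or forces(v, phi[2])
                   for v in above[w])

A = ("atom", "A")
neg_A = ("imp", A, ("bot",))
print(forces("w", ("or", A, neg_A)))   # False: w believes neither disjunct
print(forces("w'", ("or", A, neg_A)))  # True
```
+As expected, the root world $w$ refutes excluded middle, because $w$ does not believe $A$ but sees a world that does.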
+
+\begin{eg}
+ This time consider the statement $w \vDash \neg \neg A \to A$. This takes some work to unravel.
+
+ We need a world $w$ such that for all $w' \geq w$, if $w' \vDash \neg \neg A$, then $w' \vDash A$. If we unravel a bit more, we find that $w' \vDash \neg \neg A$ means for any $w'' \geq w'$, we can find some $w''' \geq w''$ such that $w''' \vDash A$.
+
+ It is easy to see that actually in the model of the previous question, $w \not\vDash \neg \neg A \to A$, since $w \vDash \neg \neg A$, but $w \not\vDash A$.
+\end{eg}
+
+\begin{eg}
+ The following model is a counter-model for $\neg \neg A \vee \neg A$:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [mstate] (b) at (0, 0) {};
+ \node [right=10pt] at (b) {$\not\vDash A$};
+
+ \node [mstate, above right=of b] (r) {};
+ \node [right=10pt] at (r) {$\vDash A$};
+
+ \node [mstate, above left=of b] (l) {};
+ \node [right=10pt] at (l) {$\not\vDash A$};
+
+ \draw (b) edge (r);
+ \draw (b) edge (l);
+ \end{tikzpicture}
+ \end{center}
+ where the bottom world believes in neither $\neg \neg A$ nor $\neg A$, since it sees worlds where $\neg A$ is true and also worlds where $A$ is true.
+\end{eg}
+
+Usually, we don't talk about a particular world believing in something. We talk about an entire structure of possible worlds believing something.
+\begin{notation}
+ If the root world of our model is $w$, then we write
+ \[
+ \vDash \varphi \Longleftrightarrow w \vDash \varphi.
+ \]
+\end{notation}
+Almost by definition, if $\vDash \varphi$, then for any world $w'$, we have $w' \vDash \varphi$.
+
+\begin{ex}
+ Find a possible world model that does not believe Peirce's law.
+\end{ex}
+
+One can readily verify that, classically, $(A \to B) \to B$ is equivalent to $A \vee B$, but they are not constructively equivalent.
+\begin{ex}
+ Find a possible world model such that $\vDash (A \to B) \to B$ but $\not\vDash A \vee B$.
+\end{ex}
+
+\begin{ex}
+ Find a possible world model such that $\vDash (A \to B) \to B$ but $\not\vDash (B \to A) \to A$.
+\end{ex}
+
+\begin{ex}
+ Find a possible world model such that $\not\vDash (A \to B) \vee (B \to A)$.
+\end{ex}
+
+Possible world semantics can be used for all sorts of logical systems, not just propositional logic. In general, the notion of accessibility need not be a partial order, but can be anything we want. Also, we do not necessarily have to require persistence. Of course, we then also need to modify the corresponding notion of a world.
+
+\begin{eg}
+ If we want to produce possible world semantics for constructive first-order logic, we need to modify our definition of worlds so that they have ``elements''. Assuming we have done so, we can encode $\forall$ and $\exists$ using the following rules:
+ \begin{itemize}
+ \item $w \vDash \exists_x F(x)$ if there is some $y \in w$ for which $w \vDash F(y)$.
+ \item $w \vDash \forall_x F(x)$ if for all $w' \geq w$ and $x \in w'$, we have $w' \vDash F(x)$.
+ \end{itemize}
+ The second condition is much stronger than the naive ``for all $x \in w$, we have $w \vDash F(x)$'', because a $w' \geq w$ may have many more elements. What our condition says is that $w$ somehow has a ``global'' way of showing that everything satisfies $F$.
+\end{eg}
+
+\begin{eg}
+ The proposition $A \to \neg \neg A$ is valid in all possible world models.
+
+ % To see this, we note that if we have a world $w$ that believes in $A$, then by persistence, any world that $A$ sees also believes in $A$. We want to show that it does not believe in $\neg A$. % complete proof, make use of reflexivity and transitivity
+\end{eg}
+
+For classical logic, we had semantics given by truth tables. This easily shows that classical truth is decidable. Similarly, any given formula involves only finitely many variables, and there are only finitely many distinct possible world models one has to check. So whether something is refuted by a possible world model is again a decidable problem.
+
+Now suppose we have a possible world model in front of us. Then given a formula $\varphi$, the set of worlds that believe $\varphi$ is an upward-closed subset of the frame. So we can think of the truth value of $\varphi$ as this upward-closed set.
+\begin{eg}
+ Suppose we have the following worlds:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [mstate] (1) at (0, 0) {$w_2$};
+
+ \node [mstate] (2) at (0, -2) {$w_1$};
+
+ \draw (1) edge (2);
+ \end{tikzpicture}
+ \end{center}
+ Then the possible truth values are
+ \[
+ \emptyset, \{w_2\}, \{w_1, w_2\}.
+ \]
+ Suppose the top world believes $A$, but the bottom world does not. Then we say that $A$ has truth value $\{w_2\}$. This is indeed the three-valued algebra we used to disprove Peirce's law.
+\end{eg}
+What can we say about the collection of upward-closed subsets of a frame, i.e.\ our truth values? Under inclusion, it is easy to see that these are complete and co-complete, i.e.\ we have all infinite unions and intersections. More importantly, we can do ``implications''.
+\begin{defi}[Heyting algebra]\index{Heyting algebra}
+ A Heyting algebra is a poset with $\top$, $\bot$, $\wedge$ and $\vee$ and an operator $\Rightarrow$ such that $A \Rightarrow B$ is the largest $C$ such that
+ \[
+ C \wedge A \leq B.
+ \]
+\end{defi}
+If a poset is complete and co-complete, then we would expect the implication to be given by
+\[
+ A \Rightarrow B = \bigvee \{C : C \wedge A \leq B\}.
+\]
+Indeed, if the poset is a Heyting algebra, then it \emph{must} be given by this. So we just have to check that this works.
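+For the two-element chain $w_1 \leq w_2$ of the earlier example, we can compute the Heyting operations explicitly and watch Peirce's law fail. This is a sketch; representing up-sets as frozensets is our own choice.
+
```python
# Up-sets of the two-element chain w1 <= w2: the three truth values.
bot_v = frozenset()
mid = frozenset({"w2"})
top = frozenset({"w1", "w2"})
upsets = [bot_v, mid, top]

def meet(a, b):
    return a & b

def imp(a, b):
    # A => B is the largest up-set C with C meet A <= B.
    return max((c for c in upsets if meet(c, a) <= b), key=len)

# Peirce's law ((A => B) => A) => A with A = mid, B = bot_v:
a, b = mid, bot_v
peirce = imp(imp(imp(a, b), a), a)
print(peirce == top)  # False: the law does not always take value top
```
+Since the formula fails to take the top value under some assignment, it is not constructively valid, matching the possible world counter-model for Peirce's law.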
+
+The thought is that Heyting algebras would ``validate'' all constructive theses, by interpreting $\wedge$ with $\wedge$, $\vee$ with $\vee$ and $\to$ with $\Rightarrow$. In other words, if we try to build a truth table for a formula with values in a Heyting algebra, we will get the top element $\top$ all the time iff the formula is constructively valid. So the relation between classical logic and Boolean algebras parallels that between constructive logic and Heyting algebras.
+
+It is an easy inductive proof that any constructively valid statement is ``true'' in all Heyting algebras, and the other direction is also true, but takes more work to prove.
+
+Note that in classical logic, we only ever use the two-element boolean algebra. If we use the 4-element Boolean algebra, then we would detect the same set of tautologies. However, in the case of constructive logic we genuinely need all Heyting algebras.
+
+Finally, we prove half the completeness theorem for possible world models.
+\begin{lemma}
+ Any formula with a natural deduction proof not using the rule for classical negation is true in all possible world models.
+\end{lemma}
+For any frame $F$, every such formula is satisfied by every possible world model on that frame. We say it is \emph{valid on $F$}. Such formulae are valid on all posets with a bottom element.
+
+\begin{proof}
+ The hard case is implication. Suppose we have a natural deduction proof of $A \to B$ whose last rule is a $\to$-introduction. By the induction hypothesis, every world that believes $A$ (and the remaining undischarged assumptions) also believes $B$. Now let $w$ be a world that believes all the other undischarged assumptions. By persistence, every $w' \geq w$ believes them as well. So any $w' \geq w$ that believes $A$ also believes $B$. So $w \vDash A \to B$.
+\end{proof}
+
+ % how about other half.
+
+\subsection{Negative interpretation}
+Suppose a constructive logician meets a classical logician, and they try to talk to each other. They would find that they disagree on a lot of things. The classical logician has no problem accepting $\neg \neg A \to A$, but the constructive logician thinks that is nonsense. Can they make sense of each other?
+
+The classical logician can understand the constructive logician via possible world semantics. He could think that when the constructive logician says a proposition is false, they mean there are possible world models where the proposition fails. This is all fine.
+
+But can a constructive logician make sense of the classical logician? When the constructivist sees $A \vee B$, he thinks that either we can prove $A$, or we can prove $B$. However, the classical logician views this as a much weaker statement. It is more along the lines of ``it is not true that $A$ and $B$ are both false''. In general, we can try to interpret classical statements by putting in enough negations:
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ Classical proposition & Constructive interpretation\\
+ \midrule
+ $A \vee B$ & $\neg (\neg A \wedge \neg B)$\\
+ $A \wedge B$& $A \wedge B$\\
+ $\exists_x W(x)$ & $\neg \forall_x \neg W(x)$\\
+ $A \to B$ & $\neg (A \wedge \neg B)$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+This gives rise to the \emph{negative interpretation} of classical logic into constructive logic, due to G\"odel:
+\begin{defi}[Negative interpretation]\index{negative interpretation}
+ Given a proposition $\phi$, the interpretation $\phi^*$ is defined recursively by
+ \begin{itemize}
+ \item $\bot^* = \bot$.
+ \item If $\varphi$ is atomic, then $\varphi^* = \neg\neg \varphi$.
+ \item If $\varphi$ is negatomic (i.e.\ the negation of an atomic formula), then $\varphi^* = \varphi$.
+ \item If $\varphi = \psi \wedge \theta$, then $\varphi^* = \psi^* \wedge \theta^*$.
+ \item If $\varphi = \psi \vee \theta$, then $\varphi^* = \neg (\neg \psi^* \wedge \neg \theta^*)$.
+ \item If $\varphi = \forall_x \psi(x)$, then $\varphi^* = \forall_x \psi^*(x)$.
+ \item If $\varphi = \psi \to \theta$, then $\varphi^* = \neg(\psi^* \wedge \neg \theta^*)$.
+ \item If $\varphi = \exists_x \psi(x)$, then $\varphi^* = \neg \forall_x \neg \psi^*(x)$.
+ \end{itemize}
+\end{defi}
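+The recursion can be transcribed directly. This is a sketch with formulae encoded as tuples (our own convention); $\neg \varphi$ abbreviates $\varphi \to \bot$, and the negatomic clause is omitted, since the encoding has no primitive negation.
+
```python
def neg(p):
    # not(phi) abbreviates phi -> bot.
    return ("imp", p, ("bot",))

def star(p):
    # The negative interpretation, clause by clause as in the definition.
    op = p[0]
    if op == "bot":
        return p
    if op == "atom":
        return neg(neg(p))
    if op == "and":
        return ("and", star(p[1]), star(p[2]))
    if op == "or":
        return neg(("and", neg(star(p[1])), neg(star(p[2]))))
    if op == "imp":
        return neg(("and", star(p[1]), neg(star(p[2]))))
    if op == "all":
        return ("all", p[1], star(p[2]))
    if op == "ex":
        return neg(("all", p[1], neg(star(p[2]))))

A, B = ("atom", "A"), ("atom", "B")
# (A or B)* = not(not not not A and not not not B)
print(star(("or", A, B)) ==
      neg(("and", neg(neg(neg(A))), neg(neg(neg(B))))))  # True
```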
+One can check that we always have $\vdash \varphi \to \varphi^*$. So the constructive version is always stronger.
+
+\begin{defi}[Stable formula]\index{stable formula}
+ A formula is \emph{stable} if
+ \[
+ \vdash \varphi^* \to \varphi.
+ \]
+\end{defi}
+
+\begin{lemma}
+ Any formula built up from negated and doubly negated atomics by $\neg$, $\wedge$ and $\forall$ is stable.
+\end{lemma}
+
+\begin{proof}
+ By induction on formulae. The base case is immediate, using the fact that $\neg \neg \neg A \to \neg A$. This follows from the more general fact that
+ \[
+ (((p \to q) \to q) \to q) \to p \to q.
+ \]
+ It is less confusing to prove this in two steps, and we will write $\lambda$-terms for our proofs. First note that if we have $f: A \to B$, then we can obtain $f^T: (B \to q) \to A \to q$ for any $q$, using
+ \[
+ f^T = \lambda g_{B \to q}. \lambda a_A. g(f(a)).
+ \]
+ So it suffices to prove that
+ \[
+ p \to (p \to q) \to q,
+ \]
+ and the corresponding term is
+ \[
+ \lambda x_p. \lambda g_{p \to q}. g x.
+ \]
+ We now proceed by induction.
+ \begin{itemize}
+ \item Now assume that we have proofs of $\neg \neg p \to p$ and $\neg \neg q \to q$. We want to prove $p \wedge q$ from $\neg \neg (p \wedge q)$. It suffices to prove $\neg \neg p$ and $\neg \neg q$ from $\neg \neg (p \wedge q)$, and we will just do the first one by symmetry.
+
+ We suppose $\neg p$. Then we know $\neg (p \wedge q)$. But we know $\neg \neg (p \wedge q)$. So we obtain a contradiction. So we have proved that $\neg \neg p$.
+ \item Note that we have
+ \[
+ \vdash \exists_x \neg \neg \varphi(x) \to \neg \neg \exists_x \varphi(x),
+ \]
+ but not the other way round. For universal quantification, we have
+ \[
+ \neg \neg \forall_x \varphi(x) \to \forall_x \neg \neg \varphi(x),
+ \]
+ but not the other way round. We can construct a proof as follows:
+ \begin{prooftree}
+ \AxiomC{$[\forall_x \varphi(x)]^1$}
+ \elim{$\forall$}
+ \UnaryInfC{$\varphi(a)$}
+ \AxiomC{$[\neg \varphi(a)]^2$}
+ \elim{$\to$}
+ \BinaryInfC{$\bot$}
+ \intron{$\to$}{1}
+ \UnaryInfC{$\neg \forall_x \varphi(x)$}
+ \AxiomC{$[\neg \neg \forall_x \varphi(x)]^3$}
+ \elim{$\to$}
+ \BinaryInfC{$\bot$}
+ \intron{$\to$}{2}
+ \UnaryInfC{$\neg \neg \varphi(a)$}
+ \intro{$\forall$}
+ \UnaryInfC{$\forall_x \neg \neg \varphi(x)$}
+ \intron{$\to$}{3}
+ \UnaryInfC{$\neg \neg \forall_x \varphi(x) \to \forall_x \neg \neg \varphi(x)$}
+ \end{prooftree}
+ We now want to show that if $\varphi$ is stable, then $\forall_x \varphi(x)$ is stable. In other words, we want
+ \[
+ \neg \neg \forall_x \varphi^*(x) \to \forall_x \varphi^*(x).
+ \]
+ But from $\neg \neg \forall_x \varphi^*(x)$, we can deduce $\forall_x \neg \neg \varphi^*(x)$, which implies $\forall_x \varphi^* (x)$.
+ \end{itemize}
+ So every formula in the range of the negative interpretation is stable. Every stable formula is equivalent to its double negation (classically). Every formula is classically equivalent to a stable formula.
+\end{proof}
+So suppose a constructive logician meets a classical logician who is making some bizarre assertions about first-order logic. To make sense of these, the constructive logician can translate them to their negative interpretations, which the classical logician thinks are equivalent to the originals.
+
+\subsection{Constructive mathematics}
+We now end with some remarks about how we can do mathematics ``constructively'', and the subtleties we have to be careful of.
+
+We first discuss the notion of finiteness. Even classically, if we do not have the axiom of choice, then we have many distinct notions of ``finite set''. When we go constructive, things get more complicated.
+
+We can produce (at least) two different definitions of finiteness.
+\begin{defi}[Kuratowski finite]\index{Kuratowski finite}\index{finite!Kuratowski}
+ We define ``finite'' recursively: $\emptyset$ is Kuratowski finite. If $x$ is Kuratowski finite, then so is $x \cup \{y\}$.
+\end{defi}
+
+There is a separate definition of $N$-finiteness:
+\begin{defi}[$N$-finite]\index{$N$-finite}\index{finite!$N$}
+ $\emptyset$ is $N$-finite. If $x$ is $N$-finite, and $y \not \in x$, then $x \cup \{y\}$ is $N$-finite.
+\end{defi}
+These two definitions are not the same! In an $N$-finite set, we know any two elements are either equal or not, as this is built into the definition of an $N$-finite set. However, for Kuratowski finite sets this need not hold, since we don't have the law of excluded middle. We say $N$-finite sets have \term{decidable equality}.
+If we want to do constructive arithmetic, the correct notion is $N$-finiteness.
+
+We can still define the natural numbers as the smallest set containing $\emptyset$ and closed under successor. This gives us $N$-finite sets, but the least number principle is dodgy. It turns out the least number principle implies excluded middle.
+
+There are also subtleties involving the notion of a non-empty set!
+\begin{defi}[Non-empty set]\index{non-empty set}
+ A set $x$ is \emph{non-empty} if $\neg \forall_y\, y \not\in x$.
+\end{defi}
+
+\begin{defi}[Inhabited set]\index{inhabited set}
+ A set $x$ is \emph{inhabited} if $\exists_y\, y \in x$.
+\end{defi}
+
+They are not the same! Being inhabited is stronger than being non-empty! We shall not go into more details about constructive mathematics, because, disappointingly, very few people care.
+
+\section{Model theory}
+
+\subsection{Universal theories}
+Recall the following definition:
+\begin{defi}[Universal theory]\index{universal theory}
+ A universal theory is a theory that can be axiomatized in a way such that all axioms are of the form
+ \[
+ \forall_{\cdots}(\text{stuff not involving quantifiers})\tag{$*$}
+ \]
+\end{defi}
+For example, groups, rings, modules etc. are universal theories. It is easy to see that if we have a model of a universal theory, then any substructure is also a model.
+
+It turns out the converse is also true! If a theory is such that every substructure of a model is a model, then it is universal. This is a very nice result, because it gives us a correspondence between syntax and semantics.
+
+The proof isn't very hard, and this is the first result we will prove about model theory. We begin with some convenient definitions.
+\begin{defi}[Diagram]\index{diagram}
+ Let $\mathcal{L}$ be a language and $\mathcal{M}$ a structure of this language. The \emph{diagram} of $\mathcal{M}$ is the theory obtained by adding a constant symbol $a_x$ for each $x \in \mathcal{M}$, and then taking the axioms to be all quantifier-free sentences that are true in $\mathcal{M}$. We will write the diagram as $D(\mathcal{M})$.
+%
+% The \emph{diagram} of a structure $\mathcal{M}$ is the theory obtained by expanding $\mathcal{M}$ by giving names to all its elements, and then recording all atomic truths about them.
+\end{defi}
+%
+%\begin{defi}[Expansion and reduct of structure]\index{expansion of structure}\index{reduct of structure}
+% The \emph{expansion} of a structure is what we get when we add more gadgets, i.e.\ add more things to the signature. A \emph{reduct} of a structure is what we get when we forget gadgets.
+%\end{defi}
+%Here we change the signature but not the carrier set.
+%
+%\begin{eg}
+% $\bra \Q, 0, +, \leq\ket$ is an expansion of $\bra \Q, \leq\ket$. Conversely, $\bra \Q, \leq\ket$ is a reduct of $\bra \Q, 0, +, \leq \ket$.
+%\end{eg}
+%
+%\begin{defi}[Extension and substructure]\index{extension}\index{substructure}
+% An \emph{extension} of a structure is a superstructure of the same signature. A substructure is what you think it is.
+%\end{defi}
+%
+%\begin{eg}
+% $\bra \R, \leq\ket$ is an extension of $\bra \Q, \leq\ket$, and $\bra \Q, \leq\ket$ is a substructure of $\bra \R, \leq\ket$.
+%\end{eg}
+
+\begin{lemma}
+ Let $T$ be a consistent theory, and let $T_{\forall}$ be the set of all universal consequences of $T$, i.e.\ all things provable from $T$ that are of the form $(*)$. Let $\mathcal{M}$ be a model of $T_\forall$. Then $T \cup D(\mathcal{M})$ is also consistent.
+\end{lemma}
+
+\begin{proof}
+ Suppose $T \cup D(\mathcal{M})$ is inconsistent. Then an inconsistency can be derived from $T$ together with finitely many of the new axioms; let $\psi$ be their conjunction. Then we have a proof of $\neg \psi$ from $T$. But $T$ knows nothing about the constants appearing in $\psi$. So in fact $T\vdash \forall_{\mathbf{x}} \neg \psi$, where the $\mathbf{x}$ replace those constants. This is a universal consequence of $T$ that $\mathcal{M}$ does not satisfy, and this is a contradiction.
+\end{proof}
+
+\begin{thm}
+ A theory $T$ is universal if and only if every substructure of a model of $T$ is a model of $T$.
+\end{thm}
+
+\begin{proof}
+ $\Rightarrow$ is easy. For $\Leftarrow$, suppose $T$ is a theory such that every substructure of a model of $T$ is still a model of $T$.
+
+ Let $\mathcal{M}$ be an arbitrary model of $T_\forall$. Then $T \cup D(\mathcal{M})$ is consistent by the lemma. So it must have a model, say $\mathcal{M}^*$, and this is in particular a model of $T$. Moreover, $\mathcal{M}$ is a substructure of $\mathcal{M}^*$. So by assumption, $\mathcal{M}$ is a model of $T$.
+
+ So any model of $T_{\forall}$ is also a model of $T$, and the converse is clearly true. So we know $T_{\forall}$ is equivalent to $T$.
+\end{proof}
+
+\subsection{Products}
+In model theory, we would often like to produce new models of a theory from old ones. One way to do so is via products.
+
+We will use $\lambda$-notation to denote functions.
+\begin{defi}[Product of structures]\index{product of structures}
+ Suppose $\{A_i\}_{i \in I}$ is a family of structures of the same signature. Then the product
+ \[
+ \prod_{i \in I} A_i
+ \]
+ has carrier set the set of all functions
+ \[
+ \alpha: I \to \bigcup_{i \in I} A_i
+ \]
+ such that $\alpha(i) \in A_i$.
+
+ Given an $n$-ary function $f$ in the language, the interpretation in the product is given pointwise by
+ \[
+ f(\alpha_1, \cdots, \alpha_n) = \lambda i. f(\alpha_1(i), \cdots, \alpha_n(i)).
+ \]
+ Relations are defined by
+ \[
+ \varphi(\alpha_1, \cdots, \alpha_n) = \bigwedge_{i \in I} \varphi(\alpha_1(i), \cdots, \alpha_n(i)).
+ \]
+\end{defi}
+The natural question to ask is if we have a model of a theory, then is the product a model again? We know this is not always true. For example, the product of groups is a group, but the product of total orders is not a total order.
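+For instance, we can check pointwise that the product of two copies of a total order fails to be total (a quick sketch):
+
```python
from itertools import product

# The product of two copies of the total order ({0, 1}, <=),
# with the order relation interpreted pointwise.
carrier = list(product([0, 1], repeat=2))

def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

# Each factor is total, but the product is not:
print(leq((0, 1), (1, 0)), leq((1, 0), (0, 1)))  # False False
total = all(leq(x, y) or leq(y, x) for x in carrier for y in carrier)
print(total)  # False
```
+The pairs $(0, 1)$ and $(1, 0)$ are incomparable, so totality is not preserved by products.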
+
+We say a product \emph{preserves} a formula $\varphi$ if
+\[
+	\prod_{i \in I} A_i \vDash \varphi \Longleftrightarrow \forall_{i \in I},\; A_i \vDash \varphi.
+\]
+What sort of things do products preserve? As we know from undergraduate mathematics, theories such as groups and rings are preserved.
+\begin{defi}[Equational theory]\index{equational theory}
+ An equational theory is a theory all of whose axioms are of the form
+ \[
+		\forall_{\mathbf{x}} (w_1(\mathbf{x}) = w_2(\mathbf{x})),
+ \]
+ where $w_i$ are some terms in $\mathbf{x}$.
+\end{defi}
+It is not hard to see that models of equational theories are preserved under products.
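+For example, the usual axiomatization of groups is equational:
+\begin{eg}
+	In the signature with a binary operation $\cdot$, a unary operation $^{-1}$ and a constant $e$, the theory of groups has axioms
+	\[
+		\forall_{x, y, z}\, ((x \cdot y) \cdot z = x \cdot (y \cdot z)),\quad \forall_x\, (x \cdot e = x),\quad \forall_x\, (x \cdot x^{-1} = e).
+	\]
+	These are all of the required form, so products of groups are groups, as we already knew.
+\end{eg}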
+
+It turns out actually the class of things preserved by products is much more general.
+
+\begin{defi}[Horn clause]\index{Horn clause}
+	A \emph{Horn clause} is a disjunction of atomic and negatomic formulae (a \emph{negatomic} formula is the negation of an atomic formula), of which at most one disjunct is atomic.
+
+ It is usually better to think of Horn clauses as formulae of the form
+ \[
+ \left(\bigwedge \varphi_i\right) \to \chi
+ \]
+ where $\varphi_i$ and $\chi$ are atomic formulae. Note that $\bot$ is considered an atomic formula.
+
+ A \term{universal Horn clause}\index{Horn clause!universal} is a universal quantifier followed by a Horn clause.
+\end{defi}
+
+\begin{eg}
+ Transitivity, symmetry, antisymmetry, reflexivity are all universal Horn clauses.
+\end{eg}
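+For example, transitivity can be written as
+\[
+	\forall_{x, y, z}\, ((x \leq y \wedge y \leq z) \to x \leq z),
+\]
+which is a universal Horn clause with two negatomic disjuncts and one atomic disjunct. On the other hand, totality $\forall_{x, y}\, (x \leq y \vee y \leq x)$ has two atomic disjuncts, so it is not a Horn clause. This is consistent with our earlier observation that products do not preserve total orders.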
+
+\begin{prop}
+ Products preserve (universal) Horn formulae.
+\end{prop}
+
+\begin{proof}
+	Suppose every factor satisfies
+	\[
+		A_n \vDash \forall_{\mathbf{x}} \left(\bigwedge_i \varphi_i(\mathbf{x})\right) \to \chi(\mathbf{x}).
+ \]
+ We want to show that the product believes in the same statement. So let $(f_1, \cdots, f_k)$ be a tuple in the product of the right length satisfying the antecedent, i.e.\ for each $n \in I$, we have
+ \[
+ A_n \vDash \varphi_i (f_1(n), \cdots, f_k(n))
+ \]
+	for each $i$. But then by assumption,
+ \[
+ A_n \vDash \chi(f_1(n), \cdots, f_k(n))
+ \]
+	for all $n$. So the product also believes in $\chi(f_1, \cdots, f_k)$. So we are done.
+\end{proof}
+
+\subsubsection*{Reduced products}
+Unfortunately, it is rarely the case that products give us new interesting models. However, \emph{reduced} products do.
+
+Reduced products arise from having a filter on the index set.
+\begin{defi}[Filter]\index{filter}
+ Let $I$ be a set. A \emph{filter} on $I$ is a (non-empty) subset $F \subseteq P(I)$ such that $F$ is closed under intersection and superset. A \term{proper filter}\index{proper filter}\index{filter!proper} is a filter $F \not= P(I)$.
+\end{defi}
+
+The intuition to hang on to is that $F$ captures the intuition of ``largeness''.
+\begin{eg}
+	Take $I = \N$, and let $F$ be the set of cofinite subsets of $\N$, namely the sets whose complement is finite. Then $F$ is a filter.
+
+	Similarly, we can take $F$ to be the collection of subsets of $\N$ with asymptotic density $1$.
+\end{eg}
+
+\begin{eg}
+ Let
+ \[
+		F = \{x \subseteq \N: 17 \in x\}.
+ \]
+ Then this is a \emph{maximal} proper filter.
+\end{eg}
+This is a rather silly filter. We will see later that model-theoretically, these give us uninteresting reduced products.
+
+\begin{defi}[Principal filter]\index{principal filter}
+ A \emph{principal filter} is a filter of the form
+ \[
+		F = \{X \subseteq I: x \in X\}
+ \]
+ for some $x \in I$.
+\end{defi}
+We don't tend to care about principal filters, as they are boring.
+
+\begin{defi}[Complete filter]\index{complete filter}\index{$\kappa$-complete filter}\index{filter!complete}\index{filter!$\kappa$-complete}
+	A filter $F$ is \emph{$\kappa$-complete} if it is closed under intersections of fewer than $\kappa$ many of its members.
+\end{defi}
+
+\begin{eg}
+ By definition, every filter is $\aleph_0$-complete.
+\end{eg}
+
+\begin{eg}
+ Principal filters are $\kappa$-complete for all $\kappa$.
+\end{eg}
+
+A natural question is: are there any non-principal ultrafilters that are $\aleph_1$-complete? It turns out this is a deep question, and a lot of the study of set theory originated from it.
+
+Filters on $I$ form a complete poset under inclusion, with top element given by $F = P(I)$. Moreover, the \emph{proper} filters form a chain-complete poset. Thus, by Zorn's lemma, there are \emph{maximal} proper filters.
+\begin{defi}[Ultrafilter]\index{ultrafilter}
+	An \emph{ultrafilter} is a maximal \emph{proper} filter.
+\end{defi}
+
+We saw that principal filters are ultra. Are there non-principal ultrafilters? Fortunately, by Zorn's lemma, they do exist.
+\begin{eg}
+ By Zorn's lemma, there is a maximal filter extending the cofinite filter on $\N$, and this is non-principal.
+\end{eg}
+It turns out it is impossible to explicitly construct a non-principal ultrafilter, but Zorn's lemma says they exist.
+
+%The dual notion to filter is an \term{ideal}, which are closed under union and subsets. They are called ideals because they are ideals in the corresponding boolean ring.
+
+Now we can get to reduced products.
+\begin{defi}[Reduced product]\index{reduced product}\index{product!reduced}
+	Let $\{A_i: i \in I\}$ be a family of structures, and $F$ a filter on $I$. We define the \emph{reduced product}
+ \[
+ \prod_{i \in I} A_i / F
+ \]
+ as follows: the underlying set is the usual product $\prod A_i$ quotiented by the equivalence relation
+ \[
+ \alpha \sim_F \beta \Longleftrightarrow \{i : \alpha(i) = \beta(i)\} \in F
+ \]
+ Given a function symbol $f$, the interpretation of $f$ in the reduced product is induced by that on the product.
+
+ Given a relational symbol $\varphi$, we define
+ \[
+ \varphi(\alpha_1, \cdots, \alpha_n) \Longleftrightarrow \{i : \varphi(\alpha_1(i), \cdots, \alpha_n(i))\} \in F.
+ \]
+ If $F$ is an ultrafilter, then we call it the \term{ultraproduct}. If all the factors in an ultraproduct are the same, then we call it an \term{ultrapower}.
+\end{defi}
+It is an easy exercise to show that these are all well-defined. If we view the filter as telling us which subsets of $I$ are ``large'', then our definition of reduced product says we regard two functions in the reduced product as equivalent if they agree on a ``large'' set.
+
+Note that if our filter is principal, say $F = \{J \subseteq I: i \in J\}$, then the reduced product is just isomorphic to $A_i$ itself. So this is completely uninteresting.
+
+%Suppose we have a family $\{A_i: i \in I\}$ of structures. We can as usual form the product $\prod A_i$. Suppose $F$ is a filter. Recall that $F$ is supposed to capture the notion of largeness. So we now form an equivalence relation on the elements by saying two elements are equal if they agree on a large set: For $f, g \in \prod A_i$, we say
+%
+%Observe that the definition of ``filter'' is designed to make $\sim_F$ an equivalence relation.
+%
+%\begin{defi}[Congruence relation]\index{congruence relation}
+% We say an equivalence relation $\sim$ on $X$ is a \emph{congruence relation} for a function $f: X^m \to X$ iff for all $\mathbf{x}, \mathbf{y} \in X^m$ such that $x_i \sim y_i$ for all $i$, then
+% \[
+% f(\mathbf{x}) \sim f(\mathbf{y}).
+% \]
+%\end{defi}
+%This in some sense says $f$ cannot tell equivalent things apart. Alternatively, this says $f$ passes on to a function $X^m/\sim \to X/\sim$.
+%
+%We can similarly define the notion of being a congruence relation for a predicate.
+%\begin{eg}
+% Equivalence mod $p$ is a congruence relation for $+$ and $\times$, but not for exponentiation.
+%\end{eg}
+%
+%We now consider the quotient
+%\[
+% \prod_{i \in I} A_i/\sim_{F} = \prod_{i \in I} A_i /F.
+%\]
+%It is a very important, and also trivial, fact, that $\sim_F$ is a congruence relation for all the operations on the $A_i$ that are inherited by the product. The result is called the \term{reduced product} modulo $F$.
+
+Reduced products preserve various things, but not everything. For example, the property of being a total order is in general not preserved by reduced products. So we still have the same problem.
+
+But if the filter $F$ is an \emph{ultrafilter}, then nice things happen.
+
+\begin{thm}[\L{}o\'s theorem]\index{\L{}o\'s theorem}
+ Let $\{A_i: i \in I\}$ be a family of structures of the same (first-order) signature, and $\mathcal{U} \subseteq P(I)$ an ultrafilter. Then
+ \[
+ \prod_{i \in I} A_i/\mathcal{U} \vDash \varphi \Longleftrightarrow \{i: A_i \vDash \varphi\} \in \mathcal{U}.
+ \]
+ In particular, if $A_i$ are all models of some theory, then so is $\prod A_i / \mathcal{U}$.
+\end{thm}
+
+The key of the proof is the following lemma, which is a nice exercise:
+\begin{lemma}
+ Let $F$ be a filter on $I$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $F$ is an ultrafilter.
+ \item For $X \subseteq I$, either $X \in F$ or $I\setminus X \in F$ (``$F$ is prime'').
+		\item If $X, Y \subseteq I$ and $X \cup Y \in F$, then $X \in F$ or $Y \in F$.
+ \end{enumerate}
+\end{lemma}
+With this in mind, it is hard not to prove the theorem.
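+Indeed, here is a sketch of the key induction step, for negation: assuming the theorem holds for a sentence $\varphi$, property (ii) of the lemma gives
+\[
+	\prod_{i \in I} A_i/\mathcal{U} \vDash \neg \varphi \Longleftrightarrow \{i: A_i \vDash \varphi\} \not\in \mathcal{U} \Longleftrightarrow \{i: A_i \vDash \neg \varphi\} \in \mathcal{U}.
+\]
+The steps for $\wedge$ and $\exists$ only use the filter properties, so the ultrafilter condition is needed precisely to handle negation.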
+
+% How should we think about the elements of the ultraproduct? One can certainly view its elements as equivalence classes of elements in the product. However, sometimes it is helpful to think of the carrier set as the product itself, and instead weaken to notion of equality to say two things are equal if the are equivalent under $\sim_\mathcal{U}$.
+
+%Note that there is an elementary embedding $K: \mathcal{M} \to \mathcal{M}^I/\mathcal{U}$ given by sending $m \in \mathcal{M}$ to $[\lambda i. m] \in \mathcal{M}^I/\mathcal{U}$.
+
+%This is the natural inclusion one can think of, and we can show this is in fact elementary. It suffices to show that for any $m \in \mathcal{M}$, if there an $x \in \mathcal{M}^I/\mathcal{U}$ such that
+%\[
+% \mathcal{M}^I/\mathcal{U} \vDash \varphi(x, \iota(m)),
+%\]
+%then there is such an $x$ that is in the image of $\iota$.
+%
+%Consider such an $x$. It is an equivalence class of a family of functions which is almost everywhere related to $m$ by $\varphi$. So by \L{}o\'s theorem, tthere must be some $x' \in \mathcal{M}$ such that
+%\[
+% \mathcal{M} \vDash \varphi(x, m).
+%\]
+%Then we have
+%\[
+% \mathcal{M}^I/\mathcal{U} \vDash \varphi(\iota(x'), \iota(m')).
+%\]
+Let's now look at some applications of \L{}o\'s theorem.
+
+Recall the compactness theorem of first order logic --- if we have a theory $T$ such that every finite subset of $T$ has a model, then so does $T$. The way we proved it was rather roundabout. We proved the completeness theorem of first order logic. Then we notice that if every finite subset of $T$ has a model, then every finite subset of $T$ is consistent. Since proofs are finite, we know $T$ is consistent. So by completeness, $T$ has a model.
+
+Now that we are equipped with ultraproducts, it is possible to prove the compactness theorem directly!
+\begin{thm}[Compactness theorem]\index{compactness theorem}
+ Let $T$ be a theory in first order logic such that every finite subset has a model. Then $T$ has a model.
+\end{thm}
+
+\begin{proof}
+ Let $\Delta$ be such a theory. Let $S = \mathcal{P}_{\aleph_0}(\Delta)$ be the set of all finite subsets of $\Delta$. For each $s \in S$, we pick a model $\mathcal{M}_s$ of $s$.
+
+ Given $s \in S$, we define
+ \[
+ X_s = \{t \in S: s \subseteq t\}.
+ \]
+	We notice that $\{X_s: s \in S\}$ generates a proper filter on $S$, since $X_s \cap X_t = X_{s \cup t} \not= \emptyset$. We extend this to an ultrafilter $\mathcal{U}$ by Zorn's lemma. Then we claim that
+ \[
+ \prod_{s \in S} \mathcal{M}_s/\mathcal{U}\vDash \Delta.
+ \]
+ Indeed, for any $\varphi \in \Delta$, we have
+% To see this, we note that for any $\varphi \in \Delta$, we have $X_{\{\varphi\}} \in \mathcal{U}$. Also, for any $s \in X_{\{\varphi\}}$, we have $\mathcal{M}_s \vDash \varphi$. So
+ \[
+ \{s: \mathcal{M}_s \vDash \varphi\} \supseteq X_{\{\varphi\}} \in \mathcal{U}.\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Let $T$ be the theory consisting of all first-order statements true in $\bra \R, 0, 1, +, \times\ket$. Add to $T$ a constant $\varepsilon$ and the axioms $\varepsilon < \frac{1}{n}$ for all $n \in \N$. Then for any finite subset of this new theory, $\R$ is still a model, by picking $\varepsilon$ small enough. So by the compactness theorem, we know this is consistent.
+
+ Using \emph{our} proof of the compactness theorem, we now have a ``concrete'' model of real numbers with infinitesimals --- we just take the ultraproduct of infinitely many copies of $\R$.
+\end{eg}
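+Unwinding the construction, we can exhibit an infinitesimal explicitly. Take a non-principal ultrafilter $\mathcal{U}$ on $\N$, which therefore contains every cofinite set. Then
+\[
+	\varepsilon = \left[\left(1, \frac{1}{2}, \frac{1}{3}, \cdots\right)\right] \in \R^{\N}/\mathcal{U}
+\]
+satisfies $0 < \varepsilon < \frac{1}{n}$ for every $n \in \N$, since each of these inequalities holds on a cofinite set of coordinates, which lies in $\mathcal{U}$. By \L{}o\'s theorem, this ultrapower satisfies every first-order statement true in $\R$, while containing this infinitesimal.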
+
+\subsection{Ehrenfeucht--Mostowski theorem}
+\begin{defi}[Skolem function]\index{Skolem function}
+ \emph{Skolem functions} for a structure are functions $f_\varphi$ for each $\varphi \in \mathcal{L}$ such that if
+ \[
+ \mathcal{M} \vDash \forall_\mathbf{x} \exists_\mathbf{y} \varphi(\mathbf{x}, \mathbf{y}),
+ \]
+ then
+ \[
+ \mathcal{M} \vDash \forall_\mathbf{x} \varphi(\mathbf{x}, f_\varphi(\mathbf{x})).
+ \]
+\end{defi}
+
+\begin{defi}[Skolem hull]\index{Skolem hull}
+	The \emph{Skolem hull} of a structure is obtained from the constant terms by closing under the Skolem functions.
+\end{defi}
+
+\begin{defi}[Elementary embedding]\index{elementary embedding}
+ Let $\Gamma$ be a set of formulae. A function $i: \mathcal{M}_1 \to \mathcal{M}_2$ is \term{$\Gamma$-elementary} iff for all $\varphi \in \Gamma$ we have $\mathcal{M}_1 \vDash \varphi(\mathbf{x})$ implies $\mathcal{M}_2 \vDash \varphi(i(\mathbf{x}))$.
+
+ If $\Gamma$ is the set of all formulae in the language, then we just say it is \emph{elementary}.
+\end{defi}
+Usually, we take $\Gamma$ to be the set of all formulae, and $\mathcal{M}_1$ to be a substructure of $\mathcal{M}_2$.
+
+\begin{eg}
+ It is, at least intuitively, clear that the inclusion $\bra \Q, \leq\ket \hookrightarrow \bra \R, \leq \ket$ is elementary.
+
+	However, $\Q$ as a field is \emph{not} elementary as a substructure of $\R$ as a field: for instance, $\exists_x\, (x \times x = 1 + 1)$ holds in $\R$ but not in $\Q$.
+\end{eg}
+
+Note that first order logic is not decidable. However, \emph{monadic} first order logic is.
+\begin{defi}[Monadic first order logic]\index{monadic first order logic}\index{first order logic!monadic}
+ \emph{Monadic first-order logic} is first order logic with only one-place predicates, no equality and no function symbols.
+\end{defi}
+
+\begin{prop}
+ Monadic first-order logic is decidable.
+\end{prop}
+
+\begin{proof}
+ Consider any formula $\varphi$. Suppose it involves the one-place predicates $p_1, \dots, p_n$. Given any structure $\mathcal{M}$, we consider the quotient of $\mathcal{M}$ by
+ \[
+		x \sim y \Leftrightarrow p_i (x) = p_i(y) \text{ for all } i.
+ \]
+	Then there are at most $2^n$ classes in the quotient.
+
+	Satisfaction of $\varphi$ only depends on the quotient, so it suffices to check whether the formula holds in a transversal of the quotient, which is a structure of size at most $2^n$. Since there are only finitely many such structures to consider (up to the relevant equivalence), we can decide.
+\end{proof}
+
+\begin{defi}[Set of indiscernibles]\index{set of indiscernibles}
+	We say $\bra I, \leq_I\ket$ is a \emph{set of indiscernibles} for $\mathcal{L}$ and a structure $\mathcal{M}$ with $I \subseteq \mathcal{M}$ if for any $\varphi \in \mathcal{L}(M)$ with $n$ free variables, and for all increasing $n$-tuples $\mathbf{x}, \mathbf{y}$ from $I$,
+	\[
+		\mathcal{M} \vDash \varphi(\mathbf{x}) \Longleftrightarrow \mathcal{M} \vDash \varphi(\mathbf{y}).
+	\]
+\end{defi}
+This is weaker than saying that all the elements of $I$ are the same.
+
+\begin{eg}
+	$\bra \Q, \leq\ket$ is a set of indiscernibles for $\bra \R, \leq\ket$: any increasing $n$-tuple of rationals can be mapped to any other by an order-automorphism of $\R$.
+\end{eg}
+
+Let $T$ be a theory with infinite models. The general line of thought is --- can we get a model of $T$ with special properties?
+
+Given any set $\mathcal{M}$, invent names for every member of $\mathcal{M}$. Add these names to the language of $T$. We add axioms to say all these constants are distinct. Since $T$ has infinite models, we know this theory has a model.
+
+Let $\Omega$ be the set of countable ordinals, consider the language of $\R$, and add a name to this language for every countable ordinal. We further add axioms to say that $\alpha < \beta$ in the ordering of $\R$ whenever $\alpha < \beta$ as ordinals.
+
+Again by compactness, we find that this has a model. So we can embed the set of countable ordinals into a model of reals, which we know is impossible for the genuine $\R$. However, in general, there is no reason to believe these elements are a set of indiscernibles. What Ehrenfeucht--Mostowski says is that we can in fact do so.
+
+\begin{thm}[Ehrenfeucht--Mostowski theorem (1956)]\index{Ehrenfeucht--Mostowski theorem}
+ Let $\bra I, \leq\ket$ be a total order, and let $T$ be a theory with infinite models. Suppose we have a unary predicate $P$ and a $2$-ary relation $\preccurlyeq \in \mathcal{L}(T)$ such that
+ \[
+ T \vdash \text{$\preccurlyeq$ is a total order on $\{x: P(x)\}$}.
+ \]
+ Then $T$ has a model $\mathcal{M}$ with a copy of $I$ as a sub-order of $\preccurlyeq$, and the copy of $I$ is a set of indiscernibles. Moreover, we can pick $\mathcal{M}$ such that every order-automorphism of $\bra I, \leq \ket$ extends to an automorphism of $\mathcal{M}$.
+\end{thm}
+
+We will give two proofs of this result. We first give the original proof of Ehrenfeucht--Mostowski, which uses Ramsey theory.
+\begin{proof}
+ Let $T$ and $\bra I, \leq \ket$ be as in the statement of the theorem. We add to $\mathcal{L}(T)$ names for every element of $I$, say $\{c_i: i \in I\}$. We add axioms that says $P(c_i)$ and $c_i \preccurlyeq c_j$ whenever $i < j$. We will thereby confuse the orders $\leq$ and $\preccurlyeq$, partly because $\leq$ is much easier to type. We call this theory $T^*$.
+
+ Now we add to $T^*$ new axioms to say that the $c_i$ form a set of indiscernibles. So we are adding axioms like
+ \[
+ \varphi(c_i, c_j) \Leftrightarrow \varphi(c_{i'}, c_{j'})\tag{$*$}
+ \]
+ for all $i < j$ and $i' < j'$. We do this simultaneously for all $\varphi \in \mathcal{L}(T)$ and all tuples of the appropriate length. We call this theory $T^I$, and it will say that $\bra I, \leq\ket$ forms a set of indiscernibles. The next stage is, of course, to prove that $T^I$ is consistent.
+
+	Consider any finite fragment $T'$ of $T^I$. We want to show that $T'$ is consistent. By finiteness, $T'$ only mentions finitely many constants, say $c_1 < \cdots < c_K$, and only involves finitely many axioms of the form $(*)$. Denote the formulae involved by $\varphi_1, \cdots, \varphi_n$. We let $N$ be the supremum of the arities of the $\varphi_i$.
+
+ Pick an infinite model $\mathcal{M}$ of $T$. We write
+ \[
+		\mathcal{M}^{[N]} = \{A \subseteq \mathcal{M}: |A| = N\}.
+	\]
+	For each $\varphi_i$, we partition $\mathcal{M}^{[N]}$ as follows --- given any collection $\{a_k\}_{k = 1}^N$, we use the order relation $\preccurlyeq$ to order them, so we suppose $a_k \preccurlyeq a_{k + 1}$. If $\varphi_i$ has arity $m \leq N$, then we can check whether $\varphi_i(a_1, \cdots, a_m)$ holds, and the truth value partitions $\mathcal{M}^{[N]}$ into two parts.
+
+	If we do this for all $\varphi_i$, then we have finitely partitioned $\mathcal{M}^{[N]}$. By Ramsey's theorem, this has an infinite monochromatic subset, i.e.\ a subset such that any two collections of $N$ members fall in the same class. We pick elements $c_1, \cdots, c_K, \cdots, c_{K + N}$ of this subset, in increasing order (under $\preccurlyeq$). We claim that taking $c_1, \cdots, c_K$ as our constants satisfies the axioms of $T'$.
+
+	Indeed, given any $\varphi_i$ mentioned in $T'$ with arity $m \leq N$, and sequences $c_{\ell_1} < \cdots < c_{\ell_m}$ and $c_{\ell_1'} < \cdots < c_{\ell_m'}$, we can extend these sequences on the right by adding more of those $c_i$. Then by our choice of the colouring, we know
+ \[
+		\varphi_i(c_{\ell_1}, \cdots, c_{\ell_m}) \Leftrightarrow \varphi_i(c_{\ell_1'}, \cdots, c_{\ell_m'}).
+ \]
+ So we know $T'$ is consistent. So $T^I$ is consistent. So we can just take a model of $T^I$, and the Skolem hull of the indiscernibles is the model desired.
+\end{proof}
+
+There is another proof of Ehrenfeucht--Mostowski due to Gaifman, using ultraproducts. We will sketch the proof here.
+
+To do so, we need the notion of colimits.
+\begin{defi}[Colimit]\index{colimit}
+	Let $\{A_i: i \in I\}$ be a family of structures indexed by a poset $\bra I, \leq\ket$, with a family of (structure-preserving) maps $\{\sigma_{ij}: A_i \hookrightarrow A_j \mid i \leq j\}$ such that whenever $i \leq j \leq k$, we have
+ \[
+ \sigma_{jk} \sigma_{ij} = \sigma_{ik}.
+ \]
+ In particular $\sigma_{ii}$ is the identity. A \emph{colimit} or \term{direct limit} of this family of structures is a ``minimal'' structure $A_\infty$ with maps $\sigma_i: A_i \hookrightarrow A_\infty$ such that whenever $i \leq j$, then the maps
+ \[
+ \begin{tikzcd}
+ A_i \ar[d, "\sigma_{ij}"'] \ar[r, "\sigma_i"] & A_\infty\\
+ A_j \ar[ur, "\sigma_j"']
+ \end{tikzcd}
+ \]
+ commute.
+
+ By ``minimal'', we mean if $A_\infty'$ is another structure with this property, then there is a unique inclusion map $A_\infty \hookrightarrow A_\infty'$ such that for any $i \in I$, the maps
+ \[
+ \begin{tikzcd}
+ A_i \ar[r, "\sigma_i"] \ar[rd, "\sigma_i'"'] & A_\infty\ar[d, hook]\\
+ & A_\infty'
+ \end{tikzcd}
+ \]
+\end{defi}
+
+\begin{eg}
+ The colimit of a family of sets is obtained by taking the union of all of them, and then identifying $x \sim \sigma_{ij}(x)$ for all $x \in A_i$ and $i \leq j$.
+\end{eg}
+
+The key observation is the following:
+\begin{significant}
+ Every structure is a direct limit of its finitely generated substructures, with the maps given by inclusion.
+\end{significant}
+We will neither prove this nor make this more precise, but it will be true for the different kinds of structures we are interested in. In particular, for a poset, a finitely generated substructure is just a finite suborder, because there are no function symbols to ``generate'' anything in the language of posets.
+
+To prove Ehrenfeucht--Mostowski, we construct a model satisfying the statement of the theorem as the direct limit of structures indexed by the finite subsets of $I$:
+\[
+	\{\mathcal{M}_s: s \in \mathcal{P}_{\aleph_0}(I)\},
+\]
+and when $s \subseteq t \in \mathcal{P}_{\aleph_0}(I)$, we have an elementary embedding $\mathcal{M}_s \hookrightarrow \mathcal{M}_t$. We then note that if all the $\sigma_{ij}$ are elementary, then the $\sigma_i: A_i \hookrightarrow A_\infty$ are also elementary. In particular, if each $\mathcal{M}_s$ is a model of our theory, then so is $A_\infty$.
+
+We start by picking any infinite model $\mathcal{M}$ of $T$. By standard compactness arguments, we may wlog assume that there is no last $\preccurlyeq$-element in $\mathcal{M}$, and that $J = \{x: P(x)\}$ is infinite. We pick an ultrafilter $\mathcal{U}$ on $J$ containing all terminal segments of $\preccurlyeq$. Since $J$ does not have a last element, this is a non-principal ultrafilter.
+
+We now write
+\[
+	L(\mathcal{M}) = \mathcal{M}^{J}/\mathcal{U}.
+\]
+We will define
+\[
+ \mathcal{M}_s = L^{|s|}(\mathcal{M}).
+\]
+In particular, $\mathcal{M}_{\emptyset} = \mathcal{M}$.
+
+To construct the embedding, we consider the following two classes of maps:
+\begin{enumerate}
+	\item For any structure $\mathcal{M}$, there is a map $K = K(\mathcal{M}): \mathcal{M} \to L(\mathcal{M})$ given by the constant embedding $m \mapsto [\lambda j.\, m]$.
+ \item If $i$ is an embedding from $\mathcal{M}$ into $\mathcal{N}$, then there is an embedding
+ \[
+		L(i): L(\mathcal{M}) \to L(\mathcal{N}),
+	\]
+	given by post-composition with $i$, i.e.\ $L(i)([\alpha]) = [i \circ \alpha]$.
+\end{enumerate}
+It is easy to see that both of these maps are elementary embeddings, where we require $i$ to be elementary in the second case. Moreover, these maps are compatible in the sense that the following diagram always commutes:
+\[
+ \begin{tikzcd}
+ \mathcal{M} \ar[r, "i"] \ar[d, "K(\mathcal{M})"] & \mathcal{N} \ar[d, "K(\mathcal{N})"]\\
+ L(\mathcal{M}) \ar[r, "L(i)"'] & L(\mathcal{N})
+ \end{tikzcd}
+\]
+%How do we construct these $\mathcal{M}_s$? We start with any model $\mathcal{M}$ of $T$. We are going to assume that there is no last $\preccurlyeq$-thing in $\mathcal{M}$, and also that the graph of $\preccurlyeq$ is infinite. We let $\mathcal{U}$ be a non-principal ultrafilter $J = \{x: P(x)\}$. Then we pick $\mathcal{M}_s$ to be the result of doing an ultrapower construction using $\mathcal{U}$, $|s|$ many times to $\mathcal{M}$. The tricky bit is to construct the embeddings.
+%We define this by recursion. We write $\last(s)$ for the last element of $s$, and $\butlast(s)$ for $s \setminus \{\last(s)\}$. Given a structure $\mathcal{M}$, we write
+%\[
+% L(\mathcal{M}) = \mathcal{M}^p/\mathcal{U}.
+%\]
+%We need two gadgets
+%\begin{enumerate}
+% \item An elementary embedding $K(\mathcal{M}): \mathcal{M} \to L(\mathcal{M})$.
+% \item If $i$ is an embedding from $\mathcal{M}$ into $\mathcal{N}$, then there is an
+% \[
+% L(i): L(\mathcal{M}) \to L(\mathcal{N}).
+% \]
+% which is ``compose with $i$ on the right''.
+%\end{enumerate}
+%Moreover, these are compatible in the sense
+%\[
+% \begin{tikzcd}
+% \mathcal{M} \ar[r, "i"] \ar[d, "K(\mathcal{M})"] & \mathcal{N} \ar[d, "K(\mathcal{N})"]\\
+% L(\mathcal{M}) \ar[r, "L(i)"] & L(\mathcal{N})
+% \end{tikzcd}
+%\]
+%We can now recursively define $I(s, t): \mathcal{M}_s \to \mathcal{M}_t$.
+We now further consider the functions $\head$ and $\tail$ defined on finite linear orders by
+\begin{align*}
+ \head(a_1 < \cdots < a_n) &= a_1\\
+ \tail(a_1 < \cdots < a_n) &= a_2 < \cdots < a_n.
+\end{align*}
+We can now define the embeddings recursively. We write $I(s, t)$ for the desired inclusion from $\mathcal{M}_s$ into $\mathcal{M}_t$. We set
+\begin{itemize}
+ \item If $s = t$, then $I(s, t)$ is the identity.
+ \item If $\head(s) = \head(t)$, then $I(s, t)$ is $L(I(\tail(s), \tail(t)))$.
+ \item Otherwise, $I(s, t)$ is $K \circ I(s, \tail(t))$.
+% \item If $s = t$, then $I(s, t)$ is the identity.
+% \item If $\last(s) = \last(t)$, then $I(s, t)$ is $L(I(\butlast(s), \butlast(t)))$. Otherwise, it is $K \circ I(s, \butlast(t))$.
+\end{itemize}
+By our previous remark, we know these are all elementary embeddings, and that these maps form a commuting set. Thus, we obtain a direct limit $\mathcal{M}_\infty$.
+
+Now we want to produce a copy of $I$ in the direct limit. So we want to produce a copy of $s$ in $\mathcal{M}_s$ for each $s$, such that the maps preserve these copies. It suffices to be able to do it in the case $|s| = 2$, and then we can bootstrap it up by induction, since if $|s| \geq 3$, then we can find two subsets of $s$ of size $< |s|$ whose union is $s$, and compatibility means we can glue them together. To ensure compatibility, we pick a fixed element $x \in L(\mathcal{M})$, and set this as the designated element in $\mathcal{M}_s$ whenever $|s| = 1$. It then suffices to prove that, say, if $0 < 1$, then $I(\{0\}, \{0, 1\})(x) < I(\{1\}, \{0, 1\})(x)$. It is then a straightforward exercise to check that
+\[
+	x = [\lambda p. p] \in L(\mathcal{M}) = \mathcal{M}^{J}/\mathcal{U}
+\]
+works.
+
+Once we have done this, it is immediate that the copy of $I$ in the direct limit is a family of indiscernibles. Indeed, we simply have to check that the copy of $s$ in $\mathcal{M}_s$ is a family of indiscernibles, since every formula only involves finitely many things. Then since the inclusion of $\mathcal{M}_s$ into $\mathcal{M}_\infty$ is elementary, the result follows.
+
+%
+%We construct this by induction. If $|s| = 1$, then $\mathcal{M}_s \cong \mathcal{M}$. We arbitrarily pick some $p \in \mathcal{M}$ and let it be our designated member.
+%
+%Now suppose know how to point to $|s|$-many things in $\mathcal{M}_s$. How about $\mathcal{M}_t$, where $|t| = |s| + 1$? We let $t'$ be $T$ with the penultimate elemenet deleted. By induction hypothesis, we know what the lsat thing in $\mathcal{M}_{t'}$ is. We have the embedding
+%\[
+% I(t', t): \mathcal{M}_{t'} \to \mathcal{M}_t.
+%\]
+%This will tells us which thing in the $|t|$th ultrapower is the designated element. % figure this out.
+%
+%Since every finite formula contains only finitely many constants, it follows that the copy of $I$ in the direct limit is a family of indiscernibles.
+
+\subsection{The omitting type theorem}
+\begin{defi}[Type]\index{type}\index{$n$-types}
+ A \emph{type} is a set of formulae all with the same number of free variables. An \emph{$n$-type} is a set of formulae each with $n$ free variables.
+\end{defi}
+What is the motivation for this? A $1$-type is a set of formulae all with one free variable. So if we have a model $\mathcal{M}$ and an element $x \in \mathcal{M}$, then we can consider all formulae that $x$ satisfies, and this gives us a $1$-type.
+
+\begin{defi}[Realization of type]\index{realization of type}
+	A model $\mathcal{M}$ \emph{realizes an $n$-type $\Sigma$} if there exist $x_1, \cdots, x_n \in \mathcal{M}$ such that for all $\sigma \in \Sigma$, we have
+ \[
+ \mathcal{M} \vDash \sigma(x_1, \cdots, x_n).
+ \]
+\end{defi}
+%
+%\begin{defi}[Saturated model]\index{saturated model}\index{model!saturated}\index{model!countably saturated}\index{$\aleph_0$-saturated model}\index{countably saturated model}
+% We say a model $\mathcal{M}$ is called $\aleph_0$-saturated if $\mathcal{M}$ realizes every finitely satisfiable type.
+%\end{defi}
+
+Any fool can realize a type, but it takes a model theorist to omit one!
+
+\begin{defi}[Omit a type]\index{omit a type}
+ A model $\mathcal{M}$ \emph{omits an $n$-type $\Sigma$} if for all $x_1, \cdots, x_n$, there exists $\sigma \in \Sigma$ such that
+ \[
+ \mathcal{M} \not\vDash \sigma(x_1, \cdots, x_n).
+ \]
+\end{defi}
+
+\begin{eg}
+	Consider the language of PA, and consider the type containing the formulae $\sigma_i(x)$ saying $x \not= s^i(0)$, for each $i \in \N$. We really want to omit this type!
+
+ Of course, it is not hard in this case --- the standard model of PA does.
+\end{eg}
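+On the other hand, this type certainly can be realized: add a new constant $c$ and the axioms $c \not= s^i(0)$ for all $i \in \N$. Any finite subset of these is satisfied in the standard model by interpreting $c$ as a large enough numeral, so by compactness some model of PA realizes the type. Such models are exactly the non-standard models of PA. So realizing a type is cheap, while omitting one requires more care.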
+
+Let's look for a condition on $T$ and $\Sigma$ for $T$ to have a model that omits this type.
+
+\begin{defi}[Locally realize]\index{locally realize a type}\index{realize a type!locally}
+	We say $\varphi$ \emph{locally realizes} $\Sigma$ if for every $\sigma \in \Sigma$, we have
+ \[
+ T \vdash \forall_x(\varphi(x) \to \sigma(x)).
+ \]
+\end{defi}
+Suppose there is some $\varphi \in \mathcal{L}(T)$ such that
+\[
+ T \vdash \exists_x \varphi(x)
+\]
+and $\varphi(x)$ locally realizes $\Sigma$. Then we certainly cannot omit $\Sigma$, since any model of $T$ contains a witness to $\varphi$, and such a witness realizes $\Sigma$.
+
+Now if $T \vdash \forall_x \neg \varphi(x)$ whenever $\varphi$ locally realizes $\Sigma$, then we have a chance. We say \term{$T$ locally omits $\Sigma$}. It turns out this is sufficient!
+
+\begin{thm}[Omitting type theorem]\index{omitting type theorem}\index{type}
+ Let $T$ be a first-order theory, and $\Sigma$ an $n$-type. If
+ \[
+ T \vdash \forall_x \neg \varphi(x)
+ \]
+ whenever $\varphi$ locally realizes $\Sigma$, then $T$ has a model omitting $\Sigma$.
+\end{thm}
+
+We first prove a special case for propositional logic.
+\begin{thm}
+ Let $T$ be a propositional theory, and $\Sigma \subseteq \mathcal{L}(T)$ a type (with $n = 0$). If $T$ locally omits $\Sigma$, then there is a $T$-valuation that omits $\Sigma$.
+\end{thm}
+
+\begin{proof}
+ Suppose there is no $T$-valuation omitting $\Sigma$. By the completeness theorem, we know everything in $\Sigma$ is a theorem of $T$. So $T$ can't locally omit $\Sigma$.
+\end{proof}
+That was pretty trivial. But we can do something more than that --- the \emph{extended} omitting types theorem. It says we can omit countably many types simultaneously.
+
+%\begin{thm}
+% Let $T$ be a first-order theory, and for each $i \in \N$, we have a type $\Sigma_i$. If $T$ locally emits $\Sigma_i$ for all $i$, then $T$ has a model that omits every $\Sigma_i$.
+%\end{thm}
+%
+%Again, we prove the propositional version first.
+\begin{thm}
+ Let $T$ be a propositional theory, and for each $i \in \N$, we let $\Sigma_i \subseteq \mathcal{L}(T)$ be types for each $i \in \N$. If $T$ locally omits each $\Sigma_i$, then there is a $T$-valuation omitting all $\Sigma_i$.
+\end{thm}
+
+\begin{proof}
+ We will show that whenever $T \cup \{\neg A_1, \cdots, \neg A_i\}$ is consistent, where $A_n \in \Sigma_n$ for $n \leq i$, then we can find $A_{n + 1} \in \Sigma_{n + 1}$ such that
+ \[
+ T \cup \{\neg A_1, \cdots, \neg A_n, \neg A_{n + 1}\}
+ \]
+ is consistent.
+
+ Suppose we can't do this. Then we know that
+ \[
+ T \vdash \left(\bigwedge_{1 \leq j \leq n} \neg A_j \right) \to A_{n + 1},
+ \]
 for every $A_{n + 1} \in \Sigma_{n + 1}$. But by assumption, $T$ locally omits $\Sigma_{n + 1}$. This implies
+ \[
+ T \vdash \neg \left(\bigwedge_{1 \leq j \leq n} \neg A_j \right),
+ \]
 contradicting the assumption that $T \cup \{\neg A_1, \cdots, \neg A_n\}$ is consistent.
+
 Thus by compactness, we know $T \cup \{\neg A_1, \neg A_2, \cdots\}$ is consistent, and a valuation of this gives a $T$-valuation omitting all the $\Sigma_i$.
+\end{proof}
+
+We now prove the general case.
+\begin{proof}
 Let $T$ be a first-order theory (in a countable language) locally omitting $\Sigma$. For simplicity, we suppose $\Sigma$ is a $1$-type. We want to find a model omitting $\Sigma$. Let $\{c_i: i \in \N\}$ be a countable set of new constant symbols, and let $\bra \varphi_i: i \in \N\ket$ be an enumeration of the sentences of the expanded language. We will construct an increasing sequence $\{ T_i: i \in \N\}$ of finite extensions of $T$ such that for each $m \in \N$,
+ \begin{enumerate}
+ \item[(0)] $T_{m + 1}$ is consistent.
+ \item $T_{m + 1}$ decides $\varphi_n$ for $n \leq m$, i.e.\ $T_{m + 1} \vdash \varphi_n$ or $T_{m + 1} \vdash \neg \varphi_n$.
+ \item If $\varphi_m$ is $\exists_x \psi(x)$ and $\varphi_m \in T_{m + 1}$, then $\psi(c_p) \in T_{m + 1}$, where $c_p$ is the first constant not occurring in $T_m$ or $\varphi_m$.
+ \item There is a formula $\sigma(x) \in \Sigma$ such that $\neg \sigma(c_m) \in T_{m + 1}$.
+ \end{enumerate}
+ We will construct this sequence by recursion. Given $T_m$, we construct $T_{m + 1}$ as follows: think of $T_m$ as $T \cup \{\theta_1, \cdots, \theta_r\}$, and let
+ \[
+ \Theta = \bigwedge_{j \leq r} \theta_j.
+ \]
 We let $\{c_1, \cdots, c_N\}$ be the constants that have appeared in $\Theta$, and let $\Theta(\mathbf{x})$ be the result of replacing each $c_i$ with $x_i$ in $\Theta$. Then clearly, $\Theta(\mathbf{x})$ is consistent with $T$. Since $T$ locally omits $\Sigma$, the formula $\Theta(\mathbf{x})$ cannot locally realize $\Sigma$ (else $T \vdash \forall_{\mathbf{x}}\, \neg \Theta(\mathbf{x})$, contradicting consistency). So there exists some $\sigma(x) \in \Sigma$ such that
 \[
 \Theta(\mathbf{x}) \wedge \neg \sigma(x_m)
 \]
 is consistent with $T$. We put $\neg \sigma(c_m)$ into $T_{m + 1}$, and this takes care of (iii).
+
+ If $\varphi_m$ is consistent with $T_m \cup \{\neg \sigma(c_m)\}$, then put it into $T_{m + 1}$. Otherwise, put in $\neg \varphi_m$. This takes care of (i).
+
 If $\varphi_m$ is $\exists_x \psi(x)$ and we have just put it into $T_{m + 1}$, we also put $\psi(c_p)$ into $T_{m + 1}$, where $c_p$ is the first constant not occurring in $T_m$ or $\varphi_m$. This takes care of (ii).
+
+ Now consider
+ \[
 T^* = \bigcup_{n \in \N} T_n.
 \]
 Then $T^*$ is complete by construction, and consistent by compactness.
+
 Consider an arbitrary countable model of $T^*$, and the substructure generated by the (interpretations of the) constants $c_i$. Condition (ii) ensures that this substructure is an elementary substructure, and moreover that every element of it is the interpretation of some $c_m$. So it is a model of $T^* \supseteq T$, and condition (iii) ensures that it omits $\Sigma$.
+\end{proof}
+
+\section{Computability theory}
+\subsection{Computability}
+We begin by discussing some number theory. Consider the Diophantine equation
+\[
+ x^2 + y^2 = z^2.\tag{$*$}
+\]
+We can certainly write down some solutions to this equation, e.g.
+\[
+ 3^2 + 4^2 = 5^2.
+\]
But is it possible to find \emph{all} solutions? It is a well-known result that we can. Of course, to do so, it suffices to enumerate all solutions where $x, y, z$ are coprime. It turns out every such solution of $(*)$ can be written (up to swapping $x$ and $y$) as
+\[
 x = m^2 - n^2,\quad y = 2mn, \quad z = m^2 + n^2
+\]
+for some $m, n \in \N$, and conversely, for any $m, n \in \N$, the $(x, y, z)$ given by this formula is a solution. Thus, if we want to list all solutions to $(*)$, then it suffices to plug in all possible values of $m$ and $n$. In other words, we have an \emph{algorithm} that enumerates all solutions to $(*)$.
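This algorithm is short enough to transcribe directly; the following sketch (in Python, with names of our choosing) enumerates triples from the parametrization and checks that each one really solves $(*)$.

```python
# Enumerate Pythagorean triples from the parametrization
# x = m^2 - n^2, y = 2mn, z = m^2 + n^2, with m > n >= 1.
def triples(bound):
    """Yield the (x, y, z) produced by the formula for all m, n up to bound."""
    for m in range(2, bound + 1):
        for n in range(1, m):
            x, y, z = m * m - n * n, 2 * m * n, m * m + n * n
            yield (x, y, z)

# Every generated triple solves x^2 + y^2 = z^2; (3, 4, 5) comes from m = 2, n = 1.
for (x, y, z) in triples(20):
    assert x * x + y * y == z * z
```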
+
+Given any Diophantine equation, it would be great to have some similar algorithm that enumerates all solutions to the equation. But even before that, it would be great to have some algorithms that tell us whether there is any solution at all. This was Hilbert's 10th problem --- is there an algorithm that determines whether or not a Diophantine equation has a solution?
+
If such an algorithm existed, then to prove this, we merely have to exhibit it, and prove that it works. But how could we prove that there \emph{isn't} one? To do so, we must formulate an ``algorithm'' as a \emph{mathematical object}. Alternatively, we want a formal notion of what it means for a function to be ``computable''.
+
One way to do so is to define ``computable functions'' recursively. We will be concerned with functions $\N^n \to \N^m$ for different $n$ and $m$, and the idea is to list some functions that any sensible computer should be able to compute, and then close the list under operations that really should preserve computability. We then hope that the set of functions produced this way is exactly the set of functions that can be ``computed'' by an ``algorithm''.
+
+We start with a naive attempt, and what we obtain is known as ``primitive recursive'' functions.
+\begin{defi}[Primitive recursive functions]\index{primitive recursive function}
+ The class of \emph{primitive recursive functions} $\N^n \to \N^m$ for all $n, m$ is defined inductively as follows:
+ \begin{itemize}
+ \item The constantly zero function $f(\mathbf{x}) = 0$, $f: \N^n \to \N$ is primitive recursive.
 \item The successor function $\succ: \N \to \N$ sending a natural number to its successor (i.e.\ ``plus one'') is primitive recursive.
+ \item The identity function $\cid: \N \to \N$ is primitive recursive.
+ \item The projection functions
+ \[
+ \begin{tikzcd}[cdmap]
+ \proj_j^i (\mathbf{x}) : \N^j \ar[r] & \N\\
+ (x_1, \cdots, x_j) \ar[r, maps to ]& x_i
+ \end{tikzcd}
+ \]
+ are primitive recursive.
+ \end{itemize}
+ Moreover,
+ \begin{itemize}
+ \item Let $f: \N^k \to \N^m$ and $g_1, \cdots, g_k: \N^n \to \N$ be primitive recursive. Then the function
+ \[
+ (x_1, \cdots, x_n) \mapsto f(g_1(x_1, \cdots, x_n), \cdots, g_k(x_1, \cdots, x_n)): \N^n \to \N^m
+ \]
+ is primitive recursive.
+ \end{itemize}
+ Finally, we have closure under \term{primitive recursion}
+ \begin{itemize}
+ \item If $g: \N^k \to \N^m$ and $f: \N^{m + k + 1} \to \N^m$ are primitive recursive, then so is the function $h: \N^{k + 1} \to \N^m$ defined by
+ \begin{align*}
+ h(0, \mathbf{x}) &= g(\mathbf{x})\\
+ h(\succ \; n, \mathbf{x}) &= f(h(n, \mathbf{x}), n, \mathbf{x}).
+ \end{align*}
+ \end{itemize}
+\end{defi}
+Do these give us everything we want? We first notice that we can define basic arithmetic using primitive recursion by
+\begin{align*}
+ \plus(0, n) &= n\\
+ \plus(\succ\; m, n) &= \succ (\plus(m, n))\\
 \mult(0, n) &= 0\\
 \mult(\succ\; m, n) &= \plus(n, \mult(m, n)).
+\end{align*}
+Similarly, we can define $\mathsf{exp}$ etc.
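The recursion equations transcribe directly into code. The following sketch (in Python, used here as an informal metalanguage) computes $\plus$ and $\mult$ using nothing but zero, successor and the primitive recursion schema:

```python
def succ(n):
    return n + 1

# plus and mult defined only by primitive recursion on the first argument,
# mirroring the equations in the text.
def plus(m, n):
    if m == 0:              # base case: plus(0, n) = n
        return n
    return succ(plus(m - 1, n))

def mult(m, n):
    if m == 0:              # base case: mult(0, n) = 0
        return 0
    return plus(n, mult(m - 1, n))

assert plus(3, 4) == 7
assert mult(3, 4) == 12
```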
+
+What else can we do? We might want to define the Fibonacci function
+\[
 \mathsf{Fib}(0) = \mathsf{Fib}(1) = 1, \quad \mathsf{Fib}(n + 1) = \mathsf{Fib}(n) + \mathsf{Fib}(n - 1).
+\]
+But the definition of primitive recursion only allows us to refer to $h(n, \mathbf{x})$ when computing $h(\succ \; n, \mathbf{x})$. This isn't really a problem. The trick is to not define $\mathsf{Fib}$ directly, but to define the function
+\[
+ \mathsf{Fib}' (n) =
+ \begin{pmatrix}
+ \mathsf{Fib} (n)\\
+ \mathsf{Fib} (n - 1)
+ \end{pmatrix}
+\]
+instead, using primitive recursion, and then projecting back to the first component. But this requires us to fix the number of things we are required to refer back to. What if we want to refer back to everything less than $n$?
+
+The solution to this is that we can actually encode pairs as natural numbers themselves:
+\begin{prop}
 There exist primitive recursive functions $\pair: \N^2 \to \N$ and $\unpair: \N \to \N^2$ such that
+ \[
+ \unpair (\pair(x, y)) = (x, y)
+ \]
+ for all $x, y \in \N$.
+\end{prop}
+
+\begin{proof}
+ We can define the pairing function by
+ \[
+ \pair(x, y) = \binom{x + y + 1}{2} + y.
+ \]
+ The unpairing function can be shown to be primitive recursive, but is more messy.
+\end{proof}
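For concreteness, here is a sketch of both functions (names ours). The unpairing can be computed by a bounded search for the diagonal $w = x + y$ containing the given code:

```python
def pair(x, y):
    # binom(x + y + 1, 2) + y, i.e. the Cantor-style pairing from the proof.
    return (x + y) * (x + y + 1) // 2 + y

def unpair(p):
    # Find the diagonal w = x + y: the largest w with binom(w + 1, 2) <= p.
    w = 0
    while (w + 1) * (w + 2) // 2 <= p:
        w += 1
    y = p - w * (w + 1) // 2
    return (w - y, y)

# unpair is a two-sided inverse on codes produced by pair.
for x in range(30):
    for y in range(30):
        assert unpair(pair(x, y)) == (x, y)
```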
+
+The benefit of this is that we can now encode lists as naturals. What would it mean to be a list? We will informally denote a list by square brackets, e.g.\ $[x_1, \cdots, x_n]$. The idea is that we have functions $\head, \tail, \cons$ such that
+\begin{align*}
+ \head\; [x_1, \cdots, x_n] &= x_1\\
+ \tail\; [x_1, \cdots, x_n] &= [x_2, \cdots, x_n]\\
+ \cons (x, [x_1, \cdots, x_n]) &= [x, x_1, \cdots, x_n]
+\end{align*}
If we think a bit harder, then what we are really looking for is the following property:
+\begin{cor}
 There exist $\cons : \N^2 \to \N$, $\head: \N \to \N$ and $\tail: \N \to \N$ such that
+ \[
 \cons (\head \;x, \tail \;x) = x
 \]
 for all $x \in \N$.
+\end{cor}
+
+\begin{proof}
 Take $\cons = \pair$, $\head$ to be the first component of the unpairing, and $\tail$ to be the second component.
+\end{proof}
+
+So we can encode lists as well. We can lock ourselves in a room with enough pen and paper, and try to prove that most functions $\N \to \N$ we encounter ``in daily life'' are indeed primitive recursive.
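To see the list encoding in action, here is a sketch. The corollary takes $\cons = \pair$ directly; in this sketch we shift the code by one so that $0$ is free to stand for the empty list (a convention of ours, not of the corollary):

```python
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(p):
    w = 0
    while (w + 1) * (w + 2) // 2 <= p:
        w += 1
    y = p - w * (w + 1) // 2
    return (w - y, y)

nil = 0                                  # the empty list
def cons(x, l): return pair(x, l) + 1    # +1 keeps the code 0 free for nil
def head(l):    return unpair(l - 1)[0]
def tail(l):    return unpair(l - 1)[1]

l = cons(1, cons(2, cons(3, nil)))       # the list [1, 2, 3] as one natural
assert head(l) == 1
assert head(tail(l)) == 2
assert tail(tail(tail(l))) == nil
```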
+
However, it turns out there are some contrived examples of functions that really should be computable, but are not primitive recursive.
+
+\begin{defi}[Ackermann function]
+ The \term{Ackermann function} is defined to be
+ \begin{align*}
+ A(0, n) &= n + 1\\
+ A(m, 0) &= A(m -1, 1)\\
+ A(m + 1, n + 1) &= A(m, A(m + 1, n)).
+ \end{align*}
+\end{defi}
+
+\begin{prop}
+ The Ackermann function is well-defined.
+\end{prop}
+
+\begin{proof}
+ To see this, note that computing $A(m + 1, n + 1)$ requires knowledge of the values of $A(m + 1, n)$, and $A(m, A(m + 1, n))$.
+
+ Consider $\N \times \N$ ordered lexicographically. Then computing $A(m + 1, n + 1)$ requires knowledge of the values of $A$ at pairs lying below $\bra m + 1, n + 1\ket$ in this order. Since the lexicographic order of $\N \times \N$ is well-ordered (it has order type $\omega \times \omega$), by transfinite induction, we know $A$ is well-defined.
+\end{proof}
+We can indeed use this definition to compute $A$, and it is going to take forever. But we can still do it.
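Indeed, the defining equations transcribe directly into a program (a sketch; the closed forms quoted in the comment are standard and easy to check by induction):

```python
def A(m, n):
    """The Ackermann function, computed directly from its defining equations."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

# Already for small m the growth explodes:
# A(0, n) = n + 1, A(1, n) = n + 2, A(2, n) = 2n + 3, A(3, n) = 2^(n+3) - 3.
assert A(2, 3) == 9
assert A(3, 3) == 61
```

Trying to evaluate even $A(4, 2)$ this way is hopeless, which is the sense in which it ``takes forever''.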
+
+However, this function is not primitive recursive! Intuitively, this is because the definition of $A$ does ``transfinite recursion over $\omega \times \omega$'', while the definition of primitive recursion only allows us to do ``transfinite recursion over $\omega$''.
+
That is the idea, but obviously not a proof. To prove it properly, one shows that the Ackermann function ``grows too fast''.
+\begin{defi}[Dominating function]\index{dominating function}\index{$f < g$}\index{$<$}
 Let $f, g: \N \to \N$ be functions. Then we write $f < g$ if for all sufficiently large $n$, we have $f(n) < g(n)$. We say $g$ \emph{dominates} $f$.
+\end{defi}
+
+It is a remarkable theorem that
+\begin{thm}
+ The function $n \mapsto A(n, n)$ dominates all primitive recursive functions.
+\end{thm}
We will not prove this, as it is messy, and not really the point of the course. The point of bringing up the Ackermann function is just to see that primitive recursion is not good enough!
+
+The obvious fix would be to allow some more complicated form of recursion, and hope that we will be safe. But this doesn't really do the job.
+
It turns out the problem is that we are only defining \emph{total} functions. If we try to compute any primitive recursive function, we are guaranteed to succeed in finite time. In particular, our functions are always defined everywhere. But anyone with programming experience knows very well that most programs we write are not everywhere defined. Often, they are \emph{nowhere defined}: they just don't run, or get stuck in a loop forever! But we did write some code for such a program, so surely this nowhere-defined function should be computable as well!
+
Our way of thinking about functions that are not everywhere defined is via non-termination. We imagine we have a computer program trying to compute a function $f$. If $f(42)$ is undefined, then instead of the program running for a while and then saying ``I give up'', the program keeps running forever.
+
It turns out we can ``fix'' the problem merely by introducing one operation --- inverses. This is slightly subtle. Imagine we are writing a computer program, and suppose we have a (computable) function $f: \N \to \N$. We want to compute an inverse to $f$. Say we want to find $f^{-1}(17)$. To compute this, we just compute $f(0)$, $f(1)$, $f(2)$, etc., and if $17$ is in the image, then we will eventually find something that gets sent to $17$.
+
There are two minor problems and one major problem with this:
+\begin{itemize}
+ \item $f$ might not be injective, and there are multiple values that get sent to $17$. If this happens, we just agree that we return the first natural number that gets sent to $17$, which is what is going to happen if we perform the above procedure.
+ \item $f$ might not be surjective, and we can never find something that gets sent to $17$. But this is not a problem, since we just decided that our functions don't have to be total! If $f$ is not surjective, we can just let the program keep running forever.
+\end{itemize}
But there is still a major problem. Since we decided functions need not be total, what happens if $f$ is \emph{not} total? We might have $f(23) = 17$, but the computation of $f(11)$ never halts. So if we evaluate $f(0), f(1), f(2), \cdots$ in order, we get stuck at $f(11)$, and never learn that $f(23) = 17$.
+
+We might think we can get around this problem by ``diagonalizing''. We assume our computers perform computations for discrete time steps. Then we compute $f(1)$ for one step; then $f(1)$ and $f(2)$ for two steps; then $f(1)$, $f(2)$ and $f(3)$ for three steps, etc. Then if $f(23) = 17$, and it takes $100$ steps of computation to compute this, then we will figure this out on the $100$th iteration of this process.
+
This doesn't really solve our problem, though. If $f$ is not injective, then this procedure is not guaranteed to return the minimum $x$ such that $f(x) = 17$. It just returns some \emph{arbitrary} $x$ such that $f(x) = 17$. This is bad.
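The diagonalization itself (often called \emph{dovetailing}) is easy to sketch, if we model a partial function by a step-bounded simulator. Here `run` and its toy behaviour are purely illustrative:

```python
def run(x, t):
    """A toy step-bounded simulator: returns f(x) if the computation of f(x)
    halts within t steps, and None otherwise. Our toy f halts after x steps
    with value x // 2 on even x, and diverges on odd x."""
    if x % 2 == 0 and t >= x:
        return x // 2
    return None

def dovetail_inverse(y):
    # Run f(0), ..., f(s) for s steps each, for s = 1, 2, 3, ...
    # Divergent inputs (odd x here) cannot block the search.
    s = 1
    while True:
        for x in range(s + 1):
            if run(x, s) == y:
                return x
        s += 1

# f(35) diverges, but the search still finds the even preimage 34.
assert dovetail_inverse(17) == 34
```

Note that in general this returns \emph{some} preimage, not necessarily the least one, which is exactly the defect described above.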
+
So we make a compromise. The obvious compromise is to only allow computing inverses of total functions, but since we cannot in general tell whether a function is total, we make a further compromise and only allow computing inverses of primitive recursive functions.
+
+Before we continue, we define some useful conventions:
+\begin{notation}\index{$\uparrow$}\index{$\downarrow$}
+ We write $f(x) \uparrow$ if $f(x)$ is undefined, or alternatively, after we define what this actually means, if the computation of $f(x)$ doesn't halt. We write $f(x) \downarrow$ otherwise.
+\end{notation}
+
+\begin{defi}[Partial recursive function]\index{partial recursive function}
+ The class of \emph{partial recursive functions} is given inductively by
+ \begin{itemize}
+ \item Every primitive recursive function is partial recursive.
 \item The inverse of every primitive recursive function is partial recursive, where if $f: \N^{k + n} \to \N^m$, then an inverse of $f$ is the function $f^{-1}: \N^{k + m} \to \N^n$ given by
+ \[
+ f^{-1}(\mathbf{x}; \mathbf{y}) =
+ \begin{cases}
 \min \{\mathbf{z}: f(\mathbf{x}, \mathbf{z}) = \mathbf{y}\} &\text{if such a $\mathbf{z}$ exists}\\
+ \uparrow & \text{otherwise}
+ \end{cases}.
+ \]
+ \item The set of partial recursive functions is closed under primitive recursion and composition.
+ \end{itemize}
+\end{defi}
+Of course, this allows us to compute the inverse of functions of several inputs using pairing and unpairing functions.
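As an illustration of the inversion operation, here is a sketch for the simplest case $k = 0$, $n = m = 1$: inverting a total unary function by unbounded search. The name `mu` alludes to the usual name for this operation, the $\mu$-operator:

```python
def mu(f):
    """Return the inverse of a total function f: the partial function
    y -> least z with f(z) = y, non-terminating if no such z exists."""
    def inverse(y):
        z = 0
        while f(z) != y:
            z += 1          # may loop forever if y is not in the image of f
        return z
    return inverse

# Inverting the (primitive recursive) squaring function gives integer roots.
square_root = mu(lambda z: z * z)
assert square_root(49) == 7
# square_root(2) would run forever: the inverse is a genuinely partial function.
```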
+
+It turns out this definition is ``correct''. What do we mean by this?
+
+What we have described so far is \emph{syntactic} declarations of functions. We describe how we can build up our functions. We can also define computability \emph{semantically}, by describing some rules for constructing machines that compute functions. In other words, we actually define what a ``computer'' is, and then a function is computable if it can be computed by a computer. What we want to prove is that the partial recursive functions are exactly the functions that can be computed by a machine.
+
+We will not go into details about how we can define these machines. It is possible to do so; it doesn't really matter; and the actual details are messy. However, any sensible construction of machines will satisfy the following description:
+\begin{itemize}
+ \item A machine can be described by a finite string. In particular, it is possible to number all the possible machines. Thus, each machine can be specified by a \term{G\"odel number} uniquely. We write $\{m\}$ for the machine corresponding to the numbering $m$.
 \item A machine has countably many (enumerated) possible \term{states}, and one of them is called \texttt{HALT}. A machine also has a \term{register} that can store a natural number.
+ \item A machine can take in a fixed number of inputs. Afterwards, in every discrete time step, it (potentially) changes the content of its register, and moves to a new state. However, if the current state is \texttt{HALT}, then it does nothing.
+
+ For inputs $x_1, \cdots, x_n$ into the machine $\{m\}$, we write
+ \[
+ \{m\}(x_1, \cdots, x_n) =
+ \begin{cases}
+ \text{value in the register after halting} & \text{if machine halts}\\
+ \uparrow & \text{otherwise}
+ \end{cases}
+ \]
+ Thus, we will also write $\{m\}$ for the function computed by $\{m\}$.
+ \item The above process is ``computable'' in the following precise sense --- we define \term{Kleene's $T$ function} $T(m, i, t): \N^3 \to \N$ whose value encodes (in a primitive recursive way) the states and register contents during the first $t$ steps in the evaluation of $\{m\}(i)$. Then $T$ is a \emph{primitive} recursive function.
 \item For any partial recursive function $f$, there is a machine $\{m\}$ such that $\{m\}(x) = f(x)$ for all $x \in \N$, in the sense that either both sides are undefined, or both are defined and equal.
+\end{itemize}
+
Now of course, given our requirements for what a ``machine'' should be, it is absolutely obvious that the functions computable by machines in the above sense are exactly the partial recursive functions. The only perhaps slightly non-trivial part is to show that every function computed by a machine is partial recursive. To prove this, given a machine $\{m\}$, inversion gives us the function
+\[
+ g(i) = \min \{t: T(m, i, t) \text{ has final state \texttt{HALT}}\},
+\]
+and then we can define $h(i)$ by computing $T(m, i, g(i))$ and then extracting the register content in the final state, which we assumed is a primitive recursive process. Then $h(i) = \{m\}(i)$, and so $\{m\}$ is a partial recursive function. % revisit this
+
+%
+%The key property is that they are ``finite'', and that they are deterministic. In particular, it is possible to number all machines. For example, we can use register machines, or Turing machines.
+%
+%We consider \term{Kleene's $T$-function}. We define a function $T(m, i, t)$, where $m$ is the number of a machine; $i$ is the input; and $t$ is the time interval. This function does this --- we feed $i$ into the machine $m$, and then let it run for $t$ many time steps. We then return the description of the state of the machine at all times $1, 2, \cdots, t$, encoded as an natural.
+%
+%It is obvious that this function is ``informally computable'', because the machine is deterministic. What is less obvious, but nevertheless true, is that for sufficiently sensible encodings of machines, our function $T$ is primitive recursive. To prove this properly, we have to pick an actual model of machines, and write things out properly, but we will not do this. % include some justification.
+%
+%On the other hand, every primitive recursive function can be computed by a register/Turing machine. This is, of course, proven inductively, and we are not going to prove this.
+%
+%So, any function computed by a register/Turing machine can be captured by primitive recursion and \emph{one} application of minimization. Indeed, consider the function computed by the machine numbered by $m$. This is the function which, on being given $i$, looks for the least $t$ such that the last frame of $T(m, i, t)$ says that the machine is halted. our output is then the content of register zero (or whatever it is).
+%
+%This gives us a completeness theorem! % talk about the other direction
+%
+%Now Kleene's $T$-function is primitive recursive, and is computed by some machine. Such a machine can be used to simulate all other machines! In principle, what we have described so far specifies a rule for producing a universal Turing machine, but of course, following this description will construct something really nasty. It turns out we can produce them rather more easily, but we are programmers, and we will not do that.
+
+\subsection{Decidable and semi-decidable sets}
+So we've managed to come up with a notion of computability of a function. From now on, we will assume anything that is ``intuitively computable'' is actually computable.
+
+We now want to try to talk about ``computable subsets'' of $\N$. There are two possible things we might be interested in.
+\begin{defi}[Decidable set]\index{decidable set}
 A subset $X \subseteq \N$ is \emph{decidable} if there is a total computable function $f: \N \to \N$ such that
+ \[
+ f(n) =
+ \begin{cases}
+ 1 & n \in X\\
+ 0 & n \not \in X
+ \end{cases}.
+ \]
+\end{defi}
This is a rather strong notion. We can always tell, in finite time, whether an element is in $X$. Often, a weaker notion is desired, namely semi-decidability. Informally, a subset $X \subseteq \N$ is semi-decidable if, given some $n \in \N$ with $n \in X$, we can learn this in finite time. If $n \not \in X$, we might never figure out that this is the case.
+
+There are several ways to define semi-decidable sets properly.
+\begin{defi}[Semi-decidable set]\index{semi-decidable set}
+ We say a subset $X \subseteq \N$ is \emph{semi-decidable} if it satisfies one of the following equivalent definitions:
+ \begin{enumerate}
+ \item $X$ is the image of some partial computable function $f: \N \to \N$.
 \item $X$ is empty or the image of some total computable function $f: \N \to \N$.
+ \item There is some partial computable function $f: \N \to \N$ such that
+ \[
+ X = \{n \in \N: f(n) \downarrow\}
+ \]
+ \item The function $\chi_X: \N \to \{0\}$ given by
+ \[
 \chi_X(n) =
+ \begin{cases}
+ 0 & n \in X\\
+ \uparrow & n\not\in X
+ \end{cases}
+ \]
+ is computable.
+ \end{enumerate}
+\end{defi}
+%The last two obviously capture what we are trying to do, but the first two seem a bit less clear. How are these related?
+%
+%Suppose we have some $x \in \N$. How do we know if it is in the image of $f: \N \to \N$? To do so, we enumerate all pairs $\bra n, m\ket \in \N \times \N$. For each $(n, m)$, we run $f(n)$ for $m$ steps. If it halts and gives the answer $x$, then we know $x$ is in the image. If it never halts, then we know $x$ is not in the image.
+
+There are obvious generalizations of these definitions to subsets of $\N^k$ for any $k$. It is an easy, and also important, exercise to prove that these are equivalent.
+
+We can prove another equivalent characterization of semi-decidable sets.
+\begin{prop}
+ A set $X \subseteq \N^k$ is semi-decidable iff it is a projection of a decidable subset of $\N^{k + 1}$.
+\end{prop}
+
+\begin{proof}
 $(\Leftarrow)$ Let $Y \subseteq \N^{k + 1}$ be such that $\proj_k Y = X$, where $\proj_k$ here denotes the projection to the first $k$ coordinates. If $Y$ is empty, then so is $X$, and we are done. Otherwise, since $Y$ is decidable, hence semi-decidable, there is a total computable function $f: \N \to \N^{k + 1}$ such that $\im f = Y$. Then $\im (\proj_k \circ f) = X$. So $X$ is the image of a computable function.
+
+ $(\Rightarrow)$ Suppose $X$ is semi-decidable. So $X = \dom(\{m\})$ for some $m \in \N$, i.e.\ $X = \{n: \{m\}(n) \downarrow\}$. Then we can pick
+ \[
+ Y = \{(\mathbf{x}, t) \mid \{m\}(\mathbf{x})\text{ halts in $t$ steps}\}.\qedhere
+ \]
+\end{proof}
+
+A crucial example of a semi-decidable set is the \emph{halting set}.
+\begin{defi}[Halting set]\index{halting set}
+ The \emph{halting set} is
+ \[
+ \{\bra p, i\ket : \{p\}(i) \downarrow \} \subseteq \N^2.
+ \]
+ Some people prefer to define it as
+ \[
+ \{m: \{m\}(m) \downarrow\} \subseteq \N
+ \]
+ instead.
+\end{defi}
+These two definitions are ``the same'' in the sense that if we are given some magic ``oracle'' that determines membership of one of these sets, then we can use it to build a program that determines the other. (Exercise)
+
+\begin{thm}[Turing]\index{Halting problem}
+ The halting set is semi-decidable but not decidable.
+\end{thm}
+
+The proof is just some version of Liar's paradox.
+\begin{proof}
+ Suppose not, and $\mathcal{M}$ is a machine with two inputs such that for all $p, i$, we have
+ \[
+ \mathcal{M}(p, i) =
+ \begin{cases}
+ \mathrm{yes} & \{p\}(i) \downarrow\\
+ \mathrm{no} & \{p\}(i) \uparrow
+ \end{cases}.
+ \]
+ If there were such a machine, then we could do some ``wrapping'' --- if the output is ``yes'', we intercept this, and pretend we are still running. If the output is ``no'', then we halt. Call this $\mathcal{M}'$. From this, we construct $\mathcal{M}''(n) = \mathcal{M}'(n, n)$. Suppose this machine is coded by $m$.
+
 Now does $\{m\}(m)$ halt? Suppose it does. Then $\mathcal{M}(m, m) = \mathrm{yes}$, and hence $\mathcal{M}'(m, m)$ does not halt. But $\{m\}(m)$ is exactly $\mathcal{M}'(m, m)$, so $\{m\}(m)$ doesn't halt, which is a contradiction.

 Conversely, if $\{m\}(m)$ does not halt, then $\mathcal{M}(m, m) = \mathrm{no}$, so $\mathcal{M}'(m, m)$ halts. Thus, $\{m\}(m)$ halts. This is again a contradiction!
+
+ So $\mathcal{M}$ cannot exist.
+\end{proof}
+So the problem of deciding whether a program halts is not decidable. In fact, we can prove something much stronger --- \emph{any} non-trivial question you can ask about the behaviour of the function represented by a program is undecidable. To prove this, we need to go through a few equally remarkable theorems.
+
\begin{thm}[$smn$ theorem]\index{$smn$ theorem}
+ There is a total computable function $s$ of two variables such that for all $e$, we have
+ \[
+ \{e\}(b, a) = \{s(e, b)\}(a).
+ \]
+ Similarly, we can find such an $s$ for any tuples $\mathbf{b}$ and $\mathbf{a}$.
+\end{thm}
This is called the $smn$ theorem because the function is called $s$, and $m$ and $n$ usually refer to the lengths of the tuples $\mathbf{b}$ and $\mathbf{a}$.
+
+Computer scientists call this process ``currying''.
+
+\begin{proof}
+ We can certainly write a program that does this.
+\end{proof}
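In programming terms, the $smn$ theorem is just partial application: from a program for a two-argument function, we can compute a program for the one-argument function obtained by fixing the first argument. A sketch in Python, where the function `e` stands in for the machine $\{e\}$:

```python
from functools import partial

def e(b, a):
    """A stand-in for the two-argument machine {e}."""
    return b * 10 + a

# partial plays the role of s: it computes the "program" {s(e, 3)},
# which on input a behaves like {e}(3, a).
s_e_3 = partial(e, 3)
assert s_e_3(4) == e(3, 4) == 34
```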
+
+\begin{thm}[Fixed point theorem]
+ Let $h: \N \to \N$ be total computable. Then there is an $n \in \N$ such that $\{n\} = \{h(n)\}$ (as functions).
+\end{thm}
+
+\begin{proof}
+ Consider the map
+ \[
+ \bra e, x\ket \mapsto \{h(s(e, e))\}(x).
+ \]
+ This is clearly computable, and is computed by a machine numbered $a$, say. We then pick $n = s(a, a)$. Then we have
+ \[
+ \{n\}(x) = \{s(a, a)\}(x) = \{a\}(a, x) = \{h(s(a, a))\}(x) = \{h(n)\}(x).\qedhere
+ \]
+\end{proof}
+This is a piece of black magic. When we do lambda calculus later, we will see that this is an example of the $Y$ combinator, which is what is used to produce fixed points in general (and is also black magic).
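For the curious, here is the $Z$ combinator, the variant of $Y$ that works in strict languages such as Python, used to define a recursive function without any self-reference (a sketch; the names are ours):

```python
# Z(F) is a fixed point of the functional F: Z(F) and F(Z(F)) compute
# the same function. The inner eta-expansion (lambda v: ...) delays
# evaluation, which is what makes this work under strict evaluation.
Z = lambda F: (lambda x: F(lambda v: x(x)(v)))(lambda x: F(lambda v: x(x)(v)))

# Factorial written with no recursive call to itself; Z ties the knot.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```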
+
+\begin{thm}[Rice's theorem]\index{Rice's theorem}
+ Let $A$ be a non-empty proper subset of the set of all computable functions $\N \to \N$. Then $\{n: \{n\} \in A\}$ is not decidable.
+\end{thm}
+This says it is impossible to decide what a program does based on what its code is.
+
+\begin{proof}
 Suppose not, and fix such an $A$. We let $\chi$ be the (total) characteristic function of $\{n: \{n\} \in A\}$. By assumption, $\chi$ is computable. We find naturals $a, b$ such that $\{a\} \in A$ and $\{b\} \not\in A$. Then by the hypothesis, the following function is computable:
+ \[
+ g(n) =
+ \begin{cases}
+ b & \{n\} \in A\\
+ a & \text{otherwise}
+ \end{cases}.
+ \]
 The key point is that this is the wrong way round: $g(n)$ is the code of a function in $A$ exactly when $\{n\}$ is \emph{not} in $A$. Now by the fixed point theorem, there is some $n \in \N$ such that
+ \[
+ \{n\} = \{g(n)\}.
+ \]
 We now ask ourselves --- do we have $\{n\} \in A$? If so, then since $\{g(n)\} = \{n\}$, we also have $\{g(n)\} \in A$. But these separately imply $g(n) = b$ and $g(g(n)) = b$, i.e.\ $g(b) = b$. This is not possible, since $\{b\} \not\in A$ forces $g(b) = a$, and $a \neq b$.

 Similarly, if $\{n\} \not \in A$, then $\{g(n)\} \not \in A$. These again separately imply $g(n) = a$ and $g(g(n)) = a$, i.e.\ $g(a) = a$. This is again a contradiction, since $\{a\} \in A$ forces $g(a) = b$.
+\end{proof}
+
+\begin{cor}
+ It is impossible to grade programming homework.
+\end{cor}
+
+\subsection{Computability elsewhere}
In all of mathematics, whenever the objects we talk about are countable, we can try to sneak in computability theory. Examples include Ramsey theory and logic.
+
+\subsection{Logic}
When we do logic, our theorems, proofs, rules etc.\ are all encoded as finite strings, which can certainly be encoded as numbers. Even though our theories often have infinitely many axioms, the set of axioms is usually still decidable. It follows that the set of theorems of our theory is semi-decidable, as to know whether something is a theorem, we can run through all possible strings and see if any of them encodes a proof of the statement. This also works if our set of axioms is merely semi-decidable, but we would need some diagonalization argument.
+
+We start with a rather cute observation.
+\begin{thm}[Craig's theorem]
 Every first-order theory with a semi-decidable set of axioms is equivalent to a theory with a decidable set of axioms.
+\end{thm}
+
+\begin{proof}
+ By assumption, there is a total computable function $f$ such that the axioms of the theory are exactly $\{f(n): n \in \N\}$. We write the $n$th axiom as $\varphi_n$.
+
 The idea is to give an alternative axiom that says the same thing as $\varphi_n$, but such that from the form of the new axiom itself, we can deduce the value of $n$, and so we can compute $f(n)$ to see if they agree. There are many possible ways to do this. For example, we can add $n$ many useless brackets around $\varphi_n$ and take the result as the new axiom.
+
+ Alternatively, without silly bracketing, we can take the $n$th axiom to be
+ \[
+ \phi_n = \left(\bigwedge_{i < n} \varphi_i\right) \to \varphi_n.
+ \]
 Then given any statement $\psi$, we can just keep computing $f(0), f(1), f(2), \cdots$; if $\psi = \phi_n$ for some $n$, we will figure this out in finite time. Otherwise, we will, in finite time, see that $\psi$ is too short to be of the form $\left(\bigwedge_{i < n} \varphi_i\right) \to \varphi_n$ for any remaining $n$, and thus deduce it is not an axiom.
+\end{proof}
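As a toy illustration of why the new axiom set is decidable, we can model formulas crudely as strings and the axiom $\phi_n$ as a pair (hypotheses, conclusion); the encoding and all names here are ours, purely for illustration:

```python
def f(n):
    """A stand-in for the total computable enumeration of the old axioms."""
    return "axiom_%d" % n

def new_axiom(n):
    # Models (phi_0 & ... & phi_(n-1)) -> phi_n as (list of hypotheses, conclusion).
    return ([f(i) for i in range(n)], f(n))

def is_new_axiom(psi):
    # The shape of psi reveals the only candidate n (the number of hypotheses),
    # so a single comparison against new_axiom(n) decides membership.
    hypotheses, conclusion = psi
    n = len(hypotheses)
    return psi == new_axiom(n)

assert is_new_axiom(new_axiom(5))
assert not is_new_axiom((["axiom_0"], "axiom_2"))
```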
+
We can also do some other fun things. Suppose we take our favorite copy of $\N$ with its usual arithmetic structure. We let $T$ be the theory of all true first-order statements in $\N$. We call this the theory of \term{true arithmetic}. We add a constant $c$ to this theory, and add axioms to say
+\[
+ 0 < c,\quad 1 < c, \quad 2 < c, \quad \cdots.
+\]
Then by compactness, this has a model, and by L\"owenheim--Skolem, it has a countable model, which is a \term{non-standard model} of $\N$. We wlog assume the carrier set of this model is in fact $\N$. How bad can this be?
+
+\begin{thm}[Tennenbaum's theorem]\index{Tennenbaum's theorem}
 For any countable non-standard model of true arithmetic, the graphs of $+$ and $\times$ cannot be decidable.
+\end{thm}
+
+%\begin{proof}
+%
+%\end{proof}
+%
+%It turns out every model of arithmetic gives us a model of second-order arithmetic. The idea is to code up subsets of the ``standard copy of $\N$'' using other (bigger) numbers.
+%
+%We note that
+%\[
+% nEm\text{ if the $n$th bit of $m$ is $1$}.
+%\]
+%In fact, this gives $\bra \N, E\ket \cong \bra V_\omega\ket$. This is not hard to see --- this relation is clearly extensional, and given any collection of naturals, we can construct a number that ``contains'' them.
+%
+%We claim that any decidable set of standard naturals can be encoded by a (possibly nonstandard) natural, in the sense that if $X \subseteq \{\text{standard part}\}$, then there exists a possibly non-standard $n$ such that $x = \{\text{standard part}\} \cap \{m: m E n\}$. Note that this $n$ is not unique.
+%
+%We write $x \oplus y$ for the logical or of $x, y \in \N$ thought of as bit-strings. This is arithmetically definable. Let $P$ be a decidable predicate. We want to find a natural number that encodes the extension of $P$. We declare
+%\begin{align*}
+% f(0) &= 0\\
+% f(n + 1) &= \cif P(n + 1) \cthen f(n) \oplus 2^{n + 1} \celse f(n).
+%\end{align*}
+%Since $P$ is decidable, this is a computable function. Then by induction, we know that if $n < m$, then % need to take care of n =0
+%\[
+% P(n) \Leftrightarrow n E f(m).
+%\]
+%Thus, we simply pick an $m$ greater than all non-standard naturals, and then the standard part of $\mathrm{graph}(P)$ is
+%\[
+% \{n: n E f(m)\} \cap \text{standard}.
+%\]
+%Can we encode any undecidable subset? Consider the function
+%\begin{align*}
+% f(0) &= 0\\
+% f(n + 1) &= \clet n + 1 = \bra p, i, t\ket \cin \cif \{p\} \cthen f(n) \oplus 2^{\bra p, i \ket} \cthen f(n).
+%\end{align*}
+%Then values of $f$ are every-improving approximations to the halting set. The plan is that if $m$ is nonstandard, then $f(m)$ all questions ``does $p(i)$ halt'' for standard $p$ and $i$.
+%
+%Unfortunately, this doesn't really work! Our function does tell us whether $p(i)$ halts, but it is possible that it halts in a non-standard number of steps!
+%
+%For example, consider the theory $T = PA + \neg \con(PA)$. By G\"odel's incompleteness theorem, this is consistent! Certainly, $T \vdash \neg \con(T)$. Let $P$ be the programme that, on given an input $i$, examines, in order, all numbers $ > i$ and halts when it finds a proof of $\neg\con(T)$. Then $P$ halts on all standard input, but never holds in standard time, because there is no proof of $\neg \con (PA)$ (if $PA$ is indeed consistent!)!
+%
+%What we need is the extra assumption that our model is a model of true arithmetic. If this is the case, then this function $f$ really does encode the halting set of standard computations. So there exists some $m$ such that $\{n: n E m\} \cap \mathrm{standard} = \{\bra p, i\ket: \{p\}(i) \downarrow, p, i\text{standard}\}$.
+%
+%Now if $+$ and $\times$ are computable, then we can compute this $f$! And thus we can solve the halting problem, which is nonsense.
+ % insert an actual proof
+
+Interestingly, computability theory offers us an easy proof of G\"odel's incompleteness theorem. We first note the following result:
+\begin{thm}
+ The set of G\"odel numbers of machines that compute total functions is not semi-decidable.
+\end{thm}
+
+\begin{proof}
+ Suppose it were. A non-empty semi-decidable set is the image of a total computable function, so there is a total computable $f: \N \to \N$ such that each $\{f(n)\}$ is total, and every computable total function is $\{f(n)\}$ for some $n$. Now consider the function
+ \[
+ g(n) = \{f(n)\} (n) + 1.
+ \]
+ This is a total computable function! But it is not $\{f(n)\}$ for any $n$, because it differs from $\{f(n)\}$ at $n$.
+\end{proof}
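+The diagonal argument above is concrete enough to run. Here is a small Python sketch, in which a finite list of functions stands in for the (purely hypothetical) enumeration $n \mapsto \{f(n)\}$:

```python
# A finite list of total functions stands in for the enumeration n |-> {f(n)}.
enumeration = [
    lambda n: 0,        # the constant zero function
    lambda n: n,        # the identity
    lambda n: n * n,    # squaring
]

# The diagonal function g(n) = {f(n)}(n) + 1.
def g(n):
    return enumeration[n](n) + 1

# g is total, yet differs from every enumerated function at its own index,
# so it cannot appear anywhere in the enumeration.
for n, fn in enumerate(enumeration):
    assert g(n) != fn(n)
```

+Of course, no finite list can stand in for a genuine enumeration of all total computable functions --- the point is that the same diagonal recipe defeats \emph{any} proposed enumeration.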
+Note that this proof is very explicit and constructive. If we are presented with a function $\{e\}: \N \to \N$, then there is an algorithm that returns an $n$ such that $\{n\}$ is total and $n$ is not in the image of $\{e\}$.
+
+\begin{defi}[Productive set]\index{productive set}
+ A set $X \subseteq \N$ is \emph{productive} if there is a total computable function $f: \N \to \N$ such that for all $n \in \N$, we have $f(n) \in X \setminus (\im \{n\})$.
+\end{defi}
+Then the theorem says the set of all machines that compute total functions is productive.
+
+In some sense, productive sets are ``decidably non-semi-decidable''.
+
+If we work in, say, ZFC, then there exists a fixed, canonical copy of $\N$. We note the following definition:
+\begin{defi}[Sound theory]\index{sound theory}\index{theory!sound}
+ A \emph{sound} theory of arithmetic is one all of whose axioms are true in $\N$.
+\end{defi}
+This is not to be confused with consistency, which does not refer to any model. For example, the theory of Peano arithmetic plus the statement that PA is inconsistent is actually a consistent theory, but is unsound (if PA is consistent).
+
+\begin{thm}[G\"odel's incompleteness theorem]\index{G\"odel's incompleteness theorem}
+ Let $T$ be a recursively axiomatized theory of arithmetic, i.e.\ the set of axioms is semi-decidable (hence, by Craig's trick, may be taken to be decidable). Suppose $T$ is sound, and sufficiently strong to talk about computability of functions (e.g.\ Peano arithmetic is). Then there is some proposition true in $\N$ that cannot be proven in $T$.
+\end{thm}
+
+\begin{proof}
+ We know the set of theorems of $T$ is semi-decidable. So $\{n: T \vdash \{n\}\text{ is a total function}\}$ is semi-decidable. Since $T$ is sound, every member of this set really does code a total function. But the set of codes of total functions is not semi-decidable. So there must be some total function such that $T$ does not prove it is total, and the (true) statement that this function is total is unprovable in $T$. In other words, $T$ is not complete!
+\end{proof}
+
+ % maybe missing something, arithmetic truths are productive?
+
+% Let $A$ be a non-empty semi-decidable set, so it is $\im g$ for some total computable $g: \N \to \N$. Recall that we can construct a ``diagonalization'' of $g$ to enumerate elements in the image. Let $f$ be the G\"odel number of this function.
+
+% Let $a \in A$ be any member of $A$, and deifne $h: \N^2 \to \N$ by
+%\[
+% h(n, k) = \cif T(f, n, k)\text{ has final state \textsf{HALT}} \cthen \text{emit register contents of $T(f, n, k)$} \celse a.
+%\]
+%Evidently $h$ is primitive recursive, and $A = h(\N \times \N)$.
+
+\subsubsection*{Ramsey theory}
+In Ramsey theory, we have the following famous result:
+\begin{thm}[Ramsey's theorem]\index{Ramsey's theorem}
+ We write $\N^{(k)}$ for the set of all subsets of $\N$ of size $k$. Suppose we partition $\N^{(k)}$ into $m$ many distinct pieces. Then there exists some infinite $X \subseteq \N$ such that $X$ is monochromatic, i.e.\ $X^{(k)} \subseteq \N^{(k)}$ lies entirely within one piece.
+\end{thm}
+The natural thing for us to do is to insert the word ``decidable'' everywhere in the theorem, and see if it is true. Note that a partition of $\N^{(k)}$ into $m$ pieces is equivalently a function $\N^{(k)} \to \{0, 1, \cdots, m - 1\}$, and it is not hard to encode $k$-subsets of $\N$ as natural numbers, e.g.\ by encoding them as increasing $k$-tuples. Thus, it makes sense for us to ask: if the partition is decidable, are we guaranteed to have a decidable infinite monochromatic set?
+
+One might expect not, because the proof of Ramsey's theorem is rather non-constructive. Indeed, we have the following theorem:
+\begin{thm}[Jockusch]
+ There exists a decidable partition of $\N^{(3)}$ into two pieces with no infinite decidable monochromatic set.
+\end{thm}
+
+\begin{proof}
+ We define a partition $\rho: \N^{(3)} \to \{0, 1\}$ as follows. Given $x < y < z$, we define $\rho(\{x, y, z\})$ to be $0$ if for all $p, d < x$, we have ``$\{p\}(d)$ halts in $y$ steps iff $\{p\}(d)$ halts in $z$ steps'', and $1$ otherwise. This is a rather weird colouring, but let's see what it gives us.
+
+ We first note that there can be no \emph{infinite} monochromatic set coloured $1$: given any $x$, for all sufficiently large $y$ and $z$, we have ``$\{p\}(d)$ halts in $y$ steps iff $\{p\}(d)$ halts in $z$ steps'' for every $p, d < x$, since the halting status of each of these finitely many computations eventually stabilizes.
+
+ We claim that an infinite decidable monochromatic set $A$ for $0$ would solve the halting problem. Indeed, suppose we want to know if $\{p\}$ halts on $d$. We pick some $x \in A$ with $p, d < x$, and then some $y \in A$ with $y > x$. If $\{p\}(d)$ halts at all, say in $N$ steps, pick $z \in A$ with $z > \max(y, N)$. Then $\rho(\{x, y, z\}) = 0$ tells us $\{p\}(d)$ halts in $y$ steps iff it halts in $z$ steps, and the latter is true. So if $\{p\}(d)$ halts at all, it must halt within $y$ steps, and we can just check this.
+
+ Thus, there cannot be an infinite decidable monochromatic set for this partition.
+\end{proof}
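+We can play with this colouring concretely. The following Python sketch uses a made-up table of halting times as a stand-in for a real machine model --- a pure assumption for illustration, since no such decidable table exists:

```python
# A made-up table of halting times: halt_time[(p, d)] is the number of
# steps {p}(d) takes, or None if it never halts. This table is a pure
# assumption for illustration; no such decidable table exists in reality.
halt_time = {(0, 0): 3, (0, 1): None, (1, 0): 7, (1, 1): 2}

def halts_within(p, d, t):
    steps = halt_time.get((p, d))
    return steps is not None and steps <= t

def rho(x, y, z):
    """The colour of the triple {x, y, z}, where x < y < z."""
    assert x < y < z
    for p in range(x):
        for d in range(x):
            if halts_within(p, d, y) != halts_within(p, d, z):
                return 1
    return 0

# Once y and z both exceed every relevant halting time, the colour is 0;
# this is why no infinite set can be monochromatic in colour 1.
assert rho(2, 10, 20) == 0
# A colour-1 triple: {1}(0) halts in 7 steps, so y = 5 and z = 8 disagree.
assert rho(2, 5, 8) == 1
```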
+
+But what about partitioning $\N^{(2)}$? The same trick doesn't work, and the situation is quite complex. We will not talk about this.
+
+\subsection{Computability by \tph{$\lambda$}{lambda}{λ}-calculus} % this section needs a better rewrite
+We first introduced $\lambda$-calculus in the setting of decorating natural deduction proofs. Remarkably, it turns out $\lambda$-calculus can be used to encode any program at all! For the sake of concreteness, we will properly define $\lambda$ calculus.
+
+\begin{defi}[$\lambda$ terms]\index{$\lambda$ terms}
+ The set $\Lambda$ of $\lambda$ terms is defined recursively as follows:
+ \begin{itemize}
+ \item If $x$ is any variable, then $x \in \Lambda$.
+ \item If $x$ is a variable and $g \in \Lambda$, then $\lambda x.\; g \in \Lambda$.
+ \item If $f, g \in \Lambda$, then $(f\; g) \in \Lambda$.
+ \end{itemize}
+\end{defi}
+As always, we use variables sensibly, so that we don't write things like $x\;(\lambda x.\; f x)$. We also write $f(x)$ to denote a $\lambda$ term that has a free variable $x$, and then $f(y)$ would mean taking $f$ and replacing all occurrences of $x$ with $y$.
+
+We will often be lazy and write $\lambda xy. f$ instead of $\lambda x.\; \lambda y.\; f$.
+
+We can define free variables and bound variables in the obvious way. If we want to be really formal, we can define them as follows:
+\begin{defi}[Free and bound variables]\index{free variables}\index{bound variables}\leavevmode
+ \begin{itemize}
+ \item In the $\lambda$ term $x$, we say $x$ is a free variable.
+ \item In the $\lambda$ term $\lambda x.\; g$, the free variables are all free variables of $g$ except $x$.
+ \item In the $\lambda$ term $(f\; g)$, the free variables are the union of the free variables of $f$ and those of $g$.
+ \end{itemize}
+ The variables that are not free are said to be bound.
+\end{defi}
+The definitions become more subtle when we have, say, a variable that is free in $f$ and bound in $g$, but this is not a course on pedantry. We will just not worry about these, and keep ourselves disciplined and not write things like that. We will gloss over subtleties like this in our upcoming definitions as well.
+
+As mentioned previously, we want to think of $\lambda x.\; f(x)$ as some sort of function. So we want things like
+\[
+ (\lambda x.\; f(x))\; y = f(y)
+\]
+to be true. Of course, they are not actually equal as expressions, but they are more-or-less equivalent.
+\begin{defi}[$\alpha$-equivalence]\index{$\alpha$-equivalence}
+ We say two $\lambda$ terms are $\alpha$-equivalent if they are the same up to renaming of bound variables.
+\end{defi}
+
+\begin{eg}
+ $\lambda x.\; x$ and $\lambda y.\; y$ are $\alpha$-equivalent.
+\end{eg}
+For any sane purposes at all in $\lambda$ calculus, $\alpha$-equivalent terms are really considered equal.
+
+\begin{defi}[$\beta$-reduction]\index{$\beta$-reduction}
+ If $f = (\lambda x.\; y(x)) z$ and $g = y(z)$, then we say $g$ is obtained from $f$ via $\beta$-reduction.
+\end{defi}
+This captures the idea that $\lambda$ expressions are really functions.
+
+\begin{defi}[$\eta$-reduction]\index{$\eta$-reduction}
+ \emph{$\eta$-reduction} is the conversion from $\lambda x.\; (f\; x)$ to $f$, whenever $x$ is not free in $f$.
+\end{defi}
+This captures ``function extensionality'', i.e.\ the idea that functions are the same iff they give the same results for all inputs.
+
+\begin{defi}[$\beta$-normal form]\index{$\beta$-normal form}
+ We say a term is in $\beta$-normal form if it has no possible $\beta$-reduction.
+\end{defi}
+Our model of computation with $\lambda$ calculus is as follows. A program is a $\lambda$ expression $f$, and an input is another $\lambda$ expression $x$. To run $f$ on $x$, we take $f\; x$, and then try to $\beta$-reduce it as far as possible. If it reaches a $\beta$-normal form, then we are done, and say that is the output. If we never get to a $\beta$-normal form, then we say we don't terminate.
+
+To do so, we need a \term{reduction strategy}. It is possible that a $\lambda$ expression can be $\beta$-reduced in many ways, e.g.\ if we have
+\[
+ (\lambda x.\; (\lambda y.\; f\;y\; x)\;x)\;z,
+\]
+then we can reduce this to either $(\lambda y.\; f\; y\; z)\; z$ or $(\lambda x.\; f\; x\; x)\; z$. These are not $\alpha$-equivalent, but of course performing one more $\beta$-reduction will yield the same terms. A reduction strategy is a prescription of which $\beta$-reduction we should perform when there is more than one choice.
+
+There are some things that might worry us. The first is that if we reduce in different ways, then we might end up in different normal forms. Even worse, it could be that one reduction strategy would not terminate, while another might halt in $3$ steps!
+
+For this to work, we need the following theorem:
+\begin{thm}
+ Every $\lambda$ expression can be reduced to at most one $\beta$-normal form. Moreover, there exists a reduction strategy such that whenever a $\lambda$ expression can be reduced to a $\beta$-normal form, then it will be reduced to it via this reduction strategy.
+
+ This magic reduction strategy is just to always perform $\beta$-reduction on the leftmost redex possible.
+\end{thm}
+
+\begin{notation}[$\rightsquigarrow$]
+ We write $f \rightsquigarrow g$ if $f$ $\beta$-reduces to $g$.
+\end{notation}
+There is another thing we should note, which is \term{currying}. If we want to implement a function with, say, $2$ inputs, say a function $f: \N \times \N \to \N$, we can instead implement it as a function $\tilde{f}:\N \to (\N \to \N)$, where
+\[
+ \tilde{f}(x)(y) = f(x, y).
+\]
+This allows us to always work with functions of one variable that can potentially return functions. We will usually just write the type as $\N \to \N \to \N$.
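+As a quick illustration, currying in Python looks like this (the names \texttt{add} and \texttt{curried\_add} are ours):

```python
# Currying: a function of two arguments becomes a function that takes the
# first argument and returns a function awaiting the second.
def add(x, y):
    return x + y

def curried_add(x):
    return lambda y: add(x, y)   # the curried form N -> (N -> N)

assert add(2, 3) == 5
assert curried_add(2)(3) == 5
```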
+
+Note that in this setting, officially, our $\lambda$ terms do not have ``types''. However, it is often convenient to imagine they have types to make things less confusing, and we will often do so.
+
+With the formalities out of the way, we should start doing things. We claimed that we can do programming with $\lambda$ terms. The first thing we might want to try is to encode numbers. How can we do so?
+
+Recalling that types don't exist in $\lambda$ calculus, consider a ``generic type'' $A$. We will imagine that natural numbers take in ``functions'' $A \to A$ and returns a new function $A \to A$. We write
+\[
+ \mathbf{N} = (A \to A) \to (A \to A)
+\]
+for the ``type'' of natural numbers. The idea is that $n$ represents the $\lambda$ term that sends $f$ to the $n$th iteration $f^n$.
+
+Formally, we define them as follows:
+\begin{defi}[Church numerals]\index{Church numerals}\index{$\underline{n}$}
+ We define
+ \begin{align*}
+ \underline{0} &= K(\id) = \lambda f.\; \lambda x.\; x : \mathbf{N}\\
+ \succ &= \lambda n.\; \lambda f.\; \lambda x.\; f\; ((n\; f)\; x) : \mathbf{N} \to \mathbf{N}
+ \end{align*}
+ We write $\underline{n} = \succ^n(\underline{0})$.
+
+ We can define arithmetic as follows:
+ \begin{align*}
+ \plus &= \lambda n.\; \lambda m.\; \lambda f.\; \lambda x.\; (n\;f)\;((m\;f)\;x): \mathbf{N} \to \mathbf{N} \to \mathbf{N}\\
+ \mult &= \lambda n.\; \lambda m.\; \lambda f.\; \lambda x.\; (n\;(m\; f))\;x: \mathbf{N} \to \mathbf{N} \to \mathbf{N}\\
+ \lexp &= \lambda n.\; \lambda m.\; \lambda f.\; \lambda x.\; ((m\;n)\;f)\;x: \mathbf{N} \to \mathbf{N} \to \mathbf{N}
+ \end{align*}
+ Here $\lexp\; n\; m = n^m$.
+\end{defi}
+Of course, our ``typing'' doesn't really make sense, but helps us figure out what we are trying to say.
+
+Certain people might prefer to write $\mult$ instead as
+\[
+ \mult = \lambda n.\; \lambda m.\;\lambda f.\; n\; (m\;f).
+\]
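+Church numerals and their arithmetic transcribe directly into Python lambdas. This is only a sketch of one common encoding (conventions vary); here \texttt{exp\_} computes $n^m$ by applying $m$ to $n$:

```python
# Church numerals as Python lambdas: the numeral n sends f to f^n.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

plus = lambda n: lambda m: lambda f: lambda x: n(f)(m(f)(x))
mult = lambda n: lambda m: lambda f: lambda x: n(m(f))(x)
exp_ = lambda n: lambda m: lambda f: lambda x: m(n)(f)(x)   # computes n^m

def church(k):
    """The Church numeral for the ordinary natural number k."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def to_int(n):
    """Decode a Church numeral by iterating +1 starting from 0."""
    return n(lambda k: k + 1)(0)

assert to_int(plus(church(2))(church(3))) == 5
assert to_int(mult(church(2))(church(3))) == 6
assert to_int(exp_(church(2))(church(3))) == 8
```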
+We can define a lot of other things. It is possible to give motivation for these, and if one thinks hard enough, these are ``obviously'' the right definitions to make. However, we are not really concerned about this, because what we really want to talk about is recursion. We can define
+\begin{align*} % explain this
+ \ctrue &= \lambda xy.\; x\\
+ \cfalse &= \lambda xy.\; y\\
+ \cif b \cthen x \celse y &= \lambda b x y.\; b\;x\;y\\
+ \ciszero &= \lambda n.\; n \;(\lambda x.\; \cfalse)\; \ctrue\\
+ \pair &= \lambda x y f.\; f\; x\; y\\
+ \cfst &= \lambda p.\; p\;\ctrue\\
+ \csnd &= \lambda p.\; p\;\cfalse\\
+ \cnil &= \lambda x.\; \ctrue\\
+ \cnull &= \lambda p.\; p \;( \lambda xy.\; \cfalse)
+\end{align*}
+We can check that
+\begin{align*}
+ \cfst\; (\pair\; x\; y) &= x\\
+ \csnd\; (\pair\; x\; y) &= y
+\end{align*}
+Using $\pair$, $\cfst$ and $\csnd$, we can implement lists as nested pairs, with functions $\head, \tail, \cons$ satisfying
+\[
+ \cons\; (\head \;x)\; (\tail \;x) = x.
+\]
+We by convention always end our lists with a $\cnil$, and $\cnull$ is a test of whether something is $\cnil$.
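+Again, these combinators transcribe directly into Python lambdas, and we can check the stated equations (decoding a Church boolean by applying it to two Python values is our own convenience):

```python
# Church booleans, pairs and list cells as Python lambdas.
true  = lambda x: lambda y: x
false = lambda x: lambda y: y

pair = lambda x: lambda y: lambda f: f(x)(y)
fst  = lambda p: p(true)
snd  = lambda p: p(false)

nil  = lambda x: true
null = lambda p: p(lambda x: lambda y: false)

cons = pair              # a list cell is a pair (head, tail)
head, tail = fst, snd

assert fst(pair(1)(2)) == 1 and snd(pair(1)(2)) == 2

xs = cons(10)(cons(20)(nil))
assert head(xs) == 10 and head(tail(xs)) == 20
# Decode a Church boolean by applying it to two Python values.
assert null(nil)(True)(False) is True
assert null(xs)(True)(False) is False
```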
+
+\begin{ex}
+ Implement the predecessor function that sends $n$ to $n - 1$ if $n \geq 1$, and to $0$ otherwise. Also implement $\mathsf{and}$, $\mathsf{or}$, and a test of equality for two general Church numerals. % fill this in
+\end{ex}
+We want to show that we can in fact represent all primitive recursive functions with $\lambda$-terms. We will in fact do something stronger --- we will implement \term{general recursion}. This is a version of recursion that doesn't always guarantee termination, but will be useful if we want to implement ``infinite structures''.
+
+We look at an example. We take our favorite example $\cfact: \N \to \N$
+\[
+ \cfact\; n = \cif \ciszero \;n \cthen 1 \celse n \cdot \cfact\; (n - 1).
+\]
+But this is not a valid definition, since it is circular, and doesn't really give us a $\lambda$ expression. The trick is to introduce a new function $\metafact: (\N \to \N) \to (\N \to \N)$, given by
+\[
+ \metafact\; f\; n = \cif n = 0 \cthen 1 \celse n \cdot f\; (n - 1)
+\]
+This is a genuinely legitimate definition of a function. Now suppose $f$ is a fixed point of $\metafact$, i.e.\ $f = \metafact\; f$. Then we must have
+\[
+ f(n) = (\metafact\; f)(n) = \cif n = 0\cthen 1 \celse n \cdot (f(n - 1)).
+\]
+So $f$ is in fact the factorial function!
+
+Thus, we have reduced the problem of recursion to the problem of computing fixed points. If we have a general fixed point operator, then we can implement all recursive functions.
+
+\begin{defi}[$\Yc$-combinator]\index{$\Yc$-combinator}
+ The \emph{$\Yc$-combinator} is
+ \[
+ \Yc = \lambda f. \Big[ (\lambda x. f(x\; x)) (\lambda x. f(x\; x))\Big].
+ \]
+\end{defi}
+
+\begin{thm}
+ For any $g$, we have
+ \[
+ \Yc\; g \rightsquigarrow g\; (\Yc\; g).
+ \]
+\end{thm}
+
+\begin{proof}
+ \begin{align*}
+ \Yc\; g &= \Big(\lambda f.\; \big[(\lambda x.\; f(x\; x))\; (\lambda x.\; f(x\; x))\big]\Big)\; g\\
+ &\rightsquigarrow (\lambda x.\; g(x\; x))\; (\lambda x.\; g(x\; x))\\
+ &\rightsquigarrow g\Big((\lambda x.\; g(x\; x))\; (\lambda x.\; g(x\; x))\Big)
+ \end{align*}
+\end{proof}
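+We can even run this in Python, with one caveat: Python evaluates arguments eagerly, so the $\Yc$-combinator as written loops forever. The $\eta$-expanded variant (often called the $Z$-combinator) works in a call-by-value language:

```python
# Python is strict, so the Y-combinator as written loops forever: the
# argument f(x x) would be evaluated before f is ever applied. The
# eta-expanded variant below (the Z-combinator) delays the self-application.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# metafact : (N -> N) -> (N -> N); its fixed point is the factorial.
metafact = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

fact = Z(metafact)
assert fact(0) == 1
assert fact(5) == 120
```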
+We also need to implement the minimization operator. To compute $f^{-1}(17)$, our plan is as follows:
+\begin{enumerate}
+ \item Produce an ``infinite list'' (\term{stream}) of natural numbers.
+ \item Apply a function $f$ to every element of the stream.
+ \item Find the first element in the stream that is sent to $17$.
+\end{enumerate}
+Note that infinite lists are genuinely allowed in $\lambda$-calculus. These behave like lists, but they never have an ending $\cnil$ element. We can always extract more and more terms out of it.
+
+We do them in a slightly different order:
+\begin{enumerate}
+ \item[(ii)] We implement a function $\cmap$ that takes in a list or stream, and applies a function $f$ to every element of the list. We simply define it as
+ \[
+ \cmap\; f\; x = \cif \cnull\; x \cthen x \celse \cons\; (f\; (\head\; x))\;(\cmap\;f\; (\tail\; x)).
+ \]
+ Of course, we need to use our $\Yc$ combinator to actually achieve this.
+ \item[(i)] We can produce a stream of natural numbers by asking for the fixed point of
+ \[
+ \lambda \ell.\; \cons\; \underline{0}\; (\cmap\;\succ\;\ell).
+ \]
+ \item [(iii)] We will actually apply $\cmap$ to $\lambda n.\; \pair\; n\; (f\; n)$, so that we remember the index. We then use the function
+ \[
+ \cfind\; n\; x = \cif \csnd\;(\head\;x) = n \cthen \cfst\;(\head\;x) \celse \cfind\; n\; (\tail\;x).
+ \]
+\end{enumerate}
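+The three-step plan can be sketched in Python, with generators playing the role of streams (the name \texttt{minimise} and the use of \texttt{itertools.count} are our own choices):

```python
from itertools import count

# A sketch of the plan, with generators playing the role of streams.
# Note: if no n is ever sent to the target, this loops forever, just as
# the corresponding lambda term would fail to reach a normal form.
def minimise(f, target):
    naturals = count(0)                      # (i)   the stream 0, 1, 2, ...
    tagged = ((n, f(n)) for n in naturals)   # (ii)  map n to pair(n, f(n))
    for n, value in tagged:                  # (iii) find the first hit
        if value == target:
            return n

assert minimise(lambda n: n * n, 16) == 4
```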
+%This is not so satisfactory. The function $\Yc$ clearly cannot be well-typed. These things are actually quite hard to think or reason about. It would be nice if we can implement more interesting functions using only well-typed objects. For example, we can implement the Ackermann function by
+%\[
+% \lambda m. m ( \lambda f n. n\; f\; (f\; 1))\; \succ.
+%\]
+%One can check this is indeed well-typed.
+
+% can we implement primitive recursion.
+
+\subsection{Reducibility}
+We've alluded to the notion of reducibility several times previously. For example, we had two versions of the halting set, and said that if we could decide membership of one of them, then we could decide membership of the other. When we proved that there is a decidable partition of $\N^{(3)}$ with no infinite decidable monochromatic set, we said if we had such a thing, then we could use it to solve the halting problem.
+
+The general idea is that we can often ``reduce'' one problem to another one. The most basic way of expressing this is via many-to-one reducibility.
+
+\begin{defi}[Many-to-one reducibility]\index{many-to-one reducibility}\index{$\leq_m$}
+ Let $A, B \subseteq \N$. We write $B \leq_m A$ if there exists a total computable function $f: \N \to \N$ such that for all $n \in \N$, we have $n \in B \leftrightarrow f(n) \in A$.
+\end{defi}
+Thus, if we have such a function, then whenever we can correctly answer questions about membership in $A$, we can also answer questions about membership in $B$.
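+As a toy illustration (with decidable sets, so nothing deep is happening), here is a many-one reduction in Python:

```python
# A toy many-one reduction B <=_m A with decidable sets: B = even numbers,
# A = multiples of 4, reduced via the total computable map f(n) = 2n.
def in_A(n):
    return n % 4 == 0

def f(n):
    return 2 * n

def in_B(n):
    # One membership query to A decides membership in B.
    return in_A(f(n))

assert all(in_B(n) == (n % 2 == 0) for n in range(100))
```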
+
+It is easy to see that this is a quasi-order. If we take the intersection $\leq_m \cap \geq_m$ of the relations, then we get an equivalence relation. The equivalence classes are called \term{many-one degrees}. We think of these as ``degree of unsolvability''.
+
+\begin{prop}
+ Let $A \subseteq \N$. Then $A \leq_m \K$ iff $A$ is semi-decidable.
+\end{prop}
+This is a technical exercise. Another useful fact is the following:
+\begin{prop}
+ $(\N \setminus \K) \leq_m A$ iff $A$ is productive.
+\end{prop}
+But it turns out this is a rather awkward definition of ``relative computability''. In our definition of many-one reducibility, we are only allowed to run $f$ once, and see if the answer is in $A$. But why once?
+
+The idea is to define reducibility semantically. We assume that when writing programs, we can do all the usual stuff, but also have access to a magic function $f$, and we are allowed to call $f$ as many times as we wish and use the results. We call $f$ an \term{oracle}. For our purposes, $f$ is the (total) characteristic function of $A$.
+
+For technical reasons, we will suppose we have a fixed programming language that just refers to an opaque function $f$, and then we can later plug different things as $f$. This allows us to compare G\"odel numberings for functions with different oracles.
+
+\begin{defi}[Turing reducibility]\index{Turing reducibility}\index{$\leq$}
+ We say $B \leq_T A$, or simply $B \leq A$, if it is possible to determine membership of $B$ whenever we have access to an oracle that computes $\chi_A$.
+\end{defi}
+Again $\leq$ is a quasi-order, and intersecting with the reverse relation, we get an equivalence relation, whose equivalence classes are called \term{Turing degree}s, or just \term{degree}s.
+
+Now the quasi-order gives us a relation on Turing degrees. A natural question to ask is if this is actually a total order.
+
+To begin with, Turing degrees certainly form an upper semi-lattice, as given $A$ and $B$, we can form the disjoint union of $A$ and $B$, and knowing about membership of $A \coprod B$ means we know membership of $A$ and $B$, and vice versa. But are they linearly ordered by $\leq$? The answer is no!
+\begin{thm}[Friedberg--Muchnik]
+ There exist two sets $A, B \subseteq \N$ such that $A \not\leq B$ and $B \not\leq A$. Moreover, $A$ and $B$ are both semi-decidable.
+\end{thm}
+
+\begin{proof} % think a bit more about the proof.
+ We will obtain the sets $A$ and $B$ as
+ \[
+ A = \bigcup_{n < \omega} A_n,\quad B = \bigcup_{n < \omega} B_n,
+ \]
+ where $A_n$ and $B_n$ are finite (and in particular decidable) and nested, i.e.\ $i < j$ implies $A_i \subseteq A_j$ and $B_i \subseteq B_j$.
+
+% Note that if we want $A \not\leq B$ and $B \not\leq A$, then neither of them can be decidable. Thus, these inclusions cannot be end extensions.
+
+ We introduce a bit of notation. As mentioned, if we allow our programs to access the oracle $B$, then our ``programming language'' will allow consultation of $B$. Then in this new language, we can again assign G\"odel numbers to programs, and we write $\{e\}^B$ for the program whose G\"odel number is $e$. Instead of inventing a new language for each $B$, we invent a language that allows calling an ``oracle'', without specifying what it is. We can then plug in different sets and run.
+
+ Our objective is to pick $A, B$ such that
+ \begin{align*}
+ \chi_A &\not= \{e\}^B\\
+ \chi_B &\not= \{e\}^A
+ \end{align*}
+ for all $e$. Here we are taking $\chi_A$ to be the total version, i.e.
+ \[
+ \chi_A(x) =
+ \begin{cases}
+ 1 & x \in A\\
+ 0 & x \not\in A
+ \end{cases}.
+ \]
+ The idea is, of course, to choose $A_m$ such that it prevents $\chi_A$ from being $\{m\}^B$, and vice versa. But this is tricky, because when we try to pick $A_m$, we don't know the whole of $B$ yet.
+
+ It helps to spell out more explicitly what we want to achieve. We want to find some $n^{(i)}, m^{(i)} \in \N$ such that for each $i$, we have
+ \begin{align*}
+ \chi_A(n^{(i)}) &\not= \{i\}^B(n^{(i)})\\
+ \chi_B(m^{(i)}) &\not= \{i\}^A(m^{(i)})
+ \end{align*}
+ These $n^{(i)}$ and $m^{(i)}$ are the \emph{witnesses} to the functions being not equal.
+
+ We now begin by producing infinite lists
+ \begin{align*}
+ N_i &= \{n_1^{(i)}, n_2^{(i)}, \cdots\}\\
+ M_i &= \{m_1^{(i)}, m_2^{(i)}, \cdots\}
+ \end{align*}
+ of ``candidate witnesses''. We will assume all of these are disjoint. For reasons that will become clear later, we assign some ``priorities'' to these sets, say
+ \[
+ N_1 > M_1 > N_2 > M_2 > N_3 > M_3 > \cdots.
+ \]
+ We begin by picking
+ \[
+ A_0 = B_0 = \emptyset.
+ \]
+ Suppose at the $t$th iteration of the process, we have managed to produce $A_{t - 1}$ and $B_{t - 1}$. We now look at $N_1, \cdots, N_t$ and $M_1, \cdots, M_t$. If they have already found witnesses, then we leave them alone. Otherwise, suppose $N_i$ hasn't found a witness yet. Then we run $\{i\}^{B_{t - 1}}$ on the first remaining element of $N_i$ for $t$ many time steps, and consider the various possibilities:
+ \begin{itemize}
+ \item Let $n$ be the first remaining element of $N_i$. If $\{i\}^{B_{t - 1}}(n)$ halts within $t$ steps and returns $0$, then we put $n$ into $A_t$, and then pick this as the desired witness. We now look at all members of $B_{t - 1}$ our computation of $\{i\}^{B_{t - 1}}$ has queried. We \emph{remove} all of these from all sets of lower priority than $N_i$.
+ \item Otherwise, do nothing, and move on with life.
+ \end{itemize}
+ Now the problem, of course, is that whenever we add things into $A_t$ or $B_t$, we might change the results of some previous computations. Suppose we originally found that $\{10\}^{B_4}(32) = 0$, and therefore put $32$ into $A_5$ as a witness. But later, we found that $\{3\}^{A_{23}}(70) = 0$, and so we put $70$ into $B_{24}$. But if the computation of $\{10\}^{B_4}(32)$ relied on the fact that $70\not\in B_4$, then we might have
+ \[
+ \{10\}^{B_{24}}(32) \not= 0,
+ \]
+ and now our witness is ``injured''. When this happens, we forget the fact that we picked $32$ as a witness, and then pretend $N_{10}$ hasn't managed to find a witness after all.
+
+ Fortunately, this can only happen because $M_3$ is of higher priority than $N_{10}$, so $70$ was not forbidden from being a witness of $M_3$. Since there are only finitely many lists of higher priority, our witness can be injured only finitely many times, and it will eventually stabilize.
+
+ We are almost done. After all this process, we take the union and get $A$ and $B$. If, say, the requirement $\chi_B \not= \{i\}^A$ has a witness, then we are happy. What if it does not? Suppose $\{i\}^A$ is some characteristic function, and let $m$ be the first remaining element of the candidate list $M_i$. Since the lists of candidate witnesses are disjoint and $m$ was never picked as a witness, we know $m \not\in B$, i.e.\ $\chi_B(m) = 0$. So it suffices to show that $\{i\}^{A}(m) \not= 0$. But if $\{i\}^{A}(m) = 0$, then since this computation only involves finitely many values of $A$, the membership in $A$ of these finitely many values would eventually have stabilized, say by stage $t$. So we would have had $\{i\}^{A_t}(m) = 0$, and would long ago have made $m$ a witness.
+
+ % finite injury construction
+\end{proof}
+
+\section{Well-quasi-orderings}
+We end by a discussion on well-quasi-orders. The beginning of all this is the idea of well-foundedness. We begin by recalling the definition of a quasi-order.
+
+\begin{defi}[Quasi-order]\index{quasi-order}
+ An order $\bra X, \leq\ket$ is a \emph{quasi-order} if it is \term{transitive} and \term{reflexive}, i.e.\ for all $a, b, c \in X$,
+ \begin{itemize}
+ \item If $a \leq b$ and $b \leq c$, then $a \leq c$.
+ \item We always have $a \leq a$.
+ \end{itemize}
+\end{defi}
+Note that unlike a poset, we do not require antisymmetry, i.e.\ we can have distinct $a, b$ satisfying $a \leq b$ and $b \leq a$. There is a canonical way of turning a quasi-order into a partial order, namely by identifying elements satisfying $a \leq b$ and $b \leq a$, and so it appears we don't lose much by restricting to partial orders. However, it turns out a lot of the orders we construct are naturally quasi-orders, not partial orders, and the extra hypothesis of being a partial order is not really useful. So we will talk about quasi-orders.
+
+The main idea behind the whole chapter is the notion of well-foundedness. We probably first met this concept in set theory, where the axiom of foundation said sets are well-founded. Chances are, it was motivated to get rid of scenarios such as $x \in x$. However, as we do set theory, we figure that the real benefit it brings us is the ability to do $\varepsilon$-induction.
+
+In general, for an arbitrary relation $R$ on $X$, we want to admit the following \term{$R$-induction} principle:
+\begin{prooftree}
+ \AxiomC{$\forall_{x \in X} ((\forall_{y \in X} R(y, x) \to \varphi(y)) \to \varphi(x))$}
+ \UnaryInfC{$\forall_{x \in X} \varphi(x)$}
+\end{prooftree}
+How can this fail? Suppose the top line holds, but the bottom line does not. We set
+\[
+ A = \{x: \neg \varphi(x)\}.
+\]
+Then we know this set is non-empty. Moreover, the top line says
+\[
+ \forall_{x \in A} \exists_{y \in A} R(y, x).
+\]
+Thus, if we forbid such sets from existing, then we can admit the induction principle.
+\begin{defi}[Well-founded relation]\index{well-founded relation}\index{relation!well-founded}
+ A relation $R$ on $X$ is said to be \emph{well-founded} if for all $A \subseteq X$ non-empty, there is some $x \in A$ such that for any $y \in A$, we have $\neg R(y, x)$.
+\end{defi}
+
+There is another way of expressing the well-founded relation, if we believe in the \term{axiom of dependent choice}.
+\begin{axiom}[Axiom of dependent choice]
+ Let $X$ be a set and $R$ a relation on $X$. The \emph{axiom of dependent choice} says that if for all $x \in X$, there exists $y \in X$ such that $R(x, y)$, then we can find a sequence $x_1, x_2, \cdots$ such that $R(x_i, x_{i + 1})$ for all $i \in \N$.
+\end{axiom}
+This is a rather weak form of the axiom of choice, which we will freely assume. Assuming this, being well-founded is equivalent to the non-existence of infinite decreasing $R$-chains.
+
+Why do we care about well-foundedness? There are many reasons, and one we can provide comes from computer science. In general, it guarantees \emph{termination}. When writing programs, we often find ourselves writing loops like
+\begin{verbatim}
+ while ( some condition holds ) {
+ do something;
+ }
+\end{verbatim}
+This repeats the ``do something'' until the condition fails. In general, there is no reason why such a program should eventually terminate.
+
+Often, the loop is actually of the form
+\begin{verbatim}
+ while ( i != BOT ) {
+ do something with i;
+ }
+\end{verbatim}
+where $i$ takes values in some partial order $R$, with \texttt{BOT} being the bottom element. Suppose every time the loop runs, the value of $i$ strictly decreases. Then if $R$ is well-founded, we are guaranteed that the program will indeed terminate. More generally, if $R$ doesn't have a unique bottom element, but is still well-founded, then loops of the form
+\begin{verbatim}
+ while ( i is not a minimal element ) {
+ do something with i;
+ }
+\end{verbatim}
+are still guaranteed to terminate.
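+A classic instance is the Ackermann function: each recursive call strictly decreases the argument pair in the lexicographic order on $\N \times \N$, which is well-founded, so the recursion terminates even though no single argument always decreases:

```python
# The Ackermann function terminates because every recursive call strictly
# decreases the pair (m, n) in the lexicographic order on N x N, and that
# order is well-founded: no infinite strictly decreasing sequence exists.
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)                # (m-1, 1) < (m, 0)
    return ackermann(m - 1, ackermann(m, n - 1))  # (m, n-1) < (m, n), and
                                                  # (m-1, _) < (m, n)

assert ackermann(2, 3) == 9    # A(2, n) = 2n + 3
assert ackermann(3, 3) == 61   # A(3, n) = 2^(n+3) - 3
```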
+
+\begin{eg}
+ Consider \emph{typed} $\lambda$ calculus, which is the form of $\lambda$ calculus where we require each term to have a type, and all terms have to be typed, just as we did when we used it to decorate natural deduction proofs.
+
+ We let $X$ be the set of all typed $\lambda$ terms up to $\alpha$-equivalence, and for $f, g \in X$, we write $f \leq g$ if $g$ can be $\beta$-reduced to $f$. It is a remarkable theorem that this order is well-founded. In other words, we can keep applying $\beta$-reduction in any way we like, and we are guaranteed to always end up at a normal form. It is also true that the normal form of a $\lambda$ term is unique, but this is not implied by well-foundedness.
+\end{eg}
+
+It is clear from the definition that a quasi-order is never well-founded, since it is reflexive. It is also clear that it fails for a silly reason, and we can fix this by defining
+\begin{defi}[Well-founded quasi-order]\index{well-founded quasi-order}\index{quasi-order!well-founded}
+ A quasi-order is \emph{well-founded} if there is no \emph{strictly} decreasing infinite sequence.
+\end{defi}
+Here we are not worried about choice issues anymore.
+
+Once we are convinced that being well-founded is a good thing, it is natural to ask what operations preserve well-foundedness. We begin by listing some constructions we can do.
+\begin{defi}[$\mathcal{P}(X)$]\index{$\mathcal{P}(X)$}\index{$\leq^+$}
+ Let $\bra X, \leq_X \ket$ be a quasi-order. We define a quasi-order $\leq_X^+$ on $\mathcal{P}(X)$ by
+ \[
+ X_1\leq_X^+ X_2\text{ if }\forall_{x_1 \in X_1} \exists_{x_2 \in X_2} (x_1 \leq_X x_2).
+ \]
+\end{defi}
+
+\begin{defi}[$X^{<\omega}$]\index{$X^{<\omega}$}\index{$\leq_s$}
+ Let $\bra X, \leq_X\ket$ be a quasi-order. We let $X^{<\omega}$ be the set of all \emph{finite} lists (i.e.\ sequences) in $X$. We define a quasi-order on $X^{<\omega}$ recursively by
+ \begin{itemize}
+ \item $\cnil \leq \ell_1$, where $\cnil$ is the empty list.
+ \item $\tail (\ell_1) \leq \ell_1$
+ \item If $\tail (\ell_1) \leq \tail(\ell_2)$ and $\head(\ell_1) \leq_X \head (\ell_2)$, then $\ell_1 \leq \ell_2$.
+ \end{itemize}
+ for all lists $\ell_1, \ell_2$.
+
+ Equivalently, sequences $\{x_i\}_{i = 1}^n \leq_s \{y_i\}_{i = 1}^m$\index{$\leq_s$} if there is a subsequence $\{y_{i_k}\}_{k = 1}^n$ of $\{y_i\}$ such that $x_k \leq_X y_{i_k}$ for all $k$.
+\end{defi}
+
+Finally, the construction we are really interested in is \emph{trees}.
+\begin{defi}[Tree]\index{tree}\index{$\leq_s$}\index{$\mathrm{Trees}(X)$}
+ Let $\bra X, \leq_X\ket $ be a quasi-order. The set of all trees (in $X$) is defined inductively as follows:
+ \begin{itemize}
+ \item If $x \in X$ and $L$ is a list of trees, then $(x, L)$ is a tree. We call $x$ a \term{node} of the tree. In particular $(x, \cnil)$ is a tree. We write
+ \[
+ \mathrm{root}(x, L) = x,\quad \mathrm{children}(x, L) = L.
+ \]
+ \end{itemize}
+ Haskell programmers would define this by
+ \begin{verbatim}
+ data Tree a = Branch a [Tree a]\end{verbatim}
+ We write $\mathrm{Trees}(X)$ for the set of all trees on $X$.
+
+ We define an order relation $\leq_s$ on $\mathrm{Trees}(X)$ as follows --- let $T_1, T_2 \in \mathrm{Trees}(X)$. Then $T_1 \leq_s T_2$ if either of the following holds:
+ \begin{enumerate}
+ \item $T_1 \leq T'$ for some $T' \in \mathrm{children}(T_2)$.
+ \item $\mathrm{root}(T_1) \leq \mathrm{root}(T_2)$ and $\mathrm{children}(T_1) \leq \mathrm{children}(T_2)$ as lists.
+ \end{enumerate}
+\end{defi}
+It is an exercise to think hard and convince yourself this recursively-defined relation is indeed well-defined.
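+
+For the curious, both recursive definitions translate directly into a backtracking search. The following Python sketch (our own names; the quasi-order on $X$ is passed in as a Boolean function \texttt{le}, and a tree is a \texttt{(root, children)} pair) is a direct transcription of the clauses above:

```python
def embeds(le, xs, ys):
    """Decide xs <=_s ys for lists: match xs order-preservingly into a
    subsequence of ys, comparing matched elements with le."""
    if not xs:
        return True                       # nil <= any list
    if not ys:
        return False
    if le(xs[0], ys[0]) and embeds(le, xs[1:], ys[1:]):
        return True                       # compare heads, recurse on tails
    return embeds(le, xs, ys[1:])         # or drop the head of ys

def tree_leq(le, t1, t2):
    """Decide t1 <=_s t2 for trees given as (root, list of subtrees)."""
    r1, cs1 = t1
    r2, cs2 = t2
    if any(tree_leq(le, t1, c) for c in cs2):      # case (i)
        return True
    # case (ii): compare roots, and the children lists as lists of trees
    return le(r1, r2) and embeds(lambda a, b: tree_leq(le, a, b), cs1, cs2)
```

+For instance, with the usual order on integers, \texttt{embeds(le, [1, 3], [1, 2, 3, 4])} holds, while \texttt{embeds(le, [3, 1], [1, 2, 3])} does not.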
+
+Which of these operations preserve well-foundedness? It is not hard to see the following:
+\begin{lemma}
+ If $\bra X, \leq_X\ket$ is a well-founded quasi-order, then so is $X^{<\omega}$ and $\mathrm{Trees}(X)$.
+\end{lemma}
+The reason why this works is that the lifts of the order on $X$ to $X^{<\omega}$ and $\mathrm{Trees}(X)$ are ``of finite character''. To compare two elements in $X^{<\omega}$ or $\mathrm{Trees}(X)$, we only have to appeal to finitely many instances of $\leq_X$.
+
+On the other hand, $\bra \mathcal{P}(X), \leq_X^+\ket$ is in general not well-founded. For example, $\bra \N, =\ket$ is a well-founded quasi-order, being the reflexive transitive closure of the trivial relation. But $\bra \mathcal{P}(\N), =^+\ket$ has an infinite decreasing sequence. Indeed, $=^+$ is just the subset relation.
+
+\begin{defi}[Well-quasi-order]\index{well-quasi-order}\index{WQO}
+ A \emph{well-quasi-order} (WQO) is a well-founded quasi-order $\bra X, \leq_X\ket$ such that $\bra \mathcal{P}(X), \leq_X^+\ket$ is also well-founded.
+\end{defi}
+Note that the power set of a well-quasi-order need not be a well-quasi-order.
+
+The question we want to answer is whether the operations $X^{<\omega}$ and $\mathrm{Trees}(X)$ preserve well-quasi-orders. To do so, we try to characterize well-quasi-orders in a more concrete way.
+
+When does a quasi-order fail to be a well quasi-order? Suppose $x_1, x_2, x_3, \cdots$ is an infinite antichain, i.e.\ a collection of mutually unrelated elements. Then as above
+\[
+ \bra \{x_i: i > n\}: n \in \N\ket
+\]
+is an infinite strictly descending sequence in $\bra \mathcal{P}(X), \leq^+\ket$. Of course, for a well-quasi-order we also need $X$ itself to be well-founded, i.e.\ the non-existence of infinite strictly decreasing sequences in $X$.
+
+\begin{prop}
+ Let $\bra X, \leq\ket$ be a quasi-order. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\bra X, \leq\ket$ is a well-quasi-order.
+ \item There is no infinite decreasing sequence and no infinite anti-chain.
+ \item Whenever we have any sequence $x_i \in X$ whatsoever, we can find $i < j$ such that $x_i \leq x_j$.
+ \end{enumerate}
+\end{prop}
+In standard literature, (iii) is often taken as the definition of a well-quasi-order instead.
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): We just showed this.
+ \item (iii) $\Rightarrow$ (i): First note that (iii) rules out infinite strictly decreasing sequences in $X$, so $X$ is well-founded. Now suppose $X_1, X_2, X_3, \cdots$ is a strictly decreasing chain in $\mathcal{P}(X)$. Then by definition, we can pick $x_i \in X_i$ such that $x_i$ is not $\leq$ anything in $X_{i + 1}$. Now for any $j > i$, the element $x_j$ is $\leq$ something in $X_{i + 1}$, so if we had $x_i \leq x_j$, then by transitivity $x_i$ would also be $\leq$ something in $X_{i + 1}$. So we have found a sequence $\{x_i\}$ such that $x_i \not \leq x_j$ for all $i < j$, contradicting (iii).
+
+ \item (iii) $\Rightarrow$ (ii) is clear.
+
+ \item (ii) $\Rightarrow$ (iii): we show the contrapositive. Suppose $\{x_i\}$ is a sequence with no $i < j$ such that $x_i \leq x_j$. We show that it contains an infinite anti-chain or an infinite strictly descending chain as a subsequence.
+
+ To do so, we two-colour $[\N]^2$ by
+ \[
+ c(\{i < j\}) =
+ \begin{cases}
+ 0 & x_i > x_j\\
+ 1 & \text{otherwise}
+ \end{cases}
+ \]
+ By Ramsey's theorem, this has an infinite monochromatic set. If it has colour $0$, then the corresponding subsequence is a strictly descending chain. If it has colour $1$, then together with our hypothesis, the corresponding subsequence is an anti-chain.\qedhere
+ \end{itemize}
+\end{proof}
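+
+Condition (iii) is also the easiest one to experiment with. The following Python sketch (our own, purely for illustration) searches a finite initial segment of a sequence for a ``good pair'' $i < j$ with $x_i \leq x_j$, here using the componentwise order on $\N^2$, which is a WQO by Dickson's lemma:

```python
def good_pair(le, xs):
    """Return the first (i, j) with i < j and le(xs[i], xs[j]), else None.
    For a sequence in a WQO, condition (iii) says one must eventually exist."""
    for j in range(len(xs)):
        for i in range(j):
            if le(xs[i], xs[j]):
                return (i, j)
    return None

# componentwise order on pairs of naturals
le = lambda a, b: a[0] <= b[0] and a[1] <= b[1]
```

+For example, \texttt{good\_pair(le, [(3, 0), (2, 1), (1, 2), (0, 3)])} is \texttt{None}, since those four pairs form an anti-chain, but appending \texttt{(2, 2)} produces the good pair $(1, 4)$.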
+
+If we want to reason about well-founded relations, then the following notion is often a useful one to have in mind:
+\begin{defi}[Well-founded part of a relation]\index{well-founded part of relation}
+ Let $\bra X, R\ket$ be a set with a relation. Then the \emph{well-founded part} of a relation is the $\subseteq$-least subset $A \subseteq X$ satisfying the property
+ \begin{itemize}
+ \item If $x$ is such that all predecessors of $x$ are in $A$, then $x \in A$.
+ \end{itemize}
+\end{defi}
+So for example, any minimal element of $X$ would be in the well-founded part. One can prove that the well-founded part is indeed well-founded without much difficulty.
+
+Is there a good notion of the WQO-part of a quasi-order? It turns out not, or at least there isn't an obvious one that works. For our later discussion, it is convenient to have the following definition:
+
+\begin{defi}[Bad sequence]\index{bad sequence}
+ Let $\bra X, \leq\ket$ be a well-founded quasi-order. A sequence $\{x_i\}$ is \emph{bad} if for all $i < j$, we have $x_i \not\leq x_j$.
+\end{defi}
+Then a well-founded quasi-order is a WQO iff it has no bad sequences.
+
+The idea is now to pick a bad sequence that is ``minimal'' in some sense; looking at things below this minimal bad sequence will then give us a well-quasi-order. This ``WQO-part'' isn't some object canonically associated to the order $\bra X, \leq\ket$, because it depends on the choice of the minimal bad sequence, but it is very useful in our upcoming proof.
+
+\begin{defi}[Minimal bad sequence]\index{minimal bad sequence}\index{bad sequence!minimal}
+ Let $\bra X, \leq\ket$ be a quasi-order. A \emph{minimal bad sequence} is a bad sequence $\{x_i\}$ such that for each $k \in \N$, $x_k$ is a $\leq$-minimal element in
+ \[
+ \{x:\text{there exists a bad sequence starting with }x_1, x_2, \cdots, x_{k - 1}, x\}.
+ \]
+\end{defi}
+
+\begin{lemma}
+ Let $\bra X, \leq\ket$ be a well-founded quasi-order that is not a WQO. Then it has a minimal bad sequence.
+\end{lemma}
+
+\begin{proof}
+ Just pick each $x_i$ in turn according to the requirement in the definition. Bad sequences exist since $\bra X, \leq\ket$ is not a WQO, and such a minimal $x_i$ can always be found because $\bra X, \leq\ket$ is well-founded.
+\end{proof}
+
+What makes minimal bad sequences useful is the following promised lemma:
+\begin{lemma}[Minimal bad sequence lemma]\index{minimal bad sequence lemma}
+ Let $\bra X, \leq\ket$ be a quasi-order and $B = \{b_i\}$ a minimal bad sequence. Let
+ \[
+ X' = \{x \in X: \exists_{n \in \N} x < b_n\}.
+ \]
+ Then $\bra X', \leq\ket$ is a WQO.
+\end{lemma}
+
+\begin{proof}
+ Suppose $\{s_i\}$ is a bad sequence in $X'$. We prove by induction on $n$ that no $s_i$ is strictly below $b_n$. Since every element of $X'$ is strictly below some $b_n$, this gives a contradiction.
+
+ Suppose $s_i < b_0$ for some $i$. Then $s_i, s_{i + 1}, s_{i + 2}, \cdots$ is a bad sequence, whose first element is less than $b_0$. This then contradicts the minimality of $B$.
+
+ For the induction step, suppose nothing in $\{s_i\}$ is strictly less than $b_0, \cdots, b_n$. Suppose, for contradiction, that $s_i < b_{n + 1}$ for some $i$. Now consider the sequence
+ \[
+ b_0, \cdots, b_n, s_i, s_{i + 1}, s_{i + 2}, \cdots.
+ \]
+ By minimality of $B$, we know this cannot be a bad sequence. So some element of the sequence is $\leq$ some later element. They cannot both come from the $\{b_k\}$, as $B$ is bad, nor both from the $\{s_k\}$, as $\{s_i\}$ is bad. Since all the $b$'s precede the $s$'s, the only possibility is that there is some $b_j \leq s_k$ with $j \leq n$ and $k \geq i + 1$.
+
+ Consider $s_k$. Since $s_k \in X'$, it must be $< b_m$ for some $m$. Moreover, by the induction hypothesis, we must have $m > n$. Then by transitivity, we have $b_j \leq b_m$ with $j < m$, contradicting the fact that $B$ is bad.
+\end{proof}
+
+We need one more technical lemma.
+
+\begin{lemma}[Perfect subsequence lemma]\index{perfect subsequence lemma}
+ Let $\bra X, \leq\ket$ be a WQO. Then every sequence $\{x_n\}$ in $X$ has a \term{perfect subsequence}, i.e.\ there is an infinite subset $A \subseteq \N$ such that for all $i < j \in A$, we have $x_i \leq x_j$.
+\end{lemma}
+By definition of a well-quasi-order, we can always find some $i < j$ such that $x_i \leq x_j$. This says we can find an infinite subset on which this holds for every pair.
+
+\begin{proof}
+ We apply Ramsey's theorem. We two-colour $[\N]^2$ by
+ \[
+ c(\{i < j\}) =
+ \begin{cases}
+ \mathrm{red} & x_i \leq x_j\\
+ \mathrm{blue} & x_i \not\leq x_j
+ \end{cases}
+ \]
+ Then Ramsey's theorem gives us an infinite monochromatic set. But we cannot have an infinite monochromatic blue set, as this will give a bad sequence. So there is a monochromatic red set, which is a perfect subsequence.
+\end{proof}
+
+We are now in a position to prove that lists over a WQO are WQO's.
+\begin{lemma}[Higman's lemma]\index{Higman's lemma}
+ Let $\bra X, \leq_X\ket$ be a WQO. Then $\bra X^{<\omega}, \leq_s\ket$ is a WQO.
+\end{lemma}
+
+\begin{proof}
+ We already know that $X^{<\omega}$ is well-founded. Suppose it is not a WQO. Then there is a minimal bad sequence $\{x_i\}$ of $X$-lists under $\leq_s$.
+
+ Now consider the sequence $\{\head(x_i)\}$. Note that no $x_i$ is the empty list, since $\cnil \leq \ell$ for every list $\ell$, so the heads are defined. By the perfect subsequence lemma, after passing to a subsequence (which is still bad), we may wlog assume $\{\head(x_i)\}$ is a perfect sequence.
+
+ Now consider the sequence $\{\tail(x_i)\}$. Now note that $\tail(x_i) < x_i$ for each $i$ (here it is crucial that lists are finite). Thus, using the notation in the minimal bad sequence lemma, we know $\tail(x_i) \in X'$ for all $i$.
+
+ But $X'$ is a well-quasi-order. So we can find some $i < j$ such that $\tail(x_i) \leq \tail(x_j)$. But also $\head (x_i) \leq \head(x_j)$, since the heads form a perfect sequence. So $x_i \leq x_j$, and this is a contradiction.
+\end{proof}
+
+We can apply a similar juggle to prove Kruskal's theorem. The twist is that we will have to apply Higman's lemma!
+
+\begin{thm}[Kruskal's theorem]\index{Kruskal's theorem}
+ Let $\bra X, \leq_X\ket$ be a WQO. Then $\bra \mathrm{Trees}(X), \leq_s\ket$ is a WQO.
+\end{thm}
+
+\begin{proof}
+ We already know that $\mathrm{Trees}(X)$ is well-founded. Suppose it is not a WQO. Then we can pick a minimal bad sequence $\{x_i\}$ of trees.
+
+ As before, we may wlog $\{\mathrm{root}(x_i)\}$ is a perfect sequence. Consider
+ \[
+ Y = \{T: T \in \mathrm{children}(x_i)\text{ for some }i\}.
+ \]
+ Then $Y \subseteq X'$. So $Y$ is a WQO, and hence by Higman's lemma, $Y^{<\omega}$ is a WQO. So there exist some $i < j$ such that $\mathrm{children}(x_i) \leq \mathrm{children}(x_j)$. But by perfectness, we have $\mathrm{root}(x_i) \leq \mathrm{root}(x_j)$. So by definition of the ordering, $x_i \leq x_j$, contradicting the badness of $\{x_i\}$.
+\end{proof}
+
+What good is this theorem? It is possible to use Kruskal's theorem to prove the termination of some algorithms, but this is not a computer science course, and we will not go into that. Instead, we look at some proof-theoretic consequences of Kruskal's theorem.
+
+We begin with the simplest possible WQO --- the WQO with one element only and the only possible order. Then a tree over this WQO is just a tree with no labels on each node, i.e.\ a tree.
+
+Consider the statement
+\[
+ P(k) = \parbox{9cm}{there exists some $n \in \N$ such that there is no finite bad sequence $T_1, \cdots, T_n$ where $T_i$ has $k + i$ nodes}.
+\]
+This is a finitary version of the statement of Kruskal's theorem. We claim that $P(k)$ is true for all $k$. Suppose not. Then for each $n$, we find a bad sequence $T_1^n, \cdots, T_n^n$, where $T_i^n$ is a tree with $k + i$ nodes. We can put these trees in a grid:
+\begin{center}
+ \begin{tabular}{ccccc}
+ $T_1^1$\\
+ $T_1^2$ & $T_2^2$\\
+ $T_1^3$ & $T_2^3$ & $T_3^3$\\
+ $T_1^4$ & $T_2^4$ & $T_3^4$ & $T_4^4$\\
+ $T_1^5$ & $T_2^5$ & $T_3^5$ & $T_4^5$ & $T_5^5$\\
+ \end{tabular}
+\end{center}
+We look at the first column. Each tree in it has $k + 1$ nodes, and there are only finitely many such trees. So some tree appears infinitely often; pick one, call it $T_1$, and throw away all other rows. Similarly, we can then find some tree that appears infinitely often in the second column, call it $T_2$, and throw the other rows away. We keep going. Then we find a sequence
+\[
+ T_1, T_2, T_3, \cdots
+\]
+such that every initial segment is a bad sequence. So this is a bad sequence, which is impossible.
+
+\begin{prop}[Friedman's finite form]\index{Friedman's finite form}
+ For all $k \in \N$, the statement $P(k)$ is true.
+\end{prop}
+Why do we care about this? The statement $P(k)$ is something we can write out in Peano arithmetic, as it is finitary in nature. But our proof certainly didn't live in Peano arithmetic, as we had to talk about infinite sets. It turns out for each $k \in \N$, we can indeed prove $P(k)$ in Peano arithmetic. However, it is impossible to prove $\forall_k P(k)$! This in particular means we cannot ``prove Kruskal's theorem'' in Peano arithmetic (whatever that might mean).
+
+We can also relate this to ordinal analysis. Given a WQO $\bra X, \leq\ket$, we can construct a well-founded structure from it. The obvious thing to try is to take $\bra \mathcal{P}(X), \leq^+\ket$, but this is not what we want. The problem is that talking about power sets involves talking about infinite sets, and this is not something we know how to do in Peano arithmetic.
+
+Instead, we let $\tilde{X}$ be the set of all finite bad sequences in $X$, and order them by $s \leq t$ if $s$ is an end-extension of $t$. Note that this is ``upside down''. Then by definition of a WQO, this is a well-founded downward-branching tree whose top element is the empty sequence. Then by taking the rank of this structure, we obtain an ordinal.
+
+Using Kruskal's theorem, we can produce some really huge WQO's, which in turn gives us some really huge ordinals. So in some sense, being able to prove Kruskal's theorem means we can prove the well-foundedness of some really big ordinal in our theory.
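+
+For a \emph{finite} quasi-order the tree of finite bad sequences is itself finite, and its rank is just a natural number, which we can compute directly. A Python sketch (our own; we use the convention that leaves have rank $0$):

```python
def rank(le, X, prefix=()):
    """Rank of the tree of finite bad sequences over X below `prefix`:
    a sequence is bad if no earlier entry is le-below a later one."""
    exts = [x for x in X if all(not le(p, x) for p in prefix)]
    if not exts:
        return 0                    # prefix cannot be extended: a leaf
    return 1 + max(rank(le, X, prefix + (x,)) for x in exts)
```

+On a $3$-element chain the longest bad sequence is the strictly decreasing one, and on a $3$-element anti-chain it is any enumeration; both trees have rank $3$.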
+
+%Recall that we usually define (second-order) arithmetic by quantifying over infinite sets in the induction principle. However, it is possible to state the induction principle using finite sets. Indeed, we say $n$ is a natural number iff for all $X$, if $n \in X$ and ``$m \in X \Rightarrow (m - 1) \in X$'', then $0 \in X$.
+%
+%We can obtain long sequences from WQO's. We know that a WQO has no infinite bad sequences. So, from $\bra X, \leq\ket$, we construct the set of finite bad sequences from $X$, and order them upside down, i.e.\ This gives a downward-branching tree whose top element is the empty sequence.
+%
+%Note that this tree is well-founded! Any infinite decreasing tree would patch to give us an infinite bad sequence. So it has a rank. This rank is in some sense the largest ordinal that can be associated to the WQO.
+%
+%Rank functions on a well-founded structure are parsimonious --- they are surjections onto an initial segment of $\mathrm{On}$.
+%
+%The rank of the tree of bad sequences of $\bra X, \leq\ket$ the largest initial segment of $\mathrm{On}$ such that there exists a homomorphism from $\bra X,\leq\ket$ onto that initial segment.
+
+We end with a natural generalization of WQO's. Suppose $\bra X, \leq\ket$ is a WQO. Then $\bra \mathcal{P}(X), \leq^+\ket$ need not be a WQO. Suppose it is not, and let $X_1, X_2, X_3, \cdots$ be a bad sequence of subsets. Then for each $i < j$, we can find some $x_{ij} \in X_i$ such that $x_{ij}$ is not less than anything in $X_j$.
+
+Using dependent choice, we can find a function
+\[
+ f: \{\bra i, j\ket: i < j \in \N\} \to X
+\]
+such that for all $i < j < k$, we have $f(i, j) \not\leq f(j, k)$. This is a strengthening of the notion of a bad sequence, where the sequence is now indexed by pairs $i < j$.
+
+\begin{defi}[$\omega^2$-good]\index{$\omega^2$-good}
+ A quasi-order $\bra X, \leq\ket$ is \emph{$\omega^2$-good} if for any
+ \[
+ f: \{\bra i, j\ket: i < j \in \N\} \to X,
+ \]
+ there is some $i < j < k$ such that $f(i, j) \leq f(j, k)$.
+\end{defi}
+There is an analogue of the perfect subsequence lemma for these, but there is no analogous notion of a minimal bad sequence (or at least no obvious one).
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/modular_forms_and_l_functions.tex b/books/cam/III_L/modular_forms_and_l_functions.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c323d16df32696d5a625fd86bbb90a109646ad32
--- /dev/null
+++ b/books/cam/III_L/modular_forms_and_l_functions.tex
@@ -0,0 +1,5081 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2017}
+\def\nlecturer {A.\ J.\ Scholl}
+\def\ncourse {Modular Forms and L-functions}
+
+\input{header}
+
+\renewcommand{\H}{\mathcal{H}}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+Modular Forms are classical objects that appear in many areas of mathematics, from number theory to representation theory and mathematical physics. Most famous is, of course, the role they played in the proof of Fermat's Last Theorem, through the conjecture of Shimura-Taniyama-Weil that elliptic curves are modular. One connection between modular forms and arithmetic is through the medium of $L$-functions, the basic example of which is the Riemann $\zeta$-function. We will discuss various types of $L$-function in this course and give arithmetic applications.
+
+\subsubsection*{Pre-requisites}
+Prerequisites for the course are fairly modest; from number theory, apart from basic elementary notions, some knowledge of quadratic fields is desirable. A fair chunk of the course will involve (fairly 19th-century) analysis, so we will assume the basic theory of holomorphic functions in one complex variable, such as are found in a first course on complex analysis (e.g.\ the 2nd year Complex Analysis course of the Tripos).
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+One of the big problems in number theory is the so-called Langlands programme, which relates ``arithmetic objects'' such as representations of the Galois group and elliptic curves over $\Q$, with ``analytic objects'' such as modular forms and more generally automorphic forms and representations.
+
+\begin{eg}
+ Let $E$ be the elliptic curve $y^2 + y = x^3 - x^2$. We can associate to it the function
+ \[
+ f(z) = q\prod_{n \geq 1} (1 - q^n)^2 (1 - q^{11n})^2 = \sum_{n = 1}^\infty a_n q^n,\quad q= e^{2\pi i z},
+ \]
+ where we assume $\Im z > 0$, so that $|q| < 1$. The relation between these two objects is that the number of points of $E$ over $\F_p$ is equal to $1 + p - a_p$, for $p \not= 11$. This strange function $f$ is a modular form, and is actually cooked up from the slightly easier function
+ \[
+ \eta(z) = q^{1/24} \prod_{n = 1}^\infty (1 - q^n)
+ \]
+ by
+ \[
+ f(z) = (\eta(z)\eta(11z))^2.
+ \]
+ This function $\eta$ is called the \emph{Dedekind eta function}, and is one of the simplest examples of a modular form (in the sense that we can write it down easily). It satisfies the following two identities:
+ \[
+ \eta(z + 1) = e^{i \pi/12}\eta(z),\quad \eta\left(\frac{-1}{z}\right) = \sqrt{\frac{z}{i}} \eta(z).
+ \]
+ The first is clear, and the second takes some work to show. These transformation laws are exactly what makes this thing a modular form.
+
+ Another way to link $E$ and $f$ is via the \emph{$L$-series}
+ \[
+ L(E, s) = \sum_{n = 1}^\infty \frac{a_n}{n^s},
+ \]
+ which is a generalization of the Riemann $\zeta$-function
+ \[
+ \zeta(s) = \sum_{n = 1}^\infty \frac{1}{n^s}.
+ \]
+\end{eg}
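+
+None of this is needed later, but the relation between $f$ and $E$ is easy to check numerically for small primes. The following Python sketch (our own) multiplies out the product to get the coefficients $a_n$, and counts points on $E: y^2 + y = x^3 - x^2$ over $\F_p$ by brute force:

```python
def f_coefficients(N):
    """Coefficients a_n of f = q * prod (1-q^n)^2 (1-q^(11n))^2,
    multiplying out polynomials truncated at degree N."""
    poly = [0] * (N + 1)
    poly[1] = 1                          # start from the factor q
    for n in range(1, N + 1):
        for m in (n, 11 * n):
            for _ in range(2):           # each factor appears squared
                new = poly[:]
                for i in range(N + 1 - m):
                    new[i + m] -= poly[i]    # multiply by (1 - q^m)
                poly = new
    return poly                          # poly[n] is a_n

def count_points(p):
    """#E(F_p) for E: y^2 + y = x^3 - x^2, plus the point at infinity."""
    return 1 + sum(1 for x in range(p) for y in range(p)
                   if (y * y + y - x**3 + x * x) % p == 0)
```

+One finds $a_2 = -2$, $a_3 = -1$, $a_5 = 1$, $a_7 = -2$, and indeed $\#E(\F_p) = 1 + p - a_p$ for these primes.
+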
+We are in fact not going to study elliptic curves, as there is another course on that, but we are going to study the modular forms and these $L$-series. We are going to do this in a fairly classical way, without using algebraic number theory.
+
+\section{Some preliminary analysis}
+\subsection{Characters of abelian groups}
+When we were young, we were forced to take some ``applied'' courses, and learnt about these beasts known as Fourier transforms. At that time, the hope was that we could leave them to the engineers, and never meet them ever again. Unfortunately, it turns out Fourier transforms are also important in ``pure'' mathematics, and we must understand them well.
+
+Let's recall how Fourier transforms worked. We had two separate but closely related notions. First of all, we can take the Fourier transform of a function $f: \R \to \C$. The idea is that we wanted to write any such function as
+\[
+ f(x) = \int_{-\infty}^\infty e^{2\pi i yx} \hat{f}(y) \;\d y.
+\]
+One way to think about this is that we are expanding $f$ in the basis $\chi_y(x) = e^{2\pi i yx}$. We could also take the Fourier series of a periodic function, i.e.\ a function defined on $\R/\Z$. In this case, we are trying to write our function as
+\[
+ f(x) = \sum_{n = -\infty}^\infty c_n e^{2\pi i n x}.
+\]
+In this case, we are expanding our function in the basis $\chi_n(x) = e^{2\pi i n x}$. What is special about these bases $\{\chi_y\}$ and $\{\chi_n\}$?
+
+We observe that $\R$ and $\R/\Z$ are not just topological spaces, but in fact abelian topological groups. These $\chi_y$ and $\chi_n$ are not just functions to $\C$, but continuous group homomorphisms to $\U(1) \subseteq \C$. In fact, these give \emph{all} continuous group homomorphisms from $\R$ and $\R/\Z$ to $\U(1)$.
+
+\begin{defi}[Character]\index{character}
+ Let $G$ be an abelian topological group. A (unitary) \emph{character} of $G$ is a continuous homomorphism $\chi: G \to \U(1)$, where $\U(1) = \{z \in \C \mid |z| = 1\}$.
+\end{defi}
+To better understand Fourier transforms, we must understand characters, and we shall spend some time doing so.
+
+\begin{eg}
+ For any group $G$, there is the trivial character $\chi_0(g) \equiv 1$.
+\end{eg}
+
+\begin{eg}
+ The product of two characters is a character.
+\end{eg}
+
+\begin{eg}
+ If $\chi$ is a character, then so is $\chi^*$, and $\chi \chi^* = 1$.
+\end{eg}
+
+Thus, we see that the collection of all characters form a group under multiplication.
+\begin{defi}[Character group]\index{character group}\index{Pontryagin dual}
+ Let $G$ be a group. The \emph{character group} (or \emph{Pontryagin dual}) $\hat{G}$ is the group of all characters of $G$.
+\end{defi}
+
+It is usually not hard to figure out what the character group is.
+\begin{eg}
+ Let $G = \R$. For $y \in \R$, we let $\chi_y: \R \to \U(1)$ be
+ \[
+ \chi_y(x) = e^{2\pi i xy}.
+ \]
+ For each $y \in \R$, this is a character, and all characters are of this form. So $\hat{\R} \cong \R$ under this correspondence.
+\end{eg}
+
+\begin{eg}
+ Take $G = \Z$ with the discrete topology. A character is uniquely determined by the image of $1$, and any element of $\U(1)$ can be the image of $1$. So we have $\hat{G} \cong \U(1)$.
+\end{eg}
+
+\begin{eg}
+ Take $G = \Z/N\Z$. Then the character is again determined by the image of $1$, and the allowed values are exactly the $N$th roots of unity. So
+ \[
+ \hat{G} \cong \mu_N = \{\zeta \in \C^\times: \zeta^N = 1\}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Let $G = G_1 \times G_2$. Then $\hat{G} \cong \hat{G}_1 \times \hat{G}_2$. So, for example, $\hat{\R^n} = \R^n$. Under this correspondence, a $y \in \R^n$ corresponds to
+ \[
+ \chi_y(x) = e^{2\pi i x\cdot y}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Take $G = \R^\times$. We have
+ \[
+ G \cong \{\pm 1\} \times \R^\times_{>0} \cong \{\pm 1\} \times \R,
+ \]
+ where we have an isomorphism between $\R^{\times}_{> 0} \cong \R$ by the exponential map. So we have
+ \[
+ \hat{G} \cong \Z/2\Z \times \R.
+ \]
+ Explicitly, given $(\varepsilon, \sigma) \in \Z/2\Z \times \R$, the character is given by
+ \[
+ x \mapsto \sgn(x)^\varepsilon |x|^{i\sigma}.
+ \]
+\end{eg}
+
+Note that $\hat{G}$ has a natural topology for which the evaluation maps $\chi \mapsto \chi(g)$ are continuous for all $g \in G$. Moreover, evaluation gives us a map $G \to \hat{\hat{G}}$.
+
+\begin{thm}[Pontryagin duality]\index{Pontryagin duality}
+ If $G$ is locally compact, then $G \to \hat{\hat{G}}$ is an isomorphism.
+\end{thm}
+Since this is a course on number theory, and not topological groups, we will not prove this.
+
+\begin{prop}
+ Let $G$ be a finite abelian group. Then $|\hat{G}| = |G|$, and $G$ and $\hat{G}$ are in fact isomorphic, but not canonically.
+\end{prop}
+
+\begin{proof}
+ By the classification of finite abelian groups, we know $G$ is a product of cyclic groups. So it suffices to prove the result for cyclic groups $\Z/N\Z$, and the result is clear since
+ \[
+ \widehat{\Z/N\Z} = \mu_N \cong \Z/N\Z.\qedhere
+ \]
+\end{proof}
+
+\subsection{Fourier transforms}
+Equipped with the notion of characters, we can return to our original goal of understanding Fourier transforms. We shall first recap the familiar definitions of Fourier transforms in specific cases, and then come up with the definition of Fourier transforms in full generality. In the meantime, we will get some pesky analysis out of the way.
+
+\begin{defi}[Fourier transform]\index{Fourier transform}\index{$\hat{f}$}
+ Let $f: \R \to \C$ be an $L^1$ function, i.e.\ $\int |f| \;\d x < \infty$. The \emph{Fourier transform} is
+ \[
+ \hat{f}(y) = \int_{-\infty}^\infty e^{-2\pi i xy} f(x) \;\d x = \int_{-\infty}^\infty \chi_y(x)^{-1} f(x) \;\d x.
+ \]
+ This is a bounded and continuous function on $\R$.
+\end{defi}
+We will see that the ``correct'' way to think about the Fourier transform is to view it as a function on $\hat{\R}$ instead of $\R$.
+
+In general, there is not much we can say about how well-behaved $\hat{f}$ will be. In particular, we cannot expect the ``Fourier inversion theorem'' to hold for general $L^1$ functions. If we like analysis, then we can figure out exactly how much we need to assume about $\hat{f}$. But we don't. We chicken out and only consider functions that decay really fast at infinity. This makes our life much easier.
+\begin{defi}[Schwartz space]\index{Schwartz space}\index{$\mathcal{S}(\R)$}
+ The \term{Schwartz space} is defined by
+ \[
+ \mathcal{S}(\R) = \{f \in C^\infty(\R) : x^n f^{(k)}(x) \to 0\text{ as }x \to \pm\infty\text{ for all }k, n\geq 0\}.
+ \]
+\end{defi}
+
+\begin{eg}
+ The function
+ \[
+ f(x) = e^{-\pi x^2}
+ \]
+ is in the Schwartz space.
+\end{eg}
+
+One can prove the following:
+\begin{prop}
+ If $f \in \mathcal{S}(\R)$, then $\hat{f} \in \mathcal{S}(\R)$, and the \term{Fourier inversion formula}
+ \[
+ \hat{\hat{f}}(x) = f(-x)
+ \]
+ holds.
+\end{prop}
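+
+For example, one can check directly that the Gaussian $f(x) = e^{-\pi x^2}$ above satisfies $\hat{f} = f$. Differentiating under the integral sign and integrating by parts,
+\[
+ \hat{f}'(y) = \int_{-\infty}^\infty (-2\pi i x) e^{-2\pi i xy} e^{-\pi x^2} \;\d x = i\int_{-\infty}^\infty e^{-2\pi i xy} \frac{\d}{\d x}\left(e^{-\pi x^2}\right) \d x = -2\pi y \hat{f}(y).
+\]
+So $\hat{f}(y) = \hat{f}(0) e^{-\pi y^2}$, and $\hat{f}(0) = \int_{-\infty}^\infty e^{-\pi x^2} \;\d x = 1$. Since $f$ is even, this is consistent with the inversion formula.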
+
+Everything carries over when we replace $\R$ with $\R^n$, as long as we promote both $x$ and $y$ into vectors.
+
+We can also take the Fourier transform of functions defined on $G = \R/\Z$. For $n \in \Z$, we let $\chi_n \in \hat{G}$ by
+\[
+ \chi_n(x) = e^{2\pi i nx}.
+\]
+These are exactly all the elements of $\hat{G}$, and $\hat{G} \cong \Z$. We then define the Fourier coefficients of a periodic function $f: \R/\Z \to \C$ by
+\[
+ c_n(f) = \int_0^1 e^{-2\pi i n x} f(x) \;\d x = \int_{\R/\Z} \chi_n(x)^{-1} f(x) \;\d x.
+\]
+Again, under suitable regularity conditions on $f$, e.g.\ if $f \in C^\infty(\R/\Z)$, we have
+\begin{prop}
+ \[
+ f(x) = \sum_{n \in \Z} c_n(f) e^{2\pi i nx} = \sum_{n \in \Z \cong \hat{G}} c_n(f) \chi_n(x).
+ \]
+\end{prop}
+This is the Fourier inversion formula for $G = \R/\Z$.
+
+Finally, in the case when $G = \Z/N\Z$, we can define
+\begin{defi}[Discrete Fourier transform]\index{discrete Fourier transform}\index{Fourier transform!discrete}\index{$\hat{f}$}
+ Given a function $f: \Z/N\Z \to \C$, we define the \emph{Fourier transform} $\hat{f}: \mu_N \to \C$ by
+ \[
+ \hat{f}(\zeta) = \sum_{a \in \Z/N\Z} \zeta^{-a} f(a).
+ \]
+\end{defi}
+
+This time there aren't convergence problems to worry about, so we can quickly prove this result:
+\begin{prop}
+ For a function $f: \Z/N\Z \to \C$, we have
+ \[
+ f(x) = \frac{1}{N} \sum_{\zeta \in \mu_N} \zeta^x \hat{f}(\zeta).
+ \]
+\end{prop}
+
+\begin{proof}
+ We see that both sides are linear in $f$, and we can write each function $f$ as
+ \[
+ f = \sum_{a \in \Z/N\Z} f(a) \delta_a,
+ \]
+ where
+ \[
+ \delta_a(x) =
+ \begin{cases}
+ 1 & x = a\\
+ 0 & x \not= a
+ \end{cases}.
+ \]
+ So we wlog $f = \delta_a$. Thus we have
+ \[
+ \hat{f}(\zeta) = \zeta^{-a},
+ \]
+ and the RHS is
+ \[
+ \frac{1}{N} \sum_{\zeta \in \mu_N} \zeta^{x - a}.
+ \]
+ We now note the fact that
+ \[
+ \sum_{\zeta \in \mu_N} \zeta^k =
+ \begin{cases}
+ N & k \equiv 0 \pmod N\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ So we know that the RHS is equal to $\delta_a$, as desired.
+\end{proof}
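+
+Both formulas are easy to check on a machine. A Python sketch (our own names; we index $\mu_N$ by $\zeta_k = e^{2\pi i k/N}$):

```python
import cmath

def dft(f, N):
    """fhat(zeta) = sum_a zeta^(-a) f(a), for each N-th root of unity."""
    zetas = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
    return [sum(z ** (-a) * f[a] for a in range(N)) for z in zetas]

def inverse_dft(fhat, N):
    """f(x) = (1/N) sum over zeta of zeta^x fhat(zeta)."""
    zetas = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
    return [sum(z ** x * c for z, c in zip(zetas, fhat)) / N
            for x in range(N)]
```

+Then \texttt{inverse\_dft(dft(f, N), N)} recovers \texttt{f} up to floating-point error.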
+
+It is now relatively clear what the general picture should be, except that we need a way to integrate functions defined on an abelian group. Since we are not doing analysis, we shall not be very precise about what we mean:
+
+\begin{defi}[Haar measure]\index{Haar measure}
+ Let $G$ be a topological group. A \emph{Haar measure} is a left translation-invariant Borel measure on $G$ satisfying some regularity conditions (e.g.\ being finite on compact sets).
+\end{defi}
+
+\begin{thm}
+ Let $G$ be a locally compact abelian group. Then there is a Haar measure on $G$, unique up to scaling.
+\end{thm}
+
+\begin{eg}
+ On $G = \R$, the Haar measure is the usual Lebesgue measure.
+\end{eg}
+
+\begin{eg}
+ If $G$ is discrete, then the Haar measure is the counting measure, so that
+ \[
+ \int f\;\d g = \sum_{g \in G} f(g).
+ \]
+\end{eg}
+
+\begin{eg}
+ If $G = \R_{> 0}^\times$, then the integral given by the Haar measure is
+ \[
+ \int f(x) \frac{\d x}{x},
+ \]
+ since $\frac{\d x}{x}$ is invariant under multiplication of $x$ by a constant.
+\end{eg}
+
+Now we can define the general Fourier transform.
+\begin{defi}[Fourier transform]\index{Fourier transform}\index{$\hat{f}$}
+ Let $G$ be a locally compact abelian group with a Haar measure $\d g$, and let $f: G \to \C$ be a continuous $L^1$ function. The \emph{Fourier transform} $\hat{f}: \hat{G} \to \C$ is given by
+ \[
+ \hat{f}(\chi) = \int_G \chi(g)^{-1}f(g) \;\d g.
+ \]
+\end{defi}
+
+It is possible to prove the following theorem:
+\begin{thm}[Fourier inversion theorem]\index{Fourier inversion theorem}
+ Given a locally compact abelian group $G$ with a fixed Haar measure, there is some constant $C$ such that for ``suitable'' $f: G \to \C$, we have
+ \[
+ \hat{\hat{f}}(g) = C f(-g),
+ \]
+ using the canonical isomorphism $G \to \hat{\hat{G}}$.
+\end{thm}
+This constant is necessary, because the measure is only defined up to a multiplicative constant.
+
+One of the most important results of this investigation is the following result:
+\begin{thm}[Poisson summation formula]\index{Poisson summation formula}
+ Let $f \in \mathcal{S}(\R^n)$. Then
+ \[
+ \sum_{a \in \Z^n} f(a) = \sum_{b \in \Z^n} \hat{f}(b).
+ \]
+\end{thm}
+
+\begin{proof}
+ Let
+ \[
+ g(x) = \sum_{a \in \Z^n} f(x + a).
+ \]
+ This is now a function that is invariant under translation of $\Z^n$. It is easy to check this is a well-defined $C^\infty$ function on $\R^n/\Z^n$, and so has a Fourier series. We write
+ \[
+ g(x) = \sum_{b \in \Z^n} c_b(g) e^{2\pi i b \cdot x},
+ \]
+ with
+ \[
+ c_b(g) = \int_{\R^n/\Z^n} e^{-2\pi i b\cdot x} g(x) \;\d x = \sum_{a \in \Z^n} \int_{[0, 1]^n} e^{-2\pi i b\cdot x} f(x + a) \;\d x.
+ \]
+ We can then do a change of variables $x \mapsto x - a$, which does not change the exponential term, and get that
+ \[
+ c_b(g) = \int_{\R^n} e^{-2\pi i b \cdot x} f(x) \;\d x = \hat{f}(b).
+ \]
+ Finally, we have
+ \[
 \sum_{a \in \Z^n} f(a) = g(0) = \sum_{b \in \Z^n} c_b(g) = \sum_{b \in \Z^n} \hat{f}(b).\qedhere
+ \]
+\end{proof}
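We can sanity-check Poisson summation numerically with the Gaussian $f(x) = e^{-\pi t x^2}$, whose Fourier transform is $\hat{f}(\xi) = t^{-1/2} e^{-\pi \xi^2/t}$ as computed in the proof of the $\Theta$ functional equation below. A Python sketch (the value $t = \frac{1}{2}$ is an arbitrary choice):

```python
import math

def poisson_check(t, terms=50):
    # f(x) = exp(-pi t x^2) has Fourier transform
    # fhat(xi) = t^{-1/2} exp(-pi xi^2 / t)
    lhs = sum(math.exp(-math.pi * t * n * n) for n in range(-terms, terms + 1))
    rhs = sum(math.exp(-math.pi * n * n / t) / math.sqrt(t)
              for n in range(-terms, terms + 1))
    return lhs, rhs

lhs, rhs = poisson_check(0.5)
assert abs(lhs - rhs) < 1e-12
```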
+
+\subsection{Mellin transform and \tph{$\Gamma$}{Gamma}{Γ}-function}
+We unfortunately have a bit more analysis to do, which we will use a lot later on. This is the \emph{Mellin transform}.
+
+\begin{defi}[Mellin transform]\index{Mellin transform}\index{$M(f, s)$}
+ Let $f: \R_{>0} \to\C$ be a function. We define
+ \[
+ M(f, s) = \int_0^\infty y^s f(y) \;\frac{\d y}{y},
+ \]
+ whenever this converges.
+\end{defi}
We want to think of this as an analytic function of $s$. The following lemma tells us when we can actually do so:
+\begin{lemma}
+ Suppose $f: \R_{>0} \to \C$ is such that
+ \begin{itemize}
 \item $y^N f(y) \to 0$ as $y \to \infty$ for all $N \in \Z$;
 \item there exists $m$ such that $|y^m f(y)|$ is bounded as $y \to 0$.
+ \end{itemize}
+ Then $M(f, s)$ converges and is an analytic function of $s$ for $\Re(s) > m$.
+\end{lemma}
+The conditions say $f$ is rapidly decreasing at $\infty$ and has moderate growth at $0$.
+
+\begin{proof}
+ We know that for any $0 < r < R < \infty$, the integral
+ \[
+ \int_r^R y^s f(y) \;\frac{\d y}{y}
+ \]
+ is analytic for all $s$ since $f$ is continuous.
+
+ By assumption, we know $\int_R^\infty \to 0$ as $R \to \infty$ uniformly on compact subsets of $\C$. So we know
+ \[
+ \int_r^\infty y^s f(y) \;\frac{\d y}{y}
+ \]
+ converges uniformly on compact subsets of $\C$.
+
+ On the other hand, the integral $\int_0^r$ as $r \to 0$ converges uniformly on compact subsets of $\{s \in \C: \Re(s) > m\}$ by the other assumption. So the result follows.
+\end{proof}
+
+This transform might seem a bit strange, but we can think of this as an analytic continuation of the Fourier transform.
+\begin{eg}
+ Suppose we are in the rather good situation that
+ \[
+ \int_0^\infty |f| \;\frac{\d y}{y} < \infty.
+ \]
+ In practice, this will hardly ever be the case, but this is a good place to start exploring. In this case, the integral actually converges on $i\R$, and equals the Fourier transform of $f \in L^1(G) = L^1(\R^\times_{>0})$. Indeed, we find
+ \[
+ \hat{G} = \{y \mapsto y^{i\sigma} : \sigma \in \R\} \cong \R,
+ \]
+ and $\frac{\d y}{y}$ is just the invariant measure on $G$. So the formula for the Mellin transform is exactly the formula for the Fourier transform, and we can view the Mellin transform as an analytic continuation of the Fourier transform.
+\end{eg}
+We now move on to explore properties of the Mellin transform. When we make a change of variables $y \leftrightarrow \alpha y$, by inspection of the formula, we find
+\begin{prop}
+ \[
+ M(f(\alpha y), s) = \alpha^{-s} M(f, s)
+ \]
+ for $\alpha > 0$.
+\end{prop}
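We can verify this scaling property numerically, say for $f(y) = e^{-y}$, $\alpha = 2$ and $s = 3$, where $M(e^{-y}, 3) = \int_0^\infty y^2 e^{-y}\;\d y = 2$, so the transform of $e^{-2y}$ should be $2^{-3} \cdot 2 = \frac{1}{4}$. A Python sketch using a simple Simpson quadrature:

```python
import math

def simpson(g, a, b, n):
    # composite Simpson's rule on [a, b] with an even number n of subintervals
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

# M(f(alpha y), s) with f(y) = e^{-y}, alpha = 2, s = 3:
# the integrand is y^s e^{-2y} / y = y^2 e^{-2y}
val = simpson(lambda y: y ** 2 * math.exp(-2 * y), 0.0, 30.0, 10000)
# alpha^{-s} M(f, s) = 2^{-3} * 2 = 0.25
assert abs(val - 0.25) < 1e-6
```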
+
+The following is a very important example of the Mellin transform:
+\begin{defi}[$\Gamma$ function]\index{Gamma function}\index{$\Gamma$ function}
+ The \emph{$\Gamma$ function} is the Mellin transform of
+ \[
+ f(y) = e^{-y}.
+ \]
+ Explicitly, we have
+ \[
+ \Gamma(s) = \int_0^\infty e^{-y} y^s \;\frac{\d y}{y}.
+ \]
+\end{defi}
+By general theory, we know this is analytic for $\Re(s) > 0$.
+
+If we just integrate by parts, we find
+\[
+ \Gamma(s) = \int_0^\infty e^{-y} y^{s - 1}\;\d y = \left[e^{-y} \frac{y^s}{s} \right]_0^\infty + \frac{1}{s}\int_0^\infty e^{-y} y^s \;\d y = \frac{1}{s}\Gamma(s + 1).
+\]
+So we find that
+\begin{prop}
+ \[
+ s \Gamma(s) = \Gamma(s + 1).
+ \]
+\end{prop}
+Moreover, we can compute
+\[
+ \Gamma(1) = \int_0^\infty e^{-y}\;\d y = 1.
+\]
+So we get
+\begin{prop}
+ For an integer $n \geq 1$, we have
+ \[
+ \Gamma(n) = (n - 1)!.
+ \]
+\end{prop}
+In general, iterating the above formula, we find
+\[
+ \Gamma(s) = \frac{1}{s (s + 1) \cdots (s + N - 1)} \Gamma(s + N).
+\]
Note that the right hand side makes sense for $\Re(s) > -N$ (except at non-positive integer points). So this allows us to extend $\Gamma(s)$ to a meromorphic function on $\{\Re(s) > -N\}$, with simple poles at $0, -1, \cdots, 1 - N$ of residues
+\[
+ \res_{s = 1 - N} \Gamma(s) = \frac{(-1)^{N - 1}}{(N - 1)!}.
+\]
Of course, since $N$ was arbitrary, we know $\Gamma(s)$ extends to a meromorphic function on $\C$, holomorphic on $\C \setminus \Z_{\leq 0}$.
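The recursion gives a practical way of evaluating $\Gamma$ to the left of the imaginary axis. A Python sketch, checking the known value $\Gamma(-\frac{1}{2}) = -2\sqrt{\pi}$ and the residue formula at $s = -3$ (the helper \texttt{gamma\_continued} is ours; \texttt{math.gamma} supplies $\Gamma$ on the right half-plane):

```python
import math

def gamma_continued(s, N):
    # Gamma(s) = Gamma(s + N) / (s (s+1) ... (s + N - 1)), valid for Re(s) > -N
    prod = 1.0
    for k in range(N):
        prod *= s + k
    return math.gamma(s + N) / prod

# Gamma(-1/2) = -2 sqrt(pi)
assert abs(gamma_continued(-0.5, 3) - (-2 * math.sqrt(math.pi))) < 1e-9

# residue at s = 1 - N: (s - (1 - N)) Gamma(s) -> (-1)^{N-1} / (N-1)!
N = 4  # pole at s = -3
eps = 1e-7
approx_res = eps * gamma_continued(1 - N + eps, N + 1)
assert abs(approx_res - (-1) ** (N - 1) / math.factorial(N - 1)) < 1e-4
```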
+
+Here are two facts about the $\Gamma$ function that we are not going to prove, because, even if the current experience might suggest otherwise, this is not an analysis course.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item The \term{Weierstrass product}: We have
+ \[
+ \Gamma(s)^{-1} = e^{\gamma s} s \prod_{n \geq 1} \left(1 + \frac{s}{n}\right) e^{-s/n}
+ \]
+ for all $s \in \C$. In particular, $\Gamma(s)$ is never zero. Here $\gamma$ is the \term{Euler-Mascheroni constant}, given by
+ \[
+ \gamma = \lim_{n \to \infty}\left(1 + \frac{1}{2} + \cdots + \frac{1}{n} - \log n\right).
+ \]
+ \item \emph{Duplication and reflection formulae}\index{duplication formula}\index{reflection formula}:
+ \[
+ \pi^{\frac{1}{2}} \Gamma(2s) = 2^{2s - 1} \Gamma(s) \Gamma\left(s + \frac{1}{2}\right)
+ \]
+ and
+ \[
 \Gamma(s) \Gamma(1 - s) = \frac{\pi}{\sin \pi s}.
+ \]
+ \end{enumerate}
+\end{prop}
+
+The main reason why we care about the Mellin transform in this course is that a lot of \term{Dirichlet series} are actually Mellin transforms of some functions. Suppose we have a Dirichlet series
+\[
+ \sum_{n = 1}^\infty \frac{a_n}{n^s},
+\]
+where the $a_n$ grow not too quickly. Then we can write
+\begin{align*}
+ (2\pi)^{-s} \Gamma(s)\sum_{n = 1}^\infty \frac{a_n}{n^s} &= \sum_{n = 1}^\infty a_n (2\pi n)^{-s} M(e^{-y}, s)\\
 &= \sum_{n = 1}^\infty a_n M(e^{-2 \pi n y}, s) \\
+ &= M(f, s),
+\end{align*}
+where we set
+\[
+ f(y) = \sum_{n \geq 1} a_n e^{-2\pi n y}.
+\]
+Since we know about the analytic properties of the $\Gamma$ function, by understanding $M(f, s)$, we can deduce useful properties about the Dirichlet series itself.
+
+\section{Riemann \tph{$\zeta$}{zeta}{ζ}-function}
+We ended the previous chapter by briefly mentioning Dirichlet series. The first and simplest example one can write down is the Riemann $\zeta$-function.
+\begin{defi}[Riemann $\zeta$-function]\index{Riemann $\zeta$-function}
+ The \emph{Riemann $\zeta$-function} is defined by
+ \[
+ \zeta(s) = \sum_{n \geq 1} \frac{1}{n^s}
+ \]
+ for $\Re(s) > 1$.
+\end{defi}
+
+This $\zeta$-function is related to prime numbers by the following famous formula:
\begin{prop}[Euler product formula]\index{Euler product formula}
+ We have
+ \[
+ \zeta(s) = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}}.
+ \]
+\end{prop}
+
+\begin{proof}
+ Euler's proof was purely formal, without worrying about convergence. We simply note that
+ \[
+ \prod_{p\text{ prime}} \frac{1}{1 - p^{-s}} = \prod_p (1 + p^{-s} + (p^2)^{-s} + \cdots) = \sum_{n \geq 1} n^{-s},
+ \]
+ where the last equality follows by unique factorization in $\Z$. However, to prove this properly, we need to be a bit more careful and make sure things converge.
+
 Saying that the infinite product $\prod_p$ converges is the same as saying that $\sum_p p^{-s}$ converges, by basic analysis, which is fine since we know $\zeta(s)$ converges absolutely when $\Re(s) > 1$. Then we can look at the difference
+ \begin{align*}
+ \zeta(s) - \prod_{p \leq X} \frac{1}{1 - p^{-s}} &= \zeta(s) - \prod_{p \leq X} (1 + p^{-s} + p^{-2s} + \cdots)\\
 &= \sum_{n \in \mathcal{N}_X} n^{-s},
+ \end{align*}
 where $\mathcal{N}_X$ is the set of all $n \geq 1$ with at least one prime factor $> X$. In particular, we know
+ \[
+ \left|\zeta(s) - \prod_{p \leq X} \frac{1}{1 - p^{-s}}\right| \leq \sum_{n \geq X} |n^{-s}| \to 0
+ \]
+ as $X \to \infty$. So the result follows.
+\end{proof}
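The proof suggests a numerical experiment: the truncated product over $p \leq X$ should approach $\zeta(s)$ as $X$ grows. A Python sketch at $s = 2$, where $\zeta(2) = \frac{\pi^2}{6}$:

```python
import math

def primes_up_to(X):
    # a simple sieve of Eratosthenes
    sieve = [True] * (X + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(X) + 1):
        if sieve[p]:
            for q in range(p * p, X + 1, p):
                sieve[q] = False
    return [p for p in range(2, X + 1) if sieve[p]]

s = 2.0
partial_product = 1.0
for p in primes_up_to(1000):
    partial_product *= 1 / (1 - p ** (-s))

# zeta(2) = pi^2 / 6
assert abs(partial_product - math.pi ** 2 / 6) < 1e-3
```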
The Euler product formula is the beginning of the connection between the $\zeta$-function and the distribution of primes. For example, as the product converges for $\Re(s) > 1$, we know in particular that $\zeta(s) \not= 0$ for all $s$ with $\Re(s) > 1$. Whether or not $\zeta(s)$ vanishes elsewhere is a less straightforward matter, and this involves quite a lot of number theory.
+
+We will, however, not care about primes in this course. Instead, we look at some further analytic properties of the $\zeta$ function. To do so, we first write it as a Mellin transform.
+\begin{thm}
+ If $\Re(s) > 1$, then
+ \[
+ (2\pi)^{-s} \Gamma(s) \zeta(s) = \int_0^\infty \frac{y^s}{e^{2\pi y} - 1} \frac{\d y}{y} = M(f, s),
+ \]
+ where
+ \[
+ f(y) = \frac{1}{e^{2 \pi y} - 1}.
+ \]
+\end{thm}
+This is just a simple computation.
+
+\begin{proof}
+ We can write
+ \[
+ f(y) = \frac{e^{-2\pi y}}{1 - e^{-2 \pi y}} = \sum_{n \geq 1} e^{-2\pi n y}
+ \]
+ for $y > 0$.
+
+ As $y \to 0$, we find
+ \[
+ f(y) \sim \frac{1}{2\pi y}.
+ \]
+ So when $\Re(s) > 1$, the Mellin transform converges, and equals
+ \[
+ \sum_{n \geq 1} M(e^{-2\pi n y}, s) = \sum_{n \geq 1} (2\pi n)^{-s} M(e^{-y}, s) = (2\pi)^{-s} \Gamma(s) \zeta(s).\qedhere
+ \]
+\end{proof}
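We can check this identity numerically at, say, $s = 2$, where the right-hand side is $(2\pi)^{-2} \Gamma(2) \zeta(2) = \frac{\zeta(2)}{4\pi^2} = \frac{1}{24}$. A Python sketch; note the integrand $\frac{y}{e^{2\pi y} - 1}$ extends continuously to $y = 0$ with value $\frac{1}{2\pi}$:

```python
import math

def integrand(y):
    # y^s / (e^{2 pi y} - 1) * (1/y) at s = 2, i.e. y / (e^{2 pi y} - 1)
    if y == 0.0:
        return 1 / (2 * math.pi)  # limit as y -> 0
    return y / (math.exp(2 * math.pi * y) - 1)

def simpson(g, a, b, n):
    # composite Simpson's rule on [a, b] with an even number n of subintervals
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

# (2 pi)^{-2} Gamma(2) zeta(2) = 1/24; the tail beyond y = 10 is negligible
val = simpson(integrand, 0.0, 10.0, 20000)
assert abs(val - 1 / 24) < 1e-9
```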
+
+\begin{cor}
+ $\zeta(s)$ has a meromorphic continuation to $\C$ with a simple pole at $s = 1$ as its only singularity, and
+ \[
+ \res_{s = 1}\zeta(s) = 1.
+ \]
+\end{cor}
+
+\begin{proof}
+ We can write
+ \[
+ M(f, s) = M_0 + M_\infty = \left(\int_0^1 + \int_1^\infty\right) \frac{y^s}{e^{2\pi y} - 1} \frac{\d y}{y}.
+ \]
+ The second integral $M_\infty$ is convergent for all $s \in \C$, hence defines a holomorphic function.
+
+ For any fixed $N$, we can expand
+ \[
+ f(y) = \sum_{n = -1}^{N - 1} c_n y^n + y^N g_N(y)
+ \]
 for some $g_N \in C^\infty(\R)$, as $f$ has a simple pole at $y = 0$, and
+ \[
+ c_{-1} = \frac{1}{2\pi}.
+ \]
+ So for $\Re(s) > 1$, we have
+ \begin{align*}
 M_0 &= \sum_{n = -1}^{N - 1} c_n \int_0^1 y^{n + s - 1} \;\d y + \int_0^1 y^{N + s - 1} g_N(y) \;\d y\\
 &= \sum_{n = -1}^{N - 1} \frac{c_n}{s + n} + \int_0^1 g_N(y) y^{s + N - 1} \;\d y.
+ \end{align*}
+ We now notice that this formula makes sense for $\Re(s) > -N$. Thus we have found a meromorphic continuation of
+ \[
+ (2\pi)^{-s} \Gamma(s) \zeta(s)
+ \]
 to $\{\Re(s) > -N\}$, with at worst simple poles at $s = 1 - N, 2 - N, \cdots, 0, 1$. Also, we know $\Gamma(s)$ has a simple pole at $s = 0, -1, -2, \cdots$. So $\zeta(s)$ is analytic at $s = 0, -1, -2, \cdots$. Since $c_{-1} = \frac{1}{2\pi}$ and $\Gamma(1) = 1$, we get
+ \[
+ \res_{s = 1} \zeta(s) = 1.\qedhere
+ \]
+\end{proof}
+
+Now we note that by the Euler product formula, if there are only finitely many primes, then $\zeta(s)$ is certainly analytic everywhere. So we deduce
+\begin{cor}
+ There are infinitely many primes.
+\end{cor}
+
+Given a function $\zeta$, it is natural to ask what values it takes. In particular, we might ask what values it takes at integers. There are many theorems and conjectures concerning the values at integers of $L$-functions (which are Dirichlet series like the $\zeta$-function). These properties relate to subtle number-theoretic quantities. For example, the values of $\zeta(s)$ at negative integers are closely related to the class numbers of the cyclotomic fields $\Q(\zeta_p)$. These are also related to early (partial) proofs of Fermat's last theorem, and things like the Birch--Swinnerton-Dyer conjecture on elliptic curves.
+
+We shall take a tiny step by figuring out the values of $\zeta(s)$ at negative integers. They are given by the \emph{Bernoulli numbers}.
+\begin{defi}[Bernoulli numbers]\index{Bernoulli numbers}\index{$B_n$}
+ The \emph{Bernoulli numbers} are defined by a generating function
+ \[
+ \sum_{n = 0}^\infty B_n \frac{t^n}{n!} = \frac{t}{e^t - 1} = \left(1 + \frac{t}{2!} + \frac{t^2}{3!} + \cdots\right)^{-1}.
+ \]
+\end{defi}
+
+Clearly, all Bernoulli numbers are rational. We can directly compute
+\[
+ B_0 = 1, \quad B_1 = -\frac{1}{2}, \cdots.
+\]
+The first thing to note about this is the following:
+\begin{prop}
+ $B_n = 0$ if $n$ is odd and $n \geq 3$.
+\end{prop}
+
+\begin{proof}
+ Consider
+ \[
+ f(t) = \frac{t}{e^t - 1} + \frac{t}{2} = \sum_{n \geq 0, n \not= 1} B_n \frac{t^n}{n!}.
+ \]
+ We find that
+ \[
+ f(t) = \frac{t}{2} \frac{e^t + 1}{e^t - 1} = f(-t).
+ \]
+ So all the odd coefficients must vanish.
+\end{proof}
+
+\begin{cor}
+ We have
+ \[
+ \zeta(0) = B_1 = -\frac{1}{2},\quad \zeta(1 - n)= - \frac{B_n}{n}
+ \]
 for $n > 1$. In particular, for every integer $n \geq 1$, we know $\zeta(1 - n) \in \Q$, and it vanishes if $n > 1$ is odd.
+\end{cor}
+
+\begin{proof}
+ We know
+ \[
+ (2\pi)^{-s} \Gamma(s) \zeta(s)
+ \]
+ has a simple pole at $s = 1 - n$, and the residue is $c_{n - 1}$, where
+ \[
+ \frac{1}{e^{2\pi y} - 1} = \sum_{n \geq -1} c_n y^n.
+ \]
+ So we know
+ \[
+ c_{n - 1} = (2\pi)^{n - 1} \frac{B_n}{n!}.
+ \]
+ We also know that
+ \[
+ \res_{s = 1 - n} \Gamma(s) = \frac{(-1)^{n - 1}}{(n - 1)!},
+ \]
 so we get
+ \[
+ \zeta(1 - n) = (-1)^{n - 1} \frac{B_n}{n}.
+ \]
+ If $n = 1$, then this gives $-\frac{1}{2}$. If $n$ is odd but $> 1$, then this vanishes. If $n$ is even, then this is $-\frac{B_n}{n}$, as desired.
+\end{proof}
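The recurrence $\sum_{k = 0}^{n} \binom{n + 1}{k} B_k = 0$ for $n \geq 1$, obtained by multiplying the generating function by $e^t - 1$ and comparing coefficients, lets us compute the $B_n$ exactly, and hence the values $\zeta(1 - n)$. A Python sketch with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    # B_0, ..., B_{n_max} from sum_{k=0}^{n} C(n+1, k) B_k = 0 for n >= 1
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n))
                 / (n + 1))
    return B

B = bernoulli(12)
assert B[1] == Fraction(-1, 2)
assert all(B[n] == 0 for n in range(3, 13, 2))  # odd n >= 3 vanish

def zeta_neg(n):
    # zeta(1 - n) = (-1)^{n-1} B_n / n for integers n >= 1
    return (-1) ** (n - 1) * B[n] / n

assert zeta_neg(1) == Fraction(-1, 2)   # zeta(0)
assert zeta_neg(2) == Fraction(-1, 12)  # zeta(-1)
assert zeta_neg(4) == Fraction(1, 120)  # zeta(-3)
```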
+
+To end our discussion on the $\zeta$-function, we shall prove a functional equation, relating $\zeta(s)$ to $\zeta(1 - s)$. To do so, we relate the $\zeta$-function to \emph{another} Mellin transform. We define\index{$\Theta$}
+\[
+ \Theta(y) = \sum_{n \in \Z} e^{-\pi n^2 y} = 1 + 2 \sum_{n \geq 1} e^{-\pi n^2 y}.
+\]
This is convergent for $y > 0$. So we can write\index{$\vartheta$}
+\[
+ \Theta(y) = \vartheta(iy),
+\]
+where
+\[
+ \vartheta(z) = \sum_{n \in \Z} e^{\pi i n^2 z},
+\]
+which is analytic for $|e^{\pi i z}| < 1$, i.e.\ $\Im(z) > 0$. This is \term{Jacobi's $\vartheta$-function}. This function is also important in algebraic geometry, representation theory, and even applied mathematics. But we will just use it for number theory. We note that
+\[
+ \Theta(y) \to 1
+\]
+as $y \to \infty$, so we can't take its Mellin transform. What we \emph{can} do is
+\begin{prop}
+ \[
+ M\left(\frac{\Theta(y) - 1}{2}, \frac{s}{2}\right) = \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s).
+ \]
+\end{prop}
+
+The proof is again just do it.
+\begin{proof}
+ The left hand side is
+ \[
+ \sum_{n \geq 1} M\left(e^{-\pi n^2 y}, \frac{s}{2}\right) = \sum_{n \geq 1} (\pi n^2) ^{-s/2} M\left(e^{-y}, \frac{s}{2}\right) = \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s).\qedhere
+ \]
+\end{proof}
+
+To produce a functional equation for $\zeta$, we first do it for $\Theta$.
+
+\begin{thm}[Functional equation for $\Theta$-function]
+ If $y > 0$, then
+ \[
+ \Theta\left(\frac{1}{y}\right) = y^{1/2} \Theta(y)\tag{$*$},
+ \]
+ where we take the positive square root. More generally, taking the branch of $\sqrt{\;}$ which is positive real on the positive real axis, we have
+ \[
+ \vartheta \left(-\frac{1}{z}\right) = \left(\frac{z}{i}\right)^{1/2} \vartheta(z).
+ \]
+\end{thm}
+
+The proof is rather magical.
+\begin{proof}
+ By analytic continuation, it suffices to prove $(*)$. Let
+ \[
+ g_t(x) = e^{-\pi t x^2} = g_1(t^{1/2} x).
+ \]
+ In particular,
+ \[
+ g_1(x) = e^{-\pi x^2}.
+ \]
+ Now recall that $\hat{g}_1 = g_1$. Moreover, the Fourier transform of $f(\alpha x)$ is $\frac{1}{\alpha} \hat{f}(y/\alpha)$. So
+ \[
+ \hat{g}_t(y) = t^{-1/2} \hat{g}_1(t^{-1/2} y) = t^{-1/2} g_1(t^{-1/2} y) = t^{-1/2} e^{-\pi y^2/t}.
+ \]
+ We now apply the Poisson summation formula:
+ \[
+ \Theta(t) = \sum_{n \in \Z} e^{-\pi n^2 t} = \sum_{n \in \Z} g_t(n) = \sum_{n \in \Z} \hat{g}_t(n) = t^{-1/2} \Theta(1/t).\qedhere
+ \]
+\end{proof}
+
+Before we continue, we notice that most of the time, when we talk about the $\Gamma$-function, there are factors of $\pi$ floating around. So we can conveniently set
+\begin{notation}\index{$\Gamma_\R(s)$}\index{$\Gamma_\C(s)$}
+ \begin{align*}
 \Gamma_\R(s) &= \pi^{-s/2} \Gamma(s/2),\\
 \Gamma_\C(s) &= 2 (2\pi)^{-s} \Gamma(s).
+ \end{align*}
+ These are the real/complex \term{$\Gamma$-factors}\index{Gamma-factors}.
+\end{notation}
+
+We also define
+\begin{notation}
+ \[
+ Z(s) \equiv \Gamma_\R(s) \zeta(s) = \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s).
+ \]
+\end{notation}
+The theorem is then
+\begin{thm}[Functional equation for $\zeta$-function]
+ \[
+ Z(s) = Z(1 - s).
+ \]
+ Moreover, $Z(s)$ is meromorphic, with only poles at $s = 1$ and $0$.
+\end{thm}
+
+\begin{proof}
+ For $\Re(s) > 1$, we have
+ \begin{align*}
+ 2Z(s) &= M\left(\Theta(y) - 1, \frac{s}{2}\right) \\
+ &= \int_0^\infty [\Theta(y) - 1] y^{s/2} \frac{\d y}{y}\\
+ &= \left(\int_0^1 + \int_1^\infty\right)[\Theta(y) - 1] y^{s/2} \frac{\d y}{y}
+ \end{align*}
+ The idea is that using the functional equation for the $\Theta$-function, we can relate the $\int_0^1$ part and the $\int_1^\infty$ part. We have
+ \begin{align*}
 \int_0^1 (\Theta(y) - 1) y^{s/2} \frac{\d y}{y} &= \int_0^1 (\Theta(y) - y^{-1/2}) y^{s/2} \frac{\d y}{y} + \int_0^1 \left(y^{\frac{s - 1}{2}} - y^{s/2}\right) \frac{\d y}{y}\\
 &= \int_0^1 (y^{-1/2} \Theta(1/y) - y^{-1/2}) y^{s/2} \frac{\d y}{y} + \frac{2}{s - 1} - \frac{2}{s}.\\
+ \intertext{In the first term, we change variables $y \leftrightarrow 1/y$, and get}
+ &= \int_1^\infty y^{1/2} (\Theta(y) - 1) y^{-s/2} \frac{\d y}{y} + \frac{2}{s - 1} - \frac{2}{s}.
+ \end{align*}
+ So we find that
+ \[
+ 2Z(s) = \int_1^\infty (\Theta(y) - 1)(y^{s/2} + y^{\frac{1 - s}{2}}) \frac{\d y}{y} + \frac{2}{s - 1} - \frac{2}{s} = 2Z(1 - s).
+ \]
 Note that by separating out the $y^{\frac{s - 1}{2}} - y^{s/2}$ term, we have separated out the two poles of our function.
+\end{proof}
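We can test the functional equation (and the formula derived in the proof) numerically: the integral over $[1, \infty)$ converges rapidly since $\Theta(y) - 1 = O(e^{-\pi y})$. A Python sketch (Simpson quadrature; the truncation points are arbitrary choices):

```python
import math

def theta_minus_1(y, terms=20):
    # Theta(y) - 1 = 2 sum_{n >= 1} exp(-pi n^2 y)
    return 2 * sum(math.exp(-math.pi * n * n * y) for n in range(1, terms + 1))

def Z_from_integral(s, n=4000, b=12.0):
    # 2 Z(s) = int_1^inf (Theta(y) - 1)(y^{s/2} + y^{(1-s)/2}) dy/y
    #          + 2/(s-1) - 2/s, from the proof above
    g = lambda y: theta_minus_1(y) * (y ** (s / 2) + y ** ((1 - s) / 2)) / y
    h = (b - 1.0) / n
    integral = (g(1.0) + g(b) + sum((4 if i % 2 else 2) * g(1.0 + i * h)
                                    for i in range(1, n))) * h / 3
    return (integral + 2 / (s - 1) - 2 / s) / 2

# the representation is manifestly invariant under s -> 1 - s
assert abs(Z_from_integral(3.0) - Z_from_integral(-2.0)) < 1e-12
# compare with pi^{-s/2} Gamma(s/2) zeta(s) at s = 3
zeta3 = sum(k ** -3.0 for k in range(1, 200000))
assert abs(Z_from_integral(3.0) - math.pi ** -1.5 * math.gamma(1.5) * zeta3) < 1e-6
```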
+Later on, we will come across more $L$-functions, and we will prove functional equations in the same way.
+
+Note that we can write
+\[
 Z(s) = \Gamma_\R(s) \prod_{p\text{ prime}} \frac{1}{1 - p^{-s}},
+\]
+and the term $\Gamma_\R(s)$ should be thought of as the Euler factor for $p = \infty$, i.e.\ the Archimedean valuation on $\Q$.
+
+\section{Dirichlet \texorpdfstring{$L$}{L}-functions}
+We now move on to study a significant generalization of $\zeta$-functions, namely Dirichlet $L$-functions. While these are generalizations of the $\zeta$-function, it turns out the $\zeta$ function is a very particular kind of $L$-function. For example, most $L$-functions are actually analytic on all of $\C$, except for (finite multiples of) the $\zeta$-function.
+
+Recall that a Dirichlet series is a series of the form
+\[
+ \sum_{n = 1}^\infty \frac{a_n}{n^s}.
+\]
+A Dirichlet $L$-function is a Dirichlet series whose coefficients come from \emph{Dirichlet characters}.
+
+\begin{defi}[Dirichlet characters]\index{Dirichlet character}
+ Let $N \geq 1$. A \emph{Dirichlet character mod $N$} is a character $\chi: (\Z/N\Z)^\times \to \C^\times$.
+
+ As before, we write $\widehat{(\Z/N\Z)^\times}$ for the group of characters.
+\end{defi}
+Note that in the special case $N = 1$, we have
+\[
+ \Z/N\Z = \{0 = 1\} = (\Z/N\Z)^\times,
+\]
+and so $\widehat{(\Z/N\Z)^\times} \cong \{1\}$, and the only Dirichlet character is identically $1$.
+
+Not all characters are equal. Some are less exciting than others. Suppose $\chi$ is a character mod $N$, and we have some integer $d > 1$. Then we have the reduction mod $N$ map
+\[
+ \begin{tikzcd}
+ (\Z/Nd\Z)^\times \ar[r, two heads] & (\Z/N\Z)^\times,
+ \end{tikzcd}
+\]
and we can compose $\chi$ with this to get a character mod $Nd$. This is a rather boring character, because its value at $x \in (\Z/Nd\Z)^\times$ depends only on $x$ mod $N$.
+\begin{defi}[Primitive character]\index{primitive character}\index{character!primitive}
 We say a character $\chi \in \widehat{(\Z/N\Z)^\times}$ is \emph{primitive} if there is no $M < N$ with $M \mid N$ and $\chi' \in \widehat{(\Z/M\Z)^\times}$ such that
+ \[
+ \chi = \chi' \circ (\text{reduction mod $M$}).
+ \]
+\end{defi}
+
+Similarly we define
+\begin{defi}[Equivalent characters]\index{equivalent characters}\index{characters!equivalent}
+ We say characters $\chi_1 \in \widehat{(\Z/N_1\Z)^\times}$ and $\chi_2 \in \widehat{(\Z/N_2\Z)^\times}$ are \emph{equivalent} if for all $x \in \Z$ such that $(x, N_1 N_2) = 1$, we have
+ \[
+ \chi_1(x \bmod N_1) = \chi_2(x \bmod N_2).
+ \]
+\end{defi}
+It is clear that if we produce a new character from an old one via reduction mod $Nd$, then they are equivalent.
+
+One can show the following:
+\begin{prop}
+ If $\chi \in \widehat{(\Z/N\Z)^\times}$, then there exists a unique $M \mid N$ and a \emph{primitive} $\chi_* \in \widehat{(\Z/M\Z)^\times}$ that is equivalent to $\chi$.
+\end{prop}
+
+\begin{defi}[Conductor]\index{conductor}
+ The \emph{conductor} of a character $\chi$ is the unique $M \mid N$ such that there is a \emph{primitive} $\chi_* \in \widehat{(\Z/M\Z)^\times}$ that is equivalent to $\chi$.
+\end{defi}
+
+\begin{eg}
+ Take
+ \[
+ \chi = \chi_0 \in \widehat{(\Z/N\Z)^\times},
+ \]
+ given by $\chi_0(x) \equiv 1$. If $N > 1$, then $\chi_0$ is not primitive, and the associated primitive character is the trivial character modulo $M = 1$. So the conductor is $1$.
+\end{eg}
+
+Using these Dirichlet characters, we can define \emph{Dirichlet $L$-series}:
+
+\begin{defi}[Dirichlet $L$-series]\index{Dirichlet $L$-series}\index{$L(\chi, s)$}
+ Let $\chi \in \widehat{(\Z/N\Z)^\times}$ be a Dirichlet character. The \emph{Dirichlet $L$-series} of $\chi$ is
+ \[
+ L(\chi, s) = \sum_{\substack{n \geq 1\\(n, N) = 1}} \chi(n) n^{-s}.
+ \]
+\end{defi}
+Since $|\chi(n)| = 1$, we again know this is absolutely convergent for $\Re(s) > 1$.
+
+As $\chi(mn) = \chi(m)\chi(n)$ whenever $(mn, N) = 1$, we get the same Euler product as we got for the $\zeta$-function:
+\begin{prop}
+ \[
+ L(\chi, s) = \prod_{\text{prime }p \nmid N} \frac{1}{1 - \chi(p) p^{-s}}.
+ \]
+\end{prop}
+The proof of convergence is again similar to the case of the $\zeta$-function.
+
+It follows that
+\begin{prop}
+ Suppose $M \mid N$ and $\chi_M \in \widehat{(\Z/M\Z)^\times}$ and $\chi_N \in \widehat{(\Z/N\Z)^\times}$ are equivalent. Then
+ \[
+ L(\chi_M, s) = \prod_{\substack{p \nmid M\\ p \mid N}} \frac{1}{1 - \chi_M(p) p^{-s}} L(\chi_N, s).
+ \]
+ In particular,
+ \[
+ \frac{L(\chi_M, s)}{ L(\chi_N, s)} = \prod_{\substack{p \nmid M\\ p \mid N}} \frac{1}{1 - \chi_M(p) p^{-s}}
+ \]
+ is analytic and non-zero for $\Re(s) > 0$.
+\end{prop}
+
+We'll next show that $L(\chi, s)$ has a meromorphic continuation to $\C$, and is analytic unless $\chi = \chi_0$.
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item $L(\chi, s)$ has a meromorphic continuation to $\C$, which is analytic except for at worst a simple pole at $s = 1$.
+ \item If $\chi \not= \chi_0$ (the trivial character), then $L(\chi, s)$ is analytic everywhere. On the other hand, $L(\chi_0, s)$ has a simple pole with residue
+ \[
+ \frac{\varphi(N)}{N} = \prod_{p \mid N} \left(1 - \frac{1}{p}\right),
+ \]
 where $\varphi$ is the Euler totient function.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ More generally, let $\phi: \Z/N\Z \to \C$ be any $N$-periodic function, and let
+ \[
+ L(\phi, s) = \sum_{n = 1}^\infty \phi(n) n^{-s}.
+ \]
+ Then
+ \[
+ (2\pi)^{-s} \Gamma(s) L(\phi,s) = \sum_{n = 1}^\infty \phi(n) M(e^{-2\pi ny}, s) = M(f(y), s),
+ \]
+ where
+ \[
+ f(y) = \sum_{n \geq 1} \phi(n) e^{-2\pi n y}.
+ \]
+ We can then write
+ \[
+ f(y) = \sum_{n = 1}^N \sum_{r = 0}^\infty \phi(n) e^{-2\pi (n + rN) y} = \sum_{n = 1}^N \phi(n) \frac{e^{-2\pi n y}}{1 - e^{-2 \pi N y}} = \sum_{n = 1}^N \phi(n) \frac{e^{2\pi (N - n) y}}{e^{2\pi Ny} - 1}.
+ \]
 As $0 \leq N - n < N$, this is $O(e^{-2\pi y})$ as $y \to \infty$. Copying the argument for $\zeta(s)$, we write
+ \[
+ M(f, s) = \left(\int_0^1 + \int_1^\infty\right)f(y) y^s\; \frac{\d y}{y} \equiv M_0(s) + M_\infty(s).
+ \]
+ The second term is analytic for all $s \in \C$, and the first term can be written as
+ \[
+ M_0(s) = \sum_{n = 1}^N \phi(n) \int_0^1 \frac{e^{2\pi (N -n) y}}{ e^{2\pi Ny} - 1} y^s\;\frac{\d y}{y}.
+ \]
+ Now for any $L$, we can write
+ \[
+ \frac{e^{2\pi (N - n) y}}{e^{2\pi Ny} - 1} = \frac{1}{2 \pi N y} + \sum_{r = 0}^{L - 1} c_{r, n} y^r + y^L g_{L, n}(y)
+ \]
+ for some $g_{L, n}(y) \in C^\infty[0, 1]$. Hence we have
+ \[
+ M_0(s) = \sum_{n = 1}^N \phi(n) \left(\int_0^1 \frac{1}{2\pi N y} y^s \frac{\d y}{y} + \int_0^1 \sum_{r = 0}^{L - 1} c_{r, n} y^{r + s - 1} \;\d y\right) + G(s),
+ \]
+ where $G(s)$ is some function analytic for $\Re(s) > -L$. So we see that
+ \[
+ (2\pi)^{-s} \Gamma(s) L(\phi, s) = \sum_{n = 1}^N \phi(n) \left(\frac{1}{2\pi N(s - 1)} + \frac{c_{0, n}}{s} + \cdots + \frac{c_{L - 1, n}}{s + L - 1}\right) + G(s).
+ \]
 As $\Gamma(s)$ has simple poles at $s = 0, -1, \cdots$, dividing by $(2\pi)^{-s}\Gamma(s)$ cancels all the poles apart from possibly the one at $s = 1$.
+
+ The first part then follows from taking
+ \[
+ \phi(n) =
+ \begin{cases}
+ \chi(n) & (n, N) = 1\\
 0 & (n, N) > 1
+ \end{cases}.
+ \]
+ By reading off the formula, since $\Gamma(1) = 1$, we know
+ \[
+ \res_{s = 1} L(\chi, s) = \frac{1}{N}\sum_{n = 1}^N \phi(n).
+ \]
+ If $\chi \not= \chi_0$, then this vanishes by the orthogonality of characters. Otherwise, it is $|(\Z/N\Z)^\times|/N = \varphi(N)/N$.
+\end{proof}
+Note that this is consistent with the result
+\[
+ L(\chi_0, s) = \prod_{p \mid N} (1 - p^{-s}) \zeta(s).
+\]
+So for a non-trivial character, our $L$-function doesn't have a pole.
+
+The next big theorem is that in fact $L(\chi, 1)$ is non-zero. In number theory, there are lots of theorems of this kind, about non-vanishing of $L$-functions at different points.
+\begin{thm}
+ If $\chi \not= \chi_0$, then $L(\chi, 1) \not= 0$.
+\end{thm}
+
+\begin{proof}
 The trick is to consider all characters together. We let
+ \[
+ \zeta_N(s) = \prod_{\chi \in \widehat{(\Z/N\Z)^\times}} L(\chi, s) = \prod_{p \nmid N} \prod_\chi (1 - \chi(p) p^{-s})^{-1}
+ \]
+ for $\Re(s) > 1$. Now we know $L(\chi_0, s)$ has a pole at $s = 1$, and is analytic everywhere else. So if any other $L(\chi, 1) = 0$, then $\zeta_N(s)$ is analytic on $\Re(s) > 0$. We will show that this cannot be the case.
+
+ We begin by finding a nice formula for the product of $(1 - \chi(p) p^{-s})^{-1}$ over all characters.
+ \begin{claim}
+ If $p \nmid N$, and $T$ is any complex number, then
+ \[
+ \prod_{\chi \in \widehat{(\Z/N\Z)^\times}} (1 - \chi(p) T) = (1 - T^{f_p})^{\varphi(N)/f_p},
+ \]
 where $f_p$ is the order of $p$ in $(\Z/N\Z)^\times$.
+
+ So
+ \[
+ \zeta_N(s) = \prod_{p \nmid N} (1 - p^{-f_p s})^{-\varphi(N)/f_p}.
+ \]
+ \end{claim}
+ To see this, we write $f = f_p$, and, for convenience, write
+ \begin{align*}
+ G &= (\Z/N\Z)^\times\\
+ H &= \bra p \ket \subseteq G.
+ \end{align*}
+ We note that $\hat{G}$ naturally contains $\widehat{G/H} = \{\chi \in \hat{G}: \chi(p) = 1\}$ as a subgroup. Also, we know that
+ \[
+ |\widehat{G/H}| = |G/H| = \varphi(N)/f.
+ \]
+ Also, the restriction map
+ \[
+ \frac{\hat{G}}{\widehat{G/H}} \to \hat{H}
+ \]
+ is obviously injective, hence an isomorphism by counting orders. So
+ \[
+ \prod_{\chi \in \hat{G}} (1 - \chi(p) T) = \prod_{\chi \in \hat{H}}(1 - \chi(p) T)^{\varphi(N)/f} = \prod_{\zeta \in \mu_f}(1 - \zeta T)^{\varphi(N)/f} = (1 - T^f)^{\varphi(N)/f}.
+ \]
 We now notice that when we expand the product for $\zeta_N$, at least formally, we get a Dirichlet series with non-negative coefficients. We now prove the following peculiar property of such Dirichlet series:
+ \begin{claim}
+ Let
+ \[
+ D(s) = \sum_{n \geq 1} a_n n^{-s}
+ \]
+ be a Dirichlet series with real $a_n \geq 0$, and suppose this is absolutely convergent for $\Re(s) > \sigma > 0$. Then if $D(s)$ can be analytically continued to an analytic function $\tilde{D}$ on $\{\Re(s) > 0\}$, then the series converges for all real $s > 0$.
+ \end{claim}
+ Let $\rho > \sigma$. Then by the analytic continuation, we have a convergent Taylor series on $\{|s - \rho| < \rho\}$
+ \[
+ D(s) = \sum_{k \geq 0} \frac{1}{k!} D^{(k)}(\rho) (s - \rho)^k.
+ \]
+ Moreover, since $\rho > \sigma$, we can directly differentiate the Dirichlet series to obtain the derivatives:
+ \[
+ D^{(k)} (\rho) = \sum_{n\geq 1} a_n (-\log n)^k n^{-\rho}.
+ \]
+ So if $0 < x < \rho$, then
+ \[
 D(x) = \sum_{k \geq 0} \frac{1}{k!} (\rho - x)^k \left(\sum_{n \geq 1} a_n (\log n)^k n^{-\rho}\right).
+ \]
+ Now note that all terms in this sum are all non-negative. So the double series has to converge absolutely as well, and thus we are free to rearrange the sum as we wish. So we find
+ \begin{align*}
 D(x) &= \sum_{n \geq 1} a_n n^{-\rho} \sum_{k \geq 0} \frac{1}{k!} (\rho - x)^k (\log n)^k\\
+ &= \sum_{n \geq 1} a_n n^{-\rho} e^{(\rho - x) \log n}\\
+ &= \sum_{n \geq 1} a_n n^{-\rho} n^{\rho - x}\\
+ &= \sum_{n \geq 1} a_n n^{-x},
+ \end{align*}
+ as desired.
+
+ Now we are almost done, as
+ \[
+ \zeta_N(s) = L(\chi_0, s) \prod_{\chi \not= \chi_0} L(\chi, s).
+ \]
 We saw that $L(\chi_0, s)$ has a simple pole at $s = 1$, and the other terms are all holomorphic at $s = 1$. So if some $L(\chi, 1) = 0$, then $\zeta_N(s)$ is holomorphic for $\Re(s) > 0$ (and in fact everywhere). Since the Dirichlet series of $\zeta_N$ has non-negative coefficients, by the claim, it suffices to find some point on $\R_{>0}$ where the Dirichlet series for $\zeta_N$ doesn't converge.
+
+ We notice
+ \[
+ \zeta_N(x) = \prod_{p \nmid N} (1 + p^{-f_p x} + p^{-2 f_p x} + \cdots)^{\varphi(N)/f_p} \geq \sum_{p \nmid N} p^{- \varphi(N) x}.
+ \]
 It now suffices to show that $\sum_p p^{-1} = \infty$, as then the series for $\zeta_N(x)$ does not converge at $x = \frac{1}{\varphi(N)}$.
+ \begin{claim}
+ We have
+ \[
+ \sum_{p \text{ prime}} p^{-x} \sim - \log (x - 1)
+ \]
+ as $x \to 1^+$. On the other hand, if $\chi \not= \chi_0$ is a Dirichlet character mod $N$, then
+ \[
+ \sum_{p \nmid N} \chi(p) p^{-x}
+ \]
+ is bounded as $x \to 1^+$.
+ \end{claim}
+ Of course (and crucially, as we will see), the second part is not needed for the proof, but it is still nice to know.
+
+ To see this, we note that for any $\chi$, we have
+ \[
+ \log L(\chi, x) = \sum_{p \nmid N} - \log (1 - \chi(p) p^{-x}) = \sum_{p \nmid N} \sum_{r \geq 1} \frac{\chi(p)^r p^{-rx}}{r}.
+ \]
+ So
+ \begin{align*}
+ \left|\log L(\chi, x) - \sum_{p \nmid N} \chi(p) p^{-x}\right| &< \sum_{p \nmid N} \sum_{r \geq 2} p^{-rx}\\
+ &= \sum_{p \nmid N} \frac{p^{-2x}}{1 - p^{-x}}\\
+ &\leq \sum_{n \geq 1} \frac{n^{-2}}{1/2},
+ \end{align*}
 which is bounded by a finite constant $C < \infty$. When $\chi = \chi_0$ and $N = 1$, this says that
+ \[
+ \left|\log \zeta(x) - \sum_p p^{-x}\right|
+ \]
+ is bounded as $x \to 1^+$. But we know
+ \[
 \zeta(s) = \frac{1}{s - 1} + O(1).
+ \]
+ So we have
+ \[
+ \sum_p p^{-x} \sim -\log (x - 1).
+ \]
+ When $\chi \not= \chi_0$, then $L(\chi, 1) \not= 0$, as we have just proved! So $\log L(\chi, x)$ is bounded as $x \to 1^+$, and so we are done.
+\end{proof}
+
+Note that up to a finite number of factors in the Euler product (for $p \mid N$), this $\zeta_N(s)$ equals the Dedekind $\zeta$-function of the number field $K = \Q(\sqrt[N]{1})$, given by
+\[
+ \zeta_K(s) = \sum_{\text{ideals } 0 \not= I \subseteq \mathcal{O}_K} \frac{1}{(N(I))^s}.
+\]
+We can now use what we've got to quickly prove Dirichlet's theorem:
+\begin{thm}[Dirichlet's theorem on primes in arithmetic progressions]\index{Dirichlet's theorem on primes in arithmetic progressions}
+ Let $a \in \Z$ be such that $(a, N) = 1$. Then there exist infinitely many primes $p \equiv a \pmod N$.
+\end{thm}
+
+\begin{proof}
+ We want to show that the series
+ \[
+ \sum_{\substack{p\text{ prime}\\p\equiv a \bmod N}} p^{-x}
+ \]
+ is unbounded as $x \to 1^+$, and in particular must be infinite. We note that for $(m, N) = 1$, we have
+ \[
+ \sum_{\chi \in \widehat{(\Z/N\Z)^\times}} \chi(m) =
+ \begin{cases}
+ \varphi(N) & m \equiv 1 \pmod N\\
+ 0 & \text{otherwise}
+ \end{cases},
+ \]
+ since if $m \not\equiv 1 \pmod N$, then there is some character $\chi'$ with $\chi'(m) \not= 1$, and multiplying by $\chi'$ permutes the summands. We also know that $\chi$ is a character, so $\chi(a)^{-1}\chi(p) = \chi(a^{-1}p)$. So we can write
+ \[
+ \sum_{\substack{p\text{ prime}\\p\equiv a \bmod N}} p^{-x} = \frac{1}{\varphi(N)} \sum_{\chi \in \widehat{(\Z/N\Z)^\times}} \chi(a)^{-1}\sum_{\text{all }p} \chi(p) p^{-x}.
+ \]
+ Now if $\chi = \chi_0$, then the sum is just
+ \[
+ \sum_{p \nmid N} p^{-x} \sim -\log(x - 1)
+ \]
+ as $x \to 1^+$. Moreover, all the other sums are bounded as $x \to 1^+$. So
+ \[
+ \sum_{p \equiv a \bmod N} p^{-x} \sim -\frac{1}{\varphi(N)} \log (x - 1).
+ \]
+ So the whole sum must be unbounded as $x \to 1^+$. So in particular, the sum must be infinite.
+\end{proof}
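The orthogonality relation for Dirichlet characters used above can be verified directly for a small modulus. A sketch for $N = 5$ (the explicit construction via a discrete logarithm is our own choice, not notation from the text): $(\Z/5\Z)^\times$ is cyclic of order $4$ generated by $2$, so each character is determined by where it sends $2$, namely a fourth root of unity.

```python
# The four Dirichlet characters mod 5, built from the discrete log base 2.
N = 5
dlog = {pow(2, k, N): k for k in range(4)}   # 1 -> 0, 2 -> 1, 4 -> 2, 3 -> 3

def chi(j, m):
    """The j-th Dirichlet character mod 5 (zero on multiples of 5)."""
    if m % N == 0:
        return 0
    return 1j ** (j * dlog[m % N])

for m in range(1, 8):
    print(m, sum(chi(j, m) for j in range(4)))  # phi(5) = 4 iff m = 1 mod 5
```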
+
+This actually tells us something more. It says
+\[
+ \frac{\sum\limits_{p \equiv a \bmod N} p^{-x}}{\sum\limits_{\text{all }p} p^{-x}} \sim \frac{1}{\varphi(N)}
+\]
+as $x \to 1^+$. So in some well-defined sense (namely \term{analytic density}), $\frac{1}{\varphi(N)}$ of the primes are $\equiv a \pmod N$.
+
+In fact, we can prove that
+\[
+ \lim_{X \to \infty} \frac{|\{p \leq X : p \equiv a \bmod N\}|}{|\{p \leq X\}|} = \frac{1}{\varphi(N)}.
+\]
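This equidistribution is easy to observe numerically. A quick count for $N = 4$ (the bound $X = 10^5$ is an arbitrary choice):

```python
# Count primes up to X in the residue classes mod 4; each class should
# hold roughly 1/phi(4) = 1/2 of the odd primes.
def primes_up_to(N):
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, N + 1, i):
                is_prime[j] = False
    return [i for i in range(2, N + 1) if is_prime[i]]

odd_primes = [p for p in primes_up_to(10**5) if p != 2]
counts = {a: sum(1 for p in odd_primes if p % 4 == a) for a in (1, 3)}
print(counts)
```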
+This theorem has many generalizations. In general, let $L/K$ be a finite Galois extension of number fields with Galois group $G = \Gal(L/K)$. Then for all primes $\mathfrak{p}$ of $K$ which are unramified in $L$, we can define a \term{Frobenius conjugacy class} $[\sigma_\mathfrak{p}] \subseteq G$.
+
+\begin{thm}[Cebotarev density theorem]\index{Cebotarev density theorem}
+ Let $L/K$ be a Galois extension. Then for any conjugacy class $C \subseteq \Gal(L/K)$, there exist infinitely many $\mathfrak{p}$ with $[\sigma_\mathfrak{p}] = C$.
+\end{thm}
+If $L/K = \Q(\sqrt[N]{1})/\Q$, then $G \cong (\Z/N\Z)^\times$, and $\sigma_p$ is just the element of $G$ given by $p \pmod N$. So if we fix $a \pmod N \in G$, then there are infinitely many $p$ with $p \equiv a \pmod N$. So we see the Cebotarev density theorem is indeed a generalization of Dirichlet's theorem.
+
+\section{The modular group}
+We now move on to study the other words that appear in the title of the course, namely modular forms. Modular forms are very special functions defined on the upper half plane
+\[
+ \H = \{z \in \C: \Im (z) > 0\}.
+\]
+The main property of modular forms is that they transform nicely under M\"obius transforms. In this chapter, we will first try to understand these M\"obius transforms. Recall that a matrix
+\[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \GL_2(\C)
+\]
+acts on $\C\cup \{\infty\}$ by
+\[
+ z \mapsto \gamma(z) = \frac{az + b}{cz + d}.
+\]
+If we further restrict to matrices in $\GL_2(\R)$, then this maps $\C \setminus \R$ to $\C \setminus \R$, and $\R \cup \{\infty\}$ to $\R \cup \{\infty\}$.
+
+We want to understand when this actually preserves the upper half plane. This is a straightforward computation:
+\[
+ \Im \gamma(z) = \frac{1}{2i} \left(\frac{az + b}{cz + d} - \frac{a\bar{z} + b}{c \bar{z} + d}\right) = \frac{1}{2i} \frac{(ad - bc) (z - \bar{z})}{|cz + d|^2} = \det(\gamma) \frac{\Im z}{ |cz + d|^2}.
+\]
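This identity can be sanity-checked numerically. A minimal sketch with an arbitrary real matrix of positive determinant (the sample values are ours):

```python
# Check Im(gamma(z)) = det(gamma) * Im(z) / |cz + d|^2 at a sample point.
a, b, c, d = 3.0, 1.0, 1.0, 1.0       # arbitrary real matrix, det = 2 > 0
z = 0.3 + 0.7j                        # a point of the upper half plane
w = (a * z + b) / (c * z + d)         # Mobius action
det = a * d - b * c
print(w.imag, det * z.imag / abs(c * z + d) ** 2)  # equal
```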
+Thus, we know $\Im(\gamma(z))$ and $\Im(z)$ have the same sign iff $\det(\gamma) > 0$. We write
+\begin{defi}[$\GL_2(\R)^+$]\index{$\GL_2(\R)^+$}
+ \[
+ \GL_2(\R)^+ = \{\gamma \in \GL_2(\R) : \det \gamma > 0\}.
+ \]
+\end{defi}
+This is the group of M\"obius transforms that map $\H$ to $\H$.
+
+However, note that the action of $\GL_2(\R)^+$ on $\H$ is not faithful. The kernel is given by the subgroup
+\[
+ \R^\times \cdot I = \R^\times \cdot
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 1
+ \end{pmatrix}.
+\]
+Thus, we are naturally led to define
+\begin{defi}[$\PGL_2(\R)^+$]\index{$\PGL_2(\R)^+$}
+ \[
+ \PGL_2(\R)^+ = \frac{\GL_2(\R)^+}{\R^\times \cdot I}.
+ \]
+\end{defi}
+There is a slightly better way of expressing this. Note that we can obtain any matrix in $\GL_2(\R)^+$ by multiplying an element of $\SL_2(\R)$ by a non-zero real scalar. So we have \index{$\PSL_2(\R)$}
+\[
+ \PGL_2(\R)^+ \cong \SL_2(\R)/\{\pm I\} \equiv \PSL_2(\R).
+\]
+What we have is thus a faithful action of $\PSL_2(\R)$ on the upper half plane $\H$. From IA Groups, we know this action is transitive, and the stabilizer of $i = \sqrt{-1}$ is $\SO(2)/\{\pm I\}$.
+
+In fact, this group $\PSL_2(\R)$ is the group of all holomorphic automorphisms of $\H$, and the subgroup $\SO(2) \subseteq \SL_2(\R)$ is a maximal compact subgroup.
+
+\begin{thm}
+ The group $\SL_2(\R)$ admits the \term{Iwasawa decomposition}
+ \[
+ \SL_2(\R) = KAN = NAK,
+ \]
+ where
+ \[
+ K = \SO(2), \quad A = \left\{
+ \begin{pmatrix}
+ r & 0\\
+ 0 & \frac{1}{r}
+ \end{pmatrix}: r > 0
+ \right\},
+ \quad N = \left\{
+ \begin{pmatrix}
+ 1 & x\\
+ 0 & 1
+ \end{pmatrix}: x \in \R
+ \right\}.
+\end{thm}
+Note that this implies a few of our previous claims. For example, any $z = x + iy \in \H$ can be written as
+\[
+ z = x + iy =
+ \begin{pmatrix}
+ 1 & x\\
+ 0 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ \sqrt{y} & 0\\
+ 0 & \frac{1}{\sqrt{y}}
+ \end{pmatrix}
+ \cdot i,
+\]
+using the fact that $K = \SO(2)$ fixes $i$, and this gives transitivity.
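We can check this decomposition numerically for a sample point (the values of $x$ and $y$ are arbitrary):

```python
import math

def mobius(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

x, y = 0.3, 2.0                                  # target point x + iy in H
n = ((1, x), (0, 1))                             # the N-part (translation)
a = ((math.sqrt(y), 0), (0, 1 / math.sqrt(y)))   # the A-part (scaling)
z = mobius(n, mobius(a, 1j))                     # apply NA to i
print(z)  # (0.3+2j)
```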
+
+\begin{proof}
+ This is just Gram--Schmidt orthogonalization. Given $g \in \SL_2(\R)$, we write
+ \[
+ g e_1 = e_1',\quad g e_2 = e_2'.
+ \]
+ By Gram--Schmidt, we can write
+ \begin{align*}
+ f_1 &= \lambda_1 e_1'\\
+ f_2 &= \lambda_2 e_1' + \mu e_2'
+ \end{align*}
+ such that
+ \[
+ \|f_1\| = \|f_2\| = 1,\quad (f_1, f_2) = 0.
+ \]
+ So we can write
+ \[
+ \begin{pmatrix}
+ f_1 & f_2
+ \end{pmatrix} =
+ \begin{pmatrix}
+ e_1' & e_2'
+ \end{pmatrix}
+ \begin{pmatrix}
+ \lambda_1 & \lambda_2\\
+ 0 & \mu
+ \end{pmatrix}
+ \]
+ Now the left-hand matrix is orthogonal, and by decomposing the inverse of $\begin{pmatrix} \lambda_1 & \lambda_2\\0 & \mu\end{pmatrix}$, we can write $g = \begin{pmatrix}e_1' & e_2'\end{pmatrix}$ as a product in $KAN$.
+\end{proof}
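The proof is constructive, and one can also read off the $NAK$ factors of a given matrix from its action on $i$: if $g(i) = x + iy$, then $(na)^{-1}g$ fixes $i$ and hence lies in $\SO(2)$. A sketch (the sample matrix is an arbitrary choice):

```python
import math

def matmul(g, h):
    (a, b), (c, d) = g
    (e, f), (p, q) = h
    return ((a * e + b * p, a * f + b * q), (c * e + d * p, c * f + d * q))

def mobius(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

g = ((2.0, 1.0), (3.0, 2.0))          # sample element of SL_2(R), det = 1
w = mobius(g, 1j)                     # g(i) = x + iy
x, y = w.real, w.imag
n = ((1, x), (0, 1))
a = ((math.sqrt(y), 0), (0, 1 / math.sqrt(y)))
na_inv = ((1 / math.sqrt(y), -x / math.sqrt(y)), (0, math.sqrt(y)))
k = matmul(na_inv, g)                 # fixes i, hence lies in SO(2)
print(matmul(n, matmul(a, k)))        # recovers g
```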
+In general, we will be interested in subgroups $\Gamma \leq\SL_2(\R)$, and their images $\bar\Gamma$ in $\PSL_2(\R)$, i.e.
+\[
+ \bar\Gamma = \frac{\Gamma}{\Gamma \cap \{\pm I\}}.
+\]
+We are mainly interested in the case $\Gamma = \SL_2(\Z)$, or a subgroup of finite index.
+
+\begin{defi}[Modular group]\index{modular group}
+ The \emph{modular group} is
+ \[
+ \PSL_2(\Z) = \frac{\SL_2(\Z)}{\{\pm I\}}.
+ \]
+\end{defi}
+There are two particularly interesting elements of the modular group, given by\index{$S$}\index{$T$}
+\[
+ S = \pm
+ \begin{pmatrix}
+ 0 & -1\\
+ 1 & 0
+ \end{pmatrix},\quad
+ T = \pm
+ \begin{pmatrix}
+ 1 & 1\\
+ 0 & 1
+ \end{pmatrix}.
+\]
+Then we have $T(z) = z + 1$ and $S(z) = -\frac{1}{z}$. One immediately sees that $T$ has infinite order and $S^2 = 1$ (in $\PSL_2(\Z)$). We can also compute
+\[
+ TS = \pm
+ \begin{pmatrix}
+ 1 & -1\\
+ 1 & 0
+ \end{pmatrix}
+\]
+and
+\[
+ (TS)^3 = 1.
+\]
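These relations are quick to verify by direct matrix multiplication; a small sketch:

```python
# Verify S^2 = (TS)^3 = -I in SL_2(Z), so both are trivial in PSL_2(Z).
def matmul(g, h):
    (a, b), (c, d) = g
    (e, f), (p, q) = h
    return ((a * e + b * p, a * f + b * q), (c * e + d * p, c * f + d * q))

S = ((0, -1), (1, 0))
T = ((1, 1), (0, 1))
TS = matmul(T, S)
print(TS)                              # ((1, -1), (1, 0))
print(matmul(S, S))                    # ((-1, 0), (0, -1)) = -I
print(matmul(TS, matmul(TS, TS)))      # ((-1, 0), (0, -1)) = -I
```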
+The following theorem essentially summarizes the basic properties of the modular group we need to know about:
+\begin{thm}
+ Let
+ \[
+ \mathcal{D} = \left\{z \in \H: -\frac{1}{2} < \Re z \leq \frac{1}{2}, |z| > 1\right\} \cup \left\{z \in \H: |z| = 1, 0 \leq \Re(z) \leq \frac{1}{2}\right\}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (3, 0);
+
+ \draw (0, -1) -- (0, 4);
+
+ \node [circ] at (1, 0) {};
+ \node [below] at (1, 0) {$\frac{1}{2}$};
+
+ \node [circ] at (2, 0) {};
+ \node [below] at (2, 0) {$1$};
+
+ \node [circ] at (-1, 0) {};
+ \node [below] at (-1, 0) {$-\frac{1}{2}$};
+
+ \node [circ] at (-2, 0) {};
+ \node [below] at (-2, 0) {$-1$};
+
+ \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+
+ \draw [dashed] (2, 0) arc (0:180:2);
+
+ \draw [dashed] (1, 0) -- (1, 4);
+ \draw [dashed] (-1, 0) -- (-1, 4);
+ \draw [very thick, mblue] (1, 4) -- (1, 1.732) arc (60:90:2);
+
+ \node [circ] at (1, 1.732) {};
+ \node [anchor = south west] at (1, 1.732) {$\rho = e^{\pi i/3}$};
+ \node [circ] at (0, 2) {};
+ \node [anchor = north east] at (0, 2){$i$};
+ \end{tikzpicture}
+ \end{center}
+ Then $\mathcal{D}$ is a \term{fundamental domain} for the action of $\bar{\Gamma}$ on $\H$, i.e.\ every orbit contains exactly one element of $\mathcal{D}$.
+
+ The stabilizer of $z \in \mathcal{D}$ in $\bar\Gamma$ is trivial if $z \not= i, \rho$, and the stabilizers of $i$ and $\rho$ are
+ \[
+ \bar{\Gamma}_i = \bra S\ket \cong \frac{\Z}{2\Z},\quad \bar\Gamma_\rho = \bra TS\ket \cong \frac{\Z}{3\Z}.
+ \]
+ Finally, we have $\bar \Gamma = \bra S, T \ket = \bra S, TS\ket$.
+\end{thm}
+In fact, we have
+\[
+ \bar \Gamma = \bra S, T \mid S^2 = (TS)^3 = e\ket,
+\]
+but we will neither prove nor need this.
+
+The proof is rather technical, and involves some significant case work.
+\begin{proof}
+ Let $\bar{\Gamma}^* = \bra S, T\ket \subseteq \bar{\Gamma}$. We will show that if $z \in \H$, then there exists $\gamma \in \bar\Gamma^*$ such that $\gamma(z) \in \mathcal{D}$.
+
+ Since $z \not \in \R$, we know $\Z + \Z z = \{cz + d: c, d \in \Z\}$ is a discrete subgroup of $\C$. So we know
+ \[
+ \{|cz + d|: c, d \in \Z\}
+ \]
+ is a discrete subset of $\R$, and is in particular bounded away from $0$. Thus, we know
+ \[
+ \left\{\Im \gamma(z) = \frac{\Im (z)}{|cz + d|^2}: \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \bar\Gamma^*\right\}
+ \]
+ is a discrete subset of $\R_{>0}$ and is bounded above. Thus there is some $\gamma \in \bar{\Gamma}^*$ with $\Im \gamma(z)$ maximal. Replacing $\gamma$ by $T^n \gamma$ for suitable $n$, we may assume $|\Re \gamma(z)| \leq \frac{1}{2}$.
+
+ We consider the different possible cases.
+ \begin{itemize}
+ \item If $|\gamma(z)| < 1$, then
+ \[
+ \Im S \gamma(z) = \Im \frac{-1}{\gamma(z)} = \frac{\Im \gamma(z)}{|\gamma(z)|^2} > \Im \gamma(z),
+ \]
+ which is impossible. So we know $|\gamma(z)| \geq 1$. So we know $\gamma(z)$ lives in the closure of $\mathcal{D}$.
+
+ \item If $\Re(\gamma(z)) = -\frac{1}{2}$, then $T \gamma(z)$ has real part $+\frac{1}{2}$, and so $T(\gamma(z)) \in \mathcal{D}$.
+
+ \item If $-\frac{1}{2} < \Re(\gamma(z)) < 0$ and $|\gamma(z)| = 1$, then $|S\gamma(z)| = 1$ and $0 < \Re S\gamma(z) < \frac{1}{2}$, i.e.\ $S \gamma(z) \in \mathcal{D}$.
+ \end{itemize}
+ So we can move it to somewhere in $\mathcal{D}$.
+
+ \separator
+
+ We shall next show that if $z, z' \in \mathcal{D}$, and $z' = \gamma(z)$ for $\gamma \in \bar{\Gamma}$, then $z = z'$. Moreover, either
+ \begin{itemize}
+ \item $\gamma = 1$; or
+ \item $z = i$ and $\gamma = S$; or
+ \item $z = \rho$ and $\gamma = TS$ or $(TS)^2$.
+ \end{itemize}
+ It is clear that this proves everything.
+
+ To show this, we wlog
+ \[
+ \Im (z') = \frac{\Im z}{|cz + d|^2} \geq \Im z
+ \]
+ where
+ \[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix},
+ \]
+ and we also wlog $c \geq 0$.
+
+ Therefore we know that $|cz + d| \leq 1$. In particular, we know
+ \[
+ 1 \geq \Im(cz + d) = c \Im (z) \geq c \frac{\sqrt{3}}{2}
+ \]
+ since $z \in \mathcal{D}$. So $c = 0$ or $1$.
+
+ \begin{itemize}
+ \item If $c = 0$, then
+ \[
+ \gamma = \pm
+ \begin{pmatrix}
+ 1 & m\\
+ 0 & 1
+ \end{pmatrix}
+ \]
+ for some $m \in \Z$, and then $z' = z + m$. Since $z$ and $z'$ both have real part in $\left(-\frac{1}{2}, \frac{1}{2}\right]$, this forces $m = 0$, $z = z'$, $\gamma = 1 \in \PSL_2(\Z)$.
+
+ \item If $c = 1$, then we know $|z + d| \leq 1$. So $z$ is at distance at most $1$ from the integer $-d$. As $z \in \mathcal{D}$, the only possibilities are $d = 0$ or $-1$.
+
+ \begin{itemize}
+ \item If $d = 0$, then we know $|z| = 1$. So
+ \[
+ \gamma =
+ \begin{pmatrix}
+ a & -1\\
+ 1 & 0
+ \end{pmatrix}
+ \]
+ for some $a \in \Z$. Then $z' = a - \frac{1}{z}$. Then
+ \begin{itemize}
+ \item either $a = 0$, which forces $z = i$, $\gamma = S$; or
+ \item $a = 1$, and $z' = 1 - \frac{1}{z}$, which implies $z = z' = \rho$ and $\gamma = TS$.
+ \end{itemize}
+ \item If $d = -1$, then by looking at the picture, we see that $z = \rho$. Then
+ \[
+ |cz + d| = |z - 1| = 1,
+ \]
+ and so
+ \[
+ \Im z' = \Im z = \frac{\sqrt{3}}{2}.
+ \]
+ So we have $z' = \rho$ as well. So
+ \[
+ \frac{a \rho + b}{\rho - 1} = \rho,
+ \]
+ which implies
+ \[
+ \rho^2 - (a + 1) \rho - b = 0
+ \]
+ So $a = 0$, $b = -1$, and $\gamma = (TS)^2$.\qedhere
+ \end{itemize}%\qedhere
+ \end{itemize}
+\end{proof}
+Note that this proof is the same as the proof of reduction theory for positive definite binary quadratic forms.
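The first half of the proof is effectively an algorithm for moving a point of $\H$ into $\mathcal{D}$: translate by a power of $T$ until $|\Re z| \leq \frac{1}{2}$, apply $S$ if $|z| < 1$, and repeat. A numerical sketch (the tolerances and iteration cap are ad hoc):

```python
import math

def reduce_to_D(z, max_steps=10000):
    """Repeatedly apply T^n and S to move z into the fundamental domain."""
    for _ in range(max_steps):
        z = z - math.floor(z.real + 0.5)   # translate: now |Re z| <= 1/2
        if abs(z) >= 1 - 1e-12:            # on or outside the unit circle
            return z
        z = -1 / z                          # S increases Im z when |z| < 1
    return z

w = reduce_to_D(100.3 + 0.002j)
print(w, abs(w))   # |Re w| <= 1/2 and |w| >= 1
```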
+
+What does the quotient $\bar{\Gamma}\setminus \H$ look like? Each point in the quotient can be identified with an element in $\mathcal{D}$. Moreover, $S$ and $T$ identify portions of the boundary of $\mathcal{D}$. Thinking hard enough, we see that the quotient space is homeomorphic to a disk.
+
+An important consequence of this is that the quotient $\Gamma \setminus \H$ has \emph{finite invariant measure}.
+\begin{prop}
+ The measure
+ \[
+ \d \mu = \frac{\d x\;\d y}{y^2}
+ \]
+ is invariant under $\PSL_2(\R)$. If $\Gamma \subseteq \PSL_2(\Z)$ is of finite index, then $\mu(\Gamma\setminus \H) < \infty$.
+\end{prop}
+
+\begin{proof}
+ Consider the $2$-form associated to $\mu$, given by
+ \[
+ \eta = \frac{\d x \wedge \d y}{y^2} = \frac{i \d z \wedge \d \bar{z}}{2 (\Im z)^2}.
+ \]
+ We now let
+ \[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \SL_2(\R).
+ \]
+ Then we have
+ \[
+ \Im \gamma(z) = \frac{\Im z}{|cz + d|^2}.
+ \]
+ Moreover, we have
+ \[
+ \frac{\d \gamma(z)}{\d z} = \frac{a(cz + d) - c(az + b)}{(cz + d)^2} = \frac{1}{(cz + d)^2}.
+ \]
+ Plugging these into the formula, we see that $\eta$ is invariant under $\gamma$.
+
+ Now if $\bar\Gamma \leq \PSL_2(\Z)$ has finite index, then we can write $\PSL_2(\Z)$ as a union of cosets
+ \[
+ \PSL_2(\Z) = \coprod_{i = 1}^n \bar\Gamma \gamma_i,
+ \]
+ where $n = (\PSL_2(\Z): \bar\Gamma)$. Then a fundamental domain for $\bar\Gamma$ is just
+ \[
+ \bigcup_{i = 1}^n \gamma_i(\mathcal{D}),
+ \]
+ and so
+ \[
+ \mu(\bar\Gamma \setminus \H) = \sum \mu(\gamma_i \mathcal{D}) = n \mu(\mathcal{D}).
+ \]
+ So it suffices to show that $\mu(\mathcal{D})$ is finite, and we simply compute
+ \[
+ \mu(\mathcal{D}) = \int_\mathcal{D} \frac{\d x\; \d y}{y^2} \leq \int_{x = -\frac{1}{2}}^{x = \frac{1}{2}}\int_{y = \sqrt{3}/2}^{y = \infty} \frac{\d x\;\d y}{y^2} < \infty.\qedhere
+ \]
+\end{proof}
+It is an easy exercise to show that we actually have
+\[
+ \mu(\mathcal{D}) = \frac{\pi}{3}.
+\]
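Indeed, integrating out $y$ first gives $\int_{\sqrt{1 - x^2}}^\infty y^{-2}\;\d y = (1 - x^2)^{-1/2}$, so $\mu(\mathcal{D}) = \int_{-1/2}^{1/2} (1 - x^2)^{-1/2}\;\d x = 2 \arcsin \frac{1}{2} = \frac{\pi}{3}$. A numerical confirmation via the midpoint rule (the grid size is an arbitrary choice):

```python
import math

# mu(D) = int_{-1/2}^{1/2} dx / sqrt(1 - x^2), after the inner y-integral
n = 100000
h = 1.0 / n
total = sum(h / math.sqrt(1 - (-0.5 + (k + 0.5) * h) ** 2) for k in range(n))
print(total, math.pi / 3)  # both approximately 1.0472
```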
+We end with a bit of terminology.
+\begin{defi}[Principal congruence subgroup]\index{$\Gamma(N)$}
+ For $N \geq 1$, the \term{principal congruence subgroup}\index{congruence subgroup!principal} of level $N$ is
+ \[
+ \Gamma(N) = \{\gamma \in \SL_2(\Z) : \gamma \equiv I \pmod N\} = \ker (\SL_2(\Z) \to \SL_2(\Z/N\Z)).
+ \]
+ Any $\Gamma \subseteq \SL_2(\Z)$ containing some $\Gamma(N)$ is called a \term{congruence subgroup}, and its \term{level} is the smallest $N$ such that $\Gamma \supseteq \Gamma(N)$.
+\end{defi}
+
+Note that $\Gamma(N)$ is a normal subgroup of $\SL_2(\Z)$ of finite index.
+\begin{defi}[$\Gamma_0(N)$, $\Gamma_1(N)$]\index{$\Gamma_0(N)$}\index{$\Gamma_1(N)$}
+ We define
+ \[
+ \Gamma_0(N) =\left\{
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \in \SL_2(\Z): c \equiv 0\pmod N\right\}
+ \]
+ and
+ \[
+ \Gamma_1(N) =\left\{
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \in \SL_2(\Z): c \equiv 0, d\equiv 1 \pmod N\right\}.
+ \]
+ We similarly define $\Gamma^0(N)$ and $\Gamma^1(N)$ to be the transpose of $\Gamma_0(N)$ and $\Gamma_1(N)$ respectively.
+\end{defi}
+Note that ``almost all'' finite index subgroups of $\SL_2(\Z)$ are \emph{not} congruence subgroups. On the other hand, if we define congruence subgroups analogously in higher dimensions, we find that all finite index subgroups of $\SL_n(\Z)$ for $n > 2$ are congruence!
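As a sanity check on the finite index claim, one can count $\SL_2(\Z/N\Z)$ by brute force for small $N$; since reduction mod $N$ is surjective, this count equals the index $(\SL_2(\Z) : \Gamma(N))$. The closed-form formula in the comment is standard, though not derived in the text:

```python
from itertools import product

def sl2_size(N):
    """Brute-force |SL_2(Z/NZ)|, i.e. the index of Gamma(N) in SL_2(Z)."""
    return sum(1 for a, b, c, d in product(range(N), repeat=4)
               if (a * d - b * c) % N == 1)

# Compare with |SL_2(Z/NZ)| = N^3 prod_{p | N} (1 - p^{-2}).
print([sl2_size(N) for N in (2, 3, 4, 5)])  # [6, 24, 48, 120]
```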
+
+\section{Modular forms of level 1}
+\subsection{Basic definitions}
+We can now define a modular form. Recall that we have $\SL_2(\Z) = \Gamma(1)$.
+\begin{defi}[Modular form of level 1]\index{modular form}\index{modular form!level 1}
+ A holomorphic function $f: \H \to \C$ is a \emph{modular form of weight $k \in \Z$ and level $1$} if
+ \begin{enumerate}
+ \item For any
+ \[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \Gamma(1),
+ \]
+ we have
+ \[
+ f(\gamma(z)) = (cz + d)^k f(z).
+ \]
+ \item $f$ is holomorphic at $\infty$ (to be defined precisely later).
+ \end{enumerate}
+\end{defi}
+What can we deduce about modular forms from these properties? If we take $\gamma = -I$, then we get
+\[
+ f(z) = (-1)^k f(z).
+\]
+So if $k$ is odd, then $f \equiv 0$, and non-zero modular forms exist only for even weights. For even weights, it suffices to consider $\bar{\Gamma} = \bra S, T\ket$. Since
+\[
+ f(z) \mapsto (cz + d)^{-k} f(\gamma(z))
+\]
+is a \emph{group action} of $\Gamma(1)$ on functions on $\H$, it suffices to check that $f$ is invariant under the generators $S$ and $T$. Thus, (i) is equivalent to
+\[
+ f(z + 1) = f(z),\quad f(-1/z) = z^k f(z).
+\]
+How do we interpret (ii)? We know $f$ is $\Z$-periodic. If we write $q = e^{2\pi i z}$, then we have $z \in \H$ iff $0 < |q| < 1$, and moreover, if two different $z$ give the same $q$, then the values of $f$ on the two $z$ agree. In other words, $f(z)$ only depends on $q$, and thus there exists a holomorphic function $\tilde{f}(q)$ on $\{0< |q| < 1\}$ such that
+\[
+ \tilde{f}(e^{2\pi i z}) = f(z).
+\]
+Explicitly, we can write
+\[
+ \tilde{f}(q) = f\left(\frac{1}{2\pi i} \log q\right).
+\]
+By definition, $\tilde{f}$ is a holomorphic function on a punctured disk. So we have a Laurent expansion
+\[
+ \tilde{f}(q) = \sum_{n = -\infty}^\infty a_n (f) q^n,
+\]
+called the \term{Fourier expansion} or \term{$q$-expansion} of $f$. We say $f$ is meromorphic (resp. holomorphic) at $\infty$ if $\tilde{f}$ is meromorphic (resp. holomorphic) at $q = 0$.
+
+In other words, it is meromorphic at $\infty$ if $a_n(f) = 0$ for $n$ sufficiently negative, and holomorphic if $a_n(f) = 0$ for all $n < 0$. The latter just says $f(z)$ is bounded as $\Im(z) \to \infty$.
+
+The following definition is also convenient:
+\begin{defi}[Cusp form]\index{cusp form}\index{modular form!cusp form}
+ A modular form $f$ is a \emph{cusp form} if the constant term $a_0(f)$ is $0$.
+\end{defi}
+We will later see that ``almost all'' modular forms are cusp forms.
+
+In this case, we have
+\[
+ \tilde{f} = \sum_{n\geq 1} a_n(f) q^n.
+\]
+From now on, we will drop the $\tilde{\;}$, which should not cause confusion.
+
+\begin{defi}[Weak modular form]\index{weak modular form}\index{modular form!weak}
+ A \emph{weak modular form} is a holomorphic function on $\H$ satisfying (i) which is \emph{meromorphic} at $\infty$.
+\end{defi}
+We will use these occasionally.
+
+The transformation rule for modular forms seems rather strong. So, are there actually modular forms? It turns out that there are quite a lot of modular forms, and remarkably, there is a relatively easy way of listing them all.
+
+The main class (and in fact, as we will later see, a generating class) of modular forms is due to Eisenstein. This has its origin in the theory of elliptic functions, but we will not go into that.
+
+\begin{defi}[Eisenstein series]\index{Eisenstein series}
+ Let $k \geq 4$ be even. We define
+ \[
+ G_k(z) = \sum_{\substack{m, n \in \Z\\ (m, n) \not= (0, 0)}} \frac{1}{(mz + n)^k} = \sideset{}{'}\sum_{(m, n) \in \Z^2} \frac{1}{(mz + n)^k}.
+ \]
+\end{defi}
+Here the $\sum'$ denotes that we are omitting $0$, and in general, it means we don't sum over things we obviously don't want to sum over.
+
+When we just write down this series, it is not clear that it is a modular form, or even that it converges. This is given by the following theorem:
+\begin{thm}
+ $G_k$ is a modular form of weight $k$ and level $1$. Moreover, its $q$-expansion is
+ \[
+ G_k(z) = 2 \zeta(k) \left(1 - \frac{2k}{B_k} \sum_{n \geq 1} \sigma_{k - 1}(n) q^n\right),\tag{$1$}
+ \]
+ where\index{$\sigma_r(n)$}
+ \[
+ \sigma_r(n) = \sum_{1 \leq d \mid n} d^r.
+ \]
+\end{thm}
+
+Convergence of the series follows from the following more general result. Note that since $z \not \in \R$, we know $\{1, z\}$ is an $\R$-basis for $\C$.
+\begin{prop}
+ Let $(e_1, \cdots, e_d)$ be some basis for $\R^d$. Then if $r \in \R$, the series
+ \[
+ \sideset{}{'}\sum_{\mathbf{m} \in \Z^d} \|m_1 e_1 + \cdots + m_d e_d\|^{-r}
+ \]
+ converges iff $r > d$.
+\end{prop}
+
+\begin{proof}
+ The function
+ \[
+ (x_i) \in \R^d \mapsto \left\|\sum_{i = 1}^d x_i e_i \right\|
+ \]
+ is a norm on $\R^d$. As any two norms on $\R^d$ are equivalent, we know this is equivalent to the sup norm $\|\ph\|_\infty$. So the series converges iff the corresponding series
+ \[
+ \sum_{\mathbf{m} \in \Z^d}' \|\mathbf{m}\|_\infty^{-r}
+ \]
+ converges. But for $N \geq 1$, the number of $\mathbf{m} \in \Z^d$ such that $\|\mathbf{m}\|_\infty = N$ is $(2N + 1)^d - (2N - 1)^d \sim 2^d d N^{d - 1}$. So the series converges iff
+ \[
+ \sum_{N \geq 1} N^{-r} N^{d - 1}
+ \]
+ converges, which is true iff $r > d$.
+\end{proof}
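The lattice point count in the proof is easy to confirm by brute force for $d = 2$, where $(2N+1)^2 - (2N-1)^2 = 8N$:

```python
from itertools import product

def shell_count(N, d=2):
    """Number of m in Z^d with ||m||_inf exactly N, by brute force."""
    return sum(1 for m in product(range(-N, N + 1), repeat=d)
               if max(abs(x) for x in m) == N)

for N in (1, 2, 3):
    print(N, shell_count(N), (2 * N + 1) ** 2 - (2 * N - 1) ** 2)
```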
+
+\begin{proof}[Proof of theorem]
+ The convergence of the Eisenstein series follows by applying this to $\R^2 \cong \C$ with $d = 2$, $r = k$ and basis $\{z, 1\}$. So the series is absolutely convergent. Therefore we can simply compute
+ \[
+ G_k(z + 1) = \sum_{m, n}' \frac{1}{(mz + (m + n))^k} = G_k(z).
+ \]
+ Also we can compute
+ \[
+ G_k\left(-\frac{1}{z}\right) = \sum_{m, n}' \frac{z^k}{(-m + nz)^k} = z^k G_k(z).
+ \]
+ So $G_k$ satisfies the invariance property. To show that $G_k$ is holomorphic, and holomorphic at infinity, we'll derive the $q$-expansion $(1)$.
+\end{proof}
+
+\begin{lemma}
+ \[
+ \sum_{n = -\infty}^\infty \frac{1}{(n + w)^k} = \frac{(-2\pi i)^k}{(k - 1)!} \sum_{d = 1}^\infty d^{k - 1} e^{2\pi i d w}
+ \]
+ for any $w \in \H$ and $k \geq 2$.
+\end{lemma}
+There are (at least) two ways to prove this. One of them is to use the series for the cotangent, but here we will use Poisson summation.
+
+\begin{proof}
+ Let
+ \[
+ f(x) = \frac{1}{(x + w)^k}.
+ \]
+ We compute
+ \[
+ \hat{f}(y) = \int_{-\infty}^\infty \frac{e^{-2\pi i x y}}{(x + w)^k}\;\d x.
+ \]
+ We evaluate this as a contour integral. The integrand has a pole at $z = -w$, which lies in the lower half plane. If $y > 0$, then we close the contour downwards, and we have
+ \[
+ \hat{f}(y) = -2\pi i \Res_{z = -w} \frac{e^{-2\pi i y z}}{(z + w)^k} = -2\pi i \frac{(-2\pi i y)^{k - 1}}{(k - 1)!} e^{2\pi i yw}.
+ \]
+ If $y \leq 0$, then we close in the upper half plane, and since there is no pole, we have $\hat{f}(y) = 0$. So we have
+ \[
+ \sum_{n = -\infty}^\infty \frac{1}{(n + w)^k} = \sum_{n \in \Z} f(n) = \sum_{d \in \Z} \hat{f}(d) = \frac{(-2\pi i)^k}{(k - 1)!} \sum_{d \geq 1} d^{k - 1} e^{2\pi i d w}
+ \]
+ by Poisson summation formula.
+\end{proof}
+
+Note that when we proved the Poisson summation formula, we required $f$ to decrease very rapidly at infinity, and our $f$ does not satisfy that condition. However, we can go back and check that the proof still works in this case.
+
+Now we get back to the Eisenstein series. Note that since $k$ is even, we can drop certain annoying signs. We have
+\begin{align*}
+ G_k(z) &= 2 \sum_{n \geq 1} \frac{1}{n^k} + 2 \sum_{m \geq 1} \sum_{n \in \Z} \frac{1}{(n + mz)^k}\\
+ &= 2 \zeta(k) + 2 \sum_{m \geq 1} \frac{(2\pi i )^k}{(k - 1)!} \sum_{d \geq 1} d^{k - 1} q^{dm}\\
+ &= 2 \zeta(k) + 2 \frac{(2\pi i)^k}{(k - 1)!} \sum_{n \geq 1} \sigma_{k - 1}(n) q^n.
+\end{align*}
+Then the result follows from the fact that
+\[
+ \zeta(k) = -\frac{1}{2} (2\pi i)^k \frac{B_k}{k!}.
+\]
+So we see that $G_k$ is holomorphic in $\H$, and is also holomorphic at $\infty$.
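This relation between $\zeta(k)$ and the Bernoulli numbers can be checked numerically for small even $k$ (the partial-sum cut-off is an ad hoc choice):

```python
import math

B = {2: 1/6, 4: -1/30, 6: 1/42}   # Bernoulli numbers from the text
for k in (2, 4, 6):
    zeta_k = sum(n ** -k for n in range(1, 200000))       # partial sum
    formula = (-0.5 * (2j * math.pi) ** k * B[k] / math.factorial(k)).real
    print(k, zeta_k, formula)
```

For $k = 2$ this recovers $\zeta(2) = \pi^2/6$, for $k = 4$ it gives $\pi^4/90$, and so on.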
+
+It is convenient to introduce a \emph{normalized} Eisenstein series
+\begin{defi}[Normalized Eisenstein series]\index{normalized Eisenstein series}\index{Eisenstein series!normalized}
+ We define
+ \begin{align*}
+ E_k(z) &= (2 \zeta(k))^{-1} G_k(z) \\
+ &= 1 - \frac{2k}{B_k} \sum_{n \geq 1} \sigma_{k - 1}(n) q^n\\
+ &= \frac{1}{2} \sum_{\substack{(m, n) = 1\\m , n \in \Z}} \frac{1}{(mz + n)^k}.
+ \end{align*}
+\end{defi}
+The last line follows by taking out any common factor of $m, n$ in the series defining $G_k$.
+
+Thus, to figure out the (normalized) Eisenstein series, we only need to know the Bernoulli numbers.
+\begin{eg}
+ We have
+ \begin{align*}
+ B_2 &= \frac{1}{6}, & B_4 &= \frac{-1}{30}, & B_6 &= \frac{1}{42}, & B_8 &= \frac{-1}{30}\\
+ B_{10} &= \frac{5}{66}, & B_{12} &= \frac{-691}{2730}, & B_{14} &= \frac{7}{6}.
+ \end{align*}
+ Using these, we find
+ \begin{align*}
+ E_4 &= 1 + 240 \sum \sigma_3(n) q^n\\
+ E_6 &= 1 - 504 \sum \sigma_5 (n) q^n\\
+ E_8 &= 1 + 480 \sum \sigma_7(n) q^n\\
+ E_{10} &= 1 - 264 \sum \sigma_9(n) q^n\\
+ E_{12} &= 1 + \frac{65520}{691} \sum \sigma_{11}(n) q^n\\
+ E_{14} &= 1 - 24 \sum \sigma_{13}(n) q^n.
+ \end{align*}
+\end{eg}
+We notice that there is a simple pattern for $k \leq 14$, except for $k = 12$.
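We can also test the $q$-expansion numerically, comparing a truncated lattice sum for $G_4(i)$ against $2\zeta(4) E_4(i)$ (the truncation bound and tolerance are ad hoc choices):

```python
import cmath
import math

z = 1j                       # test point in H
M = 200                      # truncation of the lattice sum
lattice = sum(1 / (m * z + n) ** 4
              for m in range(-M, M + 1) for n in range(-M, M + 1)
              if (m, n) != (0, 0))

def sigma(r, n):
    return sum(d ** r for d in range(1, n + 1) if n % d == 0)

q = cmath.exp(2j * math.pi * z)
E4 = 1 + 240 * sum(sigma(3, n) * q ** n for n in range(1, 20))
print(lattice.real, 2 * (math.pi ** 4 / 90) * E4.real)  # approximately equal
```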
+
+For more general analysis of modular forms, it is convenient to consider the following notation:
+\begin{defi}[Slash operator]\index{slash operator}
+ Let
+ \[
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} = \gamma \in \GL_2(\R)^+,\quad z \in \H,
+ \]
+ and $f: \H \to \C$ any function. We write\index{$j(\gamma, z)$}\index{$f\underset{k}{|}\gamma$}
+ \[
+ j(\gamma, z) = cz + d.
+ \]
+ We define the \emph{slash operator} to be
+ \[
+ (f\underset{k}{|} \gamma) (z) = (\det \gamma)^{k/2} j(\gamma, z)^{-k} f(\gamma(z)).
+ \]
+\end{defi}
+Note that some people leave out the $(\det \gamma)^{k/2}$ factor. With this factor included, a scalar matrix $\gamma = aI$ acts by
+\[
+ f\underset{k}{|} \gamma = \sgn(a)^k f,
+\]
+so scalars act only through a sign, whereas without it they would rescale $f$ by $a^{-k}$, which is annoying. In this notation, condition (i) for $f$ to be a modular form is just
+\[
+ f\underset{k}{|}\gamma = f
+\]
+for all $\gamma \in \SL_2(\Z)$.
+
+To prove things about our $j$ operator, it is convenient to note that
+\[
+ \gamma
+ \begin{pmatrix}
+ z\\1
+ \end{pmatrix}
+ = j(\gamma, z)
+ \begin{pmatrix}
+ \gamma(z)\\1
+ \end{pmatrix}.\tag{$*$}
+\]
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $j(\gamma\delta, z) = j(\gamma, \delta(z)) j(\delta, z)$ (in fancy language, we say $j$ is a 1-cocycle).
+ \item $j(\gamma^{-1}, z) = j (\gamma, \gamma^{-1}(z))^{-1}$.
+ \item $\gamma: f \mapsto f\underset{k}{|} \gamma$ is a (right) action of $G = \GL_2(\R)^+$ on functions on $\H$. In other words,
+ \[
+ f\underset{k}{|} \gamma \underset{k}{|} \delta = f\underset{k}{|}(\gamma\delta).
+ \]
+ \end{enumerate}
+\end{prop}
+Note that this implies that if $\Gamma \leq \GL_2(\R)^+$ and $\Gamma = \bra \gamma_1, \cdots, \gamma_m\ket$, then
+\[
+ f\underset{k}{|} \gamma = f \text{ for all } \gamma \in \Gamma \Longleftrightarrow f\underset{k}{|} \gamma_i = f\text{ for all }i = 1, \cdots, m.
+\]
+The proof is just a computation.
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We have
+ \[
+ j(\gamma\delta, z)
+ \begin{pmatrix}
+ \gamma\delta(z)\\
+ 1
+ \end{pmatrix} =
+ \gamma\delta
+ \begin{pmatrix}
+ z\\1
+ \end{pmatrix}
+ =
+ j(\delta, z) \gamma
+ \begin{pmatrix}
+ \delta(z)\\
+ 1
+ \end{pmatrix}
+ =
+ j(\delta, z) j(\gamma, \delta(z))
+ \begin{pmatrix}
+ \gamma\delta(z)\\1
+ \end{pmatrix}
+ \]
+ \item Take $\delta = \gamma^{-1}$.
+ \item We have
+ \begin{align*}
+ ((f\underset{k}{|} \gamma) \underset{k}{|} \delta)(z) &= (\det \delta)^{k/2} j(\delta, z)^{-k} (f\underset{k}{|} \gamma) (\delta(z))\\
+ &= (\det \delta)^{k/2} j(\delta, z)^{-k} (\det \gamma)^{k/2} j(\gamma, \delta(z))^{-k} f(\gamma\delta(z))\\
+ &= (\det \gamma \delta)^{k/2} j(\gamma \delta, z)^{-k} f(\gamma\delta(z))\\
+ &= (f\underset{k}{|} \gamma\delta) (z).\qedhere
+ \end{align*}%\qedhere
+ \end{enumerate}
+\end{proof}
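The right action property is easy to test numerically with an arbitrary function (the sample matrices and the choice $f(z) = z^2 + 1$ are ours, for illustration only):

```python
def matmul(g, h):
    (a, b), (c, d) = g
    (e, f), (p, q) = h
    return ((a * e + b * p, a * f + b * q), (c * e + d * p, c * f + d * q))

def slash(f, k, g):
    """(f |_k g)(z) = det(g)^{k/2} (cz + d)^{-k} f(g(z)), for even k."""
    (a, b), (c, d) = g
    det = a * d - b * c
    return lambda z: det ** (k // 2) * (c * z + d) ** (-k) * f((a * z + b) / (c * z + d))

k = 4
f = lambda z: z ** 2 + 1          # an arbitrary test function
gamma = ((2, 1), (3, 2))          # det 1
delta = ((2, 0), (0, 1))          # det 2, to exercise the det factor
z = 0.2 + 1.3j
lhs = slash(slash(f, k, gamma), k, delta)(z)
rhs = slash(f, k, matmul(gamma, delta))(z)
print(lhs, rhs)  # equal
```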
+Back to the Eisenstein series. The $G_k$ arise naturally in the theory of elliptic functions, as coefficients in the Laurent expansion of the Weierstrass $\wp$-function.
+
+There is another group-theoretic interpretation, which generalizes in many ways. Consider
+\[
+ \Gamma(1)_\infty = \left\{
+ \pm
+ \begin{pmatrix}
+ 1 & n\\
+ 0 & 1
+ \end{pmatrix}: n \in \Z
+ \right\} \leq \Gamma(1) = \SL_2(\Z),
+\]
+which is the stabilizer of $\infty$. If
+\[
+ \delta = \pm
+ \begin{pmatrix}
+ 1 & n\\
+ 0 & 1
+ \end{pmatrix}
+ \in \Gamma(1)_\infty,
+\]
+then we have
+\[
+ j(\delta\gamma, z) = j(\delta, \gamma(z)) j(\gamma, z) = \pm j(\gamma, z).
+\]
+So $j(\gamma, z)^2$ depends only on the coset $\Gamma(1)_\infty \gamma$. We can also check that if
+\[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix},\quad
+ \gamma' =
+ \begin{pmatrix}
+ a' & b'\\
+ c' & d'
+ \end{pmatrix} \in \Gamma(1),
+\]
+then $\Gamma(1)_\infty \gamma = \Gamma(1)_\infty \gamma'$ iff $(c, d) = \pm (c', d')$.
+
+Moreover, $\gcd(c, d) = 1$ iff there exist $a, b$ such that
+\[
+ \begin{vmatrix}
+ a & b\\
+ c & d
+ \end{vmatrix} = 1.
+\]
+We therefore have
+\[
+ E_k(z) = \sum_{\gamma \in \Gamma(1)_\infty \setminus \Gamma(1)} j(\gamma, z)^{-k},
+\]
+where we sum over (any) coset representatives of $\Gamma(1)_\infty$.
+
+We can generalize this in two ways. We can either replace $j$ with some other appropriate function, or change the groups.
+
+\subsection{The space of modular forms}
+In this section, we are going to find out \emph{all} modular forms! For $k \in \Z$, we write $M_k = M_k(\Gamma(1))$\index{$M_k$}\index{$M_k(\Gamma(1))$} for the set of modular forms of weight $k$ (and level $1$). We write $S_k = S_k(\Gamma(1)) \subseteq M_k$\index{$S_k$}\index{$S_k(\Gamma)$} for the subspace of cusp forms. These are $\C$-vector spaces, and are zero for odd $k$.
+
+Moreover, from the definition, we have a natural product
+\[
+ M_k \cdot M_\ell \subseteq M_{k + \ell}.
+\]
+Likewise, we have
+\[
+ S_k\cdot M_\ell \subseteq S_{k + \ell}.
+\]
+We let\index{$M_*$}\index{$S_*$}
+\[
+ M_* = \bigoplus_{k \in \Z} M_k,\quad S_* = \bigoplus_{k \in \Z} S_k.
+\]
+Then $M_*$ is a graded ring and $S_*$ is a graded ideal. By definition, we have
+\[
+ S_k = \ker (a_0: M_k \to \C).
+\]
+To figure out what all the modular forms are, we use the following constraints on the zeroes of a modular form:
+
+\begin{prop}
+ Let $f$ be a weak modular form (i.e.\ it can be meromorphic at $\infty$) of weight $k$ and level $1$. If $f$ is not identically zero, then
+ \[
+ \left(\sum_{z_0 \in \mathcal{D} \setminus \{i, \rho\}} \ord_{z_0} (f)\right) + \frac{1}{2} \ord_i(f) + \frac{1}{3} \ord_\rho f + \ord_\infty(f) = \frac{k}{12},
+ \]
+ where $\ord_\infty f$ is the least $r \in \Z$ such that $a_r(f) \not= 0$.
+\end{prop}
+Note that if $\gamma \in \Gamma(1)$, then $j(\gamma, z) = cz + d$ is never $0$ for $z \in \H$. So it follows that $\ord_z f = \ord_{\gamma(z)} f$.
+
+We will prove this using the argument principle.
+\begin{proof}
+ Note that the function $\tilde{f}(q)$ is non-zero for $0 < |q| < \varepsilon$ for some small $\varepsilon$ by the principle of isolated zeroes. Setting
+ \[
+ \varepsilon = e^{-2\pi R},
+ \]
+ we know $f(z) \not= 0$ if $\Im z \geq R$.
+
+ In particular, the number of zeroes of $f$ in $\mathcal{D}$ is finite. We consider the integral along the following contour, counterclockwise.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.7]
+% \draw [gray] (-2, -5) -- (-2, 1) -- (2, 1) -- (2, -5);
+% \draw [gray] (0, -7.46) circle [radius=4];
+% \draw [gray] (2, -4) circle [radius=0.3];
+% \draw [gray] (-2, -4) circle [radius=0.3];
+% \draw [gray] (0, -3.46) circle [radius=0.3];
+% \draw (-1, 2) -- (1, 2) -- (1, -1.5) arc(90:164.48:0.5) arc(88.96:90:1);
+
+ \node [circ] at (2, -4) {};
+ \node [right] at (2, -4) {$\rho$};
+ \node [circ] at (-2, -4) {};
+ \node [left] at (-2, -4) {$\rho^2$};
+ \node [circ] at (0, -3.46) {};
+ \node [below] at (0, -3.46) {$i$};
+
+ \node [above] at (2, 1) {$\frac{1}{2} + iR$};
+ \node [above] at (-2, 1) {$-\frac{1}{2} + iR$};
 \draw (-2, 1) -- (2, 1) -- (2, -3.7) arc(90:152.15:0.3) arc(64.298:85.7018:4) node [pos=0.5, above] {$C'$} arc(-2.149:182.149:0.3) arc(94.2982:115.702:4) node [pos=0.5, above] {$C$} arc(27.5:90:0.3) -- cycle;
+ \end{tikzpicture}
+ \end{center}
 We assume $f$ has no zeroes along the contour. Otherwise, we need to indent the contour around the zeroes of $f$ on it, which is a rather standard complex analytic manoeuvre we will not go through.
+
+ For $\varepsilon$ sufficiently small, we have
+ \[
 \int_\Gamma \frac{f'(z)}{f(z)}\;\d z = 2\pi i \sum_{z_0 \in \mathcal{D} \setminus \{i, \rho\}} \ord_{z_0} f
+ \]
+ by the argument principle. Now the top integral is
+ \[
 \int_{\frac{1}{2} + iR}^{-\frac{1}{2} + iR} \frac{f'}{f}\;\d z = -\int_{|q| = \varepsilon} \frac{\frac{\d \tilde{f}}{\d q}}{\tilde{f}(q)} \;\d q = -2\pi i \ord_{\infty} f.
+ \]
+ As $\frac{f'}{f}$ has at worst a simple pole at $z = i$, the residue is $\ord_i f$. Since we are integrating along only half the circle, as $\varepsilon \to 0$, we pick up
+ \[
+ - \pi i \res = - \pi i \ord_i f.
+ \]
+ Similarly, we get $-\frac{2}{3} \pi i \ord_\rho f$ coming from $\rho$ and $\rho^2$.
+
+ So it remains to integrate along the bottom circular arcs. Now note that $S: z \mapsto -\frac{1}{z}$ maps $C$ to $C'$ with opposite orientation, and
+ \[
+ \frac{\d f(Sz)}{f(Sz)} = k \frac{\d z}{z} + \frac{\d f(z)}{f(z)}
+ \]
+ as
+ \[
+ f(Sz) = z^k f(z).
+ \]
+ So we have
+ \begin{align*}
 \int_C \frac{f'}{f}\;\d z + \int_{C'} \frac{f'}{f}\;\d z &= \int_{C'} \frac{f'}{f}\;\d z - \int_{C'} \left(\frac{k}{z} \;\d z + \frac{f'}{f}\;\d z\right)\\
 &= -k \int_{C'} \frac{\d z}{z}\\
+ &\to k \int_\rho^i \frac{\d z}{z} \\
+ &= \frac{\pi i k}{6}.
+ \end{align*}
+ So taking the limit $\varepsilon \to 0$ gives the right result.
+\end{proof}
+
+\begin{cor}
+ If $k < 0$, then $M_k = \{0\}$.
+\end{cor}
+
+\begin{cor}
+ If $k = 0$, then $M_0 = \C$, the constants, and $S_0 = \{0\}$.
+\end{cor}
+
+\begin{proof}
 If $f \in M_0$, let $g = f - f(i) \in M_0$. If $f$ is not constant, then $g$ is not identically zero and $\ord_i g \geq 1$, so the LHS of the formula in the proposition is $> 0$ while the RHS is $0$, a contradiction. So $f$ is constant, i.e.\ $M_0 = \C$.

 Of course, for constant $f$ we have $a_0(f) = f$. So $S_0 = \{0\}$.
+\end{proof}
+
+\begin{cor}
+ \[
+ \dim M_k \leq 1 + \frac{k}{12}.
+ \]
+ In particular, they are finite dimensional.
+\end{cor}
+
+\begin{proof}
+ We let $f_0, \cdots, f_d$ be $d + 1$ elements of $M_k$, and we choose distinct points $z_1, \cdots, z_d \in \mathcal{D} \setminus \{i, \rho\}$. Then there exists $\lambda_0, \cdots, \lambda_d \in \C$, not all $0$, such that
+ \[
+ f = \sum_{i = 0}^d \lambda_i f_i
+ \]
 vanishes at all these points. Now if $d > \frac{k}{12}$ and $f \not\equiv 0$, then the LHS of the formula in the proposition is $\geq d > \frac{k}{12}$, which is impossible. So $f \equiv 0$, i.e.\ the $(f_i)$ are linearly dependent. Hence $\dim M_k < d + 1$.
+\end{proof}
+
+\begin{cor}\leavevmode
+ $M_2 = \{0\}$ and $M_k = \C E_k$ for $4 \leq k \leq 10$ ($k$ even). We also have $E_8 = E_4^2$ and $E_{10} = E_4 E_6$.
+\end{cor}
+
+\begin{proof}
 Only $M_2 = \{0\}$ requires proof. If $0 \not= f \in M_2$, then the formula above gives
+ \[
+ a + \frac{b}{2} + \frac{c}{3} = \frac{1}{6}
+ \]
+ for integers $a, b, c \geq 0$, which is not possible.
+
 Alternatively, if $0 \not= f \in M_2$, then $f^2 \in M_4 = \C E_4$ and $f^3 \in M_6 = \C E_6$, so $f^6$ is proportional to both $E_4^3$ and $E_6^2$. Comparing constant coefficients would force $E_4^3 = E_6^2$, which is not the case, as we will soon see.
+
 To see that $E_8 = E_4^2$ exactly, and not just up to a scalar, note that $\dim M_8 = 1$, so $E_8$ is a multiple of $E_4^2$, and both have constant coefficient $1$. Similarly $E_{10} = E_4 E_6$.
+\end{proof}
+
+\begin{cor}
+ The cusp form of weight $12$ is
+ \[
+ E_4^3 - E_6^2 = (1 + 240 q + \cdots)^3 - (1 - 504 q + \cdots)^2 = 1728q + \cdots.
+ \]
+\end{cor}
+Note that $1728 = 12^3$.
+
+\begin{defi}[$\Delta$ and $\tau$]\index{$\Delta$}\index{$\tau(n)$}
+ \[
+ \Delta = \frac{E_4^3 - E_6^2}{1728} = \sum_{n \geq 1} \tau(n) q^n \in S_{12}.
+ \]
+\end{defi}
+This function $\tau$ is very interesting, and is called \term{Ramanujan's $\tau$-function}. It has nice arithmetic properties we'll talk about soon.
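As a quick numerical sanity check (ours, not part of the notes; the helper names \texttt{sigma} and \texttt{mul} are made up), we can compute the first few $\tau(n)$ directly from the $q$-expansions of $E_4$ and $E_6$:

```python
# Truncated q-expansions as Python lists: f[n] is the coefficient of q^n.
N = 11

def sigma(r, n):
    """sigma_r(n): sum of r-th powers of the positive divisors of n."""
    return sum(d**r for d in range(1, n + 1) if n % d == 0)

def mul(f, g):
    """Product of two q-expansions, truncated modulo q^N."""
    h = [0] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]

# Delta = (E_4^3 - E_6^2)/1728; the division is exact on integer coefficients.
Delta = [(x - y) // 1728 for x, y in zip(mul(mul(E4, E4), E4), mul(E6, E6))]
tau = {n: Delta[n] for n in range(1, N)}

assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252
```

The values $1, -24, 252, -1472, 4830, -6048, \ldots$ agree with the standard tables, and already exhibit $\tau(6) = \tau(2)\tau(3)$.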
+
+The following is a crucial property of $\Delta$:
+\begin{prop}
+ $\Delta(z) \not= 0$ for all $z \in \H$.
+\end{prop}
+
+\begin{proof}
+ We have
+ \[
+ \sum_{z_0 \not= i, \rho} \ord_{z_0} \Delta + \frac{1}{2} \ord_i \Delta + \frac{1}{3} \ord_\rho \Delta + \ord_\infty \Delta = \frac{k}{12} = 1.
+ \]
 Since $\ord_\infty \Delta = 1$ (as $\tau(1) = 1$), all the other terms on the left must vanish. In particular, $\Delta$ has no zeroes on $\H$.
+\end{proof}
+
+It follows from this that
+\begin{prop}
 The map $f \mapsto \Delta f$ is an isomorphism $M_{k - 12}(\Gamma(1)) \to S_k(\Gamma(1))$ for all $k \geq 12$.
+\end{prop}
+
+\begin{proof}
 Since $\Delta \in S_{12}$, if $f \in M_{k - 12}$, then $\Delta f \in S_k$. So the map is well-defined, and is certainly injective. Now if $g \in S_k$, then since $\ord_\infty \Delta = 1 \leq \ord_\infty g$ and $\Delta$ has no zeroes on $\H$, the quotient $\frac{g}{\Delta}$ is holomorphic on $\H$ and at $\infty$. So $\frac{g}{\Delta}$ is a modular form of weight $k - 12$, and the map is surjective.
+\end{proof}
+
+Thus, we find that
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item We have
+ \[
+ \dim M_k (\Gamma(1)) =
+ \begin{cases}
+ 0 & k < 0\text{ or }k \text{ odd }\\
+ \left\lfloor \frac{k}{12}\right\rfloor & k > 0, k \equiv 2 \pmod{12}\\
+ 1 + \left\lfloor \frac{k}{12}\right\rfloor & \text{otherwise}
+ \end{cases}
+ \]
+ \item If $k > 4$ and even, then
+ \[
+ M_k = S_k \oplus \C E_k.
+ \]
+ \item Every element of $M_k$ is a polynomial in $E_4$ and $E_6$.
+ \item Let
+ \[
+ b =
+ \begin{cases}
+ 0 & k\equiv 0 \pmod 4\\
+ 1 & k\equiv 2 \pmod 4
+ \end{cases}.
+ \]
+ Then
+ \[
 \{h_j = \Delta^j E_6^b E_4^{(k - 12j - 6b)/4} : 0 \leq j < \dim M_k\}
 \]
 is a basis for $M_k$, and
+ \[
+ \{h_j : 1 \leq j < \dim M_k\}
+ \]
+ is a basis for $S_k$.
+ \end{enumerate}
+\end{thm}
+\begin{proof}\leavevmode
+ \begin{enumerate}
 \item[(ii)] $S_k$ is the kernel of the homomorphism $M_k \to \C$ sending $f \mapsto a_0(f)$. So $S_k$ has codimension at most $1$ in $M_k$, and $E_k \in M_k \setminus S_k$ since $a_0(E_k) = 1$. So we are done.
+ \item[(i)] For $k < 12$, this agrees with what we have already proved. By the proposition, we have
+ \[
+ \dim M_{k - 12} = \dim S_k.
+ \]
+ So we are done by induction and (ii).
+ \item[(iii)] This is true for $k < 12$. If $k \geq 12$ is even, then we can find $a, b \geq 0$ with $4a + 6b = k$. Then $E_4^a E_6^b \in M_k$, and is not a cusp form. So
+ \[
+ M_k = \C E_4^a E_6^b \oplus \Delta M_{k - 12}.
+ \]
 But $\Delta$ is a polynomial in $E_4, E_6$, so we are done by induction on $k$.
 \item[(iv)] By (i), we know $k - 12j - 6b \geq 0$ for $j < \dim M_k$, and is a multiple of $4$. So $h_j \in M_k$. Next note that the $q$-expansion of $h_j$ begins with $q^j$. So they are all linearly independent.\qedhere
+ \end{enumerate}
+\end{proof}
+So we have completely determined all modular forms, and this is the end of the course.
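The dimension formula can be cross-checked numerically: granting that $E_4$ and $E_6$ are algebraically independent (which is true, though not proved above), part (iii) says $\dim M_k$ equals the number of monomials $E_4^a E_6^b$ of weight $k$. A small Python sketch:

```python
# Closed-form dimension of M_k (level 1), as in part (i) of the theorem.
def dim_Mk(k):
    if k < 0 or k % 2 == 1:
        return 0
    if k % 12 == 2:
        return k // 12
    return 1 + k // 12

# Compare with the number of solutions of 4a + 6b = k with a, b >= 0.
for k in range(0, 200, 2):
    monomials = sum(1 for a in range(k // 4 + 1)
                      for b in range(k // 6 + 1) if 4 * a + 6 * b == k)
    assert dim_Mk(k) == monomials
```

For instance, $\dim M_{12} = 2$ (from $E_4^3$ and $E_6^2$, or equivalently $E_4^3$ and $\Delta$), while $\dim M_{14} = 1$.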
+
+\newpage
+\subsection{Arithmetic of \tph{$\Delta$}{Delta}{Δ}}
+Recall that we had
+\[
+ \Delta = \sum \tau(n) q^n,
+\]
+and we knew
+\[
+ \tau(1) = 1,\quad \tau(n) \in \Q.
+\]
+In fact, more is true.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\tau(n) \in \Z$ for all $n \geq 1$.
 \item $\tau(n) \equiv \sigma_{11}(n) \pmod {691}$.
+ \end{enumerate}
+\end{prop}
+The function $\tau$ satisfies many more equations, some of which are on the second example sheet.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We have
+ \[
+ 1728 \Delta = (1 + 240 A_3(q))^3 - (1 - 504 A_5 (q))^2,
+ \]
+ where
+ \[
+ A_r = \sum_{n \geq 1} \sigma_r(n) q^n.
+ \]
+ We can write this as
+ \[
+ 1728\Delta = 3\cdot 240 A_3 + 3\cdot 240^2 A_3^2 + 240^3 A_3^3 + 2\cdot 504 A_5 - 504^2 A_5^2.
+ \]
+ Now recall the deep fact that $1728 = 12^3$ and $504 = 21 \cdot 24$.
+
+ Modulo 1728, this is equal to
+ \[
+ 720 A_3 + 1008 A_5.
+ \]
 Since $720 = 144 \cdot 5$, $1008 = 144 \cdot 7$ and $1728 = 144 \cdot 12$, it suffices to show that
 \[
 5 \sigma_3(n) + 7 \sigma_5(n) \equiv 0 \pmod {12}.
+ \]
+ In other words, we need
+ \[
+ 5 d^3 + 7 d^5 \equiv 0 \pmod {12},
+ \]
+ and we can just check this manually for all $d$.
+ \item Consider
+ \[
+ E_4^3 = 1 + \sum_{n \geq 1} b_n q^n
+ \]
+ with $b_n \in \Z$. We also have
+ \[
+ E_{12} = 1 + \frac{65520}{691} \sum_{n \geq 1} \sigma_{11}(n) q^n.
+ \]
+ Also, we know
+ \[
+ E_{12} - E_4^3 \in S_{12}.
+ \]
+ So it is equal to $\lambda \Delta$ for some $\lambda \in \Q$. So we find that for all $n \geq 1$, we have
+ \[
 \frac{65520}{691} \sigma_{11}(n) - b_n = \lambda \tau(n).
+ \]
+ In other words,
+ \[
+ 65520 \sigma_{11}(n) - 691 b_n = \mu \tau(n)
+ \]
 for some $\mu \in \Q$.
+
+ Putting $n = 1$, we know $\tau(1) = 1$, $\sigma_{11}(1) = 1$, and $b_1 \in \Z$. So $\mu \in \Z$ and $\mu \equiv 65520 \pmod {691}$. So for all $n \geq 1$, we have
+ \[
+ 65520 \sigma_{11}(n) \equiv 65520 \tau(n) \pmod {691}.
+ \]
+ Since $691$ and $65520$ are coprime, we are done.\qedhere
+ \end{enumerate}
+\end{proof}
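The ``manual check for all $d$'' in part (i) really is a one-liner: since $5d^3 + 7d^5 \bmod 12$ depends only on $d \bmod 12$, twelve residues suffice.

```python
# Check that 5d^3 + 7d^5 is divisible by 12 for every residue d modulo 12,
# which covers all integers d.
divisible = all((5 * d**3 + 7 * d**5) % 12 == 0 for d in range(12))
assert divisible
```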
+This proof is elementary, once we had the structure theorem, but doesn't really explain \emph{why} the congruence is true.
+
+The function $\tau(n)$ was studied extensively by Ramanujan. He proved the $691$ congruence (and many others), and (experimentally) observed that if $(m, n) = 1$, then
+\[
+ \tau(mn) = \tau(m) \tau(n).
+\]
+Also, he observed that for any prime $p$, we have
+\[
+ |\tau(p)| < 2 p^{11/2},
+\]
which was a rather curious thing to notice. Both of these things are true. We will soon prove the first. The second is also true, but uses deep algebraic geometry: it follows from Deligne's proof of the Weil conjectures, for which he was awarded a Fields medal. So it's pretty hard.
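All three statements can be tested numerically from the first thirty coefficients of $\Delta$ (a check, not a proof; the helper names are ours):

```python
# Compute tau(n) for n < 30 from Delta = (E_4^3 - E_6^2)/1728.
N = 30

def sigma(r, n):
    return sum(d**r for d in range(1, n + 1) if n % d == 0)

def mul(f, g):
    h = [0] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]
tau = [(x - y) // 1728 for x, y in zip(mul(mul(E4, E4), E4), mul(E6, E6))]

# tau(n) = sigma_11(n) mod 691:
assert all(tau[n] % 691 == sigma(11, n) % 691 for n in range(1, N))
# multiplicativity on coprime arguments:
assert tau[6] == tau[2] * tau[3] and tau[15] == tau[3] * tau[5]
# Deligne's bound |tau(p)| < 2 p^(11/2) for the primes below 30:
assert all(tau[p]**2 < 4 * p**11 for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29])
```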
+
+We will also prove a theorem of Jacobi:
+\[
+ \Delta = q \prod_{n = 1}^{\infty} (1 - q^n)^{24}.
+\]
+The numbers $\tau(p)$ are related to \emph{Galois representations}.
+
+\subsubsection*{Rationality and integrality}
+So far, we have many series that have rational coefficients in them. Given any subring $R \subseteq \C$, we let $M_k(R) = M_k(\Gamma(1), R)$ be the set of all $f \in M_k$ such that all $a_n(f) \in R$. Likewise, we define $S_k(R)$. For future convenience, we will prove a short lemma about them.
+
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item Suppose $\dim M_k = d + 1 \geq 1$. Then there exists a basis $\{g_j: 0 \leq j \leq d\}$ for $M_k$ such that
+ \begin{itemize}
+ \item $g_j \in M_k(\Z)$ for all $j \in \{0, \cdots, d\}$.
+ \item $a_n(g_j) = \delta_{nj}$ for all $j, n \in \{0, \cdots, d\}$.
+ \end{itemize}
 \item For any subring $R \subseteq \C$, $M_k(R)$ is the free $R$-module generated by $\{g_j\}$; in particular, $M_k(R) \cong R^{d + 1}$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We take our previous basis $h_j = \Delta^j E_6^b E_4^{(k - 12j - 6b)/4} \in M_k(\Z)$. Then we have $a_n(h_n) = 1$, and $a_j(h_n) = 0$ for all $j < n$. Then we just row reduce.
+ \item The isomorphism is given by
+ \[
+ \begin{tikzcd}[cdmap]
+ M_k(R) \ar[r, leftrightarrow] & R^{d + 1}\\
+ f \ar[r, maps to] & (a_n(f))\\
+ \displaystyle \sum_{j = 0}^d c_j g_j & (c_n) \ar[l, maps to]
+ \end{tikzcd}\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
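To see the row reduction of part (i) concretely, here is a sketch for $k = 12$ (our own computation): the theorem's basis is $h_0 = E_4^3$, $h_1 = \Delta$, and one reduction step produces the basis with $a_n(g_j) = \delta_{nj}$.

```python
# Row-reduce the basis {E_4^3, Delta} of M_12 to a basis {g_0, g_1} with
# a_n(g_j) = delta_{nj} for n, j in {0, 1}.
N = 8

def sigma(r, n):
    return sum(d**r for d in range(1, n + 1) if n % d == 0)

def mul(f, g):
    h = [0] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]
h0 = mul(mul(E4, E4), E4)                                 # E_4^3 = 1 + 720q + ...
h1 = [(x - y) // 1728 for x, y in zip(h0, mul(E6, E6))]   # Delta = q - 24q^2 + ...

g1 = h1
g0 = [x - h0[1] * y for x, y in zip(h0, h1)]              # subtract 720 * Delta

assert g0[0] == 1 and g0[1] == 0    # a_n(g_0) = delta_{n0}
assert g1[0] == 0 and g1[1] == 1    # a_n(g_1) = delta_{n1}
```

The reduced basis still has integer coefficients, as the lemma asserts.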
+
+\section{Hecke operators}
+\subsection{Hecke operators and algebras}
+Recall that for $f: \H \to \C$, $\gamma \in \GL_2(\R)^+$ and $k \in \Z$, we defined
+\[
+ (f\underset{k}{|} \gamma)(z) = (\det \gamma)^{k/2} j(\gamma, z)^k f(\gamma(z)),
+\]
+where
+\[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix},\quad j(\gamma, z) = cz + d.
+\]
+We then defined
+\[
 M_k = \{f: f \underset{k}{|} \gamma = f\text{ for all }\gamma \in \Gamma(1)\text{, plus holomorphicity conditions}\}.
+\]
+We showed that these are finite-dimensional, and we found a basis. But there is more to be said about modular forms. Just because we know polynomials have a basis $1, x, x^2, \cdots$ does not mean there isn't anything else to say about polynomials!
+
In this chapter, we will see that $M_k$ has the structure of a module for the \emph{Hecke algebra}. This structure underlies the connection with arithmetic, i.e.\ Galois representations etc.
+
+How might we try to get some extra structure on $M_k$? We might try to see what happens if we let something else in $\GL_2(\R)^+$ act on $f$. Unfortunately, in general, if $f$ is a modular form and $\gamma \in \GL_2(\R)^+$, then $g = f \underset{k}{|} \gamma$ is not a modular form. Indeed, given a $\delta\in \Gamma(1)$, then it acts on $g$ by
+\[
 g\underset{k}{|} \delta = f\underset{k}{|} \gamma\delta = (f\underset{k}{|} \gamma \delta \gamma^{-1})\underset{k}{|}\gamma,
+\]
+and usually $\gamma \delta \gamma^{-1} \not \in \Gamma(1)$. In fact the normalizer of $\Gamma(1)$ in $\GL_2(\R)^+$ is generated by $\Gamma(1)$ and $aI$ for $a \in \R^*$.
+
+It turns out we need to act in a smarter way. To do so, we have to develop quite a lot of rather elementary group theory.
+
+%\begin{lemma}
+% Let $G$ be a group and $\Gamma, \Gamma' \leq G$ be subgroups. We further suppose $\Gamma$ has finite index, and let $\Gamma \gamma_1, \cdots, \Gamma \gamma_n$ be the cosets. Also, we suppose $\Gamma'$ permutes the cosets of $\Gamma$ when acting by right multiplication.
+%
+% Let $M$ be a $G$-module, and let $f \in M^\Gamma$. We define
+% \[
+% g = \sum_{i = 1}^n f \gamma_i.
+% \]
+% Then $g \in \Gamma'$, and does not depend on the choice of the $\gamma_i$.
+%\end{lemma}
+%
+%\begin{proof}
+% If $\Gamma \gamma_i = \Gamma \gamma_i'$, then there are some $h_i \in \Gamma$ such that $\gamma_i = h \gamma_i'$. Then
+% \[
+% g = \sum f \gamma_i = \sum f h_i \gamma_i' = \sum f \gamma_i'
+% \]
+% since $f \in M^\Gamma$. So $g$ does not depend on the choice of the $\gamma_i$.
+%
+% Now let $\delta \in \Gamma'$. By hypothesis, there exists $\pi \in S_n$ and $\varepsilon_i \in \Gamma$ for $i = 1,\cdots, n$ such that
+% \[
+% \Gamma \gamma_i \delta = \Gamma \gamma_{\pi(i)},\quad \gamma_{\pi(i)} = \varepsilon_i \gamma_i \delta.
+% \]
+% Then we have
+% \[
+% g \delta = \sum f \gamma_i \delta = \sum f \varepsilon_i ^{-1} \gamma_{\pi(i)} = \sum f \gamma_{\pi(i)} = g,
+% \]
+% since $\varepsilon_i^{-1} \in \Gamma$.
+%\end{proof}
+Consider a group $G$, and $\Gamma \leq G$. The idea is to use the \term{double cosets} of $\Gamma$ defined by
+\[
+ \Gamma g \Gamma = \{\gamma g \gamma' : \gamma, \gamma' \in \Gamma\}.
+\]
+One alternative way to view this is to consider the right multiplication action of $G$, hence $\Gamma$ on the right cosets $\Gamma g$. Then the double coset $\Gamma g \Gamma$ is the union of the orbits of $\Gamma g$ under the action of $\Gamma$. We can write this as
+\[
+ \Gamma g \Gamma = \coprod_{i \in I} \Gamma g_i
+\]
+for some $g_i \in g\Gamma \subseteq G$ and index set $I$.
+
+In our applications, we will want this disjoint union to be finite. By the orbit-stabilizer theorem, the size of this orbit is the index of the stabilizer of $\Gamma g$ in $\Gamma$. It is not hard to see that the stabilizer is given by $\Gamma \cap g^{-1} \Gamma g$. Thus, we are led to consider the following hypothesis:
+\begin{center}
+ \textbf{Hypothesis (H)}: For all $g \in G$, $(\Gamma: \Gamma \cap g^{-1} \Gamma g) < \infty$.
+\end{center}
+Then $(G, \Gamma)$ satisfies (H) iff for any $g$, the double coset $\Gamma g \Gamma$ is the union of finitely many cosets.
+
+The important example is the following:
+\begin{thm}
+ Let $G = \GL_2(\Q)$, and $\Gamma \subseteq \SL_2(\Z)$ a subgroup of finite index. Then $(G, \Gamma)$ satisfies (H).
+\end{thm}
+
+\begin{proof}
+ We first consider the case $\Gamma = \SL_2(\Z)$. We first suppose
+ \[
+ g =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \in \Mat_2(\Z),
+ \]
+ and $\det g = \pm N$, $N \geq 1$. We claim that
+ \[
+ g^{-1}\Gamma g \cap \Gamma \supseteq \Gamma(N),
+ \]
+ from which it follows that
+ \[
+ (\Gamma : \Gamma \cap g^{-1} \Gamma g) < \infty.
+ \]
+ So given $\gamma \in \Gamma(N)$, we need to show that $g \gamma g^{-1} \in \Gamma$, i.e.\ it has integer coefficients. We consider
+ \[
+ \pm N \cdot g \gamma g^{-1} =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \gamma
+ \begin{pmatrix}
+ d & -b\\
+ -c& a
+ \end{pmatrix}
+ \equiv
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \begin{pmatrix}
+ d & -b\\
+ -c & a
 \end{pmatrix} \equiv \pm NI \equiv 0\pmod N.
+ \]
+ So we know that $g \gamma g^{-1}$ must have integer entries. Now in general, if $g' \in \GL_2(\Q)$, then we can write
+ \[
+ g' = \frac{1}{M} g
+ \]
 for $g$ with integer entries, and conjugating by $g$ and $g'$ gives the same result. So $(G, \Gamma(1))$ satisfies (H).
+
+ \separator
+
+ The general result follows by a butterfly. Recall that if $(G: H) < \infty$ and $(G: H') < \infty$, then $(G: H \cap H') < \infty$. Now if $\Gamma \subseteq \Gamma(1) = \SL_2(\Z)$ is of finite index, then we can draw the diagram
+ \begin{center}
+ \begin{tikzpicture}[yscale=0.7, commutative diagrams/every diagram]
+ \node (12) at (0, 0) {$\Gamma(1)$}; \node (14) at (4, 0) {$g^{-1}\Gamma(1)g$};
+ \node (21) at (-2, -2) {$\Gamma$}; \node (23) at (2, -2) {$\Gamma(1) \cap g^{-1}\Gamma(1) g$}; \node (25) at (6, -2) {$g^{-1}\Gamma g$};
 \node (32) at (0, -4) {$\Gamma \cap g^{-1} \Gamma(1)g$}; \node (34) at (4, -4) {$\Gamma(1) \cap g^{-1} \Gamma g$};
+ \node (43) at (2, -6) {$\Gamma \cap g^{-1} \Gamma g$};
+
+ \path [commutative diagrams/.cd, every arrow, every label, dash]
+ (12) edge node [swap] {finite} (21)
+ (12) edge node {finite} (23)
+ (14) edge node [swap] {finite} (23)
+ (14) edge node {finite} (25)
+ (21) edge (32)
+ (23) edge (32)
+ (23) edge (34)
+ (25) edge (34)
+ (32) edge (43)
+ (34) edge (43);
+ \end{tikzpicture}
+ \end{center}
+ Each group is the intersection of the two above, and so all inclusions are of finite index.
+\end{proof}
+Note that the same proof works for $\GL_N(\Q)$ for any $N$.
+
Before we get to concrete examples, we say a bit more about double cosets. Recall that cosets partition the group into pieces of equal size. Is this true for double cosets as well? We can characterize double cosets as orbits of $\Gamma \times \Gamma$ acting on $G$ by
+\[
+ (\gamma, \delta) \cdot g = \gamma g \delta^{-1}.
+\]
+So $G$ is indeed the disjoint union of the double cosets of $\Gamma$.
+
+However, it is not necessarily the case that all double cosets have the same size. For example $|\Gamma e \Gamma| = |\Gamma|$, but for a general $g$, $|\Gamma g \Gamma|$ can be the union of many cosets of $\Gamma$.
+
Our aim is to define a ring $\mathcal{H}(G, \Gamma)$ generated by double cosets called the \emph{Hecke algebra}. As an abelian group, it is the free abelian group on symbols $[\Gamma g\Gamma]$ for each double coset $\Gamma g \Gamma$. Instead of trying to define a multiplication on the Hecke algebra directly, we will define an action of it on interesting objects; there is then a unique multiplicative structure on $\mathcal{H}(G, \Gamma)$ making this a genuine action.
+
+Given a group $G$, a \term{$G$-module} is an abelian group with a $\Z$-linear $G$-action. In other words, it is a module of the group ring $\Z G$. We will work with right modules, instead of the usual left modules.
+
+Given such a module and a subgroup $\Gamma \leq G$, we will write
+\[
+ M^\Gamma = \{m \in M : m \gamma = m \text{ for all } \gamma \in \Gamma\}.
+\]
+\begin{notation}
+ For $g \in G$ and $m \in M^\Gamma$, we let
+ \[
+ m|[\Gamma g \Gamma] = \sum_{i = 1}^n m g_i,\tag{$*$}
+ \]
+ where
+ \[
+ \Gamma g \Gamma = \coprod_{i = 1}^n \Gamma g_i.
+ \]
+\end{notation}
+The following properties are immediate, but also crucial.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $m | [\Gamma g \Gamma]$ depends only on $\Gamma g \Gamma$.
+ \item $m|[\Gamma g \Gamma] \in M^\Gamma$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $g_i' = \gamma_i g_i$ for $\gamma_i \in \Gamma$, then
+ \[
+ \sum m g_i' = \sum m \gamma_i g_i = \sum m g_i
+ \]
+ as $m \in M^\Gamma$.
+ \item Just write it out, using the fact that $\{\Gamma g_i\}$ is invariant under $\Gamma$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{thm}
+ There is a product on $\mathcal{H}(G, \Gamma)$ making it into an associative ring, the \term{Hecke algebra} of $(G, \Gamma)$, with unit $[\Gamma e \Gamma] = [\Gamma]$, such that for every $G$-module $M$, we have $M^\Gamma$ is a right $\mathcal{H}(G, \Gamma)$-module by the operation $(*)$.
+\end{thm}
+
In the proof, and later on, we will use the following observation: Let $\Z[\Gamma\setminus G]$ be the free abelian group on cosets $[\Gamma g]$. This has an obvious right $G$-action by multiplication. We know a double coset is just an orbit of $\Gamma$ acting on a single coset. So there is an isomorphism
+\[
+ \Theta: \mathcal{H}(G, \Gamma) \to \Z[\Gamma \setminus G]^\Gamma.
+\]
+given by
+\[
+ [\Gamma g \Gamma] \mapsto \sum [\Gamma g_i],
+\]
+where
+\[
+ \Gamma g \Gamma = \coprod \Gamma g_i.
+\]
+\begin{proof}
+ Take $M = \Z[\Gamma \setminus G]$, and let
+ \begin{align*}
+ \Gamma g \Gamma &= \coprod \Gamma g_i\\
+ \Gamma h \Gamma &= \coprod \Gamma h_j.
+ \end{align*}
+ Then
+ \[
+ \sum_i [\Gamma g_i] \in M^\Gamma,
+ \]
+ and we have
+ \[
+ \sum_i [\Gamma g_i] | [\Gamma h \Gamma] = \sum_{i, j} [\Gamma g_i h_j] \in M^\Gamma,
+ \]
+ and this is well-defined. This gives us a well-defined product on $\mathcal{H}(G, \Gamma)$. Explicitly, we have
+ \[
+ [\Gamma g \Gamma] \cdot [\Gamma h \Gamma] = \Theta^{-1}\left(\sum_{i, j} [\Gamma g_i h_j]\right).
+ \]
+ It should be clear that this is associative, as multiplication in $G$ is associative, and $[\Gamma] = [\Gamma e \Gamma]$ is a unit.
+
+ Now if $M$ is \emph{any} right $G$-module, and $m \in M^\Gamma$, we have
+ \[
+ m|[\Gamma g \Gamma] | [\Gamma h \Gamma] = \left(\sum m g_i\right)|[\Gamma h \Gamma] = \sum m g_i h_j = m ([\Gamma g \Gamma] \cdot [\Gamma h \Gamma]).
+ \]
+ So $M^\Gamma$ is a right $\mathcal{H}(G, \Gamma)$-module.
+\end{proof}
+
+Now in our construction of the product, we need to apply the map $\Theta^{-1}$. It would be nice to have an explicit formula for the product in terms of double cosets. To do so, we choose representatives $S \subseteq G$ such that
+\[
+ G = \coprod_{g \in S} \Gamma g \Gamma.
+\]
+\begin{prop}
+ We write
+ \begin{align*}
+ \Gamma g \Gamma &= \coprod_{i = 1}^r \Gamma g_i\\
+ \Gamma h \Gamma &= \coprod_{j = 1}^s \Gamma h_j.
+ \end{align*}
+ Then
+ \[
+ [\Gamma g \Gamma] \cdot [\Gamma h \Gamma] = \sum_{k \in S} \sigma(k) [\Gamma k \Gamma],
+ \]
+ where $\sigma(k)$ is the number of pairs $(i, j)$ such that $\Gamma g_i h_j = \Gamma k$.
+\end{prop}
+
+\begin{proof}
+ This is just a simple counting exercise.
+\end{proof}
+
+%
+%\begin{proof}
+% Consider the family of cosets $\Gamma g_i h_j$, not necessarily distinct. So there exists $k_\ell \in S$ for $1 \leq \ell \leq t$, not necessarily distinct, and a partition
+% \[
+% \left\{ (i, j) : \substack{1 \leq i \leq r\\ 1 \leq j \leq s}\right\} = \coprod_{1 \leq \ell \leq t} T_\ell
+% \]
+% such that
+% \[
+% \Gamma l_\ell \Gamma = \coprod_{(i, j) \in T_\ell} \Gamma g_i h_j.
+% \]
+% For each $\ell$, $\Gamma k_\ell$ is exacly one of $\{\Gamma g_i h_j: (i, j) \in T_\ell\}$. So $\sigma(k)$ is the number of $\ell$ such that $k_\ell = k$, and
+% \[
+% [\Gamma g \Gamma] [\Gamma h \Gamma] = \sum_{\ell = 1}^t [\Gamma k_\ell \Gamma] = \sum \sigma(k) [\Gamma k \Gamma].
+% \]
+%\end{proof}
+Of course, we could have taken this as the definition of the product, but we have to prove that this is independent of the choice of representatives $g_i$ and $h_j$, and of $S$, and that it is associative, which is annoying.
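To see the counting formula in action on something tiny, here is a toy computation (entirely ours, not from the notes) with $G = S_3$ and $\Gamma$ generated by a transposition. There are exactly two double cosets, and the square of the non-trivial one already has the shape of the Hecke relations for $T(p)$.

```python
from itertools import permutations

def mul(a, b):
    """Compose permutations written as tuples of images: (a*b)(i) = a(b(i))."""
    return tuple(a[i] for i in b)

G = list(permutations(range(3)))     # G = S_3
e = (0, 1, 2)                        # identity
Gamma = {e, (1, 0, 2)}               # Gamma = {id, (0 1)}

def coset(g):                        # the right coset Gamma g
    return frozenset(mul(h, g) for h in Gamma)

def double_coset(g):                 # Gamma g Gamma, as a set of right cosets
    return frozenset(coset(mul(g, h)) for h in Gamma)

t = (2, 1, 0)                        # the transposition (0 2)
# G is the disjoint union of the two double cosets Gamma and Gamma t Gamma:
assert len({double_coset(g) for g in G}) == 2

reps = [min(c) for c in double_coset(t)]   # one g_i per right coset

def sigma(k):
    """sigma(k) = number of pairs (i, j) with Gamma g_i g_j = Gamma k."""
    return sum(1 for gi in reps for gj in reps if coset(mul(gi, gj)) == coset(k))

# The counting formula gives [Gamma t Gamma]^2 = 2 [Gamma] + [Gamma t Gamma]:
assert sigma(e) == 2 and sigma(t) == 1
```

Note the degrees match: $[\Gamma t \Gamma]$ consists of $2$ cosets, and $2 \cdot 1 + 1 \cdot 2 = 2^2$.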
+
+\subsection{Hecke operators on modular forms}
We are now done with group theory. For the rest of the chapter, we take $G = \GL_2(\Q)^+$ and $\Gamma = \Gamma(1) = \SL_2(\Z)$. We are going to compute the Hecke algebra in this case.
+
The first thing to do is to identify what the single and double cosets are. Let's first look at the case where the representative is an integral matrix. We let
\[
 \gamma \in \Mat_2(\Z) \cap \GL_2(\Q)^+
\]
with
\[
 \det \gamma = n > 0.
\]
+The rows of $\gamma$ generate a subgroup $\Lambda \subseteq \Z^2$. If the rows of $\gamma'$ also generate the same subgroup $\Lambda$, then there exists $\delta \in \GL_2(\Z)$ with $\det \delta = \pm 1$ such that
+\[
+ \gamma' = \delta \gamma.
+\]
So we have $\det \gamma' = \pm n$, and if $\det \gamma' = +n$, then $\delta \in \SL_2(\Z) = \Gamma$. This gives a bijection
+\[
+ \left\{\parbox{4cm}{\centering cosets $\Gamma \gamma$ such that\\$\gamma \in \Mat_2(\Z)$, $\det \gamma = n$}\right\} \longleftrightarrow \left\{\parbox{3cm}{\centering subgroups $\Lambda \subseteq \Z^2$ of index $n$} \right\}
+\]
What we next want to do is to pick representatives of these subgroups, hence of the cosets. Consider an arbitrary subgroup $\Lambda \subseteq \Z^2 = \Z e_1 \oplus \Z e_2$. We let
+\[
+ \Lambda \cap \Z e_2 = \Z \cdot d e_2
+\]
+for some $d \geq 1$. Then we have
+\[
+ \Lambda = \bra ae_1 + be_2, d e_2 \ket
+\]
+for some $a \geq 1$, $b \in \Z$ such that $0 \leq b < d$. Under these restrictions, $a$ and $b$ are unique. Moreover, $ad = n$. So we can define
+\[
+ \Pi_n = \left\{
+ \begin{pmatrix}
+ a & b\\
+ 0 & d
+ \end{pmatrix} \in \Mat_2(\Z): a, d \geq 1, ad = n, 0 \leq b < d
+ \right\}.
+\]
+Then
+\[
+ \Big\{\gamma \in \Mat_2(\Z): \det \gamma = n\Big\} = \coprod_{\gamma \in \Pi_n}\Gamma \gamma.
+\]
These are the single cosets. How about the double cosets? The left-hand side above is invariant under left and right multiplication by $\Gamma$, and so is a union of double cosets.
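A quick enumeration (our own code; \texttt{Pi} mirrors the set $\Pi_n$ above) confirms that the number of coset representatives, i.e.\ of index-$n$ subgroups of $\Z^2$, is $\sigma_1(n)$:

```python
# Enumerate Pi_n as triples (a, b, d) standing for the matrix ((a, b), (0, d)),
# with ad = n and 0 <= b < d.
def Pi(n):
    return [(a, b, n // a) for a in range(1, n + 1) if n % a == 0
            for b in range(n // a)]

# |Pi_n| = sum over divisors d of n of d = sigma_1(n).
for n in range(1, 50):
    assert len(Pi(n)) == sum(d for d in range(1, n + 1) if n % d == 0)

# e.g. Pi(2) = [(1, 0, 2), (1, 1, 2), (2, 0, 1)]: three cosets, sigma_1(2) = 3.
assert len(Pi(2)) == 3
```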
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Let $\gamma \in \Mat_2(\Z)$ and $\det \gamma = n \geq 1$. Then
+ \[
+ \Gamma \gamma \Gamma = \Gamma
+ \begin{pmatrix}
+ n_1 & 0\\
+ 0 & n_2
+ \end{pmatrix} \Gamma
+ \]
+ for unique $n_1, n_2 \geq 1$ and $n_2 \mid n_1$, $n_1 n_2 = n$.
+ \item
+ \[
+ \Big\{\gamma \in \Mat_2(\Z) : \det \gamma = n\Big\} = \coprod \Gamma \begin{pmatrix}
+ n_1 & 0\\
+ 0 & n_2
+ \end{pmatrix} \Gamma,
+ \]
+ where we sum over all $1 \leq n_2 \mid n_1$ such that $n = n_1 n_2$.
 \item Let $\gamma, n_1, n_2$ be as above. If $d \geq 1$, then
+ \[
+ \Gamma (d^{-1}\gamma) \Gamma = \Gamma \begin{pmatrix}
+ n_1/d & 0\\
+ 0 & n_2/d
+ \end{pmatrix} \Gamma,
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ This is the Smith normal form theorem, or, alternatively, the fact that we can row and column reduce.
+\end{proof}
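For $2 \times 2$ integer matrices, the invariant factors in the proposition can be written down directly: $n_2$ is the gcd of the entries and $n_1 = n/n_2$. This is a standard consequence of the Smith normal form; here is a brute-force check (our own sketch):

```python
from math import gcd
from itertools import product

def invariant_factors(a, b, c, d):
    """Return (n_1, n_2) with Gamma*gamma*Gamma = Gamma*diag(n_1, n_2)*Gamma."""
    n = a * d - b * c
    assert n >= 1
    n2 = gcd(gcd(a, b), gcd(c, d))
    return n // n2, n2

# n_2 divides n_1 and n_1 n_2 = n, on a brute-force sample of matrices:
for a, b, c, d in product(range(-4, 5), repeat=4):
    n = a * d - b * c
    if n >= 1:
        n1, n2 = invariant_factors(a, b, c, d)
        assert n1 % n2 == 0 and n1 * n2 == n
```

For instance, $\mathrm{diag}(2, 3)$ has invariant factors $(6, 1)$, while $\mathrm{diag}(2, 2)$ has $(2, 2)$.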
+
+\begin{cor}
+ The set
+ \[
+ \left\{\left[\Gamma
+ \begin{pmatrix}
+ r_1 & 0\\
+ 0 & r_2
+ \end{pmatrix} \Gamma\right] : r_1, r_2 \in \Q_{>0}, \frac{r_1}{r_2} \in \Z \right\}
+ \]
+ is a basis for $\mathcal{H}(G, \Gamma)$ over $\Z$.
+\end{cor}
+
+So we have found a basis. The next goal is to find a \emph{generating set}. To do so, we define the following matrices:
+
+For $1 \leq n_2 \mid n_1$, we define\index{$T(n_1, n_2)$}
+\[
+ T(n_1, n_2) = \left[\Gamma
+ \begin{pmatrix}
+ n_1 & 0 \\ 0& n_2
+ \end{pmatrix} \Gamma
+ \right]
+\]
+For $n \geq 1$, we write\index{$R(n)$}
+\[
+ R(n) = \left[\Gamma
+ \begin{pmatrix}
+ n & 0 \\ 0& n
+ \end{pmatrix}\Gamma
+ \right] =
+ \left[\Gamma
+ \begin{pmatrix}
+ n & 0 \\ 0& n
+ \end{pmatrix}
+ \right] =
+ T(n, n)
+\]
+Finally, we define\index{$T(n)$}
+\[
+ T(n) = \sum_{\substack{1 \leq n_2 \mid n_1 \\ n_1 n_2 = n}} T(n_1, n_2)
+\]
+In particular, we have
+\[
+ T(1, 1) = R(1) = 1 = T(1),
+\]
+and if $n$ is square-free, then
+\[
+ T(n) = T(n, 1).
+\]
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item $R(mn) = R(m) R(n)$ and $R(m) T(n) = T(n) R(m)$ for all $m, n \geq 1$.
+ \item $T(m)T(n) = T(mn)$ whenever $(m, n) = 1$.
 \item $T(p)T(p^r) = T(p^{r + 1}) + p R(p) T(p^{r - 1})$ for $r \geq 1$.
+ \end{enumerate}
+\end{thm}
+Before we prove this theorem, we see how it helps us find a nice generating set for the Hecke algebra.
+
+\begin{cor}
+ $\mathcal{H}(G, \Gamma)$ is \emph{commutative}, and is generated by $\{T(p), R(p), R(p)^{-1} : p \text{ prime}\}$.
+\end{cor}
+This is rather surprising, because the group we started with was very non-commutative.
+
+\begin{proof}
+ We know that $T(n_1, n_2)$, $R(p)$ and $R(p)^{-1}$ generate $\mathcal{H}(G, \Gamma)$, because
+ \[
+ \left[\Gamma
+ \begin{pmatrix}
+ p & 0 \\ 0& p
+ \end{pmatrix}\Gamma
+ \right]
+ \left[\Gamma
+ \begin{pmatrix}
+ n_1 & 0 \\ 0& n_2
+ \end{pmatrix} \Gamma
+ \right]
+ =
+ \left[\Gamma
+ \begin{pmatrix}
+ pn_1 & 0 \\ 0& pn_2
+ \end{pmatrix} \Gamma
+ \right]
+ \]
+ In particular, when $n_2 \mid n_1$, we can write
+ \[
+ T(n_1, n_2) = R(n_2) T\left(\frac{n_1}{n_2}, 1\right).
+ \]
+ So it suffices to show that we can produce any $T(n, 1)$ from the $T(m)$ and $R(m)$. We proceed inductively. The result is immediate when $n$ is square-free, because $T(n, 1) = T(n)$. Otherwise,
+ \begin{align*}
+ T(n) &= \sum_{\substack{1 \leq n_2 \mid n_1\\n_1 n_2 = n}} T(n_1, n_2) \\
+ &= \sum_{\substack{1 \leq n_2 \mid n_1\\n_1 n_2 = n}} R(n_2) T\left(\frac{n_1}{n_2}, 1\right) \\
+ &= T(n, 1) + \sum_{\substack{1 < n_2 \mid n_1\\n_1 n_2 = n}} R(n_2) T\left(\frac{n_1}{n_2}, 1\right).
+ \end{align*}
+ So $\{T(p), R(p), R(p)^{-1}\}$ does generate $\mathcal{H}(G, \Gamma)$, and by the theorem, we know these generators commute. So $\mathcal{H}(G, \Gamma)$ is commutative.
+\end{proof}
+
+We now prove the theorem.
+\begin{proof}[Proof of theorem]\leavevmode
+ \begin{enumerate}
+ \item We have
+ \[
+ \left[\Gamma
+ \begin{pmatrix}
+ a & 0\\
+ 0 & a
+ \end{pmatrix}\Gamma\right][\Gamma\gamma\Gamma] =
+ \left[\Gamma
+ \begin{pmatrix}
+ a & 0\\
+ 0 & a
+ \end{pmatrix}\gamma \Gamma\right]
+ = [\Gamma \gamma \Gamma] \left[\Gamma
+ \begin{pmatrix}
+ a & 0\\
+ 0 & a
+ \end{pmatrix}\Gamma\right]
+ \]
+ by the formula for the product.
+ \item Recall we had the isomorphism $\Theta: \mathcal{H}(G, \Gamma) \mapsto \Z[\Gamma \setminus G]^\Gamma$, and
+ \[
+ \Theta(T(n)) = \sum_{\gamma \in \Pi_n} [\Gamma\gamma]
+ \]
 where $\Pi_n$ is as before. Moreover, $\{\gamma \Z^2 \mid \gamma \in \Pi_n\}$ is exactly the set of subgroups of $\Z^2$ of index $n$.
+
+ On the other hand,
+ \[
+ \Theta(T(m)T(n)) = \sum_{\delta \in \Pi_m, \gamma \in \Pi_n} [\Gamma \delta\gamma],
+ \]
+ and
+ \[
 \{\delta\gamma \Z^2 \mid \delta \in \Pi_m\} = \{\text{subgroups of $\gamma \Z^2$ of index $m$}\}.
+ \]
+ Since $n$ and $m$ are coprime, every subgroup $\Lambda \subseteq \Z^2$ of index $mn$ is contained in a unique subgroup of index $n$. So the above sum gives exactly $\Theta(T(mn))$.
+ \item We have
+ \[
+ \Theta(T(p^r) T(p)) = \sum_{\delta \in \Pi_{p^r}, \gamma \in \Pi_p} [\Gamma \delta \gamma],
+ \]
 and for fixed $\gamma \in \Pi_p$, we know $\{\delta \gamma \Z^2: \delta \in \Pi_{p^r}\}$ are the index $p^r$ subgroups of $\gamma\Z^2$.
+
+ On the other hand, we have
+ \[
+ \Theta(T(p^{r + 1})) = \sum_{\varepsilon \in \Pi_{p^{r + 1}}} [\Gamma \varepsilon],
+ \]
+ where $\{\varepsilon \Z^2\}$ are the subgroups of $\Z^2$ of index $p^{r + 1}$.
+
+ Every $\Lambda = \varepsilon \Z^2$ of index $p^{r + 1}$ is contained in some index $p$ subgroup $\Lambda' \subseteq \Z^2$, with $(\Lambda' : \Lambda) = p^r$. If $\Lambda \not \subseteq p\Z^2$, then $\Lambda'$ is unique, and $\Lambda' = \Lambda + p \Z^2$. On the other hand, if $\Lambda \subseteq p\Z^2$, i.e.
+ \[
+ \varepsilon =
+ \begin{pmatrix}
+ p & 0\\
+ 0 & p
+ \end{pmatrix} \varepsilon'
+ \]
+ for some $\varepsilon'$ of determinant $p^{r - 1}$, then there are $(p + 1)$ such $\Lambda'$ corresponding to the $(p + 1)$ order $p$ subgroups of $\Z^2/p\Z^2$.
+
+ So we have
+ \begin{align*}
+ \Theta(T(p^r)T(p)) &= \sum_{\varepsilon \in \Pi_{p^{r + 1}} \setminus (pI \Pi_{p^{r - 1}})} [\Gamma \varepsilon] + (p + 1) \sum_{\varepsilon' \in \Pi_{p^{r - 1}}} [\Gamma pI \varepsilon']\\
+ &= \sum_{\varepsilon \in \Pi_{p^{r + 1}}} [\Gamma \varepsilon] + p \sum_{\varepsilon' \in \Pi_{p^{r - 1}}} [\Gamma pI \varepsilon'] \\
+ &= \Theta(T(p^{r + 1}) + p R(p) T(p^{r - 1})).\qedhere
+ \end{align*}%\qedhere
+ \end{enumerate}
+\end{proof}
+What's underlying this is really just the structure theorem of finitely generated abelian groups. We can replace $\GL_2$ with $\GL_N$, and we can prove some analogous formulae, only much uglier. We can also do this with $\Z$ replaced with any principal ideal domain.
+
+Given all this discussion of the Hecke algebra, we now let it act on modular forms! We write
+\[
+ V_k = \{\text{all functions } f: \H \to \C\}.
+\]
+This has a right action of $G = \GL_2(\Q)^+$ given by
+\[
+ g: f \mapsto f \underset{k}{|}g.
+\]
+Then we have $M_k \subseteq V_k^\Gamma$. For $f \in V_k^\Gamma$, and $g \in G$, we again write
+\[
+ \Gamma g \Gamma = \coprod \Gamma g_i,
+\]
+and then we have
+\[
+ f \underset{k}{|} [\Gamma g \Gamma] = \sum f\underset{k}{|}g_i \in V_k^\Gamma.
+\]
+Recall when we defined the slash operator, we included a determinant in there. This gives us
+\[
+ f\underset{k}{|} R(n) = f
+\]
+for all $n \geq 1$, so the $R(n)$ act trivially. We also define
+\[
+ T_n = T_n^k: V_k^\Gamma \to V_k^\Gamma
+\]
+by
+\[
+ T_n f = n^{k/2 - 1} f \underset{k}{|} T(n).
+\]
+Since $\mathcal{H}(G, \Gamma)$ is commutative, there is no confusion by writing $T_n$ on the left instead of the right.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $T_{mn}^k = T_m^k T_n^k$ if $(m, n) = 1$, and
+ \[
+ T_{p^{r + 1}}^k = T_{p^r}^k T_p^k - p^{k - 1} T_{p^{r - 1}}^k.
+ \]
+ \item If $f \in M_k$, then $T_n f \in M_k$. Similarly, if $f \in S_k$, then $T_n f \in S_k$.
+ \item We have
+ \[
+ a_n (T_m f) = \sum_{1 \leq d \mid (m, n)} d^{k - 1}a_{mn/d^2} (f).
+ \]
+ In particular,
+ \[
+ a_0(T_m f) = \sigma_{k - 1}(m) a_0(f).
+ \]
+ \end{enumerate}
+\end{prop}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item This follows from the analogous relations for $T(n)$, plus $f|R(n) = f$.
+ \item This follows from (iii), since $T_n$ clearly maps holomorphic $f$ to holomorphic $f$.
+ \item If $r \in \Z$, then
+ \[
+ q^r \underset{k}{|} T(m) = m^{k/2} \sum_{e \mid m, 0 \leq b < e} e^{-k}\exp\left(2\pi i \frac{mzr }{e^2} + 2\pi i \frac{br}{e}\right),
+ \]
+ where we use the fact that the elements of $\Pi_m$ are those of the form
+ \[
+ \Pi_m = \left\{
+ \begin{pmatrix}
+ a & b\\
+ 0 & e
+ \end{pmatrix} : ae = m, 0 \leq b < e
+ \right\}.
+ \]
+ Now for each fixed $e$, the sum over $b$ vanishes when $\frac{r}{e} \not\in \Z$, and is $e$ otherwise. So we find
+ \[
+ q^r \underset{k}{|} T(m) = m^{k/2} \sum_{e \mid (m, r)} e^{1 - k} q^{mr/e^2}.
+ \]
+ So we have
+ \begin{align*}
+ T_m(f) &= \sum_{r \geq 0} a_r(f) \sum_{e \mid (m, r)} \left(\frac{m}{e}\right)^{k - 1} q^{mr/e^2} \\
+ &= \sum_{1 \leq d \mid m} d^{k - 1} \sum_{s \geq 0} a_{ms/d} (f) q^{ds} \\
+ &= \sum_{n \geq 0} \sum_{d \mid (m, n)} d^{k - 1} a_{mn/d^2}(f) q^n,
+ \end{align*}
+ where we put $d = m/e$ and $n = ds$.\qedhere
+ \end{enumerate}
+\end{proof}
+So we actually have a rather concrete formula for what the action looks like. We can use this to derive some immediate corollaries.
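+For instance, taking $m = 2$ in (iii): the divisors of $(2, n)$ are $1$, together with $2$ when $n$ is even. So
+\[
+ a_n(T_2 f) =
+ \begin{cases}
+ a_{2n}(f) & n \text{ odd},\\
+ a_{2n}(f) + 2^{k - 1} a_{n/2}(f) & n \text{ even}.
+ \end{cases}
+\]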
+
+\begin{cor}
+ Let $f \in M_k$ be such that
+ \[
+ T_m(f) = \lambda f
+ \]
+ for some $m > 1$ and $\lambda \in \C$. Then
+ \begin{enumerate}
+ \item For every $n$ with $(n, m) = 1$, we have
+ \[
+ a_{mn}(f) = \lambda a_n(f).
+ \]
+ \item If $a_0(f) \not= 0$, then $\lambda = \sigma_{k - 1}(m)$.
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}
+ This just follows from above, since
+ \[
+ a_n(T_m f) = \lambda a_n(f),
+ \]
+ and then we just plug in the formula.
+\end{proof}
+This gives a close relationship between the eigenvalues of $T_m$ and the Fourier coefficients. In particular, if we have an $f$ that is an eigenvector for \emph{all} $T_m$, then we have the following corollary:
+\begin{cor}
+ Let $0 \not= f \in M_k$, and $k \geq 4$ with $T_m f = \lambda_m f$ for all $m \geq 1$. Then
+ \begin{enumerate}
+ \item If $f \in S_k$, then $a_1(f) \not= 0$ and
+ \[
+ f = a_1(f) \sum_{n \geq 1} \lambda_n q^n.
+ \]
+ \item If $f \not \in S_k$, then
+ \[
+ f = a_0 (f) E_k.
+ \]
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We apply the previous corollary with $n = 1$.
+ \item Since $a_0(f) \not= 0$, we know $a_n(f) = \sigma_{k - 1}(n) a_1(f)$ by (both parts of) the corollary. So we have
+ \[
+ f = a_0(f) + a_1(f) \sum_{n \geq 1} \sigma_{k - 1}(n) q^n = A + B E_k.
+ \]
+ But since $f$ and $E_k$ are modular forms, and $k \not= 0$, we know $A = 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Hecke eigenform]\index{Hecke eigenform}
+ Let $f \in S_k \setminus \{0\}$. Then $f$ is a \term{Hecke eigenform} if for all $n \geq 1$, we have
+ \[
+ T_n f = \lambda_n f
+ \]
+ for some $\lambda_n \in \C$. It is \emph{normalized} if $a_1(f) = 1$.
+\end{defi}
+
+We now state a theorem, which we cannot prove yet, because there is still one missing ingredient. Instead, we will give a partial proof of it.
+
+\begin{thm}
+ There exists a basis for $S_k$ consisting of normalized Hecke eigenforms.
+\end{thm}
+So this is actually a typical phenomenon!
+
+\begin{proof}[Partial proof]
+ We know that $\{T_n\}$ are commuting operators on $S_k$.
+ \begin{fact}
+ There exists an inner product on $S_k$ for which $\{T_n\}$ are self-adjoint.
+ \end{fact}
+ Then by linear algebra, the $\{T_n\}$ are simultaneously diagonalizable.
+\end{proof}
+
+\begin{eg}
+ We take $k = 12$, and $\dim S_{12} = 1$. So everything in here is an eigenvector. In particular,
+ \[
+ \Delta(z) = \sum_{n \geq 1} \tau(n) q^n
+ \]
+ is a normalized Hecke eigenform. So $\tau(n) = \lambda_n$. Thus, from properties of the $T_n$, we know that
+ \begin{align*}
+ \tau(mn) &= \tau(m) \tau(n)\\
+ \tau(p^{r + 1}) &= \tau(p) \tau(p^r) - p^{11}\tau(p^{r - 1})
+ \end{align*}
+ whenever $(m, n) = 1$ and $r \geq 1$.
+\end{eg}
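+For instance, starting from $\tau(2) = -24$ and $\tau(3) = 252$, these relations give
+\[
+ \tau(4) = \tau(2)^2 - 2^{11} = 576 - 2048 = -1472,\quad \tau(6) = \tau(2)\tau(3) = -6048.
+\]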
+We can do this similarly for $k = 16, 18, 20, 22, 26$, because $\dim S_k = 1$, with Hecke eigenform $f = E_{k - 12} \Delta$.
+
+Unfortunately, when $\dim S_k(\Gamma(1)) > 1$, there do not appear to be any ``natural'' eigenforms. It seems like we just have to take the space and diagonalize it by hand. For example, $S_{24}$ has dimension $2$, and the eigenvalues of the $T_n$ live in the strange field $\Q(\sqrt{144169})$ (note that 144169 is a prime), and not in $\Q$. We don't seem to find good reasons for why this is true. It appears that the nice things that happen for small values of $k$ happen only because there is no choice.
+
+\section{\texorpdfstring{$L$}{L}-functions of eigenforms}
+Given any modular form, or in fact any function with a $q$ expansion
+\[
+ f = \sum_{n \geq 1} a_n q^n \in S_k(\Gamma(1)),
+\]
+we can form a Dirichlet series\index{$L(f, s)$}
+\[
+ L(f, s) = \sum_{n \geq 1} a_n n^{-s}.
+\]
+Our goal in this chapter is to study the behaviour of this $L$-function. There are a few things we want to understand. First, we want to figure out when this series converges. Next, we will come up with an Euler product formula for the $L$-series. Finally, we will obtain an analytic continuation and a functional equation.
+
+Afterwards, we will use such analytic methods to understand how $E_2$, which we figured is not a modular form, transforms under $\Gamma(1)$, and this in turn gives us a product formula for $\Delta(z)$.
+
+\begin{notation}\index{$O(f(n))$}
+ We write $|a_n| = O(n^{k/2})$ if there exists $c \in \R$ such that for sufficiently large $n$, we have $|a_n| \leq c n^{k/2}$. We will also write this as
+ \[
+ |a_n| \ll n^{k/2}.
+ \]
+\end{notation}
+The latter notation might seem awful, but it is very convenient if we want to write down a chain of such ``inequalities''.
+
+\begin{prop}
+ Let $f \in S_k(\Gamma(1))$. Then $L(f, s)$ converges absolutely for $\Re(s) > \frac{k}{2} + 1$.
+\end{prop}
+
+To prove this, it is enough to show that
+\begin{lemma}
+ If
+ \[
+ f = \sum_{n \geq 1} a_n q^n \in S_k(\Gamma(1)),
+ \]
+ then
+ \[
+ |a_n| \ll n^{k/2}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Recall from the example sheet that if $f \in S_k$, then $y^{k/2} |f|$ is bounded on the upper half plane. So
+ \[
+ |a_n(f)| = \left|\frac{1}{2\pi i} \int_{|q| = r} q^{-n} \tilde{f}(q) \frac{\d q}{q}\right|
+ \]
+ for $r \in (0, 1)$. Then for \emph{any} $y$, we can write this as
+ \[
+ \left|\int_0^1 e^{-2\pi i n(x + iy)} f(x + iy) \d x\right| \leq e^{2\pi n y}\sup_{0 \leq x \leq 1} |f(x + iy)| \ll e^{2\pi n y} y^{-k/2}.
+ \]
+ We now pick $y = \frac{1}{n}$, so that the bound becomes $e^{2\pi} n^{k/2}$, and the result follows.
+\end{proof}
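+For $f = \Delta$, this gives the classical bound $|\tau(n)| \ll n^6$. As an aside, this is far from optimal: Deligne's proof of the Ramanujan conjecture gives the much deeper bound $|\tau(p)| \leq 2p^{11/2}$ for $p$ prime.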
+
+As before, we can write the $L$-function as an Euler product. This time it looks a bit more messy.
+
+\begin{prop}
+ Suppose $f$ is a normalized eigenform. Then
+ \[
+ L(f, s) = \prod_{p\text{ prime}} \frac{1}{1 - a_p p^{-s} + p^{k - 1 - 2s}}.
+ \]
+\end{prop}
+This is a very important fact, and is one of the links between cusp forms and algebraic number theory.
+
+There are two parts of the proof --- a formal manipulation, and then a convergence proof. We will not do the convergence part, as it is exactly the same as for $\zeta(s)$.
+\begin{proof}
+ We look at
+ \begin{multline*}
+ (1 - a_p p^{-s} + p^{k - 1 - 2s}) (1 + a_p p^{-s} + a_{p^2} p^{-2s} + \cdots) \\
+ = 1 + \sum_{r \geq 2} (a_{p^r} + p^{k - 1} a_{p^{r - 2}} - a_p a_{p^{r - 1}}) p^{-rs}.
+ \end{multline*}
+ Since we have an eigenform, all of those coefficients are zero. So this is just $1$. Thus, we know
+ \[
+ 1 + a_p p^{-s} + a_{p^2} p^{-2s} + \cdots = \frac{1}{1 - a_p p^{-s} + p^{k - 1 - 2s}}.
+ \]
+ Also, we know that when $(m, n) = 1$, we have
+ \[
+ a_{mn} = a_m a_n,
+ \]
+ and also $a_1 = 1$. So we can write
+ \[
+ L(f, s) = \prod_p (1 + a_p p^{-s} + a_{p^2} p^{-2s} + \cdots) = \prod_p \frac{1}{1 - a_p p^{-s} + p^{k - 1 - 2s}}.\qedhere
+ \]
+\end{proof}
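+For example, for the normalized eigenform $\Delta \in S_{12}$, this says
+\[
+ L(\Delta, s) = \sum_{n \geq 1} \tau(n) n^{-s} = \prod_p \frac{1}{1 - \tau(p) p^{-s} + p^{11 - 2s}}.
+\]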
+
+We now obtain an analytic continuation and functional equation for our $L$-functions. It is similar to what we did for the $\zeta$-function, but it is easier this time, because we don't have poles.
+\begin{thm}
+ If $f \in S_k$, then $L(f, s)$ is entire, i.e.\ has an analytic continuation to all of $\C$. Define
+ \[
+ \Lambda(f, s) = (2\pi)^{-s} \Gamma(s) L(f, s) = M(f(iy), s).
+ \]
+ Then we have
+ \[
+ \Lambda(f, s) = (-1)^{k/2} \Lambda(f, k - s).
+ \]
+\end{thm}
+
+The proof follows from the following more general fact:
+\begin{thm}
+ Suppose we have a function
+ \[
+ 0 \not= f(z) = \sum_{n \geq 1} a_n q^n,
+ \]
+ with $a_n = O(n^R)$ for some $R$, and there exists $N > 0$ such that
+ \[
+ f\underset{k}{|}
+ \begin{psmallmatrix}
+ 0 & -1\\
+ N & 0
+ \end{psmallmatrix} = cf
+ \]
+ for some $k \in \Z_{> 0}$ and $c \in \C$. Then the function
+ \[
+ L(s) = \sum_{n \geq 1} a_n n^{-s}
+ \]
+ is entire. Moreover, $c^2 = (-1)^k$, and if we set
+ \[
+ \Lambda(s) = (2\pi)^{-s} \Gamma(s) L(s), \quad \varepsilon = c \cdot i^k \in \{\pm 1\},
+ \]
+ then
+ \[
+ \Lambda(k - s) = \varepsilon N^{s - k/2} \Lambda(s).
+ \]
+\end{thm}
+Note that the condition is rather weak, because we don't require $f$ to even be a modular form! If we in fact have $f \in S_k$, then we can take $N = 1, c = 1$, and then we get the desired analytic continuation and functional equation for $L(f, s)$.
+
+\begin{proof}
+ By definition, we have
+ \[
+ c f(z) =
+ f\underset{k}{|}
+ \begin{psmallmatrix}
+ 0 & -1\\
+ N & 0
+ \end{psmallmatrix} = N^{-k/2} z^{-k} f\left(-\frac{1}{Nz}\right).
+ \]
+ Applying the matrix once again gives
+ \[
+ f\underset{k}{|}
+ \begin{psmallmatrix}
+ 0 & -1\\
+ N & 0
+ \end{psmallmatrix}\underset{k}{|}
+ \begin{psmallmatrix}
+ 0 & -1\\
+ N & 0
+ \end{psmallmatrix}
+ =
+ f\underset{k}{|}
+ \begin{psmallmatrix}
+ -N & 0\\
+ 0 & -N
+ \end{psmallmatrix}
+ = (-1)^k f(z),
+ \]
+ but this is equal to $c^2 f(z)$. So we know
+ \[
+ c^2 = (-1)^k.
+ \]
+ We now apply the Mellin transform. We assume $\Re(s) \gg 0$, and then we have
+ \[
+ \Lambda(f, s) = M(f(iy), s) = \int_0^\infty f(iy) y^s\;\frac{\d y}{y} = \left(\int_{1/\sqrt{N}}^{\infty} + \int_0^{1/\sqrt{N}}\right) f(iy) y^s\;\frac{\d y}{y}.
+ \]
+ By a change of variables, we have
+ \begin{align*}
+ \int_0^{1/\sqrt{N}} f(iy) y^s\; \frac{\d y}{y} &= \int_{1/\sqrt{N}}^\infty f\left(\frac{i}{Ny}\right) N^{-s}y^{-s} \;\frac{\d y}{y} \\
+ &= \int_{1/\sqrt{N}}^\infty c i^k N^{k/2 - s} f(iy) y^{k - s} \;\frac{\d y}{y}.
+ \end{align*}
+ So
+ \[
+ \Lambda(f, s) = \int_{1/\sqrt{N}}^\infty f(iy) (y^s + \varepsilon N^{k/2 - s} y^{k - s}) \frac{\d y}{y},
+ \]
+ where
+ \[
+ \varepsilon = i^k c = \pm 1.
+ \]
+ Since $f\to 0$ rapidly as $y \to \infty$, this integral is an entire function of $s$, and satisfies the functional equation
+ \[
+ \Lambda(f, k - s) = \varepsilon N^{s - \frac{k}{2}} \Lambda (f, s).\qedhere
+ \]
+\end{proof}
+Sometimes, we absorb the power of $N$ into $\Lambda$, and define a new function
+\[
+ \Lambda^*(f, s) = N^{s/2} \Lambda(f, s) = \varepsilon \Lambda^*(f, k - s).
+\]
+However, we can't get rid of the $\varepsilon$.
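+For example, for $f = \Delta$, we have $k = 12$, so $(-1)^{k/2} = 1$, and the theorem says $\Lambda(\Delta, s) = (2\pi)^{-s} \Gamma(s) L(\Delta, s)$ is entire and satisfies
+\[
+ \Lambda(\Delta, s) = \Lambda(\Delta, 12 - s).
+\]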
+
+What we have established is a way to go from modular forms to $L$-functions, and we found that these $L$-functions satisfy certain functional equations. Now is it possible to go the other way round? Given any $L$-function, does it come from a modular form? This is known as the \term{converse problem}. One obvious necessary condition is that it should satisfy the functional equation, but is this sufficient?
+
+To further investigate this, we want to invert the Mellin transform.
+
+\begin{thm}[Mellin inversion theorem]\index{Mellin inversion theorem}
+ Let $f: (0, \infty) \to \C$ be a $C^\infty$ function such that
+ \begin{itemize}
+ \item for all $N, n \geq 0$, the function $y^N f^{(n)}(y)$ is bounded as $y \to \infty$; and
+ \item there exists $k \in \Z$ such that for all $n \geq 0$, we have $y^{n + k} f^{(n)}(y)$ bounded as $y \to 0$.
+ \end{itemize}
+ Let $\Phi(s) = M(f, s)$, analytic for $\Re(s) > k$. Then for all $\sigma > k$, we have
+ \[
+ f(y) = \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \Phi(s) y^{-s} \;\d s.
+ \]
+\end{thm}
+Note that the conditions can be considerably weakened, but we don't want to do so much analysis.
+
+\begin{proof}
+ The idea is to reduce this to the inversion of the Fourier transform. Fix a $\sigma > k$, and define
+ \[
+ g(x) = e^{2\pi \sigma x} f(e^{2\pi x}) \in C^\infty(\R).
+ \]
+ Then we find that for any $N,n \geq 0$, the function $e^{Nx} g^{(n)}(x)$ is bounded as $x \to +\infty$. On the other hand, as $x \to -\infty$, we have
+ \begin{align*}
+ g^{(n)}(x) &\ll \sum_{j = 0}^n e^{2\pi (\sigma + j)x}|f^{(j)} (e^{2\pi x})| \\
+ &\ll \sum_{j = 0}^n e^{2\pi (\sigma + j)x} e^{-2\pi (j + k) x}\\
+ &\ll e^{2\pi (\sigma - k)x}.
+ \end{align*}
+ So we find that $g \in \mathcal{S}(\R)$. This allows us to apply the Fourier inversion formula. By definition, we have
+ \begin{align*}
+ \hat{g}(-t) &= \int_{-\infty}^\infty e^{2\pi \sigma x} f(e^{2\pi x}) e^{2\pi i x t}\;\d x\\
+ &= \frac{1}{2\pi} \int_0^\infty y^{\sigma + it} f(y) \frac{\d y}{y} = \frac{1}{2\pi} \Phi(\sigma + it).
+ \end{align*}
+ Applying Fourier inversion, we find
+ \begin{align*}
+ f(y) &= y^{-\sigma} g\left(\frac{\log y}{2\pi}\right)\\
+ &= y^{-\sigma} \frac{1}{2\pi} \int_{-\infty}^\infty e^{-2\pi i t (\log y / 2\pi)}\Phi(\sigma + it) \;\d t\\
+ &= \frac{1}{2\pi i}\int_{\sigma - i \infty}^{\sigma + i \infty} \Phi(s) y^{-s}\;\d s.\qedhere
+ \end{align*}
+\end{proof}
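+A classical sanity check: taking $f(y) = e^{-y}$, which satisfies the hypotheses with $k = 0$, we have $\Phi(s) = M(e^{-y}, s) = \Gamma(s)$, and the theorem recovers the Cahen--Mellin integral
+\[
+ e^{-y} = \frac{1}{2\pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} \Gamma(s) y^{-s}\;\d s
+\]
+for any $\sigma > 0$.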
+
+We can now use this to prove a simple converse theorem.
+\begin{thm}
+ Let
+ \[
+ L(s) = \sum_{n \geq 1} a_n n^{-s}
+ \]
+ be a Dirichlet series such that $a_n = O(n^R)$ for some $R$. Suppose there is some even $k \geq 4$ such that
+ \begin{itemize}
+ \item $L(s)$ can be analytically continued to $\{\Re(s) > \frac{k}{2} - \varepsilon\}$ for some $\varepsilon > 0$;
+ \item $|L(s)|$ is bounded in vertical strips $\{\sigma_0 \leq \Re s \leq \sigma_1\}$ for $\frac{k}{2} \leq \sigma_0 < \sigma_1$.
+ \item The function
+ \[
+ \Lambda(s) = (2\pi)^{-s} \Gamma(s) L(s)
+ \]
+ satisfies
+ \[
+ \Lambda(s) = (-1)^{k/2} \Lambda(k - s)
+ \]
+ for $\frac{k}{2} - \varepsilon < \Re s < \frac{k}{2} + \varepsilon$.
+ \end{itemize}
+ Then
+ \[
+ f = \sum_{n \geq 1} a_n q^n \in S_k(\Gamma(1)).
+ \]
+\end{thm}
+Note that the functional equation allows us to continue the Dirichlet series to the whole complex plane.
+
+\begin{proof}
+ Holomorphicity of $f$ on $\H$ follows from the fact that $a_n = O(n^R)$, and since it is given by a $q$ series, we have $f(z + 1) = f(z)$. So it remains to show that
+ \[
+ f\left(-\frac{1}{z}\right) = z^k f(z).
+ \]
+ By analytic continuation, it is enough to show this for
+ \[
+ f\left(\frac{i}{y}\right) = (iy)^k f(iy).
+ \]
+ Using the inverse Mellin transform (which does apply in this case, even if it might not meet the conditions of the version we proved), we have
+ \begin{align*}
+ f(iy) &= \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \Lambda(s) y^{-s}\;\d s \\
+ &= \frac{1}{2\pi i} \int_{\frac{k}{2} - i \infty}^{\frac{k}{2} + i \infty} \Lambda(s) y^{-s}\;\d s\\
+ &= \frac{(-1)^{k/2}}{2\pi i} \int_{\frac{k}{2} - i\infty}^{\frac{k}{2} + i \infty} \Lambda(k - s) y^{-s}\;\d s\\
+ &= \frac{(-1)^{k/2}}{2\pi i}\int_{\frac{k}{2} - i\infty}^{\frac{k}{2} + i\infty} \Lambda(s) y^{s - k}\;\d s\\
+ &= (-1)^{k/2} y^{-k}f\left(\frac{i}{y}\right).
+ \end{align*}
+ Note that for the change of contour, we need
+ \[
+ \int_{\frac{k}{2} \pm iT }^{\sigma \pm iT} \Lambda(s) y^{-s}\;\d s \to 0
+ \]
+ as $T \to \infty$. To do so, we need the fact that $\Gamma(\sigma + iT) \to 0$ rapidly as $T \to \pm \infty$ uniformly for $\sigma$ in any compact set, which indeed holds in this case.
+\end{proof}
+This is a pretty result, but not really of much interest at this level. However, it is a model for other proofs of more interesting things, which we unfortunately would not go into.
+
+Recall we previously defined the Eisenstein series $E_2$, and found that
+\[
+ E_2(z) = 1 - 24 \sum_{n \geq 1} \sigma_1(n) q^n.
+\]
+We know this is not a modular form, because there is no modular form of weight $2$. However, $E_2$ does satisfy $E_2(z + 1) = E_2(z)$, as it is given by a $q$-expansion. So we know that $E_2(-\frac{1}{z}) \not= z^2 E_2(z)$. But what is it?
+
+We let
+\[
+ f(y) = \frac{1 - E_2(iy)}{24} = \sum_{n \geq 1} \sigma_1(n) e^{-2\pi n y}.
+\]
+\begin{prop}
+ We have
+ \[
+ M(f, s) = (2\pi)^{-s} \Gamma(s) \zeta(s) \zeta(s - 1).
+ \]
+\end{prop}
+This is a really useful result, because we understand $\Gamma$ and $\zeta$ well.
+
+\begin{proof}
+ Doing the usual manipulations, it suffices to show that
+ \[
+ \sum \sigma_1(m) m^{-s} = \zeta(s) \zeta(s - 1).
+ \]
+ We know if $(m, n) = 1$, then
+ \[
+ \sigma_1(mn) = \sigma_1(m) \sigma_1(n).
+ \]
+ So we have
+ \[
+ \sum_{m \geq 1} \sigma_1(m) m^{-s} = \prod_p (1 + (p + 1) p^{-s} + (p^2 + p + 1) p^{-2s} + \cdots).
+ \]
+ Also, we have
+ \begin{multline*}
+ (1 - p^{-s}) (1 + (p + 1) p^{-s} + (p^2 + p + 1) p^{-2s} + \cdots) \\
+ = 1 + p^{1 - s} + p^{2 - 2s} + \cdots = \frac{1}{1 - p^{1 - s}}.
+ \end{multline*}
+ Therefore we find
+ \[
+ \sum \sigma_1(m) m^{-s} = \zeta(s) \zeta(s - 1).\qedhere
+ \]
+\end{proof}
+The idea is now to use the functional equation for $\zeta$ and the inverse Mellin transform to obtain the transformation formula for $E_2$. This is the \emph{reverse} of what we did for genuine modular forms. This argument is due to Weil.
+
+Recall that we defined
+\[
+ \Gamma_\R(s) = \pi^{-s/2} \Gamma\left(\frac{s}{2}\right),\quad Z(s) = \Gamma_\R(s) \zeta(s).
+\]
+Then we found the functional equation
+\[
+ Z(s) = Z(1 - s).
+\]
+Similarly, we defined
+\[
+ \Gamma_\C(s) = 2(2\pi)^{-s} \Gamma(s) = \Gamma_\R(s) \Gamma_\R(s + 1),
+\]
+where the last equality follows from the duplication formula. Then we know
+\[
+ (2\pi)^{-s} \Gamma(s) = (2\pi)^{-s} (s - 1) \Gamma(s - 1) = \frac{s - 1}{4\pi} \Gamma_\R(s) \Gamma_\R(s - 1).
+\]
+This implies we have the functional equation
+\begin{prop}
+ \[
+ M(f, s) = \frac{s - 1}{4\pi} Z(s) Z(s - 1) = - M(f, 2 - s).
+ \]
+\end{prop}
+This also tells us the function is holomorphic except for poles at $s = 0, 1, 2$, which are all simple.
+
+\begin{thm}
+ We have
+ \[
+ f(y) + y^{-2} f\left(\frac{1}{y}\right) = \frac{1}{24} - \frac{1}{4\pi} y^{-1} + \frac{1}{24} y^{-2}.
+ \]
+\end{thm}
+
+\begin{proof}
+ We will apply the Mellin inversion formula. To justify this application, we need to make sure our $f$ behaves sensibly as $y \to 0, \infty$. We use the absurdly terrible bound
+ \[
+ \sigma_1(m) \leq \sum_{1 \leq d \leq m} d \leq m^2.
+ \]
+ Then we get
+ \[
+ f^{(n)}(y) \ll \sum_{m \geq 1} m^{2 + n} e^{-2\pi m y}.
+ \]
+ This is certainly very well-behaved as $y \to \infty$, and is $\ll y^{-N}$ for all $N$. As $y \to 0$, this is
+ \[
+ \ll \frac{1}{(1 - e^{-2\pi y})^{n + 3}} \ll y^{-n - 3}.
+ \]
+ So $f$ satisfies conditions of our Mellin inversion theorem with $k = 3$.
+
+ We pick any $\sigma > 3$. Then the inversion formula says
+ \[
+ f(y) = \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} M(f, s) y^{-s}\;\d s.
+ \]
+ So we have
+ \begin{align*}
+ f\left(\frac{1}{y}\right) &= \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} - M(f, 2 - s) y^s \;\d s\\
+ &= \frac{-1}{2\pi i} \int_{2 - \sigma - i \infty}^{2 - \sigma + i \infty}M(f, s) y^{2 - s}\;\d s
+ \end{align*}
+ So we have
+ \[
+ f(y) + y^{-2} f\left(\frac{1}{y}\right) = \frac{1}{2\pi i} \left(\int_{\sigma - i \infty}^{\sigma + i \infty} - \int_{2 - \sigma - i \infty}^{2 - \sigma + i \infty}\right) M(f, s) y^{-s}\;\d s.
+ \]
+ This contour is pretty simple. It just looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.54] (1.5, -2) -- (1.5, 2);
+ \draw [->-=0.54] (-1.5, 2) -- (-1.5, -2);
+ \node at (0, 0) {$\times$};
+ \node at (-0.75, 0) {$\times$};
+ \node at (0.75, 0) {$\times$};
+ \node [above] at (0.75, 0) {$2$};
+ \node [above] at (0, 0) {$1$};
+ \node [above] at (-0.75, 0) {$0$};
+ \end{tikzpicture}
+ \end{center}
+ Using the fact that $M(f, s)$ vanishes quickly as $|\Im (s)| \to \infty$, this is just the sum of residues
+ \[
+ f(y) + y^{-2} f\left(\frac{1}{y}\right) = \sum_{s_0 = 0, 1, 2} \res_{s = s_0} M(f, s) y^{-s_0}.
+ \]
+ It remains to compute the residues. At $s = 2$, we have
+ \[
+ \res_{s = 2} M(f, s) = \frac{1}{4\pi} Z(2) \res_{s = 1}Z(s) = \frac{1}{4\pi}\cdot \frac{\pi}{6} \cdot 1 = \frac{1}{24}.
+ \]
+ By the functional equation, this implies
+ \[
+ \res_{s = 0} M(f, s) = \frac{1}{24}.
+ \]
+ Now it remains to see what happens when $s = 1$. We have
+ \[
+ \res_{s = 1} M(f, s) = \frac{1}{4\pi} \res_{s = 1} Z(s) \res_{s = 0}Z(s) = -\frac{1}{4\pi}.
+ \]
+ So we are done.
+\end{proof}
+
+\begin{cor}
+ \[
+ E_2 \left(-\frac{1}{z}\right) = z^2 E_2(z) + \frac{12z}{2\pi i}.
+ \]
+\end{cor}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ E_2(iy) &= 1 - 24 f(y)\\
+ &= 1 + 24 y^{-2} f\left(\frac{1}{y}\right) - 1 + \frac{6}{\pi} y^{-1} - y^{-2}\\
+ &= -y^{-2}\left(1 - 24 f\left(\frac{1}{y}\right)\right) + \frac{6}{\pi} y^{-1}\\
+ &= -y^{-2}E_2\left(\frac{-1}{iy}\right) + \frac{6}{\pi} y^{-1}.
+ \end{align*}
+ Then the result follows from setting $z = iy$, and then applying analytic continuation.
+\end{proof}
+
+\begin{cor}
+ \[
+ \Delta(z) = q \prod_{m \geq 1} (1 - q^m)^{24}.
+ \]
+\end{cor}
+
+\begin{proof}
+ Let $D(z)$ be the right-hand side. It suffices to show this is a modular form, since $S_{12}(\Gamma(1))$ is one-dimensional. It is clear that this is holomorphic on $\H$, and $D(z + 1) = D(z)$. If we can show that
+ \[
+ D\underset{12}{|}
+ \begin{psmallmatrix}
+ 0 & -1\\
+ 1 & 0
+ \end{psmallmatrix} = D,
+ \]
+ then we are done. In other words, we need to show that
+ \[
+ D\left(\frac{-1}{z}\right) = z^{12}D(z).
+ \]
+ But we have
+ \begin{align*}
+ \frac{D'(z)}{D(z)} &= 2\pi i - 24 \sum_{m \geq 1} \frac{2\pi i m q^m}{1 - q^m} \\
+ &= 2\pi i \left(1 - 24 \sum_{m, d\geq 1} m q^{md}\right)\\
+ &= 2\pi i E_2(z)
+ \end{align*}
+ So we know
+ \begin{align*}
+ \frac{\d}{\d z}\left(\log D\left(-\frac{1}{z}\right)\right) &= \frac{1}{z^2} \frac{D'}{D}\left(-\frac{1}{z}\right) \\
+ &= \frac{1}{z^2} 2\pi i E_2\left(-\frac{1}{z}\right)\\
+ &= \frac{D'}{D}(z) + 12 \frac{\d}{\d z} \log z.
+ \end{align*}
+ So we know that
+ \[
+ \log D\left(-\frac{1}{z}\right) = \log D + 12 \log z + c,
+ \]
+ for some locally constant function $c$. So we have
+ \[
+ D\left(-\frac{1}{z}\right) = z^{12}D(z) \cdot C
+ \]
+ for some other constant $C$. By trying $z = i$, we find that $C = 1$ (since $D(i) \not= 0$ by the infinite product). So we are done.
+\end{proof}
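+As a check, multiplying out the first few factors of the product gives
+\[
+ q \prod_{m \geq 1}(1 - q^m)^{24} = q - 24 q^2 + 252 q^3 - 1472 q^4 + \cdots,
+\]
+in agreement with the values $\tau(2) = -24$, $\tau(3) = 252$, $\tau(4) = -1472$.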
+
+\section{Modular forms for subgroups of \tph{$\SL_2(\Z)$}{SL2(Z)}{SL2(&\#x2124)}}
+\subsection{Definitions}
+For the rest of the course, we are going to look at modular forms defined on some selected subgroups of $\SL_2(\Z)$.
+
+We fix a subgroup $\Gamma \subseteq \Gamma(1)$ of finite index. For $\Gamma(1)$, we defined a modular form to be a holomorphic function $f: \H \to \C$ that is invariant under the action of $\Gamma(1)$, and is holomorphic at infinity. For a general subgroup $\Gamma$, the invariance part of the definition works just as well. However, we need something stronger than just holomorphicity at infinity.
+
+Before we explain what the problem is, we first look at some examples. Recall that we write $\bar{\Gamma}$ for the image of $\Gamma$ in $\PSL_2(\Z)$.
+
+\begin{lemma}
+ Let $\Gamma \leq \Gamma(1)$ be a subgroup of finite index, and $\gamma_1, \cdots, \gamma_d$ be right coset representatives of $\bar{\Gamma}$ in $\overline{\Gamma(1)}$, i.e.
+ \[
+ \overline{\Gamma(1)} = \coprod_{i = 1}^d \bar{\Gamma} \gamma_i.
+ \]
+ Then
+ \[
+ \coprod_{i = 1}^d \gamma_i \mathcal{D}
+ \]
+ is a fundamental domain for $\Gamma$.
+\end{lemma}
+
+\begin{eg}
+ Take
+ \[
+ \Gamma^0(p) = \left\{
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \SL_2(\Z) : b \equiv 0 \pmod p
+ \right\}.
+ \]
+ Recall there is a canonical map $\SL_2(\Z) \to \SL_2(\F_p)$ that is surjective. Then $\Gamma^0(p)$ is the inverse image of
+ \[
+ H =
+ \left\{
+ \begin{pmatrix}
+ a & 0\\
+ b & c
+ \end{pmatrix}
+ \right\} \leq \SL_2(\F_p).
+ \]
+ So we know
+ \[
+ (\Gamma(1) : \Gamma^0(p)) = (\SL_2(\F_p): H) = \frac{|\SL_2(\F_p)|}{|H|} = p + 1,
+ \]
+ where the last equality follows from counting. In fact, we have an explicit choice of coset representatives
+ \[
+ \SL_2(\F_p) = \coprod_{b \in \F_p}
+ \begin{pmatrix}
+ * & 0\\
+ * & *
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & b\\
+ 0 & 1
+ \end{pmatrix}
+ \coprod
+ \begin{pmatrix}
+ * & 0\\
+ * & *
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0 & -1\\
+ 1 & 0
+ \end{pmatrix}
+ \]
+ Thus, we also have coset representatives of $\Gamma^0(p)$ by
+ \[
+ \left\{
+ T^b =
+ \begin{pmatrix}
+ 1 & b\\
+ 0 & 1
+ \end{pmatrix}
+ : b \in \F_p\right\} \cup \left\{
+ S =
+ \begin{pmatrix}
+ 0 & -1\\
+ 1 & 0
+ \end{pmatrix}
+ \right\}.
+ \]
+ For example, when $p = 3$, then $b \in \{0, -1, +1\}$. Then the fundamental domain is
+ \begin{center}
+ \begin{tikzpicture}
+
+ \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \node at (0, 3) {$\mathcal{D}$};
+
+ \begin{scope}[shift={(2, 0)}]
+ \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \node at (0, 3) {$T^1\mathcal{D}$};
+ \end{scope}
+ \begin{scope}[shift={(-2, 0)}]
+ \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \node at (0, 3) {$T^{-1}\mathcal{D}$};
+ \end{scope}
+
+ \fill [opacity=0.5, mblue] (1, 1.732) arc(120:180:2) arc(0:60:2) arc(120:60:2);
+ \draw [thick, mblue] (1, 1.732) arc(120:180:2) arc(0:60:2) arc(120:60:2);
+ \node at (0, 1.5) {$S\mathcal{D}$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+So in defining modular forms, we'll want to control functions as $z \to 0$ (in some way), as well as when $y \to \infty$. In fact, what we really need is that the function has to be holomorphic at all points in $\P^1(\Q) = \Q \cup \{\infty\}$. It happens that in the case of $\Gamma(1)$, the group $\Gamma(1)$ acts transitively on $\Q \cup \{\infty\}$. So by invariance of $f$ under $\Gamma(1)$, being holomorphic at $\infty$ ensures we are holomorphic everywhere.
+
+In general, we will have to figure out the orbits of $\Q \cup \{\infty\}$ under $\Gamma$, and then pick a representative of each orbit. Before we go into that, we first understand what the fundamental domain of $\Gamma$ looks like.
+
+%Note that $\Gamma(1)$ acts transitively on $\P^1(\Q) = \Q \cup \{\infty\}$ by M\"obius transforms, but in general, subgroups $\Gamma$ don't. To see this, note that if $-\frac{d}{c} \in \Q$ and $(c, d) = 1$, Then there are $a, b \in \Z$ with $ad -b c = 1$ and $c, d \in \Z$. So we have
+%\[
+% \begin{pmatrix}
+% a & b\\
+% c & d
+% \end{pmatrix}
+% \frac{-d}{c} = \infty.
+%\]
+\begin{defi}[Cusps]\index{cusps}
+ The \emph{cusps} of $\Gamma$ (or $\bar{\Gamma}$) are the orbits of $\Gamma$ on $\P^1(\Q)$.
+\end{defi}
+We would want to say a subgroup of index $n$ has $n$ cusps, but this is obviously false, as we can see from our example above. The problem is that we should count each cusp with ``multiplicity''. We will call this the \emph{width}. For example, in the fundamental domain above
+\begin{center}
+ \begin{tikzpicture}
+
+ \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \node at (0, 3) {$\mathcal{D}$};
+
+ \node [above] at (0, 4) {$p = \infty$};
+
+ \begin{scope}[shift={(2, 0)}]
+ \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \node at (0, 3) {$T^1\mathcal{D}$};
+ \end{scope}
+ \begin{scope}[shift={(-2, 0)}]
+ \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \node at (0, 3) {$T^{-1}\mathcal{D}$};
+ \end{scope}
+
+ \fill [opacity=0.5, mblue] (1, 1.732) arc(120:180:2) arc(0:60:2) arc(120:60:2);
+ \draw [thick, mblue] (1, 1.732) arc(120:180:2) arc(0:60:2) arc(120:60:2);
+ \node at (0, 1.5) {$S\mathcal{D}$};
+
+ \node [below] at (0, 0) {$0$};
+ \end{tikzpicture}
+\end{center}
+In this case, we should count $p = \infty$ three times, and $p = 0$ once. One might worry this depends on which fundamental domain we pick for $\Gamma$. Thus, we will define it in a different way. From now on, it is more convenient to talk about $\bar{\Gamma}$ than $\Gamma$.
+
+Since $\overline{\Gamma(1)}$ acts on $\P^1(\Q)$ transitively, it actually just suffices to understand how to define the multiplicity for the cusp of $\infty$. The stabilizer of $\infty$ in $\overline{\Gamma(1)}$ is
+\[
+ \overline{\Gamma(1)}_\infty = \left\{
+ \pm \begin{pmatrix}
+ 1 & b\\
+ 0 & 1
+ \end{pmatrix}: b \in \Z\right\}.
+\]
+For a general subgroup $\overline{\Gamma} \leq \overline{\Gamma(1)}$, the stabilizer of $\infty$ is $\overline{\Gamma}_\infty = \overline{\Gamma} \cap \overline{\Gamma(1)}_\infty$. Then this is a finite index subgroup of $\overline{\Gamma(1)}_\infty$, and hence must be of the form
+\[
+ \overline{\Gamma}_\infty = \left\bra \pm
+ \begin{pmatrix}
+ 1 & m\\
+ 0 & 1
+ \end{pmatrix}\right\ket
+\]
+for some $m \geq 1$. We define the width of the cusp to be this $m$.
+
+More generally, for an arbitrary cusp, we define the width by conjugation.
+
+\begin{defi}[Width of cusp]\index{width of cusp}
 Let $\alpha \in \Q \cup \{\infty\}$ be a representative of a cusp of $\Gamma$. We pick $g \in \Gamma(1)$ with $g(\infty) = \alpha$. Then $\gamma(\alpha) = \alpha$ iff $g^{-1}\gamma g(\infty) = \infty$. So
+ \[
+ g^{-1} \overline{\Gamma}_\alpha g = (\overline{g^{-1}\Gamma g})_\infty = \left\bra \pm
+ \begin{pmatrix}
+ 1 & m_\alpha\\
+ 0 & 1
+ \end{pmatrix}\right\ket
+ \]
+ for some $m_\alpha \geq 1$. This $m_\alpha$ is called the \emph{width} of the cusp $\alpha$ (i.e.\ the cusp $\Gamma \alpha$).
+\end{defi}
+
+The $g$ above is not necessarily unique. But if $g'$ is another, then
+\[
+ g' = g
+ \begin{pmatrix}
+ \pm 1 & n\\
+ 0 & \pm 1
+ \end{pmatrix}
+\]
+for some $n \in \Z$. So $m_\alpha$ is independent of the choice of $g$.
+
+As promised, we have the following proposition:
+\begin{prop}
+ Let $\Gamma$ have $\nu$ cusps of widths $m_1, \cdots, m_\nu$. Then
+ \[
+ \sum_{i = 1}^\nu m_i = (\overline{\Gamma(1)}: \bar{\Gamma}).
+ \]
+\end{prop}
+
+\begin{proof}
+ There is a surjective map
+ \[
+ \pi: \bar{\Gamma} \setminus \overline{\Gamma(1)} \to \text{cusps}
+ \]
+ given by sending
+ \[
+ \bar{\Gamma} \cdot \gamma \mapsto \bar{\Gamma}\cdot \gamma(\infty).
+ \]
+ It is then an easy group theory exercise that $|\pi^{-1}([\alpha])| = m_\alpha$.
+%
+% Let $\alpha = g(\infty) \in \P^1(\Q)$, and $g \in \Gamma(1)$. Then
+% \begin{align*}
+% \pi(\bar{\Gamma} \cdot \gamma g) = \bar{\Gamma} \cdot \alpha &\Longleftrightarrow\bar{\Gamma} \gamma g(\infty) = \bar{\Gamma} \alpha\\
+% &\Longleftrightarrow \gamma(\alpha) \in \bar{\Gamma}_\alpha\\
+% &\Longleftrightarrow \gamma \in \bar{\Gamma} \overline{\Gamma(1)}_\alpha.
+% \end{align*}
+% Now note that there is a bijection
+% \[
+% \frac{\bar{\Gamma} \cdot \overline{\Gamma(1)}_\alpha}{\bar{\Gamma}} \cong \frac{\bar{\Gamma(1)}_\alpha}{ \bar{\Gamma}_\alpha}
+% \]
+% by mapping $\bar{\Gamma} \gamma$ to $\bar{\Gamma}_\alpha \gamma$.
+%
+% So we know $\bar{\Gamma} \cdot \overline{\Gamma(1)}_\alpha$ is a disjoint union of $(\overline{\Gamma(1)}_\alpha: \bar{\Gamma}_\alpha) = m_\alpha$ cosets of $\bar{\Gamma}$. So $\pi^{-1}(\bar{\Gamma}_\alpha)$ has $m_\alpha$ elements.
+\end{proof}
+
+\begin{eg}
+ Consider the following subgroup
+ \[
+ \Gamma = \Gamma_0(p) =\left\{
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} : c \equiv 0 \pmod p
+ \right\}.
+ \]
+ Then we have
+ \[
+ \big(\overline{\Gamma(1)}: \overline{\Gamma_0(p)}\big) = \big(\Gamma(1): \Gamma_0(p)\big) = p + 1.
+ \]
+ We can compute
+ \[
+ \Gamma_\infty = \left\langle -1,
+ \begin{pmatrix}
+ 1 & 1\\
+ 0 & 1
+ \end{pmatrix}\right\rangle = \Gamma(1)_\infty.
+ \]
+ So $m_\infty = 1$. But we also have
+ \[
+ \Gamma_0 = \left\langle -1,
+ \begin{pmatrix}
+ 1 & 0\\
+ p & 1
+ \end{pmatrix}\right\rangle,
+ \]
 and this gives $m_0 = p$. Since $1 + p = p + 1$ is the full index, these are the only cusps of $\Gamma_0(p)$. Likewise, for $\Gamma^0(p)$, we have $m_\infty = p$ and $m_0 = 1$.
+%
+% Geometrically, the width of a cusp $\alpha$ is the number of copies of fundamental domain where the boundary meets $\alpha$:
+% \begin{center}
+% \begin{tikzpicture}
+%
+% \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+% \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+% \node at (0, 3) {$\mathcal{D}$};
+%
+% \node [above] at (0, 4) {$p = \infty$};
+%
+% \begin{scope}[shift={(2, 0)}]
+% \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+% \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+% \node at (0, 3) {$T^1\mathcal{D}$};
+% \end{scope}
+% \begin{scope}[shift={(-2, 0)}]
+% \fill [opacity=0.5, mblue] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+% \draw [mblue, thick] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+% \node at (0, 3) {$T^{-1}\mathcal{D}$};
+% \end{scope}
+%
+% \fill [opacity=0.5, mblue] (1, 1.732) arc(120:180:2) arc(0:60:2) arc(120:60:2);
+% \draw [thick, mblue] (1, 1.732) arc(120:180:2) arc(0:60:2) arc(120:60:2);
+% \node at (0, 1.5) {$S\mathcal{D}$};
+%
+% \node [below] at (0, 0) {$0$};
+% \end{tikzpicture}
+% \end{center}
+\end{eg}
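As a numerical sanity check (not part of the notes), one can verify the index computation $(\Gamma(1) : \Gamma_0(p)) = p + 1$ by brute force, assuming the standard bijection between the cosets of $\Gamma_0(p)$ and the points of $\P^1(\F_p)$:

```python
# Sanity check (not from the notes): the cosets of Gamma_0(p) in SL_2(Z) are
# in bijection with P^1(F_p), so the index is |P^1(F_p)| = p + 1, matching
# the widths m_infinity + m_0 = 1 + p found above.
def proj_line_size(p):
    """Count the points of P^1(F_p) by brute-force normalisation."""
    points = set()
    for c in range(p):
        for d in range(p):
            if (c, d) == (0, 0):
                continue
            if d != 0:
                inv = pow(d, -1, p)          # scale so that d = 1
                points.add((c * inv % p, 1))
            else:
                points.add((1, 0))           # the point (1 : 0)
    return len(points)

for p in [2, 3, 5, 7, 11, 13]:
    assert proj_line_size(p) == p + 1        # = (Gamma(1) : Gamma_0(p))
```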
+
+Equipped with the definition of a cusp, we can now define a modular form!
+\begin{defi}[Modular form]\index{modular form}
+ Let $\Gamma \subseteq \SL_2(\Z)$ be of finite index, and $k \in \Z$. A \emph{modular form of weight $k$ on $\Gamma$} is a holomorphic function $f: \H \to \C$ such that
+ \begin{enumerate}
+ \item $f\underset{k}{|} \gamma = f$ for all $\gamma \in \Gamma$.
+ \item $f$ is holomorphic at the cusps of $\Gamma$.
+ \end{enumerate}
+ If moreover,
+ \begin{enumerate}
+ \item[(iii)] $f$ vanishes at the cusps of $\Gamma$,
+ \end{enumerate}
+ then we say $f$ is a \term{cusp form}.
+\end{defi}
+As before, we have to elaborate a bit more on what we mean by (ii) and (iii). We already know what it means when the cusp is $\infty$ (i.e.\ it is $\Gamma \infty$). Now in general, we write our cusp as $\Gamma \alpha = \Gamma g(\infty)$ for some $g \in \Gamma(1)$.
+
+Then we know
+\[
+ \bar{\Gamma}_\alpha = g \left\bra \pm
+ \begin{pmatrix}
+ 1 & m\\
+ 0 & 1
+ \end{pmatrix}\right\ket g^{-1}.
+\]
+This implies we must have
+\[
+ \begin{pmatrix}
+ 1 & m\\
+ 0 & 1
+ \end{pmatrix}\text{ or }-
+ \begin{pmatrix}
+ 1 & m\\
+ 0 & 1
+ \end{pmatrix} \in g^{-1}\Gamma_\alpha g.
+\]
+Squaring, we know that we must have
+\[
+ \begin{pmatrix}
+ 1 & 2m\\
+ 0 & 1
+ \end{pmatrix} \in g^{-1} \Gamma_\alpha g.
+\]
+So we have
+\[
+ f \underset{k}{|} g \underset{k}{|}
+ \begin{psmallmatrix}
+ 1 & 2m\\
+ 0 & 1
+ \end{psmallmatrix} = f \underset{k}{|} g.
+\]
+So we know
+\[
+ (f\underset{k}{|}g) (z + 2m) = (f\underset{k}{|}g)(z).
+\]
+Thus, we can write
+\[
+ f\underset{k}{|} g = \tilde{f}_g(q) = \sum_{n \in \Z} (\text{constant}) q^{n/2m} = \sum_{\substack{n \in \Q\\ 2mn \in \Z}} a_{g, n}(f) q^n,
+\]
+where we define
+\[
+ q^{a/b} = e^{2\pi i a z/b}.
+\]
+Then $f$ is holomorphic at the cusp $\alpha = g(\infty)$ if
+\[
+ a_{g, n}(f) = 0
+\]
+for all $n < 0$, and \emph{vanishes} at $\alpha$ if moreover
+\[
+ a_{g, 0}(f) = 0.
+\]
+Note that if $-I \in \Gamma$, then in fact
+\[
+ \begin{pmatrix}
+ 1 & m\\
+ 0 & 1
+ \end{pmatrix} \in g^{-1} \Gamma_\alpha g.
+\]
So the ``$q$-expansion at $\alpha$'' is actually a series in $q^{1/m}$.
+
+\separator
+
+There is a better way of phrasing this. Suppose we have $g(\infty) = \alpha = g'(\infty)$, where $g' \in \GL_2(\Q)^+$. Then we have
+\[
+ g' = gh
+\]
+for some $h \in \GL_2(\Q)^+$ such that $h(\infty) = \infty$. So we can write
+\[
+ h = \pm
+ \begin{pmatrix}
+ a & b\\
    0 & d
+ \end{pmatrix}
+\]
+where $a, b, d \in \Q$ and $a, d > 0$.
+
+Then, we have
+\begin{align*}
+ f\underset{k}{|} g' &= (f \underset{k}{|} g) \underset{k}{|} h \\
+ &= \sum_{\substack{n \in \Q \\ 2mn \in \Z}} a_{g, n} (f) q^n \underset{k}{|} \pm
+ \begin{psmallmatrix}
+ a & b\\
+ 0 & d
+ \end{psmallmatrix}\\
  &= (\pm 1)^k \left(\frac{a}{d}\right)^{k/2} \sum_n a_{g, n} (f)\, q^{an/d} e^{2\pi i bn/d}.
+\end{align*}
+In other words, we have
+\[
  f\underset{k}{|} g' = \sum_{n \geq 0} c_n q^{rn}
+\]
+for some positive $r \in \Q$. So condition (ii) is equivalent to (ii'):
+\begin{enumerate}
+ \item[(ii')] For all $g \in \GL_2(\Q)^+$, the function $f\underset{k}{|} g$ is holomorphic at $\infty$.
+\end{enumerate}
+
Note that (ii') doesn't involve $\Gamma$, which is convenient. Likewise, (iii) is equivalent to
+\begin{enumerate}
+ \item[(iii')] $f\underset{k}{|} g$ vanishes at $\infty$ for all $g \in \GL_2(\Q)^+$.
+\end{enumerate}
+We can equivalently replace $\GL_2(\Q)^+$ with $\SL_2(\Z)$.
+
Modular forms and cusp forms of weight $k$ on $\Gamma$ form vector spaces $M_k(\Gamma) \supseteq S_k(\Gamma)$.
+
+Recall that for $\Gamma = \Gamma(1) = \SL_2(\Z)$, we knew $M_k = 0$ if $k$ is odd, because
+\[
+ f\underset{k}{|} (-I) = (-1)^k f.
+\]
+More generally, if $-I \in \Gamma$, then $M_k = 0$ for all odd $k$. But if $-I \not\in \Gamma$, then usually there can be non-zero forms of odd weight.
+
+Let's see some examples of such things.
+
+\begin{prop}
+ Let $\Gamma \subseteq \Gamma(1)$ be of finite index, and $g \in G = \GL_2(\Q)^+$. Then $\Gamma' = g^{-1} \Gamma g \cap \Gamma(1)$ also has finite index in $\Gamma(1)$, and if $f \in M_k(\Gamma)$ or $S_k(\Gamma)$, then $f\underset{k}{|} g \in M_k(\Gamma')$ or $S_k(\Gamma')$ respectively.
+\end{prop}
+
+\begin{proof}
+ We saw that $(G, \Gamma)$ has property (H). So this implies the first part. Now if $\gamma \in \Gamma'$, then $g\gamma g^{-1} \in \Gamma$. So
+ \[
+ f \underset{k}{|}g \gamma g^{-1} = f \Rightarrow f\underset{k}{|} g \underset{k}{|}\gamma = f\underset{k}{|} g.
+ \]
+ The conditions (ii') and (iii') are clear.
+\end{proof}
+This provides a way of producing a lot of modular forms. For example, we can take $\Gamma = \Gamma(1)$, and take
+\[
+ g =
+ \begin{pmatrix}
+ N & 0\\
+ 0 & 1
+ \end{pmatrix}.
+\]
+Then it is easy to see that $\Gamma' = \Gamma_0(N)$. So if $f(z) \in M_k(\Gamma(1))$, then $f(Nz) \in M_k(\Gamma_0(N))$. But in general, there are lots of modular forms in $\Gamma_0(N)$ that cannot be constructed this way.
+
As before, we don't have a lot of modular forms of low weight, and there is also an easy bound on the dimension for higher weights.
+\begin{thm}
+ We have
+ \[
+ M_k(\Gamma) =
+ \begin{cases}
+ 0 & k < 0\\
+ \C & k = 0
+ \end{cases},
+ \]
+ and
+ \[
+ \dim_\C M_k(\Gamma) \leq 1 + \frac{k}{12} (\Gamma(1): \Gamma).
+ \]
+ for all $k > 0$.
+\end{thm}
In contrast to the case of modular forms of level $1$, we don't have explicit generators for this.
+
+\begin{proof}
+ Let
+ \[
+ \Gamma(1) = \coprod_{i = 1}^d \Gamma \gamma_i.
+ \]
+ We let
+ \[
+ f \in M_k(\Gamma),
+ \]
+ and define
+ \[
+ \mathcal{N}_f = \prod_{1 \leq i \leq d} f\underset{k}{|} \gamma_i.
+ \]
+ We claim that $\mathcal{N}_f \in M_{kd} (\Gamma(1))$, and $\mathcal{N}_f = 0$ iff $f = 0$. The latter is obvious by the principle of isolated zeroes.
+
 Indeed, $\mathcal{N}_f$ is certainly holomorphic on $\H$, and if $\gamma \in \Gamma(1)$, then right multiplication by $\gamma$ permutes the cosets $\Gamma \gamma_i$, so
 \[
 \mathcal{N}_f\underset{kd}{|} \gamma = \prod_i f\underset{k}{|} \gamma_i \gamma = \mathcal{N}_f.
+ \]
+ As $f \in M_k(\Gamma)$, each $f\underset{k}{|} \gamma_i$ is holomorphic at $\infty$.
+ \begin{itemize}
+ \item If $k < 0$, then $\mathcal{N}_f \in M_{kd} (\Gamma(1)) = 0$. So $f = 0$.
 \item If $k \geq 0$, then suppose $\dim M_k(\Gamma) > N$. Pick $z_1, \cdots, z_N \in \mathcal{D} \setminus \{i, \rho\}$ distinct. Then there exists $0 \not= f \in M_k(\Gamma)$ with
+ \[
+ f(z_1) = \cdots = f(z_N) = 0.
+ \]
+ So
+ \[
+ \mathcal{N}_f(z_1) = \cdots = \mathcal{N}_f (z_N) = 0.
+ \]
+ Then by our previous formula for zeros of modular forms, we know $N \leq \frac{kd}{12}$. So $\dim M_k(\Gamma) \leq1 + \frac{kd}{12}$.
+ \item If $k = 0$, then $M_0(\Gamma)$ has dimension $\leq 1$. So $M_0(\Gamma) = \C$.\qedhere
+ \end{itemize}
+\end{proof}
+
+\subsection{The Petersson inner product}
+As promised earlier, we define an inner product on the space of cusp forms.
+
We let $f, g \in S_k(\Gamma)$. Then the function $y^k f(z) \overline{g(z)}$ is $\Gamma$-invariant, and is bounded on $\H$, since $f$ and $g$ vanish at cusps. Also, recall that $\frac{\d x\; \d y}{y^2}$ is a $\GL_2(\R)^+$-invariant measure. So we can define
+\[
 \bra f, g\ket = \frac{1}{v(\Gamma)} \int_{\Gamma \setminus \H} y^k f(z) \overline{g(z)} \frac{\d x\; \d y}{y^2} \in \C,
+\]
+where $\int_{\Gamma \setminus \H}$ means we integrate over any fundamental domain, and $v(\Gamma)$ is the volume of a fundamental domain,
+\[
 v(\Gamma) = \int_{\Gamma \setminus \H} \frac{\d x\; \d y}{y^2} = (\overline{\Gamma(1)}:\bar{\Gamma}) \int_\mathcal{D} \frac{\d x\;\d y}{y^2}.
+\]
The advantage of this normalization is that if we replace $\Gamma$ by a subgroup $\Gamma'$ of finite index, then a fundamental domain for $\Gamma'$ is the union of $(\bar{\Gamma}: \bar{\Gamma}')$ many fundamental domains for $\Gamma$. So the expression for $\bra f, g\ket$ is \emph{independent} of $\Gamma$, as long as both $f, g \in S_k(\Gamma)$.
+
+This is called the \term{Petersson inner product}.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\bra\ph, \ph\ket$ is a Hermitian inner product on $S_k(\Gamma)$.
+ \item $\bra \ph, \ph\ket$ is invariant under translations by $\GL_2(\Q)^+$. In other words, if $\gamma \in \GL_2(\Q)^+$, then
+ \[
+ \bra f\underset{k}{|}\gamma, g \underset{k}{|} \gamma\ket = \bra f, g\ket.
+ \]
+ \item If $f, g \in S_k(\Gamma(1))$, then
+ \[
+ \bra T_n f, g\ket = \bra f, T_n g\ket.
+ \]
+ \end{enumerate}
+\end{prop}
+This completes our previous proof that the $T_n$ can be simultaneously diagonalized.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We know $\bra f, g\ket$ is $\C$-linear in $f$, and $\overline{\bra f, g\ket} = \bra g, f\ket$. Also, if $\bra f, f \ket = 0$, then
+ \[
+ \int_{\Gamma \setminus \H} y^{k - 2} |f|^2 \;\d x \;\d y = 0,
+ \]
+ but since $f$ is continuous, and $y$ is never zero, this is true iff $f$ is identically zero.
+ \item Let $f' = f\underset{k}{|} \gamma$ and $g' = g \underset{k}{|}\gamma \in S_k(\Gamma')$, where $\Gamma' = \Gamma \cap \gamma^{-1} \Gamma \gamma$. Then
+ \[
+ y^k f' \bar{g}' = y^k \frac{(\det \gamma)^k}{|cz + d|^{2k}} \cdot f(\gamma(z)) \overline{g(\gamma(z))} = (\Im \gamma(z))^k f(\gamma(z)) \overline{g(\gamma(z))}.
+ \]
 Now $\Im \gamma(z)$ is just the $y$-coordinate of $\gamma(z)$. So we have
+ \begin{align*}
+ \bra f', g'\ket &= \frac{1}{v(\Gamma')} \int_{\mathcal{D}_{\Gamma'}} \left.y^k f \bar{g} \frac{\d x\;\d y}{y^2}\right|_{\gamma(z)} \\
+ &= \frac{1}{v(\Gamma')} \int_{\gamma(\mathcal{D}_{\Gamma'})} y^k f \bar{g} \frac{\d x\;\d y}{y^2}.
+ \end{align*}
 Now $\gamma(\mathcal{D}_{\Gamma'})$ is a fundamental domain for $\gamma \Gamma'\gamma^{-1} = \gamma \Gamma \gamma^{-1} \cap \Gamma$, and note that $v(\Gamma') = v(\gamma \Gamma' \gamma^{-1})$ by invariance of the measure. So $\bra f', g'\ket = \bra f, g\ket$.
+ \item Note that $T_n$ is a polynomial with integer coefficients in $\{T_p : p \mid n\}$. So it is enough to do it for $n = p$. We claim that
+ \[
+ \bra T_p f, g\ket = p^{\frac{k}{2} - 1} (p + 1) \bra f\underset{k}{|} \delta, g\ket,
+ \]
+ where $\delta \in \Mat_2(\Z)$ is any matrix with $\det (\delta) = p$.
+
+ Assuming this, we let
+ \[
+ \delta^a = p \delta^{-1} \in \Mat_2(\Z),
+ \]
+ which also has determinant $p$. Now as
+ \[
+ g \underset{k}{|}
+ \begin{psmallmatrix}
+ p & 0\\
+ 0 & p
+ \end{psmallmatrix} = g,
+ \]
+ we know
+ \begin{align*}
+ \bra T_p f, g\ket &= p^{\frac{k}{2} - 1}(p + 1) \bra f\underset{k}{|}\delta, g\ket \\
+ &= p^{\frac{k}{2} - 1} (p + 1) \bra f, g \underset{k}{|} \delta^{-1}\ket \\
 &= p^{\frac{k}{2} - 1} (p + 1) \bra f, g \underset{k}{|} \delta^a\ket \\
+ &= \bra f, T_p g\ket
+ \end{align*}
+ To prove the claim, we let
+ \[
+ \Gamma(1)
+ \begin{pmatrix}
+ p & 0\\
+ 0 & 1
+ \end{pmatrix} \Gamma(1) =
 \coprod_{0 \leq j \leq p} \Gamma(1) \delta \gamma_j
+ \]
 for some $\gamma_j \in \Gamma(1)$. Then we have
+ \begin{align*}
+ \bra T_p f, g\ket &= p^{\frac{k}{2} - 1} \left\bra \sum_j f \underset{k}{|} \delta \gamma_j, g\right\ket\\
+ &= p^{\frac{k}{2} - 1} \sum_j \bra f\underset{k}{|} \delta \gamma_j, g \underset{k}{|} \gamma_j\ket \\
+ &= p^{\frac{k}{2} - 1} (p + 1) \bra f\underset{k}{|} \delta, g\ket,
+ \end{align*}
+ using the fact that $g \underset{k}{|} \gamma_j = g$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\subsection{Examples of modular forms}
We now look at some examples of (non-trivial) modular forms for different subgroups. At the end, we will completely characterize $M_2(\Gamma_0(4))$. This seems like a rather peculiar thing to characterize, but it turns out that understanding $M_2(\Gamma_0(4))$ will allow us to prove a rather remarkable result!
+
+\subsubsection*{Eisenstein series}
At the beginning of the course, the first examples of modular forms we had were the Eisenstein series. It turns out a generalized version of the Eisenstein series will give us more modular forms.
+
+\begin{defi}[$G_{\mathbf{r}, k}$]\index{$G_{\mathbf{r}, k}$}
+ Let $k \geq 3$. Pick any vector $\mathbf{r} = (r_1, r_2) \in \Q^2$. We define
+ \[
+ G_{\mathbf{r}, k}(z) = \sideset{}{'}\sum_{\mathbf{m} \in \Z^2} \frac{1}{((m_1 + r_1) z + m_2 + r_2)^k},
+ \]
+ where $\sum'$ means we omit any $\mathbf{m}$ such that $\mathbf{m} + \mathbf{r} = \mathbf{0}$.
+\end{defi}
+For future purposes, we will consider $\mathbf{r}$ as a row vector.
+
+As before, we can check that this converges absolutely for $k \geq 3$ and $z \in \H$. This obviously depends only on $\mathbf{r} \bmod \Z^2$, and
+\[
+ G_{\mathbf{0}, k} = G_k.
+\]
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item If $\gamma \in \Gamma(1)$, then
+ \[
+ G_{\mathbf{r}, k}\underset{k}{|}\gamma = G_{\mathbf{r} \gamma, k}.
+ \]
+ \item If $N\mathbf{r} \in \Z^2$, then $G_{\mathbf{r}, k} \in M_k(\Gamma(N))$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $g \in \GL_2(\R)^+$ and $\mathbf{u} \in \R^2$, then
+ \[
+ \frac{1}{(u_1 z + u_2)^k} \underset{k}{|} g = \frac{(\det g)^{k/2}}{ ((au_1 + cu_2) z + (bu_1 + d u_2))^k} = \frac{(\det g)^{k/2}}{(v_1 z + v_2)^k},
+ \]
 where $\mathbf{v} = \mathbf{u} g$. So
+ \begin{align*}
 G_{\mathbf{r}, k} \underset{k}{|}\gamma &= \sideset{}{'}\sum_{\mathbf{m}}\frac{1}{(((\mathbf{m} + \mathbf{r})\gamma)_1 z + ((\mathbf{m} + \mathbf{r})\gamma)_2)^k} \\
+ &= \sum_{\mathbf{m}'} \frac{1}{((m_1' + r_1') z + m_2' + r_2')^k}\\
+ &= G_{\mathbf{r} \gamma, k}(z),
+ \end{align*}
+ where $\mathbf{m}' = \mathbf{m} \gamma$ and $\mathbf{r}' = \mathbf{r} \gamma$.
+ \item By absolute convergence, $G_{\mathbf{r}, k}$ is holomorphic on the upper half plane. Now if $N\mathbf{r} \in \Z^2$ and $\gamma \in \Gamma(N)$, then $N\mathbf{r} \gamma \equiv N\mathbf{r} \pmod N$. So $\mathbf{r} \gamma \equiv \mathbf{r} \pmod {\Z^2}$. So we have
+ \[
+ G_{\mathbf{r}, k}\underset{k}{|}\gamma = G_{\mathbf{r} \gamma, k} = G_{\mathbf{r}, k}.
+ \]
+ So we get invariance under $\Gamma(N)$. So it is enough to prove $G_{\mathbf{r}, k}$ is holomorphic at cusps, i.e.\ $G_{\mathbf{r}, k}\underset{k}{|} \gamma$ is holomorphic at $\infty$ for all $\gamma \in \Gamma(1)$. So it is enough to prove that for \emph{all} $\mathbf{r}$, $G_{\mathbf{r}, k}$ is holomorphic at $\infty$.
+
+ We can write
+ \[
+ G_{\mathbf{r}, k} = \left(\sum_{m_1 + r_1 > 0} + \sum_{m_1 + r_1 = 0} + \sum_{m_1 + r_1 < 0}\right)\frac{1}{((m_1 + r_1) z + m_2 + r_2)^k}.
+ \]
+ The first sum is
+ \[
+ \sum_{m_1 + r_1 > 0} = \sum_{m_1 > - r_1} \sum_{m_2 \in \Z} \frac{1}{([(m_1 + r_1) z + r_2] + m_2)^k}.
+ \]
+ We know that $(m_1 + r_1)z + r_2 \in \H$. So we can write this as a Fourier series
+ \[
 \sum_{m_1 > -r_1} \sum_{d \geq 1} \frac{(-2\pi i)^k}{(k - 1)!} d^{k - 1} e^{2\pi i r_2 d} q^{(m_1 + r_1) d}.
+ \]
+ We now see that all powers of $q$ are positive. So this is holomorphic.
+
+ The sum over $m_1 + r_1 = 0$ is just a constant. So it is fine.
+
+ For the last term, we have
+ \[
+ \sum_{m_1 + r_1 < 0} = \sum_{m_1 < - r_1} \sum_{m_2 \in \Z} \frac{(-1)^k}{((-m_1 - r_1)z - r_2 - m_2)^k},
+ \]
+ which is again a series in positive powers of $q^{-m_1 - r_1}$.\qedhere
+ \end{enumerate}
+\end{proof}
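The transformation law in (i) can be tested numerically with truncated lattice sums. The following script (not part of the notes; the truncation bound, test point and tolerance are choices made for illustration) takes $k = 4$, $\mathbf{r} = (1/3, 0)$ and $\gamma = S = \begin{psmallmatrix}0 & -1\\1 & 0\end{psmallmatrix}$, for which $\mathbf{r}\gamma = (0, -1/3)$:

```python
# Numerical check (truncated sums, not from the notes) of
# G_{r,k} |_k gamma = G_{r gamma, k} for k = 4, r = (1/3, 0), gamma = S.
M = 100          # truncation: sum over |m1|, |m2| <= M

def G(r1, r2, z, k=4):
    total = 0.0
    for m1 in range(-M, M + 1):
        for m2 in range(-M, M + 1):
            if m1 + r1 == 0 and m2 + r2 == 0:
                continue                     # the omitted term of the primed sum
            total += 1 / ((m1 + r1) * z + (m2 + r2)) ** k
    return total

z = 0.3 + 1.1j
lhs = z ** (-4) * G(1/3, 0, -1/z)   # (G_{r,4} |_4 S)(z) = z^{-4} G_{r,4}(-1/z)
rhs = G(0, -1/3, z)                 # G_{rS,4}(z), since rS = (r2, -r1)
assert abs(lhs - rhs) < 1e-2        # equal up to truncation error
```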
+
+\subsubsection*{\tph{$\vartheta$}{theta}{θ} functions}
+Our next example of modular forms is going to come from $\vartheta$ functions. We previously defined a $\vartheta$ function, and now we are going to call it $\vartheta_3$:\index{$\vartheta_3$}
+\[
+ \vartheta_3(z) = \vartheta(z) = \sum_{n \in \Z} e^{\pi in^2 z} = 1 + 2 \sum_{n \geq 1} q^{n^2/2}.
+\]
+We proved a functional equation for this, which was rather useful.
+
+Unfortunately, this is not a modular form. Applying elements of $\Gamma(1)$ to $\vartheta_3$ will give us some new functions, which we shall call $\vartheta_2$ and $\vartheta_4$.
+
+\begin{defi}[$\vartheta_2$ and $\vartheta_4$]\index{$\vartheta_2$}\index{$\vartheta_4$}
+ \begin{align*}
+ \vartheta_2(z) &= \sum_{n \in \Z} e^{\pi i (n + 1/2)^2 z} = q^{1/8} \sum_{n \in \Z} q^{n(n + 1)/2} = 2 q^{1/8} \sum_{n \geq 0} q^{n(n + 1)/2}\\
+ \vartheta_4(z) &= \sum_{n \in \Z} (-1)^n e^{\pi i n^2 z} = 1 + 2 \sum_{n \geq 1} (-1)^n q^{n^2/2}.
+ \end{align*}
+\end{defi}
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
 \item $\vartheta_4(z) = \vartheta_3(z \pm 1)$ and $\vartheta_2(z + 1) = e^{\pi i/4} \vartheta_2(z)$.
+ \item
+ \begin{align*}
+ \vartheta_3\left(-\frac{1}{z}\right) &= \left(\frac{z}{i}\right)^{1/2} \vartheta_3(z)\\
+ \vartheta_4 \left(-\frac{1}{z}\right) &= \left(\frac{z}{i}\right)^{1/2} \vartheta_2(z)
+ \end{align*}
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
 \item Immediate from the definitions, e.g.\ from the fact that $e^{\pi i n^2} = (-1)^n$.
+ \item The first part we've seen already. To do the last part, we use the Poisson summation formula. Let
+ \[
+ h_t(x) = e^{-\pi t(x + 1/2)^2} = g_t\left(x + \frac{1}{2}\right),
+ \]
+ where
+ \[
+ g_t(x) = e^{-\pi tx^2}.
+ \]
+ We previously saw
+ \[
+ \hat{g}_t(y) = t^{-1/2} e^{-\pi y^2/t}.
+ \]
+ We also have
+ \begin{align*}
+ \hat{h}_t(y) &= \int e^{-2\pi i xy} g_t\left(x + \frac{1}{2}\right)\;\d x \\
+ &= \int e^{-2\pi i (x - 1/2) y} g_t(x) \;\d x \\
+ &= e^{\pi i y} \hat{g}_t(y).
+ \end{align*}
+ So by the Poisson summation formula,
+ \[
+ \vartheta_2(it) = \sum_{n \in \Z} h_t(n) = \sum_{n \in \Z} \hat{h}_t(n) = \sum_{n \in \Z} (-1)^n t^{-1/2} e^{-\pi n^2/t} = t^{-1/2} \vartheta_4 \left(\frac{i}{t}\right).\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
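The functional equation $\vartheta_2(it) = t^{-1/2} \vartheta_4(i/t)$ just proved can be confirmed numerically with truncated sums (a sanity check, not part of the notes):

```python
import math

# Check theta_2(i t) = t^{-1/2} theta_4(i / t) for real t > 0; both sums
# converge extremely fast, so a modest truncation suffices.
def theta2_it(t, N=50):
    # theta_2(i t) = sum_n exp(-pi t (n + 1/2)^2)
    return sum(math.exp(-math.pi * t * (n + 0.5) ** 2) for n in range(-N, N + 1))

def theta4_i_over_t(t, N=50):
    # theta_4(i / t) = sum_n (-1)^n exp(-pi n^2 / t)
    return sum((-1) ** n * math.exp(-math.pi * n * n / t) for n in range(-N, N + 1))

for t in [0.3, 1.0, 2.7]:
    assert abs(theta2_it(t) - theta4_i_over_t(t) / math.sqrt(t)) < 1e-12
```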
+There is also a $\vartheta_1$, but we have $\vartheta_1 = 0$. Of course, we did not just invent a funny name for the zero function for no purpose. In general, we can define functions $\vartheta_j(u, z)$, and the $\vartheta$ functions we defined are what we get when we set $u = 0$. It happens that $\vartheta_1$ has the property that
+\[
+ \vartheta_1(u, z) = - \vartheta_1(-u, z),
+\]
+which implies $\vartheta_1(z) = 0$.
+
We now see that the action of $\SL_2(\Z)$ sends us between $\vartheta_2$, $\vartheta_3$ and $\vartheta_4$, up to simple factors.
+
+\begin{cor}\leavevmode
+ \begin{enumerate}
+ \item Let
+ \[
+ F =
+ \begin{pmatrix}
+ \vartheta_2^4\\
+ \vartheta_3^4\\
+ \vartheta_4^4
+ \end{pmatrix}.
+ \]
+ Then
+ \[
+ F(z + 1) =
+ \begin{pmatrix}
+ -1 & 0 & 0\\
+ 0 & 0 & 1\\
+ 0 & 1 & 0
+ \end{pmatrix} F,\quad z^{-2} F\left(-\frac{1}{z}\right) =
+ \begin{pmatrix}
+ 0 & 0 & -1\\
+ 0 & -1 & 0\\
+ -1 & 0 & 0
+ \end{pmatrix}F
+ \]
 \item $\vartheta_j^4 \in M_2(\Gamma)$ for a subgroup $\Gamma \leq \Gamma(1)$ of finite index. In particular, $\vartheta_j^4 \underset{2}{|} \gamma$ is holomorphic at $\infty$ for any $\gamma \in \GL_2(\Q)^+$.
+ \end{enumerate}
+\end{cor}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Immediate from the theorem.
 \item We know $\overline{\Gamma(1)} = \bra S, T\ket$, where $T = \pm \begin{psmallmatrix}1 & 1\\0 & 1\end{psmallmatrix}$ and $S = \pm \begin{psmallmatrix}0 & -1\\1 & 0\end{psmallmatrix}$. So by (i), there is a homomorphism $\rho: \Gamma(1) \to \GL_3(\Z)$ with $\rho(-I) = I$ such that
+ \[
+ F\underset{2}{|} \gamma = \rho(\gamma) F,
+ \]
+ where $\rho(\gamma)$ is a signed permutation. In particular, the image of $\rho$ is finite, so the kernel $\Gamma = \ker \rho$ has finite index, and this is the $\Gamma$ we want.
+
 It remains to check holomorphicity. But each $\vartheta_j$ is holomorphic at $\infty$. Since $F\underset{2}{|} \gamma= \rho (\gamma) F$, we know $\vartheta_j^4\underset{2}{|}\gamma$ is a sum of things holomorphic at $\infty$, and is hence holomorphic at $\infty$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+It would be nice to be more specific about exactly what subgroup $\vartheta_j^4$ is invariant under. Of course, whenever $\gamma \in \Gamma$, then we have $\vartheta_j^4 \underset{2}{|}\gamma = \vartheta_j^4$. But in fact $\vartheta_j^4$ is invariant under a larger group.
+
To do this, it suffices to do it for $\vartheta^4 = \vartheta_3^4$, and the rest follow by conjugation.
+
We introduce a bit of notation.
+\begin{notation}
+ We write\index{$W_N$}
+ \[
+ W_N =
+ \begin{pmatrix}
+ 0 & -1\\
+ N & 0
+ \end{pmatrix}
+ \]
+\end{notation}
+Note that in general, $W_N$ does \emph{not} have determinant $1$.
+
+\begin{thm}
+ Let $f(z) = \vartheta(2z)^4$. Then $f(z) \in M_2(\Gamma_0(4))$, and moreover, $f\underset{2}{|}W_4 = -f$.
+\end{thm}
+
+To prove this, we first note the following lemma:
+\begin{lemma}
+ $\Gamma_0(4)$ is generated by
+ \[
+ -I,\quad
+ T=
+ \begin{pmatrix}
+ 1 & 1\\0 & 1
+ \end{pmatrix},
+ \quad
+ U=
+ \begin{pmatrix}
+ 1 & 0\\
+ 4 & 1
+ \end{pmatrix} =
+ W_4
+ \begin{pmatrix}
+ 1 & -1 \\
+ 0 & 1
+ \end{pmatrix}
+ W_4^{-1}.
+ \]
+\end{lemma}
+Four is special. This is not a general phenomenon.
+
+\begin{proof}
+ It suffices to prove that $\overline{\Gamma_0(4)}$ is generated by $T$ and $U = \pm \begin{psmallmatrix} 1 & 0\\ 4 & 1 \end{psmallmatrix}$.
+
+ Let
+ \[
+ \gamma = \pm
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \overline{\Gamma_0(4)}.
+ \]
+ We let
+ \[
+ s(\gamma) = a^2 + b^2.
+ \]
 As $c$ is even, we know $a \equiv 1 \pmod 2$. So $s(\gamma) \geq 1$, and moreover $s(\gamma) = 1$ iff $b = 0, a = \pm 1$, iff $\gamma = U^n$ for some $n$.
+
+ We now claim that if $s(\gamma) \not= 1$, then there exists $\delta \in \{T^{\pm 1}, U^{\pm 1}\}$ such that $s(\gamma\delta) < s(\gamma)$. If this is true, then we are immediately done.
+
+ To prove the claim, if $s(\gamma) \not= 1$, then note that $|a| \not= |2b|$ as $a$ is odd.
+ \begin{itemize}
+ \item If $|a| < |2b|$, then $\min\{|b \pm a|\} < |b|$. This means $s(\gamma T^{\pm 1}) = a^2 + (b \pm a)^2 < s(\gamma)$.
+ \item If $|a| > |2b|$, then $\min \{|a \pm 4b|\} < |a|$, so $s(\gamma U^{\pm 1}) = (a \pm 4b)^2 + b^2 < s(\gamma)$.\qedhere
+ \end{itemize}
+\end{proof}
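The descent in the proof is effective, and can be turned into a short program (a sketch, not part of the notes; the sample word in $T$ and $U$ is made up) that reduces any element of $\overline{\Gamma_0(4)}$ to a power of $U$:

```python
# Sketch (not from the notes) of the descent in the proof: right-multiply an
# element of Gamma_0(4) by T^{+-1} or U^{+-1}, strictly decreasing
# s(gamma) = a^2 + b^2, until s = 1, i.e. until gamma = +-(1, 0; c, 1) = U^{c/4}.
def mat_mul(A, B):
    (a, b, c, d), (e, f, g, h) = A, B
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

T, Ti = (1, 1, 0, 1), (1, -1, 0, 1)
U, Ui = (1, 0, 4, 1), (1, 0, -4, 1)

def reduce_to_U_power(g):
    while g[0] ** 2 + g[1] ** 2 > 1:
        a, b = g[0], g[1]
        if abs(a) < abs(2 * b):      # shrink b via T^{+-1}: b -> b +- a
            g = mat_mul(g, Ti if abs(b - a) < abs(b + a) else T)
        else:                        # shrink a via U^{+-1}: a -> a +- 4b
            g = mat_mul(g, Ui if abs(a - 4 * b) < abs(a + 4 * b) else U)
    return g

# an arbitrary element of Gamma_0(4), built here as a word in T and U
gamma = (1, 0, 0, 1)
for m in [U, T, T, T, Ui, Ui, T]:
    gamma = mat_mul(gamma, m)
assert gamma[0] * gamma[3] - gamma[1] * gamma[2] == 1 and gamma[2] % 4 == 0

r = reduce_to_U_power(gamma)
# the reduction ends at +-U^n, confirming gamma lies in <-I, T, U>
assert r in {(1, 0, r[2], 1), (-1, 0, r[2], -1)} and r[2] % 4 == 0
```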
+
+\begin{proof}[Proof of theorem]
+ It is enough to prove that
+ \[
+ f\underset{2}{|}T = f\underset{2}{|} U = f.
+ \]
+ This is now easy to prove, as it is just a computation. Since $\vartheta(z + 2) = \vartheta(z)$, we know
+ \[
+ f\underset{2}{|} T = f(z + 1) = f(z).
+ \]
+ We also know that
+ \[
+ f\underset{2}{|} W_4 = 4 (4z)^{-2} f\left(\frac{-1}{4z}\right) = \frac{1}{4 z^2} \vartheta\left(-\frac{1}{2z}\right)^4 = - f(z),
+ \]
+ as
+ \[
+ \vartheta\left(-\frac{1}{z}\right) = \left(\frac{z}{i}\right)^{1/2} \vartheta(z).
+ \]
+ So we have
+ \[
+ f\underset{2}{|} U = f\underset{2}{|} W_4 \underset{2}{|} T^{-1} \underset{2}{|} W_4 = (-1)(-1)f = f.\qedhere
+ \]
+\end{proof}
+
+We look at $\Gamma_0(2)$ and $\Gamma_0(4)$ more closely. We have
+\[
+ \Gamma_0(2) = \left\{
+ \begin{pmatrix}
+ a & b\\
+ 2c & d
+ \end{pmatrix}
+ \right\} = \left\{\gamma \in \Gamma(1): \gamma =
+ \begin{pmatrix}
+ 1 & *\\
+ 0 & 1
+ \end{pmatrix}\mod 2
+ \right\}.
+\]
+We know $|\SL_2(\Z/2\Z)| = 6$, and so $(\Gamma(1): \Gamma_0(2)) = 3$. We have coset representatives
+\[
+ I,
+ \begin{pmatrix}
+ 0 & -1\\
+ 0 & 1
+ \end{pmatrix},
+ \begin{pmatrix}
+ 1 & 0\\
+ 1 & 1
+ \end{pmatrix}.
+\]
+We also have a map
+\begin{align*}
+ \Gamma_0(2) &\twoheadrightarrow \Z/2\Z\\
+ \begin{pmatrix}
+ a & b\\
+ 2c & d
 \end{pmatrix} &\mapsto c \bmod 2,
+\end{align*}
which one can directly check to be a homomorphism. This has kernel $\Gamma_0(4)$. So we know $(\Gamma_0(2) : \Gamma_0(4)) = 2$, with coset representatives
+\[
+ I,
+ \begin{pmatrix}
+ 1 & 0\\
+ 2 & 1
 \end{pmatrix}.
+\]
+So
+\[
+ \overline{\Gamma_0(2)} = \left\bra T,
+ \pm
+ \begin{pmatrix}
+ 1 & 0\\
+ 2 & 1
+ \end{pmatrix} = W_2
+ \begin{pmatrix}
+ 1 & 1\\
+ 0 & 1
+ \end{pmatrix} W_2^{-1}\right\ket.
+\]
+We can draw fundamental domains for $\Gamma^0(2)$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, thick, dashed] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+
+ \begin{scope}[shift={(2, 0)}, xscale=-1]
+ \draw [mblue, thick, dashed] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \end{scope}
+
+ \begin{scope}[shift={(-2, 0)}]
+ \draw [mblue, thick, dashed] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+ \end{scope}
+
+ \fill [opacity=0.5, mblue] (2, 4) -- (2, 2) arc(90:180:2) arc(0:90:2) -- (-2, 4);
+ \draw [thick, mblue] (2, 4) -- (2, 2) arc(90:180:2) arc(0:90:2) -- (-2, 4);
+ \end{tikzpicture}
+\end{center}
+and $\Gamma^0(4)$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, thick, dashed] (1, 4) -- (1, 1.732) arc (60:120:2) -- (-1, 4);
+
+ \begin{scope}[shift={(2, 0)}, xscale=-1]
+ \draw [mblue, thick, dashed] (-1, 4) -- (-1, 1.732) arc(60:90:2);
+ \end{scope}
+
+ \begin{scope}[shift={(-2, 0)}]
+ \draw [mblue, thick, dashed] (-1, 4) -- (-1, 1.732) arc(60:90:2);
+ \end{scope}
+
+ \draw [thick, mblue] (4, 4) -- (4, 0) arc(0:180:2) arc(0:180:2) -- (-4, 4);
+ \fill [opacity=0.5, mblue] (4, 4) -- (4, 0) arc(0:180:2) arc(0:180:2) -- (-4, 4);
+ \end{tikzpicture}
+\end{center}
+We are actually interested in $\Gamma_0(2)$ and $\Gamma_0(4)$ instead, and their fundamental domains look ``dual''.
+
+Consider
+\[
+ g(z) = E_2 \underset{2}{|}
+ \begin{psmallmatrix}
+ 2 & 0\\
+ 0 & 1
+ \end{psmallmatrix} - E_2 = 2 E_2(2z) - E_2(z).
+\]
+Recall that we had
+\[
+ E_2(z) = 1 - 24 \sum_{n \geq 1} \sigma_1(n) q^n = z^{-2} E_2 \left(-\frac{1}{z}\right) - \frac{12}{2\pi i z}.
+\]
+\begin{prop}
+ We have $g \in M_2(\Gamma_0(2))$, and $g \underset{2}{|} W_2 = -g$.
+\end{prop}
+
+\begin{proof}
+ We compute
+ \begin{align*}
+ g\underset{2}{|} W_2 &= \frac{2}{(2z)^2} g\left(-\frac{1}{2z}\right)\\
+ &= \frac{1}{z^2} E_2\left(-\frac{1}{z}\right) - \frac{2}{(2z)^2} E_2\left(\frac{-1}{2z}\right)\\
 &= E_2(z) + \frac{12}{2\pi i z} - 2\left(E_2(2z) + \frac{12}{2\pi i \cdot 2z}\right)\\
+ &= - g(z).
+ \end{align*}
+ We also have
+ \[
+ g\underset{2}{|}T = g(z + 1) = g(z),
+ \]
+ and so
+ \[
+ g \underset{2}{|}
+ \begin{psmallmatrix}
+ 1 & 0\\
+ 2 & 1
+ \end{psmallmatrix} = g \underset{2}{|} W_2 T^{-1} W_2^{-1} = g.
+ \]
+ Moreover, $g$ is holomorphic at $\infty$, and hence so is $g \underset{2}{|} W_2 = -g$. So $g$ is also holomorphic at $0 = W_2(\infty)$. As $\infty$ has width $1$ and $0$ has width $2$, we see that these are all the cusps, and so $g$ is holomorphic at the cusps. So $g \in M_2(\Gamma_0(2))$.
+\end{proof}
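The identity $g \underset{2}{|} W_2 = -g$ just proved can be confirmed numerically from the $q$-expansion of $E_2$ (a sanity check, not part of the notes; the test point and truncation are arbitrary choices):

```python
import cmath

# Verify g |_2 W_2 = -g numerically, where g(z) = 2 E_2(2z) - E_2(z) and
# E_2 is summed from its q-expansion E_2 = 1 - 24 sum_n sigma_1(n) q^n.
def E2(z, N=200):
    q = cmath.exp(2j * cmath.pi * z)
    total = 1
    for n in range(1, N):
        sigma1 = sum(d for d in range(1, n + 1) if n % d == 0)
        total -= 24 * sigma1 * q ** n
    return total

def g(z):
    return 2 * E2(2 * z) - E2(z)

z = 0.13 + 0.9j
lhs = 2 * (2 * z) ** (-2) * g(-1 / (2 * z))   # g |_2 W_2 (det W_2 = 2)
assert abs(lhs + g(z)) < 1e-8
```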
+
+Now we can do this again. We let
+\[
+ h = g(2z) = \frac{1}{2} g\underset{2}{|}
+ \begin{psmallmatrix}
+ 2 & 0\\
+ 0 & 1
+ \end{psmallmatrix} = 2 E_2(4z) - E_2(2z).
+\]
Since $g \in M_2(\Gamma_0(2))$, this implies $h \in M_2(\Gamma_0(4)) \supseteq M_2 (\Gamma_0(2))$.
+
+The functions $g$ and $h$ are obviously linearly independent. Recall that we have
+\[
 \dim M_2(\Gamma_0(4)) \leq 1 + \frac{2\,(\Gamma(1): \Gamma_0(4))}{12} = 1 + \frac{2 \cdot 6}{12} = 2.
+\]
+So the inequality is actually an equality. We have therefore shown that
+\begin{thm}
+ \[
+ M_2(\Gamma_0(4)) = \C g \oplus \C h.
+ \]
+\end{thm}
+
+Recall we also found an $f(z) = \vartheta(2z)^{4} \in M_2(\Gamma_0(4))$. So we know we must have
+\[
+ f = ag + bh
+\]
+for some constants $a, b \in \C$.
+
+It is easy to find what $a$ and $b$ are. We just write down the $q$-expansions. We have
+\begin{align*}
+ f &= \vartheta(2z)^4 \\
+ &= (1 + 2q + 2q^4 + \cdots)^4\\
+ &= 1 + 8q + 24 q^2 + 32 q^3 + \cdots\\
+ g &= 2 E_2(2z) - E_2(z)\\
+ &= 1 + 24 \sum_{n \geq 1 } \sigma_1(n) (q^n - 2 q^{2n})\\
+ &= 1 + 24q + 24 q^2 + 96 q^3 + \cdots\\
+ h &= g(2z)\\
+ &= 1 + 24 q^2 + \cdots
+\end{align*}
+By looking at the first two terms, we find that we must have
+\[
 f = \frac{1}{3}g + \frac{2}{3} h = \frac{1}{3} (4 E_2(4z) - E_2(z)) = 1 + 8 \sum_{n \geq 1} \left(\sigma_1(n) - 4 \sigma_1\left(\frac{n}{4}\right)\right) q^n,
+\]
+where $\sigma_1\left(\frac{n}{4}\right) = 0$ if $\frac{n}{4} \not \in \Z$.
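The identity $3f = g + 2h$ can be confirmed mechanically on $q$-expansions (a sanity check, not part of the notes; the truncation order is arbitrary):

```python
# Check 3*theta(2z)^4 = g + 2h coefficient-by-coefficient up to q^39, with
# g = 1 + 24 sum_n sigma_1(n)(q^n - 2 q^{2n}) and h(z) = g(2z).
N = 40

def sigma1(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

theta = [0] * N                     # theta(2z) = sum_n q^{n^2}
n = 0
while n * n < N:
    theta[n * n] += 1 if n == 0 else 2
    n += 1

def mul(A, B):                      # truncated power-series product
    C = [0] * N
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if a and i + j < N:
                C[i + j] += a * b
    return C

f = mul(mul(theta, theta), mul(theta, theta))          # theta(2z)^4
g = [1] + [24 * sigma1(n) for n in range(1, N)]
for n in range(1, (N + 1) // 2):
    g[2 * n] -= 48 * sigma1(n)
h = [g[n // 2] if n % 2 == 0 else 0 for n in range(N)]  # h(z) = g(2z)

assert f[:4] == [1, 8, 24, 32]      # matches the expansion above
assert all(3 * f[n] == g[n] + 2 * h[n] for n in range(N))
```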
+
+But recall that
+\[
+ f = \left(\sum_{n \in \Z} q^{n^2}\right)^4 = \sum_{a, b, c, d \in \Z} q^{a^2 + b^2 + c^2 + d^2} = \sum_{n \in \N} r_4(n) q^n,
+\]
+where \term{$r_4(n)$} is the number of ways to write $n$ as a sum of $4$ squares (where order matters). Therefore,
+\begin{thm}[Lagrange's 4-square theorem]\index{Lagrange's 4-square theorem}\index{4-square theorem}
+ For all $n \geq 1$, we have
+ \[
+ r_4(n) = 8\left(\sigma_1(n) - 4 \sigma_1\left(\frac{n}{4}\right)\right) = 8 \sum_{\substack{d \mid n\\ 4 \nmid d}} d.
+ \]
+ In particular, $r_4(n) > 0$.
+\end{thm}
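As a quick sanity check (not part of the notes; all names below are ours), one can verify the formula for $r_4(n)$ numerically, comparing a brute-force count of lattice points against both divisor-sum expressions:

```python
from math import isqrt

def r4(n):
    # Count (a, b, c, d) in Z^4 with a^2 + b^2 + c^2 + d^2 = n (order and signs counted)
    m = isqrt(n)
    count = 0
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            for c in range(-m, m + 1):
                for d in range(-m, m + 1):
                    if a*a + b*b + c*c + d*d == n:
                        count += 1
    return count

def sigma1(n):
    # Sum of the (positive) divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

for n in range(1, 30):
    rhs = 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)
    lhs2 = 8 * (sigma1(n) - (4 * sigma1(n // 4) if n % 4 == 0 else 0))
    assert r4(n) == rhs == lhs2
```

Both closed forms agree with the brute-force count for every $n$ tested, and in particular are visibly positive.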
+We could imagine ourselves looking at other sums of squares. For example, one can look instead at $\vartheta(2z)^2$, which turns out to be an element of $M_1(\Gamma_1(4))$, and obtain a similar formula for the number of ways of writing $n$ as a sum of $2$ squares.
+
+We can also consider higher powers, and then get \emph{approximate formulae} for $r_{2k}(n)$, because the appropriate Eisenstein series no longer generate $M_k$. There may be a cusp form as well, which gives an error term.
+
+In general, if we have
+\[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ Nc & d
+ \end{pmatrix} \in \Gamma_0(N),
+\]
+then we find
+\[
+ W_N \gamma W_N^{-1} =
+ \begin{pmatrix}
+ d & -c\\
+ -Nb & a
+ \end{pmatrix} \in \Gamma_0(N).
+\]
+So $W_N$ normalizes the group $\Gamma_0(N)$. Hence if $f \in M_k(\Gamma_0(N))$, then $f \underset{k}{|} W_N \in M_k(\Gamma_0(N))$ as well, and this operation also preserves cusp forms.
+
+Moreover, we have
+\[
+ f \underset{k}{|} W_N^2 = f \underset{k}{|}
+ \begin{psmallmatrix}
+ -N & 0\\
+ 0 & -N
+ \end{psmallmatrix} = f,
+\]
+as $-I \in \Gamma_0(N)$. So
+\[
+ M_k(\Gamma_0(N)) = M_k(\Gamma_0(N))^+ \oplus M_k(\Gamma_0(N))^-,
+\]
+where we split into the $(\pm 1)$-eigenspaces for $W_N$, and the cusp forms decompose similarly. This $W_N$ is the \term{Atkin-Lehner involution}. This is the ``substitute'' for the operator $S = \begin{psmallmatrix} 0 & -1\\ 1 & 0\end{psmallmatrix}$ in $\Gamma(1)$.
+
+\section{Hecke theory for \tph{$\Gamma_0(N)$}{Gamma0(N)}{Γ0(N)}}
+Note that it is possible to do this for other congruence subgroups. The key case is
+\[
+ \Gamma_1(N) =
+ \left\{
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \SL_2(\Z) : c \equiv 0 \pmod N,\ a \equiv d \equiv 1 \pmod N
+ \right\}
+\]
+What is special about this? The key point is the following:
+\begin{itemize}
+ \item The map $ \begin{psmallmatrix} a & b\\ c & d \end{psmallmatrix} \mapsto d \bmod N$ is a homomorphism $\Gamma_0(N) \to (\Z/N\Z)^\times$, and the kernel is $\Gamma_1(N)$.
+\end{itemize}
+So we can describe
+\[
+ S_k(\Gamma_1(N)) = \bigoplus_{\chi \in \widehat{(\Z/N\Z)^\times}} S_k(\Gamma_1(N), \chi),
+\]
+where $f \in S_k(\Gamma_1(N), \chi)$ if
+\[
+ f \underset{k}{|}
+ \begin{psmallmatrix}
+ a & b\\
+ c & d
+ \end{psmallmatrix} = \chi(d) f\text{ for all }
+ \begin{psmallmatrix}
+ a & b\\
+ c & d
+ \end{psmallmatrix} \in \Gamma_0(N).
+\]
+Of course, $S_k(\Gamma_1(N), \chi_{\mathrm{trivial}}) = S_k(\Gamma_0(N))$. In general, everything we can do for $\Gamma_0(N)$ can be done for $S_k(\Gamma_1(N), \chi)$.
+
+But why not study $\Gamma(N)$ itself? We can check that
+\[
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & N
+ \end{pmatrix} \Gamma(N)
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & N^{-1}
+ \end{pmatrix} \supseteq \Gamma_1(N^2).
+\]
+So we can go from modular forms on $\Gamma(N)$ to ones on $\Gamma_1(N^2)$.
+
+For various representation-theoretic reasons, things work much better on $\Gamma_1(N)$.
+
+Last time, we used in various places the matrix
+\[
+ W_N =
+ \begin{pmatrix}
+ 0 & -1\\
+ N & 0
+ \end{pmatrix}.
+\]
+Then we decomposed
+\[
+ S_k(\Gamma_0(N)) = S_k(\Gamma_0(N))^+ \oplus S_k(\Gamma_0(N))^-,
+\]
+according to the $\pm$-eigenspaces of the operator $W_N$. A while ago, we proved a theorem about the functional equation of $L$-functions.
+
+\begin{thm}
+ Let $f \in S_k(\Gamma_0(N))^{\varepsilon}$, where $\varepsilon = \pm 1$. Then define
+ \[
+ L(f, s) = \sum_{n\geq 1} a_n n^{-s}.
+ \]
+ where $f = \sum_{n \geq 1} a_n q^n$. Then $L(f, s)$ is an entire function, and satisfies the functional equation
+ \[
+ \Lambda(f, s) = (2\pi)^{-s} \Gamma(s) L(f, s) = \varepsilon (-1)^{k/2} N^{k/2 - s} \Lambda(f, k - s).
+ \]
+\end{thm}
+
+\begin{proof}
+ We have $f \underset{k}{|}W_N = \varepsilon f$, and then we can apply our earlier result. % is there underset?
+\end{proof}
+
+This is a rather remarkable thing. We didn't really use much about the properties of $f$.
+
+Now we want to talk about Hecke operators on $\Gamma_0(N)$. These are a bit more complicated. It is much better to understand them in terms of representation theory, but that is outside the scope of the course. So we will just state the relevant definitions and results.
+
+Recall that a modular form of level $1$ is determined by its $q$-expansion, and if what we have is in fact a Hecke eigenform, then it suffices to know the Hecke eigenvalues, i.e.\ the values of $a_p$. We describe this as having ``multiplicity one''.
+\begin{thm}[Strong multiplicity one for $\SL_2(\Z)$]\index{Strong multiplicity one}
+ Let $f, g \in S_k(\Gamma(1))$ be normalized Hecke eigenforms, i.e.
+ \begin{align*}
+ f | T_p &= \lambda_p f & \lambda_p &= a_p(f)\\
+ g | T_p &= \mu_p g & \mu_p &= a_p(g).
+ \end{align*}
+ Suppose there exists a finite set of primes $S$ such that $\lambda_p = \mu_p$ for all $p \not \in S$. Then $f = g$.
+\end{thm}
+Note that since the space of modular forms is finite-dimensional, we know that a modular form depends on only finitely many of its coefficients. But this alone does not give the above result. For example, it might be that $a_2(f)$ is absolutely crucial for determining which modular form we have, and we cannot leave it out. The strong multiplicity one theorem says this does not happen.
+
+\begin{proof}[Idea of proof]
+ We use the functional equations
+ \begin{align*}
+ \Lambda(f, k - s) &= (-1)^{k/2} \Lambda(f, s)\\
+ \Lambda(g, k - s) &= (-1)^{k/2} \Lambda(g, s)
+ \end{align*}
+ So we know
+ \[
+ \frac{L(f, k - s)}{L(f, s)} = \frac{L(g, k - s)}{L(g, s)}.
+ \]
+ Since these are eigenforms, we have an Euler product
+ \[
+ L(f, s) = \prod_p (1 - \lambda_p p^{-s} + p^{k - 1 - 2s})^{-1},
+ \]
+ and likewise for $g$. So we obtain
+ \[
+ \prod_p \frac{1 - \lambda_p p^{s - k} + p^{2s - k - 1}}{1 - \lambda_p p^{-s} + p^{k - 1 - 2s}} = \prod_p \frac{1 - \mu_p p^{s - k} + p^{2s - k - 1}}{1 - \mu_p p^{-s} + p^{k - 1 - 2s}}.
+ \]
+ Since the factors for $p \not\in S$ agree on both sides, we can cancel them and replace $\prod_p$ with $\prod_{p \in S}$. Then we have some very explicit rational functions, and by looking at the appropriate zeroes and poles, we can actually get $\lambda_p = \mu_p$ for all $p$.
+\end{proof}
+This uses $L$-functions in an essential way.
+
+The reason we mention this here is that a naive generalization of this theorem does not hold for, say, $\Gamma_0(N)$. To even make sense of this statement, we need to say what the Hecke operators are for $\Gamma_0(N)$. We are going to write the definition in a way as short as possible.
+\begin{defi}[Hecke operators on $\Gamma_0(N)$]\index{Hecke operator!$\Gamma_0(N)$}
+ If $p \nmid N$, we define\index{$T_p$}
+ \[
+ T_p f = p^{\frac{k}{2} - 1} \left(f \underset{k}{|}
+ \begin{psmallmatrix}
+ p & 0\\
+ 0 & 1
+ \end{psmallmatrix} +
+ \sum_{b = 0}^{p - 1} f \underset{k}{|}
+ \begin{psmallmatrix}
+ 1 & b\\
+ 0 & p
+ \end{psmallmatrix}\right)
+ \]
+ which is the same as the case with $\Gamma(1)$.
+
+ When $p \mid N$, then we define\index{$U_p$}
+ \[
+ U_p f = p^{\frac{k}{2} - 1} \sum_{b = 0}^{p - 1} f\underset{k}{|}
+ \begin{psmallmatrix}
+ 1 & b\\
+ 0 & p
+ \end{psmallmatrix}.
+ \]
+ Some people call this $T_p$ instead, and this is very confusing.
+\end{defi}
+
+We can compute the effect on $q$-expansions as follows --- when $p \nmid N$, then we have
+\[
+ a_n(T_p f) = a_{np}(f) + p^{k - 1} a_{n/p}(f),
+\]
+where the second term is set to $0$ if $p \nmid n$. If $p \mid N$, then we have
+\[
+ a_n(U_p f) = a_{np}(f).
+\]
+\begin{prop}
+ $T_p, U_p$ send $S_k(\Gamma_0(N))$ to $S_k(\Gamma_0(N))$, and they all commute.
+\end{prop}
+
+\begin{proof}
+ $T_p, U_p$ do correspond to double coset actions
+ \[
+ \Gamma_0(N)
+ \begin{psmallmatrix}
+ 1 & 0\\
+ 0 & p
+ \end{psmallmatrix}\Gamma_0(N) =
+ \begin{cases}
+ \Gamma_0(N)
+ \begin{psmallmatrix}
+ p & 0\\
+ 0 & 1
+ \end{psmallmatrix} \amalg \coprod_b \Gamma_0(N)
+ \begin{psmallmatrix}
+ 1 & b\\
+ 0 & p
+ \end{psmallmatrix} & p \nmid N\\
+ \coprod_b \Gamma_0(N)
+ \begin{psmallmatrix}
+ 1 & b\\
+ 0 & p
+ \end{psmallmatrix} & p \mid N\\
+ \end{cases}.
+ \]
+ Commutativity is verified by carefully computing the effect on the $q$-expansions.
+\end{proof}
+However, these do not generate all the Hecke operators. For example, we have $W_N$!
+
+\begin{eg}
+ Consider $S_{12} (\Gamma_0(2))$. This contains $f = \Delta (z)$ and
+ \[
+ g = f \underset{12}{|}
+ \begin{psmallmatrix}
+ 2 & 0\\
+ 0 & 1
+ \end{psmallmatrix} = 2^6 \Delta(2z) = \Delta \underset{12}{|} W_2,
+ \]
+ using the fact that
+ \[
+ \Delta\underset{k}{|}
+ \begin{psmallmatrix}
+ 0 & -1\\
+ 1 & 0
+ \end{psmallmatrix} = \Delta.
+ \]
+ So the matrix of $W_2$ on $\spn\{f, g\}$ is
+ \[
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix}.
+ \]
+ We can write out
+ \begin{align*}
+ f &= \sum \tau(n) q^n = q - 24 q^2 + 252q^3 - 1472 q^4 + \cdots\\
+ g &= 2^6 \sum \tau(n) q^{2n} = 2^6(q^2 + 24 q^4 + \cdots)
+ \end{align*}
+ So we find that
+ \[
+ U_2 g = 2^6 f.
+ \]
+ It takes a bit more work to see what this does on $f$. We in fact have
+ \[
+ U_2 f = \sum \tau(2n) q^n = -24 q - 1472 q^2 + \cdots = -24 f - 32 g.
+ \]
+ So in fact we have
+ \[
+ U_2 =
+ \begin{pmatrix}
+ -24 & 64\\
+ -32 & 0
+ \end{pmatrix}.
+ \]
+ Now $U_2$ and $W_2$ certainly do not commute. So the Hecke algebra is \emph{not} commutative. In fact, $\Delta$ generates a two-dimensional representation of the Hecke algebra.
+\end{eg}
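The computation in this example can be replicated numerically. The following sketch (our own illustration; all variable names are ours) computes $\tau(n)$ from the product formula $\Delta = q \prod_{n \geq 1}(1 - q^n)^{24}$ by truncated power series, and then checks the relation $U_2 f = -24 f - 32 g$ coefficient by coefficient:

```python
N = 40  # number of q-expansion terms to keep

def delta_coeffs(N):
    """tau(1..N) from Delta = q * prod_{n>=1} (1 - q^n)^24, via truncated series."""
    P = [1] + [0] * N                       # coefficients of prod (1 - q^n)^24
    for n in range(1, N + 1):
        for _ in range(24):                 # multiply by (1 - q^n) twenty-four times
            for k in range(N, n - 1, -1):   # descending, so the update is in place
                P[k] -= P[k - n]
    tau = [0] * (N + 1)
    for m in range(1, N + 1):
        tau[m] = P[m - 1]                   # extra factor of q shifts indices by one
    return tau

tau = delta_coeffs(N)
f = tau[:]                                  # f = Delta(z)
g = [0] * (N + 1)                           # g = 2^6 Delta(2z)
for m in range(1, N // 2 + 1):
    g[2 * m] = 64 * tau[m]

U2f = [0] * (N + 1)                         # a_n(U_2 h) = a_{2n}(h)
for m in range(1, N // 2 + 1):
    U2f[m] = f[2 * m]

# U_2 f should equal -24 f - 32 g up to the truncation order
for m in range(1, N // 2 + 1):
    assert U2f[m] == -24 * f[m] - 32 * g[m]
```

The assertion holds for every coefficient up to the truncation order, matching the matrix computed above.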
+
+This makes life much worse. When we did Hecke algebras for $\Gamma(1)$, all our representations were $1$-dimensional, and we could just work with linear spans. Now everything is higher-dimensional, and things go rather wrong. Similarly, we can consider $\Delta(dz) \in S_{12}(\Gamma_0(N))$ for any $d \mid N$, and this gives a whole bunch of things like this.
+
+This turns out to be the only obstruction to the commutativity of the action of the Hecke algebra. We know $S_k(\Gamma_0(N))$ contains
+\[
+ \{f(dz): f \in S_k(\Gamma_0(M)), dM \mid N, M \not= N\}.
+\]
+We let \term{$S_k(\Gamma_0(N))^{\mathrm{old}}$} be the span of these. These are all the forms that come from a smaller level.
+
+Now $S_k(\Gamma_0(N))$ has an inner product! So the natural thing to do is to consider the orthogonal complement of $S_k(\Gamma_0(N))^{\mathrm{old}}$, and call it \term{$S_k(\Gamma_0(N))^\mathrm{new}$}.
+
+\begin{thm}[Atkin--Lehner]
+ The Hecke algebra $\mathcal{H}(G, \Gamma_0(N))$ fixes $S_k(\Gamma_0(N))^{\mathrm{new}}$ and $S_k(\Gamma_0(N))^{\mathrm{old}}$, and on $S_k(\Gamma_0(N))^{\mathrm{new}}$, it acts as a \emph{commutative} subalgebra of the endomorphism ring, is closed under adjoint, and hence is diagonalizable. Moreover, strong multiplicity one holds, i.e.\ if $S$ is a finite set of primes, and we have $\{\lambda_p: p \not\in S\}$ given, then there exists at most one $N \geq 1$ and at most one $f \in S_k(\Gamma_0(N), 1)^{\mathrm{new}}$ (up to scaling, obviously) for which
+ \[
+ T_p f = \lambda_p f\text{ for all }p \nmid N, p \not \in S.
+ \]
+\end{thm}
+
+\section{Modular forms and rep theory}
+In this final chapter, we are going to talk about the relation between modular forms and representation theory. The phrase ``representation theory'' is a bit vague. We are largely going to talk about automorphic representations, and this is related to the Langlands programme.
+
+Recall that $f$ is a modular form on $\SL_2(\Z)$ if
+\begin{enumerate}
+ \item $f$ is holomorphic $\H \to \C$
+ \item $f\underset{k}{|} \gamma = (cz + d)^{-k} f(\gamma(z)) = f(z)$ for all
+ \[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \SL_2(\Z)
+ \]
+ \item It satisfies suitable growth conditions at the cusp $\infty$.
+\end{enumerate}
+Let's look at the different properties in turn. The second is the modularity condition, which is what gave us nice properties like Hecke operators. The growth condition is some ``niceness'' condition, and for example this gives the finite-dimensionality of the space of modular forms.
+
+But how about the first condition? It seems like an ``obvious'' condition to impose, because we are working on the complex plane. Practically speaking, it allows us to use the tools of complex analysis. But what if we dropped this condition?
+
+\begin{eg}
+ Recall that we had an Eisenstein series of weight $2$,
+ \[
+ E_2 (z) = 1 - 24 \sum_{n \geq 1} \sigma_1(n) q^n.
+ \]
+ This is \emph{not} a modular form. Of course, we have $E_2(z) = E_2(z + 1)$, but we saw that
+ \[
+ E_2 \left(-\frac{1}{z}\right) - z^2 E_2 (z) = \frac{12 z}{2\pi i} \not= 0.
+ \]
+ However, we can get rid of this problem at the expense of making a non-holomorphic modular form. Let's consider the function
+ \[
+ f(z) = \frac{1}{y} = \frac{1}{\Im (z)} = f(z + 1).
+ \]
+ We then look at
+ \[
+ f\left(-\frac{1}{z}\right) - z^2 f(z) = \frac{|z|^2}{y} - \frac{z^2}{y} = \frac{z(\bar{z} - z)}{y} = -2iz.
+ \]
+ Aha! This is the same equation as that for $E_2$ apart from a constant factor. So if we let\index{$\tilde{E}_2(z)$}
+ \[
+ \tilde{E}_2(z) = E_2(z) - \frac{3}{\pi y},
+ \]
+ then this satisfies
+ \[
+ \tilde{E}_2(z) = \tilde{E}_2(z + 1) = z^{-2}\tilde{E}_2\left(-\frac{1}{z}\right).
+ \]
+ The term $\frac{3}{\pi y}$ certainly tends to $0$ as $y \to \infty$, so if we formulate the growth condition in (iii) without assuming holomorphicity of $f$, then we will find that $\tilde{E}_2$ satisfies (ii) and (iii), but not (i). This is an example of a non-holomorphic modular form of weight $2$.
+\end{eg}
+Perhaps this is a slightly artificial example, but it is one.
+
+Let's explore what happens when our functions satisfy (ii) and (iii), but not (i).
+\begin{defi}[Non-holomorphic modular forms]\index{modular form!non-holomorphic}\index{non-holomorphic modular form}
+ We let \term{$W_k(\Gamma(1))$} be the set of all $C^\infty$ functions $\H \to \C$ such that
+ \begin{enumerate}\setcounter{enumi}{1}
+ \item $f\underset{k}{|}\gamma = f$ for all $\gamma \in \Gamma(1)$
+ \item $f(x + iy) = O(y^R)$ as $y \to \infty$ for some $R > 0$, and the same holds for all derivatives.
+ \end{enumerate}
+\end{defi}
+Note that the notation is not standard.
+
+Before we proceed, we need to introduce some notation from complex analysis. As usual, we write $z = x + iy$, and we define the operators
+\begin{align*}
+ \frac{\partial}{\partial z} &= \frac{1}{2} \left(\frac{\partial}{\partial x} + \frac{\partial}{i \partial y}\right)\\
+ \frac{\partial}{\partial \bar{z}} &= \frac{1}{2} \left(\frac{\partial}{\partial x} - \frac{\partial}{i \partial y}\right).
+\end{align*}
+We can check that these operators satisfy
+\[
+ \frac{\partial z}{\partial z} = \frac{\partial \bar{z}}{\partial \bar{z}} = 1,\quad \frac{\partial \bar{z}}{\partial z} = \frac{\partial z}{\partial \bar{z}} = 0.
+\]
+Moreover, the Cauchy--Riemann equations just says $\frac{\partial f}{\partial \bar{z}} = 0$, and if this holds, then the complex derivative is just $\frac{\partial f}{\partial z}$. Thus, if we are working with potentially non-holomorphic functions on the complex plane, it is often useful to consider the operators $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \bar{z}}$ separately.
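To make the operators $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \bar{z}}$ concrete, here is a small numerical sketch (ours, not from the notes), approximating both by central differences and checking the identities above on a holomorphic function and on $\bar{z}$:

```python
# d/dz = (1/2)(d/dx + (1/i) d/dy), d/dzbar = (1/2)(d/dx - (1/i) d/dy)
h = 1e-5

def d_dz(F, z):
    fx = (F(z + h) - F(z - h)) / (2 * h)            # partial derivative in x
    fy = (F(z + 1j * h) - F(z - 1j * h)) / (2 * h)  # partial derivative in y
    return 0.5 * (fx + fy / 1j)

def d_dzbar(F, z):
    fx = (F(z + h) - F(z - h)) / (2 * h)
    fy = (F(z + 1j * h) - F(z - 1j * h)) / (2 * h)
    return 0.5 * (fx - fy / 1j)

z0 = 0.7 + 1.3j
assert abs(d_dzbar(lambda z: z * z, z0)) < 1e-6        # holomorphic: d/dzbar = 0
assert abs(d_dz(lambda z: z * z, z0) - 2 * z0) < 1e-6  # d/dz is the complex derivative
assert abs(d_dzbar(lambda z: z.conjugate(), z0) - 1) < 1e-6  # d zbar / d zbar = 1
assert abs(d_dz(lambda z: z.conjugate(), z0)) < 1e-6         # d zbar / d z = 0
```

This is exactly the dichotomy used below: $\frac{\partial f}{\partial \bar{z}}$ measures the failure of holomorphicity, while $\frac{\partial f}{\partial z}$ recovers the usual derivative when $f$ is holomorphic.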
+
+Using this notation, given $f \in W_k$, we have
+\[
+ f \in M_k \Longleftrightarrow \frac{\partial f}{\partial \bar{z}} = 0.
+\]
+So suppose $f$ is not holomorphic, then $\frac{\partial f}{\partial \bar{z}} \not= 0$. We can define a new operator by\index{$L_k^*$}
+\[
+ L_k^* (f) = -2i y^2 \frac{\partial f}{\partial \bar{z}}.
+\]
+Note that this is slightly strange, because we have a subscript $k$, but the function doesn't depend on $k$. Also, we put a star up there for some reason. It turns out there is a related operator called $L_k$, which does depend on $k$, and this $L_k^*$ is a slight modification that happens not to depend on $k$.
+
+This has the following properties:
+
+\begin{prop}\leavevmode
+ \begin{itemize}
+ \item We have $L_k^* f = 0$ iff $f$ is holomorphic.
+ \item If $f \in W_k(\Gamma(1))$, then $g \equiv L_k^* f \in W_{k - 2} (\Gamma(1))$.
+ \end{itemize}
+\end{prop}
+Thus, $L_k^*$ is a ``lowering'' operator.
+
+\begin{proof}
+ The first part is clear. For the second part, note that we have
+ \[
+ f(\gamma(z)) = (cz + d)^k f(z).
+ \]
+ We now differentiate both sides with respect to $\bar{z}$. Then (after a bit of analysis), we find that
+ \[
+ (c\bar{z} + d)^{-2} \frac{\partial f}{\partial \bar{z}} (\gamma(z)) = (cz + d)^k \frac{\partial f}{\partial \bar{z}}.
+ \]
+ On the other hand, we have
+ \[
+ (\Im \gamma(z))^2 = \frac{y^2}{|cz + d|^4}.
+ \]
+ So we find
+ \[
+ g(\gamma(z)) = -2i \frac{y^2}{|cz + d|^4} (c \bar{z} + d)^2 (cz + d)^k \frac{\partial f}{\partial \bar{z}} = (cz + d)^{k - 2} g(z).
+ \]
+ The growth condition is easy to check.
+\end{proof}
+
+\begin{eg}
+ Consider $\tilde{E}_2$ defined previously. Since $E_2$ is holomorphic, we have
+ \[
+ L_k^* \tilde{E}_2 = \frac{6i}{\pi} y^2 \frac{\partial}{\partial \bar{z}}\left(\frac{1}{y}\right) = \text{constant},
+ \]
+ which is certainly a (holomorphic) modular form of weight $0$.
+\end{eg}
+
+In general, if $L_k^* f$ is actually holomorphic, then it is in $M_{k - 2}$. Otherwise, we can just keep going! There are two possibilities:
+\begin{itemize}
+ \item For some $0 \leq \ell < \frac{k}{2}$, we have
+ \[
+ 0 \not= L_{k - 2\ell}^* \cdots L_{k - 2}^* L_k^* f \in M_{k - 2\ell}.
+ \]
+ \item The function $g = L_2^* L_4^* \cdots L_k^* f \in W_0(\Gamma(1))$, and is non-zero. In this case, $g(\gamma(z)) = g(z)$ for all $\gamma \in \SL_2(\Z)$.
+\end{itemize}
+What does $W_0(\Gamma(1))$ look like? Since it is invariant under $\Gamma(1)$, it is just a $C^\infty$ function on the fundamental domain $\mathcal{D}$ satisfying suitable $C^\infty$ conditions on the boundary. This space is \emph{huge}. For example, it contains any $C^\infty$ function on $\mathcal{D}$ vanishing in a neighbourhood of the boundary.
+
+This is too big. We want to impose some ``regularity'' conditions. Previously, we imposed a very strong regularity condition of holomorphicity, but this is too strong, since the only invariant holomorphic functions are constant.
+
+A slightly weaker condition might be to require it is harmonic, i.e.\ $\tilde{\Delta} f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 0$. However, the maximum principle then implies $f$ must be constant.
+
+A weaker condition would be to require that $f$ is an \emph{eigenfunction} of $\tilde{\Delta}$, but there is a problem that $\tilde{\Delta}$ is not invariant under $\Gamma(1)$. It turns out we need a slight modification, and take
+\[
+ \Delta = - y^2 \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right).
+\]
+It is a straightforward verification that this is indeed invariant under $\SL_2(\R)$, i.e.
+\[
+ \Delta (f(\gamma(z))) = (\Delta f)(\gamma(z)).
+\]
+In fact, this $\Delta$ is just the Laplacian under the hyperbolic metric.
+
+\begin{defi}[Maass form]\index{Maass form}
+ A \emph{Maass form} on $\SL_2(\Z)$ is an $f \in W_0(\Gamma(1))$ such that
+ \[
+ \Delta f = \lambda f
+ \]
+ for some $\lambda \in \C$.
+\end{defi}
+There are interesting things we can prove about these. Recall that our first examples of modular forms came from Eisenstein series. There are also non-holomorphic Eisenstein series.
+
+\begin{eg}
+ Let $s \in \C$ and $\Re(s) > 1$. We define
+ \[
+ E(z, s) = \frac{1}{2} \sum_{\substack{c, d \in \Z\\ (c, d) = 1}} \frac{y^s}{|cz + d|^{2s}} = \frac{1}{2} \sum_{\gamma = \pm \begin{psmallmatrix}* & *\\c & d\end{psmallmatrix} \in \pm \begin{psmallmatrix}1 & *\\0 & 1\end{psmallmatrix} \backslash \PSL_2(\Z)} (\Im \gamma(z))^s.
+ \]
+ It is easy to see that this converges. From the second definition, we see that $E(z, s)$ is invariant under $\Gamma(1)$, and after some analysis, this is $C^\infty$ and satisfies the growth condition.
+
+ Finally, we check the eigenfunction condition. We can check
+ \[
+ \Delta y^s = -y^2 \frac{\partial^2}{\partial y^2} (y^s) = s(1 - s) y^s.
+ \]
+ But since $\Delta$ is invariant under $\SL_2(\R)$, it follows that we also have
+ \[
+ \Delta E(z, s) = s(1 - s) E(z, s).
+ \]
+\end{eg}
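The eigenvalue computation $\Delta y^s = s(1 - s) y^s$ can be checked numerically with central differences (our own sketch; the test values of $s$ and $y$ are arbitrary choices of ours):

```python
# Verify Delta y^s = s(1-s) y^s at a sample point, where Delta = -y^2 (d^2/dx^2 + d^2/dy^2).
h = 1e-4
s = 0.5 + 3.0j
y = 1.7

def F(x, y):
    return y ** s  # y^s, independent of x

d2x = (F(h, y) - 2 * F(0, y) + F(-h, y)) / h**2        # vanishes: F ignores x
d2y = (F(0, y + h) - 2 * F(0, y) + F(0, y - h)) / h**2  # central difference in y
lap = -y**2 * (d2x + d2y)
assert abs(lap - s * (1 - s) * y ** s) < 1e-4
```

Since $\Delta$ commutes with the $\SL_2(\R)$-action, the same eigenvalue is inherited by each summand $(\Im \gamma(z))^s$, and hence by $E(z, s)$.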
+
+In the case of modular forms, we studied the cusp forms in particular. To study similar phenomena here, we look at the Fourier expansion of $f$. We have the periodicity condition
+\[
+ f(x + iy + 1) = f(x + iy).
+\]
+Since this is not holomorphic, we cannot expand it as a function of $e^{2\pi iz}$. However, we can certainly expand it as a function in $e^{2\pi i x}$. Thus, we write
+\[
+ f(x + iy) = \sum_{n = -\infty}^\infty F_n(y) e^{2\pi i n x}.
+\]
+This looks pretty horrible, but now recall that we had the eigenfunction condition. Then we have
+\[
+ \lambda f = \Delta f = -y^2 \sum_{n= -\infty}^\infty (F_n''(y) - 4 \pi^2 n^2 F_n(y))e^{2\pi i nx}.
+\]
+This tells us $F_n(y)$ satisfies the differential equation
+\[
+ y^2 F''_n(y) + (\lambda - 4\pi^2 n^2 y^2) F_n(y) = 0.\tag{$*$}
+\]
+It isn't terribly important what exactly the details are, but let's look what happens in particular when $n = 0$. Then we have
+\[
+ y^2 F''_0 + \lambda F_0 = 0.
+\]
+This is pretty easy to solve. The general solution is given by
+\[
+ F_0 = Ay^s + B y^{s'},
+\]
+where $s$ and $s' = 1 - s$ are the roots of $s(1 - s) = \lambda$.
+
+What about the other terms? If $y$ is large, then the $\lambda F_n(y)$ term is negligible, and the equation looks like
+\[
+ F''_n(y) = 4\pi^2 n^2 F_n(y).
+\]
+This has two independent solutions, behaving like $e^{\pm 2\pi |n| y}$, and it is in fact true that the true solutions of $(*)$ grow or decay like $e^{\pm 2 \pi |n| y}$ for large $y$. To satisfy the growth condition, we must pick the solution that decays like $e^{-2\pi |n| y}$. We call this $\kappa_{|n|, \lambda}(y)$. These are known as \term{Bessel functions}\index{$\kappa_n$}.
+
+Thus, we find that we have
+\[
+ f(z) = \underbrace{A y^s + B y^{1 - s}}_{\text{``constant term''}} + \sum_{n \not= 0} a_n (f) \kappa_{|n|, \lambda}(y) e^{2\pi i n x}.
+\]
+The exact form isn't really that important. The point is that we can separate out these ``constant terms''. Then it is now not difficult to define cusp forms.
+
+\begin{defi}[Cusp form]\index{cusp form}
+ A Maass form is a \emph{cusp form} if $F_0 = 0$, i.e.\ $A = B = 0$.
+\end{defi}
+
+Similar to modular forms, we have a theorem classifying Maass cusp forms.
+
+\begin{thm}[Maass]
+ Let $S_{\mathrm{Maass}}(\Gamma(1), \lambda)$ be the space of Maass cusp forms with eigenvalue $\lambda$. This space is finite-dimensional, and is non-zero if and only if $\lambda \in \{\lambda_n : n \geq 0\}$, where $\{\lambda_n\}$ is a sequence satisfying
+ \[
+ 0 < \lambda_0 < \lambda_1 < \lambda_2 < \cdots \to \infty.
+ \]
+\end{thm}
+Given this, we can define Hecke operators just as for holomorphic forms (this is easier as $k = 0$), and most of the theory we developed for modular forms carries over.
+
+Even though we have proved all these very nice properties of these cusp forms, it took people a long time to actually come up with an example of one! Nowadays, we can compute these with the aid of computers, and there exist tables of $\lambda$'s and Hecke eigenforms.
+
+Now recall that we had this mysterious operator
+\[
+ L_k^* = -2i y^2 \frac{\partial}{\partial \bar{z}},
+\]
+which had the property that if $f \underset{k}{|} \gamma = f$, then $(L_k^* f)\underset{k - 2}{|} \gamma = (L_k^* f)$.
+
+With a bit of experimentation, we can come up with something that raises the weight.
+\begin{defi}[$R_k^*$]\index{$R_k^*$}
+ Define
+ \[
+ R_k^* = 2i \frac{\partial}{\partial z} + \frac{1}{y}k.
+ \]
+\end{defi}
+
+Now this has a property that
+\begin{prop}
+ If $f\underset{k}{|} \gamma = f$, then $(R_k^* f)\underset{k + 2}{|} \gamma = R_k^* f$.
+\end{prop}
+Note that this time, since we are differentiating with respect to $z$, the $cz + d$ term is affected, and this is where the $\frac{k}{y}$ term comes in.
+
+Suppose we have $f = f_0 \in M_k(\Gamma(1))$. Then we can apply $R$ to it to obtain $f_1 = R_k^* f_0$. We can now try to apply $L_{k + 2}^*$ to it. Then we have
+\[
+ L_{k + 2}^* R_k^* f = -2iy^2 \frac{\partial}{\partial \bar{z}} \left(2i f' + \frac{k}{y} f \right) = -2iy^2 kf \frac{\partial y^{-1}}{\partial \bar{z}} = -kf.
+\]
+So we don't get anything new.
+
+But of course, we can continue in the other direction. We can recursively obtain
+\[
+ f_2 = R_{k + 2}^* f_1,\quad f_3 = R_{k + 4}^* f_2,\quad \ldots.
+\]
+Then we can compute $L_{k + 2n}^*$ and $R_{k + 2n}^*$ of these, and we find that
+\[
+ (R^* L^* - L^* R^*) f_n = (k + 2n) f_n.
+\]
+This looks suspiciously like a representation of the Lie algebra $\sl_2$, where we have operators that raise and lower the weight. The only slightly non-trivial part is that this is an infinite-dimensional representation, as we can keep on raising and (at least in general) never reach $0$.
+
+It turns out it is much easier to make sense of this by replacing functions on $\H$ with functions on $G = \SL_2(\R)$. By the orbit-stabilizer theorem, we can write $\H = G/K$, where
+\[
+ K = \SO(2) = \{g \in \SL_2(\R) : g(i) = i\} = \left\{
+ r_\theta =
+ \begin{pmatrix}
+ \cos \theta & \sin \theta\\
+ -\sin \theta & \cos \theta
+ \end{pmatrix}
+ \right\}.
+\]
+Recall that we defined the function $j(\gamma, z) = cz + d$, where $\gamma = \begin{psmallmatrix} a & b\\c & d\end{psmallmatrix}$. This satisfied the property
+\[
+ j(\gamma \delta, z) = j(\gamma, \delta(z)) j(\delta, z).
+\]
+The main theorem is the following:
+\begin{prop}
+ For $\Gamma \subseteq \Gamma(1)$, there is a bijection between functions $f: \H \to \C$ such that $f\underset{k}{|} \gamma = f$ for all $\gamma \in \Gamma$, and functions $\Phi: G \to \C$ such that $\Phi(\gamma g) = \Phi(g)$ for all $\gamma \in \Gamma$ and $\Phi(g r_\theta) = e^{ik\theta} \Phi(g)$. % fix this
+\end{prop}
+The real reason of this is that such an $f$ is a section of a certain line bundle $\mathcal{L}_k$ on $\Gamma \backslash \H = \Gamma \backslash G / K$. The point is that this line bundle can be made trivial either by pulling to $\H = G/K$, or to $\Gamma \backslash G$. Of course, to actually prove it, we don't need such fancy language. We just need to write down the map.
+
+\begin{proof}
+ Given an $f$, we define
+ \[
+ \Phi(g) = (ci + d)^{-k} f(g(i)) = j(g, i)^{-k} f(g(i)).
+ \]
+ We can then check that
+ \begin{align*}
+ \Phi(\gamma g) &= j(\gamma g, i)^{-k} f(\gamma(g(i)))\\
+ &= j(\gamma g, i)^{-k} j(\gamma, g(i))^k f(g(i))\\
+ &= \Phi(g).
+ \end{align*}
+ On the other hand, using the fact that $r_\theta$ is in the stabilizer of $i$, we obtain
+ \begin{align*}
+ \Phi(g r_\theta) &= j(g r_\theta, i)^{-k} f(g r_\theta(i))\\
+ &= j(g r_\theta, i)^{-k} f(g(i))\\
+ &= j(g, r_\theta(i))^{-k} j(r_\theta, i)^{-k} f(g(i))\\
+ &= \Phi(g) j (r_\theta, i)^{-k}.
+ \end{align*}
+ But $j(r_\theta, i) = -i\sin \theta + \cos \theta = e^{-i\theta}$, and $(e^{-i\theta})^{-k} = e^{ik\theta}$. So we are done.
+\end{proof}
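The two facts the proof rests on, the cocycle identity $j(\gamma\delta, z) = j(\gamma, \delta(z))\, j(\delta, z)$ and $j(r_\theta, i) = e^{-i\theta}$, are easy to sanity-check numerically (our own verification; matrices are stored as tuples $(a, b, c, d)$):

```python
import cmath

def act(m, z):
    a, b, c, d = m
    return (a * z + b) / (c * z + d)   # Moebius action

def j(m, z):
    a, b, c, d = m
    return c * z + d                    # the automorphy factor

def mul(m1, m2):
    a, b, c, d = m1
    e, f, g, h = m2
    return (a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h)

g1 = (2, 1, 1, 1)      # arbitrary determinant-1 matrices
g2 = (1, 3, 0, 1)
z = 0.3 + 1.2j
assert abs(j(mul(g1, g2), z) - j(g1, act(g2, z)) * j(g2, z)) < 1e-12

theta = 0.77
r = (cmath.cos(theta), cmath.sin(theta), -cmath.sin(theta), cmath.cos(theta))
assert abs(j(r, 1j) - cmath.exp(-1j * theta)) < 1e-12   # j(r_theta, i) = e^{-i theta}
assert abs(act(r, 1j) - 1j) < 1e-12                     # r_theta stabilizes i
```

In particular $j(r_\theta, i)^{-k} = e^{ik\theta}$, which is exactly the transformation law for $\Phi$ under $K$.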
+
+What we can do with this is that we can cast everything in the language of these functions on $G$. In particular, what do these lowering and raising operators do? We have our $C^\infty$ function $\Phi: \Gamma \backslash G \to \C$. Now if $X \in \mathfrak{g} = \sl_2(\R)$, then this acts on $\Phi$ by differentiation, since that's how Lie algebras and Lie groups are related. Explicitly, we have
+\[
+ X \Phi = \left.\frac{\d}{\d t}\right|_{t = 0} \Phi (g e^{Xt}).
+\]
+When we compute these things explicitly, we find that, up to conjugacy, $L^*$ and $R^*$ just correspond to the standard elements
+\[
+ X_- =
+ \begin{pmatrix}
+ 0 & 0 \\
+ 1 & 0
+ \end{pmatrix},\quad
+ X_+ =
+ \begin{pmatrix}
+ 0 & 1\\
+ 0 & 0
+ \end{pmatrix} \in \sl_2,
+\]
+and we have
+\[
+ [X_+, X_-] = 2H,\quad H =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix}.
+\]
+The weight $k$ then just corresponds to $H$.
+
+What does the Laplacian correspond to? It corresponds to a certain product of these operators, the \term{Casimir operator}, given by
+\[
+ \Omega = X_+ X_- + X_- X_+ + \frac{1}{2}H^2.
+\]
+This leads to the notion of automorphic forms.
+
+\begin{defi}[Automorphic form]\index{automorphic form}
+ An \emph{automorphic form} on $\Gamma$ is a $C^\infty$ function $\Phi: \Gamma \backslash G \to \C$ with $\Phi(g r_\theta) = e^{ik\theta} \Phi(g)$ for some $k \in \Z$, such that
+ \[
+ \Omega \Phi = \lambda \Phi
+ \]
+ for some $\lambda \in \C$, and which satisfies the growth condition
+ \[
+ \left|\Phi
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}\right| \leq \text{polynomial in }a, b, c, d.
+ \]
+\end{defi}
+
+The condition for $\Phi$ to be a cusp form is then
+\[
+ \int_0^1 \Phi \left(
+ \begin{pmatrix}
+ 1 & x\\
+ 0 & 1
+ \end{pmatrix}g\right) \;\d x = 0.
+\]
+
+These things turn out to be exactly what we've looked at before.
+\begin{prop}
+ The set of cuspidal automorphic forms is in bijection with the representations of $\sl_2$ generated by holomorphic cusp forms $f$ and their conjugates $\bar{f}$, and by Maass cusp forms.
+
+ The holomorphic cusp forms $f$ generate representations of $\sl_2$ with lowest weight; the conjugates of holomorphic cusp forms generate those with highest weight, while the Maass forms generate the rest.
+\end{prop}
+
+This is now completely susceptible to generalization. We can replace $G$ with any semi-simple Lie group (e.g.\ $\SL_n(\R)$, $\Sp_{2n}(\R)$), and $\Gamma$ by some arithmetic subgroup. This leads to the general theory of automorphic forms, and is one half of the Langlands programme.
+
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/positivity_in_algebraic_geometry.tex b/books/cam/III_L/positivity_in_algebraic_geometry.tex
new file mode 100644
index 0000000000000000000000000000000000000000..adae064a8a5250500a42cd59c31b86393a7dcb6b
--- /dev/null
+++ b/books/cam/III_L/positivity_in_algebraic_geometry.tex
@@ -0,0 +1,1885 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2018}
+\def\nlecturer {S.\ Svaldi}
+\def\ncourse {Positivity in Algebraic Geometry}
+
+\input{header}
+
+\newcommand\A{\mathbb{A}}
+\newcommand\val{\mathrm{val}}
+\renewcommand\div{\mathrm{div}}
+\newcommand\Cl{\mathrm{Cl}}
+\newcommand\WDiv{\mathrm{WDiv}}
+\newcommand\Div{\mathrm{Div}}
+\newcommand\CaDiv{\mathrm{CaDiv}}
+\newcommand\Pic{\mathrm{Pic}}
+\newcommand\Num{\mathrm{Num}}
+\newcommand\Amp{\mathrm{Amp}}
+\newcommand\Nef{\mathrm{Nef}}
+\newcommand\NS{\mathrm{NS}}
+\newcommand\NE{\mathrm{NE}}
+\newcommand\Bl{\mathrm{Bl}}
+\DeclareMathOperator\red{red}
+\DeclareMathOperator\Proj{Proj}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+This class aims at giving an introduction to the theory of divisors, linear systems and their positivity properties on projective algebraic varieties.
+
+The first part of the class will be dedicated to introducing the basic notions and results regarding these objects and special attention will be devoted to discussing examples in the case of curves and surfaces.
+
+In the second part, the course will cover classical results from the theory of divisors and linear systems and their applications to the study of the geometry of algebraic varieties.
+
+If time allows and based on the interests of the participants, there are a number of more advanced topics that could possibly be covered: Reider's Theorem for surfaces, geometry of linear systems on higher dimensional varieties, multiplier ideal sheaves and invariance of plurigenera, higher dimensional birational geometry.
+
+\subsubsection*{Pre-requisites}
+The minimum requirement for those students wishing to enroll in this class is their knowledge of basic concepts from the Algebraic Geometry Part 3 course, i.e.\ roughly Chapters 2 and 3 of Hartshorne's Algebraic Geometry.
+
+Familiarity with the basic concepts of the geometry of algebraic varieties of dimension 1 and 2 --- e.g.\ as covered in the preliminary sections of Chapters 4 and 5 of Hartshorne's Algebraic Geometry --- would be useful but will not be assumed --- besides what was already covered in the Michaelmas lectures.
+
Students should also have some familiarity with concepts covered in the Algebraic Topology Part 3 course such as cohomology, duality and characteristic classes.
+}
+\tableofcontents
+
+%\section{Introduction}
+%Fix a field $K$. The first problem we want to look at is the following:
+%\begin{problem}
+% Classify all finitely-generated field extension $L/K$.
+%\end{problem}
+%A closely related problem is
+%\begin{problem}
+% Classify all towers of field extensions $L/M/K$.
+%\end{problem}
+%In general, we don't want to consider finite extensions --- this falls within the scope of Galois theory instead.
+%
+%In general, given a finite extension $L/K$, we can factorize it into a tower $L/M/K$, where $M/K$ is purely transcendental, i.e.\ $M \cong K(x_1, \ldots, x_n)$ is a field of rational functions over $K$.
+%
+%A large class of examples comes from \term{function fields} of algebraic varieties, $K(X)$.
+%\begin{eg}
+% Recall that if $X$ is affine, then $K(X)$ is the field of fractions of the ring of functions of $X$, i.e.\ $Q(K[X])$.
+%\end{eg}
+%
+%\begin{question}
+% Are these all the examples of finitely-generated field extensions $L/K$?
+%\end{question}
+%
+%
+%\begin{lemma}
+% Let $\Char K = 0$. If $L/K$ is finitely-generated and not finite, then there exists an algebraic variety $X/K$ such that $K(X) \cong L$.
+%\end{lemma}
+%Note that the condition is satisfied in particular if $K$ is algebraically closed.
+%
+%\begin{proof}
+% We factor $L/M/K$, where $M/K$ is purely transcendental, say $M = K(x_1, \ldots, x_n) = K(\A^n)$. Since $\Char(K) = 0$, we can use the primitive element theorem to get that $L = M(\alpha)$ for some $\alpha \in L$. Then if $f$ is the minimal polynomial for $\alpha$, then
+% \[
+% L \cong Q(K[x_1, \ldots, x_n, x]/(f(x))).\qedhere
+% \]
+%\end{proof}
+%
+%
+%now suppose $\varphi: x \to y$ is a dominant birational map, % make birational
+%i.e.\ $\overline{\varphi(X)} = Y$. Then this induces a map $\varphi^* : K(Y) \to K(X)$.
+%
+%Indeed, an element of $K(Y)$ is just a rational function $Y \to \A^1_K$, and pre-composing with $\varphi$ gives the desired function. The dominance ensures the composition is well-defined.
+%
+%Now if every embedding of field extensions arise this way, then we have reduced our problem to studying algebraic varieties over $K$ and their rational dominant maps. This is indeed the case.
+%\begin{thm}
+% Let $K$ be a field with $\Char K = 0$. Then there is an equivalence of categories between
+% \[
+% \left\{ \parbox{4.8cm}{\centering algebraic varieties over $K$\\ rational dominant morphisms}\right\} \longleftrightarrow \left\{\parbox{5.3cm}{\centering infinite f.g.\ field extensions over $K$\\field embeddings}\right\}
+% \]
+%\end{thm}
+%
+%\begin{proof}
+% We define a functor from the left category to the right by sending a variety $X$ to the field of rational functions on $X$, $K(X)$, and morphisms are as above. We have already seen that this is essentially surjective. To show it is fully faithful, suppose $X, Y$ are algebraic varieties over $K$, and $\psi: K(Y) \hookrightarrow K(X)$. We want to show that $\psi = \varphi^*$ for some rational dominant map $X \to Y$.
+%
+% We can assume that $X$ and $Y$ are affine, since every variety is birationally equivalent to any Zariski open subset. Then $K(X) = Q(K[X])$ and $K(Y) = Q(K[Y])$. So in particular, we have an embedding $K[Y] \hookrightarrow K(X)$. Since $K[Y]$ is finitely-generated, we can produce a factorization
+% \[
+% \begin{tikzcd}
+% K[Y] \ar[r, hook] \ar[dr, hook] & K(X) \\
+% & K[X]_{s_1, \ldots, s_n} \ar[u, hook]
+% \end{tikzcd}
+% \]
+% for some $s_1, \ldots, s_n$. But then $K[X]_{s_1, \ldots, s_n} = K[U]$ for some open $U \subseteq X$. Then the map $K[Y] \to K[U]$ determines a map $U \to Y$, hence a rational map $X \to Y$. The injectivity of the map $K[Y] \to K[U]$ corresponds to the fact that the induced map $U \to Y$ is dominant. So we are done.
+%\end{proof}
+%
+\section{Divisors}
+\subsection{Projective embeddings}
+Our results here will be stated for rather general schemes, since we can, but at the end, we are only interested in concrete algebraic varieties.
+
We are interested in the following problem --- given a scheme $X$, can we embed it in projective space? The first observation is that $\P^n$ comes with a very special line bundle $\mathcal{O}(1)$, and every embedding $X \hookrightarrow \P^n$ pulls this back to a corresponding invertible sheaf $\mathcal{L}$ on $X$.
+
+The key property of $\mathcal{O}(1)$ is that it has \emph{global} sections $x_0, \ldots, x_n$ which generate $\mathcal{O}(1)$ as an $\mathcal{O}_X$-module.
+
+\begin{defi}[Generating section]\index{generating section}
+ Let $X$ be a scheme, and $\mathcal{F}$ a sheaf of $\mathcal{O}_X$-modules. Let $s_0, \ldots, s_n \in H^0(X, \mathcal{F})$ be sections. We say the sections \emph{generate} $\mathcal{F}$ if the natural map
+ \[
  \bigoplus_{i = 0}^{n} \mathcal{O}_X \to \mathcal{F}
+ \]
+ induced by the $s_i$ is a surjective map of $\mathcal{O}_X$-modules.
+\end{defi}
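A standard example:
\begin{eg}
 On $\P^1$ with homogeneous coordinates $x_0, x_1$, the sections $x_0, x_1 \in H^0(\P^1, \mathcal{O}(1))$ generate $\mathcal{O}(1)$, since at every point at least one of them does not vanish. On the other hand, $x_0$ alone does not generate $\mathcal{O}(1)$: the induced map $\mathcal{O}_{\P^1} \to \mathcal{O}(1)$ fails to be surjective on the stalk at $[0:1]$, where $x_0$ vanishes.
\end{eg}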
+
+The generating sections are preserved under pullbacks. So if $X$ is embedded into $\P^n$, then it should have a corresponding line bundle generated by $n + 1$ global sections. More generally, if there is any map to $\P^n$ at all, we can pull back a corresponding bundle. Indeed, we have the following theorem:
+\begin{thm}
+ Let $A$ be any ring, and $X$ a scheme over $A$.
+ \begin{enumerate}
+ \item If $\varphi: X \to \P^n$ is a morphism over $A$, then $\varphi^* \mathcal{O}_{\P^n}(1)$ is an invertible sheaf on $X$, generated by the sections $\varphi^* x_0, \ldots, \varphi^* x_n \in H^0(X, \varphi^* \mathcal{O}_{\P^n}(1))$.
+ \item If $\mathcal{L}$ is an invertible sheaf on $X$, and if $s_0, \ldots, s_n \in H^0(X, \mathcal{L})$ which generate $\mathcal{L}$, then there exists a unique morphism $\varphi: X \to \P^n$ such that $\varphi^* \mathcal{O}(1) \cong \mathcal{L}$ and $\varphi^* x_i = s_i$.
+ \end{enumerate}
+\end{thm}
+
+This theorem highlights the importance of studying line bundles and their sections, and in some sense, understanding these is the whole focus of the course.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
 \item The pullback of an invertible sheaf is invertible, and the pullbacks of $x_0, \ldots, x_n$ generate $\varphi^* \mathcal{O}_{\P^n}(1)$.
+
+ \item In short, we map $x \in X$ to $[s_0(x): \cdots : s_n(x)] \in \P^n$.
+
+ In more detail, define
+ \[
+ X_{s_i} = \{p \in X: s_i \not\in \mathfrak{m}_p \mathcal{L}_p\}.
+ \]
+ This is a Zariski open set, and $s_i$ is invertible on $X_{s_i}$. Thus there is a dual section $s_i^\vee \in \mathcal{L}^{\vee}$ such that $s_i \otimes s_i^\vee \in \mathcal{L} \otimes \mathcal{L}^\vee \cong \mathcal{O}_X$ is equal to $1$. Define the map $X_{s_i} \to \A^n$ by the map
+ \begin{align*}
    K[\A^n] &\to H^0(X_{s_i}, \mathcal{O}_{X_{s_i}})\\
    y_j &\mapsto s_j \otimes s_i^\vee.
+ \end{align*}
+ Since the $s_i$ generate, they cannot simultaneously vanish on a point. So $X = \bigcup X_{s_i}$. Identifying $\A^n$ as the chart of $\P^n$ where $x_i \not= 0$, this defines the desired map $X \to \P^n$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+The theorem tells us we get a map from $X$ to $\P^n$. However, it says nothing about how nice these maps are. In particular, it says nothing about whether or not we get an embedding.
+
+\begin{defi}[Very ample sheaf]\index{very ample sheaf}\index{sheaf!very ample}
+ Let $X$ be an algebraic variety over $K$, and $\mathcal{L}$ be an invertible sheaf. We say that $\mathcal{L}$ is very ample if there is a closed immersion $\varphi: X \to \P^n$ such that $\varphi^* \mathcal{O}_{\P^n}(1) \cong \mathcal{L}$.
+\end{defi}
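A standard family of examples:
\begin{eg}
 $\mathcal{O}_{\P^n}(1)$ is very ample via the identity map. More generally, $\mathcal{O}_{\P^n}(d)$ is very ample for all $d \geq 1$: the degree $d$ monomials in $x_0, \ldots, x_n$ define the Veronese embedding $\P^n \hookrightarrow \P^N$, where $N = \binom{n + d}{d} - 1$, under which $\mathcal{O}_{\P^N}(1)$ pulls back to $\mathcal{O}_{\P^n}(d)$.
\end{eg}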
It would be convenient if we had a good way of identifying when $\mathcal{L}$ is very ample. In this section, we will prove some formal criteria for very ampleness, which are convenient for proving things but not very convenient for actually checking if a line bundle is very ample. In specific cases, such as for curves, we can obtain rather more concrete and usable criteria.
+
So how can we understand when a map $X \hookrightarrow \P^n$ is an embedding? If it is an embedding, then in particular it is injective. So given any two points in $X$, there is a hyperplane in $\P^n$ that passes through one but not the other. But being an embedding means something more. We want the differential of the map to be injective as well. This boils down to the following conditions:
+
+\begin{prop}
+ Let $K = \bar{K}$, and $X$ a projective variety over $K$. Let $\mathcal{L}$ be an invertible sheaf on $X$, and $s_0, \ldots, s_n \in H^0(X, \mathcal{L})$ generating sections. Write $V = \bra s_0, \ldots, s_n\ket$ for the linear span. Then the associated map $\varphi: X \to \P^n$ is a closed embedding iff
+ \begin{enumerate}
 \item For every pair of distinct closed points $p \not= q \in X$, there exists $s_{p, q} \in V$ such that $s_{p, q} \in \mathfrak{m}_p \mathcal{L}_p$ but $s_{p, q} \not \in \mathfrak{m}_q \mathcal{L}_q$.
+ \item For every closed point $p \in X$, the set $\{s \in V \mid s \in \mathfrak{m}_p \mathcal{L}_p\}$ spans the vector space $\mathfrak{m}_p \mathcal{L}_p /\mathfrak{m}^2_p \mathcal{L}_p$.
+ \end{enumerate}
+\end{prop}
+
+\begin{defi}[Separate points and tangent vectors]
+ With the hypothesis of the proposition, we say that
+ \begin{itemize}
+ \item elements of $V$ \term{separate points} if $V$ satisfies (i).
+ \item elements of $V$ \term{separate tangent vectors} if $V$ satisfies (ii).
+ \end{itemize}
+\end{defi}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item[$(\Rightarrow)$] Suppose $\phi$ is a closed immersion. Then it is injective on points. So suppose $p \not= q$ are (closed) points. Then there is some hyperplane $H_{p, q}$ in $\P^n$ passing through $p$ but not $q$. The hyperplane $H_{p, q}$ is the vanishing locus of a global section of $\mathcal{O}(1)$. Let $s_{p, q} \in V \subseteq H^0(X, \mathcal{L})$ be the pullback of this section. Then $s_{p, q} \in \mathfrak{m}_p \mathcal{L}_p$ and $s_{p, q} \not \in \mathfrak{m}_q \mathcal{L}_q$. So (i) is satisfied.
+
+ To see (ii), we restrict to the affine patch containing $p$, and $X$ is a closed subvariety of $\A^n$. The result is then clear since $\mathfrak{m}_p\mathcal{L}_p/\mathfrak{m}_p^2\mathcal{L}_p$ is exactly the span of $s_0, \ldots, s_n$. We used $K = \bar{K}$ to know what the closed points of $\P^n$ look like.
+
+ \item[$(\Leftarrow)$] We first show that $\varphi$ is injective on closed points. For any $p \not= q \in X$, write the given $s_{p, q}$ as
+ \[
+ s_{p, q} = \sum \lambda_i s_i = \sum \lambda_i \varphi^* x_i = \varphi^* \sum \lambda_i x_i
+ \]
 for some $\lambda_i \in K$. So we can take $H_{p, q}$ to be given by the vanishing set of $\sum \lambda_i x_i$, and so it is injective on closed points. It follows that it is also injective on schematic points. Since $X$ is proper, so is $\varphi$, and in particular $\varphi$ is a homeomorphism onto its image.
+
+ To show that $\varphi$ is in fact a closed immersion, we need to show that $\mathcal{O}_{\P^n} \to \varphi_* \mathcal{O}_X$ is surjective. As before, it is enough to prove that it holds at the level of stalks over closed points. To show this, we observe that $\mathcal{L}_p$ is trivial, so $\mathfrak{m}_p\mathcal{L}_p/\mathfrak{m}^2_p\mathcal{L}_p \cong \mathfrak{m}_p/\mathfrak{m}^2_p$ (unnaturally). We then apply the following lemma:
+ \begin{lemma}
+ Let $f: A \to B$ be a local morphism of local rings such that
+ \begin{itemize}
+ \item $A/\mathfrak{m}_A \to B/\mathfrak{m}_B$ is an isomorphism;
+ \item $\mathfrak{m}_A \to \mathfrak{m}_B/\mathfrak{m}_B^2$ is surjective; and
+ \item $B$ is a finitely-generated $A$-module.
+ \end{itemize}
+ Then $f$ is surjective.\fakeqed
+ \end{lemma}
+ To check the first condition, note that we have
+ \[
+ \frac{\mathcal{O}_{p, \P^n}}{\mathfrak{m}_{p, \P^n}} \cong \frac{\mathcal{O}_{p, X}}{\mathfrak{m}_{p, X}} \cong K.
+ \]
+ Now since $\mathfrak{m}_{p, \P^n}$ is generated by $x_0, \ldots, x_n$, the second condition is the same as saying
+ \[
+ \mathfrak{m}_{p, \P^n} \to \frac{\mathfrak{m}_{p, X}}{\mathfrak{m}_{p, X}^2}
+ \]
+ is surjective. The last part is immediate.\qedhere
+ \end{itemize}
+\end{proof}
+Unsurprisingly, this is not a very pleasant hypothesis to check, since it requires us to really understand the structure of $V$. In general, given an invertible sheaf $\mathcal{L}$, it is unlikely that we can concretely understand the space of sections. It would be nice if there is some simpler criterion to check if sheaves are ample.
+
+One convenient place to start is the following theorem of Serre:
+\begin{thm}[Serre]
+ Let $X$ be a projective scheme over a Noetherian ring $A$, $\mathcal{L}$ be a very ample invertible sheaf, and $\mathcal{F}$ a coherent $\mathcal{O}_X$-module. Then there exists a positive integer $n_0 = n_0(\mathcal{F}, \mathcal{L})$ such that for all $n \geq n_0$, the twist $\mathcal{F} \otimes \mathcal{L}^n$ is generated by global sections.\fakeqed
+\end{thm}
+The proof of this is straightforward and will be omitted. The idea is that tensoring with $\mathcal{L}^n$ lets us clear denominators, and once we have cleared all the denominators of the (finitely many) generators of $\mathcal{F}$, the resulting sheaf $\mathcal{F} \otimes \mathcal{L}^n$ will be generated by global sections.
+
+We can weaken the condition of very ampleness to only require the condition of this theorem to hold.
+\begin{defi}[Ample sheaf]\index{ample sheaf}
+ Let $X$ be a Noetherian scheme over $A$, and $\mathcal{L}$ an invertible sheaf over $X$. We say $\mathcal{L}$ is ample iff for any coherent $\mathcal{O}_X$-module $\mathcal{F}$, there is an $n_0$ such that for all $n \geq n_0$, the sheaf $\mathcal{F} \otimes \mathcal{L}^n$ is generated by global sections.
+\end{defi}
+
+While this seems like a rather weak condition, it is actually not too bad. First of all, by taking $\mathcal{F}$ to be $\mathcal{O}_X$, we can find some $\mathcal{L}^n$ that is generated by global sections. So at least it gives some map to $\P^n$. In fact, another theorem of Serre tells us this gives us an embedding.
+
+\begin{thm}[Serre]
+ Let $X$ be a scheme of finite type over a Noetherian ring $A$, and $\mathcal{L}$ an invertible sheaf on $X$. Then $\mathcal{L}$ is ample iff there exists $m > 0$ such that $\mathcal{L}^m$ is very ample.
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
 \item[($\Leftarrow$)] Let $\mathcal{L}^m$ be very ample, and $\mathcal{F}$ a coherent sheaf. By Serre's theorem, there exists $j_0$ such that for all $j \geq j_0$, the sheaves
+ \[
+ \mathcal{F} \otimes \mathcal{L}^{mj}, (\mathcal{F} \otimes \mathcal{L}) \otimes \mathcal{L}^{mj}, \ldots, (\mathcal{F} \otimes \mathcal{L}^{m - 1}) \otimes \mathcal{L}^{mj}
+ \]
+ are all globally generated. So $\mathcal{F} \otimes \mathcal{L}^n$ is globally generated for $n \geq mj_0$.
 \item[($\Rightarrow$)] Suppose $\mathcal{L}$ is ample. Then $\mathcal{L}^m$ is globally generated for $m$ sufficiently large. We claim that there exist $t_1, \ldots, t_k \in H^0(X, \mathcal{L}^n)$ (for some $n$) such that $\mathcal{L}|_{X_{t_i}}$ are all trivial (i.e.\ isomorphic to $\mathcal{O}_{X_{t_i}}$), and $X = \bigcup X_{t_i}$.
+
 By compactness, it suffices to show that for each $p \in X$, there is some $t \in H^0(X, \mathcal{L}^n)$ (for some $n$) such that $p \in X_t$ and $\mathcal{L}$ is trivial on $X_t$. First of all, since $\mathcal{L}$ is locally free by definition, we can find an open affine $U$ containing $p$ such that $\mathcal{L}|_U$ is trivial.
+
 Thus, it suffices to produce a section $t$ that vanishes on $Y = X - U$ but not at $p$. Then $p \in X_t \subseteq U$ and hence $\mathcal{L}$ is trivial on $X_t$. Vanishing on $Y$ is the same as belonging to the ideal sheaf $\mathcal{I}_Y$. Since $\mathcal{I}_Y$ is coherent, ampleness implies there is some large $n$ such that $\mathcal{I}_Y \otimes \mathcal{L}^n$ is generated by global sections. In particular, since $\mathcal{I}_Y \otimes \mathcal{L}^n$ doesn't vanish at $p$, we can find some $t \in \Gamma(X, \mathcal{I}_Y \otimes \mathcal{L}^n)$ such that $t \not \in \mathfrak{m}_p (\mathcal{I}_Y \otimes \mathcal{L}^n)_p$. Since $\mathcal{I}_Y$ is a subsheaf of $\mathcal{O}_X$, we can view $t$ as a section of $\mathcal{L}^n$, and this $t$ works.
+
 Now given the $X_{t_i}$, for each fixed $i$, we let $\{b_{ij}\}$ generate $\Gamma(X_{t_i}, \mathcal{O}_{X_{t_i}})$ as an $A$-algebra. Then for large $n$, $c_{ij} = t_i^n b_{ij}$ extends to a global section $c_{ij} \in \Gamma(X, \mathcal{L}^n)$ (by clearing denominators). We can pick an $n$ large enough to work for all $b_{ij}$. Then we use $\{t_i^n, c_{ij}\}$ as our generating sections to construct a morphism to $\P^N$, and let $\{x_i, x_{ij}\}$ be the corresponding coordinates. Observe that $\bigcup X_{t_i} = X$ implies the $t_i^n$ already generate $\mathcal{L}^n$. Now each $X_{t_i}$ gets mapped to $U_i \subseteq \P^N$, the chart where $x_i \not= 0$. The map $\mathcal{O}_{U_i} \to \varphi_* \mathcal{O}_{X_{t_i}}$ corresponds to the map
+ \[
+ A[y_i, y_{ij}] \to \mathcal{O}_{X_{t_i}},
+ \]
+ where $y_{ij}$ is mapped to $c_{ij}/t_i^n = b_{ij}$. So by assumption, this is surjective, and so we have a closed embedding.\qedhere
+ \end{itemize}
+\end{proof}
+
+From this, we also see that
+\begin{prop}
 Let $\mathcal{L}$ be an invertible sheaf over $X$ (which is itself a projective variety over $K$). Then the following are equivalent:
+ \begin{enumerate}
+ \item $\mathcal{L}$ is ample.
+ \item $\mathcal{L}^m$ is ample for all $m > 0$.
+ \item $\mathcal{L}^m$ is ample for some $m > 0$.\fakeqed
+ \end{enumerate}
+\end{prop}
+
+We will also frequently make use of the following theorem:
+\begin{thm}[Serre]
 Let $X$ be a projective scheme over a Noetherian ring $A$, and let $\mathcal{L}$ be very ample on $X$. Let $\mathcal{F}$ be a coherent sheaf. Then
+ \begin{enumerate}
+ \item For all $i \geq 0$ and $n \in \N$, $H^i(\mathcal{F} \otimes \mathcal{L}^n)$ is a finitely-generated $A$-module.
+ \item There exists $n_0 \in \N$ such that for all $n \geq n_0$, $H^i(\mathcal{F} \otimes \mathcal{L}^n) = 0$ for all $i > 0$.\fakeqed
+ \end{enumerate}
+\end{thm}
+The proof is exactly the same as the case of $\mathcal{O}(1)$ on $\P^n$.
+
+As before, this theorem still holds for ample sheaves, and in fact characterizes them.
+\begin{thm}
+ Let $X$ be a proper scheme over a Noetherian ring $A$, and $\mathcal{L}$ an invertible sheaf. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\mathcal{L}$ is ample.
 \item For all coherent $\mathcal{F}$ on $X$, there exists $n_0 \in \N$ such that for all $n \geq n_0$, we have $H^i(\mathcal{F} \otimes \mathcal{L}^n) = 0$ for all $i > 0$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ Proving (i) $\Rightarrow$ (ii) is the same as the first part of the theorem last time.
+
+ To prove (ii) $\Rightarrow$ (i), fix a point $x \in X$, and consider the sequence
+ \[
+ 0 \to \mathfrak{m}_x \mathcal{F} \to \mathcal{F} \to \mathcal{F}_x \to 0.
+ \]
+ We twist by $\mathcal{L}^n$, where $n$ is sufficiently big, and take cohomology. Then we have a long exact sequence
+ \[
+ 0 \to H^0(\mathfrak{m}_x \mathcal{F}(n)) \to H^0(\mathcal{F}(n)) \to H^0(\mathcal{F}_x(n)) \to H^1(\mathfrak{m}_x \mathcal{F}(n)) = 0.
+ \]
 In particular, the map $H^0(\mathcal{F}(n)) \to H^0(\mathcal{F}_x(n))$ is surjective. This means that at $x$, the sheaf $\mathcal{F}(n)$ is globally generated. Then by compactness, there is a single $n$ large enough such that $\mathcal{F}(n)$ is globally generated everywhere.
+\end{proof}
+
+\subsection{Weil divisors}
+If $X$ is a sufficiently nice Noetherian scheme, $\mathcal{L}$ is a line bundle over $X$, and $s \in H^0(X, \mathcal{L})$, then the vanishing locus of $s$ is a codimension-$1$ subscheme. More generally, if $s$ is a rational section, then the zeroes and poles form codimension $1$ subschemes. Thus, to understand line bundles, a good place to start would be to understand these subschemes. We will see that for suitably nice schemes, we can recover $\mathcal{L}$ completely from this information.
+
+For the theory to work out nicely, we need to assume the following technical condition:
+\begin{defi}[Regular in codimension 1]\index{regular in codimension 1}
+ Let $X$ be a Noetherian scheme. We say $X$ is \emph{regular in codimension 1} if every local ring $\mathcal{O}_x$ of dimension $1$ is regular.
+\end{defi}
+The key property of such schemes we will make use of is that if $Y \subseteq X$ is a codimension $1$ integral subscheme, then the local ring $\mathcal{O}_{X, Y}$ is a DVR. Then there is a valuation
+\[
+ \val_Y: \mathcal{O}_{X, Y} \to \Z,
+\]
+which tells us the order of vanishing/poles of our function along $Y$.
+
+\begin{defi}[Weil divisor]\index{Weil divisor}
+ Let $X$ be a Noetherian scheme, regular in codimension $1$. A \term{prime divisor} is a codimension $1$ integral subscheme of $X$. A \emph{Weil divisor} is a formal sum
+ \[
+ D = \sum a_i Y_i,
+ \]
+ where the $a_i \in \Z$ and $Y_i$ are prime divisors. We write $\WDiv(X)$ for the group of Weil divisors of $X$.
+
+ For $K$ a field, a \term{Weil $K$-divisor} is the same where we allow $a_i \in K$.
+\end{defi}
+
+\begin{defi}[Effective divisor]\index{effective divisor}
+ We say a Weil divisor is \emph{effective} if $a_i \geq 0$ for all $i$. We write $D \geq 0$.
+\end{defi}
+
+\begin{defi}[Principal divisor]\index{principal divisor}\index{$\mathrm{div}(f)$}
+ If $f \in K(X)$, then we define the \emph{principal divisor}
+ \[
+ \div(f) = \sum_{Y} \val_Y(f) \cdot Y.
+ \]
+\end{defi}
One can show that $\val_Y(f)$ is non-zero for only finitely many $Y$'s, so this is a genuine divisor. To see this, note that there is always a Zariski open $U$ such that $f|_U$ is invertible, so $\val_Y(f) \not= 0$ implies $Y \subseteq X \setminus U$. Since $X$ is Noetherian, $X \setminus U$ can only contain finitely many codimension $1$ subschemes.
+
+\begin{defi}[Support]\index{support}
+ The \emph{support} of $D = \sum a_i Y_i$ is
+ \[
+ \supp(D) = \bigcup Y_i.
+ \]
+\end{defi}
+
+Observe that $\div(fg) = \div(f) + \div(g)$. So the principal divisors form a subgroup of the Weil divisors.
+
+\begin{defi}[Class group]\index{class group}
+ The \emph{class group} of $X$, $\Cl(X)$, is the group of Weil divisors quotiented out by the principal divisors.
+
+ We say Weil divisors $D, D'$ are \term{linearly equivalent} if $D - D'$ is principal, and we write $D \sim D'$.
+\end{defi}
+
+\begin{eg}
+ Take $X = \A^1_K$, and $f = \frac{x^3}{x + 1} \in K(X)$. Then
+ \[
+ \div(f) = 3[0] - [-1].
+ \]
+\end{eg}
+
+A useful result for the future is the following:
+\begin{thm}[Hartog's lemma]\index{Hartog's lemma}
 Let $X$ be normal, and $f \in \mathcal{O}(X \setminus V)$ for some closed $V \subseteq X$ of codimension $\geq 2$. Then $f \in \mathcal{O}_X$. Thus, $\div(f) = 0$ implies $f \in \mathcal{O}_X^\times$.
+\end{thm}
+
+\subsection{Cartier divisors}
+Weil divisors do not behave too well on weirder schemes. A better alternative is Cartier divisors. Let $\mathcal{L}$ be a line bundle, and $s$ a rational section of $\mathcal{L}$, i.e.\ $s$ is a section of $\mathcal{L}|_U$ for some open $U \subseteq X$. Given such an $s$, we can define
+\[
+ \div(s) = \sum_Y \val_Y(s) \cdot Y.
+\]
To make sense of $\val_Y(s)$, for a fixed prime divisor $Y$, pick an open $W$ meeting $Y$ such that $W \cap U \not= \emptyset$ and $\mathcal{L}|_W$ is trivial. Then we can make sense of $\val_Y(s)$ using the trivialization. It is clear from Hartog's lemma that
+\begin{prop}
+ If $X$ is normal, then
+ \[
+ \div: \{\text{rational sections of }\mathcal{L}\} \to \WDiv(X).
+ \]
+ is well-defined, and two sections have the same image iff they differ by an element of $\mathcal{O}_X^*$.
+\end{prop}
+
+\begin{cor}
+ If $X$ is normal and proper, then there is a map
+ \[
 \div: \{\text{rational sections of }\mathcal{L}\}/K^* \to \WDiv(X).
+ \]
+\end{cor}
+
+\begin{proof}
+ Properness implies $\mathcal{O}_X^* = K^*$.
+\end{proof}
+
+\begin{eg}
+ Take $X = \P^1_K$, and $s = \frac{X^2}{X + Y} \in H^0(\mathcal{O}(1))$, where $X, Y$ are our homogeneous coordinates. Then
+ \[
+ \div(s) = 2[0:1] - [1:-1].
+ \]
+\end{eg}
+
This lets us go from line bundles to divisors. To go the other direction, let $X$ be a normal Noetherian scheme. Fix a Weil divisor $D$. We define the sheaf $\mathcal{O}_X(D)$\index{$\mathcal{O}_X(D)$} by setting, for all $U \subseteq X$ open,
+\[
 \mathcal{O}_X(D)(U) = \{f \in K(X) : (\div(f) + D) |_U \geq 0\}.
+\]
+\begin{prop}
 $\mathcal{O}_X(D)$ is a rank $1$ quasicoherent $\mathcal{O}_X$-module.\fakeqed
+\end{prop}
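A standard example of these sheaves:
\begin{eg}
 Take $X = \P^1$ with affine coordinate $x$, and $D = [0]$. Then the global sections of $\mathcal{O}_X(D)$ are the rational functions with at worst a simple pole at $0$ and no other poles, i.e.\ the span of $1$ and $\frac{1}{x}$. So $h^0(\mathcal{O}_X(D)) = 2$, as expected from $\mathcal{O}_{\P^1}([0]) \cong \mathcal{O}(1)$.
\end{eg}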
+
+In general, $\mathcal{O}_X(D)$ need not be a line bundle. If we want to prove that it is, then we very quickly see that we need the following condition:
+\begin{defi}[Locally principal]\index{locally principal}
+ Let $D$ be a Weil divisor on $X$. Fix $x \in X$. Then $D$ is locally principal at $x$ if there exists an open set $U \subseteq X$ containing $x$ such that $D|_U = \div(f)|_U$ for some $f \in K(X)$.
+\end{defi}
+
+\begin{prop}
+ If $D$ is locally principal at every point $x$, then $\mathcal{O}_X(D)$ is an invertible sheaf.
+\end{prop}
+\begin{proof}
+ If $U \subseteq X$ is such that $D|_U = \div(f)|_U$, then there is an isomorphism
+ \begin{align*}
+ \mathcal{O}_X|_U &\to \mathcal{O}_X(D) |_U\\
+ g &\mapsto g/f.\qedhere
+ \end{align*}
+\end{proof}
+\begin{defi}[Cartier divisor]\index{Cartier divisor}
+ A \emph{Cartier divisor} is a locally principal Weil divisor.
+\end{defi}
+
+By checking locally, we see that
+\begin{prop}
+ If $D_1, D_2$ are Cartier divisors, then
+ \begin{enumerate}
+ \item $\mathcal{O}_X(D_1 + D_2) = \mathcal{O}_X(D_1) \otimes \mathcal{O}_X(D_2)$.
+ \item $\mathcal{O}_X(-D) \cong \mathcal{O}_X(D)^{\vee}$.
+ \item If $f \in K(X)$, then $\mathcal{O}_X(\div(f)) \cong \mathcal{O}_X$.\fakeqed
+ \end{enumerate}
+\end{prop}
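These identities can be seen concretely on $\P^1$:
\begin{eg}
 On $\P^1$ with homogeneous coordinates $X, Y$, the rational function $x = X/Y$ has $\div(x) = [0:1] - [1:0]$. So by (iii), $\mathcal{O}_{\P^1}([0:1] - [1:0]) \cong \mathcal{O}_{\P^1}$, i.e.\ $[0:1] \sim [1:0]$. By (i), $\mathcal{O}_{\P^1}([0:1] + [1:0]) \cong \mathcal{O}_{\P^1}([0:1]) \otimes \mathcal{O}_{\P^1}([1:0]) \cong \mathcal{O}(2)$.
\end{eg}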
+
+\begin{prop}
+ Let $X$ be a Noetherian, normal, integral scheme. Assume that $X$ is \term{factorial}, i.e.\ every local ring $\mathcal{O}_{X, x}$ is a UFD. Then any Weil divisor is Cartier.
+\end{prop}
+Note that smooth schemes are always factorial.
+
+\begin{proof}
 It is enough to prove the proposition when $D$ is prime and effective. So $D \subseteq X$ is a codimension $1$ irreducible subvariety. For $x \in X$, there are two cases:
 \begin{itemize}
 \item If $x \not \in D$, then $D$ is locally principal at $x$, with local equation $1$.
+ \item If $x \in D$, then $I_{D, x} \subseteq \mathcal{O}_{X, x}$ is a height $1$ prime ideal. So $I_{D, x} = (f)$ for $f \in \mathfrak{m}_{X, x}$. Then $f$ is the local equation for $D$.\qedhere
+ \end{itemize}
+\end{proof}
+
Recall the class group $\Cl(X)$ was the group of Weil divisors modulo linear equivalence.
+\begin{defi}[Picard group]\index{Picard group}
 We define the \emph{Picard group} of $X$ to be the group of Cartier divisors modulo linear equivalence.
+\end{defi}
Thus, if $X$ is factorial, then $\Cl(X) = \Pic(X)$.
+
+Recall that if $\mathcal{L}$ is an invertible sheaf, and $s$ is a rational section of $\mathcal{L}$, then we can define $\div(s)$.
+
+\begin{thm}
+ Let $X$ be normal and $\mathcal{L}$ an invertible sheaf, $s$ a rational section of $\mathcal{L}$. Then $\mathcal{O}_X(\div(s))$ is invertible, and there is an isomorphism
+ \[
+ \mathcal{O}_X(\div(s)) \to \mathcal{L}.
+ \]
+ Moreover, sending $\mathcal{L}$ to $\div s$ gives a map
+ \[
+ \Pic(X) \to \Cl(X),
+ \]
+ which is an isomorphism if $X$ is factorial (and Noetherian and integral).
+\end{thm}
+So we can think of $\Pic(X)$ as the group of invertible sheaves modulo isomorphism, namely $H^1(X, \mathcal{O}_X^*)$.
+
+\begin{proof}
+ Given $f \in H^0(U, \mathcal{O}_X(\div(s)))$, map it to $f \cdot s \in H^0(U, \mathcal{L})$. This gives the desired isomorphism.
+
 If we have two sections $s' \not= s$, then $f = s'/s \in K(X)$. So $\div(s) = \div (s') + \div(f)$, and $\div(f)$ is principal. So this gives a well-defined map $\Pic(X) \to \Cl(X)$.
+\end{proof}
+
+\subsection{Computations of class groups}
+To explicitly compute the class group, the following proposition is useful:
+\begin{prop}
+ Let $X$ be an integral scheme, regular in codimension $1$. If $Z \subseteq X$ is an integral closed subscheme of codimension $1$, then we have an exact sequence
+ \[
+ \Z \to \Cl(X) \to \Cl(X \setminus Z) \to 0,
+ \]
+ where $n \in \Z$ is mapped to $[nZ]$.
+\end{prop}
+
+\begin{proof}
+ The map $\Cl(X) \to \Cl(X \setminus Z)$ is given by restriction. If $S$ is a Weil divisor on $X \setminus Z$, then $\bar{S} \subseteq X$ maps to $S$ under the restriction map. So this map is surjective.
+
 Also, $[nZ]|_{X \setminus Z}$ is trivial. So the composition of the first two maps vanishes. To check exactness, suppose $D$ is a Weil divisor on $X$, principal on $X \setminus Z$. Then $D|_{X \setminus Z} = \div(f)|_{X \setminus Z}$ for some $f \in K(X)$. Then $D - \div(f)$ is supported along $Z$. So it must be of the form $nZ$.
+\end{proof}
+
+If we remove something of codimension at least two, then something even simpler happens.
+\begin{prop}
+ If $Z \subseteq X$ has codimension $\geq 2$, then $\Cl(X) \to \Cl(X \setminus Z)$ is an isomorphism.
+\end{prop}
+The proof is the same as above, except no divisor can be supported on $Z$.
+
+\begin{eg}
+ $\Cl(\A^2 \setminus \{0\}) = \Cl(\A^2)$.
+\end{eg}
+In general, to use the above propositions to compute the class group, a good strategy is to remove closed subschemes as above until we reach something we understand well. One thing we understand well is affine schemes:
+
+\begin{prop}
 If $A$ is a Noetherian ring, regular in codimension $1$, then $A$ is a UFD iff $A$ is normal and $\Cl(\Spec A) = 0$.
+\end{prop}
+
+\begin{proof}
 If $A$ is a UFD, then it is normal, and every prime ideal of height $1$ is principally generated. So if $D \subseteq \Spec A$ is Weil and prime, then $D = V(f)$ for some $f$, and hence $(f) = I_D$.
+
+ Conversely, if $A$ is normal and $\Cl(\Spec A) = 0$, then every Weil divisor is principal. So if $I$ is a height $1$ prime ideal, then $V(I) = D$ for some Weil divisor $D$. Then $D$ is principal. So $I = (f)$ for some $f$. So $A$ is a Krull Noetherian integral domain with principally generated height $1$ prime ideals. So it is a UFD.
+\end{proof}
+
+\begin{eg}
+ $\Cl(\A^2) = 0$.
+\end{eg}
+
+\begin{eg}
+ Consider $\P^n$. We have an exact sequence
+ \[
 \Z \to \Cl(\P^n) \to \Cl(\P^n \setminus \P^{n - 1}) = \Cl(\A^n) = 0.
+ \]
+ So $\Z \to \Cl(\P^n)$ is surjective. So $\Cl(\P^n)$ is generated by a hyperplane. Moreover, since any principal divisor has degree $0$, it follows that $nH \not= 0$ for all $n > 0$. This $H$ corresponds to $\mathcal{O}(1)$.
+\end{eg}
+
+\begin{eg}
+ Let $X = V(xy - z^2) \subseteq \A^3$. Let $Z = V(x, z)$. We claim that $\Cl(X) \cong \Z/2\Z$, and is generated by $[Z]$.
+
+ We compute
+ \[
 K[X \setminus Z] = \frac{K[x, x^{-1}, y, z]}{(xy - z^2)} \cong \frac{K[x, x^{-1}, t, z]}{(t - z^2)} = K[x, x^{-1}, z],
+ \]
+ where $t = xy$, and this is a UFD. So $\Cl(X \setminus Z) = 0$.
+
+ We now want to compute the kernel of the map $\Z \to \Cl(X)$. We have
+ \[
 \mathcal{O}_{X, Z} = \left(\frac{K[x, y, z]}{(xy - z^2)}\right)_{(x, z)} = \left(\frac{K[x, y, y^{-1}, z]}{(xy - z^2)}\right)_{(x, z)} = K[y, y^{-1}, z]_{(z)}.
+ \]
 Unsurprisingly, this is a DVR, and crucially, the uniformizer is $z$. So we know that $\div(x) = 2Z$, and hence $2\Z \subseteq \ker(\Z \to \Cl(X))$. There is only one thing left to check, which is that the ideal $(x, z)$ of $Z$ is not principal. To see this, consider
+ \[
+ T_{(0)}X = \A^3,\quad T_{(0)} (Z \subseteq X) = \A^1.
+ \]
+ But if $Z$ were principal, then $I_{Z, 0} = (f)$ for some $f \in \mathcal{O}_{X, 0}$. But then $T_0(Z) \subseteq T_0 X$ will be $\ker \d f$. But then $\ker \d f$ should have dimension at least $2$, which is a contradiction.
+\end{eg}
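This computation also records a useful phenomenon:
\begin{eg}
 On the cone $X = V(xy - z^2)$, the Weil divisor $Z = V(x, z)$ is not Cartier: the argument above shows $I_{Z, 0}$ is not principal, so $Z$ is not locally principal at the origin, even though $2Z = \div(x)$ is principal.
\end{eg}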
+
+\begin{prop}
+ Let $X$ be Noetherian and regular in codimension one. Then
+ \[
+ \Cl(X) = \Cl(X \times \A^1).
+ \]
+\end{prop}
+
+\begin{proof}
+ We have a projection map
+ \begin{align*}
+ \pr_1^*: \Cl(X) &\to \Cl(X \times \A^1)\\
+ [D_i] &\mapsto [D_i \times \A^1]
+ \end{align*}
+ It is an exercise to show that this is injective.
+
+ To show surjectivity, first note that we can use the previous exact sequence and the $4$-lemma to reduce to the case where $X$ is affine.
+
+ We consider what happens when we localize a prime divisor $D$ at the generic point of $X$. Explicitly, suppose $\mathcal{I}_D$ is the ideal of $D$ in $K[X \times \A^1]$, and let $\mathcal{I}_D^0$ be the ideal of $K(X)[t]$ generated by $\mathcal{I}_D$ under the inclusion
+ \[
+ K[X \times \A^1] = K[X][t] \subseteq K(X) [t].
+ \]
+
+ If $\mathcal{I}_D^0 = (1)$, then $\mathcal{I}_D$ contains some non-zero function $f \in K[X]$. Then $D \subseteq V(f)$ as a subvariety of $X \times \A^1$. Since $f$ does not involve $t$, we have $V(f) = V_X(f) \times \A^1$. So $D$ is an irreducible component of $V_X(f) \times \A^1$, and in particular is of the form $D' \times \A^1$.
+
+ If not, then $\mathcal{I}_D^0 = (f)$ for some $f \in K(X)[t]$, since $K(X)[t]$ is a PID. Then $\div f$ is a principal divisor of $X \times \A^1$ whose localization at the generic point is $D$. Thus, $\div f$ is $D$ plus some other divisors of the form $D' \times \A^1$. So $D$ is linearly equivalent to a sum of divisors of the form $D' \times \A^1$.
+\end{proof}
+
+\begin{ex}
+ We have $\Cl(X \times \P^n) = \Z \oplus \Cl(X)$.
+\end{ex}
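+One possible approach to this exercise (a sketch, assuming the previous proposition): apply the localization sequence to $X \times \P^{n - 1} \subseteq X \times \P^n$ and iterate the proposition to obtain
+\[
+ \Z \to \Cl(X \times \P^n) \to \Cl(X \times \A^n) = \Cl(X) \to 0.
+\]
+The map $D \mapsto D \times \P^n$ splits this sequence on the right, and no positive multiple of $[X \times \P^{n - 1}]$ is trivial, as one can check by restricting to a fibre $\{x\} \times \P^n$, where it becomes a multiple of the hyperplane class. This gives the direct sum decomposition.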
+
+\subsection{Linear systems}
+We end by collecting some useful results about divisors and linear systems.
+
+\begin{prop}
+ Let $X$ be a smooth projective variety over an algebraically closed field. Let $D_0$ be a divisor on $X$.
+ \begin{enumerate}
+ \item For all $s \in H^0(X, \mathcal{O}_X(D_0))$, $\div(s)$ is an effective divisor linearly equivalent to $D_0$.
+ \item If $D \sim D_0$ and $D \geq 0$, then there is $s \in H^0(\mathcal{O}_X(D_0))$ such that $\div(s) = D$.
+ \item If $s, s' \in H^0(\mathcal{O}_X(D_0))$ and $\div(s) = \div(s')$, then $s' = \lambda s$ for some $\lambda \in K^*$.
+ \end{enumerate}
+\end{prop}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Done last time.
+ \item If $D \sim D_0$, then $D - D_0 = \div(f)$ for some $f \in K(X)$. Then $(f) + D_0 \geq 0$. So $f$ induces a section $s \in H^0(\mathcal{O}_X(D_0))$. Then $\div(s) = D$.
+ \item We have $\frac{s'}{s} \in K(X)^*$. So $\div\left(\frac{s'}{s}\right) = 0$. So $\frac{s'}{s} \in H^0(\mathcal{O}_X^*) = K^*$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Complete linear system]\index{complete linear system}
+ A \emph{complete linear system} is the set of all effective divisors linearly equivalent to a given divisor $D_0$, written $|D_0|$\index{$\lvert D\rvert$}.
+\end{defi}
+Thus, $|D_0|$ is the projectivization of the vector space $H^0(X, \mathcal{O}(D_0))$.
+
+We also define
+\begin{defi}[Linear system]\index{linear system}
+ A \emph{linear system} is a projective-linear subspace of $|D_0|$, i.e.\ the projectivization $|V|$ of a linear subspace $V \subseteq H^0(X, \mathcal{O}(D_0))$.
+\end{defi}
+
+Given a linear system $|V|$ associated to a line bundle $\mathcal{L}$ on $X$, we have previously seen that we get a rational map $X \dashrightarrow \P(V)$. For this to be a morphism, we need $V$ to generate $\mathcal{L}$. In general, evaluation of sections induces a map
+\[
+ V \otimes \mathcal{O}_X \to \mathcal{L}.
+\]
+Tensoring with $\mathcal{L}^{-1}$ gives a map
+\[
+ V \otimes \mathcal{L}^{-1} \to \mathcal{O}_X.
+\]
+The image is an $\mathcal{O}_X$-submodule, hence an ideal sheaf. This ideal $b_{|V|}$ is called the \term{base ideal} of $|V|$ on $X$, and the corresponding closed subscheme is the \term{base locus}. Then the map $X \dashrightarrow \P(V)$ is a morphism iff $b_{|V|} = \mathcal{O}_X$. In this case, we say $|V|$ is \term{free}.
+
+Accordingly, we can define
+\begin{defi}[(Very) ample divisor]\index{very ample}\index{ample}
+ We say a Cartier divisor $D$ is \emph{(very) ample} when $\mathcal{O}_X(D)$ is.
+\end{defi}
+By Serre's theorem, if $\mathcal{A}$ is ample and $X$ is projective, and $\mathcal{L}$ is any line bundle, then for $n$ sufficiently large, we have
+\[
+ h^0(X, \mathcal{L} \otimes \mathcal{A}^n) = \sum (-1)^i h^i(X, \mathcal{L} \otimes \mathcal{A}^n) = \chi(X, \mathcal{L} \otimes \mathcal{A}^n).
+\]
+In the case of curves, Riemann--Roch lets us understand this number well:
+
+\begin{thm}[Riemann--Roch theorem]\index{Riemann--Roch theorem}
+ If $C$ is a smooth projective curve, then
+ \[
+ \chi(\mathcal{L}) = \deg(\mathcal{L}) + 1 - g(C).\fakeqed
+ \]
+\end{thm}
+
+It is often easier to reason about very ample divisors than general Cartier divisors. First observe that by definition, every projective normal scheme has a very ample divisor, say $H$. If $D$ is any divisor, then by Serre's theorem, $D + nH$ is globally generated for large $n$. Moreover, one can check that the sum of a globally generated divisor and a very ample divisor is still very ample, e.g.\ by checking it separates points and tangent vectors. Hence $D + nH$ is very ample for large $n$. Thus, we deduce that
+
+\begin{prop}
+ Let $D$ be a Cartier divisor on a projective normal scheme. Then $D \sim H_1 - H_2$ for some very ample divisors $H_i$. We can in fact take $H_i$ to be effective, and if $X$ is smooth, then we can take $H_i$ to be smooth and intersecting transversely.\fakeqed
+\end{prop}
+
+The last part is a consequence of Bertini's theorem.
+\begin{thm}[Bertini]\index{Bertini's theorem}
+ Let $X$ be a smooth projective variety over an algebraically closed field $K$, and $D$ a very ample divisor. Then there exists a Zariski open set $U \subseteq |D|$ such that for all $H \in U$, $H$ is smooth on $X$ and if $H_1 \not= H_2$, then $H_1$ and $H_2$ intersect transversely.\fakeqed
+\end{thm}
+
+\section{Surfaces}
+\subsection{The intersection product}
+We now focus on the case of smooth projective surfaces over an algebraically closed field. In this case, there is no need to distinguish between Weil and Cartier divisors.
+
+Let $X$ be such a surface. Let $C, D \subseteq X$ be smooth curves intersecting transversely. We want to count the number of points of intersection. This is given by
+\[
+ |C \cap D| = \deg_C(\mathcal{O}_C(D)) = h^0(\mathcal{O}_{C \cap D}) = \chi(\mathcal{O}_{C \cap D}),
+\]
+using that $\mathcal{O}_{C \cap D}$ is a skyscraper sheaf supported at $C \cap D$. To understand this number, we use the short exact sequences
+\[
+ \begin{tikzcd}[row sep=tiny]
+ 0 \ar[r] & \mathcal{O}_C(-D|_C) \ar[r] & \mathcal{O}_C \ar[r] & \mathcal{O}_{C \cap D} \ar[r] & 0\\
+ 0 \ar[r] & \mathcal{O}_X(-C - D) \ar[r] & \mathcal{O}_X(-D) \ar[r] & \mathcal{O}_C(-D|_C) \ar[r] & 0\\
+ 0 \ar[r] & \mathcal{O}_X(-C) \ar[r] & \mathcal{O}_X \ar[r] & \mathcal{O}_C \ar[r] & 0
+ \end{tikzcd}
+\]
+This allows us to write
+\begin{align*}
+ \chi(\mathcal{O}_{C \cap D}) &= -\chi(C, \mathcal{O}_C(-D|_C)) + \chi(C, \mathcal{O}_C)\\
+ &= \chi(\mathcal{O}_X(-C - D)) - \chi(\mathcal{O}_X(-D)) + \chi(C, \mathcal{O}_C)\\
+ &= \chi(\mathcal{O}_X(-C - D)) - \chi(\mathcal{O}_X(-D)) - \chi(\mathcal{O}_X(-C)) + \chi(\mathcal{O}_X).
+\end{align*}
+This suggests that for general divisors $D_1, D_2$, we can define
+\begin{defi}[Intersection product]\index{intersection product}
+ For divisors $D_1, D_2$, we define the \emph{intersection product} to be
+ \[
+ D_1 \cdot D_2 = \chi(\mathcal{O}_X) + \chi(-D_1 - D_2) - \chi(-D_1) - \chi(-D_2).
+ \]
+\end{defi}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item The product $D_1 \cdot D_2$ depends only on the classes of $D_1, D_2$ in $\Pic(X)$.
+ \item $D_1 \cdot D_2 = D_2 \cdot D_1$.
+ \item $D_1 \cdot D_2 = |D_1 \cap D_2|$ if $D_1$ and $D_2$ are curves intersecting transversely.
+ \item The intersection product is bilinear.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ Only (iv) requires proof. First observe that if $H$ is a very ample divisor represented by a smooth curve, then we have
+ \[
+ H \cdot D = \deg_H(\mathcal{O}_H(D)),
+ \]
+ and this is linear in $D$.
+
+ Next, check that $D_1 \cdot (D_2 + D_3) - D_1 \cdot D_2 - D_1 \cdot D_3$ is symmetric in $D_1, D_2, D_3$. So
+ \begin{enumerate}[label=(\alph*)]
+ \item Since this vanishes when $D_1$ is very ample, it also vanishes if $D_2$ or $D_3$ is very ample.
+ \item Thus, if $H$ is very ample, then $D \cdot (-H) = - (D \cdot H)$.
+ \item Thus, if $H$ is very ample, then $(-H) \cdot D$ is linear in $D$.
+ \item If $D$ is any divisor, write $D = H_1 - H_2$ for $H_1, H_2$ very ample and smooth. Then $D \cdot D' = H_1 \cdot D' - H_2 \cdot D'$ by (a), and thus is linear in $D'$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+In fact, bilinearity and (iii) shows that these properties uniquely characterize the intersection product, since every divisor is a difference of smooth very ample divisors.
+
+\begin{eg}
+ Take $X = \P^2$. We saw that $\Pic(\P^2) \cong \Z$, generated by a hyperplane $\ell$. So all we need to understand is what $\ell^2$ is. But two transverse lines intersect at a point. So $\ell^2 = 1$. Thus, the intersection product on $\Pic(\P^2)$ is just ordinary multiplication. In particular, if $D_1, D_2$ are curves that intersect transversely, we find that
+ \[
+ D_1 \cdot D_2 = \deg(D_1) \deg(D_2).
+ \]
+ This is B\'ezout's theorem!
+\end{eg}
+\begin{ex}
+ Let $C_1, C_2 \subseteq X$ be curves without common components. Then
+ \[
+ C_1 \cdot C_2 = \sum_{p \in C_1 \cap C_2} \ell(\mathcal{O}_{C_1 \cap C_2, p}),
+ \]
+ where for $p \in C_1 \cap C_2$, if $C_1 = V(x)$ and $C_2 = V(y)$, then
+ \[
+ \ell(\mathcal{O}_{C_1 \cap C_2, p}) = \dim_K(\mathcal{O}_{X, p}/(x, y))
+ \]
+ counts the intersection multiplicity at $p$.
+\end{ex}
+
+\begin{eg}
+ Take $X = \P^1 \times \P^1$. Then
+ \[
+ \Pic(X) = \Z^2 = \Z[p_1^* \mathcal{O}(1)] \oplus \Z[p_2^* \mathcal{O}(1)].
+ \]
+ Let $A = p_1^* \mathcal{O}(1), B = p_2^* \mathcal{O}(1)$. Then we see that
+ \[
+ A^2 = B^2 = 0,\quad A \cdot B = 1
+ \]
+ by high school geometry.
+\end{eg}
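+To spell out the geometry: $A$ and $B$ are represented by fibres of the two projections. Two distinct fibres of $p_1$ are disjoint, so
+\[
+ A^2 = [\{0\} \times \P^1] \cdot [\{\infty\} \times \P^1] = 0,
+\]
+and similarly $B^2 = 0$, while a fibre of $p_1$ meets a fibre of $p_2$ transversely in a single point, so $A \cdot B = 1$.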
+
+The Riemann--Roch theorem for curves lets us compute $\chi(D)$ for a divisor $D$. We have an analogous theorem for surfaces:
+\begin{thm}[Riemann--Roch for surfaces]\index{Riemann--Roch for surfaces}
+ Let $D \in \Div(X)$. Then
+ \[
+ \chi(X, \mathcal{O}_X(D)) = \frac{D \cdot (D - K_X)}{2} + \chi(\mathcal{O}_X),
+ \]
+ where \term{$K_X$} is the \term{canonical divisor}.
+\end{thm}
+
+To prove this, we need the adjunction formula:
+\begin{thm}[Adjunction formula]\index{adjunction formula}
+ Let $X$ be a smooth surface, and $C \subseteq X$ a smooth curve. Then
+ \[
+ (\mathcal{O}_X(K_X) \otimes \mathcal{O}_X(C))|_C \cong \mathcal{O}_C(K_C).
+ \]
+\end{thm}
+
+\begin{proof}
+ Let $\mathcal{I}_C = \mathcal{O}_X(-C)$ be the ideal sheaf of $C$. We then have a short exact sequence on $C$:
+ \[
+ 0 \to \mathcal{O}_X(-C)|_C \cong \mathcal{I}_C/\mathcal{I}_C^2 \to \Omega^1_X|_C \to \Omega^1_C \to 0,
+ \]
+ where the left-hand map is given by $\d$. To check this, note that locally on affine charts, if $C$ is cut out by the function $f$, then smoothness of $C$ implies the kernel of the second map is the span of $\d f$.
+
+ By definition of the canonical divisor, we have
+ \[
+ \mathcal{O}_X(K_X) = \det (\Omega^1_X).
+ \]
+ Restricting to $C$, we have
+ \[
+ \mathcal{O}_X(K_X)|_C = \det(\Omega^1_X|_C) = \det(\mathcal{O}_X(-C)|_C) \otimes \det(\Omega^1_C) = \mathcal{O}_X(C)|_C^\vee \otimes \mathcal{O}_C(K_C).\qedhere
+ \]
+\end{proof}
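+Taking degrees of the adjunction formula and using $\deg(\mathcal{O}_X(D)|_C) = D \cdot C$ gives the numerical form that we will use repeatedly:
+\[
+ 2g(C) - 2 = \deg(K_C) = (K_X + C) \cdot C.
+\]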
+
+\begin{proof}[Proof of Riemann--Roch]
+ We can assume $D = H_1 - H_2$ for very ample divisors $H_1, H_2$, represented by smooth irreducible curves that intersect transversely. We have short exact sequences
+ \[
+ \begin{tikzcd}[row sep=tiny]
+ 0 \ar[r] & \mathcal{O}_X(H_1 - H_2) \ar[r] & \mathcal{O}_X(H_1) \ar[r] & \mathcal{O}_{H_2}(H_1|_{H_2}) \ar[r] & 0\\
+ 0 \ar[r] & \mathcal{O}_X \ar[r] & \mathcal{O}_X(H_1) \ar[r] & \mathcal{O}_{H_1}(H_1) \ar[r] & 0
+ \end{tikzcd}
+ \]
+ where $\mathcal{O}_{H_1}(H_1)$ means the restriction of the \emph{line bundle} $\mathcal{O}_X(H_1)$ to $H_1$. We can then compute
+ \begin{align*}
+ \chi(H_1 - H_2) &= \chi(\mathcal{O}_X(H_1)) - \chi(H_2, \mathcal{O}_{H_2}(H_1|_{H_2}))\\
+ &= \chi(\mathcal{O}_X) + \chi(H_1, \mathcal{O}_{H_1}(H_1)) - \chi(H_2, \mathcal{O}_{H_2}(H_1|_{H_2})).
+ \end{align*}
+ The first term appears in our Riemann--Roch theorem, so we leave it alone. We then use Riemann--Roch for curves to understand the remaining terms. It tells us
+ \[
+ \chi(H_i, \mathcal{O}_{H_i}(H_1)) = \deg(\mathcal{O}_{H_i}(H_1)) + 1 - g(H_i) = (H_i \cdot H_1) + 1 - g(H_i).
+ \]
+ By definition of genus, we have
+ \[
+ 2g(H_i) - 2 = \deg (K_{H_i}),
+ \]
+ and the adjunction formula lets us compute $\deg (K_{H_i})$ by
+ \[
+ \deg(K_{H_i}) = H_i \cdot (K_X + H_i).
+ \]
+ Plugging these into the formula and rearranging gives the desired result.
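+ For completeness, here is the rearrangement: writing $D = H_1 - H_2$ and substituting $g(H_i) = \frac{1}{2}H_i \cdot (K_X + H_i) + 1$, which follows from the adjunction formula, we get
+ \begin{align*}
+ \chi(D) &= \chi(\mathcal{O}_X) + H_1^2 - H_1 \cdot H_2 - \tfrac{1}{2} H_1 \cdot (K_X + H_1) + \tfrac{1}{2} H_2 \cdot (K_X + H_2)\\
+ &= \chi(\mathcal{O}_X) + \tfrac{1}{2}(H_1 - H_2)^2 - \tfrac{1}{2} K_X \cdot (H_1 - H_2)\\
+ &= \chi(\mathcal{O}_X) + \frac{D \cdot (D - K_X)}{2}.
+ \end{align*}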
+\end{proof}
+Previously, we had the notion of linear equivalence of divisors. A coarser notion of equivalence is that of numerical equivalence, which is about what the intersection product can detect:
+\begin{defi}[Numerical equivalence]\index{numerical equivalence}
+ We say divisors $D, D'$ are \emph{numerically equivalent}, written $D \equiv D'$, if
+ \[
+ D\cdot E = D' \cdot E
+ \]
+ for all divisors $E$.
+
+ We write\index{$\Num_0$}
+ \[
+ \Num_0 = \{D \in \Div(X) : D \equiv 0\}.
+ \]
+\end{defi}
+
+\begin{defi}[N\'eron--Severi group]\index{N\'eron--Severi group}
+ The \term{N\'eron--Severi group} is
+ \[
+ \NS(X) = \Div(X)/\Num_0(X).
+ \]
+\end{defi}
+
+We will need the following important fact:
+\begin{fact}
+ $\NS(X)$ is a finitely-generated free abelian group.
+\end{fact}
+Note that $\Pic(X)$ need not be finitely generated in general. For example, for an elliptic curve $X$, the subgroup $\Pic^0(X)$ bijects with the closed points of $X$, which is in general very large!
+
+\begin{defi}[$\rho(X)$]\index{$\rho(X)$}
+ $\rho(X) = \dim \NS_\R(X) = \rk \NS(X)$.
+\end{defi}
+
+Tensoring with $\R$, the intersection product gives a map
+\[
+ (\ph, \ph): \NS_\R(X) \times \NS_\R(X) \to \R,\quad \NS_\R(X) = \NS(X) \otimes \R.
+\]
+By definition of $\NS(X)$, this is non-degenerate. A natural question to ask is then, what is the signature of this form? Since $X$ is projective, there is a very ample divisor $H$, and then $H^2 > 0$. So this is definitely not negative definite. It turns out there is only one positive component, and the signature is $(1, \rho(X) - 1)$:
+
+\begin{thm}[Hodge index theorem]\index{Hodge index theorem}
+ Let $X$ be a projective surface, and $H$ be a (very) ample divisor on $X$. Let $D$ be a divisor on $X$ such that $D \cdot H = 0$ but $D \not \equiv 0$. Then $D^2 < 0$.
+\end{thm}
+
+\begin{proof}
+ Before we begin the proof, observe that if $H'$ is very ample and $D'$ is (strictly) effective, then $H' \cdot D' > 0$, since this is given by the number of intersections between $D'$ and any hyperplane in the projective embedding given by $H'$.
+
+ Now assume for contradiction that $D^2 \geq 0$.
+ \begin{itemize}
+ \item If $D^2 > 0$, fix an $n$ such that $H_n = D + nH$ is very ample. Then $H_n \cdot D = D^2 + n(H \cdot D) = D^2 > 0$.
+
+ We use Riemann--Roch to learn that
+ \[
+ \chi(X, \mathcal{O}_X(mD)) = \frac{m^2 D^2 - m K_X \cdot D}{2} + \chi(\mathcal{O}_X).
+ \]
+ We first consider the $H^2(\mathcal{O}_X(mD))$ term in the left-hand side. By Serre duality, we have
+ \[
+ H^2(\mathcal{O}_X(mD)) = H^0(K_X - mD).
+ \]
+ Now observe that for large $m$, we have
+ \[
+ H_n \cdot (K_X - mD) < 0.
+ \]
+ Since $H_n$ is very ample, it cannot be the case that $K_X - mD$ is effective. So $H^0(K_X - mD) = 0$.
+
+ Thus, for $m$ sufficiently large, we have
+ \[
+ h^0(mD) - h^1(mD) > 0.
+ \]
+ In particular, $mD$ is (strictly) effective for $m \gg 0$. But then $H \cdot mD > 0$ since $H$ is very ample and $mD$ is effective. This is a contradiction.
+
+ \item Suppose $D^2 = 0$. Since $D$ is not numerically trivial, there exists a divisor $E$ on $X$ such that $D \cdot E \not= 0$. We define
+ \[
+ E' = (H^2) E - (E \cdot H) H.
+ \]
+ It is then immediate that $E' \cdot H = 0$. So $D_n' = nD + E'$ satisfies $D_n' \cdot H = 0$. On the other hand,
+ \[
+ (D_n')^2 = (E')^2 + 2n D\cdot E' > 0
+ \]
+ for $|n|$ large enough with a suitable sign, since $D \cdot E' = (H^2)(D \cdot E) \not= 0$. This contradicts the previous part.\qedhere
+ \end{itemize}
+\end{proof}
+
+%Recall that we have an exponential exact sequence
+%\[
+% \begin{tikzcd}
+% 0 \ar[r] & \Z \ar[r] & \mathcal{O} \ar[r, "\exp"] & \mathcal{O}^* \ar[r] & 0
+% \end{tikzcd}
+%\]
+%in the analytic topology. So we have a long exact sequence
+%\begin{multline*}
+% 0 \to H^0(\Z) \to H^0(\mathcal{O}_X) \to H^0(\mathcal{O}_X^*) \to H^1(\Z)\\
+% \to H^1(\mathcal{O}_X) \to H^1(\mathcal{O}_X^*) \to H^2(\Z) \to \cdots.
+%\end{multline*}
+%We have $H^1(\mathcal{O}^*) = \Pic(X)$, where we take the \emph{analytic} Picard group. However, it is a classical theorem that this is the same as the algebraic Picard group. We can also identify $H^2(\Z) = H^2(X, \Z)$, and the map $\Pic(X) \to H^2(X, \Z)$ is the first Chern class.
+
+\subsection{Blow ups}
+Let $X$ be a smooth surface, and $p \in X$. There is then a standard way to ``replace'' $p$ with a copy of $\P^1$. The result is a surface $\bar{X} = \Bl_p X$ together with a map $\varepsilon: \bar{X} \to X$ such that $\varepsilon^{-1}(p)$ is a smoothly embedded copy of $\P^1$, and $\varepsilon$ is an isomorphism outside of this $\P^1$.
+
+To construct $\bar{X}$, pick local coordinates $x, y$ in a neighbourhood $U$ of $p$ such that $V(x, y) = \{p\}$. Work on $U \times \P^1$ with coordinates $[X: Y]$ on $\P^1$. We then set
+\[
+ \bar{U} = V(xY - yX).
+\]
+There is a canonical projection of $\bar{U}$ onto $U$, and this works: above the point $(0, 0)$, any choice of $[X:Y]$ satisfies the equation, and everywhere else, we must have $[X:Y] = [x:y]$.
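+Explicitly, $\bar{U}$ is covered by two affine charts: on $\{X \not= 0\}$, setting $t = Y/X$, we get coordinates $(x, t)$ with $y = xt$, so the projection is $(x, t) \mapsto (x, xt)$; symmetrically, on $\{Y \not= 0\}$, setting $s = X/Y$, the projection is $(y, s) \mapsto (sy, y)$. In each chart, the preimage of $p$ is cut out by a single coordinate ($x = 0$, respectively $y = 0$).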
+
+\begin{defi}[Blow up]\index{blow up}
+ The \emph{blow up} of $X$ at $p$ is the variety $\bar{X} = \Bl_p X$ that we just defined. The preimage of $p$ is known as the \term{exceptional curve}, and is denoted $E$.
+\end{defi}
+
+We would like to understand $\Pic(\bar{X})$ in terms of $\Pic(X)$, and in particular the product structure on $\Pic(\bar{X})$. This will allow us to detect whether a surface is a blow up.
+
+The first step to understanding $\Pic(\bar{X})$ is the localization sequence
+\[
+ \Z \to \Pic(\bar{X}) \to \Pic(\bar{X} \setminus \P^1) \to 0.
+\]
+We also have the isomorphism
+\[
+ \Pic(\bar{X} \setminus \P^1) = \Pic(X \setminus \{p\}) \cong \Pic(X).
+\]
+Finally, pulling back along $\varepsilon$ gives a map $\Pic(X) \to \Pic(\bar{X})$, and the push-pull formula says
+\[
+ \varepsilon_* (\varepsilon^* \mathcal{L}) = \mathcal{L} \otimes \varepsilon_*(\mathcal{O}_{\bar{X}}).
+\]
+Hartogs' lemma says
+\[
+ \varepsilon_* \mathcal{O}_{\bar{X}} = \mathcal{O}_X.
+\]
+So in fact $\varepsilon^*$ is a splitting. So $\Pic(\bar{X})$ is the direct sum of $\Pic(X)$ with a quotient of $\Z$, generated by $[E]$. We will show that in fact $\Pic(\bar{X}) \cong \varepsilon^* \Pic(X) \oplus \Z[E]$. This will be shown by calculating $E^2 = -1$.
+
+In general, if $C$ is a curve in $X$, then this lifts to a curve $\tilde{C} \subseteq \bar{X}$, called the \term{strict transform} of $C$. This is given by
+\[
+ \tilde{C} = \overline{\varepsilon^{-1}(C \setminus \{p\})}.
+\]
+There is another operation we can do, namely the pullback of divisors $\pi^* C$.
+\begin{lemma}
+ \[
+ \pi^* C = \tilde{C} + mE,
+ \]
+ where $m$ is the multiplicity of $C$ at $p$.
+\end{lemma}
+
+\begin{proof}
+ Choose local coordinates $x, y$ at $p$ and suppose the curve $y = 0$ is not tangent to any branch of $C$ at $p$. Then in the local ring $\hat{\mathcal{O}}_{X, p}$, the equation of $C$ is given as
+ \[
+ f = f_m(x, y) + \text{higher order terms},
+ \]
+ where $f_m$ is a non-zero degree $m$ polynomial. Then by definition, the multiplicity is $m$. Then on $\bar{U} \subseteq (U \times \P^1)$, we have the chart $U \times \A^1$ where $X \not= 0$, with coordinates
+ \[
+ (x, y, Y/X = t).
+ \]
+ Taking $(x, t)$ as local coordinates, the map to $U$ is given by $(x, t) \mapsto (x, xt)$. Then
+ \[
+ \varepsilon^* f = f(x, tx) = x^m f_m(1, t) + \text{higher order terms} = x^m \cdot h(x, t),
+ \]
+ with $h(0, 0) \not= 0$. But then on the chart $\{X \not= 0\}$ of $\bar{U}$, the curve $x = 0$ is just $E$. So $\varepsilon^* f$ vanishes to order $m$ along $E$.
+\end{proof}
+
+\begin{prop}
+ Let $X$ be a smooth projective surface, and $x \in X$. Let $\bar{X} = \Bl_x X \overset{\pi}{\to} X$. Then
+ \begin{enumerate}
+ \item $\pi^* \Pic(X) \oplus \Z[E] = \Pic(\bar{X})$
+ \item $\pi^*D \cdot \pi^* F = D \cdot F$,\quad $\pi^* D \cdot E = 0$,\quad $E^2 = -1$.
+ \item $K_{\bar{X}} = \pi^* (K_X) + E$.
+ \item $\pi^*$ is defined on $\NS(X)$. Thus,
+ \[
+ \NS(\bar{X}) = \NS(X) \oplus \Z [E].
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Recall we had the localization sequence
+ \[
+ \Z \to \Pic(\bar{X}) \to \Pic(X) \to 0
+ \]
+ and there is a right splitting. To show the result, we need to show the left-hand map is injective, or equivalently, $m E \not\sim 0$. This will follow from the fact that $E^2 = -1$.
+ \item For the first part, it suffices to show that $\pi^* D \cdot \pi^* F = D \cdot F$ for $D, F$ very ample, and so we may assume $D, F$ are curves not passing through $x$. Then their pullbacks are just the strict transforms of these curves, and so the result is clear. The second part is also clear.
+
+ Finally, pick a smooth curve $C$ passing through $x$ with multiplicity $1$. Then
+ \[
+ \pi^*C = \tilde{C} + E.
+ \]
+ Then we have
+ \[
+ 0 = \pi^* C \cdot E = \tilde{C} \cdot E + E^2.
+ \]
+ But $\tilde{C}$ and $E$ intersect at exactly one point, so $\tilde{C} \cdot E = 1$, and hence $E^2 = -1$.
+ \item By the adjunction formula, we have
+ \[
+ (K_{\bar{X}} + E) \cdot E = \deg(K_{\P^1}) = -2.
+ \]
+ So we know that $K_{\bar{X}} \cdot E = -1$. Also, outside of $E$, $K_{\bar{X}}$ and $K_X$ agree. So we have
+ \[
+ K_{\bar{X}} = \pi^* (K_X) + mE
+ \]
+ for some $m \in \Z$. Then computing
+ \[
+ -1 = K_{\bar{X}} \cdot E = \pi^*(K_X) \cdot E + mE^2
+ \]
+ gives $m = 1$.
+ \item We need to show that if $D \in \Num_0(X)$, then $\pi^*(D) \in \Num_0(\bar{X})$. But this is clear: $\pi^* D \cdot \pi^* F = D \cdot F = 0$ for every divisor $F$, and $\pi^* D \cdot E = 0$, and such classes generate $\Pic(\bar{X})$. So we are done.\qedhere
+ \end{enumerate}
+\end{proof}
+
+One thing blow ups are good for is resolving singularities.
+\begin{thm}[Elimination of indeterminacy]
+ Let $X$ be a smooth projective surface, $Y$ a projective variety, and $\varphi: X \dashrightarrow Y$ a rational map. Then there exists a smooth projective surface $X'$ and a commutative diagram
+ \[
+ \begin{tikzcd}[column sep=tiny]
+ & X' \ar[ld, "p"] \ar[rd, "q"]\\
+ X \ar[rr,dashed, "\varphi"] && Y
+ \end{tikzcd}
+ \]
+ where $p: X' \to X$ is a composition of blow ups, and in particular is birational.
+\end{thm}
+
+\begin{proof}
+ We may assume $Y = \P^n$, and $X \dashrightarrow \P^n$ is non-degenerate. Then $\varphi$ is induced by a linear system $|V| \subseteq H^0(X, \mathcal{L})$. We first show that we may assume the base locus has codimension $> 1$.
+
+ If not, every element of $|V|$ is of the form $C + D'$ for some fixed $C$. Let $|V'|$ be the set of all such $D'$. Then $|V|$ and $|V'|$ give the same rational map. Indeed, if $V$ has basis $f_0, \ldots, f_n$, and $g$ is the function defining $C$, then the two maps are, respectively,
+ \[
+ [f_0: \cdots : f_n]\text{ and } [f_0/g : \cdots : f_n/g],
+ \]
+ which define the same rational map. By repeating this process, replacing $V$ with $V'$, we may assume the base locus has codimension $>1$.
+
+ We now want to show that by blowing up, we may remove all points from the base locus. We pick $x \in X$ in the base locus of $|D|$. Blow it up to get $X_1 = \Bl_x X \to X$. Then
+ \[
+ \pi^* |D| = |D_1| + mE,
+ \]
+ where $m > 0$, and $|D_1|$ is a linear system which induces the map $\varphi \circ \pi_1$. If $|D_1|$ has no basepoints, then we are done. If not, we iterate this procedure. The procedure stops, because
+ \[
+ 0 \leq D_1^2 = (\pi_1^* D)^2 + m^2 E^2 = D^2 - m^2 < D^2.\qedhere
+ \]
+\end{proof}
+
+\begin{ex}
+ We have the following universal property of blow ups: if $f: Z \to X$ is a birational morphism of surfaces, and $f^{-1}$ is not defined at a point $x \in X$, then $f$ factors uniquely through the blow up $\Bl_x(X)$.
+\end{ex}
+
+\begin{thm}
+ Let $g: Z \to X$ be a birational morphism of surfaces. Then $g$ factors as $Z \overset{g'}{\to} X' \overset{p}{\to} X $, where $p: X' \to X$ is a composition of blow ups, and $g'$ is an isomorphism.
+\end{thm}
+
+\begin{proof}
+ Apply elimination of indeterminacy to the rational inverse of $g$ to obtain $p: X' \to X$. There is then a lift $g'$ of $g$ to $X'$ by the universal property, and these are inverses to each other.
+% If $f: Z \to X$ is not an isomorphism, then there must be a point in $X$ where $f^{-1}$ is not well-defined. So by the universal property, we can factor $f$ as $Z \overset{g_1}{\to} X_1 \overset{f_1}{\to} X$, where $f_1$ is the blow up at that points. We want to know this terminates.
+%
+% If $f$ is not invertible, then there exists $C_1, \ldots, C_k$ on $Z$ such that $g(C_i)$ is a point. Then the number of curves contracted to a point by $g_i$ is less than that by $g_{i - 1}$. When there are none left, we have an isomorphism.
+\end{proof}
+
+Birational isomorphism is an equivalence relation, so we can partition the set of smooth projective surfaces into birational equivalence classes. If $X, X'$ are in the same class, we say $X \leq X'$ if $X'$ is obtained from $X$ by a sequence of blow ups. What we have seen is that any two elements have a common upper bound. There is no maximal element, since we can always keep blowing up. However, we can look for minimal elements.
+\begin{defi}[Relatively minimal]\index{relatively minimal}
+ We say $X$ is \emph{relatively minimal} if there does not exist a smooth $X'$ and a birational morphism $X \to X'$ that is not an isomorphism. We say $X$ is \term{minimal} if it is the unique relative minimal surface in the birational equivalence class.
+\end{defi}
+
+If $X$ is not relatively minimal, then it is the blow up of another smooth projective surface, and in this case the exceptional curve $C$ satisfies $C^2 = -1$. It turns out this criterion itself is enough to determine minimality.
+\begin{thm}[Castelnuovo's contractibility criterion]\index{Castelnuovo's contractibility criterion}
+ Let $X$ be a smooth projective surface over $K = \bar{K}$. If there is a curve $C \subseteq X$ such that $C \cong \P^1$ and $C^2 = -1$, then there exists $f: X \to Y$ that exhibits $X$ as a blow up of $Y$ at a point with exceptional curve $C$.
+\end{thm}
+
+\begin{ex}
+ Let $X$ be a smooth projective surface, and $C \subseteq X$ a curve. Then any two of the following three conditions imply the third:
+ \begin{enumerate}
+ \item $C \cong \P^1$
+ \item $C^2 = -1$
+ \item $K_X \cdot C = -1$
+ \end{enumerate}
+ When these are satisfied, we say $C$ is a \term{$(-1)$ curve}.
+\end{ex}
+For example, the last follows from the first two by the adjunction formula.
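+In detail, the numerical form of the adjunction formula gives
+\[
+ K_X \cdot C + C^2 = \deg(K_C) = 2g(C) - 2,
+\]
+and $C \cong \P^1$ means $g(C) = 0$, so the first two conditions give $K_X \cdot C = -2 - C^2 = -1$.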
+
+\begin{cor}
+ A smooth projective surface is relatively minimal if and only if it does not contain a $(-1)$ curve.\fakeqed
+\end{cor}
+
+\begin{proof}[Proof of Castelnuovo's contractibility criterion]
+ The idea is to produce a suitable linear system $|L|$, giving a map $f: X \to \P^n$, and then take $Y$ to be the image. Note that $f(C)$ being a point is the same as requiring
+ \[
+ \mathcal{O}_X(L)|_C = \mathcal{O}_C.
+ \]
+ After finding a linear system that satisfies this, we will do some work to show that $f$ is an isomorphism outside of $C$ and has smooth image.
+
+ Let $H$ be a very ample divisor on $X$. By Serre's theorem, we may assume $H^1(X, \mathcal{O}_X(H)) = 0$. Let
+ \[
+ H \cdot C = K > 0.
+ \]
+
+ Consider divisors of the form $H + iC$. We can compute
+ \[
+ \deg_C (H + iC)|_C = (H + iC) \cdot C = K - i.
+ \]
+ Thus, we know $\deg_C (H + KC)|_C = 0$, and hence
+ \[
+ \mathcal{O}_X(H + KC)|_C \cong \mathcal{O}_C.
+ \]
+
+ We claim that $\mathcal{O}_X(H + KC)$ works. To check this, we pick a fairly explicit basis of $H^0(H + KC)$. Consider the short exact sequence
+ \[
+ 0 \to \mathcal{O}_X (H + (i - 1) C) \to \mathcal{O}_X(H + iC) \to \mathcal{O}_C(H + iC) \to 0,
+ \]
+ inducing a long exact sequence
+ \begin{multline*}
+ 0 \to H^0(H + (i - 1)C) \to H^0(H + iC) \to H^0(C, (H + iC)|_C)\\
+ \to H^1(H + (i - 1)C) \to H^1(H + iC) \to H^1(C, (H + iC)|_C) \to \cdots
+ \end{multline*}
+ We know $\mathcal{O}_X(H + iC)|_C = \mathcal{O}_{\P^1}(K - i)$. So in the range $i = 0, \ldots, K$, we have $H^1(C, (H + iC)|_C) = 0$. Thus, by induction, we find that
+ \[
+ H^1(H + iC) = 0\text{ for }i = 0, \ldots, K.
+ \]
+
+ As a consequence of this, we know that for $i = 1, \ldots, K$, we have a short exact sequence
+ \[
+ H^0(H + (i - 1)C) \hookrightarrow H^0(H + iC) \twoheadrightarrow H^0(C, (H + iC)|_C) = H^0(\mathcal{O}_{\P^1}(K - i)).
+ \]
+ Thus, $H^0(H + iC)$ is spanned by the image of $H^0(H + (i - 1)C)$ plus a lift of a basis of $H^0(\mathcal{O}_{\P^1}(K - i))$.
+
+ For each $i > 0$, pick a lift $y^{(i)}_0, \ldots, y^{(i)}_{K - i}$ of a basis of $H^0(\mathcal{O}_{\P^1}(K - i))$ to $H^0(H + iC)$, and take the image in $H^0(H + KC)$. Note that in local coordinates, if $C$ is cut out by a function $g$, then the image in $H^0(H + KC)$ is given by
+ \[
+ g^{K - i}y^{(i)}_0, \ldots, g^{K - i}y^{(i)}_{K - i}.
+ \]
+ For $i = 0$, we pick a basis of $H^0(H)$, and then map it down to $H^0(H + KC)$. Taking the union over all $i$ of these elements, we obtain a basis of $H^0(H + KC)$. Let $f$ be the induced map to $\P^r$.
+
+ For concreteness, list the basis vectors as $x_1, \ldots, x_r$, where $x_r$ is a lift of $1 \in \mathcal{O}_{\P^1}$ to $H^0(H + KC)$, and $x_{r - 1}, x_{r - 2}$ are $g y_0^{(K - 1)}, gy_1^{(K - 1)}$.
+
+ First observe that $x_1, \ldots, x_{r - 1}$ vanish along $C$. So
+ \[
+ f(C) = [0:\cdots:0:1] =: p,
+ \]
+ and $C$ is indeed contracted to a point.
+
+ Outside of $C$, the function $g$ is invertible, so just the image of $H^0(H)$ is enough to separate points and tangent vectors. So $f$ is an isomorphism outside of $C$.
+
+ All the other basis elements of $H + KC$ are needed to make sure the image is smooth, and we only have to check this at the point $p$. This is done by showing that $\dim \mathfrak{m}_{Y, p}/\mathfrak{m}_{Y, p}^2 \leq 2$, which requires some ``infinitesimal analysis'' we will not perform.
+% We use the chart where $x_r \not= 0$, so we have coordinates
+% \[
+% \frac{x_{r - 1}}{x_r}, \ldots, \frac{x_0}{x_r}.
+% \]
+% We know the pullback $f^* \frac{x_i}{x_r}$ vanishes along $C$, and the order of vanishing is exactly determined by how many times $f$ appears in the basis element. Observe that
+% Thus, we find that $H^0(H + iC) \to H^0(C, (H + iC)|_C)$ is surjective. Recall the map $H^0(H + (i - 1)C) \to H^0(H + iC)$ is given by tensoring with $f$, where $f = 1_C \in H^0(\mathcal{O}_X(C))$. Picking $i = K$, we have a surjection
+% \[
+% H^0(\mathcal{O}_X(H + KC)) \to H^0(\mathcal{O}_{\P^1}).
+% \]
+% Let $1 \in H^0(\mathcal{O}_{\P^1})$, and lift to a section $x_r \in H^0(\mathcal{O}_X(H + KC))$. This is a section that does not vanish anywhere on $C$.
+%
+% Inductively, we pick a basis of $H^0(\mathcal{O}_X(H + KC))$ of the form
+% \[
+% x_r, f y_{r - 1, 1}, f y_{r - 2, 1}, f^2 y_{r - 3, 2}, f^2 y_{r - 4, 2}, f^2 y_{r - 4, 2}, \ldots, f^j y_{\ell, j}, \ldots, f^k \sigma,
+% \]
+% where $y_{\ell, j} \in H^0(\mathcal{O}_X(H + (K - j)C))$ are such that $\{y_{\ell, j}\}_\ell$ lift a basis of $\mathcal{O}_{\P^1}(j)$. This basis gives us a map $f: X \to \P(H^0(H + KC))$. Indeed, our linear system separate points and tangent vectors outside of $C$, and $x_r$ is non-zero along $C$. Moreover, the image of $C$ is a point, since $f(C) = [0:0:\cdots :1]$. Moreover, $f|_{(X \setminus C)}$ is an isomorphism since the sections separate points and tangent vectors.
+%
+% It remains to prove that $Y = \im (f)$ is smooth. This can fail to be smooth only at the point $p = f(C)$. But at $p$, we have coordinates . But the pullback $f^* \frac{x_i}{x_r}$ vanishes along $C$, and the order of vanishing is exactly determined by how many times $f$ appears in the basis element. Morever, $y_{\ell, j}|_C \in H^0(\mathcal{O}_{\P^1(j)})$ and $\sigma|_C \in H^0(\mathcal{O}_{\P^1}(K))$. Observing that
+% \[
+% H^0(\mathcal{O}_{\P^1}(j)) = \Sym^j H^0(\mathcal{O}_{\P^1}(1)) = \Sym^j \bra y_0^{(K - 1)}|_{\P^1}, y_1^{(K - 1)}|_{\P^1} \ket,
+% \]
+% So we know that $\frac{x_i}{x_r} \in \mathfrak{m}_p$, and we can write
+% \[
+% \frac{x_i - \sum^\infty_{\ell = k} Q_\ell(y_{r - 1, 1}, y_{r - 2, 1})}{x_r} \in \hat{\mathcal{O}}_{Y, p}, % fix this
+% \]
+% for some polynomials $Q_{\ell}$, whose pullback to $\hat{\mathcal{O}}_{X, C}$ vanishes. So we have
+% \[
+% 0 = \frac{x_i}{x_r} = \frac{\sum Q_\ell(y_{r - 1, 1}, y_{r - 2, 1})}{x_r}.
+% \]
+% for all $i \leq r - 3$. So $\frac{x_i}{x_r} \in \mathfrak{m}_p^2$ for all $i \leq r - 3$. So $\dim \hat{\mathfrak{m}}_p/\hat{\mathfrak{m}}_p^2 \leq 2$. So we are done.
+\end{proof}
+
+\begin{eg}
+ Consider $X = \P^2$, and $x, y, z$ three non-collinear points in $\P^2$, given by $[1:0:0], [0:1:0], [0:0:1]$.
+
+ There are three conics through $x, y, z$, given by $x_1 x_2$, $x_0 x_2$ and $x_0 x_1$, each of which is a union of two of the lines $\ell_0, \ell_1, \ell_2$, where $\ell_i = \{x_i = 0\}$. Let $|C_{x, y, z}|$ be the linear system generated by these three conics. Then we obtain a birational map
+ \begin{align*}
+ \P^2 &\dashrightarrow \P^2\\
+ [x_0:x_1:x_2] & \mapsto [x_1 x_2: x_0 x_2: x_0 x_1] = [x_0^{-1}:x_1^{-1}:x_2^{-1}]
+ \end{align*}
+ with base locus $\{x, y, z\}$.
+
+ Blowing up at the three points $x, y, z$ resolves our indeterminacies, and we obtain a diagram
+ \[
+ \begin{tikzcd}[column sep=tiny]
+ & \Bl_{x, y, z} \P^2 \ar[dr] \ar[dl]\\
+ \P^2 \ar[rr, dashed, "C"] && \P^2
+ \end{tikzcd}
+ \]
+ Then in the blowup, we have
+ \[
+ \pi^* \ell_i = \tilde{\ell}_i + E_j + E_k
+ \]
+ for $i, j, k$ distinct. We then check that $\tilde{\ell}_i^2 = -1$, and each $\tilde{\ell}_i$ is isomorphic to $\P^1$. Thus, Castelnuovo tells us we can blow these down. This blow-down map is exactly the right-hand map in the diagram above, which one can check contracts each line $\tilde{\ell}_i$ to a point.
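+
+ Explicitly, using $(\pi^* \ell_i)^2 = \ell_i^2 = 1$, $\pi^* \ell_i \cdot E_j = 0$, $E_j \cdot E_k = 0$ and $E_j^2 = E_k^2 = -1$, we compute
+ \[
+ \tilde{\ell}_i^2 = (\pi^* \ell_i - E_j - E_k)^2 = (\pi^* \ell_i)^2 + E_j^2 + E_k^2 = 1 - 1 - 1 = -1.
+ \]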
+% Indeed, in the lifting diagram
+% \[
+% \begin{tikzcd}[column sep=tiny]
+% & \Bl_{x, y, z} \P^2 \ar[dr] \ar[dl]\\
+% \P^2 \ar[rr, dashed, "C"] && \P^2
+% \end{tikzcd}
+% \]
+% the right hand map is exactly what Castelnuovo gives us.
+\end{eg}
+
+
+\section{Projective varieties}
+We now turn to understand general projective schemes. Our goal will be to understand the ``geometry'' of the set of ample divisors as a subspace of $\NS(X)$.
+
+\subsection{The intersection product}
+First, we have to define $\NS(X)$, and to do so, we need to introduce the intersection product. Thus, given a projective variety $X$ over an algebraically closed field $K$ with $\dim X = n$, we seek a multilinear symmetric function $\CaDiv(X)^n \to \Z$, denoted
+\[
+ (D_1, \ldots, D_n) \mapsto (D_1 \cdot \ldots \cdot D_n).
+\]
+This should satisfy the following properties:
+\begin{enumerate}
+ \item The intersection depends only on the linear equivalence class, i.e.\ it factors through $\Pic(X)^n$.
+ \item If $D_1, \ldots, D_n$ are smooth hypersurfaces that all intersect transversely, then
+ \[
+ D_1 \cdot \cdots \cdot D_n = |D_1 \cap \cdots \cap D_n|.
+ \]
+ \item If $V \subseteq X$ is an integral subvariety of codimension $1$, and $D_1, \ldots, D_{n - 1}$ are Cartier divisors, then
+ \[
+ D_1 \cdot \ldots \cdot D_{n - 1} \cdot [V] = (D_1|_V \cdot \ldots \cdot D_{n - 1}|_V).
+ \]
+\end{enumerate}
+
+In general, if $V \subseteq X$ is an integral subvariety, we write
+\[
+ D_1 \cdot \ldots \cdot D_{\dim V} \cdot [V] = (D_1|_V \cdot \ldots \cdot D_{\dim V}|_V).
+\]
+
+There are two ways we can define the intersection product:
+\begin{enumerate}
+ \item For each $D_i$, write it as $D_i \sim H_{i, 1} - H_{i, 2}$ with $H_{i, j}$ very ample. Then multilinearity tells us how we should compute the intersection product, namely as
+ \[
+ \sum_{I \subseteq \{1, \ldots, n\}} (-1)^{|I|} \prod_{i \in I} H_{i, 2} \cdot \prod_{j \not \in I} H_{j, 1}.
+ \]
+ By Bertini, we can assume the $H_{i, j}$ intersect transversely, and then we just compute these by counting the number of points of intersection. We then have to show that this is well-defined.
+ \item We can write down the definition of the intersection product in a more cohomological way. Recall that for $n = 2$, we had
+ \[
+ D \cdot C = \chi(\mathcal{O}_X) + \chi(\mathcal{O}_X(-C - D)) - \chi(\mathcal{O}_X (-C)) - \chi(\mathcal{O}_X(-D)).
+ \]
+ For general $n$, let $H_1, \ldots, H_n$ be very ample divisors. Then the last property implies we have
+ \[
+ H_1 \cdot \cdots \cdot H_n = (H_2|_{H_1} \cdot \ldots \cdot H_n|_{H_1}).
+ \]
+ So inductively, we have
+ \[
+ H_1 \cdot \cdots \cdot H_n = \sum_{I \subseteq \{2, \ldots, n\}} (-1)^{|I|} \chi\left(H_1, \mathcal{O}_{H_1}\left(- \sum_{i \in I} H_i\right)\right)
+ \]
+ But we also have the sequence
+ \[
+ 0 \to \mathcal{O}_X(-H_1) \to \mathcal{O}_X \to \mathcal{O}_{H_1} \to 0.
+ \]
+ Twisting this sequence shows that
+ \begin{align*}
+ H_1 \cdot \cdots \cdot H_n &= \sum_{I \subseteq \{2, \ldots, n\}} (-1)^{|I|}\Big(\chi\left(\mathcal{O}_X\left(- \sum\nolimits_{i \in I} H_i\right)\right)\\
+ &\hphantom{aaaaaaaaaaaaaaaa}- \chi\left(\mathcal{O}_X\left(- H_1 - \sum\nolimits_{i \in I} H_i\right)\right)\Big)\\
+ &= \sum_{J \subseteq \{1, \ldots, n\}} (-1)^{|J|} \chi\left(\mathcal{O}_X\left(- \sum\nolimits_{i \in J} H_i\right)\right)
+ \end{align*}
+ We can then adopt this as the definition of the intersection product
+ \[
+ D_1 \cdot \cdots \cdot D_n = \sum_{J \subseteq \{1, \ldots, n\}} (-1)^{|J|} \chi\left(\mathcal{O}_X\left(- \sum_{i \in J} D_i\right)\right).
+ \]
+ Reversing the above procedure shows that property (iii) is satisfied.
+\end{enumerate}
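+
+For example, on $X = \P^2$, take $D_1 = D_2 = H$ a line. Since $\chi(\mathcal{O}_{\P^2}(d)) = \frac{(d + 1)(d + 2)}{2}$ for every integer $d$, the alternating sum gives
+\[
+ H \cdot H = \chi(\mathcal{O}_X) - 2\chi(\mathcal{O}_X(-H)) + \chi(\mathcal{O}_X(-2H)) = 1 - 2 \cdot 0 + 0 = 1,
+\]
+agreeing with the fact that two distinct lines in $\P^2$ meet in a single point. More generally, for $D_1, D_2$ of degrees $d_1, d_2$, the same sum evaluates to $d_1 d_2$, recovering Bezout's theorem.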
+
+As before we can define
+\begin{defi}[Numerical equivalence]\index{numerical equivalence}
+ Let $D, D' \in \CaDiv(X)$. We say $D$ and $D'$ are \emph{numerically equivalent}, written $D \equiv D'$, if $D \cdot [C] = D' \cdot [C]$ for all integral curves $C \subseteq X$.
+\end{defi}
+
+In the case of surfaces, we showed that being numerically equivalent to zero is the same as being in the kernel of the intersection product.
+
+\begin{lemma}
+ Assume $D \equiv 0$. Then for all $D_2, \ldots, D_{\dim X}$, we have
+ \[
+ D \cdot D_2 \cdot \ldots \cdot D_{\dim X} = 0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We induct on $n$. As usual, we can assume that the $D_i$ are very ample. Then
+ \[
+ D \cdot D_2 \cdot \cdots \cdot D_{\dim X} = D|_{D_2} \cdot D_3|_{D_2} \cdot \cdots \cdot D_{\dim X}|_{D_2}.
+ \]
+ Now if $D \equiv 0$ on $X$, then $D|_{D_2} \equiv 0$ on $D_2$. So we are done.
+\end{proof}
+
+\begin{defi}[Neron--Severi group]\index{Neron--Severi group}
+ The \emph{Neron--Severi group} of $X$, written $\NS(X) = N^1(X)$, is
+ \[
+ N^1(X) = \frac{\CaDiv(X)}{\Num_0(X)} = \frac{\CaDiv(X)}{\{D \mid D \equiv 0\}}.
+ \]
+\end{defi}
+
+As in the case of surfaces, we have
+\begin{thm}[Severi]
+ Let $X$ be a projective variety over an algebraically closed field. Then $N^1(X)$ is a finitely-generated torsion free abelian group, hence of the form $\Z^n$.\fakeqed
+\end{thm}
+
+\begin{defi}[Picard number]\index{Picard number}
+ The \emph{Picard number} of $X$ is the rank of $N^1(X)$.
+\end{defi}
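+
+For example, any two hyperplanes in $\P^n$ are linearly equivalent and $H \cdot [\ell] = 1$ for a line $\ell$, so $N^1(\P^n) = \Z[H]$ and the Picard number of $\P^n$ is $1$. On $\P^1 \times \P^1$, the two rulings give $N^1(\P^1 \times \P^1) = \Z^2$, so the Picard number is $2$.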
+
+We will not prove Severi's theorem, but when working over $\C$, we can indicate a proof strategy. Over $\C$, in the analytic topology, we have the exponential sequence
+\[
+ 0 \to \Z \to \mathcal{O}_X^{\mathrm{an}} \to \mathcal{O}_X^{*, \mathrm{an}} \to 0.
+\]
+This gives a long exact sequence
+\[
+ H^1(X, \Z) \to H^1(X, \mathcal{O}_X^{\mathrm{an}}) \to H^1(X, \mathcal{O}_X^{*, \mathrm{an}}) \to H^2(X, \Z).
+\]
+It is not hard to see that $H^i(X, \Z)$ is in fact the same as singular cohomology, and $H^1(\mathcal{O}_X^{*, an})$ is the complex analytic Picard group. By Serre's GAGA theorem, this is the same as the algebraic $\Pic(X)$. Moreover, the map $\Pic(X) \to H^2(X, \Z)$ is just the first Chern class.
+
+One then checks that the intersection product $[C]\cdot D$ is the same as the topological intersection product $[C] \cdot c_1(D)$. So, at least modulo torsion, we have an embedding of the Neron--Severi group into $H^2(X, \Z)$, which is a finitely generated abelian group.
+
+%Also, $H^1(X, \mathcal{O}_X^{\mathrm{an}})/H^1(X, \Z) \cong \mathrm{Alb}(X) \cong \Pic^0(X)$. Thus, $\Pic(X)$ is the extension of an abelian variety by some discrete finite rank abelian Lie group.
+
+In the case of surfaces, we had the Riemann--Roch theorem. We do not have a similarly precise statement for arbitrary varieties, but an ``asymptotic'' version is good enough for what we want.
+\begin{thm}[Asymptotic Riemann--Roch]\index{Asymptotic Riemann--Roch}
+ Let $X$ be a projective normal variety over $K = \bar{K}$. Let $D$ be a Cartier divisor, and $E$ a Weil divisor on $X$. Then $\chi(X, \mathcal{O}_X(mD + E))$ is a numerical polynomial in $m$ (i.e.\ a polynomial with rational coefficients that only takes integral values) of degree at most $n = \dim X$, and
+ \[
+ \chi(X, \mathcal{O}_X(mD + E)) = \frac{D^n}{n!} m^n + \text{lower order terms}.
+ \]
+\end{thm}
+
+\begin{proof}
+ By induction on $\dim X$, we can assume the theorem holds for normal projective varieties of dimension $< n$. We fix $H$ on $X$ very ample such that $H + D$ is very ample. Let $H' \in |H|$ and $G \in |H + D|$ be sufficiently general. We then have short exact sequences
+ \[
+ 0 \to \mathcal{O}_X(mD + E) \to \mathcal{O}_X(mD + E + H) \to \mathcal{O}_{H'}((mD + E + H)|_{H'}) \to 0
+ \]
+ \[
+ 0 \to \mathcal{O}_X((m - 1)D + E) \to \mathcal{O}_X(mD + E + H) \to \mathcal{O}_G((mD + E + H)|_G) \to 0.
+ \]
+ Note that the middle term appears in both equations. So we find that
+ \begin{multline*}
+ \chi(X, \mathcal{O}_X(mD + E)) + \chi (H', \mathcal{O}_{H'}((mD + E + H)|_{H'})) \\
+ = \chi(X, \mathcal{O}_X((m - 1)D + E)) + \chi (G, \mathcal{O}_{G}((mD + E + H)|_G))
+ \end{multline*}
+ Rearranging, we get
+ \begin{multline*}
+ \chi(\mathcal{O}_X(mD + E)) - \chi(\mathcal{O}_X((m - 1)D + E)) \\
+ = \chi(G, \mathcal{O}_G(mD + E + H)) - \chi(H', \mathcal{O}_{H'} (mD + E + H)).
+ \end{multline*}
+ By induction (see the proposition below), the right-hand side is a numerical polynomial of degree at most $n - 1$ with leading term
+ \[
+ \frac{D^{n - 1} \cdot G - D^{n - 1} \cdot H'}{(n - 1)!} m^{n - 1} + \text{lower order terms},
+ \]
+ since $D^{n - 1}\cdot G$ is just the $(n - 1)$-fold self-intersection of $D$ restricted to $G$. But $G - H' \sim D$. So the left-hand side is
+ \[
+ \frac{D^n}{(n - 1)!} m^{n - 1} + \text{lower order terms},
+ \]
+ and summing these successive differences shows $\chi(X, \mathcal{O}_X(mD + E))$ is a numerical polynomial with leading term $\frac{D^n}{n!} m^n$, so we are done.
+\end{proof}
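+
+For example, on $X = \P^n$ with $D = H$ a hyperplane and $E = 0$, we have
+\[
+ \chi(\P^n, \mathcal{O}_{\P^n}(mH)) = \binom{m + n}{n} = \frac{m^n}{n!} + \text{lower order terms},
+\]
+which is a numerical polynomial of degree $n$, matching $H^n = 1$.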
+
+\begin{prop}
+ Let $X$ be a normal projective variety, and $|H|$ a very ample linear system. Then for a general element $G \in |H|$, $G$ is a normal projective variety.\fakeqed
+\end{prop}
+
+Some immediate consequences include
+\begin{prop}
+ Let $X$ be a normal projective variety.
+ \begin{enumerate}
+ \item If $H$ is a very ample Cartier divisor, then
+ \[
+ h^0(X, mH) = \frac{H^n}{n!} m^n + \text{lower order terms}\text{ for }m \gg 0.
+ \]
+ \item If $D$ is any Cartier divisor, then there is some $C \in \R_{>0}$ such that
+ \[
+ h^0(mD) \leq C \cdot m^n\text{ for }m \gg 0.
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item By Serre's theorem, $H^i(\mathcal{O}_X(mH)) = 0$ for $i > 0$ and $m \gg 0$. So we apply asymptotic Riemann--Roch.
+ \item There exists a very ample Cartier divisor $H'$ on $X$ such that $H' + D$ is also very ample. Then
+ \[
+ h^0(mD) \leq h^0(m (H' + D)).\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+
+\subsection{Ample divisors}
+We begin with the following useful lemma:
+\begin{lemma}
+ Let $X, Y$ be projective schemes. If $f: X \to Y$ is a finite morphism of schemes, and $D$ is an ample Cartier divisor on $Y$, then so is $f^* D$.
+\end{lemma}
+
+\begin{proof}
+ Let $\mathcal{F}$ be a coherent sheaf on $X$. Since $f$ is finite, we have $R^i f_* \mathcal{F} = 0$ for all $i > 0$. Then, by the projection formula,
+ \[
+ H^i(\mathcal{F} \otimes f^* \mathcal{O}_Y(mD)) = H^i(f_* \mathcal{F} \otimes \mathcal{O}_Y(mD)) = 0
+ \]
+ for all $i > 0$ and $m \gg 0$. So by Serre's theorem, we know $f^* D$ is ample.
+\end{proof}
+
+Another useful result is
+\begin{prop}
+ Let $X$ be a proper scheme, and $\mathcal{L}$ an invertible sheaf. Then $\mathcal{L}$ is ample iff $\mathcal{L}|_{X_{\red}}$ is ample.
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item[($\Rightarrow$)] If a power of $\mathcal{L}$ induces a closed embedding of $X$, then the map given by the corresponding power of $\mathcal{L}|_{X_{\red}}$ is its composition with the closed embedding $X_{\red} \hookrightarrow X$.
+
+ \item[($\Leftarrow$)] Let $\mathcal{J} \subseteq \mathcal{O}_X$ be the nilradical. By Noetherianness, there exists $n$ such that $\mathcal{J}^n = 0$.
+
+ Fix $\mathcal{F}$ a coherent sheaf. We can filter $\mathcal{F}$ using
+ \[
+ \mathcal{F} \supseteq \mathcal{J}\mathcal{F} \supseteq \mathcal{J}^2 \mathcal{F} \supseteq \cdots \supseteq \mathcal{J}^{n - 1} \mathcal{F} \supseteq \mathcal{J}^n \mathcal{F} = 0.
+ \]
+ For each $j$, we have a short exact sequence
+ \[
+ 0 \to \mathcal{J}^{j + 1} \mathcal{F} \to \mathcal{J}^j \mathcal{F} \to \mathcal{G}_j \to 0.
+ \]
+ This $\mathcal{G}_j$ is really a sheaf on the reduced structure, since $\mathcal{J}$ acts trivially on it. Thus $H^i(\mathcal{G}_j \otimes \mathcal{L}^m) = 0$ for $i > 0$ and large $m$. Thus, by descending induction on $j$, we find that for $i > 0$ and $m \gg 0$, we have
+ \[
+ H^i(\mathcal{J}^j \mathcal{F} \otimes \mathcal{L}^m) = 0.\qedhere
+ \]%\qedhere
+ \end{itemize}
+\end{proof}
+
+The following criterion for ampleness will give us a very good handle on how ample divisors behave:
+\begin{thm}[Nakai's criterion]\index{Nakai's criterion}
+ Let $X$ be a projective variety. Let $D$ be a Cartier divisor on $X$. Then $D$ is ample iff for every integral proper subvariety $V \subseteq X$ (proper means proper scheme, not proper subset), we have
+ \[
+ (D|_V)^{\dim V} = D^{\dim V} [V] > 0.
+ \]
+\end{thm}
+
+Before we prove this, we observe some immediate corollaries:
+\begin{cor}
+ Let $X$ be a projective variety. Then ampleness is a numerical condition, i.e.\ for any Cartier divisors $D_1, D_2$, if $D_1 \equiv D_2$, then $D_1$ is ample iff $D_2$ is ample.
+\end{cor}
+
+\begin{cor}
+ Let $X, Y$ be projective varieties, $f: X \to Y$ a surjective finite morphism of schemes, and $D$ a Cartier divisor on $Y$. Then $D$ is ample iff $f^*D$ is ample.
+\end{cor}
+
+\begin{proof}
+ It remains to prove $\Leftarrow$. If $f$ is finite and surjective, then for every integral subvariety $V \subseteq Y$, there exists an integral subvariety $V' \subseteq f^{-1}(V) \subseteq X$ such that $f|_{V'}: V' \to V$ is a finite surjective morphism. Then we have
+ \[
+ (f^* D)^{\dim V'} [V'] = \deg (f|_{V'}) \cdot D^{\dim V} [V],
+ \]
+ which, by multilinearity, we only have to prove for very ample $D$, where both sides count points. We then conclude by Nakai's criterion.
+\end{proof}
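+
+For example, let $f: \P^1 \to \P^1$ be the degree $2$ morphism $[x_0:x_1] \mapsto [x_0^2:x_1^2]$, and $H$ a point. Then $f^* \mathcal{O}(1) = \mathcal{O}(2)$, so
+\[
+ (f^* H) \cdot [\P^1] = 2 = \deg (f) \cdot (H \cdot [\P^1]),
+\]
+and $f^* H$ is indeed ample.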
+
+\begin{cor}
+ Let $X$ be a projective variety, $D$ a Cartier divisor such that $\mathcal{O}_X(D)$ is globally generated, and
+ \[
+ \Phi: X \to \P(H^0(X, \mathcal{O}_X(D))^*)
+ \]
+ the induced map. Then $D$ is ample iff $\Phi$ is finite.
+\end{cor}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item[$(\Leftarrow)$] If $X \to \Phi(X)$ is finite, then $D = \Phi^* \mathcal{O}(1)$. So this follows from the previous corollary.
+ \item[$(\Rightarrow)$] If $\Phi$ is not finite, then there exists $C \subseteq X$ such that $\Phi(C)$ is a point. Then $D \cdot [C] = \Phi^* \mathcal{O}(1) \cdot [C] = 0$, by the push-pull formula. So by Nakai's criterion, $D$ is not ample.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{proof}[Proof of Nakai's criterion]\leavevmode
+ \begin{itemize}
+ \item[$(\Rightarrow)$] If $D$ is ample, then $mD$ is very ample for some $m$. Then by multilinearity, we may assume $D$ is very ample. So we have a closed embedding
+ \[
+ \Phi: X \to \P(H^0(D)^*).
+ \]
+ If $V \subseteq X$ is a closed integral subvariety, then $D^{\dim V} \cdot [V] = (D|_V)^{\dim V}$. But this is just $\deg_{\Phi(V)} \mathcal{O}(1) > 0$.
+ \item[$(\Leftarrow)$] We proceed by induction on $\dim X$, noting that $\dim X = 1$ is trivial. By induction, for any proper subvariety $V$, we know that $D|_V$ is ample.
+
+ The key of the proof is to show that $\mathcal{O}_X(mD)$ is globally generated for large $m$. If so, the induced map $X \to \P(|mD|)$ cannot contract any curve $C$, or else $mD \cdot C = 0$. So this is a finite map, and $mD$ is the pullback of the ample divisor $\mathcal{O}_{\P(|mD|)}(1)$ along a finite map, hence is ample.
+
+ We first reduce to the case where $D$ is effective. As usual, write $D \sim H_1 - H_2$ with $H_i$ very ample effective divisors. We have exact sequences
+ \[
+ 0 \to \mathcal{O}_X(mD - H_1) \to \mathcal{O}_X(mD) \to \mathcal{O}_{H_1}(mD) \to 0
+ \]
+ \[
+ 0 \to \mathcal{O}_X(mD - H_1) \to \mathcal{O}_X((m - 1)D) \to \mathcal{O}_{H_2}((m - 1)D) \to 0.
+ \]
+ We know $D|_{H_i}$ is ample by induction. So the long exact sequences imply that for all $m \gg 0$ and $j \geq 2$, we have
+ \[
+ H^j(mD) \cong H^j(mD - H_1) = H^j((m - 1)D).
+ \]
+ So we know that
+ \[
+ \chi(mD) = h^0(mD) - h^1(mD) + \text{constant}
+ \]
+ for all $m \gg 0$. On the other hand, since $X$ is an integral subvariety of itself, $D^n > 0$, and so asymptotic Riemann--Roch tells us $h^0(mD) > 0$ for all $m \gg 0$. Since $D$ is ample iff $mD$ is ample, we can assume $D$ is effective.
+
+ To show that $mD$ is globally generated, we observe that it suffices to show that this is true in a neighbourhood of the support of $D$, since away from $D$, the sheaf is globally generated by the tautological section that vanishes along $D$ with multiplicity $m$.
+
+ Moreover, we know $mD|_D$ is very ample, and in particular globally generated for large $m$, by induction (the previous proposition allows us to pass to $D_{\red}$ if necessary). Thus, it suffices to show that
+ \[
+ H^0(\mathcal{O}_X(mD)) \to H^0(\mathcal{O}_D(mD))
+ \]
+ is surjective.
+
+ To show this, we use the short exact sequence
+ \[
+ 0 \to \mathcal{O}_X((m - 1)D) \to \mathcal{O}_X(mD) \to \mathcal{O}_D(mD) \to 0.
+ \]
+ For $i > 0$ and large $m$, we know $H^i(\mathcal{O}_D(mD)) = 0$. So we have surjections
+ \[
+ H^1((m - 1)D) \twoheadrightarrow H^1(mD)
+ \]
+ for $m$ large enough. But these are finite-dimensional vector spaces. So for $m$ sufficiently large, this map is an isomorphism. Then $H^0(\mathcal{O}_X(mD)) \to H^0(\mathcal{O}_D(mD))$ is a surjection by exactness.\qedhere
+ \end{itemize}
+\end{proof}
+
+Recall that we defined $\Pic_K(X)$ for $K = \Q$ or $\R$. We can extend the intersection product linearly, and all the properties of the intersection product are preserved. We write
+\[
+ N^1_K(X) = \Pic_K(X) / \Num_{0, K}(X) = \NS(X) \otimes K
+\]
+where, unsurprisingly,
+\[
+ \Num_{0, K}(X) = \{D \in \Pic_K(X): D \cdot C = 0\text{ for all integral curves }C \subseteq X\} = \Num_0 \otimes K.
+\]
+
+We now want to define ampleness for such divisors.
+
+\begin{prop}
+ Let $D \in \CaDiv_\Q(X)$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $cD$ is an ample integral divisor for some $c \in \N_{>0}$.
+ \item $D = \sum c_i D_i$, where $c_i \in \Q_{>0}$ and $D_i$ are ample Cartier divisors.
+ \item $D$ satisfies Nakai's criterion. That is, $D^{\dim V} [V] > 0$ for all integral subvarieties $V \subseteq X$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ It is easy to see that (i) and (ii) are equivalent. It is also easy to see that (i) and (iii) are equivalent.
+\end{proof}
+
+We write $\Amp(X) \subseteq N_\Q^1(X)$ for the cone given by ample divisors.
+
+\begin{lemma}
+ A positive linear combination of ample divisors is ample.
+\end{lemma}
+
+\begin{proof}
+ Let $H_1, H_2$ be ample. Then for $\lambda_1, \lambda_2 > 0$, we have
+ \[
+ (\lambda_1 H_1 + \lambda_2 H_2)^{\dim V}[V] = \left(\sum \binom{\dim V}{p} \lambda_1^p \lambda_2^{\dim V - p}\; H_1^p \cdot H_2^{\dim V - p}\right)[V]
+ \]
+ Since any restriction of an ample divisor to an integral subscheme is ample, and multiplying with $H$ corresponds to restricting to a hyperplane section, we know all the terms are positive.
+\end{proof}
+
+\begin{prop}
+ Ampleness is an open condition. That is, if $D$ is ample and $E_1, \ldots, E_r$ are Cartier divisors, then for all $|\varepsilon_i| \ll 1$, the divisor $D + \sum \varepsilon_i E_i$ is still ample.
+\end{prop}
+
+\begin{proof}
+ By induction, it suffices to check the case $r = 1$. Take $m \in \N$ such that both $mD \pm E_1$ are still ample. This is the same as saying that $D \pm \frac{1}{m} E_1$ are still ample.
+
+ Then for $0 < \varepsilon_1 < \frac{1}{m}$, we can write
+ \[
+ D + \varepsilon_1 E_1 = (1 - q) D + q \left(D + \frac{1}{m} E_1\right)
+ \]
+ with $q = m \varepsilon_1 \in (0, 1)$, a positive combination of ample divisors; for negative $\varepsilon_1$, use $D - \frac{1}{m} E_1$ instead.
+\end{proof}
+
+Similarly, if $D \in \CaDiv_\R(X)$, then we say $D$ is ample iff
+\[
+ D = \sum a_i D_i,
+\]
+where $D_i$ are ample divisors and $a_i > 0$.
+
+\begin{prop}
+ Being ample is a numerical property over $\R$, i.e.\ if $D_1, D_2 \in \CaDiv_\R(X)$ are such that $D_1 \equiv D_2$, then $D_1$ is ample iff $D_2$ is ample.
+\end{prop}
+
+\begin{proof}
+ We already know that this is true over $\Q$ by Nakai's criterion. Then for real coefficients, we want to show that if $D$ is ample, $E$ is numerically trivial and $t \in \R$, then $D + tE$ is ample. To do so, pick $t_1 < t < t_2$ with $t_i \in \Q$, and set
+ \[
+ \lambda = \frac{t_2 - t}{t_2 - t_1},\quad \mu = \frac{t - t_1}{t_2 - t_1},
+ \]
+ so that $\lambda, \mu > 0$ and $\lambda (D + t_1 E) + \mu (D + t_2 E) = D + t E$.
+ Then we are done by checking Nakai's criterion.
+\end{proof}
+
+Similarly, we have
+\begin{prop}
+ Let $H$ be an ample $\R$-divisor. Then for all $\R$-divisors $E_1, \ldots, E_r$ and all $|\varepsilon_i| \ll 1$, the divisor $H + \sum \varepsilon_i E_i$ is still ample.\fakeqed
+\end{prop}
+
+\subsection{Nef divisors}
+We can weaken the definition of an ample divisor to obtain the notion of a nef divisor.
+
+\begin{defi}[Numerically effective divisor]\index{numerically effective divisor}\index{nef divisor}
+ Let $X$ be a proper scheme, and $D$ a Cartier divisor. Then $D$ is \emph{numerically effective (nef)} if $D \cdot C \geq 0$ for all integral curves $C \subseteq X$.
+\end{defi}
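+
+For example, if $X$ is a smooth projective surface and $C \subseteq X$ is an irreducible curve with $C^2 \geq 0$, then $C$ is nef, since for every integral curve $C' \subseteq X$,
+\[
+ C \cdot C' =
+ \begin{cases}
+ C^2 \geq 0 & C' = C,\\
+ \#(C \cap C') \geq 0 & C' \not= C,
+ \end{cases}
+\]
+where the second case uses that distinct irreducible curves intersect properly.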
+
+Similar to the case of ample divisors, we have
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $D$ is nef iff $D|_{X_{\red}}$ is nef.
+ \item $D$ is nef iff $D|_{X_i}$ is nef for all irreducible components $X_i$.
+ \item If $V \subseteq X$ is a proper subscheme, and $D$ is nef, then $D|_V$ is nef.
+ \item If $f: X \to Y$ is a finite morphism of proper schemes, and $D$ is nef on $Y$, then $f^* D$ is nef on $X$. The converse is true if $f$ is surjective.\fakeqed
+ \end{enumerate}
+\end{prop}
+The last one follows from the analogue of Nakai's criterion, called Kleiman's criterion.
+\begin{thm}[Kleiman's criterion]\index{Kleiman's criterion}
+ Let $X$ be a proper scheme, and $D$ an $\R$-Cartier divisor. Then $D$ is nef iff $D^{\dim V}[V] \geq 0$ for all irreducible proper subvarieties $V \subseteq X$.
+\end{thm}
+
+\begin{cor}
+ Let $X$ be a projective scheme, and $D$ be a nef $\R$-divisor on $X$, and $H$ be a Cartier divisor on $X$.
+ \begin{enumerate}
+ \item If $H$ is ample, then $D + \varepsilon H$ is also ample for all $\varepsilon > 0$.
+ \item If $D + \varepsilon H$ is ample for all $0 < \varepsilon \ll 1$, then $D$ is nef.
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We may assume $H$ is very ample. By Nakai's criterion this is equivalent to requiring
+ \[
+ (D + \varepsilon H)^{\dim V}\cdot [V] = \left(\sum \binom{\dim V}{p} \varepsilon^p D^{\dim V - p} H^p\right)[V] > 0.
+ \]
+ Since any restriction of a nef divisor to any integral subscheme is also nef, and multiplying with $H$ corresponds to restricting to a hyperplane section, we know the terms that involve $D$ are non-negative. The $\varepsilon^{\dim V} H^{\dim V}$ term is positive. So we are done.
+ \item We know $(D + \varepsilon H) \cdot C > 0$ for all positive $\varepsilon$ sufficiently small. Taking $\varepsilon \to 0$, we know $D \cdot C \geq 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{cor}
+ $\Nef(X) = \overline{\Amp(X)}$ and $\mathrm{int}(\Nef(X)) = \Amp(X)$.
+\end{cor}
+
+\begin{proof}
+ We know $\Amp(X) \subseteq \Nef(X)$ and $\Amp(X)$ is open. So this implies $\Amp(X) \subseteq \mathrm{int}(\Nef(X))$, and thus $\overline{\Amp(X)} \subseteq \Nef(X)$.
+
+ Conversely, if $D \in \mathrm{int}(\Nef(X))$, we fix $H$ ample. Then $D - tH \in \Nef(X)$ for small $t$, by definition of interior. Then $D = (D - tH) + tH$ is ample. So $\Amp(X) \supseteq \mathrm{int}(\Nef(X))$.
+\end{proof}
+
+\begin{proof}[Proof of Kleiman's criterion]
+ We may assume that $X$ is an integral projective scheme. The $\Leftarrow$ direction is immediate. To prove the other direction, since the criterion is a closed condition, we may assume $D \in \Div_\Q(X)$. Moreover, by induction, we may assume that $D^{\dim V}[V] \geq 0$ for all $V$ strictly contained in $X$, and we have to show that $D^{\dim X} \geq 0$. Suppose not, i.e.\ $D^{\dim X} < 0$.
+
+ Fix a very ample Cartier divisor $H$, and consider the polynomial
+ \begin{multline*}
+ P(t) = (D + tH)^{\dim X} = D^{\dim X} + \sum_{i = 1}^{\dim X - 1} t^i \binom{\dim X}{i} H^i D^{\dim X - i} \\
+ + t^{\dim X} H^{\dim X}.
+ \end{multline*}
+ The first term is negative; the last term is positive; and the other terms are non-negative by induction since $H$ is very ample.
+
+ Since $P(0) = D^{\dim X} < 0$ and $P$ is increasing on $\R_{>0}$, there exists a unique $\bar{t} > 0$ such that $P(\bar{t}) = 0$. We can also write
+ \[
+ P(t) = (D + tH) \cdot (D + tH)^{\dim X - 1} = R(t) + t Q(t),
+ \]
+ where
+ \[
+ R(t) = D \cdot (D + tH)^{\dim X - 1},\quad Q(t) = H \cdot (D + tH)^{\dim X - 1}.
+ \]
+ We shall prove that $R(\bar{t}) \geq 0$ and $Q(\bar{t}) > 0$, which is a contradiction.
+
+ We first look at $Q(t)$, which is
+ \[
+ Q(t) = \sum_{i = 0}^{\dim X - 1} t^i \binom{\dim X - 1}{i} H^{i + 1} D^{\dim X - 1 - i},
+ \]
+ which, as we argued, is a sum of non-negative terms and a positive term.
+
+ To understand $R(t)$, we look at
+ \[
+ R(t) = D \cdot (D + tH)^{\dim X - 1}.
+ \]
+ Note that so far, we haven't used the assumption that $D$ is nef. If $t > \bar{t}$, then $(D + tH)^{\dim X} > 0$, and $(D + tH)^{\dim V}[V] > 0$ for every proper integral subvariety, by induction (and binomial expansion). Then by Nakai's criterion, $D + tH$ is ample, so a multiple of $(D + tH)^{\dim X - 1}$ is represented by an effective curve. Since $D$ is nef, this gives $R(t) \geq 0$ for all $t > \bar{t}$, and taking the limit $t \to \bar{t}$ gives $R(\bar{t}) \geq 0$.
+\end{proof}
+
+It is convenient to make the definition
+
+\begin{defi}[1-cycles]\index{1-cycle}\index{$Z_1(X)_R$}
+ For $K = \Z, \Q$ or $\R$, we define the space of $1$-cycles to be
+ \[
+ Z_1(X)_K = \left\{\sum a_i C_i : a_i \in K, C_i \subseteq X\text{ integral proper curves}\right\}.
+ \]
+\end{defi}
+We then have a pairing
+\begin{align*}
+ N^1_K(X) \times Z_1(X)_K &\to K\\
+ (D, E) &\mapsto D \cdot E.
+\end{align*}
+We know that $N^1_K(X)$ is finite-dimensional, but $Z_1(X)_K$ is certainly uncountable. So there is no hope that this intersection pairing is perfect.
+\begin{defi}[Numerical equivalence]\index{numerical equivalence}\index{$\equiv$}
+ Let $X$ be a proper scheme, and $C_1, C_2 \in Z_1(X)_K$. Then $C_1 \equiv C_2$ iff
+ \[
+ D \cdot C_1 = D \cdot C_2
+ \]
+ for all $D \in N^1_K(X)$.
+\end{defi}
+
+\begin{defi}[$N_1(X)_K$]\index{$N_1(X)_K$}
+ We define $N_1(X)_K$ to be the $K$-module of $1$-cycles with coefficients in $K$, modulo numerical equivalence.
+\end{defi}
+Thus we get a pairing $N^1(X)_K \times N_1(X)_K \to K$.
+
+\begin{defi}[Effective curves]\index{effective curves}\index{$\mathrm{NE}(X)$}
+ We define the cone of effective curves to be
+ \[
+ \NE(X) = \left\{\gamma \in N_1(X)_\R : \gamma \equiv \sum a_i [C_i],\ a_i > 0,\ C_i \subseteq X\text{ integral curves}\right\}.
+ \]
+\end{defi}
+This cone detects nef divisors, since by definition, an $\R$-divisor $D$ is nef iff $D \cdot \gamma \geq 0$ for all $\gamma \in \NE(X)$. In general, we can define
+
+\begin{defi}[Dual cone]\index{dual cone}
+ Let $V$ be a finite-dimensional vector space over $\R$ and $V^*$ the dual of $V$. If $C \subseteq V$ is a cone, then we define the \emph{dual cone} $C^\vee \subseteq V^*$ by
+ \[
+ C^\vee = \{f \in V^* : f(c) \geq 0\text{ for all }c \in C\}.
+ \]
+\end{defi}
+Then we see that
+\begin{prop}
+ $\Nef(X) = \NE(X)^\vee$.\fakeqed
+\end{prop}
+
+It is also easy to see that $C^{\vee \vee} = \bar{C}$. So
+
+\begin{prop}
+ $\overline{\NE(X)} = \Nef(X)^\vee$.\fakeqed
+\end{prop}
+
+\begin{thm}[Kleiman's criterion]
+ Let $X$ be a projective scheme and $D \in \CaDiv_\R(X)$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $D$ is ample
+ \item $D|_{\overline{\NE(X)}} > 0$, i.e.\ $D \cdot \gamma > 0$ for all non-zero $\gamma \in \overline{\NE(X)}$.
+ \item $\mathbb{S}_1 \cap \overline{\NE(X)} \subseteq \mathbb{S}_1 \cap D_{>0}$, where $\mathbb{S}_1 \subseteq N_1(X)_\R$ is the unit sphere under some choice of norm, and $D_{>0} = \{\gamma : D \cdot \gamma > 0\}$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (1) $\Rightarrow$ (2): Trivial.
+ \item (2) $\Rightarrow$ (1): If $D|_{\overline{\NE(X)}} > 0$, then $D \in \mathrm{int} (\Nef(X)) = \Amp(X)$.
+ \item (2) $\Leftrightarrow$ (3): By rescaling, positivity on all non-zero $\gamma$ is the same as positivity on the slice $\mathbb{S}_1 \cap \overline{\NE(X)}$.\qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{prop}
+ Let $X$ be a projective scheme, and $D, H \in N^1_\R(X)$. Assume that $H$ is ample. Then $D$ is ample iff there exists $\varepsilon > 0$ such that
+ \[
+ \frac{D \cdot C}{H \cdot C} \geq \varepsilon
+ \]
+ for all integral curves $C \subseteq X$.
+\end{prop}
+
+\begin{proof}
+ The inequality is the same as saying $(D - \varepsilon H) \cdot C \geq 0$ for all integral curves $C$, i.e.\ $D - \varepsilon H$ is nef. Then $D = (D - \varepsilon H) + \varepsilon H$ is the sum of a nef and an ample divisor, hence ample.
+\end{proof}
+
+\begin{eg}
+ Let $X$ be a smooth projective surface. Then
+ \[
+ N^1(X)_\R = N_1(X)_\R,
+ \]
+ since $\dim X = 2$ and $X$ is smooth (so that we can identify Weil and Cartier divisors). We certainly have
+ \[
+ \Nef(X) \subseteq \overline{\NE(X)}.
+ \]
+ This can be proper. For example, if $C \subseteq X$ is such that $C$ is irreducible with $C^2 < 0$, then $C$ is not nef.
+
+ What happens when we have such a curve? Suppose $\gamma = \sum a_i C_i \in \NE(X)$ and $\gamma \cdot C < 0$. Then $C$ must be one of the $C_i$, since the intersection with any other curve is non-negative. So we can write
+ \[
+ \gamma = a_C C + \sum_{i = 2}^k a_i C_i,
+ \]
+ where $a_C, a_i > 0$ and $C_i \not= C$ for all $i \geq 2$.
+
+ So for any element in $\gamma \in \NE(X)$, we can write
+ \[
+ \gamma = \lambda C + \gamma',
+ \]
+ where $\lambda \geq 0$ and $\gamma' \in \overline{\NE(X)}_{C \geq 0}$, i.e.\ $\gamma' \cdot C \geq 0$. So we can write
+ \[
+ \overline{\NE(X)} = \R^+[C] + \overline{\NE(X)}_{C \geq 0}.
+ \]
+ Pictorially, if we draw a cross-section of $\overline{\NE(X)}$, then we get something like this:
+ \begin{center}
+ \begin{tikzpicture}[rotate=30]
+ \draw [dashed] (0, -1.5) -- (0, 1.5) node [right] {\small$\gamma \cdot C = 0$};
+ \draw (0, 1) -- (1, 0.3) node [circ] {} node [right] {$C$} -- (0, -1);
+ \draw plot [smooth] coordinates {(0, 1) (-1, 0.8) (-1.2, 0.1) (-1, -0.7) (0, -1)};
+ \end{tikzpicture}
+ \end{center}
+ To the left of the dashed line is $\overline{\NE(X)}_{C \geq 0}$, about which we can say nothing, and to the right of it is a ``cone'' over $C$, generated by interpolating between $C$ and the elements of $\overline{\NE(X)}_{C \geq 0}$.
+
+ Thus, $[C] \in N_1(X)_\R$ is an extremal ray of $\overline{\NE(X)}$. In other words, if
+ \[
+ \lambda [C] = \mu_1 \gamma_1 + \mu_2 \gamma_2
+ \]
+ where $\mu_i > 0$ and $\gamma_i \in \overline{\NE(X)}$, then each $\gamma_i$ is a non-negative multiple of $[C]$.
+\end{eg}
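+
+For a concrete instance, let $X = \Bl_p \P^2$, with $E$ the exceptional curve and $H$ the pullback of a line, so that $H^2 = 1$, $E^2 = -1$ and $H \cdot E = 0$. The strict transform of a line through $p$ has class $H - E$, with $(H - E)^2 = 0$. One can check that
+\[
+ \overline{\NE(X)} = \R^+[E] + \R^+[H - E],
+\]
+with $[E]$ an extremal ray as above (since $E^2 < 0$), and dually $\Nef(X) = \R^+[H] + \R^+[H - E]$.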
+
+%\begin{eg}
+% We continue to think about surfaces. One way to construct surfaces is as follows: Take a smooth projective curve of genus $g$. Let $U$ be a vector bundle on $C$ of rank $2$ and degree $0$. We can then define $X = \P(U) \overset{\pi}{\to} C$, and $X$ is a surface.
+%
+% One can show using the localization sequence that
+% \[
+% N^1(X)_\R = \R[F] + \R[\mathcal{O}_{\P(U)}(1)],
+% \]
+% where $F$ is a fiber of $\pi$. We write $\psi$ for $\mathcal{O}_{\P(U)}(1)$. It is clear that $F^2 = 0$, since different fibers don't intersect, and also $F \cdot \psi = 1$, since $\mathcal{O}_{\P(U)}(1)$ restricted to $F$ is just $\mathcal{O}(1)$. We also have $\psi^2 = \deg U = 0$. So the intersection product is given by
+% \[
+% (aF + b\psi)^2 = 2 ab.
+% \]
+% So we find that $\Nef(X)$ is contained in the first first quartile $\R^+ [F] + \R^+ [\psi]$.
+%
+% Note that there is a one-to-one correspondence between sections of $\pi$ and splittings
+% \[
+% 0 \to L_1 \to V \to L_2 \to 0\tag{$*$}
+% \]
+% where $L_1, L_2$ are line bundles. Since $V$ has degree zero, we always have $\deg L_1 = - \deg L_2$. A section of $\pi$ gives a curve $C \subseteq X$ such that $C \cdot F = 1$. Thus, $C$ is of the form
+% \[
+% [C] = [\xi + aF]
+% \]
+% for some $a$. To determine this $a$, note that $C \in H^0(\mathcal{O}_{\P_C(V)}(1) \otimes L_1^\vee)$. So we have $a = \deg L_2$.
+%
+% Then $C^2 = 2a$. Now if $a = \deg (L_2) < 0$, then $[C]$ is an extremal ray of $\overline{\NE(X)}$. Since $F^2 = 0$, we know $[F]$ is the extremal ray, and the Nef cone is the dual, hence has extremal rays $F$ and $-aF + \xi$.
+%
+% It follows from this description that $\NE(X) = \overline{\NE(X)}$. If $V$ is semistable, i.e.
+% \[
+% \deg (L_2) \geq 0\text{ in }(*),
+% \]
+% then $\Sym^n V$ are also semistable. % i.e.\ if there is a sujection $\Sym^n V \twoheadrightarrow L$, then $\deg (O) \geq 0$.
+%
+% Now a curve $E \subseteq X$ corresponds to sections of $\mathcal{O}_{\P_V(C)} (n) \otimes \pi^*(A)$ for $A$ a line bundle on $C$. Then the global sections of this correspond to $H^0(C, \Sym^n V \otimes A)$. If $h^0 \not= 0$, then semi-stability means $\deg A \geq 0$. So $\overline{\NE(X)} = \R^+ \xi + \R^+ F$.
+%
+% Is $\NE(X)$ closed? In other words, is $\R^+ \xi$ spanned by an effective $1$-cycle? If the answer is no, then we see an explicit example that using Kleinmann's criterion, we actually need to use $\overline{\NE(X)}$.
+%
+% The answer is actually no in general.
+%\end{eg}
+%
+%\begin{thm}[Hartshorne]
+% If $g(C) \geq 2$, then there exists a semi-stable vector bundle $V$ of degree $0$ over $C$ such that $H^0(\Sym^nV \otimes A) = 0$ for all $A \in \Pic^0(C)$ and all $n \geq 0$.\fakeqed
+%\end{thm}
+%In fact, this is a generic phenomenon.
+
+In fact, in general, we have
+\begin{thm}[Cone theorem]
+ Let $X$ be a smooth projective variety over $\C$. Then there exist rational curves $\{C_i\}_{i \in I}$ such that
+ \[
+ \overline{\NE(X)} = \overline{\NE}_{K_X \geq 0} + \sum_{i \in I} \R^+ [C_i]
+ \]
+ where $\overline{\NE}_{K_X \geq 0} = \{\gamma \in \overline{\NE(X)} : K_X \cdot \gamma \geq 0\}$. Further, at most countably many $C_i$'s are needed, and their accumulation points all lie on $K_X^\perp$.
+\end{thm}
+
+%\begin{eg}
+% If $X$ is a surface. If $K_X$ is not nef, either we can find a curve $C$ with
+% \begin{enumerate}
+% \item $K_X \cdot C = -1$, where we can blow down the curve;
+% \item $K_X \cdot C = -2$, then $X \to C $ is a $\P^1$-bundle;
+% \item $K_X \cdot C = -3$, where we are $\P^2$.
+% \end{enumerate}
+% If we know a priori that (ii) and (iii) are not the case, then we can keep blowing down until either $K_X$ is nef or (ii) or (iii). Then for some large $m$, we have $|m K_{X_m}| \not= 0$ and base-point free. This then gives a canonical model $Y$ with a map $X_n \to Y$ given by
+% \[
+% \Proj\left(\bigoplus_{m \geq 0} H^0(mK_X)\right) = \Proj \left(\bigoplus_{m \geq 0} H^0 (mK_X)\right).
+% \]
+%\end{eg}
+%This is generalized to things that are not surfaces. % minimal model program
+
+\subsection{Kodaira dimension}
+Let $X$ be a normal projective variety, and $\mathcal{L}$ a line bundle with $|\mathcal{L}| \not= 0$. We then get a rational map
+\[
+ \varphi_{|\mathcal{L}|}: X \dashrightarrow \P(H^0(\mathcal{L})).
+\]
+
+More generally, for each $m > 0$, we can form $\mathcal{L}^{\otimes m}$, and get a rational map $\varphi_{\mathcal{L}^{\otimes m}}$. What can we say about the limiting behaviour as $m \to \infty$?
+%
+%We can do the same for all $m > 0$ with $\mathcal{L}^{\otimes m}$. What can we say the limiting behaviour of $m \to \infty$? In particular, do the $\varphi_{|\mathcal{L}^{\otimes m}|}$ display eventually the same behaviour? We have a map
+%\[
+% \psi: \bigotimes^m H^0(\mathcal{L}) \to H^0(\mathcal{L}^{\otimes m}).
+%\]
+%Write $V_m$ for the image of $\psi$. We then have a composition
+%\[
+% \begin{tikzcd}
+% X \ar[r, dashed, "\varphi_{|\mathcal{L}^{\otimes m}|}"] & \P(H^0(\mathcal{L}^{\otimes m})) \ar[r, "\mathrm{proj}"] & \P(V_m)
+% \end{tikzcd}
+%\]
+%The composition is then just $\varphi_{|\mathcal{L}|}$ followed by the $m$th Veronese embedding.
+
+\begin{thm}[Iitaka]
+ Let $X$ be a normal projective variety and $\mathcal{L}$ a line bundle on $X$. Suppose there is an $m$ such that $|\mathcal{L}^{\otimes m}| \not= 0$. Then there exist $X_\infty, Y_\infty$, a map $\psi_\infty: X_\infty \to Y_\infty$ and a birational map $U_\infty: X_\infty \dashrightarrow X$ such that for all $K \gg 0$ with $|\mathcal{L}^{\otimes K}| \not= 0$, we have a commutative diagram
+ \[
+ \begin{tikzcd}
+ X \ar[r, dashed, "\varphi_{|\mathcal{L}^{\otimes K}|}"] & \Im (\varphi_{|\mathcal{L}^{\otimes K}|})\\
+ X_\infty \ar[u, "U_\infty", dashed] \ar[r, "\psi_\infty"] & Y_\infty \ar[u, dashed]
+ \end{tikzcd}
+ \]
+ where the right-hand map is also birational.
+\end{thm}
+So birationally, the maps $\varphi_{|\mathcal{L}^{\otimes K}|}$ stabilize.
+
+\begin{defi}[Kodaira dimension]\index{Kodaira dimension}
+ The \emph{Kodaira dimension} of $\mathcal{L}$ is
+ \[
+ K(X, \mathcal{L}) =
+ \begin{cases}
+ -\infty & h^0(X, \mathcal{L}^{\otimes m}) = 0\text{ for all }m > 0\\
+ \dim (Y_\infty) & \text{otherwise}
+ \end{cases}.
+ \]
+\end{defi}
+This is a very fundamental invariant for understanding $\mathcal{L}$. For example, if $K(X, \mathcal{L}) = K \geq 0$, then there exist $C_1, C_2 \in \R_{>0}$ such that
+\[
+ C_2 m^K \leq h^0(X, \mathcal{L}^{\otimes m}) \leq C_1 m^K
+\]
+for all sufficiently large $m$ with $|\mathcal{L}^{\otimes m}| \not= 0$. In other words, $h^0(X, \mathcal{L}^{\otimes m}) \sim m^K$.
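As a purely illustrative sanity check of this growth rate (not part of the course), take $X = \P^n$ and $\mathcal{L} = \mathcal{O}(1)$: then $h^0(\P^n, \mathcal{O}(m)) = \binom{n + m}{n}$ grows like $m^n/n!$, so $K(X, \mathcal{L}) = n$. The helper name below is ours:

```python
from math import comb

def h0_projective_space(n, m):
    # dim H^0(P^n, O(m)) = number of degree-m monomials in n + 1 variables
    return comb(n + m, n)

# For X = P^2 and L = O(1): h^0 = (m + 1)(m + 2)/2 ~ m^2 / 2, so K(X, L) = 2,
# and h^0 / m^2 is squeezed between constants C_2 and C_1 as in the bounds above.
ratio = h0_projective_space(2, 1000) / 1000 ** 2  # approaches 1/2
```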
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/ramsey_theory.tex b/books/cam/III_L/ramsey_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ffd8480cc1e6e9b66197bea5b7e359c164a1a5d6
--- /dev/null
+++ b/books/cam/III_L/ramsey_theory.tex
@@ -0,0 +1,2742 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2017}
+\def\nlecturer {B.\ P.\ Narayanan}
+\def\ncourse {Ramsey Theory}
+
+\input{header}
+
+\DeclareMathOperator\Seq{Seq}
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+What happens when we cut up a mathematical structure into many `pieces'? How big must the original structure be in order to guarantee that at least one of the pieces has a specific property of interest? These are the kinds of questions at the heart of Ramsey theory. A prototypical result in the area is van der Waerden's theorem, which states that whenever we partition the natural numbers into finitely many classes, there is a class that contains arbitrarily long arithmetic progressions.
+
+The course will cover both classical material and more recent developments in the subject. Some of the classical results that I shall cover include Ramsey's theorem, van der Waerden's theorem and the Hales--Jewett theorem; I shall discuss some applications of these results as well. More recent developments that I hope to cover include the properties of non-Ramsey graphs, topics in geometric Ramsey theory, and finally, connections between Ramsey theory and topological dynamics. I will also indicate a number of open problems.
+
+\subsubsection*{Pre-requisites}
+There are almost no pre-requisites and the material presented in this course will, by and large, be self-contained. However, students familiar with the basic notions of graph theory and point-set topology are bound to find the course easier.%
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Vaguely, the main idea of Ramsey theory is as follows --- we have some object $X$ that comes with some structure, and then we break $X$ up into finitely many pieces. Can we find a piece that retains some of the structure of $X$? Usually, we phrase this in terms of \emph{colourings}. We pick a list of finitely many colours, and colour each element of $X$. Then we want to find a monochromatic subset of $X$ that satisfies some properties.
+
+The most classical example of this is a graph colouring problem. We take a graph, say
+\begin{center}
+ \begin{tikzpicture}
+
+ \node [circ] (0) at (1, 0) {};
+ \node [circ] (1) at (0, 0) {};
+ \node [circ] (2) at (-0.707, -0.707) {};
+ \node [circ] (3) at (-0.707, -1.707) {};
+ \node [circ] (4) at (0, -2.414) {};
+ \node [circ] (5) at (1, -2.414) {};
+ \node [circ] (6) at (1.707, -1.707) {};
+ \node [circ] (7) at (1.707, -0.707) {};
+
+ \draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);
+ \draw (0) -- (2) -- (4) -- (6) -- (0);
+ \draw (1) -- (3) -- (5) -- (7) -- (1);
+ \draw (0) -- (3) -- (6) -- (1) -- (4) -- (7) -- (2) -- (5) -- (0);
+ \draw (0) -- (4);
+ \draw (1) -- (5);
+ \draw (2) -- (6);
+ \draw (3) -- (7);
+ \end{tikzpicture}
+\end{center}
+We then colour each edge with either red or blue:
+\begin{center}
+ \begin{tikzpicture}
+
+ \node [circ] (0) at (1, 0) {};
+ \node [circ] (1) at (0, 0) {};
+ \node [circ] (2) at (-0.707, -0.707) {};
+ \node [circ] (3) at (-0.707, -1.707) {};
+ \node [circ] (4) at (0, -2.414) {};
+ \node [circ] (5) at (1, -2.414) {};
+ \node [circ] (6) at (1.707, -1.707) {};
+ \node [circ] (7) at (1.707, -0.707) {};
+
+ \draw [red] (3) -- (4);
+ \draw [red] (5) -- (0);
+ \draw [red] (5) -- (6);
+ \draw [red] (1) -- (5);
+ \draw [red] (0) -- (2);
+ \draw [red] (1) -- (3);
+ \draw [red] (3) -- (5);
+ \draw [red] (2) -- (5);
+ \draw [red] (5) -- (7);
+ \draw [red] (1) -- (2);
+ \draw [red] (7) -- (2);
+ \draw [red] (2) -- (3);
+ \draw [red] (6) -- (0);
+ \draw [red] (7) -- (0);
+ \draw [red] (6) -- (7);
+ \draw [red] (7) -- (1);
+ \draw [red] (4) -- (7);
+ \draw [blue] (0) -- (1);
+ \draw [blue] (1) -- (4);
+ \draw [blue] (2) -- (4);
+ \draw [blue] (0) -- (4);
+ \draw [blue] (3) -- (7);
+ \draw [blue] (4) -- (5);
+ \draw [blue] (6) -- (1);
+ \draw [blue] (3) -- (6);
+ \draw [blue] (0) -- (3);
+ \draw [blue] (2) -- (6);
+ \draw [blue] (4) -- (6);
+ \end{tikzpicture}
+\end{center}
+We now try to find a (complete) \emph{subgraph} that is monochromatic, i.e.\ a subgraph all of whose edges are the same colour. With a bit of effort, we can find a red monochromatic subgraph of size $4$:
+\begin{center}
+ \begin{tikzpicture}
+
+ \node [circ] (1) at (0, 0) {};
+ \node [circ] (2) at (-0.707, -0.707) {};
+ \node [circ] (5) at (1, -2.414) {};
+ \node [circ] (7) at (1.707, -0.707) {};
+
+ \draw [red] (1) -- (5);
+ \draw [red] (2) -- (5);
+ \draw [red] (5) -- (7);
+ \draw [red] (1) -- (2);
+ \draw [red] (7) -- (2);
+ \draw [red] (7) -- (1);
+ \end{tikzpicture}
+\end{center}
+Are we always guaranteed that we can find a monochromatic subgraph of size $4$? If not, how big does the initial graph have to be? These are questions we will answer in this course. Often, we will ask the question in an ``infinite'' form --- given an \emph{infinite} graph, is it always possible to find an infinite monochromatic subgraph? The answer is yes, and it turns out we can deduce results about the finitary version just by knowing the infinite version.
+
+These questions about graphs will take up the first chapter of the course. In the remainder of the course, we will discuss Ramsey theory on the integers, which, for the purpose of this course, will always mean $\N = \{1, 2, 3, \cdots\}$. We will now try to answer questions about the \emph{arithmetic structure} of $\N$. For example, when we finitely colour $\N$, can we find an infinite arithmetic progression? With some thought, we can see that the answer is no. However, it turns out we can find arbitrarily long arithmetic progressions.
+
+More generally, suppose we have some system of linear equations
+\begin{align*}
+ 3x + 6y + 2z &= 0\\
+ 2x + 7y + 3z &= 0.
+\end{align*}
+If we finitely colour $\N$, can we always find a monochromatic solution to these equations?
+
+Remarkably, there is a \emph{complete characterization} of all linear systems that always admit monochromatic solutions. This is given by \emph{Rado's theorem}, which is one of the hardest theorems we are going to prove in the course.
+
+If we are brave, then we can try to get the multiplicative structure of $\N$ in the picture. We begin by asking the most naive question --- if we finitely colour $\N$, can we always find $x, y \in \N$, not both $2$, such that $x + y$ and $xy$ are the same colour? This is a very hard question, and the answer turns out to be yes.
+
+\section{Graph Ramsey theory}
+\subsection{Infinite graphs}
+We begin by laying out some notations and definitions.
+\begin{defi}[$\N$ and $\lbrack n\rbrack$]\index{$\N$}\index{$[n]$}
+ We write
+ \[
+ \N = \{1, 2, 3, 4, \cdots\}.
+ \]
+ We also write
+ \[
+ [n] = \{1, 2, 3, \cdots, n\}.
+ \]
+\end{defi}
+
+\begin{notation}
+ For a set $X$, we write $X^{(r)}$\index{$X^{(r)}$} for the subsets of $X$ of size $r$. The elements are known as \term{$r$-sets}.
+\end{notation}
+
+We all know what a graph is, hopefully, but for concreteness we provide a definition:
+\begin{defi}[Graph]\index{graph}
+ A graph $G$ is a pair $(V, E)$, where $E \subseteq V^{(2)}$.
+\end{defi}
+In particular, our graphs are not directed.
+
+We are mostly interested in graphs with $V = \N$ or $[n]$. In this case, we will write an edge $\{i, j\}$ as $ij$, and always assume that $i < j$. More generally, in $\N^{(r)}$, we write $i_1 i_2 \cdots i_r$ for an $r$-set, and implicitly assume $i_1 < i_2 < \cdots < i_r$.
+
+\begin{defi}[$k$-colouring]\index{$k$-colouring}\index{colouring}
+ A $k$-colouring of a set $X$ is a map $c: X \to [k]$.
+\end{defi}
+Of course, we often replace $k$ with some actual set of colours, e.g.\ $\{\mathrm{red}, \mathrm{blue}\}$. We make more obvious definitions:
+
+\begin{defi}[Monochromatic set]\index{monochromatic set}
+ Let $X$ be a set with a $k$-colouring. We say a subset $Y \subseteq X$ is \emph{monochromatic} if the colouring restricted to $Y$ is constant.
+\end{defi}
+
+The first question we are interested in is the following --- if $\N^{(2)}$ is $k$-coloured, can we find a complete infinite subgraph that is monochromatic? In other words, is there an infinite subset $X \subseteq \N$ such that $X^{(2)} \subseteq \N^{(2)}$ is monochromatic?
+
+We begin by trying some examples. The correct way to read these examples is to stare at the colouring, and then try to find a monochromatic subset yourself, instead of immediately reading on to see what the monochromatic subset is.
+\begin{eg}
+ Suppose we had the colouring $c: \N^{(2)} \to \{\mathrm{red}, \mathrm{blue}\}$ by
+ \[
+ c(ij) =
+ \begin{cases}
+ \mathrm{red} & i + j \text{ is even}\\
+ \mathrm{blue} & i + j \text{ is odd}
+ \end{cases}.
+ \]
+ Then we can pick $X = \{2, 4, 6, 8, \cdots \}$, and this is an infinite monochromatic set.
+\end{eg}
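This example is easy to check mechanically on a finite window of $\N$; the following sketch (names ours, purely illustrative) verifies that every edge inside the even numbers is red:

```python
def c(i, j):
    # the parity colouring from the example, for i < j
    return 'red' if (i + j) % 2 == 0 else 'blue'

X = [2 * k for k in range(1, 50)]  # an initial segment of the even numbers
colours_on_X = {c(i, j) for i in X for j in X if i < j}
# every sum of two even numbers is even, so only 'red' appears
```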
+
+\begin{eg}
+ Consider $c: \N^{(2)} \to \{0, 1, 2\}$, where
+ \[
+ c(ij) = \max \{n: 2^n \mid i + j\} \bmod 3
+ \]
+ In this case, taking $X = \{8, 8^2, 8^3, \cdots\}$ gives an infinite monochromatic set of colour $0$.
+\end{eg}
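Again we can verify this claim on a finite window: for $a < b$ we have $8^a + 8^b = 8^a(1 + 8^{b-a})$, so the $2$-adic valuation of the sum is $3a \equiv 0 \pmod 3$. A quick illustrative check (helper names ours):

```python
def v2(n):
    # exponent of the largest power of 2 dividing n
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def c(i, j):
    # the colouring from the example, for i < j
    return v2(i + j) % 3

X = [8 ** k for k in range(1, 8)]  # an initial segment of {8, 8^2, 8^3, ...}
colours_on_X = {c(i, j) for i in X for j in X if i < j}  # only colour 0 appears
```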
+
+\begin{eg}
+ Let $c: \N^{(2)} \to \{\mathrm{red}, \mathrm{blue}\}$ by
+ \[
+ c(ij) =
+ \begin{cases}
+ \mathrm{red} & i + j\text{ has an even number of distinct prime factors}\\
+ \mathrm{blue} & \mathrm{otherwise}
+ \end{cases}
+ \]
+\end{eg}
+It is in fact an open problem to find an explicit infinite monochromatic set for this colouring, or even to determine for which colour an infinite monochromatic set exists. However, we can prove that such a set must exist!
+
+\begin{thm}[Ramsey's theorem]\index{Ramsey's theorem}
+ Whenever we $k$-colour $\N^{(2)}$, there exists an infinite monochromatic set $X$, i.e.\ given any map $c: \N^{(2)} \to [k]$, there exists a subset $X \subseteq \N$ such that $X$ is infinite and $c|_{X^{(2)}}$ is a constant function.
+\end{thm}
+
+\begin{proof}
+ Pick an arbitrary point $a_1 \in \N$. Then by the pigeonhole principle, there must exist an infinite set $\mathcal{B}_1 \subseteq \N \setminus \{a_1\}$ such that all the $a_1\mdash \mathcal{B}_1$ edges (i.e.\ edges of the form $(a_1, b_1)$ with $b_1 \in \mathcal{B}_1$) are of the same colour $c_1$.
+
+ Now again arbitrarily pick an element $a_2 \in \mathcal{B}_1$. Again, we find some infinite $\mathcal{B}_2 \subseteq \mathcal{B}_1$ such that all $a_2\mdash \mathcal{B}_2$ edges are the same colour $c_2$. We proceed inductively.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill opacity=0.2, fill=mred] ellipse (1.5 and 3);
+
+ \draw [fill=white] (0.3, 0) ellipse (0.8 and 2);
+ \draw [fill opacity=0.2, fill=mblue] (0.3, 0) ellipse (0.8 and 2);
+ \draw [fill=white] (0.5, 0) ellipse (0.3 and 1);
+ \draw [fill opacity=0.2, fill=mgreen] (0.5, 0) ellipse (0.3 and 1);
+
+ \node [circ] at (-2.5, 0) {};
+ \node [left] at (-2.5, 0) {$a_1$};
+
+ \foreach \y in {1.8, 1.4, 1, 0.6, 0.2, -0.2, -0.6, -1, -1.4, -1.8} {
+ \pgfmathsetmacro{\x}{-1 + 0.1 * abs(\y)^2};
+ \draw [mred, thick] (-2.5, 0) -- (\x, \y) ;
+ }
+ \node [circ] (a2) at (-0.9, 1) {};
+ \node [anchor=south west] at (a2) {$a_2$};
+ \foreach \y in {1.4, 1, 0.6, 0.2, -0.2, -0.6, -1, -1.4} {
+ \pgfmathsetmacro{\x}{0.1 * abs(\y)^2};
+ \draw [mblue, thick] (a2) -- (\x, \y) ;
+ }
+ \node [circ] (a3) at (0.004, 0.2) {};
+ \foreach \y in {0.8, 0.4, 0, -0.4, -0.8} {
+ \pgfmathsetmacro{\x}{0.4 + 0.1 * abs(\y)^2};
+ \draw [mgreen, thick] (a3) -- (\x, \y) ;
+ }
+ \end{tikzpicture}
+ \end{center}
+ We obtain a sequence $\{a_1, a_2, \cdots\}$ and a sequence of colours $\{c_1, c_2, \cdots\}$ such that $c(a_i, a_j) = c_i$, for $i < j$.
+
+ Now again by the pigeonhole principle, since there are finitely many colours, there exists an infinite subsequence $c_{i_1}, c_{i_2}, \cdots$ that is constant. Then $a_{i_1}, a_{i_2}, \cdots$ is an infinite monochromatic set, since all edges are of the colour $c_{i_1} = c_{i_2} = \cdots$. So we are done.
+\end{proof}
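The construction in the proof can be run verbatim on a finite initial segment $\{1, \ldots, N\}$; of course, the finite version only yields a set as large as the iteration allows, not an infinite one. A minimal sketch (function name ours):

```python
from collections import Counter, defaultdict

def monochromatic_set(colour, N, size):
    """Greedy two-pass extraction from the proof, run on {1, ..., N}.
    `colour` maps a pair i < j to a colour."""
    pool = list(range(1, N + 1))
    picked, picked_colours = [], []   # the a_i and the colours c_i
    while len(pool) > 1:
        a = pool.pop(0)
        # pigeonhole step: split the a-pool edges by colour and keep the
        # largest colour class as the next pool B
        classes = defaultdict(list)
        for b in pool:
            classes[colour(a, b)].append(b)
        best = max(classes, key=lambda k: len(classes[k]))
        picked.append(a)
        picked_colours.append(best)
        pool = classes[best]
    # second pigeonhole step: keep the a_i whose colour c_i is most common
    major = Counter(picked_colours).most_common(1)[0][0]
    X = [a for a, c in zip(picked, picked_colours) if c == major]
    return X[:size]
```

For the parity colouring above, `monochromatic_set(lambda i, j: (i + j) % 2, 200, 5)` recovers a monochromatic $5$-set of even numbers.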
+This proof exhibits many common themes we will see in the different Ramsey theory proofs we will later do. The first is that the proof is highly non-constructive. Not only does it not give us the infinite monochromatic set; it doesn't even tell us what the colour is.
+
+Another feature of the proof is that we did not obtain the infinite monochromatic set in one go. Instead, we had to first pass through that intermediate structure, and then obtain an infinite monochromatic set from that. In future proofs, we might have to go through \emph{many} passes, instead of just two.
+
+This theorem looks rather innocuous, but it has many interesting applications.
+\begin{cor}[Bolzano-Weierstrass theorem]
+ Let $(x_i)_{i \geq 0}$ be a bounded sequence of real numbers. Then it has a convergent subsequence.
+\end{cor}
+
+\begin{proof}
+ We define a colouring $c: \N^{(2)} \to \{\uparrow, \downarrow\}$, where
+ \[
+ c(ij) =
+ \begin{cases}
+ \uparrow & x_i < x_j\\
+ \downarrow & x_j \leq x_i
+ \end{cases}
+ \]
+ Then Ramsey's theorem gives us an infinite monochromatic set, which is the same as a monotone subsequence. Since this is bounded, it must converge.
+\end{proof}
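The same up/down colouring yields a finite procedure for extracting a monotone subsequence, mirroring the greedy construction in the proof of Ramsey's theorem. A sketch (names ours, not an optimal algorithm):

```python
def monotone_subsequence(xs):
    """Extract a monotone subsequence of xs via the up/down colouring
    of index pairs, as in the proof of Bolzano-Weierstrass."""
    def colour(i, j):  # i < j
        return 'up' if xs[i] < xs[j] else 'down'
    pool = list(range(len(xs)))
    picked, cols = [], []
    while len(pool) > 1:
        i = pool.pop(0)
        ups = [j for j in pool if colour(i, j) == 'up']
        downs = [j for j in pool if colour(i, j) == 'down']
        best, cls = ('up', ups) if len(ups) >= len(downs) else ('down', downs)
        picked.append(i)
        cols.append(best)
        pool = cls
    # keep the indices in the more common colour class; 'up' indices give a
    # strictly increasing subsequence, 'down' indices a non-increasing one
    major = max(set(cols), key=cols.count)
    return [xs[i] for i, c in zip(picked, cols) if c == major]
```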
+
+With a very similar technique, we can prove that we can do this for $\N^{(r)}$ for any $r$, and not just $\N^{(2)}$.
+\begin{thm}[Ramsey's theorem for $r$-sets]\index{Ramsey's theorem!for $r$-sets}
+ Whenever $\N^{(r)}$ is $k$-coloured, there exists an infinite monochromatic set, i.e.\ for any $c: \N^{(r)} \to [k]$, there exists an infinite $X \subseteq \N$ such that $c|_{X^{(r)}}$ is constant.
+\end{thm}
+
+We can again try some concrete examples:
+\begin{eg}
+ We define $c: \N^{(3)} \to \{\mathrm{red}, \mathrm{blue}\}$ by
+ \[
+ c(ijk) =
+ \begin{cases}
+ \mathrm{red} & i \mid j + k\\
+ \mathrm{blue} & \mathrm{otherwise}
+ \end{cases}
+ \]
+ Then $X = \{2^0, 2^1, 2^2, \cdots\}$ is an infinite monochromatic set.
+\end{eg}
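Once more this can be checked on a finite window: for $i = 2^a$, $j = 2^b$, $k = 2^c$ with $a < b < c$, we have $j + k = 2^b(1 + 2^{c - b})$, which is divisible by $2^a$. An illustrative check (names ours):

```python
def c(i, j, k):
    # the colouring from the example, for i < j < k
    return 'red' if (j + k) % i == 0 else 'blue'

X = [2 ** t for t in range(8)]  # an initial segment of the powers of 2
colours_on_X = {c(i, j, k) for i in X for j in X for k in X if i < j < k}
# every such triple is red
```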
+
+\begin{proof}
+ We induct on $r$. This is trivial when $r = 1$. Assume $r > 1$. We fix $a_1 \in \N$. We induce a $k$-colouring $c_1$ of $(\N \setminus \{a_1\})^{(r - 1)}$ by
+ \[
+ c_1(F) = c(F \cup \{a_1\}).
+ \]
+ By induction, there exists an infinite $B_1 \subseteq \N \setminus \{a_1\}$ such that $B_1$ is monochromatic for $c_1$, i.e.\ all $r$-sets consisting of $a_1$ together with $r - 1$ elements of $B_1$ have the same colour, say $d_1$.
+
+ We proceed inductively as before. We get $a_1, a_2, a_3, \cdots$ and colours $d_1, d_2, \cdots$ such that for any $r$-set $F$ contained in $\{a_1, a_2, \cdots\}$, we have $c(F) = d_i$, where $a_i = \min F$.
+
+ Then again, by the pigeonhole principle there is a constant subsequence $d_{i_1}, d_{i_2}, d_{i_3}, \cdots$, and our monochromatic set is $\{a_{i_1}, a_{i_2}, a_{i_3}, \cdots\}$.
+\end{proof}
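The induction in this proof also translates directly into a finite, recursive procedure: pick the least element, recursively extract a set monochromatic for the induced $(r-1)$-set colouring, and repeat inside it. A sketch (names ours; a greedy finite version of the proof, not an efficient algorithm):

```python
from collections import Counter, defaultdict

def ramsey_extract(colour, elements, r):
    """`colour` maps an increasing r-tuple to a colour; returns an
    increasing list all of whose r-subsets get the same colour."""
    elements = list(elements)
    if r == 1:
        classes = defaultdict(list)
        for x in elements:
            classes[colour((x,))].append(x)
        return max(classes.values(), key=len)
    pool = elements
    picked, cols = [], []
    while len(pool) >= r:
        a = pool.pop(0)
        # the induced (r-1)-set colouring c_1 from the proof
        induced = lambda f, a=a: colour((a,) + tuple(f))
        sub = ramsey_extract(induced, pool, r - 1)
        if len(sub) < r - 1:
            break
        picked.append(a)
        cols.append(induced(tuple(sub[:r - 1])))  # the colour d_i of a with sub
        pool = sub
    if not picked:
        return []
    major = Counter(cols).most_common(1)[0][0]
    return [a for a, c in zip(picked, cols) if c == major]
```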
+
+Now a natural question to ask is --- what happens when we have infinitely many colours? Clearly an infinite monochromatic subset cannot be guaranteed, because we can just colour all edges with different colours.
+
+The natural compromise is to ask if we can find an $X$ such that \emph{either} $c|_{X}$ is monochromatic, \emph{or} $c|_{X}$ is injective. After a little thought, we realize this is also impossible. We can construct a colouring on $\N^{(2)}$ as follows: we first colour all edges that involve $1$ with the colour $1$, then all edges that involve $2$ with the colour $2$ etc:
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {1, 2, 3, 4, 5} {
+ \node [circ] at (\x, 0) {};
+ }
+
+ \foreach \x in {2, 3, 4, 5} {
+ \draw (1, 0) edge [bend left, mred, thick] (\x, 0);
+ }
+ \foreach \x in {3, 4, 5} {
+ \draw (2, 0) edge [bend right, mblue, thick] (\x, 0);
+ }
+ \foreach \x in {4, 5} {
+ \draw (3, 0) edge [bend left, mgreen, thick] (\x, 0);
+ }
+ \foreach \x in {5} {
+ \draw (4, 0) edge [bend right, morange, thick] (\x, 0);
+ }
+ \node at (6, 0) {$\ldots$};
+ \end{tikzpicture}
+\end{center}
+It is easy to see that we cannot find an infinite monochromatic subset or an infinite subset with all edges of different colours.
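This can be confirmed exhaustively on a finite initial segment: under this colouring, both monochromatic and rainbow (all edges differently coloured) subsets are stuck at size $2$. A brute-force sketch (names ours):

```python
from itertools import combinations

def c(i, j):
    # the counterexample: each edge is coloured by its smaller endpoint
    return i

def edge_colours(S):
    return [c(i, j) for i, j in combinations(sorted(S), 2)]

N = 12
subsets = [S for k in range(2, 6) for S in combinations(range(1, N + 1), k)]
largest_mono = max(len(S) for S in subsets
                   if len(set(edge_colours(S))) == 1)
largest_rainbow = max(len(S) for S in subsets
                      if len(set(edge_colours(S))) == len(edge_colours(S)))
# both maxima are 2: any {i, j, k} has colours (i, i, j), which is
# neither constant nor injective
```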
+
+However, this counterexample we came up with still has a high amount of structure --- the colour of the edges are uniquely determined by the first element. It turns out this is the only missing piece (plus the obvious dual case).
+
+With this, we can answer our previous question:
+\begin{thm}[Canonical Ramsey theorem]\index{canonical Ramsey theorem}\index{Ramsey theorem!canonical}
+ For any $c: \N^{(2)} \to \N$, there exists an infinite $X \subseteq \N$ such that one of the following hold:
+ \begin{enumerate}
+ \item $c|_{X^{(2)}}$ is constant.
+ \item $c|_{X^{(2)}}$ is injective.
+ \item $c(ij) = c(k\ell)$ iff $i = k$ for all $i, j, k, \ell \in X$.
+ \item $c(ij) = c(k\ell)$ iff $j = \ell$ for all $i, j, k, \ell \in X$.
+ \end{enumerate}
+\end{thm}
+Recall that when we write $ij$, we always implicitly assume $i < j$, so that (iii) and (iv) make sense.
+
+In previous proofs, we only had to go through two passes to get the desired set. This time we will need more.
+\begin{proof}
+ Consider the following colouring of $\N^{(4)}$: let $c_1$ be a $2$-colouring
+ \[
+ c_1(ijk\ell) =
+ \begin{cases}
+ \mathtt{SAME} & c(ij) = c(k\ell)\\
+ \mathtt{DIFF} & \mathrm{otherwise}
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2, 3} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (0, 0) edge [out=60, in=120] (1, 0);
+ \draw (2, 0) edge [out=60, in=120] (3, 0);
+ \end{tikzpicture}
+ \end{center}
+ Then we know there is some infinite monochromatic set $X_1 \subseteq \N$ for $c_1$. If $X_1$ is coloured $\mathtt{SAME}$, then we are done. Indeed, for any pair $ij$ and $i'j'$ in $X_1$, we can pick some huge $k, \ell$ such that $j, j' < k < \ell$, and then
+ \[
+ c(ij) = c(k\ell) = c(i'j')
+ \]
+ as we know $c_1(ijk\ell) = c_1(i'j'k\ell) = \mathtt{SAME}$.
+
+ What if $X_1$ is coloured $\mathtt{DIFF}$? We next look at what happens when we have edges that are nested inside one another. We define $c_2: X_1^{(4)} \to \{\mathtt{SAME}, \mathtt{DIFF}\}$ by
+ \[
+ c_2(ijk\ell) =
+ \begin{cases}
+ \mathtt{SAME} & c(i\ell) = c(jk)\\
+ \mathtt{DIFF} & \mathrm{otherwise}
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2, 3} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (0, 0) edge [out=60, in=120] (3, 0);
+ \draw (1, 0) edge [out=60, in=120] (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ Again, we can find an infinite monochromatic subset $X_2 \subseteq X_1$ with respect to $c_2$.
+
+ We now note that $X_2$ cannot be coloured $\mathtt{SAME}$. Indeed, we can just look at
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x/\y in {0/i, 1/j, 2/k, 3/\ell, 4/m, 5/n} {
+ \node [circ] at (\x, 0) {};
+ \node [below] at (\x, 0) {$\y$};
+ }
+ \draw (0, 0) edge [out=60, in=120] (5, 0);
+ \draw (1, 0) edge [out=60, in=120] (2, 0);
+ \draw (3, 0) edge [out=60, in=120] (4, 0);
+ \end{tikzpicture}
+ \end{center}
+ So if $X_2$ were $\mathtt{SAME}$, we would have
+ \[
+ c(\ell m) = c(in) = c(jk),
+ \]
+ which is impossible since $X_1$ is coloured $\mathtt{DIFF}$ under $c_1$.
+
+ So $X_2$ is $\mathtt{DIFF}$. Now consider $c_3: X_2^{(4)} \to \{\mathtt{SAME}, \mathtt{DIFF}\}$ given by
+ \[
+ c_3(ijk\ell) =
+ \begin{cases}
+ \mathtt{SAME} & c(ik) = c(j\ell)\\
+ \mathtt{DIFF} & \mathrm{otherwise}
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2, 3} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (0, 0) edge [out=60, in=120] (2, 0);
+ \draw (1, 0) edge [out=-60, in=-120] (3, 0);
+ \end{tikzpicture}
+ \end{center}
+ Again find an infinite monochromatic subset $X_3 \subseteq X_2$ for $c_3$. Then $X_3$ cannot be $\mathtt{SAME}$, this time using the following picture:
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x/\y in {0, 1, 2, 3, 4, 5} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (0, 0) edge [out=-60, in=-120] (3, 0);
+ \draw (1, 0) edge [out=60, in=120] (5, 0);
+ \draw (2, 0) edge [out=60, in=120] (4, 0);
+ \end{tikzpicture}
+ \end{center}
+ contradicting the fact that $c_2$ is $\mathtt{DIFF}$. So we know $X_3$ is $\mathtt{DIFF}$.
+
+ We have now ended up with a set $X_3$ such that any two edges with no endpoints in common receive different colours.
+
+ We now want to look at cases where things share a vertex. Consider $c_4: X_3^{(3)} \to \{\mathtt{SAME}, \mathtt{DIFF}\}$ given by
+ \[
+ c_4(ijk) =
+ \begin{cases}
+ \mathtt{SAME} & c(ij) = c(jk)\\
+ \mathtt{DIFF} & \mathrm{otherwise}
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (0, 0) edge [out=60, in=120] (1, 0);
+ \draw (1, 0) edge [out=60, in=120] (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ Let $X_4 \subseteq X_3$ be an infinite monochromatic set for $c_4$. Now $X_4$ cannot be coloured $\mathtt{SAME}$, using the following picture:
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2, 3} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (0, 0) edge [out=60, in=120] (1, 0);
+ \draw (1, 0) edge [out=60, in=120] (2, 0);
+ \draw (2, 0) edge [out=60, in=120] (3, 0);
+ \end{tikzpicture}
+ \end{center}
+ which contradicts the fact that $c_1$ is $\mathtt{DIFF}$. So we know $X_4$ is also coloured $\mathtt{DIFF}$ under $c_4$.
+
+ We are almost there. We need to deal with edges sharing their first or last vertex, as in cases (iii) and (iv). We look at $c_5: X_4^{(3)} \to \{\mathtt{LSAME}, \mathtt{LDIFF}\}$ given by
+ \[
+ c_5(ijk) =
+ \begin{cases}
+ \mathtt{LSAME} & c(ij) = c(ik)\\
+ \mathtt{LDIFF} & \mathrm{otherwise}
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (0, 0) edge [out=60, in=120] (1, 0);
+ \draw (0, 0) edge [out=60, in=120] (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ Again we find $X_5 \subseteq X_4$, an infinite monochromatic set for $c_5$. We do not split into cases yet, since both outcomes are genuinely possible; instead we go on to classify the behaviour at the right endpoint as well. Define $c_6: X_5^{(3)} \to \{\mathtt{RSAME}, \mathtt{RDIFF}\}$ given by
+ \[
+ c_6(ijk) =
+ \begin{cases}
+ \mathtt{RSAME} & c(ik) = c(jk)\\
+ \mathtt{RDIFF} & \mathrm{otherwise}
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (1, 0) edge [out=60, in=120] (2, 0);
+ \draw (0, 0) edge [out=60, in=120] (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ Let $X_6 \subseteq X_5$ be an infinite monochromatic subset under $c_6$.
+
+ As before, we can check that it is impossible to get both $\mathtt{LSAME}$ and $\mathtt{RSAME}$, using the following picture:
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2} {
+ \node [circ] at (\x, 0) {};
+ }
+ \draw (0, 0) edge [out=60, in=120] (1, 0);
+ \draw (1, 0) edge [out=60, in=120] (2, 0);
+ \draw (0, 0) edge [out=60, in=120] (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ contradicting $c_4$ being $\mathtt{DIFF}$.
+
+ Then the remaining cases $(\mathtt{LDIFF}, \mathtt{RDIFF})$, $(\mathtt{LSAME}, \mathtt{RDIFF})$ and $(\mathtt{LDIFF}, \mathtt{RSAME})$ correspond to the cases (ii), (iii) and (iv) respectively.
+\end{proof}
+Note that we could prove this theorem in a single pass, instead of six, by considering a much more complicated colouring $(c_1, c_2, c_3, c_4, c_5, c_6)$ with values in
+\[
+ \{\mathtt{SAME}, \mathtt{DIFF}\}^4 \times \{\mathtt{LSAME}, \mathtt{LDIFF}\} \times \{\mathtt{RSAME}, \mathtt{RDIFF}\},
+\]
+but we still have to do the same analysis and it just complicates matters more.
+
+There is a generalization of this to $r$-sets. One way we can rewrite the theorem is to say that the colour is uniquely determined by some subset of the vertices. The cases (i), (ii), (iii), (iv) correspond to no vertices, all vertices, first vertex, and second vertex respectively. Then for $r$-sets, we have $2^r$ possibilities, one for each subset of the $r$-coordinates.
+
+\begin{thm}[Higher dimensional canonical Ramsey theorem]\index{higher dimensional canonical Ramsey theorem}\index{canonical Ramsey theorem!higher dimensional}\index{Ramsey theorem!higher dimensional, canonical}
+ Let $c: \N^{(r)} \to \N$ be a colouring. Then there exists $D \subseteq [r]$ and an infinite subset $X \subseteq \N$ such that for all $x, y \in X^{(r)}$, we have $c(x) = c(y)$ if and only if $\{i: x_i = y_i\} \supseteq D$, where $x = \{x_1 < x_2 < \cdots < x_r\}$ (and similarly for $y$).
+\end{thm}
+
+\subsection{Finite graphs}
+We now move on to discuss a finitary version of the above problem. Of course, if we finitely colour an infinite graph, then we can obtain a finite monochromatic subgraph of any size we want, because this is just a much weaker version of Ramsey's theorem. However, given $n < N$, if we finitely colour a graph of size $N$, are we guaranteed to have a monochromatic subgraph of size $n$?
+
+Before we begin, we note some definitions. Recall again that a graph $G$ is a pair $(V, E)$ where $E \subseteq V^{(2)}$.
+\begin{eg}
+ The \term{path graph} on $n$ vertices \term{$P_n$} is
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (1, 0) node [circ] {} -- (2, 0) node [circ] {};
+
+ \draw [dashed] (2.2, 0) -- (2.8, 0);
+
+ \draw (3, 0) node [circ] {} -- (4, 0) node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ We can write
+ \[
+ V = [n],\quad E = \{12, 23, 34, \cdots, (n-1)n\}.
+ \]
+\end{eg}
+
+\begin{eg}
+ The $n$-cycle \term{$C_n$} is
+ \[
+ V = [n],\quad E = \{12, 23, \cdots, (n-1)n, 1n\}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+
+ \node [circ] (0) at (1, 0) {};
+ \node [circ] (1) at (0, 0) {};
+ \node [circ] (2) at (-0.707, -0.707) {};
+ \node [circ] (3) at (-0.707, -1.707) {};
+ \node [circ] (4) at (0, -2.414) {};
+ \node [circ] (5) at (1, -2.414) {};
+ \node [circ] (6) at (1.707, -1.707) {};
+ \node [circ] (7) at (1.707, -0.707) {};
+
+ \draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7);
+ \draw [dotted] (7) -- (0);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+Finally, we have
+\begin{eg}
+ The \term{complete graph} \term{$K_n$} on $n$ vertices is
+ \[
+ V = [n],\quad E = V^{(2)}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+
+ \node [circ] (0) at (1, 0) {};
+ \node [circ] (1) at (0, 0) {};
+ \node [circ] (2) at (-0.707, -0.707) {};
+ \node [circ] (3) at (-0.707, -1.707) {};
+ \node [circ] (4) at (0, -2.414) {};
+ \node [circ] (5) at (1, -2.414) {};
+ \node [circ] (6) at (1.707, -1.707) {};
+ \node [circ] (7) at (1.707, -0.707) {};
+
+ \draw (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);
+ \draw (0) -- (2) -- (4) -- (6) -- (0);
+ \draw (1) -- (3) -- (5) -- (7) -- (1);
+ \draw (0) -- (3) -- (6) -- (1) -- (4) -- (7) -- (2) -- (5) -- (0);
+ \draw (0) -- (4);
+ \draw (1) -- (5);
+ \draw (2) -- (6);
+ \draw (3) -- (7);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+Concretely, the question we want to answer is the following --- how big does our (complete) graph have to be to guarantee a complete monochromatic subgraph of size $n$?
+
+In this section, we will usually restrict to $2$ colours. Everything we say will either be trivially generalizable to more colours, or we will have no idea how to generalize it. It is an exercise for the reader to figure out which is which.
+\begin{defi}[Ramsey number]\index{Ramsey number}\index{$R(n)$}\index{$R(K_n)$}
+ We define $R(n) = R(K_n)$ to be the smallest $N \in \N$ such that whenever we red-blue colour the edges of $K_N$, there is a monochromatic copy of $K_n$.
+\end{defi}
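Before worrying about existence in general, we can verify the first non-trivial value by brute force. The following Python sketch (not part of the notes; all function names are our own) checks every red-blue colouring of $K_5$ and of $K_6$, confirming that $R(3) = 6$:

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each edge (a sorted vertex pair) to 0 (red) or 1 (blue)."""
    return any(
        colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_has_mono_triangle(n):
    """Exhaustively check all 2-colourings of the edges of K_n."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, bits)))
        for bits in product([0, 1], repeat=len(edges))
    )

print(every_colouring_has_mono_triangle(5))  # False: some colouring of K_5 has none
print(every_colouring_has_mono_triangle(6))  # True: so R(3) = 6
```

The classical witness for $K_5$ is the pentagon: colour the edges of the $5$-cycle red and the diagonals blue.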
+
+It is not immediately obvious that $R(n)$ has to exist, i.e.\ it is a finite number. It turns out we can easily prove this from the infinite Ramsey theorem.
+\begin{thm}[Finite Ramsey theorem]\index{finite Ramsey theorem}
+ For all $n$, we have $R(n) < \infty$.
+\end{thm}
+
+\begin{proof}
+ Suppose not. Let $n$ be such that $R(n) = \infty$. Then for any $m \geq n$, there is a $2$-colouring $c_m$ of $K_m = [m]^{(2)}$ such that there is no monochromatic set of size $n$.
+
+ We want to use these colourings to build up a colouring of $\N^{(2)}$ with no monochromatic set of size $n$. We want to say we take the ``limit'' of these colourings, but what does this mean? To do so, we need these colourings to be nested.
+
+ By the pigeonhole principle, there exists an infinite set $M_1 \subseteq \N$ and some fixed $2$-colouring $d_n$ of $[n]^{(2)}$ such that $c_m|_{[n]^{(2)}} = d_n$ for all $m \in M_1$.
+
+ Similarly, there exists an infinite $M_2 \subseteq M_1$ such that $c_m|_{[n + 1]^{(2)}} = d_{n + 1}$ for $m \in M_2$, again for some $2$-colouring $d_{n + 1}$ of $[n + 1]^{(2)}$. We repeat this over and over again. Then we get a sequence $d_n, d_{n + 1}, \cdots$ of colourings such that $d_i$ is a $2$-colouring of $[i]^{(2)}$ without a monochromatic $K_n$, and further for $i < j$, we have
+ \[
+ d_j|_{[i]^{(2)}} = d_i.
+ \]
+ We then define a $2$-colouring $c$ of $\N^{(2)}$ by
+ \[
+ c(ij) = d_m(ij)
+ \]
+ for any $m > i, j$; this is well-defined, since the $d_m$ are nested. Clearly, there exists no monochromatic $K_n$ in $c$, as any $K_n$ is finite. This massively contradicts the infinite Ramsey theorem.
+\end{proof}
+
+There are other ways of obtaining the finite Ramsey theorem from the infinite one. For example, we can use the compactness theorem for first order logic.
+\begin{proof}
+ Suppose $R(n) = \infty$. Consider the theory with proposition letters $p_{ij}$ for each $ij \in \N^{(2)}$. We will think of the edge $ij$ as red if $p_{ij} = \top$, and blue if $p_{ij} = \bot$. For each subset of $\N$ of size $n$, we add in the axiom that says that set is not monochromatic.
+
+ Then given any finite subset of the axioms, it mentions only finitely many subsets of $\N$. Suppose it mentions vertices only up to $m \in \N$. Then by assumption, there is a $2$-colouring of $[m]^{(2)}$ with no monochromatic subset of size $n$. So by assigning $p_{ij}$ accordingly (and randomly assigning the remaining ones), we have found a model of this finite subtheory. Thus every finite fragment of the theory is consistent. Hence the original theory is consistent. So there is a model, i.e.\ a colouring of $\N^{(2)}$ with no monochromatic subset.
+
+ This contradicts the infinite Ramsey theorem.
+\end{proof}
+
+Similarly, we can do it by compactness of the space $\{1, 2\}^\N$ endowed with metric
+\[
+ d(f, g) = 2^{-\min \{i: f_i \not= g_i\}}\text{ for }f \not= g.
+\]
+By Tychonoff's theorem, we know this space is compact, and we can deduce the theorem from this. % how?
+
+While these proofs save work by reusing the infinite Ramsey theorem, they are highly non-constructive, and useless if we want to obtain an actual bound on $R(n)$. We now go back and see what we did when we proved the infinite Ramsey theorem.
+
+To prove the infinite Ramsey theorem, we picked a point $a_1$. Then some points are joined to $a_1$ by red edges, and the others by blue edges:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [left] {$a_1$};
+
+ \draw [fill opacity=0.3, fill=mred] (2, 1.3) ellipse (0.3 and 0.8);
+ \draw [fill opacity=0.3, fill=mblue] (2, -1.3) ellipse (0.3 and 0.8);
+
+ \draw [thick, mred] (0, 0) -- (2, 1.9);
+ \draw [thick, mred] (0, 0) -- (2, 1.3);
+ \draw [thick, mred] (0, 0) -- (2, 0.7);
+
+ \draw [thick, mblue] (0, 0) -- (2, -1.9);
+ \draw [thick, mblue] (0, 0) -- (2, -1.3);
+ \draw [thick, mblue] (0, 0) -- (2, -0.7);
+ \end{tikzpicture}
+\end{center}
+Next we pick a point in the red or the blue set, and try to move on from there. Suppose we landed in the red one. Now note that if we find a blue $K_n$ in the red set, then we are done. But on the other hand, we only need a red $K_{n - 1}$, and not a full-blown $K_n$, since we can add $a_1$ back in. When we moved to this red set, the problem was no longer symmetric.
+
+Thus, to figure out a good bound, it is natural to consider an asymmetric version of the problem.
+\begin{defi}[Off-diagonal Ramsey number]\index{off-diagonal Ramsey number}\index{Ramsey number!off-diagonal}\index{$R(n, m)$}\index{$R(K_n, K_m)$}
+ We define $R(n, m) = R(K_n, K_m)$ to be the minimum $N \in \N$ such that whenever we red-blue colour the edges of $K_N$, we either get a red $K_n$ or a blue $K_m$.
+\end{defi}
+
+Clearly we have
+\[
+ R(n, m) \leq R(\max\{n, m\}).
+\]
+In particular, they are finite. Once we make this definition, it is easy to find a bound on $R$.
+\begin{thm}
+ We have
+ \[
+ R(n, m) \leq R(n - 1, m) + R(n, m - 1).
+ \]
+ for all $n, m \in \N$. Consequently, we have
+ \[
+ R(n, m) \leq \binom{n + m - 2}{n - 1}.
+ \]
+\end{thm}
+
+\begin{proof}
+ We induct on $n + m$. It is clear that
+ \[
+ R(1, n) = R(n, 1) = 1,\quad R(n, 2) = R(2, n) = n.
+ \]
+ Now suppose $N \geq R(n - 1, m) + R(n, m - 1)$. Consider any red-blue colouring of $K_N$ and any vertex $v \in V(K_N)$. We write
+ \[
+ V(K_N) \setminus \{v\} = A \cup B,
+ \]
+ where each vertex in $A$ is joined by a red edge to $v$, and each vertex in $B$ is joined by a blue edge to $v$. Then
+ \[
+ |A| + |B| = N - 1 \geq R(n - 1, m) + R(n, m - 1) - 1.
+ \]
+ It follows that either $|A| \geq R(n - 1, m)$ or $|B| \geq R(n, m - 1)$. We may wlog assume it is the former. Then by definition of $R(n - 1, m)$, we know $A$ contains either a blue copy of $K_m$ or a red copy of $K_{n - 1}$. In the first case, we are done, and in the second case, we just add $v$ to the red $K_{n - 1}$.
+
+ The last formula then follows by induction, using Pascal's identity $\binom{a}{b} = \binom{a - 1}{b - 1} + \binom{a - 1}{b}$.
+\end{proof}
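As a sanity check (not part of the notes), we can compute the bound given by the recurrence and compare it with the binomial closed form; with these base cases the recurrence in fact telescopes to exactly $\binom{n + m - 2}{n - 1}$ via Pascal's identity:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def ramsey_upper(n, m):
    """Upper bound on R(n, m) coming from the recurrence in the proof."""
    if n == 1 or m == 1:
        return 1
    if n == 2 or m == 2:
        return max(n, m)
    return ramsey_upper(n - 1, m) + ramsey_upper(n, m - 1)

# The recurrence bound never exceeds the binomial bound (here they coincide):
for n in range(2, 8):
    for m in range(2, 8):
        assert ramsey_upper(n, m) <= comb(n + m - 2, n - 1)

print(ramsey_upper(3, 3), ramsey_upper(4, 4))  # 6 20
```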
+
+In particular, we find
+\[
+ R(n) \leq \binom{2n - 2}{n - 1} \leq \frac{4^n}{\sqrt{n}}.
+\]
+We genuinely have no idea whether $\sim 4^n$ is the correct growth rate, i.e.\ if there is some $\varepsilon$ such that $R(n) \leq (4 - \varepsilon)^n$. However, we do know that for any $c > 0$, we eventually have
+\[
+ R(n) \leq \frac{4^n}{n^c}.
+\]
+That was an upper bound. Do we have a lower bound? In particular, does $R(n)$ have to grow exponentially? The answer is yes, via a very classical construction of Erd\H{o}s.
+
+\begin{thm}
+ We have $R(n) \geq \sqrt{2}^n$ for sufficiently large $n \in \N$.
+\end{thm}
+The proof is remarkable in that before this was shown, we had no sensible bound at all. However, the proof is incredibly simple, and revolutionized how we think about colourings.
+
+\begin{proof}
+ Let $N \leq \sqrt{2}^n$. We perform a red-blue colouring of $K_N$ randomly, where each edge is coloured red independently of the others with probability $\frac{1}{2}$.
+
+ We let $X_R$ be the number of red copies of $K_n$ in such a colouring. Then since expectation is linear, we know the expected value is
+ \begin{align*}
+ \E[X_R] &= \binom{N}{n} \left(\frac{1}{2}\right)^{\binom{n}{2}}\\
+ &\leq \left(\frac{eN}{n}\right)^n \left(\frac{1}{2}\right)^{\binom{n}{2}}\\
+ &\leq \left(\frac{e}{n}\right)^n \sqrt{2}^{n^2} \sqrt{2}^{n - n^2}\\
+ &= \left(\frac{e\sqrt{2}}{n}\right)^{n}\\
+ &< \frac{1}{2}
+ \end{align*}
+ for sufficiently large $n$.
+
+ Similarly, we have $\E[X_B] < \frac{1}{2}$. So the expected number of monochromatic $K_n$ is $< 1$. So in particular there must be some colouring with no monochromatic $K_n$.
+\end{proof}
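We can evaluate the expectation in this proof numerically. Here is a short sketch (not in the notes) that, for $N = \lfloor \sqrt{2}^n \rfloor$, computes the expected number of monochromatic copies of $K_n$ and checks that it is below $1$:

```python
from math import comb, isqrt

def expected_mono_Kn(N, n):
    """Expected number of monochromatic K_n's in a uniformly random
    red-blue colouring of K_N: 2 * C(N, n) * 2^(-C(n, 2))."""
    return comb(N, n) * 2.0 ** (1 - comb(n, 2))

for n in [10, 20, 40]:
    N = isqrt(2 ** n)  # floor(sqrt(2)^n)
    print(n, N, expected_mono_Kn(N, n))  # expectation is far below 1
```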
+
+Recall the bound
+\[
+ R(m, n) \leq \binom{m + n - 2}{m - 1}.
+\]
+If we think of $m$ as being fixed, then this tells us
+\[
+ R(m, n) = O(n^{m - 1}).
+\]
+For example, if $m$ is $3$, then we have
+\[
+ R(3, n) \leq \binom{n + 1}{2} = \frac{n(n + 1)}{2} \sim n^2.
+\]
+We can sort-of imagine where this bound came from. Suppose we randomly pick a vertex $v_1$. Then if it is connected to at least $n$ other vertices by a red edge, then we are done --- if there is even one red edge among those $n$ things, then we have a red triangle. Otherwise, all edges are blue, and we've found a complete blue $K_n$.
+
+So each vertex is connected to at most $n - 1$ other vertices by red edges. So if our graph is big enough, we can pick some $v_2$ connected to $v_1$ by a blue edge, and do the same thing to $v_2$. We keep going on, and by the time we reach $v_n$, we would have found $v_1, \cdots, v_n$ all connected to each other by blue edges, and we are done. So we have $R(3, n) = O(n^2)$.
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \s in {0, 4, 8} {
+ \begin{scope}[shift={(\s, 0)}]
+ \node [circ] {};
+
+ \draw [fill opacity=0.3, fill=mred] (2, 0) ellipse (1 and 0.3);
+ \foreach \x in {1.2,1.6,2.0,2.4,2.8} {
+ \draw (0, 0) edge [bend left, mred, thick] (\x, 0) node [circ] {};
+ }
+ \foreach \x in {1.6,2.0,2.4,2.8} {
+ \draw (1.2, 0) edge [bend right, mblue, thick] (\x, 0) node [circ] {};
+ }
+ \foreach \x in {2.0,2.4,2.8} {
+ \draw (1.6, 0) edge [bend right, mblue, thick] (\x, 0) node [circ] {};
+ }
+ \foreach \x in {2.4,2.8} {
+ \draw (2.0, 0) edge [bend right, mblue, thick] (\x, 0) node [circ] {};
+ }
+ \foreach \x in {2.8} {
+ \draw (2.4, 0) edge [bend right, mblue, thick] (\x, 0) node [circ] {};
+ }
+ \end{scope};
+ }
+ \draw (0, 0) edge [bend right, mblue, thick] (4, 0);
+ \draw (4, 0) edge [bend right, mblue, thick] (8, 0);
+ \draw (0, 0) edge [bend right, mblue, thick] (8, 0);
+ \node at (0, 0) [above] {$v_1$};
+ \node at (4, 0) [above] {$v_2$};
+ \node at (8, 0) [above] {$v_3$};
+ \end{tikzpicture}
+\end{center}
+But this argument is rather weak, because we didn't use that large pool of blue edges we've found at $v_1$. So in fact this time we can do better than $n^2$.
+\begin{thm}
+ We have
+ \[
+ R(3, n) \leq \frac{100 n^2}{\log n}
+ \]
+ for sufficiently large $n \in \N$.
+\end{thm}
+Here the $100$ is, obviously, just some random big number.
+
+\begin{proof}
+ Let $N \geq 100n^2/\log n$, and consider a red-blue colouring of the edges of $K_N$ with no red $K_3$. We want to find a blue $K_n$ in such a colouring.
+
+ We may assume that
+ \begin{enumerate}
+ \item No vertex $v$ has $ \geq n$ red edges incident to it, as argued just now.
+ \item If we have $v_1, v_2, v_3$ such that $v_1v_2$ and $v_1 v_3$ are red, then $v_2 v_3$ is blue:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \draw [mred, thick] (2, 0) -- (0, 0) -- (1, 1.732);
+ \draw [mblue, thick] (1, 1.732) -- (2, 0);
+
+ \node [left] at (0, 0) {$v_1$};
+ \node [above] at (1, 1.732) {$v_2$};
+ \node [right] at (2, 0) {$v_3$};
+ \end{tikzpicture}
+ \end{center}
+ \end{enumerate}
+ We now let
+ \[
+ \mathcal{F} = \{W : W \subseteq V(K_N)\text{ such that }W^{(2)} \text{ is monochromatic and blue}\}.
+ \]
+ We want to find some $W \in \mathcal{F}$ such that $|W| \geq n$, i.e.\ a blue $K_n$. How can we go about finding this?
+
+ Let $W$ be a uniformly random member of $\mathcal{F}$. We will be done if we can show that $\E[|W|] \geq n$.
+
+ We are going to define a bunch of random variables. For each vertex $v \in V(K_N)$, we define the variable
+ \[
+ X_v = n \mathbf{1}_{\{v \in W\}} + |\{u: uv\text{ is red and }u \in W\}|.
+ \]
+ \begin{claim}
+ \[
+ \E[X_v] > \frac{\log n}{10}
+ \]
+ for each vertex $v$.
+ \end{claim}
+ To see this, let
+ \[
+ A = \{u: uv \text{ is red}\}
+ \]
+ and let
+ \[
+ B = \{u: uv \text{ is blue}\}.
+ \]
+ Then from the properties we've noted down, we know that $|A| < n$ and $A^{(2)}$ is blue. So we know very well what is happening in $A$, but nothing about what is happening in $B$.
+
+ We fix a set $S \subseteq B$ such that $S \in \mathcal{F}$, i.e.\ $S^{(2)}$ is blue. What can we say about $W$ if we condition on $B \cap W = S$?
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [left] {$a_1$};
+
+ \draw [fill opacity=0.3, fill=mred] (2, 1.3) ellipse (0.5 and 1);
+ \draw [fill opacity=0.3, fill=mblue] (2, -1.3) ellipse (0.5 and 1);
+
+ \draw [thick, mred] (0, 0) -- (1.8, 1.9);
+ \draw [thick, mred] (0, 0) -- (1.7, 1.3);
+ \draw [thick, mred] (0, 0) -- (1.8, 0.7);
+
+ \draw [thick, mblue] (0, 0) -- (1.8, -1.9);
+ \draw [thick, mblue] (0, 0) -- (1.7, -1.3);
+ \draw [thick, mblue] (0, 0) -- (1.8, -0.7);
+
+ \draw [fill=mblue, fill opacity=0.7] (2.1, -1.3) ellipse (0.3 and 0.6);
+
+ \node at (2.1, -1.3) [white] {$S$};
+
+ \node [mblue, left] at (1.5, -1.8) {$B$};
+ \node [mred, left] at (1.5, 1.8) {$A$};
+
+ \draw [fill=mblue, fill opacity=0.5] (2.1, 1.3) ellipse (0.25 and 0.55);
+
+ \node at (2.1, 1.3) [white] {$T$};
+
+ \draw (2.2, -1) edge [bend right, mblue, thick] (2.2, 1);
+ \draw (2.2, -1.5) edge [bend right, mblue, thick] (2.2, 1.2);
+ \end{tikzpicture}
+ \end{center}
+ Let $T \subseteq A$ be the set of vertices that are joined to $S$ only by blue edges. Write $|T| = x$. Then if $B \cap W = S$, then either $W \subseteq S \cup T$, or $W = S \cup \{v\}$. So there are exactly $2^x + 1$ choices of $W$. So we know
+ \begin{align*}
+ \E[X_v \mid W \cap B = S] &\geq \frac{n}{2^x + 1} + \frac{2^x}{2^x + 1}\E[|\text{random subset of }T|]\\
+ &= \frac{n}{2^x + 1} + \frac{2^{x - 1}x}{2^x + 1}.
+ \end{align*}
+ Now if
+ \[
+ x < \frac{\log n}{2},
+ \]
+ then
+ \[
+ \frac{n}{2^x + 1} \geq \frac{n}{\sqrt{n} + 1} \geq \frac{\log n}{10}
+ \]
+ for all sufficiently large $n$. On the other hand, if
+ \[
+ x \geq \frac{\log n}{2},
+ \]
+ then
+ \[
+ \frac{2^{x - 1}x}{2^x + 1} \geq \frac{1}{4} \cdot \frac{\log n}{2} \geq \frac{\log n}{10}.
+ \]
+ So we are done.
+
+ \begin{claim}
+ \[
+ \sum_{v \in V} X_v \leq 2n |W|.
+ \]
+ \end{claim}
+ To see this, note that each $v \in W$ contributes $n$ to the sum via the first term. For the second terms, we count the contributions from the other side: by our initial observation, each $u \in W$ has at most $n$ red neighbours, so it contributes at most $n$ to the sum across all the second terms (acting as the ``$u$'').
+
+ Finally, we know that
+ \[
+ \E[|W|] \geq \frac{1}{2n} \sum_{v \in V} \E[X_v] \geq \frac{N}{2n} \cdot \frac{\log n}{10} \geq \frac{100n^2}{\log n} \cdot \frac{\log n}{20 n} = 5n.
+ \]
+ Therefore we can always find some $W \in \mathcal{F}$ such that $|W| \geq n$.
+\end{proof}
+
+Now of course, there is the following obvious generalization of Ramsey numbers:
+\begin{defi}[$R(G, H)$]\index{$R(G, H)$}
+ Let $G, H$ be graphs. Then we define $R(G, H)$ to be the smallest $N$ such that any red-blue colouring of $K_N$ has either a red copy of $G$ or a blue copy of $H$.
+\end{defi}
+Obviously, we have $R(G, H) \leq R(|G|, |H|)$. So the natural question is if we can do better than that.
+
+\begin{ex}
+ Show that
+ \[
+ R(P_n, P_n) \leq 2n.
+ \]
+\end{ex}
+So sometimes it can be much better.
+
+\section{Ramsey theory on the integers}
+So far, we've been talking about what happens when we finitely colour graphs. What if we $k$-colour the integers $\N$? What can we say about it?
+
+It is a trivial statement that this colouring has an infinite monochromatic subset, by the pigeonhole principle. Interesting questions arise when we try to take the additive structure of $\N$ into account. So we could ask: can we find a monochromatic ``copy'' of $\N$?
+
+One way to make this question concrete is to ask if there is an infinite monochromatic arithmetic progression.
+
+The answer is easily a ``no''! There are only countably many infinite arithmetic progressions, so we can enumerate them, and for each progression pick two of its members and colour them differently.
+
+We can also construct such a colouring more concretely. We colour the first number red, the next two blue, the next three red, etc. Then it is easy to see that this has no infinite monochromatic arithmetic progression.
+\begin{center}
+ \begin{tikzpicture}[xscale=0.5]
+ \foreach \x in {0, 3, 4, 5, 10, 11, 12, 13,14} {
+ \node [circ, mred] at (\x, 0) {};
+ }
+ \foreach \x in {1,2,6,7,8,9,15,16,17,18,19,20} {
+ \node [circ, mblue] at (\x, 0) {};
+ }
+ \node at (21.3, 0) {$\cdots$};
+ \end{tikzpicture}
+\end{center}
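As a quick check (not in the notes), we can code up this colouring and watch any fixed arithmetic progression hit both colours: since the block sizes grow without bound, an AP with difference $d$ eventually lands inside every sufficiently long block, and the blocks alternate colour.

```python
def block_colour(x):
    """Colour of x >= 1 under the colouring with blocks of sizes 1, 2, 3, ...
    alternating red, blue, red, ..."""
    block, end = 1, 1
    while x > end:
        block += 1
        end += block
    return 'red' if block % 2 == 1 else 'blue'

# The AP 5, 8, 11, ... (difference 3) meets blocks of both colours:
ap = [5 + 3 * i for i in range(30)]
print(sorted({block_colour(x) for x in ap}))  # ['blue', 'red']
```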
+But this is somewhat silly, because there is clearly a significant amount of structure in the sequence there. It turns out the following is true:
+\begin{thm}[van der Waerden theorem]\index{van der Waerden theorem}
+ Let $m, k \in \N$. Then there is some $N = W(m, k)$ such that whenever $[N]$ is $k$-coloured, there is a monochromatic arithmetic progression of length $m$.
+\end{thm}
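The theorem is again amenable to brute force for tiny parameters. The following sketch (ours, not the lecturer's) confirms the known value $W(3, 2) = 9$ by exhausting all $2$-colourings:

```python
from itertools import product

def has_mono_ap(colouring, length):
    """Does this tuple, read as a colouring of 1..n, contain a monochromatic AP?"""
    n = len(colouring)
    for a in range(n):
        for d in range(1, n):
            idx = [a + i * d for i in range(length)]
            if idx[-1] < n and len({colouring[i] for i in idx}) == 1:
                return True
    return False

def every_colouring_has_ap(length, n):
    return all(has_mono_ap(c, length) for c in product([0, 1], repeat=n))

print(every_colouring_has_ap(3, 8))  # False: e.g. RRBBRRBB has no mono 3-AP
print(every_colouring_has_ap(3, 9))  # True: so W(3, 2) = 9
```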
+
+The idea is to do induction on $m$. We will use colourings with many more than $k$ colours to deduce the existence of $W(m, k)$.
+
+We can try a toy example first. Let's try to show that $W(3, 2)$ exists. Suppose we have three natural numbers:
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-0.5, 0.5) rectangle (2.5, -0.5);
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+ \end{tikzpicture}
+\end{center}
+By the pigeonhole principle, there must be two things that are the same colour, say
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-0.5, 0.5) rectangle (2.5, -0.5);
+ \node [circ, red] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ, red] at (2, 0) {};
+ \end{tikzpicture}
+\end{center}
+If this is the case, then if we don't want a red arithmetic progression of length $3$, the fifth number must be blue, since the first, third and fifth numbers form an arithmetic progression:
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-0.5, 0.5) rectangle (2.5, -0.5);
+ \node [circ, red] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ, red] at (2, 0) {};
+
+ \draw (2.5, -0.5) rectangle (4.5, 0.5);
+ \node [circ] at (3, 0) {};
+ \node [circ, mblue] at (4, 0) {};
+ \end{tikzpicture}
+\end{center}
+We now cut the universe into blocks of $5$ numbers. Again by the pigeonhole principle, there must be two blocks that are coloured identically. Say it's this block again.
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-0.5, 0.5) rectangle (2.5, -0.5);
+ \node [circ, red] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ, red] at (2, 0) {};
+
+ \draw (2.5, -0.5) rectangle (4.5, 0.5);
+ \node [circ] at (3, 0) {};
+ \node [circ, mblue] at (4, 0) {};
+
+ \begin{scope}[shift={(9, 0)}]
+ \draw (-0.5, 0.5) rectangle (2.5, -0.5);
+ \node [circ, red] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ, red] at (2, 0) {};
+
+ \draw (2.5, -0.5) rectangle (4.5, 0.5);
+ \node [circ] at (3, 0) {};
+ \node [circ, mblue] at (4, 0) {};
+ \end{scope}
+ \draw [red] (0, 0) edge [bend left] (11, 0);
+ \draw [red] (11, 0) edge [bend left] (22, 0);
+
+ \draw [mblue] (4, 0) edge [bend right] (13, 0);
+ \draw [mblue] (13, 0) edge [bend right] (22, 0);
+
+ \node [circ] at (22, 0) {};
+ \end{tikzpicture}
+\end{center}
+Now we have two monochromatic progressions of length $2$, one red and one blue, both pointing at the same final point. No matter what colour that point is, we are done.
+
+For $k = 3$ colours, we can still find such a repeated block, but now that third point could be a third colour, say green. This will not stop us. We can now look at these big blocks, and eventually the big blocks must themselves repeat.
+\begin{center}
+ \begin{tikzpicture}[scale=1.1]
+ \draw (-0.05, 0.05) rectangle (0.25, -0.05);
+ \node [scirc, red] at (0, 0) {};
+ \node [scirc] at (0.1, 0) {};
+ \node [scirc, red] at (0.2, 0) {};
+
+ \draw (0.25, -0.05) rectangle (.45, .05);
+ \node [scirc] at (.3, 0) {};
+ \node [scirc, mblue] at (.4, 0) {};
+
+ \begin{scope}[shift={(.9, 0)}]
+ \draw (-.05, .05) rectangle (.25, -.05);
+ \node [scirc, red] at (0, 0) {};
+ \node [scirc] at (.1, 0) {};
+ \node [scirc, red] at (.2, 0) {};
+
+ \draw (.25, -.05) rectangle (.45, .05);
+ \node [scirc] at (.3, 0) {};
+ \node [scirc, mblue] at (.4, 0) {};
+ \end{scope}
+ \draw [red] (0, 0) edge [bend left] (5.1, 0);
+ \draw [red] (5.1, 0) edge [bend left] (10.2, 0);
+
+ \draw [mblue] (.4, 0) edge [bend right] (5.3, 0);
+ \draw [mblue] (5.3, 0) edge [bend right] (10.2, 0);
+
+ \draw [mgreen] (2.2, 0) edge [bend right] (6.2, 0);
+ \draw [mgreen] (6.2, 0) edge [bend right] (10.2, 0);
+
+ \node [mgreen, scirc] at (2.2, 0) {};
+
+ \begin{scope}[shift={(4, 0)}]
+ \draw (-.05, .05) rectangle (.25, -.05);
+ \node [scirc, red] at (0, 0) {};
+ \node [scirc] at (.1, 0) {};
+ \node [scirc, red] at (.2, 0) {};
+
+ \draw (.25, -.05) rectangle (.45, .05);
+ \node [scirc] at (.3, 0) {};
+ \node [scirc, mblue] at (.4, 0) {};
+
+ \begin{scope}[shift={(.9, 0)}]
+ \draw (-.05, .05) rectangle (.25, -.05);
+ \node [scirc, red] at (0, 0) {};
+ \node [scirc] at (.1, 0) {};
+ \node [scirc, red] at (.2, 0) {};
+
+ \draw (.25, -.05) rectangle (.45, .05);
+ \node [scirc] at (.3, 0) {};
+ \node [scirc, mblue] at (.4, 0) {};
+ \end{scope}
+
+ \node [mgreen, scirc] at (2.2, 0) {};
+ \end{scope}
+
+ \node [scirc] at (10.2, 0) {};
+ \end{tikzpicture}
+\end{center}
+Here we built progressions of length $3$ out of progressions of length $2$, using only the pigeonhole principle. When we do $m > 2$, we will use the van der Waerden theorem for smaller $m$ inductively in its place.
+
+We now come up with names to describe the scenario we had above.
+\begin{defi}[Focused progression]\index{focused progression}
+ We say a collection of arithmetic progressions $A_1, A_2, \cdots, A_r$ of length $m$ with
+ \[
+ A_i = \{a_i, a_i + d_i, \cdots, a_i + (m - 1) d_i\}
+ \]
+ are \emph{focused} at $f$ if $a_i + m d_i = f$ for all $1 \leq i \leq r$.
+\end{defi}
+
+\begin{eg}
+ $\{1, 4\}$ and $\{5, 6\}$ are focused at $7$.
+\end{eg}
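The defining condition is easy to check mechanically. Here is a tiny sketch (our own helper, not from the notes), encoding each AP as a triple (first term, difference, length):

```python
def is_focused(aps, f):
    """aps is a list of triples (a, d, m); all must satisfy a + m*d == f."""
    return all(a + m * d == f for a, d, m in aps)

print(is_focused([(1, 3, 2), (5, 1, 2)], 7))  # True: {1, 4} and {5, 6} focus at 7
```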
+
+\begin{defi}[Colour focused progression]\index{colour focused progression}\index{focus progression!colour}
+ If $A_1, \cdots, A_r$ are focused at $f$, and each $A_i$ is monochromatic and no two are the same colour, then we say they are \emph{colour focused} at $f$.
+\end{defi}
+
+We can now write the proof.
+\begin{proof}
+ We induct on $m$. The result is clearly trivial when $m = 1$, and follows easily from the pigeonhole principle when $m = 2$.
+
+ Suppose $m > 1$, and assume inductively that $W(m - 1, k')$ exists for any $k' \in \N$.
+
+ Here is the claim we are trying to establish:
+ \begin{claim}
+ For each $r \leq k$, there is a natural number $n$ such that whenever we $k$-colour $[n]$, then either
+ \begin{enumerate}
+ \item There exists a monochromatic AP of length $m$; or
+ \item There are $r$ colour-focused AP's of length $m - 1$.
+ \end{enumerate}
+ \end{claim}
+ It is clear that this claim implies the theorem, as we can pick $r = k$. Then if there isn't a monochromatic AP of length $m$, then we look at the colour of the common focus, and it must be one of the colours of those AP's.
+
+ To prove the claim, we induct on $r$. When $r = 1$, we may take $n = W(m - 1, k)$. Now suppose $r > 1$, and some $n'$ works for $r - 1$. With the benefit of hindsight, we shall show that
+ \[
+ n = 2n'\, W(m - 1, k^{2n'})
+ \]
+ works for $r$.
+
+ We consider any $k$-colouring of $[n]$, and suppose it has no monochromatic AP of length $m$. We need to find $r$ colour-focused progressions of length $m - 1$.
+
+ We view this $k$-colouring of $[n]$ as a $k^{2n'}$-colouring of consecutive blocks of length $2n'$, of which there are $W(m - 1, k^{2n'})$.
+
+ Then by definition of $W(m - 1, k^{2n'})$, we can find blocks
+ \[
+ B_s, B_{s + t}, \cdots, B_{s + (m - 2)t}
+ \]
+ which are coloured identically. By the inductive hypothesis, we know $B_s$ contains $r - 1$ colour-focused AP's of length $m - 1$, say $A_1, \cdots, A_{r - 1}$, with first terms $a_1, \cdots, a_{r - 1}$ and common differences $d_1, \cdots, d_{r - 1}$; moreover, their focus $f$ also lies in $B_s$, because the length of $B_s$ is $2n'$, not just $n'$.
+
+ Since we assumed there is no monochromatic progression of length $m$, the focus $f$ must have a colour different from each of the $A_i$.
+
+ Now consider $A_1', A_2', \cdots, A_{r - 1}'$, where $A_i'$ has first term $a_i$, common difference $d_i + 2n't$, and length $m - 1$. Adding this difference sends us to the next identically-coloured block, and then to the next term of the AP there. We also pick $A_r'$ to consist of all the foci of the blocks, namely
+ \[
+ A_r' = \{f, f + 2n't, \cdots, f + (m - 2)\, 2n't\}.
+ \]
+ These progressions are monochromatic with distinct colours, and focused at $f + (2n' t)(m - 1)$. So we are done.
+\end{proof}
+
+This argument, where one looks at the induced colourings of blocks, is called a \term{product argument}.
+
+The bounds we obtain from this argument are, as one would expect, terrible. We have
+\[
+ W(3, k) \leq k^{\iddots^{k^{4k}}},
+\]
+where the tower of $k$'s has length $k - 1$.
+
+Now we can generalize in a different direction. What can we say about monochromatic structures in a $k$-colouring of $\N^d$? What is the right generalization of the van der Waerden theorem?
+
+To figure out the answer, we need to find the right generalization of arithmetic progressions.
+
+\begin{defi}[Homothetic copy]\index{homothetic copy}
+ Given a finite $S \subseteq \N^d$, a \emph{homothetic copy} of $S$ is a set of the form
+ \[
+ \ell S + M,
+ \]
+ where $\ell \in \N$ with $\ell \not= 0$, and $M \in \N^d$.
+\end{defi}
+
+\begin{eg}
+ An arithmetic progression of length $m$ is just a homothetic copy of $[m] = \{1, 2, \cdots, m\}$.
+\end{eg}
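In code (a sketch of ours, with points of $\N^d$ as tuples), a homothetic copy is just a coordinate-wise scaling and translation; in dimension $1$ with $S = [m]$ we recover an AP:

```python
def homothetic_copy(S, l, M):
    """The set l*S + M, for S a finite set of points (tuples) in N^d."""
    return {tuple(l * si + Mi for si, Mi in zip(s, M)) for s in S}

S = {(1,), (2,), (3,)}
print(sorted(homothetic_copy(S, 2, (5,))))  # [(7,), (9,), (11,)]: the AP 7, 9, 11
```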
+
+Thus, the theorem we want is the following:
+\begin{thm}[Gallai]
+ Whenever $\N^d$ is $k$-coloured, there exists a monochromatic (homothetic) copy of $S$ for each finite $S \subseteq \N^d$.
+\end{thm}
+
+In order to prove this, we prove the beautiful Hales--Jewett theorem, which is in some sense the abstract core of the argument we used for van der Waerden theorem.
+
+We need a bit of set up. As usual, we write
+\[
+ [m]^n = \{(x_1, \cdots, x_n): x_i \in [m]\}.
+\]
+Here is the important definition:
+\begin{defi}[Combinatorial line]\index{combinatorial line}
+ A \emph{combinatorial line} $L \subseteq [m]^n$ is a set of the form
+ \[
+ \{(x_1, \cdots, x_n): x_i = x_j\text{ for all }i, j \in I; x_i = a_i\text{ for all }i \not\in I\}
+ \]
+ for some fixed non-empty set of coordinates $I \subseteq [n]$ and $a_i \in [m]$.
+
+ $I$ is called the set of \term{active coordinates}.
+\end{defi}
+
+\begin{eg}
+ Here is a line in $[3]^2$:
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2} {
+ \foreach \y in {0, 1, 2} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw (0, 0) -- (0, 2);
+ \end{tikzpicture}
+ \end{center}
+ This line is given by $I = \{2\}$, $a_1 = 1$.
+
+ The following shows all the lines we have:
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0, 1, 2} {
+ \foreach \y in {0, 1, 2} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {0, 1, 2} {
+ \draw (\x, 0) -- (\x, 2);
+ \draw (0, \x) -- (2, \x);
+ }
+ \draw (0, 0) -- (2, 2);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
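We can enumerate combinatorial lines mechanically: a line corresponds to choosing, in each coordinate, either a fixed value or ``active'', with at least one active coordinate. This gives $(m + 1)^n - m^n$ lines (a count not in the notes); the sketch below recovers the $7$ lines of $[3]^2$ pictured above:

```python
from itertools import product

def combinatorial_lines(m, n):
    """Each mask entry is a fixed value in 1..m, or None for an active coordinate."""
    out = []
    for mask in product([None] + list(range(1, m + 1)), repeat=n):
        if None not in mask:
            continue  # a line needs a non-empty set of active coordinates
        out.append(tuple(
            tuple(x if c is None else c for c in mask) for x in range(1, m + 1)
        ))
    return out

print(len(combinatorial_lines(3, 2)))  # 7, matching (3+1)^2 - 3^2
```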
+It is easy to see that any line has exactly $m$ elements. We write $L^-$ and $L^+$ for the first and last point of the line, i.e.\ the points where the active coordinates are all $1$ and $m$ respectively. It is clear that any line $L$ is uniquely determined by $L^-$ and its active coordinates.
+
+\begin{eg}
+ In $[3]^3$, we have the line
+ \[
+ L = \{(1, 2, 1), (2, 2, 2), (3, 2, 3)\}.
+ \]
+ This is a line with $I = \{1, 3\}$ and $a_2 = 2$. The first and last points are $(1, 2, 1)$ and $(3, 2, 3)$.
+\end{eg}
+
+Then we have the following theorem:
+\begin{thm}[Hales--Jewett theorem]\index{Hales--Jewett theorem}
+ For all $m, k \in \N$, there exists $N = HJ(m, k)$ such that whenever $[m]^N$ is $k$-coloured, there exists a monochromatic line.
+\end{thm}
+Note that this theorem implies van der Waerden's theorem easily. The idea is to embed $[m]^N$ into $\N$ linearly, so that a monochromatic line in $[m]^N$ gives an arithmetic progression of length $m$. Explicitly, given a $k$-colouring $c: \N \to [k]$, we define $c': [m]^N \to [k]$ by
+\[
+ c'(x_1, x_2, \cdots, x_N) = c(x_1 + x_2 + \cdots + x_N).
+\]
+Now a monochromatic line gives us an arithmetic progression of length $m$. For example, if the line is
+\[
+ L = \{(1, 2, 1), (2, 2, 2), (3, 2, 3)\},
+\]
+then we get the monochromatic progression $4, 6, 8$ of length $3$. In general, the monochromatic AP defined has $d = |I|$.
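This embedding is one line of code (our own sketch): summing coordinates maps a combinatorial line to an AP whose common difference is the number of active coordinates.

```python
def line_to_ap(line):
    """Sum the coordinates of each point of a combinatorial line in [m]^N."""
    return [sum(point) for point in line]

L = [(1, 2, 1), (2, 2, 2), (3, 2, 3)]
print(line_to_ap(L))  # [4, 6, 8]: an AP with common difference |I| = 2
```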
+
+The proof is essentially what we did for van der Waerden theorem. We are going to build a lot of almost-lines that point at the same vertex, and then no matter what colour the last vertex is, we are happy.
+
+\begin{defi}[Focused lines]\index{focused lines}
+ We say lines $L_1, \cdots, L_r$ are \emph{focused} at $f \in [m]^N$ if $L_i^+ = f$ for all $i = 1, \cdots, r$.
+\end{defi}
+
+\begin{defi}[Colour focused lines]\index{colour focused lines}\index{focused lines!colour}
+ If $L_1, \cdots, L_r$ are focused lines, and $L_i \setminus \{L_i^+\}$ is monochromatic for each $i = 1, \cdots, r$ and all these colours are distinct, then we say $L_1, \cdots, L_r$ are \emph{colour focused} at $f$.
+\end{defi}
+
+\begin{proof}
+ We proceed by induction on $m$. This is clearly trivial for $m = 1$, as a line only has a single point.
+
+ Now suppose $m > 1$, and that $HJ(m - 1, k)$ exists for all $k \in \N$. As before, we will prove the following claim:
+ \begin{claim}
+ For each $1 \leq r \leq k$, there is an $n \in \N$ such that in any $k$-colouring of $[m]^n$, either
+ \begin{enumerate}
+ \item there exists a monochromatic line; or
+ \item there exist $r$ colour-focused lines.
+ \end{enumerate}
+ \end{claim}
+ Again, the result is immediate from the claim, as we just use it for $r = k$ and look at the colour of the focus.
+
+ To prove this claim, we induct on $r$. If $r = 1$, then picking $n = HJ(m - 1, k)$ works: viewing $[m - 1]^n \subseteq [m]^n$ naturally, any $k$-colouring of $[m]^n$ restricts to a $k$-colouring of $[m - 1]^n$, and a monochromatic line there is a line in $[m]^n$ with its last point removed, i.e.\ a single colour-focused line.
+
+ Now suppose $r > 1$ and $n$ works for $r - 1$. Then we will show that $n + n'$ works, where
+ \[
+ n' = HJ(m - 1, k^{m^n}).
+ \]
+ Consider a colouring $c: [m]^{n + n'} \to [k]$, and we assume this has no monochromatic lines.
+
+ We think of $[m]^{n + n'}$ as $[m]^n \times [m]^{n'}$. So for each point in $[m]^{n'}$, we have a whole cube $[m]^n$. Consider a $k^{m^n}$-colouring of $[m]^{n'}$ as follows --- given any $x \in [m]^{n'}$, we look at the copy of $[m]^n$ consisting of the points of $[m]^{n + n'}$ with last $n'$ coordinates $x$, and colour $x$ by the restriction of $c$ to this copy, of which there are $k^{m^n}$ possibilities.
+
+ By the definition of $n'$, there exists a line $L$ in $[m]^{n'}$ such that $L \setminus \{L^+\}$ is monochromatic for this new colouring. This means for all $a \in [m]^n$ and for all $b, b' \in L\setminus \{L^+\}$, we have
+ \[
+ c(a, b) = c(a, b').
+ \]
+ Let $c''(a)$ denote this common colour for each $a \in [m]^n$. This is a $k$-colouring of $[m]^n$ with no monochromatic line (because $c$ doesn't have any). So by definition of $n$, there exist lines $L_1, L_2, \cdots, L_{r - 1}$ in $[m]^n$ which are colour-focused at some $f \in [m]^n$ for $c''$.
+
+ In the proof of van der Waerden's theorem, we had a progression within each block, and also a way of jumping between blocks. Here we have the same thing: the lines in $[m]^n$, and the ``external'' line $L$.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \foreach \x in {0, 3, 6}{
+ \begin{scope}[shift={(\x, 0)}]
+ \draw [step=1] (0, 0) grid (2, 2);
+
+ \node [circ, mblue] at (0, 0) {};
+ \node [circ, mblue] at (1, 1) {};
+ \node [circ, red] at (2, 0) {};
+ \node [circ, red] at (2, 1) {};
+
+ \draw [morange] (2, 2) circle [radius=0.2];
+ \end{scope}
+ }
+
+ \draw [->] (2.5, -1) -- (5.5, -1) node [pos=0.5, below] {$L$} ;
+
+ \draw [mblue] (0, 0) edge [bend left] (4, 1);
+ \draw [mblue] (4, 1) edge [bend left] (8, 2);
+
+ \draw [red] (2, 0) edge [bend left] (5, 1);
+ \draw [red] (5, 1) edge [bend left] (8, 2);
+ \draw (2, 2) edge [bend left] (5, 2);
+ \draw (5, 2) edge [bend left] (8, 2);
+ \end{tikzpicture}
+ \end{center}
 Consider the lines $L_1', L_2', \cdots, L_{r - 1}'$ in $[m]^{n + n'}$, where
+ \[
+ (L_i')^- = (L_i^-, L^-),
+ \]
+ and the active coordinate set is $I_i \cup I$, where $I$ is the active coordinate set of $L$.
+
+ Also consider the line $F$ with $F^- = (f, L^-)$ and active coordinate set $I$.
+
+ Then we see that $L_1', \cdots, L_{r - 1}', F$ are $r$ colour-focused lines with focus $(f, L^+)$.
+\end{proof}
+
+We can now prove our generalized van der Waerden.
+\begin{thm}[Gallai]
+ Whenever $\N^d$ is $k$-coloured, there exists a monochromatic (homothetic) copy of $S$ for each finite $S \subseteq \N^d$.
+\end{thm}
+
+\begin{proof}
+ Let $S = \{S(1), S(2), \cdots, S(m)\} \subseteq \N^d$. Given a $k$-colouring $\N^d \to [k]$, we induce a $k$-colouring $c: [m]^N \to [k]$ by
+ \[
+ c'(x_1, \cdots, x_N) = c(S(x_1) + S(x_2) + \cdots + S(x_N)).
+ \]
+ By Hales-Jewett, for sufficiently large $N$, there exists a monochromatic line $L$ for $c'$, which gives us a monochromatic homothetic copy of $S$ in $\N^d$. For example, if the line is $(1\; 2\; 1)$, $(2\; 2\; 2)$ and $(3\; 2\; 3)$, then we know
+ \[
+ c(S(1) + S(2) + S(1)) = c(S(2) + S(2) + S(2)) = c(S(3) + S(2) + S(3)).
+ \]
+ So we have the monochromatic homothetic copy $\lambda S + \mu$ , where $\lambda = 2$ (the number of active coordinates), and $\mu = S(2)$.
+\end{proof}
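To make the example above concrete, here is a small Python sketch (with hypothetical points $S(1), S(2), S(3) \in \N^2$ chosen for illustration) checking that the three points of the line really form the homothetic copy $2S + S(2)$:

```python
# Hypothetical 3-point set S in N^2; the specific coordinates are made up.
S = {1: (1, 1), 2: (2, 1), 3: (1, 2)}

def vec_sum(*pts):
    # componentwise sum of points
    return tuple(map(sum, zip(*pts)))

# the monochromatic line from the example: active coordinates 1 and 3
line = [(1, 2, 1), (2, 2, 2), (3, 2, 3)]
images = [vec_sum(S[x1], S[x2], S[x3]) for (x1, x2, x3) in line]

# lambda = 2 (the number of active coordinates), mu = S(2)
lam, mu = 2, S[2]
copy = [tuple(lam * a + b for a, b in zip(S[i], mu)) for i in (1, 2, 3)]
assert images == copy
```

The same check works for any choice of the points $S(i)$, since the identity being verified is linear.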
+
+\subsubsection*{Largeness in Ramsey theory*}
In van der Waerden, we proved that for each $k, m$, there is some $n$ such that whenever we $k$-colour $[n]$, then there is a monochromatic AP of length $m$. How much of this relies on the underlying set being $[n]$? Or is it just that if we finitely colour $[n]$, then one of the colours must contain a lot of numbers, and if we have a lot of numbers, then we are guaranteed to have a monochromatic AP?
+
+Of course, by ``contains a lot of numbers'', we do not mean the actual number of numbers we have. It is certainly not true that for some $n$, whenever we $k$-colour any set of $n$ integers, we must have a monochromatic $m$-AP, because an arbitrary set of $n$ integers need not even contain an $m$-AP at all, let alone a monochromatic one. Thus, what we really mean is \emph{density}.
+%
+%We say $X \subseteq \N^{(2)}$ is \emph{large} if $X$ contains an infinite complete graph. Then Ramsey's theorem says if we finitely colour $\N$, then one of the colours gives a large subset.
+%
+%We say a $X \subseteq \N$ is large if $X$ contains an $m$ AP for all $m \in \N$. Again van der Waerden says if we finitely colour $\N$, then there is one colour that gives a large subset.
+%
+%But if we are lazy, then we might try to find such a large set simply by looking at the ``largest'' subset. So one natural question is as follows --- does a simpler ``counting'' notion of largeness imply Ramsey-theoretic structures?
+%
+%In $\N^{(2)}$ the answer is no, as there isn't much structure in $\N^{(2)}$, so we can only talk about whether things are infinite or not.
+%
+%However, in $\N$, we can talk about density.
+\begin{defi}[Density]
 For $A \subseteq \N$, we define the \term{density} of $A$ to be
 \[
 \bar{d}(A) = \limsup_{(b - a) \to \infty} \frac{|A \cap [a, b]|}{b - a + 1}.
 \]
+\end{defi}
+Clearly, in any finite $k$-colouring of $\N$, there exists a colour class with positive density. Thus, we want to know if merely a positive density implies the existence of progressions. Remarkably, the answer is yes!
+
\begin{thm}[Szemer\'edi's theorem]\index{Szemer\'edi's theorem}
+ Let $\delta > 0$ and $m \in \N$. Then there exists some $N = S(m, \delta) \in \N$ such that any subset $A \subseteq [N]$ with $|A| \geq \delta N$ contains an $m$-term arithmetic progression.
+\end{thm}
+The proof of this theorem is usually the subject of an entire lecture course, so we are not going to attempt to prove this. Even the particular case $m = 3$ is very hard.
+
+This theorem has a lot of very nice Ramsey consequences. In the case of graph colourings, we asked ourselves what happens if we colour a graph with \emph{infinitely} many colours. Suppose we now have a colouring $c: \N \to \N$. Can we still find a monochromatic progression of length $m$? Of course not, because $c$ can be injective.
+
+\begin{thm}
 For any $c: \N \to \N$, there exists an $m$-AP on which either
+ \begin{enumerate}
+ \item $c$ is constant; or
+ \item $c$ is injective.
+ \end{enumerate}
+\end{thm}
+It is also possible to prove this directly, but it is easy with Szemer\'edi.
+
+\begin{proof}
+ We set
+ \[
+ \delta = \frac{1}{m^2(m + 1)^2}.
+ \]
+ We let $N = S(m, \delta)$. We write
+ \[
+ [N] = A_1 \cup A_2 \cup \cdots \cup A_k,
+ \]
+ where the $A_i$'s are the colour-classes of $c|_{[N]}$. By choice of $N$, we are done if $|A_i| \geq \delta N$ for some $1 \leq i \leq k$. So suppose not.
+
 Let's try to count the number of $m$-term arithmetic progressions in $[N]$. There are more than $N^2/(m + 1)^2$ of these, as we can take any $a, d \in [N/(m + 1)]$. We want to show that there is an AP that hits each $A_i$ at most once.
+
 So, fixing an $i$, how many AP's are there that hit $A_i$ at least twice? We need to pick two terms in $A_i$, and also decide which two positions in the progression they occupy, e.g.\ they can be the first and second term, or the 5th and 17th term. So there are at most $m^2 |A_i|^2$ such AP's.
+
 So the number of AP's on which $c$ is injective is greater than
 \[
 \frac{N^2}{(m + 1)^2} - \sum_{i = 1}^k m^2 |A_i|^2 \geq \frac{N^2}{(m + 1)^2} - m^2 (\delta N) \sum_{i = 1}^k |A_i| = \frac{N^2}{(m + 1)^2} - \delta m^2 N^2 \geq 0.
 \]
 So we are done. Here the first inequality follows from the fact that $\sum |A_i| = N$ and each $|A_i| < \delta N$.
+\end{proof}
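The AP count used in the proof is easy to sanity-check numerically. Here is a small Python sketch (the values $N = 100$ and $m = 3$ are arbitrary choices, not from the proof):

```python
def num_m_APs(N, m):
    # count m-term APs a, a + d, ..., a + (m-1)d contained in [1, N], with d >= 1
    return sum(1 for a in range(1, N + 1)
                 for d in range(1, N + 1)
                 if a + (m - 1) * d <= N)

# the crude lower bound used in the proof: any a, d <= N/(m+1) gives such an AP
N, m = 100, 3
assert num_m_APs(N, m) > N**2 / (m + 1)**2
```

The true count is in fact much larger than the bound $N^2/(m+1)^2$; the proof only needs this crude estimate.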
+
+Our next theorem will mix arithmetic and graph-theoretic properties. Consider a colouring $c: \N^{(2)} \to [2]$. As before, we say a set $X$ is monochromatic if $c|_{X^{(2)}}$ is constant. Now we want to try to find a monochromatic set with some arithmetic properties.
+
The first obvious question to ask is --- can we find a monochromatic 3-term arithmetic progression? The answer is no. For example, we can define $c(ij)$ to be the parity of the largest power of $2$ dividing $j - i$, and then there is no monochromatic $3$-term arithmetic progression.
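This colouring can be verified by brute force; a minimal Python check (the range $[1, 200]$ is an arbitrary choice) that no $3$-AP has all three edges the same colour:

```python
def v2(n):
    # exponent of the largest power of 2 dividing n
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def colour(i, j):
    # colour of the edge ij: parity of the 2-adic valuation of j - i
    return v2(abs(j - i)) % 2

# no 3-AP {a, a+d, a+2d} has all three edges the same colour:
# the edge {a, a+2d} always differs in colour from {a, a+d},
# since v2(2d) = v2(d) + 1
for a in range(1, 201):
    for d in range(1, 101):
        assert colour(a, a + d) != colour(a, a + 2 * d)
```

The brute force only confirms what the valuation argument makes obvious: doubling $d$ flips the parity of the valuation.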
+
+What if we make some concessions --- can we find a blue 10-AP, or if not, find an infinite red set? The answer is again no. This construction is slightly more involved. To construct a counterexample, we can make progressively larger red cliques, and take all other edges blue. If we double the size of the red cliques every time, it is not hard to check that there is no blue 4-AP, and no infinite red set.
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+
+ \node [circ] at (1, 0) {};
+ \draw [mred] (1, 0) -- (1.5, 0);
+ \node [circ] at (1.5, 0) {};
+
+ \node [circ] at (2.5, 0.25) {};
+ \node [circ] at (3, 0.25) {};
+ \node [circ] at (2.5, -0.25) {};
+ \node [circ] at (3, -0.25) {};
+
+ \draw [mred] (2.5, 0.25) -- (2.5, -0.25) -- (3, -0.25) -- (3, 0.25) -- cycle;
+
+ \draw [mred] (2.5, 0.25) -- (3, -0.25);
+ \draw [mred] (2.5, -0.25) -- (3, 0.25);
+
+ \node at (4, 0) {$\cdots$};
+ \end{tikzpicture}
+\end{center}
+What if we further relax our condition, and only require an arbitrarily large red set?
+\begin{thm}
+ For any $c: \N^{(2)} \to \{\mathrm{red}, \mathrm{blue}\}$, either
+ \begin{enumerate}
+ \item There exists a blue $m$-AP for each $m \in \N$; or
 \item There exist arbitrarily large red sets.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ Suppose we can't find a blue $m$-AP for some fixed $m$. We induct on $r$, and try to find a red set of size $r$.
+
 Say $A \subseteq \N$ is a progression of length $N$. Since $A$ has no blue $m$-term progression, it must contain many red edges. Indeed, each $m$-AP in $A$ must contain a red edge, each edge specifies two points, and two points can be extended to an $m$-term progression in at most $m^2$ ways. Since there are more than $N^2/(m + 1)^2$ many $m$-AP's in $A$, there are at least
+ \[
+ \frac{N^2}{m^2(m + 1)^2}
+ \]
+ red edges. With the benefit of hindsight, we set
+ \[
+ \delta = \frac{1}{2m^2(m + 1)^2}.
+ \]
+ The idea is that since we have lots of red edges, we can try to find a point with a lot of red edges connected to it, and we hope to find a progression in it.
+
+ We say $X, Y \subseteq \N$ form an $(r, k)$-structure if
+ \begin{enumerate}
+ \item They are disjoint
+ \item $X$ is red;
+ \item $Y$ is an arithmetic progression;
+ \item All $X\mdash Y$ edges are red;
+ \item $|X| = r$ and $|Y| = k$.
+ \end{enumerate} % insert a picture
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mred, opacity=0.3] (0.3, -0.1) ellipse (1 and 1.5);
+ \node [circ] (1) at (0, 0) {};
+ \node [circ] (2) at (1, 0.4) {};
+ \node [circ] (3) at (0.5, 0.1) {};
+ \node [circ] (4) at (-0.2, 0.5) {};
+ \node [circ] (5) at (0.8, -0.6) {};
+ \node [circ] (6) at (-0.1, -0.8) {};
+
+ \draw [mred] (1) -- (2);
+ \draw [mred] (1) -- (3);
+ \draw [mred] (1) -- (4);
+ \draw [mred] (1) -- (5);
+ \draw [mred] (1) -- (6);
+ \draw [mred] (2) -- (3);
+ \draw [mred] (2) -- (4);
+ \draw [mred] (2) -- (5);
+ \draw [mred] (2) -- (6);
+ \draw [mred] (3) -- (4);
+ \draw [mred] (3) -- (5);
+ \draw [mred] (3) -- (6);
+ \draw [mred] (4) -- (5);
+ \draw [mred] (4) -- (6);
+ \draw [mred] (5) -- (6);
+
+ \foreach \x in {3, 4, 5, 7.5} {
+ \node [circ] at (\x, 0) {};
+ \foreach \y in {1, 2, 3, 4, 5, 6} {
+ \draw (\y) edge [bend left, mred] (\x, 0);
+ }
+ }
+ \node at (6.25, 0) {$\cdots$};
+ \node [circ] (1) at (0, 0) {};
+ \node [circ] (2) at (1, 0.4) {};
+ \node [circ] (3) at (0.5, 0.1) {};
+ \node [circ] (4) at (-0.2, 0.5) {};
+ \node [circ] (5) at (0.8, -0.6) {};
+ \node [circ] (6) at (-0.1, -0.8) {};
+
+ \end{tikzpicture}
+ \end{center}
+ We show by induction that there is an $(r, k)$-structure for each $r$ and $k$.
+
 A $(1, k)$-structure is just a vertex joined by red edges to a $k$-AP. If we take $N = S(k, \delta)$, we know among the first $N$ natural numbers, there are at least $N^2/(m^2(m + 1)^2)$ red edges inside $[N]$. So in particular, some $v \in [N]$ has at least $\delta N$ red neighbours in $[N]$, and so we know $v$ is connected to some $k$-AP by red edges. That's the base case done.
+
 Now suppose we can find an $(r - 1, k')$-structure for all $k' \in \N$. We set
 \[
 k' = S\left(k, \frac{1}{2m^2(m + 1)^2}\right).
 \]
 We let $(X, Y)$ be an $(r - 1, k')$-structure. As before, we can find $v \in Y$ such that $v$ has at least $\delta|Y|$ red neighbours in $Y$. Then we can find a progression $Y'$ of length $k$ in the red neighbourhood of $v$, and we are done, as $(X \cup \{v\}, Y')$ is an $(r, k)$-structure, and an ``arithmetic progression'' within an arithmetic progression is still an arithmetic progression.
+\end{proof}
+Before we end this chapter, we make a quick remark. Everything we looked for in this chapter involved the additive structure of the naturals. What about the multiplicative structure? For example, given a finite colouring of $\N$, can we find a monochromatic geometric progression? The answer is trivially yes. We can look at $\{2^x: x \in \N\}$, and then multiplication inside this set just looks like addition in the naturals.
+
+But what if we want to mix additive and multiplicative structures? For example, can we always find a monochromatic set of the form $\{x + y, xy\}$? Of course, there is the trivial answer $x = y = 2$, but is there any other? This question was answered positively in 2016! We will return to this at the end of the course.
+
+\section{Partition Regularity}
+In the previous chapter, the problems we studied were mostly ``linear'' in nature. We had some linear system, namely that encoding the fact that a sequence is an AP, and we wanted to know if it had a monochromatic solution. More generally, we can define the following:
+
+\begin{defi}[Partition regularity]\index{partition regular}
+ We say an $m \times n$ matrix $A$ over $\Q$ is \emph{partition regular} if whenever $\N$ is finitely coloured, there exists an $x \in \N^n$ such that $Ax = 0$ and $x$ is monochromatic, i.e.\ all coordinates of $x$ have the same colour.
+\end{defi}
+Recall that $\N$ does not include $0$.
+
There is no loss of generality in assuming that $A$ in fact has entries in $\Z$, by scaling $A$, but sometimes it is (notationally) convenient to consider cases where the entries are rational.
+
+The question of the chapter is the following --- when is a matrix partition regular? We begin by looking at some examples.
+\begin{eg}[Schur's theorem]\index{Schur's theorem}
+ \emph{Schur's theorem} says whenever $\N$ is finitely coloured, there exists a monochromatic set of the form $\{x, y, x + y\}$. In other words the matrix $\begin{pmatrix}1 & 1 & -1\end{pmatrix}$ is partition regular, since
+ \[
+ \begin{pmatrix}
+ 1 & 1 & -1
+ \end{pmatrix}
+ \begin{pmatrix}
+ x\\y\\z
+ \end{pmatrix} = 0 \Longleftrightarrow z = x + y.
+ \]
+\end{eg}
+
+\begin{eg}
 How about $\begin{pmatrix}2 & 3 & -5\end{pmatrix}$? This is partition regular, because we can pick any $x$, and we have
+ \[
+ \begin{pmatrix}
+ 2 & 3 & -5
+ \end{pmatrix}
+ \begin{pmatrix}
+ x\\x\\x
+ \end{pmatrix} = 0.
+ \]
+ This is a trivial example.
+\end{eg}
+
+How about van der Waerden's theorem?
+\begin{eg}
+ Van der Waerden's theorem says there is a monochromatic $3$-AP $\{x_1, x_2, x_3\}$ whenever $\N$ is finitely-coloured. We know $x_1, x_2, x_3$ forms a $3$-AP iff
+ \[
+ x_3 - x_2 = x_2 - x_1,
+ \]
+ or equivalently
+ \[
+ x_3 + x_1 = 2 x_2.
+ \]
+ This implies that $\begin{pmatrix}1 & -2 & 1\end{pmatrix}$ is partition regular. But this is actually not a very interesting thing to say, because $x_1 = x_2 = x_3$ is always a solution to this equation. So this falls into the previous ``trivial'' case.
+
+ On the second example sheet we saw a stronger version of van der Waerden. It says we can always find a monochromatic set of the form
+ \[
+ \{d, a, a + d, a + 2d, \cdots, a + md\}.
+ \]
 By including this variable, we can write down the property of being a progression in a non-trivial way:
 \[
 \begin{pmatrix}
 1 & 1 & -1 & 0\\
 2 & 1 & 0 & -1
 \end{pmatrix}
 \begin{pmatrix}
 d \\ a \\ x_2 \\ x_3
 \end{pmatrix} = 0.
 \]
+ This obviously generalizes to an arbitrary $m$-AP, with matrix
+ \[
+ \begin{pmatrix}
+ 1 & 1 & -1 & 0 & 0 & \cdots & 0\\
+ 1 & 2 & 0 & -1 & 0 & \cdots & 0\\
+ 1 & 3 & 0 & 0 & -1 & \cdots & 0\\
+ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
+ 1 & m & 0 & 0 & 0 & \cdots & -1
+ \end{pmatrix}.
+ \]
+\end{eg}
+We've seen three examples of partition regular matrices. Of course, not every matrix is partition regular. The matrix $\begin{pmatrix}1 & 1\end{pmatrix}$ is not partition regular, for the silly reason that two positive things cannot add up to zero.
+
Let's now look at a first non-trivial example of a matrix that is not partition regular.
+\begin{eg}
+ The matrix $\begin{pmatrix}2 & -1\end{pmatrix}$ is not partition regular, since we can put $c(x) = (-1)^n$, where $n$ is the maximum integer such that $2^n \mid x$. Then $\{x, 2x\}$ is never monochromatic.
+
 A similar argument shows that if $\lambda \in \Q$ is such that $\begin{pmatrix}\lambda & -1\end{pmatrix}$ is partition regular, then $\lambda = 1$.
+\end{eg}
+But if we write down a random matrix, say $\begin{pmatrix}2 & 3 & -6\end{pmatrix}$? The goal of this chapter is to find a complete characterization of matrices that are partition regular.
+
+\begin{defi}[Columns property]\index{columns property}
+ Let
+ \[
+ A =
+ \begin{pmatrix}
+ \uparrow & \uparrow & & \uparrow\\
+ c^{(1)} & c^{(2)} & \cdots & c^{(n)}\\
+ \downarrow & \downarrow & & \downarrow
+ \end{pmatrix}.
+ \]
 We say $A$ has the \emph{columns property} if there is a partition $[n] = B_1 \cup B_2 \cup \cdots \cup B_d$ such that
 \[
 \sum_{i \in B_s} c^{(i)} \in \spn\{c^{(i)} : i \in B_1 \cup \cdots \cup B_{s - 1}\}
 \]
 for $s = 1, \cdots, d$. In particular,
 \[
 \sum_{i \in B_1} c^{(i)} = 0.
 \]
+\end{defi}
+What does this mean? Let's look at the matrices we've seen so far.
+\begin{eg}
+ $\begin{pmatrix}1 & 1 & -1\end{pmatrix}$ has the columns property by picking $B_1 = \{1, 3\}$ and $B_2 = \{2\}$.
+\end{eg}
+
+\begin{eg}
+ $\begin{pmatrix}2 & 3 & -5\end{pmatrix}$ has the columns property by picking $B_1 = \{1, 2, 3\}$.
+\end{eg}
+
+\begin{eg}
+ The matrix
+ \[
+ \begin{pmatrix}
+ 1 & 1 & -1 & 0 & 0 & \cdots & 0\\
+ 1 & 2 & 0 & -1 & 0 & \cdots & 0\\
+ 1 & 3 & 0 & 0 & -1 & \cdots & 0\\
+ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
+ 1 & m & 0 & 0 & 0 & \cdots & -1
+ \end{pmatrix}
+ \]
 has the columns property. Indeed, we take $B_1 = \{1, 3, \cdots, m + 2\}$, and since $\{c^{(3)}, \cdots, c^{(m + 2)}\}$ spans all of $\R^m$, we know picking $B_2 = \{2\}$ works.
+\end{eg}
+
+\begin{eg}
 $\begin{pmatrix}\ell & -1\end{pmatrix}$ has the columns property iff $\ell = 1$. In particular, $\begin{pmatrix}1 & 1 \end{pmatrix}$ does not have the columns property.
+\end{eg}
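Since the columns property is a finite condition, a computer can check it. Here is a brute-force Python sketch (exponential in the number of columns, so only suitable for small examples like the ones above; it uses exact rational arithmetic via `fractions`):

```python
from fractions import Fraction
from itertools import product

def rank(rows):
    # rank over Q by Gaussian elimination
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(v, vecs):
    # is v in the rational span of vecs? (vecs may be empty: then v must be 0)
    return rank(vecs + [v]) == rank(vecs)

def columns_property(A):
    cols = [list(col) for col in zip(*A)]
    n = len(cols)
    # try every assignment of columns to (ordered) blocks B_1 <= B_2 <= ...
    for labels in product(range(n), repeat=n):
        def block_ok(s):
            sigma = [sum(cols[j][i] for j in range(n) if labels[j] == s)
                     for i in range(len(A))]
            earlier = [cols[j] for j in range(n) if labels[j] < s]
            return in_span(sigma, earlier)
        if all(block_ok(s) for s in set(labels)):
            return True
    return False

assert columns_property([[1, 1, -1]])       # Schur
assert columns_property([[2, 3, -5]])       # the trivial example
assert not columns_property([[2, -1]])      # not partition regular
```

This mirrors the examples above: the first block must sum to zero, and each later block must sum into the span of the earlier columns.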
+
+Given these examples, it is natural to conjecture the following:
+\begin{thm}[Rado's theorem]\index{Rado's theorem}
 A matrix $A$ is partition regular iff it has the columns property.
+\end{thm}
+This is a remarkable theorem! The property of being partition regular involves a lot of quantifiers, over infinitely many things, but the column property is entirely finite, and we can get a computer to check it for us easily.
+
Another remarkable property of this theorem is that neither direction is obvious! It turns out that showing partition regularity implies the columns property is (marginally) easier, because if we know something is partition regular, then we can try to cook up some interesting colourings and see what happens. The other direction is harder.
+
+To get a feel of the result, we will first prove it in the case of a single equation. The columns property in particular becomes much simpler. It just means that there exists a non-empty subset of the non-zero $a_i$'s that sums up to zero.
+
+\begin{thm}
+ If $a_1,\cdots, a_n \in \Q \setminus \{0\}$, then $\begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix}$ is partition regular iff there exists a non-empty $I \subseteq [n]$ such that
+ \[
+ \sum_{i \in I} a_i = 0.
+ \]
+\end{thm}
+
+For a fixed prime $p$, we let $d(x)$ denote the last non-zero digit of $x$ in base $p$, i.e.\ if
+\[
+ x = d_n p^n + d_{n - 1}p^{n - 1} + \cdots + d_0,
+\]
+then
+\[
+ L(x) = \min\{i: d_i \not= 0\}
+\]
+and $d(x) = d_{L(x)}$. We now prove the easy direction of the theorem.
+
+\begin{prop}
 If $a_1, \cdots, a_n \in \Q \setminus \{0\}$ and $\begin{pmatrix}a_1 & a_2 & \cdots & a_n\end{pmatrix}$ is partition regular, then
+ \[
+ \sum_{i \in I} a_i = 0
+ \]
+ for some non-empty $I$.
+\end{prop}
+
+\begin{proof}
+ We wlog $a_i \in \Z$, by scaling. Fix a suitably large prime
+ \[
+ p > \sum_{i = 1}^n |a_i|,
+ \]
 and consider the $(p - 1)$-colouring of $\N$ where $x$ is coloured $d(x)$. By partition regularity, we find monochromatic $x_1, \cdots, x_n$ such that
 \[
 \sum a_i x_i = 0
 \]
 and $d(x_i) = d$ for some $d \in \{1, \cdots, p - 1\}$. We write out everything in base $p$, and let
+ \[
+ L = \min \{L(x_i): 1 \leq i \leq n\},
+ \]
+ and set
+ \[
+ I = \{i: L(x_i) = L\}.
+ \]
 Then for all $i \in I$, we have
 \[
 x_i \equiv d p^L \pmod {p^{L + 1}}.
 \]
 On the other hand, we are given that
 \[
 \sum a_i x_i = 0.
 \]
 Taking mod $p^{L + 1}$ gives us
 \[
 \sum_{i \in I} a_i d p^L \equiv 0 \pmod {p^{L + 1}},\quad\text{i.e.}\quad \sum_{i \in I} a_i d \equiv 0 \pmod p.
 \]
 Since $d$ is invertible mod $p$, we know
 \[
 \sum_{i \in I} a_i \equiv 0 \pmod p.
 \]
 But by our choice of $p > \sum |a_i|$, this implies that $\sum_{i \in I} a_i = 0$.
+\end{proof}
+Here we knew what prime $p$ to pick ahead. If we didn't, then we could consider \emph{all} primes $p$, and for each $p$ we find $I_p \subseteq [n]$ such that
+\[
 \sum_{i \in I_p} a_i \equiv 0 \pmod p.
+\]
Then some $I_p$ has to occur infinitely often, and then we are done. Note that this merely uses the fact that if a fixed number is $0$ mod $p$ for infinitely many primes $p$, then it must be zero. This is not some deep local-global number-theoretic fact.
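For concreteness, the base-$p$ digit functions $L(x)$ and $d(x)$ used in these colourings are easy to compute; a short Python sketch:

```python
def last_nonzero_digit(x, p):
    # returns (L(x), d(x)): position and value of the least significant
    # non-zero digit of x written in base p
    L = 0
    while x % p == 0:
        x //= p
        L += 1
    return L, x % p

# e.g. 50 = 2 * 5^2 is "200" in base 5, so L(50) = 2 and d(50) = 2
assert last_nonzero_digit(50, 5) == (2, 2)
```

The colouring in the proof assigns $x$ the colour $d(x) \in \{1, \cdots, p - 1\}$.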
+
+It turns out this is essentially the only way we know for proving this theorem. One possible variation is to use the ``first non-zero digit'' to do the colouring, but this is harder.
+
+Let's now try and do the other direction. Before we do that, we start by doing a warm up. Last time, we proved that if we had $\begin{pmatrix}1 & \lambda\end{pmatrix}$, then this is partition regular iff $\lambda = -1$. We now prove a three element version.
+
+\begin{prop}
 The matrix $\begin{pmatrix}1 & \lambda & -1\end{pmatrix}$ is partition regular for all $\lambda \in \Q$.
+\end{prop}
+
+\begin{proof}
+ We may wlog $\lambda > 0$. If $\lambda = 0$, then this is trivial, and if $\lambda < 0$, then we can multiply the whole equation by $-1$.
+
+ Say
+ \[
+ \lambda = \frac{r}{s}.
+ \]
+ The idea is to try to find solutions of this in long arithmetic progressions.
+
+ Suppose $\N$ is $k$-coloured. We let
+ \[
+ \{a, a + d, \cdots, a + nd\}
+ \]
+ be a monochromatic AP, for $n$ sufficiently large.
+
+ If $sd$ were the same colour as this AP, then we'd be done, as we can take
+ \[
+ x = a,\quad y = sd,\quad z = a + \frac{r}{s} sd.
+ \]
+ In fact, if any of $sd, 2sd, \cdots, \left\lfloor\frac{n}{r}\right\rfloor sd$ have the same colour as the AP, then we'd be done by taking
+ \[
+ x = a,\quad y = isd,\quad z = a + \frac{r}{s} isd = a + ird \leq a + nd.
+ \]
+ If this were not the case, then $\{sd, 2sd, \cdots, \left\lfloor\frac{n}{r}\right\rfloor sd\}$ is $(k - 1)$-coloured, and this is just a scaled up copy of $\N$. So we are done by induction on $k$.
+\end{proof}
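The substitution at the heart of this proof is easy to verify numerically. Here is a sketch with made-up values of $\lambda$, $a$ and $d$ (any choices work, since the identity is exact rational arithmetic):

```python
from fractions import Fraction

lam = Fraction(3, 2)                  # lambda = r/s (hypothetical value)
r, s = lam.numerator, lam.denominator
a, d = 7, 5                           # hypothetical start and common difference

# x = a, y = i*s*d, z = a + i*r*d solves x + lambda * y = z,
# and z stays inside the AP {a, a+d, ..., a+nd} as long as i*r <= n
for i in range(1, 20):
    x, y, z = a, i * s * d, a + i * r * d
    assert x + lam * y == z
```

This is exactly why the proof only needs $y$ to range over multiples of $sd$: each such choice lands $z$ back inside the progression.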
+
+Note that when $\lambda = 1$, we have $\begin{pmatrix}1 & 1 & -1\end{pmatrix}$ is partition regular, and this may be proved by Ramsey's theorem. Can we prove this more general result by Ramsey's theorem as well? The answer is, we don't know.
+
+It turns out this is not just a warm up, but the main ingredient of what we are going to do.
+
+\begin{thm}
 If $a_1,\cdots, a_n \in \Q \setminus \{0\}$, then $\begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix}$ is partition regular iff there exists a non-empty $I \subseteq [n]$ such that
+ \[
+ \sum_{i \in I} a_i = 0.
+ \]
+\end{thm}
+
+\begin{proof}
 One direction was done. To do the other direction, recall that we had a really easy case of, say,
 \[
 \begin{pmatrix}
 2 & 3 & -5
 \end{pmatrix},
 \]
 because we could just make all the variables the same.
+
+ In the general case, we can't quite do this, but we may try to solve this equation with the least number of variables possible. In fact, we shall find some monochromatic $x, y, z$, and then assign each of $x_1, \cdots, x_n$ to be one of $x, y, z$.
+
+ We know
+ \[
+ \sum_{i \in I} a_i = 0.
+ \]
 We fix some $i_0 \in I$, and partition the variables into three groups by setting
+ \[
+ x_i =
+ \begin{cases}
+ x & i = i_0\\
+ y & i \in I \setminus \{i_0\}\\
+ z & i \not\in I
+ \end{cases}.
+ \]
 We would be done if whenever we finitely colour $\N$, we can find monochromatic $x, y, z$ such that
 \[
 a_{i_0}x + \left(\sum_{i \in I \setminus \{i_0\}} a_i\right) y + \left(\sum_{i \not \in I} a_i\right) z = 0.
 \]
 But, since
 \[
 \sum_{i \in I} a_i = 0,
 \]
 this is equivalent to
 \[
 a_{i_0} x - a_{i_0} y + (\text{something}) z = 0.
 \]
 Since $a_{i_0} \not= 0$, we can divide out by $a_{i_0}$, and we are done by the previous case.
+\end{proof}
+
Note that our proof of the theorem shows that if an equation is partition regular for all last-digit base $p$ colourings, then it is partition regular for all colourings. This might sound like an easier thing to prove than the full-blown Rado's theorem, but it turns out the only proof we have for this implication is via Rado's theorem.
+
+We now prove the general case. We first do the easy direction, because it is largely the same as the single-equation case.
+
+\begin{prop}
+ If $A$ is an $m \times n$ matrix with rational entries which is partition regular, then $A$ has the columns property.
+\end{prop}
+
+\begin{proof}
+ We again wlog all entries of $A$ are integers. Let the columns of $A$ be $c^{(1)}, \cdots, c^{(n)}$. Given a prime $p$, we consider the $(p - 1)$-colouring of $\N$, where $x$ is coloured $d(x)$, the last non-zero digit in the base $p$ expansion. Since $A$ is partition regular, we obtain a monochromatic solution.
+
+ We then get a monochromatic $x_1, \cdots, x_n$ such that $Ax = 0$, i.e.
+ \[
+ \sum x_i c^{(i)} = 0.
+ \]
+ Any such solution with colour $d$ induces a partition of $[n] = B_1 \cup B_2 \cup \cdots \cup B_r$, where
+ \begin{itemize}
+ \item For all $i, j \in B_s$, we have $L(x_i) = L(x_j)$; and
 \item For all $s < t$ and $i \in B_s$, $j \in B_t$, we have $L(x_i) < L(x_j)$.
+ \end{itemize}
+ Last time, with the benefit of hindsight, we were able to choose some large prime $p$ that made the argument work. So we use the trick we mentioned after the proof last time.
+
+ Since there are finitely many possible partitions of $[n]$, we may assume that this particular partition is generated by infinitely many primes $p$. Call these primes $\mathcal{P}$. We introduce some notation. We say two vectors $u, v \in \Z^m$ satisfy $u \equiv v \pmod p$ if $u_i \equiv v_i \pmod p$ for all $i = 1, \cdots, m$.
+
+ Now we know that
+ \[
+ \sum x_i c^{(i)} = 0.
+ \]
 Looking at the last non-zero digit in the base $p$ expansion, as in the single-equation case, we have
+ \[
+ \sum_{i \in B_1} d c^{(i)} \equiv 0 \pmod p.
+ \]
+ From this, we conclude that, by multiplying by $d^{-1}$
+ \[
+ \sum_{i \in B_1} c^{(i)} \equiv 0 \pmod p,
+ \]
+ for all $p \in \mathcal{P}$. So we deduce that
+ \[
+ \sum_{i \in B_1} c^{(i)} = 0.
+ \]
+ Similarly, for higher $s$, we find that for each base $p$ colouring, we have
+ \[
 \sum_{i \in B_s} p^t d c^{(i)} + \sum_{i \in B_1 \cup \ldots \cup B_{s - 1}} x_i c^{(i)} \equiv 0 \pmod {p^{t + 1}}
+ \]
+ for all $s \geq 2$, and some $t$ dependent on $s$ and $p$. Multiplying by $d^{-1}$, we find
+ \[
+ \sum_{i \in B_s} p^t c^{(i)} + \sum_{i \in B_1 \cup \ldots \cup B_{s - 1}} (d^{-1} x_i) c^{(i)} \equiv 0 \pmod {p^{t + 1}}.\tag{$*$}
+ \]
+ We claim that this implies
+ \[
 \sum_{i \in B_s} c^{(i)} \in \bra c^{(i)}: i \in B_1 \cup \cdots \cup B_{s - 1}\ket.
+ \]
+ This is not exactly immediate, because the values of $x_i$ in $(*)$ may change as we change our $p$. But it is still some easy linear algebra.
+
+ Suppose this were not true. Since we are living in a Euclidean space, we have an inner product, and we can find some $v \in \Z^m$ such that
+ \[
+ \bra v, c^{(i)} \ket = 0\text{ for all }i \in B_1 \cup \cdots \cup B_{s - 1},
+ \]
+ and
+ \[
+ \left\bra v, \sum_{i \in B_s}c^{(i)}\right\ket \not= 0.
+ \]
 But then, taking the inner product of $(*)$ with $v$ gives
+ \[
+ p^t \left\bra v, \sum_{i \in B_s} c^{(i)}\right\ket \equiv 0 \pmod {p^{t + 1}}.
+ \]
+ Equivalently, we have
+ \[
+ \left\bra v, \sum_{i \in B_s}c^{(i)}\right\ket \equiv 0 \pmod p,
+ \]
+ but this is a contradiction. So we showed that $A$ has the columns property with respect to the partition $[n] = B_1 \cup \cdots \cup B_r$.
+\end{proof}
+
+We now prove the hard direction. We want an analogous gadget to our $\begin{pmatrix}1 & \lambda & -1\end{pmatrix}$ we had for the single-equation case. The definition will seem rather mysterious, but it turns out to be what we want, and its purpose becomes more clear as we look at some examples.
+
+\begin{defi}[$(m, p, c)$-set]\index{$(m, p, c)$-set}
+ For $m, p, c \in \N$, a set $S \subseteq \N$ is an $(m, p, c)$-set with generators $x_1, \cdots, x_m$ if $S$ has the form
+ \[
 S = \left\{ \sum_{i = 1}^m \lambda_i x_i : \begin{array}{c}\lambda_i = 0 \text{ for all }i < j\\ \lambda_j = c\\ \lambda_i \in [-p, p] \text{ for all } i > j\end{array}\text{ for some }j \in [m]\right\}.
+ \]
+ In other words, we have
+ \[
+ S = \bigcup_{j = 1}^m \{c x_j + \lambda_{j + 1} x_{j + 1} + \cdots + \lambda_m x_m : \lambda_i \in [-p, p]\}.
+ \]
+ For each $j$, the set $\{c x_j + \lambda_{j + 1} x_{j + 1} + \cdots + \lambda_m x_m: \lambda_i \in [-p, p]\}$ is called a \emph{row} of $S$.
+\end{defi}
+
+\begin{eg}
+ What does a $(2, p, 1)$ set look like? It has the form
+ \[
+ \{x_1 - p x_2, x_1 - (p - 1) x_2, \cdots, x_1 + p x_2\} \cup \{x_2\}.
+ \]
+ In other words, this is just an arithmetic progression with its common difference.
+\end{eg}
+
+\begin{eg}
+ A $(2, p, 3)$-set has the form
+ \[
 \{3x_1 - p x_2, \cdots, 3 x_1, \cdots, 3 x_1 + p x_2\} \cup \{3 x_2\}.
+ \]
+\end{eg}
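Generating an $(m, p, c)$-set by brute force makes the definition concrete; a small Python sketch:

```python
from itertools import product

def mpc_set(xs, p, c):
    # the (m, p, c)-set on generators xs = [x_1, ..., x_m], built row by row:
    # row j consists of c*x_j + lambda_{j+1} x_{j+1} + ... + lambda_m x_m
    # with each lambda_i in [-p, p]
    m = len(xs)
    S = set()
    for j in range(m):
        for lams in product(range(-p, p + 1), repeat=m - j - 1):
            S.add(c * xs[j] + sum(l * x for l, x in zip(lams, xs[j + 1:])))
    return S

# a (2, p, 1)-set is an AP together with its common difference, as in the
# first example: generators 10 and 1 give {8, ..., 12} plus {1}
assert mpc_set([10, 1], 2, 1) == {1, 8, 9, 10, 11, 12}
```

Note that for badly chosen generators some elements may be non-positive; the propositions below always produce genuine subsets of $\N$.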
The idea of an $(m, p, c)$-set is that we ``iterate'' this process many times, so an $(m, p, c)$-set consists of iterated AP's together with various patterns of their common differences.
+
Our proof strategy is to show that whenever we finitely-colour $\N$, we can always find a monochromatic $(m, p, c)$-set, and that given any matrix $A$ with the columns property and any $(m, p, c)$-set (for suitable $m$, $p$, $c$), there will always be a solution in there.
+
+\begin{prop}
+ Let $m, p, c \in \N$. Then whenever $\N$ is finitely coloured, there exists a monochromatic $(m, p, c)$-set.
+\end{prop}
+
+\begin{proof}
 It suffices to find an $(m, p, c)$-set all of whose rows are monochromatic, since when $\N$ is $k$-coloured, an $(m', p, c)$-set with $m' = mk + 1$ has $m$ monochromatic rows of the same colour by pigeonhole, and these rows contain a monochromatic $(m, p, c)$-set, by restricting to the elements where a lot of the $\lambda_i$ are zero. In this proof, whenever we say $(m, p, c)$-set, we mean one all of whose rows are monochromatic.
+
+ We will prove this by induction. We have a $k$-colouring of $[n]$, where $n$ is very very very large. This contains a $k$-colouring of
+ \[
+ B = \left\{c, 2c, \cdots, \left\lfloor \frac{n}{c}\right\rfloor c\right\}.
+ \]
 Since $c$ is fixed, we can pick this so that $\frac{n}{c}$ is large. By van der Waerden, we find some monochromatic set
+ \[
+ A = \{c x_1 - Nd, cx_1 - (N - 1)d, \cdots, cx_1 + Nd \} \subseteq B,
+ \]
 with $N$ very very large. Since each element is a multiple of $c$ by assumption, we know that $c \mid d$. By induction, we may find an $(m - 1, p, c)$-set with generators $x_2, \cdots, x_m$ in the set $\{d, 2d, \cdots, Md\}$, where $M$ is large. We are now done by taking the $(m, p, c)$-set on generators $x_1, \cdots, x_m$, provided
+ \[
+ c x_1 + \sum_{i = 2}^m \lambda_i x_i \in A
+ \]
+ for all $\lambda_i \in [-p, p]$, which is easily seen to be the case, provided $N \geq (m - 1) pM$.
+\end{proof}
+Note that the argument itself is quite similar to that in the $\begin{pmatrix}1 & \lambda & -1\end{pmatrix}$ case.
+
+Recall that Schur's theorem said that whenever we finitely-colour $\N$, we can find a monochromatic $\{x, y , x + y\}$. More generally, for $x_1, x_2, \cdots, x_m \in \N$, we let
+\[
 FS(x_1, \cdots, x_m) = \left\{\sum_{i \in I} x_i : I \subseteq [m], I \not= \emptyset\right\}.
+\]
The existence of a monochromatic $(m, 1, 1)$-set gives us
+\begin{cor}[Finite sum theorem]\index{Finite sum theorem}
+ For every fixed $m$, whenever we finitely-colour $\N$, there exists $x_1, \cdots, x_m$ such that $FS(x_1, \cdots, x_m)$ is monochromatic.
+\end{cor}
This is since an $(m, 1, 1)$-set contains $FS(x_1, \cdots, x_m)$. This was discovered independently by a bunch of different people, including Folkman, Rado and Sanders.
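For concreteness, the finite-sums set is easy to enumerate directly; a small Python sketch:

```python
from itertools import combinations

def FS(xs):
    # all sums over non-empty subsets of distinct elements of xs
    return {sum(c) for r in range(1, len(xs) + 1)
                   for c in combinations(xs, r)}

# with generators 1, 2, 4 the finite sums realise every value in [1, 7],
# by binary representation
assert FS([1, 2, 4]) == {1, 2, 3, 4, 5, 6, 7}
```

Each element of $FS(x_1, \cdots, x_m)$ is obtained from a row of the $(m, 1, 1)$-set by taking $\lambda_i \in \{0, 1\}$, which is why the corollary follows.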
+
+Similarly, if we let
+\[
 FP(x_1, \cdots, x_m) = \left\{\prod_{i \in I} x_i : I \subseteq [m], I \not= \emptyset\right\},
+\]
+then we can find these as well. For example, we can restrict attention to $\{2^n: n \in \N\}$, and use the finite sum theorem. This is the same idea as we had when we used van der Waerden to find geometric progressions.
+
But what if we want both? Can we have $FS(x_1,\cdots, x_m) \cup FP(x_1, \cdots, x_m)$ in the same colour class? The answer is actually not known! Even the case when $m = 2$, i.e.\ finding a monochromatic set of the form $\{x, y, x + y, xy\}$, is open. Until mid 2016, we did not know whether we could find $\{x + y, xy\}$ monochromatic (with $x, y > 2$).
+
To finish the proof of Rado's theorem, we need the following proposition:
+\begin{prop}
 If $A$ is a rational matrix with the columns property, then there are some $m, p, c \in \N$ such that $Ax = 0$ has a solution inside any $(m, p, c)$-set, i.e.\ all entries of the solution lie in the $(m, p, c)$-set.
+\end{prop}
+In the case of a single equation, we reduced the general problem to the case of 3 variables only. Here we are going to do something similar --- we will use the columns property to reduce the solution to something much smaller.
+
+\begin{proof}
+ We again write
+ \[
+ A =
+ \begin{pmatrix}
+ \uparrow & \uparrow & & \uparrow\\
+ c^{(1)} & c^{(2)} & \cdots & c^{(n)}\\
+ \downarrow & \downarrow & & \downarrow
+ \end{pmatrix}.
+ \]
+ Re-ordering the columns of $A$ if necessary, we assume that we have
+ \[
+ [n] = B_1 \cup \cdots \cup B_r
+ \]
+ such that $\max(B_s) < \min(B_{s + 1})$ for all $s$, and we have
+ \[
+ \sum_{i \in B_s} c^{(i)} = \sum_{i \in B_1 \cup \ldots \cup B_{s - 1}} q_{is}c^{(i)}
+ \]
+ for some $q_{is} \in \Q$. These $q_{is}$ only depend on the matrix. In other words, we have
+ \[
+ \sum d_{is} c^{(i)} = 0,
+ \]
+ where
+ \[
+ d_{is} =
+ \begin{cases}
 -q_{is} & i \in B_1 \cup \cdots \cup B_{s - 1}\\
+ 1 & i \in B_s\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ For a fixed $s$, if we scan these coefficients starting from $i = n$ and then keep decreasing $i$, then the first non-zero coefficient we see is $1$, which is good, because it looks like what we see in an $(m, p, c)$ set.
+
 Now we try to write down a general solution with $r$ many free variables. Given $(x_1, \cdots, x_r) \in \N^r$, we look at
+ \[
+ y_i = \sum_{s = 1}^r d_{is}x_s.
+ \]
+ It is easy to check that $Ay = 0$ since
+ \[
+ \sum y_i c^{(i)} = \sum_i \sum_s d_{is} x_s c^{(i)} = \sum_s x_s \sum_i d_{is} c^{(i)} = 0.
+ \]
 Now take $m = r$, and pick $c$ large enough such that $c d_{is} \in \Z$ for all $i, s$, and finally $p = \max_{i, s} |c d_{is}|$.
+
 Thus, if we consider the $(m, p, c)$-set on generators $(x_m, \cdots, x_1)$ and $y_i$ as defined above, then we have $Ay = 0$ and hence $A(cy) = 0$. Since $cy$ is integral and lies in the $(m, p, c)$-set, we are done!
+\end{proof}
+
+We have thus proved Rado's theorem.
+\begin{thm}[Rado's theorem]\index{Rado's theorem}
 A matrix $A$ is partition regular iff it has the columns property.
+\end{thm}
+So we have a complete characterization of all partition regular matrices.
+
+Note that Rado's theorem reduces Schur's theorem, van der Waerden's theorem, finite sums theorem etc. to just checking if certain matrices have the columns property, which are rather straightforward computations.
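For a single equation $a_1 x_1 + \cdots + a_n x_n = 0$, the columns property degenerates to Rado's original condition: some non-empty subset of the coefficients sums to zero. The following Python sketch (an illustration of that special case, not code from the notes) performs the finite search:

```python
from itertools import combinations

def single_equation_partition_regular(coeffs):
    """Rado's condition for one equation a1*x1 + ... + an*xn = 0:
    partition regular iff some non-empty subset of the coefficients
    sums to zero (the columns property for a 1 x n matrix)."""
    return any(sum(c) == 0
               for r in range(1, len(coeffs) + 1)
               for c in combinations(coeffs, r))

assert single_equation_partition_regular([1, 1, -1])      # Schur: x + y = z
assert not single_equation_partition_regular([1, 1, -3])  # x + y = 3z fails
```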
+
+More interestingly, we can prove some less obvious ``theoretical'' results.
+
+\begin{cor}[Consistency theorem]\index{consistency theorem}
+ If $A$, $B$ are partition regular in independent variables, then
+ \[
+ \begin{pmatrix}
+ A & 0\\
+ 0 & B
+ \end{pmatrix}
+ \]
+ is partition regular. In other words, we can solve $Ax = 0$ and $By = 0$ simultaneously in the same colour class.
+\end{cor}
+
+\begin{proof}
+ The matrix
+ \[
+ \begin{pmatrix}
+ A & 0\\
+ 0 & B
+ \end{pmatrix}
+ \]
+ has the columns property if $A$ and $B$ do.
+\end{proof}
+In fact, much much more is true.
+
+\begin{cor}
+ Whenever $\N$ is finitely-coloured, one colour class contains solutions to \emph{all} partition regular systems!
+\end{cor}
+
+\begin{proof}
 Suppose not. Then we have $\N = D_1 \cup \cdots \cup D_k$ such that for each $D_i$, there is some partition regular matrix $A_i$ such that we cannot solve $A_i x = 0$ inside $D_i$. But this contradicts the fact that $\diag(A_1, A_2, \cdots, A_k)$ is partition regular (by applying the consistency theorem repeatedly).
+\end{proof}
+
Where did this whole idea of the $(m, p, c)$-sets come from? The original proof by Rado didn't use $(m, p, c)$-sets, and this idea appeared only a while later, when people tried to prove a more general result.
+
+In general, we call a set $D \subseteq \N$ \emph{partition regular} if we can solve any partition regular system in $D$. Then we know that $\N, 2\N$ are partition regular sets, but $2\N + 1$ is not (because we can't solve $x + y = z$, say). Then what Rado's theorem says is that whenever we finitely partition $\N$, then one piece of $\N$ is partition regular.
+
In the 1930s, Rado conjectured that there is nothing special about $\N$ to begin with --- whenever we break up a partition regular set, one of the pieces is partition regular. This conjecture was proved by Deuber in the 1970s, who introduced the idea of the $(m, p, c)$-set.
+
It is not hard to check that $D$ is partition regular iff $D$ contains an $(m, p, c)$-set for every $m, p, c$. Then Deuber's proof involves showing that for all $m, p, c, k \in \N$, there exist $n, q, d \in \N$ such that any $k$-colouring of an $(n, q, d)$-set contains a monochromatic $(m, p, c)$-set. The proof is quite similar to how we found $(m, p, c)$-sets in the naturals, but instead of using van der Waerden's theorem, we need the Hales--Jewett theorem.
+
+\separator
+
We end by mentioning an open problem in this area. Suppose $A$ is an $m \times n$ matrix that is not partition regular. That is, there is some $k$-colouring of $\N$ with no solution to $Ax = 0$ in a colour class. Can we find some bound $f(m, n)$, such that every such $A$ has a ``bad'' colouring with $k < f(m, n)$? This is an open problem, first conjectured by Rado, and we think the answer is yes.
+
+What do we actually know about this problem? The answer is trivially yes for $f(1, 2)$, as there aren't many matrices of size $1 \times 2$, up to rescaling. It is a non-trivial theorem that $f(1, 3)$ exists, and in fact $f(1, 3) \leq 24$. We don't know anything more than that.
+
+\section{Topological Dynamics in Ramsey Theory}
+Recall the finite sum theorem --- for any fixed $k$, whenever $\N$ is finitely coloured, we can find $x_1, \cdots, x_k$ such that $FS(x_1, x_2, \cdots, x_k)$ is monochromatic. One natural generalization is to ask the following question --- can we find an infinite set $A$ such that
+\[
+ FS(A) = \left\{\sum_{x \in B} x : B \subseteq A\text{ finite non-empty}\right\}
+\]
+is monochromatic? The first thought upon hearing this question is probably either ``this is so strong that there is no way it can be true'', or ``we can deduce it easily, perhaps via some compactness argument, from the finite sum theorem''.
+
+Both of these thoughts are wrong. It is true that it is always possible to find these $x_1, x_2, \cdots$, but the proof is hard. This is \emph{Hindman's theorem}.
+
The way we are going to approach this is via \emph{topological dynamics}, which is the title of this chapter. The idea is that we construct a metric space (and in particular a \emph{dynamical system}) out of the Ramsey-theoretic problem we are interested in, and then convert the Ramsey-theoretic problem into a question about topological properties of the space we have constructed.
+
+In the case of Hindman's theorem, it turns out the ``topological'' form is a general topological fact, which we can prove for a general dynamical system. What is the advantage of this approach? When we construct the dynamical system we are interested in, what we get is highly ``non-geometric''. It would be very difficult to think of it as an actual ``space''. It is just something that happens to satisfy the definition of a metric space. However, when we prove the general topological facts, we will think of our metric space as a genuine space like $\R^n$, and this makes it much easier to visualize what is going on in the proofs.
+
A less obvious benefit of this approach is that we will apply Tychonoff's theorem many times, and also use the fact that the possibly infinite intersection of nested compact sets is non-empty. If we don't formulate our problem in topological terms, then we might find ourselves re-proving these repeatedly, which further complicates the proof.
+
+We now begin our journey.
+
+\begin{defi}[Dynamical system]\index{dynamical system}
 A \emph{dynamical system} is a pair $(X, T)$ where $X$ is a compact metric space and $T$ is a homeomorphism on $X$.
+\end{defi}
+
+\begin{eg}
 $(X, T_\alpha)$, where $X = \R/\Z$ and $T_\alpha(x) = x + \alpha$ for some $\alpha \in \R$, is a dynamical system.
+\end{eg}
+In this course, this is not the dynamical system we are interested in. In fact, we are only ever going to be interested in one dynamical system.
+
We fix the number of colours $k \in \N$. Instead of looking at a particular colouring, we consider the space of all $k$-colourings. We let
+\[
+ \mathcal{C} = \{c: \Z \to [k]\} \cong [k]^{\Z}.
+\]
+We endow this with the product topology. Since $[k]$ has the discrete topology, our basic open sets have the form
+\[
+ U = \{c: \Z \to [k] : c(i_1) = c_1, \cdots, c(i_s) = c_s\}
+\]
+for some $i_1, \cdots, i_s \in \Z$ and $c_1, \cdots, c_s \in [k]$.
+
+Since $[k]$ is trivially compact, by Tychonoff's theorem, we know that $\mathcal{C}$ is compact. But we don't really need Tychonoff, since we know that $\mathcal{C}$ is metrizable, with metric
+\[
+ \rho(c_1, c_2) = \frac{1}{n + 1},
+\]
+where $n$ is the largest integer for which $c_1$ and $c_2$ agree on $[-n, n]$. This metric is easily seen to give rise to the same topology, and by hand, we can see this is sequentially compact.
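To get a feel for this metric, here is a Python sketch (illustrative only, not from the notes) that evaluates $\rho$ on colourings modelled as callables on $\Z$, necessarily truncated to a finite window, and with the convention $\rho = 1$ when the colourings already disagree at $0$:

```python
def rho(c1, c2, window=100):
    """Metric on colourings Z -> [k], truncated to a finite window:
    rho = 1/(n+1), where n is the largest radius with c1 = c2 on [-n, n].
    Convention for this sketch: rho = 1 if c1(0) != c2(0).
    'window' is an artificial cutoff so the loop terminates."""
    if c1(0) != c2(0):
        return 1.0
    n = 0
    while n < window and all(c1(i) == c2(i) for i in (-(n + 1), n + 1)):
        n += 1
    return 1 / (n + 1)

c = lambda i: i % 2          # the alternating 2-colouring
d = lambda i: (i + 1) % 2    # its shift: already disagrees at 0
assert rho(c, c) == 1 / 101  # agreement all the way to the cutoff
assert rho(c, d) == 1.0
```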
+
+We now have to define the operation on $\mathcal{C}$ we are interested in.
+\begin{defi}[Left shift]\index{left shift}\index{$\mathcal{L}$}
+ The \emph{left shift} operator $\mathcal{L}: \mathcal{C} \to \mathcal{C}$ is defined by
+ \[
+ (\mathcal{L}c)(i) = c(i + 1).
+ \]
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2,-1,1,2} {
+ \node [mblue, circ] at (\x, 0) {};
+ }
+ \node [mred, circ] at (0, 0) {};
+ \begin{scope}[shift={(7, 0)}]
+ \foreach \x in {-2,0,1,2} {
+ \node [mblue, circ] at (\x, 0) {};
+ }
+ \node [mred, circ] at (-1, 0) {};
+ \end{scope}
+ \draw [->] (3, 0) -- (4, 0);
+ \end{tikzpicture}
+\end{center}
+We observe that this map is invertible, with inverse given by the right shift. This is one good reason to work over $\Z$ instead of $\N$. Moreover, we see that $\mathcal{L}$ is nice and uniformly continuous, since if
+\[
+ \rho(c_1, c_2) \leq \frac{1}{n + 1},
+\]
+then
+\[
+ \rho(\mathcal{L} c_1, \mathcal{L} c_2) \leq \frac{1}{n}.
+\]
+Similarly, $\mathcal{L}^{-1}$ is uniformly continuous. So it follows that $(\mathcal{C}, \mathcal{L})$ is a dynamical system.
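The uniform continuity estimate says that shifting loses at most one unit of agreement radius. A Python sketch of this (illustrative only; colourings are modelled as callables on $\Z$ with an artificial cutoff):

```python
def shift(c):
    """Left shift: (Lc)(i) = c(i + 1)."""
    return lambda i, c=c: c(i + 1)

def agree_radius(c1, c2, cutoff=50):
    """Largest n <= cutoff with c1 = c2 on [-n, n], or -1 if they differ at 0."""
    n = -1
    while n < cutoff and all(c1(i) == c2(i) for i in range(-(n + 1), n + 2)):
        n += 1
    return n

# Two colourings agreeing exactly on [-10, 10]:
c1 = lambda i: 0
c2 = lambda i: 0 if abs(i) <= 10 else 1
assert agree_radius(c1, c2) == 10
# After shifting, the agreement radius drops by at most one, i.e.
# rho(L c1, L c2) <= 1/n whenever rho(c1, c2) <= 1/(n + 1).
assert agree_radius(shift(c1), shift(c2)) >= 9
```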
+
+Instead of proving Hindman's theorem directly, we begin by re-proving van der Waerden's theorem using this approach, to understand better how it works.
+
+\begin{thm}[Topological van der Waerden]\index{topological van der Waerden theorem}\index{van der Waerden theorem!topological}
 Let $(X, T)$ be a dynamical system. Then for every $\varepsilon > 0$ and $r \in \N$, we can find $x \in X$ and $n \in \N$ such that $\rho(x, T^{in} x) < \varepsilon$ for all $i = 1, \cdots, r$.
+\end{thm}
In words, this says that if we flow around $X$ under $T$, then some point must eventually return close to itself, and do so as many times as we wish, at regular intervals.
+
+To justify calling this the topological van der Waerden theorem, we should be able to easily deduce van der Waerden's theorem from this. It is not difficult to imagine this is vaguely related to van der Waerden in some sense, but it is not obvious how we can actually do the deduction. After all, topological van der Waerden theorem talks about the whole space of all colourings, and we do not get to control what $x$ we get back from it. What we want is to start with a single colouring $c$ and try to say something about this particular $c$.
+
+Now of course, we cannot restrict to the sub-system consisting only of $c$ itself, because, apart from it being really silly, the function $\mathcal{L}$ does not restrict to a map $\{c\} \to \{c\}$. We might think of considering the set of all translates of $c$. While this is indeed closed under $\mathcal{L}$, this is not a dynamical system, since it is not compact! The solution is to take the closure of that.
+
+\begin{defi}[Orbital closure]\index{orbital closure}
+ Let $(X, T)$ be a dynamical system, and $x \in X$. The \emph{orbital closure} $\bar{x}$ of $x$ is the set $\cl \{T^s x: s \in \Z\}$.
+\end{defi}
+Observe that $\bar{x}$ is a closed subset of a compact space, hence compact.
+
+\begin{prop}
 $(\bar{x}, T)$ is a dynamical system.
+\end{prop}
+
+\begin{proof}
+ It suffices to show that $\bar{x}$ is closed under $T$. If $y \in \bar{x}$, then we have some $s_i$ such that $T^{s_i}x \to y$. Since $T$ is continuous, we know $T^{s_i + 1}x \to T y$. So $T y \in \bar{x}$. Similarly, $T^{-1}\bar{x} = \bar{x}$.
+\end{proof}
+
+Once we notice that we can produce such a dynamical system from our favorite point, it is straightforward to prove van der Waerden.
+\begin{cor}[van der Waerden theorem]\index{van der Waerden theorem}
 Let $r, k \in \N$. Then whenever $\Z$ is $k$-coloured, there is a monochromatic arithmetic progression of length $r$.
+\end{cor}
+
+\begin{proof}
+ Let $c: \Z \to [k]$. Consider $(\bar{c}, \mathcal{L})$. By topological van der Waerden, we can find $x \in \bar{c}$ and $n \in \N$ such that $\rho(x, \mathcal{L}^{in} x) < 1$ for all $i = 1, \cdots, r$. In particular, we know that $x$ and $\mathcal{L}^{in} x$ agree at $0$. So we know that
+ \[
+ x(0) = \mathcal{L}^{in} x(0) = x(in)
+ \]
+ for all $i = 0, \cdots, r$.
+
 We are not quite done, because we just know that $x \in \bar{c}$, and not that $x = \mathcal{L}^s c$ for some $s$.
+
 But this is not bad. Since $x \in \bar{c}$, we can find some $s \in \Z$ such that
 \[
 \rho(\mathcal{L}^s c, x) \leq \frac{1}{rn + 1}.
 \]
 This means $x$ and $\mathcal{L}^s c$ agree on $[-rn, rn]$. So we know
 \[
 c(s) = c(s + n) = \cdots = c(s + rn).\qedhere
 \]
+ \]
+\end{proof}
+Since we will be looking at these $\bar{c}$ a lot, it is convenient to figure out what this $\bar{c}$ actually looks like. It turns out there is a reasonably good characterization of these orbital closures.
+
+Let's look at some specific examples. Consider $c$ given by
+\begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \foreach \x in {-1, 1} {
+ \node [mblue, circ] at (\x, 0) {};
+ }
+ \foreach \x in {-2,0,2} {
+ \node [mred, circ] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+\end{center}
+Then the orbital closure has just two points, $c$ and $\mathcal{L} c$.
+
+What if we instead had the following?
+\begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \foreach \x in {-3,-2,-1} {
+ \node [mblue, circ] at (\x, 0) {};
+ }
+ \foreach \x in {0,1,2,3} {
+ \node [mred, circ] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+\end{center}
+Then $\bar{c}$ contains all translates of these, but also
+\begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \foreach \x in {-3,-2,-1,0,1,2,3} {
+ \node [mblue, circ] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+ ,
+ \quad
+ \begin{tikzpicture}[scale=0.6]
+ \foreach \x in {-3,-2,-1,0,1,2,3} {
+ \node [mred, circ] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+\end{center}
+We define the following:
+\begin{defi}[$\Seq$]\index{$\Seq$}
+ Let $c: \Z \to [k]$. We define
+ \[
+ \Seq (c) = \{(c(i), \cdots, c(i + r)): i \in \Z, r \in \N\}.
+ \]
+\end{defi}
+It turns out these sequences tell us everything we want to know about orbital closures.
+
+\begin{prop}
+ We have $c' \in \bar{c}$ iff $\Seq(c') \subseteq \Seq(c)$.
+\end{prop}
+The proof is just following the definition and checking everything works.
+\begin{proof}
 We first prove $\Rightarrow$. Suppose $c' \in \bar{c}$. Let $(c'(i) , \cdots, c'(i + r - 1)) \in \Seq(c')$. Then we have $s \in \Z$ such that
 \[
 \rho(c', \mathcal{L}^s c) < \frac{1}{1 + \max(|i|, |i + r - 1|)},
 \]
 which implies
 \[
 (c(s + i), \cdots, c(s + i + r - 1)) = (c'(i), \cdots, c'(i + r - 1)).
 \]
+ So we are done.
+
+ For $\Leftarrow$, if $\Seq(c') \subseteq \Seq (c)$, then for all $n \in \N$, there exists $s_n \in \Z$ such that
+ \[
+ (c'(-n), \cdots, c'(n)) = (c(s_n - n), \cdots, c(s_n + n)).
+ \]
+ Then we have
+ \[
 \mathcal{L}^{s_n} c \to c'.
+ \]
+ So we have $c' \in \bar{c}$.
+\end{proof}
+
+We now return to talking about topological dynamics. We saw that in our last example, the orbital closure of $c$ has three ``kinds'' of things --- translates of $c$, and the all-red and all-blue ones. The all-red and all-blue ones are different, because they seem to have ``lost'' the information of $c$. In fact, the orbital closure of each of them is just that colouring itself. This is a \emph{strict} subset of $\bar{c}$. These are phenomena we don't want.
+\begin{defi}[Minimal dynamical system]\index{minimal dynamical system}\index{dynamical system!minimal}
+ We say $(X, T)$ is \emph{minimal} if $\bar{x} = X$ for all $x \in X$. We say $x \in X$ is \emph{minimal} if $(\bar{x}, T)$ is a minimal dynamical system.
+\end{defi}
+
+\begin{prop}
+ Every dynamical system $(X, T)$ has a minimal point.
+\end{prop}
+Thus, most of the time, when we want to prove something about dynamical systems, we can just pass to a minimal subsystem, and work with it.
+
+\begin{proof}
 Let $\mathcal{U} = \{\bar{x}: x \in X\}$. This is a family of closed sets, ordered by inclusion. We want to apply Zorn's lemma to obtain a minimal element. Consider a chain $S$ in $\mathcal{U}$. If
+ \[
+ \bar{x}_1 \supseteq \bar{x}_2 \supseteq \cdots \supseteq \bar{x}_n,
+ \]
+ then their intersection is $\bar{x}_n$, which is in particular non-empty. So any finite collection in $S$ has non-empty intersection. Since $X$ is compact, we know
+ \[
+ \bigcap_{\bar{x} \in S} \bar{x} \not= \emptyset.
+ \]
+ We pick
+ \[
 z \in \bigcap_{\bar{x} \in S} \bar{x}.
+ \]
+ Then we know that
+ \[
+ \bar{z} \subseteq \bar{x}
+ \]
+ for all $\bar{x} \in S$. So we know
+ \[
 \bar{z} \subseteq \bigcap_{\bar{x} \in S} \bar{x}.
+ \]
+ So by Zorn's lemma, we can find a minimal element (in both senses).
+\end{proof}
+
+\begin{eg}
+ Consider $c$ given by
+ \begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \foreach \x in {-3,-2,-1} {
+ \node [mblue, circ] at (\x, 0) {};
+ }
+ \foreach \x in {0,1,2,3} {
+ \node [mred, circ] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+ \end{center}
+ Then we saw that $\bar{c}$ contains all translates of these, and also
+ \begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \foreach \x in {-3,-2,-1,0,1,2,3} {
+ \node [mblue, circ] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+ ,
+ \quad
+ \begin{tikzpicture}[scale=0.6]
+ \foreach \x in {-3,-2,-1,0,1,2,3} {
+ \node [mred, circ] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+ \end{center}
+ Then this system is not minimal, since the orbital closure of the all-red one is a strict subsystem of $\bar{c}$.
+\end{eg}
+
+We are now going to derive some nice properties of minimal systems.
+\begin{lemma}
+ If $(X, T)$ is a minimal system, then for all $\varepsilon > 0$, there is some $m \in \N$ such that for all $x, y \in X$, we have
+ \[
+ \min_{|s| \leq m} \rho(T^s x, y) < \varepsilon.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Suppose not. Then there exists $\varepsilon > 0$ and points $x_i, y_i \in X$ such that
+ \[
+ \min_{|s| \leq i} \rho(T^s x_i, y_i) \geq \varepsilon.
+ \]
+ By compactness, we may pass to subsequences, and assume $x_i \to x$ and $y_i \to y$. By continuity, it follows that
+ \[
+ \rho(T^s x, y) \geq \varepsilon
+ \]
+ for all $s \in \Z$. This is a contradiction, since $\bar{x} = X$ by minimality.
+\end{proof}
+We now want to prove topological van der Waerden.
+\begin{thm}[Topological van der Waerden]\index{topological van der Waerden theorem}\index{van der Waerden theorem!topological}
 Let $(X, T)$ be a dynamical system. Then for every $\varepsilon > 0$ and $r \in \N$, we can find $x \in X$ and $n \in \N$ such that $\rho(x, T^{in} x) < \varepsilon$ for all $i = 1, \cdots, r$.
+\end{thm} % this needs some pictures
+
+\begin{proof}
 Without loss of generality, we may assume $(X, T)$ is a minimal system. We induct on $r$. If $r = 1$, we can just choose $y \in X$, and consider $T y, T^2 y, \cdots$. By compactness, there exist $s_1 < s_2 \in \N$ such that
+ \[
+ \rho(T^{s_1} y,T^{s_2} y) < \varepsilon,
+ \]
+ and then take $x = T^{s_1} y$ and $n = s_2 - s_1$.
+
 Now suppose $r > 1$, and that we have the result for $r - 1$ and all $\varepsilon > 0$.
+ \begin{claim}
+ For all $\varepsilon > 0$, there is some point $y \in X$ such that there is an $x \in X$ and $n \in \N$ such that
+ \[
 \rho(T^{in} x, y) < \varepsilon
+ \]
+ for all $1 \leq i \leq r$.
+ \end{claim}
 Note that this is a different statement from the theorem, because we are not starting at $x$. In fact, this follows easily from the induction hypothesis. Indeed, let $(x_0, n)$ be as guaranteed by the induction hypothesis for $r - 1$, so that $\rho(x_0, T^{in}x_0) < \varepsilon$ for all $1 \leq i \leq r - 1$. Then pick $y = x_0$ and $x = T^{-n}x_0$, so that $\rho(T^{in}x, y) = \rho(T^{(i - 1)n}x_0, x_0) < \varepsilon$ for all $1 \leq i \leq r$.
+
+ The next goal is to show that there is nothing special about this $y$.
+ \begin{claim}
 For all $\varepsilon > 0$ and for all $z \in X$, there exists $x_z \in X$ and $n \in \N$ for which $\rho(T^{in} x_z, z) < \varepsilon$ for all $1 \leq i \leq r$.
+ \end{claim}
+ The idea is just to shift the picture so that $y$ gets close to $z$, and see where we send $x$ to. We will use continuity to make sure we don't get too far away.
+
+ We choose $m$ as in the previous lemma for $\frac{\varepsilon}{2}$. Since $T^{-m}, T^{-m + 1}, \cdots, T^m$ are all uniformly continuous, we can choose $\varepsilon'$ such that $\rho(a, b) < \varepsilon'$ implies $\rho(T^s a, T^s b) < \frac{\varepsilon}{2}$ for all $|s| \leq m$.
+
+ Given $z \in X$, we obtain $x$ and $y$ by applying our first claim to $\varepsilon'$. Then we can find $s \in \Z$ with $|s| \leq m$ such that $\rho(T^s y, z) < \frac{\varepsilon}{2}$. Consider $x_z = T^s x$. Then
+ \begin{align*}
+ \rho(T^{in} x_z, z) &\leq \rho(T^{in} x_z, T^s y) + \rho(T^s y, z) \\
+ &\leq \rho(T^s(T^{in}x), T^s y) + \frac{\varepsilon}{2}\\
+ &\leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2}\\
 &= \varepsilon.
+ \end{align*}
+
+ \begin{claim}
+ For all $\varepsilon > 0$ and $z \in X$, there exists $x \in X$, $n \in \N$ and $\varepsilon' > 0$ such that $T^{in}(B(x, \varepsilon')) \subseteq B(z, \varepsilon)$ for all $1 \leq i \leq r$.
+ \end{claim}
+ We choose $\varepsilon'$ by continuity, using the previous claim.
+
+ \separator
+
+ The idea is that we repeatedly apply the final claim, and then as we keep moving back, eventually two points will be next to each other.
+
+ We pick $z_0 \in X$ and set $\varepsilon_0 = \frac{\varepsilon}{2}$. By the final claim, there exists $z_1 \in X$ and some $n_1 \in \N$ such that $T^{in_1}(B(z_1, \varepsilon_1)) \subseteq B(z_0, \varepsilon_0)$ for some $0 < \varepsilon_1 \leq \varepsilon_0$ and all $1 \leq i \leq r$.
+
+ Inductively, we find $z_s \in X$, $n_s \in \N$ and some $0 < \varepsilon_s \leq \varepsilon_{s - 1}$ such that
+ \[
+ T^{in_s}(B(z_s, \varepsilon_s)) \subseteq B(z_{s - 1}, \varepsilon_{s - 1})
+ \]
+ for all $1 \leq i \leq r$.
+
 By compactness, $(z_s)$ has a convergent subsequence, and in particular, there exist $i < j \in \N$ such that $\rho(z_i, z_j) < \frac{\varepsilon}{2}$.
+
+ Now take $x = z_j$, and
+ \[
 n = n_j + n_{j - 1} + \cdots + n_{i + 1}.
+ \]
+ Then
+ \[
 T^{\ell n} (B(x, \varepsilon_j)) \subseteq B(z_i, \varepsilon_i)
+ \]
+ for all $1 \leq \ell \leq r$. But we know
+ \[
+ \rho(z_i, z_j) \leq \frac{\varepsilon}{2},
+ \]
+ and
+ \[
+ \varepsilon_i \leq \varepsilon_0 \leq \frac{\varepsilon}{2}.
+ \]
+ So we have
+ \[
 \rho(T^{\ell n} x, x) < \varepsilon
+ \]
+ for all $1 \leq \ell \leq r$.
+\end{proof}
+
In the remainder of the chapter, we will actually prove Hindman's theorem. We will again first prove a ``topological'' version, but unlike the topological van der Waerden theorem, it will look nothing like the actual Hindman's theorem. So we still need to do some work to deduce the actual version from the topological version.
+
+To state topological Hindman, we need the following definition:
+\begin{defi}[Proximal points]\index{proximal points}
+ We say $x, y \in X$ are \emph{proximal} if
+ \[
+ \inf_{s \in \Z} \rho(T^s x, T^s y) = 0.
+ \]
+\end{defi}
+
+\begin{eg}
+ Two colourings $c_1, c_2 \in \mathcal{C}$ are proximal iff there are arbitrarily large intervals where they agree.
+\end{eg}
+
+The topological form of Hindman's theorem says the following:
+\begin{thm}[Topological Hindman's theorem]\index{topological Hindman's theorem}\index{Hindman's theorem!topological}
 Let $(X, T)$ be a dynamical system, and suppose $X = \bar{x}$ for some $x \in X$. Then for any minimal subsystem $Y \subseteq X$, there is some $y \in Y$ such that $x$ and $y$ are proximal.
+\end{thm}
+We will first show how this implies Hindman's theorem, and then prove this.
+
+To do the translation, we need to, of course, translate concepts in dynamical systems to concepts about colourings. We previously showed that for colourings $c, c'$, we have $c' \in \bar{c}$ iff $\Seq(c') \subseteq \Seq(c)$. The next thing to figure out is when $c: \Z \to [k]$ is minimal. By definition, this means for any $x \in \bar{c}$, we have $\bar{x} = \bar{c}$. Equivalently, we have $\Seq(x) = \Seq(c)$.
+
+We introduce some notation.
+\begin{notation}
 For an interval $I = \{a, a + 1, \cdots, b\} \subseteq \Z$, we write $c(I)$ for the sequence $(c(a), c(a + 1), \cdots, c(b))$.
+\end{notation}
+Clearly, we have
+\[
+ \Seq(c) = \{c(I): I \subseteq \Z\text{ is an interval}\}.
+\]
+We say for intervals $I_1, I_2 \subseteq \Z$, we have $c(I_1) \preccurlyeq c(I_2)$, if $c(I_1)$ is a subsequence of $c(I_2)$.
+
+\begin{defi}[Bounded gaps property]\index{bounded gaps property}
+ We say a colouring $c: \Z \to [k]$ has the \emph{bounded gaps property} if for each interval $I \subseteq \Z$, there exists some $M > 0$ such that $c(I) \preccurlyeq c(\mathcal{U})$ for every interval $\mathcal{U} \subseteq \Z$ of length at least $M$.
+\end{defi}
+For example, every periodic colouring has the bounded gaps property.
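The periodic case can be checked concretely: for a colouring of period $p$, every interval of length $|I| + p - 1$ contains a copy of $c(I)$. A Python sketch (illustrative only; $\preccurlyeq$ is modelled as contiguous occurrence, and the check runs over a finite range of windows):

```python
def occurs(pattern, word):
    """True if 'pattern' occurs as a contiguous block of 'word'."""
    return any(word[i:i + len(pattern)] == pattern
               for i in range(len(word) - len(pattern) + 1))

p = [0, 1, 1]                            # one period of a 3-periodic colouring
c = lambda i: p[i % 3]
pattern = [c(i) for i in range(2, 7)]    # c(I) for the interval I = [2, 6]
# Bounded gaps: M = |I| + period - 1 works, since any window of that
# length contains a start position congruent to 2 mod 3.
M = len(pattern) + len(p) - 1
assert all(occurs(pattern, [c(i) for i in range(t, t + M)])
           for t in range(-30, 30))
```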
+
+It turns out this is precisely what we need to characterize minimal colourings.
+
+\begin{prop}
+ A colouring $c: \Z \to [k]$ is minimal iff $c$ has the bounded gaps property.
+\end{prop}
+
+\begin{proof}
 Suppose $c$ has the bounded gaps property. We want to show that for all $x \in \bar{c}$, we have $\Seq(x) = \Seq(c)$. We certainly have one containment $\Seq(x) \subseteq \Seq(c)$. Consider any interval $I \subseteq \Z$. We want to show that $c(I) \in \Seq(x)$. By the bounded gaps property, there is some $M$ such that any interval $\mathcal{U} \subseteq \Z$ of length at least $M$ satisfies $c(I) \preccurlyeq c(\mathcal{U})$. But since $x \in \bar{c}$, we know that
+ \[
+ x([0, \cdots, M]) = c([t, t+ 1, \cdots, t + M])
+ \]
+ for some $t \in \Z$. So $c(I) \preccurlyeq x([0, \cdots, M])$, and this implies $\Seq(c) \subseteq \Seq (x)$.
+
+ For the other direction, suppose $c$ does not have the bounded gaps property. So there is some bad interval $I \subseteq \Z$. Then for any $n \in \N$, there is some $t_n$ such that
+ \[
+ c(I) \not \preccurlyeq c([t_n - n, t_n - n + 1, \cdots, t_n + n]).
+ \]
+ Now consider $x_n = \mathcal{L}^{t_n}(c)$. By passing to a subsequence if necessary, we may assume that $x_n \to x$. Clearly, $c(I) \not\in \Seq(x)$. So we have found something in $\Seq(c) \setminus \Seq(x)$. So $c \not \in \bar{x}$.
+\end{proof}
+It turns out once we have equated minimality with the bounded gaps property, it is not hard to deduce Hindman's theorem.
+\begin{thm}[Hindman's theorem]\index{Hindman's theorem}
+ If $c: \N \to [k]$ is a $k$-colouring, then there exists an infinite $A \subseteq \N$ such that $FS(A)$ is monochromatic.
+\end{thm}
+
+\begin{proof}
+ We extend the colouring $c$ to a colouring of $\Z$ by defining $x: \Z \to [k + 1]$ with
+ \[
+ x(i) = x(-i) = c(i)
+ \]
 if $i \not= 0$, and $x(0) = k + 1$. Then it suffices to find an infinite $A \subseteq \Z$ such that $FS(A)$ is monochromatic with respect to $x$.
+
+ We apply topological Hindman to $(\bar{x}, \mathcal{L})$ to find a minimal colouring $y$ such that $x$ and $y$ are proximal. Then either
+ \[
+ \inf_{n \in \N} \rho(\mathcal{L}^n x, \mathcal{L}^n y) = 0\text{ or } \inf_{n \in \N} \rho(\mathcal{L}^{-n} x, \mathcal{L}^{-n} y) = 0.
+ \]
 We wlog assume it is the first case, i.e.\ $x$ and $y$ are positively proximal. We now build up this infinite set $A$ we want one element at a time.
+
+ Let the colour of $y(0)$ be red. We shall in fact pick $0 < a_1 < a_2 < \cdots$ inductively so that $x(t) = y(t) = \text{red}$ for all $t \in FS(a_1, a_2, \cdots)$.
+
 By the bounded gaps property of $y$, there exists some $M_1$ such that any interval $I \subseteq \Z$ of length at least $M_1$ contains $y([0])$, i.e.\ $y([0]) \preccurlyeq y(I)$.
+
 Since $x$ and $y$ are positively proximal, there exists an interval $I \subseteq \Z$ with $|I| \geq M_1$ and $\min(I) > 0$ such that $x(I) = y(I)$. Then we pick $a_1 \in I$ such that $x(a_1) = y(a_1) = y(0)$.
+
+ Now we just have to keep doing this. Suppose we have found $a_1 < \cdots < a_n$ as required. Consider the interval $J = [0, \cdots, a_1 + a_2 + \cdots + a_n]$. Again by the bounded gaps property, there exists some $M_{n + 1}$ such that if $I \subseteq \Z$ has $|I| \geq M_{n + 1}$, then $y(J) \preccurlyeq y(I)$.
+
+ We choose an interval $I \subseteq \Z$ such that
+ \begin{enumerate}
+ \item $x(I) = y(I)$
+ \item $|I| \geq M_{n + 1}$
+ \item $\min I > \sum_{i = 1}^n a_i$.
+ \end{enumerate}
+ Then we know that
+ \[
+ y([0, \cdots, a_1 + \cdots + a_n]) \preccurlyeq y(I).
+ \]
 Let $a_{n + 1}$ denote the position at which $y([0, \cdots, a_1 + \cdots + a_n])$ occurs in $y(I)$. It is then immediate that $x(z) = y(z) = \mathrm{red}$ for all $z \in FS(a_1, \cdots, a_{n + 1})$, as any element in $FS(a_1, \cdots, a_{n + 1})$ is either
+ \begin{itemize}
+ \item $t'$ for some $t' \in FS(a_1, \ldots, a_n)$;
+ \item $a_{n + 1}$; or
+ \item $a_{n + 1} + t'$ for some $t' \in FS(a_1, \cdots, a_n)$,
+ \end{itemize}
+ and these are all red by construction.
+
+ Then $A = \{a_1, a_2, \cdots\}$ is the required set.
+\end{proof}
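The case analysis at the end of the proof is really the recursion $FS(a_1, \cdots, a_{n + 1}) = FS(a_1, \cdots, a_n) \cup \{a_{n + 1}\} \cup (a_{n + 1} + FS(a_1, \cdots, a_n))$, which can be checked mechanically. A Python sketch (illustrative, not part of the notes):

```python
from itertools import combinations

def FS(xs):
    """Non-empty subset sums."""
    return {sum(c) for r in range(1, len(xs) + 1) for c in combinations(xs, r)}

def fs_step(old, a):
    """FS(a_1, ..., a_n, a) from FS(a_1, ..., a_n): a subset either omits a,
    is {a} itself, or is a together with a non-empty subset of the rest."""
    return old | {a} | {a + t for t in old}

xs, a = [1, 4, 9], 20
assert FS(xs + [a]) == fs_step(FS(xs), a)
```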
+The heart of the proof is that we used topological Hindman to obtain a minimal proximal point, and this is why we can iteratively get the $a_n$.
+
+It remains to prove topological Hindman.
+\begin{thm}[Topological Hindman's theorem]\index{topological Hindman's theorem}
+ If $(X, T)$ is a dynamical system, $X = \bar{x}$ and $Y \subseteq X$ is minimal, then there exists $y \in Y$ such that $x$ and $y$ are proximal.
+\end{thm}
+
+This is a long proof, and we will leave out proofs of basic topological facts.
+\begin{proof}
+ If $X$ is a compact metric space, then $X^X = \{f: X \to X\}$ is compact under the product topology by Tychonoff. The basic open sets are of the form
+ \[
 \{f: X \to X \mid f(x_i) \in U_i\text{ for }i = 1, \cdots, k\}
+ \]
+ for some fixed $x_i \in X$ and $U_i \subseteq X$ open.
+
+ Now any function $g: X \to X$ can act on $X^X$ in two obvious ways ---
+ \begin{itemize}
 \item Post-composition $\mathcal{L}_g: X^X \to X^X$ given by $\mathcal{L}_g(f) = g \circ f$.
 \item Pre-composition $\mathcal{R}_g: X^X \to X^X$ given by $\mathcal{R}_g(f) = f \circ g$.
+ \end{itemize}
+ The useful thing to observe is that $\mathcal{R}_g$ is always continuous, while $\mathcal{L}_g$ is continuous if $g$ is continuous.
+
+ Now if $(X, T)$ is a dynamical system, we let
+ \[
+ E_T = \cl \{T^s: s \in \Z \} \subseteq X^X.
+ \]
+ The idea is that we look at what is going on in this space instead. Let's note a few things about this subspace $E_T$.
+ \begin{itemize}
+ \item $E_T$ is compact, as it is a closed subset of a compact set.
 \item $f \in E_T$ iff for all $\varepsilon > 0$ and points $x_1, \cdots, x_k \in X$, there exists $s \in \Z$ such that $\rho(f(x_i), T^s(x_i)) < \varepsilon$ for all $i = 1, \cdots, k$.
+ \item $E_T$ is closed under composition.
+ \end{itemize}
+ So in fact, $E_T$ is a compact semi-group inside $X^X$. It turns out every proof of Hindman's theorem involves working with a semi-group-like object, and then trying to find an idempotent element.
+\end{proof}
+
+\begin{thm}[Idempotent theorem]\index{idempotent theorem}
+ If $E \subseteq X^X$ is a compact semi-group, then there exists $g \in E$ such that $g^2 = g$.
+\end{thm}
+This is a strange thing to try to prove, but it turns out that once we have come up with this statement, it is not hard to prove. The hard part is figuring out that this is the thing we want to prove.
+
+\begin{proof}
+ Let $\mathcal{F}$ denote the collection of all compact sub-semi-groups of $E$. Let $A \in \mathcal{F}$ be minimal with respect to inclusion. To see $A$ exists, it suffices (by Zorn) to check that any chain $S$ in $\mathcal{F}$ has a lower bound in $\mathcal{F}$, but this is immediate, since the intersection of nested compact sets is non-empty, and an intersection of nested sub-semi-groups is again a sub-semi-group.
+
+ \begin{claim}
+ $Ag = A$ for all $g \in A$.
+ \end{claim}
+ We first observe that $Ag$ is compact since $\mathcal{R}_g$ is continuous. Also, $Ag$ is a semigroup, since if $f_1 g, f_2 g \in Ag$, then
+ \[
+ f_1 g f_2 g = (f_1 g f_2) g\in Ag.
+ \]
+ Finally, since $g \in A$, we have $Ag \subseteq A$. So by minimality, we have $Ag = A$.
+
+ Now let
+ \[
+ B_g = \{f \in A: fg = g\}.
+ \]
+ \begin{claim}
+ $B_g = A$ for all $g \in A$.
+ \end{claim}
+ Note that $B_g$ is non-empty, because $Ag = A$. Moreover, $B_g$ is closed, as it is the inverse image of $\{g\}$ under the continuous map $\mathcal{R}_g$. So $B_g$ is compact. Finally, it is clear that $B_g$ is a semi-group. So by minimality, we have $B_g = A$.
+
+ So pick any $g \in A$, and then $g^2 = g$.
+\end{proof}
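The idempotent theorem has a finite toy model: in any finite semigroup, some power of every element is idempotent, and here finiteness plays the role that compactness plays above. The sketch below (hypothetical helper names, not from the course) finds such a power by brute force, using maps on a finite set under composition as the semigroup.

```python
def find_idempotent(a, op):
    """Return an idempotent power of `a` in a finite semigroup with
    operation `op`. The powers of `a` form a finite set, and some power
    of `a` is idempotent, so the loop terminates."""
    p = a
    while op(p, p) != p:
        p = op(p, a)
    return p

# The semigroup of maps {0, 1, 2} -> {0, 1, 2} under composition,
# with a map represented as its tuple of images.
def compose(f, g):
    return tuple(f[g[i]] for i in range(len(g)))
```

For instance, for the $3$-cycle `a = (1, 2, 0)` the idempotent found is the identity map `(0, 1, 2)`, while for `(1, 2, 2)` it is the constant map `(2, 2, 2)`.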
+\begin{proof}[Proof of topological Hindman (continued)]
+ We are actually going to work in a slightly smaller compact semigroup than $E_T$. We let $\mathcal{F} \subseteq E_T$ be defined by
+ \[
+ \mathcal{F} = \{f \in E_T: f(x) \in Y\}.
+ \]
+ \begin{claim}
+ $\mathcal{F} \subseteq X^X$ is a compact semigroup.
+ \end{claim}
+ Before we prove the claim, we see why it makes sense to consider this $\mathcal{F}$. Suppose the claim is true. Then we can apply the idempotent theorem to get $g \in \mathcal{F}$ such that $g^2 = g$. We now check that $x$ and $g(x)$ are proximal.
+
+ Since $g \in \mathcal{F} \subseteq E_T$, we know for all $\varepsilon > 0$, there exists $s \in \Z$ such that
+ \[
+ \rho(g(x), T^s(x)) < \frac{\varepsilon}{2},\quad \rho(g(g(x)), T^s(g(x))) < \frac{\varepsilon}{2}.
+ \]
+ But we are done, since $g(x) = g(g(x))$, and then we conclude from the triangle inequality that
+ \[
+ \rho(T^s(x), T^s(g(x))) < \varepsilon.
+ \]
+ So $x$ and $g(x) \in Y$ are proximal.
+
+ It now remains to prove the final claim. We first note that $\mathcal{F}$ is non-empty. Indeed, pick any $y \in Y$. Since $X = \bar{x}$, there exists some sequence $T^{n_i}(x) \to y$. Now by compactness of $E_T$, we can find some $f \in E_T$ such that $(T^{n_i})_{i \geq 0}$ clusters at $f$, i.e.\ every open neighbourhood of $f$ contains infinitely many $T^{n_i}$. Then since $X$ is Hausdorff, it follows that $f(x) = y$. So $f \in \mathcal{F}$.
+
+ To show $\mathcal{F}$ is compact, it suffices to show that it is closed. But since $Y = \bar{y}$ for all $y \in Y$ by minimality, we know $Y$ is in particular closed. So $\mathcal{F}$, being the preimage of $Y$ under the (continuous) evaluation map $f \mapsto f(x)$, is closed in the product topology.
+
+ Finally, we have to show that $\mathcal{F}$ is closed under composition, so that it is a semi-group. To show this, it suffices to show that any map $f \in E_T$ sends $Y$ to $Y$. Indeed, by minimality of $Y$, this is true for $T^s$ for $s \in \Z$, and then we are done since $Y$ is closed.
+\end{proof}
+The actual hard part of the proof is deciding we should prove the idempotent theorem. It is not an obvious thing to try!
+
+\section{Sums and products*}
+The goal of this final (non-examinable) chapter is to prove the following theorem:
+\begin{thm}[Moreira, 2016]
+ If $\N$ is $k$-coloured, then there exist infinitely many pairs $x, y$ such that $\{x, x + y, xy\}$ is monochromatic.
+\end{thm}
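Although the theorem is hard, its statement is easy to explore computationally. Here is a small brute-force sketch (a hypothetical helper, not part of the proof) that, given a colouring of an initial segment of $\N$, searches for a monochromatic $\{x, x + y, xy\}$.

```python
def find_mono_sum_product(colour, n):
    """Search for x, y with x + y <= n and x * y <= n such that
    x, x + y and x * y all receive the same colour."""
    for x in range(1, n + 1):
        for y in range(1, n + 1):
            if x + y <= n and x * y <= n and \
                    colour(x) == colour(x + y) == colour(x * y):
                return x, y
    return None
```

For the $2$-colouring of $\N$ by parity, the first solution found is $x = y = 2$, giving the monochromatic set $\{2, 4, 4\}$.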
+This is a rather hard theorem, but the proof is completely elementary, and just involves a very clever use of van der Waerden's theorem. One of the main ideas is to consider \emph{syndetic} sets.
+\begin{defi}[Syndetic set]\index{syndetic set}
+ We say $A = \{a_1 < a_2 < \cdots\} \subseteq \N$ is \emph{syndetic} if it has bounded gaps, i.e.\ there exists some $b$ such that $a_{i + 1} - a_i < b$ for all $i$.
+\end{defi}
+
+It turns out these sets contain a lot of structure. What happens when we finitely colour a syndetic set? Are we guaranteed to have a monochromatic syndetic set? Let's take the simplest syndetic set --- $\N$. Let's say we $2$-colour $\N$. Does one of the classes have to be syndetic? Of course not. For example, we can look at this:
+\begin{center}
+ \begin{tikzpicture}[xscale=0.5]
+ \foreach \x in {0, 3, 4, 5, 10, 11, 12, 13,14} {
+ \node [circ, mred] at (\x, 0) {};
+ }
+ \foreach \x in {1,2,6,7,8,9,15,16,17,18,19,20} {
+ \node [circ, mblue] at (\x, 0) {};
+ }
+ \node at (21.3, 0) {$\cdots$};
+ \end{tikzpicture}
+\end{center}
+But this set obviously has some very nice structure. So we are motivated to consider the following definition:
+\begin{defi}[Piecewise syndetic set]\index{piecewise syndetic set}\index{syndetic!piecewise}
+ We say $A \subseteq \N$ is \emph{piecewise syndetic} if there exists $b > 0$ such that $A$ has gaps bounded by $b$ on arbitrarily long intervals.
+
+ More explicitly, if we again write $A = \{a_1 < a_2 < \cdots\}$, then there exists $b$ such that for all $N > 0$, there is $i \in \N$ such that $a_{k + 1} - a_k < b$ for $k = i, \cdots, i + N$.
+\end{defi}
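To make the definition concrete, here is a small computational sketch (hypothetical helper names) that measures, for a finite initial segment of $A$, the longest interval covered by gaps smaller than $b$. The set of blocks $\{n^2, n^2 + 1, \cdots, n^2 + n - 1\}$ illustrates the point of the definition: it has gaps $\leq 1$ on arbitrarily long intervals, hence is piecewise syndetic, but the gaps between blocks grow, so it is not syndetic.

```python
def longest_small_gap_run(elements, b):
    """Given an increasing list of elements of A, return the length of
    the longest interval covered by consecutive gaps < b."""
    best = run = 0
    for prev, cur in zip(elements, elements[1:]):
        if cur - prev < b:
            run += cur - prev
        else:
            run = 0
        best = max(best, run)
    return best

# Blocks of n consecutive integers starting at n^2, for n = 1, ..., 5.
A = sorted(n * n + i for n in range(1, 6) for i in range(n))
```

On this prefix the longest run with gaps $< 2$ has length $4$ (the block starting at $25$), and the runs keep growing as more blocks are included.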
+We begin with two propositions which are standard results.
+
+\begin{prop}
+ If $A$ is piecewise syndetic and $A = X \cup Y$, then one of $X$ and $Y$ is piecewise syndetic.
+\end{prop}
+
+\begin{proof}
+ We fix $b > 0$ and the sequence of intervals $I_1, I_2, \cdots$ with $|I_n| \geq n$ such that $A$ has gaps $\leq b$ in each $I_n$. We may, of course, assume wlog that the $I_n$ are disjoint.
+
+ Each element of $A$ in $I_n$ is either in $X$ or not. We let $t_n$ denote the largest number of consecutive elements of $A$ in $I_n$ that lie in $X$. If $t_n$ is unbounded, then $X$ is piecewise syndetic. If $t_n \leq K$ for all $n$, then $Y$ is piecewise syndetic, with gaps bounded by $(K + 1)b$.
+\end{proof}
+Of course, this can be iterated to any finite partition. There is nothing clever so far.
+
+Our next proposition is going to be a re-statement of van der Waerden's theorem in the world of piecewise syndetic sets.
+\begin{prop}
+ Let $A \subseteq \N$ be piecewise syndetic. Then for all $m \in \N$, there exists $d \in \N$ such that the set
+ \[
+ A^* = \{ x\in \N: x, x + d, \cdots, x + md \in A\}
+ \]
+ is piecewise syndetic.
+\end{prop}
+
+\begin{proof}
+ Let's in fact assume that $A$ is syndetic, with gaps bounded by $b$. The proof for a piecewise syndetic set is similar, but we have to pass to longer and longer intervals, and then piece them back together. % fix this
+
+ We want to apply van der Waerden's theorem. By definition,
+ \[
+ \N = A \cup (A + 1) \cup \cdots \cup (A + b).
+ \]
+ Let $c: \N \to \{0, \cdots, b\}$ be given by
+ \[
+ c(x) = \min \{i: x \in A + i\}.
+ \]
+ Then by van der Waerden, we can find some $a_1, d_1$ such that
+ \[
+ a_1, a_1 + d_1, \cdots, a_1 + m d_1 \in A + i.
+ \]
+ Of course, this is equivalent to
+ \[
+ (a_1 - i), (a_1 - i) + d_1, \cdots, (a_1 - i) + m d_1 \in A.
+ \]
+ So we may wlog assume $i = 0$.
+
+ But van der Waerden doesn't require the whole of $\N$. We can split $\N$ into huge blocks of size $W(m + 1, b + 1)$, and then we run the same argument in each of these blocks. So we find $(a_1, d_1), (a_2, d_2), \cdots$ such that
+ \[
+ a_i, a_i + d_i, \cdots, a_i + m d_i \in A.
+ \]
+ Moreover, $\tilde{A}= \{a_1, a_2, \cdots\}$ is syndetic with gaps bounded by $W(m + 1, b + 1)$. But we need a fixed $d$ that works for everything.
+
+ We now observe that by construction, we must have $d_i \leq W(m + 1, b + 1)$ for all $i$. So this induces a finite colouring on the $a_i$, where the colour of $a_i$ is just $d_i$. Now we are done by the previous proposition, which tells us one of the partitions must be piecewise syndetic.
+\end{proof}
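Since everything above leans on van der Waerden's theorem, it may be reassuring to check a tiny case of it computationally. The brute-force sketch below (hypothetical helper names) computes the least $n$ such that every colouring of $\{1, \cdots, n\}$ with a given number of colours contains a monochromatic arithmetic progression of a given length; the classical value $W(3, 2) = 9$ is small enough to verify directly.

```python
from itertools import product

def has_mono_ap(colouring, length):
    """Does the colouring of {1, ..., n} (a tuple indexed from 0)
    contain a monochromatic AP of the given length?"""
    n = len(colouring)
    for a in range(1, n + 1):
        for d in range(1, n):
            if a + (length - 1) * d > n:
                break
            if len({colouring[a + i * d - 1] for i in range(length)}) == 1:
                return True
    return False

def vdw(length, colours):
    """Least n such that every `colours`-colouring of {1, ..., n}
    contains a monochromatic AP of the given length."""
    n = length
    while not all(has_mono_ap(c, length)
                  for c in product(range(colours), repeat=n)):
        n += 1
    return n
```

The colouring `(0, 0, 1, 1, 0, 0, 1, 1)` of $\{1, \cdots, 8\}$ witnesses that $8$ is not enough for $3$-term progressions with $2$ colours.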
+That was the preliminary work we had to do to prove the theorem. The proof will look very magical, and things only start to make sense as we reach the end. This is not a deficiency of the proof. There is some reason this problem was open for more than 40 years.
+
+The original proof (\href{https://arxiv.org/abs/1605.01469}{arxiv:1605.01469}) is a bit less magical, and used concepts from topological dynamics. Interested people can find the original paper to read. However, we will sacrifice ``deep understanding'' for having a more elementary proof that just uses van der Waerden.
+
+\begin{proof}[Proof of theorem]
+ Let $\N = C_1 \cup \cdots \cup C_r$. We build the following sequences:
+ \begin{enumerate}
+ \item $y_1, y_2, \cdots \in \N$;
+ \item $B_0, B_1, \cdots$ and $D_1, D_2, \cdots$ which are piecewise syndetic sets;
+ \item $t_0, t_1, \cdots \in [r]$ which are colours.
+ \end{enumerate}
+ We will pick these such that $B_i \subseteq C_{t_i}$.
+
+ We first observe that one of the colour classes $C_i$ is piecewise syndetic. We pick $t_0 \in [r]$ such that $C_{t_0}$ is piecewise syndetic, and pick $B_0 = C_{t_0}$. We want to apply the previous proposition to obtain $D_1$ from $B_0$. We pick $m = 2$, and then we can find a $y_1$ such that
+ \[
+ D_1 = \{z \mid z, z + y_1 \in B_0\}
+ \]
+ is piecewise syndetic.
+
+ We now want to get $B_1$. We notice that $y_1 D_1$ is also piecewise syndetic, just with a bigger gap. So there is a colour class $t_1 \in [r]$ such that $y_1 D_1 \cap C_{t_1}$ is piecewise syndetic, and we let $B_1 = y_1 D_1 \cap C_{t_1}$.
+
+ In general, having constructed $y_1, \cdots, y_{i - 1}$; $B_0, \cdots, B_{i - 1}$; $D_1, \cdots, D_{i - 1}$ and $t_0, \ldots, t_{i - 1}$, we proceed in the following magical way:
+
+ We apply the previous proposition with the really large $m = y_1^2 y_2^2 \cdots y_{i - 1}^2$ to find $y_i \in \N$ such that the set
+ \[
+ D_i = \{z\mid z, z + y_i, \cdots, z + (y_1^2 y_2^2 \cdots y_{i - 1}^2) y_i \in B_{i - 1} \}
+ \]
+ is piecewise syndetic. The main thing is that we are not using all points in this progression, but are just using some important points in there. In particular, we use the fact that
+ \[
+ z, z + y_i, z + (y_j^2 \cdots y_{i - 1}^2)y_i \in B_{i - 1}
+ \]
+ for all $1 \leq j \leq i - 1$. It turns out these squares are important.
+
+ We have now picked $y_i$ and $D_i$, but we still have to pick $B_i$. But this is just the same as what we did. We know that $y_i D_i$ is piecewise syndetic. So we know there is some $t_i \in [r]$ such that $B_i = y_i D_i \cap C_{t_i}$ is piecewise syndetic.
+
+ Let's write down some properties of these $B_i$'s. Clearly, $B_i \subseteq C_{t_i}$, but we can say much more. We know that
+ \[
+ B_i \subseteq y_i B_{i - 1} \subseteq \cdots \subseteq y_{i} y_{i - 1}\cdots y_{j + 1} B_j.
+ \]
+ Of course, by pigeonhole, there exist some $j < i$ such that $t_j = t_i$. We set
+ \[
+ y = y_i y_{i - 1} \cdots y_{j + 1}.
+ \]
+ Let's choose any $z \in B_i$. Then we can find some $x \in B_j$ such that
+ \[
+ z = xy.
+ \]
+ We want to check that this gives us what we want. By construction, we have
+ \[
+ x \in B_j \subseteq C_{t_j} = C_{t_i},\quad xy = z \in B_i \subseteq C_{t_i}.
+ \]
+ So they have the same colour.
+
+ How do we figure out the colour of $x + y$? Consider
+ \[
+ y(x + y) = z + y^2 \in y_i D_i + y^2.
+ \]
+ By definition, if $a \in D_i$, then we know
+ \[
+ a + (y_{j + 1}^2 \cdots y_{i - 1}^2)y_i \in B_{i - 1}.
+ \]
+ So we have
+ \[
+ D_i \subseteq B_{i - 1} - (y_{j + 1}^2 \cdots y_{i - 1}^2)y_i \subseteq y_{j + 1} \cdots y_{i - 1} B_j - (y_{j + 1}^2 \cdots y_{i - 1}^2)y_i.
+ \]
+ So it follows that
+ \[
+ y(x + y) \in y_i(y_{i - 1} \cdots y_{j + 1}) B_j - y^2 + y^2 = yB_j.
+ \]
+ So $x + y \in B_j \subseteq C_{t_j} = C_{t_i}$. So $\{x, x + y, xy\}$ have the same colour.
+\end{proof}
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/riemannian_geometry.tex b/books/cam/III_L/riemannian_geometry.tex
new file mode 100644
index 0000000000000000000000000000000000000000..a402d86d2b28985f8a77b31f012ea25ae197f799
--- /dev/null
+++ b/books/cam/III_L/riemannian_geometry.tex
@@ -0,0 +1,3648 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2017}
+\def\nlecturer {A.\ G.\ Kovalev}
+\def\ncourse {Riemannian Geometry}
+
+\input{header}
+
+\renewcommand\div{\mathrm{div}}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+This course is a possible natural sequel of the course Differential Geometry offered in Michaelmas Term. We shall explore various techniques and results revealing intricate and subtle relations between Riemannian metrics, curvature and topology. I hope to cover much of the following:
+
+\emph{A closer look at geodesics and curvature.} Brief review from the Differential Geometry course. Geodesic coordinates and Gauss' lemma. Jacobi fields, completeness and the Hopf--Rinow theorem. Variations of energy, Bonnet--Myers diameter theorem and Synge's theorem.
+
+\emph{Hodge theory and Riemannian holonomy.} The Hodge star and Laplace--Beltrami operator. The Hodge decomposition theorem (with the `geometry part' of the proof). Bochner--Weitzenb\"ock formulae. Holonomy groups. Interplays with curvature and de Rham cohomology.
+
+\emph{Ricci curvature.} Fundamental groups and Ricci curvature. The Cheeger--Gromoll splitting theorem.
+
+\subsubsection*{Pre-requisites}
+Manifolds, differential forms, vector fields. Basic concepts of Riemannian geometry (curvature, geodesics etc.) and Lie groups. The course Differential Geometry offered in Michaelmas Term is the ideal pre-requisite.
+}
+\tableofcontents
+
+\section{Basics of Riemannian manifolds}
+Before we do anything, we lay out our conventions. Given a choice of local coordinates $\{x^k\}$, the coefficients $X^k$ for a vector field $X$ are defined by
+\[
+ X = \sum_k X^k \frac{\partial}{\partial x^k}.
+\]
+In general, for a tensor field $X \in TM^{\otimes q} \otimes T^*M^{\otimes p}$, we write
+\[
+ X = \sum X^{k_1 \ldots k_q}_{\ell_1 \ldots \ell_p} \frac{\partial}{\partial x^{k_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{k_q}} \otimes \d x^{\ell_1} \otimes \cdots \otimes \d x^{\ell_p},
+\]
+and we often leave out the $\otimes$ signs.
+
+For the sake of sanity, we will often use implicit summation convention, i.e.\ whenever we write something of the form
+\[
+ X_{ijk} Y^{i\ell jk},
+\]
+we mean
+\[
+ \sum_{i, j} X_{ijk} Y^{i\ell jk}.
+\]
+
+We will use upper indices to denote contravariant components, and lower indices for covariant components, as we have done above. Thus, we always sum an upper index with a lower index, as this corresponds to applying a covector to a vector.
+
+We will index the basis elements oppositely, e.g.\ we write $\d x^k$ instead of $\d x_k$ for a basis element of $T^*M$, so that the indices in expressions of the form $A_k \;\d x^k$ seem to match up. Whenever we do not follow this convention, we will write out summations explicitly.
+
+We will also adopt the shorthands\index{$\partial_k$}\index{$\nabla_k$}
+\[
+ \partial_k = \frac{\partial}{\partial x^k},\quad \nabla_k = \nabla_{\partial_k}.
+\]
+
+With these conventions out of the way, we begin with a very brief summary of some topics in the Michaelmas Differential Geometry course, starting from the definition of a Riemannian metric.
+
+\begin{defi}[Riemannian metric]
+ Let $M$ be a smooth manifold. A \term{Riemannian metric} $g$ on $M$ is a smoothly-varying inner product on the fibres of the tangent bundle $TM$. Formally, this is a global section of $T^*M \otimes T^*M$ that is fiberwise symmetric and positive definite.
+
+ The pair $(M, g)$ is called a \term{Riemannian manifold}.
+\end{defi}
+
+On every coordinate neighbourhood with coordinates $x = (x^1, \cdots, x^n)$, we can write
+\[
+ g = \sum_{i, j = 1}^n g_{ij}(x)\;\d x^i\; \d x^j,
+\]
+where the coefficients
+\[
+ g_{ij} = g\left(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}\right)
+\]
+are $C^\infty$ functions.
+
+\begin{eg}
+ The manifold $\R^k$ has a canonical metric given by the Euclidean metric. In the usual coordinates, $g$ is given by $g_{ij} = \delta_{ij}$.
+\end{eg}
+
+Does every manifold admit a metric? Recall
+
+\begin{thm}[Whitney embedding theorem]\index{Whitney embedding theorem}
+ Every smooth manifold $M$ admits an embedding into $\R^k$ for some $k$. In other words, $M$ is diffeomorphic to a submanifold of $\R^k$. In fact, we can pick $k$ such that $k \leq 2 \dim M$.
+\end{thm}
+
+Using such an embedding, we can induce a Riemannian metric on $M$ by restricting the inner product from Euclidean space, since we have inclusions $T_pM \hookrightarrow T_p \R^k \cong \R^k$.
+
+More generally,
+\begin{lemma}
+ Let $(N, h)$ be a Riemannian manifold, and let $F: M \to N$ be an immersion. Then the pullback $g = F^*h$ defines a metric on $M$.
+\end{lemma}
+The condition of immersion is required for the pullback to be non-degenerate.
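As a concrete check of this construction, one can compute a pullback metric numerically. The sketch below (an assumed example, not from the lectures) takes the standard spherical parametrisation of the unit sphere in $\R^3$ and recovers the round metric $g = \d \theta^2 + \sin^2 \theta \;\d \phi^2$ from $g_{ij} = \bra \partial_i F, \partial_j F\ket$.

```python
import math

def F(theta, phi):
    """Spherical parametrisation of the unit sphere in R^3."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def pullback_metric(theta, phi, h=1e-6):
    """Coefficients g_ij = <dF/du^i, dF/du^j> via central differences."""
    def partial(i):
        d = [0.0, 0.0]
        d[i] = h
        plus = F(theta + d[0], phi + d[1])
        minus = F(theta - d[0], phi - d[1])
        return [(p - m) / (2 * h) for p, m in zip(plus, minus)]
    du = [partial(0), partial(1)]
    return [[sum(a * b for a, b in zip(du[i], du[j])) for j in range(2)]
            for i in range(2)]
```

The result is approximately $\mathrm{diag}(1, \sin^2\theta)$; positive-definiteness fails exactly where the parametrisation stops being an immersion, namely where $\sin \theta = 0$.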
+
+In Differential Geometry, if we do not have metrics, then we tend to consider diffeomorphic spaces as being the same. With metrics, the natural notion of isomorphism is
+\begin{defi}[Isometry]\index{isometry}
+ Let $(M, g)$ and $(N, h)$ be Riemannian manifolds. We say $f: M \to N$ is an \emph{isometry} if it is a diffeomorphism and $f^*h = g$. In other words, for any $p \in M$ and $u, v \in T_p M$, we need
+ \[
+ h\big((\d f)_p u, (\d f)_p v\big) = g(u, v).
+ \]
+\end{defi}
+
+\begin{eg}
+ Let $G$ be a Lie group. Then for any $x$, we have translation maps $L_x, R_x: G \to G$ given by
+ \begin{align*}
+ L_x(y) &= xy\\
+ R_x(y) &= yx
+ \end{align*}
+ These maps are in fact diffeomorphisms of $G$.
+
+ We already know that $G$ admits a Riemannian metric, but we might want to ask something stronger --- does there exist a \term{left-invariant metric}? In other words, is there a metric such that each $L_x$ is an isometry?
+
+ Recall the following definition:
+ \begin{defi}[Left-invariant vector field]\index{left-invariant vector field}\index{vector field!left invariant}
+ Let $G$ be a Lie group, and $X$ a vector field. Then $X$ is \emph{left invariant} if for any $x \in G$, we have $\d (L_x) X = X$.
+ \end{defi}
+ We had a rather general technique for producing left-invariant vector fields. Given a Lie group $G$, we can define the \term{Lie algebra} $\mathfrak{g} = T_e G$. Then we can produce left-invariant vector fields by picking some $X_e \in \mathfrak{g}$, and then setting
+ \[
+ X_a = \d (L_a) X_e.
+ \]
+ The resulting vector field is indeed smooth, as shown in the differential geometry course.
+
+ Similarly, to construct a left-invariant metric, we can just pick a metric at the identity and then propagate it around using left-translation. More explicitly, given any inner product $\bra \ph, \ph\ket$ on $T_eG$, we can define $g$ by
+ \[
+ g(u, v) = \bra (\d L_{x^{-1}})_x u, (\d L_{x^{-1}})_x v\ket
+ \]
+ for all $x \in G$ and $u, v \in T_x G$. The argument for smoothness is similar to that for vector fields.
+\end{eg}
+Of course, everything works when we replace ``left'' with ``right''. A Riemannian metric is said to be \emph{bi-invariant}\index{bi-invariant metric} if it is both left- and right-invariant. These are harder to find, but it is a fact that every compact Lie group admits a bi-invariant metric. The basic idea of the proof is to start from a left-invariant metric, then integrate the metric along right translations of all group elements. Here compactness is necessary for the result to be finite.
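+The averaging step can be sketched as follows (an outline using standard facts about Haar measure, not a computation from the course). Start with a left-invariant metric $g$, and let $\mu$ be a Haar measure on the compact group $G$, which is then both left- and right-invariant and has finite total mass. Define
+\[
+ \tilde{g}_y(u, v) = \int_G g_{yx}\big((\d R_x)_y u, (\d R_x)_y v\big) \;\d \mu(x)
+\]
+for $y \in G$ and $u, v \in T_y G$. Each integrand $R_x^* g$ is still left-invariant, because left and right translations commute, so $\tilde{g}$ is left-invariant; and since $R_x \circ R_z = R_{zx}$, the substitution $x \mapsto zx$ together with translation invariance of $\mu$ gives $R_z^* \tilde{g} = \tilde{g}$.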
+
+We will later see that we cannot drop the compactness condition. There are non-compact Lie groups that do not admit bi-invariant metrics, such as $\SL(2, \R)$.
+
+Recall that in order to differentiate vectors, or even tensors on a manifold, we needed a connection on the tangent bundle. There is a natural choice for the connection when we are given a Riemannian metric.
+\begin{defi}[Levi-Civita connection]\index{Levi-Civita connection}
+ Let $(M, g)$ be a Riemannian manifold. The \emph{Levi-Civita connection} is the unique connection $\nabla: \Omega^0_M(TM) \to \Omega^1_M(TM)$ on $M$ satisfying
+ \begin{enumerate}
+ \item Compatibility with metric:\index{compatible connection}\index{connection!compatible with metric}
+ \[
+ Z g(X, Y) = g(\nabla_Z X, Y) + g(X, \nabla_Z Y),
+ \]
+ \item Symmetry/torsion-free:\index{symmetric connection}\index{torsion-free connection}\index{connection!symmetric}\index{connection!torsion-free}
+ \[
+ \nabla_X Y - \nabla_Y X = [X, Y].
+ \]
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[Christoffel symbols]\index{Christoffel symbols}
+ In local coordinates, the \emph{Christoffel symbols} are defined by
+ \[
+ \nabla_{\partial_j} \frac{\partial}{\partial x^k} = \Gamma_{jk}^i \frac{\partial}{\partial x^i}.
+ \]
+\end{defi}
+
+With a bit more imagination on what the symbols mean, we can write the first property as
+\[
+ \d (g(X, Y)) = g(\nabla X, Y) + g(X, \nabla Y),
+\]
+while the second property can be expressed in coordinate representation by
+\[
+ \Gamma_{jk}^i = \Gamma_{kj}^i.
+\]
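The Christoffel symbols of the Levi-Civita connection can be computed from the metric alone, via the standard coordinate formula
\[
 \Gamma^i_{jk} = \frac{1}{2} g^{i\ell}\left(\frac{\partial g_{k\ell}}{\partial x^j} + \frac{\partial g_{j\ell}}{\partial x^k} - \frac{\partial g_{jk}}{\partial x^\ell}\right).
\]
The following numerical sketch checks this formula on the hyperbolic upper half-plane $g = (\d x^2 + \d y^2)/y^2$ (an assumed test metric, not an example from the course), where $\Gamma^y_{xx} = 1/y$ and $\Gamma^x_{xy} = -1/y$.

```python
def metric(p):
    """Hyperbolic upper half-plane metric at p = (x, y)."""
    _, y = p
    return [[1.0 / y**2, 0.0], [0.0, 1.0 / y**2]]

def metric_inv(p):
    _, y = p
    return [[y**2, 0.0], [0.0, y**2]]

def d_metric(p, k, h=1e-6):
    """Central-difference derivative of the metric coefficients with
    respect to the k-th coordinate."""
    plus, minus = list(p), list(p)
    plus[k] += h
    minus[k] -= h
    gp, gm = metric(plus), metric(minus)
    return [[(gp[a][b] - gm[a][b]) / (2 * h) for b in range(2)]
            for a in range(2)]

def christoffel(p, i, j, k):
    """Gamma^i_{jk} = (1/2) g^{il} (d_j g_{kl} + d_k g_{jl} - d_l g_{jk})."""
    ginv = metric_inv(p)
    dg = [d_metric(p, m) for m in range(2)]
    return 0.5 * sum(ginv[i][l] * (dg[j][k][l] + dg[k][j][l] - dg[l][j][k])
                     for l in range(2))
```

At $(x, y) = (0, 2)$ this recovers $\Gamma^y_{xx} = 1/y = 0.5$ and $\Gamma^x_{xy} = -1/y = -0.5$, and the symmetry $\Gamma^i_{jk} = \Gamma^i_{kj}$ holds automatically.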
+
+The connection was defined on $TM$, but in fact, the connection allows us to differentiate many more things, and not just tangent vectors.
+
+Firstly, the connection $\nabla$ induces a unique covariant derivative on $T^*M$, also denoted $\nabla$, defined uniquely by the relation
+\[
+ X\bra \alpha, Y\ket = \bra \nabla_X \alpha , Y\ket + \bra \alpha, \nabla_X Y\ket
+\]
+for any $X,Y \in \Vect(M)$ and $\alpha \in \Omega^1(M)$.
+
+To extend this to a connection $\nabla$ on tensor bundles $\mathcal{T}^{q, p} \equiv (TM)^{\otimes q} \otimes (T^*M)^{\otimes p}$\index{$\mathcal{T}^{q, p}M$} for any $p, q \geq 0$, we note the following general construction:
+
+In general, suppose we have vector bundles $E$ and $F$, and $s_1 \in \Gamma(E)$ and $s_2 \in \Gamma(F)$. If we have connections $\nabla^E$ and $\nabla^F$ on $E$ and $F$ respectively, then we can define
+\[
+ \nabla^{E\otimes F} (s_1 \otimes s_2) = (\nabla^E s_1) \otimes s_2 + s_1 \otimes (\nabla^F s_2).
+\]
+
+Since we already have a connection on $TM$ and $T^*M$, this allows us to extend the connection to all tensor bundles.
+
+Given this machinery, recall that the Riemannian metric is formally a section $g \in \Gamma(T^*M \otimes T^*M)$. Then the compatibility with the metric can be written in the following even more compact form:
+\[
+ \nabla g = 0.
+\]
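+Indeed, unwrapping the definition of the induced connection on $T^*M \otimes T^*M$, for any $X, Y, Z \in \Vect(M)$ we have
+\[
+ (\nabla_Z g)(X, Y) = Z\, g(X, Y) - g(\nabla_Z X, Y) - g(X, \nabla_Z Y),
+\]
+and this vanishes for all $X, Y, Z$ precisely when $\nabla$ is compatible with the metric.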
+
+\section{Riemann curvature}
+With all those definitions out of the way, we now start by studying the notion of \emph{curvature}. The definition of the curvature tensor might not seem intuitive at first, but motivation was somewhat given in the III Differential Geometry course, and we will not repeat that.
+
+\begin{defi}[Curvature]\index{curvature}\index{curvature 2-form}
+ Let $(M, g)$ be a Riemannian manifold with Levi-Civita connection $\nabla$. The \emph{curvature $2$-form} is the section
+ \[
+ R = - \nabla \circ \nabla \in \Gamma(\exterior^2 T^*M \otimes T^*M \otimes TM) \subseteq \Gamma(T^{1, 3}M).
+ \]
+\end{defi}
+This can be thought of as a $2$-form with values in $T^*M \otimes TM = \End(TM)$. Given any $X, Y \in \Vect(M)$, we have
+\[
+ R(X, Y) \in \Gamma(\End TM) .
+\]
+The following formula is a straightforward, but crucial, computation:
+\begin{prop}
+ \[
+ R(X, Y) = \nabla_{[X, Y]} - [\nabla_X, \nabla_Y].
+ \]
+\end{prop}
+%One can also show that locally, we can write
+%\[
+% R = -(\d A + A \wedge A),
+%\]
+%where $A$ is some quantity derived from the connection.
+
+In local coordinates, we can write
+\[
+ R = \Big(R^i_{j, k\ell} \d x^k \d x^\ell\Big)_{i, j = 1, \ldots, \dim M} \in \Omega_M^2(\End(TM)).
+\]
+Then we have
+\[
+ R(X, Y)^i_j = R^i_{j, k\ell} X^k Y^\ell.
+\]
+
+The comma between $j$ and $k\ell$ is purely for artistic reasons.
+
+It is often slightly convenient to consider a different form of the Riemann curvature tensor. Instead of having a tensor of type $(1, 3)$, we have one of type $(0, 4)$ by
+\[
+ R(X, Y, Z, T) = g(R(X, Y)Z, T)
+\]
+for $X, Y, Z, T \in T_p M$. In local coordinates, we write this as
+\[
+ R_{ij, k\ell} = g_{iq} R^q_{j, k\ell}.
+\]
+The first thing we want to prove is that $R_{ij, k\ell}$ enjoys some symmetries we might not expect:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item
+ \[
+ R_{ij, k\ell} = - R_{ij, \ell k} = - R_{ji, k\ell}.
+ \]
+ \item The \term{first Bianchi identity}\index{Bianchi identity!first}:
+ \[
+ R^i_{j, k \ell} + R^i_{k, \ell j} + R^i_{\ell, jk} = 0.
+ \]
+ \item
+ \[
+ R_{ij, k\ell} = R_{k\ell, ij}.
+ \]
+ \end{enumerate}
+\end{prop}
+Note that the first Bianchi identity can also be written for the $(0, 4)$ tensor as
+\[
+ R_{ij, k\ell} + R_{ik, \ell j} + R_{i\ell, jk} = 0.
+\]
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item The first equality is obvious as coefficients of a $2$-form. For the second equality, we begin with the compatibility of the connection with the metric:
+ \[
+ \frac{\partial g_{ij}}{\partial x^k} = g(\nabla_k \partial_i, \partial_j) + g(\partial_i, \nabla_k \partial_j).
+ \]
+ We take a partial derivative, say with respect to $x^\ell$, to obtain
+ \[
+ \frac{\partial^2 g_{ij}}{\partial x^\ell \partial x^k} = g(\nabla_\ell \nabla_k \partial_i, \partial_j) + g(\nabla_k \partial_i, \nabla_\ell \partial_j) + g(\nabla_\ell \partial_i, \nabla_k \partial_j) + g(\partial_i, \nabla_\ell \nabla_k \partial_j).
+ \]
+ Then we know
+ \[
+ 0 = \frac{\partial^2 g_{ij}}{\partial x^\ell \partial x^k} - \frac{\partial^2 g_{ij}}{\partial x^k \partial x^\ell} = g([\nabla_\ell, \nabla_k] \partial_i, \partial_j) + g(\partial_i, [\nabla_\ell, \nabla_k]\partial_j).
+ \]
+ But we know
+ \[
+ R(\partial_k, \partial_\ell) = \nabla_{[\partial_k, \partial_\ell]} - [\nabla_k, \nabla_\ell] = -[\nabla_k, \nabla_\ell].
+ \]
+ Writing $R_{k\ell} = R(\partial_k, \partial_\ell)$, we have
+ \[
+ 0 = g(R_{k\ell} \partial_i, \partial_j) + g(\partial_i, R_{k\ell} \partial_j) = R_{ji, k\ell} + R_{ij, k\ell}.
+ \]
+ So we are done.
+ \item Recall
+ \[
+ R^i_{j, k\ell} = (R_{k\ell} \partial_j)^i = ([\nabla_\ell, \nabla_k] \partial_j)^i.
+ \]
+ So we have
+ \begin{align*}
+ &\hphantom{={}}R^i_{j, k\ell} + R^i_{k, \ell j} + R^i_{\ell, jk}\\
+ &= \left[(\nabla_\ell \nabla_k \partial_j - \nabla_k \nabla_\ell \partial_j) + (\nabla_j \nabla_\ell \partial_k - \nabla_\ell \nabla_j \partial_k) + (\nabla_k \nabla_j \partial_\ell - \nabla_j \nabla_k \partial_\ell)\right]^i.
+ \end{align*}
+ We claim that
+ \[
+ \nabla_\ell \nabla_k \partial_j - \nabla_\ell \nabla_j \partial_k = 0.
+ \]
+ Indeed, by definition, we have
+ \[
+ (\nabla_k \partial_j)^q = \Gamma_{kj}^q = \Gamma_{jk}^q = (\nabla_j \partial_k)^q.
+ \]
+ The other terms cancel similarly, and we get $0$ as promised.
+ \item Consider the following octahedron:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] (se) at (1.5, 0) {};
+ \node [circ] (sw) at (0, 0) {};
+ \node [circ] (ne) at (2.5, 0.7) {};
+ \node [circ] (nw) at (1, 0.7) {};
+ \node [circ] (top) at (1.25, 1.7) {};
+ \node [circ] (bot) at (1.25, -1) {};
+
+ \draw (sw) node [left] {\small $R_{ik, \ell j} = R_{ki, j\ell}$} -- (se) node [right] {\;\;\;\;\;\;\small $R_{i\ell, jk} = R_{\ell i, kj}$} -- (ne) node [right] {\small $R_{j\ell, ki} = R_{\ell j, ik}$} -- (nw) node [left] {\small $R_{jk, i\ell} = R_{kj, \ell i}$\;\;\;\;\;\;} -- (sw);
+
+ \node [above] at (top) {\small $R_{ij, k\ell} = R_{ji, \ell k}$};
+ \node [below] at (bot) {\small $R_{k\ell, ij} = R_{\ell k, ji}$};
+
+ \draw (top) -- (sw) -- (bot);
+ \draw (top) -- (se) -- (bot);
+ \draw (top) -- (ne) -- (bot);
+
+ \draw (top) -- (nw);
+ \draw [dashed] (nw) -- (bot);
+
+ \fill [opacity=0.3, gray] (top) -- (1.5, 0) -- (0, 0) -- (top);
+ \fill [opacity=0.3, gray] (top) -- (2.5, 0.7) -- (1, 0.7) -- (top);
+
+ \fill [opacity=0.3, gray] (bot) -- (1.5, 0) -- (2.5, 0.7) -- (bot);
+ \fill [opacity=0.3, gray] (bot) -- (0, 0) -- (1, 0.7) -- (bot);
+ \end{tikzpicture}
+ \end{center}
+ The equalities on each vertex is given by (i). By the first Bianchi identity, for each greyed triangle, the sum of the three vertices is zero.
+
+ Now looking at the upper half of the octahedron, adding the two greyed triangles shows us the sum of the vertices in the horizontal square is $(-2) R_{ij, k\ell}$. Looking at the bottom half, we find that the sum of the vertices in the horizontal square is $(-2)R_{k\ell, ij}$. So we must have
+ \[
+ R_{ij, k\ell} = R_{k\ell, ij}.\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+What exactly are the properties of the Levi-Civita connection that make these equalities work? The first equality of (i) did not require anything. The second equality of (i) required the compatibility with the metric, and (ii) required the symmetry property. The last one required both properties.
+
+Note that we can express the last property as saying $R_{ij, k\ell}$ is a symmetric bilinear form on $\exterior^2 T_p^*M$.
+
+\subsubsection*{Sectional curvature}
+The full curvature tensor is rather scary. So it is convenient to obtain some simpler quantities from it. Recall that if we had tangent vectors $X, Y$, then we can form
+\[
+ |X \wedge Y| = \sqrt{g(X, X)g(Y, Y) - g(X, Y)^2},
+\]
+which is the area of the parallelogram spanned by $X$ and $Y$. We now define
+\[
+ K(X, Y) = \frac{R(X, Y, X, Y)}{|X \wedge Y|^2}.
+\]
+Note that this is invariant under (non-zero) scaling of $X$ or $Y$, and is symmetric in $X$ and $Y$. Finally, it is also invariant under the transformation $(X, Y) \mapsto (X + \lambda Y, Y)$.
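+To spell out the last invariance: expanding by linearity,
+\[
+ R(X + \lambda Y, Y, X + \lambda Y, Y) = R(X, Y, X, Y),
+\]
+since every extra term has $Y$ in both of its first two or both of its last two slots, and so vanishes by the antisymmetries proved earlier; moreover $|(X + \lambda Y) \wedge Y| = |X \wedge Y|$, as the Gram determinant is unchanged by a shear.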
+
+But it is an easy linear algebra fact that these transformations generate all isomorphisms from a two-dimensional vector space to itself. So $K(X, Y)$ depends only on the $2$-plane spanned by $X, Y$. So we have in fact defined a function on the Grassmannian of $2$-planes, $K: \Gr(2, T_p M) \to \R$. This is called the \term{sectional curvature} (of $g$).
+
+It turns out the sectional curvature determines the Riemann curvature tensor completely!
+\begin{lemma}
+ Let $V$ be a real vector space of dimension $\geq 2$. Suppose $R', R'': V^{\otimes 4} \to \R$ are both linear in each factor, and satisfy the symmetries we found for the Riemann curvature tensor. We define $K', K'': \Gr(2, V) \to \R$ as in the definition of the sectional curvature. If $K' = K''$, then $R' = R''$.
+\end{lemma}
+This is really just linear algebra.
+\begin{proof}
+ For any $X, Y, Z \in V$, we know
+ \[
+ R'(X + Z, Y, X + Z, Y) = R''(X + Z, Y, X + Z, Y).
+ \]
+ Using linearity of $R'$ and $R''$, and cancelling equal terms on both sides, we find
+ \[
+ R'(Z, Y, X, Y) + R'(X, Y, Z, Y) = R''(Z, Y, X, Y) + R''(X, Y, Z, Y).
+ \]
+ Now using the symmetry property of $R'$ and $R''$, this implies
+ \[
+ R'(X, Y, Z, Y) = R''(X, Y, Z, Y).
+ \]
+ Similarly, we replace $Y$ with $Y + T$, and then we get
+ \[
+ R'(X, Y, Z, T) + R'(X, T, Z, Y) = R''(X, Y, Z, T) + R''(X, T, Z, Y).
+ \]
+ We then rearrange and use the symmetries to get
+ \[
+ R'(X, Y, Z, T) - R''(X, Y, Z, T) = R'(Y, Z, X, T) - R''(Y, Z, X, T).
+ \]
+ We notice this equation says $R'(X, Y, Z, T) - R''(X, Y, Z, T)$ is invariant under the cyclic permutation $X \to Y \to Z \to X$. So by the first Bianchi identity, we have
+ \[
+ 3(R'(X, Y, Z, T) - R''(X, Y, Z, T)) = 0.
+ \]
+ So we must have $R' = R''$.
+\end{proof}
+
+\begin{cor}
+ Let $(M, g)$ be a manifold such that for all $p$, the function $K_p: \Gr(2, T_p M) \to \R$ is a constant map. Let
+ \[
+ R^0_p (X, Y, Z, T) = g_p(X, Z) g_p(Y, T) - g_p(X, T) g_p(Y, Z).
+ \]
+ Then
+ \[
+ R_p= K_p R_p^0.
+ \]
+ Here $K_p$ is just a real number, since it is constant. Moreover, $K_p$ is a smooth function of $p$.
+
+ Equivalently, in local coordinates, if the metric at a point is $\delta_{ij}$, then we have
+ \[
+ R_{ij, ij} = - R_{ij, ji} = K_p \quad \text{for } i \neq j,
+ \]
+ and all other entries are zero.
+\end{cor}
+Of course, the converse also holds.
+
+\begin{proof}
+ We apply the previous lemma as follows: we define $R' = K_p R_p^0$ and $R'' = R_p$. It is a straightforward inspection to see that this $R_p^0$ satisfies the symmetry properties of $R_p$, and that they define the same sectional curvature. So $R'' = R'$. We know $K_p$ is smooth in $p$ as both $g$ and $R$ are smooth.
+\end{proof}
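+For instance, part of the ``straightforward inspection'' is the first Bianchi identity for $R_p^0$, which we can verify by expanding the definition directly:
+\begin{align*}
+ &R^0(X, Y, Z, T) + R^0(Y, Z, X, T) + R^0(Z, X, Y, T)\\
+ &\quad= g(X, Z) g(Y, T) - g(X, T) g(Y, Z)\\
+ &\qquad+ g(Y, X) g(Z, T) - g(Y, T) g(Z, X)\\
+ &\qquad+ g(Z, Y) g(X, T) - g(Z, T) g(X, Y) = 0,
+\end{align*}
+since the terms cancel in pairs by the symmetry of $g$.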
+We can further show that if $\dim M > 2$, then $K_p$ is in fact independent of $p$ under the hypotheses of this corollary. The proof requires the second Bianchi identity, and can be found on the first example sheet.
+
+\subsubsection*{Other curvatures}
+There are other quantities we can extract out of the curvature, which will later be useful.
+\begin{defi}[Ricci curvature]\index{Ricci curvature}\index{curvature!Ricci}
+ The \emph{Ricci curvature} of $g$ at $p \in M$ is
+ \[
+ \Ric_p(X, Y) = \tr(v \mapsto R_p(X, v) Y).
+ \]
+ In terms of coordinates, we have
+ \[
+ \Ric_{ij} = R^q_{i,jq} = g^{pq} R_{pi, jq},
+ \]
+ where $g^{pq}$ denotes the inverse of $g$.
+
+ This $\Ric$ is a symmetric bilinear form on $T_p M$, and it is determined by the quadratic form
+ \[
+ \Ric(X) = \frac{1}{n - 1} \Ric_p(X, X).
+ \]
+ The coefficient $\frac{1}{n - 1}$ is just a convention.
+\end{defi}
+There are still two indices we can contract, and we can define
+\begin{defi}[Scalar curvature]\index{scalar curvature}\index{curvature!scalar}
+ The \emph{scalar curvature} of $g$ is the trace of $\Ric$ with respect to $g$. Explicitly, it is defined by
+ \[
+ s = g^{ij}\Ric_{ij} = g^{ij} R^q_{i, jq} = R^{qi}\!_{iq}.
+ \]
+\end{defi}
+Sometimes a convention is to define the scalar curvature as $\frac{s}{n(n - 1)}$ instead.
+
+In the case of constant sectional curvature, we have
+\[
+ \Ric_p = (n - 1) K_p g_p,
+\]
+and
+\[
+ s(p) = n(n - 1) K_p.
+\]
+
+\subsubsection*{Low dimensions}
+If $n = 2$, i.e.\ we have surfaces, then the Riemannian metric $g$ is also known as the \term{first fundamental form}, and it is usually written as
+\[
+ g = E\;\d u^2 + 2 F \;\d u\;\d v + G \;\d v^2.
+\]
+Up to the symmetries, the only non-zero component of the curvature tensor is $R_{12, 12}$, and using the definition of the scalar curvature, we find
+\[
+ R_{12,12} = \frac{1}{2} s (EG - F^2).
+\]
+Thus $s/2$ is also the sectional curvature (there can only be one plane in the tangent space, so the sectional curvature is just a number). One can further check that
+\[
+ \frac{s}{2} = K = \frac{LN - M^2}{EG - F^2},
+\]
+the \term{Gaussian curvature}. Thus, the full curvature tensor is determined by the Gaussian curvature. Also, $R_{12, 12} = LN - M^2$ is the determinant of the second fundamental form.
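+As a quick sanity check of these formulas, consider the round sphere of radius $a$ in $\R^3$, parametrized by colatitude $u$ and longitude $v$ (with a suitable choice of unit normal for the second fundamental form):
+\begin{eg}
+ For the sphere of radius $a$, the first fundamental form is
+ \[
+ g = a^2 \;\d u^2 + a^2 \sin^2 u \;\d v^2,
+ \]
+ so $E = a^2$, $F = 0$ and $G = a^2 \sin^2 u$, while the second fundamental form has $L = a$, $M = 0$ and $N = a \sin^2 u$. Then
+ \[
+ K = \frac{LN - M^2}{EG - F^2} = \frac{a^2 \sin^2 u}{a^4 \sin^2 u} = \frac{1}{a^2},\quad s = \frac{2}{a^2},
+ \]
+ as expected for a sphere of radius $a$.
+\end{eg}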
+
+If $n = 3$, one can check that $R(g)$ is determined by the Ricci curvature.
+
+\section{Geodesics}
+\subsection{Definitions and basic properties}
+We will eventually want to talk about geodesics. However, the setup we need to write down the definition of geodesics can be done in a much more general way, and we will do that.
+
+The general setting is that we have a vector bundle $\pi: E \to M$.
+\begin{defi}[Lift]\index{lift}
+ Let $\pi: E \to M$ be a vector bundle with typical fiber $V$. Consider a curve $\gamma: (-\varepsilon, \varepsilon) \to M$. A \emph{lift} of $\gamma$ is a map $\gamma^E: (-\varepsilon, \varepsilon) \to E$ such that $\pi \circ \gamma^E = \gamma$, i.e.\ the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ & E \ar[d, "\pi"]\\
+ (-\varepsilon, \varepsilon) \ar[r, "\gamma"] \ar[ur, "\gamma^E"] & M
+ \end{tikzcd}.
+ \]
+\end{defi}
+For $p \in M$, we write $E_p = \pi^{-1}(\{p\}) \cong V$ for the fiber above $p$. We can think of $E_p$ as the space of some ``information'' at $p$. For example, if $E = TM$, then the ``information'' is a tangent vector at $p$. In physics, the manifold $M$ might represent our universe, and a point in $E_p$ might be the value of the electromagnetic field at $p$.
+
+Thus, given a path $\gamma$ in $M$, a lift corresponds to providing that piece of ``information'' at each point along the curve. For example, if $E = TM$, then we can canonically produce a lift of $\gamma$, given by taking the derivative of $\gamma$ at each point.
+
+Locally, suppose we are in some coordinate neighbourhood $U \subseteq M$ such that $E$ is trivial on $U$. After picking a trivialization, we can write our lift as
+\[
+ \gamma^E(t) = (\gamma(t), a(t))
+\]
+for some function $a: (-\varepsilon, \varepsilon) \to V$.
+
+One thing we would want to do with such lifts is to differentiate them, and see how it changes along the curve. When we have a \emph{section} of $E$ on the whole of $M$ (or even just an open neighbourhood), rather than just a lift along a curve, the connection provides exactly the information needed to do so. It is not immediately obvious that the connection also allows us to differentiate curves along paths, but it does.
+
+\begin{prop}
+ Let $\gamma: (-\varepsilon, \varepsilon) \to M$ be a curve. Then there is a uniquely determined operation $\frac{\nabla}{\d t}$ from the space of all lifts of $\gamma$ to itself, satisfying the following conditions:
+ \begin{enumerate}
+ \item For any $c, d \in \R$ and lifts $\tilde{\gamma}^E, \gamma^E$ of $\gamma$, we have
+ \[
+ \frac{\nabla}{\d t}(c\gamma^E + d \tilde{\gamma}^E) = c\frac{\nabla \gamma^E}{\d t} + d \frac{\nabla \tilde{\gamma}^E}{\d t}
+ \]
+ \item For any lift $\gamma^E$ of $\gamma$ and function $f: (-\varepsilon, \varepsilon) \to \R$, we have
+ \[
+ \frac{\nabla}{\d t}(f \gamma^E) = \frac{\d f}{\d t} \gamma^E + f \frac{\nabla \gamma^E}{\d t}.
+ \]
+ \item If there is a local section $s$ of $E$ and a local vector field $V$ on $M$ such that
+ \[
+ \gamma^E(t) = s(\gamma(t)),\quad \dot{\gamma}(t) = V(\gamma(t)),
+ \]
+ then we have
+ \[
+ \frac{\nabla \gamma^E}{\d t} = (\nabla_V s) \circ \gamma.
+ \]
+ \end{enumerate}
+ Locally, this is given by
+ \[
+ \left(\frac{\nabla \gamma^E}{\d t}\right)^i = \dot{a}^i + \Gamma^i_{jk} a^j \dot{x}^k.
+ \]
+\end{prop}
+The proof is straightforward --- one just checks that the local formula works, and the three properties force the operation to be locally given by that formula.
+
+\begin{defi}[Covariant derivative]\index{covariant derivative}
+ The uniquely defined operation in the proposition above is called the \emph{covariant derivative}.
+\end{defi}
+
+In some sense, lifts that have vanishing covariant derivative are ``constant'' along the map.
+
+\begin{defi}[Horizontal lift]\index{horizontal lift}\index{lift!horizontal}
+ Let $\nabla$ be a connection on $E$ with $\Gamma^i_{jk}(x)$ the coefficients in a local trivialization. We say a lift $\gamma^E$ is \emph{horizontal} if
+ \[
+ \frac{\nabla \gamma^E}{\d t} = 0.
+ \]
+\end{defi}
+Since this is a linear first-order ODE, we know that for a fixed $\gamma$, given any initial $a(0) \in E_{\gamma(0)}$, there is a unique way to obtain a horizontal lift.
+
+\begin{defi}[Parallel transport]\index{parallel transport}
+ Let $\gamma : [0, 1] \to M$ be a curve in $M$. Given any $a_0 \in E_{\gamma(0)}$, the unique horizontal lift of $\gamma$ with $\gamma^E(0) = (\gamma(0), a_0)$ is called the \emph{parallel transport} of $a_0$ along $\gamma$. We sometimes also call $\gamma^E(1)$ the parallel transport.
+\end{defi}
+Of course, we want to use this general theory to talk about the case where $M$ is a Riemannian manifold, $E = TM$ and $\nabla$ is the Levi-Civita connection of $g$. In this case, each curve $\gamma(t)$ has a canonical lift independent of the metric or connection given simply by taking the derivative $\dot{\gamma}(t)$.
+\begin{defi}[Geodesic]\index{geodesic}
+ A curve $\gamma(t)$ on a Riemannian manifold $(M, g)$ is called a \emph{geodesic curve} if its canonical lift is horizontal with respect to the Levi-Civita connection. In other words, we need
+ \[
+ \frac{\nabla \dot{\gamma}}{\d t} = 0.
+ \]
+\end{defi}
+In local coordinates, we write this condition as
+\[
+ \ddot{x}^i + \Gamma^i_{jk}\dot{x}^j \dot{x}^k = 0.
+\]
+This time, we obtain a second-order ODE. So a geodesic is uniquely specified by the initial conditions $p = x(0)$ and $a = \dot{x}(0)$. We will denote the resulting geodesic as $\gamma_p(t, a)$, where $t$ is the time coordinate as usual.
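+As a concrete illustration of these equations (not needed later), we can work out the geodesic equations on the hyperbolic upper half-plane:
+\begin{eg}
+ Consider $\{(x, y) \in \R^2 : y > 0\}$ with metric
+ \[
+ g = \frac{\d x^2 + \d y^2}{y^2}.
+ \]
+ Computing the Christoffel symbols of the Levi-Civita connection, the non-zero ones are
+ \[
+ \Gamma^x_{xy} = \Gamma^x_{yx} = -\frac{1}{y},\quad \Gamma^y_{xx} = \frac{1}{y},\quad \Gamma^y_{yy} = -\frac{1}{y}.
+ \]
+ So the geodesic equations read
+ \[
+ \ddot{x} - \frac{2}{y} \dot{x}\dot{y} = 0,\quad \ddot{y} + \frac{1}{y}(\dot{x}^2 - \dot{y}^2) = 0.
+ \]
+ For example, the vertical line $x = x_0$, $y = e^t$ solves these equations, and is a unit speed geodesic since $|\dot{\gamma}|_g^2 = \dot{y}^2/y^2 = 1$.
+\end{eg}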
+
+Since we have a non-linear ODE, existence is no longer guaranteed for all time, but just for some interval $(-\varepsilon, \varepsilon)$. Of course, we still have uniqueness of solutions.
+
+We now want to prove things about geodesics. To do so, we will need to apply some properties of the covariant derivative we just defined. Since we are lazy, we would like to reuse results we already know about the covariant derivative for vector fields. The trick is to notice that locally, we can always extend $\dot{\gamma}$ to a vector field.
+
+Indeed, we work in some coordinate chart around $\gamma(0)$, and we wlog assume
+\[
+ \dot{\gamma}(0) = \frac{\partial}{\partial x_1}.
+\]
+By the inverse function theorem, we note that $x_1(t)$ is invertible near $0$, and we can write $t = t(x_1)$ for small $x_1$. Then in this neighbourhood of $0$, we can define the vector field
+\[
+ \dot{\underline{\gamma}}(x_1, \cdots, x_n) = \dot{\gamma}(t(x_1)).
+\]
+By construction, this agrees with $\dot{\gamma}$ along the curve.
+
+Using this notation, the geodesic equation can be written as
+\[
+ \left.\nabla_{\dot{\underline{\gamma}}} \dot{\underline{\gamma}}\right|_{\gamma(t)} = 0,
+\]
+where the $\nabla$ now refers to the covariant derivative of vector fields, i.e.\ the connection itself.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0);
+ \draw [->] (0, -2) -- (0, 2);
+
+ \draw [mblue, thick] (0, 0) .. controls (1, 0) and (0.5, 0.5) .. (1.5, 0.5) .. controls (2, 0.5) .. (3, 0.2) node [right] {$\gamma$};
+
+ \foreach \x in {-1.5, -1, -0.5, 0, 0.5, 1, 1.5} {
+ \begin{scope}[shift={(0, \x)}]
+ \draw [-latex'] (0, 0) -- +(0.4, 0) ;
+ \draw [-latex'] (0.8, -0.2) -- +(0.2, 0.2) ;
+ \draw [-latex'] (1.5, 0) -- +(0.4, 0) ;
+ \draw [-latex'] (2.3, -0.1) -- +(0.3, -0.08) ;
+ \end{scope}
+ }
+ \end{tikzpicture}
+\end{center} % improve picture
+Using this, a lot of the desired properties of geodesics immediately follow from well-known properties of the covariant derivative. For example,
+\begin{prop}
+ If $\gamma$ is a geodesic, then $|\dot{\gamma}(t)|_g$ is constant.
+\end{prop}
+
+\begin{proof}
+ We use the extension $\dot{\underline{\gamma}}$ around $p = \gamma(0)$, and stop writing the underlines. Then we have
+ \[
+ \dot{\gamma}(g(\dot{\gamma}, \dot{\gamma})) = g(\nabla_{\dot{\gamma}} \dot{\gamma}, \dot{\gamma})+ g(\dot{\gamma}, \nabla_{\dot{\gamma}} \dot{\gamma}) = 0,
+ \]
+ which is valid at each $q = \gamma(t)$ on the curve. But at each $q$, we have
+ \[
+ \dot{\gamma}(g(\dot{\gamma}, \dot{\gamma})) = \dot{x}^k \frac{\partial}{\partial x_k} g(\dot{\gamma}, \dot{\gamma}) = \frac{\d}{\d t} |\dot{\gamma}(t)|_g^2
+ \]
+ by the chain rule. So we are done.
+\end{proof}
+
+At this point, it might be healthy to look at some examples of geodesics.
+\begin{eg}
+ In $\R^n$ with the Euclidean metric, we have $\Gamma^i_{jk} = 0$. So the geodesic equation is
+ \[
+ \ddot{x}^k = 0.
+ \]
+ So the geodesics are just straight lines.
+\end{eg}
+
+\begin{eg}
+ Consider the sphere $S^n$ with the usual metric induced by the standard embedding $S^n \hookrightarrow \R^{n + 1}$. Then the geodesics are great circles.
+
+ To see this, we may wlog $p = e_0$ and $a = e_1$, for a standard basis $\{e_i\}$ of $\R^{n + 1}$. We can look at the map
+ \[
+ \varphi: (x_0, \cdots, x_n) \mapsto (x_0, x_1, -x_2, \cdots, -x_n),
+ \]
+ and it is clearly an isometry of the sphere. Therefore it preserves the Riemannian metric, and hence sends geodesics to geodesics. Since it also fixes $p$ and $a$, we know $\varphi(\gamma) = \gamma$ by uniqueness. So $\gamma$ must be contained in the great circle lying in the plane spanned by $e_0$ and $e_1$.
+\end{eg}
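+Alternatively, assuming the standard fact that for a submanifold of Euclidean space, the Levi-Civita covariant derivative is obtained by differentiating in the ambient space and projecting onto the tangent space, we can verify this directly: the unit speed great circle $\gamma(t) = \cos(t)\, e_0 + \sin(t)\, e_1$ has $\ddot{\gamma} = -\gamma$, which is normal to $S^n$ at $\gamma(t)$. So its tangential projection vanishes, i.e.\ $\frac{\nabla \dot{\gamma}}{\d t} = 0$.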
+
+\begin{lemma}
+ Let $p \in M$, and $a \in T_p M$. As before, let $\gamma_p(t, a)$ be the geodesic with $\gamma(0) = p$ and $\dot{\gamma}(0) = a$. Then
+ \[
+ \gamma_p(\lambda t, a) = \gamma_p(t, \lambda a),
+ \]
+ and in particular is a geodesic.
+\end{lemma}
+
+\begin{proof}
+ We apply the chain rule to get
+ \begin{align*}
+ \frac{\d}{\d t} \gamma(\lambda t, a) &= \lambda \dot{\gamma} (\lambda t, a)\\
+ \frac{\d^2}{\d t^2} \gamma(\lambda t, a) &= \lambda^2 \ddot{\gamma}(\lambda t, a).
+ \end{align*}
+ So $\gamma(\lambda t, a)$ satisfies the geodesic equations, and has initial velocity $\lambda a$. Then we are done by uniqueness of ODE solutions.
+\end{proof}
+
+Thus, instead of considering $\gamma_p(t, a)$ for arbitrary $t$ and $a$, we can just fix $t = 1$, and look at the different values of $\gamma_p(1, a)$. By ODE theorems, we know this depends smoothly on $a$, and is defined on some open neighbourhood of $0 \in T_p M$.
+
+\begin{defi}[Exponential map]\index{exponential map}
+ Let $(M, g)$ be a Riemannian manifold, and $p \in M$. We define $\exp_p$ by
+ \[
+ \exp_p(a) = \gamma_p(1, a) \in M
+ \]
+ for $a \in T_p M$ whenever this is defined.
+\end{defi}
+
+We know this function has domain at least some open ball around $0 \in T_p M$, and is smooth. Also, by construction, we have $\exp_p(0) = p$.
+
+In fact, the exponential map gives us a chart around $p$ locally, known as \emph{geodesic local coordinates}. To do so, it suffices to note the following rather trivial proposition.
+
+\begin{prop}
+ We have
+ \[
+ (\d \exp_p)_0 = \id _{T_p M},
+ \]
+ where we identify $T_0 (T_p M) \cong T_p M$ in the natural way.
+\end{prop}
+All this is saying is if you go in the direction of $a \in T_p M$, then you go in the direction of $a$.
+
+\begin{proof}
+ \[
+ (\d \exp_p)_0(v) = \left.\frac{\d}{\d t}\right|_{t = 0} \exp_p(tv) = \left.\frac{\d}{\d t}\right|_{t = 0} \gamma_p(1, tv) = \left.\frac{\d}{\d t}\right|_{t = 0} \gamma_p(t, v) = v.\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ $\exp_p$ maps an open ball $B(0, \delta) \subseteq T_p M$ to $U \subseteq M$ diffeomorphically for some $\delta > 0$.
+\end{cor}
+
+\begin{proof}
+ By the inverse mapping theorem.
+\end{proof}
+
+This tells us the inverse of the exponential map gives us a chart of $M$ around $p$. These coordinates are often known as \term{geodesic local coordinates}.
+
+In these coordinates, the geodesics from $p$ have the very simple form
+\[
+ \gamma_p(t, a) = ta
+\]
+for all $a \in T_p M$ and $t$ sufficiently small that this makes sense.
+
+\begin{cor}
+ For any point $p \in M$, there exists a local coordinate chart around $p$ such that
+ \begin{itemize}
+ \item The coordinates of $p$ are $(0, \cdots, 0)$.
+ \item In local coordinates, the metric at $p$ is $g_{ij}(p) = \delta_{ij}$.
+ \item We have $\Gamma^i_{jk}(p) = 0$.
+ \end{itemize}
+\end{cor}
+Coordinates satisfying these properties are known as \term{normal coordinates}.
+\begin{proof}
+ The geodesic local coordinates satisfy these properties, after identifying $T_p M$ isometrically with $(\R^n, \mathrm{eucl})$. For the last property, we note that the geodesic equations are given by
+ \[
+ \ddot{x}^i + \Gamma^i_{jk}\dot{x}^k \dot{x}^j = 0.
+ \]
+ But in these coordinates, the geodesics through $p$ are straight lines through the origin. So along each such line we have $\Gamma^i_{jk} \dot{x}^j \dot{x}^k = 0$; at $p$ this holds for every direction $\dot{x}$, and since $\Gamma^i_{jk}$ is symmetric in $j$ and $k$, we must have $\Gamma^i_{jk}(p) = 0$.
+\end{proof}
+Such coordinates will be useful later on for explicit calculations, since whenever we want to verify a coordinate-independent equation (which is essentially all equations we care about), we can check it at each point, and then use normal coordinates at that point to simplify calculations.
+
+We again identify $(T_p M, g(p)) \cong (\R^n, \mathrm{eucl})$, and then we have a map
+\[
+ (r, \mathbf{v}) \in (0, \delta) \times S^{n - 1} \mapsto \exp_p (r\mathbf{v}) \in M^n.
+\]
+This chart is known as \term{geodesic polar coordinates}. For each fixed $r$, the image of this map is called a \term{geodesic sphere} of geodesic radius $r$, written $\Sigma_r$\index{$\Sigma_r$}. This is an embedded submanifold of $M$.
+
+Note that in geodesic local coordinates, the metric at $0 \in T_p M$ is given by the Euclidean metric. However, the metric at other points can be complicated. Fortunately, Gauss' lemma says it is not \emph{too} complicated.
+
+\begin{thm}[Gauss' lemma]
+ The geodesic spheres are perpendicular to their radii. More precisely, $\gamma_p(t, a)$ meets every $\Sigma_r$ orthogonally, whenever this makes sense. Thus we can write the metric in geodesic polars as
+ \[
+ g = \d r^2 + h(r, \mathbf{v}),
+ \]
+ where for each $r$, we have
+ \[
+ h(r, \mathbf{v}) = g|_{\Sigma_r}.
+ \]
+ In matrix form, we have
+ \[
+ g =
+ \begin{pmatrix}
+ 1 & 0 & \cdots & 0\\
+ 0\\
+ \rvdots & & h\\
+ 0
+ \end{pmatrix}
+ \]
+\end{thm}
+
+The proof is not hard, but it involves a few subtle points.
+\begin{proof}
+ We work in geodesic polar coordinates. It is clear that $g(\partial_r, \partial_r) = 1$, since the radial curves are unit speed geodesics.
+
+ Consider an arbitrary vector field $X = X(\mathbf{v})$ on $S^{n - 1}$. This induces a vector field on some neighbourhood $B(0, \delta) \subseteq T_p M$ by
+ \[
+ \tilde{X}(r\mathbf{v}) = X(\mathbf{v}).
+ \]
+ Pick a unit vector $\mathbf{v} \in T_p M$, and consider the unit speed geodesic $\gamma$ in the direction of $\mathbf{v}$. We define
+ \[
+ G(r) = g(\tilde{X}(r\mathbf{v}), \dot{\gamma}(r)) = g(\tilde{X}, \dot{\gamma}(r)).
+ \]
+ We begin by noticing that
+ \[
+ \nabla_{\partial_r} \tilde{X} - \nabla_{\tilde{X}} \partial_r = [\partial_r , \tilde{X}] = 0.
+ \]
+ Also, we have
+ \[
+ \frac{\d}{\d r} G(r) = g(\nabla_{\dot{\gamma}} \tilde{X}, \dot{\gamma}) + g(\tilde{X}, \nabla_{\dot{\gamma}} \dot{\gamma}).
+ \]
+ We know the second term vanishes, since $\gamma$ is a geodesic. Noting that $\dot{\gamma} = \frac{\partial}{\partial r}$, we know the first term is equal to
+ \[
+ g(\nabla_{\tilde{X}} \partial_r, \partial_r) = \frac{1}{2} \Big(g(\nabla_{\tilde{X}} \partial_r, \partial_r) + g( \partial_r, \nabla_{\tilde{X}}\partial_r)\Big) = \frac{1}{2} \tilde{X} (g(\partial_r, \partial_r)) = 0,
+ \]
+ since we know that $g(\partial_r, \partial_r) = 1$ constantly.
+
+ Thus, we know $G(r)$ is constant. But $G(0) = 0$ since the metric at $0$ is the Euclidean metric. So $G$ vanishes everywhere, and so $\partial_r$ is perpendicular to $\Sigma_r$.
+
+\end{proof}
+
+\begin{cor}
+ Let $a, w \in T_p M$. Then
+ \[
+ g((\d \exp_p)_a a, (\d \exp_p)_a w) = g(a, w)
+ \]
+ whenever $a$ lies in the domain of the geodesic local coordinates.
+\end{cor}
+
+\subsection{Jacobi fields}
+Fix a Riemannian manifold $M$. Let's imagine that we have a ``manifold'' of all smooth curves on $M$. Then this ``manifold'' has a ``tangent space''. Morally, given a curve $\gamma$, a ``tangent vector'' at $\gamma$ in the space of curves should correspond to providing a tangent vector (in $M$) at each point along $\gamma$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, thick] (0, 0) sin (1, 0.3) cos (2, 0) sin (3, -0.3) cos (4, 0);
+
+ \draw [-latex'] (0, 0) -- +(0, 0.4);
+ \draw [-latex'] (0.5, 0.2) -- +(0, 0.3);
+ \draw [-latex'] (1, 0.3) -- +(0, 0.15);
+ \draw [-latex'] (1.5, 0.2) -- +(0, -0.15);
+ \draw [-latex'] (2, 0) -- +(0, -0.15);
+ \draw [-latex'] (2.5, -0.2) -- +(0, -0.2);
+ \draw [-latex'] (3, -0.3) -- +(0, 0.2);
+ \draw [-latex'] (3.5, -0.2) -- +(0, 0.4);
+ \draw [-latex'] (4, 0) -- +(0, 0.3);
+ \end{tikzpicture}
+\end{center}
+Since we are interested in the geodesics only, we consider the ``submanifold'' of geodesic curves. What are the corresponding ``tangent vectors'' living in this ``submanifold''?
+
+In rather more concrete terms, suppose $f_s(t) = f(t, s)$ is a family of geodesics in $M$ indexed by $s \in (-\varepsilon, \varepsilon)$. What do we know about $\left.\frac{\partial f}{\partial s}\right|_{s = 0}$, a vector field along $f_0$?
+
+We begin by considering such families that fix the starting point $f(0, s)$, and then derive some properties of $\frac{\partial f}{\partial s}$ in these special cases. We will then define a \emph{Jacobi field} to be any vector field along a curve that satisfies these properties. We will then prove that these are exactly the variations of geodesics.
+
+Suppose $f(t, s)$ is a family of geodesics such that $f(0, s) = p$ for all $s$. Then in geodesic local coordinates, it must look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \foreach \y in {-1.75,-1.25,-0.75,-0.25,0.25,0.75,1.25,1.75} {
+ \draw [->, mblue, thick] (-2.5, -\y) -- (2.5, \y);
+ }
+ \end{tikzpicture}
+\end{center}
+For a fixed $p$, such a family is uniquely determined by a function
+\[
+ a: (-\varepsilon, \varepsilon) \to T_p M
+\]
+such that
+\[
+ f(t, s) = \exp_p(t a(s)).
+\]
+The initial conditions of this variation can be given by $a(0) = a$ and
+\[
+ \dot{a}(0) = w \in T_a(T_p M) \cong T_p M.
+\]
+We would like to know the ``variation field'' of $\gamma(t) = f(t, 0) = \gamma_p(t, a)$ this induces. In other words, we want to find $\frac{\partial f}{\partial s} (t, 0)$. This is not hard. It is just given by
+\[
+ (\d \exp_p)_{ta} (tw) = \frac{\partial f}{\partial s}(t, 0).
+\]
+As before, to prove something about $f$, we want to make good use of the properties of $\nabla$. Locally, we extend the vectors $\frac{\partial f}{\partial s}$ and $\frac{\partial f}{\partial t}$ to vector fields $\frac{\partial}{\partial s}$ and $\frac{\partial}{\partial t}$. Then in this set-up, we have
+\[
+ \dot{\gamma} = \frac{\partial f}{\partial t} = \frac{\partial}{\partial t}.
+\]
+Note that in $\frac{\partial f}{\partial t}$, we are differentiating $f$ with respect to $t$, whereas the $\frac{\partial}{\partial t}$ on the far right is just a formal expression.
+
+By the geodesic equation, we have
+\[
+ 0 = \frac{\nabla}{\d t} \dot{\gamma} = \nabla_{\partial_t} \partial_t.
+\]
+Therefore, using the definition of the curvature tensor $R$, we obtain
+\begin{align*}
+ 0 = \nabla_{\partial_s} \nabla_{\partial_t} \frac{\partial}{\partial t} &= \nabla_{\partial_t} \nabla_{\partial_s} \partial_t - R(\partial_s, \partial_t) \partial_t\\
+ &= \nabla_{\partial_t} \nabla_{\partial_s} \partial_t + R(\partial_t, \partial_s) \partial_t.
+\end{align*}
+We let this act on the function $f$. So we get
+\[
+ 0 = \frac{\nabla}{\d t} \frac{\nabla}{\d s} \frac{\partial f}{\partial t} + R(\partial_t, \partial_s) \frac{\partial f}{\partial t}.
+\]
+We write
+\[
+ J(t) = \frac{\partial f}{\partial s}(t, 0),
+\]
+which is a vector field along the geodesic $\gamma$. Using the fact that
+\[
+ \frac{\nabla}{\d s} \frac{\partial f}{\partial t} = \frac{\nabla}{\d t} \frac{\partial f}{\partial s},
+\]
+we find that $J$ must satisfy the ordinary differential equation
+\[
+ \frac{\nabla^2}{\d t^2} J + R(\dot{\gamma}, J) \dot{\gamma} = 0.
+\]
+This is a linear second-order ordinary differential equation.
+
+\begin{defi}[Jacobi field]\index{Jacobi field}
+ Let $\gamma: [0, L] \to M$ be a geodesic. A \emph{Jacobi field} is a vector field $J$ along $\gamma$ that is a solution of the \term{Jacobi equation} on $[0, L]$
+ \[
+ \frac{\nabla^2}{\d t^2} J + R(\dot{\gamma}, J) \dot{\gamma} = 0. \tag{$\dagger$}
+ \]
+\end{defi}
+
+We now embark on a rather technical journey to prove results about Jacobi fields. Observe that $\dot{\gamma}(t)$ and $t \dot{\gamma}(t)$ both satisfy this equation, rather trivially.
+\begin{thm}
+ Let $\gamma: [0, L] \to M$ be a geodesic in a Riemannian manifold $(M, g)$. Then
+ \begin{enumerate}
+ \item For any $u, v \in T_{\gamma(0)}M$, there is a unique Jacobi field $J$ along $\gamma$ with
+ \[
+ J(0) = u,\quad \frac{\nabla J}{\d t}(0) = v.
+ \]
+ If
+ \[
+ J(0) = 0,\quad \frac{\nabla J}{\d t}(0) = k \dot{\gamma}(0),
+ \]
+ then $J(t) = kt \dot{\gamma}(t)$. Moreover, if both $J(0)$ and $\frac{\nabla J}{\d t}(0)$ are orthogonal to $\dot{\gamma}(0)$, then $J(t)$ is perpendicular to $\dot{\gamma}(t)$ for all $t \in [0, L]$.
+
+ In particular, the vector space of all Jacobi fields along $\gamma$ has dimension $2n$, where $n = \dim M$.
+
+ The subspace of those Jacobi fields pointwise perpendicular to $\dot{\gamma}(t)$ has dimension $2(n - 1)$.
+ \item $J(t)$ is independent of the parametrization of $\gamma(t)$. Explicitly, if $\tilde{\gamma}(t) = \gamma(\lambda t)$, then $\tilde{J}$ with the same initial conditions as $J$ is given by
+ \[
+ \tilde{J}(\tilde{\gamma}(t)) = J(\gamma(\lambda t)).
+ \]
+ \end{enumerate}
+\end{thm}
+
+This is the kind of theorem whose statement is longer than the proof.
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Pick an orthonormal basis $e_1,\cdots, e_n$ of $T_p M$, where $p = \gamma(0)$, and let $X_i(t)$ be the parallel transport of $e_i$ along $\gamma$. Since the Levi-Civita connection is compatible with the metric, parallel transport preserves the inner product, so $\{X_i(t)\}$ remains an orthonormal basis at each point.
+
+ We take $e_1$ to be parallel to $\dot{\gamma}(0)$. By definition, we have
+ \[
+ X_i(0) = e_i,\quad \frac{\nabla X_i}{\d t} = 0.
+ \]
+ Now we can write
+ \[
+ J = \sum_{i = 1}^n y_i X_i.
+ \]
+ Then taking $g(X_i, \ph)$ of $(\dagger)$, we find that
+ \[
+ \ddot{y}_i + \sum_{j = 2}^n R(\dot{\gamma}, X_j, \dot{\gamma}, X_i) y_j = 0.
+ \]
+ Then the claims of the theorem follow from the standard existence and uniqueness of solutions of differential equations.
+
+ In particular, for the orthogonality part, we know that $J(0)$ and $\frac{\nabla J}{\d t}(0)$ being perpendicular to $\dot{\gamma}$ is equivalent to $y_1(0) = \dot{y}_1 (0) = 0$, and then Jacobi's equation gives
+ \[
+ \ddot{y}_1(t) = 0.
+ \]
+ \item This follows from uniqueness.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Our discussion of Jacobi fields so far has been rather theoretical. Now that we have an explicit equation for the Jacobi field, we can actually produce some of them. We will look at the case where we have constant sectional curvature.
+
+\begin{eg}
+ Suppose the sectional curvature is constantly $K \in \R$, for $\dim M \geq 3$. We wlog $|\dot{\gamma}| = 1$. Let $J$ be a Jacobi field along $\gamma$, normal to $\dot{\gamma}$.
+
+ Then for any vector field $T$ along $\gamma$, we have
+ \[
+ \bra R(\dot{\gamma}, J) \dot{\gamma}, T\ket = K(g(\dot{\gamma}, \dot{\gamma}) g(J, T) - g(\dot{\gamma}, J) g(\dot{\gamma}, T)) = K g(J, T).
+ \]
+ Since this is true for all $T$, we know
+ \[
+ R(\dot{\gamma}, J) \dot{\gamma} = KJ.
+ \]
+ Then the Jacobi equation becomes
+ \[
+ \frac{\nabla^2}{\d t^2}J + KJ = 0.
+ \]
+ So we can immediately write down a collection of solutions
+ \[
+ J(t) =
+ \begin{cases}
+ \frac{\sin(t \sqrt{K})}{\sqrt{K}} X_i(t) & K > 0\\
+ t X_i(t) & K = 0\\
+ \frac{\sinh(t \sqrt{-K})}{\sqrt{-K}} X_i(t) & K < 0
+ \end{cases}.
+ \]
+ for $i = 2, \cdots, n$, and this has initial conditions
+ \[
+ J(0) = 0,\quad \frac{\nabla J}{\d t}(0) = e_i.
+ \]
+ Note that these Jacobi fields vanish at $0$.
+\end{eg}
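+Indeed, it is easy to check that these solve the Jacobi equation: since the $X_i$ are parallel along $\gamma$, we have
+\[
+ \frac{\nabla^2}{\d t^2} \big(f(t) X_i(t)\big) = \ddot{f}(t) X_i(t),
+\]
+and each of the coefficient functions above satisfies $\ddot{f} + Kf = 0$ with $f(0) = 0$ and $\dot{f}(0) = 1$, which also gives the stated initial conditions.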
+
+We can now deliver our promise, proving that Jacobi fields are precisely the variations of geodesics.
+\begin{prop}
+ Let $\gamma: [a, b] \to M$ be a geodesic, and $f(t, s)$ a variation of $\gamma(t) = f(t, 0)$ such that $f(t, s) = \gamma_s(t)$ is a geodesic for all $|s|$ small. Then
+ \[
+ J(t) = \frac{\partial f}{\partial s}
+ \]
+ is a Jacobi field along $\gamma$.
+
+ Conversely, every Jacobi field along $\gamma$ can be obtained this way for an appropriate function $f$.
+\end{prop}
+
+\begin{proof}
+ The first part is exactly the computation we had at the beginning of the section, but for the benefit of the reader, we will reproduce the proof again.
+ \begin{align*}
+ \frac{\nabla^2 J}{\d t^2} &= \nabla_t \nabla_t \frac{\partial f}{\partial s}\\
+ &= \nabla_t \nabla_s \frac{\partial f}{\partial t}\\
+ &= \nabla_s \left(\nabla_t \frac{\partial f}{\partial t}\right) - R(\partial_t, \partial_s) \dot{\gamma}_s.
+ \end{align*}
+ We notice that the first term vanishes, because $\nabla_t \frac{\partial f}{\partial t} = 0$ by definition of geodesic. So we find
+ \[
+ \frac{\nabla^2 J}{\d t^2} = -R(\dot{\gamma}, J) \dot{\gamma},
+ \]
+ which is the Jacobi equation.
+
+ The converse requires a bit more work. We will write $J'(0)$ for the covariant derivative of $J$ along $\gamma$. Given a Jacobi field $J$ along a geodesic $\gamma(t)$ for $t \in [0, L]$, we let $\tilde{\gamma}$ be another geodesic such that
+ \[
+ \tilde{\gamma}(0) = \gamma(0),\quad \dot{\tilde{\gamma}}(0) = J(0).
+ \]
+ We take parallel vector fields $X_0, X_1$ along $\tilde{\gamma}$ such that
+ \[
+ X_0(0) = \dot{\gamma}(0),\quad X_1(0) = J'(0).
+ \]
+ We put $X(s) = X_0(s) + s X_1(s)$. We put
+ \[
+ f(t, s) = \exp_{\tilde{\gamma}(s)} (t X(s)). % insert a picture?
+ \]
+ In local coordinates, for each fixed $s$, we find
+ \[
+ f(t, s) = \tilde{\gamma}(s) + t X(s) + O(t^2)
+ \]
+ as $t \to 0$. Then we define
+ \[
+ \gamma_s(t) = f(t, s)
+ \]
+ whenever this makes sense. This depends smoothly on $s$, and the previous arguments say we get a Jacobi field
+ \[
+ \hat{J}(t) = \frac{\partial f}{\partial s}(t, 0).
+ \]
+ We now want to check that $\hat{J} = J$. Then we are done. To do so, we have to check the initial conditions. We have
+ \[
+ \hat{J}(0) = \frac{\partial f}{\partial s}(0, 0) = \frac{\d \tilde{\gamma}}{\d s}(0) = J(0),
+ \]
+ and also
+ \[
+ \hat{J}'(0) = \frac{\nabla}{\d t} \frac{\partial f}{\partial s}(0, 0) = \frac{\nabla}{\d s} \frac{\partial f}{\partial t}(0, 0) = \frac{\nabla X}{\d s}(0) = X_1(0) = J'(0).
+ \]
+ So we have $\hat{J} = J$.
+\end{proof}
+
+\begin{cor}
+ Every Jacobi field $J$ along a geodesic $\gamma$ with $J(0) = 0$ is given by
+ \[
+ J(t) = (\d \exp_p)_{t \dot{\gamma}(0)} (t J'(0))
+ \]
+ for all $t \in [0, L]$.
+\end{cor}
+This is just a reiteration of the fact that if we pull back to the geodesic local coordinates, then the variation must look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -2) -- (0, 2);
+ \draw [mblue, thick] (-2.5, 0) -- (2.5, 0);
+ \draw [mblue, thick] (-2.5, -1.2) -- (2.5, 1.2);
+
+ \draw [-latex'] (0.5, 0) -- +(0, 0.24);
+ \draw [-latex'] (1, 0) -- +(0, 0.48);
+ \draw [-latex'] (1.5, 0) -- +(0, 0.72);
+ \draw [-latex'] (2, 0) -- +(0, 0.96);
+
+ \draw [-latex'] (-0.5, 0) -- +(0, -0.24);
+ \draw [-latex'] (-1, 0) -- +(0, -0.48);
+ \draw [-latex'] (-1.5, 0) -- +(0, -0.72);
+ \draw [-latex'] (-2, 0) -- +(0, -0.96);
+ \end{tikzpicture}
+\end{center}
+But this corollary is stronger, in the sense that it holds even when we leave the geodesic local coordinates (i.e.\ when $\exp_p$ no longer gives a chart).
+
+\begin{proof}
+ Write $\dot{\gamma}(0) = a$, and $J'(0) = w$. By above, we can construct the variation by
+ \[
+ f(t, s) = \exp_p(t (a + sw)).
+ \]
+ Then
+ \[
+ (\d \exp_p)_{t(a + sw)} (tw) = \frac{\partial f}{\partial s}(t, s),
+ \]
+ which is just an application of the chain rule. Putting $s = 0$ gives the result.
+\end{proof}
+
+It can be shown that in the situation of the corollary, if $a \perp w$, and $|a| = |w| = 1$, then
+\[
+ |J(t)| = t - \frac{1}{3!} K(\sigma) t^3 + o(t^3)
+\]
+as $t \to 0$, where $\sigma$ is the plane spanned by $a$ and $w$.
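+
+For instance, on the round sphere $S^2$ (where $K \equiv 1$), write $J(t) = j(t) E(t)$ for a parallel unit normal field $E$ along a unit-speed geodesic; the Jacobi equation then reduces to $j'' + j = 0$, and the solution with $J(0) = 0$ and $|J'(0)| = 1$ satisfies
+\[
+ |J(t)| = \sin t = t - \frac{1}{3!} t^3 + o(t^3),
+\]
+in agreement with the expansion above.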
+
+\subsection{Further properties of geodesics}
+We can now use Jacobi fields to prove interesting things. We now revisit the Gauss lemma, and deduce a stronger version.
+\begin{lemma}[Gauss' lemma]\index{Gauss' lemma}
+ Let $a, w \in T_p M$, and
+ \[
+ \gamma = \gamma_p(t, a) = \exp_p(ta)
+ \]
+ a geodesic. Then
+ \[
+ g_{\gamma(t)} ((\d \exp_p)_{ta} a, (\d \exp_p)_{ta} w) = g_{\gamma(0)}(a, w).
+ \]
+ In particular, $\gamma$ meets the geodesic spheres $\exp_p \{v \in T_p M: |v| = r\}$ orthogonally. Note that these need not be submanifolds.
+\end{lemma}
+This is an improvement of the previous version, which required us to live in the geodesic local coordinates.
+
+\begin{proof}
+ We fix any $r > 0$, and consider the Jacobi field $J$ satisfying
+ \[
+ J(0) = 0,\quad J'(0) = \frac{w}{r}.
+ \]
+ Then by the corollary, we know the Jacobi field is
+ \[
+ J(t) = (\d \exp_p)_{ta} \left(\frac{tw}{r}\right).
+ \]
+ We may write
+ \[
+ \frac{w}{r} = \lambda a + u,
+ \]
+ with $a \perp u$. Then since Jacobi fields depend linearly on initial conditions, we write
+ \[
+ J(t) = \lambda t \dot{\gamma}(t) + J_n(t)
+ \]
+ for some Jacobi field $J_n$ that is a normal vector field along $\gamma$. So we have
+ \[
+ g(J(r), \dot{\gamma}(r)) = \lambda r |\dot{\gamma}(r)|^2 = g(w, a).
+ \]
+ But we also have
+ \[
+  g(w, a) = g(\lambda r a + r u, a) = \lambda r |a|^2 = \lambda r |\dot{\gamma}(0)|^2 = \lambda r |\dot{\gamma}(r)|^2.
+ \]
+ Now we use the fact that
+ \[
+ J(r) = (\d \exp_p)_{ra} w
+ \]
+ and
+ \[
+ \dot{\gamma}(r) = (\d \exp_p)_{ra} a,
+ \]
+ and we are done.
+\end{proof}
+
+\begin{cor}[Local minimizing of length]
+ Let $a \in T_p M$. We define $\varphi(t) = ta$, and let $\psi: [0, 1] \to T_p M$ be a piecewise $C^1$ curve such that
+ \[
+ \psi(0) = 0,\quad \psi(1) = a.
+ \]
+ Then
+ \[
+ \length(\exp_p \circ \psi) \geq \length(\exp_p \circ \varphi) = |a|.
+ \]
+\end{cor}
+It is important to interpret this corollary precisely. It only applies to curves with the same end point \emph{in $T_p M$}. If we have two curves in $T_p M$ whose end points have the same image in $M$, then the result need not hold (the torus would be a counterexample).
+
+\begin{proof}
+ We may of course assume that $\psi$ never hits $0$ again after $t = 0$. We write
+ \[
+ \psi(t) = \rho(t) \mathbf{u}(t),
+ \]
+ where $\rho(t) \geq 0$ and $|\mathbf{u}(t)| = 1$. Then
+ \[
+ \psi' = \rho' \mathbf{u} + \rho \mathbf{u}'.
+ \]
+ Then using the extended Gauss lemma, and the general fact that if $\mathbf{u}(t)$ is a unit vector for all $t$, then $\mathbf{u} \cdot \mathbf{u}' = \frac{1}{2}(\mathbf{u}\cdot \mathbf{u})' = 0$, we have
+ \begin{align*}
+  \left|\frac{\d}{\d t} (\exp_p \circ \psi) (t)\right|^2 &= \left|(\d \exp_p)_{\psi(t)} \psi'(t) \right|^2 \\
+ &= \rho'(t)^2 + 2g(\rho'(t) \mathbf{u}(t), \rho(t) \mathbf{u}'(t)) + \rho(t)^2 |(\d \exp_p)_{\psi(t)} \mathbf{u}'(t)|^2\\
+  &= \rho'(t)^2 + \rho(t)^2 |(\d \exp_p)_{\psi(t)} \mathbf{u}'(t)|^2 \geq \rho'(t)^2.
+ \end{align*}
+ Thus we have
+ \[
+ \length(\exp_p \circ \psi) \geq \int_0^1 \rho'(t) \;\d t = \rho(1) - \rho(0) = |a|.\qedhere
+ \]
+\end{proof}
+
+\begin{notation}\index{$\Omega(p, q)$}
+ We write $\Omega(p, q)$ for the set of all piecewise $C^1$ curves from $p$ to $q$.
+\end{notation}
+
+We now wish to define a metric on $M$, in the sense of metric spaces.
+\begin{defi}[Distance]\index{distance}
+ Suppose $M$ is connected, which is the same as it being path connected. Let $p, q \in M$. We define
+ \[
+  d(p, q) = \inf_{\xi \in \Omega(p, q)} \length(\xi).
+ \]
+\end{defi}
+To see that this is indeed a metric, note that all axioms of a metric space are obvious, apart from positivity, i.e.\ that $d(p, q) > 0$ whenever $p \not= q$. This follows from the next theorem.
+\begin{thm}
+ Let $p \in M$, and let $\varepsilon$ be such that $\exp_p|_{B(0, \varepsilon)}$ is a diffeomorphism onto its image, and let $U$ be the image. Then
+ \begin{itemize}
+  \item For any $q \in U$, there is a unique geodesic $\gamma \in \Omega(p, q)$ with $\ell(\gamma) < \varepsilon$. Moreover, $\ell(\gamma) = d(p, q)$, and $\gamma$ is, up to reparametrization, the unique curve in $\Omega(p, q)$ of length $d(p, q)$.
+ \item For any point $q \in M$ with $d(p, q) < \varepsilon$, we have $q \in U$.
+  \item If $q \in M$ is any point and $\gamma \in \Omega(p, q)$ satisfies $\ell(\gamma) = d(p, q) < \varepsilon$, then $\gamma$ is a geodesic.
+ \end{itemize}
+\end{thm}
+
+\begin{proof}
+ Let $q = \exp_p(a)$. Then the path $\gamma(t) = \exp_p(ta)$ is a geodesic from $p$ to $q$ of length $|a| = r < \varepsilon$. This is clearly the only such geodesic, since $\exp_p|_{B(0, \varepsilon)}$ is a diffeomorphism.
+
+ Given any other path $\tilde{\gamma} \in \Omega(p, q)$, we want to show $\ell(\tilde{\gamma}) \geq \ell(\gamma)$, with equality only when $\tilde{\gamma}$ is a reparametrization of $\gamma$. We let
+ \[
+  \tau = \sup \left\{t \in [0, 1]: \tilde{\gamma}([0, t]) \subseteq \exp_p (\overline{B(0, r)})\right\}.
+ \]
+ Note that if $\tau \not= 1$, then we must have $\tilde{\gamma}(\tau) \in \Sigma_r$, the geodesic sphere of radius $r$, otherwise we can continue extending. On the other hand, if $\tau = 1$, then we certainly have $\tilde{\gamma}(\tau) \in \Sigma_r$, since $\tilde{\gamma}(\tau) = q$. Then by local minimizing of length, we have
+ \[
+  \ell(\tilde{\gamma}) \geq \ell(\tilde{\gamma}|_{[0, \tau]}) \geq r.
+ \]
+ Note that we can always lift $\tilde{\gamma}|_{[0, \tau]}$ to a curve from $0$ to $a$ in $T_p M$, since $\exp_p$ is a diffeomorphism on $B(0, \varepsilon)$.
+
+ By looking at the proof of the local minimizing of length, and using the same notation, we know that we have equality iff $\tau = 1$ and
+ \[
+  \rho(t)^2 |(\d \exp_p)_{\psi(t)} \mathbf{u}'(t)|^2 = 0
+ \]
+ for all $t$. Since $\d \exp_p$ is regular, this requires $\mathbf{u}'(t) = 0$ for all $t$ (since $\rho(t) \not= 0$ when $t \not= 0$, or else we can remove the loop to get a shorter curve). This implies $\tilde{\gamma}$ lifts to a straight line in $T_p M$, i.e.\ is a geodesic.
+
+ \separator
+
+ Now given any $q \in M$ with $r = d(p, q) < \varepsilon$, we pick $r' \in (r, \varepsilon)$ and a path $\gamma \in \Omega(p, q)$ such that $\ell(\gamma) < r'$. We again let
+ \[
+ \tau = \sup \left\{t \in [0, 1]: \gamma([0, t]) \subseteq \exp_p (\overline{B(0, r')})\right\}.
+ \]
+ If $\tau \not= 1$, then we must have $\gamma(\tau) \in \Sigma_{r'}$, but lifting to $T_pM$, this contradicts the local minimizing of length.
+
+ \separator
+
+ The last part is an immediate consequence of the previous two.
+\end{proof}
+
+\begin{cor}
+ The distance $d$ on a Riemannian manifold is a metric, and induces the same topology on $M$ as the $C^\infty$ structure.
+\end{cor}
+
+\begin{defi}[Minimal geodesic]\index{minimal geodesic}
+ A \emph{minimal geodesic} is a curve $\gamma: [0, 1] \to M$ such that
+ \[
+ d(\gamma(0), \gamma(1)) = \ell(\gamma).
+ \]
+\end{defi}
+One would certainly want a minimal geodesic to be an actual geodesic. This is an easy consequence of what we've got so far, using the observation that a sub-curve of a minimizing geodesic is still minimizing.
+
+\begin{cor}
+ Let $\gamma: [0, 1] \to M$ be a piecewise $C^1$ minimal geodesic with constant speed. Then $\gamma$ is in fact a geodesic, and is in particular $C^\infty$.
+\end{cor}
+
+\begin{proof}
+ We wlog $\gamma$ is unit speed. Let $t \in [0, 1]$, write $p = \gamma(t)$, and pick $\varepsilon > 0$ such that $\exp_p|_{B(0, \varepsilon)}$ is a diffeomorphism onto its image. Since a sub-curve of a minimal geodesic is still minimal, the theorem tells us $\gamma|_{[t, t + \frac{1}{2}\varepsilon]}$ is a geodesic. So $\gamma$ is $C^\infty$ on $(t, t + \frac{1}{2}\varepsilon)$, and satisfies the geodesic equations there.
+
+ Since we can pick $\varepsilon$ continuously with respect to $t$ by ODE theorems, any $t \in (0, 1)$ lies in one such neighbourhood. So $\gamma$ is a geodesic.
+\end{proof}
+
+While it is not true that geodesics are always minimal geodesics, this is locally true:
+\begin{cor}
+ Let $\gamma: [0, 1] \to M$ be a $C^2$ curve with $|\dot{\gamma}|$ constant. Then $\gamma$ is a geodesic iff it is locally a minimal geodesic, i.e.\ for any $t \in [0, 1)$, there exists $\delta > 0$ such that
+ \[
+ d(\gamma(t), \gamma(t + \delta)) = \ell(\gamma|_{[t, t + \delta]}).
+ \]
+\end{cor}
+
+\begin{proof}
+ This is just carefully applying the previous theorem without getting confused.
+
+ To prove $\Rightarrow$, suppose $\gamma$ is a geodesic, and $t \in [0, 1)$. We wlog $\gamma$ is unit speed. Then pick $U$ and $\varepsilon$ as in the previous theorem, and pick $\delta = \frac{1}{2}\varepsilon$. Then $\gamma|_{[t, t + \delta]}$ is a geodesic with length $ < \varepsilon$ between $\gamma(t)$ and $\gamma(t + \delta)$, and hence must have minimal length.
+
+ To prove the converse, we note that for each $t$, the hypothesis tells us $\gamma|_{[t, t + \delta]}$ is a minimizing geodesic, and hence a geodesic by the previous corollary. By continuity, $\gamma$ must satisfy the geodesic equation at $t$. Since $t$ is arbitrary, $\gamma$ is a geodesic.
+\end{proof}
+
+There is another sense in which geodesics are locally length minimizing. Instead of chopping up a path, we can say it is minimal ``locally'' in the space $\Omega(p, q)$. To do so, we need to give $\Omega(p, q)$ a topology, and we pick the topology of uniform convergence.
+
+\begin{thm}
+ Let $\gamma(t) = \exp_p(ta)$ be a geodesic, for $t \in [0, 1]$. Let $q = \gamma(1)$. Assume $ta$ is a regular point for $\exp_p$ for all $t \in [0, 1]$. Then there exists a neighbourhood of $\gamma$ in $\Omega(p, q)$ such that for all $\psi$ in this neighbourhood, $\ell(\psi) \geq \ell(\gamma)$, with equality iff $\psi = \gamma$ up to reparametrization.
+\end{thm}
+Before we prove the result, we first look at why the two conditions are necessary. To see the necessity of $ta$ being regular, we can consider the sphere and two antipodal points:
+\begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=1];
+ \draw [dashed] (1, 0) arc (0:180:1 and 0.3);
+ \draw (1, 0) arc (0:-180:1 and 0.3);
+
+ \draw [mblue, thick] (0, 1) arc (90:-90:0.3 and 1);
+ \node [circ] at (0, 1) {};
+ \node [above] at (0, 1) {$p$};
+
+ \node [circ] at (0, -1) {};
+ \node [below] at (0, -1) {$q$};
+ \end{tikzpicture}
+\end{center}
+Then while the geodesic between them does minimize distance, it does not do so strictly.
+
+We also do not guarantee global minimization of length. For example, we can consider the torus
+\[
+ T^n = \R^n/\Z^n.
+\]
+This has a flat metric from $\R^n$, and the derivative of the exponential map is the ``identity'' on $\R^n$ at all points. So the geodesics are the straight lines in $\R^n$. Now consider any two $p, q \in T^n$, then there are infinitely many geodesics joining them, but typically, only one of them would be the shortest.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [step=1cm, gray, very thin] (0, 0) grid (6, 4);
+
+ \node [circ, mred] (p) at (1.2, 1.2) {};
+ \node [below] at (p) {$p$};
+
+ \foreach \x in {0,1,2,3,4,5} {
+ \foreach \y in {0, 1, 2, 3} {
+ \pgfmathsetmacro\a{\x + 0.6}
+ \pgfmathsetmacro\b{\y + 0.7}
+ \node [circ, opacity=0.5] at (\a, \b) {};
+
+ \draw [opacity=0.5] (p) -- (\a, \b);
+ }
+ }
+ \node [circ, mblue] at (3.6, 2.7) {};
+ \node [below] at (3.6, 2.7) {$q$};
+ \draw (p) -- (3.6, 2.7);
+ \end{tikzpicture}
+\end{center}
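+
+Concretely, since the geodesics of the flat metric are the images of straight lines, the distance on $T^n$ is realized by the shortest lift:
+\[
+ d(p, q) = \min_{k \in \Z^n} |\tilde{p} - \tilde{q} + k|,
+\]
+where $\tilde{p}, \tilde{q} \in \R^n$ are any lifts of $p$ and $q$, and the minimum is attained since only finitely many $k$ can compete with $k = 0$.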
+
+\begin{proof}
+ The idea of the proof is that if $\psi$ is any curve close to $\gamma$, then we can use the regularity condition to lift the curve back up to $T_p M$, and then apply our previous result.
+
+ Write $\varphi(t) = ta \in T_p M$. Then by the regularity assumption, for all $t \in [0, 1]$, we know $\exp_p$ is a diffeomorphism of some neighbourhood $W(t)$ of $\varphi(t) \in T_p M$ onto its image. By compactness, finitely many such neighbourhoods cover $\varphi([0, 1])$, say $W(t_0), \cdots, W(t_{k - 1})$. We write $W_i = W(t_i)$, and we wlog assume
+ \[
+ 0 = t_0 < t_1 < \cdots < t_k = 1.
+ \]
+ By cutting things up, we may assume
+ \[
+ \gamma([t_i, t_{i + 1}]) \subseteq W_i.
+ \]
+ We let
+ \[
+ U = \bigcup \exp_p (W_i).
+ \]
+ Again by compactness, there is some $\varepsilon > 0$ such that for all $t \in [t_i, t_{i + 1}]$, we have $B(\gamma(t), \varepsilon) \subseteq \exp_p(W_i)$.
+
+ Now consider any curve $\psi \in \Omega(p, q)$ of uniform distance less than $\varepsilon$ from $\gamma$. Then $\psi([t_i, t_{i + 1}]) \subseteq \exp_p(W_i)$ for each $i$. So we can lift it up to $T_p M$ piece by piece, and the end point of the lift is $a$. So we are done by local minimization of length.
+\end{proof}
+Note that the tricky part of doing the proof is to make sure the lift of $\psi$ has the same end point as $\gamma$ in $T_p M$, which is why we needed to do it neighbourhood by neighbourhood.
+
+\subsection{Completeness and the Hopf--Rinow theorem}
+There are some natural questions we can ask about geodesics. For example, we might want to know if geodesics can be extended to exist for all time. We might also be interested if distances can always be realized by geodesics. It turns out these questions have the same answer.
+
+\begin{defi}[Geodesically complete]\index{geodesically complete}\index{complete!geodesically}
+ We say a manifold $(M, g)$ is \emph{geodesically complete} if each geodesic extends for all time. In other words, for all $p \in M$, $\exp_p$ is defined on \emph{all} of $T_p M$.
+\end{defi}
+
+\begin{eg}
+ The upper half plane
+ \[
+ H^2 = \{(x, y) : y> 0\}
+ \]
+ under the induced Euclidean metric is not geodesically complete, since geodesics heading towards the boundary leave $H^2$ in finite time. However, $H^2$ is diffeomorphic to $\R^2$, which \emph{is} geodesically complete. So geodesic completeness is not a topological property.
+\end{eg}
+
+The first theorem we will prove is the following:
+\begin{thm}
+ Let $(M, g)$ be geodesically complete. Then any two points can be connected by a minimal geodesic.
+\end{thm}
+In fact, we will prove something stronger --- let $p \in M$, and suppose $\exp_p$ is defined on all of $T_p M$. Then for all $q \in M$, there is a minimal geodesic between them.
+
+To prove this, we need a lemma:
+\begin{lemma}
+ Let $p, q \in M$. Let
+ \[
+ S_\delta = \{x \in M: d(x, p) = \delta\}.
+ \]
+ Then for all sufficiently small $\delta$, there exists $p_0 \in S_\delta$ such that
+ \[
+ d(p, p_0) + d(p_0, q) = d(p, q).
+ \]
+\end{lemma}
+ % insert a picture?
+\begin{proof}
+ For $\delta > 0$ small, we know $S_\delta = \Sigma_\delta$ is a geodesic sphere about $p$, and is compact. Moreover, $d(\ph, q)$ is a continuous function. So there exists some $p_0 \in \Sigma_\delta$ that minimizes $d(\ph, q)$.
+
+ Consider an arbitrary $\gamma \in \Omega(p, q)$. For the sake of sanity, we assume $\delta < d(p, q)$. Then there is some $t$ such that $\gamma(t) \in \Sigma_\delta$, and
+ \[
+ \ell(\gamma) \geq d(p, \gamma(t)) + d(\gamma(t), q) \geq d(p, p_0) + d(p_0, q).
+ \]
+ So we know
+ \[
+  d(p, q) \geq d(p, p_0) + d(p_0, q).
+ \]
+ The triangle inequality gives the opposite direction. So we must have equality.
+\end{proof}
+
+We can now prove the theorem.
+\begin{proof}[Proof of theorem]
+ We know $\exp_p$ is defined on all of $T_p M$. Let $q \in M$. We want a minimal geodesic in $\Omega(p, q)$. By the lemma, there is some $\delta > 0$ and $p_0$ such that
+ \[
+ d(p, p_0) = \delta,\quad d(p, p_0) + d(p_0, q) = d(p, q).
+ \]
+ Also, there is some $v \in T_p M$ such that $\exp_p v = p_0$. We let
+ \[
+ \gamma_p (t) = \exp_p\left(t \frac{v}{|v|}\right).
+ \]
+ We let
+ \[
+ I = \{t \in \R: d(q, \gamma_p(t)) + t = d(p, q)\}.
+ \]
+ Then we know
+ \begin{enumerate}
+ \item $\delta \in I$
+ \item $I$ is closed by continuity.
+ \end{enumerate}
+ Let
+ \[
+  T = \sup \left(I \cap [0, d(p, q)]\right).
+ \]
+ Since $I$ is closed, this is in fact a maximum. So $T \in I$. We claim that $T = d(p, q)$. If so, then $\gamma_p \in \Omega(p, q)$ is the desired minimal geodesic, and we are done.
+
+ Suppose this were not true. Then $T < d(p, q)$. We apply the lemma to $\tilde{p} = \gamma_p(T)$, and $q$ remains as before. Then we can find $\varepsilon > 0$ and some $p_1 \in M$ with the property that
+ \begin{align*}
+ d(p_1, q) &= d(\gamma_p(T), q) - d(\gamma_p(T), p_1) \\
+ &= d(\gamma_p(T), q) - \varepsilon\\
+  &= d(p, q) - T - \varepsilon.
+ \end{align*}
+ Hence we have
+ \[
+ d(p, p_1) \geq d(p, q) - d(q, p_1) = T + \varepsilon.
+ \]
+ Let $\gamma_1$ be the radial (hence minimal) geodesic from $\gamma_p(T)$ to $p_1$. Now we know
+ \[
+ \ell(\gamma_p|_{[0, T]}) + \ell(\gamma_1) = T + \varepsilon.
+ \]
+ So $\gamma_p|_{[0, T]}$ concatenated with $\gamma_1$ is a length-minimizing curve from $p$ to $p_1$, and is hence a geodesic. So in fact $p_1$ lies on $\gamma_p$, say $p_1 = \gamma_p(T + s)$ for some $s > 0$. Then $T + s \in I$, contradicting the maximality of $T$. So we must have $T = d(p, q)$, and hence
+ \[
+ d(q, \gamma_p(T)) + T = d(p, q),
+ \]
+ hence $d(q, \gamma_p(T)) = 0$, i.e.\ $q = \gamma_p(T)$.
+\end{proof}
+
+\begin{cor}[Hopf--Rinow theorem]\index{Hopf--Rinow theorem}
+ For a connected Riemannian manifold $(M, g)$, the following are equivalent:
+ \begin{enumerate}
+ \item $(M, g)$ is geodesically complete.
+ \item For all $p \in M$, $\exp_p$ is defined on all $T_p M$.
+ \item For some $p \in M$, $\exp_p$ is defined on all $T_p M$.
+ \item Every closed and bounded subset of $(M, d)$ is compact.
+ \item $(M, d)$ is complete as a metric space.
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}
+ (i) and (ii) are equivalent by definition. (ii) $\Rightarrow$ (iii) is clear, and we proved (iii) $\Rightarrow$ (i).
+
+ \begin{itemize}
+  \item (iii) $\Rightarrow$ (iv): Let $K \subseteq M$ be closed and bounded. Then by boundedness (and the theorem), $K$ is contained in $\exp_p(\overline{B(0, R)})$ for some $R$. Let $K'$ be the part of the pre-image of $K$ under $\exp_p$ contained in $\overline{B(0, R)}$. Then $K'$ is a closed and bounded subset of $\R^n$, hence compact. Then $K$ is the continuous image of a compact set, hence compact.
+ \item (iv) $\Rightarrow$ (v): This is a general topological fact.
+  \item (v) $\Rightarrow$ (i): Let $\gamma: I \to M$ be a geodesic defined on an interval $I \subseteq \R$. We wlog $|\dot{\gamma}| \equiv 1$. Suppose $I \not= \R$. We wlog $\sup I = a < \infty$. Then $\lim_{t \to a} \gamma(t)$ exists by completeness, and hence $\gamma(a)$ exists. Since geodesics exist locally in any given direction, we can extend $\gamma$ past $a$ in the direction of $\lim_{t \to a} \dot{\gamma}(t)$, which is a contradiction.\qedhere
+ \end{itemize}
+\end{proof}
+
+\subsection{Variations of arc length and energy}
+This section is mostly a huge computation. As we previously saw, geodesics are locally length-minimizing, and we shall see that another quantity, namely the energy, is also a useful thing to consider, as minimizing the energy also forces the parametrization to have constant speed.
+
+To make good use of these properties of geodesics, it is helpful to compute explicitly expressions for how length and energy change along variations. The computations are largely uninteresting, but it will pay off.
+
+\begin{defi}[Energy]\index{energy}
+ The \emph{energy function} $E: \Omega(p, q) \to \R$ is given by
+ \[
+ E(\gamma) = \frac{1}{2} \int_0^T|\dot{\gamma}|^2\;\d t,
+ \]
+ where $\gamma: [0, T] \to M$.
+\end{defi}
+Recall that $\Omega(p, q)$ is defined as the space of piecewise $C^1$ curves. Often, we will make the simplifying assumption that all curves are in fact $C^1$. It doesn't really matter.
+
+Note that the length of a curve is independent of parametrization. Thus, if we are interested in critical points, then the critical points cannot possibly be isolated, as we can just re-parametrize to get a nearby path with the same length. On the other hand, the energy $E$ \emph{does} depend on parametrization. This does have isolated critical points, which is technically very convenient.
+
+\begin{prop}
+ Let $\gamma_0: [0, T] \to M$ be a path from $p$ to $q$ such that for all $\gamma \in \Omega(p, q)$ with $\gamma:[0, T] \to M$, we have $E(\gamma) \geq E(\gamma_0)$. Then $\gamma_0$ must be a geodesic.
+\end{prop}
+
+Recall that we already had such a result for length instead of energy. The proof is just an application of the Cauchy--Schwarz inequality.
+
+\begin{proof}
+ By the Cauchy--Schwarz inequality, we have
+ \[
+  \int_0^T |\dot{\gamma}|^2\; \d t \geq \frac{1}{T}\left(\int_0^T |\dot{\gamma}(t)|\;\d t\right)^2
+ \]
+ with equality iff $|\dot{\gamma}|$ is constant. In other words,
+ \[
+ E(\gamma) \geq \frac{\ell(\gamma)^2}{2T}.
+ \]
+ So we know that if $\gamma_0$ minimizes energy, then it must have constant speed. Now given any $\gamma$, if we just care about its length, then we may wlog assume it has constant speed, and then
+ \[
+ \ell(\gamma) = \sqrt{2E(\gamma)T }\geq \sqrt{2 E(\gamma_0)T} = \ell(\gamma_0).
+ \]
+ So $\gamma_0$ minimizes length, and thus $\gamma_0$ is a geodesic.
+\end{proof}
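+
+As a concrete illustration of the inequality in Euclidean $\R^2$ (with $T = 1$), consider the segment from $(0, 0)$ to $(1, 1)$, of length $\ell = \sqrt{2}$. The non-constant-speed parametrization $\gamma(t) = (t^2, t^2)$ has
+\[
+ E(\gamma) = \frac{1}{2}\int_0^1 8t^2 \;\d t = \frac{4}{3} > 1 = \frac{\ell^2}{2},
+\]
+while the constant-speed parametrization $\gamma(t) = (t, t)$ attains $E(\gamma) = 1 = \frac{\ell^2}{2}$ exactly.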
+
+We shall consider smooth variations $H(t, s)$ of $\gamma_0(t) = H(t, 0)$. We require that $H: [0, T] \times (-\varepsilon, \varepsilon) \to M$ is smooth. Since we are mostly just interested in what happens ``near'' $s = 0$, it is often convenient to just consider the corresponding vector field along $\gamma$:
+\[
+ Y(t) = \left.\frac{\partial H}{\partial s}\right|_{s = 0} = (\d H)_{(t, 0)} \frac{\partial}{\partial s}.
+\]
+Conversely, given any such vector field $Y$, we can generate a variation $H$ that gives rise to $Y$. For example, we can put
+\[
+ H(t, s) = \exp_{\gamma_0(t)} (sY(t)),
+\]
+which is valid on some neighbourhood of $[0, T] \times \{0\}$. If $Y(0) = 0 = Y(T)$, then we can choose $H$ fixing end-points of $\gamma_0$.
+
+\begin{thm}[First variation formula]\index{first variation formula!geodesic}\index{geodesic!first variation formula}\leavevmode
+ \begin{enumerate}
+ \item For any variation $H$ of $\gamma$, we have
+ \[
+ \left.\frac{\d}{\d s}E(\gamma_s)\right|_{s = 0} = g(Y(t), \dot{\gamma}(t))|_0^T - \int_0^T g\left(Y(t), \frac{\nabla}{\d t} \dot{\gamma}(t)\right)\;\d t.\tag{$*$}
+ \]
+  \item The critical points, i.e.\ the $\gamma$ such that
+  \[
+   \left.\frac{\d}{\d s} E(\gamma_s)\right|_{s = 0} = 0
+  \]
+  for all (end-point fixing) variations $H$ of $\gamma$, are geodesics.
+ \item If $|\dot{\gamma}_s(t)|$ is constant for each fixed $s \in (-\varepsilon, \varepsilon)$, and $|\dot{\gamma}(t)| \equiv 1$, then
+ \[
+ \left.\frac{\d}{\d s}E(\gamma_s) \right|_{s = 0} = \left.\frac{\d}{\d s} \ell(\gamma_s)\right|_{s = 0}
+ \]
+ \item If $\gamma$ is a critical point of the length, then it must be a reparametrization of a geodesic.
+ \end{enumerate}
+\end{thm}
+This is just some calculations.
+
+\begin{proof}
+ We will assume that we can treat $\frac{\partial}{\partial s}$ and $\frac{\partial}{\partial t}$ as vector fields on an embedded submanifold, even though $H$ is not necessarily a local embedding.
+
+ The result can be proved without this assumption, but will require more technical work.
+ \begin{enumerate}
+ \item We have
+ \begin{align*}
+ \frac{1}{2} \frac{\partial}{\partial s} g(\dot{\gamma}_s(t), \dot{\gamma}_s(t)) &= g\left(\frac{\nabla}{\d s} \dot{\gamma}_s(t), \dot{\gamma}_s(t)\right)\\
+ &= g\left(\frac{\nabla}{\d t} \frac{\partial H}{\partial s}(t, s), \frac{\partial H}{\partial t} (t, s)\right)\\
+ &= \frac{\partial}{\partial t} g\left(\frac{\partial H}{\partial s}, \frac{\partial H}{\partial t}\right) - g \left(\frac{\partial H}{\partial s}, \frac{\nabla}{\d t} \frac{\partial H}{\partial t}\right).
+ \end{align*}
+ Comparing with what we want to prove, we see that we get what we want by integrating $\int_0^T\;\d t$, and then putting $s = 0$, and then noting that
+ \[
+ \left.\frac{\partial H}{\partial s}\right|_{s = 0} = Y,\quad \left.\frac{\partial H}{\partial t}\right|_{s = 0} = \dot{\gamma}.
+ \]
+ \item If $\gamma$ is a geodesic, then
+ \[
+ \frac{\nabla}{\d t} \dot{\gamma}(t) = 0.
+ \]
+ So the integral on the right hand side of $(*)$ vanishes. Also, we have $Y(0) = 0 = Y(T)$. So the RHS vanishes.
+
+ Conversely, suppose $\gamma$ is a critical point for $E$. Then choose $H$ with
+ \[
+ Y(t) = f(t) \frac{\nabla}{\d t} \dot{\gamma}(t)
+ \]
+ for some $f \in C^\infty[0, T]$ such that $f(0) = f(T) = 0$. Then we know
+ \[
+ \int_0^T f(t) \left|\frac{\nabla}{\d t} \dot{\gamma}(t)\right|^2 \;\d t = 0,
+ \]
+ and this is true for all $f$. So we know
+ \[
+ \frac{\nabla}{\d t}\dot{\gamma} = 0.
+ \]
+ \item This is evident from the previous proposition. Indeed, we fix $[0, T]$, then for all $H$, we have
+ \[
+ E(\gamma_s) = \frac{\ell (\gamma_s)^2}{2T},
+ \]
+ and so
+ \[
+ \left.\frac{\d}{\d s} E(\gamma_s)\right|_{s = 0} = \frac{1}{T} \ell(\gamma_s) \left.\frac{\d}{\d s}\ell(\gamma_s) \right|_{s = 0},
+ \]
+  and when $s = 0$, the curve is parametrized by arc-length, so $\ell(\gamma_0) = T$.
+ \item By reparametrization, we may wlog $|\dot{\gamma}| \equiv 1$. Then $\gamma$ is a critical point for $\ell$, hence for $E$, hence a geodesic.\qedhere
+ \end{enumerate}
+\end{proof}
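+
+As a sanity check, in Euclidean space we have $\frac{\nabla}{\d t} \dot{\gamma} = \ddot{\gamma}$, so $(*)$ reduces to the classical first variation formula
+\[
+ \left.\frac{\d}{\d s}E(\gamma_s)\right|_{s = 0} = \left.\langle Y, \dot{\gamma}\rangle \right|_0^T - \int_0^T \langle Y, \ddot{\gamma}\rangle\;\d t,
+\]
+and the critical points for end-point fixing variations are precisely the straight lines $\ddot{\gamma} = 0$, as expected.
+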
+Often, we are interested in more than just whether the curve is a critical point. We want to know if it maximizes or minimizes energy. Then we need more than the ``first derivative''. We need the ``second derivative'' as well.
+
+\begin{thm}[Second variation formula]\index{second variation formula!geodesic}\index{geodesic!second variation formula}\leavevmode
+ Let $\gamma: [0, T] \to M$ be a geodesic with $|\dot{\gamma}| = 1$. Let $H(t, s)$ be a variation of $\gamma$. Let
+ \[
+ Y(t, s) = \frac{\partial H}{\partial s}(t, s) = (\d H)_{(t, s)} \frac{\partial}{\partial s}.
+ \]
+ Then
+ \begin{enumerate}
+ \item We have
+ \[
+ \left.\frac{\d^2}{\d s^2} E(\gamma_s)\right|_{s = 0} = \left.g\left(\frac{\nabla Y}{\d s}(t, 0), \dot{\gamma}\right)\right|_0^T + \int_0^T (|Y'|^2 - R(Y, \dot\gamma, Y, \dot{\gamma})) \;\d t.
+ \]
+ \item Also
+ \begin{multline*}
+ \left.\frac{\d^2}{\d s^2}\ell(\gamma_s)\right|_{s = 0} = \left.g\left(\frac{\nabla Y}{\d s} (t, 0), \dot{\gamma}(t)\right)\right|_0^T \\
+ + \int_0^T \left( |Y'|^2 - R(Y, \dot{\gamma}, Y, \dot{\gamma}) - g(\dot{\gamma}, Y')^2\right)\;\d t,
+ \end{multline*}
+ where $R$ is the $(4, 0)$ curvature tensor, and
+ \[
+ Y'(t) = \frac{\nabla Y}{\d t}(t, 0).
+ \]
+ Putting
+ \[
+ Y_n = Y - g(Y, \dot{\gamma}) \dot{\gamma}
+ \]
+ for the normal component of $Y$, we can write this as
+ \[
+ \left.\frac{\d^2}{\d s^2}\ell(\gamma_s)\right|_{s = 0} = \left.g\left(\frac{\nabla Y_n}{\d s} (t, 0), \dot{\gamma}(t)\right)\right|_0^T + \int_0^T \left( |Y'_n|^2 - R(Y_n, \dot{\gamma}, Y_n, \dot{\gamma})\right)\;\d t.
+ \]
+ \end{enumerate}
+\end{thm}
+Note that if we have fixed end points, then the first terms in the variation formulae vanish.
+
+\begin{proof}
+ We use
+ \[
+ \frac{\d}{\d s}E(\gamma_s) = \left.g(Y(t, s), \dot{\gamma}_s(t))\right|_{t = 0}^{t = T} - \int_0^T g\left(Y(t, s), \frac{\nabla}{\d t} \dot{\gamma}_s(t)\right)\;\d t.
+ \]
+ Taking the derivative with respect to $s$ again gives
+ \begin{multline*}
+ \frac{\d^2}{\d s^2}E(\gamma_s) =\left. g\left(\frac{\nabla Y}{\d s}, \dot{\gamma}\right)\right|_{t = 0}^T + \left.g\left(Y, \frac{\nabla}{\d s} \dot{\gamma}_s\right)\right|_{t = 0}^{T} \\
+ - \int_0^T \left(g\left(\frac{\nabla Y}{\d s}, \frac{\nabla}{\d t} \dot{\gamma}_s\right) + g\left(Y, \frac{\nabla}{\d s} \frac{\nabla}{\d t} \dot{\gamma}\right)\right)\;\d t.
+ \end{multline*}
+ We now use that
+ \begin{align*}
+ \frac{\nabla}{\d s} \frac{\nabla}{\d t} \dot\gamma_s(t) &= \frac{\nabla}{\d t} \frac{\nabla}{\d s} \dot{\gamma}_s(t) + R\left(\frac{\partial H}{\partial s}, \frac{\partial H}{\partial t}\right)\dot{\gamma}_s\\
+ &= \left(\frac{\nabla}{\d t}\right)^2 Y(t, s) + R\left(\frac{\partial H}{\partial s}, \frac{\partial H}{\partial t}\right)\dot{\gamma}_s.
+ \end{align*}
+ We now set $s = 0$, and then the above gives
+ \begin{multline*}
+ \left.\frac{\d^2}{\d s^2} E(\gamma_s)\right|_{s = 0} = \left.g\left(\frac{\nabla Y}{\d s}, \dot{\gamma}\right)\right|_0^T + \left.g\left(Y, \frac{\nabla \dot{\gamma}}{\d s}\right)\right|_0^T \\
+ - \int_0^T \left[g\left(Y, \left(\frac{\nabla}{\d t}\right)^2 Y\right) + R(\dot{\gamma}, Y, \dot{\gamma}, Y)\right]\;\d t.
+ \end{multline*}
+ Finally, applying integration by parts, we can write
+ \[
+ -\int_0^T g\left(Y, \left(\frac{\nabla}{\d t}\right)^2 Y\right) \;\d t= - \left.g\left(Y, \frac{\nabla}{\d t}Y\right)\right|_0^T + \int_0^T \left|\frac{\nabla Y}{\d t}\right|^2\;\d t.
+ \]
+ Finally, noting that
+ \[
+  \frac{\nabla}{\d s} \dot{\gamma}_s(t) = \frac{\nabla}{\d t} Y(t, s),
+ \]
+ we find that
+ \[
+ \left.\frac{\d^2}{\d s^2} E(\gamma_s)\right|_{s = 0} = \left.g\left(\frac{\nabla Y}{\d s}, \dot{\gamma}\right)\right|_0^T + \int_0^T \left(|Y'|^2 - R(Y, \dot{\gamma}, Y, \dot{\gamma})\right)\;\d t.
+ \]
+ It remains to prove the second variation of length. We first differentiate
+ \[
+ \frac{\d}{\d s}\ell(\gamma_s) = \int_0^T \frac{1}{2 \sqrt{g(\dot{\gamma}_s, \dot{\gamma}_s)}} \frac{\partial}{\partial s} g(\dot{\gamma}_s, \dot{\gamma}_s)\;\d t.
+ \]
+ Then the second derivative gives
+ \[
+ \left.\frac{\d^2}{\d s^2} \ell(\gamma_s) \right|_{s = 0} = \int_0^T \left[\frac{1}{2} \left.\frac{\partial^2}{\partial s^2} g(\dot{\gamma}_s, \dot{\gamma}_s) \right|_{s = 0} - \left.\frac{1}{4} \left(\frac{\partial}{\partial s} g(\dot{\gamma}_s, \dot{\gamma}_s)\right)^2\right|_{s = 0}\right]\;\d t,
+ \]
+ where we used the fact that $g(\dot{\gamma}, \dot{\gamma}) = 1$.
+
+ We notice that the first term can be identified with the derivative of the energy function. So we have
+ \[
+ \left.\frac{\d^2}{\d s^2} \ell(\gamma_s) \right|_{s = 0} = \left.\frac{\d^2}{\d s^2} E(\gamma_s)\right|_{s = 0} - \int_0^T \left( \left.g\left(\dot{\gamma}_s, \frac{\nabla}{\d s} \dot{\gamma}_s\right)\right|_{s = 0}\right)^2\;\d t.
+ \]
+ So the second part follows from the first.
+\end{proof}
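+
+To see the formula in action, let $\gamma$ be a unit-speed great circle on $S^2$ (so $K \equiv 1$), traversed once over $[0, 2\pi]$, and let $Y$ be a parallel unit normal field along $\gamma$ (parallel transport around a great circle brings $Y$ back to itself). Then $Y' = 0$ and $R(Y, \dot{\gamma}, Y, \dot{\gamma}) = 1$, and since the variation is through closed curves, the boundary terms cancel, giving
+\[
+ \left.\frac{\d^2}{\d s^2}\ell(\gamma_s)\right|_{s = 0} = -\int_0^{2\pi} 1 \;\d t = -2\pi < 0.
+\]
+So a closed great circle admits length-decreasing variations, which foreshadows the argument in Synge's theorem below.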
+
+\subsection{Applications}
+This finally puts us in a position to prove something more interesting.
+
+\subsubsection*{Synge's theorem}
+We are first going to prove the following remarkable result relating curvature and topology:
+\begin{thm}[Synge's theorem]\index{Synge's theorem}
+ Let $(M, g)$ be a compact orientable Riemannian manifold such that $\dim M$ is even and $K(g) > 0$ for all $2$-planes at every $p \in M$. Then $M$ is simply connected.
+\end{thm}
+
+We can see that these conditions are indeed necessary. For example, we can consider $\RP^2 = S^2/\{\pm 1\}$ with the metric induced from $S^2$. Then this is compact with positive sectional curvature, but it is not orientable, and indeed it is not simply connected.
+
+Similarly, if we take $\RP^3$, then this has odd dimension, and the theorem breaks.
+
+Finally, we do need strict inequality, e.g.\ the flat torus is not simply connected.
+
+We first prove a technical lemma.
+\begin{lemma}
+ Let $M$ be a compact manifold, and $[\alpha]$ a non-trivial homotopy class of closed curves in $M$. Then there is a closed minimal geodesic in $[\alpha]$.
+\end{lemma}
+
+\begin{proof}
+ Since $M$ is compact, we can pick some $\varepsilon > 0$ such that for all $p \in M$, the map $\exp_p |_{B(0, \varepsilon)}$ is a diffeomorphism onto its image.
+
+ Let $\ell = \inf_{\gamma \in [\alpha]} \ell(\gamma)$. We know that $\ell > 0$: otherwise there would exist some $\gamma \in [\alpha]$ with $\ell(\gamma) < \varepsilon$, so that $\gamma$ is contained in a geodesic coordinate neighbourhood; but then $\alpha$ would be contractible. So $\ell$ must be positive.
+
+ Then we can find a sequence $\gamma_n \in [\alpha]$, with each $\gamma_n: [0, 1] \to M$ of constant speed, such that
+ \[
+ \lim_{n \to \infty} \ell(\gamma_n) = \ell.
+ \]
+ Choose
+ \[
+ 0 = t_0 < t_1 < \cdots < t_k = 1
+ \]
+ such that
+ \[
+ t_{i + 1} - t_i < \frac{\varepsilon}{2 \ell}.
+ \]
+ So it follows that
+ \[
+ d(\gamma_n(t_i), \gamma_n(t_{i + 1})) < \varepsilon
+ \]
+ for all $n$ sufficiently large and all $i$. Then again, we can replace $\gamma_n|_{[t_i, t_{i + 1}]}$ by a radial geodesic without affecting the limit $\lim \ell(\gamma_n)$.
+
+ Then we exploit the compactness of $M$ (and the unit sphere) again, and pass to a subsequence of $\{\gamma_n\}$ so that $\gamma_n(t_i), \dot{\gamma}_n (t_i)$ are all convergent for every fixed $i$ as $n \to \infty$. Then the curves converges to some
+ \[
+ \gamma_n \to \hat{\gamma} \in [\alpha],
+ \]
+ given by joining the limits $\lim_{n \to \infty} \gamma_n(t_i)$ with radial geodesics. Then we know that the lengths converge as well, and so $\hat{\gamma}$ is minimal among curves in $[\alpha]$. So $\hat{\gamma}$ is locally length-minimizing, hence a geodesic. So we can take $\gamma = \hat{\gamma}$, and we are done.
+\end{proof}
+
+\begin{proof}[Proof of Synge's theorem]
+ Suppose $M$ satisfies the hypothesis, but $\pi_1(M) \not= \{1\}$. So there is a path $\alpha$ with $[\alpha] \not= 1$, i.e.\ it cannot be contracted to a point. By the lemma, we pick a representative $\gamma$ of $[\alpha]$ that is a closed, minimal geodesic.
+
+ We now prove the theorem. We may wlog assume $|\dot{\gamma}| = 1$, and $t$ ranges in $[0, T]$. Consider a vector field $X(t)$ for $0 \leq t \leq T$ along $\gamma(t)$ such that
+ \[
+ \frac{\nabla X}{\d t} = 0,\quad g(X(0), \dot{\gamma}(0)) = 0.
+ \]
+ Note that since $\gamma$ is a geodesic, we know
+ \[
+ g(X(t), \dot{\gamma}(t)) = 0,
+ \]
+ for all $t \in [0, T]$ as parallel transport preserves the inner product. So $X(T) \perp \dot{\gamma}(T) = \dot{\gamma}(0)$ since we have a closed curve.
+
+ We consider the map $P$ that sends $X(0) \mapsto X(T)$. This is a linear isometry of $(\dot{\gamma}(0))^\perp$ with itself that preserves orientation. So we can think of $P$ as a map
+ \[
+ P \in \SO(2n - 1),
+ \]
+ where $\dim M = 2n$. It is an easy linear algebra exercise to show that every element of $\SO(2n - 1)$ must have an eigenvector of eigenvalue $1$. So we can find $v \in T_p M$ such that $v \perp \dot{\gamma}(0)$ and $P(v) = v$. We take $X(0) = v$. Then we have $X(T) = v$.
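+The linear algebra exercise admits a one-line determinant argument; the following sketch uses only $P^T P = I$ and $\det P = 1$, and shows where the odd dimension $2n - 1$ is needed.
+
+```latex
+\begin{align*}
+ \det(P - I) &= \det(P^T)\det(P - I) = \det(P^T P - P^T)\\
+ &= \det(I - P^T) = \det\left((I - P)^T\right) = \det(I - P)\\
+ &= (-1)^{2n - 1}\det(P - I) = -\det(P - I).
+\end{align*}
+```
+
+So $\det(P - I) = 0$, i.e.\ $1$ is an eigenvalue of $P$. The last step uses that $2n - 1$ is odd, which is exactly where the hypothesis that $\dim M$ is even enters.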
+
+ Consider now a variation $H(t, s)$ inducing this $X(t)$. We may assume $|\dot{\gamma}_s|$ is constant. Then
+ \[
+ \frac{\d}{\d s} \ell(\gamma_s)|_{s = 0} = 0
+ \]
+ as $\gamma$ is minimal. Moreover, since it is a minimum, the second derivative must be positive, or at least non-negative. Is this actually the case?
+
+ We look at the second variation formula of length. Using the fact that the loop is closed, the formula reduces to
+ \[
+ \left.\frac{\d^2}{\d s^2} \ell(\gamma_s)\right|_{s = 0} = - \int_0^T R(X, \dot{\gamma}, X, \dot{\gamma})\;\d t.
+ \]
+ But we assumed the sectional curvature is positive. So the second variation is negative! This is a contradiction.
+\end{proof}
+
+\subsubsection*{Conjugate points}
+Recall that when a geodesic starts moving, for a short period of time, it is length-minimizing. However, in general, if we keep on moving for a long time, then we cease to be minimizing. It is useful to characterize when this happens.
+
+As before, for a vector field $J$ along a curve $\gamma(t)$, we will write
+\[
+ J' = \frac{\nabla J}{\d t}.
+\]
+\begin{defi}[Conjugate points]\index{conjugate points}
+ Let $\gamma(t)$ be a geodesic. Then
+ \[
+ p = \gamma(\alpha), \quad q = \gamma(\beta)
+ \]
+ are \emph{conjugate points} if there exists some non-trivial Jacobi field $J$ along $\gamma$ such that $J(\alpha) = 0 = J(\beta)$.
+\end{defi}
+It is easy to see that this does not depend on parametrization of the curve, because Jacobi fields do not.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If $\gamma(t) = \exp_p(t a)$, and $q = \exp_p(\beta a)$ is conjugate to $p$, then $q$ is a singular value of $\exp_p$.
+ \item Let $J$ be as in the definition. Then $J$ must be pointwise normal to $\dot{\gamma}$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We wlog assume $[\alpha, \beta] = [0, 1]$, so $J(0) = 0 = J(1)$. We set $a = \dot{\gamma}(0)$ and $w = J'(0)$. Note that $a, w$ are both non-zero, since Jacobi fields are determined by their initial conditions and $J$ is non-trivial. Then $q = \exp_p(a)$.
+
+ We have shown earlier that if $J(0) = 0$, then
+ \[
+ J(t) = (\d \exp_p)_{ta} (tw)
+ \]
+ for all $0 \leq t \leq 1$. So it follows $(\d \exp_p)_{a}(w) = J(1) = 0$. So $(\d \exp_p)_a$ has non-trivial kernel, and hence isn't surjective.
+ \item We claim that any Jacobi field $J$ along a geodesic $\gamma$ satisfies
+ \[
+ g(J(t), \dot{\gamma}(t)) = g(J'(0), \dot{\gamma}(0)) t + g(J(0), \dot{\gamma}(0)).
+ \]
+ To prove this, we note that by the definition of geodesic and Jacobi fields, we have
+ \[
+ \frac{\d}{\d t} g(J', \dot{\gamma}) = g(J'', \dot{\gamma}) = -R(\dot{\gamma}, J, \dot{\gamma}, \dot{\gamma}) = 0
+ \]
+ by symmetries of $R$. So we have
+ \[
+ \frac{\d}{\d t} g(J, \dot{\gamma}) = g(J'(t), \dot{\gamma}(t)) = g(J'(0), \dot{\gamma}(0)).
+ \]
+ Now integrating gives the desired result.
+
+ This result tells us $g(J(t), \dot{\gamma}(t))$ is a linear function of $t$. But we have
+ \[
+ g(J(0), \dot{\gamma}(0)) = g(J(1), \dot{\gamma}(1)) = 0.
+ \]
+ So we know $g(J(t), \dot{\gamma}(t))$ is identically zero.\qedhere
+ \end{enumerate}
+\end{proof}
+From the proof, we see that for any Jacobi field with $J(0) = 0$, we have
+\[
+ g(J'(0), \dot{\gamma}(0)) = 0 \Longleftrightarrow g(J(t), \dot{\gamma}(t)) = \text{constant}.
+\]
+This implies that the space of normal Jacobi fields along $\gamma$ satisfying $J(0) = 0$ has dimension $\dim M - 1$.
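+Explicitly, the dimension count goes as follows. A Jacobi field is determined by the pair $(J(0), J'(0))$, so the Jacobi fields with $J(0) = 0$ form an $n$-dimensional space parametrized by $J'(0) \in T_p M$. By the displayed equivalence, such a field is pointwise normal to $\dot{\gamma}$ precisely when
+
+```latex
+g(J'(0), \dot{\gamma}(0)) = 0,
+```
+
+which is a single linear condition on $J'(0)$, cutting the dimension down to $\dim M - 1$.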
+
+\begin{eg}
+ Consider $M = S^2 \subseteq \R^3$ with the round metric, i.e.\ the ``obvious'' metric induced from $\R^3$. We claim that $N = (0, 0, 1)$ and $S = (0, 0, -1)$ are conjugate points.
+
+ To construct a Jacobi field, instead of trying to mess with the Jacobi equation, we construct a variation by geodesics. We let
+ \[
+ f(t, s) =
+ \begin{pmatrix}
+ \cos s \sin t\\
+ \sin s \sin t\\
+ \cos t
+ \end{pmatrix}.
+ \]
+ We see that when $s = 0$, this is the great circle in the $(x, z)$-plane. Then we have a Jacobi field
+ \[
+ J(t) = \left.\frac{\partial f}{\partial s}\right|_{s = 0} =
+ \begin{pmatrix}
+ 0\\
+ \sin t\\
+ 0
+ \end{pmatrix}.
+ \]
+ This is then a Jacobi field that vanishes at $N$ and $S$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=1];
+ \draw [dashed] (1, 0) arc (0:180:1 and 0.3);
+ \draw (1, 0) arc (0:-180:1 and 0.3);
+
+ \node [circ] at (0, 1) {};
+ \node [above] at (0, 1) {$p$};
+
+ \draw [mblue, thick] (0, 1) arc(90:-90: 0.0 and 1);
+ \draw [mblue, opacity=0.7, thick] (0, 1) arc(90:-90: 0.3 and 1);
+ \draw [mblue, opacity=0.4, thick] (0, 1) arc(90:-90: 0.5 and 1);
+ \draw [mblue, opacity=0.1, thick] (0, 1) arc(90:-90: 0.8 and 1);
+ \draw [mblue, opacity=0.7, thick] (0, 1) arc(90:270: 0.3 and 1);
+ \draw [mblue, opacity=0.4, thick] (0, 1) arc(90:270: 0.5 and 1);
+ \draw [mblue, opacity=0.1, thick] (0, 1) arc(90:270: 0.8 and 1);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
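+We can check this against the Jacobi equation directly. Since $S^2$ has constant curvature $K \equiv 1$, writing a normal field along a unit-speed great circle as $J(t) = f(t) E(t)$ with $E$ a parallel unit normal, the Jacobi equation reduces to the scalar ODE
+
+```latex
+f''(t) + f(t) = 0,\quad f(0) = 0 \quad\Longrightarrow\quad f(t) = c \sin t,
+```
+
+which vanishes again precisely at $t = \pi$, i.e.\ at the antipodal point, in agreement with the explicit $\sin t$ found above.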
+When we are \emph{at} the conjugate point, then there are many adjacent curves whose length is equal to ours. If we extend our geodesic \emph{beyond} the conjugate point, then it is no longer even locally minimal:
+\begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=1];
+ \draw [dashed] (1, 0) arc (0:180:1 and 0.3);
+ \draw (1, 0) arc (0:-180:1 and 0.3);
+
+ \node [circ] at (0, 1) {};
+ \node [above] at (0, 1) {$p$};
+
+ \draw [mblue, thick] (0, 1) arc(90:-130: 0.3 and 1);
+
+ \node [circ] (q) at (-0.18, -0.77) {};
+ \node [above] at (q) {$q$};
+ \end{tikzpicture}
+\end{center}
+We can push the geodesic slightly over and the length will be shorter. On the other hand, we proved that up to the conjugate point, the geodesic is always locally minimal.
+
+It turns out this phenomenon is generic:
+
+\begin{thm}
+ Let $\gamma: [0, 1] \to M$ be a geodesic with $\gamma(0) = p$, $\gamma(1) = q$ such that $p$ is conjugate to some $\gamma(t_0)$ for some $t_0 \in (0, 1)$. Then there is a piecewise smooth variation $f(t, s)$ with $f(t, 0) = \gamma(t)$ such that
+ \[
+ f(0, s) = p,\quad f(1, s) = q
+ \]
+ and $\ell(f(\ph, s)) < \ell(\gamma)$ whenever $s \not= 0$ is small.
+\end{thm}
+
+The proof is a generalization of the example we had above. We know that up to the conjugate point, we have a Jacobi field that allows us to vary the geodesic without increasing the length. We can then give it a slight ``kick'' and then the length will decrease.
+
+\begin{proof}
+ By the hypothesis, there is a $J(t)$ defined on $t \in [0, 1]$ and $t_0 \in (0, 1)$ such that
+ \[
+ J(t) \perp \dot{\gamma}(t)
+ \]
+ for all $t$, and $J(0) = J(t_0) = 0$ and $J \not\equiv 0$. Then $J'(t_0) \not= 0$.
+
+ We define a parallel vector field $Z_1$ along $\gamma$ by $Z_1(t_0) = -J'(t_0)$. We pick $\theta \in C^\infty[0, 1]$ such that $\theta(0) = \theta(1) = 0$ and $\theta(t_0) = 1$.
+
+ Finally, we define
+ \[
+ Z = \theta Z_1,
+ \]
+ and for $\alpha \in \R$, we define
+ \[
+ Y_\alpha(t) =
+ \begin{cases}
+ J(t) + \alpha Z(t) & 0 \leq t \leq t_0\\
+ \alpha Z(t) & t_0 \leq t \leq 1
+ \end{cases}.
+ \]
+ We notice that this is not smooth at $t_0$, but is just continuous. We will postpone the choice of $\alpha$ to a later time.
+
+ We know $Y_\alpha(t)$ arises from a piecewise $C^\infty$ variation of $\gamma$, say $H_\alpha(t, s)$. The technical claim is that the second variation of length corresponding to $Y_\alpha(t)$ is negative for some $\alpha$.
+
+ We denote by $I(X, Y)_T$ the symmetric bilinear form that gives rise to the second variation of length with fixed end points. If we make the additional assumption that $X, Y$ are normal along $\gamma$, then the formula simplifies, and reduces to
+ \[
+ I(X, Y)_T = \int_0^T \left(g(X', Y') - R(X, \dot{\gamma}, Y, \dot{\gamma})\right)\;\d t.
+ \]
+ Then for $H_\alpha(t, s)$, we have
+ \begin{align*}
+ \left.\frac{\d^2}{\d s^2}\ell(\gamma_s)\right|_{s = 0} &= I_1 + I_2 + I_3\\
+ I_1 &= I(J, J)_{t_0}\\
+ I_2 &= 2\alpha I(J, Z)_{t_0}\\
+ I_3 &= \alpha^2 I(Z, Z)_1.
+ \end{align*}
+ We look at each term separately.
+
+ We first claim that $I_1 = 0$. We note that
+ \[
+ \frac{\d}{\d t} g(J, J') = g(J', J') + g(J, J''),
+ \]
+ and by the Jacobi equation, $g(J, J'') = -R(J, \dot{\gamma}, J, \dot{\gamma})$, which cancels the curvature term in $I(J, J)_{t_0}$. So integrating gives $I_1 = \left.g(J, J')\right|_0^{t_0}$, which vanishes by the boundary conditions $J(0) = J(t_0) = 0$.
+
+ Also, by integrating by parts, we find
+ \[
+ I_2 = \left. 2 \alpha g(Z, J') \right|_0^{t_0}.
+ \]
+ Whence
+ \[
+ \left.\frac{\d^2}{\d s^2}\ell(\gamma_s)\right|_{s = 0} = -2\alpha |J'(t_0)|^2 + \alpha^2 I(Z, Z)_1.
+ \]
+ Now if $\alpha > 0$ is sufficiently small, then the linear term dominates, and this is negative. Since the first variation vanishes ($\gamma$ is a geodesic), it follows that $s = 0$ is a strict local maximum of $s \mapsto \ell(\gamma_s)$; in particular, nearby curves in the variation are strictly shorter.
+\end{proof}
+Note that we made a compromise in the theorem by doing a piecewise $C^\infty$ variation instead of a smooth one, but of course, we can fix this by making a smooth approximation.
+
+\subsubsection*{Bonnet--Myers diameter theorem}
+We are going to see yet another application of our previous hard work, which may also be seen as an interplay between curvature and topology. In case it isn't clear, all our manifolds are connected.
+
+\begin{defi}[Diameter]\index{diameter}
+ The \emph{diameter} of a Riemannian manifold $(M, g)$ is
+ \[
+ \diam(M, g) = \sup_{p, q \in M} d(p, q).
+ \]
+\end{defi}
+Of course, this definition is valid for any metric space.
+\begin{eg}
+ Consider the sphere
+ \[
+ S^{n - 1}(r) = \{x \in \R^n: |x| = r\},
+ \]
+ with the induced ``round'' metric. Then
+ \[
+ \diam (S^{n - 1} (r)) = \pi r.
+ \]
+ It is an exercise to check that
+ \[
+ K \equiv \frac{1}{r^2}.
+ \]
+\end{eg}
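+One quick way to do this exercise is by scaling. The sphere $S^{n - 1}(r)$ is the image of the unit sphere under the dilation $x \mapsto rx$, which rescales the induced metric by $g \mapsto r^2 g$, and under a rescaling $g \mapsto \lambda^2 g$ the sectional curvature transforms as
+
+```latex
+K(\lambda^2 g) = \frac{1}{\lambda^2} K(g).
+```
+
+So, given that $K \equiv 1$ on the unit sphere, we get $K \equiv 1/r^2$ on $S^{n - 1}(r)$.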
+
+We will also need the following notation:
+\begin{notation}
+ Let $h, \hat{h}$ be two symmetric bilinear forms on a real vector space. We say $h \geq \hat{h}$ if $h - \hat{h}$ is non-negative definite.
+
+ If $h, \hat{h} \in \Gamma(S^2 T^* M)$ are fields of symmetric bilinear forms, we write $h \geq \hat{h}$ if $h_p \geq \hat{h}_p$ for all $p \in M$.
+\end{notation}
+
+The following will also be useful:
+\begin{defi}[Riemannian covering map]\index{Riemannian covering map}\index{covering map!Riemannian}\index{Riemannian covering}\index{Riemannian cover}\index{cover!Riemannian}
+ Let $(M, g)$ and $(\tilde{M}, \tilde{g})$ be two Riemannian manifolds, and $f: \tilde{M} \to M$ be a smooth covering map. We say $f$ is a \emph{Riemannian covering map} if it is a local isometry. Alternatively, $f^* g = \tilde{g}$. We say $\tilde{M}$ is a Riemannian cover of $M$.
+\end{defi}
+
+Recall that if $f$ is in fact a universal cover, i.e.\ $\tilde{M}$ is simply connected, then we can (non-canonically) identify $\pi_1(M)$ with $f^{-1}(p)$ for any point $p \in M$.
+
+\begin{thm}[Bonnet--Myers diameter theorem]\index{Bonnet--Myers theorem}
+ Let $(M, g)$ be a complete $n$-dimensional manifold with
+ \[
+ \Ric(g) \geq \frac{n - 1}{r^2} g,
+ \]
+ where $r > 0$ is some positive number. Then
+ \[
+ \diam(M, g) \leq \diam S^n(r) = \pi r.
+ \]
+ In particular, $M$ is compact and $\pi_1(M)$ is finite.
+\end{thm}
+
+\begin{proof}
+ Consider any $L < \diam (M, g)$. Then by definition (and Hopf--Rinow), we can find $p, q \in M$ such that $d(p, q) = L$, and a minimal geodesic $\gamma \in \Omega(p, q)$ with $\ell(\gamma) = d(p, q)$. We parametrize $\gamma: [0, L] \to M$ so that $|\dot{\gamma}| = 1$.
+
+ Now consider any vector field $Y$ along $\gamma$ such that $Y(0) = 0 = Y(L)$. Since $\gamma$ is a minimal geodesic, it is a critical point for $\ell$, and the second variation $I(Y, Y)_{[0, L]}$ is non-negative (recall that the second variation has fixed end points).
+
+ We extend $\dot{\gamma}(0)$ to an orthonormal basis of $T_p M$, say $\dot{\gamma}(0) = e_1, e_2, \cdots, e_n$. We further let $X_i$ be the unique vector field such that
+ \[
+ X_i' = 0,\quad X_i(0) = e_i.
+ \]
+ In particular, $X_1(t) = \dot{\gamma}(t)$.
+
+ For $i = 2, \cdots, n$, we put
+ \[
+ Y_i(t) = \sin \left(\frac{\pi t}{L}\right) X_i(t).
+ \]
+ Then after integrating by parts, we find that we have
+ \begin{align*}
+ I(Y_i, Y_i)_{[0,L]} &= - \int_0^L g(Y_i'' + R(\dot{\gamma}, Y_i) \dot{\gamma}, Y_i) \;\d t\\
+ \intertext{Using the fact that $X_i$ is parallel, this can be written as}
+ &= \int_0^L \sin^2 \frac{\pi t}{L} \left(\frac{\pi^2}{L^2} - R(\dot{\gamma}, X_i, \dot{\gamma}, X_i)\right)\;\d t,
+ \end{align*}
+ and since this is length minimizing, we know this is $\geq 0$.
+
+ We note that we have $R(\dot{\gamma}, X_1, \dot{\gamma}, X_1) = 0$. So we have
+ \[
+ \sum_{i = 2}^n R(\dot{\gamma}, X_i, \dot{\gamma}, X_i) = \Ric(\dot{\gamma}, \dot{\gamma}).
+ \]
+ So we know
+ \[
+ \sum_{i = 2}^n I(Y_i, Y_i) = \int_0^L \sin^2 \frac{\pi t}{L}\left((n - 1) \frac{\pi^2}{L^2} - \Ric(\dot{\gamma}, \dot{\gamma})\right)\;\d t \geq 0.
+ \]
+ We also know that
+ \[
+ \Ric(\dot{\gamma}, \dot{\gamma}) \geq \frac{n - 1}{r^2}
+ \]
+ by hypothesis. So this implies that
+ \[
+ \frac{\pi^2}{L^2} \geq \frac{1}{r^2}.
+ \]
+ This tells us that
+ \[
+ L \leq \pi r.
+ \]
+ Since $L$ is any number less that $\diam(M, g)$, it follows that
+ \[
+ \diam(M, g) \leq \pi r.
+ \]
+ Since $M$ is known to be complete, by the Hopf--Rinow theorem, any closed bounded subset is compact. But $M$ itself is closed and bounded! So $M$ is compact.
+
+ To understand the fundamental group, we simply have to consider a universal Riemannian cover $f: \tilde{M} \to M$. We know such a topological universal covering space must exist by general existence theorems. We can then pull back the differential structure and metric along $f$, since $f$ is a local homeomorphism. So this gives a universal Riemannian cover of $M$. But the hypothesis of the theorem is local, so it is also satisfied by $\tilde{M}$. So $\tilde{M}$ is also compact. Since $f^{-1}(p)$ is a closed discrete subset of a compact space, it is finite, and we are done.
+\end{proof}
+It is an easy exercise to show that the hypothesis on the Ricci curvature cannot be weakened to just saying that the Ricci curvature is positive definite.
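+One standard example to keep in mind for this exercise is the paraboloid $\{z = x^2 + y^2\} \subseteq \R^3$ with the induced metric: it is complete, and its Gaussian curvature
+
+```latex
+K(x, y) = \frac{4}{(1 + 4x^2 + 4y^2)^2} > 0
+```
+
+is strictly positive everywhere (so in dimension $2$ the Ricci curvature is positive definite at every point), but it decays to $0$ at infinity. The manifold is non-compact with infinite diameter, so pointwise positivity, without a uniform lower bound, does not suffice.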
+
+\subsubsection*{Hadamard--Cartan theorem}
+To prove the next result, we need to talk a bit more about coverings.
+\begin{prop}
+ Let $(M, g)$ and $(N, h)$ be Riemannian manifolds, and suppose $M$ is complete. Suppose there is a smooth surjection $f: M \to N$ that is a local diffeomorphism. Moreover, suppose that for any $p \in M$ and $v \in T_p M$, we have $|\d f_p (v)|_h \geq |v|$. Then $f$ is a covering map.
+\end{prop}
+
+\begin{proof}
+ By general topology, it suffices to prove that for any smooth curve $\gamma: [0, 1] \to N$, and any $q \in M$ such that $f(q) = \gamma(0)$, there exists a lift of $\gamma$ starting from $q$.
+ \[
+ \begin{tikzcd}
+ & M \ar[d, "f"]\\
+ \lbrack 0, 1\rbrack \ar[r, "\gamma"] \ar[ur, dashed, "\tilde{\gamma}"] & N
+ \end{tikzcd}
+ \]
+ From the hypothesis, we know that $\tilde{\gamma}$ exists on $[0, \varepsilon_0]$ for some ``small'' $\varepsilon_0 > 0$. We let
+ \[
+ I = \{0 \leq \varepsilon \leq 1 : \tilde{\gamma} \text{ exists on }[0, \varepsilon]\}.
+ \]
+ We immediately see this is non-empty, since it contains $\varepsilon_0$. Moreover, it is not difficult to see that $I$ is open in $[0, 1]$, because $f$ is a local diffeomorphism. So it suffices to show that $I$ is closed.
+
+ We let $\{t_n\}_{n = 1}^\infty \subseteq I$ be such that $t_{n + 1} > t_n$ for all $n$, and
+ \[
+ \lim_{n \to \infty} t_n = \varepsilon_1.
+ \]
+ Using Hopf--Rinow, either $\{\tilde{\gamma}(t_n)\}$ is contained in some compact set $K$, or it is unbounded. We claim that unboundedness is impossible. We have
+ \begin{align*}
+ \ell(\gamma) \geq \ell(\gamma|_{[0, t_n]}) &= \int_0^{t_n} |\dot{\gamma}|\; \d t\\
+ &= \int_0^{t_n} |\d f_{\tilde{\gamma}(t)} \dot{\tilde{\gamma}}(t)|\;\d t\\
+ &\geq \int_0^{t_n} |\dot{\tilde{\gamma}}| \;\d t \\
+ &= \ell(\tilde{\gamma}|_{[0, t_n]}) \\
+ &\geq d(\tilde{\gamma}(0), \tilde{\gamma}(t_n)).
+ \end{align*}
+ So we know $\{\tilde{\gamma}(t_n)\}$ is bounded. So by compactness, we can find some $x$ such that $\tilde{\gamma} (t_{n_\ell}) \to x$ as $\ell \to \infty$. There exists an open neighbourhood $V \subseteq M$ of $x$ such that $f|_{V}$ is a diffeomorphism onto its image.
+
+ Since there are extensions of $\tilde{\gamma}$ to each $t_n$, eventually we get an extension to within $V$, and then we can just lift directly, and extend it to $\varepsilon_1$. So $\varepsilon_1 \in I$. So we are done.
+\end{proof}
+
+\begin{cor}
+ Let $f: M \to N$ be a local isometry onto $N$, and $M$ be complete. Then $f$ is a covering map.
+\end{cor}
+Note that since $N$ is (assumed to be) connected, we know $f$ is necessarily surjective. To see this, note that the completeness of $M$ implies completeness of $f(M)$, hence $f(M)$ is closed in $N$, and since it is a local isometry, we know $f$ is in particular open. So the image is open and closed, hence $f(M) = N$.
+
+For a change, our next result will assume a \emph{negative} curvature, instead of a positive one!
+\begin{thm}[Hadamard--Cartan theorem]\index{Hadamard--Cartan theorem}
+ Let $(M^n, g)$ be a complete Riemannian manifold such that the sectional curvature is always non-positive. Then for every point $p \in M$, the map $\exp_p: T_p M \to M$ is a covering map. In particular, if $\pi_1(M) = 0$, then $M$ is diffeomorphic to $\R^n$.
+\end{thm}
+
+We will need one more technical lemma.
+\begin{lemma}
+ Let $\gamma(t)$ be a geodesic on $(M, g)$ such that $K \leq 0$ along $\gamma$. Then $\gamma$ has no conjugate points.
+\end{lemma}
+
+\begin{proof}
+ We write $\gamma(0) = p$. Let $J(t)$ be a Jacobi field along $\gamma$ with $J(0) = 0$. We claim that if $J$ is not identically zero, then $J$ does not vanish anywhere else.
+
+ We consider the function
+ \[
+ f(t) = g(J(t), J(t)) = |J(t)|^2.
+ \]
+ Then $f(0) = f'(0) = 0$. Consider
+ \[
+ \frac{1}{2} f''(t) = g(J''(t), J(t)) + g(J'(t), J'(t)) = g(J', J') - R(\dot{\gamma}, J, \dot{\gamma}, J) \geq 0.
+ \]
+ So $f$ is a convex function that is non-negative and satisfies $f(0) = f'(0) = 0$. Hence if $f(t_1) = 0$ for some $t_1 > 0$, then $f \equiv 0$ on $[0, t_1]$, so $J$ vanishes on an interval, and hence $J \equiv 0$. So we are done.
+\end{proof}
+
+We can now prove the theorem.
+\begin{proof}[Proof of theorem]
+ By the lemma, we know there are no conjugate points. So we know $\exp_p$ is regular everywhere, hence a local diffeomorphism by inverse function theorem. We can use this fact to pull back a metric from $M$ to $T_p M$ such that $\exp_p$ is a local isometry. Since this is a local isometry, we know geodesics are preserved. So geodesics originating from the origin in $T_p M$ are straight lines, and the speed of the geodesics under the two metrics are the same. So we know $T_p M$ is complete under this metric. Also, by Hopf--Rinow, $\exp_p$ is surjective. So we are done.
+\end{proof}
+
+\section{Hodge theory on Riemannian manifolds}
+\subsection{Hodge star and operators}
+Throughout this chapter, we will assume our manifolds are oriented, and write $n$ for the dimension. We will write $\varepsilon \in \Omega^n(M)$ for a non-vanishing form defining the orientation.
+
+Given a coordinate patch $U \subseteq M$, we can use Gram--Schmidt to obtain a positively-oriented orthonormal frame $e_1, \cdots, e_n$. This allows us to dualize and obtain a basis $\omega_1, \cdots, \omega_n \in \Omega^1(M)$, defined by
+\[
+ \omega_i(e_j) = \delta_{ij}.
+\]
+Since these are linearly independent, we can multiply all of them together to obtain a non-zero $n$-form
+\[
+ \omega_1 \wedge \cdots \wedge \omega_n = a \varepsilon,
+\]
+for some $a \in C^\infty(U)$, $a > 0$. We can do this for any coordinate patches, and the resulting $n$-form agrees on intersections. Indeed, given any other choice $\omega_1', \cdots, \omega_n'$, they must be related to the original $\omega_1, \cdots, \omega_n$ by an element $\Phi \in \SO(n)$. Then by linear algebra, we have
+\[
+ \omega_1' \wedge \cdots \wedge \omega_n' = \det (\Phi)\; \omega_1 \wedge \cdots \wedge \omega_n = \omega_1 \wedge \cdots \wedge \omega_n.
+\]
+So we can patch all these forms together to get a global $n$-form $\omega_g \in \Omega^n(M)$ that gives the same orientation. This is a canonical such $n$-form, depending only on $g$ and the orientation chosen. This is called the (Riemannian) \term{volume form} of $(M, g)$.
+
+Recall that the $\omega_i$ are orthonormal with respect to the natural dual inner product on $T^* M$. In general, $g$ induces an inner product on $\exterior^p T^* M$ for all $p = 0, 1, \cdots, n$, which is still denoted $g$. One way to describe this is to give an orthonormal basis on each fiber, given by
+\[
+ \{\omega_{i_1} \wedge \cdots \wedge \omega_{i_p} : 1 \leq i_1 < \cdots < i_p \leq n\}.
+\]
+From this point of view, the volume form becomes a form of ``unit length''.
+
+We now come to the central definition of Hodge theory.
+\begin{defi}[Hodge star]\index{Hodge star}
+ The \emph{Hodge star operator} on $(M^n, g)$ is the linear map
+ \[
+ \star: \exterior^p (T^*_x M) \to \exterior^{n - p} (T_x^* M)
+ \]
+ satisfying the property that for all $\alpha, \beta \in \exterior^p (T_x^* M)$, we have
+ \[
+ \alpha \wedge \star\beta = \bra \alpha, \beta\ket_g\;\omega_g.
+ \]
+\end{defi}
+Since $g$ is non-degenerate, it follows that this is well-defined.
+
+How do we actually compute this? Since we have vector spaces, it is natural to consider what happens in a basis.
+\begin{prop}
+ Suppose $\omega_1, \cdots, \omega_n$ is an orthonormal basis of $T_x^* M$. Then we claim that
+ \[
+ \star(\omega_1 \wedge \cdots \wedge \omega_p) = \omega_{p + 1} \wedge \cdots \wedge \omega_n.
+ \]
+\end{prop}
+We can check this by checking it on all basis vectors of $\exterior^p (T_x^* M)$, and the result drops out immediately. Since we can always relabel the indices, this already tells us how to compute the Hodge star of all other basis elements.
+
+We can apply the Hodge star twice, which gives us a linear endomorphism $\star\star: \exterior^p T_x^* M \to \exterior^p T_x^* M$. From the above, it follows that
+\begin{prop}
+ The double Hodge star $\star\star: \exterior^p (T_x^* M) \to \exterior^p (T_x^* M)$ is equal to $(-1)^{p(n - p)}$.
+\end{prop}
+In particular,
+\[
+ \star1 = \omega_g,\quad \star \omega_g = 1.
+\]
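+As a concrete illustration, on $(\R^3, \mathrm{eucl})$ with $\omega_g = \d x \wedge \d y \wedge \d z$, the proposition gives
+
+```latex
+\star \d x = \d y \wedge \d z,\quad \star \d y = \d z \wedge \d x,\quad \star \d z = \d x \wedge \d y.
+```
+
+Moreover, since $p(n - p)$ is even for every $p$ when $n = 3$, we have $\star\star = \id$ on all of $\exterior^p(T_x^* \R^3)$.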
+Using the Hodge star, we can define a differential operator:
+\begin{defi}[Co-differential ($\delta$)]\index{$\delta$}
+ We define $\delta: \Omega^p(M) \to \Omega^{p - 1}(M)$ for $0 \leq p \leq \dim M$ by
+ \[
+ \delta =
+ \begin{cases}
+ (-1)^{n(p + 1) + 1} \star\d \star & p \not= 0\\
+ 0 & p = 0
+ \end{cases}.
+ \]
+ This is (sometimes) called the \term{co-differential}.
+\end{defi}
+The funny powers of $(-1)$ are chosen so that our future results work well.
+
+We further define
+\begin{defi}[Laplace--Beltrami operator $\Delta$]\index{Laplace--Beltrami operator}\index{$\Delta$}
+ The \emph{Laplace--Beltrami operator} is
+ \[
+ \Delta = \d \delta + \delta \d: \Omega^p(M) \to \Omega^p(M).
+ \]
+ This is also known as the (Hodge) \term{Laplacian}\index{Hodge Laplacian}.
+\end{defi}
+We quickly note that
+\begin{prop}
+ \[
+ \star\Delta = \Delta \star.
+ \]
+\end{prop}
+Consider the special case of $(M, g) = (\R^n, \mathrm{eucl})$, and $p = 0$. Then a straightforward calculation shows that
+\[
+ \Delta f = - \frac{\partial^2 f}{\partial x_1^2} - \cdots - \frac{\partial^2 f}{\partial x_n^2}
+\]
+for each $f \in C^\infty(\R^n) = \Omega^0(\R^n)$. This is just the usual Laplacian, except there is a negative sign. This is there for a good reason, but we shall not go into that.
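+We can verify the sign in the simplest non-trivial case $n = 2$, where $\Delta f = \delta\, \d f$ and $\delta = -\star\d\star$ on $1$-forms (here $\star \d x = \d y$ and $\star \d y = -\d x$):
+
+```latex
+\begin{align*}
+ \d f &= f_x \,\d x + f_y \,\d y,\\
+ \star \d f &= f_x \,\d y - f_y \,\d x,\\
+ \d \star \d f &= (f_{xx} + f_{yy})\, \d x \wedge \d y,\\
+ \Delta f = -\star \d \star \d f &= -(f_{xx} + f_{yy}).
+\end{align*}
+```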
+
+More generally, for a metric $g = g_{ij}\; \d x^i\; \d x^j$ on $\R^n$ (or on a coordinate patch of any Riemannian manifold), we have
+\[
+ \omega_g = \sqrt{|g|}\;\d x^1 \wedge \cdots \wedge \d x^n,
+\]
+where $|g|$ is the determinant of $g$. Then we have
+\[
+ \Delta_g f = -\frac{1}{\sqrt{|g|}} \partial_j (\sqrt{|g|} g^{ij} \partial_i f) = - g^{ij} \partial_i \partial_j f + \text{lower order terms}.
+\]
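+For example, in polar coordinates on $\R^2 \setminus \{0\}$, we have $g = \d r^2 + r^2\, \d \theta^2$, so $\sqrt{|g|} = r$, $g^{rr} = 1$ and $g^{\theta\theta} = 1/r^2$, and the formula gives
+
+```latex
+\Delta_g f = -\frac{1}{r} \partial_r (r\, \partial_r f) - \frac{1}{r^2} \partial_\theta^2 f,
+```
+
+which is (minus) the familiar Laplacian in polar coordinates.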
+
+How can we think about this co-differential $\delta$? One way to understand it is that it is the ``adjoint'' to $\d$.
+
+\begin{prop}
+ $\delta$ is the formal adjoint of $\d$. Explicitly, for any compactly supported $\alpha \in \Omega^{p - 1}(M)$ and $\beta \in \Omega^p(M)$, we have
+ \[
+ \int_M \bra \d \alpha, \beta\ket_g\;\omega_g = \int_M \bra \alpha, \delta \beta\ket_g\;\omega_g.
+ \]
+\end{prop}
+We just say it is a formal adjoint, rather than a genuine adjoint, because there is no obvious Banach space structure on $\Omega^p(M)$, and we don't want to go into that. However, we can still define
+
+\begin{defi}[$L^2$ inner product]
+ For $\xi, \eta \in \Omega^p(M)$, we define the \term{$L^2$ inner product}\index{$\bra\bra \xi, \eta\ket \ket_g$} by
+ \[
+ \bra \bra \xi, \eta\ket\ket_g = \int_M \bra \xi, \eta\ket_g\;\omega_g.
+ \]
+\end{defi}
+Note that this may not be well-defined if the space is not compact.
+
+Under this notation, we can write the proposition as
+\[
+ \bra\bra \d \alpha, \beta\ket\ket_g = \bra \bra \alpha, \delta \beta \ket\ket_g.
+\]
+Thus, we also say $\delta$ is the $L^2$ adjoint.
+
+To prove this, we need to recall Stokes' theorem. Since we don't care about manifolds with boundary in this course, we just have
+\[
+ \int_M \d \omega = 0
+\]
+for all forms $\omega$.
+\begin{proof}
+ We have
+ \begin{align*}
+ 0 &= \int_M \d (\alpha \wedge \star\beta)\\
+ &= \int_M \d \alpha \wedge \star\beta + \int_M (-1)^{p - 1} \alpha \wedge \d \star \beta\\
+ &= \int_M \bra \d \alpha, \beta \ket_g\;\omega_g + (-1)^{p - 1} (-1)^{(n - p + 1)(p - 1)}\int_M \alpha \wedge \star\star\d\star \beta\\
+ &= \int_M \bra \d \alpha, \beta \ket_g\;\omega_g + (-1)^{(n - p)(p - 1)} \int_M \alpha \wedge \star\star\d\star \beta\\
+ &= \int_M \bra \d \alpha, \beta\ket_g\;\omega_g - \int_M \alpha \wedge \star \delta \beta\\
+ &= \int_M \bra \d \alpha, \beta\ket_g\;\omega_g - \int_M \bra \alpha, \delta \beta\ket_g\;\omega_g.\qedhere
+ \end{align*}
+\end{proof}
+This result explains the funny signs we gave $\delta$.
+
+\begin{cor}
+ $\Delta$ is formally self-adjoint.
+\end{cor}
+
+Similar to what we did in, say, IB Methods, we can define
+\begin{defi}[Harmonic forms]\index{harmonic form}\index{$\mathcal{H}^p$}
+ A \emph{harmonic form} is a $p$-form $\omega$ such that $\Delta \omega = 0$. We write
+ \[
+ \mathcal{H}^p = \{\alpha \in \Omega^p(M): \Delta \alpha = 0\}.
+ \]
+\end{defi}
+
+We have a further corollary of the proposition.
+\begin{cor}
+ Let $M$ be compact. Then
+ \[
+ \Delta \alpha = 0 \Leftrightarrow \d \alpha = 0\text{ and }\delta \alpha = 0.
+ \]
+\end{cor}
+We say $\alpha$ is closed and \term{co-closed}.
+
+\begin{proof}
+ $\Leftarrow$ is clear. For $\Rightarrow$, suppose $\Delta \alpha = 0$. Then we have
+ \[
+ 0 = \bra\bra \alpha, \Delta \alpha\ket\ket = \bra\bra\alpha, \d \delta \alpha + \delta \d \alpha\ket\ket = \|\delta \alpha\|_g^2 + \|\d \alpha\|_g^2.
+ \]
+ Since the $L^2$ norm is non-degenerate, it follows that $\delta \alpha = \d \alpha = 0$.
+\end{proof}
+
+In particular, in degree $0$, co-closed is automatic. Then for all $f \in C^\infty(M)$, we have
+\[
+ \Delta f = 0 \Leftrightarrow \d f = 0.
+\]
+In other words, harmonic functions on a compact manifold must be constant. This is a good way to demonstrate that the compactness hypothesis is required, as there are many non-trivial harmonic functions on $\R^n$, e.g.\ $x$.
+
+Some of these things simplify if we know about the parity of the dimension of our manifold. If $\dim M = n = 2m$, then $\star\star = (-1)^p$, and
+\[
+ \delta = - \star \d \star
+\]
+whenever $p \not= 0$. In particular, this applies to complex manifolds, say $\C^n \cong \R^{2n}$, with the Hermitian metric. This is to be continued in sheet 3.
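+The sign bookkeeping here is quick to check: with $n = 2m$, the exponent in $\star\star = (-1)^{p(n - p)}$ satisfies
+
+```latex
+p(n - p) = pn - p^2 \equiv p^2 \equiv p \pmod 2,
+```
+
+so $\star\star = (-1)^p$, and $n(p + 1) + 1$ is odd for every $p$, giving $\delta = -\star\d\star$ in all positive degrees.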
+
+\subsection{Hodge decomposition theorem}
+We now work towards proving the Hodge decomposition theorem. This is a very important and far-reaching result.
+
+\begin{thm}[Hodge decomposition theorem]\index{Hodge decomposition theorem}
+ Let $(M, g)$ be a compact oriented Riemannian manifold. Then
+ \begin{itemize}
+ \item For all $p = 0, \cdots, \dim M$, we have $\dim \mathcal{H}^p < \infty$.
+ \item We have
+ \[
+ \Omega^p(M) = \mathcal{H}^p \oplus \Delta \Omega^p(M).
+ \]
+ Moreover, the direct sum is orthogonal with respect to the $L^2$ inner product. We also formally set $\Omega^{-1}(M) = 0$.
+ \end{itemize}
+\end{thm}
+As before, the compactness of $M$ is essential, and cannot be dropped.
+\begin{cor}
+ We have orthogonal decompositions
+ \begin{align*}
+ \Omega^p(M) &= \mathcal{H}^p \oplus \d \delta \Omega^p(M) \oplus \delta \d \Omega^p(M)\\
+ &= \mathcal{H}^p \oplus \d \Omega^{p - 1}(M) \oplus \delta \Omega^{p + 1}(M).
+ \end{align*}
+\end{cor}
+
+\begin{proof}
+ Now note that for any $\alpha, \beta$, we have
+ \[
+ \bra \bra \d \delta \alpha, \delta \d \beta\ket\ket_g = \bra \bra \d \d \delta \alpha, \d \beta\ket \ket_g = 0.
+ \]
+ So
+ \[
+ \d \delta \Omega^p(M) \oplus \delta \d \Omega^p(M)
+ \]
+ is an orthogonal direct sum that clearly contains $\Delta \Omega^p(M)$. But each component is also orthogonal to harmonic forms, because harmonic forms are closed and co-closed. So the first decomposition follows.
+
+ To obtain the final decomposition, we simply note that
+ \[
+ \d \Omega^{p - 1}(M) = \d (\mathcal{H}^{p - 1} \oplus \Delta \Omega^{p - 1}(M)) = \d (\delta \d \Omega^{p - 1}(M)) \subseteq \d \delta \Omega^p(M).
+ \]
+ On the other hand, we certainly have the other inclusion. So the two terms are equal. The other term follows similarly.
+\end{proof}
+
+This theorem has a rather remarkable corollary.
+\begin{cor}
+ Let $(M, g)$ be a compact oriented Riemannian manifold. Then for all $a \in H_{\dR}^p(M)$, there is a unique $\alpha \in \mathcal{H}^p$ such that $[\alpha] = a$. In other words, the obvious map
+ \[
+ \mathcal{H}^p \to H_{\dR}^p(M)
+ \]
+ is an isomorphism.
+\end{cor}
+This is remarkable. On the left hand side, we have $\mathcal{H}^p$, which is a completely analytic object, defined via the Laplacian. On the other hand, the right hand side involves the de Rham cohomology, which is purely topological, and in fact a homotopy invariant.
+
+\begin{proof}
+ To see uniqueness, suppose $\alpha_1, \alpha_2 \in \mathcal{H}^p$ are such that $[\alpha_1] = [\alpha_2] \in H_{\dR}^p(M)$. Then
+ \[
+ \alpha_1 - \alpha_2 = \d \beta
+ \]
+ for some $\beta$. But the left hand side and right hand side live in different parts of the Hodge decomposition. So they must be individually zero. Alternatively, we can compute
+ \[
+ \|\d \beta\|_g^2 = \bra\bra \d \beta, \alpha_1- \alpha_2\ket\ket_g = \bra \bra \beta, \delta \alpha_1 - \delta \alpha_2\ket\ket_g = 0
+ \]
+ since harmonic forms are co-closed.
+
+ To prove existence, let $\alpha \in \Omega^p(M)$ be such that $\d \alpha = 0$. We write
+ \[
+ \alpha = \alpha_1 + \d \alpha_2 + \delta\alpha_3 \in \mathcal{H}^p \oplus \d \Omega^{p - 1}(M) \oplus \delta \Omega^{p + 1}(M).
+ \]
+ Applying $\d$ gives us
+ \[
+ 0 = \d \alpha_1 + \d^2 \alpha_2 + \d \delta \alpha_3.
+ \]
+ We know $\d \alpha_1 = 0$ since $\alpha_1$ is harmonic, and $\d^2 = 0$. So we must have $\d \delta \alpha_3 = 0$. So
+ \[
+ \bra \bra \delta \alpha_3, \delta \alpha_3\ket\ket_g = \bra \bra \alpha_3, \d \delta \alpha_3\ket\ket_g = 0.
+ \]
+ So $\delta \alpha_3 = 0$. So $[\alpha] = [\alpha_1]$ and $\alpha$ has a representative in $\mathcal{H}^p$.
+\end{proof}
+We can also heuristically justify why this is true. Suppose we are given some de Rham cohomology class $a \in H_{\dR}^p(M)$. We consider
+\[
+ B_a = \{\xi \in \Omega^p(M): \d \xi = 0, [\xi] = a\}.
+\]
This is an infinite-dimensional affine space.
+
We now ask ourselves --- which $\alpha \in B_a$ minimizes the $L^2$ norm? We consider the function $F: B_a \to \R$ given by $F(\alpha) = \|\alpha\|^2$. Any minimizing $\alpha$ is a critical point of $F$. So for any $\beta \in \Omega^{p - 1}(M)$, we have
+\[
+ \left.\frac{\d}{\d t}\right|_{t = 0} F(\alpha + t \d \beta) = 0.
+\]
+In other words, we have
+\[
+ 0 = \left.\frac{\d}{\d t}\right|_{t = 0} (\|\alpha\|^2 + 2t \bra \bra \alpha, \d \beta\ket\ket_g + t^2 \|\d \beta\|^2) = 2 \bra\bra \alpha, \d \beta\ket\ket_g.
+\]
+This is the same as saying
+\[
+ \bra \bra \delta \alpha, \beta\ket\ket_g = 0.
+\]
+So this implies $\delta \alpha = 0$. But $\d \alpha = 0$ by assumption. So we find that $\alpha \in \mathcal{H}^p$. So the result is at least believable.
+
The proof of the Hodge decomposition theorem involves some analysis, which we will not carry out in detail. Instead, we will just quote the appropriate results. For convenience, we will use $\bra \ph, \ph\ket$ for the $L^2$ inner product, and then $\|\ph\|$ is the $L^2$ norm.
+
+The first theorem we quote is the following:
+\begin{thm}[Compactness theorem]
 If a sequence $\alpha_n \in \Omega^p(M)$ satisfies $\|\alpha_n\| < C$ and $\|\Delta \alpha_n\| < C$ for all $n$, then $\alpha_n$ contains a Cauchy subsequence.
\end{thm}
This is almost like saying $\Omega^p(M)$ is compact, but it isn't, since it is not complete. So the best thing we can say is that the subsequence is Cauchy.
+
+\begin{cor}
+ $\mathcal{H}^p$ is finite-dimensional.
+\end{cor}
+
+\begin{proof}
 Suppose not. Then by Gram--Schmidt, we can find an infinite orthonormal sequence $e_n$ in $\mathcal{H}^p$, so that $\|e_n\| = 1$ and $\|\Delta e_n\| = 0$. Since $\|e_n - e_m\|^2 = 2$ for $n \not= m$, this has no Cauchy subsequence, contradicting the compactness theorem.
+\end{proof}
+
+A large part of the proof is trying to solve the PDE
+\[
+ \Delta \omega = \alpha,
+\]
+which we will need in order to carry out the decomposition. In analysis, one useful idea is the notion of weak solutions. We notice that if $\omega$ is a solution, then for any $\varphi \in \Omega^p(M)$, we have
+\[
+ \bra \omega, \Delta\varphi\ket = \bra \Delta \omega, \varphi\ket = \bra \alpha, \varphi\ket,
+\]
+using that $\Delta$ is self-adjoint. In other words, the linear form $\ell = \bra \omega, \ph\ket: \Omega^p(M) \to \R$ satisfies
+\[
+ \ell(\Delta \varphi) = \bra \alpha, \varphi\ket.
+\]
+Conversely, if $\bra \omega, \ph\ket$ satisfies this equation, then $\omega$ must be a solution, since for any $\beta$, we have
+\[
+ \bra \Delta \omega, \beta\ket = \bra \omega, \Delta \beta\ket = \bra \alpha, \beta\ket.
+\]
+\begin{defi}[Weak solution]\index{weak solution}
+ A weak solution to the equation $\Delta \omega = \alpha$ is a linear functional $\ell: \Omega^p(M) \to \R$ such that
+ \begin{enumerate}
+ \item $\ell(\Delta \varphi) = \bra \alpha, \varphi\ket$ for all $\varphi \in \Omega^p(M)$.
+ \item $\ell$ is \term{bounded}, i.e.\ there is some $C$ such that $|\ell (\beta)| < C \|\beta\|$ for all $\beta$.
+ \end{enumerate}
+\end{defi}
Now given a weak solution, we want to obtain a genuine solution. If $\Omega^p(M)$ were a Hilbert space, then we would be done immediately by the Riesz representation theorem, but it is not. Thus, we need a theorem that gives us what we want.
+
+\begin{thm}[Regularity theorem]
+ Every weak solution of $\Delta \omega = \alpha$ is of the form
+ \[
+ \ell(\beta) = \bra \omega, \beta\ket
+ \]
+ for $\omega \in \Omega^p(M)$.
+\end{thm}
+
+Thus, we have reduced the problem to finding weak solutions. There is one final piece of analysis we need to quote. The definition of a weak solution only cares about what $\ell$ does to $\Delta \Omega^p(M)$. And it is easy to define what $\ell$ should do on $\Delta \Omega^p(M)$ --- we simply define
+\[
+ \ell(\Delta \eta) = \bra \eta, \alpha\ket.
+\]
+Of course, for this to work, it must be well-defined, but this is not necessarily the case in general. We also have to check it is bounded. But suppose this worked. Then the remaining job is to extend this to a bounded functional on all of $\Omega^p(M)$ in \emph{whatever} way we like. This relies on the following (relatively easy) theorem from analysis:
+
+\begin{thm}[Hahn--Banach theorem]
+ Let $L$ be a normed vector space, and $L_0$ be a subspace. We let $f: L_0 \to \R$ be a bounded linear functional. Then $f$ extends to a bounded linear functional $L \to \R$ with the same bound.
+\end{thm}
+
+We can now begin the proof.
+\begin{proof}[Proof of Hodge decomposition theorem]
+ Since $\mathcal{H}^p$ is finite-dimensional, by basic linear algebra, we can decompose
+ \[
+ \Omega^p(M) = \mathcal{H}^p \oplus (\mathcal{H}^p)^\perp.
+ \]
+ Crucially, we know $(\mathcal{H}^p)^\perp$ is a \emph{closed} subspace. What we want to show is that
+ \[
+ (\mathcal{H}^p)^\perp = \Delta \Omega^p(M).
+ \]
+ One inclusion is easy. Suppose $\alpha \in \mathcal{H}^p$ and $\beta \in \Omega^p(M)$. Then we have
+ \[
+ \bra \alpha, \Delta \beta\ket = \bra \Delta \alpha, \beta\ket = 0.
+ \]
+ So we know that
+ \[
+ \Delta \Omega^p(M) \subseteq (\mathcal{H}^p)^\perp.
+ \]
+ The other direction is the hard part. Suppose $\alpha \in (\mathcal{H}^p)^\perp$. We may assume $\alpha$ is non-zero. Since our PDE is a linear one, we may wlog $\|\alpha\| = 1$.
+
+ By the regularity theorem, it suffices to prove that $\Delta \omega = \alpha$ has a \emph{weak} solution. We define $\ell: \Delta \Omega^p(M) \to \R$ as follows: for each $\eta \in \Omega^p(M)$, we put
+ \[
+ \ell (\Delta \eta) = \bra \eta, \alpha\ket.
+ \]
+ We check this is well-defined. Suppose $\Delta \eta = \Delta \xi$. Then $\eta - \xi \in \mathcal{H}^p$, and we have
+ \[
+ \bra \eta, \alpha\ket - \bra \xi, \alpha\ket = \bra \eta - \xi, \alpha \ket = 0
+ \]
+ since $\alpha \in (\mathcal{H}^p)^\perp$.
+
 We next want to show the boundedness property. We claim that there exists some $C > 0$ such that
 \[
 |\ell(\Delta \eta)| \leq C \|\Delta \eta\|
+ \]
 for all $\eta \in \Omega^p(M)$. To see this, we first note that by Cauchy--Schwarz, we have
+ \[
+ |\bra \alpha, \eta\ket| \leq \|\alpha\| \cdot \|\eta\| = \|\eta\|.
+ \]
+ So it suffices to show that there is a $C > 0$ such that
+ \[
+ \|\eta\| \leq C\|\Delta \eta\|
+ \]
 for every $\eta \in (\mathcal{H}^p)^\perp$. This suffices, since replacing $\eta$ by its component in $(\mathcal{H}^p)^\perp$ changes neither $\Delta \eta$ nor $\bra \eta, \alpha\ket$, because $\alpha \in (\mathcal{H}^p)^\perp$.

 Suppose not. Then we can find a sequence $\eta_k \in (\mathcal{H}^p)^{\perp}$ such that $\|\eta_k\| = 1$ and $\|\Delta \eta_k\| \to 0$.
+
 But then $\|\Delta \eta_k\|$ is certainly bounded. So by the compactness theorem, we may wlog $\eta_k$ is Cauchy. Then for any $\psi \in \Omega^p(M)$, the sequence $\bra \psi, \eta_k\ket$ is Cauchy, by Cauchy--Schwarz, hence convergent.
+
+ We define $a: \Omega^p(M) \to \R$ by
+ \[
+ a(\psi) = \lim_{k \to \infty} \bra \psi, \eta_k\ket.
+ \]
+ Then we have
+ \[
+ a(\Delta \psi) = \lim_{k \to \infty} \bra \eta_k, \Delta \psi\ket = \lim_{k \to \infty} \bra \Delta \eta_k, \psi\ket = 0.
+ \]
+ So we know that $a$ is a weak solution of $\Delta \xi = 0$. By the regularity theorem again, we have
+ \[
+ a(\psi) = \bra \xi, \psi\ket
+ \]
+ for some $\xi \in \Omega^p(M)$. Then $\xi \in \mathcal{H}^p$.
+
+ We claim that $\eta_k \to \xi$. Let $\varepsilon > 0$, and pick $N$ such that $n, m > N$ implies $\|\eta_n - \eta_m\| < \varepsilon$. Then
+ \[
+ \|\eta_n - \xi\|^2 = \bra \eta_n - \xi, \eta_n - \xi\ket \leq |\bra \eta_m - \xi, \eta_n - \xi\ket| + \varepsilon\|\eta_n - \xi\|.
+ \]
 Taking the limit as $m \to \infty$, the first term vanishes, and this tells us $\|\eta_n - \xi\| \leq \varepsilon$. So $\eta_n \to \xi$.
+
 But this is bad. Since $\eta_k \in (\mathcal{H}^p)^\perp$, and $(\mathcal{H}^p)^\perp$ is closed, we know $\xi \in (\mathcal{H}^p)^\perp$. But also by assumption, we have $\xi \in \mathcal{H}^p$. So $\xi = 0$. But we \emph{also} know $\|\xi\| = \lim \|\eta_k\| = 1$, which is a contradiction. So $\ell$ is bounded.
+
+ We then extend $\ell$ to any bounded linear map on $\Omega^p(M)$. Then we are done.
+\end{proof}
+
That was a correct proof, but we just pulled a bunch of theorems out of nowhere, and non-analysts might not be sufficiently convinced. We now look at an explicit example, namely the torus, and sketch a direct proof of Hodge decomposition. In this case, what we needed for the proof reduces to the fact that Fourier series and Green's functions work, which is IB Methods.
+
Consider $M = T^n = \R^n/(2\pi \Z)^n$, the $n$-torus with flat metric. This has local coordinates $(x_1, \cdots, x_n)$, induced from the Euclidean space. This is convenient because $\exterior^p T^*M$ is trivialized by $\{\d x^{i_1} \wedge \cdots \wedge \d x^{i_p}\}$. Moreover, the Laplacian is just given by
+\[
+ \Delta (\alpha \; \d x^{i_1} \wedge \cdots \wedge \d x^{i_p}) = -\sum_{i = 1}^n \frac{\partial^2 \alpha}{\partial x_i^2}\;\d x^{i_1} \wedge \cdots \wedge \d x^{i_p}.
+\]
So to do Hodge decomposition, it suffices to consider the case $p = 0$, and we are just looking at functions in $C^\infty(T^n)$, namely the smooth functions on $\R^n$ that are $2\pi$-periodic in each variable.
+
+Here we will quote the fact that Fourier series work.
+
+\begin{fact}
+ Let $\varphi \in C^\infty (T^n)$. Then it can be (uniquely) represented by a convergent Fourier series
+ \[
+ \varphi(x) = \sum_{k \in \Z^n} \varphi_k e^{ik \cdot x},
+ \]
+ where $k$ and $x$ are vectors, and $k \cdot x$ is the standard inner product, and this is uniformly convergent in all derivatives. In fact, $\varphi_k$ can be given by
+ \[
+ \varphi_k = \frac{1}{(2\pi)^n} \int_{T^n} \varphi(x) e^{-ik\cdot x}\;\d x.
+ \]
+ Consider the inner product
+ \[
 \bra \varphi, \psi\ket = (2\pi)^n \sum \bar\varphi_k \psi_k
+ \]
+ on $\ell^2$, and define the subspace
+ \[
+ H_\infty = \left\{(\varphi_k) \in \ell^2: \varphi_k = o(|k|^m) \text{ for all }m \in \Z\right\}.
+ \]
+ Then the map
+ \begin{align*}
+ \mathcal{F}: C^\infty(T^n) &\to \ell^2\\
 \varphi &\mapsto (\varphi_k)
+ \end{align*}
+ is an isometric bijection onto $H_\infty$.
+\end{fact}
+So we have reduced our problem of working with functions on a torus to working with these infinite series. This makes our calculations rather more explicit.
+
+The key property is that the Laplacian is given by
+\[
 \mathcal{F}(\Delta \varphi) = (|k|^2 \varphi_k).
+\]
+In some sense, $\mathcal{F}$ ``diagonalizes'' the Laplacian. It is now clear that
+\begin{align*}
+ \mathcal{H}^0 &= \{\varphi \in C^\infty(T^n) : \varphi_k = 0\text{ for all }k \not =0\}\\
+ (\mathcal{H}^0)^\perp &= \{\varphi \in C^\infty(T^n) : \varphi_0 = 0\}.
+\end{align*}
+Moreover, since we can divide by $|k|^2$ whenever $k$ is non-zero, it follows that $(\mathcal{H}^0)^\perp = \Delta C^\infty(T^n)$.
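We can see the regularity theorem at work concretely in this setting.
\begin{eg}
 Suppose $\alpha \in C^\infty(T^n)$ satisfies $\alpha_0 = 0$, i.e.\ $\alpha \in (\mathcal{H}^0)^\perp$. Since
 \[
 \Delta e^{ik \cdot x} = -\sum_{i = 1}^n \frac{\partial^2}{\partial x_i^2} e^{ik \cdot x} = |k|^2 e^{ik \cdot x},
 \]
 the function $\omega$ with Fourier coefficients
 \[
 \omega_k =
 \begin{cases}
 \alpha_k / |k|^2 & k \not= 0\\
 0 & k = 0
 \end{cases}
 \]
 satisfies $\Delta \omega = \alpha$. Dividing by $|k|^2$ only improves the decay of the coefficients, so $(\omega_k) \in H_\infty$, and hence $\omega$ is a genuine smooth solution, just as the regularity theorem promises.
\end{eg}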
+
+%Now $\ell$ is a weak solution of $\Delta \omega = \alpha$ if $\ell(\Delta \varphi) = \bra \alpha, \varphi\ket$ for all $\varphi \in C^\infty(T^n)$. In terms of the coefficients, this means
+%\[
+% \ell((-|k|^2 \varphi_k)) = (2\pi)^n \sum_k \bar{\alpha}_k \varphi_k.
+%\]
+%Now consider the $\R$-linear map $\lambda: H_\infty\to H_\infty$ by
+%\[
+% (\alpha_k) \mapsto \left(\lambda_k(\alpha) =
+% \begin{cases}
+% \frac{\alpha_k}{-|k|^2} & k \not= 0\\
+% 0 & k = 0
+% \end{cases}\right).
+%\]
+%This defines the \emph{Green's operator} $G: C^\infty(T^n) \to C^\infty(T^n)$ satisfying
+%\[
+% \begin{tikzcd}
+% C^\infty(T^n) \ar[r, "G"] \ar[d] & C^\infty(T^n) \ar[d]\\
+% H_\infty \ar[r, "\lambda"] & H_\infty
+% \end{tikzcd}.
+%\]
+%It is immediate that we have $G(\alpha) \in (\mathcal{H}^0)^\perp$ for all $\alpha$. We set
+%\[
+% \ell(\beta) = \bra G(\alpha), \beta\ket,
+%\]
+%which is bounded by Cauchy--Schwartz. In fact, the strong solution is $\omega = G(\alpha) \in C^\infty(T^n)$. This is what was promised by the regularity theorem.
+%
+%For the compactness part, we restrict to $n = 1$, so that $T^n = S^1$. We efine the ``\term{Hilbert cube}'' by
+%\[
+% K = \left\{\varphi_k) \in \ell^2: \varphi_{-k} = \bar{\varphi}_k \in \C\text{ and }|\varphi_k| \frac{1}{|k}\text{ for all }k \not= 0\right\}.
+%\]
+%It is an exercise to show that $K$ is sequentially compact, i.e.\ Bolzano--Weierstrass holds.
+%
+%The key property in all of the above is that $\Delta$ corresponds in $\ell^2$ to the diagonal map
+%\[
+% \varphi \mapsto -|k|^2 \varphi_k.
+%\]
+%Since $|k|^2$ is non-zero for all but one $k$. So we were able to invert it on the complement of a finite-dimensional subspace. Such operators are particularly nice in analysis, and this is related to them being \term{elliptic}.
+%
+%More can be found in Well's ``Differentiable analysis on complex manifolds''.
+%
+%We now start a new topic which is more geometric in nature, but continues certain thoughts in Hodge decomposition.
+
+\subsection{Divergence}
+In ordinary multi-variable calculus, we had the notion of the divergence. This makes sense in general as well. Given any $X \in \Vect(M)$, we have
+\[
+ \nabla X \in \Gamma(TM \otimes T^*M) = \Gamma(\End TM).
+\]
+Now we can take the trace of this, because the trace doesn't depend on the choice of the basis.
+\begin{defi}[Divergence]\index{divergence}
+ The \emph{divergence} of a vector field $X \in \Vect(M)$ is
+ \[
+ \div X = \tr (\nabla X).
+ \]
+\end{defi}
It is not hard to see that this extends the familiar definition of the divergence. Indeed, by definition of trace, for any local orthonormal frame field $\{e_i\}$, we have
+\[
+ \div X = \sum_{i = 1}^n g(\nabla_{e_i} X, e_i).
+\]
+It is straightforward to prove from definition that
+\begin{prop}
+ \[
+ \div (fX) = \tr(\nabla (fX)) = f \div X + \bra \d f, X\ket.
+ \]
+\end{prop}
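To check that this agrees with the usual divergence, we can compute in the Euclidean case.
\begin{eg}
 Take $M = \R^n$ with the Euclidean metric, and let $X = \sum_j X^j \partial_j$. Then $\nabla_{\partial_i} X = \sum_j \frac{\partial X^j}{\partial x^i} \partial_j$, and taking the trace gives
 \[
 \div X = \sum_{i = 1}^n \frac{\partial X^i}{\partial x^i},
 \]
 which is the divergence of ordinary multi-variable calculus.
\end{eg}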
+The key result about divergence is the following:
+\begin{thm}
+ Let $\theta \in \Omega^1(M)$, and let $X_\theta \in \Vect(M)$ be such that $\bra \theta, V\ket = g(X_\theta, V)$ for all $V \in TM$. Then
+ \[
+ \delta \theta = - \div X_\theta.
+ \]
+\end{thm}
+So the divergence isn't actually a new operator. However, we have some rather more explicit formulas for the divergence, and this helps us understand $\delta$ better.
+
+To prove this, we need a series of lemmas.
+\begin{lemma}
+ In local coordinates, for any $p$-form $\psi$, we have
+ \[
+ \d \psi = \sum_{k = 1}^n \d x^k \wedge \nabla_k \psi.
+ \]
+\end{lemma}
+
+\begin{proof}
 We fix a point $x \in M$, and we may wlog work in normal coordinates at $x$. By linearity and change of coordinates, we wlog
+ \[
+ \psi = f\; \d x^1 \wedge \cdots \wedge \d x^p.
+ \]
+ Now the left hand side is just
+ \[
+ \d \psi = \sum_{k = p + 1}^n \frac{\partial f}{\partial x^k}\; \d x^k \wedge \d x^1 \wedge \cdots \wedge \d x^p.
+ \]
 But this is also what the RHS is, because $\nabla_k = \partial_k$ at $x$.
+\end{proof}
We also need the notion of the interior product, which is useful in its own right.
+\begin{defi}[Interior product]\index{$i(X)$}
+ Let $X \in \Vect(M)$. We define the \term{interior product} $i(X): \Omega^p(M) \to \Omega^{p - 1}(M)$ by
+ \[
+ (i(X)\psi)(Y_1, \cdots, Y_{p - 1}) = \psi(X, Y_1, \cdots, Y_{p - 1}).
+ \]
+ This is sometimes written as $i(X) \psi = X \lrcorner \psi$.
+\end{defi}
+
+\begin{lemma}
+ We have
+ \[
+ (\div X)\;\omega_g = \d(i(X)\;\omega_g),
+ \]
+ for all $X \in \Vect(M)$.
+\end{lemma}
+
+\begin{proof}
+ Now by unwrapping the definition of $i(X)$, we see that
+ \[
+ \nabla_Y (i (X) \psi) = i(\nabla_Y X)\psi + i(X) \nabla_Y \psi.
+ \]
+ From example sheet 3, we know that $\nabla \omega_g = 0$. So it follows that
+ \[
+ \nabla_Y (i(X)\;\omega_g) = i(\nabla_Y X)\;\omega_g.
+ \]
+ Therefore we obtain
+ \begin{align*}
+ &\hphantom{={}}d(i(X) \omega_g) \\
+ &= \sum_{k = 1}^n \d x^k \wedge \nabla_k (i(X) \omega_g)\\
+ &= \sum_{k = 1}^n \d x^k \wedge i(\nabla_k X) \omega_g\\
+ &= \sum_{k = 1}^n \d x^k \wedge i (\nabla_k X) (\sqrt{|g|} \d x^1 \wedge \cdots \wedge \d x^n)\\
 &= \sum_{k = 1}^n \d x^k(\nabla_k X)\; \omega_g\\
+ &= (\div X)\; \omega_g.
+ \end{align*}
+ Note that this requires us to think carefully how wedge products work ($i(X)(\alpha \wedge \beta)$ is not just $\alpha(X) \beta$, or else $\alpha \wedge \beta$ would not be anti-symmetric).
+\end{proof}
+
+\begin{cor}[Divergence theorem]\index{divergence theorem}
+ For any vector field $X$, we have
+ \[
+ \int_M \div(X) \;\omega_g = \int_M \d (i(X) \;\omega_g) = 0.
+ \]
+\end{cor}
+We can now prove the theorem.
+
+\begin{thm}
+ Let $\theta \in \Omega^1(M)$, and let $X_\theta \in \Vect(M)$ be such that $\bra \theta, V\ket = g(X_\theta, V)$ for all $V \in TM$. Then
+ \[
+ \delta \theta = - \div X_\theta.
+ \]
+\end{thm}
+
+\begin{proof}
+ By the formal adjoint property of $\delta$, we know that for any $f \in C^\infty(M)$, we have
+ \[
+ \int_M g(\d f, \theta)\;\omega_g = \int_M f \delta \theta\; \omega_g.
+ \]
+ So we want to show that
+ \[
 \int_M g(\d f, \theta)\;\omega_g = - \int_M f\div X_\theta \; \omega_g.
+ \]
+ But by the product rule, we have
+ \[
+ \int_M \div (f X_\theta)\; \omega_g = \int_M g(\d f, \theta) \;\omega_g + \int_M f \div X_\theta\; \omega_g.
+ \]
+ So the result follows by the divergence theorem.
+\end{proof}
+
+We can now use this to produce some really explicit formulae for what $\delta$ is, which will be very useful next section.
+\begin{cor}
+ If $\theta$ is a $1$-form, and $\{e_k\}$ is a local orthonormal frame field, then
+ \[
 \delta \theta = - \sum_{k = 1}^n i(e_k) \nabla_{e_k} \theta = - \sum_{k = 1}^n \bra \nabla_{e_k} \theta, e_k\ket.
+ \]
+\end{cor}
+
+\begin{proof}
+ We note that
+ \begin{align*}
+ e_i \bra \theta, e_i\ket &= \bra \nabla_{e_i} \theta, e_i\ket + \bra \theta, \nabla_{e_i}e_i\ket\\
+ e_i g(X_\theta, e_i) &= g(\nabla_{e_i} X_\theta, e_i) + g(X_\theta, \nabla_{e_i}e_i).
+ \end{align*}
+ By definition of $X_\theta$, this implies that
+ \[
+ \bra \nabla_{e_i} \theta, e_i\ket = g(\nabla_{e_i} X_\theta, e_i).
+ \]
+ So we obtain
+ \[
 \delta \theta = - \div X_\theta = -\sum_{i = 1}^n g(\nabla_{e_i} X_\theta, e_i) = - \sum_{i = 1}^n \bra \nabla_{e_i} \theta, e_i\ket.\qedhere
+ \]
+\end{proof}
+
+We will assume a version for $2$-forms (the general result is again on the third example sheet):
+\begin{prop}
+ If $\beta \in \Omega^2(M)$, then
+ \[
+ (\delta \beta)(Y) = - \sum_{k = 1}^n (\nabla_{e_k} \beta)(e_k, Y).
+ \]
+ In other words,
+ \[
+ \delta \beta = - \sum_{k = 1}^n i(e_k) (\nabla_{e_k}\beta).
+ \]
+\end{prop}
+
+\subsection{Introduction to Bochner's method}
How can we apply the Hodge decomposition theorem? The Hodge decomposition theorem tells us the de Rham cohomology group is isomorphic to the kernel of the Laplace--Beltrami operator $\Delta$. So if we want to show, say, $H^1_{\dR}(M) = 0$, then we want to show that $\Delta \alpha \not= 0$ for all non-zero $\alpha \in \Omega^1(M)$. The strategy is to show that
+\[
+ \bra \bra \alpha, \Delta \alpha\ket\ket \not= 0
+\]
+for all $\alpha \not= 0$. Then we have shown that $H^1_{\dR}(M) = 0$. In fact, we will show that this inner product is positive. To do so, the general strategy is to introduce an operator $T$ with adjoint $T^*$, and then write
+\[
+ \Delta = T^* T + C
+\]
+for some operator $C$. We will choose $T$ cleverly such that $C$ is very simple.
+
+Now if we can find a manifold such that $C$ is always positive, then since
+\[
 \bra \bra T^*T \alpha, \alpha\ket\ket = \bra \bra T \alpha, T \alpha\ket\ket \geq 0,
+\]
+it follows that $\Delta$ is always positive, and so $H^1_{\dR}(M) = 0$.
+
+Our choice of $T$ will be the covariant derivative $\nabla$ itself. We can formulate this more generally. Suppose we have the following data:
+\begin{itemize}
+ \item A Riemannian manifold $M$.
+ \item A vector bundle $E \to M$.
+ \item An inner product $h$ on $E$.
+ \item A connection $\nabla = \nabla^E : \Omega^0(E) \to \Omega^1(E)$ on $E$.
+\end{itemize}
+We are eventually going to take $E = T^*M$, but we can still proceed in the general setting for a while.
+
+The formal adjoint $(\nabla^E)^*: \Omega^1(E) \to \Omega^0(E)$ is defined by the relation
+\[
+ \int_M \bra \nabla \alpha, \beta\ket_{E, g}\; \omega_g = \int_M \bra \alpha, \nabla^* \beta\ket_E\; \omega_g
+\]
+for all $\alpha \in \Omega^0(E)$ and $\beta \in \Omega^1(E)$. Since $h$ is non-degenerate, this defines $\nabla^*$ uniquely.
+
+\begin{defi}[Covariant Laplacian]
+ The \term{covariant Laplacian} is
+ \[
+ \nabla^* \nabla : \Gamma(E) \to \Gamma(E)
+ \]
+\end{defi}
+
We are now going to focus on the case $E = T^*M$. It is helpful to have an explicit formula for $\nabla^*$, which we will derive below.
+
+As mentioned, the objective is to understand $\Delta - \nabla^* \nabla$. The theorem is that this difference is given by the Ricci curvature.
+
+This can't be quite right, because the Ricci curvature is a bilinear form on $TM^2$, but $\Delta - \nabla^* \nabla$ is a linear endomorphism $\Omega^1(M) \to \Omega^1(M)$. Thus, we need to define an alternative version of the Ricci curvature by ``raising indices''. In coordinates, we consider $g^{jk}\Ric_{ik}$ instead.
+
We can also define this raised-index version of the Ricci curvature without resorting to coordinates. Recall that given an $\alpha \in \Omega^1(M)$, we defined $X_\alpha \in \Vect(M)$ to be the unique field such that
+\[
 \alpha(Z) = g(X_\alpha, Z)
+\]
+for all $Z \in \Vect(M)$. Then given $\alpha \in \Omega^1(M)$, we define $\Ric(\alpha) \in \Omega^1(M)$ by
+\[
+ \Ric(\alpha)(X) = \Ric(X, X_\alpha).
+\]
+With this notation, the theorem is
+\begin{thm}[Bochner--Weitzenb\"ock formula]\index{Bochner--Weitzenb\"ock formula}
+ On an oriented Riemannian manifold, we have
+ \[
+ \Delta = \nabla^* \nabla + \Ric.
+ \]
+\end{thm}
+Before we move on to the proof of this formula, we first give an application.
+
+\begin{cor}
+ Let $(M, g)$ be a compact connected oriented manifold. Then
+ \begin{itemize}
+ \item If $\Ric(g) > 0$ at each point, then $H_{\dR}^1(M) = 0$.
+ \item If $\Ric(g) \geq 0$ at each point, then $b^1(M) = \dim H_{\dR}^1(M) \leq n$.
+ \item If $\Ric(g) \geq 0$ at each point, and $b^1(M) = n$, then $g$ is flat.
+ \end{itemize}
+\end{cor}
+
+\begin{proof}
+ By Bochner--Weitzenb\"ock, we have
+ \begin{align*}
+ \bra \bra \Delta \alpha, \alpha\ket\ket &= \bra \bra \nabla^* \nabla \alpha, \alpha\ket \ket + \int_M \Ric(\alpha, \alpha)\;\omega_g\\
+ &= \|\nabla \alpha\|^2_2 + \int_M \Ric(\alpha, \alpha)\;\omega_g.
+ \end{align*}
+
+ \begin{itemize}
+ \item Suppose $\Ric > 0$. If $\alpha \not= 0$, then the RHS is strictly positive. So the left-hand side is non-zero. So $\Delta \alpha \not =0$. So $\mathcal{H}_M^1 \cong H_{\dR}^1(M) = 0$.
+
 \item Suppose $\alpha$ is such that $\Delta \alpha = 0$. Then the above formula forces $\nabla \alpha = 0$. So if we know $\alpha(x)$ for some fixed $x \in M$, then we know the value of $\alpha$ everywhere by parallel transport. Thus $\alpha$ is determined by the initial condition $\alpha(x)$, and so there are at most $n = \dim T_x^* M$ linearly independent such $\alpha$.
+
+ \item If $b^1(M) = n$, then we can pick a basis $\alpha_1, \cdots, \alpha_n$ of $\mathcal{H}^1_M$. Then as above, these are parallel $1$-forms. Then we can pick a dual basis $X_1, \cdots, X_n \in \Vect(M)$. We claim they are also parallel, i.e.\ $\nabla X_i = 0$. To prove this, we note that
+ \[
+ \bra \alpha_j, \nabla X_i\ket + \bra \nabla \alpha_j, X_i\ket = \nabla \bra \alpha_j, X_i\ket.
+ \]
 But $\bra \alpha_j, X_i\ket$ is constantly $0$ or $1$ depending on $i$ and $j$, so the RHS vanishes. Similarly, the second term on the left vanishes. Since the $\alpha_j$ span, we know we must have $\nabla X_i = 0$.
+
+ Now we have
+ \[
 R(X_i, X_j) X_k = (\nabla_{[X_i, X_j]} - [\nabla_{X_i}, \nabla_{X_j}]) X_k = 0,
 \]
 since the $X_i$ are parallel and $[X_i, X_j] = \nabla_{X_i} X_j - \nabla_{X_j} X_i = 0$. As this is valid for all $i, j, k$, we know $R$ vanishes at each point. So we are done.\qedhere
+ \end{itemize}
+\end{proof}
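The hypotheses in the corollary are all realized by familiar examples.
\begin{eg}
 The round sphere $S^n$ with $n \geq 2$ has $\Ric(g) = (n - 1)g > 0$, so the first part tells us $H^1_{\dR}(S^n) = 0$. On the other hand, the flat torus $T^n$ has $\Ric(g) = 0$, and the forms $\d x^1, \cdots, \d x^n$ are parallel, hence harmonic. So $b^1(T^n) = n$, and the bound in the second part is attained by a flat metric, as the third part predicts.
\end{eg}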
+Bochner--Weitzenb\"ock can be exploited in a number of similar situations.
+
In the third part of the theorem, we haven't actually proved the optimal statement. We can deduce more than the flatness of the metric, but this requires some slightly more advanced topology. We will provide a sketch proof of the result, making certain assertions about the topology involved.
+
+\begin{prop}
+ In the case of (iii), $M$ is in fact isometric to a flat torus.
+\end{prop}
+
+\begin{proof}[Proof sketch] % check this
+ We fix $p \in M$ and consider the map $M \to \R^n$ given by
+ \[
+ x \mapsto \left(\int_p^x \alpha_i\right)_{i = 1, \cdots, n} \in \R^n,
+ \]
 where the $\alpha_i$ are as in the previous proof. The integral is taken along any path from $p$ to $x$, so a priori the result is not well-defined. But by Stokes' theorem, and the fact that $\d \alpha_i = 0$, the integral only depends on the homotopy class of the path.
+
 In fact, the ambiguity in $\int_p^x \alpha_i$ depends only on the class of the loop in $H_1(M)$, which is finitely generated. Thus, $\int_p^x \alpha_i$ is a well-defined map to $S^1 = \R/\lambda_i \Z$ for some $\lambda_i \not= 0$. Therefore we obtain a map $M \to (S^1)^n = T^n$. Moreover, a bit of inspection shows this is a local diffeomorphism. But since the spaces involved are compact, it follows by some topology arguments that it must be a covering map. But again by compactness, this is a finite covering map. So $M$ must be a torus. So we are done.
+\end{proof}
+
We only proved this for $1$-forms, but this is in fact valid for forms of any degree. To do so, we consider $E = \exterior^p T^* M$, and then we have a map
+\[
+ \nabla: \Omega_M^0(E) \to \Omega_M^1(E),
+\]
+and this has a formal adjoint
+\[
+ \nabla^*: \Omega_M^1(E) \to \Omega_M^0(E).
+\]
+Now if $\alpha \in \Omega^p(M)$, then it can be shown that
+\[
+ \Delta \alpha = \nabla^* \nabla \alpha + \mathfrak{R}(\alpha),
+\]
+where $\mathfrak{R}$ is a linear map $\Omega^p(M) \to \Omega^p(M)$ depending on the curvature. Then by the same proof, it follows that if $\mathfrak{R} > 0$ at all points, then $\mathcal{H}^k(M) = 0$ for all $k = 1, \cdots, n - 1$.
+
+If $\mathfrak{R} \geq 0$ only, which in particular is the case if the space is flat, then we have
+\[
+ b^k(M) \leq \binom{n}{k} = \dim \exterior^k T^* M,
+\]
+and moreover $\Delta \alpha = 0$ iff $\nabla \alpha = 0$.
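Again, the flat torus shows that this bound is attained.
\begin{eg}
 On the flat torus $T^n$, the constant-coefficient forms $\d x^{i_1} \wedge \cdots \wedge \d x^{i_k}$ are parallel, hence harmonic. Conversely, since the Laplacian acts on coefficients componentwise here, the coefficients of any harmonic form are harmonic functions, hence constant. So these forms span $\mathcal{H}^k$, and $b^k(T^n) = \binom{n}{k}$ for every $k$.
\end{eg}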
+
+\subsubsection*{Proof of Bochner--Weitzenb\"ock}
+We now move on to actually prove Bochner--Weitzenb\"ock. We first produce an explicit formula for $\nabla^*$, and hence $\nabla^* \nabla$.
+
+\begin{prop}
+ Let $e_1, \cdots, e_n$ be an orthonormal frame field, and $\beta \in \Omega^1(T^*M)$. Then we have
+ \[
+ \nabla^* \beta = - \sum_{i = 1}^n i(e_i) \nabla_{e_i}\beta.
+ \]
+\end{prop}
+
+\begin{proof}
+ Let $\alpha \in \Omega^0(T^*M)$. Then by definition, we have
+ \[
+ \bra \nabla \alpha, \beta\ket = \sum_{i = 1}^n \bra \nabla_{e_i} \alpha, \beta(e_i)\ket.
+ \]
+ Consider the $1$-form given by
+ \[
+ \theta(Y) = \bra \alpha, \beta(Y)\ket.
+ \]
+ Then we have
+ \begin{align*}
+ \div X_\theta &= \sum_{i = 1}^n \bra \nabla_{e_i} X_\theta, e_i \ket\\
+ &= \sum_{i = 1}^n \nabla_{e_i}\bra X_\theta, e_i\ket - \bra X_\theta, \nabla_{e_i}e_i\ket\\
+ &= \sum_{i = 1}^n \nabla_{e_i} \bra \alpha, \beta(e_i)\ket - \bra \alpha, \beta(\nabla_{e_i} e_i)\ket\\
+ &= \sum_{i = 1}^n \bra \nabla_{e_i} \alpha, \beta(e_i)\ket + \bra \alpha, \nabla_{e_i}(\beta(e_i))\ket - \bra \alpha, \beta(\nabla_{e_i} e_i)\ket\\
+ &= \sum_{i = 1}^n \bra \nabla_{e_i} \alpha, \beta(e_i)\ket + \bra \alpha, (\nabla_{e_i} \beta) (e_i)\ket.
+ \end{align*}
+ So by the divergence theorem, we have
+ \[
+ \int_M \bra \nabla \alpha, \beta\ket\; \omega_g = \int_M \sum_{i = 1}^n \bra \alpha, (\nabla_{e_i}\beta)(e_i)\ket\; \omega_g.
+ \]
+ So the result follows.
+\end{proof}
+
+\begin{cor}
 For a local orthonormal frame field $e_1, \cdots, e_n$ with $\nabla e_i = 0$ at a point $p$, we have, at $p$,
+ \[
+ \nabla^* \nabla \alpha = -\sum_{i = 1}^n \nabla_{e_i} \nabla_{e_i} \alpha.
+ \]
+\end{cor}
+
+We next want to figure out more explicit expressions for $\d \delta$ and $\delta \d$. To make our lives much easier, we will pick a normal frame field:
+
+\begin{defi}[Normal frame field]\index{normal frame field}
 A local orthonormal frame field $\{e_k\}$ is \emph{normal} at $p$ if, in addition,
+ \[
+ \nabla e_k|_p = 0
+ \]
+ for all $k$.
+\end{defi}
+It is a fact that normal frame fields exist. From now on, we will fix a point $p \in M$, and assume that $\{e_k\}$ is a normal orthonormal frame field at $p$. Thus, the formulae we derive are only valid at $p$, but this is fine, because $p$ was arbitrary.
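One standard construction of such a frame is by parallel transport.
\begin{eg}
 Pick an orthonormal basis of $T_p M$, and extend it to a neighbourhood of $p$ by parallel transport along radial geodesics from $p$. Parallel transport preserves inner products, so the resulting frame field $\{e_k\}$ is orthonormal. Moreover, at $p$ every direction is radial, so $\nabla_v e_k|_p = 0$ for all $v \in T_p M$, i.e.\ the frame is normal at $p$.
\end{eg}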
+
+The first term $\d \delta$ is relatively straightforward.
+\begin{lemma}
+ Let $\alpha \in \Omega^1(M)$, $X \in \Vect(M)$. Then
+ \[
+ \bra \d \delta \alpha, X \ket = - \sum_{i = 1}^n \bra \nabla_X \nabla_{e_i} \alpha, e_i\ket.
+ \]
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ \bra \d \delta \alpha, X\ket &= X( \delta \alpha) \\
+ &= - \sum_{i = 1}^n X \bra \nabla_{e_i} \alpha, e_i\ket\\
+ &= - \sum_{i = 1}^n \bra \nabla_X \nabla_{e_i} \alpha, e_i\ket.\qedhere
+ \end{align*}
+\end{proof}
+
This takes care of one half of $\Delta$. For the other half, we need a bit more work. Recall that we previously found a formula for $\delta$. We now re-express the formula in terms of this local orthonormal frame field.
+
+\begin{lemma}
+ For any $2$-form $\beta$, we have
+ \[
+ (\delta \beta)(X) = \sum_{k = 1}^n -e_k( \beta(e_k, X)) + \beta(e_k, \nabla_{e_k} X).
+ \]
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ (\delta \beta)(X) &= - \sum_{k = 1}^n (\nabla_{e_k} \beta)(e_k, X)\\
+ &= \sum_{k = 1}^n -e_k( \beta(e_k, X)) + \beta(\nabla_{e_k} e_k, X) + \beta(e_k, \nabla_{e_k} X)\\
+ &= \sum_{k = 1}^n -e_k( \beta(e_k, X)) + \beta(e_k, \nabla_{e_k} X).\qedhere
+ \end{align*}
+\end{proof}
+
+Since we want to understand $\delta \d \alpha$ for $\alpha$ a $1$-form, we want to find a decent formula for $\d \alpha$.
+\begin{lemma}
+ For any $1$-form $\alpha$ and vector fields $X, Y$, we have
+ \[
+ \d \alpha(X, Y) = \bra \nabla_X \alpha, Y\ket - \bra \nabla_Y \alpha, X\ket.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Since the connection is torsion-free, we have
+ \[
+ [X, Y] = \nabla_X Y - \nabla_Y X.
+ \]
+ So we obtain
+ \begin{align*}
+ \d \alpha(X, Y) &= X \bra \alpha, Y\ket - Y \bra \alpha, X\ket - \bra \alpha, [X, Y]\ket \\
+ &= \bra \nabla_X \alpha, Y\ket - \bra \nabla_Y \alpha, X\ket.\qedhere
+ \end{align*}
+\end{proof}
+
+Finally, we can put these together to get
+\begin{lemma}
+ For any $1$-form $\alpha$ and vector field $X$, we have
+ \[
+ \bra \delta\d \alpha, X\ket = - \sum_{k = 1}^n \bra \nabla_{e_k} \nabla_{e_k} \alpha, X\ket + \sum_{k = 1}^n \bra \nabla_{e_k} \nabla_X \alpha, e_k\ket - \sum_{k = 1}^n \bra \nabla_{\nabla_{e_k} X} \alpha, e_k\ket.
+ \]
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ \bra \delta\d \alpha, X\ket &= \sum_{k = 1}^n \Big[-e_k (\d \alpha(e_k, X)) + \d \alpha(e_k, \nabla_{e_k} X)\Big]\\
+ &= \sum_{k = 1}^n \Big[- e_k(\bra \nabla_{e_k}\alpha, X\ket - \bra \nabla_X \alpha, e_k\ket)\\
+ &\hphantom{aaaaaaaaaaaaaaaaaaaaa}+ \bra \nabla_{e_k}\alpha, \nabla_{e_k} X\ket - \bra\nabla_{\nabla_{e_k}X} \alpha, e_k\ket\Big]\\
 &= \sum_{k = 1}^n \Big[- \bra \nabla_{e_k}\nabla_{e_k}\alpha, X\ket - \bra \nabla_{e_k}\alpha, \nabla_{e_k}X\ket + \bra \nabla_{e_k}\nabla_X \alpha, e_k\ket\\
+ &\hphantom{aaaaaaaaaaaaaaaaaaaaa}+ \bra \nabla_{e_k}\alpha, \nabla_{e_k} X\ket - \bra\nabla_{\nabla_{e_k}X} \alpha, e_k\ket\Big]\\
+ &= - \sum_{k = 1}^n \bra \nabla_{e_k} \nabla_{e_k} \alpha, X\ket + \sum_{k = 1}^n \bra \nabla_{e_k} \nabla_X \alpha, e_k\ket - \sum_{k = 1}^n \bra \nabla_{\nabla_{e_k} X} \alpha, e_k\ket.\qedhere
+ \end{align*}
+\end{proof}
+
+What does this get us? The first term on the right is exactly the $\nabla^* \nabla$ term we wanted. If we add $\d \delta \alpha$ to this, then we get
+\[
+ \sum_{k = 1}^n \bra ([\nabla_{e_k}, \nabla_X] - \nabla_{\nabla_{e_k}X}) \alpha, e_k\ket.
+\]
We notice that, since $\nabla_X e_k$ vanishes at the point where our normal frame was chosen,
\[
 [e_k, X] = \nabla_{e_k}X - \nabla_X e_k = \nabla_{e_k}X.
\]
+So we can alternatively write the above as
+\[
+ \sum_{k = 1}^n \bra ([\nabla_{e_k}, \nabla_X] - \nabla_{[e_k, X]}) \alpha, e_k\ket.
+\]
+The differential operator on the left looks just like the Ricci curvature. Recall that
+\[
+ R(X, Y) = \nabla_{[X, Y]} - [\nabla_X, \nabla_Y].
+\]
+\begin{lemma}[Ricci identity]\index{Ricci identity}
+ Let $M$ be any Riemannian manifold, and $X, Y, Z \in \Vect(M)$ and $\alpha \in \Omega^1(M)$. Then
+ \[
+ \bra ([\nabla_X, \nabla_Y] - \nabla_{[X, Y]})\alpha, Z\ket = \bra \alpha, R(X, Y) Z\ket.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We note that
+ \[
+ \bra \nabla_{[X, Y]} \alpha, Z\ket + \bra \alpha, \nabla_{[X, Y]}Z\ket = [X, Y] \bra \alpha, Z\ket = \bra [\nabla_X, \nabla_Y]\alpha, Z\ket + \bra \alpha, [\nabla_X, \nabla_Y] Z\ket.
+ \]
+ The second equality follows from writing $[X, Y] = XY - YX$. We then rearrange and use that $R(X, Y) = \nabla_{[X, Y]} - [\nabla_X, \nabla_Y]$.
+\end{proof}
+
+\begin{cor}
+ For any $1$-form $\alpha$ and vector field $X$, we have
+ \[
+ \bra \Delta \alpha, X\ket = \bra \nabla^* \nabla \alpha, X\ket + \Ric(\alpha)(X).
+ \]
+\end{cor}
+This is the theorem we wanted.
+
+\begin{proof}
+ We have found that
+ \[
 \bra \Delta \alpha, X\ket = \bra \nabla^* \nabla \alpha, X\ket + \sum_{k = 1}^n \bra \alpha, R(e_k, X) e_k\ket.
+ \]
+ We have
+ \[
 \sum_{k = 1}^n \bra \alpha, R(e_k, X)e_k\ket = \sum_{k = 1}^n g(X_\alpha, R(e_k, X) e_k) = \Ric(X_\alpha, X) = \Ric(\alpha)(X).
+ \]
+ So we are done.
+\end{proof}
+
+
+\section{Riemannian holonomy groups}
Again let $M$ be a Riemannian manifold, which is always assumed to be connected. Let $x, y \in M$, and consider a path $\gamma \in \Omega(x, y)$, $\gamma: [0, 1] \to M$. Near the beginning of the course, we saw that $\gamma$ gives us a parallel transport from $T_x M$ to $T_y M$. Explicitly, given any $X_0 \in T_x M$, there exists a unique vector field $X$ along $\gamma$ with
+\[
+ \frac{\nabla X}{\d t} = 0,\quad X(0) = X_0.
+\]
+\begin{defi}[Holonomy transformation]\index{holonomy transformation}
+ The \emph{holonomy transformation} $P(\gamma)$ sends $X_0 \in T_x M$ to $X(1) \in T_y M$.
+\end{defi}
+We know that this map is invertible, and preserves the inner product. In particular, if $x = y$, then $P(\gamma) \in \Or(T_x M) \cong \Or(n)$.
+
+\begin{defi}[Holonomy group]\index{holonomy group}\index{$\Hol_x(M)$}
+ The \emph{holonomy group} of $M$ at $x \in M$ is
+ \[
+ \Hol_x(M) = \{P(\gamma): \gamma \in \Omega(x, x)\} \subseteq \Or(T_x M).
+ \]
+ The group operation is given by composition of linear maps, which corresponds to composition of paths.
+\end{defi}
+
+We note that this group doesn't really depend on the point $x$. Given any other $y \in M$, we can pick a path $\beta \in \Omega(x, y)$. Writing $P_\beta$ instead of $P(\beta)$, we have a map
+\[
+ \begin{tikzcd}[cdmap]
+ \Hol_x(M) \ar[r] & \Hol_y(M)\\
+ P_\gamma \ar[r, maps to] & P_\beta \circ P_\gamma \circ P_{\beta^{-1}} \in \Hol_y(M)
+ \end{tikzcd}.
+\]
So we see that $\Hol_x(M)$ and $\Hol_y(M)$ are isomorphic. In fact, after picking an isomorphism $\Or(T_x M) \cong \Or(T_y M) \cong \Or(n)$, these subgroups are conjugate as subgroups of $\Or(n)$. We denote this conjugacy class by $\Hol(M)$\index{$\Hol(M)$}.
+
Note that depending on what we want to emphasize, we write $\Hol(M, g)$, or even $\Hol(g)$ instead.
+
+Really, $\Hol(M)$ is a \emph{representation} (up to conjugacy) induced by the standard representation of $\Or(n)$ on $\R^n$.
+
+\begin{prop}
+ If $M$ is simply connected, then $\Hol_x(M)$ is path connected.
+\end{prop}
+
+\begin{proof}
+ $\Hol_x(M)$ is the image of $\Omega(x, x)$ in $\Or(n)$ under the map $P$, and this map is continuous from the standard theory of ODE's. Simply connected means $\Omega(x, x)$ is path connected. So $\Hol_x(M)$ is path connected.
+\end{proof}
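Parallel transport is concrete enough to compute. The following numerical sketch is my own illustration, not part of the notes (the function name and step counts are arbitrary choices): on the round unit sphere $S^2$ the outward unit normal at $\gamma(t)$ is $\gamma(t)$ itself, so the parallel transport equation $\nabla_{\dot \gamma} V = 0$ becomes the linear ODE $V'(t) = -(V(t) \cdot \gamma'(t))\, \gamma(t)$, which we integrate around the colatitude-$\theta$ circle.

```python
import math

def transport_colatitude(theta, steps=20000):
    """Parallel-transport a tangent vector once around the colatitude-theta
    circle on the unit sphere and return the resulting holonomy angle.

    On S^2 the unit normal at gamma(t) is gamma(t) itself, so parallel
    transport (V' normal to the sphere) is the linear ODE
        V'(t) = -(V(t) . gamma'(t)) gamma(t).
    """
    s, c = math.sin(theta), math.cos(theta)

    def gamma(t):
        return (s * math.cos(t), s * math.sin(t), c)

    def dgamma(t):
        return (-s * math.sin(t), s * math.cos(t), 0.0)

    def dot(u, w):
        return sum(a * b for a, b in zip(u, w))

    def rhs(t, V):
        coef = dot(V, dgamma(t))
        return tuple(-coef * x for x in gamma(t))

    V = (c, 0.0, -s)  # unit tangent vector at gamma(0) = (sin theta, 0, cos theta)
    h = 2 * math.pi / steps
    t = 0.0
    for _ in range(steps):  # classical RK4 integration
        k1 = rhs(t, V)
        k2 = rhs(t + h / 2, tuple(v + h / 2 * k for v, k in zip(V, k1)))
        k3 = rhs(t + h / 2, tuple(v + h / 2 * k for v, k in zip(V, k2)))
        k4 = rhs(t + h, tuple(v + h * k for v, k in zip(V, k3)))
        V = tuple(v + h / 6 * (a + 2 * b + 2 * cc + d)
                  for v, a, b, cc, d in zip(V, k1, k2, k3, k4))
        t += h
    # express V(2 pi) in the orthonormal basis (e1, e2) of the tangent plane
    e1, e2 = (c, 0.0, -s), (0.0, 1.0, 0.0)
    return math.atan2(dot(V, e2), dot(V, e1))

angle = transport_colatitude(math.pi / 3)
print(abs(angle))  # classical answer: 2 pi (1 - cos(pi/3)) = pi
```

The classical fact is that the transported vector returns rotated by $2\pi(1 - \cos\theta)$; for $\theta = \pi/3$ this is $\pi$, which the integrator reproduces, giving an explicit non-identity element of $\Hol_x(S^2)$.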
+
+It is convenient to consider the \emph{restricted} holonomy group.
+
+\begin{defi}[Restricted holonomy group]\index{restricted holonomy group}\index{holonomy group!restricted}\index{$\Hol^0_x(M)$}
+ We define
+ \[
+ \Hol_x^0(M) = \{P(\gamma): \gamma \in \Omega(x, x) \text{ nullhomotopic}\}.
+ \]
+\end{defi}
+As before, this group is, up to conjugacy, independent of the choice of the point in the manifold. We write this group as $\Hol^0(M)$\index{$\Hol^0(M)$}.
+
+Of course, $\Hol^0(M) \subseteq \Hol(M)$, and if $\pi_1(M) = 0$, then they are equal.
+
+\begin{cor}
 $\Hol^0(M) \subseteq \SO(n)$.
+\end{cor}
+
+\begin{proof}
+ $\Hol^0(M)$ is connected, and thus lies in the connected component of the identity in $\Or(n)$.
+\end{proof}
+
Note that there is a natural action of $\Hol_x(M)$ and $\Hol^0_x(M)$ on $T^*_x M$, on $\exterior^p T^*_x M$ for all $p$, and more generally on tensor products of $T_x M$ and $T_x^* M$.
+
+\begin{fact}\leavevmode
+ \begin{itemize}
+ \item $\Hol^0(M)$ is the connected component of $\Hol(M)$ containing the identity element.
 \item $\Hol^0(M)$ is a \emph{Lie subgroup} of $\SO(n)$, i.e.\ it is a subgroup and an immersed submanifold. Thus, the Lie algebra of $\Hol^0(M)$ is a Lie subalgebra of $\so(n)$, the space of skew-symmetric $n \times n$ matrices.

 This is a consequence of Yamabe's theorem, which says that a path-connected subgroup of a Lie group is a Lie subgroup.
+ \end{itemize}
+\end{fact}
+We will not prove these.
+
+\begin{prop}[Fundamental principle of Riemannian holonomy]
+ Let $(M, g)$ be a Riemannian manifold, and fix $p, q \in \Z_+$ and $x \in M$. Then the following are equivalent:
+ \begin{enumerate}
+ \item There exists a $(p, q)$-tensor field $\alpha$ on $M$ such that $\nabla \alpha = 0$.
+ \item There exists an element $\alpha_0 \in (T_x M)^{\otimes p} \otimes (T_x^* M)^{\otimes q}$ such that $\alpha_0$ is invariant under the action of $\Hol_x(M)$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
 To simplify notation, we consider only the case $p = 0$. The general case works exactly the same way, with worse notation. For $\alpha \in (T_x^* M)^{\otimes q}$, we have
+ \[
+ (\nabla_X \alpha)(X_1, \cdots, X_q) = X(\alpha(X_1, \cdots, X_q)) - \sum_{i = 1}^q \alpha(X_1, \cdots, \nabla_X X_i, \cdots, X_q).
+ \]
 Now let $\gamma: [0, 1] \to M$ be a loop at $x$. We choose vector fields $X_i$ along $\gamma$ for $i = 1, \cdots, q$ such that
+ \[
+ \frac{\nabla X_i}{\d t} = 0.
+ \]
+ We write
+ \[
+ X_i(\gamma(0)) = X_i^0.
+ \]
+ Now if $\nabla \alpha = 0$, then this tells us
+ \[
+ \frac{\nabla \alpha}{\d t}(X_1, \cdots, X_q) = 0.
+ \]
+ By our choice of $X_i$, we know that $\alpha(X_1, \cdots, X_q)$ is constant along $\gamma$. So we know
+ \[
+ \alpha(X_1^0, \cdots,X_q^0) = \alpha(P_\gamma(X_1^0), \cdots, P_\gamma(X_q^0)).
+ \]
+ So $\alpha$ is invariant under $\Hol_x(M)$. Then we can take $\alpha_0 = \alpha_x$.
+
+ Conversely, if we have such an $\alpha_0$, then we can use parallel transport to transfer it to everywhere in the manifold. Given any $y \in M$, we define $\alpha_y$ by
+ \[
+ \alpha_y(X_1, \cdots, X_q) = \alpha_0(P_\gamma(X_1), \cdots, P_\gamma(X_q)),
+ \]
+ where $\gamma$ is any path from $y$ to $x$. This does not depend on the choice of $\gamma$ precisely because $\alpha_0$ is invariant under $\Hol_x(M)$.
+
+ It remains to check that $\alpha$ is $C^\infty$ with $\nabla \alpha = 0$, which is an easy exercise.
+\end{proof}
+
+\begin{eg}
+ Let $M$ be oriented. Then we have a volume form $\omega_g$. Since $\nabla \omega_g = 0$, we can take $\alpha = \omega_g$. Here $p = 0$ and $q = n$. Also, its stabilizer is $H = \SO(n)$. So we know $\Hol(M) \subseteq \SO(n)$ if (and only if) $M$ is oriented.
+
+ The ``only if'' part is not difficult, because we can use parallel transport to transfer an orientation at a particular point to everywhere.
+\end{eg}
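As a quick numerical sanity check of the stabilizer claim (my own aside, not from the notes): evaluating the standard volume form $\det$ on a frame, a rotation leaves the value unchanged, while an orthogonal map of determinant $-1$ flips its sign, so within $\Or(n)$ it is exactly $\SO(n)$ that stabilizes $\omega$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# a random rotation: orthogonalize a random matrix, then fix the determinant sign
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1  # flip one column so that Q lies in SO(n)

V = rng.normal(size=(n, n))       # n vectors, stored as columns
vol = np.linalg.det(V)            # omega(v_1, ..., v_n) in the standard frame
print(np.isclose(np.linalg.det(Q @ V), vol))   # rotations fix the volume form

R = Q.copy()
R[:, 0] *= -1                     # a reflection: orthogonal with det = -1
print(np.isclose(np.linalg.det(R @ V), -vol))  # reflections flip its sign
```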
+
+\begin{eg}
+ Let $x \in M$, and suppose
+ \[
+ \Hol_x(M) \subseteq \U(n) = \{g \in \SO(2n): g J_0 g^{-1} = J_0\},
+ \]
+ where
+ \[
+ J_0 =
+ \begin{pmatrix}
+ 0 & I\\
+ -I & 0
+ \end{pmatrix}.
+ \]
+ By looking at $\alpha_0 = J_0$, we obtain $\alpha = J \in \Gamma(\End TM)$ with $\nabla J = 0$ and $J^2 = -1$. This is a well-known standard object in complex geometry, and such a $J$ is an instance of an \term{almost complex structure} on $M$.
+\end{eg}
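As a numerical aside (not from the notes), the description of $\U(n)$ above can be checked directly: writing a complex unitary as $u = A - iB$ with $A, B$ real, the real $2n \times 2n$ block matrix with rows $(A\ \ B)$ and $(-B\ \ A)$ commutes with $J_0$ and lands in $\SO(2n)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# J_0 in block form; under the identification R^{2n} = C^n, (x, y) <-> x + iy,
# J_0 acts as multiplication by a unit complex scalar, and J_0^2 = -I
Z, I = np.zeros((n, n)), np.eye(n)
J0 = np.block([[Z, I], [-I, Z]])
assert np.allclose(J0 @ J0, -np.eye(2 * n))

# a random complex unitary u (QR of a random complex matrix), written u = A - iB
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
u, _ = np.linalg.qr(M)
A, B = u.real, -u.imag

# its realification commutes with J_0 and lies in SO(2n)
g = np.block([[A, B], [-B, A]])
print(np.allclose(g.T @ g, np.eye(2 * n)),   # g is orthogonal
      np.allclose(g @ J0, J0 @ g),           # g preserves J_0
      np.isclose(np.linalg.det(g), 1.0))     # det g = |det u|^2 = 1
```

The determinant check reflects the general fact that the realification of a complex matrix $u$ has determinant $|\det u|^2$, so embedded unitaries automatically land in the orientation-preserving component.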
+
+\begin{eg}
 Recall (from the theorem on applications of Bochner--Weitzenb\"ock) that a Riemannian manifold $(M, g)$ is flat (i.e.\ $R(g) \equiv 0$) iff around each point $x \in M$, there is a basis of parallel vector fields. So we find that $(M, g)$ is flat iff $\Hol^0(M, g) = \{\mathrm{id}\}$.
+
+ It is essential that we use $\Hol^0(M, g)$ rather than the full $\Hol(M, g)$. For example, we can take the Klein bottle
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.6] (0, 0) -- (2, 0);
+ \draw [->>-=0.65] (2, 0) -- (2, 2);
+ \draw [->-=0.6] (0, 2) -- (2, 2);
+ \draw [->>-=0.65] (0, 2) -- (0, 0);
+
+ \draw (0, 1) -- (2, 1) node [pos=0.5, above] {$\gamma$};
+ \end{tikzpicture}
+ \end{center}
+ with the flat metric. Then parallel transport along the closed loop $\gamma$ has
+ \[
+ P_\gamma =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix}.
+ \]
+ In fact, we can check that $\Hol(K) = \Z_2$. Note that here $K$ is non-orientable.
+\end{eg}
+
+Since we know that $\Hol(M)$ is a Lie group, we can talk about its Lie algebra.
+
+\begin{defi}[Holonomy algebra]\index{holonomy algebra}
+ The \emph{holonomy algebra} $\hol(M)$ is the Lie algebra of $\Hol(M)$.
+\end{defi}
+
+Thus $\hol(M) \leq \so(n)$ up to conjugation.
+
+Now consider some open coordinate neighbourhood $U \subseteq M$ with coordinates $x_1, \cdots, x_n$. As before, we write
+\[
+ \partial_i = \frac{\partial}{\partial x_i},\quad \nabla_i = \nabla_{\partial_i}.
+\]
+The curvature may also be written in terms of coordinates $R = R^i_{j,k\ell}$, and we also have
+\[
+ R(\partial_k, \partial_\ell) = - [\nabla_k, \nabla_\ell].
+\]
+Thus, $\hol(M)$ contains
+\[
+ \left.\frac{\d}{\d t} \right|_{t = 0} P(\gamma_t),
+\]
+where $\gamma_t$ is the square
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.6] (0, 0) -- (2, 0);
+ \draw [->-=0.6] (2, 0) -- (2, 2);
+ \draw [->-=0.6] (2, 2) -- (0, 2);
+ \draw [->-=0.6] (0, 2) -- (0, 0);
+
+ \draw [->] (0, 0) -- (3, 0) node [right] {$x_k$};
+ \draw [->] (0, 0) -- (0, 3) node [right] {$x_\ell$};
+ \node [left] at (0, 2) {$\sqrt{t}$};
+ \node [below] at (2, 0) {$\sqrt{t}$};
+ \end{tikzpicture}
+\end{center}
+By a direct computation, we find
+\[
+ P(\gamma_t) = I + \lambda t R(\partial_k, \partial_\ell) + o(t).
+\]
+Here $\lambda\in \R$ is some non-zero absolute constant that doesn't depend on anything (except convention).
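On the round $S^2$ this expansion can be checked numerically: by Gauss--Bonnet, the holonomy rotation around a closed coordinate square equals the enclosed area (as $K = 1$), which for the square of coordinate side $\sqrt{t}$ is of order $t$, in line with $P(\gamma_t) = I + \lambda t R(\partial_k, \partial_\ell) + o(t)$. The sketch below is my own illustration, not part of the notes (names and step counts are arbitrary); it transports a vector around the spherical-coordinate square $[\theta_0, \theta_0 + s] \times [0, s]$ using the unit-sphere transport ODE $V' = -(V \cdot \gamma')\gamma$.

```python
import math

def rk4_transport(V, gamma, dgamma, t0, t1, steps=4000):
    """Integrate the unit-sphere parallel transport ODE
    V'(t) = -(V(t) . gamma'(t)) gamma(t) along one curve segment."""
    def dot(u, w):
        return sum(a * b for a, b in zip(u, w))

    def rhs(t, V):
        coef = dot(V, dgamma(t))
        return tuple(-coef * x for x in gamma(t))

    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):  # classical RK4
        k1 = rhs(t, V)
        k2 = rhs(t + h / 2, tuple(v + h / 2 * k for v, k in zip(V, k1)))
        k3 = rhs(t + h / 2, tuple(v + h / 2 * k for v, k in zip(V, k2)))
        k4 = rhs(t + h, tuple(v + h * k for v, k in zip(V, k3)))
        V = tuple(v + h / 6 * (a + 2 * b + 2 * c + d)
                  for v, a, b, c, d in zip(V, k1, k2, k3, k4))
        t += h
    return V

def square_holonomy(theta0, s):
    """Holonomy angle around the spherical-coordinate square
    [theta0, theta0 + s] x [0, s] on the unit sphere."""
    def latitude(th):
        g = lambda u: (math.sin(th) * math.cos(u), math.sin(th) * math.sin(u), math.cos(th))
        dg = lambda u: (-math.sin(th) * math.sin(u), math.sin(th) * math.cos(u), 0.0)
        return g, dg

    def meridian(ph):
        g = lambda u: (math.sin(u) * math.cos(ph), math.sin(u) * math.sin(ph), math.cos(u))
        dg = lambda u: (math.cos(u) * math.cos(ph), math.cos(u) * math.sin(ph), -math.sin(u))
        return g, dg

    V = (math.cos(theta0), 0.0, -math.sin(theta0))   # unit tangent at the start
    g, dg = latitude(theta0)
    V = rk4_transport(V, g, dg, 0.0, s)              # along the first edge
    g, dg = meridian(s)
    V = rk4_transport(V, g, dg, theta0, theta0 + s)  # down a meridian
    g, dg = latitude(theta0 + s)
    V = rk4_transport(V, g, dg, s, 0.0)              # back along the far edge
    g, dg = meridian(0.0)
    V = rk4_transport(V, g, dg, theta0 + s, theta0)  # up, closing the loop
    e1, e2 = (math.cos(theta0), 0.0, -math.sin(theta0)), (0.0, 1.0, 0.0)
    return math.atan2(sum(a * b for a, b in zip(V, e2)),
                      sum(a * b for a, b in zip(V, e1)))

theta0, s = 1.0, 0.2
area = s * (math.cos(theta0) - math.cos(theta0 + s))  # enclosed area = int K dA, K = 1
print(abs(square_holonomy(theta0, s)), area)  # the two values should agree closely
```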
+
Differentiating this with respect to $t$ and taking the limit $t \to 0$, we deduce that for $p \in U$, we have
+\[
+ R_p = (R^i_{j, k\ell})_p \in \exterior^2 T^*_p M \otimes \hol_p(M),
+\]
where we think of $\hol_p(M)$ as a subspace of $\End T_p M$. Recall we also had the $R_{ij, k\ell}$ version, and because of the nice symmetry properties of $R$, we know
+\[
+ (R_{ij, k\ell})_p \in S^2 \hol_p(M) \subseteq \exterior^2 T^*_p M \otimes \exterior^2 T_p^* M.
+\]
+Strictly speaking, we should write
+\[
+ (R^i\!_j\!^k\!_\ell)_p \in S^2 \hol_p(M),
+\]
+but we can use the metric to go between $T_pM$ and $T^*_p M$.
+
+So far, what we have been doing is rather tautological. But it turns out this allows us to decompose the de Rham cohomology groups.
+
In general, consider an arbitrary Lie subgroup $G \subseteq \GL_n(\R)$. There is a standard representation $\rho$ of $\GL_n(\R)$ on $\R^n$, which restricts to a representation $(\rho, \R^n)$ of $G$. This induces a representation $(\rho^k, \exterior^k (\R^n)^*)$ of $G$.
+
+This representation is in general not irreducible. We decompose it into irreducible components $(\rho^k_i, W_i^k)$, so that
+\[
 \exterior^k (\R^n)^* = \bigoplus_i W_i^k.
+\]
+We can do this for bundles instead of just vector spaces. Consider a manifold $M$ with a $G$-structure, i.e.\ there is an atlas of coordinate neighbourhoods where the transition maps satisfy
+\[
+ \left(\frac{\partial x_\alpha}{\partial x_\beta'}\right)_p \in G
+\]
+for all $p$. Then we can use this to split our bundle of $k$-forms into well-defined vector sub-bundles with typical fibers $W_i^k$:
+\[
 \exterior^k T^* M = \bigoplus_i \Lambda^k_i.
+\]
+We can furthermore say that every $G$-equivariant linear map $\varphi: W_i^k \to W_j^\ell$ induces a morphism of vector bundles $\phi: \Lambda_i^k \to \Lambda_j^\ell$.
+
+Now suppose further that $\Hol(M) \leq G \leq \Or(n)$. Then we can show that parallel transport preserves this decomposition into sub-bundles. So $\nabla$ restricts to a well-defined connection on each $\Lambda^k_i$.
+
+Thus, if $\xi \in \Gamma(\Lambda_i^k)$, then $\nabla \xi \in \Gamma(T^* M \otimes \Lambda^k_i)$, and then we have $\nabla^*\nabla \xi \in \Gamma(\Lambda_i^k)$. But we know the covariant Laplacian is related to Laplace--Beltrami via the curvature. We only do this for $1$-forms for convenience of notation. Then if $\xi \in \Omega^1(M)$, then we have
+\[
+ \Delta \xi = \nabla^* \nabla \xi + \Ric(\xi).
+\]
+We can check that $\Ric$ also preserves these subbundles. Then it follows that $\Delta: \Gamma(\Lambda_j^1) \to \Gamma(\Lambda^1_j)$ is well-defined.
+
+Thus, we deduce
+\begin{thm}
+ Let $(M, g)$ be a connected and oriented Riemannian manifold, and consider the decomposition of the bundle of $k$-forms into irreducible representations of the holonomy group,
+ \[
+ \exterior^k T^* M = \bigoplus_i \Lambda_i^k.
+ \]
+ In other words, each fiber $(\Lambda_i^k)_x \subseteq \exterior^k T^*_x M$ is an irreducible representation of $\Hol_x(g)$. Then
+ \begin{enumerate}
 \item For all $\alpha \in \Omega^k_i(M) \equiv \Gamma(\Lambda_i^k)$, we have $\Delta \alpha \in \Omega_i^k(M)$.
+ \item If $M$ is compact, then we have a decomposition
+ \[
 H_{\dR}^k (M) = \bigoplus_i H_{i, \dR}^k (M),
+ \]
+ where
+ \[
+ H^k_{i, \dR} (M) = \{[\alpha] : \alpha \in \Omega_i^k(M), \Delta \alpha = 0\}.
+ \]
+ \end{enumerate}
+ The dimensions of these groups are known as the \term{refined Betti numbers}.
+\end{thm}
+We have only proved this for $k = 1$, but the same proof technique can be used to do it for arbitrary $k$.
+
+Our treatment is rather abstract so far. But for example, if we are dealing with complex manifolds, then we know that $\Hol(M) \leq \U(n)$. So this allows us to have a canonical refinement of the de Rham cohomology, and this is known as the Lefschetz decomposition.
+
+\section{The Cheeger--Gromoll splitting theorem}
+We will talk about the Cheeger--Gromoll splitting theorem. This is a hard theorem, so we will not prove it. However, we will state it, and discuss a bit about it. To state the theorem, we need some preparation.
+
+\begin{defi}[Ray]\index{ray}
 Let $(M, g)$ be a Riemannian manifold. A \emph{ray} is a map $r(t): [0, \infty) \to M$ such that $r$ is a geodesic, and minimizes the distance between any two points on the curve.
+\end{defi}
+
+\begin{defi}[Line]\index{line}
+ A \emph{line} is a map $\ell(t): \R \to M$ such that $\ell(t)$ is a geodesic, and minimizes the distance between any two points on the curve.
+\end{defi}
+
+We have seen from the first example sheet that if $M$ is a complete unbounded manifold, then $M$ has a ray from each point, i.e.\ for all $x \in M$, there exists a ray $r$ such that $r(0) = x$.
+
+\begin{defi}[Connected at infinity]\index{connected at infinity}
 A complete manifold is said to be \emph{connected at infinity} if for every compact set $K \subseteq M$, there exists a compact $C \supseteq K$ such that for any two points $p, q \in M \setminus C$, there exists a path $\gamma \in \Omega(p, q)$ such that $\gamma(t) \in M \setminus K$ for all $t$.
+
+ We say $M$ is \emph{disconnected at infinity} if it is not connected at infinity.
+\end{defi}
+
+Note that if $M$ is disconnected at infinity, then it must be unbounded, and in particular non-compact.
+
+\begin{lemma}
+ If $M$ is disconnected at infinity, then $M$ contains a line.
+\end{lemma}
+
+\begin{proof}
+ Note that $M$ is unbounded. Since $M$ is disconnected at infinity, we can find a compact subset $K \subseteq M$ and sequences $p_m, q_m \to \infty$ as $m \to \infty$ (to make this precise, we can pick some fixed point $x$, and then require $d(x, p_m), d(x, q_m) \to \infty$) such that every $\gamma_m \in \Omega(p_m, q_m)$ passes through $K$.
+
+ In particular, we pick $\gamma_m$ to be a minimal geodesic from $p_m$ to $q_m$ parametrized by arc-length. Then $\gamma_m$ passes through $K$. By reparametrization, we may assume $\gamma_m(0) \in K$.
+
+ Since $K$ is compact, we can pass to a subsequence, and wlog $\gamma_m(0) \to x \in K$ and $\dot{\gamma}_m(0) \to a \in T_xM$ (suitably interpreted).
+
+ Then we claim the geodesic $\gamma_{x, a}(t)$ is the desired line. To see this, since solutions to ODE's depend smoothly on initial conditions, we can write the line as
+ \[
+ \ell(t) = \lim_{m \to \infty} \gamma_m(t).
+ \]
+ Then we know
+ \[
+ d(\ell(s), \ell(t)) = \lim_{m \to \infty} d(\gamma_m(s), \gamma_m(t)) = |s - t|.
+ \]
+ So we are done.
+\end{proof}
+
+Let's look at some examples.
+\begin{eg}
+ The elliptic paraboloid
+ \[
+ \{z = x^2 + y^2\} \subseteq \R^3
+ \]
+ with the induced metric does not contain a line. To prove this, we can show that any geodesic that is not a meridian must intersect itself.
+\end{eg}
+
+\begin{eg}
 Any complete metric $g$ on $S^{n - 1} \times \R$ contains a line, since the manifold is disconnected at infinity.
+\end{eg}
+
+\begin{thm}[Cheeger--Gromoll line-splitting theorem (1971)]\index{Cheeger--Gromoll line-splitting theorem}
+ If $(M, g)$ is a complete connected Riemannian manifold containing a line, and has $\Ric(g) \geq 0$ at each point, then $M$ is isometric to a Riemannian product $(N \times \R, g_0 + \d t^2)$ for some metric $g_0$ on $N$.
+\end{thm}
+We will not prove this, but just see what the theorem can be good for.
+
+First of all, we can slightly improve the statement. After applying the theorem, we can check again if $N$ contains a line or not. We can keep on proceeding and splitting lines off. Then we get
+\begin{cor}
+ Let $(M, g)$ be a complete connected Riemannian manifold with $\Ric(g) \geq 0$. Then it is isometric to $X \times \R^q$ for some $q \in \N$ and Riemannian manifold $X$, where $X$ is complete and does not contain any lines.
+\end{cor}
+
Note that if $X$ is zero-dimensional, then since we assume all our manifolds are connected, $X$ is a point, and this implies $M$ is flat. If $\dim X = 1$, then $X \cong S^1$ (it can't be $\R$, because $\R$ contains a line). So again $M$ is flat.
+
Now suppose that in fact $\Ric(g) = 0$. Then it is not difficult to see from the definition of the Ricci curvature that $\Ric(g_X) = 0$ as well. If we know $\dim X \leq 3$, then $M$ has to be flat, since in dimensions $\leq 3$, the Ricci curvature determines the full curvature tensor.
+
+We can say a bit more if we assume more about the manifold. Recall (from example sheets) that a manifold is \term{homogeneous} if the group of isometries acts transitively. In other words, for any $p, q \in M$, there exists an isometry $\phi: M \to M$ such that $\phi(p) = q$. This in particular implies the metric is complete.
+
+It is not difficult to see that if $M$ is homogeneous, then so is $X$. In this case, $X$ must be compact. Suppose not. Then $X$ is unbounded. We will obtain a line on $X$.
+
By assumption, for all $n = 1, 2, \cdots$, we can find $p_n, q_n$ with $d(p_n, q_n) \geq 2n$. Since $X$ is complete, we can find a minimal geodesic $\gamma_n$ connecting these two points, parametrized by arc length. By homogeneity, we may assume that the midpoint $\gamma_n(0)$ is at a fixed point $x_0$. By passing to a subsequence, we may assume $\dot{\gamma}_n(0)$ converges to some $a \in T_{x_0}X$. Then we use $a$ as an initial condition for our geodesic, and this will be a line.
+
+A similar argument gives
+\begin{lemma}
+ Let $(M, g)$ be a compact Riemannian manifold, and suppose its universal Riemannian cover $(\tilde{M}, \tilde{g})$ is non-compact. Then $(\tilde{M}, \tilde{g})$ contains a line.
+\end{lemma}
+
+\begin{proof}
+ We first find a compact $K \subseteq \tilde{M}$ such that $\pi(K) = M$. Since $\tilde{M}$ must be complete, it is unbounded. Choose $p_n, q_n, \gamma_n$ like before. Then we can apply deck transformations so that the midpoint lies inside $K$, and then use compactness of $K$ to find a subsequence so that the midpoint converges.
+\end{proof}
+
+We do more applications.
+
+\begin{cor}
+ Let $(M, g)$ be a compact, connected manifold with $\Ric(g) \geq 0$. Then
+ \begin{itemize}
 \item The universal Riemannian cover is isometric to the Riemannian product $X \times \R^q$, with $X$ compact, $\pi_1(X) = 1$ and $\Ric(g_X) \geq 0$.
+ \item If there is some $p \in M$ such that $\Ric(g)_p > 0$, then $\pi_1(M)$ is finite.
+ \item Denote by $I(\tilde{M})$ the group of isometries $\tilde{M} \to \tilde{M}$. Then $I (\tilde{M}) = I(X) \times E(\R^q)$, where $E(\R^q)$ is the group of rigid Euclidean motions,
+ \[
+ \mathbf{y} \mapsto A\mathbf{y} + \mathbf{b}
+ \]
 where $\mathbf{b} \in \R^q$ and $A \in \Or(q)$.
+ \item If $\tilde{M}$ is homogeneous, then so is $X$.
+ \end{itemize}
+\end{cor}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item This is direct from Cheeger--Gromoll and the previous lemma.
+ \item If there is a point with strictly positive Ricci curvature, then the same is true for the universal cover. So we cannot have any non-trivial splitting. So by the previous part, $\tilde{M}$ must be compact. By standard topology, $|\pi_1(M)| = |\pi^{-1}(\{p\})|$.
+
 \item We use the fact that $E(\R^q) = I(\R^q)$. Pick a $g \in I(\tilde{M})$. Then we know $g$ takes lines to lines. Now use that all lines in $X \times \R^q$ are of the form $p \times \R$ with $p \in X$ and $\R \subseteq \R^q$ an affine line. Then
+ \[
+ g(p \times \R) = p' \times \R,
+ \]
+ for some $p'$ and possibly for some other copy of $\R$. By taking unions, we deduce that $g(p \times \R^q) = p' \times \R^q$. We write $h(p) = p'$. Then $h \in I(X)$.
+
+ Now for any $X \times \mathbf{a}$ with $\mathbf{a} \in \R^q$, we have $X \times \mathbf{a} \perp p \times \R^q$ for all $p \in X$. So we must have
+ \[
+ g(X \times \mathbf{a}) = X \times \mathbf{b}
+ \]
+ for some $\mathbf{b} \in \R^q$. We write $e(\mathbf{a}) = \mathbf{b}$. Then
+ \[
 g(p, \mathbf{a}) = (h(p), e(\mathbf{a})).
+ \]
+ Since the metric of $X$ and $\R^q$ are decoupled, it follows that $h$ and $e$ must separately be isometries.\qedhere
+ \end{itemize}
+\end{proof}
+
+We can look at more examples.
+\begin{prop}
+ Consider $S^n \times \R$ for $n = 2$ or $3$. Then this does not admit any Ricci-flat metric.
+\end{prop}
+
+\begin{proof}
+ Note that $S^n \times \R$ is disconnected at $\infty$. So any metric contains a line. Then by Cheeger--Gromoll, $\R$ splits as a Riemannian factor. So we obtain $\Ric = 0$ on the $S^n$ factor. Since we are in $n = 2, 3$, we know $S^n$ is flat, as the Ricci curvature determines the full curvature. So $S^n$ is the quotient of $\R^n$ by a discrete group, and in particular $\pi_1(S^n) \not= 1$. This is a contradiction.
+\end{proof}
+
+Let $G$ be a Lie group with a bi-invariant metric $g$. Suppose the center $Z(G)$ is finite. Then the center of $\mathfrak{g}$ is trivial (since it is the Lie algebra of $G/Z(G)$, which has trivial center). From sheet 2, we find that $\Ric(g) > 0$ implies $\pi_1(G)$ is finite. The converse is also true, but is harder. This is done on Q11 of sheet 3 --- if $\pi_1(G)$ is finite, then $Z(G)$ is finite.
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/schramm-loewner_evolutions.tex b/books/cam/III_L/schramm-loewner_evolutions.tex
new file mode 100644
index 0000000000000000000000000000000000000000..660e3fce119ba5af56baf1cff7e268a271309c12
--- /dev/null
+++ b/books/cam/III_L/schramm-loewner_evolutions.tex
@@ -0,0 +1,1951 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2018}
+\def\nlecturer {J.\ Miller}
+\def\ncourse {Schramm--Loewner Evolutions}
+\def\nofficial {http://statslab.cam.ac.uk/~jpm205/teaching/lent2018/}
+
+\input{header}
+\renewcommand\D{\mathbb{D}}
+\DeclareMathOperator\hcap{hcap}
+\newcommand\SLE{\mathrm{SLE}}
+\newcommand\rad{\mathrm{Rad}}
+\newcommand\dist{\mathrm{dist}}
+\newcommand\BES{\mathrm{BES}}
+
+\usepackage[first=0,last=1, seed=173]{lcg}
+%\usepackage{mathrsfs}
+
+% [extraction gap: the rest of the preamble and a figure illustrating the conformal map $f$ have been lost here]
+By scaling, it suffices to prove this for the case $r = 1$. This follows from the following result:
+\begin{thm}
+ If $f \in \mathcal{U}$, then $|a_2| \leq 2$.
+\end{thm}
+The proof of this theorem requires quite some work, so let us first use it to deduce the Koebe 1/4 theorem.
+\begin{proof}[Proof of Koebe-1/4 theorem]
+ Suppose $f: \D \to D$ is in $\mathcal{U}$, and $z_0 \not \in D$. We shall show that $|z_0| \geq \frac{1}{4}$. Consider the function
+ \[
+ \tilde{f}(z) = \frac{z_0 f(z)}{z_0 - f(z)}.
+ \]
+ Since $\tilde{f}$ is a composition of conformal transformations, it is itself conformal, and a direct computation shows $\tilde{f} \in \mathcal{U}$. Moreover, if
+ \[
+ f(z) = z + a_2 z^2 + \cdots,
+ \]
+ then
+ \[
+ \tilde{f}(z) = z + \left(a_2 + \frac{1}{z_0}\right)z^2 + \cdots.
+ \]
+ So we obtain the bounds
+ \[
+ |a_2|, \left|a_2 + \frac{1}{z_0}\right| \leq 2.
+ \]
+ By the triangle inequality, we must have $|z_0^{-1}| \leq 4$, hence $|z_0| \geq \frac{1}{4}$.
+\end{proof}
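Both the coefficient bound and the constant $1/4$ are attained by the Koebe function $k(z) = z/(1-z)^2 = \sum_{n \geq 1} n z^n$, which maps $\D$ onto $\C \setminus (-\infty, -1/4]$. As a quick numeric sanity check (not part of the notes), we can recover its Taylor coefficients via the Cauchy integral formula:

```python
import cmath
import math

def koebe(z):
    # Koebe function k(z) = z/(1-z)^2 = sum_{n >= 1} n z^n
    return z / (1 - z) ** 2

def taylor_coeff(f, n, r=0.5, N=2048):
    # a_n via the discretised Cauchy integral formula on the circle |z| = r
    s = sum(f(r * cmath.exp(2j * math.pi * k / N))
            / (r * cmath.exp(2j * math.pi * k / N)) ** n for k in range(N))
    return s / N

assert abs(taylor_coeff(koebe, 1) - 1) < 1e-9
assert abs(taylor_coeff(koebe, 2) - 2) < 1e-9   # |a_2| = 2: the bound is sharp
assert abs(taylor_coeff(koebe, 3) - 3) < 1e-9
assert koebe(-1) == -0.25   # the omitted ray (-infty, -1/4] starts at k(-1) = -1/4
```

The last assertion matches the sharpness of the 1/4 theorem: the boundary of the omitted ray sits exactly at distance $1/4$ from the origin.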
+
+The 1/4 theorem bounds the distortion in terms of the value of $f'(0)$. Conversely, if we know how much the distortion is, we can use the theorem to derive bounds on $f'(0)$.
+\begin{cor}
+ Let $D, \tilde{D}$ be domains and $z \in D$, $\tilde{z} \in \tilde{D}$. If $f: D \to \tilde{D}$ is a conformal transformation with $f(z) = \tilde{z}$, then
+ \[
+ \frac{\tilde{d}}{4d} \leq |f'(z)| \leq \frac{4\tilde{d}}{d},
+ \]
+ where $d = \dist(z, \partial D)$ and $\tilde{d} = \dist(\tilde{z}, \partial \tilde{D})$.
+\end{cor}
+
+\begin{proof}
+ By translation, scaling and rotation, we may assume that $z = \tilde{z} = 0$, $d = 1$ and $f'(0) = 1$. Then we have
+ \[
+ \tilde{D} = f(D) \supseteq f(\D) \supseteq B(0, 1/4).
+ \]
+ So $\tilde{d} \geq \frac{1}{4}$, as desired. The other bound follows by considering $f^{-1}$.
+\end{proof}
+We now proceed to prove the theorem. We first obtain a bound on area, using IA Vector Calculus:
+\begin{prop}
+ Let $f \in \mathcal{U}$. Then
+ \[
+ \area(f(\D)) = \pi \sum_{n = 1}^\infty n |a_n|^2.
+ \]
+\end{prop}
+
+\begin{proof}
+ In the ideal world, we will have something that helps us directly compute $\area(f(\D))$. However, for the derivation to work, we need to talk about what $f$ does to the boundary of $\D$, but that is not necessarily well-defined. So we compute $\area(f(r\D))$ for $r < 1$ and then take the limit $r \to 1$.
+
+ So fix $r \in (0, 1)$, and define the curve $\gamma(\theta) = f(re^{i\theta})$ for $\theta \in [0, 2\pi]$. Then we can compute
+ \begin{align*}
+ \frac{1}{2i} \int_\gamma \bar{z} \;\d z &= \frac{1}{2i} \int_\gamma (x - iy) (\d x + i\;\d y)\\
+ &= \frac{1}{2i} \int_\gamma (x - iy) \;\d x + (ix + y)\;\d y\\
+ &= \frac{1}{2i} \iint_{f(r\D)} 2i\;\d x\;\d y\\
+ &= \area(f(r\D)),
+ \end{align*}
+ using Green's theorem. We can also compute the left-hand integral directly as
+ \begin{align*}
+ \frac{1}{2i} \int_\gamma \bar{z}\;\d z &= \frac{1}{2i} \int_0^{2\pi} \overline{f(re^{i\theta})} f'(re^{i\theta}) i re^{i\theta }\;\d \theta\\
+ &= \frac{1}{2i} \int_0^{2\pi} \left(\sum_{n = 1}^\infty \bar{a}_n r^n e^{-i\theta n}\right) \left(\sum_{n = 1}^\infty a_n nr^{n - 1} e^{i\theta (n - 1)}\right) ir e^{i\theta}\;\d \theta\\
+ &= \pi \sum_{n = 1}^\infty r^{2n} |a_n|^2 n.\qedhere
+ \end{align*}
+\end{proof}
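The two computations in this proof are easy to verify numerically for a concrete univalent map, say $f(z) = z + \frac{1}{4}z^2$ (an illustrative choice, not from the course; it is univalent on $\D$ since $|a_2| = 1/4 \leq 1/2$). The discretised contour integral of $\bar{z}\;\d z$ should match $\pi \sum_n n|a_n|^2 r^{2n}$:

```python
import cmath
import math

def f(z):
    # a univalent map on D with f(0) = 0, f'(0) = 1 and a_2 = 1/4
    return z + 0.25 * z * z

def fp(z):
    return 1 + 0.5 * z

r, N = 0.9, 4096
# (1/2i) * integral of zbar dz along gamma(theta) = f(r e^{i theta})
total = 0j
for k in range(N):
    w = r * cmath.exp(2j * math.pi * k / N)
    total += f(w).conjugate() * fp(w) * 1j * w    # zbar dz = conj(f) f'(w) i w dtheta
area = (total * 2 * math.pi / N / 2j).real
predicted = math.pi * (r ** 2 + 2 * 0.25 ** 2 * r ** 4)   # pi sum n |a_n|^2 r^{2n}
assert abs(area - predicted) < 1e-9
```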
+
+\begin{defi}[Compact hull]\index{compact hull}
+ A \emph{compact hull} is a connected compact set $K \subseteq \C$ with more than one point such that $\C \setminus K$ is connected.
+\end{defi}
+
+We want to obtain a similar area estimate for compact hulls. By the Riemann mapping theorem, if $K$ is a compact hull, then there exists a unique conformal transformation $F: \C \setminus \bar{\D} \to \C \setminus K$ that fixes infinity and has positive derivative at $\infty$, i.e.\ $\lim\limits_{z \to \infty} F(z)/z > 0$. Let $\mathcal{H}$ be the set of compact hulls containing $0$ with $\lim\limits_{z\to \infty} F(z)/z = 1$. If $K \in \mathcal{H}$, then the Laurent expansion of the corresponding $F$ is
+\[
+ F(z) = z + b_0 + \sum_{n = 1}^\infty \frac{b_n}{z^n}.
+\]
+Observe there is a correspondence between the space of such $F$ and $\mathcal{U}$ by sending $F$ to $1/F(1/z)$, and vice versa.
+\begin{prop}
+ If $K \in \mathcal{H}$, then
+ \[
+ \area (K) = \pi \left(1 - \sum_{n = 1}^\infty n|b_n|^2\right).
+ \]
+\end{prop}
+Observe that the area is always non-negative. So in particular, we obtain the bound
+\[
+ \sum_{n = 1}^\infty n |b_n|^2 \leq 1.
+\]
+In particular, $|b_1| \leq 1$. This will be how we ultimately bound $|a_2|$.
+\begin{proof}
+ The proof is essentially the same as last time. Let $r > 1$, let $K_r = \C \setminus F(\C \setminus r\bar{\D})$ be the hull enclosed by the image of the circle of radius $r$, and let $\gamma(\theta) = F(r e^{i\theta})$. As in the previous proposition, we have
+ \begin{align*}
+ \area(K_r) &= \frac{1}{2i} \int_\gamma \bar{z} \;\d z\\
+ &= \frac{1}{2i} \int_0^{2\pi} \overline{F(re^{i\theta})}F'(re^{i\theta}) ir e^{i\theta}\;\d \theta\\
+ &= \pi \left(r^2 - \sum_{n = 1}^\infty n |b_n|^2 r^{-2n}\right).
+ \end{align*}
+ Then take the limit as $r \to 1$.
+\end{proof}
+By the correspondence we previously noted, this gives us some estimates on the coefficients of any $f \in \mathcal{U}$. It turns out applying this estimate directly is not good enough. Instead, we want to take the square root of $f(z^2)$.
+
+\begin{lemma}
+ Let $f \in \mathcal{U}$. Then there exists an odd function $h \in \mathcal{U}$ with $h(z)^2 = f(z^2)$.
+\end{lemma}
+
+\begin{proof}
+ Note that $f(0) = 0$ by assumption, so taking the square root can potentially be problematic, since $0$ is a branch point. To get rid of the problem, define the function
+ \[
+ \tilde{f}(z) =
+ \begin{cases}
+ \frac{f(z)}{z} & z \not =0\\
+ f'(0) & z = 0
+ \end{cases}.
+ \]
+ Then $\tilde{f}$ is non-zero and conformal in $\D$, and so there is a function $g$ with $g(z)^2 = \tilde{f}(z)$. We then set
+ \[
+ h(z) = z g(z^2).
+ \]
+ Then $h$ is odd and $h^2 = z^2 g(z^2)^2 = f(z^2)$. It is also immediate that $h(0) = 0 $ and $h'(0) = 1$. We need to show $h$ is injective on $\D$. If $z_1, z_2 \in \D$ with $h(z_1) = h(z_2)$, then
+ \[
+ z_1 g(z_1^2) = z_2 g(z_2^2).\tag{$*$}
+ \]
+ By squaring, we know
+ \[
+ z_1^2 \tilde{f}(z_1^2) = z_2^2 \tilde{f}(z_2^2).
+ \]
+ Thus, $f(z_1^2) = f(z_2^2)$ and so $z_1^2 = z_2^2$. But then $(*)$ implies $z_1 = z_2$. So $h$ is injective, and hence $h \in \mathcal{U}$.
+\end{proof}
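The construction can be checked numerically on a concrete example (my choice, not from the notes): take $f(z) = z/(1-z) \in \mathcal{U}$, for which the recipe gives $h(z) = z/\sqrt{1 - z^2}$.

```python
import cmath

def f(z):
    # an example element of U: f(0) = 0, f'(0) = 1, injective on D
    return z / (1 - z)

def h(z):
    # h(z) = z * g(z^2) with g(w)^2 = f(w)/w = 1/(1 - w); the principal square
    # root is single-valued here since Re(1/(1 - z^2)) > 0 on D
    return z * cmath.sqrt(1 / (1 - z * z))

for z in (0.3 + 0.4j, -0.5j, 0.2 - 0.6j):
    assert abs(h(z) ** 2 - f(z * z)) < 1e-12   # h(z)^2 = f(z^2)
    assert abs(h(-z) + h(z)) < 1e-12           # h is odd
assert abs(h(1e-6) / 1e-6 - 1) < 1e-9          # h(0) = 0 and h'(0) = 1
```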
+
+We can now prove the desired theorem.
+\begin{proof}[Proof of theorem]
+ We can Taylor expand
+ \[
+ h(z) = z + c_3 z^3 + c_5 z^5 + \cdots.
+ \]
+ Then comparing $h(z)^2 = f(z^2)$ implies
+ \[
+ c_3 = \frac{a_2}{2}.
+ \]
+ Setting $g(z) = \frac{1}{h(1/z)}$, we find that the $z^{-1}$ coefficient of $g$ is $-\frac{a_2}{2}$, and as we previously noted, its modulus is at most $1$. So $|a_2| \leq 2$.
+\end{proof}
+\subsection{Half-plane capacity}
+
+\begin{defi}[Compact $\H$-hull]\index{compact $\H$-hull}
+ A set $A \subseteq \H$ is called a \emph{compact $\H$-hull} if $A$ is compact, $A = \H \cap \bar{A}$ and $\H \setminus A$ is simply connected. We write $\mathcal{Q}$\index{$\mathcal{Q}$} for the collection of compact $\H$-hulls.
+\end{defi}
+A large part of the course will be about studying families of compact $\H$-hulls, and in particular random families of compact $\H$-hulls. In this chapter, we will establish some basic results about compact $\H$-hulls, in preparation for Loewner's theorem in the next chapter. We begin with the following important proposition:
+
+\begin{prop}
+ For each $A \in \mathcal{Q}$, there exists a unique conformal transformation $g_A: \H \setminus A \to \H$ with $|g_A(z) - z| \to 0$ as $z \to \infty$.
+\end{prop}
+
+This conformal transformation $g_A$ will be key to understanding compact $\H$-hulls. The proof of this requires the following theorem:
+\begin{thm}[Schwarz reflection principle]\index{Schwarz reflection principle}
+ Let $D \subseteq \H$ be a simply connected domain, and let $\phi: D \to \H$ be a conformal transformation which is bounded on bounded sets and sends $\R \cap D$ to $\R$. Then $\phi$ extends by reflection to a conformal transformation on
+ \[
+ D^* = D \cup \{\bar{z} : z \in D\} = D \cup \bar{D}
+ \]
+ by setting $\phi(\bar{z}) = \overline{\phi(z)}$.\fakeqed
+\end{thm}
+
+\begin{proof}[Proof of proposition]
+ The Riemann mapping theorem implies that there exists a conformal transformation $g: \H \setminus A \to \H$. Then $g(z) \to \infty$ as $z \to \infty$. By the Schwarz reflection principle, extend $g$ to a conformal transformation defined on $\C \setminus (A \cup \bar{A})$.
+
+ By Laurent expanding $g$ at $\infty$, we can write
+ \[
+ g(z) = \sum_{n = -N}^{\infty} \frac{b_n}{z^n}.
+ \]
+ Since $g$ maps the real line to the real line, all $b_i$ must be real. Moreover, by injectivity, considering large $z$ shows that $N = 1$. In other words, we can write
+ \[
+ g(z) = b_{-1} z + b_0 + \sum_{n = 1}^\infty \frac{b_n}{z^n},
+ \]
+ with $b_{-1} \not= 0$. We can then define
+ \[
+ g_A(z) = \frac{g(z) - b_0}{b_{-1}}.
+ \]
+ Since $b_0$ and $b_{-1}$ are both real, this is still a conformal transformation, and $|g_A(z) - z| \to 0$ as $z \to \infty$.
+
+ To show uniqueness, suppose $g_A, g_A'$ are two such functions. Then $g_A' \circ g_A^{-1}: \H \to \H$ is such a function for $A = \emptyset$. Thus, it suffices to show that if $g: \H \to \H$ is a conformal mapping such that $g(z) - z \to 0$ as $z \to \infty$, then $g$ is in fact the identity. But we can expand $g(z) - z$ as
+ \[
+ g(z) - z = \sum_{n = 1}^\infty \frac{c_n}{z^n},
+ \]
+ and this has to be holomorphic at $0$. So $c_n = 0$ for all $n$, and we are done.
+\end{proof}
+
+\begin{defi}[Half-plane capacity]\index{half-plane capacity}
+ Let $A \in \mathcal{Q}$. Then the \emph{half-plane capacity} of $A$ is defined to be
+ \[
+ \hcap(A) = \lim_{z \to \infty} z(g_A(z) - z).
+ \]
+ Thus, we have
+ \[
+ g_A(z) = z + \frac{\hcap(A)}{z} + \sum_{n = 2}^\infty \frac{b_n}{z^n}.
+ \]
+\end{defi}
+
+To justify the name ``capacity'', $\hcap(A)$ had better be non-negative and increasing in $A$. These are both true, as we will soon prove. However, let us first look at some examples.
+
+\begin{eg}
+ $z \mapsto \sqrt{z^2 + 4t}$ is the unique conformal transformation $\H \setminus [0, 2\sqrt{t}\,i] \to \H$ with $|\sqrt{z^2 + 4t} - z| \to 0$ as $z \to \infty$. We can expand
+ \[
+ \sqrt{z^2 + 4t} = z + \frac{2t}{z} + \cdots.
+ \]
+ Thus, we know that $\hcap([0, 2\sqrt{t} i]) = 2t$. This justifies our previous funny parametrization of $[0, 2\sqrt{t}\, i]$.
+\end{eg}
+
+\begin{eg}
+ The map $z \mapsto z + \frac{1}{z}$ maps $\H \setminus \bar{\D} \to \H$. Again, we have $\left|z + \frac{1}{z} - z\right| \to 0$ as $z \to \infty$, and so $\hcap(\bar{\D} \cap \H) = 1$.
+\end{eg}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Scaling: If $r > 0$ and $A \in \mathcal{Q}$, then $\hcap(rA) = r^2 \hcap(A)$.
+ \item Translation invariance: If $x \in \R$ and $A \in \mathcal{Q}$, then $\hcap(A + x) = \hcap(A)$.
+ \item Monotonicity: If $A, \tilde{A} \in \mathcal{Q}$ are such that $A \subseteq \tilde{A}$, then $\hcap(A) \leq \hcap(\tilde{A})$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We have $g_{rA}(z) = r g_A(z/r)$.
+ \item Observe $g_{A + x}(z) = g_A(z - x) + x$.
+ \item We can write
+ \[
+ g_{\tilde{A}} = g_{g_A(\tilde{A} \setminus A)} \circ g_A.
+ \]
+ Thus, expanding out tells us
+ \[
+ \hcap(\tilde{A}) = \hcap(A) + \hcap(g_A(\tilde{A}\setminus A)).
+ \]
+ So the desired result follows if we know that the half-plane capacity is non-negative, which we will prove next.\qedhere
+ \end{enumerate}
+\end{proof}
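The scaling and translation identities used in this proof can be sanity-checked numerically on the slit maps $g_{[0, 2\sqrt{t}\,i]}(z) = \sqrt{z^2 + 4t}$, taking the branch lying in $\H$ (a sketch, not from the notes):

```python
import cmath

def slit_map(z, t):
    # g_A for the slit A_t = [0, 2 sqrt(t) i]: the branch of sqrt(z^2 + 4t) in H
    w = cmath.sqrt(z * z + 4 * t)
    return w if w.imag >= 0 else -w

# (i) scaling: r * [0, 2i] = [0, 2r i], and g_{rA}(z) = r g_A(z / r)
for z in (3 + 2j, -1 + 5j, 0.5 + 0.1j):
    for r in (0.5, 2.0):
        assert abs(slit_map(z, r * r) - r * slit_map(z / r, 1)) < 1e-12

# (ii) translation: g_{A+x}(z) = g_A(z - x) + x, so hcap(A + x) = hcap(A) = 2
x, y = 7.0, 1e4
z = 1j * y
hcap_shifted = (z * (slit_map(z - x, 1) + x - z)).real
assert abs(hcap_shifted - 2) < 1e-4
```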
+Observe that these results together imply that if $A \in \mathcal{Q}$ and $A \subseteq r(\bar{\D} \cap \H)$, then
+\[
+ \hcap(A) \leq \hcap(r(\bar{\D} \cap \H)) \leq r^2 \hcap (\bar{\D} \cap \H) = r^2.
+\]
+So, after translating $A$ so that $0 \in \bar{A}$ (which changes neither side), we know that $\hcap(A) \leq \diam(A)^2$.
+
+Compared to the above proofs, it seems much less straightforward to prove non-negativity of the half-plane capacity. Our strategy for doing so is to relate the half-plane capacity to Brownian motion! This is something we will see a lot in this course.
+
+\begin{prop}
+ Let $A \in \mathcal{Q}$ and $B_t$ be complex Brownian motion. Define the stopping time
+ \[
+ \tau = \inf\{t \geq 0: B_t \not \in \H \setminus A\}.
+ \]
+ Then
+ \begin{enumerate}
+ \item For all $z \in \H \setminus A$, we have
+ \[
+ \im(z - g_A(z)) = \E_z[\im(B_\tau)]
+ \]
+ \item
+ \[
+ \hcap(A) = \lim_{y \to \infty} y\, \E_{iy}[\im(B_\tau)].
+ \]
+ In particular, $\hcap(A) \geq 0$.
+ \item If $A \subseteq \bar{\D} \cap \H$, then
+ \[
+ \hcap(A) = \frac{2}{\pi} \int_0^\pi \E_{e^{i\theta}}[\im(B_\tau)]\sin \theta \;\d \theta.
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $\phi(z) = \im(z - g_A(z))$. Since $z - g_A(z)$ is holomorphic, we know $\phi$ is harmonic. Moreover, $\phi$ is continuous and bounded, as it tends to $0$ at infinity. These are exactly the conditions needed to solve the Dirichlet problem using Brownian motion.
+
+ Since $\im (g_A(z)) = 0$ when $z \in \partial (\H \setminus A)$, we know $\phi(B_\tau) = \im(B_\tau - g_A(B_\tau)) = \im(B_\tau)$. So the result follows.
+
+ \item We have
+ \begin{align*}
+ \hcap(A) &= \lim_{z \to \infty} z (g_A(z) - z) \\
+ &= \lim_{y \to \infty} (iy) (g_A(iy) - iy)\\
+ &= \lim_{y \to \infty} y \im (iy - g_A(iy))\\
+ &= \lim_{y \to \infty} y\, \E_{iy} [\im(B_\tau)]
+ \end{align*}
+ where we use the fact that $\hcap(A)$ is real, so we can take the limit of the real part instead.
+ \item See example sheet.\qedhere
+ \end{enumerate}
+\end{proof}
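For $A = \bar{\D} \cap \H$, part (iii) can be checked directly: a Brownian motion started at $e^{i\theta} \in \partial(\H \setminus A)$ exits immediately, so $\E_{e^{i\theta}}[\im(B_\tau)] = \sin\theta$, and the formula reduces to $\frac{2}{\pi}\int_0^\pi \sin^2\theta\;\d\theta = 1 = \hcap(\bar{\D} \cap \H)$. A numeric confirmation (not from the notes):

```python
import math

# E_{e^{i theta}}[Im B_tau] = sin(theta), since the motion starts on the
# boundary and stops at once; formula (iii) then reads
# hcap(A) = (2/pi) * int_0^pi sin^2(theta) dtheta = 1.
N = 100000
integral = sum(math.sin(math.pi * (k + 0.5) / N) ** 2 for k in range(N)) * math.pi / N
hcap = 2 / math.pi * integral
assert abs(hcap - 1) < 1e-9
```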
+
+In some sense, what we used above is that Brownian motion is conformally invariant: we used a conformal mapping to transfer our knowledge of Brownian motion on $\H$ to Brownian motion on $\H \setminus A$. Informally, this says the conformal image of a Brownian motion is a Brownian motion.
+
+We have in fact seen this before. We know that rotations preserve Brownian motion, and so does scaling, except when we scale we need to scale our time as well. In general, a conformal mapping is locally a rotation and scaling. So we would indeed expect that the conformal image of a Brownian motion is a Brownian motion, up to some time change. Since we are performing different transformations all over the domain, the time change will be random, but that is still not hard to formalize. For most purposes, the key point is that the image of the path is unchanged.
+
+
+\begin{thm}
+ Let $D, \tilde{D} \subseteq \C$ be domains, and $f: D \to \tilde{D}$ a conformal transformation. Let $B, \tilde{B}$ be Brownian motions starting from $z \in D, \tilde{z} \in \tilde{D}$ respectively, with $f(z) = \tilde{z}$. Let
+ \begin{align*}
+ \tau = \inf\{t \geq 0: B_t \not \in D\}\\
+ \tilde{\tau} = \inf\{t \geq 0: \tilde{B}_t \not \in \tilde{D}\}
+ \end{align*}
+ Set
+ \begin{align*}
+ \tau' &= \int_0^\tau |f'(B_s)|^2\;\d s\\
+ \sigma(t) &= \inf\left\{s \geq 0 : \int_0^s |f'(B_r)|^2\;\d r = t\right\}\\
+ B_t' &= f(B_{\sigma(t)}).
+ \end{align*}
+ Then $(B_t' : t < \tau')$ has the same distribution as $(\tilde{B}_t: t < \tilde{\tau})$.
+\end{thm}
+
+\begin{proof}
+ See Stochastic Calculus.
+\end{proof}
+
+\begin{eg}
+ We can use conformal invariance to deduce the first exit distribution of a Brownian motion from different domains. First of all, observe that if we take our domain to be $\D$ and start our Brownian motion from the origin, the first exit distribution is uniform. Thus, applying a conformal transformation, we see that the exit distribution starting from any point $z \in \D$ is
+ \[
+ f(e^{i\theta}) = \frac{1}{2\pi}\left(\frac{1 - |z|^2}{ |e^{i\theta} - z|^2}\right).
+ \]
+ Similarly, on $\H$, starting from $z = x + iy$, the exit distribution is
+ \[
+ f(u) = \frac{1}{\pi} \frac{y}{(x - u)^2 + y^2}.
+ \]
+ Note that if $x = 0, y = 1$, then this is just
+ \[
+ f(u) = \frac{1}{\pi}\left(\frac{1}{u^2 + 1}\right).
+ \]
+ This is the Cauchy distribution!
+\end{eg}
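One way to see this concretely: the Möbius map $\varphi(w) = i(1-w)/(1+w)$ sends $\D \to \H$ with $\varphi(0) = i$, and on the boundary $\varphi(e^{i\theta}) = \tan(\theta/2)$, so pushing the uniform exit measure forward through $\varphi$ yields exactly the Cauchy density. A sketch of the change-of-variables check (not from the notes):

```python
import cmath
import math

def phi(w):
    # Mobius map D -> H with phi(0) = i
    return 1j * (1 - w) / (1 + w)

def cauchy_density(u):
    return 1 / (math.pi * (1 + u * u))

for theta in (0.3, 1.1, 2.0, -2.5):
    u = phi(cmath.exp(1j * theta))
    assert abs(u.imag) < 1e-12                        # the boundary maps into R
    assert abs(u.real - math.tan(theta / 2)) < 1e-12  # phi(e^{i theta}) = tan(theta/2)
    # uniform density 1/(2 pi) in theta, Jacobian |d theta / d u| = 2/(1 + u^2)
    pushforward = (1 / (2 * math.pi)) * 2 / (1 + u.real ** 2)
    assert abs(pushforward - cauchy_density(u.real)) < 1e-12
```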
+
+\section{Loewner's theorem}
+\subsection{Key estimates}
+Before we prove Loewner's theorem, we establish some key identities and estimates. As before, a useful thing to do is to translate everything in terms of Brownian motion.
+\begin{prop}
+ Let $A \in \mathcal{Q}$ and $B$ be a complex Brownian motion. Set
+ \[
+ \tau = \inf \{t \geq 0: B_t \not \in \H \setminus A\}.
+ \]
+ Then
+ \begin{itemize}
+ \item If $x > \rad(A)$, then
+ \[
+ g_A(x) = \lim_{y \to \infty} \pi y \left(\frac{1}{2} - \P_{iy} [B_\tau \in [x, \infty)]\right).
+ \]
+ \item If $x < -\rad(A)$, then
+ \[
+ g_A(x) = \lim_{y \to \infty} \pi y \left(\P_{iy}[B_\tau \in (-\infty, x]] - \frac{1}{2}\right).
+ \]
+ \end{itemize}
+\end{prop}
+
+\begin{proof}
+ First consider the case $A = \emptyset$ and, by symmetry, $x > 0$. Then
+ \begin{align*}
+ &\hphantom{{}={}}\lim_{y \to \infty} \pi y \left(\frac{1}{2} - \P_{iy}[B_\tau \in [x, \infty)]\right)\\
+ &= \lim_{y \to \infty} \pi y \,\P_{iy} [B_\tau \in [0, x)]\\
+ &=\lim_{y \to \infty} \pi y \int_0^x \frac{y}{\pi (s^2 + y^2)}\;\d s\\
+ &= x,
+ \end{align*}
+ where the first equality follows from the fact that Brownian motion exits through the positive reals with probability $\frac{1}{2}$; the second equality follows from the previously computed exit distribution; and the last follows from dominated convergence.
+
+ Now suppose $A \not= \emptyset$. We will use conformal invariance to reduce this to the case above. We write $g_A = u_A + i v_A$. We let
+ \[
+ \sigma = \inf \{t > 0: B_t \not \in \H\}.
+ \]
+ Then we know
+ \begin{align*}
+ \P_{iy} [B_\tau \in [x, \infty)] &= \P_{g_A(iy)} [B_\sigma \in [g_A(x), \infty)]\\
+ &= \P_{iv_A(iy)} [B_\sigma \in [g_A(x) - u_A(iy), \infty)].
+ \end{align*}
+ Since $g_A(z) - z \to 0$ as $z \to \infty$, it follows that $\frac{v_A(iy)}{y} \to 1$ and $u_A(iy) \to 0$ as $y \to \infty$. So we have
+ \[
+ \Big|\P_{iv_A(iy)} \big[B_\sigma \in [g_A(x) - u_A(iy), \infty)\big] - \P_{iy}\big[B_\sigma \in [g_A(x), \infty)\big]\Big| = o(y^{-1})
+ \]
+ as $y \to \infty$. Combining with the case $A = \emptyset$, the proposition follows.
+\end{proof}
+
+\begin{cor}
+ If $A \in \mathcal{Q}$, $\rad(A) \leq 1$, then
+ \begin{align*}
+ x &\leq g_A(x) \leq x + \frac{1}{x}&&\text{if }x > 1\\
+ x + \frac{1}{x} &\leq g_A(x) \leq x && \text{if }x < -1.
+ \end{align*}
+ Moreover, for all $A \in \mathcal{Q}$, we have
+ \[
+ |g_A(z) - z| \leq 3\, \rad(A).
+ \]
+\end{cor}
+
+\begin{proof}
+ Exercise on the first example sheet. Note that $z \mapsto z + \frac{1}{z}$ sends $\H \setminus \bar{\D}$ to $\H$.
+\end{proof}
+This corollary gives us a ``zeroth order'' bound on $g_A(z) - z$. We can obtain a better first-order bound as follows:
+\begin{prop}
+ There is a constant $c > 0$ so that for every $A \in \mathcal{Q}$ and $|z| > 2\, \rad(A)$, we have
+ \[
+ \left|g_A(z) - \left(z + \frac{\hcap(A)}{z}\right)\right| \leq c \frac{\rad(A) \cdot \hcap(A)}{|z|^2}.
+ \]
+\end{prop}
+
+\begin{proof}
+ Performing a scaling if necessary, we can assume that $\rad(A) = 1$. Let
+ \[
+ h(z) = z + \frac{\hcap (A)}{z} - g_A(z).
+ \]
+ We aim to control the imaginary part of this, and then use the Cauchy--Riemann equations to control the real part as well. We let
+ \[
+ v(z) = \im(h(z)) = \im(z - g_A(z)) - \frac{\im (z)}{|z|^2} \hcap(A).
+ \]
+ Let $B$ be a complex Brownian motion, and let
+ \begin{align*}
+ \sigma &= \inf \{t \geq 0: B_t \not \in \H \setminus \bar{\D}\}\\
+ \tau &= \inf \{t \geq 0: B_t \not \in \H\}.
+ \end{align*}
+ Let $p(z, e^{i\theta})$ be the density with respect to the Lebesgue measure at $e^{i\theta}$ for $B_\sigma$. Then by the strong Markov property at the time $\sigma$, we have
+ \[
+ \im(z - g_A(z)) = \int_0^\pi \E_{e^{i\theta}} [\im(B_\tau)] p(z, e^{i\theta})\;\d \theta.
+ \]
+ In the first example sheet Q3, we show that
+ \[
+ p(z, e^{i\theta}) = \frac{2}{\pi} \frac{\im(z)}{|z|^2} \sin \theta \left(1 + O\left(\frac{1}{|z|}\right)\right).\tag{$*$}
+ \]
+ We also have
+ \[
+ \hcap(A) = \frac{2}{\pi} \int_0^\pi \E_{e^{i\theta}} [\im(B_\tau)] \sin \theta \;\d \theta.
+ \]
+ So
+ \begin{align*}
+ |v(z)| &= \left| \im (z - g_A(z)) - \frac{\im(z)}{|z|^2} \hcap(A)\right|\\
+ &= \left| \int_0^\pi \E_{e^{i\theta}} [\im(B_\tau)] p(z, e^{i\theta})\;\d \theta - \frac{\im(z)}{|z|^2} \cdot \frac{2}{\pi} \int_0^\pi \E_{e^{i\theta}}[\im(B_\tau)] \sin \theta \;\d \theta\right|.
+ \end{align*}
+ By applying $(*)$, we get
+ \[
+ |v(z)| \leq \frac{c \hcap (A) \im(z)}{|z|^3},
+ \]
+ where $c$ is a constant.
+
+ Recall that $v$ is harmonic as it is the imaginary part of a holomorphic function. By example sheet 1 Q9, we have
+ \[
+ |\partial_x v(z)| \leq \frac{c \hcap(A)}{|z|^3},\quad |\partial_y v(z)| \leq \frac{c \hcap(A)}{|z|^3}.
+ \]
+ By the Cauchy--Riemann equations, the partial derivatives of $\re(h(z))$ satisfy the same bounds. So we know that
+ \[
+ |h'(z)| \leq \frac{c \hcap(A)}{|z|^3}.
+ \]
+ Then
+ \[
+ h(iy) = \int_y^\infty h'(is)\;\d s,
+ \]
+ since $h(iy) \to 0$ as $y \to \infty$. Taking absolute values, we have
+ \begin{align*}
+ |h(iy)| &\leq \int_y^\infty |h'(is)|\;\d s\\
+ &\leq c \hcap (A) \int_y^\infty s^{-3} \;\d s\\
+ &\leq c' \hcap(A) y^{-2}.
+ \end{align*}
+ To get the desired bound for a general $z = re^{i\theta}$, integrate $h'$ along the arc of the circle of radius $|z|$ from $i|z|$ to $z$ to get
+ \[
+ |h(z)| \leq |h(i|z|)| + \frac{c \hcap(A)}{|z|^2}.\qedhere
+ \]
+\end{proof}
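The estimate can be sanity-checked on the slit $A = [0, 2i]$, where $g_A(z) = \sqrt{z^2 + 4}$ (branch in $\H$), $\rad(A) = 2$, $\hcap(A) = 2$, and the series gives $g_A(z) - (z + \hcap(A)/z) = -2/z^3 + \cdots$. A numeric sketch (not from the notes):

```python
import cmath

def g(z):
    # g_A for A = [0, 2i] (rad(A) = 2, hcap(A) = 2): branch of sqrt(z^2 + 4) in H
    w = cmath.sqrt(z * z + 4)
    return w if w.imag >= 0 else -w

hcap, rad = 2.0, 2.0
for z in (5j, 8 + 3j, -10 + 1j, 40j, 100 + 50j):
    assert abs(z) > 2 * rad                      # the proposition's hypothesis
    err = abs(g(z) - (z + hcap / z))
    bound_ratio = err * abs(z) ** 2 / (rad * hcap)
    assert bound_ratio < 1                       # consistent with the bound, c = 1
```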
+
+The following is a very useful fact about Brownian motion:
+\begin{thm}[Beurling estimate]\index{Beurling estimate}
+ There exists a constant $c > 0$ so that the following holds. Let $B$ be a complex Brownian motion, and $A \subseteq \bar{\D}$ be connected, $0 \in A$, and $A \cap \partial \bar{\D} \not= \emptyset$. Then for $z \in \D$, we have
+ \[
+ \P_z[B[0, \tau] \cap A = \emptyset] \leq c |z|^{1/2},
+ \]
+ where $\tau = \inf \{t \geq 0: B_t \not \in \D\}$.\fakeqed
+\end{thm}
+We will not prove this, since it is quite tricky to prove. The worst case behaviour is obtained when $A = [-i, 0]$.
+
+To prove Loewner's theorem, we need one more estimate, which uses the Beurling estimate.
+
+\begin{prop}
+ There exists a constant $c > 0$ so that the following is true: Suppose $A, \tilde{A} \in \mathcal{Q}$ with $A \subseteq \tilde{A}$ and $\tilde{A} \setminus A$ is connected. Then
+ \[
+ \diam(g_A(\tilde{A}\setminus A))\leq c
+ \begin{cases}
+ (dr)^{1/2} & d \leq r\\
+ \rad(\tilde{A}) & d > r
+ \end{cases},
+ \]
+ where
+ \[
+ d = \diam(\tilde{A} \setminus A),\quad r = \sup \{\im (z) : z \in \tilde{A}\}.
+ \]
+\end{prop}
+
+\begin{proof}
+ By scaling, we can assume that $r = 1$.
+ \begin{itemize}
+ \item If $d \geq 1$, then the result follows since
+ \[
+ |g_A(z) - z| \leq 3 \rad(A),
+ \]
+ and so
+ \[
+ \diam(g_A(\tilde{A}\setminus A)) \leq \diam(\tilde{A} \setminus A) + 6 \rad(A) \leq 8 \rad(\tilde{A}).
+ \]
+ \item If $d < 1$, fix $z \in \H$ so that $U = B(z, d) \supseteq \tilde{A} \setminus A$. It then suffices to bound the size of $g_A(U)$ (or, to be precise, $g_A(U \setminus A)$).
+
+ Let $B$ be a complex Brownian motion starting from $iy$ with $y \geq 2$. Let
+ \[
+ \tau = \inf \{t \geq 0: B_t \not \in \H \setminus A\}.
+ \]
+ For $B[0, \tau]$ to reach $U$, it must
+ \begin{enumerate}
+ \item Reach $B(z, 1)$ without leaving $\H \setminus A$, which occurs with probability at most $c/y$ for some constant $c$, by example sheet.
+ \item It must then hit $U$ before leaving $\H \setminus A$. By the Beurling estimate, this occurs with probability $\leq c d^{1/2}$.
+ \end{enumerate}
+ Combining the two, we see that
+ \[
+ \limsup_{y \to \infty} y\, \P_{iy}[B[0, \tau] \cap U \not= \emptyset] \leq c d^{1/2}.
+ \]
+ By the conformal invariance of Brownian motion, if $\sigma = \inf \{t \geq 0 : B_t \not \in \H\}$, this implies
+ \[
+ \limsup_{y \to \infty} y\, \P_{iy} [B[0, \sigma] \cap g_A(\tilde{A} \setminus A) \not= \emptyset] \leq c d^{1/2}.
+ \]
+ Since $g_A(\tilde{A} \setminus A)$ is connected, by Q10 of example sheet 1, we have
+ \[
+ \diam(g_A(\tilde{A} \setminus A)) \leq cd^{1/2}.\qedhere
+ \]
+ \end{itemize}
+\end{proof}
+
+We can finally get to the key content of Loewner's theorem. An important definition is the following:
+\begin{defi}
+ Suppose $(A_t)_{t \geq 0}$ is a family of compact $\H$-hulls. We say that $(A_t)$ is
+ \begin{enumerate}
+ \item \term{non-decreasing} if $s \leq t$ implies $A_s \subseteq A_t$.
+ \item \term{locally growing} if for all $T > 0$ and $\varepsilon > 0$, there exists $\delta > 0$ such that whenever $0 \leq s \leq t \leq s + \delta \leq T$, we have
+ \[
+ \diam(g_{A_s}(A_t \setminus A_s)) \leq \varepsilon.
+ \]
+ This is a continuity condition.
+ \item \term{parametrized by half-plane capacity} if $\hcap(A_t) = 2t$ for all $t \geq 0$.
+ \end{enumerate}
+ We write $\mathcal{A}$ for the set of families of compact $\H$-hulls which satisfy (i) to (iii). We write $\mathcal{A}_T$ for the set of such families defined on $[0, T]$.
+\end{defi}
+
+\begin{eg}
+ Let $\gamma$ be a simple curve in $\H$ starting from $0$, and let $A_t = \gamma[0, t]$. This clearly satisfies (i), and the previous proposition tells us it satisfies (ii). On the first example sheet, we show that we can reparametrize $\gamma$ so that $\hcap(A_t) = 2t$ for all $t \geq 0$. Upon doing so, we have $(A_t) \in \mathcal{A}$.
+\end{eg}
+
+\begin{thm} % Loewner's theorem.
+ Suppose that $(A_t) \in \mathcal{A}$. Let $g_t = g_{A_t}$. Then there exists a continuous function $U: [0, \infty) \to \R$ so that
+ \[
+ \partial_t g_t (z) = \frac{2}{g_t(z) - U_t},\quad g_0(z) = z.
+ \]
+\end{thm}
+This is known as the \term{chordal Loewner ODE}, and $U$ is called the \term{driving function}.
+
+\begin{proof}
+ First note that since the hulls are locally growing, the intersection $\bigcap_{s > t} \overline{g_t(A_s \setminus A_t)}$ consists of exactly one point. Call this point $U_t$. Again by the locally growing property, $U_t$ is in fact a continuous function in $t$.
+
+ Recall that if $A \in \mathcal{Q}$, then
+ \[
+ g_A(z) = z + \frac{\hcap(A)}{z} + O\left(\frac{\hcap(A) \rad(A)}{|z|^2}\right).
+ \]
+ If $x \in \R$, then as $g_{A + x}(z) - x = g_A(z - x)$, we have
+ \[
+ g_A(z) = g_{A + x}(z + x) - x = z + \frac{\hcap(A)}{z + x} + O\left(\frac{\hcap(A) \rad(A + x)}{|z + x|^2}\right).\tag{$*$}
+ \]
+ Fix $\varepsilon > 0$. For $0 \leq s \leq t$, let
+ \[
+ g_{s, t} = g_t \circ g_s^{-1}.
+ \]
+ Note that
+ \[
+ \hcap(g_t(A_{t + \varepsilon} \setminus A_t)) = 2 \varepsilon.
+ \]
+ Apply $(*)$ with $A = g_t(A_{t + \varepsilon} \setminus A_t)$, $x = -U_t$, and use that $\rad(A + x) = \rad(A - U_t) \leq \diam(A)$ to see that
+ \[
+ g_A(z) = g_{t, t + \varepsilon}(z) = z + \frac{2\varepsilon}{z - U_t} + 2 \varepsilon \diam(g_t(A_{t + \varepsilon} \setminus A_t))O\left(\frac{1}{|z - U_t|^2}\right).
+ \]
+ So
+ \begin{align*}
+ g_{t + \varepsilon}(z) - g_t(z) &= (g_{t, t + \varepsilon} - g_{t, t}) \circ g_t(z)\\
+ &= \frac{2\varepsilon}{g_t(z) - U_t} + 2\varepsilon \diam(g_t(A_{t + \varepsilon} \setminus A_t)) O\left(\frac{1}{|g_t(z) - U_t|^2}\right).
+ \end{align*}
+ Dividing both sides by $\varepsilon$ and taking the limit as $\varepsilon \to 0$, the desired result follows since $\diam(g_t(A_{t + \varepsilon} \setminus A_t)) \to 0$.
+\end{proof}
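For the constant driving function $U \equiv 0$, the Loewner ODE can be solved explicitly: $g_t(z) = \sqrt{z^2 + 4t}$, the slit map from before, with hulls $A_t = [0, 2\sqrt{t}\,i]$. A numeric integration (RK4; a sketch, not part of the notes) confirms this:

```python
import cmath

def loewner_flow(z, T, n=4000):
    # integrate dg/dt = 2 / (g_t - U_t) with U = 0 by classical RK4
    g, dt = z, T / n
    f = lambda w: 2 / w
    for _ in range(n):
        k1 = f(g)
        k2 = f(g + dt * k1 / 2)
        k3 = f(g + dt * k2 / 2)
        k4 = f(g + dt * k3)
        g += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return g

z, T = 1 + 1j, 1.0
exact = cmath.sqrt(z * z + 4 * T)   # g_t(z) = sqrt(z^2 + 4t), the slit map
assert abs(loewner_flow(z, T) - exact) < 1e-9
# the tip of the hull at time T is gamma(T) = 2 sqrt(T) i: g_T maps it to U_T = 0
assert abs(cmath.sqrt((2 * cmath.sqrt(T) * 1j) ** 2 + 4 * T)) < 1e-9
```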
+
+We can ask about the reverse procedure --- given a continuous function $U_t$, can we find a collection $A_t \in \mathcal{A}$ whose driving function is $U_t$? We can simply let $g_t$ be the solution to this differential equation. We can then let
+\[
+ A_t = \H \setminus \mathrm{domain}(g_t).
+\]
+On the example sheet, we show that this is indeed a family in $\mathcal{A}$, and $g_t$ are indeed conformal transformations.
+
+\subsection{Schramm--Loewner evolution}
+We now take $U_t$ to be a \emph{random process} adapted to $\mathcal{F}_t = \sigma(U_s: s \leq t)$. We can then define $g_t(z)$ as the solution to the Loewner equation for each $z$, and define $A_t$ accordingly. We then obtain a \emph{random} family $A_t$ in $\mathcal{A}$.
+
+\begin{defi}[Conformal Markov property]\index{conformal Markov property}
+ We say that $(A_t)$ satisfies the \emph{conformal Markov property} if
+ \begin{enumerate}
+ \item Given $\mathcal{F}_t$, $(g_t(A_{t + s} \setminus A_t) - U_t)_{s \geq 0} \overset{d}{=} (A_s)_{s \geq 0}$.
+ \item Scale-invariance: $(r A_{t/r^2} )_{t \geq 0} \overset{d}{=} (A_t)$.
+ \end{enumerate}
+\end{defi}
+Here (i) is the Markov property, and the second part is the conformal part, since the only conformal maps $\H \to \H$ that fix $0$ and $\infty$ are the rescalings.
+
+Schramm was interested in such processes because they provide us with potential scaling limits of random processes, such as self-avoiding walk or percolation. If we start with a process that has the Markov property, then the scaling limit, if it exists, ought to be scale invariant, and thus have the conformal Markov property. Schramm classified the list of all possible such scaling limits:
+
+\begin{thm}[Schramm]
+ If $(A_t)$ satisfies the conformal Markov property, then there exists $\kappa \geq 0$ so that $U_t = \sqrt{\kappa} B_t$, where $B$ is a standard Brownian motion.
+\end{thm}
+
+\begin{proof}
+ The first property is exactly the same thing as saying that given $\mathcal{F}_t$, we have
+ \[
+ (U_{t + s} - U_t)_{s \geq 0} \overset{d}{=} (U_s).
+ \]
+ So $U_t$ is a continuous process with stationary, independent increments. This implies there exists $\kappa \geq 0$ and $a \in \R$ such that
+ \[
+ U_t = \sqrt{\kappa}B_t + at,
+ \]
+ where $B$ is a standard Brownian motion. Then the second part says
+ \[
+ (r U_{t/r^2})_{t \geq 0} \overset{d}{=} (U_t)_{t \geq 0}.
+ \]
+ So $U$ satisfies Brownian scaling. Plugging this in, we know
+ \[
+ r \sqrt{\kappa} B_{t/r^2} + at/r \overset{d}{=} \sqrt{\kappa} B_t + at.
+ \]
+ Since $r \sqrt{\kappa} B_{t/r^2} \overset{d}{=} \sqrt{\kappa} B_t$ by Brownian scaling, we must have $a = 0$.
+\end{proof}
+
+This finally gives us the definition of SLE.
+\begin{defi}[Schramm--Loewner evolution]\index{Schramm--Loewner evolution}\index{SLE}
+ For $\kappa > 0$, $\SLE_\kappa$ is the random family of hulls encoded by $U_t = \sqrt{\kappa} B_t$, where $B$ is a standard Brownian motion.
+\end{defi}
+When $\kappa = 0$, we have $U_t = 0$. This corresponds to the curve $\gamma(t) = 2 \sqrt{t} i$.
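We can sanity-check this against the Loewner equation: with $U_t = 0$, the flow $\partial_t g_t(z) = 2/g_t(z)$, $g_0(z) = z$ has the explicit solution $g_t(z) = \sqrt{z^2 + 4t}$, and the tip of the slit is $g_t^{-1}(0) = 2\sqrt{t}i$. A minimal numerical sketch of the closed form (illustrative only, not part of the course):

```python
import numpy as np

# Integrate the Loewner ODE dg/dt = 2/g (driving function U_t = 0) by Euler's
# method, and compare with the closed-form solution g_t(z) = sqrt(z^2 + 4t).
z0 = 1.0 + 2.0j
dt, t_end = 1e-5, 1.0
g = z0
for _ in range(int(t_end / dt)):
    g += dt * 2 / g
exact = np.sqrt(z0**2 + 4 * t_end)
assert abs(g - exact) < 1e-3
# The inverse map sends 0 to sqrt(-4t) = 2i sqrt(t), the tip of the slit.
```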
+
+Usually, when talking about $\SLE_\kappa$, we are instead thinking about a curve.
+\begin{thm}[Rohde--Schramm, 2005]
+ If $(A_t)$ is an $\SLE_\kappa$ with flow $g_t$ and driving function $U_t$, then $g_t^{-1}: \H \to \H \setminus A_t$ extends to a map on $\bar{\H}$ for all $t \geq 0$ almost surely. Moreover, if we set $\gamma(t) = g_t^{-1}(U_t)$, then $\H \setminus A_t$ is the unbounded component of $\H \setminus \gamma([0, t])$.\fakeqed
+\end{thm}
+We will not prove this. From now on, $\SLE_\kappa$ will often refer to this curve instead.
+
+In the remainder of the course, we will first discuss some properties of $\SLE_\kappa$, and then ``prove'' that the scaling limit of certain objects are $\SLE_\kappa$'s. We will not be providing anything near a full proof of the identification of the scaling limit. Instead, we will look at certain random processes, and based on the properties of the random process, we deduce that the scaling limit ``ought to'' have a certain analogous property. We will then show that for a very particular choice of $\kappa$, the curve $\SLE_\kappa$ has that property. We can then deduce that the scaling limit is $\SLE_\kappa$ for this $\kappa$.
+
+What we will do is to prove properly that the $\SLE_\kappa$ does indeed have the desired property. What we will not do is to prove that the scaling limit exists, satisfies the conformal Markov property and actually satisfies the desired properties (though the latter two should be clear once we can prove the scaling limit exists).
+
+\section{Review of stochastic calculus}
+Before we start studying SLE, we do some review of stochastic calculus. In stochastic calculus, the basic object is a continuous semi-martingale. We write these as
+\[
+ X_t = M_t + A_t,
+\]
+where $M$ is a continuous local martingale and $A$ is continuous with bounded variation.
+
+The important concepts we will review are
+\begin{enumerate}
+ \item Stochastic integrals
+ \item Quadratic variation
+ \item It\^o's formula
+ \item L\'evy characterization of Brownian motion
+ \item Stochastic differential equations
+\end{enumerate}
+
+The general setting is that we have a probability space $(\Omega, \mathcal{F}, \P)$ with a continuous time filtration $\mathcal{F}_t$, which satisfies the ``usual conditions'', namely $\mathcal{F}_0$ contains all $\P$-null sets, and $\mathcal{F}_t$ is right continuous, i.e.
+\[
+ \mathcal{F}_t = \bigcap_{s > t} \mathcal{F}_s.
+\]
+\subsubsection*{Stochastic integral}
+If $X_t = M_t + A_t$ is a continuous semi-martingale, and $H_t$ is a previsible process, we set
+\[
+ \int_0^t H_s \;\d X_s = \int_0^t H_s \;\d M_s + \int_0^t H_s \;\d A_s,
+\]
+where the first integral is the It\^o integral, and the second is the Lebesgue--Stieltjes integral. The first term is a continuous local martingale, and the second is a continuous bounded variation process.
+
+The It\^o integral is defined in the same spirit as the Riemann integral. The key thing that makes it work is that there is ``extra cancellation'' in the definition, from the fact that we are integrating against a martingale, and so makes the integral converge even though the process we are integrating against is not of bounded variation.
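As an illustration of this convergence, It\^o calculus gives $\int_0^1 B_s \;\d B_s = \frac{1}{2}(B_1^2 - 1)$, and the left-endpoint Riemann sums do approach this value. A quick simulation sketch (fixed seed, illustrative only):

```python
import numpy as np

# Left-endpoint (Ito) Riemann sums for \int_0^1 B dB, compared with the exact
# value (B_1^2 - 1)/2 given by Ito calculus.
rng = np.random.default_rng(0)
n = 10**6
dB = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])
ito = np.sum(B[:-1] * dB)
assert abs(ito - (B[-1]**2 - 1) / 2) < 0.01
```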
+
+\subsubsection*{Quadratic variation}
+If $M$ is a continuous local martingale, then the quadratic variation is
+\[
+ [M]_t = \lim_{n \to \infty} \sum_{k = 0}^{\lceil 2^n t\rceil - 1}(M_{(k + 1)2^{-n}} - M_{k2^{-n}})^2,
+\]
+and is the unique continuous non-decreasing process of bounded variation such that $M_t^2 - [M]_t$ is a continuous local martingale. Applying the same definition to a bounded variation process always gives zero, so if $X$ is a semi-martingale, it makes sense to define
+\[
+ [X]_t = [M + A]_t = [M]_t.
+\]
+Also, we have
+\[
+ \left[\int_0^{(\ph)} H_s\;\d M_s\right]_t = \int_0^t H_s^2 \;\d [M]_s,
+\]
+where the integral on the right is the Lebesgue--Stieltjes integral.
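For Brownian motion itself, $[B]_t = t$, and the dyadic sums above converge quickly. A small simulation sketch illustrating this (seed fixed for reproducibility):

```python
import numpy as np

# Approximate [B]_t by the sum of squared dyadic increments at level 2^{-n}.
rng = np.random.default_rng(1)
t, n = 2.0, 20
N = int(np.ceil(2**n * t))
dB = rng.normal(0.0, np.sqrt(2.0**-n), size=N)
qv = np.sum(dB**2)
assert abs(qv - t) < 0.05  # [B]_t = t
```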
+
+\subsubsection*{It\^o's formula}
+It\^o's formula is the stochastic calculus' analogue of the fundamental theorem of calculus. It takes a bit of work to prove It\^o's formula, but it is easy to understand what the intuition is. If $f \in C^2$, then we have
+\[
+ f(t) = f(0) + \sum_{k = 1}^n (f(t_k) - f(t_{k - 1}))
+\]
+for any partition $0 = t_0 < \cdots < t_n = t$. If we were to do a regular fundamental theorem of calculus, then we can perform a Taylor approximation
+\[
+ f(t) = f(0) + \sum_{k = 1}^n f'(t_{k - 1})\Big((t_k - t_{k - 1}) + o(t_k - t_{k - 1})\Big) \to f(0) + \int_0^t f'(s)\;\d s
+\]
+as $\max |t_k - t_{k - 1}| \to 0$.
+
+In It\^o's formula, we want to do the same thing but with Brownian motion. Suppose $B$ is a Brownian motion. Then
+\[
+ f(B_t) = f(B_0) + \sum_{k = 1}^n \Big(f(B_{t_k}) - f(B_{t_{k - 1}})\Big).
+\]
+When we do a Taylor expansion of this, we have to be a bit more careful than before, as we cannot ignore all the higher order terms. The point is that the quadratic variation of $B_t$ is non-zero. We write this as
+\[
+ f(0) + \sum_{k = 1}^n \Big[f'(B_{t_{k - 1}}) (B_{t_k} - B_{t_{k - 1}}) + \frac{1}{2} f''(B_{t_{k - 1}}) (B_{t_k} - B_{t_{k - 1}})^2 + o((B_{t_k} - B_{t_{k - 1}})^2)\Big].
+\]
+Taking the limit, and using that $\E[(B_{t_k} - B_{t_{k - 1}})^2] = t_k - t_{k - 1}$, we get
+\[
+ f(B_t) = f(0) + \int_0^t f'(B_s)\;\d B_s + \frac{1}{2} \int_0^t f''(B_s) \;\d s.
+\]
+More generally, if $X_t = M_t + A_t$ is a continuous semi-martingale, and $f \in C^{1, 2}(\R_{\geq 0} \times \R)$, so that it is $C^1$ in the first variable and $C^2$ in the second, then
+\begin{multline*}
+ f(t, X_t) = f(0, X_0) + \int_0^t \partial_s f(s, X_s) \;\d s + \int_0^t \partial_x f(s, X_s)\;\d A_s\\
+ + \int_0^t \partial_x f(s, X_s) \;\d M_s + \frac{1}{2} \int_0^t \partial_x^2 f(s, X_s) \;\d [M]_s.
+\end{multline*}
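One can test It\^o's formula numerically: taking $X_t = B_t$ and $f(x) = x^3$ (so $A_t = 0$ and $[M]_t = t$), the formula reads $B_1^3 = 3\int_0^1 B_s^2 \;\d B_s + 3 \int_0^1 B_s \;\d s$. A simulation sketch (fixed seed, illustrative only):

```python
import numpy as np

# Check Ito's formula for f(x) = x^3:
#   B_1^3 = 3 \int_0^1 B^2 dB + 3 \int_0^1 B ds.
rng = np.random.default_rng(2)
n = 10**6
dt = 1.0 / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])
rhs = 3 * np.sum(B[:-1]**2 * dB) + 3 * np.sum(B[:-1]) * dt
assert abs(B[-1]**3 - rhs) < 0.05
```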
+
+\subsubsection*{L\'evy characterization of Brownian motion}
+\begin{thm}[L\'evy characterization]\index{L\'evy characterization}
+ If $M_t$ is a continuous local martingale with $[M]_t = t$ for all $t \geq 0$, then $M_t$ is a standard Brownian motion.
+\end{thm}
+
+\begin{proof}[Proof sketch]
+ Use It\^o's formula with the exponential moment generating function $e^{i\theta M_t + \theta^2/2 [M]_t}$.
+\end{proof}
+
+\subsubsection*{Stochastic differential equations}
+Suppose $(\Omega, \mathcal{F}, \P)$ and $(\mathcal{F}_t)$ are as before, and $(B_t)$ is a Brownian motion adapted to $(\mathcal{F}_t)$. We say a process $X_t$ satisfies the \term{stochastic differential equation}
+\[
+ \d X_t = b(X_t) \;\d t + \sigma(X_t) \;\d B_t
+\]
+for functions $b, \sigma$ iff
+\[
+ X_t = \int_0^t b(X_s) \;\d s + \int_0^t \sigma(X_s) \;\d B_s + X_0
+\]
+for all $t \geq 0$. At the end of the stochastic calculus course, we will see that there is a unique solution to this equation provided $b, \sigma$ are Lipschitz functions.
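Numerically, such SDEs are usually solved by the Euler--Maruyama scheme, which discretizes the integral form above. A sketch (our own illustration, not the course's construction of solutions), tested on the Ornstein--Uhlenbeck equation $\d X_t = -X_t \;\d t + \d B_t$, for which $\E[X_t] = X_0 e^{-t}$:

```python
import numpy as np

# Euler--Maruyama scheme for dX = b(X) dt + sigma(X) dB (a sketch).
def euler_maruyama(b, sigma, x0, t, n, paths, rng):
    dt = t / n
    X = np.full(paths, x0, dtype=float)
    for _ in range(n):
        X += b(X) * dt + sigma(X) * rng.normal(0.0, np.sqrt(dt), size=paths)
    return X

# Ornstein--Uhlenbeck: dX = -X dt + dB, with E[X_t] = x0 exp(-t).
rng = np.random.default_rng(3)
X = euler_maruyama(lambda x: -x, lambda x: np.ones_like(x), 2.0, 1.0, 1000, 20000, rng)
assert abs(X.mean() - 2.0 * np.exp(-1.0)) < 0.03
```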
+
+This in particular implies we can solve
+\[
+ \partial_t g_t(z) = \frac{2}{g_t(z) - U_t},\quad g_0(z) = z
+\]
+for $U_t = \sqrt{\kappa} B_t$, where $B_t$ is a standard Brownian motion.
+
+\section{Phases of SLE}
+We now study some basic properties of $\SLE_\kappa$. Our first goal will be to prove the following theorem:
+\begin{thm}
+ $\SLE_\kappa$ is a simple curve if $\kappa \leq 4$, and is self-intersecting if $\kappa > 4$.
+\end{thm}
+
+When proving this theorem, we will encounter the \emph{Bessel stochastic differential equation}, and so we shall begin by understanding this SDE and the associated Bessel process.
+
+\begin{defi}[Square Bessel process]\index{square Bessel process}
+ Let $X = (B^1, \ldots, B^d)$ be a $d$-dimensional standard Brownian motion. Then
+ \[
+ Z_t = \|X_t\|^2 = (B^1_t)^2 + (B_t^2)^2 + \cdots + (B_t^d)^2
+ \]
+ is a \emph{square Bessel process} of dimension $d$.
+\end{defi}
+The square Bessel process satisfies a stochastic differential equation, which we can obtain from It\^o's formula. It\^o's formula tells us
+\[
+ \d Z_t = 2B_t^1 \;\d B_t^1 + \cdots + 2 B_t^d\;\d B_t^d + d\cdot \d t.
+\]
+For reasons that will become clear soon, we define
+\[
+ Y_t = \int_0^t \frac{B_s^1 \;\d B_s^1 + \cdots + B_s^d \;\d B_s^d}{Z_s^{1/2}}.
+\]
+Then tautologically, we have
+\[
+ \d Z_t = 2 Z_t^{1/2} \;\d Y_t + d\cdot \d t.
+\]
+Observe that $Y_t$ is a continuous local martingale, and the quadratic variation is equal to
+\[
+ [Y]_t = \int_0^t \frac{(B_s^1)^2 + \cdots + (B_s^d)^2}{Z_s}\;\d s = t.
+\]
+By the L\'evy characterization, $Y_t$ is a standard Brownian motion. So we can write
+\begin{lemma}
+ \[
+ \d Z_t = 2 Z_t^{1/2} \;\d \tilde{B}_t + d \cdot \d t.
+ \]
+ where $\tilde{B}$ is a standard Brownian motion.
+\end{lemma}
+This is called the \term{square Bessel stochastic differential equation} of dimension $d$.
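As a quick consistency check, the drift term $d \cdot \d t$ forces $\E[Z_t] = Z_0 + d t$; for $X$ started at the origin, $\E \|X_t\|^2 = dt$. A simulation sketch (fixed seed):

```python
import numpy as np

# E[Z_t] = d t for the square Bessel process started from 0: the martingale
# part has mean zero, and the drift d * dt accumulates to d t.
rng = np.random.default_rng(7)
d, t, paths = 3, 2.0, 10**5
X = rng.normal(0.0, np.sqrt(t), size=(paths, d))  # d-dimensional BM at time t
Z = np.sum(X**2, axis=1)
assert abs(Z.mean() - d * t) < 0.1
```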
+
+Ultimately, we are interested in the Bessel process, i.e.\ the square root of the square Bessel process.
+\begin{defi}[Bessel process]\index{Bessel process}
+ The \emph{Bessel process} of dimension $d$, written \term{BES\textsuperscript{$d$}}, is
+ \[
+ U_t = Z_t^{1/2}.
+ \]
+\end{defi}
+Applying It\^o's formula, we have
+\[
+ \d U_t = \frac{1}{2} Z_t^{-1/2} \;\d Z_t - \frac{1}{8} Z_t^{-3/2}\;\d [Z]_t
+\]
+The square Bessel SDE lets us understand the first term, and we can calculate
+\[
+ \d [Z]_t = 4 Z_t \;\d t.
+\]
+Simplifying, this becomes
+\begin{lemma}
+ \[
+ \d U_t = \left(\frac{d - 1}{2}\right) U_t^{-1}\;\d t + \d \tilde{B}_t.
+ \]
+\end{lemma}
+This is the \term{Bessel stochastic differential equation} of dimension $d$.
+
+Observe that we can make sense of this equation for any $d \in \R$, not necessarily integral. However, the equation only makes sense when $U_t \not= 0$, and $U_t^{-1}$ fails to be Lipschitz as $U_t \to 0$. To avoid this problem, we say this is defined up to when it hits $0$.
+
+\begin{prop}
+ Let $d \in \R$, and $U_t$ a BES$^d$.
+ \begin{enumerate}
+ \item If $d < 2$, then $U_t$ hits $0$ almost surely.
+ \item If $d \geq 2$, then almost surely $U_t$ doesn't hit $0$.
+ \end{enumerate}
+\end{prop}
+This agrees with what we expect from the recurrence of Brownian motion, for integral $d$. In particular, for $d = 2$, a Brownian motion gets arbitrarily close to $0$ infinitely often, so we expect that if we are just a bit lower than $d = 2$, we would hit $0$ almost surely.
+\begin{proof}
+ The proof is similar to the proof of recurrence of Brownian motion. For $a \in \R_{>0}$, we define
+ \[
+ \tau_a = \inf \{t \geq 0: U_t = a\}.
+ \]
+ We then consider $\P[\tau_b < \tau_a]$, and take the limit $a \to 0$ and $b \to \infty$. To do so, we claim
+ \begin{claim}
+ $U_t^{2 - d}$ is a continuous local martingale.
+ \end{claim}
+
+ To see this, we simply compute using It\^o's formula to get
+ \begin{align*}
+ \d U_t^{2 - d} &= (2 - d) U_t^{1 - d} \;\d U_t + \frac{1}{2} (2 - d) (1 - d) U_t^{-d} \;\d [U]_t\\
+ &= (2 - d) U_t^{1 - d} \;\d \tilde{B}_t + \frac{(2 - d)(d - 1)}{2 U_t} U_t^{1 - d}\;\d t + \frac{1}{2} (2 - d)(1 - d) U_t^{-d}\;\d t\\
+ &= (2 - d) U_t^{1 - d}\;\d \tilde{B}_t.
+ \end{align*}
+ Therefore $U_t^{2 - d}$ is a continuous local martingale. Since $U_{t \wedge \tau_a \wedge \tau_b}^{2 - d}$ is bounded, it is a true martingale, and so optional stopping tells us
+ \[
+ U_0^{2 - d} = \E [U_{\tau_a \wedge \tau_b}^{2 - d}] = a^{2 - d} \P[\tau_a < \tau_b] + b^{2 - d} \P[\tau_b < \tau_a].
+ \]
+ \begin{itemize}
+ \item If $d < 2$, we set $a = 0$, and then
+ \[
+ U_0^{2 - d} = b^{2 - d} \P[\tau_b < \tau_0].
+ \]
+ Dividing both sides by $b^{2 - d}$, we find that
+ \[
+ \left(\frac{U_0}{b}\right)^{2 - d} = \P[\tau_b < \tau_0].
+ \]
+ Taking the limit $b \to \infty$, the left-hand side tends to $0$, so $\P[\tau_0 < \tau_b] \to 1$, and hence $U_t$ hits $0$ almost surely.
+ \item If $d > 2$, then we have
+ \[
+ \P[\tau_a < \tau_b] = \left(\frac{U_0}{a}\right)^{2 - d} - \left(\frac{b}{a}\right)^{2 - d} \P[\tau_b < \tau_a] \to 0
+ \]
+ as $a \to 0$ for any $b$ and $U_0 > 0$. So we are done in this case.
+ \item If $d = 2$, then our martingale is just constant, and this analysis is useless. In this case, we consider $\log U_t$ and perform the same analysis to obtain the desired conclusion.\qedhere
+ \end{itemize}
+\end{proof}
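The optional stopping identity in the proof is easy to test in the simplest case $d = 1$, where (up to hitting $0$) the Bessel process is a Brownian motion and $\P[\tau_b < \tau_0] = U_0 / b$. The discrete analogue is gambler's ruin for a simple random walk, which we can simulate (illustrative sketch, fixed seed):

```python
import numpy as np

# For d = 1, up to hitting 0 the Bessel process is a Brownian motion, and
# optional stopping gives P[tau_b < tau_0] = U_0 / b.  The discrete analogue
# is gambler's ruin for a simple random walk absorbed at 0 and m.
rng = np.random.default_rng(4)

def hit_top(k, m, rng):
    while 0 < k < m:
        k += rng.choice([-1, 1])
    return k == m

trials = 20000
hits = sum(hit_top(3, 10, rng) for _ in range(trials))
assert abs(hits / trials - 3 / 10) < 0.02  # U_0 / b with U_0 = 3, b = 10
```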
+
+Now take an $\SLE_\kappa$ curve. For $x \in \R$, consider the process
+\[
+ V_t^x = g_t(x) - U_t = g_t(x) - g_t(\gamma(t)).
+\]
+Essentially by definition, $x$ is absorbed into $A_t$ at the first time $V_t^x = 0$ (formally, we need to take limits, since $x$ is not in $\H$). We thus define
+\[
+ \tau_x = \inf \{t \geq 0: V_t^x = 0\}.
+\]
+This is then the time $\gamma$ cuts $x$ off from $\infty$. We thus want to understand $\P[\tau_x < \infty]$.
+
+We can calculate
+\[
+ \d V_t^x = \d (g_t(x) - U_t) = \frac{2}{g_t(x) - U_t}\;\d t - \sqrt{\kappa} \;\d B_t = \frac{2}{V_t^x}\;\d t - \sqrt{\kappa}\;\d B_t.
+\]
+This looks almost like the Bessel SDE, but we need to do some rescaling to write this as
+\[
+ \d \left(\frac{V_t^x}{\sqrt{\kappa}}\right) = \frac{2/\kappa}{V_t^x/\sqrt{\kappa}} \;\d t + \d \tilde{B}_t,\quad \tilde{B}_t = - B_t.
+\]
+So we get that $V_t^x/\sqrt{\kappa} \sim \BES^d$, with $d = 1 + 4/\kappa$. Thus, our previous result implies
+\[
+ \P[\tau_x < \infty] =
+ \begin{cases}
+ 1 & \kappa > 4\\
+ 0 & \kappa \leq 4
+ \end{cases}.
+\]
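The translation between $\kappa$ and the Bessel dimension is worth recording. A trivial helper (the function names are ours, shown for concreteness):

```python
def bessel_dim(kappa):
    """Dimension d = 1 + 4/kappa of the Bessel process for V_t^x / sqrt(kappa)."""
    return 1 + 4 / kappa

def swallows_boundary(kappa):
    """Whether tau_x < infinity a.s., i.e. BES^d hits 0, which happens iff d < 2."""
    return bessel_dim(kappa) < 2

assert bessel_dim(4) == 2 and bessel_dim(8) == 1.5
assert not swallows_boundary(4) and swallows_boundary(6)
```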
+\begin{thm}
+ $\SLE_\kappa$ is a simple curve if $\kappa \leq 4$, and is self-intersecting if $\kappa > 4$.
+\end{thm}
+
+\begin{proof}
+ If $\kappa \leq 4$, consider the probability that $\gamma(t + n)$ hits $\gamma([0, n])$. This is equivalently the probability that $\gamma(t + n)$ hits $\partial A_n$. This is bounded above by the probability that $g_n(\gamma(t + n)) - U_n$ hits $\partial \H$. But by the conformal Markov property, $g_n(\gamma(t + n)) - U_n$ is an $\SLE_\kappa$. So this probability is $0$.
+
+ If $\kappa > 4$, we want to reverse the above argument, but we need to be a bit careful in taking limits. We have
+ \[
+ \lim_{n \to \infty} \P[\gamma(t) \in [-n, n]\text{ for some }t] = 1.
+ \]
+ On the other hand, for any fixed $n$, we have
+ \[
+ \lim_{m \to \infty} \P[g_m(A_m) - U_m \supseteq [-n, n]] = 1.
+ \]
+ The probability that $\SLE_\kappa$ self-intersects is
+ \[
+ \geq \P[g_m(A_m) - U_m \supseteq [-n, n]\text{ and } g_m(\gamma(t + m)) - U_m \in [-n, n]\text{ for some }t].
+ \]
+ By the conformal Markov property, $g_m(\gamma(t + m)) - U_m$ is another $\SLE_\kappa$ given $\mathcal{F}_m$. So this factors as
+ \[
+ \P[g_m (A_m) - U_m \supseteq [-n, n]] \P[\gamma(t) \in [-n, n]\text{ for some }t].
+ \]
+ Since this is true for all $m$ and $n$, we can take the limit $m \to \infty$, then the limit $n \to \infty$ to obtain the desired result.
+\end{proof}
+
+It turns out there are two further regimes when $\kappa > 4$.
+\begin{thm}
+ If $\kappa \geq 8$, then $\SLE_\kappa$ is space-filling, but not if $\kappa \in (4, 8)$.\fakeqed
+\end{thm}
+This will be shown in the second example sheet. Here we prove a weaker result:
+
+\begin{prop}
+ $\SLE_\kappa$ fills $\partial \H$ iff $\kappa \geq 8$.
+\end{prop}
+
+To phrase this in terms of what we already have, we fix $\kappa > 4$. Then for any $0 < x < y$, we want to understand the probability
+\[
+ g(x, y) = \P[\tau_x = \tau_y].
+\]
+Here we assume $0 < x < y$. If $\tau_x = \tau_y$, this means the curve $\gamma$ cuts $x$ and $y$ off from $\infty$ at the same time, and hence doesn't hit $[x, y]$. Conversely, if $\P[\tau_x = \tau_y] = 0$, then with probability one, $\gamma$ hits $(x, y)$, and this is true for all $x$ and $y$. Thus, we want to show that $g(x, y) > 0$ for $\kappa \in (4, 8)$, and $g(x, y) = 0$ for $\kappa \geq 8$.
+
+To simplify notation, observe that $g(x, y) = g(1, y/x)$ as $\SLE_\kappa$ is scale-invariant. So we may assume $x = 1$. Moreover, we know
+\[
+ g(1, r) \to 0\text{ as }r \to \infty,
+\]
+since $\P[\tau_1 < t] \to 1$ as $t \to \infty$, but $\P[\tau_r < t] \to 0$ as $r \to \infty$ for $t$ fixed.
+
+\begin{defi}[Equivalent events]\index{equivalent events}
+ We say two events $A, B$ are \emph{equivalent} if
+ \[
+ \P[A \setminus B] = \P[B \setminus A] = 0.
+ \]
+\end{defi}
+
+\begin{prop}
+ For $r > 1$, the event $\{\tau_r = \tau_1\}$ is equivalent to the event
+ \[
+ \sup_{t < \tau_1} \frac{V_t^r - V_t^1}{V_t^1} < \infty.\tag{$*$}
+ \]
+\end{prop}
+
+\begin{proof}
+ If $(*)$ happens, then we cannot have that $\tau_1 < \tau_r$, or else the supremum is infinite. So $(*) \subseteq \{\tau_1 = \tau_r\}$. To prove the proposition, we have to show that
+ \[
+ \P\left[\tau_1 = \tau_r, \sup_{t < \tau_1} \frac{V_t^r - V_t^1}{V_t^1} = \infty\right] = 0.
+ \]
+
+ For $M > 0$, we define
+ \[
+ \sigma_M = \inf \left\{t \geq 0: \frac{V_t^r - V_t^1}{V_t^1} \geq M\right\}.
+ \]
+ It then suffices to show that
+ \[
+ P_M \equiv \P\left[\tau_1 = \tau_r \,\middle|\, \sup_{t < \tau_1} \frac{V_t^r - V_t^1}{V_t^1} \geq M\right] = \P[\tau_1 = \tau_r \mid \sigma_M < \tau_1] \to 0\text{ as }M \to \infty.
+ \]
+ But at time $\sigma_M$, we have $V_{\sigma_M}^r = (M + 1) V_{\sigma_M}^1$, and $\tau_1 = \tau_r$ iff these are cut off at the same time. Thus, the conformal Markov property (together with scale invariance) implies
+ \[
+ P_M = g(1, M + 1).
+ \]
+ So we are done.
+\end{proof}
+
+So we need to show that
+\[
+ \P\left[\sup_{t < \tau_1} \frac{V_t^r - V_t^1}{V_t^1} < \infty\right]
+ \begin{cases}
+ >0 & \kappa \in (4, 8)\\
+ =0 & \kappa \geq 8
+ \end{cases}.
+\]
+We will prove this using stochastic calculus techniques. Since ratios are hard, we instead consider
+\[
+ Z_t = \log \left(\frac{V_t^r - V_t^1}{V_t^1}\right).
+\]
+We can then compute
+\[
+ \d Z_t = \left[\left(\frac{3}{2} - d\right)\frac{1}{(V_t^1)^2} + \left(\frac{d - 1}{2}\right) \frac{V_t^r - V_t^1}{(V_t^1)^2 V_t^r}\right] \;\d t - \frac{1}{V_t^1} \;\d B_t,\quad Z_0 = \log (r - 1).
+\]
+Here we can already see why $\kappa = 8$ is special, since it corresponds to $d = \frac{3}{2}$.
+
+When faced with such complex equations, it is usually wise to perform a time change. We define $\sigma(t)$ by the equation
+\[
+ \sigma(0) = 0,\quad \d t = \frac{\d \sigma(t)}{(V_{\sigma(t)}^1)^2}.
+\]
+So the map
+\[
+ t \mapsto \tilde{B}_t = - \int_0^{\sigma(t)} \frac{1}{V_s^1} \;\d B_s
+\]
+is a continuous local martingale, with
+\[
+ [\tilde{B}]_t = \left[ - \int_0^{\sigma(t)} \frac{1}{V_s^1} \;\d B_s\right]_t = \int_0^{\sigma(t)} \frac{1}{(V_s^1)^2}\;\d s = t.
+\]
+So by the L\'evy characterization, we know that $\tilde{B}_t$ is a Brownian motion.
+
+Now let
+\[
+ \tilde{Z}_t = Z_{\sigma(t)}.
+\]
+Then we have
+\[
+ \d \tilde{Z}_t = \left[ \left(\frac{3}{2} - d\right) + \left(\frac{d - 1}{2}\right) \frac{V_{\sigma(t)}^r - V_{\sigma(t)}^1}{V_{\sigma(t)}^r}\right] \;\d t + \d \tilde{B}_t.
+\]
+In integral form, we get
+\begin{align*}
+ \tilde{Z}_t &= \tilde{Z}_0 + \tilde{B}_t + \left(\frac{3}{2} - d\right)t + \frac{d - 1}{2} \int_0^t \frac{V_{\sigma(s)}^r - V_{\sigma(s)}^1}{ V_{\sigma(s)}^r}\;\d s\\
+ &\geq \tilde{Z}_0 + \tilde{B}_t + \left(\frac{3}{2} - d\right)t.
+\end{align*}
+Now if $\kappa \geq 8$, then $d = 1 + \frac{4}{\kappa} \leq \frac{3}{2}$. So we have
+\[
+ \tilde{Z}_t \geq \tilde{Z}_0 + \tilde{B}_t.
+\]
+So we find that
+\[
+ \sup_t \tilde{Z}_t \geq \tilde{Z}_0 + \sup_t \tilde{B}_t = + \infty.
+\]
+Undoing the time change, this gives
+\[
+ \sup_{t < \tau_1} \frac{V_t^r - V_t^1}{V_t^1} = \infty.
+\]
+So $g(x, y) = 0$ for all $0 < x < y$ if $\kappa \geq 8$.
+
+Now if $\kappa \in (4, 8)$, we pick $\varepsilon > 0$ and set
+\[
+ r = 1 + \frac{\varepsilon}{2}.
+\]
+Then we have $\tilde{Z}_0 = \log (r - 1) = \log (\varepsilon/2)$. We will show that for small $\varepsilon$, we have $g(1, r) > 0$. In fact, it is always positive, which is to be shown on the example sheet.
+
+We let
+\[
+ \tau = \inf \{t > 0 : \tilde{Z}_t = \log \varepsilon\}.
+\]
+Then
+\begin{align*}
+ \tilde{Z}_{t \wedge \tau} &= \tilde{Z}_0 + \tilde{B}_{t \wedge \tau} + \left(\frac{3}{2} - d\right) t\wedge \tau + \frac{d - 1}{2} \int_0^{t \wedge \tau} \frac{V_{\sigma(s)}^r - V_{\sigma(s)}^1}{V_{\sigma(s)}^r}\;\d s\\
+ &\leq \tilde{Z}_0 + \tilde{B}_{t \wedge \tau} + \left(\frac{3}{2} - d \right) t \wedge \tau + \frac{d - 1}{2} \int_0^{t \wedge \tau} e^{\tilde{Z}_s}\;\d s\\
+ &\leq \tilde{Z}_0 + \tilde{B}_{t \wedge \tau} + \left(\frac{3}{2} - d \right) t \wedge \tau + \left(\frac{d - 1}{2}\right)(t \wedge \tau)\varepsilon\\
+ &= \tilde{Z}_0 + \tilde{B}_{t \wedge \tau} + \left[\left(\frac{3}{2} - d\right) + \left(\frac{d - 1}{2}\right)\varepsilon \right] (t \wedge \tau).
+\end{align*}
+We let
+\[
+ Z_t^*= \tilde{Z}_0 + \tilde{B}_{t} + \left[\frac{3}{2} - d + \frac{d - 1}{2}\varepsilon \right] t.
+\]
+We then have shown that
+\[
+ Z_{t \wedge \tau}^* \geq \tilde{Z}_{t \wedge \tau}.
+\]
+We assume that $\varepsilon > 0$ is such that $\left(\frac{3}{2} - d + \frac{d - 1}{2} \varepsilon\right) < 0$. Then $Z_t^*$ is a Brownian motion with a negative drift, starting from $\log \frac{\varepsilon}{2}$, so (by the second example sheet), we have
+\[
+ \P \left(\sup_{t \geq 0} Z_t^* < \log \varepsilon\right) > 0.
+\]
+So we know that
+\[
+ \P\left[ \sup_{t \geq 0} \tilde{Z}_t < \log \varepsilon \right] > 0.
+\]
+So we have
+\[
+ g\left(1, 1 + \frac{\varepsilon}{2}\right) > 0.
+\]
+This concludes the proof of the proposition.
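The fact quoted from the second example sheet follows from $\P[\sup_{t \geq 0}(B_t + \mu t) \geq x] = e^{2\mu x}$ for $\mu < 0$ and $x \geq 0$. A loose Monte Carlo check (fixed seed; the discretization slightly underestimates the supremum):

```python
import numpy as np

# Monte Carlo for P[sup_t (B_t + mu t) >= x] = exp(2 mu x), with mu < 0, x >= 0.
rng = np.random.default_rng(5)
mu, x, T, n, paths = -1.0, 0.5, 8.0, 8000, 4000
dt = T / n
pos = np.zeros(paths)
run_max = np.zeros(paths)
for _ in range(n):
    pos += mu * dt + rng.normal(0.0, np.sqrt(dt), size=paths)
    np.maximum(run_max, pos, out=run_max)
est = np.mean(run_max >= x)
assert abs(est - np.exp(2 * mu * x)) < 0.05
```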
+
+\subsubsection*{SLE on other domains}
+So far, we only defined $\SLE_\kappa$ in the upper half plane $\H$ from $0$ to $\infty$. But we can define SLE in really any simply connected domain. If $D \subseteq \C$ is a simply connected domain, and $x, y \in \partial D$ are distinct, then there exists a conformal transformation $\varphi: \H \to D$ with $\varphi(0) = x$ and $\varphi(\infty) = y$.
+
+An $\SLE_\kappa$ in $D$ from $x$ to $y$ is defined by setting it to be $\gamma = \varphi(\tilde{\gamma})$, where $\tilde{\gamma}$ is an $\SLE_\kappa$ in $\H$ from $0$ to $\infty$. On the second example sheet, we check that this definition is well-defined, i.e.\ it doesn't depend on the choice of the conformal transformation.
+
+\section{Scaling limit of critical percolation}\label{sec:percolation}
+The reason we care about SLE is that they come from scaling limits of certain discrete models. In this chapter, we will ``show'' that $\SLE_6$ corresponds to the scaling limit of critical percolation on the hexagonal lattice.
+
+Let $D \subseteq \C$ be simply connected, and $x, y \in \partial D$ distinct. Pick a hexagonal lattice in $D$ with hexagons of size $\varepsilon$ such that $x$ and $y$ are lattice points. We perform critical percolation on the hexagonal lattice, i.e.\ for each hexagon, we colour it black or white with probability $p = \frac{1}{2}$. We enforce the condition that the hexagons that intersect the clockwise arc of $\partial D$ from $x$ to $y$ are all black, and those along the counterclockwise arc are all white.
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[scale=0.15]
+ % seed 173
+ \foreach \i in {-7,...,7} {
+ \foreach \j in {-7,...,7} {
+ \rand
+ \draw [fill, fill opacity=\arabic{rand}] (\i*3, \j*1.732) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ \rand
+ \draw [fill, fill opacity=\arabic{rand}] (\i*3 + 1.5, \j*1.732 - 0.866) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ }
+ }
+ \foreach \j in {-7,...,7} {
+ \draw [fill] (-7*3, \j*1.732) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ \draw [fill=white] (7*3 + 1.5, \j*1.732 - 0.866) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ }
+
+ \foreach \i in {-7,...,0} {
+ \draw [fill] (\i*3, -8*1.732) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ \draw [fill] (\i*3 + 1.5, -8*1.732 - 0.866) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ \draw [fill] (\i*3, 8*1.732) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ \draw [fill] (\i*3 + 1.5, 8*1.732 - 0.866) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ }
+ \foreach \i in {1,...,7} {
+ \draw (\i*3, -8*1.732) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ \draw (\i*3 + 1.5, -8*1.732 - 0.866) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ \draw (\i*3, 8*1.732) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ \draw (\i*3 + 1.5, 8*1.732 - 0.866) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1);
+ }
+
+ \node [circ, morange] at (3, -13.866) {};
+ \node [below] at (3, -13.866) {$x$};
+ \node [circ, morange] at (2.5, 14.866) {};
+ \node [above] at (2.5, 14.866) {$y$};
+
+ \draw [morange, very thick] (3, -13.866) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(180:1) -- ++(240:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(240:1) -- ++(180:1) -- ++(240:1) -- ++(180:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1) -- ++(0:1) -- ++(300:1) -- ++(240:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- 
++(300:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(180:1) -- ++(240:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1) -- ++(0:1) -- ++(300:1) -- ++(240:1) -- ++(180:1) -- ++(240:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(300:1) -- ++(240:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(180:1) -- ++(120:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(0:1) -- ++(300:1) -- ++(0:1) -- ++(60:1) -- ++(120:1) -- ++(180:1) -- ++(120:1);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Then there exists a unique interface $\gamma_\varepsilon$ that connects $x$ to $y$ with the property that the hexagons on its left are black and those on its right are white. It was conjectured (and now proved by Smirnov) that the limit of the law of $\gamma_\varepsilon$ exists in distribution and is conformally invariant.
+
+This means that if $\tilde{D}$ is another simply connected domain, $\tilde{x}, \tilde{y} \in \partial \tilde{D}$ are distinct, and $\varphi: D \to \tilde{D}$ is a conformal transformation with $\varphi(x) = \tilde{x}$, $\varphi(y) = \tilde{y}$, then $\varphi(\gamma)$ is equal in distribution to the scaling limit of percolation in $\tilde{D}$ from $\tilde{x}$ to $\tilde{y}$.
+
+Percolation also satisfies a natural Markov property: if we condition on $\gamma_\varepsilon$ up to a given time $t$, then the rest of $\gamma_\varepsilon$ is a percolation exploration in the remaining domain. The reason for this is very simple --- the path only determines the colours of the hexagons right next to it, and the others are still distributed independently and uniformly at random.
+
+If we assume these two properties, then the limiting path $\gamma$ satisfies the conformal Markov property, and so must be an $\SLE_\kappa$. So the question is, what is $\kappa$?
+
+To figure out this $\kappa$, we observe that the scaling limit of percolation has a locality property, and we will later see that $\SLE_6$ is the only SLE that satisfies this locality property.
+
+To explain the locality property, fix a simply-connected domain $D$ in $\H$ (for simplicity), and assume that $0 \in \partial D$. Fixing a point $y \in \partial D$, we perform the percolation exploration as before. Then the resulting path would look exactly the same as if we performed percolation on $\H$ (with black boundary on $\R_{<0}$ and white boundary on $\R_{>0}$), up to the point we hit $\partial D \setminus \partial \H$. In other words, $\gamma$ doesn't feel the boundary conditions until it hits the boundary. This is known as \term{locality}.
+
+It should then be true that the corresponding $\SLE_\kappa$ should have an analogous property. To be precise, we want to find a value of $\kappa$ so that the following is true: If $\gamma$ is an $\SLE_\kappa$ in $\H$ from $0$ to $\infty$, run up until it first hits $\partial D \setminus \partial \H$, then $\psi(\gamma)$ is a (stopped) $\SLE_\kappa$ in $\H$ from $0$ to $\infty$ where $\psi: D \to \H$ is a conformal transformation with $\psi(0) = 0$, $\psi(y) = \infty$. This is the \term{locality property}. We will show that locality holds precisely for $\kappa = 6$.
+
+Suppose that $(A_t) \in \mathcal{A}$ has Loewner driving function $U_t$. We define
+\[
+ \tilde{A}_t = \psi(A_t).
+\]
+Then $\tilde{A}_t$ is a family of compact $\H$-hulls that are non-decreasing, locally growing, and $\tilde{A}_0 = \emptyset$. However, in general, this is not going to be parametrized by capacity. On the second example sheet, we show that this has half plane capacity
+\[
+ \tilde{a}(t) = \hcap(\tilde{A}_t) = \int_0^t 2 (\psi_s'(U_s))^2\;\d s,
+\]
+which should be somewhat believable, given that $\hcap(rA) = r^2 \hcap(A)$.
+
+$\tilde{A}_t$ has a ``driving function'' $\tilde{U}_t$ given by
+\[
+ \tilde{U}_t = \psi_t(U_t),\quad \psi_t = \tilde{g}_t \circ \psi \circ g_t^{-1},\quad g_t = g_{A_t}.
+\]
+We then have
+\[
+ \partial_t \tilde{g}_t (z) = \frac{\partial_t \tilde{a}_t}{\tilde{g}_t(z) - \tilde{U}_t},\quad \tilde{g}_0(z) = z.
+\]
+To see this, simply recall that if $A_t$ is $\gamma([0, t])$ for a curve $\gamma$, then $U_t = g_t(\gamma(t))$.
+
+To understand $\tilde{U}_t$, it is convenient to know something about $\psi_t$:
+\begin{prop}
+ The maps $(\psi_t)$ satisfy
+ \[
+ \partial_t \psi_t(z) = 2 \left(\frac{(\psi_t'(U_t))^2}{\psi_t(z) - \psi_t(U_t)} - \frac{\psi_t'(z)}{z - U_t}\right).
+ \]
+ In particular, at $z = U_t$, we have
+ \[
+ \partial_t \psi_t(U_t) = \lim_{z \to U_t} \partial_t \psi_t (z) = -3 \psi_t''(U_t).
+ \]
+\end{prop}
+
+\begin{proof}
+ These are essentially basic calculus computations.
+\end{proof}
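The limiting value $-3\psi_t''(U_t)$ can be verified symbolically: writing $a_k = \psi_t^{(k)}(U_t)$ and expanding $\psi_t$ to third order around $U_t$, the expression tends to $-3a_2$. A sympy sketch of this Taylor computation (assuming sympy is available):

```python
import sympy as sp

# Taylor computation behind the limit: with a_k = psi^{(k)}(U_t) and h = z - U_t,
#   psi(z) - psi(U_t) = a1*h + a2*h^2/2 + a3*h^3/6 + O(h^4),
#   psi'(z)           = a1 + a2*h + a3*h^2/2 + O(h^3).
h, a1, a2, a3 = sp.symbols('h a1 a2 a3', positive=True)
expr = 2 * (a1**2 / (a1*h + a2*h**2/2 + a3*h**3/6)
            - (a1 + a2*h + a3*h**2/2) / h)
limit = sp.limit(sp.together(expr), h, 0)
assert sp.simplify(limit + 3*a2) == 0  # the limit is -3 a2 = -3 psi''(U_t)
```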
+
+To get Loewner's equation in the right shape, we perform a time change
+\[
+ \sigma(t) = \inf \left\{u \geq 0: \int_0^u (\psi_s'(U_s))^2 \;\d s = t\right\}.
+\]
+Then we have
+\[
+ \partial_t \tilde{g}_{\sigma(t)} (z) = \frac{2}{\tilde{g}_{\sigma(t)} - \tilde{U}_{\sigma(t)}},\quad \tilde{g}_0(z) = z.
+\]
+
+It then remains to try to understand $\d \tilde{U}_{\sigma(t)}$. Note that so far what we said works for general $U_t$. If we put $U_t = \sqrt{\kappa} B_t$, where $B$ is a standard Brownian motion, then It\^o's formula tells us before the time change, we have
+\begin{align*}
+ \d \tilde{U}_t &= \d \psi_t(U_t) \\
+ &= \left(\partial_t \psi_t(U_t) + \frac{\kappa}{2} \psi_t'' (U_t)\right)\;\d t + \sqrt{\kappa} \psi_t'(U_t)\;\d B_t\\
+ &= \frac{\kappa - 6}{2} \psi_t''(U_t) \;\d t + \sqrt{\kappa} \psi_t'(U_t)\;\d B_t.
+\end{align*}
+After the time change, we get
+\[
+ \d \tilde{U}_{\sigma(t)} = \frac{\kappa - 6}{2} \frac{\psi_{\sigma(t)}'' (U_{\sigma(t)})}{\psi_{\sigma(t)}' (U_{\sigma(t)})^2} \;\d t + \sqrt{\kappa} \;\d \tilde{B}_t,
+\]
+where
+\[
+ \tilde{B}_t = \int_0^{\sigma(t)} \psi_s'(U_s) \;\d B_s
+\]
+is a standard Brownian motion by the L\'evy characterization and the definition of $\sigma(t)$.
+
+The point is that when $\kappa = 6$, the drift term goes away, and so
+\[
+ \tilde{U}_{\sigma(t)} = \sqrt{6} \tilde{B}_t.
+\]
+So $(\tilde{A}_{\sigma(t)})$ is an $\SLE_6$. Thus, we have proved that
+\begin{thm}
+ If $\gamma$ is an $\SLE_\kappa$, then $\psi(\gamma)$ is an $\SLE_\kappa$ up until hitting $\psi(\partial D \setminus \partial \H)$ if and only if $\kappa = 6$.
+\end{thm}
+
+So $\kappa = 6$ is the only possible $\SLE_\kappa$ which could be the limit of percolation.
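The Loewner evolution driven by $U_t = \sqrt{\kappa} B_t$ that underlies these computations can also be simulated directly, which is a useful sanity check on the formulas above. The following is a minimal sketch (the function name and step sizes are our own choices, not from the course): approximate the driving function by a piecewise-constant path, solve each constant-driving step by the explicit slit map, and recover the tip as $\gamma(t_n) = g_{t_n}^{-1}(U_{t_n})$.

```python
import numpy as np

def sle_trace(kappa, n_steps=400, dt=1e-3, seed=0):
    """Approximate the trace of chordal SLE_kappa in H.

    For a constant driving value xi on a step of length dt, the Loewner flow
    dg/dt = 2/(g - U) is solved by g(z) = xi + sqrt((z - xi)^2 + 4 dt).  We
    recover the tip gamma(t_n) = g_{t_n}^{-1}(U_{t_n}) by composing the
    inverse slit maps in reverse order, choosing the square-root branch with
    non-negative imaginary part so that we stay in the closed upper
    half-plane.  This naive scheme costs O(n_steps^2).
    """
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, np.sqrt(dt), n_steps)
    U = np.concatenate([[0.0], np.sqrt(kappa) * np.cumsum(steps)])
    trace = []
    for n in range(1, n_steps + 1):
        z = complex(U[n], 0.0)
        for k in range(n, 0, -1):
            r = np.sqrt(np.complex128((z - U[k - 1]) ** 2 - 4 * dt))
            if r.imag < 0:   # pick the branch mapping into the upper half-plane
                r = -r
            z = U[k - 1] + r
        trace.append(z)
    return np.array(trace)
```

Running this with $\kappa = 6$ or $\kappa = 8/3$ gives (discretized) candidates for the percolation and self-avoiding walk scaling limits respectively; the trace always stays in the closed upper half-plane by the choice of square-root branch.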
+
+\section{Scaling limit of self-avoiding walks}\label{sec:saw}
+We next think about self-avoiding walk (SAW).
+
+\begin{defi}[Self-avoiding walk]\index{self-avoiding walk}
+ Let $G = (V, E)$ be a graph with uniformly bounded degree, and pick $x \in V$ and $n \in \N$. The self-avoiding walk in $G$ starting from $x$ of length $n$ is the uniform measure on \emph{simple} paths in $G$ starting from $x$ of length $n$.
+\end{defi}
+Self-avoiding walks tend to be difficult to understand, since they usually don't have the Markov property. However, the scaling limit tends to be better behaved. Restricting to the case of $G = \Z^d$, it has been shown that for $d \geq 5$, the scaling limit is just Brownian motion (Hara and Slade).
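The difficulty is already visible at the level of counting: the number $c_n$ of self-avoiding walks of length $n$ on $\Z^2$ from the origin grows like $\mu^{n + o(n)}$, where $\mu \approx 2.638$ is the connective constant, and is only accessible by brute force for small $n$. A naive enumeration sketch (not from the course):

```python
# Count self-avoiding walks of length n on Z^2 starting at the origin,
# by depth-first enumeration; only feasible for small n.
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def count_saws(n):
    def extend(current, visited, remaining):
        if remaining == 0:
            return 1
        x, y = current
        total = 0
        for dx, dy in STEPS:
            nxt = (x + dx, y + dy)
            if nxt not in visited:      # the path must stay simple
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total
    return extend((0, 0), {(0, 0)}, n)

print([count_saws(n) for n in range(1, 6)])  # [4, 12, 36, 100, 284]
```

Drawing a \emph{uniform} sample from this set is correspondingly hard, which is one reason the conjectural scaling limit is so valuable.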
+
+In $d = 4$, the same is conjectured to be true. In $d = 3$, there is no conjecture for the continuous object which should describe its scaling limit. In this course, we are only interested in $d = 2$. In this case, the self-avoiding walk is conjectured to converge to $\SLE_{8/3}$ (Lawler--Schramm--Werner).
+
+In percolation, the property that allowed us to conclude that the scaling limit was $\SLE_6$ is the locality property. In self-avoiding walks, the special property is \emph{restriction}. This is a very simple concept --- if $G' = (V', E')$ is a subgraph of $G$ with $x \in V'$, then the self-avoiding walk on $G$ conditioned to stay in $G'$ is a self-avoiding walk on $G'$. Indeed, a uniform measure always restricts to the uniform measure on a smaller subset. It turns out $\SLE_{8/3}$ is the only $\SLE$ which satisfies a continuum version of restriction. We will only show one direction, namely that $\SLE_{8/3}$ satisfies the restriction property, but the other direction is also known.
+
+\subsubsection*{Brownian excursion}
+When studying the restriction property, we will encounter another type of process, known as \emph{Brownian excursion}. Fix a simply connected domain $D \subseteq \C$ and distinct points $x, y \in \partial D$. Then roughly speaking, a Brownian excursion is a Brownian motion starting at $x$ conditioned to leave $D$ at $y$. Of course, we cannot interpret these words literally, since we want to condition on a zero probability event.
+
+To make this rigorous, we first use conformal invariance to say we only have to define this for $\H$ with $x = 0$, $y = \infty$.
+
+To construct it in $\H$, we start with a complex Brownian motion $B = B^1 + i B^2$, with $B^1_0 = 0$ and $B_0^2 = \varepsilon > 0$. We then condition $B$ on the event that $B^2$ hits $R \gg 0$ before hitting $0$. This is a positive probability event. We then take limits $R \to \infty$ and $\varepsilon \to 0$. On the second example sheet, we will show that this makes sense, and the result is called \term{Brownian excursion}.
+
+It turns out the limiting object is pretty simple. It is given by
+\[
+ \hat{B} = (\hat{B}^1, \hat{B}^2)
+\]
+where $\hat{B}^1$ is a standard Brownian motion and $\hat{B}^2 \sim \BES^3$, and the two are independent.
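This description makes the excursion easy to simulate: realize $\hat{B}^2$ as the modulus of a $3$-dimensional Brownian motion started at $0$ (a $\BES^3$ process), and pair it with an independent one-dimensional Brownian motion. A discretized sketch (step sizes are arbitrary choices):

```python
import numpy as np

def brownian_excursion(n_steps=1000, dt=1e-3, seed=0):
    """Sample a discretized Brownian excursion in H from 0 towards infinity.

    Real part: a standard one-dimensional Brownian motion.
    Imaginary part: an independent BES(3) process, realized as the modulus
    of a 3-dimensional Brownian motion started at the origin.
    """
    rng = np.random.default_rng(seed)
    b1 = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))            # real part
    b3 = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_steps, 3)), axis=0)
    bes3 = np.linalg.norm(b3, axis=1)                                # BES(3)
    return b1 + 1j * bes3
```

The imaginary part is almost surely strictly positive for $t > 0$ and tends to infinity, reflecting the transience of $\BES^3$: the excursion leaves $\H$ only "at $\infty$".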
+
+The key property about Brownian excursion we will need is the following:
+\begin{prop}
+ Suppose $A$ is a compact $\H$-hull and $g_A$ is as usual. If $x \in \R \setminus A$, then
+ \[
+ \P_x[\hat{B}[0, \infty) \cap A = \emptyset] = g_A'(x).
+ \]
+\end{prop}
+
+\begin{proof}
+ This is a straightforward computation. Take $z = x + i \varepsilon$ with $\varepsilon > 0$, and let $B_t$ be a Brownian motion. Define
+ \[
+ \sigma_R = \inf \{t \geq 0: \im(B_t) = R\}.
+ \]
+ Then the desired probability is
+ \[
+ \lim_{\varepsilon \to 0} \lim_{R \to \infty} \P_z [B[0, \sigma_R] \cap A = \emptyset \mid B[0, \sigma_R] \cap \R = \emptyset].
+ \]
+ By Bayes' theorem, this is equal to
+ \[
+ \lim_{\varepsilon \to 0}\lim_{R \to \infty} \frac{\P_z[B[0, \sigma_R] \cap (A \cup \R) = \emptyset]}{\P_z[B[0, \sigma_R] \cap \R = \emptyset]}.
+ \]
+ We understand the numerator and denominator separately. The gambler's ruin estimate says the denominator is just $\varepsilon/R$, and to bound the numerator, recall that for $z \in \H \setminus A$, we have
+ \[
+ |g_A(z) - z| \leq 3 \rad(A).
+ \]
+ Thus, using conformal invariance, we can bound
+ \begin{multline*}
+ \P_{g_A(z)} [B[0, \sigma_{R + 3\rad(A)}] \cap \R = \emptyset] \leq \P_z[B[0, \sigma_R] \cap (A \cup \R) = \emptyset]\\
+ \leq \P_{g_A(z)}[B[0, \sigma_{R - 3 \rad(A)}] \cap \R = \emptyset].
+ \end{multline*}
+ So we get
+ \[
+ \frac{\im(g_A(z))}{R + 3 \rad(A)} \leq \text{numerator} \leq \frac{\im(g_A(z))}{R - 3 \rad(A)}.
+ \]
+ Combining, we find that the desired probability is
+ \[
+ \lim_{\varepsilon \to 0} \frac{\im(g_A(x + i \varepsilon))}{\varepsilon} = g_A'(x).\qedhere
+ \]
+\end{proof}
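As a concrete illustration, for the vertical slit $A = [0, i]$ the mapping-out function is known in closed form: $g_A(z) = \sqrt{z^2 + 1}$ (with the branch mapping $\H \setminus A$ to $\H$), which satisfies $g_A(z) = z + \frac{1}{2z} + \cdots$ near $\infty$. The proposition then predicts that the excursion avoids the slit with probability $g_A'(x) = x/\sqrt{x^2 + 1}$ for $x > 0$. A quick numeric check of these facts:

```python
import numpy as np

def g_slit(z):
    """Mapping-out function of the vertical slit [0, i].

    The principal square root is the correct branch for Re(z) > 0;
    for Re(z) < 0 one must take the negative root instead.
    """
    return np.sqrt(np.complex128(z * z + 1))

tip = g_slit(1j)                    # the tip i is mapped to the driving point 0
x = 2.0
fd = (g_slit(x + 1e-6) - g_slit(x - 1e-6)).real / 2e-6   # finite-difference g'
exact = x / np.sqrt(x * x + 1)      # predicted avoidance probability at x = 2
far = g_slit(1e6)                   # hydrodynamic normalization: g(z) ~ z
```

Note that the avoidance probability tends to $1$ as $x \to \infty$ and to $0$ as $x \to 0^+$, matching the intuition that an excursion started next to the slit can hardly avoid it.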
+
+\subsubsection*{The restriction property}
+We now return to understanding the restriction property. We assume that $\kappa \leq 4$, since this is the range of $\kappa$-values so that $\SLE_\kappa$ is simple.
+
+Recall that $\mathcal{Q}$ is the set of all compact $\H$-hulls. We define
+\begin{align*}
+ \mathcal{Q}_+ &= \{A \in \mathcal{Q} : \bar{A} \cap (-\infty, 0] = \emptyset\}\\
+ \mathcal{Q}_- &= \{A \in \mathcal{Q} : \bar{A} \cap [0, \infty) = \emptyset\}
+\end{align*}
+For $A \in \mathcal{Q}_{\pm} = \mathcal{Q}_+ \cup \mathcal{Q}_-$, we define $\psi_A: \H \setminus A \to \H$ by
+\[
+ \psi_A(z) = g_A(z) - g_A(0).
+\]
+This is the version of $g_A$ that fixes $0$, which is what we need. This is the unique conformal transformation with
+\[
+ \psi_A(0) = 0,\quad \lim_{z \to \infty} \frac{\psi_A(z)}{z} = 1.
+\]
+
+We will need the following fact about SLE:
+\begin{fact}
+ $\SLE_\kappa$ is transient, i.e.\ if $\gamma$ is an $\SLE_\kappa$ in $\H$ from $0$ to $\infty$, then
+ \[
+ \lim_{t \to \infty} \gamma(t) = \infty\text{ almost surely}.
+ \]
+\end{fact}
+This is not very difficult to prove, but the proof is uninteresting and we have limited time. Since $\SLE_\kappa$ is simple for $\kappa \leq 4$ and is also transient, it follows that for all $A \in \mathcal{Q}_{\pm}$,
+\[
+ 0 < \P[\gamma[0, \infty) \cap A = \emptyset] < 1.
+\]
+This is useful because we want to condition on this event. Write
+\[
+ V_A = \{\gamma[0, \infty) \cap A = \emptyset\}.
+\]
+\begin{defi}[Restriction property]\index{restriction property}
+ We say an $\SLE_\kappa$ satisfies the restriction property if whenever $\gamma$ is an $\SLE_\kappa$, for any $A \in \mathcal{Q}_{\pm}$, the law of $\psi_A(\gamma)$ conditional on $V_A$ is that of an $\SLE_\kappa$ curve (for the same $\kappa$).
+\end{defi}
+
+Observe that the law of $\gamma$ is determined by the probabilities $A \mapsto \P[V_A]$ for all $A \in \mathcal{Q}_{\pm}$.
+
+\begin{lemma}
+ Suppose there exists $\alpha > 0$ so that
+ \[
+ \P[V_A] = (\psi_A'(0))^\alpha
+ \]
+ for all $A \in \mathcal{Q}_{\pm}$. Then $\SLE_\kappa$ satisfies the restriction property.
+\end{lemma}
+
+\begin{proof}
+ Suppose the hypothesis holds, and let $A, B \in \mathcal{Q}_{\pm}$. Then we have that
+ \begin{align*}
+ \P[\psi_A(\gamma[0, \infty)) \cap B = \emptyset \mid V_A] &= \frac{\P[\gamma[0, \infty) \cap (\psi_A^{-1}(B) \cup A) = \emptyset]}{\P[\gamma[0, \infty) \cap A = \emptyset]}\\
+ &= \frac{(\psi_{(\psi_A^{-1}(B) \cup A)}'(0))^\alpha}{(\psi_A'(0))^\alpha}\\
+ &= \frac{(\psi_B'(0))^\alpha (\psi_A'(0))^\alpha}{(\psi_A'(0))^\alpha}\\
+ &= (\psi_B'(0))^\alpha\\
+ &= \P[V_B],
+ \end{align*}
+ where we used that
+ \[
+ \psi_{\psi_A^{-1}(B) \cup A} = \psi_B \circ \psi_A.
+ \]
+ So the law of $\psi_A(\gamma)$ given $V_A$ is the law of $\gamma$.
+\end{proof}
+
+We now have to show that $\SLE_{8/3}$ satisfies the condition in the lemma. Let $\mathcal{F}_t$ be the filtration of $U_t = \sqrt{\kappa} B_t$. Then
+\[
+ \tilde{M}_t = \P[V_A \mid \mathcal{F}_t]
+\]
+is a bounded martingale with $\tilde{M}_0 = \P[V_A]$. Also,
+\[
+ \tilde{M}_t \to \mathbf{1}_{V_A}
+\]
+by the martingale convergence theorem. Also, if we define the stopping time
+\[
+ \tau = \inf \{t \geq 0: \gamma(t) \in A\},
+\]
+then we get
+\[
+ \tilde{M}_t = \P[V_A \mid \mathcal{F}_t] = \P[V_A \mid \mathcal{F}_t] \mathbf{1}_{\{t < \tau\}} = \P[V_{g_t(A) - g_t(0)}] \mathbf{1}_{\{t < \tau\}}
+\]
+by the conformal Markov property.
+
+Observe that if $M_t$ is another bounded $\mathcal{F}_t$-martingale with the property that $M_t \to \mathbf{1}_{V_A}$ as $t \to \infty$, then $M_t = \tilde{M}_t$ for all $t \geq 0$, since
+\[
+ M_t = \E[\mathbf{1}_{V_A} \mid \mathcal{F}_t] = \tilde{M}_t.
+\]
+Given what we were aiming for, we consider
+\[
+ M_t = (\psi_{g_t(A) - g_t(0)}'(0))^\alpha \mathbf{1}_{\{t < \tau\}}.
+\]
+\begin{lemma}
+ $M_{t \wedge \tau}$ is a continuous martingale if
+ \[
+ \kappa = \frac{8}{3},\quad \alpha = \frac{5}{8}.
+ \]
+\end{lemma}
+These numbers are just what comes out when we do the computations.
+
+\begin{proof}
+ Recall that we showed that $g'_A$ is a probability involving Brownian excursion, and in particular is bounded in $[0, 1]$. So the same is true for $\psi'_A$, and hence $M_{t \wedge \tau}$. So it suffices to show that $M_{t \wedge \tau}$ is a continuous local martingale. Observe that
+ \[
+ M_{t \wedge \tau} = (\psi_{g_{t \wedge \tau}(A) - g_{t \wedge \tau}(0)}'(0))^\alpha
+ \]
+ So if we define
+ \[
+ N_t = (\psi_{g_t(A) - g_t(0)}'(0))^\alpha,
+ \]
+ then it suffices to show that $N_t$ is a continuous local martingale by optional stopping. We write
+ \[
+ \psi_t = \tilde{g}_t \circ \psi_A \circ g_t^{-1},
+ \]
+ where $\tilde{g}_t = g_{\psi_A(\gamma(0, t])}$. We then have
+ \[
+ N_t = (\psi_t'(U_t))^\alpha.
+ \]
+ In the example sheet, we show that
+ \[
+ \partial_t \psi_t' (U_t) = \frac{\psi_t''(U_t)^2}{2\psi_t'(U_t)} - \frac{4}{3} \psi_t'''(U_t).
+ \]
+ By It\^o's formula, we get
+ \begin{multline*}
+ \d N_t = \alpha N_t \left[\frac{(\alpha - 1)\kappa + 1}{2} \frac{\psi_t''(U_t)^2}{\psi_t'(U_t)^2} + \left(\frac{\kappa}{2} - \frac{4}{3}\right) \frac{\psi_t'''(U_t)}{\psi_t'(U_t)}\right]\;\d t \\
+ + \alpha N_t \frac{\psi_t''(U_t)}{\psi_t'(U_t)} \cdot \sqrt{\kappa} \;\d B_t.
+ \end{multline*}
+ Picking $\kappa = \frac{8}{3}$ ensures the second $\d t$ term vanishes, and then setting $\alpha = \frac{5}{8}$ kills the first $\d t$ term as well, and we are done.
+\end{proof}
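That the two $\d t$-coefficients vanish simultaneously only at this pair of values can be checked with exact rational arithmetic:

```python
from fractions import Fraction

kappa = Fraction(8, 3)
alpha = Fraction(5, 8)

# coefficient of psi''(U_t)^2 / psi'(U_t)^2 in the drift of dN_t
c1 = ((alpha - 1) * kappa + 1) / 2
# coefficient of psi'''(U_t) / psi'(U_t) in the drift of dN_t
c2 = kappa / 2 - Fraction(4, 3)

print(c1, c2)  # both 0
```

Indeed, $c_2 = 0$ forces $\kappa = 8/3$, and then $c_1 = 0$ forces $\alpha = 5/8$.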
+
+We next want to establish that $M_t \to \mathbf{1}_{V_A}$ as $t \to \infty$.
+
+Recall that $M_t$ is the probability that a Brownian excursion on $\H \setminus \gamma[0, t]$ from $\gamma(t)$ to infinity does not hit $A$. So we would expect this to be true, since on $V_A$, as $t$ increases, the tip of the SLE gets further and further away from $A$, and so it is difficult for a Brownian excursion to hit $A$; conversely, on $V_A^c$, the curve eventually hits $A$, and then we are dead.
+
+By scaling, we can assume that
+\[
+ \sup \{\im(\omega): \omega \in A\} = 1.
+\]
+It is convenient to define the stopping times
+\[
+ \sigma_r = \inf \{t \geq 0 : \im(\gamma(t)) = r\}.
+\]
+Note that $\sigma_r < \infty$ almost surely for all $r > 0$ since $\SLE_{8/3}$ is transient.
+
+\begin{lemma}
+ $M_{t \wedge \tau} \to 1$ on $V_A$ as $t \to \infty$.
+\end{lemma}
+
+\begin{proof}
+ Let $\hat{B}$ be a Brownian excursion in $\H \setminus \gamma[0, \sigma_r]$ from $\gamma(\sigma_r)$ to $\infty$. Let $B$ be a complex Brownian motion, and
+ \[
+ \tau_R = \inf\{t \geq 0: \im(B_t) = R\},\quad z = \gamma(\sigma_r) + i\varepsilon.
+ \]
+ Then the probability that $\hat{B}$ hits $A$ is
+ \[
+ 1 - \psi_{\sigma_r}'(U_{\sigma_r}) = \lim_{\varepsilon \to 0} \lim_{R \to \infty} \frac{\P_z [B[0, \tau_R] \subseteq \H \setminus \gamma[0, \sigma_r], B[0, \tau_R] \cap A \not= \emptyset]}{\P_z[B[0, \tau_R] \subseteq \H \setminus \gamma[0, \sigma_r]]}.\tag{$*$}
+ \]
+ We will show that this expression is $\leq C r^{-1/2}$ for some constant $C > 0$. Then we know that $M_{\sigma_r \wedge \tau} \to 1$ as $r \to \infty$ on $V_A$. This is convergence along a subsequence, but since we already know that $M_{t \wedge \tau}$ converges, this is enough.
+
+ We first tackle the denominator, which we want to bound from below. The idea is to bound the probability that the Brownian motion reaches the line $\im(z) = r + 1$ without hitting $\R \cup \gamma[0, \sigma_r]$. Afterwards, the gambler's ruin estimate tells us the probability of reaching $\im(z) = R$ without going below the $\im(z) = r$ line is $\frac{1}{R - r}$.
+
+ In fact, we shall consider the box $S = [-1, 1]^2 + \gamma(\sigma_r)$ of side length $2$ centered at $\gamma(\sigma_r)$. Let $\eta$ be the first time $B$ leaves $S$, and we want this to leave via the top edge $\ell$. By symmetry, if we started right at $\gamma(\sigma_r)$, then the probability of leaving at $\ell$ is exactly $\frac{1}{4}$. Thus, if we are at $z = \gamma(\sigma_r) + i\varepsilon$, then the probability of leaving via $\ell$ is $> \frac{1}{4}$.
+
+ What we would want to show is that
+ \[
+ \P_z[B(\eta) \in \ell \mid B[0, \eta] \cap \gamma[0, \sigma_r] = \emptyset] > \frac{1}{4}.\tag{$\dagger$}
+ \]
+ We then have the estimate
+ \[
+ \text{denominator} \geq \frac{1}{4}\cdot \P_z[B[0, \eta] \cap \gamma[0, \sigma_r] = \emptyset] \cdot \frac{1}{R - r}.
+ \]
+ Intuitively, $(\dagger)$ must be true, because $\gamma[0, \sigma_r]$ lies below $\im(z) = r$, and so if $B[0, \eta]$ doesn't hit $\gamma[0, \sigma_r]$, then it is more likely to go upwards. To make this rigorous, we write
+ \begin{align*}
+ \frac{1}{4} &< \P_z[B(\eta) \in \ell]\\
+ &= \P_z[B(\eta) \in \ell \mid B[0, \eta] \cap \gamma[0, \sigma_r] = \emptyset]\; \P[B[0, \eta] \cap \gamma[0, \sigma_r] = \emptyset]\\
+ &\,+\P_z[B(\eta) \in \ell \mid B[0, \eta] \cap \gamma[0, \sigma_r] \not= \emptyset]\;\P[ B[0, \eta] \cap \gamma[0, \sigma_r] \not= \emptyset]
+ \end{align*}
+ To prove $(\dagger)$, it suffices to observe that the first factor of the second term is $\leq \frac{1}{4}$, which follows from the strong Markov property, since $\P_w[B(\eta) \in \ell] \leq \frac{1}{4}$ whenever $\im(w)\leq r$, which in particular is the case when $w \in \gamma[0, \sigma_r]$.
+
+ To bound the numerator, we use the strong Markov property and the Beurling estimate to get
+ \[
+ \P_z[B\text{ hits $A$ without hitting }\R \cup \gamma[0, \sigma_r]] \leq \P_z[B[0, \eta] \cap \gamma[0, \sigma_r] = \emptyset] \cdot C r^{-1/2}.
+ \]
+ Combining, we know the numerator in $(*)$ is
+ \[
+ \leq \frac{1}{R} \cdot C r^{-1/2} \cdot \P_z[B[0, \eta] \cap \gamma[0, \sigma_r] = \emptyset].
+ \]
+ These together give the result.
+\end{proof}
+
+\begin{lemma}
+ $M_{t \wedge \tau} \to 0$ as $t \to \infty$ on $V_A^c$.
+\end{lemma}
+
+This has a ``shorter'' proof, because we outsource a lot of the work to the second example sheet.
+\begin{proof}
+ By the example sheet, we may assume that $A$ is bounded by a smooth, simple curve $\beta: (0, 1) \to \H$.
+
+ Note that $\gamma(\tau) = \beta(s)$ for some $s \in (0, 1)$. We need to show that
+ \[
+ \lim_{t \to \tau} \psi_t'(U_t) = 0.
+ \]
+ For $m \in \N$, let
+ \[
+ t_m = \inf \left\{t \geq 0 : |\gamma(t) - \beta(s)| = \frac{1}{m}\right\}.
+ \]
+ Since $\beta$ is smooth, there exists $\delta > 0$ so that
+ \[
+ \ell = [\beta (s), \beta(s) + \delta \mathbf{n}] \subseteq A,
+ \]
+ where $\mathbf{n}$ is the unit inward pointing normal at $\beta(s)$. Let
+ \[
+ L_t = g_t(\ell) - U_t.
+ \]
+ Note that a Brownian motion starting from a point on $\ell$ has a uniformly positive chance of exiting $\H \setminus \gamma[0 ,t_m]$ on the left side of $\gamma[0, t_m]$ and on the right side as well.
+
+ On the second example sheet, we see that this implies that
+ \[
+ L_{t_m} \subseteq \{w : \im (w) \geq a |\Re(w)|\}
+ \]
+for some $a > 0$, using the conformal invariance of Brownian motion. Intuitively, this is because after applying $g_t - U_t$, we have uniformly positive probability of exiting via the positive or the negative real axis, and so we cannot be too far away in one direction.
+
+ Again by the second example sheet, the Brownian excursion in $\H$ from $0$ to $\infty$ hits $L_{t_m}$ with probability $\to 1$ as $m \to \infty$.
+\end{proof}
+
+We thus conclude
+\begin{thm}
+ $\SLE_{8/3}$ satisfies the restriction property. Moreover, if $\gamma \sim \SLE_{8/3}$, then
+ \[
+ \P[\gamma[0, \infty) \cap A = \emptyset] = (\psi_A'(0))^{5/8}.\fakeqed
+ \]
+\end{thm}
+
+There is a rather surprising consequence of this computation. Take $\gamma_1, \ldots, \gamma_8$ to be independent $\SLE_{8/3}$'s. Then we have
+\[
+ \P[\gamma_j[0, \infty) \cap A = \emptyset\text{ for all }j] = (\psi_A'(0))^5.
+\]
+Note that this is the same as the probability that the hull of $\gamma_1, \ldots, \gamma_8$ does not intersect $A$, where the hull is the union of the $\gamma_j$'s together with the bounded components of $\H \setminus \bigcup_j \gamma_j$.
+
+In the same manner, if $\hat{B}_1, \ldots, \hat{B}_5$ are independent Brownian excursions, then
+\[
+ \P[\hat{B}_j[0, \infty) \cap A = \emptyset\text{ for all }j] = (\psi_A'(0))^5.
+\]
+Thus, the hull of $\gamma_1, \ldots, \gamma_8$ has the same distribution as the hull of $\hat{B}_1, \ldots, \hat{B}_5$.
+
+Moreover, if we take a boundary point of the hull of, say, $\gamma_1, \ldots, \gamma_8$, then we would expect it to belong to just one of the $\SLE_{8/3}$'s. So the boundary of the hull of $\gamma_1, \ldots, \gamma_8$ looks locally like an $\SLE_{8/3}$, and the same can be said for the hull of $\hat{B}_1, \ldots, \hat{B}_5$. Thus, we conclude that the boundary of a Brownian excursion looks ``locally'' like $\SLE_{8/3}$.
+
+\section{The Gaussian free field}\label{sec:gff}
+We end by discussing the Gaussian free field, which we can think of as a two-dimensional analogue of Brownian motion, i.e.\ a random surface. We will show that the level curves of the Gaussian free field are $\SLE_4$s.
+
+To define the Gaussian free field, we have to do some analysis.
+\begin{notation}\leavevmode
+ \begin{itemize}
+ \item $C^\infty$ is the space of infinitely differentiable functions on $\C$.
+ \item $C_0^\infty$ is the space of functions in $C^\infty$ with compact support.
+ \item If $D$ is a domain, $C_0^\infty(D)$ is the functions in $C_0^\infty$ supported in $D$.
+ \end{itemize}
+\end{notation}
+
+\begin{defi}[Dirichlet inner product]\index{Dirichlet inner product}
+ Let $f, g \in C_0^\infty$. The \emph{Dirichlet inner product} of $f, g$ is
+ \[
+ (f, g)_\nabla = \frac{1}{2\pi}\int \nabla f(x) \cdot \nabla g(x)\;\d x.
+ \]
+\end{defi}
+This defines an inner product on $C_0^\infty$. If $D \subseteq \C$ is a non-trivial simply-connected domain, i.e.\ not $\emptyset$ or $\C$, we can define
+
+\begin{defi}[$H_0^1(D)$]\index{$H_0^1(D)$}
+ We write $H_0^1(D)$ for the Hilbert space completion of $C_0^\infty(D)$ with respect to $(\ph, \ph)_{\nabla}$.
+\end{defi}
+Elements of $H_0^1(D)$ can be thought of as functions well-defined up to a null set. These functions need not be continuous (i.e.\ need not have a continuous representative), and in particular need not be genuinely differentiable, but they have ``weak derivatives''.
+
+We will need the following key properties of $H_0^1(D)$:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Conformal invariance: Suppose $\varphi: D \to \tilde{D}$ is a conformal transformation, and $f, g \in C_0^\infty(D)$. Then
+ \[
+ (f, g)_\nabla = (f \circ \varphi^{-1}, g \circ \varphi^{-1})_\nabla.
+ \]
+ In other words, the Dirichlet inner product is conformally invariant.
+
+ Thus, $\varphi^*: H_0^1(D) \to H_0^1(\tilde{D})$ given by $f \mapsto f \circ \varphi^{-1}$ is an isomorphism of Hilbert spaces.
+ \item Inclusion: Suppose $U \subseteq D$ is open. If $f \in C_0^\infty(U)$, then $f \in C_0^\infty(D)$. Therefore the inclusion map $i: H_0^1(U) \to H_0^1(D)$ is well-defined and associates $H_0^1(U)$ with a subspace of $H_0^1(D)$. We write the image as $H_{\mathrm{supp}}(U)$.
+ \item Orthogonal decomposition: If $U \subseteq D$ is open, let
+ \[
+ H_{\mathrm{harm}}(U) = \{f \in H_0^1(D): f\text{ is harmonic on }U\}.
+ \]
+ Then
+ \[
+ H_0^1(D) = H_{\mathrm{supp}}(U) \oplus H_{\mathrm{harm}}(U)
+ \]
+ is an orthogonal decomposition of $H_0^1(D)$. This is going to translate to a Markov property of the Gaussian free field.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ Conformal invariance is a routine calculation, and inclusion does not require proof. To prove orthogonality, suppose $f \in H_{\mathrm{supp}}(U)$ and $g \in H_{\mathrm{harm}}(U)$. Then
+ \[
+ (f, g)_\nabla = \frac{1}{2\pi} \int \nabla f(x) \cdot \nabla g(x) \;\d x = -\frac{1}{2\pi} \int f(x) \Delta g(x)\;\d x = 0,
+ \]
+ since $f$ is supported on $U$ and $\Delta g$ is supported outside of $U$.
+
+ To prove that they span, suppose $f \in H_0^1(D)$, and let $f_0$ be the orthogonal projection of $f$ onto $H_{\mathrm{supp}}(U)$. Let $g_0 = f - f_0$. We want to show that $g_0$ is harmonic on $U$. This would be a straightforward manipulation if we could apply $\Delta$, but there is no guarantee that these functions are smooth.
+
+ We shall show that $g_0$ is weakly harmonic, and then it is a standard analysis result (which is also on the example sheet) that $g_0$ is in fact genuinely harmonic.
+
+ Suppose $\varphi \in C_0^\infty(U)$. Then since $g_0 \perp H_{\mathrm{supp}}(U)$, we have
+ \[
+ 0 = (g_0, \varphi)_\nabla = \frac{1}{2\pi} \int \nabla g_0(x) \cdot \nabla \varphi(x)\;\d x = - \frac{1}{2\pi} \int g_0(x) \Delta \varphi(x)\;\d x.
+ \]
+ This implies $g_0$ is $C^\infty$ on $U$ and harmonic.
+\end{proof}
+
+We will define the Gaussian free field to be a Gaussian taking values in $H_0^1$. To make this precise, we first understand how Gaussian random variables on $\R^n$ work.
+
+Observe that if $\alpha_1, \ldots, \alpha_n$ are iid $N(0, 1)$ random variables, and $e_1, \ldots, e_n$ is the standard basis of $\R^n$, then
+\[
+ h = \alpha_1 e_1 + \cdots + \alpha_n e_n
+\]
+is a standard $n$-dimensional Gaussian random variable.
+
+If $x = \sum x_j e_j \in \R^n$, then
+\[
+ (h, x) = \sum_{j = 1}^n \alpha_j x_j \sim N(0, \|x\|^2).
+\]
+Moreover, if $x, y \in \R^n$, then $(h, x)$ and $(h, y)$ are jointly Gaussian with covariance $(x, y)$.
+
+Thus, we can associate with $h$ a \emph{family} of Gaussian random variables $(h, x)$ indexed by $x \in \R^n$ with mean zero and covariance given by the inner product on $\R^n$. This is an example of a \term{Gaussian Hilbert space}.
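In coordinates this is easy to check empirically; a quick Monte Carlo sketch (the vectors and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, -1.0, 0.0])

# each row of alpha is one sample of (alpha_1, ..., alpha_n), iid N(0, 1)
alpha = rng.normal(size=(200_000, 3))
hx = alpha @ x   # samples of (h, x)
hy = alpha @ y   # samples of (h, y)

print(hx.var())           # close to ||x||^2 = 14
print(np.mean(hx * hy))   # close to (x, y) = 0
```

The empirical variance of $(h, x)$ is close to $\|x\|^2$, and the empirical covariance of $(h, x)$ and $(h, y)$ is close to $(x, y)$, as the Gaussian Hilbert space structure predicts.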
+
+We now just do the same for the infinite-dimensional vector space $H_0^1(D)$. One can show that this is separable, and so we can pick an orthonormal basis $(f_n)$. Then the Gaussian free field $h$ on $D$ is defined by
+\[
+ h = \sum_{j = 1}^\infty \alpha_j f_j,
+\]
+where the $\alpha_j$'s are iid $N(0, 1)$ random variables. Thus, if $f \in H_0^1(D)$, then
+\[
+ (h, f)_\nabla \sim N(0, \|f\|_\nabla^2).
+\]
+More generally, if $f, g \in H_0^1(D)$, then $(h, f)_\nabla$ and $(h, g)_\nabla$ are jointly Gaussian with covariance $(f, g)_\nabla$. Thus, the Gaussian free field is a family of Gaussian variables $(h, f)_\nabla$ indexed by $f \in H_0^1(D)$ with mean zero and covariance $(\ph, \ph)_\nabla$.
+
+We can't actually quite make this definition, because the sum $h = \sum_{j = 1}^\infty \alpha_j f_j$ does not converge in $H_0^1(D)$. So $h$ is not really a function, but a distribution. However, this difference usually does not matter.
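A finite-dimensional analogue that genuinely converges is the \emph{discrete} GFF on an $n \times n$ grid with zero boundary values: expand in the explicit sine eigenbasis of the discrete Dirichlet Laplacian, with independent $N(0,1)$ coefficients scaled by $\lambda^{-1/2}$. A sketch (normalization constants such as $2\pi$ are dropped):

```python
import numpy as np

def discrete_gff(n, seed=0):
    """Sample a discrete Gaussian free field on an n x n grid, zero boundary.

    The eigenvectors of the 1-d discrete Dirichlet Laplacian are sines,
    v_k(x) = sqrt(2/(n+1)) sin(pi k x/(n+1)), with eigenvalues
    lambda_k = 2 - 2 cos(pi k/(n+1)); 2-d eigenvalues are sums of 1-d ones.
    """
    rng = np.random.default_rng(seed)
    k = np.arange(1, n + 1)
    lam1 = 2 - 2 * np.cos(np.pi * k / (n + 1))
    x = np.arange(1, n + 1)
    V = np.sqrt(2 / (n + 1)) * np.sin(np.pi * np.outer(x, k) / (n + 1))
    lam = lam1[:, None] + lam1[None, :]       # eigenvalues of the 2-d Laplacian
    alpha = rng.normal(size=(n, n))           # iid N(0, 1) mode coefficients
    return V @ (alpha / np.sqrt(lam)) @ V.T   # h = sum_j alpha_j f_j / sqrt(lambda_j)
```

By construction its covariance is the discrete Green's function $(-\Delta)^{-1}$; as $n \to \infty$ the field becomes rougher, mirroring the fact that the continuum limit is only a distribution.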
+
+We can translate the properties of $H_0^1$ into analogous properties of the Gaussian free field.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If $\varphi: D \to \tilde{D}$ is a conformal transformation and $h$ is a Gaussian free field on $D$, then $h \circ \varphi^{-1}$ is a Gaussian free field on $\tilde{D}$.
+ \item Markov property: If $U \subseteq D$ is open, then we can write $h = h_1 + h_2$, with $h_1$ and $h_2$ independent, where $h_1$ is a Gaussian free field on $U$ and $h_2$ is harmonic on $U$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Clear.
+ \item Take $h_1$ to be the projection onto $H_{\mathrm{supp}}(U)$. This works since we can take the orthonormal basis $(f_n)$ to be the union of an orthonormal basis of $H_{\mathrm{supp}}(U)$ plus an orthonormal basis of $H_{\mathrm{harm}}(U)$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Often, we would rather think about the $L^2$ inner product of $h$ with something else. Observe that integration by parts tells us
+\[
+ (h, f)_\nabla = \frac{-1}{2\pi} (h, \Delta f)_{L^2}.
+\]
+Thus, we would be happy if we can invert $\Delta$, which we can by IB methods. Recall that
+\[
+ \Delta (- \log|x - y|) = -2\pi \delta(y - x),
+\]
+where $\Delta$ acts on $x$, and so $-\log|x - y|$ is a Green's function for $\Delta$. Given a domain $D$, we wish to obtain a version of the Green's function that vanishes on the boundary. To do so, we solve $\Delta \tilde{G}_x = 0$ on $D$ with the boundary conditions
+\[
+ \tilde{G}_x(y) = -\log|x - y|\text{ if }y \in \partial D.
+\]
+We can then set
+\[
+ G(x, y) = - \log |x - y| - \tilde{G}_x(y).
+\]
+With this definition, we can define
+\[
+ \Delta^{-1} \varphi(x) = -\frac{1}{2\pi} \int G(x, y) \varphi(y)\;\d y.
+\]
+Then $\Delta \Delta^{-1} \varphi(x) = \varphi(x)$, and so
+\[
+ (h, \varphi) \equiv (h, \varphi)_{L^2} = -2\pi (h, \Delta^{-1} \varphi)_\nabla.
+\]
+Then $(h, \varphi)$ is a mean-zero Gaussian with variance
+\begin{align*}
+ (2\pi)^2 \|\Delta^{-1} \varphi\|_{\nabla}^2 &= (2\pi)^2 (\Delta^{-1}\varphi, \Delta^{-1}\varphi)_{\nabla} \\
+ &= - 2\pi (\Delta^{-1} \varphi, \Delta \Delta^{-1}\varphi)\\
+ &= (-2\pi \Delta^{-1} \varphi, \varphi)\\
+ &= \iint \varphi(x) G(x, y) \varphi(y)\;\d x\;\d y.
+\end{align*}
+More generally, if $\varphi, \psi \in C_0^\infty(D)$, then
+\[
+ \cov( (h, \varphi), (h, \psi)) = \iint \varphi(x) G(x, y) \psi(y)\;\d x\;\d y.
+\]
+
+On the upper half plane, we have a very explicit Green's function
+\[
+ G(x, y) = G_\H(x, y) = -\log |x - y| + \log |x - \bar{y}|.
+\]
+It is not hard to show that the Green's function is in fact conformally invariant:
+\begin{prop}
+ Let $D, \tilde{D}$ be domains in $\C$ and let $\varphi: D \to \tilde{D}$ be a conformal transformation. Then $G_D(x, y) = G_{\tilde{D}}(\varphi(x), \varphi(y))$.\fakeqed
+\end{prop}
+One way to prove this is to use the conformal invariance of Brownian motion, but a direct computation suffices as well.
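For $\H$ this can be checked directly against the explicit formula: the conformal automorphisms of $\H$ are the real M\"obius maps $z \mapsto (az + b)/(cz + d)$ with $ad - bc > 0$, and $G_\H$ is invariant under them. A quick numeric check (the specific points and coefficients are arbitrary):

```python
import math

def G_H(x, y):
    """Green's function of the upper half-plane: -log|x - y| + log|x - conj(y)|."""
    return -math.log(abs(x - y)) + math.log(abs(x - y.conjugate()))

def mobius(z, a, b, c, d):
    """Automorphism of H when a, b, c, d are real with ad - bc > 0."""
    return (a * z + b) / (c * z + d)

x, y = 0.5 + 1.0j, -0.3 + 2.0j
a, b, c, d = 2.0, 1.0, 1.0, 3.0   # ad - bc = 5 > 0
lhs = G_H(x, y)
rhs = G_H(mobius(x, a, b, c, d), mobius(y, a, b, c, d))
print(lhs, rhs)  # equal: G_H is invariant under automorphisms of H
```

One can also check that $G_\H(x, y) \to 0$ as $y$ approaches the real axis, consistent with the zero boundary condition imposed on $G$.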
+
+Schramm and Sheffield showed that the level sets of $h$, i.e.\ $\{x : h(x) = 0\}$ are $\SLE_4$'s. It takes some work to make this precise, since $h$ is not a genuine function, but the formal statement is as follows:
+\begin{thm}[Schramm--Sheffield]
+ Let $\lambda = \frac{\pi}{2}$. Let $\gamma \sim \SLE_4$ in $\H$ from $0$ to $\infty$. Let $g_t$ be its Loewner evolution with driving function $U_t = \sqrt{\kappa} B_t = 2 B_t$, and set $f_t = g_t - U_t$. Fix $W \subseteq \H$ open and let
+ \[
+ \tau = \inf \{t \geq 0: \gamma(t) \in W\}.
+ \]
+ Let $h$ be a Gaussian free field on $\H$, and let $\mathscr{h}$ be the unique harmonic function on $\H$ with boundary values $\lambda$ on $\R_{>0}$ and $-\lambda$ on $\R_{<0}$. Explicitly, it is given by
+ \[
+ \mathscr{h} = \lambda - \frac{2\lambda}{\pi} \arg(\ph).
+ \]
+ Then
+ \[
+ h + \mathscr{h} \overset{d}{=} (h + \mathscr{h}) \circ f_{t \wedge \tau},
+ \]
+ where both sides are restricted to $W$.
+\end{thm}
+A few words should be said about why we should think of this as saying the level curves of $h$ are $\SLE_4$'s. First, observe that since $\mathscr{h}$ is harmonic, adding $\mathscr{h}$ to $h$ doesn't change how $h$ pairs with other functions in $H_0^1(\H)$. All it does is change the boundary conditions of $h$. So we should think of $h + \mathscr{h}$ as a version of the Gaussian free field with boundary conditions
+\[
+ (h + \mathscr{h})(x) = \sgn(x) \lambda.
+\]
+It is comforting to know that this boundary condition is well-defined as an element of $L^2$.
+
+Similarly, $(h + \mathscr{h}) \circ f_{t \wedge \tau}$ is a Gaussian free field with the boundary condition that it takes the value $\lambda$ to the right of $\gamma \sim \SLE_4$ and $-\lambda$ to the left of it. Thus, we can think of it as taking value $0$ along $\gamma$. The theorem says this has the same distribution as just $h + \mathscr{h}$. So we interpret this as saying the Gaussian free field has the same distribution as a Gaussian free field forced to vanish along an $\SLE_4$.
+
+What we stated here is a simplified version of the theorem of Schramm and Sheffield, because we are not going to deal with what happens after $\gamma$ hits $W$.
+
+The proof is not difficult. The difficult part is to realize that this is the thing we want to prove.
+\begin{proof}
+ We want to show that if $\varphi \in C_0^\infty(W)$,
+ \[
+ ((h+ \mathscr{h}) \circ f_{t \wedge \tau}, \varphi) \overset{d}{=} (h + \mathscr{h}, \varphi).
+ \]
+ In other words, writing
+ \[
+ m_t (\varphi) = (\mathscr{h} \circ f_t, \varphi),\quad \sigma_t^2(\varphi) = \iint \varphi(x) G_\H(f_t(x), f_t(y)) \varphi(y)\;\d x\;\d y,
+ \]
+ we want to show that
+ \[
+ ((h + \mathscr{h}) \circ f_{t \wedge \tau}, \varphi) \sim N(m_0(\varphi), \sigma_0^2(\varphi)).
+ \]
+ This is the same as proving that
+ \[
+ \E\left[e^{i \theta((h + \mathscr{h}) \circ f_{t \wedge \tau}, \varphi)}\right] = \exp \left[i \theta m_0(\varphi) - \frac{\theta^2}{2} \sigma_0^2 (\varphi)\right].
+ \]
+ Let $\mathcal{F}_t = \sigma(U_s: s \leq t)$ be the filtration of $U_t$. Then
+ \begin{align*}
+ \E\left[ e^{i\theta ((h + \mathscr{h}) \circ f_{t \wedge \tau}, \varphi)}\, \middle|\, \mathcal{F}_{t \wedge \tau}\right] &= \E\left[e^{i\theta(h \circ f_{t \wedge \tau}, \varphi)} \mid \mathcal{F}_{t \wedge \tau}\right] e^{i\theta m_{t \wedge \tau} (\varphi)}\\
+ &= \exp\left[i\theta m_{t \wedge \tau}(\varphi) - \frac{\theta^2}{2} \sigma^2_{t \wedge \tau}(\varphi)\right].
+ \end{align*}
+ If we knew that
+ \[
+ \exp\left[i\theta m_t(\varphi) - \frac{\theta^2}{2} \sigma_t^2(\varphi)\right]
+ \]
+ is a martingale, then taking the expectation of the above equation yields the desired result.
+
+ Note that this looks exactly like the form of an exponential martingale, which in particular is a martingale. So it suffices to show that $m_t(\varphi)$ is a martingale with
+ \[
+ [m_\Cdot(\varphi)]_t = \sigma_0^2(\varphi) - \sigma_t^2(\varphi).
+ \]
+
+ To check that $m_t(\varphi)$ is a martingale, we expand it as
+ \[
+ \mathscr{h} \circ f_t(z) = \lambda - \frac{2\lambda}{\pi} \arg(f_t(z)) = \lambda - \frac{2\lambda}{\pi} \im(\log (g_t(z) - U_t)).
+ \]
+ So it suffices to check that $\log (g_t(z) - U_t)$ is a martingale. We apply It\^o's formula to get
+ \[
+ \d \log(g_t(z) - U_t) = \frac{1}{g_t(z) - U_t} \cdot \frac{2}{g_t(z) - U_t}\;\d t - \frac{1}{g_t(z) - U_t} \;\d U_t -\frac{\kappa/2}{(g_t(z) - U_t)^2}\;\d t,
+ \]
+ and so this is a continuous local martingale when $\kappa = 4$. Since $m_t(\varphi)$ is bounded, it is a genuine martingale.
+
+ We then compute the derivative of the quadratic variation
+ \[
+ \d [m_\Cdot(\varphi)]_t = \iint \varphi(x) \im\left(\frac{2}{g_t(x) - U_t}\right) \im \left(\frac{2}{g_t(y) - U_t}\right) \varphi(y) \;\d x\;\d y\;\d t.
+ \]
+ To finish the proof, we need to show that $\d \sigma_t^2(\varphi)$ takes the same form. Recall that the Green's function can be written as
+ \[
+ G_\H(x, y) = - \log |x - y| + \log |x - \bar{y}| = -\Re(\log (x - y) - \log (x -\bar{y})).
+ \]
+ Since we have
+ \[
+ \log(f_t(x) - f_t(y)) = \log (g_t(x) - g_t(y)),
+ \]
+ we can compute
+ \begin{align*}
+ \d \log (g_t(x) - g_t(y)) &= \frac{1}{g_t(x) - g_t(y)} \left[\frac{2}{g_t(x) - U_t} - \frac{2}{g_t(y) - U_t}\right]\;\d t\\
+ &= \frac{-2}{(g_t(x) - U_t)(g_t(y) - U_t)}\;\d t.
+ \end{align*}
+ Similarly, we have
+ \[
+ \d \log (g_t(x) - \overline{g_t(y)}) = \frac{-2}{(g_t(x) - U_t)(\overline{g_t(y)} - U_t)}\;\d t.
+ \]
+ So we have
+ \[
+ \d G_t(x, y) = - \im \left(\frac{2}{g_t(x) - U_t}\right) \im \left(\frac{2}{g_t(y) - U_t}\right)\;\d t.
+ \]
+ This is exactly what we wanted it to be.
+\end{proof}
+\printindex
+\end{document}
diff --git a/books/cam/III_L/stochastic_calculus_and_applications.tex b/books/cam/III_L/stochastic_calculus_and_applications.tex
new file mode 100644
index 0000000000000000000000000000000000000000..67d21a69abcf84b7ad883781574760a25259f671
--- /dev/null
+++ b/books/cam/III_L/stochastic_calculus_and_applications.tex
@@ -0,0 +1,2238 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2018}
+\def\nlecturer {R.\ Bauerschmidt}
+\def\ncourse {Stochastic Calculus and Applications}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+\begin{itemize}
+ \item \textit{Brownian motion.} Existence and sample path properties.
+ \item \textit{Stochastic calculus for continuous processes.} Martingales, local martingales, semi-martingales, quadratic variation and cross-variation, It\^o's isometry, definition of the stochastic integral, Kunita--Watanabe theorem, and It\^o's formula.
+ \item \textit{Applications to Brownian motion and martingales.} L\'evy characterization of Brownian motion, Dubins--Schwartz theorem, martingale representation, Girsanov theorem, conformal invariance of planar Brownian motion, and Dirichlet problems.
+ \item \textit{Stochastic differential equations.} Strong and weak solutions, notions of existence and uniqueness, Yamada--Watanabe theorem, strong Markov property, and relation to second order partial differential equations.
+\end{itemize}
+\subsubsection*{Pre-requisites}
+Knowledge of measure theoretic probability as taught in Part III Advanced Probability will be assumed, in particular familiarity with discrete-time martingales and Brownian motion.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Ordinary differential equations are central in analysis. The simplest class of equations tend to look like
+\[
+ \dot{x}(t) = F(x(t)).
+\]
+\emph{Stochastic} differential equations are differential equations where we make the function $F$ ``random''. There are many ways of doing so, and the simplest way is to write it as
+\[
+ \dot{x}(t) = F(x(t)) + \eta(t),
+\]
+where $\eta$ is a random function. For example, when modeling noisy physical systems, the system will be subject to random noise. What should we expect the function $\eta$ to be like? We might expect that for $|t - s| \gg 0$, the variables $\eta(t)$ and $\eta(s)$ are ``essentially'' independent. If we are interested in physical systems, then this is a rather reasonable assumption, since random noise is random!
+
+In practice, we work with the idealization, where we claim that $\eta(t)$ and $\eta(s)$ are independent for $t \not= s$. Such an $\eta$ exists, and is known as \emph{white noise}. However, it is not a function, but just a Schwartz distribution.
+
+To understand the simplest case, we set $F = 0$. We then have the equation
+\[
+ \dot{x} = \eta.
+\]
+We can write this in integral form as
+\[
+ x(t) = x(0) + \int_0^t \eta(s)\;\d s.
+\]
+To make sense of this integral, the function $\eta$ should at least be a signed measure. Unfortunately, white noise isn't. This is bad news.
+
+We ignore this issue for a little bit, and proceed as if it made sense. If the equation held, then for any $0 = t_0 < t_1 < \cdots$, the increments
+\[
+ x(t_i) - x(t_{i - 1}) = \int_{t_{i - 1}}^{t_i} \eta(s) \;\d s
+\]
+should be independent, and moreover their variance should scale linearly with $|t_i - t_{i - 1}|$. So maybe this $x$ should be a Brownian motion!
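As a quick sanity check on this heuristic, we can discretize: modeling the integral of $\eta$ over each grid cell as an independent $N(0, \d t)$ variable produces a random walk whose increments over disjoint intervals are independent with variance proportional to the interval length. A minimal sketch (the grid size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize white noise on [0, 1] with mesh dt: the integral of eta over
# each grid cell is modeled as an independent N(0, dt) random variable.
n, dt = 100_000, 1e-5
cell_integrals = rng.normal(0.0, np.sqrt(dt), size=n)

# x(t_i) = x(0) + int_0^{t_i} eta(s) ds becomes a cumulative sum.
x = np.concatenate([[0.0], np.cumsum(cell_integrals)])

# Increments over disjoint blocks of 100 cells should be independent with
# variance 100 * dt = 1e-3: variance scales linearly in the time gap.
block = 100
incs = x[block::block] - x[:-block:block]
print(incs.var())  # close to 1e-3
```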
+
+Formalizing these ideas will take up a large portion of the course, and the work isn't always pleasant. Then why should we be interested in this continuous problem, as opposed to what we obtain when we discretize time? It turns out that in some sense the continuous problem is easier. When we learn measure theory, there is a lot of work put into constructing the Lebesgue measure, as opposed to the sum, which we can just define. However, what we end up with is much easier --- it's easier to integrate $\frac{1}{x^3}$ than to sum $\sum_{n = 1}^\infty \frac{1}{n^3}$. Similarly, once we have set up the machinery of stochastic calculus, we have a powerful tool to do explicit computations, which is usually harder in the discrete world.
+
+Another reason to study stochastic calculus is that a lot of continuous time processes can be described as solutions to stochastic differential equations. Compare this with the fact that functions such as trigonometric and Bessel functions are described as solutions to ordinary differential equations!
+
+There are two ways to approach stochastic calculus, namely via the It\^o integral and the Stratonovich integral. We will mostly focus on the It\^o integral, which is more useful for our purposes. In particular, the It\^o integral tends to give us martingales, which is useful.
+
+To give a flavour of the construction of the It\^o integral, we consider a simpler scenario of the Wiener integral.
+
+\begin{defi}[Gaussian space]\index{Gaussian space}
+ Let $(\Omega, \mathcal{F}, \P)$ be a probability space. Then a subspace $S \subseteq L^2(\Omega, \mathcal{F}, \P)$ is called a \emph{Gaussian space} if it is a closed linear subspace and every $X \in S$ is a centered Gaussian random variable.
+\end{defi}
+
+An important construction is
+\begin{prop}
+ Let $H$ be any separable Hilbert space. Then there is a probability space $(\Omega, \mathcal{F}, \P)$ with a Gaussian subspace $S \subseteq L^2(\Omega, \mathcal{F}, \P)$ and an isometry $I: H \to S$. In other words, for any $f \in H$, there is a corresponding random variable $I(f) \sim N(0, (f, f)_H)$. Moreover, $I(\alpha f + \beta g) = \alpha I(f) + \beta I(g)$ and $(f, g)_H = \E[I(f) I(g)]$.
+\end{prop}
+
+\begin{proof}
+ By separability, we can pick a Hilbert space basis $(e_i)_{i = 1}^\infty$ of $H$. Let $(\Omega, \mathcal{F}, \P)$ be any probability space that carries an infinite sequence of independent standard Gaussian random variables $X_i \sim N(0, 1)$. Then send $e_i$ to $X_i$, extend by linearity and continuity, and take $S$ to be the closure of the image.
+\end{proof}
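A finite-dimensional sketch of this construction, taking $H = \R^5$ with its standard basis (a hypothetical choice purely for illustration), checks the isometry property by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of the isometry: H = R^5 with standard basis, X_1, ..., X_5
# iid standard Gaussians, and I(f) = sum_i (f, e_i) X_i extended linearly.
d, samples = 5, 200_000
X = rng.standard_normal((samples, d))  # each row is one realization of (X_i)

def I(f):
    return X @ f  # one sample of I(f) per realization

f = np.array([1.0, 2.0, 0.0, -1.0, 0.5])
g = np.array([0.0, 1.0, 3.0, 1.0, 0.0])

# I(f) ~ N(0, (f, f)_H) and E[I(f) I(g)] = (f, g)_H.
print(I(f).var(), f @ f)            # both close to 6.25
print((I(f) * I(g)).mean(), f @ g)  # both close to 1.0
```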
+
+In particular, we can take $H = L^2(\R_+)$.
+
+\begin{defi}[Gaussian white noise]\index{Gaussian white noise}
+ A \emph{Gaussian white noise} on $\R_+$ is an isometry $WN$ from $L^2(\R_+)$ into some Gaussian space. For $A \subseteq \R_+$, we write $WN(A) = WN(\mathbf{1}_A)$.
+\end{defi}
+
+\begin{prop}\leavevmode
+ \begin{itemize}
+ \item For $A \subseteq \R_+$ with $|A| < \infty$, $WN(A) \sim N(0, |A|)$.
+ \item For disjoint $A, B \subseteq \R_+$, the variables $WN(A)$ and $WN(B)$ are independent.
+ \item If $A = \bigcup_{i = 1}^\infty A_i$ for disjoint sets $A_i \subseteq \R_+$, with $|A| < \infty, |A_i| < \infty$, then
+ \[
+ WN(A) = \sum_{i = 1}^\infty WN(A_i)\text{ in $L^2$ and a.s.}
+ \]
+ \end{itemize}
+\end{prop}
+
+\begin{proof}
+ Only the last point requires proof. Observe that the partial sum
+ \[
+ M_n = \sum_{i = 1}^n WN(A_i)
+ \]
+ is a martingale, and is bounded in $L^2$ as well, since
+ \[
+ \E M_n^2 = \sum_{i = 1}^n \E WN(A_i)^2 = \sum_{i = 1}^n |A_i| \leq |A|.
+ \]
+ So we are done by the martingale convergence theorem. The limit is indeed $WN(A)$ because $\mathbf{1}_A = \sum_{i = 1}^\infty \mathbf{1}_{A_i}$ in $L^2$ and $WN$ is an isometry.
+\end{proof}
+The point of the proposition is that $WN$ really looks like a random measure on $\R_+$, except it is \emph{not}. We only have convergence almost surely above, which means we have convergence on a set of measure $1$. However, the set depends on which $A$ and $A_i$ we pick. For things to actually work out well, we must have a fixed set of measure $1$ for which convergence holds for all $A$ and $A_i$.
+
+But perhaps we can ignore this problem, and try to proceed. We define
+\[
+ B_t = WN([0, t])
+\]
+for $t \geq 0$.
+\begin{ex}
+ This $B_t$ is a standard Brownian motion, except for the continuity requirement. In other words, for any $t_1, t_2, \ldots, t_n$, the vector $(B_{t_i})_{i = 1}^n$ is jointly Gaussian with
+ \[
+ \E[B_s B_t] = s \wedge t\text{ for }s, t \geq 0.
+ \]
+ Moreover, $B_0 = 0$ a.s.\ and $B_t - B_s$ is independent of $\sigma(B_r: r \leq s)$. Moreover, $B_t - B_s \sim N(0, t - s)$ for $t \geq s$.
+\end{ex}
+In fact, by picking a good basis of $L^2(\R_+)$, we can make $B_t$ continuous.
+
+We can now try to define some stochastic integral. If $f \in L^2(\R_+)$ is a step function,
+\[
+ f = \sum_{i = 1}^n f_i \mathbf{1}_{[s_i, t_i]}
+\]
+with $s_i < t_i$, then
+\[
+ WN(f) = \sum_{i = 1}^n f_i (B_{t_i} - B_{s_i}).
+\]
+This motivates the notation
+\[
+ WN(f) = \int f(s)\; \d B_s.
+\]
+However, extending this to a function that is not a step function would be problematic.
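For step functions at least, the distribution of $WN(f)$ can be checked by simulation; the particular $f$ below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# For the step function f = 2 * 1_[0, 1/2] - 1 * 1_(1/2, 1], the Wiener
# integral is WN(f) = 2 (B_{1/2} - B_0) - (B_1 - B_{1/2}), a Gaussian with
# variance |f|_{L^2}^2 = 4 * 1/2 + 1 * 1/2 = 2.5.
samples = 200_000
inc1 = rng.normal(0.0, np.sqrt(0.5), samples)  # B_{1/2} - B_0
inc2 = rng.normal(0.0, np.sqrt(0.5), samples)  # B_1 - B_{1/2}, independent
wn_f = 2 * inc1 - inc2

print(wn_f.mean(), wn_f.var())  # close to 0 and 2.5
```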
+
+\section{The Lebesgue--Stieltjes integral}
+In calculus, we are able to perform integrals more exciting than simply $\int_0^1 h(x) \;\d x$. In particular, if $h, a: [0, 1] \to \R$ are $C^1$ functions, we can perform integrals of the form
+\[
+ \int_0^1 h(x) \;\d a(x).
+\]
+For them, it is easy to make sense of what this means --- it's simply
+\[
+ \int_0^1 h(x) \;\d a = \int_0^1 h(x) a'(x) \;\d x.
+\]
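For instance, with $h(x) = x$ and $a(x) = x^2$, both sides equal $\int_0^1 2x^2 \;\d x = 2/3$, and a left-endpoint Riemann sum against the increments of $a$ reproduces this numerically (grid size arbitrary):

```python
import numpy as np

# With h(x) = x and a(x) = x^2 on [0, 1], both sides of the identity are
# int_0^1 x * 2x dx = 2/3; a left-endpoint sum against increments of a
# approximates the Stieltjes side.
n = 10_000
t = np.linspace(0.0, 1.0, n + 1)
h = t[:-1]            # h evaluated at left endpoints
da = np.diff(t ** 2)  # increments of a(x) = x^2

stieltjes_sum = np.sum(h * da)
print(stieltjes_sum)  # close to 2/3
```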
+In our world, we wouldn't expect our functions to be differentiable, so this is not a great definition. One reasonable strategy to make sense of this is to come up with a measure that should equal ``$\d a$''.
+
+An immediate difficulty we encounter is that $a'(x)$ need not be positive all the time. So for example, $\int_0^1 1 \;\d a$ could be a negative number, which one wouldn't expect for a usual measure! Thus, we are naturally led to think about \emph{signed measures}.
+
+From now on, we always use the Borel $\sigma$-algebra on $[0, T]$ unless otherwise specified.
+\begin{defi}[Signed measure]\index{signed measure}
+ A \emph{signed measure} on $[0, T]$ is a difference $\mu = \mu_+ - \mu_-$ of two positive measures on $[0, T]$ of disjoint support. The decomposition $\mu = \mu_+ - \mu_-$ is called the \term{Hahn decomposition}.
+\end{defi}
+
+In general, given two measures $\mu_1$ and $\mu_2$ with not necessarily disjoint supports, we may still want to talk about $\mu_1 - \mu_2$.
+\begin{thm}
+ For any two finite measures $\mu_1, \mu_2$, there is a signed measure $\mu$ with $\mu(A) = \mu_1(A) - \mu_2(A)$.
+\end{thm}
+
+If $\mu_1$ and $\mu_2$ are given by densities $f_1, f_2$, then we can simply decompose $\mu$ as $(f_1 - f_2)^+\;\d t - (f_1 - f_2)^- \;\d t$, where $^+$ and $^-$ denote the positive and negative parts respectively. In general, they need not be given by densities with respect to $\d x$, but they are always given by densities with respect to some other measure.
+\begin{proof}
+ Let $\nu = \mu_1 + \mu_2$. By Radon--Nikodym, there are positive functions $f_1, f_2$ such that $\mu_i(\d t) = f_i(t) \nu(\d t)$. Then
+ \[
+ (\mu_1 - \mu_2)(\d t) = (f_1 - f_2)^+(t) \cdot \nu(\d t) - (f_1 - f_2)^- (t) \cdot \nu(\d t).\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Total variation]\index{total variation}
+ The total variation of a signed measure $\mu = \mu_+ - \mu_-$ is $|\mu| = \mu_+ + \mu_-$.
+\end{defi}
+
+We now want to figure out how we can go from a function to a signed measure. Let's think about how one would attempt to define $\int_0^t h(s)\;\d a(s)$ as a Riemann sum. A natural option would be to write something like
+\[
+ \int_0^t h(s) \;\d a(s) = \lim_{m \to \infty} \sum_{i = 1}^{n_m} h(t_{i - 1}^{(m)}) \Big(a (t_i^{(m)}) - a(t_{i - 1}^{(m)})\Big)
+\]
+for any sequence of subdivisions $0 = t_0^{(m)} < \cdots < t_{n_m}^{(m)} = t$ of $[0, t]$ with $\max_i |t_i^{(m)} - t_{i - 1}^{(m)}| \to 0$.
+
+In particular, since we want the integral of $h = 1$ to be well-behaved, the sum $\sum (a(t_i^{(m)}) - a(t_{i - 1}^{(m)}))$ must be well-behaved. This leads to the notion of
+\begin{defi}[Total variation]\index{total variation}
+ The \emph{total variation} of a function $a: [0, T] \to \R$ is
+ \[
+ V_a(t) = |a(0)| + \sup \left\{\sum_{i = 1}^n |a(t_i) - a(t_{i - 1})|: 0 = t_0 < t_1 < \cdots < t_n = t\right\}.
+ \]
+ We say $a$ has \term{bounded variation} if $V_a(T) < \infty$. In this case, we write $a \in BV$.\index{BV}
+\end{defi}
+We include the $|a(0)|$ term because we want to pretend $a$ is defined on all of $\R$ with $a(t) = 0$ for $t < 0$.
+
+We also define
+\begin{defi}[C\`adl\`ag]\index{c\`adl\`ag}
+ A function $a: [0, T] \to \R$ is \emph{c\`adl\`ag} if it is right-continuous and has left-limits.
+\end{defi}
+
+The following theorem is then clear:
+\begin{thm}
+ There is a bijection
+ \[
+ \left\{\vphantom{\parbox{4.5cm}{a\\b}}\parbox{4.5cm}{\centering signed measures on $[0, T]$} \right\} \longleftrightarrow \left\{\parbox{4.5cm}{\centering c\`adl\`ag functions of bounded variation $a: [0, T] \to \R$}\right\}
+ \]
+ that sends a signed measure $\mu$ to $a(t) = \mu([0, t])$. To construct the inverse, given $a$, we define
+ \[
+ a_{\pm} = \frac{1}{2}(V_a \pm a).
+ \]
+ Then $a_{\pm}$ are both positive, and $a = a_+ - a_-$. We can then define $\mu_{\pm}$ by
+ \begin{align*}
+ \mu_{\pm}[0, t] &= a_{\pm}(t) - a_{\pm}(0)\\
+ \mu &= \mu_+ - \mu_-
+ \end{align*}
+ Moreover, $V_a(t) = |\mu|([0, t])$.
+\end{thm}
+
+\begin{eg}
+ Let $a: [0, 1] \to \R$ be given by
+ \[
+ a(t) =
+ \begin{cases}
+ 1 & t < \frac{1}{2}\\
+ 0 & t \geq \frac{1}{2}
+ \end{cases}.
+ \]
+ This is c\`adl\`ag, and its total variation is $V_a(1) = 2$. The associated signed measure is
+ \[
+ \mu = \delta_0 - \delta_{1/2},
+ \]
+ and the total variation measure is
+ \[
+ |\mu| = \delta_0 + \delta_{1/2}.
+ \]
+\end{eg}
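We can check numerically that left-endpoint Riemann sums against this $a$ recover integration against $\mu$ restricted to $(0, 1]$, i.e.\ they converge to $-h(1/2)$ for left-continuous $h$ (the grid size below is arbitrary):

```python
import numpy as np

# a(t) = 1 for t < 1/2 and 0 for t >= 1/2, so on (0, 1] the measure da is
# -delta_{1/2}.  Left-endpoint sums with the left-continuous h(s) = s
# should converge to -h(1/2) = -1/2.
n = 10_000
t = np.linspace(0.0, 1.0, n + 1)
a = np.where(t < 0.5, 1.0, 0.0)
h = t  # h(s) = s, continuous, hence left-continuous

stieltjes_sum = np.sum(h[:-1] * np.diff(a))
print(stieltjes_sum)  # close to -0.5
```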
+
+We are now ready to define the Lebesgue--Stieltjes integral.
+
+\begin{defi}[Lebesgue--Stieltjes integral]\index{Lebesgue--Stieltjes integral}
+ Let $a: [0, T] \to \R$ be c\`adl\`ag of bounded variation and let $\mu$ be the associated signed measure. Then for $h \in L^1([0, T], |\mu|)$, the \emph{Lebesgue--Stieltjes integral} is defined by
+ \[
+ \int_s^t h(r)\; \d a(r) = \int_{(s, t]} h(r) \mu (\d r),
+ \]
+ where $0 \leq s \leq t \leq T$, and
+ \[
+ \int_s^t h(r)\; |\d a(r)| = \int_{(s, t]} h(r) |\mu| (\d r).
+ \]
+ We also write
+ \[
+ h \cdot a(t) = \int_0^t h(r) \;\d a(r).
+ \]
+\end{defi}
+
+To let $T = \infty$, we need the following notation:
+\begin{defi}[Finite variation]\index{finite variation}
+ A c\`adl\`ag function $a: [0, \infty) \to \R$ is of finite variation if $a|_{[0, T]} \in BV[0, T]$ for all $T > 0$.
+\end{defi}
+
+\begin{fact}
+ Let $a: [0, T] \to \R$ be c\`adl\`ag and BV, and $h \in L^1([0, T], |\d a|)$, then
+ \[
+ \left|\int_0^T h(s) \;\d a (s) \right| \leq \int_0^T |h(s)|\;|\d a(s)|,
+ \]
+ and the function $h \cdot a: [0, T] \to \R$ is c\`adl\`ag and BV with associated signed measure $h(s) \;\d a(s)$. Moreover, $|h(s) \;\d a(s)| = |h(s)|\;|\d a(s)|$.
+\end{fact}
+
+We can, unsurprisingly, characterize the Lebesgue--Stieltjes integral by a Riemann sum:
+\begin{prop}
+ Let $a$ be c\`adl\`ag and BV on $[0, t]$, and $h$ bounded and left-continuous. Then
+ \begin{align*}
+ \int_0^t h(s) \;\d a(s) &= \lim_{m \to \infty} \sum_{i = 1}^{n_m} h(t_{i - 1}^{(m)}) \Big(a (t_i^{(m)}) - a(t_{i - 1}^{(m)})\Big)\\
+ \int_0^t h(s) \;|\d a(s)| &= \lim_{m \to \infty} \sum_{i = 1}^{n_m} h(t_{i - 1}^{(m)}) \Big|a (t_i^{(m)}) - a(t_{i - 1}^{(m)})\Big|
+ \end{align*}
+ for any sequence of subdivisions $0 = t_0^{(m)} < \cdots < t_{n_m}^{(m)} = t$ of $[0, t]$ with $\max_i |t_i^{(m)} - t_{i - 1}^{(m)}| \to 0$.
+\end{prop}
+
+\begin{proof}
+ We approximate $h$ by $h_m$ defined by
+ \[
+ h_m(0) = 0,\quad h_m(s) = h(t_{i - 1}^{(m)})\text{ for }s \in (t_{i - 1}^{(m)}, t_i^{(m)}].
+ \]
+ Then by left continuity, we have
+ \[
+ h(s) = \lim_{m \to \infty}h_m(s),
+ \]
+ and moreover
+ \[
+ \lim_{m \to \infty} \sum_{i = 1}^{n_m} h(t_{i - 1}^{(m)}) (a(t_i^{(m)}) - a(t_{i - 1}^{(m)})) = \lim_{m \to \infty} \int_{(0, t]} h_m(s) \mu (\d s) = \int_{(0, t]} h(s) \mu(\d s)
+ \]
+ by the dominated convergence theorem. The statement about $|\d a(s)|$ is left as an exercise.
+\end{proof}
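The left-continuity hypothesis is not cosmetic: if $h$ fails to be left-continuous at a jump of $a$, the left-endpoint sums pick up the left limit of $h$ rather than its value, and converge to the wrong answer. A small illustration:

```python
import numpy as np

# Integrator a(t) = 1_{t < 1/2}, so da = -delta_{1/2} on (0, 1].  Take the
# right-continuous (NOT left-continuous) h = 1_{[1/2, 1]}.  The measure-
# theoretic integral over (0, 1] is -h(1/2) = -1, but left-endpoint sums
# see the left limit of h at 1/2, which is 0.
n = 10_000
t = np.linspace(0.0, 1.0, n + 1)
a = np.where(t < 0.5, 1.0, 0.0)
h = np.where(t >= 0.5, 1.0, 0.0)

left_sum = np.sum(h[:-1] * np.diff(a))
print(left_sum)  # 0.0, not the true integral -1
```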
+
+
+
+\section{Semi-martingales}
+The title of the chapter is ``semi-martingales'', but we will not even meet the definition of a semi-martingale till the end of the chapter. The reason is that a semi-martingale is essentially defined to be the sum of a (local) martingale and a finite variation process, and understanding semi-martingales mostly involves understanding the two parts separately. Thus, for most of the chapter, we will be studying local martingales (finite variation processes are rather more boring), and at the end we will put them together to say a word or two about semi-martingales.
+
+From now on, $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, \P)$ will be a filtered probability space. Recall the following definition:
+
+\begin{defi}[C\`adl\`ag adapted process]\index{c\`adl\`ag adapted process}
+ A \emph{c\`adl\`ag adapted process} is a map $X: \Omega \times [0, \infty) \to \R$ such that
+ \begin{enumerate}
+ \item $X$ is c\`adl\`ag, i.e.\ $X(\omega, \ph): [0, \infty) \to \R$ is c\`adl\`ag for all $\omega \in \Omega$.
+ \item $X$ is adapted, i.e.\ $X_t = X(\ph, t) $ is $\mathcal{F}_t$-measurable for every $t \geq 0$.
+ \end{enumerate}
+\end{defi}
+
+\begin{notation}
+ We will write $X \in \mathcal{G}$ to denote that a random variable $X$ is measurable with respect to a $\sigma$-algebra $\mathcal{G}$.
+\end{notation}
+
+\subsection{Finite variation processes}
+The definition of a finite variation function extends immediately to a finite variation process.
+\begin{defi}[Finite variation process]\index{Finite variation process}
+ A \emph{finite variation process} is a c\`adl\`ag adapted process $A$ such that $A(\omega, \ph): [0, \infty) \to \R$ has finite variation for all $\omega \in \Omega$. The \term{total variation process} $V$ of a finite variation process $A$ is
+ \[
+ V_t = \int_0^t |\d A_s|.
+ \]
+\end{defi}
+
+\begin{prop}
+ The total variation process $V$ of a c\`adl\`ag adapted process $A$ is also c\`adl\`ag, finite variation and adapted, and it is also increasing.
+\end{prop}
+
+\begin{proof}
+ We only have to check that it is adapted. But that follows directly from our previous expression of the integral as the limit of a sum. Indeed, let $0 = t_0^{(m)} < t_1^{(m)} < \cdots < t_{n_m}^{(m)} = t$ be a (nested) sequence of subdivisions of $[0, t]$ with $\max_i |t_i^{(m)} - t_{i - 1}^{(m)}| \to 0$. We have seen
+ \[
+ V_t = \lim_{m \to \infty} \sum_{i = 1}^{n_m} |A_{t_i^{(m)}} - A_{t_{i - 1}^{(m)}}| + |A(0)| \in \mathcal{F}_t.\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[$(H\cdot A)_t$]\index{$(H\cdot A)_t$}
+ Let $A$ be a finite variation process and $H$ a process such that for all $\omega \in \Omega$ and $t \geq 0$,
+ \[
+ \int_0^t |H_s(\omega)|\;|\d A_s(\omega)| < \infty.
+ \]
+ Then define a process $((H \cdot A)_t)_{t \geq 0}$ by
+ \[
+ (H \cdot A)_t = \int_0^t H_s\;\d A_s.
+ \]
+\end{defi}
+For the process $H \cdot A$ to be adapted, we need a condition.
+\begin{defi}[Previsible process]\index{previsible process}
+ A process $H: \Omega \times [0, \infty) \to \R$ is \emph{previsible} if it is measurable with respect to the \term{previsible $\sigma$-algebra} $\mathcal{P}$ generated by the sets $E \times (s, t]$, where $E \in \mathcal{F}_s$ and $s < t$. We call the generating set $\Pi$.
+\end{defi}
+Very roughly, the idea is that a previsible event is one where whenever it happens, you know it a finite (though possibly arbitrarily small) time before it happens.
+
+\begin{defi}[Simple process]\index{simple process}\index{$\mathcal{E}$}
+ A process $H: \Omega \times [0, \infty) \to \R$ is \emph{simple}, written $H \in \mathcal{E}$, if
+ \[
+ H(\omega, t) = \sum_{i = 1}^n H_{i - 1}(\omega) \mathbf{1}_{(t_{i - 1}, t_i]}(t)
+ \]
+ for random variables $H_{i - 1} \in \mathcal{F}_{t_{i - 1}}$ and $0 = t_0 < \cdots < t_n$.
+\end{defi}
+
+\begin{fact}
+ Simple processes and their limits are previsible.
+\end{fact}
+
+\begin{fact}
+ Let $X$ be a c\`adl\`ag adapted process. Then $H_t = X_{t^-}$ defines a left-continuous process and is previsible.
+\end{fact}
+In particular, continuous processes are previsible.
+
+\begin{proof}
+ Since $X$ is c\`adl\`ag adapted, it is clear that $H$ is left-continuous and adapted. Since $H$ is left-continuous, it is approximated by simple processes. Indeed, let
+ \[
+ H_t^n = \sum_{i = 1}^{2^n} H_{(i - 1)2^{-n}} \mathbf{1}_{((i - 1)2^{-n}, i 2^{-n}]} (t) \wedge n \in \mathcal{E}.
+ \]
+ Then $H_t^n \to H_t$ for all $t$ by left continuity, and previsibility follows.
+\end{proof}
+
+\begin{ex}
+ Let $H$ be previsible. Then
+ \[
+ H_t \in \mathcal{F}_{t^-} = \sigma(\mathcal{F}_s : s < t).
+ \]
+\end{ex}
+
+\begin{eg}
+ Brownian motion is previsible (since it is continuous).
+\end{eg}
+
+\begin{eg}
+ A Poisson process $(N_t)$ is not previsible since $N_t \not \in \mathcal{F}_{t^-}$.
+\end{eg}
+
+\begin{prop}
+ Let $A$ be a finite variation process, and $H$ previsible such that
+ \[
+ \int_0^t |H(\omega, s)|\;|\d A(\omega, s)| < \infty\text{ for all }(\omega, t) \in \Omega \times [0, \infty).
+ \]
+ Then $H \cdot A$ is a finite variation process.
+\end{prop}
+
+\begin{proof}
+ The finite variation and c\`adl\`ag parts follow directly from the deterministic versions. We only have to check that $H \cdot A$ is adapted, i.e.\ $(H \cdot A)(\ph, t) \in \mathcal{F}_t$ for all $t \geq 0$.
+
+ First, $H \cdot A$ is adapted if $H(\omega, s) = 1_{(u, v]}(s) 1_E(\omega)$ for some $u < v$ and $E \in \mathcal{F}_u$, since
+ \[
+ (H \cdot A)(\omega, t) = 1_E(\omega) (A(\omega, t \wedge v) - A(\omega, t \wedge u)) \in \mathcal{F}_t.
+ \]
+ Thus, $H \cdot A$ is adapted for $H = \mathbf{1}_F$ when $F \in \Pi$. Clearly, $\Pi$ is a $\pi$-system, i.e.\ it is closed under intersections and non-empty, and by definition it generates the previsible $\sigma$-algebra $\mathcal{P}$. So to extend the adaptedness of $H \cdot A$ to all previsible $H$, we use the monotone class theorem.
+
+ We let
+ \[
+ \mathcal{V} = \{H: \Omega \times [0, \infty) \to \R: H \cdot A\text{ is adapted}\}.
+ \]
+ Then
+ \begin{enumerate}
+ \item $1 \in \mathcal{V}$
+ \item $1_F \in \mathcal{V}$ for all $F \in \Pi$.
+ \item $\mathcal{V}$ is closed under monotone limits.
+ \end{enumerate}
+ So $\mathcal{V}$ contains all bounded $\mathcal{P}$-measurable functions.
+\end{proof}
+
+So the conclusion is that if $A$ is a finite variation process, then as long as reasonable finiteness conditions are satisfied, we can integrate functions against $\d A$. Moreover, this integral was easy to define, and it obeys all expected properties such as dominated convergence, since ultimately, it is just an integral in the usual measure-theoretic sense. This crucially depends on the fact that $A$ is a finite variation process.
+
+However, in our motivating example, we wanted to take $A$ to be Brownian motion, which is \emph{not} of finite variation. The work we will do in this chapter and the next is to come up with a stochastic integral where we let $A$ be a martingale instead. The heuristic idea is that while martingales can vary wildly, the martingale property implies there will be some large cancellation between the up and down movements, which leads to the possibility of a well-defined stochastic integral.
+
+\subsection{Local martingale}
+From now on, we assume that $(\Omega, \mathcal{F}, (\mathcal{F}_t)_t, \P)$ satisfies the \term{usual conditions}, namely that
+\begin{enumerate}
+ \item $\mathcal{F}_0$ contains all $\P$-null sets
+ \item $(\mathcal{F}_t)_t$ is right-continuous, i.e.\ $\mathcal{F}_t = \mathcal{F}_{t+} = \bigcap_{s > t} \mathcal{F}_s$ for all $t \geq 0$.
+\end{enumerate}
+
+We recall some of the properties of continuous martingales.
+\begin{thm}[Optional stopping theorem]
+ Let $X$ be a c\`adl\`ag adapted integrable process. Then the following are equivalent:
+ \begin{enumerate}
+ \item $X$ is a martingale, i.e.\ $X_t \in L^1$ for every $t$, and
+ \[
+ \E(X_t \mid \mathcal{F}_s) = X_s \text{ for all }t > s.
+ \]
+ \item The \term{stopped process}\index{$X^T$} $X^T = (X^T_t) = (X_{T \wedge t})$ is a martingale for all stopping times $T$.
+ \item For all stopping times $T, S$ with $T$ bounded, $X_T \in L^1$ and $\E(X_T \mid \mathcal{F}_S) = X_{T \wedge S}$ almost surely.
+ \item For all bounded stopping times $T$, $X_T \in L^1$ and $\E(X_T) = \E(X_0)$.
+ \end{enumerate}
+ For $X$ uniformly integrable, (iii) and (iv) hold for all stopping times.
+\end{thm}
+
+In practice, most of our results will be first proven for bounded martingales, or perhaps square integrable ones. The point is that the square-integrable martingales form a Hilbert space, and Hilbert space techniques can help us say something useful about these martingales. To get something about a general martingale $M$, we can apply a cutoff $T_n = \inf \{t > 0: |M_t| \geq n\}$, and then $M^{T_n}$ will be a martingale for all $n$. We can then take the limit $n \to \infty$ to recover something about the martingale itself.
+
+But if we are doing this, we might as well weaken the martingale condition a bit --- we only need the $M^{T_n}$ to be martingales. Of course, we aren't doing this just for fun. In general, martingales will not always be closed under the operations we are interested in, but local (or maybe semi-) martingales will be. In general, we define
+
+\begin{defi}[Local martingale]\index{local martingale}
+ A c\`adl\`ag adapted process $X$ is a \emph{local martingale} if there exists a sequence of stopping times $T_n$ such that $T_n \to \infty$ almost surely, and $X^{T_n}$ is a martingale for every $n$. We say the sequence $T_n$ \term{reduces} $X$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Every martingale is a local martingale, since by the optional stopping theorem, we can take $T_n = n$.
+ \item Let $(B_t)$ be a standard Brownian motion on $\R^3$. Then
+ \[
+ (X_t)_{t \geq 1} = \left(\frac{1}{|B_t|}\right)_{t \geq 1}
+ \]
+ is a local martingale but not a martingale.
+
+ To see this, first note that
+ \[
+ \sup_{t \geq 1} \E X_t^2 < \infty,\quad \E X_t \to 0.
+ \]
+ Since $\E X_t \to 0$ and $X_t \geq 0$, we know $X$ cannot be a martingale. However, we can check that it is a local martingale. Recall that for any $f \in C^2_b$,
+ \[
+ M^f_t = f(B_t) - f(B_1) - \frac{1}{2} \int_1^t \Delta f(B_s)\;\d s
+ \]
+ is a martingale. Moreover, $\Delta \frac{1}{|x|} = 0$ for all $x \not= 0$. Thus, if $\frac{1}{|x|}$ didn't have a singularity at $0$, this would have told us $X_t$ is a martingale. Thus, we are safe if we try to bound $|B_s|$ away from zero.
+
+ Let
+ \[
+ T_n = \inf \left\{t \geq 1: |B_t| < \frac{1}{n}\right\},
+ \]
+ and pick $f_n \in C_b^2$ such that $f_n(x) = \frac{1}{|x|}$ for $|x| \geq \frac{1}{n}$. Then $X_t^{T_n} - X_1^{T_n} = M^{f_n}_{t \wedge T_n}$. So $X^{T_n}$ is a martingale.
+
+ It remains to show that $T_n \to \infty$ almost surely, and this follows from the fact that three-dimensional Brownian motion almost surely never hits $0$.
+ \end{enumerate}
+\end{eg}
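The two facts used above, $\sup_{t \geq 1} \E X_t^2 < \infty$ and $\E X_t \to 0$, can be made concrete by scaling: $B_t$ has the law of $\sqrt{t} Z$ for $Z$ a standard Gaussian on $\R^3$, so $\E X_t = \E[1/|Z|]/\sqrt{t}$ and $\E X_t^2 = 1/t$. A quick Monte Carlo check (sample size and seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# B_t has the law of sqrt(t) * Z for Z standard Gaussian in R^3, so
# E[1/|B_t|] = E[1/|Z|] / sqrt(t) with E[1/|Z|] = sqrt(2/pi) ~ 0.798,
# and E[1/|B_t|^2] = 1/t.  Hence sup_{t >= 1} E[X_t^2] <= 1 and
# E[X_t] -> 0 as t -> infinity.
samples = 200_000
Z = rng.standard_normal((samples, 3))
mean_inv = (1.0 / np.linalg.norm(Z, axis=1)).mean()

print(mean_inv)                   # close to sqrt(2 / pi)
print(mean_inv / np.sqrt(100.0))  # E[1/|B_100|]: an order of magnitude smaller
```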
+
+\begin{prop}
+ Let $X$ be a local martingale and $X_t \geq 0$ for all $t$. Then $X$ is a supermartingale.
+\end{prop}
+
+\begin{proof}
+ Let $(T_n)$ be a reducing sequence for $X$. Then
+ \begin{align*}
+ \E(X_t \mid \mathcal{F}_s) &= \E \left(\liminf_{n \to \infty} X_{t \wedge T_n} \mid \mathcal{F}_s\right) \\
+ &\leq \liminf_{n \to \infty} \E(X_{t \wedge T_n} \mid \mathcal{F}_s) \\
+ &= \lim_{n \to \infty} X_{s \wedge T_n} \\
+ &= X_s.\qedhere
+ \end{align*}
+\end{proof}
+
+Recall the following result from Advanced Probability:
+\begin{prop}
+ Let $X \in L^1 (\Omega, \mathcal{F}, \P)$. Then the set
+ \[
+ \chi = \{\E(X \mid \mathcal{G}): \mathcal{G} \subseteq \mathcal{F}\text{ a sub-$\sigma$-algebra}\}
+ \]
+ is uniformly integrable, i.e.
+ \[
+ \sup_{Y \in \chi} \E (|Y| \mathbf{1}_{|Y| > \lambda}) \to 0\text{ as } \lambda \to \infty.
+ \]
+\end{prop}
+
+Recall also the following important result about uniformly integrable random variables:
+\begin{thm}[Vitali theorem]\index{Vitali theorem}
+ $X_n \to X$ in $L^1$ iff $(X_n)$ is uniformly integrable and $X_n \to X$ in probability.
+\end{thm}
+
+With these, we can state the following characterization of martingales in terms of local martingales:
+\begin{prop}
+ The following are equivalent:
+ \begin{enumerate}
+ \item $X$ is a martingale.
+ \item $X$ is a local martingale, and for all $t \geq 0$, the set
+ \[
+ \chi_t = \{X_T: T\text{ is a stopping time with }T \leq t\}
+ \]
+ is uniformly integrable.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (a) $\Rightarrow$ (b): Let $X$ be a martingale. Then by the optional stopping theorem, $X_T = \E(X_t \mid \mathcal{F}_T)$ for any bounded stopping time $T \leq t$. So $\chi_t$ is uniformly integrable.
+ \item (b) $\Rightarrow$ (a): Let $X$ be a local martingale with reducing sequence $(T_n)$, and assume that the sets $\chi_t$ are uniformly integrable for all $t \geq 0$. By the optional stopping theorem, it suffices to show that $\E(X_T) = \E(X_0)$ for any bounded stopping time $T$.
+
+ So let $T$ be a bounded stopping time, say $T \leq t$. Then
+ \[
+ \E(X_0) = \E(X_0^{T_n}) = \E(X_T^{T_n}) = \E(X_{T \wedge T_n})
+ \]
+ for all $n$. Now $T \wedge T_n$ is a stopping time $\leq t$, so $\{X_{T \wedge T_n}\}$ is uniformly integrable by assumption. Moreover, $T_n \wedge T \to T$ almost surely as $n \to \infty$, hence $X_{T \wedge T_n} \to X_T$ in probability. Hence by Vitali, this converges in $L^1$. So
+ \[
+ \E(X_T) = \E(X_0).\qedhere
+ \]
+ \end{itemize}
+\end{proof}
+
+\begin{cor}
+ If $X$ is a local martingale and $Z \in L^1$ is such that $|X_t| \leq Z$ for all $t$, then $X$ is a martingale. In particular, every bounded local martingale is a martingale.
+\end{cor}
+
+The definition of a local martingale does not give us control over what the reducing sequence $\{T_n\}$ is. In particular, it is not necessarily true that $X^{T_n}$ will be bounded, which is a helpful property to have. Fortunately, we have the following proposition:
+\begin{prop}
+ Let $X$ be a \emph{continuous} local martingale with $X_0 = 0$. Define
+ \[
+ S_n = \inf \{t \geq 0 : |X_t| = n \}.
+ \]
+ Then $S_n$ is a stopping time, $S_n \to \infty$ and $X^{S_n}$ is a bounded martingale. In particular, $(S_n)$ reduces $X$.
+\end{prop}
+
+\begin{proof}
+ It is clear that $S_n$ is a stopping time, since (if it is not clear)
+ \[
+ \{S_n \leq t\} = \bigcap_{k \in \N} \left\{\sup_{s \leq t} |X_s| > n - \frac{1}{k}\right\} = \bigcap_{k \in \N} \bigcup_{s < t, s \in \Q} \left\{|X_s| > n - \frac{1}{k}\right\} \in \mathcal{F}_t.
+ \]
+ It is also clear that $S_n \to \infty$, since
+ \[
+ \sup_{s \leq t} |X_s| \leq n \Leftrightarrow S_n \geq t,
+ \]
+ and by continuity and compactness, $\sup_{s \leq t} |X_s|$ is finite for every $(\omega, t)$.
+
+ Finally, we show that $X^{S_n}$ is a martingale. If $(T_m)$ reduces $X$, then by the optional stopping theorem, $X^{T_m \wedge S_n}$ is a martingale for each $m$, so $X^{S_n}$ is a local martingale. But it is also bounded by $n$. So it is a martingale.
+\end{proof}
+
+An important and useful theorem is the following:
+\begin{thm}
+ Let $X$ be a continuous local martingale with $X_0 = 0$. If $X$ is also a finite variation process, then $X_t = 0$ for all $t$.
+\end{thm}
+This would rule out interpreting $\int H_s \;\d X_s$ as a Lebesgue--Stieltjes integral for $X$ a non-zero continuous local martingale. In particular, we cannot take $X$ to be Brownian motion. Instead, we have to develop a new theory of integration for continuous local martingales, namely the It\^o integral.
+
+On the other hand, this theorem is very useful. We will later want to define the stochastic integral with respect to the sum of a continuous local martingale and a finite variation process, which is the appropriate generality for our theorems to make good sense. This theorem tells us there is a unique way to decompose a process as a sum of a finite variation process and a continuous local martingale (if it can be done). So we can simply define this stochastic integral by using the Lebesgue--Stieltjes integral on the finite variation part and the It\^o integral on the continuous local martingale part.
+\begin{proof}
+ Let $X$ be a finite-variation continuous local martingale with $X_0 = 0$. Since $X$ is finite variation, we can define the total variation process $(V_t)$ corresponding to $X$, and let
+ \[
+ S_n = \inf \{t \geq 0: V_t \geq n\} = \inf \left\{t \geq 0: \int_0^t |\d X_s| \geq n\right\}.
+ \]
+ Then $S_n$ is a stopping time, and $S_n \to \infty$ since $X$ is assumed to be finite variation. Moreover, by optional stopping, $X^{S_n}$ is a local martingale, and is also bounded, since
+ \[
+ |X_t^{S_n}| \leq \int_0^{t \wedge S_n} |\d X_s| \leq n.
+ \]
+ So $X^{S_n}$ is in fact a martingale.
+
+ We claim its $L^2$-norm vanishes. Let $0 = t_0 < t_1 < \cdots < t_k = t$ be a subdivision of $[0, t]$. Using the fact that $X^{S_n}$ is a martingale and has orthogonal increments, we can write
+ \[
+ \E((X_t^{S_n})^2) = \sum_{i = 1}^k \E((X_{t_i}^{S_n} - X_{t_{i - 1}}^{S_n})^2).
+ \]
+ Observe that $X^{S_n}$ is finite variation, but the right-hand side is summing the \emph{square} of the variation, which ought to vanish when we take the limit $\max |t_i - t_{i - 1}| \to 0$. Indeed, we can compute
+ \begin{align*}
+ \E((X_t^{S_n})^2) &= \sum_{i = 1}^k \E((X_{t_i}^{S_n} - X_{t_{i - 1}}^{S_n})^2)\\
+ &\leq \E\left(\max_{1 \leq i \leq k} |X_{t_i}^{S_n} - X_{t_{i - 1}}^{S_n}| \sum_{i =1 }^k |X_{t_i}^{S_n} - X_{t_{i - 1}}^{S_n}|\right)\\
+ &\leq \E\left(\max_{1 \leq i \leq k} |X_{t_i}^{S_n} - X_{t_{i - 1}}^{S_n}| V_{t \wedge S_n}\right)\\
+ &\leq \E\left(\max_{1 \leq i \leq k} |X_{t_i}^{S_n} - X_{t_{i - 1}}^{S_n}| n\right).
+ \end{align*}
+ Of course, the first term is also bounded by the total variation. Moreover, we can make further subdivisions so that the mesh size tends to zero, and then the first term vanishes in the limit by continuity. So by dominated convergence, we must have $\E((X_t^{S_n})^2) = 0$. So $X_t^{S_n} = 0$ almost surely for all $n$. So $X_t = 0$ for all $t$ almost surely.
+\end{proof}
+
+\subsection{Square integrable martingales}
+As previously discussed, we will want to use Hilbert space machinery to construct the It\^o integral. The rough idea is to define the It\^o integral with respect to a fixed martingale on simple processes via a (finite) Riemann sum, and then by calculating appropriate bounds on how this affects the norm, we can extend this to all processes by continuity, and this requires our space to be Hilbert. The interesting spaces are defined as follows:
+
+\begin{defi}[$\mathcal{M}^2$]\index{$\mathcal{M}^2$}
+ Let
+ \begin{align*}
+ \mathcal{M}^2 &= \left\{X : \Omega \times [0, \infty) \to \R : X\text{ is a c\`adl\`ag martingale with } \sup_{t \geq 0} \E(X_t^2) < \infty\right\},\\
+ \mathcal{M}^2_c &= \left\{X \in \mathcal{M}^2: X(\omega, \ph)\text{ is continuous for every }\omega \in \Omega\right\}.
+ We define an inner product on $\mathcal{M}^2$ by
+ \[
+ (X, Y)_{\mathcal{M}^2} = \E(X_\infty Y_\infty),
+ \]
+ which in particular induces a norm
+ \[
+ \|X\|_{\mathcal{M}^2} = \left(\E(X_\infty^2)\right)^{1/2}.
+ \]
+ We will prove this is indeed an inner product soon. Here recall that for $X \in \mathcal{M}^2$, the martingale convergence theorem implies $X_t \to X_\infty$ almost surely and in $L^2$.
+\end{defi}
+Our goal will be to prove that these spaces are indeed Hilbert spaces. First observe that if $X \in \mathcal{M}^2$, then $(X_t^2)_{t \geq 0}$ is a submartingale by Jensen, so $t \mapsto \E X_t^2$ is increasing, and
+\[
+ \E X_\infty^2 = \sup_{t \geq 0} \E X_t^2.
+\]
+All the magic that lets us prove they are Hilbert spaces is Doob's inequality.
+\begin{thm}[Doob's inequality]\index{Doob's inequality}
+ Let $X \in \mathcal{M}^2$. Then
+ \[
+ \E \left(\sup_{t \geq 0} X_t^2\right) \leq 4 \E(X_\infty^2).
+ \]
+\end{thm}
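Doob's $L^2$ inequality is easy to check numerically. The following is a minimal simulation sketch, not part of the formal development: we take the martingale given by a simple $\pm 1$ random walk up to a fixed horizon, and compare $\E(\sup_t X_t^2)$ against $4\E(X_T^2)$ by Monte Carlo. All variable names here are illustrative.

```python
import numpy as np

# Monte Carlo check of Doob's L^2 inequality E[sup_t X_t^2] <= 4 E[X_T^2]
# for the martingale X_t = sum of i.i.d. +-1 steps up to a fixed horizon T.
rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 200
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
X = np.cumsum(steps, axis=1)              # martingale paths
lhs = np.mean(np.max(X**2, axis=1))       # estimate of E[sup_t X_t^2]
rhs = 4 * np.mean(X[:, -1]**2)            # estimate of 4 E[X_T^2] (= 4 * 200 in expectation)
assert lhs <= rhs
```

The constant $4$ is not attained here; the point is only that controlling the terminal value controls the whole path.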
+So once we control the limit $X_\infty$, we control the whole path. This is why the definition of the norm makes sense, and in particular we know $\|X\|_{\mathcal{M}^2} = 0$ implies that $X = 0$.
+
+\begin{thm}
+ $\mathcal{M}^2$ is a Hilbert space and $\mathcal{M}_c^2$ is a closed subspace.
+\end{thm}
+
+\begin{proof}
+ We need to check that $\mathcal{M}^2$ is complete. Thus let $(X^n) \subseteq \mathcal{M}^2$ be a Cauchy sequence, i.e.
+ \[
+ \E((X_\infty^n - X_\infty^m)^2) \to 0\text{ as }n, m \to \infty.
+ \]
+ By passing to a subsequence, we may assume that
+ \[
+ \E((X_\infty^n - X_{\infty}^{n - 1})^2) \leq 2^{-n}.
+ \]
+ First note that
+ \begin{align*}
+ \E\left(\sum_n \sup_{t \geq 0} |X_t^n - X_t^{n - 1}|\right) &\leq \sum_n \E \left(\sup_{t \geq 0} |X_t^n - X_t^{n - 1}|^2 \right)^{1/2}\tag{CS}\\
+ &\leq \sum_n 2 \E \left(|X_\infty^n - X_\infty^{n - 1}|^2\right)^{1/2}\tag{Doob's}\\
+ &\leq 2 \sum_n 2^{-n/2} < \infty.
+ \end{align*}
+ So
+ \[
+ \sum_{n = 1}^\infty \sup_{t \geq 0} |X_t^n - X_t^{n - 1}|< \infty\text{ a.s.}\tag{$*$}
+ \]
+ So on this event, $(X^n)$ is a Cauchy sequence in the space $(D[0, \infty), \|\ph\|_\infty)$ of c\`adl\`ag functions. So there is some $X(\omega, \ph) \in D[0, \infty)$ such that
+ \[
+ \|X^n (\omega, \ph) - X(\omega, \ph)\|_\infty \to 0\text{ for almost all }\omega,
+ \]
+ and we set $X = 0$ outside this almost sure event $(*)$. We now claim that
+ \[
+ \E \left(\sup_{t \geq 0} |X^n - X|^2\right) \to 0\text{ as }n \to \infty.
+ \]
+ We can just compute
+ \begin{align*}
+ \E \left(\sup_t |X^n - X|^2\right) &= \E \left(\lim_{m \to \infty} \sup_t |X^n - X^m|^2\right)\\
+ &\leq \liminf_{m \to \infty} \E\left(\sup_t |X^n - X^m|^2\right) \tag{Fatou}\\
+ &\leq \liminf_{m \to \infty} 4 \E (X^n_\infty - X^m_\infty)^2 \tag{Doob's}
+ \end{align*}
+ and this goes to $0$ in the limit $n \to \infty$ as well.
+
+ We finally have to check that $X$ is indeed a martingale. We use the triangle inequality to write
+ \begin{align*}
+ \|\E(X_t \mid \mathcal{F}_s) - X_s \|_{L^2} &\leq \|\E (X_t - X_t^n \mid \mathcal{F}_s)\|_{L^2} + \|X_s^n - X_s\|_{L^2}\\
+ &\leq \E (\E ((X_t - X_t^n)^2 \mid \mathcal{F}_s))^{1/2} + \|X_s^n - X_s\|_{L^2}\\
+ &= \|X_t - X_t^n\|_{L^2} + \|X_s^n - X_s\|_{L^2}\\
+ &\leq 2 \E\left(\sup_t |X_t - X_t^n|^2\right)^{1/2} \to 0
+ \end{align*}
+ as $n \to \infty$. But the left-hand side does not depend on $n$. So it must vanish. So $X \in \mathcal{M}^2$.
+
+ We could have done exactly the same with continuous martingales, so the second part follows.
+\end{proof}
+\subsection{Quadratic variation}
+Physicists are used to dropping all terms above first order. It turns out that Brownian motion, and continuous local martingales in general, oscillate so wildly that the second-order terms become important. We first make the following definition:
+\begin{defi}[Uniformly on compact sets in probability]\index{u.c.p.}\index{uniformly on compact sets in probability}
+ For a sequence of processes $(X^n)$ and a process $X$, we say that $X^n \to X$ u.c.p. iff
+ \[
+ \P\left(\sup_{s \in [0, t]} |X_s^n - X_s| > \varepsilon\right) \to 0\text{ as }n \to \infty\text{ for all }t > 0, \varepsilon > 0.
+ \]
+\end{defi}
+
+\begin{thm}
+ Let $M$ be a continuous local martingale with $M_0 = 0$. Then there exists a unique (up to indistinguishability) continuous adapted increasing process $(\bra M\ket_t)_{t \geq 0}$ such that $\bra M\ket_0 = 0$ and $M_t^2 - \bra M\ket_t$ is a continuous local martingale. Moreover,
+ \[
+ \bra M \ket_t = \lim_{n \to \infty} \bra M\ket_t^{(n)},\quad \bra M\ket_t^{(n)} = \sum_{i = 1}^{\lceil 2^n t\rceil} (M_{i 2^{-n}} - M_{(i - 1)2^{-n}})^2,
+ \]
+ where the limit is in the u.c.p. sense.
+\end{thm}
+\begin{defi}[Quadratic variation]\index{quadratic variation}
+ $\bra M\ket$ is called the \term{quadratic variation} of $M$.
+\end{defi}
+It is probably more useful to understand $\bra M\ket_t$ in terms of the explicit formula, and the fact that $M_t^2 - \bra M\ket_t$ is a continuous local martingale is a convenient property.
+
+\begin{eg}
+ Let $B$ be a standard Brownian motion. Then $B_t^2 - t$ is a martingale. Thus, $\bra B \ket_t = t$.
+\end{eg}
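The dyadic approximation in the theorem is easy to observe in simulation. Below is a minimal sketch, assuming nothing beyond a discretised Brownian path: at dyadic level $n$, the sum of squared increments over $[0, 1]$ concentrates tightly around $\bra B\ket_1 = 1$.

```python
import numpy as np

# Dyadic quadratic variation <B>_1^{(n)} of a Brownian path on [0, 1].
# Each increment B_{i 2^-n} - B_{(i-1) 2^-n} is N(0, 2^-n).
rng = np.random.default_rng(1)
n = 16                                                # dyadic level, mesh 2^-n
increments = rng.normal(0.0, 2**(-n / 2), size=2**n)  # Brownian increments
qv = np.sum(increments**2)                            # <B>_1^{(n)}
assert abs(qv - 1.0) < 0.05                           # close to t = 1
```

The variance of the sum is $2 \cdot 2^{-n}$, so the approximation error decays quickly with the dyadic level.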
+
+The proof is long and mechanical, but not hard. All the magic happened when we used Doob's inequality to show that $\mathcal{M}_c^2$ and $\mathcal{M}^2$ are Hilbert spaces.
+
+\begin{proof}
+ To show uniqueness, we use that finite variation and local martingale are incompatible. Suppose $(A_t)$ and $(\tilde{A}_t)$ obey the conditions for $\bra M\ket$. Then $A_t - \tilde{A}_t = (M_t^2 - \tilde{A}_t) - (M_t^2 - A_t)$ is a continuous adapted local martingale starting at $0$. Moreover, both $A_t$ and $\tilde{A}_t$ are increasing, hence have finite variation. So $A - \tilde{A} = 0$ almost surely.
+
+ To show existence, we need to show that the limit exists and has the right property. We do this in steps.
+ \begin{claim}
+ The result holds if $M$ is in fact bounded.
+ \end{claim}
+ Suppose $|M(\omega, t)| \leq C$ for all $(\omega, t)$. Then $M \in \mathcal{M}_c^2$. Fix $T > 0$ deterministic. Let
+ \[
+ X_t^n = \sum_{i = 1}^{\lceil 2^n T \rceil} M_{(i - 1)2^{-n}} (M_{i 2^{-n} \wedge t} - M_{(i - 1) 2^{-n} \wedge t}).
+ \]
+ This is defined so that
+ \[
+ \bra M \ket_{k2^{-n}}^{(n)} = M_{k2^{-n}}^2 - 2 X_{k 2^{-n}}^n.
+ \]
+ This reduces the study of $\bra M\ket^{(n)}$ to that of $X_{k2^{-n}}^n$.
+
+ We check that $(X_t^n)$ is a Cauchy sequence in $\mathcal{M}_c^2$. The fact that it is a martingale is an immediate computation. To show it is Cauchy, for $n \geq m$, we calculate
+ \[
+ X_\infty^n - X_\infty^m = \sum_{i = 1}^{\lceil 2^n T\rceil} (M_{(i - 1)2^{-n}} - M_{\lfloor (i - 1) 2^{m - n}\rfloor 2^{-m}})(M_{i2^{-n}} - M_{(i - 1)2^{-n}}).
+ \]
+ We now take the expectation of the square to get
+ \begin{align*}
+ \E (X_\infty^n - X_\infty^m)^2 &= \E\left(\sum_{i = 1}^{\lceil 2^n T\rceil} (M_{(i\!-\!1)2^{-\!n}} - M_{\lfloor\!(i\!-\!1) 2^{m\!-\!n}\!\rfloor 2^{-\!m}})^2(M_{i 2^{-\!n}} - M_{(i\!-\!1)2^{-\!n}})^2\right)\\
+ &\leq \E \left(\sup_{|s - t| \leq 2^{-m}} |M_t - M_s|^2 \sum_{i = 1}^{\lceil 2^n T\rceil} (M_{i2^{-n}} - M_{(i - 1)2^{-n}})^2\right)\\
+ &= \E \left(\sup_{|s - t| \leq 2^{-m}} |M_t - M_s|^2 \bra M\ket_T^{(n)}\right)\\
+ &\leq \E \left(\sup_{|s - t| \leq 2^{-m}}|M_t - M_s|^4\right)^{1/2}\E \left((\bra M\ket_T^{(n)})^2\right)^{1/2}\tag{Cauchy--Schwarz}
+ \end{align*}
+ We shall show that the second factor is bounded, while the first factor tends to zero as $m \to \infty$. These are both not surprising --- the first term vanishing in the limit corresponds to $M$ being continuous, and the second term is bounded since $M$ itself is bounded.
+
+ To show that the first term tends to zero, we note that we have
+ \[
+ |M_t - M_s|^4 \leq 16 C^4,
+ \]
+ and moreover
+ \[
+ \sup_{|s - t| \leq 2^{-m}} |M_t - M_s| \to 0\text{ as }m \to \infty\text{ by uniform continuity}.
+ \]
+ So we are done by the dominated convergence theorem.
+
+ To show the second term is bounded, we do (writing $N = \lceil 2^n T\rceil$)
+ \begin{align*}
+ \E\left((\bra M\ket_T^{(n)})^2\right) &= \E \left(\left(\sum_{i = 1}^{N} (M_{i 2^{-n}} - M_{(i - 1)2^{-n}})^2 \right)^2\right)\\
+ &= \sum_{i = 1}^N \E \left( (M_{i 2^{-n}}- M_{(i - 1)2^{-n}})^4\right) \\
+ &\hphantom{{}={}}+ 2 \sum_{i = 1}^N\E \left((M_{i 2^{-\!n}} - M_{(i\!-\!1)2^{-\!n}})^2 \sum_{k = i + 1}^N (M_{k2^{-\!n}} - M_{(k\!-\!1)2^{-\!n}})^2\right)
+ \end{align*}
+ We use the martingale property and orthogonal increments to rearrange the off-diagonal term as
+ \[
+ \E\left((M_{i2^{-n}} - M_{(i - 1)2^{-n}})^2(M_{N2^{-n}} - M_{i 2^{-n}})^2\right).
+ \]
+ Taking some sups, we get
+ \begin{align*}
+ \E\left((\bra M\ket_T^{(n)})^2\right) &\leq 12 C^2 \E \left(\sum_{i = 1}^N (M_{i 2^{-n}} - M_{(i - 1)2^{-n}})^2\right)\\
+ &= 12C^2 \E \left((M_{N 2^{-n}} - M_0)^2\right)\\
+ &\leq 12 C^2 \cdot 4 C^2.
+ \end{align*}
+ So done.
+
+ So we now have $X^n \to X$ in $\mathcal{M}^2_c$ for some $X \in \mathcal{M}_c^2$. In particular, we have
+ \[
+ \left\|\sup_t |X_t^n - X_t|\right\|_{L^2} \to 0
+ \]
+ So we know that
+ \[
+ \sup_t |X_t^n - X_t| \to 0
+ \]
+ almost surely along a subsequence $\Lambda$.
+
+ Let $N \subseteq \Omega$ be the null event on which this convergence fails. We define
+ \[
+ A_t^{(T)} =
+ \begin{cases}
+ M_t^2 - 2X_t& \omega \in \Omega \setminus N\\
+ 0 & \omega \in N
+ \end{cases}.
+ \]
+ Then $A^{(T)}$ is continuous, adapted since $M$ and $X$ are, and $(M_{t \wedge T}^2 - A^{(T)}_{t \wedge T})_t$ is a martingale since $X$ is. Finally, $A^{(T)}$ is increasing since $M_t^2 - 2X_t^n$ is increasing on $2^{-n} \Z \cap [0, T]$ and the convergence is uniform. So this $A^{(T)}$ satisfies all the properties we want $\bra M\ket$ to satisfy, except that everything is stopped at the deterministic time $T$.
+
+ We next observe that for any $T \geq 1$, $A_{t \wedge T}^{(T)} = A_{t \wedge T}^{(T + 1)}$ for all $t$ almost surely. This essentially follows from the same uniqueness argument as we had at the beginning of the proof. Thus, there is a process $(\bra M\ket_t)_{t \geq 0}$ such that
+ \[
+ \bra M\ket_t = A_t^{(T)}
+ \]
+ for all $t \in [0, T]$ and $T \in \N$, almost surely. Then this is the desired process. So we have constructed $\bra M\ket$ in the case where $M$ is bounded.
+
+ \begin{claim}
+ $\bra M\ket^{(n)} \to \bra M\ket$ u.c.p.
+ \end{claim}
+ Recall that
+ \[
+ \bra M\ket_t^{(n)} = M^2_{2^{-n}\lfloor 2^n t\rfloor} - 2 X^n_{2^{-n} \lfloor 2^n t\rfloor}.
+ \]
+ We also know that
+ \[
+ \sup_{t \leq T} |X_t^n - X_t| \to 0
+ \]
+ in $L^2$, hence also in probability. So we have
+ \begin{multline*}
+ \sup_{t \leq T}|\bra M\ket_t - \bra M\ket^{(n)}_t| \leq \sup_{t \leq T} |M^2_{2^{-n}\lfloor 2^n t\rfloor} - M_t^2| \\
+ + 2\sup_{t \leq T} |X^n_{2^{-n}\lfloor 2^n t\rfloor} - X_{2^{-n} \lfloor 2^n t\rfloor}| + 2\sup_{t \leq T} |X_{2^{-n}\lfloor 2^n t\rfloor} - X_t|.
+ \end{multline*}
+ The first and last terms $\to 0$ in probability since $M$ and $X$ are uniformly continuous on $[0, T]$. The second term converges to zero by our previous assertion. So we are done.
+
+ \begin{claim}
+ The theorem holds for $M$ any continuous local martingale.
+ \end{claim}
+ We let $T_n = \inf\{t \geq 0 : |M_t| \geq n\}$. Then $(T_n)$ reduces $M$, and $M^{T_n}$ is a bounded continuous martingale. We set
+ \[
+ A^n = \bra M^{T_n}\ket.
+ \]
+ Then $(A_t^n)$ and $(A_{t \wedge T_n}^{n + 1})$ are indistinguishable for $t < T_n$ by the uniqueness argument. Thus there is a process $\bra M\ket$ such that $(\bra M \ket_{t \wedge T_n})_t$ and $(A_t^n)_t$ are indistinguishable for all $n$. Clearly, $\bra M\ket$ is increasing since the $A^n$ are, and $M^2_{t \wedge T_n} - \bra M\ket_{t \wedge T_n}$ is a martingale for every $n$, so $M^2_t - \bra M\ket_t$ is a continuous local martingale.
+
+ \begin{claim}
+ $\bra M\ket^{(n)} \to \bra M\ket$ u.c.p.
+ \end{claim}
+ We have seen
+ \[
+ \bra M^{T_k}\ket^{(n)} \to \bra M^{T_k}\ket\text{ u.c.p.}
+ \]
+ for every $k$. So
+ \[
+ \P\left(\sup_{t \leq T} |\bra M\ket_t^{(n)} - \bra M\ket_t| > \varepsilon\right) \leq \P(T_k < T) + \P\left(\sup_{t \leq T} |\bra M^{T_k}\ket_t^{(n)} - \bra M^{T_k}\ket_t| > \varepsilon\right).
+ \]
+ So we can first pick $k$ large enough that the first term is small, then pick $n$ large enough that the second is small.
+\end{proof}
+
+There are a few easy consequences of this theorem.
+
+\begin{fact}
+ Let $M$ be a continuous local martingale, and let $T$ be a stopping time. Then almost surely for all $t \geq 0$,
+ \[
+ \bra M^T\ket_t = \bra M\ket_{t \wedge T}.
+ \]
+\end{fact}
+
+\begin{proof}
+ Since $M_t^2 - \bra M\ket_t$ is a continuous local martingale, so is $M^2_{t \wedge T} - \bra M\ket_{t \wedge T} = (M^T)_t^2 - \bra M\ket_{t \wedge T}$. So we are done by uniqueness.
+\end{proof}
+
+\begin{fact}
+ Let $M$ be a continuous local martingale with $M_0 = 0$. Then $M = 0$ iff $\bra M\ket = 0$.
+\end{fact}
+
+\begin{proof}
+ If $M = 0$, then $\bra M \ket = 0$. Conversely, if $\bra M\ket = 0$, then $M^2$ is a non-negative continuous local martingale, hence a supermartingale, and so $\E M_t^2 \leq \E M_0^2 = 0$.
+\end{proof}
+
+\begin{prop}
+ Let $M \in \mathcal{M}_c^2$. Then $M^2 - \bra M\ket$ is a uniformly integrable martingale, and
+ \[
+ \|M - M_0\|_{\mathcal{M}^2} = (\E \bra M\ket_\infty)^{1/2}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We will show that $\bra M\ket_\infty \in L^1$. This then implies
+ \[
+ |M_t^2 - \bra M\ket_t| \leq \sup_{t \geq 0} M_t^2 + \bra M\ket_\infty.
+ \]
+ The right-hand side is in $L^1$: $\sup_{t \geq 0} M_t^2$ is integrable by Doob's inequality, and $\bra M\ket_\infty \in L^1$ by assumption. Since $M^2 - \bra M\ket$ is a local martingale dominated by an integrable random variable, it is in fact a uniformly integrable martingale.
+
+ To show $\bra M\ket_\infty \in L^1$, we let
+ \[
+ S_n = \inf \{t \geq 0: \bra M\ket_t \geq n\}.
+ \]
+ Then $S_n \to \infty$, $S_n$ is a stopping time and moreover $\bra M\ket_{t \wedge S_n} \leq n$. So we have
+ \[
+ M_{t \wedge S_n}^2 - \bra M\ket_{t \wedge S_n} \leq n + \sup_{t \geq 0} M_t^2,
+ \]
+ and the second term is in $L^1$. So $M_{t \wedge S_n}^2 - \bra M\ket_{t \wedge S_n}$ is a true martingale.
+
+ So
+ \[
+ \E M_{t \wedge S_n}^2 - \E M_0^2 = \E \bra M\ket_{t \wedge S_n}.
+ \]
+ Taking the limit $t\to \infty$, we know $\E M_{t \wedge S_n}^2 \to \E M^2_{S_n}$ by dominated convergence. Since $\bra M\ket_{t \wedge S_n}$ is increasing, we also have $\E \bra M\ket_{t \wedge S_n} \to \E \bra M\ket_{S_n}$ by \emph{monotone} convergence. We can take $n \to \infty$, and by the same justification, we have
+ \[
+ \E \bra M\ket_\infty \leq \E M_\infty^2 - \E M_0^2 = \E (M_\infty - M_0)^2 < \infty.\qedhere
+ \]
+\end{proof}
+
+\subsection{Covariation}
+We know $\mathcal{M}_c^2$ not only has a norm, but also an inner product. This can also be reflected in the bracket by the polarization identity, and it is natural to define
+
+\begin{defi}[Covariation]\index{covariation}
+ Let $M, N$ be two continuous local martingales. Define the \emph{covariation} (or simply the \term{bracket}) between $M$ and $N$ to be the process
+ \[
+ \bra M, N\ket_t = \frac{1}{4} (\bra M + N\ket_t - \bra M -N\ket_t).
+ \]
+\end{defi}
+If in fact $M, N \in \mathcal{M}_c^2$, then putting $t = \infty$ recovers the inner product on $\mathcal{M}^2$.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\bra M, N\ket$ is the unique (up to indistinguishability) finite variation process such that $M_t N_t - \bra M, N\ket_t$ is a continuous local martingale.
+ \item The mapping $(M, N) \mapsto \bra M, N\ket$ is bilinear and symmetric.
+ \item
+ \begin{align*}
+ \bra M, N\ket_t &= \lim_{n \to \infty} \bra M, N\ket_t^{(n)}\text{ u.c.p.}\\
+ \bra M, N\ket^{(n)}_t &= \sum_{i = 1}^{\lceil 2^n t\rceil} (M_{i2^{-n}} - M_{(i - 1)2^{-n}})(N_{i2^{-n}} - N_{(i - 1)2^{-n}}).
+ \end{align*}
+ \item For every stopping time $T$,
+ \[
+ \bra M^T, N^T\ket_t = \bra M^T, N\ket_t = \bra M, N\ket_{t \wedge T}.
+ \]
+ \item If $M, N \in \mathcal{M}_c^2$, then $M_t N_t - \bra M, N\ket_t$ is a uniformly integrable martingale, and
+ \[
+ (M - M_0, N - N_0)_{\mathcal{M}^2} = \E \bra M, N\ket_\infty.\fakeqed
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{eg}
+ Let $B, B'$ be two independent Brownian motions (with respect to the same filtration). Then $\bra B, B'\ket = 0$.
+\end{eg}
+
+\begin{proof}
+ Assume $B_0 = B_0' = 0$. Then $X_{\pm} = \frac{1}{\sqrt{2}}(B \pm B')$ are Brownian motions, so $\bra X_{\pm}\ket_t = t$, hence $\bra B + B'\ket_t = 2t = \bra B - B'\ket_t$. So their difference, and hence $\bra B, B'\ket$, vanishes.
+\end{proof}
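This too is easy to see in simulation. The sketch below (with illustrative names, not part of the notes) computes the discrete covariation of two independent Brownian paths on $[0, 1]$, and also checks that the polarization identity holds exactly at the discrete level.

```python
import numpy as np

# Discrete covariation <B, B'>_1^{(n)} of two independent Brownian paths.
rng = np.random.default_rng(2)
n = 16
dB  = rng.normal(0.0, 2**(-n / 2), size=2**n)   # increments of B
dBp = rng.normal(0.0, 2**(-n / 2), size=2**n)   # increments of B', independent
cov = np.sum(dB * dBp)                          # <B, B'>_1^{(n)}, close to 0
# Polarization holds exactly for the discrete sums:
pol = (np.sum((dB + dBp)**2) - np.sum((dB - dBp)**2)) / 4
assert abs(cov) < 0.05
assert abs(cov - pol) < 1e-9
```

The discrete covariation has mean $0$ and variance $2^{-n}$, so it vanishes rapidly as the mesh shrinks.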
+
+An important result about the covariation is the following Cauchy--Schwarz like inequality:
+\begin{prop}[Kunita--Watanabe]\index{Kunita--Watanabe}
+ Let $M, N$ be continuous local martingales and let $H, K$ be two (previsible) processes. Then almost surely
+ \[
+ \int_0^\infty |H_s| |K_s| |\d \bra M, N\ket_s| \leq \left(\int_0^\infty H_s^2 \;\d \bra M\ket_s\right)^{1/2} \left(\int_0^\infty K_s^2 \;\d \bra N\ket_s\right)^{1/2}.
+ \]
+\end{prop}
+
+In fact, this \emph{is} Cauchy--Schwarz. All we have to do is to take approximations and take limits and make sure everything works out well.
+\begin{proof}
+ For convenience, we write
+ \[
+ \bra M, N\ket_s^t = \bra M, N\ket_t - \bra M, N\ket_s.
+ \]
+ \begin{claim}
+ For all $0 \leq s \leq t$, we have
+ \[
+ |\bra M, N\ket_s^t| \leq \sqrt{\bra M, M\ket_s^t} \sqrt{\bra N, N\ket_s^t}.
+ \]
+ \end{claim}
+ By continuity, we can assume that $s, t$ are dyadic rationals. Then
+ \begin{align*}
+ |\bra M, N\ket_s^t| &= \lim_{n \to \infty} \left| \sum_{i = 2^ns + 1}^{2^n t} (M_{i 2^{-n}} - M_{(i - 1)2^{-n}})(N_{i 2^{-n}} -N_{(i - 1)2^{-n}})\right|\\
+ &\leq \lim_{n \to \infty} \left| \sum_{i = 2^ns + 1}^{2^n t} (M_{i 2^{-n}} - M_{(i - 1)2^{-n}})^2\right|^{1/2}\times\\
+ &\hphantom{aaaaaaaaaaa}\left| \sum_{i = 2^n s + 1}^{2^n t} (N_{i 2^{-n}} - N_{(i - 1)2^{-n}})^2\right|^{1/2}\tag{Cauchy--Schwarz}\\
+ &= \left(\bra M, M\ket_s^t \right)^{1/2}\left(\bra N, N\ket_s^t\right)^{1/2},
+ \end{align*}
+ where all equalities are u.c.p.
+
+ \begin{claim}
+ For all $0 \leq s < t$, we have
+ \[
+ \int_s^t |\d \bra M, N\ket_u| \leq \sqrt{\bra M, M\ket_s^t}\sqrt{\bra N, N\ket_s^t}.
+ \]
+ \end{claim}
+ Indeed, for any subdivision $s = t_0 < t_1 < \cdots < t_n = t$, we have
+ \begin{align*}
+ \sum_{i = 1}^n |\bra M, N\ket_{t_{i - 1}}^{t_i}| &\leq \sum_{i = 1}^n \sqrt{\bra M, M\ket_{t_{i - 1}}^{t_i}} \sqrt{\bra N, N\ket_{t_{i - 1}}^{t_i}}\\
+ &\leq \left(\sum_{i = 1}^n \bra M, M\ket_{t_{i - 1}}^{t_i}\right)^{1/2} \left(\sum_{i = 1}^n \bra N, N\ket_{t_{i - 1}}^{t_i}\right)^{1/2}. \tag{Cauchy--Schwarz}
+ \end{align*}
+ Taking the supremum over all subdivisions, the claim follows.
+
+ \begin{claim}
+ For all bounded Borel sets $B \subseteq [0, \infty)$, we have
+ \[
+ \int_B |\d \bra M, N\ket_u| \leq \sqrt{\int_B \d \bra M\ket_u} \sqrt{\int_B \d \bra N\ket_u}.
+ \]
+ \end{claim}
+ We already know this is true if $B$ is an interval. If $B$ is a finite union of intervals, then we apply Cauchy--Schwarz. By a monotone class argument, we can extend to all Borel sets.
+
+ \begin{claim}
+ The theorem holds for
+ \[
+ H = \sum_{\ell = 1}^n h_\ell \mathbf{1}_{B_\ell},\quad K = \sum_{\ell = 1}^n k_\ell \mathbf{1}_{B_\ell}
+ \]
+ for disjoint bounded Borel sets $B_\ell \subseteq [0, \infty)$.
+ \end{claim}
+ We have
+ \begin{align*}
+ \int |H_s K_s| \;|\d \bra M, N\ket_s| &\leq \sum_{\ell = 1}^n |h_\ell k_\ell| \int_{B_\ell} |\d \bra M, N\ket_s|\\
+ &\leq \sum_{\ell = 1}^n |h_\ell k_\ell| \left(\int_{B_\ell} \d \bra M\ket_s\right)^{1/2} \left(\int_{B_\ell} \d \bra N\ket_s\right)^{1/2}\\
+ &\leq \left(\sum_{\ell = 1}^n h_\ell^2 \int_{B_\ell} \d \bra M\ket_s\right)^{1/2} \left(\sum_{\ell = 1}^n k_\ell^2 \int_{B_\ell} \d \bra N\ket_s\right)^{1/2}
+ \end{align*}
+ To finish the proof, approximate general $H$ and $K$ by step functions and take the limit.
+\end{proof}
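The remark that this \emph{is} Cauchy--Schwarz is transparent at the discrete level. In the sketch below (illustrative names only), we approximate $\d \bra M, N\ket$ over a grid by the products of increments $\d M_i\, \d N_i$; the Kunita--Watanabe bound then becomes literally the Cauchy--Schwarz inequality applied to the vectors $(H_i \,\d M_i)$ and $(K_i \,\d N_i)$.

```python
import numpy as np

# Discrete analogue of Kunita--Watanabe: with d<M,N>_i ~ dM_i dN_i on a grid,
# the inequality is Cauchy--Schwarz for the vectors H dM and K dN.
rng = np.random.default_rng(5)
n = 2**12
dM = rng.normal(0.0, n**-0.5, size=n)    # increments of M
dN = rng.normal(0.0, n**-0.5, size=n)    # increments of N
H = rng.uniform(-1, 1, size=n)           # integrand against M
K = rng.uniform(-1, 1, size=n)           # integrand against N
lhs = np.sum(np.abs(H) * np.abs(K) * np.abs(dM * dN))
rhs = np.sqrt(np.sum(H**2 * dM**2)) * np.sqrt(np.sum(K**2 * dN**2))
assert lhs <= rhs + 1e-12
```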
+
+\subsection{Semi-martingale}
+\begin{defi}[Semi-martingale]\index{semi-martingale}
+ A (continuous) adapted process $X$ is a \emph{(continuous) semi-martingale} if
+ \[
+ X = X_0 + M + A,
+ \]
+ where $X_0 \in \mathcal{F}_0$, $M$ is a continuous local martingale with $M_0 = 0$, and $A$ is a continuous finite variation process with $A_0 = 0$.
+\end{defi}
+This decomposition is unique up to indistinguishability.
+
+\begin{defi}[Quadratic variation]\index{quadratic variation!semi-martingale}
+ Let $X = X_0 + M + A$ and $X' = X_0' + M' + A'$ be (continuous) semi-martingales. Set
+ \[
+ \bra X\ket = \bra M\ket, \quad \bra X, X'\ket = \bra M, M'\ket.
+ \]
+\end{defi}
+
+This definition makes sense, because continuous finite variation processes have zero quadratic variation.
+
+\begin{ex}
+ We have
+ \[
+ \bra X, Y\ket_t^{(n)} = \sum_{i = 1}^{\lceil 2^n t\rceil} (X_{i 2^{-n}} - X_{(i - 1)2^{-n}})(Y_{i2^{-n}} - Y_{(i - 1)2^{-n}}) \to \bra X, Y\ket\text{ u.c.p.}
+ \]
+\end{ex}
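The exercise also explains why the definition $\bra X\ket = \bra M\ket$ is forced: the finite variation part contributes nothing to the dyadic sums. A minimal simulation sketch (illustrative names, drift chosen for concreteness) for the semi-martingale $X_t = B_t + t$ on $[0, 1]$:

```python
import numpy as np

# Dyadic quadratic variation of X_t = B_t + t (Brownian motion plus a finite
# variation drift) on [0, 1]. The drift adds O(2^-n) and washes out, so the
# sum still concentrates around <X>_1 = <B>_1 = 1.
rng = np.random.default_rng(3)
n = 16
dB = rng.normal(0.0, 2**(-n / 2), size=2**n)  # Brownian increments
dX = dB + 2**(-n)                             # increments of X_t = B_t + t
qv = np.sum(dX**2)
assert abs(qv - 1.0) < 0.05
```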
+
+\section{The stochastic integral}
+\subsection{Simple processes}
+We now have all the background required to define the stochastic integral, and we can start constructing it. As in the case of the Lebesgue integral, we first define it for simple processes, and then extend to general processes by taking a limit. Recall that we have
+
+\begin{defi}[Simple process]\index{simple process}
+ The space of \emph{simple processes} $\mathcal{E}$ consists of functions $H: \Omega \times [0, \infty) \to \R$ that can be written as
+ \[
+ H_t(\omega) = \sum_{i = 1}^n H_{i - 1}(\omega) \mathbf{1}_{(t_{i - 1}, t_i]} (t)
+ \]
+ for some $0 \leq t_0 \leq t_1 \leq \cdots \leq t_n$ and bounded random variables $H_i \in \mathcal{F}_{t_i}$.
+\end{defi}
+
+\begin{defi}[$H\cdot M$]
+ For $M \in \mathcal{M}^2$ and $H \in \mathcal{E}$, we set
+ \[
+ \int_0^t H \;\d M = (H\cdot M)_t = \sum_{i = 1}^n H_{i - 1} (M_{t_i \wedge t} - M_{t_{i - 1} \wedge t}).
+ \]
+\end{defi}
+If $M$ were of finite variation, then this is the same as what we have previously seen.
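The definition of $(H \cdot M)_t$ is a finite sum, so it can be written down directly. The following sketch (illustrative names, not part of the notes) evaluates it for a discretised path, and checks the finite variation case $M(t) = t$ against the ordinary Stieltjes integral of the step function.

```python
# (H . M)_t = sum_i H_{i-1} (M_{t_i ^ t} - M_{t_{i-1} ^ t}) for a simple process
# H = sum_i H_{i-1} 1_{(t_{i-1}, t_i]}, given M as a function of time.
def simple_integral(H_vals, t_grid, M, t):
    total = 0.0
    for i in range(1, len(t_grid)):
        total += H_vals[i - 1] * (M(min(t_grid[i], t)) - M(min(t_grid[i - 1], t)))
    return total

# Deterministic check: M(t) = t is finite variation, so the result is the
# ordinary integral of the step function H = 2 on (0,1], 5 on (1,3], up to t = 2.
M = lambda s: s
val = simple_integral([2.0, 5.0], [0.0, 1.0, 3.0], M, t=2.0)
assert abs(val - 7.0) < 1e-12   # 2*(1-0) + 5*(2-1) = 7
```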
+
+Recall that for the Lebesgue integral, extending this definition to general functions required results like monotone convergence. Here we need some similar results that put bounds on how large the integral can be. In fact, we get something better than a bound.
+
+\begin{prop}
+ If $M \in \mathcal{M}_c^2$ and $H \in \mathcal{E}$, then $H \cdot M \in \mathcal{M}_c^2$ and
+ \[
+ \|H \cdot M\|_{\mathcal{M}^2}^2 = \E \left(\int_0^\infty H^2_s \;\d \bra M\ket_s\right).\tag{$*$}
+ \]
+\end{prop}
+
+\begin{proof}
+ We first show that $H \cdot M \in \mathcal{M}_c^2$. By linearity, we only have to check it for
+ \[
+ X_t^i = H_{i - 1} (M_{t_i \wedge t} - M_{t_{i - 1} \wedge t}).
+ \]
+ We have to check that $\E(X_t^i - X_s^i \mid \mathcal{F}_s) = 0$ for all $t > s$, and the only non-trivial case is when $s \geq t_{i - 1}$. Then $H_{i - 1}$ is $\mathcal{F}_s$-measurable and $M_{t_{i - 1} \wedge t} = M_{t_{i - 1}} = M_{t_{i - 1} \wedge s}$, so
+ \[
+ \E (X_t^i - X_s^i \mid \mathcal{F}_s) = H_{i - 1} \E (M_{t_i \wedge t} - M_{t_i \wedge s} \mid \mathcal{F}_s) = 0.
+ \]
+ We also check that
+ \[
+ \|X^i\|_{\mathcal{M}^2} \leq 2 \|H\|_{\infty} \|M\|_{\mathcal{M}^2}.
+ \]
+ So it is bounded. So $H \cdot M \in \mathcal{M}_c^2$.
+
+ To prove $(*)$, we note that the $X^i$ are orthogonal and that
+ \[
+ \bra X^i\ket_t = H_{i - 1}^2 (\bra M\ket_{t_i \wedge t} - \bra M\ket_{t_{i - 1} \wedge t}).
+ \]
+ So we have
+ \[
+ \bra H \cdot M\ket_t = \sum_i \bra X^i\ket_t = \sum_i H_{i - 1}^2 (\bra M\ket_{t_i \wedge t} - \bra M\ket_{t_{i - 1} \wedge t}) = \int_0^t H_s^2 \;\d \bra M\ket_s.
+ \]
+ In particular,
+ \[
+ \|H \cdot M\|_{\mathcal{M}^2}^2 = \E \bra H \cdot M\ket_\infty = \E \left(\int_0^\infty H_s^2 \;\d \bra M\ket_s\right).\qedhere
+ \]
+\end{proof}
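The identity $(*)$ can be tested by Monte Carlo. In the sketch below (names and the particular simple process are ours, chosen for illustration), $M$ is Brownian motion on $[0, 1]$, so $\d \bra M\ket_s = \d s$, and $H$ is the deterministic simple process equal to $1$ on $(0, \tfrac12]$ and $2$ on $(\tfrac12, 1]$; then $\E \int_0^1 H_s^2 \;\d s = \tfrac12 + 4 \cdot \tfrac12 = 2.5$.

```python
import numpy as np

# Monte Carlo check of the Ito isometry E[(H.M)_1^2] = E int_0^1 H_s^2 d<M>_s
# for M Brownian motion and H = 1 on (0, 1/2], H = 2 on (1/2, 1].
rng = np.random.default_rng(4)
n_paths = 20_000
dB1 = rng.normal(0.0, np.sqrt(0.5), size=n_paths)  # B_{1/2} - B_0
dB2 = rng.normal(0.0, np.sqrt(0.5), size=n_paths)  # B_1 - B_{1/2}
HdM = 1.0 * dB1 + 2.0 * dB2                        # (H . M)_1 on each path
lhs = np.mean(HdM**2)                              # estimate of E[(H.M)_1^2]
rhs = 1.0**2 * 0.5 + 2.0**2 * 0.5                  # int_0^1 H_s^2 ds = 2.5
assert abs(lhs - rhs) < 0.15
```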
+
+\begin{prop}
+ Let $M \in \mathcal{M}_c^2$ and $H \in \mathcal{E}$. Then
+ \[
+ \bra H \cdot M, N\ket = H \cdot \bra M, N\ket
+ \]
+ for all $N \in \mathcal{M}^2$.
+\end{prop}
+In other words, the stochastic integral commutes with the bracket.
+
+\begin{proof}
+ Write $H \cdot M = \sum X^i = \sum H_{i - 1}(M_{t_i \wedge t} - M_{t_{i - 1} \wedge t})$ as before. Then
+ \[
+ \bra X^i, N\ket_t = H_{i - 1}\bra M_{t_i \wedge t} - M_{t_{i - 1} \wedge t}, N\ket = H_{i - 1} (\bra M, N\ket_{t_i \wedge t} - \bra M, N\ket_{t_{i - 1}\wedge t}).\qedhere
+ \]
+\end{proof}
+
+\subsection{\tph{It\^o}{Ito}{Itô} isometry}
+We now try to extend the above definition to something more general than simple processes.
+\begin{defi}[$L^2(M)$]\index{$L^2(M)$}
+ Let $M \in \mathcal{M}_c^2$. Define $L^2(M)$ to be the space of (equivalence classes of) previsible $H: \Omega \times [0, \infty) \to \R$ such that
+ \[
+ \|H\|_{L^2(M)} = \E\left(\int_0^\infty H_s^2 \;\d \bra M\ket_s\right)^{1/2} < \infty.
+ \]
+ For $H, K \in L^2(M)$, we set
+ \[
+ (H, K)_{L^2(M)} = \E \left(\int_0^\infty H_s K_s \;\d \bra M\ket_s\right).
+ \]
+\end{defi}
+In fact, $L^2(M)$ is equal to $L^2(\Omega \times [0, \infty), \mathcal{P}, \d P\;\d \bra M\ket)$, where $\mathcal{P}$ is the previsible $\sigma$-algebra, and in particular is a Hilbert space.
+
+\begin{prop}
+ Let $M \in \mathcal{M}_c^2$. Then $\mathcal{E}$ is dense in $L^2(M)$.
+\end{prop}
+
+\begin{proof}
+ Since $L^2(M)$ is a Hilbert space, it suffices to show that if $(K, H) = 0$ for all $H \in \mathcal{E}$, then $K = 0$.
+
+ So assume that $(K, H) = 0$ for all $H \in \mathcal{E}$ and
+ \[
+ X_t = \int_0^t K_s \;\d \bra M\ket_s.
+ \]
+ Then $X$ is a well-defined finite variation process, and $X_t \in L^1$ for all $t$. It suffices to show that $X$ is a continuous martingale: then $X$ is a finite variation continuous martingale starting at $0$, hence vanishes by our earlier theorem.
+
+ Let $0 \leq s < t$ and $F \in \mathcal{F}_s$ bounded. We let $H = F 1_{(s, t]} \in \mathcal{E}$. By assumption, we know
+ \[
+ 0 = (K, H) = \E \left(F \int_s^t K_u\; \d \bra M\ket_u\right) = \E (F (X_t - X_s)).
+ \]
+ Since this holds for all bounded $\mathcal{F}_s$-measurable $F$, we have shown that
+ \[
+ \E(X_t \mid \mathcal{F}_s) = X_s.
+ \]
+ So $X$ is a (continuous) martingale, and we are done.
+\end{proof}
+
+\begin{thm}
+ Let $M \in \mathcal{M}_c^2$. Then
+ \begin{enumerate}
+ \item The map $H \in \mathcal{E} \mapsto H \cdot M \in \mathcal{M}_c^2$ extends uniquely to an isometry $L^2(M) \to \mathcal{M}^2_c$, called the \term{It\^o isometry}.
+ \item For $H \in L^2(M)$, $H \cdot M$ is the unique martingale in $\mathcal{M}_c^2$ such that
+ \[
+ \bra H \cdot M, N\ket = H \cdot \bra M, N\ket
+ \]
+ for all $N \in \mathcal{M}_c^2$, where the integral on the LHS is the stochastic integral (as above) and the RHS is the finite variation integral.
+ \item If $T$ is a stopping time, then $(1_{[0, T]} H) \cdot M = (H \cdot M)^T = H \cdot M^T$.
+ \end{enumerate}
+\end{thm}
+
+\begin{defi}[Stochastic integral]\index{stochastic integral}
+ $H \cdot M$ is the \emph{stochastic integral} of $H$ with respect to $M$ and we also write
+ \[
+ (H \cdot M)_t = \int_0^t H_s \;\d M_s.
+ \]
+\end{defi}
+It is important that the integral of a martingale is still a martingale. After proving It\^o's formula, we will use this fact to show that a lot of things are in fact martingales in a rather systematic manner. For example, it will be rather effortless to show that $B_t^2 - t$ is a martingale when $B_t$ is a standard Brownian motion.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We have already shown that this map is an isometry when restricted to $\mathcal{E}$. So extend by completeness of $\mathcal{M}_c^2$ and denseness of $\mathcal{E}$.
+ \item Again the equation to show is known for simple $H$, and we want to show it is preserved under taking limits. Suppose $H^n \to H$ in $L^2(M)$ with $H^n \in \mathcal{E}$. Then $H^n \cdot M \to H \cdot M$ in $\mathcal{M}_c^2$. We want to show that
+ \begin{align*}
+ \bra H \cdot M, N\ket_\infty &= \lim_{n \to \infty} \bra H^n \cdot M, N\ket_\infty\text{ in }L^1.\\
+ H \cdot \bra M, N\ket &= \lim_{n \to \infty} H^n \cdot \bra M, N\ket\text{ in }L^1.
+ \end{align*}
+ for all $N \in \mathcal{M}_c^2$.
+
+ To show the first holds, we use the Kunita--Watanabe inequality to get
+ \[
+ \E |\bra H \cdot M - H^n \cdot M, N\ket_\infty| \leq \left(\E \bra H \cdot M - H^n \cdot M\ket_\infty\right)^{1/2} \left(\E \bra N\ket_\infty\right)^{1/2},
+ \]
+ and the first factor is $\|H \cdot M - H^n \cdot M\|_{\mathcal{M}^2} \to 0$, while the second is finite since $N \in \mathcal{M}_c^2$. The second follows from
+ \[
+ \E \left|((H - H^n) \cdot \bra M, N\ket)_\infty\right| \leq \|H - H^n\|_{L^2(M)} \|N\|_{\mathcal{M}^2} \to 0.
+ \]
+ So we know that $\bra H \cdot M, N\ket_\infty = (H \cdot \bra M, N\ket)_\infty$. We can then replace $N$ by the stopped process $N^t$ to get $\bra H \cdot M, N\ket_t = (H \cdot \bra M, N\ket)_t$.
+
+ To see uniqueness, suppose $X \in \mathcal{M}_c^2$ is another such martingale. Then we have $\bra X - H \cdot M, N\ket = 0$ for all $N$. Take $N = X - H \cdot M$, and then we are done.
+ \item For $N \in \mathcal{M}^2_c$, we have
+ \[
+ \bra (H \cdot M)^T, N\ket_t = \bra H \cdot M, N \ket_{t \wedge T} = H \cdot \bra M, N\ket_{t \wedge T} = (H 1_{[0, T]} \cdot \bra M, N\ket)_t
+ \]
+ for every $N$. So we have shown that
+ \[
+ (H \cdot M)^T = (1_{[0, T]} H) \cdot M
+ \]
+ by (ii). To prove the second equality, we have
+ \[
+ \bra H \cdot M^T, N\ket_t = H \cdot \bra M^T, N\ket_t = H \cdot \bra M, N\ket_{t \wedge T} = ((H1_{[0, T]}) \cdot \bra M, N\ket)_t.\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+
+Note that (ii) can be written as
+\[
+ \left\bra \int_0^{(-)} H_s \;\d M_s, N\right\ket_t = \int_0^t H_s \;\d \bra M, N\ket_s.
+\]
+\begin{cor}
+ \[
+ \bra H \cdot M, K \cdot N\ket = H \cdot (K \cdot \bra M, N\ket) = (HK) \cdot \bra M, N\ket.
+ \]
+ In other words,
+ \[
+ \left\bra \int_0^{(-)} H_s \;\d M_s, \int_0^{(-)} K_s\;\d N_s\right\ket_t = \int_0^t H_s K_s \;\d \bra M, N\ket_s.\fakeqed
+ \]
+\end{cor}
+
+\begin{cor}
+ Since $H \cdot M$ and $(H \cdot M)(K \cdot N) - \bra H \cdot M, K \cdot N\ket$ are martingales starting at $0$, we have
+ \begin{align*}
+ \E \left(\int_0^t H_s\;\d M_s\right) &= 0\\
+ \E \left(\left(\int_0^t H_s \;\d M_s\right) \left(\int_0^t K_s \;\d N_s \right)\right) &= \int_0^t H_s K_s \;\d \bra M, N\ket_s.\fakeqed
+ \end{align*}
+\end{cor}
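As a numerical aside (not part of the notes), the second identity is easy to sanity-check by Monte Carlo. The sketch below, assuming only \texttt{numpy}, takes $M = N = B$ a standard Brownian motion and $H = K = B$ up to a fixed time $t$, so that $\E\left(\left(\int_0^t B_s \,\d B_s\right)^2\right) = \E \int_0^t B_s^2 \,\d s = t^2/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, t = 20000, 400, 1.0
dt = t / n_steps

# Brownian increments and paths (B_0 = 0), one path per row.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # left endpoints

# Ito integral approximated by left-endpoint Riemann sums.
ito = np.sum(B_left * dB, axis=1)

lhs = np.mean(ito**2)                          # E[(int B dB)^2]
rhs = np.mean(np.sum(B_left**2, axis=1) * dt)  # E[int B^2 ds]
print(lhs, rhs, t**2 / 2)  # all three should be close
```

Both estimates agree with $t^2/2$ up to Monte Carlo and discretization error.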
+
+\begin{cor}
+ Let $H \in L^2 (M)$, then $HK \in L^2(M)$ iff $K \in L^2(H \cdot M)$, in which case
+ \[
+ (KH) \cdot M = K \cdot (H \cdot M).
+ \]
+\end{cor}
+
+\begin{proof}
+ We have
+ \[
+ \E \left(\int_0^\infty K_s^2 H_s^2 \;\d \bra M\ket_s\right) = \E \left(\int_0^\infty K_s^2 \;\d \bra H \cdot M\ket_s \right),
+ \]
+ so $\|K\|_{L^2(H \cdot M)} = \|HK\|_{L^2(M)}$. For $N \in \mathcal{M}_c^2$, we have
+ \[
+ \bra (KH) \cdot M, N\ket_t = (KH \cdot \bra M, N\ket)_t = (K \cdot (H \cdot \bra M, N\ket))_t = (K \cdot \bra H \cdot M, N\ket)_t.\qedhere
+ \]
+\end{proof}
+
+\subsection{Extension to local martingales}
+We have now defined the stochastic integral for continuous martingales. We next go through some formalities to extend this to local martingales, and ultimately to semi-martingales. We are not doing this just for fun. Rather, when we later prove results like It\^o's formula, even when we put in continuous (local) martingales, we usually end up with some semi-martingales. So it is useful to be able to deal with semi-martingales in general.
+
+\begin{defi}[$L_{bc}^2(M)$]\index{$L_{bc}^2(M)$}
+ Let $L_{bc}^2(M)$ be the space of previsible $H$ such that
+ \[
+ \int_0^t H_s^2 \;\d \bra M\ket_s < \infty\text{ a.s.}
+ \]
+ for all finite $t > 0$.
+\end{defi}
+
+\begin{thm}
+ Let $M$ be a continuous local martingale.
+ \begin{enumerate}
+ \item For every $H \in L_{bc}^2(M)$, there is a unique continuous local martingale $H \cdot M$ with $(H \cdot M)_0= 0 $ and
+ \[
+ \bra H \cdot M, N\ket = H \cdot \bra M, N\ket
+ \]
+ for all continuous local martingales $N$.
+ \item If $T$ is a stopping time, then
+ \[
+ (\mathbf{1}_{[0, T]}H) \cdot M = (H \cdot M)^T = H \cdot M^T.
+ \]
+ \item If $H \in L^2_{bc}(M)$ and $K$ is previsible, then $K \in L^2_{bc}(H \cdot M)$ iff $HK \in L^2_{bc}(M)$, and then
+ \[
+ K \cdot (H \cdot M) = (KH) \cdot M.
+ \]
+ \item Finally, if $M \in \mathcal{M}_c^2$ and $H \in L^2(M)$, then the definition is the same as the previous one.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ Assume $M_0 = 0$, and that $\int_0^t H_s^2 \;\d \bra M\ket_s < \infty$ for all $\omega \in \Omega$ (by setting $H = 0$ when this fails). Set
+ \[
+ S_n = \inf \left\{ t \geq 0 : \int_0^t (1 + H_s^2) \;\d \bra M \ket_s \geq n\right\}.
+ \]
+ These $S_n$ are stopping times that tend to infinity. Then
+ \[
+ \bra M^{S_n}, M^{S_n}\ket_t = \bra M, M\ket_{t \wedge S_n} \leq n.
+ \]
+ So $M^{S_n} \in \mathcal{M}_c^2$. Also,
+ \[
+ \int_0^\infty H_s^2 \;\d \bra M^{S_n}\ket_s = \int_0^{S_n} H_s^2 \;\d \bra M\ket_s \leq n.
+ \]
+ So $H \in L^2(M^{S_n})$, and we have already defined what $H \cdot M^{S_n}$ is. Now notice that
+ \[
+ H \cdot M^{S_n} = (H \cdot M^{S_m})^{S_n}\text{ for }m \geq n.
+ \]
+ So it makes sense to define
+ \[
+ H \cdot M = \lim_{n \to \infty} H \cdot M^{S_n}.
+ \]
+ This is the unique process such that $(H \cdot M)^{S_n} = H \cdot M^{S_n}$. We see that $H \cdot M$ is a continuous adapted local martingale with reducing sequence $S_n$.
+ \begin{claim}
+ $\bra H \cdot M, N\ket = H \cdot \bra M, N\ket$.
+ \end{claim}
+ Indeed, assume that $N_0 = 0$. Set $S_n' = \inf \{t \geq 0: |N_t| \geq n\}$. Set $T_n = S_n \wedge S_n'$. Observe that $N^{S_n'} \in \mathcal{M}_c^2$. Then
+ \[
+ \bra H \cdot M, N\ket^{T_n} = \bra H \cdot M^{S_n}, N^{S_n'}\ket = H \cdot \bra M^{S_n}, N^{S_n'}\ket = H \cdot \bra M, N\ket^{T_n}.
+ \]
+ Taking the limit $n \to \infty$ gives the desired result.
+
+ The proofs of the other claims are the same as before, since they only use the characterizing property $\bra H \cdot M, N\ket = H \cdot \bra M, N\ket$.
+\end{proof}
+
+\subsection{Extension to semi-martingales}
+\begin{defi}[Locally bounded previsible process]\index{locally bounded previsible process}
+ A previsible process $H$ is \emph{locally bounded} if for all $t \geq 0$, we have
+ \[
+ \sup_{s \leq t}|H_s| < \infty\text{ a.s.}
+ \]
+\end{defi}
+
+\begin{fact}\leavevmode
+ \begin{enumerate}
+ \item Any adapted continuous process is locally bounded.
+ \item If $H$ is locally bounded and $A$ is a finite variation process, then for all $t \geq 0$, we have
+ \[
+ \int_0^t |H_s|\;|\d A_s| < \infty\text{ a.s.}
+ \]
+ \end{enumerate}
+\end{fact}
+
+Now if $X = X_0 + M + A$ is a semi-martingale, where $X_0 \in \mathcal{F}_0$, $M$ is a continuous local martingale and $A$ is a finite variation process, we want to define $\int H_s \;\d X_s$. We already know what it means to define integration with respect to $\d M_s$ and $\d A_s$, using the It\^o integral and the finite variation integral respectively, and $X_0$ doesn't change, so we can ignore it.
+
+\begin{defi}[Stochastic integral]\index{stochastic integral}\index{$H \cdot X$}
+ Let $X = X_0 + M + A$ be a continuous semi-martingale, and $H$ a locally bounded previsible process. Then the \term{stochastic integral} $H \cdot X$ is the continuous semi-martingale defined by
+ \[
+ H \cdot X = H \cdot M + H \cdot A,
+ \]
+ and we write
+ \[
+ (H \cdot X)_t = \int_0^t H_s \;\d X_s.
+ \]
+\end{defi}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $(H, X) \mapsto H \cdot X$ is bilinear.
+ \item $H \cdot (K \cdot X) = (HK) \cdot X$ if $H$ and $K$ are locally bounded.
+ \item $(H \cdot X)^T = H1_{[0, T]} \cdot X = H \cdot X^T$ for every stopping time $T$.
+ \item If $X$ is a continuous local martingale (resp.\ a finite variation process), then so is $H \cdot X$.
+ \item If $H = \sum_{i = 1}^n H_{i - 1} \mathbf{1}_{(t_{i - 1}, t_i]}$ and $H_{i - 1} \in \mathcal{F}_{t_{i - 1}}$ (not necessarily bounded), then
+ \[
+ (H \cdot X)_t = \sum_{i = 1}^n H_{i - 1}(X_{t_i \wedge t} - X_{t_{i - 1} \wedge t}).
+ \]
+ \end{enumerate}
+\end{prop}
+\begin{proof}
+ (i) to (iv) follow from analogous properties for $H \cdot M$ and $H \cdot A$. The last part is also true by definition if the $H_{i - 1}$ are uniformly bounded. If they are not, then the finite variation part is still fine, since for each fixed $\omega \in \Omega$, each $H_{i - 1}(\omega)$ is a fixed number. For the martingale part, set
+ \[
+ T_n = \inf \{t \geq 0 : |H_t| \geq n\}.
+ \]
+ Then $T_n$ are stopping times, $T_n \to \infty$, and $H1_{[0, T_n]} \in \mathcal{E}$. Thus
+ \[
+ (H \cdot M)_{t \wedge T_n} = (H\mathbf{1}_{[0, T_n]} \cdot M)_t = \sum_{i = 1}^n H_{i - 1} (M_{t_i \wedge t \wedge T_n} - M_{t_{i - 1}\wedge t \wedge T_n}).
+ \]
+ Then take the limit $n \to \infty$.
+\end{proof}
+
+Before we get to It\^o's formula, we need a few more useful properties:
+\begin{prop}[Stochastic dominated convergence theorem]\index{stochastic dominated convergence theorem}
+ Let $X$ be a continuous semi-martingale. Let $H$ and $H^n$ be previsible and locally bounded, and let $K$ be previsible and non-negative. Let $t > 0$. Suppose
+ \begin{enumerate}
+ \item $H_s^n \to H_s$ as $n \to \infty$ for all $s \in [0, t]$.
+ \item $|H_s^n| \leq K_s$ for all $s \in [0, t]$ and $n \in \N$.
+ \item $\int_0^t K_s^2\;\d \bra M\ket_s < \infty$ and $\int_0^t K_s \;|\d A_s|< \infty$ (note that both conditions are okay if $K$ is locally bounded).
+ \end{enumerate}
+ Then
+ \[
+ \int_0^t H_s^n \;\d X_s \to \int_0^t H_s\;\d X_s \text{ in probability}.
+ \]
+\end{prop}
+
+\begin{proof}
+ For the finite variation part, the convergence follows from the usual dominated convergence theorem. For the martingale part, we set
+ \[
+ T_m = \inf \left\{t \geq 0: \int_0^t K_s^2 \;\d \bra M\ket_s \geq m\right\}.
+ \]
+ So we have
+ \[
+ \E \left(\left(\int_0^{T_m \wedge t} H_s^n \;\d M_s - \int_0^{T_m \wedge t} H_s\;\d M_s\right)^2\right) \leq \E \left(\int_0^{T_m \wedge t} (H_s^n- H_s)^2 \;\d \bra M\ket_s\right) \to 0
+ \]
+ using the usual dominated convergence theorem, since $\int_0^{T_m \wedge t} K_s^2 \;\d \bra M\ket_s \leq m$.
+
+ Since $T_m \wedge t = t$ eventually as $m \to \infty$ almost surely, hence in probability, we are done.
+\end{proof}
+
+\begin{prop}
+ Let $X$ be a continuous semi-martingale, and let $H$ be an adapted bounded left-continuous process. Then for every sequence of subdivisions $0 = t_0^{(m)} < t_1^{(m)} < \cdots < t_{n_m}^{(m)} = t$ of $[0, t]$ with $\max_i |t_i^{(m)} - t_{i - 1}^{(m)}| \to 0$, we have
+ \[
+ \int_0^t H_s \;\d X_s = \lim_{m \to \infty} \sum_{i = 1}^{n_m} H_{t_{i - 1}^{(m)}} (X_{t_i^{(m)}} - X_{t_{i - 1}^{(m)}})
+ \]
+ in probability.
+\end{prop}
+
+\begin{proof}
+ We have already proved this for the Lebesgue--Stieltjes integral, and all we used was dominated convergence. So the same proof works using stochastic dominated convergence theorem.
+\end{proof}
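The left endpoints in this approximation are essential. As a numerical aside (assuming only \texttt{numpy}), for $X = B$ and $H = B$ the left-endpoint sums converge to the It\^o integral $\frac{1}{2}(B_t^2 - t)$, while right-endpoint sums converge to $\frac{1}{2}(B_t^2 + t)$; the gap is exactly the quadratic variation $\sum_i (\Delta B_i)^2 \to t$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, t = 200000, 1.0
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=n_steps)
B = np.concatenate([[0.0], np.cumsum(dB)])  # B on the grid 0, dt, ..., t

left = np.sum(B[:-1] * dB)   # left-endpoint (Ito) sums
right = np.sum(B[1:] * dB)   # right-endpoint sums

# Algebraically left = (B_t^2 - sum dB^2)/2 and right = (B_t^2 + sum dB^2)/2,
# and sum dB^2 -> <B>_t = t, so the two limits differ by exactly t.
print(left, (B[-1]**2 - t) / 2)   # Ito integral of B against B
print(right - left, t)            # gap ~ quadratic variation
```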
+
+\subsection{\tph{It\^o}{Ito}{Itô} formula}
+We now prove the equivalent of the integration by parts and the chain rule, i.e.\ It\^o's formula. Compared to the world of usual integrals, the difference is that the quadratic variation, i.e.\ ``second order terms'' will crop up quite a lot, since they are no longer negligible.
+
+\begin{thm}[Integration by parts]\index{integration by parts}
+ Let $X, Y$ be continuous semi-martingales. Then almost surely,
+ \[
+ X_t Y_t - X_0 Y_0 = \int_0^t X_s \;\d Y_s + \int_0^t Y_s\;\d X_s + \bra X, Y\ket_t.
+ \]
+ The last term is called the \term{It\^o correction}.
+\end{thm}
+Note that if $X, Y$ are martingales, then the first two terms on the right are martingales, but the last is not. So we are forced to think about semi-martingales.
+
+Observe that in the case of finite variation integrals, we don't have the correction.
+\begin{proof}
+ We have
+ \[
+ X_t Y_t - X_s Y_s = X_s (Y_t - Y_s) + (X_t - X_s) Y_s + (X_t - X_s)(Y_t - Y_s).
+ \]
+ When doing usual calculus, we can drop the last term, because it is second order. However, the quadratic variation of martingales is in general non-zero, and so we must keep track of this. We have
+ \begin{align*}
+ X_{k2^{-n}} Y_{k2^{-n}} - X_0 Y_0 &= \sum_{i = 1}^k (X_{i2^{-n}} Y_{i2^{-n}} - X_{(i - 1)2^{-n}}Y_{(i - 1)2^{-n}})\\
+ &= \sum_{i = 1}^k \Big( X_{(i - 1)2^{-n}} (Y_{i2^{-n}} - Y_{(i - 1)2^{-n}})\\
+ &\hphantom{=aaa}+ Y_{(i - 1)2^{-n}}(X_{i 2^{-n}} - X_{(i - 1)2^{-n}}) \\
+ &\hphantom{=aaa} + (X_{i2^{-n}} - X_{(i - 1)2^{-n}})(Y_{i2^{-n}} - Y_{(i - 1)2^{-n}})\Big)
+ \end{align*}
+ Taking the limit $n \to \infty$ with $t = k2^{-n}$ fixed, the first two sums converge in probability to the corresponding stochastic integrals by the previous proposition, and the last sum converges to $\bra X, Y\ket_t$ by definition of the covariation. So the formula holds for $t$ a dyadic rational, and then by continuity it holds for all $t$.
+\end{proof}
+
+The really useful formula is the following:
+\begin{thm}[It\^o's formula]\index{It\^o's formula}
+ Let $X^1, \ldots, X^p$ be continuous semi-martingales, and let $f: \R^p \to \R$ be $C^2$. Then, writing $X = (X^1, \ldots, X^p)$, we have, almost surely,
+ \[
+ f(X_t) = f(X_0) + \sum_{i = 1}^p \int_0^t \frac{\partial f}{\partial x^i}(X_s)\;\d X_s^i + \frac{1}{2} \sum_{i, j = 1}^p \int_0^t \frac{\partial^2 f}{\partial x^i \partial x^j} (X_s) \;\d \bra X^i, X^j\ket_s.
+ \]
+ In particular, $f(X)$ is a semi-martingale.
+\end{thm}
+
+The proof is long but not hard. We first do it for polynomials by explicit computation, and then use Weierstrass approximation to extend it to more general functions.
+\begin{proof}
+ \begin{claim}
+ It\^o's formula holds when $f$ is a polynomial.
+ \end{claim}
+ It clearly does when $f$ is a constant! We then proceed by induction. Suppose It\^o's formula holds for some $f$. Then we apply integration by parts to
+ \[
+ g(x) = x^k f(x),
+ \]
+ where $x^k$ denotes the $k$th component of $x$. Then we have
+ \[
+ g(X_t) = g(X_0) + \int_0^t X_s^k \;\d f(X_s) + \int_0^t f(X_s) \;\d X_s^k + \bra X^k, f(X)\ket_t.
+ \]
+ We now apply It\^o's formula for $f$ to write
+ \begin{multline*}
+ \int_0^t X_s^k \;\d f(X_s) = \sum_{i = 1}^p \int_0^t X_s^k \frac{\partial f}{\partial x^i}(X_s) \;\d X_s^i\\
+ + \frac{1}{2} \sum_{i, j = 1}^p \int_0^t X_s^k \frac{\partial^2f}{\partial x^i \partial x^j} (X_s)\;\d \bra X^i, X^j\ket_s.
+ \end{multline*}
+ We also have
+ \[
+ \bra X^k, f(X)\ket_t = \sum_{i = 1}^p \int_0^t \frac{\partial f}{\partial x^i}(X_s) \;\d \bra X^k, X^i\ket_s.
+ \]
+ So we have
+ \[
+ g(X_t) = g(X_0) + \sum_{i = 1}^p \int_0^t \frac{\partial g}{\partial x^i}(X_s) \;\d X_s^i + \frac{1}{2} \sum_{i, j = 1}^p \int_0^t \frac{\partial^2 g}{\partial x^i \partial x^j} (X_s) \;\d \bra X^i, X^j\ket_s.
+ \]
+ So by induction, It\^o's formula holds for all polynomials.
+
+ \begin{claim}
+ It\^o's formula holds for all $f \in C^2$ if $|X_t(\omega)| \leq n$ and $\int_0^t |\d A_s| \leq n$ for all $(t, \omega)$.
+ \end{claim}
+
+ By the Weierstrass approximation theorem, there are polynomials $p_k$ such that
+ \[
+ \sup_{|x| \leq n} \left(|f(x) - p_k(x)| + \max_i \left|\frac{\partial f}{\partial x^i} - \frac{\partial p_k}{\partial x^i}\right| + \max_{i, j} \left|\frac{\partial^2 f}{\partial x^i \partial x^j} - \frac{\partial^2 p_k}{\partial x^i \partial x^j}\right| \right) \leq \frac{1}{k}.
+ \]
+ By taking limits, in probability, we have
+ \begin{align*}
+ f(X_t) - f(X_0) &= \lim_{k \to \infty} (p_k(X_t) - p_k(X_0))\\
+ \int_0^t \frac{\partial f}{\partial x^i} (X_s) \;\d X_s^i &= \lim_{k \to \infty} \int_0^t \frac{\partial p_k}{\partial x^i} (X_s) \;\d X_s^i
+ \end{align*}
+ by stochastic dominated convergence theorem, and by the regular dominated convergence, we have
+ \[
+ \int_0^t \frac{\partial^2 f}{\partial x^i \partial x^j}(X_s) \;\d \bra X^i, X^j\ket_s = \lim_{k \to \infty} \int_0^t \frac{\partial^2 p_k}{\partial x^i \partial x^j}(X_s) \;\d \bra X^i, X^j\ket_s.
+ \]
+ \begin{claim}
+ It\^o's formula holds for all $X$.
+ \end{claim}
+
+ Let
+ \[
+ T_n = \inf \left\{t \geq 0: |X_t| \geq n\text{ or }\int_0^t |\d A_s| \geq n\right\}.
+ \]
+ Then by the previous claim, we have
+ \begin{align*}
+ f(X_t^{T_n}) &= f(X_0) + \sum_{i = 1}^p \int_0^t \frac{\partial f}{\partial x^i} (X_s^{T_n})\;\d (X^i)_s^{T_n} \\
+ &\hphantom{aaaaaa}+ \frac{1}{2} \sum_{i, j} \int_0^t \frac{\partial^2 f}{\partial x^i \partial x^j} (X_s^{T_n})\;\d \bra (X^i)^{T_n}, (X^j)^{T_n}\ket_s\\
+ &= f(X_0) + \sum_{i = 1}^p \int_0^{t \wedge T_n} \frac{\partial f}{\partial x^i} (X_s)\;\d X^i_s \\
+ &\hphantom{aaaaaa}+ \frac{1}{2} \sum_{i, j} \int_0^{t \wedge T_n} \frac{\partial^2 f}{\partial x^i \partial x^j} (X_s)\;\d \bra X^i, X^j\ket_s.
+ \end{align*}
+ Then take $T_n \to \infty$.
+\end{proof}
+
+\begin{eg}
+ Let $B$ be a standard Brownian motion, $B_0 = 0$ and $f(x) = x^2$. Then
+ \[
+ B_t^2 = 2 \int_0^t B_s \;\d B_s + t.
+ \]
+ In other words,
+ \[
+ B_t^2 - t = 2 \int_0^t B_s\;\d B_s.
+ \]
+ In particular, this is a continuous local martingale.
+\end{eg}
+
+\begin{eg}
+ Let $B = (B^1, \ldots, B^d)$ be a $d$-dimensional Brownian motion. Then we apply It\^o's formula to the semi-martingale $X = (t, B^1, \ldots, B^d)$. Then we find that
+ \[
+ f(t, B_t) - f(0, B_0) - \int_0^t \left(\frac{\partial}{\partial s} + \frac{1}{2} \Delta\right) f(s, B_s) \;\d s = \sum_{i = 1}^d \int_0^t \frac{\partial}{\partial x^i} f(s, B_s)\;\d B_s^i
+ \]
+ is a continuous local martingale.
+\end{eg}
+
+There are some syntactic tricks that make stochastic integrals easier to manipulate, namely by working in differential form. We can state It\^o's formula in differential form
+\[
+ \d f(X_t) = \sum_{i = 1}^p \frac{\partial f}{\partial x^i} \;\d X^i + \frac{1}{2} \sum_{i, j = 1}^p \frac{\partial^2 f}{\partial x^i\partial x^j} \;\d \bra X^i, X^j\ket,
+\]
+which we can think of as the chain rule. For example, in the case of Brownian motion, we have
+\[
+ \d f(B_t) = f'(B_t) \;\d B_t + \frac{1}{2} f''(B_t) \;\d t.
+\]
+Formally, one expands $f$ using the fact that ``$(\d t)^2 = 0$'' but ``$(\d B)^2 = \d t$''. The following formal rules hold:
+\begin{align*}
+ Z_t - Z_0 = \int_0^t H_s \;\d X_s &\Longleftrightarrow \d Z_t = H_t \;\d X_t\\
+ Z_t = \bra X, Y\ket_t = \int_0^t \;\d \bra X, Y\ket_s &\Longleftrightarrow \d Z_t = \d X_t \;\d Y_t.
+\end{align*}
+Then we have rules such as
+\begin{align*}
+ H_t(K_t \;\d X_t) &= (H_t K_t)\;\d X_t\\
+ H_t (\d X_t \;\d Y_t) &= (H_t \;\d X_t)\;\d Y_t\\
+ \d (X_t Y_t) &= X_t \;\d Y_t + Y_t \;\d X_t + \d X_t \;\d Y_t\\
+ \d f(X_t) &= f'(X_t) \;\d X_t + \frac{1}{2} f''(X_t) \;\d X_t\;\d X_t.
+\end{align*}
+
+\subsection{The \tph{L\'evy}{L\'evy}{Levy} characterization}
+A more major application of the stochastic integral is the following convenient characterization of Brownian motion:
+\begin{thm}[L\'evy's characterization of Brownian motion]\index{L\'evy's characterization of Brownian motion}
+ Let $(X^1, \ldots, X^d)$ be continuous local martingales. Suppose that $X_0 = 0$ and that $\bra X^i, X^j\ket_t = \delta_{ij} t$ for all $i, j = 1, \ldots, d$ and $t \geq 0$. Then $(X^1, \ldots, X^d)$ is a standard $d$-dimensional Brownian motion.
+\end{thm}
+This might seem like a rather artificial condition, but it turns out to be quite useful in practice (though less so in this course). The point is that we know that $\bra H \cdot M\ket_t = (H^2 \cdot \bra M\ket)_t$, and in particular if we are integrating things with respect to Brownian motions of some sort, we know $\bra B\ket_t = t$, and so we are left with some explicit, familiar integral to do.
+
+\begin{proof}
+ Let $0 \leq s < t$. It suffices to check that $X_t - X_s$ is independent of $\mathcal{F}_s$ and $X_t - X_s \sim N(0, (t - s) I)$.
+ \begin{claim}
+ $\E(e^{i\theta \cdot (X_t - X_s)} \mid \mathcal{F}_s) = e^{-\frac{1}{2} |\theta|^2 (t - s)}$ for all $\theta \in \R^d$ and $s < t$.
+ \end{claim}
+ This is sufficient, since the right-hand side is independent of $\mathcal{F}_s$, hence so is the left-hand side, and the Fourier transform characterizes the distribution.
+
+ To check this, for $\theta \in \R^d$, we define
+ \[
+ Y_t = \theta \cdot X_t = \sum_{i = 1}^d \theta^i X_t^i.
+ \]
+ Then $Y$ is a continuous local martingale, and we have
+ \[
+ \bra Y\ket_t = \bra Y, Y\ket_t = \sum_{j, k = 1}^d \theta^j \theta^k\; \bra X^j, X^k\ket_t = |\theta|^2 t
+ \]
+ by assumption. Let
+ \[
+ Z_t = e^{iY_t + \frac{1}{2} \bra Y\ket_t} = e^{i \theta \cdot X_t + \frac{1}{2} |\theta|^2 t}.
+ \]
+ By It\^o's formula, with $X = iY + \frac{1}{2} \bra Y\ket$ and $f(x) = e^x$, we get
+ \[
+ \d Z_t = Z_t \left(i \d Y_t - \frac{1}{2} \d \bra Y\ket_t + \frac{1}{2} \d \bra Y\ket_t\right) = i Z_t \;\d Y_t.
+ \]
+ So this implies $Z$ is a continuous local martingale. Moreover, since $Z$ is bounded on bounded intervals of $t$, we know $Z$ is in fact a martingale, and $Z_0 = 1$. Then by definition of a martingale, we have
+ \[
+ \E (Z_t \mid \mathcal{F}_s) = Z_s,
+ \]
+ and unwrapping the definition of $Z_t$ gives the claim.
+\end{proof}
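As a numerical aside (assuming only \texttt{numpy}), the claim at the heart of the proof can be checked by simulation for a genuine Brownian motion: the increment $B_t - B_s \sim N(0, t - s)$ is independent of $\mathcal{F}_s$, so the unconditional average of $e^{i\theta(B_t - B_s)}$ should already be $e^{-\frac{1}{2}\theta^2(t - s)}$ (the values of $s$, $t$, $\theta$ below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, s, t, theta = 300000, 0.3, 1.0, 1.3

# Increment B_t - B_s of a standard Brownian motion.
incr = rng.normal(0.0, np.sqrt(t - s), size=n_paths)

# Claim from the proof: E exp(i theta (B_t - B_s)) = exp(-theta^2 (t - s)/2).
cf = np.mean(np.exp(1j * theta * incr))
print(cf, np.exp(-theta**2 * (t - s) / 2))
```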
+
+In general, the quadratic variation of a process doesn't have to be linear in $t$. It turns out if the quadratic variation increases to infinity, then the martingale is still a Brownian motion up to reparametrization.
+\begin{thm}[Dubins--Schwarz]\index{Dubins--Schwarz theorem}
+ Let $M$ be a continuous local martingale with $M_0 = 0$ and $\bra M\ket_\infty = \infty$. Let
+ \[
+ T_s = \inf \{t \geq 0: \bra M\ket_t > s\},
+ \]
+ the right-continuous inverse of $\bra M\ket_t$. Let $B_s = M_{T_s}$ and $\mathcal{G}_s = \mathcal{F}_{T_s}$. Then $T_s$ is an $(\mathcal{F}_t)$-stopping time, $\bra M\ket_{T_s} = s$ for all $s \geq 0$, $B$ is a $(\mathcal{G}_s)$-Brownian motion, and
+ \[
+ M_t = B_{\bra M\ket_t}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Since $\bra M\ket$ is continuous and adapted, and $\bra M\ket_\infty = \infty$, we know $T_s$ is a stopping time and $T_s < \infty$ for all $s \geq 0$.
+ \begin{claim}
+ $(\mathcal{G}_s)$ is a filtration obeying the usual conditions, and $\mathcal{G}_\infty = \mathcal{F}_\infty$.
+ \end{claim}
+ Indeed, if $A \in \mathcal{G}_s$ and $s < t$, then
+ \[
+ A \cap \{T_t \leq u\} = A \cap \{T_s \leq u\} \cap \{T_t \leq u\} \in \mathcal{F}_u,
+ \]
+ using that $A \cap \{T_s \leq u\} \in \mathcal{F}_u$ since $A \in \mathcal{G}_s$. Then right-continuity follows from that of $(\mathcal{F}_t)$ and the right-continuity of $s \mapsto T_s$.
+
+ \begin{claim}
+ $B$ is adapted to $(\mathcal{G}_s)$.
+ \end{claim}
+ In general, if $X$ is c\`adl\`ag and $T$ is a stopping time, then $X_T \mathbf{1}_{\{T < \infty\}} \in \mathcal{F}_T$. Apply this with $X = M$, $T = T_s$ and $\mathcal{F}_{T_s} = \mathcal{G}_s$. Thus $B_s \in \mathcal{G}_s$.
+
+ \begin{claim}
+ $B$ is continuous.
+ \end{claim}
+ This is actually something to verify, because $s \mapsto T_s$ is only right-continuous, not necessarily continuous. Thus, we only know that $B$ is right-continuous, and we have to check that it is left-continuous.
+
+ Now $B$ is left-continuous at $s$ iff $B_s = B_{s^-}$, iff $M_{T_s} = M_{T_{s-}}$.
+ Now we have
+ \[
+ T_{s-} = \inf \{t \geq 0 : \bra M\ket_t \geq s\}.
+ \]
+ If $T_s = T_{s-}$, then there is nothing to show. Thus, we may assume $T_s > T_{s-}$. Then we have $\bra M\ket_{T_s} = \bra M\ket_{T_{s-}} = s$. Since $\bra M\ket$ is increasing, this means $\bra M\ket$ is constant on $[T_{s-}, T_s]$. We will later prove that
+ \begin{lemma}
+ $M$ is constant on $[a, b]$ iff $\bra M\ket$ is constant on $[a, b]$.
+ \end{lemma}
+ So we know that if $T_s > T_{s-}$, then $M_{T_s} = M_{T_{s-}}$. So $B$ is left-continuous.
+
+ We then have to show that $B$ is a martingale.
+ \begin{claim}
+ $(M^2 - \bra M\ket)^{T_s}$ is a uniformly integrable martingale.
+ \end{claim}
+ To see this, observe that $\bra M^{T_s}\ket_\infty = \bra M\ket_{T_s} = s$, and so $M^{T_s}$ is bounded in $L^2$. So $(M^2 - \bra M\ket)^{T_s}$ is a uniformly integrable martingale.
+
+ We now apply the optional stopping theorem, which tells us
+ \[
+ \E (B_s \mid \mathcal{G}_r) = \E (M^{T_s}_\infty \mid \mathcal{F}_{T_r}) = M_{T_r} = B_r
+ \]
+ for $r \leq s$. So $B$ is a martingale. Moreover,
+ \[
+ \E (B_s^2 - s \mid \mathcal{G}_r) = \E ((M^2 - \bra M\ket)^{T_s}_\infty \mid \mathcal{F}_{T_r}) = M_{T_r}^2 - \bra M\ket_{T_r} = B^2_r - r.
+ \]
+ So $B^2_s - s$ is a martingale, so by the characterizing property of the quadratic variation, $\bra B\ket_s = s$. So by L\'evy's criterion, $B$ is a Brownian motion in one dimension. Finally, $B_{\bra M\ket_t} = M_{T_{\bra M\ket_t}} = M_t$, since $M$ is constant on $[t, T_{\bra M\ket_t}]$ by the lemma.
+\end{proof}
+
+The theorem is only true for martingales in one dimension. In two dimensions, this need not be true, because the time change needed for the horizontal and vertical may not agree. However, in the example sheet, we see that the holomorphic image of a Brownian motion is still a Brownian motion up to a time change.
+
+\begin{lemma}
+ $M$ is constant on $[a, b]$ iff $\bra M\ket$ is constant on $[a, b]$.
+\end{lemma}
+
+\begin{proof}
+ It is clear that if $M$ is constant, then so is $\bra M\ket$. To prove the converse, by continuity, it suffices to prove that for any fixed $a < b$,
+ \[
+ \{M_t = M_a \text{ for all }t \in [a, b]\} \supseteq \{\bra M\ket_b = \bra M\ket_a\}\text{ almost surely}.
+ \]
+ We set $N_t = M_t - M_{t \wedge a}$. Then $\bra N\ket_t = \bra M\ket_t - \bra M\ket_{t \wedge a}$. Define
+ \[
+ T_\varepsilon = \inf \{t \geq 0: \bra N\ket_t \geq \varepsilon\}.
+ \]
+ Then since $N^2 - \bra N\ket$ is a local martingale, we know that
+ \[
+ \E(N_{t \wedge T_\varepsilon}^2) = \E (\bra N\ket_{t \wedge T_\varepsilon})\leq \varepsilon.
+ \]
+ Now observe that on the event $\{\bra M\ket_b = \bra M\ket_a\}$, we have $\bra N\ket_b = 0$, and hence $T_\varepsilon \geq b$. So for $t \in [a, b]$, we have
+ \[
+ \E(\mathbf{1}_{\{\bra M\ket_b = \bra M\ket_a\}} N_t^2) = \E(\mathbf{1}_{\{\bra M\ket_b = \bra M\ket_a\}} N_{t \wedge T_\varepsilon}^2) \leq \E(\bra N\ket_{t \wedge T_\varepsilon}) \leq \varepsilon.
+ \]
+ Since $\varepsilon > 0$ was arbitrary, we get $N_t = 0$ almost surely on this event.\qedhere
+\end{proof}
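A simulation makes the Dubins--Schwarz time change concrete. The sketch below (a numerical aside assuming only \texttt{numpy}; the integrand $H_s = 1 + W_s^2$ is an arbitrary choice that keeps the quadratic variation growing at least linearly, so $T_s$ is reached on the simulated horizon) builds $M = H \cdot W$ for a Brownian motion $W$, reads off $T_s$ as the first time the quadratic variation exceeds $s$, and checks that $B_s = M_{T_s}$ has the $N(0, s)$ moments predicted by the theorem:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, t_max, s = 4000, 500, 1.0, 0.5
dt = t_max / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dB, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])

# Local martingale M = H . W with H_s = 1 + W_s^2, so <M>_t >= t.
H = 1.0 + W_left**2
M = np.cumsum(H * dB, axis=1)        # Euler approximation of the integral
QV = np.cumsum(H**2 * dt, axis=1)    # approximation of <M>

# Time change: T_s is the first time the quadratic variation exceeds s.
idx = np.argmax(QV >= s, axis=1)     # crossing exists since <M>_{t_max} >= t_max > s
B_s = M[np.arange(n_paths), idx]     # B_s = M_{T_s}

# Dubins--Schwarz predicts B_s ~ N(0, s): check first, second, fourth moments.
print(np.mean(B_s), np.var(B_s), s, np.mean(B_s**4), 3 * s**2)
```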
+
+\subsection{Girsanov's theorem}
+Girsanov's theorem tells us what happens to our (semi)-martingales when we change the measure of our space. We first look at a simple example when we perform a shift.
+\begin{eg}
+ Let $X \sim N(0, C)$ be an $n$-dimensional centered Gaussian with positive definite covariance $C = (C_{ij})_{i, j = 1}^n$. Put $M = C^{-1}$. Then for any function $f$, we have
+ \[
+ \E f(X) = \left(\det \frac{M}{2\pi}\right)^{1/2} \int_{\R^n} f(x) e^{-\frac{1}{2} (x, Mx)}\;\d x.
+ \]
+ Now fix an $a \in \R^n$. The distribution of $X + a$ then satisfies
+ \[
+ \E f(X + a) = \left(\det \frac{M}{2\pi}\right)^{1/2} \int_{\R^n} f(x) e^{-\frac{1}{2} (x - a, M (x - a))}\;\d x = \E [Z f(X)],
+ \]
+ where
+ \[
+ Z = Z(x) = e^{-\frac{1}{2} (a, Ma) + (x, Ma)}.
+ \]
+ Thus, if $\P$ denotes the distribution of $X$, then the measure $\Q$ with
+ \[
+ \frac{\d \Q}{\d \P} = Z
+ \]
+ is that of an $N(a, C)$ vector.
+\end{eg}
+
+\begin{eg}
+ We can extend the above example to Brownian motion. Let $B$ be a Brownian motion with $B_0 = 0$, and $h: [0, \infty) \to \R$ a deterministic function. We then want to understand the distribution of $B_t + h$.
+
+ Fix a finite sequence of times $0 = t_0 < t_1 < \cdots < t_n$. Then we know that $(B_{t_i})_{i = 1}^n$ is a centered Gaussian random vector. Thus, for any function $f$, writing $f(B) = f(B_{t_1}, \ldots, B_{t_n})$, we have
+ \[
+ \E (f(B)) = c \cdot \int_{\R^n} f(x) e^{-\frac{1}{2} \sum_{i = 1}^n \frac{(x_i - x_{i - 1})^2}{t_i - t_{i - 1}}}\;\d x_1 \cdots \d x_n.
+ \]
+ Thus, after a shift, we get
+ \begin{align*}
+ &\E (f (B + h)) = \E (Z f(B)),\\
+ &Z = \exp \left(-\frac{1}{2} \sum_{i = 1}^n \frac{(h_{t_i} - h_{t_{i - 1}})^2}{t_i - t_{i - 1}} + \sum_{i = 1}^n \frac{(h_{t_i} - h_{t_{i - 1}})(B_{t_i} - B_{t_{i - 1}})}{t_i - t_{i - 1}}\right).
+ \end{align*}
+\end{eg}
+In general, we are interested in what happens when we change the measure by an exponential:
+\begin{defi}[Stochastic exponential]\index{stochastic exponential}
+ Let $M$ be a continuous local martingale. Then the \emph{stochastic exponential} (or \term{Dol\'eans--Dade exponential}) of $M$ is
+ \[
+ \mathcal{E}(M)_t = e^{M_t - \frac{1}{2} \bra M\ket_t}.
+ \]
+\end{defi}
+The point of introducing that quadratic variation term is
+\begin{prop}
+ Let $M$ be a continuous local martingale with $M_0 = 0$. Then $\mathcal{E}(M) = Z$ satisfies
+ \[
+ \d Z_t = Z_t \;\d M_t,
+ \]
+ i.e.
+ \[
+ Z_t = 1 + \int_0^t Z_s \;\d M_s.
+ \]
+ In particular, $\mathcal{E}(M)$ is a continuous local martingale. Moreover, if $\bra M\ket$ is uniformly bounded, then $\mathcal{E}(M)$ is a uniformly integrable martingale.
+\end{prop}
+There is a more general condition for the final property, namely Novikov's condition, but we will not go into that.
+
+\begin{proof}
+ By It\^o's formula with $X = M - \frac{1}{2} \bra M\ket$, we have
+ \[
+ \d Z_t = Z_t \d \left(M_t - \frac{1}{2} \d \bra M\ket_t\right) + \frac{1}{2} Z_t \d \bra M\ket_t = Z_t \;\d M_t.
+ \]
+ Since $M$ is a continuous local martingale, so is $\int Z_s \;\d M_s$. So $Z$ is a continuous local martingale.
+
+ Now suppose $\bra M\ket_\infty \leq b < \infty$. Then
+ \[
+ \P \left(\sup_{t \geq 0} M_t \geq a\right) = \P \left(\sup_{t \geq 0} M_t \geq a,\, \bra M\ket_\infty \leq b\right) \leq e^{-a^2/2b},
+ \]
+ where the final inequality is an exercise on the third example sheet, and holds for general continuous local martingales. So we get
+ \begin{align*}
+ \E \left(\exp \left(\sup_t M_t\right)\right) &= \int_0^\infty \P(\exp (\sup M_t)\geq \lambda)\;\d \lambda \\
+ &= \int_0^\infty \P (\sup M_t \geq \log \lambda)\;\d \lambda\\
+ &\leq 1 + \int_1^\infty e^{-(\log \lambda)^2/2b} \;\d \lambda < \infty.
+ \end{align*}
+ Since $\bra M\ket \geq 0$, we know that
+ \[
+ \sup_{t \geq 0} \mathcal{E}(M)_t \leq \exp \left(\sup_{t \geq 0} M_t\right),
+ \]
+ so $\mathcal{E}(M)$ is a uniformly integrable martingale.
+\end{proof}
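For $M = B$ a standard Brownian motion, $\bra B\ket_t = t$ and $\mathcal{E}(B)_t = e^{B_t - t/2}$, which is in fact a true martingale, so $\E\, \mathcal{E}(B)_t = 1$ for every $t$; without the correction term, $\E\, e^{B_t} = e^{t/2} \neq 1$. A quick Monte Carlo check (a numerical aside assuming only \texttt{numpy}):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, t = 200000, 1.0

B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
Z_t = np.exp(B_t - 0.5 * t)   # stochastic exponential E(B)_t

# Martingale with Z_0 = 1, so the mean stays 1; without the -t/2
# correction, the mean drifts up to exp(t/2).
print(np.mean(Z_t), 1.0)
print(np.mean(np.exp(B_t)), np.exp(t / 2))
```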
+
+\begin{thm}[Girsanov's theorem]\index{Girsanov's theorem}
+ Let $M$ be a continuous local martingale with $M_0 = 0$. Suppose that $\mathcal{E}(M)$ is a uniformly integrable martingale. Define a new probability measure
+ \[
+ \frac{\d \Q}{\d \P} = \mathcal{E}(M)_\infty.
+ \]
+ Let $X$ be a continuous local martingale with respect to $\P$. Then $X - \bra X, M\ket$ is a continuous local martingale with respect to $\Q$.
+\end{thm}
+
+\begin{proof}
+ Define the stopping time
+ \[
+ T_n = \inf\{ t \geq 0: |X_t - \bra X, M\ket_t| \geq n\},
+ \]
+ and $\P(T_n \to \infty) = 1$ by continuity. Since $\Q$ is absolutely continuous with respect to $\P$, we know that $\Q(T_n \to \infty) = 1$. Thus it suffices to show that $X^{T_n} - \bra X^{T_n}, M\ket$ is a continuous martingale under $\Q$ for any $n$. Let
+ \[
+ Y = X^{T_n} - \bra X^{T_n}, M\ket,\quad Z = \mathcal{E}(M).
+ \]
+ \begin{claim}
+ $ZY$ is a continuous local martingale with respect to $\P$.
+ \end{claim}
+ We use the product rule to compute
+ \begin{align*}
+ \d (Z_t Y_t) &= Y_t \;\d Z_t + Z_t \;\d Y_t + \d \bra Y, Z\ket_t\\
+ &= Y_t Z_t \;\d M_t + Z_t (\d X_t^{T_n} - \d \bra X^{T_n}, M\ket_t) + Z_t \;\d \bra M, X^{T_n}\ket_t\\
+ &= Y_t Z_t \;\d M_t + Z_t\;\d X_t^{T_n}.
+ \end{align*}
+ So we see that $ZY$ is a stochastic integral with respect to a continuous local martingale. Thus $ZY$ is a continuous local martingale.
+
+ \begin{claim}
+ $ZY$ is uniformly integrable.
+ \end{claim}
+ Since $Z$ is a uniformly integrable martingale, $\{Z_T: T\text{ is a stopping time}\}$ is uniformly integrable. Since $Y$ is bounded, $\{Z_T Y_T: T\text{ is a stopping time}\}$ is also uniformly integrable. So $ZY$ is a true martingale (with respect to $\P$).
+
+ \begin{claim}
+ $Y$ is a martingale with respect to $\Q$.
+ \end{claim}
+ We have
+ \begin{align*}
+ \E^{\Q}(Y_t - Y_s \mid \mathcal{F}_s) &= Z_s^{-1}\E^\P(Z_\infty (Y_t - Y_s) \mid \mathcal{F}_s)\\
+ &= Z_s^{-1}\E^{\P} (Z_t Y_t - Z_s Y_s \mid \mathcal{F}_s) = 0.\qedhere
+ \end{align*}
+\end{proof}
+Note that the quadratic variation does not change since
+\[
+ \bra X - \bra X, M\ket \ket_t = \bra X \ket_t = \lim_{n \to \infty} \sum_{i = 1}^{\lfloor 2^n t \rfloor} (X_{i 2^{-n}} - X_{(i - 1)2^{-n}})^2\text{ a.s.}
+\]
+along a subsequence.
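In the simplest case $M_t = a B_t$ for a constant $a$ and a Brownian motion $B$, we have $\bra B, M\ket_t = at$, so under $\Q$ the process $B_t - at$ is a Brownian motion; equivalently, reweighting by $\mathcal{E}(aB)_t = e^{aB_t - a^2t/2}$ turns $B$ into a Brownian motion with drift $a$. A Monte Carlo check of this reweighting at a fixed time (a numerical aside assuming only \texttt{numpy}; $f = \cos$ is an arbitrary bounded test function):

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, t, a = 400000, 1.0, 0.7

B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
Z_t = np.exp(a * B_t - 0.5 * a**2 * t)  # density dQ/dP restricted to F_t

def f(x):
    return np.cos(x)  # arbitrary bounded test function

lhs = np.mean(Z_t * f(B_t))    # E^Q[f(B_t)] computed by reweighting under P
rhs = np.mean(f(B_t + a * t))  # E^P[f(B_t + at)]: BM with drift a
print(lhs, rhs, np.cos(a * t) * np.exp(-t / 2))  # exact value for f = cos
```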
+
+\section{Stochastic differential equations}
+\subsection{Existence and uniqueness of solutions}
+After all this work, we can return to the problem we described in the introduction. We wanted to make sense of equations of the form
+\[
+ \dot{x}(t) = F(x(t)) + \eta(t),
+\]
+where $\eta(t)$ is Gaussian white noise. We can now interpret this equation as saying
+\[
+ \d X_t = F(X_t)\;\d t + \d B_t,
+\]
+or equivalently, in integral form,
+\[
+ X_t - X_0 = \int_0^t F(X_s) \;\d s + B_t.
+\]
+In general, we can make the following definition:
+\begin{defi}[Stochastic differential equation]\index{stochastic differential equation}
+ Let $d, m \in \N$, $b:\R_+ \times \R^d \to \R^d$, $\sigma: \R_+ \times \R^d \to \R^{d \times m}$ be locally bounded (and measurable). A solution to the stochastic differential equation $E(\sigma, b)$ given by
+ \[
+ \d X_t = b(t, X_t) \;\d t + \sigma(t, X_t) \;\d B_t
+ \]
+ consists of
+ \begin{enumerate}
+ \item a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), \P)$ obeying the usual conditions;
+ \item an $m$-dimensional Brownian motion $B$ with $B_0 = 0$; and
+ \item an $(\mathcal{F}_t)$-adapted continuous process $X$ with values in $\R^d$ such that
+ \[
+ X_t = X_0 + \int_0^t \sigma(s, X_s) \;\d B_s + \int_0^t b(s, X_s) \;\d s.
+ \]
+ \end{enumerate}
+ If $X_0 = x \in \R^d$, then we say $X$ is a \emph{(weak) solution}\index{weak solution} to $E_x(\sigma, b)$. It is a \emph{strong} solution\index{strong solution} if it is adapted with respect to the canonical filtration of $B$.
+\end{defi}
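As a numerical aside, the integral equation suggests the obvious discretization $X_{t_{k+1}} = X_{t_k} + b(t_k, X_{t_k})\,\Delta t + \sigma(t_k, X_{t_k})\,\Delta B_k$, the Euler--Maruyama scheme (not covered in this course). The sketch below (assuming only \texttt{numpy}; all parameter values are arbitrary) applies it to the Ornstein--Uhlenbeck equation $\d X_t = -\theta X_t \,\d t + \sigma \,\d B_t$, whose solution at time $t$ is Gaussian with mean $x_0 e^{-\theta t}$ and variance $\sigma^2(1 - e^{-2\theta t})/(2\theta)$:

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps, t = 50000, 500, 1.0
theta, sigma, x0 = 1.0, 0.5, 1.0
dt = t / n_steps

# Euler--Maruyama: X <- X + b(X) dt + sigma dB at each step.
X = np.full(n_paths, x0)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    X = X + (-theta * X) * dt + sigma * dB

mean_exact = x0 * np.exp(-theta * t)
var_exact = sigma**2 * (1 - np.exp(-2 * theta * t)) / (2 * theta)
print(np.mean(X), mean_exact)
print(np.var(X), var_exact)
```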
+Our goal is to prove existence and uniqueness of solutions to a general class of SDEs. We already know what it means for solutions to be unique, and in general there can be multiple notions of uniqueness:
+
+\begin{defi}[Uniqueness of solutions]
+ For the stochastic differential equation $E(\sigma, b)$, we say there is
+ \begin{itemize}
+ \item \term{uniqueness in law} if for every $x \in \R^d$, all solutions to $E_x(\sigma, b)$ have the same distribution.
+ \item \term{pathwise uniqueness} if when $(\Omega, \mathcal{F}, (\mathcal{F}_t), \P)$ and $B$ are fixed, any two solutions $X, X'$ with $X_0 = X_0'$ are indistinguishable.
+ \end{itemize}
+\end{defi}
+
+These two notions are not equivalent, as the following example shows:
+\begin{eg}[Tanaka]\index{Tanaka equation}
+ Consider the stochastic differential equation
+ \[
+ \d X_t = \sgn(X_t) \;\d B_t,\quad X_0 = x,
+ \]
+ where
+ \[
+ \sgn(x) =
+ \begin{cases}
+ +1 & x > 0\\
+ -1 & x \leq 0
+ \end{cases}.
+ \]
+ This has a weak solution which is unique in law, but pathwise uniqueness fails.
+
+ To see the existence of solutions, let $X$ be a one-dimensional Brownian motion with $X_0 = x$, and set
+ \[
+ B_t = \int_0^t \sgn(X_s) \;\d X_s,
+ \]
+ which is well-defined because $(\sgn(X_s))_s$ is bounded and previsible (being a Borel function of the continuous adapted process $X$). Then we have
+ \[
+ x + \int_0^t \sgn(X_s)\;\d B_s = x + \int_0^t \sgn(X_s)^2 \;\d X_s = x + X_t - X_0 = X_t.
+ \]
+ So it remains to show that $B$ is a Brownian motion. We already know that $B$ is a continuous local martingale, so by L\'evy's characterization, it suffices to show its quadratic variation is $t$. We simply compute
+ \[
+ \bra B, B\ket_t = \int_0^t \sgn(X_s)^2 \;\d \bra X, X\ket_s = t.
+ \]
+ So there is weak existence. The same argument shows that any solution is a Brownian motion, so we have uniqueness in law.
+
+ Finally, observe that if $x = 0$ and $X$ is a solution, then $-X$ is also a solution with the same Brownian motion. Indeed,
+ \[
+ -X_t = -\int_0^t \sgn(X_s) \;\d B_s = \int_0^t \sgn(-X_s)\;\d B_s + 2 \int_0^t \mathbf{1}_{X_s = 0} \;\d B_s,
+ \]
+ where the second term vanishes, since it is a continuous local martingale with quadratic variation $\int_0^t \mathbf{1}_{X_s = 0} \;\d s = 0$. So pathwise uniqueness does not hold.
+\end{eg}
+
+In the other direction, however, it turns out pathwise uniqueness implies uniqueness in law.
+\begin{thm}[Yamada--Watanabe]\index{Yamada--Watanabe}
+ Assume weak existence and pathwise uniqueness holds. Then
+ \begin{enumerate}
+ \item Uniqueness in law holds.
+ \item For every $(\Omega, \mathcal{F}, (\mathcal{F}_t), \P)$ and $B$ and any $x \in \R^d$, there is a unique strong solution to $E_x(\sigma, b)$.\fakeqed
+ \end{enumerate}
+\end{thm}
+We will not prove this, since we will not actually need it.
+
+The key, important theorem we are now heading for is the existence and uniqueness of solutions to SDEs, assuming reasonable conditions. As in the case of ODEs, we need the following Lipschitz conditions:
+
+\begin{defi}[Lipschitz coefficients]\index{Lipschitz coefficients}
+ The coefficients $b: \R_+ \times \R^d \to \R^d$, $\sigma: \R_+ \times \R^d \to \R^{d \times m}$ are Lipschitz in $x$ if there exists a constant $K > 0$ such that for all $t \geq 0$ and $x, y \in \R^d$, we have
+ \begin{align*}
+ |b(t, x) - b(t, y)| &\leq K|x - y|\\
+ |\sigma(t, x) - \sigma(t, y)| &\leq K|x - y|
+ \end{align*}
+\end{defi}
+
+\begin{thm}
+ Assume $b, \sigma$ are Lipschitz in $x$. Then there is pathwise uniqueness for the $E(\sigma, b)$ and for every $(\Omega, \mathcal{F}, (\mathcal{F}_t), \P)$ satisfying the usual conditions and every $(\mathcal{F}_t)$-Brownian motion $B$, for every $x \in \R^d$, there exists a unique strong solution to $E_x(\sigma, b)$.
+\end{thm}
+
+\begin{proof}
+ To simplify notation, we assume $m = d = 1$.
+
+ We first prove pathwise uniqueness. Suppose $X, X'$ are two solutions with $X_0 = X_0'$. We will show that $\E[(X_t - X_t')^2] = 0$. To keep the quantities involved bounded, for each $n \in \N$, we define the stopping time
+ \[
+ S = \inf \{t \geq 0: |X_t| \geq n\text{ or }|X_t'| \geq n\}.
+ \]
+ By continuity, $S \to \infty$ as $n \to \infty$. We also fix a deterministic time $T > 0$. Then whenever $t \in [0, T]$, we can bound, using the inequality $(a + b)^2 \leq 2a^2 + 2b^2$,
+ \begin{multline*}
+ \E ((X_{t \wedge S} - X'_{t \wedge S})^2) \leq 2 \E \left(\left(\int_0^{t \wedge S} (\sigma(s, X_s) - \sigma(s, X_s'))\;\d B_s\right)^2\right) \\
+ + 2 \E \left(\left(\int_0^{t \wedge S} (b(s, X_s) - b(s, X_s'))\;\d s\right)^2 \right).
+ \end{multline*}
+ We can apply the Lipschitz bound to the second term immediately, while we can simplify the first term using the (corollary of the) It\^o isometry
+ \[
+ \E \left(\left(\int_0^{t \wedge S}\!\!\!\!\!(\sigma(s, X_s) - \sigma(s, X_s'))\;\d B_s\right)^2\right) = \E \left(\int_0^{t \wedge S}\!\!\!\!\!(\sigma(s, X_s) - \sigma(s, X_s'))^2\;\d s\right).
+ \]
+ So using the Lipschitz bound, we have
+ \begin{align*}
+ \E ((X_{t \wedge S} - X'_{t \wedge S})^2) &\leq 2K^2 (1 + T) \E \left(\int_0^{t \wedge S} |X_s - X_s'|^2 \;\d s\right)\\
+ &\leq 2K^2 (1 + T) \int_0^t \E (|X_{s \wedge S} - X'_{s \wedge S}|^2)\;\d s.
+ \end{align*}
+ We now use Gr\"onwall's lemma:
+ \begin{lemma}[Gr\"onwall]
+ Let $h: [0, T] \to \R$ be a bounded measurable function such that
+ \[
+ h(t) \leq a + c \int_0^t h(s)\;\d s
+ \]
+ for some constants $a, c \geq 0$. Then
+ \[
+ h(t) \leq a e^{ct}.\fakeqed
+ \]
+ \end{lemma}
+ Applying this with $a = 0$ to
+ \[
+ h(t) = \E((X_{t \wedge S} - X_{t \wedge S}')^2),
+ \]
+ which is bounded by the definition of $S$, we deduce that $h \equiv 0$. So we know that
+ \[
+ \E(|X_{t \wedge S} - X_{t \wedge S}'|^2) = 0
+ \]
+ for every $t \in [0, T]$. Taking $n \to \infty$ and $T \to \infty$ gives pathwise uniqueness.
+
+ We next prove existence of solutions. We fix $(\Omega, \mathcal{F}, (\mathcal{F}_t)_t)$ and $B$, and define
+ \[
+ F(X)_t = X_0 + \int_0^t \sigma(s, X_s)\;\d B_s + \int_0^t b(s, X_s) \;\d s.
+ \]
+ Then $X$ is a solution to $E_x(\sigma, b)$ iff $F(X) = X$ and $X_0 = x$. To find a fixed point, we use Picard iteration. We fix $T > 0$, and define the $T$-norm of a continuous adapted process $X$ as
+ \[
+ \|X\|_T = \E\left(\sup_{t \leq T} |X_t|^2\right)^{1/2}.
+ \]
+ In particular, if $X$ is a martingale, then this is equivalent to the norm on the space of $L^2$-bounded martingales by Doob's inequality. Then
+ \[
+ \mathcal{B} = \{X: \Omega \times [0, T] \to \R\text{ continuous adapted}: \|X\|_T < \infty\}
+ \]
+ is a Banach space.
+ \begin{claim}
+ $\|F(0)\|_T < \infty$, and
+ \[
+ \|F(X) - F(Y)\|_T^2 \leq (2T + 8)K^2 \int_0^T \|X - Y\|_t^2 \;\d t.
+ \]
+ \end{claim}
+ We first see how this claim implies the theorem. First observe that the claim implies $F$ indeed maps $\mathcal{B}$ into itself. We can then define a sequence of processes $X^i_t$ by
+ \[
+ X_t^0 = x,\quad
+ X^{i + 1} = F(X^i).
+ \]
+ Then, writing $C = (2T + 8)K^2$, we have
+ \[
+ \|X^{i + 1} - X^i\|_T^2 \leq C \int_0^T \|X^i - X^{i - 1}\|^2_t \;\d t \leq \cdots \leq \|X^1 - X^0\|^2_T \frac{(CT)^i}{i!}.
+ \]
+ So we find that
+ \[
+ \sum_{i = 1}^\infty \|X^i - X^{i - 1}\|_T^2 < \infty
+ \]
+ for all $T$. So $X^i$ converges to some limit $X$, almost surely uniformly on $[0, T]$, and $F(X) = X$. We then take $T \to \infty$ and we are done.
+
+ To prove the claim, we write
+ \[
+ \|F(0)\|_T \leq |X_0| + \left\| \int_0^t b(s, 0) \;\d s\right\|_T + \left\| \int_0^t \sigma(s, 0) \;\d B_s\right\|_T.
+ \]
+ The first two terms are finite constants, and we can bound the last by Doob's inequality and the It\^o isometry:
+ \[
+ \left\| \int_0^t \sigma(s, 0) \;\d B_s\right\|_T \leq 2 \E \left(\left|\int_0^T \sigma(s, 0) \;\d B_s\right|^2\right)^{1/2} = 2 \left(\int_0^T \sigma(s, 0)^2 \;\d s\right)^{1/2}.
+ \]
+ To prove the second part, we use
+ \begin{multline*}
+ \|F(X) - F(Y)\|_T^2 \leq 2 \E \left(\sup_{t \leq T} \left|\int_0^t (b(s, X_s) - b(s, Y_s))\;\d s\right|^2\right) \\
+ + 2 \E \left(\sup_{t \leq T} \left|\int_0^t(\sigma(s, X_s) - \sigma(s, Y_s))\;\d B_s\right|^2\right).
+ \end{multline*}
+ We can bound the first term with Cauchy--Schwarz by
+ \[
+ T \E \left(\int_0^T |b(s, X_s) - b(s, Y_s)|^2 \;\d s\right) \leq TK^2 \int_0^T \|X -Y \|_t^2 \;\d t,
+ \]
+ and the second term with Doob's inequality and the It\^o isometry by
+ \[
+ 4 \E \left(\int_0^T |\sigma(s, X_s) - \sigma(s, Y_s)|^2 \;\d s\right) \leq 4K^2 \int_0^T \|X - Y\|_t^2\;\d t.\fakeqed%\qedhere
+ \]
+\end{proof}
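The Picard iteration in the proof can be imitated numerically on a fixed discretized Brownian path (an illustration, not from the lectures; all parameters arbitrary). For the Lipschitz equation $\d X_t = -\lambda X_t \;\d t + \d B_t$, the fixed point of the discretized map $F$ is exactly the Euler--Maruyama path:

```python
import math
import random

lam, x0, T, n = 1.0, 1.0, 1.0, 1000     # illustrative parameters
dt = T / n
rng = random.Random(5)
dB = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
B = [0.0]
for inc in dB:
    B.append(B[-1] + inc)

def F(X):
    # Discretized version of (F X)_t = x0 + int_0^t (-lam X_s) ds + B_t,
    # with the drift integral as a left-point Riemann sum.
    out = [x0]
    drift = 0.0
    for k in range(n):
        drift += -lam * X[k] * dt
        out.append(x0 + drift + B[k + 1])
    return out

X = [x0] * (n + 1)           # X^0: the constant path
for _ in range(30):          # Picard iterations X^{i+1} = F(X^i)
    X = F(X)
gap = max(abs(a - c) for a, c in zip(X, F(X)))   # distance to the fixed point
```

After thirty iterations the iterates have reached the fixed point up to floating-point error, in line with the factorial decay of the error bounds in the proof.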
+
+\subsection{Examples of stochastic differential equations}
+\begin{eg}[The \term{Ornstein--Uhlenbeck process}]
+ Let $\lambda > 0$. Then the Ornstein--Uhlenbeck process is the solution to
+ \[
+ \d X_t = - \lambda X_t \;\d t + \d B_t.
+ \]
+ The solution exists by the previous theorem, but we can also explicitly find one.
+
+ By It\^o's formula applied to $e^{\lambda t} X_t$, we get
+ \[
+ \d (e^{\lambda t} X_t) = e^{\lambda t} \d X_t + \lambda e^{\lambda t} X_t \;\d t = e^{\lambda t}\;\d B_t.
+ \]
+ So we find that
+ \[
+ X_t = e^{-\lambda t} X_0 + \int_0^t e^{-\lambda (t - s)}\;\d B_s.
+ \]
+ Observe that the integrand is deterministic. So we can in fact interpret this as a Wiener integral.
+\end{eg}
+
+\begin{fact}
+ If $X_0 = x \in \R$ is fixed, then $(X_t)$ is a Gaussian process, i.e.\ $(X_{t_i})_{i = 1}^n $ is jointly Gaussian for all $t_1 < \cdots < t_n$. Any Gaussian process is determined by its mean and covariance, and in this case, we have
+ \[
+ \E X_t = e^{-\lambda t} x,\quad \cov(X_t, X_s) = \frac{1}{2\lambda} \left(e^{-\lambda |t - s|} - e^{-\lambda (t + s)}\right).
+ \]
+\end{fact}
+
+\begin{proof}
+ We only have to compute the covariance. By the It\^o isometry, we have
+ \begin{align*}
+ \E ((X_t - \E X_t) (X_s - \E X_s)) &= \E \left(\int_0^t e^{-\lambda (t - u)}\;\d B_u \int_0^s e^{-\lambda (s - u)}\;\d B_u\right)\\
+ &= e^{-\lambda (t + s)} \int_0^{t \wedge s} e^{2\lambda u}\;\d u.\qedhere
+ \end{align*}
+\end{proof}
+In particular,
+\[
+ X_t \sim N\left(e^{-\lambda t}x, \frac{1 - e^{-2\lambda t}}{2 \lambda}\right) \to N\left(0, \frac{1}{2\lambda}\right).
+\]
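This limiting behaviour can be checked numerically by composing the exact one-step transitions read off from the explicit formula (an illustration, not from the lectures; parameters arbitrary):

```python
import math
import random

def ou_step(x, h, lam, rng):
    # Exact one-step transition: X_{t+h} = e^{-lam h} X_t + G, where
    # G = int_t^{t+h} e^{-lam(t+h-s)} dB_s ~ N(0, (1 - e^{-2 lam h})/(2 lam)).
    sd = math.sqrt((1 - math.exp(-2 * lam * h)) / (2 * lam))
    return math.exp(-lam * h) * x + rng.gauss(0.0, sd)

lam, h, steps, x0 = 0.5, 0.1, 40, 1.0   # total time t = steps * h = 4
rng = random.Random(1)
n = 10000
samples = []
for _ in range(n):
    x = x0
    for _ in range(steps):
        x = ou_step(x, h, lam, rng)
    samples.append(x)

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
# Compare with the law X_t ~ N(e^{-lam t} x0, (1 - e^{-2 lam t})/(2 lam)).
```

The empirical mean and variance should match $e^{-\lambda t} x$ and $(1 - e^{-2\lambda t})/(2\lambda)$ up to Monte Carlo error.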
+\begin{fact}
+ If $X_0 \sim N(0, \frac{1}{2\lambda})$, then $(X_t)$ is a centered Gaussian process with stationary covariance, i.e.\ the covariance depends only on time differences:
+ \[
+ \cov(X_t, X_s) = \frac{1}{2\lambda} e^{-\lambda |t - s|}.
+ \]
+\end{fact}
+The difference from the case of a fixed starting point is that the random initial condition contributes an extra $\frac{1}{2\lambda} e^{-\lambda (t + s)}$ to the covariance (namely $e^{-\lambda (t + s)}$ times the variance of $X_0$), which exactly cancels the $-\frac{1}{2\lambda} e^{-\lambda (t + s)}$ term.
+
+This is a very nice example where we can explicitly understand the long-time behaviour of the SDE. In general, this is non-trivial.
+
+\subsubsection*{Dyson Brownian motion}
+\index{Dyson Brownian motion}
+Let $\mathcal{H}_N$ be the inner product space of real symmetric $N \times N$ matrices with inner product $\bra H, K\ket = N \Tr(HK)$ for $H, K \in \mathcal{H}_N$. Let $H^1, \ldots, H^{\dim(\mathcal{H}_N)}$ be an orthonormal basis for $\mathcal{H}_N$.
+
+\begin{defi}[Gaussian orthogonal ensemble]\index{Gaussian orthogonal ensemble}
+ The \emph{Gaussian Orthogonal Ensemble} GOE$_N$ is the standard Gaussian measure on $\mathcal{H}_N$, i.e.\ $H \sim \mathrm{GOE}_N$ if
+ \[
+ H = \sum_{i = 1}^{\dim \mathcal{H}_N} X^i H^i
+ \]
+ where the $X^i$ are iid standard normals.
+\end{defi}
+
+We now replace each $X^i$ by an Ornstein--Uhlenbeck process with $\lambda = \frac{1}{2}$. Then GOE$_N$ is invariant under the process.
+
+\begin{thm}
+ The eigenvalues $\lambda_1(t) \leq \cdots \leq \lambda_N(t)$ satisfy
+ \[
+ \d \lambda^i_t = \left(-\frac{\lambda^i_t}{2} + \frac{1}{N} \sum_{j \not= i} \frac{1}{\lambda^i_t - \lambda^j_t}\right)\;\d t + \sqrt{\frac{2}{N\beta}}\;\d B^i_t.
+ \]
+ Here $\beta = 1$, but if we replace symmetric matrices by Hermitian ones, we get $\beta = 2$; if we replace symmetric matrices by symplectic ones, we get $\beta = 4$.
+\end{thm}
+This follows from It\^o's formula and formulas for derivatives of eigenvalues.% read more about this.
+
+\begin{eg}[Geometric Brownian motion]\index{Geometric Brownian motion}
+ Fix $\sigma > 0$ and $r \in \R$. Then geometric Brownian motion is given by
+ \[
+ \d X_t = \sigma X_t \;\d B_t + r X_t \;\d t.
+ \]
+ We apply It\^o's formula to $\log X_t$ to find that
+ \[
+ X_t = X_0 \exp \left(\sigma B_t + \left(r - \frac{\sigma^2}{2}\right)t\right).
+ \]
+\end{eg}
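As a numerical sanity check (not from the lectures), one can run the Euler--Maruyama scheme for geometric Brownian motion and compare it against the closed-form solution driven by the same Brownian path; the parameters below are illustrative:

```python
import math
import random

sig, r = 0.2, 0.05            # illustrative volatility and drift
x0, T, n = 1.0, 1.0, 100000
dt = T / n

rng = random.Random(2)
x_em, B = x0, 0.0
for _ in range(n):
    dB = rng.gauss(0.0, math.sqrt(dt))
    x_em += sig * x_em * dB + r * x_em * dt    # Euler-Maruyama step
    B += dB

# Closed form on the same Brownian path:
x_exact = x0 * math.exp(sig * B + (r - sig ** 2 / 2) * T)
```

On a fine grid the two values agree closely, reflecting the strong convergence of the scheme.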
+
+\begin{eg}[Bessel process]\index{Bessel process}
+ Let $B = (B^1, \ldots, B^d)$ be a $d$-dimensional Brownian motion with $d \geq 2$. Then It\^o's formula shows that
+ \[
+ X_t = |B_t|
+ \]
+ satisfies the stochastic differential equation
+ \[
+ \d X_t = \frac{d - 1}{2X_t} \;\d t + \d \beta_t
+ \]
+ for $t < \inf\{t \geq 0: X_t = 0\}$, where $\beta_t = \sum_{i = 1}^d \int_0^t \frac{B^i_s}{|B_s|}\;\d B^i_s$ is a one-dimensional Brownian motion by L\'evy's characterization.
+\end{eg}
+
+\subsection{Representations of solutions to PDEs}
+Recall that in Advanced Probability, we learnt that we can represent the solution to Laplace's equation via Brownian motion, namely if $D$ is a suitably nice domain and $g: \partial D \to \R$ is a function, then the solution to Laplace's equation on $D$ with boundary conditions $g$ is given by
+\[
+ u(\mathbf{x}) = \E_{\mathbf{x}}[g(B_T)],
+\]
+where $T$ is the first hitting time of the boundary $\partial D$.
+
+A similar statement we can make is that if we want to solve the heat equation
+\[
+ \frac{\partial u}{\partial t} = \nabla^2 u
+\]
+with initial conditions $u(x, 0) = u_0(x)$, then we can write the solution as
+\[
+ u(\mathbf{x}, t) = \E_\mathbf{x}[u_0(\sqrt{2} B_t)].
+\]
+This is just a fancy way to say that the Green's function for the heat equation is a Gaussian, but is a good way to think about it nevertheless.
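A quick Monte Carlo illustration of this representation (not from the notes): take $u_0(x) = x^2$, so that $u(x, t) = \E[(x + \sqrt{2} B_t)^2] = x^2 + 2t$, which indeed solves the heat equation:

```python
import math
import random

def heat_mc(u0, x, t, n, rng):
    # u(x, t) = E[u0(x + sqrt(2) B_t)], with B_t ~ N(0, t)
    return sum(u0(x + math.sqrt(2) * rng.gauss(0.0, math.sqrt(t)))
               for _ in range(n)) / n

rng = random.Random(3)
x, t = 1.0, 0.5
approx = heat_mc(lambda y: y * y, x, t, 200000, rng)
exact = x * x + 2 * t      # u_t = u_xx holds: both sides equal 2
```

The Monte Carlo estimate converges to the exact value at the usual $n^{-1/2}$ rate.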
+
+In general, we would like to associate PDEs to certain stochastic processes. Recall that a stochastic differential equation is generally of the form
+\[
+ \d X_t = b(X_t)\;\d t + \sigma(X_t)\;\d B_t
+\]
+for some $b: \R^d \to \R^d$ and $\sigma: \R^d \to \R^{d \times m}$ which are measurable and locally bounded. Here we assume these functions do not have time dependence. We can then associate to this a differential operator $L$ defined by
+\[
+ L = \frac{1}{2} \sum_{i, j} a_{ij} \partial_i \partial_j + \sum_i b_i \partial_i,
+\]
+where $a = \sigma \sigma^T$.
+
+\begin{eg}
+ If $b = 0$ and $\sigma = \sqrt{2} I$, then $L = \Delta$ is the standard Laplacian.
+\end{eg}
+
+The basic computation is the following result, which is a standard application of the It\^o formula:
+\begin{prop}
+ Let $x \in \R^d$, and $X$ a solution to $E_x(\sigma, b)$. Then for every $f: \R_+ \times \R^d \to \R$ that is $C^1$ in $\R_+$ and $C^2$ in $\R^d$, the process
+ \[
+ M_t^f = f(t, X_t) - f(0, X_0) - \int_0^t \left(\frac{\partial}{\partial s} + L\right)f(s, X_s)\;\d s
+ \]
+ is a continuous local martingale.\fakeqed
+\end{prop}
+
+We first apply this to the \term{Dirichlet--Poisson problem}, which is essentially to solve $-Lu = f$. To be precise, let $U \subseteq \R^d$ be non-empty, bounded and open; $f \in C_b(U)$ and $g \in C_b(\partial U)$. We then want to find a $u \in C^2(\bar{U}) = C^2(U) \cap C(\bar{U})$ such that
+\begin{align*}
+ -Lu(x) &= f(x)\quad\text{ for } x \in U\\
+ u(x) &= g(x)\quad\text{ for } x \in \partial U.
+\end{align*}
+If $f = 0$, this is called the \term{Dirichlet problem}; if $g = 0$, this is called the \term{Poisson problem}.
+
+We will have to impose the following technical condition on $a$:
+\begin{defi}[Uniformly elliptic]\index{uniformly elliptic}
+ We say $a: \bar{U} \to \R^{d \times d}$ is \emph{uniformly elliptic} if there is a constant $c > 0$ such that for all $\xi \in \R^d$ and $x \in \bar{U}$, we have
+ \[
+ \xi^T a(x) \xi \geq c |\xi|^2.
+ \]
+\end{defi}
+If $a$ is symmetric (which it is in our case), this is the same as asking for the smallest eigenvalue of $a$ to be bounded away from $0$.
+
+It would be very nice if we can write down a solution to the Dirichlet--Poisson problem using a solution to $E_x(\sigma, b)$, and then simply check that it works. We can indeed do that, but it takes a bit more time than we have. Instead, we shall prove a slightly weaker result that if we happen to have a solution, it must be given by our formula involving the SDE. So we first note the following theorem without proof:
+
+\begin{thm}
+ Assume $U$ has a smooth boundary (or satisfies the exterior cone condition), $a, b$ are H\"older continuous and $a$ is uniformly elliptic. Then for every H\"older continuous $f: \bar{U} \to \R$ and any continuous $g: \partial U \to \R$, the Dirichlet--Poisson problem has a solution.\fakeqed
+\end{thm}
+
+The main theorem is the following:
+\begin{thm}
+ Let $\sigma$ and $b$ be bounded measurable and $\sigma \sigma^T$ uniformly elliptic, $U \subseteq \R^d$ as above. Let $u$ be a solution to the Dirichlet--Poisson problem and $X$ a solution to $E_x(\sigma, b)$ for some $x \in \R^d$. Define the stopping time
+ \[
+ T_U = \inf \{t \geq 0: X_t \not \in U\}.
+ \]
+ Then $\E T_U < \infty$ and
+ \[
+ u(x) = \E_x\left(g(X_{T_U}) + \int_0^{T_U} f(X_s)\;\d s\right).
+ \]
+ In particular, the solution to the PDE is unique.
+\end{thm}
+
+\begin{proof}
+ Our previous proposition applies to functions defined on all of $\R^d$, while $u$ is just defined on $U$. So we set
+ \[
+ U_n = \left\{x \in U: \mathrm{dist}(x, \partial U) > \frac{1}{n}\right\},\quad T_n = \inf \{ t \geq 0 : X_t \not \in U_n\},
+ \]
+ and pick $u_n \in C_b^2(\R^d)$ such that $u|_{U_n} = u_n|_{U_n}$. Recalling our previous notation, let
+ \[
+ M^n_t = (M^{u_n})^{T_n}_t = u_n(X_{t \wedge T_n}) - u_n(X_0) - \int_0^{t \wedge T_n} Lu_n(X_s)\;\d s.
+ \]
+ Then this is a continuous local martingale by the proposition, and it is bounded, hence a true martingale. Thus for $x \in U$ and $n$ large enough, the martingale property implies
+ \begin{multline*}
+ u(x) = u_n(x) =\E \left(u(X_{t \wedge T_n}) - \int_0^{t \wedge T_n} Lu(X_s)\;\d s\right) \\
+ = \E \left(u(X_{t \wedge T_n}) + \int_0^{t \wedge T_n} f(X_s)\;\d s\right).
+ \end{multline*}
+ We would be done if we can take $n \to \infty$. To do so, we first show that $\E [T_U] < \infty$.
+
+ Note that this does not depend on $f$ and $g$. So we can take $f = 1$ and $g = 0$, and let $v$ be a solution. Then we have
+ \[
+ \E (t \wedge T_n) = \E \left(- \int_0^{t \wedge T_n} Lv(X_s)\;\d s\right) = v(x) - \E(v(X_{t \wedge T_n})).
+ \]
+ Since $v$ is bounded, by dominated/monotone convergence, we can take the limit to get
+ \[
+ \E (T_U) < \infty.
+ \]
+
+ Thus, we know that $t \wedge T_n \to T_U$ as $t \to \infty$ and $n \to \infty$. Since
+ \[
+ \E \left(\int_0^{T_U} |f(X_s)|\;\d s\right) \leq \|f\|_\infty \E[T_U] < \infty,
+ \]
+ the dominated convergence theorem tells us
+ \[
+ \E \left(\int_0^{t \wedge T_n} f(X_s)\;\d s \right) \to \E \left(\int_0^{T_U} f(X_s)\;\d s\right).
+ \]
+ Since $u$ is continuous on $\bar{U}$, we also have
+ \[
+ \E (u(X_{t \wedge T_n})) \to \E(u(X_{T_U})) = \E(g(X_{T_U})).\qedhere
+ \]
+\end{proof}
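A discrete toy version of this representation (illustration only, not from the lectures): for the simple random walk on $\{0, \ldots, N\}$, the analogue of $L$ is the discrete Laplacian $(Lu)(k) = \frac{1}{2}(u(k + 1) + u(k - 1)) - u(k)$, and the Dirichlet problem with boundary values $g(0) = 0$, $g(N) = 1$ has solution $u(k) = \E_k[g(\text{exit point})] = k/N$, the gambler's-ruin probability:

```python
# Value iteration for the discrete Dirichlet problem on {0, ..., N}:
# interior equation u(k) = (u(k-1) + u(k+1))/2, boundary u(0) = 0, u(N) = 1.
N = 10
u = [0.0] * (N + 1)
u[N] = 1.0
for _ in range(5000):            # sweep until convergence
    for k in range(1, N):
        u[k] = 0.5 * (u[k - 1] + u[k + 1])
# The probabilistic representation gives u(k) = k/N exactly.
```

The iteration converges geometrically to the unique harmonic extension of the boundary data.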
+
+We can use SDEs to solve the \term{Cauchy problem} for parabolic equations as well, just like the heat equation. The problem is as follows: for $f \in C_b^2(\R^d)$, we want to find $u: \R_+ \times \R^d \to \R$ that is $C^1$ in $\R_+$ and $C^2$ in $\R^d$ such that
+\begin{align*}
+ \frac{\partial u}{\partial t} &= Lu \quad \text{ on }\R_+ \times \R^d\\
+ u(0, \ph) &= f\hphantom{L} \quad \text{ on }\R^d
+\end{align*}
+
+Again we will need the following theorem:
+\begin{thm}
+ For every $f \in C_b^2(\R^d)$, there exists a solution to the Cauchy problem.\fakeqed
+\end{thm}
+
+\begin{thm}
+ Let $u$ be a solution to the Cauchy problem. Let $X$ be a solution to $E_x(\sigma, b)$ for $x \in \R^d$ and $0 \leq s \leq t$. Then
+ \[
+ \E_x(f(X_t) \mid \mathcal{F}_s) = u(t - s, X_s).
+ \]
+ In particular,
+ \[
+ u(t, x) = \E_x(f(X_t)).
+ \]
+\end{thm}
+In particular, this implies $X_t$ is a continuous Markov process.
+
+\begin{proof}
+ The martingale of our earlier proposition involves $\frac{\partial}{\partial s} + L$, but the Cauchy problem involves $\frac{\partial}{\partial t} - L$. So we set $g(s, x) = u(t - s, x)$. Then
+ \[
+ \left(\frac{\partial}{\partial s} + L\right) g(s, x) = - \frac{\partial}{\partial t} u(t - s, x) + Lu(t - s, x) = 0.
+ \]
+ So $g(s, X_s) - g(0, X_0)$ is a martingale (boundedness is an exercise), and hence
+ \[
+ u(t - s, X_s) = g(s, X_s) = \E (g(t, X_t) \mid \mathcal{F}_s) = \E(u(0, X_t) \mid \mathcal{F}_s) = \E(f(X_t) \mid \mathcal{F}_s).\qedhere
+ \]
+\end{proof}
+
+There is a generalization to the \emph{Feynman--Kac formula}.
+\begin{thm}[Feynman--Kac formula]\index{Feynman--Kac formula}
+ Let $f \in C_b^2(\R^d)$ and $V \in C_b(\R^d)$ and suppose that $u: \R_+ \times \R^d \to \R$ satisfies
+ \begin{align*}
+ \frac{\partial u}{\partial t} &= Lu + Vu \quad \text{ on }\R_+ \times \R^d\\
+ u(0, \ph) &= f\hphantom{L+Vu}\quad \text{ on } \R^d,
+ \end{align*}
+ where $Vu = V(x) u(x)$ is given by multiplication.
+
+ Then for all $t > 0$ and $x \in \R^d$, and any solution $X$ to $E_x(\sigma, b)$, we have
+ \[
+ u(t, x) = \E_x\left(f(X_t) \exp \left(\int_0^t V(X_s)\;\d s\right)\right).\fakeqed
+ \]
+\end{thm}
+If $L$ is the Laplacian, then this is the (imaginary-time) Schr\"odinger equation, which is why Feynman was thinking about this.
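A minimal numerical check of the formula (illustration only, not from the lectures): take $\sigma = 1$, $b = 0$, $V \equiv c$ constant and $f \equiv 1$. Then every path contributes exactly $e^{ct}$, matching the solution $u(t, x) = e^{ct}$ of $\partial_t u = Lu + cu$ with initial condition $1$:

```python
import math
import random

def feynman_kac_mc(f, V, t, x, n_paths, n_steps, rng):
    # Monte Carlo for u(t, x) = E_x[f(X_t) exp(int_0^t V(X_s) ds)],
    # with X a standard Brownian motion (sigma = 1, b = 0).
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        y, integral = x, 0.0
        for _ in range(n_steps):
            integral += V(y) * dt          # left-point rule for the integral
            y += rng.gauss(0.0, math.sqrt(dt))
        total += f(y) * math.exp(integral)
    return total / n_paths

c = 0.3
rng = random.Random(4)
approx = feynman_kac_mc(lambda y: 1.0, lambda y: c, 2.0, 0.0, 200, 50, rng)
exact = math.exp(c * 2.0)      # u(t, x) = e^{ct} when V = c and f = 1
```

With constant $V$ and $f \equiv 1$ the Monte Carlo estimator has zero variance, so the agreement is exact up to floating point.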
+%In quantum mechanics, a basic problem is to estimate the lowest eigenvalue of the Schr\"odinger operator, which is the ground state energy. This is hard. This gives a means to do that, as this gives a formula to calculate $e^{-tH}$, since $\frac{1}{t} (e^{-tH} - 1)$ converges to the lowest eigenvalue.
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/symplectic_geometry.tex b/books/cam/III_L/symplectic_geometry.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f2c7dd5adac6015a76dd62a8524e3e8c378ad034
--- /dev/null
+++ b/books/cam/III_L/symplectic_geometry.tex
@@ -0,0 +1,2964 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2018}
+\def\nlecturer {A.\ R.\ Pires}
+\def\ncourse {Symplectic Geometry}
+
+\input{header}
+
+\usepackage{multicol}
+
+\DeclareMathOperator{\grph}{graph}
+\DeclareMathOperator{\Symp}{Symp}
+\DeclareMathOperator{\Ad}{Ad}
+\DeclareMathOperator{\curv}{curv}
+\DeclareMathOperator{\Crit}{Crit}
+\newcommand\red{\mathrm{red}}
+\newcommand\Dolb{\mathrm{Dolb}}
+\newcommand\Vol{\mathrm{Vol}}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+The first part of the course will be an overview of the basic structures of symplectic geometry, including symplectic linear algebra, symplectic manifolds, symplectomorphisms, Darboux theorem, cotangent bundles, Lagrangian submanifolds, and Hamiltonian systems. The course will then go further into two topics. The first one is moment maps and toric symplectic manifolds, and the second one is capacities and symplectic embedding problems.
+
+\subsubsection*{Pre-requisites}
+Some familiarity with basic notions from Differential Geometry and Algebraic Topology will be assumed. The material covered in the respective Michaelmas Term courses would be more than enough background.
+}
+\tableofcontents
+\section{Symplectic manifolds}
+\subsection{Symplectic linear algebra}
+In symplectic geometry, we study symplectic manifolds. These are manifolds equipped with a certain structure on the tangent bundle. In this section, we first analyze the condition fiberwise.
+
+\begin{defi}[Symplectic vector space]\index{symplectic vector space}
+ A \emph{symplectic vector space} is a real vector space $V$ together with a non-degenerate skew-symmetric bilinear map $\Omega: V \times V \to \R$.
+\end{defi}
+
+Recall that
+\begin{defi}[Non-degenerate bilinear map]\index{non-degenerate bilinear map}
+ We say a bilinear map $\Omega$ is non-degenerate if the induced map $\tilde{\Omega}: V \to V^*$ given by $v \mapsto \Omega(v, \ph)$ is bijective.
+\end{defi}
+
+Even if we drop the non-degeneracy condition, there aren't that many symplectic vector spaces around up to isomorphism.
+\begin{thm}[Standard form theorem]
+ Let $V$ be a real vector space and $\Omega$ a skew-symmetric bilinear form. Then there is a basis $\{u_1, \ldots, u_k, e_1, \ldots, e_n, f_1, \ldots, f_n\}$ of $V$ such that
+ \begin{enumerate}
+ \item $\Omega(u_i, v) = 0$ for all $v \in V$.
+ \item $\Omega(e_i, e_j) = \Omega(f_i, f_j) = 0$.
+ \item $\Omega(e_i, f_j) = \delta_{ij}$.
+ \end{enumerate}
+\end{thm}
+
+The proof is a skew-symmetric version of Gram--Schmidt.
+\begin{proof}
+ Let
+ \[
+ U = \{u \in V : \Omega(u, v) = 0\text{ for all }v \in V\},
+ \]
+ and pick a basis $u_1, \ldots, u_k$ of this. Choose any $W$ complementary to $U$.
+
+ First pick $e_1 \in W \setminus \{0\}$ arbitrarily. Since $e_1 \not \in U$, we can pick $f_1$ such that $\Omega(e_1, f_1) = 1$. Then define $W_1 = \spn\{e_1, f_1\}$, and
+ \[
+ W_1^\Omega = \{w \in W: \Omega(w, v) = 0\text{ for all }v \in W_1\}.
+ \]
+ It is clear that $W_1 \cap W_1^\Omega = \{0\}$. Moreover, $W = W_1 \oplus W_1^\Omega$. Indeed, if $v \in W$, then
+ \[
+ v = (\Omega(v, f_1) e_1 - \Omega(v, e_1) f_1) + (v - \Omega(v, f_1) e_1 + \Omega(v, e_1) f_1),
+ \]
+ where the first summand lies in $W_1$ and the second lies in $W_1^\Omega$. Then we are done by induction on the dimension.
+\end{proof}
+Here $k = \dim U$ and $2n = \dim V - k$ are invariants of $(V, \Omega)$. The number $2n$ is called the \term{rank} of $\Omega$. Non-degeneracy is equivalent to $k = 0$.
+
+\begin{ex}
+ $\Omega$ is non-degenerate iff $\Omega^{\wedge n} = \Omega \wedge \cdots \wedge \Omega \not= 0$.
+\end{ex}
+
+By definition, every symplectic vector space is canonically isomorphic to its dual, given by the map $\tilde{\Omega}$. As in the above theorem, a \term{symplectic basis} $\{e_1, \ldots, e_n, f_1, \ldots, f_n\}$ of $V$ is a basis such that
+\[
+ \Omega(e_i, e_j) = \Omega(f_i, f_j) = 0,\quad \Omega(e_i, f_j) = \delta_{ij}.
+\]
+Every symplectic vector space has a symplectic basis, and in particular has even dimension. In such a basis, the matrix representing $\Omega$ is
+\[
+ \Omega_0 = \begin{pmatrix}
+ 0 & I\\
+ -I & 0
+ \end{pmatrix}.
+\]
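A quick numerical check (illustration, not part of the notes): for $n = 2$, the matrix $\Omega_0$ is skew-symmetric and satisfies $\Omega_0^2 = -I$, so it is invertible, i.e.\ the form is non-degenerate:

```python
n = 2                      # so dim V = 4
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
Z = [[0.0] * n for _ in range(n)]
Omega0 = ([Z[i] + I[i] for i in range(n)] +               # rows ( 0  I )
          [[-x for x in I[i]] + Z[i] for i in range(n)])  # rows (-I  0 )

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Om2 = matmul(Omega0, Omega0)
# Omega0^T = -Omega0 and Omega0^2 = -I, hence Omega0 is invertible.
```

The same check works for any $n$ by changing the first line.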
+We will need the following definitions:
+\begin{defi}[Symplectic subspace]\index{symplectic subspace}
+ If $(V, \Omega)$ is a symplectic vector space, a \emph{symplectic subspace} is a subspace $W \subseteq V$ such that $\Omega|_{W \times W}$ is non-degenerate.
+\end{defi}
+
+\begin{defi}[Isotropic subspace]\index{isotropic subspace}
+ If $(V, \Omega)$ is a symplectic vector space, an \emph{isotropic subspace} is a subspace $W \subseteq V$ such that $\Omega|_{W \times W} = 0$.
+\end{defi}
+
+\begin{defi}[Lagrangian subspace]\index{Lagrangian subspace}
+ If $(V, \Omega)$ is a symplectic vector space, a \emph{Lagrangian subspace} is an isotropic subspace $W$ with $\dim W = \frac{1}{2} \dim V$.
+\end{defi}
+
+\begin{defi}[Symplectomorphism]\index{symplectomorphism}
+ A \emph{symplectomorphism} between symplectic vector spaces $(V, \Omega), (V', \Omega')$ is an isomorphism $\varphi: V \to V'$ such that $\varphi^* \Omega' = \Omega$.
+\end{defi}
+
+The standard form theorem says any two $2n$-dimensional symplectic vector spaces $(V, \Omega)$ are symplectomorphic to each other.
+
+\subsection{Symplectic manifolds}
+We are now ready to define symplectic manifolds.
+\begin{defi}[Symplectic manifold]\index{symplectic manifold}
+ A \emph{symplectic manifold} is a manifold $M$ of dimension $2n$ equipped with a $2$-form $\omega$ that is closed (i.e.\ $\d \omega = 0$) and non-degenerate (i.e.\ $\omega^{\wedge n} \not= 0$). We call $\omega$ the \term{symplectic form}.
+\end{defi}
+
+The closedness condition is somewhat mysterious. There are some motivations from classical mechanics, for examples, but they are not very compelling, and the best explanation for the condition is ``these manifolds happen to be quite interesting''.
+
+Observe that by assumption, the form $\omega^{\wedge n}$ is nowhere-vanishing, and so it is a volume form. If our manifold is compact, then pairing the volume form with the fundamental class (i.e.\ integrating it over the manifold) gives the volume, and in particular is non-zero. Thus, $[\omega]^n \not= 0 \in H^{2n}_{\dR}(M)$, and hence $[\omega]^k \not= 0 \in H^{2k}_{\dR}(M)$ for all $k \leq n$. Since the wedge is well-defined on cohomology, an immediate consequence of this is that
+\begin{prop}
+ If a compact manifold $M^{2n}$ is such that $H_{\dR}^{2k}(M) = 0$ for some $k < n$, then $M$ does not admit a symplectic structure.
+\end{prop}
+A more down-to-earth proof of this result would be that if $\omega^k = \d \alpha$ for some $\alpha$, then
+\[
+ \int \omega^n = \int \omega^k \wedge \omega^{n - k} = \int \d (\alpha \wedge \omega^{n - k}) = 0
+\]
+by Stokes' theorem.
+
+\begin{eg}
+ $S^n$ does not admit a symplectic structure unless $n = 2$ (or $n = 0$, if one insists).
+\end{eg}
+
+On the other hand, $S^2$ is a symplectic manifold.
+\begin{eg}
+ Take $M = S^2$. Take the standard embedding in $\R^3$, and for $p \in S^2$, the tangent space consists of the vectors perpendicular to $p$. We can take the symplectic form to be
+ \[
+ \omega_p(u, v) = p \cdot (u \times v).
+ \]
+ Anti-symmetry and non-degeneracy are IA Vectors and Matrices.
+\end{eg}
+
+\begin{defi}[Symplectomorphism]\index{symplectomorphic}
+ Let $(X_1, \omega_1)$ and $(X_2, \omega_2)$ be symplectic manifolds. A \emph{symplectomorphism} is a diffeomorphism $f: X_1 \to X_2$ such that $f^* \omega_2 = \omega_1$.
+\end{defi}
+
+If we have a single fixed manifold, and two symplectic structures on it, there are other notions of ``sameness'' we might consider:
+\begin{defi}[Strongly isotopic]\index{strongly isotopic}
+ Two symplectic structures on $M$ are \emph{strongly isotopic} if there is an isotopy taking one to the other.
+\end{defi}
+
+\begin{defi}[Deformation equivalent]\index{deformation equivalent}
+ Two symplectic structures $\omega_0, \omega_1$ on $M$ are \emph{deformation equivalent} if there is a family of symplectic forms $\omega_t$ that start and end at $\omega_0$ and $\omega_1$ respectively.
+\end{defi}
+
+\begin{defi}[Isotopic]\index{isotopic}
+ Two symplectic structures $\omega_0, \omega_1$ on $M$ are \emph{isotopic} if there is a family of symplectic forms $\omega_t$ that start and end at $\omega_0$ and $\omega_1$ respectively, and further the cohomology class $[\omega_t]$ is independent of $t$.
+\end{defi}
+
+A priori, it is clear that we have the following implications:
+\[
+ \text{symplectomorphic} \Leftarrow \text{strongly isotopic} \Rightarrow \text{isotopic} \Rightarrow \text{deformation equivalent}.
+\]
+It turns out when $M$ is compact, isotopic implies strongly isotopic:
+\begin{thm}[Moser]\index{Moser's trick}
+ If $M$ is compact with a family $\omega_t$ of symplectic forms with $[\omega_t]$ constant, then there is an isotopy $\rho_t: M \to M$ with $\rho_t^* \omega_t = \omega_0$.
+\end{thm}
+The key idea is to express the condition $\rho_t^* \omega_t = \omega_0$ as a differential equation for the time-dependent vector field $v_t$ generating $\rho_t$ (so that $\frac{\d}{\d t} \rho_t = v_t \circ \rho_t$), and ODE theory guarantees the existence of a $v_t$ satisfying the equation. Then compactness allows us to integrate it up to get a genuine $\rho_t$.
+
+\begin{proof}
+ We set $\rho_0$ to be the identity. Then the equation $\rho_t^* \omega_t = \omega_0$ is equivalent to $\rho_t^* \omega_t$ being constant. If $v_t$ is the associated vector field to $\rho_t$, then we need
+ \[
+ 0 = \frac{\d}{\d t}(\rho_t^* \omega_t) = \rho_t^* \left(\mathcal{L}_{v_t} \omega_t + \frac{\d \omega_t}{\d t}\right).
+ \]
+ So we want to solve
+ \[
+ \mathcal{L}_{v_t} \omega_t + \frac{\d \omega_t}{\d t} = 0.
+ \]
+ To solve for this, since $[\frac{\d \omega_t}{\d t}] = 0$, it follows that there is a family $\mu_t$ of $1$-forms such that $\frac{\d \omega_t}{\d t} = \d \mu_t$. Then our equation becomes
+ \[
+ \mathcal{L}_{v_t} \omega_t + \d \mu_t = 0.
+ \]
+ By assumption, $\d \omega_t = 0$. So by Cartan's magic formula, we get
+ \[
+ \d \iota_{v_t} \omega_t + \d \mu_t = 0.
+ \]
+ To solve this equation, it suffices to solve \term{Moser's equation},
+ \[
+ \iota_{v_t} \omega_t + \mu_t = 0,
+ \]
+ which can be solved since $\omega_t$ is non-degenerate.
+\end{proof}
+
+There is also a relative version of this.
+\begin{thm}[Relative Moser]\index{relative Moser's trick}\index{Moser's trick!relative}
+ Let $X \subseteq M$ be a compact submanifold of a manifold $M$, and $\omega_0, \omega_1$ symplectic forms on $M$ agreeing on $X$. Then there are neighbourhoods $U_0, U_1$ of $X$ and a diffeomorphism $\varphi: U_0 \to U_1$ fixing $X$ such that $\varphi^* \omega_1 = \omega_0$.
+\end{thm}
+
+\begin{proof}
+ We set
+ \[
+ \omega_t = (1 - t) \omega_0 + t \omega_1.
+ \]
+ Then this is certainly closed, and since it is constantly $\omega_0$ on $X$, it is non-degenerate on $X$, hence, by compactness, non-degenerate on a small tubular neighbourhood $U_0$ of $X$. Now
+ \[
+ \frac{\d}{\d t} \omega_t = \omega_1 - \omega_0,
+ \]
+ and we know this vanishes on $X$. Since the inclusion of $X$ into a tubular neighbourhood is a homotopy equivalence, we know $[\omega_1 - \omega_0] = 0 \in H^2_{\dR}(U_0)$. Thus, we can find some $\mu$ such that $\omega_1 - \omega_0 = \d \mu$, and the explicit homotopy formula for the retraction onto $X$ lets us choose $\mu$ vanishing on $X$. We then solve Moser's equation, and the resulting isotopy fixes $X$ pointwise since $\mu$ vanishes there.
+\end{proof}
+
We previously had the standard form theorem for symplectic \emph{vector spaces}, which is not surprising --- we have the same for Riemannian metrics, for example. Perhaps more surprisingly, given a symplectic \emph{manifold}, we can always pick coordinate charts where the symplectic form looks like the standard symplectic form throughout:
+\begin{thm}[Darboux theorem]\index{Darboux theorem}
+ If $(M, \omega)$ is a symplectic manifold, and $p \in M$, then there is a chart $(U, x_1, \ldots, x_n, y_1, \ldots, y_n)$ about $p$ on which
+ \[
+ \omega = \sum \d x_i \wedge \d y_i.
+ \]
+\end{thm}
+
+\begin{proof}
+ $\omega$ can certainly be written in this form \emph{at} $p$. Then relative Moser with $X = \{p\}$ promotes this to hold in a neighbourhood of $p$.
+\end{proof}
+Thus, symplectic geometry is a global business, unlike Riemannian geometry, where it is interesting to talk about local properties such as curvature.
+
+A canonical example of a symplectic manifold is the \term{cotangent bundle} of a manifold $X$. In physics, $X$ is called the \term{configuration space}, and $M = T^*X$ is called the \term{phase space}.
+
+On a coordinate chart $(U, x_1, \ldots, x_n)$ for $X$, a generic $1$-form can be written as
+\[
+ \xi = \sum_{i = 1}^n \xi_i \;\d x_i.
+\]
+Thus, we have a coordinate chart on $T^*X$ given by $(T^* U, x_1, \ldots, x_n, \xi_1, \ldots, \xi_n)$. On $T^* U$, there is the \term{tautological $1$-form}
+\[
+ \alpha = \sum_{i = 1}^n \xi_i \;\d x_i.
+\]
+Observe that this is independent of the chart chosen, since it can be characterized in the following coordinate-independent way:
+
+\begin{prop}
+ Let $\pi: M = T^*X \to X$ be the projection map, and $\pi^*: T^*X \to T^* M$ the pullback map. Then for $\xi \in M$, we have $\alpha_\xi = \pi^* \xi$.
+\end{prop}
+
+\begin{proof}
+ On a chart, we have
+ \[
+ \pi^* \xi\left( \frac{\partial}{\partial x_j}\right) = \xi\left(\frac{\partial}{\partial x_j}\right) = \xi_j = \alpha_\xi \left(\frac{\partial}{\partial x_j}\right),
+ \]
+ and similarly both vanish on $\frac{\partial}{\partial \xi_j}$.
+\end{proof}
+This tautological $1$-form is constructed so that the following holds:
+\begin{prop}
 Let $\mu$ be a one-form on $X$, i.e.\ a section $s_\mu: X \to T^* X$. Then $s_\mu^* \alpha = \mu$.
+\end{prop}
+
+\begin{proof}
+ By definition,
+ \[
+ \alpha_\xi = \xi \circ \d \pi.
+ \]
+ So for $x \in X$, we have
+ \[
+ s_\mu^*\alpha_{\mu(x)} = \mu(x) \circ \d \pi \circ \d s_\mu = \mu(x) \circ \d (\pi \circ s_\mu) = \mu(x) \circ \d(\mathrm{id}_X) = \mu(x).\qedhere
+ \]
+\end{proof}
+
+Given this, we can set
+\[
+ \omega = - \d \alpha = \sum_{i = 1}^n \;\d x_i \wedge \;\d \xi_i,
+\]
+the \term{canonical symplectic form} on $T^* X$. It is clear that it is anti-symmetric, closed, and non-degenerate, because it looks just like the canonical symplectic form on $\R^{2n}$.
+
+\begin{eg}
+ Take $X = S^1$, parametrized by $\theta$, and $T^* X = S^1 \times \R$ with coordinates $(\theta, \xi_\theta)$. Then
+ \[
+ \omega = \d \theta \wedge \d \xi_\theta.
+ \]
 We can see explicitly how this is independent of the choice of coordinates. Suppose we parametrized the circle by $\tau = 2 \theta$. Then $\d \tau = 2 \d \theta$. However, since we defined $\xi_\theta$ by
+ \[
+ \xi = \xi_\theta(\xi) \;\d \theta
+ \]
+ for all $\xi \in T^* S^1$, we know that $\xi_\tau = \frac{1}{2} \xi_\theta$, and hence
+ \[
+ \d \theta \wedge \d \xi_\theta = \d \tau \wedge \d \xi_\tau.
+ \]
+\end{eg}
+
+Of course, if we have a diffeomorphism $f: X \to Y$, then the pullback map induces a symplectomorphism $f^*: T^*Y \to T^*X$. However, not all symplectomorphisms arise this way.
+\begin{eg}
+ If $X = S^1$ and $T^* X = S^1 \times \R$ is given the canonical symplectic structure, then the vertical translation $g(\theta, \xi) = (\theta, \xi + c)$ is a symplectomorphism that is not a lift of a diffeomorphism $S^1 \to S^1$.
+\end{eg}
+
+So when does a symplectomorphism come from a diffeomorphism?
+
+\begin{ex}
 A symplectomorphism $g: T^* X \to T^* X$ is a lift of a diffeomorphism $X \to X$ iff $g^* \alpha = \alpha$.
+\end{ex}
+
+\subsection{Symplectomorphisms and Lagrangians}
+We now consider Lagrangian submanifolds. These are interesting for many reasons, but in this section, we are going to use them to understand symplectomorphisms.
+\begin{defi}[Lagrangian submanifold]
+ Let $(M, \omega)$ be a symplectic manifold, and $L \subseteq M$ a submanifold. Then $L$ is a \term{Lagrangian submanifold} if for all $p \in L$, $T_p L$ is a Lagrangian subspace of $T_p M$. Equivalently, the restriction of $\omega$ to $L$ vanishes and $\dim L = \frac{1}{2} \dim M$.
+\end{defi}
+
+\begin{eg}
+ If $(M, \omega)$ is a surface with area form $\omega$, and $L$ is any $1$-dimensional submanifold, then $L$ is Lagrangian, since any $1$-dimensional subspace of $T_p M$ is Lagrangian.
+\end{eg}
+
+What do Lagrangian submanifolds of $T^* X$ look like? Let $\mu$ be a $1$-form on $X$, i.e.\ a section of $T^* X$, and let
+\[
+ X_\mu = \{(x, \mu_x): x \in X\} \subseteq T^* X.
+\]
+The map $s_\mu: X \to T^*X$ is an embedding, and $\dim X_\mu = \dim X = \frac{1}{2} \dim T^* X$. So this is a good candidate for a Lagrangian submanifold. By definition, $X_\mu$ is Lagrangian iff $s_\mu^* \omega = 0$. But
+\[
 s_\mu^* \omega = -s_\mu^* (\d \alpha) = -\d (s_\mu^* \alpha) = -\d \mu.
+\]
+So $X_\mu$ is Lagrangian iff $\mu$ is closed. In particular, if $h \in C^\infty(X)$, then $X_{\d h}$ is a Lagrangian submanifold of $T^* X$. We call $h$ the \term{generating function} of the Lagrangian submanifold.
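The computation $s_\mu^* \omega = \pm \d \mu$ can be checked very concretely: on $T^* \R^2$, the tangent space to the graph $X_\mu$ is spanned by the vectors $(e_i, \partial_i \mu)$, and evaluating $\omega$ on them yields $\partial_2 \mu_1 - \partial_1 \mu_2$, which vanishes exactly when $\mu$ is closed. A small symbolic check (the function name is mine):

```python
import sympy as sp

# On T*R^2 with coordinates (x1, x2, xi1, xi2) and omega = dx1^dxi1 + dx2^dxi2,
# the tangent space of the graph X_mu = {(x, mu(x))} is spanned by
# v_i = (e_i, d(mu)/dx_i). Evaluating omega on (v1, v2) gives
# d2(mu1) - d1(mu2), which vanishes iff mu is closed.

x1, x2 = sp.symbols('x1 x2')

def omega_on_graph(mu1, mu2):
    v1 = [1, 0, sp.diff(mu1, x1), sp.diff(mu2, x1)]
    v2 = [0, 1, sp.diff(mu1, x2), sp.diff(mu2, x2)]
    # omega = dx1 ^ dxi1 + dx2 ^ dxi2 evaluated on the pair (v1, v2):
    return sp.simplify(v1[0]*v2[2] - v2[0]*v1[2]
                       + v1[1]*v2[3] - v2[1]*v1[3])

# Exact (hence closed) form mu = dh with h = x1^2 * x2: graph is Lagrangian.
h = x1**2 * x2
assert omega_on_graph(sp.diff(h, x1), sp.diff(h, x2)) == 0

# Non-closed form mu = -x2 dx1 + x1 dx2: omega restricts to -2 != 0,
# so the graph is not Lagrangian.
assert omega_on_graph(-x2, x1) == -2
```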
+
+There are other examples of Lagrangian submanifolds of the cotangent bundle. Let $S$ be a submanifold of $X$, and $x \in X$. We define the \term{conormal space} of $S$ at $x$ to be
+\[
+ N_x^* S = \{\xi \in T_x^* X : \xi|_{T_x S} = 0\}.
+\]
+The \term{conormal bundle} of $S$ is the union of all $N^*_x S$.
+
+\begin{eg}
+ If $S = \{x\}$, then $N^*S = T^*_x X$.
+\end{eg}
+
+\begin{eg}
 If $S = X$, then $N^*S$ is the zero section of $T^*X$.
+\end{eg}
+
+Note that both of these are $n$-dimensional submanifolds of $T^*X$, and this is of course true in general.
+
+So if we are looking for Lagrangian submanifolds, this is already a good start. In the case of $S = X$, this is certainly a Lagrangian submanifold, and so is $S = \{x\}$. Indeed,
+\begin{prop}
+ Let $L = N^* S$ and $M = T^* X$. Then $L \hookrightarrow M$ is a Lagrangian submanifold.
+\end{prop}
+
+\begin{proof}
 If $S$ is $k$-dimensional, then $S \hookrightarrow X$ locally looks like $\R^k \hookrightarrow \R^n$. In such coordinates, $N^* S$ is cut out by $x_{k + 1} = \cdots = x_n = 0$ and $\xi_1 = \cdots = \xi_k = 0$, so it is $n$-dimensional, and each term of $\omega = \sum \d x_i \wedge \d \xi_i$ has a factor vanishing on $N^*S$.
+\end{proof}
+
+Our objective is to use Lagrangian submanifolds to understand the following problem:
+\begin{problem}
+ Let $(M_1, \omega_1)$ and $(M_2, \omega_2)$ be $2n$-dimensional symplectic manifolds. If $f: M_1 \to M_2$ is a diffeomorphism, when is it a symplectomorphism?
+\end{problem}
+
+Consider the $4n$-dimensional manifold $M_1 \times M_2$, with the \term{product form} $\omega = \pr_1^* \omega_1 + \pr_2^* \omega_2$. We have
+\[
+ \d \omega = \pr_1^* \d \omega_1 + \pr_2^* \d \omega_2 = 0,
+\]
+and $\omega$ is non-degenerate since
+\[
+ \omega^{2n} = \binom{2n}{n} \pr_1^* \omega_1^{\wedge n} \wedge \pr_2^* \omega_2^{\wedge n} \not= 0.
+\]
+So $\omega$ is a symplectic form on $M_1 \times M_2$. In fact, for $\lambda_1, \lambda_2 \in \R \setminus \{0\}$, the combination
+\[
+ \lambda_1 \pr_1^* \omega_1 + \lambda_2 \pr_2^* \omega_2
+\]
is also symplectic. The case we are particularly interested in is $\lambda_1 = 1$ and $\lambda_2 = -1$, where we get the \term{twisted product form}\index{$\tilde{\omega}$}
+\[
+ \tilde{\omega} = \pr_1^* \omega_1 - \pr_2^* \omega_2.
+\]
+
+Let $T_f$ be the graph of $f$, namely
+\[
 T_f = \{(p_1, f(p_1)) : p_1 \in M_1\} \subseteq M_1 \times M_2,
+\]
+and let $\gamma_f: M_1 \to M_1 \times M_2$ be the natural embedding with image $T_f$.
+
+\begin{prop}
+ $f$ is a symplectomorphism iff $T_f$ is a Lagrangian submanifold of $(M_1 \times M_2, \tilde{\omega})$.
+\end{prop}
+\begin{proof}
+ The first condition is $f^* \omega_2 = \omega_1$, and the second condition is $\gamma_f^* \tilde{\omega} = 0$, and
+ \[
 \gamma_f^* \tilde{\omega} = \gamma_f^* \pr_1^* \omega_1 - \gamma_f^* \pr_2^* \omega_2 = \omega_1 - f^* \omega_2.\qedhere
+ \]
+\end{proof}
+
+We are particularly interested in the case where $M_i$ are cotangent bundles. If $X_1, X_2$ are $n$-dimensional manifolds and $M_i = T^* X_i$, then we can naturally identify
+\[
+ T^*(X_1 \times X_2) \cong T^*X_1 \times T^* X_2.
+\]
+The canonical symplectic form on the left agrees with the canonical symplectic form on the right. However, we want the \emph{twisted} symplectic form instead. So we define an involution
+\[
+ \sigma: M_1 \times M_2 \to M_1 \times M_2
+\]
+that fixes $M_1$ and sends $(x, \xi) \mapsto (x, -\xi)$ on $M_2$. Then $\sigma^* \omega = \tilde{\omega}$.
+
+Then, to find a symplectomorphism $M_1 \to M_2$, we can carry out the following procedure:
+\begin{enumerate}
+ \item Start with $L$ a Lagrangian submanifold of $(M_1 \times M_2, \omega)$.
+ \item Twist it to get $L^\sigma = \sigma(L)$.
+ \item Check if $L^\sigma$ is the graph of a diffeomorphism.
+\end{enumerate}
This doesn't sound like a very helpful specification, until we remember that we have a rather systematic way of producing Lagrangian submanifolds of $(M_1 \times M_2, \omega)$. We pick a generating function $f \in C^\infty(X_1 \times X_2)$. Then $\d f$ is a closed form on $X_1 \times X_2$, and
+\[
+ L = X_{\d f} = \{(x, y, \partial_x f_{(x, y)}, \partial_y f_{(x, y)}): (x, y) \in X_1 \times X_2\}
+\]
+is a Lagrangian submanifold of $(M_1 \times M_2, \omega)$.
+
+The twist $L^\sigma$ is then
+\[
 L^\sigma = \{(x, y, \partial_x f_{(x, y)}, -\partial_y f_{(x, y)}): (x, y) \in X_1 \times X_2\}.
+\]
+We then have to check if $L^\sigma$ is the graph of a diffeomorphism $\varphi: M_1 \to M_2$. If so, we win, and we call this the symplectomorphism generated by $f$\index{symplectomorphism generated by $f$}. Equivalently, we say $f$ is the \term{generating function} for $\varphi$.
+
+To check if $L^\sigma$ is the graph of a diffeomorphism, for each $(x, \xi) \in T^*X_1$, we need to find $(y, \eta) \in T^* X_2$ such that
+\[
+ \xi = \partial_x f_{(x, y)},\quad \eta = -\partial_y f_{(x, y)}.
+\]
+In local coordinates $\{x_i, \xi_i\}$ for $T^* X_1$ and $\{y_i, \eta_i\}$ for $T^*X_2$, we want to solve
+\begin{align*}
+ \xi_i &= \frac{\partial f}{\partial x_i} (x, y)\\
+ \eta_i &= - \frac{\partial f}{\partial y_i} (x, y)
+\end{align*}
+The only equation that requires solving is the first equation for $y$, and then the second equation tells us what we should pick $\eta_i$ to be.
+
+For the first equation to have a unique smoothly-varying solution for $y$, locally, we must require
+\[
+ \det \left[\frac{\partial}{\partial y_i}\left(\frac{\partial f}{\partial x_j}\right)\right]_{i, j} \not= 0.
+\]
This is a necessary condition for $f$ to generate a symplectomorphism $\varphi$. If this is satisfied, then we should think a bit harder to solve the global problem of whether $y$ is uniquely defined.
+
+We can in fact do this quite explicitly:
+\begin{eg}
+ Let $X_1 = X_2 = \R^n$, and $f: \R^n \times \R^n \to \R$ given by
+ \[
+ (x, y) \mapsto - \frac{|x - y|^2}{2} = -\frac{1}{2} \sum_{i = 1}^n (x_i - y_i)^2.
+ \]
+ What, if any, symplectomorphism is generated by $f$? We have to solve
+ \begin{align*}
+ \xi_i &= \frac{\partial f}{\partial x_i} = y_i - x_i\\
+ \eta_i &= -\frac{\partial f}{\partial y_i} = y_i - x_i
+ \end{align*}
+ So we find that
+ \begin{align*}
+ y_i &= x_i + \xi_i,\\
+ \eta_i &= \xi_i.
+ \end{align*}
+ So we have
+ \[
+ \varphi(x, \xi) = (x + \xi, \xi).
+ \]
+ If we use the Euclidean inner product to identify $T^* \R^n \cong \R^{2n}$, and view $\xi$ as a vector, then $\varphi$ is a free translation in $\R^n$.
+\end{eg}
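As a sanity check, the map $\varphi(x, \xi) = (x + \xi, \xi)$ is linear on $T^* \R^n \cong \R^{2n}$, so it is a symplectomorphism iff its matrix $D$ satisfies $D^T J D = J$, where $J$ is the matrix of the canonical symplectic form. A quick numerical verification:

```python
import numpy as np

# The generating function f(x, y) = -|x - y|^2 / 2 produces
# phi(x, xi) = (x + xi, xi) on T*R^n = R^2n. Since phi is linear, it
# preserves the standard symplectic form iff its (constant) Jacobian D
# satisfies D^T J D = J, where J is the matrix of omega.

n = 3
I, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])   # omega(u, v) = u^T J v
D = np.block([[I, I], [Z, I]])    # Jacobian of (x, xi) -> (x + xi, xi)
assert np.allclose(D.T @ J @ D, J)
```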
+
+That seems boring, but in fact this generalizes quite well.
+\begin{eg}
+ Let $X_1 = X_2 = X$ be a connected manifold, and $g$ a Riemannian metric on $X$. We then have the Riemannian distance $d: X \times X \to \R$, and we can define
+ \begin{align*}
+ f: X \times X &\to \R\\
+ (x, y) &\mapsto - \frac{1}{2} d(x, y)^2.
+ \end{align*}
+ Using the Riemannian metric to identify $T^*X \cong TX$, we can check that $\varphi: TX \to TX$ is given by
+ \[
+ \varphi(x, v) = (\gamma(1), \dot{\gamma}(1)),
+ \]
+ where $\gamma$ is the geodesic with $\gamma(0) = x$ and $\dot{\gamma}(0) = v$. Thus, $\varphi$ is the geodesic flow.
+\end{eg}
+
+\subsection{Periodic points of symplectomorphisms}
+Symplectic geometers have an unreasonable obsession with periodic point problems.
+
+\begin{defi}[Periodic point]\index{periodic point}\index{$n$-periodic point}
+ Let $(M, \omega)$ be a symplectic manifold, and $\varphi: M \to M$ a symplectomorphism. An \emph{$n$-periodic point} of $\varphi$ is an $x \in M$ such that $\varphi^n(x) = x$. A \emph{periodic point} is an $n$-periodic point for some $n$.
+\end{defi}
+
We are particularly interested in the case where $M = T^*X$ with the canonical symplectic form on $M$, and $\varphi$ is generated by a function $f: X \times X \to \R$. We begin with $1$-periodic points, namely fixed points of $\varphi$.
+
+If $\varphi(x_0, \xi_0) = (x_0, \xi_0)$, this means we have
+\begin{align*}
+ \xi_0 &= \partial_x f_{(x_0, x_0)}\\
+ \xi_0 &= -\partial_y f_{(x_0, x_0)}
+\end{align*}
+In other words, we need
+\[
+ (\partial_x + \partial_y) f_{(x_0, x_0)} = 0.
+\]
+Thus, if we define
+\begin{align*}
+ \psi: X &\to \R\\
+ x &\mapsto f(x, x),
+\end{align*}
+then we see that
+\begin{prop}
 The fixed points of $\varphi$ are in one-to-one correspondence with the critical points of $\psi$.
+\end{prop}
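To see the correspondence in action, here is a toy example of my own (not from the notes): perturb the earlier quadratic generating function to $f(x, y) = -(x - y)^2/2 - \varepsilon \cos y$ on $X = \R$. Solving $\xi = \partial_x f$ and $\eta = -\partial_y f$ gives $\varphi(x, \xi) = (x + \xi, \xi - \varepsilon \sin(x + \xi))$, and the critical points of $\psi(x) = f(x, x) = -\varepsilon \cos x$ at $x = 0, \pi$, paired with $\xi = \partial_x f(x, x) = 0$, are exactly fixed points:

```python
import numpy as np

# Hypothetical generating function f(x, y) = -(x - y)^2/2 - eps*cos(y).
# Solving xi = df/dx = y - x and eta = -df/dy = (y - x) - eps*sin(y) gives
#   phi(x, xi) = (x + xi, xi - eps * sin(x + xi)).

eps = 0.3

def phi(x, xi):
    y = x + xi
    return y, xi - eps * np.sin(y)

# psi(x) = f(x, x) = -eps*cos(x) has critical points at x = 0 and x = pi,
# and there xi = df/dx(x, x) = 0. These are fixed points of phi:
for x0 in (0.0, np.pi):
    assert np.allclose(phi(x0, 0.0), (x0, 0.0))

# A non-critical point of psi, e.g. x = pi/2 (with xi = 0), is not fixed:
assert not np.allclose(phi(np.pi / 2, 0.0), (np.pi / 2, 0.0))
```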
+
+It is then natural to consider the same question for $\varphi^n$. One way to think about this is to consider the symplectomorphism $\tilde{\varphi}: M^n \to M^n$ given by
+\[
+ \tilde{\varphi}(m_1, m_2, \ldots, m_n) = (\varphi(m_n), \varphi(m_1), \ldots, \varphi(m_{n - 1})).
+\]
+Then fixed points of $\tilde{\varphi}$ correspond exactly to the $n$-periodic points of $\varphi$.
+
+We now have to find a generating function for this $\tilde{\varphi}$. By inspection, we see that this works:
+\[
+ \tilde{f}((x_1, \ldots, x_n), (y_1, \ldots, y_n)) = f(x_1, y_2) + f(x_2, y_3) + \cdots + f(x_n, y_1).
+\]
+Thus, we deduce that
+\begin{prop}
+ The $n$-periodic points of $\varphi$ are in one-to-one correspondence with the critical points of
+ \[
+ \psi_n(x_1, \ldots, x_n) = f(x_1, x_2) + f(x_2, x_3) + \cdots + f(x_n, x_1).
+ \]
+\end{prop}
+%Thus, if $\varphi^N$ is generated by some $f_N: X \times X \to \R$, then the fixed points of $\varphi^N$ are in one-to-one correspondence with critical points of $\psi_N$.
+%
+%Then natural question is then if $f_N$ exists. The answer is sort-of. We fix $x, y \in X$. We define the map
+%\begin{align*}
+% X &\to \R\\
+% z &\mapsto f(x, z) + f(z, y).
+%\end{align*}
+%Suppose that this map always has a unique critical point $z_0$ and that $z_0$ is non-degenerate. We let
+%\[
+% f_2(x, y) = f(x, z_0) + f(z_0, y).
+%\]
+%Then we get
+%\begin{align*}
+% \partial_x f_2(x, y) &= \partial_x f(x, z_0)\\
+% \partial_y f_2(x, y) &= \partial_y f(z_0, y),
+%\end{align*}
+%We then see that $f_2$ is a smooth function, and is a generating function for $\varphi^2$. Indeed,
+%\begin{align*}
+% \varphi^2(x, \partial_x f_2(x, y)) &= \varphi(\varphi(x, \partial_x f(x, z_0)))\\
+% &= \varphi(z_0, -\partial_y f(x, z_0))\\
+% &= \varphi(z_0, \partial_x f(z_0, y))\\
+% &= (y, -\partial_y f(z_0, y))\\
+% &= (y, -\partial_y f_2(x, y)),
+%\end{align*}
+%using that $\partial_y f(x, z_0) + \partial_x f(z_0, y) = 0$ as $z_0$ is a critical point.
+%
+%Can we do the same thing for $\varphi^3$? Similarly, consider the map
+%\begin{align*}
+% X \times X &\to \R\\
+% (z, w) &\mapsto f(x, z) + f(z, w) + f(w, y).
+%\end{align*}
+%Suppose there is a unique critical point $(z_0, w_0)$ and is non-degenerate. Let
+%\[
+% f_3(x, y) = f(x, z_0) + f(z_0, w_0) + f(w_0, y).
+%\]
+%Similarly, one can show that $f_3$ is smooth and is a generating function for $\varphi^3$. In general, it follows that, under certain hypothesis, the fixed points of $\varphi^N$ are in one-to-one correspondence with the critical points of
+%\[
+% (x_1, x_2, \ldots, x_n) \mapsto f(x_1, x_2) + f(x_2, x_3) + \cdots + f(x_{n - 1}, x_n) + f(x_n, x_1).
+%\]
+\begin{eg}
+ We consider the problem of periodic billiard balls. Suppose we have a bounded, convex region $Y \subseteq \R^2$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \node {$Y$};
+ \end{tikzpicture}
+ \end{center}
+ We put a billiard ball in $Y$ and set it in motion. We want to see if it has periodic orbits.
+
+ Two subsequent collisions tend to look like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) arc (180:330:2);
+
+ \draw [dashed] (-1.5, 0.2) -- (-1.532, -1.286) -- (1.074, -1.687) -- +(73.71:1);
+
+ \node [circ] at (-1.532, -1.286) {};
+ \node [anchor = north east] at (-1.532, -1.286) {$x$};
+
+ \draw [densely dotted] (-1.532, -1.286) +(130:1.5) -- +(-50:1.5);
+
+ \draw (-1.532, -1.286) +(-50:0.4) arc(-50:-8.76:0.4) node [pos=0.1, right] {$\theta$};
+ \end{tikzpicture}
+ \end{center}
 We can parametrize this collision as $(x, v = \cos \theta) \in \partial Y \times (-1, 1)$. We can think of this as the unit disk bundle of the cotangent bundle of $X = \partial Y \cong S^1$ with canonical symplectic form $\d x \wedge \d v$, using a fixed parametrization $\chi: S^1 \cong \R/\Z \to X$. If $\varphi(x, v) = (y, w)$, then $v$ is the projection of the unit vector pointing from $x$ to $y$ onto the tangent line at $x$. Thus,
+ \[
+ v = \frac{\chi(y) - \chi(x)}{\|\chi(y) - \chi(x)\|} \cdot \left.\frac{\d \chi}{\d s}\right|_{s = x} = \frac{\partial}{\partial x} (-\|\chi(x) - \chi(y)\|),
+ \]
+ and a similar formula holds for $w$, using that the angle of incidence is equal to the angle of reflection. Thus, we know that
+ \[
 f(x, y) = -\|\chi(x) - \chi(y)\|
+ \]
+ is a generating function for $\varphi$.
+%
+% There is then a function $\varphi$ that sends $(x, v)$ to the coordinates of the next collision. We claim this is in fact a symplectomorphism, and to do so, we shall find a generating function for it.
+%
+% Let $Y \subseteq \R^2$ be a bounded, convex region, thought of as a billiard table, and let $X = \partial Y \cong S^1$. Let $\chi: S^1 \cong \R/\Z \to \R^2$ be a parametrization of $X$.
+%
+% The laws of physics determine a map $\varphi: X \times (-1, 1) \to X \times (-1, 1)$ as follows --- given $(x, \cos \theta)$, consider a billiard ball that leaves $X$ at an angle $\theta$ from the tangent. Then $\varphi(x, v) = (y, \cos \tau)$ is so that $y$ is the point where the ball first hits the boundary, and $\cos \tau$ is the angle at which the ball leaves after bouncing off.
+%
+% We can think of $X \times (-1, 1)$ as the unit ball in the cotangent, and $\omega = \d x \wedge \d v$ is the canonical symplectic form. We claim that $\varphi$ is area preserving. To show this, we find a generating function $f$ for $\varphi$. We define $f: X \times X \to \R$ by
+% \[
+% f(x, y) = - \|\chi(x) - \chi(y)\|.
+% \]
+% This is smooth off the diagonal, and we have
+% \begin{align*}
+% \frac{\partial f}{\partial x}(x, y) &= \frac{\chi(y) - \chi(x)}{\|\chi(y) - \chi(x)\|} \cdot \left. \frac{\d \chi}{\d s} \right|_{s = x} = \cos \theta = v\\
+% \frac{\partial f}{\partial y}(x, y) &= \frac{\chi(x) - \chi(y)}{\|\chi(y) - \chi(x)\|} \cdot \left. \frac{\d \chi}{\d s} \right|_{s = y} = - \cos \nu = -w.
+% \end{align*}
+% So $f$ is indeed a generating function for $\varphi$.
+
+ The conclusion is that the $N$-periodic points are given by the critical points of
+ \[
 (x_1, \ldots, x_N) \mapsto - \|\chi(x_1) - \chi(x_2)\| - \|\chi(x_2) - \chi(x_3)\| - \cdots - \|\chi(x_N) - \chi(x_1)\|.
+ \]
+ Up to a sign, this is the length of the ``generalized polygon'' with vertices $(x_1, \ldots, x_N)$.
+\end{eg}
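We can look for such critical points numerically. For the unit-disk table with $\chi(t) = (\cos t, \sin t)$, maximizing the total chord length for $N = 3$ from a generic starting point should land on an inscribed equilateral triangle, whose chords all have length $\sqrt{3}$. (This numerical setup is my own illustration, not from the notes.)

```python
import numpy as np
from scipy.optimize import minimize

# Billiards in the unit disc: 3-periodic orbits are critical points of the
# total chord length (x1, x2, x3) -> sum of |chi(x_i) - chi(x_{i+1})| with
# chi(t) = (cos t, sin t). On the unit circle the chord between angles a, b
# has length 2|sin((a - b)/2)|.

def chord(a, b):
    return 2.0 * abs(np.sin((a - b) / 2.0))

def neg_perimeter(t):
    t1, t2, t3 = t
    return -(chord(t1, t2) + chord(t2, t3) + chord(t3, t1))

# Maximize the perimeter (minimize its negative) from a generic start:
res = minimize(neg_perimeter, x0=np.array([0.1, 2.0, 4.5]))
t1, t2, t3 = res.x
lengths = [chord(t1, t2), chord(t2, t3), chord(t3, t1)]
# The maximizer is an inscribed equilateral triangle: all chords = sqrt(3).
assert np.allclose(lengths, np.sqrt(3), atol=1e-3)
```

The maximum found is one member of a whole circle of equilateral-triangle orbits, reflecting the rotational symmetry of this particular table.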
+
In general, if $X$ is a compact manifold, and $\varphi: T^*X \to T^*X$ is a symplectomorphism generated by a function $f$, then the fixed points of $\varphi$ are in one-to-one correspondence with the critical points of $\psi(x) = f(x, x)$. By compactness, $\psi$ attains at least a minimum and a maximum, so $\varphi$ has at least two fixed points.
+
+In fact,
+\begin{thm}[Poincar\'e's last geometric theorem (Birkhoff, 1925)]\index{Poincar\'e's last geometric theorem}
 Let $A = S^1 \times [-1, 1]$ be the closed annulus, and let $\varphi: A \to A$ be an area-preserving diffeomorphism that preserves the boundary components and twists them in opposite directions. Then $\varphi$ has at least two fixed points.
+\end{thm}
+
+\subsection{Lagrangian submanifolds and fixed points}
Recall what we have got so far. Let $(M, \omega)$ be a symplectic manifold, and $\varphi: M \to M$ a diffeomorphism. Then the graph of $\varphi$ is a subset of $M \times M$. Again let $\tilde{\omega}$ be the twisted product form on $M \times M$. Then we saw that $\varphi$ is a symplectomorphism iff the graph of $\varphi$ is Lagrangian in $(M \times M, \tilde{\omega})$. Moreover, the set of fixed points is exactly $\Delta \cap \grph \varphi$, where $\Delta = \grph \id_M$ is the diagonal.
+
If $\varphi$ is ``close to'' the identity map, then its graph is close to $\Delta$. Thus, we are naturally led to understanding neighbourhoods of $\Delta$. An important theorem is the following symplectic version of the tubular neighbourhood theorem:
+
+\begin{thm}[Lagrangian neighbourhood theorem]\index{Lagrangian neighbourhood theorem}
 Let $(M, \omega)$ be a symplectic manifold, $X$ a compact Lagrangian submanifold, and $\omega_0$ the canonical symplectic form on $T^* X$. Then there exist neighbourhoods $\mathcal{U}_0$ of $X$ in $T^* X$ and $\mathcal{U}$ of $X$ in $M$ and a symplectomorphism $\varphi: \mathcal{U}_0 \to \mathcal{U}$ sending $X$ to $X$.
+\end{thm}
+
+An equivalent theorem is the following:
+\begin{thm}[Weinstein]
 Let $M$ be a $2n$-dimensional manifold, $X$ an $n$-dimensional compact submanifold, $i: X \hookrightarrow M$ the inclusion, and $\omega_0, \omega_1$ symplectic forms on $M$ such that $i^* \omega_0 = i^* \omega_1 = 0$, i.e.\ $X$ is Lagrangian with respect to both symplectic structures. Then there exist neighbourhoods $\mathcal{U}_0, \mathcal{U}_1$ of $X$ in $M$ and a diffeomorphism $\rho: \mathcal{U}_0 \to \mathcal{U}_1$ such that $\rho|_X = \id_X$ and $\rho^* \omega_1 = \omega_0$.
+\end{thm}
+
We first prove these are equivalent. This amounts to identifying the cotangent bundle of $X$ with the normal bundle of $X$.
+
+\begin{proof}[Proof of equivalence]
+ If $(V, \Omega)$ is a symplectic vector space, $L$ a Lagrangian subspace, the bilinear form
+ \begin{align*}
+ \Omega: V/L \times L &\to \R\\
 ([v], u) &\mapsto \Omega(v, u)
+ \end{align*}
+ is non-degenerate and gives a natural isomorphism $V/L \cong L^*$. Taking $V = T_p M$ and $L = T_p X$, we get an isomorphism
+ \[
+ NX = TM|_X /TX \cong T^* X.
+ \]
+ Thus, by the standard tubular neighbourhood theorem, there is a neighbourhood $\mathcal{N}_0$ of $X$ in $NX$ and a neighbourhood $\mathcal{N}$ of $X$ in $M$, and a diffeomorphism $\psi: \mathcal{N}_0 \to \mathcal{N}$. We now have two symplectic forms on $\mathcal{N}_0$ --- the one from the cotangent bundle and the pullback of that from $M$. Then applying the second theorem gives the first.
+
+ Conversely, if we know the first theorem, then applying this twice gives us symplectomorphisms between neighbourhoods of $X$ in $M$ under each symplectic structure with a neighbourhood in the cotangent bundle.
+\end{proof}
+
+It now remains to prove the second theorem. This is essentially an application of the relative Moser theorem, which is where the magic happens. The bulk of the proof is to get ourselves into a situation where relative Moser applies.
+\begin{proof}[Proof of second theorem]
+ For $p \in X$, we define $V = T_p M$ and $U = T_p X$, and $W$ any complement of $U$. By assumption, $U$ is a Lagrangian subspace of both $(V, \omega_0|_p = \Omega_0)$ and $(V, \omega_1|_p = \Omega_1)$. We apply the following linear-algebraic lemma:
+
+ \begin{lemma}
 Let $V$ be a $2n$-dimensional vector space, $\Omega_0, \Omega_1$ symplectic structures on $V$. Suppose $U$ is a subspace of $V$ Lagrangian with respect to both $\Omega_0$ and $\Omega_1$, and $W$ is any complement of $U$. Then we can construct canonically a linear isomorphism $H: V \to V$ such that $H|_U = \id_U$ and $H^* \Omega_1 = \Omega_0$.
+
 Note that the statement of the lemma doesn't mention $W$, but the construction of $H$ requires a complement of $U$, so it is canonical only after we pick a $W$. % prove lemma
+ \end{lemma}
 By this lemma, we get canonically an isomorphism $H_p: T_p M \to T_p M$ such that $H_p|_{T_p X} = \id_{T_p X}$ and $H_p^* \omega_1|_p = \omega_0|_p$. The canonicity implies $H_p$ varies smoothly with $p$. We now apply the Whitney extension theorem:
+ \begin{thm}[Whitney extension theorem]\index{Whitney extension theorem}
 Let $X$ be a submanifold of $M$, and $H_p: T_p M \to T_p M$ a smooth family of isomorphisms such that $H_p|_{T_p X} = \id_{T_p X}$. Then there exists a neighbourhood $\mathcal{N}$ of $X$ in $M$ and an embedding $h: \mathcal{N} \to M$ such that $h|_X = \id_X$ and for all $p \in X$, $\d h_p = H_p$.
+ \end{thm}
 So at $p \in X$, we have $h^* \omega_1|_p = (\d h_p)^*\omega_1|_p = H_p^* \omega_1|_p = \omega_0|_p$. So we are done by relative Moser.
+\end{proof}
+
+%\begin{proof}[Proof sketch of lemma]
+% We first claim that for $V$ a vector space, $\Omega$ a symplectic form, $U$ a Lagrangian subspace and $W$ any complement of $U$, there is a canonical way to convert $W$ into a Lagrangian complement of $V$, by taking the linear map $A: W \to U$ such that $\Omega(A\omega, \ph) = - \frac{1}{2} \Omega(w, \ph)$, and then taking % this might be wrong
+% \[
+% W' = (I - A) W.
+% \]
+% Thus in the situation of the lemma, we get two complements $W_0, W_1$ of $V$ that are Lagrangian with respect to $\Omega_0$ and $\Omega_1$ respectively. We set
+% \[
+% H = \id_U \oplus B: V \oplus W_0 \to U \oplus W_1,
+% \]
+% where $B$ satisfies $\Omega_0(w, u) = \Omega_1(Bw, u)$. Then $H^* \Omega_1 = \Omega_0$.
+%\end{proof}
+
+\begin{eg}
+ We can use this result to understand the neighbourhood of the identity in the group $\Symp(M, \omega)$ of symplectomorphisms of a symplectic manifold $(M, \omega)$.
+
+ Suppose $\varphi, \id \in \Symp(M, \omega)$. Then the graphs $\Gamma_\varphi$ and $\Delta = \Gamma_{\id}$ are Lagrangian submanifolds of $(M \times M, \tilde{\omega})$. Then by our theorem, there is a neighbourhood $\mathcal{U}$ of $\Delta$ in $(M \times M, \tilde{\omega})$ that is symplectomorphic to a neighbourhood $\mathcal{U}_0$ of the zero section of $(T^*M, \omega_0)$.
+
+ Suppose $\varphi$ is sufficiently $C^0$-close to $\id$. Then $\Gamma_\varphi \subseteq \mathcal{U}$. If $\varphi$ is sufficiently $C^1$-close to the identity, then the image of $\Gamma_\varphi$ in $\mathcal{U}_0$ is a smooth section $X_\mu$ for some $1$-form $\mu$.
+
 Now $\varphi$ is a symplectomorphism iff $\Gamma_\varphi$ is Lagrangian iff $X_\mu$ is Lagrangian iff $\mu$ is closed. So a small $C^1$-neighbourhood of $\id$ in $\Symp(M, \omega)$ is ``the same as'' a small $C^1$-neighbourhood of the zero-section in the space of closed $1$-forms on $M$.
+\end{eg}
+
+We can also use this to understand fixed points of symplectomorphisms (again!).
+\begin{thm}
+ Let $(M, \omega)$ be a compact symplectic manifold such that $H^1_{\dR}(M) = 0$. Then any symplectomorphism $\varphi: M \to M$ sufficiently close to the identity has at least two fixed points.
+\end{thm}
+
+\begin{proof}
 The graph of $\varphi$ corresponds to a closed $1$-form $\mu$ on $M$. Since $\mu$ is closed and $H^1_{\dR}(M) = 0$, we know $\mu = \d h$ for some $h \in C^\infty(M)$. Since $M$ is compact, $h$ has at least two critical points (the global maximum and minimum). Since the fixed points of $\varphi$ correspond to the points where $\mu$ vanishes (so that $\Gamma_\varphi$ intersects $\Delta$), we are done.
+\end{proof}
+Counting fixed points of symplectomorphisms is a rather popular topic in symplectic geometry, and Arnold made some conjectures about these. A version of this is
+\begin{thm}[Arnold conjecture] % check
+ Let $(M, \omega)$ be a compact symplectic manifold of dimension $2n$, and $\varphi: M \to M$ a symplectomorphism. Suppose $\varphi$ is \emph{exactly homotopic} to the identity and \emph{non-degenerate}. Then the number of fixed points of $\varphi$ is at least $\sum_{i = 0}^{2n} \dim H^i(M, \R)$.
+\end{thm}
+We should think of the sum $\sum_{i = 0}^{2n} \dim H^i(M, \R)$ as the minimal number of critical points of a function, as Morse theory tells us.
+
+We ought to define the words we used in the theorem:
+
+\begin{defi}[Exactly homotopic]\index{exactly homotopic}
 We say $\varphi$ is \emph{exactly homotopic} to the identity if there is an isotopy $\rho_t: M \to M$ such that $\rho_0 = \id$ and $\rho_1 = \varphi$, and further there is some $1$-periodic family of functions $h_t$ such that $\rho_t$ is generated by the vector field $v_t$ defined by $\iota_{v_t} \omega = \d h_t$.
+\end{defi}
The condition $\iota_{v_t} \omega = \d h_t$ says $v_t$ is a Hamiltonian vector field, which we will discuss soon after this.
+
+\begin{defi}[Non-degenerate function]\index{non-degenerate function}
 An endomorphism $\varphi: M \to M$ is \emph{non-degenerate} iff all its fixed points are non-degenerate, i.e.\ if $p$ is a fixed point, then $\det (\id - \d \varphi_p) \not= 0$.
+\end{defi}
+\begin{eg}
 In the original statement of the Arnold conjecture, which concerns the case $(T^2, \d \theta_1 \wedge \d \theta_2)$, any such symplectomorphism has at least $4$ fixed points.
+\end{eg}
+
+In the case where $h_t$ is actually not time dependent, Arnold's conjecture is easy to prove.
+
+\section{Complex structures}
+\subsection{Almost complex structures}
+Symplectic manifolds are very closely related to complex manifolds. A first (weak) hint of this fact is that they both have to be even dimensional! In general, symplectic manifolds will have \emph{almost} complex structures, but this need not actually come from a genuine complex structure. If it does, then we say it is \emph{K\"ahler}, which is a very strong structure on the manifold.
+
+We begin by explaining what almost complex structures are, again starting from the linear algebraic side of the story. A complex vector space can be thought of as a real vector space with a linear endomorphism that acts as ``multiplication by $i$''.
+
+\begin{defi}[Complex structure]\index{complex structure}
+ Let $V$ be a vector space. A \emph{complex structure} is a linear $J: V \to V$ with $J^2 = -1$.
+\end{defi}
Here we call it a complex structure. When we move on to manifolds, we will call this an ``almost complex structure'', and reserve ``complex structure'' for a stronger condition. It is clear that
+
+\begin{lemma}
+ There is a correspondence between real vector spaces with a complex structure and complex vector spaces, where $J$ acts as multiplication by $i$.\fakeqed
+\end{lemma}
+
+Our symplectic manifolds come with symplectic forms on the tangent space. We require the following compatibility condition:
+
+\begin{defi}[Compatible complex structure]\index{compatible complex structure}
+ Let $(V, \Omega)$ be a symplectic vector space, and $J$ a complex structure on $V$. We say $J$ is \emph{compatible} with $\Omega$ if $G_J(u, v) = \Omega(u, Jv)$ is an inner product. In other words, we need
+ \[
+ \Omega(Ju, Jv) = \Omega(u, v),\quad \Omega(v, Jv) \geq 0
+ \]
+ with equality iff $v = 0$.
+\end{defi}
+
+\begin{eg}
+ On the standard symplectic vector space $(\R^{2n}, \Omega)$, we set
+ \[
+ J_0(e_i) = f_i,\quad J_0(f_i) = - e_i.
+ \]
+ We can then check that this is compatible with the symplectic structure, and in fact gives the standard inner product.
+\end{eg}
+
+\begin{prop}[Polar decomposition]\index{polar decomposition}
+ Let $(V, \Omega)$ be a symplectic vector space, and $G$ an inner product on $V$. Then from $G$, we can \emph{canonically} construct a compatible complex structure $J$ on $(V, \Omega)$. If $G = G_{J}$ for some $J$, then this process returns $J$.
+\end{prop}
+
+Note that in general, $G_J(u, v) = \Omega(u, Jv) \not= G(u, v)$.
+\begin{proof}
 Since $\Omega, G$ are non-degenerate, we know
 \[
 \Omega(u, v) = G(Au, v)
 \]
 for some linear $A: V \to V$. If $A^2 = -1$, then we are done, and set $J = A$. In general, since $\Omega$ is anti-symmetric, $A$ is skew-adjoint with respect to $G$, i.e.\ $G(Au, v) = G(u, -Av)$. Writing $A^t$ for the $G$-adjoint of $A$, the operator $AA^t = -A^2$ is symmetric and positive definite, so it makes sense to write down $\sqrt{AA^t}$ (e.g.\ by diagonalizing), and we take
 \[
 J = \sqrt{AA^t}^{-1} A = \sqrt{-A^2}^{-1} A.
 \]
 It is clear that $J^2 = -1$, since $A$ commutes with $\sqrt{AA^t}$. So this is a complex structure. We can write this as $A = \sqrt{AA^t} J$, and this is called the \emph{(left)} \term{polar decomposition} of $A$.
+
 We now check that $J$ is compatible, i.e.\ that $G_J(u, v) = \Omega(u, Jv)$ is symmetric and positive definite. But
+ \[
+ G_J(u, v) = G(u, \sqrt{AA^t}v),
+ \]
+ and we are done since $\sqrt{AA^t}$ is positive and symmetric.
+\end{proof}
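The construction can be carried out concretely. Below is a pure-Python sketch on $\R^2$, using the convention $\Omega(u, v) = G(Au, v)$ (the sign convention needed for $G_J$ to come out positive definite). The arbitrary inner product $G = \mathrm{diag}(2, 1)$ is chosen so that $\sqrt{AA^t}$ is a multiple of the identity and can be written down by hand; the matrix $A$ and the factor $\sqrt{2}$ below are computed from this specific choice.

```python
import math

G     = [[2.0, 0.0], [0.0, 1.0]]   # an arbitrary inner product on R^2
Omega = [[0.0, 1.0], [-1.0, 0.0]]  # the standard symplectic form

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Omega(u, v) = G(Au, v) reads, in matrices, A^T G = Omega; here that gives
A = [[0.0, -0.5], [1.0, 0.0]]
assert mul([[A[j][i] for j in range(2)] for i in range(2)], G) == Omega

# A^2 = -I/2, so A A^t = -A^2 = I/2 and sqrt(A A^t)^{-1} = sqrt(2) I
J = [[math.sqrt(2) * A[i][j] for j in range(2)] for i in range(2)]

# J is a complex structure: J^2 = -I
J2 = mul(J, J)
assert all(abs(J2[i][j] - (-1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# and it is compatible: G_J = Omega(., J .) is symmetric positive definite
GJ = mul(Omega, J)
assert abs(GJ[0][1] - GJ[1][0]) < 1e-12 and GJ[0][0] > 0 and GJ[1][1] > 0
```

Note that $G_J \not= G$ here, illustrating the remark above.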
+
+\begin{notation}
+ Let $(V, \Omega)$ be a symplectic vector space. We write \term{$\mathcal{J}(V, \Omega)$} for the space of all compatible complex structures on $(V, \Omega)$.
+\end{notation}
+
+\begin{prop}
+ $\mathcal{J}(V, \Omega)$ is path-connected.
+\end{prop}
+
+\begin{proof}
 Let $J_0, J_1 \in \mathcal{J}(V, \Omega)$. These induce inner products $G_{J_0}, G_{J_1}$. Let $G_t = (1 - t) G_{J_0} + t G_{J_1}$, a smooth family of inner products on $V$. Applying polar decomposition to each $(\Omega, G_t)$ gives a continuous path of compatible complex structures from $J_0$ to $J_1$, since the construction returns $J_i$ at $t = i$.
+\end{proof}
+A quick adaptation of the proof shows it is in fact contractible.
+
+We now move on to the case of manifolds.
+
+\begin{defi}[Almost complex structure]\index{almost complex structure}
 An \emph{almost complex structure} $J$ on a manifold $M$ is a smooth field of complex structures on the tangent spaces, i.e.\ maps $J_p: T_p M \to T_p M$ with $J_p^2 = -1$, varying smoothly with $p$.
+\end{defi}
+
+\begin{eg}
+ Suppose $M$ is a complex manifold with local complex coordinates $z_1, \ldots, z_n$ on $U \subseteq M$. We have real coordinates $\{x_i, y_i\}$ given by
+ \[
+ z_i = x_i + i y_i.
+ \]
+ The tangent space is spanned by $\frac{\partial}{\partial x_i}, \frac{\partial }{\partial y_i}$. We define $J$ by
+ \[
+ J_p\left(\frac{\partial}{\partial x_j}\right) = \frac{\partial}{\partial y_j},\quad J_p\left(\frac{\partial}{\partial y_j}\right) = - \frac{\partial}{\partial x_j}.
+ \]
+ The Cauchy--Riemann equations imply this is globally well-defined. This $J$ is called the \term{canonical almost-complex structure} on the complex manifold $M$.
+\end{eg}
+\begin{defi}[Integrable almost complex structure]\index{integrable almost complex structures}\index{almost complex structure!integrable}
+ An almost complex structure on $M$ is called \emph{integrable} if it is induced by a complex structure.
+\end{defi}
+
+\begin{eg}
+ It is a fact that $\CP^2 \# \CP^2 \# \CP^2$ has an almost complex but no complex structure.
+\end{eg}
+
+\begin{defi}[Compatible almost complex structure]\index{compatible almost complex structure}\index{almost complex structure!compatible}
+ An almost complex structure $J$ on $M$ is \emph{compatible} with a symplectic structure $\omega$ if $J_p$ is compatible with $\omega_p$ for all $p \in M$. In this case, $(\omega, g_J, J)$ is called a \emph{compatible triple}.
+\end{defi}
Any two of the structures determine the third, and this gives rise to the nice fact that the intersection of any two of $O(2n)$, $\Sp(2n, \R)$ and $\GL(n, \C)$ is $\U(n)$.
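For $n = 1$ this intersection statement can be checked by hand: $\U(1)$ acts on $\C \cong \R^2$ by rotation matrices, and one verifies that these are orthogonal, symplectic, and complex-linear. A minimal numerical sketch (the angle $0.83$ is arbitrary):

```python
import math

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def T(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

def close(M, N):
    return all(abs(M[i][j] - N[i][j]) < 1e-12
               for i in range(2) for j in range(2))

Omega = [[0.0, 1.0], [-1.0, 0.0]]
J0    = [[0.0, -1.0], [1.0, 0.0]]
I     = [[1.0, 0.0], [0.0, 1.0]]

t = 0.83
c, s = math.cos(t), math.sin(t)
R = [[c, -s], [s, c]]  # e^{it} acting on C = R^2

assert close(mul(T(R), R), I)                  # R in O(2)
assert close(mul(mul(T(R), Omega), R), Omega)  # R in Sp(2, R)
assert close(mul(R, J0), mul(J0, R))           # R complex-linear: R in GL(1, C)
```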
+
+Performing polar decomposition pointwise gives
+\begin{prop}
+ Let $(M, \omega)$ be a symplectic manifold, and $g$ a metric on $M$. Then from $g$ we can canonically construct a compatible almost complex structure $J$.\fakeqed
+\end{prop}
+
+As before, in general, $g_J(\ph, \ph)\not= g(\ph, \ph)$.
+
+\begin{cor}
+ Any symplectic manifold has a compatible almost complex structure.\fakeqed
+\end{cor}
+
+The converse does not hold. For example, $S^6$ is almost complex but not symplectic.
+
+\begin{notation}
 Let $(M, \omega)$ be a symplectic manifold. We write \term{$\mathcal{J}(M, \omega)$} for the space of all compatible almost complex structures on $(M, \omega)$.
+\end{notation}
+The same proof as before shows that
+\begin{prop}
 $\mathcal{J}(M, \omega)$ is contractible.\fakeqed
+\end{prop}
+
+\begin{prop}
+ Let $J$ be an almost complex structure on $M$ that is compatible with $\omega_0$ and $\omega_1$. Then $\omega_0$ and $\omega_1$ are deformation equivalent.
+\end{prop}
+
+\begin{proof}
 Check that $\omega_t = (1 - t) \omega_0 + t \omega_1$ works: each $\omega_t$ is certainly closed, and it is non-degenerate since $\omega_t(\ph, J\ph)$ is a convex combination of inner products, hence itself an inner product.
+\end{proof}
+
+\begin{prop}
+ Let $(M, \omega)$ be a symplectic manifold, $J$ a compatible almost complex structure. If $X$ is an almost complex submanifold of $(M, J)$, i.e.\ $J(TX) = TX$, then $X$ is a symplectic submanifold of $(M, \omega)$.
+\end{prop}
+
+\begin{proof}
 We only have to check that $\omega|_{TX}$ is non-degenerate, but $\omega(\ph, J\ph)$ is a metric, so is in particular non-degenerate.
+\end{proof}
+
+We shall not prove the following theorem:
+\begin{thm}[Gromov]
 Let $(M, J)$ be an almost complex manifold with $M$ open, i.e.\ $M$ has no closed connected components. Then there exists a symplectic form $\omega$ in any even $2$-cohomology class such that $J$ is homotopic to an almost complex structure compatible with $\omega$.\fakeqed
+\end{thm}
+
+\subsection{Dolbeault theory}
+An almost complex structure on a manifold allows us to discuss the notion of holomorphicity, and this will in turn allow us to stratify our $k$-forms in terms of ``how holomorphic'' they are.
+
Let $(M, J)$ be an almost complex manifold. By complexifying $TM$ to $TM \otimes \C$ and then extending $J$ complex-linearly, we can split $TM \otimes \C$ into the $\pm i$ eigenspaces of $J$.
+\begin{notation}
 We write \term{$T_{1, 0}$} for the $+i$-eigenspace of $J$ and \term{$T_{0, 1}$} for the $-i$-eigenspace of $J$. These are called the \term{$J$-holomorphic tangent vectors}\index{holomorphic tangent vector} and the \term{$J$-anti-holomorphic tangent vectors}\index{anti-holomorphic tangent vector} respectively.
\end{notation}
We then have a splitting
+\[
+ TM \otimes \C \overset{\sim}{\to} T_{1, 0} \oplus T_{0, 1}.
+\]
+We can explicitly write down the projection maps as
+\begin{align*}
+ \pi_{1, 0} (w) &= \frac{1}{2} (w - i J w)\\
+ \pi_{0, 1} (w) &= \frac{1}{2} (w + i J w).
+\end{align*}
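A small sketch checking these formulas for the standard $J$ on $\R^2$, with complexified vectors represented as pairs of Python complex numbers (an illustrative setup, not from the notes):

```python
# pi_{1,0}(w) = (w - iJw)/2 and pi_{0,1}(w) = (w + iJw)/2 split a
# complexified vector into the +i and -i eigenspaces of J.
J = ((0, -1), (1, 0))   # J e1 = e2, J e2 = -e1

def apply(M, w):
    return tuple(sum(M[i][j] * w[j] for j in range(2)) for i in range(2))

def pi10(w):
    Jw = apply(J, w)
    return tuple((w[i] - 1j * Jw[i]) / 2 for i in range(2))

def pi01(w):
    Jw = apply(J, w)
    return tuple((w[i] + 1j * Jw[i]) / 2 for i in range(2))

w = (1.5 + 0.25j, -2.0 + 1.0j)   # an arbitrary complexified vector
u, v = pi10(w), pi01(w)

# the two pieces recover w, and lie in the +i and -i eigenspaces of J
assert all(abs(u[i] + v[i] - w[i]) < 1e-12 for i in range(2))
assert all(abs(apply(J, u)[i] - 1j * u[i]) < 1e-12 for i in range(2))
assert all(abs(apply(J, v)[i] + 1j * v[i]) < 1e-12 for i in range(2))
```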
+\begin{eg}
 On a complex manifold with local complex coordinates $(z_1, \ldots, z_n)$, the space $T_{1, 0}$ of holomorphic tangent vectors is spanned by the $\frac{\partial}{\partial z_j}$, while $T_{0, 1}$ is spanned by the $\frac{\partial}{\partial \bar{z}_j}$.
+\end{eg}
+Similarly, we can complexify the cotangent bundle, and considering the $\pm i$ eigenspaces of (the dual of) $J$ gives a splitting\index{$T^{1, 0}$}\index{$T^{0, 1}$}
+\[
 (\pi^{1, 0}, \pi^{0, 1}): T^*M \otimes \C \overset{\sim}{\to} T^{1, 0} \oplus T^{0, 1}.
+\]
+These are the \term{complex linear cotangent vectors} and \term{complex anti-linear cotangent vectors}.
+\begin{eg}
+ In the complex case, $T^{1, 0}$ is spanned by the $\d z_j$ and $T^{0, 1}$ is spanned by the $\d \bar{z}_j$.
+\end{eg}
+
+More generally, we can decompose\index{$\Lambda^{p, q}$}
+\[
 \exterior^k (T^* M \otimes \C) = \exterior^k (T^{1, 0} \oplus T^{0, 1}) = \bigoplus_{p + q = k} \left(\exterior^p T^{1, 0}\right) \otimes \left(\exterior^q T^{0, 1}\right) \equiv \bigoplus_{p + q = k} \Lambda^{p, q}.
+\]
+We write\index{$\Omega^{p, q}$}\index{$\Omega^k(M, \C)$}
+\begin{align*}
+ \Omega^k (M, \C) &= \text{sections of }\Lambda^k (T^* M \otimes \C)\\
+ \Omega^{p, q}(M, \C) &= \text{sections of }\Lambda^{p, q}.
+\end{align*}
+So
+\[
+ \Omega^k(M, \C) = \bigoplus_{p + q = k} \Omega^{p, q} (M, \C).
+\]
+The sections in $\Omega^{p, q}(M, \C)$ are called \term{forms of type $(p, q)$}.
+\begin{eg}
+ In the complex case, we have, locally,
+ \[
 \Lambda^{p, q}_p = \C\{\d z_I \wedge \d \bar{z}_K : |I| = p, |K| = q\}
+ \]
+ and
+ \[
+ \Omega^{p, q} = \left\{\sum_{|I| = p, |K| = q} b_{I, K}\; \d z_I \wedge \d \bar{z}_K : b_{IK} \in C^\infty(U, \C)\right\}.
+ \]
+\end{eg}
+
+As always, we have projections
+\[
+ \pi^{p, q}: \exterior^k (T^*M \otimes \C ) \to \Lambda^{p, q}.
+\]
Combining the exterior derivative $\d: \Omega^k(M, \C) \to \Omega^{k + 1}(M, \C)$ with the projections yields the $\partial$ and $\bar{\partial}$ operators
+\begin{align*}
+ \partial: \Omega^{p, q} &\to \Omega^{p + 1, q}\\
+ \bar{\partial}: \Omega^{p, q} &\to \Omega^{p, q + 1}.
+\end{align*}
Observe that for functions $f$, we have $\d f = \partial f + \bar{\partial} f$, but for general forms, $\d$ need not decompose as $\partial + \bar{\partial}$.
+
+\begin{defi}[$J$-holomorphic]\index{$J$-holomorphic}\index{$J$-anti-holomorphic}
+ We say a function $f$ is \emph{$J$-holomorphic} if $\bar{\partial} f = 0$, and \emph{$J$-anti-holomorphic} if $\partial f = 0$.
+\end{defi}
+
+It would be very nice if the sequence
+\[
+ \begin{tikzcd}
+ \Omega^{p, q} \ar[r, "\bar{\partial}"] &
+ \Omega^{p, q + 1} \ar[r, "\bar{\partial}"] &
+ \Omega^{p, q + 2} \ar[r, "\bar{\partial}"] &
+ \cdots
+ \end{tikzcd}
+\]
+were a chain complex, i.e.\ $\bar{\partial}^2 = 0$. This is not always true. However, this is true in the case where $J$ is integrable. Indeed, if $M$ is a complex manifold and $\beta \in \Omega^k(M, \C)$, then in local complex coordinates, we can write
+\[
+ \beta = \sum_{p + q = k} \left(\sum_{|I| = p, |K| = q} b_{I, K}\; \d z_I \wedge \d \bar{z}_K\right).
+\]
+So
+\begin{align*}
+ \d \beta &= \sum_{p + q = k} \left(\sum_{|I| = p, |K| = q} \d b_{I, K}\wedge \d z_I \wedge \d \bar{z}_K\right)\\
+ &= \sum_{p + q = k} \left(\sum_{|I| = p, |K| = q} (\partial + \bar{\partial}) b_{I, K}\wedge \d z_I \wedge \d \bar{z}_K\right)\\
+ &= (\partial + \bar{\partial}) \sum_{p + q = k} \left(\sum_{|I| = p, |K| = q} b_{I, K}\d z_I \wedge \d \bar{z}_K\right)
+\end{align*}
+Thus, on a complex manifold, $\d = \partial + \bar{\partial}$.
+
+Thus, if $\beta \in \Omega^{p, q}$, then
+\[
+ 0 = \d^2 \beta = \partial^2 \beta + (\partial \bar{\partial} + \bar{\partial} \partial) \beta + \bar{\partial}^2 \beta.
+\]
+Since the three terms are of different types, it follows that
+\[
+ \partial^2 = \bar{\partial}^2 = \partial\bar{\partial} + \bar{\partial}\partial = 0.
+\]
+In fact, the converse of this computation is also true, which we will not prove:
+\begin{thm}[Newlander--Nirenberg]\index{Newlander--Nirenberg}
+ The following are equivalent:
+ \begin{multicols}{2}
+ \begin{itemize}
+ \item $\bar{\partial}^2 = 0$
+ \item $\partial^2 = 0$
+ \item $\d = \partial + \bar{\partial}$
+ \item $J$ is integrable
+ \item $\mathcal{N} = 0$
+ \item[ ]
+ \end{itemize}
+ \end{multicols}
+ where $\mathcal{N}$ is the \term{Nijenhuis torsion}
+ \[
+ \mathcal{N}(X, Y) = [JX, JY] - J[JX, Y] - J[X, JY] - [X, Y].\fakeqed
+ \]
+\end{thm}
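As a sanity check (an illustration, not in the notes): for the canonical constant $J$ on $\R^2$ and \emph{linear} vector fields $X(x) = Bx$, $Y(x) = Cx$, the Lie bracket reduces to a matrix commutator, $[X, Y](x) = (CB - BC)x$, and the vanishing of $\mathcal{N}$, reflecting the integrability of a constant $J$, becomes a matrix identity:

```python
# Nijenhuis tensor for a constant J, evaluated on linear vector fields.
def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(M, N):
    return [[M[i][j] - N[i][j] for j in range(2)] for i in range(2)]

def br(B, C):
    # [Bx, Cx] = (CB - BC)x for linear vector fields
    return sub(mul(C, B), mul(B, C))

J = [[0, -1], [1, 0]]
B = [[3, 1], [2, -5]]   # two arbitrary linear vector fields
C = [[-1, 4], [0, 2]]

JB, JC = mul(J, B), mul(J, C)
# N(X, Y) = [JX, JY] - J[JX, Y] - J[X, JY] - [X, Y]
N = sub(sub(sub(br(JB, JC), mul(J, br(JB, C))), mul(J, br(B, JC))), br(B, C))
assert N == [[0, 0], [0, 0]]
```

The cancellation only uses $J^2 = -1$ and the constancy of $J$, so any $B, C$ give zero.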
+
+When our manifold is complex, we can then consider the cohomology of the chain complex.
+\begin{defi}[Dolbeault cohomology groups]
+ Let $(M, J)$ be a manifold with an integrable almost complex structure. The \term{Dolbeault cohomology groups} are the cohomology groups of the cochain complex
+ \[
+ \begin{tikzcd}
+ \Omega^{p, q} \ar[r, "\bar{\partial}"] &
+ \Omega^{p, q + 1} \ar[r, "\bar{\partial}"] &
+ \Omega^{p, q + 2} \ar[r, "\bar{\partial}"] &
+ \cdots
+ \end{tikzcd}.
+ \]
+ Explicitly,
+ \[
+ H^{p, q}_{\Dolb} (M) = \frac{\ker(\bar{\partial}: \Omega^{p, q} \to \Omega^{p, q + 1})}{\im (\bar{\partial}: \Omega^{p, q - 1} \to \Omega^{p, q})}.
+ \]
+\end{defi}
+
+\subsection{\tph{K\"ahler}{K\"ahler}{Kähler} manifolds}
+In the best possible scenario, we would have a symplectic manifold with a compatible, integrable almost complex structure. Such manifolds are called \emph{K\"ahler} manifolds.
+\begin{defi}[K\"ahler manifold]\index{K\"ahler manifold}
+ A \emph{K\"ahler manifold} is a symplectic manifold equipped with a compatible integrable almost complex structure. Then $\omega$ is called a \term{K\"ahler form}.
+\end{defi}
+
+Let $(M, \omega, J)$ be a K\"ahler manifold. What can we say about the K\"ahler form $\omega$? We can decompose it as
+\[
 \omega \in \Omega^2(M) \subseteq \Omega^2(M, \C) = \Omega^{2, 0} \oplus \Omega^{1, 1} \oplus \Omega^{0, 2}.
+\]
+We claim that
+\begin{lemma}
+ $\omega \in \Omega^{1, 1}$.
+\end{lemma}
+
+\begin{proof}
+ Since $\omega(\ph, J\ph)$ is symmetric, we have
+ \[
 J^*\omega (u, v) = \omega(Ju, Jv) = \omega(v, JJu) = \omega(v, -u) = \omega(u, v).
+ \]
+ So $J^* \omega = \omega$.
+
 On the other hand, $J^*$ acts on holomorphic cotangent vectors as multiplication by $i$ and on anti-holomorphic ones as multiplication by $-i$ (by definition). So it acts on $\Omega^{2, 0}$ and $\Omega^{0, 2}$ by multiplication by $-1$ (locally $\Omega^{2, 0}$ is spanned by $\d z_i \wedge \d z_j$, etc.), while it fixes $\Omega^{1, 1}$. So $\omega$ must lie in $\Omega^{1, 1}$.
+\end{proof}
+%Since $(M, J)$ is a complex manifold, we have the Dolbeault cohomology groups $H^{\ell, m}_{\Dolb}$. We can write our symplectic form as
+%\[
+% \omega \in \Omega^2(M) \subseteq \Omega^2(M, \C) = \Omega^{2, 0} \oplus \Omega^{1, 1} \oplus \Omega^{0, 2}.
+%\]
+%On a complex chart $(U, z_1, \ldots, z_n)$, we can write
+%\[
+% \omega = \sum_{j < k} a_{jk} \;\d z_j \wedge \d z_k + \sum_{j, k} + b_{jk} \d z_j \wedge \d \bar{z}_j + \sum_{j < k} c_{jk} \d \bar{z}_j \wedge \d \bar{z}_k.
+%\]
+%The compatibility condition implies $\omega(\ph, J\ph)$ is symmetric. We then have
+%\[
+% J^* \omega(u, v) = \omega(Ju, Jv) = \omega(v, JJu) = - \omega(v, -u) = \omega(u, v).
+%\]
+%So $J^* \omega = \omega$. now observe that
+%\[
+% J^* \d z_j = \d z_j \circ J = i \d z_j,\quad J^* \d \bar{z}_k = - i \d \bar{z}_k.
+%\]
+%Then
+%\[
+% J^* \omega = \sum_{j < k}(-1) a_{jk}\; \d z_j \wedge \d z_k + \sum_{j, k} b_{jk}\; \d z_j \wedge \d \bar{z}_k + \sum_{j < k} (-1) c_{jk} \;\d \bar{z}_j \wedge \d \bar{z}_k.
+%\]
+%Then $J^* \omega = \omega$ implies $a_{jk} = c_{jk} = 0$. So we know that $\omega \in \Omega^{1, 1}$.
+
We can explore what the other conditions on $\omega$ tell us. Closedness gives
\[
 0 = \d \omega = \partial \omega + \bar{\partial} \omega.
\]
Since $\partial \omega$ and $\bar{\partial} \omega$ are of different types, we have $\partial \omega = \bar{\partial} \omega = 0$. So in particular $\omega$ defines a class in $H^{1, 1}_{\Dolb}(M)$.
+
In local coordinates, we can write
\[
 \omega = \frac{i}{2} \sum_{j, k} h_{jk} \;\d z_j \wedge \d \bar{z}_k
\]
for some functions $h_{jk}$. The fact that $\omega$ is real-valued, i.e.\ $\bar{\omega} = \omega$, puts constraints on the $h_{jk}$. We compute
\[
 \bar{\omega} = -\frac{i}{2} \sum_{j, k}\overline{h_{jk}}\; \d \bar{z}_j \wedge \d z_k = \frac{i}{2} \sum_{j, k} \overline{h_{jk}} \;\d z_k \wedge \d \bar{z}_j.
\]
+So we have
+\[
+ \overline{h}_{kj} = h_{jk}.
+\]
+
The non-degeneracy condition $\omega^n \not= 0$ is equivalent to $\det (h_{jk}) \not= 0$, since
+\[
+ \omega^n = \left(\frac{i}{2}\right)^n n! \det(h_{jk}) \;\d z_1 \wedge \d \bar{z}_1 \wedge \cdots \wedge \d z_n \wedge \d \bar{z}_n.
+\]
+So $h_{jk}$ is a non-singular Hermitian matrix.
+
+Finally, we take into account the compatibility condition $\omega(v, Jv) > 0$. If we write
+\[
+ v = \sum_j a_j \frac{\partial}{\partial z_j} + b_j \frac{\partial}{\partial \bar{z}_j},
+\]
+then we have
+\[
+ Jv = i \left(\sum_j a_j \frac{\partial}{\partial z_j} - b_j \frac{\partial}{\partial \bar{z}_j}\right).
+\]
Since $v$ is real, we have $b_j = \bar{a}_j$, and so
\[
 \omega(v, Jv) = \frac{i}{2} \sum_{j, k} h_{jk} (-2i a_j b_k) = \sum_{j, k} h_{jk} a_j \bar{a}_k > 0.
\]
So the conclusion is that the matrix $(h_{jk})$ is positive definite.
+
+Thus, the conclusion is
+\begin{thm}
+ A K\"ahler form $\omega$ on a complex manifold $M$ is a $\partial$- and $\bar{\partial}$-closed form of type $(1, 1)$ which on a local chart is given by
+ \[
+ \omega = \frac{i}{2} \sum_{j, k} h_{jk} \;\d z_j \wedge \d \bar{z}_k
+ \]
+ where at each point, the matrix $(h_{jk})$ is Hermitian and positive definite.
+\end{thm}
+
+Often, we start with a complex manifold, and want to show that it has a K\"ahler form. How can we do so? First observe that we have the following proposition:
+\begin{prop}
+ Let $(M, \omega)$ be a complex K\"ahler manifold. If $X \subseteq M$ is a complex submanifold, then $(X, i^* \omega)$ is K\"ahler, and this is called a \term{K\"ahler submanifold}.\fakeqed
+\end{prop}
+
+In particular, if we can construct K\"ahler forms on $\C^n$ and $\CP^n$, then we have K\"ahler forms for a lot of our favorite complex manifolds, and in particular complex projective varieties.
+
+But we still have to construct some K\"ahler forms to begin with. To do so, we use so-called \emph{strictly plurisubharmonic functions}.
+\begin{defi}[Strictly plurisubharmonic (spsh)]\index{strictly plurisubharmonic}\index{spsh}
+ A function $\rho \in C^\infty(M, \R)$ is strictly plurisubharmonic (spsh) if locally, $\left(\frac{\partial^2 \rho}{\partial z_j \partial \bar{z}_k}\right)$ is positive definite.
+\end{defi}
+\begin{prop}
+ Let $M$ be a complex manifold, and $\rho \in C^\infty(M; \R)$ strictly plurisubharmonic. Then
+ \[
+ \omega = \frac{i}{2} \partial \bar{\partial} \rho
+ \]
+ is a K\"ahler form.
+\end{prop}
+We call $\rho$ the \term{K\"ahler potential} for $\omega$.
+\begin{proof}
 $\omega$ is indeed a $2$-form of type $(1, 1)$. Since $\partial^2 = \bar{\partial}^2 = 0$ and $\partial \bar{\partial} = -\bar{\partial} \partial$, we know $\partial \omega = \bar{\partial} \omega = 0$, and hence $\d \omega = 0$. We also have
+ \[
+ \omega = \frac{i}{2} \sum_{j, k} \frac{\partial^2 \rho}{\partial z_j \partial \bar{z}_k}\;\d z_j\wedge \d \bar{z}_k,
+ \]
+ and the matrix is Hermitian positive definite by assumption.
+\end{proof}
+
+\begin{eg}
+ If $M = \C^n \cong \R^{2n}$, we take
+ \[
+ \rho(\mathbf{z}) = |\mathbf{z}|^2 = \sum z_k \bar{z}_k.
+ \]
+ Then we have
+ \[
+ h_{jk} = \frac{\partial^2 \rho}{\partial z_j \partial \bar{z}_k} = \delta_{jk},
+ \]
+ so this is strictly plurisubharmonic. Then
+ \begin{align*}
 \omega &= \frac{i}{2} \sum_j \d z_j \wedge \d \bar{z}_j \\
 &= \frac{i}{2} \sum_j \d (x_j + i y_j) \wedge \d (x_j - i y_j)\\
 &= \sum_j \d x_j \wedge \d y_j,
+ \end{align*}
+ which is the standard symplectic form. So $(\C^n, \omega)$ is K\"ahler and $\rho = |z|^2$ is a (global) K\"ahler potential for $\omega_0$.
+\end{eg}
+
+There is a local converse to this result.
+
+\begin{prop}
 Let $M$ be a complex manifold, $\omega$ a closed real-valued $(1, 1)$-form, and $p \in M$. Then there exists a neighbourhood $U$ of $p$ in $M$ and $\rho \in C^\infty(U, \R)$ such that
+ \[
+ \omega = i \partial \bar{\partial} \rho\text{ on }U.
+ \]
+\end{prop}
+
+\begin{proof}
+ This uses the holomorphic version of the Poincar\'e lemma.
+\end{proof}
+
+When $\rho$ is K\"ahler, such a $\rho$ is called a \term{local K\"ahler potential} for $\omega$.
+
+Note that it is not possible to have a global K\"ahler potential on a closed K\"ahler manifold, because if $\omega = \frac{i}{2}\partial \bar{\partial} \rho$, then
+\[
+ \omega = \d \left(\frac{i}{2} \bar{\partial} \rho\right)
+\]
+is exact, and we know symplectic forms cannot be exact.
+
+\begin{eg}
+ Let $M = \C^n$ and
+ \[
+ \rho(z) = \log (|z|^2 + 1).
+ \]
+ It is an exercise to check that $\rho$ is strictly plurisubharmonic. Then
+ \[
 \omega_{FS} = \frac{i}{2} \partial \bar{\partial} (\log(|z|^2 + 1))
+ \]
+ is another K\"ahler form on $\C^n$, called the \term{Fubini--Study form}.
+\end{eg}
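One can check the spsh condition numerically: for $n = 1$, we have $\frac{\partial^2}{\partial z \partial \bar{z}} = \frac{1}{4}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)$, and the Hessian of the potential above works out to $1/(1 + |z|^2)^2 > 0$. The sketch below (an illustration, not from the notes) verifies this closed form by finite differences at an arbitrary sample point:

```python
import math

def rho(x, y):
    # the Fubini--Study potential log(|z|^2 + 1) in real coordinates
    return math.log(x * x + y * y + 1.0)

def laplacian(f, x, y, h=1e-4):
    # standard 5-point finite-difference Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / (h * h)

x, y = 0.3, 0.7
r2 = x * x + y * y
h11 = 0.25 * laplacian(rho, x, y)   # d^2 rho / dz dzbar
exact = 1.0 / (1.0 + r2) ** 2

assert abs(h11 - exact) < 1e-5
assert h11 > 0   # strictly plurisubharmonic (n = 1)
```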
+The reason this is interesting is that it allows us to put a K\"ahler structure on $\CP^n$.
+\begin{eg}
+ Let $M = \CP^n$. Using homogeneous coordinates, this is covered by the open sets
+ \[
+ U_j = \{[z_0, \ldots, z_n] \in \CP^n \mid z_j \not= 0\}.
+ \]
+ with the chart given by
+ \begin{align*}
+ \varphi_j: U_j &\to \C^n\\
+ [z_0, \ldots, z_n] &\mapsto \left(\frac{z_0}{z_j}, \ldots, \frac{z_{j - 1}}{z_j}, \frac{z_{j + 1}}{z_j}, \ldots, \frac{z_n}{z_j}\right).
+ \end{align*}
 One can check that $\varphi_j^* \omega_{FS} = \varphi_k^* \omega_{FS}$ on $U_j \cap U_k$. Thus, these glue to give the \term{Fubini--Study form} on $\CP^n$, making it a K\"ahler manifold.
+\end{eg}
+
+\subsection{Hodge theory}
+So what we have got so far is that if we have a complex manifold, then we can decompose
+\[
+ \Omega^k(M; \C) = \bigoplus_{p + q = k} \Omega^{p, q},
+\]
+and using $\bar{\partial}: \Omega^{p, q} \to \Omega^{p, q + 1}$, we defined the Dolbeault cohomology groups
+\[
+ H_{\Dolb}^{p, q}(M) = \frac{\ker \bar{\partial}}{\im \bar{\partial}}.
+\]
+It would be nice if we also had a decomposition
+\[
+ H^k_{\dR} \cong \bigoplus_{p + q = k} H^{p, q}_{\Dolb}(M).
+\]
+This is not always true. However, it is true for compact K\"ahler manifolds:
+\begin{thm}[Hodge decomposition theorem]\index{Hodge decomposition theorem}
+ Let $(M, \omega)$ be a compact K\"haler manifold. Then
+ \[
+ H^k_{\dR} \cong \bigoplus_{p + q = k} H^{p, q}_{\Dolb}(M).
+ \]
+\end{thm}
+To prove the theorem, we will first need a ``real'' analogue of the theorem. This is an analytic theorem that lets us find canonical representatives for each cohomology class. We can develop the same theory for Dolbeault cohomology, and see that the canonical representatives for Dolbeault cohomology are the same as those for de Rham cohomology. We will not prove the analytic theorems, but just say how these things piece together.
+\subsubsection*{Real Hodge theory}
Let $V$ be a real oriented vector space of dimension $m$ with inner product $G$. Then $G$ induces an inner product on each $\Lambda^k = \exterior^k (V)$, denoted $\bra \ph, \ph\ket$, defined by
\[
 \bra v_1 \wedge \cdots \wedge v_k, w_1 \wedge \cdots \wedge w_k\ket = \det (G(v_i, w_j))_{i, j}.
\]
Let $e_1, \ldots, e_m$ be an oriented orthonormal basis for $V$. Then
+\[
+ \{e_{j_1} \wedge \cdots \wedge e_{j_k} : 1 \leq j_1 < \cdots < j_k \leq m\}
+\]
+is an orthonormal basis for $\Lambda^k$.
+
+\begin{defi}[Hodge star]\index{hodge star}
+ The Hodge $*$-operator \index{$*$}$*: \Lambda^k \to \Lambda^{m - k}$ is defined by the relation
+ \[
+ \alpha \wedge *\beta = \bra \alpha, \beta\ket\; e_1 \wedge \cdots \wedge e_m.
+ \]
+\end{defi}
+It is easy to see that the Hodge star is in fact an isomorphism. It is also not hard to verify the following properties:
+\begin{prop}\leavevmode
+ \begin{itemize}
+ \item $*(e_1 \wedge \cdots \wedge e_k) = e_{k + 1} \wedge \cdots \wedge e_m$
+ \item $*(e_{k + 1} \wedge \cdots \wedge e_m) = (-1)^{k(m - k)} e_1 \wedge \cdots \wedge e_k$.
 \item $**\alpha = (-1)^{k(m - k)}\alpha$ for $\alpha \in \Lambda^k$.\fakeqed
+ \end{itemize}
+\end{prop}
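On basis $k$-forms in an orthonormal oriented basis, the Hodge star is purely combinatorial: $*e_I = \operatorname{sign}(I, I^c)\, e_{I^c}$, where the sign is that of the permutation $(I, I^c)$. The following sketch (an illustration, here with $m = 5$) verifies the properties above:

```python
from itertools import combinations

def perm_sign(seq):
    # sign of the permutation sorting seq, via inversion count
    sign, s = 1, list(seq)
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if s[i] > s[j]:
                sign = -sign
    return sign

def star(indices, m):
    # Hodge star of the basis k-form e_{i1} ^ ... ^ e_{ik}
    # (indices strictly increasing): returns (sign, complementary indices)
    comp = tuple(i for i in range(1, m + 1) if i not in indices)
    return perm_sign(indices + comp), comp

m = 5
for k in range(m + 1):
    for idx in combinations(range(1, m + 1), k):
        s1, comp = star(idx, m)
        s2, back = star(comp, m)
        assert back == idx
        assert s1 * s2 == (-1) ** (k * (m - k))   # ** = (-1)^{k(m-k)}

# *(e_1 ^ ... ^ e_k) = e_{k+1} ^ ... ^ e_m
assert star(tuple(range(1, 3)), m) == (1, (3, 4, 5))
```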
+
+In general, let $(M, g)$ be a compact real oriented Riemannian manifold of dimension $m$. Then $g$ gives an isomorphism $TM \cong T^*M$, and induces an inner product on each $T_p^* M$, which we will still denote $g_p$. This induces an inner product on each $\exterior^k T_p^*M$, which we will denote $\bra \ph, \ph \ket$ again.
+
The Riemannian metric and orientation give us a volume form $\Vol \in \Omega^m(M)$, defined locally by
\[
 \Vol_p = e_1 \wedge \cdots \wedge e_m,
\]
where $e_1, \ldots, e_m$ is an oriented orthonormal basis of $T_p^*M$. This induces an $L^2$-inner product on $\Omega^k(M)$,
+\[
+ \bra \alpha, \beta\ket_{L^2} = \int_M \bra \alpha, \beta\ket\;\Vol.
+\]
Now apply the Hodge $*$-operator to each $(V, G) = (T_p^* M, g_p)$ for $p \in M$. We then get
+\begin{defi}[Hodge star operator]\index{Hodge star}\index{$*$}
+ The \emph{Hodge $*$-operator} on forms $*: \Omega^k(M) \to \Omega^{m - k}(M)$ is defined by the equation
+ \[
+ \alpha \wedge (*\beta) = \bra \alpha ,\beta\ket \; \Vol.
+ \]
+\end{defi}
+
+We again have some immediate properties.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $**\alpha = (-1)^{k(m - k)} \alpha$ for $\alpha \in \Omega^k(M)$.
 \item $*1 = \Vol$.\fakeqed
+ \end{enumerate}
+\end{prop}
+
+We now introduce the codifferential operator
+\begin{defi}[Codifferential operator]\index{codifferential operator}\index{$\delta$}
 We define the \emph{codifferential operator} $\delta: \Omega^k \to \Omega^{k - 1}$ to be the $L^2$-formal adjoint of $\d$. In other words, we require
+ \[
+ \bra \d \alpha, \beta\ket_{L^2} = \bra \alpha, \delta \beta\ket_{L^2}
+ \]
+ for all $\alpha \in \Omega^k$ and $\beta \in \Omega^{k + 1}$.
+\end{defi}
+We immediately see that
+\begin{prop}
 $\delta^2 = 0$.\fakeqed
+\end{prop}
+
+Using the Hodge star, there is a very explicit formula for the codifferential (which in particular shows that it exists).
+\begin{prop}
+ \[
+ \delta = (-1)^{m(k + 1) + 1} {*\,\d\, *} : \Omega^k \to \Omega^{k - 1}.
+ \]
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \bra \d \alpha, \beta \ket_{L^2} &= \int_M \d \alpha \wedge * \beta\\
+ &= \int_M \d (\alpha \wedge *\beta) - (-1)^k \int_M \alpha \wedge \d (*\beta)\\
+ &= (-1)^{k + 1} \int_M \alpha \wedge \d (*\beta)\tag{Stokes'}\\
+ &= (-1)^{k + 1} \int_M (-1)^{(m - k)k} \alpha \wedge **\d (*\beta)\\
 &= (-1)^{k + 1 + (m - k)k} \int_M \bra \alpha, {*\ \d\,*}\beta\ket\;\Vol.\qedhere
+ \end{align*}
+\end{proof}
+
+We can now define the Laplace--Beltrami operator
+\begin{defi}[Laplace--Beltrami operator]\index{Laplace--Beltrami operator}\index{Laplacian}
+ We define the \emph{Laplacian}, or the \emph{Laplace--Beltrami operator} to be
+ \[
+ \Delta = \d \delta + \delta \d: \Omega^k \to \Omega^k.
+ \]
+\end{defi}
+
+\begin{eg}
+ If $M = \R^m$ with the Euclidean inner product, and $f \in \Omega^0(M) = C^\infty(M)$, then
+ \[
 \Delta f = - \sum_{i = 1}^m \frac{\partial^2 f}{\partial x_i^2}.
+ \]
+\end{eg}
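As a quick sanity check of the sign conventions (a worked example, not in the original), take $m = 2$ with the Euclidean metric and $f = x^2 + y^2$. Then $\delta f = 0$, and

```latex
\begin{align*}
 \d f &= 2x \,\d x + 2y \,\d y\\
 {*}\,\d f &= 2x \,\d y - 2y \,\d x\\
 \d\, {*}\, \d f &= 4\, \d x \wedge \d y\\
 \delta \d f &= -{*}\,\d\,{*}\, \d f = -4,
\end{align*}
```

using $\delta = (-1)^{2(1 + 1) + 1} {*\,\d\,*} = -{*\,\d\,*}$ on $\Omega^1$. So $\Delta f = \delta \d f = -4 = -\left(\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}\right)$, as claimed.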
+
+It is an exercise to prove the following
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\Delta * = *\Delta: \Omega^k \to \Omega^{m - k}$
+ \item $\Delta = (\d + \delta)^2$
+ \item $\bra \Delta \alpha, \beta\ket_{L^2} = \bra \alpha, \Delta \beta\ket_{L^2}$.
+ \item $\Delta \alpha = 0$ iff $\d \alpha = \delta \alpha = 0$.\fakeqed
+ \end{enumerate}
+\end{prop}
+In particular, (iv) follows from the fact that
+\[
+ \bra \Delta \alpha, \alpha\ket = \bra \d \alpha, \d \alpha\ket + \bra \delta \alpha, \delta \alpha\ket = \|\d \alpha\|_{L^2}^2 + \|\delta \alpha\|_{L^2}^2.
+\]
+Similar to IA Vector Calculus, we can define
+\begin{defi}[Harmonic form]\index{harmonic form}
+ A form $\alpha$ is \emph{harmonic} if $\Delta \alpha = 0$. We write\index{$\mathcal{H}^k$}
+ \[
 \mathcal{H}^k = \{\alpha \in \Omega^k(M) \mid \Delta \alpha = 0\}
+ \]
+ for the space of harmonic forms.
+\end{defi}
+Observe there is a natural map $\mathcal{H}^k \to H^k_{\dR}(M)$, sending $\alpha$ to $[\alpha]$. The main result is that
+\begin{thm}[Hodge decomposition theorem]\index{Hodge decomposition theorem}
 Let $(M, g)$ be a compact oriented Riemannian manifold. Then every cohomology class in $H_{\dR}^k(M)$ has a unique harmonic representative, i.e.\ the natural map $\mathcal{H}^k \to H_{\dR}^k(M)$ is an isomorphism.\fakeqed
+\end{thm}
+We will not prove this.
+
+\subsubsection*{Complex Hodge theory}
+We now see what we can do for complex K\"ahler manifolds. First check that
+\begin{prop}
+ Let $M$ be a complex manifold, $\dim_\C M = n$ and $(M, \omega)$ K\"ahler. Then
+ \begin{enumerate}
+ \item $*: \Omega^{p, q} \to \Omega^{n - p, n - q}$.
+ \item $\Delta: \Omega^{p, q} \to \Omega^{p, q}$.\fakeqed
+ \end{enumerate}
+\end{prop}
+
We define the $L^2$-adjoints $\bar{\partial}^* = \pm *\bar{\partial}*$ and $\partial^* = \pm * \partial *$, with the appropriate signs as before. We then have
+\[
+ \d = \partial + \bar{\partial},\quad \delta = \partial^* + \bar{\partial}^*.
+\]
+We can then define
+\begin{align*}
+ \Delta_{\partial} &= \partial \partial^* + \partial^* \partial : \Omega^{p, q} \to \Omega^{p, q}\\
+ \Delta_{\bar{\partial}} &= \bar{\partial} \bar{\partial}^* + \bar{\partial}^* \bar{\partial} : \Omega^{p, q} \to \Omega^{p, q}.
+\end{align*}
+
+\begin{prop}
+ If our manifold is K\"ahler, then
+ \[
+ \Delta = 2 \Delta_\partial = 2 \Delta_{\bar{\partial}}.\fakeqed
+ \]
+\end{prop}
So if we have a harmonic form, then it is in fact $\partial$- and $\bar{\partial}$-harmonic, and in particular it is $\partial$- and $\bar{\partial}$-closed. This gives us a \emph{Hodge decomposition}
+\[
+ \mathcal{H}^k_\C = \bigoplus_{p + q = k} \mathcal{H}^{p, q},
+\]
+where\index{$\mathcal{H}^{p, q}$}
+\[
+ \mathcal{H}^{p, q} = \{\alpha \in \Omega^{p, q}(M): \Delta \alpha = 0\}.
+\]
+\begin{thm}[Hodge decomposition theorem]\index{Hodge decomposition theorem}
+ Let $(M, \omega)$ be a compact K\"ahler manifold. The natural map $\mathcal{H}^{p, q} \to H_{\Dolb}^{p, q}$ is an isomorphism. Hence
+ \[
+ H^k_{\dR} (M; \C) \cong \mathcal{H}^k_{\C} = \bigoplus_{p + q = k} \mathcal{H}^{p, q} \cong \bigoplus_{p + q = k} H_{\Dolb}^{p, q}(M).\fakeqed
+ \]
+\end{thm}
+
+What are some topological consequences of this? Recall the \term{Betti numbers} are defined by
+\[
+ b_k = \dim_\R H^k_{\dR}(M) = \dim_\C H^k_{\dR}(M; \C).
+\]
+We can further define the \term{Hodge numbers}
+\[
+ h_{p, q} = \dim_\C H^{p, q}_{\Dolb}(M).
+\]
+Then the Hodge theorem says
+\[
+ b_k = \sum_{p + q = k} h_{p, q}.
+\]
Moreover, complex conjugation gives
\[
 H_{\Dolb}^{p, q}(M) = \overline{H_{\Dolb}^{q, p}(M)}.
\]
So we have \term{Hodge symmetry},
+\[
+ h_{p, q} = h_{q, p}.
+\]
+Moreover, the $*$ operator induces an isomorphism
+\[
+ H^{p, q}_{\Dolb} \cong H_{\Dolb}^{n - p, n - q}.
+\]
+So we have
+\[
+ h_{p, q} = h_{n - p, n - q}.
+\]
This is called \term{central symmetry}, or \term{Serre duality}. Thus, we know that
+\begin{cor}
+ Odd Betti numbers are even.
+\end{cor}
+
+\begin{proof}
+ \[
+ b_{2k + 1} = \sum_{p + q = 2k + 1} h_{p, q} = 2 \left(\sum_{p = 0}^k h_{p, 2k + 1 - p}\right).\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ $h_{1, 0} = h_{0, 1} = \frac{1}{2}b_1$ is a topological invariant.
+\end{cor}
+
+We have also previously seen that for a general compact K\"ahler manifold, we have
+\begin{prop}
+ Even Betti numbers are positive.
+\end{prop}
+
Recall that we proved this by arguing that $[\omega^k] \not= 0 \in H^{2k}(M)$. In fact, $[\omega^k]$ lies in $H^{k, k}_{\Dolb}(M)$, and so
+\begin{prop}
+ $h_{k, k} \not= 0$.\fakeqed % insert proof
+\end{prop}
+
+We usually organize these $h_{p, q}$ in the form of a \term{Hodge diamond}, e.g.
+\begin{center}
+ \begin{tikzpicture}[xscale=1.5, yscale=0.5]
+ \node at (0, -2) {$h_{0, 0}$};
+ \node at (-0.5, -1) {$h_{1, 0}$};
+ \node at (0.5, -1) {$h_{0, 1}$};
+ \node at (-1, 0) {$h_{2, 0}$};
+ \node at (0, 0) {$h_{1, 1}$};
+ \node at (1, 0) {$h_{0, 2}$};
+ \node at (-0.5, 1) {$h_{2, 1}$};
+ \node at (0.5, 1) {$h_{1, 2}$};
+ \node at (0, 2) {$h_{2, 2}$};
+ \end{tikzpicture}
+\end{center}
+
+%\subsection{Conclusion}
+%So suppose $M$ is a compact manifold. When is there a symplectic manifold? Elementary necessary conditions involve
+%\begin{itemize}
+% \item $M$ must be even-dimensional and orientable
+% \item There exists $[\omega] \in H^2(M)$ such that $[\omega^n] \not= 0$ by Stokes' theorem. Then $M$ must be almost complex.
+%\end{itemize}
+%
+%For dimension $2$, being orientable implies K\"ahler.
+%
+%Note that $S^2$ is K\"ahler, and the only spheres that admit an almost complex structure are $S^2$ and $S^6$.
+\section{Hamiltonian vector fields}
Symplectic geometry was first studied by physicists, who modeled their systems by a symplectic manifold. The \emph{Hamiltonian function} $H \in C^\infty(M)$, which returns the energy of a particular configuration, generates a vector field on $M$ whose flow describes the time evolution of the system. In this chapter, we will discuss how this process works, and further study what happens when we have families of such Hamiltonian functions, giving rise to Lie group actions.
+
+\subsection{Hamiltonian vector fields}
+\begin{defi}[Hamiltonian vector field]\index{Hamiltonian vector field}
+ Let $(M, \omega)$ be a symplectic manifold. If $H \in C^\infty(M)$, then since $\tilde{\omega}: TM \to T^*M$ is an isomorphism, there is a unique vector field $X_H$ on $M$ such that
+ \[
+ \iota_{X_H} \omega = \d H.
+ \]
+ We call $X_H$ the \emph{Hamiltonian vector field} with \term{Hamiltonian function} $H$.
+\end{defi}
+
+Suppose $X_H$ is complete (e.g.\ when $M$ is compact). This means we can integrate $X_H$, i.e.\ solve
+\[
+ \frac{\partial \rho_t}{\partial t} (p) = X_H(\rho_t(p)),\quad \rho_0(p) = p.
+\]
+These flows have some nice properties.
+\begin{prop}
+ If $X_H$ is a Hamiltonian vector field with flow $\rho_t$, then $\rho_t^* \omega = \omega$. In other words, each $\rho_t$ is a symplectomorphism.
+\end{prop}
+
+\begin{proof}
+ It suffices to show that $\frac{\partial}{\partial t} \rho_t^* \omega = 0$. We have
+ \[
+ \frac{\d}{\d t} (\rho_t^* \omega) = \rho_t^* (\mathcal{L}_{X_H} \omega) = \rho_t^* (\d \iota_{X_H} \omega + \iota_{X_H} \d \omega) = \rho_t^* (\d \d H) = 0.\qedhere
+ \]
+\end{proof}
+
+Thus, every function $H$ gives us a one-parameter subgroup of symplectomorphisms.
+
+\begin{prop}
+ $\rho_t$ preserves $H$, i.e.\ $\rho_t^* H = H$.
+\end{prop}
+
+\begin{proof}
+ \[
+ \frac{\d}{\d t} \rho_t^* H = \rho_t^* (\mathcal{L}_{X_H}H) = \rho_t^* (\iota_{X_H} \d H) = \rho_t^* (\iota_{X_H} \iota_{X_H}\omega) = 0.\qedhere
+ \]
+\end{proof}
+
+So the flow lines of our vector field are contained in level sets of $H$.
+\begin{eg}
+ Take $(S^2, \omega = \d \theta \wedge \d h)$. Take
+ \[
+ H(h, \theta) = h
+ \]
+ to be the height function. Then $X_H$ solves
+ \[
+ \iota_{X_H} (\d \theta \wedge \d h) = \d h.
+ \]
+ So
+ \[
+ X_H = \frac{\partial}{\partial \theta},\quad \rho_t(h, \theta) = (h, \theta + t).
+ \]
+ As expected, the flow preserves height and the area form.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->, gray] (0, -0.25) -- (0, 2.3);
+ \draw [gray] (0, -2) -- (0, -0.35);
+
+ \draw circle [radius=1.5];
+
+ \draw [dashed] (-1.5, 0) arc(180:0:1.5 and 0.3);
+ \draw (-1.5, 0) arc(180:360:1.5 and 0.3);
+
+ \draw [-latex] (-0.06, 1.8) arc(250:-80:0.2 and 0.05);
+
+ \draw [->] (2, 0) -- (3, 0);
+
+ \draw (4, -2) -- (4, 2);
+ \draw [mblue, thick] (4, -1.5) -- (4, 1.5);
+ \node [circ] at (4, -1.5) {};
+ \node [circ] at (4, 1.5) {};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+We have seen that Hamiltonian vector fields are symplectic:
+\begin{defi}[Symplectic vector field]\index{symplectic vector field}
+ A vector field $X$ on $(M, \omega)$ is a \emph{symplectic vector field} if $\mathcal{L}_X \omega = 0$.
+\end{defi}
+Observe that
+\[
+ \mathcal{L}_X \omega= \iota_X \d \omega + \d \iota_X \omega = \d \iota_X \omega.
+\]
+So $X$ is symplectic iff $\iota_X \omega$ is closed, and is Hamiltonian if it is exact. Thus, locally, every symplectic vector field is Hamiltonian and globally, the obstruction lies in $H^1_{\dR}(M)$.
+
+\begin{eg}
+ Take $(T^2, \omega = \d \theta_1 \wedge \d \theta_2)$. Then $X_i = \frac{\partial}{\partial \theta_i}$ are symplectic but not Hamiltonian, since $\iota_{X_1} \omega = \d \theta_2$ and $\iota_{X_2} \omega = -\d \theta_1$ are closed but not exact.
+\end{eg}
+
+\begin{prop}
+ Let $X, Y$ be symplectic vector fields on $(M, \omega)$. Then $[X, Y]$ is Hamiltonian.
+\end{prop}
+Recall that if $X, Y$ are vector fields on $M$ and $f \in C^\infty(M)$, then their \term{Lie bracket} is given by
+\[
+ [X, Y]f = (XY - YX)f.
+\]
+This makes $\chi(M)$, the space of vector fields on $M$, a Lie algebra.
+
+In order to prove the proposition, we need the following identity:
+\begin{ex}
+ $\iota_{[X, Y]}\alpha = [\mathcal{L}_X, \iota_Y] \alpha = [\iota_X, \mathcal{L}_Y] \alpha$.
+\end{ex}
+%To prove this, check this on $0$-forms; exact $1$-forms; then check when both sides are anti-derivatives
+%\[
+% \D (\alpha \wedge \beta) = \D \alpha \wedge \beta + (-1)^{\deg \alpha} \alpha \wedge \D \beta.
+%\]
+
+\begin{proof}[Proof of proposition]
+ We need to check that $\iota_{[X, Y]} \omega$ is exact. By the exercise, this is
+ \[
+ \iota_{[X, Y]}\omega = \mathcal{L}_X \iota_Y \omega - \iota_Y \mathcal{L}_X \omega = \d (\iota_X \iota_Y \omega) + \iota_X \d \iota_Y \omega - \iota_Y \d \iota_X \omega - \iota_Y \iota_X \d \omega.
+ \]
+ Since $X, Y$ are symplectic, we know $\d \iota_Y \omega = \d \iota_X \omega = 0$, and the last term always vanishes. So this is exact, and $\omega(Y, X)$ is a Hamiltonian function for $[X, Y]$.
+\end{proof}
+
+\begin{defi}[Poisson bracket]\index{Poisson bracket}
+ Let $f, g \in C^\infty(M)$. We then define the \emph{Poisson bracket} $\{f, g\}$ by
+ \[
+ \{f, g\} = \omega(X_f, X_g).
+ \]
+\end{defi}
+This is defined so that
+\[
+ X_{\{f, g\}} = -[X_f, X_g].
+\]
+\begin{ex}
+ The Poisson bracket satisfies the Jacobi identity, and also the Leibniz rule
+ \[
+ \{f, gh\} = g\{f, h\} + \{f, g\}h.
+ \]
+\end{ex}
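These identities can be verified directly in canonical coordinates. As a quick sanity check, here is a sketch in sympy (our own illustration, using the coordinate formula $\{f, g\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial y} - \frac{\partial f}{\partial y}\frac{\partial g}{\partial x}$ on $(\R^2, \d x \wedge \d y)$, and three sample observables of our choosing):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def pb(f, g):
    # Canonical Poisson bracket on (R^2, dx ^ dy): {f, g} = f_x g_y - f_y g_x.
    return sp.expand(sp.diff(f, x)*sp.diff(g, y) - sp.diff(f, y)*sp.diff(g, x))

# Three sample observables (our choice).
f = x**2 * y
g = sp.sin(x)
h = x + y**3

# Leibniz rule: {f, gh} = g{f, h} + {f, g}h.
leibniz = sp.expand(pb(f, g*h) - (g*pb(f, h) + pb(f, g)*h))

# Jacobi identity: {f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0.
jacobi = sp.expand(pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g)))

print(leibniz, jacobi)  # 0 0
```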
+Thus, if $(M, \omega)$ is symplectic, then $(C^\infty(M), \{\ph, \ph\})$ is a \term{Poisson algebra}. This means it is a commutative, associative algebra with a Lie bracket that satisfies the Leibniz rule.
+
+Further, the map $C^\infty(M) \to \chi(M)$ sending $H \mapsto X_H$ is a Lie algebra (anti-)homomorphism.
+
+\begin{prop}
+ $\{f, g\} = 0$ iff $f$ is constant along integral curves of $X_g$.
+\end{prop}
+
+\begin{proof}
+ \[
+ \mathcal{L}_{X_g} f = \iota_{X_g}\;\d f = \iota_{X_g} \iota_{X_f} \omega = \omega(X_f, X_g) = \{f, g\} = 0.\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ If $M = \R^{2n}$ and $\omega = \omega_0 = \sum \d x_j \wedge \d y_j$, and $f \in C^\infty(\R^{2n})$, then
+ \[
+ X_f = \sum_i \left(\frac{\partial f}{\partial y_i} \frac{\partial}{\partial x_i} - \frac{\partial f}{\partial x_i} \frac{\partial }{\partial y_i}\right).
+ \]
+ If $\rho_0(p) = p$, then $\rho_t(p) = (x(t), y(t))$ is an integral curve for $X_f$ iff
+ \[
+ \frac{\d x_i}{\d t} = \frac{\partial f}{\partial y_i},\quad \frac{\d y_i}{\d t} = -\frac{\partial f}{\partial x_i}.
+ \]
+ In classical mechanics, these are known as \term{Hamilton's equations}.
+\end{eg}
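For $n = 1$, the formula for $X_f$ can be checked against the definition $\iota_{X_f}\omega_0 = \d f$ by comparing coefficients of $\d x$ and $\d y$. Here is a sketch in sympy (our own illustration, not part of the notes):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)

# Components of X_f = f_y d/dx - f_x d/dy.
Xx, Xy = sp.diff(f, y), -sp.diff(f, x)

# iota_{X_f}(dx ^ dy) = Xx dy - Xy dx; this should equal df = f_x dx + f_y dy.
coeff_dx = -Xy - sp.diff(f, x)  # difference of the dx-coefficients
coeff_dy = Xx - sp.diff(f, y)   # difference of the dy-coefficients
print(coeff_dx, coeff_dy)  # 0 0

# f is constant along the flow of X_f: X_f(f) = {f, f} = 0.
conserved = sp.simplify(Xx*sp.diff(f, x) + Xy*sp.diff(f, y))
print(conserved)  # 0
```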
+
+\subsection{Integrable systems}
+In classical mechanics, we usually have a fixed $H$, corresponding to the energy.
+\begin{defi}[Hamiltonian system]\index{Hamiltonian system}
+ A \emph{Hamiltonian system} is a triple $(M, \omega, H)$ where $(M, \omega)$ is a symplectic manifold and $H \in C^\infty(M)$, called the \term{Hamiltonian function}.
+\end{defi}
+
+\begin{defi}[Integral of motion]\index{integral of motion}
+ An \emph{integral of motion}/\term{first integral}/\term{constant of motion}/\term{conserved quantity} of a Hamiltonian system is a function $f \in C^\infty(M)$ such that $\{f, H\} = 0$.
+\end{defi}
+For example, $H$ is an integral of motion. Are there others? Of course, we can write down $2H, H^2, H^{12}, e^H$, etc., but these are more-or-less the same as $H$.
+\begin{defi}[Independent integrals of motion]\index{independent integrals of motion}
+ We say $f_1, \ldots, f_n \in C^\infty(M)$ are \emph{independent} if $(\d f_1)_p, \ldots, (\d f_n)_p$ are linearly independent at all points on some dense subset of $M$.
+\end{defi}
+
+\begin{defi}[Commuting integrals of motion]\index{commuting integrals of motion}
+ We say $f_1, \ldots, f_n \in C^\infty(M)$ \emph{commute} if $\{f_i, f_j\} = 0$ for all $i, j$.
+\end{defi}
+
+If we have $n$ independent integrals of motion, then we clearly have $\dim M \geq n$. In fact, the commuting condition implies:
+\begin{ex}
+ Let $f_1, \ldots, f_n$ be independent commuting functions on $(M, \omega)$. Then $\dim M \geq 2n$.
+\end{ex}
+The idea is that the corresponding Hamiltonian vector fields $(X_{f_1})_p, \ldots, (X_{f_n})_p$ are not only independent, but span an isotropic subspace of $T_p M$.
+
+If we have the maximum possible number of independent commuting first integrals, then we say the system is integrable.
+\begin{defi}[Completely integrable system]\index{completely integrable system}
+ A Hamiltonian system $(M, \omega, H)$ of dimension $\dim M = 2n$ is \emph{(completely) integrable} if it has $n$ independent commuting integrals of motion $f_1 = H, f_2, \ldots, f_n$.
+\end{defi}
+
+\begin{eg}
+ If $\dim M = 2$, then we only need one integral of motion, which we can take to be $H$. Then $(M, \omega, H)$ is integrable as long as the set of non-critical points of $H$ is dense.
+\end{eg}
+
+\begin{eg}
+ The physics of a simple pendulum of length $1$ and mass $1$ can be modeled by the symplectic manifold $M = T^*S^1$, where the $S^1$ refers to the angular coordinate $\theta$ of the pendulum, and the cotangent vector is the momentum. The Hamiltonian function is
+ \[
+ H = K + V = \text{kinetic energy} + \text{potential energy} = \frac{1}{2}\xi^2 + (1 - \cos \theta).
+ \]
+ We can check that the critical points of $H$ are $(\theta, \xi) = (0, 0)$ and $(\pi, 0)$. So $(M, \omega, H)$ is integrable.
+\end{eg}
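The critical points can also be found symbolically. Here is a sketch in sympy (our own illustration, with the parametrization and unit constants above):

```python
import sympy as sp

theta, xi = sp.symbols('theta xi', real=True)

# Pendulum Hamiltonian on T*S^1 (length 1, mass 1, as above).
H = sp.Rational(1, 2)*xi**2 + (1 - sp.cos(theta))

# Critical points: both partial derivatives vanish.
crit = sp.solve([sp.diff(H, theta), sp.diff(H, xi)], [theta, xi])
print(crit)  # the solutions (0, 0) and (pi, 0)
```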
+
+\begin{eg}
+ If $\dim M = 4$, then $(M, \omega, H)$ is integrable as long as there exists an integral of motion independent of $H$. For example, on a spherical pendulum, we have $M = T^* S^2$, and $H$ is the total energy. Then the angular momentum is an integral of motion.
+\end{eg}
+
+What can we do with a completely integrable system? Suppose $(M, \omega, H)$ is completely integrable system with $\dim M = 2n$ and $f_1 = H, f_2, \ldots, f_n$ are commuting. Let $c$ be a regular value of $f = (f_1, \ldots, f_n)$. Then $f^{-1}(c)$ is an $n$-dimensional submanifold of $M$. If $p \in f^{-1}(c)$, then
+\[
+ T_p(f^{-1}(c)) = \ker (\d f)_p.
+\]
+Since
+\[
+ \d f_p =
+ \begin{pmatrix}
+ (\d f_1)_p\\
+ \vdots\\
+ (\d f_n)_p
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ \iota_{X_{f_1}} \omega\\\vdots \\\iota_{X_{f_n}}\omega
+ \end{pmatrix},
+\]
+we know
+\[
+ T_p (f^{-1}(c)) = \ker (\d f)_p = \spn \{(X_{f_1})_p, \ldots, (X_{f_n})_p\}.
+\]
+Moreover, since
+\[
+ \omega(X_{f_i}, X_{f_j}) = \{f_i, f_j\} = 0,
+\]
+we know that $T_p (f^{-1}(c))$ is an \emph{isotropic} subspace of $(T_p M, \omega_p)$.
+
+If $X_{f_1}, \ldots, X_{f_n}$ are complete, then following their flows, we obtain global coordinates of (the connected components of) $f^{-1}(c)$, where $q \in f^{-1}(c)$ has coordinates $(\varphi_1, \ldots, \varphi_m)$ (\term{angle coordinates}) if $q$ is achieved from the base point $p$ by following the flow of $X_{f_i}$ for $\varphi_i$ seconds for each $i$. The fact that the $f_i$ are Poisson commuting implies the vector fields commute, and so the order does not matter, and this gives a genuine coordinate system.
+
+By the independence of the $X_{f_i}$, the connected components look like $\R^{n - k} \times T^k$, where $T^k = (S^1)^k$ is the $k$-torus. In the extreme case $k = n$, we simply get a torus, which is a compact connected component.
+
+\begin{defi}[Liouville torus]\index{Liouville torus}
+ A \emph{Liouville torus} is a compact connected component of $f^{-1}(c)$.
+\end{defi}
+
+It would be nice if the $(\varphi_i)$ are part of a Darboux chart of $M$, and this is true.
+
+\begin{thm}[Arnold--Liouville theorem]\index{Arnold--Liouville theorem}
+ Let $(M, \omega, H)$ be an integrable system with $\dim M = 2n$ and $f_1 = H, f_2, \ldots, f_n$ integrals of motion, and $c \in \R^n$ a regular value of $f = (f_1, \ldots, f_n)$.
+
+ \begin{enumerate}
+ \item If the flows of $X_{f_i}$ are complete, then the connected components of $f^{-1}(\{c\})$ are homogeneous spaces for $\R^n$ and admit affine coordinates $\varphi_1, \ldots, \varphi_n$ (\emph{angle coordinates}), in which the flows of $X_{f_i}$ are linear.
+ \item There exist coordinates $\psi_1, \ldots, \psi_n$ (\term{action coordinates}) such that the $\psi_i$'s are integrals of motion and $\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_n$ form a Darboux chart.
+ \end{enumerate}
+\end{thm}
+
+%\begin{eg}
+% For a simple pendulum, % insert picture
+%\end{eg}
+
+\subsection{Classical mechanics}
+As mentioned at the beginning, symplectic geometry was first studied by physicists. In this section, we give a brief overview of how symplectic geometry arises in physics. Historically, there have been three ``breakthroughs'' in classical mechanics:
+\begin{enumerate}
+ \item In $\sim 1687$, Newton wrote down his three laws of motion, giving rise to Newtonian mechanics.
+ \item In $\sim 1788$, this was reformulated into the Lagrangian formalism.
+ \item In $\sim 1833$, these were further developed into the Hamiltonian formalism.
+\end{enumerate}
+
+\subsubsection*{Newtonian mechanics}
+In Newtonian mechanics, we consider a particle of mass $m$ moving in $\R^3$ under the potential $V(x)$. \term{Newton's second law} then says the trajectory of the particle obeys
+\[
+ m\frac{\d^2 x}{\d t^2} = - \nabla V(x).
+\]
+\subsubsection*{Hamiltonian mechanics.}
+To do Hamiltonian mechanics, a key concept to introduce is the momentum:
+\begin{defi}[Momentum]\index{momentum}
+ The \emph{momentum} of a particle is
+ \[
+ y = m \frac{\d x}{\d t}.
+ \]
+\end{defi}
+We also need the energy function
+\begin{defi}[Energy]\index{energy}
+ The \emph{energy} of a particle is
+ \[
+ H(x, y) = \frac{1}{2m} |y|^2 + V(x).
+ \]
+\end{defi}
+We call $\R^3$ the \term{configuration space} and $T^* \R^3$ the \term{phase space}, parametrized by $(x, y)$. This has a canonical symplectic form $\omega = \d x_i \wedge \d y_i$.
+
+Newton's second law can be written as
+\[
+ \frac{\d y_i}{\d t} = - \frac{\partial V}{\partial x_i}.
+\]
+Combining this with the definition of $y$, we find that $(x, y)$ evolves under
+\begin{align*}
+ \frac{\d x_i}{\d t} &= \frac{\partial H}{\partial y_i}\\
+ \frac{\d y_i}{\d t} &= -\frac{\partial H}{\partial x_i}
+\end{align*}
+So physical motion is given by Hamiltonian flow under $H$. $H$ is called the \term{Hamiltonian} of the system.
+
+\subsubsection*{Lagrangian mechanics}
+Lagrangian mechanics is less relevant to our symplectic picture, but is nice to know about nevertheless. This is formulated via a variational principle.
+
+In general, consider a system with $N$ particles of masses $m_1, \ldots, m_N$ moving in $\R^3$ under a potential $V \in C^\infty(\R^{3N})$. The Hamiltonian function can be defined exactly as before:
+\[
+ H(x, y) = \sum_k \frac{1}{2m_k} |y_k|^2 + V(x),
+\]
+where $x(t) = (x_1, \ldots, x_N)$ and each $x_k$ is a $3$-vector; and similarly for $y$ with $y_k = m_k \frac{\d x_k}{\d t}$. Then in Hamiltonian mechanics, we say $(x, y)$ evolves under Hamiltonian flow.
+
+Now fix $a, b \in \R$ and $p, q \in \R^{3N}$. Write $\mathcal{P}$ for the space of all piecewise differentiable paths $\gamma = (\gamma_1, \ldots, \gamma_N): [a, b] \to \R^{3N}$ with $\gamma(a) = p$ and $\gamma(b) = q$.
+
+\begin{defi}[Action]
+ The \term{action} of a path $\gamma \in \mathcal{P}$ is
+ \[
+ A_\gamma = \int_a^b \left(\sum_k \frac{m_k}{2} \left|\frac{\d \gamma_k}{\d t}(t) \right|^2 - V(\gamma(t))\right)\;\d t.
+ \]
+\end{defi}
+The integrand is known as the \term{Lagrangian function}. We will see that $\gamma(t) = x(t)$ is (locally) a stationary point of $A_\gamma$ iff
+\[
+ m_k \frac{\d^2 x_k}{\d t^2} = - \frac{\partial V}{\partial x_k},
+\]
+i.e.\ if and only if Newton's second law is satisfied.
+
+The Lagrangian formulation works more generally if our particles are constrained to live in some submanifold $X \subseteq \R^{3N}$. For example, if we have a pendulum, then the particle is constrained to live in $S^1$ (or $S^2$). Then we set $\mathcal{P}$ to be the maps $\gamma: [a, b] \to X$ that go from $p$ to $q$. The Lagrangian formulation is then exactly the same, except the minimization problem is now performed within this $\mathcal{P}$.
+
+More generally, suppose we have an $n$-dimensional manifold $X$, and $F: TX \to \R$ is a \term{Lagrangian function}. Given a curve $\gamma: [a, b] \to X$, there is a lift
+\begin{align*}
+ \tilde{\gamma}: [a, b] &\to TX\\
+ t &\mapsto \left(\gamma(t), \frac{\d \gamma}{\d t}(t)\right).
+\end{align*}
+The \term{action} is then
+\[
+ A_\gamma = \int_a^b (\tilde{\gamma}^* F)(t) = \int_a^b F\left(\gamma(t), \frac{\d \gamma}{\d t}(t)\right) \;\d t.
+\]
+
+To find the critical points of the action, we use \term{calculus of variations}. Pick a chart $(x_1, \ldots, x_n)$ for $X$ and naturally extend to a chart $(x_1, \ldots, x_n, v_1, \ldots, v_n)$ for $TX$. Consider a small perturbation of our path
+\[
+ \gamma_\varepsilon(t) = (\gamma_1(t) + \varepsilon c_1(t), \ldots, \gamma_n(t) + \varepsilon c_n(t))
+\]
+for some functions $c_1, \ldots, c_n \in C^\infty([a, b])$ such that $c_i(a) = c_i(b) = 0$. We think of this as an infinitesimal variation of $\gamma$. We then find that
+\[
+ \left.\frac{\d A_{\gamma_\varepsilon}}{\d \varepsilon} \right|_{\varepsilon = 0} = 0 \Leftrightarrow \frac{\partial F}{\partial x_k} = \frac{\d}{\d t} \frac{\partial F}{\partial v_k}\text{ for }k = 1, \ldots, n.
+\]
+These are the \term{Euler--Lagrange equations}.
+
+\begin{eg}
+ Let $X = \R^{3N}$ and
+ \[
+ F(x_1, \ldots, x_n, v_1, \ldots, v_n) = \sum_k \frac{m_k}{2} |v_k|^2 - V(x_1, \ldots, x_n).
+ \]
+ Then the Euler--Lagrange equations are
+ \[
+ -\frac{\partial V}{\partial x_{k_i}} = m_k \frac{\d^2 x_{k_i}}{\d t^2}.
+ \]
+\end{eg}
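This computation can be reproduced symbolically. Here is a sketch in sympy for a single particle in one dimension, with the sample potential $V(x) = \frac{1}{2}k x^2$ (the potential is our choice for illustration; the Euler--Lagrange equation recovers Newton's second law $m\ddot{x} = -V'(x)$):

```python
import sympy as sp

t = sp.symbols('t', real=True)
m, k = sp.symbols('m k', positive=True)
xs, vs = sp.symbols('xs vs', real=True)  # chart coordinates (x, v) on TX
x = sp.Function('x')(t)

# Lagrangian F(x, v) = (m/2) v^2 - (k/2) x^2.
F = m*vs**2/2 - k*xs**2/2

# Euler-Lagrange: dF/dx = d/dt (dF/dv), evaluated along (x(t), x'(t)).
Fx = sp.diff(F, xs).subs(xs, x)
Fv = sp.diff(F, vs).subs(vs, sp.diff(x, t))
el = Fx - sp.diff(Fv, t)
print(el)  # Newton's second law: -k x(t) - m x''(t) = 0
```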
+
+\begin{eg}
+ On a Riemannian manifold, if we set $F: TX \to \R$ be $(x, v) \mapsto |v|^2$, then we obtain the Christoffel equations for a geodesic.
+\end{eg}
+
+In general, there need not be solutions to the Euler--Lagrange equation. However, if we satisfy the \term{Legendre condition}
+\[
+ \det \left(\frac{\partial^2 F}{\partial v_i \partial v_j}\right) \not= 0,
+\]
+then the Euler--Lagrange equations become second-order ODEs, and there is a unique solution given $\gamma(0)$ and $\dot{\gamma}(0)$. If furthermore this matrix is positive definite, then the solution is actually a local minimum.
+
+%If we want to go between these the Lagrangian and Hamiltonian formalisms, we need some way to go between $TX$ and $T^*X$. This is where the Legendre transform comes in.
+%
+%Suppose we have a map $\mathcal{L}: TX \to T^*X$. We then want $(x(t), \dot{x}(t))$ to satisfy the Euler--Lagrange equations iff $(x(t), \xi(t)) = \mathcal{L}(x(t), \dot{x}(t))$ satisfies the Hamiltonian equations.
+%
+%To understand this, given $F: TX \to \R$, we get a map
+%\[
+% (\d F_x)_v : T_v(T_x X) \cong T_x X \to T_{F_x}(x) \R \cong \R,
+%\]
+%so we have $(\d F_x)_v \in T_x^*X$.
+
+\subsection{Hamiltonian actions}
+In the remainder of the course, we are largely interested in how Lie groups can act on symplectic manifolds via Hamiltonian vector fields. These are known as \emph{Hamiltonian actions}. We first begin with the notion of symplectic actions.
+
+Let $(M, \omega)$ be a symplectic manifold, and $G$ a Lie group.
+\begin{defi}[Symplectic action]\index{symplectic action}
+ A \emph{symplectic action} is a smooth group action $\psi: G \to \Diff(M)$ such that each $\psi_g$ is a symplectomorphism. In other words, it is a map $G \to \Symp(M, \omega)$.
+\end{defi}
+
+\begin{eg}
+ Let $G = \R$. Then a map $\psi: G \to \Diff(M)$ is a one-parameter group of transformations $\{\psi_t: t \in \R\}$. Given such a group action, we obtain a complete vector field
+ \[
+ X_p = \left.\frac{\d \psi_t}{\d t}(p)\right|_{t = 0}.
+ \]
+ Conversely, given a complete vector field $X$, we can define
+ \[
+ \psi_t = \exp tX,
+ \]
+ and this gives a group action by $\R$.
+
+ Under this correspondence, symplectic actions correspond to complete symplectic vector fields.
+\end{eg}
+
+\begin{eg}
+ If $G = S^1$, then a symplectic action of $S^1$ is a symplectic action of $\R$ which is periodic.
+\end{eg}
+
+In the case where $G$ is $\R$ or $S^1$, it is easy to define what it means for an action to be Hamiltonian:
+\begin{defi}[Hamiltonian action]\index{Hamiltonian action}
+ An action of $\R$ or $S^1$ is \emph{Hamiltonian} if the corresponding vector field is Hamiltonian.
+\end{defi}
+
+\begin{eg}
+ Take $(S^2, \omega = \d \theta \wedge \d h)$. Then we have a rotation action
+ \[
+ \psi_t(\theta, h) = (\theta + t, h)
+ \]
+ generated by the vector field $\frac{\partial}{\partial \theta}$. Since $\iota_{\frac{\partial}{\partial \theta}} \omega = \d h$ is exact, this is in fact a Hamiltonian $S^1$ action.
+\end{eg}
+
+\begin{eg}
+ Take $(T^2, \d \theta_1 \wedge \d \theta_2)$. Consider the action
+ \[
+ \psi_t(\theta_1, \theta_2) = (\theta_1 + t, \theta_2).
+ \]
+ This is generated by the vector field $\frac{\partial}{\partial \theta_1}$. But $\iota_{\frac{\partial}{\partial \theta_1}}\omega = \d \theta_2$, which is closed but not exact. So this is a symplectic action that is not Hamiltonian.
+\end{eg}
+
+How should we define Hamiltonian group actions for groups that are not $\R$ or $S^1$? The simplest possible next case is the torus $G = T^n = S^1 \times \cdots \times S^1$. If we have a map $\psi: T^n \to \Symp(M, \omega)$, then for this to be Hamiltonian, it should definitely be the case that the restriction to each $S^1$ is Hamiltonian in the previous sense. Moreover, for these to be compatible, we would expect each Hamiltonian function to be preserved by the other factors as well.
+
+For the general case, we need to talk about the Lie algebra of $G$. Let $G$ be a Lie group. For each $g \in G$, there is a left multiplication map\index{$L_g$}
+\begin{align*}
+ L_g: G &\to G\\
+ a &\mapsto ga.
+\end{align*}
+
+\begin{defi}[Left-invariant vector field]\index{left-invariant vector field}
+ A \emph{left-invariant vector field} on a Lie group $G$ is a vector field $X$ such that
+ \[
+ (L_g)_* X = X
+ \]
+ for all $g \in G$.
+
+ We write $\mathfrak{g}$ for the space of all left-invariant vector fields on $G$, which comes with the Lie bracket on vector fields. This is called the \term{Lie algebra} of $G$.
+\end{defi}
+
+If $X$ is left-invariant, then knowing $X_e$ tells us what $X$ is everywhere, and specifying $X_e$ produces a left-invariant vector field. Thus, we have an isomorphism $\mathfrak{g} \cong T_e G$.
+
+The Lie algebra admits a natural action of $G$, called the \term{adjoint action}. To construct this, note that $G$ acts on itself by conjugation,
+\[
+ \varphi_g(a) = gag^{-1}.
+\]
+This fixes the identity, and taking the derivative gives $\Ad_g: \mathfrak{g} \to \mathfrak{g}$, or equivalently, $\Ad$ is a map $\Ad: G \to \GL(\mathfrak{g})$. The dual $\mathfrak{g}^*$ admits the dual action of $G$, called the \term{coadjoint action}. Explicitly, this is given by
+\[
+ \bra \Ad_g^* (\xi), x\ket = \bra \xi, \Ad_g x\ket.
+\]
+An important case is when $G$ is abelian, i.e.\ a product of $S^1$'s and $\R$'s, in which case the conjugation action is trivial, hence the (co)adjoint action is trivial.
+
+Returning to group actions, the correspondence between complete vector fields and $\R$/$S^1$ actions can be described as follows: Given a smooth action $\psi: G \to \Diff (M)$ and a point $p \in M$, there is a map
+\begin{align*}
+ G &\to M\\
+ g &\mapsto \psi_g(p).
+\end{align*}
+Differentiating this at $e$ gives
+\begin{align*}
+ \mathfrak{g} \cong T_e G &\to T_p M\\
+ X &\mapsto X_p^\#.
+\end{align*}
+We call $X^\#$ the vector field on $M$ generated by $X \in \mathfrak{g}$. In the case where $G = S^1$ or $\R$, we have $\mathfrak{g} \cong \R$, and the complete vector field corresponding to the action is the image of $1$ under this map.
+
+We are now ready to define
+\begin{defi}[Hamiltonian action]\index{Hamiltonian action}
+ We say $\psi: G \to \Symp(M, \omega)$ is a \emph{Hamiltonian action} if there exists a map $\mu: M \to \mathfrak{g}^*$ such that
+ \begin{enumerate}
+ \item For all $X \in \mathfrak{g}$, $X^\#$ is the Hamiltonian vector field generated by $\mu^X$, where $\mu^X: M \to \R$ is given by
+ \[
+ \mu^X(p) = \bra \mu(p), X \ket.
+ \]
+ \item $\mu$ is $G$-equivariant, where $G$ acts on $\mathfrak{g}^*$ by the coadjoint action. In other words,
+ \[
+ \mu \circ \psi_g = \Ad_g^* \circ \mu\text{ for all }g \in G.
+ \]
+ \end{enumerate}
+ $\mu$ is then called a \term{moment map} for the action $\psi$.
+\end{defi}
+In the case where $G$ is abelian, condition (ii) just says $\mu$ is $G$-invariant.
+
+\begin{eg}
+ Let $M = \C^n$, and
+ \[
+ \omega = \frac{i}{2} \sum_j \d z_j \wedge \d \bar{z}_j = \sum_j r_j \;\d r_j \wedge \d \theta_j.
+ \]
+ We let
+ \[
+ T^n = \{(t_1, \ldots, t_n) \in \C^n : |t_k| = 1\text{ for all }k\},
+ \]
+ acting by
+ \[
+ \psi_{(t_1, \ldots, t_n)}(z_1, \ldots, z_n) = (t_1^{k_1}z_1, \ldots, t_n^{k_n} z_n)
+ \]
+ where $k_1, \ldots, k_n \in \Z$.
+
+ We claim this action has moment map
+ \begin{align*}
+ \mu: \C^n &\to (\mathfrak{t}^n)^* \cong \R^n\\
+ (z_1, \ldots, z_n) &\mapsto - \frac{1}{2} (k_1 |z_1|^2, \ldots, k_n |z_n|^2).
+ \end{align*}
+ It is clear that this is invariant, and if $X = (a_1, \ldots, a_n) \in \mathfrak{t}^n \cong \R^n$, then
+ \[
+ X^\# = k_1 a_1 \frac{\partial}{\partial \theta_1} + \cdots + k_n a_n \frac{\partial}{\partial \theta_n}.
+ \]
+ Then we have
+ \[
+ \d \mu^X = \d \left(-\frac{1}{2} \sum k_j a_j r_j^2\right) = - \sum k_j a_j r_j \;\d r_j = \iota_{X^\#} \omega.
+ \]
+\end{eg}
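The last equality can be checked coefficient-wise. For $n = 1$, here is a sketch in sympy, encoding a $1$-form $A\,\d r + B\,\d\theta$ as the pair $(A, B)$ (our own encoding, purely for illustration):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
theta, k1, a1 = sp.symbols('theta k1 a1', real=True)

# omega = r dr ^ dtheta; X^# = k1*a1 * d/dtheta, so
# iota_{X^#} omega = -k1*a1*r dr, encoded as (dr-coefficient, dtheta-coefficient):
iota = (-k1*a1*r, 0)

# mu^X = -(1/2) k1 a1 r^2; its differential, in the same encoding:
muX = -sp.Rational(1, 2)*k1*a1*r**2
dmuX = (sp.diff(muX, r), sp.diff(muX, theta))

print(iota[0] - dmuX[0], iota[1] - dmuX[1])  # 0 0
```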
+
+\begin{eg}
+ A nice example of a non-abelian Hamiltonian action is given by coadjoint orbits. Let $G$ be a Lie group, and $\mathfrak{g}$ its Lie algebra. If $X \in \mathfrak{g}$, then the adjoint action generates a vector field on $\mathfrak{g}$, and the coadjoint action generates a vector field $X^\#$ on $\mathfrak{g}^*$.
+
+ If $\xi \in \mathfrak{g}^*$, then we can define the \term{coadjoint orbit}\index{$\mathcal{O}_{\xi}$} through $\xi$
+ \[
+ \mathcal{O}_\xi = \{ \Ad_g^*(\xi) : g \in G\}.
+ \]
+ What is interesting about this is that this coadjoint orbit is actually a symplectic manifold, with symplectic form given by
+ \[
+ \omega_\xi (X_\xi^{\#}, Y_{\xi}^{\#}) = \bra \xi, [X, Y]\ket.
+ \]
+ Then the coadjoint action of $G$ on $\mathcal{O}_\xi$ has moment map $\mathcal{O}_\xi \to \mathfrak{g}^*$ given by the inclusion.
+\end{eg}
+
+\subsection{Symplectic reduction}
+Given a Lie group action of $G$ on $M$, it is natural to ask what the ``quotient'' of $M$ looks like. What we will study is not quite the quotient, but a \emph{symplectic reduction}, which is a natural, well-behaved subspace of the quotient that is in particular symplectic.
+
+We first introduce some words. Let $\psi: G \to \Diff(M)$ be a smooth action.
+\begin{defi}[Orbit]\index{orbit}
+ If $p \in M$, the \emph{orbit} of $p$ under $G$ is
+ \[
+ \mathcal{O}_p = \{\psi_g(p): g \in G\}.
+ \]
+\end{defi}
+
+\begin{defi}[Stabilizer]
+ The \term{stabilizer} or \term{isotropy group} of $p \in M$ is the closed subgroup
+ \[
+ G_p = \{g \in G: \psi_g(p) = p\}.
+ \]
+ We write $\mathfrak{g}_p$ for the Lie algebra of $G_p$.
+\end{defi}
+
+\begin{defi}[Transitive action]\index{transitive action}
+ We say $G$ acts \emph{transitively} if $M$ is one orbit.
+\end{defi}
+
+\begin{defi}[Free action]\index{free action}
+ We say $G$ acts \emph{freely} if $G_p = \{e\}$ for all $p$.
+\end{defi}
+
+\begin{defi}[Locally free action]\index{locally free action}
+ We say $G$ acts \emph{locally freely} if $\mathfrak{g}_p = \{0\}$, i.e.\ $G_p$ is discrete.
+\end{defi}
+
+\begin{defi}[Orbit space]\index{orbit space}
+ The \emph{orbit space} is $M/G$, and we write $\pi: M \to M/G$ for the orbit projection. We equip $M/G$ with the quotient topology.
+\end{defi}
+
+The main theorem is the following:
+\begin{thm}[Marsden--Weinstein, Meyer]
+ Let $G$ be a compact Lie group, and $(M, \omega)$ a symplectic manifold with a Hamiltonian $G$-action with moment map $\mu: M \to \mathfrak{g}^*$. Write $i: \mu^{-1}(0) \hookrightarrow M$ for the inclusion. Suppose $G$ acts freely on $\mu^{-1}(0)$. Then
+ \begin{enumerate}
+ \item $M_{\red} = \mu^{-1}(0)/G$ is a manifold;
+ \item $\pi: \mu^{-1}(0) \to M_{\red}$ is a principal $G$-bundle; and
+ \item There exists a symplectic form $\omega_{\red}$ on $M_{\red}$ such that $i^*\omega = \pi^* \omega_{\red}$.
+ \end{enumerate}
+\end{thm}
+
+\begin{defi}[Symplectic quotient]
+ We call $M_{\red}$ the \term{symplectic quotient}/\term{reduced space}/\term{symplectic reduction} of $(M, \omega)$ with respect to the given $G$-action and moment map.
+\end{defi}
+
+What happens if we do reduction at other levels? In other words, what can we say about $\mu^{-1}(\xi)/G$ for other $\xi \in \mathfrak{g}^*$?
+
+If we want to make sense of this, we need $\xi$ to be preserved under the coadjoint action of $G$. This is automatically satisfied if $G$ is abelian, and in this case, we simply have $\mu^{-1}(\xi) = \varphi^{-1}(0)$, where $\varphi = \mu - \xi$ is another moment map. So this is not more general.
+
+If $\xi$ is not preserved by $G$, then we can instead consider $\mu^{-1}(\xi)/G_\xi$, or equivalently take $\mu^{-1}(\mathcal{O}_\xi)/G$. We check that
+\[
+ \mu^{-1}(\xi)/G_\xi \cong \mu^{-1}(\mathcal{O}_\xi)/G \cong \varphi^{-1}(0)/G,
+\]
+where
+\begin{align*}
+ \varphi: M \times \mathcal{O}_\xi &\to \mathfrak{g}^*\\
+ (p, \eta) &\mapsto \mu(p) - \eta
+\end{align*}
+is a moment map for the product action of $G$ on $(M \times \mathcal{O}_\xi, \omega \times \omega_\xi)$.
+
+So in fact there is no loss in generality for considering just $\mu^{-1}(0)$.
+\begin{proof}
+ We first show that $\mu^{-1}(0)$ is a manifold. This follows from the following claim:
+ \begin{claim}
+ $G$ acts locally freely at $p$ iff $p$ is a regular point of $\mu$.
+ \end{claim}
+ We compute the dimension of $\im \d \mu_p$ using the rank-nullity theorem. We know $\d \mu_p v = 0$ iff $\bra \d \mu_p(v), X\ket = 0$ for all $X \in \mathfrak{g}$. We can compute
+ \[
+ \bra \d \mu_p(v), X\ket = (\d \mu^X)_p (v) = (\iota_{X_p^\#} \omega) (v) = \omega_p(X_p^\#, v).
+ \]
+ Moreover, the span of the $X^\#_p$ is exactly $T_p \mathcal{O}_p$. So
+ \[
+ \ker \d \mu_p = (T_p \mathcal{O}_p)^\omega.
+ \]
+ Thus,
+ \[
+ \dim (\im \d \mu_p) = \dim \mathcal{O}_p = \dim G - \dim G_p.
+ \]
+ In particular, $\d \mu_p$ is surjective iff $\dim G_p = 0$, i.e.\ iff $G$ acts locally freely at $p$.
+
+ Then (i) and (ii) follow from the following theorem:
+ \begin{thm}
+ Let $G$ be a compact Lie group and $Z$ a manifold, and $G$ acts freely on $Z$. Then $Z/G$ is a manifold and $Z \to Z/G$ is a principal $G$-bundle.
+ \end{thm}
+%
+% For $p \in M$, define $\mathfrak{g}_p$ to be the Lie algebra of $G_p$, and
+% \[
+% \mathfrak{g}_p^0 = \{\xi \in \mathfrak{g}^*: \bra \xi, X \ket = 0\text{ for all }X \in \mathfrak{g}_p\}.
+% \]
+% Note that for $v \in T_p M$ and $X \in \mathfrak{g}$, we have
+% \[
+% \omega_p(X_p^\#, v) = (\iota_{X_p^\#} \omega) (v) = (\d \mu^X)_p (v) = \bra \d \mu_p(v), X\ket.
+% \]
+% Note that $\d \mu_p: T_p M \to \mathfrak{g}^*$. We want to think about its kernel and image.
+% \begin{enumerate}
+% \item $\d \mu_p(v) = 0$ iff $\bra \d \mu_p(v), X\ket = 0$ for all $X \in \mathfrak{g}$ iff $\omega_p(X_p^\#, v) = 0$ for all $X \in \mathfrak{g}$. Since $T_p \mathcal{O}_p$ is spanned by the $X_p^\#$'s, we know $v \in (T_p \mathcal{O}_p)^{\omega}$, the symplectic orthogonal.
+%
+% So we conclude that
+% \[
+% \ker \d \mu_p = (T_p \mathcal{O}_p)^\omega.
+% \]
+% In particular, $\dim (\ker \d \mu_p) = \dim M - \dim \mathcal{O}_p$. On the other hand, by the rank-nullity theorem, we know $\dim (\ker \d \mu_p) = \dim M - \dim \im \d \mu_p$. So we know
+% \[
+% \dim (\im \d \mu_p) = \dim \mathcal{O}_p = \dim G - \dim G_p.
+% \]
+% \item We have $X \in \mathfrak{g}_p$ iff $X_p^\# = 0$ iff $\omega_p(X_p^\#, v) = 0$ for all $v \in T_p M$. So it follows that
+% \[
+% \im \d \mu_p \subseteq \omega_p^0.
+% \]
+% By dimension counting, they are equal.
+% \end{enumerate}
+% Thus, we know the action is locally free at $p$ iff $\mathfrak{g}_p = \{0\}$ iff $\mathfrak{g}_p^0 = \mathfrak{g}^*$ iff $\d \mu_p$ is surjective iff $p$ is a regular point of $\mu$.
+%
+% By the hypothesis, $G$ acts freely on $\mu^{-1}(0)$. So $0$ is a regular value of $\mu$, and so $\mu^{-1}(0)$ is a submanifold of $M$ of codimension $\dim G$. Then parts (i) and (ii) of the theorem follow from a general differential-geometric result:
+% \begin{thm}
+% Let $G$ be a compact Lie group and $Z$ a manifold, and $G$ acts freely on $Z$. Then $Z/G$ is a manifold and $Z \to Z/G$ is a principal $G$-bundle.
+% \end{thm}
+ Note that if $G$ does not act freely on $\mu^{-1}(0)$, then by Sard's theorem, generically, $\xi$ is a regular value of $\mu$, and so $\mu^{-1}(\xi)$ is a manifold, and $G$ acts locally freely on $\mu^{-1}(\xi)$. If $\mu^{-1}(\xi)$ is preserved by $G$, then $\mu^{-1}(\xi)/G$ is a symplectic orbifold.
+
+ It now remains to construct the symplectic structure. Observe that if $p \in \mu^{-1}(0)$, then
+ \[
+ T_p \mathcal{O}_p \subseteq T_p \mu^{-1}(0) = \ker \d \mu_p = (T_p \mathcal{O}_p)^\omega.
+ \]
+ So $T_p \mathcal{O}_p$ is an isotropic subspace of $(T_p M, \omega)$. We then observe the following straightforward linear algebraic result:
+
+ \begin{lemma}
+ Let $(V, \Omega)$ be a symplectic vector space and $I$ an isotropic subspace. Then $\Omega$ induces a canonical symplectic structure $\Omega_{\red}$ on $I^\Omega/I$, given by $\Omega_{\red}([u], [v]) = \Omega(u, v)$.
+ \end{lemma}
+
+ Applying this, we get a canonical symplectic structure on
+ \[
+ \frac{(T_p\mathcal{O}_p)^\omega}{T_p \mathcal{O}_p} = \frac{T_p \mu^{-1}(0)}{T_p \mathcal{O}_p} = T_{[p]} M_{\red}.
+ \]
+ This defines $\omega_{\red}$ on $M_{\red}$, which is well-defined because $\omega$ is $G$-invariant, and is smooth by local triviality and canonicity.
+
+ It remains to show that $\d \omega_{\red} = 0$. By construction, $i^* \omega = \pi^* \omega_{\red}$. So
+ \[
+ \pi^* (\d \omega_{\red}) = \d \pi^* \omega_{\red} = \d i^* \omega = i^* \d \omega = 0.
+ \]
+ Since $\pi^*$ is injective, we are done.
+\end{proof}
+
+\begin{eg}
+ Take
+ \[
+ (M, \omega) = \left(\C^n, \omega_0 = \frac{i}{2} \sum \d z_k \wedge \d \bar{z}_k = \sum \d x_k \wedge \d y_k = \sum r_k \;\d r_k \wedge \d \theta_k\right).
+ \]
+ We let $G = S^1$ act by multiplication
+ \[
+ e^{it} \cdot (z_1, \ldots, z_n) = (e^{it} z_1, \ldots, e^{it} z_n).
+ \]
+ This action is Hamiltonian with moment map
+ \begin{align*}
+ \mu: \C^n &\to \R\\
+ z &\mapsto - \frac{|z|^2}{2} + \frac{1}{2}.
+ \end{align*}
+ The $+\frac{1}{2}$ is there so that $\mu^{-1}(0)$ is the unit sphere rather than a single point. To check this is a moment map, we compute
+ \[
+ \d \mu = \d \left(-\frac{1}{2} |z|^2\right) = \d \left(-\frac{1}{2} \sum r_k^2\right) = - \sum r_k \;\d r_k.
+ \]
+ On the other hand, if
+ \[
+ X = a \in \mathfrak{g} \cong \R,
+ \]
+ then we have
+ \[
+ X^\# = a \left(\frac{\partial}{\partial \theta_1} + \cdots + \frac{\partial}{\partial \theta_n}\right).
+ \]
+ So we have
+ \[
+ \iota_{X^\#} \omega = - a \sum r_k \;\d r_k = \d \mu^X.
+ \]
+ It is also clear that $\mu$ is $S^1$-invariant. We then have
+ \[
+ \mu^{-1}(0) = \{z \in \C^n : |z|^2 = 1\} = S^{2n - 1}.
+ \]
+ Then we have
+ \[
+ M_{\red} = \mu^{-1}(0)/S^1 = \CP^{n - 1}.
+ \]
+ One can check that this is in fact the Fubini--Study form. So $(\CP^{n - 1}, \omega_{FS})$ is the symplectic quotient of $(\C^n, \omega_0)$ with respect to the diagonal $S^1$ action and the moment map $\mu$.
+\end{eg}
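As a sanity check, the moment map condition $\iota_{X^\#} \omega_0 = \d \mu^X$ can be verified numerically in real coordinates $(x_1, y_1, \ldots, x_n, y_n)$, where $\omega_0$ becomes a block anti-symmetric matrix and $X^\#$ for $X = 1$ is the vector field $z \mapsto iz$. The following is a minimal numerical sketch using Python with NumPy (the coordinates and test points are illustrative choices, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # work in C^3, i.e. R^6

# In coordinates (x_1, y_1, ..., x_n, y_n), omega_0 = sum dx_k ^ dy_k is the
# block-diagonal matrix with 2x2 blocks [[0, 1], [-1, 0]].
Omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def generator(p):
    """X^# at p for X = 1 in Lie(S^1) = R: the vector field z -> iz."""
    v = np.empty_like(p)
    v[0::2], v[1::2] = -p[1::2], p[0::2]  # (x, y) -> (-y, x) in each factor
    return v

def dmu(p, v):
    """Differential of mu(z) = -|z|^2/2 + 1/2 at p, applied to v."""
    return -p @ v

# Compare omega_0(X^#_p, v) with d mu_p(v) at a random point and direction
p, v = rng.standard_normal(2 * n), rng.standard_normal(2 * n)
lhs = generator(p) @ Omega @ v
rhs = dmu(p, v)
```

The agreement of `lhs` and `rhs` for arbitrary `p`, `v` is exactly the statement $\iota_{X^\#} \omega_0 = \d \mu$.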
+
+\begin{eg}
+ Fix $k, \ell \in \Z$ relatively prime. Then $S^1$ acts on $\C^2$ by
+ \[
+ e^{it} (z_1, z_2) = (e^{ikt} z_1, e^{i\ell t} z_2).
+ \]
+ This action is Hamiltonian with moment map
+ \begin{align*}
+ \mu: \C^2 &\to \R\\
+ (z_1, z_2) &\mapsto - \frac{1}{2} (k |z_1|^2 + \ell |z_2|^2).
+ \end{align*}
+ There is no level set of $\mu$ where the action is free, since
+ \begin{itemize}
+ \item $(z, 0)$ has stabilizer $\Z/k\Z$
+ \item $(0, z)$ has stabilizer $\Z/\ell \Z$
+ \item $(z_1, z_2)$ has trivial stabilizer if $z_1, z_2 \not= 0$.
+ \end{itemize}
+ On the other hand, the action is still locally free on $\C^2 \setminus \{(0, 0)\}$ since the stabilizers are discrete.
+
+ The reduced spaces $\mu^{-1}(\xi)/S^1$ for $\xi \not= 0$ are \emph{orbifolds}, called \emph{weighted} or \term{twisted projective spaces}\index{weighted projective space}.
+\end{eg}
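The weighted moment map can be checked the same way; the sketch below (Python with NumPy, with sample weights $k = 2$, $\ell = 3$ chosen for illustration) compares $\omega_0(X^\#_p, v)$ with a numerical directional derivative of $\mu$:

```python
import numpy as np

rng = np.random.default_rng(1)
k, l = 2, 3  # relatively prime weights (sample values)

# omega_0 on R^4 = C^2 in coordinates (x1, y1, x2, y2)
Omega = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def generator(p):
    """X^# at p for X = 1: d/dt at t = 0 of (e^{ikt} z1, e^{ilt} z2)."""
    x1, y1, x2, y2 = p
    return np.array([-k * y1, k * x1, -l * y2, l * x2])

def mu(p):
    """mu(z1, z2) = -(k |z1|^2 + l |z2|^2) / 2 in real coordinates."""
    x1, y1, x2, y2 = p
    return -0.5 * (k * (x1**2 + y1**2) + l * (x2**2 + y2**2))

# Compare omega_0(X^#_p, v) with a central-difference derivative of mu
p, v = rng.standard_normal(4), rng.standard_normal(4)
h = 1e-6
lhs = generator(p) @ Omega @ v
rhs = (mu(p + h * v) - mu(p - h * v)) / (2 * h)
```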
+
+The final example is an infinite dimensional one by Atiyah and Bott. We will not be able to prove the result in full detail, or any detail at all, but we will build up to the statement. The summary of the result is that performing symplectic reduction on the space of all connections gives the moduli space of flat connections.
+
+Let $G \to P \overset{\pi}{\to} B$ be a principal $G$-bundle, and $\psi: G \to \Diff(P)$ the associated free action. Let
+\begin{align*}
+ \d \psi: \mathfrak{g} &\to \chi(P)\\
+ X &\mapsto X^\#
+\end{align*}
+be the associated infinitesimal action. Let $X_1, \ldots, X_k$ be a basis of the Lie algebra $\mathfrak{g}$. Then since $\psi$ is a free action, $X_1^\#, \ldots, X_k^\#$ are all linearly independent at each $p \in P$.
+
+Define the \term{vertical tangent space}
+\[
+ V_p = \spn \{(X_1^\#)_p, \ldots, (X_k^\#)_p\} = \ker (\d \pi_p) \subseteq T_p P.
+\]
+We can then put these together to get $V \subseteq TP$, the \term{vertical tangent bundle}.
+
+\begin{defi}[Ehresmann Connection]\index{Ehresmann connection}\index{connection}
+ An \emph{(Ehresmann) connection} on $P$ is a choice of subbundle $H \subseteq TP$ such that
+ \begin{enumerate}
+ \item $TP = V \oplus H$
+ \item $H$ is $G$-invariant.
+ \end{enumerate}
+ Such an $H$ is called a \term{horizontal bundle}.
+\end{defi}
+There is another way of describing a connection. A \term{connection form} on $P$ is a $\mathfrak{g}$-valued $1$-form $A \in \Omega^1(P) \otimes \mathfrak{g}$ such that
+\begin{enumerate}
+ \item $\iota_{X^\#} A = X$ for all $X \in \mathfrak{g}$
+ \item $A$ is $G$-invariant for the action
+ \[
+ g \cdot (A_i \otimes X_i) = (\psi_{g^{-1}})^* A_i \otimes \Ad_g X_i.
+ \]
+\end{enumerate}
+
+\begin{lemma}
+ Giving an Ehresmann connection is the same as giving a connection $1$-form.
+\end{lemma}
+
+\begin{proof}
+ Given an Ehresmann connection $H$, we define
+ \[
+ A_p(v) = X,
+ \]
+ where $v = X_p^\# + h_p \in V \oplus H$.
+
+ Conversely, given an $A$, we define
+ \[
+ H_p = \ker A_p = \{v \in T_p P : i_v A_p = 0\}.\fakeqed
+ \]
+\end{proof}
+
+We next want to define the notion of curvature. We will be interested in \emph{flat} connections, i.e.\ connections with zero curvature.
+
+To understand curvature, if we have a connection $TP = V \oplus H$, then we get further decompositions
+\[
+ T^*P = V^* \oplus H^*,\quad \exterior^2 (T^*P) = (\exterior^2 V^*) \oplus (V^* \wedge H^*) \oplus (\exterior^2 H^*).
+\]
+So we end up having
+\begin{align*}
+ \Omega^1(P) &= \Omega^1_{\mathrm{vert}}(P) \oplus \Omega^1_{\mathrm{hor}}(P)\\
+ \Omega^2(P) &= \Omega_{\mathrm{vert}}^2 \oplus \Omega_{\mathrm{mixed}} \oplus \Omega^2_{\mathrm{hor}}
+\end{align*}
+If $A = \sum_{i = 1}^k A_i \otimes X_i \in \Omega^1 \otimes \mathfrak{g}$, then
+\[
+ \d A \in \Omega^2 \otimes \mathfrak{g}.
+\]
+We can then decompose this as
+\[
+ \d A = (\d A)_{\mathrm{vert}}+ (\d A)_{\mathrm{mix}} + (\d A)_{\mathrm{hor}}.
+\]
+The first part is uninteresting, in the sense that it is always given by
+\[
+ (\d A)_{\mathrm{vert}}(X, Y) = [X, Y],
+\]
+the second part always vanishes, and the last is the \term{curvature form}
+\[
+ \curv A \in \Omega_{\mathrm{hor}}^2 \otimes \mathfrak{g}.
+\]
+\begin{defi}[Flat connection]\index{flat connection}
+ A connection $A$ is \emph{flat} if $\curv A = 0$.
+\end{defi}
+We write $\mathcal{A}$ for the space of all connections on $P$. Observe that if $A_1, A_0 \in \mathcal{A}$, then $A_1 - A_0$ is $G$-invariant and
+\[
+ \iota_{X^\#}(A_1 - A_0) = X - X = 0
+\]
+for all $X \in \mathfrak{g}$. So $A_1 - A_0 \in (\Omega^1_{hor} \otimes \mathfrak{g})^G$. So
+\[
+ \mathcal{A} = A_0 + (\Omega^1_{hor} \otimes \mathfrak{g})^G,
+\]
+and $T_{A_0} \mathcal{A} = (\Omega^1_{hor} \otimes \mathfrak{g})^G$.
+
+Suppose $B$ is a compact Riemann surface, and $G$ a compact or semisimple Lie group. Then there exists an $\Ad$-invariant inner product on $\mathfrak{g}$. We can then define
+\[
+ \omega: (\Omega^1_{hor} \otimes \mathfrak{g})^G \times (\Omega_{hor}^1 \otimes \mathfrak{g})^G \to \R,
+\]
+sending
+\[
+ \left(\sum a_i X_i, \sum b_i X_i\right) \mapsto \int_B \sum_{i, j} a_i \wedge b_j (X_i, X_j).
+\]
+This is easily seen to be bilinear, anti-symmetric and non-degenerate. It is also closed, if suitably interpreted, since it is effectively constant across the affine space $\mathcal{A}$. Thus, $\mathcal{A}$ is an ``infinite-dimensional symplectic manifold''.
+
+To perform symplectic reduction, let $\mathcal{G}$ be the \term{gauge group}, i.e.\ the group of $G$-equivariant diffeomorphisms $f: P \to P$ covering the identity. $\mathcal{G}$ acts on $\mathcal{A}$ by
+\[
+ V \oplus H \mapsto V \oplus H_f,
+\]
+where $H_f$ is the image of $H$ by $\d f$. Atiyah and Bott proved that the action of $\mathcal{G}$ on $(\mathcal{A}, \omega)$ is Hamiltonian with moment map
+\begin{align*}
+ \mu: \mathcal{A} &\to \Lie(\mathcal{G})^* = (\Omega_{hor}^2 \otimes \mathfrak{g})^G\\
+ A &\mapsto \curv A
+\end{align*}
+Performing symplectic reduction, we get
+\[
+ \mathcal{M} = \mu^{-1}(0)/\mathcal{G},
+\]
+the moduli space of flat connections, which has a symplectic structure. It turns out this is in fact a finite-dimensional symplectic orbifold.
+
+\subsection{The convexity theorem}
+We focus on the case $G = T^n$. It turns out the image of the moment map $\mu$ has a very rigid structure.
+
+\begin{thm}[Convexity theorem (Atiyah, Guillemin--Sternberg)]\index{Convexity theorem}
+ Let $(M, \omega)$ be a compact connected symplectic manifold, and
+ \[
+ \mu: M \to \R^n
+ \]
+ a moment map for a Hamiltonian torus action. Then
+ \begin{enumerate}
+ \item The levels $\mu^{-1}(c)$ are connected for all $c$
+ \item The image $\mu(M)$ is convex.
+ \item The image $\mu(M)$ is in fact the convex hull of $\mu(M^G)$.
+ \end{enumerate}
+ We call $\mu(M)$ the \term{moment polytope}.
+\end{thm}
+Here we identify $G \cong T^n \simeq \R^n/\Z^n$, which gives us an identification $\mathfrak{g} \cong \R^n$ and $\mathfrak{g}^* \cong (\R^n)^* \cong\R^n$.
+
+\begin{eg}
+ Consider $(M = \CP^n, \omega_{FS})$. $T^n$ acts by letting $t = (t_1, \ldots, t_n) \in T^n \cong \U(1)^n$ send
+ \[
+ \psi_t([z_0: \cdots : z_n]) = [z_0: t_1 z_1: \cdots :t_n z_n].
+ \]
+ This has moment map
+ \[
+ \mu([z_0: \cdots : z_n]) = -\frac{1}{2} \frac{(|z_1|^2, \ldots, |z_n|^2)}{|z_0|^2 + \cdots + |z_n|^2}.
+ \]
+ The image of this map is
+ \[
+ \mu(M) = \left\{x \in \R^n: x_k \leq 0, x_1 + \cdots + x_n \geq -\frac{1}{2}\right\}.
+ \]
+ For example, in the $\CP^2$ case, the moment image lives in $\R^2$, and is just
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.3] (0, -2) -- (-2, 0) -- (0, 0);
+ \draw [->] (-3, 0) -- (1, 0);
+ \draw [->] (0, -3) -- (0, 1);
+ \draw (-2, 0) node [above] {$-\frac{1}{2}$} -- (0, -2) node [right] {$-\frac{1}{2}$};
+ \end{tikzpicture}
+ \end{center}
+ The three vertices are $\mu([0:0:1]), \mu([0:1:0])$ and $\mu([1:0:0])$.
+\end{eg}
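One can verify the shape of this moment image numerically. The following sketch (Python with NumPy; the sampling scheme is an illustrative choice) evaluates $\mu$ at random points of $\CP^2$ in homogeneous coordinates and checks the triangle inequalities and the three vertices:

```python
import numpy as np

rng = np.random.default_rng(2)

def mu(z):
    """Moment map on CP^2 in homogeneous coordinates z = (z0, z1, z2)."""
    s = np.sum(np.abs(z) ** 2)
    return -0.5 * np.array([abs(z[1]) ** 2, abs(z[2]) ** 2]) / s

# Random points land inside the triangle {x1, x2 <= 0, x1 + x2 >= -1/2}
pts = [mu(rng.standard_normal(3) + 1j * rng.standard_normal(3))
       for _ in range(100)]
inside = all(p[0] <= 0 and p[1] <= 0 and p[0] + p[1] >= -0.5 for p in pts)

# The three torus-fixed points map to the vertices of the triangle
v0 = mu(np.array([1, 0, 0], dtype=complex))  # expect (0, 0)
v1 = mu(np.array([0, 1, 0], dtype=complex))  # expect (-1/2, 0)
v2 = mu(np.array([0, 0, 1], dtype=complex))  # expect (0, -1/2)
```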
+
+We now want to prove the convexity theorem. We first look at (iii). While it seems like a very strong claim, it actually follows formally from (ii).
+\begin{lemma}
+ (ii) implies (iii).
+\end{lemma}
+
+\begin{proof}
+ Suppose the fixed point set of the action has $k$ connected components $Z = Z_1 \cup \cdots \cup Z_k$. Then $\mu$ is constant on each $Z_j$, since $X^\#|_{Z_j} = 0$ for all $X$. Let $\mu(Z_j) = \eta_j \in \R^n$. By (ii), we know the convex hull of $\{\eta_1, \ldots \eta_k\}$ is contained in $\mu(M)$.
+
+ To see that $\mu(M)$ is exactly the convex hull, observe that if $X \in \R^n$ has rationally independent components, so that $X$ topologically generates $T$, then $p$ is fixed by $T$ iff $X_p^\# = 0$, iff $\d \mu_p^X = 0$. Thus, $\mu^X$ attains its maximum on one of the $Z_j$.
+
+ Now if $\xi$ is not in the convex hull of $\{\eta_j\}$, then we can pick an $X \in \R^n$ with rationally independent components such that $\bra \xi, X\ket > \bra \eta_j, X\ket$ for all $j$, since the space of such $X$ is open and non-empty. Then
+ \[
+ \bra \xi, X\ket > \sup_{p \in \bigcup Z_j} \bra \mu(p), X\ket = \sup_{p \in M} \bra \mu(p), X\ket.
+ \]
+ So $\xi \not \in \mu(M)$.
+\end{proof}
+
+With a bit more work, (i) also implies (ii).
+\begin{lemma}
+ (i) implies (ii).
+\end{lemma}
+\begin{proof}
+ The case $n = 1$ is immediate, since $\mu(M)$ is compact and connected, hence a closed interval.
+
+ In general, suppose $\mu: M \to \R^{n + 1}$ is the moment map of a Hamiltonian $T^{n + 1}$ action. To show that $\mu(M)$ is convex, we want to show that the intersection of $\mu(M)$ with any line is connected. In other words, if $\pi: \R^{n + 1} \to \R^n$ is any projection and $\nu = \pi \circ \mu$, then
+ \[
+ \pi^{-1}(c) \cap \mu(M) = \mu(\nu^{-1}(c))
+ \]
+ is connected. This would follow if we knew $\nu^{-1}(c)$ were connected, which would follow from (i) if $\nu$ were a moment map of a $T^n$ action. Unfortunately, most of the time, it is just the moment map of an $\R^n$ action. For it to come from a $T^n$ action, we need $\pi$ to be represented by an \emph{integer} matrix. Then
+ \[
+ T = \{\pi^T t : t \in T^n = \R^n /\Z^n\} \subseteq T^{n + 1}
+ \]
+ is a subtorus, and one readily checks that $\nu$ is the moment map for the $T$ action.
+
+ Now for any $x_0, x_1 \in \mu(M)$, we can find $x_0', x_1' \in \mu(M)$ arbitrarily close to $x_0, x_1$ such that the line through them is of the form $\pi^{-1}(c)$ with $\pi$ integral. Then the segment between $x_0'$ and $x_1'$ is contained in $\mu(M)$ by the above argument, and we are done since $\mu(M)$ is compact, hence closed.
+\end{proof}
+
+It thus remains to prove (i), where we have to put in some genuine work. This requires \emph{Morse--Bott theory}.
+
+Let $M$ be a manifold, $\dim M = m$, and $f: M \to \R$ a smooth map. Let
+\[
+ \Crit(f) = \{p \in M: \d f_p = 0\}
+\]
+be the set of critical points. For $p \in \Crit(f)$ and $(U, x_1, \ldots, x_m)$ a coordinate chart around $p$, we have a \term{Hessian matrix}
+\[
+ H_pf = \left( \frac{\partial^2 f}{\partial x_i \partial x_j}\right)
+\]
+\begin{defi}[Morse(-Bott) function]\index{Morse function}
+ $f$ is a \emph{Morse function} if at each $p \in \Crit(f)$, $H_p f$ is non-degenerate.
+
+ $f$ is a \term{Morse--Bott function} if the connected components of $\Crit(f)$ are submanifolds and for all $p \in \Crit(f)$, we have $T_p \Crit(f) = \ker (H_p f)$.
+\end{defi}
+If $f$ is Morse, then the critical points are isolated. If $f$ is Morse--Bott, then the Hessian is non-degenerate in the normal bundle to $\Crit(f)$.
+
+If $M$ is compact, then there is a finite number of connected components of $\Crit(f)$. So we have
+\[
+ \Crit(f) = Z_1 \cup \cdots \cup Z_k,
+\]
+and the $Z_i$ are called the \term{critical submanifolds}.
+
+For $p \in Z_i$, the Hessian $H_p f$ is a quadratic form and we can write
+\[
+ T_p M = E_p^- \oplus T_p Z_i \oplus E_p^+,
+\]
+where $E_p^{\pm}$ are the positive and negative eigenspaces of $H_p f$ respectively. Note that $\dim E_p^{\pm}$ are locally constant, hence constant along $Z_i$. So we can define the \term{index} of $Z_i$ to be $\dim E_p^-$, and the \term{coindex} to be $\dim E_p^+$.
+
+We can then define a vector bundle $E^- \to Z_i$, called the \term{negative normal bundle}.
+
+Morse theory tells us how the topology of $M_c^- = \{p \in M: f(p) \leq c\}$ changes with $c \in \R$.
+\begin{thm}[Morse theory]\index{Morse theory}\leavevmode
+ \begin{enumerate}
+ \item If $f^{-1}([c_1, c_2])$ contains no critical point, then $f^{-1}(c_1) \cong f^{-1}(c_2)$ and $M_{c_1}^- \cong M_{c_2}^-$ (where $\cong$ means diffeomorphic).
+ \item If $f^{-1}([c_1, c_2])$ contains one critical manifold $Z$, then $M_{c_2}^- \simeq M_{c_1}^- \cup D(E^-)$, where $D(E^-)$ is the disk bundle of $E^-$.
+
+ In particular, if $Z$ is an isolated point, $M_{c_2}^-$ is, up to homotopy equivalence, obtained by adding a $\dim E_p^-$-cell to $M_{c_1}^-$.\fakeqed
+ \end{enumerate}
+\end{thm}
+The key lemma in this proof is the following result:
+\begin{lemma}
+ Let $M$ be a compact connected manifold, and $f: M \to \R$ a Morse--Bott function with no critical submanifold of index or coindex $1$. Then
+ \begin{enumerate}
+ \item $f$ has a unique local maximum and local minimum
+ \item All level sets of $f^{-1}(c)$ are connected.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}[Proof sketch]
+ There is always a global minimum since $M$ is compact. If there is another local minimum at level $c$, then the critical submanifold there has index $0$, so the disk bundle $D(E^-)$ is just the critical submanifold itself, and in
+ \[
+ M_{c + \varepsilon}^- \simeq M_{c - \varepsilon}^- \cup D(E^-)
+ \]
+ for $\varepsilon$ small enough, the union is a disjoint union. So $M_{c + \varepsilon}^-$ has two components. Different connected components can only merge by crossing a critical submanifold of index $1$, so this cannot happen. To handle the maxima, consider $-f$.
+
+ More generally, the same argument shows that a change in connectedness must happen by passing through an index or coindex $1$ critical submanifold.
+\end{proof}
+
+To apply this to prove the convexity theorem, we will show that for any $X$, $\mu^X$ is a Morse--Bott function where all the critical submanifolds are symplectic. In particular, they are of even index and coindex.
+\begin{lemma}
+ For any $X \in \R^n$, $\mu^X$ is a Morse--Bott function where all critical submanifolds are symplectic.
+\end{lemma}
+
+\begin{proof}[Proof sketch]
+ Note that $X^\#_p = 0$ iff $\d \mu^X_p = 0$ iff $p$ is a critical point of $\mu^X$. So the critical points of $\mu^X$ are exactly the zeros of $X^\#$, i.e.\ the fixed points of the subtorus generated by $X$.
+
+ By the Darboux theorem, $(T_p M, \omega_p)$ models $(M, \omega)$ in a neighbourhood of $p$. Near a fixed point of $T^n$, an equivariant version of the Darboux theorem tells us there is a coordinate chart $U$ where $(\omega, \mu^X)$ looks like
+ \begin{align*}
+ \omega|_U &= \sum \d x_i \wedge \d y_i\\
+ \mu^X|_U &= \mu^X(p) - \frac{1}{2} \sum_i \alpha_i (x_i^2 + y_i^2),
+ \end{align*}
+ where the constants $\alpha_i$ are determined by the \emph{weights} of the action.
+
+ Then the critical submanifold of $\mu^X$ through $p$ is locally given by
+ \[
+ \{x_i = y_i = 0 : \alpha_i \not= 0\},
+ \]
+ which is locally a symplectic manifold and has even index and coindex.
+\end{proof}
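In the local model, the evenness of the index and coindex is visible directly: the Hessian is diagonal with each weight appearing twice, once for $x_i$ and once for $y_i$. A quick numerical illustration (Python with NumPy; the weights are an arbitrary sample choice):

```python
import numpy as np

# Hessian of the local model -(1/2) sum alpha_i (x_i^2 + y_i^2): each
# eigenvalue -alpha_i occurs twice (once for x_i, once for y_i), so the
# negative and positive eigenspaces are always even-dimensional.
alphas = np.array([2, -1, 0, 3])  # sample weights; 0 gives critical directions
hessian = np.diag(np.repeat(-alphas, 2).astype(float))
eigs = np.linalg.eigvalsh(hessian)
index = int(np.sum(eigs < 0))    # dimension of the negative eigenspace
coindex = int(np.sum(eigs > 0))  # dimension of the positive eigenspace
```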
+
+%We now return to the proof of the convexity theorem. If $(M, \omega)$ is compact connected, $\psi: T^n \to \Symp(M, \omega)$ a Hamiltonian action with moment map $\mu: M \to \R^n$, then any $X \in \R^n$ generates a subtorus of $T^n$,
+%\[
+% T^X = \{\exp tX : t \in \R\} \subseteq T^n.
+%\]
+%Let $f = \mu^X = \bra \mu, X\ket : M \to \R$.
+%\begin{claim}
+% $\mu^X$ is a Morse--Bott function and all its critical submanifolds are symplectic and of even index and coindex.
+%\end{claim}
+%Then (i$_1$) is an immediate consequence of this.
+%
+%\begin{proof}[Proof sketch]
+% Let $p$ be a fixed point of $X$. Then $X^\#_p = 0$, and so $\d \mu^X_p = 0$, and the converse holds. So the critical points are exactly the fixed points.
+%\end{proof}
+Finally, we can prove the theorem.
+\begin{lemma}
+ (i) holds.
+\end{lemma}
+
+\begin{proof}
+ The $n = 1$ case follows from the previous lemmas. We then induct on $n$.
+
+ Suppose the theorem holds for $n$, and let $\mu = (\mu_1, \ldots, \mu_{n + 1}): M \to \R^{n + 1}$ be a moment map for a Hamiltonian $T^{n + 1}$-action. We want to show that for all $c = (c_1, \ldots, c_{n + 1}) \in \R^{n + 1}$, the set
+ \[
+ \mu^{-1}(c) = \mu_1^{-1}(c_1) \cap \cdots \cap \mu_{n + 1}^{-1}(c_{n + 1})
+ \]
+ is connected.
+
+ The idea is to set
+ \[
+ N = \mu_1^{-1}(c_1) \cap \cdots \cap \mu_n^{-1}(c_n),
+ \]
+ and then show that $\mu_{n + 1}|_N: N \to \R$ is a Morse--Bott function with no critical submanifolds of index or coindex $1$.
+
+ We may assume that $\d \mu_1, \ldots, \d \mu_n$ are linearly independent, or equivalently, $\d \mu^X \not= 0$ for all non-zero $X \in \R^n$. Otherwise, the action reduces to that of a smaller torus.
+
+ To make sense of $N$, we must pick $c$ to be a regular value. Density arguments imply that
+ \[
+ \mathcal{C} = \bigcup_{X \not= 0} \Crit(\mu^X) = \bigcup_{X \in \Z^{n + 1} \setminus \{0\}} \Crit \mu^X.
+ \]
+ Since $\Crit \mu^X$ is a union of codimension $\geq 2$ submanifolds, its complement is dense. Hence by the Baire category theorem, $\mathcal{C}$ has dense complement. Then a continuity argument shows that we only have to consider the case when $c$ is a regular value of $\mu$, hence $N$ is a genuine submanifold of codimension $n$.
+
+ By the induction hypothesis, $N$ is connected. We now show that $\mu_{n + 1}|_N: N \to \R$ is Morse--Bott with no critical submanifolds of index or coindex $1$.
+%
+% By continuity, it is enough to show that $\mu^{-1}(c)$ is connected for regular values $c$. Then $N = \mu_1^{-1}(c_1) \cap \cdots \cap \mu_n^{-1}(c_n)$ is a submanifold of codimension $n$ in $M$, and by induction hypothesis, $N$ is connected. The goal is to show that
+% \[
+% \mu^{-1}(c) = N \cap \mu_{n + 1}^{-1}(c_{n + 1}) = (\mu_{n + 1}|_N)^{-1}(c_n)
+% \]
+% is connected. So we want to apply the key lemma to $\mu_{n + 1}|_N: N \to \R$. So we need to show that $\mu_{n + 1}|_N$ is Morse-Bott with no critical submanifolds of index or coindex $1$.
+
+ Let $x$ be a critical point. Then the theory of Lagrange multipliers tells us there are some $\lambda_i \in \R$ such that
+ \[
+ \left[\d \mu_{n + 1} + \sum_{i = 1}^n \lambda_i \d \mu_i\right]_x = 0.
+ \]
+ Thus, $x$ is a critical point in $M$ of the function
+ \[
+ \mu^Y = \mu_{n + 1} + \sum_{i = 1}^n \lambda_i \mu_i,
+ \]
+ where $Y = (\lambda_1, \ldots, \lambda_n, 1) \in \R^{n + 1}$. So by the previous lemma, $\mu^Y$ is Morse--Bott with only even indices and coindices. Let $W$ be a critical submanifold of $\mu^Y$ containing $x$.
+ \begin{claim}
+ $W$ intersects $N$ transversely at $x$.
+ \end{claim}
+ If this were true, then $\mu^Y|_N$ has $W \cap N$ as a non-degenerate critical submanifold of even index and coindex, since the coindex doesn't change and $W$ is even-dimensional. Moreover, when restricted to $N$, $\sum \lambda_i \mu_i$ is a constant. So $\mu_{n + 1}|_N$ satisfies the same properties.
+
+ To prove the claim, note that
+ \[
+ T_x N = \ker \d \mu_1|_x \cap \cdots \cap \ker \d \mu_n|_x.
+ \]
+ With a moment's thought, we see that it suffices to show that $\d \mu_1, \ldots, \d \mu_n$ remain linearly independent when restricted to $T_x W$. Now observe that the Hamiltonian vector fields $X_1^\#|_x, \ldots, X_n^\#|_x$ are independent since $\d \mu_1|_x, \ldots, \d \mu_n|_x$ are, and they live in $T_x W$ since their flows preserve $W$.
+
+ Since $W$ is symplectic (by the previous lemma), for all non-zero $k = (k_1, \ldots, k_n)$, there exists $v \in T_x W$ such that
+ \[
+ \omega \left(\sum k_i X_i^\#|_x, v\right) \not= 0.
+ \]
+ In other words,
+ \[
+ \left(\sum k_i \d \mu_i\right)(v) \not= 0.\qedhere
+ \]
+\end{proof}
+
+%\begin{proof}[Proof of (iii)]
+% Suppose the fixed point set of the action has $k$ connected components $Z = Z_1 \cup \cdots \cup Z_k$. Then $\mu$ is constant on each $Z_j$, since $X^\#|_{Z_j} = 0$ for all $X$. Let $\mu(Z_j) = \eta_j \in \R^n$.
+%
+% By (ii), we know the convex hull of $\{\eta_1, \ldots \eta_k\}$ is contained in $\mu(M)$. Now suppose $\xi$ is not in the convex hull. Choose $X \in \R^n$ with rationally independent components, such that the torus topologically generated by $X$ is $T$, and such that $\bra \xi, X\ket > \bra \eta_j, X\ket$ for all $j$.
+%
+% The generation condition implies that $p$ is fixed iff $X_p^\# = 0$, iff $\d \mu_p^X = 0$. This means $\mu^X$ attains its maximum on one of the $Z_j$. So
+% \[
+% \bra \xi, X\ket > \sup_{p \in M} \bra \mu(p), X\ket.
+% \]
+% So $\xi \not \in \im \mu$.
+%\end{proof}
+
+It is natural to seek a non-abelian generalization of this, and it indeed exists. Let $(M, \omega)$ be a symplectic manifold, and $G$ a compact Lie group with a Hamiltonian action on $M$ with moment map $\mu: M \to \mathfrak{g}^*$. From Lie group theory, there is a maximal torus $T \subseteq G$ with Lie algebra $\mathfrak{t}$, and the \term{Weyl group} $W = N(T)/T$ is finite (where $N(T)$ is the normalizer of $T$).
+
+Then under the coadjoint action, we have
+\[
+ \mathfrak{g}^*/G \cong \mathfrak{t}^*/W.
+\]
+Pick a \term{Weyl chamber} $\mathfrak{t}^*_+$ of $\mathfrak{t}^*$, i.e.\ a fundamental domain of $\mathfrak{t}^*$ under $W$. Then $\mu$ induces a moment map $\mu_+: M \to \mathfrak{t}^*_+$, and the non-abelian convexity theorem says
+\begin{thm}[Kirwan, 1984]
+ $\mu_+(M) \subseteq \mathfrak{t}^*_+$ is a convex polytope.
+\end{thm}
+
+We shall end with an application of the convexity theorem to linear algebra. Let $\lambda = (\lambda_1, \lambda_2) \in \R^2$ and $\lambda_1 \geq \lambda_2$, and $\mathcal{H}^2_\lambda$ the set of all $2 \times 2$ Hermitian matrices with eigenvalues $(\lambda_1, \lambda_2)$. What can the diagonal entries of $A \in \mathcal{H}^2_\lambda$ be?
+
+We can definitely solve this by brute force, since any element of $\mathcal{H}^2_\lambda$ looks like
+\[
+ A =
+ \begin{pmatrix}
+ a & z\\
+ \bar{z} & b
+ \end{pmatrix}
+\]
+where $a, b \in \R$ and $z \in \C$. We know
+\begin{align*}
+ \tr A &= a + b = \lambda_1 + \lambda_2\\
+ \det A &= ab - |z|^2 = \lambda_1 \lambda_2.
+\end{align*}
+The first implies $b = \lambda_1 + \lambda_2 - a$, and all the second condition gives is that $ab \geq \lambda_1 \lambda_2$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$a$};
+ \draw [->] (0, -0.5) -- (0, 3) node [above] {$b$};
+ \draw [domain=0.1667:1] plot (\x, {1/(2*\x)});
+ \draw [domain=1:3] plot (\x, {1/(2*\x)});
+
+ \draw (-0.5, 3) -- (3, -0.5);
+
+ \draw [mblue, thick] (0.2192, 2.2808) node [circ] {} -- (2.2808, 0.2192) node [circ] {};
+ \end{tikzpicture}
+\end{center}
+This completely determines the geometry of $\mathcal{H}^2_\lambda$, since for each allowed value of $a$, there is a unique value of $b$, which in turn determines the norm of $z$. Topologically, this is a sphere, since there is an $S^1$'s worth of choices of $z$ except at the two end points, where $ab = \lambda_1 \lambda_2$ forces $z = 0$.
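This picture can be checked by brute force numerically. The sketch below (Python with NumPy; the eigenvalues are sample choices) draws random elements of $\mathcal{H}^2_\lambda$ by unitary conjugation and verifies that the diagonal entries satisfy $a + b = \lambda_1 + \lambda_2$ with $a \in [\lambda_2, \lambda_1]$:

```python
import numpy as np

rng = np.random.default_rng(3)
lam1, lam2 = 3.0, 1.0  # sample eigenvalues, lam1 >= lam2

def random_element():
    """A random element of H^2_lambda: conjugate diag(lam1, lam2) by a
    random unitary (the QR factor of a complex Gaussian matrix)."""
    g = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    q, _ = np.linalg.qr(g)
    return q @ np.diag([lam1, lam2]) @ q.conj().T

diags_ok = True
for _ in range(200):
    A = random_element()
    a, b = A[0, 0].real, A[1, 1].real
    # trace constraint, and a confined to the segment [lam2, lam1]
    if not (abs(a + b - (lam1 + lam2)) < 1e-9
            and lam2 - 1e-9 <= a <= lam1 + 1e-9):
        diags_ok = False
```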
+
+What has this got to do with Hamiltonian actions? Observe that $\U(2)$ acts transitively on $\mathcal{H}_\lambda^2$ by conjugation, and
+\[
+ T^2 = \left\{
+ \begin{pmatrix}
+ e^{i\theta_1} & 0\\
+ 0 & e^{i\theta_2}
+ \end{pmatrix}\right\} \subseteq \U(2).
+\]
+This contains a copy of $S^1$ given by
+\[
+ S^1 = \left\{
+ \begin{pmatrix}
+ e^{i\theta_1} & 0\\
+ 0 & e^{-i\theta_1}
+ \end{pmatrix}\right\} \subseteq T^2.
+\]
+We can check that
+\[
+ \begin{pmatrix}
+ e^{i\theta} & 0\\
+ 0 & e^{-i\theta}
+ \end{pmatrix}
+ \begin{pmatrix}
+ a & z\\
+ \bar{z} & b
+ \end{pmatrix}
+ \begin{pmatrix}
+ e^{i\theta} & 0\\
+ 0 & e^{-i\theta}
+ \end{pmatrix}^{-1} =
+ \begin{pmatrix}
+ a & e^{i\theta}z\\
+ \overline{e^{i\theta} z} & b
+ \end{pmatrix}
+\]
+Thus, if $\varphi: \mathcal{H}_\lambda^2 \to \R^2$ is the map that selects the diagonal elements, then
+\[
+ \varphi^{-1}(a, b) =
+ \begin{pmatrix}
+ a & |z|e^{i\theta}\\
+ |z|e^{-i\theta} & b
+ \end{pmatrix}
+\]
+is one $S^1$-orbit. This is reminiscent of the $S^1$ action on $S^2$ quotienting out to a line segment.
+
+We can think more generally about $\mathcal{H}_\lambda^n$, the $n\times n$ Hermitian matrices with eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$, and ask what the diagonal elements can be.
+
+Take $\varphi: \mathcal{H}_\lambda^n \to \R^n$ to be the map that selects the diagonal entries. Then the image lies in the plane $\tr A = \sum \lambda_i$. This certainly contains the $n!$ points whose coordinates are all possible permutations of the $\lambda_i$, given by the diagonal matrices.
+
+\begin{thm}[Schur--Horn theorem]\index{Schur--Horn theorem}
+ $\varphi(\mathcal{H}_\lambda^n)$ is the convex hull of the $n!$ points from the diagonal matrices.
+\end{thm}
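Membership in the convex hull of the $n!$ permuted points is equivalent to the diagonal being majorized by $\lambda$ (a classical fact about permutohedra), which gives a quick numerical check of the theorem. A sketch in Python with NumPy, with sample eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(4)
lam = np.array([4.0, 2.0, 1.0, -1.0])  # sample eigenvalues, decreasing

def majorized_by(d, lam, tol=1e-9):
    """d lies in the convex hull of permutations of lam iff lam majorizes d:
    sorted partial sums of d are dominated by those of lam, totals equal."""
    d_sorted = np.sort(d)[::-1]
    return (bool(np.all(np.cumsum(d_sorted) <= np.cumsum(lam) + tol))
            and abs(d.sum() - lam.sum()) < tol)

ok = True
for _ in range(100):
    g = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    q, _ = np.linalg.qr(g)             # random unitary
    A = q @ np.diag(lam) @ q.conj().T  # a random element of H^4_lambda
    if not majorized_by(np.diag(A).real, lam):
        ok = False
```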
+
+To view this from a symplectic perspective, let $M = \mathcal{H}_\lambda^n$, on which $\U(n)$ acts transitively by conjugation. For $A \in M$, let $G_A$ be the stabilizer of $A$. Then
+\[
+ \mathcal{H}_\lambda^n = \U(n)/G_A.
+\]
+Thus,
+\[
+ T_A \mathcal{H}_\lambda^n \cong \frac{i \mathcal{H}^n}{\mathfrak{g}_A}
+\]
+where $\mathcal{H}^n$ is the Hermitian matrices. The point of this is to define a symplectic form. We define
+\begin{align*}
+ \omega_A : i \mathcal{H}^n \times i \mathcal{H}^n &\to \R\\
+ (X, Y) &\mapsto i\tr([X, Y]A) = i \tr (X (YA - AY))
+\end{align*}
+So
+\[
+ \ker \omega_A = \{Y : [A, Y] = 0\} = \mathfrak{g}_A.
+\]
+So $\omega_A$ induces a non-degenerate form on $T_A \mathcal{H}_\lambda^n$. In fact, this gives a symplectic form on $\mathcal{H}^n_\lambda$.
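The two defining properties of $\omega_A$, anti-symmetry and the description of its kernel, can be checked numerically. In the sketch below (Python with NumPy; the diagonal matrix $A$ is a sample choice), tangent vectors are modelled as skew-Hermitian matrices in $i\mathcal{H}^n$, and a purely imaginary diagonal $Z$ commutes with $A$, so it should pair to zero with everything:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = np.diag([2.0, 1.0, 1.0]).astype(complex)  # a sample Hermitian matrix

def omega_A(X, Y):
    """omega_A(X, Y) = i tr([X, Y] A); real-valued on skew-Hermitian X, Y."""
    return (1j * np.trace((X @ Y - Y @ X) @ A)).real

def random_skew():
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return g - g.conj().T  # an element of i H^n

X, Y = random_skew(), random_skew()
antisymmetric = abs(omega_A(X, Y) + omega_A(Y, X)) < 1e-9

# [A, Z] = 0, and tr([Z, Y] A) = tr(Y [A, Z]), so omega_A(Z, -) vanishes
Z = np.diag(1j * rng.standard_normal(n))
in_kernel = abs(omega_A(Z, Y)) < 1e-9
```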
+
+Let $T^n \subseteq \U(n)$ be the maximal torus consisting of diagonal matrices. We can show that the $T^n$ action is Hamiltonian with moment map $\varphi$. Since the fixed points of $T^n$ are exactly the diagonal matrices, we are done.
+
+\subsection{Toric manifolds}
+What the convexity theorem tells us is that if we have a manifold $M$ with a torus action, then the image of the moment map is a convex polytope. How much information is retained by a polytope?
+
+Of course, if we take a torus that acts trivially on $M$, then no information is retained.
+
+\begin{defi}[Effective action]\index{effective action}
+ An action $G$ on $M$ is \emph{effective} (or \emph{faithful}) if every non-identity $g \in G$ moves at least one point of $M$.
+\end{defi}
+
+But the trivial torus $T^0$ acts effectively, and its polytope, a single point, still retains no information. Thus, we want to have as large a torus action as we can. The following proposition puts a bound on ``how much'' torus action we can have:
+\begin{prop}
+ Let $(M, \omega)$ be a compact, connected symplectic manifold with moment map $\mu: M \to \R^n$ for a Hamiltonian $T^n$ action. If the $T^n$ action is effective, then
+ \begin{enumerate}
+ \item There are at least $n + 1$ fixed points.
+ \item $\dim M \geq 2n$.
+ \end{enumerate}
+\end{prop}
+
+We first state without proof a result that is just about smooth actions.
+\begin{fact}
+ An effective action of $T^n$ has orbits of dimension $n$.\fakeqed
+\end{fact}
+This doesn't mean all orbits are of dimension $n$. It just means \emph{some} orbit has dimension $n$.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $\mu = (\mu_1, \ldots, \mu_n): M \to \R^n$ and $p$ is a point in an $n$-dimensional orbit, then $\{(\d \mu_i)_p\}$ are linearly independent. So $\mu(p)$ is an interior point (if $\mu(p)$ were not an interior point, there would be a direction $X$ pointing out of $\mu(M)$, so $(\d \mu^X)_p = 0$, and $\d \mu^X$ would give a non-trivial linear combination of the $\d \mu_i$'s that vanishes).
+
+ So since $\mu(M)$ has an interior point, it is a non-degenerate polytope in $\R^n$. This means it has at least $n + 1$ vertices. So there are at least $n + 1$ fixed points.
+ \item Let $\mathcal{O}$ be an orbit of $p$ in $M$. Then $\mu$ is constant on $\mathcal{O}$ by invariance of $\mu$. So
+ \[
+ T_p \mathcal{O} \subseteq \ker (\d \mu_p) = (T_p \mathcal{O})^\omega.
+ \]
+ So all orbits of a Hamiltonian torus action are isotropic submanifolds. So $\dim \mathcal{O} \leq \frac{1}{2}\dim M$. So we are done.\qedhere
+ \end{enumerate}
+\end{proof}
+
+In the ``optimal'' case, we have $\dim M = 2n$.
+\begin{defi}[(Symplectic) toric manifold]\index{symplectic toric manifold}\index{toric manifold}
+ A \emph{(symplectic) toric manifold} is a compact connected symplectic manifold $(M^{2n}, \omega)$ equipped with an effective Hamiltonian action of an $n$-torus $T^n$, together with a choice of corresponding moment map $\mu$.
+\end{defi}
+
+\begin{eg}
+ Take $(\CP^n, \omega_{FS})$, where the moment map is given by
+ \[
+ \mu([z_0:z_1:\cdots :z_n]) = - \frac{1}{2} \frac{(|z_1|^2, \ldots, |z_n|^2)}{|z_0|^2 + |z_1|^2 + \cdots + |z_n|^2}.
+ \]
+ Then this is a symplectic toric manifold.
+\end{eg}
+
+Note that if $(M, \omega, T, \mu)$ is a toric manifold with $\mu = (\mu_1, \ldots, \mu_n): M \to \R^n$, then $\mu_1, \ldots, \mu_n$ are commuting integrals of motion
+\[
+ \{\mu_i, \mu_j\} = \omega(X_i^\#, X_j^\#) = 0.
+\]
+So we get an integrable system.
+
+The punch line of the section is that there is a correspondence between toric manifolds and polytopes of a suitable kind. First, we need a suitable notion of equivalence of toric manifolds.
+
+\begin{defi}[Equivalent toric manifolds]
+ Fix a torus $T = \R^n/(2\pi \Z)^n$, and fix an identification $\mathfrak{t}^* \cong \mathfrak{t} \cong \R^n$. Given two toric manifolds $(M_i, \omega_i, T, \mu_i)$ for $i = 1, 2$, we say they are
+ \begin{enumerate}
+ \item \emph{equivalent} if there exists a symplectomorphism $\varphi: M_1 \to M_2$ such that $\varphi(x \cdot p) = x \cdot \varphi(p)$ and $\mu_2 \circ \varphi = \mu_1$.
+ \item \emph{weakly equivalent} if there exists an automorphism $\lambda: T \to T$ and a symplectomorphism $\varphi: M_1 \to M_2$ such that $\varphi(x \cdot p) = \lambda(x) \cdot \varphi(p)$.
+ \end{enumerate}
+\end{defi}
+%Let $(M_1, \omega_1, T_1, \mu_1)$ and $(M_2, \omega_2, T_2, \mu_2)$ be toric manifolds. Then they are said to be equivalent if there is a group homomorphism $\lambda: T_1 \to T_2$ and a $\lambda$-equivariant symplectomorphism $\varphi: M_1 \to M_2$, i.e.
+%\[
+% \varphi(tp) = \lambda(t) \cdot \phi(p)\text{ for all }t \in T_1, p \in M_1
+%\]
+%such that $\d \lambda \circ \mu_1 = \mu_2 \circ \varphi$.
+
+We also need a notion of equivalence of polytopes. Recall that $\Aut(T) = \GL(n, \Z)$, and we can define
+\begin{defi}
+ \[
+ \mathrm{AGL}(n, \Z) = \{x \mapsto Bx + c: B \in \GL(n, \Z), c \in \R^n\}.
+ \]
+\end{defi}
+Finally, not all polytopes can arise from the image of a moment map. It is not hard to see that the following are some necessary properties:
+\begin{defi}[Delzant polytope]\index{Delzant polytope}
+ A \emph{Delzant polytope} in $\R^n$ is a compact convex polytope satisfying
+ \begin{enumerate}
+ \item \emph{Simplicity}: There are exactly $n$ edges meeting at each vertex.
+ \item \emph{Rationality}: The edges meeting at each vertex $P$ are of the form $P + t u_i$ for $t \geq 0$ and $u_i \in \Z^n$.
+ \item \emph{Smoothness}: For each vertex, the corresponding $u_i$'s can be chosen to be a $\Z$-basis of $\Z^n$.
+ \end{enumerate}
+\end{defi}
+Observe that all polytopes arising as $\mu(M)$ satisfy these properties.
+
+We can equivalently define rationality and smoothness as being the exact same conditions on the outward-pointing normals to the facets (co-dimension $1$ faces) meeting at $P$.
+
+\begin{eg}
+ In $\R$, any Delzant polytope is a line segment. This corresponds to the toric manifold $S^2 = \CP^1$ as before, and the length of the polytope corresponds to the volume of $\CP^1$ under $\omega$.
+\end{eg}
+
+\begin{eg}
+ In $\R^2$, this is a Delzant polytope:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.8]
+ \draw [fill=mblue, fill opacity=0.3] (0, 0) -- (0, 2) -- (2, 0) -- (0, 0);
+ \draw [-latex, mred] (0, 0) -- (0.5, 0);
+ \draw [-latex, mred] (0, 0) -- (0, 0.5);
+ \draw [-latex, mred] (0, 2) -- (0, 1.5);
+ \draw [-latex, mred] (0, 2) -- (0.5, 1.5);
+ \draw [-latex, mred] (2, 0) -- (1.5, 0);
+ \draw [-latex, mred] (2, 0) -- (1.5, 0.5);
+ \end{tikzpicture}
+ \end{center}
+ On the other hand, this doesn't work:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.3] (0, 0) -- (1, 0) -- (0, 2) -- (0, 0);
+ \draw [-latex, mred] (0, 0) -- (0.3, 0);
+ \draw [-latex, mred] (0, 0) -- (0, 0.3);
+ \draw [-latex, mred] (0, 2) -- (0, 1.4);
+ \draw [-latex, mred] (0, 2) -- (0.3, 1.4);
+ \draw [-latex, mred] (1, 0) -- (0.7, 0);
+ \draw [-latex, mred] (1, 0) -- (0.7, 0.6);
+ \end{tikzpicture}
+ \end{center}
+ since on the bottom right vertex, we have
+ \[
+ \det
+ \begin{pmatrix}
+ -1 & -1\\
+ 0 & 2
+ \end{pmatrix} = -2 \not= \pm 1.
+ \]
+ To fix this, we can do
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.3] (0, 0) -- (0, 3) -- (1, 1) -- (1, 0) -- (0, 0);
+ \draw [-latex, mred] (0, 0) -- (0.3, 0);
+ \draw [-latex, mred] (0, 0) -- (0, 0.3);
+ \draw [-latex, mred] (0, 3) -- (0, 2.4);
+ \draw [-latex, mred] (0, 3) -- (0.3, 2.4);
+ \draw [-latex, mred] (1, 1) -- (1, 0.7);
+ \draw [-latex, mred] (1, 1) -- (0.7, 1.6);
+
+ \draw [-latex, mred] (1, 0) -- (0.7, 0);
+ \draw [-latex, mred] (1, 0) -- (1, 0.3);
+ \end{tikzpicture}
+ \end{center}
+ Of course, we can also do boring things like rectangles.
+\end{eg}
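Determinant checks like the one above are easy to mechanise. The following sketch (the helper name \texttt{is\_smooth\_vertex} is our own) verifies the smoothness condition at the relevant vertex of each of the two triangles:

```python
import numpy as np

# Illustrative check of the smoothness condition: at each vertex, the
# primitive integer edge directions should form a Z-basis of Z^2, i.e. the
# matrix they form has determinant +-1.  Data is read off the two triangles.

def is_smooth_vertex(edge_dirs):
    """edge_dirs: primitive integer edge directions at a vertex."""
    return abs(round(np.linalg.det(np.array(edge_dirs, dtype=float)))) == 1

# Triangle (0,0), (0,2), (2,0): vertex (2,0) has edge directions (-1,0), (-1,1).
good = is_smooth_vertex([(-1, 0), (-1, 1)])

# Triangle (0,0), (1,0), (0,2): vertex (1,0) has edge directions (-1,0), (-1,2).
bad = is_smooth_vertex([(-1, 0), (-1, 2)])

print(good, bad)  # True False
```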
+There is in fact a classification of all Delzant polytopes in $\R^2$, but we shall not discuss this.
+
+\begin{eg}
+ The rectangular pyramid in $\R^3$ is not Delzant because it is not simple. The tetrahedron is.
+\end{eg}
+
+\begin{thm}[Delzant]
+ There are correspondences
+ \begin{align*}
+ \left\{\parbox{4cm}{\centering symplectic toric manifolds up to equivalence}\right\} &\longleftrightarrow \left\{\parbox{4cm}{\centering Delzant polytopes}\vphantom{\parbox{4cm}{\centering symplectic toric manifolds up to equivalence}}\right\}\\
+ \left\{\parbox{4cm}{\centering symplectic toric manifolds up to weak equivalence}\right\} &\longleftrightarrow \left\{\parbox{4cm}{\centering Delzant polytopes \\modulo $\mathrm{AGL}(n, \Z)$}\vphantom{\parbox{4cm}{\centering symplectic toric manifolds up to equivalence}}\right\}
+ \end{align*}
+\end{thm}
+
+\begin{proof}[Proof sketch]
+ Given a Delzant polytope $\Delta$ in $(\R^n)^*$ with $d$ facets, we want to construct $(M_\Delta, \omega_\Delta, T_\Delta, \mu_\Delta)$ with $\mu_\Delta(M_\Delta) = \Delta$. The idea is to perform the construction for the ``universal'' Delzant polytope with $d$ facets, and then obtain the desired $M_\Delta$ as a symplectic reduction of this universal example. As usual, the universal example will be ``too big'' to be a genuine symplectic toric manifold. Instead, it will be non-compact.
+
+ If $\Delta$ has $d$ facets with primitive outward-pointing normal vectors $v_1, \ldots, v_d$ (i.e.\ they cannot be written as a $\Z$-multiple of some other $\Z$-vector), then we can write $\Delta$ as
+ \[
+ \Delta = \{x \in (\R^n)^* : \bra x, v_i \ket \leq \lambda_i\text{ for }i = 1, \ldots, d\}
+ \]
+ for some $\lambda_i$.
+
+ There is a natural (surjective) map $\pi: \R^d \to \R^n$ that sends each basis vector $e_i$ of $\R^d$ to $v_i$. Write $\lambda = (\lambda_1, \ldots, \lambda_d)$. Then we have a pullback diagram
+ \[
+ \begin{tikzcd}[column sep=large]
+ \Delta \ar[d, hook] \ar[r] & \R^d_\lambda \ar[d, hook]\\
+ (\R^n)^* \ar[r, hook, "\pi^*"] & (\R^d)^*
+ \end{tikzcd}
+ \]
+ where
+ \[
+ \R^d_\lambda = \{X \in (\R^d)^* : \bra X, e_i\ket \leq \lambda_i\text{ for all }i\}.
+ \]
+ In more down-to-earth language, this says
+ \[
+ \pi^*(x) \in \R^d_\lambda \Longleftrightarrow x \in \Delta,
+ \]
+ which is evident from definition.
+
+ Now there is a universal ``toric manifold'' with $\mu(M) = \R^d_\lambda$, namely $(\C^d, \omega_0)$ with the diagonal action
+ \[
+ (t_1, \ldots, t_d) \cdot (z_1, \ldots, z_d) = (e^{it_1} z_1, \ldots, e^{it_d}z_d),
+ \]
+ using the moment map
+ \[
+ \phi(z_1, \ldots, z_d) = -\frac{1}{2}(|z_1|^2, \ldots, |z_d|^2) + (\lambda_1, \ldots,\lambda_d).
+ \]
+
+ We now want to pull this back along $\pi^*$. To this end, note that $\pi$ sends $\Z^d$ to $\Z^n$, hence induces a map $T^d \to T^n$ with kernel $N$. If $\mathfrak{n}$ is the Lie algebra of $N$, then we have a short exact sequence
+ \[
+ 0 \longrightarrow (\R^n)^* \overset{\pi^*}{\longrightarrow} (\R^d)^* \overset{i^*}{\longrightarrow} \mathfrak{n}^* \longrightarrow 0.
+ \]
+ Since $\im \pi^* = \ker i^*$, the pullback of $\C^d$ along $\pi^*$ is exactly
+ \[
+ Z = (i^* \circ \phi)^{-1}(0).
+ \]
+ It is easy to see that this is compact.
+
+ Observe that $i^* \circ \phi$ is exactly the moment map of the induced action by $N$. So $Z/N$ is the symplectic reduction of $\C^d$ by $N$, and in particular has a natural symplectic structure. It is natural to consider $Z/N$ instead of $Z$ itself, since $Z$ carries a $T^d$ action, but we only want to be left with a $T^n$ action. Thus, after quotienting out by $N$, the $T^d$ action becomes a $T^d/N \cong T^n$ action, with moment map given by the unique factoring of
+ \[
+ Z \hookrightarrow \C^d \to (\R^d)^*
+ \]
+ through $(\R^n)^*$. The image is exactly $\Delta$.
+% This is not difficult. The short exact sequence
+% \[
+% 0 \longrightarrow N \longrightarrow T^d \longrightarrow T^n \longrightarrow 0
+% \]
+% admits a right splitting $\sigma: T^n \to T^d$, so that
+% \[
+% T^d \cong N \times \sigma(T^n).
+% \]
+% This naturally induces a splitting $(\R^d) \cong \mathfrak{n}^* \oplus (\R^n)^*$. Then the $\sigma(T^n)$ action on $Z$ descends to one on $Z/N$, with moment map given by
+% \[
+% Z \hookrightarrow \C^d \rightarrow (\R^d)^* \cong \mathfrak{n}^* \otimes (\R^n)^* \overset{\sigma^*}{\to} (\R^n)^*,
+% \]
+% whose image is exactly $\Delta$.
+%
+% By assumption, $z_1, \ldots, v_d$ span $\Z^n$ over $\Z$.
+%
+% Let $e_1, \ldots, e_d$ be a basis of $\R^d$. Then there is a map $T: \R^d \to \R^n$ that sends $e_i \to v_i$. Note that this maps $\Z^d$ onto $\Z^n$. This thus induces a map $T^d = \R^d/2\pi \Z^d \to T^n / 2\pi \Z^n$.
+%
+% Let $N = \ker \pi$, a $(d - n)$-dimensional Lie subgroup, and $\mathfrak{n}$ the Lie algebra. We then get an exact sequences
+% \[
+% \begin{tikzcd}[row sep=tiny]
+% 0 \ar[r] & N \ar[r, "i"] & T^d \ar[r, "\pi"] & T^n \ar[r] & 0\\
+% 0 \ar[r] & \mathfrak{n} \ar[r, "i"] & \R^d \ar[r, "\pi"] & \R^n \ar[r] & 0\\
+% 0 \ar[r] & (\R^n)^* \ar[r, "\pi^*"] & (\R^d)^* \ar[r, "i^*"] & \mathfrak{n}^* \ar[r] & 0
+% \end{tikzcd}
+% \]
+% Now consider $(\C^d, \omega_0 = -\frac{1}{2} \sum \d z_k \wedge \d \bar{z}_k)$, with the diagonal action of $T^d$ on $\C^d$ given by
+% \[
+% (t_1, \ldots, t_d) \cdot (z_1, \ldots, z_d) = (e^{it_1} z_1, \ldots, e^{it_n}z_n)
+% \]
+% which is Hamiltonian with moment map $\phi: \C^d \to (\R^d)^*$ given by
+% \[
+% \phi(z_1, \ldots, z_d) = -\frac{1}{2}(|z_1|^2, \ldots, |z_d|^2) + (\lambda_1, \ldots,\lambda_d).
+% \]
+% The action of the subgroup $N$ is Hamiltonian with moment map
+% \[
+% \begin{tikzcd}
+% \C^d \ar[r, "\phi"] & (\R^d)^* \ar[r, "i^*"] & \mathfrak{n}^*
+% \end{tikzcd}.
+% \]
+% We want to reduce it at $0$. Let
+% \begin{multline*}
+% Z = (i^* \circ \varphi)^{-1}(0) = \phi^{-1}( (i^*)^{-1}(0) \cap \im(\phi)) \\
+% = \phi^{-1}(\ker i^* \cap \im(\phi)) = \phi^{-1}(\im \pi^* \cap \im \phi).
+% \end{multline*}
+% Since $\im \pi^* \cap \im \phi$ is $n$-dimensional, we know $\dim Z = n + d$.
+% \begin{lemma}
+% Let $N$ be a $(d - n)$-dimensional subtorus of $T^d$ and $T^d \cong N \times T^n$.
+% \end{lemma}
+% \begin{lemma}
+% $\phi(Z) = \pi^*(\Delta)$
+% \end{lemma}
+% Since $\phi$ is proper, we know
+% \begin{lemma}
+% $Z$ is compact.
+% \end{lemma}
+% \begin{lemma}
+% $N$ acts freely on $Z$.
+% \end{lemma}
+% So $M_\Delta = Z/N$ is a compact connected manifold of dimension $2n$, which has a reduced symplectic form. We seek a Hamiltonian $T^n$-action with $\mu_\Delta(M_\Delta) = \Delta$.
+%
+% To see this, pick a section $\sigma: T^n \to T^d$ of $0 \to N \to T^d \to T^n \to 0$. Then $\sigma(T^n)$ is a subtorus of $T^d$ of dimension $n$, and its action on $\C^d$ descends to $Z/N$ since the composition
+% \[
+% \begin{tikzcd}
+% Z \ar[r, hook, "j"] & \C^d \ar[r, "\phi"] & (\R^d)^* \cong \mathfrak{n}^* \oplus (\R^n)^* \ar[r, "\sigma^*"] & (\R^n)^*
+% \end{tikzcd}
+% \]
+% is constant along $N$-orbits. Then
+% \[
+% \mu_\Delta(M_\Delta) = (\mu_\Delta \circ \mathrm{pr})(Z) = \sigma^* \circ \phi \circ j(z) = \sigma^* \pi^* \Delta = \Delta.\qedhere
+% \]
+\end{proof}
+
+%$(M_\Delta, \omega_\Delta, T, \mu_\Delta)$ is actually K\"ahler, and this connects to the analogous theory of toric varieties in algebraic geometry.
+
+%In fact, $\Delta$ is the orbit space of the torus action. These Delzant polytopes contain a lot of information about the manifold. % chop off corner of |_\ of CP^2 horizontally is blowup.
+
+\section{Symplectic embeddings}
+We end with a tiny section on symplectic embeddings, as promised in the course description.
+\begin{defi}[Symplectic embedding]\index{symplectic embedding}
+ A symplectic embedding is an embedding $\varphi: M_1 \hookrightarrow M_2$ such that $\varphi^* \omega_2 = \omega_1$. The notation we use is $(M_1, \omega_1) \overset{s}{\hookrightarrow} (M_2, \omega_2)$.
+\end{defi}
+
+A natural question to ask is, if we have two symplectic manifolds, is there a symplectic embedding between them?
+
+For concreteness, take $(\C^n, \omega_0) \cong (\R^{2n}, \omega_0)$, and consider the subsets $B^{2n}(r)$ and $Z^{2n}(R) = B^2(R) \times \R^{2n - 2}$ (where the product is one of symplectic manifolds). If $r \leq R$, then there is a natural inclusion of $B^{2n}(r)$ into $Z^{2n}(R)$. If we only ask for volume-preserving embeddings, then we can always embed $B^{2n}(r)$ into $Z^{2n}(R)$, since $Z^{2n}(R)$ has infinite volume. It turns out, if we require the embedding to be symplectic, we have
+\begin{thm}[Non-squeezing theorem, Gromov, 1985]\index{non-squeezing theorem}
+ There is a symplectic embedding $B^{2n}(r) \overset{s}{\hookrightarrow} Z^{2n}(R)$ iff $r \leq R$.
+\end{thm}
+
+When studying symplectic embeddings, it is natural to consider the following:
+\begin{defi}[Symplectic capacity]\index{symplectic capacity}
+ A \emph{symplectic capacity} is a function $c$ from the set of $2n$-dimensional symplectic manifolds to $[0, \infty]$ such that
+ \begin{enumerate}
+ \item Monotonicity: if $(M_1, \omega_1) \overset{s}{\hookrightarrow} (M_2, \omega_2)$, then $c(M_1, \omega_1) \leq c(M_2, \omega_2)$.
+ \item Conformality: $c(M, \lambda \omega) = \lambda c(M, \omega)$ for all $\lambda > 0$.
+ \item Non-triviality: $c(B^{2n}(1), \omega_0) > 0$ and $c(Z^{2n}(1), \omega_0) < \infty$.
+ \end{enumerate}
+ If we only have (i) and (ii), this is called a \term{generalized capacity}.
+\end{defi}
+Note that the volume, suitably normalized (taking $(\mathrm{vol})^{1/n}$ so that conformality holds), is a generalized capacity, but not a symplectic capacity, since $Z^{2n}(1)$ has infinite volume.
+
+\begin{prop}
+ The existence of a symplectic capacity is equivalent to Gromov's non-squeezing theorem.
+\end{prop}
+
+\begin{proof}
+ The $\Rightarrow$ direction is clear by monotonicity and conformality. Conversely, if we know Gromov's non-squeezing theorem, we can define the \emph{Gromov width}
+ \[
+ W_G(M, \omega) = \sup \{\pi r^2 \mid (B^{2n}(r), \omega_0) \overset{s}{\hookrightarrow} (M, \omega)\}.
+ \]
+ This clearly satisfies (i) and (ii), and (iii) follows from Gromov non-squeezing. Note that Darboux's theorem says there is always an embedding of $B^{2n}(r)$ into any symplectic manifold as long as $r$ is small enough.
+\end{proof}
+
+\printindex
+\end{document}
diff --git a/books/cam/III_L/the_standard_model.tex b/books/cam/III_L/the_standard_model.tex
new file mode 100644
index 0000000000000000000000000000000000000000..55d228533ecfc1a20a9e262b06b41acf37be78ed
--- /dev/null
+++ b/books/cam/III_L/the_standard_model.tex
@@ -0,0 +1,4275 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2017}
+\def\nlecturer {C.\ E.\ Thomas}
+\def\ncourse {The Standard Model}
+\def\nofficial {https://www.damtp.cam.ac.uk/user/cet34/teaching/SM/}
+
+\input{header}
+
+\usepackage{pifont}
+\usepackage[compat=1.1.0]{tikz-feynman}
+\tikzfeynmanset{/tikzfeynman/momentum/arrow shorten = 0.3}
+\tikzfeynmanset{/tikzfeynman/warn luatex = false}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+The Standard Model of particle physics is, by far, the most successful application of quantum field theory (QFT). At the time of writing, it accurately describes all experimental measurements involving strong, weak, and electromagnetic interactions. The course aims to demonstrate how this model, a QFT with gauge group $\SU(3) \times \SU(2) \times \U(1)$ and fermion fields for the leptons and quarks, is realised in nature. It is intended to complement the more general Advanced QFT course.
+
+We begin by defining the Standard Model in terms of its local (gauge) and global symmetries and its elementary particle content (spin-half leptons and quarks, and spin-one gauge bosons). The parity $P$, charge-conjugation $C$ and time-reversal $T$ transformation properties of the theory are investigated. These need not be symmetries manifest in nature; e.g.\ only left-handed particles feel the weak force and so it violates parity symmetry. We show how $CP$ violation becomes possible when there are three generations of particles and describe its consequences.
+
+Ideas of spontaneous symmetry breaking are applied to discuss the Higgs Mechanism and why the weakness of the weak force is due to the spontaneous breaking of the $\SU(2) \times \U(1)$ gauge symmetry. Recent measurements of what appear to be Higgs boson decays will be presented.
+
+We show how to obtain cross sections and decay rates from the matrix element squared of a process. These can be computed for various scattering and decay processes in the electroweak sector using perturbation theory because the couplings are small. We touch upon the topic of neutrino masses and oscillations, an important window to physics beyond the Standard Model.
+
+The strong interaction is described by quantum chromodynamics (QCD), the non-abelian gauge theory of the (unbroken) $\SU(3)$ gauge symmetry. At low energies quarks are confined and form bound states called hadrons. The coupling constant decreases as the energy scale increases, to the point where perturbation theory can be used. As an example we consider electron-positron annihilation to final state hadrons at high energies. Time permitting, we will discuss nonperturbative approaches to QCD. For example, the framework of effective field theories can be used to make progress in the limits of very small and very large quark masses.
+
+Both very high-energy experiments and very precise experiments are currently striving to observe effects that cannot be described by the Standard Model alone. If time permits, we comment on how the Standard Model is treated as an effective field theory to accommodate (so far hypothetical) effects beyond the Standard Model.
+
+\subsubsection*{Pre-requisites}
+It is necessary to have attended the Quantum Field Theory and the Symmetries, Fields and Particles courses, or to be familiar with the material covered in them. It would be advantageous to attend the Advanced QFT course during the same term as this course, or to study renormalisation and non-abelian gauge fixing.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In the Michaelmas Quantum Field Theory course, we studied the general theory of quantum field theory. At the end of the course, we had a glimpse of quantum electrodynamics (QED), which was an example of a quantum field theory that described electromagnetic interactions. QED was a massively successful theory that explained experimental phenomena involving electromagnetism to a very high accuracy.
+
+But of course, that is far from being a ``complete'' description of the universe. It does not tell us what ``matter'' is actually made up of, nor does it describe all sorts of other interactions that exist in the universe. For example, the atomic nucleus contains positively-charged protons and neutral neutrons. From a purely electromagnetic point of view, they should immediately blow apart. So there must be some force that holds the particles together.
+
+Similarly, in experiments, we observe that certain particles undergo decay. For example, muons may decay into electrons and neutrinos. QED doesn't explain these phenomena either.
+
+As time progressed, physicists managed to put together all sorts of experimental data, and came up with the \emph{Standard Model}. This is the best description of the universe we have, but it is lacking in many aspects. Most spectacularly, it does not explain gravity at all. There are also some details not yet fully sorted out, such as the nature of neutrinos. In this course, our objective is to understand the standard model.
+
+Perhaps to the disappointment of many readers, it will take us a while before we manage to get to the actual standard model. Instead, during the first half of the course, we are going to discuss some general theory regarding symmetries. These are crucial to the development of the theory, as symmetry concerns impose a lot of restrictions on what our theories can be. More importantly, the ``forces'' in the standard model can be (almost) completely described by the gauge group we have, which is $\SU(3) \times \SU(2) \times \U(1)$. Making sense of these ideas is crucial to understanding how the standard model works.
+
+With the machinery in place, the actual description of the Standard Model is actually pretty short. After describing the Standard Model, we will do various computations with it, and make some predictions. Such predictions were of course important --- they allowed us to verify that our theory is correct! Experimental data also helps us determine the values of the constants that appear in the theory, and our computations will help indicate how this is possible.
+
+Historically, the standard model was of course discovered experimentally, and a lot of the constructions and choices are motivated only by experimental reasons. Since we are theorists, we are not going to motivate our choices by experiments, but we will just write them down.
+
+\section{Overview}
+We begin with a quick overview of the things that exist in the standard model. This description will mostly be words, and the actual theory will have to come quite some time later.
+\begin{itemize}
+ \item Forces are mediated by spin 1 \emph{gauge bosons}. These include
+ \begin{itemize}
+ \item The \emph{electromagnetic field} (\emph{EM}), which is mediated by the \emph{photon}. This is described by \emph{quantum electrodynamics} (\emph{QED});
+ \item The \emph{weak interaction}, which is mediated by the $W^{\pm}$\index{$W^{\pm}$ boson} and $Z$\index{$Z$ boson} \emph{bosons}; and
+ \item The \emph{strong interaction}, which is mediated by \emph{gluons} $g$. This is described by \emph{quantum chromodynamics} (\emph{QCD}).
+ \end{itemize}
+ While the electromagnetic field and weak interaction seem very different, we will see that at high energies, they merge together, and can be described by a single gauge group.
+
+ \item Matter is described by spin $\frac{1}{2}$ \emph{fermions}. These are described by Dirac spinors. Roughly, we can classify them into 4 ``types'', and each type comes in 3 generations, which will be denoted G1, G2, G3 in the following table, which lists which forces they interact with, along with their charges.
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ Type & G1 & G2 & G3 & Charge & EM & Weak & Strong \\
+ \midrule
+ Charged leptons & $e$ & $\mu$ & $\tau$ & $-1$ & \ding{51} & \ding{51} & \ding{55} \\
+ Neutrinos & $\nu_e$ & $\nu_\mu$ & $\nu_\tau$ & $0$ & \ding{55} & \ding{51} & \ding{55}\\
+ \midrule
+ Positive quarks & $u$ & $c$ & $t$ & $+\frac{2}{3}$ & \ding{51} & \ding{51} & \ding{51}\\
+ Negative quarks & $d$ & $s$ & $b$ & $-\frac{1}{3}$ & \ding{51} & \ding{51} & \ding{51}\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ The first two types are known as \term{leptons}, while the latter two are known as \term{quarks}. We do not know why there are three generations. Note that a particle interacts with the electromagnetic field iff it has non-zero charge, since the charge by definition measures the strength of the interaction with the electromagnetic field.
+
+ \item There is the \emph{Higgs boson}, which has spin 0. This is responsible for giving mass to the $W^{\pm}, Z$ bosons and fermions. This was only discovered in 2012 at CERN, and subsequently more properties have been discovered, e.g.\ its spin.
+\end{itemize}
+
+As one would expect from the name, the gauge bosons are manifestations of local gauge symmetries. The gauge group in the Standard Model is
+\[
+ \SU(3)_C \times \SU(2)_L \times \U(1)_Y.
+\]
+We can talk a bit more about each component. The subscripts indicate what things the group is responsible for. The subscript on $\SU(3)_C$ means ``colour'', and this is responsible for the strong force. We will not talk about this much, because it is complicated.
+
+The remaining $\SU(2)_L \times \U(1)_Y$ bit collectively gives the electroweak interaction. This is a \emph{unified} description of electromagnetism and the weak force. These are a bit funny. The $\SU(2)_L$ is a chiral interaction. It only couples to left-handed particles, hence the $L$. The $\U(1)_Y$ is something we haven't heard of (probably), namely the hypercharge, which is conventionally denoted $Y$. Note that while electromagnetism also has a $\U(1)$ gauge group, it is different from this $\U(1)$ we see.
+
+\subsubsection*{Types of symmetry}
+One key principle guiding our study of the standard model is \emph{symmetry}. Symmetries can manifest themselves in a number of ways.
+\begin{enumerate}
+ \item We can have an \term{intact symmetry}\index{symmetry!intact}, or \term{exact symmetry}\index{symmetry!exact}. In other words, this is an actual symmetry. For example, $\U(1)_{EM}$ and $\SU(3)_C$ are exact symmetries in the standard model.
+ \item Symmetries can be broken by an \term{anomaly}. This is a symmetry that exists in the classical theory, but goes away when we quantize. Examples include global axial symmetry for massless spinor fields in the standard model.
+ \item Symmetry is explicitly broken by some terms in the Lagrangian. This is not a symmetry, but if those annoying terms are small (intentionally left vague), then we have an \term{approximate symmetry}, and it may also be useful to consider these.
+
+ For example, in the standard model, the up and down quarks are very close in mass, but not exactly the same. This gives rise to the (global) isospin symmetry.
+ \item The symmetry is respected by the Lagrangian $\mathcal{L}$, but not by the vacuum. This is a ``hidden symmetry''.
+ \begin{enumerate}
+ \item We can have a \emph{spontaneously broken symmetry}: we have a vacuum expectation value for one or more scalar fields, e.g.\ the breaking of $\SU(2)_L \times \U(1)_Y$ into $\U(1)_{EM}$.
+ \item Even without scalar fields, we can get \term{dynamical symmetry breaking} from quantum effects. An example of this in the standard model is the $\SU(2)_L \times \SU(2)_R$ global symmetry in the strong interaction.
+ \end{enumerate}
+\end{enumerate}
+One can argue that (i) is the only case where we actually have a symmetry, but the others are useful to consider as well, and we will study them.
+
+\section{Chiral and gauge symmetries}
+We begin with the discussion of chiral and gauge symmetries. These concepts should all be familiar from the Michaelmas QFT course. But it is beneficial to do a bit of review here. While doing so, we will set straight our sign and notational conventions, and we will also highlight the important parts in the theory.
+
+As always, we will work in natural units $c = \hbar = 1$, and the sign convention is $(+,-,-,-)$.
+
+\subsection{Chiral symmetry}
+Chiral symmetry is something that manifests itself when we have spinors. Since all matter fields are spinors, this is clearly important.
+
+The notion of chirality is something that exists classically, and we shall begin by working classically. A Dirac spinor is, in particular, a (4-component) spinor field. As usual, the \term{Dirac matrices}\index{$\gamma^\mu$} $\gamma^\mu$ are $4 \times 4$ matrices satisfying the \term{Clifford algebra} relations
+\[
+ \{\gamma^\mu, \gamma^\nu\} = 2 g^{\mu\nu} I,
+\]
+where $g^{\mu\nu}$ is the Minkowski metric. For any operator $A_\mu$, we write
+\[
+ \slashed A = \gamma^\mu A_\mu.
+\]
+Then a \term{Dirac fermion} is defined to be a spinor field satisfying the Dirac equation
+\[
+ (i \slashed{\partial} - m) \psi = 0.
+\]
+We define\index{$\gamma^5$}
+\[
+ \gamma^5 = +i \gamma^0 \gamma^1 \gamma^2 \gamma^3,
+\]
+which satisfies
+\[
+ (\gamma^5)^2 = I,\quad \{\gamma^5, \gamma^\mu\} = 0.
+\]
+One can do a lot of stuff without choosing a particular basis/representation for the $\gamma$-matrices, and the physics we get out must be the same regardless of which representation we choose, but sometimes it is convenient to pick some particular representation to work with. We'll generally use the \term{chiral representation} (or \term{Weyl representation}), with
+\[
+ \gamma^0 =
+ \begin{pmatrix}
+ 0 & 1 \\
+ 1 & 0
+ \end{pmatrix}, \quad
+ \gamma^i =
+ \begin{pmatrix}
+ 0 & \sigma^i\\
+ -\sigma^i & 0
+ \end{pmatrix},\quad
+ \gamma^5 =
+ \begin{pmatrix}
+ -1 & 0\\
+ 0 & 1
+ \end{pmatrix},
+\]
+where the $\sigma^i \in \Mat_2(\C)$ are the \term{Pauli matrices}.
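All of these identities can be verified by direct computation in the chiral representation. The following numerical sketch (ours, purely illustrative) builds the matrices and checks the Clifford algebra relations together with the properties of $\gamma^5$:

```python
import numpy as np

# Build the chiral-representation gamma matrices from the Pauli matrices and
# verify {gamma^mu, gamma^nu} = 2 g^{mu nu} I and the gamma^5 identities.
# Illustrative sketch only, not part of the course notes.

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

Z2 = np.zeros((2, 2))
gamma = [np.block([[Z2, I2], [I2, Z2]])]                       # gamma^0
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]        # gamma^i

g = np.diag([1.0, -1.0, -1.0, -1.0])                           # (+, -, -, -)
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * g[mu, nu] * np.eye(4))    # Clifford algebra
    assert np.allclose(gamma5 @ gamma[mu] + gamma[mu] @ gamma5, 0)

assert np.allclose(gamma5 @ gamma5, np.eye(4))
assert np.allclose(gamma5, np.diag([-1, -1, 1, 1]))            # matches the notes
print("Clifford algebra and gamma^5 identities verified")
```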
+
+\subsubsection*{Chirality}
+$\gamma^5$ is a particularly interesting matrix to consider. We saw that it satisfies $(\gamma^5)^2 = 1$. So we know that $\gamma^5$ is diagonalizable, and the eigenvalues of $\gamma^5$ are either $+1$ or $-1$.
+\begin{defi}[Chirality]\index{chirality}\index{right-handed fermion}\index{left-handed fermion}\index{fermion!left handed}\index{fermion!right handed}
+ A Dirac fermion $\psi$ is \emph{right-handed} if $\gamma^5 \psi = \psi$, and \emph{left-handed} if $\gamma^5 \psi =- \psi$.
+
+ A left- or right-handed fermion is said to have \emph{definite chirality}.
+\end{defi}
+In general, a fermion need not have definite chirality. However, as we know from linear algebra, the spinor space is a direct sum of the eigenspaces of $\gamma^5$. So given any spinor $\psi$, we can write it as
+\[
+ \psi = \psi_L + \psi_R,
+\]
+where $\psi_L$ is left-handed, and $\psi_R$ is right-handed.
+
+It is not hard to find these $\psi_L$ and $\psi_R$. We define the \term{projection operators}\index{$P_R$}\index{$P_L$}
+\[
+ P_R = \frac{1}{2} (1 + \gamma^5),\quad P_L = \frac{1}{2} (1 - \gamma^5).
+\]
+It is a direct computation to show that
+\[
+ \gamma^5 P_L = - P_L,\quad \gamma^5 P_R = P_R,\quad (P_{R, L})^2 = P_{R, L},\quad P_L + P_R = I.
+\]
+Thus, we can write
+\[
+ \psi = (P_L + P_R) \psi = (P_L \psi) + (P_R \psi),
+\]
+and thus we have
+\begin{notation}
+ \[
+ \psi_L = P_L \psi,\quad \psi_R = P_R \psi.
+ \]
+\end{notation}
+Moreover, we notice that
+\[
+ P_L P_R = P_R P_L = 0.
+\]
+This implies
+\begin{lemma}
+ If $\psi_L, \phi_L$ are left-handed and $\psi_R, \phi_R$ are right-handed, then
+ \[
+ \bar\psi_L \phi_L = \bar\psi_R \phi_R = 0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We only do the left-handed case.
+ \begin{align*}
+ \bar\psi_L \phi_L &= \psi_L^\dagger \gamma^0 \phi_L\\
+ &= (P_L \psi_L)^\dagger \gamma^0 (P_L \phi_L)\\
+ &= \psi_L^\dagger P_L \gamma^0 P_L \phi_L\\
+ &= \psi_L^\dagger P_L P_R \gamma^0 \phi_L\\
+ &= 0,
+ \end{align*}
+ using the fact that $\{\gamma^5, \gamma^0\} = 0$.
+\end{proof}
+The projection operators look very simple in the chiral representation. They simply look like
+\[
+ P_L =
+ \begin{pmatrix}
+ I & 0\\
+ 0 & 0
+ \end{pmatrix},\quad
+ P_R =
+ \begin{pmatrix}
+ 0 & 0\\
+ 0 & I
+ \end{pmatrix}.
+\]
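In this representation it is also straightforward to verify the projector identities numerically, including the relation $P_L \gamma^0 = \gamma^0 P_R$ underlying the proof of the lemma above. The sketch below is illustrative only:

```python
import numpy as np

# Check the projection operator identities in the chiral representation,
# where gamma^5 = diag(-1, -1, 1, 1).  Illustrative sketch, not course text.

gamma5 = np.diag([-1.0, -1.0, 1.0, 1.0])
gamma0 = np.block([[np.zeros((2, 2)), np.eye(2)],
                   [np.eye(2), np.zeros((2, 2))]])

PL = 0.5 * (np.eye(4) - gamma5)
PR = 0.5 * (np.eye(4) + gamma5)

assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)   # idempotent
assert np.allclose(PL @ PR, 0) and np.allclose(PR @ PL, 0)     # orthogonal
assert np.allclose(PL + PR, np.eye(4))                         # complete
assert np.allclose(gamma5 @ PL, -PL) and np.allclose(gamma5 @ PR, PR)
assert np.allclose(PL @ gamma0, gamma0 @ PR)   # projectors hop over gamma^0
assert np.allclose(PL @ gamma0 @ PL, 0)        # the vanishing used in the lemma
print("projection operator identities verified")
```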
+Now we have produced these $\psi_L$ and $\psi_R$, but what are they? In particular, are they themselves Dirac spinors? We notice that the anti-commutation relations give
+\[
+ \slashed \partial \gamma^5 \psi = -\gamma^5 \slashed \partial \psi.
+\]
+Thus, $\gamma^5$ does not commute with the Dirac operator $(i\slashed \partial - m)$, and one can check that in general, $\psi_L$ and $\psi_R$ do not satisfy the Dirac equation. But there is one exception. If the spinor is massless, so $m = 0$, then the Dirac equation becomes
+\[
+ \slashed \partial \psi = 0.
+\]
+If this holds, then we also have $\slashed \partial \gamma^5 \psi = - \gamma^5 \slashed \partial \psi = 0$. In other words, $\gamma^5 \psi$ is still a Dirac spinor, and hence
+\begin{prop}
+ If $\psi$ is a massless Dirac spinor, then so are $\psi_L$ and $\psi_R$.
+\end{prop}
+More generally, if the mass is non-zero, then the Lagrangian is given by
+\[
+ \mathcal{L} = \bar\psi (i \slashed{\partial} - m) \psi = \bar\psi_L i \slashed{\partial} \psi_L + \bar\psi_R i \slashed{\partial} \psi_R - m (\bar\psi_L \psi_R + \bar\psi_R \psi_L).
+\]
+In general, it ``makes sense'' to treat $\psi_L$ and $\psi_R$ as separate objects only if the spinor field is massless. This is crucial. As mentioned in the overview, the weak interaction only couples to left-handed fermions. For this to ``make sense'', the fermions must be massless! But the electrons and other fermions we know and love are \emph{not} massless. Thus, their masses cannot have come from a direct mass term in the Lagrangian. They must obtain mass via some other mechanism, namely the \emph{Higgs mechanism}.
+
+To see an example of how the mass makes such a huge difference, by staring at the Lagrangian, we notice that if the fermion is massless, then we have a $\U(1)_L \times \U(1)_R$ global symmetry --- under an element $(\alpha_L, \alpha_R) \in \U(1)_L \times \U(1)_R$, the fermion transforms as
+\[
+ \begin{pmatrix}
+ \psi_L\\
+ \psi_R
+ \end{pmatrix} \mapsto
+ \begin{pmatrix}
+ e^{i\alpha_L} \psi_L\\
+ e^{i\alpha_R} \psi_R
+ \end{pmatrix}.
+\]
+The adjoint field transforms as
+\[
+ \begin{pmatrix}
+ \bar{\psi}_L\\
+ \bar{\psi}_R
+ \end{pmatrix} \mapsto
+ \begin{pmatrix}
+ e^{-i\alpha_L} \bar{\psi}_L\\
+ e^{-i\alpha_R} \bar{\psi}_R
+ \end{pmatrix},
+\]
+and we see that the Lagrangian is invariant. However, if we had a massive particle, then we would have the cross terms in the Lagrangian. The only way for the Lagrangian to remain invariant is if $\alpha_L = \alpha_R$, and the symmetry has reduced to a single $\U(1)$ symmetry.
+
+\subsubsection*{Quantization of Dirac field}
+Another important difference the mass makes is the notion of helicity. This is a quantum phenomenon, so we need to describe the quantum theory of Dirac fields. When we quantize the Dirac field, we can decompose it as
+\[
+ \psi = \sum_{s, p}\left[b^s (p) u^s(p) e^{-ip\cdot x} + d^{s\dagger}(p) v^s(p) e^{+ip\cdot x}\right].
+\]
+We explain these things term by term:
+\begin{itemize}
+ \item $s$ is the spin, and takes values $s = \pm \frac{1}{2}$.
+ \item The summation over all $p$ is actually an integral
+ \[
+ \sum_p = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3 (2E_\mathbf{p})}.
+ \]
+ \item $b^\dagger$ and $d^\dagger$ are operators that create particles and anti-particles respectively. We use relativistic normalization, where the states
+ \[
+ \bket{p} = b^\dagger(p) \bket{0}
+ \]
+ satisfy
+ \[
+ \braket{p}{q} = (2\pi)^3 2E_p \delta^{(3)} (\mathbf{p} - \mathbf{q}).
+ \]
+ \item The $u^s(p)$ and $v^s(p)$ form a basis of the solution space to the (classical) Dirac equation, so that
+ \[
+ u^s(p) e^{-ip\cdot x},\quad v^s(p) e^{ip\cdot x}
+ \]
+ are solutions for any $p$ and $s$. In the chiral representation, we can write them as
+ \[
+ u^s(p) =
+ \begin{pmatrix}
+ \sqrt{p \cdot \sigma} \xi^s\\
+ \sqrt{p \cdot \bar\sigma} \xi^s
+ \end{pmatrix},\quad
+ v^s(p) =
+ \begin{pmatrix}
+ \sqrt{p \cdot \sigma} \eta^s\\
+ -\sqrt{p \cdot \bar\sigma} \eta^s
+ \end{pmatrix},
+ \]
+ where as usual
+ \[
+ \sigma^\mu = (I, \sigma^i),\quad \bar{\sigma}^\mu = (I, - \sigma^i),
+ \]
+ and $\{\xi^{\pm \frac{1}{2}}\}$ and $\{\eta^{\pm \frac{1}{2}}\}$ are bases for $\C^2$.
+\end{itemize}
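As a quick numerical sanity check (not part of the notes), one can verify that these $u^s(p)$ really do solve the massless Dirac equation $\slashed{p}\, u^s(p) = 0$. The sketch below picks the momentum $p = (E, 0, 0, E)$ along the $z$-axis, where $p \cdot \sigma$ and $p \cdot \bar\sigma$ are diagonal, so the matrix square roots are immediate; the Peskin-style chiral representation used here is an assumed convention.

```python
# Chiral-representation gamma matrices built from Pauli matrices.
# The specific sign conventions (Peskin-style) are an assumption.
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sig = [[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]]

def blk(a, b, c, d):  # 4x4 matrix from 2x2 blocks [[a, b], [c, d]]
    return [list(map(complex, a[i] + b[i])) for i in range(2)] \
         + [list(map(complex, c[i] + d[i])) for i in range(2)]

def sc(z, A):  # scalar multiple of a matrix
    return [[z * x for x in row] for row in A]

g0 = blk(Z2, I2, I2, Z2)                       # gamma^0
gs = [blk(Z2, s, sc(-1, s), Z2) for s in sig]  # gamma^1, gamma^2, gamma^3

def mv(A, v):  # matrix-vector product
    return [sum(A[i][j] * v[j] for j in range(4)) for i in range(4)]

# For p = (E, 0, 0, E): p.sigma = diag(0, 2E) and p.sigmabar = diag(2E, 0),
# so sqrt(p.sigma) xi and sqrt(p.sigmabar) xi are easy to write down.
E = 2.0
r = (2 * E) ** 0.5
u = {+0.5: [0, 0, r, 0],   # xi = (1, 0)
     -0.5: [0, r, 0, 0]}   # xi = (0, 1)

# slashed p = gamma^0 p_0 + gamma^3 p_3 = E (gamma^0 - gamma^3)
pslash = [[E * (g0[i][j] - gs[2][i][j]) for j in range(4)] for i in range(4)]
for s in (+0.5, -0.5):
    assert all(abs(x) < 1e-12 for x in mv(pslash, u[s]))
```

Both spin states are annihilated by $\slashed p$, as required of massless solutions.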
+
+We can define a quantum operator corresponding to the chirality, known as \emph{helicity}.
+\begin{defi}[Helicity]\index{helicity}
+ We define the \emph{helicity} to be the projection of the angular momentum onto the direction of the linear momentum:
+ \[
+ h = \mathbf{J} \cdot \hat{\mathbf{p}} = \mathbf{S} \cdot \hat{\mathbf{p}},
+ \]
+ where
+ \[
+ \mathbf{J} = -i \mathbf{r} \times \nabla + \mathbf{S}
+ \]
+ is the total angular momentum, and $\mathbf{S}$ is the spin operator given by
+ \[
+ S_i = \frac{i}{4} \varepsilon_{ijk} \gamma^j \gamma^k = \frac{1}{2}
+ \begin{pmatrix}
+ \sigma^i & 0\\
+ 0 & \sigma^i
+ \end{pmatrix}.
+ \]
+\end{defi}
+
+The main claim about helicity is that for a massless spinor, it reduces to the chirality, in the following sense:
+\begin{prop}
+ If we have a massless spinor $u$, then
+ \[
+ hu(p) = \frac{\gamma^5}{2} u(p).
+ \]
+\end{prop}
+
+\begin{proof}
+ Note that if we have a massless particle, then we have
+ \[
+ \slashed p u = 0,
+ \]
+ since quantumly, $p$ is just given by differentiation. We write this out explicitly to see
+ \[
+ \gamma^\mu p_\mu u = (\gamma^0 p^0 - \boldsymbol \gamma \cdot \mathbf{p})u = 0.
+ \]
+ Multiplying it by $\gamma^5 \gamma^0/p^0$ gives
+ \[
+ \gamma^5 u(p) = \gamma^5 \gamma^0 \gamma^i \frac{p^i}{p^0} u(p).
+ \]
+ Again since the particle is massless, we know
+ \[
+ (p^0)^2 - \mathbf{p}\cdot \mathbf{p} = 0.
+ \]
+ So $\hat{\mathbf{p}} = \mathbf{p}/p^0$. Also, by direct computation, we find that
+ \[
+ \gamma^5 \gamma^0 \gamma^i = 2 S^i.
+ \]
+ So it follows that
+ \[
+ \gamma^5 u(p) = 2 h u(p).\qedhere
+ \]
+\end{proof}
+In particular, we have
+\[
+ h u_{L, R} = \frac{\gamma^5}{2} u_{L, R} = \mp \frac{1}{2} u_{L, R}.
+\]
+So $u_{L, R}$ has helicity $\mp\frac{1}{2}$.
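The two matrix identities used in the proof, namely $S_i = \frac{i}{4}\varepsilon_{ijk}\gamma^j\gamma^k = \frac{1}{2}\operatorname{diag}(\sigma^i, \sigma^i)$ and $\gamma^5\gamma^0\gamma^i = 2S^i$, can be checked numerically. The sketch below (an editorial check, not from the notes) assumes the Peskin-style chiral representation with $\gamma^5 = i\gamma^0\gamma^1\gamma^2\gamma^3$; with the opposite sign convention for $\gamma^5$, the second identity picks up a sign.

```python
# Chiral-representation gamma matrices (Peskin-style conventions assumed).
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sig = [[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]]

def blk(a, b, c, d):  # 4x4 matrix from 2x2 blocks [[a, b], [c, d]]
    return [list(map(complex, a[i] + b[i])) for i in range(2)] \
         + [list(map(complex, c[i] + d[i])) for i in range(2)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def sc(z, A):
    return [[z * x for x in row] for row in A]

def eq(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(4) for j in range(4))

g0 = blk(Z2, I2, I2, Z2)
gs = [blk(Z2, s, sc(-1, s), Z2) for s in sig]
g5 = sc(1j, mul(mul(g0, gs[0]), mul(gs[1], gs[2])))  # i g^0 g^1 g^2 g^3

eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}

def spin(i):  # S^i = (i/4) eps_{ijk} gamma^j gamma^k
    S = [[0j] * 4 for _ in range(4)]
    for (a, b, c), sgn in eps.items():
        if a != i:
            continue
        P = mul(gs[b], gs[c])
        for row in range(4):
            for col in range(4):
                S[row][col] += 0.25j * sgn * P[row][col]
    return S

for i in range(3):
    Si = spin(i)
    assert eq(Si, sc(0.5, blk(sig[i], Z2, Z2, sig[i])))  # (1/2) diag(sigma^i, sigma^i)
    assert eq(mul(g5, mul(g0, gs[i])), sc(2, Si))        # gamma^5 gamma^0 gamma^i = 2 S^i
```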
+
+This is another crucial observation. Helicity is the spin in the direction of the momentum, and spin is what has to be conserved. On the other hand, chirality is what determines whether a particle interacts with the weak force. For massless particles, these two notions coincide. Thus, the \emph{spin} of the particles participating in weak interactions is constrained by the fact that the weak force only couples to left-handed particles. Consequently, spin conservation will forbid certain interactions from happening.
+
+However, once our particles have mass, helicity and chirality no longer coincide. Helicity is still quite closely related to chirality, at least when the mass is small. So the interactions that were previously forbidden become merely unlikely, rather than impossible.
+
+\subsection{Gauge symmetry}
+Another important aspect of the Standard Model is the notion of a gauge symmetry. Classically, the Dirac equation has the gauge symmetry
+\[
+ \psi(x) \mapsto e^{i\alpha}\psi(x)
+\]
+for any constant $\alpha$, i.e.\, this transformation leaves all observable physics unchanged. However, if we allow $\alpha$ to vary with $x$, then unsurprisingly, the kinetic term in the Dirac Lagrangian is no longer invariant. In particular, it transforms as
+\[
+ \bar\psi i \slashed\partial \psi \mapsto \bar\psi i \slashed\partial \psi - \bar\psi \gamma^\mu \psi \partial_\mu \alpha (x).
+\]
+To fix this problem, we introduce a gauge covariant derivative $\D_\mu$ that transforms as
+\[
+ \D_\mu \psi(x) \mapsto e^{i\alpha (x)} \D_\mu \psi(x).
+\]
+Then we find that $\bar\psi i \slashed \D \psi$ transforms as
+\[
+ \bar\psi i \slashed \D \psi \mapsto \bar\psi i \slashed \D \psi.
+\]
+So if we replace every $\partial_\mu$ with $\D_\mu$ in the Lagrangian, then we obtain a gauge invariant theory.
+
+To do this, we introduce a gauge field $A_\mu(x)$, and then define
+\[
+ \D_\mu \psi(x) = (\partial_\mu + i g A_\mu) \psi(x).
+\]
+We then assert that under a gauge transformation $\alpha(x)$, the gauge field $A_\mu$ transforms as
+\[
+ A_\mu \mapsto A_\mu - \frac{1}{g} \partial_\mu \alpha(x).
+\]
+It is then a routine exercise to check that $\D_\mu$ transforms as claimed.
+
+If we want to think of $A_\mu(x)$ as some physical field, then it should have a kinetic term. The canonical choice is
+\[
+ \mathcal{L}_G = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu},
+\]
+where
+\[
+ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu = \frac{1}{ig} [\D_\mu, \D_\nu].
+\]
+We call this a $\U(1)$ gauge theory, because $e^{i\alpha}$ is an element of $\U(1)$. Officially, $A_\mu$ is an element of the Lie algebra $\uu(1)$, but it is isomorphic to $\R$, so we did not bother to make this distinction.
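The gauge invariance of $F_{\mu\nu}$ can be seen directly: the transformation shifts $A_\nu$ by $-\frac{1}{g}\partial_\nu \alpha$, and the correction $\partial_\mu \partial_\nu \alpha$ is symmetric in $\mu\nu$, so it cancels in the antisymmetrized derivative. A small numerical sketch confirms this with finite differences; the particular field, gauge parameter and coupling below are arbitrary illustrative choices, not from the notes.

```python
import math

g = 0.7   # gauge coupling (arbitrary nonzero value for the check)
h = 1e-4  # finite-difference step

def A(mu, t, x):  # an arbitrary smooth abelian gauge field on (t, x)
    return math.sin(x) * t if mu == 0 else math.cos(t) + 0.5 * x * x

def alpha(t, x):  # an arbitrary smooth gauge parameter
    return math.exp(0.3 * t) * math.sin(2.0 * x)

def d(mu, f, t, x):  # central-difference approximation to partial_mu f
    return ((f(t + h, x) - f(t - h, x)) / (2 * h) if mu == 0
            else (f(t, x + h) - f(t, x - h)) / (2 * h))

def F01(Af, t, x):  # F_{01} = d_0 A_1 - d_1 A_0
    return (d(0, lambda s, y: Af(1, s, y), t, x)
            - d(1, lambda s, y: Af(0, s, y), t, x))

def A_gauged(mu, t, x):  # A_mu -> A_mu - (1/g) d_mu alpha
    return A(mu, t, x) - d(mu, alpha, t, x) / g

t0, x0 = 0.4, -1.1
assert abs(F01(A, t0, x0) - F01(A_gauged, t0, x0)) < 1e-5
```

The field strength at the sample point agrees before and after the gauge transformation to within finite-difference error.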
+
+What we have here is a rather simple overview of how gauge theory works. In reality, the weak force couples only to left-handed fields, and we have to modify the construction accordingly.
+
+Moreover, once we step out of the world of electromagnetism, we have to work with a more complicated gauge group. In particular, the gauge group will be non-abelian. Thus, the Lie algebra $\mathfrak{g}$ has a non-trivial bracket, and it turns out the right general formulation should include some brackets in $F_{\mu\nu}$ and the transformation rule for $A_\mu$. We will leave these for a later time.
+
+\section{Discrete symmetries}
+We are familiar with the fact that physics is invariant under Lorentz transformations and translations. These were relatively easy to understand, because they are ``continuous symmetries''. It is possible to ``deform'' any such transformation continuously (and even smoothly) to the identity transformation, and thus to understand these transformations, it often suffices to understand them ``infinitesimally''.
+
+There is also a ``trivial'' reason why these are easy to understand --- the ``types'' of fields we have, namely vector fields, scalar fields etc.\ are \emph{defined} by how they transform under change of coordinates. Consequently, by \emph{definition}, we know how vector fields transform under Lorentz transformations.
+
+In this chapter, we are going to study \emph{discrete} symmetries. These cannot be understood by such means, and we need to do a bit more work to understand them. It is important to note that these discrete ``symmetries'' aren't actually symmetries of the universe. Physics is \emph{not} invariant under these transformations. However, it is still important to understand them, and in particular understand how they fail to be symmetries.
+
+We can briefly summarize the three discrete symmetries we are interested in as follows:
+\begin{itemize}
+ \item \term{Parity} (\term{P}): $(t, \mathbf{x}) \mapsto (t, -\mathbf{x})$
+ \item \term{Time-reversal} (\term{T}): $(t, \mathbf{x}) \mapsto (-t, \mathbf{x})$
+ \item \term{Charge conjugation} (\term{C}): This sends particles to anti-particles and vice versa.
+\end{itemize}
+Of course, we can also perform combinations of these. For example, CP corresponds to first applying the parity transformation, and then applying charge conjugation. It turns out none of these are symmetries of the universe. Even worse, no combination of two of these transformations is a symmetry. We will discuss these violations later on as we develop our theory.
+
+Fortunately, the combination of all three, namely CPT, \emph{is} a symmetry of the universe. This is not (just) an experimental observation. It is possible to prove (in some sense) that any (sensible) quantum field theory must be invariant under CPT, and this is known as the \emph{CPT theorem}.
+
+Nevertheless, for the purposes of this chapter, we will assume that C, P, T are indeed symmetries, and try to derive some consequences assuming this were the case.
+
+The above description of P and T tells us how the \emph{universe} transforms under the transformations, and the description of the charge conjugation is just some vague words. The goal of this chapter is to figure out what exactly these transformations do to our fields, and, ultimately, what they do to the $S$-matrix.
+
+Before we begin, it is convenient to rephrase the definition of P and T as follows. A general Poincar\'e transformation can be written as a map
+\[
+ x^\mu \mapsto x'^\mu = \Lambda^\mu\!_\nu x^\nu + a^\mu.
+\]
+A \term{proper Lorentz transform}\index{Lorentz transform!proper} has $\det \Lambda = +1$. The transforms given by parity and time reversal are \emph{improper} transformations\index{improper Lorentz transform}\index{Lorentz transform!improper}, and are given by
+\begin{defi}[Parity transform]\index{parity transform}
+ The \emph{parity transform} is given by
+ \[
+ \Lambda^\mu\!_\nu = \Prob^\mu\!_\nu =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & -1 & 0 & 0\\
+ 0 & 0 & -1 & 0\\
+ 0 & 0 & 0 & -1
+ \end{pmatrix}.
+ \]
+\end{defi}
+
+
+\begin{defi}[Time reversal transform]\index{time reversal transform}
+ The \emph{time reversal transform} is given by
+ \[
+ \mathbb{T}^\mu\!_\nu =
+ \begin{pmatrix}
+ -1 & 0 & 0 & 0\\
+ 0 & 1 & 0 & 0\\
+ 0 & 0 & 1 & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix}.
+ \]
+\end{defi}
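Both matrices preserve the Minkowski metric and have determinant $-1$, which is precisely the statement that they are (improper) Lorentz transformations. A tiny numerical check, not part of the notes:

```python
# Minkowski metric eta = diag(1, -1, -1, -1); P and T as defined above.
eta = [[(1 if i == 0 else -1) if i == j else 0 for j in range(4)] for i in range(4)]
P = [row[:] for row in eta]             # parity: diag(1, -1, -1, -1)
T = [[-x for x in row] for row in eta]  # time reversal: diag(-1, 1, 1, 1)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def det_diag(A):  # all matrices here are diagonal
    d = 1
    for i in range(4):
        d *= A[i][i]
    return d

I4 = [[int(i == j) for j in range(4)] for i in range(4)]
for L in (P, T):
    Lt = [[L[j][i] for j in range(4)] for i in range(4)]
    assert mul(L, L) == I4              # each is an involution
    assert mul(Lt, mul(eta, L)) == eta  # Lambda^T eta Lambda = eta: a Lorentz transform
    assert det_diag(L) == -1            # improper: det Lambda = -1
```

Note that the composite PT has determinant $+1$ but still flips the sign of $x^0$, so properness alone does not capture time orientation.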
+
+\subsection{Symmetry operators}
+How do these transformations of the universe relate to our quantum-mechanical theories? In quantum mechanics, we have a state space $\mathcal{H}$, and the fields are operators $\phi(t, \mathbf{x}): \mathcal{H} \to \mathcal{H}$ for each $(t, \mathbf{x}) \in \R^{1, 3}$. Unfortunately, we do not have a general theory of how we can turn a classical theory into a quantum one, so we need to make some (hopefully natural) assumptions about how this process behaves.
+
+The first assumption is absolutely crucial to get ourselves going.
+\begin{assumption}
+ Any classical transformation of the universe (e.g.\ P or T) gives rise to some function $f: \mathcal{H} \to \mathcal{H}$.
+\end{assumption}
+In general $f$ need not be a linear map. However, it turns out if the transformation is in fact a symmetry of the theory, then \emph{Wigner's theorem} forces a lot of structure on $f$.
+
+Roughly, Wigner's theorem says the following --- if a function $f: \mathcal{H} \to \mathcal{H}$ is a symmetry, then it is linear and unitary, or anti-linear and anti-unitary.
+\begin{defi}[Linear and anti-linear map]\index{linear map}\index{anti-linear map}
+ Let $\mathcal{H}$ be a Hilbert space. A function $f: \mathcal{H} \to \mathcal{H}$ is \emph{linear} if
+ \[
+ f(\alpha \Phi + \beta \Psi) = \alpha f(\Phi) + \beta f(\Psi)
+ \]
+ for all $\alpha, \beta \in \C$ and $\Phi, \Psi \in \mathcal{H}$. A map is \emph{anti-linear} if
+ \[
+ f(\alpha \Phi + \beta \Psi) = \alpha^* f(\Phi) + \beta^* f(\Psi).
+ \]
+\end{defi}
+\begin{defi}[Unitary and anti-unitary map]\index{unitary map}\index{anti-unitary map}
+ Let $\mathcal{H}$ be a Hilbert space, and $f: \mathcal{H} \to \mathcal{H}$ a linear map. Then $f$ is \emph{unitary} if
+ \[
+ \bra f \Phi, f \Psi\ket = \bra \Phi, \Psi\ket
+ \]
+ for all $\Phi, \Psi \in \mathcal{H}$.
+
+ If $f: \mathcal{H} \to \mathcal{H}$ is anti-linear, then it is \emph{anti-unitary} if
+ \[
+ \bra f \Phi, f \Psi\ket = \bra \Phi, \Psi\ket^*.
+ \]
+\end{defi}
+But to state the theorem precisely, we need to be a bit more careful. In quantum mechanics, we consider two (normalized) states to be equivalent if they differ by a phase. Thus, we need the following extra assumption on our function $f$:
+\begin{assumption}
+ In the previous assumption, we further assume that if $\Phi, \Psi \in \mathcal{H}$ differ by a phase, then $f\Phi$ and $f\Psi$ differ by a phase.
+\end{assumption}
+Now what does it mean for something to ``preserve physics''? In quantum mechanics, the physical properties are obtained by taking inner products. So we want to require $f$ to preserve inner products. But this is not quite what we want, because states are only defined up to a phase. So we should only care about inner products defined up to a phase. Note that this in particular implies $f$ preserves normalization.
+
+Thus, we can state Wigner's theorem as follows:
+\begin{thm}[Wigner's theorem]\index{Wigner's theorem} % check this
+ Let $\mathcal{H}$ be a Hilbert space, and $f: \mathcal{H} \to \mathcal{H}$ be a bijection such that
+ \begin{itemize}
+ \item If $\Phi, \Psi \in \mathcal{H}$ differ by a phase, then $f\Phi$ and $f\Psi$ differ by a phase.
+ \item For any $\Phi, \Psi \in \mathcal{H}$, we have
+ \[
+ |\bra f\Phi, f\Psi\ket| = |\bra \Phi, \Psi\ket|.
+ \]
+ \end{itemize}
+ Then there exists a map $W: \mathcal{H} \to \mathcal{H}$ that is either linear and unitary, or anti-linear and anti-unitary, such that for all $\Phi \in \mathcal{H}$, we have that $W\Phi$ and $f\Phi$ differ by a phase.
+\end{thm}
+
+For all practical purposes, this means we can assume the transformations C, P and T are given by unitary or anti-unitary operators.
+
+We will write $\hat{C}$, $\hat{P}$ and $\hat{T}$ for the (anti-)unitary transformations induced on the Hilbert space by the C, P, T transformations. We also write $W(\Lambda, a)$ for the induced transformation from the (not necessarily proper) Poincar\'e transformation
+\[
+ x^\mu \mapsto x'^\mu = \Lambda^\mu\!_\nu x^\nu + a^\mu.
+\]
+We want to understand which are unitary and which are anti-unitary. We will assume that these $W(\Lambda, a)$ ``compose properly'', i.e.\ that
+\[
+ W(\Lambda_2, a_2)W(\Lambda_1, a_1) = W(\Lambda_2 \Lambda_1, \Lambda_2 a_1 + a_2).
+\]
+Moreover, we assume that for infinitesimal transformations
+\[
+ \Lambda^\mu\!_\nu = \delta^\mu\!_\nu + \omega^\mu\!_\nu,\quad a^\mu = \varepsilon^\mu,
+\]
+where $\omega$ and $\varepsilon$ are small parameters, we can expand
+\[
+ W = W(\Lambda, a) = W(I + \omega, \varepsilon) = 1 + \frac{i}{2} \omega_{\mu\nu} J^{\mu\nu} + i \varepsilon_\mu P^\mu,
+\]
+where $J^{\mu\nu}$ are the operators generating rotations and boosts, and $P^\mu$ are the operators generating translations. In particular, $P^0$ is the Hamiltonian.
+
+Of course, we cannot write parity and time reversal in this form, because they are discrete symmetries, but we can look at what happens when we combine these transformations with infinitesimal ones.
+
+By assumption, we have
+\[
+ \hat{P} = W(\Prob, 0),\quad \hat{T} = W(\mathbb{T}, 0).
+\]
+Then from the composition rule, we expect
+\begin{align*}
+ \hat{P} W \hat{P}^{-1} &= W(\Prob \Lambda \Prob^{-1}, \Prob a)\\
+ \hat{T} W \hat{T}^{-1} &= W(\mathbb{T} \Lambda \mathbb{T}^{-1}, \mathbb{T} a).
+\end{align*}
+Inserting expansions for $W$ in terms of $\omega$ and $\varepsilon$ on both sides, and comparing coefficients of $\varepsilon_0$, we find
+\begin{align*}
+ \hat{P} i H \hat{P}^{-1} &= iH\\
+ \hat{T} iH \hat{T}^{-1} &= -iH.
+\end{align*}
+So $iH$ and $\hat{P}$ commute, but $iH$ and $\hat{T}$ \emph{anti-commute}.
+
+To proceed further, we need to make the following assumption, which is a natural one to make if we believe P and T are symmetries.
+\begin{assumption}
+ The transformations $\hat{P}$ and $\hat{T}$ send an energy eigenstate of energy $E$ to an energy eigenstate of energy $E$.
+\end{assumption}
+From this, it is easy to figure out whether the maps $\hat{P}$ and $\hat{T}$ should be unitary or anti-unitary.
+
+Indeed, consider any normalized energy eigenstate $\Psi$ with energy $E \not = 0$. Then by definition, we have
+\[
+ \bra \Psi, i H \Psi\ket = iE.
+\]
+Then since $\hat{P}\Psi$ is also an energy eigenstate of energy $E$, we know
+\[
+ iE = \bra \hat{P} \Psi, iH \hat{P} \Psi\ket = \bra \hat{P} \Psi, \hat{P} iH \Psi \ket.
+\]
+In other words, we have
+\[
+ \bra \hat{P} \Psi, \hat{P} iH \Psi \ket = \bra \Psi, iH \Psi \ket = iE.
+\]
+So $\hat{P}$ must be unitary.
+
+On the other hand, we have
+\[
+ iE = \bra \hat{T} \Psi, iH \hat{T} \Psi\ket = -\bra \hat{T} \Psi, \hat{T} iH \Psi \ket.
+\]
+In other words, we obtain
+\[
+ \bra \hat{T} \Psi, \hat{T} iH \Psi \ket = -\bra \Psi, iH \Psi \ket = \bra \Psi, iH\Psi\ket^* = -iE.
+\]
+Thus, it follows that $\hat{T}$ must be anti-unitary. It is a fact that $\hat{C}$ is linear and unitary.
+
+Note that these derivations rely crucially on the fact that we know the operators must be either unitary or anti-unitary, and this allows us to just check \emph{one} inner product to determine which is the case, rather than checking all of them.
+
+So far, we have been discussing how these operators act on the elements of the state space. However, we are ultimately interested in understanding how the \emph{fields} transform under these transformations. From linear algebra, we know that an endomorphism $W: \mathcal{H} \to \mathcal{H}$ on a vector space canonically induces a transformation on the space of operators, by sending $\phi$ to $W\phi W^{-1}$. Thus, what we want to figure out is how $W \phi W^{-1}$ relates to $\phi$.
+
+\subsection{Parity}
+Our objective is to figure out how
+\[
+ \hat{P} = W(\Prob, 0)
+\]
+acts on our different quantum fields. For convenience, we will write
+\begin{align*}
+ x^\mu &\mapsto x^\mu_P = (x^0, -\mathbf{x})\\
+ p^\mu &\mapsto p^\mu_P = (p^0, -\mathbf{p}).
+\end{align*}
+
+As before, we will need to make some assumptions about how $\hat{P}$ behaves. Suppose our field has creation operator $a^\dagger(p)$. Then we might expect
+\[
+ \bket{p} \mapsto \eta^*_a\bket{p_P}
+\]
+for some complex phase $\eta^*_a$. We can alternatively write this as
+\[
+ \hat{P}a^\dagger(p) \bket{0} = \eta_a^* a^\dagger(p_P)\bket{0}.
+\]
+
+We assume the vacuum is parity-invariant, i.e.\ $\hat{P}\bket{0} = \bket{0}$. So we can write this as
+\[
+ \hat{P}a^\dagger(p) \hat{P}^{-1}\bket{0} = \eta_a^* a^\dagger(p_P) \bket{0}.
+\]
+
+Thus, it is natural to make the following assumption:
+
+\begin{assumption}
+ Let $a^\dagger(p)$ be the creation operator of any field. Then $a^\dagger(p)$ transforms as
+ \[
+ \hat{P} a^\dagger (p) \hat{P}^{-1} = \eta^* a^\dagger(p_P)
+ \]
+ for some $\eta^*$. Taking conjugates, since $\hat{P}$ is unitary, this implies
+ \[
+ \hat{P} a(p) \hat{P}^{-1} = \eta a(p_P).
+ \]
+\end{assumption}
+We will also need to assume the following:
+\begin{assumption}
+ Let $\phi$ be any field. Then $\hat{P} \phi(x) \hat{P}^{-1}$ is a multiple of $\phi(x_P)$, where ``a multiple'' can be multiplication by a linear map in the case where $\phi$ has more than $1$ component.
+\end{assumption}
+
+
+\subsubsection*{Scalar fields}
+Consider a complex scalar field
+\[
+ \phi(x) = \sum_p \left(a(p) e^{-ip\cdot x} + c(p)^\dagger e^{+i p\cdot x}\right),
+\]
+where $a(p)$ is an annihilation operator for a particle and $c(p)^\dagger$ is a creation operator for the anti-particles. Then we have
+\begin{align*}
+ \hat{P} \phi(x) \hat{P}^{-1} &= \sum_p \left(\hat{P}a(p)\hat{P}^{-1} e^{-ip\cdot x} + \hat{P}c^\dagger (p)\hat{P}^{-1} e^{+ip\cdot x}\right)\\
+ &= \sum_p \left(\eta_a a(p_P) e^{-ip\cdot x} + \eta_c^* c^\dagger (p_P) e^{+ip\cdot x}\right)\\
+ \intertext{Since we are integrating over all $p$, we can relabel $p_P \leftrightarrow p$, and then get}
+ &= \sum_p \left(\eta_a a(p) e^{-i p_P \cdot x} + \eta_c^* c^\dagger(p) e^{+ip_P \cdot x}\right)\\
+ \intertext{We now note that $x \cdot p_P = x_P \cdot p$ by inspection. So we have}
+ &= \sum_p\left(\eta_a a(p) e^{-ip\cdot x_P} + \eta_c^* c^\dagger(p) e^{ip\cdot x_P}\right).
+\end{align*}
+
+By assumption, this is proportional to $\phi(x_P)$. So we must have $\eta_a = \eta_c^* \equiv \eta_P$. Then we just get
+\[
+ \hat{P} \phi(x) \hat{P}^{-1} = \eta_P \phi(x_P).
+\]
+\begin{defi}[Intrinsic parity]\index{intrinsic parity}
+ The \emph{intrinsic parity} of a field $\phi$ is the number $\eta_P \in \C$ such that
+ \[
+ \hat{P} \phi(x) \hat{P}^{-1} = \eta_P \phi(x_P).
+ \]
+\end{defi}
+
+For real scalar fields, we have $a = c$, and so $\eta_a = \eta_c$. Combined with $\eta_a = \eta_c^* = \eta_P$, this gives $\eta_P = \eta_P^*$, and since $|\eta_P| = 1$, we conclude $\eta_P = \pm 1$.
+\begin{defi}[Scalar and pseudoscalar fields]\index{scalar field}\index{pseudoscalar field}
+ A real scalar field is called a \emph{scalar field} (confusingly) if the intrinsic parity is $+1$. Otherwise, it is called a \emph{pseudoscalar field}.
+\end{defi}
+
+Note that under our assumptions, we have $\hat{P}^2 = I$ by the composition rule of the $W(\Lambda, a)$. Hence, it follows that we always have $\eta_P = \pm 1$. However, in more sophisticated treatments of the theory, the composition rule need not hold. The above analysis still holds, but for a complex scalar field, we need not have $\eta_P = \pm 1$.
+
+\subsubsection*{Vector fields}
+Similarly, if we have a vector field $V^\mu (x)$, then we can write
+\[
+ V^\mu(x) = \sum_{p, \lambda} \left(\mathcal{E}^\mu(\lambda, p) a^\lambda(p) e^{-ip\cdot x} + \mathcal{E}^{\mu *} (\lambda, p) c^{\dagger \lambda} (p) e^{ip\cdot x}\right),
+\]
+where $\mathcal{E}^\mu(\lambda, p)$ are some polarization vectors.
+
+Using similar computations, we find
+\[
+ \hat{P} V^\mu \hat{P}^{-1} = \sum_{p, \lambda} \left(\mathcal{E}^\mu (\lambda, p_P)a^\lambda(p) e^{-ip\cdot x_P} \eta_a + \mathcal{E}^{\mu*}(\lambda, p_P) c^{\dagger\lambda} (p) e^{+ip\cdot x_P} \eta_c^*\right).
+\]
+This time, we have to deal with the $\mathcal{E}^\mu(\lambda, p_P)$ term. Using explicit expressions for $\mathcal{E}^\mu$, we have
+\[
+ \mathcal{E}^\mu(\lambda, p_P) = - \Prob^\mu\!_\nu \mathcal{E}^\nu (\lambda, p).
+\]
+So we find that
+\[
+ \hat{P} V^\mu(x) \hat{P}^{-1} = - \eta_P \Prob^\mu\!_\nu V^\nu(x_P),
+\]
+where for the same reasons as before, we have
+\[
+ \eta_P = \eta_a = \eta_c^*.
+\]
+\begin{defi}[Vector and axial vector fields]\index{vector field}\index{axial vector field}
+ A vector field with $\eta_P = -1$ is (again confusingly) called a \emph{vector field}. Otherwise, it is called an \emph{axial vector field}.
+\end{defi} % return to this later. What is polarization vector?
+
+\subsubsection*{Dirac fields}
+We finally move on to the case of Dirac fields, which is the most complicated. Fortunately, it is still not too bad.
+
+As before, we obtain
+\[
+ \hat{P}\psi(x) \hat{P}^{-1} = \sum_{p, s}\left( \eta_b b^s(p) u^s(p_P) e^{-ip\cdot x_P} + \eta_d^* d^{s\dagger}(p) v^s(p_P) e^{+ip\cdot x_P}\right).
+\]
+We use that
+\[
+ u^s(p_P) = \gamma^0 u^s(p),\quad v^s(p_P) = - \gamma^0 v^s(p),
+\]
+which we can verify using Lorentz boosts. Then we find
+\[
+ \hat{P} \psi(x) \hat{P}^{-1} = \gamma^0 \sum_{p, s}\left( \eta_b b^s(p) u^s(p) e^{-ip\cdot x_P} - \eta_d^* d^{s\dagger}(p) v^s(p) e^{+ip\cdot x_P}\right).
+\]
+So again, we require that
+\[
+ \eta_b = - \eta_d^*.
+\]
+We can see this minus sign as saying particles and anti-particles have opposite intrinsic parity.
+
+Unexcitingly, we end up with
+\[
+ \hat{P} \psi(x) \hat{P}^{-1} = \eta_P \gamma^0 \psi(x_P).
+\]
+Similarly, we have
+\[
+ \hat{P} \bar\psi(x) \hat{P}^{-1} = \eta_P^* \bar\psi(x_P) \gamma^0.
+\]
+Since $\gamma^0$ anti-commutes with $\gamma^5$, it follows that we have
+\[
+ \hat{P} \psi_L(x) \hat{P}^{-1} = \eta_P \gamma^0 \psi_R(x_P).
+\]
+So the parity operator exchanges left-handed and right-handed fermions.
+
+\subsubsection*{Fermion bilinears}
+We can now determine how various fermion bilinears transform. For example, we have
+\[
+ \bar\psi(x) \psi(x) \mapsto \bar\psi(x_P) \psi(x_P).
+\]
+So it transforms as a scalar. On the other hand, we have
+\[
+ \bar\psi(x) \gamma^5 \psi(x) \mapsto - \bar{\psi}(x_P) \gamma^5 \psi(x_P),
+\]
+and so this transforms as a \emph{pseudoscalar}. We also have
+\[
+ \bar\psi(x) \gamma^\mu \psi(x) \mapsto \Prob^\mu\!_\nu \bar\psi(x_P) \gamma^\nu \psi(x_P),
+\]
+and so this transforms as a \emph{vector}. Finally, we have
+\[
+ \bar\psi(x) \gamma^5 \gamma^\mu \psi(x) \mapsto - \Prob^\mu\!_\nu \bar\psi(x_P) \gamma^5 \gamma^\nu \psi(x_P).
+\]
+So this transforms as an \emph{axial vector}.
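All four transformation rules follow from the matrix identities $\gamma^0 \gamma^\mu \gamma^0 = \Prob^\mu\!_\nu \gamma^\nu$ and $\gamma^0 \gamma^5 \gamma^0 = -\gamma^5$, by inserting $\hat{P}^{-1}\hat{P}$ between the fields. Both identities can be verified numerically; the sketch below is an editorial check (not from the notes) using an assumed Peskin-style chiral representation.

```python
# Chiral-representation gamma matrices (Peskin-style conventions assumed).
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sig = [[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]]

def blk(a, b, c, d):  # 4x4 matrix from 2x2 blocks [[a, b], [c, d]]
    return [list(map(complex, a[i] + b[i])) for i in range(2)] \
         + [list(map(complex, c[i] + d[i])) for i in range(2)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def sc(z, A):
    return [[z * x for x in row] for row in A]

def eq(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(4) for j in range(4))

g0 = blk(Z2, I2, I2, Z2)
gs = [blk(Z2, s, sc(-1, s), Z2) for s in sig]
g5 = sc(1j, mul(mul(g0, gs[0]), mul(gs[1], gs[2])))  # i g^0 g^1 g^2 g^3

gam = [g0] + gs
for mu in range(4):
    # P^mu_nu gamma^nu is +gamma^0 for mu = 0 and -gamma^i for spatial mu
    target = gam[mu] if mu == 0 else sc(-1, gam[mu])
    assert eq(mul(g0, mul(gam[mu], g0)), target)  # g0 g^mu g0 = P^mu_nu g^nu
assert eq(mul(g0, mul(g5, g0)), sc(-1, g5))       # g0 g5 g0 = -g5
```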
+\subsection{Charge conjugation}
+We will do similar manipulations, and figure out how different fields transform under charge conjugation. Unlike parity, this is not a spacetime symmetry. It transforms particles to anti-particles, and vice versa. This is a unitary operator $\hat{C}$, and we make the following assumption:
+
+\begin{assumption}
+ If $a$ is an annihilation operator of a particle, and $c$ is the annihilation operator of the anti-particle, then we have
+ \[
+ \hat{C} a(p)\hat{C}^{-1} = \eta c(p)
+ \]
+ for some $\eta$.
+\end{assumption}
+
+As before, this is motivated by the requirement that $\hat{C} \bket{p} = \eta^* \bket{\bar{p}}$. We will also assume the following:
+\begin{assumption}
+ Let $\phi$ be any field. Then $\hat{C} \phi(x) \hat{C}^{-1}$ is a multiple of the conjugate of $\phi$, where ``a multiple'' can be multiplication by a linear map in the case where $\phi$ has more than $1$ component. Here the interpretation of ``the conjugate'' depends on the kind of field:
+ \begin{itemize}
+ \item If $\phi$ is a bosonic field, then the conjugate of $\phi$ is $\phi^* = (\phi^\dagger)^T$. Of course, if this is a scalar field, then the conjugate is just $\phi^\dagger$.
+ \item If $\phi$ is a spinor field, then the conjugate of $\phi$ is $\bar{\phi}^T$.
+ \end{itemize}
+\end{assumption}
+
+\subsubsection*{Scalar and vector fields}
+Scalar and vector fields behave in a manner very similar to that of parity operators. We simply have
+\begin{align*}
+ \hat{C} \phi \hat{C}^{-1} &= \eta_c \phi^\dagger\\
+ \hat{C} \phi^\dagger \hat{C}^{-1} &= \eta_c^* \phi
+\end{align*}
+for some $\eta_c$. In the case of a real field, we have $\phi^\dagger = \phi$. So we must have $\eta_c = \pm 1$, which is known as the \term{intrinsic $c$-parity} of the field. For complex fields, we can introduce a global phase change of the field so as to set $\eta_c = 1$.
+
+This has some physical significance. For example, the photon field transforms like
+\[
+ \hat{C}A_\mu(x) \hat{C}^{-1} = - A_\mu(x).
+\]
+Experimentally, we see that $\pi^0$ only decays to $2$ photons, but not $1$ or $3$. Therefore, assuming that $c$-parity is conserved, we infer that
+\[
+ \eta_c^{\pi^0} = (-1)^2 = +1.
+\]
+
+\subsubsection*{Dirac fields}
+Dirac fields are more complicated. As before, we can compute
+\[
+ \hat{C} \psi(x) \hat{C}^{-1} = \eta_c \sum_{p, s}\left(d^s(p) u^s(p)e^{-ip\cdot x} + b^{s\dagger} (p) v^s(p) e^{+ip\cdot x}\right).
+\]
+We compare this with
+\[
+ \bar\psi^T(x) = \sum_{p, s} \left( b^{s\dagger}(p) \bar{u}^{sT}(p) e^{+ip\cdot x} + d^s (p) \bar{v}^{sT}(p) e^{-ip\cdot x}\right).
+\]
+We thus see that we have to relate $\bar{u}^{sT}$ and $v^s$ somehow; and $\bar{v}^{sT}$ with $u^s$.
+
+Recall that we constructed $u^s$ and $v^s$ in terms of elements $\xi^s, \eta^s \in \C^2$. At that time, we did not specify what $\xi$ and $\eta$ are, or how they are related, so we cannot expect $u$ and $v$ to have any relation at all. So we need to make a specific choice of $\xi$ and $\eta$. We will choose them so that
+\[
+ \eta^s = i \sigma^2 \xi^{s*}.
+\]
+Now consider the matrix
+\[
+ C = -i \gamma^0 \gamma^2 =
+ \begin{pmatrix}
+ i \sigma^2 & 0\\
+ 0 & -i \sigma^2
+ \end{pmatrix}.
+\]
+The reason we care about this is the following:
+\begin{prop}
+ \[
+ v^s(p) = C \bar{u}^{sT}(p),\quad u^s(p) = C \bar{v}^{sT}(p).
+ \]
+\end{prop}
+From this, we infer that
+\begin{prop}
+ \begin{align*}
+ \psi^c(x) \equiv \hat{C} \psi(x) \hat{C}^{-1} &= \eta_c C \bar\psi^T (x)\\
+ \bar\psi^c(x) \equiv \hat{C} \bar\psi(x) \hat{C}^{-1} &= \eta_c^* \psi^T(x) C = - \eta_c^* \psi^T(x) C^{-1}.
+ \end{align*}
+\end{prop}
+It is convenient to note the following additional properties of the matrix $C$:
+\begin{prop}
+ \begin{gather*}
+ (C \gamma^\mu)^T = C \gamma^\mu\\
+ C = -C^T = - C^\dagger = -C^{-1}\\
+ (\gamma^\mu)^T = - C \gamma^\mu C^{-1},\quad (\gamma^5)^T = + C \gamma^5 C^{-1}.
+ \end{gather*}
+\end{prop}
+Note that if $\psi(x)$ satisfies the Dirac equation, then so does $\psi^c(x)$.
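All of these properties of $C$ can be verified numerically in the chiral representation. The sketch below is an editorial check, not part of the notes; the Peskin-style sign conventions are an assumption, but every identity here is insensitive to the overall sign of $C$.

```python
# Chiral-representation gamma matrices (Peskin-style conventions assumed).
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sig = [[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]]

def blk(a, b, c, d):  # 4x4 matrix from 2x2 blocks [[a, b], [c, d]]
    return [list(map(complex, a[i] + b[i])) for i in range(2)] \
         + [list(map(complex, c[i] + d[i])) for i in range(2)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def sc(z, A):
    return [[z * x for x in row] for row in A]

def eq(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(4) for j in range(4))

def tp(A):  # transpose
    return [[A[j][i] for j in range(4)] for i in range(4)]

g0 = blk(Z2, I2, I2, Z2)
gs = [blk(Z2, s, sc(-1, s), Z2) for s in sig]
g5 = sc(1j, mul(mul(g0, gs[0]), mul(gs[1], gs[2])))  # i g^0 g^1 g^2 g^3

C = sc(-1j, mul(g0, gs[1]))  # C = -i gamma^0 gamma^2
assert eq(C, blk(sc(1j, sig[1]), Z2, Z2, sc(-1j, sig[1])))  # diag(i s2, -i s2)
assert eq(tp(C), sc(-1, C))                                 # C^T = -C
assert eq([[x.conjugate() for x in r] for r in tp(C)], sc(-1, C))  # C^dag = -C
I4 = blk(I2, Z2, Z2, I2)
assert eq(mul(C, C), sc(-1, I4))                            # so C^{-1} = -C
Cinv = sc(-1, C)
for m in [g0] + gs:
    assert eq(tp(mul(C, m)), mul(C, m))                 # (C g^mu)^T = C g^mu
    assert eq(tp(m), sc(-1, mul(C, mul(m, Cinv))))      # (g^mu)^T = -C g^mu C^{-1}
assert eq(tp(g5), mul(C, mul(g5, Cinv)))                # (g^5)^T = +C g^5 C^{-1}
```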
+
+Apart from Dirac fermions, there are also things called \term{Majorana fermions}. They have $b^s(p) = d^s(p)$. This means that the particle is its own antiparticle, and for these, we have
+\[
+ \psi^c(x) = \psi(x).
+\]
+These fermions have to be neutral. A natural question to ask is --- do these exist? The only neutral fermions in the Standard Model are the neutrinos, and we don't really know what neutrinos are. In particular, it is possible that they are Majorana fermions. If they are, then we would be able to observe neutrino-less double $\beta$ decay, but current experiments have neither observed such a decay nor ruled it out.
+
+\subsubsection*{Fermion bilinears}
+We can look at how fermion bilinears change. For example,
+\[
+ j^\mu = \bar\psi(x) \gamma^\mu \psi(x).
+\]
+Then we have
+\begin{align*}
+ \hat{C} j^\mu(x) \hat{C}^{-1} &= \hat{C} \bar\psi \hat{C}^{-1} \gamma^\mu \hat{C} \psi \hat{C}^{-1}\\
+ &= -\eta_c^* \psi^T C^{-1} \gamma^\mu C \bar\psi^T \eta_c\\
+ \intertext{We now notice that the $\eta_c^*$ and $\eta_c$ cancel each other. Also, since this is a scalar, we can take the transpose of the whole thing, but we will pick up a minus sign because fermions anti-commute}
+ &= \bar\psi (C^{-1} \gamma^\mu C)^T \psi\\
+ &= \bar\psi C^T \gamma^{\mu T} ( C^{-1})^T \psi\\
+ &= - \bar\psi \gamma^\mu \psi\\
+ &= - j^\mu (x).
+\end{align*}
+Therefore $A_\mu(x) j^\mu(x)$ is invariant under $\hat{C}$. Similarly,
+\[
+ \bar\psi \gamma^\mu \gamma^5 \psi \mapsto + \bar\psi \gamma^\mu \gamma^5 \psi.
+\]
+\subsection{Time reversal}
+Finally, we get to time reversal. This is more messy. Under $\mathbb{T}$, we have
+\[
+ x^\mu = (x^0, x^i) \mapsto x_T^\mu = (-x^0, x^i).
+\]
+Momentum transforms in the \emph{opposite} way, with
+\[
+ p^\mu = (p^0, p^i) \mapsto p^\mu_T = (p^0, -p^i).
+\]
+Theories that are invariant under $\hat{T}$ look the same when we run them backwards. For example, Newton's laws are time reversal invariant. We will also see that the electromagnetic and strong interactions are time reversal invariant, but the weak interaction is not.
+
+Here our theory begins to get more messy.
+\begin{assumption}
+ For any field $\phi$, we have
+ \[
+ \hat{T} \phi(x) \hat{T}^{-1} \propto \phi(x_T).
+ \]
+\end{assumption}
+
+\begin{assumption}
+ For bosonic fields with creator $a^\dagger$, we have
+ \[
+ \hat{T} a(p) \hat{T}^{-1} = \eta a(p_T)
+ \]
+ for some $\eta$. For Dirac fields with creator $b^{s\dagger}$, we have
+ \[
+ \hat{T} b^s(p) \hat{T}^{-1} = \eta (-1)^{1/2 - s} b^{-s}(p_T)
+ \]
+ for some $\eta$.
+\end{assumption}
+Why that complicated rule for Dirac fields? We first justify why we want to swap spins when we perform time reversal. This is just the observation that when we reverse time, a particle spins in the opposite direction. The factor of $(-1)^{\frac{1}{2} - s}$ tells us spinors of different spins transform differently. % why
+
+\subsubsection*{Boson field}
+Again, bosonic fields are easy. The creation and annihilation operators transform as
+\begin{align*}
+ \hat{T} a(p) \hat{T}^{-1} &= \eta_T a(p_T)\\
+ \hat{T} c^\dagger(p) \hat{T}^{-1} &= \eta_T c^\dagger(p_T).
+\end{align*}
+Note that the relative phases are fixed using the same argument as for $\hat{P}$ and $\hat{C}$. When we do the derivations, it is very important to keep in mind that $\hat{T}$ is \emph{anti}-linear.
+
+Then we have
+\[
+ \hat{T}\phi(x) \hat{T}^{-1} = \eta_T \phi(x_T).
+\]
+\subsubsection*{Dirac fields}
+As mentioned, for Dirac fields, the annihilation and creation operators transform as
+\begin{align*}
+ \hat{T} b^s(p) \hat{T}^{-1} &= \eta_T (-1)^{1/2 - s} b^{-s}(p_T)\\
+ \hat{T} d^{s\dagger}(p) \hat{T}^{-1} &= \eta_T (-1)^{1/2 - s} d^{-s\dagger}(p_T)
+\end{align*}
+
+It can be shown that
+\begin{align*}
+ (-1)^{\frac{1}{2} - s} u^{-s*}(p_T) &= -B u^s(p)\\
+ (-1)^{\frac{1}{2} - s} v^{-s*}(p_T) &= -B v^s(p),
+\end{align*}
+where
+\[
+ B = \gamma^5 C =
+ \begin{pmatrix}
+ i\sigma_2 & 0\\
+ 0 & i \sigma_2
+ \end{pmatrix}
+\]
+in the chiral representation.
+
+Then, we have
+\[
 \hat{T} \psi(x) \hat{T}^{-1} = \eta_T \sum_{p, s} (-1)^{\frac{1}{2} - s} \left(b^{-s}(p_T) u^{s*}(p) e^{+ip\cdot x} + d^{-s\dagger}(p_T) v^{s*}(p) e^{-ip\cdot x}\right).
+\]
+Doing the standard manipulations, we find that
+\begin{align*}
+ \hat{T} \psi(x) \hat{T}^{-1} &= \eta_T \sum_{p, s} (-1)^{\frac{1}{2} - s + 1} \left(b^s(p) u^{-s*}(p_T) e^{-ip\cdot x_T} + d^{s\dagger}(p) v^{-s*}(p_T) e^{+ip\cdot x_T}\right)\\
 &= +\eta_T \sum_{p, s} \left(b^s(p) B u^s(p) e^{-ip\cdot x_T} + d^{s\dagger} (p) B v^s(p) e^{+ip\cdot x_T}\right)\\
+ &= \eta_T B \psi(x_T).
+\end{align*}
+Similarly,
+\[
+ \hat{T}\bar\psi(x) \hat{T}^{-1} = \eta_T^* \bar\psi(x_T) B^{-1}.
+\]
+\subsubsection*{Fermion bilinears}
+Similarly, we have
+\begin{align*}
+ \bar\psi (x) \psi(x) &\mapsto \bar\psi(x_T) \psi(x_T)\\
 \bar\psi(x) \gamma^\mu \psi(x) &\mapsto -\mathbb{T}^\mu\!_\nu \bar\psi(x_T) \gamma^\nu \psi(x_T).
+\end{align*}
+This uses the fact that
+\[
+ B^{-1} \gamma^{\mu*}B = - \mathbb{T}^\mu\!_\nu \gamma^\nu.
+\]
So we see that in the second case, the $0$ component, i.e.\ charge density, is unchanged, while the spatial components, i.e.\ current density, pick up a negative sign. This makes physical sense.
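As a quick numerical sanity check (not part of the lectures), we can verify both $B = \gamma^5 C$ and the identity $B^{-1} \gamma^{\mu *} B = -\mathbb{T}^\mu\!_\nu \gamma^\nu$ in the chiral representation, assuming the Peskin--Schroeder conventions $\gamma^5 = \operatorname{diag}(-1, 1)$ and $C = -i\gamma^2\gamma^0$ (the notes do not spell out the convention for $C$, so this choice is an assumption):

```python
import numpy as np

# Pauli matrices and 2x2 blocks
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# gamma matrices in the chiral representation
g = [np.block([[Z2, I2], [I2, Z2]])] + \
    [np.block([[Z2, si], [-si, Z2]]) for si in s]
g5 = np.block([[-I2, Z2], [Z2, I2]])
C = -1j * g[2] @ g[0]            # assumed charge conjugation convention

B = g5 @ C
# B = block-diag(i sigma_2, i sigma_2), as claimed
assert np.allclose(B, np.block([[1j * s[1], Z2], [Z2, 1j * s[1]]]))

# B^{-1} gamma^{mu*} B = -T^mu_nu gamma^nu, with T = diag(-1, 1, 1, 1)
T = np.diag([-1.0, 1.0, 1.0, 1.0])
Binv = np.linalg.inv(B)
for mu in range(4):
    assert np.allclose(Binv @ g[mu].conj() @ B, -T[mu, mu] * g[mu])
```

Any other representation related by a unitary change of basis would give the same conclusion, since the identities are basis covariant.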
+
+\subsection{\texorpdfstring{$S$}{S}-matrix}
+We consider how $S$-matrices transform. Recall that $S$ is defined by
+\begin{align*}
 \brak{p_1, p_2, \cdots} S \bket{k_A, k_B, \cdots} &={} _{\mathrm{out}}\!\braket{p_1, p_2, \cdots}{k_A, k_B, \cdots}_{\mathrm{in}} \\
 &= \lim_{T \to \infty} \brak{p_1, p_2, \cdots} e^{-iH(2T)} \bket{k_A, k_B, \cdots}
+\end{align*}
+We can write
+\[
+ S = \mathcal{T} \exp\left(-i \int_{-\infty}^\infty V(t) \;\d t\right),
+\]
+where $\mathcal{T}$ denotes the time-ordered integral, and
+\[
+ V(t) = -\int \d^3 x\; \mathcal{L}_I(x),
+\]
+and $\mathcal{L}_I(x)$ is the interaction part of the Lagrangian.
+\begin{eg}
+ In QED, we have
+ \[
 \mathcal{L}_I = - e \bar\psi (x) \gamma^\mu A_\mu (x) \psi(x).
+ \]
+ We can draw a table of how things transform:
+ \begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ & P & C & T\\
+ \midrule
+ $\mathcal{L}_I(x)$ & $\mathcal{L}_I(x_P)$ & $\mathcal{L}_I(x)$ & $\mathcal{L}_I(x_T)$\\
+ $V(t)$ & $V(t)$ & $ V(t)$ & $V(-t)$ \\
+ $S$ & $S$ & $S$ & ??\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
 A bit more care is needed to figure out how the $S$-matrix transforms when we reverse time, because of the time-ordering in the integral. To do so, we explicitly write out the time-ordered integral. We have
+ \[
+ S = \sum_{n = 0}^\infty (-i)^n \int_{-\infty}^\infty \d t_1 \int_{-\infty}^{t_1} \d t_2 \cdots \int_{-\infty}^{t_{n - 1}} \d t_n\; V(t_1) V(t_2) \cdots V(t_n).
+ \]
+ Then we have
+ \begin{align*}
+ S_T &= \hat{T} S\hat{T}^{-1} \\
+ &= \sum_n (+i)^n \int_{-\infty}^\infty \d t_1 \int_{-\infty}^{t_1} \d t_2 \cdots \int_{-\infty}^{t_{n - 1}} \d t_n\; V(-t_1) V(-t_2) \cdots V(-t_n)\\
 \intertext{We now substitute $\tau_i = -t_{n + 1 - i}$, so that $V(-t_i) = V(\tau_{n + 1 - i})$ and $\d t_i = -\d \tau_{n + 1 - i}$. Flipping the limits of integration to absorb the minus signs, this becomes}
 &= \sum_n (+i)^n \int_{-\infty}^\infty \d \tau_n \int_{\tau_n}^\infty \d \tau_{n - 1} \cdots \int_{\tau_2}^\infty \d \tau_1\; V(\tau_n) V(\tau_{n - 1}) \cdots V(\tau_1).
+ \end{align*}
+ We now notice that
+ \[
+ \int_{-\infty}^\infty \d \tau_n \int_{\tau_n}^\infty \d \tau_{n - 1} = \int_{-\infty}^\infty \d \tau_{n - 1} \int_{-\infty}^{\tau_{n - 1}} \d \tau_n.
+ \]
+ We can see this by drawing the picture
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$\tau_n$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$\tau_{n - 1}$};
+
+ \fill [opacity=0.5, morange] (-3, -3) -- (3, 3) -- (-3, 3) -- cycle;
+ \draw (-3, -3) -- (3, 3) node [right] {$\tau_{n - 1} = \tau_n$};
+ \end{tikzpicture}
+ \end{center}
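The region-swapping identity can also be checked numerically: both orders of integration cover the half-plane $\tau_{n - 1} \geq \tau_n$. A small sketch (not part of the lectures; the rapidly decaying integrand is arbitrary):

```python
import numpy as np
from scipy.integrate import dblquad

# arbitrary smooth, rapidly decaying test integrand f(tau_n, tau_{n-1})
f = lambda tn, tn1: np.exp(-tn**2 - tn1**2) * (1 + tn - 2 * tn1)

# LHS: tau_n outermost, tau_{n-1} running from tau_n to infinity
lhs, _ = dblquad(lambda y, x: f(x, y), -np.inf, np.inf, lambda x: x, np.inf)
# RHS: tau_{n-1} outermost, tau_n running from -infinity to tau_{n-1}
rhs, _ = dblquad(lambda y, x: f(y, x), -np.inf, np.inf, -np.inf, lambda x: x)

# both iterated integrals agree on the region tau_{n-1} >= tau_n
assert abs(lhs - rhs) < 1e-6
```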
+ So we find that
+ \[
+ S_T = \sum_n (+i)^n \int_{-\infty}^\infty \d \tau_1 \int_{-\infty}^{\tau_1} \d \tau_2 \cdots \int_{-\infty}^{\tau_{n - 1}} \d \tau_n \; V(\tau_n) V(\tau_{n - 1}) \cdots V(\tau_1).
+ \]
 Comparing with the original expansion of $S$, this is the same series, but with $+i$ in place of $-i$ and the $V$'s multiplied in the opposite order. We can then see that this is equal to $S^\dagger$.
+
+ What does this tell us? Consider states $\bket{\eta}$ and $\bket{\xi}$ with
+ \begin{align*}
+ \bket{\eta_T} &= \hat{T} \bket{\eta}\\
+ \bket{\xi_T} &= \hat{T} \bket{\xi}
+ \end{align*}
+ The Dirac bra-ket notation isn't very useful when we have anti-linear operators. So we will write inner products explicitly. We have
+ \begin{align*}
+ (\eta_T, S\xi_T) &= (\hat{T} \eta, S\hat{T} \xi) \\
+ &= (\hat{T} \eta, S_T^\dagger \hat{T} \xi) \\
 &= (\hat{T} \eta, \hat{T} S^\dagger \xi) \\
+ &= (\eta, S^\dagger \xi)^* \\
+ &= (\xi, S\eta)
+ \end{align*}
+ where we used the fact that $\hat{T}$ is anti-unitary. So the conclusion is
+ \[
+ \brak{\eta_T} S\bket{\xi_T} = \brak{\xi}S\bket{\eta}.
+ \]
+ So if
+ \[
+ \hat{T} \mathcal{L}_I(x) \hat{T}^{-1} = \mathcal{L}_I(x_T),
+ \]
+ then the $S$-matrix elements are equal for time-reversed processes, where the initial and final states are swapped.
+\end{eg}
+
+\subsection{CPT theorem}
+\begin{thm}[CPT theorem]\index{CPT theorem}
 Any local, Lorentz invariant Lagrangian with a Hermitian Hamiltonian is invariant under the product of P, C and T.
+\end{thm}
+We will not prove this. For details, one can read Streater and Wightman, \emph{PCT, spin and statistics and all that} (1989).
+
+All observations we make suggest that CPT is respected in nature. This means a particle propagating forwards in time cannot be distinguished from the antiparticle propagating backwards in time.
+
+\subsection{Baryogenesis}
In the universe, we observe that there is much more matter than anti-matter. \emph{Baryogenesis}\index{baryogenesis} is the generation of this asymmetry in the universe. Sakharov came up with three conditions that are necessary for this to happen.
+\begin{enumerate}
+ \item Baryon number violation (or leptogenesis, i.e.\ lepton number asymmetry), i.e.\ some process $X \to Y + B$ that generates an excess baryon.
+ \item Non-equilibrium. Otherwise, the rate of $Y + B \to X$ is the same as the rate of $X \to Y + B$.
+ \item C and CP violation. We need C violation, or else the rate of $X \to Y + B$ and the rate of $\bar{X} \to \bar{Y} + \bar{B}$ would be the same, and the effects cancel out each other.
+
 We similarly need CP violation, or else the rate of $X \to nq_L$ plus the rate of $X \to n q_R$, where $q_{L, R}$ are left- and right-handed quarks, would equal the rate of $\bar{X} \to n \bar{q}_L$ plus the rate of $\bar{X} \to n\bar{q}_R$, and this would wash out our excess.
+\end{enumerate}
+
+\section{Spontaneous symmetry breaking}
+In this chapter, we are going to look at spontaneous symmetry breaking. The general setting is as follows --- our Lagrangian $\mathcal{L}$ enjoys some symmetry. However, the potential has \emph{multiple} minima. Usually, in perturbation theory, we imagine we are sitting near an equilibrium point of the system (the ``vacuum''), and then look at what happens to small perturbations near the equilibrium.
+
+When there are multiple minima, we have to arbitrarily pick a minimum to be our vacuum, and then do perturbation around it. In many cases, this choice of minimum is \emph{not} invariant under our symmetries. Thus, even though the theory itself is symmetric, the symmetry is lost once we pick a vacuum. It turns out interesting things happen when this happens.
+
+\subsection{Discrete symmetry}
+We begin with a toy example, namely that of a discrete symmetry. Consider a real scalar field $\phi(x)$ with a symmetric potential $V(\phi)$, so that
+\[
+ V(-\phi) = V(\phi).
+\]
+This gives a discrete $\Z/2\Z$ symmetry $\phi \leftrightarrow -\phi$.
+
+We will consider the case of a $\phi^4$ theory, with Lagrangian
+\[
 \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \left(\frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4} \phi^4\right)
+\]
+for some $\lambda$.
+
+We want the potential to $\to \infty$ as $\phi \to \infty$, so we necessarily require $\lambda > 0$. However, since the $\phi^4$ term dominates for large $\phi$, we are now free to pick the sign of $m^2$, and still get a sensible theory.
+
+Usually, this theory has $m^2 > 0$, and thus $V(\phi)$ has a minimum at $\phi = 0$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$\phi$};
 \draw [->] (0, -1.2) -- (0, 3) node [above] {$V(\phi)$};
+
+ \draw [mblue, thick, domain=-1.414:1.414, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 + (\x/2)^2)});
+ \end{tikzpicture}
+\end{center}
+However, we could imagine a scenario where $m^2 < 0$, where we have ``imaginary mass''. In this case, the potential looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$\phi$};
+ \draw [->] (0, -1.2) -- (0, 3) node [above] {$V(\phi)$};
+
+ \draw [mblue, thick, domain=-2.4:2.4, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 - (\x/2)^2)});
+
+ \draw [dashed] (1.414, 0) node [above] {$v$} -- (1.414, -1);
+ \end{tikzpicture}
+\end{center}
+To understand this potential better, we complete the square, and write it as
+\[
+ V(\phi) = \frac{\lambda}{4} (\phi^2 - v^2)^2 + \text{constant},
+\]
+where
+\[
+ v = \sqrt{-\frac{m^2}{\lambda}}.
+\]
+We see that now $\phi = 0$ becomes a local maximum, and there are two (global) minima at $\phi = \pm v$. In particular, $\phi$ has acquired a non-zero \term{vacuum expectation value} (\term{VEV}).
+
+We shall wlog consider small excitations around $\phi = v$. Let's write
+\[
 \phi(x) = v + f(x).
+\]
+Then we can write the Lagrangian as
+\[
 \mathcal{L} = \frac{1}{2} \partial_\mu f \partial^\mu f - \lambda \left(v^2 f^2 + vf^3 + \frac{1}{4} f^4\right),
+\]
+plus some constants. Therefore $f$ is a scalar field with mass
+\[
+ m_f^2 = 2v^2 \lambda.
+\]
+This $\mathcal{L}$ is \emph{not} invariant under $f \to -f$. The symmetry of the original Lagrangian has been broken by the VEV of $\phi$.
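These expansions are easy to verify with a computer algebra system. The following sketch (using \texttt{sympy}, not part of the lectures) checks the shifted potential and the mass $m_f^2 = 2 v^2 \lambda$:

```python
import sympy as sp

f, v, lam = sp.symbols('f v lambda_', positive=True)
V = sp.Rational(1, 4) * lam * ((v + f)**2 - v**2)**2  # V(phi) at phi = v + f

# V(v + f) = lambda (v^2 f^2 + v f^3 + f^4 / 4): no constant or linear term in f
assert sp.expand(V - lam * (v**2 * f**2 + v * f**3 + f**4 / 4)) == 0

# the mass squared is the curvature at the minimum: m_f^2 = V''(v) = 2 lambda v^2
m_f_sq = sp.diff(V, f, 2).subs(f, 0)
assert sp.simplify(m_f_sq - 2 * lam * v**2) == 0
```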
+
+\subsection{Continuous symmetry}
+We can consider a slight generalization of the above scenario. Consider an $N$-component real scalar field $\phi = (\phi_1, \phi_2, \cdots, \phi_N)^T$, with Lagrangian
+\[
+ \mathcal{L} = \frac{1}{2} (\partial^\mu \phi) \cdot (\partial_\mu \phi) - V(\phi),
+\]
+where
+\[
+ V(\phi) = \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4} \phi^4.
+\]
+As before, we will require that $\lambda > 0$.
+
+This is a theory that is invariant under global $\Or(N)$ transforms of $\phi$. Again, if $m^2 > 0$, then $\phi = 0$ is a global minimum, and we do not have spontaneous symmetry breaking. Thus, we consider the case $m^2 < 0$.
+
+In this case, we can write
+\[
+ V(\phi) = \frac{\lambda}{4} (\phi^2 - v^2)^2 + \text{constant},
+\]
+where
+\[
 v^2 = -\frac{m^2}{\lambda} > 0.
+\]
+So \emph{any} $\phi$ with $\phi^2 = v^2$ gives a global minimum of the system, and so we have a continuum of vacua. We call this a \term{Sombrero potential} (or \term{wine bottle potential}).
+\begin{center}
+ \begin{tikzpicture}
+ \begin{axis}[
+ hide axis,
+ samples=30,
+ domain=0:360,
+ y domain=0:1.25,clip=false
+ ]
+ \addplot3 [surf, shader=flat, draw=black, fill=white, z buffer=sort] ({sin(x)*y}, {cos(x)*y}, {(y^2-1)^2});
+ \draw[blue,thick,dashed] (axis cs:0,0,0) -- (axis cs:1,0,0);
+ \draw[blue,thick,-stealth] (axis cs:1,0,0) -- (axis cs:1.6,0,0) node[above,font=\footnotesize]{$\phi_1$};
+ \draw[blue,thick,dashed] (axis cs:0,0,0) -- (axis cs:0,-1,0);
+ \draw[blue,thick,-stealth] (axis cs:0,-1,0) -- (axis cs:0,-1.9,0) node[right=1mm,font=\footnotesize]{$\phi_2$};
+ \draw[blue,thick,dashed] (axis cs:0,0,0) -- (axis cs:0,0,1);
+ \draw[blue,thick,-stealth] (axis cs:0,0,1) -- (axis cs:0,0,1.3) node[right,font=\footnotesize]{$V(\phi)$};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+Without loss of generality, we may choose the vacuum to be
+\[
+ \phi_0 = (0, 0, \cdots, 0, v)^T.
+\]
We can then consider small fluctuations about this:
+\[
+ \phi(x) = (\pi_1(x), \pi_2(x), \cdots, \pi_{N-1}(x), v + \sigma(x))^T.
+\]
+We now have a $\pi$ field with $N - 1$ components, plus a $1$-component $\sigma$ field.
+
+We rewrite the Lagrangian in terms of the $\pi$ and $\sigma$ fields as
+\[
+ \mathcal{L} = \frac{1}{2}(\partial^\mu \pi) \cdot (\partial_\mu \pi) + \frac{1}{2}(\partial^\mu \sigma) \cdot (\partial_\mu \sigma) - V(\pi, \sigma),
+\]
+where
+\[
 V(\pi, \sigma) = \frac{1}{2} m_\sigma^2 \sigma^2 + \lambda v(\sigma^2 + \pi^2) \sigma + \frac{\lambda}{4}(\sigma^2 + \pi^2)^2.
+\]
+Again, we have dropped a constant term.
+
+In this Lagrangian, we see that the $\sigma$ field has mass
+\[
+ m_\sigma = \sqrt{2\lambda v^2},
+\]
+but the $\pi$ fields are massless. We can understand this as follows --- the $\sigma$ field corresponds to the radial direction in the potential, and this is locally quadratic, and hence has a mass term. However, the $\pi$ fields correspond to the excitations in the azimuthal directions, which are flat.
+
+\subsection{General case}
+What we just saw is a completely general phenomenon. Suppose we have an $N$-dimensional field $\phi = (\phi_1, \cdots, \phi_N)$, and suppose our theory has a Lie group of symmetries $G$. We will assume that the action of $G$ preserves the potential and kinetic terms individually, and not just the Lagrangian as a whole, so that $G$ will send a vacuum to another vacuum.
+
+Suppose there is more than one choice of vacuum. We write
+\[
+ \Phi_0 = \{\phi_0: V(\phi_0) = V_{\mathrm{min}}\}
+\]
+for the set of all vacua. Now we pick a favorite vacuum $\phi_0 \in \Phi_0$, and we want to look at the elements of $G$ that fix this vacuum. We write
+\[
+ H = \stab(\phi_0) = \{h \in G: h \phi_0 = \phi_0\}.
+\]
+This is the \term{invariant subgroup}, or \term{stabilizer subgroup} of $\phi_0$, and is the symmetry we are left with after the spontaneous symmetry breaking.
+
+We will make some further simplifying assumptions. We will suppose $G$ acts \emph{transitively} on $\Phi_0$, i.e.\ given any two vacua $\phi_0, \phi_0'$, we can find some $g \in G$ such that
+\[
+ \phi_0' = g \phi_0.
+\]
+This ensures that any two vacua are ``the same'', so which $\phi_0$ we pick doesn't really matter. Indeed, given two such vacua and $g$ relating them, it is a straightforward computation to check that
+\[
+ \stab(\phi_0') = g \stab(\phi_0) g^{-1}.
+\]
+So any two choices of vacuum will result in conjugate, and in particular isomorphic, subgroups $H$.
+
+It is a curious, and also unimportant, observation that given a choice of vacuum $\phi_0$, we have a canonical bijection
+\[
+ \begin{tikzcd}[cdmap]
+ G/H \ar[r, "\sim"] & \Phi_0\\
+ g \ar[r, maps to] & g \phi_0
+ \end{tikzcd}.
+\]
+So we can identify $G/H$ and $\Phi_0$. This is known as the orbit-stabilizer theorem.
+
+We now try to understand what happens when we try to do perturbation theory around our choice of vacuum. At $\phi = \phi_0 + \delta \phi$, we can as usual write
+\[
+ V(\phi_0 + \delta \phi) - V(\phi_0) = \frac{1}{2} \delta \phi_r \frac{\partial^2 V}{\partial \phi_r \partial \phi_s} \delta \phi_s + O(\delta \phi^3).
+\]
+This quadratic $\partial^2 V$ term is now acting as a mass. We call it the \term{mass matrix}
+\[
+ M_{rs}^2 = \frac{\partial^2 V}{\partial \phi_r \partial \phi_s}.
+\]
Note that we are being sloppy with where our indices go, because $(M^2)^{rs}$ is ugly. It doesn't really matter much, since $\phi$ takes values in $\R^N$ (or $\C^N$), which is a Euclidean space.
+
In our previous example, we saw that there were certain modes that went massless. We now want to see if the same happens. To do so, we pick a basis $\{t^a\}$ of $\mathfrak{h}$ (the Lie algebra of $H$), and then extend it to a basis $\{t^a, \theta^a\}$ of $\mathfrak{g}$.
+
+We consider a variation
+\[
+ \delta \phi = \varepsilon \theta^a \phi_0
+\]
+around $\phi_0$.
+
Note that when we write $\theta^a \phi_0$, we mean the resulting infinitesimal change of $\phi_0$ when $\theta^a$ acts on the field. For a general Lie algebra element, this may be zero. However, we know that $\theta^a$ does not belong to $\mathfrak{h}$, so it does not fix $\phi_0$. Thus (assuming sensible non-degeneracy conditions) this implies that $\theta^a \phi_0$ is non-zero.
+
+At this point, the uncareful reader might be tempted to say that since $G$ is a symmetry of $V$, we must have $V(\phi_0 + \delta \phi) = V(\phi_0)$, and thus
+\[
+ (\theta^a \phi)_r M_{rs}^2 (\theta^a \phi)_s = 0,
+\]
and so we have found zero eigenvectors of $M_{rs}^2$. But this is wrong. For example, if we take
+\[
+ M =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix},\quad
+ \theta^a \phi =
+ \begin{pmatrix}
+ 1\\1
+ \end{pmatrix},
+\]
+then our previous equation holds, but $\theta^a \phi$ is \emph{not} a zero eigenvector. We need to be a bit more careful.
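The pitfall is elementary linear algebra: a vector can be null for an indefinite quadratic form without lying in the kernel. Concretely (a trivial numerical check, not from the lectures):

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [0.0, -1.0]])
w = np.array([1.0, 1.0])

assert np.isclose(w @ M @ w, 0)     # the quadratic form vanishes on w...
assert not np.allclose(M @ w, 0)    # ...but w is not a zero eigenvector
```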
+
+Instead, we note that by definition of $G$-invariance, for any field value $\phi$ and a corresponding variation
+\[
+ \phi \mapsto \phi + \varepsilon \theta^a \phi,
+\]
+we must have
+\[
+ V(\phi + \delta \phi) = V(\phi),
+\]
+since this is what it means for $G$ to be a symmetry.
+
+Expanding to first order, this says
+\[
+ V(\phi + \delta \phi) - V(\phi) = \varepsilon (\theta^a \phi)_r \frac{\partial V}{\partial \phi_r} = 0.
+\]
+Now the trick is to take a further derivative of this. We obtain
+\[
+ 0 = \frac{\partial}{\partial \phi_s} \left((\theta^a \phi)_r \frac{\partial V}{\partial \phi_r}\right) = \left(\frac{\partial}{\partial \phi_s} (\theta^a \phi)_r\right) \frac{\partial V}{\partial \phi_r} + (\theta^a \phi)_r M_{rs}^2.
+\]
+Now at a vacuum $\phi = \phi_0$, the first term vanishes. So we must have
+\[
+ (\theta^a \phi_0)_r M_{rs}^2 = 0.
+\]
By assumption, $\theta^a \phi_0 \not= 0$. So we have found a zero eigenvector of $M_{rs}^2$. Recall that we had $\dim G - \dim H$ of these $\theta^a$ in total. So we have found $\dim G - \dim H$ zero eigenvectors in total.
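We can see this zero-mode mechanism concretely in the $O(3)$ model with vacuum $\phi_0 = (0, 0, v)$: the rotation fixing $\phi_0$ annihilates it, while each broken generator produces a genuine zero eigenvector of $M^2$. A small numerical sketch (with arbitrary illustrative values $\lambda = 1$, $v = 2$; not from the lectures):

```python
import numpy as np

lam, v = 1.0, 2.0
phi0 = np.array([0.0, 0.0, v])

# Hessian of V = (lam/4)(phi.phi - v^2)^2 is
# M2_rs = lam[(phi.phi - v^2) delta_rs + 2 phi_r phi_s];
# at phi0 the first term vanishes since phi0.phi0 = v^2
M2 = 2.0 * lam * np.outer(phi0, phi0)

L12 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])  # unbroken
L13 = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # broken

assert np.allclose(L12 @ phi0, 0)          # t phi0 = 0: says nothing
assert not np.allclose(L13 @ phi0, 0)      # theta phi0 != 0
assert np.allclose(M2 @ (L13 @ phi0), 0)   # a genuine zero eigenvector of M2
```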
+
+We can do a small sanity check. What if we replaced $\theta^a$ with $t^a$? The same derivations would still follow through, and we deduce that
+\[
+ (t^a \phi_0)_r M_{rs}^2 = 0.
+\]
+But since $t^a$ is an \emph{unbroken symmetry}, we actually just have $t^a \phi_0 = 0$, and this really doesn't say anything interesting.
+
+What about the other eigenvalues? They could be zero for some other reason, and there is no reason to believe one way or the other. However, generally, in the scenarios we meet, they are usually massive. So after spontaneous symmetry breaking, we end up with $\dim G - \dim H$ massless modes, and $N - (\dim G - \dim H)$ massive ones.
+
+This is \term{Goldstone's theorem} in the classical case.
+\begin{eg}
+ In our previous $O(N)$ model, we had $G = O(N)$, and $H = O(N - 1)$. These have dimensions
+ \[
+ \dim G = \frac{N(N - 1)}{2},\quad \dim H = \frac{(N - 1)(N - 2)}{2}.
+ \]
+ Then we have
+ \[
+ \Phi_0 \cong S^{N - 1},
+ \]
+ the $(N - 1)$-dimensional sphere. So we expect to have
+ \[
+ \frac{N - 1}{2} (N - (N - 2)) = N - 1
+ \]
+ massless modes, which are the $\pi$ fields we found. We also found $N - (N - 1) = 1$ massive mode, namely the $\sigma$ field.
+\end{eg}
+
+\begin{eg}
+ In the first discrete case, we had $G = \Z/2\Z$ and $H = \{e\}$. These are (rather degenerate) $0$-dimensional Lie groups. So we expect $0 - 0 = 0$ massless modes, and $1 - 0 = 1$ massive modes, which is what we found.
+\end{eg}
+\subsection{Goldstone's theorem}
+We now consider the quantum version of Goldstone's theorem. We hope to get something similar.
+
+Again, suppose our Lagrangian has a symmetry group $G$, which is spontaneously broken into $H \leq G$ after picking a preferred vacuum $\bket{0}$. Again, we take a Lie algebra basis $\{t_a, \theta_a\}$ of $\mathfrak{g}$, where $\{t_a\}$ is a basis for $\mathfrak{h}$. Note that we will assume that $a$ runs from $1, \cdots, \dim G$, and we use the first $\dim H$ labels to refer to the $t_a$, and the remaining to label the $\theta_a$, so that the actual basis is
+\[
+ \{t_1, t_2, \cdots, t_{\dim H}, \theta_{\dim H + 1}, \cdots, \theta_{\dim G}\}.
+\]
+For aesthetic reasons, this time we put the indices on the generators below.
+
+Now recall, say from AQFT, that these symmetries of the Lagrangian give rise to conserved currents $j_a^\mu(x)$. These in turn give us conserved charges
+\[
+ Q_a = \int \d^3 x\; j^0_a (x).
+\]
Moreover, under the action of the $t_a$ and $\theta_a$, the corresponding change in $\phi$ is given by
\[
 \delta \phi = -[Q_a, \phi].
\]
The fact that the $\theta_a$ break the symmetry of $\bket{0}$ tells us that, for $a > \dim H$,
\[
 \brak{0}[Q_a, \phi(0)] \bket{0} \not= 0.
+\]
+Note that the choice of evaluating $\phi$ at $0$ is completely arbitrary, but we have to pick somewhere, and $0$ is easy to write. Writing out the definition of $Q_a$, we know that
+\[
+ \int \d^3 x\; \brak{0} [j_a^0(x), \phi(0)] \bket{0} \not= 0.
+\]
+More generally, since the label $^0$ is arbitrary by Lorentz invariance, we are really looking at the relation
+\[
+ \brak{0} [j_a^\mu (x), \phi(0)] \bket{0} \not= 0,
+\]
+and writing it in this more Lorentz covariant way makes it easier to work with. The goal is to deduce the existence of massless states from this non-vanishing.
+
+For convenience, we will write this expression as
+\[
+ X_a^\mu = \brak{0} [j_a^\mu (x), \phi(0)] \bket{0}.
+\]
+We treat the two terms in the commutator separately. The first term is
+\[
+ \brak{0} j_a^\mu (x) \phi(0) \bket{0} = \sum_n \brak{0}j_a^\mu (x) \bket{n} \brak{n}\phi(0) \bket{0}.\tag{$*$}
+\]
+We write $p_n$ for the momentum of $\bket{n}$. We now note the operator equation
+\[
+ j_a^\mu (x) = e^{i\hat{p} \cdot x} j_a^\mu (0) e^{-i\hat{p} \cdot x}.
+\]
+So we know
+\[
+ \brak{0}j_a^\mu(x) \bket{n} = \brak{0} j_a^\mu(0) \bket{n} e^{-ip_n \cdot x}.
+\]
+We can use this to do some Fourier transform magic. By direct verification, we can see that $(*)$ is equivalent to
+\[
+ i \int \frac{\d^4 k}{(2\pi)^3} \rho_a^\mu(k) e^{-ik\cdot x},
+\]
+where
+\[
+ i \rho_a^\mu (k) = (2\pi)^3 \sum_n \delta^4(k - p_n)\brak{0}j_a^\mu(0) \bket{n} \brak{n}\phi(0)\bket{0}.
+\]
+Similarly, we define
+\[
+ i \tilde{\rho}_a^\mu (k) = (2\pi)^3 \sum_n \delta^4(k - p_n) \brak{0} \phi(0) \bket{n} \brak{n} j_a^\mu (0) \bket{0}.
+\]
+Then we can write
+\[
+ X_a^\mu = i \int \frac{\d^4 k}{(2\pi)^3}\left(\rho_a^\mu (k) e^{-ik\cdot x} - \tilde{\rho}_a^\mu(k) e^{+ik\cdot x}\right).
+\]
+This is called the \term{K\"allen-Lehmann spectral representation}.
+
+We now claim that $\rho_a^\mu(k)$ and $\tilde{\rho}_a^\mu(k)$ can be written in a particularly nice form. This involves a few observations (when I say $\rho_a^\mu$, I actually mean $\rho_a^\mu$ and $\tilde{\rho}_a^\mu$):
+\begin{itemize}
 \item $\rho_a^\mu$ depends on $k$ Lorentz covariantly, so it ``must'' be a scalar multiple of $k^\mu$, the only $4$-vector available.
 \item $\rho_a^\mu(k)$ must vanish when $k^0 < 0$, since states with $k^0 < 0$ are non-physical.
+ \item By Lorentz invariance, the magnitude of $\rho_a^\mu(k)$ can depend only on the length (squared) $k^2$ of $k$.
+\end{itemize}
Under these assumptions, we can write $\rho_a^\mu$ and $\tilde{\rho}_a^\mu$ as
+\begin{align*}
+ \rho_a^\mu (k) &= k^\mu \Theta(k^0) \rho_a (k^2),\\
+ \tilde{\rho}_a^\mu (k) &= k^\mu \Theta(k^0) \tilde{\rho}_a(k^2),
+\end{align*}
+where $\Theta$ is the \term{Heaviside theta function}
+\[
+ \Theta(x) =
+ \begin{cases}
+ 1 & x > 0\\
+ 0 & x \leq 0
+ \end{cases}.
+\]
+So we can write
+\[
+ X_a^\mu = i \int \frac{\d^4 k}{(2\pi)^3}\;k^\mu \Theta(k^0) (\rho_a(k^2)e^{-ik\cdot x} - \tilde{\rho}_a(k^2) e^{+ik\cdot x}).
+\]
+We now use the (nasty?) trick of hiding the $k^\mu$ by replacing it with a derivative:
+\[
+ X_a^\mu = - \partial^\mu \int \frac{\d^4 k}{(2\pi)^3} \Theta(k^0)\left(\rho_a(k^2) e^{-ik\cdot x} + \tilde{\rho}_a(k^2) e^{ik\cdot x}\right).
+\]
+Now we might find the expression inside the integral a bit familiar. Recall that the propagator is given by
+\[
 D(x, \sigma) = \brak{0} \phi(x) \phi(0) \bket{0} = \int \frac{\d^4 k}{(2\pi)^3}\; \Theta(k^0) \delta(k^2 - \sigma) e^{-ik\cdot x},
+\]
+where $\sigma$ is the square of the mass of the field. Using the really silly fact that
+\[
+ \rho_a(k^2) = \int \d \sigma\; \rho_a(\sigma) \delta(k^2 - \sigma),
+\]
+we find that
+\[
+ X_a^\mu = -\partial^\mu \int \d \sigma\; (\rho_a(\sigma) D(x, \sigma) + \tilde{\rho}_a (\sigma) D(-x, \sigma)).
+\]
+Now we have to recall more properties of $D$. For spacelike $x$, we have $D(x, \sigma) = D(-x, \sigma)$. Therefore, requiring $X_a^\mu$ to vanish for spacelike $x$ by causality, we see that we must have
+\[
+ \rho_a(\sigma) = - \tilde{\rho}_a(\sigma).
+\]
+Therefore we can write
+\[
 X_a^\mu = -\partial^\mu \int \d \sigma\; \rho_a(\sigma) i\Delta(x, \sigma),\tag{$\dagger$}
+\]
+where
+\[
+ i \Delta(x, \sigma) = D(x, \sigma) - D(-x, \sigma) = \int \frac{\d^4 k}{(2\pi)^3} \delta(k^2 - \sigma) \varepsilon(k^0) e^{-ik\cdot x},
+\]
+and
+\[
+ \varepsilon(k^0) =
+ \begin{cases}
+ +1 & k^0 > 0\\
+ -1 & k^0 < 0
+ \end{cases}.
+\]
+This $\Delta$ is again a different sort of propagator.
+
+We now use current conservation, which tells us
+\[
+ \partial_\mu j^\mu_a = 0.
+\]
+So we must have
+\[
+ \partial_\mu X_a^\mu =- \partial^2 \int \d \sigma\; \rho_a(\sigma) i \Delta(x, \sigma) = 0.
+\]
+On the other hand, by inspection, we see that $\Delta$ satisfies the Klein-Gordon equation
+\[
+ (\partial^2 + \sigma) \Delta = 0.
+\]
+So in $(\dagger)$, we can replace $-\partial^2 \Delta$ with $\sigma \Delta$. So we find that
+\[
 \partial_\mu X_a^\mu = \int \d \sigma\; \sigma \rho_a (\sigma) i \Delta(x, \sigma) = 0.
+\]
+This is true for all $x$. In particular, it is true for timelike $x$ where $\Delta$ is non-zero. So we must have
+\[
+ \sigma \rho_a(\sigma) = 0.
+\]
+But we also know that $\rho_a(\sigma)$ cannot be identically zero, because $X_a^\mu$ is not. So the only possible explanation is that
+\[
+ \rho_a(\sigma) = N_a \delta(\sigma),
+\]
+where $N_a$ is a dimensionful non-zero constant.
+
+Now we retrieve our definitions of $\rho_a$. Recall that they are defined by
+\begin{align*}
+ i \rho_a^\mu (k) &= (2\pi)^3 \sum_n \delta^4(k - p_n)\brak{0}j_a^\mu(0) \bket{n} \brak{n}\phi(0)\bket{0}\\
+ \rho_a^\mu (k) &= k^\mu \Theta(k^0) \rho_a (k^2).
+\end{align*}
+So the fact that $\rho_a(\sigma) = N_a \delta(\sigma)$ implies that there must be some states, which we shall call $\bket{B(p)}$, of momentum $p$, such that $p^2 = 0$, and
+\begin{align*}
+ \brak{0}j_a^\mu(0)\bket{B(p)} &\not= 0\\
+ \brak{B(p)} \phi(0) \bket{0} &\not= 0.
+\end{align*}
The condition that $p^2 = 0$ tells us these particles are massless, which are exactly the massless modes we were looking for! These are the \term{Goldstone bosons}.
+
+We can write the values of the above as
+\begin{align*}
+ \brak{0} j_a^\mu (0)\bket{B(p)} &= i F_a^B p^\mu\\
+ \brak{B(p)} \phi(0)\bket{0} &= Z^B,
+\end{align*}
whose forms are fixed by Lorentz covariance. By dimensional analysis, we know $F_a^B$ is a dimension $-1$ constant, and $Z^B$ is a dimensionless constant.
+
+From these formulas, we note that since $\phi(0)\bket{0}$ is rotationally invariant, we deduce that $\bket{B(p)}$ also is. So we deduce that these are in fact spin $0$ particles.
+
+Finally, we end with some computations that relate these different numbers we obtained. We will leave the computational details for the reader, and just outline what we should do. Using our formula for $\rho$, we find that
+\[
 \brak{0} [j_a^\mu(x), \phi(0)] \bket{0} = - \partial^\mu \int N_a \delta (\sigma) i \Delta(x, \sigma) \;\d \sigma = -iN_a \partial^\mu \Delta(x, 0).
+\]
+Integrating over space, we find
+\[
+ \brak{0}[Q_a, \phi(0)]\bket{0} = - iN_a \int \d^3 x\; \partial^0 \Delta(x, 0) = i N_a.
+\]
+Then we have
+\[
+ \brak{0} t_a \phi(0)\bket{0} = i \brak{0} [Q_a, \phi(0)]\bket{0} = i \cdot i N_a = -N_a.
+\]
+So this number $N_a$ sort-of measures how much the symmetry is broken, and we once again see that it is the breaking of symmetry that gives rise to a non-zero $N_a$, hence the massless bosons.
+
+We also recall that we had
+\[
+ i k^\mu \Theta(k^0) N_a \delta(k^2) = \sum_B \int \frac{\d^3 p}{2|\mathbf{p}|}\; \delta^4(k - p) \brak{0} j_a^\mu(0) \bket{B(p)} \brak{B(p)} \phi(0) \bket{0}.
+\]
+On the RHS, we can plug in the names we gave for the matrix elements, and on the left, we can write it as an integral
+\[
+ \int \frac{\d^3 p}{2|\mathbf{p}|} \delta^4 (k - p) i k^\mu N_a = \int \frac{\d^3 p}{2|\mathbf{p}|} \delta^4(k - p) i p^\mu \sum_B F_a^B Z^B.
+\]
+So we must have
+\[
 N_a = \sum_B F_a^B Z^B.
+\]
+We can repeat this process for each independent symmetry-breaking $\theta^a \in \mathfrak{g} \setminus \mathfrak{h}$, and obtain a Goldstone boson this way. So all together, at least superficially, we find
+\[
+ n = \dim G - \dim H
+\]
+many Goldstone bosons.
+
+
+It is important to figure out the assumptions we used to derive the result. We mentioned ``Lorentz invariance'' many times in the proof, so we certainly need to assume our theory is Lorentz invariant.
+%While it may not seem obvious, the proof actually requires there are more than $2$ spacetime dimensions, and in fact the result does not hold in $2$ or $1$ dimensions.
+Another perhaps subtle point is that we also needed states to have a positive definite norm. It turns out in the case of gauge theory, these do not necessarily hold, and we need to work with them separately. % revisit
+
+
+
+%\[
+% \rho_a(\sigma) = - \tilde{\rho}_a(\sigma).
+%\]
+%Therefore we can write
+%\[
+% X_a^\mu = -\partial^\mu \int \d \sigma\; \rho^a(\sigma) i\Delta(x, \sigma),\tag{$\dagger$}
+%\]
+%where
+%\[
+% i \Delta(x, \sigma) = D(x, \sigma) - D(-x, \sigma) = \int \frac{\d^4 k}{(2\pi)^3} \delta(k^2 - \sigma) \varepsilon(k^0) e^{-ik\cdot x},
+%\]
+%where
+%\[
+% \varepsilon(k^0) =
+% \begin{cases}
+% +1 & k^0 > 0\\
+% -1 & k^0 < 0
+% \end{cases}.
+%\]
+%We can think of this as some different sort of propagator. For spacelike $x$, this vanishes.
+%
+%We now use current conservation, so
+%\[
+% \partial_\mu j^\mu_a = 0.
+%\]
+%So we find
+%\[
+% \partial_\mu X_a^\mu =- \partial^2 \int \d \sigma\; \rho_a(\sigma) i \Delta(x, \sigma) = 0.
+%\]
+%Also, the Klein-Gordon equation gives
+%\[
+% (\partial^2 + \sigma) \Delta = 0.
+%\]
+%So we can replace $-\partial^s \Delta$ with $\sigma \Delta$. So
+%\[
+% \partial_\mu X_a^\mu = \int \d \sigma\; \sigma \rho^a (\sigma) i \Delta(x, \sigma) = 0.
+%\]
+%This is true for all $x$. In particular, it is true for timelike $x$ where $\Delta$ is non-zero. So we need
+%\[
+% \sigma \rho(\sigma) = 0.
+%\]
+%There are two possibilities:
+%\begin{enumerate}
+% \item $\rho(\sigma) = 0$. This means
+% \[
+% \brak{0} [j_a^\mu(x), \phi(0)] \bket{0} = 0.
+% \]
+% So $t^a$ is an unbroken symmetry, and
+% \[
+% \brak{0} t^a \phi\bket{0} = 0.
+% \]
+% \item $\rho^a(\sigma) = N^a \delta(\sigma)$, where $N^a$ is a dimensionful non-zero constant.
+%\end{enumerate}
+%The first case is boring, so we want to figure out what the consequences of (ii) are. Plugging this into $(\dagger)$, we get
+%\[
+% \brak{0} [j_a^\mu(x), \phi(0)] \bket{0} = - \partial^\mu \int N^a \delta (\sigma) i \Delta(x, \sigma) \;\d \sigma = -iN^a \partial^\mu \Delta(x, 0).
+%\]
+%Integrating over time, we find
+%\[
+% \brak{0}[Q^a, \phi(0)]\bket{0} = - iM^a \int \d^3 x\; \partial^0 \Delta(x, 0) = i N^a.
+%\]
+%Then we have\[
+% \brak{0} t^a \phi_0\bket{0} = i \brak{0} [Q^a, \phi(0)]\bket{0} = i \cdot i N^a = -N^a.
+%\]
+%Now if $N^a$ and $\phi_0$ are both non-zero, then some states in $\rho_a^\mu$ and $\tilde{\rho}_a^\mu$ have non-zero matrix elements. Label these states by $B(p)$. From Lorentz covariance and dimensional analysis, we have
+%\begin{align*}
+% \brak{0} j_a^\mu (0)\bket{B(p)} &= i F_{aB} p^\mu\\
+% \brak{B(p)} \phi(0)\bket{0} &= Z_B,
+%\end{align*}
+%where $F_{aB}$ is a dimension $-1$ constant, and $Z_B$ is a dimensionless constnat.
+%
+%We see that $B(p)$ are spin zero by rotation invariance, and massless (only contribute when $\sigma = p^2 = 0$). We have
+%\[
+% i \rho_a^\mu (k) = i k^\mu \Theta(k^0) N^a \sigma(k^2) = \sum_B \int \frac{\d^3 p}{2|\mathbf{p}|} \delta^4(k - p) \brak{0} j_a^\mu(0) \bket{B(p)} \brak{B(p)} \phi(0) \bket{0}.
+%\]
+%We write the LHS as an integral, and plug in the matrix element on the RHS. Then we see that
+%\[
+% \int \frac{\d^3 p}{2|\mathbf{p}|} \delta^4 (k - p) i k^\mu N^a = \int \frac{\d^3 p}{2|\mathbf{p}|} \delta^4(k - p) i p^\mu \sum_B F_{aB} Z_B.
+%\]
+%So we have
+%\[
+% N^a = \sum_B F_{aB} Z_B.
+%\]
+%The number of broken generators is
+%\[
+% n = \dim G - \dim H.
+%\]
+%So we see that there are $n$ of the $\rho^a$ that have non-zero contributions at $\sigma = 0$.
+%
+%Here $F_{aB}$ is a rank $n$ matrix, and this gives rise to $n$ Goldstone bosons $B$.
+%
+%Note that we assumed we have a Lorentz invariant theory, and we also assumed there were more than $2$ spacetime dimensions. We don't get spontaneous symmetry breaking in $2$ or $1$ dimensions. We also assumed that states have a positive definite norm. % something about things cancelling.
+
+\subsection{The Higgs mechanism}
+Recall that we had a few conditions for our previous theorem to hold. In the case of gauge theories, these typically don't hold. For example, in QED, imposing a Lorentz invariant gauge condition (e.g.\ the Lorentz gauge) gives us negative norm states. On the other hand, if we fix the gauge condition so that we don't have negative norm states, this breaks Lorentz invariance. What happens in this case is known as the \term{Higgs mechanism}.
+
+Let's consider the case of scalar electrodynamics. This involves two fields:
+\begin{itemize}
+ \item A complex scalar field $\phi(x) \in \C$.
+ \item A 4-vector field $A(x) \in \R^{1, 3}$.
+\end{itemize}
As usual, the components of $A(x)$ are denoted $A_\mu(x)$. From these we define the electromagnetic field tensor
+\[
+ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
+\]
+and we have a covariant derivative
+\[
+ \D_\mu = \partial_\mu + i q A_\mu.
+\]
+As usual, the Lagrangian is given by
+\[
+ \mathcal{L} = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu} + (\D_\mu \phi)^* (\D^\mu \phi) - V(|\phi|^2)
+\]
+for some potential $V$.
+
+A $\U(1)$ gauge transformation is specified by some $\alpha(x) \in \R$. Then the fields transform as
+\begin{align*}
+ \phi(x) &\mapsto e^{i\alpha(x)} \phi(x)\\
+ A_\mu(x) &\mapsto A_\mu(x) - \frac{1}{q} \partial_\mu \alpha(x).
+\end{align*}
+We will consider a $\phi^4$ theory, so that the potential is
+\[
+ V(|\phi|^2) = \mu^2 |\phi|^2 + \lambda |\phi|^4.
+\]
+As usual, we require $\lambda > 0$, and if $\mu^2 > 0$, then this is boring with a unique vacuum at $\phi = 0$. In this case, $A_\mu$ is massless and $\phi$ is massive.
+
If instead $\mu^2 < 0$, then we have the interesting case, and the minima lie at
+\[
+ |\phi_0|^2 = -\frac{\mu^2}{2\lambda} \equiv \frac{v^2}{2}.
+\]
+Without loss of generality, we expand around a real $\phi_0$, and write
+\[
+ \phi(x) = \frac{1}{\sqrt{2}} e^{i \theta(x)/v} (v + \eta(x)),
+\]
+where $\eta, \theta$ are \emph{real} fields.
+
Now we notice that $\theta$ isn't a ``genuine'' field (despite being real), since our theory is invariant under gauge transformations and $\theta$ can be gauged away. Indeed, by picking
+\[
+ \alpha(x) = - \frac{1}{v} \theta(x),
+\]
+we can get rid of the $\theta(x)$ term, and be left with
+\[
+ \phi(x) = \frac{1}{\sqrt{2}}(v + \eta(x)).
+\]
+This is called the \term{unitary gauge}. Of course, once we have made this choice, we no longer have the gauge freedom, but on the other hand everything else going on becomes much clearer.
+
+%For small fluctuations, we approximate
+%\[
+% \phi(x) \approx \frac{1}{\sqrt{2}} (v + \eta + i \theta).
+%\]
+%We will chose $v > 0$. We substitute this into the potential to get
+%\[
+% V(\phi^* \phi) = \lambda \left(|\phi|^2 - \frac{v^2}{2}\right)^2 = \frac{\lambda}{4} (v^2 + \eta^2 + 2 v \eta - v^2)^2
+%\]
+%where we as usual dropped a constant. Putting this to the Lagrangian, we get
+%\[
+% \mathcal{L} = \frac{1}{2} \left(\partial_\mu \eta \partial^\mu \eta + 2 \mu^2 \eta^2\right) + \frac{1}{2} \partial_\mu \theta \partial^\mu \theta - \frac{1}{4} F^{\mu\nu} F_{\mu\nu} + qv A_\mu \partial^\mu \theta + \frac{q^2 v^2}{2} A_\mu A^\mu + \mathcal{L}_{\mathrm{int}},
+%\]
+%where $\mathcal{L}_{\mathrm{int}}$ involves terms with $\geq 2$ fields.
+%
+%It appears that that we have a mass for $\eta$ and $A_\mu$, but not $\theta$, and also a strange $A_\mu \partial^\mu \theta$ term.
+%
+%To see what is going on, we transform to the \term{unitary gauge}.
+%\[
+% A_\mu \mapsto A_\mu + \frac{1}{qv} \partial_\mu \theta(x),
+%\]
+%where we have chosen
+%\[
+% \alpha(x) = - \frac{1}{v} \theta(x).
+%\]
+%Then the $\phi$ field transforms as
+%\[
+% \phi \mapsto e^{i\theta/v} \phi = \frac{1}{2} (v + \eta).
+%\]
+%This corresponds to getting rid of the phase, and so we may assume $\phi$ is real.
+In this gauge, the Lagrangian can be written as
+\[
+ \mathcal{L} = \frac{1}{2}(\partial_\mu \eta \partial^\mu \eta + 2 \mu^2 \eta^2) - \frac{1}{4} F^{\mu\nu}F_{\mu\nu} + \frac{q^2 v^2}{2} A_\mu A^\mu + \mathcal{L}_{\mathrm{int}},
+\]
+where $\mathcal{L}_{\mathrm{int}}$ is the interaction piece that involves more than two fields.
+
+We can now just read off what is going on in here.
+\begin{itemize}
+ \item The $\eta$ field is massive with mass
+ \[
 m_\eta^2 = - 2 \mu^2 = 2 \lambda v^2 > 0.
+ \]
+ \item The photon now gains a mass
+ \[
+ m_A^2 = q^2 v^2.
+ \]
 \item What would be the Goldstone boson, namely the $\theta$ field, has been ``eaten'' to become the longitudinal polarization of $A_\mu$, and $A_\mu$ has gained a degree of freedom (or rather, what was a gauge non-degree-of-freedom became a genuine degree of freedom).
+\end{itemize}
+One can check that the interaction piece becomes
+\[
 \mathcal{L}_{\mathrm{int}} = \frac{q^2}{2} A_\mu A^\mu \eta^2 + q m_A A_\mu A^\mu \eta - \frac{\lambda}{4} \eta^4 - m_\eta \sqrt{\frac{\lambda}{2}} \eta^3.
+\]
+So we have interactions that look like
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (tl);
+ \vertex [below right=of tl] (c);
+ \vertex [above right=of c] (tr);
+ \vertex [below left=of c] (bl);
+ \vertex [below right=of c] (br);
+
+ \diagram*{
+ (tl) -- [gluon] (c) -- [gluon] (tr),
+ (bl) -- [scalar] (c) -- [scalar] (br)
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (tl);
+ \vertex [below right=of tl] (c);
+ \vertex [above right=of c] (tr);
+ \vertex [below =of c] (b);
+
+ \diagram*{
+ (tl) -- [gluon] (c) -- [gluon] (tr),
+ (b) -- [scalar] (c)
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (tl);
+ \vertex [below right=of tl] (c);
+ \vertex [above right=of c] (tr);
+ \vertex [below left=of c] (bl);
+ \vertex [below right=of c] (br);
+
+ \diagram*{
+ (tl) -- [scalar] (c) -- [scalar] (tr),
+ (bl) -- [scalar] (c) -- [scalar] (br)
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (tl);
+ \vertex [below right=of tl] (c);
+ \vertex [above right=of c] (tr);
+ \vertex [below =of c] (b);
+
+ \diagram*{
+ (tl) -- [scalar] (c) -- [scalar] (tr),
+ (b) -- [scalar] (c)
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+This $\eta$ field is the ``Higgs boson'' in our toy theory.
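As a sanity check, one can expand the potential in unitary gauge mechanically and read off the $\eta$ mass and self-couplings. Here is a quick sketch using Python's \texttt{sympy} (with the minimum condition $\mu^2 = -\lambda v^2$ substituted in by hand):

```python
import sympy as sp

eta, v, lam = sp.symbols("eta v lam", positive=True)
mu2 = -lam * v**2                      # minimum condition: mu^2 = -lambda v^2
phi_sq = (v + eta)**2 / 2              # |phi|^2 in unitary gauge
V = sp.expand(mu2 * phi_sq + lam * phi_sq**2)

m_eta2 = 2 * V.coeff(eta, 2)           # mass term is (1/2) m_eta^2 eta^2
assert sp.simplify(m_eta2 - 2 * lam * v**2) == 0

# Self-couplings: -lambda/4 eta^4 and -m_eta sqrt(lambda/2) eta^3 in L = -V
assert sp.simplify(V.coeff(eta, 4) - lam / 4) == 0
assert sp.simplify(V.coeff(eta, 3) - sp.sqrt(lam / 2) * sp.sqrt(m_eta2)) == 0
```

The cubic coefficient $m_\eta \sqrt{\lambda/2}$ indeed equals $\lambda v$, as the expansion shows.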
+
+\subsection{Non-abelian gauge theories}
+We'll not actually talk about spontaneous symmetry in non-abelian gauge theories. That is the story of the next chapter, where we start studying electroweak theory. Instead, we'll just briefly set up the general framework of non-abelian gauge theory.
+
+In general, we have a gauge group $G$, which is a compact Lie group with Lie algebra $\mathfrak{g}$. We also have a representation of $G$ on $\C^n$, which we will assume is unitary, i.e.\ each element of $G$ is represented by a unitary matrix. Our field $\psi(x) \in \C^n$ takes values in this representation.
+
+A \term{gauge transformation} is specified by giving a $g(x) \in G$ for each point $x$ in the universe, and then the field transforms as
+\[
+ \psi(x) \mapsto g(x) \psi(x).
+\]
+Alternatively, we can represent this transformation infinitesimally, by producing some $t(x) \in \mathfrak{g}$, and then the transformation is specified by
+\[
+ \psi(x) \mapsto \exp(i t(x)) \psi(x).
+\]
+Associated to our gauge theory is a \term{gauge field} $A_\mu(x) \in \mathfrak{g}$ (i.e.\ we have an element of $\mathfrak{g}$ for each $\mu$ and $x$), which transforms under an infinitesimal transformation $t(x) \in \mathfrak{g}$ as
+\[
 A_\mu(x) \mapsto A_\mu(x) - \frac{1}{g} \partial_\mu t(x) + i[t, A_\mu].
+\]
The gauge covariant derivative is again given by
\[
 \D_\mu = \partial_\mu + i g A_\mu,
\]
where all fields are, of course, implicitly evaluated at $x$. As before, we can define
+\[
 F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + ig [A_\mu, A_\nu] \in \mathfrak{g}.
+\]
+Alternatively, we can write this as
+\[
+ [\D_\mu, \D_\nu] = ig F_{\mu\nu}.
+\]
+We will later on work in coordinates. We can pick a basis of $\mathfrak{g}$, say $t^1, \cdots, t^n$. Then we can write an arbitrary element of $\mathfrak{g}$ as $\theta^a(x) t^a$. Then in this basis, we write the components of the gauge field as $A_\mu^a$, and similarly the field strength is denoted $F_{\mu\nu}^a$. We can define the \term{structure constants} \term{$f^{abc}$} by
+\[
 [t^a, t^b] = i f^{abc}t^c.
+\]
+Using this notation, the gauge part of the Lagrangian $\mathcal{L}$ is
+\[
+ \mathcal{L}_g = -\frac{1}{4} F_{\mu\nu}^a F^{a\mu\nu} = - \frac{1}{2} \Tr (F_{\mu\nu} F^{\mu\nu}).
+\]
+We now move on to look at some actual theories.
+
+\section{Electroweak theory}
+We can now start discussing the standard model. In this chapter, we will account for everything in the theory apart from the strong force. In particular, we will talk about the gauge (electroweak) and Higgs part of the theory, as well as the matter content.
+
+The description of the theory will be entirely in the classical world. It is only when we do computations that we quantize the theory and compute $S$-matrices.
+
+\subsection{Electroweak gauge theory}
+We start by understanding the gauge and Higgs part of the theory. As mentioned, the gauge group is
+\[
+ G = \SU(2)_L \times \U(1)_Y.
+\]
+We will see that this is broken by the Higgs mechanism.
+
+It is convenient to pick a basis for $\su(2)$, which we will denote by
+\[
+ \tau^a = \frac{\sigma^a}{2}.
+\]
+Note that these are not genuinely elements of $\su(2)$. We need to multiply these by $i$ to actually get members of $\su(2)$ (thus, they form a complex basis for the complexification of $\su(2)$). As we will later see, these act on fields as $e^{i \tau^a}$ instead of $e^{\tau^a}$. Under this basis, the structure constants are $f^{abc} = \varepsilon^{abc}$.
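We can verify this algebra numerically. The following sketch (Python/\texttt{numpy}) checks that $\tau^a = \sigma^a/2$ satisfies $[\tau^a, \tau^b] = i \varepsilon^{abc} \tau^c$:

```python
import numpy as np

# Pauli matrices and tau^a = sigma^a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [s / 2 for s in sigma]

# Totally antisymmetric epsilon^{abc} with eps[0,1,2] = 1
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1, -1

# Check [tau^a, tau^b] = i eps^{abc} tau^c
for a in range(3):
    for b in range(3):
        comm = tau[a] @ tau[b] - tau[b] @ tau[a]
        rhs = sum(1j * eps[a, b, c] * tau[c] for c in range(3))
        assert np.allclose(comm, rhs)
```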
+
+We start by describing how our gauge group acts on the Higgs field.
+\begin{defi}[Higgs field]\index{Higgs field}
+ The \emph{Higgs field} $\phi$ is a complex scalar field with two components, $\phi(x) \in \C^2$. The $\SU(2)$ action is given by the fundamental representation on $\C^2$, and the hypercharge is $Y = \frac{1}{2}$.
+
 Explicitly, an (infinitesimal) gauge transformation can be represented by elements $\alpha^a(x), \beta(x) \in \R$, corresponding to the elements $\alpha^a(x) \tau^a \in \su(2)$ and $\beta(x) \in \uu (1) \cong \R$. Then the Higgs field transforms as
+ \[
+ \phi(x) \mapsto e^{i \alpha^a(x) \tau^a} e^{i \frac{1}{2} \beta(x)} \phi(x),
+ \]
+ where the $\frac{1}{2}$ factor of $\beta(x)$ comes from the hypercharge being $\frac{1}{2}$.
+\end{defi}
+Note that when we say $\phi$ is a scalar field, we mean it transforms trivially via Lorentz transformations. It still takes values in the vector space $\C^2$.
+
+The gauge fields corresponding to $\SU(2)$ and $\U(1)$ are denoted $W_\mu^a$ and $B_\mu$, where again $a$ runs through $a = 1, 2, 3$. The covariant derivative associated to these gauge fields is
+\[
+ \D_\mu = \partial_\mu + i g W_\mu^a \tau^a + \frac{1}{2} i g' B_\mu
+\]
+for some coupling constants $g$ and $g'$.
+
+The part of the Lagrangian relating the gauge and Higgs field is
+\[
+ \mathcal{L}_{\mathrm{gauge}, \phi} = -\frac{1}{2} \Tr (F^W_{\mu\nu} F^{W,\mu\nu}) - \frac{1}{4} F_{\mu\nu}^B F^{B, \mu\nu} + (\D_\mu \phi)^\dagger (\D^\mu \phi) - \mu^2 |\phi|^2 - \lambda |\phi|^4,
+\]
+where the field strengths of $W$ and $B$ respectively are given by
+\begin{align*}
+ F_{\mu\nu}^{W, a} &= \partial_\mu W_\nu^a - \partial_\nu W_\mu^a - g \varepsilon^{abc} W_\mu^b W_\nu^c\\
 F_{\mu\nu}^B &= \partial_\mu B_\nu - \partial_\nu B_\mu.
+\end{align*}
+In the case $\mu^2 < 0$, the Higgs field acquires a VEV, and we wlog shall choose the vacuum to be
+\[
+ \phi_0 = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 0\\ v
+ \end{pmatrix},
+\]
+where
+\[
+ \mu^2 = - \lambda v^2 < 0.
+\]
+As in the case of $\U(1)$ symmetry breaking, the gauge term gives us new things with mass. After spending hours working out the computations, we find that $(\D_\mu \phi)^\dagger (\D^\mu \phi)$ contains mass terms
+\[
+ \frac{1}{2} \frac{v^2}{4} \left(g^2 (W^1)^2 + g^2 (W^2)^2 + (-gW^3 + g' B)^2\right).
+\]
+This suggests we define the following new fields:
+\begin{align*}
+ W_\mu^{\pm} &= \frac{1}{\sqrt{2}} (W_\mu^1 \mp i W_\mu^2)\\
+ \begin{pmatrix}
+ Z_\mu^0\\
+ A_\mu
+ \end{pmatrix} &=
+ \begin{pmatrix}
+ \cos \theta_W & - \sin \theta_W\\
+ \sin \theta_W & \cos \theta_W
+ \end{pmatrix}
+ \begin{pmatrix}
+ W_\mu^3\\
+ B_\mu
+ \end{pmatrix},
+\end{align*}
+where we pick $\theta_W$ such that
+\[
+ \cos \theta_W = \frac{g}{\sqrt{g^2 + g'^2}},\quad \sin \theta_W = \frac{g'}{\sqrt{g^2 + g'^2}}.
+\]
+This $\theta_W$ is called the \term{Weinberg angle}. Then the mass term above becomes
+\[
 \frac{v^2g^2}{4} W_\mu^+ W^{-\mu} + \frac{1}{2} \frac{v^2(g^2 + g'^2)}{4} Z_\mu Z^\mu.
+\]
+Thus, our particles have masses
+\begin{align*}
+ m_W &= \frac{vg}{2}\\
+ m_Z &= \frac{v}{2} \sqrt{g^2 + g'^2}\\
+ m_\gamma &= 0,
+\end{align*}
where $\gamma$ is the $A_\mu$ particle, which is the photon. Thus, our original $\SU(2)_L \times \U(1)_Y$ symmetry breaks down to a $\U(1)_{EM}$ symmetry. In terms of the Weinberg angle,
+\[
+ m_W = m_Z \cos \theta_W.
+\]
+Thus, through the Higgs mechanism, we find that the $W^{\pm}$ and $Z$ bosons gained mass, but the photon does not. This agrees with what we find experimentally. In practice, we find that the masses are
+\begin{align*}
+ m_W &\approx \SI{80}{\giga\electronvolt}\\
+ m_Z &\approx \SI{91}{\giga\electronvolt}\\
+ m_\gamma &< \SI{e-18}{\giga\electronvolt}.
+\end{align*}
Also, the Higgs boson gets a mass, as we saw previously. It is given by
+\[
+ m_H = \sqrt{-2 \mu^2} = \sqrt{2 \lambda v^2}.
+\]
Note that the Higgs mass depends on the constant $\lambda$, which we haven't seen anywhere else so far. So we can't deduce $m_H$ from what we know about $W$ and $Z$. Thus, until the Higgs boson was discovered, we didn't know its mass. We now know that
+\[
+ m_H \approx \SI{125}{\giga\electronvolt}.
+\]
+Note that we didn't write out all terms in the Lagrangian, but as we did before, we are going to get $W^{\pm}$, $Z$-Higgs and Higgs-Higgs interactions.
+
One might find it a bit pointless to define the $W^+$ and $W^-$ fields as we did, as the kinetic terms looked just as good with $W^1$ and $W^2$. The advantage of this new definition is that $W_\mu^+$ is now the complex conjugate of $W_\mu^-$, so we can instead view the $W$ bosons as given by a single complex vector field, and when we quantize, $W_\mu^+$ will be the anti-particle of $W_\mu^-$.
+
+\subsection{Coupling to leptons}
+We now look at how gauge bosons couple to matter. In general, for a particle with hypercharge $Y$, the covariant derivative is
+\[
 \D_\mu = \partial_\mu + i g W_\mu^a \tau^a + i g' Y B_\mu.
+\]
+In terms of the $W^{\pm}$ and $Z$ bosons, we can write this as
+\begin{multline*}
+ \D_\mu = \partial_\mu + \frac{ig}{\sqrt{2}} (W_\mu^+ \tau^+ + W_\mu^- \tau^-) + \frac{ig Z_\mu}{\cos \theta_W} (\cos^2 \theta_W \tau^3 - \sin^2 \theta_W Y) \\
+ + ig \sin \theta_W A_\mu (\tau^3 + Y),
+\end{multline*}
+where
+\[
+ \tau^{\pm} = \tau^1 \pm i \tau^2.
+\]
By analogy, we interpret the final term $ig \sin \theta_W A_\mu (\tau^3 + Y)$ as the usual coupling with the photon. So we can identify the (magnitude of the) electron charge as
+\[
+ e = g \sin \theta_W,
+\]
+while the $\U(1)_{EM}$ charge matrix is
+\[
 Q = \tau^3 + Y.
+\]
+If we want to replace all occurrences of the hypercharge $Y$ with $Q$, then we need to note that
+\[
+ \cos^2 \theta_W \tau^3 - \sin^2 \theta_W Y = \tau^3 - \sin^2 \theta_W Q.
+\]
+We now introduce the electron field. The electron field is given by a spinor field $e(x)$. We will decompose it as left and right components
+\[
+ e(x) = e_L(x) + e_R(x).
+\]
There is also a neutrino field $\nu_{e_L}(x)$. We will assume that neutrinos are massless and that there are only left-handed neutrinos. We know this is not entirely true because of neutrino oscillations, but the masses are tiny, and this is a very good approximation.
+
+To come up with an actual electroweak theory, we need to specify a representation of $\SU(2) \times \U(1)$. It is convenient to group our particles by handedness:
+\[
+ R(x) = e_R(x),\quad L(x) =
+ \begin{pmatrix}
+ \nu_{e_L}(x)\\
+ e_L(x)
+ \end{pmatrix}.
+\]
+Here $R(x)$ is a single spinor, while $L(x)$ consists of a \emph{pair}, or ($\SU(2)$) \term{doublet}\index{$\SU(2)$ doublet} of spinors.
+
+We first look at $R(x)$. Experimentally, we find that $W^{\pm}$ only couples to left-handed leptons. So $R(x)$ will have the \emph{trivial representation} of $\SU(2)$. We also know that electrons have charge $Q = -1$. Since $\tau^3$ acts trivially on $R$, we must have
+\[
+ Q = Y = -1
+\]
+for the right-handed leptons.
+
+The left-handed particles are more complicated. We will assert that $L$ has the ``fundamental'' representation of $\SU(2)$, by which we mean given a matrix
+\[
+ g = \begin{pmatrix}
+ a & b\\
+ -\bar{b} & \bar{a}
+ \end{pmatrix}\in \SU(2),
+\]
+it acts on $L$ as
+\[
+ g L =
+ \begin{pmatrix}
+ a & b\\
+ - \bar{b} & \bar{a}
+ \end{pmatrix}
+ \begin{pmatrix}
+ \nu_{e_L}\\
+ e_L
+ \end{pmatrix} =
+ \begin{pmatrix}
+ a \nu_{e_L} + b e_L\\
+ -\bar{b} \nu_{e_L} + \bar{a} e_L
+ \end{pmatrix}.
+\]
+We know the electron has charge $-1$ and the neutrino has charge $0$. So we need
+\[
+ Q =
+ \begin{pmatrix}
+ 0 & 0 \\
+ 0 & -1
+ \end{pmatrix}.
+\]
+Since $Q = \tau^3 + Y$, we must have $Y = -\frac{1}{2}$.
+
+Using this notation, the gauge part of the lepton Lagrangian can be written concisely as
+\[
 \mathcal{L}_{\mathrm{lepton}}^{EW} = \bar{L} i \slashed \D L + \bar{R} i \slashed \D R.
+\]
+Note that $\bar{L}$ means we take the transpose of the matrix $L$, viewed as a matrix with two components, and then take the conjugate of each spinor individually, so that
+\[
+ \bar{L} =
+ \begin{pmatrix}
+ \bar{\nu}_{e_L}(x) & \bar{e}_L(x)
+ \end{pmatrix}.
+\]
+How about the mass? Recall that it makes sense to deal with left and right-handed fermions separately only if the fermion is massless. A mass term
+\[
+ m_e (\bar{e}_Le_R + \bar{e}_R e_L)
+\]
+would be very bad, because our $\SU(2)$ action mixes $e_L$ with $\nu_{e_L}$.
+
+But we do know that the electron has mass. The answer is that the mass is again granted by the spontaneous symmetry breaking of the Higgs boson. Working in unitary gauge, we can write the Higgs boson as
+\[
+ \phi(x) = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 0\\
+ v + h(x)
+ \end{pmatrix},
+\]
+with $v, h(x) \in \R$.
+
The lepton--Higgs interaction is given by
+\[
+ \mathcal{L}_{\mathrm{lept}, \phi} = - \sqrt{2} \lambda_e (\bar L \phi R + \bar R \phi^\dagger L),
+\]
+where $\lambda_e$ is the \term{Yukawa coupling}. It is helpful to make it more explicit what we mean by this expression. We have
+\[
+ \bar{L} \phi R =
+ \begin{pmatrix}
+ \bar{\nu}_{e_L}(x) & \bar{e}_L(x)
+ \end{pmatrix}
+ \left(
+ \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 0\\
+ v + h(x)
+ \end{pmatrix}
+ \right)
+ e_R(x) = \frac{1}{\sqrt{2}} (v + h(x)) \bar{e}_L(x) e_R(x).
+\]
+Similarly, we can write out the second term and obtain
+\begin{align*}
+ \mathcal{L}_{\mathrm{lept}, \phi} &= - \lambda_e (v + h) (\bar{e}_L e_R + \bar{e}_R e_L)\\
+ &= - m_e \bar{e} e - \lambda_e h \bar{e} e,
+\end{align*}
+where
+\[
+ m_e = \lambda_e v.
+\]
+We see that while the interaction term is a priori gauge invariant, once we have spontaneously broken the symmetry, we have magically obtained a mass term $m_e$. The second term in the Lagrangian is the Higgs-fermion coupling. We see that this is proportional to $m_e$. So massive things couple more strongly.
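Since $m_e = \lambda_e v$, the measured lepton masses fix the Yukawa couplings once $v$ is known ($v \approx \SI{246}{\giga\electronvolt}$). A rough numerical illustration (Python; the mass values below are approximate):

```python
# Yukawa couplings lambda_f = m_f / v; approximate masses in GeV
v = 246.0
masses = {"electron": 0.000511, "muon": 0.1057, "tau": 1.777}
couplings = {f: m / v for f, m in masses.items()}

# The couplings are tiny, and heavier leptons couple more strongly to the Higgs
assert couplings["electron"] < couplings["muon"] < couplings["tau"] < 1
```

For the electron this gives $\lambda_e \sim 2 \times 10^{-6}$, so the Higgs couples to it extremely weakly.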
+
+We now return to the fermion-gauge boson interactions. We can write it as
+\[
 \mathcal{L}_{\mathrm{lept}}^{EW, \mathrm{int}} = \frac{g}{2\sqrt{2}} (J^\mu W_\mu^+ + J^{\mu\dagger} W_\mu^-) + e J_{EM}^\mu A_\mu + \frac{g}{2 \cos \theta_W} J_n^\mu Z_\mu,
\]
where
+\begin{align*}
+ J^\mu_{EM} &= - \bar{e} \gamma^\mu e\\
+ J^\mu &= \bar{\nu}_{e_L}\gamma^\mu (1 - \gamma^5) e\\
 J^\mu_n &= \frac{1}{2} \left(\bar{\nu}_{e_L} \gamma^\mu (1 - \gamma^5) \nu_{e_L} - \bar{e} \gamma^\mu (1 - \gamma^5 - 4 \sin^2 \theta_W)e\right)
+\end{align*}
Note that $J^\mu_{EM}$ has a negative sign because the electron has negative charge. These currents are known as the \term{EM current}, \term{charged weak current} and \term{neutral weak current} respectively.
+
+There is one thing we haven't mentioned so far. We only talked about electrons and neutrinos, but the standard model has three generations of leptons. There are the \term{muon} (\term{$\mu$}) and \term{tau} (\term{$\tau$}), and corresponding left-handed neutrinos. With these in mind, we introduce
+\[
+ L^1 =
+ \begin{pmatrix}
+ \nu_{e_L}\\
+ e_L
+ \end{pmatrix},\quad
+ L^2 =
+ \begin{pmatrix}
+ \nu_{\mu_L}\\
+ \mu_L
+ \end{pmatrix},\quad
+ L^3 =
+ \begin{pmatrix}
+ \nu_{\tau_L}\\
+ \tau_L
+ \end{pmatrix}.
+\]
+\[
+ R^1 = e_R,\quad R^2 = \mu_R,\quad R^3 = \tau_R.
+\]
These couple and interact in exactly the same way as the electron, but have heavier masses. It is straightforward to add these into the $\mathcal{L}_{\mathrm{lept}}^{EW, \mathrm{int}}$ term. What is more interesting is the Higgs interaction, which is given by
+\[
+ \mathcal{L}_{\mathrm{lept}, \phi} = - \sqrt{2} \Big(\lambda^{ij} \bar{L}^i \phi R^j + (\lambda^\dagger)^{ij} \bar{R}^i \phi^\dagger L^j\Big),
+\]
+where $i$ and $j$ run over the different generations. What's new now is that we have the matrices $\lambda \in M_3(\C)$. These are \emph{not} predicted by the standard model.
+
This $\lambda$ is just a general matrix, and there is no reason to expect it to be diagonal. However, in some sense, it is always diagonalizable. The key insight is that we contract the two indices of $\lambda$ with two \emph{different} kinds of fields. It is a general linear algebra fact (the singular value decomposition) that for any matrix $\lambda$ at all, we can find unitary matrices $U$ and $S$ such that
+\[
+ \lambda = U \Lambda S^\dagger,
+\]
where $\Lambda$ is a diagonal matrix with real, non-negative entries. We can then transform our fields by
+\begin{align*}
+ L^i &\mapsto U^{ij} L^j\\
+ R^i &\mapsto S^{ij} R^j,
+\end{align*}
+and this diagonalizes $\mathcal{L}_{\mathrm{lept}, \phi}$.
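The linear algebra fact quoted above is precisely the singular value decomposition, and we can see it at work numerically. A quick sketch (Python/\texttt{numpy}) with a random complex matrix standing in for the Yukawa matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
# A generic (non-hermitian) complex matrix playing the role of lambda
lam = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# SVD: lam = U diag(s) S^dagger with U, S unitary and s real, non-negative
U, s, Sh = np.linalg.svd(lam)
assert np.allclose(U.conj().T @ U, np.eye(3))     # U is unitary
assert np.allclose(Sh.conj().T @ Sh, np.eye(3))   # S is unitary
assert np.all(s >= 0)                             # real non-negative entries
assert np.allclose(U @ np.diag(s) @ Sh, lam)      # decomposition reproduces lam
```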
+
It is also clear that this leaves $\mathcal{L}_{\mathrm{lept}}^{EW}$ invariant, as is evident if we look at the expression before symmetry breaking. So the mass eigenstates are the same as the ``\term{weak eigenstates}''. This tells us that after diagonalizing, there is no mixing between the different generations.
+
This is important. It is \emph{not} possible to do this for quarks (or for leptons, if neutrinos have mass), and mixing \emph{does} occur in these cases.
+
+\subsection{Quarks}
+We now move on to study quarks. There are 6 flavours of quarks, coming in three generations:
+\begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ Charge & First generation & Second generation & Third generation\\
+ \midrule
+ $+\frac{2}{3}$ & Up ($u$) & Charm ($c$) & Top ($t$)\\
+ $-\frac{1}{3}$ & Down ($d$) & Strange ($s$) & Bottom ($b$)\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+each of which is a spinor.
+
The right-handed fields have trivial $\SU(2)$ representations, and are thus $\SU(2)$ singlets. We write them as
+\[
+ u_R = \begin{pmatrix} u_R & c_R & t_R\end{pmatrix}
+\]
+which have $Y = Q = +\frac{2}{3}$, and
+\[
+ d_R = \begin{pmatrix} d_R & s_R & b_R\end{pmatrix}
+\]
+which have $Y = Q = -\frac{1}{3}$.
+
+The left-handed fields are in $\SU(2)$ doublets
+\[
+ Q_L^i =
+ \begin{pmatrix}
+ u_L^i\\
+ d_L^i
+ \end{pmatrix} =
+ \begin{pmatrix}
+ \begin{pmatrix}
+ u\\d
+ \end{pmatrix}_L &
+ \begin{pmatrix}
+ c\\
+ s
+ \end{pmatrix}_L &
+ \begin{pmatrix}
+ t\\
+ b
+ \end{pmatrix}_L
+ \end{pmatrix}
+\]
+and these all have $Y = \frac{1}{6}$. Here $i = 1, 2, 3$ labels generations.
+
+The electroweak part of the Lagrangian is again straightforward, given by
+\[
 \mathcal{L}_{\mathrm{quark}}^{EW} = \bar{Q}_L i \slashed \D Q_L + \bar{u}_R i \slashed \D u_R + \bar{d}_R i \slashed \D d_R.
+\]
+It takes a bit more work to couple these things with $\phi$. We want to do so in a gauge invariant way. To match up the $\SU(2)$ part, we need a term that looks like
+\[
+ \bar{Q}_L^i \phi,
+\]
+as both $Q_L$ and $\phi$ have the fundamental representation. Now to have an invariant $\U(1)$ theory, the hypercharges have to add to zero, as under a $\U(1)$ gauge transformation $\beta(x)$, the term transforms as $e^{i\sum Y_i \beta(x) }$. We see that the term
+\[
+ \bar{Q}_L^i \phi d_R^i
+\]
+works well. However, coupling $\bar{Q}_L^i$ with $u_R$ is more problematic. $\bar{Q}_L^i$ has hypercharge $-\frac{1}{6}$ and $u_R$ has hypercharge $+\frac{2}{3}$. So we need another term of hypercharge $-\frac{1}{2}$. To do so, we introduce the \term{charge-conjugated $\phi$}\index{$\phi^c$}, defined by
+\[
+ (\phi^c)^\alpha = \varepsilon^{\alpha\beta} \phi^{* \beta}.
+\]
+One can check that this transforms with the fundamental $\SU(2)$ representation, and has $Y = -\frac{1}{2}$. Inserting generic coupling coefficients $\lambda_{u, d}^{ij}$, we write the Lagrangian as
+\[
 \mathcal{L}_{\mathrm{quark}, \phi} = - \sqrt{2} (\lambda_d^{ij} \bar{Q}_L^i \phi d_R^j + \lambda_u^{ij} \bar{Q}_L^i \phi^c u_R^j + \mathrm{h.c.}).
+\]
+Here $\text{h.c.}$ means ``hermitian conjugate''.
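The claim that $\phi^c$ transforms in the fundamental representation boils down to the identity $\varepsilon g^* = g \varepsilon$ for $g \in \SU(2)$. A numerical sketch of the check (Python/\texttt{numpy}, with a random $\SU(2)$ element):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random SU(2) matrix [[a, b], [-conj(b), conj(a)]] with |a|^2 + |b|^2 = 1
z = rng.normal(size=4)
a, b = complex(z[0], z[1]), complex(z[2], z[3])
n = np.sqrt(abs(a)**2 + abs(b)**2)
a, b = a / n, b / n
g = np.array([[a, b], [-b.conjugate(), a.conjugate()]])

eps = np.array([[0, 1], [-1, 0]], dtype=complex)   # epsilon^{alpha beta}
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
conj = lambda x: eps @ x.conj()                    # phi -> phi^c

# (g phi)^c = g (phi^c): phi^c transforms in the fundamental representation too
assert np.allclose(conj(g @ phi), g @ conj(phi))
```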
+
+Let's try to diagonalize this in the same way we diagonalized leptons. We again work in unitary gauge, so that
+\[
+ \phi(x) = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 0\\
+ v + h(x)
 \end{pmatrix}.
+\]
+Then the above Lagrangian can be written as
+\[
 \mathcal{L}_{\mathrm{quark}, \phi} = - (\lambda_d^{ij} \bar{d}_L^i (v + h) d_R^j + \lambda_u^{ij} \bar{u}_L^i (v + h) u_R^j + \mathrm{h.c.}).
+\]
+We now write
+\begin{align*}
+ \lambda_u &= U_u \Lambda_u S_u^\dagger\\
+ \lambda_d &= U_d \Lambda_d S_d^\dagger,
+\end{align*}
+where $\Lambda_{u, d}$ are diagonal and $S_{u, d}, U_{u, d}$ are unitary. We can then transform the field in a similar way:
+\[
 u_L \mapsto U_u u_L,\quad d_L \mapsto U_d d_L,\quad u_R \mapsto S_u u_R,\quad d_R \mapsto S_d d_R,
+\]
+and then it is a routine check that this diagonalizes $\mathcal{L}_{\mathrm{quark}, \phi}$. In particular, the mass term looks like
+\[
 -\sum_i (m_d^i \bar{d}_L^i d_R^i + m_u^i \bar{u}_L^i u_R^i + \text{h.c.}),
+\]
+where
+\[
+ m_q^i = v \Lambda_q^{ii}.
+\]
+% Let's stop and think about symmetries. In this basis, the $\mathcal{L}_{\mathrm{quark}, \phi}$ bit is invariant under P, C and T. Similarly, $\mathcal{L}_{\mathrm{gauge}, \phi}$ is also invariant under P, C and T.
How does this affect our electroweak interactions? The $\bar{u}_R i \slashed \D u_R$ and $\bar{d}_R i \slashed \D d_R$ terms stay fine, but since $u_L$ and $d_L$ transform with different matrices, the two components of $Q_L$ transform ``differently''. In particular, the $W_\mu^{\pm}$ piece given by $\bar{Q}_L i \slashed \D Q_L$ is not invariant. That piece can be explicitly written out as
+\[
+ \frac{g}{2\sqrt{2}} J^{\pm \mu} W_{\mu}^{\pm},
+\]
+where
+\[
+ J^{\mu+} = \bar{u}_L^i \gamma^\mu d_L^i.
+\]
+Under the basis transformation, this becomes
+\[
+ \bar{u}_L^i \gamma^\mu (U_u^\dagger U_d)^{ij} d_L^j.
+\]
+This is not going to be diagonal in general. This leads to \emph{inter-generational} quark couplings. In other words, we have discovered that the mass eigenstates are (in general) not equal to the weak eigenstates.
+
The mixing is dictated by the \term{Cabibbo--Kobayashi--Maskawa matrix} (\term{CKM matrix})
+\[
+ V_{CKM} = U^\dagger_u U_d.
+\]
+Explicitly, we can write
+\[
+ V_{CKM} =
+ \begin{pmatrix}
+ V_{ud} & V_{us} & V_{ub}\\
+ V_{cd} & V_{cs} & V_{cb}\\
+ V_{td} & V_{ts} & V_{tb}
+ \end{pmatrix},
+\]
where the subscripts indicate which two quarks each entry mixes. So far, these matrix elements are not predicted by any theory, and are manually plugged into the model.
+
+However, the entries aren't completely arbitrary. Being the product of two unitary matrices, we know $V_{CKM}$ is a unitary matrix. Moreover, the entries are only uniquely defined up to some choice of phases, and this further cuts down the number of degrees of freedom.
+
+If we only had two generations, then we get what is known as \term{Cabibbo mixing}. A general $2 \times 2$ unitary matrix has $4$ real parameters --- one angle and three phases. However, redefining each of the 4 quark fields with a global $\U(1)$ transformation, we can remove three relative phases (if we change all of them by the same phase, then nothing happens). We can then write this matrix as
+\[
+ V =
+ \begin{pmatrix}
+ \cos \theta_c & \sin \theta_c\\
+ - \sin \theta_c & \cos \theta_c
+ \end{pmatrix},
+\]
where $\theta_c$ is the \term{Cabibbo angle}. Experimentally, we have $\sin \theta_c \approx 0.22$. It turns out the reality of this matrix implies that CP is conserved, which is left as an exercise on the example sheet.
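We can sanity-check this numerically: with $\sin \theta_c \approx 0.22$ (the measured input), the matrix above is real and orthogonal, so $V V^T = I$. A quick sketch:

```python
import math

theta_c = math.asin(0.22)  # Cabibbo angle from sin(theta_c) ~ 0.22 (measured)
c, s = math.cos(theta_c), math.sin(theta_c)

V = [[c, s],
     [-s, c]]

# Unitarity check: V V^T = I (V is real, so dagger = transpose).
VVt = [[sum(V[i][k] * V[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]

print(VVt)
print("V_ud =", V[0][0])  # the Cabibbo-favoured entry, ~0.975
```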
+
+In this case, we explicitly have
+\[
 J^{\mu\dagger} = \cos \theta_c \bar{u}_L \gamma^\mu d_L + \sin \theta_c \bar{u}_L \gamma^\mu s_L - \sin \theta_c \bar{c}_L \gamma^\mu d_L + \cos \theta_c \bar{c}_L \gamma^\mu s_L.
+\]
+If the angle is $0$, then we have no mixing between the generations.
+
With three generations, there are nine parameters. We can think of this as $3$ (Euler) angles and $6$ phases. As before, we can re-define some of the quark fields to get rid of five relative phases. We can then write $V_{CKM}$ in terms of three angles and $1$ phase. In general, this $V_{CKM}$ is not real, and this gives us CP violation. Of course, if it happens that this phase vanishes, so that $V_{CKM}$ is real, then we do not have CP violation. Unfortunately, experimentally, we find this is not the case. By the CPT theorem, we thus deduce that T violation happens as well.
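The counting above generalizes to any number of generations $n$: $\U(n)$ has $n^2$ real parameters, of which $\frac{n(n-1)}{2}$ are rotation angles, and $2n - 1$ relative phases can be removed by rephasing the $2n$ quark fields. A small sketch of this bookkeeping:

```python
def ckm_parameters(n):
    """Count the physical parameters of an n-generation mixing matrix."""
    total = n * n                  # real parameters of U(n)
    angles = n * (n - 1) // 2      # rotation (Euler) angles
    phases = total - angles        # remaining phases
    removable = 2 * n - 1          # relative rephasings of the 2n quark fields
    return angles, phases - removable  # (angles, physical CP phases)

print(ckm_parameters(2))  # (1, 0): the Cabibbo angle only, no CP violation
print(ckm_parameters(3))  # (3, 1): three angles and one CP-violating phase
```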
+
+\subsection{Neutrino oscillation and mass}
Since around $2000$, we know that the mass eigenstates and weak eigenstates for neutrinos are not equivalent, as neutrinos were found to change from one flavour to another. This implies there is some mixing between different generations of leptons. The analogous mixing matrix is the \term{Pontecorvo--Maki--Nakagawa--Sakata matrix}, written $U_{PMNS}$\index{$U_{PMNS}$}.
+
+As of today, we do not really understand what neutrinos actually are, as neutrinos don't interact much, and we don't have enough experimental data. If neutrinos are Dirac fermions, then they behave just like quarks, and we expect CP violation.
+
+However, there is another possibility. Since neutrinos do not have a charge, it is possible that they are their own anti-particles. In other words, they are \term{Majorana fermions}. It turns out this implies that we cannot get rid of that many phases, and we are left with 3 angles and 3 phases. Again, we get CP violation in general.
+
+We consider these cases briefly in turn.
+
+\subsubsection*{Dirac fermions}
+If they are Dirac fermions, then we must also get some right-handed neutrinos, which we write as
+\[
+ N^i = \nu_R^i = (\nu_{eR}, \nu_{\mu R}, \nu_{\tau R}).
+\]
+Then we modify the Dirac Lagrangian to say
+\[
+ \mathcal{L}_{\mathrm{lept}, \phi} = -\sqrt{2} (\lambda^{ij} \bar{L}^i \phi R^j + \lambda_\nu^{ij} \bar{L}^i \phi^c N^j + \mathrm{h.c.}).
+\]
This is exactly like for quarks. As with quarks, we obtain a mass term
+\[
+ - \sum_i m_\nu^i (\bar{\nu}_R^i \nu_L^i + \bar{\nu}_L^i \nu_R^i).
+\]
+\subsubsection*{Majorana neutrinos}
+If neutrinos are their own anti-particles, then, in the language we had at the beginning of the course, we have
+\[
+ d^s(p) = b^s(p).
+\]
+%We take the intrinsic c-parity to be $1$, wlog.
+Then $\nu(x) = \nu_L(x) + \nu_R(x)$ must satisfy
+\[
 \nu^c(x) = C \bar{\nu}^T = \nu(x).
+\]
+Then we see that we must have
+\[
+ \nu_R(x) = \nu_L^c(x),
+\]
+and vice versa. So the right-handed neutrino field is not independent of the left-handed field. In this case, the mass term would look like
+\[
+ -\frac{1}{2} \sum_i m_\nu^i (\bar{\nu}_L^{ic} \nu_L^i + \bar{\nu}_L^i \nu_L^{ic}).
+\]
As in the case of leptons, postulating a mass term directly like this breaks gauge invariance. Again, we solve this problem by coupling with the Higgs field. It takes some work to find a working gauge coupling, and it turns out the simplest thing that works is
+\[
+ \mathcal{L}_{L, \phi} = -\frac{Y^{ij}}{M} (L^{iT} \phi^c)C (\phi^{cT} L^j) + \mathrm{h.c.}.
+\]
This is weird, because it is a dimension $5$ operator. This dimension 5 operator is \emph{non-renormalizable}. This is actually okay, as long as we think of the standard model as an effective field theory, describing physics at some low energy scale.
+
+\subsection{Summary of electroweak theory}
+We can do a quick summary of the electroweak theory. We start with the picture before spontaneous symmetry breaking.
+
+The gauge group of this theory is $\SU(2)_L \times \U(1)_Y$, with gauge fields $W_\mu \in \su(2)$ and $B_\mu \in \uu(1)$. The coupling of $\U(1)$ with the particles is specified by a \emph{hypercharge} $Y$, and the $\SU(2)$ couplings will always be trivial or fundamental. The covariant derivative is given by
+\[
 \D_\mu = \partial_\mu + i g W_\mu^a \tau^a + i g' Y B_\mu,
+\]
+where $\tau^a$ are the canonical generators of $\su(2)$. The field strengths are given by
+\begin{align*}
+ F_{\mu\nu}^{W, a} &= \partial_\mu W_\nu^a - \partial_\nu W_\mu^a - g \varepsilon^{abc} W_\mu^b W_\nu^c\\
 F_{\mu\nu}^B &= \partial_\mu B_\nu - \partial_\nu B_\mu.
+\end{align*}
+The theory contains the scalar \emph{Higgs field} $\phi \in \C^2$, which has hypercharge $Y = \frac{1}{2}$ and the fundamental $\SU(2)$ representation. We also have three generations of matter, given by
+\begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
 Type & G1 & G2 & G3 & $Q$ & $Y_L$ & $Y_R$\\
+ \midrule
+ Positive quarks & $u$ & $c$ & $t$ & $+\frac{2}{3}$ & $+\frac{1}{6}$ & $+\frac{2}{3}$ \\
+ Negative quarks & $d$ & $s$ & $b$ & $-\frac{1}{3}$ & $+\frac{1}{6}$ & $-\frac{1}{3}$\\
+ \midrule
+ Leptons ($e, \mu, \tau$) & $e$ & $\mu$ & $\tau$ & $-1$ & $-\frac{1}{2}$ & $-1$\\
+ Leptons (neutrinos) & $\nu_e$ & $\nu_\mu$ & $\nu_\tau$ & $0$ & $-\frac{1}{2}$ & ???\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+Here G1, G2, G3 are the three generations, $Q$ is the charge, $Y_L$ is the hypercharge of the left-handed version, and $Y_R$ is the hypercharge of the right-handed version. Each of these matter fields is a spinor field.
+
+From now on, we describe the theory in the case of a massless neutrino, because we don't really know what the neutrinos are. We group the matter fields as
+\[
+ \begin{array}{ccccccc}
+ L &=& \left(\vphantom{\begin{pmatrix}
+ \nu_{e_L}\\
+ e_L
+ \end{pmatrix}}\right.&\begin{pmatrix}
+ \nu_{e_L}\\
+ e_L
+ \end{pmatrix} &
+ \begin{pmatrix}
+ \nu_{\mu_L}\\
+ \mu_L
+ \end{pmatrix} &
+ \begin{pmatrix}
+ \nu_{\tau_L}\\
+ \tau_L
+ \end{pmatrix}&\left.\vphantom{\begin{pmatrix}
+ \nu_{e_L}\\
+ e_L
+ \end{pmatrix}}\right)\\
+ R &= &(& e_R & \mu_R & \tau_R &)\\
+ u_R &=&(& u_R & c_R & t_R&)\\
+ d_R &=&(& d_R & s_R & b_R&)\\
+ Q_L &= &\left(\vphantom{\begin{pmatrix}
+ \nu_{e_L}\\
+ e_L
+ \end{pmatrix}}\right.&
+ \begin{pmatrix}
+ u_L\\d_L
+ \end{pmatrix} &
+ \begin{pmatrix}
+ c_L\\ s_L
+ \end{pmatrix} &
+ \begin{pmatrix}
+ t_L\\ b_L
+ \end{pmatrix}&\left.\vphantom{\begin{pmatrix}
+ \nu_{e_L}\\
+ e_L
+ \end{pmatrix}}\right)
+ \end{array}
+\]
+The Lagrangian has several components:
+\begin{itemize}
+ \item The kinetic term of the gauge field is given by
+ \[
 \mathcal{L}_{\mathrm{gauge}} = -\frac{1}{2} \Tr (F^W_{\mu\nu} F^{W,\mu\nu}) - \frac{1}{4} F_{\mu\nu}^B F^{B, \mu\nu}.
+ \]
+ \item The Higgs field couples with the gauge fields by
+ \[
 \mathcal{L}_{\mathrm{gauge}, \phi} = (\D_\mu \phi)^\dagger (\D^\mu \phi) - \mu^2 |\phi|^2 - \lambda |\phi|^4.
+ \]
 After spontaneous symmetry breaking, this gives rise to massive $W^{\pm}$, $Z$ and Higgs bosons. This also gives us $W^{\pm}, Z$-Higgs interactions, as well as Higgs-Higgs interactions.
+ \item The leptons interact with the Higgs field by
+ \[
+ \mathcal{L}_{\mathrm{lept}, \phi} =- \sqrt{2} (\lambda^{ij} \bar L^i \phi R^j + \mathrm{h.c.}).
+ \]
+ This gives us lepton masses and lepton-Higgs interactions. Of course, this piece has to be modified after we figure out what neutrinos actually are.
 \item The gauge coupling of the leptons induces
+ \[
+ \mathcal{L}_{\mathrm{lept}}^{EW} = \bar L i \slashed \D L + \bar R i \slashed \D R,
+ \]
 with an implicit sum over all generations. This gives us lepton interactions with $W^{\pm}, Z, \gamma$. Once we introduce neutrino masses, the mixing is described by the PMNS matrix, and gives us neutrino oscillations and (possibly) CP violation.
+ \item Higgs-quark interactions are given by
+ \[
 \mathcal{L}_{\mathrm{quark}, \phi} = - \sqrt{2} (\lambda_d^{ij} \bar{Q}_L^i \phi d_R^j + \lambda_u^{ij} \bar{Q}_L^i \phi^c u_R^j + \mathrm{h.c.}),
+ \]
+ which gives rise to quark masses.
+ \item Finally, the gauge coupling of the quarks is given by
+ \[
+ \mathcal{L}_{\mathrm{quark}}^{EW} = \bar{Q}_L i \slashed \D Q_L + \bar{u}_R i \slashed\D u_R + \bar{d}_R i \slashed{\D} d_R,
+ \]
 which gives us quark interactions with $W^{\pm}, Z, \gamma$. The interactions are described by the CKM matrix. This gives us quark flavour mixing and CP violation.
+\end{itemize}
+
+We now have all of the standard model that involves the electroweak part and the matter. Apart from QCD, which we will describe quite a bit later, we've got everything in the standard model.
+
+\section{Weak decays}
We have mostly been describing (the electroweak part of) the standard model rather theoretically. Can we actually use it to make predictions? This is what we will do in this chapter. We will work out decay rates for certain processes, and see how they compare with experiments. One particular point of interest in our computations is to figure out how CP violation manifests itself in decay rates.
+
+\subsection{Effective Lagrangians}
We'll only consider processes where energies and momenta are much less than $m_W, m_Z$. In this case, we can use an effective field theory. We will discuss more formally what effective field theories are later on, but we first see how they work in practice.
+
+In our case, what we'll get is the \term{Fermi weak Lagrangian}. This Lagrangian in fact predates the Standard Model, and it was only later on that we discovered the Fermi weak Lagrangian is an effective Lagrangian of what we now know of as electroweak theory.
+
+Recall that the weak interaction part of the Lagrangian is
+\[
+ \mathcal{L}_W = -\frac{g}{2\sqrt{2}}(J^\mu W^+_\mu + J^{\mu\dagger}W_\mu^-) - \frac{g}{2 \cos \theta_W} J_n^\mu Z_\mu.
+\]
+Our general goal is to compute the \term{$S$-matrix}
+\[
+ S = \mathcal{T} \exp\left(i \int \d^4 x\; \mathcal{L}_W(x)\right).
+\]
+As before, $\mathcal{T}$ denotes time-ordering. The strategy is to Taylor expand this in $g$.
+
+Ultimately, we will be interested in computing
+\[
+ \brak{f} S\bket{i}
+\]
+for some initial and final states $\bket{i}$ and $\bket{f}$. Since we are at the low energy regime, we will only attempt to compute these quantities when $\bket{i}$ and $\bket{f}$ do not contain $W^{\pm}$ or $Z$ bosons. This allows us to drop terms in the Taylor expansion having free $W^{\pm}$ or $Z$ components.
+
+We can explicitly Taylor expand this, keeping the previous sentence in mind. How can we possibly get rid of the $W^{\pm}$ and $Z$ terms in the Taylor expansion? If we think about Wick's theorem, we know that when we take the time-ordered product of several operators, we sum over all possible contractions of the fields, and contraction practically means we replace two operators by the Feynman propagator of that field.
+
+Thus, if we want to end up with no $W^{\pm}$ or $Z$ term, we need to contract all the $W^{\pm}$ and $Z$ fields together. This in particular requires an even number of $W^{\pm}$ and $Z$ terms. So we know that there is no $O(g)$ term left, and the first non-trivial term is $O(g^2)$. We write the propagators as
+\begin{align*}
 D_{\mu\nu}^W (x - x') &= \bra \mathcal{T} W_\mu^-(x) W_\nu^+(x')\ket\\
+ D_{\mu\nu}^Z (x - x') &= \bra \mathcal{T} Z_\mu(x) Z_\nu(x') \ket.
+\end{align*}
Thus, the first interesting term is the $O(g^2)$ one. For initial and final states $\bket{i}$ and $\bket{f}$, we have
+\begin{multline*}
+ \brak{f}S\bket{i} = \brak{f} \mathcal{T} \left\{ 1 - \frac{g^2}{8} \int \d^4 x\; \d^4 x'\left[J^{\mu\dagger}(x) D_{\mu\nu}^W (x - x') J^\nu(x') \vphantom{\frac{1}{\cos^2 \theta_W}}\right.\right. \\
+ \left.\left.+ \frac{1}{\cos^2 \theta_W} J_n^{\mu\dagger} D_{\mu\nu}^Z (x - x') J_n^\nu(x')\right] + O(g^4)\right\}\bket{i}
+\end{multline*}
+As always, we work in momentum space. We define the Fourier transformed propagator $\tilde{D}_{\mu\nu}^{Z, W}(p)$ by
+\[
+ D_{\mu\nu}^{Z, W} (x - y) = \int \frac{\d^4 p}{(2\pi)^4} e^{-ip\cdot (x - y)} \tilde{D}_{\mu\nu}^{Z, W}(p),
+\]
+and we will later compute to find that $\tilde{D}_{\mu\nu}$ is
+\[
+ \tilde{D}_{\mu\nu}^{Z, W} (p) = \frac{i}{p^2 - m_{Z, W}^2 + i \varepsilon}\left(-g_{\mu\nu} + \frac{p_\mu p_\nu}{m^2_{Z, W}}\right).
+\]
+Here $g_{\mu\nu}$ is the metric of the Minkowski space. We will put aside the computation of the propagator for the moment, and discuss consequences.
+
At low energies, e.g.\ in the case of quarks and leptons (except for the top quark), the momentum scales involved satisfy $p^2 \ll m^2_{Z, W}$. In this case, we can approximate the propagators by ignoring all the terms involving $p$. So we have
+\[
+ \tilde{D}^{Z, W}_{\mu\nu}(p) \approx \frac{i g_{\mu\nu}}{m_{Z, W}^2}.
+\]
+Plugging this into the Fourier transform, we have
+\[
+ D_{\mu\nu}^{Z, W}(x - y) \approx \frac{ig_{\mu\nu}}{m^2_{Z, W}} \delta^4(x - y).
+\]
+What we see is that we can describe this interaction by a contact interaction, i.e.\ a four-fermion interaction. Note that if we did not make the approximation $p \approx 0$, then our propagator will not have the $\delta^{(4)}(x - y)$, hence the effective action is \emph{non-local}.
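The error made in replacing the full propagator by the contact term is of relative size $p^2/m_W^2$, which we can see numerically from the scalar part of the propagator. A minimal sketch, taking $m_W \approx \SI{80.4}{\giga\electronvolt}$ as the measured input:

```python
m_W = 80.385  # W boson mass in GeV (measured input)

def full(p2):
    # scalar part of the propagator, 1/(p^2 - m^2) (the i and tensor stripped)
    return 1.0 / (p2 - m_W**2)

def contact(p2):
    # low-energy (contact) approximation: drop p entirely
    return -1.0 / m_W**2

for p2 in [0.011, 1.0, 100.0]:  # roughly m_mu^2, 1 GeV^2, 100 GeV^2
    rel_err = abs(full(p2) - contact(p2)) / abs(full(p2))
    print(f"p^2 = {p2:7.3f} GeV^2: relative error {rel_err:.1e}")
```

One can check by hand that the relative error is exactly $p^2/m_W^2$, so for muon decay ($p^2 \sim m_\mu^2$) the approximation is good to about one part in $10^6$.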
+
+Thus, the second term in the $S$-matrix expansion becomes
+\[
+ -\int \d^4 x\; \frac{ig^2}{8 m_W^2} \left(J^{\mu \dagger}(x) J_\mu(x) + \frac{m_W^2}{m_Z^2 \cos^2 \theta_W} J_n^{\mu\dagger} (x) J_{n\mu} (x)\right).
+\]
+We want to define the \term{effective Lagrangian} \term{$\mathcal{L}_W^{\mathrm{eff}}$} to be the Lagrangian not involving $W^{\pm}$, $Z$ such that for ``low energy states'', we have
+\begin{align*}
+ \brak{f}S\bket{i} &= \brak{f} \mathcal{T} \exp\left(i \int \d^4 x\; \mathcal{L}_W^{\mathrm{eff}}\right)\bket{i}\\
+ &= \brak{f} \mathcal{T} \left[1 + i \int \d^4 x\; \mathcal{L}_W^{\mathrm{eff}} + \cdots\right] \bket{i}
+\end{align*}
+Based on our previous computations, we find that up to tree level, we can write
+\[
+ i \mathcal{L}_W^{\mathrm{eff}} (x) \equiv - \frac{i G_F}{\sqrt{2}} \left[J^{\mu \dagger} J_\mu(x) + \rho J_n^{\mu\dagger} J_{n\mu} (x)\right],
+\]
+where, again up to tree level,
+\[
 \frac{G_F}{\sqrt{2}} = \frac{g^2}{8 m_W^2},\quad \rho = \frac{m_W^2}{m_Z^2 \cos^2 \theta_W}.
+\]
Recall that when we first studied electroweak theory, we found the relation $m_W = m_Z \cos \theta_W$. So, up to tree level, we have $\rho = 1$. When we look at higher orders, we get quantum corrections, and we can write
+\[
+ \rho = 1 + \Delta \rho.
+\]
+This value is sensitive to physics ``beyond the Standard Model'', as the other stuff can contribute to the loops. Experimentally, we find
+\[
+ \Delta \rho \approx 0.008.
+\]
We can now do our usual computations, but with the effective Lagrangian rather than the usual Lagrangian. This is the \term{Fermi theory} of weak interaction, which predates the Standard Model and the discovery of the $W$ and $Z$ bosons. The $\frac{1}{m_W^2}$ in $G_F$ in some sense indicates that Fermi theory breaks down at energy scales near $m_W$, as we would expect.
+
+It is interesting to note that the mass dimension of $G_F$ is $-2$. This is to compensate for the dimension $6$ operator $J^{\mu\dagger}J_\mu$. This means our theory is non-renormalizable. This is, of course, not a problem, because we do not think this is a theory that should be valid up to arbitrarily high energy scales. To derive this Lagrangian, we've assumed our energy scales are $\ll m_W, m_Z$.
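As a sanity check on the tree-level relations, we can plug in rough measured values of $g$, $m_W$ and $m_Z$ (all assumed external inputs, not derived here) and compare with the measured $G_F \approx 1.166 \times 10^{-5}\;\mathrm{GeV}^{-2}$:

```python
import math

g = 0.653       # SU(2) gauge coupling (approximate measured value, assumed)
m_W = 80.385    # GeV (measured input)
m_Z = 91.1876   # GeV (measured input)

# Tree-level Fermi constant: G_F / sqrt(2) = g^2 / (8 m_W^2)
G_F = math.sqrt(2) * g**2 / (8 * m_W**2)

# Weak mixing angle from m_W = m_Z cos(theta_W)
sin2_theta_W = 1 - (m_W / m_Z)**2

print(f"G_F ~ {G_F:.3e} GeV^-2")
print(f"sin^2(theta_W) ~ {sin2_theta_W:.3f}")
```

The tree-level value lands close to the measured $G_F$; the small residual discrepancy is where the quantum corrections such as $\Delta \rho$ live.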
+
+\subsubsection*{Computation of propagators}
We previously just wrote down the values of the $W^{\pm}$ and $Z$-propagators. We will now explicitly do the computations of the $Z$ propagator. The computation for $W^{\pm}$ is similar. We will gloss over subtleties involving non-abelian gauge theory here, ignoring problems involving ghosts etc. We'll work in the so-called \term{$R_\xi$-gauge}.
+
+In this case, we explicitly write the free $Z$-Lagrangian as
+\[
+ \mathcal{L}^{\mathrm{Free}}_Z = - \frac{1}{4}(\partial_\mu Z_\nu - \partial_\nu Z_\mu) (\partial^\mu Z^\nu - \partial^\nu Z^\mu) + \frac{1}{2} m_Z^2 Z_\mu Z^\mu.
+\]
+To find the propagator, we introduce an external current $j^\mu$ coupled to $Z_\mu$. So the new Lagrangian is
+\[
+ \mathcal{L} = \mathcal{L}_Z^{\mathrm{free}} + j^\mu (x) Z_\mu(x).
+\]
+Through some routine computations, we see that the Euler--Lagrange equations give us
+\[
+ \partial^2 Z_\rho - \partial_\rho \partial \cdot Z + m_Z^2 Z_\rho = -j_\rho.\tag{$*$}
+\]
+We need to solve this. We take the $\partial_\rho$ of this, which gives
+\[
+ \partial^2 \partial\cdot Z - \partial^2 \partial \cdot Z + m_Z^2 \partial \cdot Z =- \partial \cdot j.
+\]
+So we obtain
+\[
+ m_Z^2 \partial \cdot Z = - \partial \cdot j.
+\]
+Putting this back into $(*)$ and rearranging, we get
+\[
+ (\partial^2 + m_Z^2) Z_\mu = - \left(g_{\mu\nu} + \frac{\partial_\mu\partial_\nu}{m_Z^2}\right)j^\nu.
+\]
+We can write the solution as an integral over this current. We write the solution as % how are propagators related to greens functions?
+\[
+ Z_\mu(x) = i \int \d^4 y\; D_{\mu\nu}^Z(x - y)j^\nu(y),\tag{$\dagger$}
+\]
+and further write
+\[
 D_{\mu\nu}^Z (x - y) = \int \frac{\d^4 p}{(2\pi)^4} e^{-ip\cdot (x - y)} \tilde{D}_{\mu\nu}^Z(p).
+\]
+Then by applying $(\partial^2 + m_Z^2)$ to $(\dagger)$, we find that we must have
+\[
 \tilde{D}_{\mu\nu}^Z(p) = \frac{-i}{p^2 - m_Z^2 + i \varepsilon}\left(g_{\mu\nu} - \frac{p_\mu p_\nu}{m_Z^2}\right).
+\]
+\subsection{Decay rates and cross sections}
+We now have an effective Lagrangian. What can we do with it? In this section, we will consider two kinds of experiments:
+\begin{itemize}
+ \item We leave a particle alone, and see how frequently it decays, and into what.
+ \item We crash a lot of particles into each other, and count the number of scattering events produced.
+\end{itemize}
+The relevant quantities are \emph{decay rates} and \emph{cross sections} respectively. We will look at both of these in turn, but we will mostly be interested in computing decay rates only.
+
+\subsubsection*{Decay rate}
+\begin{defi}[Decay rate]\index{decay rate}
 Let $X$ be a particle. The decay rate \term{$\Gamma_X$} is the rate of decay of $X$ in its rest frame. In other words, if we have a sample of $X$, then this is the number of decays of $X$ per unit time divided by the number of $X$ present.
+
+ The \term{lifetime} of $X$ is
+ \[
+ \tau \equiv \frac{1}{\Gamma_X}.
+ \]
+ We can write
+ \[
+ \Gamma_X = \sum_{f_i} \Gamma_{X \to f_i},
+ \]
+ where \term{$\Gamma_{X \to f_i}$} is the partial decay rate to the final state $f_i$.
+\end{defi}
+Note that the $\sum_{f_i}$ is usually a complicated mixture of genuine sums and integrals.
+
Often, we are only interested in how frequently it decays into a particular final state, instead of just the total decay rate. Thus, for each final state $\bket{f_i}$, we want to compute $\Gamma_{X \to f_i}$, and then sum over all final states we are interested in.
+
+As before, we can compute the $S$-matrix elements, defined by
+\[
+ \brak{f} S\bket{i}.
+\]
We will take $i = X$. As before, recall that there is a term $1$ in $S$ that corresponds to nothing happening, and we are not interested in that. If $\bket{f} \not= \bket{i}$, then this term does not contribute.
+
+It turns out the interesting quantity to extract out of the $S$-matrix is given by the \emph{invariant amplitude}:
+\begin{defi}[Invariant amplitude]\index{invariant amplitude}
+ We define the \emph{invariant amplitude} $M$ by
+ \[
+ \brak{f}S - 1 \bket{i} = (2\pi)^4 \delta^{(4)}(p_f - p_i) i M_{fi}.
+ \]
+\end{defi}
+When actually computing this, we make use of the following convenient fact:
+\begin{prop}
+ Up to tree order, and a phase, we have
+ \[
+ M_{fi} = \brak{f} \mathcal{L}(0)\bket{i},
+ \]
+ where $\mathcal{L}$ is the Lagrangian. We usually omit the $(0)$.
+\end{prop}
+
+\begin{proof}[Proof sketch]
+ Up to tree order, we have
+ \[
+ \brak{f} S - 1\bket{i} = i\int \d^4 x\;\brak{f} \mathcal{L}(x)\bket{i}
+ \]
+ We write $\mathcal{L}$ in momentum space. Then the only $x$-dependence in the $\brak{f} \mathcal{L}(x) \bket{i}$ factor is a factor of $e^{i (p_f - p_i) \cdot x}$. Integrating over $x$ introduces a factor of $(2\pi)^4 \delta^{(4)}(p_f - p_i)$. Thus, we must have had, up to tree order,
+ \[
+ \brak{f} \mathcal{L}(x) \bket{i} = M_{fi} e^{i(p_f - p_i)\cdot x}.
+ \]
+ So evaluating this at $x = 0$ gives the desired result.
+\end{proof}
+How does this quantity enter the picture? If we were to do this naively, we would expect the probability of a transition $i \to f$ is
+\[
+ P(i \to f) = \frac{|\brak{f}S - 1 \bket{i}|^2}{\braket{f}{f}\braket{i}{i}}.
+\]
+It is not hard to see that we will very soon have a lot of $\delta$ functions appearing in this expression, and these are in general bad. As we saw in QFT, these $\delta$ functions came from the fact that the universe is infinite. So what we do is that we work with a finite spacetime, and then later take appropriate limits.
+
+We suppose the universe has volume $V$, and we also work over a finite temporal extent $T$. Then we replace
+\[
+ (2\pi)^4 \delta^{(4)}(0) \mapsto VT,\quad (2\pi)^3 \delta^{(3)}(0) \mapsto V.
+\]
+Recall that with infinite volume, we found the normalization of our states to be
+\[
+ \braket{i}{i} = (2\pi)^3 2 p_i^0 \delta^{(3)}(0).
+\]
+In finite volume, we can replace this with
+\[
+ \braket{i}{i} = 2p_i^0 V.
+\]
+Similarly, we have
+\[
+ \braket{f}{f} = \prod_r (2 p_r^0 V),
+\]
+where $r$ runs through all final state labels. In the $S$-matrix, we have
+\[
 |\brak{f}S - 1 \bket{i}|^2 = \Big((2\pi)^4 \delta^{(4)}(p_f - p_i)\Big)^2 |M_{fi}|^2.
+\]
+We don't really have a $\delta(0)$ here, but we note that for any $x$, we have
+\[
+ (\delta(x))^2 = \delta(x) \delta(0).
+\]
+The trick here is to replace only one of the $\delta(0)$ with $VT$, and leave the other one alone. Of course, we don't really have a $\delta(p_f - p_i)$ function in finite volume, but when we later take the limit, it will become a $\delta$ function again.
+
+Noting that $p_i^0 = m_i$ since we are in the rest frame, we find
+\[
+ P(i \to f) = \frac{1}{2 m_i V}|M_{fi}|^2 (2\pi)^4 \delta^{(4)}\left(p_i - \sum_r p_r\right) VT \prod_r \left(\frac{1}{2 p_r^0 V}\right).
+\]
+We see that two of our $V$'s cancel, but there are a lot left.
+
+The next thing to realize is that it is absurd to ask what is the probability of decaying into \emph{exactly} $f$. Instead, we have a range of final states. We will abuse notation and use $f$ to denote both the set of all final states we are interested in, and members of this set. We claim that the right expression is
+\[
+ \Gamma_{i \to f} = \frac{1}{T} \int P(i \to f) \prod_r \left(\frac{V}{(2\pi)^3}\d^3 p_r\right).
+\]
+The obvious question to ask is why we are integrating against $\frac{V}{(2\pi)^3} \d^3 p_r$, and not, say, simply $\d^3 p_r$. The answer is that both options are not quite right. Since we have finite volume, as we are familiar from introductory QM, the momentum eigenstates should be discretized. Thus, what we really want to do is to do an honest sum over all possible values of $p_r$.
+
But of course, we don't like sums, and since we are going to take the limit $V \to \infty$ anyway, we replace the sum with an integral, and take into account the density of the momentum eigenstates, which is exactly $\frac{V}{(2\pi)^3}$.
+
+We now introduce a measure
+\[
+ \d \rho_f = (2\pi)^4 \delta^{(4)}\left(p_i - \sum_r p_r\right) \prod_r \frac{\d^3 p_r}{(2\pi)^3 2 p_r^0},
+\]
+and then we can concisely write
+\[
+ \Gamma_{i \to f} = \frac{1}{2 m_i} \int |M_{fi}|^2 \;\d \rho_f.
+\]
+Note that when we actually do computations, we need to manually pick what range of momenta we want to integrate over.
+
+%It is interesting to note that here we had a perfect canceling of the factors of $V$ everywhere, so that our answer stays bounded as we take the limit $V \to \infty$. Our computations assumed nothing about the number of final states, but what if we had more than one thing in the initial state? If our initial state had two particles instead, then the same computations will show that we get a ``decay rate'' proportional to $\frac{1}{V}$ instead, which $\to 0$ as we take the limit $V \to \infty$.
+%
+%This is actually not too surprising. Having two things in the initial state means we are looking at the probabilities of these two things interacting to form something new. But as we take the limit as the universe expands to an infinite size, since our particles are spread all over the universe, the probability of interactions tends to $0$.
+
+\subsubsection*{Cross sections}
+We now quickly look at another way we can do experiments. We imagine we set two beams running towards each other with velocities $\mathbf{v}_a$ and $\mathbf{v}_b$ respectively. We let the particle densities be $\rho_a$ and $\rho_b$.
+
+The idea is to count the number of scattering events, $n$. We will compute this relative to the \term{incident flux}
+\[
+ F = |\mathbf{v}_a - \mathbf{v}_b| \rho_a,
+\]
+which is the number of incoming particles per unit area per unit time.
+\begin{defi}[Cross section]\index{cross section}
+ The \emph{cross section}\index{$\sigma$} is defined by
+ \[
+ n = F \sigma.
+ \]
+\end{defi}
+Given these, the total number of scattering events is
+\[
+ N = n \rho_b V = F \sigma \rho_b V = |\mathbf{v}_a - \mathbf{v}_b| \rho_a \rho_b V \sigma.
+\]
+This is now a more symmetric-looking expression.
+
+Note that in this case, we genuinely have a finite volume, because we have to pack all our particles together in order to make them collide. Since we are boring, we suppose we actually only have one particle of each type. Then we have
+\[
+ \rho_a = \rho_b = \frac{1}{V}.
+\]
+In this case, we have
+\[
+ \sigma = \frac{V}{|\mathbf{v}_a - \mathbf{v}_b|} N.
+\]
We can do computations similar to what we did last time. This time there are a few differences. Last time, the initial state only had one particle, but now we have two. Thus, if we go back and look at our computations, we see that we will gain an extra factor of $\frac{1}{V}$ in the frequency of interactions. Also, since we are no longer in the rest frame, we replace the masses of the particles with the energies. Then the number of interactions is
+\[
+ N = \int \frac{1}{(2E_a) (2 E_b) V} |M_{fi}|^2 \d \rho_f.
+\]
+We are often interested in knowing the number of interactions sending us to each particular momentum (range) individually, instead of just knowing about how many particles we get. For example, we might be interested in which directions the final particles are moving in. So we are interested in
+\[
+ \d \sigma = \frac{V}{|\mathbf{v}_a - \mathbf{v}_b|}\; \d N = \frac{1}{|\mathbf{v}_a - \mathbf{v}_b| 4E_a E_b} |M_{fi}|^2 \d \rho_f.
+\]
+Experimentalists will find it useful to know that cross sections are usually measured in units called ``\term{barns}''. This is defined by
+\[
+ 1\text{ barn} = \SI{e-28}{\meter\squared}.
+\]
+\subsection{Muon decay}
+We now look at our first decay process, the muon decay:
+\[
 \mu^- \to e^- \bar{\nu}_e \nu_\mu.
+\]
+This is in fact the only decay channel of the muon.
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (l) {$\mu^-$};
+ \vertex [right=2cm of l] (c);
+ \vertex [above right=2cm of c] (tr) {$\nu_\mu$};
+ \vertex [right=2cm of c] (r) {$e$};
+ \vertex [below right=2cm of c] (br) {$\bar{\nu}_e$};
+ \diagram*{
+ (l) -- [fermion, momentum=$p$] (c) -- [fermion, momentum=$k$] (r),
+ (c) -- [anti fermion, momentum'=$q$] (br),
+ (c) -- [fermion, momentum=$q'$] (tr),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+We will make the simplifying assumption that neutrinos are massless.
+
+The relevant bit of $\mathcal{L}_W^{\mathrm{eff}}$ is
+\[
+ - \frac{G_F}{\sqrt{2}} J^{\alpha\dagger}J_\alpha,
+\]
+where the weak current is
+\[
+ J^\alpha = \bar{\nu}_e \gamma^\alpha (1 - \gamma^5) e + \bar{\nu}_\mu \gamma^\alpha(1 - \gamma^5)\mu + \bar{\nu}_\tau \gamma^\alpha(1 - \gamma^5) \tau.
+\]
+We see it is the interaction of the first two terms that will render this decay possible.
+
To make sure our weak field approximation is valid, we need to check that the energy scales involved are sufficiently low. The most massive particle involved is
+\[
+ m_\mu = \SI{105.6583715(35)}{\mega\electronvolt}.
+\]
+On the other hand, the mass of the weak boson is
+\[
+ m_W = \SI{80.385(15)}{\giga\electronvolt},
+\]
+which is much bigger. So the weak field approximation should be valid.
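The computation we are about to set up famously yields, at tree level and neglecting $m_e$, the standard result $\Gamma_\mu = \frac{G_F^2 m_\mu^5}{192 \pi^3}$. Quoting this result ahead of the derivation, a quick numeric check against the measured muon lifetime $\tau \approx \SI{2.197}{\micro\second}$:

```python
import math

G_F = 1.1664e-5        # Fermi constant, GeV^-2 (measured input)
m_mu = 0.1056583715    # muon mass, GeV (measured input)
hbar = 6.582e-25       # GeV * s, converts a width into a lifetime

# Tree-level muon decay width (electron mass neglected):
# the standard result the upcoming computation leads to.
Gamma = G_F**2 * m_mu**5 / (192 * math.pi**3)
tau = hbar / Gamma

print(f"Gamma = {Gamma:.3e} GeV")
print(f"tau   = {tau:.3e} s")
```

The result lands within a percent or so of the measured \SI{2.197}{\micro\second}, which is remarkable given how crude the inputs are; in fact, muon decay is how $G_F$ is measured in the first place.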
+
+We can now compute
+\[
+ M = \brak{e^-(k) \bar{\nu}_e (q) \nu_\mu(q')} \mathcal{L}_W^{\mathrm{eff}} \bket{\mu^-(p)}.
+\]
+Note that we left out all the spin indices to avoid overwhelming notation. We see that the first term in $J^\alpha$ is relevant to the electron bit, while the second is relevant to the muon bit. We can then write this as
+\begin{align*}
+ M &= -\frac{G_F}{\sqrt{2}} \brak{e^-(k) \bar{\nu}_e(q)} \bar{e} \gamma^\alpha (1 - \gamma^5) \nu_e \bket{0} \brak{\nu_\mu(q')} \bar{\nu}_\mu \gamma_\alpha(1 - \gamma^5) \mu \bket{\mu^-(p)}\\
 &= - \frac{G_F}{\sqrt{2}} \bar{u}_e(k) \gamma^\alpha (1 - \gamma^5)v_{\nu_e}(q) \bar{u}_{\nu_\mu} (q') \gamma_\alpha (1 - \gamma^5) u_\mu(p).
+\end{align*}
+Before we plunge through more computations, we look at what we are interested in and what we are not.
+
+At this point, we are not interested in the final state spins. Therefore, we want to sum over the final state spins. We also don't know the initial spin of $\mu^-$. So we average over the initial states. For reasons that will become clear later, we will write the desired amplitude as
+\begin{align*}
+ \frac{1}{2} \sum_{\mathrm{spins}} |M|^2 &= \frac{1}{2} \sum_{\mathrm{spins}} MM^*\\
+ &= \frac{1}{2} \frac{G_F^2}{2} \sum_{\mathrm{spins}} \left(\bar{u}_e(k) \gamma^\alpha (1 - \gamma^5) v_{\nu_e}(q)\bar{u}_{\nu_\mu} (q') \gamma_\alpha (1 - \gamma^5) u_\mu(p)\right)\\
+ &\hphantom{\frac{1}{2} \frac{G_F^2}{2}\sum_{\mathrm{spins}}}\times \left(\bar{u}_\mu (p) \gamma_\beta(1 - \gamma^5) u_{\nu_\mu}(q') \bar{v}_{\nu_e}(q) \gamma^\beta (1 - \gamma^5) u_e(k)\right)\\
+ &= \frac{1}{2} \frac{G_F^2}{2} \sum_{\mathrm{spins}} \left(\bar{u}_e(k) \gamma^\alpha (1 - \gamma^5) v_{\nu_e}(q) \bar{v}_{\nu_e}(q) \gamma^\beta (1 - \gamma^5) u_e(k)\right)\\
+ &\hphantom{\frac{1}{2} \frac{G_F^2}{2}\sum_{\mathrm{spins}}}\times \left(\bar{u}_{\nu_\mu} (q') \gamma_\alpha (1 - \gamma^5) u_\mu(p) \bar{u}_\mu (p) \gamma_\beta(1 - \gamma^5) u_{\nu_\mu}(q')\right).
+\end{align*}
+We write this as
+\[
+ \frac{G_F^2}{4} S_1^{\alpha\beta}S_{2\alpha\beta},
+\]
+where
+\begin{align*}
+ S_1^{\alpha\beta} &= \sum_{\mathrm{spins}} \bar{u}_e(k) \gamma^\alpha (1 - \gamma^5) v_{\nu_e}(q) \bar{v}_{\nu_e}(q) \gamma^\beta (1 - \gamma^5) u_e(k)\\
+ S_{2\alpha\beta} &= \sum_{\mathrm{spins}}\bar{u}_{\nu_\mu} (q') \gamma_\alpha (1 - \gamma^5) u_\mu(p) \bar{u}_\mu (p) \gamma_\beta(1 - \gamma^5) u_{\nu_\mu}(q').
+\end{align*}
+To actually compute this, we recall the spinor identities
+\begin{align*}
+ \sum_{\mathrm{spins}} u(p) \bar{u}(p) &= \slashed{p} + m,\\
+ \sum_{\mathrm{spins}} v(p) \bar{v}(p) &= \slashed{p} - m
+\end{align*}
+In our expression for, say, $S_1$, the $v_{\nu_e} \bar{v}_{\nu_e}$ is already in the right form to apply these identities, but $\bar{u}_e$ and $u_e$ are not. Here we do a slightly sneaky thing. We notice that for each fixed $\alpha, \beta$, the quantity $S^{\alpha\beta}_1$ is a scalar. So we trivially have $S_1^{\alpha\beta} = \Tr(S_1^{\alpha \beta})$. We now use the cyclicity of trace, which says $\Tr (AB) = \Tr (BA)$. This applies even if $A$ and $B$ are not square, by the same proof. Then noting further that the trace is linear, we get
+\begin{align*}
+ S_1^{\alpha\beta} &= \Tr(S_1^{\alpha\beta}) \\
+ &= \sum_{\mathrm{spins}} \Tr \Big[\bar{u}_e(k) \gamma^\alpha (1 - \gamma^5) v_{\nu_e}(q) \bar{v}_{\nu_e}(q) \gamma^\beta (1 - \gamma^5)\Big]\\
+ &= \sum_{\mathrm{spins}} \Tr \Big[u_e(k)\bar{u}_e(k) \gamma^\alpha (1 - \gamma^5) v_{\nu_e}(q) \bar{v}_{\nu_e}(q) \gamma^\beta (1 - \gamma^5)\Big]\\
+ &= \Tr \Big[(\slashed{k} + m_e) \gamma^\alpha (1 - \gamma^5) \slashed{q} \gamma^\beta(1 - \gamma^5)\Big]\\
+ \intertext{Similarly, we find}
+ S_{2\alpha\beta} &= \Tr \Big[\slashed{q}' \gamma_\alpha(1 - \gamma^5) (\slashed{p} + m_\mu) \gamma_\beta(1 - \gamma^5)\Big].
+\end{align*}
+To evaluate these traces, we use trace identities
+\begin{align*}
+ \Tr(\gamma^{\mu_1} \cdots \gamma^{\mu_n}) &= 0\text{ if $n$ is odd}\\
+ \Tr(\gamma^\mu \gamma^\nu \gamma^\rho \gamma^\sigma) &= 4 (g^{\mu\nu} g^{\rho\sigma} - g^{\mu\rho} g^{\nu\sigma} + g^{\mu\sigma} g^{\nu\rho})\\
+ \Tr(\gamma^5 \gamma^\mu \gamma^\nu \gamma^\rho \gamma^\sigma) &= 4i \varepsilon^{\mu\nu\rho\sigma}.
+\end{align*}
+This gives the rather scary expressions
+\begin{align*}
+ S_1^{\alpha\beta} &= 8\Big(k^\alpha q^\beta + k^\beta q^\alpha - (k\cdot q) g^{\alpha\beta} - i \varepsilon^{\alpha\beta\mu\rho} k_\mu q_\rho\Big)\\
+ S_{2\alpha\beta} &= 8\Big(q'_\alpha p_\beta + q'_\beta p_\alpha - (q' \cdot p) g_{\alpha\beta} - i \varepsilon_{\alpha\beta\mu\rho} q'^\mu p^\rho\Big),
+\end{align*}
+but once we actually contract these two objects, we get the really pleasant result
+\[
+ \frac{1}{2} \sum |M|^2 = 64 G_F^2 (p \cdot q)(k \cdot q').
+\]
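We can sanity-check this contraction numerically with explicit Dirac matrices: for arbitrary four-vectors $p, q, k, q'$ (the identity does not even use momentum conservation, and the mass terms drop out of the traces), we should find $S_1^{\alpha\beta} S_{2\alpha\beta} = 256 (p \cdot q)(k \cdot q')$. Here is a minimal sketch in Python with \texttt{numpy}, not part of the course:

```python
import numpy as np

# Dirac matrices in the Dirac basis, metric signature (+, -, -, -)
I2, Z2 = np.eye(2), np.zeros((2, 2))
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

g = [np.block([[I2, Z2], [Z2, -I2]])] + \
    [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
I4 = np.eye(4)
L = I4 - g5                 # the (1 - gamma^5) factor in each current

def slash(p):
    # p_mu gamma^mu, index lowered with the metric
    return sum(eta[m, m] * p[m] * g[m] for m in range(4))

def dot(a, b):
    return a @ eta @ b

rng = np.random.default_rng(1)
p, q, k, qp = rng.normal(size=(4, 4))  # four random four-vectors
m_e, m_mu = 0.000511, 0.10566          # GeV; both drop out (odd number of gammas)

S1 = np.array([[np.trace((slash(k) + m_e * I4) @ g[a] @ L @ slash(q) @ g[b] @ L)
                for b in range(4)] for a in range(4)])
S2 = np.array([[np.trace(slash(qp) @ g[a] @ L @ (slash(p) + m_mu * I4) @ g[b] @ L)
                for b in range(4)] for a in range(4)])

# contract S1^{ab} with S2_{ab}, lowering two indices with metric factors
lhs = np.einsum('ab,am,bn,mn->', S1, eta, eta, S2)
rhs = 256 * dot(p, q) * dot(k, qp)
print(abs(lhs - rhs))  # agrees to machine precision
```

Note that $256 = 4 \times 64$ matches the $\frac{G_F^2}{4}$ prefactor above, and the result is independent of the sign convention chosen for $\Tr(\gamma^5 \gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma)$, since the $\varepsilon$-tensors appear squared.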
+It is instructive to study this expression in particular cases. Consider the case where $e^-$ and $\nu_\mu$ go out along the $+z$ direction, and $\bar{\nu}_e$ along $-z$. Then we have
+\[
+ k \cdot q' = \sqrt{m_e^2 + k_z^2} q_z' - k_z q'_z.
+\]
+As $m_e \to 0$, we have $k \cdot q' \to 0$, and hence the amplitude vanishes.
+
+This is indeed something we should expect even without doing computations. We know the weak interaction couples to left-handed particles and right-handed anti-particles. \emph{If} $m_e = 0$, then we saw that helicity and chirality coincide. Thus, the spin of $\bar{\nu}_e$ must be opposite to that of $\nu_\mu$ and $e^-$:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \draw [->] (0, 0) -- (-1, 0) node [pos=0.5, above] {$\bar{\nu}_e$} node [pos=0.5, below] {$\Leftarrow \frac{1}{2}$};
+
+ \node [circ] at (2, 0) {};
+ \draw [->] (2, 0) -- (3, 0) node [pos=0.5, above] {$\nu_\mu$} node [pos=0.5, below] {$\Leftarrow \frac{1}{2}$};
+
+ \node [circ] at (3.5, 0) {};
+ \draw [->] (3.5, 0) -- (4.5, 0) node [pos=0.5, above] {$e$} node [pos=0.5, below] {$\Leftarrow \frac{1}{2}$};
+ \end{tikzpicture}
+\end{center}
+So they all point in the same direction, and the total spin would be $\frac{3}{2}$. But the initial total angular momentum is just the spin $\frac{1}{2}$. So this violates conservation of angular momentum.
+
+But if $m_e \not= 0$, then the left-handed and right-handed components of the electron are coupled, and helicity and chirality do not coincide. So it is possible that we obtain a right-handed electron instead, and this gives conserved angular momentum. We call this \term{helicity suppression}, and we will see many more examples of this later on.
+
+It is important to note that here we are only analyzing decays where the final momenta point in these particular directions. If $m_e = 0$, we can still have decays in other directions.
+
+There is another interesting thing we can consider. In this same set up, if $m_e \not= 0$, but neutrinos are massless, then we can only possibly decay to left-handed neutrinos. So the only possible assignment of spins is this:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \draw [->] (0, 0) -- (-1, 0) node [pos=0.5, above] {$\bar{\nu}_e$} node [pos=0.5, below] {$\Leftarrow \frac{1}{2}$};
+
+ \node [circ] at (2, 0) {};
+ \draw [->] (2, 0) -- (3, 0) node [pos=0.5, above] {$\nu_\mu$} node [pos=0.5, below] {$\Leftarrow \frac{1}{2}$};
+
+ \node [circ] at (3.5, 0) {};
+ \draw [->] (3.5, 0) -- (4.5, 0) node [pos=0.5, above] {$e$} node [pos=0.5, below] {$\frac{1}{2} \Rightarrow$};
+ \end{tikzpicture}
+\end{center}
+Under a parity transformation, the momenta reverse, but spins don't. So if parity were a symmetry of the weak interaction, we would expect the parity-transformed decay to occur at the same rate.
+\begin{center}
+ \begin{tikzpicture}[xscale=-1]
+ \node [circ] {};
+ \draw [->] (0, 0) -- (-1, 0) node [pos=0.5, above] {$\bar{\nu}_e$} node [pos=0.5, below] {$\Leftarrow \frac{1}{2}$};
+
+ \node [circ] at (2, 0) {};
+ \draw [->] (2, 0) -- (3, 0) node [pos=0.5, above] {$\nu_\mu$} node [pos=0.5, below] {$\Leftarrow \frac{1}{2}$};
+
+ \node [circ] at (3.5, 0) {};
+ \draw [->] (3.5, 0) -- (4.5, 0) node [pos=0.5, above] {$e$} node [pos=0.5, below] {$\frac{1}{2} \Rightarrow$};
+ \end{tikzpicture}
+\end{center}
+But (at least in the limit of massless neutrinos) this isn't allowed, because weak interactions don't couple to right-handed neutrinos. So we know that weak decays violate P.
+
+We now return to finish up our computations. The decay rate is given by
+\begin{multline*}
+ \Gamma = \frac{1}{2 m_\mu} \int \frac{\d^3 k}{(2\pi)^3 2k^0} \int \frac{\d^3 q}{(2\pi)^3 2q^0} \int \frac{\d^3 q'}{(2\pi)^3 2q'^0} \\
+ \times (2\pi)^4 \delta^{(4)}(p - k - q - q') \frac{1}{2} \sum |M|^2.
+\end{multline*}
+Using our expression for $\frac{1}{2}\sum |M|^2$, we find
+\[
+ \Gamma = \frac{G_F^2}{8 \pi^5 m_\mu} \int \frac{\d^3 k\; \d^3 q\; \d^3 q'}{k^0 q^0 q'^0} \delta^{(4)} (p - k - q - q') (p \cdot q) (k \cdot q').
+\]
+To evaluate this integral, there is a useful trick.
+
+For convenience, we write $Q = p - k$, and we consider
+\[
+ I_{\mu\nu}(Q) = \int \frac{\d^3 q}{q^0} \frac{\d^3 q'}{q'^0} \delta^{(4)}(Q - q - q') q_\mu q'_\nu.
+\]
+By Lorentz symmetry arguments, this must be of the form
+\[
+ I_{\mu\nu} (Q) = a(Q^2) Q_\mu Q_\nu + b(Q^2) g_{\mu\nu} Q^2,
+\]
+where $a, b: \R \to \R$ are some scalar functions.
+
+Now consider
+\[
+ g^{\mu\nu} I_{\mu\nu} = \int \frac{\d^3 q}{q^0} \frac{\d^3 q'}{q'^0} \delta^{(4)}(Q - q - q') q \cdot q' = (a + 4b) Q^2 .
+\]
+But we also know that
+\[
+ (q + q')^2 = q^2 + q'^2 + 2 q \cdot q' = 2 q \cdot q'
+\]
+because neutrinos are massless. On the other hand, by momentum conservation, we know
+\[
+ q + q' = Q.
+\]
+So we know
+\[
+ q \cdot q' = \frac{1}{2} Q^2.
+\]
+So we find that
+\[
+ a + 4b = \frac{I}{2},\tag{$1$}
+\]
+where
+\[
+ I = \int \frac{\d^3 q}{q^0} \int \frac{\d^3 q'}{q'^0} \delta^{(4)} (Q - q - q').
+\]
+We can consider something else. We have
+\begin{multline*}
+ Q^\mu Q^\nu I_{\mu\nu} = a(Q^2) Q^4 + b(Q^2) Q^4 \\
+ = \int \frac{\d^3 q}{q^0} \int \frac{\d^3 q'}{q'^0} \delta^{(4)} (Q - q - q') (q\cdot Q) (q' \cdot Q).
+\end{multline*}
+Using the masslessness of neutrinos and momentum conservation again, we find that
+\[
+ (q \cdot Q) (q' \cdot Q) = (q \cdot q')^2 = \frac{1}{4} Q^4.
+\]
+So we find
+\[
+ a + b = \frac{I}{4}\tag{$2$}.
+\]
+It remains to evaluate $I$, and to do so, we can just evaluate it in the frame where $Q = (\sigma, 0)$ for some $\sigma$. Now note that since $q^2 = q'^2 = 0$, we must have
+\[
+ q^0 = |\mathbf{q}|.
+\]
+So we have
+\begin{align*}
+ I &= \int \frac{\d^3 q}{|\mathbf{q}|} \int \frac{\d^3 q'}{|\mathbf{q}'|} \delta(\sigma - |\mathbf{q}| - |\mathbf{q}'|) \delta^{(3)}(\mathbf{q} + \mathbf{q}')\\
+ &= \int \frac{\d^3 q}{|\mathbf{q}|^2} \delta(\sigma - 2|\mathbf{q}|)\\
+ &= 4\pi \int_0^\infty \d |\mathbf{q}|\; \delta(\sigma - 2|\mathbf{q}|)\\
+ &= 2\pi.
+\end{align*}
+Solving $(1)$ and $(2)$ now gives $a = \frac{\pi}{3}$ and $b = \frac{\pi}{6}$, so that $p^\mu k^\nu I_{\mu\nu} = \frac{\pi}{6}\big(2(p \cdot Q)(k \cdot Q) + (p \cdot k) Q^2\big)$. Putting everything together, we find that
+\[
+ \Gamma = \frac{G_F^2}{3 m_\mu (2\pi)^4} \int \frac{\d^3 k}{k^0} \Big(2 p \cdot (p - k) k \cdot (p - k) + (p \cdot k)(p - k)^2\Big)
+\]
+Recall that we are working in the rest frame of $\mu$. So we know that
+\[
+ p \cdot k = m_\mu E,\quad p \cdot p = m_\mu^2,\quad k \cdot k = m_e^2,
+\]
+where $E = k^0$. Note that we have
+\[
+ \frac{m_e}{m_\mu}\approx 0.0048 \ll 1.
+\]
+So to make our lives easier, it is reasonable to assume $m_e = 0$. In this case, $|\mathbf{k}| = E$, and then
+\begin{align*}
+ \Gamma &= \frac{G_F^2}{(2\pi)^4 3m_\mu} \int \frac{\d^3 k}{E} \Big(2 m_\mu^2 m_\mu E - 2 (m_\mu E)^2 - 2 (m_\mu E)^2 + m_\mu E m_\mu^2\Big)\\
+ &= \frac{G_F^2 m_\mu}{ (2\pi)^4 3} \int \d^3 k\; (3 m_\mu - 4 E)\\
+ &= \frac{4\pi G_F^2 m_\mu}{(2\pi)^4 3} \int \d E\; E^2(3 m_\mu - 4E)
+\end{align*}
+We now need to figure out the limits of integration. The minimum $E_{\mathrm{min}} = 0$ is attained when $e$ is produced at rest (recall we set $m_e = 0$). The maximum energy is obtained when $\nu_\mu, \bar{\nu}_e$ travel in the same direction, opposite to $e^-$. In this case, energy conservation gives
+\[
+ E + (E_{\bar{\nu}_e} + E_{\nu_\mu}) = m_\mu.
+\]
+Since all the final-state particles are massless, momentum conservation gives
+\[
+ E - (E_{\bar{\nu}_e} + E_{\nu_\mu}) = 0.
+\]
+So we find
+\[
+ E_{\mathrm{max}} = \frac{m_\mu}{2}.
+\]
+Thus, we can put in our limits into the integral, and find that % figure out where the limits come from. What happens when we integrate past this limit?
+\[
+ \Gamma = \frac{G_F^2 m_\mu^5}{192 \pi^3}.
+\]
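As a quick numerical check of the last step, we can integrate $E^2(3m_\mu - 4E)$ over $[0, m_\mu/2]$ (Simpson's rule is exact for a cubic) and confirm that the prefactor assembles into $\frac{1}{192\pi^3}$. A small sketch in Python:

```python
import math

m_mu = 0.1056583715  # GeV

# composite Simpson's rule for the energy integral (exact for a cubic)
n = 1000                      # even number of subintervals
h = (m_mu / 2) / n
f = lambda E: E**2 * (3 * m_mu - 4 * E)
total = f(0.0) + f(m_mu / 2) \
    + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
integral = total * h / 3      # equals m_mu**4 / 16

# Gamma / G_F^2 from the previous line vs the closed form
lhs_coeff = 4 * math.pi * m_mu / (3 * (2 * math.pi)**4) * integral
rhs_coeff = m_mu**5 / (192 * math.pi**3)
print(lhs_coeff, rhs_coeff)   # the two agree
```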
+As we mentioned at the beginning of the chapter, this is the only decay channel of the muon. From experiments, we can measure the lifetime of the muon as
+\[
+ \tau_{\mu} = \SI{2.1969811(22)e-6}{\second}.
+\]
+This tells us that
+\[
+ G_F = \SI{1.164e-5}{\per\giga\electronvolt\squared}.
+\]
+Of course, this isn't exactly right, because we ignored all loop corrections (and approximated $m_e = 0$). But this is reasonably good, because those effects only change the result by a relative factor of order $10^{-6}$. Of course, if we want to do more accurate and possibly beyond-Standard-Model physics, we need to do better than this.
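Concretely, the quoted value comes from inverting the tree-level result $\Gamma = G_F^2 m_\mu^5/(192\pi^3)$ with $\Gamma = \hbar/\tau_\mu$. A quick check in Python (PDG values of $\hbar$, $m_\mu$ and $\tau_\mu$ assumed):

```python
import math

hbar = 6.582119569e-25   # GeV s
tau_mu = 2.1969811e-6    # s, measured muon lifetime
m_mu = 0.1056583715      # GeV

Gamma = hbar / tau_mu    # total width in GeV
G_F = math.sqrt(192 * math.pi**3 * Gamma / m_mu**5)
print(G_F)               # ~1.164e-5 GeV^-2
```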
+
+Experimentally, this value of $G_F$ is consistent with what we find in the $\tau \to e \bar{\nu}_e \nu_\tau$ and $\tau \to \mu \bar{\nu}_\mu \nu_\tau$ decays. This is some good evidence for lepton universality, i.e.\ the leptons have different masses, but they couple in the same way.
+
+\subsection{Pion decay}
+We are now going to look at a slightly different example. We are going to study decays of \term{pions}, which are made up of a pair of quark and anti-quark. We will in particular look at the decay
+\[
+ \pi^- (\bar u d) \to e^- \bar{\nu}_e.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (c);
+ \vertex [left=of c] (pi) {$\pi^-$};
+ \vertex [above right=of c] (e) {$e^-$};
+ \vertex [below right=of c] (nu) {$\bar{\nu}_e$};
+
+ \diagram*{
+ (pi) -- [fermion, momentum=$p$] (c) -- [fermion, momentum=$k$] (e);
+ (c) -- [anti fermion, momentum'=$q$] (nu);
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+This is actually quite hard to do from first principles, because $\pi^-$ is made up of quarks, and quark dynamics is dictated by QCD, which we don't know about yet. However, these quarks are not free to move around, and are strongly bound together inside the pion. So the trick is to hide all the things that happen in the QCD side in a single constant $F_\pi$, without trying to figure out, from our theory, what $F_\pi$ actually is.
+
+The relevant currents are
+\begin{align*}
+ J_{\mathrm{lept}}^\alpha &= \bar{\nu}_e \gamma^\alpha (1 - \gamma^5) e\\
+ J_{\mathrm{had}}^\alpha &= \bar{u} \gamma^\alpha (1 - \gamma^5)(V_{ud} d + V_{us}s + V_{ub}b) \equiv V_{\mathrm{had}}^\alpha - A_{\mathrm{had}}^\alpha,
+\end{align*}
+where the $V_{\mathrm{had}}^\alpha$ contains the $\gamma^\alpha$ bit, while $A_{\mathrm{had}}^\alpha$ contains the $\gamma^\alpha \gamma^5$ bit.
+
+Then the amplitude we want to compute is
+\begin{align*}
+ M &= \brak{e^-(k) \bar{\nu}_e (q)} \mathcal{L}_W^{\mathrm{eff}} \bket{\pi^-(p)}\\
+ &= - \frac{G_F}{\sqrt{2}} \brak{e^-(k) \bar{\nu}_e(q)} J_{\alpha, \mathrm{lept}}^\dagger \bket{0} \brak{0} J_{\mathrm{had}}^\alpha \bket{\pi^-(p)}\\
+ &= - \frac{G_F}{\sqrt{2}} \brak{e^-(k) \bar{\nu}_e(q)} \bar{e} \gamma_\alpha (1 - \gamma^5) \nu_e \bket{0} \brak{0} J_{\mathrm{had}}^\alpha \bket{\pi^-(p)}\\
+ &= \frac{G_F}{\sqrt{2}} \bar{u}_e (k) \gamma_\alpha(1 - \gamma^5) v_{\nu_e}(q) \brak{0} V_{\mathrm{had}}^\alpha - A_{\mathrm{had}}^\alpha \bket{\pi^-(p)}.
+\end{align*}
+We now note that $V_{\mathrm{had}}^\alpha$ does not contribute. This requires knowing something about QCD and $\pi^-$. It is known that QCD is a P-invariant theory, and experimentally, we find that $\pi^-$ has spin $0$ and odd parity. In other words, under P, it transforms as a pseudoscalar. Thus, the expression
+\[
+ \brak{0}\bar{u} \gamma^\alpha d \bket{\pi^-(p)}
+\]
+transforms as an axial vector. But since the only physical variable involved is $p^\alpha$, the only quantities we can construct are multiples of $p^\alpha$, which are vectors. So this must vanish. By a similar chain of arguments, the remaining QCD matrix element must be of the form
+\[
+ \brak{0} \bar{u} \gamma^\alpha \gamma^5 d \bket{\pi^-(p)} = i \sqrt{2} F_\pi p^\alpha
+\]
+for some constant \term{$F_\pi$}. Then we have
+\[
+ M = i G_F F_\pi V_{ud} \bar{u}_e(k) \slashed p (1 - \gamma^5) v_{\nu_e}(q).
+\]
+To simplify this, we use momentum conservation $p = k + q$, and some spinor identities
+\[
+ \bar{u}_e(k) \slashed{k} = \bar{u}_e(k) m_e,\quad \slashed{q} v_{\nu_e}(q) = 0.
+\]
+Then we find that
+\[
+ M = i G_F F_\pi V_{ud} m_e \bar{u}_e(k) (1 - \gamma^5) v_{\nu_e}(q).
+\]
+Doing a manipulation similar to last time's, and noting that
+\begin{align*}
+ (1 - \gamma^5) \gamma^\mu(1 + \gamma^5) &= 2(1 - \gamma^5) \gamma^\mu\\
+ \Tr (\slashed{k}\slashed{q}) &= 4 k\cdot q\\
+ \Tr (\gamma^5 \slashed{k}\slashed{q}) &= 0
+\end{align*}
+we find
+\begin{align*}
+ \sum_{\text{spins }e, \bar{\nu}_e} |M|^2 &= \sum_{\mathrm{spins}} |G_F F_\pi m_e V_{ud}|^2 [\bar{u}_e(k) (1 - \gamma^5) v_{\nu_e}(q) \bar{v}_{\nu_e}(q) (1 + \gamma^5) u_e(k)]\\
+ &= 2|G_F F_\pi m_e V_{ud}|^2 \Tr \Big[(\slashed{k} + m_e)( 1 - \gamma^5) \slashed{q}\Big]\\
+ &= 8 |G_F F_\pi m_e V_{ud}|^2 (k \cdot q).
+\end{align*}
+This again shows helicity suppression. The spin-$0$ $\pi^-$ decays to a positive-helicity $\bar{\nu}_e$, and hence, by conservation of angular momentum, to a positive-helicity electron:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [below] at (0, 0) {$\pi^-$};
+
+ \draw [->] (1, 0) node [circ] {} -- (2, 0) node [pos=0.5, above] {$e^-$} node [pos=0.5, below] {$\frac{1}{2} \Rightarrow$};
+ \draw [->] (-1, 0) node [circ] {} -- (-2, 0) node [pos=0.5, above] {$\bar{\nu}_e$} node [pos=0.5, below] {$\Leftarrow \frac{1}{2}$};
+ \end{tikzpicture}
+\end{center}
+But if $m_e = 0$, then a positive-helicity electron is purely right-chiral, and the weak interaction does not couple to it. So the decay would be forbidden.
+
+We can now compute an actual decay rate. We note that since we are working in the rest frame of $\pi^-$, we have $\mathbf{k} + \mathbf{q} = 0$; and since the neutrino is massless, we have $q^0 = |\mathbf{q}| = |\mathbf{k}|$. Finally, writing $E = k^0$ for the energy of $e^-$, we obtain
+\begin{align*}
+ \Gamma_{\pi \to e \bar{\nu}_e} &= \frac{1}{2 m_\pi} \int \frac{\d^3 k}{(2\pi)^3 2 k^0} \int \frac{\d^3 q}{(2\pi)^3 2 q^0} (2\pi)^4 \delta^{(4)}(p - k - q) \\
+ &\hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa} 8 |G_F F_\pi m_e V_{ud}|^2 (k \cdot q)\\
+ &= \frac{|G_F F_\pi m_e V_{ud}|^2 }{4 m_\pi \pi^2} \int \frac{\d^3 k}{E|\mathbf{k}|} \delta(m_{\pi} - E - |\mathbf{k}|) (E |\mathbf{k}| + |\mathbf{k}|^2)
+\end{align*}
+To simplify this further, we use the property
+\[
+ \delta(f(k)) = \sum_i \frac{\delta(k - k_0^i)}{|f'(k_0^i)|},
+\]
+where $k_0^i$ runs over all roots of $f$. In our case, we have a unique root
+\[
+ k_0 =\frac{m_\pi^2 - m_e^2}{2m_\pi},
+\]
+and the corresponding derivative is
+\[
+ |f'(k_0)| = 1 + \frac{k_0}{E}.
+\]
+Then we get
+\begin{align*}
+ \Gamma_{\pi \to e \bar{\nu}_e} &= \frac{|G_F F_\pi m_e V_{ud}|^2}{4 \pi^2 m_\pi}\int \frac{4\pi k^2 \;\d k}{E} \frac{E + k}{1 + k_0/E} \delta(k - k_0)\\
+ &= \frac{|G_F F_\pi V_{ud}|^2}{4\pi} m_e^2 m_\pi \left(1 - \frac{m_e^2}{m_\pi^2}\right)^2.
+\end{align*}
+Note that if we set $m_e \to 0$, then this vanishes. This time, it is the whole decay rate, and not just some particular decay channel.
+
+Let's try to match this with experiment. Instead of looking at the actual lifetime, we compare it with some other possible decay rates. The expression for $\Gamma_{\pi \to \mu \bar{\nu}_\mu}$ is exactly the same, with $m_e$ replaced with $m_\mu$. So the ratio
+\[
+ \frac{\Gamma_{\pi \to e\bar{\nu}_e}}{\Gamma_{\pi \to \mu \bar{\nu}_\mu}} = \frac{m_e^2}{m_\mu^2} \left(\frac{m_\pi^2 - m_e^2}{m_\pi^2 - m_\mu^2}\right)^2 \approx 1.28 \times 10^{-4}.
+\]
+Here all the decay constants cancel out. So we can actually compare this with experiment just by knowing the electron and muon masses. When we actually do experiments, we find $1.230(4) \times 10^{-4}$. This is pretty good agreement, but not agreement to within experimental error. Of course, this is not unexpected, because we didn't include the quantum loop effects in our calculations.
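Since all the decay constants cancel, the ratio really is fixed by the three masses alone, and we can reproduce the quoted number directly. A quick check in Python (PDG masses assumed, in GeV):

```python
m_e, m_mu, m_pi = 0.000510999, 0.105658, 0.139570  # GeV

ratio = (m_e**2 / m_mu**2) * ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2))**2
print(ratio)  # ~1.28e-4
```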
+
+Another thing we can see is that the ratio is very small, on the order of $10^{-4}$. This we can understand from helicity suppression, because $m_\mu \gg m_e$.
+
+Note that we were able to get away without knowing how QCD works!
+
+\subsection{\tph{$K^0\mdash \bar{K}^0$}{K0-K0}{K0-K0} mixing}
+We now move on to consider our final example of weak decays, and look at $K^0$-$\bar{K}^0$ mixing. We will only do this rather qualitatively, and look at the effects of CP violation.
+
+\term{Kaons} contain a strange quark/antiquark. There are four ``flavour eigenstates'' given by
+\[
+ K^0 (\bar{s} d),\quad \bar{K}^0 (\bar{d} s),\quad K^+(\bar{s} u),\quad K^-(\bar{u} s).
+\]
+These are the lightest kaons, and they have spin $J = 0$ and negative parity. We can concisely write this information as $J^P = 0^-$. These are pseudoscalars.
+
+We want to understand how these things transform under CP. Parity transformation doesn't change the particle contents, and charge conjugation swaps particles and anti-particles. Thus, we would expect CP to send $K^0$ to $\bar{K}^0$, and vice versa. For kaons at rest, we can pick the relative phases such that
+\begin{align*}
+ \hat{C} \hat{P} \bket{K^0} &= - \bket{\bar{K}^0}\\
+ \hat{C} \hat{P} \bket{\bar{K}^0} &= - \bket{K^0}.
+\end{align*}
+So the CP eigenstates are just
+\[
+ \bket{K_\pm^0} = \frac{1}{\sqrt{2}} (\bket{K^0} \mp \bket{\bar{K}^0}).
+\]
+Then we have
+\[
+ \hat{C}\hat{P} \bket{K_{\pm}^0} = \pm \bket{K_{\pm}^0}.
+\]
+Let's consider the two possible decays $K^0 \to \pi^0 \pi^0$ and $K^0 \to \pi^+ \pi^-$. This requires converting one of the strange quarks into an up or down quark.
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (tl);
+ \vertex [below=0.5cm of tl] (bl);
+ \vertex [right=2cm of bl] (bc);
+ \vertex [right=2cm of bc] (br);
+ \vertex [above=0.5cm of br] (tr);
+
+ \vertex [below=1cm of bc] (d);
+
+ \vertex [right=2cm of d] (r);
+ \vertex [above=0.25cm of r] (ru);
+ \vertex [below=0.25cm of r] (rd);
+
+ \diagram*{
+ (tl) -- [fermion, edge label=$d$] (tr),
+ (bl) -- [anti fermion, edge label'=$\bar{s}$] (bc),
+ (bc) -- [anti fermion, edge label'=$\bar{u}$] (br),
+ (bc) -- [photon, edge label'=$W$] (d),
+ (d) -- [fermion, edge label=$u$] (ru),
+ (d) -- [anti fermion, edge label'=$\bar{d}$] (rd),
+ };
+ \end{feynman}
+
+ \draw [fill=white] (0, -0.25) ellipse (0.1 and 0.3); % make this more beautiful
+ \node [left] at (-0.1, -0.25) {$K^0$};
+
+ \draw [fill=white] (4, -0.25) ellipse (0.1 and 0.3);
+ \node [right] at (4.1, -0.25) {$\pi^-$};
+
+ \draw [fill=white] (4, -1.5) ellipse (0.1 and 0.3);
+ \node [right] at (4.1, -1.5) {$\pi^+$};
+ \end{tikzpicture}
+\end{center}
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex at (0, 0) (tl);
+ \vertex at (4, 0) (r1);
+ \vertex at (4, -0.5) (r2);
+ \vertex at (4, -1) (r3);
+ \vertex at (4, -1.5) (r4);
+ \vertex at (0, -1.5) (bl);
+ \vertex at (2, -1.5) (bc);
+ \vertex at (2, -0.75) (c);
+ \diagram*{
+ (tl) -- [fermion, edge label=$d$] (r1),
+ (bl) -- [anti fermion, edge label'=$\bar{s}$] (bc) -- [anti fermion, edge label'=$\bar{u}$] (r4),
+ (bc) -- [photon, edge label=$W$] (c),
+ (c) -- [anti fermion, edge label=$\bar{d}$] (r2),
+ (c) -- [fermion, edge label'=$u$] (r3)
+ };
+ \end{feynman}
+
+ \draw [fill=white] (0, -0.75) ellipse (0.1 and 0.8); % make this more beautiful
+ \node [left] at (-0.1, -0.75) {$K^0$};
+
+ \draw [fill=white] (4, -0.25) ellipse (0.1 and 0.3);
+ \node [right] at (4.1, -0.25) {$\pi^0$};
+
+ \draw [fill=white] (4, -1.25) ellipse (0.1 and 0.3);
+ \node [right] at (4.1, -1.25) {$\pi^0$};
+ \end{tikzpicture}
+\end{center}
+From the conservation of angular momentum, the total angular momentum of $\pi \pi$ is zero. Since they are also spinless, the orbital angular momentum $L = 0$.
+
+When we apply CP to the final states, we note that the relative phases of $\pi^+$ and $\pi^-$ (or $\pi^0$ and $\pi^0$) cancel out each other. So we have
+\[
+ \hat{C}\hat{P} \bket{\pi^+\pi^-} = (-1)^L \bket{\pi^+\pi^-} = \bket{\pi^+ \pi^-}.
+\]
+Similarly, we have
+\[
+ \hat{C}\hat{P} \bket{\pi^0 \pi^0} = \bket{\pi^0 \pi^0}.
+\]
+Therefore $\pi \pi$ is always a CP eigenstate with eigenvalue $+1$.
+
+What does this tell us about the possible decays? We know that CP is conserved by the strong and electromagnetic interactions. If it were conserved by the weak interaction as well, then there is a restriction on what can happen. We know that
+\[
+ K^0_+ \to \pi \pi
+\]
+is allowed, because both sides have CP eigenvalue $+1$, but
+\[
+ K^0_- \to \pi \pi
+\]
+is not. So $K^0_+$ is ``short-lived'', and $K^0_-$ is ``long-lived''. Of course, $K^0_-$ will still decay, but must do so via more elaborate channels, e.g.\ $K^0_- \to \pi \pi \pi$.
+
+Does this agree with experiments? When we actually look at Kaons, we find two neutral Kaons, which we shall call $K^0_S$ and $K_L^0$. As the subscripts suggest, $K^0_S$ has a ``short'' lifetime of $\tau \approx \SI{9e-11}{\second}$, while $K^0_L$ has a ``long'' lifetime of $\tau \approx \SI{5e-8}{\second}$.
+
+But does this actually show that CP is not violated? Not necessarily. For this to be the case, we want to make sure $K_L^0$ never decays to $\pi\pi$. We define the quantities
+\[
+ \eta_{+ -} = \frac{|\brak{\pi^+ \pi^-}H \bket{K_L^0}|}{|\brak{\pi^+ \pi^-}H \bket{K_S^0}|},\quad \eta_{0 0} = \frac{|\brak{\pi^0 \pi^0}H \bket{K_L^0}|}{|\brak{\pi^0 \pi^0}H \bket{K_S^0}|}
+\]
+Experimentally, when we measure these things, we have
+\[
+ \eta_{+-} \approx \eta_{00} \approx 2.2\times 10^{-3} \not= 0.
+\]
+So $K_L^0$ does decay into $\pi \pi$.
+
+If we think about what is going on here, there are two ways CP can be violated:
+\begin{itemize}
+ \item Direct CP violation of $s \to u$ due to a phase in $V_{CKM}$. % what is this?
+ \item Indirect CP violation due to $K^0 \to \bar{K}^0$ or vice-versa, then decaying.
+\end{itemize}
+Of course, ultimately, the ``indirect violation'' is still due to phases in the CKM matrix, but the second mechanism operates at a ``higher level''.
+
+It turns out in this particular process, it is the indirect CP violation that is mainly responsible, and the dominant contributions are ``box diagrams'', where the change in strangeness $\Delta S = 2$.
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (ull);
+ \vertex (ul) [right=of ull];
+ \vertex (ur) [right=of ul];
+ \vertex (urr) [right=of ur];
+
+ \vertex (dll) [below=1cm of ull];
+ \vertex (dl) [right=of dll];
+ \vertex (dr) [right=of dl];
+ \vertex (drr) [right=of dr];
+
+ \diagram*{
+ (ull) -- [fermion, edge label=$d$] (ul) -- [fermion, edge label={$u, c, t$}] (ur) -- [fermion, edge label=$s$] (urr),
+ (dll) -- [anti fermion, edge label'=$\bar{s}$] (dl) -- [anti fermion, edge label'={$\bar{u}, \bar{c}, \bar{t}$}] (dr) -- [anti fermion, edge label'=$\bar{d}$] (drr),
+ (ul) -- [photon, edge label=$W$] (dl),
+ (ur) -- [photon, edge label=$W$] (dr),
+ };
+ \end{feynman}
+ \draw [fill=white] (0, -0.5) ellipse (0.1 and 0.55); % make this more beautiful
+ \node [left] at (-0.1, -0.5) {$K^0$};
+ \draw [fill=white] (4.5, -0.5) ellipse (0.1 and 0.55); % make this more beautiful
+ \node [right] at (4.6, -0.5) {$\bar K^0$};
+ \end{tikzpicture}
+\end{center}
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (ull);
+ \vertex (ul) [right=of ull];
+ \vertex (ur) [right=of ul];
+ \vertex (urr) [right=of ur];
+
+ \vertex (dll) [below=1cm of ull];
+ \vertex (dl) [right=of dll];
+ \vertex (dr) [right=of dl];
+ \vertex (drr) [right=of dr];
+
+ \diagram*{
+ (ull) -- [fermion, edge label=$d$] (ul) -- (dl) -- [fermion, edge label=$\bar{s}$] (dll),
+ (urr) -- [anti fermion, edge label'=$s$] (ur) -- (dr) -- [anti fermion, edge label'=$\bar{d}$] (drr),
+ (ul) -- [photon, edge label=$W$] (ur),
+ (dl) -- [photon, edge label'=$W$] (dr),
+ };
+ \draw [fill=white] (0, -0.5) ellipse (0.1 and 0.55); % make this more beautiful
+ \node [left] at (-0.1, -0.5) {$K^0$};
+ \draw [fill=white] (4.5, -0.5) ellipse (0.1 and 0.55);
+ \node [right] at (4.6, -0.5) {$\bar K^0$};
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+Given our experimental results, we know that $K_S^0$ and $K_L^0$ aren't \emph{quite} $K_+^0$ and $K_-^0$ themselves, but have some corrections. We can write them as
+\begin{align*}
+ \bket{K_S^0} &= \frac{1}{\sqrt{1 + |\varepsilon_1|^2}} (\bket{K_+^0} + \varepsilon_1 \bket{K_-^0}) \approx \bket{K_+^0}\\
+ \bket{K_L^0} &= \frac{1}{\sqrt{1 + |\varepsilon_2|^2}} (\bket{K_-^0} + \varepsilon_2 \bket{K_+^0}) \approx \bket{K_-^0},
+\end{align*}
+where $\varepsilon_1, \varepsilon_2 \in \C$ are some small complex numbers. This way, very occasionally, $K_L^0$ can decay as $K_+^0$.
+
+We assume that we just have two-state mixing, and ignore details of the strong interaction. Then as time progresses, we can write
+\begin{align*}
+ \bket{K_S(t)} &= a_S(t) \bket{K^0} + b_S(t) \bket{\bar{K}^0}\\
+ \bket{K_L(t)} &= a_L(t) \bket{K^0} + b_L(t) \bket{\bar{K}^0}
+\end{align*}
+for some (complex) functions $a_S, b_S, a_L, b_L$. Recall that Schr\"odinger's equation says
+\[
+ i \frac{\d}{\d t} \bket{\psi(t)} = H \bket{\psi(t)}.
+\]
+Thus, we can write
+\[
+ i \frac{\d}{\d t}
+ \begin{pmatrix}
+ a\\ b
+ \end{pmatrix} =
+ R
+ \begin{pmatrix}
+ a\\b
+ \end{pmatrix},
+\]
+where
+\[
+ R =
+ \begin{pmatrix}
+ \brak{K^0}H' \bket{K^0} & \brak{K^0}H' \bket{\bar{K}^0}\\
+ \brak{\bar{K}^0}H' \bket{K^0} & \brak{\bar{K}^0}H' \bket{\bar{K}^0}
+ \end{pmatrix}
+\]
+and $H'$ is the next-to-leading order weak Hamiltonian. Because Kaons decay in finite time, we know $R$ is not Hermitian. By general linear algebra, we can always write it in the form
+\[
+ R = M - \frac{i}{2} \Gamma,
+\]
+where $M$ and $\Gamma$ are Hermitian. We call $M$ the \term{mass matrix}, and $\Gamma$ the \term{decay matrix}.
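This decomposition is nothing more than splitting $R$ into its Hermitian and anti-Hermitian parts, $M = \frac{1}{2}(R + R^\dagger)$ and $\Gamma = i(R - R^\dagger)$, which works for any matrix. A small numerical illustration in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # generic non-Hermitian R

M = (R + R.conj().T) / 2        # Hermitian "mass matrix"
Gam = 1j * (R - R.conj().T)     # Hermitian "decay matrix"

print(np.allclose(M, M.conj().T), np.allclose(Gam, Gam.conj().T))  # True True
print(np.allclose(R, M - 0.5j * Gam))                              # True
```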
+
+We are not going to actually compute $R$, but we are going to use the known symmetries to put some constraint on $R$. We will consider the action of $\Theta = \hat{C}\hat{P}\hat{T}$. The CPT theorem says observables should be invariant under conjugation by $\Theta$. So if $A$ is Hermitian, then $\Theta A \Theta^{-1} = A$. Now our $H'$ is not actually Hermitian, but as above, we can write it as
+\[
+ H' = A + i B,
+\]
+where $A$ and $B$ are Hermitian. Now noting that $\Theta$ is \emph{anti-unitary}, we have
+\[
+ \Theta H' \Theta^{-1} = A - iB = H'^\dagger.
+\]
+In the rest frame of the kaons, we know $\hat{T} \bket{K^0} = \bket{K^0}$, and similarly for $\bar{K}^0$. So we have
+\[
+ \Theta \bket{\bar{K}^0} = - \bket{K^0},\quad \Theta \bket{K^0} = - \bket{\bar{K}^0}.
+\]
+Since we are going to involve time reversal, we stop using bra-ket notation for the moment. We have
+\begin{multline*}
+ R_{11} = (K^0, H' K^0)
+ = (\Theta^{-1}\Theta K^0, H' \Theta^{-1}\Theta K^0)
+ = (\bar{K}^0, H'^\dagger \bar{K}^0)^*\\
+ = (H' \bar{K}^0, \bar{K}^0)^*
+ = (\bar K^0, H' \bar{K}^0)
+ = R_{22}
+\end{multline*}
+Now \emph{if} $\hat{T}$ was a good symmetry (i.e.\ $\hat{C}\hat{P}$ is good as well), then a similar computation shows that
+\[
+ R_{12} = R_{21}.
+\]
+We can show that we in fact have % maybe insert this
+\[
+ \varepsilon_1 = \varepsilon_2 = \varepsilon = \frac{\sqrt{R_{12}} - \sqrt{R_{21}}}{\sqrt{R_{12}} + \sqrt{R_{21}}}.
+\]
+So if CP is conserved, then $R_{12} = R_{21}$, and therefore $\varepsilon_1 = \varepsilon_2 = \varepsilon = 0$.
+
+Thus, we see that if we want to have mixing, then we must have $\varepsilon_1, \varepsilon_2 \not= 0$. So we need $R_{12} \not =R_{21}$. In other words, we must have CP violation!
+
+One can also show that
+\[
+ \eta_{+-} = \varepsilon + \varepsilon',\quad \eta_{00} = \varepsilon - 2 \varepsilon',
+\]
+where $\varepsilon'$ measures the direct source of CP violation. By looking at these two modes of decay, we can figure out the values of $\varepsilon$ and $\varepsilon'$. Experimentally, we find
+\[
+ |\varepsilon| = (2.228 \pm 0.011) \times 10^{-3},
+\]
+and
+\[
+ \left|\frac{\varepsilon'}{\varepsilon}\right| = (1.66 \pm 0.23) \times 10^{-3}.
+\]
+As claimed, it is the indirect CP violation that is dominant, and the direct one is suppressed by a factor of $10^{-3}$.
+
+Other decays can be used to probe $K_{L, S}^0$. For example, we can look at semi-leptonic decays. We can check that
+\[
+ K^0 \to \pi^- e^+ \nu_e
+\]
+is possible, while
+\[
+ K^0 \to \pi^+ e^- \bar{\nu}_e
+\]
+is not. $\bar{K}^0$ has the opposite phenomenon. To show these, we just have to try to write down diagrams for these events. % insert diagram
+
+Now if CP is conserved, we'd expect the decay rates
+\[
+ \Gamma(K_{L, S}^0 \to \pi^- e^+ \nu_e) = \Gamma(K_{L, S}^0 \to \pi^+ e^- \bar{\nu}_e),
+\]
+since we expect $K_{L, S}$ to consist of the same amount of $K^0$ and $\bar{K}^0$.
+
+We define
+\[
+ A_L = \frac{\Gamma(K_L^0 \to \pi^- e^+ \nu_e) - \Gamma(K_L^0 \to \pi^+ e^- \bar{\nu}_e)}{\Gamma(K_L^0 \to \pi^- e^+ \nu_e) + \Gamma(K_L^0 \to \pi^+ e^- \bar{\nu}_e)}.
+\]
+If this is non-zero, then we have evidence for CP violation. Experimentally, we find
+\[
+ A_L = (3.32 \pm 0.06) \times 10^{-3} \approx 2 \Re (\varepsilon).
+\]
+This is small, but certainly significantly non-zero.
+
+% We can actually use CP violation to tell some aliens far far away what we mean by matter and anti-matter.
+
+\section{Quantum chromodynamics (QCD)}
+In the early days of particle physics, we didn't really know what we were doing. So we just smashed particles into each other and saw what happened. Our initial particle accelerators weren't very good, so we mostly observed some low energy particles.
+
+Of course, we found electrons, but the more interesting discoveries were in the hadrons. We had protons and neutrons, $n$ and $p$, as well as pions $\pi^+, \pi^0$ and $\pi^-$. We found that $n$ and $p$ behaved rather similarly, with similar interaction properties and masses. On the other hand, the three pions behaved like each other as well. Of course, they had different charges, so this is not a genuine symmetry.
+
+Nevertheless, we decided to assign numbers to these things, called \term{isospin}. We say $n$ and $p$ have isospin $I = \frac{1}{2}$, while the pions have isospin $I = 1$. The idea was that if a particle has spin $\frac{1}{2}$, then it has two independent spin states; if it has spin $1$, then it has three independent spin states. Thus, we view $n, p$ as different ``spin states'' of the same object, and similarly $\pi^{\pm}, \pi^0$ are the three ``spin states'' of the same object. Of course, isospin has nothing to do with actual spin.
+
+As in the case of spin, we have spin projections $I_3$. So for example, $p$ has $I_3 = + \frac{1}{2}$ and $n$ has $I_3 = -\frac{1}{2}$. Similarly, $\pi^+, \pi^0$ and $\pi^-$ have $I_3 = +1, 0, -1$ respectively. Mathematically, we can think of these particles as living in representations of $\su(2)$. Each ``group'' $\{n, p\}$ or $\{\pi^+, \pi^0, \pi^-\}$ corresponded to a representation of $\su(2)$, and the isospin labelled the representation they belonged to. The eigenvectors corresponded to the individual particle states, and the isospin projection $I_3$ referred to the corresponding eigenvalue.
+
+That might have seemed like a stretch to invoke representation theory. We then built better particle accelerators, and found more particles. These new particles were quite strange, so we assigned a number called \emph{strangeness} to measure how strange they are. Four of these particles behaved quite like the pions, and we called them \emph{Kaons}. Physicists then got bored and plotted out these particles according to isospin and strangeness:
+\begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw [gray] (-2, 0) -- (-1, -1.732);
+ \draw [gray] (2, 0) -- (1, 1.732);
+ \draw [gray] (1, -1.732) -- (-1, 1.732);
+
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$I_3$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$S$};
+ \node [circ] at (2, 0) {};
+ \node [above] at (2, 0) {$\pi^+$};
+ \node [circ] at (1, 1.732) {};
+ \node [above] at (1, 1.732) {$K^+$};
+ \node [circ] at (1, -1.732) {};
+ \node [below] at (1, -1.732) {$\bar{K}^0$};
+ \node [circ] at (-2, 0) {};
+ \node [above] at (-2, 0) {$\pi^-$};
+ \node [circ] at (-1, 1.732) {};
+ \node [above] at (-1, 1.732) {$K^0$};
+ \node [circ] at (-1, -1.732) {};
+ \node [below] at (-1, -1.732) {$K^-$};
+ \node [circ] at (0, 0) {};
+ \node [anchor = north east] at (0, 0) {$\eta$};
+ \node [anchor = south west] at (0, 0) {$\pi^0$};
+ \end{tikzpicture}
+\end{center}
+
+Remarkably, the diagonal lines join together particles of the same charge! Something must be going on here. It turns out if we include these ``strange'' particles into the picture, then instead of a representation of $\su(2)$, we now have representations of $\su(3)$. Indeed, this just looks like a weight diagram of $\su(3)$.
+
+Ultimately, we figured that things are made out of quarks. We now know that there are $6$ quarks, but that's too many for us to handle. The last three quarks are very heavy. They weren't very good at forming hadrons, and their large mass means the particles they form no longer ``look alike''. So we only focus on the first three.
+
+At first, we only discovered things made up of up quarks and down quarks. We can think of these quarks as living in the fundamental representation $V_1$ of $\su(2)$, with
+\[
+ u =
+ \begin{pmatrix}
+ 1 \\ 0
+ \end{pmatrix},\quad
+ d =
+ \begin{pmatrix}
+ 0 \\ 1
+ \end{pmatrix}.
+\]
+These are eigenvectors of the Cartan generator $H$, with weights $+\frac{1}{2}$ and $-\frac{1}{2}$ (using the ``physicist's'' way of numbering). The idea is that physics is approximately invariant under the action of $\su(2)$ that mixes $u$ and $d$. Thus, different hadrons made out of $u$ and $d$ might look alike. Nowadays, we know that the QCD part of the Lagrangian is exactly invariant under the $\SU(2)$ action, while the other parts are not.
+
+The anti-quarks lived in the anti-fundamental representation (which is also the fundamental representation). A meson is made of a quark and an anti-quark. So they live in the tensor product
+\[
+ V_1 \otimes V_1 = V_0 \oplus V_2.
+\]
+The $V_2$ was the pions we found previously. Similarly, the protons and neutrons consist of three quarks, and live in
+\[
+ V_1 \otimes V_1 \otimes V_1 = (V_0 \oplus V_2) \otimes V_1 = V_1 \oplus V_1 \oplus V_3.
+\]
+One of the $V_1$'s contains the protons and neutrons.
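As a quick sanity check on these decompositions, we can count dimensions: $V_n$ has dimension $n + 1$. The sketch below (the helper `dim` is ours, purely for illustration) verifies the counting:

```python
# Dimension check for the su(2) decompositions above.
# V_n is the irreducible representation of isospin n/2, of dimension n + 1.
def dim(n):
    return n + 1

# Mesons: V_1 (x) V_1 = V_0 (+) V_2, i.e. 2 x 2 = 1 + 3.
assert dim(1) * dim(1) == dim(0) + dim(2)
# Baryons: V_1 (x) V_1 (x) V_1 = V_1 (+) V_1 (+) V_3, i.e. 8 = 2 + 2 + 4.
assert dim(1) ** 3 == dim(1) + dim(1) + dim(3)
```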
+
+The ``strange'' hadrons contain what is known as the strange quark, $s$. This is significantly more massive than the $u$ and $d$ quarks, but is not \emph{too} far off, so we still get a reasonable approximate symmetry. This time, we have three quarks, and they fall into an $\su(3)$ representation,
+\[
+ u =
+ \begin{pmatrix}
+ 1 \\ 0 \\ 0
+ \end{pmatrix},\quad
+ d =
+ \begin{pmatrix}
+ 0 \\ 1 \\ 0
+ \end{pmatrix},\quad
+ s =
+ \begin{pmatrix}
+ 0 \\ 0 \\ 1
+ \end{pmatrix}.
+\]
+This is the fundamental $\mathbf{3}$ representation, while the anti-quarks live in the anti-fundamental $\bar{\mathbf{3}}$. These decompose as
+\begin{align*}
+ \mathbf{3} \otimes \bar{\mathbf{3}} &= \mathbf{1} \oplus \mathbf{8}\\
+ \mathbf{3} \otimes \mathbf{3} \otimes \mathbf{3} &= \mathbf{1} \oplus \mathbf{8} \oplus \mathbf{8} \oplus \mathbf{10}.
+\end{align*}
+The quantum numbers correspond to the weights of the eigenvectors, and hence when we plot the particles according to quantum numbers, they fall in such a nice lattice.
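We can again check dimensions, this time using the Weyl dimension formula for $\su(3)$: the irrep with Dynkin labels $(p, q)$ has dimension $\frac{1}{2}(p+1)(q+1)(p+q+2)$. A small illustrative sketch:

```python
# Weyl dimension formula for su(3): the irrep with Dynkin labels (p, q)
# has dimension (p + 1)(q + 1)(p + q + 2) / 2.
def dim(p, q):
    return (p + 1) * (q + 1) * (p + q + 2) // 2

assert dim(1, 0) == 3    # fundamental 3
assert dim(0, 1) == 3    # anti-fundamental 3bar
assert dim(1, 1) == 8    # octet
assert dim(3, 0) == 10   # decuplet
# The dimensions balance on both sides of the decompositions:
assert dim(1, 0) * dim(0, 1) == dim(0, 0) + dim(1, 1)            # 9 = 1 + 8
assert dim(1, 0) ** 3 == dim(0, 0) + 2 * dim(1, 1) + dim(3, 0)   # 27 = 1 + 8 + 8 + 10
```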
+
+There are a few mysteries left to solve. Experimentally, we found a baryon $\Delta^{++} = uuu$ with spin $\frac{3}{2}$. The wavefunction appears to be symmetric, but this would violate Fermi statistics. This caused theorists to go and scratch their heads again and see what they can do to understand this. Also, we couldn't explain why we only had particles coming from $\mathbf{3} \otimes \bar{\mathbf{3}}$ and $\mathbf{3} \otimes \mathbf{3} \otimes \mathbf{3}$, and nothing else.
+
+The resolution is that we need an extra quantum number. This quantum number is called \term{colour}. This resolved the problem of Fermi statistics, but also, we postulated that any bound state must have no ``net colour''. Effectively, this meant we needed to have groups of three quarks or three anti-quarks, or a quark-antiquark pair. This leads to the idea of \term{confinement}. This principle predicted the $\Omega^-$ baryon $sss$ with spin $\frac{3}{2}$, which was subsequently observed.
+
+Nowadays, we understand this is all due to a $\SU(3)$ gauge symmetry, which is \emph{not} the $\SU(3)$ we encountered just now. This is what we are going to study in this chapter.
+
+\subsection{QCD Lagrangian}
+The modern description of the strong interaction of quarks is \term{quantum chromodynamics}, \term{QCD}. This is a gauge theory with an $\SU(3)_C$ gauge group. The strong force is mediated by gauge bosons known as \term{gluons}. This gauge symmetry is exact, and the gluons are massless.
+
+In QCD, each flavour of quark comes in three ``copies'' of different colour. It is conventional to call these colours red, green and blue, even though they have nothing to do with actual colours. For a flavour $f$, we can write these as $q_f^{\mathrm{\color{red} red}}$, $q_f^{\mathrm{\color{mgreen} green}}$ and $q_f^{\mathrm{\color{blue} blue}}$. We can put these into a triplet:
+\[
+ q_f =
+ \begin{pmatrix}
+ q_f^{\mathrm{\color{red} red}}\\
+ q_f^{\mathrm{\color{mgreen} green}}\\
+ q_f^{\mathrm{\color{blue} blue}}
+ \end{pmatrix}.
+\]
+Then QCD says this has an $\SU(3)$ gauge symmetry, where the triplet transforms under the fundamental representation. Since this symmetry is exact, quarks of all three colours behave exactly the same, and except when we are actually doing QCD, it doesn't matter that there are three colours.
+
+We do this for each quark individually, and then the QCD Lagrangian is given by
+\[
+ \mathcal{L}_{\mathrm{QCD}} = -\frac{1}{4} F^{a\mu\nu} F^a_{\mu\nu} + \sum_f \bar{q}_f (i \slashed{\D} - m_f) q_f,
+\]
+where, as usual,
+\begin{align*}
+ \D_\mu &= \partial_\mu + i g A_\mu^a T^a\\
+ F_{\mu\nu}^a &= \partial_\mu A_\nu^a - \partial_\nu A_\mu^a - g f^{abc} A_\mu^b A_\nu^c.
+\end{align*}
+Here $T^a$ for $a = 1, \cdots, 8$ are generators of $\su(3)$, and, as usual, satisfy
+\[
+ [T^a, T^b] = i f^{abc}T^c.
+\]
+One possible choice of generators is
+\[
+ T^a = \frac{1}{2} \lambda^a,
+\]
+where the $\lambda^a$ are the \term{Gell-Mann matrices}. The fact that we have 8 independent generators means we have 8 gluons.
+
+One thing that is very different about QCD is that it has interactions between gauge bosons. If we expand the Lagrangian, and think about the tree level interactions that take place, we naturally have interactions that look like
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (c);
+ \vertex [below left=of c] (bl);
+ \vertex [below right=of c] (br);
+ \vertex [above=of c] (t);
+
+ \diagram*{
+ (bl) -- [fermion] (c) -- [fermion] (br),
+ (t) -- [gluon] (c),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+but we also have three and four-gluon interactions
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (c);
+ \vertex [below left=of c] (bl);
+ \vertex [below right=of c] (br);
+ \vertex [above=of c] (t);
+
+ \diagram*{
+ (bl) -- [gluon] (c) -- [gluon] (br),
+ (t) -- [gluon] (c),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (c);
+ \vertex [above right=of c] (1);
+ \vertex [above left=of c] (2);
+ \vertex [below right=of c] (3);
+ \vertex [below left=of c] (4);
+ \diagram* {
+ (1) -- [gluon] (c),
+ (2) -- [gluon] (c),
+ (3) -- [gluon] (c),
+ (4) -- [gluon] (c),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+Mathematically, this is due to the non-abelian nature of $\SU(3)$, and physically, we can think of this as saying the gluons themselves carry colour charge.
+
+\subsection{Renormalization}
+We now spend some time talking about renormalization of QCD. We didn't talk about renormalization when we did electroweak theory, because the effect is much less pronounced in that case. Renormalization is treated much more thoroughly in the Advanced Quantum Field Theory course, as well as Statistical Field Theory. Thus, we will just briefly mention the key ideas and results for QCD.
+
+QCD has a coupling constant, which we shall call $g$. The idea of renormalization is that this coupling constant should depend on the energy scale $\mu$ we are working with. We write this as $g(\mu)$. However, this dependence on $\mu$ is not arbitrary. The physics we obtain should not depend on the renormalization point $\mu$ we chose. This imposes some restrictions on how $g(\mu)$ depends on $\mu$, and this allows us to define and compute the quantity
+\[
+ \beta(g(\mu)) = \mu \frac{\d}{\d \mu} g(\mu).
+\]
+%
+%A theory has a Lagrangian $\mathcal{L}$ that contains a set of couplings $g_i$, which for convenience we take to include the masses. For each of these, we need a physical/observed/derived quantity $g_i^0$ and an expression (renormalization condition)
+%\[
+% g_i^0 = G_i^0 (\{g_j(\mu)\}, \mu),
+%\]
+%where $\{g_j(\mu)\} \equiv g(\mu)$ are called \emph{renormalized couplings}, and $\mu$ is the \term{renormalization point}.
+%
+%We are going to consider perturbative expressions for $G_i^0$.
+%
+%The physics should not depend on the renormalization point $\mu$, and this tells us how $g(\mu)$ should vary with $\mu$. We define
+%\[
+% \beta_j(g(\mu), \mu) = \mu \frac{\d}{\d \mu} g_j(\mu).
+%\]
+%We want $g_i^0$ not to depend on $\mu$. So we get
+%\[
+% 0 = i \mu \frac{\d}{\d \mu} G_i^0(g(\mu), \mu) = \left(\mu \frac{\partial}{\partial \mu} + \beta_j \frac{\partial}{\partial g_j}\right) G_i^0(g_j(\mu), \mu).
+%\]
+
+The $\beta$-function for non-abelian gauge theories typically looks like
+\[
+ \beta(g) = - \frac{\beta_0 g^3}{16 \pi^2} + O(g^5)
+\]
+for some constant $\beta_0$. For an $\SU(N)$ gauge theory coupled to fermions $\{f\}$ (both left- and right-handed), up to one-loop order, we have
+\[
+ \beta_0 = \frac{11}{3}N - \frac{4}{3} \sum_f T_f,
+\]
+where $T_f$ is the \term{Dynkin index} of the representation of the fermion $f$. For the fundamental representation, which is all we are going to care about, we have $T_f = \frac{1}{2}$.
+%If $t_f^a$ are the generators of the representations, then the Dynkin index is defined by
+%\[
+% \Tr (t_f^a t_f^b) = T_f \delta^{ab}. % what actually is this?
+%\]
+%We assume that the left-handed and right-handed fermions couple equally. For the fundamental representation, we have $T_f = \frac{1}{2}$.
+%
+%The 1-loop expression for QCD is given by
+%\[
+% \beta_0 = 11 - \frac{2}{3} n_f,
+%\]
+%where $n_f$ is the number of quark flavours.
+
+In our model of QCD, we have $N = 3$ and $6$ quark flavours. So
+\[
+ \beta_0 = 11 - 4 = 7.
+\]
+So we find that the $\beta$-function is always negative!
+
+This isn't actually quite it. The number of ``active'' quarks depends on the energy scale. At energies $\ll m_{\mathrm{top}} \approx \SI{173}{\giga\electronvolt}$, the top quark is no longer active, and $n_f = 5$. At energies $\sim \SI{100}{\mega\electronvolt}$, we are left with three quarks, and then $n_f = 3$. Matching the $\beta$-functions between these regimes requires a bit of care, and we will not go into that. But in any case, the $\beta$-function is always negative.
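For the record, with $T_f = \frac{1}{2}$ the QCD coefficient is $\beta_0 = 11 - \frac{2}{3} n_f$, where $n_f$ is the number of active flavours. A quick check that $\beta_0 > 0$ in every regime (illustrative only; proper threshold matching is not attempted):

```python
from fractions import Fraction

def beta0(N, n_f):
    """One-loop coefficient for SU(N) with n_f fundamental flavours (T_f = 1/2)."""
    return Fraction(11, 3) * N - Fraction(4, 3) * n_f * Fraction(1, 2)

assert beta0(3, 6) == 7                  # all six quarks active
assert beta0(3, 5) == Fraction(23, 3)    # below the top mass
assert beta0(3, 3) == 9                  # only u, d, s active
# beta_0 > 0 for every physical n_f, so beta(g) < 0 throughout.
assert all(beta0(3, n_f) > 0 for n_f in range(7))
```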
+
+Often, we are not interested in the constant $g$ itself, but the \term{strong coupling}
+\[
+ \alpha_S = \frac{g^2}{4\pi}.
+\]
+It is an easy application of the chain rule to find that, to lowest order,
+\[
+ \mu \frac{\d \alpha_S}{\d \mu} = - \frac{\beta_0}{2\pi} \alpha_S^2.
+\]
+We now integrate this equation, and see what we get. We have
+\[
+ \int_{\alpha_S(\mu_0)}^{\alpha_S(\mu)} \frac{\d \alpha_S}{\alpha_S^2} = - \frac{\beta_0}{2\pi} \int_{\mu_0}^\mu \frac{\d \mu}{\mu}.
+\]
+So we find
+\[
+ \alpha_S(\mu) = \frac{2\pi}{\beta_0} \frac{1}{\log (\mu/\mu_0) + \frac{2\pi}{\beta_0 \alpha_S(\mu_0)}}.
+\]
+There is an energy scale where $\alpha_S$ diverges, which we shall call $\Lambda_{\mathrm{QCD}}$. This is given by
+\[
+ \log \Lambda_{\mathrm{QCD}} = \log \mu_0 - \frac{2\pi}{\beta_0 \alpha_S(\mu_0)}.
+\]
+In terms of $\Lambda_{\mathrm{QCD}}$, we can write $\alpha_S(\mu)$ as
+\[
+ \alpha_S(\mu) = \frac{2\pi}{\beta_0 \log (\mu/\Lambda_{\mathrm{QCD}})}.
+\]
+Note that in the way we defined it, $\beta_0$ is positive. So we see that $\alpha_S$ \emph{decreases} with increasing $\mu$. This is called \term{asymptotic freedom}. Thus, the divergence occurs at low $\mu$. This is rather different from, say, QED, which is the other way round.
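We can see asymptotic freedom numerically. The sketch below implements the one-loop formula; the value $\Lambda_{\mathrm{QCD}} = \SI{0.3}{\giga\electronvolt}$ is an assumption for illustration, not a measured input:

```python
import math

BETA0 = 7            # six active flavours, as computed above
LAMBDA_QCD = 0.3     # GeV; an assumed illustrative value

def alpha_s(mu):
    """One-loop running coupling, valid for mu > LAMBDA_QCD (mu in GeV)."""
    return 2 * math.pi / (BETA0 * math.log(mu / LAMBDA_QCD))

# The coupling decreases with increasing mu (asymptotic freedom)...
assert alpha_s(100.0) < alpha_s(10.0) < alpha_s(1.0)
# ...and the two forms of the solution agree: recovering Lambda_QCD
# from alpha_s at a reference scale mu_0 gives it back.
mu0 = 10.0
lam = mu0 * math.exp(-2 * math.pi / (BETA0 * alpha_s(mu0)))
assert abs(lam - LAMBDA_QCD) < 1e-12
```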
+
+Another important point to get out is that we haven't included any mass term yet, and so we do not have a natural ``energy scale'' given by the masses. Thus, $\mathcal{L}_{\mathrm{QCD}}$ is scale invariant, but quantization has led to a characteristic scale $\Lambda_{\mathrm{QCD}}$. This is called \term{dimensional transmutation}.
+
+What is this scale? This depends on what regularization and renormalization scheme we are using, and tends to be
+\[
+ \Lambda_{\mathrm{QCD}} \sim 200\mdash 500\SI{}{\mega\electronvolt}.
+\]
+We can think of this as approximately the scale of the border between perturbative and non-perturbative physics. Note that non-perturbative means low energies, because that is when the coupling is strong!
+
+Of course, we have to be careful, because these results were obtained by only looking up to one-loop, and so we cannot expect it to make sense at low energy scales.
+
+\subsection{\tph{$e^+ e^- \to $ hadrons}{e+e- -> hadrons}{e+e- → hadrons}}
+Doing QCD computations is very hard\textsuperscript{TM}. Partly, this is due to the problem of confinement. Confinement in QCD means it is impossible to observe free quarks. When we collide quarks together, we can potentially produce single quarks or anti-quarks. Then because of confinement, jets of quarks, anti-quarks and gluons would be produced and combine to form colour-singlet states. This process is known as \term{hadronization}.
+
+Confinement and hadronization are not very well understood, and these happen in the non-perturbative regime of QCD. We will not attempt to understand them here. Thus, to do computations, we will first \emph{ignore} hadronization, which admittedly isn't a very good idea. We then try to parametrize the hadronization part, and then see if we can go anywhere.
+
+Practically, our experiments often happen at very high energy scales. At these energy scales, $\alpha_S$ is small, and we can expect perturbation theory to work.
+
+We now begin by ignoring hadronization, and try to compute the amplitudes for the interaction
+\[
+ e^+ e^- \to q\bar{q}.
+\]
+The leading process is
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$e^-$};
+ \vertex [below left=of m1] (i2) {$e^+$};
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1) {$q$};
+ \vertex [below right=of m2] (f2) {$\bar{q}$};
+
+ \diagram*{
+ (i1) -- [fermion, momentum=$p_1$] (m1),
+ (i2) -- [anti fermion, momentum'=$p_2$] (m1),
+ (m2) -- [fermion, momentum'=$k_2$] (f2),
+ (m2) -- [fermion, momentum=$k_1$] (f1),
+ (m1) -- [photon, edge label=$\gamma$] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+We let
+\[
+ q = k_1 + k_2 = p_1 + p_2,
+\]
+and $Q$ be the quark charge. Repeating computations as before, and neglecting fermion masses, we find that
+\[
+ M = (-ie)^2 Q \bar{u}_q(k_1) \gamma^\mu v_q(k_2) \frac{-ig_{\mu\nu}}{q^2} \bar{v}_e (p_2) \gamma^\nu u_e(p_1).
+\]
+We average over initial spins and sum over final states. Then we have
+\begin{align*}
+ \frac{1}{4} \sum_{\mathrm{spins}} |M|^2 &= \frac{e^4 Q^2}{4 q^4} \Tr(\slashed{k}_1 \gamma^\mu \slashed{k}_2 \gamma^\nu) \Tr(\slashed{p}_1 \gamma_\mu \slashed{p}_2 \gamma_\nu)\\
+ &= \frac{8 e^4 Q^2}{q^4} \Big((p_1 \cdot k_1) (p_2 \cdot k_2) + (p_1 \cdot k_2) (p_2 \cdot k_1)\Big)\\
+ &= e^4 Q^2 (1 + \cos^2 \theta),
+\end{align*}
+where $\theta$ is the angle between $\mathbf{k}_1$ and $\mathbf{p}_1$, and we are working in the COM frame of the $e^+$ and $e^-$.
+
+Now what we actually want to do is to work out the cross section. We have
+\[
+ \d \sigma = \frac{1}{|\mathbf{v}_1 - \mathbf{v}_2|} \frac{1}{4 p_1^0 p_2^0} \frac{\d^3 k_1}{ (2\pi)^3 2 k_1^0} \frac{\d^3 k_2}{(2\pi)^3 2 k_2^0} (2\pi)^4 \delta^{(4)} (q - k_1 - k_2) \times \frac{1}{4}\sum_{\mathrm{spins}} |M|^2.
+\]
+We first take care of the $|\mathbf{v}_1 - \mathbf{v}_2|$ factor. Since $m = 0$ in our approximation, they travel at the speed of light, and we have $|\mathbf{v}_1 - \mathbf{v}_2| = 2$. Also, we note that $k_1^0 = k_2^0 \equiv |\mathbf{k}|$ due to working in the center of mass frame.
+
+Using these, and plugging in our expression for $|M|$, we have
+\[
+ \d \sigma = \frac{e^4 Q^2}{2} \cdot \frac{1}{16} \cdot \frac{\d^3 k_1 \d^3 k_2}{(2\pi)^2|\mathbf{k}|^4} \delta^{(4)} (q - k_1 - k_2) (1 + \cos^2\theta).
+\]
+This is all, officially, inside an integral, and if we are only interested in what directions things fly out, we can integrate over the momentum part. We let $\d \Omega$ denote the solid angle, and then
+\[
+ \d^3 k_1 = |\mathbf{k}|^2 \;\d |\mathbf{k}|\;\d \Omega.
+\]
+Then, integrating out some delta functions, we obtain
+\begin{align*}
+ \frac{\d \sigma}{\d \Omega} &= \int \d|\mathbf{k}| \frac{e^4 Q^2}{8 \pi^2 q^4} \frac{1}{2} \delta\left(\frac{\sqrt{q^2}}{2} - |\mathbf{k}|\right)(1 + \cos^2 \theta)\\
+ &= \frac{\alpha^2 Q^2}{4 q^2} (1 + \cos^2 \theta),
+\end{align*}
+where, as usual
+\[
+ \alpha = \frac{e^2}{4\pi}.
+\]
+We can integrate over all solid angle to obtain
+\[
+ \sigma(e^+ e^- \to q \bar{q}) = \frac{4\pi \alpha^2}{3 q^2}Q^2.
+\]
+We can compare this to $e^+ e^- \to \mu^+ \mu^-$, which is exactly the same, except we put $Q = 1$ because that's the charge of a muon.
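The angular integral used here is $\int \d \Omega\, (1 + \cos^2 \theta) = 2\pi \int_{-1}^1 (1 + c^2)\;\d c = \frac{16\pi}{3}$. A numerical check of this integration step (the values of $\alpha$, $Q$ and $q^2$ below are arbitrary):

```python
import math

def sigma_total(alpha, Q, q2, steps=200_000):
    """Midpoint-rule integration of dsigma/dOmega over the full solid angle."""
    dc = 2.0 / steps
    integral = 0.0
    for i in range(steps):
        c = -1.0 + (i + 0.5) * dc        # c = cos(theta)
        integral += (1.0 + c * c) * dc
    return 2 * math.pi * (alpha ** 2 * Q ** 2 / (4 * q2)) * integral

alpha, Q, q2 = 1 / 137, 2 / 3, 100.0     # arbitrary illustrative values
expected = 4 * math.pi * alpha ** 2 * Q ** 2 / (3 * q2)
assert abs(sigma_total(alpha, Q, q2) - expected) < 1e-8 * expected
```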
+
+But this isn't it. We want to include the effects of hadronization. Thus, we want to consider the decay of $e^+ e^-$ into any possible hadronic final state. For any final state $X$, the invariant amplitude is given by
+\[
+ M_X = \frac{e^2}{q^2} \brak{X} J^\mu_h \bket{0} \bar{v}_e (p_2) \gamma_\mu u_e(p_1),
+\]
+where
+\[
+ J^\mu_h = \sum_f Q_f \bar{q}_f \gamma^\mu q_f,
+\]
+and $Q_f$ is the quark charge. Then the total cross section is
+\[
+ \sigma(e^+ e^- \to \mathrm{hadrons}) = \frac{1}{8p_1^0 p_2^0} \sum_{X} \frac{1}{4} \sum_{\mathrm{spins}} (2\pi)^4 \delta^{(4)} (q - p_X) |M_X|^2.
+\]
+We can't compute the hadronic part perturbatively, because it is non-perturbative physics. So we are going to parametrize it in some way. We introduce a \term{hadronic spectral density}\index{$\rho_h^{\mu\nu}(q)$}
+\[
+ \rho_h^{\mu\nu} (q) = (2\pi)^3 \sum_{X, p_X} \delta^{(4)} (q - p_X) \brak{0}J_h^\mu\bket{X} \brak{X}J_h^\nu\bket{0}.
+\]
+We now do some dodgy maths. By current conservation, we have $q_\mu \rho^{\mu\nu} = 0$. Also, we know $X$ has positive energy. Then Lorentz covariance forces $\rho_h$ to take the form
+\[
+ \rho_h^{\mu\nu}(q) = (- g^{\mu\nu} q^2 + q^\mu q^\nu) \Theta(q^0) \rho_h (q^2).
+\]
+We can then plug this into the cross-section formula, and doing annoying computations, we find
+\begin{align*}
+ \sigma &= \frac{1}{8 p_1^0 p_2^0} \frac{(2\pi) e^4}{4 q^4} 4 (p_{1\mu} p_{2\nu} - p_1 \cdot p_2 g_{\mu\nu} + p_{1\nu} p_{2\mu})(-g^{\mu\nu} q^2 + q^\mu q^\nu) \rho_h(q^2)\\
+ &= \frac{16 \pi^3 \alpha^2}{q^2} \rho_h(q^2).
+\end{align*}
+Of course, we can't compute this $\rho_h$ directly. However, if we are lazy, we can consider only quark-antiquark final states $X$. It turns out this is a reasonably good approximation. Then we obtain something similar to what we had at the beginning of the section. We will be less lazy and include the quark masses this time. Then we have
+%
+%
+%We might think this is all we can do. But if we make a further assumption, then we can go a bit further. We assume
+%\[
+% \sum_{X \in \mathrm{Had}}\bket{X}\brak{X} = \sum_{X \in q, \bar{q}, g\text{ states}} \bket{X}\brak{X}.
+%\]
+%This is known as \term{quark-hadron duality}. This is not proved to be valid, but works reasonably well. The assumption we are making is that $e^+ e^- \to \gamma^* \to q\bar{q}$ can be separated from hadronization. If we make this assumption, then we can actually write down the value of $\rho_h$. We have
+\begin{multline*}
+ \rho_h^{\mu\nu}(q^2) = N_c \sum_f Q_f^2 \int \frac{\d^3 k_1}{(2\pi)^3 2 k_1^0} \int \frac{\d^3 k_2}{(2\pi)^3 2k_2^0} (2\pi)^3 \delta^{(4)}(q - k_1 - k_2)\\
+ \times \left.\Tr [(\slashed{k}_1 + m_f) \gamma^\mu (\slashed{k}_2 - m_f) \gamma^{\nu}]\right|_{k_1^2 = k_2^2 = m_f^2}, % what does this mean?
+\end{multline*}
+where $N_c$ is the number of colours and $m_f$ is the quark $q_f$ mass. We consider the quantity
+\[
+ I^{\mu\nu} = \left.\int \frac{\d^3 k_1}{k_1^0} \int \frac{\d^3 k_2}{k_2^0} \delta^{(4)}(q - k_1 - k_2) k_1^\mu k_2^\nu\right|_{k_1^2 = k_2^2 = m_f^2}.
+\]
+We can argue that we can write
+\[
+ I^{\mu\nu} = A(q^2) q^\mu q^\nu + B(q^2) g^{\mu\nu}.
+\]
+We contract this with $g_{\mu\nu}$ and $q_\mu q_\nu$ (separately) to obtain equations for $A, B$. We also use
+\[
+ q^2 = (k_1 + k_2)^2 = 2 m_f^2 + 2 k_1 \cdot k_2.
+\]
+We then find that
+\[
+ \rho_h(q^2) = \frac{N_c}{12 \pi^2} \sum_f Q_f^2 \Theta(q^2 - 4 m_f^2)\left(1 - \frac{4 m_f^2}{q^2}\right)^{1/2} \frac{q^2 + 2 m_f^2}{q^2}.
+\]
+That's it! We can now plug this into the equation we had for the cross-section. It's still rather messy. If all $m_f \to 0$, then this simplifies very nicely, and we find that
+\[
+ \rho_h(q^2) = \frac{N_c}{12 \pi^2} \sum_f Q_f^2.
+\]
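We can verify the threshold behaviour and the massless limit of $\rho_h$ numerically (the quark masses below are rough illustrative values in $\si{\giga\electronvolt}$):

```python
import math

def rho_h(q2, quarks, N_c=3):
    """Lowest-order hadronic spectral density; quarks is a list of
    (charge, mass) pairs, with q2 and mass^2 in the same units."""
    total = 0.0
    for Q_f, m_f in quarks:
        if q2 > 4 * m_f ** 2:  # the Theta function: below threshold, no contribution
            total += (Q_f ** 2 * math.sqrt(1 - 4 * m_f ** 2 / q2)
                      * (q2 + 2 * m_f ** 2) / q2)
    return N_c / (12 * math.pi ** 2) * total

# Rough illustrative masses (GeV) for u, d, s.
quarks = [(2 / 3, 0.002), (-1 / 3, 0.005), (-1 / 3, 0.1)]
# Below every threshold, rho_h vanishes.
assert rho_h(1e-8, quarks) == 0.0
# Far above all thresholds, the massless-limit formula is recovered.
massless = 3 / (12 * math.pi ** 2) * sum(Q ** 2 for Q, _ in quarks)
assert abs(rho_h(1e4, quarks) - massless) < 1e-6 * massless
```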
+Then after some hard work, we find that
+\[
+ \sigma_{\mathrm{LO}}(e^+ e^- \to \mathrm{hadrons}) = N_c \frac{4\pi \alpha^2}{3q^2} \sum_f Q_f^2,
+\]
+where LO denotes ``leading order''.
+
+An experimentally interesting quantity is the following ratio:
+\[
+ R = \frac{\sigma(e^+ e^- \to \mathrm{hadrons})}{\sigma(e^+ e^- \to \mu^+ \mu^-)}.
+\]
+Then we find that
+\[
+ R_{LO} = N_c \sum_f Q_f^2 =
+ \begin{cases}
+ \frac{2}{3} N_c & \text{when $u$, $d$, $s$ are active}\\
+ \frac{10}{9} N_c & \text{when $u$, $d$, $s$, $c$ are active}\\
+ \frac{11}{9} N_c & \text{when $u$, $d$, $s$, $c$, $b$ are active}
+ \end{cases}
+\]
+In particular, we expect ``jumps'' as we go between the quark masses. Of course, it is not going to be a sharp jump, but some continuous transition.
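These plateau values are easy to recompute; with $N_c = 3$, the measured $R$ near these plateaux is in fact classic evidence for three colours. An illustrative check:

```python
from fractions import Fraction

# Quark charges in order of increasing mass: u, d, s, c, b.
CHARGES = [Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3),
           Fraction(2, 3), Fraction(-1, 3)]

def R_LO(n_active, N_c=3):
    """Leading-order R = N_c * sum of squared charges of active quarks."""
    return N_c * sum(Q ** 2 for Q in CHARGES[:n_active])

assert R_LO(3) == 2                 # u, d, s:       (2/3) N_c
assert R_LO(4) == Fraction(10, 3)   # u, d, s, c:    (10/9) N_c
assert R_LO(5) == Fraction(11, 3)   # u, d, s, c, b: (11/9) N_c
```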
+
+We've been working with tree level diagrams so far. The one-loop diagrams are UV finite but have IR divergences, where the loop momenta $\to 0$. The diagrams include
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$e^-$};
+ \vertex [below left=of m1] (i2) {$e^+$};
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1) {$q$};
+ \vertex [below right=of m2] (f2) {$\bar{q}$};
+
+ \vertex [above right=0.6cm of m2] (g1) {};
+ \vertex [below right=0.6cm of m2] (g2) {};
+ \diagram*{
+ (i1) -- [fermion] (m1),
+ (i2) -- [anti fermion] (m1),
+ (m2) -- [fermion] (f2),
+ (m2) -- [fermion] (f1),
+ (m1) -- [photon, edge label=$\gamma$] (m2),
+ (g1) -- [gluon] (g2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$e^-$};
+ \vertex [below left=of m1] (i2) {$e^+$};
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1) {$q$};
+ \vertex [below right=of m2] (f2) {$\bar{q}$};
+
+ \vertex [above right=0.2cm of m2] (g1);
+ \vertex [above right=1.3cm of m2] (g2);
+ \diagram*{
+ (i1) -- [fermion] (m1),
+ (i2) -- [anti fermion] (m1),
+ (m2) -- [fermion] (f2),
+ (m2) -- [fermion] (f1),
+ (m1) -- [photon, edge label=$\gamma$] (m2),
+ (g1) -- [gluon, half right] (g2), % make this move
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$e^-$};
+ \vertex [below left=of m1] (i2) {$e^+$};
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1) {$q$};
+ \vertex [below right=of m2] (f2) {$\bar{q}$};
+
+ \vertex [left=0.22cm of m2] (l);
+
+ \vertex [below right=0.3cm of l] (g1) {};
+ \vertex [below right=1.3cm of l] (g2) {};
+ \diagram*{
+ (i1) -- [fermion] (m1),
+ (i2) -- [anti fermion] (m1),
+ (m2) -- [fermion] (f2),
+ (m2) -- [fermion] (f1),
+ (m1) -- [photon, edge label=$\gamma$] (m2),
+ (g1) -- [gluon, half left] (g2), % make this move
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+However, it turns out the IR divergence is cancelled by tree-level $e^+ e^- \to q\bar{q} g$ diagrams such as
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$e^-$};
+ \vertex [below left=of m1] (i2) {$e^+$};
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1) {$q$};
+ \vertex [below right=of m2] (f2) {$\bar{q}$};
+
+ \vertex [right=of m2] (g2) {$g$};
+ \diagram*{
+ (i1) -- [fermion] (m1),
+ (i2) -- [anti fermion] (m1),
+ (m2) -- [fermion] (f2),
+ (m2) -- [fermion] (f1),
+ (m1) -- [photon, edge label=$\gamma$] (m2),
+ (m2) -- [gluon] (g2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+\subsection{Deep inelastic scattering}
+In this chapter, we are going to take an electron, accelerate it to really high speeds, and then smash it into a proton.
+
+If we do this at low energies, then the proton appears pointlike. This is the Rutherford and Mott scattering we know and love from A-level Physics. If we increase the energy a bit, then the wavelength of the electron decreases, and now the scattering would be sensitive to charge distributions within the proton. But this is still elastic scattering. After the interaction, the proton remains a proton and the electron remains an electron.
+
+What we are interested in is \emph{inelastic} scattering. At very high energies, what tends to happen is that the proton breaks up into a lot of hadrons $X$. We can depict this interaction as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [/tikzfeynman/fermion] (-2, -1) -- (0, 0);
+ \draw [/tikzfeynman/fermion] (-2, -0.9) -- (0, 0.1);
+ \draw [/tikzfeynman/fermion] (-2, -1.1) -- (0, -0.1);
+
+ \node [fill=white] (H) at (-2, -1) {$H$};
+
+ \draw [/tikzfeynman/fermion] (0, 0.1) -- +(2, 0);
+ \draw [/tikzfeynman/fermion] (0, 0.3) -- +(2, 0);
+ \draw [/tikzfeynman/fermion] (0, -0.1) -- +(2, 0);
+ \draw [/tikzfeynman/fermion] (0, -0.3) -- +(2, 0);
+
+
+ \draw [/tikzfeynman/fermion] (-2, 1.5) node [left] {$e^-$} -- (0, 1.5) node [pos=0.5, above=0.1cm] {$p$};
+ \draw [/tikzfeynman/fermion] (0, 1.5) -- (2, 2.5) node [right] {$e^-$} node [pos=0.5, above=0.1cm] {$p'$};
+
+ \draw [/tikzfeynman/photon] (0, 1.5) -- (0, 0) node [pos=0.4, right] {$\gamma$};
+ \node [right] at (2, 0) {$X$};
+ \draw [fill=gray] circle [radius=0.4cm];
+
+ \draw [dashed] (0, 1.5) -- (2, 1.5);
+
+ \draw (0.5, 1.5) arc(0:26.565:0.5) node [pos=0.8, right] {$\theta$};
+
+ \end{tikzpicture}
+\end{center}
+
+This led to the idea that hadrons are made up of \term{partons}. When we first studied this, we thought these partons were weakly interacting, but nowadays, we know this is due to asymptotic freedom.
+
+Let's try to understand this scattering. The final state $X$ can be very complicated in general, and we have no interest in this part. We are mostly interested in the difference in momentum,
+\[
+ q = p - p',
+\]
+as well as the scattering angle $\theta$. We will denote the mass of the initial hadron $H$ by $M$, and we shall treat the electron as being massless.
+
+It is conventional to define
+\[
+ Q^2 \equiv - q^2 = 2 p \cdot p' = 2 E E' (1 - \cos \theta) \geq 0,
+\]
+where $E = p^0$ and $E' = p'^0$, since we assumed electrons are massless. We also let
+\[
+ \nu = p_H \cdot q.
+\]
+It is an easy manipulation to show that $p_X^2 = (p_H + q)^2 \geq M^2$. This implies
+\[
+ Q^2 \leq 2 \nu.
+\]
+For simplicity, we are going to consider the scattering in the rest frame of the hadron. In this case, we simply have
+\[
+ \nu = M(E - E').
+\]
+We can again compute the amplitude, which is confusingly also called $M$:
+\[
+ M = (ie)^2 \bar{u}_e (p') \gamma^\mu u_e (p) \left(-\frac{i g_{\mu\nu}}{q^2} \brak{X} J_h^\nu \bket{H(p_H)}\right).
+\]
+Then we can write down the differential cross-section
+\[
+ \d \sigma = \frac{1}{4 ME |\mathbf{v}_e - \mathbf{v}_H|} \frac{\d^3 p'}{(2\pi)^3 2 p'^0} \sum_{X, p_X} (2\pi)^4 \delta^{(4)} (q + p_H - p_X) \frac{1}{2} \sum_{\mathrm{spins}} |M|^2.
+\]
+Note that in the rest frame of the hadron, we simply have $|\mathbf{v}_e - \mathbf{v}_H| = 1$.
+
+We can't actually compute this, since it involves non-perturbative physics. So we again have to parametrize it. We can write
+\[
+ \frac{1}{2} \sum_{\mathrm{spins}} |M|^2 = \frac{e^4}{2q^4} L_{\mu\nu} \brak{H(p_H)} J_h^\mu \bket{X} \brak{X} J^\nu_h\bket{H(p_H)},
+\]
+where
+\[
+ L_{\mu\nu} = \Tr \left(\slashed{p} \gamma_\mu \slashed{p}' \gamma_\nu\right) = 4\left(p_\mu p'_\nu - g_{\mu\nu}\, p \cdot p' + p_\nu p'_\mu\right).
+\]
+We define another tensor
+\[
+ W^{\mu\nu}_H = \frac{1}{4\pi} \sum_X (2\pi)^4 \delta^{(4)} (q + p_H - p_X) \brak{H}J_h^\mu \bket{X} \brak{X} J_h^\nu \bket{H}.
+\]
+Note that this $\sum_X$ should also include the sum over the initial state spins. Then we have
+\[
+ E' \frac{\d \sigma}{\d^3 p'} = \frac{1}{8ME(2\pi)^3} 4\pi \frac{e^4}{2q^4} L_{\mu\nu} W^{\mu\nu}_H.
+\]
+We now use our constraints on $W^{\mu\nu}_H$ such as Lorentz covariance, current conservation and parity, and argue that $W^{\mu\nu}_H$ can be written in the form
+\begin{multline*}
+ W_H^{\mu\nu} = \left(- g^{\mu\nu} + \frac{q^\mu q^\nu}{q^2}\right) W_1(\nu, Q^2) \\
+ + \left(p_H^\mu - \frac{p_H \cdot q}{q^2} q^\mu\right) \left(p_H^\nu - \frac{p_H \cdot q}{q^2}q^\nu\right) \times W_2(\nu, Q^2).
+\end{multline*}
+Now, using
+\[
+ q^\mu L_{\mu\nu} = q^\nu L_{\mu\nu} = 0,
+\]
+we have
+\begin{align*}
+ L_{\mu\nu} W_H^{\mu\nu} &= 4 (-2 p\cdot p' + 4 p\cdot p') W_1 + 4(2 p\cdot p_H\, p'\cdot p_H - p_H^2\, p \cdot p') W_2\\
+ &= 4Q^2 W_1 + 2M^2 (4 EE' - Q^2)W_2.
+\end{align*}
+We now want to examine what happens as we take the energy $E \to \infty$. In this case, for a generic collision, we have $Q^2 \to \infty$, which necessarily implies $\nu \to \infty$. To understand how this behaves, it is helpful to introduce dimensionless quantities
+\[
+ x = \frac{Q^2}{2 \nu},\quad y = \frac{\nu}{p_H \cdot p},
+\]
+known as the \term{Bjorken $x$} and the \term{inelasticity} respectively. We can interpret $y$ as the fractional energy loss of the electron. Then it is not difficult to see that $0 \leq x, y \leq 1$. So these are indeed bounded quantities. In the rest frame of the hadron, we further have
+\[
+ y = \frac{\nu}{ME} = \frac{E - E'}{E}.
+\]
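+To spell out the rest-frame computation: $p_H = (M, \mathbf{0})$ and the time component of $q = p - p'$ is $E - E'$, so assuming the convention $\nu = p_H \cdot q$, we get $\nu = M(E - E')$ and $p_H \cdot p = ME$, whence the claim.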
+This allows us to write $L_{\mu\nu}W_H^{\mu\nu}$ as
+\[
+ L_{\mu\nu} W^{\mu\nu}_H \approx 8EM \left(xy W_1 + \frac{(1 - y)}{y} \nu W_2\right),
+\]
+where we dropped the $-2M^2Q^2 W_2$ term, which is of lower order.
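+To see where these come from, substitute the rest-frame relations $Q^2 = 2\nu x$, $\nu = MEy$ and $E' = (1 - y)E$ into the previous expression:
+\begin{align*}
+ 4 Q^2 W_1 &= 8 \nu x\, W_1 = 8EM\, xy\, W_1,\\
+ 2M^2 \cdot 4EE'\, W_2 &= 8 M^2 E^2 (1 - y) W_2 = 8EM\, \frac{(1 - y)}{y} \nu W_2,
+\end{align*}
+while the dropped piece $-2M^2 Q^2 W_2$ is smaller than the second term by a factor of order $Mxy/E'$, which vanishes as $E \to \infty$ at fixed $x$ and $y$.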
+
+To understand the cross section, we need to simplify $\d^3 p'$. We integrate out the angular $\phi$ coordinate to obtain
+\[
+ \d^3 p' \mapsto 2\pi E'^2 \;\d (\cos \theta)\;\d E'.
+\]
+We also note that by definition of $Q, x, y$, we have
+\begin{align*}
+	\d x &= \d \left(\frac{Q^2}{2\nu}\right) = - \frac{EE'}{\nu} \;\d \cos \theta + (\cdots)\;\d E'\\
+ \d y &= -\frac{\d E'}{E}.
+\end{align*}
+Since $(\d E')^2 = 0$, the $\d^3 p'$ part becomes
+\[
+ \d^3 p' \mapsto \pi E'\;\d Q^2\;\d y = 2\pi E' \nu\;\d x\;\d y.
+\]
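This change of variables can be sanity-checked numerically. The sketch below (illustrative GeV-scale kinematics; it assumes $Q^2 = 2EE'(1 - \cos\theta)$ for a massless electron and $\nu = M(E - E')$ in the rest frame) computes the Jacobian $|\partial(x, y)/\partial(\cos\theta, E')|$ by finite differences and compares it with the analytic value $E'/\nu$, which is exactly what converts $2\pi E'^2 \,\d \cos\theta\,\d E'$ into $2\pi E' \nu\,\d x\,\d y$:

```python
E, M = 20.0, 0.938  # illustrative beam energy and hadron mass in GeV (hypothetical values)

def x_of(c, Ep):
    """Bjorken x as a function of cos(theta) and the outgoing electron energy E'."""
    Q2 = 2 * E * Ep * (1 - c)   # Q^2 = -q^2 for a massless electron
    nu = M * (E - Ep)           # nu = p_H . q in the hadron rest frame
    return Q2 / (2 * nu)

def y_of(c, Ep):
    """Inelasticity y = (E - E')/E."""
    return (E - Ep) / E

# Jacobian |d(x, y)/d(cos theta, E')| by central differences
c0, Ep0, h = 0.96, 8.0, 1e-6
dx_dc  = (x_of(c0 + h, Ep0) - x_of(c0 - h, Ep0)) / (2 * h)
dx_dEp = (x_of(c0, Ep0 + h) - x_of(c0, Ep0 - h)) / (2 * h)
dy_dc  = (y_of(c0 + h, Ep0) - y_of(c0 - h, Ep0)) / (2 * h)
dy_dEp = (y_of(c0, Ep0 + h) - y_of(c0, Ep0 - h)) / (2 * h)
jac = abs(dx_dc * dy_dEp - dx_dEp * dy_dc)

# Analytic value E'/nu used in the text
analytic = Ep0 / (M * (E - Ep0))
```

For these values both $x$ and $y$ lie in $(0, 1)$, and the numerical and analytic Jacobians agree to high precision.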
+Then we have
+\begin{align*}
+ \frac{\d \sigma}{\d x\; \d y} &= \frac{1}{8(2\pi)^2} \frac{1}{EM} \frac{e^4}{q^4} 2\pi \nu \cdot 8EM\left(xy W_1 + \frac{(1 - y)}{y} \nu W_2\right)\\
+ &= \frac{8 \pi \alpha^2 ME}{Q^4}\left( xy^2 F_1 + (1 - y) F_2\right),
+\end{align*}
+where
+\[
+	F_1 \equiv W_1,\quad F_2 \equiv \nu W_2.
+\]
+By varying $x$ and $y$ in experiments, we can measure the values of $F_1$ and $F_2$. Moreover, we expect other sorts of experiments involving hadrons to be described by the same $F_1$ and $F_2$. So if such experiments do measure the same $F_1$ and $F_2$, our confidence that the theory is correct increases.
+
+Without doing more experiments, can we try to figure out something about $F_1$ and $F_2$? We make a simplifying assumption that the electron interacts with only a single constituent of the hadron:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [/tikzfeynman/fermion] (-2, -1) -- (0, 0);
+ \draw [/tikzfeynman/fermion] (-2, -0.9) -- (0, 0.1);
+ \draw [/tikzfeynman/fermion] (-2, -1.1) -- (0, -0.1);
+
+ \node [fill=white] (H) at (-2, -1) {$H$};
+
+ \draw [/tikzfeynman/fermion] (0, 0.1) -- +(2, 0);
+ \draw [/tikzfeynman/fermion] (0, 0.3) -- +(2, 0);
+ \draw [/tikzfeynman/fermion] (0, -0.1) -- +(2, 0);
+ \draw [/tikzfeynman/fermion] (0, -0.3) -- +(2, 0);
+
+
+ \draw [/tikzfeynman/fermion] (-2, 3) node [left] {$e^-$} -- (0, 3) node [pos=0.5, above=0.1cm] {$p$};
+ \draw [/tikzfeynman/fermion] (0, 3) -- (2, 4) node [right] {$e^-$} node [pos=0.5, above=0.1cm] {$p'$};
+
+ \draw [/tikzfeynman/photon] (0, 3) -- (0.5, 1.5) node [pos=0.4, right] {$q$};
+ \draw [/tikzfeynman/fermion] (0.1265, 0.3795) -- (0.5, 1.5) node [pos=0.5, left] {$k$};
+ \draw [/tikzfeynman/fermion] (0.5, 1.5) -- (2, 1.5) node [pos=0.5, below] {$k + q$};
+
+ \node [right] at (2, 0) {$X'$};
+ \draw [fill=gray] circle [radius=0.4cm];
+
+ \draw [dashed] (0, 3) -- (2, 3);
+
+ \draw (0.5, 3) arc(0:26.565:0.5) node [pos=0.8, right] {$\theta$};
+ \end{tikzpicture}
+\end{center}
+
+We further suppose that the EM interaction is unaffected by strong interactions. This is known as \term{factorization}. This leading order model we have constructed is known as the \term{parton model}. This was the model used before we believed in QCD. Nowadays, since we do have QCD, we know these ``partons'' are actually quarks, and we can use QCD to make some more accurate predictions.
+
+We let $f$ range over all partons. Then we can break up the sum $\sum_X$ as
+\[
+ \sum_X = \sum_{X'} \sum_f \frac{1}{(2\pi)^3} \int \d^4 k\; \Theta(k^0) \delta(k^2) \sum_{\mathrm{spins}},
+\]
+where we put $\delta(k^2)$ because we assume that partons are massless.
+
+To save time (and avoid unpleasantness), we are not going to go through the details of the calculations. The result is that
+\[
+ W_H^{\mu\nu} = \sum_f \int \d^4 k \Tr\left(W_f^{\mu\nu} \Gamma_{H, f} (p_H, k) + \bar{W}_f^{\mu\nu} \bar{\Gamma}_{H, f} (p_H, k)\right),
+\]
+where
+\begin{align*}
+ W_f^{\mu\nu} &= \bar{W}_f^{\mu\nu} = \frac{1}{2} Q_f^2 \gamma^\mu (\slashed{k} + \slashed{q}) \gamma^\nu \delta((k + q)^2)\\
+	\Gamma_{H, f} (p_H, k)_{\beta\alpha} &= \sum_{X'} \delta^{(4)} (p_H - k - p_{X'}) \brak{H(p_H)} \bar{q}_{f, \alpha} \bket{X'} \brak{X'}q_{f, \beta}\bket{H(p_H)},
+\end{align*}
+where $\alpha, \beta$ are spinor indices.
+
+In the deep inelastic scattering limit, putting everything together, we find
+\begin{align*}
+ F_1(x, Q^2) &\sim \frac{1}{2} \sum_f Q_f^2 [ q_f(x) + \bar{q}_f(x)]\\
+ F_2(x, Q^2) &\sim 2 x F_1,
+\end{align*}
+for some functions $q_f(x)$, $\bar{q}_f(x)$. These functions are known as the \term{parton distribution functions} (\term{PDF}'s). Roughly, they give the distribution of partons carrying a fraction $x$ of the hadron's longitudinal momentum.
+
+The very simple relation between $F_2$ and $F_1$ is called the \term{Callan--Gross relation}, and suggests that the partons have spin $\frac{1}{2}$. This relation is certainly something we can test in experiments, and indeed it is confirmed. We also see the \term{Bjorken scaling phenomenon} --- $F_1$ and $F_2$ are independent of $Q^2$. This boils down to the fact that we are scattering off point-like particles.
+
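As a tiny illustration of how these formulas are used (with \emph{hypothetical} toy PDFs --- the shapes below are made up for illustration, not fits to data), one can check that the Callan--Gross relation holds independently of the PDF shapes:

```python
# Hypothetical toy parton distribution functions -- illustrative shapes only.
def u(x):    return 5.1 * x**0.5 * (1 - x)**3   # up quarks
def d(x):    return 3.0 * x**0.5 * (1 - x)**4   # down quarks
def ubar(x): return 0.2 * (1 - x)**7            # sea antiquarks
def dbar(x): return 0.2 * (1 - x)**7

Q_u, Q_d = 2.0 / 3.0, -1.0 / 3.0   # quark electric charges

def F1(x):
    # F_1 = (1/2) sum_f Q_f^2 (q_f + qbar_f)
    return 0.5 * (Q_u**2 * (u(x) + ubar(x)) + Q_d**2 * (d(x) + dbar(x)))

def F2(x):
    # F_2 computed directly as sum_f Q_f^2 x (q_f + qbar_f)
    return Q_u**2 * x * (u(x) + ubar(x)) + Q_d**2 * x * (d(x) + dbar(x))

# Callan--Gross, F_2 = 2 x F_1, holds for any choice of PDF shapes
```

The relation is a consequence of the structure of the parton-model formulas (and hence of the partons' spin), not of the particular distributions.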
+%\section{Effective Field Theory}
+%Actual QCD calculations, even at high energies where it is perturbative, are difficult. If we go to the non-perturbative regime, then obviously we can't do perturbative calculations. What can we do?
+%
+%One possibility is to use numerical methods called \term{lattice field theory}, or in this case, \term{lattice QCD}. Unfortunately, this is not what we are going to talk about, but we can get some insights into what is going on with effective field theories.
+%
+%In effective field theory, we exploit large separations in energy scale to construct a description of low-energy physics. For example, in Fermi theory, we approximated the $W$-boson propagator
+%\[
+% \frac{1}{p^2 - m_W^2} \approx - \frac{1}{m_W^2} - \frac{p^2}{m_W^4} + \cdots
+%\]
+%for $p^2 \ll m_W^2$. This is an example of effective field theory.
+%% Georgi, Ann. Rev. Nucl. Part. Sci. 43:209 (1993)
+%% Kaplan, arXiv nucl-th/0501023 (2005)
+%
+%We are now going to do this in a more general and formal way.
+%\subsection{Scaling dimensions of local operators}
+%In principle, we can have an infinite number of terms in $\mathcal{L}$, because an effective field theory is only valid up to a scale $\Lambda$, and we have no constraint imposed by renormalization.
+%
+%Of course, symmetries can still provide constraints. If we want a Lorentz-invariant theory, then we can only write down Lorentz-invariant terms. We write
+%\[
+% \mathcal{L}_{\mathrm{eff}} = \mathcal{L}_{\mathrm{kin}} + \sum_n \mathcal{L}_{\mathrm{int}}^{(n)},
+%\]
+%Since the action is dimensionless, we know we must have mass dimension $[\mathcal{L}] = 4$. We write
+%\[
+% \mathcal{L}_{\mathrm{int}}^{(n + 4)} = \sum \frac{c_i^{(n)}}{\Lambda^n} \mathcal{O}_i^{(n + 4)},
+%\]
+%where $\Lambda$ is some heavy mass scale independent of $n$, with $[\Lambda] = 1$, $[c_i^{(n)}] = 0$, and $\mathcal{O}^{(n + 4)}$ is an operator of dimension $n + 4$.
+%
+%For example, in Fermi theory, we can take $\Lambda$ to be the $W$-boson mass.
+%
+%We assume
+%\begin{enumerate}
+% \item We have a finite number of independent operators in each fixed mass dimension.
+% \item $c_i^{(n)}$ are at most order $1$
+%\end{enumerate}
+%We can now truncate the series depending on the accuracy we want. Suppose we are working at an energy scale $E$. To obtain an accuracy of accuracy $\sim \varepsilon$, we must find an $n_\varepsilon$ such that
+%\[
+% \left(\frac{E}{\Lambda}\right)^{n_\varepsilon} \approx \varepsilon.
+%\]
+%Rearranging, we find
+%\[
+% n_\varepsilon \approx \frac{\log (1/\varepsilon)}{\log(\Lambda/E)}.
+%\]
+%So to obtain greater accuracy, we need to look at more terms.
+%
+%\begin{eg}
+% Consider a real scalar field in $4$ dimensions. Assume this has the additional symmetry of $\phi \mapsto -\phi$. We are also going to work in Euclidean spacetime to get around some technical issues.
+%
+% The constraint of the symmetry means we only have to care about terms with an even number of $\phi$'s. Then we can write
+% \[
+% \mathcal{L} = \frac{1}{2} (\partial \phi)^2 + \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{4!} \phi^4 + \sum_{n = 1}^\infty\left(\frac{c_n}{\Lambda^{2n}} \phi^{4 + 2n} + \frac{d_n}{\Lambda^{2n}} (\partial \phi)^2 \phi^{2n} + \cdots\right).
+% \]
+% As before, $[\phi] = 1$ and $\lambda, c_n, d_n$ are dimensionless, and order $1$ or smaller.
+%\end{eg}
+%
+%In general, we are interested in the correlation function
+%\[
+%	\bra \phi_1 \cdots \phi_n\ket = \frac{1}{\mathcal{Z}} \int \D \phi\; \phi_1 \cdots \phi_n e^{-S},\quad \mathcal{Z} = \int \D \phi\; e^{-S}.
+%\]
+%Here
+%\[
+%	S = \int \d^4 x\; \mathcal{L}
+%\]
+%is the Euclidean action. There are two arguments to justify truncating the sum.
+%\begin{enumerate}
+% \item Consider a specific field configuration $\tilde{\phi}(x)$ localized in some volume $L^4$, where $L \approx \frac{2\pi}{k}$ with $k$ the wavenumber or momentum. The amplitude of the wavelet is $\phi_k$, and define
+% \[
+% \hat{\phi}_k = \frac{1}{k} \phi_k.
+% \]
+% We are going to crudely approximate the terms in $S$. This is
+% \[
+%	\int \d^4 x\; m^2 \tilde{\phi}^2 \approx L^4 m^2 \phi_k^2 = \left(\frac{2\pi}{k}\right)^4 m^2 k^2 \hat{\phi}_k^2 = \frac{(2\pi)^4}{k^2} m^2 \hat{\phi}_k^2.
+% \]
+% Also, we have
+% \[
+% \int \d^4 x\; (\partial \tilde{\phi})^2 \approx L^4 k^2 \phi_k^2 = (2\pi)^4 \hat{\phi}_k^2.
+% \]
+% Then we have
+% \[
+% \frac{1}{\Lambda^{2p + q - 4}} \int \d^4 x\; (\partial \tilde{\phi})^p \tilde{\phi}^q \approx \frac{L^4 k^{2p} k^q \hat{\phi}_k^{p + q}}{\Lambda^{2p + q - 4}} = (2\pi)^4 \left(\frac{k}{\Lambda}\right)^{2p + q - 4} \hat{\phi}_k^{p + q}.
+% \]
+% So we have
+% \[
+% S \approx (2\pi)^4 \left( \frac{1}{2} \hat{\phi}_k^2 + \frac{m^2 \hat{\phi}_k^2}{2k^2} + \frac{\lambda}{4!} \hat{\phi}_k^4 + \sum_{n = 1}^\infty \left(c_n \left(\frac{k}{\Lambda}\right)^{2n} \hat{\phi}_k^{4 + 2n} + d_n \left(\frac{k}{\Lambda}\right)^{2n} \hat{\phi}_k^{2 + 2n} + \cdots\right)\right).
+% \]
+%	Path integrals are dominated by the $\hat{\phi}_k$ that minimize $S$. Consider the quadratic terms in the regime $k^2 \gg m^2$: the kinetic term dominates, and $\hat{\phi}_k \approx \frac{1}{(2\pi)^2}$ or so.
+%
+% As $k$ is reduced, $\left(\frac{k}{\Lambda}\right)^{2n}$ terms are reduced. These are called \term{irrelevant terms}. The mass term is a \term{relevant term}, and the $\phi^4$ term is a \term{marginal term}.
+%
+% \item We can adopt a more general argument using scale transformations. $\phi(x)$ is an arbitrary field configuration, and consider
+% \[
+%	\phi_\xi(x) = \phi(\xi x),\quad x' = \xi x,\quad \phi'(x') = \frac{1}{\xi}\phi(\xi x).
+% \]
+% Note that the prime is \emph{not} differentiation. Consider
+% \[
+% S(\phi_\xi(x), m^2, \lambda, c_n, d_n, \cdots) = S(\xi^{-1}\phi(x), \xi^{-2} m^2, \lambda, \xi^{2n} c_n, \xi^{2n} d_n, \cdots).
+% \]
+% As $\xi \to 0$, we expose the infrared ``flow'' of the couplings, as
+% \[
+% \phi(x) \sim e^{+ i k\cdot x},\quad \phi_\xi (x) = e^{i (\xi k) \cdot x}.
+% \]
+% What we see from this argument again is that these couplings go smaller as we go towards the infrared.
+%\end{enumerate}
+
+\printindex%
+\end{document}
diff --git a/books/cam/III_L/theoretical_physics_of_soft_condensed_matter.tex b/books/cam/III_L/theoretical_physics_of_soft_condensed_matter.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1316c8320f14f064cd275d9d445f5d4cf67e30ca
--- /dev/null
+++ b/books/cam/III_L/theoretical_physics_of_soft_condensed_matter.tex
@@ -0,0 +1,3075 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2018}
+\def\nlecturer {M.\ E.\ Cates}
+\def\ncourse {Theoretical Physics of Soft Condensed Matter}
+\def\ncoursehead {Soft Condensed Matter}
+
+\input{header}
+\usepackage{tikz-3dplot}
+\newcommand\splus{\!{\vphantom{\prod}}^+}
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+Soft Condensed Matter refers to liquid crystals, emulsions, molten polymers and other microstructured fluids or semi-solid materials. Alongside many high-tech examples, domestic and biological instances include mayonnaise, toothpaste, engine oil, shaving cream, and the lubricant that stops our joints scraping together. Their behaviour is classical ($\hbar = 0$) but rarely is it deterministic: thermal noise is generally important.
+
+The basic modelling approach therefore involves continuous classical field theories, generally with noise so that the equations of motion are stochastic PDEs. The form of these equations is helpfully constrained by the requirement that the Boltzmann distribution is regained in the steady state (when this indeed holds, i.e.\ for systems in contact with a heat bath but not subject to forcing). Both the dynamical and steady-state behaviours have a natural expression in terms of path integrals, defined as weighted sums of trajectories (for dynamics) or configurations (for steady state). These concepts will be introduced in a relatively informal way, focusing on how they can be used for actual calculations.
+
+In many cases mean-field treatments are sufficient, simplifying matters considerably. But we will also meet examples such as the phase transition from an isotropic fluid to a `smectic liquid crystal' (a layered state which is periodic, with solid-like order, in one direction but can flow freely in the other two). Here mean-field theory gets the wrong answer for the order of the transition, but the right one is found in a self-consistent treatment that lies one step beyond mean-field (and several steps short of the renormalization group, whose application to classical field theories is discussed in other courses but not this one).
+
+Important models of soft matter include diffusive $\phi^4$ field theory (`Model B'), and the noisy Navier--Stokes equation which describes fluid mechanics at colloidal scales, where the noise term is responsible for Brownian motion of suspended particles in a fluid. Coupling these together creates `Model H', a theory that describes the physics of fluid-fluid mixtures (that is, emulsions). We will explore Model B, and then Model H, in some depth. We will also explore the continuum theory of nematic liquid crystals, which spontaneously break rotational but not translational symmetry, focusing on topological defects and their associated mathematical structure such as homotopy classes.
+
+Finally, the course will cover some recent extensions of the same general approach to systems whose microscopic dynamics does not have time-reversal symmetry, such as self-propelled colloidal swimmers. These systems do not have a Boltzmann distribution in steady state; without that constraint, new field theories arise that are the subject of ongoing research.
+\subsubsection*{Pre-requisites}
+Knowledge of Statistical Mechanics at an undergraduate level is essential. This course complements the following Michaelmas Term courses although none are prerequisites: Statistical Field Theory; Biological Physics and Complex Fluids; Slow Viscous Flow; Quantum Field Theory.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+\subsection{The physics}
+Unsurprisingly, in this course, we are going to study models of soft condensed matter. Soft condensed matter of various types is ubiquitous in daily life:
+\begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ \textbf{Type} & \multicolumn{2}{c}{\textbf{Examples}}\\
+ \midrule
+ \emph{emulsions} & mayonnaise & pharmaceuticals \\
+ \emph{suspensions} & toothpaste & paints and ceramics\\
+ \emph{liquid crystals} & wet soap & displays \\
+ \emph{polymers} & gum & plastics\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+The key property that makes them ``soft'' is that they are easy to change in shape but not volume (except foams). To be precise,
+\begin{itemize}
+ \item They have a \term{shear modulus} $G$ of $\sim 10^2$--$10^7$ Pascals (compare with steel, which has a shear modulus of $10^{10}$ Pascals).
+
+  \item The \term{bulk modulus} $K$ remains large, with order of magnitude $K \sim 10^{10}$ Pascals. Since $K/G \gg 1$, the material is effectively incompressible.
+\end{itemize}
+
+Soft condensed matter exhibits \term{viscoelasticity}, i.e.\ it responds slowly to changing conditions. Suppose we suddenly apply a constant stress $\sigma_0$ to the material. We can plot the applied stress and the response in a single graph:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$t$};
+ \draw [->] (0, 0) -- (0, 4) node [pos=0.5, left] {$\sigma_0$};
+
+ \draw [thick, mblue](0, 0) .. controls (0.1, 2) .. (0.3, 2) -- (5, 2);
+ \draw (-0.05, 2) -- (0.05, 2);
+
+ \draw [dashed, thick, mred](0, 0) .. controls (0.2, 1) .. (0.6, 1) -- (2, 1) .. controls (2.5, 1) .. (3, 1.5) -- (5, 3.5);
+
+ \draw (-0.05, 1) node [left] {$\sigma_0/G$} -- (0.05, 1);
+ \draw (2, -0.05) node [below] {$\tau$} -- (2, 0.05);
+
+ \draw (4, 2.5) -- (4.5, 2.5) -- (4.5, 3) node [pos=0.5, right] {$\eta^{-1}$};
+ \end{tikzpicture}
+\end{center}
+Here the blue, solid line is the applied stress and the red, dashed line is the response. The late-time slope is $\eta^{-1}$, where $\eta \approx G \tau$ is the viscosity. Note that the time scale for the response is of the order of a few seconds! The reason for this is the large internal length scales:
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ \textbf{Thing} & \textbf{Length scale} \\
+ \midrule
+ Polymer & $\SI{100}{\nano\meter}$\\
+ Colloids & $\sim\SI{1}{\micro\meter}$\\
+ Liquid crystal domains & $\sim\SI{1}{\micro\meter}$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+These are all much much larger than the length scale of atoms.
+
+Being mathematicians, we want to be able to model such systems. First of all, observe that quantum fluctuations are negligible. Indeed, the time scale $\tau_Q$ of quantum fluctuations is given by
+\[
+ \hbar \omega_Q = \frac{\hbar}{\tau_Q} \simeq k_B T.
+\]
+At room temperature, this gives that $\tau_Q\sim \SI{e-13}{\second}$, which is much much smaller than soft matter time scales, which are of the order of seconds and minutes. So we might as well set $\hbar = 0$.
+
+The course would be rather short if there were no fluctuations at all. The counterpart to negligible quantum fluctuations is that \emph{thermal} fluctuations very much do matter.
+
+To give an example, suppose we have some hard, spherical colloids suspended in water, each of radius $a \simeq \SI{1}{\micro\meter}$. An important quantity that determines the behaviour of the colloid is the \emph{volume fraction}
+\[
+ \Phi = \frac{4}{3} \pi a^3 \frac{N}{V},
+\]
+where $N$ is the number of colloid particles.
+
+Experimentally, we observe that when $\Phi < 0.49$, the system behaves like a fluid, and the colloids are free to move around.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (3, 3);
+ \draw (0.6, 0.4) circle [radius=0.15];
+ \draw (1.8, 0.8) circle [radius=0.15];
+ \draw (2.4, 1.8) circle [radius=0.15];
+ \draw (0.6, 2.4) circle [radius=0.15];
+ \draw (1.2, 1.7) circle [radius=0.15];
+ \draw [-latex', dashed] plot [smooth] coordinates {(1.2, 1.55) (1.1, 1.3) (1.2, 1) (1.3, 0.3) (2, 0.4) (2.5, 0.35)};
+ \end{tikzpicture}
+\end{center}
+In this regime, the colloid particles undergo Brownian motion. The time scale of the motion is determined by the diffusivity constant, which turns out to be
+\[
+ D = \frac{k_B T}{6 \pi \eta_s a},
+\]
+where $\eta_s$ is the solvent viscosity. Thus, the time $\tau$ it takes for the particle to move through a distance of its own radius $a$ is given by $a^2 = D \tau$, which we can solve to give
+\[
+ \tau \sim \frac{a^3 \eta_s}{ k_B T}.
+\]
+In general, this is much longer than the time scale $\tau_Q$ of quantum fluctuations, since $a^3 \eta_s \gg \hbar$.
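It is worth putting numbers in (a quick sanity check with illustrative values: water at room temperature, $\eta_s \approx \SI{e-3}{\pascal\second}$, $T = \SI{300}{\kelvin}$, $a = \SI{1}{\micro\meter}$):

```python
import math

hbar  = 1.054571817e-34   # J s
k_B   = 1.380649e-23      # J / K
T     = 300.0             # K, room temperature
eta_s = 1e-3              # Pa s, viscosity of water (assumed solvent)
a     = 1e-6              # m, colloid radius

tau_Q = hbar / (k_B * T)                     # quantum fluctuation time scale
D     = k_B * T / (6 * math.pi * eta_s * a)  # Stokes--Einstein diffusivity
tau   = a * a / D                            # time to diffuse one radius

# tau_Q comes out around 1e-14 s, while tau is a few seconds,
# an enormous separation of scales (tau / tau_Q ~ 1e14)
```

The diffusion time of a few seconds is exactly the slow, observable time scale quoted above for soft-matter response.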
+
+When $\Phi > 0.55$, then the colloids fall into a crystal structure:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-0.05, -0.1) rectangle +(3, 3);
+
+ \draw (0.6, 0.4) circle [radius=0.15];
+ \draw (1.1, 0.4) circle [radius=0.15];
+ \draw (1.6, 0.4) circle [radius=0.15];
+ \draw (2.1, 0.4) circle [radius=0.15];
+ \draw (2.6, 0.4) circle [radius=0.15];
+
+ \draw (0.35, 0.8) circle [radius=0.15];
+ \draw (0.85, 0.8) circle [radius=0.15];
+ \draw (1.35, 0.8) circle [radius=0.15];
+ \draw (1.85, 0.8) circle [radius=0.15];
+ \draw (2.35, 0.8) circle [radius=0.15];
+
+ \draw (0.6, 1.2) circle [radius=0.15];
+ \draw (1.1, 1.2) circle [radius=0.15];
+ \draw (1.6, 1.2) circle [radius=0.15];
+ \draw (2.1, 1.2) circle [radius=0.15];
+ \draw (2.6, 1.2) circle [radius=0.15];
+
+ \draw (0.35, 1.6) circle [radius=0.15];
+ \draw (0.85, 1.6) circle [radius=0.15];
+ \draw (1.35, 1.6) circle [radius=0.15];
+ \draw (1.85, 1.6) circle [radius=0.15];
+ \draw (2.35, 1.6) circle [radius=0.15];
+
+ \draw (0.6, 2.0) circle [radius=0.15];
+ \draw (1.1, 2.0) circle [radius=0.15];
+ \draw (1.6, 2.0) circle [radius=0.15];
+ \draw (2.1, 2.0) circle [radius=0.15];
+ \draw (2.6, 2.0) circle [radius=0.15];
+
+ \draw (0.35, 2.4) circle [radius=0.15];
+ \draw (0.85, 2.4) circle [radius=0.15];
+ \draw (1.35, 2.4) circle [radius=0.15];
+ \draw (1.85, 2.4) circle [radius=0.15];
+ \draw (2.35, 2.4) circle [radius=0.15];
+ \end{tikzpicture}
+\end{center}
+
+Here the colloids don't necessarily touch, but there is still resistance to changes in shape due to the associated entropy changes. We can find that the elasticity is given by
+\[
+ G \simeq k_B T \frac{N}{V}.
+\]
+%In the small $\Phi$ case, the ``source'' of entropy is clear, given by the disorder in the position of the colloids. Here the entropy is given by the ``rattle room'' entropy.
+
+In both cases, we see that the elasticity and time scales are given in terms of $k_B T$. If we ignore thermal fluctuations, then we have $G = 0$ and $\tau = \infty$, which is extremely boring, and more importantly, is not how the real world behaves!
+
+\subsection{The mathematics}
+To model the systems, one might begin by looking at the microscopic laws of physics, and build models out of them. However, this is usually infeasible, because there are too many atoms and molecules lying around to track individually. The solution is to do some \term{coarse-graining} of the system. For example, if we are modelling colloids, we can introduce a function $\psi(\mathbf{r})$ that tells us the colloid density near the point $\mathbf{r}$. We then look for laws governing how this function $\psi$ behaves. These laws are usually phenomenological, i.e.\ we try to find equations that happen to model the real world well, as opposed to deriving them from the underlying microscopic principles. In general, $\psi$ will be some sort of \term{order parameter} that describes the substance.
+
+The first thing we want to understand is the equilibrium statistical physics. This tells us what we expect the field $\psi$ to look like after the system settles down, and also, crucially, how the field is expected to fluctuate in equilibrium. Ultimately, what we get out of this is $P[\psi(r)]$, the probability (density) of seeing a particular field configuration at equilibrium. The simplest way of understanding this is via mean field theory, which seeks a single field $\psi$ that maximizes $P[\psi(r)]$. However, this does not take fluctuations into account. A slightly more robust way of dealing with fluctuations is the variational method, which we will study next.
+
+After understanding the equilibrium statistical physics, we turn to understanding the dynamics, namely what happens when we start our system in a non-equilibrium state. We will be interested in systems that undergo phase transitions. For example, liquid crystals tend to be disordered at high temperatures and ordered at low temperatures. What we can do is to prepare our liquid crystal at high temperature so that it stays in a homogeneous, disordered state, and then rapidly decrease the temperature. We then expect the system to evolve towards an ordered state, and we want to understand how it does so.
+
+The first step to understanding dynamics is to talk about the hydrodynamic level equations, which are deterministic PDEs for how the system evolves. These usually look like
+\[
+	\dot{\psi}(\mathbf{r}, t) = \cdots.
+\]
+These equations come from our understanding of equilibrium statistical mechanics, and in particular that of the free energy functional and chemical potential. Naturally, the hydrodynamic equations do not take into account fluctuations, but report the expected evolution in time. These equations are particularly useful in late time behaviour where the existing movement and changes dominate over the small fluctuations.
+
+To factor in the fluctuations, we promote our hydrodynamic PDEs to stochastic PDEs of the form
+\[
+ \dot{\psi}(\mathbf{r}, t) = \cdots + \text{noise}.
+\]
+Usually, the noise comes from random external influence we do not wish to explicitly model. For example, suspensions in water are bombarded by the water molecules all the time, and we model the effect by a noise term. Since the noise is the contribution of a large number of largely independent factors, it is reasonable to model it as Gaussian noise.
+
+The mean noise will always be zero, and we must determine the variance. The key insight is that this random noise is the \emph{mechanism} by which the Boltzmann distribution arises. Thus, the probability distribution of the field $\psi$ determined by the stochastic PDE at equilibrium must coincide with what we know from equilibrium statistical mechanics. Since there is only one parameter we can toggle for the random noise, this determines it completely. This is the \emph{fluctuation-dissipation theorem}.
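To make this logic concrete in the simplest possible setting (a zero-dimensional sketch, not a field theory; all parameter values below are arbitrary illustrative choices), consider an overdamped particle in a harmonic potential obeying $\gamma \dot{x} = -kx + \xi$. Matching the Boltzmann distribution fixes $\bra \xi(t) \xi(t')\ket = 2 k_B T \gamma\, \delta(t - t')$, and the stationary state then satisfies the equipartition result $\bra x^2 \ket = k_B T/k$:

```python
import math
import random

random.seed(0)
k_B_T, k, gamma = 1.0, 2.0, 1.0   # illustrative units with k_B T = 1
dt, nsteps = 1e-3, 500_000

x, acc, count = 0.0, 0.0, 0
for step in range(nsteps):
    # Euler--Maruyama step for gamma dx/dt = -k x + xi, with noise variance
    # 2 k_B T gamma per unit time (fluctuation-dissipation)
    noise = math.sqrt(2 * k_B_T * dt / gamma) * random.gauss(0.0, 1.0)
    x += -(k / gamma) * x * dt + noise
    if step > nsteps // 10:       # discard the initial transient
        acc += x * x
        count += 1

var = acc / count   # should approach the Boltzmann value k_B T / k = 0.5
```

With the noise strength fixed this way, the measured variance comes out close to the Boltzmann prediction $k_B T/k = 0.5$; any other choice of noise strength would produce a different, non-Boltzmann stationary state.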
+
+%There are two parts to the study of soft condensed matters, confusingly and classically called \term{statics} and \term{dynamics}. The basic structure of the static part is going to look like this: we have some microscopic physics, but we don't want to track atoms and molecules. So we try to find some coarse-graining of the system. Thus, we come up with \emph{order parameter} fields, which we shall generically call $\psi(r)$. This goes into equilibrium statistical physics. To do this, we need to feed in symmetries and conservation laws. The output of this is some Boltzmann-distribution like probabilities $P[\psi(r)]$. In statistical physical field theory, $\psi$ might be our magnetization.
+%
+%To do dynamics, we want to think of $\psi(r)$ as a function of space and time. The starting point is ``hydrodynamic'' PDEs of the form
+%\[
+% \dot{\psi}(r, t) = \cdots,
+%\]
+%and the thing on the right hand side will depend on the type of substance we are talking about, and this is often informed by the static equilibrium statistical physics. The next step to realize is that even in dynamics, we cannot forget about thermal fluctuations. However, the PDE itself is deterministic. Thus, we have to promote these to stochastic PDEs, which are of the form
+%\[
+% \dot{\psi}(r, t) = \cdots + \text{noise}.
+%\]
+%This is informed by the static probability given by the fluctuation dissipation theorem, which fixes the noise by requiring that it produces the right equilibrium statistics. If we have a stochastic equation like this, the basic thing we are trying to find out is the probability $\P[\psi(r, t)]$ of a certain evolution of a system, not only in space but also in time.
+%
+%\begin{center}
+% \begin{tikzpicture}[xscale=4, yscale=2]
+% \node at (0, 0) {\textbf{statics}};
+% \node at (1, 0) {\textbf{dynamics}};
+% \node (mic) at (-1, -1) {microscopics};
+% \node (coarse) at (0, -1) {coarse-graining $\psi$};
+% \node (pde) at (1, -1) {hydrodynamic PDE};
+% \node (sym) [align=center] at (-1, -2) {symmetries and\\ conservation laws};
+% \node (equil) [align=center] at (0, -2) {equilibrium\\statistical physics};
+% \node (stochastic) at (1, -2) {stochastic PDE};
+% \node at (-1, -3) {probabilities};
+% \node (p1) at (0, -3) {$P[\psi(r)]$};
+% \node (p2) at (1, -3) {$\P[\psi(r, t)]$};
+%
+% \draw [->] (mic) -- (coarse);
+% \draw [->] (sym) -- (equil);
+% \draw [->] (coarse) -- (equil);
+% \draw [->] (equil) -- (p1);
+% \draw [->] (coarse) -- (pde);
+% \draw [->] (pde) -- (stochastic);
+% \draw [->] (equil) -- (pde);
+% \draw [->] (p1) -- (stochastic) node [pos=0.5, anchor=north west] {FDT};
+% \draw [->] (stochastic) -- (p2);
+% \end{tikzpicture}
+%\end{center}
+
+\begin{eg}
+ To model a one-component isothermal fluid such as water, we can take $\psi(\mathbf{r}, t)$ to consist of the density $\rho$ and velocity $\mathbf{v}$. The hydrodynamic PDE is exactly the \term{Navier--Stokes equation}. Assuming incompressibility, so that $\dot{\rho} = 0$, we get
+ \[
+		\rho (\dot{\mathbf{v}} + \mathbf{v} \cdot \nabla \mathbf{v}) = \eta \nabla^2 \mathbf{v} - \nabla p.
+ \]
+	We can promote this to a stochastic PDE, which is usually called the \term{Navier--Stokes--Landau--Lifshitz equation}. This is given by
+ \[
+		\rho (\dot{\mathbf{v}} + \mathbf{v} \cdot \nabla \mathbf{v}) = \eta \nabla^2 \mathbf{v} - \nabla p + \nabla \cdot \Sigma^N.
+ \]
+ The last term is thought of as a noise stress tensor on our fluid, and is conventionally treated as a Gaussian. As mentioned, this is fixed by the fluctuation-dissipation theorem, and it turns out this is given by
+ \[
+		\bra \Sigma_{ij}^N(\mathbf{r}, t) \Sigma_{k \ell}^N (\mathbf{r}', t')\ket = 2k_B T \eta (\delta_{i\ell} \delta_{jk} + \delta_{ik} \delta_{j\ell}) \delta(\mathbf{r} - \mathbf{r}') \delta(t - t').
+ \]
+\end{eg}
+
+\begin{eg}
+ If we want to describe a binary fluid, i.e.\ a mixture of two fluids, we introduce a further composition function $\phi$ that describes the (local) proportion of the fluids present.
+
+ If we think about liquid crystals, then we need to add the molecular orientation.
+\end{eg}
+
+\section{Revision of equilibrium statistical physics}
+\subsection{Thermodynamics}
+A central concept in statistical physics is entropy.
+
+\begin{defi}[Entropy]\index{entropy}
+ The \emph{entropy} of a system is
+ \[
+ S = - k_B \sum_i p_i \log p_i,
+ \]
+ where $k_B$ is \term{Boltzmann's constant}, $i$ is a \term{microstate} --- a complete specification of the microscopics (e.g.\ the list of all particle coordinates and velocities) --- and $p_i$ is the probability of being in a certain microstate.
+\end{defi}
+
+The axiom of Gibbs is that a system in thermal equilibrium maximizes $S$ subject to applicable constraints.
+\begin{eg}
+ In an isolated system, the number of particles $N$, the energy $E$ and the volume $V$ are all fixed. Our microstates then range over all microstates that have this prescribed number of particles, energy and volume only. After restricting to such states, the only constraint is
+ \[
+ \sum_i p_i = 1.
+ \]
+ Gibbs says we should maximize $S$. Writing $\lambda$ for the Lagrange multiplier maintaining this constraint, we require
+ \[
+ \frac{\partial}{\partial p_i} \left(S - \lambda \sum_i p_i\right) = 0.
+ \]
+ So we find that
+ \[
+		-k_B (\log p_i + 1) - \lambda = 0
+ \]
+ for all $i$. Thus, we see that all $p_i$ are equal.
+\end{eg}
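+In the example above, writing $\Omega$ for the number of accessible microstates, the constraint $\sum_i p_i = 1$ forces $p_i = 1/\Omega$, and we recover Boltzmann's formula
+\[
+ S = -k_B \sum_i \frac{1}{\Omega} \log \frac{1}{\Omega} = k_B \log \Omega.
+\]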
+The above example does not give rise to the Boltzmann distribution, since our system is completely isolated. In the Boltzmann distribution, instead of fixing $E$, we fix the average value of $E$ instead.
+
+\begin{eg}
+ Consider a system of fixed $N, V$ in contact with a heat bath. So $E$ is no longer fixed, and fluctuates around some average $\bra E \ket = \bar{E}$. So we can apply Gibbs' principle again, where we now sum over all states of all $E$, with the restrictions
+ \[
+ \sum p_i E_i = \bar{E},\quad \sum p_i = 1.
+ \]
+ So our equation is
+ \[
+ \frac{\partial}{\partial p_i} \left(S - \lambda_I \sum p_i - \lambda_E \sum p_i E_i\right) = 0.
+ \]
+ Differentiating this with respect to $p_i$, we get
+ \[
+ -k_B (\log p_i + 1) - \lambda_I - \lambda_E E_i = 0.
+ \]
+ So it follows that
+ \[
+ p_i = \frac{1}{Z} e^{-\beta E_i},
+ \]
+ where $Z = \sum_i e^{-\beta E_i}$ and $\beta = \lambda_E/k_B$. This is the \term{Boltzmann distribution}.
+
+ What is this mysterious $\beta$? Recall that the Lagrange multiplier $\lambda_E$ measures how $S$ reacts to a change in $\bar{E}$. In other words,
+ \[
+ \frac{\partial S}{\partial E} = \lambda_E = k_B \beta.
+ \]
+ Moreover, \emph{by definition} of temperature\index{temperature}, we have
+ \[
+ \left.\frac{\partial S}{\partial E}\right|_{V, N, \ldots} = \frac{1}{T}.
+ \]
+ So it follows that
+ \[
+ \beta = \frac{1}{k_B T}.
+ \]
+\end{eg}
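As a quick numerical sanity check (not part of the derivation), we can verify $\partial S/\partial \bar{E} = k_B \beta$ on a hypothetical three-level system, with $k_B = 1$ and an illustrative spectrum:

```python
import math

def boltzmann(energies, beta):
    """Boltzmann probabilities p_i = e^{-beta E_i} / Z."""
    weights = [math.exp(-beta * E) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

def entropy(p):
    # k_B = 1 throughout
    return -sum(pi * math.log(pi) for pi in p)

def mean_energy(energies, p):
    return sum(pi * E for pi, E in zip(p, energies))

E = [0.0, 1.0, 3.0]   # toy spectrum, assumed for illustration
beta = 0.7

# Vary beta slightly and compute dS/dE_bar by a central difference.
db = 1e-6
p_lo, p_hi = boltzmann(E, beta - db), boltzmann(E, beta + db)
dS_dE = (entropy(p_hi) - entropy(p_lo)) / (mean_energy(E, p_hi) - mean_energy(E, p_lo))
print(dS_dE)  # close to beta = 0.7, i.e. dS/dE = 1/T
```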
+
+Recall that the first law of thermodynamics says
+\[
+ \d E = T\;\d S - P \;\d V + \mu \;\d N + \cdots.
+\]
+This is a natural object to deal with when we have fixed $S, V, N$, etc. However, often, it is temperature that is fixed, and it is more natural to consider the free energy:
+\begin{defi}[Helmholtz free energy]\index{Helmholtz free energy}
+ The \emph{Helmholtz free energy} of a system at fixed temperature, volume and particle number is defined by
+ \[
+ F(T, V, N) = U - TS = \bar{E} - TS = - k_B T \log Z.
+ \]
+\end{defi}
+This satisfies
+\[
+ \d F = -S \;\d T - P\;\d V + \mu\;\d N + \cdots,
+\]
+and is minimized at equilibrium for fixed $T, V, N$. % The derivative of $F$ in an isothermal system tells us ``which way it moves''.
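The chain of identities $F = \bar{E} - TS = -k_B T \log Z$ can likewise be checked directly on any finite spectrum; here is a sketch on a toy four-level system (illustrative energies, $k_B = 1$):

```python
import math

kB, T = 1.0, 2.0
beta = 1.0 / (kB * T)
E = [0.0, 0.5, 1.5, 4.0]   # toy spectrum, assumed for illustration

Z = sum(math.exp(-beta * Ei) for Ei in E)
p = [math.exp(-beta * Ei) / Z for Ei in E]

E_bar = sum(pi * Ei for pi, Ei in zip(p, E))
S = -kB * sum(pi * math.log(pi) for pi in p)

F1 = E_bar - T * S            # thermodynamic form
F2 = -kB * T * math.log(Z)    # partition-function form
print(F1, F2)                 # identical up to rounding
```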
+
+\subsection{Coarse Graining}
Usually, in statistical mechanics, we distinguish between two types of objects --- microstates, namely the exact configuration of the system, and macrostates, which are variables that describe the overall behaviour of the system, such as pressure and temperature. Here we would like to consider something in between.
+
For example, if we have a system of magnets as in the Ising model, the microstate would be the magnetization at each site, and the macrostate would be the overall magnetization. A coarse-graining of this would be a function $m(\mathbf{r})$ of space that describes the ``average magnetization around $\mathbf{r}$''. There is no fixed prescription on how large an area we average over, and usually it does not matter much.
+
+In general, the coarse-grained variable would be called $\psi$. We can define a coarse-grained partition function
+\[
+ Z[\psi(\mathbf{r})] = \sum_{i \in \psi} e^{-\beta E_i},
+\]
+where we sum over all states that coarse-grain to $\psi$. We can similarly define the energy and entropy by restricting to all such $\psi$, and get
+\[
+ F[\psi] = E[\psi] - T S[\psi].
+\]
+The probability of being in a state $\psi$ is then
+\[
+ \P[\psi] = \frac{e^{-\beta F[\psi]}}{Z_{\mathrm{TOT}}},\quad Z_{\mathrm{TOT}} = \int e^{-\beta F[\psi]} \;\D [\psi].
+\]
+What we have on the end is a \emph{functional integral}, where we integrate over all possible values of $\psi$. We shall go into details later. We then have
+\[
+ F_{\mathrm{TOT}} = -k_B T \log Z_{\mathrm{TOT}}.
+\]
In theory, one can obtain $F[\psi]$ by explicitly doing a coarse graining of the microscopic laws.
+
+\begin{eg}
 Consider an interacting gas with $N$ particles. We can think of the energy as a sum of two components, the ideal gas part ($\frac{d}{2}N k_B T$), and an interaction part, given by
+ \[
 E_{\mathrm{int}} = \frac{1}{2}\sum_{i \neq j} U(\mathbf{r}_i - \mathbf{r}_j),
+ \]
+ where $i, j$ range over all particles with positions $\mathbf{r}_i, \mathbf{r}_j$ respectively, and $U$ is some potential function. When we do coarse-graining, we introduce a function $\rho$ that describes the local density of particles. The interaction energy can then be written as
+ \[
+ E_{\mathrm{int}} = \frac{1}{2} \iint U(\mathbf{r} - \mathbf{r}') \rho(\mathbf{r}) \rho(\mathbf{r}') \;\d \mathbf{r} \;\d \mathbf{r}'.
+ \]
+ Similarly, up to a constant, we can write the entropy as
+ \[
+ S[\rho] = -k_B \int \rho(\mathbf{r}) \log \rho(\mathbf{r}) \;\d \mathbf{r}.
+ \]
+\end{eg}
+
+In practice, since the microscopic laws aren't always accessible anyway, what is more common is to take a phenomenological approach, namely we write down a Taylor expansion of $F[\psi]$, and then empirically figure out what the coefficients should be, as a function of temperature and other parameters. In many cases, the signs of the first few coefficients dictate the overall behaviour of the system, and phase transition occurs when the change in temperature causes the coefficients to switch signs.
+
+\section{Mean field theory}
+In this chapter, we explore the mean field theory of two physical systems --- binary fluids and nematic liquid crystals. In mean field theory, what we do is we write down the free energy of the system, and then find a state $\phi$ that minimizes the free energy. By the Boltzmann distribution, this would be the ``most likely state'' of the system, and we can pretend $F_{\mathrm{TOT}} = F[\phi]$.
+
This is actually not a very robust approximation, since it ignores all the fluctuations about the minimum, but it gives a good starting point for understanding the system.
+
+\subsection{Binary fluids}
+Consider a binary fluid, consisting of a mixture of two fluids $A$ and $B$. For simplicity, we assume we are in the symmetric case, where $A$ and $B$ are the same ``type'' of fluids. In other words, the potentials between the fluids are such that
+\[
+ U_{AA}(\mathbf{r}) = U_{BB}(\mathbf{r}) \not= U_{AB}(\mathbf{r}).
+\]
We consider the case where $A$ and $B$ repel each other (or rather, repel each other more strongly than the $A$-$A$ and $B$-$B$ repulsions). Thus, we expect that at high temperatures, entropy dominates, and the two fluids are mixed together well. At low temperatures, energy dominates, and the two fluids would be well-separated.
+
+We let $\rho_A(\mathbf{r})$ and $\rho_B(\mathbf{r})$ be the coarse-grained particle density of each fluid, and we set our order parameter to be
+\[
+ \phi(\mathbf{r}) = \frac{\rho_A(\mathbf{r}) - \rho_B(\mathbf{r})}{(N_A + N_B)/V},
+\]
where $N_A$ and $N_B$ are the total amounts of fluids $A$ and $B$, and $V$ is the volume. This is normalized so that $\phi(\mathbf{r}) \in [-1, 1]$.
+
+We model our system with \term{Landau--Ginzburg theory}, with free energy given by
+\[
+ \beta F = \int \Big(\underbrace{\frac{a}{2} \phi^2 + \frac{b}{4} \phi^4}_{f(\phi)} + \frac{\kappa}{2} (\nabla \phi)^2\Big)\;\d \mathbf{r},
+\]
+where $a, b, \kappa$ are functions of temperature.
+
+Why did we pick such a model? Symmetry suggests the free energy should be even, and if we Taylor expand any even free energy functional, the first few terms will be of this form. For small $\phi$ and certain values of $a, b, \kappa$, we shall see there is no need to look further into higher order terms.
+
+Observe that even without symmetry, we can always assume we do not have a linear term, since a $c \phi$ term will integrate out to give $cV \bar{\phi}$, and $\bar{\phi}$, the average composition of the fluid, is a fixed number. So this just leads to a constant shift.
+
The gradient term $\int \frac{\kappa}{2} (\nabla \phi)^2 \;\d \mathbf{r}$ captures, at order $\nabla^{(2)}$, the non-locality of $E_{\mathrm{int}}$,
\[
 E_{\mathrm{int}} = \sum_{i, j \in \{A, B\}} \iint \rho_i (\mathbf{r}) \rho_j(\mathbf{r}') U_{ij}(|\mathbf{r} - \mathbf{r}'|)\;\d \mathbf{r}\;\d \mathbf{r}'.
\]
If we assume $\phi(\mathbf{r})$ is slowly varying on the scale of the interactions, then we can Taylor expand $E_{\mathrm{int}}$ and obtain a $(\nabla \phi)^2$ term.
+
+Now what are the coefficients $a, b, \kappa$? For the model to make sense, we want the free energy to be suppressed for large fluctuating $\phi$. Thus, we want $b, \kappa > 0$, while $a$ can take either sign. In general, the sign of $a$ is what determines the behaviour of the system, so for simplicity, we suppose $b$ and $\kappa$ are fixed, and let $a$ vary with temperature.
+
To do mean field theory, we find a single $\phi$ that minimizes $F$. Since the gradient term $\int \frac{\kappa}{2} (\nabla \phi)^2\;\d \mathbf{r} \geq 0$, a naive guess would be that we should pick a uniform $\phi$,
+\[
+ \phi(\mathbf{r}) = \bar{\phi}.
+\]
+Note that $\bar{\phi}$ is fixed by the constraint of the system, namely how much fluid of each type we have. So we do not have any choice. In this configuration, the free energy per unit volume is
+\[
+ \frac{F}{V} = f(\bar{\phi}) = \frac{a}{2} \bar{\phi}^2 + \frac{b}{4} \bar{\phi}^4.
+\]
The global behaviour of this function depends only on the sign of $a$. For $a > 0$ and $a < 0$ respectively, the plots look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.3, 0) -- (2.3, 0) node [right] {$\bar{\phi}$};
+ \draw [->] (0, -1) -- (0, 3) node [above] {$f$};
+ \draw [domain=-2:2, mblue, thick] plot (\x, {(\x)^2/3 + (\x)^4/10});
+ \draw [domain=-2:2, mred, thick] plot [smooth] (\x, {-(\x)^2 + (\x)^4/3});
+
+ \node [right, mblue] at (1.8, 2.13) {$a > 0$};
+ \node [right, mred] at (1.9, 0.734) {$a < 0$};
+ \node [left, white] at (-1.8, 2.13) {$a > 0$};
+ \node [left, white] at (-1.9, 0.734) {$a < 0$};
+ \end{tikzpicture}
+\end{center}
+We first think about the $a > 0$ part. The key point is that the function $f(\phi)$ is a convex function. Thus, for a fixed average value of $\phi$, the way to minimize $f(\phi)$ is to take $\phi$ to be constant. Thus, since
+\[
+ \beta F = \int \left(f(\phi(\mathbf{r})) + \frac{\kappa}{2} (\nabla \phi)^2\right)\;\d \mathbf{r},
+\]
+even considering the first term alone tells us we must take $\phi$ to be constant, and the gradient term reinforces this further.
+
+The $a < 0$ case is more interesting. The function $f(\phi)$ has two minima, $\phi_{1, 2} = \pm \phi_B$, where
+\[
+ \phi_B = \sqrt{\frac{-a}{b}}.
+\]
+Now suppose $\bar{\phi}$ lies between $\pm \phi_B$. Then it might be advantageous to have some parts of the fluid being at $-\phi_B$ and the others at $\phi_B$, and join them smoothly in between to control the gradient term. Mathematically, this is due to the concavity of the function $f$ in the region $[-\phi_B, \phi_B]$.
+
Suppose a volume $V_1$ of the fluid has $\phi = \phi_1$, and a volume $V_2$ has $\phi = \phi_2$. Then these quantities must obey
+\begin{align*}
+ V_1 \phi_1 + V_2 \phi_2 &= V \bar{\phi},\\
+ V_1 + V_2 &= V.
+\end{align*}
+Concavity tells us we must have
+\[
+ V_1 f(\phi_1) + V_2 f(\phi_2) < (V_1 + V_2) f(\bar{\phi}).
+\]
+Thus, if we only consider the $f$ part of the free energy, it is advantageous to have this phase separated state. If our system is very large in size, since the interface between the two regions is concentrated in a surface of finite thickness, the gradient cost will be small compared to the gain due to phase separation.
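A small numerical sketch makes the concavity argument concrete: for $a < 0$, splitting the system into two phases at $\pm\phi_B$ (with volumes fixed by the lever rule) beats the uniform state once the interface cost is neglected. The parameter values here are purely illustrative:

```python
import math

a, b = -1.0, 1.0                   # a < 0: double-well regime
phi_B = math.sqrt(-a / b)          # minima of f at +/- phi_B

def f(phi):
    return a / 2 * phi**2 + b / 4 * phi**4

V, phi_bar = 1.0, 0.3              # average composition between -phi_B and phi_B
# Lever rule: V1 * phi_B + V2 * (-phi_B) = V * phi_bar, with V1 + V2 = V
V1 = V * (phi_bar + phi_B) / (2 * phi_B)
V2 = V - V1

F_uniform = V * f(phi_bar)
F_separated = V1 * f(phi_B) + V2 * f(-phi_B)   # interface cost neglected
print(F_separated < F_uniform)  # True: phase separation lowers the free energy
```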
+
+We can be a bit more precise about the effects of the interface. In the first example sheet, we will explicitly solve for the actual minimizer of the free energy subject to the boundary condition $\phi(x) \to \pm \phi_B$ as $x \to \pm \infty$, as in our above scenario. We then find that the thickness of the interface is (of the order)
+\[
+ \xi_0 = \frac{-2 \kappa}{a},
+\]
+and the cost per unit area of this interface is
+\[
+ \sigma = \left(\frac{-8 \kappa a^3}{9 b^2}\right)^{1/2}.
+\]
+This is known as the \term{interfacial tension}. When calculating the free energy of a phase separated state, we will just multiply the interfacial tension by the area, instead of going back to explicit free energy calculations.
+
+In general the mean-field phase diagram looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 3) node [above] {$a$} -- (-2, 0) node [below] {$-1$} -- (2, 0) node [below] {$1$} node [right] {$\bar \phi$} -- (2, 3);
+ \node [left] at (-2, 2.5) {$a(T) = 0$};
+
+ \draw [domain=2.5:0.2, samples=40, mblue, semithick] plot [smooth] ({sqrt (2.5 - \x)}, \x);
+ \draw [domain=2.5:0.2, samples=40, mblue, semithick] plot [smooth] ({-sqrt (2.5 - \x)}, \x);
+
+ \draw [dashed, domain=0.2:2.5, mred, semithick] plot [smooth] ({sqrt ((2.5 - \x)/3)}, \x);
+ \draw [dashed, domain=0.2:2.5, mred, semithick] plot [smooth] ({-sqrt ((2.5 - \x)/3)}, \x);
+ \draw (-2.05, 2.5) -- (-1.95, 2.5);
+ \end{tikzpicture}
+\end{center}
Within the solid lines, we have phase separation, where the ground state of the system for the given $a$ and $\bar{\phi}$ is given by the state described above. The inner curve denotes \term{spinodal instability}, where we in fact have local instability, as opposed to mere global instability. This is given by the condition $f''(\bar{\phi}) < 0$, which holds for $|\bar{\phi}| < \phi_S$, where
+\[
+ \phi_S = \sqrt{\frac{-a}{3b}}.
+\]
+
What happens if our fluid is no longer symmetric? In this case, we should add odd terms as well. As we previously discussed, a linear term has no effect. How about a cubic term $\int \frac{c}{3} \phi(\mathbf{r})^3 \;\d \mathbf{r}$ in $\beta F$? It turns out we can remove the $\phi(\mathbf{r})^3$ term by a constant shift of $\phi$ together with a redefinition of $a$, which is a simple algebraic maneuver. So we have a shift of axes on the phase diagram, and nothing interesting really happens.
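As a sketch of this maneuver (with $u$ denoting the shift constant): substituting $\phi = \psi + u$ with $u = -c/(3b)$ kills the cubic term, at the cost of redefining $a$ and generating a linear term, which only shifts $F$ by a constant as noted above:

```latex
% Substituting phi = psi + u into a/2 phi^2 + c/3 phi^3 + b/4 phi^4:
% the psi^3 coefficient is c/3 + b u, which vanishes for u = -c/(3b).
\[
  \frac{a}{2}\phi^2 + \frac{c}{3}\phi^3 + \frac{b}{4}\phi^4
  = \frac{\tilde{a}}{2}\psi^2 + \frac{b}{4}\psi^4
    + (\text{linear in } \psi) + \text{const},
  \quad \tilde{a} = a - \frac{c^2}{3b}.
\]
```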
+
+\subsection{Nematic liquid crystals}
+For our purposes, we can imagine liquid crystals as being made of rod-like molecules
+\begin{center}
+ \begin{tikzpicture}[scale=0.3]
+ \begin{scope}
+ \draw (0, 1) ellipse (0.1 and 0.05);
+ \draw (0, 0) ellipse (0.1 and 0.05);
+ \draw (0.1, 0) -- (0.1, 1);
+ \draw (-0.1, 0) -- (-0.1, 1);
+ \end{scope}
+ \begin{scope}[shift={(1, 0.5)},rotate=-10]
+ \draw (0, 1) ellipse (0.1 and 0.05);
+ \draw (0, 0) ellipse (0.1 and 0.05);
+ \draw (0.1, 0) -- (0.1, 1);
+ \draw (-0.1, 0) -- (-0.1, 1);
+ \end{scope}
+ \begin{scope}[shift={(-0.5, 2)},rotate=10]
+ \draw (0, 1) ellipse (0.1 and 0.05);
+ \draw (0, 0) ellipse (0.1 and 0.05);
+ \draw (0.1, 0) -- (0.1, 1);
+ \draw (-0.1, 0) -- (-0.1, 1);
+ \end{scope}
+ \begin{scope}[shift={(-2, -0.5)},rotate=70]
+ \draw (0, 1) ellipse (0.1 and 0.05);
+ \draw (0, 0) ellipse (0.1 and 0.05);
+ \draw (0.1, 0) -- (0.1, 1);
+ \draw (-0.1, 0) -- (-0.1, 1);
+ \end{scope}
+ \begin{scope}[shift={(-2, 0.5)},rotate=-20]
+ \draw (0, 1) ellipse (0.1 and 0.05);
+ \draw (0, 0) ellipse (0.1 and 0.05);
+ \draw (0.1, 0) -- (0.1, 1);
+ \draw (-0.1, 0) -- (-0.1, 1);
+ \end{scope}
+ \begin{scope}[shift={(-2.5, 1)},rotate=15]
+ \draw (0, 1) ellipse (0.1 and 0.05);
+ \draw (0, 0) ellipse (0.1 and 0.05);
+ \draw (0.1, 0) -- (0.1, 1);
+ \draw (-0.1, 0) -- (-0.1, 1);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+
+We are interested in the transition between two phases:
+\begin{itemize}
+ \item The \term{isotropic phase}, where the rods are pointed in random directions.
+ \item The \term{nematic phase}, where the rods all point in the same direction, so that there is a long-range orientation order, but there is no long range positional order.
+\end{itemize}
+
+In general, there can be two different sorts of liquid crystals --- the rods can either be symmetric in both ends or have ``direction''. Thus, in the first case, rotating the rod by $180^\circ$ does not change the configuration, and in the second case, it does. We shall focus on the first case in this section.
+
The first problem we have to solve is to pick an order parameter. We want to take the direction of the rod $\mathbf{n}$, but mod it out by the relation $\mathbf{n} \sim -\mathbf{n}$. One way to do so is to consider the second-rank tensor $n_i n_j$. This has the property that $A_i n_i n_j$ is the component of a vector $\mathbf{A}$ in the direction of $\mathbf{n}$, and it is invariant under $\mathbf{n} \mapsto -\mathbf{n}$. Observe that if we normalize $\mathbf{n}$ to be a unit vector, then $n_i n_j$ has trace $1$. Thus, if we have isotropic rods in $d$ dimensions, then we have
+\[
+ \bra n_i n_j \ket = \frac{\delta_{ij}}{d}.
+\]
In general, we can define a coarse-grained order parameter
\[
 Q_{ij} (\mathbf{r}) = \bra n_i n_j\ket_{\mathrm{local}} - \frac{1}{d} \delta_{ij}.
\]
+This is then a traceless symmetric second-rank tensor that vanishes in the isotropic phase.
+
One main difference from the case of the binary fluid is that $Q_{ij}$ is no longer conserved, i.e.\ the ``total $Q$''
+\[
+ \int Q_{ij}(\mathbf{r})\;\d \mathbf{r}
+\]
+is not constant in time. This will have consequences for equilibrium statistical mechanics, but also the dynamics.
+
+We now want to construct the leading-order terms of the ``most general'' free energy functional. We start with the local part $f(Q)$, which has to be a scalar built on $Q$. The possible terms are as follows:
+\begin{enumerate}
+ \item There is only one linear one, namely $Q_{ii} = \Tr(Q)$, but this vanishes.
+ \item We can construct a quadratic term $Q_{ij} Q_{ji} = \Tr(Q^2)$, and this is in general non-zero.
+ \item There is a cubic term $Q_{ij} Q_{jk} Q_{ki} = \Tr(Q^3)$, and is also in general non-zero.
+ \item There are two possible quartic terms, namely $\Tr(Q^2)^2$ and $\Tr(Q^4)$.
+\end{enumerate}
+So we can write
+\[
+ f(Q) = a\Tr(Q^2) + c \Tr(Q^3) + b_1 \Tr(Q^2)^2 + b_2 \Tr(Q^4).
+\]
This is the local part of the free energy up to fourth order in $Q$. We can go on, and under certain conditions we have to, but if the coefficients $b_i$ are sufficiently positive in an appropriate sense, this is enough.
+
How can we think about this functional? Observe that if all the rods tend to point in a fixed direction, say $z$, and are agnostic about the other two directions, then $Q$ will be given by
+\[
+ Q_{ij} =
+ \begin{pmatrix}
+ -\lambda/2 & 0 & 0\\
+ 0 & -\lambda/2 & 0\\
+ 0 & 0 & \lambda
+ \end{pmatrix},\quad \lambda > 0.
+\]
+If the rod is agnostic about the $x$ and $y$ directions, but instead avoids the $z$-direction, then $Q_{ij}$ takes the same form but with $\lambda < 0$. For the purposes of $f(Q)$, we can locally diagonalize $Q$, and it should somewhat look like this form. So this seemingly-special case is actually quite general.
+
+The $\lambda > 0$ and $\lambda < 0$ cases are physically very different scenarios, but the difference is only detected in the odd terms. Hence the cubic term is extremely important here. To see this more explicitly, we compute $f$ in terms of $\lambda$ as
+\begin{align*}
+ f(Q) &= a \left(\frac{3}{2} \lambda^2\right) + c \left(\frac{3}{4} \lambda^3\right) + b_1\left(\frac{9}{4} \lambda^4\right) + b_2 \left(\frac{9}{8} \lambda^4\right)\\
+ &= \bar{a} \lambda^2 + \bar{c} \lambda^3 + \bar{b}\lambda^4.
+\end{align*}
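The traces above are easy to verify by direct computation on the diagonal matrix, using exact rational arithmetic and an arbitrary value of $\lambda$:

```python
from fractions import Fraction

lam = Fraction(2, 3)   # arbitrary rational lambda for an exact check
Q = [[-lam / 2, 0, 0], [0, -lam / 2, 0], [0, 0, lam]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(A):
    return sum(A[i][i] for i in range(3))

Q2 = matmul(Q, Q)
Q3 = matmul(Q2, Q)
Q4 = matmul(Q2, Q2)

print(trace(Q2) == Fraction(3, 2) * lam**2)  # True
print(trace(Q3) == Fraction(3, 4) * lam**3)  # True
print(trace(Q4) == Fraction(9, 8) * lam**4)  # True
```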
We can think of this in a way similar to the binary fluid, where $\lambda$ is our sole order parameter. We fix $\bar{b}$ and $\bar{c} < 0$, and then vary $\bar{a}$. In different situations, we get
+\begin{center}
+ \begin{tikzpicture}[yscale=4]
+ \draw [->] (-1, 0) -- (4, 0) node [right] {$\lambda$};
+ \draw [->] (0, -0.2) -- (0, 0.8) node [above] {$f$};
+ \clip (-1, -0.16) rectangle (4.3, 0.8);
+ \draw [domain=-1:4, samples=50, mred, semithick] plot [smooth] (\x, {(\x)^2/6 - (\x)^3/5 + (\x)^4 / 20});
+ \draw [domain=-1:4, samples=50, mblue, semithick] plot [smooth] (\x, {(\x)^2/5 - (\x)^3/5 + (\x)^4 / 20});
+ \draw [domain=-1:4, samples=50, mgreen, semithick] plot [smooth] (\x, {(\x)^2/3 - (\x)^3/5 + (\x)^4 / 20});
+
 \node [mgreen] at (1.3, 0.5) {\small$\bar{a} > \bar{a}_c$};
 \node [mblue] at (2.3, 0.35) {\small$\bar{a} = \bar{a}_c$};
 \node [mred] at (3.6, 0.2) {\small$\bar{a} < \bar{a}_c$};
+ \end{tikzpicture}
+\end{center}
+Here the cubic term gives a discontinuous transition, which is a first-order transition. If we had $\bar{c} > 0$ instead, then the minima are on the other side.
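For this quartic, the transition point can be located exactly: $f(\lambda) = \lambda^2(\bar{a} + \bar{c}\lambda + \bar{b}\lambda^2)$ acquires a degenerate nonzero root when $\bar{c}^2 = 4\bar{a}\bar{b}$, i.e.\ at $\bar{a}_c = \bar{c}^2/(4\bar{b})$. A brute-force scan with illustrative coefficients confirms the discontinuous jump:

```python
b_bar, c_bar = 1.0, -1.0    # illustrative coefficients, c_bar < 0

def f(lam, a_bar):
    return a_bar * lam**2 + c_bar * lam**3 + b_bar * lam**4

def global_min(a_bar):
    # crude scan of lambda over [-1, 4]
    return min(f(i / 10000, a_bar) for i in range(-10000, 40001))

a_bar_c = c_bar**2 / (4 * b_bar)   # predicted first-order transition point

print(global_min(a_bar_c + 0.01))        # 0.0: minimum stays at lambda = 0
print(global_min(a_bar_c - 0.01) < 0.0)  # True: nonzero minimum takes over
```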
+
+We now move on to the gradient terms. The possible gradient terms up to order $\nabla^{(2)}$ and $Q^{(2)}$ are
+\begin{align*}
 \kappa_1 (\nabla_i Q_{j\ell}) (\nabla_i Q_{j\ell}) &= \kappa_1 (\nabla Q)^2 \\
+ \kappa_2 (\nabla_i Q_{im}) (\nabla_j Q_{jm}) &= \kappa_2(\nabla \cdot Q)^2\\
+ \kappa_3 (\nabla_i Q_{jm}) (\nabla_j Q_{im}) &= \text{yuck}.
+\end{align*}
+Collectively, these three terms describe the energy costs of the following three things:
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw (-1, -1) rectangle (1, 1);
+ \node [below] at (0, -1) {splay};
+
+ \draw [semithick] (0, 0.2) -- +(0, 0.5);
+ \draw [semithick, rotate=30] (0, 0.2) -- +(0, 0.5);
+ \draw [semithick, rotate=60] (0, 0.2) -- +(0, 0.5);
+ \draw [semithick, rotate=90] (0, 0.2) -- +(0, 0.5);
+ \draw [semithick, rotate=-30] (0, 0.2) -- +(0, 0.5);
+ \draw [semithick, rotate=-60] (0, 0.2) -- +(0, 0.5);
+ \draw [semithick, rotate=-90] (0, 0.2) -- +(0, 0.5);
+ \end{scope}
+ \begin{scope}[shift={(3, 0)}]
+ \draw (-1, -1) rectangle (1, 1);
+ \node [below] at (0, -1) {twisting};
+
+ \draw [semithick] (-0.6, -0.3) -- +(0, 0.6);
+ \draw [semithick] (0.6, -0.3) -- +(0, 0.6);
+
+ \draw [semithick, rotate around={-30:(-0.4,0)}] (-0.4, -0.2) -- +(0, 0.4);
+ \draw [semithick, rotate around={-60:(-0.2,0)}] (-0.2, -0.1) -- +(0, 0.2);
+
+ \draw [semithick, rotate around={30:(0.4,0)}] (0.4, -0.2) -- +(0, 0.4);
+ \draw [semithick, rotate around={60:(0.2,0)}] (0.2, -0.1) -- +(0, 0.2);
+
+ \draw (0, 0) circle [radius=0.02];
+
+ \draw [dashed, thin] (-0.8, 0) -- (0.8, 0);
+ \end{scope}
+ \begin{scope}[shift={(6, 0)}]
+ \draw (-1, -1) rectangle (1, 1);
+ \node [below] at (0, -1) {bend};
+
+ \draw [semithick] (-0.8, 0) -- (-0.7, 0.3);
+ \draw [semithick, rotate=-30] (-0.8, 0) -- (-0.7, 0.3);
+ \draw [semithick, rotate=-60] (-0.8, 0) -- (-0.7, 0.3);
+ \draw [semithick, rotate=-90] (-0.8, 0) -- (-0.7, 0.3);
+ \draw [semithick, rotate=-120] (-0.8, 0) -- (-0.7, 0.3);
+ \draw [semithick, rotate=-150] (-0.8, 0) -- (-0.7, 0.3);
+
+ \draw [semithick] (-0.7, 0.1) -- (-0.57, 0.35);
+ \draw [semithick, rotate=-30] (-0.7, 0.1) -- (-0.57, 0.35);
+ \draw [semithick, rotate=-60] (-0.7, 0.1) -- (-0.57, 0.35);
+ \draw [semithick, rotate=-90] (-0.7, 0.1) -- (-0.57, 0.35);
+ \draw [semithick, rotate=-120] (-0.7, 0.1) -- (-0.57, 0.35);
+ \draw [semithick, rotate=-150] (-0.7, 0.1) -- (-0.57, 0.35);
+
+ \draw [semithick] (-0.6, 0) -- (-0.5, 0.25);
+ \draw [semithick, rotate=-30] (-0.6, 0) -- (-0.5, 0.25);
+ \draw [semithick, rotate=-60] (-0.6, 0) -- (-0.5, 0.25);
+ \draw [semithick, rotate=-90] (-0.6, 0) -- (-0.5, 0.25);
+ \draw [semithick, rotate=-120] (-0.6, 0) -- (-0.5, 0.25);
+ \draw [semithick, rotate=-150] (-0.6, 0) -- (-0.5, 0.25);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
In general, each of these modes corresponds to a linear combination of the three terms, and it is difficult to pin down exactly how these correspondences work.
+
+Assuming these linear combinations are sufficiently generic, a sensible choice is to set $\kappa_1 = \kappa_3 = 0$ (for example), and then the elastic costs of these deformations will all be comparable.
+
+\section{Functional derivatives and integrals}
+We shall be concerned with two objects --- functional derivatives and integrals. Functional derivatives are (hopefully) familiar from variational calculus, and functional integrals might be something new. They will be central to what we are going to do next.
+
+\subsection{Functional derivatives}
+Consider a scalar field $\phi(\mathbf{r})$, and consider a functional
+\[
+ A[\phi] = \int L(\phi, \nabla \phi)\;\d \mathbf{r}.
+\]
+Under a small change $\phi \mapsto \phi + \delta \phi(\mathbf{r})$ with $\delta \phi = 0$ on the boundary, our functional becomes
+\begin{align*}
 A[\phi + \delta \phi] &= \int \left(L(\phi, \nabla \phi) + \delta \phi \frac{\partial L}{\partial \phi} + \nabla \delta \phi \cdot \frac{\partial L}{\partial \nabla \phi}\right)\;\d \mathbf{r}\\
+ &= A[\phi] + \int \delta \phi \left(\frac{\partial L}{\partial \phi} - \nabla \cdot \frac{\partial L}{\partial \nabla \phi}\right)\;\d \mathbf{r},
+\end{align*}
+where we integrated by parts using the boundary condition. This suggests the definition\index{functional derivative}\index{$\frac{\delta A}{\delta \phi(\mathbf{r})}$}
+\[
+ \frac{\delta A}{\delta \phi(\mathbf{r})} = \frac{\partial L}{\partial \phi(\mathbf{r})} - \nabla \cdot \frac{\partial L}{\partial \nabla \phi}.
+\]
+\begin{eg}
+ In classical mechanics, we replace $\mathbf{r}$ by the single variable $t$, and $\phi$ by position $x$. We then have
+ \[
+ A = \int L(x, \dot{x})\;\d t.
+ \]
+ Then we have
+ \[
+ \frac{\delta A}{\delta x(t)} = \frac{\partial L}{\partial x} - \frac{\d}{\d t} \left(\frac{\partial L}{\partial \dot{x}}\right).
+ \]
+ The equations of classical mechanics are $\frac{\delta A}{\delta x(t)} = 0$.
+\end{eg}
+
+The example more relevant to us is perhaps Landau--Ginzburg theory:
+\begin{eg}
+ Consider a coarse-grained free energy
+ \[
+ F[\phi] = \int \left(\frac{a}{2} \phi^2 + \frac{b}{4} \phi^4 + \frac{\kappa}{2} (\nabla \phi)^2 \right)\;\d \mathbf{r}.
+ \]
+ Then
+ \[
+ \frac{\delta F}{\delta \phi(\mathbf{r})} = a \phi + b \phi^3 - \kappa \nabla^2 \phi.
+ \]
+ In mean field theory, we set this to zero, since by definition, we are choosing a single $\phi(\mathbf{r})$ that minimizes $F$. In the first example sheet, we find that the minimum is given by
+ \[
+ \phi(x) = \phi_B \tanh \left(\frac{x - x_0}{\xi_0}\right),
+ \]
+ where $\xi_0$ is the interface thickness we previously described.
+\end{eg}
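We can check numerically that this $\tanh$ profile indeed annihilates the functional derivative, assuming the interface width $\xi_0 = (-2\kappa/a)^{1/2}$; parameter values are illustrative, and the Laplacian is taken by finite differences:

```python
import math

a, b, kappa = -1.0, 1.0, 0.5       # illustrative parameters, a < 0
phi_B = math.sqrt(-a / b)
xi0 = math.sqrt(-2 * kappa / a)    # assumed interface width

def phi(x):
    return phi_B * math.tanh(x / xi0)

def dF_dphi(x, h=1e-5):
    lap = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2  # second derivative
    return a * phi(x) + b * phi(x)**3 - kappa * lap

residual = max(abs(dF_dphi(x / 10)) for x in range(-30, 31))
print(residual)  # small: zero up to finite-difference error
```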
+
In general, we can think of $\frac{\delta F}{\delta \phi(\mathbf{r})}$ as a ``generalized force'', telling us how we should change $\phi$ to reduce the free energy, since for a small change $\delta \phi(\mathbf{r})$, the corresponding change in $F$ is
+\[
+ \delta F = \int \frac{\delta F}{\delta \phi(\mathbf{r})} \;\delta \phi(\mathbf{r})\;\d \mathbf{r}.
+\]
+Compare this with the equation
+\[
+ \d F = - S \;\d T - p \;\d V + \mu \;\d N + \mathbf{h} \cdot \d \mathbf{M} + \cdots.
+\]
+Under the analogy, we can think of $\frac{\delta F}{\delta \phi(\mathbf{r})}$ as the intensive variable, and $\delta \phi(\mathbf{r})$ as the extensive variable. If $\phi$ is a conserved scalar density such as particle density, then we usually write this as
+\[
+ \mu(\mathbf{r}) = \frac{\delta F}{\delta \phi(\mathbf{r})},
+\]
+and call it the \term{chemical potential}. If instead $\phi$ is not conserved, e.g.\ the $Q$ we had before, then we write
+\[
+ H_{ij} = \frac{\delta F}{\delta Q_{ij}}
+\]
+and call it the \emph{molecular field}.
+
+We will later see that in the case where $\phi$ is conserved, $\phi$ evolves according to the equation
+\[
+ \dot{\phi} = - \nabla \cdot J,\quad J \propto -D \nabla \mu,
+\]
where $D$ is the diffusivity. The non-conserved case is simpler, with equation of motion given by
+\[
+ \dot{Q} = - \Gamma H.
+\]
+
+Let us go back to the scalar field $\phi(\mathbf{r})$. Consider a small displacement
+\[
+ \mathbf{r} \mapsto \mathbf{r} + \mathbf{u}(\mathbf{r}).
+\]
+We take this to be incompressible, so that $\nabla \cdot \mathbf{u} = 0$. Then
+\[
+ \phi \mapsto \phi' = \phi'(\mathbf{r}) = \phi(\mathbf{r} - \mathbf{u}).
+\]
+Then
+\[
+ \delta \phi(\mathbf{r}) = \phi'(\mathbf{r}) - \phi(\mathbf{r}) = - \mathbf{u} \cdot \nabla \phi(\mathbf{r}) + O(u^2).
+\]
+Then
+\begin{multline*}
+ \delta F = \int \delta \phi \frac{\delta F}{\delta \phi}\;\d \mathbf{r}
+ = -\int \mu \mathbf{u} \cdot \nabla \phi \;\d \mathbf{r}\\
+ = \int \phi \nabla \cdot (\mu \mathbf{u})\;\d \mathbf{r}
+ = \int (\phi \nabla \mu)\cdot \mathbf{u}\;\d \mathbf{r}
 = \int (\phi \nabla_j \mu)u_j\;\d \mathbf{r},
+\end{multline*}
+using incompressibility.
+
+We can think of the free energy change as the work done by stress,
+\[
+ \delta F = \int \sigma_{ij}(\mathbf{r}) \varepsilon_{ij}(\mathbf{r})\;\d \mathbf{r},
+\]
+where $\varepsilon_{ij} = \nabla_i u_j$ is the strain tensor, and $\sigma_{ij}$ is the stress tensor. So we can write this as
+\[
+ \delta F = \int \sigma_{ij} \nabla_i u_j\;\d \mathbf{r} = - \int (\nabla_i \sigma_{ij}) u_j\;\d \mathbf{r}.
+\]
+So we can identify
+\[
+ \nabla_i \sigma_{ij} = -\phi \nabla_j \mu.
+\]
+So $\mu$ also contains the ``mechanical information''.
+
+\subsection{Functional integrals}
Given a coarse-grained $\psi$, we can define the total partition function
+\[
+ e^{-\beta F_{\mathrm{TOT}}} = Z_{\mathrm{TOT}} = \int e^{-\beta F[\psi]} \;\D [\psi],
+\]
+where $\D [\psi]$ is the ``sum over all field configurations''. In mean field theory, we approximate this $F_{\mathrm{TOT}}$ by replacing the functional integral by the value of the integrand at its maximum, i.e.\ taking the minimum value of $F[\psi]$. What we are going to do now is to evaluate the functional integral ``honestly'', and this amounts to taking into account fluctuations around the minimum (since those far away from the minimum should contribute very little).
+
To make sense of the integral, we use the fact that the space of all $\psi$ has a countable orthonormal basis. We assume we work in $[0, L]^d$ of volume $V = L^d$ with periodic boundary conditions. We can define the Fourier modes
+\[
+ \psi_\mathbf{q} = \frac{1}{\sqrt{V}} \int \psi(\mathbf{r}) e^{-i\mathbf{q}\cdot \mathbf{r}}\;\d \mathbf{r},
+\]
+Since we have periodic boundary conditions, $\mathbf{q}$ can only take on a set of discrete values. Moreover, molecular physics or the nature of coarse-graining usually implies there is some ``maximum momentum'' $q_{max}$, above which the wavelengths are too short to make physical sense (e.g.\ vibrations in a lattice of atoms cannot have wavelengths shorter than the lattice spacing). Thus, we assume $\psi_\mathbf{q} = 0$ for $|\mathbf{q}| > q_{max}$. This leaves us with finitely many degrees of freedom.
+
+The normalization of $\psi_\mathbf{q}$ is chosen so that \term{Parseval's theorem} holds:
+\[
+ \int |\psi|^2 \;\d \mathbf{r} = \sum_{\mathbf{q} } |\psi_\mathbf{q}|^2.
+\]
+We can then define
+\[
+ \D [\psi] = \prod_{\mathbf{q}} \;\d \psi_\mathbf{q}.
+\]
+Since we imposed a $q_{max}$, this is a finite product of measures, and is well-defined.
+
In some sense, $q_{max}$ is arbitrary, but for most cases, it doesn't really matter what $q_{max}$ we choose. Roughly speaking, at really short wavelengths, the behaviour of $\psi$ no longer depends on what is actually going on in the system, so these modes only give a constant shift to $F$, independent of the interesting, macroscopic properties of the system. Thus, we will mostly leave the cutoff implicit, but its existence is important to keep our sums convergent.
+
+It is often the case that after doing calculations, we end up with some expression that sums over the $\mathbf{q}$'s. In such cases, it is convenient to take the limit $V \to \infty$ so that the sum becomes an integral, which is easier to evaluate.
+
+Before we start computing, note that a significant notational annoyance is that if $\psi$ is a real variable, then $\psi_\mathbf{q}$ will still be complex in general, but they will not be independent. Instead, we always have
+\[
+ \psi_\mathbf{q} = \psi_{-\mathbf{q}}^*.
+\]
+Thus, we should only multiply over half of the possible $\mathbf{q}$'s, and we usually denote this by something like $\prod^+_{\mathbf{q}}$.
+
+In practice, there is only one path integral we are able to compute, namely when $\beta F$ is a quadratic form, i.e.
+\[
+ \beta F = \frac{1}{2} \int \phi(\mathbf{r}) G(\mathbf{r} - \mathbf{r}') \phi(\mathbf{r}')\;\d \mathbf{r}\;\d \mathbf{r}' - \int h(\mathbf{r}) \phi(\mathbf{r})\;\d \mathbf{r}.
+\]
+Note that this expression is non-local, but has no gradient terms. We can think of the gradient terms we've had as localizations of first-order approximations to the non-local interactions. Taking the Fourier transform, we get
+\[
 \beta F[\phi_\mathbf{q}] = \frac{1}{2} \sum_\mathbf{q} G(\mathbf{q}) \phi_\mathbf{q} \phi_{-\mathbf{q}} - \sum_\mathbf{q} h_\mathbf{q} \phi_{-\mathbf{q}}.
+\]
+\begin{eg}
+ We take Landau--Ginzburg theory and consider terms of the form
+ \[
 \beta F[\phi] = \int \left\{\frac{a}{2} \phi^2 - h \phi + \frac{\kappa}{2} (\nabla \phi)^2 + \frac{\gamma}{2} (\nabla^2 \phi)^2\right\}\;\d \mathbf{r}.
+ \]
+ The $\gamma$ term is new, and is necessary because we will be interested in the case where $\kappa$ is negative.
+
+ We can now take the Fourier transform to get
+ \begin{align*}
+ \beta F[\phi_\mathbf{q}] &= \frac{1}{2} \sum_\mathbf{q} (a + \kappa q^2 + \gamma q^4) \phi_\mathbf{q} \phi_{-\mathbf{q}} - \sum_\mathbf{q} h_\mathbf{q} \phi_\mathbf{q}\\
+ &= \sum_\mathbf{q}\splus (a + \kappa q^2 + \gamma q^4) |\phi_\mathbf{q}|^2 - \sum_\mathbf{q} h_\mathbf{q} \phi_\mathbf{q}.
+ \end{align*}
+ So our $G(\mathbf{q})$ is given by
+ \[
+ G(\mathbf{q}) = a + \kappa q^2 + \gamma q^4.
+ \]
+\end{eg}
+To actually perform the functional integral, first note that if $h \not= 0$, then we can complete the square so that the $h$ term goes away. So we may assume $h = 0$. We then have
+\begin{align*}
+ Z_{\mathrm{TOT}} &= \int \left[ \prod_{\mathbf{q}}\splus \;\d \phi_\mathbf{q}\right] e^{-\beta F\{\phi_\mathbf{q}\}}\\
+ &= \prod_\mathbf{q}\splus \int \d \phi_\mathbf{q}\;e^{-|\phi_\mathbf{q}|^2 G(q)}
+\end{align*}
+Each individual integral can be evaluated as
+\[
+ \int \d \phi_\mathbf{q}\;e^{-|\phi_\mathbf{q}|^2 G(q)} = \int \rho \;\d \rho \;\d \theta\; e^{-G(q) \rho^2} = \frac{\pi}{G(q)},
+\]
+where $\phi_\mathbf{q} = \rho e^{i\theta}$. So we find that
+\[
+ Z_{\mathrm{TOT}} = \prod_\mathbf{q}\splus \frac{\pi}{G(q)},
+\]
+and so
+\[
+ \beta F_T = -\log Z_T = \sum_\mathbf{q}\splus \log \frac{G(q)}{\pi}.
+\]
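+
+The only real computation above is the Gaussian mode integral. As a quick numerical sanity check (a minimal sketch; the cutoff $L$ and grid are arbitrary), we can verify $\int \d \phi_\mathbf{q}\; e^{-|\phi_\mathbf{q}|^2 G} = \pi/G$ by writing $\phi_\mathbf{q} = x + iy$:
```python
import numpy as np

def complex_gaussian_integral(G, L=8.0, n=4001):
    """Integrate exp(-G |phi|^2) over the complex plane by writing
    phi = x + i y; the integral factorizes into two real Gaussians."""
    x = np.linspace(-L, L, n)
    one_d = np.sum(np.exp(-G * x**2)) * (x[1] - x[0])
    return one_d**2

# each mode should contribute pi / G to the partition function
max_err = max(abs(complex_gaussian_integral(G) - np.pi / G)
              for G in (0.5, 1.0, 3.7))
```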
+We now take the large $V$ limit and replace the sum by an integral. Then we get
+\[
+ \beta F_T = \frac{1}{2} \frac{V}{(2\pi)^d} \int^{q_{max}} \d \mathbf{q}\;\log \frac{G(q)}{\pi}.
+\]
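+
+The replacement of $\frac{1}{V}\sum_\mathbf{q}$ by $\int \frac{\d \mathbf{q}}{(2\pi)^d}$ can be illustrated in $d = 1$, where a box of size $V$ has modes $q = 2\pi n/V$ (the summand and cutoff below are arbitrary illustrative choices):
```python
import numpy as np

def h(q):                       # an arbitrary smooth summand
    return 1.0 / (1.0 + q**2)

def mode_sum(V, q_max):
    """(1/V) * sum over the discrete box modes q = 2 pi n / V, |q| <= q_max."""
    n_max = int(q_max * V / (2 * np.pi))
    n = np.arange(-n_max, n_max + 1)
    return np.sum(h(2 * np.pi * n / V)) / V

# continuum limit: (1/2pi) * integral of h over [-q_max, q_max] = arctan(q_max)/pi
V, q_max = 200.0, 10.0
discrepancy = abs(mode_sum(V, q_max) - np.arctan(q_max) / np.pi)
```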
+There are many quantities we can compute from the free energy.
+\begin{eg}
+ The \term{structure factor} is defined to be
+ \[
+ S(\mathbf{k}) = \bra \phi_\mathbf{k} \phi_{-\mathbf{k}}\ket = \frac{1}{Z_T} \int \phi_\mathbf{k} \phi_{-\mathbf{k}} e^{-\sum_\mathbf{q}\splus \phi_\mathbf{q} \phi_{-\mathbf{q}} G(q)} \prod_\mathbf{q}\splus \;\d \phi_\mathbf{q}.
+ \]
+ We see that this is equal to
+ \[
+ -\frac{1}{Z_T} \frac{\partial Z_T}{\partial G(\mathbf{k})} = - \frac{\partial \log Z_T}{\partial G(\mathbf{k})} = \frac{1}{G(k)}.
+ \]
+ We could also have done this explicitly using the product expansion.
+
+ This $S(k)$ is measured in scattering experiments. In our previous example, for small $k$ and $\kappa > 0$, we have
+ \[
+ S(k) = \frac{1}{a + \kappa k^2 + \gamma k^4} \approx \frac{a^{-1}}{1 + k^2 \xi^2},\quad
+ \xi = \sqrt{\frac{\kappa}{a}},
+ \]
+ where $\xi$ is the \term{correlation length}. We can return to real space by
+ \begin{align*}
+ \bra \phi^2(\mathbf{r})\ket &= \frac{1}{V} \left\bra \int |\phi(\mathbf{r})|^2 \;\d \mathbf{r}\right\ket \\
+ &= \frac{1}{V} \sum_\mathbf{q} \bra |\phi_\mathbf{q}|^2\ket\\
+ &= \frac{1}{(2\pi)^d} \int^{q_{max}} \frac{\d \mathbf{q}}{a + \kappa q^2 + \gamma q^4}.
+ \end{align*}
+\end{eg}
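+
+We can check $S(k) = 1/G(k)$ by direct sampling: under the weight $e^{-G|\phi_\mathbf{k}|^2}$, the real and imaginary parts of $\phi_\mathbf{k}$ are independent Gaussians of variance $1/2G$. A minimal Monte Carlo sketch (the values of $G$ are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_S(G, n=1_000_000):
    """Estimate S(k) = <|phi_k|^2> under the weight exp(-G |phi_k|^2)."""
    sd = np.sqrt(1.0 / (2.0 * G))     # variance of each real component is 1/(2G)
    phi = rng.normal(0, sd, n) + 1j * rng.normal(0, sd, n)
    return np.mean(np.abs(phi)**2)

# relative error of the estimate against S(k) = 1/G(k)
max_rel_err = max(abs(sampled_S(G) * G - 1.0) for G in (0.5, 2.0))
```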
+
+\section{The variational method}
+\subsection{The variational method}
+The variational method is a method to estimate the partition function
+\[
+ e^{-\beta F_{\mathrm{TOT}}} = \int e^{-\beta F[\phi]} \;\D [\phi]
+\]
+when $F$ is not Gaussian. To simplify notation, we will set $\beta = 1$. We then want to estimate
+\[
+ e^{-F_{\mathrm{TOT}}} = \int e^{-F[\phi]}\;\D [\phi].
+\]
+% What can we do when our free energy is not quadratic? What we previously did was to do mean field theory, but we know this fails near the critical point. In Statistical Field Theory, we used renormalization group methods. Here we go for something in between, namely the variational method. In condensed matter physics, this is known as \term{Hartree theory}, and in QFT it is known as ``\term{1-loop self consistency}''. This theory handles large Gaussian fluctuations well, and we shall use it to study smectic liquid crystals.
+We now make a notation change, where we write $F_{\mathrm{TOT}}$ as $F$, and $F[\phi]$ as $H[\phi]$ instead, called \term{the effective Hamiltonian}. In this notation, we write
+\[
+ e^{-F} = \int e^{-H[\phi]} \;\D [\phi].
+\]
+The idea of the variational method is to find some upper bounds on $F$ in terms of path integrals we \emph{can} do, and then take the best upper bound as our approximation to $F$.
+
+Thus, we introduce a \term{trial Hamiltonian} $H_0[\phi]$, and similarly define
+\[
+ e^{-F_0} = \int e^{-H_0[\phi]} \;\D [\phi].
+\]
+We can then write
+\[
+ e^{-F} = \frac{e^{-F_0}}{\int e^{-H_0}\;\D [\phi]} \int e^{-H_0} e^{-(H - H_0)}\;\D [\phi] = e^{-F_0} \bra e^{-(H - H_0)} \ket_0,
+\]
+where the subscript $0$ denotes the average over the trial distribution. Taking the logarithm, we end up with
+\[
+ F = F_0 - \log \bra e^{-(H - H_0)}\ket_0.
+\]
+So far, everything is exact. It would be nice if we could move the logarithm inside the expectation to cancel out the exponential. The result won't be exactly equal, but since $\log$ is concave, i.e.
+\[
+ \log (\alpha A + (1 - \alpha) B) \geq \alpha \log A + (1 - \alpha) \log B,
+\]
+Jensen's inequality tells us
+\[
+ \log \bra Y \ket_0 \geq \bra \log Y \ket_0.
+\]
+Applying this to our situation gives us an inequality
+\[
+ F \leq F_0 - \bra H_0 - H\ket_0 = F_0 - \bra H_0\ket_0 + \bra H\ket_0 = -S_0 + \bra H\ket_0,
+\]
+where $S_0$ is the entropy of the trial distribution, since $F_0 = \bra H_0\ket_0 - S_0$ (recall $\beta = 1$).
+This is the \term{Feynman--Bogoliubov inequality}.
+
+To use this, we have to choose the trial Hamiltonian $H_0$ to be simple enough to actually do calculations with (i.e.\ Gaussian), but we include variational parameters in $H_0$. We then minimize the quantity $F_0 - \bra H_0\ket_0 + \bra H\ket_0$ over our variational parameters, and this gives us an upper bound on $F$. We then take this to be our best estimate of $F$. If we are brave, we can take this minimizing $H_0$ as an approximation of $H$, at least for some purposes.
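+
+A zero-dimensional toy example shows the whole machinery at work: take a single real variable with $H(x) = \frac{a}{2}x^2 + \frac{b}{4}x^4$ and Gaussian trial $H_0(x) = \frac{J}{2}x^2$, so that $\bra x^2\ket_0 = 1/J$ and $\bra x^4\ket_0 = 3/J^2$. Every trial $J$ then gives an upper bound on the true $F$, which here we can compute by quadrature (parameter values are illustrative):
```python
import numpy as np

a, b = 1.0, 0.5                          # illustrative couplings

def F_exact():
    """F = -log Z by direct quadrature (no path integral needed in 0d)."""
    x = np.linspace(-6, 6, 4001)
    Z = np.sum(np.exp(-(0.5 * a * x**2 + 0.25 * b * x**4))) * (x[1] - x[0])
    return -np.log(Z)

def F_bound(J):
    """Feynman--Bogoliubov bound F_0 - <H_0>_0 + <H>_0 for the trial J x^2/2."""
    F0 = 0.5 * np.log(J / (2 * np.pi))   # -log sqrt(2 pi / J)
    return F0 - 0.5 + 0.5 * a / J + 0.75 * b / J**2

Js = np.linspace(0.5, 5.0, 500)
gap = F_bound(Js).min() - F_exact()      # best bound minus true free energy
```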
+
+\subsection{Smectic liquid crystals}
+We use this to talk about the isotropic to smectic transition in liquid crystals. The molecules involved often have two distinct segments. For example, we may have soap molecules that look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (3, 0) ellipse (0.5 and 0.15);
+ \draw [decorate, decoration=snake] (0.5, 0) -- (2.5, 0);
+ \end{tikzpicture}
+\end{center}
+The key property of soap molecules is that the tail hates water while the head likes water. So we expect these molecules to group together like
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \foreach \r in {0,15,...,345} {
+ \begin{scope}[rotate=\r]
+ \draw (3, 0) ellipse (0.5 and 0.15);
+ \draw [decorate, decoration={snake, amplitude=1.5, segment length=5}] (0.5, 0) -- (2.5, 0);
+ \end{scope}
+ }
+ \end{tikzpicture}
+\end{center}
+In general, we can imagine our molecules look like
+\begin{center}
+ \begin{tikzpicture}[rotate=90, scale=0.5]
+ \fill circle [radius=0.3];
+ \draw (0, -1) circle [radius=0.3];
+ \draw (0, -0.3) -- (0, -0.7);
+ \end{tikzpicture}
+\end{center}
+and like attracts like. As in the binary fluid, we shall assume the two ends are symmetric, so $U_{AA} = U_{BB} \not= U_{AB}$. If we simply want the different parts to stay away from each other, then we can have a configuration that looks like
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \foreach \x in {0,...,10} {
+ \foreach \y in {0,4} {
+ \begin{scope}[shift={(\x, \y)}]
+ \fill circle [radius=0.3];
+ \draw (0, -1) circle [radius=0.3];
+ \draw (0, -2) circle [radius=0.3];
+ \fill (0, -3) circle [radius=0.3];
+ \draw (0, -0.3) -- (0, -0.7);
+ \draw (0, -2.3) -- (0, -2.7);
+ \end{scope}
+ }
+ }
+ \draw [->] (12, -3) -- (12, 3) node [pos=0.5, right] {$z$};
+ \end{tikzpicture}
+\end{center}
+In general, we expect that there is such an order along the $z$ direction, as indicated, while there is no restriction on the alignments in the other directions. So the system is a lattice in the $z$ direction, and a fluid in the remaining two directions. This is known as a \term{smectic liquid crystal}, and is also known as the \term{lamellar phase}. This is an example of \term{microphase separation}.
+
+As before, we let $\phi$ be a coarse grained relative density. The above ordered phase would then look like
+\[
+ \phi(\mathbf{x}) = \cos q_0 z
+\]
+for some $q_0$ that comes from the molecular length. If our system is not perfectly ordered, then we may expect it to look roughly like $A \cos q_0 z$ for some $A$.
+
+We again use the Landau--Ginzburg model, which, in our old notation, has
+\[
+ \beta F = \int \left(\frac{a}{2} \phi^2 + \frac{b}{4} \phi^4 + \frac{\kappa}{2} (\nabla \phi)^2 + \frac{\gamma}{2} (\nabla^2 \phi)^2\right)\;\d \mathbf{r}.
+\]
+If we write this in Fourier space, then we get
+\[
+ \beta F = \frac{1}{2} \sum_{\mathbf{q}} (a + \kappa q^2 + \gamma q^4) \phi_{\mathbf{q}} \phi_{-\mathbf{q}} + \frac{b}{4V} \sum_{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3} \phi_{\mathbf{q}_1} \phi_{\mathbf{q}_2} \phi_{\mathbf{q}_3} \phi_{-(\mathbf{q}_1 + \mathbf{q}_2 + \mathbf{q}_3)}.
+\]
+Notice that the quartic term results in the rather messy sum at the end. For the iso-smectic transition, we choose $\kappa < 0, \gamma > 0$.
+
+Again for simplicity, we first consider the case where $b = 0$. Then this is a Gaussian model with
+\[
+ G(q) = a + \kappa q^2 + \gamma q^4.
+\]
+Varying $a$ gives a linear shift in $G(q)$. As we change $a$, we get multiple different curves.
+\begin{center}
+ \begin{tikzpicture}[xscale=3]
+ \draw [->] (0, 0) -- (1.3, 0) node [right] {$q$};
+ \draw [->] (0 ,0) -- (0, 3) node [above] {$G(q)$};
+
+ \draw [domain=0:1.3] plot [smooth] (\x, {1 - \x^2 + \x^4});
+ \draw [domain=0:1.3, thick, mred] plot [smooth] (\x, {0.25 - \x^2 + \x^4});
+ \draw [domain=0:1.3] plot [smooth] (\x, {1.75 - \x^2 + \x^4});
+ \node [right, mred] at (1.3, 1.4161) {$a = a_c$};
+
+ \node [circ] at (0.707, 0) {};
+ \node [below] at (0.707, 0) {$q_0$};
+ \end{tikzpicture}
+\end{center}
+Thus, as $a$ decreases, $S(q) = \bra |\phi_\mathbf{q}|^2 \ket = \frac{1}{G(q)}$ blows up at some finite
+\[
+ q = q_0 = \sqrt{\frac{-\kappa}{2\gamma}},\quad a_c = \frac{\kappa^2}{4 \gamma}.
+\]
+We should take this blowing up as saying that the $|\mathbf{q}| = q_0$ states are highly desired, and this results in an ordered phase. Note that \emph{any} $\mathbf{q}$ with $|\mathbf{q}| = q_0$ is highly desired. When the system actually settles to an ordered state, it has to \emph{pick} one such $\mathbf{q}$ and let the system align in that direction. This is \term{spontaneous symmetry breaking}.
+
+It is convenient to complete the square, and expand $G$ about $q = q_0$ and $a = a_c$. Then we have
+\[
+ G(q) = \tau + \alpha(q - q_0)^2,
+\]
+where
+\[
+ \tau = a - a_c,\quad \alpha = \frac{1}{2} G''(q_0) = - 2\kappa.
+\]
+Then the transition we saw above happens when $\tau = 0$.
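+
+These formulas are easy to verify numerically by minimizing $G(q) = a + \kappa q^2 + \gamma q^4$ on a grid (the values of $\kappa$ and $\gamma$ are arbitrary, with $\kappa < 0$ and $\gamma > 0$):
```python
import numpy as np

kappa, gamma = -2.0, 1.0

q = np.linspace(0.0, 3.0, 300_001)
G0 = kappa * q**2 + gamma * q**4       # G(q) - a; a only shifts the curve

q0_num = q[np.argmin(G0)]              # minimizing wavenumber
a_c_num = -G0.min()                    # a_c makes the minimum of G touch zero
alpha_num = 0.5 * (2 * kappa + 12 * gamma * q0_num**2)   # G''(q0)/2

q0_err = abs(q0_num - np.sqrt(-kappa / (2 * gamma)))
a_c_err = abs(a_c_num - kappa**2 / (4 * gamma))
alpha_err = abs(alpha_num - (-2 * kappa))
```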
+
+We now put back the quartic term. We first do this with mean field theory, and then later return to the variational method.
+\subsubsection*{Mean Field Theory}
+In mean field theory, it is easier to work in real space. We look for a single field configuration that minimizes $F$. As suggested before, we try a solution of the form
+\[
+ \phi = A \cos q_0 z,
+\]
+which is smectic along $z$. We can then evaluate the free energy (per unit volume) to be
+\[
+ \frac{\beta F}{V} = \beta \overline{F[\phi]} = \frac{a}{2} \overline{\phi^2} + \frac{\kappa}{2} \overline{(\nabla \phi)^2} + \frac{\gamma}{2} \overline{(\nabla^2 \phi)^2} + \frac{b}{4} \overline{\phi^4}
+\]
+where the bar means we average over one period of the periodic structure. It is an exercise to directly compute
+\[
+ \overline{\phi^2} = \frac{1}{2} A^2,\quad \overline{(\nabla \phi)^2} = \frac{1}{2} A^2 q_0^2,\quad \overline{(\nabla^2 \phi)^2} =\frac{1}{2} A^2 q_0^4,\quad \overline{\phi^4} = \frac{3}{8} A^4.
+\]
+This gives
+\[
+ \frac{\beta F}{V} = \frac{1}{4} \left[aA^2 + \kappa A^2 q_0^2 + \gamma A^2 q_0^4 + \frac{3b}{8} A^4\right] = \frac{1}{4} \left[ A^2 \underbrace{(a - a_c)}_\tau + \frac{3b}{8} A^4\right].
+\]
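+
+The period averages used here can be checked by averaging $\phi = A \cos q_0 z$ over one period numerically (the values of $A$ and $q_0$ are arbitrary):
```python
import numpy as np

A, q0 = 1.3, 2.0
z = np.linspace(0.0, 2 * np.pi / q0, 100_001)[:-1]   # one period, open at the end
phi = A * np.cos(q0 * z)
dphi = -A * q0 * np.sin(q0 * z)        # d(phi)/dz
d2phi = -A * q0**2 * np.cos(q0 * z)    # d^2(phi)/dz^2

errors = [
    abs(np.mean(phi**2) - 0.5 * A**2),
    abs(np.mean(dphi**2) - 0.5 * A**2 * q0**2),
    abs(np.mean(d2phi**2) - 0.5 * A**2 * q0**4),
    abs(np.mean(phi**4) - 0.375 * A**4),
]
```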
+Note that $q_0$ is fixed by the system, as we found above, while $A$ is the amplitude of the fluctuation, which we get to choose. Plotting this, we get a familiar graph
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.3, 0) -- (2.3, 0) node [right] {$A$};
+ \draw [->] (0, -1) -- (0, 3) node [above] {$\beta F/V$};
+ \draw [domain=-2:2, mblue, thick] plot (\x, {(\x)^2/3 + (\x)^4/10});
+ \draw [domain=-2:2, mred, thick] plot [smooth] (\x, {-(\x)^2 + (\x)^4/3});
+
+ \node [right, mblue] at (1.8, 2.13) {$\tau > 0$};
+ \node [right, mred] at (1.9, 0.734) {$\tau < 0$};
+ \node [left, white] at (-1.8, 2.13) {$\tau > 0$};
+ \node [left, white] at (-1.9, 0.734) {$\tau < 0$};
+ \end{tikzpicture}
+\end{center}
+If $\tau > 0$, then the optimum solution is given by $A = 0$. Otherwise, we should pick $A \not= 0$. Observe that as we slowly reduce $\tau$ across $0$, the minimum varies \emph{continuously} with $\tau$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$a$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$|A|$};
+
+ \draw [mblue, thick] (3.9, 0) -- (2, 0);
+ \draw [domain=0:2, mblue, thick, samples=30] plot [smooth] (\x, {sqrt(2 - \x)});
+
+ \node [below] at (2, 0) {$a_c$};
+ \end{tikzpicture}
+\end{center}
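+
+Indeed, minimizing $\frac{1}{4}[\tau A^2 + \frac{3b}{8}A^4]$ gives $A = 0$ for $\tau > 0$ and $A = \sqrt{-4\tau/(3b)}$ for $\tau < 0$, which vanishes continuously as $\tau \to 0^-$. A quick grid-based check (the value of $b$ is arbitrary):
```python
import numpy as np

b = 1.0
A = np.linspace(0.0, 4.0, 400_001)

def A_min(tau):
    """Grid-minimize the mean field free energy density over amplitudes A."""
    f = 0.25 * (tau * A**2 + 0.375 * b * A**4)
    return A[np.argmin(f)]

disordered = A_min(0.5)                          # tau > 0: stays at A = 0
ordered_err = max(abs(A_min(t) - np.sqrt(-4 * t / (3 * b)))
                  for t in (-0.3, -1.0, -2.0))   # tau < 0: finite amplitude
near_zero = A_min(-1e-6)                         # just below the transition
```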
+\subsubsection*{Variational method}
+We now consider the variational theory. In the notation we were using previously, we have
+\[
+ H = \sum_\mathbf{q} \splus \phi_\mathbf{q} \phi_{-\mathbf{q}} G(q) + \frac{b}{4V} \sum_{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3} \phi_{\mathbf{q}_1} \phi_{\mathbf{q}_2} \phi_{\mathbf{q}_3} \phi_{-(\mathbf{q}_1 + \mathbf{q}_2 + \mathbf{q}_3)}.
+\]
+Our trial $H_0$ is
+\[
+ H_0 = \sum_\mathbf{q} \splus \phi_\mathbf{q} \phi_{-\mathbf{q}} J(q).
+\]
+Since this is a Gaussian model, we know that
+\[
+ F_0 = \sum_{\mathbf{q}} \splus \log \frac{J(q)}{\pi}.
+\]
+To use our inequality, we need to evaluate our other two bits. We have
+\[
+ \bra H_0\ket_0 = \sum_{\mathbf{q}}\splus \bra \phi_\mathbf{q} \phi_{-\mathbf{q}}\ket_0 J(q).
+\]
+We already calculated
+\[
+ \bra \phi_{\mathbf{q}} \phi_{-\mathbf{q}} \ket_0 = \frac{1}{J(q)}.
+\]
+Thus, we have
+\[
+ \bra H_0 \ket_0 = \sum_{\mathbf{q}} \splus 1.
+\]
+Here it is clear that we must impose a cutoff on $\mathbf{q}$. We can think of this $1$ as the equipartition theorem.
+
+We can also compute
+\[
+ \bra H\ket_0 = \sum_\mathbf{q} \splus \frac{1}{J(q)} G(q) + \frac{b}{4V} \underbrace{\sum_{\mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3}\bra \phi_{\mathbf{q}_1} \phi_{\mathbf{q}_2} \phi_{\mathbf{q}_3} \phi_{-(\mathbf{q}_1 + \mathbf{q}_2 + \mathbf{q}_3)}\ket_0}_{U}.
+\]
+In the Gaussian model, each $\phi_\mathbf{q}$ is a zero mean Gaussian random variable, and these have certain nice properties. \term{Wick's theorem} tells us we have
+\[
+ \bra abcd \ket_0 = \bra ab\ket_0 \bra cd\ket_0 + \bra ac\ket_0 \bra bd\ket_0 + \bra ad\ket_0 \bra bc\ket_0.
+\]
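+
+Wick's theorem can be spot-checked by Monte Carlo for any zero-mean Gaussian vector $(a, b, c, d)$; the covariance below is an arbitrary choice:
```python
import numpy as np

rng = np.random.default_rng(1)

L = np.array([[1.0, 0.0, 0.0, 0.0],    # Cholesky-style factor; C = L L^T
              [0.5, 1.0, 0.0, 0.0],
              [0.2, 0.3, 1.0, 0.0],
              [0.1, 0.4, 0.2, 1.0]])
C = L @ L.T                            # covariance <x_i x_j>
x = rng.standard_normal((1_000_000, 4)) @ L.T

lhs = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])          # <abcd>
rhs = C[0, 1]*C[2, 3] + C[0, 2]*C[1, 3] + C[0, 3]*C[1, 2]     # sum of pairings
wick_err = abs(lhs - rhs)
```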
+Applying this, together with the result
+\[
+ \bra \phi_{\mathbf{q}_1} \phi_{\mathbf{q}_2}\ket_0 = \bra |\phi_{\mathbf{q}_1}|^2 \ket_0 \delta_{\mathbf{q}_1, - \mathbf{q}_2},
+\]
+we obtain
+\[
+ U = 3 \left[\sum_\mathbf{q} \bra |\phi_\mathbf{q}|^2\ket_0\right]^2 = 12 \left[\sum_\mathbf{q} \splus \frac{1}{J(q)}\right]^2.
+\]
+Thus, we have
+\[
+ \bra H\ket_0 = \sum_{\mathbf{q}} \splus \frac{G(q)}{J(q)} + \frac{3b}{V} \left(\sum_\mathbf{q} \splus \frac{1}{J(q)}\right)^2.
+\]
+Adding everything up, our upper bound $\tilde{F} = F_0 - \bra H_0\ket_0 + \bra H\ket_0$ is
+\[
+ \tilde{F} = \sum_{\mathbf{q}} \splus \left(\log \frac{J(q)}{\pi} - 1 + \frac{G(q)}{J(q)}\right) + \frac{3b}{V}\left(\sum_\mathbf{q} \splus \frac{1}{J(q)}\right)^2.
+\]
+We minimize over $J(q)$ by solving
+\[
+ \frac{\partial \tilde{F}}{\partial J(q)} = 0
+\]
+for all $\mathbf{q}$. Differentiating, we obtain
+\[
+ \frac{1}{J(q)} - \frac{G(q)}{J(q)^2} - \frac{6b}{V J(q)^2} \sum_{\mathbf{q}'} \splus \frac{1}{J(q')} = 0.
+\]
+Multiplying through by $J^2$, we get
+\[
+ J(q) = G(q) + \frac{6b}{V} \sum_{\mathbf{q}'} \splus \frac{1}{J(q')}.
+\]
+For large $V$, we can replace the sum by an integral, and we have
+\[
+ J(q) = G(q) + \frac{3b}{(2\pi)^d} \int \frac{\d \mathbf{q}'}{J(\mathbf{q}')}.
+\]
+It is very important that once we have fixed $J(\mathbf{q})$, the second term is a constant. Writing
+\[
+ C = \frac{2}{V} \sum_{\mathbf{q}'} \splus \frac{1}{J(q')} = \frac{1}{(2\pi)^d} \int \frac{\d \mathbf{q}'}{J(\mathbf{q}')},
+\]
+we can then say the minimum is given by $J(q) = G(q) + 3bC$. Thus, solving for $J(q)$ is equivalent to finding a $C$ such that
+\[
+ C = \frac{1}{(2\pi)^d} \int \frac{\d \mathbf{q}'}{G(q') + 3bC}.
+\]
+This is a \term{self-consistency equation} for $C$ (and hence $J$).
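+
+Such self-consistency equations are usually solved by iteration. As a minimal sketch (not the smectic $G(q)$; a toy $d = 1$ example with $G(q) = a + q^2$, a cutoff $q_{max}$, and illustrative parameter values):
```python
import numpy as np

a, b, q_max = 1.0, 0.5, 10.0
q = np.linspace(-q_max, q_max, 20_001)
dq = q[1] - q[0]

def rhs(C):
    """One-loop self-consistency: C = (1/2pi) * integral dq / (G(q) + 3bC),
    with the toy propagator G(q) = a + q^2."""
    return np.sum(1.0 / (a + q**2 + 3 * b * C)) * dq / (2 * np.pi)

C = 0.0
for _ in range(200):            # simple fixed-point iteration; contracts here
    C = rhs(C)

residual = abs(C - rhs(C))
```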
+
+There is a very concrete interpretation of $C$. Recall that $\bra \phi^2\ket$ is the average value of $\phi^2$ at some point $\mathbf{r}$ (which is independent of $\mathbf{r}$). So we can compute
+\[
+ \bra \phi(\mathbf{r})^2\ket = \frac{1}{V} \int \bra \phi(\mathbf{r})^2\ket \;\d \mathbf{r} = \frac{1}{V} \sum_\mathbf{q} \bra |\phi_\mathbf{q}|^2 \ket = \frac{1}{V} \sum_{\mathbf{q}} \frac{1}{J(q)}.
+\]
+Thus, what our computation above amounts to is that we replaced $G(q)$ by $G(q) + 3b \bra \phi^2\ket$.
+
+A good way to think about this is that we are approximating the $\phi^4$ term in the free energy by
+\[
+ \frac{b}{4} \phi^4 \approx \frac{3b}{2} \bra \phi^2\ket \phi^2.
+\]
+We have to pick a value so that this approximation is consistent. We view the $\bra \phi^2\ket$ above as just another constant, so that the free energy is now a Gaussian. We can compute the expectation of $\phi^2$ using this free energy, and then we find that
+\[
+ \bra \phi^2\ket = \frac{1}{(2\pi)^d} \int \frac{\d \mathbf{q}'}{G(q') + 3b \bra \phi^2\ket}.
+\]
+This is the self-consistency equation as above.
+
+We could have done the approximation above without having to go through the variational method, but the factor of $\frac{3b}{2}$ would then no longer be obvious.
+
+%The factor of $\frac{3b}{2}$ is motivated by the previous calculation. Using this, we then treat $\bra\phi^2\ket$ as a constant and do our path integral calculation as before. Note that in principle, $\bra \phi^2\ket$ is a function of position, namely the expected value of $\phi^2$ at position $\mathbf{r}$. But this, of course, does not depend on $\mathbf{r}$. So we have
+%\[
+% \bra \phi^2\ket = \bra \phi(\mathbf{r})^2\ket = \frac{1}{V} \int \bra \phi(\mathbf{r})^2\ket \;\d \mathbf{r} = \frac{1}{V} \sum_q \bra |\phi_\mathbf{q}|^2 \ket = \frac{1}{V} \sum_{\mathbf{q}} \frac{1}{G(q) + \bra \phi^2\ket},
+%\]
+%where the last equality follows from our previous calculations. Finding a value of $\bra \phi^2\ket$ that makes this equation consistent is exactly the same as solving for $C$ above. This $C$ has a very concrete interpretation.
+%Informally, there is another way of viewing these types of calculations. This is a self-consistent treatment of the quartic. This is equivalent to making the following ad hoc approximation inside the Landau--Ginzburg theory.
+%\[
+% \int \left\{ \frac{a}{2} \phi^2 + \frac{b}{4} \phi^4\right\} \;\d \mathbf{r} \mapsto \int \left\{ \frac{a}{2} \phi^2 + \frac{3b}{2} \bra \phi^2 \ket \phi^2\right\}\;\d \mathbf{r}'
+%\]
+%While $\bra \phi^2\ket = \bra \phi(\mathbf{r})^2\ket$ is the average over all ensembles at a fixed point $\mathbf{r}$, it is equal to what we get by averaging over all points, so
+%\[
+% \bra \phi(\mathbf{r})^2\ket = \frac{1}{V} \int \bra \phi(\mathbf{r})^2\ket \;\d \mathbf{r} = \frac{1}{V} \sum_q \bra |\phi_\mathbf{q}|^2 \ket = \frac{1}{V} \sum_{\mathbf{q}} \frac{1}{J(\mathbf{q})}.
+%\]
+%The only perhaps sightly tricky part if we were to just do this ad hoc is to get the factor of $\frac{3b}{2}$ right.
+
+To solve this self-consistency equation, or at least understand it, it is convenient to think in terms of $\tau$ instead. If we write our $G(q)$ as
+\[
+ G(q) = \tau + \alpha(q - q_0)^2,
+\]
+then we have
+\[
+ J(q) = \bar{\tau} + \alpha (q - q_0)^2,\quad \bar{\tau} = \tau + 3b \bra \phi^2\ket.
+\]
+The self-consistency equation now states
+\[
+ \bar{\tau} = \tau + \frac{3b}{(2\pi)^d} \int \frac{\d^d \mathbf{q}}{\bar{\tau} + \alpha(q - q_0)^2}.
+\]
+We are interested in what happens near the critical point. In this case, we expect $\bar{\tau}$ to be small, and hence the integrand is highly peaked near $q = q_0$. In $d = 3$, we can make the approximation
+\begin{multline*}
+ \frac{3b}{(2\pi)^3} \int \frac{\d^3 \mathbf{q}}{\bar{\tau} + \alpha (q - q_0)^2} = \frac{3b}{2\pi^2} \int_0^\infty \frac{q^2\; \d q}{\bar{\tau} + \alpha (q - q_0)^2}\\
+ \approx \frac{3bq_0^2}{2\pi^2} \int_0^\infty \frac{\d q}{\bar{\tau} + \alpha (q - q_0)^2}.
+\end{multline*}
+While there is still a nasty integral to evaluate, we can make the substitution $q - q_0 = \sqrt{\bar{\tau}/\alpha}\, y$ to bring the dependence on $\bar{\tau}$ outside the integral, and obtain
+\[
+ \bar{\tau} = \tau + \frac{sb}{\sqrt{\bar{\tau}}},\quad s = \frac{3q_0^2}{2\pi^2} \frac{1}{\sqrt{\alpha}} \int_0^\infty \frac{\d y}{1 + y^2} \sim \frac{q_0^2}{\sqrt{\alpha}}.
+\]
+The precise value of $s$ does not matter. The point is that it is constant independent of $\bar{\tau}$. From this, we see that $\bar{\tau}$ can never reach zero! This means we can never have a continuous transition. At small $\bar{\tau}$, we will have large fluctuations, and the quantity
+\[
+ \bra |\phi_\mathbf{q}|^2 \ket = S(q) = \frac{1}{\bar{\tau} + \alpha(q - q_0)^2}
+\]
+becomes large but finite. Note that since $\bar{\tau}$ is finite, this sets some sort of length scale where all $q$ with $|q - q_0| \lesssim \sqrt{\bar{\tau}/\alpha}$ have large amplitudes.
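+
+We can confirm numerically that $\bar{\tau} = \tau + sb/\sqrt{\bar{\tau}}$ always has a strictly positive solution, no matter how negative $\tau$ is (the constants $s$ and $b$ below are arbitrary):
```python
import numpy as np

s, b = 1.0, 0.5

def tau_bar(tau):
    """Solve t = tau + s*b/sqrt(t) by bisection; g(t) = t - tau - s*b/sqrt(t)
    is strictly increasing on t > 0, so the root is unique."""
    lo, hi = 1e-12, abs(tau) + 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - tau - s * b / np.sqrt(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

taus = [1.0, 0.0, -1.0, -5.0, -50.0]
sols = [tau_bar(t) for t in taus]
min_sol = min(sols)                       # stays positive even for tau << 0
max_residual = max(abs(tb - t - s * b / np.sqrt(tb))
                   for t, tb in zip(taus, sols))
```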
+
+We can think of the above computation as looking at neighbourhoods of the $\phi = 0$ vacuum, as that is what the variational method sees. Since $\bar{\tau}$ never vanishes in this regime, we would expect a discontinuous isotropic--smectic transition that happens ``far away'' from this vacuum.
+
+Consider fluctuations about an ordered state, $\phi = \phi_0 + \delta \phi$, where
+\[
+ \phi_0 = A \cos q_0 z.
+\]
+We can do computations similar to what we did above by varying $\delta \phi$, and obtain a $\tilde{F}(A)$. The global minimum over all $A$ then gives us the ``true'' ground state. Instead of doing that, which is quite messy, we use a heuristic version. For $A = 0$, we had
+%
+%A formal approach to this would be to set
+%\[
+% H_0 = \sum_{\mathbf{q}} \splus J(q) + hA,
+%\]
+%where we think of $h$ as a Lagrangian multiplier for $A$. We minimize $\tilde{F}(A)$ over $J(q)$. Then $\tilde{F}(A)$ replaces
+%\[
+% F_{MFT}(A) = \frac{V}{4} \left[\tau A^2 + \frac{3b}{8} A^4\right].
+%\]
+%This is actually quite messy, so let us do a heuristic version instead. For $A = 0$, we had
+\[
+ \bar{\tau} = \tau + 3b \bra \overline{\phi(\mathbf{r})^2}\ket.
+\]
+For finite $A$, the quartic term now has an extra contribution, and we get
+\[
+ \bar{\tau} = \tau + \frac{sb}{\sqrt{\bar{\tau}}} + \frac{3b A^2}{2}.
+\]
+Compare this with mean field theory, where we have
+\[
+ F_{MFT}(A) = \frac{V}{4} \left[\tau A^2 + \frac{3b}{8} A^4\right].
+\]
+We see that for small $A$, the fluctuations are large, and mean field theory is quite off. For large $A$, the fluctuation terms are irrelevant and mean field theory is a good approximation. Thus, we get
+\begin{center}
+ \begin{tikzpicture}[xscale=3]
+ \draw [->] (0, 0) -- (2.3, 0) node [right] {$A$};
+ \draw [->] (0, -1) -- (0, 3) node [above] {$F$};
+ \draw [domain=0:2.2, opacity=0.5, thick] plot [smooth] (\x, {-(\x)^2 + (\x)^4/3});
+ \draw [domain=0:2.2, mred, thick] plot [smooth] (\x, {(-1 + 10 * exp(-2.3 * \x)) * (\x)^2 - 5 * \x^3 * exp(-2.5 * \x) + (\x)^4/3});
+ \draw [domain=0:2.2, mred, thick] plot [smooth] (\x, {(-1 + 15 * exp(-2.3 * \x)) * (\x)^2 - 5 * \x^3 * exp(-2.5 * \x) + (\x)^4/3});
+ \draw [domain=0:2.2, mred, thick] plot [smooth] (\x, {(-1 + 20 * exp(-2.3 * \x)) * (\x)^2 - 5 * \x^3 * exp(-2.5 * \x) + (\x)^4/3});
+
+ \node [opacity=0.5, below] at (1.2247, -0.75) {MFT};
+ \end{tikzpicture}
+\end{center}
+
+We see that as $\tau$ decreases, the minimum discontinuously jumps from $A = 0$ to a finite value of $A$. Mean field theory is approached at large $A$.
+
+We can then plot the minimum value of $A$ as a function of $\tau$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [domain=0:-4, opacity=0.5, thick, samples=50] plot [smooth] (\x, {1.8 * sqrt(- \x / 2)});
+ \node [anchor = south west, opacity=0.5] at (-3.9, 2.514) {MFT};
+
+ \draw [->] (-4, 0) -- (4, 0) node [right] {$\tau$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$|A|$};
+
+ \draw [mblue, thick] (3.9, 0) -- (-1.3, 0);
+ \draw [dashed] (-1.3, 0) node [below] {$\tau_c$} -- (-1.3, 0.8) -- (0, 0.8) node [right] {$A_c$};
+ \draw [mblue, thick] (-1.3, 0.8) .. controls (-2.2, 1.8) .. (-4, 2.53);
+
+ \draw (-3, -0.08) node [below] {$\tau_H$} -- (-3, 0.08);
+ \end{tikzpicture}
+\end{center}
+We have not calculated $A_c$ or $\tau_c$, but it can be done. We shall not do so here, and just state the result (Brazovskii, 1975):
+\[
+ \tau_c \simeq -(sb)^{2/3},\quad A_c \simeq s^{1/3} b^{-1/6}.
+\]
+It turns out the variational approach finally breaks down for $\tau < \tau_H \sim -s^{3/4} b^{1/2}$. We have $\tau_H \ll \tau_c$ if $b \ll \sqrt{s}$. The reason this breaks down is that at low enough temperatures, the fluctuations from the quartic terms become significant, and our Gaussian approximation falls apart.
+
+To figure out what $\tau_H$ is, we need to find leading corrections to $\tilde{F}$, as Brazovskii did. In general, there is no reason to believe $\tau_H$ is large or small. For example, this method breaks down completely for the Ising model, and is correct in no regime at all. In general, the self-consistent approach here is ad hoc, and one has to do some explicit error analysis to see if it is actually good.
+
+%To find $\tau_H$, we need to compute some leading corrections to $\tilde{F}$, as Brazovskii did. This is very rarely done. Most people don't bother to check how big the corrections to the approximation are. Thus, there is no reason to believe our theories are accurate. For example, if we do this for the Ising model, then everything goes wrong, because there is no regime where it is reasonable. So the self-consistent approach remains ad-hoc. However, in our case, it is asymptotically correct at small $b$.
+
+\subsubsection*{Brazovskii transition with cubic term}
+What happens when we have a cubic term? This would give an $A^3$ term at mean field level, which gives a discontinuous transition, but in a completely different way. We shall just state the results here, using a phase diagram with two parameters $\tau$ and $c$. In mean field theory, we get
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$c$};
+ \draw [->] (0, -2) -- (0, 2) node [right] {$\tau$};
+
+ \draw (0, 0) edge [bend right=10] (2.5, 1.8);
+ \draw (0, 0) edge [bend left=10] (2.5, -1.8);
+ \draw [fill=white] circle [radius=0.05];
+
+ \node at (1, 1.5) {I};
+ \node at (1, -1.5) {S};
+ \node at (2, 0.5) {H};
+ \end{tikzpicture}
+\end{center}
+where H is a hexagonal phase, which looks like
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[shift={(0.3, 0)}]
+ \draw ellipse (0.2 and 0.1);
+ \draw (0.2, 0) -- (0.2, -2);
+ \draw (-0.2, 0) -- (-0.2, -2);
+
+ \draw (-0.2, -2) arc (180:360:0.2 and 0.1);
+ \draw [dashed] (-0.2, -2) arc (180:0:0.2 and 0.1);
+ \end{scope}
+ \begin{scope}[shift={(1, -0.5)}]
+ \draw ellipse (0.2 and 0.1);
+ \draw (0.2, 0) -- (0.2, -2);
+ \draw (-0.2, 0) -- (-0.2, -2);
+
+ \draw (-0.2, -2) arc (180:360:0.2 and 0.1);
+ \draw [dashed] (-0.2, -2) arc (180:0:0.2 and 0.1);
+ \end{scope}
+ \begin{scope}[shift={(2, -0.5)}]
+ \draw ellipse (0.2 and 0.1);
+ \draw (0.2, 0) -- (0.2, -2);
+ \draw (-0.2, 0) -- (-0.2, -2);
+
+ \draw (-0.2, -2) arc (180:360:0.2 and 0.1);
+ \draw [dashed] (-0.2, -2) arc (180:0:0.2 and 0.1);
+ \end{scope}
+ \begin{scope}[shift={(2.7, 0)}]
+ \draw ellipse (0.2 and 0.1);
+ \draw (0.2, 0) -- (0.2, -2);
+ \draw (-0.2, 0) -- (-0.2, -2);
+
+ \draw (-0.2, -2) arc (180:360:0.2 and 0.1);
+ \draw [dashed] (-0.2, -2) arc (180:0:0.2 and 0.1);
+ \end{scope}
+
+ \begin{scope}[shift={(1, 0.5)}]
+ \draw ellipse (0.2 and 0.1);
+ \draw [path fading=south] (0.2, 0) -- (0.2, -0.8);
+ \draw [path fading=south] (-0.2, 0) -- (-0.2, -0.8);
+ \end{scope}
+ \begin{scope}[shift={(2, 0.5)}]
+ \draw ellipse (0.2 and 0.1);
+ \draw [path fading=south] (0.2, 0) -- (0.2, -0.8);
+ \draw [path fading=south] (-0.2, 0) -- (-0.2, -0.8);
+ \end{scope}
+ \begin{scope}[shift={(1.5, 0)}]
+ \draw ellipse (0.2 and 0.1);
+ \draw (0.2, 0) -- (0.2, -2);
+ \draw (-0.2, 0) -- (-0.2, -2);
+
+ \draw (-0.2, -2) arc (180:360:0.2 and 0.1);
+ \draw [dashed] (-0.2, -2) arc (180:0:0.2 and 0.1);
+ \end{scope}
+
+ \draw [gray](0.3, 0) -- (1, -0.5) -- (2, -0.5) -- (2.7, 0) -- (2, 0.5) -- (1, 0.5) -- cycle;
+ \end{tikzpicture}
+\end{center}
+where each cylinder contains a high concentration of a fixed end of the molecule. This is another liquid crystal, with two crystal directions and one fluid direction.
+
+The self-consistent version instead looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0) node [right] {$c$};
+ \draw [->] (0, -2) -- (0, 2) node [right] {$\tau$};
+
+ \draw (0, -0.5) -- (1.7, -0.5) -- (3, 0.4);
+ \draw (1.7, -0.5) -- (3, -1.4);
+
+ \node at (1, 0.8) {I};
+ \node at (1, -1.5) {S};
+ \node at (2.5, -0.5) {H};
+
+ \draw (1.7, -0.06) -- (1.7, 0.06) node [above] {\small$\tilde{c}$};
+ \end{tikzpicture}
+\end{center}
+Here $c$ only matters above a threshold $\tilde{c}$.
+
+\section{Dynamics}
+We now want to understand \emph{dynamics}, namely if we have a system out of equilibrium, how will it evolve in time? Physically, such situations are usually achieved by rapidly modifying external parameters of a system. For example, if the system is temperature-dependent, one may prepare a sample at high temperature so that the system is in a homogeneous state, and then quench the system by submerging it in water to lower the temperature rapidly. The system will then slowly evolve towards equilibrium.
+
+Before we think about the problem of dynamics, let's think about a more fundamental question --- what is it that is preventing the system from collapsing to the ground state entirely, as opposed to staying in the Boltzmann distribution? The answer is that our system is in contact with a heat bath, which we can model as some random noise driving the movement of our particles. This gives a dynamical way of achieving the Boltzmann distribution. When the system is out of equilibrium, the random noise is still present and drives our system. The key point is that the properties of the noise can be derived from the fact that at equilibrium, they give the Boltzmann distribution.
+
+\subsection{A single particle}
+We ultimately want to think about field theories, but it is helpful to first consider the case of a single, 1-dimensional particle. The action of the particle is given by
+\[
+ A = \int L\;\d t,\quad L = T - V.
+\]
+The equations of motion are
+\[
+ \frac{\delta A}{\delta x(t)} = \frac{\partial L}{\partial x} - \frac{\d}{\d t} \left(\frac{\partial L}{\partial \dot{x}}\right) = 0.
+\]
+For example, if
+\[
+ L = \frac{1}{2} m\dot{x}^2 - V(x),
+\]
+then the equation of motion is
+\[
+ -\frac{\delta A}{\delta x(t)} = m\ddot{x} + V'(x) = 0.
+\]
+There are two key properties of this system:
+\begin{itemize}
+ \item The system is deterministic.
+ \item The Lagrangian is invariant under time reversal. This is a consequence of the time reversal symmetry of microscopic laws.
+\end{itemize}
+We now do something different. We immerse the particle in a fluid bath, modelling the situation of a colloid. If we were honest physicists, we would add new degrees of freedom for each of the individual fluid particles. However, we are dishonest, and instead we aggregate these effects as new forcing terms in the equation of motion:
+\begin{enumerate}
+ \item We introduce damping, $F_D = - \zeta \dot{x}$.
+ \item We introduce a noise $f$, with $\bra f \ket = 0$.
+\end{enumerate}
+We set $F_{\mathrm{BATH}} = F_D + f$. Then we set our equation of motion to be
+\[
+ -\frac{\delta A}{\delta x} = F_{\mathrm{BATH}} = -\zeta \dot{x} + f.
+\]
+This is the \term{Langevin equation}.
+
+What more can we say about the noise? Physically, we expect it to be the sum of many independent contributions from the fluid particles. So it makes sense to assume it takes a Gaussian distribution. So the probability density of the realization of the noise being $f$ is
+\[
 \P[f(t)] = \mathcal{N}_f \exp \left(- \frac{1}{2\sigma^2} \int f(t)^2 \;\d t\right),
\]
where $\mathcal{N}_f$ is a normalization constant and $\sigma^2$, to be determined, sets the strength of the noise. This is called \term{white noise}. This has the strong independence property that
+\[
+ \bra f(t) f(t')\ket = \sigma^2 \delta(t - t').
+\]
+In this course, we always assume we have a Gaussian white noise.
+
Since we have a random noise, in theory, any path we can write down is a possible actual trajectory, but some are more likely than others. To compute the probability density, we fix start and end points $(x_1, t_1)$ and $(x_2, t_2)$. Given any path $x(t)$ between these points, the noise required to realize this trajectory is
+\[
+ f = \zeta \dot{x} - \frac{\delta A}{\delta x}.
+\]
+We then can compute the probability of this trajectory happening as
+\[
+ \P_F[x(t)] \qeq \P[f] = \mathcal{N}_x \exp \left( -\frac{1}{2 \sigma^2} \int_{t_1}^{t_2} \left|\zeta \dot{x} - \frac{\delta A}{\delta x}\right|^2\;\d t\right).
+\]
This is slightly dodgy, since there might be some Jacobian factors we have missed out, but it doesn't matter in the end.
+
+We now consider the problem of finding the value of $\sigma$. In the probability above, we wrote it as $\P_F$, denoting the \emph{forward} probability. We can also consider the backward probability $\P_B[x(t)]$, which is the probability of going along the path in the opposite direction, from $(x_2, t_2)$ to $(x_1, t_1)$.
+
+To calculate this, we use the assumption that $\frac{\delta A}{\delta x}$ is time-reversal invariant, whereas $\dot{x}$ changes sign. So the backwards probability is
+\[
 \P_B[x(t)] = \mathcal{N}_x \exp \left( - \frac{1}{2\sigma^2}\int_{t_1}^{t_2} \left|-\zeta\dot{x} - \frac{\delta A}{\delta x}\right|^2\;\d t\right).
+\]
+The point is that \emph{at equilibrium}, the probability of seeing a particle travelling along $x(t)$ forwards should be the same as the probability of seeing a particle travelling along the same path backwards, since that is what equilibrium means. This is not the same as saying $\P_B[x(t)]$ is the same as $\P_F[x(t)]$. Instead, if at equilibrium, there are a lot of particles at $x_2$, then it should be much less likely for a particle to go from $x_2$ to $x_1$ than the other way round.
+
+Thus, what we require is that
+\[
+ \P_F[x(t)] e^{-\beta H_1} = \P_B[x(t)] e^{-\beta H_2},
+\]
where $H_i = H(x_i, \dot{x}_i)$. This is the \term{principle of detailed balance}. It is a fundamental consequence of \emph{microscopic} reversibility, and it is a symmetry, so coarse graining must respect it.
+
+To see what this condition entails, we calculate
+\begin{align*}
+ \frac{\P_F[x(t)]}{\P_B[x(t)]} &= \frac{\exp \left(-\frac{1}{2\sigma^2} \int_{t_1}^{t_2} \left((\zeta \dot{x})^2 - 2 \zeta \dot{x} \frac{\delta A}{\delta x} + \left(\frac{\delta A}{\delta x}\right)^2\right)\;\d t\right)}{\exp \left(-\frac{1}{2\sigma^2} \int_{t_1}^{t_2} \left((\zeta \dot{x})^2 + 2 \zeta \dot{x} \frac{\delta A}{\delta x} + \left(\frac{\delta A}{\delta x}\right)^2\right)\;\d t\right)}\\
+ &= \exp \left(-\frac{1}{2\sigma^2} \int_{t_1}^{t_2} \left(-4 \zeta \dot{x} \frac{\delta A}{\delta x}\right)\;\d t\right).
+\end{align*}
+To understand this integral, recall that the Hamiltonian is given by
+\[
+ H(x, \dot{x}) = \dot{x} \frac{\partial L}{\partial \dot{x}} - L.
+\]
+In our example, we can explicitly calculate this as
+\[
 H = \frac{1}{2} m\dot{x}^2 + V(x).
+\]
+We then find that
+\begin{align*}
+ \frac{\d H}{\d t} &= \frac{\d}{\d t} \left(\dot{x} \frac{\partial L}{\partial \dot{x}}\right) - \frac{\d L}{\d t} \\
+ &= \ddot{x} \frac{\partial L}{\partial \dot{x}} + \dot{x} \frac{\d}{\d t} \frac{\partial L}{\partial \dot{x}} - \left(\dot{x} \frac{\partial L}{\partial x} + \ddot{x} \frac{\partial L}{\partial \dot{x}}\right)\\
+ &= \dot{x}\left(\frac{\d}{\d t} \left(\frac{\partial L}{\partial \dot{x}}\right) - \frac{\partial L}{\partial x}\right)\\
+ &= - \dot{x} \frac{\delta A}{\delta x}.
+\end{align*}
+Therefore we get
+\[
+ \int_{t_1}^{t_2} \dot{x} \frac{\delta A}{\delta x(t)}\;\d t = - (H(x_2, \dot{x}_2) - H(x_1, \dot{x}_1)) = - (H_2 - H_1).
+\]
Therefore, combining this with the previous computation, the principle of detailed balance requires
\[
 \frac{\P_F[x(t)]}{\P_B[x(t)]} = \exp \left(-\frac{2\zeta}{\sigma^2} (H_2 - H_1)\right) = \exp (-\beta(H_2 - H_1))
\]
for \emph{all} paths connecting $x_1$ and $x_2$. So we need
\[
 \frac{2\zeta}{\sigma^2} = \beta,
\]
or equivalently
\[
 \sigma^2 = 2k_B T \zeta.
\]
+This is the simplest instance of the \term{fluctuation dissipation theorem}.
+
+Given this, we usually write
+\[
+ f = \sqrt{2k_B T \zeta} \Lambda,
+\]
+where $\Lambda$ is a Gaussian process and
+\[
+ \bra \Lambda(t) \Lambda(t')\ket = \delta(t - t').
+\]
+We call $\Lambda$ a \term{unit white noise}\index{white noise!unit}.
+
+
+In summary, for a particle under a potential $V$, we have an equation
+\[
+ m\ddot{x} + V'(x) = - \zeta \dot{x} + f.
+\]
The term $-\zeta \dot{x}$ gives an arrow of time en route to equilibrium, while the noise term restores time reversal symmetry once equilibrium is reached. Requiring this fixes the variance, and we have
+\[
+ \bra f(t) f(t')\ket = \sigma^2 \delta(t - t') = 2 k_B T \zeta \delta(t - t').
+\]
+
In general, in the coarse grained world, if we have mesostates $A, B$ with equilibrium probabilities $e^{-\beta F_A}$ and $e^{-\beta F_B}$, then we have an identical statement:
+\[
+ e^{-\beta F_A} \P(A \to B) = e^{-\beta F_B} \P(B \to A).
+\]
+
+\subsection{The Fokker--Planck equation}
+So far, we have considered a single particle, and considered how it evolved over time. We then asked how likely certain trajectories are. An alternative question we can ask is if we have a probability density for the position of $x$, how does this evolve over time?
+
+It is convenient to consider the overdamped limit, where $m = 0$. Our equation then becomes
+\[
+ \zeta \dot{x} = - \nabla V + \sqrt{2k_B T \zeta} \Lambda.
+\]
+Dividing by $\zeta$ and setting $\tilde{M} = \zeta^{-1}$, we get
+\[
+ \dot{x} = - \tilde{M} \nabla V + \sqrt{2k_B T \tilde{M}} \Lambda.
+\]
+This $\tilde{M}$ is the \term{mobility}, which is the velocity per unit force.
+
+We define the probability density function
+\[
+ P(x, t) = \text{probability density at $x$ at time $t$}.
+\]
+We can look at the probability of moving by a distance $\Delta x$ in a time interval $\Delta t$. Equivalently, we are asking $\Lambda$ to change by
+\[
 \Delta \Lambda = \frac{1}{\sqrt{2k_B T \zeta}} (\zeta \Delta x + \nabla V \Delta t).
+\]
+Thus, the probability of this happening is
+\[
+ W(\Delta x, x) \equiv P_{\Delta t}(\Delta x) = \mathcal{N} \exp \left(-\frac{1}{4 \zeta k_B T \Delta t} (\zeta \Delta x + \nabla V \Delta t)^2\right).
+\]
+We will write $u$ for $\Delta x$. Note that $W(u, x)$ is just a normal, finite-dimensional Gaussian distribution in $u$. We can then calculate that after time $\Delta t$, the expectation and variance of $u$ are
+\[
+ \bra u\ket = - \frac{\nabla V}{\zeta} \Delta t,\quad \bra u^2\ket - \bra u\ket^2 = \frac{2k_B T}{\zeta} \Delta t + O(\Delta t^2).
+\]
+We can find a \emph{deterministic} equation for $P(x, t)$, given in integral form by
+\[
+ P(x, t + \Delta t) = \int P(x - u, t) W(u, x - u)\;\d u.
+\]
+To obtain a differential form, we Taylor expand the integrand as
+\begin{multline*}
+ P(x - u, t) W(u, x - u) \\
+ = \left(P - u \nabla P + \frac{1}{2} u^2 \nabla^2 P\right)\left(W(u, x) - u \nabla W + \frac{1}{2} u^2 \nabla^2 W\right),
+\end{multline*}
+where all the gradients act on $x$, not $u$. Applying the integration to the expanded equation, we get
+\[
 P(x, t + \Delta t) = P(x, t) - \bra u \ket \nabla P + \frac{1}{2} \bra u^2\ket \nabla^2 P - P \nabla \bra u\ket.
+\]
+Substituting in our computations for $\bra u\ket$ and $\bra u^2\ket$ gives
+\[
+ \dot{P}(x, t) \Delta t = \left(\frac{\nabla V}{\zeta} \nabla P + \frac{k_B T}{\zeta} \nabla^2 P + \frac{1}{\zeta} P \nabla^2 V \right)\Delta t.
+\]
+Dividing by $\Delta t$, we get
+\begin{align*}
 \dot{P} &= \frac{k_B T}{\zeta} \nabla^2 P + \frac{1}{\zeta} \nabla\cdot(P \nabla V)\\
 &= D \nabla^2 P + \tilde{M} \nabla\cdot(P \nabla V),
+\end{align*}
+where
+\[
+ D = \frac{k_B T}{\zeta},\quad \tilde{M} = \frac{D}{k_B T} = \frac{1}{\zeta}
+\]
+are the \term{diffusivity} and the \term{mobility} respectively.
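The Einstein relation $D = k_B T/\zeta$ can be checked directly against the Langevin dynamics: with $V = 0$ the overdamped equation is pure diffusion, so the mean squared displacement grows as $\bra x^2 \ket = 2Dt$ in one dimension. A minimal sketch with arbitrary parameter values:

```python
import numpy as np

zeta, kBT = 2.0, 1.0
D = kBT / zeta                         # Einstein relation: D = kBT / zeta
dt, n_steps, n_walkers = 0.02, 500, 50_000

rng = np.random.default_rng(1)
x = np.zeros(n_walkers)
# Overdamped Langevin with V = 0: dx = sqrt(2 D) dW (pure diffusion)
for _ in range(n_steps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_walkers)

T = n_steps * dt
msd = float(np.mean(x**2))
print(f"MSD = {msd:.3f}, theory 2 D T = {2 * D * T:.3f}")
```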
+
+Putting a subscript $_1$ to emphasize that we are working with one particle, the structure of this is
+\begin{align*}
+ \dot{P}_1 &= - \nabla \cdot \mathbf{J}_1\\
+ \mathbf{J}_1 &= -P_1 D \nabla (\log P_1 + \beta V)\\
+ &= -P_1 \tilde{M} \nabla \mu(x),
+\end{align*}
+where
+\[
+ \mu = k_B T \log P_1 + V
+\]
+is the chemical potential of a particle in $V(x)$, as promised. Observe that
+\begin{itemize}
+ \item This is deterministic for $P_1$.
+ \item This has the same information as the Langevin equation, which gives the statistics for paths $x(t)$.
+ \item This was ``derived'' for a constant $\zeta$, independent of position. However, the equations in the final form turn out to be correct even for $\zeta = \zeta(x)$ as long as the temperature is constant, i.e.\ $\tilde{M} = \tilde{M}(x) = \frac{D(x)}{k_B T}$. In this case, the Langevin equation says
+ \[
+ \dot{x} = - \tilde{M}(x) \nabla V + \sqrt{2 D(x)} \Lambda.
+ \]
+ The multiplicative (i.e.\ non-constant) noise term is problematic. To understand multiplicative noise, we need advanced stochastic calculus (It\^o/Stratonovich). In this course (and in many research papers), we avoid multiplicative noise.
+\end{itemize}
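As a check on the structure of these equations, note that $P \propto e^{-\beta V}$ makes the flux $\mathbf{J}_1 = -D \nabla P - \tilde{M} P \nabla V$ vanish identically, precisely because $D \beta = \tilde{M}$. A finite-difference sketch in one dimension, with an arbitrarily chosen confining potential:

```python
import numpy as np

# Parameters and the potential are arbitrary choices for this sketch.
zeta, kBT = 1.0, 1.0
D, Mt = kBT / zeta, 1.0 / zeta        # diffusivity D and mobility M-tilde
beta = 1.0 / kBT

x = np.linspace(-3.0, 3.0, 2001)
h = x[1] - x[0]
V = 0.5 * x**2 + 0.1 * x**4           # confining potential V(x)
dV = x + 0.4 * x**3                   # its derivative V'(x)

# Boltzmann distribution, normalized on the grid
P = np.exp(-beta * V)
P /= P.sum() * h

# Probability flux J = -D dP/dx - Mt P V' (central differences, interior points)
dP = (P[2:] - P[:-2]) / (2 * h)
J = -D * dP - Mt * P[1:-1] * dV[1:-1]

print("max |J| =", np.max(np.abs(J)))  # vanishes up to O(h^2) discretization error
```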
+
+\subsection{Field theories}
+Suppose now that we have $N$ non-interacting colloids under the same potential $V(x)$. As usual, we model this by some coarse-grained density field $\rho(x, t)$. If we assume that the particles do not interact, then the expected value of this density is just given by
+\[
+ \bra \rho\ket = NP_1,\quad \bra \dot{\rho}\ket = N \dot{P}_1.
+\]
Then our previous discussion implies that $\rho$ evolves by
+\[
+ \dot{\rho} = - \nabla \cdot \mathbf{J},
+\]
+where
+\[
+ \bra \mathbf{J}\ket = - \rho \tilde{M} \nabla \mu,\quad \mu = k_B T \log \rho + V(x).
+\]
+
+If we wish to consider a general, interacting field, then we can take the same equations, but set
+\[
+ \mu = \frac{\delta F}{\delta \rho}
+\]
+instead.
+
+Note that these are hydrodynamic level equations for $\rho$, i.e.\ they only tell us what happens to $\bra \rho\ket$. If we put $\mathbf{J} = \bra \mathbf{J}\ket$, then we get a mean field solution that evolves to the minimum of the free energy. To understand the stochastic evolution of $\rho$ itself, we put
+\[
+ \mathbf{J} = - \rho \tilde{M} \nabla \mu + \mathbf{j},
+\]
+where $\mathbf{j}$ is a \term{noise current}. This is the \term{Langevin equation} for a fluctuating field $\rho(\mathbf{r}, t)$.
+
+We can fix the distribution of $\mathbf{j}$ by requiring detailed balance as before. We will implement this for a constant $M = \rho \tilde{M}$, called the \term{collective mobility}. This is what we have to do to avoid having multiplicative noise in our system. While this doesn't seem very physical, this is reasonable in situations where we are looking at small fluctuations about a fixed density, for example.
+
+As before, we assume $\mathbf{j}(r, t)$ is Gaussian white noise, so
+\[
+ \P[\mathbf{j}(\mathbf{r}, t)] = \mathcal{N} \exp \left(-\frac{1}{2\sigma^2} \int_{t_1}^{t_2} \d t\int \d \mathbf{r}\; |\mathbf{j}(\mathbf{r}, t)|^2\right).
+\]
+This corresponds to
+\[
+ \bra j_k (\mathbf{r}, t)j_\ell(\mathbf{r}', t')\ket = \sigma^2 \delta_{k\ell} \delta(\mathbf{r} - \mathbf{r}') \delta(t - t').
+\]
+We now repeat the detailed balance argument to find $\sigma^2$. We start from
+\[
+ \mathbf{J} + M \nabla \mu = \mathbf{j}.
+\]
+Using $F$ to mean forward path, we have
+\[
+ \P_F[\mathbf{J}(\mathbf{r}, t)] = \mathcal{N} \exp \left(-\frac{1}{2\sigma^2} \int_{t_1}^{t_2} \d t \int \d \mathbf{r} \; |\mathbf{J} + M \nabla \mu|^2 \right),
+\]
+where
+\[
+ \mu = \frac{\delta F[\rho]}{\delta \rho}.
+\]
We consider the backward path and get
\[
 \P_B[\mathbf{J}(\mathbf{r}, t)] = \mathcal{N} \exp \left(-\frac{1}{2\sigma^2} \int_{t_1}^{t_2} \d t \int \d \mathbf{r} \; |-\mathbf{J} + M \nabla \mu|^2 \right).
\]
Then
+\[
+ \log \frac{\P_F}{\P_B} = - \frac{2M}{\sigma^2} \int_{t_1}^{t_2} \d t \int \d \mathbf{r}\;\mathbf{J} \cdot \nabla \mu.
+\]
+We integrate by parts in space to write
+\[
+ \int \d \mathbf{r}\; \mathbf{J} \cdot \nabla \mu = - \int \d \mathbf{r}\; (\nabla \cdot \mathbf{J}) \mu = \int \d \mathbf{r} \left(\dot{\rho} \frac{\delta F}{\delta \rho}\right) = \frac{\d F[\rho]}{\d t}.
+\]
+So we get
+\[
+ \log \frac{\P_F}{\P_B} = - \frac{2M}{\sigma^2} (F_2 - F_1).
+\]
+So we need
+\[
+ \frac{2M}{\sigma^2} = \beta,
+\]
+or equivalently
+\[
+ \sigma^2 = 2k_B T M.
+\]
+So our final many-body Langevin equation is
+\begin{align*}
+ \dot{\rho} &= - \nabla \cdot \mathbf{J}\\
+ \mathbf{J} &= - M \nabla \left(\frac{\delta F}{\delta \rho}\right) + \sqrt{2k_B T M} \Lambda,
+\end{align*}
+where $\Lambda$ is spatiotemporal unit white noise. As previously mentioned, a constant $M$ avoids multiplicative white noise.
+
+In general, we get the same structure for any other diffusive system, such as $\phi(\mathbf{r}, t)$ in a binary fluid.
+
We might want to get a Fokker--Planck equation for our field theory. First, we recap what we did. For one particle, we had the Langevin equation
+\[
+ \dot{x} = - \tilde{M} \nabla V + \sqrt{2k_B T \tilde{M}} \Lambda,
+\]
+and we turned this into a Fokker--Planck equation
+\begin{align*}
+ \dot{P} &= - \nabla \cdot \mathbf{J}\\
+ \mathbf{J} &= -P \tilde{M} \nabla \mu\\
+ \mu &= k_B T \log P + V(x).
+\end{align*}
+We then write this as
+\[
+ \dot{P} = \nabla \cdot \left(\tilde{M} k_B T \left(\nabla + \beta \nabla V\right)P\right),
+\]
+where $P(x, t)$ is the time dependent probability density for $x$.
+
+A similar equation can be derived for the multi-particle case, which we will write down but not derive. We replace $x(t)$ with $\rho(\mathbf{r}, t)$, and we replace $P(x, t)$ with $P[\rho(\mathbf{r}); t]$. We then replace $\nabla$ with $\frac{\delta}{\delta \rho(\mathbf{r})}$. So the Fokker--Planck equation becomes
+\[
 \dot{P}[\rho(\mathbf{r}); t] = \int \d \mathbf{r} \; \frac{\delta}{\delta \rho} \left(k_B T \nabla \cdot M \nabla \left(\frac{\delta}{\delta \rho} + \beta \frac{\delta F}{\delta \rho}\right)P\right).
+\]
+This is the \term{Fokker--Planck equation} for fields $\rho$.
+
+As one can imagine, it is not very easy to solve. Note that in both cases, the quantities $\nabla + \beta \nabla V$ and $\frac{\delta}{\delta \rho} + \beta \frac{\delta F}{\delta \rho}$ annihilate the Boltzmann distribution. So the Boltzmann distribution is invariant.
+
+The advantage of the Langevin equation is that it is easy to understand the mean field theory/deterministic limit $\rho = \rho_{hydro}(\mathbf{r}, t)$. However, it is difficult to work with multiplicative noise. In the Fokker--Planck equation, multiplicative noise is okay, but the deterministic limit may be singular. Schematically, we have
+\[
+ P[\rho(\mathbf{r}), t] = \delta(\rho(\mathbf{r}, t) - \rho_{hydro}(\mathbf{r}, t)).
+\]
+In this course, we take the compromise and use the Langevin equation with constant $M$.
+
+\section{Model B}
We now apply this theory to model some concrete systems. We shall first consider a simple model of binary fluids, in which diffusion happens but without fluid flow. As before, this is modelled by a scalar composition field $\phi(\mathbf{r})$, which evolves under the equations
+\begin{align*}
+ \dot{\phi} &= - \nabla \cdot \mathbf{J}\\
+ \mathbf{J} &= - M \nabla \mu + \sqrt{2k_B TM} \boldsymbol \Lambda\\
+ \mu &= \frac{\delta F}{\delta \phi}.
+\end{align*}
+Recall that the system is modelled under the Landau--Ginzburg free energy
+\[
+ F[\phi] = \int \Big(\underbrace{\frac{a}{2} \phi^2 + \frac{b}{4}\phi^4}_f + \frac{\kappa}{2} (\nabla \phi)^2\Big)\;\d \mathbf{r}.
+\]
+We then have
+\[
+ \mu = a \phi + b \phi^3 - \kappa \nabla^2 \phi.
+\]
+As before, the mean field theory for equilibrium looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.3, 0) -- (2.3, 0) node [right] {$\bar{\phi}$};
+ \draw [->] (0, -1) -- (0, 3) node [above] {$f$};
+ \draw [domain=-2:2, mblue, thick] plot (\x, {(\x)^2/3 + (\x)^4/10});
+ \draw [domain=-2:2, mred, thick] plot [smooth] (\x, {-(\x)^2 + (\x)^4/3});
+
+ \node [right, mblue] at (1.8, 2.13) {$a > 0$};
+ \node [right, mred] at (1.9, 0.734) {$a < 0$};
+ \node [left, white] at (-1.8, 2.13) {$a > 0$};
+ \node [left, white] at (-1.9, 0.734) {$a < 0$};
+
+ \draw [dashed] (1.2247, -0.75) -- (1.2247, 0) node [above] {\small$\phi_B$};
+ \draw (0.707, -0.417) node [below] {\small$\phi_S$} node [circ] {};
+ \end{tikzpicture}
+\end{center}
+Here $\pm \phi_S$ are the spinodals, where the second derivative changes sign. This gives the phase diagram
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 3) node [above] {$a$} -- (-2, 0) node [below] {$-1$} -- (2, 0) node [below] {$1$} node [right] {$\bar \phi$} -- (2, 3);
+ \node [left] at (-2, 2.5) {$a(T) = 0$};
+
+ \draw [domain=2.5:0.2, samples=40] plot [smooth] ({sqrt (2.5 - \x)}, \x);
+ \draw [domain=2.5:0.2, samples=40] plot [smooth] ({-sqrt (2.5 - \x)}, \x);
+
+ \draw [dashed, domain=0.2:2.5] plot [smooth] ({sqrt ((2.5 - \x)/3)}, \x);
+ \draw [dashed, domain=0.2:2.5] plot [smooth] ({-sqrt ((2.5 - \x)/3)}, \x);
+ \draw (-2.05, 2.5) -- (-1.95, 2.5);
+
+ \node at (1.4, 2) {$\mathbf{1}$};
+ \node at (0.85, 1.3) {$\mathbf{3}$};
+ \node at (0, 0.8) {$\mathbf{2}$};
+ \end{tikzpicture}
+\end{center}
+Here $\bar{\phi}$ is the global composition, which is a control variable.
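For the quartic $f$ above, both compositions can be written down explicitly when $a < 0$: solving $f'(\phi_B) = a \phi_B + b\phi_B^3 = 0$ (the common tangent condition, for this symmetric $f$) and $f''(\phi_S) = a + 3b \phi_S^2 = 0$ gives
\[
 \phi_B = \sqrt{\frac{-a}{b}},\quad \phi_S = \sqrt{\frac{-a}{3b}} = \frac{\phi_B}{\sqrt{3}}.
\]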
+
+In the past, we discussed what the field looks like in each region when we are at equilibrium. At (1), the system is in a uniform phase that is globally stable. If we set up our system at (1), and then rapidly change the temperature so that we lie in (2) or (3), then we know that after the system settles, we will have a phase separated state. However, how this transition happens is not something mean field theory tells us. Heuristically, we expect
+
+\begin{itemize}
+ \item In (2), we have $|\bar{\phi}| < \phi_S$, and $f''(\phi) < 0$ implies local instability. The system rapidly becomes phase separated. This is \term{spinodal behaviour}.
 \item In (3), we have $\phi_S < |\bar{\phi}| < \phi_B$. A uniform phase is locally stable, but not globally. To reach the phase separated state, nucleation and growth must occur, which requires the contribution of noise.
+\end{itemize}
+We now study these in detail.
+\subsubsection*{Regime 1}
We know that regime (1) is stable, and we shall see how it responds to perturbations about $\phi(\mathbf{r}) = \bar{\phi}$. Put
+\[
+ \phi = \bar{\phi} + \tilde{\phi}(\mathbf{r}).
+\]
+We can then write
+\[
+ \mu = \frac{\delta F}{\delta \phi} = \frac{\partial f}{\partial \phi} - \kappa \nabla^2 \phi = f'(\bar{\phi}) + \tilde{\phi} f''(\bar{\phi}) - \kappa \nabla^2 \tilde{\phi}.
+\]
+Note that the first term is a constant. We then have
+\begin{align*}
+ \dot{\phi} &= - \nabla \cdot \mathbf{J}\\
+ \mathbf{J} &= -M \nabla [f''\tilde{\phi} - \kappa \nabla^2 \tilde{\phi}] + \sqrt{2k_B T M} \boldsymbol\Lambda.
+\end{align*}
+We drop the tildes and take the Fourier transform to get
+\[
+ \dot{\phi}_{\mathbf{q}} = - M q^2 (f'' + \kappa q^2)\phi_{\mathbf{q}} + i\mathbf{q} \cdot \sqrt{2k_B TM} \boldsymbol\Lambda_{\mathbf{q}}.
+\]
+Compare this with an overdamped particle in a simple harmonic oscillator,
+\[
+ V = \frac{1}{2} \kappa x^2,
+\]
+where we have
+\[
+ \dot{x} = - \tilde{M} \kappa x + \sqrt{2k_B T \tilde{M}} \Lambda.
+\]
+Indeed, we can think of our system as an infinite family of decoupled harmonic oscillators, and solve each of them independently.
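To make the analogy concrete, a single stable mode with $r(q) = Mq^2(f'' + \kappa q^2) > 0$ is an Ornstein--Uhlenbeck process, whose stationary variance is the noise strength divided by twice the damping rate. Taking the noise strength on a mode to be $2k_B T M q^2$ (a schematic normalization; the exact prefactor depends on Fourier conventions), the stationary variance is $k_B T/(f'' + \kappa q^2)$, with the $q$-dependent kinetics cancelling. A sketch simulating one mode, treated as real for simplicity, with made-up parameters:

```python
import numpy as np

# Arbitrary parameters for one stable mode (f'' > 0 here)
kBT, M, f2, kappa, q = 1.0, 1.0, 1.0, 1.0, 1.0
r = M * q**2 * (f2 + kappa * q**2)    # relaxation rate r(q)
s2 = 2 * kBT * M * q**2               # noise strength (schematic normalization)
var_theory = s2 / (2 * r)             # = kBT / (f2 + kappa q^2)

dt, n_steps, burn_in = 0.05, 400_000, 10_000
rng = np.random.default_rng(0)

# Exact one-step update for an Ornstein--Uhlenbeck process (no time-step error)
decay = np.exp(-r * dt)
kick = np.sqrt(var_theory * (1 - decay**2))

phi, acc, cnt = 0.0, 0.0, 0
for step in range(n_steps):
    phi = phi * decay + kick * rng.standard_normal()
    if step >= burn_in:
        acc += phi * phi
        cnt += 1

var_measured = acc / cnt
print(f"<phi^2> = {var_measured:.3f}, theory = {kBT / (f2 + kappa * q**2):.3f}")
```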
+
+In the second example sheet, we compute
+\[
 S(\mathbf{q}, t) \equiv \bra \phi_\mathbf{q}(0) \phi_{-\mathbf{q}} (t)\ket = S(q) e^{-r(q) t},
\]
where $r(q) = Mq^2(f'' + \kappa q^2)$ and
\[
 S(q) = \frac{k_B T}{f'' + \kappa q^2}
\]
is the equilibrium equal-time static correlator. This \term{$S(\mathbf{q}, t)$} is called the \term{dynamic structure factor}, which can be measured by light scattering. This doesn't say the fluctuations go away completely --- we expect there to be fluctuations all the time. What this says is that fluctuations at late times come completely from the random noise, and not the initial fluctuations.
+\subsubsection*{Regime 2}
+Consider the case where we are in the second regime. As before, we have the equation
+\[
+ \dot{\phi}_{\mathbf{q}} = - \underbrace{M q^2 (f'' + \kappa q^2)}_{r(q)}\phi_\mathbf{q} + i\mathbf{q} \cdot \sqrt{2k_B TM} \boldsymbol\Lambda_{\mathbf{q}},
+\]
+but crucially, now $f''(\bar{\phi}) < 0$, so it is possible to have $r(q) < 0$. The system is unstable.
+
+If we ignore the noise by averaging the equation, then we get
+\[
 \bra \dot{\phi}_\mathbf{q}\ket = - r(q) \bra \phi_\mathbf{q}\ket.
+\]
+So if we have a noisy initial state $\phi_\mathbf{q}(0)$, then the perturbation grows as
+\[
+ \bra \phi_\mathbf{q}(t)\ket = \phi_\mathbf{q}(0) e^{-r(q)t}.
+\]
When $r(q) < 0$, this amplifies the initial noise. In this world, even if we start with a perfectly uniform $\phi$, noise terms will kick in and get amplified over time. Moreover, since we have exponential growth, the earliest noise gets amplified the most, and at late times, the new perturbations due to the noise are negligible.
+
+We can plot our $r(q)$ as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$q$};
+ \draw [->] (0, -1.3) -- (0, 3) node [above] {$r(q)$};
+ \draw [domain=0:3.8, thick, mblue] plot [smooth] (\x, {\x^2 * (-2 + \x^2/5) / 5});
+
+ \draw [dashed] (2.236, -1) -- (2.236, 0) node [above] {$q^*$};
+ \end{tikzpicture}
+\end{center}
+The maximally unstable mode $q^*$ is given by the solution to $r'(q^*) = 0$, which we can easily check to be given by
+\[
+ q^* = \sqrt{\frac{-f''}{2\kappa}}.
+\]
+Now consider the equal time \term{non-equilibrium structure factor}\index{$S_\mathbf{q}(t)$}
+\[
+ S_\mathbf{q}(t) = \bra \phi_\mathbf{q}(t) \phi_{-\mathbf{q}}(t)\ket \sim S_\mathbf{q}(0) e^{-2r(q) t}.
+\]
+As time evolves, this gets more and more peaked around $q = q^*$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$q$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$S_\mathbf{q}(t)$};
+
+ \draw[domain=0:4, mblue, semithick] plot [smooth] (\x, {0.1 + 0.02 * sin(500 * \x)});
+ \draw[domain=0:4, mblue!25!mgreen, semithick] plot [smooth] (\x, {0.1 * e^(- \x^2 * (-2 + \x^2/5) / 5)});
+ \draw[domain=0:4, mblue!50!mgreen, semithick] plot [smooth] (\x, {0.1 * e^(-2* \x^2 * (-2 + \x^2/5) / 5)});
+ \draw[domain=0:4, mgreen!75!mgreen, semithick] plot [smooth] (\x, {0.1 * e^(-2.8* \x^2 * (-2 + \x^2/5) / 5)});
+ \draw[domain=0:4, mgreen, semithick] plot [smooth] (\x, {0.1 * e^(-3.3* \x^2 * (-2 + \x^2/5) / 5)});
+ \draw [dashed] (2.236, 3) -- (2.236, 0) node [below] {$q^*$};
+ \end{tikzpicture}
+\end{center}
+So we see a growth of random structure with scale $L \sim \pi/q^*$. This process is called \term{spinodal decomposition}.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (3, 3);
+ \draw [fill=mblue, fill opacity=0.3] plot [smooth] coordinates {(0.7, 0) (0.5, 0.9) (0.8, 1.5) (1, 2.5) (3, 2.3)} -- (3, 3) -- (0, 3) -- (0, 0);
+
+ \draw [fill=mblue, fill opacity=0.3] (1.4, 0.7) circle [radius=0.3];
+
+ \draw [fill=mblue, fill opacity=0.3] plot[smooth] coordinates {(3, 1.8) (1.8, 2) (1.7, 1.5) (3, 1.2)};
+
+ \draw [fill=mblue, fill opacity=0.3] plot[smooth] coordinates {(3, 0.5) (2.3, 0.8) (2.2, 0.2) (2.6, 0)} -- (3, 0);
+
+ \draw [-latex'] (1.7, 0.7) -- (2.21, 0.6) node [pos=0.5, above] {\small $L$};
+ \draw [-latex'] (2.21, 0.6) -- (1.7, 0.7);
+ \end{tikzpicture}
+\end{center}
Note that this computation was done on the assumption that $\tilde{\phi}$ is small, where we dropped terms beyond linear order. At intermediate $t$, as these phase separated states grow, the higher-order terms are going to kick in. An informal use of variational theory says we should replace $f''$ by $\bar{f}''$, where
\[
 \bar{f}'' = f'' + \frac{3b}{(2\pi)^d} \int^{q_{max}} S_q(t) \;\d^d \mathbf{q}.
\]
+This says $\bar{f}''$ is less negative as the fluctuations grow. Since
+\[
 q^* = \sqrt{\frac{-\bar{f}''}{2\kappa}},
+\]
+this moves to a smaller $q$. So $L(t) \sim \pi/q^*(t)$ starts increasing. This is called \term{domain growth}.
+
+In the late stages, we have large regions of $\phi \approx \pm \phi_B$, so it is almost in equilibrium locally. We are well away from the exponential growth regime, and the driving force for domain growth is the reduction of interfacial area. We can estimate the free energy (per unit volume) as
+\[
+ \frac{F}{V} = \frac{\sigma A(t)}{V},
+\]
+where $A(t)$ is the area of the interface. So by dimensional analysis, this is $\sim \sigma/L(t)$. We have calculated the interfacial surface tension $\sigma$ before to be
+\[
+ \sigma = \left(\frac{-8\kappa a^3}{9b^2}\right)^{1/2},
+\]
+but it doesn't really matter what this is. The ultimate configuration with minimal surface area is when we have complete phase separation. The result is that
+\[
 L(t) \sim \left(\frac{M\sigma}{\phi_B} t\right)^{1/3}.
+\]
+We will not derive this yet, because this result is shared with the late time behaviour of the third regime, and we will discuss this at that point.
+
+\subsubsection*{Regime 3}
+Finally, consider the third regime. Suppose we have $\bar{\phi} = -\phi_B + \delta$, where $\delta$ is small. The system is locally stable, so $r(q) > 0$ for all $q$. On the other hand, it is globally unstable, so phase separation is preferred. To achieve phase separation, we must overcome a \term{nucleation barrier}, and we must rely on noise to do that.
+
To understand what this process looks like, formally, we can inspect the path probabilities
+\[
+ \P[\phi(\mathbf{r}, t)] = \mathcal{N} \exp \left(-\frac{\beta}{4M} \int |\mathbf{J} + M \nabla \mu|^2 \;\d \mathbf{r} \;\d t\right)
+\]
+given by the Langevin equation. We seek to find the most likely trajectory from the initial to the final state. In field theory, this is called the \term{instanton path}, and in statistical physics, this is called \term{large deviation theory}. Instead of doing this, we use our physical intuition to guide ourselves.
+
+Heuristically, we expect that if we start with a uniform phase $\phi = -\phi_B + \delta$, then at some point, there will be some random small droplet with $\phi = +\phi_B$ of small radius $R$. This is already unlikely, but after this, we need $R$ to increase until we have full phase separation. The key question is --- how unlikely is this process?
+\begin{center}
+ \begin{tikzpicture}[scale=0.8]
+ \draw (0, 0) rectangle (2, 2);
+ \draw [->] (2.3, 1) -- (3.2, 1);
+ \begin{scope}[shift={(3.5, 0)}]
+ \draw (0, 0) rectangle (2, 2);
+ \draw [fill=mblue, fill opacity=0.5] (1, 1) circle [radius=0.05];
+ \draw [->] (2.3, 1) -- (3.2, 1);
+ \end{scope}
+
+ \begin{scope}[shift={(7, 0)}]
+ \draw (0, 0) rectangle (2, 2);
+ \draw [fill=mblue, fill opacity=0.5] (1, 1) circle [radius=0.1];
+ \draw [->] (2.3, 1) -- (3.2, 1);
+ \end{scope}
+
+ \begin{scope}[shift={(10.5, 0)}]
+ \draw (0, 0) rectangle (2, 2);
+ \draw [fill=mblue, fill opacity=0.5] (1, 1) circle [radius=0.3];
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
The idea is to consider the cost of having a droplet of radius $R$. First, there is the cost of having an interface, which is $4 \pi \sigma R^2$. However, having regions of $+\phi_B$ is energetically favorable, and this gain grows with the volume. So we get a cubic term $\sim -R^3$. If we add these two together, we get a barrier:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$R$};
+ \draw [->] (0, -1.8) -- (0, 2.5) node [above] {$F(R)$};
+
+ \draw [domain=0:3.7, mblue, semithick] plot [smooth] (\x, {3*(\x^2/3 -\x^3/10)});
+
+ \draw [dashed] (2.222, 1.6461) -- (2.222, 0) node [below] {$R^*$};
+ \draw [dashed] (2.222, 1.6461) -- (0, 1.6461) node [left] {$F^*$};
+ \end{tikzpicture}
+\end{center}
+Once $R > R^*$, it is then energetically favorable for the radius to continue increasing, and then we can easily reach phase separation. To reach this, we must rely on noise to push us over this barrier, and this noise-induced rate is $\sim e^{-\beta F^*}$. To see what happens afterwards, we need to better understand how droplets work.
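In $d = 3$ this barrier can be made explicit with the standard classical nucleation theory form (the notes only sketch it): writing $F(R) = 4\pi \sigma R^2 - \frac{4\pi}{3} \Delta f\, R^3$, where $\Delta f > 0$ is the assumed bulk free energy gain per unit volume, maximizing over $R$ gives $R^* = 2\sigma/\Delta f$ and $F^* = 16 \pi \sigma^3 / (3 \Delta f^2)$. A numerical check with made-up values:

```python
import numpy as np

sigma, delta_f = 1.0, 1.0              # made-up surface tension and bulk gain
R = np.linspace(1e-3, 6.0, 600_001)
F = 4 * np.pi * sigma * R**2 - (4 * np.pi / 3) * delta_f * R**3

i = int(np.argmax(F))
R_star, F_star = R[i], F[i]
print(R_star, 2 * sigma / delta_f)                          # R* = 2 sigma / delta_f
print(F_star, 16 * np.pi * sigma**3 / (3 * delta_f**2))     # barrier height F*
```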
+
+\subsubsection*{Droplet in equilibrium}
The mechanics of a droplet is slightly less straightforward than one might hope, due to the surface tension that tries to compress the droplet. The result is that the value of $\phi$ inside and outside the droplet is not exactly $\pm \phi_B$, but is slightly shifted.
+
+For simplicity, we shall first consider an \emph{equilibrium} system with a droplet. This is achieved by having a large box of fluid with $\bar{\phi}$ just slightly above $-\phi_B$. Then in the phase separated state, the $+\phi_B$ phase will lump together in a droplet (if we had $\bar{\phi} = 0$, then we would have a horizontal interface).
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (3, 3);
+ \draw [fill=mblue, fill opacity=0.5] (1.5, 1.5) circle [radius=0.6];
+ \node at (1.5, 1.5) {$2$};
+ \node at (2.4, 2.4) {$1$};
+ \end{tikzpicture}
+\end{center}
+Within each region $1$ and $2$, the value of $\phi$ is constant, so the term that contributes to the free energy is
+\[
+ f (\phi) = \frac{a}{2} \phi^2 + \frac{b}{4} \phi^4.
+\]
+We would expect $1$ and $2$ to respectively be located at
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.3, 0) -- (2.3, 0) node [right] {$\phi$};
+ \draw [->] (0, -1.2) -- (0, 2.5) node [above] {$f$};
+ \draw [domain=-2.15:2.15, mred, thick] plot [smooth] (\x, {-(\x)^2 + (\x)^4/3});
+
+ \draw (-1.21, -0.65) -- (-1.21, -0.85) node [below] {$1$};
+ \draw (1.24, -0.65) -- (1.24, -0.85) node [below] {$2$};
+ \end{tikzpicture}
+\end{center}
When we have a spherical interface, $1$ and $2$ are not exactly at $\pm \phi_B$. To see this, consider the \term{bulk chemical potential}
+\[
+ \mu = \frac{\partial f}{\partial \phi},
+\]
+The thermodynamic pressure is then
+\[
+ \Pi = \mu \phi - f.
+\]
+This is the negative of the $y$-intercept of the tangent line at $\phi$.
+
+If we have a flat interface, which we can think of as the limit $R \to \infty$, then we require
+\[
+ \mu_1^{bulk} = \mu_2^{bulk},\quad \Pi_1^{bulk} = \Pi_2^{bulk}.
+\]
+This means the points 1, 2 have a common tangent
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.3, 0) -- (2.3, 0) node [right] {$\phi$};
+ \draw [->] (0, -1.2) -- (0, 2.5) node [above] {$f$};
+ \draw [domain=-2.15:2.15, mred, thick] plot [smooth] (\x, {-(\x)^2 + (\x)^4/3});
+
+ \draw (-1.2247, -0.65) -- (-1.2247, -0.85) node [below] {$1$};
+ \draw (1.2247, -0.65) -- (1.2247, -0.85) node [below] {$2$};
+
+ \draw [dashed] (-2.3, -0.75) -- (2.3, -0.75);
+ \end{tikzpicture}
+\end{center}
+
If we have a droplet, then there is surface tension. Consider an imaginary plane cutting the droplet into upper and lower hemispheres. The pressure difference tries to push the upper hemisphere up, contributing a force $(\Pi_2 - \Pi_1) \pi R^2$, while the interfacial tension pulls the boundary down with force $2\pi R \sigma$. Balancing these, and generalizing, in $d$ dimensions we require
+\[
+ \Pi_2 = \Pi_1 + \frac{\sigma}{R}(d - 1)
+\]
+This is called the \term{Laplace pressure}.
+
+In static equilibrium, we still require $\mu_1 = \mu_2$, since this is the same as saying $\mathbf{J} = \nabla \mu = 0$. So $f$ has the same slope at $\phi_1$ and $\phi_2$. However, the two tangent lines no longer have a common intercept, but they are separated by $\frac{\sigma}{R}(d - 1)$. So it looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-2.3, 0) -- (2.3, 0) node [right] {$\phi$};
+ \draw [->] (0, -1.2) -- (0, 2.5) node [above] {$f$};
+ \draw [domain=-2.15:2.15, mred, thick] plot [smooth] (\x, {-(\x)^2 + (\x)^4/3});
+
+ \draw (-1.18, -0.646) -- (-1.18, -0.846) node [below] {$\phi_1$};
+ \draw (1.27, -0.6458) -- (1.27, -0.8458) node [below] {$\phi_2$};
+
+ \draw [dashed] (2, -0.6062) -- (-1, -1.1797);
+ \draw [dashed] (-2, -0.88496) -- (1, -0.31);
+
+ \draw [-latex] (-0.1, -0.988533) -- (-0.1, -0.501653) node [pos=0.4, left] {\scalebox{0.6}{$\Pi_2\!-\!\Pi_1$}\!};
+ \draw [-latex] (-0.1, -0.501653) -- (-0.1, -0.988533);
+ \end{tikzpicture}
+\end{center}
+
+To solve for this, we take the approximation that $\delta$ is small for $R$ decently large. Then we can write
+\begin{align*}
+ f_1 &= f(-\phi_B + \delta_1) \approx \frac{1}{2} f''(-\phi_B) \delta_1^2 + f(-\phi_B)\\
+ f_2 &= f(+\phi_B + \delta_2) \approx \frac{1}{2} f''(+\phi_B) \delta_2^2 + f(+\phi_B).
+\end{align*}
+So $\mu_i = \alpha \delta_i$, where $\alpha = f''(\pm\phi_B)$. Since $\mu_1 = \mu_2$, we find that up to first order, $\delta_1 = \delta_2 \equiv \delta$.
+
+To compute $\delta$, we compute
+\[
+ \Pi_1 = \mu_1\phi_1 - f_1 = -\alpha \delta \phi_B.
+\]
+Similarly, we have $\Pi_2 = +\alpha \delta \phi_B$. So
+\[
+ \Pi_1 - \Pi_2 = -2\alpha \phi_B \delta.
+\]
+Since this equals $-(d - 1)\frac{\sigma}{R}$, we have
+\[
+ \delta = \frac{d - 1}{2\alpha \phi_B} \cdot \frac{\sigma}{R}.
+\]
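+As a quick sanity check, we can verify symbolically that for the quartic $f$ above, $f'(\pm \phi_B) = 0$ and $\alpha = f''(\pm \phi_B) = -2a$, which is positive in the ordered phase $a < 0$. This is a minimal sketch, assuming \texttt{sympy} is available:

```python
import sympy as sp

# Ordered phase: a < 0, b > 0, with binodal phi_B = sqrt(-a/b)
a = sp.symbols('a', negative=True)
b = sp.symbols('b', positive=True)
phi = sp.symbols('phi', real=True)

f = a/2 * phi**2 + b/4 * phi**4
phi_B = sp.sqrt(-a/b)

# phi_B is a stationary point of f: f'(phi_B) = 0
assert sp.simplify(f.diff(phi).subs(phi, phi_B)) == 0

# alpha = f''(+-phi_B); the curvature is the same at both minima by symmetry
alpha = sp.simplify(f.diff(phi, 2).subs(phi, phi_B))
print(alpha)  # -2*a
```

+In particular, $\alpha$ depends only on $a$, not on $b$.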
+
+\subsubsection*{Multiple droplet dynamics}
+We now move on to understand multiple droplet dynamics. This is relevant because we expect that noise will produce multiple droplets around the fluid, which will then interact and combine into a single phase-separated state.
+
+The way droplets interact with each other is that once we have a droplet of large $\phi$, then the average $\phi$ outside of the droplet will decrease. So to begin understanding this scenario, we first see how a droplet reacts when the relative density of the outside bath is not what it expects.
+
+So suppose we have a (3D) droplet of radius $R$ inside a bath with $\phi = - \phi_B + \varepsilon$, where $\varepsilon \not= \delta = \delta(R)$. This $\varepsilon$ is called the \term{supersaturation}. Note that to have a droplet of radius $R$, the value of $\phi$ inside and immediately outside the droplet must be $\pm \phi_B + \delta$. Outside of the droplet, the value of $\phi$ will slowly decay to $-\phi_B + \varepsilon$. Thus, outside of the droplet, we write
+\[
+ \phi(\mathbf{r}) = -\phi_B+ \tilde{\phi}(\mathbf{r}),
+\]
+where $\tilde{\phi}(\infty) = \varepsilon$ and $\tilde{\phi}(R^+) = \delta$.
+
+In this situation, unless $\delta$ happens to be $\varepsilon$, we have a gradient of $\phi$, hence a gradient of chemical potential, hence a flux. Again in Model B, we have
+\[
+ \dot{\phi} = - \nabla \cdot \mathbf{J},\quad \mathbf{J} = -M \nabla \mu = -M \alpha \nabla \tilde{\phi}(\mathbf{r}),
+\]
+assuming a weak enough gradient. We assume this has a quasi-static behaviour, which is reasonable since molecules move quickly relative to how quickly the droplet changes in size. So to solve for $\tilde{\phi}(\mathbf{r})$ at any point in time, we set $\dot{\phi} = 0$. So $\nabla^2 \tilde{\phi} = 0$. We solve this with boundary conditions
+\[
+ \tilde{\phi}(\infty) = \varepsilon,\quad \tilde{\phi}(R^+) = \delta.
+\]
+So we have
+\[
+ \tilde{\phi} = \varepsilon + (\delta - \varepsilon) \frac{R}{r}.
+\]
+Now if we assume this is what $\tilde{\phi}(\mathbf{r})$ looks like, then the current just outside the droplet gives
+\[
+ \mathbf{J}(R^+) = -M \nabla \mu = - \alpha M \left.\frac{\partial \tilde{\phi}}{\partial r}\right|_{R^+} = \alpha M(\delta - \varepsilon) \left.\frac{R}{r^2}\right|_{r = R^+} = \frac{\alpha M(\delta - \varepsilon)}{R}.
+\]
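+We can verify this solution directly. The following sketch (again assuming \texttt{sympy}) checks the boundary conditions, that $\tilde{\phi}$ is harmonic in three dimensions, and that it reproduces the flux at $r = R$:

```python
import sympy as sp

r, R, M, alpha = sp.symbols('r R M alpha', positive=True)
eps, delta = sp.symbols('epsilon delta', real=True)

# Ansatz outside the droplet
phi_t = eps + (delta - eps) * R / r

# Boundary conditions: phi-tilde(infinity) = eps, phi-tilde(R) = delta
assert sp.limit(phi_t, r, sp.oo) == eps
assert phi_t.subs(r, R) == delta

# Radial part of the 3D Laplacian, (1/r^2) d/dr (r^2 d phi/dr), vanishes
assert sp.simplify(sp.diff(r**2 * sp.diff(phi_t, r), r) / r**2) == 0

# Radial current J = -M alpha d phi/dr, evaluated just outside r = R
J = sp.simplify((-M * alpha * sp.diff(phi_t, r)).subs(r, R))
print(J)  # equal to alpha*M*(delta - epsilon)/R
```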
+Thus, when $\delta$ and $\varepsilon$ are not the same, there is a flow of fluid in or out of the droplet. The discontinuity in $\phi$ across the boundary is $\Delta \phi = 2 \phi_B$. So mass conservation implies
+\[
+ 2 \phi_B \dot{R} = - J = - \frac{\alpha M (\delta - \varepsilon)}{R}.
+\]
+Thus, we conclude that
+\[
+ \dot{R} = \frac{1}{2 \phi_B} \left(\frac{\alpha M}{R} (\varepsilon - \delta(R))\right).
+\]
+We can plug in our previous expression for $\delta$. Fixing $\varepsilon$, we can plot $\dot{R}$ as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (3, 0) node [right] {$R$};
+ \draw (0, -1.5) -- (0, 1.5) node [above] {$\dot{R}$};
+ \draw [domain=0.56:3, thick, mblue] plot [smooth] (\x, {2/\x * ( 1.4 - 1/\x)});
+
+ \node at (0.714, 0) [anchor = north west] {$R^*$};
+ \node [circ, mblue] at (0.714, 0) {};
+ \end{tikzpicture}
+\end{center}
+where
+\[
+ R^* = \frac{\sigma}{\alpha \varepsilon \phi_B}.
+\]
+So if we have a bath containing many droplets, then the big droplets grow and the small droplets shrink. Indeed, the interfacial tension penalizes small droplets more heavily than large droplets.
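+This growth/shrinkage behaviour is easy to see numerically. The following is a minimal sketch in toy units ($\alpha = M = \phi_B = \sigma = \varepsilon = 1$ and $d = 3$, so that $R^* = 1$), integrating $\dot{R}$ by forward Euler for one droplet on each side of $R^*$:

```python
# Toy units: alpha = M = phi_B = sigma = epsilon = 1 and d = 3, so R* = 1
def R_dot(R, eps=1.0):
    delta = 1.0 / R                    # delta(R) = (d - 1) sigma / (2 alpha phi_B R)
    return (eps - delta) / (2.0 * R)   # (1 / 2 phi_B)(alpha M / R)(eps - delta(R))

def evolve(R, dt=1e-3, steps=2000):
    """Forward-Euler integration; stop if the droplet evaporates."""
    for _ in range(steps):
        R += dt * R_dot(R)
        if R < 1e-3:
            return 0.0
    return R

small, big = evolve(0.5), evolve(2.0)
print(small, big)  # the sub-critical droplet shrinks away; the super-critical one grows
```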
+
+To understand exactly how these grow, we make a scaling ansatz that there is only one length scale, namely the mean droplet size $\bar{R}$. Then we have
+\[
+ \dot{\bar{R}} \approx \frac{1}{2\phi_B} \frac{\alpha M}{\bar{R}} (\varepsilon - \delta(\bar{R})).
+\]
+We know that the value of $\varepsilon$ is also determined by $\bar{R}$, so we know $\varepsilon - \delta (\bar{R})$ is of order $\delta (\bar{R})$. Hence
+\[
+ \dot{\bar{R}} \sim \frac{M\sigma}{\phi_B^2 \bar{R}^2}
+\]
+So
+\[
+ \bar{R}^3 \sim \frac{M\sigma t}{\phi_B^2}.
+\]
+So the typical droplet size is $\sim t^{1/3}$. Likewise, $R^* \sim t^{1/3}$, and so $\varepsilon \sim t^{-1/3}$.
+
+So if we have a sea of droplets, they go into this competitive process, and we get fewer and fewer droplets of larger and larger size. This is called \term{Ostwald ripening}, and is a diffusive coarsening mechanism.
+
+We have the same scaling for non-droplet geometries, e.g.\ spinodal decomposition at late times. In this case, our domains flatten and enlarge, and we have
+\[
+ L(t) \sim \left(\frac{M\sigma}{\phi_B^2} t\right)^{1/3}.
+\]
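+The $t^{1/3}$ law is easy to check numerically. This sketch integrates $\dot{\bar{R}} = 1/\bar{R}^2$, i.e.\ the scaling relation with all dimensional prefactors set to one, and measures the late-time slope on a log-log plot:

```python
import math

# Integrate dR/dt = 1/R^2, i.e. M sigma / (phi_B^2 R^2) with constants set to 1
R, t, dt = 1.0, 0.0, 1e-2
samples = []
for step in range(1, 200_001):
    R += dt / R**2
    t += dt
    if step % 50_000 == 0:
        samples.append((t, R))

# The exact solution is R^3 = 1 + 3t, so the log-log slope tends to 1/3
(t1, R1), (t2, R2) = samples[0], samples[-1]
slope = (math.log(R2) - math.log(R1)) / (math.log(t2) - math.log(t1))
print(slope)  # approximately 1/3
```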
+
+In practice, we often want to stop this from happening. One way to do so is to add trapped species insoluble in the continuous phase, e.g.\ polymers or salt. If there are $N$ particles inside the droplet exerting ideal gas pressure, then we have
+\[
+ \Pi_2 - \Pi_1 = \frac{2\sigma}{R} - \frac{N k_B T}{\frac{4}{3} \pi R^3}.
+\]
+We again have $\mu_1 = \mu_2$. This ends up giving a new boundary condition at $R^+$,
+\[
+ \tilde{\phi}(R^+) = \frac{\sigma}{\alpha R \phi_B} - \frac{3N k_B T}{8 \alpha \phi_B \pi R^3} = \frac{1}{2 \alpha \phi_B} (\Pi_{\mathrm{Lap}} - \Pi_{\mathrm{sol}}).
+\]
+The first term is the Laplace pressure just as before, while the second term is the extra term from the trapped species.
+
+If we put this back into the system of equations before, then we have a new equation of motion
+\[
+ \dot{R} = \frac{1}{2 \phi_B} \left( \frac{\alpha M}{R} \left(\varepsilon - \frac{\sigma}{\alpha \phi_B R} + \frac{3 N k_B T}{8 \alpha \phi_B \pi R^3}\right)\right).
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$R$};
+ \draw [->] (0, -1) -- (0, 2) node [above] {$\dot{R}$};
+ \draw [mblue, semithick, domain=0.65:1.5] plot (\x, {1/\x * (1 - 3/\x + 1.5/\x^3)});
+ \draw [mblue, semithick, domain=1.5:4] plot (\x, {1/\x * (1 - 3/\x + 1.5/\x^3)});
+ \node [circ] at (0.832, 0) {};
+ \node [anchor = north east] at (0.832, 0) {$R_s$};
+ \end{tikzpicture}
+\end{center}
+We now see that there is a stable fixed point $R_s$: further shrinkage is prevented by the trapped species, which would be compressed further by any shrinking. Thus, if we manage to produce a system with all droplets of size $< R^*$, then we end up with many small but finite-size droplets of radius $R_s$.
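+We can locate the stable radius numerically. The following sketch uses toy constants matching the figure, $\dot{R} \propto (1 - 3/R + 1.5/R^3)/R$ (these coefficients are illustrative, not physical), and bisects for the sign change:

```python
def R_dot(R):
    # Toy constants: supersaturation 1, sigma-term 3/R, trapped-species term 1.5/R^3
    return (1.0 - 3.0 / R + 1.5 / R**3) / R

# R_dot > 0 at R = 0.6 and R_dot < 0 at R = 1.5, so bisect for the crossing R_s
lo, hi = 0.6, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if R_dot(mid) > 0:
        lo = mid
    else:
        hi = mid
R_s = 0.5 * (lo + hi)
print(R_s)  # about 0.83 with these constants

# Stability: perturbations relax back towards R_s
assert R_dot(R_s - 0.05) > 0 and R_dot(R_s + 0.05) < 0
```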
+
+\section{Model H}
+Model B was purely diffusive, and the only way $\phi$ can change is by diffusion. Often, in real life, fluid can flow as well. If the fluid has a velocity $\mathbf{v}$, then our equation is now
+\[
+ \dot{\phi} + \mathbf{v} \cdot \nabla \phi = - \nabla \cdot \mathbf{J}.
+\]
+The $\mathbf{v} \cdot \nabla \phi$ is called the \term{advection term}. Our current $\mathbf{J}$ is the same as before, with
+\[
+ \mathbf{J} = -M \frac{\delta F}{\delta \phi} + \sqrt{2k_B T M} \boldsymbol\Lambda.
+\]
+We also need an evolution equation for $\mathbf{v}$, which will be the Navier--Stokes equation with some noise term. We assume flow is incompressible, so $\nabla \cdot \mathbf{v} = 0$. We then have the Cauchy equation with stress tensor $\Sigma^{\mathrm{TOT}}$, given by
+\[
+ \rho(\dot{\mathbf{v}} + \mathbf{v} \cdot \nabla \mathbf{v}) = \nabla \cdot \Sigma^{\mathrm{TOT}} + \text{body forces}.
+\]
+We will assume there is no body force. This is essentially the momentum conservation equation, where $- \Sigma^{\mathrm{TOT}}$ is the momentum flux tensor.
+
+Of course, this description is useless if we don't know what $\Sigma^{\mathrm{TOT}}$ looks like. It is a sum of four contributions:
+\[
+ \Sigma^{\mathrm{TOT}} = \Sigma^p + \Sigma^\eta + \Sigma^\phi + \Sigma^N.
+\]
+\begin{itemize}
+ \item The $\Sigma^p$ term is the \term{pressure} term, given by
+ \[
+ \Sigma_{ij}^p = - P \delta_{ij}.
+ \]
+ We should think of this $P$ as a Lagrange multiplier for incompressibility.
+ \item The $\Sigma^{\eta}$ term is the \term{viscous stress}, which we can write as
+ \[
+ \Sigma_{ij}^\eta = \eta (\nabla_i v_j + \nabla_j v_i).
+ \]
+ For simplicity, we assume we have a constant viscosity $\eta$. In general, it could be a function of the composition.
+ \item The $\Sigma^{\phi}$ term is the \term{$\phi$-stress}, given by
+ \[
+ \Sigma_{ij}^\phi = - \Pi \delta_{ij} - \kappa (\nabla_i \phi) (\nabla_j \phi),\quad \Pi = \phi \mu - \F.
+ \]
+ This is engineered so that
+ \[
+ \nabla \cdot \Sigma^\phi = - \phi \nabla \mu.
+ \]
+ This says a non-constant chemical potential causes things to move to even that out.
+ \item The final term is a \term{noise stress} with
+ \[
+ \bra \Sigma_{ij}^N (\mathbf{r}, t) \Sigma_{k\ell}^N (\mathbf{r}', t')\ket = 2k_B T \eta \left(\delta_{ik} \delta_{j\ell} + \delta_{i\ell} \delta_{jk} - \frac{2}{3} \delta_{ij} \delta_{k\ell}\right) \delta(\mathbf{r} - \mathbf{r}') \delta(t - t').
+ \]
+ The last term $\delta_{ij} \delta_{k\ell}$ is there to ensure the noise does not cause any compression. This is a white noise term whose variance is determined by the fluctuation dissipation theorem.
+\end{itemize}
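+The ``engineered'' identity $\nabla \cdot \Sigma^\phi = -\phi \nabla \mu$ can be checked explicitly in one dimension. Here is a sketch (assuming \texttt{sympy}) using the quartic bulk free energy density together with the square-gradient term:

```python
import sympy as sp

x, kappa, a, b = sp.symbols('x kappa a b')
phi = sp.Function('phi')(x)

# 1D free energy density: quartic bulk term plus square-gradient term
f = a/2 * phi**2 + b/4 * phi**4
F_density = f + kappa/2 * sp.diff(phi, x)**2

# Chemical potential mu = delta F / delta phi = f'(phi) - kappa phi''
mu = sp.diff(f, phi) - kappa * sp.diff(phi, x, 2)

# 1D phi-stress: Sigma = -Pi - kappa (phi')^2, with Pi = phi mu - F_density
Pi = phi * mu - F_density
Sigma = -Pi - kappa * sp.diff(phi, x)**2

# The identity d Sigma / dx = -phi d mu / dx holds for any profile phi(x)
assert sp.simplify(sp.diff(Sigma, x) + phi * sp.diff(mu, x)) == 0
print("identity verified")
```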
+We can then compute
+\begin{align*}
+ \nabla \cdot \Sigma^{\mathrm{TOT}} &= \nabla \cdot \Sigma^p + \nabla \cdot \Sigma^\eta + \nabla \cdot \Sigma^\phi + \nabla \cdot \Sigma^N\\
+ &= - \nabla P + \eta \nabla^2 \mathbf{v} - \phi \nabla \mu + \nabla \cdot \Sigma^N.
+\end{align*}
+Hence Model H has equations
+\begin{align*}
+ \dot{\phi} + \mathbf{v} \cdot \nabla \phi &=- \nabla \cdot \mathbf{J}\\
+ \mathbf{J} &= -M \nabla \mu + \sqrt{2k_B T M} \boldsymbol\Lambda\\
+ \nabla \cdot \mathbf{v} &= 0\\
+ \rho (\dot{\mathbf{v}} + \mathbf{v} \cdot \nabla \mathbf{v}) &= \eta \nabla^2 \mathbf{v} - \nabla P - \phi \nabla \mu + \nabla \cdot \Sigma^N.
+\end{align*}
+Compared to Model B, we have the following new features:
+\begin{enumerate}
+ \item $-\phi\nabla \mu$ drives deterministic fluid flow.
+ \item $\Sigma^N$ drives a \emph{random} flow.
+ \item Fluid flow advects $\phi$.
+\end{enumerate}
+How does this affect the coarsening dynamics? We will see that (i) and (iii) give us enhanced coarsening of bicontinuous states. However, this does not have any effect on isolated/disconnected droplet states, since in a spherically symmetric setting, $\phi \nabla \mu$ and $\nabla P$ will be radial, and so $\nabla \cdot \mathbf{v} = 0$ implies $\mathbf{v} = 0$. In other words, for $\phi \nabla \mu$ to drive a flow, we must have some symmetry breaking.
+
+Of course, this symmetry breaking is provided by the noise term in (ii). The result is that the droplets will undergo Brownian motion with $\bra r^2\ket \sim Dt$, where
+\[
+ D = \frac{k_B T}{4\pi \eta R}
+\]
+is the diffusion constant.
+
+If we think about the Ostwald process, even if we manage to stop the small droplets from shrinking, they may collide and combine to form larger droplets. This forms a new channel for instability, and separate measures are needed to prevent this. For example, we can put charged surfactants that prevent collisions.
+
+We can roughly estimate the time scale of this process. We again assume there is one length scale $\bar{R}(t)$, which determines the size and separation of droplets. We can then calculate the collision time
+\[
+ \Delta t \simeq \frac{\bar{R}^2}{D(\bar{R})} \sim \bar{R}^3 \frac{\eta}{k_B T}.
+\]
+Each collision doubles the volume, and so $\bar{R} \to 2^{1/3} \bar{R}$. Taking the logarithm, we have
+\[
+ \Delta \log \bar{R} \sim \frac{\log 2}{3}\text{ in time }\Delta t.
+\]
+So we crudely have
+\[
+ \frac{\Delta \log \bar{R}}{\Delta t} \sim \frac{\log 2}{3} \frac{k_B T}{\eta \bar{R}^3}.
+\]
+If we read this as a differential equation, then we get
+\[
+ \frac{\d \log \bar{R}}{\d t} = \frac{1}{\bar{R}} \dot{\bar{R}} \sim \frac{k_B T}{\eta \bar{R}^3}.
+\]
+So we find that
+\[
+ \bar{R}^2 \dot{\bar{R}} \sim \frac{k_B T}{\eta}.
+\]
+So
+\[
+ \bar{R}(t) \sim \left(\frac{k_B T}{\eta} t\right)^{1/3}.
+\]
+This is \term{diffusion limited coalescence}.
+
+Recall that in the Ostwald process, droplets grew by diffusion of molecules, and we had the same power of $t$. However, the coefficient was different, with
+\[
+ \bar{R} \sim \left(\frac{M \sigma}{\phi_B^2} t\right)^{1/3}.
+\]
+It makes sense that they have the same scaling law, because ultimately, we are still doing diffusion on different scales.
+
+\subsubsection*{Bicontinuous states}
+We now see what the fluid flow does to bicontinuous states.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (3, 3);
+ \draw [fill=mblue, fill opacity=0.3] plot [smooth] coordinates {(0.7, 0) (0.5, 0.9) (0.8, 1.5) (1, 2.5) (3, 2.3)} -- (3, 3) -- (0, 3) -- (0, 0);
+
+ \draw [fill=mblue, fill opacity=0.3] (1.4, 0.7) circle [radius=0.3];
+
+ \draw [fill=mblue, fill opacity=0.3] plot[smooth] coordinates {(3, 1.8) (1.8, 2) (1.7, 1.5) (3, 1.2)};
+
+ \draw [fill=mblue, fill opacity=0.3] plot[smooth] coordinates {(3, 0.5) (2.3, 0.8) (2.2, 0.2) (2.6, 0)} -- (3, 0);
+
+ \draw [-latex'] (1.7, 0.7) -- (2.21, 0.6) node [pos=0.5, above] {\small $L$};
+ \draw [-latex'] (2.21, 0.6) -- (1.7, 0.7);
+ \end{tikzpicture}
+\end{center}
+
+Again assume we have a single length scale $L(t)$, given by the domain size. As time goes on, we expect $L(t)$ to increase with time.
+
+A significant factor in the evolution of the bicontinuous phase is the Laplace pressure, which is ultimately due to the curvature $K \sim 1/L$. Since there is only one length scale, we must have
+\[
+ \dot{L} \sim v.
+\]
+The Laplace pressure then scales as $\sim \frac{\sigma}{L}$.
+
+The noise terms $\Sigma^N$ matter at early times only. At late times, the domains grow deterministically from random initial conditions. The key question is how $L(t)$ scales with time. The equation of motion of the flow is
+\[
+ \rho(\dot{\mathbf{v}} + \mathbf{v} \cdot \nabla \mathbf{v}) = \eta \nabla^2 \mathbf{v} - \nabla P - \phi \nabla \mu,
+\]
+We make single length scale approximations as before, so that $v \sim \dot{L}$ and $\nabla \sim L^{-1}$. Then we have an equation of the form
+\[
+ \rho \ddot{L} + \rho \frac{\dot{L}^2}{L} \sim \eta \frac{\dot{L}}{L^2} + \text{Lagrange multiplier} + \frac{\sigma}{L^2},\tag{$*$}
+\]
+where we recall that at curved interfaces, $\mu \sim \pm \frac{\sigma}{R}$. Here we have a single variable $L(t)$, and three dimensionful parameters $\rho, \eta, \sigma$.
+%Note that $\sigma$ depends on the coefficients $a, b, \kappa$ in $F$, but these $a, b, \kappa$ do not enter separately. They only do so via the combination $\sigma = \left(-\frac{8 \kappa a^3}{9b^2}\right)^{1/2}$, and not via the interfacial thickness $\xi = \left(-\frac{2\kappa}{a}\right)^{1/2}$. This is not surprising, since the interfacial thickness is very small relative to the length scale $L(t)$.
+We can do some dimensional analysis. In $d$ dimensions, we have
+\[
+ \rho = ML^{-d},\quad \eta = M L^{2 - d} T^{-1},\quad \sigma = ML^{3 - d} T^{-2}.
+\]
+We want to find combinations of these with the dimensions of length and time, and in three dimensions, we have
+\[
+ L_0 = \frac{\eta^2}{\rho \sigma},\quad t_0 = \frac{\eta^3}{\rho \sigma^2}.
+\]
+One can check that these are the only combinations with units $L, T$. So we must have
+\[
+ \frac{L(t)}{L_0} = f \left(\frac{t}{t_0}\right).
+\]
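+The uniqueness claim is elementary linear algebra. This sketch (assuming \texttt{numpy}) solves for the exponents $(x, y, z)$ in $\eta^x \rho^y \sigma^z$ over the dimension basis $(M, L, T)$, in $d = 3$:

```python
import numpy as np

# Columns are the (M, L, T)-exponents of eta, rho, sigma in d = 3:
# eta = M L^-1 T^-1, rho = M L^-3, sigma = M T^-2
A = np.array([[1.0, 1.0, 1.0],     # mass exponents
              [-1.0, -3.0, 0.0],   # length exponents
              [-1.0, 0.0, -2.0]])  # time exponents

# det(A) != 0, so each target dimension has a unique exponent vector
assert round(np.linalg.det(A)) == 1

x_L = np.linalg.solve(A, [0.0, 1.0, 0.0])  # eta^x rho^y sigma^z has dimension L
x_T = np.linalg.solve(A, [0.0, 0.0, 1.0])  # eta^x rho^y sigma^z has dimension T

print(x_L)  # (2, -1, -1), i.e. L0 = eta^2 / (rho sigma)
print(x_T)  # (3, -1, -2), i.e. t0 = eta^3 / (rho sigma^2)
```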
+We now substitute this into the equation $(*)$, and get the non-dimensionalized equation
+\[
+ \alpha f'' + \beta f'^2/f = \gamma \frac{f'}{f^2} + \frac{\delta}{f^2},
+\]
+with $\alpha, \beta, \gamma, \delta = O(1)$ dimensionless numbers.
+
+If we think about this, we see there are two regimes,
+\begin{enumerate}
+ \item The LHS (inertia) is negligible at small $t/t_0$ (or small $f$). Then we get
+ \[
+ \frac{\gamma f'}{f^2} + \frac{\delta}{f^2} = 0,
+ \]
+ so $f'$ is a constant, and so $L$ grows linearly with $t$. Putting all the appropriate constants in, we get
+ \[
+ L \propto \frac{\sigma}{\eta} t.
+ \]
+ This is called the \term{viscous hydrodynamic regime}, \term{VH}.
+
+ \item For large $f$, we assume we have a power law $f(x) \sim x^y$, where $y > 0$ (or else $f$ would not be large). Then
+ \[
+ \bar{\alpha} x^{y - 2} + \bar{\beta} x^{y - 2} = \bar{\gamma} x^{-y - 1} + \bar{\delta} x^{-2y}.
+ \]
+ It turns out that at large $x$, the $x^{-y - 1}$ term is negligible, scaling-wise. So we have $y - 2 = -2y$, or equivalently, $y = \frac{2}{3}$. So
+ \[
+ \frac{L}{L_0} \sim \left(\frac{t}{t_0}\right)^{2/3}.
+ \]
+ Putting back our factors, we have
+ \[
+ L \sim \left(\frac{\sigma}{\rho}\right)^{1/3} t^{2/3}.
+ \]
+ This is called the \term{inertial hydrodynamic regime}, \term{IH}. In this regime, interfacial energy is converted into kinetic energy, and then only ``later'' dissipated by $\eta$.
+\end{enumerate}
+Essentially, in the first regime, the system is overdamped, and the viscous term balances the interfacial driving. This lasts until the inertial terms on the left-hand side become large enough to take over. In practice, it is difficult to reach the inertial regime in a lab, since the time required is often $\sim 10^4\, t_0$.
+
+%The outcome for $d = 3$ is
+%\begin{center}
+% \begin{tikzpicture}
+% \draw [->] (0, 0) -- (4, 0) node [right] {$\log (t/t_0)$};
+% \draw [->] (0, 0) -- (0, 3) node [above] {$\log (L/K_0)$};
+%
+%
+% \end{tikzpicture}
+%\end{center}
+%The crossover is at $\frac{t^*}{t_0} \sim O(1)$.
+
+\subsubsection*{Droplet vs bicontinuous}
+In general, when do we expect a bicontinuous phase and when do we expect a droplet phase?
+%Recall we had the phase diagram
+%\begin{center}
+% \begin{tikzpicture}
+% \draw (-2, 3) node [above] {$a$} -- (-2, 0) node [below] {$-1$} -- (2, 0) node [below] {$1$} node [right] {$\bar \phi$} -- (2, 3);
+%
+% \draw [domain=2.5:0.2, samples=40] plot [smooth] ({sqrt (2.5 - \x)}, \x);
+% \draw [domain=2.5:0.2, samples=40] plot [smooth] ({-sqrt (2.5 - \x)}, \x);
+%
+% \draw [dashed, domain=0.2:2.5] plot [smooth] ({sqrt ((2.5 - \x)/3)}, \x);
+% \draw [dashed, domain=0.2:2.5] plot [smooth] ({-sqrt ((2.5 - \x)/3)}, \x);
+% \draw (-2.05, 2.5) -- (-1.95, 2.5);
+% \end{tikzpicture}
+%\end{center}
+%Suppose we start with a global composition $\bar{\phi}$ at large $a$, and quench the system so that we are now in the spinodal region. The conservation laws require
+%\begin{align*}
+% - \phi_B V_a + \phi_B V_2 &= \bar\phi V\\
+% V_1 + V_2 &= V
+%\end{align*}
+%
+%We can then define the \term{phase volume}
+%\[
+% \psi = \frac{V_2}{ V},
+%\]
+%which we can find by
+%\[
+% \psi = \frac{1}{2} \left(1 + \frac{\bar{\phi}}{\phi_B}\right).
+%\]
+%When this is sufficiently far away from $\frac{1}{2}$, we have droplets, and when this is close to $\frac{1}{2}$, we have a bicontinuous medium.
+
+\begin{itemize}
+ \item In three dimensions, the rule of thumb is that if $\psi \sim 0.4 \text{ to }0.6$, then we always get a bicontinuous medium. If $\psi < 0.3$ or $\psi > 0.7$, then we always get droplets. In between these two regions, we initially have a bicontinuous medium, which essentially de-percolates into droplets.
+ \item In two dimensions, things are different. In the fully symmetric case, with a constant $\eta$ throughout and a strictly symmetric $F$ (i.e.\ $F(\phi) = F(-\phi)$), the \emph{only} case where we have a bicontinuous phase is on the $\psi = \frac{1}{2}$ line.
+\end{itemize}
+
+%Note that for droplets, we have $\psi \sim \frac{R^3}{L^3}$, where $R$ is the droplet size and $L$ is the droplet separation. Since we have a fixed $\psi$, it follows that $R \sim L$.
+
+
+%We can get a ``mixed'' Ostwald and fluid flow, and is complicated and difficult to understand.
+
+\section{Liquid crystals hydrodynamics}
+\subsection{Liquid crystal models}
+We finally turn to the case of liquid crystals. In this case, our order parameter no longer takes a scalar value, and interesting topological phenomena can happen. We first write down the theory in detail, and then discuss coarsening behaviour of liquid crystals. We will describe the coarsening purely via geometric means, and the details of the model are not exactly that important.
+
+Recall that there are two types of liquid crystals:
+\begin{enumerate}
+ \item Polar liquid crystals, where the molecules have ``direction'', and the order parameter is $\mathbf{p}$, a vector, which is orientational but not positional.
+ \item Nematic liquid crystals, where the molecules are like a cylinder, and the order parameter is a tensor $Q$.
+\end{enumerate}
+
+We will do the polar case only, and just quote the answers for the nematic case.
+
+As before, we have to start with a free energy
+\[
+ F = \int \left\{\frac{a}{2} |\mathbf{p}|^2 + \frac{b}{4} |\mathbf{p}|^4 + \frac{\kappa}{2} (\nabla_i p_j)(\nabla_i p_j)\right\}\;\d \mathbf{r} \equiv \int \F \;\d \mathbf{r}.
+\]
+The first two terms are needed for the isotropic-polar transition to occur. Note that
+\begin{enumerate}
+ \item $F[\mathbf{p}] = F[-\mathbf{p}]$, so we have no cubic term.
+ \item A linear term would correspond to the presence of an external field, e.g.\ magnetic field.
+ \item The $\kappa$ term penalizes splay, twist and bend, and this term penalizes them roughly equally. This is called the \term{one elastic constant approximation}. If we want to be more general, we need something like
+ \[
+ \frac{\kappa_1}{2} |\nabla \cdot \mathbf{p}|^2 + \frac{\kappa_2}{2} |\hat{\mathbf{p}} \cdot \nabla \wedge \mathbf{p}|^2 + \frac{\kappa_3}{2} |\hat{\mathbf{p}} \wedge (\nabla \wedge \mathbf{p})|^{2}.
+ \]
+\end{enumerate}
+Here $\mathbf{p}$ is not conserved, so $\dot{\mathbf{p}}$ is not of the form $-\nabla \cdot \mathbf{J}$. Instead, (without flow) we have
+\[
+ \dot{\mathbf{p}} = - \Gamma \mathbf{h},\quad \mathbf{h} = \frac{\delta F}{\delta \mathbf{p}(\mathbf{r})},
+\]
+where $\mathbf{h}(\mathbf{r})$ is called the \term{molecular field}, and $\Gamma$ is a constant, called the \term{angular mobility}.
+
+We now want to generalize this to the case when there is a field flow. We can just write this as
+\[
+ \frac{\D \mathbf{p}}{\D t} = - \Gamma \mathbf{h},
+\]
+where $\D$ is some sort of comoving derivative. For the scalar field, we had
+\[
+ \frac{\D \phi}{\D t} = \dot{\phi} + \mathbf{v} \cdot \nabla \phi.
+\]
+Note that the advective term is trilinear, being first order in $\mathbf{v}$, $\nabla$ and $\phi$. For $\mathbf{p}$, there is certainly something like this going on. If we have a translation, then $\mathbf{p}$ gets incremented by
+\[
+ \Delta \mathbf{p} = \mathbf{v} \cdot \nabla \mathbf{p}\;\Delta t,
+\]
+as for a scalar.
+
+There is at least one more thing we should think about, namely if $\mathbf{v}$ is rotational, then we would expect this to rotate $\mathbf{p}$ as well. We have the corotational term
+\[
+ \Delta \mathbf{p} = \boldsymbol\omega \wedge \mathbf{p}\;\Delta t,
+\]
+where $\boldsymbol\omega$ is the angular velocity of the fluid, given by
+\[
+ \omega_i = \frac{1}{2} \varepsilon_{ijk} \Omega_{jk},\quad \Omega_{jk} = \frac{1}{2} (\nabla_j v_k - \nabla_k v_j).
+\]
+This part must be present. Indeed, if we rotate the whole system as a rigid body, with $\mathbf{v}(\mathbf{r}) = \boldsymbol\omega \times \mathbf{r}$, then we must have $\dot{\mathbf{p}} = \boldsymbol\omega \wedge \mathbf{p}$.
+
+It turns out in general, there is one more contribution to the advection, given by
+\[
+ \Delta \mathbf{p} = - \xi \D \cdot \mathbf{p} \Delta t
+\]
+with $\D_{ij} = \frac{1}{2} (\nabla_i v_j + \nabla_j v_i)$ and $\xi$ a parameter. This is the irrotational part of the derivative. The reason is that in a general flow, $\mathbf{p}$ needn't simply rotate with $\boldsymbol\omega$. Instead, it typically aligns along streamlines. In total, we have
+\[
+ \frac{\D \mathbf{p}}{\D t} = (\partial_t + \mathbf{v} \cdot \nabla) \mathbf{p} + \Omega \cdot \mathbf{p} - \xi \D \cdot \mathbf{p}.
+\]
+The parameter $\xi$ is a molecular parameter, which depends on the liquid crystal. The $\xi = 1$ case is called \term{no slip}, and the $\xi = 0$ case is called \term{full slip}.
+
+With this understanding, we can write the hydrodynamic equation as
+\[
+ \frac{\D \mathbf{p}}{\D t} = - \Gamma \mathbf{h},\quad \mathbf{h} = \frac{\delta F}{\delta \mathbf{p}}.
+\]
+We next need an equation of motion for $\mathbf{v}$. We can simply write
+\[
+ \rho(\partial_t +\mathbf{v} \cdot \nabla) \mathbf{v} = \eta \nabla^2 \mathbf{v} - \nabla P + \nabla \cdot \Sigma^p,
+\]
+where $\Sigma^p$ is a stress tensor coming from the order parameter.
+
+To figure out what $\Sigma^p$ should be, we can consider what happens when we have an ``advective'' elastic distortion. In this case, we have $\frac{\D \mathbf{p}}{\D t} = 0$, so we have
+\[
+ \dot{\mathbf{p}} = - \mathbf{v} \cdot \nabla \mathbf{p} - \Omega \cdot \mathbf{p} + \xi \D \cdot \mathbf{p}.
+\]
+The free energy change is then
+\[
+ \delta F = \int \frac{\delta F}{\delta \mathbf{p}} \cdot \dot{\mathbf{p}}\; \Delta t \;\d \mathbf{r} = \Delta t \int \mathbf{h} \cdot \dot{\mathbf{p}} \;\d \mathbf{r}.
+\]
+On the other hand, the free energy change must also be given by
+\[
+ \delta F = \int \Sigma_{ij}^{\mathbf{p}} \nabla_i u_j(\mathbf{r}) \;\d \mathbf{r},
+\]
+the product of the stress and strain tensors. By matching these two expressions, we see that we can write
+\[
+ \Sigma_{ij}^p = \Sigma_{ij}^{(1)} + \Sigma_{ij}^{(2)} + \Sigma_{ij}^{(3)},
+\]
+where
+\[
+ \nabla_i \Sigma_{ij}^{(1)} = - p_k \nabla_j h_k,\quad
+ \Sigma_{ij}^{(2)} = \frac{1}{2} (p_i h_j - p_j h_i),\quad
+ \Sigma_{ij}^{(3)} = \frac{\xi}{2} (p_i h_j + p_j h_i).
+\]
+
+Analogous results hold for a nematic liquid crystal, though the derivation is a significant pain. We have
+\[
+ \frac{\D Q}{\D t} = - \Gamma H,\quad H_{ij} = \frac{\delta F}{\delta Q_{ij}} - \left( \Tr\frac{\delta F}{\delta Q}\right) \frac{\delta_{ij}}{d}.
+\]
+The second term in $H$ is required to ensure $Q$ remains traceless.
+
+The total derivative is given by
+\[
+ \frac{\D Q}{\D t} = (\partial_t + \mathbf{v} \cdot \nabla) Q + (\Omega \cdot Q - Q \cdot \Omega) + \xi (\D \cdot Q + Q \cdot \D) - 2\xi \left(Q + \frac{\mathbf{1}}{d}\right) \Tr(Q \cdot \nabla \mathbf{v}).
+\]
+The terms are the usual advection term, rotation, alignment/slip and tracelessness terms respectively. The Navier--Stokes equation involves a stress term
+\[
+ \Sigma^Q = \Sigma^{Q, 1} + \Sigma^{Q, 2},
+\]
+where
+\begin{align*}
+ \nabla_k \Sigma^{Q, 1}_{k, \ell} &= - Q_{ij} \nabla_\ell H_{ij}\\
+ \Sigma^{Q, 2} &= Q \cdot H - H \cdot Q - \xi \left(\frac{2}{3} H + 2 \widehat{QH} - 2 Q \Tr (Q H)\right),
+\end{align*}
+with the hat denoting the traceless symmetric part. The rest of the Navier--Stokes equations is the same.
+
+\subsection{Coarsening dynamics for nematics}
+We will discuss the coarsening dynamics for nematic liquid crystals, and indicate how polar liquid crystals are different when appropriate. As before, we begin with a completely disordered phase with $Q = 0$, and then quench the system by dropping the temperature quickly. The liquid crystals then want to arrange themselves. Afterwards, we locally have
+\[
+ Q =
+ \begin{pmatrix}
+ \lambda & 0 & 0\\
+ 0 & - \lambda/2 & 0\\
+ 0 & 0 & -\lambda/2
+ \end{pmatrix}
+\]
+with free energy $f(\lambda)$. If the quench is deep enough, we have spinodal-like instability, and we quickly get locally ordered. Coordinate-independently, we can write
+\[
+ Q = \hat{\lambda} \left(n_i n_j - \frac{1}{3} \delta_{ij}\right).
+\]
+Since all this ordering is done locally, the principal axis $\mathbf{n}$ can vary over space. There is then a slower process that sorts out global ordering, driven by the elastic part of the free energy.
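+As a small check of this coordinate-free form (a sketch assuming \texttt{numpy}): for any unit director $\mathbf{n}$, the tensor $Q = \hat{\lambda}(n_i n_j - \frac{1}{3}\delta_{ij})$ is symmetric and traceless, with eigenvalue $\frac{2}{3}\hat{\lambda}$ along $\mathbf{n}$ and a degenerate pair $-\frac{1}{3}\hat{\lambda}$ transverse to it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)   # random unit director
lam = 0.9                # lambda-hat, the degree of local ordering

Q = lam * (np.outer(n, n) - np.eye(3) / 3.0)

assert abs(np.trace(Q)) < 1e-12   # traceless
assert np.allclose(Q, Q.T)        # symmetric

# Eigenvalues: 2 lam / 3 (along n) and -lam / 3 (twice, transverse to n)
evals = np.sort(np.linalg.eigvalsh(Q))
assert np.allclose(evals, [-lam / 3, -lam / 3, 2 * lam / 3])
print(evals)
```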
+
+%we previously saw that there is still an energy barrier to overcome, and so we have to deal with nucleation.
+%
+%For a deep quench, we no longer have an energy barrier, and we have spinodal-like local instability. We have
+%\[
+% \frac{\D Q}{\D t} = - \Gamma H,\quad H = \frac{\partial f}{\partial Q} - \kappa \nabla^2 Q.
+%\]
+%Going into Fourier space, the fastest growth occur at $q \to 0$, because of the second term which only slows us down at large $q$. This has a growth rate $r(0)$. So on a time scale of $r(0)^{-1}$, we know that $Q \to \diag(\lambda, -\lambda/2, -\lambda/2)$ almost everywhere. In other words,
+%\[
+% Q = \hat{\lambda} \left(n_i n_j - \frac{1}{3} \delta_{ij}\right)
+%\]
+%However, in general, we expect the principal axes to depend on the point, since these orderings happen at different points independently. So we have $\mathbf{n} = \mathbf{n}(\mathbf{r})$. So we have a quick process that sorts out the local ordering, then there is a slow process that sorts out the global ordering. This is driven by the elastic part of the free energy. Recall that
+%\[
+% \F = a \Tr(Q^2) + b (\Tr Q^2)^2 + b (\Tr Q^4) + c (\Tr Q^2) + \frac{\kappa}{2} (\nabla \cdot Q)^2,
+%\]
+%and the elastic term is the last one. So
+%\[
+% \frac{\D Q}{\D t} = - \Gamma \frac{\delta F_{el}}{\delta Q} \sim - \nabla \nabla(\text{something}).
+%\]
+% like conserved quantity % goldstone's theorem, spontaneous symmetry breaking
+
+
+Compare this with Model B/H: at early times, $\phi$ locally settles to $\pm \phi_B$, but we get domain walls between the $\pm \phi_B$ phases.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\phi$};
+
+ \draw [mblue, thick, domain=-3:3] plot [smooth] (\x, {tanh(1.5*(\x + 1))});
+
+ \draw [dashed] (-3, 1) -- (3, 1);
+ \draw [dashed] (-3, -1) -- (3, -1);
+
+ \end{tikzpicture}
+\end{center}
+The late time dynamics is governed by the slow coarsening of these domain walls. The key property is reduced dimensionality: the domain wall is $2$-dimensional while space is $3$-dimensional, and it connects two different ground states (i.e.\ minima of $F$).
+
+The domain wall can be moved around, but there is no local change that allows us to remove it. They can only be removed by ``collision'' of domain walls.
+
+Analogous structures are present for nematic liquid crystals. The discussion will largely involve us drawing pictures. For now, we will not do this completely rigorously, and simply rely on our intuitive understanding of when defects can or cannot arise. We later formalize these notions in terms of homotopy groups.
+
+We first do this in two dimensions, where defects have dimension $< 2$. There can be no line defects like a domain wall. The reason is that if we try to construct a domain wall
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -2) rectangle (2, 2);
+
+ \foreach \y in {-1.5,-1,-0.5,0,0.5,1,1.5} {
+ \foreach \x in {-1.25, -0.75, -0.25} {
+ \draw [mblue, semithick] (\x - 0.1, \y - 0.1) -- (\x + 0.1, \y + 0.1);
+ \draw [mblue, semithick] (-\x + 0.1, \y - 0.1) -- (-\x - 0.1, \y + 0.1);
+ }
+ }
+ \end{tikzpicture}
+\end{center}
+then this can relax locally to become
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -2) rectangle (2, 2);
+
+ \foreach \y in {-1.5,-1,-0.5,0,0.5,1,1.5} {
+ \foreach \x in {-1.25, -0.75, -0.25} {
+ \draw [mblue, semithick] (\x - 0.1, \y + \x * 0.1 / 1.25) -- (\x + 0.1, \y - 0.1 * \x / 1.25);
+ \draw [mblue, semithick] (-\x - 0.1, \y - \x * 0.1 / 1.25) -- (-\x + 0.1, \y + 0.1 * \x / 1.25);
+ }
+ }
+ \end{tikzpicture}
+\end{center}
+
+On the other hand, we can have \term{point defects}, which are $0$-dimensional. Two basic ones are as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \draw [mblue, semithick, loosely dashed] (-1.732, -0.8) edge [bend right] (-0.2, 2);
+ \draw [mblue, semithick, loosely dashed] (-1.732, -0.3) edge [bend right] (-0.7, 2);
+ \draw [mblue, semithick, loosely dashed, rotate=120] (-1.732, -0.8) edge [bend right] (-0.2, 2);
+ \draw [mblue, semithick, loosely dashed, rotate=120] (-1.732, -0.3) edge [bend right] (-0.7, 2);
+ \draw [mblue, semithick, loosely dashed, rotate=240] (-1.732, -0.8) edge [bend right] (-0.2, 2);
+ \draw [mblue, semithick, loosely dashed, rotate=240] (-1.732, -0.3) edge [bend right] (-0.7, 2);
+
+ \node [below] at (0, -1.8) {$q = -\frac{1}{2}$};
+ \begin{scope}[shift={(6, 0)}]
+ \node [circ] {};
+
+ \draw [mblue, semithick, loosely dashed] (0, 0) -- (0, 2);
+
+ \draw [mblue, semithick, loosely dashed] (-0.5, 2) -- (-0.5, 0) arc (180:360:0.5) -- (0.5, 2);
+ \draw [mblue, semithick, loosely dashed] (-1, 2) -- (-1, 0) arc (180:360:1) -- (1, 2);
+ \draw [mblue, semithick, loosely dashed] (-1.5, 2) -- (-1.5, 0) arc (180:360:1.5) -- (1.5, 2);
+ \node [below] at (0, -1.8) {$q = +\frac{1}{2}$};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+
+The charge $q$ can be described as follows --- we start at a point near the defect, and go around the defect once. When doing so, the direction of the order parameter turns. In the $q = -\frac{1}{2}$ case, after going around the defect once, the order parameter has made a half turn in the opposite sense to how we moved around the defect. In the $q = +\frac{1}{2}$ case, it has made a half turn in the same sense.
+
+We see that $q = \pm \frac{1}{2}$ are the smallest possible values of the \term{topological charge}, so $\frac{1}{2}$ is the \term{quantum} of charge. In general, we can have defects of other charges. For example, here are two $q = +1$ defects:
+
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+
+ \foreach \x in {0,25,...,335}{
+ \draw [loosely dashed,rotate=\x] (0.4, 0) -- (2, 0);
+ }
+ \node at (0, -2.5) {hedgehog};
+ \begin{scope}[shift={(6, 0)}]
+ \node [circ] {};
+ \draw [loosely dashed] circle [radius=0.5];
+ \draw [loosely dashed] circle [radius=1];
+ \draw [loosely dashed] circle [radius=1.5];
+ \node at (0, -2.5) {vortex};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+
+Both of these are $q = +1$ defects, and they can be continuously deformed into each other, simply by rotating each bar by $90^\circ$. For polar liquid crystals, the quantum of a charge is $1$.
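The winding-number description of $q$ lends itself to a quick numerical check. The sketch below is not from the lectures (the function name and sampling scheme are our own): it accumulates the rotation of the director around a loop, wrapping each increment modulo $\pi$ because a nematic director is headless.

```python
import numpy as np

def nematic_charge(q, n_samples=400):
    """Estimate the topological charge of the director field
    theta(phi) = q*phi by accumulating its rotation around a loop."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_samples)
    theta = q * phi                            # director angle along the loop
    dtheta = np.diff(theta)
    # the director is headless: wrap each increment into (-pi/2, pi/2]
    dtheta = (dtheta + np.pi / 2) % np.pi - np.pi / 2
    return dtheta.sum() / (2.0 * np.pi)        # total rotation / full turn

for q in (0.5, -0.5, 1.0):
    print(q, round(nematic_charge(q), 3))
```

Half-integer charges are consistent precisely because of the modulo-$\pi$ wrapping; for a polar order parameter one would wrap modulo $2\pi$ instead, and only integer charges would survive.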
+
+If we have defects of charge greater than $\pm \frac{1}{2}$, then they tend to dissociate into multiple $q = \pm \frac{1}{2}$ defects, for energetic reasons. The elastic energy is given by
+\[
+ F_{\mathrm{el}} = \frac{\kappa}{2} |\nabla \cdot Q|^2 \sim \frac{\kappa}{2} \lambda^2 |(\nabla \cdot \mathbf{n})\mathbf{n} + \mathbf{n} \cdot \nabla \mathbf{n}|^2.
+\]
+If we double the charge, we double the director gradients. Since the energy is quadratic in the gradient, a single defect of charge $2q$ costs roughly four times as much as one of charge $q$, i.e.\ twice as much as two well-separated charge-$q$ defects. In general, topological defects therefore tend to dissociate into smaller $|q|$-values.
+
+To recap, after quenching, at early stages, we locally have
+\[
+ Q \to 2\lambda (\mathbf{n} \mathbf{n} - \frac{1}{2}\mathbf{1}).
+\]
+This $\mathbf{n}(\mathbf{r})$ is random, and tends to vary continuously. However, topological defects are present, which cannot be ironed out locally. All topological defects with $|q| > \frac{1}{2}$ dissociate quickly, and we are left with $q = \pm \frac{1}{2}$ defects floating around.
+
+We then have a late stage process where opposite charges attract and then annihilate. So the system becomes more and more ordered as a nematic. We can estimate the energy density of an isolated defect as
+\[
+ \frac{\tilde{\kappa}}{2} |(\nabla \cdot \mathbf{n}) \mathbf{n} + \mathbf{n} \cdot \nabla \mathbf{n}|^2,
+\]
+where $\tilde{\kappa} = \kappa \lambda^2$. Dimensionally, we have
+\[
+ \nabla \sim \frac{1}{r}.
+\]
+So we have an energy
+\[
+ E \sim \tilde{\kappa} \int \frac{1}{r^2} \;\d \mathbf{r} \simeq \tilde{\kappa} \log \left(\frac{L}{r_0}\right),
+\]
+where $L$ is the mean spacing and $r_0$ is some kind of core radius of the defect. The core radius reflects the fact that as we zoom close enough to the core of the singularity, $\lambda$ is no longer constant and our above energy estimate fails. In fact, $\lambda \to 0$ at the core.
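The logarithmic form of this estimate is easy to verify numerically. The following sketch (with illustrative values for $\tilde{\kappa}$, $r_0$ and $L$, not taken from the course) integrates the energy density $\sim \tilde{\kappa}/r^2$ over the two-dimensional area element $2\pi r \,\d r$:

```python
import numpy as np

kappa, r0, L = 1.0, 0.01, 10.0                 # illustrative values
r = np.logspace(np.log10(r0), np.log10(L), 20000)
f = kappa / r**2 * 2 * np.pi * r               # energy density * area element

# trapezoidal rule; analytically E ~ 2 pi kappa log(L / r0)
E = np.sum((f[:-1] + f[1:]) / 2 * np.diff(r))
print(E, 2 * np.pi * kappa * np.log(L / r0))   # the two agree
```

The $2\pi$ prefactor is absorbed into the $\simeq$ of the estimate in the text; only the $\log(L/r_0)$ dependence matters.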
+
+Recall that the electrostatic energy in two dimensions is given by a similar equation. Thus, this energy is Coulombic, with force $\propto \frac{\tilde{\kappa}}{R}$. Under this force, the defects move with overdamped motion, with the velocity being proportional to the force. So
+\[
+ \dot{R} \sim \frac{1}{R},\quad \dot{L} \propto \frac{1}{L}.
+\]
+So
+\[
+ L(t) \sim t^{1/2}.
+\]
+This is the scaling law for nematic defect coarsening in 2 dimensions.
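The scaling can be read off from a direct integration of $\dot{L} \propto 1/L$. The sketch below (parameter values are arbitrary) Euler-integrates the ODE and fits the late-time effective exponent:

```python
import numpy as np

# overdamped coarsening: dL/dt = c/L, with c ~ kappa_tilde / friction
c, L0, dt, steps = 1.0, 1.0, 1e-3, 200000
L = L0
ts, Ls = [], []
for i in range(1, steps + 1):
    L += dt * c / L
    ts.append(i * dt)
    Ls.append(L)

# effective exponent from a log-log fit over the last decade of times
ts, Ls = np.array(ts), np.array(Ls)
mask = ts > ts[-1] / 10
slope = np.polyfit(np.log(ts[mask]), np.log(Ls[mask]), 1)[0]
print(round(slope, 3))                     # close to 1/2
```

The exact solution $L(t) = \sqrt{L_0^2 + 2ct}$ approaches the pure power law only at late times, which is why the fit is restricted to the last decade.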
+
+\subsection{Topological defects in three dimensions}
+In three dimensions, we also have defects of the above kind lying along a line.
+\begin{center}
+ \tdplotsetmaincoords{74}{60}
+ \begin{tikzpicture}[tdplot_main_coords]
+
+ \foreach \z in {0,1.2,2.4}{
+ \begin{scope}[shift={(0,0,\z)}]
+ \draw [mblue, semithick, loosely dashed] (0, 0, 0) -- (0, 2, 0);
+ \draw [mblue, semithick, loosely dashed] (-0.5, 2, 0) -- (-0.5, 0) arc (180:360:0.5) -- (0.5, 2, 0);
+ \draw [mblue, semithick, loosely dashed] (-1, 2, 0) -- (-1, 0) arc (180:360:1) -- (1, 2, 0);
+ \draw [mblue, semithick, loosely dashed] (-1.5, 2, 0) -- (-1.5, 0) arc (180:360:1.5) -- (1.5, 2, 0);
+ \end{scope}
+ }
+ \draw [mred, thick] (0, 0, -0.8) -- (0, 0, 3.2);
+
+ \end{tikzpicture}
+\end{center}
+For such line defects, everything we said so far carries through --- for defects separated at a distance $R$, the interaction force is $\frac{\tilde{\kappa}}{R}$, and so we similarly have
+\[
+ L(t) \sim t^{1/2}.
+\]
+However, in three dimensions, the $q = \pm \frac{1}{2}$ defects are \emph{the same} topologically. In other words, we can change $+q$ to $-q$ via continuous, local deformations. This involves rotating out into the $z$ direction, which is not available in two dimensions. While it is possible to understand this visually, it is difficult to draw on paper, and it is evident that we should proceed in a more formal manner to ensure we understand exactly how these things work.
+
+To begin, we have a space $\mathcal{M}$ of order parameters. In our case, this is the space of all possible orientations of rods.
+\begin{eg}
+ In the case of a polar liquid crystal in $d$ dimensions, we have $\mathcal{M} = S^{d - 1}$, the $(d - 1)$-dimensional unit sphere.
+\end{eg}
+
+\begin{eg}
+ For nematic liquid crystals in $d$-dimensions, we have $\mathcal{M} = \RP^{d - 1}$, which is obtained from $S^{d - 1}$ by identifying antipodal points.
+\end{eg}
+
+When we discussed the charge of a topological defect, we picked a loop around the singularity and saw what happened when we went around the defect. So we pick a domain $D$ that encloses a defect core, and consider the map $f: D \to \mathcal{M}$ that assigns to each point the order parameter at that point. In our cases, $D$ is a circle $S^1$, and so $f$ is a loop in $\mathcal{M}$.
+
+We say two mappings $f_1, f_2$ are \term{homotopic} if they can be continuously deformed into each other. Defects lie in the same \term{homotopy class} if the maps for all $D$'s enclosing them are homotopic. The \emph{fundamental group} $\pi_1(\mathcal{M})$ is the set of all homotopy classes of maps $S^1 \to \mathcal{M}$. This encodes the set of all possible charges.
+
+Since we call it a fundamental \emph{group}, it had better have a group structure. If we have two defects, we can put them next to each other, and pick a new circle that goes around the two defects. This then gives rise to a new homotopy class $S^1 \to \mathcal{M}$.
+
+More generally, if we consider $(d - n)$-dimensional defects, then we can enclose the defect with a sphere of dimension $n - 1$. The corresponding classes live in the higher homotopy groups $\pi_{n - 1}(\mathcal{M})$.
+
+\begin{eg}
+ Observe that $\RP^1$ is actually just $S^1$ in disguise, and so $\pi_1(\RP^1) = \Z$. The generator of $\pi_1(\RP^1)$ is the charge $\frac{1}{2}$ topological defect.
+\end{eg}
+
+\begin{eg}
+ We can visualize $\RP^2$ as a certain quotient of the disk, namely
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mgreen, opacity=0.1] (1, 0) circle [radius=1];
+ \draw [mblue, ->-=0.54] (0, 0) arc (180:0:1);
+ \draw [mred, ->-=0.54] (2, 0) arc (0:-180:1);
+ \node [circ] {};
+ \node [circ] at (2, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ where we identify the two arcs in the boundary according to the arrow. Observe that the two marked points are in fact the same point under the identification. If we have a path from the first point to the second point, then this would be considered a loop in $\RP^2$, and this is the $q = \frac{1}{2}$ defect.
+
+ Observe that in the two-dimensional case, the $q = \pm \frac{1}{2}$ defects correspond to going along the top arc and the bottom arc from left to right respectively. In $\RP^2$, there is then a homotopy between these two paths going through the disk. So in $\RP^2$, they lie in the same homotopy class.
+
+ In general, it is easy to see that $\pi_1(\RP^2) = \Z/2\Z$, so $q = \frac{1}{2}$ is the unique non-trivial defect.
+\end{eg}
+This is particularly interesting, because two $q = \frac{1}{2}$ defects can merge and disappear! Similarly, what you would expect to be a $q = 1$ defect could locally relax to become trivial.
+
+Observe that in our ``line defects'', the core can actually form a loop instead. We can also have point defects that correspond to elements in $\pi_2(\mathcal{M}) \cong \Z$. It is an exercise to draw some pictures yourself to see how these look.
+
+% some loops
+%
+%If we view this from a distance, this looks like a 3d hedgehog, i.e.\ $\mathbf{n} = \mathbf{r}$ on a sphere. If we shrink the loop to zero, then we get a point defect. This lies in $\pi_2(\mathcal{M})$.
+%
+%If we have polar liquid crystals, then our maps take values in $S^n$, and so we are looking at homotopy groups of spheres.
+%
+% insert examples
+%
+%Since $\pi_1(S^2) = 0$ but $\pi_2(S^2) = \Z$, we get point defects but not line defects.
+
+\section{Active Soft Matter}
+We shall finish the course by thinking a bit about \term{motile} particles. These are particles that are self-propelled. For example, micro-organisms such as bacteria and algae can move by themselves. There are also synthetic microswimmers. For example, we can make a sphere and coat it with gold and platinum on two sides
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \fill [morange, opacity=0.15] (0, 1) arc(90:270:1);
+ \fill [black, opacity=0.1] (0, 1) arc(90:-90:1);
+ \draw circle [radius=1];
+ \draw (0, -1) -- (0, 1);
+ \node [right] at (1, 0) {Pt};
+ \node [left] at (-1, 0) {Au};
+ \end{tikzpicture}
+\end{center}
+We put this in hydrogen peroxide H$_2$O$_2$. Platinum is a catalyst of the decomposition
+\begin{center}
+ 2H$_2$O$_2$ $\to$ 2H$_2$O + O$_2$,
+\end{center}
+and this reaction will cause the swimmer to propel forward in a certain direction. This reaction implies that entropy is constantly being produced, and this cannot be captured adequately by Newtonian or Lagrangian mechanics on a macroscopic scale.
+
+Two key consequences of this are:
+\begin{enumerate}
+ \item There is no Boltzmann distribution.
+ \item The principle of detailed balance, which is a manifestation of time reversal symmetry, no longer holds.
+\end{enumerate}
+
+\begin{eg}
+ Take bacteria in a microfluidic enclosure with funnel gates:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (2, 2);
+ \foreach \y in {0.2, 0.4, 0.6, 0.8} {
+ \draw (0.9, \y - 0.07) -- (1.1, \y) -- (0.9, \y + 0.07);
+ }
+ \foreach \y in {1.2, 1.4, 1.6, 1.8} {
+ \draw (1.1, \y - 0.07) -- (0.9, \y) -- (1.1, \y + 0.07);
+ }
+ \draw [-latex, mred, thick] (1.4, 1.33) arc(90:-90:0.3);
+ \draw [-latex, mred, thick] (0.6, 0.67) arc(270:90:0.3);
+ \end{tikzpicture}
+ \end{center}
+
+ In this case, we expect there to be a rotation of particles if they are self-propelled, since it is easier to get through one direction than the other. Contrast this with the fact that there is no current in the steady state for any thermal equilibrium system. The difference is that Brownian motion has independent increments, but self-propelled particles tend to keep moving in the same direction.
+
+ Note that we also have to break spatial symmetry for this to happen. This is an example of the \term{Ratchet theorem}, namely that if we have pathwise-broken time reversal symmetry and broken spatial symmetry, then we can have a non-zero current.
+\end{eg}
+If we want to address this type of system in the language we have been using, we need to rebuild our model of statistical physics. In general, there are two model building strategies:
+\begin{enumerate}
+ \item Explicit coarse-graining of ``micro'' model, where we coarse-grain particles and rules to PDEs for $\rho, \phi, \mathbf{P}, Q$.
+ \item Start with models of passive soft matter (e.g.\ Model B and Model H), and add \emph{minimal terms} to explicitly break time reversal phenomenologically.
+\end{enumerate}
+
+Of course, we are going to proceed phenomenologically.
+\subsubsection*{Active Model B}
+Start with Model B, which has a diffusive, symmetric scalar field $\phi$ with phase separation:
+\begin{align*}
+ \dot{\phi} &= - \nabla \cdot \mathbf{J}\\
+ \mathbf{J} &= - \nabla \tilde{\mu} + \sqrt{2D} \boldsymbol\Lambda.
+\end{align*}
+We took
+\[
+ F = \int \left\{\frac{a}{2} \phi^2 + \frac{b}{4} \phi^4 + \frac{\kappa}{2} (\nabla \phi)^2 \right\} \;\d \mathbf{r}.
+\]
+To model our system without time reversal symmetry, we put
+\[
+ \tilde{\mu} = \frac{\delta F}{\delta \phi} + \lambda (\nabla \phi)^2.
+\]
+The new term breaks the time reversal structure. These equations are called \term{active Model B}. Another way to destroy time reversal symmetry is by replacing the white noise with something else, but that is more complicated.
+
+Note that
+\begin{enumerate}
+ \item $(\nabla \phi)^2$ is not the functional derivative of any $F$. This breaks the free energy structure, and
+ \[
+ \frac{\P_F}{\P_B} \not= e^{-\beta (F_2 - F_1)}
+ \]
+ for any $F[\phi]$. So time reversal symmetry is broken, barring miracles.
+ \item We cannot achieve the breaking by introducing a polynomial term, since if $g(\phi)$ is a polynomial, then
+ \[
+ g(\phi) = \frac{\delta}{\delta \phi} \int \d \mathbf{r} \left(\int^\phi g(u)\;\d u\right).
+ \]
+ So gradient terms are required to break time reversal symmetry. We will later see this is not the case for liquid crystals.
+ \item The active model B is agnostic about the cause of phase separation at $a < 0$. There are two possibilities:
+ \begin{enumerate}
+ \item We can have attractive interactions.
+ \item We can have repulsive interactions plus motile particles: if two particles collide head-on, then we have pairwise jamming. They then move together for a while, and this impersonates attraction. This is called \term{MIPS} --- \term{motility-induced phase separation}. It is possible to study this at a particle level, but we shall not.
+ \end{enumerate}
+ \item The dynamics of coarsening during phase separation turns out to be similar, with $L(t) \sim t^{1/3}$. The Ostwald-like process remains intact.
+ \item The coexistence conditions are altered. We previously found the coexistence conditions simply by global free energy minimization. In the active case, we can't do free energy minimization, but we can still solve the equations of motion explicitly. In this case, instead of requiring a common tangent, we have equal slope but different intercepts, where we set
+ \[
+ (\mu \phi - f)_1 = (\mu \phi - f)_2 + \Delta.
+ \]
+ This is found by solving $\mathbf{J} = 0$, so
+ \[
+ \tilde{\mu} = \frac{\partial f}{\partial \phi} - \kappa \nabla^2 \phi + \lambda (\nabla \phi)^2 =\text{const}.
+ \]
+ \item There is a further extension, active model B+, where we put
+ \[
+ \mathbf{J} = -\nabla \tilde{\mu} + \sqrt{2D} \boldsymbol\Lambda + \zeta (\nabla^2 \phi) \nabla \phi.
+ \]
+ This extra term is similar to $\nabla (\lambda (\nabla \phi)^2)$ in that it has two $\phi$'s and three $\nabla$'s, and the two are equivalent in one dimension, but only in one dimension. This changes the coarsening process significantly. For example, Ostwald ripening can stop at finite $R$ (see arXiv:1801.07687).
+\end{enumerate}
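A minimal one-dimensional integration of these equations (noise switched off, explicit Euler, periodic finite differences; all parameter values are illustrative rather than taken from the course) shows the conserved structure surviving the active term: since the $\lambda(\nabla\phi)^2$ contribution enters only through $\tilde{\mu}$, $\dot{\phi}$ is still a total divergence and $\int \phi \,\d x$ is preserved even though the free energy structure is broken.

```python
import numpy as np

# 1D active Model B without noise:
#   phi_t = d^2/dx^2 ( a phi + b phi^3 - kappa phi_xx + lam (phi_x)^2 )
a, b, kappa, lam = -0.25, 0.25, 1.0, 1.0   # illustrative parameters
N, dx, dt, steps = 64, 0.5, 2e-4, 5000

rng = np.random.default_rng(0)
phi = 0.1 * rng.standard_normal(N)         # small random initial condition
mass0 = phi.sum()

def lap(f):                                # periodic second difference
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def grad(f):                               # periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

for _ in range(steps):
    mu = a * phi + b * phi**3 - kappa * lap(phi) + lam * grad(phi)**2
    phi += dt * lap(mu)                    # conserved (Model B) dynamics

print(np.isfinite(phi).all(), abs(phi.sum() - mass0) < 1e-8)
```

The small time step is forced by the explicit treatment of the fourth-order $\kappa$ term, which requires roughly $\d t \lesssim \d x^4 / 8\kappa$ for stability.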
+
+\subsubsection*{Active polar liquid crystals}
+Consider first a polar system. Then the order parameter is $\mathbf{p}$. In the simplest case, the field is relaxational with $\mathbf{v} = \mathbf{0}$. The hydrodynamic level equation is
+\[
+ \dot{\mathbf{p}} = - \Gamma \mathbf{h},\quad \mathbf{h} = \frac{\delta F}{\delta \mathbf{p}}.
+\]
+We had a free energy
+\[
+ F = \int \left(\frac{a}{2} p^2 + \frac{b}{4} p^4 + \frac{\kappa}{2} (\nabla_\alpha p_\beta)(\nabla_\alpha p_\beta)\right)\;\d \mathbf{r}.
+\]
+As for active model B, $\mathbf{h}$ can acquire gradient terms that are incompatible with $F$. But also, we can have a lower order term in $\nabla$ that is physically well-motivated --- if we think of our rod as having a direction $\mathbf{p}$, then it is natural that $\mathbf{p}$ wants to translate along its own direction at some speed $w$. Thus, $\mathbf{p}$ acquires \term{self-advected motion} $w\mathbf{p}$. Thus, our equation of motion becomes
+\[
+ \dot{\mathbf{p}} + w \mathbf{p} \cdot \nabla \mathbf{p} = - \Gamma \mathbf{h}.
+\]
+This is a bit like the Navier--Stokes equation non-linearity. Now couple this to a fluid flow $\mathbf{v}$. Then
+\[
+ \frac{\D\mathbf{p}}{\D t} = - \Gamma \mathbf{h},
+\]
+where
+\[
+ \frac{\D \mathbf{p}}{\D t} = \left(\frac{\partial}{\partial t} + \mathbf{v} \cdot \nabla\right)\mathbf{p} + \Omega \cdot \mathbf{p} - \xi D \cdot \mathbf{p} + w \mathbf{p} \cdot \nabla \mathbf{p}.
+\]
+The Navier--Stokes/Cauchy equation is now
+\[
+ (\partial_t + \mathbf{v} \cdot \nabla) \mathbf{v} = \eta \nabla^2 \mathbf{v} - \nabla P + \nabla \cdot \Sigma^{(p)} + \nabla \cdot \Sigma^A,
+\]
+where as before,
+\[
+ (\nabla \cdot \Sigma^{(p)})_i = - p_j \nabla_i h_j + \nabla_j \left(\frac{1}{2} (p_i h_j - p_j h_i) + \frac{\xi}{2}(p_i h_j + p_j h_i)\right),
+\]
+and we have a new term $\Sigma^A$ given by the active stress, and the lowest order term is $\zeta p_i p_j$. This is a new mechanical term that is incompatible with $F$. We then have
+\[
+ \nabla \cdot \Sigma^A = \zeta \left((\nabla \cdot \mathbf{p}) \mathbf{p} + \mathbf{p} \cdot \nabla \mathbf{p}\right).
+\]
+We can think of this as an effective body force in the Navier--Stokes equation. The effect is that we have forces whenever we have situations looking like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (1, 0);
+ \draw [->] (0, 0.2) -- (1, 0.4);
+ \draw [->] (0, -0.2) -- (1, -0.4);
+
+ \node at (2, 0) {or};
+ \draw [<-] (3, 0) -- (4, 0);
+ \draw [<-] (3, 0.2) -- (4, 0.4);
+ \draw [<-] (3, -0.2) -- (4, -0.4);
+
+ \end{tikzpicture}
+\end{center}
+In these cases, we have a force acting to the right if $\zeta > 0$, and to the left if $\zeta < 0$.
+
+These new terms give spontaneous flow, symmetry breaking and macroscopic fluxes. At high $w, \zeta$, we get chaos and turbulence.
+\subsubsection*{Active nematic liquid crystals}
+In the nematic case, there is no self-advection, since the director is headless and we cannot construct a velocity from $Q$. We again have
+\[
+ \frac{\D Q}{\D t} = - \Gamma H,\quad
+ H = \left[\frac{\delta F}{\delta Q}\right]^{\text{traceless}},
+\]
+where $\frac{\D Q}{\D t}$ is given by
+\[
+ \frac{\D Q}{\D t} = (\partial_t + \mathbf{v} \cdot \nabla) Q + S(Q, K, \xi).
+\]
+Here $K = \nabla v$ and
+\[
+ S = (-\Omega \cdot Q - Q \cdot \Omega) - \xi (D \cdot Q + Q \cdot D) + 2 \xi \left(Q + \frac{\mathbf{1}}{d}\right) \Tr(Q \cdot K).
+\]
+Since there is no directionality, unlike in the polar case, the material derivative remains unchanged by activity. Thus, at lowest order, all the self-propelled motion can do is to introduce an active stress term. The leading-order stress is
+\[
+ \Sigma^A = \zeta Q.
+\]
+This breaks the free energy structure. Indeed, if we have a uniform nematic, then the passive stress vanishes, because there is no elastic distortion at all. However, the active stress does not vanish, since $\zeta Q \not= 0$. Physically, the non-zero stress is due to the fact that the rods tend to generate local flow fields around themselves to propel motion, and these remain even in a uniform phase.
+
+After introducing this, the effective body force density is
+\[
+ \mathbf{f} = \nabla \cdot \Sigma^A = \zeta \nabla \cdot Q \sim \zeta \lambda (\nabla \cdot \mathbf{n}) \mathbf{n}.
+\]
+This is essentially the same as in the polar case. Thus, if we see something like
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (1, 0);
+ \draw (0, 0.2) -- (1, 0.4);
+ \draw (0, -0.2) -- (1, -0.4);
+ \end{tikzpicture}
+\end{center}
+then we have a rightward force if $\zeta > 0$ and leftward force if $\zeta < 0$.
+
+This has important physical consequences. If we start with a uniform phase, then we expect random noise to exist, and the active stress will destabilize the system. For example, if we start with
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0,0.3,...,3.0}{
+ \draw (\x, 0) -- (\x, 1);
+ }
+ \end{tikzpicture}
+\end{center}
+and a local deformation happens:
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0,1.8,3.6}{
+ \draw (\x-0.2, 0) -- (\x - 0.3, 1);
+ \draw (\x, 0) -- (\x, 1);
+ \draw (\x+0.2, 0) -- (\x + 0.3, 1);
+ }
+ \foreach \x in {0.9,2.7}{
+ \draw (\x-0.2, 1) -- (\x - 0.3, 0);
+ \draw (\x, 0) -- (\x, 1);
+ \draw (\x+0.2, 1) -- (\x + 0.3, 0);
+ }
+ \end{tikzpicture}
+\end{center}
+then in the $\zeta > 0$ case, this distortion will just fall apart. Conversely, bends are destabilized for $\zeta < 0$. In either case, there is a tendency for these distortions to grow, and a consequence is that active nematics are never stably uniform for large systems. Typically, we get spontaneous flow.
+
+To understand this more, we can explicitly describe how the activity parameter $\zeta$ affects the local flow patterns. Typically, we have the following two cases:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-0.2, 1) -- (-0.2, 0) arc(180:360:0.2 and 0.05) -- (0.2, 1);
+ \draw (0, 1) ellipse (0.2 and 0.05);
+ \draw [densely dashed] (-0.2, 0) arc(180:0:0.2 and 0.05);
+
+ \draw [decoration={markings, mark=at position 0.4 with {\arrowreversed[scale=1.2]{latex'}}}, postaction={decorate}] (0.3, -0.3) .. controls (0.3, -0.1) and (0.4, 0.3) .. (0.8, 0.3);
+ \draw [decoration={markings, mark=at position 0.4 with {\arrowreversed[scale=1.2]{latex'}}}, postaction={decorate}] (-0.3, -0.3) .. controls (-0.3, -0.1) and (-0.4, 0.3) .. (-0.8, 0.3);
+ \draw [decoration={markings, mark=at position 0.4 with {\arrowreversed[scale=1.2]{latex'}}}, postaction={decorate}] (-0.3, 1.3) .. controls (-0.3, 1.1) and (-0.4, 0.7) .. (-0.8, 0.7);
+ \draw [decoration={markings, mark=at position 0.4 with {\arrowreversed[scale=1.2]{latex'}}}, postaction={decorate}] (0.3, 1.3) .. controls (0.3, 1.1) and (0.4, 0.7) .. (0.8, 0.7);
+
+ \node at (0, -1) {$\zeta > 0$};
+ \node at (0, -1.4) {contractile};
+ \begin{scope}[shift={(3, 0)}]
+ \draw (-0.2, 1) -- (-0.2, 0) arc(180:360:0.2 and 0.05) -- (0.2, 1);
+ \draw (0, 1) ellipse (0.2 and 0.05);
+ \draw [densely dashed] (-0.2, 0) arc(180:0:0.2 and 0.05);
+
+ \draw [decoration={markings, mark=at position 0.6 with {\arrow[scale=1.2]{latex'}}}, postaction={decorate}] (0.3, -0.3) .. controls (0.3, -0.1) and (0.4, 0.3) .. (0.8, 0.3);
+ \draw [decoration={markings, mark=at position 0.6 with {\arrow[scale=1.2]{latex'}}}, postaction={decorate}] (-0.3, -0.3) .. controls (-0.3, -0.1) and (-0.4, 0.3) .. (-0.8, 0.3);
+ \draw [decoration={markings, mark=at position 0.6 with {\arrow[scale=1.2]{latex'}}}, postaction={decorate}] (-0.3, 1.3) .. controls (-0.3, 1.1) and (-0.4, 0.7) .. (-0.8, 0.7);
+ \draw [decoration={markings, mark=at position 0.6 with {\arrow[scale=1.2]{latex'}}}, postaction={decorate}] (0.3, 1.3) .. controls (0.3, 1.1) and (0.4, 0.7) .. (0.8, 0.7);
+
+ \node at (0, -1) {$\zeta < 0$};
+ \node at (0, -1.4) {extensile};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Suppose we take an active liquid crystal and put it in a shear flow. A rod-like object tends to align along the extension axis, at a $45^\circ$ angle.
+
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[rotate=-45]
+ \draw (-0.2, 0.5) -- (-0.2, -0.5) arc(180:360:0.2 and 0.05) -- (0.2, 0.5);
+ \draw (0, 0.5) ellipse (0.2 and 0.05);
+ \draw [densely dashed] (-0.2, -0.5) arc(180:0:0.2 and 0.05);
+ \end{scope}
+
+ \draw [->] (1, -1) -- (-1, -1);
+ \draw [<-] (1, 1) -- (-1, 1);
+
+ \end{tikzpicture}
+\end{center}
+
+If the liquid crystal is active, then we expect the local flows to interact with the shear flow. Suppose we impose a shear flow $v_x = g y$, so the shear rate is $g$. Then the viscous stress is
+\[
+ \Sigma^\eta = \eta g
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix}.
+\]
+We have
+\[
+ \Sigma^A \propto \zeta \lambda \left(\mathbf{n}\mathbf{n} - \frac{\mathbf{1}}{d}\right) = \zeta \lambda
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix}
+\]
+if $\mathbf{n}$ is at $45^\circ$ exactly. Note that the sign of $\zeta$ affects whether it reinforces or weakens the stress.
+
+A crucial property is that $\Sigma^A$ does not depend on the shear rate. So in the contractile case, the total stress looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$g$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\Sigma_{\mathrm{TOT}}$};
+
+ \draw [mblue, thick] (-3, -1.5) -- (0, -1) -- (0, 1) -- (3, 1.5);
+ \end{tikzpicture}
+\end{center}
+In the extensile case, however, we have
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$g$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\Sigma_{\mathrm{TOT}}$};
+
+ \draw [mblue, thick] (-3, -0.5) -- (0, 1) -- (0, -1) -- (3, 0.5); % insert intercepts $\pm g^*$
+
+ \node [circ] at (2, 0) {};
+ \node [below] at (2, 0) {$g^*$};
+ \end{tikzpicture}
+\end{center}
+
+This is very weird, and leads to spontaneous flow \emph{at zero applied stress} of the form
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -1) -- (2, -1);
+ \draw (-2, 1) -- (2, 1);
+
+ \draw (-0.5, -1) -- (0.5, 0) -- (-0.5, 1);
+
+ \foreach \y in {0.1,0.3,...,0.9} {
+ \draw [-latex](-0.5, 1-\y) -- (-0.5 + \y, 1-\y);
+ \draw [-latex](-0.5, \y-1) -- (-0.5 + \y, \y-1);
+ }
+ \end{tikzpicture}
+\end{center}
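The zero-stress flow can be seen in a toy parametrization of the extensile stress curve. Assuming the director sits at $\pm 45^\circ$ so that the active contribution is a shear-rate-independent $-s \operatorname{sgn}(g)$ with $s = |\zeta \lambda|$ (the numbers below are illustrative, not from the notes), the total stress vanishes at a non-zero shear rate $g^*$:

```python
# toy total shear stress in the extensile case (illustrative values)
eta, s = 1.0, 0.5        # viscosity and active stress magnitude |zeta*lambda|

def sigma_tot(g):
    sign = (g > 0) - (g < 0)
    return eta * g - s * sign      # viscous + shear-rate-independent active

g_star = s / eta                   # non-zero shear rate with zero total stress
print(sigma_tot(g_star), sigma_tot(-g_star))   # 0.0 0.0: spontaneous flow
```

At zero applied stress the system can thus sit in bands shearing at $\pm g^*$, which is exactly the banded flow profile sketched above.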
+%The consequence is that at zero applied stress, we want to have bits at $-g^*$ and bits at $g^*$. This gives rise to a spontaneous flow of the form
+
+\subsubsection*{Defect motion in active nematics}
+For simplicity, work in two dimensions. We have two simple defects as before
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \node [circ] {};
+ \draw [mblue, semithick, loosely dashed] (-1.732, -0.8) edge [bend right] (-0.2, 2);
+ \draw [mblue, semithick, loosely dashed] (-1.732, -0.3) edge [bend right] (-0.7, 2);
+ \draw [mblue, semithick, loosely dashed, rotate=120] (-1.732, -0.8) edge [bend right] (-0.2, 2);
+ \draw [mblue, semithick, loosely dashed, rotate=120] (-1.732, -0.3) edge [bend right] (-0.7, 2);
+ \draw [mblue, semithick, loosely dashed, rotate=240] (-1.732, -0.8) edge [bend right] (-0.2, 2);
+ \draw [mblue, semithick, loosely dashed, rotate=240] (-1.732, -0.3) edge [bend right] (-0.7, 2);
+
+ \node [below] at (0, -1.8) {$q = -\frac{1}{2}$};
+ \begin{scope}[shift={(6, 0)}]
+ \node [circ] {};
+
+ \draw [mblue, semithick, loosely dashed] (0, 0) -- (0, 2);
+
+ \draw [mblue, semithick, loosely dashed] (-0.5, 2) -- (-0.5, 0) arc (180:360:0.5) -- (0.5, 2);
+ \draw [mblue, semithick, loosely dashed] (-1, 2) -- (-1, 0) arc (180:360:1) -- (1, 2);
+ \draw [mblue, semithick, loosely dashed] (-1.5, 2) -- (-1.5, 0) arc (180:360:1.5) -- (1.5, 2);
+ \node [below] at (0, -1.8) {$q = +\frac{1}{2}$};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Note that the $q = -\frac{1}{2}$ defect is symmetric, and so by symmetry, there cannot be a net body force. However, for the $q = +\frac{1}{2}$ defect, we have a non-zero effective force density.
+
+So the defects themselves are like quasi-particles that are themselves active. We see that contractile defects move in the direction of the opening, while extensile ones move in the other direction. The outcome of this is self-sustaining ``turbulent'' motion, in which $\pm \frac{1}{2}$ defect pairs are formed \emph{locally}. The $-\frac{1}{2}$ defects stay put while the $+\frac{1}{2}$ ones self-propel, and depending on exactly how the defect pairs are formed, the $+\frac{1}{2}$ defect will fly away.
+
+Experimental movies of these can be found in \emph{T. Sanchez et al., Nature 491, 431 (2012)}. There are also simulations in \emph{T. N. Shendruk et al., Soft Matter 13, 3853 (2017)}.
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/advanced_probability.tex b/books/cam/III_M/advanced_probability.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4fc332e61e9611920f4c1cc89cd5e40c3bd21237
--- /dev/null
+++ b/books/cam/III_M/advanced_probability.tex
@@ -0,0 +1,2873 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2017}
+\def\nlecturer {M. Lis}
+\def\ncourse {Advanced Probability}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+The aim of the course is to introduce students to advanced topics in modern probability theory. The emphasis is on tools required in the rigorous analysis of stochastic processes, such as Brownian motion, and in applications where probability theory plays an important role.
+
+\noindent\textbf{Review of measure and integration:} sigma-algebras, measures and filtrations; integrals and expectation; convergence theorems; product measures, independence and Fubini's theorem.\\
+\noindent\textbf{Conditional expectation:} Discrete case, Gaussian case, conditional density functions; existence and uniqueness; basic properties.\\
+\noindent\textbf{Martingales:} Martingales and submartingales in discrete time; optional stopping; Doob's inequalities, upcrossings, martingale convergence theorems; applications of martingale techniques.\\
+\noindent\textbf{Stochastic processes in continuous time:} Kolmogorov's criterion, regularization of paths; martingales in continuous time.\\
+\noindent\textbf{Weak convergence:} Definitions and characterizations; convergence in distribution, tightness, Prokhorov's theorem; characteristic functions, L\'evy's continuity theorem.\\
+\noindent\textbf{Sums of independent random variables:} Strong laws of large numbers; central limit theorem; Cram\'er's theory of large deviations.\\
+\noindent\textbf{Brownian motion:} Wiener's existence theorem, scaling and symmetry properties; martingales associated with Brownian motion, the strong Markov property, hitting times; properties of sample paths, recurrence and transience; Brownian motion and the Dirichlet problem; Donsker's invariance principle.\\
+\noindent\textbf{Poisson random measures:} Construction and properties; integrals.\\
+\noindent\textbf{L\'evy processes:} L\'evy-Khinchin theorem.
+
+\subsubsection*{Pre-requisites}
+A basic familiarity with measure theory and the measure-theoretic formulation of probability theory is very helpful. These foundational topics will be reviewed at the beginning of the course, but students unfamiliar with them are expected to consult the literature (for instance, Williams' book) to strengthen their understanding.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In some other places in the world, this course might be known as ``Stochastic Processes''. In addition to doing probability, a new component studied in the course is \emph{time}. We are going to study how things change over time.
+
+In the first half of the course, we will focus on discrete time. A familiar example is the simple random walk --- we start at a point on a grid, and at each time step, we jump to a neighbouring grid point randomly. This gives a sequence of random variables indexed by discrete time steps, which are related to each other in interesting ways. In particular, we will consider \emph{martingales}, which enjoy some really nice convergence and ``stopping'' properties.
+
+In the second half of the course, we will look at continuous time. There is a fundamental difference between the two, in that there is a nice topology on the interval. This allows us to say things like we want our trajectories to be continuous. On the other hand, this can cause some headaches because $\R$ is uncountable. We will spend a lot of time thinking about Brownian motion, whose discovery is often attributed to Robert Brown. We can think of this as the limit as we take finer and finer steps in a random walk. It turns out this has a very rich structure, and will tell us something about Laplace's equation as well.
+
+Apart from stochastic processes themselves, there are two main objects that appear in this course. The first is the conditional expectation. Recall that if we have a random variable $X$, we can obtain a number $\E[X]$, the expectation of $X$. We can think of this as integrating out all the randomness of the system, and just remembering the average. Conditional expectation will be some subtle modification of this construction, where we don't actually get a number, but another random variable. The idea behind this is that we want to integrate out some of the randomness in our random variable, but keep the remaining.
+
+Another main object is \emph{stopping time}. For example, if we have a production line that produces a random number of outputs at each point, then we can ask how much time it takes to produce a fixed number of goods. This is a nice random time, which we call a stopping time. The niceness follows from the fact that when the time comes, we know it. An example that is not nice is, for example, the last day it rains in Cambridge in a particular month, since on that last day, we don't necessarily know that it is in fact the last day.
+
+At the end of the course, we will say a little bit about large deviations.
+\section{Some measure theory}
+\subsection{Review of measure theory}
+To make the course as self-contained as possible, we shall begin with some review of measure theory. On the other hand, if one doesn't already know measure theory, it is recommended that they learn it properly before starting this course.
+
+\begin{defi}[$\sigma$-algebra]\index{$\sigma$-algebra}
+ Let $E$ be a set. A subset $\mathcal{E}$ of the power set $\mathcal{P}(E)$ is called a \emph{$\sigma$-algebra} (or \term{$\sigma$-field}) if
+ \begin{enumerate}
+ \item $\emptyset \in \mathcal{E}$;
+ \item If $A \in \mathcal{E}$, then $A^C = E \setminus A \in \mathcal{E}$;
+ \item If $A_1, A_2, \ldots \in \mathcal{E}$, then $\bigcup_{n = 1}^\infty A_n \in \mathcal{E}$.
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[Measurable space]\index{measurable space}
+ A \emph{measurable space} is a set with a $\sigma$-algebra.
+\end{defi}
+
+
+\begin{defi}[Borel $\sigma$-algebra]\index{Borel $\sigma$-algebra}\index{$\sigma$-algebra!Borel}\index{$\mathcal{B}(E)$}
+ Let $E$ be a topological space with topology $\mathcal{T}$. Then the \emph{Borel $\sigma$-algebra} $\mathcal{B}(E)$ on $E$ is the $\sigma$-algebra generated by $\mathcal{T}$, i.e.\ the smallest $\sigma$-algebra containing $\mathcal{T}$.
+\end{defi}
+
+We are often going to look at $\mathcal{B}(\R)$, and we will just write $\mathcal{B}$\index{$\mathcal{B}$} for it.
+
+\begin{defi}[Measure]\index{measure}
+ A function $\mu: \mathcal{E} \to [0, \infty]$ is a \emph{measure} if
+ \begin{itemize}
+ \item $\mu(\emptyset) = 0$
+ \item If $A_1, A_2, \ldots \in \mathcal{E}$ are disjoint, then
+ \[
+ \mu \left(\bigcup_{i = 1}^\infty A_i \right) = \sum_{i = 1}^\infty \mu(A_i).
+ \]
+ \end{itemize}
+\end{defi}
+
+\begin{defi}[Measure space]\index{measure space}
+ A \emph{measure space} is a measurable space with a measure.
+\end{defi}
+
+\begin{defi}[Measurable function]\index{measurable function}
+ Let $(E_1, \mathcal{E}_1)$ and $(E_2, \mathcal{E}_2)$ be measurable spaces. Then $f: E_1 \to E_2$ is said to be \emph{measurable} if $A \in \mathcal{E}_2$ implies $f^{-1}(A) \in \mathcal{E}_1$.
+\end{defi}
+This is similar to the definition of a continuous function.
+
+\begin{notation}\index{$m\mathcal{E}$}\index{$m\mathcal{E}^+$}
+ For $(E, \mathcal{E})$ a measurable space, we write $m\mathcal{E}$ for the set of measurable functions $E \to \R$.
+
+ We write $m\mathcal{E}^+$ to be the positive measurable functions, which are allowed to take value $\infty$.
+\end{notation}
+Note that we do \emph{not} allow taking the values $\pm \infty$ in the first case.
+
+\begin{thm}
+ Let $(E, \mathcal{E}, \mu)$ be a measure space. Then there exists a unique function $\tilde{\mu}: m\mathcal{E}^+ \to [0, \infty]$ satisfying
+ \begin{itemize}
+ \item $\tilde{\mu}(\mathbf{1}_A) = \mu(A)$, where $\mathbf{1}_A$ is the indicator function of $A$.
+ \item Linearity: $\tilde{\mu}(\alpha f + \beta g) = \alpha \tilde{\mu}(f) + \beta \tilde{\mu}(g)$ if $\alpha, \beta \in \R_{\geq 0}$ and $f, g \in m\mathcal{E}^+$.
+ \item Monotone convergence: if $f_1, f_2, \ldots \in m\mathcal{E}^+$ are such that $f_n \nearrow f \in m\mathcal{E}^+$ pointwise a.e.\ as $n \to \infty$, then
+ \[
+ \lim_{n \to \infty} \tilde{\mu}(f_n) = \tilde{\mu} (f).
+ \]
+ \end{itemize}
+ We call $\tilde{\mu}$ the \term{integral} with respect to $\mu$, and we will write it as $\mu$ from now on.
+\end{thm}
+
+\begin{defi}[Simple function]\index{simple function}
+ A function $f$ is \emph{simple} if there exist $\alpha_n \in \R_{\geq 0}$ and $A_n \in \mathcal{E}$ for $1 \leq n \leq k$ such that
+ \[
+ f = \sum_{n = 1}^k \alpha_n \mathbf{1}_{A_n}.
+ \]
+\end{defi}
+From the first two properties of the integral, we see that
+\[
+ \mu(f) = \sum_{n = 1}^k \alpha_n \mu(A_n).
+\]
+One convenient observation is that a function is simple iff it takes on only finitely many values. We then see that if $f \in m \mathcal{E}^+$, then
+\[
+ f_n = 2^{-n}\lfloor 2^n f\rfloor \wedge n
+\]
+is a sequence of simple functions approximating $f$ from below. Thus, given monotone convergence, this shows that
+\[
+ \mu(f) = \lim \mu(f_n),
+\]
+and this proves the uniqueness part of the theorem.
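The dyadic approximation above is easy to check numerically. Below is a small illustrative Python sketch (not part of the notes; the choice $f(x) = x^2$ is arbitrary) that computes $f_n = 2^{-n}\lfloor 2^n f\rfloor \wedge n$ and verifies that it increases to $f$ pointwise from below.

```python
import math

def dyadic_approx(f, n):
    """The simple function f_n = (2^{-n} * floor(2^n f)) ∧ n."""
    return lambda x: min(math.floor(2 ** n * f(x)) / 2 ** n, n)

f = lambda x: x * x  # an arbitrary non-negative function standing in for f in mE+
for x in [0.3, 1.7, 2.5]:
    vals = [dyadic_approx(f, n)(x) for n in range(1, 20)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))  # increasing in n
    assert vals[-1] <= f(x) < vals[-1] + 2 ** -19       # within 2^{-n} below f(x)
```

Each $f_n$ takes only finitely many values (multiples of $2^{-n}$ capped at $n$), so it is indeed simple.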
+
+Recall that
+\begin{defi}[Almost everywhere]\index{almost everywhere}
+ We say $f = g$ almost everywhere if
+ \[
+ \mu(\{x \in E: f(x) \not= g(x)\}) = 0.
+ \]
+ We say $f$ is a \term{version} of $g$.
+\end{defi}
+
+\begin{eg}
+ Let $\ell_n = \mathbf{1}_{[n, n + 1]}$. Then $\mu(\ell_n) = 1$ for all $n$, but also $\ell_n \to 0$ pointwise and $\mu(0) = 0$. So the ``monotone'' part of monotone convergence is important.
+\end{eg}
+
+So if the sequence is not monotone, then the integral need not preserve limits, but it turns out we still have an inequality.
+
+\begin{lemma}[Fatou's lemma]\index{Fatou's lemma}
+ Let $f_n \in m \mathcal{E}^+$. Then
+ \[
+ \mu\left(\liminf_n f_n\right) \leq \liminf_n \mu(f_n).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Apply monotone convergence to the sequence $\inf_{m \geq n} f_m$.
+\end{proof}
+
+Of course, it would be useful to extend integration to functions that are not necessarily positive.
+\begin{defi}[Integrable function]\index{integrable function}
+ We say a function $f \in m\mathcal{E}$ is \emph{integrable} if $\mu(|f|) < \infty$. We write \term{$L^1(E)$} (or just $L^1$) for the space of integrable functions.
+
+ We extend $\mu$ to $L^1$ by
+ \[
+ \mu(f) = \mu(f^+) - \mu(f^-),
+ \]
+ where $f^{\pm} = (\pm f) \vee 0$.
+\end{defi}
+
+If we want to be explicit about the measure and the $\sigma$-algebra, we can write $L^1(E, \mathcal{E}, \mu)$.
+
+\begin{thm}[Dominated convergence theorem]\index{dominated convergence theorem}
+ If $f_n \in m\mathcal{E}$ and $f_n \to f$ a.e., and there exists $g \in L^1$ such that $|f_n| \leq g$ a.e.\ for all $n$, then
+ \[
+ \mu(f) = \lim \mu(f_n).
+ \]
+\end{thm}
+
+\begin{proof}
+ Apply Fatou's lemma to $g - f_n$ and $g + f_n$.
+\end{proof}
+
+\begin{defi}[Product $\sigma$-algebra]\index{product $\sigma$-algebra}\index{$\sigma$-algebra!product}
+ Let $(E_1, \mathcal{E}_1)$ and $(E_2, \mathcal{E}_2)$ be measurable spaces. Then the \emph{product $\sigma$-algebra} $\mathcal{E}_1 \otimes \mathcal{E}_2$ is the smallest $\sigma$-algebra on $E_1 \times E_2$ containing all sets of the form $A_1 \times A_2$, where $A_i \in \mathcal{E}_i$.
+\end{defi}
+
+\begin{thm}
+ If $(E_1, \mathcal{E}_1, \mu_1)$ and $(E_2, \mathcal{E}_2, \mu_2)$ are $\sigma$-finite measure spaces, then there exists a unique measure $\mu$ on $\mathcal{E}_1 \otimes \mathcal{E}_2$ satisfying
+ \[
+ \mu(A_1 \times A_2) = \mu_1(A_1) \mu_2(A_2)
+ \]
+ for all $A_i \in \mathcal{E}_i$.
+
+ This is called the \term{product measure}\index{measure!product}.
+\end{thm}
+
+\begin{thm}[Fubini's/Tonelli's theorem]\index{Fubini's theorem}\index{Tonelli's theorem}
+ If $f = f(x_1, x_2) \in m\mathcal{E}^+$ with $\mathcal{E} = \mathcal{E}_1 \otimes \mathcal{E}_2$, then the functions
+ \begin{align*}
+ x_1 &\mapsto \int f(x_1, x_2) \;\d \mu_2(x_2) \in m \mathcal{E}_1^+\\
+ x_2 &\mapsto \int f(x_1, x_2) \;\d \mu_1(x_1) \in m \mathcal{E}_2^+
+ \end{align*}
+ are measurable, and
+ \begin{align*}
+ \int_E f \;\d \mu &= \int_{E_1} \left(\int_{E_2} f(x_1, x_2)\;\d \mu_2(x_2)\right) \d \mu_1(x_1)\\
+ &= \int_{E_2} \left(\int_{E_1} f(x_1, x_2)\;\d \mu_1(x_1)\right) \d \mu_2(x_2).
+ \end{align*}
+\end{thm}
+\subsection{Conditional expectation}
+In this course, conditional expectation is going to play an important role, and it is worth spending some time developing the theory. We are going to focus on probability theory, which, mathematically, just means we assume $\mu(E) = 1$. Practically, it is common to change notation to $E = \Omega$, $\mathcal{E} = \mathcal{F}$, $\mu = \P$ and $\int\;\d \mu = \E$. Measurable functions will be written as $X, Y, Z$, and will be called \term{random variables}. Elements in $\mathcal{F}$ will be called \term{events}. An element $\omega \in \Omega$ will be called a \term{realization}.
+
+There are many ways we can think about conditional expectations. The first one is how most of us first encountered conditional probability.
+
+Suppose $B \in \mathcal{F}$, with $\P(B) > 0$. Then the conditional probability of the event $A$ given $B$ is
+\[
+ \P(A \mid B) = \frac{\P(A \cap B)}{\P(B)}.
+\]
+This should be interpreted as the probability that $A$ happened, given that $B$ happened. Since we assume $B$ happened, we ought to restrict to the subset of the probability space where $B$ in fact happened. To make this a probability space, we scale the probability measure by $\P(B)$. Then given any event $A$, we take the probability of $A \cap B$ under this probability measure, which is the formula given.
+
+More generally, if $X$ is a random variable, the conditional expectation of $X$ given $B$ is just the expectation under this new probability measure,
+\[
+ \E[X \mid B] = \frac{\E[X \mathbf{1}_B]}{\P[B]}.
+\]
+We probably already know this from high school, and we are probably not quite excited by this. One natural generalization would be to allow $B$ to vary.
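Before generalizing, the elementary formula can be sanity-checked by simulation. The following Python sketch is purely illustrative (the fair die and the event $B = \{\text{roll is even}\}$ are my own choices); it estimates $\E[X \mid B] = \E[X \mathbf{1}_B]/\P[B]$:

```python
import random

random.seed(0)
N = 100_000
rolls = [random.randint(1, 6) for _ in range(N)]

# B = {roll is even}; estimate E[X 1_B] and P(B) separately
ex_1b = sum(x for x in rolls if x % 2 == 0) / N  # estimates E[X 1_B]
p_b = sum(1 for x in rolls if x % 2 == 0) / N    # estimates P(B)
cond_exp = ex_1b / p_b  # should be close to (2 + 4 + 6)/3 = 4
```

The ratio is the expectation under the rescaled measure restricted to $B$, exactly as described above.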
+
+Let $G_1, G_2, \ldots \in \mathcal{F}$ be disjoint events such that $\bigcup_n G_n = \Omega$. Let
+\[
+ \mathcal{G} = \sigma(G_1, G_2, \ldots) = \left\{\bigcup_{n \in I} G_n: I \subseteq \N\right\}. % coarse graining
+\]
+Let $X \in L^1$. We then define
+\[
+ Y = \sum_{n = 1}^\infty \E(X \mid G_n) \mathbf{1}_{G_n}.
+\]
+Let's think about what this is saying. Suppose a random outcome $\omega$ happens. To compute $Y$, we figure out which of the $G_n$ our $\omega$ belongs to. Let's say $\omega \in G_k$. Then $Y$ returns the expected value of $X$ given that we live in $G_k$. In this process, we have forgotten the exact value of $\omega$. All that matters is which $G_n$ the outcome belongs to. We can ``visually'' think of the $G_n$ as cutting up the sample space $\Omega$ into compartments:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [thick] plot [smooth cycle] coordinates {(1.3, -1.2) (1.3, 0) (0.7, 0.7) (0.6, 1.3) (2.6, 1.4) (3, 0.3) (2.8, -1.7)};
+ \clip plot [smooth cycle] coordinates {(1.3, -1.2) (1.3, 0) (0.7, 0.7) (0.6, 1.3) (2.6, 1.4) (3, 0.3) (2.8, -1.7)};
+ \draw [step=0.5, opacity=0.8] (0, -2) grid (4, 1.7);
+ \end{tikzpicture}
+\end{center}
+We then average out $X$ in each of these compartments to obtain $Y$. This is what we are going to call the conditional expectation of $X$ given $\mathcal{G}$, written $\E(X \mid \mathcal{G})$.
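On a finite sample space this averaging over compartments is completely concrete. Here is an illustrative Python sketch (the uniform space $\Omega = \{0, \ldots, 11\}$ and the partition into residue classes mod $3$ are my own choices):

```python
# Omega = {0,...,11} with the uniform measure; compartments G_k = {w : w % 3 == k}
omega = list(range(12))
X = lambda w: w  # the random variable to be conditioned

def Y(w):
    """E(X | G) evaluated at w: average X over the compartment containing w."""
    G = [v for v in omega if v % 3 == w % 3]
    return sum(X(v) for v in G) / len(G)

# Y is constant on each compartment, and E[Y] = E[X]
assert Y(0) == Y(3) == 4.5
assert sum(Y(w) for w in omega) == sum(X(w) for w in omega)
```

The two assertions are instances of the two characterizing properties in the lemma below: $Y$ is measurable with respect to the partition, and integrating $Y$ or $X$ over a union of compartments gives the same answer.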
+
+Ultimately, the characterizing property of $Y$ is the following lemma:
+\begin{lemma}
+ The conditional expectation $Y = \E(X \mid \mathcal{G})$ satisfies the following properties:
+ \begin{itemize}
+ \item $Y$ is $\mathcal{G}$-measurable
+ \item We have $Y \in L^1$, and
+ \[
+ \E Y \mathbf{1}_A = \E X \mathbf{1}_A
+ \]
+ for all $A \in \mathcal{G}$.
+ \end{itemize}
+\end{lemma}
+
+\begin{proof}
+ It is clear that $Y$ is $\mathcal{G}$-measurable. To show it is $L^1$, we compute
+ \begin{align*}
+ \E[|Y|] &= \E \left|\sum_{n = 1}^\infty \E(X \mid G_n) \mathbf{1}_{G_n}\right|\\
+ &\leq \E \sum_{n =1 }^\infty \E(|X| \mid G_n) \mathbf{1}_{G_n} \\
+ &= \sum \E \left( \E(|X| \mid G_n) \mathbf{1}_{G_n}\right)\\
+ &= \sum \E |X| \mathbf{1}_{G_n}\\
+ &= \E \sum |X| \mathbf{1}_{G_n}\\
+ &= \E |X|\\
+ &< \infty,
+ \end{align*}
+ where we used monotone convergence twice to swap the expectation and the sum.
+
+ The final part is also clear, since we can explicitly enumerate the elements in $\mathcal{G}$ and see that they all satisfy the last property.
+\end{proof}
+
+It turns out for any $\sigma$-subalgebra $\mathcal{G} \subseteq \mathcal{F}$, we can construct the conditional expectation $\E(X \mid \mathcal{G})$, which is uniquely characterized by the above two properties.
+
+\begin{thm}[Existence and uniqueness of conditional expectation]
+ Let $X \in L^1$, and let $\mathcal{G} \subseteq \mathcal{F}$ be a $\sigma$-algebra. Then there exists a random variable $Y$ such that
+ \begin{itemize}
+ \item $Y$ is $\mathcal{G}$-measurable
+ \item $Y \in L^1$, and $\E X \mathbf{1}_A = \E Y \mathbf{1}_A$ for all $A \in \mathcal{G}$.
+ \end{itemize}
+ Moreover, if $Y'$ is another random variable satisfying these conditions, then $Y' = Y$ almost surely.
+
+ We call $Y$ a (version of) the conditional expectation given $\mathcal{G}$.
+\end{thm}
+
+We will write the conditional expectation as $\E(X \mid \mathcal{G})$, and if $X = \mathbf{1}_A$, we will write $\P(A \mid \mathcal{G}) = \E(\mathbf{1}_A \mid \mathcal{G})$.
+
+Recall also that if $Z$ is a random variable, then $\sigma(Z) = \{Z^{-1}(B): B \in \mathcal{B}\}$. In this case, we will write $\E(X \mid Z) = \E(X \mid \sigma(Z))$.
+
+By, say, bounded convergence, it follows from the second condition that $\E XZ = \E YZ$ for all bounded $\mathcal{G}$-measurable functions $Z$.
+\begin{proof}
+ We first consider the case where $X \in L^2(\Omega, \mathcal{F}, \mu)$. Then we know from functional analysis that for any $\mathcal{G} \subseteq \mathcal{F}$, the space $L^2(\mathcal{G})$ is a Hilbert space with inner product
+ \[
+ \langle X, Y \rangle = \mu (X Y).
+ \]
+ In particular, $L^2(\mathcal{G})$ is a closed subspace of $L^2(\mathcal{F})$. We can then define $Y$ to be the orthogonal projection of $X$ onto $L^2(\mathcal{G})$. It is immediate that $Y$ is $\mathcal{G}$-measurable. For the second part, we use that $X - Y$ is orthogonal to $L^2(\mathcal{G})$, since that's what orthogonal projection is supposed to be. So $\E(X - Y)Z = 0$ for all $Z \in L^2(\mathcal{G})$. In particular, since the measure space is finite, the indicator function of any measurable subset is $L^2$. So we are done.
+
+ We next focus on the case where $X \in m\mathcal{E}^+$. We define
+ \[
+ X_n = X \wedge n
+ \]
+ We want to use monotone convergence to obtain our result. To do so, we need the following result:
+
+ \begin{claim}
+ If $(X, Y)$ and $(X', Y')$ satisfy the conditions of the theorem, and $X' \geq X$ a.s., then $Y' \geq Y$ a.s.
+ \end{claim}
+
+ \begin{proof}
+ Define the event $A = \{Y' \leq Y\} \in \mathcal{G}$. Consider the random variable $Z = (Y - Y')\mathbf{1}_A$. Then $Z \geq 0$. We then have
+ \[
+ \E Y' \mathbf{1}_A = \E X' \mathbf{1}_A \geq \E X \mathbf{1}_A = \E Y \mathbf{1}_A.
+ \]
+ So it follows that we also have $\E(Y - Y')\mathbf{1}_A \leq 0$. So in fact $\E Z = 0$. So $Y' \geq Y$ a.s.
+ \end{proof}
+ We can now define $Y_n = \E(X_n \mid \mathcal{G})$, picking them so that $\{Y_n\}$ is increasing. We then take $Y_\infty = \lim Y_n$. Then $Y_\infty$ is certainly $\mathcal{G}$-measurable, and by monotone convergence, if $A \in \mathcal{G}$, then
+ \[
+ \E X \mathbf{1}_A = \lim \E X_n \mathbf{1}_A = \lim \E Y_n \mathbf{1}_A = \E Y_\infty \mathbf{1}_A.
+ \]
+ Now if $\E X < \infty$, then $\E Y_\infty = \E X < \infty$. So we know $Y_\infty$ is finite a.s., and we can define $Y = Y_\infty \mathbf{1}_{Y_\infty < \infty}$.
+
+ Finally, we work with arbitrary $X \in L^1$. We can write $X = X^+ - X^-$, and then define $Y^\pm = \E (X^\pm \mid \mathcal{G})$, and take $Y = Y^+ - Y^-$.
+
+ Uniqueness is then clear.
+\end{proof}
+
+\begin{lemma}
+ If $Y$ is $\sigma(Z)$-measurable, then there exists $h: \R \to \R$ Borel-measurable such that $Y = h(Z)$. In particular,
+ \[
+ \E(X \mid Z) = h(Z) \text{ a.s.}
+ \]
+ for some $h: \R \to \R$.
+\end{lemma}
+We can then define $\E(X \mid Z = z) = h(z)$. The point of doing this is that we want to allow for the case where in fact we have $\P(Z = z) = 0$, in which case our original definition does not make sense.
+
+\begin{ex}
+ Consider $X \in L^1$, and $Z: \Omega \to \N$ discrete. Compute $\E(X \mid Z)$ and compare our different definitions of conditional expectation.
+\end{ex}
+
+\begin{eg}
+ Let $(U, V) \in \R^2$ with density $f_{U, V}(u, v)$, so that for any $B_1, B_2 \in \mathcal{B}$, we have
+ \[
+ \P(U \in B_1, V \in B_2) = \int_{B_1} \int_{B_2} f_{U, V}(u, v) \;\d u \;\d v.
+ \]
+ We want to compute $\E(h(V) \mid U)$, where $h: \R \to \R$ is Borel measurable. We can define
+ \[
+ f_U(u) = \int_\R f_{U, V} (u, v) \;\d v,
+ \]
+ and we define the conditional density of $V$ given $U$ by
+ \[
+ f_{V \mid U} (v \mid u) = \frac{f_{U, V}(u, v)}{f_U(u)}.
+ \]
+ We define
+ \[
+ g(u) = \int h(v) f_{V \mid U} (v \mid u)\;\d v.
+ \]
+ We claim that $\E(h(V) \mid U)$ is just $g(U)$.
+
+ To check this, we show that it satisfies the two desired conditions. It is clear that it is $\sigma(U)$-measurable. To check the second condition, fix an $A \in \sigma(U)$. Then $A = \{U \in B\}$ for some $B \in \mathcal{B}$. Then
+ \begin{align*}
+ \E(h(V) \mathbf{1}_A) &= \iint h(v) \mathbf{1}_{u \in B} f_{U, V} (u, v)\;\d u\;\d v\\
+ &= \iint h(v) \mathbf{1}_{u \in B} f_{V \mid U}(v \mid u) f_U(u)\;\d u\;\d v\\
+ &= \int g(u) \mathbf{1}_{u \in B} f_U(u) \;\d u\\
+ &= \E(g(U) \mathbf{1}_A),
+ \end{align*}
+ as desired.
+\end{eg}
+The point of this example is that to compute conditional expectations, we use our intuition to guess what the conditional expectation should be, and then check that it satisfies the two uniquely characterizing properties.
+
+\begin{eg}
+ Suppose $(X, W)$ are Gaussian. Then for all linear functions $\varphi: \R^2 \to \R$, the quantity $\varphi(X, W)$ is Gaussian.
+
+ One nice property of Gaussians is that lack of correlation implies independence. We want to compute $\E(X \mid W)$. Note that if $Y$ is such that $\E X = \E Y$, $X - Y$ is independent of $W$, and $Y$ is $\sigma(W)$-measurable, then $Y = \E(X \mid W)$, since
+ $\E(X - Y) \mathbf{1}_A = 0$ for all $A \in \sigma(W)$.
+
+ The guess is that we want $Y$ to be a Gaussian variable. We put $Y = aW + b$. Then $\E X = \E Y$ implies we must have
+ \[
+ a \E W + b = \E X.\tag{$*$}
+ \]
+ The independence part requires $\cov(X - Y, W) = 0$. Since covariance is linear, we have
+ \[
+ 0 = \cov(X - Y, W) = \cov(X, W) - \cov(aW + b, W) = \cov(X, W) - a \cov(W, W).
+ \]
+ Recalling that $\cov(W, W) = \var(W)$, we need
+ \[
+ a = \frac{\cov(X, W)}{\var(W)}.
+ \]
+ This then allows us to use $(*)$ to compute $b$ as well. This is how we compute the conditional expectation of Gaussians.
+\end{eg}
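This recipe is easy to test by simulation. In the illustrative Python sketch below (the particular pair $X = 2W + 3 + \text{noise}$ is my own choice), the recovered coefficients should be close to $a = 2$ and $b = 3$:

```python
import random

random.seed(1)
N = 200_000
# a jointly Gaussian pair: W ~ N(0,1), X = 2W + 3 + independent N(0,1) noise
W = [random.gauss(0, 1) for _ in range(N)]
X = [2 * w + 3 + random.gauss(0, 1) for w in W]

mW, mX = sum(W) / N, sum(X) / N
cov = sum((x - mX) * (w - mW) for x, w in zip(X, W)) / N
var = sum((w - mW) ** 2 for w in W) / N

a = cov / var      # a = cov(X, W) / var(W)
b = mX - a * mW    # from E X = a E W + b
# E(X | W) = a W + b, with (a, b) close to (2, 3)
```

This is just linear regression of $X$ on $W$, which for jointly Gaussian variables recovers the conditional expectation exactly.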
+
+We note some immediate properties of conditional expectation. As usual, all (in)equality and convergence statements are to be taken with the quantifier ``almost surely''.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\E(X \mid \mathcal{G}) = X$ iff $X$ is $\mathcal{G}$-measurable.
+ \item $\E(\E(X \mid \mathcal{G})) = \E X$
+ \item If $X \geq 0$ a.s., then $\E(X \mid \mathcal{G}) \geq 0$
+ \item If $X$ and $\mathcal{G}$ are independent, then $\E(X \mid \mathcal{G}) = \E[X]$
+ \item If $\alpha, \beta \in \R$ and $X_1, X_2 \in L^1$, then
+ \[
+ \E(\alpha X_1 + \beta X_2 \mid \mathcal{G}) = \alpha \E(X_1 \mid\mathcal{G}) + \beta \E(X_2 \mid \mathcal{G}).
+ \]
+ \item Suppose $X_n \nearrow X$. Then
+ \[
+ \E(X_n\mid \mathcal{G}) \nearrow \E(X \mid \mathcal{G}).
+ \]
+ \item \term{Fatou's lemma}: If $X_n$ are non-negative measurable, then
+ \[
+ \E\left(\liminf_{n \to \infty} X_n \mid \mathcal{G}\right) \leq \liminf_{n \to \infty} \E(X_n \mid \mathcal{G}).
+ \]
+ \item \emph{Dominated convergence theorem}\index{dominated convergence theorem}: If $X_n \to X$ and $Y \in L^1$ such that $Y \geq |X_n|$ for all $n$, then
+ \[
+ \E(X_n \mid \mathcal{G}) \to \E(X \mid \mathcal{G}).
+ \]
+ \item \term{Jensen's inequality}: If $c: \R \to \R$ is convex, then
+ \[
+ \E(c(X) \mid \mathcal{G}) \geq c(\E(X \mid \mathcal{G})).
+ \]
+ \item \emph{Tower property}\index{tower property}: If $\mathcal{H} \subseteq \mathcal{G}$, then
+ \[
+ \E(\E(X \mid \mathcal{G}) \mid \mathcal{H}) = \E(X \mid \mathcal{H}).
+ \]
+ \item For $p \geq 1$,
+ \[
+ \|\E(X \mid \mathcal{G})\|_p \leq \|X\|_p.
+ \]
+ \item If $Z$ is bounded and $\mathcal{G}$-measurable, then
+ \[
+ \E(ZX \mid \mathcal{G}) = Z \E(X \mid \mathcal{G}).
+ \]
+ \item Let $X \in L^1$ and $\mathcal{G}, \mathcal{H} \subseteq \mathcal{F}$. Assume that $\sigma(X, \mathcal{G})$ is independent of $\mathcal{H}$. Then
+ \[
+ \E (X \mid \mathcal{G}) = \E(X \mid \sigma(\mathcal{G}, \mathcal{H})).
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate} % actually check these
+ \item Clear.
+ \item Take $A = \Omega$.
+ \item Shown in the proof.
+ \item Clear by property of expected value of independent variables.
+ \item Clear, since the RHS satisfies the unique characterizing property of the LHS.
+ \item Clear from construction.
+ \item Same as the unconditional proof, using the previous property.
+ \item Same as the unconditional proof, using the previous property.
+ \item Same as the unconditional proof.
+ \item The LHS satisfies the characterizing property of the RHS.
+ \item Using the convexity of $|x|^p$, Jensen's inequality tells us
+ \begin{align*}
+ \|\E(X \mid \mathcal{G})\|_p^p &= \E |\E(X \mid \mathcal{G})|^p\\
+ &\leq \E (\E (|X|^p \mid \mathcal{G}))\\
+ &= \E |X|^p\\
+ &= \|X\|_p^p
+ \end{align*}
+ \item Suppose first that $Z = \mathbf{1}_B$ with $B \in \mathcal{G}$, and let $A \in \mathcal{G}$. Then
+ \[
+ \E(Z \E(X \mid \mathcal{G}) \mathbf{1}_A) = \E (\E (X \mid \mathcal{G}) \cdot \mathbf{1}_{A \cap B}) = \E(X \mathbf{1}_{A \cap B}) = \E(Z X \mathbf{1}_A).
+ \]
+ So the claim holds for indicators. Linearity then implies the result for $Z$ simple, and then we apply our favorite convergence theorems.
+ \item Take $B \in \mathcal{H}$ and $A \in \mathcal{G}$. Then
+ \begin{align*}
+ \E(\E(X \mid \sigma(\mathcal{G}, \mathcal{H}))\cdot \mathbf{1}_{A \cap B}) &= \E(X \cdot \mathbf{1}_{A \cap B})\\
+ &= \E(X \mathbf{1}_A) \P(B)\\
+ &= \E(\E(X \mid \mathcal{G}) \mathbf{1}_A) \P(B)\\
+ &= \E(\E(X \mid \mathcal{G}) \mathbf{1}_{A \cap B})
+ \end{align*}
+ If instead of $A \cap B$, we had any $\sigma(\mathcal{G}, \mathcal{H})$-measurable set, then we would be done. But we are fine, since the set of subsets of the form $A \cap B$ with $A \in \mathcal{G}$, $B \in \mathcal{H}$ is a generating $\pi$-system for $\sigma(\mathcal{H}, \mathcal{G})$. \qedhere
+ \end{enumerate}
+\end{proof}
+
+We shall end with the following key lemma. We will later use it to show that many of our \emph{martingales} are uniformly integrable.
+\begin{lemma}
+ If $X \in L^1$, then the family of random variables $Y_{\mathcal{G}} = \E(X \mid \mathcal{G})$ for all $\mathcal{G} \subseteq \mathcal{F}$ is uniformly integrable.
+
+ In other words, for all $\varepsilon > 0$, there exists $\lambda > 0$ such that
+ \[
+ \E(|Y_{\mathcal{G}}| \mathbf{1}_{|Y_{\mathcal{G}}| > \lambda}) < \varepsilon
+ \]
+ for all $\mathcal{G}$.
+\end{lemma}
+
+\begin{proof}
+ Fix $\varepsilon > 0$. Then there exists $\delta > 0$ such that $\E |X|\mathbf{1}_A < \varepsilon$ for any $A$ with $\P(A) < \delta$.
+
+ Take $Y = \E (X \mid \mathcal{G})$. Then by Jensen, we know
+ \[
+ |Y| \leq \E(|X| \mid \mathcal{G})
+ \]
+ In particular, we have
+ \[
+ \E|Y| \leq \E|X|.
+ \]
+ By Markov's inequality, we have
+ \[
+ \P(|Y| \geq \lambda) \leq \frac{\E|Y|}{\lambda} \leq \frac{\E|X|}{\lambda}.
+ \]
+ So take $\lambda$ such that $\frac{\E|X|}{\lambda} < \delta$. So we have
+ \[
+ \E(|Y| \mathbf{1}_{|Y| \geq \lambda}) \leq \E(\E(|X| \mid \mathcal{G})\mathbf{1}_{|Y| \geq \lambda}) = \E(|X| \mathbf{1}_{|Y| \geq \lambda}) < \varepsilon
+ \]
+ using that $\mathbf{1}_{|Y| \geq \lambda}$ is a $\mathcal{G}$-measurable function.
+\end{proof}
+
+\section{Martingales in discrete time}
+\subsection{Filtrations and martingales}
+We would like to model some random variable that ``evolves with time''. For example, in a simple random walk, $X_n$ could be our position at time $n$. To do so, we would like to have some $\sigma$-algebras $\mathcal{F}_n$ that tell us the ``information we have at time $n$''. This structure is known as a \emph{filtration}.
+
+\begin{defi}[Filtration]\index{filtration}
+ A \emph{filtration} is a sequence of $\sigma$-algebras $(\mathcal{F}_n)_{n \geq 0}$ such that $\mathcal{F} \supseteq \mathcal{F}_{n + 1} \supseteq \mathcal{F}_n$ for all $n$. We define $\mathcal{F}_\infty = \sigma(\mathcal{F}_0, \mathcal{F}_1, \ldots) \subseteq \mathcal{F}$.
+\end{defi}
+
+We will from now on assume $(\Omega, \mathcal{F}, \P)$ is equipped with a filtration $(\mathcal{F}_n)_{n \geq 0}$.
+
+\begin{defi}[Stochastic process in discrete time]\index{discrete stochastic process}\index{stochastic process!discrete}
+ A \emph{stochastic process} (in discrete time) is a sequence of random variables $(X_n)_{n \geq 0}$.
+\end{defi}
+
+This is a very general definition, and in most cases, we would want $X_n$ to interact nicely with our filtration.
+\begin{defi}[Natural filtration]\index{natural filtration}\index{filtration!natural}
+ The \emph{natural filtration} of $(X_n)_{n \geq 0}$ is given by
+ \[
+ \mathcal{F}_n^X = \sigma(X_0, X_1, \ldots, X_n).
+ \]
+\end{defi}
+
+\begin{defi}[Adapted process]\index{adapted process}\index{stochastic process!adapted}
+ We say that $(X_n)_{n \geq 0}$ is \emph{adapted} (to $(\mathcal{F}_n)_{n \geq 0}$) if $X_n$ is $\mathcal{F}_n$-measurable for all $n \geq 0$. Equivalently, if $\mathcal{F}^X_n \subseteq \mathcal{F}_n$.
+\end{defi}
+
+\begin{defi}[Integrable process]\index{integrable process}\index{stochastic process!integrable}
+ A process $(X_n)_{n \geq 0}$ is \emph{integrable} if $X_n \in L^1$ for all $n \geq 0$.
+\end{defi}
+
+We can now write down the definition of a martingale.
+\begin{defi}[Martingale]\index{martingale}
+ An integrable adapted process $(X_n)_{n \geq 0}$ is a \emph{martingale} if for all $n \geq m$, we have
+ \[
+ \E(X_n \mid \mathcal{F}_m) = X_m.
+ \]
+ We say it is a \term{super-martingale} if
+ \[
+ \E(X_n \mid \mathcal{F}_m) \leq X_m,
+ \]
+ and a \term{sub-martingale} if
+ \[
+ \E(X_n \mid \mathcal{F}_m) \geq X_m,
+ \]
+\end{defi}
+Note that it is enough to take $m = n - 1$ for all $n \geq 1$, using the tower property.
+
+The idea of a martingale is that we cannot predict whether $X_n$ will go up or go down in the future even if we have all the information up to the present. For example, if $X_n$ denotes the wealth of a gambler in a gambling game, then in some sense $(X_n)_{n \geq 0}$ being a martingale means the game is ``fair'' (in the sense of a fair die).
+
+Note that $(X_n)_{n \geq 0}$ is a super-martingale iff $(-X_n)_{n \geq 0}$ is a sub-martingale, and if $(X_n)_{n \geq 0}$ is a martingale, then it is both a super-martingale and a sub-martingale. Often, what these extra notions buy us is that we can formulate our results for super-martingales (or sub-martingales), and then by applying the result to both $(X_n)_{n \geq 0}$ and $(-X_n)_{n \geq 0}$, we obtain the desired, stronger result for martingales.
+
+\subsection{Stopping time and optional stopping}
+The optional stopping theorem says the definition of a martingale in fact implies an \emph{a priori} much stronger property. To formulate the optional stopping theorem, we need the notion of a stopping time.
+
+\begin{defi}[Stopping time]\index{stopping time}
+ A \emph{stopping time} is a random variable $T: \Omega \to \N_{ \geq 0} \cup \{\infty\}$ such that
+ \[
+ \{T \leq n\} \in \mathcal{F}_n
+ \]
+ for all $n \geq 0$.
+\end{defi}
+This means that at time $n$, if we want to know if $T$ has occurred, we can determine it using the information we have at time $n$.
+
+Note that $T$ is a stopping time iff $\{T = n \} \in \mathcal{F}_n$ for all $n$, since if $T$ is a stopping time, then
+\[
+ \{T = n\} = \{T \leq n\} \setminus \{T \leq n - 1\},
+\]
+and $\{T \leq n- 1\} \in \mathcal{F}_{n - 1} \subseteq \mathcal{F}_n$. Conversely,
+\[
+ \{T \leq n \} = \bigcup_{k = 0}^n \{T = k\} \in \mathcal{F}_n.
+\]
+This will not be true in the continuous case.
+
+\begin{eg}
+ If $B \in \mathcal{B}(\R)$, then we can define
+ \[
+ T = \inf \{n : X_n \in B\}.
+ \]
+ Then this is a stopping time.
+
+ On the other hand,
+ \[
+ T = \sup \{n: X_n \in B\}
+ \]
+ is not a stopping time (in general).
+\end{eg}
+
+Given a stopping time, we can make the following definition:
+\begin{defi}[$X_T$]\index{$X_T$}
+ For a stopping time $T$, we define the random variable $X_T$ by
+ \[
+ X_T (\omega) = X_{T(\omega)}(\omega)
+ \]
+ on $\{T < \infty\}$, and $0$ otherwise.
+\end{defi}
+Later, for suitable martingales, we will see that the limit $X_\infty = \lim_{n \to \infty} X_n$ makes sense. In that case, we define $X_T(\omega)$ to be $X_\infty(\omega)$ if $T = \infty$.
+
+Similarly, we can define
+\begin{defi}[Stopped process]\index{stopped process}
+ The \emph{stopped process} is defined by
+ \[
+ (X_n^T)_{n \geq 0} = (X_{T(\omega) \wedge n}(\omega))_{n \geq 0}.
+ \]
+\end{defi}
+This says we stop evolving the random variable $X$ once $T$ has occurred.
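A stopped simple random walk makes this concrete. In the illustrative Python sketch below (the barrier level $3$ is my own choice), $T$ is the first hitting time of the barrier, and the stopped process freezes there:

```python
import random

random.seed(2)

def stopped_walk(n_steps, barrier=3):
    """Path of X^T, where X is a simple random walk and T = inf{n : X_n = barrier}."""
    x, path = 0, [0]
    for _ in range(n_steps):
        if x != barrier:               # before T, keep evolving
            x += random.choice([-1, 1])
        path.append(x)                 # after T, X^T_n = X_T stays put
    return path

path = stopped_walk(100)
hit = next((i for i, v in enumerate(path) if v == 3), None)
if hit is not None:
    assert all(v == 3 for v in path[hit:])  # frozen once the barrier is hit
```

Note that deciding whether to take another step only uses the path so far, which is exactly the stopping time property $\{T \leq n\} \in \mathcal{F}_n$.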
+
+We would like to say that $X_T$ is ``$\mathcal{F}_T$-measurable'', i.e.\ to compute $X_T$, we only need to know the information up to time $T$. After some thought, we see that the following is the correct definition of $\mathcal{F}_T$:
+\begin{defi}[$\mathcal{F}_T$]\index{$\mathcal{F}_T$}
+ For a stopping time $T$, define
+ \[
+ \mathcal{F}_T = \{A \in \mathcal{F}_\infty : A \cap \{T \leq n\} \in \mathcal{F}_n \text{ for all } n\}.
+ \]
+\end{defi}
+This is easily seen to be a $\sigma$-algebra.
+
+\begin{eg}
+ If $T \equiv n$ is constant, then $\mathcal{F}_T = \mathcal{F}_n$.
+\end{eg}
+
+There are some fairly immediate properties of these objects, whose proof is left as an exercise for the reader:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If $T, S, (T_n)_{n \geq 0}$ are all stopping times, then
+ \[
    T \vee S,\quad T \wedge S,\quad \sup_n T_n,\quad \inf_n T_n,\quad \limsup_n T_n,\quad \liminf_n T_n
+ \]
+ are all stopping times.
  \item $\mathcal{F}_T$ is a $\sigma$-algebra.
+ \item If $S \leq T$, then $\mathcal{F}_S \subseteq \mathcal{F}_T$.
+ \item $X_T \mathbf{1}_{T < \infty}$ is $\mathcal{F}_T$-measurable.
+ \item If $(X_n)$ is an adapted process, then so is $(X^T_n)_{n \geq 0}$ for any stopping time $T$.
+ \item If $(X_n)$ is an integrable process, then so is $(X^T_n)_{n \geq 0}$ for any stopping time $T$.\fakeqed
+ \end{enumerate}
+\end{prop}
+
+We now come to the fundamental property of martingales.
+\begin{thm}[Optional stopping theorem]\index{optional stopping theorem}
+ Let $(X_n)_{n \geq 0}$ be a super-martingale and $S \leq T$ \emph{bounded} stopping times. Then
+ \[
+ \E X_T \leq \E X_S.
+ \]
+\end{thm}
+
+\begin{proof}
+ Follows from the next theorem.
+\end{proof}
+What does this theorem mean? If $X$ is a martingale, then it is both a super-martingale and a sub-martingale. So we can apply this to both $X$ and $-X$, and so we have
+\[
+ \E(X_T) = \E(X_S).
+\]
+In particular, since $0$ is a stopping time, we see that
+\[
+ \E X_T = \E X_0
+\]
+for \emph{any} bounded stopping time $T$.
+
+Recall that martingales are supposed to model fair games. If we again think of $X_n$ as the wealth at time $n$, and $T$ as the time we stop gambling, then this says no matter how we choose $T$, as long as it is bounded, the expected wealth at the end is the same as what we started with.
+
+\begin{eg}
+ Consider the stopping time
+ \[
+ T = \inf \{n : X_n = 1\},
+ \]
 and take $X$ to be, say, a simple symmetric random walk started at $0$, so that $\E X_0 = 0$. Since the walk hits $1$ almost surely, we have $\E X_T = 1$. So this tells us $T$ is not a bounded stopping time!
+\end{eg}
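The boundedness hypothesis can be seen in action numerically. The following sketch (Python, not part of the course; the walk and horizon $N$ are illustrative choices) verifies $\E X_{T \wedge N} = \E X_0 = 0$ exactly by enumerating all paths, even though $\E X_T = 1$:

```python
from itertools import product

# Sketch (assumptions: X is a simple symmetric random walk from 0,
# T = inf{n : X_n = 1}).  The truncation T ∧ N is a *bounded* stopping
# time, so optional stopping forces E[X_{T ∧ N}] = E[X_0] = 0.  We
# check this exactly by averaging over all 2^N equally likely paths.
N = 12
total = 0
for steps in product([-1, 1], repeat=N):
    x = 0
    value = None
    for s in steps:
        x += s
        if x == 1:
            value = x  # T occurred before N: the walk is stopped at 1
            break
    if value is None:
        value = x      # T > N: take X_N instead
    total += value
print(total / 2 ** N)  # 0.0
```

As $N$ grows, more and more paths are stopped at $1$, but the unstopped paths drift negative in just the right proportion to keep the mean at $0$.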
+
+\begin{thm}
+ The following are equivalent:
+ \begin{enumerate}
+ \item $(X_n)_{n \geq 0}$ is a super-martingale.
  \item For any bounded stopping time $T$ and any stopping time $S$,
+ \[
+ \E(X_T \mid \mathcal{F}_S) \leq X_{S \wedge T}.
+ \]
+ \item $(X_n^T)$ is a super-martingale for any stopping time $T$.
+ \item For bounded stopping times $S, T$ such that $S \leq T$, we have
+ \[
+ \E X_T \leq \E X_S.
+ \]
+ \end{enumerate}
+\end{thm}
+In particular, (iv) implies (i).
+
+\begin{proof}\leavevmode
+ \begin{itemize}
  \item (ii) $\Rightarrow$ (iii): Consider $(X_n^{T'})_{n \geq 0}$ for a stopping time $T'$. To check that this is a super-martingale, we need to prove that whenever $m \leq n$,
+ \[
+ \E(X_{n \wedge T'} \mid \mathcal{F}_m) \leq X_{m \wedge T'}.
+ \]
+ But this follows from (ii) above by taking $S = m$ and $T = T' \wedge n$.
+ \item (ii) $\Rightarrow$ (iv): Clear by the tower law.
+ \item (iii) $\Rightarrow$ (i): Take $T = \infty$.
+ \item (i) $\Rightarrow$ (ii): Assume $T \leq n$. Then
+ \begin{align*}
+ X_T &= X_{S \wedge T} + \sum_{S \leq k < T} (X_{k + 1} - X_k)\\
+ &= X_{S \wedge T} + \sum_{k = 0}^n (X_{k + 1} - X_k) \mathbf{1}_{S \leq k < T} \tag{$*$}
+ \end{align*}
+ Now note that $\{S \leq k < T\} = \{S \leq k\} \cap \{T \leq k\}^c \in \mathcal{F}_k$. Let $A \in \mathcal{F}_S$. Then $A \cap \{S \leq k\} \in \mathcal{F}_k$ by definition of $\mathcal{F}_S$. So $A \cap \{S \leq k < T\} \in \mathcal{F}_k$.
+
 Apply $\E$ to $(*) \times \mathbf{1}_A$. Then we have
+ \[
  \E (X_T \mathbf{1}_A) = \E(X_{S \wedge T} \mathbf{1}_A) + \sum_{k = 0}^n \E\left[(X_{k + 1} - X_k) \mathbf{1}_{A \cap \{S \leq k < T\}}\right].
+ \]
+ But for all $k$, we know
+ \[
  \E\left[(X_{k + 1} - X_k) \mathbf{1}_{A \cap \{S \leq k < T\}}\right] \leq 0,
+ \]
+ since $X$ is a super-martingale. So it follows that for all $A \in \mathcal{F}_S$, we have
+ \[
+ \E(X_T \cdot \mathbf{1}_A) \leq \E(X_{S \wedge T} \mathbf{1}_A).
+ \]
+ But since $X_{S \wedge T}$ is $\mathcal{F}_{S \wedge T}$ measurable, it is in particular $\mathcal{F}_S$ measurable. So it follows that for all $A \in \mathcal{F}_S$, we have
+ \[
+ \E(\E (X_T \mid \mathcal{F}_S) \mathbf{1}_A) \leq \E(X_{S \wedge T} \mathbf{1}_A).
+ \]
+ So the result follows.
+ \item (iv) $\Rightarrow$ (i): Fix $m \leq n$ and $A \in \mathcal{F}_m$. Take
+ \[
  T = m \mathbf{1}_A + n \mathbf{1}_{A^c}.
+ \]
+ One then manually checks that this is a stopping time. Now note that
+ \[
  X_T = X_m \mathbf{1}_{A} + X_n \mathbf{1}_{A^c}.
+ \]
+ So we have
+ \begin{align*}
+ 0 &\geq \E(X_n) - \E(X_T) \\
+ &= \E(X_n) - \E(X_n \mathbf{1}_{A^c}) - \E(X_m \mathbf{1}_A) \\
+ &= \E (X_n \mathbf{1}_A) - \E (X_m \mathbf{1}_A).
+ \end{align*}
+ Then the same argument as before gives the result.\qedhere
+ \end{itemize}
+\end{proof}
+
+\subsection{Martingale convergence theorems}
+One particularly nice property of martingales is that they have nice convergence properties. We shall begin by proving a pointwise version of martingale convergence.
+
+\begin{thm}[Almost sure martingale convergence theorem]\index{martingale convergence theorem!almost sure}
+ Suppose $(X_n)_{n \geq 0}$ is a super-martingale that is bounded in $L^1$, i.e.\ $\sup_n \E|X_n| < \infty$. Then there exists an $\mathcal{F}_\infty$-measurable $X_\infty \in L^1$ such that
+ \[
+ X_n \to X_\infty\text{ a.s. as }n \to \infty.
+ \]
+\end{thm}
+
+To begin, we need a convenient characterization of when a series converges.
+\begin{defi}[Upcrossing]\index{upcrossing}
 Let $(x_n)$ be a sequence and $(a, b)$ an interval. An \emph{upcrossing} of $(a, b)$ by $(x_n)$ is a sequence $j, j + 1, \ldots, k$ such that $x_j \leq a$ and $x_k \geq b$. We define\index{$U_n[a, b, (x_n)]$}\index{$U[a, b, (x_n)]$}
+ \begin{align*}
+ U_n[a, b, (x_n)] &= \text{number of disjoint upcrossings contained in }\{1, \ldots, n\}\\
  U[a, b, (x_n)] &= \lim_{n \to \infty} U_n[a, b, (x_n)].
+ \end{align*}
+\end{defi}
+
+We can then make the following elementary observation:
+\begin{lemma}
+ Let $(x_n)_{n \geq 0}$ be a sequence of numbers. Then $x_n$ converges in $\R$ if and only if
+ \begin{enumerate}
+ \item $\liminf |x_n| < \infty$.
+ \item For all $a, b \in \Q$ with $a < b$, we have $U[a, b, (x_n)] < \infty$.
+ \end{enumerate}
+\end{lemma}
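The upcrossing count in the lemma is easy to compute for a concrete sequence. Here is a hedged Python sketch (not part of the notes; the test sequences are arbitrary illustrations) that counts disjoint upcrossings greedily from the left, as in the definition:

```python
# Sketch: counting disjoint upcrossings of (a, b) by a finite sequence,
# in the greedy left-to-right fashion of the definition.
def upcrossings(xs, a, b):
    count, seeking_low = 0, True
    for x in xs:
        if seeking_low and x <= a:
            seeking_low = False   # found some x_j <= a
        elif not seeking_low and x >= b:
            count += 1            # found a later x_k >= b: one upcrossing
            seeking_low = True
    return count

osc = [(-1) ** n for n in range(100)]                   # oscillates forever
conv = [(-1) ** (n + 1) / (n + 1) for n in range(100)]  # converges to 0
print(upcrossings(osc, -0.5, 0.5))   # 49
print(upcrossings(conv, -0.5, 0.5))  # 1
```

The oscillating sequence keeps upcrossing $(-\frac12, \frac12)$ as long as we let it run, while the convergent one manages only finitely many upcrossings of any interval, exactly as the lemma predicts.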
For our martingales, since they are bounded in $L^1$, Fatou's lemma tells us $\E \liminf |X_n| < \infty$. So $\liminf |X_n| < \infty$ almost surely. Thus, it remains to show that for any fixed $a < b \in \Q$, we have $\P(U[a, b, (X_n)] = \infty) = 0$. This is a consequence of Doob's upcrossing lemma.
+
+\begin{lemma}[Doob's upcrossing lemma]\index{Doob's upcrossing lemma}
+ If $X_n$ is a super-martingale, then
+ \[
  (b - a) \E(U_n[a, b, (X_n)]) \leq \E[(X_n - a)^-].
+ \]
+\end{lemma}
+
+\begin{proof}
+ Assume that $X$ is a positive super-martingale. We define stopping times $S_k, T_k$ as follows:
+ \begin{itemize}
+ \item $T_0 = 0$
  \item $S_{k + 1} = \inf\{n: X_n \leq a, n \geq T_k\}$
+ \item $T_{k + 1} = \inf\{n: X_n \geq b, n \geq S_{k + 1}\}$.
+ \end{itemize}
+ Given an $n$, we want to count the number of upcrossings before $n$. There are two cases we might want to distinguish:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-0.1, 0) node [left] {$b$} -- (4, 0);
+ \draw (-0.1, -2) node [left] {$a$} -- (4, -2);
+ \draw (3, -2.5) node [below] {$n$} -- (3, 0.5);
+
+ \draw plot [smooth] coordinates {(0, -2.2) (0.5, 0.2) (1.1, -0.5) (1.3, 0.1) (2.2, -2.3) (2.8, 0.4) (3.5, 0.4)};
+
+ \begin{scope}[shift={(6, 0)}]
+ \draw (-0.1, 0) node [left] {$b$} -- (4, 0);
+ \draw (-0.1, -2) node [left] {$a$} -- (4, -2);
+ \draw (3, -2.5) node [below] {$n$} -- (3, 0.5);
+
+ \draw plot [smooth] coordinates {(0, -2.2) (0.5, 0.2) (1.1, -0.5) (1.3, 0.1) (2.2, -2.3) (3.6, 0.4)};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ Now consider the sum
+ \[
  \sum_{k = 1}^n \left(X_{T_k \wedge n} - X_{S_k \wedge n}\right).
+ \]
+ In the first case, this is equal to
+ \[
  \sum_{k = 1}^{U_n} \left(X_{T_k} - X_{S_k}\right) + \sum_{k = U_n + 1}^n \left(X_n - X_n\right) \geq (b - a) U_n.
+ \]
+ In the second case, it is equal to
+ \[
  \sum_{k = 1}^{U_n} \left(X_{T_k} - X_{S_k}\right) + (X_n - X_{S_{U_n + 1}}) + \sum_{k = U_n + 2}^n \left(X_n - X_n\right) \geq (b - a) U_n + (X_n - X_{S_{U_n + 1}}).
+ \]
+ Thus, in general, we have
+ \[
  \sum_{k = 1}^n \left(X_{T_k \wedge n} - X_{S_k \wedge n}\right) \geq (b - a) U_n + (X_n - X_{S_{U_n + 1} \wedge n}).
+ \]
 Since $S_k \wedge n \leq T_k \wedge n$ are bounded stopping times, each summand on the left has non-positive expectation by optional stopping for super-martingales, and thus
+ \[
+ 0 \geq (b - a) \E U_n + \E(X_n - X_{S_{U_n + 1} \wedge n}).
+ \]
 Then observe that on the event $\{S_{U_n + 1} \leq n\}$ we have $X_{S_{U_n + 1}} \leq a$, while otherwise the difference vanishes, so
 \[
  X_n - X_{S_{U_n + 1} \wedge n} \geq - (X_n - a)^-.\qedhere
 \]
+\end{proof}
+%
+%\begin{proof}[Proof of theorem]
+% Define
+% \[
+% \Omega_\infty = \{\omega: \limsup (X_n(\omega))< \infty\}.
+% \]
+% Moreover, for all $a < b \in \R$, define
+% \[
+% \Omega_{a, b} = \{\omega: U[a, b, (X_n)] < \infty\}.
+% \]
+% By our lemma, we know that $X_n$ converges on
+% \[
+% \Omega_\infty \cap \left(\cap_{a < b \in \Q} \Omega_{a, b}\right).
+% \]
+% Moreover, by Doob's upcrossings lemma, we know
+% \[
+% \P\left(\Omega_\infty \cap \left(\cap_{a < b \in \Q} \Omega_{a, b}\right)\right) = 1
+% \]
+% since we are taking a countable intersection.
+%\end{proof}
+
+The almost-sure martingale convergence theorem is very nice, but often it is not good enough. For example, we might want convergence in $L^p$ instead. The following example shows this isn't always possible:
+
+\begin{eg}
+ Suppose $(\rho_n)_{n \geq 0}$ is a sequence of iid random variables and
+ \[
+ \P(\rho_n = 0) = \frac{1}{2} = \P(\rho_n = 2).
+ \]
+ Let
+ \[
+ X_n = \prod_{k = 0}^n \rho_k.
+ \]
+ Then this is a martingale, and $\E X_n = 1$. On the other hand, $X_n \to 0$ almost surely. So $\|X_n - X_\infty\|_1$ does not converge to $0$.
+\end{eg}
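This example can be checked exactly by brute force. The following Python sketch (not from the notes; $n$ is an arbitrary illustrative choice) enumerates all $2^{n+1}$ equally likely outcomes of $(\rho_0, \ldots, \rho_n)$:

```python
from itertools import product

# Sketch: an exact check of the example.  With rho_k taking values 0 and
# 2 with probability 1/2 each, X_n = rho_0 * ... * rho_n has E[X_n] = 1
# (only the all-2 outcome survives), yet P(X_n != 0) = 2^{-(n+1)} -> 0,
# so X_n -> 0 a.s., and X_n cannot converge to X_infty = 0 in L^1.
n = 9
xs = []
for rho in product([0, 2], repeat=n + 1):
    x = 1
    for r in rho:
        x *= r
    xs.append(x)
mean = sum(xs) / len(xs)
p_nonzero = sum(x != 0 for x in xs) / len(xs)
print(mean, p_nonzero)  # 1.0 0.0009765625
```

The mean stays pinned at $1$ while the mass concentrates on a single enormous outcome, which is precisely the failure of uniform integrability.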
+
+For $p > 1$, if we want convergence in $L^p$, it is not surprising that we at least need the sequence to be $L^p$ bounded. We will see that this is in fact sufficient. For $p = 1$, however, we need a bit more than being bounded in $L^1$. We will need uniform integrability.
+
+To prove this, we need to establish some inequalities.
+\begin{lemma}[Maximal inequality]\index{maximal inequality}
+ Let $(X_n)$ be a sub-martingale that is non-negative, or a martingale. Define
+ \[
+ X^*_n = \sup_{k \leq n} |X_k|,\quad X^* = \lim_{n \to \infty} X_n^*.
+ \]
+ If $\lambda \geq 0$, then
+ \[
+ \lambda \P(X_n^* \geq \lambda) \leq \E[|X_n|\mathbf{1}_{X_n^* \geq \lambda}].
+ \]
+ In particular, we have
+ \[
+ \lambda \P(X_n^* \geq \lambda) \leq \E[|X_n|].
+ \]
+\end{lemma}
+Markov's inequality says almost the same thing, but has $\E[|X_n^*|]$ instead of $\E[|X_n|]$. So this is a stronger inequality.
+
+\begin{proof}
+ If $X_n$ is a martingale, then $|X_n|$ is a sub-martingale. So it suffices to consider the case of a non-negative sub-martingale. We define the stopping time
+ \[
+ T = \inf\{n: X_n \geq \lambda\}.
+ \]
+ By optional stopping,
+ \begin{align*}
  \E X_n &\geq \E X_{T \wedge n} \\
+ &= \E X_T \mathbf{1}_{T \leq n} + \E X_n \mathbf{1}_{T > n}\\
+ &\geq \lambda \P(T \leq n) + \E X_n \mathbf{1}_{T > n}\\
+ &= \lambda \P(X_n^* \geq \lambda) + \E X_n \mathbf{1}_{T > n}.\qedhere
+ \end{align*}
+\end{proof}
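Since the inequality is exact, we can verify it by complete enumeration on a small example. The sketch below (Python, not part of the course; the walk, horizon $n$ and level $\lambda$ are illustrative choices) checks the maximal inequality for the non-negative sub-martingale $|X_n|$, with $X$ a simple symmetric random walk:

```python
from itertools import product

# Sketch: an exact check of the maximal inequality for the non-negative
# sub-martingale |X_n|, where X is a simple symmetric random walk.
# We enumerate all 2^n paths, each of probability 2^{-n}.
n, lam = 8, 3
hit = 0       # number of paths with X_n^* >= lam
weighted = 0  # sum of |X_n| over those paths
for steps in product([-1, 1], repeat=n):
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    x_star = max(abs(v) for v in path)
    if x_star >= lam:
        hit += 1
        weighted += abs(path[-1])
# lam * P(X_n^* >= lam) <= E[|X_n| 1_{X_n^* >= lam}]
print(lam * hit / 2 ** n <= weighted / 2 ** n)  # True
```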
+
+\begin{lemma}[Doob's $L^p$ inequality]\index{Doob's $L^p$ inequality}\index{$L^p$ inequality}
+ For $p > 1$, we have
+ \[
+ \|X_n^*\|_p \leq \frac{p}{p - 1} \|X_n\|_p
+ \]
+ for all $n$.
+\end{lemma}
+
+\begin{proof}
+ Let $k > 0$, and consider
+ \[
+ \|X_n^* \wedge k \|^p_p = \E |X_n^* \wedge k|^p.
+ \]
+ We use the fact that
+ \[
+ x^p = \int_0^x p s^{p - 1}\;\d s.
+ \]
+ So we have
+ \begin{align*}
+ \|X_n^* \wedge k\|^p_p &= \E|X_n^* \wedge k|^p\\
+ &= \E \int_0^{X_n^* \wedge k} p x^{p - 1}\;\d x\\
+ &= \E \int_0^k p x^{p - 1} \mathbf{1}_{X_n^* \geq x}\;\d x\\
+ &= \int_0^k p x^{p - 1} \P(X_n^* \geq x)\;\d x\tag{Fubini}\\
+ &\leq \int_0^k px^{p - 2} \E X_n \mathbf{1}_{X_n^* \geq x} \;\d x\tag{maximal inequality}\\
+ &= \E X_n \int_0^k p x^{p - 2} \mathbf{1}_{X_n^* \geq x}\;\d x \tag{Fubini}\\
+ &= \frac{p}{p - 1} \E X_n (X_n^* \wedge k) ^{p - 1}\\
+ &\leq \frac{p}{p - 1} \|X_n\|_p \left(\E(X_n^* \wedge k)^p\right)^{\frac{p - 1}{p}}\tag{H\"older}\\
+ &= \frac{p}{p - 1} \|X_n\|_p \|X_n^* \wedge k\|_p^{p - 1}
+ \end{align*}
 Now divide by $\|X_n^* \wedge k\|_p^{p - 1}$, which is finite since $X_n^* \wedge k$ is bounded, and then take the limit $k \to \infty$ using monotone convergence.
+\end{proof}
+
+\begin{thm}[$L^p$ martingale convergence theorem]\index{martingale convergence theorem!$L^p$}
+ Let $(X_n)_{n \geq 0}$ be a martingale, and $p > 1$. Then the following are equivalent:
+ \begin{enumerate}
  \item $(X_n)_{n \geq 0}$ is bounded in $L^p$, i.e.\ $\sup_n \E |X_n|^p < \infty$.
+ \item $(X_n)_{n \geq 0}$ converges as $n \to \infty$ to a random variable $X_\infty \in L^p$ almost surely and in $L^p$.
+ \item There exists a random variable $Z \in L^p$ such that
+ \[
+ X_n = \E (Z \mid \mathcal{F}_n)
+ \]
+ \end{enumerate}
+ Moreover, in (iii), we always have $X_\infty = \E(Z \mid \mathcal{F}_\infty)$.
+\end{thm}
+This gives a bijection between martingales bounded in $L^p$ and $L^p(\mathcal{F}_\infty)$, sending $(X_n)_{n \geq 0} \mapsto X_\infty$.
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): If $(X_n)_{n \geq 0}$ is bounded in $L^p$, then it is bounded in $L^1$. So by the martingale convergence theorem, we know $(X_n)_{n \geq 0}$ converges almost surely to $X_\infty$. By Fatou's lemma, we have $X_\infty \in L^p$.
+
+ Now by monotone convergence, we have
+ \[
+ \|X^*\|_p = \lim_n \|X_n^*\|_p \leq \frac{p}{p - 1} \sup_n \|X_n\|_p < \infty.
+ \]
+ By the triangle inequality, we have
+ \[
+ |X_n - X_\infty| \leq 2 X^*\text{ a.s.}
+ \]
+ So by dominated convergence, we know that $X_n \to X_\infty$ in $L^p$.
+ \item (ii) $\Rightarrow$ (iii): Take $Z = X_\infty$. We want to prove that
+ \[
+ X_m = \E(X_\infty \mid \mathcal{F}_m).
+ \]
+ To do so, we show that $\|X_m - \E(X_\infty \mid \mathcal{F}_m)\|_p = 0$. For $n \geq m$, we know this is equal to
+ \[
  \|\E(X_n \mid \mathcal{F}_m) - \E(X_\infty \mid \mathcal{F}_m)\|_p = \|\E(X_n - X_\infty \mid \mathcal{F}_m)\|_p \leq \|X_n - X_\infty\|_p \to 0
+ \]
 as $n \to \infty$, where the last step uses Jensen's inequality. But the left-hand side does not depend on $n$, so it must vanish, and we are done.
  \item (iii) $\Rightarrow$ (i): Since conditional expectation is a contraction on $L^p$ (by Jensen's inequality), we already know that $(X_n)_{n \geq 0}$ is $L^p$-bounded.
+
+ To show the ``moreover'' part, note that $\bigcup_{n \geq 0} \mathcal{F}_n$ is a $\pi$-system that generates $\mathcal{F}_\infty$. So it is enough to prove that
+ \[
+ \E X_\infty \mathbf{1}_A = \E(\E(Z \mid \mathcal{F}_\infty) \mathbf{1}_A).
+ \]
+ But if $A \in \mathcal{F}_N$, then
+ \begin{align*}
+ \E X_\infty \mathbf{1}_A &= \lim_{n \to \infty} \E X_n \mathbf{1}_A\\
+ &= \lim_{n \to \infty} \E(\E(Z \mid \mathcal{F}_n) \mathbf{1}_A)\\
   &= \E (\E(Z \mid \mathcal{F}_\infty)\mathbf{1}_A),
  \end{align*}
  where the last step uses the tower property: $\mathbf{1}_A$ is $\mathcal{F}_n$-measurable for $n \geq N$, so both $\E(\E(Z \mid \mathcal{F}_n) \mathbf{1}_A)$ and $\E(\E(Z \mid \mathcal{F}_\infty)\mathbf{1}_A)$ equal $\E(Z \mathbf{1}_A)$.\qedhere
+ \end{itemize}
+\end{proof}
+
+We finally finish off the $p = 1$ case with the additional uniform integrability condition.
+\begin{thm}[Convergence in $L^1$]\index{martingale convergence theorem!$L^1$}
+ Let $(X_n)_{n \geq 0}$ be a martingale. Then the following are equivalent:
+ \begin{enumerate}
+ \item $(X_n)_{n \geq 0}$ is uniformly integrable.
+ \item $(X_n)_{n \geq 0}$ converges almost surely and in $L^1$.
+ \item There exists $Z \in L^1$ such that $X_n = \E(Z \mid \mathcal{F}_n)$ almost surely.
+ \end{enumerate}
+ Moreover, $X_\infty = \E(Z \mid \mathcal{F}_\infty)$.
+\end{thm}
+
+The proof is very similar to the $L^p$ case.
+\begin{proof}\leavevmode
+ \begin{itemize}
  \item (i) $\Rightarrow$ (ii): Let $(X_n)_{n \geq 0}$ be uniformly integrable. Then $(X_n)_{n \geq 0}$ is bounded in $L^1$. So by the almost sure martingale convergence theorem, $(X_n)_{n \geq 0}$ converges to some $X_\infty$ almost surely. Then by measure theory, uniform integrability together with almost sure convergence implies that in fact $X_n \to X_\infty$ in $L^1$.
+ \item (ii) $\Rightarrow$ (iii): Same as the $L^p$ case.
+ \item (iii) $\Rightarrow$ (i): For any $Z \in L^1$, the collection $\E(Z \mid \mathcal{G})$ ranging over all $\sigma$-subalgebras $\mathcal{G}$ is uniformly integrable.\qedhere
+ \end{itemize}
+\end{proof}
+Thus, there is a bijection between uniformly integrable martingales and $L^1(\mathcal{F}_\infty)$.
+
+We now revisit optional stopping for uniformly integrable martingales. Recall that in the statement of optional stopping, we needed our stopping times to be bounded. It turns out if we require our martingales to be uniformly integrable, then we can drop this requirement.
+
+\begin{thm}
 If $(X_n)_{n \geq 0}$ is a uniformly integrable martingale, and $S, T$ are arbitrary stopping times, then $\E(X_T \mid \mathcal{F}_S) = X_{S \wedge T}$. In particular $\E X_T = \E X_0$.
+\end{thm}
+Note that we are now allowing arbitrary stopping times, so $T$ may be infinite with non-zero probability. Hence we define
+\[
+ X_T = \sum_{n = 0}^\infty X_n \mathbf{1}_{T = n} + X_\infty \mathbf{1}_{T = \infty}.
+\]
+
+\begin{proof}
+ By optional stopping, for every $n$, we know that
+ \[
+ \E (X_{T \wedge n} \mid \mathcal{F}_S) = X_{S \wedge T \wedge n}.
+ \]
+ We want to be able to take the limit as $n \to \infty$. To do so, we need to show that things are uniformly integrable. First, we apply optional stopping to write $X_{T \wedge n}$ as
+ \begin{align*}
+ X_{T\wedge n} &= \E(X_n \mid \mathcal{F}_{T \wedge n})\\
+ &= \E(\E(X_\infty \mid \mathcal{F}_n) \mid \mathcal{F}_{T \wedge n})\\
+ &= \E(X_\infty \mid \mathcal{F}_{T \wedge n}).
+ \end{align*}
+ So we know $(X_n^T)_{n \geq 0}$ is uniformly integrable, and hence $X_{n \wedge T} \to X_T$ almost surely and in $L^1$.
+
+ To understand $\E (X_{T \wedge n} \mid \mathcal{F}_S)$, we note that
+ \[
+ \|\E(X_{n \wedge T} - X_T \mid \mathcal{F}_S)\|_1 \leq \|X_{n \wedge T} - X_T \|_1 \to 0\text{ as }n \to \infty.
+ \]
+ So it follows that $\E(X_{n \wedge T} \mid \mathcal{F}_S) \to \E(X_T \mid \mathcal{F}_S)$ as $n \to \infty$.
+\end{proof}
+
+\subsection{Applications of martingales}
+Having developed the theory, let us move on to some applications. Before we do that, we need the notion of a \emph{backwards martingale}.
+\begin{defi}[Backwards filtration]\index{backwards filtration}\index{filtration!backwards}
 A \emph{backwards filtration} on a measurable space $(E, \mathcal{E})$ is a sequence of $\sigma$-algebras $\hat{\mathcal{F}}_n \subseteq \mathcal{E}$ such that $\hat{\mathcal{F}}_{n + 1} \subseteq \hat{\mathcal{F}}_n$. We define
+ \[
+ \hat{\mathcal{F}}_\infty = \bigcap_{n \geq 0} \hat{\mathcal{F}}_n.
+ \]
+\end{defi}
+
+\begin{thm}
+ Let $Y \in L^1$, and let $\hat{\mathcal{F}}_n$ be a backwards filtration. Then
+ \[
+ \E(Y \mid \hat{\mathcal{F}}_n) \to \E(Y \mid \hat{\mathcal{F}}_\infty)
+ \]
+ almost surely and in $L^1$.
+\end{thm}
+A process of this form is known as a \term{backwards martingale}.
+
+\begin{proof}
 We first show that $\E(Y \mid \hat{\mathcal{F}}_n)$ converges. We then show that what it converges to is indeed $\E(Y \mid \hat{\mathcal{F}}_\infty)$.
+
+ We write
+ \[
+ X_n = \E(Y \mid \hat{\mathcal{F}}_n).
+ \]
+ Observe that for all $n \geq 0$, the process $(X_{n - k})_{0 \leq k \leq n}$ is a martingale by the tower property, and so is $(-X_{n - k})_{0 \leq k \leq n}$. Now notice that for all $a < b$, the number of upcrossings of $[a, b]$ by $(X_k)_{0 \leq k \leq n}$ is equal to the number of upcrossings of $[-b, -a]$ by $(-X_{n - k})_{0 \leq k \leq n}$.
+
+ Using the same arguments as for martingales, we conclude that $X_n \to X_\infty$ almost surely and in $L^1$ for some $X_\infty$.
+
 To see that $X_\infty = \E(Y \mid \hat{\mathcal{F}}_\infty)$, we notice that $X_\infty$ is $\hat{\mathcal{F}}_\infty$-measurable. So it is enough to prove that
+ \[
+ \E X_\infty \mathbf{1}_A = \E(\E(Y \mid \hat{\mathcal{F}}_\infty) \mathbf{1}_A)
+ \]
+ for all $A \in \hat{\mathcal{F}}_\infty$. Indeed, we have
+ \begin{align*}
+ \E X_\infty \mathbf{1}_A &= \lim_{n \to \infty} \E X_n \mathbf{1}_A\\
+ &= \lim_{n\to \infty} \E(\E (Y \mid \hat{\mathcal{F}}_n) \mathbf{1}_A)\\
  &= \lim_{n \to \infty} \E(Y \mathbf{1}_A)\\
  &= \E(Y \mathbf{1}_A)\\
  &= \E(\E (Y \mid \hat{\mathcal{F}}_\infty) \mathbf{1}_A),\qedhere
+ \end{align*}
+\end{proof}
+
+
+\begin{thm}[Kolmogorov 0-1 law]\index{Kolmogorov 0-1 law}
 Let $(X_n)_{n \geq 1}$ be independent random variables, and let
+ \[
+ \hat{\mathcal{F}}_n = \sigma (X_{n + 1}, X_{n + 2}, \ldots).
+ \]
+ Then the \term{tail $\sigma$-algebra} $\hat{\mathcal{F}}_{\infty}$ is trivial\index{trivial $\sigma$-algebra}\index{$\sigma$-algebra!trivial}, i.e.\ $\P(A) \in \{0, 1\}$ for all $A \in \hat{\mathcal{F}}_\infty$.
+\end{thm}
+
+\begin{proof}
+ Let $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$. Then $\mathcal{F}_n$ and $\hat{\mathcal{F}}_n$ are independent. Then for all $A \in \hat{\mathcal{F}}_\infty$, we have
+ \[
+ \E(\mathbf{1}_A \mid \mathcal{F}_n) = \P(A).
+ \]
+ But the LHS is a martingale. So it converges almost surely and in $L^1$ to $\E(\mathbf{1}_A \mid \mathcal{F}_\infty)$. But $\mathbf{1}_A$ is $\mathcal{F}_\infty$-measurable, since $\hat{\mathcal{F}}_\infty \subseteq \mathcal{F}_\infty$. So this is just $\mathbf{1}_A$. So $\mathbf{1}_A = \P(A)$ almost surely, and we are done.
+\end{proof}
+
+\begin{thm}[Strong law of large numbers]
+ Let $(X_n)_{n \geq 1}$ be iid random variables in $L^1$, with $\E X_1 = \mu$. Define
+ \[
+ S_n = \sum_{i = 1}^n X_i.
+ \]
+ Then
+ \[
+ \frac{S_n}{n} \to \mu\text{ as }n \to \infty
+ \]
+ almost surely and in $L^1$.
+\end{thm}
+
+\begin{proof}
+ We have
+ \[
+ S_n = \E(S_n \mid S_n) = \sum_{i = 1}^n \E(X_i \mid S_n) = n \E(X_1 \mid S_n).
+ \]
+ So the problem is equivalent to showing that $\E(X_1 \mid S_n) \to \mu$ as $n \to \infty$. This seems like something we can tackle with our existing technology, except that the $S_n$ do not form a filtration.
+
+ Thus, define a backwards filtration
+ \[
+ \hat{\mathcal{F}}_n = \sigma(S_n, S_{n + 1}, S_{n + 2}, \ldots) = \sigma(S_n, X_{n + 1}, X_{n + 2}, \ldots) = \sigma(S_n, \tau_n),
+ \]
 where $\tau_n = \sigma(X_{n + 1}, X_{n + 2}, \ldots)$. We now use a property of conditional expectation that we have not used so far: adding independent information to a conditional expectation does not change the result. Since $\tau_n$ is independent of $\sigma(X_1, S_n)$, we know
+ \[
+ \frac{S_n}{n} = \E(X_1 \mid S_n) = \E(X_1 \mid \hat{\mathcal{F}}_n).
+ \]
+ Thus, by backwards martingale convergence, we know
+ \[
+ \frac{S_n}{n} \to \E(X_1 \mid \hat{\mathcal{F}}_{\infty}).
+ \]
 But by the Kolmogorov 0-1 law, we know $\hat{\mathcal{F}}_{\infty}$ is trivial. So we know that $\E(X_1 \mid \hat{\mathcal{F}}_{\infty})$ is almost surely constant, and the constant has to be $\E(\E(X_1 \mid \hat{\mathcal{F}}_{\infty})) = \E(X_1) = \mu$.
+\end{proof}
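A quick simulation makes the conclusion tangible. The following sketch (Python, not part of the notes; the distribution, seed and sample size are arbitrary illustrative choices) checks that the empirical mean of iid $\mathrm{Uniform}(0,1)$ samples is close to $\mu = \frac12$:

```python
import random

# Sketch: simulating the strong law of large numbers for iid
# Uniform(0, 1) random variables, for which mu = 1/2.
random.seed(0)
n = 100_000
s = sum(random.random() for _ in range(n))
print(abs(s / n - 0.5) < 0.01)  # True
```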
+
+Recall that if $(E, \mathcal{E}, \mu)$ is a measure space and $f \in m\mathcal{E}^+$, then
+\[
+ \nu(A) = \mu(f \mathbf{1}_A)
+\]
+is a measure on $\mathcal{E}$. We say $f$ is a density of $\nu$ with respect to $\mu$.
+
+We can ask an ``inverse'' question -- given two different measures on $\mathcal{E}$, when is it the case that one is given by a density with respect to the other?
+
A first observation is that if $\nu(A) = \mu(f \mathbf{1}_A)$, then whenever $\mu(A) = 0$, we must have $\nu(A) = 0$. However, this is not sufficient. For example, let $\mu$ be the counting measure on $\R$, and $\nu$ the Lebesgue measure. Then our condition is satisfied. However, if $\nu$ were given by a density $f$ with respect to $\mu$, we would have
+\[
+ 0 = \nu(\{x\}) = \mu(f \mathbf{1}_{\{x\}}) = f(x).
+\]
+So $f \equiv 0$, but taking $f \equiv 0$ clearly doesn't give the Lebesgue measure.
+
+The problem with this is that $\mu$ is not a $\sigma$-finite measure.
+
+\begin{thm}[Radon--Nikodym]\index{Radon--Nikodym theorem}
+ Let $(\Omega, \mathcal{F})$ be a measurable space, and $\Q$ and $\P$ be two probability measures on $(\Omega, \mathcal{F})$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\Q$ is absolutely continuous with respect to $\P$, i.e.\ for any $A \in \mathcal{F}$, if $\P(A) = 0$, then $\Q(A) = 0$.
+ \item For any $\varepsilon > 0$, there exists $\delta > 0$ such that for all $A \in \mathcal{F}$, if $\P(A) \leq \delta$, then $\Q(A) \leq \varepsilon$.
+ \item There exists a random variable $X \geq 0$ such that
+ \[
+ \Q(A) = \E_{\P}(X \mathbf{1}_A).
+ \]
+ In this case, $X$ is called the \term{Radon--Nikodym derivative} of $\Q$ with respect to $\P$, and we write $X = \frac{\d \Q}{\d \P}$.
+ \end{enumerate}
+\end{thm}
+Note that this theorem works for all finite measures by scaling, and thus for $\sigma$-finite measures by partitioning $\Omega$ into sets of finite measure.
+
+\begin{proof}
 We shall only treat the case where $\mathcal{F}$ is \emph{countably generated}\index{countably generated $\sigma$-algebra}\index{$\sigma$-algebra!countably generated}, i.e.\ $\mathcal{F} = \sigma(F_1, F_2, \ldots)$ for some sets $F_i$. For example, the Borel $\sigma$-algebra of any second-countable topological space is countably generated.
+ \begin{itemize}
+ \item (iii) $\Rightarrow$ (i): Clear.
+ \item (ii) $\Rightarrow$ (iii): Define the filtration
+ \[
+ \mathcal{F}_n = \sigma(F_1, F_2, \ldots, F_n).
+ \]
+ Since $\mathcal{F}_n$ is finite, we can write it as
+ \[
+ \mathcal{F}_n = \sigma(A_{n, 1}, \ldots, A_{n, m_n}),
+ \]
+ where each $A_{n, i}$ is an \term{atom}, i.e.\ if $B \subsetneq A_{n, i}$ and $B \in \mathcal{F}_n$, then $B = \emptyset$. We define
+ \[
  X_n = \sum_{i = 1}^{m_n} \frac{\Q(A_{n, i})}{\P(A_{n, i})} \mathbf{1}_{A_{n, i}},
+ \]
+ where we skip over the terms where $\P(A_{n, i}) = 0$. Note that this is exactly designed so that for any $A \in \mathcal{F}_n$, we have
+ \[
  \E_\P (X_n \mathbf{1}_A) = \E_\P \sum_{A_{n, i} \subseteq A} \frac{\Q(A_{n, i})}{\P(A_{n, i})} \mathbf{1}_{A_{n, i}} = \Q(A).
+ \]
+ Thus, if $A \in \mathcal{F}_n \subseteq \mathcal{F}_{n + 1}$, we have
+ \[
+ \E X_{n + 1} \mathbf{1}_A = \Q(A) = \E X_n \mathbf{1}_A.
+ \]
+ So we know that
+ \[
+ \E(X_{n + 1} \mid \mathcal{F}_n) = X_n.
+ \]
+ It is also immediate that $(X_n)_{n \geq 0}$ is adapted. So it is a martingale.
+
+ We next show that $(X_n)_{n \geq 0}$ is uniformly integrable. By Markov's inequality, we have
+ \[
+ \P(X_n \geq \lambda) \leq \frac{\E X_n}{\lambda} = \frac{1}{\lambda}\leq \delta
+ \]
+ for $\lambda$ large enough. Then
+ \[
+ \E(X_n \mathbf{1}_{X_n \geq \lambda}) = \Q(X_n \geq \lambda) \leq \varepsilon.
+ \]
+ So we have shown uniform integrability, and so we know $X_n \to X$ almost surely and in $L^1$ for some $X$. Then for all $A \in \bigcup_{n \geq 0} \mathcal{F}_n$, we have
+ \[
+ \Q(A) = \lim_{n \to \infty} \E X_n \mathbf{1}_A = \E X \mathbf{1}_A.
+ \]
+ So $\Q(-)$ and $\E X \mathbf{1}_{(-)}$ agree on $\bigcup_{n \geq 0} \mathcal{F}_n$, which is a generating $\pi$-system for $\mathcal{F}$, so they must be the same.
+ \item (i) $\Rightarrow$ (ii): Suppose not. Then there exists some $\varepsilon > 0$ and some $A_1, A_2, \ldots \in \mathcal{F}$ such that
+ \[
+ \Q(A_n) \geq \varepsilon,\quad \P(A_n) \leq \frac{1}{2^n}.
+ \]
+ Since $\sum_n \P(A_n)$ is finite, by Borel--Cantelli, we know
+ \[
  \P \left(\limsup A_n\right) = 0.
+ \]
+ On the other hand, by, say, dominated convergence, we have
+ \begin{align*}
  \Q\left(\limsup A_n\right) &= \Q \left(\bigcap_{n = 1}^\infty \bigcup_{m = n}^\infty A_m\right) \\
+ &= \lim_{k \to \infty} \Q\left(\bigcap_{n = 1}^k \bigcup_{m = n}^\infty A_m \right)\\
  &\geq \lim_{k \to \infty} \Q \left(\bigcup_{m = k}^\infty A_m\right)\\
+ &\geq \varepsilon.
+ \end{align*}
+ This is a contradiction.\qedhere
+ \end{itemize}
+\end{proof}
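On a finite space the whole construction collapses to ratios of point masses, which is worth seeing once explicitly. Here is a hedged Python sketch (not part of the notes; the two measures are arbitrary illustrations, with $\Q \ll \P$ since $\P$ charges every point):

```python
# Sketch: on a finite probability space the Radon-Nikodym derivative is
# just the ratio of point masses, exactly as in the atoms A_{n,i} of
# the proof.
P = {'a': 0.5, 'b': 0.25, 'c': 0.25}
Q = {'a': 0.25, 'b': 0.25, 'c': 0.5}

X = {w: Q[w] / P[w] for w in P}  # candidate dQ/dP

def Q_via_density(A):
    """E_P[X 1_A]; should agree with Q(A)."""
    return sum(X[w] * P[w] for w in A)

print(Q_via_density({'a', 'c'}), Q['a'] + Q['c'])  # 0.75 0.75
```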
+
+Finally, we end the part on discrete time processes by relating what we have done to Markov chains.
+
Let's first recall what Markov chains are. Let $E$ be a countable space, and $\mu$ a measure on $E$. We write $\mu_x = \mu(\{x\})$, and for $f: E \to \R$, we set $\mu(f) = \mu \cdot f = \sum_{x \in E} \mu_x f(x)$, thinking of $\mu$ as a row vector.
+
+\begin{defi}[Transition matrix]\index{transition matrix}
 A \emph{transition matrix} is a matrix $P = (p_{xy})_{x, y \in E}$ such that each row $p_x = (p_{xy})_{y \in E}$ is a probability measure on $E$.
+\end{defi}
+
+\begin{defi}[Markov chain]\index{Markov chain}
 An adapted process $(X_n)$ is called a \emph{Markov chain} if for any $n$ and any $A \in \mathcal{F}_n$ with $A \subseteq \{X_n = x\}$ and $\P(A) > 0$, we have
+ \[
+ \P(X_{n + 1} = y\mid A) = p_{xy}.
+ \]
+\end{defi}
+
+\begin{defi}[Harmonic function]\index{harmonic function}
+ A function $f: E \to \R$ is \emph{harmonic} if $Pf = f$. In other words, for any $x$, we have
+ \[
+ \sum_{y} p_{xy} f(y) = f(x).
+ \]
+\end{defi}
+We then observe that
+
+\begin{prop}
 If $f$ is harmonic and bounded, and $(X_n)_{n \geq 0}$ is Markov, then $(f(X_n))_{n \geq 0}$ is a martingale.
+\end{prop}
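The one-step calculation behind this proposition can be checked by hand on a small example. The Python sketch below (not from the notes) uses $f(x) = x$, which is harmonic for the simple symmetric random walk on $\Z$; it is unbounded, but each $X_n$ takes only finitely many values, so integrability is not an issue here:

```python
from itertools import product

# Sketch: f(x) = x is harmonic for simple symmetric random walk on Z,
# since (1/2) f(x - 1) + (1/2) f(x + 1) = f(x).  We verify the one-step
# martingale property E[f(X_{n+1}) | F_n] = f(X_n) exactly along every
# length-4 path, by averaging the two equally likely next positions.
def f(x):
    return x  # a harmonic function for the walk

ok = True
for steps in product([-1, 1], repeat=4):
    x = sum(steps)  # X_4 along this path
    avg_next = 0.5 * f(x - 1) + 0.5 * f(x + 1)
    ok = ok and avg_next == f(x)
print(ok)  # True
```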
+
+\begin{eg}
+ Let $(X_n)_{n \geq 0}$ be iid $\Z$-valued random variables in $L^1$, and $\E[X_i] = 0$. Then
+ \[
+ S_n = X_0 + \cdots + X_n
+ \]
+ is a martingale and a Markov chain.
+
 However, if $Z$ is a bounded $\Z$-valued random variable independent of the $(X_n)$, consider the process $(ZS_n)_{n \geq 0}$ with the enlarged filtration $\mathcal{G}_n = \sigma(\mathcal{F}_n, Z)$. Then this is a martingale but not a Markov chain.
+\end{eg}
+
+\section{Continuous time stochastic processes}
+In the remainder of the course, we shall study continuous time processes. When doing so, we have to be rather careful, since our processes are indexed by an \emph{uncountable} set, when measure theory tends to only like countable things. Ultimately, we would like to study Brownian motion, but we first develop some general theory of continuous time processes.
+
+\begin{defi}[Continuous time stochastic process]\index{continuous time stochastic process}\index{stochastic process}
+ A \emph{continuous time stochastic process} is a family of random variables $(X_t)_{t \geq 0}$ (or $(X_t)_{t \in [a, b]}$).
+\end{defi}
+
+In the discrete case, if $T$ is a random variable taking values in $\{0, 1, 2, \ldots\}$, then it makes sense to look at the new random variable $X_T$, since this is just
+\[
+ X_T = \sum_{n = 0}^\infty X_n \mathbf{1}_{T = n}.
+\]
+This is obviously measurable, since it is a limit of measurable functions.
+
+However, this is not necessarily the case if we have continuous time, unless we assume some regularity conditions on our process. In some sense, we want $X_t$ to depend ``continuously'' or at least ``measurably'' on $t$.
+
To make sense of $X_T$, it would be enough to require that the map
+\[
+ \varphi: (\omega, t) \mapsto X_t(\omega)
+\]
+is measurable when we put the product $\sigma$-algebra on the domain. In this case, $X_T(\omega) = \varphi(\omega, T(\omega))$ is measurable. In this formulation, we see why we didn't have this problem with discrete time --- the $\sigma$-algebra on $\N$ is just $\mathcal{P}(\N)$, and so all sets are measurable. This is not true for $\mathcal{B}([0, \infty))$.
+
+However, being able to talk about $X_T$ is not the only thing we want. Often, the following definitions are useful:
+
+\begin{defi}[Cadlag function]\index{cadlag function}
 We say a function $x: [0, \infty) \to \R$ is \emph{cadlag} if for all $t$,
+ \[
+ \lim_{s \to t^+} x_s = x_t,\quad \lim_{s \to t^-}x_s\text{ exists}.
+ \]
+\end{defi}
The name cadlag (or c\`adl\`ag) comes from the French term \emph{continue \`a droite, limite \`a gauche}, meaning ``right-continuous with left limits''.
+
+\begin{defi}[Continuous/Cadlag stochastic process]\index{continuous stochastic process}\index{stochastic process!continuous}\index{cadlag stochastic process}\index{stochastic process!cadlag}
+ We say a stochastic process is \emph{continuous} (resp.\ cadlag) if for any $\omega \in \Omega$, the map $t \mapsto X_t (\omega)$ is continuous (resp.\ cadlag).
+\end{defi}
+
+\begin{notation}
 We write \term{$C([0, \infty), \R)$} for the space of all continuous functions $[0, \infty) \to \R$, and $D([0, \infty), \R)$ for the space of all cadlag functions.\index{$C([0, \infty),\R)$}\index{$D([0, \infty), \R)$}
+
+ We endow these spaces with a $\sigma$-algebra generated by the coordinate functions
+ \[
+ (x_t)_{t \geq 0} \mapsto x_s.
+ \]
+\end{notation}
+
+Then a continuous (or cadlag) process is a random variable taking values in $C([0, \infty), \R)$ (or $D([0, \infty), \R)$).
+
+\begin{defi}[Finite-dimensional distribution]\index{finite-dimensional distribution}\index{distribution!finite-dimensional}
+ A \emph{finite dimensional distribution} of $(X_t)_{t \geq 0}$ is a measure on $\R^n$ of the form
+ \[
+ \mu_{t_1, \ldots, t_n}(A) = \P((X_{t_1}, \ldots, X_{t_n}) \in A)
+ \]
+ for all $A \in \mathcal{B}(\R^n)$, for some $0 \leq t_1 < t_2 < \ldots < t_n$.
+\end{defi}
+
+The important observation is that if we know all finite-dimensional distributions, then we know the law of $X$, since the cylinder sets form a $\pi$-system generating the $\sigma$-algebra.
+
+If we know, a priori, that $(X_t)_{t \geq 0}$ is a continuous process, then for any dense set $I \subseteq [0, \infty)$, knowing $(X_t)_{t \geq 0}$ is the same as knowing $(X_t)_{t \in I}$. Conversely, if we are given some random variables $(X_t)_{t \in I}$, can we extend this to a continuous process $(X_t)_{t \geq 0}$? The answer is, of course, ``not always'', but it turns out we can if we assume some H\"older conditions.
+
+\begin{thm}[Kolmogorov's criterion]\index{Kolmogorov's criterion}
+ Let $(\rho_t)_{t \in I}$ be random variables, where $I \subseteq [0, 1]$ is dense. Assume that for some $p > 1$ and $\beta > \frac{1}{p}$, we have
+ \[
+ \|\rho_t - \rho_s\|_p \leq C |t - s|^\beta\text{ for all }t, s \in I.\tag{$*$}
+ \]
+
+ Then there exists a continuous process $(X_t)_{t \in [0, 1]}$ such that for all $t \in I$,
+ \[
+ X_t = \rho_t \text{ almost surely},
+ \]
+ and moreover for any $\alpha \in [0, \beta - \frac{1}{p})$, there exists a random variable $K_\alpha \in L^p$ such that
+ \[
+ |X_s - X_t| \leq K_\alpha |s - t|^\alpha
+ \]
+ for all $s, t \in [0, 1]$.
+\end{thm}
+
+Before we begin, we make the following definition:
+\begin{defi}[Dyadic numbers]\index{dyadic numbers}
+ We define
+ \[
+ D_n = \left\{s \in [0, 1] : s = \frac{k}{2^n}\text{ for some }k \in \Z\right\},\quad D = \bigcup_{n \geq 0}D_n.
+ \]
+\end{defi}
+Observe that $D \subseteq [0, 1]$ is a dense subset. Topologically, this is just like any other dense subset. However, it is convenient to use $D$ instead of an arbitrary subset when writing down formulas.
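+To get a feel for the dyadics, here is a small illustrative sketch (Python, not part of the notes; the helper name is my own) checking that the levels are nested and that $D_n$ approximates any point of $[0, 1]$ to within half a grid spacing:

```python
from fractions import Fraction

def dyadic_level(n):
    """Return D_n, the dyadic rationals k / 2^n lying in [0, 1]."""
    return [Fraction(k, 2**n) for k in range(2**n + 1)]

# the levels are nested: D_2 is a subset of D_3
assert set(dyadic_level(2)) <= set(dyadic_level(3))

# any x in [0, 1] is within half a grid spacing of D_n, so D is dense
x = 0.3
best = min(abs(float(d) - x) for d in dyadic_level(10))
assert best <= 2**-11
```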
+
+\begin{proof}
+ First note that we may assume $D \subseteq I$. Indeed, for $t \in D \setminus I$, we can define $\rho_t$ by taking the limit of $\rho_s$ in $L^p$ along $s \in I$, since $L^p$ is complete. The inequality $(*)$ is preserved by such limits, so we may work with $I \cup D$ instead.
+
+ By assumption, $(\rho_t)_{t \in I}$ is H\"older in $L^p$. We claim that it is almost surely pointwise H\"older.
+ \begin{claim}
+ There exists a random variable $K_\alpha \in L^p$ such that
+ \[
+ |\rho_s - \rho_t| \leq K_\alpha |s - t|^\alpha\text{ for all }s, t \in D.
+ \]
+ Moreover, $K_\alpha$ is increasing in $\alpha$.
+ \end{claim}
+ Given the claim, we can simply set
+ \[
+ X_t(\omega) =
+ \begin{cases}
+ \lim_{q \to t, q \in D} \rho_q(\omega) & K_\alpha < \infty\text{ for all }\alpha \in [0, \beta - \frac{1}{p})\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ Then this is a continuous process, and satisfies the desired properties.
+
+ To construct such a $K_\alpha$, observe that given any $s, t \in D$, we can pick $m \geq 0$ such that
+ \[
+ 2^{-(m+1)} < t - s \leq 2^{-m}.
+ \]
+ Then we can pick $u = \frac{k}{2^{m + 1}}$ such that $s < u < t$. Thus, we have
+ \[
+ u - s < 2^{-m},\quad t - u < 2^{-m}.
+ \]
+ Therefore, by binary expansion, we can write
+ \begin{align*}
+ u - s = \sum_{i \geq m + 1} \frac{x_i}{2^i},\quad t - u = \sum_{i \geq m + 1} \frac{y_i}{2^i},
+ \end{align*}
+ for some $x_i, y_i \in \{0, 1\}$. Thus, writing
+ \[
+ K_n = \sup_{t \in D_n \cap [0, 1)} |\rho_{t + 2^{-n}} - \rho_t|,
+ \]
+ we can bound
+ \[
+ |\rho_s - \rho_t| \leq 2 \sum_{n = m + 1}^\infty K_n,
+ \]
+ and thus
+ \[
+ \frac{|\rho_s - \rho_t|}{|s - t|^\alpha} \leq 2 \sum_{n = m + 1}^\infty 2^{(m + 1) \alpha} K_n \leq 2 \sum_{n = m + 1}^\infty 2^{n \alpha} K_n.
+ \]
+ Thus, we can define
+ \[
+ K_\alpha = 2 \sum_{n \geq 0} 2^{n\alpha} K_n.
+ \]
+ We only have to check that this is in $L^p$, and this is not hard. We first get
+ \[
+ \E K_n^p \leq \sum_{t \in D_n} \E |\rho_{t + 2^{-n}} - \rho_t|^p \leq C^p 2^n \cdot 2^{-n \beta p} = C^p 2^{n(1 - p\beta)}.
+ \]
+ Then we have
+ \[
+ \|K_\alpha\|_p \leq 2 \sum_{n \geq 0}2^{n\alpha} \|K_n\|_p \leq 2C \sum_{n \geq 0} 2^{n(\alpha + \frac{1}{p} - \beta)} < \infty.\qedhere
+ \]
+\end{proof}
+
+We will later use this to construct Brownian motion. For now, we shall develop what we know about discrete time processes for continuous time ones. Fortunately, a lot of the proofs are either the same as the discrete time ones, or can be reduced to the discrete time version. So not much work has to be done!
+
+\begin{defi}[Continuous time filtration]\index{continuous time filtration}\index{filtration!continuous time}
+ A \emph{continuous-time filtration} is a family of $\sigma$-algebras $(\mathcal{F}_t)_{t \geq 0}$ such that $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ if $s \leq t$. Define $\mathcal{F}_\infty = \sigma(\mathcal{F}_t: t \geq 0)$.\index{$\mathcal{F}_\infty$}
+\end{defi}
+
+\begin{defi}[Stopping time]\index{stopping time}\index{stopping time!continuous time}
+ A random variable $T: \Omega \to [0, \infty]$ is a \emph{stopping time} if $\{T \leq t\} \in \mathcal{F}_t$ for all $t \geq 0$.
+\end{defi}
+
+\begin{prop}
+ Let $(X_t)_{t \geq 0}$ be a cadlag adapted process and $S, T$ stopping times. Then
+ \begin{enumerate}
+ \item $S \wedge T$ is a stopping time.
+ \item If $S \leq T$, then $\mathcal{F}_S \subseteq \mathcal{F}_T$.
+ \item $X_T \mathbf{1}_{T < \infty}$ is $\mathcal{F}_T$-measurable.
+ \item $(X_t^T)_{t \geq 0} = (X_{T \wedge t})_{t \geq 0}$ is adapted.
+ \end{enumerate}
+\end{prop}
+
+We only prove (iii). The first two are the same as the discrete case, and the proof of (iv) is similar to that of (iii).
+
+To prove this, we need a quick lemma, whose proof is a simple exercise.
+\begin{lemma}
+ A random variable $Z$ is $\mathcal{F}_T$-measurable iff $Z \mathbf{1}_{\{T \leq t\}}$ is $\mathcal{F}_t$-measurable for all $t \geq 0$.
+\end{lemma}
+
+\begin{proof}[Proof of (iii) of proposition]
+ We need to prove that $X_T \mathbf{1}_{\{T \leq t\}}$ is $\mathcal{F}_t$-measurable for all $t \geq 0$.
+
+ We write
+ \[
+ X_T\mathbf{1}_{T \leq t} = X_T \mathbf{1}_{T < t} + X_t \mathbf{1}_{T = t}.
+ \]
+ We know the second term is measurable. So it suffices to show that $X_T \mathbf{1}_{T < t}$ is $\mathcal{F}_t$-measurable.
+
+ Define $T_n = 2^{-n} \lceil 2^n T\rceil$. This is a stopping time, since $\{T_n \leq t\} = \{T \leq 2^{-n}\lfloor 2^n t \rfloor\} \in \mathcal{F}_t$, and $T_n \searrow T$ as $n \to \infty$. Since $(X_t)_{t \geq 0}$ is cadlag, we know
+ \[
+ X_T \mathbf{1}_{T < t} = \lim_{n \to \infty} X_{T_n \wedge t} \mathbf{1}_{T < t}.
+ \]
+ Now $T_n \wedge t$ can take only countably (and in fact only finitely) many values, so we can write
+ \[
+ X_{T_n \wedge t} \mathbf{1}_{T < t} = \sum_{q \in D_n, q < t} X_{q} \mathbf{1}_{T_n = q} \mathbf{1}_{T < t} + X_t \mathbf{1}_{T < t \leq T_n},
+ \]
+ and this is $\mathcal{F}_t$-measurable. So we are done.
+\end{proof}
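+The approximation $T_n = 2^{-n} \lceil 2^n T\rceil$ used in the proof simply rounds $T$ up to the dyadic grid of level $n$. A quick sketch (illustrative Python, with an arbitrary sample value of $T$):

```python
import math

def dyadic_approx(T, n):
    """T_n = 2^{-n} * ceil(2^n T): round T up to the dyadic grid of level n."""
    return math.ceil(2**n * T) / 2**n

T = 0.3
approxs = [dyadic_approx(T, n) for n in range(1, 12)]
assert all(a >= T for a in approxs)                       # T_n >= T always
assert all(a >= b for a, b in zip(approxs, approxs[1:]))  # T_n is decreasing in n
assert approxs[-1] - T < 2**-11                           # T_n converges to T
```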
+
+In the continuous case, stopping times are a bit more subtle. A natural source of stopping times is given by hitting times.
+
+\begin{defi}[Hitting time]\index{hitting time}
+ Let $A \in \mathcal{B}(\R)$. Then the \emph{hitting time} of $A$ is
+ \[
+ T_A = \inf\,\{t \geq 0 : X_t \in A\}.
+ \]
+\end{defi}
+
+This is not always a stopping time. For example, consider the process $X_t$ such that with probability $\frac{1}{2}$, it is given by $X_t = t$, and with probability $\frac{1}{2}$, it is given by
+\[
+ X_t =
+ \begin{cases}
+ t & t \leq 1\\
+ 2 - t & t > 1
+ \end{cases}.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2.5, 0);
+ \draw (0, -1) -- (0, 2.5);
+ \draw (0, 0) -- (1, 1);
+ \draw [dashed] (1, 1) -- (2.5, 2.5);
+ \draw [dashed] (1, 1) -- (2.5, -1);
+
+ \draw (1, -0.04) node [below] {$1$} -- (1, 0.04);
+ \draw (-0.04, 1) node [left] {$1$} -- (0.04, 1);
+
+ \node [circ] at (1, 1){};
+ \end{tikzpicture}
+\end{center}
+Take $A = (1, \infty)$. Then $T_A = 1$ in the first case, and $T_A = \infty$ in the second case. But $\{T_A \leq 1\} \not \in \mathcal{F}_1$, as at time $1$, we don't know if we are going up or down.
+
+The problem is that $A$ is not closed.
+
+\begin{prop}
+ Let $A \subseteq \R$ be a closed set and $(X_t)_{t \geq 0}$ be continuous. Then $T_A$ is a stopping time.
+\end{prop}
+
+\begin{proof}
+ Observe that $d(X_q, A)$ is a continuous function in $q$. So we have
+ \[
+ \{T_A \leq t\} = \left\{\inf_{q \in \Q, q < t} d(X_q, A) = 0\right\}.\qedhere
+ \]
+\end{proof}
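+The characterization in the proof also suggests a numerical scheme: approximate $T_A$ by the first time on a fine grid at which the path lies in $A$. A toy sketch (illustrative Python; the path and set are my own choices), with $X_t = t(2 - t)$ and $A = [3/4, \infty)$, where the true hitting time is $t = 1/2$:

```python
def hitting_time(path, in_A, grid):
    """First grid time q with path(q) in A; None if the path never enters A on the grid."""
    for q in grid:
        if in_A(path(q)):
            return q
    return None

X = lambda t: t * (2 - t)                   # a continuous path
in_A = lambda x: x >= 0.75                  # A = [3/4, oo) is closed
grid = [k / 10000 for k in range(20001)]    # fine grid on [0, 2]
T = hitting_time(X, in_A, grid)
assert abs(T - 0.5) < 1e-3                  # t(2 - t) = 3/4 first at t = 1/2
```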
+
+Motivated by our previous non-example of a hitting time, we define
+\begin{defi}[Right-continuous filtration]\index{filtration!right continuous}\index{right continuous filtration}
+ Given a continuous-time filtration $(\mathcal{F}_t)_{t \geq 0}$, we define
+ \[
+ \mathcal{F}_t^+ = \bigcap_{s > t} \mathcal{F}_s \supseteq \mathcal{F}_t.
+ \]
+ We say $(\mathcal{F}_t)_{t \geq 0}$ is \emph{right continuous} if $\mathcal{F}_t = \mathcal{F}_t^+$.
+\end{defi}
+
+Often, we want to modify our events by things of measure zero. While this doesn't really affect anything, it could potentially get us out of $\mathcal{F}_t$. It does no harm to enlarge all $\mathcal{F}_t$ to include events of measure zero.
+
+\begin{defi}[Usual conditions]\index{usual conditions}
+ Let $\mathcal{N} = \{A \in \mathcal{F}_\infty : \P(A) \in \{0, 1\}\}$. We say that $(\mathcal{F}_t)_{t \geq 0}$ satisfies the \emph{usual conditions} if it is right continuous and $\mathcal{N} \subseteq \mathcal{F}_0$.
+\end{defi}
+
+\begin{prop}
+ Let $(X_t)_{t \geq 0}$ be an adapted process (to $(\mathcal{F}_{t})_{t \geq 0}$) that is cadlag, and let $A$ be an open set. Then $T_A$ is a stopping time with respect to $\mathcal{F}_t^+$.
+\end{prop}
+
+\begin{proof}
+ Since $(X_t)_{t \geq 0}$ is cadlag and $A$ is open, we have
+ \[
+ \{T_A < t\} = \bigcup_{q < t, q \in \Q} \{X_q \in A\} \in \mathcal{F}_t.
+ \]
+ Then
+ \[
+ \{T_A \leq t\} = \bigcap_{n \geq 1} \left\{T_A < t + \frac{1}{n}\right\} \in \mathcal{F}_t^+.\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Continuous time martingale]\index{martingale!continuous time}\index{continuous time martingale}\index{sub-martingale!continuous time}\index{continuous time sub-martingale}\index{super-martingale!continuous time}\index{continuous time super-martingale}
+ An adapted process $(X_t)_{t \geq 0}$ with $X_t \in L^1$ for all $t$ is called a \emph{martingale} if
+ \[
+ \E (X_t \mid \mathcal{F}_s) = X_s
+ \]
+ for all $t \geq s$, and similarly for super-martingales and sub-martingales.
+\end{defi}
+
+Note that if $t_1 \leq t_2 \leq \cdots$, then
+\[
+ \tilde{X}_n = X_{t_n}
+\]
+is a discrete time martingale. Similarly, if $t_1 \geq t_2 \geq \cdots$, then
+\[
+ \hat{X}_n = X_{t_n}
+\]
+defines a discrete time backwards martingale. Using this observation, we can now prove what we already know in the discrete case.
+
+\begin{thm}[Optional stopping theorem]\index{optional stopping theorem}
+ Let $(X_t)_{t \geq 0}$ be an adapted cadlag process in $L^1$. Then the following are equivalent:
+ \begin{enumerate}
+ \item For any bounded stopping time $T$ and any stopping time $S$, we have $X_T \in L^1$ and
+ \[
+ \E(X_T \mid \mathcal{F}_S) = X_{T \wedge S}.
+ \]
+ \item For any stopping time $T$, $(X_t^T)_{t \geq 0} = (X_{T \wedge t})_{t \geq 0}$ is a martingale.
+ \item For any bounded stopping time $T$, $X_T \in L^1$ and $\E X_T = \E X_0$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ We show that (ii) $\Rightarrow$ (i), and the rest follows similarly from the discrete case.
+
+ Since $T$ is bounded, assume $T \leq t$, and we may wlog assume $t \in \N$. Let
+ \[
+ T_n = 2^{-n} \lceil 2^n T\rceil,\quad S_n = 2^{-n} \lceil 2^n S\rceil.
+ \]
+ We have $T_n \searrow T$ as $n \to \infty$, and so $X_{T_n} \to X_T$ as $n \to \infty$ by right-continuity.
+
+ Since $T_n \leq t + 1$, by restricting our sequence to $D_n$, discrete time optional stopping implies
+ \[
+ \E (X_{t + 1} \mid \mathcal{F}_{T_n}) = X_{T_n}.
+ \]
+ In particular, $X_{T_n}$ is uniformly integrable. So it converges in $L^1$. This implies $X_T \in L^1$.
+
+ To show that $\E(X_T \mid \mathcal{F}_S) = X_{T \wedge S}$, we need to show that for any $A \in \mathcal{F}_S$, we have
+ \[
+ \E X_T \mathbf{1}_A = \E X_{S \wedge T} \mathbf{1}_A.
+ \]
+ Since $A \in \mathcal{F}_S \subseteq \mathcal{F}_{S_n}$, discrete time optional stopping gives
+ \[
+ \E X_{T_n} \mathbf{1}_A = \E X_{S_n \wedge T_n} \mathbf{1}_A,
+ \]
+ since $\E (X_{T_n} \mid \mathcal{F}_{S_n}) = X_{T_n \wedge S_n}$. So taking the limit $n \to \infty$, using uniform integrability for $L^1$ convergence, gives the desired result.
+\end{proof}
+
+\begin{thm}
+ Let $(X_t)_{t \geq 0}$ be a cadlag super-martingale bounded in $L^1$. Then it converges almost surely as $t \to \infty$ to a random variable $X_\infty \in L^1$.
+\end{thm}
+
+\begin{proof}
+
+ Define $U_s[a, b, (x_t)_{t \geq 0}]$ to be the number of upcrossings of $[a, b]$ by $(x_t)_{t \geq 0}$ up to time $s$, and
+ \[
+ U_\infty[a, b, (x_t)_{t \geq 0}] = \lim_{s \to \infty} U_s [a, b, (x_t)_{t \geq 0}].
+ \]
+ Then for any cadlag $(x_t)_{t \geq 0}$ and all $s \geq 0$, we have
+ \[
+ U_s[a, b, (x_t)_{t \geq 0}] = \lim_{n \to \infty} U_s[a, b, (x_t)_{t \in D_n}].
+ \]
+ By monotone convergence and Doob's upcrossing lemma, we have
+ \[
+ \E U_s[a, b, (X_t)_{t \geq 0}] = \lim_{n \to \infty} \E U_s[a, b, (X_t)_{t \in D_n}] \leq \frac{\E(X_s - a)^-}{b - a} \leq \frac{\E |X_s| + |a|}{b - a}.
+ \]
+ Taking the supremum over $s$, we see that $U_\infty[a, b, (X_t)_{t \geq 0}]$ is almost surely finite for all rationals $a < b$, and we finish the argument as in the discrete case.
+
+ This shows we have pointwise convergence in $\R \cup \{\pm \infty\}$, and by Fatou's lemma, we know that
+ \[
+ \E |X_\infty| = \E \liminf_{t_n \to \infty} |X_{t_n}| \leq \liminf_{t_n \to \infty} \E |X_{t_n}| < \infty.
+ \]
+ So $X_\infty$ is finite almost surely.
+\end{proof}
+
+We shall now state without proof some results we already know for the discrete case. The proofs are straightforward generalizations of the discrete version.
+
+\begin{lemma}[Maximal inequality]\index{maximal inequality}
+ Let $(X_t)_{t \geq 0}$ be a cadlag martingale or a non-negative sub-martingale, and write $X_t^* = \sup_{s \leq t} |X_s|$. Then for all $t \geq 0$, $\lambda \geq 0$, we have
+ \[
+ \lambda \P (X^*_t \geq \lambda)\leq \E |X_t|.
+ \]
+\end{lemma}
+
+\begin{lemma}[Doob's $L^p$ inequality]\index{Doob's $L^p$ inequality}\index{$L^p$ inequality}
+ Let $(X_t)_{t \geq 0}$ be as above. Then
+ \[
+ \|X_t^*\|_p \leq \frac{p}{p - 1} \|X_t\|_p.
+ \]
+\end{lemma}
+
+\begin{defi}[Version]\index{version}
+ We say a process $(Y_t)_{t \geq 0}$ is a \emph{version} of $(X_t)_{t \geq 0}$ if for all $t$, $\P(Y_t = X_t) = 1$.
+\end{defi}
+
+Note that this is not the same as saying $\P(Y_t = X_t \text{ for all } t) = 1$.
+
+\begin{eg}
+ Take $X_t \equiv 0$ for all $t$, and let $U$ be a uniform random variable on $[0, 1]$. Define
+ \[
+ Y_t =
+ \begin{cases}
+ 1 & t = U\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ Then for all $t$, we have $X_t = Y_t$ almost surely. So $(Y_t)$ is a version of $(X_t)$. However, $X_t$ is continuous but $Y_t$ is not.
+\end{eg}
+\begin{thm}[Regularization of martingales]\index{regularization}
+ Let $(X_t)_{t \geq 0}$ be a martingale with respect to $(\mathcal{F}_t)$, and suppose $\mathcal{F}_t$ satisfies the usual conditions. Then there exists a version $(\tilde{X}_t)$ of $(X_t)$ which is cadlag.
+\end{thm}
+
+\begin{proof}
+ For all $M > 0$, define
+ \[
+ \Omega_0^M = \left\{\sup_{q \in D \cap [0, M]} |X_q| < \infty\right\} \cap \bigcap_{a < b \in \Q} \left\{U_M[a, b, (X_t)_{t \in D \cap [0, M]}] < \infty\right\}.
+ \]
+ Then we see that $\P(\Omega_0^M) = 1$ by Doob's upcrossing lemma. Now define
+ \[
+ \tilde{X}_t = \lim_{s \searrow t,\, s \in D} X_s \mathbf{1}_{\Omega^t_0}.
+ \]
+ Then this is $\mathcal{F}_t$-measurable because $\mathcal{F}_t$ satisfies the usual conditions.
+
+ Take a sequence $t_n \searrow t$. Then $(X_{t_n})$ is a backwards martingale. So it converges almost surely and in $L^1$ to $\tilde{X}_t$. But we can write
+ \[
+ X_t = \E (X_{t_n} \mid \mathcal{F}_t).
+ \]
+ Since $X_{t_n} \to \tilde{X}_t$ in $L^1$, and $\tilde{X}_t$ is $\mathcal{F}_t$-measurable, we know $X_t = \tilde{X}_t$ almost surely.
+
+ The fact that it is cadlag is an exercise.
+\end{proof}
+
+\begin{thm}[$L^p$ convergence of martingales]\index{martingale convergence theorem!$L^p$}
+ Let $(X_t)_{t \geq 0}$ be a cadlag martingale. Then the following are equivalent:
+ \begin{enumerate}
+ \item $(X_t)_{t \geq 0}$ is bounded in $L^p$.
+ \item $(X_t)_{t \geq 0}$ converges almost surely and in $L^p$.
+ \item There exists $Z \in L^p$ such that $X_t = \E (Z \mid \mathcal{F}_t)$ almost surely.
+ \end{enumerate}
+\end{thm}
+
+\begin{thm}[$L^1$ convergence of martingales]\index{martingale convergence theorem!$L^1$}
+ Let $(X_t)_{t \geq 0}$ be a cadlag martingale. Then the following are equivalent:
+ \begin{enumerate}
+ \item $(X_t)_{t \geq 0}$ is uniformly integrable.
+ \item $(X_t)_{t \geq 0}$ converges almost surely and in $L^1$ to $X_\infty$.
+ \item There exists $Z \in L^1$ such that $\E(Z \mid \mathcal{F}_t) = X_t$ almost surely.
+ \end{enumerate}
+\end{thm}
+
+\begin{thm}[Optional stopping theorem]
+ Let $(X_t)_{t \geq 0}$ be a uniformly integrable martingale, and let $S, T$ be any stopping times. Then
+ \[
+ \E (X_T \mid \mathcal{F}_S) = X_{S \wedge T}.
+ \]
+\end{thm}
+
+\section{Weak convergence of measures}
+Often, we may want to consider random variables defined on different spaces. Since we cannot directly compare them, a sensible approach would be to use them to push our measure forward to $\R$, and compare them on $\R$.
+
+\begin{defi}[Law]\index{law}
+ Let $X$ be a random variable on $(\Omega, \mathcal{F}, \P)$. The \emph{law} of $X$ is the probability measure $\mu$ on $(\R, \mathcal{B}(\R))$ defined by
+ \[
+ \mu(A) = \P(X^{-1}(A)).
+ \]
+\end{defi}
+
+\begin{eg}
+ For $x \in \R$, we have the \term{Dirac $\delta$ measure}
+ \[
+ \delta_x(A) = \mathbf{1}_{\{x \in A\}}.
+ \]
+ This is the law of a random variable that constantly takes the value $x$.
+\end{eg}
+Now if we have a sequence $x_n \to x$, then we would like to say $\delta_{x_n} \to \delta_x$. In what sense is this true? Suppose $f$ is continuous. Then
+\[
+ \int f \d \delta_{x_n} = f(x_n) \to f(x) = \int f \d \delta_x.
+\]
+So we do have some sort of convergence if we pair it with a continuous function.
+
+\begin{defi}[Weak convergence]\index{weak convergence}
+ Let $(\mu_n)_{n \geq 0}$, $\mu$ be probability measures on a metric space $(M, d)$ with the Borel $\sigma$-algebra. We say that $\mu_n \Rightarrow \mu$, or $\mu_n$ \emph{converges weakly} to $\mu$, if
+ \[
+ \mu_n(f) \to \mu(f)
+ \]
+ for all $f$ bounded and continuous.
+
+ If $(X_n)_{n \geq 0}$, $X$ are random variables, then we say $(X_n)$ converges \emph{in distribution}\index{convergence in distribution} to $X$ if $\mu_{X_n} \Rightarrow \mu_X$.
+\end{defi}
+
+Note that in general, weak convergence does not say anything about how measures of subsets behave.
+\begin{eg}
+ If $x_n \to x$, then $\delta_{x_n} \to \delta_x$ weakly. However, if $x_n \not= x$ for all $n$, then $\delta_{x_n} (\{x\}) = 0$ but $\delta_x(\{x\}) = 1$. So
+ \[
+ \delta_{x_n}(\{x\}) \not\to \delta_x(\{x\}).
+ \]
+\end{eg}
+
+\begin{eg}
+ Take $M = [0, 1]$. Let $\mu_n = \frac{1}{n} \sum_{k = 1}^n \delta_{\frac{k}{n}}$. Then
+ \[
+ \mu_n(f) = \frac{1}{n} \sum_{k = 1}^n f\left(\frac{k}{n}\right).
+ \]
+ For continuous $f$, this is a Riemann sum converging to $\int_0^1 f(x)\;\d x$. So $\mu_n$ converges weakly to the Lebesgue measure on $[0, 1]$.
+\end{eg}
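+The last example is just the convergence of Riemann sums, which we can check numerically (an illustration in Python; the test function and tolerances are my own choices):

```python
import math

def mu_n(f, n):
    """Integrate f against mu_n = (1/n) * sum of Dirac masses at k/n."""
    return sum(f(k / n) for k in range(1, n + 1)) / n

f = math.sin
exact = 1 - math.cos(1)        # integral of sin over [0, 1]
errors = [abs(mu_n(f, n) - exact) for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]    # the Riemann sums converge
assert errors[-1] < 1e-3
```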
+
+\begin{prop}
+ Let $(\mu_n)_{n \geq 0}$ be as above. Then, the following are equivalent:
+ \begin{enumerate}
+ \item $(\mu_n)_{n \geq 0}$ converges weakly to $\mu$.
+ \item For all open $G$, we have
+ \[
+ \liminf_{n \to \infty} \mu_n(G) \geq \mu(G).
+ \]
+ \item For all closed $A$, we have
+ \[
+ \limsup_{n \to \infty} \mu_n(A) \leq \mu(A).
+ \]
+ \item For all $A$ such that $\mu(\partial A) = 0$, we have
+ \[
+ \lim_{n \to \infty}\mu_n(A) = \mu(A)
+ \]
+ \item (when $M = \R$) $F_{\mu_n}(x) \to F_\mu(x)$ for all $x$ at which $F_\mu$ is continuous, where $F_\mu$ is the \term{distribution function} of $\mu$, defined by $F_\mu(x) = \mu((-\infty, x])$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): The idea is to approximate the open set by continuous functions. Let $A$ be open. Then $A^c$ is closed, so we can define
+ \[
+ f_N(x) = 1 \wedge (N \cdot \mathrm{dist}(x, A^c)).
+ \]
+ This has the property that for all $N > 0$, we have
+ \[
+ f_N \leq \mathbf{1}_A,
+ \]
+ and moreover $f_N \nearrow \mathbf{1}_A$ as $N \to \infty$. Now by definition of weak convergence,
+ \[
+ \liminf_{n \to \infty} \mu_n(A)\geq \liminf_{n \to \infty} \mu_n(f_N) = \mu(f_N) \to \mu(A)\text{ as }N \to \infty.
+ \]
+ \item (ii) $\Leftrightarrow$ (iii): Take complements.
+ \item (iii) and (ii) $\Rightarrow$ (iv): Take $A$ such that $\mu(\partial A) = 0$. Then
+ \[
+ \mu(A) = \mu(\mathring{A}) = \mu(\bar{A}).
+ \]
+ So we know that
+ \[
+ \liminf_{n \to \infty} \mu_n(A) \geq \liminf_{n \to \infty} \mu_n(\mathring{A}) \geq \mu(\mathring{A}) = \mu(A).
+ \]
+ Similarly, we find that
+ \[
+ \mu(A) \geq \limsup_{n \to \infty} \mu_n(A).
+ \]
+ So we are done.
+ \item (iv) $\Rightarrow$ (i): We may wlog assume $f \geq 0$, by adding a constant. We have
+ \begin{align*}
+ \mu(f) &= \int_M f(x) \;\d \mu(x)\\
+ &= \int_M \int_0^\infty \mathbf{1}_{f(x) \geq t}\;\d t \;\d \mu(x)\\
+ &= \int_0^\infty \mu(\{f \geq t\})\;\d t.
+ \end{align*}
+ Since $f$ is continuous, $\partial \{f \geq t\} \subseteq \{f = t\}$. Now there can be only countably many $t$'s such that $\mu(\{f = t\}) > 0$. So by (iv), $\mu_n(\{f \geq t\}) \to \mu(\{f \geq t\})$ for all but countably many $t$, which doesn't affect the integral. So we conclude using the bounded convergence theorem.
+
+ \item (iv) $\Rightarrow$ (v): Assume $t$ is a continuity point of $F_\mu$. Then we have
+ \[
+ \mu(\partial(-\infty, t]) = \mu(\{t\}) = F_\mu(t) - F_\mu(t^-) = 0.
+ \]
+ So $\mu_n((-\infty, t]) \to \mu((-\infty, t])$, i.e.\ $F_{\mu_n}(t) \to F_\mu(t)$, and we are done.
+ \item (v) $\Rightarrow$ (ii): If $A = (a, b)$, then
+ \[
+ \mu_n(A) \geq F_{\mu_n} (b') - F_{\mu_n}(a')
+ \]
+ for any $a \leq a' \leq b' < b$ with $a', b'$ continuity points of $F_\mu$. So we know that
+ \[
+ \liminf_{n \to \infty} \mu_n(A) \geq F_\mu(b') - F_\mu(a') = \mu((a', b']).
+ \]
+ By taking the supremum over all such $a', b'$, we find that
+ \[
+ \liminf_{n \to \infty} \mu_n(A) \geq \mu(A)
+ \]
+ for intervals. A general open $A$ is a countable disjoint union of open intervals, and the claim follows by summing over the intervals and applying Fatou's lemma.\qedhere
+ \end{itemize}
+\end{proof}
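+Criterion (v) can be seen concretely in the Dirac example: $\delta_{1/n} \Rightarrow \delta_0$, and the distribution functions converge at every point except the discontinuity $x = 0$ of $F_{\delta_0}$. A minimal sketch (Python, my own illustration, with a large fixed $n$ standing in for the limit):

```python
def F_dirac(a, x):
    """Distribution function of the Dirac measure at a: F(x) = 1_{a <= x}."""
    return 1.0 if a <= x else 0.0

n = 10**6
# F_{delta_{1/n}}(x) agrees with F_{delta_0}(x) at continuity points x != 0 ...
for x in (-0.5, 0.3, 1.0):
    assert F_dirac(1 / n, x) == F_dirac(0.0, x)
# ... but not at the discontinuity x = 0
assert F_dirac(1 / n, 0.0) == 0.0 and F_dirac(0.0, 0.0) == 1.0
```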
+
+\begin{defi}[Tight probability measures]\index{tight probability measures}
+ A sequence of probability measures $(\mu_n)_{n \geq 0}$ on a metric space $(M, d)$ is \emph{tight} if for all $\varepsilon > 0$, there exists a compact $K \subseteq M$ such that
+ \[
+ \sup_n \mu_n (M \setminus K) \leq \varepsilon.
+ \]
+\end{defi}
+Note that this is always satisfied for compact metric spaces.
+
+\begin{thm}[Prokhorov's theorem]\index{Prokhorov's theorem}
+ If $(\mu_n)_{n \geq 0}$ is a sequence of tight probability measures, then there is a subsequence $(\mu_{n_k})_{k \geq 0}$ and a measure $\mu$ such that $\mu_{n_k} \Rightarrow \mu$.
+\end{thm}
+
+To see how this can fail without the tightness assumption, suppose we define measures $\mu_n$ on $\R$ by
+\[
+ \mu_n(A) = \tilde{\mu}(A \cap [n, n + 1]),
+\]
+where $\tilde{\mu}$ is the Lebesgue measure. Then for any bounded set $S$, we have $\lim_{n \to \infty} \mu_n(S) = 0$. Thus, if the weak limit existed, it must be everywhere zero, but this does not give a probability measure.
+
+We shall prove this only in the case $M = \R$. It is not difficult to construct a candidate of what the weak limit should be. Simply use Bolzano--Weierstrass to pick a subsequence of the measures such that the distribution functions converge on the rationals. Then the limit would essentially be what we want. We then apply tightness to show that this is a genuine distribution.
+
+\begin{proof}
+ Take $\Q\subseteq \R$, which is dense and countable. Let $x_1, x_2, \ldots$ be an enumeration of $\Q$. Define $F_n = F_{\mu_n}$. By Bolzano--Weierstrass, and some fiddling around with sequences, we can find some $F_{n_k}$ such that
+ \[
+ F_{n_k}(x_i) \to y_i \equiv F(x_i)
+ \]
+ as $k \to \infty$, for each fixed $x_i$.
+
+ Since $F$ is non-decreasing on $\Q$, it has left and right limits everywhere. We extend $F$ to $\R$ by taking right limits. This implies $F$ is cadlag.
+
+ Take $x$ a continuity point of $F$. Then for each $\varepsilon > 0$, there exists $s < x < t$ rational such that
+ \[
+ |F(s) - F(t)| < \frac{\varepsilon}{2}.
+ \]
+ Take $n$ large enough such that $|F_n(s) - F(s)| < \frac{\varepsilon}{4}$, and same for $t$. Then by monotonicity of $F$ and $F_n$, we have
+ \[
+ |F_n(x) - F(x)| \leq |F(s) - F(t)| + |F_n(s) - F(s) | + |F_n(t) - F(t)| \leq \varepsilon.
+ \]
+ It remains to show that $F(x) \to 1$ as $x \to \infty$ and $F(x) \to 0$ as $x \to -\infty$. By tightness, for all $\varepsilon > 0$, there exists $N > 0$ such that
+ \[
+ \mu_n((-\infty, -N]) \leq \varepsilon,\quad \mu_n((N, \infty)) \leq \varepsilon\text{ for all }n.
+ \]
+ This then implies what we want.
+\end{proof}
+
+We shall end the chapter with an alternative characterization of weak convergence, using characteristic functions.
+\begin{defi}[Characteristic function]\index{characteristic function}
+ Let $X$ be a random variable taking values in $\R^d$. The \emph{characteristic function} of $X$ is the function $\R^d \to \C$ defined by
+ \[
+ \varphi_X(t) = \E e^{i \bra t, X\ket} = \int_{\R^d} e^{i\bra t, x\ket}\;\d \mu_X(x).
+ \]
+\end{defi}
+Note that $\varphi_X$ is continuous by bounded convergence, and $\varphi_X(0) = 1$.
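+For a standard Gaussian, $\varphi_X(t) = e^{-t^2/2}$, and the expectation defining $\varphi_X$ can be estimated by Monte Carlo. A sketch (Python; the sample size and tolerance are ad hoc choices of mine):

```python
import cmath, math, random

def empirical_cf(samples, t):
    """Monte Carlo estimate of E exp(i t X) from samples of X."""
    return sum(cmath.exp(1j * t * x) for x in samples) / len(samples)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(50_000)]
for t in (0.0, 0.5, 1.0, 2.0):
    exact = math.exp(-t * t / 2)    # characteristic function of N(0, 1)
    assert abs(empirical_cf(xs, t) - exact) < 0.02
```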
+
+\begin{prop}
+ If $\varphi_X = \varphi_Y$, then $\mu_X = \mu_Y$.
+\end{prop}
+
+\begin{thm}[L\'evy's convergence theorem]\index{L\'evy's convergence theorem}
+ Let $(X_n)_{n \geq 0}$, $X$ be random variables taking values in $\R^d$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\mu_{X_n} \Rightarrow \mu_X$ as $n \to \infty$.
+ \item $\varphi_{X_n} \to \varphi_X$ pointwise.
+ \end{enumerate}
+\end{thm}
+We will in fact prove a stronger theorem.
+\begin{thm}[L\'evy]
+ Let $(X_n)_{n \geq 0}$ be as above, and let $\varphi_{X_n}(t) \to \psi(t)$ for all $t$. Suppose $\psi$ is continuous at $0$ and $\psi(0) = 1$. Then there exists a random variable $X$ such that $\varphi_X = \psi$ and $\mu_{X_n} \Rightarrow \mu_X$ as $n \to \infty$.
+\end{thm}
+We will only prove the case $d = 1$. We first need the following lemma:
+\begin{lemma}
+ Let $X$ be a real random variable. Then for all $\lambda > 0$,
+ \[
+ \mu_X( |x| \geq \lambda) \leq C \lambda \int_0^{1/\lambda} (1 - \Re \varphi_X(t)) \;\d t,
+ \]
+ where $C = (1 - \sin 1)^{-1}$.
+\end{lemma}
+
+\begin{proof}
+ For $M \geq 1$, we have
+ \[
+ \int_0^M (1 - \cos t)\;\d t = M - \sin M \geq M(1 - \sin 1).
+ \]
+ By setting $M = \frac{|X|}{\lambda}$ on the event $\{|X| \geq \lambda\}$ (so that $M \geq 1$), we have
+ \[
+ \mathbf{1}_{|X| \geq \lambda} \leq C \frac{\lambda}{|X|} \int_0^{|X|/\lambda} (1 - \cos t)\;\d t.
+ \]
+ By the change of variables $t \mapsto |X| t$, we have
+ \[
+ \mathbf{1}_{|X| \geq \lambda} \leq C\lambda \int_0^{1/\lambda} (1 - \cos Xt)\;\d t.
+ \]
+ Taking expectations, and using the fact that $\Re \varphi_X(t) = \E \cos (Xt)$, gives the result.
+\end{proof}
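+We can sanity-check the lemma for a standard Gaussian, where $\varphi_X(t) = e^{-t^2/2}$ and $\mu_X(|x| \geq \lambda) = \mathrm{erfc}(\lambda/\sqrt{2})$, evaluating the integral by a simple trapezoid rule (an illustration only, not part of the proof):

```python
import math

C = 1 / (1 - math.sin(1))

def tail(lam):
    """P(|X| >= lam) for X ~ N(0, 1), via the complementary error function."""
    return math.erfc(lam / math.sqrt(2))

def rhs(lam, steps=10_000):
    """C * lam * integral over [0, 1/lam] of (1 - Re phi_X(t)) dt for phi_X(t) = e^{-t^2/2}."""
    h = (1 / lam) / steps
    g = lambda t: 1 - math.exp(-t * t / 2)
    integral = h * (sum(g(k * h) for k in range(1, steps)) + (g(0.0) + g(1 / lam)) / 2)
    return C * lam * integral

for lam in (0.5, 1.0, 2.0, 4.0):
    assert tail(lam) <= rhs(lam)    # the lemma's tail bound holds
```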
+
+We can now prove L\'evy's theorem.
+\begin{proof}[Proof of theorem]
+ It is clear that weak convergence implies convergence in characteristic functions.
+
+ Now observe that $\mu_n \Rightarrow \mu$ iff from every subsequence $(n_k)_{k \geq 0}$, we can choose a further subsequence $(n_{k_\ell})$ such that $\mu_{n_{k_{\ell}}} \Rightarrow \mu$ as $\ell \to \infty$. Indeed, $\Rightarrow$ is clear, and suppose $\mu_n \not \Rightarrow \mu$ but satisfies the subsequence property. Then we can choose a bounded and continuous function $f$ such that
+ \[
+ \mu_n(f) \not\to \mu(f).
+ \]
+ Then there is some $\varepsilon > 0$ and a subsequence $(n_k)_{k \geq 0}$ such that $|\mu_{n_k}(f) - \mu(f)| > \varepsilon$ for all $k$. Then no further subsequence of it converges, a contradiction.
+
+ Thus, to show $\Leftarrow$, we need to prove the existence of subsequential limits (uniqueness follows from convergence of characteristic functions). It is enough to prove tightness of the whole sequence.
+
+ Since $\psi$ is continuous at $0$ and $\psi(0) = 1$, we can choose $\lambda$ so large that
+ \[
+ C \lambda \int_0^{1/\lambda} (1 - \Re \psi(t)) \;\d t < \frac{\varepsilon}{2}.
+ \]
+ By bounded convergence, we then have
+ \[
+ C \lambda \int_0^{1/\lambda} (1 - \Re \varphi_{X_n}(t))\;\d t \leq \varepsilon
+ \]
+ for all sufficiently large $n$, and hence, enlarging $\lambda$ if necessary (each individual measure is tight), for all $n$. Thus, by our previous lemma, we know $(\mu_{X_n})_{n \geq 0}$ is tight. So we are done.
+\end{proof}
+
+\section{Brownian motion}
+Finally, we can begin studying Brownian motion. Brownian motion was first observed by the botanist Robert Brown in 1827, when he looked at the random movement of pollen grains in water. In 1905, Albert Einstein provided the first mathematical description of this behaviour. In 1923, Norbert Wiener provided the first rigorous construction of Brownian motion.
+
+\subsection{Basic properties of Brownian motion}
+\begin{defi}[Brownian motion]\index{Brownian motion}
+ A continuous process $(B_t)_{t \geq 0}$ taking values in $\R^d$ is called a \emph{Brownian motion} in $\R^d$ started at $x \in \R^d$ if
+ \begin{enumerate}
+ \item $B_0 = x$ almost surely.
+ \item For all $s < t$, the \term{increment} $B_t - B_s \sim N(0, (t - s) I)$.
+ \item Increments are independent. More precisely, for all $t_1 < t_2 < \cdots < t_k$, the random variables
+ \[
+ B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_k} - B_{t_{k - 1}}
+ \]
+ are independent.
+ \end{enumerate}
+ If $B_0 = 0$, then we call it a \term{standard Brownian motion}.
+\end{defi}
+We always assume our Brownian motion is standard.
+
+\begin{thm}[Wiener's theorem]\index{Wiener's theorem}
+ There exists a Brownian motion on some probability space.
+\end{thm}
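+The construction used in the proof below, successive midpoint refinement over the dyadics with an independent Gaussian correction $2^{-(n+1)/2} Z_d$ at each new dyadic point, is concrete enough to simulate. A sketch (my own Python illustration; the sample size and tolerance are ad hoc) that builds one sample of $(B_d)_{d \in D_n}$ and checks the variance of $B_{1/2}$ by Monte Carlo:

```python
import random

def brownian_dyadic(n, rng):
    """One sample of (B_d), d in D_n, via B_d = (B_{d_-} + B_{d_+})/2 + Z_d / 2^{(n+1)/2}."""
    B = {0.0: 0.0, 1.0: rng.gauss(0.0, 1.0)}    # step 0: B_0 = 0, B_1 = Z_1
    for level in range(1, n + 1):
        step = 2.0 ** -level
        for k in range(1, 2**level, 2):         # new points d in D_level but not D_{level-1}
            d = k * step
            B[d] = (B[d - step] + B[d + step]) / 2 + rng.gauss(0.0, 1.0) / 2 ** ((level + 1) / 2)
    return B

rng = random.Random(1)
B = brownian_dyadic(6, rng)
assert len(B) == 2**6 + 1 and B[0.0] == 0.0

# Monte Carlo check: Var(B_{1/2}) should be close to 1/2
samples = [brownian_dyadic(1, rng)[0.5] for _ in range(40_000)]
var = sum(x * x for x in samples) / len(samples)
assert abs(var - 0.5) < 0.03
```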
+
+\begin{proof}
+ We first prove existence on $[0, 1]$ and in $d = 1$. We wish to apply Kolmogorov's criterion.
+
+ Recall that $D_n$ are the dyadic numbers. Let $(Z_d)_{d \in D}$ be iid $N(0, 1)$ random variables on some probability space. We will define a process on $D_n$ inductively on $n$ with the required properties. We wlog assume $x = 0$.
+
+ In step $0$, we put
+ \[
+ B_0 = 0,\quad B_1 = Z_1.
+ \]
+ Assume that we have already constructed $(B_d)_{d \in D_{n - 1}}$ satisfying the properties. Take $d \in D_n \setminus D_{n - 1}$, and set
+ \[
+ d_{\pm} = d \pm 2^{-n}.
+ \]
+ These are the two consecutive numbers in $D_{n - 1}$ such that $d_- < d < d_+$. Define
+ \[
+ B_d = \frac{B_{d_+} + B_{d_-}}{2} + \frac{1}{2^{(n + 1)/2}} Z_d.
+ \]
+ The condition (i) is trivially satisfied. We now have to check the other two conditions.
+
+ Consider
+ \begin{align*}
+ B_{d_+} - B_d &= \frac{B_{d_+} - B_{d_-}}{2} - \frac{1}{2^{(n + 1)/2}} Z_d\\
+ B_d - B_{d_-} &= \underbrace{\frac{B_{d_+} - B_{d_-}}{2}}_N + \underbrace{\frac{1}{2^{(n + 1)/2}} Z_d}_{N'}.
+ \end{align*}
+ Notice that $N$ and $N'$ are normal with variance $\var(N') = \var(N) = \frac{1}{2^{n + 1}}$. In particular, we have
+ \[
+ \cov(N - N', N + N') = \var(N) - \var(N') = 0.
+ \]
+ So $B_{d_+} - B_d$ and $B_d - B_{d_-}$ are independent.
+
+ Now note that the vector of increments of $(B_d)_{d \in D_n}$ between consecutive numbers in $D_n$ is Gaussian, since after dotting with any vector, we obtain a linear combination of independent Gaussians. Thus, to prove independence, it suffices to prove that pairwise correlation vanishes.
+
+ We already proved this for the case of increments between $B_d$ and $B_{d_{\pm}}$, and this is the only case that is tricky, since they both involve the same $Z_d$. The other cases are straightforward, and are left as an exercise for the reader.
+
+ Inductively, we can construct $(B_d)_{d \in D}$, satisfying (i), (ii) and (iii). Note that for all $s, t \in D$, we have
+ \[
+ \E |B_t - B_s|^p = |t - s|^{p/2} \E |N|^p
+ \]
+ for $N \sim N(0, 1)$. Since $\E |N|^p < \infty$ for all $p$, by Kolmogorov's criterion, we can extend $(B_d)_{d \in D}$ to $(B_t)_{t \in [0, 1]}$. In fact, this is $\alpha$-H\"older continuous for all $\alpha < \frac{1}{2}$.
+
+ Since this is a continuous process and satisfies the desired properties on a dense set, it remains to show that the properties are preserved by taking continuous limits.
+
+ Take $0 \leq t_1 < t_2 < \cdots < t_m \leq 1$, and $0 \leq t_1^n < t_2^n < \cdots < t_m^n \leq 1$ such that $t_i^n \in D_n$ and $t_i^n \to t_i$ as $n \to \infty$ for each $i = 1, \ldots, m$.
+
+ We now apply L\'evy's convergence theorem. Recall that if $X$ is a random variable in $\R^d$ and $X \sim N(0, \Sigma)$, then
+ \[
+ \varphi_X (u) = \exp\left(-\frac{1}{2} u^T \Sigma u\right).
+ \]
+ Since the $t_i^n$ are dyadic, the increments of $B$ between them are independent Gaussians, and so
+ \begin{align*}
+ \varphi_{(B_{t_2^n} - B_{t_1^n}, \ldots, B_{t_m^n} - B_{t_{m - 1}^n})}(u) &= \exp \left(- \frac{1}{2} u^T \Sigma u\right)\\
+ &= \exp \left(-\frac{1}{2} \sum_{i = 1}^{m - 1} (t_{i + 1}^n - t_i^n) u_i^2\right).
+ \end{align*}
+ We know this converges, as $n \to \infty$, to $\exp \left(-\frac{1}{2} \sum_{i = 1}^{m - 1} (t_{i + 1} - t_i) u_i^2\right)$.
+
+ By L\'evy's convergence theorem, the law of $(B_{t_2} - B_{t_1}, B_{t_3} - B_{t_2}, \ldots, B_{t_m} - B_{t_{m - 1}})$ is Gaussian with the right covariance. This implies that (ii) and (iii) hold on $[0, 1]$.
+
+ To extend the time to $[0, \infty)$, we define independent Brownian motions $(B_t^i)_{t \in [0, 1], i \in \N}$ and define
+ \[
+ B_t = \sum_{i = 0}^{\lfloor t\rfloor - 1} B_1^i + B^{\lfloor t\rfloor}_{t - \lfloor t \rfloor}.
+ \]
+ To extend to $\R^d$, take the product of $d$ many independent one-dimensional Brownian motions.
+\end{proof}
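+
+The midpoint refinement used in the proof is easy to simulate. The following sketch (our own illustration; the function name is made up) builds the process on the dyadic grid $D_6$ exactly as in the induction, and checks the increment law empirically:

```python
import numpy as np

def levy_construction(n, rng):
    # Build Brownian motion on the dyadic grid D_n of [0, 1] by midpoint
    # refinement: B_d = (B_{d-} + B_{d+})/2 + Z_d / 2^{(level+1)/2}.
    B = np.array([0.0, rng.standard_normal()])  # step 0: B_0 = 0, B_1 = Z_1
    for level in range(1, n + 1):
        Z = rng.standard_normal(len(B) - 1)     # one fresh Z_d per new midpoint
        mid = (B[:-1] + B[1:]) / 2 + Z / 2 ** ((level + 1) / 2)
        new = np.empty(2 * len(B) - 1)
        new[::2] = B                            # keep the points of D_{n-1}
        new[1::2] = mid                         # insert the new midpoints
        B = new
    return B                                    # values at k 2^{-n}, k = 0, ..., 2^n

rng = np.random.default_rng(0)
paths = np.array([levy_construction(6, rng) for _ in range(20000)])
incr = np.diff(paths, axis=1)   # increments over consecutive points of D_6
```

+Empirically, each increment has variance close to the spacing $2^{-6}$, and disjoint increments are uncorrelated, which is conditions (ii) and (iii) restricted to $D_6$.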
+
+\begin{lemma}
+ Brownian motion is a Gaussian process, i.e.\ for any $0 \leq t_1 < t_2 < \cdots < t_m$, the vector $(B_{t_1}, B_{t_2}, \ldots, B_{t_m})$ is Gaussian, with covariance
+ \[
+ \cov(B_s, B_t) = s \wedge t.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We know $(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_m} - B_{t_{m - 1}})$ is Gaussian, and $(B_{t_1}, \ldots, B_{t_m})$ is its image under a linear isomorphism, so it is Gaussian as well. To compute the covariance, for $s \leq t$, we have
+ \[
+ \cov(B_s, B_t) = \E B_s B_t = \E B_s(B_t - B_s) + \E B_s^2 = 0 + s = s.\qedhere
+ \]
+\end{proof}
+
+\begin{prop}[Invariance properties]
+ Let $(B_t)_{t \geq 0}$ be a standard Brownian motion in $\R^d$.
+ \begin{enumerate}
+ \item If $U$ is an orthogonal matrix, then $(UB_t)_{t \geq 0}$ is a standard Brownian motion.
+ \item \term{Brownian scaling}: If $a > 0$, then $(a^{-1/2} B_{at})_{t \geq 0}$ is a standard Brownian motion. This is known as a \term{random fractal property}.
+ \item (\emph{Simple}) \term{Markov property}: For all $s \geq 0$, the sequence $(B_{t + s} - B_s)_{t \geq 0}$ is a standard Brownian motion, independent of $(\mathcal{F}_s^B)$.
+ \item \emph{Time inversion}: Define a process
+ \[
+ X_t =
+ \begin{cases}
+ 0 & t = 0\\
+ t B_{1/t} & t > 0
+ \end{cases}.
+ \]
+ Then $(X_t)_{t \geq 0}$ is a standard Brownian motion.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ Only (iv) requires proof. It is enough to prove that $X_t$ is continuous and has the right finite-dimensional distributions. We have
+ \[
+ (X_{t_1}, \ldots, X_{t_m}) = (t_1 B_{1/t_1}, \ldots, t_m B_{1/t_m}).
+ \]
+ The right-hand side is the image of $(B_{1/t_1}, \ldots, B_{1/t_m})$ under a linear isomorphism. So it is Gaussian. If $s \leq t$, then the covariance is
+ \[
+ \cov(X_s, X_t) = \cov(s B_{1/s}, t B_{1/t}) = st \cov(B_{1/s}, B_{1/t}) = st \left(\frac{1}{s} \wedge \frac{1}{t}\right) = s = s \wedge t.
+ \]
+ Continuity is obvious for $t > 0$. To prove continuity at $0$, note that we have just shown that $(X_q)_{q > 0, q \in \Q}$ has the same law (as a process) as Brownian motion. So
+ \[
+ \P \left(\lim_{q \in \Q_+, q \to 0} X_q = 0\right) = \P \left(\lim_{q \in \Q_+, q \to 0} B_q = 0\right) = 1
+ \]
+ by continuity of $B$. Since $X$ is continuous on $(0, \infty)$, the limit of $X_t$ as $t \to 0$ along the reals agrees with the limit along the rationals, so $X_t \to 0$ almost surely.
+\end{proof}
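+
+The scaling and time-inversion identities can be sanity-checked by direct sampling, since only the pair $(B_{1/2}, B_2)$ is needed. A minimal sketch (our own, with an arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000
B_half = np.sqrt(0.5) * rng.standard_normal(n)           # B_{1/2} ~ N(0, 1/2)
B_two = B_half + np.sqrt(1.5) * rng.standard_normal(n)   # B_2 = B_{1/2} + N(0, 3/2)

# time inversion X_t = t B_{1/t}: cov(X_s, X_t) should equal min(s, t)
X_half = 0.5 * B_two    # X_{1/2} = (1/2) B_2
X_two = 2.0 * B_half    # X_2 = 2 B_{1/2}
cov_emp = np.mean(X_half * X_two)    # theory: 1/2 wedge 2 = 1/2

# Brownian scaling with a = 4: a^{-1/2} B_{at} at t = 1/2 should be N(0, 1/2)
scaled = 4.0 ** -0.5 * B_two
var_emp = scaled.var()               # theory: 1/2
```

+Both empirical quantities should land close to $\frac{1}{2}$, matching $\cov(X_s, X_t) = s \wedge t$ and the scaling property.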
+
+Using the natural filtration, we have
+\begin{thm}
+ For all $s \geq 0$, the process $(B_{t + s} - B_s)_{t \geq 0}$ is independent of $\mathcal{F}_s^+$.
+\end{thm}
+
+\begin{proof}
+ Take a sequence $s_n \to s$ such that $s_n > s$ for all $n$. By continuity,
+ \[
+ B_{t + s} - B_s = \lim_{n \to \infty} B_{t + s_n} - B_{s_n}
+ \]
+ almost surely. Now each $B_{t + s_n} - B_{s_n}$ is independent of $\mathcal{F}_{s_n} \supseteq \mathcal{F}_s^+$ (as $s_n > s$), and hence so is the limit.
+\end{proof}
+
+\begin{thm}[Blumenthal's 0-1 law]\index{Blumenthal's 0-1 law}
+ The $\sigma$-algebra $\mathcal{F}^+_0$ is trivial, i.e.\ if $A \in \mathcal{F}_0^+$, then $\P(A) \in \{0, 1\}$.
+\end{thm}
+
+\begin{proof}
+ Apply our previous theorem. Take $A \in \mathcal{F}_0^+$. By the theorem, $A$ is independent of $\sigma(B_t: t \geq 0)$, but also $A \in \mathcal{F}_0^+ \subseteq \sigma(B_t: t \geq 0)$. So $A$ is independent of itself, i.e.\ $\P(A) = \P(A)^2$, and hence $\P(A) \in \{0, 1\}$.
+\end{proof}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If $d = 1$, then
+ \begin{align*}
+ 1 &= \P(\inf \{t \geq 0: B_t > 0\} = 0) \\
+ &= \P (\inf \{t \geq 0: B_t < 0\} = 0) \\
+ &= \P(\inf \{t > 0: B_t = 0\} = 0)
+ \end{align*}
+ \item For any $d \geq 1$, we have
+ \[
+ \lim_{t \to \infty} \frac{B_t}{t} = 0
+ \]
+ almost surely.
+ \item If we define
+ \[
+ S_t = \sup_{0 \leq s \leq t} B_s,\quad I_t = \inf_{0 \leq s \leq t} B_s,
+ \]
+ then $S_\infty = \infty$ and $I_\infty = -\infty$ almost surely.
+ \item If $A$ is an open subset of $\R^d$, define the cone on $A$ by $C_A = \{tx: x \in A, t > 0\}$. Then $\inf \{t \geq 0: B_t \in C_A\} = 0$ almost surely.
+ \end{enumerate}
+\end{prop}
+Thus, Brownian motion is pretty chaotic.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item It suffices to prove the first equality. Note that the event $\{\inf \{t \geq 0: B_t > 0\} = 0\}$ lies in $\mathcal{F}_0^+$, so by Blumenthal's 0-1 law it has probability $0$ or $1$. Moreover, for any finite $t > 0$, the probability that $B_t > 0$ is $\frac{1}{2}$. Taking a sequence $t_n \to 0$ and applying Fatou, we conclude that the probability is at least $\frac{1}{2}$, hence equal to $1$.
+ \item Follows from the previous one since $t B_{1/t}$ is a Brownian motion.
+ \item By Brownian scaling, $S_\infty$ has the same law as $a^{-1/2} S_\infty$ for all $a > 0$, so $S_\infty \in \{0, \infty\}$ almost surely, and (i) rules out $S_\infty = 0$. Similarly for $I_\infty$.
+ \item Same as (i).\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{thm}[Strong Markov property]\index{strong Markov property}
+ Let $(B_t)_{t \geq 0}$ be a standard Brownian motion in $\R^d$, and let $T$ be an almost-surely finite stopping time with respect to $(\mathcal{F}_t^+)_{t \geq 0}$. Then
+ \[
+ \tilde{B}_t = B_{T + t} - B_T
+ \]
+ is a standard Brownian motion with respect to $(\mathcal{F}_{T + t}^ +)_{t \geq 0}$ that is independent of $\mathcal{F}_T^+$.
+\end{thm}
+
+\begin{proof}
+ Let $T_n = 2^{-n} \lceil 2^n T \rceil$. We first prove the statement for $T_n$. We let
+ \[
+ B^{(k)}_t = B_{t + k/2^n} - B_{k/2^n}
+ \]
+ This is then a standard Brownian motion independent of $\mathcal{F}_{k/2^n}^+$ by the simple Markov property. Let
+ \[
+ B_*(t) = B_{t + T_n} - B_{T_n}.
+ \]
+ Let $\mathcal{A}$ be the $\sigma$-algebra on $\mathcal{C} = C([0, \infty), \R^d)$, and $A \in \mathcal{A}$. Let $E \in \mathcal{F}_{T_n}^+$. The claim that $B_*$ is a standard Brownian motion independent of $E$ can be concisely captured in the equality
+ \[
+ \P(\{B_* \in A\} \cap E) = \P(\{B \in A\}) \P(E).\tag{$\dagger$}
+ \]
+ Taking $E = \Omega$ tells us $B_*$ and $B$ have the same law, and then taking general $E$ tells us $B_*$ is independent of $\mathcal{F}_{T_n}^+$.
+
+ It is a straightforward computation to prove $(\dagger)$. Indeed, we have
+ \begin{align*}
+ \P(\{B_* \in A\}\cap E) &= \sum_{k = 0}^\infty \P\left(\{B^{(k)} \in A\} \cap E \cap \left\{T_n = \frac{k}{2^n}\right\}\right)\\
+ \intertext{Since $E \in \mathcal{F}_{T_n}^+$, we know $E \cap \{T_n = k/2^n\} \in \mathcal{F}_{k/2^n}^+$. So by the simple Markov property, this is equal to}
+ &=\sum_{k = 0}^\infty \P(\{B^{(k)} \in A\}) \P\left(E \cap \left\{T_n = \frac{k}{2^n}\right\}\right).
+ \intertext{But we know $B^{(k)}$ is a standard Brownian motion. So this is equal to}
+ &=\sum_{k = 0}^\infty \P(\{B \in A\}) \P \left(E \cap \left\{ T_n = \frac{k}{2^n}\right\}\right) \\
+ &= \P (\{B \in A\}) \P (E).
+ \end{align*}
+ So we are done.
+
+ Now as $n \to \infty$, the increments of $B_*$ converge almost surely to the increments of $\tilde{B}$, since $B$ is continuous and $T_n \searrow T$ almost surely. But the processes $B_*$ (one for each $n$) all have the same law, and almost sure convergence implies convergence in distribution. So $\tilde{B}$ is a standard Brownian motion. Independence of $\mathcal{F}_T^+$ is clear, since $\mathcal{F}_T^+ \subseteq \mathcal{F}_{T_n}^+$ for each $n$.
+\end{proof}
+
+We know that we can reset our process any time we like, and we also know that we have a bunch of invariance properties. We can combine these to prove some nice results.
+
+\begin{thm}[Reflection principle]\index{reflection principle}
+ Let $(B_t)_{t \geq 0}$ and $T$ be as above. Then the \emph{reflected process} $(\tilde{B}_t)_{t \geq 0}$ defined by
+ \[
+ \tilde{B}_t = B_t \mathbf{1}_{t < T} + (2 B_T - B_t)\mathbf{1}_{t \geq T}
+ \]
+ is a standard Brownian motion.
+\end{thm}
+Of course, the fact that we are reflecting is not important. We can apply any operation that preserves the law. This theorem is ``obvious'', but we can be a bit more careful in writing down a proof.
+
+\begin{proof}
+ By the strong Markov property, we know
+ \[
+ B^T_t = B_{T + t} - B_T
+ \]
+ and $-B_t^T$ are standard Brownian motions independent of $\mathcal{F}_T^+$. This implies that the pairs of random variables
+ \[
+ P_1 = ((B_t)_{0 \leq t \leq T}, (B_t)^T_{t \geq 0}),\quad P_2 = ((B_t)_{0 \leq t \leq T}, (-B_t)^T_{t \geq 0})
+ \]
+ taking values in $\mathcal{C} \times \mathcal{C}$ have the same law on $\mathcal{C} \times \mathcal{C}$ with the product $\sigma$-algebra.
+
+ Define the concatenation map $\psi_T(X, Y): \mathcal{C} \times \mathcal{C} \to \mathcal{C}$ by
+ \[
+ \psi_T(X, Y) = X_t \mathbf{1}_{t < T} + (X_T + Y_{t - T}) \mathbf{1}_{t \geq T}.
+ \]
+ Assuming $Y_0 = 0$, the resulting process is continuous.
+
+ Notice that $\psi_T$ is a measurable map, which we can prove by approximations of $T$ by discrete stopping times. We then conclude that $\psi_T(P_1)$ has the same law as $\psi_T(P_2)$.
+\end{proof}
+
+\begin{cor}
+ Let $(B_t)_{t \geq 0}$ be a standard Brownian motion in $d = 1$. Let $b > 0$ and $a \leq b$. Let
+ \[
+ S_t = \sup_{0 \leq s \leq t} B_s.
+ \]
+ Then
+ \[
+ \P(S_t \geq b, B_t \leq a) = \P(B_t \geq 2b - a).
+ \]
+\end{cor}
+
+\begin{proof}
+ Consider the stopping time $T$ given by the first hitting time of $b$. Since $S_\infty = \infty$, we know $T$ is finite almost surely. Let $(\tilde{B}_t)_{t\geq 0}$ be the reflected process. Then
+ \[
+ \{S_t \geq b, B_t \leq a\} = \{\tilde{B}_t \geq 2b - a \}.\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ The law of $S_t$ is equal to the law of $|B_t|$.
+\end{cor}
+
+\begin{proof} Apply the previous corollary with $b = a$ to get
+ \begin{align*}
+ \P(S_t \geq a) &= \P(S_t \geq a, B_t < a) + \P(S_t \geq a, B_t \geq a)\\
+ &= \P(B_t \geq a) + \P(B_t \geq a)\\
+ &= \P(B_t \leq -a) + \P(B_t \geq a)\\
+ &= \P(|B_t| \geq a).\qedhere
+ \end{align*}
+\end{proof}
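+
+The corollary is easy to test by Monte Carlo on a discretized Brownian motion (a sketch, under the assumption that a fine time grid approximates the running maximum well enough; the grid maximum slightly undershoots the true $S_1$):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 40000, 1000
dt = 1.0 / n_steps
B = np.zeros(n_paths)     # current value B_t
S = np.zeros(n_paths)     # running maximum S_t
for _ in range(n_steps):
    B += np.sqrt(dt) * rng.standard_normal(n_paths)
    np.maximum(S, B, out=S)

a = 1.0
p_max = np.mean(S >= a)            # P(S_1 >= 1)
p_abs = np.mean(np.abs(B) >= a)    # P(|B_1| >= 1) = 2(1 - Phi(1)) ~ 0.3173
```

+Up to discretization and sampling error, the two probabilities agree, as the corollary predicts.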
+
+\begin{prop}
+ Let $d = 1$ and $(B_t)_{t \geq 0}$ be a standard Brownian motion. Then the following processes are $(\mathcal{F}_t^+)_{t \geq 0}$ martingales:
+ \begin{enumerate}
+ \item $(B_t)_{t \geq 0}$
+ \item $(B_t^2 - t)_{t \geq 0}$
+ \item $\left(\exp\left(u B_t - \frac{u^2 t}{2}\right)\right)_{t \geq 0}$ for $u \in \R$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Using the fact that $B_t - B_s$ is independent of $\mathcal{F}_s^+$, we know
+ \[
+ \E (B_t - B_s \mid \mathcal{F}_s^+) = \E (B_t - B_s) = 0.
+ \]
+ \item We have
+ \begin{align*}
+ \E (B_t^2 - t \mid \mathcal{F}_s^+) &= \E ((B_t - B_s)^2 \mid \mathcal{F}_s^+)- \E(B_s^2 \mid \mathcal{F}_s^+) + 2 \E(B_t B_s \mid \mathcal{F}_s^+) - t\\
+ \intertext{We know $B_t - B_s$ is independent of $\mathcal{F}_s^+$, so the first term is equal to $\var(B_t - B_s) = (t - s)$, and we can simplify to get}
+ &= (t - s) - B_s^2 + 2 B_s^2 - t\\
+ &= B_s^2 - s.
+ \end{align*}
+ \item Similar.\qedhere
+ \end{enumerate}
+\end{proof}
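+
+As a numerical sanity check (our own, not part of the notes), the martingales in (ii) and (iii) have constant expectation ($0$ and $1$ respectively), which we can verify by sampling $B_t \sim N(0, t)$ directly:

```python
import numpy as np

rng = np.random.default_rng(3)
t, u, n = 2.0, 0.7, 400000
Bt = np.sqrt(t) * rng.standard_normal(n)    # B_t ~ N(0, t)

m_quad = np.mean(Bt ** 2 - t)                         # E[B_t^2 - t] = 0
m_exp = np.mean(np.exp(u * Bt - u ** 2 * t / 2.0))    # E[exp(u B_t - u^2 t / 2)] = 1
```

+The second identity is just the Gaussian moment generating function $\E e^{u B_t} = e^{u^2 t/2}$ in disguise.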
+
+\subsection{Harmonic functions and Brownian motion}
+Recall that a Markov chain plus a harmonic function gave us a martingale. We shall derive similar results here.
+
+\begin{defi}[Domain]\index{domain}
+ A \emph{domain} is an open connected set $D \subseteq \R^d$.
+\end{defi}
+
+\begin{defi}[Harmonic function]\index{harmonic function}
+ A function $u: D \to \R$ is called \emph{harmonic} if\index{$\Delta$}\index{Laplacian operator}
+ \[
+ \Delta u = \sum_{i = 1}^d \frac{\partial^2 u}{\partial x_i^2} = 0.
+ \]
+\end{defi}
+
+There is also an alternative characterization of harmonic functions that involves integrals instead of derivatives.
+\begin{lemma}
+ Let $u: D \to \R$ be measurable and locally bounded. Then the following are equivalent:
+ \begin{enumerate}
+ \item $u$ is twice continuously differentiable and $\Delta u = 0$.
+ \item For any $x \in D$ and $r > 0$ such that $B(x, r) \subseteq D$, we have
+ \[
+ u(x) = \frac{1}{\mathrm{Vol}(B(x, r))} \int_{B(x, r)} u(y) \;\d y
+ \]
+ \item For any $x \in D$ and $r > 0$ such that $B(x, r) \subseteq D$, we have
+ \[
+ u(x) = \frac{1}{\mathrm{Area}(\partial B(x, r))} \int_{\partial B(x, r)} u(y) \;\d y.
+ \]
+ \end{enumerate}
+\end{lemma}
+The latter two properties are known as the \term{mean value property}.
+
+\begin{proof}
+ IA Vector Calculus.
+\end{proof}
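+
+The mean value property can be checked numerically for a concrete harmonic function, say $u(x, y) = e^x \cos y$ in $d = 2$ (our own example; the quadrature is a plain Riemann/trapezoidal sum):

```python
import numpy as np

def u(x, y):
    return np.exp(x) * np.cos(y)   # harmonic: u_xx + u_yy = 0

x0, y0, r = 0.3, -0.2, 0.5

# average over the sphere (here: circle) of radius r around (x0, y0)
theta = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
sphere_avg = np.mean(u(x0 + r * np.cos(theta), y0 + r * np.sin(theta)))

# average over the ball, in polar coordinates (area element rho drho dtheta)
rho = np.linspace(0.0, r, 400)
R, TH = np.meshgrid(rho, theta)
vals = u(x0 + R * np.cos(TH), y0 + R * np.sin(TH))
ball_avg = np.sum(vals * R) / np.sum(R)
```

+Both averages reproduce $u(x_0, y_0)$ to quadrature accuracy, as (ii) and (iii) assert.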
+
+\begin{thm}
+ Let $(B_t)_{t \geq 0}$ be a standard Brownian motion in $\R^d$, and $u: \R^d \to \R$ be harmonic such that
+ \[
+ \E |u(x + B_t)| < \infty
+ \]
+ for any $x \in \R^d$ and $t \geq 0$. Then the process $(u(B_t))_{t \geq 0}$ is a martingale with respect to $(\mathcal{F}_t^+)_{t \geq 0}$.
+\end{thm}
+
+To prove this, we need to prove a side lemma:
+\begin{lemma}
+ Let $X$ and $Y$ be random variables in $\R^d$ such that $X$ is $\mathcal{G}$-measurable and $Y$ is independent of $\mathcal{G}$. If $f: \R^d \times \R^d \to \R$ is such that $f(X, Y)$ is integrable, then
+ \[
+ \E(f(X, Y) \mid \mathcal{G}) = \E f(z, Y)|_{z = X}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Use Fubini and the fact that $\mu_{(X, Y)} = \mu_X \otimes \mu_Y$.
+\end{proof}
+
+Observe that if $\mu$ is a probability measure in $\R^d$ such that the density of $\mu$ with respect to the Lebesgue measure depends only on $|x|$, then if $u$ is harmonic, the mean value property implies
+\[
+ u(x) = \int_{\R^d} u(x + y) \;\d \mu(y).
+\]
+\begin{proof}[Proof of theorem]
+ Let $t \geq s$. Then
+ \begin{align*}
+ \E (u (B_t) \mid \mathcal{F}_s^+) &= \E(u(B_s + (B_t - B_s)) \mid \mathcal{F}_s^+)\\
+ &= \E(u(z + B_t - B_s))|_{z = B_s}\\
+ &= u(z)|_{z = B_s}\\
+ &= u(B_s).\qedhere
+ \end{align*}
+\end{proof}
+
+In fact, the following more general result is true:
+\begin{thm}
+ Let $f: \R^d \to \R$ be twice continuously differentiable with bounded derivatives. Then the process $(X_t)_{t \geq 0}$ defined by
+ \[
+ X_t = f(B_t) - \frac{1}{2} \int_0^t \Delta f(B_s)\;\d s
+ \]
+ is a martingale with respect to $(\mathcal{F}_t^+)_{t \geq 0}$.
+\end{thm}
+We shall not prove this, but we can justify this as follows: suppose we have a sequence of independent random variables $\{X_1, X_2, \ldots\}$, with
+\[
+ \P(X_i = \pm 1) = \frac{1}{2}.
+\]
+Let $S_n = X_1 + \cdots + X_n$. Then
+\[
+ \E(f(S_{n + 1}) \mid S_1, \ldots, S_n) - f(S_n) = \frac{1}{2} (f(S_n - 1) + f(S_n + 1) - 2 f(S_n)) \equiv \frac{1}{2} \tilde{\Delta} f(S_n),
+\]
+and we see that this is the discretized second derivative. So
+\[
+ f(S_n) - \frac{1}{2} \sum_{i = 0}^{n - 1} \tilde{\Delta} f(S_i)
+\]
+is a martingale.
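+
+This discrete martingale can be verified exactly, not just by simulation, by evolving the distribution of $S_n$ on $\Z$; the sketch below (our own) uses $f(x) = x^3 + x^2$, for which $\E f(S_n)$ and the compensator are both non-trivial:

```python
import numpy as np

def f(x):
    return x ** 3 + x ** 2

def disc_lap(x):
    # discretized Laplacian: f(x+1) + f(x-1) - 2 f(x)
    return f(x + 1) + f(x - 1) - 2 * f(x)

N = 30
xs = np.arange(-N - 1, N + 2)   # state space {-N-1, ..., N+1}
p = np.zeros(len(xs))
p[N + 1] = 1.0                  # S_0 = 0 sits at index N+1
comp = 0.0                      # E[(1/2) sum_{i<n} lap f(S_i)]
EM = []
for n in range(N + 1):
    EM.append(np.dot(p, f(xs)) - comp)            # E[M_n]
    comp += 0.5 * np.dot(p, disc_lap(xs))
    p = 0.5 * (np.roll(p, 1) + np.roll(p, -1))    # one step of the walk
EM = np.array(EM)
```

+Every entry of `EM` equals $f(0) = 0$ up to floating-point error, confirming that the compensated process has constant expectation.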
+
+Now the mean value property of a harmonic function $u$ says if we draw a sphere $B$ centered at $x$, then $u(x)$ is the average value of $u$ on $B$. More generally, if we have a surface $S$ containing $x$, is it true that $u(x)$ is the average value of $u$ on $S$ in some sense?
+
+Remarkably, the answer is yes, and the precise result is given by Brownian motion. Let $(X_t)_{t \geq 0}$ be a Brownian motion started at $x$, and let $T$ be the first hitting time of $S$. Then, under certain technical conditions, we have
+\[
+ u(x) = \E_x u(X_T).
+\]
+In fact, given some function $\varphi$ defined on the boundary of $D$, we can set
+\[
+ u(x) = \E_x \varphi(X_T),
+\]
+and this gives us the (unique) solution to Laplace's equation with the boundary condition given by $\varphi$.
+
+It is in fact not hard to show that the resulting $u$ is harmonic in $D$, since it is almost immediate by construction that $u(x)$ is the average of $u$ on a small sphere around $x$. The hard part is to show that $u$ is in fact continuous at the boundary, so that it is a genuine solution to the boundary value problem. This is where the technical condition comes in.
+
+First, we quickly establish that solutions to Laplace's equation are unique.
+\begin{prop}[Maximum principle]\index{maximum principle}
+ Let $u: \bar{D} \to \R$ be continuous and harmonic. Then
+ \begin{enumerate}
+ \item If $u$ attains its maximum inside $D$, then $u$ is constant.
+ \item If $D$ is bounded, then the maximum of $u$ in $\bar{D}$ is attained at $\partial D$.
+ \end{enumerate}
+\end{prop}
+Thus, harmonic functions do not have interior maxima unless they are constant.
+
+\begin{proof}
+ Follows from the mean value property of harmonic functions.
+\end{proof}
+
+\begin{cor}
+ If $u$ and $u'$ solve $\Delta u = \Delta u' = 0$, and $u$ and $u'$ agree on $\partial D$, then $u = u'$.
+\end{cor}
+
+\begin{proof}
+ $u - u'$ is also harmonic, and so attains the maximum at the boundary, where it is $0$. Similarly, the minimum is attained at the boundary.
+\end{proof}
+
+The technical condition we impose on $D$ is the following:
+\begin{defi}[Poincar\'e cone condition]\index{Poincar\'e cone condition}
+ We say a domain $D$ satisfies the \emph{Poincar\'e cone condition} if for any $x \in \partial D$, there is an open cone $C$ based at $x$ such that
+ \[
+ C \cap D \cap B(x, \delta) = \emptyset
+ \]
+ for some $\delta > 0$.
+\end{defi}
+
+\begin{eg}
+ If $D = \R^2 \setminus (\{0\} \times \R_{\geq 0})$, then $D$ does not satisfy the Poincar\'e cone condition.
+\end{eg}
+And the technical lemma is as follows:
+\begin{lemma}
+ Let $C$ be an open cone in $\R^d$ based at $0$. Then there exists $0 \leq a < 1$ such that if $|x| \leq \frac{1}{2^k}$, then
+ \[
+ \P_x(T_{\partial B(0, 1)} < T_C) \leq a^k.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Pick
+ \[
+ a = \sup_{|x| \leq \frac{1}{2}} \P_x(T_{\partial B(0, 1)} < T_C) < 1.
+ \]
+ We then apply the strong Markov property and the fact that Brownian motion is scale invariant. We reason as follows: if we start with $|x| \leq 2^{-k}$, then to hit $\partial B(0, 1)$ before hitting $C$, we must first hit $\partial B(0, 2^{-k + 1})$ before hitting $C$, which by scaling happens with probability at most $a$. Once at $\partial B(0, 2^{-k + 1})$, by the strong Markov property, the probability of hitting $\partial B(0, 2^{-k + 2})$ before hitting the cone is again at most $a$, and so on. After $k$ such steps we reach $\partial B(0, 1)$, so by induction, the probability of hitting $\partial B(0, 1)$ before hitting the cone is at most $a^k$.
+\end{proof}
+
+The ultimate theorem is then
+\begin{thm}
+ Let $D$ be a bounded domain satisfying the Poincar\'e cone condition, and let $\varphi: \partial D \to \R$ be continuous. Let
+ \[
+ T_{\partial D} = \inf\{ t \geq 0 : B_t \in \partial D \}.
+ \]
+ This is an almost surely finite stopping time. Then the function $u: \bar{D} \to \R$ defined by
+ \[
+ u(x) = \E_x (\varphi(B_{T_{\partial D}})),
+ \]
+ where $\E_x$ is the expectation if we start at $x$, is the unique continuous function such that $u(x) = \varphi(x)$ for $x \in \partial D$, and $\Delta u = 0$ for $x \in D$.
+\end{thm}
+
+\begin{proof}
+ Let $\tau = T_{\partial B(x, \delta)}$ for $\delta$ small. Then by the strong Markov property and the tower law, we have
+ \[
+ u(x) = \E_x(u(B_\tau)),
+ \]
+ and $B_\tau$ is uniformly distributed over $\partial B(x, \delta)$. So we know $u$ is harmonic in the interior of $D$, and in particular is continuous in the interior. It is also clear that $u|_{\partial D} = \varphi$. So it remains to show that $u$ is continuous up to $\bar{D}$.
+
+ So let $x \in \partial D$. Since $\varphi$ is continuous, for every $\varepsilon > 0$, there is $\delta > 0$ such that if $y \in \partial D$ and $|y - x| < \delta$, then $|\varphi(y) - \varphi(x)| \leq \varepsilon$.
+
+ Take $z \in \bar{D}$ such that $|z - x| \leq \frac{\delta}{2}$. Suppose we start our Brownian motion at $z$. If we hit the boundary before we leave the ball, then we are in good shape. If not, then we are sad. But if the second case has small probability, then since $\varphi$ is bounded, we can still be fine.
+
+ Pick a cone $C$ as in the definition of the Poincar\'e cone condition, and assume we picked $\delta$ small enough that $C \cap B(x, \delta) \cap D = \emptyset$. Then we have
+ \begin{align*}
+ |u(z) - \varphi(x)| &= |\E_z(\varphi(B_{T_{\partial D}})) - \varphi(x)|\\
+ &\leq \E_z |\varphi(B_{T_{\partial D}}) - \varphi(x)|\\
+ &\leq \varepsilon \P_z(T_{\partial D} < T_{\partial B(x, \delta)}) + 2 \|\varphi\|_\infty \P_z(T_{\partial D} > T_{\partial B(x, \delta)})\\
+ &\leq \varepsilon + 2 \|\varphi\|_{\infty} \P_z(T_{\partial B(x, \delta)} \leq T_C),
+ \end{align*}
+ and we know the second term $\to 0$ as $z \to x$.
+\end{proof}
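+
+The representation $u(x) = \E_x \varphi(B_{T_{\partial D}})$ also suggests a numerical method. For the unit disc, the exit point of any circle centred at the current position and inscribed in the disc is uniform on that circle (by rotational invariance), which gives the classical walk-on-spheres scheme (not covered in the notes, but a direct implementation of the ideas above). As a test case we take boundary data $\varphi(x, y) = x^2 - y^2$, whose harmonic extension is $x^2 - y^2$ itself:

```python
import numpy as np

def walk_on_spheres(z0, phi, n_samples, rng, eps=1e-3):
    # Monte Carlo estimate of u(z0) = E_{z0}[phi(B_T)] on the unit disc:
    # jump straight to a uniform point on the largest inscribed circle,
    # repeating until within eps of the boundary.
    z = np.tile(np.asarray(z0, dtype=float), (n_samples, 1))
    while True:
        r = np.linalg.norm(z, axis=1)
        active = r < 1.0 - eps            # not yet close to the boundary
        if not active.any():
            break
        d = 1.0 - r[active]               # radius of the inscribed circle
        theta = rng.uniform(0.0, 2 * np.pi, d.shape)
        z[active] += np.column_stack([d * np.cos(theta), d * np.sin(theta)])
    z /= np.linalg.norm(z, axis=1, keepdims=True)   # project onto the boundary
    return np.mean(phi(z[:, 0], z[:, 1]))

rng = np.random.default_rng(4)
est = walk_on_spheres([0.5, 0.0], lambda x, y: x ** 2 - y ** 2, 20000, rng)
# the true solution at (1/2, 0) is (1/2)^2 - 0^2 = 0.25
```

+Each jump is one application of the mean value property, so the estimator is unbiased up to the $\varepsilon$-projection at the boundary.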
+
+\subsection{Transience and recurrence}
+\begin{thm}
+ Let $(B_t)_{t \geq 0}$ be a Brownian motion in $\R^d$.
+ \begin{itemize}
+ \item If $d = 1$, then $(B_t)_{t \geq 0}$ is \term{point recurrent}, i.e.\ for each $x, z \in \R$, the set $\{t \geq 0: B_t = z\}$ is unbounded $\P_x$-almost surely.
+ \item If $d = 2$, then $(B_t)_{t \geq 0}$ is \term{neighbourhood recurrent}, i.e.\ for each $x \in \R^2$ and $U \subseteq \R^2$ open, the set $\{t \geq 0: B_t \in U\}$ is unbounded $\P_x$-almost surely. However, the process does not visit points, i.e.\ for all $x, z \in \R^2$, we have
+ \[
+ \P_x(B_t = z\text{ for some }t > 0) = 0.
+ \]
+ \item If $d \geq 3$, then $(B_t)_{t \geq 0}$ is \term{transient}, i.e.\ $|B_t| \to \infty$ as $t\to \infty$ $\P_x$-almost surely.
+ \end{itemize}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item This is trivial, since $\inf_{t \geq 0} B_t = -\infty$ and $\sup_{t \geq 0} B_t = \infty$ almost surely, and $(B_t)_{t \geq 0}$ is continuous.
+ \item It is enough to prove for $x = 0$. Let $0 < \varepsilon < R < \infty$ and $\varphi \in C_b^2(\R^2)$ such that $\varphi(x) = \log |x|$ for $\varepsilon \leq |x| \leq R$. It is an easy exercise to check that this is harmonic inside the annulus. By the theorem we didn't prove, we know
+ \[
+ M_t = \varphi(B_t) - \frac{1}{2} \int_0^t \Delta \varphi (B_s)\;\d s
+ \]
+ is a martingale. For $\lambda \geq 0$, let $S_\lambda = \inf \{t \geq 0: |B_t| = \lambda\}$. If $\varepsilon \leq |x| \leq R$, then $H = S_\varepsilon \wedge S_R$ is $\P_x$-almost surely finite. Then $(M_{t \wedge H})_{t \geq 0}$ is a bounded martingale. By optional stopping, we have
+ \[
+ \E_x (\log |B_H|) = \log |x|.
+ \]
+ But the LHS is
+ \[
+ \log \varepsilon\, \P_x(S_\varepsilon < S_R) + \log R\, \P_x(S_R < S_\varepsilon).
+ \]
+ So we find that
+ \[
+ \P_x(S_\varepsilon < S_R) = \frac{\log R - \log |x|}{\log R - \log \varepsilon}.\tag{$*$}
+ \]
+ Note that if we let $R \to \infty$, then $S_R \to \infty$ almost surely. Using $(*)$, this implies $\P_x(S_\varepsilon < \infty) = 1$, and this does not depend on $x$. So we are done.
+
+ To prove that $(B_t)_{t \geq 0}$ does not visit points, wlog take $z = 0$ and $x \neq 0$; let $\varepsilon \to 0$ in $(*)$ and then $R \to \infty$.
+ \item It is enough to consider the case $d = 3$. As before, let $\varphi \in C_b^2(\R^3)$ be such that
+ \[
+ \varphi(x) = \frac{1}{|x|}
+ \]
+ for $\varepsilon \leq |x| \leq R$. Then $\Delta \varphi(x) = 0$ for $\varepsilon \leq |x| \leq R$. As before, we get
+ \[
+ \P_x(S_\varepsilon < S_R) = \frac{|x|^{-1} - R^{-1}}{\varepsilon^{-1} - R^{-1}}.
+ \]
+ As $R \to \infty$, we have
+ \[
+ \P_x(S_\varepsilon < \infty) = \frac{\varepsilon}{|x|}.
+ \]
+ Now let
+ \[
+ A_n = \{|B_t| \geq n\text{ for all }t \geq S_{n^3}\}.
+ \]
+ Then by the strong Markov property,
+ \[
+ \P_0(A_n^c) = \E_0 \left(\P_{B_{S_{n^3}}}(S_n < \infty)\right) = \frac{n}{n^3} = \frac{1}{n^2}.
+ \]
+ So by Borel--Cantelli, only finitely many of the $A_n^c$ occur almost surely. So eventually all the $A_n$ hold, and this guarantees that $|B_t| \to \infty$. \qedhere
+ \end{itemize}
+\end{proof}
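+
+The exit formula $(*)$ for $d = 2$ can be checked by simulating the Brownian motion with small Euler steps (a sketch; the finite step size introduces a small bias at the boundaries). Starting from $|x| = \frac{1}{2}$ with $\varepsilon = \frac{1}{4}$ and $R = 1$, the hitting probability should be $\frac{\log 2}{\log 4} = \frac{1}{2}$:

```python
import numpy as np

rng = np.random.default_rng(5)
eps_r, R = 0.25, 1.0
n, dt = 4000, 1e-4
pos = np.tile(np.array([0.5, 0.0]), (n, 1))   # start at |x| = 1/2
hit_inner = np.zeros(n, dtype=bool)
done = np.zeros(n, dtype=bool)
for _ in range(300000):                        # generous cap; exits long before
    if done.all():
        break
    act = np.where(~done)[0]
    pos[act] += np.sqrt(dt) * rng.standard_normal((len(act), 2))
    r = np.linalg.norm(pos[act], axis=1)
    hit_inner[act[r <= eps_r]] = True
    done[act[(r <= eps_r) | (r >= R)]] = True

p_emp = hit_inner.mean()
p_theory = (np.log(R) - np.log(0.5)) / (np.log(R) - np.log(eps_r))  # = 1/2
```

+Up to discretization and Monte Carlo error, the empirical frequency matches the logarithmic formula.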
+
+\subsection{Donsker's invariance principle}
+To end our discussion on Brownian motion, we provide an alternative construction of Brownian motion, given by Donsker's invariance principle. Suppose we run any simple random walk on $\Z^d$. We can think of this as a very coarse approximation of a Brownian motion. As we zoom out, the step sizes in the simple random walk look smaller and smaller, and if we zoom out sufficiently much, then we might expect that the result looks like Brownian motion, and indeed it converges to Brownian motion in the limit.
+
+
+\begin{thm}[Donsker's invariance principle]
+ Let $(X_n)_{n \geq 0}$ be iid random variables with mean $0$ and variance $1$, and set $S_n = X_1 + \cdots + X_n$. Define
+ \[
+ S_t = (1 - \{t\}) S_{\lfloor t \rfloor} + \{t\} S_{\lfloor t \rfloor + 1},
+ \]
+ where $\{t\} = t - \lfloor t \rfloor$.
+
+ Define
+ \[
+ (S_t^{[N]})_{t \in [0, 1]} = (N^{-1/2} S_{t N})_{t \in [0, 1]}.
+ \]
+ Then $(S_t^{[N]})_{t \in [0, 1]}$ converges in distribution, as $N \to \infty$, to the law of a standard Brownian motion on $[0, 1]$.
+\end{thm}
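+
+For coin-flip steps, this is easy to see numerically: the maximum of the rescaled walk should approximately follow the law of $\sup_{t \leq 1} B_t$, which by the reflection principle satisfies $\P(\sup_{t \leq 1} B_t \geq 1) = \P(|B_1| \geq 1) \approx 0.3173$. A sketch (our own; finite $N$ leaves a visible lattice error of order $N^{-1/2}$):

```python
import numpy as np

rng = np.random.default_rng(6)
N, n_paths = 500, 20000
S = np.zeros(n_paths)   # current value of the walk
M = np.zeros(n_paths)   # running maximum
for _ in range(N):
    S += rng.choice([-1.0, 1.0], size=n_paths)   # iid steps, mean 0, variance 1
    np.maximum(M, S, out=M)
M /= np.sqrt(N)          # running maximum of the rescaled walk on [0, 1]
S /= np.sqrt(N)          # endpoint S_1^{[N]}

p_emp = np.mean(M >= 1.0)
p_bm = 0.3173            # P(sup_{t <= 1} B_t >= 1) = 2(1 - Phi(1))
```

+The endpoint also has mean $\approx 0$ and variance exactly $1$, matching $B_1 \sim N(0, 1)$ in the limit.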
+
+The reader might wonder why we didn't construct our Brownian motion this way instead of using Wiener's theorem. The answer is that our proof of Donsker's invariance principle relies on the existence of a Brownian motion! The relevance is the following theorem:
+
+\begin{thm}[Skorokhod embedding theorem]\index{Skorokhod embedding theorem}
+ Let $\mu$ be a probability measure on $\R$ with mean $0$ and variance $\sigma^2$. Then there exists a probability space $(\Omega, \mathcal{F}, \P)$ with a filtration $(\mathcal{F}_t)_{t \geq 0}$ on which there is a standard Brownian motion $(B_t)_{t \geq 0}$ and a sequence of stopping times $(T_n)_{n \geq 0}$ such that, setting $S_n = B_{T_n}$,
+ \begin{enumerate}
+ \item $T_n$ is a random walk with steps of mean $\sigma^2$
+ \item $S_n$ is a random walk with step distribution $\mu$.
+ \end{enumerate}
+\end{thm}
+So in some sense, Brownian motion contains all random walks with finite variance.
+
+The only stopping times we know about are the hitting times of some value. However, if we take $T_n$ to be the hitting time of some fixed value, then $B_{T_n}$ would be a pretty poor attempt at constructing a random walk. Thus, we may try to come up with the following strategy --- construct a probability space with a Brownian motion $(B_t)_{t \geq 0}$, and an independent iid sequence $(X_n)_{n \in \N}$ of random variables with distribution $\mu$. We then take $T_n$ to be the first hitting time of $X_1 + \cdots + X_n$. Then setting $S_n = B_{T_n}$, property (ii) is by definition satisfied. However, (i) will not be satisfied in general. In fact, for any $y \not= 0$, the expected first hitting time of $y$ is infinite! The problem is that if, say, $y > 0$, and we accidentally strayed off to the negative side, then it could take a long time to return.
+
+The solution is to ``split'' $\mu$ into two parts, and construct two random variables $(X, Y) \in [0, \infty)^2$, such that if $T$ is the first hitting time of $\{-X, Y\}$, then $B_T$ has law $\mu$.
+
+Since we are interested in the stopping times $T_{-x}$ and $T_y$, the following computation will come in handy:
+\begin{lemma}
+ Let $x, y > 0$. Then
+ \[
+ \P_0(T_{-x} < T_y) = \frac{y}{x + y},\quad \E_0 T_{-x} \wedge T_y = xy.
+ \]
+\end{lemma}
+
+\begin{proof}[Proof sketch]
+ Use optional stopping with $(B_t^2 - t)_{t \geq 0}$.
+\end{proof}
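+
+The same two identities hold exactly for simple random walk with integer $x, y > 0$ (the discrete analogue, obtained by optional stopping with $S_n$ and $S_n^2 - n$), and can be confirmed by solving the associated linear systems (our own sketch):

```python
import numpy as np

def gamblers_ruin(x, y):
    # For simple random walk from 0, solve the linear systems for
    # h(i) = P_i(hit -x before y) and g(i) = E_i[T_{-x} ^ T_y] on {-x, ..., y}.
    pts = np.arange(-x, y + 1)
    m = len(pts)
    A = np.zeros((m, m))
    bh = np.zeros(m)
    bg = np.zeros(m)
    for j, i in enumerate(pts):
        A[j, j] = 1.0
        if i == -x or i == y:
            bh[j] = 1.0 if i == -x else 0.0   # boundary values of h
        else:
            A[j, j - 1] = A[j, j + 1] = -0.5  # h, g are discretely harmonic
            bg[j] = 1.0                       # each step costs one unit of time
    h = np.linalg.solve(A, bh)
    g = np.linalg.solve(A, bg)
    return h[x], g[x]                         # evaluate at the starting point 0

p, et = gamblers_ruin(3, 5)   # theory: p = 5/8, E[T] = 15
```

+The solver recovers $\P_0(T_{-x} < T_y) = \frac{y}{x + y}$ and $\E_0 T_{-x} \wedge T_y = xy$ exactly.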
+
+\begin{proof}[Proof of Skorokhod embedding theorem]
+ Define Borel measures $\mu^{\pm}$ on $[0, \infty)$ by
+ \[
+ \mu^{\pm}(A) = \mu(\pm A).
+ \]
+ Note that these are not probability measures, but we can define a probability measure $\nu$ on $[0, \infty)^2$ given by
+ \[
+ \d \nu(x, y) = C(x + y)\; \d \mu^-(x) \;\d \mu^+(y)
+ \]
+ for some normalizing constant $C$ (this is possible since $\mu$ is integrable). This $(x + y)$ is the same $(x + y)$ appearing in the denominator of $\P_0(T_{-x} < T_y) = \frac{y}{x + y}$. Then we claim that any $(X, Y)$ with this distribution will do the job.
+
+ We first figure out the value of $C$. Note that since $\mu$ has mean $0$, we have
+ \[
+ C \int_0^\infty x \;\d \mu^-(x) = C \int_0^\infty y \;\d \mu^+(y).
+ \]
+ Thus, we have
+ \begin{align*}
+ 1 &= \int C(x + y) \;\d \mu^-(x) \;\d \mu^+(y) \\
+ &= C \int x\;\d \mu^-(x) \int \;\d \mu^+(y) + C \int y\;\d \mu^+(y) \int\;\d \mu^-(x)\\
+ &= C \int x \;\d \mu^-(x) \left( \int \;\d \mu^+(y) + \int\;\d \mu^-(x)\right) \\
+ &= C \int x\;\d \mu^-(x) = C \int y\;\d \mu^+(y).
+ \end{align*}
+
+ We now set up our notation. Take a probability space $(\Omega, \mathcal{F}, \P)$ with a standard Brownian motion $(B_t)_{t \geq 0}$ and a sequence $((X_n, Y_n))_{n \geq 0}$ iid with distribution $\nu$ and independent of $(B_t)_{t \geq 0}$.
+
+ Define
+ \[
+ \mathcal{F}_0 = \sigma((X_n, Y_n), n = 1, 2, \ldots),\quad
+ \mathcal{F}_t = \sigma(\mathcal{F}_0, \mathcal{F}_t^B).
+ \]
+ Define a sequence of stopping times
+ \[
+ T_0 = 0, T_{n + 1} = \inf\{t \geq T_n: B_t - B_{T_n} \in \{-X_{n + 1}, Y_{n + 1}\}\}.
+ \]
+ By the strong Markov property, it suffices to prove that things work in the case $n = 1$. So for convenience, let $T = T_1, X = X_1, Y = Y_1$.
+
+ To simplify notation, let $\tau: C([0, \infty), \R) \times [0, \infty)^2 \to [0, \infty)$ be given by
+ \[
+ \tau(\omega, x, y) = \inf \{t \geq 0: \omega(t) \in \{-x, y\}\}.
+ \]
+ Then we have
+ \[
+ T = \tau((B_t)_{t \geq 0}, X, Y).
+ \]
+
+ To check that this works, i.e.\ (ii) holds, if $A \subseteq [0, \infty)$, then
+ \[
+ \P(B_T \in A) = \int_{[0,\infty)^2} \int_{C([0, \infty), \R)} \mathbf{1}_{\omega(\tau(\omega, x, y)) \in A} \;\d \mu_B(\omega) \;\d \nu(x, y).
+ \]
+ Using the first part of the previous computation, this is given by
+ \[
+ \int_{[0, \infty)^2} \frac{x}{x + y} \mathbf{1}_{y \in A}\; C(x + y) \;\d \mu^-(x) \;\d \mu^+(y) = \mu^+(A).
+ \]
+ We can prove a similar result if $A \subseteq (-\infty, 0)$. So $B_T$ has the right law.
+
+ To see that $T$ is also well-behaved, we compute
+ \begin{align*}
+ \E T &= \int_{[0, \infty)^2} \int_{C([0, \infty), \R)} \tau(\omega, x, y)\;\d \mu_B(\omega)\;\d \nu(x, y)\\
+ &= \int_{[0, \infty)^2} xy \;\d \nu(x, y)\\
+ &= C \int_{[0, \infty)^2} (x^2 y + x y^2)\;\d \mu^-(x) \;\d \mu^+(y)\\
+ &= \int_{[0, \infty)} x^2 \;\d \mu^-(x) + \int_{[0, \infty)} y^2 \;\d \mu^+(y)\\
+ &= \sigma^2.\qedhere
+ \end{align*}
+\end{proof}
+
+The idea of the proof of Donsker's invariance principle is that in the limit of large $N$, the $T_n$ are roughly regularly spaced, by the law of large numbers, so this allows us to reverse the above and use the random walk to approximate the Brownian motion.
+
+\begin{proof}[Proof of Donsker's invariance principle]
+ Let $(B_t)_{t \geq 0}$ be a standard Brownian motion. Then by Brownian scaling,
+ \[
+ (B_t^{(N)})_{t \geq 0} = (N^{1/2} B_{t/N})_{t \geq 0}
+ \]
+ is a standard Brownian motion.
+
+ For every $N > 0$, we let $(T_n^{(N)})_{n \geq 0}$ be a sequence of stopping times as in the embedding theorem for $B^{(N)}$. We then set
+ \[
+ S_n^{(N)} = B_{T_n^{(N)}}^{(N)}.
+ \]
+ For $t$ not an integer, define $S_t^{(N)}$ by linear interpolation. Observe that
+ \[
+ ((T_n^{(N)})_{n \geq 0}, (S_t^{(N)})_{t \geq 0}) \sim ((T_n^{(1)})_{n \geq 0}, (S_t^{(1)})_{t \geq 0}).
+ \]
+ We define
+ \[
+ \tilde{S}_t^{(N)} = N^{-1/2} S_{tN}^{(N)},\quad \tilde{T}_n^{(N)} = \frac{T_n^{(N)}}{N}.
+ \]
+ Note that if $t = \frac{n}{N}$, then
+ \[
+ \tilde{S}^{(N)}_{n/N} = N^{-1/2} S_n^{(N)} = N^{-1/2} B_{T_n^{(N)}}^{(N)} = B_{T_n^{(N)}/N} = B_{\tilde{T}^{(N)}_n}.\tag{$*$}
+ \]
+ Note that $(\tilde{S}_t^{(N)})_{t \geq 0} \sim (N^{-1/2} S_{tN}^{(1)})_{t \geq 0}$, the rescaled random walk appearing in the statement of the theorem. We will prove that we have convergence in probability, i.e.\ for any $\delta > 0$,
+ \[
+ \P\left(\sup_{0\leq t < 1} |\tilde{S}_t^{(N)} - B_t| > \delta \right) = \P(\|\tilde{S}^{(N)} - B\|_\infty > \delta) \to 0 \text{ as }N \to \infty.
+ \]
+ We already know that $\tilde{S}$ and $B$ agree at some times, but the time on $\tilde{S}$ is fixed while that on $B$ is random. So what we want to apply is the law of large numbers. By the strong law of large numbers,
+ \[
+ \frac{1}{n} |T_n^{(1)} - n| \to 0\text{ almost surely as } n \to \infty.
+ \]
+ This implies that
+ \[
+ \sup_{1 \leq n \leq N} \frac{1}{N} |T_n^{(1)} - n| \to 0\text{ as }N \to \infty.
+ \]
+ Since $(T_n^{(1)})_{n \geq 0} \sim (T_n^{(N)})_{n \geq 0}$, it follows that for any $\delta > 0$,
+ \[
+ \P\left(\sup_{1 \leq n \leq N} \left|\frac{T_n^{(N)}}{N} - \frac{n}{N} \right| \geq \delta\right) \to 0\text{ as }N \to \infty.
+ \]
+ Using $(*)$ and continuity, for any $t \in [\frac{n}{N}, \frac{n + 1}{N}]$, there exists $u \in [\tilde{T}^{(N)}_n, \tilde{T}^{(N)}_{n + 1}]$ such that
+ \[
+ \tilde{S}^{(N)}_t = B_u.
+ \]
+ Note that if $|\tilde{T}_n^{(N)} - \frac{n}{N}| \leq \delta$ for all $n \leq N$, then $|t - u| \leq \delta + \frac{1}{N}$.
+
+ Hence we have
+ \begin{align*}
+ \{\|\tilde{S}^{(N)} - B\|_{\infty} > \varepsilon\} &\subseteq \left\{ \left|\tilde{T}_n^{(N)} - \frac{n}{N}\right| > \delta\text{ for some }n \leq N\right\} \\
+ &\phantomeq \cup \left\{|B_t - B_u| > \varepsilon \text{ for some }t \in [0, 1],\ |t - u| < \delta + \frac{1}{N}\right\}.
+ \end{align*}
+ The first probability $\to 0$ as $N \to \infty$. For the second, we observe that $(B_t)_{t \in [0, 1]}$ has uniformly continuous paths, so for $\varepsilon > 0$, we can find $\delta > 0$ such that the second probability is less than $\varepsilon$ whenever $N > \frac{1}{\delta}$ (exercise!).
+
+ So $\tilde{S}^{(N)} \to B$ uniformly in probability, and hence also in distribution with respect to the uniform norm.
+\end{proof}
+
+\section{Large deviations}
+So far, we have been interested in the average or ``typical'' behaviour of our processes. Now, we are interested in ``extreme cases'', i.e.\ events with small probability. In general, our objective is to show that these probabilities tend to zero very quickly.
+
+Let $(X_n)_{n \geq 1}$ be a sequence of iid integrable random variables in $\R$ with mean $\E X_1 = \bar{x}$ and finite variance $\sigma^2$. We let
+\[
+ S_n = X_1 + \cdots + X_n.
+\]
+By the central limit theorem, we have
+\[
+ \P(S_n \geq n \bar{x} + \sqrt{n} \sigma a) \to \P(Z \geq a)\text{ as }n \to \infty,
+\]
+where $Z \sim N(0, 1)$. This implies
+\[
+ \P(S_n \geq an) \to 0
+\]
+for any $a > \bar{x}$. The question is then: how fast does this tend to zero?
+
+There is a very useful lemma in the theory of sequences that tells us this decays at a well-defined exponential rate. Note that
+\[
+ \P(S_{m + n} \geq a (m + n))\geq \P(S_m \geq am) \P(S_n \geq an).
+\]
+So the sequence $\P(S_n \geq an)$ is super-multiplicative. Thus, the sequence
+\[
+ b_n = -\log \P (S_n \geq an)
+\]
+is sub-additive.
+
+\begin{lemma}[Fekete]\index{Fekete's lemma}
+ If $b_n$ is a non-negative sub-additive sequence, then $\lim_n \frac{b_n}{n}$ exists.
+\end{lemma}
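+
+In fact, the limit equals $\inf_n \frac{b_n}{n}$: fixing $m$ and writing $n = qm + r$ with $1 \leq r \leq m$, sub-additivity gives $b_n \leq q b_m + b_r$, and dividing by $n$ and letting $n \to \infty$ shows $\limsup_n \frac{b_n}{n} \leq \frac{b_m}{m}$ for every $m$. As a quick sanity check, $b_n = \log (n + 1)$ is non-negative and sub-additive, since $(m + 1)(n + 1) \geq m + n + 1$, and indeed $\frac{b_n}{n} \to 0 = \inf_n \frac{b_n}{n}$.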
+
+This implies that the rate of decrease is exponential. Can we do better, and identify the rate exactly?
+
+For $\lambda \geq 0$, consider the \term{moment generating function}\index{$M(\lambda)$}
+\[
+ M(\lambda) = \E e^{\lambda X_1}.
+\]
+We set $\psi(\lambda) = \log M(\lambda)$, and the \term{Legendre transform} of $\psi$ is
+\[
+ \psi^*(a) = \sup_{\lambda \geq 0} (a\lambda - \psi (\lambda)).
+\]
+Note that these things may be infinite.
+
+\begin{thm}[Cram\'er's theorem]\index{Cram\'er's theorem}
+ For $a > \bar{x}$, we have
+ \[
+ \lim_{n \to \infty} \frac{1}{n} \log \P(S_n \geq an) = - \psi^*(a).
+ \]
+\end{thm}
+Note that we always have
+\[
+ \psi^*(a) = \sup_{\lambda \geq 0} (a\lambda - \psi(\lambda)) \geq -\psi(0) = 0.
+\]
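+
+\begin{eg}
+ To see what the theorem predicts in a familiar case, take $X_1 \sim N(0, 1)$, so that $\bar{x} = 0$ and $M(\lambda) = e^{\lambda^2/2}$, i.e.\ $\psi(\lambda) = \frac{\lambda^2}{2}$. For $a > 0$, the expression $a\lambda - \frac{\lambda^2}{2}$ is maximized at $\lambda = a$, so $\psi^*(a) = \frac{a^2}{2}$. Cram\'er's theorem then says
+ \[
+ \P(S_n \geq an) = e^{-n a^2/2 + o(n)},
+ \]
+ which matches the exact Gaussian tail, since here $S_n \sim N(0, n)$.
+\end{eg}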
+
+\begin{proof}
+ We first prove an upper bound. For any $\lambda$, Markov tells us
+ \begin{multline*}
+ \P(S_n \geq an) = \P(e^{\lambda S_n} \geq e ^{\lambda an}) \leq e^{-\lambda an} \E e^{\lambda S_n}\\
+ = e^{-\lambda an} \prod_{i = 1}^n \E e^{\lambda X_i} = e^{-\lambda a n} M(\lambda)^n = e^{-n (\lambda a - \psi(\lambda))}.
+ \end{multline*}
+ Since $\lambda$ was arbitrary, we can take the supremum over $\lambda$ of $\lambda a - \psi(\lambda)$, and so by definition of $\psi^*(a)$, we have $\P(S_n \geq an) \leq e^{-n \psi^*(a)}$. So it follows that
+ \[
+ \limsup \frac{1}{n} \log \P(S_n \geq an) \leq - \psi^*(a).
+ \]
+ The lower bound is a bit more involved. One checks that by translating $X_i$ by $a$, we may assume $a = 0$, and in particular, $\bar{x} < 0$.
+% For convenience, we set
+% \[
+% \tilde{X}_i = X_i - a,\quad \tilde{S}_n = \sum_{i = 1}^n \tilde{X}_i.
+% \]
+% Then we have
+% \[
+% \tilde{M}(\lambda) = e^{-a \lambda} M(\lambda),\quad \tilde{\psi}(\lambda) = \psi(\lambda) - a \lambda.
+% \]
+% Also, we have
+% \[
+% \P (S_n \geq an) = \P(\tilde{S}_n \geq 0).
+% \]
+% Then it is enough to prove that
+% \[
+% \liminf_n \frac{1}{n} \log \P(\tilde{S}_n \geq 0) \geq - \psi^*(a) = \inf_{\lambda \geq 0} \tilde{\psi}(\lambda).
+% \]
+
+ So we want to prove that
+ \[
+ \liminf_n \frac{1}{n} \log \P(S_n \geq 0) \geq \inf_{\lambda \geq 0} \psi(\lambda).
+ \]
+ We consider cases:
+ \begin{itemize}
+ \item If $\P(X \leq 0) = 1$, then
+ \[
+ \P(S_n \geq 0) = \P(X_i = 0\text{ for } i = 1, \ldots, n) = \P(X_1 = 0)^n.
+ \]
+ So in fact
+ \[
+ \liminf_n \frac{1}{n} \log \P(S_n \geq 0) = \log \P(X_1 = 0).
+ \]
+ But by monotone convergence, we have
+ \[
+ \P(X_1 = 0) = \lim_{\lambda \to \infty} \E e^{\lambda X_1}.
+ \]
+ So we are done.
+ \item Consider the case $\P(X_1 > 0) > 0$, but $\P(X_1 \in [-K, K]) = 1$ for some $K$. The idea is to modify $X_1$ so that it has mean $0$. For $\mu = \mu_{X_1}$, we define a new distribution by the density
+ \[
+ \frac{\d \mu^\theta}{\d \mu}(x) = \frac{e^{\theta x}}{M(\theta)}.
+ \]
+ We define
+ \[
+ g(\theta) = \int x\; \d \mu^\theta(x).
+ \]
+ We claim that $g$ is continuous for $\theta \geq 0$. Indeed, by definition,
+ \[
+ g(\theta) = \frac{\int x e^{\theta x}\;\d \mu(x)}{ \int e^{\theta x}\;\d \mu(x)},
+ \]
+ and both the numerator and denominator are continuous in $\theta$ by dominated convergence.
+
+ Now observe that $g(0) = \bar{x}$, and
+ \[
+ \limsup_{\theta \to \infty} g(\theta) > 0.
+ \]
+ So by the intermediate value theorem, we can find some $\theta_0$ such that $g(\theta_0) = 0$.
+
+ Define $\mu^{\theta_0}_n$ to be the law of the sum of $n$ iid random variables with law $\mu^{\theta_0}$. We have
+ \[
+ \P(S_n \geq 0) \geq \P(S_n \in [0, \varepsilon n]) \geq \E e^{\theta_0(S_n - \varepsilon n)} \mathbf{1}_{S_n \in [0, \varepsilon n]},
+ \]
+ using the fact that on the event $S_n \in [0, \varepsilon n]$, we have $e^{\theta_0 (S_n - \varepsilon n)} \leq 1$. So we have
+ \[
+ \P(S_n \geq 0) \geq M(\theta_0)^n e^{-\theta_0 \varepsilon n} \mu_n^{\theta_0} (\{S_n \in [0, \varepsilon n]\}).
+ \]
+ By the central limit theorem, for each fixed $\varepsilon$, we know
+ \[
+ \mu_n^{\theta_0} (\{S_n \in [0, \varepsilon n]\}) \to \frac{1}{2}\text{ as }n \to \infty.
+ \]
+ So we can write
+ \[
+ \liminf_n \frac{1}{n} \log \P(S_n \geq 0) \geq \psi(\theta_0) - \theta_0 \varepsilon.
+ \]
+ Then take the limit $\varepsilon \to 0$ to conclude the result.
+ \item Finally, we drop the finiteness assumption, and only assume $\P(X_1 > 0) > 0$. We define $\nu$ to be the law of $X_1$ conditioned on the event $\{|X_1| \leq K\}$. Let $\nu_n$ be the law of the sum of $n$ iid random variables with law $\nu$. Define
+ \begin{align*}
+ \psi_K(\lambda) &= \log \int_{-K}^K e^{\lambda x} \;\d \mu(x)\\
+ \psi^\nu(\lambda) &= \log \int_{-\infty}^\infty e^{\lambda x} \;\d \nu(x) = \psi_K(\lambda) - \log \mu (\{|X| \leq K\}).
+ \end{align*}
+ Note that for $K$ large enough, $\int x \;\d \nu(x) < 0$. So we can use the previous case. By definition of $\nu$, we have
+ \[
+ \mu_n([0, \infty)) \geq \nu_n([0, \infty)) \mu(|X| \leq K)^n.
+ \]
+ So we have
+ \begin{align*}
+ \liminf_n \frac{1}{n} \log \mu_n([0, \infty)) &\geq \log \mu(|X| \leq K) + \liminf_n \frac{1}{n} \log \nu_n([0, \infty))\\
+ &\geq \log \mu(|X| \leq K) + \inf_{\lambda \geq 0} \psi^\nu (\lambda)\\
+ &= \inf_{\lambda \geq 0} \psi_K(\lambda).
+ \end{align*}
+ Since $\psi_K$ increases as $K$ increases to infinity, $\inf_{\lambda \geq 0} \psi_K(\lambda)$ increases to some limit $\mathcal{J}$, and we have
+ \[
+ \liminf_n \frac{1}{n} \log \mu_n([0, \infty)) \geq \mathcal{J}.\tag{$\dagger$}
+ \]
+ Since each $\psi_K$ is continuous, the sets $\{\lambda: \psi_K (\lambda) \leq \mathcal{J}\}$ are non-empty, compact and decreasing in $K$. By Cantor's theorem, we can find
+ \[
+ \lambda_0 \in \bigcap_K \{\lambda: \psi_K(\lambda) \leq \mathcal{J}\}.
+ \]
+ So the RHS of $(\dagger)$ satisfies
+ \[
+ \mathcal{J} \geq \sup_K \psi_K(\lambda _0) = \psi(\lambda_0) \geq \inf_\lambda \psi(\lambda).\qedhere
+ \]%\qedhere
+ \end{itemize}
+\end{proof}
+\printindex
+\end{document}
diff --git a/books/cam/III_M/algebraic_topology_iii.tex b/books/cam/III_M/algebraic_topology_iii.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0a0cb958df3b48929718bb5e9e63349bb11c833e
--- /dev/null
+++ b/books/cam/III_M/algebraic_topology_iii.tex
@@ -0,0 +1,4610 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2016}
+\def\nlecturer {O.\ Randal-Williams}
+\def\ncourse {Algebraic Topology}
+
+\input{header}
+
+\theoremstyle{definition}
+\newtheorem{cclaim}{Claim}
+\setcounter{cclaim}{-1}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+Algebraic Topology assigns algebraic invariants to topological spaces; it permeates modern pure mathematics. This course will focus on (co)homology, with an emphasis on applications to the topology of manifolds. We will cover singular homology and cohomology, vector bundles and the Thom Isomorphism theorem, and the cohomology of manifolds up to Poincar\'e duality. Time permitting, there will also be some discussion of characteristic classes and cobordism, and conceivably some homotopy theory.
+
+\subsubsection*{Pre-requisites}
+
+Basic topology: topological spaces, compactness and connectedness, at the level of Sutherland's book. The course will not assume any knowledge of Algebraic Topology, but will go quite fast in order to reach more interesting material, so some previous exposure to simplicial homology or the fundamental group would be helpful. The Part III Differential Geometry course will also contain useful, relevant material.
+
+Hatcher's book is especially recommended for the course, but there are many other suitable texts.
+}
+\tableofcontents
+
+\section{Homotopy}
+In this course, the word ``map'' will always mean ``continuous function''.
+
+In topology, we study spaces up to ``continuous deformation''. Famously, a coffee mug can be continuously deformed into a doughnut, and thus they are considered to be topologically the same. Now we also talk about maps between topological spaces. So a natural question is if it makes sense to talk about the continuous deformations of maps. It turns out it does, and the definition is sort-of the obvious one:
+
+\begin{defi}[Homotopy]\index{homotopy}
+ Let $X, Y$ be topological spaces. A \emph{homotopy} between $f_0, f_1: X \to Y$ is a map $F: [0, 1] \times X \to Y$ such that $F(0, x) = f_0(x)$ and $F(1, x) = f_1(x)$. If such an $F$ exists, we say $f_0$ is \emph{homotopic} to $f_1$, and write $f_0 \simeq f_1$.
+
+ This $\simeq$ defines an equivalence relation on the set of maps from $X$ to $Y$.
+\end{defi}
+
+Just as concepts in topology are invariant under homeomorphisms, it turns out the theories we develop in algebraic topology are invariant under homotopies, i.e.\ homotopic maps ``do the same thing''. This will be made more precise later.
+
+Under this premise, if we view homotopic functions as ``the same'', then we have to enlarge our notion of isomorphism to take this into account. To do so, we just write down the usual definition of isomorphism, but with equality replaced with homotopy.
+
+\begin{defi}[Homotopy equivalence]\index{homotopy equivalence}
+ A map $f: X \to Y$ is a \emph{homotopy equivalence} if there is some $g: Y \to X$ such that $f \circ g \simeq \id_Y$ and $g \circ f \simeq \id_X$. We call $g$ a \emph{homotopy inverse}\index{homotopy inverse} to $f$.
+\end{defi}
+As always, homotopy equivalence is an equivalence relation. This relies on the following straightforward property of homotopy:
+
+\begin{prop}
+ If $f_0 \simeq f_1: X \to Y$ and $g_0 \simeq g_1: Y \to Z$, then $g_0 \circ f_0 \simeq g_1 \circ f_1: X \to Z$.
+ \[
+ \begin{tikzcd}
+ X \ar[r, bend left, "f_0"] \ar[r, bend right, "f_1"'] & Y \ar[r, bend left, "g_0"] \ar[r, bend right, "g_1"'] & Z
+ \end{tikzcd}
+ \]
+\end{prop}
+
+\begin{eg}[Stupid example]
+ If $f: X \to Y$ is a homeomorphism, then it is a homotopy equivalence --- we take the actual inverse for the homotopy inverse, since equal functions are homotopic.
+\end{eg}
+
+\begin{eg}[Interesting example]
+ Let $i: \{0\} \to \R^n$ be the inclusion map. To show this is a homotopy equivalence, we have to find a homotopy inverse. Fortunately, there is only one map $\R^n \to \{0\}$, namely the constant function $0$. We call this $r: \R^n \to \{0\}$. The composition $r \circ i: \{0\} \to \{0\}$ is exactly the identity. So this is good.
+
+ In the other direction, the map $i \circ r: \R^n \to \R^n$ sends everything to $0$. We need to produce a homotopy to the identity. We let $F: [0, 1] \times \R^n \to \R^n$ be
+ \[
+ F(t, \mathbf{v}) = t\mathbf{v}.
+ \]
+ We have $F(0, \mathbf{v}) = 0$ and $F(1, \mathbf{v}) = \mathbf{v}$. So this is indeed a homotopy from $i \circ r$ to $\id_{\R^n}$.
+\end{eg}
+
+So from the point of view of homotopy, the one-point space $\{0\}$ is the same as $\R^n$! So dimension, or even cardinality, is now a meaningless concept.
+
+\begin{eg}[Also interesting example]
+ Let $S^n \subseteq \R^{n + 1}$ be the unit sphere, and $i: S^n \hookrightarrow \R^{n + 1} \setminus \{0\}$. We show that this is a homotopy equivalence. We define $r: \R^{n + 1} \setminus \{0\} \to S^n$ by
+ \[
+ r(\mathbf{v}) = \frac{\mathbf{v}}{\|\mathbf{v}\|}.
+ \]
+ Again, we have $r \circ i = \id_{S^n}$. In the other direction, we need to construct a path from each $\mathbf{v}$ to $\frac{\mathbf{v}}{\|\mathbf{v}\|}$ in a continuous way. We could do so by
+ \begin{align*}
+ H: [0, 1] \times (\R^{n + 1} \setminus \{0\}) &\to \R^{n + 1} \setminus \{0\}\\
+ (t, \mathbf{v}) &\mapsto (1 - t) \mathbf{v} + t \frac{\mathbf{v}}{\|\mathbf{v}\|}.
+ \end{align*}
+ We can easily check that this is a homotopy from $\id_{\R^{n + 1}\setminus \{0\}}$ to $i \circ r$.
+\end{eg}
+
+Again, homotopy equivalence allowed us to squash down one dimension of $\R^{n + 1} \setminus \{0\}$ to get $S^n$. However, there is one thing we haven't gotten rid of --- the hole. It turns out what \emph{is} preserved by homotopy equivalence is exactly the holes.
+
+Now we might ask ourselves --- can we distinguish holes of ``different dimension''? For $n \not= m$, is $S^n$ homotopy equivalent to $S^m$? If we try hard to construct homotopies, we will always fail, and then we would start to think that maybe they aren't homotopy equivalent. However, at this point, we do not have any tools we can use to prove this.
+
+The solution is algebraic topology. The idea is that we assign algebraic objects to each topological space in a way that is homotopy-invariant. We then come up with tools to compute these algebraic invariants. Then if two spaces are assigned different algebraic objects, then we know they cannot be homotopy equivalent.
+
+What is this good for? At this point, you might not be convinced that it is a good idea to consider spaces up to homotopy equivalence only. However, algebraic topology can also help us show that spaces are not homeomorphic. After all, homeomorphic spaces are also homotopy equivalent. For example, suppose we want to show that if $\R^n \cong \R^m$, then $n = m$. Algebraic topology doesn't help us directly, because both $\R^n$ and $\R^m$ are homotopy equivalent to a point. Instead, we do the slightly sneaky thing of removing a point. If $\R^n \cong \R^m$, then it must also be the case that $\R^n \setminus \{0\} \cong \R^m \setminus \{0\}$. Since these are homotopy equivalent to $S^{n - 1}$ and $S^{m - 1}$, this implies that $S^{n - 1}$ and $S^{m - 1}$ are homotopy equivalent. By algebraic topology, we will show that this can only happen if $m = n$. So indeed we can recover the notion of dimension using algebraic topology!
+
+\section{Singular (co)homology}
+\subsection{Chain complexes}
+This course is called algebraic topology. We've already talked about some topology, so let's do some algebra. We will just write down a bunch of definitions, which we will get to use in the next chapter to define something useful.
+
+\begin{defi}[Chain complex]\index{chain complex}
+ A \emph{chain complex} is a sequence of abelian groups and homomorphisms
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r] & C_3 \ar[r, "d_3"] & C_2 \ar[r, "d_2"] & C_1 \ar[r, "d_1"] & C_0 \ar[r, "d_0"] & 0
+ \end{tikzcd}
+ \]
+ such that
+ \[
+ d_i \circ d_{i + 1} = 0
+ \]
+ for all $i$.
+\end{defi}
+
+Very related to the notion of a chain complex is a \emph{co}chain complex, which is the same thing with the maps the other way.
+
+\begin{defi}[Cochain complex]\index{cochain complex}
+ A \emph{cochain complex} is a sequence of abelian groups and homomorphisms
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & C^0 \ar[r, "d^0"] & C^1 \ar[r, "d^1"] & C^2 \ar[r, "d^2"] & C^3 \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ such that
+ \[
+ d^{i + 1} \circ d^i = 0
+ \]
+ for all $i$.
+\end{defi}
+
+Note that each of the maps $d$ is indexed by the degree of its domain.
+
+\begin{defi}[Differentials]\index{differentials}
+ The maps $d^i$ and $d_i$ are known as \emph{differentials}.
+\end{defi}
+Eventually, we will get lazy and just write all the differentials as $d$.
+
+Given a chain complex, the only thing we know is that the composition of any two maps is zero. In other words, we have $\im d_{i+1} \subseteq \ker d_i$. We can then ask how good this containment is. Is it that $\im d_{i + 1} = \ker d_i$, or perhaps that $\ker d_i$ is huge but $\im d_{i + 1}$ is trivial? The homology or cohomology measures precisely this failure of exactness.
+
+\begin{defi}[Homology]\index{homology}
+ The \emph{homology} of a chain complex $C_{\Cdot}$ is
+ \[
+ H_i(C_{\Cdot}) = \frac{\ker (d_i: C_i \to C_{i - 1})}{\im (d_{i + 1}: C_{i + 1} \to C_i)}.
+ \]
+ An element of $H_i(C_{\Cdot})$ is known as a \term{homology class}.
+\end{defi}
+
+Dually, we have
+\begin{defi}[Cohomology]\index{cohomology}
+ The \emph{cohomology} of a cochain complex $C^{\Cdot}$ is
+ \[
+ H^i(C^{\Cdot}) = \frac{\ker (d^i: C^i \to C^{i + 1})}{\im (d^{i - 1}: C^{i - 1} \to C^i)}.
+ \]
+ An element of $H^i(C^{\Cdot})$ is known as a \term{cohomology class}.
+\end{defi}
+
+More names:
+\begin{defi}[Cycles and cocycles]\index{cycle}\index{cocycle}
+ The elements of $\ker d_i$ are the \emph{cycles}, and the elements of $\ker d^i$ are the \emph{cocycles}.
+\end{defi}
+
+\begin{defi}[Boundaries and coboundaries]\index{boundary}\index{coboundary}
+ The elements of $\im d_i$ are the \emph{boundaries}, and the elements of $\im d^i$ are the \emph{coboundaries}.
+\end{defi}
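+
+\begin{eg}
+ As a minimal example, consider the chain complex with $C_1 = C_0 = \Z$, all other $C_i = 0$, and $d_1: \Z \to \Z$ given by multiplication by $2$. Then
+ \[
+ H_1 = \ker d_1 = 0,\quad H_0 = \frac{\ker d_0}{\im d_1} = \Z/2\Z,
+ \]
+ and all other homology groups vanish. So even though all the $C_i$ are free, the homology can have torsion.
+\end{eg}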
+
+As always, we want to talk about maps between chain complexes. These are known as chain maps.
+\begin{defi}[Chain map]\index{chain map}
+ If $(C_{\Cdot}, d_{\Cdot}^C)$ and $(D_{\Cdot}, d_{\Cdot}^D)$ are chain complexes, then a \emph{chain map} $C_{\Cdot} \to D_{\Cdot}$ is a collection of homomorphisms $f_n: C_n \to D_n$ such that $d_n^D \circ f_n = f_{n - 1} \circ d_n^C$. In other words, the following diagram has to commute for all $n$:
+ \[
+ \begin{tikzcd}
+ C_n \ar[r, "f_n"] \ar[d, "d_n^C"] & D_n \ar[d, "d_n^D"]\\
+ C_{n - 1} \ar[r, "f_{n - 1}"] & D_{n - 1}
+ \end{tikzcd}
+ \]
+\end{defi}
+There is an obvious analogous definition for \term{cochain maps} between cochain complexes.
+
+\begin{lemma}
+ If $f_{\Cdot}: C_{\Cdot} \to D_{\Cdot}$ is a chain map, then $f_*: H_n(C_{\Cdot}) \to H_n(D_{\Cdot})$ given by $[x] \mapsto [f_n(x)]$ is a well-defined homomorphism, where $x \in C_n$ is any element representing the homology class $[x] \in H_n(C_{\Cdot})$.
+\end{lemma}
+
+\begin{proof}
+ Before we check if it is well-defined, we first need to check if it is defined at all! In other words, we need to check if $f_n(x)$ is a cycle. Suppose $x \in C_n$ is a cycle, i.e.\ $d_n^C(x) = 0$. Then we have
+ \[
+ d_n^D(f_n(x)) = f_{n - 1}(d_n^C(x)) = f_{n - 1}(0) = 0.
+ \]
+ So $f_n(x)$ is a cycle, and it does represent a homology class.
+
+ To check this is well-defined, if $[x] = [y] \in H_n(C_{\Cdot})$, then $x - y = d_{n + 1}^C(z)$ for some $z\in C_{n + 1}$. So $f_n(x) - f_n(y) = f_n (d_{n + 1}^C(z)) = d_{n + 1}^D (f_{n + 1}(z))$ is a boundary. So we have $[f_n(x)] = [f_n(y)] \in H_n(D_{\Cdot})$.
+\end{proof}
+
+\subsection{Singular (co)homology}
+The idea now is that given any space $X$, we construct a chain complex $C_\Cdot(X)$, and for any map $f: X \to Y$, we construct a chain map $f_\Cdot: C_\Cdot(X) \to C_\Cdot(Y)$. We then take the homology of these, so that we get some homology groups $H_*(X)$ for each space $X$, and a map $f_*: H_*(X) \to H_*(Y)$ for each map $f: X \to Y$.
+
+There are many ways we can construct a chain complex $C_\Cdot(X)$ from a space $X$. The way we are going to define the chain complex is via \emph{singular homology}. The advantage of this definition is that it is obviously a property just of the space $X$ itself, whereas in other definitions, we need to pick, say, a triangulation of the space, and then work hard to show that the homology does not depend on the choice of the triangulation.
+
+The disadvantage of singular homology is that the chain complexes $C_\Cdot(X)$ will be \emph{huge} (and also a bit scary). Except for the case of a point (or perhaps a few points), it is impossible to actually write down what $C_\Cdot(X)$ looks like and use that to compute the homology groups. Instead, we are going to play around with the definition and prove some useful results that help us compute the homology groups indirectly.
+
+Later, for a particular type of spaces known as \emph{CW complexes}, we will come up with a different homology theory where the $C_{\Cdot}(X)$ are actually nice and small, so that we can use it to compute the homology groups directly. We will prove that this is equivalent to singular homology, so that this also provides an alternative method of computing homology groups.
+
+Everything we do can be dualized to talk about cohomology instead. Most of the time, we will just write down the result for singular homology, but analogous results hold for singular cohomology as well. However, later on, we will see that there are operations we can perform on cohomology groups only, which makes them better, and the interaction between homology and cohomology will become interesting when we talk about manifolds at the end of the course.
+
+We start with definitions.
+\begin{defi}[Standard $n$-simplex]\index{standard $n$-simplex}
+ The standard $n$-simplex is
+ \[
+ \Delta^n = \left\{(t_0, \cdots, t_n) \in \R^{n + 1} : t_i \geq 0, \sum t_i = 1\right\}.
+ \]
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (3, 0);
+ \draw [->] (0, 0) -- (0, 3);
+ \draw [->] (0, 0) -- (-1.5, -1.5);
+
+ \draw [fill=mblue, fill opacity=0.5] (0, 2) -- (2, 0) -- (-.866, -0.866) -- cycle;
+ \end{tikzpicture}
+\end{center}
+We notice that $\Delta^n$ has $n + 1$ ``faces''.
+\begin{defi}[Face of standard simplex]\index{face of standard simplex}
+ The \emph{$i$th face} of $\Delta^n$ is
+ \[
+ \Delta_i^n = \{(t_0, \cdots, t_n) \in \Delta^n : t_i = 0\}.
+ \]
+\end{defi}
+\begin{eg}
+ The faces of $\Delta^1$ are labelled as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (3, 0) node [right] {$t_0$};
+ \draw [->] (0, -1) -- (0, 3) node [above] {$t_1$};
+
+ \draw (2, 0) -- (0, 2) node [pos=0.5, anchor = south west] {$\Delta^1$};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (0, 2) {};
+ \node at (2, 0) [below] {$\Delta_1^1$};
+ \node at (0, 2) [left] {$\Delta_0^1$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+We see that the $i$th face of $\Delta^n$ looks like the standard $(n-1)$-simplex. Of course, it is just homeomorphic to it, with the map given by
+\begin{align*}
+ \delta_i: \Delta^{n - 1} &\to \Delta^n\\
+ (t_0, \cdots, t_{n - 1}) &\mapsto (t_0, \cdots, t_{i - 1}, 0, t_i, \cdots, t_{n - 1})
+\end{align*}
+This is a homeomorphism onto $\Delta_i^n$. We will make use of these maps to define our chain complex.
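+
+\begin{eg}
+ For instance, $\delta_1: \Delta^1 \to \Delta^2$ sends $(t_0, t_1) \mapsto (t_0, 0, t_1)$, parametrizing the face $\Delta^2_1 = \{t_1 = 0\}$ of the standard $2$-simplex.
+\end{eg}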
+
+The idea is that we will look at subspaces of $X$ that ``look like'' these standard $n$-simplices.
+\begin{defi}[Singular $n$-simplex]\index{singular $n$-simplex}
+ Let $X$ be a space. Then a \emph{singular $n$-simplex} in $X$ is a map $\sigma: \Delta^n \to X$.
+\end{defi}
+
+\begin{eg}
+ The inclusion of the standard $n$-simplex into $\R^{n+1}$ is a singular simplex, but so is the constant map to any point in $X$. So the singular $n$-simplices can be stupid.
+\end{eg}
+
+Given a space $X$, we obtain a \emph{set} of singular $n$-simplices. To talk about homology, we need to have an abelian group. We do the least imaginative thing ever:
+
+\begin{defi}[Singular chain complex]
+ We let $C_n(X)$ be the free abelian group on the set of singular $n$-simplices in $X$. More explicitly, we have
+ \[
+ C_n(X) = \left\{\sum n_\sigma \sigma: \sigma: \Delta^n \to X, n_\sigma \in \Z, \text{only finitely many $n_\sigma$ non-zero}\right\}.
+ \]
+ We define $d_n: C_n(X) \to C_{n - 1}(X)$ by
+ \[
+ \sigma \mapsto \sum_{i = 0}^n (-1)^i \sigma \circ \delta_i,
+ \]
+ and then extending linearly.
+\end{defi}
+Note that it is essential that we insert those funny negative signs. Indeed, if the signs weren't there, then all terms in $d(d(\sigma))$ would have positive coefficients, and there is no hope that they all cancel. Intuitively, we can think of these signs as specifying the ``orientation'' of the faces. For example, if we have a line
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (2, 0) node [circ] {};
+ \end{tikzpicture}
+\end{center}
+then after taking $d$, one vertex would have a positive sign, and the other would have a negative sign.
+
+Now we actually check that this indeed gives us a chain complex. The key tool is the following unexciting result:
+\begin{lemma}
+ If $i < j$, then $\delta_j \circ \delta_i = \delta_i \circ \delta_{j - 1} : \Delta^{n - 2} \to \Delta^n$.
+\end{lemma}
+
+\begin{proof}
+ Both send $(t_0, \cdots, t_{n - 2})$ to
+ \[
+ (t_0, \cdots, t_{i - 1}, 0, t_i, \cdots, t_{j - 2}, 0, t_{j - 1}, \cdots, t_{n - 2}).\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ The homomorphism $d_{n - 1} \circ d_n: C_n(X) \to C_{n - 2}(X)$ vanishes.
+\end{cor}
+
+\begin{proof}
+ It suffices to check this on each basis element $\sigma: \Delta^n \to X$. We have
+ \begin{align*}
+ d_{n - 1} \circ d_n (\sigma) &= \sum_{i = 0}^{n - 1}(-1)^i \sum_{j = 0}^n (-1)^j \sigma \circ \delta_j \circ \delta_i.
+ \intertext{We use the previous lemma to split the sum up into $i < j$ and $i \geq j$:}
+ &= \sum_{i < j} (-1)^{i + j} \sigma \circ \delta_j \circ \delta_i + \sum_{i \geq j} (-1)^{i + j} \sigma \circ \delta_j \circ \delta_i\\
+ &= \sum_{i < j} (-1)^{i + j} \sigma \circ \delta_i \circ \delta_{j - 1} + \sum_{i \geq j} (-1)^{i + j} \sigma \circ \delta_j \circ \delta_i\\
+ &= \sum_{i \leq j} (-1)^{i + j + 1} \sigma \circ \delta_i \circ \delta_j + \sum_{i \geq j} (-1)^{i + j} \sigma \circ \delta_j \circ \delta_i\\
+ &= 0.\qedhere
+ \end{align*}
+\end{proof}
+So the groups $C_n(X)$ together with the maps $d_n: C_n (X) \to C_{n - 1}(X)$ indeed form a chain complex. The only thing we can do to a chain complex is to take its homology!
+
+\begin{defi}[Singular homology]\index{singular homology}
+ The \emph{singular homology} of a space $X$ is the homology of the chain complex $C_\Cdot(X)$:
+ \[
+ H_i(X) = H_i(C_{\Cdot}(X), d_{\Cdot}) = \frac{\ker (d_i: C_i(X) \to C_{i - 1}(X))}{ \im(d_{i + 1}: C_{i + 1}(X) \to C_i(X))}.
+ \]
+\end{defi}
+We will also talk about the ``dual'' version of this:
+\begin{defi}[Singular cohomology]\index{singular cohomology}
+ We define the dual cochain complex by
+ \[
+ C^n(X) = \Hom(C_n(X), \Z).
+ \]
+ We let
+ \[
+ d^n: C^n(X) \to C^{n + 1}(X)
+ \]
+ be the adjoint to $d_{n + 1}$, i.e.
+ \[
+ (\varphi: C_n(X) \to \Z) \mapsto (\varphi \circ d_{n + 1}: C_{n + 1}(X) \to \Z).
+ \]
+ We observe that
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & C^0(X) \ar[r, "d^0"] & C^1(X) \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ is indeed a cochain complex, since
+ \[
+ d^{n + 1}(d^n(\varphi)) = \varphi \circ d_{n + 1} \circ d_{n + 2} = \varphi \circ 0 = 0.
+ \]
+ The \emph{singular cohomology} of $X$ is the cohomology of this cochain complex, i.e.
+ \[
+ H^i(X) = H^i(C^{\Cdot}(X), d^{\Cdot}) = \frac{\ker(d^i: C^i(X) \to C^{i + 1}(X))}{\im(d^{i - 1}: C^{i - 1}(X) \to C^i (X))}.
+ \]
+\end{defi}
+
+Note that in general, it is \emph{not} true that $H^n(X) = \Hom(H_n(X), \Z)$. Thus, dualizing and taking homology do not commute with each other. However, we will later come up with a relation between the two objects.
+
+The next thing to show is that maps of spaces induce chain maps, hence maps between homology groups.
+
+\begin{prop}
+ If $f: X \to Y$ is a continuous map of topological spaces, then the maps
+ \begin{align*}
+ f_n: C_n(X) &\to C_n(Y)\\
+ (\sigma: \Delta^n \to X) &\mapsto (f \circ \sigma: \Delta^n \to Y)
+ \end{align*}
+ give a chain map. This induces a map on the homology (and cohomology).
+\end{prop}
+
+\begin{proof}
+ To see that the $f_n$ and $d_n$ commute, we just notice that $f_n$ acts by composing on the left, and $d_n$ acts by composing on the right, and these two operations commute by the associativity of functional composition.
+\end{proof}
+
+Now if in addition we have a map $g: Y \to Z$, then we obtain two maps $H_n(X) \to H_n(Z)$ by
+\[
+ \begin{tikzcd}
+ H_n(X) \ar[rr, "(g \circ f)_*"] \ar[rd, "f_*"] & & H_n(Z)\\
+ & H_n(Y) \ar[ru, "g_*"]
+ \end{tikzcd}.
+\]
+By direct inspection of the formula, we see that this diagram commutes. In equation form, we have
+\[
+ (g \circ f)_* = g_* \circ f_*.
+\]
+Moreover, we trivially have
+\[
+ (\id_X)_* = \id_{H_n(X)}: H_n(X) \to H_n(X).
+\]
+Thus we deduce that
+\begin{prop}
+ If $f: X \to Y$ is a homeomorphism, then $f_*: H_n(X) \to H_n(Y)$ is an isomorphism of abelian groups.
+\end{prop}
+
+\begin{proof}
+ If $g: Y \to X$ is an inverse to $f$, then $g_*$ is an inverse to $f_*$, as $f_* \circ g_* = (f \circ g)_* = (\id)_* = \id$, and similarly the other way round.
+\end{proof}
+If one is taking category theory, what we have shown is that $H_*$ is a functor, and the above proposition is just the usual proof that functors preserve isomorphisms.
+
+This is not too exciting. We will later show that \emph{homotopy equivalences} induce isomorphisms of homology groups, which is much harder.
+
+Again, we can dualize this to talk about cohomology. Applying $\Hom(\ph, \Z)$ to $f_{\Cdot}: C_{\Cdot}(X) \to C_{\Cdot}(Y)$ gives homomorphisms $f^n: C^n(Y) \to C^n(X)$ by mapping
+\[
+ (\varphi: C_n(Y) \to \Z) \mapsto (\varphi \circ f_n: C_n(X) \to \Z).
+\]
+Note that this map goes the other way! Again, this is a cochain map, and induces maps $f^*: H^n(Y) \to H^n(X)$.
+
+How should we think about singular homology? There are two objects involved --- cycles and boundaries. We will see that cycles in some sense ``detect holes'', and quotienting out by boundaries will help us identify cycles that detect the same hole.
+
+\begin{eg}
+ We work with the space that looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=morange, opacity=0.3] circle [radius=1.5];
+ \draw circle [radius=1.5];
+ \draw [fill=white] circle [radius=0.5];
+ \end{tikzpicture}
+ \end{center}
+ Suppose we have a single singular $1$-simplex $\sigma: \Delta^1 \to X$. Then its boundary is $d_1(\sigma) = \sigma(1) - \sigma(0)$. This is in general non-zero, unless $\sigma(0) = \sigma(1)$. In other words, $\sigma$ has to be a loop!
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=morange, opacity=0.3] circle [radius=1.5];
+ \draw circle [radius=1.5];
+ \draw [fill=white] circle [radius=0.5];
+
+ \draw [mblue, thick] (-1, 0) to [out=-80, in=270, looseness=1.2] (0.8, 0) node [right] {$\sigma$} to [out=90, in=80, looseness=1.2] (-1, 0) node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ So the $1$-cycles represented by just one element are exactly the loops.
+
+ How about more complicated cycles? We could have, say, four $1$-simplices $\sigma_1, \sigma_2, \sigma_3, \sigma_4$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=morange, opacity=0.3] circle [radius=1.5];
+ \draw circle [radius=1.5];
+ \draw [fill=white] circle [radius=0.5];
+
+ \draw [mblue, thick] (-1, 0) to [out=-80, in=180] (0, -0.65) node [circ] {};
+ \draw [mgreen, thick] (0, -0.65) to [out=0, in=270] (0.8, 0) node [circ] {};
+ \draw [mred, thick] (0.8, 0) to [out=90, in=0] (0, 0.65) node [circ] {};
+ \draw [morange, thick] (0, 0.65) to [out=180, in=80] (-1, 0) node [circ] {};
+
+ \node [circ] at (-1, 0) {};
+ \node [circ] at (0, -0.65) {};
+ \node [circ] at (0.8, 0) {};
+ \node [circ] at (0, 0.65) {};
+
+ \node [mblue] at (-0.8, -0.7) {$\sigma_1$};
+ \node [mgreen] at (0.8, -0.7) {$\sigma_2$};
+ \node [mred] at (0.8, 0.7) {$\sigma_3$};
+ \node [morange] at (-0.8, 0.7) {$\sigma_4$};
+ \end{tikzpicture}
+ \end{center}
+ In this case we have
+ \begin{align*}
+ \sigma_1(1) &= \sigma_2(0)\\
+ \sigma_2(1) &= \sigma_3(0)\\
+ \sigma_3(1) &= \sigma_4(0)\\
+ \sigma_4(1) &= \sigma_1(0)
+ \end{align*}
+ Thus, the boundary terms cancel in pairs, and $\sigma_1 + \sigma_2 + \sigma_3 + \sigma_4 \in C_1(X)$ is a cycle. We can think of such cycles as detecting the holes by surrounding them.
+
+ However, there are some stupid cycles:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=morange, opacity=0.3] circle [radius=1.5];
+ \draw circle [radius=1.5];
+ \draw [fill=white] circle [radius=0.5];
+
+ \draw [mblue, thick] (0.7, 0) to [out=-80, in=270, looseness=1.2] (1.3, 0) to [out=90, in=80, looseness=1.2] (0.7, 0) node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ These cycles don't really surround anything. The solution is that these cycles are actually boundaries: such a cycle is the boundary of a $2$-chain that fills in the region bounded by the loop (details omitted).
+
+ Similarly, the boundaries also allow us to identify two cycles that surround the same hole, so that we don't double count:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=morange, opacity=0.3] circle [radius=1.5];
+ \draw circle [radius=1.5];
+ \draw [fill=white] circle [radius=0.5];
+
+ \draw [mblue, thick] (-0.8, 0) to [out=-80, in=270, looseness=1.4] (0.6, 0) to [out=90, in=80, looseness=1.4] (-0.8, 0) node [circ] {};
+ \draw [mgreen, thick] (-1.3, 0) to [out=-80, in=270, looseness=1.2] (1.1, 0) to [out=90, in=80, looseness=1.2] (-1.3, 0) node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ This time, the \emph{difference} between the two cycles is the boundary of the $2$-simplex given by the region in between the two loops.
+
+ Of course, some work has to be done to actually find a $2$-simplex whose boundary is the difference of the two loops above, and in fact we will have to write the region as the sum of multiple $2$-simplices for this to work. However, this is just to provide intuition, not a formal proof.
+\end{eg}
+
+We now do some actual computations. If we want to compute the homology groups directly, we need to know what the $C_\Cdot(X)$ look like. In general, this is intractable, unless we have the very simple case of a point:
+\begin{eg}
+ Consider the one-point space $\pt = \{*\}$. We claim that
+ \[
+ H^n(\pt) = H_n(\pt) =
+ \begin{cases}
+ \Z & n = 0\\
+ 0 & n > 0
+ \end{cases}.
+ \]
+ To see this, we note that there is always a single singular $n$-simplex $\sigma_n: \Delta^n \to \pt$. So $C_n(\pt) = \Z$, and is generated by $\sigma_n$. Now note that
+ \[
+ d_n(\sigma_n) = \sum_{i = 0}^n (-1)^i \sigma_n \delta_i =
+ \begin{cases}
+ \sigma_{n - 1} & n\text{ even}\\
+ 0 & n\text{ odd}
+ \end{cases}.
+ \]
+ So the singular chain complex looks like
+ \[
+ \begin{tikzcd}
+ \Z \ar[r, "1"] & \Z\ar[r, "0"] & \Z \ar[r, "1"] & \Z \ar[r, "0"] & \Z \ar[r] & 0.
+ \end{tikzcd}
+ \]
+ The homology groups then follow from direct computation. The cohomology groups are similar.
+\end{eg}
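+
+For concreteness, here is the direct computation, spelled out: for $n > 0$ even, $d_n$ is the identity, so $\ker d_n = 0$; for $n$ odd, $d_n = 0$ but $d_{n + 1}$ is the identity. Hence
+\[
+ H_n(\pt) = \frac{\ker d_n}{\im d_{n + 1}} =
+ \begin{cases}
+ \Z/0 = \Z & n = 0\\
+ \Z/\Z = 0 & n \text{ odd}\\
+ 0/0 = 0 & n > 0 \text{ even}
+ \end{cases}.
+\]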
+This result is absolutely unexciting, and it is also almost the only space whose homology groups we can find at this point. However, we are still capable of proving the following general result:
+
+\begin{eg}
+ If $X = \coprod_{\alpha \in I} X_\alpha$ is a disjoint union of path components, then, since $\Delta^n$ is path-connected, each singular simplex must lie in a single $X_\alpha$. This implies that
+ \[
+ H_n(X) \cong \bigoplus_{\alpha \in I} H_n(X_\alpha).
+ \]
+\end{eg}
+Now we know how to compute, say, the homology of three points. How exciting.
+
+\begin{lemma}
+ If $X$ is path-connected and non-empty, then $H_0(X) \cong \Z$.
+\end{lemma}
+
+\begin{proof}
+ Define a homomorphism $\varepsilon: C_0(X) \to \Z$ given by
+ \[
+ \sum n_\sigma \sigma \mapsto \sum n_\sigma.
+ \]
+ Then this is surjective. We claim that the composition
+ \[
+ \begin{tikzcd}
+ C_1(X) \ar[r, "d"] & C_0(X) \ar[r, "\varepsilon"] & \Z
+ \end{tikzcd}
+ \]
+ is zero. Indeed, each simplex has two ends, and a $\sigma: \Delta^1 \to X$ is mapped to $\sigma \circ \delta_0 - \sigma \circ \delta_1$, which is mapped by $\varepsilon$ to $1 - 1 = 0$.
+
+ Thus, we know that $\varepsilon(\sigma) = \varepsilon (\sigma + d \tau)$ for any $\sigma \in C_0(X)$ and $\tau \in C_1(X)$. So we obtain a well-defined map $\varepsilon: H_0(X) \to \Z$ mapping $[x] \mapsto \varepsilon(x)$, and this is surjective as $X$ is non-empty.
+
+ So far, this is true for a general space. Now we use the path-connectedness condition to show that this map is indeed injective. Suppose $\sum n_\sigma \sigma \in C_0(X)$ lies in $\ker \varepsilon$. We choose an $x_0 \in X$. As $X$ is path-connected, for each $\sigma: \Delta^0 \to X$ appearing in the sum, we can choose a path $\tau_\sigma: \Delta^1 \to X$ with $\tau_\sigma \circ \delta_0 = \sigma$ and $\tau_\sigma \circ \delta_1 = x_0$.
+
+ Given these $1$-simplices, we can form a $1$-chain $\sum n_\sigma \tau_\sigma \in C_1(X)$, and
+ \[
+ d_1\left(\sum n_\sigma \tau_\sigma\right)= \sum n_\sigma(\sigma - x_0) = \sum n_\sigma \cdot \sigma - \left(\sum n_\sigma\right) x_0.
+ \]
+ Now we use the fact that $\sum n_\sigma = 0$. So $\sum n_\sigma \cdot \sigma$ is a boundary. So it is zero in $H_0(X)$.
+\end{proof}
+
+Combining with the coproduct formula, we have
+\begin{prop}
+ For any space $X$, the group $H_0(X)$ is the free abelian group generated by the path components of $X$.
+\end{prop}
+These are pretty much the things we can do by hand. To do more things, we need to use some tools.
+
+\section{Four major tools of (co)homology}
+We now state four major properties of cohomology. At the end, we will use these properties to compute a lot of homology groups. The proofs are long and boring, so we postpone them until we have gone through the properties and enjoyed some applications of them.
+
+\subsection{Homotopy invariance}
+The first result is pretty easy to state, but it is a really powerful result that really drives what we are doing:
+
+\begin{thm}[Homotopy invariance theorem]\index{homotopy invariance theorem}
+ Let $f \simeq g: X \to Y$ be homotopic maps. Then they induce the same maps on (co)homology, i.e.
+ \[
+ f_* = g_*: H_\Cdot(X) \to H_\Cdot(Y)
+ \]
+ and
+ \[
+ f^* = g^* : H^\Cdot(Y) \to H^\Cdot(X).
+ \]
+\end{thm}
+
+\begin{cor}
+ If $f: X \to Y$ is a homotopy equivalence, then $f_*: H_{\Cdot}(X) \to H_{\Cdot}(Y)$ and $f^*: H^{\Cdot}(Y) \to H^{\Cdot}(X)$ are isomorphisms.
+\end{cor}
+
+\begin{proof}
+ If $g: Y \to X$ is a homotopy inverse, then
+ \[
+ g_* \circ f_* = (g \circ f)_* = (\id_X)_* = \id_{H_{\Cdot}(X)}.
+ \]
+ Similarly, we have $f_* \circ g_* = (\id_Y)_* = \id_{H_{\Cdot}(Y)}$. So $f_*$ is an isomorphism with an inverse $g_*$.
+
+ The case for cohomology is similar.
+\end{proof}
+
+Since we know that $\R^n$ is homotopy equivalent to a point, it immediately follows that:
+\begin{eg}
+ We have
+ \[
+ H^i(\R^n) = H_i(\R^n) =
+ \begin{cases}
+ \Z & i = 0\\
+ 0 & i > 0
+ \end{cases}.
+ \]
+\end{eg}
+
+\subsection{Mayer-Vietoris}
+The next tool is something that allows us to compute (co)homology by computing the (co)homology of smaller subspaces. Unfortunately, the result doesn't just say we can directly add the cohomologies, or anything like that. Instead, the information is encoded in an \emph{exact sequence}.
+
+\begin{defi}[Exact sequence]\index{exact sequence}
+ We say a pair of homomorphisms
+ \[
+ \begin{tikzcd}
+ A \ar[r, "f"] & B \ar[r, "g"] & C
+ \end{tikzcd}
+ \]
+ is \emph{exact at $B$} if $\im(f) = \ker(g)$.
+
+ We say a sequence
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r] & X_0 \ar[r] & X_1 \ar[r] & X_2 \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ is \emph{exact} if it is exact at each $X_n$.
+\end{defi}
+
+\begin{eg}
+ If
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r, "f"] & B \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ is exact, then $f$ is an isomorphism.
+\end{eg}
+
+\begin{defi}[Short exact sequence]\index{short exact sequence}
+ A \emph{short exact sequence} is a sequence of the form
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r] & B \ar[r] & C \ar[r] & 0
+ \end{tikzcd}.
+ \]
+\end{defi}
+It is an easy consequence of the first isomorphism theorem that
+\begin{lemma}
+ In a short exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r, "f"] & B \ar[r, "g"] & C \ar[r] & 0
+ \end{tikzcd},
+ \]
+ the map $f$ is injective; $g$ is surjective, and $C \cong B/A$.
+\end{lemma}
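+
+Indeed, exactness at $A$ gives $\ker f = \im(0 \to A) = 0$, and exactness at $C$ gives $\im g = \ker(C \to 0) = C$. The first isomorphism theorem then gives
+\[
+ C = \im g \cong B/\ker g = B/\im f \cong B/A.
+\]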
+
+\begin{eg}
+ Consider the sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z/n\Z \ar[r] & A \ar[r] & \Z/m\Z \ar[r] & 0
+ \end{tikzcd}
+ \]
+ There are many possible values of $A$. A few obvious examples are $A = \Z/nm\Z$ and $A = \Z/n\Z \oplus \Z/m\Z$. If $n$ and $m$ are not coprime, then these are different groups.
+\end{eg}
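+
+For instance, with $n = m = 2$, both candidates genuinely occur: besides the split sequence with $A = \Z/2\Z \oplus \Z/2\Z$, we also have
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z/2\Z \ar[r, "\times 2"] & \Z/4\Z \ar[r] & \Z/2\Z \ar[r] & 0
+ \end{tikzcd},
+\]
+where the first map sends $1 \mapsto 2$ and the second is reduction mod $2$. So a short exact sequence does not determine its middle term.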
+Many of the important theorems for computing homology groups tell us that the homology groups of certain spaces fall into some exact sequence, and we have to try to figure out what the groups are.
+
+\begin{thm}[Mayer-Vietoris theorem]\index{Mayer-Vietoris theorem}
+ Let $X = A \cup B$ be the union of two open subsets. We have inclusions
+ \[
+ \begin{tikzcd}
+ A \cap B \ar[r, "i_A", hook] \ar[d, "i_B", hook] & A \ar[d, hook, "j_A"]\\
+ B \ar[r, hook, "j_B"] & X
+ \end{tikzcd}.
+ \]
+ Then there are homomorphisms $\partial_{MV}: H_n(X) \to H_{n - 1}(A \cap B)$ such that the following sequence is exact:
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r, "\partial_{MV}"] & H_n(A \cap B) \ar[r, "i_{A*} \oplus i_{B*}"] & H_n(A) \oplus H_n(B) \ar[r, "j_{A*} - j_{B*}"] & H_n(X)\ar[out=0, in=180, looseness=2, overlay, lld, "\partial_{MV}"']\\
+ & H_{n - 1}(A \cap B) \ar[r, "i_{A*} \oplus i_{B*}"] & H_{n - 1}(A) \oplus H_{n - 1}(B) \ar[r, "j_{A*} - j_{B*}"] & H_{n - 1}(X) \ar [r] & \cdots\\
+ &\cdots \ar[r] & H_0(A) \oplus H_0(B) \ar[r, "j_{A*} - j_{B*}"] & H_0(X) \ar [r] & 0
+ \end{tikzcd}
+ \]
+ Furthermore, the Mayer-Vietoris sequence is \emph{natural}, i.e.\ if $f: X = A\cup B \to Y = U \cup V$ satisfies $f(A) \subseteq U$ and $f(B) \subseteq V$, then the diagram
+ \[
+ \begin{tikzcd}
+ H_{n + 1}(X) \ar[r, "\partial_{MV}"] \ar[d, "f_*"] & H_{n}(A \cap B) \ar[r, "i_{A*} \oplus i_{B*}"] \ar[d, "f|_{A \cap B*}"] & H_n(A) \oplus H_n(B) \ar[r, "j_{A*} - j_{B*}"] \ar[d, "f|_{A*} \oplus f|_{B*}"] & H_n(X) \ar[d, "f_*"]\\
+ H_{n + 1}(Y) \ar[r, "\partial_{MV}"] & H_{n}(U \cap V) \ar[r, "i_{U*} \oplus i_{V*}"] & H_n(U) \oplus H_n(V) \ar[r, "j_{U*} - j_{V*}"] & H_n(Y)
+ \end{tikzcd}
+ \]
+ commutes.
+\end{thm}
+For certain elements of $H_n(X)$, we can easily specify what $\partial_{MV}$ does to them. The meat of the proof is to show that every element of $H_n(X)$ can be represented in this form. If $[a + b] \in H_n(X)$ is such that $a \in C_n(A)$ and $b \in C_n(B)$, then the map $\partial_{MV}$ is specified by
+\[
+ \partial_{MV}([a + b]) = [d_n(a)] = [-d_n(b)] \in H_{n - 1}(A \cap B).
+\]
+To see this makes sense, note that we have $d_n(a + b) = 0$. So $d_n(a) = - d_n(b)$. Moreover, since $a \in C_n(A)$, we have $d_n(a) \in C_{n - 1}(A)$. Similarly, $d_n(b) \in C_{n - 1}(B)$. So $d_n(a) = - d_n(b) \in C_{n - 1}(A) \cap C_{n - 1}(B) = C_{n - 1}(A \cap B)$.
+
+\subsection{Relative homology}
+The next result again gives us another exact sequence. This time, the exact sequence involves another quantity known as \emph{relative homology}.
+
+\begin{defi}[Relative homology]\index{relative homology}
+ Let $A \subseteq X$. We write $i: A \to X$ for the inclusion map. Then the map $i_n: C_n(A) \to C_n(X)$ is injective as well, and we write
+ \[
+ C_n(X, A) = \frac{C_n(X)}{C_n(A)}.
+ \]
+ The differential $d_n: C_n(X) \to C_{n - 1}(X)$ restricts to a map $C_n(A) \to C_{n - 1}(A)$, and thus gives a well-defined differential $d_n: C_n(X, A) \to C_{n - 1}(X, A)$, sending $[c] \mapsto [d_n(c)]$. The \emph{relative homology} is given by
+ \[
+ H_n(X, A) = H_n(C_{\Cdot}(X, A)).
+ \]
+\end{defi}
+We think of this as chains in $X$ where we ignore everything that happens in $A$.
+
+\begin{thm}[Exact sequence for relative homology]\index{exact sequence for relative homology}
+ There are homomorphisms $\partial: H_n(X, A) \to H_{n - 1}(A)$ given by mapping
+ \[
+ \big[[c]\big] \mapsto [d_n c].
+ \]
+ This makes sense because if $[c] \in C_n(X)/C_n(A)$ is a cycle, then $[d_n c] = d_n[c] = 0 \in C_{n - 1}(X)/C_{n - 1}(A)$, i.e.\ $d_n c \in C_{n - 1}(A)$. So this notation makes sense.
+
+ Moreover, there is a long exact sequence
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r, "\partial"] & H_n(A) \ar[r, "i_*"] & H_n(X) \ar[r, "q_*"] & H_n(X, A)\ar[out=0, in=180, looseness=2, overlay, lld, "\partial"']\\
+ & H_{n - 1}(A) \ar[r, "i_*"] & H_{n - 1}(X) \ar[r, "q_*"] & H_{n - 1}(X, A) \ar [r] & \cdots\\
+ &\cdots \ar[r] & H_0(X) \ar[r, "q_*"] & H_0(X, A) \ar [r] & 0
+ \end{tikzcd},
+ \]
+ where $i_*$ is induced by $i: C_{\Cdot}(A) \to C_{\Cdot}(X)$ and $q_*$ is induced by the quotient $q: C_{\Cdot}(X) \to C_{\Cdot}(X, A)$.
+\end{thm}
+
+This, again, is natural. To specify the naturality condition, we need the following definition:
+\begin{defi}[Map of pairs]\index{map of pairs}
+ Let $(X, A)$ and $(Y, B)$ be topological spaces with $A \subseteq X$ and $B \subseteq Y$. A \emph{map of pairs} is a map $f: X \to Y$ such that $f(A) \subseteq B$.
+\end{defi}
+
+Such a map induces a map $f_*: H_n(X, A) \to H_n(Y, B)$, and the exact sequence for relative homology is natural for such maps.
+
+\subsection{Excision theorem}
+Now the previous result is absolutely useless by itself, because we have just introduced yet another quantity $H_n(X, A)$ that we have no idea how to compute. The main point of relative homology is that we want to think of $H_n(X, A)$ as the homology of $X$ when we ignore $A$. Thus, one might expect that the relative homology does not depend on the things ``inside $A$''. However, it is not true in general that, say, $H_n(X, A) = H_n(X \setminus A)$. Instead, what we are allowed to do is to remove subspaces of $A$ that are ``not too big''. This is given by excision:
+
+\begin{thm}[Excision theorem]\index{excision theorem}
+ Let $(X, A)$ be a pair of spaces, and $Z \subseteq A$ be such that $\overline{Z} \subseteq \mathring{A}$ (the closure is taken in $X$). Then the map
+ \[
+ H_n(X \setminus Z, A \setminus Z) \to H_n(X, A)
+ \]
+ induced by inclusion is an isomorphism.
+\end{thm}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [draw=black, fill=morange, opacity=0.5] (0, 0) rectangle (3, 2);
+ \node at (0.5, 0.5) {$X$};
+
+ \draw [draw=black, fill=mblue, opacity=0.6] (1.8, 1) circle [radius=0.7];
+ \node at (2.6, 1.7) {$A$};
+
+ \draw [draw=black, fill=mred, opacity=0.8, decorate, decoration={snake}] (1.75, 0.95) circle [radius=0.4] node [white] {$Z$};
+ \end{tikzpicture}
+\end{center}
+While we've only been talking about homology, everything so far holds analogously for cohomology too. It is again homotopy invariant, and there is a Mayer-Vietoris sequence (with maps $\partial_{MV}: H^n(A \cap B) \to H^{n + 1}(X)$). The relative cohomology is defined by $C^{\Cdot}(X, A) = \Hom(C_{\Cdot}(X, A), \Z)$ and so $H^*(X, A)$ is the cohomology of that. Similarly, excision holds.
+
+\subsection{Applications}
+We now use all these tools to do lots of computations. The first thing we compute will be the homology of spheres.
+
+\begin{thm}
+ We have
+ \[
+ H_i(S^1) =
+ \begin{cases}
+ \Z & i = 0, 1\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+\end{thm}
+\begin{proof}
+ We can split $S^1$ up as
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=1];
+ \draw [red] (1.105, -0.4673) arc(-22.9:202.9:1.2);
+ \node [red, above] at (0, 1.2) {$A$};
+
+ \draw [blue] (1.279, 0.545) arc(22.9:-202.9:1.4);
+ \node [blue, below] at (0, -1.4) {$B$};
+
+ \node [circ] at (1, 0) {};
+ \node at (1, 0) [left] {$q$};
+ \node [circ] at (-1, 0) {};
+ \node at (-1, 0) [right] {$p$};
+ \end{tikzpicture}
+ \end{center}
+ We want to apply Mayer-Vietoris. We have
+ \[
+ A \cong B \cong \R \simeq *,\quad A \cap B \cong \R \coprod \R \simeq \{p\} \coprod \{q\}.
+ \]
+ We obtain
+ \[
+ \begin{tikzcd}[row sep=tiny]
+ & 0 \ar[d, equals] & 0 \ar[d, equals] \\
+ \cdots \ar[r] & H_1(A \cap B) \ar[r] & H_1(A) \oplus H_1(B) \ar[r] & H_1(S^1)\ar[out=0, in=180, looseness=2, overlay, lldd, "\partial"']\\
+ \vphantom{a}\\
+ & H_0(A \cap B) \ar[r, "i_{A*} \oplus i_{B*}"] & H_0(A) \oplus H_0(B) \ar[r] & H_0(S^1) \ar [r] & 0\\
+ & \Z \oplus \Z \ar[u, equals] & \Z \oplus \Z \ar[u, equals] & \Z \ar[u, equals]
+ \end{tikzcd}
+ \]
+ Notice that the map into $H_1(S^1)$ is zero. So the kernel of $\partial$ is trivial, i.e.\ $\partial$ is an injection. So $H_1(S^1)$ is isomorphic to the image of $\partial$, which is, by exactness, the kernel of $i_{A*} \oplus i_{B*}$. So we want to know what this map does.
+
+ We know that $H_0(A \cap B) \cong \Z \oplus \Z$ is generated by $p$ and $q$, and since $A$ and $B$ are each path-connected, the inclusions send each of $p$ and $q$ to a generator of $H_0(A)$ and of $H_0(B)$. So the homology classes of $p$ and $q$ are both sent to $(1, 1) \in H_0(A) \oplus H_0(B) \cong \Z \oplus \Z$. We then see that the kernel of $i_{A*} \oplus i_{B*}$ is generated by $(p - q)$, and is thus isomorphic to $\Z$. So $H_1(S^1) \cong \Z$.
+
+ By looking higher up the exact sequence, we see that all other homology groups vanish.
+\end{proof}
+
+We can do the sphere in general in the same way.
+
+\begin{thm}
+ For any $n \geq 1$, we have
+ \[
+ H_i(S^n) =
+ \begin{cases}
+ \Z & i = 0, n\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+\end{thm}
+
+\begin{proof}
+ We again cut up $S^n$ as
+ \begin{align*}
+ A &= S^n \setminus \{N\} \cong \R^n \simeq *,\\
+ B &= S^n \setminus \{S\} \cong \R^n \simeq *,
+ \end{align*}
+ where $N$ and $S$ are the north and south poles. Moreover, we have
+ \[
+ A\cap B \cong \R \times S^{n - 1} \simeq S^{n - 1}
+ \]
+ So we can ``induct up'' using the Mayer-Vietoris sequence:
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r] & H_i(S^{n - 1}) \ar[r] & H_i(*) \oplus H_i(*) \ar[r] & H_i(S^n)\ar[out=0, in=180, looseness=2, overlay, lld, "\partial"]\\
+ & H_{i - 1}(S^{n - 1}) \ar[r] & H_{i - 1}(*) \oplus H_{i - 1}(*) \ar[r] & H_{i - 1}(S^n) \ar [r] & \cdots
+ \end{tikzcd}
+ \]
+ Now suppose $n \geq 2$, as we have already done the case of $S^1$. If $i > 1$, then $H_i(*) = 0 = H_{i - 1}(*)$. So the Mayer-Vietoris map
+ \[
+ \begin{tikzcd}
+ H_i(S^n) \ar[r, "\partial"] & H_{i - 1}(S^{n - 1})
+ \end{tikzcd}
+ \]
+ is an isomorphism.
+
+ All that remains is to look at $i = 0, 1$. The $i = 0$ case is trivial. For $i = 1$, we look at
+ \[
+ \begin{tikzcd}[row sep=tiny]
+ & & 0 \ar[d, equals] \\
+ & \cdots \ar[r] & H_1(*) \oplus H_1(*) \ar[r] & H_1(S^n)\ar[out=0, in=180, looseness=2, overlay, lldd, "\partial"]\\
+ \vphantom{a} \\
+ & H_0(S^{n - 1}) \ar[r, "f"] & H_0(*) \oplus H_0(*) \ar[r] & H_0(S^n) \ar [r] & 0\\
+ & \Z \ar[u, equals] & \Z \oplus \Z \ar[u, equals] & \Z \ar[u, equals]
+ \end{tikzcd}
+ \]
+ To conclude that $H_1(S^n)$ is trivial, it suffices to show that the map $f$ is injective. By picking the canonical generators, it is given by $1 \mapsto (1, 1)$. So we are done.
+\end{proof}
+
+\begin{cor}
+ If $n \not= m$, then $S^{n - 1} \not\simeq S^{m - 1}$, since they have different homology groups.
+\end{cor}
+
+\begin{cor}
+ If $n \not= m$, then $\R^n \not \cong \R^m$.
+\end{cor}
+
+Now suppose we have a map $f: S^n \to S^n$. As always, it induces a map $f_*: H_n(S^n) \to H_n(S^n)$. Since $H_n (S^n) \cong \Z$, we know the map $f_*$ is given by multiplication by some integer. We call this the \emph{degree} of the map $f$.
+\begin{defi}[Degree of a map]\index{degree of map}
+ Let $f: S^n \to S^n$ be a map. The \emph{degree} $\deg(f)$ is the unique integer such that under the identification $H_n(S^n) \cong \Z$, the map $f_*$ is given by multiplication by $\deg(f)$.
+\end{defi}
+In particular, for $n = 1$, the degree is the winding number.
+
+Note that there are two ways we can identify $H_n(S^n)$ and $\Z$, which differ by a sign. For the degree to be well-defined, and not just defined up to a sign, we must ensure that we identify both $H_n(S^n)$ in the same way.
+
+Using the degree, we can show that certain maps are \emph{not} homotopic to each other. We first note the following elementary properties:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\deg(\id_{S^n}) = 1$.
+ \item If $f$ is not surjective, then $\deg(f) = 0$.
+ \item We have $\deg(f\circ g) = (\deg f)(\deg g)$.
+ \item Homotopic maps have equal degrees.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Obvious.
+ \item If $f$ is not surjective, then $f$ can be factored as
+ \[
+ \begin{tikzcd}
+ S^n \ar[r, "f"] & S^n \setminus \{p\} \ar[r, hook] & S^n
+ \end{tikzcd},
+ \]
+ where $p$ is some point not in the image of $f$. But $S^n \setminus \{p\}$ is contractible. So $f_*$ factors as
+ \[
+ \begin{tikzcd}
+ f_*: H_n(S^n) \ar[r] & H_n(*) = 0 \ar[r] & H_n(S^n)
+ \end{tikzcd}.
+ \]
+ So $f_*$ is the zero homomorphism, and is thus multiplication by $0$.
+ \item This follows from the functoriality of $H_n$.
+ \item Obvious as well.\qedhere
+ \end{enumerate}
+\end{proof}
+
+As a corollary, we obtain the renowned Brouwer's fixed point theorem, a highly non-constructive fixed point existence theorem proved by a constructivist.
+\begin{cor}[Brouwer's fixed point theorem]\index{Brouwer's fixed point theorem}
+ Any map $f: D^n \to D^n$ has a fixed point.\index{fixed point}
+\end{cor}
+
+\begin{proof}
+ Suppose $f$ has no fixed point. Define $r: D^n \to S^{n - 1} = \partial D^n$ by taking the intersection of the ray from $f(x)$ through $x$ with $\partial D^n$. This is continuous.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2cm];
+ \node [circ] at (-0.4, -0.3) {};
+ \node at (-0.4, -0.3) [below] {$x$};
+
+ \node [circ] at (0.4, 0.3) {};
+ \node at (0.4, 0.3) [right] {$f(x)$};
+ \draw (0.4, 0.3) -- (-1.6, -1.2);
+ \node at (-1.6, -1.2) [circ] {};
+ \node at (-1.6, -1.2) [anchor = north east] {$r(x)$};
+ \end{tikzpicture}
+ \end{center}
+ Now if $x \in \partial D^n$, then $r(x) = x$. So we have a map
+ \[
+ \begin{tikzcd}
+ S^{n - 1} = \partial D^n \ar[r, "i"] & D^n \ar[r, "r"] & \partial D^n = S^{n - 1}
+ \end{tikzcd},
+ \]
+ and the composition is the identity. This is a contradiction --- contracting $D^n$ to a point, this gives a homotopy from the identity map $S^{n - 1} \to S^{n - 1}$ to the constant map at a point. This is impossible, as the two maps have different degrees.
+\end{proof}
+A more manual argument to show this would be to apply $H_{n - 1}$ to the chain of maps above to obtain a contradiction.
+
+We know there is a map of degree $1$, namely the identity, and a map of degree $0$, namely the constant map. How about other degrees?
+
+\begin{prop}
+ A reflection $r: S^n \to S^n$ about a hyperplane has degree $-1$. As before, we cover $S^n$ by
+ \begin{align*}
+ A &= S^n \setminus \{N\} \cong \R^n \simeq *,\\
+ B &= S^n \setminus \{S\} \cong \R^n \simeq *,
+ \end{align*}
+ where we suppose the north and south poles lie in the hyperplane of reflection. Then both $A$ and $B$ are invariant under the reflection. Consider the diagram
+ \[
+ \begin{tikzcd}
+ H_n(S^n) \ar[r, "\partial_{MV}", "\sim"'] \ar[d, "r_*"] & H_{n - 1}(A \cap B) \ar[d, "r_*"] & H_{n - 1}(S^{n - 1}) \ar[l, "\sim"] \ar[d, "r_*"]\\
+ H_n(S^n) \ar[r, "\partial_{MV}", "\sim"'] & H_{n - 1}(A \cap B) & H_{n - 1}(S^{n - 1}) \ar[l, "\sim"]
+ \end{tikzcd}
+ \]
+ where the $S^{n - 1}$ in the rightmost column is given by contracting $A \cap B$ to the equator. Note that $r$ restricts to a reflection on the equator. By tracing through the isomorphisms, we see that $\deg(r) = \deg(r|_{\mathrm{equator}})$. So by induction, we only have to consider the case when $n = 1$. Then we have maps
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & H_1(S^1) \ar[r, "\partial_{MV}", "\sim"'] \ar[d, "r_*"] & H_0(A \cap B) \ar[d, "r_*"] \ar[r] & H_0(A) \oplus H_0(B) \ar[d, "r_* \oplus r_*"]\\
+ 0 \ar[r] & H_1(S^1) \ar[r, "\partial_{MV}", "\sim"'] & H_0(A \cap B) \ar[r] & H_0(A) \oplus H_0(B)
+ \end{tikzcd}
+ \]
+ Now the middle vertical map sends $p \mapsto q$ and $q \mapsto p$. Since $H_1(S^1)$ is given by the kernel of $H_0(A \cap B) \to H_0(A) \oplus H_0(B)$, and is generated by $p - q$, we see that this sends the generator to its negation. So this is given by multiplication by $-1$. So the degree is $-1$.
+\end{prop}
+
+\begin{cor}
+ The antipodal map $a: S^n \to S^n$ given by
+ \[
+ a(x_1, \cdots, x_{n + 1}) = (-x_1, \cdots, -x_{n + 1})
+ \]
+ has degree $(-1)^{n + 1}$ because it is a composition of $(n + 1)$ reflections.
+\end{cor}
+
+\begin{cor}[Hairy ball theorem]\index{Hairy ball theorem}
+ $S^n$ has a nowhere-vanishing vector field iff $n$ is odd. More precisely, viewing $S^n \subseteq \R^{n + 1}$, a vector field on $S^n$ is a map $v: S^n \to \R^{n + 1}$ such that $\bra v(x), x\ket = 0$, i.e.\ $v(x)$ is perpendicular to $x$.
+\end{cor}
+
+\begin{proof}
+ If $n$ is odd, say $n = 2k - 1$, then
+ \[
+ v(x_1, y_1, x_2, y_2, \cdots, x_k, y_k) = (y_1, -x_1, y_2, -x_2, \cdots, y_k, -x_k)
+ \]
+ works.
+
+ Conversely, if $v: S^n \to \R^{n + 1} \setminus \{0\}$ is a vector field, we let $w = \frac{v}{\abs{v}}: S^n \to S^n$. We can construct a homotopy from $w$ to the antipodal map by ``linear interpolation'', but in a way so that it stays on the sphere. We let
+ \begin{align*}
+ H: [0, \pi] \times S^n &\to S^n\\
+ (t, x) &\mapsto \cos(t) x + \sin(t) w(x)
+ \end{align*}
+ Since $w(x)$ and $x$ are perpendicular, it follows that this always has norm $1$, so indeed it stays on the sphere.
+
+ Now we have
+ \[
+ H(0, x) = x,\quad H(\pi, x) = -x.
+ \]
+ So this gives a homotopy from $\id$ to $a$, which is a contradiction since they have different degrees.
+\end{proof}
+
+So far, we've only been talking about spheres all the time. We now move on to something different.
+\begin{eg}
+ Let $K$ be the Klein bottle. We cut it up by
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.55, mred] (0, 0) -- (3, 0);
+ \draw [->-=0.55, mred] (3, 3) -- (0, 3);
+ \draw [->-=0.55, mblue] (3, 0) -- (3, 3);
+ \draw [->-=0.55, mblue] (0, 0) -- (0, 3);
+
+ \fill [morange, opacity=0.5] (0, 0) rectangle (3, 1.2);
+ \fill [morange, opacity=0.5] (0, 1.8) rectangle (3, 3);
+ \node [right, morange] at (3, 2.5) {$A$};
+
+ \fill [mgreen, opacity=0.5] (0, 0.8) rectangle (3, 2.2);
+ \node [right, mgreen] at (3, 1.5) {$B$};
+
+ \foreach \n in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} {
+ \begin{scope}[shift={(0.15 + 0.2 * \n, 0.9)}]
+ \fill (0, 0) -- (0.05, 0) -- (0.1, 0.1) -- (0.05, 0.2) -- (0, 0.2) -- (0.05, 0.1) -- cycle;
+ \end{scope}
+
+ \begin{scope}[shift={(0.15 + 0.2 * \n, 1.9)}]
+ \fill (0, 0) -- (0.05, 0) -- (0.1, 0.1) -- (0.05, 0.2) -- (0, 0.2) -- (0.05, 0.1) -- cycle;
+ \end{scope}
+ }
+ \end{tikzpicture}
+ \end{center}
+ Then both $A$ and $B$ are cylinders, and their intersection is two cylinders:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [morange, opacity=0.5] (0, 0) rectangle (3, 2.6);
+ \draw [->-=0.55, mblue] (3, 0) -- (3, 2.6);
+ \draw [->-=0.55, mblue] (0, 0) -- (0, 2.6);
+ \draw [->-=0.55, mred] (3, 1.3) -- (0, 1.3);
+ \fill [mgreen, opacity=0.5] (0, 0) rectangle (3, 0.4);
+ \fill [mgreen, opacity=0.5] (0, 2.2) rectangle (3, 2.6);
+
+ \foreach \n in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} {
+ \begin{scope}[shift={(0.15 + 0.2 * \n, 0.1)}]
+ \fill (0, 0) -- (0.05, 0) -- (0.1, 0.1) -- (0.05, 0.2) -- (0, 0.2) -- (0.05, 0.1) -- cycle;
+ \end{scope}
+
+ \begin{scope}[shift={(0.2 + 0.2 * \n, 2.3)}]
+ \fill (0, 0) -- (0.05, 0) -- (0, 0.1) -- (0.05, 0.2) -- (0, 0.2) -- (-0.05, 0.1) -- cycle;
+ \end{scope}
+ }
+ \node at (1.5, -0.1) [morange, below] {$A$};
+
+ \fill [morange, opacity=0.5] (4, 0.6) rectangle (7, 1);
+ \fill [morange, opacity=0.5] (4, 1.6) rectangle (7, 2);
+ \fill [mgreen, opacity=0.5] (4, 0.6) rectangle (7, 2);
+ \draw [->-=0.57, mblue] (4, 0.6) -- (4, 2);
+ \draw [->-=0.57, mblue] (7, 0.6) -- (7, 2);
+
+ \foreach \n in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} {
+ \begin{scope}[shift={(4.15 + 0.2 * \n, 0.7)}]
+ \fill (0, 0) -- (0.05, 0) -- (0.1, 0.1) -- (0.05, 0.2) -- (0, 0.2) -- (0.05, 0.1) -- cycle;
+ \end{scope}
+
+ \begin{scope}[shift={(4.15 + 0.2 * \n, 1.7)}]
+ \fill (0, 0) -- (0.05, 0) -- (0.1, 0.1) -- (0.05, 0.2) -- (0, 0.2) -- (0.05, 0.1) -- cycle;
+ \end{scope}
+ }
+
+ \node at (5.5, -0.1) [mgreen, below] {$B$};
+
+ \begin{scope}[shift={(4, 0)}]
+ \fill [morange, opacity=0.5] (4, 0.6) rectangle (7, 1);
+ \fill [morange, opacity=0.5] (4, 1.6) rectangle (7, 2);
+ \fill [mgreen, opacity=0.5] (4, 0.6) rectangle (7, 1);
+ \fill [mgreen, opacity=0.5] (4, 1.6) rectangle (7, 2);
+ \draw [->-=0.8, mblue] (4, 0.6) -- (4, 1) node [left, pos=0.5] {$s$};
+ \draw [->-=0.8, mblue] (7, 0.6) -- (7, 1);
+ \draw [->-=0.8, mblue] (4, 1.6) -- (4, 2) node [left, pos=0.5] {$t$};
+ \draw [->-=0.8, mblue] (7, 1.6) -- (7, 2);
+
+ \foreach \n in {0,1,2,3,4,5,6,7,8,9,10,11,12,13} {
+ \begin{scope}[shift={(4.15 + 0.2 * \n, 0.7)}]
+ \fill (0, 0) -- (0.05, 0) -- (0.1, 0.1) -- (0.05, 0.2) -- (0, 0.2) -- (0.05, 0.1) -- cycle;
+ \end{scope}
+
+ \begin{scope}[shift={(4.15 + 0.2 * \n, 1.7)}]
+ \fill (0, 0) -- (0.05, 0) -- (0.1, 0.1) -- (0.05, 0.2) -- (0, 0.2) -- (0.05, 0.1) -- cycle;
+ \end{scope}
+ }
+
+ \node at (5.5, -0.1) [mgreen!50!morange, below] {$A \cap B$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ We have a long exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & H_2(K) \ar[r] & H_1(A \cap B) \ar[r, "{(i_A, i_B)}"] & H_1(A) \oplus H_1(B) \ar[lld, "j_A - j_B"', out=0, in=180, looseness=2, overlay] & \vphantom{0}\\
+ & H_1(K) \ar[r] & H_0(A \cap B) \ar[r, "{(i_A, i_B)}"] & H_0(A) \oplus H_0(B) \ar[lld, "j_A - j_B"', out=0, in=180, looseness=2, overlay]\\
+ & H_0(K) \ar[r] & 0
+ \end{tikzcd}
+ \]
+ We plug in the numbers to get
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & H_2(K) \ar[r] & \Z \oplus \Z \ar[r, "{(i_A, i_B)}"] & \Z \oplus \Z \ar[dll, "j_A - j_B"', out=0, in=180, looseness=2, overlay] & \vphantom{0}\\
+ & H_1(K) \ar[r] & \Z \oplus \Z \ar[r, "{(i_A, i_B)}"] & \Z \oplus \Z \ar[r, "j_A - j_B"] & \Z \ar[r] & 0
+ \end{tikzcd}
+ \]
+ Now let's look at the first $(i_A, i_B)$ map. We suppose each $H_1$ is generated by the left-to-right loops. Then we have
+ \begin{align*}
+ i_A(s) &= -1\\
+ i_A(t) &= 1\\
+ i_B(s) &= 1\\
+ i_B(t) &= 1
+ \end{align*}
+ So the matrix of $(i_A, i_B)$ is
+ \[
+ \begin{pmatrix}
+ -1 & 1\\
+ 1 & 1
+ \end{pmatrix}.
+ \]
 This has determinant $-2$, which is non-zero, so the map is injective, i.e.\ has trivial kernel. So we must have $H_2(K) = 0$. We also have
+ \[
+ \frac{H_1(A) \oplus H_1(B)}{\im(i_A, i_B)} = \frac{\bra a, b\ket}{\bra -a + b, a + b\ket} = \frac{\bra a\ket}{\bra 2a\ket} \cong \Z/2\Z.
+ \]
+ Now the second $(i_A, i_B)$ is given by
+ \[
+ \begin{pmatrix}
+ 1 & 1\\
+ 1 & 1
+ \end{pmatrix},
+ \]
+ whose kernel is isomorphic to $\Z$. So we have
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z/2\Z \ar[r] & H_1(K) \ar[r] & \Z \ar[r] & 0
+ \end{tikzcd}.
+ \]
 Since $\Z$ is free, this short exact sequence splits, so $H_1(K) \cong \Z \oplus \Z/2\Z$.
+\end{eg}
+
+Finally, we look at some applications of relative homology.
+
+\begin{lemma}
+ Let $M$ be a $d$-dimensional manifold (i.e.\ a Hausdorff, second-countable space locally homeomorphic to $\R^d$). Then
+ \[
+ H_n(M, M \setminus \{x\}) \cong
+ \begin{cases}
+ \Z & n = d\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ This is known as the \term{local homology}.
+\end{lemma}
+This gives us a homological way to define the dimension of a space.
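For instance, this gives a quick proof of invariance of dimension: if $h: M \to N$ is a homeomorphism, where $M$ is a $d$-manifold and $N$ is a $d'$-manifold, then for each $x \in M$, the map $h$ induces isomorphisms
\[
 H_n(M, M \setminus \{x\}) \cong H_n(N, N \setminus \{h(x)\})
\]
for all $n$, and comparing the degrees in which the local homology is non-zero forces $d = d'$.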
+
+\begin{proof}
 Let $U$ be an open neighbourhood of $x$ homeomorphic to $\R^d$, with a homeomorphism sending $x \mapsto 0$. We let $Z = M\setminus U$. Then
+ \[
+ \overline{Z} = Z \subseteq M \setminus \{x\} = \mathring{(M \setminus \{x\})}.
+ \]
+ So we can apply excision, and get
+ \[
 H_n(U, U \setminus \{x\}) = H_n(M\setminus Z, (M\setminus \{x\})\setminus Z) \cong H_n(M, M\setminus \{x\}).
+ \]
+ So it suffices to do this in the case $M \cong \R^d$ and $x = 0$. The long exact sequence for relative homology gives
+ \[
+ \begin{tikzcd}
+ H_n(\R^d) \ar[r] & H_n(\R^d, \R^d \setminus \{0\}) \ar[r] & H_{n - 1}(\R^d \setminus \{0\}) \ar[r] & H_{n - 1}(\R^d)
+ \end{tikzcd}.
+ \]
 Since $H_n(\R^d) = H_{n - 1}(\R^d) = 0$ for $n \geq 2$, it follows that
+ \[
+ H_n(\R^d, \R^d \setminus \{0\}) \cong H_{n - 1}(\R^d \setminus \{0\}) \cong H_{n - 1}(S^{d - 1}),
+ \]
 and the result follows from our previous computation of the homology of $S^{d - 1}$. The low-degree cases $n = 0, 1$ have to be checked manually, but we shall not do so here.
+\end{proof}
+
These relative homology groups are exactly the homology groups of spheres. So we might be able to define a degree-like object for maps between manifolds. However, there is the subtlety that the identification with $\Z$ is not necessarily canonical, and this involves the problem of orientation. For now, we will just work with $S^n$.
+
+\begin{defi}[Local degree]\index{local degree}
+ Let $f: S^d \to S^d$ be a map, and $x \in S^d$. Then $f$ induces a map
+ \[
+ f_*: H_d(S^d, S^d \setminus \{x\}) \to H_d(S^d, S^d \setminus \{f(x)\}).
+ \]
+ We identify $H_d(S^d, S^d \setminus \{x\}) \cong H_d(S^d) \cong \Z$ for any $x$ via the inclusion $H_d(S^d) \to H_d(S^d, S^d \setminus \{x\})$, which is an isomorphism by the long exact sequence. Then $f_*$ is given by multiplication by a constant $\deg(f)_x$, called the \emph{local degree} of $f$ at $x$.
+\end{defi}
+What makes the local degree useful is that we can in fact compute the degree of a map via local degrees.
+
+\begin{thm}
+ Let $f: S^d \to S^d$ be a map. Suppose there is a $y \in S^d$ such that
+ \[
+ f^{-1}(y) = \{x_1, \cdots, x_k\}
+ \]
+ is finite. Then
+ \[
+ \deg (f) = \sum_{i = 1}^k \deg(f)_{x_i}.
+ \]
+\end{thm}
+
+\begin{proof}
 Note that by excision, instead of computing the local degree at $x_i$ via $H_d(S^d, S^d \setminus \{x_i\})$, we can pick a neighbourhood $U_i$ of $x_i$ and a neighbourhood $V$ of $y$ such that $f(U_i) \subseteq V$, and then look at the map
+ \[
+ f_*: H_d(U_i, U_i \setminus \{x_i\}) \to H_d(V, V \setminus \{y\})
+ \]
+ instead. Moreover, since $S^d$ is Hausdorff, we can pick the $U_i$ such that they are disjoint. Consider the huge commutative diagram:
+ \[
+ \begin{tikzcd}[row sep=large]
+ H_d(S^d) \ar[r, "f_*"] \ar[d] & H_d(S^d) \ar[d, "\sim"]\\
+ H_d(S^d, S^d \setminus \{x_1, \cdots, x_k\}) \ar[r, "f_*"] & H_d(S^d, S^d \setminus \{y\})\\
+ H_d\left(\coprod U_i, \coprod (U_i \setminus x_i)\right) \ar[u, "\text{excision}"]\\
+ \bigoplus_{i = 1}^ k H_d(U_i, U_i \setminus x_i) \ar[r, "\bigoplus f_*"] \ar[u, "\sim"] & H_d(V, V\setminus \{y\}) \ar[uu, "\sim"]
+ \end{tikzcd}
+ \]
 Consider the generator $1$ of the top-left $H_d(S^d)$. By definition, its image under the top map $f_*$ is $\deg(f)$. Also, its image in $\bigoplus H_d(U_i, U_i \setminus \{x_i\})$ is $(1, \cdots, 1)$. The bottom horizontal map sends this to $\sum \deg(f)_{x_i}$. So by the isomorphisms, it follows that
+ \[
 \deg(f) = \sum_{i = 1}^k \deg(f)_{x_i}.\qedhere
+ \]
+\end{proof}
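\begin{eg}
 For instance, view $S^1 \subseteq \C$ and let $f(z) = z^k$ for some $k \geq 1$. Every $y \in S^1$ has exactly $k$ preimages, namely the $k$th roots of $y$. Near each preimage, $f$ restricts to an orientation-preserving homeomorphism onto a neighbourhood of $y$, so each local degree is $+1$. The theorem then gives
 \[
 \deg(f) = \sum_{i = 1}^k 1 = k,
 \]
 as we would expect for $z \mapsto z^k$.
\end{eg}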
+\subsection{Repaying the technical debt}
+Finally, we prove all those theorems we stated without proof.
+
+\subsubsection*{Long exact sequence of relative homology}
+We start with the least bad one. In relative homology, we had a short exact sequence of chain complexes
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & C_\Cdot(A) \ar[r] & C_\Cdot(X) \ar[r] & C_\Cdot(X, A) \ar[r] & 0.
+ \end{tikzcd}
+\]
+The claim is that when we take the homology groups of this, we get a long exact sequence of homology groups. This is in fact a completely general theorem.
+\begin{thm}[Snake lemma]\index{snake lemma}
+ Suppose we have a short exact sequence of complexes
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_{\Cdot} \ar [r, "i_{\Cdot}"] & B_{\Cdot} \ar [r, "q_{\Cdot}"] & C_{\Cdot} \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ Then there are maps
+ \[
+ \partial: H_n(C_{\Cdot}) \to H_{n - 1}(A_{\Cdot})
+ \]
+ such that there is a long exact sequence
+ \[
+ \begin{tikzcd}
+ \cdots \ar [r] & H_n(A) \ar[r, "i_*"] & H_n(B) \ar[r, "q_*"] & H_n(C) \ar[out=0, in=180, looseness=2, overlay, dll, "\partial_*"']\\
+ & H_{n - 1}(A) \ar[r, "i_*"] & H_{n - 1}(B) \ar[r, "q_*"] & H_{n - 1}(C) \ar [r] & \cdots
+ \end{tikzcd}.
+ \]
+\end{thm}
The method of proving this is sometimes known as ``diagram chasing'', where we just ``chase'' around commutative diagrams to find the elements we need. The idea of the proof is as follows --- in the short exact sequence, we can think of $A$ as a subgroup of $B$, and $C$ as the quotient $B/A$, by the first isomorphism theorem. So any element of $C$ can be represented by an element of $B$. We apply the boundary map to this representative, and then exactness shows that this must come from some element of $A$. We then check carefully that this is well-defined, i.e.\ does not depend on the representatives chosen.
+
+\begin{proof}
+ The proof of this is in general not hard. It just involves a lot of checking of the details, such as making sure the homomorphisms are well-defined, are actually homomorphisms, are exact at all the places etc. The only important and non-trivial part is just the construction of the map $\partial_*$.
+
+ First we look at the following commutative diagram:
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_n \ar[r, "i_n"] \ar[d, "d_n"] & B_n \ar[r, "q_n"] \ar[d, "d_n"] & C_n \ar[r] \ar[d, "d_n"] & 0\\
+ 0 \ar[r] & A_{n - 1} \ar[r, "i_{n - 1}"] & B_{n - 1} \ar[r, "q_{n - 1}"] & C_{n - 1} \ar[r] & 0
+ \end{tikzcd}
+ \]
 To construct $\partial_*: H_n(C) \to H_{n - 1}(A)$, let $[x] \in H_n(C)$ be a class represented by $x \in Z_n(C)$. We need to find a cycle $z \in A_{n - 1}$. By exactness, we know the map $q_n: B_n \to C_n$ is surjective. So there is a $y \in B_n$ such that $q_n(y) = x$. Since our target is $A_{n - 1}$, we want to move down to the next level. So consider $d_n(y) \in B_{n - 1}$. We would be done if $d_n(y)$ is in the image of $i_{n - 1}$. By exactness, this is equivalent to saying $d_n(y)$ is in the kernel of $q_{n - 1}$. Since the diagram is commutative, we know
+ \[
+ q_{n - 1}\circ d_n(y) = d_n \circ q_n (y) = d_n(x) = 0,
+ \]
+ using the fact that $x$ is a cycle. So $d_n (y) \in \ker q_{n - 1} = \im i_{n - 1}$. Moreover, by exactness again, $i_{n - 1}$ is injective. So there is a unique $z \in A_{n - 1}$ such that $i_{n - 1}(z) = d_n(y)$. We have now produced our $z$.
+
+ We are not done. We have $\partial_* [x] = [z]$ as our candidate definition, but we need to check many things:
+ \begin{enumerate}
+ \item We need to make sure $\partial_*$ is indeed a homomorphism.
+ \item We need $d_{n - 1}(z) = 0$ so that $[z] \in H_{n - 1}(A)$;
+ \item We need to check $[z]$ is well-defined, i.e.\ it does not depend on our choice of $y$ and $x$ for the homology class $[x]$.
+ \item We need to check the exactness of the resulting sequence.
+ \end{enumerate}
+ We now check them one by one:
+ \begin{enumerate}
 \item Since all the maps involved in defining $\partial_*$ are homomorphisms, it follows that $\partial_*$ is also a homomorphism.
+ \item We check $d_{n - 1}(z) = 0$. To do so, we need to add an additional layer.
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_n \ar[r, "i_n"] \ar[d, "d_n"] & B_n \ar[r, "q_n"] \ar[d, "d_n"] & C_n \ar[r] \ar[d, "d_n"] & 0\\
+ 0 \ar[r] & A_{n - 1} \ar[r, "i_{n - 1}"] \ar[d, "d_{n - 1}"] & B_{n - 1} \ar[r, "q_{n - 1}"] \ar[d, "d_{n - 1}"] & C_{n - 1} \ar[r] \ar[d, "d_{n - 1}"] & 0\\
+ 0 \ar[r] & A_{n - 2} \ar[r, "i_{n - 2}"] & B_{n - 2} \ar[r, "q_{n - 2}"] & C_{n - 2} \ar[r] & 0
+ \end{tikzcd}
+ \]
+ We want to check that $d_{n - 1}(z) = 0$. We will use the commutativity of the diagram. In particular, we know
+ \[
+ i_{n - 2} \circ d_{n - 1}(z) = d_{n - 1} \circ i_{n - 1} (z) = d_{n - 1} \circ d_n(y) = 0.
+ \]
+ By exactness at $A_{n - 2}$, we know $i_{n - 2}$ is injective. So we must have $d_{n - 1}(z) = 0$.
+ \item
+ \begin{enumerate}
+ \item First, in the proof, suppose we picked a different $y'$ such that $q_n(y') = q_n(y) = x$. Then $q_n(y' - y) = 0$. So $y' - y \in \ker q_n = \im i_n$. Let $a \in A_n$ be such that $i_n(a) = y' - y$. Then
+ \begin{align*}
+ d_n(y') &= d_n(y' - y) + d_n(y) \\
+ &= d_n \circ i_n (a) + d_n(y) \\
+ &= i_{n - 1} \circ d_n (a) + d_n(y).
+ \end{align*}
+ Hence when we pull back $d_n(y')$ and $d_n(y)$ to $A_{n - 1}$, the results differ by the boundary $d_n(a)$, and hence produce the same homology class.
+ \item Suppose $[x'] = [x]$. We want to show that $\partial_* [x] = \partial_*[x']$. This time, we add a layer above.
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_{n + 1} \ar[r, "i_{n + 1}"] \ar[d, "d_{n + 1}"] & B_{n + 1} \ar[r, "q_{n + 1}"] \ar[d, "d_{n + 1}"] & C_{n + 1} \ar[r] \ar[d, "d_{n + 1}"] & 0\\
+ 0 \ar[r] & A_n \ar[r, "i_n"] \ar[d, "d_n"] & B_n \ar[r, "q_n"] \ar[d, "d_n"] & C_n \ar[r] \ar[d, "d_n"] & 0\\
+ 0 \ar[r] & A_{n - 1} \ar[r, "i_{n - 1}"] & B_{n - 1} \ar[r, "q_{n - 1}"] & C_{n - 1} \ar[r] & 0
+ \end{tikzcd}
+ \]
+ By definition, since $[x'] = [x]$, there is some $c \in C_{n + 1}$ such that
+ \[
+ x' = x + d_{n + 1} (c).
+ \]
+ By surjectivity of $q_{n + 1}$, we can write $c = q_{n + 1}(b)$ for some $b \in B_{n + 1}$. By commutativity of the squares, we know
+ \[
+ x' = x + q_n \circ d_{n + 1} (b).
+ \]
+ The next step of the proof is to find some $y$ such that $q_n (y) = x$. Then
+ \[
+ q_n(y + d_{n + 1} (b)) = x'.
+ \]
+ So the corresponding $y'$ is $y' = y + d_{n + 1}(b)$. So $d_n (y) = d_n(y')$, and hence $\partial_*[x] = \partial_* [x']$.
+ \end{enumerate}
+ \item This is yet another standard diagram chasing argument. When reading this, it is helpful to look at a diagram and see how the elements are chased along. It is even more beneficial to attempt to prove this yourself.
+ \begin{enumerate}
 \item $\im i_* \subseteq \ker q_*$: This follows from the assumption that $q_n \circ i_n = 0$.
+ \item $\ker q_* \subseteq \im i_*$: Let $[b] \in H_n(B)$. Suppose $q_*([b]) = 0$. Then there is some $c \in C_{n + 1}$ such that $q_n(b) = d_{n + 1}(c)$. By surjectivity of $q_{n + 1}$, there is some $b' \in B_{n + 1}$ such that $q_{n + 1}(b') = c$. By commutativity, we know $q_n(b) = q_n \circ d_{n + 1}(b')$, i.e.
+ \[
+ q_n (b - d_{n + 1}(b')) = 0.
+ \]
+ By exactness of the sequence, we know there is some $a \in A_n$ such that
+ \[
+ i_n(a) = b - d_{n + 1}(b').
+ \]
+ Moreover,
+ \[
+ i_{n - 1} \circ d_n(a) = d_n \circ i_n (a) = d_n(b - d_{n + 1}(b')) = 0,
+ \]
+ using the fact that $b$ is a cycle. Since $i_{n - 1}$ is injective, it follows that $d_n(a) = 0$. So $[a] \in H_n(A)$. Then
+ \[
+ i_*([a]) = [b] - [d_{n + 1}(b')] = [b].
+ \]
+ So $[b] \in \im i_*$.
 \item $\im q_* \subseteq \ker \partial_*$: Let $[b] \in H_n(B)$. To compute $\partial_*(q_*([b]))$, we first lift $q_n(b)$ to $b \in B_n$. Then we compute $d_n(b)$ and pull it back to $A_{n - 1}$. However, we know $d_n(b) = 0$ since $b$ is a cycle. So $\partial_*(q_*([b])) = 0$, i.e.\ $\partial_* \circ q_* = 0$.
+ \item $\ker \partial_* \subseteq \im q_*$: Let $[c] \in H_n(C)$ and suppose $\partial_*([c]) = 0$. Let $b \in B_n$ be such that $q_n(b) = c$, and $a \in A_{n - 1}$ such that $i_{n - 1}(a) = d_n(b)$. By assumption, $\partial_*([c]) = [a] = 0$. So we know $a$ is a boundary, say $a = d_n (a')$ for some $a' \in A_n$. Then by commutativity we know $d_n(b) = d_n \circ i_n (a')$. In other words,
+ \[
+ d_n(b - i_n(a')) = 0.
+ \]
+ So $[b - i_n(a')] \in H_n(B)$. Moreover,
+ \[
+ q_*([b - i_n(a')]) = [q_n(b) - q_n \circ i_n(a')] = [c].
+ \]
+ So $[c] \in \im q_*$.
 \item $\im \partial_* \subseteq \ker i_*$: Let $[c] \in H_n(C)$. Let $b \in B_n$ be such that $q_n(b) = c$, and $a \in A_{n - 1}$ be such that $i_{n - 1}(a) = d_n(b)$. Then $\partial_*([c]) = [a]$. Then
+ \[
+ i_*([a]) = [i_n(a)] = [d_n(b)] = 0.
+ \]
+ So $i_* \circ \partial_* = 0$.
+ \item $\ker i_* \subseteq \im \partial_*$: Let $[a] \in H_n(A)$ and suppose $i_*([a]) = 0$. So we can find some $b \in B_{n + 1}$ such that $i_n(a) = d_{n + 1}(b)$. Let $c = q_{n + 1}(b)$. Then
+ \[
+ d_{n + 1}(c) = d_{n + 1}\circ q_{n + 1} (b) = q_n \circ d_{n + 1}(b) = q_n \circ i_n (a) = 0.
+ \]
+ So $[c] \in H_n(C)$. Then $[a] = \partial_*([c])$ by definition of $\partial_*$. So $[a] \in \im \partial_*$.\qedhere
+ \end{enumerate}%\qedhere
+ \end{enumerate}
+\end{proof}
+
+Another piece of useful algebra is known as the $5$-lemma:
+\begin{lemma}[Five lemma]
+ Consider the following commutative diagram:
+ \[
+ \begin{tikzcd}
+ A \ar[r, "f"] \ar[d, "\ell"] & B \ar[r, "g"] \ar[d, "m"] & C \ar[r, "h"] \ar[d, "n"] & D \ar[r, "j"] \ar[d, "p"] & E \ar[d, "q"]\\
+ A' \ar[r, "r"] & B' \ar[r, "s"] & C' \ar[r, "t"] & D' \ar[r, "u"] & E'
+ \end{tikzcd}
+ \]
+ If the two rows are exact, $m$ and $p$ are isomorphisms, $q$ is injective and $\ell$ is surjective, then $n$ is also an isomorphism.
+\end{lemma}
+
+\begin{proof}
+ The philosophy is exactly the same as last time.
+
+ We first show that $n$ is surjective. Let $c' \in C'$. Then we obtain $d' = t(c') \in D'$. Since $p$ is an isomorphism, we can find $d \in D$ such that $p(d) = d'$. Then we have
+ \[
 q(j(d)) = u(p(d)) = u(t(c')) = 0.
+ \]
+ Since $q$ is injective, we know $j(d) = 0$. Since the sequence is exact, there is some $c \in C$ such that $h(c) = d$.
+
 We are not yet done. We do not know that $n(c) = c'$. All we know is that $t(n(c)) = t(c')$. So $t(c' - n(c)) = 0$. By exactness at $C'$, we can find some $b'$ such that $s(b') = n(c) - c'$. Since $m$ is surjective, we can find $b \in B$ such that $m(b) = b'$. Then we have
+ \[
+ n(g(b)) = n(c) - c'.
+ \]
+ So we have
+ \[
+ n(c - g(b)) = c'.
+ \]
+ So $n$ is surjective.
+
+ Showing that $n$ is injective is similar.
+ % complete
+\end{proof}
+
+\begin{cor}
 Let $f: (X, A) \to (Y, B)$ be a map of pairs, and suppose that any two of $f_*: H_*(X, A) \to H_*(Y, B)$, $H_*(X) \to H_*(Y)$ and $H_*(A) \to H_*(B)$ are isomorphisms. Then the third is also an isomorphism.
+\end{cor}
+
+\begin{proof}
+ Follows from the long exact sequence and the five lemma.
+\end{proof}
+
+That wasn't too bad, as it is just pure algebra.
+
+\subsubsection*{Proof of homotopy invariance}
The next goal is to show that homotopy of continuous maps does not affect the induced map on the homology groups. We will do this by showing that homotopies of maps induce homotopies of chain complexes, and chain homotopic maps induce the same map on homology groups. To make sense of this, we need to know what it means to be a homotopy of chain complexes.
+
+\begin{defi}[Chain homotopy]\index{chain homotopy}
+ A \emph{chain homotopy} between chain maps $f_{\Cdot}, g_{\Cdot}: C_{\Cdot} \to D_{\Cdot}$ is a collection of homomorphisms $F_n: C_n \to D_{n + 1}$ such that
+ \[
+ g_n - f_n = d_{n + 1}^D \circ F_n + F_{n - 1} \circ d_n^C: C_n \to D_n
+ \]
+ for all $n$.
+\end{defi}
The idea of the chain homotopy is that $F_n(\sigma)$ gives us an $(n + 1)$-chain whose boundary is $g_n(\sigma) - f_n(\sigma)$, plus some terms arising from the boundary of $\sigma$ itself:
+\begin{center}
+ \begin{tikzpicture}
+ \fill [morange, opacity=0.3] (0, 0) rectangle (1.5, 2);
+ \node at (0.75, 1) {$F_n(\sigma)$};
+
+ \draw (0, 0) node [circ] {} -- (1.5, 0) node [circ] {} node [pos=0.5, below] {$f_n(\sigma)$};
+ \draw (0, 2) node [circ] {} -- (1.5, 2) node [circ] {} node [pos=0.5, above] {$g_n(\sigma)$};
+
+ \node at (2,1) {$:$};
+
+ \draw (2.5, 0) node [circ] {} -- (4, 0) node [circ] {};
+ \draw (2.5, 2) node [circ] {} -- (4, 2) node [circ] {};
+ \draw (2.5, 0) -- (2.5, 2);
+ \draw (4, 0) -- (4, 2);
+ \node at (3.25, 1) {$\d F_n(\sigma)$};
+
+ \node at (4.5, 1) {$=$};
+
+ \draw (5, 0) node [circ] {} -- (6.5, 0) node [circ] {} node [pos=0.5, below] {$f_n(\sigma)$};
+ \draw (5, 2) node [circ] {} -- (6.5, 2) node [circ] {} node [pos=0.5, above] {$g_n(\sigma)$};
+
+ \node at (7, 1) {$+$};
+
+ \draw (8, 0) node [circ] {} -- (8, 2) node [circ] {} ;
+ \draw (9.5, 0) node [circ] {} -- (9.5, 2) node [circ] {} ;
+
+ \node at (8.75, 1) {$F_{n - 1}(\d \sigma)$};
+ \end{tikzpicture}
+\end{center}
We will not attempt to justify the signs appearing in the definition; they are exactly what is needed for the computation to work.
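For instance, in degree $0$, we have $F_{-1} = 0$, so the condition reads $g_0 - f_0 = d_1 \circ F_0$. If $\sigma$ is a point of $X$, we can take $F_0(\sigma)$ to be a path from $f(\sigma)$ to $g(\sigma)$, and then indeed
\[
 d_1 F_0(\sigma) = g_0(\sigma) - f_0(\sigma).
\]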
+
+The relevance of this definition is the following result:
+\begin{lemma}
+ If $f_{\Cdot}$ and $g_{\Cdot}$ are chain homotopic, then $f_* = g_*: H_*(C_{\Cdot}) \to H_*(D_{\Cdot})$.
+\end{lemma}
+
+\begin{proof}
+ Let $[c] \in H_n(C_{\Cdot})$. Then we have
+ \[
+ g_n(c) - f_n(c) = d_{n + 1}^DF_n(c) + F_{n - 1}(d_n^C(c)) = d_{n + 1}^DF_n(c),
+ \]
+ where the second term dies because $c$ is a cycle. So we have $[g_n(c)] = [f_n(c)]$.
+\end{proof}
+
+That was the easy part. What we need to do now is to show that homotopy of maps between spaces gives a chain homotopy between the corresponding chain maps.
+
+We will change notation a bit.
+\begin{notation}
+ From now on, we will just write $d$ for $d_n^C$.
+
+ For $f: X \to Y$, we will write $f_\#: C_n(X) \to C_n(Y)$ for the map $\sigma \mapsto f \circ \sigma$, i.e.\ what we used to call $f_n$.
+\end{notation}
+
+Now if $H: [0, 1] \times X \to Y$ is a homotopy from $f$ to $g$, and $\sigma: \Delta^n \to X$ is an $n$-chain, then we get a homotopy
+\[
+ \begin{tikzcd}
+ \lbrack0, 1\rbrack \times \Delta^n \ar[r, "{[0, 1] \times \sigma}"] & \lbrack0, 1\rbrack \times X \ar[r, "H"] & Y
+ \end{tikzcd}
+\]
+from $f_\#(\sigma)$ to $g_\#(\sigma)$. Note that we write $[0, 1]$ for the identity map $[0, 1] \to [0, 1]$.
+
The idea is that we are going to cut up $[0, 1] \times \Delta^n$ into $(n + 1)$-simplices. Suppose we can find a collection of chains $P_n \in C_{n + 1}([0, 1] \times \Delta^n)$ for $n \geq 0$ such that
+\[
+ d(P_n) = i_1 - i_0 - \sum_{j = 0}^n (-1)^j ([0, 1] \times \delta_j)_\#(P_{n - 1}),
+\]
+where
+\begin{align*}
 i_0: \Delta^n &\cong \{0\} \times \Delta^n \hookrightarrow [0, 1] \times \Delta^n\\
 i_1: \Delta^n &\cong \{1\} \times \Delta^n \hookrightarrow [0, 1] \times \Delta^n
+\end{align*}
+and $\delta_j: \Delta^{n - 1} \to \Delta^n$ is the inclusion of the $j$th face. These are ``prisms'' connecting the top and bottom face. Intuitively, the prism $P_2$ looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-1, 0) -- (1, 0) -- (0, 0.5) -- cycle;
+
+ \draw (-1, -2) -- (1, -2);
+ \draw [dashed] (-1, -2) -- (0, -1.5) -- (1, -2);
+ \draw [dashed] (0, -1.5) -- (0, 0.5);
+ \draw (-1, -2) -- (-1, 0);
+ \draw (1, -2) -- (1, 0);
+ \end{tikzpicture}
+\end{center}
+and the formula tells us its boundary is the top and bottom triangles, plus the side faces given by the prisms of the edges.
+
+Suppose we managed to find such prisms. We can then define
+\[
+ F_n: C_n(X) \to C_{n + 1}(Y)
+\]
+by sending
+\[
+ (\sigma: \Delta^n \to X) \mapsto (H \circ ([0, 1] \times \sigma))_\#(P_n).
+\]
+We now calculate.
+\begin{align*}
 \d F_n(\sigma) ={}& d((H \circ ([0, 1] \times \sigma))_\#(P_n))\\
+ ={}& (H \circ ([0, 1] \times \sigma))_\#(d(P_n))\\
+ ={}& (H \circ ([0, 1] \times \sigma))_\# \left(i_1 - i_0 - \sum_{j = 0}^n (-1)^j ([0, 1] \times \delta_j)_\#(P_{n - 1})\right)\\
+ ={}& H \circ ([0, 1] \times \sigma) \circ i_1 - H \circ ([0, 1] \times \sigma) \circ i_0 \\
 &\quad- \sum_{j = 0}^n (-1)^j (H \circ ([0, 1] \times (\sigma \circ \delta_j)))_\# (P_{n - 1})\\
 ={}& g \circ \sigma - f \circ \sigma - \sum_{j = 0}^n (-1)^j (H \circ ([0, 1] \times (\sigma \circ \delta_j)))_\# (P_{n - 1})\\
+ ={}& g \circ \sigma - f \circ \sigma - F_{n - 1}(\d \sigma)\\
+ ={}& g_\#(\sigma) - f_\#(\sigma) - F_{n - 1}(d\sigma).
+\end{align*}
So we just have to show that $P_n$ exists. We already had a picture of what it looks like, so we just need to find a formula that represents it. We view $[0, 1] \times \Delta^n \subseteq \R \times \R^{n + 1}$. Write $\{v_0, v_1, \cdots, v_n\}$ for the vertices of $\{0\} \times \Delta^n \subseteq [0, 1] \times \Delta^n$, and $\{w_0, \cdots, w_n\}$ for the corresponding vertices of $\{1\} \times \Delta^n$.
+
+Now if $\{x_0, x_1, \cdots, x_{n + 1}\} \subseteq \{v_0, \cdots, v_n\} \cup \{w_0, \cdots, w_n\}$, we let
+\[
 [x_0, \cdots, x_{n + 1}]: \Delta^{n + 1} \to [0, 1] \times \Delta^n
+\]
+by
+\[
 (t_0, \cdots, t_{n + 1}) \mapsto \sum t_i x_i.
+\]
+This is still in the space by convexity. We let
+\[
+ P_n = \sum_{i = 0}^n (-1)^i [v_0, v_1 , \cdots, v_i, w_i, w_{i + 1}, \cdots, w_n] \in C_{n + 1}([0, 1] \times \Delta^n).
+\]
+It is a boring check that this actually works, and we shall not bore the reader with the details.
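To get a feel for the formula, take $n = 1$, so that $[0, 1] \times \Delta^1$ is a square with bottom vertices $v_0, v_1$ and top vertices $w_0, w_1$. Then
\[
 P_1 = [v_0, w_0, w_1] - [v_0, v_1, w_1],
\]
the two triangles cut out by the diagonal from $v_0$ to $w_1$. Taking boundaries, the two copies of the diagonal cancel, and we are left with
\[
 d(P_1) = [w_0, w_1] - [v_0, v_1] + [v_0, w_0] - [v_1, w_1],
\]
i.e.\ the top edge minus the bottom edge, together with the side prisms over the boundary of $\Delta^1$.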
+
+\subsubsection*{Proof of excision and Mayer-Vietoris}
+Finally, we prove excision and Mayer-Vietoris together. It turns out both follow easily from what we call the ``small simplices theorem''.
+
+\begin{defi}[$C_n^\mathcal{U}(X)$ and $H_n^\mathcal{U}(X)$]\index{$C_n^\mathcal{U} (X)$}\index{$H_n^\mathcal{U}(X)$}
+ We let $\mathcal{U} = \{U_\alpha\}_{\alpha \in I}$ be a collection of subspaces of $X$ such that their interiors cover $X$, i.e.
+ \[
+ \bigcup_{\alpha \in I} \mathring{U}_\alpha = X.
+ \]
+ Let $C_n^\mathcal{U}(X) \subseteq C_n(X)$ be the subgroup generated by those singular $n$-simplices $\sigma: \Delta^n \to X$ such that $\sigma(\Delta^n) \subseteq U_\alpha$ for some $\alpha$. It is clear that if $\sigma$ lies in $U_\alpha$, then so do its faces. So $C_n^{\mathcal{U}}(X)$ is a sub-chain complex of $C_\Cdot(X)$.
+
+ We write $H_n^\mathcal{U}(X) = H_n(C_\Cdot^\mathcal{U}(X))$.
+\end{defi}
+It would be annoying if each choice of open cover gives a different homology theory, because this would be too many homology theories to think about. The \emph{small simplices theorem} says that the natural map $H_*^\mathcal{U}(X) \to H_*(X)$ is an isomorphism.
+
+\begin{thm}[Small simplices theorem]\index{small simplices theorem}
+ The natural map $H_*^\mathcal{U}(X) \to H_*(X)$ is an isomorphism.
+\end{thm}
+
The idea is that we can cut up each simplex into smaller parts by barycentric subdivision, and if we do this enough times, each piece will eventually lie in one of the open sets of the cover. We then go on to prove that cutting things up does not change homology.
+
+Proving it is not hard, but technically annoying. So we first use this to deduce our theorems.
+
+\begin{proof}[Proof of Mayer-Vietoris]
+ Let $X = A \cup B$, with $A, B$ open in $X$. We let $\mathcal{U} = \{A, B\}$, and write $C_\Cdot(A + B) = C_\Cdot^\mathcal{U} (X)$. Then we have a natural chain map
+ \[
+ \begin{tikzcd}
+ C_\Cdot(A) \oplus C_{\Cdot}(B) \ar[r, "j_A - j_B"] & C_\Cdot(A + B)
+ \end{tikzcd}
+ \]
+ that is surjective. The kernel consists of $(x, y)$ such that $j_A(x) - j_B(y) = 0$, i.e.\ $j_A(x) = j_B(y)$. But $j$ doesn't really do anything. It just forgets that the simplices lie in $A$ or $B$. So this means $y = x$ is a chain in $A \cap B$. We thus deduce that we have a short exact sequence of chain complexes
+ \[
+ \begin{tikzcd}
 0 \ar[r] & C_\Cdot(A \cap B) \ar[r, "{(i_A, i_B)}"] & C_\Cdot(A) \oplus C_{\Cdot}(B) \ar[r, "j_A - j_B"] & C_\Cdot(A + B) \ar[r] & 0.
+ \end{tikzcd}
+ \]
 Then the snake lemma gives us a long exact sequence of homology groups:
+ \[
+ \begin{tikzcd}
+ \cdots\ar[r] & H_n(A \cap B) \ar[r, "{(i_A, i_B)}"] & H_n(A) \oplus H_n(B) \ar[r, "j_A - j_B"] & H_n^\mathcal{U}(X) \ar[r] & \cdots
+ \end{tikzcd}.
+ \]
+ By the small simplices theorem, we can replace $H_n^\mathcal{U}(X)$ with $H_n(X)$. So we obtain Mayer-Vietoris.
+\end{proof}
+
+Now what does the boundary map $\partial: H_n(X) \to H_{n - 1}(A \cap B)$ do? Suppose we have $c \in H_n(X)$ represented by a cycle $a + b \in C_n^\mathcal{U}(X)$, with $a$ supported in $A$ and $b$ supported in $B$. By the small simplices theorem, such a representative always exists. Then the proof of the snake lemma says that $\partial([a + b])$ is given by tracing through
+\[
+ \begin{tikzcd}
+ & C_n(A) \oplus C_n(B) \ar[r, "j_A - j_B"] \ar[d, "d"] & C_n(A + B)\\
+ C_{n - 1}(A \cap B) \ar[r, "{(i_A, i_B)}"] & C_{n - 1}(A) \oplus C_{n - 1}(B)
+ \end{tikzcd}
+\]
+We now pull back $a + b$ along $j_A - j_B$ to obtain $(a, -b)$, then apply $d$ to obtain $(da, -db)$. Then the required object is $[da] = [-db]$.
+
+We now move on to prove excision.
+
+\begin{proof}[Proof of excision]
 Let $X \supseteq A \supseteq Z$ be such that $\overline{Z} \subseteq \mathring{A}$. Let $B = X \setminus Z$. Then again take
+ \[
+ \mathcal{U} = \{A, B\}.
+ \]
+ By assumption, their interiors cover $X$. We consider the short exact sequences
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & C_\Cdot(A) \ar[d, equals] \ar[r] & C_\Cdot(A + B) \ar[r] \ar[d] & C_{\Cdot}(A + B)/C_\Cdot(A) \ar[r] \ar[d] & 0\\
+ 0 \ar[r] & C_\Cdot(A) \ar[r] & C_\Cdot(X) \ar[r] & C_{\Cdot}(X, A) \ar[r] & 0
+ \end{tikzcd}
+ \]
 Looking at the induced map between the long exact sequences on homology, the left vertical map is the identity and the middle one induces isomorphisms by the small simplices theorem. So by the $5$-lemma, the right vertical map induces isomorphisms on homology as well.
+
+ On the other hand, the map
+ \[
+ \begin{tikzcd}
+ C_{\Cdot}(B)/C_\Cdot(A \cap B) \ar[r] & C_{\Cdot}(A + B)/C_\Cdot(A)
+ \end{tikzcd}
+ \]
 is an isomorphism of chain complexes. Since their homologies are $H_\Cdot(B, A \cap B)$ and $H_\Cdot (X, A)$, we infer that the two are isomorphic. Recalling that $B = X \setminus Z$, and hence $A \cap B = A \setminus Z$, we have shown that
+ \[
+ H_*(X \setminus Z, A \setminus Z) \cong H_*(X, A).\qedhere
+ \]
+\end{proof}
+
+We now provide a sketch proof of the small simplices theorem. As mentioned, the idea is to cut our simplices up, and one method to do so is barycentric subdivision.
+
+Given a $0$-simplex $\{v_0\}$, its \term{barycentric subdivision} is itself.
+
+If $x = \{x_0, \cdots, x_n\} \subseteq \R^n$ spans an $n$-simplex $\sigma$, we let
+\[
+ b_x = \frac{1}{n + 1} \sum_{i = 0}^n x_i
+\]
+be its \term{barycenter}.
+
+If we have a $1$-simplex
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [circ] at (2, 0) {};
+ \draw (0, 0) -- (2, 0);
+ \end{tikzpicture}
+\end{center}
then its barycentric subdivision is obtained as
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 0) {};
+ \draw (0, 0) -- (2, 0);
+ \end{tikzpicture}
+\end{center}
We can describe this, somewhat degenerately, as first barycentrically subdividing the boundary (which does nothing in this case), and then adding the barycenter.
+
+In the case of a $2$-simplex:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{tikzpicture}
+\end{center}
+we first barycentrically subdivide the boundary:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0.5, 0.866) {};
+ \node [circ] at (1.5, 0.866) {};
+ \node [circ] at (1, 0.5773) {};
+ \end{tikzpicture}
+\end{center}
+Then add the barycenter $b_x$, and for each standard simplex in the boundary, we ``cone it off'' towards $b_x$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0.5, 0.866) {};
+ \node [circ] at (1.5, 0.866) {};
+ \node [circ] at (1, 0.5773) {};
+ \draw (0, 0) -- (1, 0.5773) -- (2, 0);
+ \draw (1, 0) -- (1, 1.732);
+ \draw (0.5, 0.866) -- (1, 0.5773) -- (1.5, 0.866);
+ \end{tikzpicture}
+\end{center}
More formally, in the standard $n$-simplex $\Delta^n \subseteq \R^{n + 1}$, we let $b_n$ be its barycenter. For each singular $i$-simplex $\sigma: \Delta^i \to \Delta^n$, we define
+\[
+ \mathrm{Cone}^{\Delta^n}_i (\sigma): \Delta^{i + 1} \to \Delta^n
+\]
+by
+\[
+ (t_0, t_1, \cdots, t_{i + 1}) \mapsto t_0 b_n + (1 - t_0) \cdot \sigma\left(\frac{(t_1, \cdots, t_{i + 1})}{1 - t_0}\right).
+\]
+We can then extend linearly to get a map $\mathrm{Cone}_i^{\Delta^n}: C_i(\Delta^n) \to C_{i + 1}(\Delta^n)$.
+\begin{eg}
+ In the $2$-simplex
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{tikzpicture}
+ \end{center}
+ the cone of the bottom edge is the simplex in orange:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+
+ \draw [fill=morange] (0, 0) -- (1, 0.5773) -- (2, 0);
+ \end{tikzpicture}
+ \end{center}
+
+\end{eg}
Since this increases the dimension by one, it cannot be a chain map; instead, we might hope it behaves like a chain homotopy. Indeed, for $i > 0$, we have
+\begin{align*}
 d \mathrm{Cone}_i^{\Delta^n}(\sigma) &= \sum_{j = 0}^{i + 1} (-1)^j \mathrm{Cone}_i^{\Delta^n}(\sigma) \circ \delta_j \\
+ &= \sigma + \sum_{j = 1}^{i + 1} (-1)^j \mathrm{Cone}_{i - 1}^{\Delta^n} (\sigma \circ \delta_{j - 1})\\
+ &= \sigma - \mathrm{Cone}_{i - 1}^{\Delta^n} (d \sigma).
+\end{align*}
+For $i = 0$, we get
+\[
 d \mathrm{Cone}_0^{\Delta^n}(\sigma) = \sigma - \varepsilon(\sigma) \cdot b_n.
+\]
+In total, we have
+\[
+ d \mathrm{Cone}_i^{\Delta^n} + \mathrm{Cone}_{i - 1}^{\Delta^n} d = \mathrm{id} - c_{\Cdot},
+\]
+where $c_i = 0$ for $i > 0$, and $c_0(\sigma) = \varepsilon(\sigma) b_n$ is a map $C_\Cdot(\Delta^n) \to C_\Cdot(\Delta^n)$.
+
+We now use this cone map to construct a barycentric subdivision map $\rho_n^X: C_n(X) \to C_n(X)$, and insist that it is natural, i.e.\ if $f: X \to Y$ is a map, then $f_\# \circ \rho_n^X = \rho_n^Y \circ f_\#$, so that the diagram
+\[
+ \begin{tikzcd}
+ C_n(X) \ar[r, "\rho_n^X"] \ar[d, "f_\#"] & C_n(X) \ar[d, "f_\#"]\\
+ C_n(Y) \ar[r, "\rho_n^Y"] & C_n(Y)
+ \end{tikzcd}
+\]
+commutes.
+So if $\sigma: \Delta^n \to X$, we let $\iota_n: \Delta^n \to \Delta^n \in C_n(\Delta^n)$ be the identity map. Then we must have
+\[
+ \rho_n^X (\sigma) = \rho_n^X (\sigma_\# \iota_n) = \sigma_\# \rho_n^{\Delta^n}(\iota_n).
+\]
+So if we know how to do barycentric subdivision for $\iota_n$ itself, then by naturality, we have defined it for all spaces! Naturality makes life easier for us, not harder!
+
+So we define $\rho_n^X$ recursively on $n$, for all spaces $X$ at once, by
+\begin{enumerate}
+ \item $\rho_0^X = \id_{C_0(X)}$
+ \item For $n > 0$, we define the barycentric subdivision of $\iota_n$ by
+ \[
+ \rho_n^{\Delta^n}(\iota_n) = \mathrm{Cone}_{n - 1}^{\Delta^n} (\rho_{n - 1}^{\Delta^n} (\d \iota_n)),
+ \]
+ and then extend by naturality.
+\end{enumerate}
+
+This has all the expected properties:
+\begin{lemma}
+ $\rho_\Cdot^X$ is a natural chain map.
+\end{lemma}
+
+\begin{lemma}
+ $\rho_\Cdot^X$ is chain homotopic to the identity.
+\end{lemma}
+
+\begin{proof}
+ No one cares.
+\end{proof}
+
+\begin{lemma}
+ The diameter of each subdivided simplex in $(\rho_n^{\Delta^n})^k(\iota_n)$ is bounded by $\left(\frac{n}{n + 1}\right)^k \diam(\Delta^n)$.
+\end{lemma}
+
+\begin{proof}
+ Basic geometry.
+\end{proof}
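The lemma's bound is also easy to test numerically: the $n$-simplices of one barycentric subdivision correspond to orderings of the vertices, with vertices the barycenters of the initial segments of the ordering. A quick sketch under that description (names ours), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations, permutations

def barycenter(pts):
    """Barycenter of a list of points with rational coordinates."""
    return tuple(sum(c, Fraction(0)) / len(pts) for c in zip(*pts))

def sq_diam(pts):
    """Squared diameter: max squared distance between two vertices."""
    return max(sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in combinations(pts, 2))

def subdivide(simplex):
    """One barycentric subdivision: one piece per ordering of the vertices,
    with vertices the barycenters of the initial segments of the ordering."""
    return [tuple(barycenter(perm[:k]) for k in range(1, len(perm) + 1))
            for perm in permutations(simplex)]

# Standard 2-simplex in R^3.
simplex = [(Fraction(1), Fraction(0), Fraction(0)),
           (Fraction(0), Fraction(1), Fraction(0)),
           (Fraction(0), Fraction(0), Fraction(1))]
pieces = subdivide(simplex)
n = 2
assert len(pieces) == 6  # (n + 1)! pieces
# Comparing squared diameters is equivalent, and keeps everything exact.
assert max(sq_diam(p) for p in pieces) <= Fraction(n, n + 1) ** 2 * sq_diam(simplex)
```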
+
+\begin{prop}
+ If $c \in C_n^\mathcal{U}(X)$, then $\rho_n^X(c) \in C_n^{\mathcal{U}}(X)$.
+
+ Moreover, if $c \in C_n(X)$, then there is some $k$ such that $(\rho_n^{X})^k(c) \in C_n^{\mathcal{U}}(X)$.
+\end{prop}
+
+\begin{proof}
+ The first part is clear. For the second part, note that every chain is a finite sum of simplices. So we only have to check it for single simplices. We let $\sigma$ be a simplex, and let
+ \[
+ \mathcal{V} = \{\sigma^{-1} \mathring{U}_\alpha\}
+ \]
+ be an open cover of $\Delta^n$. By the Lebesgue number lemma, there is some $\varepsilon > 0$ such that any set of diameter $< \varepsilon$ is contained in some $\sigma^{-1} \mathring{U}_\alpha$. So we can choose $k > 0$ such that $(\rho_n^{\Delta^n})^k (\iota_n)$ is a sum of simplices, each of which has diameter $< \varepsilon$. So each lies in some $\sigma^{-1}\mathring{U}_\alpha$. So
+ \[
+ (\rho_n^{\Delta^n})^k (\iota_n) \in C_n^{\mathcal{V}} (\Delta^n).
+ \]
+ So applying $\sigma_\#$ and using naturality tells us
+ \[
+ (\rho_n^{X})^k (\sigma) \in C_n^\mathcal{U}(X).\qedhere
+ \]
+\end{proof}
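The proof is effective: combining the Lebesgue number with the diameter bound gives an explicit (if wasteful) choice of $k$, namely the smallest $k$ with $(n/(n+1))^k \diam(\Delta^n) < \varepsilon$. A throwaway illustration (the function is ours):

```python
def subdivisions_needed(n, diam, eps):
    """Smallest k with (n/(n+1))**k * diam < eps, as in the proof above."""
    k, d = 0, diam
    while d >= eps:
        d *= n / (n + 1)
        k += 1
    return k

# e.g. shrinking the pieces of a 2-simplex of diameter 1 below 0.1
k = subdivisions_needed(2, 1.0, 0.1)
```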
+
+Finally, we get to the theorem.
+\begin{thm}[Small simplices theorem]\index{small simplices theorem}
+ The natural map $U: H_*^\mathcal{U}(X) \to H_*(X)$ is an isomorphism.
+\end{thm}
+
+\begin{proof}
+ Let $[c] \in H_n(X)$. By the proposition, there is some $k > 0$ such that $(\rho_n^X)^k (c) \in C_n^{\mathcal{U}}(X)$. We know that $\rho_n^X$ is chain homotopic to the identity. Thus so is $(\rho_n^X)^k$. So $[(\rho_n^X)^k (c)] = [c]$. So the map $H_n^\mathcal{U}(X) \to H_n(X)$ is surjective.
+
+ To show that it is injective, we suppose $U([c]) = 0$. Then we can find some $z \in C_{n + 1}(X)$ such that $dz = c$. We can then similarly subdivide $z$ enough times that it lies in $C^\mathcal{U}_{n + 1}(X)$, with boundary $(\rho_n^X)^k(c)$, which is $\mathcal{U}$-homologous to $c$ since the chain homotopy from $(\rho_n^X)^k$ to the identity preserves $\mathcal{U}$-small chains. So this shows that $[c] = 0 \in H_n^{\mathcal{U}}(X)$.
+\end{proof}
+
+That's it. We move on to (slightly) more interesting stuff. The next few sections will all be slightly short, as we touch on various different ideas.
+
+\section{Reduced homology}
+
+\begin{defi}[Reduced homology]\index{reduced homology}
+ Let $X$ be a space, and $x_0 \in X$ a basepoint. We define the \emph{reduced homology} to be $\tilde{H}_*(X) = H_*(X, \{x_0\})$.
+\end{defi}
+Note that by the long exact sequence of relative homology, we know that $\tilde{H}_n(X) \cong H_n(X)$ for $n \geq 1$. So what is the point of defining a new homology theory that only differs when $n = 0$, which we often don't care about?
+
+It turns out there is an isomorphism between $H_*(X, A)$ and $\tilde{H}_*(X/A)$ for suitably ``good'' pairs $(X, A)$.
+
+\begin{defi}[Good pair]\index{good pair}
+ We say a pair $(X, A)$ is \emph{good} if there is an open set $U$ containing $\bar{A}$ such that the inclusion $A \hookrightarrow U$ is a deformation retract, i.e.\ there exists a homotopy $H: [0, 1] \times U \to U$ such that
+ \begin{align*}
+ H(0, x) &= x\\
+ H(1, x) &\in A\\
+ H(t, a) &= a\text{ for all $a \in A, t \in [0, 1]$}.
+ \end{align*}
+\end{defi}
+
+\begin{thm}
+ If $(X, A)$ is good, then the natural map
+ \[
+ \begin{tikzcd}
+ H_*(X, A) \ar[r] & H_*(X/A, A/A) = \tilde{H}_*(X/A)
+ \end{tikzcd}
+ \]
+ is an isomorphism.
+\end{thm}
+
+\begin{proof}
+ As $i: A \hookrightarrow U$ is in particular a homotopy equivalence, the map
+ \[
+ \begin{tikzcd}
+ H_*(A) \ar[r] & H_*(U)
+ \end{tikzcd}
+ \]
+ is an isomorphism. So by the five lemma, the map on relative homology
+ \[
+ \begin{tikzcd}
+ H_*(X, A) \ar[r] & H_*(X, U)
+ \end{tikzcd}
+ \]
+ is an isomorphism as well.
+
+ As $i: A \hookrightarrow U$ is a deformation retraction with homotopy $H$, the inclusion
+ \[
+ \{*\} = A/A \hookrightarrow U/A
+ \]
+ is also a deformation retraction. So again by the five lemma, the map
+ \[
+ \begin{tikzcd}
+ H_*(X/A, A/A) \ar[r] & H_*(X/A, U/A)
+ \end{tikzcd}
+ \]
+ is also an isomorphism. Now we have
+ \[
+ \begin{tikzcd}[column sep=large]
+ H_n(X, A) \ar[d] \ar[r, "\sim"] & H_n(X, U) \ar[d] \ar[r, "\text{excise }A"] & H_n(X \setminus A, U \setminus A)\ar[d]\\
+ H_n(X/A, A/A) \ar[r, "\sim"] & H_n(X/A, U/A) \ar[r, "\text{excise }A/A"] & H_n\left(\frac{X}{A}\setminus\frac{A}{A}, \frac{U}{A}\setminus\frac{A}{A}\right)
+ \end{tikzcd}
+ \]
+ We now notice that $X\setminus A = \frac{X}{A} \setminus \frac{A}{A}$ and $U \setminus A = \frac{U}{A}\setminus \frac{A}{A}$. So the right-hand vertical map is actually an isomorphism. So the result follows.
+\end{proof}
+
+
+\section{Cell complexes}
+
+So far, everything we've said is true for \emph{arbitrary} spaces. This includes, for example, the topological space with three points $a, b, c$, whose topology is $\{\emptyset, \{a\}, \{a, b, c\}\}$. However, these spaces are horrible. We want to restrict our attention to nice spaces. Spaces that do feel like actual, genuine spaces.
+
+The best kinds of space we can imagine would be manifolds, but that is a bit too strong a condition. For example, the union of the two axes in $\R^2$ is not a manifold, but it is still a sensible space to talk about. Perhaps we can just impose conditions like Hausdorffness and maybe second countability, but we can still produce nasty spaces that satisfy these properties.
+
+So the idea is to provide a method to build spaces, and then say we only consider spaces built this way. These are known as \emph{cell complexes}, or \emph{CW complexes}.
+
+\begin{defi}[Cell complex]\index{cell complex}\index{CW complexes}
+ A \emph{cell complex} is any space built out of the following procedure:
+ \begin{enumerate}
+ \item Start with a discrete space $X^0$. The set of points of $X^0$ is called $I_0$.
+ \item If $X^{n - 1}$ has been constructed, then we may choose a family of maps $\{\varphi_\alpha: S^{n - 1} \to X^{n - 1}\}_{\alpha \in I_n}$, and set
+ \[
+ X^n = \left(X^{n - 1} \amalg \left(\coprod_{\alpha \in I_n} D_\alpha^n\right) \right)/\{x \in \partial D_\alpha^n \sim \varphi_\alpha(x) \in X^{n - 1}\}.
+ \]
+ We call $X^n$ the \term{$n$-skeleton} of $X$. We call the image of $D_\alpha^n \setminus \partial D_\alpha^n$ in $X^n$ the \term{open cell} $e_\alpha$.
+ \item Finally, we define
+ \[
+ X = \bigcup_{n \geq 0} X^n
+ \]
+ with the \term{weak topology}, namely that $A \subseteq X$ is open if $A \cap X^n$ is open in $X^n$ for all $n$.
+ \end{enumerate}
+\end{defi}
+We write $\Phi_\alpha: D_\alpha^n \to X^n$ for the obvious inclusion map. This is called the \term{characteristic map} for the cell $e_\alpha$.
+
+\begin{defi}[Finite-dimensional cell complex]\index{finite-dimensional cell complex}\index{cell complex!finite-dimensional}
+ If $X = X^n$ for some $n$, we say $X$ is \emph{finite-dimensional}.
+\end{defi}
+
+\begin{defi}[Finite cell complex]\index{finite cell complex}\index{cell complex!finite}
+ If $X$ is finite-dimensional and $I_n$ are all finite, then we say $X$ is \emph{finite}.
+\end{defi}
+
+\begin{defi}[Subcomplex]\index{subcomplex}\index{cell complex!subcomplex}
+ A \emph{subcomplex} $A$ of $X$ is a cell complex obtained by using subsets $I_n' \subseteq I_n$.
+\end{defi}
+Note that we cannot simply throw away some cells to get a subcomplex, as the higher cells might want to map into the cells you have thrown away, and you need to remove them as well.
+
+We note the following technical result without proof:
+\begin{lemma}
+ If $A \subseteq X$ is a subcomplex, then the pair $(X, A)$ is \emph{good}.
+\end{lemma}
+
+\begin{proof}
+ See Hatcher 0.16.
+\end{proof}
+
+\begin{cor}
+ If $A \subseteq X$ is a subcomplex, then
+ \[
+ \begin{tikzcd}
+ H_n(X, A) \ar[r, "\sim"] & \tilde{H}_n(X/A)
+ \end{tikzcd}
+ \]
+ is an isomorphism.
+\end{cor}
+
+We are next going to show that we can directly compute the homology of a cell complex by looking at the cell structures, instead of going through all the previous rather ad-hoc mess we've been through. We start with the following lemma:
+\begin{lemma}
+ Let $X$ be a cell complex. Then
+ \begin{enumerate}
+ \item
+ \[
+ H_i(X^n, X^{n - 1}) =
+ \begin{cases}
+ 0 & i \not= n\\
+ \bigoplus_{\alpha \in I_n} \Z & i = n
+ \end{cases}.
+ \]
+ \item $H_i(X^n) = 0$ for all $i > n$.
+ \item $H_i(X^n) \to H_i(X)$ is an isomorphism for $i < n$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item As $(X^n, X^{n - 1})$ is good, we have an isomorphism
+ \[
+ \begin{tikzcd}
+ H_i(X^n, X^{n - 1}) \ar[r, "\sim"] & \tilde{H}_i(X^n/X^{n - 1})
+ \end{tikzcd}.
+ \]
+ But we have
+ \[
+ X^n/X^{n - 1} \cong \bigvee_{\alpha \in I_n} S_\alpha^n,
+ \]
+ the space obtained from $Y = \coprod_{\alpha \in I_n}S_\alpha^n$ by collapsing down the subspace $Z = \{x_\alpha: \alpha \in I_n\}$, where each $x_\alpha$ is the south pole of the sphere. To compute the homology of the wedge $X^n/X^{n - 1}$, we then note that $(Y, Z)$ is good, and so we have a long exact sequence
+ \[
+ \begin{tikzcd}
+ H_i(Z) \ar[r] & H_i(Y) \ar[r] & \tilde{H}_i(Y/Z) \ar[r] & H_{i - 1}(Z) \ar[r] & H_{i - 1}(Y)
+ \end{tikzcd}.
+ \]
+ Since $H_i(Z)$ vanishes for $i \geq 1$, the result then follows from the homology of the spheres plus the fact that $H_i(\coprod X_\alpha) = \bigoplus H_i(X_\alpha)$.
+ \item This follows by induction on $n$. We have (part of) a long exact sequence
+ \[
+ \begin{tikzcd}
+ H_i(X^{n - 1}) \ar[r] & H_i(X^n) \ar[r] & H_i(X^n, X^{n - 1})
+ \end{tikzcd}
+ \]
+ We know the first term vanishes by induction, and the third term vanishes for $i > n$. So it follows that $H_i(X^n)$ vanishes.
+ \item To avoid doing too much point-set topology, we suppose $X$ is finite-dimensional, so $X = X^m$ for some $m$. Then we have a long exact sequence
+ \[
+ \begin{tikzcd}[column sep=small]
+ H_{i + 1} (X^{n + 1}, X^n) \ar[r] & H_i(X^n) \ar[r] & H_i(X^{n + 1}) \ar[r] & H_i(X^{n + 1}, X^n)
+ \end{tikzcd}
+ \]
+ Now if $i < n$, we know the first and last groups vanish. So we have $H_i(X^n) \cong H_i(X^{n + 1})$. By continuing, we know that
+ \[
+ H_i(X^n) \cong H_i(X^{n + 1}) \cong H_i(X^{n + 2}) \cong \cdots \cong H_i(X^m) = H_i(X).
+ \]
+ To prove this for the general case, we need to use the fact that any map from a compact space to a cell complex hits only finitely many cells, and then the result would follow from this special case.\qedhere
+ \end{enumerate}
+\end{proof}
+
+For a cell complex $X$, let
+\[
+ C_n^{\mathrm{cell}}(X) = H_n(X^n, X^{n - 1}) \cong \bigoplus_{\alpha \in I_n} \Z.
+\]
+We define $d_n^{\mathrm{cell}}: C_n^{\mathrm{cell}}(X) \to C_{n - 1}^{\mathrm{cell}}(X)$ by the composition
+\[
+ \begin{tikzcd}
+ H_n(X^n, X^{n - 1}) \ar[r, "\partial"] & H_{n - 1}(X^{n - 1}) \ar[r, "q"] & H_{n - 1}(X^{n - 1}, X^{n - 2})
+ \end{tikzcd}.
+\]
+We consider
+\[
+ \begin{tikzcd}[row sep=large, column sep=-0.5em]
+ &&& 0\\
+ 0 \ar[dr] && H_n(X^{n + 1}) \ar[ur]\\
+ &H_n(X^n) \ar[ur] \ar[dr, "q_n"]\\
+ H_{n + 1}(X^{n + 1}, X^n) \ar[ur, "\partial"] \ar[rr, "d_{n + 1}^{\mathrm{cell}}"] & & H_n(X^n, X^{n - 1}) \ar[rr, "d_n^{\mathrm{cell}}"] \ar[rd, "\partial"] && H_{n - 1}(X^{n - 1}, X^{n - 2})\\
+ & & & H_{n - 1}(X^{n - 1}) \ar[rd] \ar[ru, "q_{n - 1}"]\\
+ & & 0 \ar[ur] & & H_{n - 1}(X^n)
+ \end{tikzcd}
+\]
+Referring to the above diagram, we see that
+\[
+ d_n^{\mathrm{cell}} \circ d_{n + 1}^{\mathrm{cell}} = q_{n - 1} \circ \partial \circ q_n \circ \partial = 0,
+\]
+since the middle $\partial \circ q_n$ is part of an exact sequence. So $(C_{\Cdot}^{\mathrm{cell}}(X), d_{\Cdot}^{\mathrm{cell}})$ is a chain complex, and the corresponding homology groups are known as the \term{cellular homology} of $X$, written $H_n^{\mathrm{cell}}(X)$.
+
+\begin{thm}
+ \[
+ H_n^{\mathrm{cell}}(X) \cong H_n(X).
+ \]
+\end{thm}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ H_n(X) &\cong H_n(X^{n + 1}) \\
+ &= H_n(X^n)/\im(\partial: H_{n + 1}(X^{n + 1}, X^n) \to H_n(X^n))\\
+ \intertext{Since $q_n$ is injective, we can apply it to the top and bottom to get}
+ &= q_n(H_n(X^n)) / \im(d_{n + 1}^{\mathrm{cell}}: H_{n + 1}(X^{n + 1}, X^n) \to H_n(X^n, X^{n - 1}))\\
+ \intertext{By exactness, the image of $q_n$ is the kernel of $\partial$. So we have}
+ &= \ker(\partial: H_n(X^n, X^{n - 1}) \to H_{n - 1}(X^{n - 1})) / \im(d_{n + 1}^{\mathrm{cell}})\\
+ &= \ker(d_n^{\mathrm{cell}}) / \im(d_{n + 1}^{\mathrm{cell}})\\
+ &= H_n^{\mathrm{cell}}(X).\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{cor}
+ If $X$ is a finite cell complex, then $H_n(X)$ is a finitely-generated abelian group for all $n$, generated by at most $|I_n|$ elements. In particular, if there are no $n$-cells, then $H_n(X)$ vanishes.
+
+ If $X$ has a cell structure with cells in even dimensions only, then the $H_*(X)$ are all free.
+\end{cor}
+
+We can similarly define cellular cohomology.
+\begin{defi}[Cellular cohomology]\index{cellular cohomology}
+ We define \emph{cellular cohomology} by
+ \[
+ C_{\mathrm{cell}}^n(X) = H^n(X^n, X^{n - 1})
+ \]
+ and let $d_{\mathrm{cell}}^n$ be the composition
+ \[
+ \begin{tikzcd}
+ H^n(X^n, X^{n - 1}) \ar[r, "q^*"] & H^n(X^n) \ar[r, "\partial"] & H^{n + 1}(X^{n + 1}, X^n).
+ \end{tikzcd}
+ \]
+ This defines a cochain complex $C_{\mathrm{cell}}^\Cdot(X)$ with cohomology $H^*_{\mathrm{cell}}(X)$, and we have
+ \[
+ H_{\mathrm{cell}}^*(X) \cong H^*(X).
+ \]
+ One can directly check that
+ \[
+ C_{\mathrm{cell}}^\Cdot(X) \cong \Hom (C_\Cdot^{\mathrm{cell}}(X), \Z).
+ \]
+\end{defi}
+
+This is all very good, because cellular homology is very simple and concrete. However, to actually use it, we need to understand what the map
+\[
+ d_n^{\mathrm{cell}} : C_n^{\mathrm{cell}}(X) = \bigoplus_{\alpha \in I_n} \Z\{e_\alpha\} \to C_{n - 1}^{\mathrm{cell}}(X) = \bigoplus_{\beta \in I_{n - 1}}\Z\{e_\beta\}
+\]
+is. In particular, we want to find the coefficients $d_{\alpha\beta}$ such that
+\[
+ d_n^{\mathrm{cell}}(e_\alpha) = \sum d_{\alpha\beta} e_\beta.
+\]
+It turns out this is pretty easy:
+\begin{lemma}
+ The coefficients $d_{\alpha\beta}$ are given by the degree of the map
+ \[
+ \begin{tikzcd}
+ S_\alpha^{n - 1} = \partial D_\alpha^n \ar[r, "\varphi_\alpha"] \ar[rrr, bend right, "f_{\alpha\beta}", looseness=0.5] & X^{n - 1} \ar[r] & X^{n - 1}/X^{n - 2} = \bigvee_{\gamma \in I_{n - 1}} S_\gamma^{n - 1} \ar[r] & S_\beta^{n - 1}
+ \end{tikzcd},
+ \]
+ where the final map is obtained by collapsing the other spheres in the wedge.
+
+ In the case of cohomology, the maps are given by the transposes of these.
+\end{lemma}
+This is easier in practice than it sounds. In practice, the map is given by ``the obvious one''.
+
+\begin{proof}
+ Consider the diagram
+ \[
+ \begin{tikzcd}
+ H_n(D_\alpha^n, \partial D_\alpha^n) \ar[d, "(\Phi_\alpha)_*"] \ar[r, "\partial", "\sim"'] & H_{n - 1}(\partial D_\alpha^n)\ar[d, "(\varphi_\alpha)_*"] \ar[r, dashed] & \tilde{H}_{n - 1}(S^{n - 1}_\beta) \\
+ H_n(X^n, X^{n - 1}) \ar[r, "\partial"] \ar[rd, "d_n^\mathrm{cell}"] & H_{n - 1}(X^{n - 1}) \ar[d, "q"] & \tilde{H}_{n - 1}\left(\bigvee S_\gamma^{n - 1}\right)\ar[u, "\text{collapse}"]\\
+ & H_{n - 1}(X^{n - 1}, X^{n - 2}) \ar[r, "\text{excision}", "\sim"'] & \tilde{H}_{n - 1}(X^{n - 1}/X^{n - 2}) \ar[u, equals]
+ \end{tikzcd}
+ \]
+ By the long exact sequence, the top left horizontal map is an isomorphism.
+
+ Now let's try to trace through the diagram. We can find
+ \[
+ \begin{tikzcd}
+ 1 \ar[d, maps to] \ar[r, "\text{isomorphism}"] & 1 \ar[r, maps to, "f_{\alpha\beta}"]& d_{\alpha\beta}\\
+ e_\alpha \ar[rd, maps to]\\
+ & \sum d_{\alpha\gamma}e_\gamma \ar[r, maps to] & \sum d_{\alpha\gamma} e_\gamma \ar[uu, maps to]
+ \end{tikzcd}
+ \]
+ So the degree of $f_{\alpha\beta}$ is indeed $d_{\alpha\beta}$.
+\end{proof}
+
+\begin{eg}
+ Let $K$ be the Klein bottle.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.55, mred] (0, 0) -- (3, 0);
+ \draw [->-=0.55, mred] (3, 3) -- (0, 3);
+ \draw [->-=0.55, mblue] (3, 0) -- (3, 3);
+ \draw [->-=0.55, mblue] (0, 0) -- (0, 3);
+
+ \node [circ] at (0, 0) {};
+ \node [circ] at (3, 0) {};
+ \node [circ] at (0, 3) {};
+ \node [circ] at (3, 3) {};
+ \node [anchor = south east] {$v$};
+
+ \node [left] at (0, 1.5) {$a$};
+ \node [below] at (1.5, 3) {$b$};
+ \node at (1.5, 1.5) {\huge $\pi$};
+ \end{tikzpicture}
+ \end{center}
+ We give it a cell complex structure by
+ \begin{itemize}
+ \item $K^0 = \{v\}$. Note that all four vertices in the diagram are identified.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [above] at (0, 0) {$v$};
+ \end{tikzpicture}
+ \end{center}
+ \item $K^1 = \{a, b\}$.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw (-1, 0) circle [radius=1];
+ \draw (1, 0) circle [radius=1];
+ \node [circ] {};
+ \node [left] at (-2, 0) {$a$};
+ \node [right] at (2, 0) {$b$};
+ \draw [->] (2, 0.1) -- (2, 0.11);
+ \draw [->] (-2, 0.1) -- (-2, 0.11);
+ \node [right] {$v$};
+ \end{tikzpicture}
+ \end{center}
+ \item $K^2$ is the unique $2$-cell $\pi$ we see in the picture, where $\varphi_\pi: S^1 \to K^1$ given by $a b a^{-1}b$.
+ \end{itemize}
+ The cellular chain complex is given by
+ \[
+ \begin{tikzcd}[row sep=small]
+ 0 \ar[r] & C_2^{\mathrm{cell}}(K) \ar[d, equals] \ar[r, "d_2^{\mathrm{cell}}"] & C_1^{\mathrm{cell}}(K) \ar[d, equals] \ar[r, "d_1^{\mathrm{cell}}"] & C_0^{\mathrm{cell}}(K) \ar[d, equals]\\
+ & \Z\pi & \Z a \oplus \Z b & \Z v
+ \end{tikzcd}
+ \]
+ We can now compute the maps $d_i^{\mathrm{cell}}$. The $d_1$ map is easy. We have
+ \[
+ d_1(a) = d_1(b) = v - v = 0.
+ \]
+ For the $d_2$ map, we can figure it out by using local degrees. Locally, the attaching map is just like the identity map, up to an orientation flip, so the local degrees are $\pm 1$. Moreover, the preimage of each point has two elements. If we think hard enough, we realize that for the edge $a$, the two preimages have opposite local degrees and cancel each other out, while for the edge $b$, they have the same sign and give a degree of $2$. So we have
+ \[
+ d_2(\pi) = 0a + 2b.
+ \]
+ So we have
+ \begin{align*}
+ H_0(K) &= \Z\\
+ H_1(K) &= \frac{\Z \oplus \Z}{\bra2b\ket} = \Z \oplus \Z/2\Z\\
+ H_2(K) &= 0
+ \end{align*}
+ We can similarly compute the cohomology. By dualizing, we have
+ \[
+ \begin{tikzcd}[row sep=small]
+ C_{\mathrm{cell}}^2(K) \ar[d, equals] & C_{\mathrm{cell}}^1(K) \ar[d, equals] \ar[l, "(0\; 2)"] & C_{\mathrm{cell}}^0(K) \ar[d, equals] \ar[l, "(0)"] \\
+ \Z & \Z \oplus \Z & \Z
+ \end{tikzcd}
+ \]
+ So we have
+ \begin{align*}
+ H^0(K) &= \Z\\
+ H^1(K) &= \Z\\
+ H^2(K) &= \Z/2\Z.
+ \end{align*}
+ Note that the second cohomology is \emph{not} the dual of the second homology!
+
+ However, if we forget where each factor is, and just add all the homology groups together, we get $\Z \oplus \Z \oplus \Z/2\Z$. Also, if we forget all the torsion components $\Z/2\Z$, then they are the same!
+
+ This is a general phenomenon. For a cell complex, if we know what all the homology groups are, we can find the cohomologies by keeping the $\Z$'s unchanged and moving the torsion components up. The general statement will be given by the \emph{universal coefficient theorem}.
+\end{eg}
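This kind of computation can be mechanised: over $\Z$, the homology of a finite cellular chain complex can be read off after diagonalizing the boundary matrices by integer row and column operations (as in Smith normal form; any diagonal form determines the groups). A rough sketch in Python, using the boundary matrices we just computed for the Klein bottle (all names are ours):

```python
def diagonalize(A):
    """Diagonalize an integer matrix by row/column operations;
    return the list of nonzero diagonal entries (up to sign)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0]) if A else 0
    diag, t = [], 0
    while t < min(m, n):
        entries = [(abs(A[i][j]), i, j) for i in range(t, m)
                   for j in range(t, n) if A[i][j]]
        if not entries:
            break
        _, pi, pj = min(entries)                 # smallest nonzero pivot
        A[t], A[pi] = A[pi], A[t]                # move it to position (t, t)
        for row in A:
            row[t], row[pj] = row[pj], row[t]
        p, clean = A[t][t], True
        for i in range(t + 1, m):                # clear the pivot column
            q = A[i][t] // p
            A[i] = [a - q * b for a, b in zip(A[i], A[t])]
            clean &= A[i][t] == 0
        for j in range(t + 1, n):                # clear the pivot row
            q = A[t][j] // p
            for i in range(m):
                A[i][j] -= q * A[i][t]
            clean &= A[t][j] == 0
        if clean:                                # pivot divides all it met
            diag.append(abs(p))
            t += 1
    return diag

def homology(dims, d):
    """Homology of a chain complex: dims[i] = rank of C_i, d[i] = matrix of
    the boundary C_i -> C_{i-1} (row-major); maps not listed are zero.
    Returns (free rank, torsion coefficients) in each degree."""
    def data(i):
        return diagonalize(d[i]) if i in d else []
    out = []
    for i in range(len(dims)):
        below, above = data(i), data(i + 1)
        out.append((dims[i] - len(below) - len(above),
                    [x for x in above if x != 1]))
    return out

# Klein bottle: one 0-cell v, two 1-cells a, b, one 2-cell pi,
# with d_1 = 0 and d_2(pi) = 0*a + 2*b.
klein = homology([1, 2, 1], {2: [[0], [2]]})
# -> H_0 = Z, H_1 = Z + Z/2, H_2 = 0
```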
+
+\begin{eg}
+ Consider $\RP^n = S^n/(x \sim -x)$. We notice that for any point in $\RP^n$, if it is not in the equator, then it is represented by a unique element in the northern hemisphere. Otherwise, it is represented by two points. So we have $\RP^n \cong D^n/(x \sim -x\text{ for }x \in \partial D^n)$. This is a nice description, since if we throw out the interior of the disk, then we are left with an $S^{n - 1}$ with antipodal points identified, i.e.\ an $\RP^{n - 1}$! So we can immediately see that
+ \[
+ \RP^n = \RP^{n - 1} \cup_f D^n,
+ \]
+ for $f: S^{n - 1} \to \RP^{n - 1}$ given by
+ \[
+ f(x) = [x].
+ \]
+ So $\RP^n$ has a cell structure with one cell in every degree up to $n$. What are the boundary maps?
+
+ We write $e_i$ for the $i$-th degree cell. We know that $e_i$ is attached along the map $f$ described above. More concretely, we have
+ \[
+ \begin{tikzcd}
+ f: S_s^{i - 1}\ar[r, "\varphi_i"] & \RP^{i - 1} \ar[r] &\RP^{i - 1}/\RP^{i - 2} = S_t^{i - 1}
+ \end{tikzcd}.
+ \]
+ The open upper hemisphere and lower hemisphere of $S^{i - 1}_s$ are mapped homeomorphically to $S_t^{i - 1} \setminus \{*\}$. Furthermore,
+ \[
+ f|_{\mathrm{upper}} = f|_{\mathrm{lower}} \circ a,
+ \]
+ where $a$ is the antipodal map. But we know that $\deg(a) = (-1)^i$. So the map has degree $1 + (-1)^i$, i.e.\ it is zero if $i$ is odd, and multiplication by $2$ if $i$ is even. Then we have
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r, "2"] & \Z e_3 \ar[r, "0"] & \Z e_2 \ar[r, "2"] & \Z e_1 \ar[r, "0"] & \Z e_0
+ \end{tikzcd}.
+ \]
+ What happens on the left end depends on whether $n$ is even or odd. So we have
+ \[
+ H_i(\RP^n) =
+ \begin{cases}
+ \Z & i = 0\\
+ \Z/2\Z & i\text{ odd}, i < n\\
+ 0 & i\text{ even}, 0 < i < n\\
+ \Z & i = n\text{ is odd}\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ We can immediately work out the cohomology too. We will just write out the answer:
+ \[
+ H^i(\RP^n) =
+ \begin{cases}
+ \Z & i = 0\\
+ 0 & i\text{ odd}, i < n\\
+ \Z/2\Z & i\text{ even}, 0 < i \leq n\\
+ \Z & i = n\text{ is odd}\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+\end{eg}
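Since every cellular chain group here is a single copy of $\Z$ and each boundary map is multiplication by $1 + (-1)^i$, the whole computation fits in a few lines. A sketch (ours) that reproduces the table above:

```python
def rp_homology(n):
    """H_i(RP^n; Z) for i = 0, ..., n, from the cellular chain complex
    Z <-0- Z <-2- Z <-0- ... where d_i is multiplication by 1 + (-1)**i."""
    def d(i):
        return 1 + (-1) ** i if 1 <= i <= n else 0
    groups = []
    for i in range(n + 1):
        if d(i) != 0:            # d_i is injective: no cycles in degree i
            groups.append('0')
        elif d(i + 1) == 0:      # cycles Z, no boundaries
            groups.append('Z')
        else:                    # cycles Z, boundaries 2Z
            groups.append('Z/2')
    return groups
```

For instance, `rp_homology(3)` recovers $\Z, \Z/2, 0, \Z$, matching the case $i = n$ odd.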
+
+\section{(Co)homology with coefficients}
+Recall that when we defined (co)homology, we constructed these free $\Z$-modules from our spaces. However, we did not actually use the fact that it was $\Z$; we might as well replace it with any abelian group $A$.
+
+\begin{defi}[(Co)homology with coefficients]\index{homology!with coefficients}\index{cohomology!with coefficients}
+ Let $A$ be an abelian group, and $X$ be a topological space. We let
+ \[
+ C_{\Cdot}(X; A) = C_\Cdot(X) \otimes A
+ \]
+ with differentials $d \otimes \id_A$. In other words $C_{\Cdot}(X; A)$ is the abelian group obtained by taking the direct sum of many copies of $A$, one for each singular simplex.
+
+ We let
+ \[
+ H_n(X; A) = H_n(C_{\Cdot}(X; A), d \otimes \id_A).
+ \]
+ We can also define
+ \[
+ H_n^{\mathrm{cell}}(X; A) = H_n(C_{\Cdot}^{\mathrm{cell}}(X) \otimes A),
+ \]
+ and the same proof shows that $H_n^{\mathrm{cell}}(X; A) = H_n(X; A)$.
+
+ Similarly, we let
+ \[
+ C^{\Cdot}(X; A) = \Hom(C_{\Cdot}(X), A),
+ \]
+ with the usual (dual) differential. We again set
+ \[
+ H^n(X; A) = H^n(C^\Cdot(X; A)).
+ \]
+ We similarly define cellular cohomology.
+
+ If $A$ is in fact a commutative ring $R$, then these are in fact $R$-modules.
+\end{defi}
+We call $A$ the ``coefficients'', since a general member of $C_{\Cdot}(X; A)$ looks like
+\[
+ \sum n_\sigma \sigma,\quad \text{where } n_\sigma \in A,\quad \sigma: \Delta^n \to X.
+\]
+We will usually take $A = \Z, \Z/n\Z$ or $\Q$. Everything we've proved for homology holds for these with exactly the same proof.
+
+\begin{eg}
+ In the case of $C_{\Cdot}^{\mathrm{cell}}(\RP^n)$, the differentials are all $0$ or $2$. So in $C_\Cdot^{\mathrm{cell}}(\RP^n; \Z/2)$, all the differentials are $0$. So we have
+ \[
+ H_i(\RP^n; \Z/2) =
+ \begin{cases}
+ \Z/2 & 0 \leq i \leq n\\
+ 0 & i > n
+ \end{cases}
+ \]
+ Similarly, the cohomology groups are the same.
+
+ On the other hand, if we take the coefficients to be $\Q$, then multiplication by $2$ is now an isomorphism. Then we get
+ \[
+ H_i(\RP^n; \Q) =
+ \begin{cases}
+ \Q & i = 0\\
+ \Q & i = n\text{ is odd}\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+\end{eg}
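The same one-line-per-degree reasoning works with field coefficients: each $C_i$ is one copy of the field and $d_i$ acts as multiplication by $1 + (-1)^i$, which is zero in $\Z/2$ and invertible (when nonzero) in $\Q$. A sketch (ours):

```python
def rp_betti(n, char):
    """dim of H_i(RP^n; F) for F = Z/2 (char = 2) or Q (char = 0),
    where d_i is multiplication by 1 + (-1)**i in F."""
    def d(i):
        if not 1 <= i <= n:
            return 0
        v = 1 + (-1) ** i
        return v % char if char else v
    betti = []
    for i in range(n + 1):
        ker = 1 if d(i) == 0 else 0        # the 1x1 map is zero or injective
        im = 1 if d(i + 1) != 0 else 0     # a nonzero scalar is surjective
        betti.append(ker - im)
    return betti
```

For example, `rp_betti(3, 2)` gives a $\Z/2$ in every degree, while `rp_betti(3, 0)` only sees degrees $0$ and $3$.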
+
+\section{Euler characteristic}
+There are many ways to define the Euler characteristic, and they are all equivalent. So to define it, we pick a definition that makes it obvious it is a number.
+\begin{defi}[Euler characteristic]\index{Euler characteristic}
+ Let $X$ be a cell complex. We let
+ \[
+ \chi(X) = \sum_n (-1)^n \cdot (\text{number of $n$-cells of $X$}) \in \Z.
+ \]
+\end{defi}
+From this definition, it is not clear that this is a property of $X$ itself, rather than something about its cell decomposition.
+
+We similarly define
+\[
+ \chi_{\Z}(X) = \sum_n (-1)^n \rank H_n(X; \Z).
+\]
+For any field $\F$, we define
+\[
+ \chi_{\F}(X) = \sum_n (-1)^n \dim_\F H_n(X; \F).
+\]
+\begin{thm}
+ We have
+ \[
+ \chi = \chi_\Z = \chi_\F.
+ \]
+\end{thm}
+
+\begin{proof}
+ First note that the number of $n$-cells of $X$ is the rank of $C_n^{\mathrm{cell}}(X)$, which we will just write as $C_n$. Let
+ \begin{align*}
+ Z_n &= \ker (d_n: C_n \to C_{n - 1})\\
+ B_n &= \im (d_{n + 1}: C_{n + 1} \to C_n).
+ \end{align*}
+ We are now going to write down two short exact sequences. By definition of homology, we have
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & B_n \ar[r] & Z_n \ar[r] & H_n(X; \Z) \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ Also, the definition of $Z_n$ and $B_n$ give us
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & Z_n \ar[r] & C_n \ar[r] & B_{n - 1} \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ We will now use the fact that rank is additive in short exact sequences: the rank of the middle term is the sum of the ranks of the outer terms. So we have
+ \[
+ \chi_\Z(X) = \sum (-1)^n \rank H_n(X) = \sum(-1)^n (\rank Z_n - \rank B_n).
+ \]
+ We also have
+ \[
+ \rank B_n = \rank C_{n + 1} - \rank Z_{n + 1}.
+ \]
+ So we have
+ \begin{align*}
+ \chi_\Z(X) &= \sum_n (-1)^n (\rank Z_n - \rank C_{n + 1} + \rank Z_{n + 1}) \\
+ &= \rank Z_0 + \sum_n (-1)^{n + 1} \rank C_{n + 1}\\
+ &= \sum_n (-1)^n \rank C_n\\
+ &= \chi(X),
+ \end{align*}
+ where the $Z_n$ terms telescope to leave $\rank Z_0$, and $Z_0 = C_0$ since $d_0 = 0$.
+ For $\chi_\F$, we use the fact that
+ \[
+ \rank C_n = \dim_{\F} C_n \otimes \F.\qedhere
+ \]
+\end{proof}
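We can sanity-check the theorem on the examples computed earlier: counting cells with signs and summing Betti numbers with signs agree, over $\Z$ and over any field. A quick sketch (names ours):

```python
def chi_from_cells(cells):
    """chi(X): alternating sum of the numbers of n-cells."""
    return sum((-1) ** n * c for n, c in enumerate(cells))

def chi_from_betti(betti):
    """chi_F(X): alternating sum of dim H_n(X; F) (or of ranks, over Z)."""
    return sum((-1) ** n * b for n, b in enumerate(betti))

# Klein bottle: cells (1, 2, 1); ranks of H_*(K; Z) are (1, 1, 0);
# dims of H_*(K; Z/2) are (1, 2, 1) -- torsion shifts but chi does not.
assert chi_from_cells([1, 2, 1]) == 0
assert chi_from_betti([1, 1, 0]) == 0
assert chi_from_betti([1, 2, 1]) == 0
```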
+\section{Cup product}
+So far, homology and cohomology are somewhat similar. We computed them, saw they are not the same, but they seem to contain the same information nevertheless. However, cohomology is 10 times better, because we can define a ring structure on them, and rings are better than groups.
+
+Just like the case of homology and cohomology, we will be able to write down the definition easily, but will struggle to compute it.
+
+\begin{defi}[Cup product]\index{cup product}\index{$\smile$}
+ Let $R$ be a commutative ring, and $\phi \in C^k(X; R)$, $\psi \in C^\ell(X; R)$. Then $\phi \smile \psi \in C^{k + \ell}(X; R)$ is given by
+ \[
+ (\phi \smile \psi)(\sigma: \Delta^{k + \ell} \to X) = \phi(\sigma|_{[v_0, \ldots, v_k]}) \cdot \psi(\sigma|_{[v_k, \ldots, v_{k + \ell}]}).
+ \]
+ Here the multiplication is multiplication in $R$, and $v_0, \ldots, v_{k + \ell}$ are the vertices of $\Delta^{k + \ell} \subseteq \R^{k + \ell + 1}$, and the restriction is given by
+ \[
+ \sigma|_{[x_0, \ldots, x_i]} (t_0, \ldots, t_i) = \sigma\left(\sum t_j x_j\right).
+ \]
+ This is a bilinear map.
+\end{defi}
+
+\begin{notation}
+ We write
+ \[
+ H^*(X; R) = \bigoplus_{n \geq 0} H^n(X; R).
+ \]
+\end{notation}
+This is the definition. We can try to establish some of its basic properties. We want to know how this interacts with the differential $d$ on cochains. The obvious answer $d(\phi \smile \psi) = (d \phi) \smile (d \psi)$ doesn't work, because the degrees are wrong. What we have is:
+
+\begin{lemma}
+ If $\phi \in C^k(X; R)$ and $\psi \in C^\ell(X; R)$, then
+ \[
+ d (\phi \smile \psi) = (d \phi)\smile \psi + (-1)^k \phi\smile(d \psi).
+ \]
+\end{lemma}
+This is like the product rule with a sign.
+
+\begin{proof}
+ This is a straightforward computation.
+
+ Let $\sigma: \Delta^{k + \ell + 1} \to X$ be a simplex. Then we have
+ \begin{align*}
+ ((d \phi)\smile \psi)(\sigma) &= (d \phi)(\sigma|_{[v_0, \ldots, v_{k + 1}]}) \cdot \psi(\sigma|_{[v_{k + 1}, \ldots, v_{k + \ell + 1}]})\\
+ &= \phi\left(\sum_{i = 0}^{k + 1} (-1)^i \sigma|_{[v_0, \ldots, \hat{v}_i, \ldots, v_{k + 1}]}\right) \cdot \psi(\sigma|_{[v_{k + 1}, \ldots, v_{k + \ell + 1}]})\\
+ (\phi \smile (d \psi))(\sigma) &= \phi(\sigma|_{[v_0, \ldots, v_k]}) \cdot (d \psi)(\sigma|_{[v_k, \ldots, v_{k + \ell + 1}]})\\
+ &=\phi(\sigma|_{[v_0, \ldots, v_k]}) \cdot \psi\left(\sum_{i = k}^{k + \ell + 1} (-1)^{i - k} \sigma|_{[v_k, \ldots, \hat{v}_i, \ldots, v_{k + \ell + 1}]}\right)\\
+ &=(-1)^k \phi(\sigma|_{[v_0, \ldots, v_k]}) \cdot \psi\left(\sum_{i = k}^{k + \ell + 1} (-1)^{i} \sigma|_{[v_k, \ldots, \hat{v}_i, \ldots, v_{k + \ell + 1}]}\right).
+ \end{align*}
+ We notice that the last term of the first expression and the first term of the second expression are exactly the same, except that the signs differ by $-1$, so they cancel in the sum. The remaining terms are precisely the terms of $(\phi \smile \psi)(d \sigma)$. So we have
+ \[
+ ((d \phi) \smile \psi)(\sigma) + (-1)^k \phi \smile (d \psi)(\sigma) = (\phi \smile \psi)(d \sigma) = (d (\phi \smile \psi))(\sigma)
+ \]
+ as required.
+\end{proof}
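The computation can also be checked by machine in the simplicial model, where a $k$-cochain is a function on increasing $(k+1)$-tuples of vertices and the front-face/back-face formulas are exactly as above. A sketch (ours, not part of the notes):

```python
import itertools
import random

N = 5  # work with the faces of a 5-simplex: increasing tuples from {0, ..., 5}

def simplices(dim):
    return list(itertools.combinations(range(N + 1), dim + 1))

def coboundary(phi, k):
    """(d phi)(sigma) = sum_j (-1)**j * phi(sigma with vertex j deleted)."""
    return {s: sum((-1) ** j * phi[s[:j] + s[j + 1:]] for j in range(len(s)))
            for s in simplices(k + 1)}

def cup(phi, k, psi, l):
    """(phi cup psi)(sigma) = phi(front k-face) * psi(back l-face)."""
    return {s: phi[s[:k + 1]] * psi[s[k:]] for s in simplices(k + l)}

random.seed(0)
k, l = 1, 2
phi = {s: random.randint(-3, 3) for s in simplices(k)}
psi = {s: random.randint(-3, 3) for s in simplices(l)}

lhs = coboundary(cup(phi, k, psi, l), k + l)
rhs_1 = cup(coboundary(phi, k), k + 1, psi, l)
rhs_2 = cup(phi, k, coboundary(psi, l), l + 1)
rhs = {s: rhs_1[s] + (-1) ** k * rhs_2[s] for s in rhs_1}
assert lhs == rhs  # d(phi cup psi) = (d phi) cup psi + (-1)^k phi cup (d psi)
```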
+This is the most interesting thing about these things, because it tells us this gives a well-defined map on cohomology.
+
+\begin{cor}
+ The cup product induces a well-defined map
+ \[
+ \begin{tikzcd}[row sep = 0ex
+ ,/tikz/column 1/.append style={anchor=base east}
+ ,/tikz/column 2/.append style={anchor=base west}
+ ]
+ \smile: H^k(X; R) \times H^\ell(X; R) \ar[r] & H^{k + \ell}(X; R)\\
+ ([\phi], [\psi] ) \ar[r, maps to] & \lbrack\phi \smile \psi\rbrack
+ \end{tikzcd}
+ \]
+\end{cor}
+
+\begin{proof}
+ To see this is defined at all, as $d \phi = 0 = d \psi$, we have
+ \[
+ d (\phi \smile \psi) = (d \phi) \smile \psi \pm \phi \smile (d \psi) = 0.
+ \]
+ So $\phi \smile \psi$ is a cocycle, and represents the cohomology class. To see this is well-defined, if $\phi' = \phi + d\tau$, then
+ \[
+ \phi' \smile \psi = \phi \smile \psi + d \tau \smile \psi = \phi \smile \psi + d(\tau \smile \psi) \pm \tau \smile (d \psi).
+ \]
+ Using the fact that $d \psi = 0$, we know that $\phi' \smile \psi$ and $\phi \smile \psi$ differ by a boundary, so $[\phi' \smile \psi] = [\phi \smile \psi]$. The case where we change $\psi$ is similar.
+\end{proof}
+
+Note that the operation $\smile$ is associative on cochains, so associative on $H^*$ too.
+
+Also, there is an element $1 \in C^0(X; R)$, given by the map $C_0(X) \to R$ sending $\sigma \mapsto 1$ for all $\sigma$. Then we have
+\[
+ [1] \smile [\phi] = [\phi].
+\]
+So we have
+\begin{prop}
+ $(H^*(X; R), \smile, [1])$ is a unital ring.
+\end{prop}
+
+Note that this is not necessarily commutative! Instead, we have the following \term{graded commutative} condition.
+\begin{prop}
+ Let $R$ be a commutative ring. If $\alpha \in H^k(X; R)$ and $\beta \in H^\ell(X; R)$, then we have
+ \[
+ \alpha \smile \beta = (-1)^{k\ell}\beta \smile \alpha
+ \]
+\end{prop}
+
+Note that this is only true for the cohomology classes. It is not true in general for the cochains. So we would expect that this is rather annoying to prove.
+
+The proof relies on the following observation:
+
+\begin{prop}
+ The cup product is natural, i.e.\ if $f: X \to Y$ is a map, and $\alpha, \beta \in H^*(Y; R)$, then
+ \[
+ f^*(\alpha \smile \beta) = f^*(\alpha) \smile f^*(\beta).
+ \]
+ So $f^*$ is a homomorphism of unital rings.
+\end{prop}
+
+\begin{proof}[Proof of previous proposition]
+ Let $\rho_n: C_n(X) \to C_n(X)$ be given by
+ \[
+ \sigma \mapsto (-1)^{n(n + 1)/2} \sigma|_{[v_n, v_{n - 1}, \ldots, v_0]}
+ \]
+ The $\sigma|_{[v_n, v_{n - 1}, \ldots, v_0]}$ tells us that we reverse the order of the vertices, and the factor of $(-1)^{n(n + 1)/2}$ is the sign of the permutation that reverses $0, \cdots, n$. For convenience, we write
+ \[
+ \varepsilon_n = (-1)^{n (n + 1)/2}.
+ \]
+ \begin{claim}
+ We claim that $\rho_{\Cdot}$ is a chain map, and is chain homotopic to the identity.
+ \end{claim}
+ We will prove this later.
+
+ Suppose the claim holds. We let $\phi \in C^k(X; R)$ represent $\alpha$ and $\psi \in C^\ell(X; R)$ represent $\beta$. Then we have
+ \begin{align*}
+ (\rho^* \phi \smile \rho^* \psi)(\sigma) &= (\rho^* \phi)(\sigma|_{[v_0, \ldots, v_k]}) (\rho^* \psi)(\sigma|_{[v_k, \ldots, v_{k + \ell}]})\\
+ &= \phi(\varepsilon_k \cdot \sigma|_{[v_k, \ldots, v_0]}) \psi(\varepsilon_\ell \sigma|_{[v_{k + \ell}, \ldots, v_k]}).
+ \end{align*}
+ Thus, we can compute
+ \begin{align*}
+ \rho^*(\psi \smile \phi)(\sigma) &= (\psi \smile \phi)(\varepsilon_{k + \ell} \sigma|_{[v_{k + \ell}, \ldots, v_0]})\\
+ &= \varepsilon_{k + \ell} \psi(\sigma|_{[v_{k + \ell}, \ldots, v_k]}) \phi(\sigma|_{[v_k, \ldots, v_0]})\\
+ &= \varepsilon_{k + \ell}\varepsilon_k \varepsilon_\ell (\rho^* \phi \smile \rho^* \psi)(\sigma).
+ \end{align*}
+ By checking it directly, we can see that $\varepsilon_{k + \ell}\varepsilon_k \varepsilon_\ell = (-1)^{k\ell}$. So we have
+ \begin{align*}
+ \alpha \smile \beta &= [\phi \smile \psi] \\
+ &= [\rho^* \phi \smile \rho^* \psi] \\
+ &= (-1)^{k\ell}[\rho^*(\psi \smile \phi)] \\
+ &= (-1)^{k\ell}[\psi \smile \phi] \\
+ &= (-1)^{k\ell} \beta \smile \alpha.
+ \end{align*}
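+ The sign identity $\varepsilon_{k + \ell}\varepsilon_k \varepsilon_\ell = (-1)^{k\ell}$ used here can indeed be checked directly: the total exponent is
+ \[
+ \frac{(k + \ell)(k + \ell + 1)}{2} + \frac{k(k + 1)}{2} + \frac{\ell(\ell + 1)}{2} = k^2 + \ell^2 + k\ell + k + \ell \equiv k\ell \pmod 2,
+ \]
+ since $k^2 + k$ and $\ell^2 + \ell$ are always even.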
+ Now it remains to prove the claim. We have
+ \begin{align*}
+ \d \rho(\sigma) &= \varepsilon_n \sum_{i = 0}^n (-1)^i \sigma|_{[v_n, \ldots, \hat{v}_{n - i}, \ldots, v_0]}\\
+ \rho(\d \sigma) &= \rho\left(\sum_{i = 0}^n (-1)^i \sigma|_{[v_0, \ldots, \hat{v}_i, \ldots, v_n]}\right)\\
+ &= \varepsilon_{n - 1} \sum_{j = 0}^n (-1)^j \sigma|_{[v_n, \ldots, \hat{v}_j, \ldots, v_0]}.
+ \end{align*}
+ We now notice that $\varepsilon_{n - 1}(-1)^{n - i} = \varepsilon_n (-1)^i$. So this is a chain map!
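+ Indeed, this sign identity holds because
+ \[
+ \frac{\varepsilon_n}{\varepsilon_{n - 1}} = (-1)^{n(n + 1)/2 - n(n - 1)/2} = (-1)^n,
+ \]
+ so $\varepsilon_{n - 1}(-1)^{n - i} = \varepsilon_n (-1)^{-n} (-1)^{n - i} = \varepsilon_n (-1)^{-i} = \varepsilon_n (-1)^i$.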
+
+ We now define a chain homotopy. This time, we need a ``twisted prism''. We let
+ \[
+ P_n = \sum_i (-1)^i \varepsilon_{n - i} [v_0, \cdots, v_i, w_n, \cdots, w_i] \in C_{n + 1}([0, 1] \times \Delta^n),
+ \]
+ where $v_0, \cdots, v_n$ are the vertices of $\{0\} \times \Delta^n$ and $w_0, \cdots, w_n$ are the vertices of $\{1\} \times \Delta^n$.
+
+ We let $\pi: [0, 1] \times \Delta^n \to \Delta^n$ be the projection, and let $F_n^X: C_n(X) \to C_{n + 1}(X)$ be given by
+ \[
+ \sigma \mapsto (\sigma \circ \pi)_\#(P_n).
+ \]
+ We calculate
+ \begin{align*}
+ \d F_n^X(\sigma) &= (\sigma \circ \pi)_\#(\d P_n) \\
+ &= (\sigma \circ \pi)_\#\left(\sum_i \left(\sum_{j \leq i}(-1)^j (-1)^i \varepsilon_{n - i}[v_0, \cdots, \hat{v}_j, \cdots, v_i, w_n, \cdots, w_i]\right)\right.\\
+ &\quad +\left.\left(\sum_{j \geq i} (-1)^{n + i + 1 - j}(-1)^i \varepsilon_{n - i}[v_0, \cdots, v_i, w_n, \cdots, \hat{w}_j, \cdots, w_i]\right)\right).
+ \end{align*}
+ The terms with $j = i$ give
+ \begin{align*}
+ &(\sigma \circ \pi)_\#\left(\sum_i \varepsilon_{n - i}[v_0, \cdots, v_{i - 1}, w_n, \cdots, w_i]\right. \\
+ &\quad+ \left.\sum_i (-1)^{n + 1}(-1)^i \varepsilon_{n - i}[v_0, \cdots, v_i, w_n,\cdots, w_{i + 1}]\right)\\
+ ={}& (\sigma \circ \pi)_\#(\varepsilon_n[w_n,\cdots, w_0] - [v_0, \cdots, v_n])\\
+ ={}& \rho(\sigma) - \sigma
+ \end{align*}
+ The terms with $j \not= i$ are precisely $-F_{n - 1}^X (\d \sigma)$ as required. It is easy to see that the terms are indeed the right terms, and we just have to check that the signs are right. I'm not doing that.
+\end{proof}
+
+There are some other products we can define. One example is the cross product:
+\begin{defi}[Cross product]\index{cross product}
+ Let $\pi_X: X \times Y \to X$, $\pi_Y: X \times Y\to Y$ be the projection maps. Then we have a \emph{cross product}
+ \[
+ \begin{tikzcd}[row sep = 0ex
+ ,/tikz/column 1/.append style={anchor=base east}
+ ,/tikz/column 2/.append style={anchor=base west}
+ ]
+ \times: H^k(X; R) \otimes_R H^\ell(Y; R) \ar[r] & H^{k + \ell}(X \times Y; R)\\
+ a \otimes b \ar[r, maps to] & (\pi_X^* a) \smile (\pi_Y^* b)
+ \end{tikzcd}.
+ \]
+\end{defi}
+Note that the diagonal map $\Delta: X \to X \times X$ given by $\Delta(x) = (x, x)$ satisfies
+\[
+ \Delta^*(a \times b) = a \smile b
+\]
+for all $a, b \in H^*(X; R)$. So these two products determine each other.
+
+
+There is also a \emph{relative cup} product
+\[
+ \smile: H^k(X, A; R) \otimes H^\ell(X; R) \to H^{k + \ell}(X, A; R)
+\]
+given by the same formula. Indeed, to see this is properly defined, note that if $\phi \in C^k(X, A; R)$, then $\phi$ is a map
+\[
+ \phi:C_k(X, A) = \frac{C_k(X)}{C_k(A)} \to R.
+\]
+In other words, it is a map $C_k(X) \to R$ that vanishes on $C_k(A)$. Then if $\sigma \in C_{k + \ell}(A)$ and $\psi \in C^\ell(X; R)$, then
+\[
+ (\phi \smile \psi)(\sigma) = \phi(\sigma|_{[v_0, \ldots, v_k]}) \cdot \psi(\sigma|_{[v_k, \ldots, v_{k + \ell}]}).
+\]
+We now notice that $\sigma|_{[v_0, \ldots, v_k]} \in C_k(A)$. So $\phi$ kills it, and this vanishes. So $\phi \smile \psi$ vanishes on $C_{k + \ell}(A)$, and hence defines a class in $H^{k + \ell}(X, A; R)$.
+
+You might find it weird that the two factors of the cup product are in different things, but note that a relative cohomology class is in particular a cohomology class. So this restricts to a map
+\[
+ \smile: H^k(X, A; R) \otimes H^\ell(X, A; R) \to H^{k + \ell}(X, A; R),
+\]
+but the result we gave is more general.
+
+\begin{eg}
+ Suppose $X$ is a space such that the cohomology classes are given by
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ $k$ & 0 & 1 & 2 & 3 & 4 & 5 & 6\\
+ $H^k(X, \Z)$ & $\Z$ & 0 & 0 & $\Z = \bra x\ket$ & 0 & 0 & $\Z = \bra y\ket$
+ \end{tabular}
+ \end{center}
+ What can $x \smile x$ be? By the graded commutativity property, we have
+ \[
+ x\smile x = - x \smile x.
+ \]
+ So we know $2(x \smile x) = 0 \in H^6(X, \Z) \cong \Z$. So we must have $x \smile x = 0$.
+\end{eg}
+
+\section{\texorpdfstring{K\"unneth}{Kunneth} theorem and universal coefficients theorem}
+We are going to prove two theorems of similar flavour --- K\"unneth's theorem and the universal coefficients theorem. They are both fairly algebraic results that relate different homology and cohomology groups. They will be very useful when we prove things about (co)homologies in general.
+
+In both cases, we will not prove the ``full'' theorem, as they require knowledge of certain objects known as $\mathrm{Tor}$ and $\Ext$. Instead, we will focus on a particular case where the $\mathrm{Tor}$ and $\Ext$ vanish, so that we can avoid mentioning them at all.
+
+We start with K\"unneth's theorem.
+\begin{thm}[K\"unneth's theorem]\index{K\"unneth's theorem}
+ Let $R$ be a commutative ring, and suppose that $H^n(Y; R)$ is a free $R$-module for each $n$. Then the cross product map
+ \[
+ \begin{tikzcd}
+ \displaystyle\bigoplus_{k + \ell = n}H^k(X; R) \otimes H^\ell(Y; R) \ar[r, "\times"] & H^n(X \times Y; R)
+ \end{tikzcd}
+ \]
+ is an isomorphism for every $n$, for every finite cell complex $X$.
+
+ It follows from the five lemma that the same holds if we have a relative complex $(Y, A)$ instead of just $Y$.
+\end{thm}
+For convenience, we will write $H^*(X; R) \otimes H^*(Y; R)$ for the graded $R$-module which in grade $n$ is given by
+\[
+ \bigoplus_{k + \ell = n} H^k(X; R) \otimes H^\ell (Y; R).
+\]
+Then K\"unneth says the map given by
+\[
+ \begin{tikzcd}
+ H^*(X; R) \otimes H^*(Y; R) \ar[r, "\times"] & H^*(X \times Y; R)
+ \end{tikzcd}
+\]
+is an isomorphism of graded rings.
+
+\begin{proof}
+ Let
+ \[
+ F^n(-) = \bigoplus_{k + \ell = n} H^k(-; R) \otimes H^\ell(Y; R).
+ \]
+ We similarly define
+ \[
+ G^n(-) = H^n(-\times Y; R).
+ \]
+ We observe that for each $X$, the cross product gives a map $\times: F^n(X) \to G^n(X)$, and, crucially, we know $\times_*: F^n(*) \to G^n(*)$ is an isomorphism, since $F^n(*) \cong G^n(*) \cong H^n(Y; R)$.
+
+ The strategy is to show that $F^n(-)$ and $G^n(-)$ have the same formal structure as cohomology and agree on a point, and so must agree on all finite cell complexes.
+
+ \separator
+
+ It is clear that both $F^n$ and $G^n$ are homotopy invariant, because they are built out of homotopy invariant things.
+
+ \separator
+
+ We now want to define the cohomology of pairs. This is easy. We define
+ \begin{align*}
+ F^n(X, A) &= \bigoplus_{i + j = n} H^i(X, A; R) \otimes H^j(Y; R)\\
+ G^n(X, A) &= H^n(X \times Y, A \times Y; R).
+ \end{align*}
+ Again, the relative cup product gives us a relative cross product, which gives us a map $F^n(X, A) \to G^n(X, A)$.
+
+ It is immediate that $G^n$ has a long exact sequence associated to $(X, A)$, given by the usual long exact sequence of $(X \times Y, A \times Y)$. We would like to say $F$ has a long exact sequence as well, and this is where our hypothesis comes in.
+
+ If $H^*(Y; R)$ is a free $R$-module, then we can take the long exact sequence of $(X, A)$
+ \[
+ \begin{tikzcd}[column sep=small]
+ \cdots \ar[r] & H^{n - 1}(A; R) \ar[r, "\partial"] & H^n(X, A; R) \ar[r] & H^n(X; R) \ar[r] & H^n(A; R) \ar[r] & \cdots
+ \end{tikzcd},
+ \]
+ and then tensor with $H^j(Y; R)$. This preserves exactness, since $H^j(Y; R) \cong R^k$ for some $k$, so tensoring with $H^j(Y; R)$ just takes $k$ copies of this long exact sequence. By adding the different long exact sequences for different $j$ (with appropriate translations), we get a long exact sequence for $F$.
+
+ \separator
+
+ We now want to prove K\"unneth by induction on the number of cells and the dimension at the same time. We are going to prove that if $X = X' \cup_f D^n$ for some $f: S^{n - 1} \to X'$, and $\times: F(X') \to G(X')$ is an isomorphism, then $\times: F(X) \to G(X)$ is also an isomorphism. In doing so, we will assume that the result is true for attaching \emph{any} cells of dimension less than $n$.
+
+ Suppose $X = X' \cup_f D^n$ for some $f:S^{n - 1} \to X'$. We get long exact sequences
+ \[
+ \begin{tikzcd}
+ F^{* - 1}(X') \ar[r] \ar[d, "\times", "\sim"'] & F^*(X, X') \ar[r] \ar[d, "\times"] & F^*(X) \ar[r] \ar[d, "\times"] & F^*(X') \ar[r] \ar[d, "\times", "\sim"'] & F^{* + 1}(X, X') \ar[d, "\times"]\\
+ G^{* - 1}(X') \ar[r] & G^*(X, X') \ar[r] & G^*(X) \ar[r] & G^*(X') \ar[r] & G^{* + 1}(X, X')
+ \end{tikzcd}
+ \]
+ Note that we need to manually check that the boundary maps $\partial$ commute with the cross product, since this is not induced by maps of spaces, but we will not do it here.
+
+ Now by the five lemma, it suffices to show that the map on the relative cohomology $\times: F^n(X, X') \to G^n(X, X')$ is an isomorphism.
+
+ We now notice that $F^*(-)$ and $G^*(-)$ have excision. Since $(X, X')$ is a good pair, we have a commutative square
+ \[
+ \begin{tikzcd}
+ F^*(D^n, \partial D^n) \ar[d, "\times"] & F^*(X, X') \ar[l, "\sim"] \ar[d, "\times"] \\
+ G^*(D^n, \partial D^n) & G^*(X, X') \ar[l, "\sim"]
+ \end{tikzcd}
+ \]
+ So we now only need the left-hand map to be an isomorphism. We look at the long exact sequence for $(D^n, \partial D^n)$!
+ \[
+ \begin{tikzcd}[column sep=small]
+ F^{* - 1}(\partial D^n) \ar[r] \ar[d, "\times", "\sim"'] & F^*(D^n, \partial D^n) \ar[r] \ar[d, "\times"] & F^*(D^n) \ar[r] \ar[d, "\times", "\sim"'] & F^*(\partial D^n) \ar[r] \ar[d, "\times", "\sim"'] & F^{* + 1}(D^n, \partial D^n) \ar[d, "\times"]\\
+ G^{* - 1}(\partial D^n) \ar[r] & G^*(D^n, \partial D^n) \ar[r] & G^*(D^n) \ar[r] & G^*(\partial D^n) \ar[r] & G^{* + 1}(D^n, \partial D^n)
+ \end{tikzcd}
+ \]
+ But now we know the vertical maps for $D^n$ and $\partial D^n$ are isomorphisms --- the ones for $D^n$ because $D^n$ is contractible and we have proven the result for a point already, and the ones for $\partial D^n$ by induction.
+
+ So we are done.
+\end{proof}
+The conditions of the theorem require that $H^n(Y; R)$ is free. When will this hold? One important example is when $R$ is actually a field, in which case \emph{all} modules are free.
+
+\begin{eg}
+ Consider $H^*(S^1, \Z)$. We know it is $\Z$ in $* = 0, 1$, and $0$ elsewhere. Let's call the generator of $H^0(S^1, \Z)$ ``$1$'', and write $x$ for the generator of $H^1(S^1, \Z)$. Then we know that $x \smile x = 0$ since there isn't anything in degree $2$. So we know
+ \[
+ H^*(S^1, \Z) = \Z[x]/(x^2).
+ \]
+ Then K\"unneth's theorem tells us that
+ \[
+ H^*(T^n, \Z) \cong H^*(S^1; \Z)^{\otimes n},
+ \]
+ where $T^n = (S^1)^n$ is the $n$-torus, and this is an isomorphism of rings. So this is
+ \[
+ H^*(T^n, \Z) \cong \Z[x_1, \cdots, x_n]/(x_i^2, x_i x_j + x_j x_i),
+ \]
+ using the fact that $x_i, x_j$ have degree $1$ so anti-commute. Note that this has an interesting cup product! This ring is known as the \emph{exterior algebra in $n$ generators}.
+\end{eg}
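+For instance, when $n = 2$, this says that $H^*(T^2, \Z)$ has $\Z$-basis
+\[
+ 1 \in H^0,\quad x_1, x_2 \in H^1,\quad x_1 \smile x_2 \in H^2,
+\]
+with $x_2 \smile x_1 = -x_1 \smile x_2$ and $x_i \smile x_i = 0$, recovering the familiar cohomology groups $\Z, \Z^2, \Z$ of the torus.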
+
+\begin{eg}
+ Let $f: S^n \to T^n$ be a map for $n > 1$. We claim that this induces the zero map on the $n$th cohomology.
+
+ We look at the induced map on cohomology:
+ \[
+ \begin{tikzcd}
+ f^*: H^n(T^n; \Z) \ar[r] & H^n(S^n, \Z)
+ \end{tikzcd}.
+ \]
+ Looking at the presentation above, we know that $H^n(T^n, \Z)$ is generated by $x_1 \smile \cdots \smile x_n$, and $f^*$ sends it to $(f^* x_1) \smile \cdots \smile (f^* x_n)$. But $f^*x_i \in H^1(S^n, \Z) = 0$ for all $n > 1$. So $f^*(x_1 \smile \cdots \smile x_n) = 0$.
+\end{eg}
+Note that the statement does not involve cup products at all, but it would be much more difficult to prove this without using cup products!
+
+We are now going to prove another useful result.
+\begin{thm}[Universal coefficients theorem for (co)homology]\index{universal coefficient theorem for (co)homology}
+ Let $R$ be a PID and $M$ an $R$-module. Then there is a natural map
+ \[
+ H_*(X; R)\otimes M \to H_*(X; M).
+ \]
+ If $H_n(X; R)$ is a free module for each $n$, then this is an isomorphism. Similarly, there is a natural map
+ \[
+ H^*(X; M) \to \Hom_R(H_*(X; R), M),
+ \]
+ which is again an isomorphism if each $H_n(X; R)$ is free.
+\end{thm}
+In particular, when $R = \Z$, then an $R$-module is just an abelian group, and this tells us how homology and cohomology with coefficients in an abelian group relate to the usual homology and cohomology theory.
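+\begin{eg}
+ As a quick sanity check, take $X = S^k$ with $k > 0$ and $R = \Z$. Each $H_n(S^k; \Z)$ is free, so for any abelian group $M$ we get
+ \[
+ H_n(S^k; M) \cong H_n(S^k; \Z) \otimes M \cong
+ \begin{cases}
+ M & n = 0, k\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+\end{eg}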
+
+\begin{proof}
+ Let $C_n$ be $C_n(X; R)$ and $Z_n \subseteq C_n$ be the cycles and $B_n \subseteq Z_n$ the boundaries. Then there is a short exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & Z_n \ar[r, "i"] & C_n \ar[r, "g"] & B_{n - 1} \ar[r] & 0
+ \end{tikzcd},
+ \]
+ and $B_{n - 1} \leq C_{n - 1}$ is a submodule of a free $R$-module, and is free, since $R$ is a PID. So by picking a basis, we can find a map $s: B_{n - 1} \to C_n$ such that $g \circ s = \id_{B_{n - 1}}$. This induces an isomorphism
+ \[
+ \begin{tikzcd}
+ i \oplus s: Z_n \oplus B_{n - 1} \ar[r, "\sim"] & C_n.
+ \end{tikzcd}
+ \]
+ Now tensoring with $M$, we obtain
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & Z_n \otimes M \ar[r] & C_n \otimes M \ar[r] & B_{n - 1} \otimes M \ar[r] & 0
+ \end{tikzcd},
+ \]
+ which is exact because we have
+ \[
+ C_n \otimes M \cong (Z_n \oplus B_{n - 1}) \otimes M \cong (Z_n \otimes M) \oplus (B_{n - 1} \otimes M).
+ \]
+ So we obtain a short exact sequence of chain complexes
+ \[
+ \begin{tikzcd}[column sep=scriptsize]
+ 0 \ar[r] & (Z_n \otimes M, 0) \ar[r] & (C_n \otimes M, d \otimes \id) \ar[r] & (B_{n - 1} \otimes M, 0) \ar[r] & 0
+ \end{tikzcd},
+ \]
+ which gives a long exact sequence in homology:
+ \[
+ \begin{tikzcd}[column sep=scriptsize]
+ \cdots \ar[r] & B_n \otimes M \ar[r] & Z_n \otimes M \ar[r] & H_n(X; M) \ar[r] & B_{n - 1} \otimes M \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ We'll leave this for a while, and look at another short exact sequence. By definition of homology, we have a short exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & B_n \ar[r] & Z_n \ar[r] & H_n(X; R) \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ As $H_n(X; R)$ is free, we have a splitting $t: H_n(X; R) \to Z_n$, so as above, tensoring with $M$ preserves exactness, so we have
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & B_n\otimes M \ar[r] & Z_n\otimes M \ar[r] & H_n(X; R)\otimes M \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ Hence we know that $B_n \otimes M \to Z_n \otimes M$ is injective. So our previous long exact sequence breaks up to
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & B_n \otimes M \ar[r] & Z_n \otimes M \ar[r] & H_n(X; M) \ar[r] & 0.
+ \end{tikzcd}
+ \]
+ Since we have two short exact sequences with the same first two terms, the last terms have to be isomorphic as well, i.e.\ $H_n(X; M) \cong H_n(X; R) \otimes M$.
+
+ The cohomology version is similar.
+\end{proof}
+
+\section{Vector bundles}
+\subsection{Vector bundles}
+We now end the series of random topics, and work on a more focused topic. We are going to look at vector bundles. Intuitively, a vector bundle over a space $X$ is a continuous assignment of a vector space to each point in $X$. In the first section, we are just going to look at vector bundles as topological spaces. In the next section, we are going to look at homological properties of vector bundles.
+
+\begin{defi}[Vector bundle]\index{vector bundle}
+ Let $X$ be a space. A (real) \emph{vector bundle} of dimension $d$ over $X$ is a map $\pi:E \to X$, with a (real) vector space structure on each \term{fiber} $E_x = \pi^{-1}(\{x\})$, subject to the local triviality condition: for each $x \in X$, there is a neighbourhood $U$ of $x$ and a homeomorphism $\varphi:E|_U = \pi^{-1}(U) \to U \times \R^d$ such that the following diagram commutes
+ \[
+ \begin{tikzcd}[column sep=0em]
+ E|_U \ar[rr, "\varphi"] \ar[rd, "\pi"] & & U \times \R^d \ar[dl, "\pi_1"]\\
+ & U
+ \end{tikzcd},
+ \]
+ and for each $y \in U$, the restriction $\varphi|_{E_y}: E_y \to \{y\} \times \R^d$ is a \emph{linear} isomorphism. This map is known as a \term{local trivialization}.
+\end{defi}
+We have an analogous definition for complex vector bundles.
+
+\begin{defi}[Section]\index{section}
+ A \emph{section} of a vector bundle $\pi: E \to X$ is a map $s: X \to E$ such that $\pi \circ s = \id$. In other words, $s(x) \in E_x$ for each $x$.
+\end{defi}
+
+\begin{defi}[Zero section]\index{zero section}\index{section!zero}
+ The \emph{zero section} of a vector bundle is $s_0: X \to E$ given by $s_0(x) = 0 \in E_x$.
+\end{defi}
+
+Note that the composition
+\[
+ \begin{tikzcd}
+ E \ar[r, "\pi"] & X \ar[r, "s_0"] & E
+ \end{tikzcd}
+\]
+is homotopic to the identity map $\id_E$, since each $E_x$ is contractible.
+
+One important operation we can do on vector bundles is \emph{pullback}:
+\begin{defi}[Pullback of vector bundles]\index{pullback!vector bundle}\index{vector bundle!pullback}
+ Let $\pi: E \to X$ be a vector bundle, and $f: Y \to X$ a map. We define the \emph{pullback}
+ \[
+ f^* E = \{(y, e) \in Y \times E: f(y) = \pi(e)\}.
+ \]
+ This has a map $f^*\pi: f^*E \to Y$ given by projecting to the first coordinate. The vector space structure on each fiber is given by the identification $(f^*E)_y = E_{f(y)}$. It is a little exercise in topology to show that the local trivializations of $\pi: E \to X$ induce local trivializations of $f^*\pi: f^* E \to Y$.
+\end{defi}
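+\begin{eg}
+ If $A \subseteq X$ is a subspace and $i: A \hookrightarrow X$ is the inclusion, then $i^* E$ is just the restriction $E|_A = \pi^{-1}(A)$. At the other extreme, if $f: Y \to X$ is the constant map at $x_0 \in X$, then $f^* E = Y \times E_{x_0} \cong Y \times \R^d$ is the trivial bundle.
+\end{eg}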
+
+Everything we can do on vector spaces can be done on vector bundles, by doing it on each fiber.
+\begin{defi}[Whitney sum of vector bundles]\index{vector bundle!Whitney sum}\index{Whitney sum of vector bundles}
+ Let $\pi: E \to X$ and $\rho: F \to X$ be vector bundles. The \emph{Whitney sum} is given by
+ \[
+ E \oplus F = \{(e, f)\in E \times F: \pi(e) = \rho(f)\}.
+ \]
+ This has a natural map $\pi \oplus \rho: E \oplus F \to X$ given by $(\pi \oplus \rho)(e, f) = \pi(e) = \rho(f)$. This is again a vector bundle, with $(E \oplus F)_x = E_x \oplus F_x$ and again local trivializations of $E$ and $F$ induce one for $E \oplus F$.
+\end{defi}
+Tensor products can be defined similarly.
+
+Similarly, we have the notion of subbundles.
+\begin{defi}[Vector subbundle]\index{vector sub-bundle}\index{vector bundle!subbundle}
+ Let $\pi: E \to X$ be a vector bundle, and $F \subseteq E$ a subspace such that for each $x \in X$ there is a local trivialization $(U, \varphi)$
+ \[
+ \begin{tikzcd}[column sep=0em]
+ E|_U \ar[rr, "\varphi"] \ar[rd, "\pi"] & & U \times \R^d \ar[dl, "\pi_1"]\\
+ & U
+ \end{tikzcd},
+ \]
+ such that $\varphi$ takes $F|_U$ to $U \times \R^k$, where $\R^k \subseteq \R^d$. Then we say $F$ is a \emph{vector sub-bundle}.
+\end{defi}
+
+\begin{defi}[Quotient bundle]\index{quotient bundle}\index{vector bundle!quotient}
+ Let $F$ be a sub-bundle of $E$. Then $E/F$, given by the fiberwise quotient, is a vector bundle, known as the \emph{quotient bundle}.
+\end{defi}
+
+We now look at one of the most important examples of a vector bundle. In some sense, this is the ``universal'' vector bundle, as we will see later.
+\begin{eg}[Grassmannian manifold]\index{Grassmannian manifold}\index{$\Gr_k(\R^n)$}
+ We let
+ \[
+ X = \Gr_k(\R^n) = \{k\text{-dimensional linear subspaces of $\R^n$}\}.
+ \]
+ To topologize this space, we pick a fixed $V \in \Gr_k(\R^n)$. Then any $k$-dimensional subspace can be obtained by applying some invertible linear map to $V$. So we obtain a surjection
+ \begin{align*}
+ \GL_n(\R) &\to \Gr_k(\R^n)\\
+ M &\mapsto M(V).
+ \end{align*}
+ So we can give $\Gr_k(\R^n)$ the quotient (final) topology. For example,
+ \[
+ \Gr_1(\R^{n + 1}) = \RP^n.
+ \]
+ Now to construct a vector bundle, we need to assign a vector space to each point in $X$. But a point in $\Gr_k(\R^n)$ \emph{is} a vector space, so we have an obvious definition
+ \[
+ E = \{(V, v) \in \Gr_k(\R^n) \times \R^n: v \in V\}.
+ \]
+ This has the evident projection $\pi: E \to X$ given by the first projection. We then have
+ \[
+ E_V = V.
+ \]
+ To see that this is a vector bundle, we have to check local triviality. We fix a $V \in \Gr_k(\R^n)$, and let
+ \[
+ U = \{W \in\Gr_k(\R^n): W \cap V^\perp = \{0\}\}.
+ \]
+ We now construct a map $\varphi: E|_U \to U \times V \cong U \times \R^k$ by mapping $(W, w)$ to $(W, \pr_V(w))$, where $\pr_V: \R^n \to V$ is the orthogonal projection.
+
+ Now if $W \in U$, then $\pr_V|_W: W \to V$ is injective, since $W \cap V^\perp = \{0\}$, hence an isomorphism. So $\varphi$ is a homeomorphism. We call this bundle $\gamma_{k, n}^\R \to \Gr_k(\R^n)$.\index{$\gamma_{k, n}^\R$}
+
+ In the same way, we can get a canonical complex vector bundle $\gamma_{k, n}^\C \to \Gr_k(\C^n)$.
+\end{eg}
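+For example, $\gamma_{1, 2}^\R \to \Gr_1(\R^2) = \RP^1 \cong S^1$ is the familiar M\"obius band line bundle over the circle, which is the simplest example of a non-trivial vector bundle.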
+
+
+\begin{eg}
+ Let $M$ be a smooth $d$-dimensional manifold, then it naturally has a $d$-dimensional \term{tangent bundle} $\pi: TM \to M$ with $(TM)|_x = T_x M$.
+
+ If $M \subseteq N$ is a smooth submanifold, with $i$ the inclusion map, then $TM$ is a subbundle of $i^* TN$. Note that we cannot say $TM$ is a subbundle of $TN$, since they have different base spaces, and thus cannot be compared without pulling back.
+
+ The \term{normal bundle} of $M$ in $N$ is
+ \[
+ \nu_{M \subseteq N} = \frac{i^* TN}{TM}.
+ \]
+\end{eg}
+
+Here is a theorem we have to take on faith, because proving it will require some differential geometry.
+\begin{thm}[Tubular neighbourhood theorem]\index{tubular neighbourhood theorem}
+ Let $M \subseteq N$ be a smooth submanifold. Then there is an open neighbourhood $U$ of $M$ and a homeomorphism $\nu_{M \subseteq N} \to U$, and moreover, this homeomorphism is the identity on $M$ (where we view $M$ as a submanifold of $\nu_{M \subseteq N}$ by the image of the zero section).
+\end{thm}
+This tells us that locally, the neighbourhood of $M$ in $N$ looks like $\nu_{M \subseteq N}$.
+
+We will also need some results from point set topology:
+\begin{defi}[Partition of unity]\index{partition of unity}
+ Let $X$ be a compact Hausdorff space, and $\{U_\alpha\}_{\alpha \in I}$ be an open cover. A \emph{partition of unity subordinate to $\{U_\alpha\}$} is a collection of functions $\lambda_\alpha: X \to [0, \infty)$ such that
+ \begin{enumerate}
+ \item $\supp(\lambda_\alpha) = \overline{\{x \in X: \lambda_\alpha(x) > 0\}} \subseteq U_\alpha$.
+ \item Each $x \in X$ lies in finitely many of these $\supp (\lambda_\alpha)$.
+ \item For each $x$, we have
+ \[
+ \sum_{\alpha \in I}\lambda_\alpha(x) = 1.
+ \]
+ \end{enumerate}
+\end{defi}
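+\begin{eg}
+ For example (a sketch; the precise cutoffs are not important), take $X = [0, 1]$ with the open cover $U_1 = [0, 2/3)$, $U_2 = (1/3, 1]$. We can let $\lambda_1$ be $1$ on $[0, 5/12]$, $0$ on $[7/12, 1]$ and linear in between, and set $\lambda_2 = 1 - \lambda_1$. Then $\supp(\lambda_1) = [0, 7/12] \subseteq U_1$ and $\supp(\lambda_2) = [5/12, 1] \subseteq U_2$.
+\end{eg}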
+
+\begin{prop}
+ Partitions of unity exist for any open cover.
+\end{prop}
+
+You might have seen this in differential geometry, but this is easier, since we do not require the partitions of unity to be smooth.
+
+Using this, we can prove the following:
+\begin{lemma}
+ Let $\pi: E \to X$ be a vector bundle over a compact Hausdorff space. Then there is a continuous family of inner products on $E$. In other words, there is a map $E \otimes E \to \R$ which restricts to an inner product on each $E_x$.
+\end{lemma}
+
+\begin{proof}
+ We notice that every trivial bundle has an inner product, and since every bundle is locally trivial, we can patch these up using partitions of unity.
+
+ Let $\{U_\alpha\}_{\alpha \in I}$ be an open cover of $X$ with local trivializations
+ \[
+ \varphi_\alpha: E|_{U_\alpha} \to U_\alpha \times \R^d.
+ \]
+ The inner product on $\R^d$ then gives us an inner product on $E|_{U_\alpha}$, say $\bra \ph, \ph\ket_\alpha$. We let $\lambda_\alpha$ be a partition of unity associated to $\{U_\alpha\}$. Then for $u\otimes v \in E \otimes E$, we define
+ \[
+ \bra u, v\ket = \sum_{\alpha \in I} \lambda_\alpha(\pi(u)) \bra u, v\ket_\alpha.
+ \]
+ Now if $\pi(u) = \pi(v)$ is not in $U_\alpha$, then we don't know what we mean by $\bra u, v\ket_\alpha$, but it doesn't matter, because $\lambda_\alpha(\pi(u)) = 0$. Also, since the partition of unity is locally finite, we know this is a finite sum.
+
+ It is then straightforward to see that this is indeed an inner product, since a positive linear combination of inner products is an inner product.
+\end{proof}
+
+Similarly, we have
+\begin{lemma}
+ Let $\pi: E \to X$ be a vector bundle over a compact Hausdorff space. Then there is some $N$ such that $E$ is a vector subbundle of $X \times \R^N$.
+\end{lemma}
+
+\begin{proof}
+ Let $\{U_\alpha\}$ be a trivializing cover of $X$. Since $X$ is compact, we may wlog assume the cover is finite. Call them $U_1, \cdots, U_n$. We let
+ \[
+ \varphi_i: E|_{U_i} \to U_i \times \R^d.
+ \]
+ We note that on each patch, $E|_{U_i}$ embeds into a trivial bundle, because it \emph{is} a trivial bundle. So we can add all of these together. The trick is to use a partition of unity, again.
+
+ We define $f_i$ to be the composition
+ \[
+ \begin{tikzcd}
+ E|_{U_i} \ar[r, "\varphi_i"] & U_i \times \R^d \ar[r, "\pi_2"] & \R^d
+ \end{tikzcd}.
+ \]
+ Then given a partition of unity $\lambda_i$, we define
+ \begin{align*}
+ f: E &\to X \times (\R^d)^n\\
+ v&\mapsto (\pi(v), \lambda_1(\pi(v)) f_1(v), \lambda_2(\pi(v)) f_2(v), \cdots, \lambda_n(\pi(v)) f_n(v)).
+ \end{align*}
+ We see that this is injective. If $v, w$ belong to different fibers, then the first coordinate distinguishes them. If they are in the same fiber, then there is some $U_i$ with $\lambda_i(\pi(v)) \not= 0$. Then the $i$th coordinate distinguishes them. This then exhibits $E$ as a subbundle of $X \times (\R^d)^n$.
+\end{proof}
+
+\begin{cor}
+ Let $\pi: E \to X$ be a vector bundle over a compact Hausdorff space. Then there is some $p: F \to X$ such that $E \oplus F \cong X \times \R^N$ for some $N$. In particular, $E$ embeds as a subbundle of a trivial bundle.
+\end{cor}
+
+\begin{proof}
+ By the previous lemma, we may assume $E$ is a subbundle of a trivial bundle $X \times \R^N$. Choosing a continuous family of inner products on the trivial bundle, we can then take $F$ to be the fiberwise orthogonal complement of $E$.
+\end{proof}
+
+Now suppose again we have a vector bundle $\pi: E\to X$ over a compact Hausdorff $X$. We can then choose an embedding $E \subseteq X \times \R^N$, and then we get a map $f_\pi: X \to \Gr_d (\R^N)$ sending $x$ to $E_x \subseteq \R^N$. Moreover, if we pull back the tautological bundle along $f_\pi$, then we have
+\[
+ f_{\pi}^* \gamma_{d, N}^\R \cong E.
+\]
+So every vector bundle is the pullback of the canonical bundle $\gamma_{d, N}^\R$ over a Grassmannian. However, there is a slight problem. Different vector bundles will require different $N$'s. So we will have to overcome this problem if we want to make a statement of the sort ``a vector bundle is the same as a map to a Grassmannian''.
+
+The solution is to construct some sort of $\Gr_d(\R^\infty)$. But infinite-dimensional vector spaces are weird. So instead we take the union of all $\Gr_d(\R^N)$. We note that for each $N$, there is an inclusion $\Gr_d(\R^N) \to \Gr_d(\R^{N + 1})$, which induces an inclusion of the canonical bundle. We can then take the union to obtain $\Gr_d(\R^\infty)$ with a canonical bundle $\gamma_d^\R$. Then the above result shows that each vector bundle over $X$ is the pullback of the canonical bundle $\gamma_d^\R$ along some map $f: X \to \Gr_d(\R^\infty)$.
+
+Note that a vector bundle over $X$ does not uniquely specify a map $X \to \Gr_d(\R^\infty)$, as there can be multiple embeddings of $E$ into the trivial bundle $X \times \R^N$. Indeed, if we wiggle the embedding a bit, then we can get a different map. So we don't have a correspondence between vector bundles $\pi: E \to X$ and maps $f_\pi: X \to \Gr_d(\R^\infty)$.
+
+The next best thing we can hope for is that the ``wiggle the embedding a bit'' is all we can do. More precisely, two maps $f, g: X \to \Gr_d(\R^\infty)$ pull back isomorphic vector bundles if and only if they are homotopic. This is indeed true:
+
+\begin{thm}
+ There is a correspondence
+ \[
+ \begin{tikzcd}[/tikz/column 1/.append style={anchor=base east}
+ ,/tikz/column 2/.append style={anchor=base west}
+ ,row sep=tiny]
+ \left\{\parbox{3cm}{\centering homotopy classes of maps $f: X \to \Gr_d(\R^\infty)$}\right\} \ar[r, leftrightarrow] & \left\{\parbox{3cm}{\centering $d$-dimensional vector bundles $\pi: E \to X$}\right\}\\
+ \lbrack f\rbrack \ar[r, maps to] & f^* \gamma_d^\R\\
+ \lbrack f_\pi\rbrack & \pi \ar[l, maps to]
+ \end{tikzcd}
+ \]
+\end{thm}
+The proof is mostly technical, and is left as an exercise on the example sheet.
+
+\subsection{Vector bundle orientations}
+We are now going to do something that actually involves algebraic topology. Unfortunately, we have to stop working with arbitrary bundles, and focus on \emph{orientable} bundles instead. So the first thing to do is to define orientability.
+
+What we are going to do is to come up with a rather refined notion of orientability. For each commutative ring $R$, we will have the notion of \emph{$R$-orientability}. The strength of this condition will depend on what $R$ is --- any vector bundle is $\F_2$-orientable, while $\Z$-orientability is the strongest --- if a vector bundle is $\Z$-orientable, then it is $R$-orientable for all $R$.
+
+While there isn't really a good geometric way to think about general $R$-orientability, the reason for this more refined notion is that whenever we want things to be true for (co)homology with coefficients in $R$, then we need the bundle to be $R$-orientable.
+
+Let's begin. Let $\pi: E \to X$ be a vector bundle of dimension $d$. We write
+\[
+ E^\# = E \setminus s_0(X).
+\]
+We now look at the relative cohomology groups
+\[
+ H^i(E_x, E_x^\#; R),
+\]
+where $E_x^\# = E_x \setminus \{0\}$.
+
+We know $E_x$ is a $d$-dimensional vector space, so we can choose an isomorphism $E_x \cong \R^d$. After making this choice, we know that
+\[
+ H^i(E_x, E_x^\#; R) \cong H^i(\R^d, \R^d \setminus \{0\}; R) =
+ \begin{cases}
+ R & i = d\\
+ 0 & \text{otherwise}
+ \end{cases}
+\]
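+Indeed, since $\R^d$ is contractible and $\R^d \setminus \{0\}$ deformation retracts to $S^{d - 1}$, the long exact sequence of the pair gives $H^i(\R^d, \R^d \setminus \{0\}; R) \cong \tilde{H}^{i - 1}(S^{d - 1}; R)$.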
+However, there is no canonical generator of $H^d(E_x, E_x^\#; R)$ as an $R$-module, as we had to pick an isomorphism $E_x \cong \R^d$.
+\begin{defi}[$R$-orientation]\index{$R$-orientation}\index{orientation}\index{local orientation}\index{local $R$-orientation}
+ A \term{local $R$-orientation} of $E$ at $x \in X$ is a choice of $R$-module generator $\varepsilon_x \in H^d(E_x, E_x^\#; R)$.
+
+ An \term{$R$-orientation} is a choice of local $R$-orientation $\{\varepsilon_x\}_{x \in X}$ which are compatible in the following way: if $U\subseteq X$ is open on which $E$ is trivial, and $x, y \in U$, then under the homeomorphisms (and in fact linear isomorphisms):
+ \[
+ \begin{tikzcd}
+ E_x \ar[rd, hook] \ar[rrrd, bend left, "h_x"'] \\
+ & E|_U \ar[r, "\varphi_\alpha", "\cong"']& U \times \R^d \ar[r, "\pi_2"] & \R^d\\
+ E_y \ar[ru, hook] \ar[rrru, bend right, "h_y"]
+ \end{tikzcd}
+ \]
+ the map
+ \[
+ h_y^* \circ (h_x^{-1})^*: H^d(E_x, E_x^\#; R) \to H^d(E_y, E_y^\#; R)
+ \]
+ sends $\varepsilon_x$ to $\varepsilon_y$. Note that this definition does not depend on the choice of $\varphi_\alpha$, because we used it twice, and the two uses cancel out.
+\end{defi}
+
+It seems pretty horrific to construct an orientation. However, it isn't really that bad. For example, we have
+\begin{lemma}
+ Every vector bundle is $\F_2$-orientable.
+\end{lemma}
+
+\begin{proof}
+ There is only one possible choice of generator.
+\end{proof}
+
+In the interesting cases, we are usually going to use the following result to construct orientations:
+\begin{lemma}
+ If $\{U_\alpha\}_{\alpha \in I}$ is a trivializing open cover of $X$ such that for each $\alpha, \beta \in I$, the homeomorphism
+ \[
+ \begin{tikzcd}
+ (U_\alpha \cap U_\beta) \times \R^d & E|_{U_\alpha \cap U_\beta} \ar[l, "\cong", "\varphi_\alpha"'] \ar[r, "\cong", "\varphi_\beta"'] & (U_\alpha \cap U_\beta) \times \R^d
+ \end{tikzcd}
+ \]
+ gives an orientation-preserving map from $(U_\alpha \cap U_\beta) \times \R^d$ to itself, i.e.\ has a positive determinant on each fiber, then $E$ is $R$-orientable for every $R$.
+\end{lemma}
+Note that we don't have to check the determinant at every point of $U_\alpha \cap U_\beta$. By continuity, it suffices to check one point in each connected component.
+
+\begin{proof}
+ Choose a generator $u \in H^d(\R^d, \R^d \setminus \{0\}; R)$. Then for $x \in U_\alpha$, we define $\varepsilon_x$ by pulling back $u$ along
+ \[
+ \begin{tikzcd}
+ E_x \ar[r, hook] & E|_{U_\alpha} \ar[r, "\varphi_\alpha"] & U_\alpha \times \R^d \ar[r, "\pi_2"] & \R^d
+ \end{tikzcd}.\tag{$\dagger_\alpha$}
+ \]
+ If $x \in U_\beta$ as well, then the map $(\dagger_\alpha)$ differs from $(\dagger_\beta)$ by post-composition with a linear map $L: \R^d \to \R^d$ of \emph{positive} determinant. We now use the fact that any linear map of positive determinant is homotopic to the identity. Indeed, both $L$ and $\id$ lie in $\GL_d^+(\R)$, which is path-connected, and a path between them gives a homotopy between the maps they represent. So we know $(\dagger_\alpha)$ is homotopic to $(\dagger_\beta)$. So they induce the same maps on cohomology classes.
+\end{proof}
+Now if we don't know that the maps have positive determinant, then $(\dagger_\alpha)$ and $(\dagger_\beta)$ might differ by a sign. So in any ring $R$ in which $2 = 0$, every vector bundle is $R$-orientable. This generalizes the previous result for $\F_2$.
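+
+For example, the simplest case where the lemma applies is a bundle with a single trivialization:
+\begin{eg}
+ The trivial bundle $E = X \times \R^d$ is $R$-orientable for every $R$: we can take the single trivialization $\varphi = \id: E \to X \times \R^d$, so there are no non-trivial transition maps to check. Concretely, choosing a generator $u \in H^d(\R^d, \R^d \setminus \{0\}; R)$ and pulling it back to each fiber gives a compatible family $\{\varepsilon_x\}_{x \in X}$.
+\end{eg}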
+
+\subsection{The Thom isomorphism theorem}
+We now get to the main theorem about vector bundles.
+\begin{thm}[Thom isomorphism theorem]
+ Let $\pi: E \to X$ be a $d$-dimensional vector bundle, and $\{\varepsilon_x\}_{x \in X}$ be an $R$-orientation of $E$. Then
+ \begin{enumerate}
+ \item $H^i(E, E^\#; R) = 0$ for $i < d$.
+ \item There is a unique class $u_E \in H^d(E, E^\#; R)$ which restricts to $\varepsilon_x$ on each fiber. This is known as the \term{Thom class}.
+ \item The map $\Phi$ given by the composition
+ \[
+ \begin{tikzcd}[column sep=large]
+ H^i(X; R) \ar[r, "\pi^*"] & H^i(E; R) \ar[r, "-\smile u_E"] & H^{i + d}(E, E^\#; R)
+ \end{tikzcd}
+ \]
+ is an isomorphism.
+ \end{enumerate}
+ Note that (i) follows from (iii), since $H^i(X; R) = 0$ for $i < 0$.
+\end{thm}
+Before we go on and prove this, we talk about why it is useful.
+
+\begin{defi}[Euler class]
+ Let $\pi: E \to X$ be a $d$-dimensional $R$-oriented vector bundle. We define the \term{Euler class} $e(E) \in H^d(X; R)$ to be the image of $u_E$ under the composition
+ \[
+ \begin{tikzcd}
+ H^d(E, E^\#; R) \ar[r] & H^d(E; R) \ar[r, "s_0^*"] & H^d(X; R)
+ \end{tikzcd}.
+ \]
+\end{defi}
+This is an example of a \term{characteristic class}, which is a cohomology class related to an oriented vector bundle that behaves nicely under pullback. More precisely, given a vector bundle $\pi: E \to X$ and a map $f: Y \to X$, we can form a pullback
+\[
+ \begin{tikzcd}
+ f^*E \ar[r, "\hat{f}"] \ar[d, "f^*\pi"] & E \ar[d, "\pi"]\\
+ Y \ar[r, "f"] & X
+ \end{tikzcd}.
+\]
+Since we have a fiberwise isomorphism $(f^*E)_y \cong E_{f(y)}$, an $R$-orientation for $E$ induces one for $f^* E$, and we know $f^*(u_E) = u_{f^* E}$ by uniqueness of the Thom class. So we know
+\[
+ e(f^*(E)) = f^* e(E) \in H^d(Y; R).
+\]
+Now the Euler class is a cohomology class of $X$ itself, so we can use the Euler class to compare and contrast different vector bundles.
+
+How can we think of the Euler class? It turns out the Euler class gives us an obstruction to the vector bundle having a non-zero section.
+\begin{thm}
+ If there is a section $s: X \to E$ which is nowhere zero, then $e(E) = 0 \in H^d(X; R)$.
+\end{thm}
+
+\begin{proof}
+ Notice that any two sections of $E \to X$ are homotopic. So we have $e(E) = s_0^* u_E = s^* u_E$. But since $u_E \in H^d(E, E^\#; R)$, and $s$ maps into $E^\#$, we have $s^* u_E = 0$.
+
+ Perhaps more precisely, we look at the long exact sequence for the pair $(E, E^\#)$, giving the diagram
+ \[
+ \begin{tikzcd}
+ H^d(E, E^\#; R) \ar[r] & H^d(E; R) \ar[r] \ar[d, "s_0^*"] & H^d(E^\#; R) \ar[dl, "s^*"]\\
+ & H^d(X; R)
+ \end{tikzcd}
+ \]
+ Since $s$ and $s_0$ are homotopic, the diagram commutes. Also, the top row is exact. So $u_E \in H^d(E, E^\#; R)$ gets sent along the top row to $0 \in H^d(E^\#; R)$, and thus $s^*$ sends it to $0 \in H^d(X; R)$. But the image in $H^d(X; R)$ is exactly the Euler class. So the Euler class vanishes.
+\end{proof}
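+
+This immediately gives a way of detecting non-trivial bundles:
+\begin{eg}
+ If $E = X \times \R^d$ is trivial with $d \geq 1$, then for any fixed $v \in \R^d \setminus \{0\}$, the map $s(x) = (x, v)$ is a nowhere-zero section. So $e(E) = 0$. Contrapositively, an $R$-oriented bundle with $e(E) \not= 0$ cannot be trivial, and indeed admits no nowhere-zero section at all.
+\end{eg}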
+
+Now cohomology is not only a bunch of groups, but also a ring. So we can ask what happens when we cup $u_E$ with itself.
+\begin{thm}
+ We have
+ \[
+ u_E \smile u_E = \Phi(e(E)) = \pi^*(e(E)) \smile u_E \in H^*(E, E^\#; R).
+ \]
+\end{thm}
+
+This is just tracing through the definitions.
+\begin{proof}
+ By construction, we know the following maps commute:
+ \[
+ \begin{tikzcd}
+ H^d(E, E^\#; R) \otimes H^d(E, E^\#; R) \ar[r, "\smile"] \ar[d, "q^* \otimes \id"] & H^{2d}(E, E^\#; R)\\
+ H^d(E; R) \otimes H^d(E, E^\#; R) \ar[ur, "\smile"']
+ \end{tikzcd}
+ \]
+ We claim that the element $u_E \otimes u_E \in H^d(E, E^\#; R) \otimes H^d(E, E^\#; R)$ is sent to $\pi^*(e(E)) \otimes u_E \in H^d(E; R) \otimes H^d(E, E^\#; R)$ by the left-hand vertical map $q^* \otimes \id$.
+
+ By definition, this means we need
+ \[
+ q^* u_E = \pi^*(e(E)),
+ \]
+ and this is true because $\pi^*$ is homotopy inverse to $s_0^*$ and $e(E) = s_0^* q^* u_E$.
+\end{proof}
+
+So if we have two elements $\Phi(c), \Phi(d) \in H^*(E, E^\#; R)$, then we have
+\begin{align*}
+ \Phi(c) \smile \Phi(d) &= \pi^*c \smile u_E \smile \pi^*d \smile u_E \\
+ &= \pm \pi^*c \smile \pi^*d \smile u_E \smile u_E \\
+ &= \pm \pi^*(c \smile d \smile e(E)) \smile u_E\\
+ &= \pm \Phi(c \smile d \smile e(E)).
+\end{align*}
+So $e(E)$ is precisely the information necessary to recover the cohomology \emph{ring} $H^*(E, E^\#; R)$ from $H^*(X; R)$.
+
+\begin{lemma}
+ If $\pi: E \to X$ is a $d$-dimensional $R$-oriented vector bundle with $d$ odd, then $2e(E) = 0 \in H^d(X; R)$.
+\end{lemma}
+
+\begin{proof}
+ Consider the map $a: E \to E$ given by negation on each fiber. This then gives an isomorphism
+ \[
+ \begin{tikzcd}
+ a^*: H^d(E, E^\#; R) \ar[r, "\cong"] & H^d(E, E^\#; R).
+ \end{tikzcd}
+ \]
+ This acts by negation on the Thom class, i.e.
+ \[
+ a^*(u_E) = - u_E,
+ \]
+ as on the fiber $E_x$, we know $a$ is given by an odd number of reflections, each of which acts on $H^d(E_x, E_x^\#; R)$ by $-1$ (by the analogous result on $S^n$). So we change $\varepsilon_x$ by a sign. We then lift this to a statement about $u_E$ by the fact that $u_E$ is the unique thing that restricts to $\varepsilon_x$ for each $x$.
+
+ But we also know
+ \[
+ a \circ s_0 = s_0,
+ \]
+ which implies
+ \[
+ s_0^*(a^*(u_E)) = s_0^*(u_E).
+ \]
+ Combining this with the result that $a^*(u_E) = -u_E$, we get that
+ \[
+ 2 e(E) = 2 s_0^*(u_E) = 0.\qedhere
+ \]
+\end{proof}
+This is a disappointing result, because if we already know that $H^d(X; R)$ has no $2$-torsion, then $e(E) = 0$.
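+
+For example, this applies to bundles over odd-dimensional spheres:
+\begin{eg}
+ Take $X = S^d$ with $d$ odd. Then $H^d(S^d; \Z) \cong \Z$ has no $2$-torsion, so every $\Z$-oriented $d$-dimensional vector bundle over $S^d$ has $e(E) = 0$. So the Euler class cannot distinguish such bundles from the trivial bundle.
+\end{eg}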
+
+After all that fun, we prove the Thom isomorphism theorem.
+\begin{proof}[Proof of Thom isomorphism theorem]
+ We will drop the ``$R$'' in all our diagrams for readability (and also so that it fits in the page).
+
+ We first consider the case where the bundle is trivial, so $E = X \times \R^d$. Then we note that
+ \[
+ H^*(\R^d, \R^d \setminus \{0\}) =
+ \begin{cases}
+ R & * = d\\
+ 0 & * \not= d
+ \end{cases}.
+ \]
+ In particular, the modules are free, and (a relative version of) K\"unneth's theorem tells us the map
+ \[
+ \begin{tikzcd}
+ \times: H^*(X) \otimes H^*(\R^d, \R^d \setminus \{0\}) \ar[r, "\cong"] & H^* (X \times \R^d, X \times (\R^d\setminus \{0\}))
+ \end{tikzcd}
+ \]
+ is an isomorphism. Then the claims of the Thom isomorphism theorem follow immediately.
+ \begin{enumerate}
+ \item For $i < d$, every K\"unneth summand of $H^i(X \times \R^d, X \times (\R^d \setminus\{0\}))$ involves a factor $H^q(\R^d, \R^d \setminus\{0\})$ with $q \leq i < d$, and these all vanish.
+
+ \item The only non-vanishing summand for $H^d(X \times \R^d, X \times (\R^d \setminus\{0\}))$ is
+ \[
+ H^0(X) \otimes H^d(\R^d, \R^d \setminus \{0\}).
+ \]
+ Then the Thom class must be $1 \otimes u$, where $u$ is the object corresponding to $\varepsilon_x \in H^d(E_x, E_x^\#) = H^d(\R^d, \R^d \setminus\{0\})$, and this is unique. % explain more
+ \item We notice that $\Phi$ is just given by
+ \[
+ \Phi(x) = \pi^*(x) \smile u_E = x \times u_E,
+ \]
+ which is an isomorphism by K\"unneth.
+ \end{enumerate}
+
+ \separator
+
+ We now patch the result up for a general bundle. Suppose $\pi: E \to X$ is a bundle. Then it has an open cover by trivializations, and moreover if we assume our $X$ is compact, finitely many of them suffice. So it suffices to show that if $U, V \subseteq X$ are open sets such that the Thom isomorphism theorem holds for $E$ restricted to $U$, $V$ and $U \cap V$, then it also holds on $U \cup V$.
+
+ The relative Mayer-Vietoris sequence gives us
+ \[
+ \begin{tikzcd}
+ H^{d - 1}(E|_{U \cap V}, E^\#|_{U \cap V}) \ar[r, "\partial^{MV}"] & H^d(E|_{U\cup V}, E^\#|_{U\cup V}) \ar[out=0, in=180, looseness=2, overlay, dl]\\
+ H^d(E|_U, E^\#|_U) \oplus H^d(E|_V, E^\#|_V) \ar[r] & H^d(E|_{U \cap V}, E^\#|_{U \cap V}).
+ \end{tikzcd}
+ \]
+ We first construct the Thom class. We have
+ \[
+ u_{E|_U} \in H^d(E|_U, E^\#|_U),\quad u_{E|_V} \in H^d(E|_V, E^\#|_V).
+ \]
+ We claim that $(u_{E|_U}, u_{E|_V}) \in H^d(E|_U, E^\#|_U) \oplus H^d(E|_V, E^\#|_V)$ gets sent to $0$ by $i_U^* - i_V^*$. Indeed, both the restriction of $u_{E|_U}$ and $u_{E|_V}$ to $U \cap V$ are Thom classes, so they are equal by uniqueness, so the difference vanishes.
+
+ Then by exactness, there must be some $u_{E|_{U\cup V}} \in H^d(E|_{U \cup V}, E^\#|_{U \cup V})$ that restricts to $u_{E|_U}$ and $u_{E|_V}$ in $U$ and $V$ respectively. Then this must be a Thom class, since the property of being a Thom class is checked on each fiber. Moreover, we get uniqueness because $H^{d - 1}(E|_{U \cap V}, E^\#|_{U \cap V}) = 0$, so $u_{E|_U}$ and $u_{E|_V}$ must be the restriction of a unique thing.
+
+ The last part of the Thom isomorphism theorem comes from a routine application of the five lemma, and the first part follows from the last as previously mentioned.
+\end{proof}
+
+\subsection{Gysin sequence}
+Now we do something interesting with vector bundles. We will come up with a long exact sequence associated to a vector bundle, known as the \emph{Gysin sequence}. We will then use the Gysin sequence to deduce something about the \emph{base} space.
+
+Suppose we have a $d$-dimensional vector bundle $\pi: E \to X$ that is $R$-oriented. We want to talk about the unit sphere in every fiber of $E$. But to do so, we need a notion of length, and for that we want an inner product. Luckily, we can equip $E$ with one, and since any two norms on a finite-dimensional vector space are equivalent, the choice does not really matter. So we might as well choose one arbitrarily.
+
+\begin{defi}[Sphere bundle]\index{sphere bundle}
+ Let $\pi: E \to X$ be a vector bundle, and let $\bra \ph, \ph\ket : E \otimes E \to \R$ be an inner product, and let
+ \[
+ S(E) = \{v \in E: \bra v, v\ket = 1\} \subseteq E.
+ \]
+ This is the \emph{sphere bundle} associated to $E$.
+\end{defi}
+
+Since the unit sphere is homotopy equivalent to $\R^d \setminus \{0\}$, we know the inclusion
+\[
+ \begin{tikzcd}
+ j: S(E) \ar[r, hook] & E^\#
+ \end{tikzcd}
+\]
+is a homotopy equivalence, with inverse given by normalization.
+
+The long exact sequence for the pair $(E, E^\#)$ gives (as before, we do not write the $R$):
+\[
+ \begin{tikzcd}
+ H^{i + d}(E, E^\#) \ar[r] & H^{i + d}(E) \ar[d, "s_0^*", xshift=2] \ar[r] & H^{i + d} (E^\#) \ar[r] \ar[d, "j^*"] & H^{i + d + 1}(E, E^\#)\\
+ H^i(X) \ar[u, "\Phi"] \ar[r, "\ph \smile e(E)"] & H^{i + d}(X) \ar[u, xshift=-2, "\pi^*"] \ar[r, "p^*"] & H^{i + d}(S(E)) \ar[r, "p_!"] & H^{i + 1}(X)\ar[u, "\Phi"]
+ \end{tikzcd}
+\]
+where $p: S(E) \to X$ is the projection (the restriction of $\pi$ to $S(E)$), and $p_!$ is whatever makes the diagram commute (this makes sense since $j^*$ and $\Phi$ are isomorphisms). The bottom sequence is the \term{Gysin sequence}, and it is exact because the top row is exact. This is in fact a long exact sequence of $H^*(X; R)$-modules, i.e.\ the maps are compatible with cup products.
+
+
+\begin{eg}
+ Let $L = \gamma_{1, n + 1}^\C \to \CP^n = \Gr_1(\C^{n + 1})$ be the tautological complex line bundle ($1$-dimensional over $\C$) on $\Gr_1(\C^{n +1})$. This is $\Z$-oriented, as any complex vector bundle is, because if we consider the inclusion
+ \[
+ \GL_1(\C) \hookrightarrow \GL_2(\R)
+ \]
+ obtained by pretending $\C$ is $\R^2$, we know $\GL_1(\C)$ is connected, so it lands in the component of the identity, and hence has positive determinant. The sphere bundle consists of
+ \[
+ S(L) = \{(V, v) \in \CP^n \times \C^{n + 1}: v \in V, |v| = 1\} \cong \{v \in \C^{n + 1}: |v| = 1\} \cong S^{2n + 1},
+ \]
+ where the middle isomorphism is given by
+ \[
+ \begin{tikzcd}[row sep=tiny]
+ (V, v) \ar[r, maps to] & v\\
+ (\bra v\ket, v) & v \ar[l, maps to]
+ \end{tikzcd}
+ \]
+ The Gysin sequence is
+ \[
+ \begin{tikzcd}
+ H^{i + 1}(S^{2n + 1}) \ar[r, "p_!"] & H^i(\CP^n) \ar[r, "\smile e(L)"] & H^{i + 2}(\CP^n) \ar[r, "p^*"] & H^{i + 2}(S^{2n + 1})
+ \end{tikzcd}
+ \]
+ Now if $i \leq 2n - 2$, then both outer terms are $0$. So the maps in the middle are isomorphisms. Thus we get isomorphisms
+ \[
+ \begin{tikzcd}[row sep=tiny]
+ H^0(\CP^n) \ar[d, equals] \ar[r, "\smile e(L)"] & H^2(\CP^n) \ar[d, equals] \ar[r, "\smile e(L)"] & H^4(\CP^n) \ar[d, equals] \ar[r] & \cdots\\
+ \Z \cdot 1 & \Z \cdot e(L) & \Z \cdot e(L)^2
+ \end{tikzcd}
+ \]
+ Similarly, we know that the terms in the odd degree vanish.
+
+ Checking what happens at the end points carefully, the conclusion is that
+ \[
+ H^*(\CP^n) = \Z[e(L)] / (e(L)^{n + 1})
+ \]
+ as a ring.
+\end{eg}
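+
+It is reassuring to see what this says in the smallest case:
+\begin{eg}
+ When $n = 1$, we have $\CP^1 \cong S^2$, and the projection $p: S(L) \cong S^3 \to \CP^1$ is the Hopf fibration. The computation above says
+ \[
+ H^*(\CP^1) = \Z[e(L)]/(e(L)^2),
+ \]
+ so $e(L)$ is a generator of $H^2(S^2; \Z) \cong \Z$. In particular, $e(L) \not= 0$, so $L$ admits no nowhere-zero section.
+\end{eg}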
+
+\begin{eg}
+ We do the real case of the above computation. We have
+ \[
+ L = \gamma_{1, n + 1}^\R \to \RP^n = \Gr_1(\R^{n + 1}).
+ \]
+ The previous trick doesn't work, and indeed $L$ isn't $\Z$-orientable. However, it is $\F_2$-oriented, as every vector bundle is, and by exactly the same argument, we know
+ \[
+ S(L) \cong S^n.
+ \]
+ So by the same argument as above, we will find that
+ \[
+ H^*(\RP^n; \F_2) \cong \F_2[e(L)]/(e(L)^{n + 1}).
+ \]
+ Note that this is different from the one we had before, because here $\deg e(L) = 1$, while the complex case had degree $2$.
+\end{eg}
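+
+We can again feed this back into our result about sections:
+\begin{eg}
+ For $n \geq 1$, the class $e(L)$ generates $H^1(\RP^n; \F_2) \cong \F_2$, so in particular $e(L) \not= 0$. Hence the tautological line bundle over $\RP^n$ admits no nowhere-zero section, and in particular is non-trivial.
+\end{eg}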
+
+\section{Manifolds and \texorpdfstring{Poincar\'e}{Poincare} duality}
+We are going to prove Poincar\'e duality, and then use it to prove a lot of things about manifolds. Poincar\'e duality tells us that for a compact oriented manifold $M$ of dimension $n$, we have
+\[
+ H_d(M) \cong H^{n - d}(M).
+\]
+To prove this, we will want to induct over covers of $M$. However, the open sets appearing in a cover of a compact manifold are in general not compact themselves; we only recover compactness when we put all of them together. So we need to come up with a version of Poincar\'e duality that works for non-compact manifolds, which is less pretty and requires the notion of compactly supported cohomology.
+
+\subsection{Compactly supported cohomology}
+
+\begin{defi}[Support of cochain]\index{support!cochain}
+ Let $\varphi \in C^n(X)$ be a cochain. We say $\varphi$ has \term{support} in $S \subseteq X$ if whenever $\sigma: \Delta^n \to X \setminus S \subseteq X$, then $\varphi(\sigma) = 0$. In this case, $\d \varphi$ also has support in $S$.
+\end{defi}
+Note that this has a slight subtlety. The definition only requires that if $\sigma$ lies \emph{completely} outside $S$, then $\varphi(\sigma)$ vanishes. However, we can have simplices that extend very far and only touch $S$ slightly, and the support tells us nothing about the value of $\varphi$ on such simplices. Later, we will get around this problem by doing sufficient barycentric subdivision.
+
+\begin{defi}[Compactly-supported cochain]\index{compactly-supported cochain}
+ Let $C_c^\Cdot(X) \subseteq C^\Cdot(X)$ be the sub-chain complex consisting of those $\varphi$ which have support in \emph{some} compact $K \subseteq X$.
+\end{defi}
+Note that this makes sense --- we have seen that if $\varphi$ has support in $K$, then $\d \varphi$ has support in $K$. To see it is indeed a sub-chain complex, we need to show that $C_c^\Cdot(X)$ is a subgroup! Fortunately, if $\varphi$ has support in $K$, and $\psi$ has support in $L$, then $\varphi + \psi$ has support in $K \cup L$, which is compact.
+
+\begin{defi}[Compactly-supported cohomology]\index{compactly-supported cohomology}
+ The \emph{compactly supported cohomology} of $X$ is
+ \[
+ H^*_c(X) = H^*(C_c^\Cdot(X)).
+ \]
+\end{defi}
+
+Note that we can write
+\[
+ C_c^{\Cdot}(X) = \bigcup_{K\text{ compact}} C^\Cdot(X, X \setminus K) \subseteq C^\Cdot(X).
+\]
+We would like to say that the compactly supported \emph{cohomology} is also ``built out of'' those relative cohomology groups, but we cannot just take the union, because the relative cohomology groups are not subgroups of $H^*(X)$. To do that, we need something more fancy.
+
+\begin{defi}[Directed set]\index{directed set}
+ A \emph{directed set} is a partial order $(I, \leq)$ such that for all $i, j \in I$, there is some $k \in I$ such that $i \leq k$ and $j \leq k$.
+\end{defi}
+
+\begin{eg}
+ Any total order is a directed set.
+\end{eg}
+
+\begin{eg}
+ $\N$ with divisibility $\mid$ as the partial order is a directed set.
+\end{eg}
+
+\begin{defi}[Direct limit]\index{direct system}\index{direct limit}
+ Let $I$ be a directed set. A \emph{direct system} of abelian groups indexed by $I$ is a collection of abelian groups $G_i$ for each $i \in I$ and homomorphisms
+ \[
+ \rho_{ij}: G_i \to G_j
+ \]
+ for all $i, j \in I$ such that $i \leq j$, such that
+ \[
+ \rho_{ii} = \id_{G_i}
+ \]
+ and
+ \[
+ \rho_{ik} = \rho_{jk} \circ \rho_{ij}
+ \]
+ whenever $i \leq j \leq k$.
+
+ We define the \emph{direct limit} on the system $(G_i, \rho_{ij})$ to be
+ \[
+ \varinjlim_{i \in I} G_i = \left(\bigoplus_{i \in I} G_i\right)/\bra x - \rho_{ij}(x): x \in G_i\ket.
+ \]
+ The underlying set of it is
+ \[
+ \left(\coprod_{i \in I}G_i\right)/\{x \sim \rho_{ij}(x): x \in G_i\}.
+ \]
+\end{defi}
+In terms of the second description, the group operation is given as follows: given $x \in G_i$ and $y \in G_j$, we find some $k$ such that $i, j \leq k$. Then we can view $x, y$ as elements of $G_k$ and do the operation there. It is an exercise to show that these two descriptions are indeed the same.
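+
+A standard example illustrating the definition is the following:
+\begin{eg}
+ Take $I = \N$ with the usual order, $G_i = \Z$ for all $i$, and let $\rho_{ij}: \Z \to \Z$ be multiplication by $2^{j - i}$. Then
+ \[
+ \varinjlim_{i \in \N} G_i \cong \Z[\tfrac{1}{2}] = \left\{\frac{a}{2^k}: a \in \Z,\; k \geq 0\right\} \subseteq \Q,
+ \]
+ where $x \in G_i$ is sent to $x/2^i$. This is well-defined precisely because the direct limit identifies $x \in G_i$ with $\rho_{ij}(x) = 2^{j - i}x \in G_j$.
+\end{eg}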
+
+Now observe that if $J \subseteq I$ is a sub-directed set such that for all $a \in I$, there is some $b \in J$ with $a \leq b$, then we have
+\[
+ \varinjlim_{i \in J} G_i \cong \varinjlim_{i \in I} G_i.
+\]
+So our claim is now
+\begin{thm}
+ For any space $X$, we let
+ \[
+ \mathcal{K}(X) = \{K \subseteq X: K\text{ is compact}\}.
+ \]
+ This is a directed set under inclusion, and the map
+ \[
+ K \mapsto H^n(X, X \setminus K)
+ \]
+ gives a direct system of abelian groups indexed by $\mathcal{K}(X)$, where the maps $\rho$ are given by restriction.
+
+ Then we have
+ \[
+ H^n_c(X) \cong \varinjlim_{K \in \mathcal{K}(X)} H^n(X, X \setminus K).
+ \]
+\end{thm}
+
+\begin{proof}
+ We have
+ \[
+ C_c^n(X) \cong \varinjlim_{\mathcal{K}(X)} C^n(X, X \setminus K),
+ \]
+ where we have a map
+ \[
+ \varinjlim_{\mathcal{K}(X)} C^n(X, X \setminus K)\to C_c^n(X)
+ \]
+ given in each component of the direct limit by inclusion, and it is easy to see that this is well-defined and bijective.
+
+ It is then a general algebraic fact that taking cohomology commutes with direct limits, and we will not prove it.
+\end{proof}
+
+\begin{lemma}
+ We have
+ \[
+ H_c^i(\R^d; R) \cong
+ \begin{cases}
+ R & i = d\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $\mathcal{B} \subseteq \mathcal{K}(\R^d)$ be the collection of closed balls, namely
+ \[
+ \mathcal{B} = \{n D^d, n = 0,1, 2, \cdots\}.
+ \]
+ Then since every compact set is contained in one of them, we have
+ \[
+ H^n_c(\R^d) \cong \varinjlim_{K \in \mathcal{K}(\R^d)} H^n(\R^d, \R^d \setminus K; R) \cong \varinjlim_{nD^d \in \mathcal{B}} H^n(\R^d,\R^d \setminus nD^d; R)
+ \]
+ We can compute that directly. Since $\R^d$ is contractible, the connecting map
+ \[
+ \tilde{H}^{i - 1}(\R^d \setminus nD^d; R) \to H^i(\R^d, \R^d \setminus nD^d; R)
+ \]
+ in the long exact sequence (using reduced cohomology) is an isomorphism. Moreover, the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ \tilde{H}^{i - 1}(\R^d \setminus nD^d; R) \ar[d, "\partial"] \ar[r] & \tilde{H}^{i - 1}(\R^d\setminus (n + 1)D^d; R) \ar[d, "\partial"]\\
+ H^i(\R^d, \R^d \setminus nD^d; R) \ar[r, "\rho_{n, n + 1}"] & H^i(\R^d, \R^d \setminus (n + 1)D^d; R)
+ \end{tikzcd}
+ \]
+ But all maps here are isomorphisms, because the horizontal maps are induced by the inclusions $\R^d \setminus (n + 1)D^d \hookrightarrow \R^d \setminus nD^d$, which are homotopy equivalences. So we know
+ \[
+ \varinjlim H^i(\R^d, \R^d\setminus nD^d; R) \cong H^i(\R^d, \R^d \setminus \{0\}; R) \cong \tilde{H}^{i - 1}(\R^d \setminus \{0\}; R).
+ \]
+ So it follows that
+ \[
+ H^i_c(\R^d; R) \cong
+ \begin{cases}
+ R & i = d\\
+ 0 & \text{otherwise}
+ \end{cases}.\qedhere
+ \]
+\end{proof}
+In general, this is how we always compute compactly-supported cohomology --- we pick a suitable subset of $\mathcal{K}(X)$ and compute the limit of that instead.
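+
+For example, compactly supported cohomology gives nothing new for compact spaces:
+\begin{eg}
+ If $X$ is compact, then $X \in \mathcal{K}(X)$ is the largest element of the directed set, so the subset $\{X\} \subseteq \mathcal{K}(X)$ satisfies the condition above, and
+ \[
+ H^*_c(X) \cong H^*(X, X \setminus X) = H^*(X, \emptyset) = H^*(X).
+ \]
+\end{eg}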
+
+Note that compactly-supported cohomology is \emph{not} homotopy-invariant! It knows about the dimension of $\R^d$, since the notion of compactness is not homotopy invariant. Even worse, in general, a map $f: X \to Y$ does not induce a map $f^*: H^*_c(Y) \to H^*_c(X)$. Indeed, the usual map does not work because the preimage of a compact set is not necessarily compact.
+\begin{defi}[Proper map]\index{proper map}
+ A map $f: X \to Y$ of spaces is \emph{proper} if the preimage of a compact space is compact.
+\end{defi}
+Now if $f$ is proper, then it does induce a map $H_c^* (\ph)$ by the usual construction.
+
+From now on, we will assume all spaces are Hausdorff, so that all compact subsets are closed. This isn't too bad a restriction since we are ultimately interested in manifolds, which are by definition Hausdorff.
+
+Let $i: U \to X$ be the inclusion of an open subspace. We let $K \subseteq U$ be compact. Then by excision, we have an isomorphism
+\[
+ H^*(U, U \setminus K) \cong H^*(X, X \setminus K),
+\]
+since the thing we cut out, namely $X \setminus U$, is already closed, and $U \setminus K$ is open, since $K$ is closed.
+
+So a compactly supported cohomology class on $U$ gives one on $X$. So we get a map
+\[
+ i_*: H_c^*(U) \to H_c^*(X).
+\]
+We call this ``extension by zero''. Indeed, this is how the extension works: if we have a cocycle $\varphi$ on $U$ supported in some compact $K \subseteq U$, then given any simplex in $X$, if it lies inside $U$, we know how to evaluate $\varphi$ on it, and if it lies outside $K$, we just send it to zero. Then by barycentric subdivision, we can assume every simplex is either inside $U$ or outside $K$, so we are done.
+
+\begin{eg}
+ If $i: U \to \R^d$ is an open ball, then the map
+ \[
+ i_*: H_c^*(U) \to H_c^*(\R^d)
+ \]
+ is an isomorphism. So each cohomology class is equivalent to something with a support as small as we like.
+\end{eg}
+
+Since it is annoying to write $H^n(X, X \setminus K)$ all the time, we write
+\[
+ H^n(X\mid K; R) = H^n(X, X \setminus K; R).
+\]
+By excision, this depends only on a neighbourhood of $K$ in $X$. In the case where $K$ is a point, this is local cohomology at a point. So it makes sense to call this \term{local cohomology} \emph{near $K \subseteq X$}.
+
+Our end goal is to produce a Mayer-Vietoris for compactly-supported cohomology. But before we do that, we first do it for local cohomology.
+
+\begin{prop}
+ Let $K, L \subseteq X$ be compact. Then there is a long exact sequence
+ \[
+ \begin{tikzcd}[column sep=small]
+ H^n(X \mid K \cap L) \ar[r] & H^n(X \mid K) \oplus H^n(X \mid L) \ar[r] & H^n(X \mid K \cup L) \ar[out=0, in=180, overlay, lld, "\partial"]\\
+ H^{n + 1}(X \mid K \cap L) \ar[r] & H^{n + 1}(X \mid K) \oplus H^{n + 1}(X \mid L) \ar[r] & \cdots
+ \end{tikzcd},
+ \]
+ where the unlabelled maps are those induced by inclusion.
+\end{prop}
+We are going to prove this by relating it to a Mayer-Vietoris sequence of some sort.
+
+\begin{proof}
+ We cover $X \setminus (K \cap L)$ by
+ \[
+ \mathcal{U} = \{X \setminus K, X \setminus L\}.
+ \]
+ We then draw a huge diagram (here $^*$ denotes the dual, i.e.\ $X^* = \Hom(X; R)$, and $C^\Cdot(X \mid K) = C^\Cdot(X, X \setminus K)$):
+ \[
+ \begin{tikzcd}[column sep=small]
+ & 0 \ar[d] & 0 \ar[d] & 0 \ar[d]\\
+ 0 \ar[r] & \left(\frac{C_\Cdot(X)}{C_\Cdot^\mathcal{U}(X \setminus K \cap L)}\right)^* \ar[r] \ar[d] & C^\Cdot(X \mid K) \oplus C^\Cdot(X \mid L) \ar[r] \ar[d] & C^\Cdot(X \mid K \cup L) \ar[r] \ar[d] & 0\\
+ 0 \ar[r] & C^\Cdot(X) \ar[r, "{(\id, -\id)}"] \ar[d] & C^\Cdot(X) \oplus C^\Cdot(X) \ar[d] \ar[r, "\id + \id"] & C^\Cdot(X) \ar[r] \ar[d] & 0\\
+ 0 \ar[r] & C^\Cdot_{\mathcal{U}}(X\setminus K \cap L) \ar[r, "{(j_1^*, -j_2^*})"] \ar[d] & C^\Cdot(X \setminus K) \oplus C^\Cdot(X \setminus L) \ar[r, "i_1^* + i_2^*"] \ar[d] & C^\Cdot(X \setminus K \cup L) \ar[r] \ar[d]& 0\\
+ & 0 & 0 & 0
+ \end{tikzcd}
+ \]
+ This is a diagram. Certainly.
+
+ The bottom two rows and all columns are exact. By a diagram chase (the \term{nine lemma}), we know the top row is exact. Taking the long exact sequence almost gives what we want, except the first term is a funny thing.
+
+ We now analyze that object. We look at the left vertical column:
+ \[
+ \begin{tikzcd}[column sep=small]
+ 0 \ar[r] & \Hom\left(\frac{C_\Cdot(X)}{C_\Cdot^\mathcal{U}(X \setminus K \cap L)}, R \right) \ar[r] & C^\Cdot(X) \ar[r] & \Hom(C_\Cdot^\mathcal{U}(X \setminus K \cap L), R) \ar[r]& 0
+ \end{tikzcd}
+ \]
+ Now by the small simplices theorem, the right-hand object computes the same cohomology as $C^\Cdot(X \setminus (K \cap L); R)$ does. So we can produce another short exact sequence:
+ \[
+ \begin{tikzcd}[column sep=small]
+ 0 \ar[r] & \Hom\left(\frac{C_\Cdot(X)}{C_\Cdot^\mathcal{U}(X \setminus (K \cap L))}, R\right) \ar[r] & C^\Cdot(X) \ar[r] & \Hom(C_\Cdot^\mathcal{U}(X \setminus K \cap L), R) \ar[r] & 0\\
+ 0 \ar[r] & C^\Cdot(X , X \setminus K \cap L) \ar[r] \ar[u] & C^\Cdot(X) \ar[u, equals] \ar[r] & \Hom(C_{\Cdot}(X \setminus K \cap L), R) \ar[r] \ar[u] & 0
+ \end{tikzcd}
+ \]
+ Now the two right vertical arrows induce isomorphisms when we pass to cohomology. So by taking the long exact sequence, the five lemma tells us the left-hand map is an isomorphism on cohomology. So we know
+ \[
+ H_*\left(\Hom\left(\frac{C_\Cdot(X)}{C_\Cdot^\mathcal{U}(X \setminus (K \cap L))}, R\right)\right) \cong H^*(X \mid K \cap L).
+ \]
+ So the long exact sequence of the top row gives what we want.
+\end{proof}
+
+\begin{cor}
+ Let $X$ be a manifold, and $X = A \cup B$, where $A, B$ are open sets. Then there is a long exact sequence
+ \[
+ \begin{tikzcd}
+ H^n_c(A \cap B) \ar[r] & H^n_c(A) \oplus H^n_c(B) \ar[r] & H_c^n(X)\ar[out=0, in=180, looseness=2, overlay, lld, "\partial"]\\
+ H_c^{n+1}(A \cap B) \ar[r] & H_c^{n+1}(A) \oplus H_c^{n+1}(B) \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+\end{cor}
+Note that the arrows go in funny directions, which is different from both homology and cohomology versions!
+
+\begin{proof}
+ Let $K \subseteq A$ and $L \subseteq B$ be compact sets. Then by excision, we have isomorphisms
+ \begin{align*}
+ H^n(X \mid K) &\cong H^n(A \mid K)\\
+ H^n(X \mid L) &\cong H^n(B \mid L)\\
+ H^n(X \mid K \cap L) &\cong H^n(A \cap B \mid K\cap L).
+ \end{align*}
+ So the long exact sequence from the previous proposition gives us
+ \[
+ \begin{tikzcd}[column sep=small]
+ H^n(A \cap B\mid K \cap L) \ar[r] & H^n(A \mid K) \oplus H^n(B \mid L) \ar[r] & H^n(X \mid K \cup L) \ar[lld, out=0, in=180, overlay, "\partial"]\\
+ H^{n + 1}(A \cap B \mid K \cap L) \ar[r] & H^{n + 1}(A \mid K) \oplus H^{n + 1}(B \mid L) \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ The next step is to take the direct limit over $K \in \mathcal{K}(A)$ and $L \in \mathcal{K}(B)$. We need to make sure that these do indeed give the right compactly supported cohomology. The terms $H^n(A \mid K) \oplus H^n (B \mid L)$ are exactly right, and the one for $H^n(A \cap B \mid K \cap L)$ also works, because every compact subset of $A \cap B$ is the intersection of a compact subset of $A$ with a compact subset of $B$ (we can take both to be the original compact set).
+
+ So we get a long exact sequence
+ \[
+ \begin{tikzcd}[column sep=tiny]
+ H^n_c(A \cap B) \ar[r] & H^n_c(A) \oplus H^n_c(B) \ar[r] & \displaystyle\varinjlim_{\substack{K \in \mathcal{K}(A)\\ L \in \mathcal{K}(B)}} H^n(X \mid K \cup L) \ar[r, "\partial"] & H^{n + 1}_c(A \cap B)
+ \end{tikzcd}
+ \]
+ To show that that funny direct limit is really what we want, we have to show that every compact set $C \subseteq X$ lies inside some $K \cup L$, where $K \subseteq A$ and $L \subseteq B$ are compact.
+
+ Indeed, as $X$ is a manifold and $C$ is compact, we can find a finite set of closed balls in $X$, each contained in $A$ or in $B$, such that their interiors cover $C$. So we are done. (In general, this argument works for any locally compact space.)
+\end{proof}
+
+This is all we have got to say about compactly supported cohomology. We now start to talk about manifolds.
+
+\subsection{Orientation of manifolds}
+Similar to the case of vector bundles, we will only work with manifolds with orientation. The definition of orientation of a manifold is somewhat similar to the definition of orientation of a vector bundle. Indeed, it is true that an orientation of a manifold is just an orientation of the tangent bundle, but we will not go down that route, because the description we use for orientation here will be useful later on. After defining orientation, we will prove a result similar to (the first two parts of) the Thom isomorphism theorem.
+
+For a $d$-manifold $M$ and $x \in M$, we know that
+\[
+ H_i(M\mid x; R) \cong
+ \begin{cases}
+ R & i = d\\
+ 0 & i \not= d
+ \end{cases}.
+\]
+We can then define a \emph{local orientation} of a manifold to be a generator of this group.
+\begin{defi}[Local $R$-orientation of manifold]\index{local $R$-orientation!manifold}\index{manifold!local $R$-orientation}
+ For a $d$-manifold $M$, a local $R$-orientation of $M$ at $x$ is an $R$-module generator $\mu_x \in H_d(M\mid x; R)$.
+\end{defi}
+
+\begin{defi}[$R$-orientation]\index{$R$-orientation!manifold}\index{orientation!manifold}\index{manifold!$R$-orientation}
+ An \emph{$R$-orientation} of $M$ is a collection $\{\mu_x\}_{x \in M}$ of local $R$-orientations such that if
+ \[
+ \varphi: \R^d \to U \subseteq M
+ \]
+ is a chart of $M$, and $p, q \in \R^d$, then the composition of isomorphisms
+ \[
+ \begin{tikzcd}
+ H_d(M \mid \varphi(p)) \ar[r, "\sim"] & H_d(U \mid \varphi(p)) & H_d(\R^d \mid p) \ar[l, "\varphi_*", "\sim"'] \ar[d, "\sim"] \\
+ H_d(M \mid \varphi(q)) \ar[r, "\sim"] & H_d(U \mid \varphi(q)) & H_d(\R^d \mid q) \ar[l, "\varphi_*", "\sim"']
+ \end{tikzcd}
+ \]
+ sends $\mu_{\varphi(p)}$ to $\mu_{\varphi(q)}$, where the vertical isomorphism is induced by a translation of $\R^d$.
+\end{defi}
+
+\begin{defi}[Orientation-preserving homeomorphism]\index{orientation preserving homeomorphism}\index{homeomorphism!orientation preserving}
+ For a homeomorphism $f: U \to V$ with $U, V \subseteq \R^d$ open, we say $f$ is $R$-orientation-preserving if for each $x \in U$, and $y = f(x)$, the composition
+ \[
+ \begin{tikzcd}[column sep=large]
+ H_d(\R^d \mid 0; R) \ar[r, "\text{translation}"] & H_d(\R^d \mid x; R) \ar[r, "\text{excision}"] & H_d(U \mid x; R) \ar[d, "f_*"]\\
+ H_d(\R^d \mid 0; R) \ar[r, "\text{translation}"] & H_d(\R^d \mid y; R) \ar[r, "\text{excision}"] & H_d(V \mid y; R)
+ \end{tikzcd}
+ \]
+ is the identity $H_d(\R^d \mid 0; R) \to H_d(\R^d \mid 0; R)$.
+\end{defi}
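+
+As a concrete illustration (with $d = 1$), consider $f: \R \to \R$ given by $f(x) = -x$. We have
+\[
+ H_1(\R \mid 0; \Z) \cong \tilde{H}_0(\R \setminus \{0\}; \Z) \cong \Z,
+\]
+generated by the difference of a point on either side of $0$. Since $f$ swaps the two components of $\R \setminus \{0\}$, the composition above is multiplication by $-1$ rather than the identity. So $f$ is not $\Z$-orientation-preserving, though it is $\F_2$-orientation-preserving, since $-1 = 1$ in $\F_2$.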
+
+As before, we have the following lemma:
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $R = \F_2$, then every manifold is $R$-orientable.
+ \item If $\{\varphi_\alpha: \R^d \to U_\alpha \subseteq M\}$ is an open cover of $M$ by Euclidean spaces such that each homeomorphism
+ \[
+ \begin{tikzcd}
+ \R^d \supseteq \varphi_\alpha^{-1}(U_\alpha \cap U_\beta) & U_\alpha \cap U_\beta \ar[l, "\varphi_\alpha^{-1}"'] \ar[r, "\varphi_\beta^{-1}"] & \varphi_\beta^{-1}(U_\alpha \cap U_\beta) \subseteq \R^d
+ \end{tikzcd}
+ \]
+ is orientation-preserving, then $M$ is $R$-orientable.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Each $H_d(M \mid x; \F_2) \cong \F_2$ has a unique $\F_2$-module generator.
+ \item For $x \in U_\alpha$, we define $\mu_x$ to be the image of the standard orientation of $\R^d$ via
+ \[
+ \begin{tikzcd}
+ H_d(M \mid x) & H_d(U_\alpha\mid x) \ar[l, "\cong"] & H_d(\R^d \mid \varphi_\alpha^{-1}(x)) \ar[l, "(\varphi_\alpha)_*"] & H_d(\R^d \mid 0) \ar[l, "\text{trans.}"]
+ \end{tikzcd}
+ \]
+ If this is well-defined, then it is obvious that this is compatible. However, we have to check it is well-defined, because to define this, we need to pick a chart.
+
+ If $x \in U_\beta$ as well, we need to look at the corresponding $\mu_x'$ defined using $U_\beta$ instead. But they have to agree by definition of orientation-preserving.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Finally, we get to the theorem:
+\begin{thm}
+ Let $M$ be an $R$-oriented manifold and $A \subseteq M$ be compact. Then
+ \begin{enumerate}
+ \item There is a unique class $\mu_A \in H_d(M \mid A; R)$ which restricts to $\mu_x \in H_d(M \mid x; R)$ for all $x \in A$.
+ \item $H_i(M\mid A; R) = 0$ for $i > d$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ Call a compact set $A$ ``good'' if it satisfies the conclusion of the theorem.
+
+ \begin{claim}
+ We first show that if $K$, $L$ and $K \cap L$ are good, then $K \cup L$ is good.
+ \end{claim}
+ This is analogous to the proof of the Thom isomorphism theorem, and we will omit this.
+
+ Now our strategy is to prove the following in order:
+ \begin{enumerate}
+ \item If $A \subseteq \R^d$ is convex, then $A$ is good.
+ \item If $A \subseteq \R^d$, then $A$ is good.
+ \item If $A \subseteq M$, then $A$ is good.
+ \end{enumerate}
+
+ \begin{claim}
+ If $A \subseteq \R^d$ is convex, then $A$ is good.
+ \end{claim}
+ Let $x \in A$. Then we have an inclusion
+ \[
+ \R^d \setminus A \hookrightarrow \R^d \setminus \{x\}.
+ \]
+ This is in fact a homotopy equivalence by scaling away from $x$. Thus the map
+ \[
+ H_i(\R^d \mid A) \to H_i(\R^d \mid x)
+ \]
+ is an isomorphism by the five lemma for all $i$. Then in degree $d$, there is some $\mu_A$ corresponding to $\mu_x$. This $\mu_A$ then has the required property by definition of orientability. The second part of the theorem also follows from what we know about $H_i(\R^d \mid x)$.
+
+ \begin{claim}
+ If $A \subseteq \R^d$, then $A$ is good.
+ \end{claim}
+ For $A \subseteq \R^d$ compact, we can find a finite collection of closed balls $B_i$ such that
+ \[
+ A \subseteq \bigcup_{i = 1}^n \mathring{B}_i = B.
+ \]
+ Moreover, if $U \supseteq A$ for any open $U$, then we can in fact take $B_i \subseteq U$. By induction on the number of balls $n$, the first claim tells us that any $B$ of this form is good.
+
+ We now let
+ \[
+ \mathcal{G} = \{B \subseteq \R^d: A \subseteq \mathring{B}, B\text{ compact and good}\}.
+ \]
+ We claim that this is a directed set under inverse inclusion. To see this, for $B, B' \in \mathcal{G}$, we need to find a $B'' \in \mathcal{G}$ such that $B'' \subseteq B, B'$ and $B''$ is good and compact. But the above argument tells us we can find one contained in $\mathring{B} \cap \mathring{B}'$. So we are safe.
+
+ Now consider the directed system of groups given by
+ \[
+ B \mapsto H_i(\R^d \mid B),
+ \]
+ and there is an induced map
+ \[
+ \varinjlim_{B \in \mathcal{G}} H_i(\R^d \mid B) \to H_i(\R^d \mid A),
+ \]
+ since each $H_i(\R^d \mid B)$ maps to $H_i(\R^d \mid A)$ by inclusion, and these maps are compatible. We claim that this is an isomorphism. We first show that this is surjective. Let $[c] \in H_i(\R^d \mid A)$. Then the boundary of $c \in C_i(\R^d)$ is a finite sum of simplices in $\R^d \setminus A$. So it is a sum of simplices in some compact $C \subseteq \R^d \setminus A$. But then $A \subseteq \R^d \setminus C$, and $\R^d \setminus C$ is an open neighbourhood of $A$. So we can find a good $B$ such that
+ \[
+ A \subseteq B \subseteq \R^d \setminus C.
+ \]
+ Then $c \in C_i(\R^d \mid B)$ is a cycle. So we know $[c] \in H_i(\R^d \mid B)$. So the map is surjective. Injectivity is obvious.
+
+ An immediate consequence of this is that for $i > d$, we have $H_i(\R^d \mid A) = 0$. Also, if $i = d$, we know that $\mu_A$ is given uniquely by the collection $\{\mu_B\}_{B \in \mathcal{G}}$ (uniqueness follows from injectivity).
+
+ \begin{claim}
+ If $A \subseteq M$, then $A$ is good.
+ \end{claim}
+ This follows from the fact that any compact $A \subseteq M$ can be written as a finite union of compact $A_\alpha$ with $A_\alpha \subseteq U_\alpha \cong \R^d$. So $A_\alpha$ and their intersections are good. So done.
+\end{proof}
+
+\begin{cor}
+ If $M$ is compact, then we get a unique class $[M] = \mu_M \in H_d(M; R)$ such that it restricts to $\mu_x$ for each $x \in M$. Moreover, $H_i(M; R) = 0$ for $i > d$.
+\end{cor}
+This is not too surprising, actually. If we have a triangulation of the manifold, then this $[M]$ is just the sum of all the triangles.
+
+\begin{defi}[Fundamental class]\index{fundamental class}
+ The \emph{fundamental class} of an $R$-oriented manifold is the unique class $[M]$ that restricts to $\mu_x$ for each $x \in M$.
+\end{defi}
+
+\subsection{\texorpdfstring{Poincar\'e}{Poincare} duality}
+We now get to the main theorem --- Poincar\'e duality:
+\begin{thm}[Poincar\'e duality]
+ Let $M$ be a $d$-dimensional $R$-oriented manifold. Then there is a map
+ \[
+ D_M: H^k_c(M; R) \to H_{d - k}(M; R)
+ \]
+ that is an isomorphism.
+\end{thm}
+The majority of the work is in defining the map. Afterwards, proving it is an isomorphism is a routine exercise with Mayer-Vietoris and the five lemma.
+
+What does this tell us? We know that $M$ has no homology or cohomology in negative dimensions. So by Poincar\'e duality, there is also no homology or cohomology in dimensions $> d$.
+
+Moreover, if $M$ itself is compact, then we know $H^0_c(M; R)$ has a special element $1$. So we also get a canonical element of $H_d(M; R)$. But we know there is a special element of $H_d(M; R)$, namely the fundamental class. They are in fact the same, and this is no coincidence. This is in fact how we are going to produce the map.
+
+To define the map $D_M$, we need the notion of the cap product.
+\begin{defi}[Cap product]\index{cap product}
+ The \emph{cap product} is defined by
+ \begin{align*}
+ \ph \frown \ph : C_k(X; R) \times C^\ell(X; R) &\to C_{k - \ell} (X; R)\\
+ (\sigma, \varphi) &\mapsto \varphi(\sigma|_{[v_0, \ldots, v_\ell]}) \sigma|_{[v_{\ell}, \ldots, v_k]}.
+ \end{align*}
+\end{defi}
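+
+To get a feel for the formula, take $k = 2$ and $\ell = 1$: for a singular $2$-simplex $\sigma: \Delta^2 \to X$ and a cochain $\varphi \in C^1(X; R)$, we have
+\[
+ \sigma \frown \varphi = \varphi(\sigma|_{[v_0, v_1]})\, \sigma|_{[v_1, v_2]} \in C_1(X; R),
+\]
+i.e.\ we evaluate $\varphi$ on the front face of $\sigma$, and use the result to scale the back face.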
+We want this to induce a map on homology. To do so, we need to know how it interacts with differentials.
+\begin{lemma}
+ We have
+ \[
+ d(\sigma \frown \varphi) = (-1)^d ((d \sigma) \frown \varphi - \sigma \frown (d \varphi)).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Write both sides out.
+\end{proof}
+
+As with the analogous formula for the cup product, this implies we get a well-defined map on homology and cohomology, i.e.\ we obtain a map
+\[
+ H_k(X; R) \times H^\ell(X; R) \to H_{k - \ell}(X; R).
+\]
+As with the cup product, there are also relative versions
+\[
+ \frown\;: H_k(X, A; R) \times H^\ell(X; R) \to H_{k - \ell}(X, A; R)
+\]
+and
+\[
+ \frown\;: H_k(X, A; R) \times H^\ell(X, A; R) \to H_{k - \ell}(X; R).
+\]
+We would like to say that the cap product is natural, but since maps of spaces induce maps of homology and cohomology in opposite directions, this is rather tricky. What we manage to get is the following:
+\begin{lemma}
+ If $f: X \to Y$ is a map, and $x \in H_k(X; R)$ and $y \in H^\ell(Y; R)$, then we have
+ \[
+ f_*(x) \frown y = f_*(x \frown f^*(y)) \in H_{k - \ell}(Y; R).
+ \]
+ In other words, the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ & H_k(Y; R) \times H^\ell(Y; R) \ar[r, "\frown"] & H_{k - \ell}(Y; R)\\
+ H_k(X; R) \times H^\ell(Y; R) \ar[ur, "f_* \times \id"] \ar[dr, "\id \times f^*"] \\
+ & H_k(X; R) \times H^\ell(X; R) \ar[r, "\frown"] & H_{k - \ell}(X; R) \ar[uu, "f_*"]
+ \end{tikzcd}
+ \]
+\end{lemma}
+
+\begin{proof}
+ We check this on the cochain level. We let $x = \sigma: \Delta^k \to X$. Then we have
+ \begin{align*}
+ f_\#(\sigma \frown f^\# y) &= f_\# \left((f^\# y) (\sigma|_{[v_0, \ldots, v_\ell]}) \sigma|_{[v_\ell, \ldots, v_k]}\right)\\
+ &= y(f_\# (\sigma|_{[v_0, \ldots, v_\ell]})) f_\# (\sigma|_{[v_{\ell}, \ldots, v_k]})\\
+ &= y((f_\# \sigma)|_{[v_0, \ldots, v_\ell]}) (f_\# \sigma)|_{[v_{\ell}, \ldots, v_k]}\\
+ &= (f_\# \sigma) \frown y.
+ \end{align*}
+ So done.
+\end{proof}
+
+Now if $M$ is compact, then we simply define the duality map as
+\[
+ D_M = [M] \frown \ph : H^\ell (M; R) \to H_{d - \ell}(M; R).
+\]
+If not, we note that $H_c^\ell(M; R)$ is the directed limit of $H^\ell(M \mid K; R)$ with $K$ compact, and each $H_d(M \mid K; R)$ contains a fundamental class $\mu_K$. So we can define the required map for each $K$, and then put them together.
+
+More precisely, if $K \subseteq L \subseteq M$ are such that $K, L$ are compact, then we have an inclusion map
+\[
+ (\id, \mathrm{inc}): (M, M \setminus L) \to (M, M \setminus K).
+\]
+Then we have an induced map
+\[
+ (\id, \mathrm{inc}): H_d(M \mid L; R) \to H_d(M\mid K; R)
+\]
+that sends the fundamental class $\mu_L$ to $\mu_K$, by uniqueness of the fundamental class.
+
+Then the relative version of our lemma tells us that the following map commutes:
+\[
+ \begin{tikzcd}
+ H^\ell(M \mid K; R) \ar[r, "{(\id, \mathrm{inc})^*}"] \ar[d, "\mu_K \frown \ph"] & H^\ell(M \mid L; R) \ar[d, "\mu_L \frown \ph"]\\
+ H_{d - \ell}(M; R) \ar[r, "\id"] & H_{d - \ell}(M; R)
+ \end{tikzcd}
+\]
+Indeed, this is just saying that
+\[
+ (\id)_* (\mu_L \frown (\id, \mathrm{inc})^* (\varphi)) = \mu_K \frown \varphi.
+\]
+So we get a duality map
+\[
+ D_M = \varinjlim (\mu_K \frown \ph): \varinjlim H^\ell(M \mid K; R) \to H_{d - \ell}(M; R).
+\]
+Now we can prove Poincar\'e duality.
+
+\begin{proof}
+ We say $M$ is ``good'' if the Poincar\'e duality theorem holds for $M$. We now do the most important step in the proof:
+ \begin{cclaim}
+ $\R^d$ is good.
+ \end{cclaim}
+ The only non-trivial degrees to check are $\ell = 0, d$, and $\ell = 0$ is straightforward.
+
+ For $\ell = d$, we have shown that the maps
+ \[
+ \begin{tikzcd}
+ H_c^d(\R^d; R) & H^d(\R^d\mid 0; R) \ar[l, "\sim"] \ar[r, "\text{UCT}"] & \Hom_R(H_d(\R^d \mid 0; R), R)
+ \end{tikzcd}
+ \]
+ are isomorphisms, where the last map is given by the universal coefficients theorem.
+
+ Under these isomorphisms, the map
+ \[
+ \begin{tikzcd}
+ H_c^d(\R^d; R) \ar[r, "D_{\R^d}"] & H_0(\R^d; R) \ar[r, "\varepsilon"] & R
+ \end{tikzcd}
+ \]
+ corresponds to the map $\Hom_R(H_d(\R^d \mid 0; R), R) \to R$ given by evaluating a function at the fundamental class $\mu_0$. But as $\mu_0 \in H_d(\R^d \mid 0; R)$ is an $R$-module generator, this map is an isomorphism.
+
+ \begin{cclaim}
+ If $M = U \cup V$ and $U, V, U \cap V$ are good, then $M$ is good.
+ \end{cclaim}
+ Again, this is an application of the five lemma with the Mayer-Vietoris sequence. We have
+ \[
+ \begin{tikzcd}[column sep=small]
+ H_c^\ell (U \cap V) \ar[r] \ar[d, "D_{U \cap V}"] & H_c^\ell(U) \oplus H_c^\ell(V) \ar[r] \ar[d, "D_U \oplus D_V"] & H_c^\ell(M) \ar[r] \ar[d, "D_M"] & H_c^{\ell + 1}(U \cap V) \ar[d, "D_{U\cap V}"]\\
+ H_{d - \ell}(U \cap V) \ar[r] & H_{d - \ell}(U) \oplus H_{d - \ell}(V) \ar[r] & H_{d - \ell}(M) \ar[r] & H_{d - \ell - 1}(U \cap V)
+ \end{tikzcd}
+ \]
+ We are done by the five lemma if this commutes. But unfortunately, it doesn't. It only commutes up to a sign, but it is sufficient for the five lemma to apply if we trace through the proof of the five lemma.
+
+ \begin{cclaim}
+ If $U_1 \subseteq U_2 \subseteq \cdots$ with $M = \bigcup_n U_n$, and $U_i$ are all good, then $M$ is good.
+ \end{cclaim}
+
+ Any compact set in $M$ lies in some $U_n$, so the map
+ \[
+ \varinjlim H_c^\ell (U_n) \to H_c^\ell(M)
+ \]
+ is an isomorphism. Similarly, since simplices are compact, we also have
+ \[
+ H_{d - k}(M) = \varinjlim H_{d - k}(U_n).
+ \]
+ Since a direct limit of isomorphisms is an isomorphism, we are done.
+
+ \begin{cclaim}
+ Any open subset of $\R^d$ is good.
+ \end{cclaim}
+ Any such $U$ is a countable union of open balls (e.g.\ those with rational centres and rational radii). For finite unions, we can use Claims 0 and 1 and induction. For countable unions, we use Claim 2.
+
+ \begin{cclaim}
+ If $M$ has a countable cover by $\R^d$'s, then it is good.
+ \end{cclaim}
+ Same argument as above, but using Claim 3 instead of Claim 0 for the base case.
+
+ \begin{cclaim}
+ Any manifold $M$ is good.
+ \end{cclaim}
+ Any manifold is second-countable by definition, so it has a countable open cover by copies of $\R^d$.
+\end{proof}
+
+\begin{cor}
+ For any compact $d$-dimensional $R$-oriented manifold $M$, the map
+ \[
+ [M] \frown \ph: H^\ell(M; R) \to H_{d - \ell}(M; R)
+ \]
+ is an isomorphism.
+\end{cor}
+
+\begin{cor}
+ Let $M$ be an odd-dimensional compact manifold. Then the Euler characteristic $\chi(M) = 0$.
+\end{cor}
+
+\begin{proof}
+ Pick $R = \F_2$. Then $M$ is $\F_2$-oriented, and Euler characteristics can be computed using coefficients in $\F_2$. Writing $\dim M = 2n + 1$, we then have
+ \[
+ \chi(M) = \sum_{i = 0}^{2n + 1} (-1)^i \dim_{\F_2} H_i(M, \F_2).
+ \]
+ But we know
+ \[
+ H_i(M, \F_2) \cong H^{2n + 1 - i}(M, \F_2) \cong (H_{2n + 1 - i}(M, \F_2))^* \cong H_{2n + 1 - i}(M, \F_2)
+ \]
+ by Poincar\'e duality and the universal coefficients theorem.
+
+ But the dimensions of these show up in the sum above with opposite signs. So they cancel, and $\chi(M) = 0$.
+\end{proof}
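+
+As a sanity check, take $M = \RP^3$. We have $H_i(\RP^3, \F_2) \cong \F_2$ for $0 \leq i \leq 3$, so
+\[
+ \chi(\RP^3) = 1 - 1 + 1 - 1 = 0,
+\]
+with the dimensions in degrees $i$ and $3 - i$ cancelling in pairs, exactly as in the proof.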
+
+What is the relation between the cap product and the cup product? If $\sigma \in C_{k + \ell}(X; R)$, $\varphi \in C^k(X; R)$ and $\psi \in C^{\ell}(X; R)$, then
+\begin{align*}
+ \psi(\sigma \frown \varphi) &= \psi(\varphi(\sigma|_{[v_0, \ldots, v_k]}) \sigma|_{[v_k, \ldots, v_{k + \ell}]}) \\
+ &= \varphi(\sigma |_{[v_0, \ldots, v_k]}) \psi(\sigma|_{[v_k, \ldots, v_{k + \ell}]})\\
+ &= (\varphi \smile \psi)(\sigma).
+\end{align*}
+Since we like diagrams, we can express this equality in terms of some diagram commuting. The map $h: H^k(X; R) \to \Hom_R(H_k(X; R), R)$ in the universal coefficients theorem is given by
+\[
+ [\varphi] \mapsto ([\sigma] \mapsto \varphi(\sigma)).
+\]
+This map exists all the time, even if the hypothesis of the universal coefficients theorem does not hold. It's just that it need not be an isomorphism. The formula
+\[
+ \psi(\sigma \frown \varphi) = (\varphi \smile \psi)(\sigma)
+\]
+then translates to the following diagram commuting:
+\[
+ \begin{tikzcd}
+ H^\ell(X; R) \ar[d, "\varphi \smile \ph"] \ar[r, "h"] & \Hom_R(H_\ell(X; R), R) \ar[d, "(\ph \frown \varphi)^*"]\\
+ H^{k + \ell}(X; R) \ar[r, "h"] & \Hom_R(H_{\ell + k}(X; R), R)
+ \end{tikzcd}
+\]
+Now when the universal coefficient theorem applies, then the maps $h$ are isomorphisms. So the cup product and cap product determine each other, and contain the same information.
+
+Now since Poincar\'e duality is expressed in terms of cap products, this correspondence gives us some information about cupping as well.
+\begin{thm}
+ Let $M$ be a $d$-dimensional compact $R$-oriented manifold, and consider the following pairing:
+ \[
+ \begin{tikzcd}[cdmap]
+ \bra\ph, \ph \ket: H^k(M; R) \otimes H^{d - k}(M; R) \ar[r] & R\\
+ \lbrack\varphi\rbrack \otimes \lbrack\psi\rbrack \ar[r, maps to] & (\varphi \smile \psi)[M]
+ \end{tikzcd}.
+ \]
+ If $H_*(M; R)$ is free, then $\bra \ph, \ph\ket$ is non-singular, i.e.\ both adjoints are isomorphisms, i.e.\ both
+ \[
+ \begin{tikzcd}[cdmap]
+ H^k(M; R) \ar[r] & \Hom(H^{d - k}(M; R), R)\\
+ \lbrack\varphi\rbrack \ar[r, maps to] & (\lbrack\psi\rbrack \mapsto \bra \varphi, \psi\ket)
+ \end{tikzcd}
+ \]
+ and the other way round are isomorphisms.
+\end{thm}
+
+\begin{proof}
+ We have
+ \[
+ \bra \varphi, \psi\ket = (-1)^{|\varphi||\psi|} \bra \psi, \varphi\ket,
+ \]
+ as we know
+ \[
+ \varphi \smile \psi = (-1)^{|\varphi||\psi|} \psi \smile \varphi.
+ \]
+ So if one adjoint is an isomorphism, then so is the other.
+
+ To see that they are isomorphisms, we notice that we have an isomorphism
+ \[
+ \begin{tikzcd}[row sep=tiny]
+ H^k(M; R) \ar[r, "\mathrm{UCT}"] & \Hom_R(H_k(M; R), R) \ar[r, "D_M^*"] & \Hom_R(H^{d - k}(M; R), R)\\
+ \lbrack\varphi\rbrack \ar[r, maps to] & (\lbrack\sigma\rbrack \mapsto \varphi(\sigma)) \ar[r, maps to] & (\lbrack\psi\rbrack \mapsto \varphi([M] \frown \psi))
+ \end{tikzcd}.
+ \]
+ But we know
+ \[
+ \varphi([M]\frown \psi) = (\psi \smile \varphi)([M]) = \bra \psi, \varphi\ket.
+ \]
+ So this is just the adjoint. So the adjoint is an isomorphism.
+\end{proof}
+
+This is a very useful result. We have already seen examples where we can figure out if a cup product vanishes. But this result tells us that certain cup products are \emph{not} zero. This is something we haven't seen before.
+
+\begin{eg}
+ Consider $\CP^n$. We have previously computed
+ \[
+ H_*(\CP^n, \Z) =
+ \begin{cases}
+ \Z & * = 2i,\quad 0 \leq i \leq n\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ Also, we know $\CP^n$ is $\Z$-oriented, because it is a complex manifold. Let's forget that we have computed the cohomology ring structure, and compute it again.
+
+ Suppose, for induction, that we have found
+ \[
+ H^*(\CP^{n - 1}, \Z)= \Z[x]/(x^n).
+ \]
+ As $\CP^n$ is obtained from $\CP^{n - 1}$ by attaching a $2n$-cell, the map $i^*: H^2(\CP^n, \Z) \to H^2(\CP^{n - 1}, \Z)$ induced by inclusion is an isomorphism. Then the generator $x \in H^2(\CP^{n - 1}, \Z)$ gives us a generator $y \in H^2(\CP^n, \Z)$.
+
+ Now if we can show that $y^k \in H^{2k}(\CP^n, \Z) \cong \Z$ is a generator for all $k$, then $H^*(\CP^n, \Z) \cong \Z[y]/(y^{n + 1})$.
+
+ But we know that $y^{n - 1}$ generates $H^{2n - 2}(\CP^n, \Z)$, since it pulls back under $i^*$ to $x^{n - 1}$, which is a generator. Finally, consider
+ \[
+ \begin{tikzcd}[cdmap]
+ H^2(\CP^n, \Z) \otimes H^{2n - 2} (\CP^n, \Z) \ar[r] & \Z\\
+ y \otimes y^{n - 1} \ar[r, maps to] & y^n [\CP^n].
+ \end{tikzcd}
+ \]
+ Since this is non-singular, we know $y^n \in H^{2n}(\CP^n, \Z)$ must be a generator.
+\end{eg}
+Of course, we can get $H^*(\RP^n, \F_2)$ similarly.
+
+\subsection{Applications}
+We go through two rather straightforward applications, before we move on to bigger things like the intersection product.
+
+\subsubsection*{Signature}
+We focus on the case where $d = 2n$ is even. Then we have, in particular, a non-degenerate bilinear form
+\[
+ \bra \ph, \ph\ket : H^n(M; R) \otimes H^n(M; R) \to R.
+\]
+Also, we know
+\[
+ \bra a, b\ket = (-1)^{n^2}\bra b, a\ket = (-1)^n \bra b, a \ket.
+\]
+So this is a symmetric form if $n$ is even, and a skew-symmetric form if $n$ is odd. These are very different scenarios. For example, a symmetric matrix is diagonalizable with real eigenvalues (if $R = \R$), but a skew-symmetric form does not have these properties.
+
+So if $M$ is $4k$-dimensional and $\Z$-oriented, then in particular $M$ is $\R$-oriented. Then the map $\bra \ph, \ph\ket : H^{2k}(M; \R) \otimes H^{2k}(M; \R) \to \R$ can be represented by a symmetric real matrix, which can be diagonalized, with real eigenvalues that can be positive or negative.
+\begin{defi}[Signature of manifold]\index{signature of manifold}\index{manifold!signature}
+ Let $M$ be a $4k$-dimensional $\Z$-oriented manifold. Then the signature is the number of positive eigenvalues of
+ \[
+ \bra \ph, \ph\ket: H^{2k}(M; \R) \otimes H^{2k}(M; \R) \to \R
+ \]
+ minus the number of negative eigenvalues. We write this as $\sgn(M)$.
+\end{defi}
+By Sylvester's law of inertia, this is well-defined.
+
+\begin{fact}
+ If $M = \partial W$ for some compact $(4k + 1)$-dimensional manifold $W$ with boundary, then $\sgn(M) = 0$.
+\end{fact}
+
+\begin{eg}
+ $\CP^2$ has $H^2(\CP^2; \R) \cong \R$, and the bilinear form is represented by the matrix $(1)$. So the signature is $1$. So $\CP^2$ is not the boundary of a manifold.
+\end{eg}
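+
+\begin{eg}
+ For contrast, consider $M = S^2 \times S^2$, with $a, b \in H^2(S^2 \times S^2; \R)$ the classes pulled back from the two factors. Then $\bra a, a\ket = \bra b, b\ket = 0$ and $\bra a, b\ket = 1$, so the bilinear form is represented by the matrix
+ \[
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix},
+ \]
+ which has eigenvalues $\pm 1$. So $\sgn(S^2 \times S^2) = 0$, consistent with the fact that $S^2 \times S^2 = \partial (S^2 \times D^3)$.
+\end{eg}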
+
+\subsubsection*{Degree}
+Recall we defined the degree of a map from a sphere to itself. But if we have a $\Z$-oriented space, we can have the fundamental class $[M]$, and then there is an obvious way to define the degree.
+
+\begin{defi}[Degree of map]\index{degree of map}
+ If $M, N$ are $d$-dimensional compact connected $\Z$-oriented manifolds, and $f: M \to N$, then
+ \[
+ f_*([M]) \in H_d(N, \Z) \cong \Z[N].
+ \]
+ So $f_*([M]) = k [N]$ for some $k$. This $k$ is called the \emph{degree} of $f$, written $\deg (f)$.
+\end{defi}
+
+If $N = M = S^n$ and we pick the same orientation for them, then this recovers our previous definition.
+
+By exactly the same proof, we can compute this degree using local degrees, just as in the case of a sphere.
+
+\begin{cor}
+ Let $f: M \to N$ be a map between manifolds. If $\F$ is a field and $\deg(f) \not= 0 \in \F$, then the induced map
+ \[
+ f^*: H^*(N, \F) \to H^*(M, \F)
+ \]
+ is injective.
+\end{cor}
+This seems just like an amusement, but this is powerful. We know this is in fact a map of rings. So if we know how to compute cup products in $H^*(M; \F)$, then we can do it for $H^*(N; \F)$ as well.
+
+\begin{proof}
+ Suppose not. Let $\alpha \in H^k(N, \F)$ be non-zero but $f^*(\alpha) = 0$. As
+ \[
+ \bra \ph, \ph \ket: H^k(N, \F) \otimes H^{d - k}(N, \F) \to \F
+ \]
+ is non-singular, we know there is some $\beta \in H^{d - k}(N, \F)$ such that
+ \[
+ \bra \alpha, \beta\ket = (\alpha \smile \beta)[N] = 1.
+ \]
+ Then we have
+ \begin{align*}
+ \deg(f) &= \deg (f) \cdot 1 \\
+ &= (\alpha \smile \beta)(\deg(f) [N]) \\
+ &= (\alpha \smile \beta) (f_*[M]) \\
+ &= (f^*(\alpha) \smile f^*(\beta))([M])\\
+ &= 0.
+ \end{align*}
+ This is a contradiction.
+\end{proof}
+
+\subsection{Intersection product}
+Recall that cohomology comes with a cup product. Thus, Poincar\'e duality gives us a product on homology. Our goal in this section is to understand this product.
+
+We restrict our attention to smooth manifolds, so that we can talk about the tangent bundle. Recall (from the example sheet) that an orientation of the manifold is the same as an orientation on the tangent bundle.
+
+We will consider homology classes that come from submanifolds. For concreteness, let $M$ be a compact smooth $R$-oriented manifold, and $N \subseteq M$ be an $n$-dimensional $R$-oriented submanifold. Let $i: N \hookrightarrow M$ be the inclusion. Suppose $\dim M = d$ and $\dim N = n$. Then we obtain a canonical homology class
+\[
+ i_*[N] \in H_n(M; R).
+\]
+We will abuse notation and write $[N]$ for $i_* [N]$. This may or may not be zero. Our objective is to show that under suitable conditions, the product of $[N_1]$ and $[N_2]$ is $[N_1 \cap N_2]$.
+
+To do so, we will have to understand the cohomology class Poincar\'e dual to $[N]$. We claim that, suitably interpreted, it is the Thom class of the normal bundle.
+
+Write $\nu_{N \subseteq M}$ for the normal bundle of $N$ in $M$. Picking a metric on $TM$, we can decompose
+\[
+ i^*TM \cong TN \oplus \nu_{N \subseteq M}.
+\]
+Since $TM$ is oriented, we obtain an orientation on the pullback $i^*TM$. Similarly, $TN$ is also oriented by assumption. In general, we have the following result:
+\begin{lemma}
+ Let $X$ be a space and $V$ a vector bundle over $X$. If $V = U \oplus W$, then orientations for any two of $U, W, V$ give an orientation for the third.
+\end{lemma}
+
+\begin{proof}
+ Say $\dim V = d$, $\dim U = n$, $\dim W = m$. Then at each point $x \in X$, by K\"unneth's theorem, we have an isomorphism
+ \[
+ H^d(V_x, V_x^\#; R) \cong H^n(U_x, U_x^\#; R) \otimes H^m(W_x, W_x^\#; R) \cong R.
+ \]
+ So any local $R$-orientation on any two induces one on the third, and it is straightforward to check the local compatibility condition.
+\end{proof}
+
+Can we find a more concrete description of this orientation on $\nu_{N \subseteq M}$? By the same argument as when we showed that $H^i(\R^d \mid \{0\}) \cong H_c^i(\R^d)$, we know
+\[
+ H^i(\nu_{N \subseteq M}, \nu_{N \subseteq M}^\#; R) \cong H_c^i(\nu_{N \subseteq M}; R).
+\]
+Also, by the tubular neighbourhood theorem, we know $\nu_{N \subseteq M}$ is homeomorphic to an open neighbourhood $U$ of $N$ in $M$. So we get isomorphisms
+\[
+ H^i_c(\nu_{N \subseteq M}; R) \cong H_c^i(U; R) \cong H_{d - i}(U; R) \cong H_{d - i}(N; R),
+\]
+where the last isomorphism comes from the fact that $N$ is homotopy-equivalent to $U$.
+
+In total, we have an isomorphism
+\[
+ H^i(\nu_{N \subseteq M}, \nu_{N \subseteq M}^\#; R) \cong H_{d - i}(N; R).
+\]
+Under this isomorphism, the fundamental class $[N] \in H_n(N; R)$ corresponds to some
+\[
+ \mathcal{E}_{N \subseteq M} \in H^{d - n}(\nu_{N \subseteq M}, \nu_{N \subseteq M}^\#; R).
+\]
+But we know $\nu_{N \subseteq M}$ has dimension $d - n$. So $\mathcal{E}_{N \subseteq M}$ is in the right dimension to be a Thom class, and is a generator for $H^{d - n}(\nu_{N \subseteq M}, \nu_{N \subseteq M}^\#; R)$, because it corresponds to the generator $[N]$ of $H_n(N; R)$. One can check that this is indeed the Thom class.
+
+How is this related to the other things we've had? We can draw the commutative diagram
+\[
+ \begin{tikzcd}
+ H_n(N; R) \ar[r, "\sim"] & H_n(U; R) \ar[r, "\sim"] \ar[d, "i_*"] & H_c^{d - n}(U; R) \ar[d, "\text{extension by }0"] \\
+ & H_n(M; R) \ar[r, "\sim"] & H^{d - n}(M; R)
+ \end{tikzcd}
+\]
+The commutativity of the square is a straightforward naturality property of Poincar\'e duality.
+
+Under the identification $H_c^{d - n}(U; R) \cong H^{d - n}(\nu_{N \subseteq M}, \nu_{N \subseteq M}^\#; R)$, the above says that the image of $[N] \in H_n(N; R)$ in $H^{d - n}_c(U; R)$ is the Thom class of the normal bundle $\nu_{N \subseteq M}$.
+
+On the other hand, if we look at this composition via the bottom map, then $[N]$ gets sent to $D_M^{-1}([N])$. So we know that
+\begin{thm}
+ The Poincar\'e dual of a submanifold is (the extension by zero of) the normal Thom class.
+\end{thm}
+
+Now suppose we have two submanifolds $N, W \subseteq M$. The normal Thom classes then give us two cohomology classes of $M$. As promised, when the two intersect nicely, the cup product of their Thom classes is the Thom class of $[N \cap W]$. The niceness condition we need is the following:
+\begin{defi}[Transverse intersection]\index{transverse intersection}\index{intersect transversely}
+ We say two submanifolds $N, W \subseteq M$ \emph{intersect transversely} if for all $x \in N \cap W$, we have
+ \[
+ T_x N + T_x W = T_x M.
+ \]
+\end{defi}
+
+\begin{eg}
+ We allow intersections like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-1.5, 0) -- (1.5, 0);
+ \draw (0, -1.5) -- (0, 1.5);
+ \end{tikzpicture}
+ \end{center}
+ but not this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0);
+ \draw (-2, 2) parabola bend (0, 0) (2, 2);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+It is a fact that we can always ``wiggle'' the submanifolds a bit so that the intersection is transverse, so this is not too much of a restriction. We will neither make this precise nor prove it.
+
+Whenever the intersection is transverse, the intersection $N \cap W$ will be a submanifold of $M$, and of $N$ and $W$ as well. Moreover,
+\[
+ (\nu_{N \cap W \subseteq M})_x = (\nu_{N \subseteq M})_x \oplus (\nu_{W \subseteq M})_x.
+\]
+Now consider the inclusions
+\begin{align*}
+ i_N: N \cap W &\hookrightarrow N\\
+ i_W: N \cap W &\hookrightarrow W.
+\end{align*}
+Then we have
+\[
+ \nu_{N \cap W \subseteq M} = i_N^* (\nu_{N \subseteq M}) \oplus i_W^*(\nu_{W \subseteq M}).
+\]
+So with some abuse of notation, we can write
+\[
+ i_N^* \mathcal{E}_{N \subseteq M} \smile i_W^* \mathcal{E}_{W \subseteq M} \in H^*(\nu_{N \cap W \subseteq M}, \nu_{N \cap W \subseteq M}^\#; R),
+\]
+and we can check this gives the Thom class. So we have
+\[
+ D_M^{-1}([N]) \smile D_M^{-1}([W]) = D_M^{-1}([N \cap W]).
+\]
+The slogan is ``cup product is Poincar\'e dual to intersection''. One might be slightly worried about the graded commutativity of the cup product, because $N \cap W = W \cap N$ as manifolds, but in general,
+\[
+ D_M^{-1}([N]) \smile D_M^{-1}([W]) \not= D_M^{-1}([W]) \smile D_M^{-1}([N]).
+\]
+The fix is to say that $N \cap W$ and $W \cap N$ are not the same as \emph{oriented} manifolds in general, and sometimes they differ by a sign, but we will not go into details.
+
+More generally, we can define
+\begin{defi}[Intersection product]\index{intersection product}
+ The \emph{intersection product} on the homology of a compact manifold is given by
+ \[
+ \begin{tikzcd}[cdmap]
+ H_{n - k}(M) \otimes H_{n - \ell}(M) \ar[r] & H_{n - k - \ell} (M)\\
+ (a, b) \ar[r, maps to] & a \cdot b = D_M(D_M^{-1}(a) \smile D_M^{-1}(b))
+ \end{tikzcd}
+ \]
+\end{defi}
+
+\begin{eg}
+ We know that
+ \[
+ H_{2k}(\CP^n, \Z) \cong
+ \begin{cases}
+ \Z & 0 \leq k \leq n\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ Moreover, we know the generator for these when we computed it using cellular homology, namely
+ \[
+ [\CP^k] \equiv y_k \in H_{2k}(\CP^n, \Z).
+ \]
+ To compute $[\CP^k] \cdot [\CP^\ell]$, if we picked the canonical copies of $\CP^k, \CP^\ell \subseteq \CP^n$, then one would be contained in the other, and this is exactly the opposite of intersecting transversely.
+
+ Instead, we pick
+ \[
+ \CP^k = \{[z_0:z_1:\cdots: z_k:0:\cdots:0]\},\quad \CP^\ell = \{[0:\cdots : 0 : w_0 : \cdots : w_\ell]\}.
+ \]
+ It is a fact that any two embeddings of $\CP^k \hookrightarrow \CP^n$ are homotopic, so we can choose these.
+
+ Now these two manifolds intersect transversely, and the intersection is
+ \[
+ \CP^k \cap \CP^\ell = \CP^{k + \ell - n}.
+ \]
+ So this says that
+ \[
+ y_k \cdot y_\ell = \pm y_{k + \ell - n},
+ \]
+ where there is some annoying sign we do not bother to figure out.
+
+ So if $x_k$ is Poincar\'e dual to $y_{n - k}$, then
+ \[
+ x_k \smile x_\ell = x_{k + \ell},
+ \]
+ which is what we have previously found out.
+\end{eg}
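+
+For a concrete instance, take $n = 3$ and $k = \ell = 2$: two copies of $\CP^2$ in $\CP^3$ chosen as above intersect transversely in a $\CP^1$, so
+\[
+ y_2 \cdot y_2 = \pm y_1 \in H_2(\CP^3, \Z),
+\]
+and correspondingly $x_1 \smile x_1 = x_2$ (up to sign) in $H^*(\CP^3, \Z)$.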
+
+\begin{eg}
+ Consider the manifold with three holes
+ \begin{center}
+ \begin{tikzpicture}[yscale=2, xscale=1.3]
+ \draw plot [smooth cycle, tension=0.8] coordinates {(-1.7, 0) (-1.4, -0.27) (-0.7, -0.35) (0, -0.23) (0.8, -0.35) (1.6, -0.23) (2.3, -0.35) (3.0, -0.27) (3.3, 0) (3.0, 0.27) (2.3, 0.35) (1.6, 0.23) (0.8, 0.35) (0, 0.23) (-0.7, 0.35) (-1.4, 0.27)};
+
+ \foreach \x in {2.4, 0.8, -0.8}{
+ \begin{scope}[shift={(\x, -0.03)}]
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0);
+ \end{scope}
+ }
+ \draw [mred, thick] (-0.8, 0) ellipse (0.6 and 0.2);
+ \draw [mred, thick] (0.8, 0) ellipse (0.6 and 0.2);
+ \draw [mred, thick] (2.4, 0) ellipse (0.6 and 0.2);
+
+ \node [mred, above] at (-0.8, 0.2) {$b_1$};
+ \node [mred, above] at (0.8, 0.2) {$b_2$};
+ \node [mred, above] at (2.4, 0.2) {$b_3$};
+
+ \draw [mblue, thick] (-0.8, -0.361) node [below] {$a_1$} arc(-90:90:0.05 and 0.125);
+ \draw [mblue, thick, dashed] (-0.8, -0.361) arc(270:90:0.05 and 0.125);
+ \draw [mblue, thick] (0.8, -0.361) node [below] {$a_2$} arc(-90:90:0.05 and 0.125);
+ \draw [mblue, thick, dashed] (0.8, -0.361) arc(270:90:0.05 and 0.125);
+ \draw [mblue, thick] (2.4, -0.361) node [below] {$a_3$} arc(-90:90:0.05 and 0.125);
+ \draw [mblue, thick, dashed] (2.4, -0.361) arc(270:90:0.05 and 0.125);
+ \end{tikzpicture}
+ \end{center}
+ Then we have
+ \[
+ a_i \cdot b_i = \{\mathrm{pt}\},\quad a_i \cdot b_j = 0\text{ for }i \not= j.
+ \]
+ So we can read off the ring structure of $H_*(\mathcal{E}_g, \Z)$ under the intersection product, and hence the ring structure of $H^*(\mathcal{E}_g, \Z)$.
+
+ This is so much easier than everything else we've been doing.
+\end{eg}
+Here we know that the intersection product is well-defined, so we are free to pick our own nice representatives of the homology classes to perform the calculation.
+
+Of course, this method is not completely general. For example, it would be difficult to visualize this when we work with manifolds with higher dimensions, and more severely, not all homology classes of a manifold have to come from submanifolds (e.g.\ twice the fundamental class)!
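In the surface example above, the relations $a_i \cdot b_i = \{\mathrm{pt}\}$ and $a_i \cdot b_j = 0$ for $i \neq j$ assemble, in the basis $a_1, b_1, \ldots, a_g, b_g$ (with $b_i \cdot a_i = -1$ by graded antisymmetry of the product on odd-degree classes), into the standard symplectic matrix, which is unimodular, as Poincar\'e duality demands. An exact check in pure Python (helper names are ours):

```python
from fractions import Fraction

def det(M):
    """Determinant by exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            sign = -sign
        d *= M[i][i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            M[r] = [a - factor * b for a, b in zip(M[r], M[i])]
    return sign * d

g = 3  # genus of the surface above
# Intersection matrix in the basis (a_1, b_1, ..., a_g, b_g):
# a_i . b_i = +1, b_i . a_i = -1, and all other products vanish.
J = [[0] * (2 * g) for _ in range(2 * g)]
for i in range(g):
    J[2 * i][2 * i + 1] = 1
    J[2 * i + 1][2 * i] = -1

assert det(J) == 1  # unimodular, as Poincare duality requires
# Antisymmetric, as expected for the intersection form on H_1 of a surface:
assert all(J[i][j] == -J[j][i] for i in range(2 * g) for j in range(2 * g))
```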
+
+\subsection{The diagonal}
+Again, let $M$ be a compact $\Q$-oriented $d$-dimensional manifold. Then $M \times M$ is a $2d$-dimensional manifold. We can try to orient it as follows --- K\"unneth gives us an isomorphism
+\[
+ H^{d + d}(M \times M; \Q) \cong H^d(M; \Q) \otimes H^d(M; \Q),
+\]
+as $H^k(M; \Q) = 0$ for $k > d$. By the universal coefficients theorem, plus the fact that duals commute with tensor products, we have an isomorphism
+\[
+ H_{d + d}(M \times M; \Q) \cong H_d(M; \Q) \otimes H_d(M; \Q).
+\]
+Thus the fundamental class $[M]$ gives us a fundamental class $[M \times M] = [M] \otimes [M]$.
+
+We are going to do some magic that involves the diagonal map
+\[
+ \begin{tikzcd}[cdmap]
+ \Delta: M \ar[r] & M \times M\\
+ x \ar[r, maps to] & (x, x).
+ \end{tikzcd}
+\]
+This gives us a cohomology class
+\[
+ \delta = D_{M \times M}^{-1}(\Delta_*[M]) \in H^d(M \times M, \Q) \cong \bigoplus_{i + j = d}H^i(M, \Q) \otimes H^j(M, \Q).
+\]
+It turns out a lot of things we want to do with this $\delta$ can be helped a lot by doing the despicable thing called picking a basis.
+
+We let $\{a_i\}$ be a basis for $H^*(M, \Q)$. On this vector space, we have a non-singular form
+\[
+ \bra \ph, \ph\ket: H^*(M, \Q) \otimes H^*(M; \Q) \to \Q
+\]
+given by $\bra \varphi, \psi\ket = (\varphi \smile \psi)([M])$. Last time we were careful about the degrees of the cochains, but here we just declare that if $\varphi \smile \psi$ does not have degree $d$, then the result is $0$.
+
+Now let $\{b_i\}$ be the dual basis to the $a_i$ using this form, i.e.
+\[
+ \bra a_i, b_j\ket = \delta_{ij}.
+\]
+It turns out $\delta$ has a really nice form expressed using this basis:
+
+\begin{thm}
+ We have
+ \[
+ \delta = \sum_i (-1)^{|a_i|} a_i \otimes b_i.
+ \]
+\end{thm}
+
+\begin{proof}
+ We can certainly write
+ \[
+ \delta = \sum_{i, j} C_{ij} a_i \otimes b_j
+ \]
+ for some $C_{ij}$. So we need to figure out what the coefficients $C_{ij}$ are. We try to compute
+ \begin{align*}
+ ((b_k \otimes a_\ell) \smile \delta)[M \times M] &= \sum C_{ij} (b_k \otimes a_\ell) \smile (a_i \otimes b_j) [M \times M]\\
+ &= \sum C_{ij} (-1)^{|a_\ell||a_i|} (b_k \smile a_i) \otimes (a_\ell \smile b_j) [M] \otimes [M]\\
+ &= \sum C_{ij} (-1)^{|a_\ell||a_i|} (\delta_{ik}(-1)^{|a_i||b_k|}) \delta_{j \ell}\\
+ &= (-1)^{|a_k||a_\ell| + |a_k| |b_k|} C_{k\ell}.
+ \end{align*}
+ But we can also compute this a different way, using the definition of $\delta$:
+ \begin{align*}
+ (b_k \otimes a_\ell \smile \delta)[M \times M] = (b_k \otimes a_\ell)(\Delta_*[M]) = (b_k \smile a_\ell)[M] = (-1)^{|a_\ell||b_k|} \delta_{k\ell}.
+ \end{align*}
+ So we see that
+ \[
+ C_{k\ell} = \delta_{k\ell}(-1)^{|a_\ell|}.\qedhere
+ \]
+\end{proof}
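As a quick sanity check of the theorem, take $M = S^n$ with $n \geq 1$: a basis of $H^*(S^n; \Q)$ is $a_1 = 1$ and $a_2 = a$, where $a$ generates $H^n(S^n; \Q)$ and $a[S^n] = 1$. Then $\bra 1, a\ket = \bra a, 1\ket = 1$, while $\bra 1, 1\ket = \bra a, a\ket = 0$ for degree reasons, so $b_1 = a$ and $b_2 = 1$. The theorem then gives

```latex
\[
  \delta_{S^n} = 1 \otimes a + (-1)^n\, a \otimes 1,
\]
```

and pulling back along the diagonal yields $\Delta^*(\delta_{S^n})[S^n] = 1 + (-1)^n = \chi(S^n)$, in accordance with the corollary that follows.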
+
+\begin{cor}
+ We have
+ \[
+ \Delta^*(\delta)[M] = \chi(M),
+ \]
+ the Euler characteristic.
+\end{cor}
+
+\begin{proof}
+ We note that for $a \otimes b \in H^*(M \times M)$, we have
+ \[
+ \Delta^*(a \otimes b) = \Delta^*(\pi_1^* a \smile \pi_2^* b) = a \smile b
+ \]
+ because $\pi_i \circ \Delta = \id$. So we have
+ \[
+ \Delta^*(\delta) = \sum (-1)^{|a_i|} a_i \smile b_i.
+ \]
+ Thus
+ \[
+ \Delta^*(\delta)[M] = \sum_i (-1)^{|a_i|} = \sum_k (-1)^k \dim_\Q H^k(M; \Q) = \chi(M).\qedhere
+ \]
+\end{proof}
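The last equality is simply the computation of the Euler characteristic as the alternating sum of Betti numbers. A quick sketch, assuming the standard Betti numbers of $\CP^n$ and of the closed orientable genus-$g$ surface (function names are ours):

```python
def euler_char(betti):
    """Euler characteristic as the alternating sum of Betti numbers."""
    return sum((-1) ** k * b for k, b in enumerate(betti))

def betti_cpn(n):
    """CP^n: one copy of Q in each even degree 0, 2, ..., 2n."""
    return [1 if k % 2 == 0 else 0 for k in range(2 * n + 1)]

def betti_surface(g):
    """Closed orientable genus-g surface: Betti numbers 1, 2g, 1."""
    return [1, 2 * g, 1]

assert euler_char(betti_cpn(3)) == 4        # chi(CP^n) = n + 1
assert euler_char(betti_surface(3)) == -4   # chi = 2 - 2g
```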
+
+So far everything works for an arbitrary manifold. Now we suppose further that $M$ is smooth. Then $\delta$ is the Thom class of the normal bundle $\nu_{M \subseteq M \times M}$ of
+\[
+ M \hookrightarrow M \times M.
+\]
+By definition, pulling this back along $\Delta$ to $M$ gives the Euler class of the normal bundle. But we know $\nu_{M \subseteq M \times M} \cong TM$, because the fiber at $x$ is the cokernel of the map
+\[
+ \begin{tikzcd}
+ T_x M \ar[r, "\Delta"] & T_x M \oplus T_x M
+ \end{tikzcd}
+\]
+and the map
+\begin{align*}
+ T_x M \oplus T_x M &\to T_x M\\
+ (v, w) &\mapsto v - w
+\end{align*}
+gives us an isomorphism
+\[
+ \frac{T_x M \oplus T_x M}{\Delta T_x M} \cong T_x M.
+\]
+\begin{cor}
+ We have
+ \[
+ e(TM)[M] = \chi(M).
+ \]
+\end{cor}
+
+\begin{cor}
+ If $M$ has a nowhere-zero vector field, then $\chi(M) = 0$.
+\end{cor}
+
+More generally, this tells us that $\chi(M)$ is the number of zeroes of a vector field $M \to TM$ (transverse to the zero section) counted with sign.
+
+\begin{lemma}
+ Suppose we have $R$-oriented vector bundles $E \to X$ and $F \to X$ with Thom classes $u_E, u_F$. Then the Thom class for $E \oplus F \to X$ is $u_E \smile u_F$. Thus
+ \[
+ e(E \oplus F) = e(E) \smile e(F).
+ \]
+\end{lemma}
+
+\begin{proof}
+ More precisely, we have projection maps
+ \[
+ \begin{tikzcd}
+ & E \oplus F \ar[dl, "\pi_E"'] \ar[rd, "\pi_F"]\\
+ E & & F
+ \end{tikzcd}.
+ \]
+ We let $U = \pi_E^{-1}(E^\#)$ and $V = \pi_F^{-1}(F^\#)$. Now observe that
+ \[
+ U \cup V = (E \oplus F)^\#.
+ \]
+ So if $\dim E = e, \dim F = f$, then we have a map
+ \[
+ \begin{tikzcd}
+ H^e(E, E^\#) \otimes H^f(F, F^\#) \ar[r, "\pi_E^* \otimes \pi_F^*"] & H^e(E \oplus F, U) \otimes H^f(E \oplus F, V) \ar[d, "\smile"] \\
+ & H^{e + f}(E \oplus F, (E \oplus F)^\#)
+ \end{tikzcd},
+ \]
+ and it is easy to see that the image of $u_E \otimes u_F$ is the Thom class of $E \oplus F$ by checking the fibers.
+\end{proof}
+
+\begin{cor}
+ $TS^{2n}$ has no proper subbundles.
+\end{cor}
+
+\begin{proof}
+ We know $e(TS^{2n}) \not= 0$, as $e(TS^{2n})[S^{2n}] = \chi(S^{2n}) = 2$. But it cannot be a cup product of two classes of positive degree, since there is nothing in the intermediate cohomology groups. So $TS^{2n}$ is not the sum of two proper subbundles. Hence $TS^{2n}$ cannot have a proper subbundle $E$, or else $TS^{2n} = E \oplus E^\perp$ (for any choice of inner product).
+\end{proof}
+
+\subsection{Lefschetz fixed point theorem}
+Finally, we are going to prove the Lefschetz fixed point theorem. This is going to be better than the version you get in Part II, because this time we will know how many fixed points there are.
+
+So let $M$ be a compact $d$-dimensional manifold that is $\Z$-oriented, and $f: M \to M$ be a map. Now if we want to count the number of fixed points, then we want to make sure the map is ``transverse'' in some sense, so that there aren't infinitely many fixed points.
+
+It turns out the right condition is that the graph
+\[
+ \Gamma_f = \{(x, f(x)) \in M \times M\} \subseteq M \times M
+\]
+has to be transverse to the diagonal. Since $\Gamma_f \cap \Delta$ is exactly the fixed points of $f$, this is equivalent to requiring that for each fixed point $x$, the map
+\[
+ \D_x \Delta \oplus \D_x F: T_x M \oplus T_x M \to T_x M \oplus T_x M
+\]
+ is an isomorphism, where $F: M \to M \times M$ is the map $x \mapsto (x, f(x))$. We can write the matrix of this map, which is
+\[
+ \begin{pmatrix}
+ I & I\\
+ \D_x f & I
+ \end{pmatrix}.
+\]
+Doing a bunch of row and column operations, this is equivalent to requiring that
+\[
+ \begin{pmatrix}
+ I & 0\\
+ \D_x f & I - \D_x f
+ \end{pmatrix}
+\]
+is invertible. Thus the condition is equivalent to requiring that $1$ is not an eigenvalue of $\D_x f$.
+
+The claim is now the following:
+\begin{thm}[Lefschetz fixed point theorem]\index{Lefschetz fixed point theorem}\index{fixed point theorem!Lefschetz}
+ Let $M$ be a compact $d$-dimensional $\Z$-oriented manifold, and let $f: M \to M$ be a map such that the graph $\Gamma_f$ and the diagonal $\Delta$ intersect transversely. Then we have
+ \[
+ \sum_{x \in \fix(f)} \sgn \det(I - \D_x f) = \sum_k (-1)^k \tr(f^*: H^k(M; \Q) \to H^k(M; \Q)).
+ \]
+\end{thm}
+\begin{proof}
+ We have
+ \[
+ [\Gamma_f] \cdot [\Delta (M)] \in H_0(M \times M; \Q).
+ \]
+ We now want to calculate the image of this class under the augmentation $\varepsilon$. By Poincar\'e duality, this is equal to
+ \[
+ (D_{M \times M}^{-1}[\Gamma_f] \smile D_{M \times M}^{-1}[\Delta(M)])[M \times M] \in \Q.
+ \]
+ This is the same as
+ \[
+ (D_{M \times M}^{-1} [\Delta(M)]) ([\Gamma_f]) = \delta(F_*[M]) = (F^* \delta)[M],
+ \]
+ where $F: M \to M \times M$ is given by
+ \[
+ F(x) = (x, f(x)).
+ \]
+ We now use the fact that
+ \[
+ \delta = \sum (-1)^{|a_i|} a_i \otimes b_i.
+ \]
+ So we have
+ \[
+ F^* \delta = \sum (-1)^{|a_i|} a_i \smile f^* b_i.
+ \]
+ We write
+ \[
+ f^* b_i = \sum C_{ij} b_j.
+ \]
+ Then we have
+ \[
+ (F^* \delta)[M] = \sum_{i, j} (-1)^{|a_i|} C_{ij} (a_i \smile b_j) [M] = \sum_i (-1)^{|a_i|} C_{ii},
+ \]
+ and the $C_{ii}$ are exactly the diagonal entries of $f^*$, so this recovers the alternating trace sum.
+
+ We now compute this product in a different way. As $\Gamma_f$ and $\Delta (M)$ are transverse, we know $\Gamma_f \cap \Delta(M)$ is a $0$-manifold, and the orientation of $\Gamma_f$ and $\Delta(M)$ induces an orientation of it. So we have
+ \[
+ [\Gamma_f] \cdot [\Delta(M)] = [\Gamma_f \cap \Delta(M)] \in H_0(M \times M; \Q).
+ \]
+ We know this $\Gamma_f \cap \Delta(M)$ has $|\fix(f)|$ many points, so $[\Gamma_f \cap \Delta(M)]$ is a sum of $|\fix(f)|$ many terms, which matches the left-hand side above. It remains to check that the sign of each term is indeed $\sgn \det(I - \D_x f)$, and this is left as an exercise on the example sheet.
+\end{proof}
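As a sanity check of the theorem, consider the rotation of $S^2$ by an angle $\theta$ about the $z$-axis (not a multiple of $2\pi$). It fixes exactly the two poles, and at each pole $\D_x f$ is a planar rotation, so each local sign is $+1$; the right-hand side is $1 - 0 + 1 = 2$, since $f$ is homotopic to the identity. A numerical sketch of the local signs (the helper name is ours):

```python
import math

def local_sign(theta):
    """sgn det(I - R_theta) for a planar rotation R_theta, i.e. D_x f at a pole."""
    c, s = math.cos(theta), math.sin(theta)
    # I - R_theta = [[1 - c, s], [-s, 1 - c]], whose determinant is:
    det = (1 - c) ** 2 + s ** 2
    return 1 if det > 0 else -1

theta = 1.0  # any rotation angle that is not a multiple of 2*pi
# Left-hand side of the fixed point theorem: the two poles, where the
# derivative is rotation by theta and by -theta respectively.
lhs = local_sign(theta) + local_sign(-theta)
# Right-hand side: traces 1 on H^0 and H^2, 0 on H^1.
rhs = 1 - 0 + 1
assert lhs == rhs == 2
```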
+
+\begin{eg}
+ Any map $f: \CP^{2n} \to \CP^{2n}$ has a fixed point. We can't prove this using the normal fixed point theorem, but we can exploit the ring structure of cohomology to do this. We must have
+ \[
+ f^*(x) = \lambda x \in H^2(\CP^{2n}; \Q) = \Q x
+ \]
+ for some $\lambda \in \Q$. So we must have
+ \[
+ f^*(x^i) = \lambda^i x^i.
+ \]
+ We can now very easily compute the right-hand side of the fixed point theorem:
+ \[
+ \sum_k (-1)^k \tr(f^*: H^k \to H^k) = 1 + \lambda + \lambda^2 + \cdots + \lambda^{2n},
+ \]
+ and this cannot vanish: for $\lambda \not= 1$ it equals $(\lambda^{2n + 1} - 1)/(\lambda - 1)$, which is non-zero since the only real solution of $\lambda^{2n + 1} = 1$ is $\lambda = 1$, while at $\lambda = 1$ the sum is $2n + 1$.
+\end{eg}
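In fact the trace sum is strictly positive for every real $\lambda$, since numerator and denominator of $(\lambda^{2n + 1} - 1)/(\lambda - 1)$ change sign together at $\lambda = 1$. A brute-force numerical sketch of this positivity:

```python
def trace_sum(lam, n):
    """1 + lam + lam^2 + ... + lam^(2n), the alternating trace sum above."""
    return sum(lam ** k for k in range(2 * n + 1))

# Sample a range of real values of lambda: the sum is always positive,
# so the Lefschetz number of any map CP^{2n} -> CP^{2n} is non-zero.
for n in range(1, 5):
    for i in range(-400, 401):
        lam = i / 100  # lambda ranges over [-4, 4] in steps of 0.01
        assert trace_sum(lam, n) > 0
```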
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/analysis_of_partial_differential_equations.tex b/books/cam/III_M/analysis_of_partial_differential_equations.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5cad6da855ac65a98464f5c6dac792f19b6e2536
--- /dev/null
+++ b/books/cam/III_M/analysis_of_partial_differential_equations.tex
@@ -0,0 +1,3137 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2017}
+\def\nlecturer {C. Warnick}
+\def\ncourse {Analysis of Partial Differential Equations}
+\def\ncoursehead {Analysis of PDEs}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+This course serves as an introduction to the mathematical study of Partial Differential Equations (PDEs). The theory of PDEs is nowadays a huge area of active research, and it goes back to the very birth of mathematical analysis in the 18th and 19th centuries. The subject lies at the crossroads of physics and many areas of pure and applied mathematics.
+
+The course will mostly focus on four prototype linear equations: Laplace's equation, the heat equation, the wave equation and Schr\"odinger's equation. Emphasis will be given to modern functional analytic techniques, relying on a priori estimates, rather than explicit solutions, although the interaction with classical methods (such as the fundamental solution and Fourier representation) will be discussed. The following basic unifying concepts will be studied: well-posedness, energy estimates, elliptic regularity, characteristics, propagation of singularities, group velocity, and the maximum principle. Some non-linear equations may also be discussed. The course will end with a discussion of major open problems in PDEs.
+
+\subsubsection*{Pre-requisites}
+There are no specific pre-requisites beyond a standard undergraduate analysis background, in particular a familiarity with measure theory and integration. The course will be mostly self-contained and can be used as a first introductory course in PDEs for students wishing to continue with some specialised PDE Part III courses in the Lent and Easter terms.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Partial differential equations are ubiquitous in mathematics, physics, and beyond. The first equation we meet is probably Laplace's equation, which says
+\[
+ -\Delta u = -\sum_{i = 1}^n \frac{\partial^2 u}{\partial x_i^2} = 0.
+\]
+This is the canonical example of an \emph{elliptic PDE}, and we will spend a lot of time thinking about elliptic PDEs, since they tend to be very well-behaved. Instead of trying to solve equations explicitly, as we did in, say, IB Methods, our focus is mostly on the existence and uniqueness of solutions, without constructing them explicitly. This will involve machinery from functional analysis, and indeed much of the work will be about verifying the hypotheses required by the functional-analytic results (and sometimes proving those results themselves).
+
+We will also consider \emph{hyperbolic equations}. The canonical example is the wave equation
+\[
+ \frac{\partial^2 u}{\partial t^2} - \Delta u = 0.
+\]
+The difference is that the time derivative term now has a different sign from the rest. In Laplace's equation, all directions were equal. Here time is a ``special'' direction, and often our questions are about how the solution evolves in time.
+
+Of course, we don't ``just solve'' such equations. Usually, we impose some \emph{data}, such as the desired values of $u$ on the boundary of our domain, or the ``starting configuration'' in the case of the wave equation. In general, given such a system, there are several questions we can ask:
+\begin{itemize}
+ \item Does a solution exist?
+ \item Is the solution unique?
+ \item Does the solution depend continuously on the data?
+ \item How \emph{regular} is the solution? Is it continuously differentiable? Or even smooth?
+\end{itemize}
+These questions are closely related. To even make sense of the question, we need to specify our ``search space'', i.e.\ the sort of functions we are willing to consider. For example, we may consider the space of all smooth functions, or less ambitiously, the space of all twice-differentiable functions. This somewhat answers the last question, but not completely: it could be that we search for solutions in the space of $C^2$ functions, and it turns out the solutions are always smooth!
+
+The choice of this function space affects the answers to the other questions as well. If we have a larger function space, then we are more likely to get a positive answer to the first question. However, since there are more functions around, we are more likely to get a \emph{negative} answer to the second question. So there is some tension here.
+
+The choice affects the third question in a slightly more subtle way. To speak of continuity, we must pick a topology, and this usually comes from a norm on the function space. Thus, to make sense of the third question, we must pick the appropriate norm on both the space of data and the space of potential solutions.
+
+After choosing the appropriate function spaces, if the answers to the first three questions are all ``yes'', then we say the problem is \emph{well-posed}\index{well-posed problem}.
+
+\section{Basics of PDEs}
+It might be wise to define what a partial differential equation is.
+
+\begin{defi}[Partial differential equation]
+ Suppose $U \subseteq \R^n$ is open. A \term{partial differential equation} (\term{PDE}) of \emph{order}\index{order of PDE}\index{PDE!order} $k$ is a relation of the form
+ \[
+ F(x, u(x), \D u(x), \ldots, \D^k u(x)) = 0,\tag{$*$}
+ \]
+ where $F: U \times \R \times \R^n \times \R^{n^2} \times \cdots \times \R^{n^k} \to \R$ is a given function, and $u: U \to \R$ is the ``unknown''.
+\end{defi}
+
+\begin{defi}[Classical solution]
+ We say $u \in C^k(U)$ is a \term{classical solution} of a PDE if in fact the PDE is identically satisfied on $U$ when $u, \D u, \ldots, \D^k u$ are substituted in.
+\end{defi}
+
+More generally, we can allow $u$ and $F$ to take values in a vector space. In this case, we say it is a \term{system of PDEs}\index{PDE!system}\index{partial differential equation!system}.
+
+We can now entertain ourselves by writing out a large list of PDEs that are naturally found in physics and mathematics.
+\begin{eg}[Transport equation]\index{transport equation}
+ Suppose $v: \R^4 \times \R \to \R^3$ and $f: \R^4 \to \R$ are given. The \emph{transport equation} is
+ \[
+ \frac{\partial u}{\partial t}(x, t) + v(x, t, u(x, t)) \cdot \D_x u(x, t) = f(x, t)
+ \]
+ where we think of $x \in \R^3$ and $t \in \R$. This describes the evolution of the density $u$ of some chemical being advected by a flow $v$ and produced at a rate $f$.
+
+ We see that this is a PDE of order $1$, and a relatively straightforward solution method exists, namely the \term{method of characteristics}.
+\end{eg}
+
+\begin{eg}[Laplace's and Poisson's equations]\index{Laplace's equation}\index{Poisson's equation}
+ Taking $u: \R^n \to \R$, Laplace's equation is
+ \[
+ \Delta u(x) = \sum_{i = 1}^n \frac{\partial^2 u}{\partial x_i \partial x_i} (x) = 0.
+ \]
+ This describes, for example, the electrostatic potential in vacuum and the static distribution of heat inside a uniform solid body. It also has applications to steady flows in 2d fluids.
+
+ There is an inhomogeneous version of this:
+ \[
+ \Delta u(x) = f(x),
+ \]
+ where $f: \R^n \to \R$ is a fixed function. This is known as \emph{Poisson's equation}, and describes, for example, the electrostatic field due to a charge distribution, and the gravitational field in Newtonian gravity.
+\end{eg}
+
+\begin{eg}[Heat/diffusion equation]\index{heat equation}\index{diffusion equation}
+ This is given by
+ \[
+ \frac{\partial u}{\partial t} = \Delta u,
+ \]
+ where $u: \R^n \times \R \to \R$ is now a function of space and time. This describes the evolution of temperature inside a uniform body, or equivalently the diffusion of some chemical (where $u$ is the density).
+\end{eg}
+
+\begin{eg}[Wave equation]\index{wave equation}
+ The wave equation is given by
+ \[
+ \frac{\partial^2 u}{\partial t^2} = \Delta u,
+ \]
+ where $u: \R^n \times \R \to \R$ is again a function of space and time. This describes oscillations of
+ \begin{itemize}
+ \item strings ($n = 1$)
+ \item membrane/drum ($n = 2$)
+ \item air density in a sound wave ($n = 3$)
+ \end{itemize}
+\end{eg}
+
+\begin{eg}[Schr\"odinger's equation]\index{Schr\"odinger's equation}
+ Let $u: \R^n \times \R \to \C \cong \R^2$. Up to choices of units and convention, \emph{Schr\"odinger's equation} is
+ \[
+ i\frac{\partial u}{\partial t} + \Delta u - Vu = 0.
+ \]
+ Here $u$ is the wavefunction of a particle moving in a potential $V: \R^n \to \R$.
+\end{eg}
+
+\begin{eg}[Maxwell's equations]\index{Maxwell's equation}
+ The unknowns here are $\mathbf{E}, \mathbf{B}: \R^3 \times \R \to \R^3$. They satisfy \emph{Maxwell's equations}
+ \begin{align*}
+ \nabla \cdot \mathbf{E} &= \rho & \nabla \cdot \mathbf{B} &= 0\\
+ \nabla \times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} &= 0 & \nabla \times \mathbf{B} - \frac{\partial \mathbf{E}}{\partial t} &= \mathbf{J},
+ \end{align*}
+ where $\rho$ is the electric charge density, $\mathbf{J}$ is the electric current, $\mathbf{E}$ is the electric field and $\mathbf{B}$ is the magnetic field.
+
+ This is a system of 6 equations and 6 unknowns.
+\end{eg}
+
+\begin{eg}[Einstein's equations]\index{Einstein's equations}
+ The \emph{Einstein equations} in vacuum are
+ \[
+ R_{\mu\nu}[g] = 0,
+ \]
+ where $g$ is a Lorentzian metric (encoding the gravitational field), and $R_{\mu\nu}[g]$ is the Ricci curvature of $g$.
+
+ Since we haven't said what $g$ and $R_{\mu\nu}$ are, it is not clear that this is a partial differential equation, but it is.
+\end{eg}
+
+\begin{eg}[Minimal surface equation]\index{minimal surface equation}
+ The \emph{minimal surface equation} is
+ \[
+ \mathrm{Div}\left(\frac{\D u}{\sqrt{1 + |\D u|^2}}\right) = 0,
+ \]
+ where $u: \R^n \to \R$ is some function. This is the condition that the graph of $u$, $\{(x, u(x))\}\subseteq \R^n \times \R$, is locally an extremizer of area.
+\end{eg}
+
+\begin{eg}[Ricci flow]\index{Ricci flow}
+ Let $g$ be a Riemannian metric on some manifold. The \emph{Ricci flow} is a PDE that evolves this metric:
+ \[
+ \frac{\partial g_{ij}}{\partial t} = -2 R_{ij}[g],
+ \]
+ where $R_{ij}$ is again the Ricci curvature.
+
+ The most famous application is in proving the Poincar\'e conjecture, which is a topological conjecture about $3$-manifolds.
+\end{eg}
+
+These PDEs exhibit a wide variety of behaviours. For example, waves behave very differently from the evolution of temperature. This means it is unlikely that we can say anything about PDEs as a whole, since everything we say must be true for both the heat equation and the wave equation. We must restrict to some particular classes of PDEs to say something useful. Thus, we seek to classify our PDEs into different types. We first introduce some notation.
+
+%\subsubsection*{Data and well-posedness}
+%In all the examples, we need some additional information to even hope for a unique solution. For example, in the case of Laplace's equation, we might need to know the boundary values of $u$; For the heat equation, we need to know the initial temperature distribution. We broadly refer to these information as the \term{data}. An important part of studying PDE is to understand what data we need for a certain PDE problem. Roughly speaking, we want enough data so that we can solve the problem, and not too much data that there is no solution.
+%
+%\begin{defi}[Well-posed problem]\index{well-posed problem}
+% We say a PDE problem (equation plus data) is \term{well-posed} if
+% \begin{enumerate}
+% \item A solution exists;
+% \item The solution is unique; and
+% \item The solution depends continuously on the data.
+% \end{enumerate}
+%\end{defi}
+%We should make this more precise. To say whether a solution exists, we need to first specify a function space, and ask if there is a solution in that function space. The same goes for the uniqueness problem. To ask whether the solution depends continuously, we must put the solution and data in appropriate function spaces that come with a topology.
+%
+%There is a certain freedom for us to choose which function space we are looking at, and there is no god-given choice. There is some tension between our requirements --- if we want our solution to exist, it is better to work with a larger function space; if we want it to be unique, we want it to be smaller. The details of whether a problem is well-posed can depend on the choice of function space.
+
+In this course, the natural numbers start at $0$.
+\begin{notation}[Multi-index/Schwartz notation]\index{multi-index notation}\index{Schwartz notation}
+ We say an element $\alpha \in \N^n$ is a \emph{multi-index}, writing $\alpha = (\alpha_1, \ldots, \alpha_n)$. We write
+ \[
+ |\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n.
+ \]
+ Also, we have
+ \[
+ \D^\alpha f = \frac{\partial^{|\alpha|}f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.
+ \]
+ If $x = (x_1, \ldots, x_n) \in \R^n$, then
+ \[
+ x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}.
+ \]
+ We also write
+ \[
+ \alpha! = \alpha_1! \alpha_2! \cdots \alpha_n!.
+ \]
+\end{notation}
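The operations above are straightforward to mechanise; a small sketch (the function names are ours, not standard):

```python
from math import factorial, prod

def order(alpha):
    """|alpha| = alpha_1 + ... + alpha_n."""
    return sum(alpha)

def multi_factorial(alpha):
    """alpha! = alpha_1! alpha_2! ... alpha_n!."""
    return prod(factorial(a) for a in alpha)

def monomial(x, alpha):
    """x^alpha = x_1^{alpha_1} ... x_n^{alpha_n}."""
    return prod(xi ** a for xi, a in zip(x, alpha))

alpha = (2, 0, 1)
assert order(alpha) == 3
assert multi_factorial(alpha) == 2          # 2! * 0! * 1!
assert monomial((2.0, 5.0, 3.0), alpha) == 12.0  # 2^2 * 5^0 * 3^1
```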
+
+We now try to crudely classify the PDEs we have written down. Recall that our PDEs take the general form
+\[
+ F(x, u(x), \D u(x), \ldots, \D^k u(x)) = 0.
+\]
+
+\begin{defi}[Linear PDE]\index{linear PDE}\index{PDE!linear}
+ We say a PDE is \emph{linear} if $F$ is a linear function of $u$ and its derivatives. In this case, we can re-write it as
+ \[
+ \sum_{|\alpha| \leq k} a_\alpha(x) \D^\alpha u = 0.
+ \]
+\end{defi}
+
+\begin{defi}[Semi-linear PDE]\index{semi-linear PDE}\index{PDE!semi-linear}
+ We say a PDE is \emph{semi-linear} if it is of the form
+ \[
+ \sum_{|\alpha| = k} a_\alpha(x) \D^\alpha u(x) + a_0[x, u, \D u, \ldots, \D^{k - 1} u] = 0.
+ \]
+ In other words, the terms involving the highest order derivatives are linear.
+\end{defi}
+Generalizing further, we have
+\begin{defi}[Quasi-linear PDE]\index{quasi-linear PDE}\index{PDE!quasi-linear}
+ We say a PDE is \emph{quasi-linear} if it is of the form
+ \[
+ \sum_{|\alpha| = k} a_\alpha [x, u, \D u, \ldots, \D^{k - 1} u] \D^\alpha u(x) + a_0[x, u, \ldots, \D^{k - 1} u] = 0.
+ \]
+\end{defi}
+So the highest order derivative still appears linearly, but the coefficients can depend on lower-order derivatives of $u$.
+
+Finally, we have
+\begin{defi}[Fully non-linear PDE]\index{fully non-linear PDE}\index{PDE!fully non-linear}
+ A PDE is \emph{fully non-linear} if it is not quasi-linear.
+\end{defi}
+
+\begin{eg}
+ Laplace's equation $\Delta u = 0$ is linear.
+\end{eg}
+
+\begin{eg}
+ The equation $u_{xx} + u_{yy} = u_x^2$ is semi-linear.
+\end{eg}
+
+\begin{eg}
+ The equation $uu_{xx} + u_{yy} = u_x^2$ is quasi-linear.
+\end{eg}
+
+\begin{eg}
+ The equation $u_{xx} u_{yy} - u_{xy}^2 = 0$ is fully non-linear.
+\end{eg}
+
+\section{The Cauchy--Kovalevskaya theorem}
+
+\subsection{The Cauchy--Kovalevskaya theorem}
+Before we begin talking about PDEs, let's recall what we already know about ODEs. Fix some $U \subseteq \R^n$ an open subset, and assume $f: U \to \R^n$ is given. Consider the ODE
+\[
+ \dot{u}(t) = f(u(t)).
+\]
+This is an \term{autonomous ODE}\index{ODE!autonomous} because there is no explicit $t$ dependence on the right. This assumption is usually harmless, as we can just increment $n$ and use the new variable to keep track of $t$. Here $u: (a, b) \to U$ is the unknown, where $a < 0 < b$.
+
+The \term{Cauchy problem} for this equation is to find a solution to the ODE satisfying $u(0) = u_0 \in U$ for any $u_0$.
+
+The Picard--Lindel\"of theorem says we can always do so under some mild conditions.
+\begin{thm}[Picard--Lindel\"of theorem]\index{Picard--Lindelof theorem}
+ Suppose that there exists $r, K > 0$ such that $B_r(u_0) \subseteq U$, and
+ \[
+ \|f(x) - f(y)\| \leq K \|x - y\|
+ \]
+ for all $x, y \in B_r(u_0)$. Then there exists an $\varepsilon > 0$ depending on $K, r$ and a unique $C^1$ function $u: (-\varepsilon, \varepsilon) \to U$ solving the Cauchy problem.
+\end{thm}
+
+It is instructive to give a quick proof sketch of the result.
+
+\begin{proof}[Proof sketch]
+ If $u$ is a solution, then by the fundamental theorem of calculus, we have
+ \[
+ u(t) = u_0 + \int_0^t f(u(s))\;\d s.
+ \]
+ Conversely, if $u$ is a $C^0$ solution to this integral equation, then it solves the ODE. Crucially, this only requires $u$ to be $C^0$. Indeed, if $u$ is $C^0$ and satisfies the integral equation, then $u$ is automatically $C^1$. So we can work in a larger function space when searching for $u$.
+
+ Thus, we have reformulated our initial problem into an integral equation. In particular, we reformulated it in a way that assumes less about the function. In the case of PDEs, this is what is known as a \term{weak formulation}.
+
+ Returning to the proof, we have reformulated our problem as looking for a fixed point of the map
+ \[
+ B: w \mapsto u_0+ \int_0^t f(w(s))\;\d s
+ \]
+ acting on
+ \[
+ \mathcal{C} = \{w: [-\varepsilon, \varepsilon] \to \overline{B_{r/2}(u_0)} : w\text{ is continuous}\}.
+ \]
+ This is a complete metric space when we equip it with the supremum norm (in fact, it is a closed ball in a Banach space).
+
+ We then show that for $\varepsilon$ small enough, this map $B: \mathcal{C} \to \mathcal{C}$ is a contraction map. There are two parts --- to show that it actually lands in $\mathcal{C}$, and that it is a contraction. If we managed to show these, then by the contraction mapping theorem, there is a unique fixed point, and we are done.
+\end{proof}
+
+The idea of formulating our problem as a fixed point problem is a powerful technique that allows us to understand many PDEs, especially non-linear ones. This theorem tells us that a unique $C^1$ solution exists locally. It is not reasonable to expect it to exist globally, as the solution might escape $U$ in finite time. However, if $f$ is better behaved, we might expect $u$ to be more regular, and indeed this is the case. We shall not go into the details.
+
+How can we actually use the theorem in practice? Can we actually obtain a solution from this? Recall that to prove the contraction mapping theorem, we arbitrarily pick a point in $\mathcal{C}$ and keep applying $B$; by the contraction property, we must approach the fixed point. This gives us a way to construct an approximation to the solution of the ODE.
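This iteration can be carried out explicitly in simple cases. For $\dot{u} = u$, $u(0) = 1$, the map $B$ sends $w$ to $1 + \int_0^t w(s)\;\d s$, which acts exactly on polynomial coefficients, and iterating from the constant function produces the Taylor polynomials of $e^t$. A minimal sketch (polynomials represented by coefficient lists; names ours):

```python
import math

def picard_step(coeffs):
    """One Picard iteration for u' = u, u(0) = 1: w -> 1 + integral_0^t w(s) ds.
    Polynomials are represented by coefficient lists [c0, c1, ...]."""
    integrated = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integrated[0] = 1.0  # the initial condition u_0 = 1
    return integrated

w = [1.0]  # start from the constant function w(t) = u_0 = 1
for _ in range(5):
    w = picard_step(w)

# After N iterations we recover the degree-N Taylor polynomial of e^t,
# whose coefficients are 1/k!.
assert all(abs(c - 1 / math.factorial(k)) < 1e-12 for k, c in enumerate(w))
```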
+
+However, if we were a physicist, we would have done things differently. Suppose $f \in C^\infty$. We can then attempt to construct a Taylor series of the solution near the origin. First we note that for any solution $u$, we must have
+\[
+ u(0) = u_0,\quad \dot{u}(0) = f(u_0).
+\]
+Assuming $u$ is in fact a smooth solution, we can differentiate the ODE and obtain
+\[
+ \ddot{u}(t) = \frac{\d}{\d t} \dot{u}(t) = \frac{\d}{\d t} f(u(t)) = \D f(u(t)) \dot{u}(t) \equiv f_2(u(t), \dot{u}(t)).
+\]
+At the origin, we already know what $u$ and $\dot{u}$ are. We can proceed iteratively to determine
+\[
+ u^{(k)}(t) = f_k (u, \dot{u}, \ldots, u^{(k - 1)}).
+\]
+So in particular, we can in principle determine $u_k \equiv u^{(k)}(0)$. At least formally, we can write
+\[
+ u(t) = \sum_{k = 0}^\infty u_k \frac{t^k}{k!}.
+\]
+If we were physicists, we would say we are done. But being honest mathematicians, in order to claim that we have a genuine solution, we need to at least show that this converges. Under suitable circumstances, this is given by the Cauchy--Kovalevskaya theorem.
+
+\begin{thm}[Cauchy--Kovalevskaya for ODEs]\index{Cauchy--Kovalevskaya theorem!for ODEs}\index{ODE!Cauchy--Kovalevskaya theorem}
+ The series
+ \[
+ u(t) = \sum_{k = 0}^\infty u_k \frac{t^k}{k!}
+ \]
+ converges to the Picard--Lindel\"of solution of the Cauchy problem if $f$ is real analytic in a neighbourhood of $u_0$.
+\end{thm}
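For a concrete instance of this formal procedure, take $f(u) = u^2$ with $u_0 = 1$, whose solution is $u(t) = 1/(1 - t)$. Writing $u = \sum_k a_k t^k$ (so $a_k = u_k / k!$), the ODE forces the recursion $(k + 1) a_{k + 1} = \sum_{i + j = k} a_i a_j$, which determines every coefficient from $u_0$, exactly as described above. A sketch in exact arithmetic:

```python
from fractions import Fraction

def taylor_coeffs(n, u0=Fraction(1)):
    """Taylor coefficients a_k of the solution of u' = u^2, u(0) = u0,
    determined iteratively from the ODE: (k+1) a_{k+1} = sum_{i+j=k} a_i a_j."""
    a = [u0]
    for k in range(n):
        conv = sum(a[i] * a[k - i] for i in range(k + 1))
        a.append(conv / (k + 1))
    return a

a = taylor_coeffs(8)
# The exact solution is 1/(1 - t) = sum_k t^k, so every coefficient is 1,
# i.e. u^{(k)}(0) = k! -- and the series indeed converges for |t| < 1.
assert all(c == 1 for c in a)
```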
+
+Recall that being real analytic means being equal to its Taylor series:
+\begin{defi}[Real analytic]\index{real analytic}\index{analytic!real}
+ Let $U \subseteq \R^n$ be open, and suppose $f: U \to \R$. We say $f$ is \emph{real analytic} near $x_0 \in U$ if there exists $r > 0$ and constants $f_\alpha \in \R$ for each multi-index $\alpha$ such that
+ \[
+ f(x) = \sum_\alpha f_\alpha (x - x_0)^\alpha
+ \]
+ for $|x - x_0| < r$.
+\end{defi}
+
+Note that if $f$ is real analytic near $x_0$, then it is in fact $C^\infty$ in the corresponding neighbourhood. Furthermore, the constants $f_\alpha$ are given by
+\[
+ f_\alpha = \frac{1}{\alpha!} \D^\alpha f(x_0).
+\]
+In other words, $f$ equals its Taylor expansion. Of course, by translation, we can usually assume $x_0 = 0$.
+
+\begin{eg}
+ If $r > 0$, set
+ \[
+ f(x) = \frac{r}{r - (x_1 + x_2 + \cdots + x_n)}
+ \]
+ for $|x| < \frac{r}{\sqrt{n}}$. Then this is real analytic, since we have
+ \[
+ f(x) = \frac{1}{1 - (x_1 + \cdots + x_n)/r} = \sum_{k = 0}^\infty \left(\frac{x_1 + \cdots + x_n}{r}\right)^k.
+ \]
+ We can then expand out each term to see that this is given by a power series. Explicitly, it is given by
+ \[
+ f(x) = \sum_\alpha \frac{1}{r^{|\alpha|}}\binom{|\alpha|}{\alpha} x^\alpha,
+ \]
+ where
+ \[
+ \binom{|\alpha|}{\alpha} = \frac{|\alpha|!}{\alpha!}.
+ \]
+ One sees that this series is absolutely convergent for $|x| < \frac{r}{\sqrt{n}}$.
+\end{eg}
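+
+A minimal numerical sketch of the expansion above (illustrative only), in two variables: the partial sums of $\sum_\alpha r^{-|\alpha|} \binom{|\alpha|}{\alpha} x^\alpha$ should approach $f(x)$ when $|x|$ is small:
+```python
+from math import factorial, isclose
+from itertools import product
+
+def f(x, r):
+    # f(x) = r / (r - (x_1 + ... + x_n))
+    return r / (r - sum(x))
+
+def partial_sum(x, r, max_order):
+    # sum over multi-indices alpha with |alpha| <= max_order of
+    # r^{-|alpha|} * (|alpha|! / alpha!) * x^alpha
+    total = 0.0
+    for alpha in product(range(max_order + 1), repeat=len(x)):
+        k = sum(alpha)
+        if k > max_order:
+            continue
+        multinomial = factorial(k)
+        for a in alpha:
+            multinomial //= factorial(a)     # exact integer division
+        term = float(multinomial)
+        for xi, a in zip(x, alpha):
+            term *= xi ** a
+        total += term / r ** k
+    return total
+```
+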
+Recall that in single-variable analysis, essentially the only way we have to show that a series converges is by comparison to the geometric series. Here with multiple variables, our only way to show that a power series converges is by comparing it to this $f$.
+
+\begin{defi}[Majorant]\index{majorant}\index{majorize}
+ Let
+ \[
+ f = \sum_\alpha f_\alpha x^\alpha,\quad g = \sum_\alpha g_\alpha x^\alpha
+ \]
+ be formal power series. We say $g$ majorizes $f$ (or $g$ is a majorant of $f$), written $g \gg f$, if $g_\alpha \geq |f_\alpha|$ for all multi-indices $\alpha$.
+
+ If $f$ and $g$ are vector-valued, then this means $g^i \gg f^i$ for all indices $i$.
+\end{defi}
+
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $g \gg f$ and $g$ converges for $|x| < r$, then $f$ converges for $|x| < r$.
+ \item If $f(x) = \sum_\alpha f_\alpha x^\alpha$ converges for $|x| < r$ and $0 < s\sqrt{n} < r$, then $f$ has a majorant which converges on $|x| < \frac{s}{\sqrt{n}}$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Given $x$, define $\tilde{x} = (|x_1|, |x_2|, \ldots, |x_n|)$. We then note that
+ \[
+ \sum_\alpha |f_\alpha x^\alpha| = \sum_\alpha |f_\alpha| \tilde{x}^\alpha \leq \sum_\alpha g_\alpha \tilde{x}^\alpha = g(\tilde{x}).
+ \]
+ Since $|\tilde{x}| = |x| < r$, we know $g$ converges at $\tilde{x}$.
+ \item Let $0 < s\sqrt{n} < r$ and set $y = s(1, 1, \ldots, 1)$. Then we have
+ \[
+ |y| = s \sqrt{n} < r.
+ \]
+ So by assumption, we know
+ \[
+ \sum_\alpha f_\alpha y^\alpha
+ \]
+ converges. A convergent series has bounded terms, so there exists $C$ such that
+ \[
+ |f_\alpha y^\alpha| \leq C
+ \]
+ for all $\alpha$. But $y^\alpha = s^{|\alpha|}$. So we know
+ \[
+ |f_\alpha| \leq \frac{C}{s^{|\alpha|}} \leq \frac{C}{s^{|\alpha|}} \frac{|\alpha|!}{\alpha!}.
+ \]
+ But then if we set
+ \[
+ g(x) = \frac{Cs}{s - (x_1 + \cdots + x_n)} = C \sum_\alpha \frac{|\alpha|!}{s^{|\alpha|}\alpha!} x^\alpha,
+ \]
+ we are done, since this converges for $|x| < \frac{s}{\sqrt{n}}$.\qedhere
+ \end{enumerate}
+\end{proof}
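+
+The proof's construction can be sketched in one variable ($n = 1$, so $|\alpha|!/\alpha! = 1$), say for the hypothetical choice $f(x) = 1/(1 + x^2)$, which converges for $|x| < 1$:
+```python
+# One-variable instance of the proof: if sum f_k x^k converges for |x| < r,
+# pick 0 < s < r; then C = sup_k |f_k| s^k is finite, and
+# g(x) = C s / (s - x) = sum_k (C / s^k) x^k majorizes f.
+# Here f(x) = 1/(1 + x^2) = sum_j (-1)^j x^{2j}, convergent for |x| < 1.
+f_coeffs = [float((-1) ** (k // 2)) if k % 2 == 0 else 0.0 for k in range(50)]
+
+s = 0.5                                       # any 0 < s < r = 1 works
+C = max(abs(fk) * s ** k for k, fk in enumerate(f_coeffs))
+g_coeffs = [C / s ** k for k in range(len(f_coeffs))]
+```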
+
+With this lemma in mind, we can now prove the Cauchy--Kovalevskaya theorem for first-order PDEs. This concerns a class of problems similar to the Cauchy problem for ODEs. We first set up our notation.
+
+We shall consider functions $\mathbf{u}: \R^n \to \R^m$. Writing $\mathbf{x} = (x^1, \ldots, x^n) \in \R^n$, we will consider the last variable $x^n$ as being the ``time variable'', and the others as being space. However, for notational convenience, we will not write it as $t$. We will adopt the shorthand $x' = (x^1, \ldots, x^{n - 1})$, so that $\mathbf{x} = (x', x^n)$.
+
+Suppose we are given two real analytic functions
+\begin{align*}
+ B: \R^m \times \R^{n - 1} &\to \Mat_{m \times m}(\R)\\
+ \mathbf{c}: \R^m \times \R^{n - 1} &\to \R^m.
+\end{align*}
+We seek a solution to the PDE
+\[
+ \mathbf{u}_{x^n} = \sum_{j = 1}^{n - 1}B(\mathbf{u}, x') \mathbf{u}_{x_j} + \mathbf{c}(\mathbf{u}, x')
+\]
+subject to $\mathbf{u} = 0$ when $x^n = 0$. We shall not require a solution on all of $\R^n$, but only on an open neighbourhood of the origin. Consequently, we will allow for $B$ and $\mathbf{c}$ to not be everywhere defined, but merely convergent on some neighbourhood of the origin.
+
+Note that we assumed $B$ and $\mathbf{c}$ do not depend on $x^n$, but this is not a restriction, since we can always introduce a new variable $u^{m + 1} = x^n$, and enlarge the target space.
+
+\begin{thm}[Cauchy--Kovalevskaya theorem]\index{Cauchy--Kovalevskaya theorem!for PDEs}\index{PDE!Cauchy--Kovalevskaya theorem}
+ Given the above assumptions, there exists a real analytic function $\mathbf{u} = \sum_\alpha \mathbf{u}_\alpha x^\alpha$ solving the PDE in a neighbourhood of the origin. Moreover, it is unique among real analytic functions.
+\end{thm}
+
+The uniqueness part of the proof is not difficult. If we write out $\mathbf{u}$, $B$ and $\mathbf{c}$ in power series and plug them into the PDE, we can then simply collect terms and come up with an expression for what $\mathbf{u}$ must be. This is the content of the following lemma:
+
+\begin{lemma}
+ For $k = 1, \ldots, m$ and $\alpha$ a multi-index in $\N^n$, there exists a polynomial $q_\alpha^k$ in the power series coefficients of $B$ and $\mathbf{c}$ such that any analytic solution to the PDE must be given by
+ \[
+ \mathbf{u} = \sum_{\alpha} \mathbf{q}_\alpha(B, \mathbf{c}) x^\alpha,
+ \]
+ where $\mathbf{q}_\alpha$ is the vector with entries $q_\alpha^k$.
+
+ Moreover, all coefficients of $q_\alpha$ are non-negative.
+\end{lemma}
+Note that despite our notation, $\mathbf{q}$ is not a function of $B$ and $\mathbf{c}$ (which are themselves functions of $\mathbf{u}$ and $x$). It is a function of the coefficients in the power series expansion of $B$ and $\mathbf{c}$, which are some fixed constants.
+
+This lemma proves uniqueness. To prove existence, we must show that this converges in a neighbourhood of the origin, and for this purpose, the fact that the coefficients of $q_\alpha$ are non-negative is crucial. After we have established this, we will use the comparison test to reduce the theorem to the case of a single, particular PDE, which we can solve by hand.
+
+\begin{proof}
+% By assumption, we can expand $B$ and $\mathbf{c}$ as power series:
+% \begin{align*}
+% B_j(\mathbf{z}, x') &= \sum_{\gamma, \delta} B_{j, \gamma, \delta} z^\gamma x^\delta\\
+% \mathbf{c}(\mathbf{z}, x') &= \sum_{\gamma, \delta} \mathbf{c}_{\gamma, \delta} z^\gamma x^\delta
+% \end{align*}
+% and we can write the individual components as
+% \[
+% B_{j, \gamma, \delta} = (b^{k\ell}_{j, \gamma, \delta}),\quad \mathbf{c}_{\gamma, \delta} = (c^1_{\gamma, \delta}, \ldots, c^m_{\gamma, \delta}),
+% \]
+% for $j = 1, \ldots, n - 1$ and $k, \ell = 1, \ldots, m$.
+%
+% Thus, in components, the PDE reads
+% \[
+% u^k_{x^n} = \sum_{j = 1}^{n - 1} \sum_{i = 1}^m b_j^{k\ell} (\mathbf{u}, x') u_{x_j}^\ell + c^k(\mathbf{u}, x')
+% \]
+% with the initial condition $u^k(x) = 0$.
+ We construct the polynomials $q_\alpha^k$ by induction on $\alpha_n$. If $\alpha_n = 0$, then since $\mathbf{u} = 0$ on $\{x_n = 0\}$, we conclude that we must have
+ \[
+ \mathbf{u}_\alpha = \frac{\D^\alpha \mathbf{u}(0)}{\alpha!} = 0.
+ \]
+ For $\alpha_n = 1$, we note that whenever $x^n = 0$, we have $\mathbf{u}_{x_j} = 0$ for $j = 1, \ldots, n - 1$. So the PDE reads
+ \[
+ \mathbf{u}_{x_n}(x', 0) = \mathbf{c}(0, x').
+ \]
+ Differentiating this relation in directions tangent to $x_n = 0$, we find that if $\alpha = (\alpha', 1)$, then
+ \[
+ \D^\alpha \mathbf{u}(0) = \D^{\alpha'} \mathbf{c}(0, 0).
+ \]
+ So $q_\alpha^k$ is a polynomial in the power series coefficients of $\mathbf{c}$, and has non-negative coefficients.
+
+ Now suppose $\alpha_n = 2$, so that $\alpha = (\alpha', 2)$. Then
+ \begin{align*}
+ \D^\alpha \mathbf{u} &= \D^{\alpha'} (\mathbf{u}_{x^n})_{x^n}\\
+ &= \D^{\alpha'} \left(\sum_j B_j \mathbf{u}_{x^j} + \mathbf{c}\right)_{x^n}\\
+ &= \D^{\alpha'} \left(\sum_j \left(B_j \mathbf{u}_{x^j, x^n} + \sum_p \left(B_{u_p} \mathbf{u}_{x^j}\right) u^p_{x^n}\right) + \sum_p \mathbf{c}_{u_p} u^p_{x^n}\right)
+ \end{align*}
+ We don't really care what this looks like. The point is that when we evaluate at $0$, and expand all the terms out, we get a polynomial in the derivatives of $B_j$ and $\mathbf{c}$, and also $\D^\beta \mathbf{u}$ with $\beta_n < 2$. The derivatives of $B_j$ and $\mathbf{c}$ are just the coefficients of the power series expansion of $B_j$ and $\mathbf{c}$, and by the induction hypothesis, we can also express the $\D^\beta \mathbf{u}$ in terms of these power series coefficients. Thus, we can use this to construct $\mathbf{q}_\alpha$. By inspecting what the formula looks like, we see that all coefficients in $\mathbf{q}_\alpha$ are non-negative.
+
+ We see that we can continue doing the same computations to obtain all $\mathbf{q}_\alpha$.
+%
+% \begin{align*}
+% \D^\alpha u^k &= \D^{\alpha'}(u^k_{x_n})_{x_n}\\
+% &= \D^{\alpha'}\left(\sum_{j = 1}^{n - 1} \sum_{\ell = 1}^m b_j^{k\ell} u_{x_j}^\ell + c^k\right)_{x_n}\\
+% &= \D^{\alpha'}\left(\sum_{j = 1}^{n - 1} \sum_{\ell = 1}^m \left(b^{k\ell} u_{x_i x_n}^\ell + \sum_{p = 1}^m b^{k\ell}_{j, z_p} u^p_{x_n} u_{x_j}^{\ell}\right) + \sum_{p = 1}^m c_{z_p}^k u_{x_n}^p\right).
+% \end{align*}
+% Thus, we find that
+% \[
+% \D^{\alpha'} \left(\sum_{j = 1}^{n - 1} \sum_{\ell = 1}^m b_j^{k\ell} u_{x, x_n}^{\ell} + \sum_{p = 1}^m c_{z_p}^k u_{x_n}^p\right),
+% \]
+% evaluated at $0$.
+%
+% We still have to expand out the right hand side. When we do so, it will be a polynomial with non-negative integer coefficients, involving derivatives of $B_j, \mathbf{c}$, and the derivatives $\D^\beta \mathbf{u}$, where $\beta_n \leq 1$. But we already know how to express $\D^\beta \mathbf{u}(0)$ in terms of the coefficients of $B$ and $\mathbf{c}$
+
+% In principle, we can keep on going, making the same (unpleasant) computations for each $\alpha$ and each $k = \{1, \ldots, m\}$. We find that $\D^\alpha u^k(0)$ is a polynomial in arbitrary derivatives of $B$ and $\mathbf{c}$, and derivatives $\D^\beta u^k$ with $\beta_n < \alpha_n$, and further the coefficients are non-neative integers.
+
+% Plugging it into the series expansion of $u$, and we can decide the same statement holds for $u^k_\alpha$ as well.
+\end{proof}
+
+%\begin{eg}
+% Consider the equations
+% \begin{align*}
+% u_y &= v_x - f\\
+% v_y &= - u_x,
+% \end{align*}
+% subject to the condition $u = v = 0$ on $y = 0$. This implies $u_x = v_x = 0$ on $y = 0$. In general,
+% \[
+% (\partial_x)^n u(x, 0) = (\partial_x)^n v(x, 0) = 0.
+% \]
+% In the other direction, we have
+% \[
+% u_y(x, 0) = - f(x, 0),\quad v_y(x, 0) = 0.
+% \]
+% So we find that
+% \begin{align*}
+% (\partial_x)^n \partial_y u(x, 0) &= -(\partial_x)^n y(x, 0)\\
+% (\partial_x)^n \partial_y v(x, 0) &= 0
+% \end{align*}
+% Going further, we find that
+% \begin{align*}
+% u_{yy} &= v_{xy} - f_y\\
+% v_{yy} &= u_{xy}
+% \end{align*}
+% and we can keep going on.
+%\end{eg}
+
+An immediate consequence of the non-negativity is that
+\begin{lemma}
+ If $\tilde{B}_j \gg B_j$ and $\tilde{\mathbf{c}} \gg \mathbf{c}$, then
+ \[
+ q_\alpha^k(\tilde{B}, \tilde{\mathbf{c}}) \geq |q_\alpha^k(B, \mathbf{c})|
+ \]
+ for all $\alpha$ and $k$. In particular, $\tilde{\mathbf{u}} \gg \mathbf{u}$.
+\end{lemma}
+
+%\begin{proof}[Proof continued]
+% What we have found is that if an analytic solution
+% \[
+% \mathbf{u} = \sum_{\alpha} \mathbf{u}_\alpha x^\alpha
+% \]
+% exists, then it must be given by
+% \[
+% u_\alpha^k = q_\alpha^k(\ldots, B_{j, \gamma, \delta}, \ldots, \mathbf{c}_{\gamma, \delta}, \ldots, \mathbf{u}_\beta, \ldots),
+% \]
+% where $q_\alpha^k$ is a \emph{universal} polynomial, i.e.\ it doesn't depend on $B, cb$ except through its arguments. Moreovver, $q_\alpha^k$ has \emph{non-negative} coefficients further, $\beta_n \leq \alpha_n - 1$ for any mlti-index on the RHS.
+%
+% It remains to show that the series
+% \[
+% \mathbf{u} = \sum_\alpha \mathbf{u}_\alpha x^\alpha
+% \]
+% for the above choice of $\mathbf{u}_\alpha$ converges near $x = 0$. Let us first suppose that
+% \[
+% B_j^* \gg B_j,\quad \mathbf{c}^* \gg \mathbf{c},
+% \]
+% where
+% \begin{align*}
+% B_j^*(z, x) &= \sum_{\gamma, \delta} B^*_{j, \gamma, \delta} z^\gamma x^\delta,\\
+% c^*(z, x) &= \sum_{\gamma, \delta} c^*_{\gamma, \delta} z^\gamma x^\delta.
+% \end{align*}
+% We assume all these series converge for $|z| + |x'| < s$. This is possible by a previous lemma.
+%
+% By definition, we have
+% \begin{align*}
+% |B_{j, \gamma, \delta}^{k\ell}| &\leq (B_{j, \gamma, \delta}^*)^{k\ell}\\
+% 0 \leq |c_{\gamma, \delta}^k| \leq (C_{\gamma, \delta}^*)^k.
+% \end{align*}
+% We consider the modified problem
+% \[
+% \mathbf{u}^*_{x_n} = \sum_{j = 1}^{n - 1} B_j^*(\mathbf{u}^*, x') \mathbf{u}^*_{x_j} + \mathbf{c}^*(\mathbf{u}^*, x').
+% \]
+% on $|x'|^2 + x_n^2 < r$, and $\mathbf{u}^* = 0$ on $x_n = 0$. We might need to reduce $r$ so that $B_j^*$ and $\mathbf{c}^*$ converges, but that's okay.
+%
+% Again, we seek a real analytic solution
+% \[
+% \mathbf{u}^* = \sum_\alpha \mathbf{u}_\alpha^* x^\alpha.
+% \]
+% We claim that $\mathbf{u}^* \gg \mathbf{u}$. We will then show that $\mathbf{u}^*$ converges, and hence we are done.
+%
+% In other words, we want to show that
+% \[
+% 0 \leq |u_\alpha^k| \leq (u_\alpha^*)^k. \tag{$\dagger$}
+% \]
+% We prove this by induction on $\alpha_n$. For $\alpha_n = 0$, we note htat
+% \[
+% u_\alpha^k = (u_\alpha^*)^k = 0.
+% \]
+% So this is good.
+%
+% For the induction step, assume that $(\dagger)$ holds for $\alpha_n \leq a - 1$, and suppose $\alpha_n = a$. We then have
+% \begin{align*}
+% |u_\alpha^*| &= |q_\alpha^k(\ldots, B_{j, \gamma, \delta}^{k\ell}, \ldots, \mathbf{c}_{\gamma, \delta}^k, \ldots, \mathbf{u}_\beta^k)\\
+% &\leq q^k_\alpha (\ldots, |B_{j, \gamma, \delta}^{k\ell}|, \ldots, |\mathbf{c}_{\gamma, \delta}^k|, \ldots, |\mathbf{u}_\beta^k|)\\
+% &\leq q^k_\alpha (\ldots, (B_{j, \gamma, \delta}^*)^{k\ell}, \ldots, (\mathbf{c}_{\gamma, \delta}^*)^k, \ldots, (\mathbf{u}_\beta^*)^k)\\
+% &= (u^*_\alpha)^k,
+% \end{align*}
+% where we use the fact that $q_\alpha$ only has non-negative coefficients.
+%
+% This gives $\mathbf{u}^* \gg \mathbf{u}$.
+%
+% To show that $\mathbf{u}^*$ converges, we will make a particular choice for $B^*$ and $\mathbf{c}^*$, and solve the equation explicitly. Recall that when we proved the existence of majorizer, we actually wrote down an explicit choice of majorizors. They are given by
+%\end{proof}
+
+So given any $B$ and $\mathbf{c}$, if we can find some $\tilde{B}$ and $\tilde{\mathbf{c}}$ that majorizes $B$ and $\mathbf{c}$ respectively, and show that the corresponding series converges for $\tilde{B}$ and $\tilde{\mathbf{c}}$, then we are done.
+
+But we previously saw that every power series is majorized by
+\[
+ \frac{Cr}{r - (x^1 + \cdots + x^n)}
+\]
+for $C$ sufficiently large and $r$ sufficiently small. So we have reduced the problem to the following case:
+\begin{lemma}
+ For any $C$ and $r$, define
+ \[
+ h(z, x') = \frac{Cr}{r - (x_1 + \cdots + x_{n - 1}) - (z_1 + \cdots + z_m)}
+ \]
+ If $B_j$ and $\mathbf{c}$ are given by
+ \[
+ B_j(z, x') = h(z, x') \begin{pmatrix}
+ 1 & \cdots & 1\\
+ \vdots & \ddots & \vdots\\
+ 1 & \cdots & 1
+ \end{pmatrix},\quad \mathbf{c}(z, x') = h(z, x')
+ \begin{pmatrix}
+ 1 \\ \vdots \\ 1
+ \end{pmatrix},
+ \]
+ then the power series
+ \[
+ \mathbf{u} = \sum_{\alpha} \mathbf{q}_\alpha(B, \mathbf{c}) x^\alpha
+ \]
+ converges in a neighbourhood of the origin.
+\end{lemma}
+
+We'll provide a rather cheap proof, by just writing down a solution of the corresponding PDE. The solution itself can be found via the method of characteristics, which we will learn about soon. However, the proof itself only requires the existence of the solution, not how we got it.
+\begin{proof}
+ We define
+ \[
+ v(x) = \frac{1}{mn} \left(r - (x^1 + \cdots + x^{n - 1}) - \sqrt{(r - (x^1 + \cdots + x^{n - 1}))^2 - 2mn Cr x^n}\right),
+ \]
+ which is real analytic around the origin, and vanishes when $x^n = 0$. We then observe that
+ \[
+ \mathbf{u}(x) = v(x)
+ \begin{pmatrix}
+ 1\\\vdots\\1
+ \end{pmatrix}
+ \]
+ gives a solution to the corresponding PDE, and is real analytic around the origin. Hence it must be given by that power series, and in particular, the power series must converge.
+\end{proof}
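+
+As an informal sketch (not part of the proof), one can check numerically that $v$ does solve the PDE in the smallest case $n = 2$, $m = 1$, where the system reduces to the scalar equation $u_{x^2} = h(u, x^1)(u_{x^1} + 1)$:
+```python
+from math import sqrt
+
+# Smallest case n = 2, m = 1: the PDE is u_{x2} = h(u, x1) * (u_{x1} + 1)
+# with h(z, x1) = C*r / (r - x1 - z), and the claimed solution specializes to
+# v(x1, x2) = (1/2) * (r - x1 - sqrt((r - x1)^2 - 4*C*r*x2))    (here mn = 2).
+C, r = 1.0, 1.0
+
+def v(x1, x2):
+    return 0.5 * (r - x1 - sqrt((r - x1) ** 2 - 4.0 * C * r * x2))
+
+def h(z, x1):
+    return C * r / (r - x1 - z)
+
+def pde_residual(x1, x2, eps=1e-6):
+    # central finite differences for the two partial derivatives of v
+    v_x1 = (v(x1 + eps, x2) - v(x1 - eps, x2)) / (2 * eps)
+    v_x2 = (v(x1, x2 + eps) - v(x1, x2 - eps)) / (2 * eps)
+    return v_x2 - h(v(x1, x2), x1) * (v_x1 + 1.0)
+```
+The residual vanishes (up to finite-difference error) near the origin, and $v$ vanishes on $\{x^2 = 0\}$ as required.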
+
+% which majorize $B_j$ and $\mathbf{c}$ provided $C$ is large enough and converge whenever $|x'| + |z| < s$ for some small $s$.
+%
+% With these choices, the modified equation becomes
+% \[
+% (u_{x_n}^*)^k = \frac{C r}{ r - (x_1 + \cdots + x_n) - ((u^*)^1 + \cdots + (u^*)^n)} \left(\sum_{j = 1}^{k - 1} \sum_{\ell = 1}^m (u_{x_j}^*)^\ell + 1\right).
+% \]
+% It turns out this thing has an explicit solution
+% where
+% which we can check is real analytic near the origin.
+
+\subsection{Reduction to first-order systems}
+
+In nature, very few equations come in the form required by the Cauchy--Kovalevskaya theorem, but it turns out a lot of PDEs can be cast into this form after some work. We shall demonstrate this via an example.
+
+\begin{eg}
+ Consider the problem
+ \begin{align*}
+ u_{tt} &= uu_{xy} - u_{xx} + u_t\\
+ u|_{t = 0} &= u_0\\
+ u_t|_{t = 0} &= u_1,
+ \end{align*}
+ where $u_0, u_1$ are some real analytic functions near the origin. We define
+ \[
+ f = u_0 + t u_1.
+ \]
+ This is then real analytic near $0$, and $f|_{t = 0} = u_0$ and $f_t|_{t = 0} = u_1$. Set
+ \[
+ w = u - f.
+ \]
+ Then $w$ satisfies
+ \[
+ w_{tt} = ww_{xy} - w_{xx} + w_t + f w_{xy} + f_{xy}w + F,
+ \]
+ where
+ \[
+ F = ff_{xy} - f_{xx} + f_t,
+ \]
+ and
+ \[
+ w|_{t = 0} = w_t|_{t = 0} = 0.
+ \]
+ We let $(x, y, t) = (x^1, x^2, x^3)$ and set $\mathbf{u} = (w, w_x, w_y, w_t)$. Then our PDE becomes
+ \begin{align*}
+ u^1_t &= w_t = u^4\\
+ u^2_t &= w_{xt} = u^4_x \\
+ u^3_t &= w_{yt} = u^4_y\\
+ u^4_t &= w_{tt} = u^1 u^2_{x_2} - u^2_{x_1} + u^4 + f u_{x_2}^2 + f_{xy}u^1 + F,
+ \end{align*}
+ and the initial condition is $\mathbf{u}(x^1, x^2, 0) = 0$. This is not quite autonomous, but we can solve that problem simply by introducing a further new variable.
+\end{eg}
+
+Let's try to understand this in more generality. In certain cases, it is not possible to write the equation in Cauchy--Kovalevskaya form. For example, if the equation has no local solutions, then it certainly cannot be written in that form, or else Cauchy--Kovalevskaya would give us a solution! It is thus helpful to understand when this is possible.
+
+Note that in the formulation of Cauchy--Kovalevskaya, the derivative $\mathbf{u}_{x^n}$ is assumed to depend only on $x'$, and not $x^n$. If we want $\mathbf{u}_{x^n}$ to depend on $x^n$ as well, we can introduce a new variable $u^{m + 1}$ and set $(u^{m + 1})_{x^n} = 1$. So from now on, we shall ignore the fact that our PDE only has $x'$ on the right-hand side.
+
+Let's now consider the scalar quasi-linear problem
+\[
+ \sum_{|\alpha| = k} a_\alpha(\D^{k - 1} u, \ldots, \D u, u, x) \D^\alpha u + a_0(\D^{k - 1}u, \ldots, u, x) = 0,
+\]
+where $u: B_r(0) \subseteq \R^n \to \R$, with initial data
+\[
+ u = \frac{\partial u}{\partial x_n} = \cdots = \frac{\partial^{k - 1} u}{\partial x_n^{k - 1}} = 0.
+\]
+whenever $|x'| < r$, $x_n = 0$.
+
+We introduce a new vector
+\[
+ \mathbf{u} = \left(u, \frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_n}, \frac{\partial^2 u}{\partial x_1^2}, \ldots, \frac{\partial^{k - 1} u}{\partial x_n^{k - 1}}\right) = (u^1, \ldots, u^m).
+\]
+Here $\mathbf{u}$ contains all partial derivatives of $u$ up to order $k - 1$. For $j \in \{1, \ldots, m - 1\}$, we can compute $\frac{\partial u^j}{\partial x_n}$ in terms of $u^\ell$ or $\frac{\partial u^\ell}{\partial x^p}$ for some $\ell \in \{1, \ldots, m\}$ and $p < n$.
+
+To express $\frac{\partial u^m}{\partial x_n}$ in terms of the other variables, we need to actually use the differential equation. To do so, we need to make an assumption about our equation. We suppose that $a_{(0, \ldots, 0, k)}(0, 0)$ is non-zero. We can then rewrite the equation as
+\[
+ \frac{\partial^k u}{\partial x_n^k} = \frac{-1}{a_{(0, \ldots, 0, k)}(\D^{k - 1}u, \ldots, u, x)} \left(\sum_{|\alpha| = k, \alpha_n < k} a_\alpha \D^\alpha u + a_0\right),
+\]
+where at least near $x = 0$, the denominator can't vanish. The RHS can then be written in terms of $\frac{\partial u^\ell}{\partial x^p}$ for $p < n$ and $\mathbf{u}$.
+
+So we have cast our original equation into the form we previously discussed, \emph{provided} that the $a_\alpha$'s and $a_0$ are real analytic about the origin, and that $a_{(0, \ldots, 0, k)}(0, \ldots, 0) \not= 0$. Under these assumptions, we can solve the equation by Cauchy--Kovalevskaya.
+
+It is convenient to make the following definition: if $a_{(0, \ldots, 0, k)} (0, \ldots, 0) \not= 0$, we say $\{x_n = 0\}$ is \emph{non-characteristic}. Otherwise, we say it is \emph{characteristic}.
+
+Oftentimes, we want to specify our initial data on some more exotic surface. Unfortunately, it cannot be too exotic. The surface has to be real analytic in some sense for our theory to have any chance of working.
+
+\begin{defi}[Real analytic hypersurface]\index{real analytic hypersurface}\index{hypersurface!real analytic}
+ We say that $\Sigma \subseteq \R^n$ is a \emph{real analytic hypersurface} near $x \in \Sigma$ if there exists $\varepsilon > 0$ and a real analytic map $\Phi: B_\varepsilon(x) \to U \subseteq \R^n$, where $U = \Phi(B_\varepsilon(x))$, such that
+ \begin{itemize}
+ \item $\Phi$ is bijective and $\Phi^{-1}: U \to B_\varepsilon(x)$ is real analytic.
+ \item $\Phi(\Sigma \cap B_\varepsilon(x)) = \{x_n = 0 \} \cap U$ and $\Phi(x) = 0$.
+ \end{itemize}
+\end{defi}
+We think of this $\Phi$ as ``straightening out the boundary''.
+
+Let $\gamma$ be the unit normal to $\Sigma$, and suppose $u$ solves
+\[
+ \sum_{|\alpha| = k} a_\alpha (\D^{k - 1} u, \ldots, u, x) \D^\alpha u + a_0(\D^{k - 1} u, \ldots, u, x) = 0
+\]
+subject to
+\[
+ u = \gamma^i \partial_i u = \cdots = (\gamma^i \partial_i)^{k - 1}u = 0
+\]
+on $\Sigma$.
+
+To reduce this to the previous case, we define $w(y) = u(\Phi^{-1}(y))$, so that
+\[
+ u(x) = w(\Phi(x)).
+\]
+Then by the chain rule, we have
+\[
+ \frac{\partial u}{\partial x_i} = \sum_{j = 1}^n \frac{\partial w}{\partial y_j} \frac{\partial \Phi^j}{\partial x_i}.
+\]
+So plugging this into the equation, we see $w$ satisfies an equation of the form
+\[
+ \sum b_\alpha \D^\alpha w + b_0 = 0,
+\]
+as well as boundary conditions of
+\[
+ w = \frac{\partial w}{\partial y_n} = \cdots = \frac{\partial^{k - 1}w }{\partial y_n^{k - 1}} = 0.
+\]
+So we have transformed this to a quasi-linear equation with boundary conditions on $y_n = 0$, which we can tackle with Cauchy--Kovalevskaya, provided the surface $y_n = 0$ is non-characteristic. Can we relate this back to the $a$'s?
+
+We can compute $b_{(0, \ldots, 0, k)}$ directly. Note that if $|\alpha| = k$, then
+\[
+ \D^\alpha u = \frac{\partial^k w}{\partial y_n^k} (\D \Phi^n)^\alpha + \text{terms not involving $\frac{\partial^k w}{\partial y_n^k}$}.
+\]
+So the coefficient of $\frac{\partial^k w}{\partial y_n^k}$ is
+\[
+ b_{(0, \ldots, 0, k)} = \sum_{|\alpha| = k} a_\alpha( \D \Phi^n)^\alpha.
+\]
+\begin{defi}[(Non-)characteristic surface]\index{non-characteristic surface}\index{characteristic surface}
+ A surface $\Sigma$ is \emph{non-characteristic} at $x \in \Sigma$ provided
+ \[
+ \sum_{|\alpha| = k} a_\alpha (\D \Phi^n)^\alpha \not= 0.
+ \]
+ Equivalently, if
+ \[
+ \sum_{|\alpha| = k} a_\alpha \nu^\alpha \not= 0,
+ \]
+ where $\nu$ is the normal to the surface. We say a surface is \emph{characteristic} if it is not non-characteristic.
+\end{defi}
+
+We focus on the case where our PDE is second-order. Consider an operator of the form
+\[
+ Lu = \sum_{i, j = 1}^n a_{ij} \frac{\partial^2 u}{\partial x_i \partial x_j},
+\]
+where $a_{ij} \in \R$. We may wlog assume $a_{ij} = a_{ji}$. For example, the wave equation and Laplace's equation are given by operators of this form. Consider the equation
+\begin{align*}
+ Lu &= f\\
+ u = \nu^i \frac{\partial u}{\partial x^i} &= 0 \text{ on }\Pi_\nu = \{x \cdot \nu = 0\}.
+\end{align*}
+Then $\Pi_\nu$ is non-characteristic if
+\[
+ \sum_{i, j = 1}^n a_{ij} \nu^i \nu^j \not= 0.
+\]
+Since $(a_{ij})$ is symmetric, it is diagonalizable, and we see that if all eigenvalues are positive, then $\sum a_{ij} \nu^i \nu^j$ is non-zero for $\nu \not= 0$, and so the problem has \emph{no} characteristic surfaces. In this case, we say the operator is \emph{elliptic}\index{elliptic operator}. If $(a_{ij})$ has one negative eigenvalue and the rest positive, then we say $L$ is \emph{hyperbolic}\index{hyperbolic operator}.
+
+\begin{eg}
+ If $L$ is the Laplacian
+ \[
+ L = \Delta = \sum_{i= 1}^n \frac{\partial^2}{\partial x_i^2},
+ \]
+ then $L$ is elliptic.
+
+ If $L$ is the wave operator
+ \[
+ L = - \partial_t^2 + \Delta,
+ \]
+ then $L$ is hyperbolic.
+\end{eg}
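+
+A small numerical illustration of the classification (with hand-picked coefficient matrices in two variables): the Laplacian's quadratic form is positive on every direction, while the wave operator admits null directions $\nu$ with $\sum a_{ij}\nu^i \nu^j = 0$:
+```python
+from math import cos, sin, pi, sqrt
+
+def quad_form(a, nu):
+    # sum_{i,j} a_ij nu^i nu^j
+    n = len(nu)
+    return sum(a[i][j] * nu[i] * nu[j] for i in range(n) for j in range(n))
+
+laplacian = [[1.0, 0.0], [0.0, 1.0]]     # a_ij for d^2/dx^2 + d^2/dy^2
+wave = [[-1.0, 0.0], [0.0, 1.0]]         # a_ij for -d^2/dt^2 + d^2/dx^2
+
+# sample unit directions nu on the circle
+unit_dirs = [(cos(2 * pi * k / 360), sin(2 * pi * k / 360)) for k in range(360)]
+null_dir = (1 / sqrt(2), 1 / sqrt(2))    # a characteristic direction of the wave operator
+```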
+
+If we consider the problem
+\[
+ Lu = 0,
+\]
+and forget the Cauchy data, we can look for solutions of the form $e^{ik\cdot x}$, as a good physicist would do. We can plug this into our operator to compute
+\[
+ L(e^{ik\cdot x}) = - \sum_{i, j = 1}^n a_{ij} k^i k^j e^{ik\cdot x}.
+\]
+So if $L$ is elliptic, the only solution of this form is $k = 0$. If $L$ is hyperbolic, we can have non-trivial plane wave solutions provided $k \propto \nu$ for some $\nu$ with
+\[
+ \sum_{i, j = 1}^n a_{ij} \nu^i \nu^j = 0.
+\]
+So set $u_\lambda(x) = e^{i\lambda \nu \cdot x}$ for such a $\nu$ (with $|\nu| = 1$, wlog). By taking $\lambda$ very large, we can arrange this solution to have very large derivative in the $\nu$ direction. Vaguely, this says the characteristic directions are the directions where singularities can propagate. By contrast, we will see that this is not the case for elliptic operators, and this is known as \emph{elliptic regularity}. In fact, we will show that if $L$ is elliptic and $u$ satisfies $Lu = 0$, then $u \in C^\infty$.
+
+While Cauchy--Kovalevskaya is sometimes useful, it has a few issues:
+\begin{itemize}
+ \item Not all functions are real analytic.
+ \item We have no control over ``how long'' a solution exists.
+ \item It doesn't answer the question of well-posedness.
+\end{itemize}
+
+Indeed, consider the PDE
+\[
+ u_{xx} + u_{yy} = 0.
+\]
+This admits a solution
+\[
+ u(x, y) = \cos kx \cosh ky
+\]
+for some $k \in \R$. We can think of this as coming from the Cauchy problem
+\[
+ u(x, 0) = \cos kx,\quad u_y(x, 0) = 0.
+\]
+By Cauchy--Kovalevskaya, there is a unique real analytic solution, and we've found one. So this is the unique solution.
+
+Let's think about what happens when $k$ gets large. In this case, it seems like nothing is very wrong with the initial data. While the initial data oscillates more and more, it is still bounded by $1$. However, we see that the solution at any $y = \varepsilon > 0$ grows exponentially. We might say that the derivatives of the initial condition grow to infinity as well, but if we do a bit more work (as you will on the example sheet), we can construct a sequence of initial data all of whose derivatives tend to $0$, but the solution still blows up.
+
+This is actually a serious problem. If we want to solve the PDE for a more general initial condition, we may want to decompose the initial data into Fourier modes, and then integrate up these solutions we found. But we cannot do this in general, if these solutions blow up as $k \to \infty$.
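+
+We can illustrate this growth numerically (an informal check, not a proof): each $u_k$ is harmonic, its data at $y = 0$ is bounded by $1$, yet its amplitude at height $y = 0.1$ blows up in $k$:
+```python
+from math import cos, cosh
+
+def u(x, y, k):
+    # u_k(x, y) = cos(kx) cosh(ky) solves u_xx + u_yy = 0 for every k
+    return cos(k * x) * cosh(k * y)
+
+def laplacian_fd(x, y, k, eps=1e-4):
+    # second-order finite-difference approximation of u_xx + u_yy
+    return ((u(x + eps, y, k) - 2 * u(x, y, k) + u(x - eps, y, k)) / eps ** 2
+            + (u(x, y + eps, k) - 2 * u(x, y, k) + u(x, y - eps, k)) / eps ** 2)
+
+# the data cos(kx) at y = 0 is bounded by 1 for every k, but at height
+# y = 0.1 the amplitude is cosh(0.1 k), which grows exponentially in k
+amplitudes = [cosh(0.1 * k) for k in (10, 50, 100)]
+```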
+
+\section{Function spaces}
+From now on, we shall restrain our desire to be a physicist, and instead tackle PDEs with functional analytic methods. This requires some technical understanding of certain function spaces.
+
+\subsection{The \texorpdfstring{H\"older}{Holder} spaces}
+The most straightforward class of function spaces is the $C^k$ spaces. These are spaces based on classical continuity and differentiability.
+
+\begin{defi}[$C^k$ spaces]\index{$C^k(U)$}
+ Let $U \subseteq \R^n$ be an open set. We define $C^k(U)$ to be the vector space of all $u: U \to \R$ such that $u$ is $k$-times differentiable and the partial derivatives $\D^\alpha u : U \to \R$ are continuous for $|\alpha| \leq k$.
+\end{defi}
+We want to turn this into a Banach space by putting the supremum norm on the derivatives. However, even $\sup |u|$ is not guaranteed to exist, as $u$ may be unbounded. So this doesn't give a genuine norm. This suggests the following definition.
+
+\begin{defi}[$C^k(\bar{U})$ spaces]\index{$C^k(\bar{U})$}
+ We define $C^k(\bar{U}) \subseteq C^k(U)$ to be the subspace of all $u$ such that $\D^\alpha u$ are all bounded and uniformly continuous. We define a norm on $C^k(\bar{U})$ by
+ \[
+ \|u\|_{C^k(\bar{U})} = \sum_{|\alpha| \leq k} \sup_{x \in U} |\D^\alpha u(x)|.
+ \]
+ This makes $C^k(\bar{U})$ a Banach space.
+\end{defi}
+
+In some cases, we might want a ``fractional'' amount of differentiability. This gives rise to the notion of H\"older spaces.
+\begin{defi}[H\"older continuity]\index{H\"older continuity}
+ We say a function $u: U \to \R$ is \emph{H\"older continuous} with index $\gamma$ if there exists $C \geq 0$ such that
+ \[
+ |u(x) - u(y)| \leq C|x - y|^\gamma
+ \]
+ for all $x, y \in U$.
+
+ We write $C^{0, \gamma}(\bar{U}) \subseteq C^0(\bar{U})$\index{$C^{0, \gamma}(\bar{U})$} for the subspace of all H\"older continuous functions with index $\gamma$.
+
+ We define the $\gamma$-H\"older semi-norm by
+ \[
+ [u]_{C^{0, \gamma}(\bar{U})} = \sup_{x\not= y \in U} \frac{|u(x) - u(y)|}{|x - y|^\gamma}.
+ \]
+ We can then define a norm on $C^{0, \gamma}(\bar{U})$ by
+ \[
+ \|u\|_{C^{0, \gamma}(\bar{U})} = \|u\|_{C^0(\bar{U})} + [u]_{C^{0, \gamma}(\bar{U})}.
+ \]
+ We say $u \in C^{k, \gamma}(\bar{U})$\index{$C^{k, \gamma}(\bar{U})$} if $u \in C^k(\bar{U})$ and $\D^\alpha u \in C^{0, \gamma}(\bar{U})$ for all $|\alpha| = k$, and we define
+ \[
+ \|u\|_{C^{k, \gamma}(\bar{U})} = \|u\|_{C^k(\bar{U})} + \sum_{|\alpha| = k} [\D^\alpha u]_{C^{0, \gamma}(\bar{U})}.
+ \]
+ This makes $C^{k, \gamma}(\bar{U})$ into a Banach space as well.
+\end{defi}
+
+Note that $C^{0, 1}(\bar{U})$ is the set of (uniformly) Lipschitz functions on $U$.
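+
+To get a feel for these spaces, here is an informal numerical check, using $u(x) = \sqrt{x}$ on $(0, 1]$ as a standard example: $\sqrt{x}$ is H\"older continuous with index $\frac{1}{2}$ (with seminorm at most $1$), but not Lipschitz, since the difference quotients blow up near $0$:
+```python
+from math import sqrt
+
+def holder_ratio(u, x, y, gamma):
+    return abs(u(x) - u(y)) / abs(x - y) ** gamma
+
+# sample points accumulating at 0, where sqrt is least regular
+pts = [2.0 ** (-k) for k in range(1, 40)] + [0.3, 0.7, 1.0]
+pairs = [(x, y) for x in pts for y in pts if x != y]
+
+half_ratios = [holder_ratio(sqrt, x, y, 0.5) for x, y in pairs]   # gamma = 1/2
+lip_ratios = [holder_ratio(sqrt, x, y, 1.0) for x, y in pairs]    # gamma = 1
+```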
+
+\subsection{Sobolev spaces}
+The properties of H\"older spaces are not difficult to understand, but on the other hand they are not too useful. This is not too surprising, perhaps, because the supremum norm only sees the maximum of the function, and ignores the rest. In contrast, the $L^p$ norm takes into account the values at all points. This gives rise to the notion of Sobolev spaces.
+
+\begin{defi}[$L^p$ space]\index{$L^p$ space}
+ Let $U \subseteq \R^n$ be open, and suppose $1 \leq p \leq \infty$. We define the space \term{$L^p(U)$} by
+ \[
+ L^p(U) = \{u: U \to \R\text{ measurable} \mid \|u\|_{L^p(U)} < \infty\}/\{\text{equality a.e.}\}.
+ \]
+ where, if $p < \infty$, we define
+ \[
+ \|u\|_{L^p(U)} = \left(\int_U |u(x)|^p\;\d x\right)^{1/p},
+ \]
+ and
+ \[
+ \|u\|_{L^\infty(U)} = \inf \{C \geq 0 \mid |u(x)| \leq C\text{ almost everywhere}\}.
+ \]
+\end{defi}
+\begin{thm}
+ $L^p(U)$ is a Banach space with the $L^p$ norm.\fakeqed
+\end{thm}
+We can also define local versions of $L^p$ spaces by saying $u \in L^p_{loc}(U)$ if $u \in L^p(V)$ for every $V \Subset U$, i.e.\ $\bar{V} \subseteq U$ and $\bar{V}$ is compact. This is read as ``$V$ is \term{compactly contained} in $U$''. By working with $L^p_{loc}(U)$, we ignore any possible blowing up at the boundary. Note that $L^p_{loc}(U)$ is not Banach, but is a Fr\'echet space.
+
+What we want to do is to define differentiability for these things. If we try to define them via limits, then we run into difficulties since the value of an element in $L^p(U)$ at a point is not well-defined. To proceed, we use the notion of a weak derivative.
+
+\begin{defi}[Weak derivative]\index{weak derivative}
+ Suppose $u, v \in L^1_{loc}(U)$ and $\alpha$ is a multi-index. We say that $v$ is the $\alpha$th weak derivative of $u$ if
+ \[
+ \int_U u \D^\alpha \phi \;\d x = (-1)^{|\alpha|} \int_U v \phi\;\d x
+ \]
+ for all $\phi \in C^\infty_c(U)$, i.e.\ for all smooth, compactly supported functions on $U$. We write $v = \D^\alpha u$.
+\end{defi}
+
+Note that if $u$ is a genuine smooth function, then $\D^\alpha u$ is the $\alpha$th weak derivative of $u$, as integration by parts tells us.
+
+For those who have seen distributions, this is the same as the definition of a distributional derivative, except here we require that the derivative is an $L^1_{loc}$ function.
+
+\begin{lemma}
+ Suppose $v, \tilde{v} \in L^1_{loc}(U)$ are both $\alpha$th weak derivatives of $u \in L^1_{loc}(U)$. Then $v = \tilde{v}$ almost everywhere.
+\end{lemma}
+
+\begin{proof}
+ For any $\phi \in C^\infty_c(U)$, we have
+ \[
+ \int_U (v - \tilde{v}) \phi \;\d x = (-1)^{|\alpha|}\int_U (u - u) \D^\alpha \phi\;\d x = 0.
+ \]
+ Therefore $v - \tilde{v} = 0$ almost everywhere.
+\end{proof}
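+
+The following standard example shows that weak differentiability is genuinely weaker than classical differentiability:
+\begin{eg}
+ Let $U = (-1, 1)$ and $u(x) = |x|$. Then $u$ is not differentiable at $0$, but it is weakly differentiable with weak derivative $v(x) = \sgn x$. Indeed, for any $\phi \in C_c^\infty(U)$, splitting the integral at $0$ and integrating by parts on each half (all boundary terms vanish), we get
+ \[
+ \int_{-1}^1 |x| \phi'(x)\;\d x = \int_{-1}^0 \phi\;\d x - \int_0^1 \phi\;\d x = -\int_{-1}^1 (\sgn x) \phi(x)\;\d x.
+ \]
+ On the other hand, $v = \sgn x$ is not itself weakly differentiable, since its distributional derivative is $2\delta_0$, which is not given by an $L^1_{loc}$ function.
+\end{eg}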
+
+Now that we have weak derivatives, we can define the Sobolev spaces.
+
+\begin{defi}[Sobolev space]\index{Sobolev space}
+ We say that $u \in L^1_{loc}(U)$ belongs to the \emph{Sobolev space} \term{$W^{k, p}(U)$} if $u \in L^p(U)$ and $\D^\alpha u$ exists and is in $L^p(U)$ for all $|\alpha| \leq k$.
+
+ If $p = 2$, we write $H^k(U) = W^{k, 2}(U)$\index{$H^k(U)$}, which will be a Hilbert space.
+
+ If $p < \infty$, we define the $W^{k, p}(U)$ norm by
+ \[
+ \|u\|_{W^{k, p}(U)} = \left(\sum_{|\alpha| \leq k}\int_U |\D^\alpha u|^p \;\d x \right)^{1/p}.
+ \]
+ If $p = \infty$, we define
+ \[
+ \|u\|_{W^{k, \infty}(U)} = \sum_{|\alpha|\leq k} \|\D^\alpha u\|_{L^\infty(U)}.
+ \]
+ We denote by $W_0^{k, p}(U)$\index{$W_0^{k, p}(U)$} the completion of $C_c^\infty(U)$ in this norm (and again $H^k_0(U) = W^{k, 2}_0(U)$\index{$H^k_0(U)$}).
+\end{defi}
+
+To see that these things are somehow interesting, it would be nice to find some functions that belong to these spaces but not the $C^k$ spaces.
+
+\begin{eg}
+ Let $U = B_1(0)$ be the unit ball in $\R^n$, and set
+ \[
+ u(x) = |x|^{-\alpha}
+ \]
+ when $x \in U, x \not= 0$. Then for $x \not= 0$, we have
+ \[
+ \D_i u = \frac{-\alpha x_i}{|x|^{\alpha + 2}}.
+ \]
+ By considering $\phi \in C_c^\infty(B_1(0) \setminus \{0\})$, it is clear that if $u$ is weakly differentiable, then it must be given by
+ \[
+ \D_i u = \frac{-\alpha x_i}{|x|^{\alpha + 2}}.\tag{$*$}
+ \]
+ We can check that $u \in L^1_{loc} (U)$ iff $\alpha < n$, and $\frac{x_i}{|x|^{\alpha + 2}} \in L^1_{loc}(U)$ iff $\alpha < n - 1$.
+
+ So if we want $u \in W^{1, p}(U)$, then we must take $\alpha < n - 1$. To check $(*)$ is indeed the weak derivative, suppose $\phi \in C^\infty_c(U)$. Then integrating by parts, we get
+ \[
+ - \int_{U - B_\varepsilon(0)}u \phi_{x_i} \;\d x = \int_{U - B_\varepsilon(0)} \D_i u \phi \;\d x - \int_{\partial B_\varepsilon(0)} u \phi \nu^i \;\d S,
+ \]
+ where $\nu = (\nu^1, \ldots, \nu^n)$ is the inwards normal. We can estimate
+ \[
+ \left|\int_{\partial B_\varepsilon(0)}u \phi \nu^i \;\d S\right| \leq \|\phi\|_{L^\infty} \cdot \varepsilon^{-\alpha } \cdot C \varepsilon^{n - 1} \leq \tilde{C} \varepsilon^{n - 1 - \alpha} \to 0\text{ as }\varepsilon \to 0
+ \]
+ for some constants $C$ and $\tilde{C}$. So the second term vanishes. So by, say, dominated convergence, it follows that $(*)$ is indeed the weak derivative.
+
+ Finally, note that $\D_i u \in L^p(U)$ iff $p(\alpha + 1) < n$. Thus, if $\alpha < \frac{n - p}{p}$, then $u \in W^{1, p}(U)$. Note that if $p > n$, then the condition becomes $\alpha < 0$, and $u$ is continuous.
+
+ Note also that if $\alpha > \frac{n}{p}$, then $u \not \in W^{1,p}(U)$.
+\end{eg}
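+
+For instance, taking $n = 3$ and $p = 2$ above, any $0 < \alpha < \frac{n - p}{p} = \frac{1}{2}$ gives a function $u = |x|^{-\alpha} \in H^1(B_1(0))$ that is unbounded near the origin. So $H^1$ functions in three dimensions need not be continuous, or even locally bounded.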
+
+\begin{thm}
+ For each $k = 0, 1, \ldots$ and $1 \leq p \leq \infty$, the space $W^{k, p}(U)$ is a Banach space.
+\end{thm}
+
+\begin{proof}
+ Homogeneity and positivity for the Sobolev norm are clear. The triangle inequality follows from the Minkowski inequality.
+
+ For completeness, note that
+ \[
+ \|\D^\alpha u\|_{L^p(U)} \leq \|u\|_{W^{k, p}(U)}
+ \]
+ for $|\alpha| \leq k$.
+
+ So if $(u_i)_{i = 1}^\infty$ is Cauchy in $W^{k, p}(U)$, then $(\D^\alpha u_i)_{i = 1}^\infty$ is Cauchy in $L^p(U)$ for $|\alpha| \leq k$. So by completeness of $L^p(U)$, we have
+ \[
+ \D^{\alpha}u_i \to u^\alpha \in L^p(U)
+ \]
+ for some $u^\alpha$. It remains to show that $u^\alpha = \D^\alpha u$, where $u = u^{(0, 0, \ldots, 0)}$. Let $\phi \in C_c^\infty(U)$. Then we have
+ \[
+ (-1)^{|\alpha|} \int_U u_j \D^\alpha \phi \;\d x = \int_U \D^\alpha u_j \phi\;\d x
+ \]
+ for all $j$. We send $j \to \infty$. Then using $\D^\alpha u_j \to u^\alpha$ in $L^p(U)$, we have
+ \[
+ (-1)^{|\alpha|} \int_U u \D^\alpha \phi\;\d x = \int_U u^\alpha \phi\;\d x.
+ \]
+ So $\D^\alpha u = u^\alpha \in L^p(U)$ and we are done.
+\end{proof}
+
+\subsection{Approximation of functions in Sobolev spaces}
+It would be nice if we could approximate functions in $W^{k, p}(U)$ with something more tractable. For example, it would be nice if we could approximate them by smooth functions, so that the weak derivatives are genuine derivatives. A useful trick to improve regularity of a function is to convolve with a smooth mollifier.
+
+\begin{defi}[Standard mollifier]\index{standard mollifier}\index{mollifier!standard}
+ Let
+ \[
+ \eta(x) =
+ \begin{cases}
+ C e^{1/(|x|^2 - 1)}& |x| < 1\\
+ 0 & |x| \geq 1
+ \end{cases} ,
+ \]
+ where $C$ is chosen so that $\int_{\R^n} \eta(x) \;\d x = 1$.
+ \begin{center}
+ \begin{tikzpicture}[xscale=1.5]
+ \draw (-2, 0) -- (2, 0);
+ \draw [->] (0, 0) -- (0, 2.5);
+ \draw [domain=-0.99:0.99, thick, mblue, samples=50] plot (\x, {5 * exp(1/((\x)^2 - 1))});
+ \draw [thick, mblue] (-2, 0) -- (-0.99, 0);
+ \draw [thick, mblue] (2, 0) -- (0.99, 0);
+ \end{tikzpicture}
+ \end{center}
+ One checks that this is a smooth function on $\R^n$, peaked at $x = 0$.
+
+ For each $\varepsilon > 0$, we set
+ \[
+ \eta_\varepsilon(x) = \frac{1}{\varepsilon^n} \eta\left(\frac{x}{\varepsilon}\right).
+ \]
+ Of course, the pre-factor of $\frac{1}{\varepsilon^n}$ is chosen so that $\eta_\varepsilon$ is appropriately normalized.
+
+ We call $\eta_\varepsilon$ the \emph{standard mollifier}, and it satisfies $\supp \eta_\varepsilon \subseteq \overline{B_\varepsilon(0)}$.
+\end{defi}
+We think of these $\eta_\varepsilon$ as approximations of the $\delta$-function.
+
+Now suppose $U \subseteq \R^n$ is open, and let
+\[
+ U_\varepsilon = \{x \in U: \mathrm{dist}(x, \partial U) > \varepsilon\}.
+\]
+\begin{defi}[Mollification]\index{mollification}
+ If $f \in L^1_{loc}(U)$, we define the \term{mollification} $f_\varepsilon: U_\varepsilon \to \R$ by the convolution
+ \[
+ f_\varepsilon = \eta_\varepsilon * f.
+ \]
+ In other words,
+ \[
+ f_\varepsilon(x) = \int_U \eta_\varepsilon (x - y) f(y)\;\d y = \int_{B_\varepsilon(x)} \eta_\varepsilon(x - y) f(y)\;\d y.
+ \]
+\end{defi}
+Thus, $f_\varepsilon$ is the ``local average'' of $f$ around each point, with the weighting given by $\eta_\varepsilon$. The hope is that $f_\varepsilon$ will have much better regularity properties than $f$.
+
+\begin{thm}
+ Let $f \in L^1_{loc}(U)$. Then
+ \begin{enumerate}
+ \item $f_\varepsilon \in C^\infty(U_\varepsilon)$.
+ \item $f_\varepsilon \to f$ almost everywhere as $\varepsilon \to 0$.
+ \item If in fact $f \in C(U)$, then $f_\varepsilon \to f$ uniformly on compact subsets.
+ \item If $1 \leq p < \infty$ and $f \in L^p_{loc}(U)$, then $f_\varepsilon \to f$ in $L^p_{loc}(U)$, i.e.\ we have convergence in $L^p$ on any $V \Subset U$.\fakeqed
+ \end{enumerate}
+\end{thm}
+In general, the difficulty of proving these approximation theorems lies in what happens at the boundary.
+\begin{lemma}
+ Assume $u \in W^{k, p}(U)$ for some $1 \leq p < \infty$, and set
+ \[
+ u_\varepsilon = \eta_\varepsilon * u\text{ on }U_\varepsilon.
+ \]
+ Then
+ \begin{enumerate}
+ \item $u_\varepsilon \in C^\infty(U_\varepsilon)$ for each $\varepsilon > 0$
+ \item If $V \Subset U$, then $u_\varepsilon \to u$ in $W^{k, p}(V)$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item As above.
+ \item We claim that
+ \[
+ \D^\alpha u_\varepsilon = \eta_\varepsilon * \D^\alpha u
+ \]
+ for $|\alpha| \leq k$ in $U_\varepsilon$.
+
+ To see this, we have
+ \begin{align*}
+ \D^\alpha u_\varepsilon(x) &= \D^\alpha \int_U \eta_\varepsilon(x - y) u(y) \;\d y\\
+ &= \int_U \D^\alpha_x \eta_\varepsilon (x - y) u(y)\;\d y\\
+ &= \int_U (-1)^{|\alpha|} \D_y^\alpha \eta_\varepsilon(x - y) u(y)\;\d y\\
+ \intertext{For a fixed $x \in U_\varepsilon$, $\eta_\varepsilon(x - \ph) \in C_c^\infty(U)$, so by the definition of a weak derivative, this is equal to}
+ &= \int_U \eta_\varepsilon(x - y) \D^\alpha u(y)\;\d y\\
+ &= \eta_\varepsilon * \D^\alpha u.
+ \end{align*}
+ It is an exercise to verify that we can indeed move the derivative past the integral.
+
+ Now fix $V \Subset U$. By the previous parts, we see that $\D^\alpha u_\varepsilon \to \D^\alpha u$ in $L^p(V)$ as $\varepsilon \to 0$ for $|\alpha| \leq k$. So
+ \[
+ \|u_\varepsilon - u\|^p_{W^{k, p}(V)} = \sum_{|\alpha| \leq k} \|\D^\alpha u_\varepsilon - \D^\alpha u\|^p_{L^p(V)} \to 0
+ \]
+ as $\varepsilon \to 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{thm}[Global approximation]\index{global approximation}
+ Let $1 \leq p < \infty$, and $U \subseteq \R^n$ be open and bounded. Then $C^\infty(U) \cap W^{k, p}(U)$ is dense in $W^{k, p}(U)$.
+\end{thm}
+Our main obstacle to overcome is the fact that the mollifications are only defined on $U_\varepsilon$, and not $U$.
+
+\begin{proof}
+ For $i \geq 1$, define
+ \begin{align*}
+ U_i &= \left\{x \in U \mid \mathrm{dist}(x, \partial U) > \tfrac{1}{i}\right\}\\
+ V_i &= U_{i + 3} - \bar{U}_{i + 1}\\
+ W_i &= U_{i + 4} - \bar{U}_i.
+ \end{align*}
+ We clearly have $U = \bigcup_{i = 1}^\infty U_i$, and we can choose $V_0 \Subset U$ such that $U = \bigcup_{i = 0}^\infty V_i$.
+
+ Let $\{\zeta_i\}_{i = 0}^\infty$ be a partition of unity subordinate to $\{V_i\}$. Thus, we have $0 \leq \zeta_i \leq 1$, $\zeta_i \in C_c^\infty(V_i)$ and $\sum_{i = 0}^\infty \zeta_i = 1$ on $U$.
+
+ Fix $\delta > 0$. Then for each $i$, we can choose $\varepsilon_i$ sufficiently small such that
+ \[
+ u_i = \eta_{\varepsilon_i} * (\zeta_i u)
+ \]
+ satisfies $\supp u_i \subseteq W_i$ and
+ \[
+ \|u_i - \zeta_i u\|_{W^{k, p}(U)} = \|u_i - \zeta_i u\|_{W^{k, p}(W_i)} \leq \frac{\delta}{2^{i + 1}}.
+ \]
+ Now set
+ \[
+ v = \sum_{i = 0}^\infty u_i \in C^\infty(U).
+ \]
+ Note that we do not know (yet) that $v \in W^{k, p}(U)$. But it certainly is when we restrict to some $V \Subset U$.
+
+ In any such subset, the sum is finite, and since $u = \sum_{i = 0}^\infty \zeta_i u$, we have
+ \[
+ \|v - u\|_{W^{k, p}(V)} \leq \sum_{i = 0}^\infty \|u_i - \zeta_i u\|_{W^{k, p}(V)} \leq \delta \sum_{i = 0}^\infty 2^{-(i + 1)} = \delta.
+ \]
+ Since the bound $\delta$ does not depend on $V$, by taking the supremum over all $V$, we have
+ \[
+ \|v - u\|_{W^{k, p}(U)} \leq \delta.
+ \]
+ So we are done.
+\end{proof}
+
+It would be nice for $C^\infty(\bar{U})$ to be dense, instead of just $C^\infty(U)$. It turns out this is possible, as long as we have a sensible boundary.
+
+\begin{defi}[$C^{k, \delta}$ boundary]\index{$C^{k, \delta}$ boundary}
+ Let $U \subseteq \R^n$ be open and bounded. We say $\partial U$ is $C^{k, \delta}$ if for any point in the boundary $p \in \partial U$, there exists $r > 0$ and a function $\gamma \in C^{k, \delta}(\R^{n - 1})$ such that (possibly after relabelling and rotating axes) we have
+ \[
+ U \cap B_r(p) = \{(x', x_n) \in B_r(p): x_n > \gamma(x')\}.
+ \]
+\end{defi}
+Thus, this says our boundary is locally the graph of a $C^{k, \delta}$ function.
+
+\begin{thm}[Smooth approximation up to boundary]
+ Let $1 \leq p < \infty$, and $U \subseteq \R^n$ be open and bounded. Suppose $\partial U$ is $C^{0, 1}$. Then $C^\infty(\bar{U}) \cap W^{k, p}(U)$ is dense in $W^{k, p}(U)$.
+\end{thm}
+
+\begin{proof}
+ Previously, the reason we didn't get something in $C^\infty(\bar{U})$ was that we had to glue together infinitely many mollifications whose domains collectively exhaust $U$, and there is no hope that the resulting function is in $C^\infty(\bar{U})$. In the current scenario, we know that $U$ locally looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, opacity=0.3] (0, 0) .. controls (1, -0.3) and (1.5, 0.1) .. (2, 0.2) .. controls (2.5, 0.3) .. (3.4, 0) -- (3.4, 1) -- (0, 1) -- cycle;
+ \draw (0, 0) .. controls (1, -0.3) and (1.5, 0.1) .. (2, 0.2) .. controls (2.5, 0.3) .. (3.4, 0);
+
+ \node [circ] at (2, 0.2) {};
+ \node [below] at (2, 0.2) {\small$x_0$};
+ \end{tikzpicture}
+ \end{center}
+ The idea is that given a $u$ defined on $U$, we can shift it downwards by some $\varepsilon$. It is a known result that translation is continuous, so this only changes $u$ by a tiny bit. We can then mollify with a $\bar{\varepsilon} < \varepsilon$, which would then give a function defined on $U$ (at least locally near $x_0$).
+
+ So fix some $x_0 \in \partial U$. Since $\partial U$ is $C^{0, 1}$, there exist $r > 0$ and $\gamma \in C^{0, 1}(\R^{n - 1})$ such that
+ \[
+ U \cap B_r(x_0) = \{(x', x_n) \in B_r(x_0) \mid x_n > \gamma(x')\}.
+ \]
+ Set
+ \[
+ V = U \cap B_{r/2}(x_0).
+ \]
+ Define the shifted function $u_\varepsilon$ to be
+ \[
+ u_\varepsilon(x) = u(x + \varepsilon e_n).
+ \]
+ Now pick $\bar{\varepsilon}$ sufficiently small such that
+ \[
+ v^{\varepsilon, \bar{\varepsilon}} = \eta_{\bar{\varepsilon}} * u_\varepsilon
+ \]
+ is well-defined. Note that here we need to use the fact that $\partial U$ is $C^{0, 1}$. Indeed, we can see that if the slope of $\partial U$ is very steep near a point $x$:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray, opacity=0.3] (0, 0) -- (0.5, 1.5) -- (0.5, 1.7) -- (0, 1.7);
+ \draw (0, 0) -- (0.5, 1.5);
+
+ \draw [dashed] (0, -0.3) -- (0.5, 1.2);
+
+ \draw [-latex'] (0.6, 1.2) -- (0.6, 1.5) node [pos=0.5, right] {\small$\varepsilon$};
+ \draw [latex'-] (0.6, 1.2) -- (0.6, 1.5);
+ \end{tikzpicture}
+ \end{center}
+ then we need to choose a $\bar{\varepsilon}$ much smaller than $\varepsilon$. By requiring that $\gamma$ is $1$-H\"older continuous, we can ensure there is a single choice of $\bar{\varepsilon}$ that works throughout $V$. As long as $\bar{\varepsilon}$ is small enough, we know that $v^{\varepsilon, \bar{\varepsilon}} \in C^\infty(\bar{V})$.
+
+ Fix $\delta > 0$. We can now estimate
+ \begin{align*}
+ \|v^{\varepsilon, \bar{\varepsilon}} - u\|_{W^{k, p}(V)} &= \|v^{\varepsilon, \bar{\varepsilon}} - u_\varepsilon + u_\varepsilon - u\|_{W^{k, p}(V)}\\
+ &\leq \|v^{\varepsilon, \bar{\varepsilon}} - u_\varepsilon\|_{W^{k, p}(V)} + \|u_\varepsilon - u\|_{W^{k, p}(V)}.
+ \end{align*}
+ Since translation is continuous in the $L^p$ norm for $p < \infty$, we can pick $\varepsilon > 0$ such that $\|u_\varepsilon - u\|_{W^{k, p}(V)} < \frac{\delta}{2}$. Having fixed such an $\varepsilon$, we can pick $\bar{\varepsilon}$ so small that we also have $\|v^{\varepsilon, \bar{\varepsilon}} - u_\varepsilon\|_{W^{k, p}(V)} < \frac{\delta}{2}$.
+
+ The conclusion of this is that for any $x_0 \in \partial U$, we can find a neighbourhood $V \subseteq U$ of $x_0$ in $U$ such that for any $u \in W^{k, p}(U)$ and $\delta > 0$, there exists $v \in C^\infty(\bar{V})$ such that $\|u - v\|_{W^{k, p}(V)} \leq \delta$.
+
+ It remains to patch all of these together using a partition of unity. By the compactness of $\partial U$, we can cover $\partial U$ by finitely many of these $V$, say $V_1, \ldots, V_N$. We further pick a $V_0$ such that $V_0 \Subset U$ and
+ \[
+ U = \bigcup_{i = 0}^N V_i.
+ \]
+ We can pick approximations $v_i \in C^\infty(\bar{V}_i)$ for $i = 0, \ldots, N$ (the $i = 0$ case is given by the previous global approximation theorem), satisfying $\|v_i - u\|_{W^{k, p}(V_i)} \leq \delta$.
+ Pick a partition of unity $\{\zeta_i\}_{i = 0}^N$ of $\bar{U}$ subordinate to $\{V_i\}$. Define
+ \[
+ v = \sum_{i = 0}^N \zeta_i v_i.
+ \]
+ Clearly $v \in C^\infty(\bar{U})$, and we can bound
+ \begin{align*}
+ \|\D^\alpha v - \D^\alpha u\|_{L^p(U)} &= \norm{\D^\alpha \sum_{i = 0}^N \zeta_i v_i - \D^\alpha \sum_{i = 0}^N \zeta_i u}_{L^p(U)}\\
+ &\leq C_k \sum_{i = 0}^N \|v_i - u\|_{W^{k, p}(V_i)}\\
+ &\leq C_k(1 + N)\delta,
+ \end{align*}
+ where $C_k$ is a constant that solely depends on the derivatives of the partition of unity, which are fixed. So we are done.
+\end{proof}
+
+\subsection{Extensions and traces}
+If $U \subseteq \R^n$ is open and bounded, then there is of course a restriction map $W^{1, p}(\R^n) \to W^{1, p}(U)$. It turns out that, under mild conditions, there is an extension map going in the other direction as well.
+
+\begin{thm}[Extension of $W^{1, p}$ functions]
+ Suppose $U$ is open, bounded and $\partial U$ is $C^1$. Pick a bounded $V$ such that $U \Subset V$. Then there exists a bounded linear operator
+ \[
+ E: W^{1, p}(U) \to W^{1, p}(\R^n)
+ \]
+ for $1 \leq p < \infty$ such that for any $u \in W^{1, p}(U)$,
+ \begin{enumerate}
+ \item $Eu = u$ almost everywhere in $U$
+ \item $Eu$ has support in $V$
+ \item $\|Eu\|_{W^{1, p}(\R^n)} \leq C \|u\|_{W^{1, p}(U)}$, where the constant $C$ depends on $U, V, p$ but not $u$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ First note that $C^1(\bar{U})$ is dense in $W^{1, p}(U)$. So it suffices to show that the above theorem holds with $W^{1, p}(U)$ replaced with $C^1(\bar{U})$, and then extend by continuity.
+
+ We first show that we can do this locally, and then glue them together using partitions of unity.
+
+ Suppose $x^0 \in \partial U$ is such that $\partial U$ near $x^0$ lies in the plane $\{x_n = 0\}$. In other words, there exists $r > 0$ such that
+ \begin{align*}
+ B_+ &= B_r(x^0) \cap \{x_n \geq 0\} \subseteq \bar{U}\\
+ B_- &= B_r(x^0) \cap \{x_n \leq 0\} \subseteq \R^n \setminus U.
+ \end{align*}
+ The idea is that we want to reflect $u|_{B_+}$ across the $x_n = 0$ boundary to get a function on $B_{-}$, but the derivative will not be continuous if we do this. So we define a ``higher order reflection'' by
+ \[
+ \bar{u}(x) =
+ \begin{cases}
+ u(x) & x \in B_+\\
+ -3u(x', -x_n) + 4u\left(x', -\frac{x_n}{2}\right) & x \in B_-
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x_n$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$u$};
+
+ \draw [domain=-2.1:0, mblue, thick] plot (\x, {- 1/3 * \x^3});
+ \draw [domain=0:1.55, mblue, thick, dashed] plot (\x, {-5/6 * \x^3});
+ \node [circ] at (-0.75, 0.14) {};
+ \node [circ] at (-1.5, 1.124) {};
+ \node [circ] at (1.5, -2.815) {};
+
+ \draw (-1.5, 1.124) -- (1.5, -2.815);
+
+ \draw [dashed] (-1.5, 1.124) -- (-1.5, 0) node [below] {$-x$};
+ \draw [dashed] (-0.75, 0.14) -- (-0.7, 0) node [below] {$-\frac{x}{2}$};
+
+ \draw [dashed] (1.5, -2.815) -- (1.5, 0) node [anchor = north west] {$x$};
+ \end{tikzpicture}
+ \end{center}
+ We see that this is a continuous function. Moreover, by explicitly computing the partial derivatives, we see that they are continuous across the boundary. So we know $\bar{u} \in C^1(B_r(x^0))$.
+
+ We can then easily check that we have
+ \[
+ \|\bar{u}\|_{W^{1, p}(B_r(x^0))} \leq C \|u\|_{W^{1, p}(B_+)}
+ \]
+ for some constant $C$.
+
+ If $\partial U$ is not necessarily flat near $x^0 \in \partial U$, then we can use a $C^1$ diffeomorphism to straighten it out. Indeed, we can pick $r > 0$ and $\gamma \in C^1(\R^{n - 1})$ such that
+ \[
+ U \cap B_r(x^0) = \{(x', x_n) \in B_r(x^0) \mid x_n > \gamma(x')\}.
+ \]
+ We can then use the $C^1$-diffeomorphism $\Phi: \R^n \to \R^n$ given by
+ \begin{align*}
+ \Phi(x)^i &= x^i & i &= 1, \ldots, n -1\\
+ \Phi(x)^n &= x^n - \gamma(x^1, \ldots, x^{n - 1}).
+ \end{align*}
+ Then since $C^1$ diffeomorphisms induce bounded isomorphisms between $W^{1, p}$, this gives a local extension.
+
+ Since $\partial U$ is compact, we can take a finite number of points $x_i^0 \in \partial U$, sets $W_i$ and extensions $u_i \in C^1(W_i)$ extending $u$ such that
+ \[
+ \partial U \subseteq \bigcup_{i = 1}^N W_i.
+ \]
+ Further pick $W_0 \Subset U$ so that $U \subseteq \bigcup_{i = 0}^N W_i$. Let $\{\zeta_i\}_{i = 0}^N$ be a partition of unity subordinate to $\{W_i\}$. Write
+ \[
+ \bar{u} = \sum_{i = 0}^N \zeta_i \bar{u}_i
+ \]
+ where $\bar{u}_0 = u$. Then $\bar{u} \in C^1(\R^n)$, $\bar{u} = u$ on $U$, and we have
+ \[
+ \|\bar{u}\|_{W^{1, p}(\R^n)} \leq C \|u\|_{W^{1, p}(U)}.
+ \]
+ By multiplying $\bar{u}$ by a cut-off, we may assume $\supp \bar{u} \subseteq V$ for some $V \Supset U$.
+
+ Now notice that the whole construction is linear in $u$. So we have constructed a bounded linear operator from a dense subset of $W^{1, p}(U)$ to $W^{1,p}(V)$, and there is a unique extension to the whole of $W^{1, p}(U)$ by the completeness of $W^{1, p}(V)$. We can see that the desired properties are preserved by this extension.
+\end{proof}
+
+\subsubsection*{Trace theorems}
+A lot of the PDE problems we are interested in are boundary value problems, namely we want to solve a PDE subject to the function taking some prescribed values on the boundary. However, a function $u \in L^p(U)$ is only defined up to sets of measure zero, and $\partial U$ is typically a set of measure zero. So naively, we can't define $u|_{\partial U}$. We would hope that if we require $u$ to have more regularity, then it makes sense to define the value at the boundary. This is true, and is given by the \emph{trace theorem}.
+
+\begin{thm}[Trace theorem]\index{trace theorem}
+ Assume $U$ is bounded and has $C^1$ boundary. Then there exists a bounded linear operator $T: W^{1, p}(U) \to L^p(\partial U)$ for $1 \leq p < \infty$ such that $Tu = u|_{\partial U}$ if $u \in W^{1, p}(U) \cap C(\bar{U})$.
+\end{thm}
+We say $Tu$ is the \term{trace} of $u$.
+
+\begin{proof}
+ It suffices to show that the restriction map defined on $C^\infty$ functions is a bounded linear operator, and then we have a unique extension to $W^{1, p}(U)$. The gist of the argument is that Stokes' theorem allows us to express the integral of a function over the boundary as an integral over the whole of $U$. In fact, the proof is indeed just the proof of Stokes' theorem.
+
+ By a general partition of unity argument, it suffices to show this in the case where $U = \{x_n > 0\}$ and $u \in C^\infty(\bar{U})$ with $\supp u \subseteq B_R(0) \cap \bar{U}$. Then
+ \begin{align*}
+ \int_{\R^{n - 1}} |u(x', 0)|^p \;\d x' &= -\int_{\R^{n - 1}} \int_0^\infty \frac{\partial}{\partial x_n} |u(x', x_n)|^p \;\d x_n \;\d x' \\
+ &= -\int_U p |u|^{p - 1} u_{x_n} \sgn u \;\d x.
+ \end{align*}
+ We estimate this using Young's inequality to get
+ \[
+ \int_{\R^{n - 1}} |u(x', 0)|^p \;\d x' \leq C_p \int_U |u|^p + |u_{x_n}|^p \;\d x \leq C_p \|u\|_{W^{1, p}(U)}^p.
+ \]
+ So we are done.
+\end{proof}
+We can apply this to each derivative to define trace maps $W^{k, p}(U) \to W^{k - 1, p}(\partial U)$.
+
+In general, this trace map is not surjective. So in some sense, we don't actually need to use up a whole unit of differentiability. In the example sheet, we see that in the case $p = 2$, we only lose ``half'' a derivative.
+
+Note that $C_c^\infty(U)$ is dense in $W_0^{1, p}(U)$, and the trace vanishes on $C_c^\infty(U)$. So $T$ vanishes on $W_0^{1, p}(U)$. In fact, the converse is true --- if $Tu = 0$, then $u \in W_0^{1, p}(U)$.
+
+\subsection{Sobolev inequalities}
+Before we can move on to PDEs, we have to prove some \emph{Sobolev inequalities}. These are inequalities that compare different norms, and allow us to ``trade'' different desirable properties. One particularly important thing we can do is to trade differentiability for continuity. So we will know that if $u \in W^{k, p}(U)$ for some large $k$, then in fact $u \in C^m(U)$ for some (small) $m$. The utility of these results is that we would like to construct our solutions in $W^{k, p}$ spaces, since these are easier to work with, but ultimately, we want an actual, smooth solution to our equation. Sobolev inequalities let us do so, since if $u \in W^{k, p}(U)$ for all $k$, then it must be in $C^m$ as well.
+
+To see why we might expect this to be possible, consider the space $H_0^1([0, 1])$. A priori, if $u \in H_0^1([0, 1])$, then we only know it exists as some measurable function, and there is no canonical representative of this function. However, we can simply assign
+\[
+ u(x) = \int_0^x u'(t)\;\d t,
+\]
+since we know $u'$ is an honest integrable function. This gives a well-defined representative of the function $u$, and even better, we can bound its supremum using $\|u'\|_{L^2([0, 1])}$.
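+
+Explicitly, for this representative, Cauchy--Schwarz gives, for each $x \in [0, 1]$,
+\[
+ |u(x)| \leq \int_0^x |u'(t)|\;\d t \leq \left(\int_0^1 |u'(t)|^2\;\d t\right)^{1/2} = \|u'\|_{L^2([0, 1])},
+\]
+and the same computation applied to $u(x) - u(y) = \int_y^x u'(t)\;\d t$ shows that $|u(x) - u(y)| \leq |x - y|^{1/2} \|u'\|_{L^2([0, 1])}$. So this representative is in fact H\"older continuous.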
+
+Before we start proving our Sobolev inequalities, we first prove the following lemma:
+\begin{lemma}
+ Let $n \geq 2$ and $f_1, \ldots, f_n \in L^{n - 1}(\R^{n - 1})$. For $1 \leq i \leq n$, denote
+ \[
+ \tilde{x}_i = (x_1, \ldots, x_{i - 1}, x_{i + 1}, \ldots, x_n),
+ \]
+ and set
+ \[
+ f(x) = f_1(\tilde{x}_1) \cdots f_n(\tilde{x}_n).
+ \]
+ Then $f \in L^1(\R^n)$ with
+ \[
+ \|f\|_{L^1(\R^n)} \leq \prod_{i = 1}^n \|f_i\|_{L^{n - 1}(\R^{n - 1})}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We proceed by induction on $n$.
+
+ If $n = 2$, then this is easy, since
+ \[
+ f(x_1, x_2) = f_1(x_2) f_2(x_1).
+ \]
+ So
+ \begin{align*}
+ \int_{\R^2}|f(x_1, x_2)| \;\d x &= \int |f_1(x_2)|\;\d x_2 \int |f_2(x_1)|\;\d x_1\\
+ &= \|f_1\|_{L^1(\R^1)} \|f_2\|_{L^1(\R^1)}.
+ \end{align*}
+ Suppose that the result is true for $n \geq 2$, and consider the $n + 1$ case. Write
+ \[
+ f(x) = f_{n + 1}(\tilde{x}_{n + 1}) F(x),
+ \]
+ where $F(x) = f_1(\tilde{x}_1) \cdots f_n(\tilde{x}_n)$. Then by H\"older's inequality, we have
+ \begin{multline*}
+ \int_{x_1, \ldots, x_n} |f(\ph, x_{n + 1})|\;\d x \leq \|f_{n + 1}\|_{L^n(\R^n)} \|F(\ph, x_{n + 1})\|_{L^{n/(n - 1)}(\R^n)}.
+ \end{multline*}
+ We now apply the induction hypothesis to the product
+ \[
+ f_1^{n/(n - 1)} (\ph, x_{n + 1}) f_2^{n/(n - 1)}(\ph, x_{n + 1}) \cdots f_n^{n/(n - 1)}(\ph, x_{n + 1}).
+ \]
+ So
+ \begin{align*}
+ \int\limits_{\mathclap{x_1, \ldots, x_n}} |f(\ph, x_{n + 1})|\;\d x &\leq \|f_{n + 1}\|_{L^n(\R^n)}\left(\prod_{i = 1}^n \|f_i^{\frac{n}{n-1}} (\ph, x_{n + 1})\|_{L^{n - 1}(\R^{n - 1})}\right)^{\frac{n - 1}{n}}\\
+ &= \|f_{n + 1}\|_{L^n(\R^n)} \prod_{i = 1}^n \|f_i(\ph, x_{n + 1})\|_{L^n(\R^{n - 1})}.
+ \end{align*}
+ Now integrate over $x_{n + 1}$. We get
+ \begin{align*}
+ \|f\|_{L^1(\R^{n + 1})} &\leq \|f_{n + 1}\|_{L^n(\R^n)} \int_{x_{n + 1}}\prod_{i = 1}^n \|f_i(\ph, x_{n + 1})\|_{L^n(\R^{n - 1})}\;\d x_{n + 1}\\
+ &\leq \|f_{n + 1}\|_{L^n(\R^n)} \prod_{i = 1}^n \left(\int_{x_{n + 1}} \|f_i(\ph, x_{n + 1})\|^n_{L^n(\R^{n - 1})} \;\d x_{n + 1}\right)^{1/n}\\
+ &= \|f_{n + 1}\|_{L^n(\R^n)} \prod_{i = 1}^n \|f_i\|_{L^n(\R^n)}.\qedhere
+ \end{align*}
+\end{proof}
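+
+As an aside, this lemma contains the \emph{Loomis--Whitney inequality}: if $E \subseteq \R^n$ is measurable and $\pi_i$ denotes the projection forgetting the $i$th coordinate, then taking $f_i = \chi_{\pi_i(E)}$, we have $\prod_{i = 1}^n f_i(\tilde{x}_i) \geq \chi_E(x)$, and so
+\[
+ |E| \leq \prod_{i = 1}^n \|\chi_{\pi_i(E)}\|_{L^{n - 1}(\R^{n - 1})} = \prod_{i = 1}^n |\pi_i(E)|^{1/(n - 1)}.
+\]
+In other words, the measure of a set is controlled by the measures of its $(n - 1)$-dimensional shadows.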
+
+\begin{thm}[Gagliardo--Nirenberg--Sobolev inequality]\index{Gagliardo--Nirenberg--Sobolev inequality}
+ Assume $n > p$. Then we have
+ \[
+ W^{1, p}(\R^n) \subseteq L^{p^*}(\R^n),
+ \]
+ where
+ \[
+ p^* = \frac{np}{n - p} > p,
+ \]
+ and there exists $c > 0$ depending on $n, p$ such that
+ \[
+ \|u\|_{L^{p^*}(\R^n)} \leq c\|u\|_{W^{1, p}(\R^n)}.
+ \]
+ In other words, $W^{1, p}(\R^n)$ is continuously embedded in $L^{p^*}(\R^n)$.
+\end{thm}
+
+\begin{proof}
+ Assume $u \in C_c^\infty(\R^n)$, and consider $p = 1$. Since the support is compact,
+ \[
+ u(x) = \int_{-\infty}^{x_i} u_{x_i}(x_1 ,\ldots, x_{i - 1}, y_i, x_{i + 1}, \ldots, x_n)\;\d y_i.
+ \]
+ So we know that
+ \[
+ |u(x)| \leq \int_{-\infty}^\infty |\D u(x_1, \ldots, x_{i - 1}, y_i, x_{i + 1}, \ldots, x_n)|\;\d y_i \equiv f_i(\tilde{x}_i).
+ \]
+ Thus, applying this once in each direction, we obtain
+ \[
+ |u(x)|^{n/(n - 1)} \leq \prod_{i = 1}^n f_i(\tilde{x}_i)^{1/(n - 1)}.
+ \]
+ If we integrate and then use the lemma, we see that
+ \[
+ \left(\|u\|_{L^{n/(n - 1)}(\R^n)}\right)^{n/(n - 1)} \leq \prod_{i = 1}^n \|f_i^{1/(n - 1)}\|_{L^{n - 1}(\R^{n - 1})} = \|\D u\|_{L^1(\R^n)}^{n/(n-1)}.
+ \]
+ So
+ \[
+ \|u\|_{L^{n/(n-1)}(\R^n)} \leq C \|\D u\|_{L^1(\R^n)}.
+ \]
+ Since $C^\infty_c(\R^n)$ is dense in $W^{1, 1}(\R^n)$, the result for $p = 1$ follows.
+
+ Now suppose $p > 1$. We apply the $p = 1$ case to
+ \[
+ v = |u|^\gamma
+ \]
+ for some $\gamma > 1$, which we choose later. Then we have
+ \[
+ \D v = \gamma \sgn u \cdot |u|^{\gamma - 1} \D u.
+ \]
+ So
+ \begin{align*}
+ \left(\int_{\R^n} |u|^{\frac{\gamma n}{n-1}} \;\d x\right)^{\frac{n - 1}{n}} &\leq \gamma \int_{\R^n} |u|^{\gamma - 1}|\D u|\;\d x\\
+ &\leq \gamma \left(\int_{\R^n} |u|^{(\gamma - 1) \frac{p}{p - 1}}\;\d x\right)^{\frac{p - 1}{p}} \left(\int_{\R^n} |\D u|^p \;\d x\right)^{\frac{1}{p}}.
+ \end{align*}
+ We choose $\gamma$ such that
+ \[
+ \frac{\gamma n}{n - 1} = \frac{(\gamma - 1)p}{p - 1}.
+ \]
+ So we should pick
+ \[
+ \gamma = \frac{p(n - 1)}{n - p} > 1.
+ \]
+ Then we have
+ \[
+ \frac{\gamma n}{n - 1} = \frac{np}{n - p} = p^*.
+ \]
+ So
+ \[
+ \left(\int_{\R^n} |u|^{p^*}\;\d x\right)^{\frac{n - 1}{n}} \leq \frac{p(n - 1)}{n - p}\left(\int_{\R^n} |u|^{p^*}\;\d x \right)^{\frac{p - 1}{p}} \|\D u\|_{L^p(\R^n)}.
+ \]
+ So
+ \[
+ \left(\int_{\R^n}|u|^{p^*}\;\d x\right)^{1/p^*} \leq \frac{p(n - 1)}{n - p} \|\D u\|_{L^p(\R^n)}.
+ \]
+ This argument is valid for $u \in C_c^\infty(\R^n)$, and by approximation, we can extend to $W^{1, p}(\R^n)$.
+\end{proof}
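+
+A scaling argument shows that the exponent $p^*$ is the only one for which such an inequality can hold. Suppose $\|u\|_{L^q(\R^n)} \leq C \|\D u\|_{L^p(\R^n)}$ for all $u \in C_c^\infty(\R^n)$, and set $u_\lambda(x) = u(\lambda x)$ for $\lambda > 0$. Changing variables, we find
+\[
+ \|u_\lambda\|_{L^q(\R^n)} = \lambda^{-n/q} \|u\|_{L^q(\R^n)}, \quad \|\D u_\lambda\|_{L^p(\R^n)} = \lambda^{1 - n/p} \|\D u\|_{L^p(\R^n)}.
+\]
+Applying the inequality to $u_\lambda$ and letting $\lambda \to 0$ and $\lambda \to \infty$, we see that it can only hold for all $\lambda$ if $-\frac{n}{q} = 1 - \frac{n}{p}$, i.e.\ $q = \frac{np}{n - p} = p^*$.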
+
+We can deduce some corollaries of this result:
+\begin{cor}
+ Suppose $U \subseteq \R^n$ is open and bounded with $C^1$-boundary, and $1 \leq p < n$. Then if $p^* = \frac{np}{n - p}$, we have
+ \[
+ W^{1, p}(U) \subseteq L^{p^*}(U),
+ \]
+ and there exists $C = C(U, p, n)$ such that
+ \[
+ \|u\|_{L^{p^*}(U)} \leq C\|u\|_{W^{1, p}(U)}.
+ \]
+\end{cor}
+
+\begin{proof}
+ By the extension theorem, we can find $\bar{u} \in W^{1, p}(\R^n)$ with $\bar{u} = u$ almost everywhere on $U$ and
+ \[
+ \|\bar{u}\|_{W^{1, p}(\R^n)} \leq C \|u\|_{W^{1, p}(U)}.
+ \]
+ Then we have
+ \[
+ \|u\|_{L^{p^*}(U)} \leq \|\bar{u}\|_{L^{p^*}(\R^n)} \leq c \|\bar{u}\|_{W^{1, p}(\R^n)} \leq \tilde{C} \|u\|_{W^{1, p}(U)}.\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ Suppose $U$ is open and bounded, and suppose $u \in W^{1, p}_0(U)$ for some $1 \leq p < n$. Then we have the estimates
+ \[
+ \|u\|_{L^q(U)} \leq C \|\D u\|_{L^p(U)}
+ \]
+ for any $q \in [1, p^*]$. In particular,
+ \[
+ \|u\|_{L^p(U)} \leq C \|\D u\|_{L^p(U)}.
+ \]
+\end{cor}
+
+\begin{proof}
+ Since $u \in W_0^{1, p}(U)$, there exist $u_m \in C_c^\infty (U)$ converging to $u$ in $W^{1, p}(U)$. Extending $u_m$ to vanish on $U^c$, we have
+ \[
+ u_m \in C_c^\infty(\R^n).
+ \]
+ Applying Gagliardo--Nirenberg--Sobolev, we find that
+ \[
+ \|u_m\|_{L^{p^*}(\R^n)} \leq C \|\D u_m\|_{L^p(\R^n)}.
+ \]
+ So we know that
+ \[
+ \|u_m\|_{L^{p^*}(U)} \leq C \|\D u_m\|_{L^p(U)}.
+ \]
+ Sending $m \to \infty$, we obtain
+ \[
+ \|u\|_{L^{p^*}(U)} \leq C \|\D u\|_{L^p(U)}.
+ \]
+ Since $U$ is bounded, by H\"older, we have
+ \[
+ \left(\int_U |u|^q\;\d x\right)^{1/q} \leq \left(\int_U 1 \;\d x \right)^{1/rq} \left(\int_U |u|^{qs}\;\d x \right)^{1/sq} \leq C \|u\|_{L^{p^*}(U)}
+ \]
+ provided $q \leq p^*$, where we choose $s$ such that $qs = p^*$, and $r$ such that $\frac{1}{r} + \frac{1}{s} = 1$.
+\end{proof}
+
+The previous results were about the case $n > p$. If $n < p < \infty$, then we might hope that if $u \in W^{1, p}(\R^n)$, then $u$ is ``better than $L^\infty$''.
+
+\begin{thm}[Morrey's inequality]\index{Morrey's inequality}
+ Suppose $n < p < \infty$. Then there exists a constant $C$ depending only on $p$ and $n$ such that
+ \[
+ \|u\|_{C^{0, \gamma}(\R^n)} \leq C \|u\|_{W^{1, p}(\R^n)}
+ \]
+ for all $u \in C^\infty_c (\R^n)$, where $\gamma = 1 - \frac{n}{p} \in (0, 1)$.
+\end{thm}
+
+\begin{proof}
+ We first prove the H\"older part of the estimate.
+
+ Let $Q$ be an open cube of side length $r > 0$ and containing $0$. Define
+ \[
+ \bar{u} = \frac{1}{|Q|} \int_Q u(x)\;\d x.
+ \]
+ Then
+ \begin{align*}
+ |\bar{u} - u(0)| &= \left|\frac{1}{|Q|} \int_Q [u(x) - u(0)]\;\d x\right|\\
+ &\leq \frac{1}{|Q|} \int_Q |u(x) - u(0)|\;\d x.
+ \end{align*}
+ Note that
+ \[
+ u(x) - u(0) = \int_0^1 \frac{\d}{\d t} u(tx)\;\d t = \sum_i \int_0^1 x^i \frac{\partial u}{\partial x^i} (tx)\;\d t.
+ \]
+ So
+ \[
+ |u(x) - u(0)| \leq r \int_0^1 \sum_i \left|\frac{\partial u}{\partial x^i} (tx)\right|\;\d t.
+ \]
+ So we have
+ \begin{align*}
+ |\bar{u} - u(0)| &\leq \frac{r}{|Q|} \int_Q \int_0^1 \sum_i \left|\frac{\partial u}{\partial x^i}(tx)\right|\;\d t\;\d x\\
+ &= \frac{r}{|Q|} \int_0^1 t^{-n} \left(\int_{tQ} \sum_i \left|\frac{\partial u}{\partial x^i} (y)\right|\;\d y \right)\;\d t\\
+ &\leq \frac{r}{|Q|} \int_0^1 t^{-n} \left(\sum_{i = 1}^n \left\|\frac{\partial u}{\partial x^i}\right\|_{L^p(tQ)} |tQ|^{1/p'}\right)\;\d t.
+ \end{align*}
+ where $\frac{1}{p} + \frac{1}{p'} = 1$.
+
+ Using that $|Q| = r^n$, we obtain
+ \begin{align*}
+ |\bar{u} - u(0)| &\leq c r^{1 -n + \frac{n}{p'}} \|\D u\|_{L^p(\R^n)} \int_0^1 t^{-n + \frac{n}{p'}} \;\d t\\
+ &\leq \frac{c}{1 - n/p} r^{1 - n/p} \|\D u\|_{L^p(\R^n)}.
+ \end{align*}
+ Note that since $1 - n/p > 0$, the right hand side tends to $0$ as $r \to 0$. So when we take $r$ to be very small, we see that $u(0)$ is close to the average value of $u$ around $0$.
+
+ Indeed, suppose $x, y \in \R^n$ with $|x - y| = \frac{r}{2}$. Pick a box containing $x$ and $y$ of side length $r$. Applying the above result, shifted so that $x$, $y$ play the role of $0$, we can estimate
+ \[
+ |u(x) - u(y)| \leq |u(x) - \bar{u}| + |u(y) - \bar{u}| \leq \tilde{C} r^{1 - n/p} \|\D u\|_{L^p(\R^n)}.
+ \]
+ Since $r = 2|x - y|$, it follows that
+ \[
+ \frac{|u(x) - u(y)|}{|x - y|^{1 - n/p}} \leq C \cdot 2^{1 - n/p} \|\D u\|_{L^p(\R^n)}.
+ \]
+ So we conclude that $[u]_{C^{0, \gamma}(\R^n)} \leq C \|\D u\|_{L^p(\R^n)}$.
+
+ Finally, to see that $u$ is bounded, note that any $x \in \R^n$ belongs to some cube $Q$ of side length $1$. So we have
+ \[
+ |u(x)| \leq |u(x) - \bar{u}| + |\bar{u}| \leq |\bar{u}| + C \|\D u\|_{L^p(\R^n)}.
+ \]
+ But also
+ \[
+ |\bar{u}| \leq \int_Q |u(x)|\;\d x \leq \|u\|_{L^p(\R^n)} \|1\|_{L^{p'}(Q)} = \|u\|_{L^p(\R^n)}.
+ \]
+ So we are done.
+\end{proof}
+
+\begin{cor}
+ Suppose $u \in W^{1, p}(U)$ with $n < p < \infty$, for $U$ open, bounded with $C^1$ boundary. Then there exists $u^* \in C^{0, \gamma}(U)$ such that $u = u^*$ almost everywhere and $\|u^*\|_{C^{0, \gamma}(U)} \leq C\|u\|_{W^{1, p}(U)}$.
+\end{cor}
+
+By applying these results iteratively, we can establish higher order versions
+\[
+ W^{k, p}(U) \subseteq L^q(U)
+\]
+with some appropriate $q$.
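+For instance, iterating Gagliardo--Nirenberg--Sobolev shows that if $kp < n$, then
+\[
+ W^{k, p}(U) \subseteq L^q(U),\quad \frac{1}{q} = \frac{1}{p} - \frac{k}{n},
+\]
+since each application trades one derivative for an improvement of $\frac{1}{n}$ in the reciprocal integrability exponent: $\frac{1}{p^*} = \frac{1}{p} - \frac{1}{n}$.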
+
+\section{Elliptic boundary value problems}
+\subsection{Existence of weak solutions}
+In this chapter, we are going to study second-order elliptic boundary value problems. The canonical example to keep in mind is the following:
+\begin{eg}
+ Suppose $U \subseteq \R^n$ is a bounded open set with smooth boundary. Suppose $\partial U$ is a perfect conductor and $\rho: U \to \R$ is the charge density inside $U$. The electrostatic field $\phi$ satisfies
+ \begin{align*}
+ \Delta \phi &= \rho\text{ on }U\\
+ \phi |_{\partial U} &= 0.
+ \end{align*}
+ This is an example of an elliptic boundary value problem. Note that we cannot tackle this with the Cauchy--Kovalevskaya theorem, since we don't even have enough boundary conditions, and also because we want an everywhere-defined solution.
+\end{eg}
+
+In general, let $U \subseteq \R^n$ be open and bounded with $C^1$ boundary, and for $u \in C^2(\bar{U})$, we define
+\[
+ Lu = - \sum_{i, j = 1}^n (a^{ij}(x) u_{x_j})_{x_i} + \sum_{i = 1}^n b^i(x) u_{x_i} + c(x) u,
+\]
+where $a^{ij}$, $b^i$ and $c$ are given functions defined on $U$. Typically, we will assume they are at least $L^\infty$, but sometimes we will require more.
+
+If $a^{ij} \in C^1(U)$, then we can rewrite this as
+\[
+ Lu = - \sum_{i, j = 1}^n a^{ij}(x) u_{x_i x_j} + \sum_{i = 1}^n \tilde{b}^i(x) u_{x_i} + c(x) u
+\]
+for some $\tilde{b}^i$, using the product rule.
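+Explicitly, expanding the divergence form with the product rule gives
+\[
+ Lu = - \sum_{i, j = 1}^n a^{ij}(x) u_{x_i x_j} + \sum_{i = 1}^n \left(b^i(x) - \sum_{j = 1}^n a^{ji}_{x_j}(x)\right) u_{x_i} + c(x) u,
+\]
+so that $\tilde{b}^i = b^i - \sum_{j = 1}^n a^{ji}_{x_j}$.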
+
+We will mostly use the first form, called the \term{divergence form}, which is suitable for the \emph{energy method}, while the second (non-divergence form) is suited to the maximum principle. Essentially, what makes the divergence form convenient for us is that it's easy to integrate by parts.
+
+Of course, given the title of the chapter, we assume that $L$ is elliptic, i.e.
+\[
+ \sum_{i, j} a^{ij}(x) \xi_i \xi_j \geq 0
+\]
+for all $x \in U$ and $\xi \in \R^n$.
+
+It turns out this is not quite strong enough, because this condition allows the $a^{ij}$'s to be degenerate, or vanish at the boundary.
+
+\begin{defi}[Uniform ellipticity]\index{uniform ellipticity}
+ An operator
+ \[
+ Lu = - \sum_{i, j = 1}^n (a^{ij}(x) u_{x_j})_{x_i} + \sum_{i = 1}^n b^i(x) u_{x_i} + c(x) u
+ \]
+ is \emph{uniformly elliptic} if
+ \[
+ \sum_{i, j = 1}^n a^{ij}(x) \xi_i \xi_j \geq \theta |\xi|^2
+ \]
+ for some $\theta > 0$ and all $x \in U, \xi \in\R^n$.
+\end{defi}
+
+We shall consider the \term{boundary value problem}
+\begin{align*}
+ Lu &= f \text{ on }U\\
+ u &= 0 \text{ on }\partial U.
+\end{align*}
+This form of the equation is not very amenable to study by functional analytic methods. Similar to what we did in the proof of Picard--Lindel\"of, we want to write this in a weak formulation.
+
+Let's suppose $u \in C^2(\bar{U})$ is a solution, and suppose $v \in C^2(\bar{U})$ also satisfies $v|_{\partial U} = 0$. Multiply the equation $Lu = f$ by $v$ and integrate by parts. Then we get
+\[
+ \int_U vf \;\d x = \int_U \left(\sum_{ij} v_{x_i} a^{ij} u_{x_j} + \sum_i b^i u_{x_i} v + c uv\right)\;\d x \equiv B[u, v].\tag{$2$}
+\]
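+Here the boundary term from the integration by parts vanishes because $v|_{\partial U} = 0$:
+\[
+ -\int_U v \sum_{i, j} (a^{ij} u_{x_j})_{x_i}\;\d x = \int_U \sum_{i, j} v_{x_i} a^{ij} u_{x_j}\;\d x - \int_{\partial U} v \sum_{i, j} a^{ij} u_{x_j} \nu_i \;\d S = \int_U \sum_{i, j} v_{x_i} a^{ij} u_{x_j}\;\d x,
+\]
+where $\nu$ is the outward normal to $\partial U$.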
+Conversely, suppose $u \in C^2(\bar{U})$ and $u|_{\partial U} = 0$. If $\int_U vf\;\d x = B[u, v]$ for all $v \in C^2(\bar{U})$ such that $v|_{\partial U} = 0$, then we claim $u$ in fact solves the original equation.
+
+Indeed, undoing the integration by parts, we conclude that
+\[
+ \int v Lu \;\d x = \int v f\;\d x
+\]
+for all $v \in C^2(\bar{U})$ with $v|_{\partial U} = 0$. But if this is true for all $v$, then it must be that $Lu = f$.
+
+Thus, the PDE problem we started with is equivalent to finding $u$ that solves $B[u, v] = \int_U v f\;\d x$ for all suitable $v$, provided $u$ is regular enough.
+
+But the point is that $(2)$ makes sense for $u, v \in H_0^1(U)$. So our strategy is to first show that we can find $u \in H_0^1(U)$ that solves $(2)$, and then hope that under reasonable assumptions, we can show that any such solution must in fact be $C^2(\bar{U})$.
+
+\begin{defi}[Weak solution]\index{weak solution!elliptic PDE}
+ We say $u \in H_0^1(U)$ is a \emph{weak solution} of
+ \begin{align*}
+ Lu &= f \text{ on }U\\
+ u &= 0\text{ on }\partial U
+ \end{align*}
+ for $f \in L^2(U)$ if
+ \[
+ B[u, v] = (f, v)_{L^2(U)}
+ \]
+ for all $v \in H_0^1(U)$.
+\end{defi}
+
+We'll exploit the Hilbert space structure of $H_0^1(U)$ to find weak solutions.
+
+\begin{thm}[Lax--Milgram theorem]\index{Lax--Milgram theorem}
+ Let $H$ be a real Hilbert space with inner product $(\ph, \ph)$. Suppose $B: H \times H \to \R$ is a bilinear mapping such that there exist constants $\alpha, \beta > 0$ so that
+ \begin{itemize}
+ \item $|B[u, v]| \leq \alpha \|u\| \|v\|$ for all $u, v \in H$ \hfill (boundedness)
+ \item $\beta\|u\|^2 \leq B[u, u]$ for all $u \in H$ \hfill (coercivity)
+ \end{itemize}
+ If $f: H \to \R$ is a bounded linear map, then there exists a unique $u \in H$ such that
+ \[
+ B[u, v] = \bra f, v\ket
+ \]
+ for all $v \in H$.
+\end{thm}
+Note that if $B$ is just the inner product, then this is the Riesz representation theorem.
+
+\begin{proof}
+ By the Riesz representation theorem, there exists some $w \in H$ such that
+ \[
+ \bra f, v\ket = (w, v).
+ \]
+ For each fixed $u \in H$, the map
+ \[
+ v \mapsto B[u, v]
+ \]
+ is a bounded linear functional on $H$. So by the Riesz representation theorem, we can find some $Au$ such that
+ \[
+ B[u, v] = (Au, v).
+ \]
+ It then suffices to show that $A$ is invertible, for then we can take $u = A^{-1} w$.
+
+ \begin{itemize}
+ \item Since $B$ is bilinear, it is immediate that $A: H \to H$ is linear.
+ \item $A$ is bounded, since we have
+ \[
+ \|Au\|^2 = (Au, Au) = B[u, Au]\leq \alpha \|u\| \|Au\|.
+ \]
+ \item $A$ is injective and has closed image. Indeed, by coercivity, we know
+ \[
+ \beta \|u\|^2 \leq B[u, u] = (Au, u) \leq \|A u\| \|u\|.
+ \]
+ Dividing by $\|u\|$, we see that $A$ is bounded below, hence is injective and has closed image (since $H$ is complete).
+
+ (Indeed, injectivity is clear, and if $A u_m \to v$ for some $v$, then $\|u_m - u_n\| \leq \frac{1}{\beta} \|A u_m - A u_n\| \to 0$ as $m, n \to \infty$. So $(u_m)$ is Cauchy, and hence has a limit $u$. Then by continuity, $A u = v$, and in particular, $v \in \im A$.)
+
+ \item Since $\im A$ is closed, we know
+ \[
+ H = \im A \oplus \im A ^\perp.
+ \]
+ Now let $w \in \im A^\perp$. Then we can estimate
+ \[
+ \beta\|w\|^2 \leq B[w, w] = (Aw, w) = 0.
+ \]
+ So $w = 0$. Thus, in fact $\im A^\perp = \{0\}$, and so $A$ is surjective.\qedhere
+ \end{itemize}
+\end{proof}
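+Note that uniqueness also follows directly from coercivity: if $u_1, u_2 \in H$ both satisfy the conclusion, then $B[u_1 - u_2, v] = 0$ for all $v \in H$, and taking $v = u_1 - u_2$ gives
+\[
+ \beta \|u_1 - u_2\|^2 \leq B[u_1 - u_2, u_1 - u_2] = 0.
+\]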
+
+We would like to apply this to our elliptic PDE. To do so, we need to prove that our $B$ satisfies boundedness and coercivity. Unfortunately, this is not always true.
+
+\begin{thm}[Energy estimates for $B$]\index{energy estimate}
+ Suppose $a^{ij} = a^{ji}, b^i, c \in L^\infty(U)$, and there exists $\theta > 0$ such that
+ \[
+ \sum_{i, j = 1}^n a^{ij}(x) \xi_i \xi_j \geq \theta |\xi|^2
+ \]
+ for almost every $x \in U$ and $\xi \in \R^n$. Then if $B$ is defined by
+ \[
+ B[u, v] = \int_U \left(\sum_{ij} v_{x_i} a^{ij} u_{x_j} + \sum_i b^i u_{x_i}v + c uv\right)\;\d x,
+ \]
+ then there exists $\alpha, \beta > 0$ and $\gamma \geq 0$ such that
+ \begin{enumerate}
+ \item $|B[u, v]| \leq \alpha \|u\|_{H^1(U)} \|v\|_{H^1(U)}$ for all $u, v \in H_0^1(U)$
+ \item $\beta\|u\|^2_{H^1(U)} \leq B[u, u] + \gamma \|u\|_{L^2(U)}^2$.
+ \end{enumerate}
+ Moreover, if $b^i \equiv 0$ and $c \geq 0$, then we can take $\gamma = 0$.
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We estimate
+ \begin{align*}
+ |B[u, v]| &\leq \sum_{i, j} \|a^{ij}\|_{L^\infty(U)} \int_U |\D u| |\D v|\;\d x\\
+ &\hphantom{{ {}={}}}+ \sum_i \|b^i\|_{L^\infty(U)} \int_U |\D u| |v| \;\d x\\
+ &\hphantom{{ {}={}}}+ \|c\|_{L^\infty(U)} \int_U |u| |v| \;\d x\\
+ &\leq c_1 \|\D u\|_{L^2(U)} \|\D v\|_{L^2(U)} + c_2 \| \D u\|_{L^2(U)} \|v\|_{L^2(U)} \\
+ &\phantomeq + c_3 \|u\|_{L^2(U)} \|v\|_{L^2(U)}\\
+ &\leq \alpha \|u\|_{H^1(U)} \|v\|_{H^1(U)}
+ \end{align*}
+ for some $\alpha$.
+ \item We start from uniform ellipticity. This implies
+ \begin{align*}
+ \theta \int_U |\D u|^2 \;\d x &\leq \int_U \sum_{i, j = 1}^n a^{ij}(x) u_{x_i} u_{x_j} \;\d x\\
+ &= B[u ,u] - \int_U \sum_{i = 1}^n b^i u_{x_i} u + cu^2\;\d x\\
+ &\leq B[u, u] + \sum_{i = 1}^n \|b^i\|_{L^\infty(U)} \int |\D u| |u| \;\d x \\
+ &\phantomeq + \|c\|_{L^\infty(U)} \int_U |u|^2\;\d x.
+ \end{align*}
+ Now by Young's inequality, we have
+ \[
+ \int_U |\D u| |u| \;\d x \leq \varepsilon \int_U |\D u|^2 \;\d x + \frac{1}{4\varepsilon}\int_U |u|^2 \;\d x
+ \]
+ for any $\varepsilon > 0$. We choose $\varepsilon$ small enough so that
+ \[
+ \varepsilon\sum_{i = 1}^n \|b^i\|_{L^\infty(U)} \leq \frac{\theta}{2}.
+ \]
+ So we have
+ \[
+ \theta \int_U |\D u|^2 \;\d x \leq B[u, u] + \frac{\theta}{2} \int_U |\D u|^2 \;\d x + \gamma \int_U |u|^2\;\d x
+ \]
+ for some $\gamma$. This implies
+ \[
+ \frac{\theta}{2} \| \D u\|_{L^2(U)}^2 \leq B[u, u] + \gamma \|u\|_{L^2(U)}^2.
+ \]
+ We can add $\frac{\theta}{2}\|u\|_{L^2(U)}^2$ on both sides to get the desired bound on $\|u\|_{H^1(U)}$.
+ \end{enumerate}
+ To get the ``moreover'' statement, we see that under these conditions, we have
+ \[
+ \theta \int |\D u|^2 \;\d x \leq B[u, u].
+ \]
+ Then we apply \term{Poincar\'e's inequality}, which tells us there is some $C > 0$ such that for all $u \in H_0^1 (U)$, we have
+ \[
+ \|u\|_{L^2(U)} \leq C \|\D u\|_{L^2(U)}.\qedhere
+ \]
+\end{proof}
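+To spell out the last step, Poincar\'e gives $\|u\|_{L^2(U)}^2 \leq C^2 \|\D u\|_{L^2(U)}^2$, so
+\[
+ \|u\|_{H^1(U)}^2 = \|u\|_{L^2(U)}^2 + \|\D u\|_{L^2(U)}^2 \leq (1 + C^2) \|\D u\|_{L^2(U)}^2 \leq \frac{1 + C^2}{\theta} B[u, u],
+\]
+and we may take $\beta = \frac{\theta}{1 + C^2}$ and $\gamma = 0$.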
+
+The estimate (ii) is sometimes called \term{G\r{a}rding's inequality}.
+
+\begin{thm}
+ Let $U, L$ be as above. There is a $\gamma \geq 0$ such that for any $\mu \geq \gamma$ and any $f \in L^2(U)$, there exists a unique weak solution to
+ \begin{align*}
+ Lu + \mu u &= f \text{ on $U$}\\
+ u &= 0 \text{ on $\partial U$}.
+ \end{align*}
+ Moreover, we have
+ \[
+ \|u\|_{H^1(U)} \leq C \|f\|_{L^2(U)}
+ \]
+ for some $C = C(L, U) \geq 0$.
+
+ Again, if $b^i \equiv 0$ and $c \geq 0$, then we may take $\gamma = 0$.
+\end{thm}
+
+\begin{proof}
+ Take $\gamma$ from the previous theorem when applied to $L$. Then for $\mu \geq \gamma$, we set
+ \[
+ B_\mu[u, v] = B[u, v] + \mu (u, v)_{L^2(U)},
+ \]
+ which is the bilinear form corresponding to the operator
+ \[
+ L_\mu = L + \mu.
+ \]
+ Then by the previous theorem, $B_\mu$ satisfies boundedness and coercivity. So if we fix any $f \in L^2$, and think of it as an element of $H_0^1(U)^*$ by
+ \[
+ \bra f, v \ket = (f, v)_{L^2(U)} = \int_U fv\;\d x,
+ \]
+ then we can apply Lax--Milgram to find a unique $u \in H_0^1(U)$ satisfying $B_\mu[u, v] = \bra f, v\ket = (f, v)_{L^2(U)}$ for all $v \in H_0^1(U)$. This is precisely the condition for $u$ to be a weak solution.
+
+ Finally, the G\r{a}rding inequality tells us
+ \[
+ \beta \|u\|_{H^1(U)}^2 \leq B_\mu[u, u] = (f, u)_{L^2(U)} \leq \|f\|_{L^2(U)} \|u\|_{L^2(U)}.
+ \]
+ Since $\|u\|_{L^2(U)} \leq \|u\|_{H^1(U)}$, we deduce that
+ \[
+ \beta \|u\|_{H^1(U)} \leq \|f\|_{L^2(U)}.\qedhere
+ \]
+\end{proof}
+In some way, this is a magical result. We managed to solve a PDE without having to actually work with a PDE. There are a few things we might object to. First of all, we only obtained a weak solution, and not a genuine solution. We will show that under some reasonable assumptions on $a, b, c$, if $f$ is better behaved, then $u$ is also better behaved, and in general, if $f \in H^k$, then $u \in H^{k + 2}$. This is known as \emph{elliptic regularity}. Together with the Sobolev inequalities, this tells us $u$ is genuinely a classical solution.
+
+Another problem is the presence of the $\mu$. We noted that if $L$ is, say, Laplace's equation, then we can take $\gamma = 0$, and so we don't have this problem. But in general, this theorem requires it, and this is a bit unsatisfactory. We would like to think a bit more about it.
+
+\subsection{The Fredholm alternative}
+To understand the second problem, we shall seek to prove the following theorem:
+\begin{thm}[Fredholm alternative]\index{Fredholm alternative}
+ Consider the problem
+ \[
+ Lu = f,\quad u|_{\partial U} = 0.\tag{$*$}
+ \]
+ For $L$ a uniformly elliptic operator on an open bounded set $U$ with $C^1$ boundary, either
+ \begin{enumerate}
+ \item For each $f \in L^2(U)$, there is a unique weak solution $u \in H_0^1(U)$ to $(*)$; or
+ \item There exists a non-zero weak solution $u \in H_0^1(U)$ to the \term{homogeneous problem}, i.e.\ $(*)$ with $f = 0$.
+ \end{enumerate}
+\end{thm}
+
+This is similar to what we know about solving matrix equations $Ax = b$ --- either there is a solution for every $b$, or the homogeneous problem $Ax = 0$ has a non-zero solution.
+
+Similar to the previous theorem, this follows from some general functional analytic result. Recall the definition of a compact operator:
+\begin{defi}[Compact operator]\index{compact operator}
+ A bounded operator $K: H \to H'$ is \emph{compact} if every bounded sequence $(u_m)_{m = 1}^\infty$ has a subsequence $u_{m_j}$ such that $(K u_{m_j})_{j = 1}^\infty$ converges strongly in $H'$.
+\end{defi}
+
+Recall (or prove as an exercise) the following theorem regarding compact operators.
+\begin{thm}[Fredholm alternative]\index{Fredholm alternative}
+ Let $H$ be a Hilbert space and $K: H \to H$ be a compact operator. Then
+ \begin{enumerate}
+ \item $\ker (I - K)$ is finite-dimensional.
+ \item $\im (I - K)$ is closed.
+ \item $\im (I - K) = \ker (I - K^\dagger)^\perp$.
+ \item $\ker (I - K) = \{0\}$ iff $\im (I - K) = H$.
+ \item $\dim \ker (I - K) = \dim \ker (I - K^\dagger) = \dim \coker (I - K)$.
+ \end{enumerate}
+\end{thm}
+
+How do we apply this to our situation? Our previous theorem told us that $L + \gamma$ is invertible for large $\gamma$, and we claim that $(L + \gamma)^{-1}$ is compact. We can then deduce the previous result by applying (iv) of the Fredholm alternative with $K$ a (scalar multiple of) $(L + \gamma)^{-1}$ (plus some bookkeeping).
+
+So let us show that $(L + \gamma)^{-1}$ is compact. Note that this map sends $f \in L^2(U)$ to $u \in H_0^1(U)$. To make it an endomorphism, we have to compose this with the inclusion $H_0^1(U) \to L^2(U)$. The proof that $(L + \gamma)^{-1}$ is compact will not involve $(L + \gamma)^{-1}$ in any way --- we shall show that the inclusion $H_0^1(U) \to L^2(U)$ is compact!
+
+We shall prove this in two steps. First, we need the notion of weak convergence.
+
+\begin{defi}[Weak convergence]\index{weak convergence}
+ Suppose $(u_n)_{n = 1}^\infty$ is a sequence in a Hilbert space $H$. We say $u_n$ \emph{converges weakly} to $u \in H$ if
+ \[
+ (u_n, w) \to (u, w)
+ \]
+ for all $w \in H$. We write $u_n \rightharpoonup u$.
+\end{defi}
+
+Of course, we have
+\begin{lemma}
+ Weak limits are unique.\fakeqed
+\end{lemma}
+
+\begin{lemma}
+ Strong convergence implies weak convergence.\fakeqed
+\end{lemma}
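+To see that the converse fails, consider an orthonormal sequence $(e_n)_{n = 1}^\infty$ in an infinite-dimensional Hilbert space $H$. Bessel's inequality
+\[
+ \sum_{n = 1}^\infty |(e_n, w)|^2 \leq \|w\|^2
+\]
+forces $(e_n, w) \to 0$ for every $w \in H$, so $e_n \rightharpoonup 0$. But $\|e_n\| = 1$ for all $n$, so $e_n \not\to 0$ strongly.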
+
+We shall show that given any bounded sequence in $H_0^1(U)$, we can find a subsequence that is weakly convergent. We then show that every weakly convergent sequence in $H_0^1(U)$ is strongly convergent in $L^2(U)$.
+
+In fact, the first result is completely general:
+\begin{thm}[Weak compactness]\index{weak compactness theorem}
+ Let $H$ be a separable Hilbert space, and suppose $(u_m)_{m = 1}^\infty$ is a bounded sequence in $H$ with $\|u_m\| \leq K$ for all $m$. Then $u_m$ admits a subsequence $(u_{m_j})_{j = 1}^\infty$ such that $u_{m_j} \rightharpoonup u$ for some $u \in H$ with $\|u\| \leq K$.
+\end{thm}
+One can prove this theorem without assuming $H$ is separable, but it is slightly messier.
+
+\begin{proof}
+ Let $(e_i)_{i = 1}^\infty$ be an orthonormal basis for $H$. Consider $(e_1, u_m)$. By Cauchy--Schwarz, we have
+ \[
+ |(e_1, u_m)| \leq \|e_1\| \|u_m\| \leq K.
+ \]
+ So by Bolzano--Weierstrass, there exists a subsequence $(u_{m_j})$ such that $(e_1, u_{m_j})$ converges.
+
+ Doing this iteratively, we can find a subsequence $(v_\ell)$ such that for each $i$, there is some $c_i$ such that $(e_i, v_\ell) \to c_i$ as $\ell \to \infty$.
+
+ We would expect the weak limit to be $\sum c_i e_i$. To prove this, we need to first show it converges. We have
+ \begin{align*}
+ \sum_{j = 1}^p |c_j|^2 &= \lim_{\ell \to \infty} \sum_{j = 1}^p |(e_j, v_\ell)|^2\\
+ &\leq \sup_\ell \sum_{j = 1}^p |(e_j, v_\ell)|^2\\
+ &\leq \sup_\ell \|v_\ell\|^2\\
+ &\leq K^2,
+ \end{align*}
+ using Bessel's inequality. So
+ \[
+ u = \sum_{j = 1}^\infty c_j e_j
+ \]
+ converges in $H$, and $\|u\| \leq K$. We already have
+ \[
+ (e_j, v_{\ell}) \to (e_j, u)
+ \]
+ for all $j$. Since $\|v_\ell - u\|$ is bounded by $2K$, it follows that the set of all $w$ such that
+ \[
+ (w, v_\ell) \to (w, u)\tag{$\dagger$}
+ \]
+ is closed under finite linear combinations and taking limits, hence is all of $H$. To see that it is closed under limits, suppose $w_k \to w$, and $w_k$ satisfy $(\dagger)$. Then
+ \[
+ |(w, v_\ell) - (w, u)| \leq |(w - w_k, v_\ell - u)| + |(w_k, v_\ell - u)| \leq 2K \|w - w_k\| + |(w_k, v_\ell - u)|
+ \]
+ So we can first find $k$ large enough such that the first term is small, then pick $\ell$ such that the second is small.
+\end{proof}
+
+We next want to show that if $u_m \rightharpoonup u$ in $H^1(U)$, then $u_m \to u$ in $L^2(U)$. We may as well assume that $U$ is some large cube of side length $L$ by extension. Notice that since $U$ is bounded, the constant function $1$ is in $H^1(U)$. So $u_m \rightharpoonup u$ in particular implies $\int_U (u_m - u)\;\d x \to 0$.
+
+Recall that the Poincar\'e inequality tells us if $u \in H_0^1(U)$, then we can bound $\|u\|_{L^2(U)}$ by some multiple of $\|\D u\|_{L^2(U)}$. If we try to prove this without the assumption that $u$ vanishes on the boundary, then we find that we need a correction term. The resulting lemma is as follows:
+\begin{lemma}[Poincar\'e revisited]
+ Suppose $u \in H^1(\R^n)$. Let $Q = [\xi_1, \xi_1 + L] \times \cdots \times [\xi_n , \xi_n + L]$ be a cube of side length $L$. Then we have
+ \[
+ \|u\|_{L^2(Q)}^2 \leq \frac{1}{|Q|} \left(\int_Q u(x) \;\d x\right)^2 + \frac{nL^2}{2} \|\D u\|_{L^2(Q)}^2.
+ \]
+\end{lemma}
+We can improve this to obtain better bounds by subdividing $Q$ into smaller cubes, and then applying this to each of the cubes individually. By subdividing enough, this leads to a proof that $u_m \rightharpoonup u$ in $H^1$ implies $u_m \to u$ in $L^2$.
+
+\begin{proof}
+ By approximation, we can assume $u \in C^\infty(\bar{Q})$. For $x,y \in Q$, we write
+ \begin{align*}
+ u(x) - u(y) &= \int_{y_1}^{x_1} \frac{\d}{\d t} u(t, x_2, \ldots, x_n)\;\d t\\
+ &\hphantom{{}={}}\quad + \int_{y_2}^{x_2} \frac{\d}{\d t} u(y_1, t, x_3, \ldots, x_n)\;\d t\\
+ &\hphantom{{}={}}\quad + \cdots\\
+ &\hphantom{{}={}}\quad + \int_{y_n}^{x_n} \frac{\d}{\d t} u(y_1, \ldots, y_{n - 1}, t)\;\d t.
+ \end{align*}
+ Squaring, and using $2ab \leq a^2 + b^2$, we have
+ \begin{align*}
+ u(x)^2 + u(y)^2 - 2u(x) u(y) &\leq n\left(\int_{y_1}^{x_1} \frac{\d}{\d t} u(t, x_2, \ldots, x_n)\;\d t\right)^2\\
+ &\hphantom{{}={}} \quad+ \cdots\\
+ &\hphantom{{}={}} \quad+ n\left(\int_{y_n}^{x_n} \frac{\d}{\d t} u(y_1, \ldots, y_{n - 1}, t)\;\d t\right)^2.
+ \end{align*}
+ Now integrate over $x$ and $y$. On the left, we get
+ \[
+ \iint_{Q \times Q}\d x\;\d y\; (u(x)^2 + u(y)^2 - 2u(x) u(y)) = 2 |Q| \|u\|_{L^2(Q)}^2 - 2 \left(\int_Q u(x) \;\d x\right)^2.
+ \]
+ On the right we have
+ \begin{align*}
+ I_1 &= \left(\int_{y_1}^{x_1} \frac{\d}{\d t} u(t, x_2, \ldots, x_n) \;\d t\right)^2\\
+ &\leq \int_{y_1}^{x_1} \;\d t \int_{y_1}^{x_1}\left(\frac{\d}{\d t} u(t, x_2, \ldots, x_n)\right)^2 \;\d t\tag{Cauchy--Schwarz}\\
+ &\leq L \int_{\xi_1}^{\xi_1 + L} \left(\frac{\d}{\d t} u(t, x_2, \ldots, x_n)\right)^2 \;\d t.
+ \end{align*}
+ Integrating over all $x, y \in Q$, we get
+ \[
+ \iint_{Q \times Q}\d x\;\d y\; I_1 \leq L^2 |Q| \|\D_1 u\|_{L^2(Q)}^2.
+ \]
+ Similarly estimating the terms on the right-hand side, we find that
+ \[
+ 2|Q| \|u\|_{L^2(Q)}^2 - 2 \left(\int_Q u(x)\;\d x\right)^2 \leq n L^2 |Q| \sum_{i = 1}^n \|\D _i u\|^2_{L^2(Q)} = n |Q| L^2 \|\D u\|^2_{L^2(Q)}.\qedhere
+ \]
+\end{proof}
+It now follows that
+\begin{thm}[Rellich--Kondrachov]\index{Rellich--Kondrachov theorem}
+ Let $U \subseteq \R^n$ be open, bounded with $C^1$ boundary. Then if $(u_m)_{m = 1}^\infty$ is a sequence in $H^1(U)$ with $u_m \rightharpoonup u$, then $u_m \to u$ in $L^2$.
+
+ In particular, by weak compactness, any bounded sequence in $H^1(U)$ has a subsequence that converges in $L^2(U)$.
+\end{thm}
+
+Note that to obtain the ``in particular'' part, we need to know that $H^1(U)$ is separable. This is an exercise on the example sheet. Alternatively, we can appeal to a stronger version of weak compactness that does not assume separability.
+\begin{proof}
+ By the extension theorem, we may extend each $u_m$ and $u$ to elements of $H^1(\R^n)$ supported in a large cube $Q$ with $U \Subset Q$; it then suffices to show that $u_m \to u$ in $L^2(Q)$.
+% By the extension theorem, we may assume $u_m \in H^1_0(Q)$ for some large cube $Q$ with $U \Subset Q$.
+%
+% By weak compactness, there exists $u \in H_0^1(Q)$ and a subsequence $u_{m_j}$ such that $u_{m_j} \rightharpoonup u$.
+%
+% Set $\omega_j = u_{m_j}$. We need to show that
+% \[
+% \|\omega_j - u\|_{L^2(Q)} \to 0.
+% \]
+
+ We subdivide $Q$ into $N$ cubes of side length $\delta$, such that the cubes only intersect at their faces. Call these $\{Q_a\}_{a = 1}^N$.
+%
+% Fix $\delta > 0$. We can then cover $Q$ exactly by some number $k(\delta)$ cubes of side length $L < \delta$, such that the cubes intersect only at their faces. Call these $\{Q_a\}_{a = 1}^{k(\delta)}$.
+
+ We apply Poincar\'e separately to each of these to obtain
+ \begin{align*}
+ \|u_m - u\|_{L^2(Q)}^2 &= \sum_{a = 1}^{N} \|u_m - u\|_{L^2(Q_a)}^2\\
+ &\leq \sum_{a = 1}^{N} \left[\frac{1}{|Q_a|} \left(\int_{Q_a} (u_m - u)\;\d x\right)^2 + \frac{n \delta^2}{2} \|\D u_m - \D u\|^2_{L^2(Q_a)}\right]\\
+ &= \sum_{a = 1}^{N} \frac{1}{|Q_a|} \left(\int_{Q_a} (u_m - u)\;\d x\right)^2 + \frac{n \delta^2}{2} \|\D u_m - \D u\|^2_{L^2(Q)}.
+ \end{align*}
+ Now since weakly convergent sequences are bounded, $\|\D u_m - \D u\|_{L^2(Q)}^2$ is bounded uniformly in $m$. So for $\delta$ small enough, the second term is $< \frac{\varepsilon}{2}$ for all $m$. Then since $u_m \rightharpoonup u$, we in particular have
+ \[
+ \int_{Q_a} (u_m - u)\;\d x \to 0\text{ as }m \to \infty
+ \]
+ for all $a$, since this is just the inner product with the constant function $1$. So for $m$ large enough, the first term is also $< \frac{\varepsilon}{2}$.
+\end{proof}
+
+The same result holds with $H^1(U)$ replaced by $H^1_0(U)$. The proof is in fact simpler, and we wouldn't need the assumption that the boundary is $C^1$.
+
+\begin{cor}
+ Suppose $K: L^2(U) \to H^1(U)$ is a bounded linear operator. Then the composition
+ \[
+ \begin{tikzcd}
+ L^2(U) \ar[r, "K"] & H^1(U) \ar[r, hook] & L^2(U)
+ \end{tikzcd}
+ \]
+ is compact.
+\end{cor}
+The slogan is that we get compactness whenever we improve regularity, which is something that happens in much more generality.
+
+\begin{proof}
+ Indeed, if $u_m \in L^2(U)$ is a bounded sequence, then $K u_m$ is bounded in $H^1(U)$. So by weak compactness and Rellich--Kondrachov, there exists a subsequence such that $K u_{m_j}$ converges in $L^2(U)$.
+\end{proof}
+
+We are now ready to prove the Fredholm alternative for elliptic boundary value problems. Recall that in our description of the Fredholm alternative, we had the direct characterizations $\im (I - K) = \ker (I - K^\dagger)^\perp$. We can make the analogous statement here. To do so, we need to talk about the adjoint of $L$. Since $L$ is not an operator defined on $L^2(U)$, trying to write down what it means to be an adjoint is slightly messy. Instead, we shall be content with talking about ``formal adjoints''.
+
+It's been a while since we've met a PDE, so let's recall the setting we had. We have a uniformly elliptic operator
+\[
+ Lu = - \sum_{i, j = 1}^n (a^{ij}(x) u_{x_j})_{x_i} + \sum_{i = 1}^n b^i(x) u_{x_i} + c(x) u
+\]
+on an open bounded set $U$ with $C^1$ boundary. The associated bilinear form is
+\[
+ B[u, v] = \int_U \left(\sum_{i, j = 1}^n a^{ij}(x) u_{x_i} v_{x_j} + \sum_{i = 1}^n b^i(x) u_{x_i} v + c(x) uv\right)\;\d x.
+\]
+We are interested in solving the boundary value problem
+\[
+ Lu = f,\quad u|_{\partial U} = 0
+\]
+with $f \in L^2(U)$.
+
+The \term{formal adjoint} of $L$ is defined by the relation
+\[
+ (L\phi, \psi)_{L^2(U)} = (\phi, L^\dagger \psi)_{L^2(U)}
+\]
+for all $\phi, \psi \in C_c^\infty(U)$. By integration by parts, we know $L^\dagger$ should be given by
+\[
+ L^\dagger v = - \sum_{i, j = 1}^n (a^{ij} v_{x_j})_{x_i} - \sum_{i = 1}^n b^i(x) v_{x_i} + \left(c - \sum_{i = 1}^n b^i_{x_i}\right)v.
+\]
+Note that here we have to assume that $b^i \in C^1(\bar{U})$. However, what really interests us is the adjoint bilinear form, which is simply given by
+\[
+ B^\dagger[v, u] = B[u, v].
+\]
+We are actually just interested in $B^\dagger$, and not $L^\dagger$, and we can sensibly talk about $B^\dagger$ even if $b^i$ is not differentiable.
+
+As usual, we say $v \in H_0^1(U)$ is a weak solution of the adjoint problem $L^\dagger v = f, v|_{\partial U} = 0$ if
+\[
+ B^\dagger[v, u] = (f, u)_{L^2(U)}
+\]
+for all $u \in H_0^1(U)$.
+
+Given this set up, we can now state and prove the Fredholm alternative.
+\begin{thm}[Fredholm alternative for elliptic BVP]\index{Fredholm alternative}
+ Let $L$ be a uniformly elliptic operator on an open bounded set $U$ with $C^1$ boundary. Consider the problem
+ \[
+ Lu = f,\quad u|_{\partial U} = 0.\tag{$*$}
+ \]
+ Then exactly one of the following is true:
+ \begin{enumerate}
+ \item For each $f \in L^2(U)$, there is a unique weak solution $u \in H_0^1(U)$ to $(*)$
+ \item There exists a non-zero weak solution $u \in H_0^1(U)$ to the \term{homogeneous problem}, i.e.\ $(*)$ with $f = 0$.
+
+ If this holds, then the dimension of $N = \ker L \subseteq H_0^1(U)$ is equal to the dimension of $N^* = \ker L^\dagger \subseteq H_0^1(U)$.
+
+ Finally, $(*)$ has a solution if and only if $(f, v)_{L^2(U)} = 0$ for all $v \in N^*$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ We know that there exists $\gamma > 0$ such that for any $f \in L^2(U)$, there is a unique weak solution $u \in H_0^1(U)$ to
+ \[
+ L_\gamma u = Lu + \gamma u = f,\quad u|_{\partial U} = 0.
+ \]
+ Moreover, we have the bound $\|u\|_{H^1(U)} \leq C \|f\|_{L^2(U)}$ (which gives uniqueness). Thus, we can set $L_\gamma^{-1} f$ to be this $u$, and then $L_\gamma^{-1}: L^2(U) \to H_0^1(U)$ is a bounded linear map. Composing with the inclusion $H_0^1(U) \hookrightarrow L^2(U)$, we get a compact endomorphism of $L^2(U)$.
+
+ Now suppose $u \in H_0^1$ is a weak solution to $(*)$. Then
+ \[
+ B[u, v] = (f, v)_{L^2(U)} \text{ for all }v \in H_0^1(U)
+ \]
+ is true if and only if
+ \[
+ B_\gamma[u, v] \equiv B[u, v] + \gamma(u, v) = (f + \gamma u, v) \text{ for all }v \in H_0^1(U).
+ \]
+ Hence, $u$ is a weak solution of $(*)$ if and only if
+ \[
+ u = L_\gamma^{-1}(f + \gamma u) = \gamma L_\gamma^{-1} u + L_\gamma^{-1} f.
+ \]
+ In other words, $u$ solves $(*)$ iff
+ \[
+ u - Ku = h,
+ \]
+ for
+ \[
+ K = \gamma L_\gamma^{-1},\quad h = L_\gamma^{-1} f.
+ \]
+ Since we know that $K: L^2(U) \to L^2(U)$ is compact, by the Fredholm alternative for compact operators, either
+ \begin{enumerate}
+ \item $u - Ku = h$ admits a solution $u \in L^2(U)$ for all $h \in L^2(U)$; or
+ \item There exists a non-zero $u \in L^2(U)$ such that $u - Ku = 0$. Moreover, $\im (I - K) = \ker (I - K^\dagger)^\perp$ and $\dim \ker (I - K) = \dim \im (I - K)^\perp$.
+ \end{enumerate}
+ There is a bit of bookkeeping to show that this corresponds to the two alternatives in the theorem.
+
+ \begin{enumerate}
+ \item We need to show that $u \in H_0^1(U)$. But this is trivial, since we have
+ \[
+ u = \gamma L_\gamma^{-1} u + L_{\gamma}^{-1} f,
+ \]
+ and we know that $L_\gamma^{-1}$ maps $L^2(U)$ into $H_0^1(U)$.
+
+ \item As above, any non-zero solution $u \in L^2(U)$ of $u - Ku = 0$ in fact lies in $H_0^1(U)$, and is a weak solution of the homogeneous problem. There are two things to show. First, we have to show that $v - K^\dagger v = 0$ iff $v$ is a weak solution to
+ \[
+ L^\dagger v = 0,\quad v|_{\partial U} = 0.
+ \]
+ Next, we need to show that $h = L_\gamma^{-1} f \in (N^*)^\perp$ iff $f \in (N^*)^\perp$.
+
+ For the first part, we want to show that $v \in \ker (I - K^\dagger)$ iff $B^\dagger [v, u]= B[u, v] = 0$ for all $u \in H_0^1(U)$.
+
+ We are good at evaluating $B[u, v]$ when $u$ is of the form $L_\gamma^{-1} w$, by definition of a weak solution. Fortunately, $\im L_\gamma^{-1}$ contains $C_c^\infty(U)$, since $L_\gamma^{-1} L_\gamma \phi = \phi$ for all $\phi \in C_c^\infty(U)$. In particular, $\im L_\gamma^{-1}$ is dense in $H_0^1(U)$. So it suffices to show that $v \in \ker (I - K^\dagger)$ iff $B[L_\gamma^{-1} w, v] = 0$ for $w \in L^2(U)$. This is immediate from the computation
+ \[
+ B[L_\gamma^{-1} w, v] = B_\gamma[L_\gamma^{-1} w, v] - \gamma(L_\gamma^{-1} w, v) = (w, v) - (Kw, v) = (w, v - K^\dagger v).
+ \]
+ The second is also easy --- if $v \in N^* = \ker (I - K^\dagger)$, then
+ \[
+ (L_\gamma^{-1} f, v) = \frac{1}{\gamma} (Kf, v) = \frac{1}{\gamma} (f, K^\dagger v) = \frac{1}{\gamma} (f, v).\qedhere
+ \]
+% \begin{align*}
+% v - K^\dagger v = 0 &\Longleftrightarrow (v, w)_{L^2(U)} = (K^\dagger v, w)_{L^2(U)}\text{ for all }w \in L^2(u)\\
+% &\Longleftrightarrow (v, w)_{L^2(U)} = (v, Kw)_{L^2(U)}\\
+% &\Longleftrightarrow (v, w)_{L^2(U)} = (v, \gamma L_\gamma^{-1}w)
+% \end{align*}
+% But from the definition of a weak solution to
+% \[
+% L_\gamma \tilde{w} = \tilde{f},\quad \tilde{w}|_{\partial U} = 0,
+% \]
+% we have
+% \[
+% B[\tilde{w}, v] + \gamma(\tilde{w}, v)_{L^2(U)} = (\tilde{f}, v)_{L^2(U)}.
+% \]
+% In other words,
+% \[
+% B[L_\gamma^{-1}w, v] + \gamma (L_\gamma^{-1}w, v)_{L^2(U)} = (w, v)_{L^2(U)}.
+% \]
+% So we know $v - K^\dagger v = 0$ iff
+% \[
+% B[L_\gamma^{-1}w, v] + \gamma (L_\gamma^{-1} w, v)_{L^2(U)} + \gamma (v, L_{\gamma}^{-1} w)_{L^2(U)},
+% \]
+% iff $B^\dagger[v, L_\gamma^{-1}w] = 0$ for all $w \in L^2(U)$.
+%
+% We are almost ready to conclude that $v$ is a weak solution, except that we want $B^\delta[v, u] = 0$ for \emph{all} $u \in H_0^1(U)$. We are fine, since $\{L_\gamma^{-1}w : w \in L^2(U)\}$ is dense in $H_0^1(U)$, as it contains $C_c^\infty(U)$ by considering $\{L_\gamma \phi: \phi \in C_c^\infty(U)\}$. So we are done.
+%
+% Recalling that $\im (I - K) = (\ker (I - K^\dagger))^\perp$, it follows that
+% \[
+% Lu = f,\quad u|_{\partial U} = 0
+% \]
+% has a solution iff $(f, v)_{L^2(U)} = 0$ for all $v$ a weak solution to the adjoint problem.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\subsection{The spectrum of elliptic operators}
+Let's recap what we have obtained so far. Given $L$, we have found some $\gamma$ such that whenever $\mu \geq \gamma$, there is a unique solution to $(L + \mu) u = f$ (plus boundary conditions). In particular, $L + \mu$ has trivial kernel. For $\mu \leq \gamma$, $(L + \mu) u = 0$ may or may not have a non-trivial solution, but we know this satisfies the Fredholm alternative, since $L + \mu$ is still an elliptic operator.
+
+Rewriting $(L + \mu)u = 0$ as $Lu = - \mu u$, we are essentially considering eigenvalues of $L$. Of course, $L$ is not a bounded linear operator, so our usual spectral theory does not apply to $L$. However, as always, we know that $L_\gamma^{-1}$ is compact for large enough $\gamma$, and so the spectral theory of compact operators can tell us something about what the eigenvalues of $L$ look like.
+
+We first recall some elementary definitions. Note that we are explicitly working with real Hilbert spaces and spectra.
+\begin{defi}[Resolvent set]\index{resolvent set}
+ Let $A: H \to H$ be a bounded linear operator. Then the \term{resolvent set} is
+ \[
+ \rho(A) = \{\lambda \in \R: A - \lambda I\text{ is bijective}\}.
+ \]
+\end{defi}
+
+\begin{defi}[Spectrum]\index{spectrum}
+ The \emph{spectrum} of a bounded linear $A: H \to H$ is
+ \[
+ \sigma(A) = \R \setminus \rho(A).
+ \]
+\end{defi}
+
+\begin{defi}[Point spectrum]\index{point spectrum}
 We say $\eta \in \sigma (A)$ belongs to the \emph{point spectrum} $\sigma_p(A)$ of $A$ if
+ \[
+ \ker (A - \eta I) \not= \{0\}.
+ \]
+ If $\eta \in \sigma_p(A)$ and $w$ satisfies $A w = \eta w$, then $w$ is an \term{associated eigenvector}.
+\end{defi}
+
+Our knowledge of the spectrum of $L$ will come from known results about the spectrum of compact operators.
+\begin{thm}[Spectral theorem of compact operators]\index{spectral theorem!compact operators}
+ Let $\dim H = \infty$, and $K: H \to H$ a compact operator. Then
+ \begin{itemize}
+ \item $\sigma(K) = \sigma_p(K) \cup \{0\}$. Note that $0$ may or may not be in $\sigma_p(K)$.
+ \item $\sigma(K) \setminus \{0\}$ is either finite or is a sequence tending to $0$.
+ \item If $\lambda \in \sigma_p(K)$, then $\ker (K - \lambda I)$ is finite-dimensional.
 \item If $K$ is self-adjoint (i.e.\ $K = K^\dagger$) and $H$ is separable, then there exists a countable orthonormal basis of eigenvectors.
+ \end{itemize}
+\end{thm}
+
+From this, it follows easily that
+\begin{thm}[Spectrum of $L$]\leavevmode
+ \begin{enumerate}
 \item There exists a countable set $\Sigma \subseteq \R$ such that
 \[
 Lu = \lambda u,\quad u|_{\partial U} = 0\tag{$*$}
 \]
 has a non-trivial weak solution iff $\lambda \in \Sigma$.
+% \[
+% Lu - \lambda u = f,\quad u|_{\partial U} = 0\tag{$*$}
+% \]
+% has a weak solution for all $f \in L^2(U)$ iff $\lambda \not \in \Sigma$.
+ \item If $\Sigma$ is infinite, then $\Sigma = \{\lambda_k\}_{k = 1}^\infty$, the values of an increasing sequence with $\lambda_k \to \infty$.
+ \item To each $\lambda \in \Sigma$ there is an associated finite-dimensional space
+ \[
 \mathcal{E}(\lambda) = \{u \in H_0^1(U) \mid u\text{ is a weak solution of }(*)\}.
+ \]
+ We say $\lambda \in \Sigma$ is an \term{eigenvalue} and $u \in \mathcal{E}(\lambda)$ is the associated \term{eigenfunction}.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
 Apply the spectral theorem to the compact operator $L_\gamma^{-1}: L^2(U) \to L^2(U)$, and observe that
+ \[
+ L_\gamma^{-1} u = \lambda u \Longleftrightarrow u = \lambda (L + \gamma) u \Longleftrightarrow Lu = \frac{1 - \lambda \gamma}{\lambda} u.
+ \]
+ Note that $L_\gamma^{-1}$ does not have a zero eigenvalue.
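 To see why eigenvalues can accumulate only at infinity, write $\mu_k$ for the non-zero eigenvalues of the compact operator $L_\gamma^{-1}$, which can accumulate only at $0$. The corresponding eigenvalues of $L$ are
 \[
 \lambda_k = \frac{1 - \mu_k \gamma}{\mu_k} = \frac{1}{\mu_k} - \gamma.
 \]
 Since we know $Lu = \lambda u$ has no non-trivial solution when $\lambda \leq -\gamma$, each $\mu_k$ must be positive, and hence $\lambda_k \to \infty$ as $\mu_k \to 0$.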
+% Pick $\gamma > 0$ as in the last theorem, so $L_\gamma^{-1}: L^2(U) \to H_0^1(U)$ is well-defined for $\lambda \leq -\gamma$. So the problem $(*)$ admits a unique weak solution for all $f \in L^2(U)$. So $\Sigma \subseteq \{\lambda > - \gamma\}$. Consider $\lambda > -\gamma$. Then by the Fredholm alternative, $(*)$ has a solution iff $u \equiv 0$ is the only solution to the homogeneous $f = 0$ case. This is true iff $\lambda + \gamma$ is an eigenvalue of $L + \gamma$, but then the eigenfunction $u$ would satisfy
+% \[
+% u = L_\gamma^{-1}(\lambda + \gamma u) = \frac{\gamma + \lambda}{\gamma }Ku.
+% \]
+% The only time we have a non-trivial solution to this is when $\frac{\gamma}{\gamma + \lambda}$ is an eigenvalue of $K$, and we are done by the characterization of eigenvalues of compact operators.
+\end{proof}
+
+%By complexifying our Hilbert space to consider $u: U \to \C$ with
+%\[
+% (u, v)_{H_0^1(U)} = \int_U \sum_{i = 1}^n \overline{\D_i u} \cdot \D_i v + \bar{u} v \;\d x,
+%\]
+%we can consider the operator $L - z$ for $z \in \C$. We can define $\Sigma$ analogously, and there exists $\omega \in \R$ such that $\Sigma \subseteq \{\Re (z) > \omega\}$, and $\Sigma$ is a discrete set accumulating only at infinity.
+%
+%If our operators, are nice, then the results are even better:
+In certain cases, such as Laplace's equation, our operator is ``self-adjoint'', and more things can be said. As before, we want the ``formally'' quantifier:
+\begin{defi}[Formally self-adjoint]\index{formally self-adjoint operator}\index{self-adjoint operator!formal}
+ An operator $L$ is \emph{formally self-adjoint} if $L = L^\dagger$. Equivalently, if $b^i \equiv 0$.
+\end{defi}
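For such an operator, the associated bilinear form is manifestly symmetric: since $b^i \equiv 0$ and $a^{ij} = a^{ji}$, we have
\[
 B[u, v] = \int_U \left(\sum_{i, j = 1}^n a^{ij} u_{x_i} v_{x_j} + c\,uv\right)\;\d x = B[v, u].
\]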
+
+\begin{defi}[Positive operator]\index{positive operator}
+ We say $L$ is \emph{positive} if there exists $C > 0$ such that
+ \[
+ \|u\|^2_{H_0^1(U)} \leq C B[u, u]\text{ for all }u \in H_0^1(U).
+ \]
+\end{defi}
+
+\begin{thm}
+ Suppose $L$ is a formally self-adjoint, positive, uniformly elliptic operator on $U$, an open bounded set with $C^1$ boundary. Then we can represent the eigenvalues of $L$ as
+ \[
+ 0 < \lambda_1 \leq \lambda_2 \leq \lambda_3 \leq \cdots,
+ \]
+ where each eigenvalue appears according to its multiplicity ($\dim \mathcal{E}(\lambda)$), and there exists an orthonormal basis $\{w_k\}_{k = 1}^\infty$ of $L^2(U)$ with $w_k \in H_0^1(U)$ an eigenfunction of $L$ with eigenvalue $\lambda_k$.
+\end{thm}
+
+\begin{proof}
 Note that positivity says precisely that $B$ is coercive, so we may take $\gamma = 0$, and the inverse $L^{-1}: L^2(U) \to L^2(U)$ exists and is a compact operator. We are done if we can show that $L^{-1}$ is self-adjoint. This is straightforward: writing $u = L^{-1}f$ and $v = L^{-1}g$ for $f, g \in L^2(U)$, we have
 \[
 (L^{-1} f, g)_{L^2(U)} = B[v, u] = B[u, v] = (L^{-1}g, f)_{L^2(U)}.\qedhere
 \]
+% which is symmetric in
+% We claim it is self-adjoint. Indeed, if $f, g \in L^2(U)$, then $Sf = u$ is the weak solution to $Lu = f$, $u|_{\partial U} = 0$, and similarly for $Sg = v$. So
+% \[
+% (Sf, g)_{L^2(U)} = (u, g)_{L^2(U)} = B[v, u],
+% \]
+% and by assumption, this is symmetric in $v$ and $u$, hence $f$ and $g$. So $S$ is self-adjoint.
+%
+% Then we are done by the theory of self-adjoint compact operators.
+\end{proof}
+
+\subsection{Elliptic regularity}
We can finally turn to the problem of regularity. We previously saw that when solving $Lu = f$, if $f \in L^2(U)$, then by definition of a weak solution, we have $u \in H_0^1(U)$, so we have gained some regularity when solving the differential equation. However, it is not clear that $u \in H^2(U)$, so we cannot actually say $u$ solves $Lu = f$. Even if $u \in H^2(U)$, it may not be classically differentiable, so $Lu = f$ still doesn't hold in the strongest possible sense. So we might hope that under reasonable circumstances, $u$ is in fact twice continuously differentiable. But human desires are unlimited. If $f$ is smooth, we might hope further that $u$ is also smooth. All of these will be true.
+
+Let's think about how regularity may fail. It could be that the individual derivatives of $u$ are quite singular, but in $Lu$ all these singularities happen to cancel with each other. Thus, the content of elliptic regularity is that this doesn't happen.
+
+To see why we should expect this to be true, suppose for convenience that $u, f \in C_c^\infty(\R^n)$ and
+\[
+ -\Delta u = f.
+\]
+Using integration by parts, we compute
+\begin{align*}
+ \int_{\R^n} f^2 \;\d x &= \int_{\R^n} (\Delta u)^2\;\d x \\
+ &= \sum_{i, j} \int_{\R^n} (\D_i \D_i u) (\D_j \D_j u)\;\d x\\
+ &= \sum_{i ,j} \int_{\R^n} (\D_i \D_j u) (\D_i \D_j u) \;\d x\\
 &= \|\D^2 u\|^2_{L^2(\R^n)}.
+\end{align*}
+So we have deduced that
+\[
+ \|\D^2 u\|_{L^2(\R^n)} = \| \Delta u\|_{L^2(\R^n)}.
+\]
This is of course not a very useful result by itself, because we have a priori assumed that $u$ and $f$ are $C^\infty$, while what we want to prove is that $u$ lies in, say, $H^2(U)$. However, the fact that we can control the $H^2$ norm under the assumption $u \in H^2(U)$ is a strong indication that we should be able to show that $u$ must always be in $H^2(U)$.
+
+The idea is to run essentially the same argument for weak solutions, without mentioning the word ``second derivative''. This involves the use of \emph{difference quotients}.
+
+\begin{defi}[Difference quotient]\index{difference quotient}
+ Suppose $U \subseteq \R^n$ is open and $V \Subset U$. For $0 < |h| < \mathrm{dist}(V, \partial U)$, we define
+ \begin{align*}
+ \Delta_i^h u(x) &= \frac{u(x + h e_i) - u(x)}{h}\\
 \Delta^h u(x) &= (\Delta_1^h u, \ldots, \Delta^h_n u).
+ \end{align*}
+\end{defi}
+Observe that if $u \in L^2(U)$, then $\Delta^h u \in L^2(V)$. If further $u \in H^1(U)$, then $\Delta^h u \in H^1(V)$ and $\D \Delta^h u = \Delta^h \D u$.
+
+What makes difference quotients useful is the following lemma:
+
+\begin{lemma}
+ If $u \in L^2(U)$, then $u \in H^1(V)$ iff
+ \[
+ \|\Delta^h u\|_{L^2(V)} \leq C
+ \]
+ for some $C$ and all $0 < |h| < \frac{1}{2} \mathrm{dist}(V, \partial U)$. In this case, we have
+ \[
+ \frac{1}{\tilde{C}} \|\D u\|_{L^2(V)} \leq \|\Delta^h u\|_{L^2(V)} \leq \tilde{C} \|\D u\|_{L^2(V)}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ See example sheet.
+\end{proof}
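One direction of the bound is easy to see for smooth $u$, and the general case follows by density. By the fundamental theorem of calculus (taking $h > 0$ for concreteness),
\[
 \Delta_i^h u(x) = \frac{1}{h} \int_0^h \D_i u(x + t e_i)\;\d t,
\]
so by Jensen's inequality and Fubini,
\[
 \|\Delta_i^h u\|_{L^2(V)}^2 \leq \frac{1}{h} \int_0^h \|\D_i u\|_{L^2(U)}^2\;\d t = \|\D_i u\|_{L^2(U)}^2.
\]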
+
+Thus, if we are able to establish the bounds we had for the Laplacian using difference quotients, then this tells us $u$ is in $H^2_{loc}(U)$.
+
+\begin{lemma}
 If $w$ and $v$ are compactly supported in $U$, then
+ \begin{align*}
 \int_U w \Delta_k^{-h}v\;\d x &= -\int_U (\Delta^h_k w) v\;\d x\\
+ \Delta^h_k(w v) &= (\tau_k^h w) \Delta_k^h v + (\Delta_k^h w) v,
+ \end{align*}
+ where $\tau_k^h w (x) = w(x + h e_k)$.
+\end{lemma}
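The first identity is a direct change of variables: since $w$ and $v$ are compactly supported, the translates below stay within $U$, and
\begin{align*}
 \int_U w \Delta_k^{-h} v\;\d x &= \frac{1}{h} \int_U w(x) \big(v(x) - v(x - h e_k)\big)\;\d x\\
 &= \frac{1}{h} \int_U \big(w(x) - w(x + h e_k)\big) v(x)\;\d x = -\int_U (\Delta_k^h w) v\;\d x.
\end{align*}
The second is the analogue of the product rule, and is checked by adding and subtracting $w(x + h e_k) v(x)$ in the numerator of $\Delta_k^h(wv)$.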
+
+\begin{thm}[Interior regularity]\index{elliptic regularity!interior}\index{interior regularity}
+ Suppose $L$ is uniformly elliptic on an open set $U \subseteq \R^n$, and assume $a^{ij} \in C^1(U)$, $b^i, c \in L^\infty(U)$ and $f \in L^2(U)$. Suppose further that $u \in H^1(U)$ is such that
+ \[
+ B[u, v] = (f, v)_{L^2(U)}\tag{$\dagger$}
+ \]
+ for all $v \in H_0^1(U)$. Then $u \in H_{loc}^2(U)$, and for each $V \Subset U$, we have
+ \[
+ \|u\|_{H^2(V)} \leq C (\|f\|_{L^2(U)} + \|u\|_{L^2(U)}),
+ \]
+ with $C$ depending on $L, V, U$, but not $f$ or $u$.
+\end{thm}
+Note that we don't require $u \in H^1_0(U)$, so we don't require $u$ to satisfy the boundary conditions. In this case, there may be multiple solutions, so we need the $u$ on the right. Also, observe that we don't actually need uniform ellipticity, as the property of being in $H^2_{loc}(U)$ can be checked locally, and $L$ is always locally uniformly elliptic.
+
+The proof is essentially what we did for the Laplacian just now, except this time it is much messier since we need to use difference quotients instead of derivatives, and there are lots of derivatives of $a^{ij}$'s that have to be kept track of.
+
+When using regularity results, it is often convenient to not think about it in terms of ``solving equations'', but as something that (roughly) says ``if $u$ is such that $Lu$ happens to be in $L^2$ (say), then $u$ is in $H^2_{loc}(U)$''.
+\begin{proof}
+ We first show that we may in fact assume $b^i = c = 0$. Indeed, if we know the theorem for such $L$, then given a general $L$, we write
+ \[
+ L'u = -\sum (a^{ij} u_{x_j})_{x_i},\quad Ru = \sum b^i u_{x_i} + cu.
+ \]
+ Then if $u$ is a weak solution to $Lu = f$, then it is also a weak solution to $L'u = f - Ru$. Noting that $Ru \in L^2(U)$, this tells us $u \in H^2_{loc}(U)$. Moreover, on $V \Subset U$,
+ \begin{itemize}
 \item We can control $\|u\|_{H^2(V)}$ by $\|f - Ru\|_{L^2(V)}$ and $\|u\|_{L^2(V)}$ (by the theorem applied to $L'$).
+ \item We can control $\|f - Ru\|_{L^2(V)}$ by $\|f\|_{L^2(V)}, \|u\|_{L^2(V)}$ and $\|\D u\|_{L^2(V)}$.
+ \item By G\r{a}rding's inequality, we can control $\|\D u\|_{L^2(V)}$ by $\|u\|_{L^2(V)}$ and $B[u, u] = (f, u)_{L^2(V)}$.
+ \item By H\"older, we can control $(f, u)_{L^2(V)}$ by $\|f\|_{L^2(V)}$ and $\|u\|_{L^2(V)}$.
+ \end{itemize}
+
 So it suffices to consider the case where $L$ only has second derivatives. Fix $V \Subset U$ and choose $W$ such that $V \Subset W \Subset U$. Take $\zeta \in C_c^\infty(W)$ such that $\zeta \equiv 1$ on $V$.
+
 Recall that in our example of Laplace's equation, we considered the integral $\int f^2\;\d x$ and did some integration by parts. Essentially, what we did was to apply the definition of a weak solution to $\Delta u$. There we were lucky, and we could obtain the result in one go. In general, we should consider the second derivatives one by one.
+
 For $k \in \{1, \ldots, n\}$, we consider the function
+% \[
+% \sum_{i, j = 1}^n \int_U a^{ij} u_{x_i} u_{x_j} \;\d x = \int_U \tilde{f} v\;\d x,
+% \]
+% where
+% \[
+% \tilde{f} = f - \sum_{i = 1}^n b^i u_{x_i} - cu.
+% \]
+ \[
+ v = - \Delta_k^{-h} (\zeta^2 \Delta_k^h u).
+ \]
+ As we shall see, this is the correct way to express $u_{x_k x_k}$ in terms of difference quotients (the $-h$ in the first $\Delta_k^{-h}$ comes from the fact that we want to integrate by parts). We shall put this into the definition of a weak solution to say $B[u, v] = (f, v)$. The plan is to isolate a $\|\Delta^h_k \D u\|_2$ term on the left and then bound it.
+
+ We first compute
+% We write
+% \begin{align*}
+% A &= \sum_{i, j = 1}^n \int_U a^{ij} u_{x_i} v_{x_j} \;\d x\\
+% B &= \int_U \tilde{f} v\;\d x.
+% \end{align*}
+% Note that if $v, w$ are supported in $W$, then
+% \[
+% \int_U w \Delta_k^{-h}v\;\d x = \int_U \Delta^h_k w v\;\d x.
+% \]
+% Moreover, we have the product rule
+% \[
+% \Delta^h_k(w v) = \tau_k^h w \Delta_k^h v + (\Delta_k^h w) v,
+% \]
+% where $\tau_k^h w (x) = w(x + h e_k)$. Looking at $A$, we have
+ \begin{align*}
+ B[u, v] &= - \sum_{i, j} \int_U a^{ij} u_{x_i} \Delta_k^{-h} (\zeta^2 \Delta_k^h u)_{x_j}\;\d x\\
+ &= \sum_{i, j} \int_U \Delta^h_k (a^{ij} u_{x_i}) (\zeta^2 \Delta_k^h u)_{x_j}\;\d x\\
+ &= \sum_{i, j} \int_U (\tau^h_k a^{ij} \Delta_k^h u_{x_i} + (\Delta_k^h a^{ij}) u_{x_i}) (\zeta^2 \Delta^h_k u_{x_j} + 2 \zeta \zeta_{x_j} \Delta_k^h u)\;\d x\\
+% &= \sum_{i, j} \int_U \zeta^2 (\tau_k^h a^{ij}) \Delta_k^h u_{x_i} \Delta_k^h u_{x_j}\;\d x\\
+% &\hphantom{{}={}} + \sum_{i, j} \int_U \left\{ (\Delta^h_k a^{ij}) u_{x_i} \zeta^2 \Delta_k^2 u_{x_j} + 2(\Delta_k^h a^{ij}) u_{x_i} \zeta \zeta_{x_j} \Delta_k^h u + 2(\tau_k^h a^{ij}) \Delta^h_k u_{x_i} \cdot 2 \zeta \zeta_{x_j} \Delta_k^h u \right\}\;\d x\\ % fix this
+ &\equiv A_1 + A_2,
+ \end{align*}
+ where
+ \begin{align*}
 A_1 &= \sum_{i, j} \int_U \zeta^2 (\tau_k^h a^{ij}) (\Delta_k^h u_{x_i}) (\Delta_k^h u_{x_j})\;\d x\\
+ A_2 &= \sum_{i, j} \int_U \Big[ (\Delta^h_k a^{ij}) u_{x_i} \zeta^2 \Delta_k^h u_{x_j} + 2\zeta \zeta_{x_j} \Delta_k^h u (\tau^h_k a^{ij} \Delta_k^h u_{x_i} + (\Delta_k^h a^{ij})u_{x_i})\Big]\;\d x.
+ \end{align*}
+ By uniform ellipticity, we can bound
+ \[
 A_1 \geq \theta \int_U \zeta^2 |\Delta_k^h \D u|^2\;\d x.
+ \]
 This is the term we want to bound.
+
+ Note that $A_2$ looks scary, but every term either only involves ``first derivatives'' of $u$, or a product of a second derivative of $u$ with a first derivative. Thus, applying Young's inequality, we can bound $|A_2|$ by a linear combination of $|\Delta_k^h \D u|^2$ and $|\D u|^2$, and we can make the coefficient of $|\Delta_k^h \D u|^2$ as small as possible.
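 Here, Young's inequality refers to the elementary bound
 \[
 ab \leq \varepsilon a^2 + \frac{1}{4\varepsilon} b^2\quad\text{for all } a, b \geq 0,\ \varepsilon > 0,
 \]
 which lets us absorb each product term into a small multiple of $|\Delta_k^h \D u|^2$ plus a large multiple of the lower-order terms.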
+
+ In detail, since $a^{ij} \in C^1(U)$ and $\zeta$ is supported in $W$, we can uniformly bound $a^{ij}, \Delta_k^h a^{ij}, \zeta_{x_j}$, and we have
+ \[
+ |A_2| \leq C \int_W \Big[\zeta |\Delta^h_k \D u| |\D u| + \zeta |\D u| |\Delta_k^h u| + \zeta |\Delta^h_k \D u| |\Delta^h_k u|\Big]\;\d x.
+ \]
+ Now recall that $\|\Delta^h_k u\|$ is bounded by $\|\D u\|$. So applying Young's inequality, we may bound (for a different $C$)
+ \[
+ |A_2| \leq \varepsilon \int_W \zeta^2 |\Delta_k^h \D u|^2 + C \int_W |\D u|^2\;\d x.
+ \]
+% By Young's inequality, we have
+% \[
+% |A_2| \leq \varepsilon \int_W \zeta^2 |\Delta_k^h \D u|^2 \;\d x + \frac{C}{\varepsilon} \int_W |\D u|^2 + |\Delta_k^h u|^2\;\d x.
+% \]
+% Note that
+% \[
+% \int_W |\Delta_k^h u|^2\;\d x \leq c \int_W |\D u|^2 \;\d x.
+% \]
+% So setting $\varepsilon = \frac{\theta}{2}$, we conclude that
+ Thus, taking $\varepsilon = \frac{\theta}{2}$, it follows that
+ \[
+ (f, v) = B[u, v] \geq \frac{\theta}{2} \int_U \zeta^2 |\Delta_k^h \D u|^2 \;\d x - C \int_W |\D u|^2\;\d x.
+ \]
+ This is promising.
+
+ It now suffices to bound $(f, v)$ from above. By Young's inequality,
+ \begin{align*}
+ |(f, v)| &\leq \int |f| |\Delta^{-h}_k (\zeta^2 \Delta_k^h u)|\;\d x\\
+ &\leq C \int |f| |\D (\zeta^2 \Delta_k^h u)|\;\d x\\
+ &\leq \varepsilon \int |\D (\zeta^2 \Delta_k^h u)|^2\;\d x + C \int |f|^2\;\d x\\
 &\leq \varepsilon \int \zeta^2 |\Delta_k^h \D u|^2\;\d x + C (\|f\|_{L^2(U)}^2 + \|\D u\|_{L^2(U)}^2).
+ \end{align*}
+%
+% We now return to $B$. We have
+% \[
+% |B| \leq C \int (|f| + |\D u| + |u|) \Delta^{-h}_k (\zeta^2 \Delta_k^h u)\;\d x.
+% \]
+% By the lemma, we have
+% \begin{align*}
+% \int_u (\Delta_k^{-h} (\zeta^i \Delta_k^h u))^2\;\d x &\leq C \int_U |\D(\zeta^2 \Delta_k^h u)|^2 \;\d x\\
+% &\leq C \int_U \zeta^2 |\Delta_k^h u|^2 + \zeta^2 |\Delta^h_k \D u|^2\;\d x\\
+% &\leq c \int_U |\D u|^2 \;\d x + C \int_U \zeta^2 |\Delta^h_k \D u|^2\;\d x.
+% \end{align*}
+% By Young's inequality, we have
+% \[
+% |B| \leq \varepsilon \int_U \zeta^2 |\Delta_k^h \D u|^2\;\d x + C \int_W f^2 + u^2 + |\D u|^2 \;\d x.
+% \]
+ Setting $\varepsilon = \frac{\theta}{4}$, we get
+ \[
+ \int_U \zeta^2 |\Delta_k^h \D u|^2 \;\d x \leq C (\|f\|_{L^2(W)}^2 + \|\D u\|_{L^2(W)}^2),
+ \]
+ and so, in particular, we get a uniform bound on $\|\Delta_k^h \D u\|_{L^2(V)}$. Now as before, we can use G\r{a}rding to get rid of the $\|\D u\|_{L^2(W)}$ dependence on the right.
+% This implies
+% \[
+% \int_V |\Delta_k^h \D u|^2 \;\d x \leq C \int_W f^2 + u^2 + |\D u|^2\;\d x.
+% \]
+% Note that the right hand side is independent of $h$. So by the lemma, we have $\D u \in H^1(V)$ with
+% \[
+% \|\D^2 u\|_{L^2(V)} \leq C \int_W f^2 + u^2 + |\D u|^2\;\d x = C (\|f\|_{L^2(W)} + \|u\|_{H^1(W)})
+% \]
+% To complete the proof, we have to replace the $H^1$ norm with the $L^2$ norm. To see this,we let $\xi \in C^\infty_c(U)$ with $\xi \equiv 1$ on $W$. Set $v = \xi^2 u$, and then $B[u, v] = (f, v)_{L^2(U)}$ tells us
+% \[
+% \int_U \left(\sum_{i, j = 1}^n a^{ij} u_{x_i} (\xi^2 u)_{x_j} + \sum_{i = 1}^n b^i u_{x_i} \xi^2 u + c \xi^2 u^2 \right) \;\d u \int_U f \xi^2 u\;\d x.
+% \]
+% Proceeding as in the proof of G\r{a}rding's inequality, we deduce that
+% \[
+% \|\D u\|_{L^2(W)} \leq C (\|f\|_{L^2(U)} + \|u\|_{L^2(U)},
+% \]
+% and then we are done.
+\end{proof}
Notice that this is a local result. In order to have $u \in H^2(V)$, it is enough for us to have $f \in L^2(W)$ for some $W$ slightly larger than $V$. Thus, singularities do not propagate in, either from the boundary or from regions where $f$ is not well-behaved.
+
+With elliptic regularity, we can understand weak solutions as genuine solutions to the equation $Lu = f$. Indeed, if $u$ is a weak solution, then for any $v \in C_c^\infty(U)$, we have $B[u, v] = (f, v)$, hence after integrating by parts, we recover $(Lu - f, v) = 0$ for all $v \in C_c^\infty(U)$. So in fact $Lu = f$ almost everywhere.
+
It is natural to hope that we can get better than $u \in H^2_{loc}(U)$. This is actually not hard given our current work. If $Lu = f$, and all $a^{ij}, b^i, c, f$ are sufficiently well-behaved, then we can simply differentiate the whole equation with respect to $x_i$, and then observe that $u_{x_i}$ satisfies some second-order elliptic PDE of the form previously understood, and if we do this for all $i$, then we can conclude that $u \in H^3_{loc}(U)$. Of course, some bookkeeping has to be done if we were to do this properly, since we need to write everything in weak form. However, this is not particularly hard, and the details are left as an exercise.
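Formally, the computation goes as follows: differentiating $Lu = f$ in the $x_k$ direction and collecting terms shows that $\tilde{u} = u_{x_k}$ satisfies
\[
 L \tilde{u} = \tilde{f},\quad \tilde{f} = f_{x_k} + \sum_{i, j = 1}^n (a^{ij}_{x_k} u_{x_j})_{x_i} - \sum_{i = 1}^n b^i_{x_k} u_{x_i} - c_{x_k} u.
\]
Since we already know $u \in H^2_{loc}(U)$, if $a^{ij} \in C^2(U)$, $b^i, c \in C^1(U)$ and $f \in H^1(U)$, then $\tilde{f} \in L^2_{loc}(U)$, and interior regularity applied to $\tilde{u}$ gives $u \in H^3_{loc}(U)$.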
+
+\begin{thm}[Elliptic regularity]\index{interior regularity}\index{elliptic regularity!interior}
+ If $a^{ij}, b^i$ and $c$ are $C^{m + 1}(U)$ for some $m \in \N$, and $f \in H^m(U)$, then $u \in H^{m + 2}_{loc}(U)$ and for $V \Subset W \Subset U$, we can estimate
+ \[
+ \|u\|_{H^{m + 2}(V)} \leq C (\|f\|_{H^m(W)} + \|u\|_{L^2(W)}).
+ \]
+ In particular, if $m$ is large enough, then $u \in C^2_{loc}(U)$, and if all $a^{ij}, b^i, c, f$ are smooth, then $u$ is also smooth.
+\end{thm}
+
+We can similarly obtain a H\"older theory of elliptic regularity, which gives (roughly) $f \in C^{k, \alpha}(U)$ implies $u \in C^{k + 2, \alpha}(U)$.
+
+The final loose end is to figure out what happens at the boundary.
+
+\begin{thm}[Boundary $H^2$ regularity]\index{elliptic regularity!up to boundary}
 Assume $a^{ij} \in C^1(\bar{U})$, $b^i, c \in L^\infty(U)$, and $f \in L^2(U)$. Suppose $u \in H_0^1(U)$ is a weak solution of $Lu = f, u|_{\partial U} = 0$. Finally, we assume that $\partial U$ is $C^2$. Then
+ \[
+ \|u\|_{H^2(U)} \leq C (\|f\|_{L^2(U)} + \|u\|_{L^2(U)}).
+ \]
+ If $u$ is the \emph{unique} weak solution, we can drop the $\|u\|_{L^2(U)}$ from the right hand side.
+\end{thm}
+
+\begin{proof}
 Note that we already know that $u \in H^2_{loc}(U)$. So we only have to show that the second derivatives are well-behaved near the boundary.
+
+ By a partition of unity and change of coordinates, we may assume we are in the case
+ \[
+ U = B_1(0) \cap \{x_n > 0\}.
+ \]
 Let $V = B_{1/2}(0) \cap \{x_n > 0\}$. Choose $\zeta \in C_c^\infty(B_1(0))$ with $\zeta \equiv 1$ on $V$ and $0 \leq \zeta \leq 1$.
+
 Most of the previous proof goes through, as long as we restrict to
+ \[
+ v = - \Delta_k^{-h} (\zeta^2 \Delta^h_k u)
+ \]
+ with $k \not= n$, since all the translations keep us within $U$, and hence are well-defined.
+%
+% Since $u$ is a weak solution, for any $v \in H_0^1(U)$, we can write
+% \[
+% \sum_{i, j = 1}^n \int_U a^{ij} u_{x_i} v_{x_j}\;\d x = \int_U \tilde{f} v\;\d x,
+% \]
+% where
+% \[
+% \tilde{f} = f - \sum_{i = 1}^n b^i u_{x_i} - cu.
+% \]
+% Now, let $|h| > 0$ be small, and for $k \in \{1, \ldots, n - 1\}$, write
+% \[
+% v = - \Delta_k^{-h} (\zeta^2 \Delta^h_k u).
+% \]
+% Crucially, since we only translate in the directions tangent to the boundary, this is well-defined, and is in $H_0^1(U)$. So we can use it as a test function. Then proceed just as before, and we deduce that
+% \[
+% \int_V |\Delta^h_k \D u|^2 \;\d x \leq C \int_U f^2 + u^2 + |\D u|^2\;\d x.
+% \]
+% So even though we don't have $V \Subset U$, the results concerning difference quotients in directions \emph{tangent} to the boundary still hold.
+
 Thus, we control all second derivatives of the form $\D_k \D_i u$, where $k \in \{1, \ldots, n - 1\}$ and $i \in \{1, \ldots, n\}$. The only remaining second derivative to control is $\D_n \D_n u$. To understand this, we go back to the PDE itself. Recall that we know it holds pointwise almost everywhere, so
 \[
 -\sum_{i, j = 1}^n (a^{ij} u_{x_i})_{x_j} + \sum_{i = 1}^n b^i u_{x_i} + cu = f.
 \]
 So we can write $a^{nn} u_{x_n x_n} = F$ almost everywhere, where $F$ depends on $a, b, c, f$ and all (up to) second derivatives of $u$ that are not $u_{x_n x_n}$. Thus, $F$ is controlled in $L^2$. But applying uniform ellipticity to $\xi = e_n$ gives $a^{nn} \geq \theta$, so $a^{nn}$ is bounded away from $0$, and we are done.
+\end{proof}
Similarly, we can iterate this to obtain higher regularity results.
+
+\section{Hyperbolic equations}
+So far, we have been looking at elliptic PDEs. Since the operator is elliptic, there is no preferred ``time direction''. For example, Laplace's equation models static electric fields. Thus, it is natural to consider boundary value problems in these cases.
+
Hyperbolic equations single out a time direction, and these model quantities that evolve in time. In this case, we are often interested in initial value problems instead. Let's first define what it means for an equation to be hyperbolic.
+
+\begin{defi}[Hyperbolic PDE]\index{hyperbolic operator}\index{hyperbolic PDE}
+ A \emph{second-order linear hyperbolic PDE} is a PDE of the form
+ \[
+ \sum_{i, j = 1}^{n + 1} (a^{ij}(y) u_{y_j})_{y_i} + \sum_{i = 1}^{n + 1} b^i(y) u_{y_i} + c(y) u = f
+ \]
+ with $y \in \R^{n + 1}$, $a^{ij} = a^{ji}, b^i, c \in C^\infty(\R^{n + 1})$, such that the \term{principal symbol}
+ \[
+ Q(\xi) = \sum_{i, j = 1}^{n + 1} a^{ij}(y) \xi_i \xi_j
+ \]
+ has signature $(+, -, -, \ldots)$ for all $y$. That is to say, after perhaps changing basis, at each point we can write
+ \[
 Q(\xi) = \lambda_{n + 1}^2 \xi_{n + 1}^2 - \sum_{i = 1}^n \lambda_i^2 \xi_i^2
+ \]
+ with $\lambda_i > 0$.
+\end{defi}
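For example, the wave operator $\partial_t^2 - \Delta$ fits into this framework with $y_{n + 1} = t$, $a^{ij} = \mathrm{diag}(-1, \ldots, -1, 1)$ and $b^i = c = 0$, for which
\[
 Q(\xi) = \xi_{n + 1}^2 - \sum_{i = 1}^n \xi_i^2
\]
indeed has signature $(+, -, -, \ldots)$.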
+
+It turns out not to be too helpful to treat this equation at this generality. We would like to pick out a direction that corresponds to the positive eigenvalue. By a coordinate transformation, we can locally put our equation in the form
+\[
+ u_{tt} = \sum_{i, j = 1}^n (a^{ij}(x, t) u_{x_i})_{x_j} + \sum_{i = 1}^n b^i(x, t)u_{x_i} + c(x, t) u.
+\]
+Note that we did not write down a $u_t$ term. It doesn't make much difference, and it is notationally convenient to leave it out.
+
+In this form, hyperbolicity is equivalent to the statement that the operator on the right is elliptic for each $t$ (or rather, the negative of the right hand side).
+
+We observe that $t = 0$ is a non-characteristic surface. So we can hope to solve the Cauchy problem. In other words, we shall specify $u|_{t = 0}$ and $u_t|_{t = 0}$. Actually, we'll look at an initial boundary value problem. Consider a region of the form $\R \times U$, where $U \subseteq \R^n$ is open bounded with $C^1$ boundary.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (1.5, 0) arc (0:-180:1.5 and 0.3);
+ \draw [dashed] (1.5, 0) arc (0:180:1.5 and 0.3);
+ \draw (1.5, 0) -- (1.5, 3);
+ \draw (-1.5, 0) -- (-1.5, 3);
+
+ \draw (0, 3) ellipse (1.5 and 0.3);
+
+ \node [left] at (-1.5, 0) {$t = 0$};
+ \node [left] at (-1.5, 3) {$t = T$};
+
+ \node at (0, 0) {$U$};
+ \end{tikzpicture}
+\end{center}
+We define
+\begin{align*}
+ U_t &= (0, t) \times U\\
+ \Sigma_t &= \{t\} \times U\\
+ \partial^* U_t &= [0, t] \times \partial U.
+\end{align*}
+Then
+\[
+ \partial U_T = \Sigma_0 \sqcup \Sigma_T \sqcup \partial^* U_T.
+\]
The general \term{initial boundary value problem} (IBVP)\index{IBVP} is as follows: Let $L$ be a (time-dependent) uniformly elliptic operator. We want to solve
+\begin{align*}
+ u_{tt} + L u &= f && \text{on }U_T\\
+ u &= \psi &&\text{on }\Sigma_0\\
+ u_t &= \psi' && \text{on }\Sigma_0\\
+ u &= 0 && \text{on } \partial^* U_T.
+\end{align*}
+In the case of elliptic PDEs, we saw that Laplace's equation was a canonical, motivating example. In this case, if we take $L = - \Delta$, then we obtain the \term{wave equation}. Let's see what we can do with it.
+
+\begin{eg}
+ Start with the equation $u_{tt} - \Delta u = 0$. Multiply by $u_t$ and integrate over $U_t$ to obtain
+ \begin{align*}
+ 0 &= \int_{U_t} \Big(u_{tt} u_t - u_t \Delta u\Big) \;\d x\;\d t\\
+ &= \int_{U_t} \left[\frac{1}{2}\frac{\partial}{\partial t} u_t^2 - \nabla \cdot (u_t \D u) + \D u_t \cdot \D u\right]\;\d x\;\d t\\
 &= \int_{U_t} \left[\frac{1}{2}\frac{\partial}{\partial t} \left( u_t^2 + |\D u|^2\right) - \nabla \cdot (u_t \D u)\right]\;\d x\;\d t\\
+ &= \frac{1}{2} \int_{\Sigma_t - \Sigma_0} \Big(u_t^2 + |\D u|^2\Big)\;\d x - \int_{\partial^* U_t} u_t \frac{\partial u}{\partial \nu}\;\d S.
+ \end{align*}
+ But $u$ vanishing on $\partial^* U_T$ implies $u_t$ vanishes as well. So the second term vanishes, and we obtain
+ \[
+ \int_{\Sigma_t} u_t^2 + |\D u|^2 \;\d x = \int_{\Sigma_0} u_t^2 + |\D u|^2 \;\d x.
+ \]
+ This is the conservation of energy! Thus, if a solution exists, we control $\|u\|_{H^1(\Sigma_t)}$ in terms of $\|\psi\|_{H^1(\Sigma_0)}$ and $\|\psi'\|_{L^2(\Sigma_0)}$. We also see that the solution is uniquely determined by $\psi$ and $\psi'$, since if $\psi = \psi' = 0$, then $u_t = \D u = 0$ and $u$ is zero at the boundary.
+\end{eg}
+
+Estimates like this that control a solution without needing to construct it are known as \term{a priori estimates}. These are often crucial to establish the existence of solutions (cf.\ G\r{a}rding).
+
+%Returning to the general case, let us define
+%\[
+% Lu = -\sum_{i, j = 1}^n (a^{ij}(x, t) u_{x_j})_{x_i} + \sum_{i = 1}^n b^i(x, t) u_{x_i} + c(x, t) u,
+%\]
+%for $a^{ij} = a^{ji}, b^i, c \in C^1(\bar{U}_T)$. We further assume the uniform ellipticity condition, i.e.\ there exists $\theta > 0$ such that
+%\[
+% \sum_{i, j = 1}^n a^{ij}(x, t) \xi_i \xi_j \geq \theta |\xi|^2.
+%\]
+%The initial boundary value problem we are going to consider is
+%\begin{align*}
+% u_{tt} + Lu &= f&\text{on }U_T\\
+% u &= \psi &\text{on }\Sigma_0\\
+% u_t &= \psi' & \text{on }\Sigma_0\\
+% u &= 0 &\text{on } \partial^* U_T
+%\end{align*}
+
+We shall first find a weak formulation of this problem that only requires $u \in H^1(U_T)$. Note that when we do so, we have to understand carefully what we mean by $u_t = \psi'$. We shall see how we will deal with that in the derivation of the weak formulation.
+
+Assume that $u \in C^2(\bar{U}_T)$ is a classical solution. Multiply the equation by $v \in C^2(\bar{U}_T)$ which satisfies $v = 0$ on $\partial^* U_T \cup \Sigma_T$. Then we have
+\begin{align*}
 \int_{U_T} \d x\;\d t\; (fv) &= \int_{U_T} \d x\;\d t\; (u_{tt}v + (Lu) v)\\
 &= \int_{U_T} \d x\;\d t\left(-u_t v_t + \sum a^{ij} u_{x_i} v_{x_j} + \sum b^i u_{x_i} v + cuv\right)\\
 &\phantomeq + \left[\int_U u_t v \;\d x\right]_0^T - \int_0^T \left(\int_{\partial U} \sum a^{ij} u_{x_j} \nu_i v\;\d S\right)\d t.
+\end{align*}
+Using the boundary conditions, we find that
+\begin{multline*}
+ \int_{U_T} fv\;\d x \;\d t = \int_{U_T} \left(-u_t v_t + \sum a^{ij} u_{x_i} v_{x_j} + \sum b^i u_{x_i} v + cuv \right)\;\d x\;\d t \\
+ - \int_{\Sigma_0} \psi' v\;\d x.\tag{$\dagger$}
+\end{multline*}
+Conversely, suppose $u \in C^2(\bar{U}_T)$ satisfies $(\dagger)$ for all such $v$, and $u|_{\Sigma_0} = \psi$ and $u|_{\partial^* U_T} = 0$. Then by first testing on $v \in C_c^\infty(U_T)$, reversing the integration by parts tells us
+\[
 0 = \int_{U_T} (u_{tt} + Lu - f)v\;\d x\;\d t,
+\]
+since there is no boundary term. Hence we get
+\[
+ u_{tt} + Lu = f
+\]
+on $U_T$. To check the boundary conditions, if $v \in C^\infty(\bar{U}_T)$ vanishes on $\partial^*U_T \cup \Sigma_T$, then again reversing the integration by parts shows that
+\[
+ \int_{U_T} (u_{tt} + Lu - f) v\;\d x \;\d t = \int_{\Sigma_0} (\psi' - u_t) v\;\d x.
+\]
+Since we know that the LHS vanishes, it follows that $\psi' = u_t$ on $\Sigma_0$. So we see that our weak formulation can encapsulate the boundary condition on $\Sigma_0$.
+
+\begin{defi}[Weak solution]\index{weak solution!hyperbolic equation}
 Suppose $f \in L^2(U_T)$, $\psi \in H_0^1(\Sigma_0)$ and $\psi' \in L^2(\Sigma_0)$. We say $u \in H^1(U_T)$ is a weak solution to the hyperbolic PDE if
+ \begin{enumerate}
+ \item $u|_{\Sigma_0} = \psi$ in the trace sense;
+ \item $u|_{\partial^* U_T} = 0$ in the trace sense; and
+ \item $(\dagger)$ holds for all $v \in H^1(U_T)$ with $v = 0$ on $\partial^* U_T \cup \Sigma_T$ in a trace sense.
+ \end{enumerate}
+\end{defi}
+
+\begin{thm}[Uniqueness of weak solution]\index{uniqueness theorem!hyperbolic equation}
 A weak solution, if it exists, is unique.
+\end{thm}
+%The non-trivial part of this theorem is that we cannot use $u_t$ as the test function.
+
+\begin{proof}
+ It suffices to consider the case $f = \psi = \psi' = 0$, and show any solution must be zero. Let
+ \[
+ v(x, t) = \int_t^T e^{-\lambda s} u(x, s)\;\d s,
+ \]
 where $\lambda$ is a real number we will pick later. The point of introducing this $e^{-\lambda t}$ is that in general, we do not expect conservation of energy. There could be some exponential growth in the energy, so we want to suppress this.
+
+ Then this function belongs to $H^1(U_T)$, $v = 0$ on $\Sigma_T \cup \partial^* U_T$, and
+ \[
+ v_t = -e^{-\lambda t} u.
+ \]
+ Using the fact that $u$ is a weak solution, we have
+ \[
 \int_{U_T} \left( u_t u e^{- \lambda t} - \sum a^{ij} v_{t x_i} v_{x_j} e^{\lambda t} + \sum_i b^i u_{x_i} v + (c - 1) uv - v v_t e^{\lambda t}\right)\;\d x \;\d t = 0.
+ \]
+ Integrating by parts, we can write this as $A = B$, where
+ \begin{align*}
 A &= \int_{U_T} \left( \frac{\d}{\d t} \left(\frac{1}{2} u^2 e^{-\lambda t} - \frac{1}{2} \sum a^{ij} v_{x_i} v_{x_j} e^{\lambda t} - \frac{1}{2} v^2 e^{\lambda t}\right) \right.\\
+ &\phantomeq \hphantom{\int_{U_T}} \left.+ \frac{\lambda}{2} \left(u^2 e^{-\lambda t} + \sum a^{ij} v_{x_i} v_{x_j} e^{\lambda t} + v^2 e^{\lambda t}\right)\right) \;\d x\;\d t\\
 B &= - \int_{U_T} \left(\frac{1}{2} e^{\lambda t} \sum \dot{a}^{ij} v_{x_i} v_{x_j} - \sum b^i_{x_i} uv - \sum b^i v_{x_i} u + (c - 1) uv\right) \;\d x\;\d t.
+ \end{align*}
+ Here $A$ is the nice bit, which we can control, and $B$ is the junk bit, which we will show that we can absorb elsewhere.
+
+ Integrating the time derivative in $A$, using $v = 0$ on $\Sigma_T$ and $u = 0$ on $\Sigma_0$, we have
+ \begin{multline*}
 A = e^{-\lambda T} \int_{\Sigma_T} \frac{1}{2} u^2 \;\d x + \frac{1}{2}\int_{\Sigma_0} \left(\sum a^{ij} v_{x_i} v_{x_j} + v^2\right) \;\d x\\
 + \frac{\lambda}{2} \int_{U_T}\left(u^2 e^{-\lambda t} + \sum a^{ij} v_{x_i} v_{x_j} e^{\lambda t} + v^2 e^{\lambda t}\right) \;\d x\;\d t.
+ \end{multline*}
+ Using the uniform ellipticity condition (and the observation that the first line is always non-negative), we can bound
+ \[
+ A \geq \frac{\lambda}{2} \int_{U_T} \left(u^2 e^{-\lambda t} + \theta |\D v|^2 e^{\lambda t} + v^2 e^{\lambda t}\right)\;\d x\;\d t.
+ \]
 Doing some integration by parts, we can also bound
 \[
 B \leq \frac{C}{2} \int_{U_T} \left(u^2 e^{-\lambda t} + \theta |\D v|^2 e^{\lambda t} + v^2 e^{\lambda t}\right)\;\d x\;\d t,
 \]
 where the constant $C$ does not depend on $\lambda$. Taking this together, we have
 \[
 \frac{\lambda - C}{2} \int_{U_T} \left(u^2 e^{-\lambda t} + \theta |\D v|^2 e^{\lambda t} + v^2 e^{\lambda t}\right)\;\d x\;\d t \leq 0.
 \]
 Taking $\lambda > C$, this tells us the integral must vanish. In particular, the integral of $u^2 e^{-\lambda t}$ vanishes. So $u = 0$.
+\end{proof}
+%For those who know GR/the vector field method, this is using
+%\[
+% X = e^{-\lambda t} \partial_t
+%\]
+%as a multiplier.
+
+We now want to prove the existence of weak solutions. While we didn't need to assume much regularity in the uniqueness result, since we are going to subtract the boundary conditions off anyway, we expect that we need more regularity to prove existence.
+\begin{thm}[Existence of weak solution]\index{existence theorem!hyperbolic equation}
 Given $\psi \in H_0^1(U)$ and $\psi' \in L^2(U)$, $f \in L^2(U_T)$, there exists a (unique) weak solution with
 \[
 \|u\|_{H^1(U_T)} \leq C (\|\psi\|_{H^1(U)} + \|\psi'\|_{L^2(U)} + \|f\|_{L^2(U_T)}).\tag{$\ddagger$}
 \]
\end{thm}
\begin{proof}
 We use \term{Galerkin's method}. The way we write our equations suggests we should think of our hyperbolic PDE as a second-order ODE taking values in the infinite-dimensional space $H^1_0(U)$. To apply the ODE theorems we know, we project our equation onto a finite-dimensional subspace, and then take the limit.

 First note that by density arguments, we may assume $\psi, \psi' \in C_c^\infty(U)$ and $f \in C_c^\infty(U_T)$, as long as we prove the estimate $(\ddagger)$. So let us do so.
+
+ Let $\{\varphi_k\}_{k = 1}^\infty$ be an orthonormal basis for $L^2(U)$, with $\varphi_k \in H_0^1(U)$. For example, we can take $\varphi_k$ to be eigenfunctions of $-\Delta$ with Dirichlet boundary conditions.
+
+ We shall consider ``solutions'' of the form
+ \[
+ u^N(x, t) = \sum_{k = 1}^N u_k(t) \varphi_k(x).
+ \]
+ We want this to be a solution after projecting to the subspace spanned by $\varphi_1, \ldots, \varphi_N$. Thus, we want $(u_{tt} + Lu - f, \varphi_k)_{L^2(\Sigma_t)} = 0$ for all $k = 1,\ldots, N$. After some integration by parts, we see that we want
+ \[
+ \left(\ddot{u}^N, \varphi_k\right)_{L^2(U)} + \int_{\Sigma_t} \left(\sum a^{ij} u_{x_i}^N (\varphi_k)_{x_j} + b^i u_{x_i}^N \varphi_k + c u^N \varphi_k\right) \;\d x = (f, \varphi_k)_{L^2(U)}.\tag{$*$}
+ \]
+ We also require
+ \begin{align*}
+ u_k(0) &= (\psi, \varphi_k)_{L^2(U)}\\
+ \dot{u}_k(0) &= (\psi', \varphi_k)_{L^2(U)}.
+ \end{align*}
+ Notice that if we have a genuine solution $u$ that can be written as a finite sum of the $\varphi_k(x)$, then these must be satisfied.
+
+ This is a system of ODEs for the functions $u_k(t)$, and the RHS is uniformly $C^1$ in $t$ and linear in the $u_k$'s. By Picard--Lindel\"of, a solution exists for $t \in [0, T]$.
+
+ So for each $N$, we have an approximate solution that solves the equation when projected onto $\bra \varphi_1, \ldots, \varphi_N\ket$. What we need to do is to extract from this solution a genuine weak solution. To do so, we need some estimates to show that the functions $u^N$ converge.
+
+ We multiply $(*)$ by $e^{-\lambda t} \dot{u}_k(t)$, sum over $k = 1, \ldots, N$, and integrate from $0$ to $\tau \in (0, T)$, and end up with
+ \begin{multline*}
 \int_0^\tau \;\d t \int_U \;\d x \left(\ddot{u}^N \dot{u}^N + \sum a^{ij} u_{x_i}^N \dot{u}_{x_j}^N + \sum b^i u_{x_i}^N \dot{u}^N + c u^N \dot{u}^N\right)e^{-\lambda t}\\
 = \int_0^\tau \;\d t \int_U \;\d x\; (f \dot{u}^N e^{-\lambda t}).
+ \end{multline*}
+ As before, we can rearrange this to get $A = B$, where
+ \begin{multline*}
 A = \int_{U_\tau}\d t \;\d x \left(\frac{\d}{\d t} \left(\left(\frac{1}{2} (\dot{u}^N)^2 + \frac{1}{2} \sum a^{ij} u_{x_i}^N u_{x_j}^N + \frac{1}{2} (u^N)^2\right) e^{-\lambda t}\right)\right.\\
+ \left.+ \frac{\lambda}{2} \left((\dot{u}^N)^2 + \sum a^{ij} u_{x_i}^N u_{x_j}^N + (u^N)^2 \right)e^{-\lambda t}\right)
+ \end{multline*}
+ and
+ \[
+ B = \int_{U_\tau} \d t\;\d x\left( \frac{1}{2} \sum \dot{a}^{ij} u_{x_i}^N u_{x_j}^N - \sum b^i u_{x_i}^N \dot{u}^N + (1 - c) u^N \dot{u}^N + f \dot{u}^N \right)e^{-\lambda t}.
+ \]
+ Integrating in time, and estimating as before, for $\lambda$ sufficiently large, we get
+ \begin{multline*}
+ \frac{1}{2} \int_{\Sigma_\tau} \Big((\dot{u}^N)^2 + |\D u^N|^2\Big) \;\d x + \int_{U_\tau} \Big((\dot{u}^N)^2 + |\D u^N|^2 + (u^N)^2\Big) \;\d x\;\d t\\
 \leq C (\|\psi\|^2_{H^1(U)} + \|\psi'\|^2_{L^2(U)} + \|f\|_{L^2(U_T)}^2).
+ \end{multline*}
 This, in particular, tells us $u^N$ is bounded in $H^1(U_T)$.
+
 Since $u^N(0) = \sum_{k = 1}^N (\psi, \varphi_k) \varphi_k$, we know this tends to $\psi$ in $H^1(U)$. So for $N$ large enough, we have
+ \[
+ \|u^N\|_{H^1(\Sigma_0)} \leq 2 \|\psi\|_{H^1(U)}.
+ \]
+ Similarly, $\|\dot{u}^N\|_{L^2(\Sigma_0)} \leq 2 \|\psi'\|_{L^2(U)}$.
+
 Thus, we can extract a weakly convergent subsequence $u^{N_m} \rightharpoonup u$ in $H^1(U_T)$ for some $u \in H^1(U_T)$ such that
+ \[
 \|u\|_{H^1(U_T)} \leq C (\|\psi\|_{H^1(U)} + \|\psi'\|_{L^2(U)} + \|f\|_{L^2(U_T)}).
+ \]
+ For convenience, we may relabel the sequence so that in fact $u^N \rightharpoonup u$.
+
+ To check that $u$ is a solution, suppose $v = \sum_{k = 1}^M v_k(t) \varphi_k$ for some $v_k \in H^1((0, T))$ with $v_k(T) = 0$. By definition of $u^N$, we have
+ \[
 (\ddot{u}^N, v)_{L^2(U)} + \int_{\Sigma_t} \sum_{i, j} a^{ij} u_{x_i}^N v_{x_j} + \sum_i b^i u_{x_i}^N v + c u^N v \;\d x = (f, v)_{L^2(U)}.
+ \]
+ Integrating $\int_0^T \;\d t$ using $v(T) = 0$, we have
+ \begin{multline*}
 \int_{U_T} \left(-u_t^N v_t + \sum a^{ij} u_{x_i}^N v_{x_j} + \sum b^i u_{x_i}^N v + c u^N v\right) \;\d x \;\d t - \int_{\Sigma_0} u_t^N v\;\d x \\
+ = \int_{U_T} fv\;\d x\;\d t.
+ \end{multline*}
+ Now note that if $N > M$, then $\int_{\Sigma_0} u_t^N v\;\d x = \int_{\Sigma_0} \psi' v \;\d x$. Now, passing to the weak limit, we have
+ \begin{multline*}
+ \int_{U_T} \left(-u_t v_t + \sum a^{ij} u_{x_i} v_{x_j} + \sum b^i u_{x_i} v + cuv\right) \;\d x \;\d t - \int_{\Sigma_0}\psi' v\;\d x \\
+ = \int_{U_T} fv\;\d x\;\d t.
+ \end{multline*}
 So $u$ satisfies the identity required to be a weak solution.
+
+ Now for $k = 1, \ldots, M$, the map $w \in H^1(U_T) \mapsto \int_{\Sigma_0} w \varphi_k \;\d x$ is a bounded linear map, since the trace is bounded in $L^2$. So we conclude that
+ \[
 \int_{\Sigma_0} u \varphi_k \;\d x = \lim_{N \to \infty} \int_{\Sigma_0} u^N \varphi_k \;\d x = (\psi, \varphi_k)_{L^2(U)}.
+ \]
 Since this is true for all $\varphi_k$, it follows that $u|_{\Sigma_0} = \psi$. Finally, the functions $v$ of the form considered are dense among the $v \in H^1(U_T)$ with $v = 0$ on $\partial^*U_T \cup \Sigma_T$. So we are done.
+\end{proof}
+
+In fact, we have
+\[
+ \esssup_{t \in (0, T)} (\|\dot{u}\|_{L^2(\Sigma_t)} + \|u\|_{H^1(\Sigma_t)}) \leq C \cdot (\mathrm{data}).
+\]
+So we can say $u \in L^\infty( (0, T), H^1(U))$ and $\dot{u} \in L^\infty((0, T), L^2(U))$.
+
+%Note that although the energy $\|\dot{u}\|_{L^2(\Sigma_t)} + \|u\|_{H^2(\Sigma_t)}$ is bounded, we cannot conclude that the energy is continuous.
+
+We would like to improve the regularity of the solution. To motivate how we are going to do that, let's go back to the wave equation for a bit.
+
Suppose that in fact $u \in C^\infty(U_T)$ is a smooth solution to the wave equation with initial conditions $(\psi, \psi')$. We want a quantitative estimate for $\|u\|_{H^2(\Sigma_t)}$. The idea is to differentiate the equation with respect to $t$. Writing $w = u_t$, we get
+\begin{align*}
+ w_{tt} - \Delta w &= 0\\
+ w|_{\Sigma_0} &= \psi'\\
+ w_t|_{\Sigma_0} &= \Delta \psi\\
+ w|_{\partial^* U_T} &= 0.
+\end{align*}
+By the energy estimate we have for the wave equation, we get
+\begin{align*}
+ \|w_t\|_{L^2(\Sigma_t)} + \|w\|_{H^1(\Sigma_t)} &\leq C(\|\psi'\|_{H^1(U)} + \|\Delta \psi\|_{L^2(U)})\\
+ &\leq C(\|\psi'\|_{H^1(U)} + \|\psi\|_{H^2(U)}).
+\end{align*}
+So we now have control of $u_{tt}$ and $u_{tx_i}$ in $L^2(\Sigma_t)$. But once we know that $u_{tt}$ is controlled in $L^2$, then we can use the elliptic estimate to gain control on the second-order spatial derivatives of $u$. So
+\[
+ \|u\|_{H^2(\Sigma_t)}\leq C(\|\Delta u\|_{L^2(\Sigma_t)}) = C \|u_{tt}\|_{L^2(\Sigma_t)}.
+\]
+So we control all second-derivatives of $u$ in terms of the data.
+
+\begin{thm}
 If $a^{ij}, b^i, c \in C^2(\bar{U}_T)$ and $\partial U \in C^2$, then for $\psi \in H^2(U)$ and $\psi' \in H_0^1(U)$, and $f, f_t \in L^2(U_T)$, we have
+ \begin{align*}
+ u &\in H^2(U_T) \cap L^\infty((0, T); H^2(U))\\
+ u_t &\in L^\infty((0, T), H_0^1(U))\\
+ u_{tt} &\in L^\infty((0, T); L^2(U))
+ \end{align*}
+\end{thm}
+
+\begin{proof}
+ We return to the Galerkin approximation. Now by assumption, we have a linear system with $C^2$ coefficients. So $u_k \in C^3((0, T))$. Differentiating with respect to $t$ (assuming as we can $f, f_t \in C^0(\bar{U}_T)$), we have
+ \begin{multline*}
+ (\partial_t^3 u^N, \varphi_k)_{L^2(U)} + \int_{\Sigma_t} \left(\sum a^{ij} \dot{u}_{x_i}^N (\varphi_k)_{x_j} + \sum b^i \dot{u}_{x_i}^N \varphi_k + c \dot{u}^N \varphi_k \right) \;\d x\\
 = (\dot{f}, \varphi_k)_{L^2(U)} - \int_{\Sigma_t} \left( \sum \dot{a}^{ij} u_{x_i}^N (\varphi_k)_{x_j} + \sum \dot{b}^i u_{x_i}^N \varphi_k + \dot{c} u^N \varphi_k \right)\;\d x.
+ \end{multline*}
+ Multiplying by $\ddot{u}_k e^{-\lambda t}$, summing $k = 1, \ldots, N$, integrating $\int_0^\tau \;\d t$, and recalling we already control $u \in H^1(U_T)$, we get
+ \begin{multline*}
+ \sup_{t \in (0, T)} (\|u_t^N\|_{H^1(\Sigma_t)} + \|u_{tt}^N\|_{L^2(\Sigma_t)} + \|u_t^N\|_{H^2(U_T)})\\
+ \leq C\Big(\|u_t^N\|_{H^1(\Sigma_0)} + \|u_{tt}^N\|_{L^2(\Sigma_0)} + \|\psi\|_{H^1(\Sigma_0)} \\
+ + \|\psi'\|_{L^2(\Sigma_0)} + \|f\|_{L^2(U_T)} + \|f_t\|_{L^2(U_T)}\Big).
+ \end{multline*}
+ We know
+ \[
+ u_t^N|_{t = 0} = \sum_{k = 1}^N (\psi', \varphi_k)_{L^2(U)} \varphi_k.
+ \]
+ Since $\varphi_k$ are a basis for $H^1$, we have
+ \[
+ \|u_t^N\|_{H^1(\Sigma_0)} \leq \|\psi'\|_{H^1(\Sigma_0)}.
+ \]
 To control $u^N_{tt}$, let us assume for convenience that in fact the $\varphi_k$ are the eigenfunctions of $-\Delta$. From the fact that
+ \[
 (\ddot{u}^N, \varphi_k)_{L^2(U)} + \int_{\Sigma_t} \sum_{i, j} a^{ij} u_{x_i}^N (\varphi_k)_{x_j} + \sum_i b^i u_{x_i}^N \varphi_k + c u^N \varphi_k\;\d x = (f, \varphi_k)_{L^2(U)},
+ \]
 integrate the first term in the integral by parts, multiply by $\ddot{u}_k$, and sum to get
+ \[
 \|u_{tt}^N\|_{L^2(\Sigma_0)} \leq C(\|u^N\|_{H^2(\Sigma_0)} + \|f\|_{L^2(U_T)} + \|f_t\|_{L^2(U_T)}).
+ \]
 We need to control $\|u^N\|_{H^2(\Sigma_0)}$ by $\|\psi\|_{H^2(U)}$. Using that $\Delta\varphi_k|_{\partial U} = 0$ and that $u^N$ is a finite sum of these $\varphi_k$'s,
+ \[
+ (\Delta u^N, \Delta u^N)_{L^2\!(\Sigma_0)} = (u^N, \Delta^2 u^N)_{L^2\!(\Sigma_0)} = (\psi, \Delta^2 u^N)_{L^2\!(\Sigma_0)} = (\Delta \psi, \Delta u^N)_{L^2\!(\Sigma_0)}.
+ \]
+ So
+ \[
 \|u^N\|_{H^2(\Sigma_0)} \leq C\|\Delta u^N\|_{L^2(\Sigma_0)} \leq C \|\psi\|_{H^2(U)}.
+ \]
+ Passing to the weak limit, we conclude that
+ \begin{align*}
+ u_t &\in H^1(U_T)\\
+ u_t &\in L^\infty((0, T), H_0^1(U))\\
+ u_{tt} &\in L^\infty((0, T), L^2(U)).
+ \end{align*}
 Since $u_{tt} + Lu = f$, by an elliptic estimate on (almost) every constant-$t$ slice, we obtain $u \in L^\infty((0, T), H^2(U))$.
+\end{proof}
+
+We can now understand the equation as holding pointwise almost everywhere by undoing the integration by parts that gave us the definition of the weak solution. The initial conditions can also be understood in a trace sense.
+
Returning to the case $\psi \in H^1_0(U)$ and $\psi' \in L^2(U)$, by approximating $\psi$ and $\psi'$ in $H^2(U)$ and $H_0^1(U)$ respectively, we can show that a weak solution can be constructed as a \emph{strong} limit in $H^1(U_T)$. This implies the energy identity, so that in fact weak solutions satisfy
+\begin{align*}
+ u &\in C^0((0, T); H_0^1(U))\\
+ u_t &\in C^0((0, T); L^2(U))
+\end{align*}
+This requires slightly stronger regularity assumptions on $a^{ij}$, $b^i$ and $c$. Such solutions are said to be in the \term{energy class}.
+
+Finally, note that we can iterate the argument to get higher regularity.
+\begin{thm}
+ If $a^{ij}, b^i, c \in C^{k + 1}(\bar{U}_T)$ and $\partial U$ is $C^{k + 1}$, and
+ \begin{align*}
+ \partial^i_t u|_{\Sigma_0} &\in H_0^1 (U)&i &= 0, \ldots, k\\
+ \partial_t^{k + 1}u |_{\Sigma_0} &\in L^2(U)\\
+ \partial_t^i f &\in L^2((0, T); H^{k - i}(U)) & i &= 0, \ldots, k
+ \end{align*}
 then $u \in H^{k + 1}(U_T)$ and
+ \[
+ \partial_t^i u \in L^\infty((0, T); H^{k + 1 - i}(U))
+ \]
+ for $i = 0, \ldots, k + 1$.
+
+ In particular, if everything is smooth, then we get a smooth solution.
+\end{thm}
The first two conditions should be understood as conditions on $\psi$ and $\psi'$, using the fact that the equation allows us to express higher time derivatives of $u$ in terms of lower time derivatives and spatial derivatives. One can check that these conditions imply $\psi \in H^{k + 1}(U)$ and $\psi' \in H^k(U)$, but the conditions we wrote down also encode some compatibility conditions, since we know $u$ ought to vanish at the boundary, hence all its time derivatives should too.
+
+%For example, consider the wave equation
+%\[
+% u_{tt} - \Delta u = 0.
+%\]
+%with $u = \psi, u_t = \psi'$ at $\Sigma_0$ and $u = 0$ on $\partial^* U_t$. If we have a smooth solution, then $u_{tt} = 0$ on $\partial^* U_T$. So $\Delta u = 0$ on $\partial^* U_T$. So $\Delta \psi$ on $\partial^* U_T$.
+
+Those were the standard existence and regularity theorems for hyperbolic PDEs. However, there are more things to say about hyperbolic equations. The ``physicist's version'' of the wave equation involves a constant $c$, and says
+\[
 \ddot{u} - c^2 \Delta u = 0.
+\]
This constant $c$ is the speed of propagation. It tells us that in the wave equation, information propagates at a speed of at most $c$. We can see this very concretely in the $1$-dimensional wave equation, where d'Alembert wrote down an explicit solution given by
+\[
+ u(x, t) = \frac{1}{2} (\psi(x - ct) + \psi(x + ct)) + \frac{1}{2c} \int_{x - ct}^{x + ct} \psi'(y) \;\d y.
+\]
Thus, we see that the value of $u$ at any point $(x, t)$ is \emph{completely} determined by the values of $\psi$ and $\psi'$ in the interval $[x - ct, x + ct]$.
+\begin{center}
+ \begin{tikzpicture}
+ \node at (-1.5, 2) [circ] {};
+ \node at (-1.5, 2) [above] {$(x, t)$};
+ \fill [mgreen, opacity=0.5] (-3.5, 0) -- (-1.5, 2) -- (0.5, 0) -- cycle;
+ \draw (-3.5, 0) -- (-1.5, 2) -- (0.5, 0);
+
+ \draw [mred] (-4, 0) -- (1, 0) node [right] {$t = 0$};
+ \end{tikzpicture}
+\end{center}
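One can check d'Alembert's formula directly. Writing $\partial \psi$ for the derivative of $\psi$ (to avoid a clash with the initial datum $\psi'$), we compute
\[
 u_t(x, t) = \frac{c}{2}\left(\partial\psi(x + ct) - \partial\psi(x - ct)\right) + \frac{1}{2}\left(\psi'(x + ct) + \psi'(x - ct)\right),
\]
so that $u(x, 0) = \psi(x)$ and $u_t(x, 0) = \psi'(x)$. Moreover, every term in the formula is a function of $x - ct$ or of $x + ct$ alone, and any such twice differentiable function solves the wave equation, since
\[
 \partial_t^2 - c^2 \partial_x^2 = (\partial_t - c \partial_x)(\partial_t + c\partial_x).
\]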
+
This is true for a general hyperbolic PDE. In this case, the speed of propagation should be measured by the principal symbol $Q(\xi) = \sum a^{ij}(x, t) \xi_i \xi_j$. The correct way to formulate this result is as follows:
+
+Let $S_0 \subseteq U$ be an open set with (say) smooth boundary. Let $\tau: S_0 \to [0, T]$ be a smooth function vanishing on $\partial S_0$, and define
+\begin{align*}
+ D &= \{(t, x) \in U_T : x \in S_0, 0 < t < \tau(x)\}\\
+ S' &= \{(\tau(x), x): x \in S_0\}.
+\end{align*}
+We say $S'$ is \term{spacelike} if
+\[
+ \sum_{i, j = 1}^n a^{ij} \tau_{x_i} \tau_{x_j} < 1
+\]
+for all $x \in S_0$.
+
+\begin{thm}
 If $u$ is a weak solution of the initial boundary value problem above, and $S'$ is spacelike, then $u|_D$ depends only on $\psi|_{S_0}$, $\psi'|_{S_0}$ and $f|_{D}$.
+\end{thm}
+The proof is rather similar to the proof of uniqueness of solutions.
+
+\begin{proof}
+ Returning to the definition of a weak solution, we have
+ \[
 \int_{U_T} \left(- u_t v_t + \sum_{i, j = 1}^{n} a^{ij} u_{x_j} v_{x_i} + \sum_{i = 1}^n b^i u_{x_i} v + cuv\right) \;\d x \;\d t - \int_{\Sigma_0} \psi' v \;\d x = \int_{U_T} fv \;\d x\;\d t.
+ \]
 By linearity, it suffices to show that $u|_D = 0$ whenever $\psi|_{S_0} = \psi' |_{S_0} = 0$ and $f|_D = 0$. We take as test function
+ \[
+ v(t, x) =
+ \begin{cases}
+ \int_t^{\tau(x)} e^{-\lambda s} u(s, x)\;\d s & (t, x) \in D\\
+ 0 & (t, x) \not \in D
+ \end{cases}.
+ \]
+ One checks that this is in $H^1(U_T)$, and $v = 0$ on $\Sigma_T \cup \partial^* U_T$ with
+ \begin{align*}
+ v_{x_i} &= \tau_{x_i} e^{-\lambda \tau} u(x, \tau) + \int_t^{\tau(x)} e^{-\lambda s} u_{x_i} (x, s) \;\d s\\
+ v_t &= -e^{-\lambda t} u(x, t).
+ \end{align*}
+ Plugging these into the definition of a weak solution, we argue as in the previous uniqueness proof. Then
+ \begin{multline*}
 \int_D \frac{\d}{\d t} \left(\frac{1}{2} u^2 e^{-\lambda t} - \frac{1}{2} \sum a^{ij} v_{x_i} v_{x_j} e^{\lambda t} - \frac{1}{2} v^2 e^{\lambda t}\right)\\
 + \frac{\lambda}{2} \left(u^2 e^{-\lambda t} + \sum a^{ij} v_{x_i} v_{x_j} e^{\lambda t}+ v^2 e^{\lambda t}\right)\;\d x\;\d t\\
 = \int_D \left(-\frac{1}{2} \sum \dot{a}^{ij} v_{x_i} v_{x_j} e^{\lambda t} - \sum b^i u_{x_i} v - (c - 1) uv\right)\;\d x\;\d t.
+ \end{multline*}
+ Noting that $\int_D \;\d x \;\d t = \int_{S_0} \;\d x \int_0^{\tau(x)}\;\d t$, we can perform the $t$ integral of the $\frac{\d}{\d t}$ term, and we get contribution from $S'$ which is given by
+ \[
 I_{S'} =\int_{S_0}\left(\frac{1}{2} u^2 (\tau(x), x) e^{-\lambda \tau(x)} - \frac{1}{2} \sum_{i, j} a^{ij} \tau_{x_i} \tau_{x_j} u^2 e^{-\lambda \tau}\right)\;\d x.
 \]
 We have used $v = 0$ on $S'$ and $v_{x_i} = \tau_{x_i} u e^{-\lambda \tau}$ there. By the definition of a spacelike surface, we have $I_{S'} \geq 0$. The rest of the argument of the uniqueness proof goes through to conclude that $u = 0$ on $D$.
+\end{proof}
+This implies no signal can travel faster than a certain speed. In particular, if
+\[
 \sum_{i, j} a^{ij} \xi_i \xi_j \leq \mu |\xi|^2
+\]
+for some $\mu$, then no signal can travel faster than $\sqrt{\mu}$. This allows us to solve hyperbolic equations on unbounded domains by restricting to bounded domains.
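As a sketch of how the spacelike condition delivers this, consider the following cone construction (ignoring the non-smoothness of $\tau$ at the vertex, which can be smoothed out):
\begin{eg}
 Suppose $\sum a^{ij} \xi_i \xi_j \leq \mu |\xi|^2$, and fix $c' > \sqrt{\mu}$, $x_0 \in U$ and $R > 0$ with $B_R(x_0) \subseteq U$ and $R/c' \leq T$. Taking $S_0 = B_R(x_0)$ and
 \[
 \tau(x) = \frac{R - |x - x_0|}{c'},
 \]
 we have $\tau = 0$ on $\partial S_0$ and
 \[
 \sum_{i, j} a^{ij} \tau_{x_i} \tau_{x_j} \leq \mu |\D \tau|^2 = \frac{\mu}{(c')^2} < 1,
 \]
 so $S'$ is spacelike. Thus the solution on the cone $\{(t, x) : |x - x_0| < R - c' t\}$ depends only on the data in $B_R(x_0)$: a disturbance outside the ball needs time at least $R/c'$ to reach $x_0$.
\end{eg}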
+\printindex
+\end{document}
diff --git a/books/cam/III_M/combinatorics.tex b/books/cam/III_M/combinatorics.tex
new file mode 100644
index 0000000000000000000000000000000000000000..93ed4a244c8b73de1c758d79a69d3a77cc7ea86b
--- /dev/null
+++ b/books/cam/III_M/combinatorics.tex
@@ -0,0 +1,1782 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2017}
\def\nlecturer {B.\ Bollob\'as}
+\def\ncourse {Combinatorics}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+What can one say about a collection of subsets of a finite set satisfying certain conditions in terms of containment, intersection and union? In the past fifty years or so, a good many fundamental results have been proved about such questions: in the course we shall present a selection of these results and their applications, with emphasis on the use of algebraic and probabilistic arguments.
+
+The topics to be covered are likely to include the following:
+\begin{itemize}
 \item The de Bruijn--Erd\H{o}s theorem and its extensions.
+ \item The Graham--Pollak theorem and its extensions.
+ \item The theorems of Sperner, EKR, LYMB, Katona, Frankl and F\"uredi. % check Furedi
+ \item Isoperimetric inequalities: Kruskal--Katona, Harper, Bernstein, BTBT, and their applications.
+ \item Correlation inequalities, including those of Harris, van den Berg and Kesten, and the Four Functions Inequality.
+ \item Alon's Combinatorial Nullstellensatz and its applications.
+ \item LLLL and its applications.
+\end{itemize}
+
+\subsubsection*{Pre-requisites}
+The main requirement is mathematical maturity, but familiarity with the basic graph theory course in Part II would be helpful.
+}
+\tableofcontents
+
+\section{Hall's theorem}
+We shall begin with a discussion of Hall's theorem. Ideally, you've already met it in IID Graph Theory, but we shall nevertheless go through it again.
+
+\begin{defi}[Bipartite graph]\index{bipartite graph}
+ We say $G = (X, Y; E)$ is a \emph{bipartite graph} with bipartition $X$ and $Y$ if $(X \sqcup Y, E)$ is a graph such that every edge is between a vertex in $X$ and a vertex in $Y$.
+
+ We say such a bipartite graph is \term{$(k, \ell)$-regular} if every vertex in $X$ has degree $k$ and every vertex in $Y$ has degree $\ell$. A bipartite graph that is $(k, \ell)$-regular for some $k, \ell \geq 1$ is said to be \emph{biregular}\index{biregular graph}.
+\end{defi}
+
+\begin{defi}[Complete matching]\index{complete matching}
+ Let $G = (X, Y; E)$ be a bipartite graph with bipartition $X$ and $Y$. A \emph{complete matching} from $X$ to $Y$ is an injection $f: X \to Y$ such that $x\, f(x)$ is an edge for every $x \in X$.
+\end{defi}
+
+Hall's theorem gives us a necessary and sufficient condition for the existence of a complete matching. Let's try to first come up with a necessary condition. If there is a complete matching, then for any subset $S \subseteq X$, we certainly have $|\Gamma(S)| \geq |S|$, where \term{$\Gamma(S)$} is the set of neighbours of $S$. Hall's theorem says this is also sufficient.
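Before proving it, let us see the simplest way the condition can fail:
\begin{eg}
 Let $X = \{x_1, x_2\}$ and $Y = \{y_1, y_2\}$, with edges $x_1 y_1$ and $x_2 y_1$ only. Taking $S = X$, we have $\Gamma(S) = \{y_1\}$, so $|\Gamma(S)| = 1 < 2 = |S|$. Indeed there is no complete matching, since $x_1$ and $x_2$ would have to share $y_1$.
\end{eg}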
+
+\begin{thm}[Hall, 1935]\index{Hall's theorem}
+ A bipartite graph $G = (X, Y; E)$ has a complete matching from $X$ to $Y$ if and only if $|\Gamma(S)| \geq |S|$ for all $S \subseteq X$.
+\end{thm}
+This condition is known as \term{Hall's condition}.
+
+\begin{proof}
 We may assume $G$ is edge-minimal satisfying Hall's condition. We show that the edge set of $G$ is then a complete matching from $X$ to $Y$. For this, we need the following two properties:
 \begin{enumerate}
 \item Every vertex in $X$ has degree $1$;
 \item Every vertex in $Y$ has degree $0$ or $1$.
 \end{enumerate}
+
 We first examine the second condition. Suppose $y \in Y$ is such that there exist edges $x_1 y, x_2 y \in E$. By minimality, removing either edge $x_i y$ must violate Hall's condition. So there are sets $X_1, X_2 \subseteq X$ with $x_i \in X_i$, such that $|\Gamma(X_i)| = |X_i|$ and $x_i$ is the only neighbour of $y$ in $X_i$.
+
 Now consider the set $X_1 \cap X_2$. We know $\Gamma(X_1 \cap X_2) \subseteq \Gamma(X_1) \cap \Gamma(X_2)$. Moreover, this inclusion is strict, as $y$ lies in the right-hand side but not the left. So we have
 \[
 |\Gamma(X_1 \cap X_2)| \leq |\Gamma(X_1) \cap \Gamma(X_2)| - 1.
 \]
+ \]
+ But also
+ \begin{align*}
+ |X_1 \cap X_2| &\leq |\Gamma(X_1 \cap X_2)|\\
+ &\leq |\Gamma(X_1) \cap \Gamma(X_2)| - 1 \\
+ &= |\Gamma(X_1)| + |\Gamma(X_2)| - |\Gamma(X_1) \cup \Gamma(X_2)| - 1\\
+ &= |X_1| + |X_2| - |\Gamma(X_1\cup X_2)| - 1\\
+ &\leq |X_1| + |X_2| - |X_1 \cup X_2| - 1\\
+ &= |X_1 \cap X_2| - 1,
+ \end{align*}
+ which contradicts Hall's condition.
+
 One then sees that the first condition is also satisfied --- if $x \in X$ is a vertex, then the degree of $x$ certainly cannot be $0$, or else $|\Gamma(\{x\})| < |\{x\}|$; and $d(x)$ cannot be greater than $1$, or else, using the second condition, we could remove an edge at $x$ without violating Hall's condition, contradicting minimality.
+\end{proof}
+
+We shall now describe some consequences of Hall's theorem. They will be rather straightforward applications, but we shall later see they have some interesting consequences.
+
Let $\mathcal{A} = \{A_1, \ldots, A_m\}$ be a set system. All sets are finite. A set of \term{distinct representatives} of $\mathcal{A}$ is a set $\{a_1, \ldots, a_m\}$ of distinct elements $a_i \in A_i$.
+
+Under what condition do we have a set of distinct representatives? If we have one, then for any $I \subseteq [m] = \{1, 2, \ldots, m\}$, we have
+\[
+ \left|\bigcup_{i \in I} A_i \right| \geq |I|.
+\]
+We might hope this is sufficient.
+\begin{thm}
+ $\mathcal{A}$ has a set of distinct representatives iff for all $\mathcal{B} \subseteq \mathcal{A}$, we have
+ \[
+ \left|\bigcup_{B \in \mathcal{B}} B\right| \geq |\mathcal{B}|.
+ \]
+\end{thm}
+
+This is an immediate consequence of Hall's theorem.
+
+\begin{proof}
 Define a bipartite graph as follows --- we let $X = \mathcal{A}$ and $Y = \bigcup_{i \in [m]} A_i$, and join $A_i \in X$ to $y \in Y$ if $y \in A_i$. Then there is a complete matching of this graph from $X$ to $Y$ iff $\mathcal{A}$ has a set of distinct representatives, and since $\Gamma(\mathcal{B}) = \bigcup_{B \in \mathcal{B}} B$, the condition in the theorem is exactly Hall's condition. So we are done by Hall's theorem.
+\end{proof}
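The condition genuinely can fail, even when every individual set is non-empty:
\begin{eg}
 Let $A_1 = \{1\}$, $A_2 = \{2\}$ and $A_3 = \{1, 2\}$. Taking $\mathcal{B} = \mathcal{A}$, we have $|A_1 \cup A_2 \cup A_3| = 2 < 3 = |\mathcal{B}|$. Indeed, once we pick $a_1 = 1$ and $a_2 = 2$, there is no element of $A_3$ left to serve as $a_3$.
\end{eg}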
+
+\begin{thm}
+ Let $G = (X, Y; E)$ be a bipartite graph such that $d(x) \geq d(y)$ for all $x \in X$ and $y \in Y$. Then there is a complete matching from $X$ to $Y$.
+\end{thm}
+
+\begin{proof}
+ Let $d$ be such that $d(x) \geq d \geq d(y)$ for all $x \in X$ and $y \in Y$. For $S \subseteq X$ and $T \subseteq Y$, we let $e(S, T)$ be the number of edges between $S$ and $T$. Let $S \subseteq X$, and $T = \Gamma(S)$. Then we have
+ \[
+ e(S, T) = \sum_{x \in S} d(x) \geq d |S|,
+ \]
+ but on the other hand, we have
+ \[
+ e(S, T) \leq \sum_{y \in T} d(y) \leq d |T|.
+ \]
+ So we find that $|T| \geq |S|$. So Hall's condition is satisfied.
+\end{proof}
+
+\begin{cor}
+ If $G = (X, Y; E)$ is a $(k, \ell)$-regular bipartite graph with $1 \leq \ell \leq k$, then there is a complete matching from $X$ to $Y$.
+\end{cor}
+
+\begin{thm}
+ Let $G = (X, Y; E)$ be biregular and $A \subseteq X$. Then
+ \[
+ \frac{|\Gamma(A)|}{|Y|}\geq \frac{|A|}{|X|}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Suppose $G$ is $(k, \ell)$-regular. Then
+ \[
+ k|A| = e(A, \Gamma(A)) \leq \ell |\Gamma(A)|.
+ \]
+ Thus we have
+ \[
+ \frac{|\Gamma(A)|}{|Y|} \geq \frac{k|A|}{\ell |Y|}.% = \frac{|A|}{|X|}.
+ \]
+ On the other hand, we can count that
+ \[
+ |E| = |X| k = |Y| \ell,
+ \]
+ and so
+ \[
+ \frac{k}{\ell} = \frac{|Y|}{|X|}.
+ \]
+ So we are done.
+\end{proof}
+Briefly, this says biregular graphs ``expand''.
+
+\begin{cor}
+ Let $G = (X, Y; E)$ be biregular and let $|X| \leq |Y|$. Then there is a complete matching of $X$ into $Y$.
+\end{cor}
+In particular, for \emph{any} biregular graph, there is always a complete matching from one side of the graph to the other.
+
\begin{notation}\index{$X^{(r)}$}\index{$X^{(\leq r)}$}\index{$X^{(\geq r)}$}
+ Given a set $X$, we write $X^{(r)}$ for the set of all subsets of $X$ with $r$ elements, and similarly for $X^{(\geq r)}$ and $X^{(\leq r)}$.
+\end{notation}
+
+If $|X| = n$, then $|X^{(r)}| = \binom{n}{r}$.
+
+Now given a set $X$ and two numbers $r < s$, we can construct a biregular graph $(X^{(r)}, X^{(s)}; E)$, where $A \in X^{(r)}$ is joined to $B \in X^{(s)}$ if $A \subseteq B$.
+
+\begin{cor}
+ Let $1 \leq r < s \leq |X| = n$. Suppose $|\frac{n}{2} -r | \geq |\frac{n}{2} - s|$. Then there exists an injection $f: X^{(r)} \to X^{(s)}$ such that $A \subseteq f(A)$ for all $A \in X^{(r)}$.
+
+ If $|\frac{n}{2} - r| \leq |\frac{n}{2} - s|$, then there exists an injection $g: X^{(s)} \to X^{(r)}$ such that $A \supseteq g(A)$ for all $A \in X^{(s)}$.
+\end{cor}
+
+\begin{proof}
 Note that $|\frac{n}{2} - r| \leq |\frac{n}{2} - s|$ iff $\binom{n}{r} \geq \binom{n}{s}$. So in either case the smaller layer can be matched into the larger one: apply the corollary about biregular graphs to the biregular graph $(X^{(r)}, X^{(s)}; E)$ constructed above.
+\end{proof}
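Concretely, for small $n$ such injections are easy to exhibit:
\begin{eg}
 Take $X = \{1, 2, 3\}$, $r = 1$ and $s = 2$, so that $|\frac{n}{2} - r| = |\frac{n}{2} - s| = \frac{1}{2}$. Then
 \[
 f(\{1\}) = \{1, 2\},\quad f(\{2\}) = \{2, 3\},\quad f(\{3\}) = \{1, 3\}
 \]
 is an injection with $A \subseteq f(A)$ for all $A \in X^{(1)}$.
\end{eg}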
+
+\section{Sperner systems}
+In the next few chapters, we are going to try to understand the power set $\mathcal{P}(X)$ of a set. One particularly important structure of $\mathcal{P}(X)$ is that it is a \emph{graded poset}. A lot of the questions we ask can be formulated for arbitrary (graded) posets, but often we will only answer them for power sets, since that is what we are interested in.
+
+\begin{defi}[Chain]\index{chain}
+ A subset $C \subseteq S$ of a poset is a \emph{chain} if any two of its elements are comparable.
+\end{defi}
+
+\begin{defi}[Anti-chain]\index{anti-chain}
+ A subset $A \subseteq S$ is an \emph{anti-chain} if no two of its elements are comparable.
+\end{defi}
+
+Given a set $X$, the \term{power set} $\mathcal{P}(X)$\index{$\mathcal{P}(X)$} of $X$ can be viewed as a Boolean lattice. This is a poset by saying $A < B$ if $A \subsetneq B$.
+
In general, there are many questions we can ask about a poset $\mathcal{P}$. For example, we may ask what the largest possible size of an anti-chain in $\mathcal{P}$ is. While this is quite hard in general, we may be able to produce answers if we impose some extra structure on our posets. One particularly useful notion is that of a \emph{graded poset}.
+
+\begin{defi}[Graded poset]\index{graded poset}
+ We say $\mathcal{P} = (S, <)$ is a \emph{graded poset} if we can write $S$ as a disjoint union
+ \[
+ S = \coprod_{i = 0}^n S_i % union with dot on top
+ \]
+ such that
+ \begin{itemize}
+ \item $S_i$ is an anti-chain; and
+ \item $x < y$ iff there exist elements $x = z_i < z_{i + 1} < \cdots < z_j = y$ such that $z_h \in S_h$ for each $h$.
+ \end{itemize}
+\end{defi}
+
+\begin{eg}
+ If $X$ is a set, $\mathcal{P}(X)$ is a graded poset with $S_i = X^{(i)}$.
+\end{eg}
+If we want to obtain as large an anti-chain as possible, then we might try $X^{(i)}$ with $i = \lfloor \frac{n}{2}\rfloor$. But is this actually the largest possible? Or can we construct some funny-looking anti-chain that is even larger? Sperner says no.
+
+\begin{thm}[Sperner, 1928]
+ For $|X| = n$, the maximal size of an antichain in $\mathcal{P}(X)$ is $\binom{n}{\lfloor n/2\rfloor}$, witnessed by $X^{(\lfloor n/2\rfloor)}$.
+\end{thm}
+
+\begin{proof}
+ If $\mathcal{C}$ is a chain and $\mathcal{A}$ is an antichain, then $|\mathcal{A} \cap \mathcal{C}| \leq 1$. So it suffices to partition $\mathcal{P}(X)$ into
+ \[
+ m = \max_{k} \binom{n}{k} = \binom{n}{\lfloor n/2\rfloor} = \binom{n}{\lceil n/2 \rceil}
+ \]
+ many chains.
+
+ We can do so using the injections constructed at the end of the previous section. For $i > \lfloor \frac{n}{2}\rfloor$, the corollary gives injections $g_i: X^{(i)} \to X^{(i - 1)}$ such that $g_i(A) \subseteq A$ for all $A$. By chaining these together, we partition $X^{(\geq \lfloor n/2\rfloor)}$ into $m$ chains, each ending in $X^{(\lfloor \frac{n}{2}\rfloor)}$.
+ \begin{center}
+ \begin{tikzpicture}[yscale=0.5]
+ \node [circ] at (0, 3) {};
+
+ \fill (1.2, 0) ellipse (0.025 and 0.05); % because yscale
+ \fill (1.1, -0.05) ellipse (0.025 and 0.05);
+ \fill (0.93, 0.05) ellipse (0.025 and 0.05);
+
+ \fill (-1.2, 0) ellipse (0.025 and 0.05);
+ \fill (-1.1, -0.05) ellipse (0.025 and 0.05);
+ \fill (-0.93, 0.05) ellipse (0.025 and 0.05);
+
+ \draw (0, 0) ellipse (1.5 and 0.25);
+
+ \draw [-latex', mred] (0, 1) -- +(0, -1);
+ \draw [-latex', mblue] (0.2, 1.05) -- +(0, -1);
+ \draw [-latex', mblue] (0.3, 0.95) -- +(0, -1);
+ \draw [-latex', mblue] (-0.3, 1.05) -- +(0, -1);
+ \draw [-latex', mblue] (-0.2, 0.95) -- +(0, -1);
+
+ \draw [-latex'] (0.5, 1.02) -- +(0, -1);
+ \draw [-latex'] (0.8, 0.97) -- +(0, -1);
+ \draw [-latex'] (0.6, 1) -- +(0, -1);
+ \draw [-latex'] (-0.6, 1) -- +(0, -1);
+ \draw [-latex'] (-0.5, 1.04) -- +(0, -1);
+ \draw [-latex'] (-0.8, 0.97) -- +(0, -1);
+
+ \draw [fill=white, opacity=0.7] (0, 1) ellipse (1 and 0.2);
+
+ \draw [-latex', mred] (0, 2) -- +(0, -1);
+ \draw [-latex', mblue] (0.2, 2.05) -- +(0, -1);
+ \draw [-latex', mblue] (0.3, 1.95) -- +(0, -1);
+ \draw [-latex', mblue] (-0.3, 2.05) -- +(0, -1);
+ \draw [-latex', mblue] (-0.2, 1.95) -- +(0, -1);
+
+ \draw [fill=white, opacity=0.7] (0, 2) ellipse (0.5 and 0.15);
+ \draw [-latex', mred] (0, 3) -- +(0, -1);
+ \end{tikzpicture}
+ \end{center}
+
+ Similarly, we can partition $X^{(\leq \lfloor n/2\rfloor)}$ into $m$ chains with each chain ending in $X^{(\lfloor n/2\rfloor)}$. Then glue them together.
+\end{proof}
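These notes contain no code, but Sperner's theorem is easy to confirm by exhaustive search in a tiny case; the following sketch (names of my own choosing) checks that no antichain in $\mathcal{P}([4])$ beats the middle layer:

```python
from itertools import combinations

# Brute-force Sperner for n = 4: the largest antichain in P([4]) has size C(4,2).
n = 4
subsets = [frozenset(c) for r in range(n + 1)
           for c in combinations(range(1, n + 1), r)]

def is_antichain(family):
    # frozenset's < is proper containment, so this tests pairwise incomparability
    return all(not (a < b or b < a) for a in family for b in family)

best = 0
for mask in range(1 << len(subsets)):  # every family of subsets of [4]
    family = [s for i, s in enumerate(subsets) if mask >> i & 1]
    if len(family) > best and is_antichain(family):
        best = len(family)
print(best)  # 6
```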
+
+Another way to prove this result is to provide an alternative measure on how large an antichain can be, and this gives a stronger result.
+\begin{thm}[LYM inequality]\index{LYM inequality}
+ Let $\mathcal{A}$ be an antichain in $\mathcal{P}(X)$ with $|X| = n$. Then
+ \[
+ \sum_{r = 0}^n \frac{|\mathcal{A} \cap X^{(r)}|}{\binom{n}{r}} \leq 1.
+ \]
+ In particular, $|\mathcal{A}| \leq \max_r \binom{n}{r} = \binom{n}{\lfloor n/2\rfloor}$, as we already know.
+\end{thm}
+
+\begin{proof}
+ A chain $C_0 \subsetneq C_1 \subsetneq \cdots \subsetneq C_n$ is maximal if it has $n + 1$ elements. Moreover, there are $n!$ maximal chains, since we start with the empty set and then, given $C_i$, we produce $C_{i + 1}$ by picking one unused element and adding it to $C_i$.
+
+ For every maximal chain $\mathcal{C}$, we have $|\mathcal{C} \cap \mathcal{A}| \leq 1$. Moreover, every set of $k$ elements appears in $k! (n - k)!$ maximal chains, by a similar counting argument as above. So
+ \[
+ \sum_{A \in \mathcal{A}} |A|! (n - |A|)! \leq n!.
+ \]
+ Dividing by $n!$, the result follows, since $|A|! (n - |A|)!/n! = \binom{n}{|A|}^{-1}$.
+\end{proof}
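The two counting facts in this proof can be checked directly for $n = 4$ (an illustrative sketch; the antichain chosen here is the full middle layer, for which the bound is attained with equality):

```python
from itertools import permutations, combinations
from math import factorial

# Maximal chains in P([4]) correspond to permutations: add elements one at a time.
n = 4
chains = [[frozenset(p[:k]) for k in range(n + 1)]
          for p in permutations(range(1, n + 1))]

# The middle layer X^(2) is an antichain meeting each maximal chain at most once.
A = [frozenset(c) for c in combinations(range(1, n + 1), 2)]
meets = [sum(s in A for s in c) for c in chains]

# The counting bound: the sum over the antichain of |A|! (n - |A|)! is at most n!.
total = sum(factorial(len(s)) * factorial(n - len(s)) for s in A)
print(len(chains), max(meets), total)  # 24 1 24
```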
+
+There are analogous results for posets more general than just $\mathcal{P}(X)$. To formulate these results, we must introduce the following new terminology.
+\begin{defi}[Shadow]\index{shadow}
+ Given $A \subseteq S_i$, the \emph{shadow} at level $i - 1$ is
+ \[
+ \partial A = \{x \in S_{i - 1}: x < y\text{ for some }y \in A\}.
+ \]
+\end{defi}
+
+\begin{defi}[Downward-expanding poset]\index{downward-expanding poset}
+ A graded poset $P = (S, <)$ is said to be \emph{downward-expanding} if
+ \[
+ \frac{|\partial A|}{|S_{i - 1}|} \geq \frac{|A|}{|S_i|}
+ \]
+ for all $A \subseteq S_i$.
+
+ We similarly define \term{upward-expanding}, and say a poset is \term{expanding} if it is upward or downward expanding.
+\end{defi}
+
+\begin{defi}[Weight]\index{weight}
+ The \emph{weight} of a set $A \subseteq S$ is
+ \[
+ w(A) = \sum_{i = 0}^n \frac{|A \cap S_i|}{|S_i|}.
+ \]
+\end{defi}
+
+The theorem is that the LYM inequality holds in general for any downward-expanding poset.
+\begin{thm}
+ If $P$ is downward expanding and $A$ is an anti-chain, then $w(A) \leq 1$. In particular, $|A| \leq \max_i |S_i|$.
+
+ Since each $S_i$ is an anti-chain, the largest anti-chain has size $\max_i |S_i|$.
+\end{thm}
+
+\begin{proof}
+ Write $A_i = A \cap S_i$, and define the \emph{span} of $A$ to be
+ \[
+ \spn A = \max_{A_j \not= \emptyset} j - \min_{A_i \not= \emptyset} i.
+ \]
+ We do induction on $\spn A$.
+
+ If $\spn A = 0$, then $A \subseteq S_i$ for some $i$, so $w(A) = |A|/|S_i| \leq 1$. Otherwise, let $h = \max_{A_j \not= \emptyset} j$, and set $B_{h - 1} = \partial A_h$. Then since $A$ is an anti-chain, we know $A_{h - 1} \cap B_{h - 1} = \emptyset$.
+
+ We set $A' = (A \setminus A_h) \cup B_{h - 1}$. This is then another anti-chain, by the transitivity of $<$. We then have
+ \[
+ w(A) = w(A') + w(A_h) - w(B_{h - 1}) \leq w(A') \leq 1,
+ \]
+ where the first inequality uses the downward-expanding hypothesis and the second is the induction hypothesis.
+\end{proof}
+
+We may want to mimic our other proof of the fact that the largest size of an antichain in $\mathcal{P}(X)$ is $\binom{n}{\lfloor n/2\rfloor}$. This requires the notion of a \emph{regular poset}.
+
+\begin{defi}[Regular poset]\index{regular poset}\index{poset!regular}
+ We say a graded poset $(S, <)$ is \emph{regular} if for each $i$, there exist $r_i, s_i$ such that if $x \in S_i$, then $x$ dominates $r_i$ elements at level $i - 1$, and is dominated by $s_i$ elements at level $i + 1$.
+\end{defi}
+%We observe that being regular implies being expanding.
+%Note that being regular implies being expanding. So we know
+
+\begin{prop}
+ An anti-chain in a regular poset has weight $\leq 1$.
+\end{prop}
+
+\begin{proof}
+ Let $M$ be the number of maximal chains of length $(n + 1)$, and for each $x \in S_k$, let $m(x)$ be the number of maximal chains through $x$. Then
+ \[
+ m(x) = \prod_{i = 1}^k r_i \prod_{i = k}^{n - 1} s_i.
+ \]
+ So if $x, y \in S_i$, then $m(x) = m(y)$.
+
+ Now since every maximal chain passes through a unique element in $S_i$, for each $x \in S_i$, we have
+ \[
+ M = \sum_{x \in S_i} m(x) = |S_i| m(x).
+ \]
+ This gives the formula
+ \[
+ m(x) = \frac{M}{|S_i|}.
+ \]
+ Now let $A$ be an anti-chain. Then $A$ meets each maximal chain in at most one element. So we have
+ \[
+ M = \sum_{\text{maximal chains}} 1 \geq \sum_{x \in A} m(x) = \sum_{i = 0}^n |A \cap S_i| \cdot \frac{M}{|S_i|}.
+ \]
+ So it follows that
+ \[
+ \sum \frac{|A \cap S_i|}{|S_i|} \leq 1.\qedhere
+ \]
+\end{proof}
+
+%\subsection{Littlewood--Offord problem}
+%In the 1930s and 1940s, people were studying roots of random polynomials of the form
+%\[
+% \sum_{k = 0}^n \varepsilon_k x^k,
+%\]
+%where $\varepsilon_k = 0, 1$.
+
+Let's now turn to a different problem. Suppose $x_1, \ldots, x_n \in \C$, with each $|x_i| \geq 1$. Given $A \subseteq [n]$, we let
+\[
+ x_A = \sum_{i \in A} x_i.
+\]
+We now seek the largest size of a family $\mathcal{A} \subseteq \mathcal{P}([n])$ such that $|x_A - x_B| < 1$ for all $A, B \in \mathcal{A}$. More precisely, we want to find the best choice of $x_1, \ldots, x_n$ and $\mathcal{A}$ so that $|\mathcal{A}|$ is as large as possible while satisfying the above condition.
+
+If we are really lazy, then we might just choose $x_i = 1$ for all $i$. By taking $\mathcal{A} = [n]^{(\lfloor n/2\rfloor)}$, we can obtain $|\mathcal{A}| = \binom{n}{\lfloor n/2\rfloor}$.
+
+Erd\H{o}s showed that this is the best possible bound if we require the $x_i$ to be real.
+
+\begin{thm}[Erd\H{o}s, 1945]
+ Let $x_i$ be all real, $|x_i| \geq 1$. For $A \subseteq [n]$, let
+ \[
+ x_A = \sum_{i \in A} x_i.
+ \]
+ Let $\mathcal{A} \subseteq \mathcal{P}(n)$ be such that $|x_A - x_B| < 1$ for all $A, B \in \mathcal{A}$. Then $|\mathcal{A}| \leq \binom{n}{\lfloor n/2\rfloor}$.
+\end{thm}
+
+\begin{proof}
+ We claim that we may assume $x_i \geq 1$ for all $i$. To see this, suppose we instead had $x_1 = -2$, say. If we replace $x_1$ with $2$ and replace each $A \in \mathcal{A}$ with $A \Delta \{1\}$, then every sum $x_A$ is shifted by exactly $2$, so the differences $x_A - x_B$ are unchanged.
+
+ But if we assume that $x_i \geq 1$ for all $i$, then we are done, since $\mathcal{A}$ must be an anti-chain, for if $A, B \in \mathcal{A}$ and $A \subsetneq B$, then $x_B - x_A = x_{B\setminus A} \geq 1$.
+\end{proof}
+
+Doing it for complex numbers is considerably harder. In 1970, Kleitman found a gorgeous proof for \emph{every} normed space. This involves the notion of a \emph{symmetric decomposition}. To motivate this, we first consider the notion of a symmetric chain.
+
+\begin{defi}[Symmetric chain]\index{symmetric chain}
+ We say a chain $\mathcal{C} = \{C_i \subseteq C_{i + 1} \subseteq \cdots \subseteq C_{n - i}\}$ is \emph{symmetric} if $|C_j| = j$ for all $j$.
+\end{defi}
+
+\begin{thm}
+ $\mathcal{P}(n)$ has a decomposition into symmetric chains.
+\end{thm}
+
+\begin{proof}
+ We prove this by induction on $n$. In the case $n = 1$, we simply take $\{\emptyset, \{1\}\}$.
+
+ Now suppose $\mathcal{P}(n- 1)$ has a symmetric chain decomposition $\mathcal{C}_1 \cup \cdots \cup \mathcal{C}_t$. Given a symmetric chain
+ \[
+ \mathcal{C}_j = \{C_i, C_{i + 1}, \ldots, C_{n - 1 - i}\},
+ \]
+ we obtain two chains $\mathcal{C}_j^{(0)}, \mathcal{C}_j^{(1)}$ in $\mathcal{P}(n)$ by
+ \begin{align*}
+ \mathcal{C}_j^{(0)} &= \{C_i, C_{i + 1}, \ldots, C_{n - 1 - i}, C_{n - 1 - i} \cup \{n\}\}\\
+ \mathcal{C}_j^{(1)} &= \{C_i \cup\{n\}, C_{i + 1} \cup \{n\}, \ldots, C_{n - 2 - i} \cup \{n\}\}.
+ \end{align*}
+ Note that if $|\mathcal{C}_j| = 1$, then $\mathcal{C}_j^{(1)} = \emptyset$, and we drop this. Under this convention, we note that every $A \in \mathcal{P}(n)$ appears in exactly one $\mathcal{C}_j^{(\varepsilon)}$, and so we are done.
+\end{proof}
+
+We are not going to actually need the notion of symmetric chains in our proof. What we need is the ``profile'' of a symmetric chain decomposition. By a simple counting argument, we see that for $0 \leq i \leq \frac{n}{2}$, the number of chains with $n + 1 - 2i$ sets is
+\[
+ \ell(n, i) \equiv \binom{n}{i} - \binom{n}{i - 1}.
+\]
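The recursive construction and the claimed profile are easy to verify computationally; in this sketch (not part of the notes) `scd` is my name for the construction in the proof above:

```python
from math import comb
from collections import Counter

def scd(n):
    """Symmetric chain decomposition of P([n]) via the recursive construction."""
    if n == 1:
        return [[frozenset(), frozenset({1})]]
    out = []
    for c in scd(n - 1):
        out.append(c + [c[-1] | {n}])              # C^(0): extend the top set by n
        if len(c) > 1:
            out.append([s | {n} for s in c[:-1]])  # C^(1): add n to all but the top
    return out

n = 6
chains = scd(n)
flat = [s for c in chains for s in c]
# the chains partition P([n]), and each chain {C_i, ..., C_{n-i}} is symmetric
ok_partition = len(flat) == 2 ** n == len(set(flat))
ok_symmetric = all([len(s) for s in c] == list(range(len(c[0]), n - len(c[0]) + 1))
                   for c in chains)
# profile: the number of chains with n + 1 - 2i sets is C(n,i) - C(n,i-1)
profile = Counter(len(c) for c in chains)
ok_profile = all(profile[n + 1 - 2 * i] == comb(n, i) - (comb(n, i - 1) if i else 0)
                 for i in range(n // 2 + 1))
print(ok_partition, ok_symmetric, ok_profile)  # True True True
```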
+\begin{thm}[Kleitman, 1970]
+ Let $x_1, x_2, \ldots, x_n$ be vectors in a normed space with $\|x_i\| \geq 1$ for all $i$. For $A \in \mathcal{P}(n)$, we set
+ \[
+ x_A = \sum_{i \in A} x_i.
+ \]
+ Let $\mathcal{A} \subseteq \mathcal{P}(n)$ be such that $\|x_A - x_B\| < 1$ for all $A, B \in \mathcal{A}$. Then $|\mathcal{A}| \leq \binom{n}{\lfloor n/2\rfloor}$.
+\end{thm}
+This bound is indeed the best, since we can pick $x_i = x$ for some fixed $x$ with $\|x\| \geq 1$, and then take $\mathcal{A} = [n]^{(\lfloor n/2\rfloor)}$.
+
+\begin{proof}
+ Call $\mathcal{F} \subseteq \mathcal{P}(n)$ \emph{sparse} if $\|x_E - x_F\| \geq 1$ for all $E, F \in \mathcal{F}$, $E \not= F$. Note that if $\mathcal{F}$ is sparse, then $|\mathcal{F} \cap \mathcal{A}| \leq 1$. So if we can find a decomposition of $\mathcal{P}(n)$ into $\binom{n}{\lfloor n/2\rfloor}$ sparse sets, then we are done.
+
+ We call a partition $\mathcal{P}(n) = \mathcal{F}_1 \cup \cdots \cup \mathcal{F}_t$ \emph{symmetric} if the number of families with $n + 1 - 2i$ sets is $\ell(n, i)$, i.e.\ the ``profile'' is that of a symmetric chain decomposition.
+
+ \begin{claim}
+ $\mathcal{P}(n)$ has a symmetric decomposition into sparse families.
+ \end{claim}
+
+ We again induct on $n$. When $n = 1$, we can take $\{\emptyset, \{1\}\}$. Now suppose we have a symmetric decomposition of $\mathcal{P}(n - 1)$ into sparse families $\mathcal{F}_1 \cup \cdots \cup \mathcal{F}_t$.
+
+ Given $\mathcal{F}_j$, we construct $\mathcal{F}_j^{(0)}$ and $\mathcal{F}_j^{(1)}$ ``as before''. We pick some $D \in \mathcal{F}_j$, to be decided later, and we take
+ \begin{align*}
+ \mathcal{F}_j^{(0)} &= \mathcal{F}_j \cup\{ D \cup \{n\}\}\\
+ \mathcal{F}_j^{(1)} &= \{ E \cup \{n\}: E \in \mathcal{F}_j \setminus \{D\}\}.
+ \end{align*}
+ The resulting partition is certainly still symmetric. The question is whether the parts are still sparse, and this is where the choice of $D$ comes in. The collection $\mathcal{F}_j^{(1)}$ is certainly still sparse, and we must pick $D$ such that $\mathcal{F}_j^{(0)}$ is sparse.
+
+ To do so, we use Hahn--Banach to obtain a linear functional $f$ such that $\|f\| = 1$ and $f(x_n) = \|x_n\| \geq 1$. We can then pick $D$ to maximize $f(x_D)$. Then we check that if $E \in \mathcal{F}_j$, then
+ \[
+ f(x_{D \cup \{n\}} - x_E) = f(x_D) - f(x_E) + f(x_n).
+ \]
+ By assumption, $f(x_n) \geq 1$ and $f(x_D) \geq f(x_E)$. So this is $\geq 1$. Since $\|f\| = 1$, it follows that $\|x_{D \cup \{n\}} - x_E\| \geq 1$.
+\end{proof}
+
+\section{The Kruskal--Katona theorem}
+For $\mathcal{A} \subseteq X^{(r)}$, recall we defined the lower shadow to be
+\[
+ \partial \mathcal{A} = \{B \in X^{(r - 1)} : B \subseteq A \text{ for some } A \in \mathcal{A}\}.
+\]
+The question we wish to understand is how small we can make $\partial \mathcal{A}$, relative to $\mathcal{A}$. Crudely, we can bound the size by
+\[
+ |\partial \mathcal{A}| \geq |\mathcal{A}| \frac{\binom{n}{r - 1}}{\binom{n}{r}} = \frac{r}{n - r + 1} |\mathcal{A}|.
+\]
+But surely we can do better than this. To do so, one reasonable strategy is to first produce some choice of $\mathcal{A}$ we think is optimal, and see how we can prove that it is indeed optimal.
+
+To do so, let's look at some examples.
+\begin{eg}
+ Take $n = 6$ and $r = 3$. We pick
+ \[
+ \mathcal{A} = \{123, 456, 124, 256\}.
+ \]
+ Then we have
+ \[
+ \partial \mathcal{A} = \{12, 13, 23, 45, 46, 56, 14, 24, 25, 26\},
+ \]
+ and this has $10$ elements.
+
+ But if we instead had
+ \[
+ \mathcal{A} = \{123, 124, 134, 234\},
+ \]
+ then
+ \[
+ \partial \mathcal{A} = \{12, 13, 14, 23, 24, 34\},
+ \]
+ and this only has $6$ elements, and this is much better.
+\end{eg}
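A quick computation confirming the two shadow sizes in this example (the helper `shadow` and the variable names are mine):

```python
from itertools import combinations

def shadow(family):
    """Lower shadow: all (r-1)-subsets of members of the family."""
    return {frozenset(b) for a in family for b in combinations(sorted(a), len(a) - 1)}

A_spread = [{1, 2, 3}, {4, 5, 6}, {1, 2, 4}, {2, 5, 6}]
A_bunched = [{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}]
print(len(shadow(A_spread)), len(shadow(A_bunched)))  # 10 6
```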
+Intuitively, the second choice of $\mathcal{A}$ is better because the terms are ``bunched'' together.
+
+More generally, we would expect that if we have $|\mathcal{A}| = \binom{k}{r}$, then the best choice should be $\mathcal{A} = [k]^{(r)}$, with $|\partial \mathcal{A}| = \binom{k}{r - 1}$. For other choices of $\mathcal{A}$, perhaps a reasonable strategy is to find the largest $k$ such that $\binom{k}{r} < |\mathcal{A}|$, and then take $\mathcal{A}$ to be $[k]^{(r)}$ plus some elements. To give a concrete description of which extra elements to pick, our strategy is to define a total order on $[n]^{(r)}$, and say we should pick the initial segment of length $|\mathcal{A}|$.
+
+This suggests the following proof strategy:
+\begin{enumerate}
+ \item Come up with a total order on $[n]^{(r)}$, or even $\N^{(r)}$ such that $[k]^{(r)}$ are initial segments for all $k$.
+ \item Construct some ``compression'' operators\index{compression operator} $\mathcal{P}(\N^{(r)}) \to \mathcal{P}(\N^{(r)})$ that push each element down the ordering without increasing $|\partial \mathcal{A}|$.
+ \item Show that the only subsets of $\N^{(r)}$ that are fixed by the compression operators are the initial segments.
+\end{enumerate}
+There are two natural orders one can put on $[n]^{(r)}$:
+\begin{itemize}
+ \item lex\index{lex order}: We say $A < B$ if $\min (A \Delta B) \in A$.
+ \item colex\index{colex order}: We say $A < B$ if $\max (A \Delta B) \in B$.
+\end{itemize}
+\begin{eg}
+ For $r = 3$, the elements of $X^{(3)}$ in colex order are
+ \[
+ 123, 124, 134, 234, 125, 135, 235, 145, 245, 345, 126,\ldots
+ \]
+\end{eg}
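One convenient way to generate the colex order computationally (a sketch of mine, not from the notes) uses the standard observation that colex agrees with ordering sets by their binary encodings $\sum_{a \in A} 2^a$:

```python
from itertools import combinations

# Colex compares by the largest element of the symmetric difference; this is
# the same as comparing the binary encodings sum_{a in A} 2^a.
def colex_key(A):
    return sum(1 << a for a in A)

order = sorted(combinations(range(1, 7), 3), key=colex_key)
print(order[:11])  # 123, 124, 134, 234, 125, 135, 235, 145, 245, 345, 126
```

As a check that initial segments behave as claimed, the first $\binom{5}{3} = 10$ sets above are exactly $[5]^{(3)}$.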
+
+In fact, colex is an order on $\N^{(r)}$, and we see that the initial segment with $\binom{n}{r}$ elements is exactly $[n]^{(r)}$. So this is a good start.
+
+If we believe that colex is indeed the right order to use, then we ought to construct some compression operators. For $i \not= j$, we define the $(i, j)$-compression as follows: for a set $A \in X^{(r)}$, we define
+\[
+ C_{ij}(A) =
+ \begin{cases}
+ (A \setminus \{j\}) \cup \{i\} & j \in A, i \not\in A\\
+ A & \text{otherwise}
+ \end{cases}
+\]
+For a set system, we define
+\[
+ C_{ij}(\mathcal{A}) = \{C_{ij}(A): A \in \mathcal{A}\} \cup \{A \in \mathcal{A}:C_{ij}(A) \in \mathcal{A}\}
+\]
+We can picture our universe of sets as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0,1,2} {
+ \node [circ] at (\x, 0) {};
+ \node [circ] at (\x, -1) {};
+ \draw [->] (\x, -0.2) -- (\x, -0.8) {};
+ }
+ \node [left] at (0, 0) {\small$B \cup \{j\}$};
+ \node [left] at (0, -1) {\small$B \cup \{i\}$};
+ \foreach \y in {3, 4, 5} {
+ \node [circ] at (\y, -0.5) {};
+ }
+ \end{tikzpicture}
+\end{center}
+The set system $\mathcal{A}$ is some subset of all these points, and what we are doing is pushing everything down when possible.
+
+It is clear that we have $|C_{ij}(\mathcal{A})| = |\mathcal{A}|$. We further observe that
+
+\begin{lemma}
+ We have
+ \[
+ \partial C_{ij}(\mathcal{A}) \subseteq C_{ij}(\partial \mathcal{A}).
+ \]
+ In particular, $|\partial C_{ij}(\mathcal{A})| \leq |\partial \mathcal{A}|$.\fakeqed
+\end{lemma}
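The lemma, along with the fact that compressions preserve the size of the family, can be spot-checked on random families; this is an illustrative test of mine, not a proof:

```python
import random
from itertools import combinations

def shadow(family):
    return {frozenset(b) for a in family for b in combinations(sorted(a), len(a) - 1)}

def c_ij(A, i, j):
    # replace j by i, when j is present and i is not
    return (A - {j}) | {i} if j in A and i not in A else A

def compress(family, i, j):
    """(i,j)-compression: replace A by C_ij(A) unless the image is already present."""
    out = set()
    for A in family:
        B = c_ij(A, i, j)
        out.add(A if B in family else B)
    return out

random.seed(1)
all_r = [frozenset(c) for c in combinations(range(1, 8), 3)]
for _ in range(200):
    fam = set(random.sample(all_r, 12))
    i, j = sorted(random.sample(range(1, 8), 2))
    comp = compress(fam, i, j)
    assert len(comp) == len(fam)
    assert shadow(comp) <= compress(shadow(fam), i, j)  # the lemma
print("ok")
```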
+
+Given $\mathcal{A} \subseteq X^{(r)}$, we say $\mathcal{A}$ is \emph{left-compressed} if $C_{ij}(\mathcal{A}) = \mathcal{A}$ for all $i < j$. Is this good enough?
+
+Of course initial segments are left-compressed. However, it turns out the converse is not true.
+
+\begin{eg}
+ The set system $\{123, 124, 125, 126\}$ is left-compressed, but not an initial segment.
+\end{eg}
+
+So we want to come up with ``more powerful'' compressions. For $U, V \in X^{(s)}$ with $U \cap V = \emptyset$, we define a $(U, V)$-compression as follows: for $A \subseteq X$, we define
+\[
+ C_{UV}(A) =
+ \begin{cases}
+ (A \setminus V) \cup U & A \cap (U \cup V) = V\\
+ A & \text{otherwise}
+ \end{cases}
+\]
+Again, for $\mathcal{A} \subseteq X^{(r)}$, we can define
+\[
+ C_{UV}(\mathcal{A}) = \{C_{UV}(A) : A \in \mathcal{A}\} \cup \{A \in \mathcal{A}: C_{UV}(A) \in \mathcal{A}\}.
+\]
+Again, $\mathcal{A}$ is $(U, V)$-compressed if $C_{UV}(\mathcal{A}) = \mathcal{A}$.
+
+This time the behaviour of the compression is more delicate.
+\begin{lemma}
+ Let $\mathcal{A} \subseteq X^{(r)}$ and $U, V \in X^{(s)}$, $U \cap V = \emptyset$. Suppose for all $u \in U$, there exists $v \in V$ such that $\mathcal{A}$ is $(U \setminus \{u\}, V \setminus \{v\})$-compressed. Then
+ \[
+ \partial C_{UV} (\mathcal{A}) \subseteq C_{UV}(\partial \mathcal{A}).\tag*{$\square$}
+ \]
+\end{lemma}
+
+\begin{lemma}
+ $\mathcal{A} \subseteq X^{(r)}$ is an initial segment of $X^{(r)}$ in colex if and only if it is $(U, V)$-compressed for all $U, V$ disjoint with $|U| = |V|$ and $\max V > \max U$.
+\end{lemma}
+
+\begin{proof}
+ $\Rightarrow$ is clear. Conversely, suppose $\mathcal{A}$ is $(U, V)$-compressed for all such $U, V$. If $\mathcal{A}$ is not an initial segment, then there exist $B \in \mathcal{A}$ and $C \not\in \mathcal{A}$ such that $C < B$. Then $\mathcal{A}$ is not $(C \setminus B, B \setminus C)$-compressed, a contradiction.
+\end{proof}
+
+\begin{lemma}
+ Given $\mathcal{A} \subseteq X^{(r)}$, there exists $\mathcal{B} \subseteq X^{(r)}$ such that $\mathcal{B}$ is $(U, V)$-compressed for all $|U| = |V|$, $U \cap V= \emptyset$, $\max V > \max U$, and moreover
+ \[
+ |\mathcal{B}| = |\mathcal{A}|, |\partial \mathcal{B}| \leq |\partial \mathcal{A}|.\tag{$*$}
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $\mathcal{B}$ be such that
+ \[
+ \sum_{B \in \mathcal{B}} \sum_{i \in B} 2^i
+ \]
+ is minimal among those $\mathcal{B}$ satisfying $(*)$. We claim that this $\mathcal{B}$ will do. Indeed, if there exists a pair $(U, V)$ with $|U| = |V|$, $\max V > \max U$ and $C_{UV}(\mathcal{B}) \not= \mathcal{B}$, then pick such a pair with $|U|$ minimal. The previous lemma applies: if $|U| = 1$, its hypothesis is trivial; otherwise, given any $u \in U$, pick any $v \in V$ other than $\max V$, so that $\max (V \setminus \{v\}) > \max (U \setminus \{u\})$, and then $\mathcal{B}$ is $(U \setminus \{u\}, V \setminus \{v\})$-compressed by the minimality of $|U|$. Applying the $(U, V)$-compression then decreases the sum, which is a contradiction.
+\end{proof}
+
+From these, we conclude that
+\begin{thm}[Kruskal 1963, Katona 1968]
+ Let $\mathcal{A} \subseteq X^{(r)}$, and let $\mathcal{C} \subseteq X^{(r)}$ be the initial segment with $|\mathcal{C}| = |\mathcal{A}|$. Then
+ \[
+ |\partial \mathcal{A}| \geq |\partial \mathcal{C}|.
+ \]
+\end{thm}
+
+We can now define the \term{shadow function}
+\[
+ \partial^{(r)}(m) = \min\{|\partial \mathcal{A}| : \mathcal{A} \subseteq X^{(r)}, |\mathcal{A}| = m\}.
+\]
+This does not depend on the size of $X$ as long as $X$ is large enough to accommodate $m$ sets, i.e.\ $\binom{n}{r} \geq m$. It would be nice if we can concretely understand this function. So let's try to produce some initial segments.
+
+Essentially by definition, an initial segment is uniquely determined by the last element. So let's look at some examples.
+\begin{eg}
+ Take $r = 4$. What is the size of the initial segment ending in $3479$? We note that anything whose largest element is less than $9$ comes before $3479$, and there are $\binom{8}{4}$ such elements. If the largest element is $9$, then we are still fine if the second largest is less than $7$, and there are $\binom{6}{3}$ such elements. Continuing, we find that there are
+ \[
+ \binom{8}{4} + \binom{6}{3} + \binom{4}{2}
+ \]
+ such elements.
+\end{eg}
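This count can be confirmed by locating $3479$ in the colex order (a sketch of mine); note that every set preceding $3479$ in colex has largest element at most $9$, so it suffices to search within $[9]^{(4)}$:

```python
from itertools import combinations
from math import comb

def colex_key(A):
    return sum(1 << a for a in A)  # colex order = order of binary encodings

# 1-based position of 3479 in the colex order on [9]^(4) = size of the segment
order = sorted(combinations(range(1, 10), 4), key=colex_key)
size = order.index((3, 4, 7, 9)) + 1
print(size, comb(8, 4) + comb(6, 3) + comb(4, 2))  # 96 96
```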
+
+Given $m_r > m_{r - 1} > \cdots > m_s \geq s$, we let $\mathcal{B}^{(r)}(m_r, m_{r - 1}, \ldots, m_s)$ be the initial segment ending in the element
+\[
+ m_r + 1, m_{r - 1} + 1, \ldots, m_{s + 1} + 1, m_s, m_s - 1, m_s - 2, \ldots, m_s - (s - 1).
+\]
+This consists of the sets $\{a_1 < a_2 < \cdots < a_r\}$ such that there exists $j \in [s, r]$ with $a_i = m_i + 1$ for $i > j$, and $a_j \leq m_j$.
+
+To construct an element in $\mathcal{B}^{(r)}(m_r, \ldots, m_s)$, we need to first pick a $j$, and then select $j$ elements that are $\leq m_j$. Thus, we find that
+\[
+ |\mathcal{B}^{(r)}(m_r, \ldots, m_s)| = \sum_{j = s}^r \binom{m_j}{j} = b^{(r)} (m_r, \ldots, m_s).
+\]
+We see that this $\mathcal{B}^{(r)}$ is indeed the initial segment in the colex order ending in that element. So we know that for all $m \in \N$, there is a unique sequence $m_r > m_{r - 1} > \cdots > m_s \geq s$ such that $m = \sum_{j = s}^r \binom{m_j}{j}$.
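The representation is found greedily: repeatedly take the largest possible binomial coefficient. A small check of existence and the strictly decreasing property (the function name `cascade` is mine):

```python
from math import comb

def cascade(m, r):
    """Greedy binomial representation m = C(m_r, r) + C(m_{r-1}, r-1) + ... + C(m_s, s)."""
    rep, i = [], r
    while m > 0:
        mi = i
        while comb(mi + 1, i) <= m:  # largest m_i with C(m_i, i) <= m
            mi += 1
        rep.append((mi, i))
        m -= comb(mi, i)
        i -= 1
    return rep

print(cascade(96, 4))  # [(8, 4), (6, 3), (4, 2)], i.e. 96 = C(8,4) + C(6,3) + C(4,2)
for m in range(1, comb(10, 4) + 1):
    rep = cascade(m, 4)
    assert sum(comb(a, i) for a, i in rep) == m
    assert all(x > y for (x, _), (y, _) in zip(rep, rep[1:]))  # m_r > m_{r-1} > ...
    assert rep[-1][0] >= rep[-1][1]                            # m_s >= s
```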
+
+It is also not difficult to find the shadow of this set. After a bit of thinking, we see that it is given by
+\[
+ \mathcal{B}^{(r - 1)} (m_r, \ldots, m_s).
+\]
+Thus, we find that
+\[
+ \partial^{(r)}\left( \sum_{i = s}^r \binom{m_i}{i}\right) = \sum_{i = s}^r \binom{m_i}{i - 1},
+\]
+and moreover every $m$ can be expressed in the form $\sum_{i = s}^r \binom{m_i}{i}$ for some unique choices of $m_i$.
+
+In particular, we have
+\[
+ \partial^{(r)}\left(\binom{n}{r}\right) = \binom{n}{r - 1}.
+\]
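For small parameters we can compare the colex initial segment against every family of the same size, confirming both Kruskal--Katona and the displayed special case; brute force is feasible here (a sketch of mine) since $\binom{5}{3} = 10$:

```python
from itertools import combinations
from math import comb

def shadow(family):
    return {frozenset(b) for a in family for b in combinations(sorted(a), len(a) - 1)}

n, r = 5, 3
all_r = sorted(combinations(range(1, n + 1), r),
               key=lambda A: sum(1 << a for a in A))  # colex order on [5]^(3)
for m in range(1, comb(n, r) + 1):
    brute = min(len(shadow(f)) for f in combinations(all_r, m))
    assert brute == len(shadow(all_r[:m]))  # the colex initial segment is optimal
print(len(shadow(all_r)))  # 10 = C(5,2)
```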
+Since it might be slightly annoying to write $m$ in the form $\sum_{i = s}^r \binom{m_i}{i}$, Lov\'asz provided another slightly more convenient bound.
+\begin{thm}[Lov\'asz, 1979]
+ If $\mathcal{A} \subseteq X^{(r)}$ with $|\mathcal{A}| = \binom{x}{r}$ for some real $x \geq r$, then
+ \[
+ |\partial \mathcal{A}| \geq \binom{x}{r - 1}.
+ \]
+ This is best possible if $x$ is an integer.
+\end{thm}
+
+\begin{proof}
+ Let
+ \begin{align*}
+ \mathcal{A}_0 &= \{A \in \mathcal{A}: 1 \not \in A\}\\
+ \mathcal{A}_1 &= \{A \in \mathcal{A}: 1 \in A\}.
+ \end{align*}
+ For convenience, we write
+ \[
+ \mathcal{A}_1 - 1 = \{A \setminus \{1\}: A \in \mathcal{A}_1\}.
+ \]
+ We may assume $\mathcal{A}$ is $(i, j)$-compressed for all $i < j$. We induct on $r$ and then on $|\mathcal{A}|$. We have
+ \[
+ |\mathcal{A}_0| = |\mathcal{A}| - |\mathcal{A}_1|.
+ \]
+ We note that $\mathcal{A}_1$ is non-empty, as $\mathcal{A}$ is left-compressed. So $|\mathcal{A}_0| < |\mathcal{A}|$.
+
+ If $r = 1$ and $|\mathcal{A}| = 1$ then there is nothing to do.
+
+ Now observe that $\partial \mathcal{A}_0 \subseteq \mathcal{A}_1 - 1$, since if $A \in \mathcal{A}_0$ and $B \subseteq A$ is such that $|A \setminus B| = 1$, then $B \cup \{1\} \in \mathcal{A}_1$, as $\mathcal{A}$ is left-compressed. So it follows that
+ \[
+ |\partial \mathcal{A}_0| \leq |\mathcal{A}_1|.
+ \]
+ Suppose $|\mathcal{A}_1| < \binom{x - 1}{r - 1}$. Then
+ \[
+ |\mathcal{A}_0| > \binom{x}{r} - \binom{x - 1}{r - 1} = \binom{x - 1}{r}.
+ \]
+ Therefore by induction, we have
+ \[
+ |\partial \mathcal{A}_0| > \binom{x - 1}{r - 1}.
+ \]
+ This is a contradiction, since $|\partial \mathcal{A}_0| \leq |\mathcal{A}_1|$. Hence $|\mathcal{A}_1| \geq \binom{x - 1}{r - 1}$. Hence we are done, since
+ \[
+ |\partial \mathcal{A}| \geq |\partial \mathcal{A}_1| = |\mathcal{A}_1| + |\partial (\mathcal{A}_1 - 1)| \geq \binom{x - 1}{r - 1} + \binom{x - 1}{r - 2} = \binom{x}{r - 1}.\qedhere
+ \]
+\end{proof}
+
+%$r = 2$ is a non-starter. Given $m$ edges on $[n]$, at least how many vertices do they have altogether? If $m > \binom{k}{2}$, then $|\mathrm{shadow}| \geq k + 1$. Thus, if
+%\[
+% \binom{k - 1}{2} < m \leq \binom{k}{2},
+%\]
+%then for
+%\[
+% \mathcal{A} \subseteq X^{(2)} = [n]^{(2)}
+%\]
+%with $|\mathcal{A}| = m$, then $|\partial A| \geq k$. This is the best possible.
+%
+%We hope that if $\mathcal{A} \subseteq X^{(r)}$ and $\mathcal{C}$ is the initial segment of $X^{(r)}$ in colex with $|\mathcal{C}| = |\mathcal{A}|$, then
+%\[
+% |\partial \mathcal{A}| \geq |\partial \mathcal{C}|.
+%\]
+%
+%
+
+\section{Isoperimetric inequalities}
+We are now going to ask a question similar to the one answered by Kruskal--Katona. Kruskal--Katona answered the question of how small $\partial \mathcal{A}$ can be among all $\mathcal{A} \subseteq X^{(r)}$ of fixed size. Clearly, we obtain the same answer if we sought to minimize the upper shadow instead of the lower. But what happens if we want to minimize both the upper shadow and the lower shadow? Or, more generally, if we allow $\mathcal{A} \subseteq \mathcal{P}(X)$ to contain sets of different sizes, how small can the set of ``neighbours'' of $\mathcal{A}$ be?
+
+\begin{defi}[Boundary]\index{boundary}
+ Let $G$ be a graph and $A \subseteq V(G)$. Then the \emph{boundary} $b(A)$ is the set of all $x \in V(G)$ such that $x \not \in A$ but $x$ is adjacent to some vertex of $A$.
+\end{defi}
+
+\begin{eg}
+ In the following graph
+ \begin{center}
+ \begin{tikzpicture}
+
+ \draw (0, 0) rectangle (1, 1);
+ \draw (1, 1) -- (2.5, 1) -- (3.07736, 0) -- (1.92264, 0) -- (2.5, 1);
+
+ \node [circ, mgreen] at (0, 0) {};
+ \node [circ, mgreen] at (1, 0) {};
+ \node [circ, mgreen] at (1, 1) {};
+ \node [circ, red] at (0, 1) {};
+ \node [circ, red] (a) at (2.5, 1) {};
+
+ \end{tikzpicture}
+ \end{center}
+ the boundary of the green vertices is the red vertices.
+\end{eg}
+
+An \term{isoperimetric inequality} on $G$ is an inequality of the form
+\[
+ |b(A)| \geq f(|A|)
+\]
+for all $A \subseteq G$. Of course, we could set $f \equiv 0$, but we would like to do better than that.
+
+The ``continuous version'' of this problem is well-known. For example, among plane regions of a given area, the disc minimizes the perimeter. Similarly, among subsets of $\R^3$ of a given volume, the solid sphere has the smallest surface area. Slightly more exotically, among subsets of $S^2$ of given area, the circular cap has the smallest perimeter.
+
+Before we proceed, we note the definition of a \emph{neighbourhood}:
+\begin{defi}[Neighbourhood]\index{neighbourhood}
+ Let $G$ be a graph and $A \subseteq V(G)$. Then the \emph{neighbourhood} of $A$ is $N(A) = A \cup b(A)$.
+\end{defi}
+Of course, $|b(A)| = |N(A)| - |A|$, and it is often convenient to express and prove our isoperimetric inequalities in terms of the neighbourhood instead.
+
+If we look at our continuous cases, then we observe that all our optimal figures are balls, i.e.\ they consist of all the points at distance at most $r$ from some fixed point. We would hope that this pattern generalizes.
+
+Of course, it would be a bit ambitious to hope that balls are optimal for all graphs. However, we can at least show that it is true for the graphs we care about, namely graphs obtained from power sets.
+
+\begin{defi}[Discrete cube]\index{discrete cube}
+ Given a set $X$, we turn $\P(X)$ into a graph as follows: join $x$ to $y$ if $|x \Delta y| = 1$, i.e.\ if $x = y \cup \{a\}$ for some $a \not \in y$, or vice versa.
+
+ This is the \emph{discrete cube} $Q_n$, where $n = |X|$.
+\end{defi}
+
+\begin{eg}
+ $Q_3$ looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \node (123) at (0, 0) {123};
+
+ \node (13) at (0, -1) {13};
+ \node (12) at (-1, -1) {12};
+ \node (23) at (1, -1) {23};
+
+ \node (2) at (0, -2) {2};
+ \node (1) at (-1, -2) {1};
+ \node (3) at (1, -2) {3};
+
+ \node (0) at (0, -3) {$\emptyset$};
+
+ \draw (0) -- (1) -- (12) -- (123);
+ \draw (0) -- (3) -- (23) -- (123);
+ \draw (0) -- (2) -- (12);
+ \draw (2) -- (23);
+ \draw (1) -- (13) -- (123);
+ \draw (3) -- (13);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+This looks like a cube! Indeed, if we identify each $x \in Q_n$ with the 0-1 sequence of length $n$ (e.g.\ $13 \mapsto 1010\cdots 0$), or, in other words, its indicator function, then $Q_n$ is naturally identified with the unit cube in $\R^n$.
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \node (0) at (0, 0) {\small$\emptyset$};
+ \node (1) at (1, 0) {\small$1$};
+ \node (3) at (0, 1) {\small$3$};
+ \node (2) at (0.4, 0.4) {\small$2$};
+ \node (12) at (1.4, 0.4) {\small$12$};
+ \node (13) at (1, 1) {\small$13$};
+ \node (23) at (0.4, 1.4) {\small$23$};
+ \node (123) at (1.4, 1.4) {\small$123$};
+
+ \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
+ \draw (3) -- (13) -- (1);
+ \draw (13) -- (123);
+
+ \draw [dashed] (0) -- (2) -- (12);
+ \draw [dashed] (2) -- (23);
+
+ \end{tikzpicture}
+\end{center}
+Note that in this picture, the top layer consists of the points that contain $3$, and the bottom layer consists of those that do not contain $3$, and we can make similar statements for the other directions.
+
+\begin{eg}
+ Take $Q_3$, and try to find a set $A$ of size $4$ that has minimum boundary. There are two things we might try --- we can take a slice, or we can take a ball. In this case, we see the ball is the best.
+\end{eg}
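The comparison in this example can be computed directly (an illustrative sketch; `boundary` is my helper, following the definition above):

```python
from itertools import combinations

# Vertices of Q_3 are subsets of {1,2,3}; x ~ y when |x Δ y| = 1.
n = 3
verts = [frozenset(c) for r in range(n + 1) for c in combinations(range(1, n + 1), r)]

def boundary(A):
    members = set(A)
    return {v for v in verts if v not in members
            and any(len(v ^ a) == 1 for a in members)}

ball = [frozenset(c) for c in [(), (1,), (2,), (3,)]]  # X^{(<= 1)}, a ball
slice_ = [v for v in verts if 1 in v]                  # a "slice": all sets containing 1
print(len(boundary(ball)), len(boundary(slice_)))  # 3 4: the ball wins
```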
+We can do more examples, and it appears that the ball $X^{(\leq r)}$ is the best all the time. So that might be a reasonable thing to try to prove. But what if we have $|A|$ such that $|X^{(\leq r)}| < |A| < |X^{(\leq r + 1)}|$?
+
+It is natural to think that we should pick an $A$ with $X^{(\leq r)} \subseteq A \subseteq X^{(\leq r + 1)}$, so we set $A = X^{(\leq r)} \cup B$, where $B \subseteq X^{(r + 1)}$. Such an $A$ is known as a \term{Hamming ball}\index{ball!Hamming}.
+
+What $B$ should we pick? Observe that
+\[
+ N(A) = X^{(\leq r + 1)} \cup \partial^+ B.
+\]
+So we want to pick $B$ to minimize the \emph{upper} shadow. So by Kruskal--Katona, we know we should pick $B$ to be the initial segment in the lex order.
+
+Thus, if we are told to pick $1000$ points to minimize the boundary, we go up in levels, and in each level, we go up in lex.
+\begin{defi}[Simplicial ordering]\index{simplicial ordering}
+ The \emph{simplicial ordering} on $Q_n$ is defined by $x < y$ if either $|x| < |y|$, or $|x| = |y|$ and $x < y$ in lex.
+\end{defi}
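For sets written as increasing tuples, lex on sets agrees with ordinary tuple comparison, so the simplicial order is easy to generate (a sketch, not from the notes):

```python
from itertools import combinations

n = 3
verts = [tuple(c) for r in range(n + 1) for c in combinations(range(1, n + 1), r)]
# simplicial order: compare first by size, then by lex on the increasing tuples
order = sorted(verts, key=lambda A: (len(A), A))
print(order)  # [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
```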
+Our aim is to show that the initial segments of the simplicial order minimize the neighbourhood. Similar to Kruskal--Katona, a reasonable strategy would be to prove it by compression.
+
+For $A \subseteq Q_n$, and $1 \leq i \leq n$, the $i$-sections of $A$ are $A_+^{(i)}, A_-^{(i)} \subseteq \P(X\setminus \{i\})$ defined by
+\begin{align*}
+ A_-^{(i)} &= \{x \in A: i \not\in x\}\\
+ A_+^{(i)} &= \{x \setminus \{i\}: x \in A, i \in x\}.
+\end{align*}
+These are the top and bottom layers in the $i$ direction.
+
+The $i$-compression (or \term{co-dimension $1$ compression}) of $A$ is $C_i(A)$, defined by
+\begin{align*}
+ C_i(A)_+ &= \text{first $|A_+|$ elements of $\P(X \setminus \{i\})$ in the simplicial order}\\
+ C_i(A)_- &= \text{first $|A_-|$ elements of $\P(X \setminus \{i\})$ in the simplicial order}
+\end{align*}
+
+\begin{eg}
+ Suppose we work in $Q_4$, where the original set is
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
+ \draw (0, 1) -- (1, 1) -- (1, 0);
+ \draw (1, 1) -- (1.4, 1.2);
+ \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
+ \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
+
+ \node [circ] at (0.4, 0.2) {};
+ \node [circ] at (0, 1) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (1.4, 1.2) {};
+ \begin{scope}[shift={(2, 0)}]
+ \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
+ \draw (0, 1) -- (1, 1) -- (1, 0);
+ \draw (1, 1) -- (1.4, 1.2);
+ \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
+ \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
+
+ \node [circ] at (1, 1) {};
+ \node [circ] at (0, 1) {};
+ \node [circ] at (0.4, 1.2) {};
+ \node [circ] at (1.4, 1.2) {};
+ \node [circ] at (1.4, 0.2) {};
+ \end{scope}
+
+ \draw [->] (0.7, -0.5) -- (2.7, -0.5) node [pos=0.5, below] {$i$};
+ \end{tikzpicture}
+ \end{center}
+ The resulting set is then
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
+ \draw (0, 1) -- (1, 1) -- (1, 0);
+ \draw (1, 1) -- (1.4, 1.2);
+ \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
+ \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
+
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, 1) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0.4, 0.2) {};
+ \begin{scope}[shift={(2, 0)}]
+ \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
+ \draw (0, 1) -- (1, 1) -- (1, 0);
+ \draw (1, 1) -- (1.4, 1.2);
+ \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
+ \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
+
+ \node [circ] at (0, 0) {};
+ \node [circ] at (0, 1) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0.4, 0.2) {};
+ \node [circ] at (1.4, 0.2) {};
+ \end{scope}
+
+ \draw [->] (0.7, -0.5) -- (2.7, -0.5) node [pos=0.5, below] {$i$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+Clearly, we have $|C_i(A)| = |A|$, and $C_i(A)$ ``looks more like'' an initial segment in simplicial ordering than $A$ did.
+
+We say $A$ is \emph{$i$-compressed} if $C_i(A) = A$.
+
+\begin{lemma}
+ For $A \subseteq Q_n$, we have $|N(C_i(A))| \leq |N(A)|$.
+\end{lemma}
+
+\begin{proof}
+ We have
+ \[
+ |N(A)| = |N(A_+) \cup A_-| + |N(A_-) \cup A_+|
+ \]
+ Take $B = C_i(A)$. Then
+ \begin{align*}
+ |N(B)| &= |N(B_+) \cup B_-| + |N(B_-) \cup B_+|\\
+ &= \max \{|N(B_+)|, |B_-|\} + \max \{|N(B_-)|, |B_+|\}\\
+ &\leq \max \{|N(A_+)|, |A_-|\} + \max \{|N(A_-)|, |A_+|\}\\
+ &\leq |N(A_+) \cup A_-| + |N(A_-) \cup A_+|\\
+ &= |N(A)|\qedhere
+ \end{align*}
+\end{proof}
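As a sanity check on this lemma, one can implement the $i$-compression directly and test it on random subsets of $Q_4$. The following sketch is our own illustration (the helper names `neighbourhood` and `i_compress` are hypothetical, not from the notes); it verifies that compression preserves size and never increases the neighbourhood.

```python
import random
from itertools import combinations

def neighbourhood(A, n):
    """A together with every vertex of Q_n adjacent to a vertex of A."""
    N = set(A)
    for x in A:
        for i in range(1, n + 1):
            N.add(x ^ frozenset([i]))  # flip coordinate i
    return N

def skey(x):
    return (len(x), tuple(sorted(x)))

def i_compress(A, i, n):
    """Replace both i-sections of A by initial segments of the
    simplicial order on the subsets of [n] - {i}."""
    rest = [j for j in range(1, n + 1) if j != i]
    small_cube = sorted((frozenset(c) for k in range(n)
                         for c in combinations(rest, k)), key=skey)
    lower = sum(1 for x in A if i not in x)   # |A_-|
    upper = len(A) - lower                    # |A_+|
    return set(small_cube[:lower]) | {x | {i} for x in small_cube[:upper]}

random.seed(0)
n = 4
cube = [frozenset(c) for k in range(n + 1)
        for c in combinations(range(1, n + 1), k)]
for _ in range(100):
    A = set(random.sample(cube, random.randint(1, len(cube))))
    for i in range(1, n + 1):
        B = i_compress(A, i, n)
        assert len(B) == len(A)
        assert len(neighbourhood(B, n)) <= len(neighbourhood(A, n))
print("compression never increased the neighbourhood")
```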
+
+Since each compression moves us down in the simplicial order, we can keep applying compressions, and show that
+\begin{lemma}
+ For any $A \subseteq Q_n$, there is a compressed set $B \subseteq Q_n$ such that
+ \[
+ |B| = |A|,\quad |N(B)| \leq |N(A)|.
+ \]
+\end{lemma}
+
+Are we done? Does being compressed imply being an initial segment? No! For $n = 3$, we can take $\{\emptyset, 1, 2, 12\}$, which is obviously compressed, but is not an initial segment. To obtain the actual initial segment, we should replace $12$ with $3$.
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \node [mred] (0) at (0, 0) {\small$\mathbf{\emptyset}$};
+ \node [mred] (1) at (1, 0) {\small$\mathbf{1}$};
+ \node (3) at (0, 1) {\small$3$};
+ \node [mred] (2) at (0.4, 0.4) {\small$\mathbf{2}$};
+ \node [mred] (12) at (1.4, 0.4) {\small$\mathbf{12}$};
+ \node (13) at (1, 1) {\small$13$};
+ \node (23) at (0.4, 1.4) {\small$23$};
+ \node (123) at (1.4, 1.4) {\small$123$};
+
+ \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
+ \draw (3) -- (13) -- (1);
+ \draw (13) -- (123);
+
+ \draw [dashed] (0) -- (2) -- (12);
+ \draw [dashed] (2) -- (23);
+ \end{tikzpicture}
+\end{center}
+
+For $n = 4$, we can take $\{\emptyset, 1, 2, 3, 4, 12, 13, 23\}$, which is again compressed but not an initial segment. To obtain the initial segment, we should replace $23$ with $14$.
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \node [mred] (0) at (0, 0) {\small$\mathbf{\emptyset}$};
+ \node [mred] (1) at (1, 0) {\small$\mathbf{1}$};
+ \node [mred] (3) at (0, 1) {\small$\mathbf{3}$};
+ \node [mred] (2) at (0.4, 0.4) {\small$\mathbf{2}$};
+ \node [mred] (12) at (1.4, 0.4) {\small$\mathbf{12}$};
+ \node [mred] (13) at (1, 1) {\small$\mathbf{13}$};
+ \node [mred] (23) at (0.4, 1.4) {\small$\mathbf{23}$};
+ \node (123) at (1.4, 1.4) {\small$123$};
+
+ \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
+ \draw (3) -- (13) -- (1);
+ \draw (13) -- (123);
+ \draw [dashed] (0) -- (2) -- (12);
+ \draw [dashed] (2) -- (23);
+
+ \begin{scope}[shift={(2, 0)}]
+ \node [mred] (0) at (0, 0) {\small$\mathbf{4}$};
+ \node (1) at (1, 0) {\small$14$};
+ \node (3) at (0, 1) {\small$34$};
+ \node (2) at (0.4, 0.4) {\small$24$};
+ \node (12) at (1.4, 0.4) {\small$124$};
+ \node (13) at (1, 1) {\small$134$};
+ \node (23) at (0.4, 1.4) {\small$234$};
+ \node (123) at (1.4, 1.4) {\small$1234$};
+
+ \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
+ \draw (3) -- (13) -- (1);
+ \draw (13) -- (123);
+ \draw [dashed] (0) -- (2) -- (12);
+ \draw [dashed] (2) -- (23);
+
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+We notice that these two examples have a common pattern. The ``swap'' we have to perform to get to an initial segment is given by replacing an element with its complement, or equivalently, swapping it with the diagonally opposite element. This is indeed general.
+\begin{lemma}
+ For each $n$, there exists a unique element $z \in Q_n$ such that $z^c$ is the successor of $z$.
+
+ Moreover, if $B \subseteq Q_n$ is compressed but not an initial segment, then $|B| = 2^{n - 1}$, and $B$ is obtained from the initial segment of size $2^{n - 1}$ by replacing $z$ with $z^c$.
+\end{lemma}
+
+\begin{proof}
+ For the first part, simply note that complementation is an order-reversing bijection $Q_n \to Q_n$, and $|Q_n|$ is even. So the $2^{n - 1}$th element is the only such element $z$.
+
+ Now if $B$ is not an initial segment, then we can find some $x < y$ such that $x \not \in B$ and $y \in B$. Since $B$ is compressed, it must be the case that for each $i$, there is exactly one of $x$ and $y$ that contains $i$. Hence $x = y^c$. Note that this is true for all $x < y$ such that $x \not \in B$ and $y \in B$. So if we write out the simplicial order, then $B$ must look like
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \foreach \x in {0,...,7} {
+ \fill (\x, 0) circle [radius=0.1];
+ }
+ \draw (8, 0) circle [radius=0.1];
+ \fill (9, 0) circle [radius=0.1];
+ \foreach \x in {10,...,15} {
+ \draw (\x, 0) circle [radius=0.1];
+ }
+ \node at (16.5, 0) {$\cdots$};
+ \end{tikzpicture}
+ \end{center}
+ since any $x \not \in B$ such that $x < y$ must be given by $x = y^c$, and so there must be a unique such $x$, and similarly the other way round. So it must be the case that $y$ is the successor of $x$, and so $x = z$.
+\end{proof}
+We observe that these anomalous compressed sets are worse off than the initial segments (exercise!). So we deduce that
+
+%We define these exception sets for $n \geq 3$. For $n = 2k + 1$, we define
+%\[
+% B_n^* = \left(X^{(\leq k)} \setminus \{\{(k + 2)(k + 3) \cdots (2k + 1)\} \}\right) \cup \{12\cdots (k + 1)\}.
+%\]
+%For $n = 2k$, define
+%\[
+% B_n^* = \left(X^{(\leq k - 1)} \cup \{(X \setminus \{1\})^{(k - 1)} + 1\} \setminus \{1 (k + 2) \cdots (2k)\}\right) \cup \{23 \cdots (k + 1)\}.
+%\]
+
+\begin{thm}[Harper, 1967]
+ Let $A \subseteq Q_n$, and let $C$ be the initial segment in the simplicial order with $|C| = |A|$. Then $|N(A)| \geq |N(C)|$. In particular,
+ \[
+ |A| = \sum_{i = 0}^r \binom{n}{i}\text{ implies } |N(A)| \geq \sum_{i = 0}^{r + 1} \binom{n}{i}.
+ \]
+\end{thm}
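For small $n$, Harper's theorem can be confirmed by brute force. The sketch below is our own illustration (not from the notes); it checks in $Q_3$ that the initial segment of the simplicial order attains the minimum of $|N(A)|$ over all sets of each size.

```python
from itertools import combinations

def neighbourhood(A, n):
    # A together with all vertices at Hamming distance 1 from A
    N = set(A)
    for x in A:
        for i in range(1, n + 1):
            N.add(x ^ frozenset([i]))
    return N

n = 3
cube = [frozenset(c) for k in range(n + 1)
        for c in combinations(range(1, n + 1), k)]
order = sorted(cube, key=lambda x: (len(x), tuple(sorted(x))))  # simplicial

for m in range(1, 2 ** n + 1):
    best = min(len(neighbourhood(set(A), n)) for A in combinations(cube, m))
    assert len(neighbourhood(set(order[:m]), n)) == best
print("initial segments of the simplicial order attain the minimum |N(A)|")
```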
+
+\subsubsection*{The edge isoperimetric inequality in the cube}
+Let $A \subseteq V$ be a subset of vertices in a graph $G = (V, E)$. Consider the \term{edge boundary}
+\[
+ \partial_e A = \{xy \in E: x \in A, y \not \in A\}.
+\]
+Given a graph $G$, and given the size of $A$, can we give a lower bound for the size of $\partial_e A$?
+
+\begin{eg}
+ Take $G = Q_3$. For the vertex isoperimetric inequality, our optimal solution with $|A| = 4$ was given by
+ \begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \node [mred] (0) at (0, 0) {\small$\boldsymbol{\emptyset}$};
+ \node [mred] (1) at (1, 0) {\small$\mathbf{1}$};
+ \node [mred] (3) at (0, 1) {\small$\mathbf{3}$};
+ \node [mred] (2) at (0.4, 0.4) {\small$\mathbf{2}$};
+ \node (12) at (1.4, 0.4) {\small$12$};
+ \node (13) at (1, 1) {\small$13$};
+ \node (23) at (0.4, 1.4) {\small$23$};
+ \node (123) at (1.4, 1.4) {\small$123$};
+
+ \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0);
+ \draw (3) -- (13) -- (1);
+ \draw (13) -- (123);
+
+ \draw [dashed] (0) -- (2) -- (12);
+ \draw [dashed] (2) -- (23);
+ \end{tikzpicture}
+ \end{center}
+ The edge boundary has size $6$. However, if we just pick a slice, then the edge boundary only has size $4$.
+\end{eg}
+More generally, consider $Q_n = Q_{2k + 1}$, and take the Hamming ball $B_k = X^{(\leq k)}$. Then
+\[
+ \partial_e B_k = \{AB: A \subseteq B \subseteq X,\ |A| = k,\ |B| = k + 1\}.
+\]
+So we have
+\[
+ |\partial_e B_k| = \binom{2k + 1}{k + 1} \cdot (k + 1) \sim \frac{2^n \sqrt{n}}{\sqrt{2\pi}}.
+\]
+However, if we pick the bottom face of $Q_n$ instead, then $|A| = 2^{n - 1}$ and $|\partial_e A| = 2^{n - 1}$. This is much much better.
+
+More generally, it is not unreasonable to suppose that sub-cubes are always the best. For a $k$-dimensional sub-cube in $Q_n$, we have
+\[
+ |\partial_e A| = 2^k (n - k).
+\]
+If we want to prove this, and also further solve the problem for $|A|$ not a power of $2$, then as our previous experience would suggest, we should define an order on $\mathcal{P}(X)$.
+
+\begin{defi}[Binary order]\index{binary order}
+ The binary order on $Q_n \cong \mathcal{P}(X)$ is given by $x < y$ if $\max (x \Delta y) \in y$.
+
+ Equivalently, define $\varphi: \mathcal{P}(X) \to \N$ by
+ \[
+ \varphi(x) = \sum_{i \in x} 2^i.
+ \]
+ Then $x < y$ if $\varphi(x) < \varphi(y)$.
+\end{defi}
+The idea is that we avoid large elements. The first few elements in the binary order are
+\[
+ \emptyset, 1, 2, 12, 3, 13, 23, 123, \ldots.
+\]
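A quick Python check (our illustration, not part of the notes) confirms this listing: sorting $\P([3])$ by $\varphi$ reproduces exactly the order above.

```python
from itertools import combinations

def phi(x):
    # x < y in the binary order iff phi(x) < phi(y)
    return sum(2 ** i for i in x)

n = 3
cube = [frozenset(c) for k in range(n + 1)
        for c in combinations(range(1, n + 1), k)]
order = sorted(cube, key=phi)
labels = [''.join(map(str, sorted(x))) or 'e' for x in order]
print(labels)  # ['e', '1', '2', '12', '3', '13', '23', '123']
```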
+\begin{thm}
+ Let $A \subseteq Q_n$ be a subset, and let $C \subseteq Q_n$ be the initial segment of length $|A|$ in the binary order. Then $|\partial_e C| \leq |\partial_e A|$.
+\end{thm}
+
+\begin{proof}
+ We induct on $n$ using codimension-$1$ compressions. Recall that we previously defined the sets $A_{\pm}^{(i)}$.
+
+ The $i$-compression of $A$ is the set $B \subseteq Q_n$ such that $|B_{\pm}^{(i)}| = |A_{\pm}^{(i)}|$, and $B_{\pm}^{(i)}$ are initial segments in the binary order. We set $D_i(A) = B$.
+
+ Observe that performing $D_i$ reduces the edge boundary. Indeed, given any $A$, we have
+ \[
+ |\partial_e A| = |\partial_e A_+^{(i)}| + |\partial_e A_-^{(i)}| + |A_+^{(i)} \Delta A_-^{(i)}|.
+ \]
+ Applying $D_i$ clearly does not increase any of those terms. So we are happy. Now note that if $A \not= D_i A$, then
+ \[
+ \sum_{x \in D_i A} \sum_{j \in x} 2^j < \sum_{x \in A} \sum_{j \in x} 2^j.
+ \]
+ So after applying compressions finitely many times, we are left with a compressed set.
+
+ We now hope that a compressed subset must be an initial segment, but this is not quite true.
+ \begin{claim}
+ If $A$ is compressed but not an initial segment, then
+ \[
+ A = \tilde{B} = \left(\P(X \setminus \{n\}) \setminus \{123\cdots (n-1)\}\right) \cup \{n\}.
+ \]
+ \end{claim}
+ By direct computation, we have
+ \[
+ |\partial_e \tilde{B}| = 2^{n - 1} + 2(n - 2),
+ \]
+ and so the initial segment is better. So we are done.
+
+ The proof of the claim is the same as last time. Indeed, by definition, we can find some $x < y$ such that $x \not \in A$ and $y \in A$. As before, for any $i$, it cannot be the case that both $x$ and $y$ contain $i$ or neither contain $i$, since $A$ is compressed. So $x = y^c$, and we are done as before.
+\end{proof}
+
+\section{Sum sets}
+Let $G$ be an abelian group, and $A, B \subseteq G$. Define
+\[
+ A + B = \{a + b: a \in A, b \in B\}.
+\]
+For example, suppose $G = \R$ and $A = \{a_1 < a_2 < \cdots < a_n\}$ and $B = \{b_1 < b_2 < \cdots < b_m\}$. Surely, $|A + B| \leq nm$, and this bound can be achieved. Can we bound it from below? The elements
+\[
+ a_1 + b_1, a_1 + b_2, \ldots, a_1 + b_m, a_2 + b_m, \ldots, a_n + b_m
+\]
+are certainly distinct, since they are listed in strictly increasing order. So
+\[
+ |A + B| \geq m + n - 1 = |A| + |B| - 1.
+\]
+What if we are working in a finite group? In general, we don't have an order, so we can't make the same argument. Indeed, the same inequality cannot always be true, since $|G + G| = |G|$. Slightly more generally, if $H$ is a subgroup of $G$, then $|H + H| = |H|$.
+
+So let's look at a group with no subgroups. In other words, pick $G = \Z_p$.
+
+\begin{thm}[Cauchy--Davenport theorem]\index{Cauchy--Davenport theorem}
+ Let $A$ and $B$ be non-empty subsets of $\Z_p$ with $p$ a prime, and $|A| + |B| \leq p + 1$. Then
+ \[
+ |A + B| \geq |A| + |B| - 1.
+ \]
+\end{thm}
+
+\begin{proof}
+ We may assume $1 \leq |A| \leq |B|$. Apply induction on $|A|$. If $|A| = 1$, then there is nothing to do. So assume $|A| \geq 2$.
+
+ Since everything is invariant under translation, we may assume $0, a \in A$ with $a \not= 0$. Then $\{a, 2a, \ldots, pa\} = \Z_p$. So there exists $k \geq 0$ such that $ka \in B$ and $(k + 1) a \not \in B$.
+
+ By translating $B$, we may assume $0 \in B$ and $a \not \in B$.
+
+ Now $0 \in A \cap B$, while $a \in A \setminus B$. Therefore we have
+ \[
+ 1 \leq |A \cap B| < |A|.
+ \]
+ Hence
+ \[
+ |(A \cap B) + (A \cup B)| \geq |A \cap B| + |A \cup B| - 1 = |A| + |B| - 1.
+ \]
+ Also, clearly
+ \[
+ (A \cap B) + (A \cup B) \subseteq A + B.
+ \]
+ So we are done.
+\end{proof}
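Since any particular $p$ is small, the theorem can also be verified exhaustively. The following sketch (ours, not part of the notes) checks every pair of non-empty subsets of $\Z_7$.

```python
from itertools import chain, combinations

def sumset(A, B, p):
    # A + B computed in Z_p
    return {(a + b) % p for a in A for b in B}

def nonempty_subsets(p):
    return chain.from_iterable(combinations(range(p), k)
                               for k in range(1, p + 1))

p = 7
for A in nonempty_subsets(p):
    for B in nonempty_subsets(p):
        if len(A) + len(B) <= p + 1:
            assert len(sumset(A, B, p)) >= len(A) + len(B) - 1
print("Cauchy--Davenport verified exhaustively for p = 7")
```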
+
+\begin{cor}
+ Let $A_1, \ldots, A_k$ be non-empty subsets of $\Z_p$ such that
+ \[
+ \sum_{i = 1}^k |A_i| \leq p + k - 1.
+ \]
+ Then
+ \[
+ |A_1 + \ldots + A_k| \geq \sum_{i = 1}^k |A_i| - k + 1.
+ \]
+\end{cor}
+What if we don't take sets, but sequences? Let $a_1, \ldots, a_m \in \Z_n$. How large must $m$ be to guarantee that some of the $a_i$ sum to $0$? By the pigeonhole principle, $m \geq n$ suffices. Indeed, consider the sequence
+\[
+ a_1, a_1 + a_2, a_1 + a_2 + a_3, \cdots, a_1 + \cdots + a_n.
+\]
+If they are all distinct, then one of them must be zero, and so we are done. If they are not distinct, then by the pigeonhole principle, there must be $k < k'$ such that
+\[
+ a_1 + \cdots + a_k = a_1 + \cdots + a_{k'}.
+\]
+So it follows that
+\[
+ a_{k + 1} + \cdots + a_{k'} = 0.
+\]
+So in fact we can even require the elements we sum over to be consecutive. On the other hand, $m \geq n$ is also necessary, since we can take $a_i = 1$ for all $i$.
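The pigeonhole argument is effectively an algorithm: record prefix sums modulo $n$ and stop at the first repeated value. A sketch of ours (the function name is hypothetical):

```python
def zero_sum_block(a, n):
    """Given at least n terms of Z_n, return a consecutive block
    summing to 0 mod n (it exists by the pigeonhole argument)."""
    seen = {0: 0}  # prefix sum mod n -> index where it first occurred
    total = 0
    for j, x in enumerate(a, start=1):
        total = (total + x) % n
        if total in seen:
            return a[seen[total]:j]  # the block a_{k+1}, ..., a_{k'}
        seen[total] = j

block = zero_sum_block([1, 3, 5, 2, 4, 4], 6)
print(block)  # [2, 4]
```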
+
+We can tackle a harder question, where we require that the sum of a fixed number of things vanishes.
+\begin{thm}[Erd\"os--Ginzburg--Ziv]
+ Let $a_1, \ldots, a_{2n - 1} \in \Z_n$. Then there exists $I \in [2n - 1]^{(n)}$ such that
+ \[
+ \sum_{i \in I} a_i = 0
+ \]
+ in $\Z_n$.
+\end{thm}
+
+\begin{proof}
+ First consider the case $n = p$ is a prime. Write
+ \[
+ 0 \leq a_1 \leq a_2 \leq \cdots \leq a_{2p - 1} < p.
+ \]
+ If $a_i = a_{i + p - 1}$ for some $i$, then there are $p$ terms that are the same, and so we are done by adding them up. Otherwise, set $A_i = \{a_i, a_{i + p - 1}\}$ for $i = 1, \ldots, p - 1$, and $A_p = \{a_{2p - 1}\}$, then $|A_i| = 2$ for $i = 1, \ldots, p - 1$ and $|A_p| = 1$. Hence we know
+ \[
+ |A_1 + \cdots + A_p| \geq (2(p - 1) + 1) - p + 1 = p.
+ \]
+ Thus, every element in $\Z_p$ is a sum of some $p$ of our terms, and in particular $0$ is.
+
+ In general, suppose $n$ is not a prime. Write $n = pm$, where $p$ is a prime and $m > 1$. By induction, for every $2m - 1$ terms, we can find $m$ terms whose sum is a multiple of $m$.
+
+ Select \emph{disjoint} $S_1, S_2, \ldots, S_{2p - 1} \in [2n - 1]^{(m)}$ such that
+ \[
+ \sum_{j \in S_i} a_j = m b_i.
+ \]
+ This can be done because after selecting, say, $S_1, \ldots, S_{2p - 2}$, we have
+ \[
+ (2n - 1) - (2p - 2)m = 2m - 1
+ \]
+ elements left, and so we can pick the next one.
+
+ We are essentially done, because we can pick $i_1, \ldots, i_p$ such that $\sum_{k = 1}^p b_{i_k}$ is a multiple of $p$. Then
+ \[
+ \sum_{k = 1}^p \sum_{j \in S_{i_k}} a_j
+ \]
+ is a sum of $mp = n$ of our terms, and is a multiple of $mp = n$.
+\end{proof}
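For small $n$, the theorem, and the sharpness of the bound $2n - 1$, can be checked by brute force. A sketch of ours (not from the notes):

```python
from itertools import combinations, product

def has_zero_sum_n_subset(a, n):
    # is there a subset of exactly n terms summing to 0 mod n?
    return any(sum(c) % n == 0 for c in combinations(a, n))

for n in (3, 4):
    assert all(has_zero_sum_n_subset(a, n)
               for a in product(range(n), repeat=2 * n - 1))
    # 2n - 2 terms do not suffice: take n-1 zeros and n-1 ones
    assert not has_zero_sum_n_subset((0,) * (n - 1) + (1,) * (n - 1), n)
print("EGZ verified exhaustively for n = 3, 4")
```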
+
+\section{Projections}
+So far, we have been considering discrete objects only. For a change, let's work with something continuous.
+
+Let $K \subseteq \R^n$ be a bounded open set. For $A \subseteq [n]$, we set
+\[
+ K_A = \{(x_i)_{i \in A} : \exists y \in K, y_i = x_i\text{ for all }i \in A\} \subseteq \R^A.
+\]
+We write $|K_A|$ for the Lebesgue measure of $K_A$ as a subset of $\R^A$. The question we are interested in is given some of these $|K_A|$, can we bound $|K|$? In some cases, it is completely trivial.
+\begin{eg}
+ If we have a partition $[n] = A_1 \cup \cdots \cup A_m$, then we have
+ \[
+ |K| \leq \prod_{i = 1}^m |K_{A_i}|.
+ \]
+\end{eg}
+
+But, for example, in $\R^3$, can we bound $|K|$ given $|K_{12}|$, $|K_{13}|$ and $|K_{23}|$?
+
+It is clearly not possible if we only know, say $|K_{12}|$ and $|K_{13}|$. For example, we can consider the boxes
+\[
+ \left(0, \frac{1}{n}\right) \times (0, n) \times (0, n).
+\]
+\begin{prop}
+ Let $K$ be a body in $\R^3$. Then
+ \[
+ |K|^2 \leq |K_{12}| |K_{13}| |K_{23}|.
+ \]
+\end{prop}
+This is actually quite hard to prove! However, given what we have done so far, it is natural to try to compress $K$ in some sense. Indeed, we know equality holds for a box, and if we can make $K$ look more like a box, then maybe we can end up with a proof.
+
+For $K \subseteq \R^n$, its \term{$n$-sections} are the sets $K(x) \subseteq \R^{n - 1}$ defined by
+\[
+ K(x) = \{(x_1, \ldots, x_{n - 1}) \in \R^{n - 1}: (x_1, \ldots, x_{n - 1}, x) \in K\}.
+\]
+\begin{proof}
+ Suppose first that each section of $K$ is a square, i.e.
+ \[
+ K(x) = (0, f(x)) \times (0, f(x))
+ \]
+ for all $x$ and some $f$. Then
+ \[
+ |K| = \int f(x)^2\;\d x.
+ \]
+ Moreover,
+ \[
+ |K_{12}| = \left(\sup_x f(x)\right)^2 \equiv M^2,\quad |K_{13}| = |K_{23}| = \int f(x)\;\d x.
+ \]
+ So we have to show that
+ \[
+ \left(\int f(x)^2\;\d x\right)^2 \leq M^2 \left(\int f(x)\;\d x \right)^2,
+ \]
+ but this is trivial, because $f(x) \leq M$ for all $x$.
+
+ Let's now consider what happens when we compress $K$. For the general case, define a new body $L \subseteq \R^3$ by setting its sections to be
+ \[
+ L(x) = (0, \sqrt{|K(x)|}) \times (0, \sqrt{|K(x)|}).
+ \]
+ Then $|L| = |K|$, and observe that
+ \[
+ |L_{12}| \leq \sup |K(x)| \leq \left|\bigcup K(x)\right| = |K_{12}|.
+ \]
+ To understand the other two projections, we introduce
+ \[
+ g(x) = |K(x)_1|,\quad h(x) = |K(x)_2|.
+ \]
+ Now observe that
+ \[
+ |L(x)| = |K(x)| \leq g(x) h(x).
+ \]
+ Since $L(x)$ is a square, it follows that $L(x)$ has side length $\leq g(x)^{1/2} h(x)^{1/2}$. So
+ \[
+ |L_{13}| = |L_{23}| \leq \int g(x)^{1/2} h(x)^{1/2}\;\d x.
+ \]
+ So we want to show that
+ \[
+ \left(\int g^{1/2}h^{1/2} \;\d x\right)^2 \leq \left(\int g\;\d x\right)\left(\int h\;\d x\right).
+ \]
+ Observe that this is just the Cauchy--Schwarz inequality applied to $g^{1/2}$ and $h^{1/2}$. So we are done.
+\end{proof}
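The inequality also holds for finite unions of unit lattice cubes, where the projections are just coordinate projections of a finite set of cells. The following check is our own illustration (not part of the notes); it samples random such sets.

```python
import random

random.seed(1)
for _ in range(50):
    # K: a random union of unit lattice cubes in a 5x5x5 grid
    K = {(x, y, z) for x in range(5) for y in range(5) for z in range(5)
         if random.random() < 0.4}
    if not K:
        continue
    K12 = {(x, y) for (x, y, z) in K}
    K13 = {(x, z) for (x, y, z) in K}
    K23 = {(y, z) for (x, y, z) in K}
    assert len(K) ** 2 <= len(K12) * len(K13) * len(K23)
print("|K|^2 <= |K_12| |K_13| |K_23| held for all samples")
```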
+
+Let's try to generalize this.
+\begin{defi}[(Uniform) cover]\index{uniform cover}\index{cover}
+ We say a family $A_1, \ldots, A_r \subseteq [n]$ \emph{covers} $[n]$ if
+ \[
+ \bigcup_{i = 1}^r A_i = [n],
+ \]
+ and is a \emph{uniform $k$-cover} if each $i \in [n]$ is in exactly $k$ many of the sets.
+\end{defi}
+
+\begin{eg}
+ With $n = 3$, the singletons $\{1\}, \{2\}, \{3\}$ form a $1$-uniform cover, and so does $\{1\}, \{2, 3\}$. Also, $\{1, 2\}, \{1, 3\}$ and $\{2, 3\}$ form a uniform $2$-cover. However, $\{1, 2\}$ and $\{2, 3\}$ do not form a uniform cover of $[3]$.
+\end{eg}
+Note that we allow repetitions.
+\begin{eg}
+ $\{1\}, \{1\}, \{2, 3\}, \{2\}, \{3\}$ is a $2$-uniform cover of $[3]$.
+\end{eg}
+
+\begin{thm}[Uniform cover inequality]\index{uniform cover inequality}
+ If $A_1, \ldots, A_r$ is a uniform $k$-cover of $[n]$, then
+ \[
+ |K|^k \leq \prod_{i = 1}^r |K_{A_i}|.
+ \]
+\end{thm}
+
+\begin{proof}
+ Let $\mathcal{A}$ be a uniform $k$-cover of $[n]$. Note that $\mathcal{A}$ is a \emph{multiset}. Write
+ \begin{align*}
+ \mathcal{A}_- &= \{A \in \mathcal{A}: n \not \in A\}\\
+ \mathcal{A}_+ &= \{A \setminus \{n\} : A \in \mathcal{A}, n \in A\}
+ \end{align*}
+ We have $|\mathcal{A}_+| = k$, and $\mathcal{A}_+ \cup \mathcal{A}_-$ forms a $k$-uniform cover of $[n - 1]$.
+
+ Now note that if $K \subseteq \R^n$ and $n \not \in A$, then
+ \[
+ |K_A| \geq |K(x)_A|\tag{1}
+ \]
+ for all $x$. Also, if $n \in A$, then
+ \[
+ |K_A| = \int |K(x)_{A \setminus \{n\}}|\;\d x.\tag{2}
+ \]
+ In the previous proof, we used Cauchy--Schwarz. What we need here is H\"older's inequality
+ \[
+ \int fg\;\d x \leq \left(\int f^p\;\d x\right)^{1/p} \left(\int g^q\;\d x\right)^{1/q},
+ \]
+ where $\frac{1}{p} + \frac{1}{q} = 1$. Iterating this, we get
+ \[
+ \int f_1 \cdots f_k\;\d x\leq \prod_{i = 1}^k \left(\int f_i^k\;\d x\right)^{1/k}.
+ \]
+ Now to perform the proof, we induct on $n$. We are done if $n = 1$. Otherwise, given $K \subseteq \R^n$ and $n \geq 2$, by induction,
+ \begin{align*}
+ |K| &= \int |K(x)|\;\d x \\
+ &\leq \int \prod_{A \in \mathcal{A}_-} |K(x)_A|^{1/k} \prod_{A \in \mathcal{A}_+} |K(x)_A|^{1/k}\;\d x\tag{by induction}\\
+ &\leq \prod_{A \in \mathcal{A}_-} |K_A|^{1/k} \int \prod_{A \in \mathcal{A}_+}|K(x)_A|^{1/k}\;\d x\tag{by (1)}\\
+ &\leq \prod_{A \in \mathcal{A}_-} |K_A|^{1/k} \prod_{A \in \mathcal{A}_+} \left(\int |K(x)_A|\;\d x\right)^{1/k} \tag{by H\"older}\\
+ &= \prod_{A \in \mathcal{A}_-} |K_A|^{1/k} \prod_{A \in \mathcal{A}_+} |K_{A \cup \{n\}}|^{1/k} = \prod_{A \in \mathcal{A}} |K_A|^{1/k}.\tag{by (2)}%\qedhere
+ \end{align*}
+\end{proof}
+This theorem is great, but we can do better. In fact,
+
+\begin{thm}[Box Theorem (Bollob\'as, Thomason)]\index{box theorem}
+ Given a body $K \subseteq \R^n$, i.e.\ a non-empty bounded open set, there exists a box $L$ such that $|L| = |K|$ and $|L_A| \leq |K_A|$ for all $A \subseteq [n]$.
+\end{thm}
+Of course, this trivially implies the uniform cover inequality. Perhaps more surprisingly, we can deduce this from the uniform cover inequality.
+
+To prove this, we first need a lemma.
+\begin{defi}[Irreducible cover]\index{reducible cover}\index{irreducible cover}
+ A uniform $k$-cover is \emph{reducible} if it is the disjoint union of two uniform covers. Otherwise, it is \emph{irreducible}.
+\end{defi}
+
+\begin{lemma}
+ There are only finitely many irreducible covers of $[n]$.
+\end{lemma}
+
+\begin{proof}
+ Let $\mathcal{A}$ and $\mathcal{B}$ be covers. We say $\mathcal{A} < \mathcal{B}$ if $\mathcal{A}$ is a ``subset'' of $\mathcal{B}$, i.e.\ for each $A \subseteq [n]$, the multiplicity of $A$ in $\mathcal{A}$ is at most the multiplicity in $\mathcal{B}$.
+
+ Then note that the set of irreducible uniform $k$-covers forms an anti-chain, and observe that there cannot be an infinite anti-chain: from any infinite family of covers, we can repeatedly pass to infinite subfamilies on which the multiplicity of each fixed $A \subseteq [n]$ is non-decreasing, which produces a comparable pair.
+
+% Observe that we can represent families $\mathcal{A}$ by ``indicator functions'' $\psi_\mathcal{A}: \mathcal{P}(n) \to \Z$ by setting $\psi_\mathcal{A}(A)$ to be the multiplicity of $A$ in $\mathcal{A}$. Then we can reformulate the definition of uniform $k$-covers as saying $\psi: \mathcal{P}(n) \to \Z$ satisfies
+% \[
+% \sum_{A : i \in A} \psi(A) = k
+% \]
+% for all $i \in [n]$.
+%
+% In this formulation, $\psi$ is reducible if we can write $\psi = \psi_1 + \psi_2$, where $\psi_1$ and $\psi_2$ are both uniform covers.
+%
+% Given two covers $\psi_1, \psi_2: \mathcal{P}(n) \to \Z$, we say $\psi_1 < \psi_2$ if $\psi_1(A) < \psi_2(A)$ for all $A$. Then
+%
+%Fix a finite set $\Omega$, and define an ordering on $\R^\Omega$ by saying $f < g$ if $f(x) < g(x)$ for all $x$.
+%
+%Can we have an infinite anti-chain $f_1, f_2, \ldots$? One can easily produce such an example! However, we cannot have an infinite anti-chain in $\Z^\Omega$. In fact, we can easily see that there is an increasing subsequence.
+%
+%Hence, there are only finitely many irreducible uniform $k$ covers of $[n]$. However, we haven't shown any upper bound!
+\end{proof}
+
+\begin{proof}[Proof of box theorem]
+ For $\mathcal{A}$ an irreducible uniform $k$-cover, we have
+ \[
+ |K|^k \leq \prod_{A \in \mathcal{A}} |K_A|.
+ \]
+ Also,
+ \[
+ |K_A| \leq \prod_{i \in A} |K_{\{i\}}|.
+ \]
+ Let $\{x_A: A \subseteq [n]\}$ be a minimal array with $x_A \leq |K_A|$ such that for each irreducible $k$-cover $\mathcal{A}$, we have
+ \[
+ |K|^k \leq \prod_{A \in \mathcal{A}} x_A\tag{$1$}
+ \]
+ and moreover
+ \[
+ x_A \leq \prod_{i \in A} x_{\{i\}}\tag{$2$}
+ \]
+ for all $A \subseteq [n]$. We know this exists since there are only finitely many inequalities to be satisfied, and we can just decrease the $x_A$'s one by one. Now again by finiteness, for each $x_A$, there must be at least one inequality involving $x_A$ on the right-hand side that is in fact an equality.
+
+ \begin{claim}
+ For each $i \in [n]$, there exists a uniform $k_i$-cover $\mathcal{C}_i$ containing $\{i\}$ with equality
+ \[
+ |K|^{k_i} = \prod_{A \in \mathcal{C}_i} x_A.
+ \]
+ \end{claim}
+
+ Indeed, if $x_i$ occurs with equality on the right of an instance of (1), then we are done. Otherwise, it occurs with equality on the right of (2), and then there is some $A \ni i$ such that (2) holds with equality. Now $x_A$ in turn occurs on the right only in instances of (1), so there is some cover $\mathcal{A}$ containing $A$ such that (1) holds with equality. Then replace $A$ in $\mathcal{A}$ with $\{ \{j\}: j \in A\}$, and we are done.
+
+ Now let
+ \[
+ \mathcal{C} = \bigcup_{i = 1}^n \mathcal{C}_i,\quad
+ \mathcal{C}' = \mathcal{C} \setminus \{\{1\}, \{2\}, \ldots, \{n\}\},\quad
+ k = \sum_{i = 1}^n k_i.
+ \]
+ Then, since $\mathcal{C}'$ is a uniform $(k - 1)$-cover,
+ \[
+ |K|^k = \prod_{A \in \mathcal{C}} x_A = \left(\prod_{A \in \mathcal{C}'} x_A\right)\prod_{i = 1}^n x_i \geq |K|^{k - 1} \prod_{i = 1}^n x_i.
+ \]
+ So we have
+ \[
+ |K| \geq \prod_{i = 1}^n x_i.
+ \]
+ But we of course also have the reverse inequality. So it must be the case that they are equal.
+
+ Finally, for each $A$, consider the uniform $1$-cover $\mathcal{A} = \{A\} \cup \{ \{i\} : i \not \in A\}$. Substituting $|K| = \prod_{i = 1}^n x_i$ into (1) and dividing by $\prod_{i \not \in A} x_i$ gives us
+ \[
+ \prod_{i \in A} x_i \leq x_A.
+ \]
+ By (2), we have the reverse inequality. So we have
+ \[
+ x_A = \prod_{i \in A} x_i
+ \]
+ for all $A$. So we are done by taking $L$ to be the box with side lengths $x_i$.
+\end{proof}
+
+\begin{cor}
+ If $K$ is a union of translates of the unit cube, then for any (not necessarily uniform) $k$-cover $\mathcal{A}$, we have
+ \[
+ |K|^k \leq \prod_{A \in \mathcal{A}} |K_A|.
+ \]
+\end{cor}
+Here a $k$-cover is a cover where every element is covered at least $k$ times.
+
+\begin{proof}
+ Observe that if $B \subseteq A$, then $|K_B| \leq |K_A|$. So we can reduce $\mathcal{A}$ to a uniform $k$-cover.
+\end{proof}
+%
+%\begin{cor}
+% Let $S$ be a set of sequences of length $n$ with terms from a finite set $X$. Then every uniform $k$-cover $\mathcal{A}$ of $[n]$ satisfies
+% \[
+% |S|^k \leq \prod_{A \in \mathcal{A}} |S_A|,
+% \]
+% where $S_A$ is the restriction of the sequences in $S$ to $A$ and $|S|$ is the product of elements in $S$.
+%\end{cor}
+
+\section{Alon's combinatorial Nullstellensatz}
+Alon's combinatorial Nullstellensatz is a seemingly unexciting result that has surprisingly many useful consequences.
+\begin{thm}[Alon's combinatorial Nullstellensatz]\index{combinatorial Nullstellensatz}\index{Alon's combinatorial Nullstellensatz}
+ Let $\F$ be a field, and let $S_1, \ldots, S_n$ be non-empty finite subsets of $\F$ with $|S_i| = d_i + 1$. Let $f \in \F[X_1, \ldots, X_n]$ have degree $d = \sum_{i = 1}^n d_i$, and let the coefficient of $X_1^{d_1} \cdots X_n^{d_n}$ be non-zero. Then $f$ is not identically zero on $S = S_1 \times \cdots \times S_n$.
+\end{thm}
+
+Its proof follows from generalizing facts we know about polynomials in one variable. Here $R$ will always be a ring; $\F$ always a field, and $\F_q$ the unique field of order $q = p^n$. Recall the following result:
+\begin{prop}[Division algorithm]\index{division algorithm}
+ Let $f, g \in R[X]$ with $g$ monic. Then we can write
+ \[
+ f = hg + r,
+ \]
+ where $\deg h \leq \deg f - \deg g$ and $\deg r < \deg g$.
+\end{prop}
+Our convention is that $\deg 0 = -\infty$.
+
+Let $X = (X_1, \ldots, X_n)$ be a sequence of variables, and write $R[X] = R[X_1, \ldots, X_n]$.
+
+\begin{lemma}
+ Let $f \in R[X]$, and for $i = 1, \ldots, n$, let $g_i(X_i) \in R[X_i] \subseteq R[X]$ be monic of degree $\deg g_i = \deg_{X_i} g_i = d_i$. Then there exist polynomials $h_1, \ldots, h_n, r \in R[X]$ such that
+ \[
+ f = \sum_{i = 1}^n h_i g_i + r,
+ \]
+ where
+ \begin{align*}
+ \deg h_i &\leq \deg f - d_i & \deg_{X_i} r &\leq d_i - 1\\
+ \deg_{X_i} h_i &\leq \deg_{X_i} f - d_i & \deg_{X_i} r &\leq \deg_{X_i} f\\
+ \deg_{X_j} h_i &\leq \deg_{X_j} f & \deg r &\leq \deg f
+ \end{align*}
+ for all $i, j$.
+\end{lemma}
+
+\begin{proof}
+ Consider $f$ as a polynomial with coefficients in $R[X_2, \ldots, X_n]$, then divide by $g_1$ using the division algorithm. So we write
+ \[
+ f = h_1 g_1 + r_1.
+ \]
+ Then we have
+ \begin{align*}
+ \deg_{X_1} h_1 &\leq \deg_{X_1} f - d_1 & \deg_{X_1} r_1 &\leq d_1 - 1\\
+ \deg h_1 &\leq \deg f - d_1 & \deg_{X_j} r_1 &\leq \deg_{X_j} f\\
+ \deg_{X_j} h_1 &\leq \deg_{X_j}f & \deg r_1 &\leq \deg f.
+ \end{align*}
+ Then repeat this with $f$ replaced by $r_1$, $g_1$ by $g_2$, and $X_1$ by $X_2$.
+\end{proof}
+
+We also know that a polynomial of one variable of degree $n \geq 1$ over a field has at most $n$ zeroes.
+
+\begin{lemma}
+ Let $S_1, \ldots, S_n$ be non-empty finite subsets of a field $\F$, and let $h \in \F[X]$ be such that $\deg_{X_i} h < |S_i|$ for $i = 1, \ldots, n$. Suppose $h$ is identically $0$ on $S = S_1 \times \cdots \times S_n \subseteq \F^n$. Then $h$ is the zero polynomial.
+\end{lemma}
+
+\begin{proof}
+ Let $d_i = |S_i| - 1$. We induct on $n$. If $n = 1$, then we are done. For $n \geq 2$, consider $h$ as a polynomial in $X_n$ with coefficients in $\F[X_1, \ldots, X_{n - 1}]$. Then we can write
+ \[
+ h = \sum_{i = 0}^{d_n} g_i(X_1, \ldots, X_{n - 1}) X_n^i.
+ \]
+ Fix $(x_1, \ldots, x_{n - 1}) \in S_1 \times \cdots \times S_{n - 1}$, and set $c_i = g_i(x_1, \ldots, x_{n - 1}) \in \F$. Then $\sum_{i = 0}^{d_n} c_i X_n^i$ vanishes on $S_n$. So $c_i = g_i(x_1, \ldots, x_{n - 1}) = 0$ for all $(x_1, \ldots, x_{n - 1}) \in S_1 \times \cdots \times S_{n - 1}$. So by induction, $g_i = 0$. So $h = 0$.
+\end{proof}
+
+Another fact we know about polynomials in one variable is that if $f \in \F[X]$ vanishes at distinct points $z_1, \ldots, z_n$, then $f$ is a multiple of $\prod_{i = 1}^n (X - z_i)$.
+\begin{lemma}
+ For $i = 1, \ldots, n$, let $S_i$ be a non-empty finite subset of $\F$, and let
+ \[
+ g_i(X_i) = \prod_{s \in S_i} (X_i - s) \in \F[X_i] \subseteq \F[X].
+ \]
+ Then if $f \in \F[X]$ is identically zero on $S = S_1 \times \cdots \times S_n$, then there exist $h_i \in \F[X]$ with $\deg h_i \leq \deg f - |S_i|$ such that
+ \[
+ f = \sum_{i = 1}^n h_i g_i.
+ \]
+\end{lemma}
+
+\begin{proof}
+ By the division algorithm, we can write
+ \[
+ f = \sum_{i = 1}^n h_i g_i + r,
+ \]
+ where $r$ satisfies $\deg_{X_i} r < \deg g_i$. But then $r$ vanishes on $S_1 \times \cdots \times S_n$, as both $f$ and $g_i$ do. So $r = 0$.
+\end{proof}
+
+We finally get to Alon's combinatorial Nullstellensatz.
+\begin{thm}[Alon's combinatorial Nullstellensatz]\index{combinatorial Nullstellensatz}\index{Alon's combinatorial Nullstellensatz}
+ Let $S_1, \ldots, S_n$ be non-empty finite subsets of $\F$ with $|S_i| = d_i + 1$. Let $f \in \F[X]$ have degree $d = \sum_{i = 1}^n d_i$, and let the coefficient of $X_1^{d_1} \cdots X_n^{d_n}$ be non-zero. Then $f$ is not identically zero on $S = S_1 \times \cdots \times S_n$.
+\end{thm}
+
+\begin{proof}
+ Suppose for contradiction that $f$ is identically zero on $S$. Define $g_i(X_i)$ and $h_i$ as before such that
+ \[
+ f = \sum h_i g_i.
+ \]
+ Since the coefficient of $X_1^{d_1} \cdots X_n^{d_n}$ is non-zero in $f$, it is non-zero in some $h_j g_j$. But that's impossible: since $g_j$ is a polynomial in $X_j$ alone, such a term could only arise from a monomial of $h_j$ divisible by $X_1^{d_1} \cdots \widehat{X_j^{d_j}} \cdots X_n^{d_n}$, which has degree at least $\sum_{i \not= j} d_i$, whereas
+ \[
+ \deg h_j \leq \left(\sum_{i = 1}^n d_i \right) - \deg g_j = \sum_{i \not= j} d_i - 1.
+ \]
+\end{proof}
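+As a tiny sanity check of the theorem (a brute-force sketch, not from the notes; the names are mine), take $n = 2$ and $S_1 = S_2 = \{0, 1\}$, so $d_1 = d_2 = 1$, and a degree-$2$ polynomial whose $X_1 X_2$ coefficient is non-zero:

```python
from itertools import product

# f(X1, X2) = X1*X2 - X1 - X2: degree 2 = d1 + d2, and the
# coefficient of X1*X2 is 1 != 0, so the combinatorial
# Nullstellensatz predicts f is not identically zero on S1 x S2.
f = lambda x1, x2: x1 * x2 - x1 - x2

S1 = S2 = [0, 1]
values = [f(x1, x2) for x1, x2 in product(S1, S2)]
assert any(v != 0 for v in values)  # e.g. f(1, 1) = -1
```

+Here $f$ does vanish at $(0, 0)$, but the theorem only promises (correctly) that it cannot vanish on \emph{all} of $S_1 \times S_2$.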
+
+Let's look at some applications. Here $p$ is a prime, $q = p^k$, and $\F_q$ is the unique field of order $q$.
+\begin{thm}[Chevalley, 1935]
+ Let $f_1, \ldots, f_m \in \F_q[X_1, \ldots, X_n]$ be such that
+ \[
+ \sum_{i = 1}^m \deg f_i < n.
+ \]
+ Then the $f_i$ cannot have exactly one common zero.
+\end{thm}
+
+\begin{proof}
+ Suppose not. We may assume that the common zero is $0 = (0, \ldots, 0)$. Define
+ \[
+ f = \prod_{i = 1}^m (1 - f_i(X)^{q - 1}) - \gamma \prod_{i = 1}^n \prod_{s \in \F_q^\times} (X_i - s),
+ \]
+ where $\gamma$ is chosen so that $f(0) = 0$, namely the inverse of $\left(\prod_{s \in \F_q^\times} (-s)\right)^n$.
+
+ Now observe that for any non-zero $x$, some $f_i(x) \not= 0$, so $f_i(x)^{q - 1} = 1$ and the first product vanishes; also some coordinate of $x$ lies in $\F_q^\times$, so the second product vanishes. So $f(x) = 0$.
+
+ Thus, we can set $S_i = \F_q$, and $f$ satisfies the hypotheses of the combinatorial Nullstellensatz: it has degree $n(q - 1)$, and since the first product has degree less than $n(q - 1)$, the coefficient of $X_1^{q - 1} \cdots X_n^{q - 1}$ is $-\gamma \not= 0$. However, $f$ vanishes on $\F_q^n$. This is a contradiction.
+\end{proof}
+
+It is possible to prove similar results without using the combinatorial Nullstellensatz. These results are often collectively referred to as \term{Chevalley--Warning theorems}.
+\begin{thm}[Warning]
+ Let $f(X) = f(X_1, \ldots, X_n) \in \F_q[X]$ have degree $< n$. Then $N(f)$, the number of zeroes of $f$, is a multiple of $p$.
+\end{thm}
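+Warning's theorem is easy to sanity-check by brute force over a small prime field. The following sketch (an illustration, not from the notes; the helper name is mine) counts the zeroes of a degree-$1$ polynomial in $n = 2$ variables over $\F_5$:

```python
from itertools import product

def count_zeros(f, p, n):
    """Brute-force count of the zeroes of f on F_p^n (p prime)."""
    return sum(1 for x in product(range(p), repeat=n) if f(x) % p == 0)

p, n = 5, 2
# deg f = 1 < n = 2, so Warning's theorem predicts p | N(f).
f = lambda x: x[0] + 2 * x[1] + 3
N = count_zeros(f, p, n)
assert N % p == 0
print(N)  # 5
```

+Indeed each choice of $x_2$ determines $x_1$ uniquely, giving exactly $5$ zeroes.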
+
+One nice trick in doing these things is that finite fields naturally come with an ``indicator function''. Since the multiplicative group has order $q - 1$, we know that if $x \in \F_q$, then
+\[
+ x^{q - 1} =
+ \begin{cases}
+ 1 & x \not= 0\\
+ 0 & x = 0
+ \end{cases}.
+\]
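+This is immediate from Lagrange's theorem applied to $\F_q^\times$. For a prime $q$ it can also be checked directly, since integers mod $q$ then form the field $\F_q$ (a brute-force sketch, not from the notes; for prime powers one would need genuine field arithmetic):

```python
q = 11  # a prime, so arithmetic mod q is the field F_q

def indicator(x):
    # x^(q-1) mod q, using Python's three-argument pow.
    return pow(x, q - 1, q)

assert indicator(0) == 0
assert all(indicator(x) == 1 for x in range(1, q))
```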
+\begin{proof}
+ We have
+ \[
+ 1 - f(x)^{q - 1} =
+ \begin{cases}
+ 1 & f(x) = 0\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ Thus, we know
+ \[
+ N(f) = \sum_{x \in \F_q^n} (1 - f(x)^{q - 1}) = -\sum_{x \in \F_q^n} f(x)^{q - 1} \in \F_q,
+ \]
+ since $\sum_{x \in \F_q^n} 1 = q^n = 0$ in $\F_q$. So it suffices to show that this last sum vanishes.
+ Further, we know that for $k \geq 0$,
+ \[
+ \sum_{x \in \F_q} x^k =
+ \begin{cases}
+ -1 & k > 0 \text{ and } (q - 1) \mid k\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ Indeed, if $k > 0$ and $(q - 1) \mid k$, each of the $q - 1$ non-zero summands equals $1$; if $k = 0$, the sum is $q = 0$; otherwise, pick $c \in \F_q^\times$ with $c^k \not= 1$, and substituting $x \mapsto cx$ shows the sum equals $c^k$ times itself, so it vanishes.
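+This power-sum fact (in the sharp form: $-1$, i.e.\ $q - 1$ mod $q$, exactly when $k$ is a positive multiple of $q - 1$) can again be checked numerically over a prime field (a brute-force sketch, not from the notes):

```python
q = 7  # a prime, so arithmetic mod q is the field F_q

def power_sum(k):
    # Sum of x^k over all x in F_q, reduced mod q.
    return sum(pow(x, k, q) for x in range(q)) % q

for k in range(15):
    expected = q - 1 if (k > 0 and k % (q - 1) == 0) else 0
    assert power_sum(k) == expected
```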
+ So let's write $f(x)^{q - 1}$ as a linear combination of monomials. Each monomial $x_1^{k_1} \cdots x_n^{k_n}$ has degree $k_1 + \cdots + k_n \leq (q - 1) \deg f < n(q - 1)$. So in each monomial, some $k_i$ fails to be a positive multiple of $q - 1$, and hence
+ \[
+ \sum_{x \in \F_q^n} x_1^{k_1} \cdots x_n^{k_n} = \prod_{i = 1}^n \left(\sum_{x_i \in \F_q} x_i^{k_i}\right) = 0.
+ \]
+ Summing over the monomials, $\sum_{x \in \F_q^n} f(x)^{q - 1} = 0$, so $N(f) = 0$ in $\F_q$, i.e.\ $p \mid N(f)$.
+\end{proof}
+With a chart, we can talk about notions like continuity and differentiability by identifying $U$ with $\varphi(U)$:
+\begin{defi}[Smooth function]\index{smooth function}\index{$C^\infty$}
+ Let $(U, \varphi)$ be a chart on $M$ and $f: M \to \R$. We say $f$ is \emph{smooth} or $C^\infty$ at $p \in U$ if $f \circ \varphi^{-1}: \varphi(U) \to \R$ is smooth at $\varphi(p)$ in the usual sense.
+ \[
+ \begin{tikzcd}
+ \R^n \supseteq \varphi(U) \ar[r, "\varphi^{-1}"] & U \ar[r, "f"] & \R
+ \end{tikzcd}
+ \]
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \draw [fill=mgreen, opacity=0.5] circle [radius=0.4];
+ \node [right] {$p$};
+ \node [circ] {};
+ \node at (0.4, 0) [right] {$U$};
+
+ \begin{scope}[shift={(-1.75, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=morange, opacity=0.5] circle [radius=0.4];
+ \node at (1.75, 0.75) [right] {$\varphi(p)$};
+ \node at (1.75, 0.75) [circ] {};
+
+ \draw [->] (4, 0.75) -- (7, 0.75) node [pos=0.5, above] {$f \circ \varphi^{-1}$};
+ \node at (8, 0.75) {$\R$};
+ \draw [->] (4, 3.5) -- (7, 1.5) node [pos=0.5, anchor = south west] {$f$};
+ \end{scope}
+
+ \draw [->] (0, -1) -- +(0, -1.3) node [pos=0.5, right] {$\varphi$};
+
+ \end{tikzpicture}
+\end{center}
+We can define all other notions such as continuity, differentiability, twice differentiability etc.\ similarly.
+
+This definition has the problem that a single chart need not contain every point of $M$, so we cannot yet say what it means for a function to be smooth at a point outside the chart. The solution is easy --- we just take many charts that together cover $M$. However, we then have the problem that a function might be smooth at a point relative to one chart, but not relative to another. The solution is to require the charts to be compatible in some sense.
+
+\begin{defi}[Atlas]\index{atlas}
+ An \emph{atlas} on a set $M$ is a collection of charts $\{(U_\alpha, \varphi_\alpha)\}$ on $M$ such that
+ \begin{enumerate}
+ \item $M = \bigcup_{\alpha} U_\alpha$.
+ \item For all $\alpha, \beta$, we have $\varphi_\alpha(U_\alpha \cap U_\beta)$ is open in $\R^n$, and the transition function
+ \[
+ \varphi_\alpha \circ \varphi_\beta^{-1}: \varphi_\beta(U_\alpha \cap U_\beta) \to \varphi_\alpha(U_\alpha \cap U_\beta)
+ \]
+ is smooth (in the usual sense).
+ \end{enumerate}
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \draw (-0.3, 0) [fill=mgreen, opacity=0.5] circle [radius=0.4];
+ \draw (0.3, 0) [fill=mblue, opacity=0.5] circle [radius=0.4];
+
+ \node [left] at (-0.7, 0) {$U_\beta$};
+ \node [right] at (0.7, 0) {$U_\alpha$};
+
+ \begin{scope}[shift={(-4.75, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=morange, opacity=0.5] circle [radius=0.4];
+ \begin{scope}
+ \clip (1.75, 0.75) circle [radius=0.4];
+ \draw (2.35, 0.75) [fill=mred, opacity=0.5] circle [radius=0.4];
+ \end{scope}
+ \end{scope}
+
+ \draw [->] (-0.7, -0.5) -- (-2.5, -2.6) node [pos=0.5, left] {$\varphi_\beta$};
+ \draw [->] (0.7, -0.5) -- (2.5, -2.6) node [pos=0.5, right] {$\varphi_\alpha$};
+
+ \begin{scope}[shift={(1.25, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=mred, opacity=0.5] circle [radius=0.4];
+ \begin{scope}
+ \clip (1.75, 0.75) circle [radius=0.4];
+ \draw (1.15, 0.75) [fill=morange, opacity=0.5] circle [radius=0.4];
+ \end{scope}
+ \end{scope}
+ \draw [->] (-2, -3.25) -- (2, -3.25) node [pos=0.5, above] {$\varphi_\alpha \varphi_\beta^{-1}$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{lemma}
+ If $(U_\alpha, \varphi_\alpha)$ and $(U_\beta, \varphi_\beta)$ are charts in some atlas, and $f: M \to \R$, then $f \circ \varphi_\alpha^{-1}$ is smooth at $\varphi_\alpha(p)$ if and only if $f \circ \varphi_\beta^{-1}$ is smooth at $\varphi_\beta (p)$ for all $p \in U_\alpha \cap U_\beta$.
+\end{lemma}
+
+\begin{proof}
+ We have
+ \[
+ f \circ \varphi_\beta^{-1} = f \circ \varphi_\alpha^{-1} \circ (\varphi_\alpha \circ \varphi_\beta^{-1}). \qedhere
+ \]
+\end{proof}
+So we know that if we have an atlas on a set, then the notion of smoothness does not depend on the chart.
+
+\begin{eg}
+ Consider the sphere
+ \[
+ S^2 = \{(x_1, x_2, x_3): \sum x_i^2 = 1\} \subseteq \R^3.
+ \]
+ We let
+ \[
+ U_1^+ = S^2 \cap \{x_1 > 0\},\quad U_1^- = S^2 \cap \{x_1 < 0\}, \cdots
+ \]
+ We then let
+ \begin{align*}
+ \varphi_1^+: U_1^+ &\to \R^2\\
+ (x_1, x_2, x_3) &\mapsto (x_2, x_3).
+ \end{align*}
+ It is easy to show that this gives a bijection to the open disk in $\R^2$. We similarly define the other $\varphi_i^{\pm}$. These then give us an atlas of $S^2$.
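+ For instance, to see that two of these charts are compatible (a routine verification, not spelled out in the notes), note that on $U_1^+ \cap U_2^+$, where $x_1, x_2 > 0$, the transition function is
+ \[
+ \varphi_2^+ \circ (\varphi_1^+)^{-1}: (x_2, x_3) \mapsto \left(\sqrt{1 - x_2^2 - x_3^2}, x_3\right),
+ \]
+ which is smooth since $1 - x_2^2 - x_3^2 = x_1^2 > 0$ there. The other pairs of charts are similar.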
+\end{eg}
+
+\begin{defi}[Equivalent atlases]\index{equivalent atlases}\index{atlas!equivalence}
+ Two atlases $\mathcal{A}_1$ and $\mathcal{A}_2$ are \emph{equivalent} if $\mathcal{A}_1 \cup \mathcal{A}_2$ is an atlas.
+\end{defi}
+Then equivalent atlases determine the same smoothness, continuity etc.\ information.
+
+\begin{defi}[Differentiable structure]\index{differentiable structure}
+ A \emph{differentiable structure} on $M$ is a choice of equivalence class of atlases.
+\end{defi}
+
+We want to define a manifold to be a set with a differentiable structure. However, it turns out we can find some really horrendous sets that have differential structures.
+
+\begin{eg}
+ Consider the line with two origins given by taking $\R \times \{0\} \cup \R \times \{1\}$ and then quotienting by
+ \[
+ (x, 0) \sim (x, 1)\text{ for } x \not= 0.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) -- (3, 0);
+ \node [fill=white, draw, circle, inner sep=0, minimum size=3] {};
+ \node [circ] at (0, 0.2) {};
+ \node [circ] at (0, -0.2) {};
+ \end{tikzpicture}
+ \end{center}
+ Then the inclusions of the two copies of $\R$ gives us an atlas of the space.
+\end{eg}
+
+We would like to say the problem with this space is that it is not Hausdorff. However, this does not yet make sense, because $M$ is not a topological space, so we cannot ask whether it is Hausdorff. So we want to define a topology on $M$, and then impose some topological conditions on our manifolds.
+
+It turns out the smooth structure already gives us a topology:
+
+\begin{ex}
+ An atlas determines a topology on $M$ by saying $V \subseteq M$ is open iff $\varphi(U \cap V)$ is open in $\R^n$ for all charts $(U, \varphi)$ in the atlas. Equivalent atlases give the same topology.
+\end{ex}
+
+We now get to the definition of a manifold.
+
+\begin{defi}[Manifold]\index{manifold}
+ A \emph{manifold} is a set $M$ with a choice of differentiable structure whose topology is
+ \begin{enumerate}
+ \item Hausdorff\index{Hausdorff}, i.e.\ for all $x, y \in M$, there are open neighbourhoods $U_x, U_y \subseteq M$ with $x \in U_x, y \in U_y$ and $U_x \cap U_y = \emptyset$.
+ \item Second countable\index{second countable}, i.e.\ there exists a countable collection $(U_n)_{n \in \N}$ of open sets in $M$ such that for all $V \subseteq M$ open, and $p \in V$, there is some $n$ such that $p \in U_n \subseteq V$.
+ \end{enumerate}
+\end{defi}
+The second countability condition is a rather technical condition that we will not really use much. This, for example, excludes the long line.
+
+Note that we will often refer to a manifold simply as $M$, where the differentiable structure is understood from context. By a chart on $M$, we mean one in some atlas in the equivalence class of atlases.
+
+\begin{defi}[Local coordinates]\index{local coordinates}
+ Let $M$ be a manifold, and $\varphi: U \to \varphi(U)$ a chart of $M$. We can write
+ \[
+ \varphi = (x_1, \cdots, x_n)
+ \]
+ where each $x_i: U \to \R$. We call these the \emph{local coordinates}.
+\end{defi}
+So a point $p \in U$ can be represented by local coordinates
+\[
+ (x_1(p), \cdots, x_n(p)) \in \R^n.
+\]
+By abuse of notation, if $f: M \to \R$, we confuse $f|_U$ and $f \circ \varphi^{-1}: \varphi(U) \to \R$. So we write $f(x_1, \cdots, x_n)$ to mean $f(p)$, where $\varphi(p) = (x_1, \cdots, x_n) \in \varphi(U)$.
+\[
+ \begin{tikzcd}
+ U \ar[r, hook, "\iota"] \ar[d, "\varphi"] & M \ar[r, "f"] & \R\\
+ \varphi(U) \ar[rru, "f|_U"']
+ \end{tikzcd}
+\]
+Of course, we can similarly define $C^0, C^1, C^2, \cdots$ manifolds, or analytic manifolds. We can also model manifolds on other spaces, e.g.\ $\C^n$, where we get complex manifolds, or on infinite-dimensional spaces.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Generalizing the example of the sphere, the $n$-dimensional sphere $S^n = \{(x_0, \cdots, x_n) \in \R^{n + 1}: \sum x_i^2 = 1\}$ is a manifold.
+ \item If $M$ is open in $\R^n$, then the inclusion map $\varphi: M \to \R^n$ given by $\varphi(p) = p$ is a chart forming an atlas. So $M$ is a manifold. In particular, $\R^n$ is a manifold, with its ``standard'' differentiable structure. We will always assume $\R^n$ is given this structure, unless otherwise specified.
+ \item $M(n, n)$, the set of all $n \times n$ matrices is also a manifold, by the usual bijection with $\R^{n^2}$. Then $\GL_n \subseteq M(n, n)$ is open, and thus also a manifold.
+ \item The set $\RP^n$ of one-dimensional subspaces of $\R^{n + 1}$ is a manifold. We can define charts as follows: we let $U_i$ be the set of lines spanned by a vector of the form $(v_0, v_1, \cdots, v_{i - 1}, 1, v_{i + 1}, \cdots, v_n) \in \R^{n + 1}$.
+
+ We define the map $\varphi_i: U_i \to \R^n \cong \{\mathbf{x} \in \R^{n + 1}: x_i = 1\}$ by $\varphi_i(L) = (v_0, \cdots, 1, \cdots, v_n)$, where $L$ is spanned by $(v_0, \cdots, 1, \cdots, v_n)$. It is an easy exercise to show that this defines a chart.
+ \end{enumerate}
+\end{eg}
+
+Note that when we defined a chart, we talked about charts as maps $U \to \R^n$. We did not mention whether $n$ is fixed, or whether it is allowed to vary. It turns out it cannot vary, as long as the space is connected.
+
+\begin{lemma}
+ Let $M$ be a manifold, and $\varphi_1: U_1 \to \R^n$ and $\varphi_2: U_2 \to \R^m$ be charts. If $U_1 \cap U_2 \not= \emptyset$, then $n = m$.
+\end{lemma}
+
+\begin{proof}
+ We know
+ \[
+ \varphi_1 \varphi_2^{-1}: \varphi_2(U_1 \cap U_2) \to \varphi_1(U_1 \cap U_2)
+ \]
+ is a smooth map with inverse $\varphi_2 \varphi_1^{-1}$. So the derivative
+ \[
+ D(\varphi_1 \varphi_2^{-1})(\varphi_2(p)): \R^m \to \R^n
+ \]
+ is a linear isomorphism, whenever $p \in U_1 \cap U_2$. So $n = m$.
+\end{proof}
+
+\begin{defi}[Dimension]\index{dimension}
+ If $p \in M$, we say $M$ has \emph{dimension} $n$ at $p$ if for one (thus all) charts $\varphi: U \to \R^m$ with $p \in U$, we have $m = n$. We say $M$ has dimension $n$ if it has dimension $n$ at all points.
+\end{defi}
+
+\subsection{Smooth functions and derivatives}
+From now on, $M$ and $N$ will be manifolds. As usual, we would like to talk about maps between manifolds. What does it mean for such a map to be smooth? In the case of a function $M \to \R$, we had to check it on each chart of $M$. Now that we have functions $M \to N$, we need to check it on charts of \emph{both} $N$ and $M$.
+
+\begin{defi}[Smooth function]\index{smooth function}
+ A function $f: M \to N$ is \emph{smooth at a point $p \in M$} if there are charts $(U, \varphi)$ for $M$ and $(V, \xi)$ for $N$ with $p \in U$ and $f(p) \in V$ such that $\xi \circ f \circ \varphi^{-1}: \varphi(U) \to \xi(V)$ is smooth at $\varphi(p)$.
+
+ A function is \emph{smooth} if it is smooth at all points $p \in M$.
+
+ A \term{diffeomorphism} is a smooth $f$ with a smooth inverse.
+
+ We write $C^\infty(M, N)$ \index{$C^\infty(M, N)$} for the space of smooth maps $f: M \to N$. We write $C^\infty(M)$\index{$C^\infty(M)$} for $C^\infty(M, \R)$, and this has the additional structure of an algebra, i.e.\ a vector space with multiplication.
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \draw [fill=mgreen, opacity=0.5] circle [radius=0.4];
+
+ \begin{scope}[shift={(-1.75, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=morange, opacity=0.5] circle [radius=0.4];
+ \end{scope}
+
+ \draw [->] (0, -1) -- +(0, -1.3) node [pos=0.5, right] {$\varphi$};
+ \begin{scope}[shift={(5, 0)}]
+ \draw plot [smooth cycle] coordinates {(1.4, -0.4) (-0.8, -0.9) (-1.5, -0.3) (-1.6, 0.5) (-0.2, 1.3) (1.8, 0.7)};
+
+ \draw [fill=mblue, opacity=0.5] circle [radius=0.4];
+
+ \begin{scope}[shift={(-1.75, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=mred, opacity=0.5] circle [radius=0.4];
+ \end{scope}
+
+ \draw [->] (0, -1) -- +(0, -1.3) node [pos=0.5, right] {$\xi$};
+ \end{scope}
+
+ \draw [->] (1, 0) -- (4, 0) node [above, pos=0.5] {$f$};
+
+ \draw [->] (1, -3.25) -- (4, -3.25) node [above, pos=0.5] {$\xi \circ f \circ \varphi^{-1}$};
+ \end{tikzpicture}
+\end{center}
+Equivalently, $f$ is smooth at $p$ if $\xi \circ f \circ \varphi^{-1}$ is smooth at $\varphi(p)$ for \emph{any} such charts $(U, \varphi)$ and $(V, \xi)$.
+
+\begin{eg}
+ Let $\varphi: U \to \R^n$ be a chart. Then $\varphi: U \to \varphi(U)$ is a diffeomorphism.
+\end{eg}
+
+\begin{defi}[Curve]\index{curve}
+ A \emph{curve} is a smooth map $I \to M$, where $I$ is a non-empty open interval.
+\end{defi}
+
+To discuss derivatives, we first look at the case where $U \subseteq \R^n$ is open. Suppose $f: U \to \R$ is smooth. If $p \in U$ and $\mathbf{v} \in \R^n$, recall that the \term{directional derivative} is defined by
+\[
+ Df|_p(\mathbf{v}) = \lim_{t \to 0} \frac{f(p + t\mathbf{v}) - f(p)}{t}.
+\]
+If $\mathbf{v} = \mathbf{e}_i = (0, \cdots, 0, 1, 0, \cdots, 0)$, then we write
+\[
+ Df|_p (\mathbf{e}_i) = \left.\frac{\partial f}{\partial x_i}\right|_p.
+\]
+Also, we know $Df|_p: \R^n \to \R$ is a linear map (by definition of smooth).
+
+Note that here $p$ and $\mathbf{v}$ are both vectors, but they play different roles --- $p$ is an element in the domain $U$, while $\mathbf{v}$ is an arbitrary vector in $\R^n$. Even if $\mathbf{v}$ is enormous, by taking a small enough $t$, we find that $p + t\mathbf{v}$ will eventually be inside $U$.
+
+If we have a general manifold, we can still talk about the point $p$. However, we don't have anything that plays the role of a vector. Our first goal is to define the tangent space to a manifold, which captures where the ``directions'' live.
+
+An obvious way to do so would be to use a curve. Suppose $\gamma: I \to M$ is a curve, with $\gamma(0) = p \in U \subseteq M$, and $f: U \to \R$ is smooth. We can then take the derivative of $f$ along $\gamma$ as before. We let
+\[
+ X(f) = \left.\frac{\d}{\d t}\right|_{t = 0} f(\gamma(t)).
+\]
+It is an exercise to see that $X: C^\infty(U) \to \R$ is a linear map, and it satisfies the \term{Leibniz rule}
+\[
+ X(fg) = f(p) X(g) + g(p) X(f).
+\]
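+This exercise is just the product rule in disguise: writing $p = \gamma(0)$, we have
+\[
+ X(fg) = \left.\frac{\d}{\d t}\right|_{t = 0} f(\gamma(t)) g(\gamma(t)) = f(p) \left.\frac{\d}{\d t}\right|_{t = 0} g(\gamma(t)) + g(p) \left.\frac{\d}{\d t}\right|_{t = 0} f(\gamma(t)) = f(p) X(g) + g(p) X(f).
+\]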
+We denote $X$ by $\dot{\gamma}(0)$. We might think of defining the tangent space as curves up to some equivalence relation, but if we do this, there is no obvious vector space on it. The trick is to instead define a vector by the derivative $X$ induces. This then has an obvious vector space structure.
+\begin{defi}[Derivation]\index{derivation}
+ A \emph{derivation} on an open subset $U \subseteq M$ at $p \in U$ is a linear map $X: C^\infty(U) \to \R$ satisfying the Leibniz rule
+ \[
+ X(fg) = f(p) X(g) + g(p) X(f).
+ \]
+\end{defi}
+
+\begin{defi}[Tangent space]\index{tangent space}
+ Let $p \in U \subseteq M$, where $U$ is open. The \emph{tangent space} of $M$ at $p$ is the vector space
+ \[
+ T_p M = \{ \, \text{derivations on $U$ at $p$} \, \} \equiv \Der_p(C^\infty(U)).
+ \]
+ The subscript $p$ tells us the point at which we are taking the tangent space.
+\end{defi}
+Why is this the ``right'' definition? There are two things we would want to be true:
+\begin{enumerate}
+ \item The definition doesn't actually depend on $U$.
+ \item This definition agrees with the usual definition of tangent vectors in $\R^n$.
+\end{enumerate}
+We will do the first part at the end by bump functions, and will do the second part now. Note that it follows from the second part that every tangent vector comes from the derivative of a path, because this is certainly true for the usual definition of tangent vectors in $\R^n$ (take a straight line), and this is a completely local problem.
+
+\begin{eg}\index{$\frac{\partial}{\partial x_i}$}
+ Let $U \subseteq \R^n$ be open, and let $p \in U$. Then we have tangent vectors
+ \[
+ \left.\frac{\partial}{\partial x_i}\right|_p \in T_p \R^n, \qquad i = 1, \ldots, n.
+ \]
+ These correspond to the canonical basis vectors in $\R^n$.
+\end{eg}
+
+\begin{lemma}
+ $\left.\frac{\partial}{\partial x_1}\right|_p, \cdots, \left.\frac{\partial}{\partial x_n}\right|_p$ is a basis of $T_p \R^n$. So these are all the derivations.
+\end{lemma}
+
+The idea of the proof is to show that a derivation can only depend on the first order derivatives of a function, and all possibilities will be covered by the $\frac{\partial}{\partial x_i}$.
+
+\begin{proof}
+ Independence is clear as
+ \[
+ \frac{\partial x_j}{\partial x_i} = \delta_{ij}.
+ \]
+ We need to show spanning. For notational convenience, we wlog take $p = 0$. Let $X \in T_0 \R^n$.
+
+ We first show that if $g \in C^\infty(U)$ is the constant function $g = 1$, then $X(g) = 0$. Indeed, we have
+ \[
+ X(g) = X(g^2) = g(0) X(g) + X(g) g(0) = 2 X(g).
+ \]
+ Thus, if $h$ is the constant function with value $c$, then $X(h) = X(cg) = c X(g) = 0$. So the derivative of any constant function vanishes.
+
+ In general, let $f \in C^\infty(U)$. By Taylor's theorem, we have
+ \[
+ f(x_1, \cdots, x_n) = f(0) + \sum_{i = 1}^n \left.\frac{\partial f}{\partial x_i}\right|_0 x_i + \varepsilon,
+ \]
+ where $\varepsilon$ is a sum of terms of the form $x_i x_j h$ with $h \in C^\infty(U)$.
+
+ We set $\lambda_i = X(x_i) \in \R$. We first claim that $X(\varepsilon) = 0$. Indeed, we have
+ \[
+ X (x_i x_j h) = x_i(0) X(x_j h) + (x_jh)(0) X(x_i) = 0.
+ \]
+ So we have
+ \[
+ X(f) = \sum_{i = 1}^n \lambda_i \left.\frac{\partial f}{\partial x_i}\right|_0.
+ \]
+ So we have
+ \[
+ X = \sum_{i = 1}^n \lambda_i \left.\frac{\partial}{\partial x_i}\right|_0. \qedhere
+ \]
+\end{proof}
+
+Given this definition of a tangent vector, we have a rather silly and tautological definition of the derivative of a smooth function.
+\begin{defi}[Derivative]\index{derivative}
+ Suppose $F \in C^\infty(M, N)$, say $F(p) = q$. We define $\D F|_p: T_p M \to T_q N$ by
+ \[
+ \D F|_p(X)(g) = X(g \circ F)
+ \]
+ for $X \in T_pM$ and $g \in C^\infty(V)$ with $q \in V \subseteq N$.
+
+ This is a linear map called the \emph{derivative} of $F$ at $p$.
+ \[
+ \begin{tikzcd}
+ M \ar[r, "F"] \ar[rd, "g \circ F"'] & N \ar[d, "g"]\\
+ & \R
+ \end{tikzcd}
+ \]
+\end{defi}
+
+With a silly definition of a derivative comes a silly definition of the chain rule.
+\begin{prop}[Chain rule]\index{chain rule}
+ Let $M, N, P$ be manifolds, and $F \in C^\infty(M, N)$, $G \in C^\infty(N, P)$, and $p \in M, q = F(p)$. Then we have
+ \[
+ \D(G \circ F)|_p = \D G|_q \circ \D F|_p.
+ \]
+\end{prop}
+
+\begin{proof}
+ Let $h \in C^\infty(P)$ and $X \in T_p M$. We have
+ \[
+ \D G|_q (\D F|_p (X))(h) = \D F|_p(X) (h \circ G) = X(h \circ G \circ F) = \D (G \circ F)|_p (X)(h). \qedhere
+ \]
+\end{proof}
+Note that this does not provide a new, easy proof of the chain rule. Indeed, to come this far into the course, we have used the actual chain rule something like ten thousand times.
+
+\begin{cor}
+ If $F$ is a diffeomorphism, then $\D F|_p$ is a linear isomorphism, and $(\D F|_p)^{-1} = \D (F^{-1})|_{F(p)}$.
+\end{cor}
+
+In the special case where the domain is $\R$, there is a canonical choice of tangent vector at each point, namely $1$.
+\begin{defi}[Derivative]\index{derivative}
+ Let $\gamma: \R \to M$ be a smooth function. Then we write
+ \[
+ \frac{\d \gamma}{\d t}(t) = \dot{\gamma}(t) = \D \gamma|_t (1).
+ \]
+\end{defi}
+
+We now go back to understanding what $T_pM$ is if $p \in M$. We let $p \in U$ where $(U, \varphi)$ is a chart. Then if $q = \varphi(p)$, the map $\D\varphi|_p: T_p M \to T_q \R^n$ is a linear isomorphism.
+
+\begin{defi}[$\frac{\partial}{\partial x_i}$]\index{$\frac{\partial}{\partial x_i}$}
+ Given a chart $\varphi: U \to \R^n$ with $\varphi = (x_1, \cdots, x_n)$, we define
+ \[
+ \left.\frac{\partial}{\partial x_i}\right|_p = (\D \varphi|_p)^{-1} \left(\left.\frac{\partial}{\partial x_i}\right|_{\varphi(p)}\right) \in T_p M.
+ \]
+\end{defi}
+So $\left.\frac{\partial}{\partial x_1}\right|_p, \cdots, \left.\frac{\partial}{\partial x_n}\right|_p$ is a basis for $T_p M$.
+
+Recall that if $f: U \to \R$ is smooth, then we can write $f(x_1, \cdots, x_n)$. Then we have
+\[
+ \left.\frac{\partial}{\partial x_i}\right|_p (f) = \left.\frac{\partial f}{\partial x_i}\right|_{\varphi(p)}.
+\]
+So we have a consistent notation.
+
+Now, how does this basis change when we change coordinates? Suppose we also have coordinates $y_1, \cdots, y_n$ near $p$ given by some other chart. We then have $\left.\frac{\partial}{\partial y_i}\right|_p \in T_p M$. So we have
+\[
+ \left.\frac{\partial}{\partial y_i}\right|_p = \sum_{j = 1}^n \alpha_j \left.\frac{\partial}{\partial x_j}\right|_p
+\]
+for some $\alpha_j$. To figure out what they are, we apply them to the function $x_k$. So we have
+\[
+ \left.\frac{\partial}{\partial y_i}\right|_p (x_k) = \frac{\partial x_k}{\partial y_i}(p) = \alpha_k.
+\]
+So we obtain
+\[
+ \left.\frac{\partial}{\partial y_i}\right|_p = \sum_{j = 1}^n \frac{\partial x_j}{\partial y_i}(p)\left.\frac{\partial}{\partial x_j}\right|_p.
+\]
+This is the usual change-of-coordinate formula!
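+For example (a standard computation, not spelled out in the notes), on a suitable open subset of $\R^2$ we have polar coordinates $(r, \theta)$ with $x = r \cos\theta$ and $y = r \sin \theta$, and the formula gives
+\[
+ \left.\frac{\partial}{\partial r}\right|_p = \frac{\partial x}{\partial r}(p) \left.\frac{\partial}{\partial x}\right|_p + \frac{\partial y}{\partial r}(p) \left.\frac{\partial}{\partial y}\right|_p = \cos \theta \left.\frac{\partial}{\partial x}\right|_p + \sin\theta \left.\frac{\partial}{\partial y}\right|_p.
+\]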
+
+Now let $F \in C^\infty(M, N)$, $(U, \varphi)$ be a chart on $M$ containing $p$ with coordinates $x_1, \cdots, x_n$, and $(V, \xi)$ a chart on $N$ containing $q = F(p)$ with coordinates $y_1,\cdots, y_m$. By abuse of notation, we confuse $F$ and $\xi \circ F \circ \varphi^{-1}$. So we write $F = (F_1, \cdots, F_m)$ with $F_i = F_i(x_1, \cdots, x_n): U \to \R$.
+
+As before, we have a basis
+\begin{align*}
+ \left.\frac{\partial}{\partial x_1}\right|_p, \cdots, \left.\frac{\partial}{\partial x_n}\right|_p&\quad\text{for}\quad T_pM,\\
+ \left.\frac{\partial}{\partial y_1}\right|_q, \cdots, \left.\frac{\partial}{\partial y_m}\right|_q&\quad\text{for}\quad T_qN.
+\end{align*}
+
+\begin{lemma}
+ We have
+ \[
+ \D F|_p \left(\left.\frac{\partial}{\partial x_i}\right|_p\right) = \sum_{j = 1}^m \frac{\partial F_j}{\partial x_i}(p) \left.\frac{\partial}{\partial y_j}\right|_q.
+ \]
+ In other words, $\D F|_p$ has matrix representation
+ \[
+ \left(\frac{\partial F_j}{\partial x_i}(p)\right)_{ij}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We let
+ \[
+ \D F|_p \left(\left.\frac{\partial}{\partial x_i}\right|_p\right) = \sum_{j = 1}^m \lambda_j \left.\frac{\partial}{\partial y_j}\right|_q.
+ \]
+ for some $\lambda_j$. We apply this to the local function $y_k$ to obtain
+ \begin{align*}
+ \lambda_k &= \left(\sum_{j = 1}^m \lambda_j \left.\frac{\partial}{\partial y_j}\right|_q\right)(y_k)\\
+ &= \D F|_p \left(\left.\frac{\partial}{\partial x_i}\right|_p\right)(y_k)\\
+ &= \left.\frac{\partial}{\partial x_i}\right|_p (y_k \circ F) \\
+ &= \left.\frac{\partial}{\partial x_i}\right|_p(F_k) \\
+ &= \frac{\partial F_k}{\partial x_i}(p). \qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ Let $f \in C^\infty(U)$ where $U \subseteq M$ is an open set containing $p$. Then $\D f|_p: T_p M \to T_{f(p)} \R \cong \R$ is a linear map. So $\D f|_p$ is an element in the dual space $(T_pM)^*$, called the \term{differential} of $f$ at $p$, and is denoted $\d f|_p$. Then we have
+ \[
+ \d f|_p(X) = X(f).
+ \]
+ (this can, e.g.\ be checked in local coordinates)
+\end{eg}
+
+\subsection{Bump functions and partitions of unity}
+Recall that there is one thing we swept under the carpet --- to define the tangent space, we needed to pick an open set $U$. Ways to deal with this can be found in the example sheet, but there are two general approaches --- one is to talk about germs of functions, where we consider all open neighbourhoods, and identify two functions if they agree on some open neighbourhood of the point. The other way is to realize that we can ``extend'' any function on $U \subseteq M$ to a function on the whole of $M$, using bump functions.
+
+In general, we want a function that looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 3);
+ \draw [thick, blue] (-1.01, 2) -- (1.01, 2);
+ \draw [thick, blue] (-3, 0) -- (-1.99, 0);
+ \draw [thick, blue] (1.99, 0) -- (3, 0);
+ \draw [domain=-1.99:-1.01, thick, blue] plot [smooth] (\x, {2 * (exp(- 1 / ((2 + \x) * (2 + \x))))/((exp(- 1 / ((2 + \x) * (2 + \x)))) + (exp(- 1 / ((-1 - \x) * (-1 - \x)))))});
+ \draw [domain=1.01:1.99, thick, blue] plot [smooth] (\x, {2 * (exp(- 1 / ((2 - \x) * (2 - \x))))/((exp(- 1 / ((2 - \x) * (2 - \x)))) + (exp(- 1 / ((-1 + \x) * (-1 + \x)))))});
+ \end{tikzpicture}
+\end{center}
+
+\begin{lemma}
+ Suppose $W \subseteq M$ is a coordinate chart with $p \in W$. Then there is an open neighbourhood $V$ of $p$ such that $\bar{V} \subseteq W$ and an $X \in C^\infty(M, \R)$ such that $X = 1$ on $V$ and $X = 0$ on $M \setminus W$.
+\end{lemma}
+
+\begin{proof}
+ Suppose we have coordinates $x_1, \cdots, x_n$ on $W$, with $p$ corresponding to $\mathbf{x} = 0$. We wlog suppose these are defined for all $|x| < 3$.
+
+ We define $\alpha, \beta, \gamma: \R \to \R$ by
+ \[
+ \alpha(t) =
+ \begin{cases}
+ e^{-t^{-2}} & t > 0\\
+ 0 & t \leq 0
+ \end{cases}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 3);
+ \draw [thick, blue] (-3, 0) -- (0, 0);
+ \draw [domain=0.01:3, thick, blue] plot [smooth] (\x, {2 * exp(- 1 / (\x * \x))});
+ \end{tikzpicture}
+ \end{center}
+ We now let
+ \[
+ \beta(t) = \frac{\alpha(t)}{\alpha(t) + \alpha(1 - t)}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}[xscale=4]
+ \draw [->] (0, 0) -- (1, 0);
+ \draw [->] (0, -0.5) -- (0, 3);
+ \draw [thick, blue] (0, 0) -- (0.01, 0);
+ \draw [domain=0.01:0.99, thick, blue] plot [smooth] (\x, {2 * (exp(- 1 / (\x * \x)))/((exp(- 1 / (\x * \x))) + (exp(- 1 / ((1 - \x) * (1 - \x)))))});
+ \end{tikzpicture}
+ \end{center}
+ Then we let
+ \[
+ \gamma(t) = \beta(t + 2)\beta(2 - t).
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 3);
+ \draw [thick, blue] (-1.01, 2) -- (1.01, 2);
+ \draw [thick, blue] (-3, 0) -- (-1.99, 0);
+ \draw [thick, blue] (1.99, 0) -- (3, 0);
+ \draw [domain=-1.99:-1.01, thick, blue] plot [smooth] (\x, {2 * (exp(- 1 / ((2 + \x) * (2 + \x))))/((exp(- 1 / ((2 + \x) * (2 + \x)))) + (exp(- 1 / ((-1 - \x) * (-1 - \x)))))});
+ \draw [domain=1.01:1.99, thick, blue] plot [smooth] (\x, {2 * (exp(- 1 / ((2 - \x) * (2 - \x))))/((exp(- 1 / ((2 - \x) * (2 - \x)))) + (exp(- 1 / ((-1 + \x) * (-1 + \x)))))});
+ \end{tikzpicture}
+ \end{center}
+ Finally, we let
+ \[
+ X(x_1, \cdots, x_n) = \gamma(x_1) \cdots \gamma(x_n).
+ \]
+ on $W$. We let
+ \[
+ V = \{\mathbf{x}: |x_i| < 1\}.
+ \]
+ Extending $X$ to be identically $0$ on $M \setminus W$ gives the desired smooth function: $X = 1$ on $V$, since $\gamma(t) = 1$ whenever $|t| \leq 1$.
+\end{proof}
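+The functions $\alpha$, $\beta$, $\gamma$ above are easy to check numerically. The following sketch (mine, not from the notes) mirrors the construction:

```python
import math

def alpha(t):
    # Smooth function vanishing for t <= 0.
    return math.exp(-1.0 / t**2) if t > 0 else 0.0

def beta(t):
    # Smooth step: 0 for t <= 0, 1 for t >= 1.
    # The denominator is never zero: t <= 0 and 1 - t <= 0 cannot
    # both hold.
    return alpha(t) / (alpha(t) + alpha(1 - t))

def gamma(t):
    # Smooth bump: 1 on [-1, 1], 0 outside (-2, 2).
    return beta(t + 2) * beta(2 - t)

assert gamma(0.0) == 1.0 and gamma(1.0) == 1.0
assert gamma(2.0) == 0.0 and gamma(-2.5) == 0.0
assert 0.0 < gamma(1.5) < 1.0
```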
+\begin{lemma}
+ Let $p \in W \subseteq U$ with $W, U$ open. Let $f_1, f_2 \in C^\infty(U)$ be such that $f_1 = f_2$ on $W$. If $X \in \Der_p(C^\infty(U))$, then we have $X(f_1) = X(f_2)$.
+\end{lemma}
+
+\begin{proof}
+ Set $h = f_1 - f_2$. We can wlog assume that $W$ is a coordinate chart. We pick a bump function $\chi \in C^\infty(U)$ with $\chi = 1$ near $p$ that vanishes outside $W$. Then $\chi h = 0$ identically. Then we have
+ \[
+ 0 = X(\chi h) = \chi(p) X(h) + h(p) X(\chi) = X(h) + 0 = X(f_1) - X(f_2). \qedhere
+ \]
+\end{proof}
+
+While we're doing boring technical work, we might as well do the other one, known as a \emph{partition of unity}. The idea is as follows --- suppose we want to construct a global structure on our manifold, say a (smoothly varying) inner product for each tangent space $T_p M$. We know how to do this if $M = \R^n$, because there is a canonical choice of inner product at each point in $\R^n$. We somehow want to patch all of these together.
+
+In general, there are two ways we can do the patching. The easy case is that not only is there a choice on $\R^n$, but there is a \emph{unique} choice. In this case, just doing it on each chart suffices, because they must agree on the intersection by uniqueness.
+
+However, this is obviously not the case for us, because a vector space can have many distinct inner products. So we need some way to add them up.
+
+\begin{defi}[Partition of unity]\index{partition of unity}
+ Let $\{U_\alpha\}$ be an open cover of a manifold $M$. A \emph{partition of unity} subordinate to $\{U_\alpha\}$ is a collection $\varphi_\alpha \in C^\infty(M, \R)$ such that
+ \begin{enumerate}
+ \item $0 \leq \varphi_\alpha \leq 1$
+ \item $\supp(\varphi_\alpha) \subseteq U_\alpha$
+ \item For all $p \in M$, all but finitely many $\varphi_\alpha(p)$ are zero.
+ \item $\sum_\alpha \varphi_\alpha = 1$.
+ \end{enumerate}
+\end{defi}
+Note that by (iii), the final sum is actually a finite sum, so we don't have to worry about convergence issues.
+
+Now if we have such a partition of unity, we can pick an inner product on each $U_\alpha$, say $q_\alpha(\ph, \ph)$, and then we can define an inner product on the whole space by
+\[
+ q(v_p, w_p) = \sum_{\alpha} \varphi_{\alpha}(p) q_\alpha(v_p, w_p),
+\]
+where $v_p, w_p \in T_p M$ are tangent vectors. Note that this makes sense. While each $q_\alpha$ is not defined everywhere, we know $\varphi_\alpha(p)$ is non-zero only when $q_\alpha$ is defined at $p$, and we are also only taking a finite sum.
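+
+For instance, let us check that $q$ is indeed an inner product. Symmetry and bilinearity are inherited from the $q_\alpha$, and for $0 \not= v_p \in T_p M$, we have
+\[
+ q(v_p, v_p) = \sum_\alpha \varphi_\alpha(p) q_\alpha(v_p, v_p) > 0,
+\]
+since every term is non-negative, and at least one $\varphi_\alpha(p)$ is positive (as they sum to $1$) with $q_\alpha(v_p, v_p) > 0$.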
+
+The important result is the following:
+\begin{thm}
+ Given any $\{U_\alpha\}$ open cover, there exists a partition of unity subordinate to $\{U_\alpha\}$.
+\end{thm}
+
+\begin{proof}
+ We will only do the case where $M$ is compact. Given $p \in M$, there exists a coordinate chart $p \in V_p$ and $\alpha(p)$ such that $V_p \subseteq U_{\alpha(p)}$. We pick a bump function $\chi_p \in C^\infty(M, \R)$ such that $\chi_p = 1$ on a neighbourhood $W_p \subseteq V_p$ of $p$. Then $\supp(\chi_p) \subseteq U_{\alpha(p)}$.
+
+ Now by compactness, there are some $p_1, \cdots, p_N$ such that $M$ is covered by $W_{p_1} \cup \cdots \cup W_{p_N}$. Now let
+ \[
+ \tilde{\varphi}_\alpha = \sum_{i: \alpha(p_i) = \alpha} \chi_{p_i}.
+ \]
+ Then by construction, we have
+ \[
+ \supp(\tilde{\varphi}_\alpha) \subseteq U_\alpha.
+ \]
+ Also, by construction, we know $\sum_\alpha \tilde{\varphi}_\alpha > 0$. Finally, we let
+ \[
+ \varphi_\alpha = \frac{\tilde{\varphi}_\alpha}{\sum_\beta \tilde{\varphi}_\beta}. \qedhere
+ \]
+\end{proof}
+The general proof will need the fact that the space is second-countable.
+
+We will actually not need this until quite a bit later in the course, but we might as well do all the boring technical bits together.
+
+\subsection{Submanifolds}
+You have a manifold, and a subset of it is a manifold, so you call it a submanifold.
+
+\begin{defi}[Embedded submanifold]\index{Embedded submanifold}
+ Let $M$ be a manifold with $\dim M = n$, and $S$ be a submanifold of $M$. We say $S$ is an \emph{embedded submanifold} if for all $p \in S$, there are coordinates $x_1, \cdots, x_n$ on some chart $U \subseteq M$ containing $p$ such that
+ \[
+ S \cap U = \{x_{k + 1} = x_{k + 2} = \cdots = x_n = 0\}
+ \]
+ for some $k$. Such coordinates are known as \term{slice coordinates} for $S$.
+\end{defi}
+This is a rather technical condition, as opposed to the more natural ``a subset that is also a manifold under the inherited smooth structure''. The two definitions are indeed equivalent, but picking this formulation makes it easier to prove things about it.
+
+\begin{lemma}
+ If $S$ is an embedded submanifold of $M$, then there exists a unique differential structure on $S$ such that the inclusion map $\iota: S \hookrightarrow M$ is smooth and $S$ inherits the subspace topology.
+\end{lemma}
+
+\begin{proof}
+ Basically if $x_1, \cdots, x_n$ is a slice chart for $S$ in $M$, then $x_1, \cdots, x_k$ will be coordinates on $S$.
+
+ More precisely, let $\pi: \R^n \to \R^k$ be the projection map
+ \[
+ \pi(x_1, \cdots, x_n) = (x_1, \cdots, x_k).
+ \]
+ Given a slice chart $(U, \varphi)$ for $S$ in $M$, consider $\tilde{\varphi}: S \cap U \to \R^k$ by $\tilde{\varphi} = \pi \circ \varphi$. This is smooth and bijective, and so is a chart on $S$. These cover $S$ by assumption. So we only have to check that the transition functions are smooth.
+
+ Given another slice chart $(V, \xi)$ for $S$ in $M$, we let $\tilde{\xi} = \pi \circ \xi$, and check that
+ \[
+ \tilde{\xi} \circ \tilde{\varphi}^{-1} = \pi \circ \xi \circ \varphi^{-1} \circ j,
+ \]
+ where $j: \R^k \to \R^n$ is given by $j(x_1, \cdots, x_k) = (x_1, \cdots, x_k, 0, \cdots, 0)$.
+
+ From this characterization, by looking at local charts, it is clear that $S$ has the subspace topology. It is then easy to see that the embedded submanifold is Hausdorff and second-countable, since these properties are preserved by taking subspaces.
+
+ We can also check easily that $\iota: S \hookrightarrow M$ is smooth, and this is the only differential structure with this property.
+\end{proof}
+
+It is also obvious from the slice charts that:
+\begin{prop}
+ Let $S$ be an embedded submanifold. Then the derivative of the inclusion map $\iota: S \hookrightarrow M$ is injective.
+\end{prop}
+
+Sometimes, we like to think of a subobject not as a subset, but as the inclusion map $\iota: S \hookrightarrow M$ instead. However, when we are doing topology, there is this funny problem that a continuous bijection need not be a homeomorphism. So if we define submanifolds via inclusion maps, we get a weaker notion known as an \emph{immersed submanifold}.
+
+\begin{defi}[Immersed submanifold]\index{immersed submanifold}
+ Let $S, M$ be manifolds, and $\iota: S \hookrightarrow M$ be a smooth injective map with $\D\iota|_p : T_p S \to T_{\iota(p)} M$ injective for all $p \in S$. Then we call $(\iota, S)$ an \emph{immersed submanifold}. By abuse of notation, we identify $S$ and $\iota(S)$.
+\end{defi}
+
+\begin{eg}
+ If we map $\R$ into $\R^2$ via the following figure of eight (where the arrow heads denote the ``end points'' of $\R$), then this gives an immersed submanifold that is not an embedded submanifold.
+ \begin{center}
+ \begin{tikzpicture}
+ \path[use as bounding box] (-1.8, -0.7) rectangle (1.8,0.7);
+ \draw [-latex'] (0, 0) .. controls (2, 2) and (2, -2) .. (0, 0);
+ \draw [-latex'] (0, 0) .. controls (-2, -2) and (-2, 2) .. (0, 0);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{eg}
+ Consider the line $\R$, and define the map $f: \R \to T^2 = \R^2 / \Z^2$ by $f(x) = (x, \alpha x) \bmod \Z^2$, where $\alpha$ is some irrational number. Then this map gives an immersed submanifold of $T^2$ (in fact with dense image), but not an embedded submanifold, since $\R$ certainly does not have the subspace topology from $T^2$.
+\end{eg}
+
+How do we construct submanifolds? The definition is rather difficult to work with. It is not immediately clear whether
+\[
+ S^n = \{\mathbf{x} \in \R^{n + 1}: |\mathbf{x}| = 1\} \subseteq \R^{n + 1}
+\]
+is an embedded submanifold, even though it feels like it should be.
+
+More generally, if $M, N$ are manifolds, $F \in C^\infty(M, N)$ and $c \in N$, under what circumstances will $F^{-1}(c)$ be an embedded submanifold of $M$? A sufficient condition is that $c$ is a regular value.
+\begin{defi}[Regular value]\index{regular value}
+ Let $F \in C^\infty(M, N)$ and $c \in N$. Let $S = F^{-1}(c)$. We say $c$ is a \emph{regular value} if for all $p \in S$, the map $\D F|_p: T_p M \to T_c N$ is surjective.
+\end{defi}
+
+\begin{prop}
+ Let $F \in C^\infty(M, N)$, and let $c \in N$. Suppose $c$ is a regular value. Then $S = F^{-1}(c)$ is an embedded submanifold of dimension $\dim M - \dim N$.
+\end{prop}
+
+\begin{proof}
+ We let $n = \dim M$ and $m = \dim N$. Notice that for the map $\D F$ to be surjective, we must have $n \geq m$.
+
+ Let $p \in S$, so $F(p) = c$. We want to find a slice coordinate for $S$ near $p$. Since the problem is local, by restricting to local coordinate charts, we may wlog assume $N = \R^m$, $M = \R^n$ and $c = p = 0$.
+
+ Thus, we have a smooth map $F: \R^n \to \R^m$ with surjective derivative at $0$. Then the derivative is
+ \[
+ \left(\left.\frac{\partial F_j}{\partial x_i}\right|_0\right)_{i = 1, \ldots, n; \, j = 1, \ldots, m},
+ \]
+ which by assumption has rank $m$. We reorder the $x_i$ so that the first $m$ columns are independent. Then the $m \times m$ matrix
+ \[
+ R = \left(\left.\frac{\partial F_j}{\partial x_i}\right|_0\right)_{i,j = 1, \ldots, m}
+ \]
+ is non-singular. We consider the map
+ \[
+ \alpha(x_1, \cdots, x_n) = (F_1, \cdots, F_m, x_{m + 1}, \cdots, x_n).
+ \]
+ We then obtain
+ \[
+ \D \alpha|_0 =
+ \begin{pmatrix}
+ R & * \\
+ 0 & I
+ \end{pmatrix},
+ \]
+ and this is non-singular. By the inverse function theorem, $\alpha$ is a local diffeomorphism. So there is an open $W\subseteq \R^n$ containing $0$ such that $\alpha|_W: W \to \alpha(W)$ is smooth with smooth inverse. We claim that $\alpha$ is a slice chart of $S$ in $\R^n$.
+
+ Since it is a smooth diffeomorphism, it is certainly a chart. Moreover, by construction, the points in $S \cap W$ are exactly those whose image under $\alpha$ has its first $m$ coordinates equal to zero. So this is the desired slice chart.
+\end{proof}
+
+\begin{eg}
+ We want to show that $S^n$ is a manifold. Let $F: \R^{n + 1} \to \R$ be defined by
+ \[
+ F(x_0, \cdots, x_n) = \sum x_i^2.
+ \]
+ Then $F^{-1}(1) = S^n$. We find that
+ \[
+ \D F|_p = 2(x_0, \cdots, x_n) \not= 0
+ \]
+ when $p \in S^n$. So $S^n$ is a manifold.
+\end{eg}
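+
+As a sanity check, surjectivity is immediate: for $p = \mathbf{x} \in S^n$, we have
+\[
+ \D F|_p(\mathbf{x}) = 2 \sum x_i^2 = 2 \not= 0,
+\]
+so $\D F|_p: T_p \R^{n + 1} \to \R$ is surjective, and the proposition gives $\dim S^n = (n + 1) - 1 = n$, as expected.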
+
+\begin{eg}
+ Consider the orthogonal group. We let $M_n \cong \R^{n^2}$ be the space of all $n \times n$ matrices with the obvious smooth structure. We define
+ \[
+ N = \{A \in M_n: A^T = A\}.
+ \]
+ Since this is a linear subspace, it is also a manifold. We define
+ \begin{align*}
+ F: M_n &\to N\\
+ A &\mapsto AA^T.
+ \end{align*}
+ Then we have
+ \[
+ \Or(n) = F^{-1}(I) = \{A: AA^T = I\}.
+ \]
+ We compute the derivative by looking at
+ \[
+ F (A + H) = (A + H)(A + H)^T = AA^T + HA^T + AH^T + HH^T.
+ \]
+ So we have
+ \[
+ \D F|_A (H) = HA^T + AH^T.
+ \]
+ Now if $A \in \Or(n)$, then we have
+ \[
+ \D F|_A (HA) = HAA^T + AA^T H^T = H + H^T
+ \]
+ for any $H$. Since every symmetric matrix is of the form $H + H^T$, we know $\D F|_A: T_A M_n \to T_{F(A)}N$ is surjective. So $\Or(n)$ is a manifold.
+\end{eg}
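+
+The proposition also tells us the dimension. Since $N$ is the space of symmetric matrices, we have $\dim N = \frac{1}{2}n(n + 1)$, and so
+\[
+ \dim \Or(n) = \dim M_n - \dim N = n^2 - \frac{n(n + 1)}{2} = \frac{n(n - 1)}{2}.
+\]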
+
+\section{Vector fields}
+\subsection{The tangent bundle}
+Recall that we had the notion of a tangent vector. If we have a curve $\gamma: I \to M$, then we would like to think that the derivative $\dot{\gamma}$ ``varies smoothly'' with time. However, we cannot really do that yet, since for different $t$, the value of $\dot{\gamma}$ lies in different vector spaces, and there isn't a way of comparing them.
+
+More generally, given a ``vector field'' $f: p \mapsto v_p \in T_p M$ for each $p \in M$, how do we ask if this is a smooth function?
+
+One way to solve this is to pick local coordinates $x_1, \cdots,x _n$ on $U \subseteq M$. We can then write
+\[
+ v_p = \sum_i \alpha_i(p) \left.\frac{\partial}{\partial x_i}\right|_p.
+\]
+Since $\alpha_i(p) \in \R$, we can say $v_p$ varies smoothly if the functions $\alpha_i(p)$ are smooth. We then proceed to check that this does not depend on coordinates etc.
+
+However, there is a more direct approach. We simply turn
+\[
+ TM = \bigcup_{p \in M} T_p M
+\]
+into a manifold. There is then a natural map $\pi: TM \to M$ sending $v_p \in T_pM$ to $p$ for each $p \in M$, and this is smooth. We can then define the smoothness of $f$ using the usual notion of smoothness of maps between manifolds.
+
+Assuming that we have successfully constructed a sensible $TM$, we can define:
+\begin{defi}[Vector field]
+ A \term{vector field} on some $U \subseteq M$ is a smooth map $X: U \to TM$ such that for all $p \in U$, we have
+ \[
+ X(p) \in T_p M.
+ \]
+ In other words, we have $\pi \circ X = \id$.
+\end{defi}
+
+\begin{defi}[$\Vect(U)$]\index{$\Vect(U)$}
+ Let $\Vect(U)$ denote the set of all vector fields on $U$. Let $X, Y \in \Vect(U)$, and $f \in C^\infty(U)$. Then we can define
+ \[
+ (X + Y)(p) = X(p) + Y(p),\quad (fX)(p) = f(p) X(p).
+ \]
+ Then we have $X + Y, fX \in \Vect(U)$. So $\Vect(U)$ is a $C^\infty(U)$ module.
+
+ Moreover, if $V \subseteq U \subseteq M$ and $X \in \Vect(U)$, then $X|_V \in \Vect(V)$.
+
+ Conversely, if $\{V_i\}$ is a cover of $U$, and $X_i \in \Vect(V_i)$ are such that they agree on intersections, then they patch together to give an element of $\Vect(U)$. So we say that $\Vect$ is a \emph{sheaf of $C^\infty(M)$ modules}.
+\end{defi}
+
+Now we properly define the manifold structure on the tangent bundle.
+
+\begin{defi}[Tangent bundle]\index{tangent bundle}
+ Let $M$ be a manifold, and
+ \[
+ TM = \bigcup_{p \in M}T_p M.
+ \]
+ There is a natural projection map $\pi: TM \to M$ sending $v_p \in T_pM$ to $p$.
+
+ Let $x_1, \cdots, x_n$ be coordinates on a chart $(U, \varphi)$. Then for any $p \in U$ and $v_p \in T_p M$, there are some $\alpha_1, \cdots, \alpha_n \in \R$ such that
+ \[
+ v_p = \sum_{i = 1}^n \alpha_i \left.\frac{\partial}{\partial x_i}\right|_{p}.
+ \]
+ This gives a bijection
+ \begin{align*}
+ \pi^{-1}(U) &\to \varphi(U) \times \R^n\\
+ v_p &\mapsto (x_1(p), \cdots, x_n(p), \alpha_1, \cdots, \alpha_n).
+ \end{align*}
+ These charts make $TM$ into a manifold of dimension $2 \dim M$, called the \emph{tangent bundle} of $M$.
+\end{defi}
+
+\begin{lemma}
+ The charts actually make $TM$ into a manifold.
+\end{lemma}
+
+\begin{proof}
+ If $(V, \xi)$ is another chart on $M$ with coordinates $y_1, \cdots, y_n$, then
+ \[
+ \left.\frac{\partial}{\partial x_i}\right|_p = \sum_{j = 1}^n \frac{\partial y_j}{\partial x_i}(p) \left.\frac{\partial}{\partial y_j}\right|_p.
+ \]
+ So we have $\tilde{\xi} \circ \tilde{\varphi}^{-1}: \varphi(U \cap V) \times \R^n \to \xi(U \cap V) \times \R^n$ given by
+ \[
+ \tilde{\xi} \circ \tilde{\varphi}^{-1} (x_1, \cdots, x_n, \alpha_1, \cdots, \alpha_n) = \left(y_1, \cdots, y_n, \sum_{i = 1}^n \alpha_i \frac{\partial y_{1}}{\partial x_i}, \cdots, \sum_{i = 1}^n \alpha_i \frac{\partial y_n}{\partial x_i}\right),
+ \]
+ and is smooth (and in fact fiberwise linear).
+
+ It is easy to check that $TM$ is Hausdorff and second countable as $M$ is.
+\end{proof}
+
+There are a few remarks to make about this.
+\begin{enumerate}
+ \item The projection map $\pi: TM \to M$ is smooth.
+ \item If $U \subseteq M$ is open, recall that
+ \[
+ \Vect(U) = \{\text{smooth }X: U \to TM\mid X(p) \in T_pM\text{ for all }p \in U\}.
+ \]
+ We write $X_p$ for $X(p)$. Now suppose further that $U$ is a coordinate chart, then we can write any function $X: U \to TM$ such that $X_p \in T_p M$ (uniquely) as
+ \[
+ X_p = \sum_{i = 1}^n \alpha_i(p) \left.\frac{\partial}{\partial x_i} \right|_p
+ \]
+ Then $X$ is smooth iff all $\alpha_i$ are smooth.
+ \item If $F \in C^\infty(M, N)$, then $\D F: TM \to TN$ given by $\D F(v_p) = \D F|_p(v_p)$ is smooth. This is nice, since we can easily talk about higher derivatives, by taking the derivative of the derivative map.
+ \item If $F \in C^\infty (M, N)$ and $X$ is a vector field on $M$, then we \emph{cannot} obtain a vector field on $N$ by $\D F(X)$, since $F$ might not be injective. If $F(p_1) = F(p_2)$, we need not have $\D F(X(p_1)) = \D F(X(p_2))$.
+\end{enumerate}
+However, there is a weaker notion of being $F$-related.
+\begin{defi}[$F$-related]\index{$F$-related}
+ Let $M, N$ be manifolds, and $X \in \Vect(M)$, $Y \in \Vect(N)$ and $F \in C^\infty(M, N)$. We say they are \emph{$F$-related} if
+ \[
+ Y_q = \D F|_p (X_p)
+ \]
+ for all $p \in M$, where $q = F(p)$. In other words, if the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ TM \ar[r, "\D F"] & TN\\
+ M \ar[u, "X"] \ar[r, "F"] & N \ar[u, "Y"]
+ \end{tikzcd}.
+ \]
+\end{defi}
+
+So what does $\Vect(M)$ look like? Recall that a vector is defined to be a derivation. So perhaps a vector field is also a derivation of some sort.
+\begin{defi}[$\Der(C^\infty(M))$]\index{$\Der(C^\infty(M))$}
+ Let $\Der(C^\infty(M))$ be the set of all $\R$-linear maps $\mathcal{X}: C^\infty(M) \to C^\infty(M)$ that satisfy
+ \[
+ \mathcal{X}(fg) = f \mathcal{X}(g) + \mathcal{X}(f) g.
+ \]
+ This is an $\R$-vector space, and in fact a $C^\infty(M)$ module.
+\end{defi}
+
+Given $X \in \Vect(M)$, we get a derivation $\mathcal{X} \in \Der(C^\infty(M))$ by setting
+\[
+ \mathcal{X}(f)(p) = X_p(f).
+\]
+It is an exercise to show that $\mathcal{X}(f)$ is smooth and satisfies the Leibniz rule. Similar to the case of vectors, we want to show that all derivations come from vector fields.
+
+\begin{lemma}
+ The map $X \mapsto \mathcal{X}$ is an $\R$-linear isomorphism
+ \[
+ \Gamma: \Vect(M) \to \Der(C^\infty(M)).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Suppose that $\alpha$ is a derivation. If $p \in M$, we define
+ \[
+ X_p(f) = \alpha(f)(p)
+ \]
+ for all $f \in C^\infty(M)$. This is certainly a linear map, and we have
+ \[
+ X_p(fg) = \alpha(fg)(p) = (f\alpha(g) + g\alpha(f))(p) = f(p) X_p(g) + g(p) X_p(f).
+ \]
+ So $X_p \in T_p M$. We just need to check that the map $M \to TM$ sending $p \mapsto X_p$ is smooth. Locally on $M$, we have coordinates $x_1, \cdots, x_n$, and we can write
+ \[
+ X_p = \sum_{i = 1}^n \alpha_i(p) \left.\frac{\partial}{\partial x_i}\right|_p.
+ \]
+ We want to show that $\alpha_i: U \to \R$ are smooth.
+
+ We pick a bump function $\varphi$ that is identically $1$ near $p$, with $\supp \varphi \subseteq U$. Consider the function $\varphi x_j \in C^\infty(M)$. We can then consider
+ \[
+ \alpha(\varphi x_j)(p) = X_p(\varphi x_j).
+ \]
+ As $\varphi x_j$ agrees with $x_j$ near $p$, and derivations only depend on local behaviour, we know this is just equal to $\alpha_j(p)$. The same argument applies at every point near $p$. So we have
+ \[
+ \alpha(\varphi x_j) = \alpha_j
+ \]
+ near $p$. Since $\alpha(\varphi x_j)$ is smooth, $\alpha_j$ is smooth near $p$, and hence everywhere.
+\end{proof}
+
+From now on, we confuse $X$ and $\mathcal{X}$, i.e.\ we think of any $X \in \Vect(M)$ as a derivation of $C^\infty(M)$.
+
+Note that the product of two vector fields (i.e.\ the composition of derivations) is not a vector field. We can compute
+\begin{align*}
+ XY(fg) &= X(Y(fg)) \\
+ &= X(fY(g) + gY(f)) \\
+ &= X(f) Y(g) + fXY(g) + X(g) Y(f) + g XY(f).
+\end{align*}
+So this is not a derivation, because we have the cross terms $X(f) Y(g)$. However, what we do have is that $XY - YX$ is a derivation.
+\begin{defi}[Lie bracket]\index{Lie bracket}
+ If $X, Y \in \Vect(M)$, then the \emph{Lie bracket} $[X, Y]$ is (the vector field corresponding to) the derivation $XY - YX \in \Vect(M)$.
+\end{defi}
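+
+In local coordinates, writing $X = \sum_i X_i \frac{\partial}{\partial x_i}$ and $Y = \sum_j Y_j \frac{\partial}{\partial x_j}$, expanding $XY - YX$ as above and noting that the second-order terms cancel gives
+\[
+ [X, Y] = \sum_{i, j} \left(X_i \frac{\partial Y_j}{\partial x_i} - Y_i \frac{\partial X_j}{\partial x_i}\right) \frac{\partial}{\partial x_j},
+\]
+which is visibly a first-order operator, hence a vector field.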
+
+So $\Vect(M)$ becomes what is known as a \emph{Lie algebra}.
+\begin{defi}[Lie algebra]\index{Lie algebra}
+ A \emph{Lie algebra} is a vector space $V$ with a bracket $[\ph, \ph]: V \times V \to V$ such that
+ \begin{enumerate}
+ \item $[\ph, \ph]$ is bilinear.
+ \item $[\ph, \ph]$ is antisymmetric, i.e.\ $[X, Y] = -[Y, X]$.
+ \item The \term{Jacobi identity} holds
+ \[
+ [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0.
+ \]
+ \end{enumerate}
+\end{defi}
+
+It is a (painful) exercise to show that the Lie bracket does satisfy the Jacobi identity.
+
+The definition of the Lie algebra might seem a bit weird. Later it will come up in many different guises, and hopefully it will become clearer then.
+
+\subsection{Flows}
+What can we do with vector fields? In physics, we can imagine a manifold as all of space, and perhaps a vector field specifies the velocity a particle should have at that point. Now if you actually drop a particle into that space, the particle will move according to the velocity specified. So the vector field generates a \emph{flow} of the particle. These trajectories are known as \emph{integral curves}.
+
+\begin{defi}[Integral curve]\index{integral curve}
+ Let $X \in \Vect(M)$. An \emph{integral curve} of $X$ is a smooth $\gamma: I \to M$ such that $I$ is an open interval in $\R$ and
+ \[
+ \dot{\gamma}(t) = X_{\gamma(t)}.
+ \]
+\end{defi}
+
+\begin{eg}
+ Take $M = \R^2$, and let
+ \[
+ X = x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x}.
+ \]
+ The field looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \foreach \t in {0,30,60,90,120,150,180,210,240,270,300,330,360} {
+ \begin{scope}[rotate=\t]
+ \draw [-latex'] (2.5, 0) -- +(0, 0.5);
+ \draw [-latex'] (2, 0) -- +(0, 0.4);
+ \draw [-latex'] (1.5, 0) -- +(0, 0.3);
+ \draw [-latex'] (1, 0) -- +(0, 0.2);
+ \draw [-latex'] (0.5, 0) -- +(0, 0.1);
+ \end{scope}
+ }
+ \end{tikzpicture}
+ \end{center}
+ We would expect the integral curves to be circles. Indeed, suppose $\gamma: I \to \R^2$ is an integral curve. Write $\gamma = (\gamma_1, \gamma_2)$. Then the definition requires
+ \[
+ \gamma_1'(t) \frac{\partial}{\partial x} + \gamma_2'(t) \frac{\partial}{\partial y} = \gamma_1(t) \frac{\partial}{\partial y} - \gamma_2(t) \frac{\partial}{\partial x}.
+ \]
+ So the equation is
+ \begin{align*}
+ \gamma_1'(t) &= - \gamma_2(t)\\
+ \gamma_2'(t) &= \gamma_1(t).
+ \end{align*}
+ For example, if our starting point is $p = (1, 0)$, then we have
+ \[
+ \gamma_1(t) = \cos t,\quad \gamma_2(t) = \sin t.
+ \]
+\end{eg}
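+
+More generally, starting at an arbitrary point $p = (x_0, y_0)$, the solution is
+\[
+ \gamma(t) = (x_0 \cos t - y_0 \sin t, x_0 \sin t + y_0 \cos t),
+\]
+i.e.\ the integral curve through $p$ is the circle through $p$ centred at the origin, traversed at unit angular speed.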
+We see that to find an integral curve, all we are doing is just solving ordinary differential equations. We know that ODEs with smooth coefficients have (locally) unique smooth solutions, which depend nicely on the initial conditions. So we are going to get correspondingly nice results for integral curves. However, sometimes funny things happen.
+
+\begin{eg}
+ Take $M = \R$, and
+ \[
+ X = x^2 \frac{\d}{\d x}.
+ \]
+ Then if $\gamma$ is an integral curve, it must satisfy:
+ \[
+ \gamma'(t) = \gamma(t)^2.
+ \]
+ This means that the solution is of the form
+ \[
+ \gamma(t) = \frac{1}{C - t}
+ \]
+ for $C$ a constant. For example, if we want $\gamma(0) = \frac{1}{2}$, then we have
+ \[
+ \gamma(t) = \frac{1}{2 - t}.
+ \]
+ The solution to this ODE is defined only for $t < 2$, so we can only have $I = (-\infty, 2)$ at best.
+\end{eg}
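+
+More generally, if $\gamma(0) = x_0 > 0$, then $C = 1/x_0$ and
+\[
+ \gamma(t) = \frac{x_0}{1 - x_0 t},
+\]
+which blows up at $t = 1/x_0$. So the maximal interval of definition is $(-\infty, 1/x_0)$, and it shrinks as the starting point moves further from the origin.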
+
+We are going to prove that integral curves always exist. To do so, we need to borrow some powerful theorems from ODE theory:
+\begin{thm}[Fundamental theorem on ODEs]
+ Let $U \subseteq \R^n$ be open and $\alpha: U \to \R^n$ smooth. Pick $t_0 \in \R$.
+
+ Consider the ODE
+ \begin{align*}
+ \dot{\gamma}_i(t) &= \alpha_i(\gamma(t))\\
+ \gamma_i(t_0) &= c_i,
+ \end{align*}
+ where $\mathbf{c} = (c_1, \cdots, c_n) \in \R^n$.
+
+ Then there exists an open interval $I$ containing $t_0$ and an open $U_0 \subseteq U$ such that for every $\mathbf{c} \in U_0$, there is a smooth solution $\gamma_\mathbf{c}:I \to U$ satisfying the ODE.
+
+ Moreover, any two solutions agree on a common domain, and the function $\Theta: I \times U_0 \to U$ defined by $\Theta(t, \mathbf{c}) = \gamma_\mathbf{c}(t)$ is smooth (in both variables).
+\end{thm}
+
+\begin{thm}[Existence of integral curves]
+ Let $X \in \Vect(M)$ and $p \in M$. Then there exists some open interval $I \subseteq \R$ with $0 \in I$ and an integral curve $\gamma: I \to M$ for $X$ with $\gamma(0) = p$.
+
+ Moreover, if $\tilde{\gamma}: \tilde{I} \to M$ is another integral curve for $X$, and $\tilde{\gamma}(0) = p$, then $\tilde{\gamma} = \gamma$ on $I \cap \tilde{I}$.
+\end{thm}
+
+\begin{proof}
+ Pick local coordinates for $M$ centered at $p$ in an open neighbourhood $U$. So locally we write
+ \[
+ X = \sum_{i = 1}^n \alpha_i \frac{\partial}{\partial x_i},
+ \]
+ where $\alpha_i \in C^\infty(U)$. We want to find $\gamma = (\gamma_1, \cdots, \gamma_n): I \to U$ such that
+ \[
+ \sum_{i = 1}^n \gamma_i'(t) \left.\frac{\partial}{\partial x_i}\right|_{\gamma(t)} = \sum_{i = 1}^n \alpha_i(\gamma(t)) \left.\frac{\partial}{\partial x_i}\right|_{\gamma(t)},\quad \gamma_i(0) = 0.
+ \]
+ Since the $\frac{\partial}{\partial x_i}$ form a basis, this is equivalent to saying
+ \[
+ \gamma_i'(t) = \alpha_i(\gamma(t)),\quad \gamma_i(0) = 0
+ \]
+ for all $i$ and $t \in I$.
+
+ By the general theory of ordinary differential equations, there is an interval $I$ and a solution $\gamma$, and any two solutions agree on their common domain.
+
+ However, we need to do a bit more for uniqueness, since all we know is that there is a unique integral curve lying in this particular chart. It might be that there are integral curves that do wild things when they leave the chart.
+
+ So suppose $\gamma: I \to M$ and $\tilde{\gamma}: \tilde{I} \to M$ are both integral curves passing through the same point, i.e.\ $\gamma(0) = \tilde{\gamma}(0) = p$.
+
+ We let
+ \[
+ J = \{t \in I \cap \tilde{I}: \gamma(t) = \tilde{\gamma}(t)\}.
+ \]
+ This is non-empty since $0 \in J$, and $J$ is closed since $\gamma$ and $\tilde{\gamma}$ are continuous. To show it is all of $I \cap \tilde{I}$, we only have to show it is open, since $I \cap \tilde{I}$ is connected.
+
+ So let $t_0 \in J$, and consider $q = \gamma(t_0)$. Then $\gamma$ and $\tilde{\gamma}$ are integral curves of $X$ passing through $q$. So by the first part, they agree on some neighbourhood of $t_0$. So $J$ is open. So done.
+\end{proof}
+
+\begin{defi}[Maximal integral curve]\index{maximal integral curve}\index{integral curve!maximal}
+ Let $p \in M$, and $X \in \Vect(M)$. Let $I_p$ be the union of all $I$ such that there is an integral curve $\gamma: I \to M$ with $\gamma(0) = p$. Then there exists a unique integral curve $\gamma: I_p \to M$, known as the \emph{maximal integral curve}.
+\end{defi}
+
+Note that $I_p$ does depend on the point.
+
+\begin{eg}
+ Consider the vector field
+ \[
+ X = \frac{\partial}{\partial x}
+ \]
+ on $\R^2 \setminus \{0\}$. Then for any point $p = (x, y)$, if $y \not= 0$, we have $I_p = \R$, but if $y = 0$ and $x < 0$, then $I_p = (-\infty, -x)$. Similarly, if $y = 0$ and $x > 0$, then $I_p = (-x, \infty)$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 3) rectangle (3, -3);
+ \foreach \x in {-2.7, -1.7, -0.7, 0.3, 1.3, 2.3} {
+ \foreach \y in {-2, -1, 0, 1, 2} {
+ \draw [-latex'] (\x, \y) -- +(0.4, 0);
+ }
+ }
+ \draw circle [radius=0.1];
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
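+
+Indeed, the integral curve through $p = (x, y)$ is just $\gamma(t) = (x + t, y)$, and when $y = 0$, this would hit the removed origin at time $t = -x$, which is why $I_p$ is cut short in that case.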
+
+\begin{defi}[Complete vector field]\index{complete vector field}
+ A vector field is \emph{complete} if $I_p = \R$ for all $p \in M$.
+\end{defi}
+
+Given a complete vector field, we obtain a flow map as follows:
+\begin{thm}\index{$\Theta_t(p)$}
+ Let $M$ be a manifold and $X$ a complete vector field on $M$. Define $\Theta: \R \times M \to M$ by
+ \[
+ \Theta_t(p) = \Theta(t, p) = \gamma_p(t),
+ \]
+ where $\gamma_p$ is the maximal integral curve of $X$ through $p$ with $\gamma_p(0) = p$. Then $\Theta$ is smooth in $p$ and $t$, and
+ \[
+ \Theta_0 = \id,\quad \Theta_t \circ \Theta_s = \Theta_{s + t}.
+ \]
+\end{thm}
+
+\begin{proof}
+ This follows from uniqueness of integral curves and smooth dependence on initial conditions of ODEs.
+\end{proof}
+
+In particular, since $\Theta_t \circ \Theta_{-t} = \Theta_0 = \id$, we know
+\[
+ \Theta_t^{-1} = \Theta_{-t}.
+\]
+So $\Theta_t$ is a diffeomorphism.
+
+More algebraically, if we write $\Diff(M)$ for the diffeomorphisms $M \to M$, then
+\begin{align*}
+ \R &\to \Diff(M)\\
+ t &\mapsto \Theta_t
+\end{align*}
+is a homomorphism of groups. We call this a \emph{one-parameter subgroup} of diffeomorphisms.
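+
+For example, for the rotation field $X = x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x}$ on $\R^2$ from before, the flow $\Theta_t$ is rotation by angle $t$, and the identity $\Theta_t \circ \Theta_s = \Theta_{s + t}$ is just the statement that rotation angles add. The corresponding one-parameter subgroup consists of the rotations of the plane.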
+
+What happens when we relax the completeness assumption? Everything is essentially the same whenever things are defined, but we have to take care of the domains of definition.
+
+\begin{thm}
+ Let $M$ be a manifold, and $X \in \Vect(M)$. Define
+ \[
+ D = \{(t, p) \in \R \times M: t \in I_p\}.
+ \]
+ In other words, this is the set of all $(t, p)$ such that $\gamma_p(t)$ exists. We set
+ \[
+ \Theta_t (p) = \Theta(t, p) = \gamma_p(t)
+ \]
+ for all $(t, p) \in D$. Then
+ \begin{enumerate}
+ \item $D$ is open and $\Theta: D \to M$ is smooth
+ \item $\Theta(0, p) = p$ for all $p \in M$.
+ \item If $(s, p) \in D$ and $(t, \Theta(s, p)) \in D$, then $(s + t, p) \in D$ and $\Theta(t, \Theta(s, p)) = \Theta(t + s, p)$.
+ \item For any $t \in \R$, the set $M_t = \{p \in M: (t, p) \in D\}$ is open in $M$, and
+ \[
+ \Theta_t: M_t \to M_{-t}
+ \]
+ is a diffeomorphism with inverse $\Theta_{-t}$.
+ \end{enumerate}
+\end{thm}
+
+This is really annoying. We now prove the following useful result that saves us from worrying about these problems in nice cases:
+\begin{prop}
+ Let $M$ be a compact manifold. Then any $X \in \Vect(M)$ is complete.
+\end{prop}
+
+\begin{proof}
+ Recall that
+ \[
+ D = \{(t, p): \Theta_t(p)\text{ is defined}\}
+ \]
+ is open. So given $p \in M$, there is some open neighbourhood $U \subseteq M$ of $p$ and an $\varepsilon > 0$ such that $(-\varepsilon, \varepsilon) \times U \subseteq D$. By compactness, we can find finitely many such $U$ that cover $M$, and find a small $\varepsilon$ such that $(-\varepsilon, \varepsilon) \times M \subseteq D$.
+
+ In other words, we know $\Theta_t(p)$ exists for all $p \in M$ and $|t| < \varepsilon$. Also, we know $\Theta_t \circ \Theta_s = \Theta_{t + s}$ whenever $|t|, |s| < \varepsilon$, and in particular $\Theta_{t + s}$ is defined. So $\Theta_{Nt} = (\Theta_t)^N$ is defined for all $N$ and $|t| < \varepsilon$, so $\Theta_t$ is defined for all $t$.
+\end{proof}
+
+\subsection{Lie derivative}
+We now want to look at the concept of a Lie derivative. If we have a function $f$ defined on all of $M$, and we have a vector field $X$, then we might want to ask what the derivative of $f$ in the direction of $X$ is at each point. If $f$ is a real-valued function, then this is by definition $X(f)$. If $f$ is more complicated, then this wouldn't work, but we can still differentiate things along $X$ using the flows.
+
+\begin{notation}\index{$F^*g$}
+ Let $F: M \to M$ be a diffeomorphism, and $g \in C^\infty(M)$. We write
+ \[
+ F^* g = g \circ F \in C^\infty(M).
+\]
+\end{notation}
+
+We now define the \emph{Lie derivative} of a function, i.e.\ the derivative of a function $f$ in the direction of a vector field $X$. Of course, we can obtain this by just applying $X(f)$, but we want to make a definition that we can generalize.
+
+\begin{defi}[Lie derivative of a function]\index{Lie derivative!function}
+ Let $X$ be a complete vector field, and $\Theta$ be its flow. We define the \emph{Lie derivative} of $g \in C^\infty(M)$ along $X$ by
+ \[
+ \mathcal{L}_X(g) = \left.\frac{\d}{\d t} \right|_{t = 0} \Theta_t^* g.
+ \]
+ Here this is defined pointwise, i.e.\ for all $p \in M$, we define
+ \[
+ \mathcal{L}_X(g)(p) = \left.\frac{\d}{\d t}\right|_{t = 0} \Theta_t^*(g)(p).
+ \]
+\end{defi}
+
+\begin{lemma}
+ $\mathcal{L}_X(g) = X(g)$. In particular, $\mathcal{L}_X(g) \in C^\infty(M, \R)$.
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ \mathcal{L}_X(g)(p) &= \left.\frac{\d}{\d t}\right|_{t = 0} \Theta_t^*(g)(p) \\
+ &= \left.\frac{\d}{\d t}\right|_{t = 0} g(\Theta_t(p)) \\
+ &= \d g|_p(X(p))\\
+ &= X(g)(p). \qedhere
+ \end{align*}
+\end{proof}
+
+So this is quite boring. However, we can do something more exciting by differentiating vector fields.
+
+\begin{notation}\index{$F^*(Y)$}
+ Let $Y \in \Vect(M)$, and $F: M \to M$ be a diffeomorphism. Then $\D F^{-1}|_{F(p)}: T_{F(p)} M \to T_pM$. So we can write
+ \[
+ F^*(Y)|_p = \D F^{-1}|_{F(p)}(Y_{F(p)}) \in T_p M.
+ \]
+ Then $F^*(Y) \in \Vect(M)$. If $g \in C^\infty(M)$, then
+ \[
+ F^*(Y)|_p(g) = Y_{F(p)} (g \circ F^{-1}).
+ \]
+ Alternatively, we have
+ \[
+ F^*(Y)|_p(g \circ F) = Y_{F(p)}(g).
+ \]
+ Removing the $p$'s, we have
+ \[
+ F^*(Y)(g \circ F) = (Y(g)) \circ F.
+ \]
+\end{notation}
+
+\begin{defi}[Lie derivative of a vector field]\index{Lie derivative!vector field}
+ Let $X \in \Vect(M)$ be complete, and $Y \in \Vect(M)$ be a vector field. Then the \emph{Lie derivative} is given pointwise by
+ \[
+ \mathcal{L}_X(Y) = \left.\frac{\d}{\d t}\right|_{t = 0} \Theta_t^*(Y).
+ \]
+\end{defi}
+
+\begin{lemma}
+ We have
+ \[
+ \mathcal{L}_X Y = [X, Y].
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $g \in C^\infty(M, \R)$. Then we have
+ \[
+ \Theta_t^*(Y)(g \circ \Theta_t) = Y(g) \circ \Theta_t.
+ \]
+ We now look at
+ \[
+ \frac{\Theta_t^* (Y)(g) - Y(g)}{t} = \underbrace{\frac{\Theta_t^*(Y)(g) - \Theta_t^*(Y)(g \circ \Theta_t)}{t}}_{\alpha_t} + \underbrace{\frac{Y(g) \circ \Theta_t - Y(g)}{t}}_{\beta_t}.
+ \]
+ We have
+ \[
+ \lim_{t \to 0} \beta_t = \mathcal{L}_X (Y(g)) = XY(g)
+ \]
+ by the previous lemma, and we have
+ \[
+ \lim_{t\to 0}\alpha_t = \lim_{t \to 0} (\Theta_t^*(Y))\left(\frac{g - g \circ \Theta_t}{t}\right) = Y(-\mathcal{L}_X(g)) = - YX(g). \qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ Let $X, Y \in \Vect(M)$ and $f \in C^\infty(M, \R)$. Then
+ \begin{enumerate}
+ \item $\mathcal{L}_X(fY) = \mathcal{L}_X(f) Y + f \mathcal{L}_X Y = X(f) Y + f \mathcal{L}_X Y$
+ \item $\mathcal{L}_X Y = - \mathcal{L}_Y X$
+ \item $\mathcal{L}_X[Y, Z] = [\mathcal{L}_X Y, Z] + [Y, \mathcal{L}_X Z]$.
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}
+ Immediate from the properties of the Lie bracket.
+\end{proof}
+
+\section{Lie groups}
+We now have a short digression to Lie groups. Lie groups are manifolds with a group structure. They have an extraordinary amount of symmetry, since multiplication with any element of the group induces a diffeomorphism of the Lie group, and this action of the Lie group on itself is free and transitive. Effectively, this means that any two points on the Lie group, as a manifold, are ``the same''.
+
+As a consequence, a lot of the study of a Lie group reduces to studying an infinitesimal neighbourhood of the identity, which in turn tells us about infinitesimal neighbourhoods of \emph{all points} on the manifold. This is known as the \emph{Lie algebra}.
+
+We are not going to go deep into the theory of Lie groups, as our main focus is on differential geometry. However, we will state a few key results about Lie groups.
+
+\begin{defi}[Lie group]\index{Lie group}
+ A \emph{Lie group} is a manifold $G$ with a group structure such that multiplication $m: G \times G \to G$ and inverse $i: G \to G$ are smooth maps.
+\end{defi}
+
+%\begin{own}
+% \begin{defi}[Lie group]
+% A \emph{Lie group} is a group object in the category of smooth manifolds.
+% \end{defi}
+%\end{own}
+
+\begin{eg}
+ $\GL_n(\R)$ and $\GL_n(\C)$ are Lie groups.
+\end{eg}
+
+\begin{eg}
+ $M_n(\R)$ under addition is also a Lie group.
+\end{eg}
+
+\begin{eg}
+ $\Or(n)$ is a Lie group.
+\end{eg}
+
+\begin{notation}\index{$L_g$}
+ Let $G$ be a Lie group and $g \in G$. We write $L_g: G \to G$ for the diffeomorphism
+ \[
+ L_g(h) = gh.
+ \]
+\end{notation}
+This innocent-seeming translation map is what makes Lie groups nice. Given any local information near an element $g$, we can transfer it to local information near $h$ by applying the diffeomorphism $L_{hg^{-1}}$. In particular, the diffeomorphism $L_g: G \to G$ induces a linear isomorphism $\D L_g|_e : T_e G \to T_g G$, so we have a canonical identification of all tangent spaces.
+
+\begin{defi}[Left invariant vector field]\index{left invariant vector field}\index{vector field!left invariant}
+ Let $X \in \Vect(G)$ be a vector field. This is \emph{left invariant} if
+ \[
+ \D L_g|_h (X_h) = X_{gh}
+ \]
+ for all $g,h \in G$.
+
+ We write $\Vect^L(G)$\index{$\Vect^L(G)$} for the collection of all left invariant vector fields.
+\end{defi}
+
+Using the fact that for a diffeomorphism $F$, we have
+\[
+ F^*[X, Y] = [F^* X, F^* Y],
+\]
+it follows that $\Vect^L(G)$ is a Lie subalgebra of $\Vect(G)$.
+
+If we have a left invariant vector field, then we obtain a tangent vector at the identity. On the other hand, if we have a tangent vector at the identity, the definition of a left invariant vector field tells us how we can extend this to a left invariant vector field. One would expect this to give us an isomorphism between $T_e G$ and $\Vect^L(G)$, but we have to be slightly more careful and check that the induced vector field is indeed a vector field.
+
+\begin{lemma}
+ Given $\xi \in T_e G$, we let
+ \[
+ X_\xi|_g = \D L_g|_e(\xi) \in T_g(G).
+ \]
+ Then the map $T_e G \to \Vect^L(G)$ by $\xi \mapsto X_\xi$ is an isomorphism of vector spaces.
+\end{lemma}
+
+\begin{proof}
+ The inverse is given by $X \mapsto X|_e$. The only thing to check is that $X_\xi$ actually is a left invariant vector field. The left invariant part follows from
+ \[
+ \D L_h|_g (X_\xi|_g) = \D L_h|_g (\D L_g|_e (\xi)) = \D L_{hg}|_e(\xi) = X_\xi |_{hg}.
+ \]
+ To check that $X_\xi$ is smooth, suppose $f \in C^\infty(U, \R)$, where $U$ is open and contains $e$. We let $\gamma: (-\varepsilon, \varepsilon) \to U$ be smooth with $\dot{\gamma}(0) = \xi$. So
+ \[
+ X_\xi f|_g = \D L_g (\xi)(f) = \xi(f \circ L_g) = \left.\frac{\d}{\d t}\right|_{t = 0} (f \circ L_g \circ \gamma)
+ \]
+ But as $(t, g) \mapsto f \circ L_g \circ \gamma(t)$ is smooth, it follows that $X_\xi f$ is smooth. So $X_\xi \in \Vect^L(G)$.
+\end{proof}
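+
+\begin{eg}
+ For the abelian Lie group $(\R^n, +)$, we have $L_g(h) = g + h$, so $\D L_g$ is the identity under the canonical identifications $T_h \R^n \cong \R^n$. Hence a vector field on $\R^n$ is left invariant if and only if its components are constant, and $X_\xi$ is just the constant vector field with value $\xi$.
+\end{eg}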
+
+Thus, instead of talking about $\Vect^L(G)$, we talk about $T_e G$, because it seems less scary. This isomorphism gives $T_eG$ the structure of a Lie algebra.
+
+\begin{defi}[Lie algebra of a Lie group]\index{Lie algebra of Lie group}
+ Let $G$ be a Lie group. The \emph{Lie algebra} $\mathfrak{g}$ of $G$ is the Lie algebra $T_eG$ whose Lie bracket is induced by that of the isomorphism with $\Vect^L(G)$. So
+ \[
+ [\xi, \eta] = [X_\xi, X_\eta]|_e.
+ \]
+ We also write $\Lie(G)$ for $\mathfrak{g}$.
+\end{defi}
+In general, if a Lie group is written in some capital letter, say $G$, then the Lie algebra is written in the same letter but in lower case fraktur.
+
+Note that $\dim \mathfrak{g} = \dim G$ is finite.
+
+\begin{lemma}
+ Let $G$ be an abelian Lie group. Then the bracket of $\mathfrak{g}$ vanishes.
+\end{lemma}
+
+\begin{eg}
+ For any vector space $V$ and $v \in V$, we have $T_v V \cong V$. So $V$ as a Lie group has Lie algebra $V$ itself. The commutator vanishes because the group is commutative.
+\end{eg}
+
+\begin{eg}
+ Note that $G = \GL_n(\R)$ is an open subset of $M_n$, so it is a manifold. It is then a Lie group under multiplication. Then we have
+ \[
+ \gl_n(\R) = \Lie(\GL_n(\R)) = T_I \GL_n(\R) = T_I M_n \cong M_n.
+ \]
+ If $A, B \in \GL_n(\R)$, then
+ \[
+ L_A(B) = AB.
+ \]
+ So
+ \[
+ \D L_A|_B(H) = AH
+ \]
+ as $L_A$ is linear.
+
+ We claim that under the identification, if $\xi, \eta \in \gl_n(\R) = M_n$, then
+ \[
+ [\xi, \eta] = \xi \eta - \eta \xi.
+ \]
+ Indeed, on $G$, we have global coordinates $U_i^j: \GL_n(\R) \to \R$ where
+ \[
+ U_i^j(A) = A_i^j,
+ \]
+ where $A = (A_i^j) \in \GL_n(\R)$.
+
+ Under this chart, we have
+ \[
+ X_\xi|_A = \D L_A|_I(\xi) = \sum_{i,j} (A \xi)_j^i \left.\frac{\partial}{\partial U_j^i}\right|_A = \sum_{i,j,k} A_k^i \xi_j^k \left.\frac{\partial}{\partial U_j^i} \right|_A

+ \]
+ So we have
+ \[
+ X_\xi = \sum_{i,j,k} U^i_k \xi^k_j \frac{\partial}{\partial U_j^i}.
+ \]
+ So we have
+ \[
+ [X_\xi, X_\eta] = \left[\sum_{i,j,k} U_k^i \xi_j^k \frac{\partial}{\partial U_j^i},\sum_{p, r, q} U_q^p \eta_r^q \frac{\partial}{\partial U_r^p}\right].
+ \]
+ We now use the fact that
+ \[
+ \frac{\partial}{\partial U^i_j} U^p_q = \delta_{ip}\delta_{jq}.
+ \]
+ We then expand
+ \[
+ [X_\xi, X_\eta] = \sum_{i,j,k,r} (U_j^i \xi_k^j \eta^k_r - U^i_j \eta^j_k \xi^k_r) \frac{\partial}{\partial U_r^i}.
+ \]
+ So we have
+ \[
+ [X_\xi, X_\eta] = X_{\xi\eta - \eta\xi}.
+ \]
+\end{eg}
+
+\begin{defi}[Lie group homomorphisms]\index{Lie group!homomorphism}
+ Let $G, H$ be Lie groups. A \emph{Lie group homomorphism} is a smooth map $f: G \to H$ that is also a group homomorphism.
+\end{defi}
+
+\begin{defi}[Lie algebra homomorphism]\index{Lie algebra!homomorphism}
+ Let $\mathfrak{g}, \mathfrak{h}$ be Lie algebras. Then a \emph{Lie algebra homomorphism} is a linear map $\beta: \mathfrak{g} \to \mathfrak{h}$ such that
+ \[
+ \beta[\xi,\eta] = [\beta(\xi), \beta(\eta)]
+ \]
+ for all $\xi,\eta \in \mathfrak{g}$.
+\end{defi}
+
+\begin{prop}
+ Let $G$ be a Lie group and $\xi \in \mathfrak{g}$. Then the integral curve $\gamma$ for $X_\xi$ through $e \in G$ exists for all time, and $\gamma: \R \to G$ is a Lie group homomorphism.
+\end{prop}
+
+The idea is that once we have a small integral curve, we can use the Lie group structure to copy the curve to patch together a long integral curve.
+\begin{proof}
+ Let $\gamma: I \to G$ be a maximal integral curve of $X_\xi$, say $(-\varepsilon, \varepsilon) \subseteq I$. We fix a $t_0$ with $|t_0| < \varepsilon$. Consider $g_0 = \gamma(t_0)$.
+
+ We let
+ \[
+ \tilde{\gamma}(t) = L_{g_0}(\gamma(t))
+ \]
+ for $|t| < \varepsilon$.
+
+ We claim that $\tilde{\gamma}$ is an integral curve of $X_\xi$ with $\tilde{\gamma}(0) = g_0$. Indeed, we have
+ \[
+ \dot{\tilde{\gamma}}|_t = \frac{\d}{\d t} L_{g_0}\gamma(t) = \D L_{g_0} \dot{\gamma}(t) = \D L_{g_0} X_\xi|_{\gamma(t)} = X_\xi|_{g_0 \cdot \gamma(t)} = X_\xi|_{\tilde{\gamma}(t)}.
+ \]
+ By patching these together, we know $(t_0 - \varepsilon, t_0 + \varepsilon) \subseteq I$. Since we have a fixed $\varepsilon$ that works for all $t_0$, it follows that $I = \R$.
+
+ The fact that this is a Lie group homomorphism follows from general properties of flow maps.
+\end{proof}
+
+\begin{eg}
+ Let $G = \GL_n$. If $\xi \in \gl_n$, we set
+ \[
+ e^\xi = \sum_{k \geq 0} \frac{1}{k!} \xi^k.
+ \]
+ We set $F(t) = e^{t \xi}$. We observe that this is in $\GL_n$ since $e^{t\xi}$ has an inverse $e^{-t\xi}$ (alternatively, $\det (e^{t\xi}) = e^{\tr (t\xi)} \not= 0$). Then
+ \[
+ F'(t) = \frac{\d}{\d t} \sum_k \frac{1}{k!} t^k \xi^k = e^{t\xi} \xi = L_{e^{t\xi}}\xi = L_{F(t)}\xi.
+ \]
+ Also, $F(0) = I$. So $F(t)$ is an integral curve.
+\end{eg}
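+
+For a concrete instance of this, take
+\[
+ \xi =
+ \begin{pmatrix}
+ 0 & -\theta\\
+ \theta & 0
+ \end{pmatrix} \in \gl_2(\R).
+\]
+Then $\xi^2 = -\theta^2 I$, so the even and odd parts of the exponential series sum to cosines and sines respectively, giving
+\[
+ e^{t\xi} =
+ \begin{pmatrix}
+ \cos t\theta & -\sin t\theta\\
+ \sin t\theta & \cos t\theta
+ \end{pmatrix},
+\]
+i.e.\ the integral curve of $X_\xi$ through $I$ is rotation at angular speed $\theta$.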
+
+\begin{defi}[Exponential map]\index{exponential map}
+ The \emph{exponential map} of a Lie group $G$ is $\exp: \mathfrak{g} \to G$ given by
+ \[
+ \exp(\xi) = \gamma_\xi(1),
+ \]
+ where $\gamma_\xi$ is the integral curve of $X_\xi$ through $e \in G$.
+\end{defi}
+So in the case of $G = \GL_n$, the exponential map is the usual matrix exponential.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\exp$ is a smooth map.
+ \item If $F(t) = \exp(t\xi)$, then $F: \R \to G$ is a Lie group homomorphism and $\D F|_0 \left(\frac{\d}{\d t}\right) = \xi$.
+ \item The derivative
+ \[
+ \D \exp: T_0 \mathfrak{g} \cong \mathfrak{g} \to T_e G \cong \mathfrak{g}
+ \]
+ is the identity map.
+ \item $\exp$ is a local diffeomorphism around $0 \in \mathfrak{g}$, i.e.\ there exists an open $U \subseteq \mathfrak{g}$ containing $0$ such that $\exp: U \to \exp(U)$ is a diffeomorphism.
+ \item $\exp$ is natural, i.e.\ if $f: G \to H$ is a Lie group homomorphism, then the diagram
+ \[
+ \begin{tikzcd}
+ \mathfrak{g} \ar[r, "\exp"] \ar[d, "\D f|_e"]& G \ar[d, "f"]\\
+ \mathfrak{h} \ar[r, "\exp"] & H
+ \end{tikzcd}
+ \]
+ commutes.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item This is the smoothness of ODEs with respect to parameters.
+ \item Exercise.
+ \item If $\xi \in \mathfrak{g}$, we let $\sigma(t) = t \xi$. So $\dot{\sigma}(0) = \xi \in T_0 \mathfrak{g} \cong \mathfrak{g}$. So
+ \[
+ \D \exp|_0 (\xi) = \D \exp|_0(\dot{\sigma}(0)) = \left.\frac{\d}{\d t}\right|_{t = 0} \exp(\sigma(t)) = \left.\frac{\d}{\d t}\right|_{t = 0} \exp(t \xi) = X_\xi|_e = \xi.
+ \]
+ \item Follows from above by inverse function theorem.
+ \item Exercise. \qedhere
+ \end{enumerate}
+\end{proof}
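+
+As an illustration of naturality, take $f = \det: \GL_n(\R) \to \GL_1(\R) = \R^*$, which is a Lie group homomorphism with $\D f|_I = \tr$ (differentiating $\det(I + sH)$ at $s = 0$ gives $\tr H$). The commuting square then says
+\[
+ \det(e^\xi) = e^{\tr \xi},
+\]
+which is the identity we used earlier to see that $e^{t\xi}$ is invertible.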
+
+\begin{defi}[Lie subgroup]\index{Lie subgroup}\index{Lie group!subgroup}
+ A \emph{Lie subgroup} of $G$ is a subgroup $H$ with a smooth structure on $H$ making $H$ an \emph{immersed} submanifold (and a Lie group in its own right).
+\end{defi}
+
+Certainly, if $H \subseteq G$ is a Lie subgroup, then $\mathfrak{h} \subseteq \mathfrak{g}$ is a Lie subalgebra.
+
+\begin{thm}
+ If $\mathfrak{h} \subseteq \mathfrak{g}$ is a subalgebra, then there exists a unique connected Lie subgroup $H \subseteq G$ such that $\Lie(H) = \mathfrak{h}$.
+\end{thm}
+
+\begin{thm}
+ Let $\mathfrak{g}$ be a finite-dimensional Lie algebra. Then there exists a (unique) simply-connected Lie group $G$ with Lie algebra $\mathfrak{g}$.
+\end{thm}
+
+\begin{thm}
+ Let $G, H$ be Lie groups with $G$ simply connected. Then every Lie algebra homomorphism $\mathfrak{g} \to \mathfrak{h}$ lifts to a Lie group homomorphism $G \to H$.
+\end{thm}
+
+\section{Vector bundles}
+Recall that we had the tangent bundle of a manifold. The tangent bundle gives us a vector space at each point in space, namely the tangent space. In general, a vector bundle is a vector space attached to each point in our manifold (in a smoothly-varying way), which is what we are going to study in this chapter.
+
+Before we start, we have a look at tensor products. These will provide us a way of constructing new vector spaces from old ones.
+
+\subsection{Tensors}
+The tensor product is a very important concept in Linear Algebra. It is something that is taught in no undergraduate courses and assumed knowledge in all graduate courses. For the benefit of the students, we will give a brief introduction to tensor products.
+
+A motivation for tensors comes from the study of bilinear maps. A bilinear map is a function of two vector arguments that is linear in each variable separately. An example is the inner product, and another example is the volume form on $\R^2$, which tells us the signed area of the parallelogram spanned by the two vectors.
+
+\begin{defi}[Bilinear map]\index{bilinear map}
+ Let $U, V, W$ be vector spaces. We define $\Bilin(V\times W, U)$ to be the functions $V \times W \to U$ that are bilinear, i.e.\
+ \begin{align*}
+ \alpha(\lambda_1 v_1 + \lambda_2 v_2, w) &= \lambda_1 \alpha(v_1, w) + \lambda_2 \alpha(v_2, w)\\
+ \alpha(v, \lambda_1 w_1 + \lambda_2 w_2) &= \lambda_1 \alpha(v, w_1) + \lambda_2 \alpha(v, w_2).
+ \end{align*}
+\end{defi}
+It is important to note that a bilinear map is \emph{not} a linear map on $V \times W$. This is bad. We spent so much time studying linear maps, and we now have to go back to our linear algebra book and rewrite everything to talk about bilinear maps as well. But bilinear maps are not enough. We want to do them for multi-linear maps! But linear maps were already complicated enough, so this must be much worse. We want to die.
+
+Tensors are a trick that turns the study of bilinear maps into the study of linear maps (from a different space).
+
+\begin{defi}[Tensor product]\index{tensor product}
+ A \emph{tensor product} of two vector spaces $V, W$ is a vector space $V \otimes W$ and a bilinear map $\pi: V \times W \to V \otimes W$ such that a bilinear map from $V \times W$ is ``the same as'' a linear map from $V \otimes W$. More precisely, given any bilinear map $\alpha: V \times W \to U$, we can find a unique linear map $\tilde{\alpha}: V \otimes W \to U$ such that the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ V \times W \ar[rd, "\alpha"] \ar[d, "\pi"] \\
+ V \otimes W \ar[r, "\tilde\alpha"'] & U
+ \end{tikzcd}
+ \]
+ So we have
+ \[
+ \Bilin(V \times W, U) \cong \Hom(V \otimes W, U).
+ \]
+ Given $v \in V$ and $w \in W$, we obtain $\pi(v, w) \in V \otimes W$, called the \emph{tensor product} of $v$ and $w$, written $v \otimes w$.
+\end{defi}
+We say $V \otimes W$ \emph{represents} bilinear maps from $V \times W$.
+
+It is important to note that not all elements of $V \otimes W$ are of the form $v \otimes w$.
+
+Now the key thing we want to prove is the \emph{existence} and uniqueness of tensor products.
+
+\begin{lemma}
+ Tensor products exist (and are unique up to isomorphism) for all pairs of finite-dimensional vector spaces.
+\end{lemma}
+
+\begin{proof}
+ We can construct $V \otimes W = \Bilin(V \times W, \R)^*$. The verification is left as an exercise on the example sheet.
+\end{proof}
+
+We now write down some basic properties of tensor products.
+\begin{prop}
+ Given maps $f: V \to W$ and $g: V' \to W'$, we obtain a map $f \otimes g: V \otimes V' \to W \otimes W'$ induced by the bilinear map $(v, w) \mapsto f(v) \otimes g(w)$, i.e.
+ \[
+ (f \otimes g)(v \otimes w) = f(v) \otimes g(w).
+ \]
+\end{prop}
+
+\begin{lemma}
+ Given $v, v_i \in V$ and $w, w_i \in W$ and $\lambda_i \in \R$, we have
+ \begin{align*}
+ (\lambda_1 v_1 + \lambda_2 v_2) \otimes w &= \lambda_1 (v_1 \otimes w) + \lambda_2 (v_2 \otimes w)\\
+ v \otimes (\lambda_1 w_1 + \lambda_2 w_2) &= \lambda_1 (v \otimes w_1) + \lambda_2 (v \otimes w_2).
+ \end{align*}
+\end{lemma}
+
+\begin{proof}
+ Immediate from the definition of bilinear map.
+\end{proof}
+
+\begin{lemma}
+ If $v_1,\cdots, v_n$ is a basis for $V$, and $w_1, \cdots, w_m$ is a basis for $W$, then
+ \[
+ \{v_i \otimes w_j: i = 1, \cdots, n; j = 1, \cdots, m\}
+ \]
+ is a basis for $V \otimes W$. In particular, $\dim (V \otimes W) = \dim V \cdot \dim W$.
+\end{lemma}
+
+\begin{proof}
+ We have $V \otimes W = \Bilin(V \times W, \R)^*$. We let $\alpha_{pq}:V \times W \to \R$ be given by
+ \[
+ \alpha_{pq}\left(\sum a_i v_i, \sum b_j w_j\right) = a_p b_q.
+ \]
+ Then $\alpha_{pq} \in \Bilin(V\times W, \R)$, and $(v_i \otimes w_j)$ are dual to $\alpha_{pq}$. So it suffices to show that $\alpha_{pq}$ are a basis. It is clear that they are independent, and any bilinear map can be written as
+ \[
+ \alpha = \sum c_{pq}\alpha_{pq},
+ \]
+ where
+ \[
+ c_{pq} = \alpha(v_p, w_q).
+ \]
+ So done.
+\end{proof}
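+
+Concretely, the coefficients $c_{pq}$ are just the entries of the matrix representing $\alpha$ in the chosen bases. For example, the standard inner product on $\R^n$ (with $V = W = \R^n$ and $v_i = w_i = e_i$ the standard basis) has $c_{pq} = \alpha(e_p, e_q) = \delta_{pq}$, so in this basis it is simply $\sum_p \alpha_{pp}$.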
+
+\begin{prop}
+ For any vector spaces $V, W, U$, we have (natural) isomorphisms
+ \begin{enumerate}
+ \item $V \otimes W \cong W \otimes V$
+ \item $(V \otimes W) \otimes U \cong V \otimes (W \otimes U)$
+ \item $(V \otimes W)^* \cong V^* \otimes W^*$
+ \end{enumerate}
+\end{prop}
+
+\begin{defi}[Covariant tensor]\index{covariant tensor}
+ A \emph{covariant tensor} of rank $k$ on $V$ is an element of
+ \[
+ \alpha \in \underbrace{V^* \otimes \cdots \otimes V^*}_{k\text{ times}},
+ \]
+ i.e.\ $\alpha$ is a multilinear map $V \times \cdots \times V \to \R$.
+\end{defi}
+
+\begin{eg}
+ A covariant $1$-tensor is an $\alpha \in V^*$, i.e.\ a linear map $\alpha: V \to \R$.
+
+ A covariant $2$-tensor is a $\beta \in V^* \otimes V^*$, i.e.\ a bilinear map $V \times V \to \R$, e.g.\ an inner product.
+\end{eg}
+
+\begin{eg}
+ If $\alpha, \beta \in V^*$, then $\alpha \otimes \beta \in V^* \otimes V^*$ is the covariant $2$-tensor given by
+ \[
+ (\alpha \otimes \beta)(v, w) = \alpha(v) \beta(w).
+ \]
+ More generally, if $\alpha$ is a rank $k$ tensor and $\beta$ is a rank $\ell$ tensor, then $\alpha \otimes \beta$ is a rank $k + \ell$ tensor.
+\end{eg}
+
+\begin{defi}[Tensor]\index{tensor}
+ A \emph{tensor} of type $(k, \ell)$ is an element in
+ \[
+ T^k_\ell(V) = \underbrace{V^* \otimes \cdots \otimes V^*}_{k\text{ times}} \otimes \underbrace{V \otimes \cdots \otimes V}_{\ell\text{ times}}.
+ \]
+\end{defi}
+We are interested in alternating bilinear maps, i.e.\ $\alpha(v, w) = - \alpha(w, v)$, or equivalently, $\alpha(v, v) = 0$ (if the characteristic is not $2$).
+
+\begin{defi}[Exterior product]\index{exterior product}\index{exterior algebra}
+ Consider
+ \[
+ T(V) = \bigoplus_{k \geq 0} V^{\otimes k}
+ \]
+ as an algebra (with multiplication given by the tensor product) (with $V^{\otimes 0} = \R$). We let $I(V)$ be the ideal (as algebras!) generated by $\{v \otimes v: v \in V\} \subseteq T(V)$. We define
+ \[
+ \Lambda (V) = T(V)/I(V),
+ \]
+ with a projection map $\pi: T(V) \to \Lambda(V)$. This is known as the \emph{exterior algebra}. We let
+ \[
+ \Lambda^k(V) = \pi(V^{\otimes k}),
+ \]
+ the \emph{$k$-th exterior product} of $V$.
+
+ We write $\alpha \wedge \beta$ for $\pi(\alpha \otimes \beta)$.
+\end{defi}
+
+The idea is that $\Lambda^p V$ is the dual of the space of alternating multilinear maps $\underbrace{V \times \cdots \times V}_{p\text{ times}} \to \R$.
+
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $\alpha \in \Lambda^p V$ and $\beta \in \Lambda^q V$, then $\alpha \wedge \beta = (-1)^{pq} \beta \wedge \alpha$.
+ \item If $\dim V = n$ and $p > n$, then we have
+ \[
+ \dim \Lambda^0 V = 1,\quad \dim \Lambda^n V = 1,\quad \Lambda^p V = \{0\}.
+ \]
+ \item The multilinear map $\det: V \times \cdots \times V \to \R$ spans $\Lambda^n V$.
+ \item If $v_1, \cdots, v_n$ is a basis for $V$, then
+ \[
+ \{v_{i_1} \wedge \cdots \wedge v_{i_p}: i_1 < \cdots < i_p\}
+ \]
+ is a basis for $\Lambda^p V$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We clearly have $v \wedge v = 0$. Expanding $0 = (v + w) \wedge (v + w) = v \wedge w + w \wedge v$, we get
+ \[
+ v \wedge w = - w \wedge v.
+ \]
+ Then
+ \[
+ (v_1 \wedge \cdots \wedge v_p) \wedge (w_1 \wedge \cdots \wedge w_q) = (-1)^{pq} w_1 \wedge \cdots \wedge w_q \wedge v_1 \wedge \cdots \wedge v_p
+ \]
+ since we have $pq$ swaps. Since
+ \[
+ \{v_{i_1} \wedge \cdots \wedge v_{i_p}: i_1, \cdots, i_p \in \{1,\cdots, n\}\} \subseteq \Lambda^p V
+ \]
+ spans $\Lambda^p V$ (by the corresponding result for tensor products), the result follows from linearity.
+ \item Exercise.
+ \item The $\det$ map is non-zero. So it follows from the above.
+ \item We know that
+ \[
+ \{v_{i_1} \wedge \cdots \wedge v_{i_p}: i_1, \cdots, i_p \in \{1,\cdots, n\}\} \subseteq \Lambda^p V
+ \]
+ spans, but they are not independent since there is a lot of redundancy (e.g.\ $v_1 \wedge v_2 = - v_2 \wedge v_1$). By requiring $i_1 < \cdots < i_p$, we keep exactly one representative of each such product.
+
+ To check independence, we write $I = (i_1, \cdots, i_p)$ and let $v_I = v_{i_1} \wedge \cdots \wedge v_{i_p}$. Then suppose
+ \[
+ \sum_I a_I v_I = 0
+ \]
+ for $a_I \in \R$. For each $I$, we let $J$ be the multi-index $J = \{1, \cdots, n\} \setminus I$. So if $I \not= I'$, then $v_{I'} \wedge v_J = 0$. So wedging with $v_J$ gives
+ \[
+ \sum_{I'} a_{I'} v_{I'} \wedge v_J = a_I v_I \wedge v_J = 0.
+ \]
+ Since $v_I \wedge v_J = \pm\, v_1 \wedge \cdots \wedge v_n \not= 0$ by (ii), we get $a_I = 0$. \qedhere
+ \end{enumerate}
+\end{proof}
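+
+As a worked example of (iv), take $V = \R^3$ with basis $e_1, e_2, e_3$. Then $\Lambda^2 V$ has basis $e_1 \wedge e_2, e_1 \wedge e_3, e_2 \wedge e_3$, and for $a = \sum a_i e_i$ and $b = \sum b_j e_j$, expanding and using $e_i \wedge e_i = 0$ and $e_j \wedge e_i = - e_i \wedge e_j$ gives
+\[
+ a \wedge b = (a_1 b_2 - a_2 b_1)\, e_1 \wedge e_2 + (a_1 b_3 - a_3 b_1)\, e_1 \wedge e_3 + (a_2 b_3 - a_3 b_2)\, e_2 \wedge e_3,
+\]
+whose coefficients are, up to sign, the components of the cross product $a \times b$.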
+
+If $F: V \to W$ is a linear map, then we get an induced linear map $\Lambda^p F: \Lambda^p V \to \Lambda^p W$ in the obvious way, making the following diagram commute:
+\[
+ \begin{tikzcd}
+ V^{\otimes p} \ar[r, "F^{\otimes p}"] \ar[d, "\pi"] & W^{\otimes p} \ar[d, "\pi"]\\
+ \Lambda^p V \ar[r, "\Lambda^p F"] & \Lambda^p W
+ \end{tikzcd}
+\]
+More concretely, we have
+\[
+ \Lambda^p F (v_1 \wedge \cdots \wedge v_p) = F(v_1) \wedge \cdots \wedge F(v_p).
+\]
+\begin{lemma}
+ Let $F: V \to V$ be a linear map. Then $\Lambda^n F: \Lambda^n V \to \Lambda^n V$ is multiplication by $\det F$.
+\end{lemma}
+
+\begin{proof}
+ Let $v_1, \cdots, v_n$ be a basis. Then $\Lambda^n V$ is spanned by $v_1 \wedge \cdots \wedge v_n$. So we have
+ \[
+ (\Lambda^n F)(v_1\wedge \cdots \wedge v_n) = \lambda \, v_1 \wedge \cdots \wedge v_n
+ \]
+ for some $\lambda$. Write
+ \[
+ F(v_i) = \sum_j A_{ji} v_j
+ \]
+ for some $A_{ji} \in \R$, i.e.\ $A$ is the matrix representation of $F$. Then we have
+ \[
+ (\Lambda^n F)(v_1 \wedge \cdots \wedge v_n) = \biggl(\sum_j A_{j1} v_j\biggr) \wedge \cdots \wedge \biggl(\sum_j A_{jn} v_j\biggr).
+ \]
+ If we expand the thing on the right, a lot of things die. The only things that live are those where we get each of $v_i$ once in the wedges in some order. Then this becomes
+ \[
+ \sum_{\sigma \in S_n} \varepsilon(\sigma) (A_{\sigma(1), 1} \cdots A_{\sigma(n), n}) v_1 \wedge \cdots \wedge v_n = \det(F) \, v_1 \wedge \cdots \wedge v_n,
+ \]
+ where $\varepsilon(\sigma)$ is the sign of the permutation, which comes from rearranging the $v_i$ to the right order.
+\end{proof}
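+
+For example, if $n = 2$ and $F$ has matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, so that $F(v_1) = a v_1 + c v_2$ and $F(v_2) = b v_1 + d v_2$, then
+\[
+ F(v_1) \wedge F(v_2) = ad\, v_1 \wedge v_2 + cb\, v_2 \wedge v_1 = (ad - bc)\, v_1 \wedge v_2,
+\]
+recovering the familiar formula for a $2 \times 2$ determinant.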
+
+\subsection{Vector bundles}
+Our aim is to consider spaces such as $T_p M \otimes T_p M$ and $\Lambda^r T_p M$ as $p$ varies, i.e.\ construct a ``tensor bundle'' for these tensor products, similar to how we constructed the tangent bundle. Thus, we need to come up with a general notion of vector bundle.
+
+\begin{defi}[Vector bundle]\index{vector bundle}
+ A \emph{vector bundle} of rank $r$ on $M$ is a smooth manifold $E$ with a smooth $\pi: E \to M$ such that
+ \begin{enumerate}
+ \item For each $p \in M$, the fiber $\pi^{-1}(p) = E_p$ is an $r$-dimensional vector space,
+ \item For all $p \in M$, there is an open $U \subseteq M$ containing $p$ and a diffeomorphism
+ \[
+ t: E|_U = \pi^{-1}(U) \to U \times \R^r
+ \]
+ such that
+ \[
+ \begin{tikzcd}
+ E|_U \ar[r, "t"] \ar[d, "\pi"] & U \times \R^r \ar[dl, "p_1"]\\
+ U
+ \end{tikzcd}
+ \]
+ commutes, and the induced map $E_q \to \{q\} \times \R^r$ is a linear isomorphism for all $q \in U$.
+
+ We call $t$ a \term{trivialization} of $E$ over $U$; call $E$ the \term{total space}; call $M$ the \term{base space}; and call $\pi$ the \term{projection}. Also, for each $q \in M$, the vector space $E_q = \pi^{-1}(\{q\})$ is called the \term{fiber} over $q$.
+ \end{enumerate}
+ Note that the vector space structure on $E_p$ is part of the data of a vector bundle.
+\end{defi}
+
+Alternatively, $t$ can be given by a collection of smooth maps $s_1, \cdots, s_r: U \to E$ with the property that for each $q \in U$, the vectors $s_1(q), \cdots, s_r(q)$ form a basis for $E_q$. Indeed, given such $s_1, \cdots, s_r$, we can define $t$ by
+\[
+ t(v_q) = (q, \alpha_1, \cdots, \alpha_r),
+\]
+where $v_q \in E_q$ and the $\alpha_i$ are chosen such that
+\[
+ v_q = \sum_{i = 1}^r \alpha_i s_i(q).
+\]
+The $s_1, \cdots, s_r$ are known as a \term{frame} for $E$ over $U$.
+
+\begin{eg}[Tangent bundle]
+ The bundle $TM \to M$ is a vector bundle. Given any point $p$, pick a coordinate chart $U$ around $p$ with coordinates $x_1, \cdots, x_n$. Then we get a frame $\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}$, giving a trivialization of $TM$ over $U$. So $TM$ is a vector bundle.
+\end{eg}
+
+\begin{defi}[Section]\index{section}\index{$C^\infty(U, E)$}
+ A \emph{(smooth) section} of a vector bundle $E \to M$ over some open $U \subseteq M$ is a smooth $s: U \to E$ such that $s(p) \in E_p$ for all $p \in U$, that is $\pi \circ s = \id$. We write $C^\infty(U, E)$ for the set of smooth sections of $E$ over $U$.
+\end{defi}
+
+\begin{eg}
+ $\Vect(M) = C^\infty (M, TM)$.
+\end{eg}
+
+\begin{defi}[Transition function]\index{transition function}
+ Suppose that $t_\alpha: E|_{U_\alpha} \to U_\alpha \times \R^r$ and $t_\beta: E|_{U_\beta} \to U_\beta \times \R^r$ are trivializations of $E$. Then
+ \[
+ t_\alpha \circ t_\beta^{-1} : (U_\alpha \cap U_\beta) \times \R^r \to (U_\alpha \cap U_\beta) \times \R^r
+ \]
+ is fiberwise linear, i.e.
+ \[
+ t_\alpha \circ t_\beta^{-1}(q, v) = (q, \varphi_{\alpha\beta}(q) v),
+ \]
+ where $\varphi_{\alpha\beta}(q)$ is in $\GL_r(\R)$.
+
+ In fact, $\varphi_{\alpha\beta}: U_\alpha \cap U_\beta \to \GL_r(\R)$ is smooth. Then $\varphi_{\alpha\beta}$ is known as the \term{transition function} from $\beta$ to $\alpha$.
+\end{defi}
+
+\begin{prop}
+ We have the following equalities whenever everything is defined:
+ \begin{enumerate}
+ \item $\varphi_{\alpha\alpha} = \id$
+ \item $\varphi_{\alpha\beta} = \varphi_{\beta\alpha}^{-1}$
+ \item $\varphi_{\alpha\beta}\varphi_{\beta\gamma} = \varphi_{\alpha\gamma}$, where $\varphi_{\alpha\beta} \varphi_{\beta\gamma}$ is pointwise matrix multiplication.
+ \end{enumerate}
+ These are known as the \term{cocycle conditions}.
+\end{prop}
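+
+\begin{eg}
+ For $E = TM$ with the trivializations coming from coordinate charts $(U_\alpha, x_\alpha)$, the transition functions are the Jacobians
+ \[
+ \varphi_{\alpha\beta}(q) = \D (x_\alpha \circ x_\beta^{-1})|_{x_\beta(q)} \in \GL_n(\R),
+ \]
+ and the cocycle condition (iii) is precisely the chain rule.
+\end{eg}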
+We now consider general constructions that allow us to construct new vector bundles from old ones.
+
+\begin{prop}[Vector bundle construction]
+ Suppose that for each $p \in M$, we have a vector space $E_p$. We set
+ \[
+ E = \bigcup_p E_p
+ \]
+ We let $\pi: E \to M$ be given by $\pi(v_p) = p$ for $v_p \in E_p$. Suppose there is an open cover $\{U_\alpha\}$ of open sets of $M$ such that for each $\alpha$, we have maps
+ \[
+ t_\alpha: E|_{U_\alpha} = \pi^{-1}(U_\alpha) \to U_\alpha \times \R^r
+ \]
+ over $U_\alpha$ that induce fiberwise linear isomorphisms. Suppose the transition functions $\varphi_{\alpha\beta}$ are smooth. Then there exists a unique smooth structure on $E$ making $\pi: E \to M$ a vector bundle such that the $t_\alpha$ are trivializations for $E$.
+\end{prop}
+
+\begin{proof}
+ The same as the case for the tangent bundle.
+\end{proof}
+
+In particular, we can use this to perform the following constructions:
+\begin{defi}[Direct sum of vector bundles]\index{direct sum!vector bundles}\index{vector bundle!direct sum}
+ Let $E, \tilde{E}$ be vector bundles on $M$. Suppose $t_\alpha: E|_{U_\alpha} \cong U_\alpha \times \R^r$ is a trivialization for $E$ over $U_\alpha$, and $\tilde{t}_\alpha: \tilde{E}|_{U_\alpha} \cong U_\alpha \times \R^{\tilde{r}}$ is a trivialization for $\tilde{E}$ over $U_\alpha$.
+
+ We let $\varphi_{\alpha\beta}$ be transition functions for $\{t_\alpha\}$ and $\tilde{\varphi}_{\alpha\beta}$ be transition functions for $\{\tilde{t}_\alpha\}$.
+
+ Define
+ \[
+ E \oplus \tilde{E} = \bigcup_p E_p \oplus \tilde{E}_p,
+ \]
+ and define
+ \[
+ T_\alpha: (E \oplus \tilde{E})|_{U_\alpha} = E|_{U_\alpha} \oplus \tilde{E}|_{U_\alpha} \to U_\alpha \times (\R^r \oplus \R^{\tilde{r}}) = U_\alpha \times \R^{r + \tilde{r}}
+ \]
+ be the fiberwise direct sum of the two trivializations. Then $T_\alpha$ clearly gives a linear isomorphism $(E \oplus \tilde{E})_p \cong \R^{r + \tilde{r}}$, and the transition function for $T_\alpha$ is
+ \[
+ T_\alpha \circ T_\beta^{-1} = \varphi_{\alpha\beta} \oplus \tilde{\varphi}_{\alpha\beta},
+ \]
+ which is clearly smooth. So this makes $E \oplus \tilde{E}$ into a vector bundle.
+\end{defi}
+
+In terms of frames, if $\{s_1, \cdots, s_r\}$ is a frame for $E$ and $\{\tilde{s}_1, \cdots, \tilde{s}_{\tilde{r}}\}$ is a frame for $\tilde{E}$ over some $U \subseteq M$, then
+\[
+ \{s_i \oplus 0, 0 \oplus \tilde{s}_j: i = 1, \cdots, r; j = 1, \cdots, \tilde{r}\}
+\]
+is a frame for $E \oplus \tilde{E}$.
+
+\begin{defi}[Tensor product of vector bundles]\index{tensor product!vector bundle}\index{vector bundle!tensor product}
+ Given two vector bundles $E, \tilde{E}$ over $M$, we can construct $E \otimes \tilde{E}$ similarly with fibers $(E \otimes \tilde{E})|_p = E|_p \otimes \tilde{E}|_p$.
+\end{defi}
+
+Similarly, we can construct the alternating product of vector bundles $\Lambda^n E$. Finally, we have the \emph{dual} vector bundle.
+
+\begin{defi}[Dual vector bundle]\index{vector bundle!dual}\index{dual!vector bundle}
+ Given a vector bundle $E \to M$, we define the \emph{dual vector bundle} by
+ \[
+ E^* = \bigcup_{p \in M} (E_p)^*.
+ \]
+ Suppose again that $t_\alpha: E|_{U_\alpha} \to U_\alpha \times \R^r$ is a local trivialization. Since taking the dual reverses the direction of a map, dualizing fiberwise gives
+ \[
+ t_\alpha^*: U_\alpha \times (\R^r)^* \to E|_{U_\alpha}^*.
+ \]
+ We pick an isomorphism $(\R^r)^* \to \R^r$ once and for all, and invert the above map to get a map
+ \[
+ E|_{U_\alpha}^* \to U_\alpha \times \R^r.
+ \]
+ This gives a local trivialization.
+\end{defi}
+
+If $\{s_1, \cdots, s_r\}$ is a frame for $E$ over $U$, then $\{s_1^*, \cdots, s_r^*\}$ is a frame for $E^*$ over $U$, where $\{s_1^{*}(p), \cdots, s_r^*(p)\}$ is a dual basis to $\{s_1(p), \cdots, s_r(p)\}$.
+
+\begin{defi}[Cotangent bundle]\index{cotangent bundle}\index{$T^*M$}\index{$\d x_i$}
+ The \emph{cotangent bundle} of a manifold $M$ is
+ \[
+ T^*M = (TM)^*.
+ \]
+ In local coordinate charts, we have a frame $\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}$ of $TM$ over $U$. The dual frame is written as $\d x_1, \cdots, \d x_n$. In other words, we have
+ \[
+ \d x_i|_p \in (T_p M)^*
+ \]
+ and
+ \[
+ \d x_i|_p\left(\left.\frac{\partial}{\partial x_j}\right|_p\right) = \delta_{ij}.
+ \]
+\end{defi}
+
+Recall that previously, given a function $f \in C^\infty(U, \R)$, we defined $\d f$ as the differential of $f$ given by
+\[
+ \d f|_p = \D f|_p: T_p M \to T_{f(p)} \R \cong \R.
+\]
+Thinking of $x_i$ as a function on a coordinate chart $U$, we have
+\[
+ \D x_i|_p \left(\left.\frac{\partial}{\partial x_j}\right|_p\right) = \frac{\partial}{\partial x_j}(x_i) = \delta_{ij}
+\]
+for all $i, j$. So the two definitions of $\d x_i$ agree.
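+
+More generally, expanding $\d f$ in the dual frame, the coefficients are $\d f\left(\frac{\partial}{\partial x_j}\right) = \frac{\partial f}{\partial x_j}$, so on a chart
+\[
+ \d f = \sum_{i = 1}^n \frac{\partial f}{\partial x_i} \d x_i.
+\]
+For instance, on $\R^2$ with coordinates $x, y$, the function $f(x, y) = x^2 y$ has $\d f = 2xy \,\d x + x^2 \,\d y$.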
+
+We can now take powers of this to get more interesting things.
+\begin{defi}[$p$-form]\index{$p$-form}
+ A \emph{$p$-form} on a manifold $M$ over $U$ is a smooth section of $\Lambda^p T^*M$, i.e.\ an element in $C^\infty (U, \Lambda^p T^*M)$.
+\end{defi}
+
+\begin{eg}
+ A $1$-form is a smooth section of $T^* M$. It is locally of the form
+ \[
+ \alpha_1 \d x_1 + \cdots + \alpha_n \d x_n
+ \]
+ for some smooth functions $\alpha_1, \cdots, \alpha_n$.
+
+ Similarly, if $\omega$ is a $p$-form, then locally, it is of the form
+ \[
+ \omega = \sum_I \omega_I \d x_I,
+ \]
+ where $I = (i_1, \cdots, i_p)$ with $i_1 < \cdots < i_p$, and $\d x_I = \d x_{i_1} \wedge \cdots \wedge \d x_{i_p}$.
+\end{eg}
+It is important to note that these representations only work locally.
+
+\begin{defi}[Tensors on manifolds]\index{tensors!on manifolds}\index{$T_\ell^k M$}
+ Let $M$ be a manifold. We define
+ \[
+ T_{\ell}^k M = \underbrace{T^*M \otimes \cdots \otimes T^* M}_{k \text{ times}} \otimes \underbrace{TM \otimes \cdots \otimes TM}_{\ell \text{ times}}.
+ \]
+ A \emph{tensor of type $(k, \ell)$} is an element of
+ \[
+ C^\infty (M, T_\ell^k M).
+ \]
+ The convention when $k = \ell = 0$ is to set $T_0^0 M = M \times \R$.
+\end{defi}
+
+In local coordinates, we can write a $(k, \ell)$ tensor $\omega$ as
+\[
+ \omega = \sum \alpha_{i_1, \ldots, i_\ell}^{j_1, \ldots, j_k} \d x_{j_1} \otimes \cdots \otimes \d x_{j_k} \otimes \frac{\partial}{\partial x_{i_1}} \otimes \cdots \otimes \frac{\partial}{\partial x_{i_\ell}},
+\]
+where the $\alpha$ are smooth functions.
+
+\begin{eg}
+ A tensor of type $(0, 1)$ is a vector field.
+
+ A tensor of type $(1, 0)$ is a $1$-form.
+
+ A tensor of type $(0, 0)$ is a real-valued function.
+\end{eg}
+
+\begin{defi}[Riemannian metric]\index{Riemannian metric}
+ A \emph{Riemannian metric} on $M$ is a $(2, 0)$-tensor $g$ such that for all $p$, the bilinear map $g_p: T_p M \times T_p M \to \R$ is symmetric and positive definite, i.e.\ an inner product.
+
+ Given such a $g$ and $v_p \in T_p M$, we write $\norm{v_p}$ for $\sqrt{g_p(v_p, v_p)}$.
+\end{defi}
+
+Using these, we can work with things like length:
+
+\begin{defi}[Length of curve]\index{length!curve}\index{curve!length}
+  Let $\gamma: I \to M$ be a smooth curve. The \emph{length} of $\gamma$ is
+ \[
+ \ell(\gamma)= \int_I \norm{\dot{\gamma}(t)} \,\d t.
+ \]
+\end{defi}
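+
+For example, with the Euclidean metric this recovers the usual arc length formula.
+\begin{eg}
+  Take $M = \R^n$ with the Euclidean metric $g = \sum_i \d x_i \otimes \d x_i$, so that each $g_p$ is the standard inner product. For the curve $\gamma: [0, 2\pi] \to \R^2$ given by $\gamma(t) = (\cos t, \sin t)$, we have $\dot{\gamma}(t) = (-\sin t, \cos t)$, so $\norm{\dot{\gamma}(t)} = 1$ and hence $\ell(\gamma) = 2\pi$, as expected.
+\end{eg}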
+
+Finally, we will talk about morphisms between vector bundles.
+
+\begin{defi}[Vector bundle morphisms]\index{vector bundle!morphism}\index{morphism!vector bundle}\index{bundle morphism}
+ Let $E \to M$ and $E' \to M'$ be vector bundles. A \emph{bundle morphism} from $E$ to $E'$ is a pair of smooth maps $(F: E \to E', f: M \to M')$ such that the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ E \ar[d] \ar[r, "F"] & E' \ar[d]\\
+ M \ar[r, "f"] & M'
+    \end{tikzcd}
+  \]
+  and such that $F_p: E_p \to E'_{f(p)}$ is linear for each $p$.
+\end{defi}
+
+\begin{eg}
+ Let $E = TM$ and $E' = TM'$. If $f: M \to M'$ is smooth, then $(\D f, f)$ is a bundle morphism.
+\end{eg}
+
+\begin{defi}[Bundle morphism over $M$]\index{bundle morphism!over $M$}
+ Given two bundles $E, E'$ over the same base $M$, a \emph{bundle morphism over $M$} is a bundle morphism $E \to E'$ of the form $(F, \id_M)$.
+\end{defi}
+
+\begin{eg}
+ Given a Riemannian metric $g$, we get a bundle morphism $TM \to T^*M$ over $M$ by
+ \[
+ v \mapsto F(v) = g(v, -).
+ \]
+  Since each $g_p$ is an inner product, these fibrewise maps are isomorphisms, so we get a canonical bundle isomorphism $TM \cong T^*M$.
+\end{eg}
+Note that the isomorphism between $TM$ and $T^*M$ requires the existence of a Riemannian metric.
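+
+In local coordinates, this bundle morphism is just ``index lowering'': writing $g_{ij} = g\left(\frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j}\right)$, we have
+\[
+  F\left(\frac{\partial}{\partial x_i}\right) = g\left(\frac{\partial}{\partial x_i}, -\right) = \sum_j g_{ij} \;\d x_j,
+\]
+and positive-definiteness of the matrix $(g_{ij})$ is exactly what makes each fibrewise map invertible.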
+
+\section{Differential forms and de Rham cohomology}
+\subsection{Differential forms}
+We are now going to restrict our focus to a special kind of tensors, known as \emph{differential forms}. Recall that in $\R^n$ (as a vector space), an alternating $n$-linear map tells us the signed volume of the parallelepiped spanned by $n$ vectors. In general, a differential $p$-form is an alternating $p$-linear map on the tangent space at each point, so it tells us the volume of an ``infinitesimal $p$-dimensional parallelepiped''.
+
+In fact, we will later see that on an (oriented) $p$-dimensional manifold, we can integrate a $p$-form on the manifold to obtain the ``volume'' of the manifold.
+
+\begin{defi}[Differential form]\index{differential form}\index{$p$-form}\index{$\Omega^p(M)$}
+ We write
+ \[
+ \Omega^p (M) = C^\infty(M, \Lambda^p T^*M) = \{\text{$p$-forms on $M$}\}.
+ \]
+ An element of $\Omega^p(M)$ is known as a \emph{differential $p$-form}.
+
+ In particular, we have
+ \[
+ \Omega^0(M) = C^\infty(M, \R).
+ \]
+\end{defi}
+In local coordinates $x_1, \cdots, x_n$ on $U$ we can write $\omega \in \Omega^p(M)$ as
+\[
+ \omega = \sum_{i_1 < \ldots < i_p} \omega_{i_1, \ldots, i_p} \d x_{i_1} \wedge \cdots \wedge \d x_{i_p}
+\]
+for some smooth functions $\omega_{i_1, \ldots, i_p}$.
+
+We are usually lazy and just write
+\[
+ \omega = \sum_I \omega_I \d x_I.
+\]
+\begin{eg}
+ A $0$-form is a smooth function.
+\end{eg}
+
+\begin{eg}
+ A $1$-form is a section of $T^* M$. If $\omega \in \Omega^1(M)$ and $X \in \Vect(M)$, then $\omega(X) \in C^\infty(M, \R)$.
+
+ For example, if $f$ is a smooth function on $M$, then $\d f \in \Omega^1(M)$ with
+ \[
+ \d f(X) = X(f)
+ \]
+ for all $X \in \Vect(M)$.
+
+ Locally, we can write
+ \[
+ \d f = \sum_{i = 1}^n a_i \;\d x_i.
+ \]
+ To work out what the $a_i$'s are, we just hit this with the $\frac{\partial}{\partial x_j}$. So we have
+ \[
+ a_j = \d f\left(\frac{\partial}{\partial x_j}\right) = \frac{\partial f}{\partial x_j}.
+ \]
+ So we have
+ \[
+ \d f = \sum_{i = 1}^n \frac{\partial f}{\partial x_i}\; \d x_i.
+ \]
+ This is essentially just the gradient of a function!
+\end{eg}
+
+\begin{eg}
+ If $\dim M = n$, and $\omega \in \Omega^n(M)$, then locally we can write
+ \[
+    \omega = g \;\d x_1 \wedge \cdots \wedge \d x_n
+ \]
+ for some smooth function $g$. This is an alternating form that assigns a real number to $n$ tangent vectors. So it measures volume!
+
+  If $y_1, \cdots, y_n$ are any other coordinates, then
+ \[
+ \d x_i = \sum \frac{\partial x_i}{\partial y_j}\; \d y_j.
+ \]
+ So we have
+ \[
+ \omega = g \det\left(\frac{\partial x_i}{\partial y_j}\right)_{i, j} \;\d y_1 \wedge \cdots \wedge \d y_n.
+ \]
+\end{eg}
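+
+A concrete instance of this change-of-coordinates formula is the familiar Jacobian factor for polar coordinates.
+\begin{eg}
+  On $\{(x, y) \in \R^2: x > 0\}$, take polar coordinates $x = r \cos \theta$, $y = r \sin \theta$. Then
+  \[
+    \det
+    \begin{pmatrix}
+      \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta}\\
+      \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta}
+    \end{pmatrix} =
+    \det
+    \begin{pmatrix}
+      \cos \theta & -r \sin \theta\\
+      \sin \theta & r \cos \theta
+    \end{pmatrix} = r.
+  \]
+  So $\d x \wedge \d y = r\;\d r \wedge \d \theta$, which is the Jacobian factor from multivariable integration.
+\end{eg}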
+
+Now a motivating question is this --- given an $\omega \in \Omega^1(M)$, can we find some $f \in \Omega^0(M)$ such that $\omega = \d f$?
+
+More concretely, let $U \subseteq \R^2$ be open, and let $x, y$ be the coordinates. Let
+\[
+  \omega = a \;\d x + b\;\d y.
+\]
+If we have $\omega = \d f$ for some $f$, then we have
+\[
+ a = \frac{\partial f}{\partial x},\quad b = \frac{\partial f}{\partial y}.
+\]
+So the symmetry of partial derivatives tells us that
+\[
+ \frac{\partial a}{\partial y} = \frac{\partial b}{\partial x}.\tag{$*$}
+\]
+So this equation $(*)$ is a necessary condition to solve $\omega = \d f$. Is it sufficient?
+
+To begin with, we want to find a better way to express $(*)$ without resorting to local coordinates, and it turns out this construction will be very useful later on.
+\begin{thm}[Exterior derivative]
+ There exists a unique linear map
+ \[
+ \d = \d_{M, p}: \Omega^p(M) \to \Omega^{p + 1}(M)
+ \]
+ such that
+ \begin{enumerate}
+ \item On $\Omega^0(M)$ this is as previously defined, i.e.
+ \[
+ \d f (X) = X(f)\text{ for all }X \in \Vect(M).
+ \]
+ \item We have
+ \[
+ \d \circ \d = 0: \Omega^p(M) \to \Omega^{p + 2}(M).
+ \]
+ \item It satisfies the \term{Leibniz rule}
+ \[
+ \d (\omega \wedge \sigma) = \d \omega \wedge \sigma + (-1)^p \omega \wedge \d \sigma.
+ \]
+ \end{enumerate}
+ It follows from these assumptions that
+ \begin{enumerate}[resume]
+ \item $\d$ acts locally, i.e.\ if $\omega, \omega' \in \Omega^p(M)$ satisfy $\omega|_U = \omega'|_U$ for some $U \subseteq M$ open, then $\d \omega|_U = \d \omega'|_U$.
+ \item We have
+ \[
+ \d (\omega|_U) = (\d \omega)|_U
+ \]
+    for all $U \subseteq M$ open.
+ \end{enumerate}
+\end{thm}
+What do the three rules tell us? The first rule tells us this is a generalization of what we previously had. The second rule will turn out to be a fancy way of saying partial derivatives commute. The final Leibniz rule tells us this $\d$ is some sort of derivative.
+
+\begin{eg}
+ If we have
+ \[
+ \omega = a\;\d x + b\;\d y,
+ \]
+ then we have
+ \begin{align*}
+ \d \omega &= \d a\wedge \d x + a\; \d(\d x) + \d b \wedge \d y+ b\; \d(\d y)\\
+ &= \d a\wedge \d x + \d b \wedge \d y\\
+ &= \left(\frac{\partial a}{\partial x}\;\d x + \frac{\partial a}{\partial y}\;\d y\right)\wedge \d x + \left(\frac{\partial b}{\partial x}\;\d x + \frac{\partial b}{\partial y}\;\d y\right)\wedge \d y\\
+ &= \left(\frac{\partial b}{\partial x} - \frac{\partial a}{\partial y}\right)\;\d x \wedge \d y.
+ \end{align*}
+ So the condition $(*)$ says $\d \omega = 0$.
+\end{eg}
+We now rephrase our motivating question --- if $\omega \in \Omega^1(M)$ satisfies $\d \omega = 0$, can we find some $f$ such that $\omega = \d f$ for some $f \in \Omega^0(M)$? Now this has the obvious generalization --- given any $p$-form $\omega$, if $\d \omega = 0$, can we find some $\sigma$ such that $\omega = \d \sigma$?
+
+\begin{eg}
+ In $\R^3$, we have coordinates $x, y, z$. We have seen that for $f \in \Omega^0(\R^3)$, we have
+ \[
+ \d f = \frac{\partial f}{\partial x}\;\d x + \frac{\partial f}{\partial y}\;\d y + \frac{\partial f}{\partial z}\;\d z.
+ \]
+ Now if
+ \[
+ \omega = P\;\d x + Q\;\d y+ R \;\d z \in \Omega^1(\R^3),
+ \]
+ then we have
+ \begin{align*}
+ \d(P \;\d x) &= \d P \wedge \d x + P\; \d \d x\\
+ &= \left(\frac{\partial P}{\partial x}\;\d x + \frac{\partial P}{\partial y} \;\d y + \frac{\partial P}{\partial z} \;\d z\right)\wedge \d x\\
+ &= -\frac{\partial P}{\partial y} \;\d x \wedge \d y - \frac{\partial P}{\partial z}\;\d x \wedge \d z.
+ \end{align*}
+ So we have
+ \[
+ \d \omega = \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)\;\d x \wedge \d y + \left(\frac{\partial R}{\partial x} - \frac{\partial P}{\partial z}\right)\;\d x \wedge \;\d z + \left(\frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z}\right)\;\d y \wedge \d z.
+ \]
+ This is just the curl! So $\d^2 = 0$ just says that $\mathrm{curl} \circ \mathrm{grad} = 0$.
+\end{eg}
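+
+Applying $\d$ to a $2$-form on $\R^3$ similarly recovers the divergence.
+\begin{eg}
+  If
+  \[
+    \eta = A\;\d y \wedge \d z + B\;\d z \wedge \d x + C\;\d x \wedge \d y \in \Omega^2(\R^3),
+  \]
+  then the same kind of computation gives
+  \[
+    \d \eta = \left(\frac{\partial A}{\partial x} + \frac{\partial B}{\partial y} + \frac{\partial C}{\partial z}\right)\d x \wedge \d y \wedge \d z.
+  \]
+  So $\d$ on $2$-forms is the divergence, and $\d^2 = 0$ applied to $1$-forms says $\mathrm{div} \circ \mathrm{curl} = 0$.
+\end{eg}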
+
+\begin{proof}
+  The above computations suggest that in local coordinates, the axioms already tell us completely how $\d$ works. So we just work locally and check that the local constructions match up globally.
+
+ Suppose $M$ is covered by a single chart with coordinates $x_1, \cdots, x_n$. We define $\d : \Omega^0(M) \to \Omega^1(M)$ as required by (i). For $p > 0$, we define
+ \[
+ \d\left(\sum_{i_1 < \ldots < i_p} \omega_{i_1, \ldots, i_p}\;\d x_{i_1} \wedge \cdots \wedge \d x_{i_p}\right) = \sum \d \omega_{i_1, \ldots, i_p}\wedge \d x_{i_1} \wedge \cdots \wedge \d x_{i_p}.
+ \]
+ Then (i) is clear. For (iii), we suppose
+ \begin{align*}
+ \omega &= f\;\d x_I \in \Omega^p(M)\\
+ \sigma &= g\;\d x_J \in \Omega^q(M).
+ \end{align*}
+ We then have
+ \begin{align*}
+ \d(\omega \wedge \sigma) &= \d (fg\; \d x_I \wedge \d x_J)\\
+ &= \d (fg) \wedge \d x_I \wedge \d x_J\\
+ &= g \;\d f \wedge \d x_I \wedge \d x_J + f \;\d g \wedge \d x_I \wedge \d x_J\\
+ &= g \;\d f \wedge \d x_I \wedge \d x_J + f(-1)^p \;\d x_I \wedge (\d g \wedge\d x_J)\\
+ &= (\d \omega) \wedge \sigma + (-1)^p \omega \wedge \d \sigma.
+ \end{align*}
+ So done. Finally, for (ii), if $f \in \Omega^0(M)$, then
+ \[
+ \d^2 f = \d\left(\sum_i \frac{\partial f}{\partial x_i}\;\d x_i\right) = \sum_{i, j} \frac{\partial^2 f}{\partial x_i \partial x_j} \;\d x_j \wedge \d x_i = 0,
+ \]
+ since partial derivatives commute. Then for general forms, we have
+ \begin{align*}
+ \d^2 \omega = \d^2 \left(\sum \omega_I \;\d x_I\right) &= \d\left(\sum \d \omega_I \wedge \d x_I\right)\\
+    &= \sum \left(\d^2 \omega_I \wedge \d x_I - \d \omega_I \wedge \d(\d x_I)\right)\\
+    &= 0,
+  \end{align*}
+  since $\d^2 \omega_I = 0$ by the case of functions, and $\d(\d x_I) = 0$ by the Leibniz rule together with $\d(\d x_i) = 0$. So this works.
+
+ Certainly this has the extra properties. To claim uniqueness, if $\partial: \Omega^p(M) \to \Omega^{p + 1}(M)$ satisfies the above properties, then
+ \begin{align*}
+ \partial \omega &= \partial \left(\sum \omega_I \d x_I\right) \\
+    &= \sum \left(\partial \omega_I \wedge \d x_I + \omega_I\, \partial(\d x_I)\right) \\
+    &= \sum \d \omega_I \wedge \d x_I,
+  \end{align*}
+  using the fact that $\partial = \d$ on $\Omega^0(M)$, and that $\partial(\d x_I) = 0$ by the Leibniz rule, $\partial \circ \partial = 0$ and induction.
+
+ Finally, if $M$ is covered by charts, we can define $\d: \Omega^p(M) \to \Omega^{p + 1}(M)$ by defining it to be the $\d$ above on any single chart. Then uniqueness implies this is well-defined. This gives existence of $\d$, but doesn't immediately give uniqueness, since we only proved local uniqueness.
+
+ So suppose $\partial: \Omega^p(M) \to \Omega^{p + 1}(M)$ again satisfies the three properties. We claim that $\partial$ is local. We let $\omega, \omega' \in \Omega^p(M)$ be such that $\omega|_U = \omega'|_U$ for some $U \subseteq M$ open. Let $x \in U$, and pick a bump function $\chi \in C^\infty(M)$ such that $\chi \equiv 1$ on some neighbourhood $W$ of $x$, and $\supp(\chi) \subseteq U$. Then we have
+ \[
+ \chi \cdot (\omega - \omega') = 0.
+ \]
+ We then apply $\partial$ to get
+ \[
+ 0 = \partial(\chi \cdot(\omega - \omega')) = \d \chi \wedge (\omega - \omega') + \chi(\partial \omega - \partial\omega').
+ \]
+ But $\chi \equiv 1$ on $W$. So $\d \chi$ vanishes on $W$. So we must have
+ \[
+ \partial \omega|_W - \partial \omega'|_W = 0.
+ \]
+ So $\partial \omega = \partial \omega'$ on $W$.
+
+  Finally, to show that $\partial = \d$, if $\omega \in \Omega^p(M)$, we take the same $\chi$ as before; then at $x$, we have
+ \begin{align*}
+ \partial \omega &= \partial\left(\chi\sum \omega_I \;\d x_I\right) \\
+    &= \partial \chi \wedge \sum \omega_I \;\d x_I + \chi \sum \partial \omega_I \wedge \d x_I \\
+ &= \chi \sum \d \omega_I \wedge \d x_I\\
+ &= \d \omega.
+ \end{align*}
+ So we get uniqueness. Since $x$ was arbitrary, we have $\partial = \d$.
+\end{proof}
+
+One useful example of a differential form is a \emph{symplectic form}.
+\begin{defi}[Non-degenerate form]\index{non-degenerate form}
+  A $2$-form $\omega \in \Omega^2(M)$ is \emph{non-degenerate} if for each $p$, the only $X_p \in T_p M$ satisfying $\omega(X_p, Y_p) = 0$ for all $Y_p \in T_p M$ is $X_p = 0$.
+\end{defi}
+As in the case of an inner product, such an $\omega$ gives us an isomorphism $T_p M \to T^*_p M$ by
+\[
+ \alpha(X_p)(Y_p) = \omega(X_p, Y_p).
+\]
+\begin{defi}[Symplectic form]\index{symplectic form}
+ A symplectic form is a non-degenerate $2$-form $\omega$ such that $\d \omega = 0$.
+\end{defi}
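+
+The basic example is the standard symplectic form on $\R^{2n}$.
+\begin{eg}
+  On $\R^{2n}$ with coordinates $x_1, \cdots, x_n, y_1, \cdots, y_n$, the form
+  \[
+    \omega = \sum_{i = 1}^n \d x_i \wedge \d y_i
+  \]
+  is symplectic: it is closed since its coefficients are constant, and it is non-degenerate since $\omega\left(\frac{\partial}{\partial x_i}, -\right) = \d y_i$ and $\omega\left(\frac{\partial}{\partial y_i}, -\right) = -\d x_i$.
+\end{eg}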
+
+Why did we work with covectors rather than vectors when defining differential forms? It happens that differential forms have nicer properties. If we have some $F \in C^\infty(M, N)$ and $g \in \Omega^0(N) = C^{\infty}(N, \R)$, then we can form the pullback
+\[
+ F^*g = g \circ F \in \Omega^0(M).
+\]
+More generally, for $x \in M$, we have a map
+\[
+ \D F|_x : T_x M \to T_{F(x)}N.
+\]
+This does \emph{not} in general allow us to push a vector field on $M$ forward to a vector field on $N$, as the map $F$ need not be bijective. However, we can use its dual
+\[
+ (\D F|_x)^*: T_{F(x)}^* N \to T_x^* M
+\]
+to pull forms back.
+\begin{defi}[Pullback of differential form]\index{pullback!differential form}\index{differential form!pullback}
+ Let $\omega \in \Omega^p(N)$ and $F \in C^\infty(M, N)$. We define the \emph{pullback} of $\omega$ along $F$ to be
+ \[
+ F^* \omega|_x = \Lambda^p(\D F|_x)^*(\omega|_{F(x)}).
+ \]
+ In other words, for $v_1, \cdots, v_p \in T_x M$, we have
+ \[
+ (F^*\omega|_x)(v_1, \cdots, v_p) = \omega|_{F(x)} (\D F|_x(v_1), \cdots, \D F|_x(v_p)).
+ \]
+\end{defi}
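+
+As a quick sanity check of the definition, we can pull a $1$-form on $\R^2$ back along a curve.
+\begin{eg}
+  Let $F: \R \to \R^2$ be $F(t) = (\cos t, \sin t)$, and let $\omega = x\;\d y - y\;\d x \in \Omega^1(\R^2)$. Since $\D F|_t\left(\frac{\d}{\d t}\right) = -\sin t \frac{\partial}{\partial x} + \cos t \frac{\partial}{\partial y}$, we get $F^* \d x = -\sin t\;\d t$ and $F^* \d y = \cos t\;\d t$. So
+  \[
+    F^* \omega = \cos t \cdot \cos t \;\d t - \sin t \cdot (-\sin t)\;\d t = \d t.
+  \]
+\end{eg}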
+
+\begin{lemma}
+ Let $F \in C^\infty(M, N)$. Let $F^*$ be the associated pullback map. Then
+ \begin{enumerate}
+ \item $F^*$ is a linear map $\Omega^p(N) \to \Omega^p(M)$.
+ \item $F^*(\omega \wedge \sigma) = F^*\omega \wedge F^*\sigma$.
+ \item If $G \in C^\infty (N, P)$, then $(G \circ F)^* = F^* \circ G^*$.
+ \item We have $\d F^* = F^* \d$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}
+  All but (iv) are clear. We first check that (iv) holds for $0$-forms. If $g \in \Omega^0(N)$, then we have
+ \begin{align*}
+ (F^* \d g)|_x(v) &= \d g|_{F(x)} (\D F|_x (v)) \\
+ &= \D F|_x(v)(g) \\
+ &= v(g \circ F) \\
+ &= \d (g \circ F)(v) \\
+ &= \d (F^* g)(v).
+ \end{align*}
+ So we are done.
+
+ Then the general result follows from (i) and (ii). Indeed, in local coordinates $y_1, \cdots, y_n$, if
+ \[
+ \omega = \sum\omega_{i_1, \ldots, i_p}\; \d y_{i_1} \wedge \cdots \wedge \d y_{i_p},
+ \]
+ then we have
+ \[
+    F^* \omega = \sum (F^* \omega_{i_1, \ldots, i_p})\, F^* \d y_{i_1} \wedge \cdots \wedge F^* \d y_{i_p}.
+ \]
+  Since each $F^* \d y_i = \d(F^* y_i)$ is exact, the Leibniz rule and the $0$-form case give
+  \begin{align*}
+    \d F^* \omega &= \sum \d (F^* \omega_{i_1, \ldots, i_p}) \wedge F^* \d y_{i_1} \wedge \cdots \wedge F^* \d y_{i_p}\\
+    &= \sum (F^* \d \omega_{i_1, \ldots, i_p}) \wedge F^* \d y_{i_1} \wedge \cdots \wedge F^* \d y_{i_p} = F^* \d \omega. \qedhere
+  \end{align*}
+\end{proof}
+
+\subsection{De Rham cohomology}
+We now get to answer our original motivating question --- given an $\omega \in \Omega^p(M)$ with $\d \omega = 0$, does it follow that there is some $\sigma \in \Omega^{p - 1}(M)$ such that $\omega = \d \sigma$?
+
+The answer is ``not necessarily''. In fact, the extent to which this fails tells us something interesting about the topology of the manifold. We are going to define certain vector spaces $H^p_{\dR}(M)$ for each $p$, such that $H^p_{\dR}(M)$ vanishes if and only if every $p$-form $\omega$ with $\d \omega = 0$ is of the form $\d \sigma$. Afterwards, we will come up with techniques to compute $H^p_{\dR}(M)$, and then we can show that certain spaces have vanishing $H^p_{\dR}(M)$.
+
+We start with some definitions.
+
+\begin{defi}[Closed form]\index{closed form}
+ A $p$-form $\omega \in \Omega^p(M)$ is \emph{closed} if $\d \omega = 0$.
+\end{defi}
+
+\begin{defi}[Exact form]\index{exact form}
+ A $p$-form $\omega \in \Omega^p(M)$ is \emph{exact} if there is some $\sigma \in \Omega^{p - 1}(M)$ such that $\omega = \d \sigma$.
+\end{defi}
+
+We know that every exact form is closed. However, in general, not every closed form is exact. The extent to which this fails is given by the \emph{de Rham cohomology}.
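+
+The standard example to keep in mind is the ``angle form'' on the punctured plane.
+\begin{eg}
+  On $\R^2 \setminus \{0\}$, consider
+  \[
+    \omega = \frac{x\;\d y - y\;\d x}{x^2 + y^2}.
+  \]
+  A direct computation shows that $\d \omega = 0$, so $\omega$ is closed. However, $\omega$ is not exact: locally it is $\d \theta$ for a branch of the angle function $\theta$, but no such function exists on all of $\R^2 \setminus \{0\}$. (Once we can integrate forms, this will follow from the fact that the integral of $\omega$ around the unit circle is $2\pi \neq 0$.)
+\end{eg}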
+
+\begin{defi}[de Rham cohomology]\index{de Rham cohomology}
+ The \emph{$p$th de Rham cohomology} is given by the $\R$-vector space
+ \[
+ H^p_\dR (M) = \frac{\ker \d: \Omega^p(M) \to \Omega^{p + 1}(M)}{\im \d: \Omega^{p - 1}(M) \to \Omega^p(M)} = \frac{\text{closed forms}}{\text{exact forms}}.
+ \]
+ In particular, we have
+ \[
+ H^0_{\dR}(M) = \ker \d: \Omega^0(M) \to \Omega^1(M).
+ \]
+\end{defi}
+
+We could tautologically say that if $\d \omega = 0$, then $\omega$ is exact iff it vanishes in $H^p_{\dR}(M)$. But this is as useful as saying ``Let $S$ be the set of solutions to this differential equation. Then the differential equation has a solution iff $S$ is non-empty''. So we want to study the properties of $H^p_{\dR}$ and find ways of computing them.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Let $M$ have $k$ connected components. Then
+ \[
+ H_{\dR}^0(M) = \R^k.
+ \]
+ \item If $p > \dim M$, then $H^p_{\dR}(M) = 0$.
+ \item If $F \in C^\infty(M, N)$, then this induces a map $F^*: H^p_\dR(N) \to H^p_\dR(M)$ given by
+ \[
+ F^*[\omega] = [F^* \omega].
+ \]
+ \item $(F \circ G)^* = G^* \circ F^*$.
+    \item If $F: M \to N$ is a diffeomorphism, then $F^*: H^p_{\dR}(N) \to H^p_{\dR}(M)$ is an isomorphism.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We have
+ \begin{align*}
+ H_{\dR}^0(M) &= \{f \in C^\infty(M, \R): \d f = 0\} \\
+ &= \{\text{locally constant functions $f$}\}\\
+ &= \R^{\text{number of connected components}}.
+ \end{align*}
+ \item If $p > \dim M$, then all $p$-forms are trivial.
+ \item We first show that $F^* \omega$ indeed represents some member of $H^p_{\dR}(M)$. Let $[\omega] \in H^p_{\dR}(N)$. Then $\d \omega = 0$. So
+ \[
+ \d (F^* \omega) = F^*(\d \omega) = 0.
+ \]
+ So $[F^* \omega] \in H^p_\dR(M)$. So this map makes sense.
+
+ To see it is well-defined, if $[\omega] = [\omega']$, then $\omega - \omega' = \d \sigma$ for some $\sigma$. So $F^* \omega - F^* \omega' = \d (F^* \sigma)$. So $[F^* \omega] = [F^* \omega']$.
+ \item Follows from the corresponding fact for pullback of differential forms.
+ \item If $F^{-1}$ is an inverse to $F$, then $(F^{-1})^*$ is an inverse to $F^*$ by above.\qedhere
+ \end{enumerate}
+\end{proof}
+
+It turns out that de Rham cohomology satisfies a stronger property of being \emph{homotopy} invariant. To make sense of that, we need to define what it means to be homotopy invariant.
+\begin{defi}[Smooth homotopy]\index{smooth homotopy}\index{homotopic maps}
+ Let $F_0, F_1: M \to N$ be smooth maps. A \emph{smooth homotopy} from $F_0$ to $F_1$ is a smooth map $F: [0, 1] \times M \to N$ such that
+ \[
+ F_0(x) = F(0, x),\quad F_1(x) = F(1, x).
+ \]
+ If such a map exists, we say $F_0$ and $F_1$ are \emph{homotopic}.
+\end{defi}
+Note that here $F$ is defined on $[0, 1] \times M$, which is not a manifold. So we need to be slightly annoying and say that $F$ is smooth if it can be extended to a smooth function $I \times M \to N$ for $I \supseteq [0, 1]$ open.
+
+We can now state what it means for the de Rham cohomology to be homotopy invariant.
+\begin{thm}[Homotopy invariance]\index{de Rham cohomology!deformation invariance}\index{deformation invariance of de Rham cohomology}\index{de Rham cohomology!homotopy invariance}\index{homotopy invariance of de Rham cohomology}
+  Let $F_0, F_1: M \to N$ be homotopic maps. Then $F_0^* = F_1^*: H^p_{\dR}(N) \to H_{\dR}^p(M)$.
+\end{thm}
+
+\begin{proof}
+ Let $F: [0, 1] \times M \to N$ be the homotopy, and
+ \[
+ F_t(x) = F(t, x).
+ \]
+  We denote the exterior derivative on $M$ by $\d_M$ (and similarly $\d_N$), and that on $[0, 1] \times M$ by $\d$.
+
+ Let $\omega \in \Omega^p(N)$ be such that $\d_N \omega = 0$. We let $t$ be the coordinate on $[0, 1]$. We write
+ \[
+ F^* \omega = \sigma + \d t \wedge \gamma,
+ \]
+ where $\sigma = \sigma(t) \in \Omega^p(M)$ and $\gamma = \gamma(t) \in \Omega^{p - 1}(M)$. We claim that
+ \[
+ \sigma(t) = F_t^* \omega.
+ \]
+ Indeed, we let $\iota: \{t\} \times M \to [0, 1] \times M$ be the inclusion. Then we have
+ \begin{align*}
+ F_t^* \omega |_{\{t\} \times M} &= (F \circ \iota)^* \omega = \iota^* F^* \omega \\
+ &= \iota^* (\sigma + \d t \wedge \gamma)\\
+ &= \iota^* \sigma + \iota^* \d t \wedge \iota^* \gamma\\
+ &= \iota^* \sigma,
+ \end{align*}
+ using the fact that $\iota^* \d t = 0$. As $\d_N \omega = 0$, we have
+ \begin{align*}
+ 0 &= F^* \d_N \omega \\
+ &= \d F^* \omega \\
+ &= \d(\sigma + \d t \wedge \gamma)\\
+ &= \d_M(\sigma) + (-1)^p\frac{\partial \sigma}{\partial t} \wedge \d t + \d t \wedge \d_M \gamma\\
+ &= \d_M \sigma + (-1)^p \frac{\partial \sigma}{\partial t} \wedge \d t + (-1)^{p - 1} \d_M \gamma \wedge \d t.
+ \end{align*}
+ Looking at the $\d t$ components, we have
+ \[
+ \frac{\partial \sigma}{\partial t} = \d_M \gamma.
+ \]
+ So we have
+ \[
+ F_1^* \omega - F_0^* \omega = \sigma(1) - \sigma(0) = \int_0^1 \frac{\partial \sigma}{\partial t} \;\d t = \int_0^1 \d_M \gamma \;\d t = \d_M \int_0^1 \gamma(t) \, \d t.
+ \]
+ So we know that
+ \[
+ [F_1^* \omega] = [F_0^* \omega].
+ \]
+ So done.
+\end{proof}
+
+\begin{eg}
+ Suppose $U \subseteq \R^n$ is an open ``\term{star-shaped}'' subset, i.e.\ there is some $x_0 \in U$ such that for any $x \in U$ and $t \in [0, 1]$, we have
+ \[
+ tx + (1 - t)x_0 \in U.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (1, 0) -- (2, 2) -- (0, 1) -- (-2, 2) -- (-1, 0) -- (-2, -2) -- (0, -1) -- (2, -2) -- (1, 0);
+ \node [circ] {};
+ \node [below] {$x_0$};
+ \draw (0, 0) -- (1, 1.1) node [circ] {} node [right] {$x$};
+ \end{tikzpicture}
+ \end{center}
+ We define $F_t: U \to U$ by
+ \[
+ F_t(x) = tx + (1 - t) x_0.
+ \]
+  Then $F$ is a smooth homotopy from the constant map $F_0$ with value $x_0$ to the identity map $F_1$. Clearly $F_1^*$ is the identity map, while $F_0^*$ is the zero map on $H^p_{\dR}(U)$ for all $p \geq 1$, since $F_0$ factors through a point. So we have
+ \[
+ H_{\dR}^p(U) =
+ \begin{cases}
+ 0 & p \geq 1\\
+ \R & p = 0
+ \end{cases}.
+ \]
+\end{eg}
+
+\begin{cor}[Poincar\'e lemma]\index{Poincar\'e lemma}
+  Let $U \subseteq \R^n$ be open and star-shaped, and let $p \geq 1$. Suppose $\omega \in \Omega^p(U)$ is such that $\d \omega = 0$. Then there is some $\sigma \in \Omega^{p - 1}(U)$ such that $\omega = \d \sigma$.
+\end{cor}
+
+\begin{proof}
+ $H^p_{\dR}(U) = 0$ for $p \geq 1$.
+\end{proof}
+
+More generally, we have the following notion.
+
+\begin{defi}[Smooth homotopy equivalence]\index{smooth homotopy equivalence}
+ We say two manifolds $M, N$ are \emph{smoothly homotopy equivalent} if there are smooth maps $F: M \to N$ and $G: N \to M$ such that both $F \circ G$ and $G \circ F$ are homotopic to the identity.
+\end{defi}
+
+\begin{cor}
+ If $M$ and $N$ are smoothly homotopy equivalent, then $H_{\dR}^p(M) \cong H_{\dR}^p(N)$.
+\end{cor}
+
+Note that by approximation, it can be shown that if $M$ and $N$ are homotopy equivalent as topological spaces (i.e.\ the same definition where we drop the word ``smooth''), then they are in fact smoothly homotopy equivalent. So the de Rham cohomology depends only on the homotopy type of the underlying topological space.
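+
+For example, punctured Euclidean space is smoothly homotopy equivalent to a sphere.
+\begin{eg}
+  The homotopy $F(t, x) = (1 - t)x + t x/|x|$ (which avoids $0$, as each $F(t, x)$ is a positive multiple of $x$) shows that the retraction $x \mapsto x/|x|$ and the inclusion $S^{n - 1} \hookrightarrow \R^n \setminus \{0\}$ are inverse up to homotopy. So
+  \[
+    H^p_{\dR}(\R^n \setminus \{0\}) \cong H^p_{\dR}(S^{n - 1}).
+  \]
+\end{eg}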
+
+\subsection{Homological algebra and Mayer-Vietoris theorem}
+The main theorem we will use for computing de Rham cohomology is the Mayer-Vietoris theorem. Proving it involves quite a lot of setting up and hard work. In particular, we need to define some notions from homological algebra to even \emph{state} the Mayer-Vietoris theorem.
+
+The actual proof will be divided into two parts. The first part is a purely algebraic result known as the \emph{snake lemma}, and the second part is a differential-geometric part that proves that we satisfy the hypothesis of the snake lemma.
+
+We will not prove the snake lemma; its proof can be found in standard algebraic topology texts (perhaps with the arrows the wrong way round).
+
+We start with some definitions.
+\begin{defi}[Cochain complex and exact sequence]\index{cochain complex}\index{exact sequence}
+ A sequence of vector spaces and linear maps
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r] & V^{p - 1} \ar[r, "\d_{p - 1}"] & V^p \ar[r, "\d_p"] & V^{p + 1} \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ is a \emph{cochain complex} if $\d_p \circ \d_{p - 1} = 0$ for all $p \in \Z$. Usually we have $V^p = 0$ for $p < 0$ and we do not write them out. Keeping these negative degree $V^p$ rather than throwing them away completely helps us state our theorems more nicely, so that we don't have to view $V^0$ as a special case when we state our theorems.
+
+ It is \emph{exact at $p$} if $\ker \d_p = \im \d_{p - 1}$, and \emph{exact} if it is exact at every $p$.
+\end{defi}
+There are, of course, chain complexes as well, but we will not need them for this course.
+
+\begin{eg}
+ The \term{de Rham complex}
+ \[
+ \begin{tikzcd}
+ \Omega^0(M) \ar[r, "\d"] & \Omega^1(M) \ar[r, "\d"] & \Omega^2(M) \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ is a cochain complex as $\d^2 = 0$. It is exact at $p$ iff $H_{\dR}^p(M) = \{0\}$.
+\end{eg}
+
+\begin{eg}
+  If we have an exact sequence of finite-dimensional vector spaces $V^p$, all but finitely many of which are zero, then
+ \[
+ \sum_p (-1)^p \dim V^p = 0.
+ \]
+\end{eg}
+
+\begin{defi}[Cohomology]\index{cohomology}
+ Let
+ \[
+ V^{\Cdot} =
+ \begin{tikzcd}
+ \cdots \ar[r] & V^{p - 1} \ar[r, "\d_{p - 1}"] & V^p \ar[r, "\d_p"] & V^{p + 1} \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ be a cochain complex. The \emph{cohomology} of $V^{\Cdot}$ at $p$ is given by
+ \[
+ H^p(V^{\Cdot}) = \frac{\ker \d_p}{\im \d_{p - 1}}.
+ \]
+\end{defi}
+
+\begin{eg}
+ The cohomology of the de Rham complex is the de Rham cohomology.
+\end{eg}
+
+We can define maps between cochain complexes:
+\begin{defi}[Cochain map]\index{cochain map}
+ Let $V^{\Cdot}$ and $W^\Cdot$ be cochain complexes. A \emph{cochain map} $V^{\Cdot} \to W^{\Cdot}$ is a collection of maps $f^p: V^p \to W^p$ such that the following diagram commutes for all $p$:
+ \[
+ \begin{tikzcd}
+ V^p \ar[r, "f^p"] \ar[d, "\d_p"] & W^p \ar[d, "\d_p"]\\
+ V^{p + 1} \ar[r, "f^{p + 1}"] & W^{p + 1}
+ \end{tikzcd}
+ \]
+\end{defi}
+
+\begin{prop}
+ A cochain map induces a well-defined homomorphism on the cohomology groups.
+\end{prop}
+
+\begin{defi}[Short exact sequence]
+ A \term{short exact sequence} is an exact sequence of the form
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & V^1 \ar[r, "\alpha"] & V^2 \ar[r, "\beta"] & V^3 \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ This implies that $\alpha$ is injective, $\beta$ is surjective, and $\im (\alpha) = \ker(\beta)$. By the rank-nullity theorem, we know
+ \[
+ \dim V^2 = \rank(\beta) + \mathrm{null}(\beta) = \dim V^3 + \dim V^1.
+ \]
+\end{defi}
+
+We can now state the main technical lemma, which we shall not prove.
+\begin{thm}[Snake lemma]\index{snake lemma}
+ Suppose we have a short exact sequence of complexes
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A^{\Cdot} \ar [r, "i"] & B^{\Cdot} \ar [r, "q"] & C^{\Cdot} \ar[r] & 0
+ \end{tikzcd},
+ \]
+ i.e.\ the $i, q$ are cochain maps and we have a short exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A^p \ar [r, "i^p"] & B^p \ar [r, "q^p"] & C^p \ar[r] & 0
+ \end{tikzcd},
+ \]
+ for each $p$.
+
+ Then there are maps
+ \[
+ \delta: H^p(C^{\Cdot}) \to H^{p + 1}(A^{\Cdot})
+ \]
+ such that there is a long exact sequence
+ \[
+ \begin{tikzcd}
+ \cdots \ar [r] & H^p(A^{\Cdot}) \ar[r, "i^*"] & H^p(B^{\Cdot}) \ar[r, "q^*"] & H^p(C^{\Cdot}) \ar[out=0, in=180, looseness=2, overlay, dll, "\delta"']\\
+ & H^{p + 1}(A^{\Cdot}) \ar[r, "i^*"] & H^{p + 1}(B^{\Cdot}) \ar[r, "q^*"] & H^{p + 1}(C^{\Cdot}) \ar [r] & \cdots
+ \end{tikzcd}.
+ \]
+\end{thm}
+
+Using this, we can prove the Mayer-Vietoris theorem.
+\begin{thm}[Mayer-Vietoris theorem]\index{Mayer-Vietoris sequence}
+ Let $M$ be a manifold, and $M = U \cup V$, where $U, V$ are open. We denote the inclusion maps as follows:
+ \[
+ \begin{tikzcd}
+ U \cap V \ar[r, "i_1", hook] \ar[d, "i_2", hook] & U \ar[d, hook, "j_1"]\\
+ V \ar[r, hook, "j_2"] & M
+ \end{tikzcd}
+ \]
+ Then there exists a natural linear map
+ \[
+ \delta: H^p_\dR (U \cap V) \to H_{\dR}^{p + 1}(M)
+ \]
+ such that the following sequence is exact:
+ \[
+ \begin{tikzcd}
+ H^p_{\dR}(M) \ar[r, "j_1^* \oplus j_2^*"] & H^p_{\dR}(U) \oplus H^p_{\dR}(V) \ar[r, "i_1^* - i_2^*"] & H_\dR^p(U \cap V)\ar[out=0, in=180, looseness=2, overlay, lld, "\delta"]\\
+ H_\dR^{p+1}(M) \ar[r, "j_1^* \oplus j_2^*"] & H_\dR^{p+1}(U) \oplus H_\dR^{p+1}(V) \ar[r, "i_1^* - i_2^*"] & \cdots
+ \end{tikzcd}
+ \]
+\end{thm}
+
+Before we prove the theorem, we do a simple example.
+\begin{eg}
+ Consider $M = S^1$. We can cut the circle up:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=1];
+ \draw [red] (1.105, -0.4673) arc(-22.9:202.9:1.2);
+ \node [red, above] at (0, 1.2) {$U$};
+
+ \draw [blue] (1.279, 0.545) arc(22.9:-202.9:1.4);
+ \node [blue, below] at (0, -1.4) {$V$};
+
+ \node [circ] at (1, 0) {};
+ \node [circ] at (-1, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ Here we have
+ \begin{align*}
+ S^1 &= \{(x, y): x^2 + y^2 = 1\}\\
+ U &= S^1 \cap \{y > -\varepsilon\}\\
+ V &= S^1 \cap \{y < \varepsilon\}.
+ \end{align*}
+ As $U, V$ are diffeomorphic to intervals, hence contractible, and $U \cap V$ is diffeomorphic to the disjoint union of two intervals, we know their de Rham cohomology.
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & H^0_{\dR}(S^1) \ar[r] & H^0_{\dR}(U) \oplus H^0_{\dR}(V) \ar[r] & H_\dR^0(U \cap V)\ar[out=0, in=180, looseness=2, overlay, lld]\\
+ & H_\dR^1(S^1) \ar[r] & H_\dR^1(U) \oplus H_\dR^1(V) \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ We can fill in the things we already know to get
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \R \ar[r] & \R \oplus \R \ar[r] & \R \oplus \R\ar[out=0, in=180, looseness=2, overlay, lld]\\
+ & H_\dR^1(S^1) \ar[r] & 0 \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+  Since the sequence is exact, the alternating sum of the dimensions vanishes, i.e.\ $1 - 2 + 2 - \dim H^1_{\dR}(S^1) = 0$. So we know that
+ \[
+ \dim H^1_{\dR}(S^1) = 1.
+ \]
+ So
+ \[
+ H^1_{\dR}(S^1) \cong \R.
+ \]
+\end{eg}
+
+Now we prove Mayer-Vietoris.
+
+\begin{proof}[Proof of Mayer-Vietoris]
+ By the snake lemma, it suffices to prove that the following sequence is exact for all $p$:
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \Omega^p(U \cup V) \ar[r, "j_1^* \oplus j_2^*"] & \Omega^p(U) \oplus \Omega^p(V) \ar[r, "i_1^* - i_2^*"] & \Omega^p(U \cap V) \ar[r] & 0
+ \end{tikzcd}
+ \]
+  It is clear that the two maps compose to $0$, and the first map is injective. The sequence is also exact at the middle: if $i_1^* \omega_U = i_2^* \omega_V$, i.e.\ $\omega_U \in \Omega^p(U)$ and $\omega_V \in \Omega^p(V)$ agree on $U \cap V$, then they glue to give a form on $U \cup V$ restricting to $\omega_U$ and $\omega_V$. So it remains to show that $i_1^* - i_2^*$ is surjective.
+
+  Indeed, let $\{\varphi_U, \varphi_V\}$ be a partition of unity subordinate to $\{U, V\}$. Let $\omega \in \Omega^p(U \cap V)$. We set $\sigma_U \in \Omega^p(U)$ to be
+ \[
+ \sigma_U =
+ \begin{cases}
+ \varphi_V \omega & \text{on }U \cap V\\
+ 0 & \text{on }U \setminus \supp \varphi_V
+ \end{cases}.
+ \]
+ Similarly, we define $\sigma_V \in \Omega^p(V)$ by
+ \[
+ \sigma_V =
+ \begin{cases}
+ -\varphi_U \omega & \text{on }U \cap V\\
+ 0 & \text{on }V \setminus \supp \varphi_U
+ \end{cases}.
+ \]
+ Then we have
+ \[
+ i_1^* \sigma_U - i_2^* \sigma_V = (\varphi_V \omega + \varphi_U \omega)|_{U \cap V} = \omega.
+ \]
+ So $i_1^* - i_2^*$ is surjective.
+\end{proof}
+
+\section{Integration}
+As promised, we will be able to integrate differential forms on manifolds. However, there is a slight catch. We said that differential forms give us the signed volume of an infinitesimal parallelepiped, and we can integrate these infinitesimal volumes up to get the whole volume of the manifold. However, there is no canonical choice of the sign of the volume, so we do not, in general, get a well-defined volume.
+
+In order to fix this issue, our manifold needs to have an \emph{orientation}.
+
+\subsection{Orientation}
+We start with the notion of an orientation of a vector space. After we have one, we can define an orientation of a manifold to be a smooth choice of orientation for each tangent space.
+
+Informally, an orientation on a vector space $V$ is a choice of a collection of ordered bases that we declare to be ``oriented''. If $(e_1, \cdots, e_n)$ is an oriented basis, then changing the sign of one of the $e_i$ changes orientation, while scaling by a positive multiple does not. Similarly, swapping two elements in the basis will induce a change in orientation.
+
+To encode this information, we come up with some alternating form $\omega \in \Lambda^n(V^*)$. We can then say a basis $e_1, \cdots, e_n$ is oriented if $\omega(e_1, \cdots, e_n)$ is positive.
+
+\begin{defi}[Orientation of vector space]\index{orientation!vector space}
+ Let $V$ be a vector space with $\dim V = n$. An \emph{orientation} is an equivalence class of elements $\omega \in\Lambda^n (V^*)$, where we say $\omega \sim \omega'$ iff $\omega = \lambda \omega'$ for some $\lambda > 0$. A basis $(e_1, \cdots, e_n)$ is \emph{oriented} if
+ \[
+ \omega(e_1, \cdots, e_n) > 0.
+ \]
+ By convention, if $V = \{0\}$, an orientation is just a choice of number in $\{\pm 1\}$.
+\end{defi}
+
+Suppose we have picked an oriented basis $e_1, \cdots, e_n$. If we have any other basis $\tilde{e}_1, \cdots, \tilde{e}_n$, we write
+\[
+ \tilde{e}_i = \sum_j B_{ij} e_j.
+\]
+Then we have
+\[
+ \omega (\tilde{e}_1, \cdots, \tilde{e}_n) = \det B\; \omega(e_1, \cdots, e_n).
+\]
+So $\tilde{e}_1, \cdots, \tilde{e}_n$ is oriented iff $\det B > 0$.
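+
+For example, take $V = \R^2$ with $\omega = e^1 \wedge e^2$, where $(e^1, e^2)$ is the dual basis to the standard basis $(e_1, e_2)$. Then $\omega(e_1, e_2) = 1 > 0$, so $(e_1, e_2)$ is oriented, while
+\[
+ \omega(e_2, e_1) = -1 < 0,
+\]
+so swapping the two vectors reverses the orientation, in accordance with the fact that the change-of-basis matrix $B$ for the swap has $\det B = -1$.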
+
+We now generalize this to manifolds, where we try to orient the tangent bundle smoothly.
+
+\begin{defi}[Orientation of a manifold]\index{orientation!manifold}\index{manifold!orientation}
+ An \emph{orientation} of a manifold $M$ is defined to be an equivalence class of elements $\omega \in \Omega^n(M)$ that are nowhere vanishing, under the equivalence relation $ \omega \sim \omega'$ if there is some smooth $f: M \to \R_{> 0}$ such that $\omega = f \omega'$.
+\end{defi}
+
+\begin{defi}[Orientable manifold]\index{orientable manifold}\index{manifold!orientable}
+ A manifold is \emph{orientable} if it has some orientation.
+\end{defi}
+
+If $M$ is a connected, orientable manifold, then it has precisely two possible orientations.
+
+\begin{defi}[Oriented manifold]\index{oriented manifold}
+ An \emph{oriented manifold} is a manifold with a choice of orientation.
+\end{defi}
+
+\begin{defi}[Oriented coordinates]\index{oriented coordinates}
+ Let $M$ be an oriented manifold. We say coordinates $x_1, \cdots, x_n$ on a chart $U$ are \emph{oriented coordinates} if
+ \[
+ \left.\frac{\partial}{\partial x_1}\right|_p, \cdots, \left.\frac{\partial}{\partial x_n}\right|_p
+ \]
+ is an oriented basis for $T_p M$ for all $p \in U$.
+\end{defi}
+Note that we can always find enough oriented coordinates: given any connected chart with coordinates $x_1, \cdots, x_n$, either these coordinates are oriented, or $-x_1, x_2, \cdots, x_n$ are. So any oriented $M$ is covered by oriented charts.
+
+Now by the previous discussion, we know that if $x_1, \cdots, x_n$ and $y_1, \cdots, y_n$ are oriented charts, then the transition maps for the tangent space all have positive determinant.
+
+\begin{eg}
+ $\R^n$ is always assumed to have the standard orientation given by $\d x_1 \wedge \cdots \wedge \d x_n$.
+\end{eg}
+
+\begin{defi}[Orientation-preserving diffeomorphism]\index{orientation!-preserving diffeomorphism}\index{diffeomorphism!orientation preserving}
+ Let $M, N$ be oriented manifolds, and $F \in C^\infty(M, N)$ be a diffeomorphism. We say $F$ \emph{preserves orientation} if $\D F|_p: T_p M \to T_{F(p)}N$ takes an oriented basis to an oriented basis.
+
+ Alternatively, this says the pullback of the orientation on $N$ is the orientation on $M$ (up to equivalence).
+\end{defi}
+
+\subsection{Integration}
+The idea is that to define integration, we first understand how to integrate on $\R^n$, and then patch things up using partitions of unity.
+
+We are going to allow ourselves to integrate on rather general domains.
+\begin{defi}[Domain of integration]\index{domain of integration}
+ Let $D \subseteq \R^n$. We say $D$ is a \emph{domain of integration} if $D$ is bounded and $\partial D$ has measure zero.
+\end{defi}
+
+Since $D$ can be an arbitrary subset, we define an $n$-form on $D$ to be some $\omega \in \Omega^n(U)$ for some open $U$ containing $D$.
+
+\begin{defi}[Integration on $\R^n$]\index{integration!on $\R^n$}
+ Let $D$ be a compact domain of integration, and
+ \[
+ \omega = f\; \d x_1 \wedge \cdots \wedge \d x_n
+ \]
+ be an $n$-form on $D$. Then we define
+ \[
+ \int_D \omega = \int_D f(x_1, \cdots, x_n) \;\d x_1 \cdots \d x_n.
+ \]
+ In general, let $U \subseteq \R^n$ be open and let $\omega \in \Omega^n(U)$ have compact support. We define
+ \[
+ \int_U \omega = \int_D \omega
+ \]
+ for some compact domain of integration $D \subseteq U$ containing $\supp \omega$.
+\end{defi}
+Note that we do not directly say we integrate it on $\supp \omega$, since $\supp \omega$ need not have a nice boundary.
+
+Now if we want to integrate on a manifold, we need to patch things up, and to do so, we need to know how these things behave when we change coordinates.
+
+\begin{defi}[Smooth function]\index{smooth function}
+ Let $D \subseteq \R^n$ and $f: D \to \R^m$. We say $f$ is \emph{smooth} if it is a restriction of some smooth function $\tilde{f}: U \to \R^m$, where $U \supseteq D$ is open.
+\end{defi}
+
+\begin{lemma}
+ Let $F: D \to E$ be a smooth map between domains of integration in $\R^n$, and assume that $F|_{\mathring{D}}: \mathring{D} \to \mathring{E}$ is an orientation-preserving diffeomorphism. Then
+ \[
+ \int_E \omega = \int_D F^* \omega.
+ \]
+\end{lemma}
+This is exactly what we want.
+
+\begin{proof}
+ Suppose we have coordinates $x_1, \cdots, x_n$ on $D$ and $y_1, \cdots, y_n$ on $E$. Write
+ \[
+ \omega = f\;\d y_1 \wedge \cdots \wedge \d y_n.
+ \]
+ Then we have
+ \begin{align*}
+ \int_E \omega &= \int_E f\;\d y_1 \cdots \d y_n\\
+ &= \int_D (f \circ F)\, |\det \D F|\;\d x_1 \cdots \d x_n\\
+ &= \int_D (f \circ F) \det \D F \; \d x_1 \cdots \d x_n\\
+ &= \int_D F^* \omega.
+ \end{align*}
+ Here we used the fact that $|\det \D F| = \det \D F$ because $F$ is orientation-preserving.
+\end{proof}
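+
+For example, consider polar coordinates: let $D = [1, 2] \times [0, 2\pi]$, let $E$ be the closed annulus $\{1 \leq x^2 + y^2 \leq 4\}$, and let $F(r, \theta) = (r\cos \theta, r \sin \theta)$. Strictly, $F$ restricted to $(1, 2) \times (0, 2\pi)$ is an orientation-preserving diffeomorphism only onto the annulus minus a radial slit, which has measure zero, but the computation still illustrates the formula. For $\omega = \d x \wedge \d y$ we have
+\[
+ F^* \omega = \det \D F \;\d r \wedge \d \theta = r \;\d r \wedge \d \theta.
+\]
+Indeed,
+\[
+ \int_D r \;\d r\; \d \theta = 2\pi \cdot \frac{2^2 - 1^2}{2} = 3\pi,
+\]
+which is the area of the annulus.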
+
+We can now define integration over manifolds.
+
+\begin{defi}[Integration on manifolds]\index{integration!manifolds}
+ Let $M$ be an oriented manifold. Let $\omega \in \Omega^n(M)$. Suppose that $\supp(\omega)$ is a compact subset of some oriented chart $(U, \varphi)$. We set
+ \[
+ \int_M \omega = \int_{\varphi(U)} (\varphi^{-1})^* \omega.
+ \]
+ By the previous lemma, this does not depend on the oriented chart $(U, \varphi)$.
+
+ If $\omega \in \Omega^n(M)$ is a general form with compact support, we do the following: cover the support by finitely many oriented charts $\{U_\alpha\}_{\alpha = 1, \ldots, m}$. Let $\{\chi_\alpha\}$ be a partition of unity subordinate to $\{U_\alpha\}$. We then set
+ \[
+ \int_M \omega = \sum_\alpha \int_{U_\alpha} \chi_\alpha \omega.
+ \]
+\end{defi}
+
+It is clear that we have
+\begin{lemma}
+ This is well-defined, i.e.\ it is independent of cover and partition of unity.
+\end{lemma}
+We will not bother to go through the technicalities of proving this properly.
+
+Note that it is possible to define this for non-smooth forms, forms that are not everywhere defined, forms without compact support, etc., but we will not do so here.
+
+Theoretically, our definition is perfectly fine and easy to work with. However, it is absolutely useless for computations, and there is no hope of evaluating such an expression directly.
+
+Now how would we \emph{normally} integrate things? In IA Vector Calculus, we probably did something like this --- if we want to integrate something over a sphere, we cut the sphere up into the Northern and Southern hemisphere. We have coordinates for each of the hemispheres, so we integrate each hemisphere separately, and then add the result up.
+
+This is all well, except we have actually missed out the equator in this process. But that doesn't really matter, because the equator has measure zero, and doesn't contribute to the integral.
+
+We now try to formalize our above approach. The below definition is not standard:
+\begin{defi}[Parametrization]\index{parametrization}
+ Let $M$ be either an oriented manifold of dimension $n$, or a domain of integration in $\R^n$. By a \emph{parametrization} of $M$ we mean a decomposition
+ \[
+ M = S_1 \cup \cdots \cup S_m,
+ \]
+ with smooth maps $F_i: D_i \to S_i$ for $i = 1, \cdots, m$, where each $D_i$ is a compact domain of integration, such that
+ \begin{enumerate}
+ \item $F_i|_{\mathring{D}_i}: \mathring{D}_i \to \mathring{S}_i$ is an orientation-preserving diffeomorphism
+ \item $\partial S_i$ has measure zero (if $M$ is a manifold, this means $\varphi(\partial S_i \cap U)$ has measure zero for all charts $(U, \varphi)$).
+ \item For $i \not= j$, $S_i$ intersects $S_j$ only in their common boundary.
+ \end{enumerate}
+\end{defi}
+
+\begin{thm}
+ Given a parametrization $\{S_i\}$ of $M$ and an $\omega \in \Omega^n(M)$ with compact support, we have
+ \[
+ \int_M \omega = \sum_i \int_{D_i} F_i^* \omega.
+ \]
+\end{thm}
+
+\begin{proof}
+% We know $M$ is covered by oriented charts $(U_\alpha, \varphi_\alpha)$ such that $\varphi$ gives a smooth map $\varphi: \bar{U}_\alpha \to \overline{B_1(0)}$. We can then pick this cover and a partition of unity subordinate to this cover. Then if we define $\int_M \omega$ this way, it suffices to deal with the case where $\supp(\omega)$ lies in a single chart $(U, \varphi)$. Then identifying $\bar{U}$ with $\overline{B_1(0)}$, it suffices to consider the case where $M = \overline{B_1(0)}$. Then the result is obvious.
+
+ By using partitions of unity, we may consider the case where $\omega$ has support in a single chart, and thus we may wlog assume we are working on $\R^n$, and then the result is obvious.
+\end{proof}
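+
+For example, we can parametrize $S^1 \subseteq \R^2$ by the single map $F: [0, 2\pi] \to S^1$ given by $F(t) = (\cos t, \sin t)$, with the orientation on $S^1$ for which $F$ is orientation-preserving. If $\omega$ is the restriction to $S^1$ of $x \;\d y - y \;\d x$, then
+\[
+ F^* \omega = \cos t \;\d (\sin t) - \sin t \;\d(\cos t) = (\cos^2 t + \sin^2 t) \;\d t = \d t,
+\]
+so the theorem gives
+\[
+ \int_{S^1} \omega = \int_0^{2\pi} \d t = 2\pi.
+\]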
+
+There is a problem --- in all our lives, we've been integrating functions, not forms. If we have a function $f: \R \to \R$, then we can take the integral
+\[
+ \int f\;\d x.
+\]
+Now of course, we are not actually integrating $f$. We are integrating the differential form $f\;\d x$. The reason it looks like we are integrating functions is that we have a background form $\d x$. So if we have a manifold $M$ with a ``background'' $n$-form $\omega \in \Omega^n (M)$, then we can integrate $f \in C^\infty(M, \R)$ by
+\[
+ \int_M f \omega.
+\]
+In general, a manifold does not come with such a canonical background form. However, in some cases, it does.
+\begin{lemma}
+ Let $M$ be an oriented manifold, and $g$ a Riemannian metric on $M$. Then there is a unique $\omega \in \Omega^n(M)$ such that for all $p$, if $e_1, \cdots, e_n$ is an oriented orthonormal basis of $T_pM$, then
+ \[
+ \omega(e_1, \cdots, e_n) = 1.
+ \]
+ We call this the \term{Riemannian volume form}, written \term{$\d V_g$}.
+\end{lemma}
+Note that $\d V_g$ is a notation. It is not the exterior derivative of some mysterious object $V_g$.
+
+\begin{proof}
+ Uniqueness is clear, since if $\omega'$ is another, then $\omega_p = \lambda \omega'_p$ for some $\lambda$, and evaluating on an orthonormal basis shows that $\lambda = 1$.
+
+ To see existence, let $\sigma$ be any nowhere vanishing $n$-form giving the orientation of $M$. On a small set $U$, pick a frame $s_1, \cdots, s_n$ for $TM|_U$ and apply the Gram-Schmidt process to obtain an orthonormal frame $e_1, \cdots, e_n$, which we may wlog assume is oriented. Then we set
+ \[
+ f = \sigma(e_1, \cdots, e_n),
+ \]
+ which is non-vanishing because $\sigma$ is nowhere vanishing. Then set
+ \[
+ \omega = \frac{\sigma}{f}.
+ \]
+ This proves existence locally, and can be patched together globally by uniqueness.
+\end{proof}
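+
+In oriented local coordinates $x_1, \cdots, x_n$, the volume form has the familiar expression
+\[
+ \d V_g = \sqrt{\det(g_{ij})}\;\d x_1 \wedge \cdots \wedge \d x_n,\quad g_{ij} = g\left(\frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j}\right).
+\]
+Indeed, if $e_1, \cdots, e_n$ is an oriented orthonormal frame and $\frac{\partial}{\partial x_i} = \sum_k A_{ki} e_k$, then $g_{ij} = \sum_k A_{ki} A_{kj}$, so $\det (g_{ij}) = (\det A)^2$, and
+\[
+ \d V_g\left(\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n}\right) = \det A = \sqrt{\det(g_{ij})},
+\]
+where $\det A > 0$ because both frames are oriented.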
+\subsection{Stokes' theorem}
+Recall from, say, IA Vector Calculus that Stokes' theorem relates an integral on a manifold to an integral on its boundary. However, our manifolds do not have boundaries! So we can't talk about Stokes' theorem! So we now want to define what it means to be a manifold with boundary.
+
+\begin{defi}[Manifold with boundary]\index{manifold!with boundary}\index{chart!with boundary}\index{atlas!with boundary}
+ Let
+ \[
+ \H^n = \{(x_1, \cdots, x_n)\in \R^n: x_n \geq 0\}.
+ \]
+ A \emph{chart-with-boundary} on a set $M$ is a bijection $\varphi: U \to \varphi(U)$ for some $U \subseteq M$ such that $\varphi(U) \subseteq \H^n$ is open. Note that this image may or may not hit the boundary of $\H^n$. So a ``normal'' chart is also a chart with boundary.
+
+ An \emph{atlas-with-boundary} on $M$ is a cover by charts-with-boundary $(U_\alpha, \varphi_\alpha)$ such that the transition maps
+ \[
+ \varphi_\beta \circ \varphi_\alpha^{-1}: \varphi_\alpha(U_\alpha \cap U_\beta) \to \varphi_\beta(U_\alpha \cap U_\beta)
+ \]
+ are smooth (in the usual sense) for all $\alpha, \beta$.
+
+ A \emph{manifold-with-boundary} is a set $M$ with an (equivalence class of) atlas with boundary whose induced topology is Hausdorff and second-countable.
+\end{defi}
+
+Note that a manifold with boundary is not in general a manifold, but every manifold is a manifold with boundary (with empty boundary). We will often be lazy and drop the ``with boundary'' descriptions.
+
+\begin{defi}[Boundary point]\index{boundary point}\index{$\partial M$}
+ If $M$ is a manifold with boundary and $p \in M$, then we say $p$ is a \emph{boundary point} if $\varphi(p) \in \partial \H^n$ for some (hence any) chart-with-boundary $(U, \varphi)$ containing $p$. We let $\partial M$ be the set of boundary points and $\Int(M) = M \setminus \partial M$.
+\end{defi}
+
+Note that these are not the topological notions of boundary and interior.
+
+\begin{prop}
+ Let $M$ be a manifold with boundary. Then $\Int(M)$ and $\partial M$ are naturally manifolds, with
+ \[
+ \dim \partial M = \dim \Int M - 1.
+ \]
+\end{prop}
+
+\begin{eg}
+ The solid ball $\overline{B_1(0)}$ is a manifold with boundary, whose interior is $B_1(0)$ and boundary is $S^{n - 1}$.
+\end{eg}
+
+Note that the product of manifolds with boundary is not in general a manifold with boundary. For example, the interval $[0, 1]$ is a manifold with boundary, but $[0, 1]^2$ has \emph{corners}. This is bad. We can develop the theory of manifolds with corners, but that is more subtle. We will not talk about them.
+
+Everything we did for manifolds can be done for manifolds with boundary, e.g.\ smooth functions, tangent spaces, tangent bundles etc. Note in particular the definition of the tangent space as derivations still works word-for-word.
+
+\begin{lemma}
+ Let $p \in \partial M$, say $p \in U \subseteq M$ where $(U, \varphi)$ is a chart (with boundary). Then
+ \[
+ \left.\frac{\partial }{\partial x_1}\right|_p, \cdots, \left.\frac{\partial}{\partial x_n}\right|_p
+ \]
+ is a basis for $T_p M$. In particular, $\dim T_p M = n$.
+\end{lemma}
+
+\begin{proof}
+ Since this is a local thing, it suffices to prove it for $M = \H^n$. We write $C^\infty(\H^n, \R)$ for the functions $f: \H^n \to \R$ that extend smoothly to an open neighbourhood of $\H^n$. We fix $a \in \partial \H^n$. Then by definition, we have
+ \[
+ T_a\H^n = \Der_a(C^\infty(\H^n, \R)).
+ \]
+ We let $i_*: T_a\H^n \to T_a \R^n$ be given by
+ \[
+ i_*(X) (g) = X(g|_{\H^n})
+ \]
+ We claim that $i_*$ is an isomorphism. For injectivity, suppose $i_*(X) = 0$. If $f \in C^\infty(\H^n)$, then $f$ extends to a smooth $g$ on some neighbourhood $U$ of $\H^n$. Then
+ \[
+ X(f) = X(g|_{\H^n}) = i_*(X)(g) = 0.
+ \]
+ So $X(f) = 0$ for all $f$. Then $X = 0$. So $i_*$ is injective.
+
+ To see surjectivity, let $Y \in T_a \R^n$, and let $X \in T_a\H^n$ be defined by
+ \[
+ X(f) = Y(g),
+ \]
+ where $g \in C^\infty(\H^n, \R)$ is any extension of $f$ to $U$. To see this is well-defined, we let
+ \[
+ Y = \sum_{i=1}^n \alpha_i \left.\frac{\partial}{\partial x_i}\right|_a.
+ \]
+ Then
+ \[
+ Y(g) = \sum_{i=1}^n \alpha_i \frac{\partial g}{\partial x_i}(a),
+ \]
+ which only depends on $g|_{\H^n}$, i.e.\ $f$. So $X$ is a well-defined element of $T_a\H^n$, and $i_*(X) = Y$ by construction. So done.
+\end{proof}
+
+Now we want to see how orientations behave. We can define them in exactly the same way as manifolds, and everything works. However, something interesting happens. If a manifold with boundary has an orientation, this naturally induces an orientation of the boundary.
+\begin{defi}[Outward/Inward pointing]\index{outward pointing}\index{inward pointing}
+ Let $p \in \partial M$. We then have an inclusion $T_p \partial M \subseteq T_p M$. If $X_p \in T_p M$, then in a chart, we can write
+ \[
+ X_p = \sum_{i = 1}^n a_i \frac{\partial}{\partial x_i},
+ \]
+ where $a_i \in \R$ and $\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_{n - 1}}$ are a basis for $T_p \partial M$. We say $X_p$ is \emph{outward pointing} if $a_n < 0$, and \emph{inward pointing} if $a_n > 0$.
+\end{defi}
+
+\begin{defi}[Induced orientation]\index{induced orientation}
+ Let $M$ be an oriented manifold with boundary. We say a basis $e_1,\cdots, e_{n - 1}$ is an oriented basis for $T_p \partial M$ if $(X_p, e_1, \cdots, e_{n - 1})$ is an oriented basis for $T_p M$, where $X_p$ is any outward pointing element in $T_p M$. This orientation is known as the \emph{induced orientation}.
+\end{defi}
+
+It is an exercise to see that these notions are all well-defined and do not depend on the basis.
+\begin{eg}
+ We have an isomorphism
+ \begin{align*}
+ \partial\H^n &\cong \R^{n - 1}\\
+ (x_1, \cdots, x_{n - 1}, 0) &\mapsto (x_1, \cdots, x_{n - 1}).
+ \end{align*}
+ So
+ \[
+ \left.-\frac{\partial}{\partial x_n}\right|_{\partial \H^n}
+ \]
+ is an outward pointing vector. So we know $x_1, \cdots, x_{n - 1}$ are oriented coordinates for $\partial \H^n$ iff
+ \[
+ -\frac{\partial}{\partial x_n}, \frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_{n-1}}
+ \]
+ is oriented, which is true iff $n$ is even.
+\end{eg}
+
+\begin{eg}
+ If $n = 1$, say $M = [a, b] \subseteq \R$ with $a < b$, then $\partial M = \{a, b\}$, and $T_p \partial M = \{0\}$ for $p \in \partial M$. So an orientation of $\partial M$ is a choice of numbers $\pm 1$ attached to each point. The convention is that if $M$ is in the standard orientation induced by $M \subseteq \R$, then the induced orientation is obtained by giving $+1$ to $b$ and $-1$ to $a$.
+\end{eg}
+
+Finally, we get to Stokes' theorem.
+\begin{thm}[Stokes' theorem]\index{Stokes' theorem}
+ Let $M$ be an oriented manifold with boundary of dimension $n$. Then if $\omega \in \Omega^{n - 1}(M)$ has compact support, then
+ \[
+ \int_M \d \omega = \int_{\partial M}\omega.
+ \]
+ In particular, if $M$ has no boundary, then
+ \[
+ \int_M \d \omega = 0.
+ \]
+\end{thm}
+Note that this makes sense: $\d \omega$ is an $n$-form on $M$, so we can integrate it. On the right hand side, what we are really doing is integrating the restriction of $\omega$ to $\partial M$, i.e.\ the $(n - 1)$-form $i^* \omega$, where $i: \partial M \to M$ is the inclusion, so that $i^* \omega \in \Omega^{n - 1}(\partial M)$.
+
+Note that if $M = [a, b]$, then this is just the usual fundamental theorem of calculus.
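+
+Explicitly, if $M = [a, b]$ and $\omega = f \in \Omega^0(M)$, then $\d \omega = f'\;\d x$, and with the induced orientation on $\partial M = \{a, b\}$ described in the previous example, Stokes' theorem reads
+\[
+ \int_a^b f'(x)\;\d x = \int_M \d \omega = \int_{\partial M} \omega = f(b) - f(a).
+\]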
+
+The hard part of the proof is keeping track of the signs.
+\begin{proof}
+ We first do the case where $M = \H^n$. Then we have
+ \[
+ \omega = \sum_{i = 1}^n \omega_i\;\d x_1 \wedge \cdots \wedge \widehat{\d x_i} \wedge \cdots \wedge \d x_n,
+ \]
+ where $\omega_i$ is compactly supported, and the hat denotes omission. So we have
+ \begin{align*}
+ \d \omega &= \sum_i \d \omega_i \wedge \d x_1 \wedge \cdots \wedge \widehat{\d x_i} \wedge \cdots \wedge \d x_n\\
+ &= \sum_i \frac{\partial \omega_i}{\partial x_i} \d x_i \wedge \d x_1 \wedge \cdots \wedge \widehat{\d x_i} \wedge \cdots \wedge \d x_n\\
+ &= \sum_i (-1)^{i - 1} \frac{\partial \omega_i}{\partial x_i} \d x_1 \wedge \cdots \wedge \d x_i \wedge \cdots \wedge \d x_n
+ \end{align*}
+ Let's pick $R$ large enough that
+ \[
+ \supp(\omega) \subseteq \{x \in \H^n: x_j \in [-R, R] \text{ for } j = 1, \cdots, n - 1;\ x_n \in [0, R]\} = A.
+ \]
+ Then suppose $i \not= n$. Then we have
+ \begin{align*}
+ &\hphantom{=}\int_{\H^n} \frac{\partial \omega_i}{\partial x_i} \d x_1 \wedge \cdots \wedge \d x_i \wedge \cdots \wedge \d x_n\\
+ &= \int_A \frac{\partial \omega_i}{\partial x_i} \d x_1 \cdots \d x_n\\
+ &= \int_{-R}^R \int_{-R}^R \cdots \int_{-R}^R \int_0^R \frac{\partial\omega_i}{\partial x_i}\;\d x_1 \cdots \d x_n\\
+ \intertext{By Fubini's theorem, we can integrate this in any order. We integrate with respect to $\d x_i$ first. So this is}
+ &= \pm \int_{-R}^R \cdots \int_{-R}^R \int_0^R \left(\int_{-R}^R \frac{\partial \omega_i}{\partial x_i}\;\d x_i\right)\d x_1 \cdots \widehat{\d x_i} \cdots \d x_n
+ \end{align*}
+ By the fundamental theorem of calculus, the inner integral is
+ \[
+ \omega_i(x_1, \cdots, x_{i - 1}, R, x_{i + 1}, \cdots, x_n) - \omega_i(x_1, \cdots, x_{i - 1}, -R, x_{i + 1}, \cdots, x_n) = 0 - 0 = 0.
+ \]
+ So the integral vanishes. So we are only left with the $i = n$ term. So we have
+ \begin{align*}
+ \int_{\H^n} \d \omega &= (-1)^{n - 1} \int_A \frac{\partial \omega_n}{\partial x_n} \;\d x_1 \cdots \d x_n\\
+ &= (-1)^{n - 1} \int_{-R}^R \cdots \int_{-R}^R \left(\int_0^R \frac{\partial \omega_n}{\partial x_n}\;\d x_n\right) \d x_1 \cdots \d x_{n - 1}
+ \end{align*}
+ Now that integral is just
+ \[
+ \omega_n(x_1, \cdots, x_{n - 1}, R) - \omega_n(x_1, \cdots, x_{n - 1}, 0) = -\omega_n(x_1, \cdots, x_{n - 1}, 0).
+ \]
+ So this becomes
+ \[
+ =(-1)^n \int_{-R}^R \cdots \int_{-R}^R \omega_n(x_1, \cdots, x_{n - 1}, 0)\;\d x_1 \cdots \d x_{n - 1}.
+ \]
+ Next we see that
+ \[
+ i^* \omega = \omega_n \d x_1 \wedge \cdots \wedge \d x_{n - 1},
+ \]
+ as $i^*(\d x_n) = 0$. So we have
+ \[
+ \int_{\partial \H^n} i^* \omega = \pm \int_{A \cap \partial \H^n} \omega_n(x_1, \cdots, x_{n - 1}, 0) \, \d x_1 \cdots \d x_{n - 1}.
+ \]
+ Here the sign is a plus iff $x_1, \cdots, x_{n - 1}$ are oriented coordinates for $\partial \H^n$, i.e.\ $n$ is even. So this is
+ \[
+ \int_{\partial \H^n} \omega = (-1)^n \int_{-R}^R \cdots \int_{-R}^R \omega_n(x_1, \cdots, x_{n - 1}, 0)\;\d x_1 \cdots \d x_{n - 1} = \int_{\H^n} \d \omega.
+ \]
+ Now for a general manifold $M$, suppose first that $\omega \in \Omega^{n - 1}(M)$ is compactly supported in a single oriented chart $(U, \varphi)$. Then the result is true by working in local coordinates. More explicitly, we have
+ \[
+ \int_M \d \omega = \int_{\H^n}(\varphi^{-1})^* \d \omega = \int_{\H^n} \d((\varphi^{-1})^* \omega) = \int_{\partial \H^n} (\varphi^{-1})^* \omega = \int_{\partial M} \omega.
+ \]
+ Finally, for a general $\omega$, we just cover $M$ by oriented charts $(U, \varphi_\alpha)$, and use a partition of unity $\chi_\alpha$ subordinate to $\{U_\alpha\}$. So we have
+ \[
+ \omega = \sum \chi_\alpha \omega.
+ \]
+ Then
+ \[
+ \d \omega = \sum \d \chi_\alpha \wedge \omega + \sum \chi_\alpha \d \omega = \d\left(\sum \chi_\alpha\right) \wedge \omega + \sum \chi_\alpha \d \omega = \sum \chi_\alpha \d \omega,
+ \]
+ using the fact that $\sum \chi_\alpha$ is constant, hence its derivative vanishes. So we have
+ \[
+ \int_M \d \omega = \sum_\alpha \int_M \chi_\alpha \d \omega = \sum_\alpha \int_{\partial M} \chi_\alpha \omega = \int_{\partial M}\omega. \qedhere
+ \]
+\end{proof}
+All the classical results like Green's theorem and the divergence theorem then follow from this.
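+
+For example, if $M \subseteq \R^2$ is a compact $2$-dimensional manifold with boundary and $\omega = P \;\d x + Q \;\d y \in \Omega^1(M)$, then
+\[
+ \d \omega = \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) \d x \wedge \d y,
+\]
+and Stokes' theorem specializes to Green's theorem
+\[
+ \int_{\partial M} P \;\d x + Q \;\d y = \int_M \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) \d x \wedge \d y,
+\]
+where $\partial M$ carries the induced orientation.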
+
+\begin{eg}
+ Let $M$ be a compact $2n$-dimensional manifold without boundary with a symplectic form $\omega \in \Omega^2(M)$, i.e.\ $\omega$ is closed and non-degenerate. Then basic linear algebra tells us $\omega^n$ is nowhere vanishing, so (choosing the orientation determined by $\omega^n$) we have
+ \[
+ \int_M \omega^n \not= 0.
+ \]
+ Since $\omega$ is closed, it defines a class $[\omega] \in H_{\dR}^2 (M)$. Does this class vanish? If $\omega = \d \tau$, then we have
+ \[
+ \d (\tau \wedge \omega \wedge \cdots \wedge \omega) = \omega^n.
+ \]
+ So we have
+ \[
+ \int_M \omega^n = \int_M \d (\tau \wedge \omega \wedge \cdots \wedge \omega) = 0
+ \]
+ by Stokes' theorem. This is a contradiction. So $[\omega]$ is non-zero in $H^2_{\dR}(M)$.
+\end{eg}
+
+% \subsection{Densities*}
+
+\section{De Rham's theorem*}
+In the whole section, $M$ will be a compact manifold.
+
+\begin{thm}[de Rham's theorem]\index{de Rham's theorem}
+ There exists a natural isomorphism
+ \[
+ H^p_{\dR}(M) \cong H^p(M, \R),
+ \]
+ where $H^p(M, \R)$ is the singular cohomology of $M$, and this is in fact an isomorphism of rings, where $H^p_{\dR}(M)$ has the product given by the wedge, and $H^p(M, \R)$ has the cup product.
+\end{thm}
+
+Recall that singular cohomology is defined as follows:
+\begin{defi}[Singular $p$-simplex]
+ Let $M$ be a manifold. Then a \term{singular $p$-simplex} is a continuous map
+ \[
+ \sigma: \Delta_p \to M,
+ \]
+ where
+ \[
+ \Delta_p = \left\{\sum_{i = 0}^p t_i e_i: t_i \geq 0, \sum_i t_i = 1\right\} \subseteq \R^{p + 1}.
+ \]
+ We define
+ \[
+ C_p(M) = \{\text{formal sums }\sum a_i \sigma_i: a_i \in \R, \sigma_i\text{ a singular $p$-simplex}\}.
+ \]
+ We define
+ \[
+ C_p^\infty(M) = \{\text{formal sums }\sum a_i \sigma_i: a_i \in \R, \sigma_i\text{ a smooth singular $p$-simplex}\}.
+ \]
+\end{defi}
+
+\begin{defi}[Boundary map]
+ The \emph{boundary map}
+ \[
+ \partial: C_p(M) \to C_{p - 1}(M)
+ \]
+ is the linear map such that if $\sigma: \Delta_p \to M$ is a $p$ simplex, then
+ \[
+ \partial \sigma = \sum_{i = 0}^p (-1)^i \sigma \circ F_{i, p},
+ \]
+ where $F_{i, p}$ maps $\Delta_{p - 1}$ affine linearly to the face of $\Delta_p$ opposite the $i$th vertex. We similarly have
+ \[
+ \partial: C_p^\infty(M) \to C_{p - 1}^\infty(M).
+ \]
+\end{defi}
+
+We can then define singular homology
+\begin{defi}[Singular homology]\index{singular homology}
+ The \emph{singular homology} of $M$ is
+ \[
+ H_p(M, \R) = \frac{\ker \partial: C_p(M) \to C_{p - 1}(M)}{\im \partial: C_{p + 1}(M) \to C_p(M)}.
+ \]
+ The \term{smooth singular homology}\index{singular homology!smooth} is the same thing with $C_p(M)$ replaced with $C_p^\infty(M)$.
+\end{defi}
+
+$H_p^\infty$ has the same properties as $H_p$, e.g.\ functoriality, (smooth) homotopy invariance, Mayer-Vietoris etc with no change in proof.
+
+Any smooth $p$-simplex $\sigma$ is also continuous, giving a natural inclusion
+\[
+ i: C_p^\infty(M) \to C_p(M),
+\]
+which obviously commutes with $\partial$, giving
+\[
+ i_*: H_p^\infty(M) \to H_p(M).
+\]
+\begin{thm}
+ The map $i_*: H_p^\infty(M) \to H_p(M)$ is an isomorphism.
+\end{thm}
+
+There are three ways we can prove this. We will give the ideas for these proofs:
+\begin{enumerate}
+ \item We can show that any continuous map $F: M \to N$ between manifolds is homotopic to a smooth one. But this is painful to prove.
+ \item What we really care about is maps $\sigma: \Delta_p \to M$, and we can barycentrically subdivide the simplex so that each small simplex lies in a single chart, where smooth approximation is easy.
+ \item As $H_p$ and $H_p^\infty$ have enough properties in common --- in particular, they both satisfy Mayer-Vietoris and agree on convex sets --- they must agree on all manifolds. We will not spell out this proof, because we are about to use the same strategy to prove that de Rham cohomology is the same as singular cohomology.
+\end{enumerate}
+
+Since we are working with $\R$, we can cheat and define singular cohomology in a simple way:
+\begin{defi}[Singular cohomology]\index{singular cohomology}
+ The \emph{singular cohomology} of $M$ is defined as
+ \[
+ H^p(M, \R) = \Hom(H_p(M, \R), \R).
+ \]
+ Similarly, the smooth singular cohomology is
+ \[
+ H^p_\infty(M, \R) = \Hom(H_p^\infty(M, \R), \R).
+ \]
+\end{defi}
+This is a bad definition in general! It just happens to work for singular cohomology with coefficients in $\R$, and we are not bothered to write down the proper definition.
+
+Our goal is now to describe an isomorphism
+\[
+ H^p_{\dR}(M) \cong H^p_\infty(M, \R).
+\]
+The map itself is given as follows:
+
+Suppose $[\omega] \in H^p_{\dR}(M)$, i.e.\ $\omega \in \Omega^p(M)$ with $\d \omega = 0$, and suppose $\sigma: \Delta_p \to M$ is smooth with $\partial \sigma = 0$. We can then define
+\[
+ I([\omega])([\sigma]) = \int_{\Delta_p} \sigma^* \omega \in \R.
+\]
+Note that we have not defined what $\int_{\Delta_p}$ means, because $\Delta_p$ is not even a manifold with boundary --- it has corners. We can develop an analogous theory of integration on manifolds with corners, but we can also be lazy, and just integrate over
+\[
+ \Delta_p^\times = \Delta_p \setminus \{\text{codimension 2 faces}\}.
+\]
+Now $\sigma^* \omega|_{\Delta_p^\times}$ does not have compact support, but it is the restriction of a smooth $p$-form on all of $\Delta_p$, so in particular it is bounded. So the integral is finite.
+
+Now in general, if $\tau = \sum a_i \sigma_i \in C_p^\infty(M)$, we define
+\[
+ I([\omega])(\tau) = \sum a_i \int_{\Delta_p} \sigma_i^* \omega \in \R.
+\]
+Now \emph{Stokes' theorem} tells us
+\[
+ \int_{\partial \sigma}\omega = \int_\sigma \d \omega.
+\]
+So we have
+\begin{lemma}
+ $I$ is a well-defined map $H^p_{\dR} (M) \to H^p_\infty(M, \R)$.
+\end{lemma}
+
+\begin{proof}
+ If $[\omega] = [\omega']$, then $\omega - \omega' = \d \alpha$ for some $\alpha$. Then for any smooth $p$-cycle $\sigma$, we have
+ \[
+ \int_\sigma (\omega - \omega') = \int_\sigma \d \alpha = \int_{\partial \sigma}\alpha = 0,
+ \]
+ since $\partial \sigma = 0$.
+
+ On the other hand, if $[\sigma] = [\sigma']$, then $\sigma - \sigma' = \partial \beta$ for some $\beta$. Then we have
+ \[
+ \int_{\sigma - \sigma'}\omega = \int_{\partial \beta}\omega = \int_\beta \d \omega = 0.
+ \]
+ So this is well-defined.
+\end{proof}
+
+\begin{lemma}
+ $I$ is functorial and commutes with the boundary map of Mayer-Vietoris. In other words, if $F: M \to N$ is smooth, then the diagram
+ \[
+ \begin{tikzcd}
+ H^p_{\dR}(N) \ar[r, "F^*"] \ar[d, "I"] & H^p_{\dR} (M) \ar[d, "I"]\\
+ H^p_\infty(N) \ar[r, "F^*"] & H^p_\infty(M)
+ \end{tikzcd}
+ \]
+ commutes. Moreover, if $M = U \cup V$ and $U, V$ are open, then the diagram
+ \[
+ \begin{tikzcd}
+ H^p_\dR(U \cap V) \ar[r, "\delta"] \ar[d, "I"] &H_{\dR}^{p + 1}(U \cup V) \ar[d, "I"]\\
+ H^p_\infty(U \cap V, \R) \ar[r, "\delta"] & H^{p + 1}_\infty(U \cup V, \R)
+ \end{tikzcd}
+ \]
+ also commutes. Note that the other parts of the Mayer-Vietoris sequence commute because they are induced by maps of manifolds.
+\end{lemma}
+
+\begin{proof}
+ Trace through the definitions.
+\end{proof}
+
+\begin{prop}
+ If $U \subseteq \R^n$ is convex, then
+ \[
+ I: H^p_{\dR}(U) \to H_\infty^p(U, \R)
+ \]
+ is an isomorphism for all $p$.
+\end{prop}
+
+\begin{proof}
+ If $p > 0$, then both sides vanish. Otherwise, we check manually that $I: H^0_{\dR}(U) \to H^0_\infty (U, \R)$ is an isomorphism.
+\end{proof}
+
+These two are enough to prove that the two cohomologies agree --- we can cover any manifold by convex subsets of $\R^n$, and then use Mayer-Vietoris to patch them up.
+
+We make the following definition:
+\begin{defi}[de Rham]\leavevmode
+ \begin{enumerate}
+ \item We say a manifold $M$ is \emph{de Rham} if $I$ is an isomorphism.
+ \item We say an open cover $\{U_\alpha\}$ of $M$ is \emph{de Rham} if $U_{\alpha_1} \cap \cdots \cap U_{\alpha_p}$ is de Rham for all $\alpha_1, \cdots, \alpha_p$.
+ \item A \emph{de Rham basis} is a de Rham cover that is a basis for the topology on $M$.
+ \end{enumerate}
+\end{defi}
+Our plan is to inductively show that everything is indeed de Rham.
+
+We have already seen that if $U \subseteq \R^n$ is convex, then it is de Rham, and a countable disjoint union of de Rham manifolds is de Rham.
+
+The key proposition is the following:
+\begin{prop}
+ Suppose $\{U, V\}$ is a de Rham cover of $U \cup V$. Then $U \cup V$ is de Rham.
+\end{prop}
+
+\begin{proof}
+ We use the five lemma! We write the Mayer-Vietoris sequence that is impossible to fit within the margins:
+ \[
+ \begin{tikzcd}[column sep=small]
+ H^p_{\dR}(U) \oplus H^p_{\dR}(V) \ar[d, "I \oplus I"] \ar[r] & H^p_{\dR}(U \cap V) \ar[r] \ar[d, "I"] & H^{p + 1}_{\dR}(U \cup V) \ar[r] \ar[d, "I"] & H^{p + 1}_{\dR}(U) \oplus H^{p + 1}_{\dR}(V) \ar[d, "I \oplus I"] \ar[r] & H^{p + 1}_{\dR}(U \cap V) \ar[d, "I"]\\
+ H^p_{\infty}(U) \oplus H^p_{\infty}(V) \ar[r] & H^p_{\infty}(U \cap V) \ar[r] & H^{p + 1}_{\infty}(U \cup V) \ar[r] & H^{p + 1}_{\infty}(U) \oplus H^{p + 1}_{\infty}(V) \ar[r] & H^{p + 1}_{\infty}(U \cap V)
+ \end{tikzcd}
+ \]
+ This huge thing commutes, and all but the middle map are isomorphisms. So by the five lemma, the middle map is also an isomorphism. So done.
+\end{proof}
+
+\begin{cor}
+ If $U_1,\cdots, U_k$ is a finite de Rham cover of $U_1 \cup \cdots \cup U_k = M$, then $M$ is de Rham.
+\end{cor}
+
+\begin{proof}
+ By induction on $k$.
+\end{proof}
+
+\begin{prop}
+ The disjoint union of de Rham spaces is de Rham.
+\end{prop}
+
+\begin{proof}
+ Let $A_i$ be de Rham. Then we have
+ \[
+ H^p_{\dR}\left(\coprod A_i\right) \cong \prod H^p_{\dR}(A_i) \cong \prod H^p_\infty(A_i) \cong H^p_\infty\left(\coprod A_i\right).\qedhere
+ \]
+\end{proof}
+\begin{lemma}
+ Let $M$ be a manifold. If it has a de Rham basis, then it is de Rham.
+\end{lemma}
+
+\begin{proof}[Proof sketch]
+ Let $f: M \to \R$ be an ``exhaustion function'', i.e.\ $f^{-1}((-\infty, c])$ is compact for all $c \in \R$. This is guaranteed to exist for any manifold. We let
+ \[
+ A_m = \{q \in M: f(q) \in [m, m + 1]\}.
+ \]
+ We let
+ \[
+ A_m' = \left\{q \in M: f(q) \in \left[m - \frac{1}{2}, m + \frac{3}{2}\right]\right\}.
+ \]
+ Given any $q \in A_m$, there is some $U_{\alpha(q)} \subseteq A_m'$ in the de Rham basis containing $q$. As $A_m$ is compact, we can cover it by a finite number of such $U_{\alpha_i}$, with each $U_{\alpha_i} \subseteq A_m'$. Let
+ \[
+ B_m = U_{\alpha_1} \cup \cdots \cup U_{\alpha_r}.
+ \]
+ Since $B_m$ has a finite de Rham cover, it is de Rham. Observe that if $B_m \cap B_{\tilde{m}} \not= \emptyset$, then $\tilde{m} \in \{m, m - 1, m + 1\}$. We let
+ \[
+ U = \bigcup_{m\text{ even}} B_m,\quad V = \bigcup_{m \text{ odd}} B_m.
+ \]
+ Then each of $U$ and $V$ is a countable disjoint union of de Rham spaces, and is thus de Rham. Similarly, $U \cap V$ is de Rham. So $M = U \cup V$ is de Rham.
+\end{proof}
+
+\begin{thm}
+ Any manifold has a de Rham basis.
+\end{thm}
+
+\begin{proof}
+ If $U \subseteq \R^n$ is open, then the convex open subsets of $U$ (e.g.\ open balls) form a basis for the topology on $U$, and finite intersections of convex sets are convex, hence de Rham. So this is a de Rham basis, and $U$ is de Rham.
+
+ Finally, $M$ has a basis of open sets diffeomorphic to open subsets of $\R^n$, and finite intersections of these are again diffeomorphic to open subsets of $\R^n$, hence de Rham by the above. So $M$ has a de Rham basis, and is thus de Rham.
+\end{proof}
+
+\section{Connections}
+\subsection{Basic properties of connections}
+Imagine we are moving in a manifold $M$ along a path $\gamma: I \to M$. We already know what ``velocity'' means. We simply take the derivative of the path $\gamma$ (and pick the canonical tangent vector $1 \in T_p I$) to obtain a path $\dot{\gamma}: I \to TM$. Can we make sense of our \emph{acceleration}? We can certainly iterate the procedure, treating $TM$ as just any other manifold, and obtain a path $\ddot{\gamma}: I \to TTM$. But this is not too satisfactory, because $TTM$ is a rather complicated thing. We would want to use the extra structure we know about $TM$, namely that each fiber is a vector space, to obtain something nicer, perhaps an acceleration that again lives in $TM$.
+
+We could try the naive definition
+\[
+ \ddot{\gamma}(t) = \lim_{h \to 0} \frac{\dot{\gamma}(t + h) - \dot{\gamma}(t)}{h},
+\]
+but this doesn't make sense, because $\dot{\gamma}(t + h)$ and $\dot{\gamma}(t)$ live in different vector spaces.
+
+The problem is solved by the notion of a connection. There are (at least) two ways we can think of a connection --- on the one hand, it is a specification of how we can take derivatives of sections, so by definition this solves our problem. On the other hand, we can view it as telling us how to compare infinitesimally close vectors. Here we will define it the first way.
+
+\begin{notation}\index{$\Omega^p(E)$}
+ Let $E$ be a vector bundle on $M$. Then we write
+ \[
+ \Omega^p(E) = \Omega^0(E \otimes \Lambda^p (T^* M)).
+ \]
+ So an element in $\Omega^p(E)$ takes in $p$ tangent vectors and outputs a vector in $E$.
+\end{notation}
+
+\begin{defi}[Connection]\index{connection}
+ Let $E$ be a vector bundle on $M$. A \emph{connection} on $E$ is a linear map
+ \[
+ \d_E: \Omega^0(E) \to \Omega^1(E)
+ \]
+ such that
+ \[
+ \d_E (fs) = \d f \otimes s + f \d_E s
+ \]
+ for all $f \in C^\infty(M)$ and $s \in \Omega^0(E)$.
+
+ A connection on $TM$ is called a \emph{linear} or \emph{Koszul connection}.\index{linear connection}\index{Koszul connection}
+\end{defi}
+Given a connection $\d_E$ on a vector bundle, we can use it to take derivatives of sections. Let $s \in \Omega^0(E)$ be a section of $E$, and $X \in \Vect(M)$. We want to use the connection to define the derivative of $s$ in the direction of $X$. This is easy. We define $\nabla_X: \Omega^0(E) \to \Omega^0(E)$ by
+\[
+ \nabla_X(s) = \bra \d_E(s), X \ket \in \Omega^0(E),
+\]
+where the brackets denote applying $\d_E(s): TM \to E$ to $X$. Often we just call $\nabla_X$ the connection.
+
+\begin{prop}
+ For any $X$, $\nabla_X$ is linear in $s$ over $\R$, and linear in $X$ over $C^\infty(M)$. Moreover,
+ \[
+ \nabla_X(fs) = f \nabla_X(s) + X(f) s
+ \]
+ for $f \in C^\infty(M)$ and $s \in \Omega^0(E)$.
+\end{prop}
+
+This doesn't really solve our problem, though. The above lets us differentiate sections of the whole bundle $E$ along an everywhere-defined vector field. However, what we want is to differentiate a vector field defined only along a path.
+
+\begin{defi}[Vector field along curve]\index{vector field!along curve}
+ Let $\gamma: I \to M$ be a curve. A \emph{vector field} along $\gamma$ is a smooth $V: I \to TM$ such that
+ \[
+ V(t) \in T_{\gamma(t)} M
+ \]
+ for all $t \in I$. We write
+ \[
+ J(\gamma) = \{\text{vector fields along $\gamma$}\}.
+ \]
+\end{defi}
+The next thing we want to prove is that we can indeed differentiate these things.
+
+\begin{lemma}
+ Given a linear connection $\nabla$ and a path $\gamma: I \to M$, there exists a unique map $\D_t: J(\gamma) \to J(\gamma)$ such that
+ \begin{enumerate}
+ \item $\D_t(fV) = \dot{f} V + f \D_t V$ for all $f \in C^\infty(I)$
+ \item If $U$ is an open neighbourhood of $\im(\gamma)$ and $\tilde{V}$ is a vector field on $U$ such that $\tilde{V}|_{\gamma(t)} = V_t$ for all $t \in I$, then
+ \[
+ \D_t(V)|_t = \nabla_{\dot{\gamma}(t)} \tilde{V}.
+ \]
+ \end{enumerate}
+ We call $\D_t$ the \term{covariant derivative} along $\gamma$.
+\end{lemma}
+In general, when we have some notion on $\R^n$ that involves derivatives and we want to transfer to general manifolds with connection, all we do is to replace the usual derivative with the covariant derivative, and we will usually get the right generalization, because this is the only way we can differentiate things on a manifold.
+
+Before we prove the lemma, we need to prove something about the locality of connections:
+\begin{lemma}
+ Given a connection $\nabla$ and vector fields $X, Y \in \Vect(M)$, the quantity $\nabla_X Y|_p$ depends only on the values of $Y$ near $p$ and the value of $X$ at $p$.
+\end{lemma}
+
+\begin{proof}
+ It is clear from definition that this only depends on the value of $X$ at $p$.
+
+ To show that it only depends on the values of $Y$ near $p$, by linearity, we just have to show that if $Y = 0$ in a neighbourhood $U$ of $p$, then $\nabla_X Y|_p = 0$. To do so, we pick a bump function $\chi$ that is identically $1$ near $p$ with $\supp(\chi) \subseteq U$. Then $\chi Y = 0$. So we have
+ \[
+ 0 = \nabla_X (\chi Y) = \chi \nabla_X(Y) + X(\chi) Y.
+ \]
+ Evaluating at $p$, the term $X(\chi) Y$ vanishes, since $\chi$ is constant near $p$, and $\chi(p) = 1$. So $\nabla_X Y|_p = 0$.
+\end{proof}
+
+We now prove the existence and uniqueness of the covariant derivative.
+\begin{proof}[Proof of previous lemma]
+ We first prove uniqueness.
+
+ By a similar bump function argument, we know that $\D_t V|_{t_0}$ depends only on values of $V(t)$ near $t_0$. Suppose that locally on a chart, we have
+ \[
+ V(t) = \sum_j V_j(t) \left.\frac{\partial}{\partial x_j}\right|_{\gamma(t)}
+ \]
+ for some $V_j: I \to \R$. Then we must have
+ \[
+ \D_t V|_{t_0} = \sum_j \dot{V}_j (t_0) \left.\frac{\partial}{\partial x_j}\right|_{\gamma(t_0)} + \sum_j V_j(t_0) \nabla_{\dot{\gamma}(t_0)} \frac{\partial}{\partial x_j}
+ \]
+ by the Leibniz rule and the second property. But every term above is uniquely determined. So it follows that $\D_t V$ must be given by this formula.
+
+ To show existence, note that the above formula defines $\D_t$ locally, and the local definitions patch together by uniqueness.
+\end{proof}
+
+\begin{prop}
+ Any vector bundle admits a connection.
+\end{prop}
+
+\begin{proof}
+ Cover $M$ by open sets $U_\alpha$ such that $E|_{U_\alpha}$ is trivial, and pick the trivial connection $\d_\alpha$ on each. If $\{\varphi_\alpha\}$ is a partition of unity subordinate to this cover, then $\sum_\alpha \varphi_\alpha \d_\alpha$ is a connection, since the Leibniz terms sum correctly as $\sum_\alpha \varphi_\alpha = 1$.
+\end{proof}
+
+Note that a connection is not a tensor, since it is not linear over $C^\infty(M)$. However, if $\d_E$ and $\tilde{\d}_E$ are connections, then
+\[
+ (\d_E - \tilde{\d}_E)(fs) = \d f \otimes s + f \d_E s - (\d f \otimes s + f \tilde{\d}_E s) = f (\d_E - \tilde{\d}_E)(s).
+\]
+So the difference is linear. Recall from sheet 2 that if $E, E'$ are vector bundles and
+\[
+ \alpha: \Omega^0(E) \to \Omega^0(E')
+\]
+is a map such that
+\[
+ \alpha(fs) = f \alpha(s)
+\]
+for all $s \in \Omega^0(E)$ and $f \in C^\infty(M)$, then there exists a unique bundle morphism $\xi: E \to E'$ such that
+\[
+ \alpha(s)|_p = \xi(s(p)).
+\]
+Applying this to $\alpha = \d_E - \tilde{\d}_E: \Omega^0(E) \to \Omega^1(E) = \Omega^0(E \otimes T^* M)$, we know there is a unique bundle map
+\[
+ \xi: E \to E \otimes T^* M
+\]
+such that
+\[
+ \d_E (s)|_p = \tilde{\d}_E (s) |_p + \xi(s(p)).
+\]
+So we can think of $\d_E - \tilde{\d}_E$ as a bundle morphism
+\[
+ E \to E \otimes T^*M.
+\]
+In other words, we have
+\[
+ \d_E - \tilde{\d}_E \in \Omega^0(E \otimes E^* \otimes T^* M) = \Omega^1(\End(E)).
+\]
+The conclusion is that the set of all connections on $E$ is an affine space modelled on $\Omega^1(\End(E))$.
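+
+To make this affine structure concrete, here is a standard example (not from the notes): consider the trivial line bundle $E = M \times \R$, so that $\Omega^0(E) = C^\infty(M)$. The exterior derivative $\d$ is itself a connection, and for any $\theta \in \Omega^1(M)$, the map
+\[
+ \d_E = \d + \theta
+\]
+satisfies $\d_E(fs) = \d f \otimes s + f \d_E s$, so is again a connection. Conversely, by the above, every connection on $M \times \R$ differs from $\d$ by some element of $\Omega^1(\End(E)) = \Omega^1(M)$. So connections on the trivial line bundle correspond exactly to $1$-forms on $M$.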
+
+\subsubsection*{Induced connections}
+In many cases, having a connection on a vector bundle allows us to differentiate many more things. Here we will note a few.
+
+\begin{prop}
+ The map $\d_E$ extends uniquely to $\d_E: \Omega^p(E) \to \Omega^{p + 1}(E)$ such that $\d_E$ is linear and
+ \[
+ \d_E(\omega \otimes s) = \d \omega \otimes s + (-1)^p \omega \wedge \d_E s,
+ \]
+ for $s \in \Omega^0(E)$ and $\omega \in \Omega^p(M)$.
+ Here $\omega \wedge \d_E s$ means we take the wedge on the form part of $\d_E s$. More generally, we have a wedge product
+ \begin{align*}
+ \Omega^p(M) \times \Omega^q(E) &\to \Omega^{p + q}(E)\\
+ (\alpha, \beta \otimes s) &\mapsto (\alpha \wedge \beta) \otimes s.
+ \end{align*}
+ More generally, the extension satisfies
+ \[
+ \d_E (\omega \wedge \xi) = \d \omega \wedge \xi + (-1)^q \omega \wedge \d_E \xi,
+ \]
+ where $\xi \in \Omega^p(E)$ and $\omega \in \Omega^q(M)$.
+\end{prop}
+
+\begin{proof}
+ The formula given already uniquely specifies the extension, since every form is locally a sum of things of the form $\omega \otimes s$. To see this is well-defined, we need to check that
+ \[
+ \d_E ((f \omega) \otimes s) = \d_E (\omega \otimes (fs)),
+ \]
+ and this follows from just writing the terms out using the Leibniz rule. The second part follows similarly by writing things out for $\xi = \eta \otimes s$.
+\end{proof}
+\begin{defi}[Induced connection on tensor product]\index{induced connection!tensor product}
+ Let $E, F$ be vector bundles with connections $\d_E, \d_F$ respectively. The \emph{induced connection} is the connection $\d_{E \otimes F}$ on $E \otimes F$ given by
+ \[
+ \d_{E \otimes F}(s \otimes t) = \d_E s \otimes t + s \otimes \d_F t
+ \]
+ for $s \in \Omega^0(E)$ and $t \in \Omega^0(F)$, and then extending linearly.
+\end{defi}
+
+\begin{defi}[Induced connection on dual bundle]\index{induced connection!dual bundle}
+ Let $E$ be a vector bundle with connection $\d_E$. Then there is an induced connection $\d_{E^*}$ on $E^*$ given by requiring
+ \[
+ \d\bra s, \xi\ket = \bra \d_E s, \xi\ket + \bra s, \d_{E^*} \xi\ket,
+ \]
+ for $s \in \Omega^0(E)$ and $\xi \in \Omega^0(E^*)$. Here $\bra \ph, \ph\ket$ denotes the natural pairing $\Omega^0(E) \times \Omega^0(E^*) \to C^\infty(M, \R)$.
+\end{defi}
+So once we have a connection on $E$, we have an induced connection on all tensor products of it.
+
+
+\subsubsection*{Christoffel symbols}
+We also have a local description of the connection, known as the \emph{Christoffel symbols}.
+
+Say we have a frame $e_1, \cdots, e_r$ for $E$ over $U \subseteq M$. Then any section $s \in \Omega^0(E|_U)$ is uniquely of the form
+\[
+ s = s^i e_i,
+\]
+where $s^i \in C^\infty(U, \R)$ and we have implicit summation over repeated indices (as we will in the whole section).
+
+Given a connection $\d_E$, we write
+\[
+ \d_E e_i = \Theta_i^j \otimes e_j,
+\]
+where $\Theta_i^j \in \Omega^1(U)$. Then we have
+\[
+ \d_E s = \d_E s^i e_i = \d s^i \otimes e_i + s^i \d_E e_i = (\d s^j + \Theta_i^j s^i) \otimes e_j.
+\]
+We can write $\mathbf{s} = (s^1, \cdots, s^r)$. Then we have
+\[
+ \d_E \mathbf{s} = \d \mathbf{s} + \Theta \mathbf{s},
+\]
+where the final multiplication is matrix multiplication.
+
+It is common to write
+\[
+ \d_E = \d + \Theta,
+\]
+where $\Theta$ is a matrix of $1$-forms. It is a good idea to view this just as a formal equation, rather than something that actually makes mathematical sense.
+
+Now in particular, if we have a \emph{linear} connection $\nabla$ on $TM$ and coordinates $x_1, \cdots, x_n$ on $U \subseteq M$, then we have a frame for $TM|_U$ given by $\partial_1, \cdots, \partial_n$. So we again have
+\[
+ \d_E \partial_i = \Theta_i^k \otimes \partial_k,
+\]
+where $\Theta_i^k \in \Omega^1(U)$. We have a frame not just for the tangent bundle, but also for the cotangent bundle. So in these coordinates, we can write
+\[
+ \Theta_i^k = \Gamma_{\ell i}^k\; \d x^\ell,
+\]
+where $\Gamma_{\ell i}^k \in C^\infty(U)$. These $\Gamma_{\ell i}^k$ are known as the \term{Christoffel symbols}\index{$\Gamma_{\ell i}^k$}.
+
+In this notation, we have
+\begin{align*}
+ \nabla_{\partial_j} \partial_i &= \bra \d_E \partial_i, \partial_j\ket\\
+ &= \bra \Gamma_{\ell i}^k \;\d x^\ell \otimes \partial_k, \partial_j\ket\\
+ &= \Gamma^k_{ji} \partial_k.
+\end{align*}
+
+\subsection{Geodesics and parallel transport}
+One thing we can do with a connection is to define a geodesic as a path with ``no acceleration''.
+\begin{defi}[Geodesic]\index{geodesic}
+ Let $M$ be a manifold with a linear connection $\nabla$. We say that $\gamma: I \to M$ is a \emph{geodesic} if
+ \[
+ \D_t \dot{\gamma}(t) = 0.
+ \]
+\end{defi}
+
+A natural question to ask is if geodesics exist. This is a local problem, so we work in local coordinates. We try to come up with some ordinary differential equations that uniquely specify a geodesic, and then existence and uniqueness will be immediate. If we have a vector field $V \in J(\gamma)$, we can write it locally as
+\[
+ V = V^j \partial_j,
+\]
+then we have
+\[
+ \D_t V = \dot{V}^j \partial_j + V^j \nabla_{\dot{\gamma}(t)} \partial_j.
+\]
+We now want to write this in terms of Christoffel symbols. We put $\gamma = (\gamma^1, \cdots, \gamma^n)$ in local coordinates. Then using the chain rule, we have
+\begin{align*}
+ \D_t V &= \dot{V}^k \partial_k + V^j \dot{\gamma}^i \nabla_{\partial_i} \partial_j\\
+ &= (\dot{V}^k + V^j \dot{\gamma}^i \Gamma_{ij}^k)\partial_k.
+\end{align*}
+Recall that $\gamma$ is a geodesic if $\D_t \dot{\gamma} = 0$ on $I$. This holds iff we locally have
+\[
+ \ddot{\gamma}^k + \dot{\gamma}^i \dot{\gamma}^j \Gamma_{ij}^k = 0.
+\]
+As this is just a second-order ODE in $\gamma$, we get unique solutions locally given initial position and velocity.
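+
+As a quick example (standard, and consistent with what we will see later for $\R^n$): for the connection on $T\R^n$ with all Christoffel symbols zero, the geodesic equation reduces to
+\[
+ \ddot{\gamma}^k = 0,
+\]
+so the geodesics are exactly the constant-speed straight lines $\gamma(t) = p + tW$.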
+\begin{thm}\index{geodesic}
+ Let $\nabla$ be a linear connection on $M$, and let $W \in T_pM$. Then there exists a geodesic $\gamma: (-\varepsilon, \varepsilon) \to M$ for some $\varepsilon > 0$ such that $\gamma(0) = p$ and
+ \[
+ \dot{\gamma}(0) = W.
+ \]
+ Any two such geodesics agree on their common domain.
+\end{thm}
+
+More generally, we can talk about parallel vector fields.
+\begin{defi}[Parallel vector field]\index{parallel vector field}
+ Let $\nabla$ be a linear connection on $M$, and $\gamma: I \to M$ be a path. We say a vector field $V \in J(\gamma)$ along $\gamma$ is \emph{parallel} if $\D_t V(t) \equiv 0$ for all $t \in I$.
+\end{defi}
+What does this say? If we think of $\D_t$ as just the usual derivative, this tells us that the vector field $V$ is ``constant'' along $\gamma$ (of course it is not literally constant, since each $V(t)$ lives in a different vector space).
+
+\begin{eg}
+ A path $\gamma$ is a geodesic iff $\dot{\gamma}$ is parallel.
+\end{eg}
+The important result is the following:
+\begin{lemma}[Parallel transport]
+ Let $t_0 \in I$ and $\xi \in T_{\gamma(t_0)} M$. Then there exists a unique parallel vector field $V \in J(\gamma)$ such that $V(t_0) = \xi$. We call $V$ the \emph{parallel transport} of $\xi$ along $\gamma$.
+\end{lemma}
+
+\begin{proof}
+ Suppose first that $\gamma(I) \subseteq U$ for some coordinate chart $U$ with coordinates $x_1, \cdots, x_n$. Then $V\in J(\gamma)$ is parallel iff $\D_t V = 0$. We put
+ \[
+ V = \sum V^j(t) \frac{\partial}{\partial x^j}.
+ \]
+ Then we need
+ \[
+ \dot{V}^k + V^j \dot\gamma^i \Gamma_{ij}^k = 0.
+ \]
+ This is a first-order linear ODE in $V$ with initial condition given by $V(t_0) = \xi$, which has a unique solution.
+
+ The general result then follows by patching, since by compactness, the image of $\gamma$ can be covered by finitely many charts.
+\end{proof}
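+
+For instance (a standard sanity check), on $\R^n$ with the connection with vanishing Christoffel symbols, the parallel transport equation reduces to $\dot{V}^k = 0$. So the parallel transport of $\xi$ along \emph{any} curve is $V(t) \equiv \xi$ under the canonical identifications $T_q \R^n \cong \R^n$, and $P_{t_0 t_1}$ is the identity. The independence of the curve here is special, as we shall see.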
+
+Given this, we can define a map given by parallel transport:
+\begin{defi}[Parallel transport]\index{parallel transport}
+ Let $\gamma: I \to M$ be a curve. For $t_0, t_1$, we define the \emph{parallel transport map}
+ \[
+ P_{t_0 t_1}: T_{\gamma(t_0)}M \to T_{\gamma(t_1)}M
+ \]
+ given by $\xi \mapsto V_{\xi}(t_1)$.
+\end{defi}
+
+It is easy to see that this is indeed a linear map, since the equations for parallel transport are linear, and this has an inverse $P_{t_1t_0}$ given by the inverse path. So the connection $\nabla$ ``connects'' $T_{\gamma(t_0)}M$ and $T_{\gamma(t_1)}M$.
+
+Note that this connection depends on the curve $\gamma$ chosen! This problem is in general unfixable. Later, we will see that there is a special kind of connections known as \emph{flat connections} such that the parallel transport map only depends on the homotopy class of the curve, which is an improvement.
+
+\subsection{Riemannian connections}
+Now suppose our manifold $M$ has a Riemannian metric $g$. It is then natural to ask if there is a ``natural'' connection compatible with $g$.
+
+The requirement of ``compatibility'' in some sense says the product rule is satisfied by the connection. Note that saying this does require the existence of a metric, since we need one to talk about the product of two vectors.
+
+\begin{defi}[Metric connection]\index{linear connection!compatible}\index{metric connection}\index{linear connection!metric}
+ A linear connection $\nabla$ is \emph{compatible} with $g$ (or is a \emph{metric connection}) if for all $X, Y, Z \in \Vect(M)$,
+ \[
+ \nabla_X g(Y, Z) = g(\nabla_X Y, Z) + g(Y, \nabla_X Z).
+ \]
+ Note that the first term is just $X(g(Y, Z))$.
+\end{defi}
+We should view this formula as specifying that the product rule holds.
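+
+In local coordinates, taking $X = \partial_k$, $Y = \partial_i$ and $Z = \partial_j$, compatibility reads (a standard computation, not spelled out in the notes)
+\[
+ \partial_k g_{ij} = \Gamma^\ell_{ki} g_{\ell j} + \Gamma^\ell_{kj} g_{i\ell}.
+\]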
+
+We can alternatively formulate this in terms of the covariant derivative.
+\begin{lemma}
+ Let $\nabla$ be a connection. Then $\nabla$ is compatible with $g$ if and only if for all $\gamma: I \to M$ and $V, W \in J(\gamma)$, we have
+ \[
+ \frac{\d}{\d t}g(V(t), W(t)) = g(\D_t V(t), W(t)) + g(V(t), \D_t W(t)).\tag{$*$}
+ \]
+\end{lemma}
+
+\begin{proof}
+% Since this is a local question, we may assume that $\gamma(I) \subseteq U$ for some chart with coordinates $x_1, \cdots, x_n$. If $V$ is extendible, then we can find some $\hat{V}, \hat{W} \in \Vect(m)$ such that $\hat{V}|_{\gamma(t)} = V(t)$ and $\hat{W}|_{\gamma(t)} = W(t)$.
+%
+% So we have
+% \[
+% D_t V = \nabla_{\dot{\gamma}} \tilde{V}, \quad D_t W = \nabla_{\dot{\gamma}(t)} \tilde{W}.
+% \]
+% Then the RHS of $(*)$ is
+% \[
+% g(\nabla_{\gamma(t)} \tilde{V}, \tilde{W} (\gamma(t))) + g(\tilde{V}(\gamma(t)) \nabla_{\dot{\gamma}(t)} \tilde{W}) = \nabla_{\dot{\gamma}(t)} g(\tilde{V}, \tilde{W} = \frac{\d}{\d t} g(V(t), W(t)).
+% \]
+% On the other hand, if
+ Write it out explicitly in local coordinates.
+\end{proof}
+
+We have some immediate corollaries, where the connection is always assumed to be compatible.
+\begin{cor}
+ If $V, W$ are parallel along $\gamma$, then $g(V(t), W(t))$ is constant with respect to $t$.
+\end{cor}
+
+\begin{cor}
+ If $\gamma$ is a geodesic, then $|\dot{\gamma}|$ is constant.
+\end{cor}
+
+\begin{cor}
+ Parallel transport is an isometry.
+\end{cor}
+
+In general, on a Riemannian manifold, there can be many metric connections. To ensure uniqueness, we need to introduce a new constraint, known as the torsion. The definition itself is pretty confusing, but we will chat about it afterwards to explain why this is a useful definition.
+
+\begin{defi}[Torsion of linear connection]\index{torsion of linear connection}\index{linear connection!torsion}
+ Let $\nabla$ be a linear connection on $M$. The \emph{torsion} of $\nabla$ is defined by
+ \[
+ \tau(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y]
+ \]
+ for $X, Y \in \Vect(M)$.
+\end{defi}
+
+\begin{defi}[Symmetric/torsion free connection]\index{symmetric connection}\index{torsion-free connection}\index{linear connection!symmetric}\index{linear connection!torsion-free}
+ A linear connection is \emph{symmetric} or \emph{torsion-free} if $\tau(X, Y) = 0$ for all $X, Y$.
+\end{defi}
+
+\begin{prop}
+ $\tau$ is a tensor of type $(2, 1)$.
+\end{prop}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \tau(fX, Y) &= \nabla_{fX}Y - \nabla_Y(fX) - [fX, Y]\\
+ &= f\nabla_X Y - Y(f) X - f \nabla_Y X - f[X, Y] + Y(f) X\\
+ &= f(\nabla_X Y - \nabla_Y X - [X, Y])\\
+ &= f \tau(X, Y).
+ \end{align*}
+ So it is $C^\infty(M)$-linear in the first argument.
+
+ We also have $\tau(X, Y) = - \tau(Y, X)$ by inspection.
+\end{proof}
+
+What does being symmetric tell us? Consider the Christoffel symbols in some coordinate system $x_1, \cdots, x_n$. We then have
+\[
+ \left[\frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j}\right] = 0.
+\]
+So we have
+\begin{align*}
+ \tau\left(\frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j}\right) &= \nabla_{\partial_i} \partial_j - \nabla_{\partial_j} \partial_i\\
+ &= \Gamma_{ij}^k \partial_k - \Gamma^k_{ji} \partial_k.
+\end{align*}
+So we know a connection is symmetric iff the Christoffel symbols are symmetric in the lower indices, i.e.
+\[
+ \Gamma_{ij}^k = \Gamma_{ji}^k.
+\]
+Now the theorem is this:
+\begin{thm}
+ Let $M$ be a manifold with Riemannian metric $g$. Then there exists a unique torsion-free linear connection $\nabla$ compatible with $g$.
+\end{thm}
+
+The actual proof is unenlightening.
+
+\begin{proof}
+ In local coordinates, we write
+ \[
+ g = \sum g_{ij} \;\d x_i \otimes \d x_j.
+ \]
+ Then the connection is explicitly given by
+ \[
+ \Gamma^k_{ij} = \frac{1}{2} g^{k\ell} (\partial_i g_{j\ell} + \partial_j g_{i\ell} - \partial_\ell g_{ij}),
+ \]
+ where $g^{k\ell}$ is the inverse of $g_{ij}$.
+
+ We then check that it works.
+\end{proof}
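+
+\begin{eg}
+ As a sanity check of the formula (this computation is standard, but not from the lectures), take $\R^2 \setminus \{0\}$ with the Euclidean metric in polar coordinates,
+ \[
+ g = \d r \otimes \d r + r^2 \;\d \theta \otimes \d \theta.
+ \]
+ Then $g_{rr} = 1$, $g_{\theta\theta} = r^2$, $g^{rr} = 1$, $g^{\theta\theta} = r^{-2}$, and the formula gives
+ \[
+ \Gamma^r_{\theta\theta} = -r,\quad \Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = \frac{1}{r},
+ \]
+ with all other symbols vanishing. The geodesic equations become $\ddot{r} = r\dot{\theta}^2$ and $\ddot{\theta} = -\frac{2}{r}\dot{r}\dot{\theta}$, which are just the equations of straight lines written in polar coordinates.
+\end{eg}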
+
+\begin{defi}[Riemannian/Levi-Civita connection]\index{Riemannian connection}\index{Levi-Civita connection}
+ The unique torsion-free metric connection on a Riemannian manifold is called the \emph{Riemannian connection} or \emph{Levi-Civita connection}.
+\end{defi}
+
+\begin{eg}
+ Consider the really boring manifold $\R^n$ with the usual metric. We know that $T\R^n \cong \R^n \times \R^n$ is trivial, so we can give a trivial connection
+ \[
+ \d_{\R^n} \left(f \frac{\partial}{\partial x_i}\right) = \d f \otimes \frac{\partial}{\partial x_i}.
+ \]
+ In the $\nabla$ notation, we have
+ \[
+ \nabla_X \left(f \frac{\partial}{\partial x_i}\right) = X(f) \frac{\partial}{\partial x_i}.
+ \]
+ It is easy to see that this is a connection, that it is compatible with the metric, and that it is torsion-free. So this is the Riemannian connection on $\R^n$.
+\end{eg}
+This is not too exciting.
+\begin{eg}
+ Suppose $\phi: M \hookrightarrow \R^n$ is an embedded submanifold. This gives us a Riemannian metric on $M$ by pulling back
+ \[
+ g = \phi^* g_{\R^n}
+ \]
+ on $M$.
+
+ We also get a connection on $M$ as follows: suppose $X, Y \in \Vect(M)$. Locally, we know $X, Y$ extend to vector fields $\tilde{X}, \tilde{Y}$ on $\R^n$. We set
+ \[
+ \nabla_X Y = \pi (\bar\nabla_{\tilde{X}} \tilde{Y}),
+ \]
+ where $\bar\nabla$ is the Riemannian connection on $\R^n$ and $\pi: T_p\R^n \to T_pM$ is the orthogonal projection.
+
+ It is an exercise to check that this is a torsion-free metric connection on $M$.
+\end{eg}
+
+It is a (difficult) theorem by Nash that every manifold can be embedded in $\R^n$ such that the metric is the induced metric. So all metrics arise this way.
+
+\subsection{Curvature}
+The final topic to discuss is the curvature of a connection. We all know that $\R^n$ is flat, while $S^n$ is curved. How do we make this precise?
+
+We can consider parallel transporting a vector on $S^2$ counterclockwise along the following loop:
+\begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=1.5];
+
+ \draw (1.5, 0) arc(0:-180:1.5 and 0.3);
+ \draw [dashed] (1.5, 0) arc(0:180:1.5 and 0.3);
+
+ \draw (0, 1.5) arc(90:191:0.4 and 1.5);
+
+ \draw (0, 1.5) arc(90:-9:0.9 and 1.5);
+
+ \draw [mred, thick, -latex'] (-0.39, -0.29) -- +(0, 0.7);
+ \draw [mred!60!morange, thick, -latex'] (0.89, -0.24) -- +(0, 0.7);
+
+ \draw [mred!30!morange, -latex', thick] (0, 1.5) -- +(-0.7, 0.05);
+
+ \draw [morange, -latex', thick] (-0.39, -0.29) -- +(-0.7, 0.05);
+ \end{tikzpicture}
+\end{center} % improve picture
+We see that after the parallel transport around the loop, we get a \emph{different} vector. It is in this sense that the connection on $S^2$ is not flat.
+
+Thus, what we want is the following notion:
+\begin{defi}[Parallel vector field]\index{parallel vector field}
+ We say a vector field $V \in \Vect(M)$ is \emph{parallel} if $V$ is parallel along any curve in $M$.
+\end{defi}
+
+\begin{eg}
+ In $\R^2$, we can pick $\xi \in T_0\R^2 \cong \R^2$. Then setting $V(p) = \xi \in T_p \R^2 \cong \R^2$ gives a parallel vector field with $V(0) = \xi$.
+\end{eg}
+
+However, we cannot find a non-trivial parallel vector field on $S^2$.
+
+This motivates the question --- given a manifold $M$ and a $\xi \in T_p M$ non-zero, does there exist a parallel vector field $V$ on some neighbourhood of $p$ with $V(p) = \xi$?
+
+Naively, we would try to construct it as follows. Say $\dim M = 2$ with coordinates $x, y$. Put $p = (0, 0)$. Then we can transport $\xi$ along the line $\{y = 0\}$ to define $V(x, 0)$. Then for each $\alpha$, we parallel transport $V(\alpha, 0)$ along $\{x = \alpha\}$. So this determines $V(x, y)$.
+
+Now if we want this to work, then $V$ has to be parallel along any curve, and in particular for lines $\{y = \beta\}$ for $\beta \not= 0$. If we stare at it long enough, we figure out a necessary condition is
+\[
+ \nabla_{\frac{\partial}{\partial x_i}} \nabla_{\frac{\partial}{\partial x_j}} = \nabla_{\frac{\partial}{\partial x_j}} \nabla_{\frac{\partial}{\partial x_i}}.
+\]
+So the failure of these to commute measures the curvature. This definition in fact works for any vector bundle.
+
+The actual definition we will state will be slightly funny, but we will soon show afterwards that this is what we think it is.
+\begin{defi}[Curvature]\index{curvature}
+ The \emph{curvature} of a connection $\d_E: \Omega^0(E) \to \Omega^1(E)$ is the map
+ \[
+ F_E = \d_E \circ \d_E: \Omega^0(E) \to \Omega^2(E).
+ \]
+\end{defi}
+
+\begin{lemma}
+ $F_E$ is a tensor. In particular, $F_E \in \Omega^2(\End(E))$.
+\end{lemma}
+
+\begin{proof}
+ We have to show that $F_E$ is linear over $C^\infty(M)$. We let $f \in C^\infty(M)$ and $s \in \Omega^0(E)$. Then we have
+ \begin{align*}
+ F_E(fs) &= \d_E \d_E (fs)\\
+ &= \d_E(\d f \otimes s + f \d_E s)\\
+ &= \d^2 f \otimes s - \d f \wedge \d_E s + \d f \wedge \d_E s + f \d_E^2 s\\
+ &= f F_E(s) \qedhere
+ \end{align*}
+\end{proof}
+
+How do we think about this? Given $X, Y \in \Vect(M)$, consider
+\begin{align*}
+ F_E(X, Y) : \Omega^0(E) &\to \Omega^0(E)\\
+ F_E(X, Y)(s) &= (F_E(s))(X, Y)
+\end{align*}
+
+\begin{lemma}
+ We have
+ \[
+ F_E(X, Y)(s) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]}s.
+ \]
+ In other words, we have
+ \[
+ F_E(X, Y) = [\nabla_X, \nabla_Y] - \nabla_{[X, Y]}.
+ \]
+\end{lemma}
+This is what we were talking about, except that we have an extra term $\nabla_{[X, Y]}$, which vanishes in our previous motivating case, since the coordinate vector fields $\frac{\partial}{\partial x_i}$ and $\frac{\partial}{\partial x_j}$ always commute.
+
+\begin{proof}
+ We claim that if $\mu \in \Omega^1(E)$, then we have
+ \[
+ (\d_E \mu) (X, Y) = \nabla_X (\mu(Y)) - \nabla_Y (\mu(X)) - \mu([X, Y]).
+ \]
+ To see this, we let $\mu = \omega \otimes s$, where $\omega \in \Omega^1(M)$ and $s \in \Omega^0(E)$. Then we have
+ \[
+ \d_E \mu = \d \omega \otimes s - \omega \wedge \d_E s.
+ \]
+ So we know
+ \begin{align*}
+ (\d_E \mu)(X, Y) &= \d \omega(X, Y) \otimes s - (\omega \wedge \d_E s)(X, Y)\\
+ \intertext{By a result in the example sheet, this is equal to}
+ &= (X \omega(Y) - Y \omega(X) - \omega([X, Y])) \otimes s \\
+ &\quad\quad- \omega(X) \nabla_Y (s) + \omega(Y) \nabla_X(s)\\
+ &= X \omega(Y) \otimes s + \omega(Y) \nabla_X s \\
+ &\quad \quad- (Y \omega(X) \otimes s + \omega(X) \nabla_Y s) - \omega([X, Y]) \otimes s
+ \end{align*}
+ Then the claim follows, since
+ \begin{align*}
+ \mu([X, Y]) &= \omega([X, Y]) \otimes s\\
+ \nabla_X(\mu(Y)) &= \nabla_X(\omega(Y) s)\\
+ &= X \omega(Y) \otimes s + \omega(Y) \nabla_X s.
+ \end{align*}
+ Now to prove the lemma, we have
+ \begin{align*}
+ (F_E s)(X, Y) &= \d_E( \d_E s)(X, Y)\\
+ &= \nabla_X((\d_E s)(Y)) - \nabla_Y((\d_E s)(X)) - (\d_E s)([X, Y])\\
+ &= \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s. \qedhere
+ \end{align*}
+\end{proof}
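+
+In a local frame, the curvature has a useful closed form (a standard computation, not carried out in the notes). Writing $\d_E = \d + \Theta$ as before, and treating a local section as a column vector $\mathbf{s}$ of functions, we have $\d_E \xi = \d \xi + \Theta \wedge \xi$ for $\xi \in \Omega^1(E|_U)$, so
+\begin{align*}
+ F_E \mathbf{s} &= \d(\d \mathbf{s} + \Theta \mathbf{s}) + \Theta \wedge (\d \mathbf{s} + \Theta \mathbf{s})\\
+ &= \d \Theta\, \mathbf{s} - \Theta \wedge \d \mathbf{s} + \Theta \wedge \d \mathbf{s} + (\Theta \wedge \Theta) \mathbf{s}\\
+ &= (\d \Theta + \Theta \wedge \Theta)\mathbf{s}.
+\end{align*}
+So locally $F_E = \d \Theta + \Theta \wedge \Theta$, where the wedge product of matrices of forms includes matrix multiplication.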
+
+\begin{defi}[Flat connection]\index{flat connection}
+ A connection $\d_E$ is \emph{flat} if $F_E = 0$.
+\end{defi}
+
+Specializing to the Riemannian case, we have
+\begin{defi}[Curvature of metric]\index{curvature!of metric}
+ Let $(M, g)$ be a Riemannian manifold with metric $g$. The \emph{curvature} of $g$ is the curvature of the Levi-Civita connection, denoted by
+ \[
+ F_g \in \Omega^2(\End(TM)) = \Omega^0(\Lambda^2 T^* M \otimes TM \otimes T^*M).
+ \]
+\end{defi}
+
+\begin{defi}[Flat metric]\index{flat metric}
+ A Riemannian manifold $(M, g)$ is \emph{flat} if $F_g = 0$.
+\end{defi}
+
+Since flatness is a local property, it is clear that if a manifold is locally isometric to $\R^n$, then it is flat. What we want to prove is the converse --- if you are flat, then you are locally isometric to $\R^n$. For completeness, let's define what an isometry is.
+
+\begin{defi}[Isometry]\index{isometry}
+ Let $(M, g)$ and $(N, g')$ be Riemannian manifolds. We say $G \in C^\infty(M, N)$ is an \emph{isometry} if $G$ is a diffeomorphism and $G^* g' = g$, i.e.
+ \[
+ \D G|_p: T_p M \to T_{G(p)} N
+ \]
+ is an isometry for all $p \in M$.
+\end{defi}
+
+\begin{defi}[Locally isometric]\index{locally isometric}
+ A manifold $M$ is \emph{locally isometric} to $N$ if for all $p \in M$, there is a neighbourhood $U$ of $p$ and a $V \subseteq N$ and an isometry $G: U \to V$.
+\end{defi}
+
+\begin{eg}
+ The flat torus obtained by the quotient of $\R^2$ by $\Z^2$ is locally isometric to $\R^2$, but is not diffeomorphic (since it is not even homeomorphic).
+\end{eg}
+
+Our goal is to prove the following result.
+\begin{thm}
+ Let $M$ be a manifold with Riemannian metric $g$. Then $M$ is flat iff it is locally isometric to $\R^n$.
+\end{thm}
+
+One direction is obvious. Since flatness is a local property, we know that if $M$ is locally isometric to $\R^n$, then it is flat.
+
+To prove the remaining direction of the theorem, we need some preparation.
+\begin{prop}
+ Let $\dim M = n$ and $U \subseteq M$ open. Let $V_1, \cdots, V_n \in \Vect(U)$ be such that
+ \begin{enumerate}
+ \item For all $p \in U$, we know $V_1(p), \cdots, V_n(p)$ is a basis for $T_pM$, i.e.\ the $V_i$ are a frame.
+ \item $[V_i, V_j] = 0$, i.e.\ the $V_i$ form a frame that pairwise commutes.
+ \end{enumerate}
+ Then for all $p \in U$, there exists coordinates $x_1, \cdots, x_n$ on a chart $p \in U_p$ such that
+ \[
+ V_i = \frac{\partial}{\partial x_i}.
+ \]
+ Suppose that $g$ is a Riemannian metric on $M$ and the $V_i$ are orthonormal in $T_pM$. Then the map defined above is an isometry.
+\end{prop}
+
+\begin{proof}
+ We fix $p \in U$. Let $\Theta_i$ be the flow of $V_i$. From example sheet 2, we know that since the Lie brackets vanish, the $\Theta_i$ commute.
+
+ Recall that $(\Theta_i)_t(q) = \gamma(t)$, where $\gamma$ is the maximal integral curve of $V_i$ through $q$. Consider
+ \[
+ \alpha(t_1, \cdots, t_n) = (\Theta_n)_{t_n} \circ (\Theta_{n - 1})_{t_{n - 1}} \circ \cdots \circ (\Theta_1)_{t_1}(p).
+ \]
+ Since each $\Theta_i$ is defined on some small neighbourhood of $p$, if we just move a bit in each direction, we know that $\alpha$ will be defined for $(t_1, \cdots, t_n) \in B = \{|t_i| < \varepsilon\}$ for some small $\varepsilon$.
+
+ Our next claim is that
+ \[
+ \D\alpha\left(\frac{\partial}{\partial t_i}\right) = V_i
+ \]
+ whenever this is defined. Indeed, let $\mathbf{t} \in B$ and $f \in C^\infty(M, \R)$. Then we have
+ \begin{align*}
+ \D\alpha \left(\left.\frac{\partial}{\partial t_i}\right|_{\mathbf{t}}\right)(f) &= \left.\frac{\partial}{\partial t_i}\right|_{\mathbf{t}} f(\alpha(t_1, \cdots, t_n))\\
+ &= \left.\frac{\partial}{\partial t_i} \right|_{\mathbf{t}} f((\Theta_i)_{t_i} \circ (\Theta_n)_{t_n} \circ \cdots \circ \widehat{(\Theta_i)_{t_i}} \circ \cdots \circ (\Theta_1)_{t_1}(p))\\
+ &= V_i|_{\alpha(\mathbf{t})}(f).
+ \end{align*}
+ So done. In particular, we have
+ \[
+ \D \alpha|_0 \left(\left.\frac{\partial}{\partial t_i}\right|_0\right) = V_i(p),
+ \]
+ and this is a basis for $T_p M$. So $\D \alpha|_0: T_0 \R^n \to T_p M$ is an isomorphism. By the inverse function theorem, $\alpha$ is a local diffeomorphism, and in this chart, the claim tells us that
+ \[
+ V_i = \frac{\partial}{\partial x_i}.
+ \]
+ The second part with a Riemannian metric is clear.
+\end{proof}
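The fact that vanishing Lie brackets make the flows commute, which the proof relies on, can be checked concretely. Below is a minimal numerical sketch (Python, not part of the notes): the radial field $V_1 = x\partial_x + y \partial_y$ and the rotation field $V_2 = -y\partial_x + x\partial_y$ on $\R^2$ satisfy $[V_1, V_2] = 0$, and their flows (scaling and rotation) indeed commute.

```python
import math

# V1 = x d/dx + y d/dy (radial), V2 = -y d/dx + x d/dy (rotation);
# [V1, V2] = 0, so their flows should commute, as used in the proof.

def flow_scale(p, t):
    """Time-t flow of V1: scaling by e^t."""
    s = math.exp(t)
    return (s * p[0], s * p[1])

def flow_rotate(p, t):
    """Time-t flow of V2: rotation by angle t."""
    c, s = math.cos(t), math.sin(t)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

p = (1.0, 2.0)
a = flow_rotate(flow_scale(p, 0.3), 0.7)
b = flow_scale(flow_rotate(p, 0.7), 0.3)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
```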
+
+We now actually prove the theorem.
+\begin{proof}[Proof of theorem]
+ Let $(M, g)$ be a flat manifold. We fix $p \in M$. We let $x_1, \cdots, x_n$ be coordinates centered at $p$, say defined for $|x_i| < 1$. We need to construct orthonormal vector fields. To do this, we pick an orthonormal basis at a point, and parallel transport it around.
+
+ We let $e_1, \cdots, e_n$ be an orthonormal basis for $T_p M$. We construct vector fields $E_1, \cdots, E_n \in \Vect(U)$ by parallel transport. We first parallel transport along $(x_1, 0, \cdots, 0)$, which defines $E_i(x_1, 0, \cdots, 0)$; then parallel transport along the $x_2$ direction to obtain all $E_i(x_1, x_2, 0, \cdots, 0)$, etc., until the $E_i$ are defined on all of $U$. By construction, we have
+ \[
+ \nabla_k E_i = 0\tag{$*$}
+ \]
+ on $\{x_{k + 1} = \cdots = x_n = 0\}$.
+
+ We will show that the $\{E_i\}$ are orthonormal and $[E_i, E_j] = 0$ for all $i, j$. We claim that each $E_i$ is parallel, i.e.\ for any curve $\gamma$, we have
+ \[
+ \D_\gamma E_i = 0.
+ \]
+ It is sufficient to prove that
+ \[
+ \nabla_{j} E_i = 0
+ \]
+ for all $i, j$.
+
+ By induction on $k$, we show
+ \[
+ \nabla_{j} E_i = 0
+ \]
+ for $j \leq k$ on $\{x_{k + 1} = \cdots = x_n = 0\}$. The statement for $k = 1$ is already given by $(*)$. We assume the statement for $k$, so
+ \[
+ \nabla_j E_i = 0\tag{$A$}
+ \]
+ for $j \leq k$ and $\{x_{k + 1} = \cdots = x_n = 0\}$. For $j = k + 1$, we know that $\nabla_{k + 1} E_i = 0$ on $\{x_{k + 2} = \cdots = x_n = 0\}$ by $(*)$. So the only problem we have is for $j \leq k$ on $\{x_{k + 2} = \cdots = x_n = 0\}$.
+
+ By flatness of the Levi-Civita connection, we have
+ \[
+ \left[\nabla_{k + 1}, \nabla_j\right] = \nabla_{[\partial_{k + 1}, \partial_j]} = 0.
+ \]
+ So we know
+ \[
+ \nabla_{k + 1} \nabla_j E_i = \nabla_j \nabla_{k + 1} E_i = 0\tag{$B$}
+ \]
+ on $\{x_{k + 2} = \cdots = x_n = 0\}$. Now at $x_{k + 1} = 0$, we know $\nabla_j E_i$ vanishes by $(A)$. So it follows from parallel transport along the $x_{k + 1}$ direction that $\nabla_j E_i$ vanishes on $\{x_{k + 2} = \cdots = x_n = 0\}$.
+
+ As the Levi-Civita connection is compatible with $g$, we know that parallel transport is an isometry. So the inner product $g(E_i, E_j) = g(e_i, e_j) = \delta_{ij}$. So this gives an orthonormal frame at all points.
+
+ Finally, since the torsion vanishes, we know
+ \[
+ [E_i, E_j] = \nabla_{E_i} E_j - \nabla_{E_j} E_i = 0,
+ \]
+ as the $E_i$ are parallel. So we are done by the proposition.
+\end{proof}
+
+What does the curvature mean when it is non-zero? There are many answers to this, and we will only give one.
+
+\begin{defi}[Holonomy]\index{holonomy}
+ Consider a piecewise smooth curve $\gamma: [0, 1] \to M$ with $\gamma(0) = \gamma(1) = p$. Say we have a linear connection $\nabla$. Then we have a notion of parallel transport along $\gamma$.
+
+ The \emph{holonomy} of $\nabla$ around $\gamma$ is the map
+ \[
+ H: T_p M \to T_p M
+ \]
+ given by
+ \[
+ H(\xi) = V(1),
+ \]
+ where $V$ is the parallel transport of $\xi$ along $\gamma$.
+\end{defi}
+
+\begin{eg}
+ If $\nabla$ is compatible with a Riemannian metric $g$, then $H$ is an isometry.
+\end{eg}
+
+\begin{eg}
+ Consider $\R^n$ with the usual connection. If $\xi \in T_0\R^n$, then $H(\xi) = \xi$ for any such path $\gamma$. So the holonomy is trivial.
+\end{eg}
+
+\begin{eg}
+ Say $(M, g)$ is flat, and $p \in M$. We have seen that there exists a neighbourhood $U$ of $p$ such that $(U, g|_U)$ is isometric to $\R^n$. So if $\gamma([0, 1]) \subseteq U$, then $H = \id$.
+\end{eg}
+
+The curvature measures the extent to which this does not happen. Suppose we have coordinates $x_1, \cdots, x_n$ on some $(M, g)$. Consider $\gamma$ as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.6] (0, 0) -- (2, 0) node [right] {$(s, 0 , \cdots, 0)$};
+ \draw [->-=0.6] (2, 0) -- (2, 2) node [right] {$(s, t, \cdots, 0)$};
+ \draw [->-=0.6] (2, 2) -- (0, 2) node [left] {$(0, t, \cdots, 0)$};
+ \draw [->-=0.6] (0, 2) -- (0, 0) node [left] {$0$};
+ \end{tikzpicture}
+\end{center}
+Then we can Taylor expand to find
+\[
+ H = \id + F_g\left(\left.\frac{\partial}{\partial x_1}\right|_p, \left.\frac{\partial}{\partial x_2}\right|_p\right)st + O(s^2 t, st^2).
+\]
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/extremal_graph_theory.tex b/books/cam/III_M/extremal_graph_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1fbb32cb4e2e3c0353edcf38dc8f7d300de9a30b
--- /dev/null
+++ b/books/cam/III_M/extremal_graph_theory.tex
@@ -0,0 +1,1529 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2017}
+\def\nlecturer {A.\ G.\ Thomason}
+\def\ncourse {Extremal Graph Theory}
+
+\input{header}
+
+\newcommand\ind{\mathrm{ind}}
+
+\renewcommand\ex{\mathrm{ex}}
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+Tur\'an's theorem, giving the maximum size of a graph that contains no complete $r$-vertex subgraph, is an example of an extremal graph theorem. Extremal graph theory is an umbrella title for the study of how graph and hypergraph properties depend on the values of parameters. This course builds on the material introduced in the Part II Graph Theory course, which includes Tur\'an's theorem and also the Erd\"os--Stone theorem.
+
+The first few lectures will cover the Erd\"os--Stone theorem and stability. Then we shall treat Szemer\'edi's Regularity Lemma, with some applications, such as to hereditary properties. Subsequent material, depending on available time, might include: hypergraph extensions, the flag algebra method of Razborov, graph containers and applications.
+
+\subsubsection*{Pre-requisites}
+A knowledge of the basic concepts, techniques and results of graph theory, as afforded by the Part II Graph Theory course, will be assumed. This includes Tur\'an's theorem, Ramsey's theorem, Hall's theorem and so on, together with applications of elementary probability.
+}
+\tableofcontents
+
+\section{The \texorpdfstring{Erd\"os--Stone}{Erdos--Stone} theorem}
+The starting point of extremal graph theory is perhaps Tur\'an's theorem, which you hopefully learnt from the IID Graph Theory course. To state the theorem, we need the following preliminary definition:
+\begin{defi}[Tur\'an graph]\index{Tur\'an graph}
+ The \emph{Tur\'an graph} \term{$T_r(n)$} is the complete $r$-partite graph on $n$ vertices with class sizes $\lfloor n/r\rfloor$ or $\lceil n/r\rceil$. We write $t_r(n)$ for the number of edges in $T_r(n)$.\index{$t_r(n)$}
+\end{defi}
+
+The theorem then says
+\begin{thm}[Tur\'an's theorem]\index{Tur\'an's theorem}
+ Let $G$ be a graph with $|G| = n$, $e(G) \geq t_r(n)$ and $G \not \supseteq K_{r+1}$. Then $G = T_r(n)$.
+\end{thm}
+This is an example of an \emph{extremal theorem}. More generally, given a fixed graph $F$, we seek
+\[
+ \ex(n, F) = \max \{e(G): |G| = n, G \not\supseteq F\}.
+\]
+Tur\'an's theorem tells us $\ex(n, K_{r + 1}) = t_r(n)$. We cannot find a nice expression for the latter number, but we have
+\[
+ \ex(n, K_{r + 1}) = t_r(n) \approx \left(1 - \frac{1}{r}\right)\binom{n}{2}.
+\]
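For concreteness, $t_r(n)$ is easy to compute exactly from the class-size description in the definition. Here is a short computational sketch (Python, illustrative only, not part of the course):

```python
from math import comb

def turan_edges(n, r):
    """t_r(n): number of edges of the complete r-partite graph whose
    class sizes are floor(n/r) or ceil(n/r)."""
    q, s = divmod(n, r)   # s classes of size q + 1, r - s of size q
    return comb(n, 2) - s * comb(q + 1, 2) - (r - s) * comb(q, 2)

print(turan_edges(10, 3))          # t_3(10) = 33 (classes of sizes 4, 3, 3)
print((1 - 1 / 3) * comb(10, 2))   # the (1 - 1/r) * C(n, 2) estimate: 30.0
```

The exact value and the asymptotic estimate agree up to $O(n)$ terms, matching the approximation above.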
+Tur\'an's theorem is a rather special case. First of all, we actually know the exact value of $\ex(n, F)$. Moreover, there is a unique \term{extremal graph} realizing the bound. Both of these properties do not extend to other choices of $F$.
+
+By definition, if $e(G) > \ex(n, K_{r + 1})$, then $G$ contains a $K_{r + 1}$. The Erd\"os--Stone theorem tells us that as long as $|G| = n$ is big enough, this condition implies $G$ contains a much larger graph than $K_{r + 1}$.
+
+\begin{notation}\index{$K_r(t)$}
+ We denote by $K_r(t)$ the complete $r$-partite graph with $t$ vertices in each class.
+\end{notation}
+So $K_r(1) = K_r$ and $K_r(t) = T_r(rt)$.
+
+\begin{thm}[Erd\"os--Stone, 1946]\index{Erd\"os--Stone theorem}
+ Let $r \geq 1$ be an integer and $\varepsilon > 0$. Then there exists $d = d(r, \varepsilon)$ and $n_0 = n_0(r, \varepsilon)$ such that if $|G| = n \geq n_0$ and
+ \[
+ e(G) \geq \left(1 - \frac{1}{r} + \varepsilon \right) \binom{n}{2},
+ \]
+ then $G \supseteq K_{r + 1}(t)$, where $t = \lfloor d \log n\rfloor$.
+\end{thm}
+Note that we can remove $n_0$ from the statement simply by reducing $d$, since for sufficiently small $d$, whenever $n < n_0$, we have $\lfloor d \log n\rfloor = 0$.
+
+One corollary of the theorem, and a good way to think about the theorem, is that given numbers $r, \varepsilon, t$, whenever $|G| = n$ is sufficiently large, the inequality $e(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right) \binom{n}{2}$ implies $G \supseteq K_{r + 1}(t)$.
+
+To prove the Erd\"os--Stone theorem, a natural strategy is to try to throw away vertices of small degree, and so that we can bound the \emph{minimal degree} of the graph instead of the total number of edges. We will make use of the following lemma to do so:
+\begin{lemma}
+ Let $c, \varepsilon > 0$. Then there exists $n_1 = n_1(c, \varepsilon)$ such that if $|G| = n \geq n_1$ and $e(G) \geq (c + \varepsilon) \binom{n}{2}$, then $G$ has a subgraph $H$ where $\delta(H) \geq c |H|$ and $|H| \geq \sqrt{\varepsilon} n$.
+\end{lemma}
+
+\begin{proof}
+ The idea is that we can keep removing the vertex of smallest degree, and then we must eventually get the $H$ we want. Suppose this doesn't give us a suitable graph even when we are down to $\sqrt{\varepsilon}n$ vertices. That means we can find a sequence
+ \[
+ G = G_n \supseteq G_{n - 1} \supseteq G_{n - 2} \supseteq \cdots \supseteq G_s,
+ \]
+ where $s = \lfloor \varepsilon^{1/2}n \rfloor$, $|G_j| = j$ and the vertex in $G_j \setminus G_{j - 1}$ has degree $< cj$ in $G_j$.
+
+ We can then calculate
+ \begin{align*}
+ e(G_s) &> (c + \varepsilon) \binom{n}{2} - c \sum_{j = s + 1}^n j \\
+ &= (c + \varepsilon) \binom{n}{2} - c \left\{\binom{n+1}{2} - \binom{s + 1}{2}\right\} \\
+ &\sim (1 + c)\frac{\varepsilon n^2}{2}
+ \end{align*}
+ as $n$ gets large (since $c$ and $\varepsilon$ are fixed, and $s \sim \sqrt{\varepsilon} n$). In particular, this is $> \binom{s}{2} \sim \frac{\varepsilon n^2}{2}$. But $G_s$ only has $s$ vertices, so this is impossible.
+\end{proof}
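The greedy procedure in this proof, repeatedly deleting a minimum-degree vertex until the minimum degree condition holds, can be sketched as follows (a hypothetical Python helper, not from the notes; `adj` maps each vertex to its set of neighbours):

```python
def dense_subgraph(adj, c):
    """Greedily delete a minimum-degree vertex while its degree is
    below c * (current order); mirrors the proof's construction.
    `adj` is a dict: vertex -> set of neighbours (symmetric)."""
    live = set(adj)
    deg = {v: len(adj[v] & live) for v in live}
    while live:
        v = min(live, key=deg.get)
        if deg[v] >= c * len(live):
            return live          # minimum degree condition holds
        live.remove(v)
        for u in adj[v]:
            if u in live:
                deg[u] -= 1
    return live                  # empty: no such subgraph survived

# triangle with a pendant vertex; with c = 0.5 the pendant is deleted
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(dense_subgraph(adj, 0.5))  # the triangle {0, 1, 2}
```

The lemma guarantees that, under its edge-count hypothesis, this loop stops before the graph shrinks below $\sqrt{\varepsilon} n$ vertices.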
+
+Using this, we can reduce the Erd\"os--Stone theorem to a version that talks about the minimum degree instead.
+\begin{lemma}
+ Let $r \geq 1$ be an integer and $\varepsilon > 0$. Then there exists a $d_1 = d_1(r, \varepsilon)$ and $n_2 = n_2(r, \varepsilon)$ such that if $|G| = n \geq n_2$ and
+ \[
+ \delta(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right)n,
+ \]
+ then $G \supseteq K_{r + 1}(t)$, where $t = \lfloor d_1 \log n\rfloor$.
+\end{lemma}
+We first see how this implies the Erd\"os--Stone theorem:
+
+\begin{proof}[Proof of Erd\"os--Stone theorem]
+ Provided $n_0$ is large, say $n_0 > n_1\left(1 - \frac{1}{r} + \frac{\varepsilon}{2}, \frac{\varepsilon}{2}\right)$, we can apply the first lemma to $G$ to obtain a subgraph $H \subseteq G$ where $|H| > \sqrt{\frac{\varepsilon}{2}} n$, and $\delta(H) \geq \left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right) |H|$.
+
+ We can then apply our final lemma as long as $\sqrt{\frac{\varepsilon}{2}} n$ is big enough, and obtain $K_{r + 1}(t) \subseteq H \subseteq G$, with $t > \left\lfloor d_1(r, \varepsilon/2) \log \left(\sqrt{\frac{\varepsilon}{2}} n\right)\right\rfloor$.
+\end{proof}
+
+We can now prove the lemma.
+
+\begin{proof}[Proof of lemma]
+ We proceed by induction on $r$. If $r = 0$ or $\varepsilon \geq \frac{1}{r}$, the theorem is trivial. Otherwise, by the induction hypothesis, we may assume $G \supseteq K_r(T)$ for
+ \[
+ T = \left\lfloor \frac{2t}{\varepsilon r}\right\rfloor.
+ \]
+ Call it $K = K_r(T)$. This is always possible, as long as
+ \[
+ d_1(r, \varepsilon) < \frac{\varepsilon r}{2} d_1\left(r - 1, \frac{1}{r(r - 1)}\right).
+ \]
+ The exact form is not important. The crucial part is that $\frac{1}{r(r - 1)} = \frac{1}{r - 1} - \frac{1}{r}$, which is how we chose the $\varepsilon$ to put into $d_1(r - 1, \varepsilon)$.
+
+ Let $U$ be the set of vertices in $G - K$ having at least $\left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right)|K|$ neighbours in $K$. Calculating
+ \[
+ \left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right) |K| = \left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right) rT = (r - 1)T + \frac{\varepsilon r}{2}T \geq (r - 1)T + t,
+ \]
+ we see that every element in $U$ is joined to at least $t$ vertices in each class, and hence is joined to some $K_r(t) \subseteq K$.
+
+ So we have to show that $|U|$ is large. If so, then by a pigeonhole principle argument, it follows that we can always find enough vertices in $U$ so that adding them to $K$ gives a $K_{r + 1}(t)$.
+ \begin{claim}
+ There is some $c > 0$ (depending on $r$ and $\varepsilon$) such that for $n$ sufficiently large, we have
+ \[
+ |U| \geq cn.
+ \]
+ \end{claim}
+
+ The argument to establish these kinds of bounds is standard, and will be used repeatedly. We write $e(K, G - K)$ for the number of edges between $K$ and $G - K$. By the minimum degree condition, each vertex of $K$ sends at least $\left(1 - \frac{1}{r} + \varepsilon\right)n - |K|$ edges to $G - K$. Then we have two inequalities
+ \begin{align*}
+ |K| \left\{\left(1 - \frac{1}{r} + \varepsilon\right)n - |K|\right\} &\leq e(K, G - K) \\
+ &\leq |U||K| + (n - |U|) \left(1 - \frac{1}{r} + \frac{\varepsilon}{2}\right)|K|.
+ \end{align*}
+ Rearranging, this tells us
+ \[
+ \frac{\varepsilon n}{2} - |K| \leq |U| \left(\frac{1}{r} - \frac{\varepsilon}{2}\right).
+ \]
+ Now note that $|K| \sim \log n$, so for large $n$, the second term on the left is negligible, and it follows that $|U| \geq cn$ for some $c > 0$ depending only on $r$ and $\varepsilon$, proving the claim.
+
+ We now want to calculate the number of $K_r(t)$'s in $K$. To do so, we use the simple inequality
+ \[
+ \binom{n}{k} \leq \left(\frac{e n}{k}\right)^k.
+ \]
+ Then we have
+ \[
+ \# K_r(t) = \binom{T}{t}^r \leq \left(\frac{eT}{t}\right)^{rt} \leq \left(\frac{3e}{\varepsilon r}\right)^{rt} \leq \left(\frac{3e}{\varepsilon r}\right)^{r d_1 \log n} = n^{r d_1 \log (3e/\varepsilon r)}.
+ \]
+ Now if we pick $d_1$ sufficiently small, then this tells us $\#K_r(t)$ grows sublinearly with $n$. Since $|U|$ grows linearly, and $t$ grows logarithmically, it follows that for $n$ large enough, we have
+ \[
+ |U| \geq t\cdot \# K_r(t).
+ \]
+ Then by the pigeonhole principle, there must be a set $W \subseteq U$ of size $t$ joined to the same $K_r(t)$, giving a $K_{r + 1}(t)$.
+\end{proof}
+
+Erd\"os and Simonovits noticed that Erd\"os--Stone allows us to find $\ex(n, F)$ asymptotically for all $F$.
+
+\begin{thm}[Erd\"os--Simonovits]
+ Let $F$ be a fixed graph with chromatic number $\chi(F) = r + 1$. Then
+ \[
+ \lim_{n \to \infty} \frac{\ex(n, F)}{\binom{n}{2}} = 1 - \frac{1}{r}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Since $\chi(F) = r + 1$, we know $F$ cannot be embedded in an $r$-partite graph. So in particular, $F \not\subseteq T_r(n)$. So
+ \[
+ \ex(n, F) \geq t_r(n) \geq \left(1 - \frac{1}{r}\right)\binom{n}{2}.
+ \]
+ On the other hand, given any $\varepsilon > 0$, if $|G| = n$ and
+ \[
+ e(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right) \binom{n}{2},
+ \]
+ then by the Erd\"os--Stone theorem, we have $G \supseteq K_{r + 1}(|F|) \supseteq F$ provided $n$ is large. So we know that for every $\varepsilon > 0$, we have
+ \[
+ \limsup \frac{\ex(n, F)}{\binom{n}{2}} \leq 1 - \frac{1}{r} + \varepsilon.
+ \]
+ So we are done.
+\end{proof}
+If $r > 1$, then this gives us a genuine asymptotic expression for $\ex(n, F)$. However, if $r = 1$, i.e.\ $F$ is a bipartite graph, then this only tells us $\ex(n, F) = o\left(\binom{n}{2}\right)$, but doesn't tell us about the true asymptotic behaviour.
+
+To end the chapter, we show that $t \sim \log n$ is indeed the best we can do in the Erd\"os--Stone theorem, by actually constructing some graphs.
+\begin{thm}
+ Given $r \in \N$, there exists $\varepsilon_r > 0$ such that if $0 < \varepsilon < \varepsilon_r$, then there exists $n_3(r, \varepsilon)$ so that if $n > n_3$, there exists a graph $G$ of order $n$ such that
+ \[
+ e(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right) \binom{n}{2}
+ \]
+ but $K_{r + 1}(t) \not\subseteq G$, where
+ \[
+ t = \left\lceil \frac{3 \log n}{\log 1/\varepsilon}\right\rceil.
+ \]
+\end{thm}
+So this tells us we cannot get better than $t \sim \log n$, and this gives us some bound on $d(r, \varepsilon)$.
+
+\begin{proof}
+ Start with the Tur\'an graph $T_r(n)$. Let $m = \left\lceil \frac{n}{r} \right\rceil$. Then there is a class $W$ of $T_r(n)$ of size $m$. The strategy is to add $\varepsilon \binom{n}{2}$ edges inside $W$ to obtain $G$ such that $G[W]$ (the subgraph of $G$ formed by $W$) does not contain a $K_2(t)$. It then follows that $G \not\supseteq K_{r + 1}(t)$, but also
+ \[
+ e(G) \geq t_r(n) + \varepsilon \binom{n}{2} \geq \left(1 - \frac{1}{r} +\varepsilon \right) \binom{n}{2},
+ \]
+ as desired.
+
+ To see that such an addition is possible, we choose edges inside $W$ independently with probability $p$, to be determined later. Let $X$ be the number of edges chosen, and $Y$ be the number of $K_2(t)$ created. If $\E[X - Y] > \varepsilon \binom{n}{2}$, then this means there is an actual choice of edges with $X - Y > \varepsilon \binom{n}{2}$. We then remove an edge from each $K_2(t)$ to leave a $K_2(t)$-free graph with at least $X - Y > \varepsilon \binom{n}{2}$ edges.
+
+ So we want to actually compute $\E[X - Y]$. Seeing that asymptotically, $m$ is much larger than $t$, we have
+ \begin{align*}
+ \E[X - Y] &= \E[X] - \E[Y] \\
+ &= p\binom{m}{2} - \frac{1}{2}\binom{m}{t} \binom{m - t}{t} p^{t^2}\\
+ &\sim \frac{1}{2} pm^2 - \frac{1}{2} m^{2t} p^{t^2}\\
+ &= \frac{1}{2} pm^2 (1 - m^{2t - 2} p^{t^2 - 1})\\
+ &= \frac{1}{2} pm^2 (1 - (m^2 p^{t + 1})^{t - 1}).
+ \end{align*}
+ We pick $p = 3 \varepsilon r^2$ and $\varepsilon_r = (3r^2)^{-6}$. Then $p < \varepsilon^{5/6}$, and since $\varepsilon^t \leq n^{-3}$ by the choice of $t$, we have
+ \[
+ m^2 p^{t + 1} < m^2 \varepsilon^{\frac{5}{6}(t + 1)} \leq m^2 n^{-5/2} \leq n^{-1/2} < \frac{1}{2}.
+ \]
+ Hence, we find that
+ \[
+ \E[X - Y] \geq \frac{1}{4} p m^2 > \varepsilon \binom{n}{2}.\qedhere
+ \]
+\end{proof}
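The first-moment quantity $\E[X - Y]$ in this proof is easy to experiment with numerically. The sketch below (Python, illustrative only) evaluates the exact formula from the proof, before any asymptotics:

```python
from math import comb

def expected_gain(m, t, p):
    """E[X - Y] from the proof: X = number of chosen edges inside W,
    so E[X] = p * C(m, 2); Y = number of copies of K_2(t), and each of
    the C(m, t) * C(m - t, t) / 2 choices of two disjoint t-sets is
    complete with probability p^(t^2)."""
    ex_edges = p * comb(m, 2)
    ey_copies = comb(m, t) * comb(m - t, t) / 2 * p ** (t * t)
    return ex_edges - ey_copies

print(expected_gain(100, 4, 0.1))   # nearly all chosen edges survive
print(expected_gain(10, 2, 0.9))    # too dense: deletions dominate
```

This illustrates the trade-off in the deletion method: for suitable $p$ the expected loss $\E[Y]$ is negligible next to $\E[X]$, while for $p$ too large it overwhelms it.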
+
+Let's summarize what we have got so far. Let $t(n, r, \varepsilon)$ be the largest value of $t$ such that a graph of order $n$ with $\left(1 - \frac{1}{r} + \varepsilon \right) \binom{n}{2}$ edges is guaranteed to contain $K_{r + 1}(t)$. Then we know $t(n, r, \varepsilon)$ grows with $n$ like $\log n$. If we are keen, we can ask how $t$ depends on the other terms $r$ and $\varepsilon$. We just saw that
+\[
+ t(n, r, \varepsilon) \leq \frac{3 \log n}{\log 1/\varepsilon}.
+\]
+So we see that the dependence on $\varepsilon$ is at most logarithmic, and in 1976, Bollob\'as, Erd\"os and Simonovits showed that
+\[
+ t(n, r, \varepsilon) \geq c \frac{\log n}{r \log 1/\varepsilon}
+\]
+for some $c$. Thus, $t(n, r, \varepsilon)$ also grows (inverse) logarithmically with $\varepsilon$.
+
+Curiously, our original upper bound has no dependence on $r$, while the lower bound of Bollob\'as, Erd\"os and Simonovits does. In 1978, Chv\'atal and Szemer\'edi showed that
+\[
+ t \geq \frac{1}{500} \frac{\log n}{\log 1/\varepsilon}
+\]
+if $n$ is large. So we know there actually is no dependence on $r$.
+
+We can also ask about the containment of less regular graphs than $K_r(t)$. In 1994, Bollob\'as and Kohayakawa adapted the proof of the Erd\"os--Stone theorem to show that there is a constant $c$ such that for any $0 < \gamma < 1$, if $e(G) \geq \left(1 - \frac{1}{r} + \varepsilon\right) \binom{n}{2}$ and $n$ is large, then we can find a complete $(r + 1)$-partite subgraph with class sizes
+\[
+ \left\lfloor c \gamma \frac{\log n}{r \log 1/\varepsilon}\right\rfloor,\quad \left\lfloor c \gamma \frac{\log n}{\log r}\right\rfloor \text{ ($r - 1$ times)},\quad \left\lfloor c \varepsilon^{\frac{3}{2} - \frac{\gamma}{2}} n^{1 - \gamma}\right\rfloor.
+\]
+We might wonder if we can make similar statements for hypergraphs. It turns out all the analogous questions are open. Somehow graphs are much easier.
+
+\section{Stability}
+An extremal problem is \term{stable} if every near-optimal extremal example looks close to some (unique) optimal example. Stability does hold for the $\ex(n, F)$ problem.
+
+\begin{thm}
+ Let $t, r \geq 2$ be fixed, and suppose $|G| = n$, $G \not\supseteq K_{r + 1}(t)$ and
+ \[
+ e(G) = \left(1 - \frac{1}{r} + o(1)\right) \binom{n}{2}.
+ \]
+ Then
+ \begin{enumerate}
+ \item There exists $T_r(n)$ on $V(G)$ with $|E(G) \Delta E(T_r(n))| = o(n^2)$.
+ \item $G$ contains an $r$-partite subgraph with $\left(1 - \frac{1}{r} + o(1)\right) \binom{n}{2}$ edges.
+ \item $G$ contains an $r$-partite subgraph with minimum degree $\left(1 - \frac{1}{r} + o(1)\right)n$.
+ \end{enumerate}
+\end{thm}
+
+The outline of the proof is as follows: we first show that the three statements are equivalent, so that we only have to prove (ii). To prove (ii), we first use Erd\"os--Stone to find some $K_r(s)$ living inside $G$ (for some $s$), which is a good $r$-partite subgraph of $G$, but with not quite the right number of vertices. We now look at the remaining vertices. For each vertex $v$, find the $C_i$ such that $v$ is connected to as few vertices in $C_i$ as possible, and enlarge $C_i$ to include that as well. After doing this, $C_i$ will have quite a few edges between its vertices, and we throw those away. The hope is that there are only $o(n^2)$ edges to throw away, so that we still have $\left(1 - \frac{1}{r} + o(1)\right) \binom{n}{2}$ edges left. It turns out as long as we throw out some ``bad'' vertices, we will be fine.
+
+\begin{proof}
+ We first prove (ii), which looks the weakest. We will then argue that the others follow from this (and in fact they are equivalent).
+
+ By Erd\"os--Stone, we can always find some $K_r(s) = K \subseteq G$ for some $s = s(n) \to \infty$. We can, of course, always choose $s(n)$ so that $s(n) \leq \log n$, which will be helpful for our future analysis. We let $C_i$ be the $i$th class of $K$.
+
+ \begin{itemize}
+ \item By throwing away $o(n)$ vertices, we may assume that
+ \[
+ \delta(G) \geq \left(1 - \frac{1}{r} + o(1)\right) n
+ \]
+ \item Let $X$ be the set of all vertices joined to at least $t$ vertices in each $C_i$. Note that there are $\binom{s}{t}^r$ many copies of $K_r(t)$'s in $K$. So by the pigeonhole principle, we must have $|X| < t \binom{s}{t}^r = o(n)$, or else we can construct a $K_{r + 1}(t)$ inside $G$.
+ \item Let $Y$ be the set of vertices joined to fewer than $(r - 1) s - \frac{s}{2t} + t$ vertices of $K$. By our bound on $\delta(G)$, we certainly have
+ \[
+ e(K, G - K) \geq s(r - 1 + o(1))n.
+ \]
+ On the other hand, every element of $G - K$ is certainly joined to at most $(r-1)s + t$ vertices of $K$, since $X$ was thrown away. So we have
+ \[
+ ((r - 1)s + t)(n - |Y|) + |Y| \left((r - 1) s - \frac{s}{2t} + t\right) \geq e(K, G - K).
+ \]
+ So we deduce that $|Y| = o(n)$. Throw $Y$ away.
+ \end{itemize}
+ Let $V_i$ be the set of vertices in $G \setminus K$ joined to fewer than $t$ of $C_i$. It is now enough to show that $e(G[V_i]) = o(n^2)$. We can then throw away the edges in each $V_i$ to obtain an $r$-partite graph.
+
+ Suppose on the contrary that $e(G[V_j]) \geq \varepsilon n^2$, say, for some $j$. Then Erd\"os--Stone with $r = 1$ says we have $G[V_j] \supseteq K_2(t)$. Each vertex of $K_2(t)$ has at least $s - \frac{s}{2t} + 1$ neighbours in each other $C_i$, for $i \not= j$. So we see that $K_2(t)$ has at least $s - 2t \left(\frac{s}{2t} - 1\right) > t$ common neighbours in each $C_i$, giving $K_{r + 1}(t) \subseteq G$.
+
+ It now remains to show that (i) and (iii) follow from (ii). The details are left for the example sheet, but we sketch out the rough argument. (ii) $\Rightarrow$ (i) is straightforward, since an $r$-partite graph with that many edges is already pretty close to being a Tur\'an graph.
+
+ To deduce (iii), since we can assume $\delta(G) \geq \left(1 - \frac{1}{r} + o(1)\right) n$, we note that if (iii) did not hold, then we have $\varepsilon n$ vertices of degree $\leq \left(1 - \frac{1}{r} - \varepsilon\right)n$, and their removal leaves a graph of order $(1 - \varepsilon) n$ with at least
+ \[
+ \left(1 - \frac{1}{r} + o(1)\right)\binom{n}{2} - \varepsilon n \left(1 - \frac{1}{r} - \varepsilon\right)n > \left(1 - \frac{1}{r} + \varepsilon^2\right) \binom{(1 - \varepsilon)n}{2},
+ \]
+ which by Erd\"os--Stone would contain $K_{r + 1}(t)$. So we are done.
+\end{proof}
+
+\begin{cor}
+ Let $\chi(F) = r + 1$, and let $G$ be extremal for $F$, i.e.\ $G \not\supseteq F$ and $e(G) = \ex(|G|, F)$. Then $\delta(G) = \left(1 - \frac{1}{r} + o(1)\right)n$.
+\end{cor}
+
+\begin{proof}
+ By our asymptotic bound on $\ex(n, F)$, we know $\delta(G)$ cannot be greater than $\left(1 - \frac{1}{r} + o(1)\right)n$. For the lower bound, suppose for contradiction that there is a vertex $v \in G$ with $d(v) \leq \left(1 - \frac{1}{r} - \varepsilon\right) n$ for some fixed $\varepsilon > 0$.
+
+ We now apply version (iii) of the stability theorem to obtain an $r$-partite subgraph $H$ of $G$. We can certainly pick $|F|$ vertices in the same part of $H$, which are joined to $m = \left(1 - \frac{1}{r} + o(1)\right)n$ common neighbours. Form $G^*$ from $G - v$ by adding a new vertex $u$ joined to these $m$ vertices. Then $e(G^*) > e(G)$, and so the maximality of $G$ entails $G^* \supseteq F$.
+
+ Pick a copy of $F$ in $G^*$. This must involve $u$, since $G$, and hence $G - v$, does not contain a copy of $F$. The copy uses at most $|F| - 1$ vertices other than $u$, so some $x$ amongst the $|F|$ vertices mentioned is not used. But $x$ is joined to all $m$ of $u$'s neighbours, so we can replace $u$ with $x$ to obtain a copy of $F$ in $G$, a contradiction.
+\end{proof}
+
+Sometimes, stability and bootstrapping can give exact results.
+
+\begin{eg}
+ $\ex(n, C_5) = t_2(n)$. In fact, $\ex(n, C_{2k + 1}) = t_2(n)$ if $n$ is large enough.
+\end{eg}
+
+\begin{thm}[Simonovits]
+ Let $F$ be $(r + 1)$-edge-critical\index{$r$-edge-critical}, i.e.\ $\chi(F) = r + 1$ but $\chi(F \setminus e) = r$ for every edge $e$ of $F$. Then for large $n$,
+ \[
+ \ex(n, F) = t_r(n),
+ \]
+ and the only extremal graph is $T_r(n)$.
+\end{thm}
+So we get a theorem like Tur\'an's theorem for large $n$.
+
+\begin{proof}
+ Let $G$ be an extremal graph for $F$ and let $H$ be an $r$-partite subgraph with
+ \[
+ \delta(H) = \left(1 - \frac{1}{r} + o(1)\right)n.
+ \]
+ Note that $H$ necessarily has $r$ parts of size $\left(\frac{1}{r} + o(1)\right)n$ each. Assign each of the $o(n)$ vertices in $V(G) \setminus V(H)$ to a class where it has fewest neighbours.
+
+ Suppose some vertex $v$ has $\varepsilon n$ neighbours in its own class. Then, since $v$ was assigned to a class where it has fewest neighbours, it has at least $\varepsilon n$ neighbours in each class. Pick $\varepsilon n$ neighbours of $v$ in each class of $H$. These neighbours span a graph with at least $\left(1 -\frac{1}{r} + o(1)\right) \binom{r\varepsilon n}{2}$ edges. So by Erd\"os--Stone (or arguing directly), they span $F - w$ for some vertex $w$ (contained in $K_r(|F|)$). Hence $G \supseteq F$, contradiction.
+
+ Thus, each vertex of $G$ has only $o(n)$ vertices in its own class. So it is joined to all but $o(n)$ vertices in every other class.
+
 Suppose some class of $G$ contains an edge $xy$. Pick a set $Z$ of $|F|$ vertices in that class with $\{x, y\} \subseteq Z$. Now $Z$ has $\left(\frac{1}{r} + o(1)\right)n$ common neighbours in each class, so (by Erd\"os--Stone or directly) these common neighbours span a $K_{r - 1}(|F|)$. But together with $Z$, we have a $K_r(|F|)$ but with an edge added inside a class. But by our condition that $F \setminus e$ is $r$-partite for any $e$, this subgraph contains an $F$. This is a contradiction.
+
+ So we conclude that no class of $G$ contains an edge. So $G$ itself is $r$-partite, but the $r$-partite graph with most edges is $T_r(n)$, which does not contain $F$. So $G = T_r(n)$.
+\end{proof}
+
+\section{Supersaturation}
+Suppose we have a graph $G$ with $e(G) > \ex(n, F)$. Then by definition, there is at least one copy of $F$ in $G$. But can we tell how many copies we have? It turns out this is not too difficult to answer, and in fact we can answer the question for any hypergraph.
+
+Recall that an \term{$\ell$-uniform hypergraph} is a pair $(V, E)$, where $E \subseteq V^{(\ell)}$. We can define the extremal function of a class of hypergraphs $\mathcal{F}$ by
+\[
+ \ex(n, \mathcal{F}) = \max \{e(G): |G| = n, \text{$G$ contains no $f \in \mathcal{F}$}\}.
+\]
+Again we are interested in the limiting density
+\[
+ \pi(\mathcal{F}) = \lim_{n \to \infty} \frac{\ex(n, \mathcal{F})}{\binom{n}{\ell}}.
+\]
It is an easy exercise to show that this limit always exists. We computed it explicitly for graphs previously, but we don't really need the Erd\"os--Stone theorem just to show that the limit exists.
+
+The basic theorem in supersaturation is
+\begin{thm}[Erd\"os--Simonovits]
+ Let $H$ be some $\ell$-uniform hypergraph. Then for all $\varepsilon > 0$, there exists $\delta(H, \varepsilon)$ such that every $\ell$-uniform hypergraph $G$ with $|G| = n$ and
+ \[
+ e(G) > (\pi(H) + \varepsilon) \binom{n}{\ell}
+ \]
 contains at least $\lfloor \delta n^{|H|}\rfloor$ copies of $H$.
+\end{thm}
Note that $n^{|H|}$ is approximately the number of ways to choose $|H|$ vertices from $n$, so it is the order of the number of possible candidates for copies of $H$.
+
+\begin{proof}
+ For each $m$-set $M \subseteq V(G)$, we let $G[M]$ be the sub-hypergraph induced by these vertices. Let the number of subsets $M$ with $e(G[M]) > \left(\pi(H) + \frac{\varepsilon}{2}\right) \binom{m}{\ell}$ be $\eta \binom{n}{m}$. Then we can estimate
+ \begin{multline*}
+ \left(\pi(H) + \varepsilon\right) \binom{n}{\ell} \leq e(G) = \frac{\sum_M e(G[M])}{\binom{n - \ell}{m - \ell}} \\
+ \leq \frac{\eta \binom{n}{m} \binom{m}{\ell} + (1 - \eta) \binom{n}{m} \left(\pi(H) + \frac{\varepsilon}{2}\right) \binom{m}{\ell}}{\binom{n - \ell}{m - \ell}}.
+ \end{multline*}
 So if $n > m$, then
 \[
 \pi(H) + \varepsilon \leq \eta + (1 - \eta) \left(\pi(H) + \frac{\varepsilon}{2}\right).
 \]
+ So
+ \[
+ \eta \geq \frac{\frac{\varepsilon}{2}}{1 - \pi(H) - \frac{\varepsilon}{2}} > 0.
+ \]
+ The point is that it is positive, and that's all we care about.
+
+ We pick $m$ large enough so that
+ \[
+ \ex(m, H) < \left(\pi(H) + \frac{\varepsilon}{2}\right) \binom{m}{\ell}.
+ \]
+ Then $H \subseteq G[M]$ for at least $\eta \binom{n}{m}$ choices of $M$. Hence $G$ contains at least
+ \[
+ \frac{\eta\binom{n}{m}}{\binom{n - |H|}{m - |H|}} = \frac{\eta\binom{n}{|H|}}{\binom{m}{|H|}}
+ \]
+ copies of $H$, and we are done. (Our results hold when $n$ is large enough, but we can choose $\delta$ small enough so that $\delta n^{|H|} < 1$ when $n$ is small)
+\end{proof}
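The equality of the two expressions in the last step is an instance of double counting: both sides of
\[
 \binom{n}{m} \binom{m}{|H|} = \binom{n}{|H|} \binom{n - |H|}{m - |H|}
\]
count the pairs $(S, M)$ with $S \subseteq M \subseteq V(G)$, $|S| = |H|$ and $|M| = m$.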
+
Let $k_p(G)$ be the number of copies of $K_p$ in $G$. Ramsey's theorem tells us $k_p(G) + k_p(\bar{G}) > 0$ if $|G|$ is large (where $\bar{G}$ is the complement of $G$). In the simplest case where $p = 3$, it turns out we can count these monochromatic triangles exactly.
+\begin{thm}[Lorden, 1961]
+ Let $G$ have degree sequence $d_1, \ldots, d_n$. Then
+ \[
+ k_3(G) + k_3(\bar{G}) = \binom{n}{3} - (n - 2) e(G) + \sum_{i = 1}^n \binom{d_i}{2}.
+ \]
+\end{thm}
+
+\begin{proof}
+ The number of paths of length $2$ in $G$ and $\bar{G}$ is precisely
+ \[
+ \sum_{i = 1}^n \left(\binom{d_i}{2} + \binom{n - 1 - d_i}{2}\right) = 2 \sum_{i = 1}^n \binom{d_i}{2} - 2 (n - 2) e(G) + 3 \binom{n}{3},
+ \]
 since to find a path of length $2$, we pick the middle vertex and then pick the two edges. A complete or empty $K_3$ contains $3$ such paths; other sets of three vertices contain exactly $1$ such path. Hence
+ \[
+ \binom{n}{3} + 2 (k_3(G) + k_3(\bar{G})) = \text{number of paths of length $2$}.\qedhere
+ \]
+\end{proof}
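As a quick sanity check, take $G = C_5$, which is self-complementary: here $n = 5$, $e(G) = 5$ and every $d_i = 2$, so the formula gives
\[
 k_3(C_5) + k_3(\bar{C}_5) = \binom{5}{3} - 3 \cdot 5 + 5 \binom{2}{2} = 10 - 15 + 5 = 0,
\]
in agreement with the fact that $C_5$ and $\bar{C}_5 \cong C_5$ are both triangle-free.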
+\begin{cor}[Goodman, 1959]
+ We have
+ \[
+ k_3(G) + k_3(\bar{G}) \geq \frac{1}{24} n(n - 1)(n - 5).
+ \]
+\end{cor}
+
+In particular, the Ramsey number of a triangle is at most $6$.
+
+\begin{proof}
+ Let $m = e(G)$. Then
+ \[
+ k_3(G) + k_3(\bar{G}) \geq \binom{n}{3} - (n - 2) m + n \binom{2m/n}{2}. % Cauchy--Schwarz
+ \]
+ Then minimize over $m$.
+\end{proof}
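Carrying out the minimization explicitly: since $n \binom{2m/n}{2} = \frac{2m^2}{n} - m$, the bound reads
\[
 k_3(G) + k_3(\bar{G}) \geq \binom{n}{3} - (n - 1) m + \frac{2m^2}{n},
\]
a quadratic in $m$ minimized at $m = \frac{n(n - 1)}{4}$, where it takes the value
\[
 \binom{n}{3} - \frac{n(n - 1)^2}{8} = \frac{n(n - 1)}{24}\left(4(n - 2) - 3(n - 1)\right) = \frac{1}{24} n(n - 1)(n - 5).
\]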
+
This shows the minimum density of a monochromatic $K_3$ in a red/blue colouring of $K_n$ (for $n$ large) is $\sim \frac{1}{4}$. But if we colour the edges randomly, each red or blue with probability $\frac{1}{2}$ independently, then $\frac{1}{4}$ is exactly the probability that a given triangle is monochromatic. So this minimum density is achieved by a ``random colouring''. Recall also that the best bounds on the Ramsey numbers we have are obtained by random colourings. So we might think the best way of colouring, if we want to minimize the number of monochromatic cliques, is to do so randomly.
+
However, this is not true. While we do not know what the minimum density of monochromatic $K_4$'s in a red/blue colouring of $K_n$ is, it is already known to be $< \frac{1}{33}$ (while $\frac{1}{32}$ is what we expect from a random colouring). It is also known by flag algebras to be $> \frac{1}{35}$. So we are not very far off.
+
+\begin{cor}
+ For $m = e(G)$ and $n = |G|$, we have
+ \[
+ k_3(G) \geq \frac{m}{3n}(4m - n^2).
+ \]
+\end{cor}
+
+\begin{proof}
+ By Lorden's theorem, we know
+ \[
+ k_3(G) + k_3(\bar{G}) = \binom{n}{3} - (n - 2) e(\bar{G}) + \sum\binom{\bar{d}_i}{2},
+ \]
+ where $\bar{d}_i$ is the degree sequence in $\bar{G}$. But
+ \[
+ 3 k_3(\bar{G}) \leq \sum \binom{\bar{d}_i}{2},
+ \]
 since the sum counts the number of paths of length $2$ in $\bar{G}$, and each triangle in $\bar{G}$ contains $3$ of them. So we find that
 \[
 k_3(G) \geq \binom{n}{3} - (n - 2)\bar{m} + \frac{2}{3} \sum \binom{\bar{d}_i}{2} \geq \binom{n}{3} - (n - 2)\bar{m} + \frac{2}{3}n \binom{2 \bar{m}/n}{2},
 \]
 using convexity in the second step, where $\bar{m} = e(\bar{G}) = \binom{n}{2} - m$. Substituting in gives the result.
+\end{proof}
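Expanding $\binom{2\bar{m}/n}{2} = \frac{2\bar{m}^2}{n^2} - \frac{\bar{m}}{n}$ and substituting $\bar{m} = \binom{n}{2} - m$, one can check that in fact
\[
 \binom{n}{3} - (n - 2)\bar{m} + \frac{2}{3} n \binom{2\bar{m}/n}{2} = \frac{m}{3n}(4m - n^2)
\]
exactly: the constant terms cancel, the terms linear in $m$ combine to $-\frac{mn}{3}$, and the quadratic term is $\frac{4m^2}{3n}$.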
+Observe that equality is almost never attained. It is attained only for regular graphs with no subgraphs looking like
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0.5, 0.866) {};
+
+ \draw (0, 0) -- (1, 0);
+ \end{tikzpicture}
+\end{center}
+So non-adjacency is an equivalence relation, so the graph is complete multi-partite and regular. Thus it is $T_r(n)$ with $r \mid n$.
+
+\begin{thm}
+ Let $G$ be a graph. For any graph $F$, let $i_F(G)$ be the number of induced copies of $F$ in $G$, i.e.\ the number of subsets $M \subseteq V(G)$ such that $G[M] \cong F$. So, for example, $i_{K_p}(G) = k_p(G)$.
+
+ Define
+ \[
+ f(G) = \sum_F \alpha_F i_F(G),
+ \]
+ with the sum being over a finite collection of graphs $F$, each being complete multipartite, with $\alpha_F \in \R$ and $\alpha_F \geq 0$ if $F$ is not complete. Then amongst graphs of given order, $f(G)$ is maximized on a complete multi-partite graph. Moreover, if $\alpha_{\bar{K}_3} > 0$, then there are no other maxima.
+\end{thm}
+
+\begin{proof}
+ We may suppose $\alpha_{\bar{K}_3} > 0$, because the case of $\alpha_{\bar{K}_3} = 0$ follows from a limit argument. Choose a graph $G$ maximizing $f$ and suppose $G$ is not complete multipartite. Then there exist non-adjacent vertices $x, y$ whose neighbourhoods $X, Y$ differ.
+
+ There are four contributions to $i_F(G)$, coming from
+ \begin{enumerate}
+ \item $F$'s that contain both $x$ and $y$;
+ \item $F$'s that contain $y$ but not $x$;
+ \item $F$'s that contain $x$ but not $y$;
+ \item $F$'s that contain neither $x$ nor $y$.
+ \end{enumerate}
+ We may assume that the contribution from (iii) $\geq$ (ii), and if they are equal, then $|X| \leq |Y|$.
+
+ Consider what happens when we remove all edges between $y$ and $Y$, and add edges from $y$ to everything in $X$. Clearly (iii) and (iv) are unaffected.
+ \begin{itemize}
+ \item If (iii) $>$ (ii), then after this move, the contribution of (ii) becomes equal to the contribution of (iii) (which didn't change), and hence strictly increased.
+
+ The graphs $F$ that contribute to (i) are not complete, so $\alpha_F \geq 0$. Moreover, since $F$ is complete multi-partite, it cannot contain a vertex in $X \Delta Y$. So making the move can only increase the number of graphs contributing to (i), and each contribution is non-negative.
+ \item If (iii) $=$ (ii) and $|X| \leq |Y|$, then we make similar arguments. The contribution to (ii) is unchanged this time, and we know the contribution of (i) strictly increased, because the number of $\bar{K}_3$'s contributing to (i) is the number of points not connected to $x$ and $y$.
+ \end{itemize}
+ In both cases, the total sum increased.
+%
+% coming from $F$'s that contain both $x, y$; contain $x$ but not $y$; $y$ but not $x$; or contain neither $x$ nor $y$. Moreover, the first contribution depends only on $X \cap Y$ and $V \setminus (X \cup Y)$, because $F$, being complete multi-partite, cannot contain $x, y$ and a vertex in $X \Delta Y$.
+%
+% Considering the contributions to $f(G)$ from the fourfold individual contributions,
+% \[
+% f(G) = h(X \cap Y, V - (X \cup Y)) + g(X) + g(Y) + C,
+% \]
+% where $g$ and $h$ are some functions and $C$ is independent of $X, Y$.
+%
+% Note that $h(A, B) \leq h(A', B')$ if $A \subseteq A'$ and $B \subseteq B'$, because if $F$ is of the first kind, it is not complete, and so $\alpha_F \geq 0$. Moreover, if $B \not= B'$, then $h(A, B) < h(A', B')$, because the contribution from $F = \bar{K_3}$ is $\alpha_{\bar{K}_3} |B|$, and $\alpha_{\bar{K}_3} > 0$.
+%
+% We may assume that $g(X) \geq g(Y)$ and if $g(X) = g(Y)$, we may assume $|X| \leq |Y|$. In particular, $X \not= X \cup Y$. Hence
+% \[
+% g(X) + h(X, V \setminus X) > g(Y) + h(X \cap Y, V \setminus (X \cup Y)).
+% \]
+% (we certainly have $\geq$, and in both cases, we can check that we have a strict gain)
+%
+% Now remove the edges between $y$ and $Y$, and add edges between $y$ and $X$ to get $G'$, and observe that
+% \[
+% f(G') = h(X, V - X) + 2 g(X) + C > f(G).
+% \]
+\end{proof}
+Perhaps that theorem seemed like a rather peculiar one to prove. However, it has some nice consequences. The following theorem relates $k_p(G)$ with $k_r(G)$ for different $p$ and $r$:
+
+\begin{thm}[Bollob\'as, 1976]
+ Let $1 \leq p \leq r$, and for $0 \leq x \leq \binom{n}{p}$, let $\psi(x)$ be a maximal convex function lying below the points
+ \[
+ \{(k_p(T_q(n)), k_r(T_q(n))): q = r - 1, r, \ldots\} \cup \{(0, 0)\}.
+ \]
+ Let $G$ be a graph of order $n$. Then
+ \[
+ k_r(G) \geq \psi(k_p(G)).
+ \]
+\end{thm}
+
+\begin{proof}
+ Let $f(G) = k_p(G) - c k_r(G)$, where $c > 0$.
+ \begin{claim}
+ It is enough to show that $f$ is maximized on a Tur\'an graph for any $c$.
+ \end{claim}
 Indeed, suppose we plot out the values of $(k_p(T_q(n)), k_r(T_q(n)))$:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (6, 0) node [right] {$k_p(G)$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$k_r(G)$};
+
+ \draw [thick, mblue] (0, 0) node [circ] {} -- (1.5, 0) node [circ] {} node [below, black] {\small$k_p(T_{r - 1}(n))$} -- (3.2, 1.2) node [circ] {} -- (5, 3) node [circ] {};
+
+ \draw [dashed] (3.2, 0) node [below] {\small$k_p(T_r(n))$} -- (3.2, 1.2) -- (0, 1.2) node [left] {\small$k_r(T_r(n))$};
+ \draw [dashed] (5, 0) node [below] {\small$k_p(T_{r + 1}(n))$} -- (5, 3) -- (0, 3) node [left] {\small$k_r(T_{r + 1}(n))$};
+ \end{tikzpicture}
+ \end{center}
+
 If the theorem doesn't hold, then we can pick a $G$ such that $(k_p(G), k_r(G))$ lies below $\psi$. Draw a straight line through this point parallel to the segment of $\psi$ above it. This has slope $\frac{1}{c} > 0$ for some $c$. The intercept on the $x$-axis is then $k_p(G) - c k_r(G)$, which by convexity would be greater than $f(\text{any Tur\'an graph})$, a contradiction.
+
 Now the previous theorem immediately tells us $f$ is maximized on some complete multi-partite graph. Suppose this has $q$ classes, say of sizes $a_1 \leq a_2 \leq \cdots \leq a_q$. It is easy to verify $q \geq r - 1$. In fact, we may assume $q \geq r$, else the maximum is on a Tur\'an graph $T_{r - 1}(n)$.
+
 Then we can write
 \[
 f(G) = a_1 a_q A - c a_1 a_q B + C,
 \]
 where $A, B, C$ are rationals depending only on $a_2, \ldots, a_{q - 1}$ and $a_1 + a_q$ ($A$ and $B$ count the number of ways to pick a $K_p$ and $K_r$ respectively in a way that involves terms in both the first and last classes).

 We wlog assume $c$ is irrational. Hence $a_1 a_q A - c a_1 a_q B = a_1 a_q (A - cB) \not= 0$.
+ \begin{itemize}
+ \item If $A - cB < 0$, replace $a_1$ and $a_q$ by $0$ and $a_1 + a_q$. This would then increase $f$, which is impossible.
+ \item If $A - cB > 0$ and $a_1 \leq a_q - 2$, then we can replace $a_1, a_q$ by $a_1 + 1, a_q - 1$ to increase $f$.
+ \end{itemize}
+ Hence $a_1 \geq a_q - 1$. So $G = T_q(n)$.
+\end{proof}
+
+%It was conjectured that the value of $\min \{k_3(G): e(G) = m, |G| = n\}$ is given by an $r$-partite graph, where $r$ is the minimum possible (subject to $e(G) = m$).
+%
+%The continuous envelope of the conjection in range $\frac{1}{2}$ to $\frac{2}{3}$ was proved by Fisher in 1989. The whole range was proved (in the limit $n \to \infty$) was proved by Razborov in 2007, who introduced the method fof flag algebras. The idea was to use a computer to find the best possible Cauchy--Schwarz inequality, using semi-definite programming. Later in 2008, Nikiforov did $k_4$'s directly. Finally, Reiher did $k_r$'s for all $r$ by another method.
+%
+%Finally, Liu, Pikhurko, Staden in 2017+ obtained the exact result for $k_3$'s (for $n$ large).
+%
+%It is an open problem to maximize the number of induced paths of length $3$. We don't even have a conjecture.
+
+\section{\texorpdfstring{Szemer\'edi's}{Szemeredi's} regularity lemma}
Szemer\'edi's regularity lemma tells us that we can always equipartition a very large graph into pieces that are ``uniform'' in some sense. The lemma is arguably ``trivial'', but it also has many interesting consequences. To state the lemma, we need to know what we mean by ``uniform''.
+
+%A graph with the (large scale) property that all subsets have the same density can be regarded as ``pseudo-random'' in a quantifiable sense. This lies behind the spirit of Szemer\'edi's lemma.
+
+\begin{defi}[Density]\index{density}
+ Let $U, W$ be disjoint subsets of the vertex set of some graph. The number of edges between $U$ and $W$ is denoted by \term{$e(U, W)$}, and the \emph{density} is
+ \[
+ d(U, W) = \frac{e(U, W)}{|U| |W|}.
+ \]
+\end{defi}
+\begin{defi}[$\varepsilon$-uniform pair]\index{$\varepsilon$-uniform}
+ Let $0 < \varepsilon < 1$. We say a pair $(U, W)$ is \emph{$\varepsilon$-uniform} if
+ \[
+ |d(U', W') - d(U, W)| < \varepsilon
+ \]
+ whenever $U' \subseteq U$, $W' \subseteq W$, and $|U'| \geq \varepsilon |U|$, $|W'| \geq \varepsilon |W|$.
+\end{defi}
+
+Note that it is necessary to impose some conditions on how small $U'$ and $W'$ can be. For example, if $|U'| = |W'| = 1$, then $d(U', W')$ is either $0$ or $1$. So we cannot have a sensible definition if we want to require the inequality to hold for arbitrary $U', W'$.
+
But we might be worried that it is unnatural to use the same $\varepsilon$ for two different purposes. This is not something one should worry about. The Szemer\'edi regularity lemma is a fairly robust result, and everything goes through if we use different $\varepsilon$'s for the two different purposes. However, it is annoying to have many different $\varepsilon$'s floating around.
+
+Before we state and prove Szemer\'edi's regularity lemma, let's first try to understand why uniformity is good. The following is an elementary observation.
+\begin{lemma}
+ Let $(U, W)$ be an $\varepsilon$-uniform pair with $d(U, W) = d$. Then
+ \begin{align*}
+ |\{u \in U: |\Gamma(u) \cap W| > (d - \varepsilon) |W|\}| &\geq (1 - \varepsilon)|U|\\
+ |\{u \in U: |\Gamma(u) \cap W| < (d + \varepsilon) |W|\}| &\geq (1 - \varepsilon)|U|,
+ \end{align*}
+ where $\Gamma(u)$ is the set of neighbours of $u$.
+\end{lemma}
+
+\begin{proof}
+ Let
+ \[
+ X = \{u \in U: |\Gamma(u) \cap W| \leq (d - \varepsilon)|W|\}.
+ \]
+ Then $e(X, W) \leq (d - \varepsilon) |X||W|$. So
+ \[
+ d(X, W) \leq d - \varepsilon = d(U, W) - \varepsilon.
+ \]
+ So it fails the uniformity condition. Since $W$ is definitely not small, we must have $|X| < \varepsilon |U|$.
+
+ The other case is similar, or observe that the complementary bipartite graph between $U$ and $W$ has density $1 - d$ and is $\varepsilon$-uniform.
+\end{proof}
+
What is good about $\varepsilon$-uniform pairs is that if we have enough of them, then we can construct essentially any subgraph we like. Szemer\'edi's regularity lemma, which we prove later, says that any large enough graph has $\varepsilon$-uniform equipartitions, and together these facts give us some pretty neat results.
+\begin{lemma}[Graph building lemma]\index{graph building lemma}\index{building lemma}
 Let $G$ be a graph containing disjoint vertex subsets $V_1, \ldots, V_r$ with $|V_i| = u$, such that $(V_i, V_j)$ is $\varepsilon$-uniform and $d(V_i, V_j) \geq \lambda$ for all $1 \leq i < j \leq r$.
+
+ Let $H$ be a graph with maximum degree $\Delta(H) \leq \Delta$. Suppose $H$ has an $r$-colouring in which no colour is used more than $s$ times, i.e.\ $H \subseteq K_r(s)$, and suppose $(\Delta + 1) \varepsilon \leq \lambda^\Delta$ and $s \leq \lfloor \varepsilon u\rfloor$. Then $H \subseteq G$.
+\end{lemma}
+
+To prove this, we just do what we did in the previous lemma, and find lots of vertices connected to lots of other vertices, and then we are done.
+\begin{proof}
+ We wlog assume $V(H) = \{1, \ldots, k\}$, and let $c: V(H) \to \{1, \ldots, r\}$ be a colouring of $V(H)$ using no colour more than $s$ times. We want to pick vertices $x_1, \ldots, x_k$ in $G$ so that $x_i x_j \in E(G)$ if $ij \in E(H)$.
+
 We claim that, for $0 \leq \ell \leq k$, we can choose distinct vertices $x_1, \ldots, x_\ell$ so that $x_j \in V_{c(j)}$, and for $\ell < j \leq k$, a set $X^{\ell}_j$ of \emph{candidates} for $x_j$ such that
+ \begin{enumerate}
+ \item $X_j^{\ell} \subseteq V_{c(j)}$;
+ \item $x_i y_j \in E(G)$ for all $y_j \in X_j^{\ell}$ and $i \leq \ell$ such that $ij \in E(H)$.
+
+ \item $|X_j^{\ell}| \geq (\lambda - \varepsilon)^{|N(j, \ell)|} |V_{c(j)}|$, where
+ \[
+ N(j, \ell) = \{x_i: 1 \leq i \leq \ell \text{ and }ij \in E(H)\}.
+ \]
+ \end{enumerate}
+
+ The claim holds for $\ell = 0$ by taking $X_j^0 = V_{c(j)}$.
+
+ By induction, suppose it holds for $\ell$. To pick $x_{\ell + 1}$, of course we should pick it from our candidate set $X_{\ell + 1}^\ell$. Then the first condition is automatically satisfied. Define the set
+ \[
+ T = \{j > \ell + 1 : (\ell + 1)j \in E(H)\}.
+ \]
+ Then each $t \in T$ presents an obstruction to (ii) and (iii) being satisfied. To satisfy (ii), for each $t \in T$, we should set
+ \[
+ X^{\ell + 1}_t = X_t^\ell \cap \Gamma(x_{\ell + 1}).
+ \]
+ Thus, to satisfy (iii), we want to exclude those $x_{\ell + 1}$ that make this set too small. We define
+ \[
+ Y_t = \Big\{y \in X_{\ell + 1}^{\ell} : |\Gamma(y) \cap X_t^\ell| \leq (\lambda - \varepsilon) |X^{\ell}_t|\Big\}.
+ \]
+ So we want to find something in $X_{\ell + 1}^\ell \setminus \bigcup_{t \in T} Y_t$. We also cannot choose one of the $x_i$ already used. So our goal is to show that
+ \[
+ \left|X_{\ell + 1}^{\ell} - \bigcup_{t \in T} Y_t \right| > s - 1.
+ \]
+ This will follow simply from counting the sizes of $|X_{\ell + 1}^\ell|$ and $|Y_t|$. We already have a bound on the size of $|X_{\ell + 1}^\ell|$, and we shall show that if $|Y_t|$ is too large, then it violates $\varepsilon$-uniformity.
+
 Indeed, by definition of $Y_t$, we have
 \[
 d(Y_t, X_t^\ell) \leq \lambda - \varepsilon \leq d(V_{c(\ell + 1)}, V_{c(t)}) - \varepsilon.
 \]
 So by $\varepsilon$-uniformity, either $|X_t^\ell| < \varepsilon |V_{c(t)}|$ or $|Y_t| < \varepsilon |V_{c(\ell + 1)}|$. But the first cannot occur.

 Indeed, each $t \in T$ has $\ell + 1$ as a neighbour in $H$, and $\ell + 1 \not\in \{1, \ldots, \ell\}$, so $|N(t, \ell)| \leq \Delta - 1$. So we can easily bound
 \[
 |X_t^\ell| \geq (\lambda - \varepsilon)^{\Delta - 1} |V_{c(t)}| \geq (\lambda^{\Delta - 1} - (\Delta - 1)\varepsilon) |V_{c(t)}| > \varepsilon |V_{c(t)}|.
 \]
 Thus, by $\varepsilon$-uniformity, it must be the case that
 \[
 |Y_t| \leq \varepsilon |V_{c(\ell + 1)}|.
 \]
 Now write $m = |N(\ell + 1, \ell)|$, so that $m + |T| \leq \Delta$.
+ Therefore, we can bound
+ \begin{multline*}
+ \left|X_{\ell + 1}^{\ell} - \bigcup_{t \in T} Y_t \right| \geq (\lambda - \varepsilon)^m |V_{c(\ell + 1)}| - (\Delta - m) \varepsilon|V_{c(\ell + 1)}|\\
+ \geq (\lambda^m - m\varepsilon - (\Delta - m)\varepsilon) u \geq \varepsilon u > s - 1.
+ \end{multline*}
+ So we are done.
+
+% At most $s - 1$ vertices of $X_{\ell + 1}^\ell - \bigcup Y_t$ have been chosen amongst $x_1, \ldots, x_\ell$, so we may select $x_{\ell + 1}$ in this set. Then take
+% \[
+% X^{\ell + 1}_t = X^\ell_t \cap \Gamma(x_{\ell + 1})
+% \]
+% for $t \in T$, and for $t \not \in T$, we just set $X_j^{\ell + 1} X_j^{\ell}$.
+%
+% This establishes the claim for $1 \leq \ell \leq k$, completing the proof.
+\end{proof}
+
+\begin{cor}
+ Let $H$ be a graph with vertex set $\{v_1, \ldots, v_r\}$. Let $0 < \lambda, \varepsilon < 1$ satisfy $r \varepsilon \leq \lambda^{r - 1}$.
+
 Let $G$ be a graph with disjoint vertex subsets $V_1, \ldots, V_r$, each of size $u \geq 1$. Suppose each pair $(V_i, V_j)$ is $\varepsilon$-uniform, and $d(V_i, V_j) \geq \lambda$ if $v_i v_j \in E(H)$, and $d(V_i, V_j) \leq 1 - \lambda$ if $v_i v_j \not \in E(H)$. Then there exist $x_i \in V_i$ such that the map $v_i \mapsto x_i$ is an isomorphism $H \to G[\{x_1, \ldots, x_r\}]$.
+\end{cor}
+
+\begin{proof}
+ By replacing the $V_i$-$V_j$ edges by the complementary set whenever $v_i v_j \not \in E(H)$, we may assume $d(V_i, V_j) \geq \lambda$ for all $i, j$, and $H$ is a complete graph.
+
+ We then apply the previous lemma with $\Delta = r - 1$ and $s = 1$.
+\end{proof}
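To get a feel for the numbers, consider the special case $H = K_3$, so that $r = 3$ and the condition on the parameters becomes
\[
 3 \varepsilon \leq \lambda^2.
\]
For instance, if $V_1, V_2, V_3$ are pairwise $\varepsilon$-uniform with $d(V_i, V_j) \geq \frac{1}{2}$ for all $i \not= j$, then any $\varepsilon \leq \frac{1}{12}$ guarantees a triangle with one vertex in each $V_i$.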
+
+Szemer\'edi showed that every graph that is sufficiently large can be partitioned into finitely many classes, with most pairs being $\varepsilon$-uniform. The idea is simple --- whenever we see something that is not uniform, we partition it further into subsets that are more uniform. The ``hard part'' of the proof is to come up with a measure of how far we are from being uniform.
+
+\begin{defi}[Equipartition]\index{equipartition}
 An \term{equipartition} of $V(G)$ into $k$ parts is a partition into sets $V_1, \ldots, V_k$, where $\lfloor \frac{n}{k} \rfloor \leq |V_i| \leq \lceil \frac{n}{k}\rceil$, where $n = |G|$.
+
+ We say that the partition is $\varepsilon$-uniform\index{$\varepsilon$-uniform!partition} if $(V_i, V_j)$ is $\varepsilon$-uniform for all but $\varepsilon \binom{k}{2}$ pairs.
+\end{defi}
+
+
+\begin{thm}[Szemer\'edi's regularity lemma]\index{Szemer\'edi regularity lemma}
+ Let $0 < \varepsilon < 1$ and let $\ell$ be some natural number. Then there exists some $L = L(\ell, \varepsilon)$ such that every graph has an $\varepsilon$-uniform equipartition into $m$ parts for some $\ell \leq m \leq L$, depending on the graph.
+\end{thm}
+This lemma was proved by Szemer\'edi in order to prove his theorem on arithmetic progressions in dense subsets of integers.
+
+When we want to apply this, we usually want at least $\ell$ many parts. For example, having $1$ part is usually not very helpful. The upper bound on $m$ is helpful for us to ensure the parts are large enough, by picking graphs with sufficiently many vertices.
+
+We first need a couple of trivial lemmas.
+\begin{lemma}
+ Let $U' \subseteq U$ and $W' \subseteq W$, where $|U'| \geq (1 - \delta)|U|$ and $|W'| \geq (1 - \delta) |W|$. Then
+ \[
+ |d(U', W') - d(U, W)| \leq 2\delta.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $d = d(U, W)$ and $d' = d(U', W')$. Then
+ \[
+ d = \frac{e(U, W)}{|U||W|} \geq \frac{e(U', W')}{|U||W|} = d' \frac{|U'||W'|}{|U||W|} \geq d' (1 - \delta)^2.
+ \]
+ Thus,
+ \[
+ d' - d \leq d'(1 - (1 - \delta)^2) \leq 2\delta d' \leq 2 \delta.
+ \]
+ The other inequality follows from considering the complementary graph, which tells us
+ \[
+ (1 - d') - (1 - d) \leq 2\delta.\qedhere
+ \]
+\end{proof}
+
+\begin{lemma}
+ Let $x_1, \ldots, x_n$ be real numbers with
+ \[
+ X = \frac{1}{n} \sum_{i = 1}^n x_i,
+ \]
+ and let
+ \[
+ x = \frac{1}{m} \sum_{i = 1}^m x_i.
+ \]
+ Then
+ \[
+ \frac{1}{n} \sum_{i = 1}^n x_i^2 \geq X^2 + \frac{m}{n - m}(x - X)^2 \geq X^2 + \frac{m}{n} (x - X)^2.
+ \]
+\end{lemma}
+If we ignore the second term on the right, then this is just Cauchy--Schwarz.
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \frac{1}{n} \sum_{i = 1}^n x_i^2 &= \frac{1}{n} \sum_{i = 1}^m x_i^2 + \frac{1}{n} \sum_{i = m + 1}^n x_i^2 \\
+ &\geq \frac{m}{n} x^2 + \frac{n - m}{n} \left(\frac{nX - mx}{n - m}\right)^2\\
+ &\geq X^2 + \frac{m}{n - m} (x - X)^2
+ \end{align*}
+ by two applications of Cauchy--Schwarz.
+\end{proof}
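The second step in the proof is in fact an equality: writing $y = \frac{nX - mx}{n - m}$ for the mean of $x_{m + 1}, \ldots, x_n$, the variance decomposition gives
\[
 \frac{m}{n} x^2 + \frac{n - m}{n} y^2 = X^2 + \frac{m}{n}(x - X)^2 + \frac{n - m}{n}(y - X)^2,
\]
and since $y - X = \frac{m}{n - m}(X - x)$, the last two terms sum to exactly $\frac{m}{n - m}(x - X)^2$.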
+
+We can now prove Szemer\'edi's regularity lemma.
+\begin{proof}
+ Define the index $\ind(\mathcal{P})$ of an equipartition $\mathcal{P}$ into $k$ parts $V_i$ to be
+ \[
 \ind(\mathcal{P}) = \frac{1}{k^2} \sum_{i < j} d^2(V_i, V_j).
+ \]
+ We show that if $P$ is not $\varepsilon$-uniform, then there is a refinement equipartition $\mathcal{Q}$ into $k 4^k$ parts, with $\ind(\mathcal{Q}) \geq \ind(\mathcal{P}) + \frac{\varepsilon^5}{8}$.
+
 This is enough to prove the theorem. Indeed, choose $t \geq \ell$ with $4^t \varepsilon^5 \geq 100$. Define recursively a function $f$ by
+ \[
+ f(0) = t,\quad f(j + 1) = f(j) 4^{f(j)}.
+ \]
+ Let
+ \[
+ N = f(\lceil 4 \varepsilon^{-5}\rceil),
+ \]
+ and pick $L = N 16^N$.
+
 If $n \leq L$, just take an equipartition into single vertices. Otherwise, begin with an equipartition into $t$ parts. As long as the current partition into $k$ parts is not $\varepsilon$-uniform, replace it by a refinement into $k 4^k$ parts. The point is that $\ind(\mathcal{P}) \leq \frac{1}{2}$ for any partition. So we cannot refine more than $4 \varepsilon^{-5}$ times, at which point we have an $\varepsilon$-uniform equipartition into at most $N \leq L$ parts.
+
+ Note that the reason we had to set $L = N 16^N$ is that in our proof, we want to assume we have many vertices lying around.
+
+ The proof really is just one line, but students tend to complain about such short proofs, so let's try to explain it in a bit more detail. If the partition is not $\varepsilon$-uniform, this means we can further partition each part into uneven pieces. Then our previous lemma tells us this discrepancy allows us to push up $\frac{1}{n} \sum x_i^2$.
+
+ So given an equipartition $\mathcal{P}$ that is not $\varepsilon$-uniform, for each non-uniform pair $(V_i, V_j)$ of $P$, we pick witness sets
+ \[
+ X_{ij} \subseteq V_i,\quad X_{ji} \subseteq V_j
+ \]
 with $|X_{ij}| \geq \varepsilon |V_i|$, $|X_{ji}| \geq \varepsilon |V_j|$ and $|d(X_{ij}, X_{ji}) - d(V_i, V_j)| \geq \varepsilon$.
+
 Fix $i$. Then the sets $X_{ij}$ partition $V_i$ into at most $2^{k - 1}$ \term{atoms}. Let $m = \lfloor\frac{n}{k4^k}\rfloor$, and write $n = k 4^k m + ak + b$, where $0 \leq a < 4^k$ and $0 \leq b < k$. Then we see that
+ \[
+ \lfloor n/k\rfloor = 4^k m + a
+ \]
+ and the parts of $\mathcal{P}$ have size $4^k m + a$ or $4^km + a + 1$, with $b$ of the larger size.
+
 Partition each part of $\mathcal{P}$ into $4^k$ sets, of size $m$ or $m + 1$, the smaller parts having $a$ subsets of size $m + 1$, and the larger having $a + 1$.
+
+ We see that any such partition is an equipartition into $k 4^k$ parts of size $m$ or $m + 1$, with $ak + b$ parts of larger size $m + 1$.
+
+ Let's choose such an equipartition $\mathcal{Q}$ with parts as nearly as possible inside atoms, so each atom is a union of parts of $\mathcal{Q}$ with at most $m$ extra vertices.
+
+ All that remains is to check that $\ind (\mathcal{Q}) \geq \ind(\mathcal{P}) + \frac{\varepsilon^5}{8}$.
+
 Let the sets of $\mathcal{Q}$ within $V_i$ be $V_i(s)$, where $1 \leq s \leq q = 4^k$. So
+ \[
+ V_i = \bigcup_{s = 1}^q V_i(s).
+ \]
+ Now
+ \[
+ \sum_{1 \leq s, t \leq q} e(V_i(s), V_j(t)) = e(V_i, V_j).
+ \]
 We'd like to divide by some numbers and convert these to densities, but this is where we have to watch out. Still, this is quite easy to handle. We have
+ \[
 \frac{m}{m + 1}\, q |V_i(s)| \leq |V_i| \leq \frac{m + 1}{m}\, q |V_i(s)|
+ \]
+ for all $s$. So we want $m$ to be large for this to not hurt us too much.
+
+ So
+ \[
 \left(\frac{m}{m + 1}\right)^2 d(V_i, V_j) \leq \frac{1}{q^2} \sum_{s, t} d(V_i(s), V_j(t)) \leq \left(\frac{m + 1}{m}\right)^2 d(V_i, V_j).
+ \]
+ Using $n \geq k 16^k$, and hence
+ \[
+ \left(\frac{m}{m + 1}\right)^2 \geq 1 - \frac{2}{m} \geq 1 - \frac{2}{4^k} \geq 1 - \frac{\varepsilon^5}{50},
+ \]
 we have
 \[
 \frac{1}{q^2} \sum_{s, t} d(V_i(s), V_j(t)) \geq \left(\frac{m}{m + 1}\right)^2 d(V_i, V_j) \geq \left(1 - \frac{\varepsilon^5}{50}\right) d(V_i, V_j).
 \]
 In particular, by Cauchy--Schwarz,
 \[
 \frac{1}{q^2} \sum_{s, t} d^2(V_i(s), V_j(t)) \geq d^2(V_i, V_j) - \frac{\varepsilon^5}{25}.
 \]
+ The lower bound can be improved if $(V_i, V_j)$ is not $\varepsilon$-uniform.
+
+ Let $X_{ij}^*$ be the largest subset of $X_{ij}$ that is the union of parts of $\mathcal{Q}$. We may assume
+ \[
+ X_{ij}^* = \bigcup_{1 \leq s \leq r_i} V_i(s).
+ \]
+ By an argument similar to the above, we have
+ \[
 \frac{1}{r_i r_j} \sum_{\substack{1 \leq s \leq r_i\\ 1 \leq t \leq r_j}} d(V_i(s), V_j(t)) \leq d(X_{ij}^*, X_{ji}^*) + \frac{\varepsilon^5}{49}.
+ \]
+ By the choice of parts of $\mathcal{Q}$ within atoms, and because $|V_i| \geq qm = 4^k m$, we have
+ \begin{align*}
+ |X_{ij}^*| &\geq |X_{ij}| - 2^{k - 1}m \\
+ &\geq |X_{ij}| \left(1 - \frac{2^k m}{\varepsilon |V_i|}\right) \\
+ &\geq |X_{ij}| \left(1 - \frac{1}{2^k \varepsilon}\right)\\
+ &\geq |X_{ij}| \left(1 - \frac{\varepsilon}{10}\right).
+ \end{align*}
+ So by the lemma, we know
+ \[
+ |d(X_{ij}^*, X_{ji}^*) - d(X_{ij}, X_{ji})| < \frac{\varepsilon}{5}.
+ \]
+ Recalling that
+ \[
 |d(X_{ij}, X_{ji}) - d(V_i, V_j)| \geq \varepsilon,
+ \]
+ we have
+ \[
+ \bigg| \frac{1}{r_i r_j} \sum_{\substack{1 \leq s \leq r_i\\ 1 \leq t \leq r_j}} d(V_i(s), V_j(t)) - d(V_i, V_j)\bigg| > \frac{3}{4} \varepsilon.
+ \]
 We can now apply our Cauchy--Schwarz lemma with $n = q^2$ and $m = r_i r_j$, which gives
+ \begin{align*}
+ \frac{1}{q^2} \sum_{1 \leq s, t \leq q} d^2 (V_i(s), V_j(t)) &\geq d^2(V_i, V_j) - \frac{\varepsilon^5}{25} + \frac{r_i r_j}{q^2}\cdot \frac{9\varepsilon^2}{16} \\
+ &\geq d^2(V_i, V_j) - \frac{\varepsilon^5}{25} + \frac{\varepsilon^4}{3},
+ \end{align*}
+ using the fact that
+ \begin{multline*}
 \frac{r_i}{q} \geq \frac{m}{m + 1} \cdot \frac{r_i (m + 1)}{|V_i|} \geq \left(1 - \frac{1}{m}\right) \frac{|X_{ij}^*|}{|V_i|}\\ \geq \left(1 - \frac{1}{m}\right) \left(1 - \frac{\varepsilon}{10}\right) \frac{|X_{ij}|}{|V_i|} > \frac{4\varepsilon}{5}.
+ \end{multline*}
+ Therefore
+ \begin{align*}
 \ind(\mathcal{Q}) &\geq \frac{1}{k^2q^2}\sum_{\substack{1 \leq i < j \leq k\\ 1 \leq s, t\leq q}} d^2(V_i(s), V_j(t))\\
 &\geq \frac{1}{k^2} \left(\sum_{1 \leq i < j \leq k} \left(d^2(V_i, V_j) - \frac{\varepsilon^5}{25}\right) + \varepsilon \binom{k}{2} \cdot \frac{\varepsilon^4}{3}\right)\\
 &\geq \ind(\mathcal{P}) + \frac{\varepsilon^5}{8}.\qedhere
+ \end{align*}
+\end{proof}
+The proof gives something like
+\[
+ L \sim 2^{2^{.^{.^{.^{2}}}}},
+\]
+where the tower is $\varepsilon^{-5}$ tall. Can we do better than that?
+
In 1997, Gowers showed that a tower of height at least $\varepsilon^{-1/16}$ is necessary. More generally, we can define $V_1, \ldots, V_k$ to be $(\varepsilon, \delta, \eta)$-uniform if all but $\eta \binom{k}{2}$ pairs $(V_i, V_j)$ satisfy $|d(V_i, V_j) - d(V_i', V_j')| \leq \varepsilon$ whenever $|V_i'| \geq \delta |V_i|$ and $|V_j'| \geq \delta |V_j|$. Then there is a graph for which every $(1 - \delta^{1/16}, \delta, 1 - 20 \delta^{1/16})$-uniform partition requires a tower of height $\delta^{-1/16}$ many parts.
+
+More recently, Moshkovitz and Shapira (2012) improved these bounds. Most recently, a reformulation of the lemma due to Lov\'asz and Szegedy (2007), for which the upper bound is tower($\varepsilon^{-2}$), was shown to have lower bound tower($\varepsilon^{-2}$) by Fox and Lov\'asz (2014) (note that these are different Lov\'asz's!).
+
+Let's now turn to some applications of Szemer\'edi's regularity lemma. Recall that Ramsey's theorem says there exists $R(k)$ such that every red-blue colouring of the edges of $K_n$ yields a monochromatic $K_k$, provided $n \geq R(k)$. There are known bounds
+\[
+ 2^{k/2} \leq R(k) \leq 4^k.
+\]
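+The lower bound is Erd\H{o}s' probabilistic argument. As a quick sketch (standard, though not spelled out in these notes): colour each edge of $K_n$ red or blue independently with probability $\frac{1}{2}$. The expected number of monochromatic copies of $K_k$ is
+\[
+ \binom{n}{k} \cdot 2^{1 - \binom{k}{2}} \leq \frac{n^k}{k!} \cdot 2^{1 - k(k - 1)/2},
+\]
+which is less than $1$ when $n = \lfloor 2^{k/2} \rfloor$ and $k \geq 3$. So some colouring of $K_n$ has no monochromatic $K_k$, i.e.\ $R(k) > 2^{k/2}$.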
+The existence of $R(k)$ implies that for every graph $G$, there exists a minimal number $r(G)$ such that if $n \geq r(G)$ and we red-blue colour the edges of $K_n$, we obtain a monochromatic copy of $G$. Clearly, we have
+\[
+ r(G) \leq R(|G|).
+\]
+How much smaller can $r(G)$ be compared to $R(|G|)$?
+
+\begin{thm}
+ Given an integer $d$, there exists $c(d)$ such that
+ \[
+ r(G) \leq c(d)|G|
+ \]
+ for every graph $G$ with $\Delta(G) \leq d$.
+\end{thm}
+
+\begin{proof}
+ Let $t = R(d + 1)$. Pick $\varepsilon \leq \min \left\{\frac{1}{t}, \frac{1}{2^d (d + 1)}\right\}$. Let $\ell \geq t^2$, and let $L = L(\ell, \varepsilon)$. We show that $c = \frac{L}{\varepsilon}$ works.
+
+ Indeed, let $G$ be a graph. Colour the edges of $K_n$ by red and blue, where $n \geq c |G|$. Apply Szemer\'edi's regularity lemma to the red graph with $\ell, \varepsilon$ as above. Let $H$ be the graph whose vertices are $\{V_1, \ldots, V_m\}$, where $V_1, \ldots, V_m$ is the resulting partition of the red graph, and let $V_iV_j \in E(H)$ if $(V_i, V_j)$ is $\varepsilon$-uniform. Notice that $m = |H| \geq \ell \geq t^2$, and $e(\bar{H}) \leq \varepsilon \binom{m}{2}$. We must have $H \supseteq K_t$, or else by Tur\'an's theorem, there are integers $d_1, \ldots, d_{t - 1}$ with $\sum d_i = m$ such that
+ \[
+ e(\bar{H}) \geq \sum_{i = 1}^{t - 1} \binom{d_i}{2}\geq (t - 1) \binom{m/(t - 1)}{2} > \varepsilon \binom{m}{2}
+ \]
+ by our choice of $\varepsilon$ and $m$, a contradiction.
+
+ We may as well assume all pairs $V_i, V_j$ for $1 \leq i < j \leq t$ are $\varepsilon$-uniform. We colour the edge $V_iV_j$ of $K_t$ green if $d(V_i, V_j) \geq \frac{1}{2}$ (in the red graph), or white if $d(V_i, V_j) < \frac{1}{2}$ (i.e.\ density $> \frac{1}{2}$ in the blue graph).
+
+ By Ramsey's theorem, we may assume all pairs $V_i V_j$ for $1 \leq i < j \leq d + 1$ are the same colour. We may wlog assume the colour is green, and we shall find a red $G$ (a similar argument gives a blue $G$ if the colour is white).
+
+ Indeed, take a vertex colouring of $G$ with at most $d + 1$ colours (using $\Delta(G) \leq d$), with no colour used more than $|G|$ times. By the building lemma with $H$ (in the lemma) being $G$ (in this proof), and $G$ (in the lemma) equal to the subgraph of the red graph spanned by $V_1, \ldots, V_{d + 1}$ (here),
+ \[
+ u = |V_i| \geq \frac{n}{L} \geq \frac{c |G|}{L} \geq \frac{|G|}{\varepsilon},
+ \]
+ $r = d + 1$, $\lambda = \frac{1}{2}$, and we are done.
+\end{proof}
+
+This proof is due to Chv\'atal, R\"odl, Szemer\'edi and Trotter (1983). It was extended to more general graphs, including planar graphs, by Chen and Schelp (1993). It was conjectured by Burr and Erd\H{o}s (1978) to be true for $d$-degenerate graphs (those with $e(H) \leq d |H|$ for all $H \subseteq G$).
+
+Kostochka--R\"odl (2004) introduced ``dependent random choice'', used by Fox--Sudakov (2009) and finally Lee (2015) proved the full conjecture.
+
+An important application of the Szem\'eredi regularity lemma is the \emph{triangle removal lemma}.
+\begin{thm}[Triangle removal lemma]\index{triangle removal lemma}
+ Given $\varepsilon > 0$, there exists $\delta > 0$ such that if $|G| = n$ and $G$ contains at most $\delta n^3$ triangles, then there exists a set of at most $\varepsilon n^2$ edges whose removal leaves no triangles.
+\end{thm}
+
+\begin{proof}
+ Exercise. (See example sheet)
+\end{proof}
+Appropriate modifications hold for general graphs, not just triangles.
+
+\begin{cor}[Roth, 1950's]
+ Let $\varepsilon > 0$. Then if $n$ is large enough, and $A \subseteq [n] = \{1, 2, \ldots, n\}$ with $|A| \geq \varepsilon n$, then $A$ contains a $3$-term arithmetic progression.
+\end{cor}
+Roth originally proved this by some sort of Fourier analysis argument, and Szemer\'edi later proved it for progressions of all lengths.
+
+\begin{proof}
+ Define
+ \[
+ B = \{(x, y) \in [2n]^2 : x - y \in A\}.
+ \]
+ Then certainly $|B| \geq \varepsilon n^2$. We form a $3$-partite graph $G$ with disjoint vertex classes $X = [2n]$, $Y = [2n]$ and $Z = [4n]$. For $x \in X$, $y \in Y$ and $z \in Z$, we join $x$ to $y$ if $(x, y) \in B$; join $x$ to $z$ if $(x, z - x) \in B$; and join $y$ to $z$ if $(z - y, y) \in B$.
+
+ A triangle in $G$ is a triple $(x, y, z)$ such that $(x, y), (x, y + w), (x + w, y) \in B$, where $w = z - x - y$. Note that $w < 0$ is okay. A $0$-triangle is one with $w = 0$. There are at least $\varepsilon n^2$ of these, one for each $(x, y) \in B$, and they are pairwise edge disjoint, because any edge of a $0$-triangle determines $(x, y)$ via $z = x + y$.
+
+ Hence the triangles cannot all be killed by removing $\leq \varepsilon n^2/2$ edges. By the triangle removal lemma, we must have $\geq \delta n^3$ triangles for some $\delta > 0$. Since there are at most $4n^2$ many $0$-triangles (one for each $(x, y) \in B$), for $n$ large enough there is some triangle that is not a $0$-triangle.
+
+ But then we are done, since
+ \[
+ x - y - w, x - y, x - y + w \in A
+ \]
+ where $w \not= 0$, and this is a $3$-term arithmetic progression.
+\end{proof}
+There is a simple generalization of this argument which yields $k$-term arithmetic progressions provided we have a suitable removal lemma. This needs a Szemer\'edi regularity lemma for $(k - 1)$-uniform hypergraphs, instead of just graphs, and a corresponding version of the building lemma.
+
+The natural generalization of Szemer\'edi's lemma to hypergraphs is easily shown to be true (exercise). The catch is that this na\"ive generalization does not give us a strong enough result to apply the building lemma.
+
+What we need is a version of regularity strong enough to let us build graphs, but not so strong that we can no longer prove the regularity lemma itself. A workable hypergraph regularity lemma along these lines was proved by Nagle, R\"odl and Skokan, and by Gowers (these weren't quite the same lemma, though).
+
+\section{Subcontraction, subdivision and linking}
+We begin with some definitions.
+\begin{defi}
+ Let $G$ be a graph, $e = xy$ an edge. Then the graph $G/e$\index{$G/e$} is the graph formed from $G \setminus \{x, y\}$ by adding a new vertex joined to all neighbours of $x$ and $y$. We say this is the graph formed by \emph{contracting} the edge $e$.
+\end{defi}
+
+% insert picture
+
+\begin{defi}[(Sub)contraction]
+ A \term{contraction} of $G$ is a graph obtained by a sequence of edge contractions. A \term{subcontraction} of $G$ is a contraction of a subgraph. We write $G \succ H$ if $H$ is a subcontraction of $G$.
+\end{defi}
+If $G \succ H$, then $G$ has disjoint vertex subsets $W_v$ for each $v \in V(H)$ such that $G[W_v]$ is connected, and there is an edge of $G$ between $W_u$ and $W_v$ whenever $uv \in E(H)$.
+
+\begin{defi}[Subdivision]\index{subdivision}
+ If $H$ is a graph, \term{$TH$} stands for any graph obtained from $H$ by replacing its edges by vertex disjoint paths (i.e.\ we subdivide edges).
+\end{defi}
+The $T$ stands for ``topological'', since the resulting graph has the same topology.
+
+Clearly, if $G \supseteq TH$, then $G \succ H$.
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (2, 0) node [circ] {} -- (1, 1.732) node [circ] {} -- cycle;
+
+ \draw [->] (2.5, 0.866) -- +(1, 0);
+
+ \draw (4, 0) node [circ] {} -- (6, 0) node [circ] {} node [pos=0.5, circ] {} -- (5, 1.732) node [circ] {} node [pos=0.333, circ] {} node [pos=0.667, circ] {} -- cycle;
+ \end{tikzpicture}
+\end{center}
+
+Recall the following theorem:
+\begin{thm}[Menger's theorem]\index{Menger's theorem}
+ Let $G$ be a graph and $s_1, \ldots, s_k, t_1, \ldots, t_k$ be distinct vertices. If $\kappa(G) \geq k$, then there exist $k$ vertex disjoint paths from $\{s_1, \ldots, s_k\}$ to $\{t_1, \ldots, t_k\}$.
+\end{thm}
+
+This is good, but not good enough. It would be nice if we could have paths that join $s_i$ to $t_i$, as opposed to joining $s_i$ to any element of $\{t_1, \ldots, t_k\}$.
+
+\begin{defi}[$k$-linked graph]\index{$k$-linked graph}
+ We say $G$ is \emph{$k$-linked} if for any choice of $2k$ distinct vertices $s_1, \ldots, s_k, t_1, \ldots, t_k$, there exist vertex disjoint $s_i$-$t_i$ paths for $1 \leq i \leq k$.
+\end{defi}
+
+We want to understand how these three notions interact. There are some obvious ways in which they interact. For example, it is not hard to see that $G \supseteq TK_t$ if $G$ is $k$-linked for some large $k$. Here is a kind of converse.
+\begin{lemma}
+ If $\kappa(G) \geq 2k$ and $G \supseteq TK_{5k}$, then $G$ is $k$-linked.
+\end{lemma}
+
+\begin{proof}
+ Let $B$ be the set of ``branch vertices'' of $TK_{5k}$, i.e.\ the $5k$ vertices joined by paths. By Menger's theorem, there exist $2k$ vertex disjoint paths joining $\{s_1, \ldots, s_k, t_1, \ldots, t_k\}$ to $B$ (note that these sets might intersect). We say one of our $2k$ paths \emph{impinges} on a path in $TK_{5k}$ if it meets that path and subsequently leaves it. Choose our $2k$ paths so that the total number of impingements is minimal (counting $1$ per impingement).
+
+ Let these join $\{s_1, \ldots, t_k\}$ to $\{v_1, \ldots, v_{2k}\}$, where $B = \{v_1, \ldots, v_{5k}\}$. Then no path impinges on a path in $TK_{5k}$ from $\{v_1, \ldots, v_{2k}\}$ to $\{v_{2k + 1}, \ldots, v_{5k}\}$. Otherwise, pick the path impinging closest to $v_{2k + j}$, and reroute it to $v_{2k + j}$ rather than to something in $\{v_1, \ldots, v_{2k}\}$, reducing the number of impingements.
+
+ Hence each of our $2k$ paths meets at most one path in $TK_{5k}$ from $\{v_1, \ldots, v_{2k}\}$ to $\{v_{2k + 1}, \ldots, v_{5k}\}$ (once we hit it, we must stay on it).
+
+ Thus, we may assume the paths from $\{v_1, \ldots, v_{2k}\}$ to $\{v_{4k + 1}, \ldots, v_{5k}\}$ are not met at all. Then we are done, since we can link up $v_j$ to $v_{k + j}$ via $v_{4k + j}$ to join $s_j$ to $t_j$.
+\end{proof}
+The argument can be improved to use $TK_{3k}$.
+
+Since we are doing extremal graph theory, we want to ask the following question --- how many edges do we need to guarantee $G \succ K_t$ or $G \supseteq TK_t$?
+
+For two graphs $G$ and $H$, we write $G + H$ for the graph formed by taking the disjoint union of $G$ and $H$ and then joining everything in $G$ to everything in $H$.
+\begin{lemma}
+ If $e(G) \geq k|G|$, then there exists some $H$ with $|H| \leq 2k$ and $\delta(H) \geq k$ such that $G \succ K_1 + H$.
+\end{lemma}
+
+\begin{proof}
+ Consider a subcontraction $G'$ of $G$ minimal among those satisfying $e(G') \geq k |G'|$. Then we must in fact have $e(G') = k|G'|$, or else we can just throw away an edge.
+
+ Since $e(G') = k|G'|$, it must be the case that $\delta(G') \leq 2k$. Let $v$ be of minimum degree in $G'$, and set $H = G'[\Gamma(v)]$. Then $G \succ K_1 + H$ and $|H| \leq 2k$.
+
+ To see that $\delta(H) \geq k$, suppose $u \in V(H) = \Gamma(v)$. Then by minimality of $G'$, we have
+ \[
+ e(G'/uv) \leq k|G'| - k - 1.
+ \]
+ But the number of edges we kill when performing this contraction is exactly $1$ plus the number of triangles containing $uv$. So $uv$ lies in at least $k$ triangles of $G'$. In other words, $\delta(H) \geq k$.
+\end{proof}
+
+By iterating this, we can find some subcontractions into $K_t$.
+\begin{thm}
+ If $t \geq 3$ and $e(G) \geq 2^{t - 3}|G|$, then $G \succ K_t$.
+\end{thm}
+
+\begin{proof}
+ If $t = 3$, then $G$ contains a cycle. So $G \succ K_3$. If $t > 3$, then $G \succ K_1 + H$ where $\delta(H) \geq 2^{t - 3}$. So $e(H) \geq 2^{t - 4} |H|$ and (by induction) $H \succ K_{t - 1}$.
+\end{proof}
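+For instance, unwinding the induction for $t = 4$: if $e(G) \geq 2|G|$, the lemma gives $G \succ K_1 + H$ with $\delta(H) \geq 2$, so $H$ contains a cycle and hence $H \succ K_3$, giving
+\[
+ G \succ K_1 + K_3 = K_4.
+\]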
+
+We can prove similar results for the topological things.
+\begin{lemma}
+ If $\delta(G) \geq 2k$, then $G$ contains vertex disjoint subgraphs $H, J$ with $\delta(H) \geq k$, $J$ connected, and every vertex in $H$ has a neighbour in $J$.
+\end{lemma}
+If we contract $J$ to a single vertex, then we get an $H + K_1$.
+
+\begin{proof}
+ We may assume $G$ is connected, or else we can replace $G$ by a component.
+
+ Now pick a subgraph $J$ maximal such that $J$ is connected and $e(G/J) \geq k(|G| - |J| + 1)$. Note that any single vertex could be used for $J$. So such a $J$ exists.
+
+ Let $H$ be the subgraph spanned by the vertices having a neighbour in $J$. Note that if $v \in V(H)$, then $v$ has at least $k$ neighbours in $H$. Indeed, when we contract $J \cup \{v\}$, maximality tells us
+ \[
+ e(G/(J \cup \{v\})) \leq k(|G| - |J| + 1) - k - 1,
+ \]
+ and the number of edges we lose in the contraction is $1$ plus the number of neighbours of $v$ (it is easier to think of this as a two-step process --- first contract $J$, so that we get an $H + K_1$, then contract $v$ with the vertex coming from $K_1$).
+\end{proof}
+
+Again, we iterate this result.
+\begin{thm}
+ Let $F$ be a graph with $n$ edges and no isolated vertices. If $\delta(G) \geq 2^n$, then $G \supseteq TF$.
+\end{thm}
+
+\begin{proof}
+ If $n = 1$, then this is immediate. If $F$ consists of $n$ isolated edges, then $F \subseteq G$ (in fact, $\delta(G) \geq 2n - 1$ is enough for this). Otherwise, pick an edge $e$ which is not isolated. Then $F - e$ has at most $1$ isolated vertex. Apply the previous lemma to $G$ to obtain $H$ and $J$ with $\delta(H) \geq 2^{n - 1}$. Find a copy of $T(F - e)$ in $H$ (apart from the isolated vertex, if it exists).
+
+ If $e$ was just a pendant edge, then just add an edge (say into $J$). If not, construct a path through $J$ to act as $e$, which we can do since $J$ is connected.
+\end{proof}
+
+It is convenient to make the definition
+\begin{align*}
+ c(t) &= \inf \{c: e(G) \geq c|G| \Rightarrow G \succ K_t\}\\
+ t(t) &= \inf \{c: e(G) \geq c|G| \Rightarrow G \supseteq T K_t\}.
+\end{align*}
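+As a sanity check on these definitions, $c(3) = 1$: any graph with $e(G) \geq |G|$ contains a cycle, which contracts to $K_3$, while a forest $G$ satisfies
+\[
+ e(G) \leq |G| - 1 \quad\text{and}\quad G \not\succ K_3,
+\]
+so no constant less than $1$ works.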
+We can interpret the first result as saying $c(t) \leq 2^{t - 3}$. Moreover, note that if $e(G) \geq k|G|$, then $G$ must contain a subgraph with $\delta \geq k$; otherwise, we could keep removing vertices of degree less than $k$ without the ratio of edges to vertices ever dropping below $k$.
+
+For lower bounds, we turn to random graphs. Call a partition of $V(G)$ into classes $V_1, \ldots, V_t$ \emph{complete} if there is an edge between $V_i$ and $V_j$ for all $i \not= j$. If $G \succ K_t$, then $G$ has a complete partition into $t$ classes, since the vertices not used by the branch sets can be distributed arbitrarily. So it suffices to exhibit graphs with many edges for which there is no complete partition.
+
+\begin{thm}
+ There is a constant $\alpha > 0$ such that $c(t) \geq (\alpha + o(1)) t \sqrt{\log t}$.
+\end{thm}
+
+\begin{proof}
+ Consider the random graph $G(n, p)$.
+
+ Writing $q = 1 - p$, a given partition with $|V_i| = n_i$ is complete with probability
+ \begin{align*}
+ \prod_{i < j}(1 - q^{n_i n_j}) &\leq \exp\left(- \sum_{i < j} q^{n_i n_j}\right)\\
+ &\leq \exp \left(- \binom{t}{2}\prod q^{n_i n_j/\binom{t}{2}}\right) \tag{AM-GM}\\
+ &\leq \exp \left(- \binom{t}{2} q^{n^2/t^2}\right).
+ \end{align*}
+ The expected number of complete partitions is then
+ \[
+ \leq t^n \exp \left(- \binom{t}{2} q^{n^2/t^2}\right).
+ \]
+ As long as we restrict to the choices of $n$ and $q$ such that
+ \[
+ t > n \sqrt{\frac{\log(1/q)}{\log n}},
+ \]
+ we can bound this by
+ \[
+ \leq \exp \left(n \log t - \binom{t}{2} \frac{1}{n} \right) = o(1)
+ \]
+ in the limit $t \to \infty$. We set
+ \[
+ q = \lambda,\quad n = \frac{t \sqrt{\log t}}{\sqrt{\log 1/\lambda}}.
+ \]
+ Then the expected number of complete partitions is $o(1)$, so with positive probability $G(n, p)$ has no complete partition at all, and hence no $K_t$ minor, while having
+ \[
+ (1 + o(1))\, p \binom{n}{2} = (\alpha + o(1))\, t \sqrt{\log t} \cdot n
+ \]
+ many edges.
+\end{proof}
+
+At this point, the obvious question to ask is --- is $t \sqrt{\log t}$ the correct growth rate? The answer is yes, and perhaps surprisingly, the proof is also probabilistic. Before we prove that, we need the following lemma:
+
+\begin{lemma}
+ Let $k \in \N$ and $G$ be a graph with $e(G) \geq 11 k |G|$. Then there exists some $H$ with
+ \[
+ |H| \leq 11k + 2,\quad 2\delta (H) \geq |H| + 4k - 1
+ \]
+ such that $G \succ H$.
+\end{lemma}
+
+\begin{proof}
+ We use our previous lemma as a starting point. Letting $\ell = 11k$, we know that we can find $H_1$ such that
+ \[
+ G \succ K_1 + H_1,
+ \]
+ with $|H_1| \leq 2\ell$ and $\delta(H_1) \geq \ell$. We shall improve the connectivity of this $H_1$ at the cost of throwing away some elements.
+
+ By divine inspiration, consider the constant $\beta \approx 0.37$ defined by the equation
+ \[
+ 1 = \beta \left(1 + \log \frac{2}{\beta}\right),
+ \]
+ and the function $\phi$ defined by
+ \[
+ \phi(F) = \beta \ell \frac{|F|}{2} \left( \log \frac{|F|}{\beta \ell} + 1\right).
+ \]
+ Now consider the set of graphs
+ \[
+ \mathcal{C} = \{F : |F| \geq \beta \ell, e(F) \geq \phi(F)\}.
+ \]
+ Observe that $H_1 \in \mathcal{C}$, since $\delta(H_1) \geq \ell$ and $|H_1| \leq 2\ell$.
+
+ Let $H_2$ be a subcontraction of $H_1$, minimal with respect to $\mathcal{C}$.
+
+ Since the complete graph of order $\lceil \beta \ell\rceil$ is not in $\mathcal{C}$, as $\phi > \binom{\beta \ell}{2}$, the only reason we are minimal is that we have hit the bound on the number of edges. Thus, we must have
+ \[
+ |H_2| \geq \beta\ell + 1,\quad e(H_2) = \lceil \phi(H_2)\rceil,
+ \]
+ and
+ \[
+ e(H_2/uv) < \phi(H_2/uv)
+ \]
+ for all edges $uv$ of $H_2$.
+
+ Choose a vertex $u \in H_2$ of minimum degree, and put $H_3 = H_2 [\Gamma(u)]$. Then we have
+ \[
+ |H_3| = \delta (H_2) \leq \frac{2 \lceil \phi(H_2)\rceil}{|H_2|} \leq \left\lfloor\beta \ell \left(\log \left(\frac{|H_2|}{\beta \ell}\right) + 1\right) + \frac{2}{|H_2|}\right\rfloor \leq \ell,
+ \]
+ since $\beta \ell \leq |H_2| \leq 2\ell$.
+
+ Let's write $b = \beta \ell$ and $h = |H_2|$. Then by the usual argument, we have
+ \begin{align*}
+ &\hphantom{ {}\geq{}}2 \delta (H_3) - |H_3| \\
+ &\geq 2 (\phi(H_2) - \phi(H_2/uv) - 1) - |H_3|\\
+ &\geq bh \left(\log \frac{h}{b} + 1\right) - b (h - 1) \left(\log \frac{h - 1}{b} + 1\right) - b \left(\log \frac{h}{b} + 1\right) - 3\\% funny terms for rounding.
+ &= b(h - 1) \log \frac{h}{h - 1} - 3\\
+ &> b - 4,
+ \end{align*}
+ because $h \geq b + 1$ and $x \log \left(1 + \frac{1}{x}\right) > 1 - \frac{1}{x}$ for real $x > 1$. So
+ \[
+ 2 \delta (H_3) - |H_3| \geq \beta \ell - 4 > 4k - 4.
+ \]
+ If we put $H = K_2 + H_3$, then $G \succ H$, $|H| \leq 11k + 2$ and
+ \[
+ 2\delta (H) - |H| \geq 2 \delta(H_3) - |H_3| + 2 \geq 4k - 1.\qedhere
+ \]
+\end{proof}
+% Where did that magical function $\phi$ come from? This was an argument by Mader, who had a proof using two straight lines. The exact function is found by solving a differential equation of the form $\phi(x) - \phi(x - 1) \approx \phi'(x)$.
+% use a graph of steeper slope. We know we have 2\ell vertices
+
+
+\begin{thm}
+ We have
+ \[
+ c(t) \leq 7 t \sqrt{\log t}
+ \]
+ if $t$ is large.
+\end{thm}
+
+The idea of the proof is as follows: First, we pick some $H$ with $G \succ H$ such that $H$ has high minimum degree, as given by the previous lemma. We now randomly pick disjoint subsets $U_1, \ldots, U_{2t}$, and hope that they give us a subcontraction to $K_t$. There are more sets that we need, because some of them might be ``bad'', in the sense that, for example, there might be no edges from $U_1$ to $U_{57}$. However, since $H$ has high minimum degree, the probability of being bad is quite low, and with positive probability, we can find $t$ subsets that have an edge between any pair. While the $U_i$ are not necessarily connected, this is not a huge problem since $H$ has so many edges that we can easily find paths that connect the different vertices in $U_i$.
+
+\begin{proof}
+ For large $t$, we can choose $k \in \N$ so that
+ \[
+ 5.8t \sqrt{\log_2 t} \leq 11k \leq 7t \sqrt{\log t}.
+ \]
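+ Such a $k$ exists for large $t$: taking $\log$ to be the natural logarithm (as the constants suggest), $\log_2 t = \log t/\log 2$, so
+ \[
+ 5.8 \sqrt{\log_2 t} = \frac{5.8}{\sqrt{\log 2}} \sqrt{\log t} \approx 6.97 \sqrt{\log t} < 7 \sqrt{\log t},
+ \]
+ and the interval $[5.8 t \sqrt{\log_2 t}, 7 t \sqrt{\log t}]$ has length about $0.03\, t \sqrt{\log t}$, which exceeds $11$ once $t$ is large.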
+ Let $\ell = \lceil 1.01 \sqrt{\log_2 t}\rceil$. Then $k \geq t\ell/2$. We want to show that $G \succ K_t$ if $e(G) \geq 11k |G|$.
+
+ We already know that $G \succ H$ where $h = |H| \leq 11k + 2$, and $2 \delta(H) \geq |H| + 4k - 1$. We show that $H \succ K_t$.
+
+ From now on, we work entirely in $H$. Note that any two adjacent vertices have at least $2\delta(H) - |H| \geq 4k - 1$ common neighbours. Randomly select $2t$ disjoint $\ell$-sets $U_1, \ldots, U_{2t}$ in $V(H)$. This is possible since we need $2t \ell$ vertices, and $2t\ell \leq 4k$.
+
+ Of course, we only need $t$ many of those $\ell$ sets, and we want to select the ones that have some chance of being useful to us.
+
+ Fix a part $U$. For any vertex $v$ (not necessarily in $U$), its degree is at least $h/2$. So the probability that $v$ has no neighbour in $U$ is at most $\binom{h/2}{\ell} / \binom{h}{\ell} \leq 2^{-\ell}$. Write $X(U)$ for the vertices having no neighbour in $U$, then $\E |X(U)| \leq 2^{-\ell} h$.
+
+ We say $U$ is \emph{bad} if $|X(U)| > 8 \cdot 2^{-\ell}h$. Then by Markov's inequality, the probability that $U$ is bad is $< \frac{1}{8}$. Hence the expected number of bad sets amongst $U_1, \ldots, U_{2t}$ is $\leq \frac{t}{4}$. By Markov again, the probability that there are more than $\frac{t}{2}$ bad sets is $< \frac{1}{2}$.
+
+ We say a pair of sets $(U, U')$ is \emph{bad} if there is no $U-U'$ edge. Now the probability that $U, U'$ is bad is $\P(U' \subseteq X(U))$. If we condition on the event that $U$ is good (i.e.\ not bad), then this probability is bounded by
+ \[
+ \binom{8 \cdot 2^{-\ell} h}{\ell} / \binom{h - \ell}{\ell} \leq 8^\ell 2^{-\ell^2} \left(1 + \frac{\ell}{h - \ell}\right)^\ell \leq 8^\ell 2^{-\ell^2} e^{\ell^2/(h - \ell)} \leq 9^\ell 2^{-\ell^2}
+ \]
+ if $t$ is large (hence $\ell$ is slightly large and $h$ is very large). We can then bound this by $\frac{1}{8t}$.
+
+ Hence, the expected number of bad pairs (where one of them is good) is at most
+ \[
+ \frac{1}{8t} \binom{2t}{2} \leq \frac{t}{4}.
+ \]
+ So the probability that there are more than $\frac{t}{2}$ such bad pairs is $< \frac{1}{2}$.
+
+ Now with positive probability, $U_1, \ldots, U_{2t}$ has at most $\frac{t}{2}$ bad sets and at most $\frac{t}{2}$ bad pairs amongst the good sets. Then we can find $t$ many sets that are pairwise good. We may wlog assume they are $U_1, \ldots, U_t$.
+
+ Fixing such a choice $U_1, \ldots, U_t$, we now work deterministically. We would be done if each $U_i$ were connected, but they need not be. However, we can find $W_1, \ldots, W_t$ in $V(H) \setminus (U_1 \cup \cdots \cup U_t)$ such that $U_i \cup W_i$ is connected for $1 \leq i \leq t$.
+
+ Indeed, if $U_i = \{u_1, \ldots, u_\ell\}$, we pick a common neighbour of $u_{j - 1}$ and $u_j$ if $u_{j - 1} u_j \not \in E(H)$, and put it in $W_i$. In total, this requires us to pick $\leq t\ell$ distinct vertices in $V(H) - (U_1 \cup \cdots \cup U_t)$, and we can do this because $u_{j - 1}$ and $u_j$ have at least $4k - t\ell \geq t\ell$ common neighbours in this set.
+\end{proof}
+
+It has been shown (2001) that the lower bound is in fact sharp, i.e.\ $c(t) = (\alpha + o(1)) t \sqrt{\log t}$.
+
+So we now understand $c(t)$ quite well. How about subdivision and linking? We work on $f(k)$ next, because Robertson and Seymour (1995) showed that our first lemma holds with the hypothesis $G \succ K_{3k}$ in place of $G \supseteq TK_{3k}$. Since we also know how many edges are needed to force a $K_{3k}$ minor, combining this with our previous theorem gives
+\[
+ f(k) = O(k \sqrt{\log k}).
+\]
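+Indeed, if $\kappa(G) \geq 2c(3k)$ (and note $c(3k) \geq k$ for large $k$), then $\delta(G) \geq 2c(3k)$, so $e(G) \geq c(3k)|G|$ and hence $G \succ K_{3k}$; together with $\kappa(G) \geq 2k$, the Robertson--Seymour version of our first lemma shows $G$ is $k$-linked. Thus
+\[
+ f(k) \leq 2 c(3k) = O(k \sqrt{\log k}).
+\]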
+We know we can't do better than $k$, but we can get it down to order $k$ by proving our first lemma under the weaker hypothesis that $G \succ H$ where $H$ is dense.
+
+So far, we have proven theorems that say, roughly, ``if $G$ is a graph with enough edges, then it subcontracts to some highly connected thing''. Saying that $G$ subcontracts to some highly connected thing is equivalent to saying that we can find disjoint non-empty connected subgraphs $D_1, \ldots, D_m$ with many edges between them. The lemma we are going to prove next says that under certain conditions, we can choose the $D_i$ so that they contain certain prescribed points.
+
+\begin{defi}[$S$-cut]\index{$S$-cut}
+ Given $S \subseteq V(G)$, an $S$-cut is a pair $(A, B)$ of subsets of the vertices such that $A \cup B = V(G)$, $S \subseteq A$ and $e(A \setminus B, B \setminus A) = 0$. The \term{order} of the cut is $|A \cap B|$.
+
+ We say $(A, B)$ \emph{avoids} $C$ if $A \cap V(C) = \emptyset$.
+\end{defi}
+
+\begin{eg}
+ For any $S \subseteq A$, the pair $(A, V(G))$ is an $S$-cut.
+\end{eg}
+
+\begin{lemma}
+ Let $d \geq 0$, $k \geq 2$ and $h \geq d + \lfloor 3k/2 \rfloor$ be integers.
+
+ Let $G$ be a graph, $S= \{s_1, \ldots, s_k\} \subseteq V(G)$. Suppose there exists disjoint subgraphs $C_1, \ldots, C_h$ of $G$ such that
+ \begin{itemize}
+ \item[($*$)] Each $C_i$ is either connected, or each of its components meets $S$. Moreover, each $C_i$ is adjacent to all but at most $d$ of those $C_j$, $j \not= i$, that do not meet $S$.
+ \item[($\dagger$)] Moreover, no $S$-cut of order $< k$ avoids $d + 1$ of $C_1, \ldots, C_h$.
+ \end{itemize}
+ Then $G$ contains disjoint non-empty connected subgraphs $D_1, \ldots, D_m$, where
+ \[
+ m = h - \lfloor k/2 \rfloor,
+ \]
+ such that for $1 \leq i \leq k$, $s_i \in D_i$, and $D_i$ is adjacent to all but at most $d$ of $D_{k + 1}, \ldots, D_m$.
+\end{lemma}
+
+\begin{proof}
+ Suppose the theorem is not true. Then there is a minimal counterexample $G$ (with minimality defined with respect to proper subgraphs).
+
+ We first show that we may assume $G$ has no isolated vertices. If $v$ is an isolated vertex and $v \not \in S$, then we can simply apply the result to $G - v$. If $v \in S$, then the $S$-cut $(S, V(G) \setminus \{v\})$ of order $k - 1$ avoids $h - k \geq d + 1$ of the $C_i$'s, contradicting $(\dagger)$.
+
+ \begin{claim}
+ If $(A, B)$ is an $S$-cut of order $k$ avoiding $d + 1$ many of the $C_i$'s, then $B = V(G)$ and $A$ is discrete.
+ \end{claim}
+ Indeed, given such an $S$-cut, we define
+ \begin{align*}
+ S' &= A \cap B\\
+ G' &= G[B] - E(S')\\
+ C_i' &= C_i \cap G'.
+ \end{align*}
+ We now make a further claim:
+ \begin{claim}
+ $(G', S', \{C_i'\})$ satisfies the hypothesis of the lemma.
+ \end{claim}
+ We shall assume this claim, and then come back to establishing the claim.
+
+ Assuming these claims, we show that $G$ isn't a counterexample after all. Since $(G', S', \{C_i'\})$ satisfies the hypothesis of the lemma, by minimality, we can find subgraphs $D_1', \ldots, D_m'$ in $G'$ that satisfy the conclusion of the lemma for $G'$ and $S'$. Of course, these do not necessarily contain our $s_i$. However, we can fix this by adding paths from $S'$ to the $s_i$.
+
+ Indeed, $G[A]$ has no $S$-cut $(A'', B'')$ of order less than $k$ with $S' \subseteq B''$, else $(A'', B'' \cup B)$ is an $S$-cut of $G$ of the forbidden kind, since $A$, and hence $A''$ avoids too many of the $C_i$. Hence by Menger's theorem, there are vertex disjoint paths $P_1, \ldots, P_k$ in $G[A]$ joining $s_i$ to $s_i'$ (for some labelling of $S'$). Then simply take
+ \[
+ D_i = D_i' \cup P_i.
+ \]
+
+ \separator
+
+ To check that $(G', S', \{C_i'\})$ satisfy the hypothesis of the lemma, we first need to show that the $C_i'$ are non-empty.
+
+ If $(A, B)$ avoids $C_j$, then by definition $C_j \cap A = \emptyset$, so $C_j \subseteq B \setminus A$ and $C_j = C_j'$. By assumption, there are $d + 1$ many $C_j$'s for which this holds. Also, since these don't meet $S$, each $C_i$ is adjacent to at least one such $C_j$. In particular, $C_i'$ is non-empty: since $e(A \setminus B, B \setminus A) = 0$, the endpoint in $C_i$ of an edge to $C_j \subseteq B \setminus A$ must lie in $B$.
+
+ Let's now check the conditions:
+ \begin{itemize}
+ \item[($*$)] For each $C_i'$, consider its components. If there is some component that does not meet $S'$, then this component is contained in $B \setminus A$. So this must also be a component of $C_i$. But $C_i$ is connected. So this component is $C_i$. So $C_i' = C_i$. Thus, either $C_i'$ is connected, or all components meet $S'$.
+
+ Moreover, since any $C_j'$ not meeting $S'$ equals $C_j$, it follows that each $C_i$ and so each $C_i'$ is adjacent to all but at most $d$ of these $C_j'$s. Therefore $(*)$ holds.
+
+ \item[($\dagger$)] If $(A', B')$ is an $S'$-cut in $G'$ avoiding some $C_i'$, then $(A \cup A', B')$ is an $S$-cut of $G$ of the same order, and it avoids $C_i$ (since $C_i'$ doesn't meet $S'$, and we have argued this implies $C_i' = C_i$, and so $C_i \cap A = \emptyset$). In particular, no $S'$-cut $(A', B')$ of $G'$ has order less than $k$ and still avoids $d + 1$ of $C_1', \ldots, C_h'$.
+ \end{itemize}
+
+ \separator
+
+ We can now use our original claim. We know what $S$-cuts of order $k$ look like, so we see that if there is an edge that does not join two $C_i$'s, then we can contract that edge to get a smaller counterexample. Recalling that $G$ has no isolated vertices, we see that in fact $V(G) = \bigcup V(C_i)$, and $|C_i| = 1$ unless $C_i \subseteq S$.
+
+ Let
+ \[
+ C = \bigcup \{V(C_i): |C_i| \geq 2\}.
+ \]
+ We claim that there is a set $I$ of $|C|$ independent edges meeting $C$. If not, by Hall's theorem, we can find $X \subseteq C$ whose neighbourhood $Y$ in $V(G) - S$ satisfies $|Y| < |X|$. Then $(S \cup Y, V(G) - X)$ is an $S$-cut of order $|S| - |X| + |Y| < |S| = k$ avoiding $\geq |G| - |S| - |Y|$ many $C_i$'s, and this is
+ \[
+ \geq |G| - |S| - |X| + 1 \geq |G| - |C| - k + 1.
+ \]
+ But $h \leq |G| - |C| + \frac{|C|}{2}$, since $|G| - |C|$ is the number of $C_i$'s of size $1$, so we can bound the above by
+ \[
+ \geq h - \frac{|C|}{2} - k + 1 \geq h - \frac{3k}{2} + 1 \geq d + 1.
+ \]
+ This contradiction shows that $I$ exists.
+
+ Now we can just write down the $D_i$'s. For $1 \leq i \leq k$, set $D_i = \{s_i\}$ if $\{s_i\}$ is a $C_\ell$ with $|C_\ell| = 1$, i.e.\ $s_i \not \in C$. If $s_i \in C$, let $D_i$ be the edge of $I$ meeting $s_i$.
+
+ Let $D_{k + 1}, \ldots, D_m$ each be single vertices of $G - S - (\text{ends of }I)$. Note that those exist because $\geq |G| - k - |C| \geq h - \lfloor \frac{3k}{2}\rfloor = m - k$ vertices are available. Note that each $D_i$ contains a $C_\ell$ with $|C_\ell| = 1$. So each $D_i$ is joined to all but $\leq d$ many $D_j$'s.
+\end{proof}
+
+The point of this result is to prove the following theorem:
+\begin{thm}
+ Let $G$ be a graph with $\kappa(G) \geq 2k$ and $e(G) \geq 11k |G|$. Then $G$ is $k$-linked. In particular, $f(k) \leq 22k$.
+\end{thm}
+Note that if $\kappa(G) \geq 22k$ then $\delta(G) \geq 22k$, so $e(G) \geq 11k |G|$.
+
+\begin{proof}
+ We know that under these conditions, $G \succ H$ with $|H| \leq 11k + 2$ and $2\delta(H) \geq |H| + 4k - 1$. Let $h = |H|$, and $d = h - 1 - \delta (H)$. Note that
+ \[
+ h = 2d + 2 + 2 \delta(H) - h \geq 2d + 4k.
+ \]
+ Let $C_1, \ldots, C_h$ be connected subgraphs of $G$ which contract to form $H$. Then clearly each $C_i$ is joined to all but at most $h - 1 - \delta(H) = d$ other $C_j$'s. Now let $s_1, \ldots, s_k, t_1, \ldots, t_k$ be the vertices we want to link up. Let $S = \{s_1, \ldots, s_k, t_1, \ldots, t_k\}$. Observe that $G$ has no $S$-cut of order $< 2k$ avoiding any $C_i$, since $\kappa(G) \geq 2k$. So the conditions are satisfied, but with $2k$ in place of $k$.
+
+ So there are subgraphs $D_1, \ldots, D_m$, where $m = h - k \geq 2d + 3k$ as described. We may assume $s_i \in D_i$ and $t_i \in D_{k + i}$. Note that for each pair $(D_i, D_j)$, we can find $\geq m - 2d \geq 3k$ other $D_\ell$ such that $D_\ell$ is joined to both $D_i$ and $D_j$. So for each $i = 1, \ldots, k$, we can find an unused $D_\ell$ not among $D_1, \ldots, D_{2k}$ that we can use to connect up $D_i$ and $D_{k + i}$, hence $s_i$ and $t_i$.
+\end{proof}
+
+In 2004, the coefficient was reduced from $11$ to $5$. This shows $f(k) \leq 10 k$. It is known that $f(1) = 3, f(2) = 6$ and $f(k) \geq 3k - 2$ for $k \geq 3$.
+
+Finally, we consider the function $t(t)$. We begin with the following trivial lemma:
+\begin{lemma}
+ If $\delta(G) \geq t^2$ and $G$ is $\binom{t + 1}{2}$-linked, then $G \supseteq TK_t$.
+\end{lemma}
+
+\begin{proof}
+ Since $\delta(G) \geq t^2$, we may pick vertices $v_1, \ldots, v_t$ and sets $U_1, \ldots, U_t$, all disjoint, so that $|U_i| = t - 1$ and $v_i$ is joined to $U_i$. Then $G - \{v_1, \ldots, v_t\}$ is still $\binom{t + 1}{2} - t = \binom{t}{2}$-linked. So the sets $U_1, \ldots, U_t$ may be joined up with vertex disjoint paths to form a $TK_t$ in $G$ with branch vertices $v_1, \ldots, v_t$.
+\end{proof}
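+The greedy choice in the first step works out exactly: when we come to pick $U_i$, the number of vertices already used is at most
+\[
+ t + (t - 1)(i - 1) \leq t + (t - 1)^2 = t^2 - t + 1,
+\]
+so $v_i$, which has at least $t^2$ neighbours, still has at least $t - 1$ unused ones.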
+
+To apply our previous theorem together with this lemma, we need the following lemma:
+\begin{lemma}
+ Let $k, d \in \N$ with $k \leq \frac{d + 1}{2}$, and suppose $e(G) \geq d|G|$. Then $G$ contains a subgraph $H$ with
+ \[
+ e(H) = d|H| - kd + 1,\quad \delta(H) \geq d + 1,\quad \kappa(H) \geq k.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Define
+ \[
+ \mathcal{E}_{d, k} = \{F \subseteq G: |F| \geq d, e(F) > d|F| - kd\}.
+ \]
 We observe that $G \in \mathcal{E}_{d, k}$, but $K_d \not \in \mathcal{E}_{d, k}$. Hence $|F| > d$ for all $F \in \mathcal{E}_{d, k}$. Let $H$ be a subgraph of $G$ minimal with respect to $H \in \mathcal{E}_{d, k}$. Then $e(H) = d|H| - kd + 1$, and $\delta(H) \geq d + 1$, else $H - v \in \mathcal{E}_{d,k}$ for some $v \in H$.
+
 We only have to worry about the connectivity condition. Suppose $S$ is a cutset of $H$. Let $C$ be a component of $H - S$. Since $\delta(H) \geq d + 1$, we have $|C \cup S| \geq d + 2$ and $|H - C| \geq d + 2$.
+
 Since neither $C \cup S$ nor $H - C$ lies in $\mathcal{E}_{d, k}$, we have
+ \[
+ e(C \cup S) \leq d|C \cup S| - kd \leq d|C| + d |S| - kd
+ \]
+ and
+ \[
+ e(H - C) \leq d|H| - d|C| - kd.
+ \]
 Adding these (every edge of $H$ lies in $C \cup S$ or in $H - C$), we find that
+ \[
+ e(H) \leq d|H| + d|S| - 2 kd.
+ \]
 But we know $e(H) = d|H| - kd + 1$. So we must have $d|S| \geq kd + 1$, and hence $|S| > k$.
+\end{proof}
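The role of the hypothesis $k \leq \frac{d + 1}{2}$ can be verified numerically: $K_d$ fails the defining inequality of $\mathcal{E}_{d, k}$ exactly in that range. A small sketch, not part of the proof:

```python
from math import comb

def in_class(F_order, F_edges, d, k):
    # membership test for E_{d,k}: |F| >= d and e(F) > d|F| - kd
    return F_order >= d and F_edges > d * F_order - k * d

for d in range(2, 60):
    for k in range(1, d + 1):
        excluded = not in_class(d, comb(d, 2), d, k)
        # K_d is excluded precisely when k <= (d+1)/2
        assert excluded == (2 * k <= d + 1)
```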
+
+\begin{thm}
+ \[
+ \frac{t^2}{16} \leq t(t) \leq 13 \binom{t + 1}{2}.
+ \]
+\end{thm}
+Recall our previous upper bound was something like $2^{t^2}$. So this is an improvement.
+\begin{proof}
 The lower bound comes from (disjoint copies of) $K_{t^2/8, t^2/8}$. For the upper bound, write
+ \[
+ d = 13 \binom{t + 1}{2},\quad k = t\cdot (t + 1).
+ \]
+ Then we can find a subgraph $H$ with $\delta(H) \geq d + 1$ and $\kappa(H) \geq k + 1$.
+
+ Certainly $|H| \geq d + 2$, and we know
+ \[
 e(H) \geq d |H| - kd > (d - k)|H| = 11 \binom{t + 1}{2} |H|.
+ \]
+ So we know $H$ is $\binom{t + 1}{2}$-linked, and so $H \supseteq TK_t$.
+\end{proof}
+So $t(t)$ is, more or less, $t^2$!
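The arithmetic behind the choices $d = 13\binom{t + 1}{2}$ and $k = t(t + 1)$ can be checked mechanically; this is a sketch under the same definitions, not part of the proof:

```python
from math import comb

for t in range(2, 40):
    d = 13 * comb(t + 1, 2)
    k = t * (t + 1)
    assert k == 2 * comb(t + 1, 2)       # k is twice binom(t+1, 2)
    assert 2 * k <= d + 1                # hypothesis k <= (d+1)/2 of the lemma
    assert d - k == 11 * comb(t + 1, 2)  # coefficient in e(H) > (d-k)|H|
    assert d + 1 >= t * t                # so delta(H) >= d + 1 >= t^2
```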
+
+\section{Extremal hypergraphs}
Define $K_\ell^\ell(t)$ to be the complete $\ell$-partite $\ell$-uniform hypergraph with $\ell$ classes of size $t$, containing all $t^{\ell}$ edges with one vertex in each class.
+
\begin{thm}[Erd\H{o}s]
+ Let $G$ be $\ell$-uniform of order $n$ with $p \binom{n}{\ell}$ edges, where $p \geq 2n^{-1/t^{\ell - 1}}$. Then $G$ contains $K_{\ell}^{\ell}(t)$ (provided $n$ is large).
+\end{thm}
+If we take $\ell = 2$, then this agrees with what we know about the exact value of the extremal function.
+
+\begin{proof}
+ Assume that $t \geq 2$. We proceed by induction on $\ell \geq 2$. For each $(\ell - 1)$-set $\sigma \subseteq V(G)$, let
+ \[
+ N(\sigma) = \{v: \sigma \cup \{v\} \in E(G)\}.
+ \]
+ Then the average degree is
+ \[
+ \binom{n}{\ell - 1}^{-1} \sum |N(\sigma)| = \binom{n}{\ell - 1}^{-1} \ell |E(G)| = p (n - \ell + 1).
+ \]
+ For each of the $\binom{n}{t}$ many $t$-sets $T \subseteq V(G)$, let
+ \[
+ D(T) = \{\sigma : T \subseteq N(\sigma)\}.
+ \]
+ Then
+ \[
+ \sum_T |D(T)| = \sum_\sigma \binom{|N(\sigma)|}{t} \geq \binom{n}{\ell - 1} \binom{p(n - \ell + 1)}{t},
+ \]
+ where the inequality follows from convexity, since we can assume that $p(n - \ell + 1) \geq t - 1$ if $n$ is large.
+
+ In particular, there exists a $T$ with
+ \[
+ |D(T)| \geq \binom{n}{t}^{-1} \binom{n}{\ell - 1} \binom{p(n - \ell + 1)}{t} \geq \frac{1}{2} p^t \binom{n}{\ell - 1}.
+ \]
 when $n$ is large. By our lower bound on $p$, we can write this as
+ \[
+ |D(T)| \geq 2^{t - 1} n^{-1/t^{\ell - 2}} \binom{n}{\ell - 1}.
+ \]
 If $\ell = 2$, then this is $\geq 2^{t - 1} \geq t$. So there are at least $t$ vertices joined to everything in $T$, giving $K_{t, t}$.
+
+ If $\ell \geq 3$, then $|D(T)| \geq 2 n^{-1/t^{\ell - 2}} \binom{n}{\ell - 1}$. So, by induction, the $(\ell - 1)$-uniform hypergraph induced by the $\sigma$ with $T \subseteq N(\sigma)$ contains $K_{\ell - 1}^{\ell - 1}(t)$, giving $K_{\ell}^\ell(t)$ with $T$.
+\end{proof}
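The double-counting step, that the average of $|N(\sigma)|$ over all $(\ell - 1)$-sets equals $p(n - \ell + 1)$, can be verified numerically. A sketch with hypothetical parameter values:

```python
from math import comb

def avg_neighbourhood(n, l, p):
    # |E(G)| = p * C(n, l); each edge contains l many (l-1)-subsets sigma
    edges = p * comb(n, l)
    return l * edges / comb(n, l - 1)

for n, l in [(30, 2), (30, 3), (50, 4), (100, 5)]:
    p = 0.3
    assert abs(avg_neighbourhood(n, l, p) - p * (n - l + 1)) < 1e-9
```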
A simple random argument shows that this is the right order of magnitude.
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/hydrodynamic_stability.tex b/books/cam/III_M/hydrodynamic_stability.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b1726ce6fa9cd6940573403b085f51cd9eb3539c
--- /dev/null
+++ b/books/cam/III_M/hydrodynamic_stability.tex
@@ -0,0 +1,3131 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2017}
+\def\nlecturer {C.\ P.\ Caulfield}
+\def\ncourse {Hydrodynamic Stability}
+
+\input{header}
+
+\newcommand\Ra{\mathrm{Ra}}
+\renewcommand\Pr{\mathrm{Pr}}
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
Developing an understanding of how ``small'' perturbations grow, saturate and modify fluid flows is central to addressing many challenges of interest in fluid mechanics. Furthermore, many applied mathematical tools of much broader relevance have been developed to solve hydrodynamic stability problems, and hydrodynamic stability theory remains an exceptionally active area of research, with several exciting new developments being reported over the last few years.
+
In this course, an overview of some of these recent developments will be presented. After an introduction to the general concepts of flow instability, presenting a range of examples, the major content of this course will be focussed on the broad class of flow instabilities where velocity ``shear'' and fluid inertia play key dynamical roles. Such flows, typically characterised by sufficiently ``high'' Reynolds number $Ud/\nu$, where $U$ and $d$ are characteristic velocity and length scales of the flow, and $\nu$ is the kinematic viscosity of the fluid, are central to modelling flows in the environment and industry. They typically demonstrate the key role played by the redistribution of vorticity within the flow, and such vortical flow instabilities often trigger the complex, yet hugely important process of ``transition to turbulence''.
+
A hierarchy of mathematical approaches will be discussed to address a range of ``stability'' problems, from more traditional concepts of ``linear'' infinitesimal normal mode perturbation energy growth on laminar parallel shear flows to transient, inherently nonlinear perturbation growth of general measures of perturbation magnitude over finite time horizons where flow geometry and/or fluid properties play a dominant role. The course will also discuss in detail physical interpretations of the various flow instabilities considered, as well as the industrial and environmental application of the results of the presented mathematical analyses.
+
+\subsubsection*{Pre-requisites}
+Elementary concepts from undergraduate real analysis. Some knowledge of complex analysis would be advantageous (e.g.\ the level of IB Complex Analysis/Methods). No knowledge of functional analysis is assumed.
+}
+\tableofcontents
+
+\section{Linear stability analysis}
+\subsection{Rayleigh--Taylor instability}
+In this section, we would like to investigate the stability of a surface between two fluids:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) sin (-2, 0.2) cos (-1, 0) sin (0, -0.2) cos (1, 0) sin (2, 0.2) cos (3, 0) node [right] {$\eta$};
+ \fill [mblue, opacity=0.5] (-3, -1) -- (-3, 0) sin (-2, 0.2) cos (-1, 0) sin (0, -0.2) cos (1, 0) sin (2, 0.2) cos (3, 0) -- (3, -1);
+ \fill [mblue, opacity=0.2] (-3, 1) -- (-3, 0) sin (-2, 0.2) cos (-1, 0) sin (0, -0.2) cos (1, 0) sin (2, 0.2) cos (3, 0) -- (3, 1);
+
+ \draw [dashed] (-3, 0) -- (3, 0);
+ \node at (-1.7, -0.6) {$\rho_2$};
+ \node at (-1.7, 0.6) {$\rho_1$};
+ \end{tikzpicture}
+\end{center}
+We assume the two fluids have densities $\rho_1$ and $\rho_2$, and are separated by a smooth interface parametrized by the deviation $\eta$.
+
+Recall that the Navier--Stokes equations for an incompressible fluid are
+\[
+ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u}\right) = - \nabla P- g \rho \hat{\mathbf{z}} + \mu \nabla^2 \mathbf{u},\quad \nabla \cdot \mathbf{u} = 0.
+\]
+Usually, when doing fluid mechanics, we assume density is constant, and divide the whole equation across by $\rho$. We can then forget the existence of $\rho$ and carry on. We can also get rid of the gravity term. We know that the force of gravity is balanced out by a hydrostatic pressure gradient. More precisely, we can write
+\[
+ P = P_h(z) + p (x, t),
+\]
+where $P_h$ satisfies
+\[
+ \frac{\partial P_h}{\partial z} = -g\rho.
+\]
+We can then write our equation as
+\[
+ \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot \nabla \mathbf{u} = - \nabla \left(\frac{p}{\rho}\right) + \nu \nabla^2 \mathbf{u}.
+\]
To do all of these, we must assume that the density $\rho$ is constant, but this is clearly not the case when we have two distinct fluids.
+
+Since it is rather difficult to move forward without making \emph{any} simplifying assumptions, we shall focus on the case of inviscid fluids, so that we take $\nu = 0$. We will also assume that the fluid is irrotational. In this case, it is convenient to use the vector calculus identity
+\[
+ \mathbf{u} \cdot \nabla \mathbf{u} = \nabla \left(\frac{1}{2}|\mathbf{u}|^2\right) - \mathbf{u} \times (\nabla \times \mathbf{u}),
+\]
+where we can now drop the second term. Moreover, since $\nabla \times \mathbf{u} = 0$, Stokes' theorem allows us to write $\mathbf{u} = \nabla \phi$ for some \term{velocity potential} $\phi$. Finally, in each separate region, the density $\rho$ is constant. So we can now write
+\[
+ \nabla \left(\rho \frac{\partial \phi}{\partial t} + \frac{1}{2} \rho |\nabla \phi|^2 + P + g \rho z\right) = 0.
+\]
+This is \term{Bernoulli's theorem}.
+
Applying these to our scenario, since we have two fluids, we have two separate velocity potentials $\phi_1, \phi_2$ for the two regions. Both of these independently satisfy the incompressibility hypothesis
+\[
+ \nabla^2\phi_{1, 2} = 0.
+\]
+Bernoulli's theorem tells us the quantity
+\[
+ \rho \frac{\partial \phi}{\partial t} + \frac{1}{2} \rho |\nabla \phi|^2 + P + g \rho z
+\]
+should be constant across the fluid. Our investigation will not use the full equation. All we will need is that this quantity matches at the interface, so that we have
+\[
 \left.\rho_1 \frac{\partial \phi_1}{\partial t} + \frac{1}{2} \rho_1 |\nabla \phi_1|^2 + P_1 + g \rho_1 \eta\right|_{z = \eta} = \left.\rho_2 \frac{\partial \phi_2}{\partial t} + \frac{1}{2} \rho_2 |\nabla \phi_2|^2 + P_2 + g \rho_2 \eta\right|_{z = \eta}.
+\]
+To understand the interface, we must impose boundary conditions. First of all the vertical velocities of the fluids must match with the interface, so we impose the \term{kinematic boundary condition}
+\[
+ \left.\frac{\partial \phi_1}{\partial z} \right|_{z = \eta} = \left.\frac{\partial \phi_2}{\partial z} \right|_{z = \eta} = \frac{\D \eta}{\D t},
+\]
+where
+\[
 \frac{\D}{\D t} = \frac{\partial}{\partial t} + \mathbf{u} \cdot \nabla.
+\]
+We also make another, perhaps dubious boundary condition, namely that the pressure is continuous along the interface. This does not hold at all interfaces. For example, if we have a balloon, then the pressure inside is greater than the pressure outside, since that is what holds the balloon up. In this case, it is the rubber that is exerting a force to maintain the pressure difference. In our case, we only have two fluids meeting, and there is no reason to assume a discontinuity in pressure, and thus we shall assume it is continuous. If you are not convinced, this is good, and we shall later see this is indeed a dubious assumption.
+
+But assuming it for the moment, this allows us to simplify our Bernoulli's theorem to give
+\[
 \rho_1 \frac{\partial \phi_1}{\partial t} + \frac{1}{2} \rho_1 |\nabla \phi_1|^2 + g \rho_1 \eta = \rho_2 \frac{\partial \phi_2}{\partial t} + \frac{1}{2} \rho_2 |\nabla \phi_2|^2 + g \rho_2 \eta.
+\]
+There is a final boundary condition that is specific to our model. What we are going to do is that we will start with a flat, static solution with $\mathbf{u} = 0$ and $\eta = 0$. We then perturb $\eta$ a bit, and see what happens to our fluid. In particular, we want to see if the perturbation is stable.
+
+Since we expect the interesting behaviour to occur only near the interface, we make the assumption that there is no velocity in the far field, i.e.\ $\phi_1 \to 0$ as $z \to \infty$ and $\phi_2 \to 0$ as $z \to-\infty$ (and similarly for the derivatives). For simplicity, we also assume there is no $y$ dependence.
+
+We now have equations and boundary conditions, so we can solve them. But these equations are pretty nasty and non-linear. At this point, one sensible approach is to linearize the equations by assuming everything is small. In addition to assuming that $\eta$ is small, we also need to assume that various derivatives such as $\nabla \phi$ are small, so that we can drop all second-order terms. Since $\eta$ is small, the value of, say, $\frac{\partial \phi_1}{\partial z}$ at $\eta$ should be similar to that at $\eta = 0$. Since $\frac{\partial \phi_1}{\partial z}$ itself is already assumed to be small, the difference would be second order in smallness.
+
+So we replace all evaluations at $\eta$ with evaluations at $z = 0$. We are then left with the collection of $3$ equations
+\begin{align*}
+ \nabla^2 \phi_{1, 2} &= 0\\
+ \left.\frac{\partial \phi_{1, 2}}{\partial z}\right|_{z = 0} &= \eta_t\\
+ \left.\rho_1 \frac{\partial \phi_1}{\partial t} + g \rho_1 \eta \right|_{z = 0} &= \left.\rho_2 \frac{\partial \phi_2}{\partial t} + g \rho_2 \eta \right|_{z = 0}.
+\end{align*}
+
+This is a nice linear problem, and we can analyze the Fourier modes of the solutions. We plug in an ansatz
+\begin{align*}
+ \phi_{1, 2}(x, z, t) &= \hat{\phi}_{1, 2}(z) e^{i(kx - \omega t)}\\
+ \eta(x, t) &= B e^{i(kx - \omega t)}.
+\end{align*}
+Substituting into Laplace's equation gives
+\[
 \hat{\phi}_{1, 2}'' - k^2 \hat{\phi}_{1, 2} = 0.
+\]
+Using the far field boundary conditions, we see that we have a family of solutions
+\[
+ \hat{\phi}_1 = A_1 e^{-kz},\quad \hat{\phi}_2 = A_2 e^{kz}.
+\]
+The kinematic boundary condition tells us we must have
+\[
+ \hat{\phi}_{1, 2}'(0) = -i\omega B.
+\]
+We can solve this to get
+\[
+ B = \frac{k A_1}{i \omega} = - \frac{kA_2}{ i \omega}.
+\]
+In particular, we must have $A \equiv A_1 = - A_2$. We can then write
+\[
 \eta = \frac{kA}{i\omega} e^{i(kx - \omega t)}.
+\]
+Plugging these into the final equation gives us
+\[
+ \rho_1 (-i\omega A) + g \rho_1 \frac{k A}{i \omega} = \rho_2 i \omega A + g \rho_2 \frac{kA}{i \omega}.
+\]
Crucially, we can cancel the $A$ throughout the equation, which gives us a result independent of $A$. This is, after all, the point of linearization. Solving this gives us a \term{dispersion relation} relating the frequency (or phase speed $c_p = \omega/k$) to the wavenumber:
+\[
+ \omega^2 = \frac{g(\rho_2 - \rho_1)k}{\rho_1 + \rho_2}.
+\]
+If $\rho_2 \gg \rho_1$, then this reduces to $\omega^2 \approx gk$, and this is the usual dispersion relation for deep water waves. On the other hand, if $\rho_2 - \rho_1 > 0$ is small, this can lead to waves of rather low frequency, which is something we can observe in cocktails, apparently.
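As a quick numerical illustration (a sketch with hypothetical densities, not from the notes), the dispersion relation reproduces the deep-water limit and flags the unstable case:

```python
import math

def omega_squared(k, rho1, rho2, g=9.81):
    # omega^2 = g (rho2 - rho1) k / (rho1 + rho2); fluid 1 on top, fluid 2 below
    return g * (rho2 - rho1) * k / (rho1 + rho2)

# stable, rho2 >> rho1: essentially the deep-water relation omega^2 = g k
assert math.isclose(omega_squared(2.0, 0.0, 1000.0), 9.81 * 2.0)
# nearly matched densities give waves of rather low frequency
assert omega_squared(1.0, 998.0, 1000.0) < 0.05
# heavier fluid on top: omega^2 < 0, so omega is imaginary -- instability
assert omega_squared(1.0, 1000.0, 1.2) < 0
```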
+
+But nothing in our analysis actually required $\rho_2 > \rho_1$. Suppose we had $\rho_1 > \rho_2$. This amounts to putting the heavier fluid on top of the lighter one. Anyone who has tried to turn a cup of water upside down will know this is highly unstable. Can we see this from our analysis?
+
If $\rho_1 > \rho_2$, then $\omega^2 < 0$, and $\omega$ has to be imaginary. We shall write it as $\omega = \pm i \sigma$, where $\sigma > 0$. We can then compute
+\[
+ \sigma = \sqrt{\frac{g(\rho_1 - \rho_2)k}{\rho_1 + \rho_2}}.
+\]
+Writing out our expression for $\phi_{1, 2}$, we see that there are $e^{\pm \sigma t}$ terms, and the $e^{\sigma t}$ term dominates in the long run, causing $\phi_{1, 2}$ to grow exponentially. This is the \emph{Rayleigh--Taylor instability}.
+
+There is more to say about the instability. As the wavelength decreases, $k$ increases, and we see that $\sigma$ increases as well. Thus, we see that short-scale perturbations blow up exponentially much more quickly, which means the system is not very well-posed. This is the \term{ultra-violet catastrophe}. Of course, we should not trust our model here. Recall that in our simplifying assumptions, we not only assumed that the amplitudes of our $\phi$ and $\eta$ were small, but also that their derivatives were small. The large $k$ behaviour gives us large $x$ derivatives, and so we have to take into account the higher order terms as well.
+
But we can also provide physical reasons for why small-scale perturbations should be suppressed by such a system. In our model, we assumed there is no surface tension. Surface tension quantifies the tendency of interfaces between fluids to minimize surface area. We know that small-scale variations will cause a large surface area, and so the surface tension will suppress these variations. Mathematically, what surface tension allows for is a pressure jump across the interface.
+
Surface tension is quantified by $\gamma$, the force per unit length. This has dimensions $[\gamma] = MT^{-2}$. Empirically, we observe that the pressure difference across the interface is
+\[
+ \Delta p = - \gamma \nabla \cdot \hat{\mathbf{n}} = 2 \gamma H = \gamma \left(\frac{1}{R_x} + \frac{1}{R_y}\right),
+\]
+where $\hat{\mathbf{n}}$ is the unit normal and $H$ is the \term{mean curvature}. This is an empirical result.
+
+Again, we assume no dependence in the $y$ direction, so we have a cylindrical interface with a single radius of curvature. Linearizing, we have a pressure difference of
+\[
+ (P_2 - P_1) |_{z = \eta} = \frac{\gamma}{R_x} = - \gamma \frac{\frac{\partial^2 \eta}{\partial x^2}}{\left(1 + \left(\frac{\partial \eta}{\partial x}\right)^2\right)^{3/2}} \approx -\gamma \frac{\partial^2 \eta}{\partial x^2}.
+\]
+Therefore the (linearized) dynamic boundary condition becomes
+\[
+ \left.\rho_1 \frac{\partial \phi_1}{\partial t} + g \rho_1 \eta\right|_{z =0 } + \gamma \frac{\partial^2 \eta}{\partial x^2} = \left. \rho_2 \frac{\partial \phi_2}{\partial t} + g \rho_2 \eta\right|_{z = 0}.
+\]
+If we run the previous analysis again, we find a dispersion relation of
+\[
+ \omega^2 = \frac{g(\rho_2 - \rho_1)k + \gamma k^3}{\rho_1 + \rho_2}.
+\]
+Since $\gamma$ is always positive, even if we are in the situation when $\rho_1 > \rho_2$, for $k$ sufficiently large, the system is stable. Of course, for small $k$, the system is still unstable --- we still expect water to fall out even with surface tension. In the case $\rho_1 < \rho_2$, what we get is known as \term{internal gravity-capillary waves}.
+
+In the unstable case, we have
+\[
 \sigma^2 = k \left(\frac{g(\rho_1 - \rho_2)}{\rho_1 + \rho_2}\right) (1 - l_c^2 k^2),
+\]
+where
+\[
+ l_c^2 = \frac{\gamma}{g (\rho_1 - \rho_2)}
+\]
is a characteristic length scale. For $k l_c > 1$, the oscillations are stable, and the maximum value of $\sigma$ is attained at $k = \frac{1}{\sqrt{3}\, l_c}$.
+% insert plot
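Maximising $\sigma^2 \propto k(1 - l_c^2 k^2)$ numerically confirms the cutoff at $k l_c = 1$ and the most unstable wavenumber $k = 1/(\sqrt{3}\, l_c)$. The fluid properties below are hypothetical (roughly water above air):

```python
import numpy as np

g, gamma = 9.81, 0.07          # gravity, surface tension (hypothetical values)
rho1, rho2 = 1000.0, 1.2       # heavy fluid on top: unstable configuration
lc = np.sqrt(gamma / (g * (rho1 - rho2)))

k = np.linspace(1e-3, 1 / lc, 200001)
sigma2 = k * g * (rho1 - rho2) / (rho1 + rho2) * (1 - lc**2 * k**2)

k_star = k[np.argmax(sigma2)]
assert np.isclose(k_star, 1 / (np.sqrt(3) * lc), rtol=1e-3)  # most unstable mode
assert abs(sigma2[-1]) < 1e-6                                # neutral at k = 1/lc
```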
+
In summary, we have a range of unstable wavelengths, and we have identified a ``most unstable wavelength''. Over time, if all modes are triggered, we should expect this most unstable wavelength to dominate. But this is rather hopeful thinking, because it is the result of linear analysis, and we can't expect it to carry over to the non-linear regime.
+
+\subsection{Rayleigh--\tph{B\'enard}{B\'enard}{Bénard} convection}
+The next system we want to study is something that looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [red!50!mred, fill] (0, 0) rectangle (8, -0.2);
+ \draw [blue, fill] (0, 2) rectangle (8, 2.2);
+
+ \fill [mblue, opacity=0.3] (0, 0) rectangle (8, 2);
+
+ \node [right] at (8, 2.1) {$T_0$};
+ \node [right] at (8, -0.1) {$T_0 + \Delta T$};
+ \node [white, left] at (0, -0.1) {$T_0 + \Delta T$};
+
+ \draw [<->] (4, 0) -- (4, 2) node [pos=0.5, right] {$d$};
+ \end{tikzpicture}
+\end{center}
+There is a hot plate at the bottom, a cold plate on top, and some fluid in between. We would naturally expect heat to transmit from the bottom to the top. There are two ways this can happen:
+\begin{itemize}
+ \item Conduction: Heat simply diffuses from the bottom to the top, without any fluid motion.
+ \item Convection: The fluid at the bottom heats up, expands, becomes lighter, and moves to the top.
+\end{itemize}
The first factor is controlled purely by thermal diffusivity $\kappa$, while the latter is mostly controlled by the viscosity $\nu$. It is natural to expect that when the temperature gradient $\Delta T$ is small, most of the heat transfer is due to conduction, as there isn't enough force to overcome the viscosity to move fluid around. When $\Delta T$ is large, the heat transfer will be mostly due to convection, and we would expect the conductive state to be unstable.
+
+To understand the system mathematically, we must honestly deal with the case where we have density variations throughout the fluid. Again, recall that the Navier--Stokes equation
+\[
+ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u}\right) = - \nabla P- g \rho \hat{\mathbf{z}} + \mu \nabla^2 \mathbf{u},\quad \nabla \cdot \mathbf{u} = 0.
+\]
+The static solution to our system is the pure conduction solution, where $\mathbf{u} = 0$, and there is a uniform vertical temperature gradient across the fluid. Since the density is a function of temperature, which in the static case is just a function of $z$, we know $\rho = \rho(z)$. When we perturb the system, we allow some horizontal and time-dependent fluctuations, and assume we can decompose our density field into
+\[
+ \rho = \rho_h(z) + \rho'(x, t).
+\]
+We can then define the hydrostatic pressure by the equation
+\[
+ \frac{\d P_h}{\d z} = - g\rho_h.
+\]
+This then allows us to decompose the pressure as
+\[
+ P = P_h(z) + p'(x, t).
+\]
+Then our equations of motion become
+\[
+ \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} = - \frac{1}{\rho} \nabla p' - \frac{g\rho'}{\rho} \hat{\mathbf{z}} + \nu \nabla^2 \mathbf{u},\quad \nabla \cdot \mathbf{u} = 0
+\]
+We have effectively ``integrated out'' the hydrostatic part, and it is now the deviation from the average that leads to buoyancy forces.
+
An important component of our analysis involves looking at the vorticity. Indeed, vorticity necessarily arises if we have non-trivial density variations. For concreteness, suppose we have an interface between two fluids with different densities, $\rho$ and $\rho + \Delta \rho$. If the interface is horizontal, then nothing happens:
+
+\begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (0, 0) rectangle (3, -1);
+ \fill [mblue, opacity=0.2] (0, 0) rectangle (3, 1);
+
+ \draw (0, 0) -- (3, 0);
+ \node at (1.5, 0.5) {$\rho$};
+ \node at (1.5, -0.5) {$\rho + \Delta \rho$};
+
+ \end{tikzpicture}
+\end{center}
+However, if the interface is tilted, then there is a naturally induced torque:
+\begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (0, -0.3) -- (3, 0.3) -- (3, -1) -- (0, -1);
+ \fill [mblue, opacity=0.2] (0, -0.3) -- (3, 0.3) -- (3, 1) -- (0, 1);
+
+ \draw (0, -0.3) -- (3, 0.3);
+ \node at (1.5, 0.5) {$\rho$};
+ \node at (1.5, -0.5) {$\rho + \Delta \rho$};
+
+ \draw (0.7, -0.5) edge [out=120, in=240, ->] (0.7, 0.5);
+ \draw (2.3, 0.5) edge [out=300, in=60, ->] (2.3, -0.5);
+ \end{tikzpicture}
+\end{center}
+In other words, \emph{vorticity} is induced. If we think about it, we see that the reason this happens is that the direction of the density gradient does not align with the direction of gravity. More precisely, this is because the density gradient does not align with the pressure gradient.
+
+Let's try to see this from our equations. Recall that the vorticity is defined by\index{vorticity}\index{$\boldsymbol\omega$}
+\[
+ \boldsymbol\omega = \nabla \times \mathbf{u}.
+\]
Taking the curl of the Navier--Stokes equation, and doing some vector calculus, we obtain the equation
+\[
+ \frac{\partial \boldsymbol\omega}{\partial t} + \mathbf{u} \cdot \nabla \boldsymbol\omega = \boldsymbol\omega \cdot \nabla \mathbf{u} + \frac{1}{\rho^2}\nabla \rho \times \nabla P + \nu \nabla^2 \boldsymbol\omega
+\]
The term on the left-hand side is just the material derivative of the vorticity. So we should interpret the terms on the right-hand side as the terms that contribute to the change in vorticity.
+
+The first and last terms on the right are familiar terms that have nothing to do with the density gradient. The interesting term is the second one. This is what we just described --- whenever the pressure gradient and density gradient do not align, we have a \term{baroclinic torque}. % describe other terms as well.
+
+%The left hand side is just the change in vorticity along the flow. The first term on the left is describing what happens when we stretch a vortex, which by conservation of momentum implies that the spin increases. The second term is what we just described --- if the pressure gradient points differently from the density gradient, then we get a torque, called \term{baroclinic torque}.
+
+In general, equations are hard to solve, and we want to make approximations. A common approximation is the \term{Boussinesq approximation}. The idea is that even though the density difference is often what drives the motion, from the point of view of inertia, the variation in density is often small enough to be ignored. To take some actual, physical examples, salt water is roughly $4\%$ more dense than fresh water, and every $10$ degrees Celsius changes the density of air by approximately $4\%$.
+
+Thus, what we do is that we assume that the density is constant except in the buoyancy force. The mathematically inclined people could think of this as taking the limit $g \to \infty$ but $\rho' \to 0$ with $g \rho'$ remaining finite.
+
+Under this approximation, we can write our equations as
+\begin{align*}
+ \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot \nabla \mathbf{u} &= - \frac{1}{\rho_0} \nabla p' - g' \hat{\mathbf{z}} + \nu \nabla^2 \mathbf{u}\\
+ \frac{\partial \boldsymbol\omega}{\partial t} + \mathbf{u} \cdot \nabla\boldsymbol\omega &= \boldsymbol\omega \cdot \nabla \mathbf{u} + \frac{g}{\rho_0} \hat{\mathbf{z}} \times \nabla \rho + \nu \nabla^2 \boldsymbol\omega,
+\end{align*}
+where we define the \term{reduced gravity} to be
+\[
+ g' = \frac{g \rho'}{\rho_0}
+\]
+Recall that our density is to be given by a function of temperature $T$. We must write down how we want the two to be related. In reality, this relation is extremely complicated, and may be affected by other external factors such as salinity (in the sea). However, we will use a ``leading-order'' approximation
+\[
+ \rho = \rho_0 (1 - \alpha(T - T_0)).
+\]
+We will also need to know how temperature $T$ evolves in time. This is simply given by the diffusion equation
+\[
+ \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \kappa \nabla^2 T.
+\]
+Note that in thermodynamics, temperature and pressure are related quantities. In our model, we will completely forget about this. The only purpose of pressure will be to act as a non-local Lagrange multiplier that enforces incompressibility.
+
+There are some subtleties with this approximation. Inverting the relation,
+\[
+ T = \frac{1}{\alpha}\left(1 - \frac{\rho}{\rho_0}\right) + T_0.
+\]
+Plugging this into the diffusion equation, we obtain
+\[
+ \frac{\partial \rho}{\partial t} + \mathbf{u} \cdot \nabla \rho = \kappa \nabla^2 \rho.
+\]
+The left-hand side is the material derivative of density. So this equation is saying that density, or mass, can ``diffuse'' along the fluid, independent of fluid motion. Mathematically, this follows from the fact that density is directly related to temperature, and temperature diffuses.
+
+This seems to contradict the conservation of mass. Recall that the conservation of mass says
+\[
+ \frac{\partial \rho}{\partial t} + \nabla \cdot (\mathbf{u} \rho) = 0.
+\]
We can then expand this to say
+\[
+ \frac{\partial \rho}{\partial t} + \mathbf{u} \cdot \nabla \rho = - \rho \nabla \cdot \mathbf{u}.
+\]
+We just saw that the left-hand side is given by $\kappa \nabla^2 \rho$, which can certainly be non-zero, but the right-hand side should vanish by incompressibility! The issue is that here the change in $\rho$ is not due to compressing the fluid, but simply thermal diffusion.
+
+If we are not careful, we may run into inconsistencies if we require $\nabla \cdot \mathbf{u} = 0$. For our purposes, we will not worry about this too much, as we will not use the conservation of mass.
+
+We can now return to our original problem. We have a fluid of viscosity $\nu$ and thermal diffusivity $\kappa$. There are two plates a distance $d$ apart, with the top held at temperature $T_0$ and bottom at $T_0 + \Delta T$. We make the Boussinesq approximation.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [red!50!mred, fill] (0, 0) rectangle (8, -0.2);
+ \draw [blue, fill] (0, 2) rectangle (8, 2.2);
+
+ \fill [mblue, opacity=0.3] (0, 0) rectangle (8, 2);
+
+ \node [right] at (8, 2.1) {$T_0$};
+ \node [right] at (8, -0.1) {$T_0 + \Delta T$};
+ \node [white, left] at (0, -0.1) {$T_0 + \Delta T$};
+
+ \draw [<->] (4, 0) -- (4, 2) node [pos=0.5, right] {$d$};
+ \end{tikzpicture}
+\end{center}
+We first solve for our steady state, which is simply
+\begin{align*}
+ \mathbf{U} &= \mathbf{0}\\
+ T_h &= T_0 - \Delta T \frac{(z - d)}{d}\\
+ \rho_h &= \rho_0 \left(1 + \alpha \Delta T \frac{z - d}{d}\right)\\
 P_h &= P_0 - g \rho_0 \left(z + \frac{\alpha \Delta Tz}{2d}(z - 2d)\right).
+\end{align*}
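As a quick sanity check, the hydrostatic pressure of this steady state indeed satisfies $\frac{\d P_h}{\d z} = -g \rho_h$. A numerical sketch with hypothetical parameter values:

```python
import numpy as np

g, rho0, alpha, dT, d, P0 = 9.81, 1000.0, 2.1e-4, 1.0, 0.01, 1.0e5  # hypothetical

z = np.linspace(0, d, 10001)
rho_h = rho0 * (1 + alpha * dT * (z - d) / d)
P_h = P0 - g * rho0 * (z + alpha * dT * z * (z - 2 * d) / (2 * d))

# P_h is quadratic in z, so interior central differences recover dP_h/dz exactly
dPdz = np.gradient(P_h, z)
assert np.allclose(dPdz[1:-1], -g * rho_h[1:-1], rtol=1e-6)
```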
+In this state, the only mode of heat transfer is conduction. What we would like to investigate, of course, is whether this state is stable. Consider small perturbations
+\begin{align*}
+ \mathbf{u} &= \mathbf{U} + \mathbf{u}'\\
+ T &= T_h + \theta\\
+ P &= P_h + p'.
+\end{align*}
+We substitute these into the Navier--Stokes equation. Starting with
+\[
+ \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} = -\frac{1}{\rho_0} \nabla p' + g \alpha \theta \hat{\mathbf{z}} + \nu \nabla^2 \mathbf{u},
+\]
+we assume the term $\mathbf{u} \cdot \nabla \mathbf{u}$ will be small, and end up with the equation
+\[
 \frac{\partial \mathbf{u}'}{\partial t} = - \frac{1}{\rho_0} \nabla p' + g \alpha \theta \hat{\mathbf{z}} + \nu \nabla^2 \mathbf{u}',
+\]
+together with the incompressibility condition $\nabla \cdot \mathbf{u}' = 0$.
+
Similarly, plugging these into the temperature diffusion equation and writing $\mathbf{u}' = (u', v', w')$ gives us
+\[
+ \frac{\partial \theta}{\partial t} - w' \frac{\Delta T}{d} = \kappa \nabla^2 \theta.
+\]
+We can gain further understanding of these equations by expressing them in terms of dimensionless quantities. We introduce new variables
+\begin{align*}
+ \tilde{t} &= \frac{\kappa t}{d^2} & \tilde{\mathbf{x}} &= \frac{\mathbf{x}}{d}\\
+ \tilde{\theta} &= \frac{\theta}{\Delta T} & \tilde{p} &= \frac{d^2 p'}{\rho_0 \kappa^2}
+\end{align*}
+In terms of these new variables, our equations of motion become rather elegant:
+\begin{align*}
+ \frac{\partial \tilde{\theta}}{\partial \tilde{t}} - \tilde{w} &= \tilde{\nabla}^2 \tilde{\theta}\\
+ \frac{\partial \tilde{\mathbf{u}}}{\partial \tilde{t}} &= - \tilde{\nabla} \tilde{p} + \left(\frac{g\alpha \Delta T d^3}{\nu\kappa}\right) \left(\frac{\nu}{\kappa}\right) \tilde{\theta} \hat{\mathbf{z}} + \frac{\nu}{\kappa} \tilde{\nabla}^2 \tilde{\mathbf{u}}
+\end{align*}
+Ultimately, we see that the equations of motion depend on two dimensionless constants: the \term{Rayleigh number} and \term{Prandtl number}
+\begin{align*}
+ \Ra &= \frac{g\alpha \Delta T d^3}{\nu\kappa}\\
 \Pr &= \frac{\nu}{\kappa}.
+\end{align*}
These are the two parameters that control the behaviour of the system. In particular, the Prandtl number measures exactly the competition between viscous and diffusive forces. Different fluids have different Prandtl numbers:
+\begin{itemize}
+ \item In a gas, we have $\frac{\nu}{\kappa} \sim 1$, since both diffusivities are set by the mean free path of a particle.
+ \item In a non-metallic liquid, the diffusion of heat is usually much slower than that of momentum, so $\Pr \sim 10$.
+ \item In a liquid metal, $\Pr$ is very low, since, as we know, metals conduct heat very well.
+\end{itemize}
+Finally, the incompressibility equation still reads
+\[
+ \tilde{\nabla} \cdot \tilde{\mathbf{u}} = 0.
+\]
+Under what situations do we expect an unstable flow? Let's start with some rather more heuristic analysis. Suppose we try to overturn our system as shown in the diagram:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [red!50!mred, fill] (0, 0) rectangle (8, -0.2);
+ \draw [blue, fill] (0, 2) rectangle (8, 2.2);
+
+ \fill [mblue, opacity=0.3] (0, 0) rectangle (8, 2);
+
+ \foreach \x in {0,2,4,6} {
+ \begin{scope}[shift={(\x, 0)}]
+ \draw (1, 1) circle [radius=0.8];
+ \draw [->] (0.2, 0.85) -- +(0, -0.0001);
+ \draw [->] (1.8, 1.15) -- +(0, 0.0001);
+ \end{scope}
+ }
+ \end{tikzpicture}
+\end{center}
+
+Suppose we do this at a speed $U$. For each $d \times d \times d$ block, the potential energy released is
+\[
+ PE \sim g \cdot d \cdot (d^3 \Delta \rho) \sim g \rho_0 \alpha d^4 \Delta T,
+\]
+and the time scale for this to happen is $\tau \sim d/U$.
+
+What is the friction we have to overcome in order to achieve this? The viscous stress is approximately
+\[
+ \mu \frac{\partial U}{\partial z} \sim \mu \frac{U}{d}.
+\]
+The force is the integral over the area, which is $\sim \frac{\mu U}{d} d^2$. The work done, being $\int F \;\d z$, then evaluates to
+\[
+ \frac{\mu U}{d} \cdot d^2 \cdot d = \mu U d^2.
+\]
+We can figure out $U$ by noting that the heat is diffused away on a time scale of $\tau \sim d^2 / \kappa$. For convection to happen, it must occur on a time scale at least as fast as that of diffusion. So we have $U \sim \kappa/d$. All in all, for convection to happen, we need the potential energy gain to be greater than the work done, which amounts to
+\[
+ g \rho_0 \alpha \Delta T d^4 \gtrsim \rho_0 \nu \kappa d.
+\]
+In other words, we have
+\[
+ \Ra = \frac{g \alpha \Delta T d^3}{\nu \kappa} = \frac{g' d^3}{\nu \kappa} \gtrsim 1.
+\]
+Now it is extremely unlikely that $\Ra \geq 1$ is indeed the correct condition, as our analysis was not very precise. However, it is generally true that instability occurs only when $\Ra$ is large.
+
+To work this out, let's take the curl of the Navier--Stokes equation we previously obtained; dropping the tildes, we get
+\[
+ \frac{\partial \boldsymbol\omega}{\partial t} = \Ra \Pr \nabla \theta \times \hat{\mathbf{z}} + \Pr \nabla^2 \boldsymbol\omega
+\]
+This gets rid of the pressure term, so that's one less thing to worry about. But this is quite messy, so we take the curl yet again to obtain
+\[
+ \frac{\partial}{\partial t} \nabla^2 \mathbf{u} = \Ra \Pr \left(\nabla^2 \theta \hat{\mathbf{z}} - \nabla \left(\frac{\partial \theta}{\partial z}\right)\right) + \Pr \nabla^4 \mathbf{u}
+\]
+Reading off the $z$ component, we get
+\[
+ \frac{\partial}{\partial t} \nabla^2 w = \Ra \Pr \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) \theta + \Pr \nabla^4 w.
+\]
+We will combine this with the temperature equation
+\[
+ \frac{\partial \theta}{\partial t} - w = \nabla^2 \theta
+\]
+to understand the stability problem.
+
+We first try to figure out what boundary conditions to impose. We've got a $6$th order partial differential equation ($4$ from the Navier--Stokes and $2$ from the temperature equation), and so we need $6$ boundary conditions. It is reasonable to impose that $w = \theta = 0$ at $z = 0, 1$: the condition $w = 0$ makes the boundaries impermeable, while $\theta = 0$ fixes the temperature there.
+
+It is also \emph{convenient} to assume the top and bottom surfaces are stress-free: $\frac{\partial u}{\partial z} = \frac{\partial v}{\partial z} = 0$, which implies
+\[
+ \frac{\partial}{\partial z} \left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right) = 0.
+\]
+This then gives us $\frac{\partial^2 w}{\partial z^2} = 0$. This does \emph{not} make much physical sense, because our fluid is viscous, and we should expect no-slip boundary conditions, not no-stress. But the stress-free conditions are mathematically convenient, and we shall assume them.
+
+Let's try to look for unstable solutions using separation of variables. We see that the equations are symmetric in $x$ and $y$, and so we write our solution as
+\begin{align*}
+ w &= W(z) X(x, y) e^{\sigma t}\\
+ \theta &= \Theta(z) X(x, y) e^{\sigma t}.
+\end{align*}
+If we can find solutions of this form with $\sigma > 0$, then the system is unstable.
+
+What follows is just a classic separation of variables. Plugging these into the temperature equation yields
+\[
+ \left(\frac{\d^2}{\d z^2} - \sigma + \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)\right) X \Theta = -XW.
+\]
+Or equivalently
+\[
+ \left(\frac{\d^2}{\d z^2} - \sigma\right) X\Theta + XW = - \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) X\Theta.
+\]
+We see that the differential operator on the left only acts on $\Theta$, while those on the right only act on $X$. We divide both sides by $X\Theta$ to get
+\[
+ \frac{\left(\frac{\d^2}{\d z^2} - \sigma\right) \Theta + W}{\Theta} = - \frac{\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) X}{X}.
+\]
+Since the LHS is purely a function of $z$, and the RHS is purely a function of $x$ and $y$, they must both be constant, and by requiring that the solution is bounded, we see that this constant must be positive, say $\lambda^2$. Our equations then become
+\begin{align*}
+ \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right)X = - \lambda^2 X\\
+ \left(\frac{\d^2}{\d z^2} - \lambda^2 - \sigma\right) \Theta = - W.
+\end{align*}
+We now look at our other equation. Plugging in our expressions for $w$ and $\theta$, and using what we just obtained, we have
+\[
+ \sigma\left(\frac{\d^2}{\d z^2} - \lambda^2 \right)W = - \Ra \Pr \lambda^2 \Theta + \Pr \left(\frac{\d^2}{\d z^2} - \lambda^2\right)^2 W.
+\]
+On the boundary, we have $\Theta = W = \frac{\d^2 W}{\d z^2} = 0$, by assumption. So it follows that we must have
+\[
+ \left.\frac{\d^4W}{\d z^4} \right|_{z = 0, 1} = 0.
+\]
+We can eliminate $\Theta$ by letting $\left(\frac{\d^2}{\d z^2} - \sigma - \lambda^2\right)$ act on both sides, which converts the $\Theta$ into $W$. We are then left with a $6$th order differential equation
+\[
+ \left(\frac{\d^2}{\d z^2} - \sigma - \lambda^2\right) \left(\Pr \left(\frac{\d^2}{\d z^2} - \lambda^2\right)^2 - \sigma \left(\frac{\d^2}{\d z^2} - \lambda^2\right)\right) W =- \Ra \Pr \lambda^2 W.
+\]
+This is an eigenvalue problem, and we see that our operator is self-adjoint. We can factorize and divide across by $\Pr$ to obtain
+\[
+ \left(\frac{\d^2}{\d z^2} - \lambda^2 \right) \left(\frac{\d^2}{\d z^2} - \sigma - \lambda^2\right) \left(\frac{\d^2}{\d z^2} - \lambda ^2 - \frac{\sigma}{\Pr}\right) W = - \Ra \lambda^2 W.
+\]
+The boundary conditions are that
+\[
+ W= \frac{\d^2 W}{\d z^2}= \frac{\d^4 W}{\d z^4} = 0\text{ at }z = 0, 1.
+\]
+We see that any eigenfunction of $\frac{\d^2}{\d z^2}$ gives us an eigenfunction of our big scary differential operator, and our solutions are quantized sine functions, $W = \sin n \pi z$ with $n \in \N$. In this case, the eigenvalue equation is
+\[
+ (n^2 \pi^2 + \lambda^2) (n^2 \pi^2 + \sigma + \lambda^2) \left(n^2\pi^2 + \lambda^2 + \frac{\sigma}{\Pr}\right) = \Ra \lambda^2.
+\]
+If there is an unstable mode, i.e.\ $\sigma > 0$, then since the LHS is increasing in $\sigma$ on $[0, \infty)$, we must have
+\[
+ \Ra(n) \geq \frac{(n^2 \pi^2 + \lambda^2)^3}{\lambda^2}.
+\]
+We minimize the RHS with respect to $\lambda$ and $n$. We clearly want to set $n = 1$, and then solve for
+\[
+ 0 = \frac{\d}{\d [\lambda^2]} \frac{(\pi^2 + \lambda^2)^3}{\lambda^2} = \frac{(\pi^2 + \lambda^2)^2}{\lambda^4}(2\lambda^2 - \pi^2).
+\]
+So the minimum value of $\Ra$ is obtained when
+\[
+ \lambda_c = \frac{\pi}{\sqrt{2}}.
+\]
+So we find that we have a solution only when
+\[
+ \Ra \geq \Ra_c = \frac{27\pi^4}{4} \approx 657.5.
+\]
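As a sanity check on this minimization, we can redo it symbolically. The following is a small sketch in Python using sympy; the variable names are our own.

```python
import sympy as sp

# lambda^2 as a single positive variable, as in the minimization above
lam2 = sp.symbols('lambda2', positive=True)

# marginal stability curve for the gravest mode n = 1
Ra = (sp.pi**2 + lam2)**3 / lam2

# stationary points of Ra with respect to lambda^2
crit = sp.solve(sp.diff(Ra, lam2), lam2)
assert sp.pi**2 / 2 in crit  # so lambda_c = pi / sqrt(2)

# critical Rayleigh number at the minimum
Ra_c = sp.simplify(Ra.subs(lam2, sp.pi**2 / 2))
assert sp.simplify(Ra_c - 27 * sp.pi**4 / 4) == 0  # Ra_c = 27 pi^4 / 4
```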
+If we are slightly above $\Ra_c$, then there is the $n = 1$ unstable mode. As $\Ra$ increases, the number of available modes increases. While the critical Rayleigh number depends on the boundary conditions, the picture is generic, and the width of the unstable range grows like $\sqrt{\Ra - \Ra_c}$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (4.5, 0) node [below] {$kd$};
+
+ \draw [->] (0, 0) -- (0, 2.5) node [left] {$Ra$};
+
+ \draw plot [smooth] coordinates {(0.5, 2.56) (1.5, 1) (4, 2.45)};
+
+ \node at (3.2, 1) {stable};
+ \node at (1.9, 2) {unstable};
+ \end{tikzpicture}
+\end{center}
+
+We might ask --- how does convection enhance the heat flux? This is quantified by the \term{Nusselt number}
+\[
+ Nu = \frac{\text{convective heat transfer}}{\text{conductive heat transfer}} = \frac{\bra wT\ket d}{\kappa \Delta T} = \bra \tilde{w} \tilde{\theta}\ket.
+\]
+Since this is a non-dimensional number, we know it is a function of $\Ra$ and $\Pr$.
+
+There are some physical arguments we can make about the value of $Nu$. The claim is that the convective heat transfer is independent of the layer depth. If there is no convection at all, so that we have a purely conductive state, then we expect the temperature profile to be linear:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [red!50!mred, fill] (0, 0) rectangle (8, -0.2);
+ \draw [blue, fill] (0, 2) rectangle (8, 2.2);
+
+ \fill [mblue, opacity=0.3] (0, 0) rectangle (8, 2);
+
+ \draw (3, 0) -- (5, 2);
+ \end{tikzpicture}
+\end{center}
+This is pretty boring. If there is convection, then we claim that the temperature gradient will look like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [red!50!mred, fill] (0, 0) rectangle (8, -0.2);
+ \draw [blue, fill] (0, 2) rectangle (8, 2.2);
+
+ \fill [mblue, opacity=0.3] (0, 0) rectangle (8, 2);
+
+ \draw [dotted] (4, 0) -- (4, 2);
+
+ \draw (3, 0) .. controls (3.5, 0) and (4, 0) .. (4, 0.5) -- (4, 1.5) .. controls (4, 2) and (4.5, 2) .. (5, 2);
+ \end{tikzpicture}
+\end{center}
+The reasoning is that the way convection works is that once a parcel at the bottom is heated to a sufficiently high temperature, it gets kicked over to the other side; if a parcel at the top is cooled to a sufficiently low temperature, it gets kicked over to the other side. In between the two boundaries, the temperature is roughly constant, taking the average temperature of the two boundaries.
+
+In this model, it doesn't matter how far apart the two boundaries are, since once the hot or cold parcels are shot off, they can just travel on their own until they reach the other side (or mix with the fluid in the middle).
+
+If we believe this, then we must have $\bra wT \ket \propto d^0$, and hence $Nu \propto d$. Since only $\Ra$ depends on $d$, we must have $Nu \propto \Ra^{1/3}$.
+
+Of course, the real situation is much more subtle.
+
+\subsection{Classical Kelvin--Helmholtz instability}
+Let's return to the situation of Rayleigh--Taylor instability, but this time, there is some horizontal flow in the two layers, and they may be of different velocities.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 0) sin (-2, 0.1) cos (-1, 0) sin (0, -0.1) cos (1, 0) sin (2, 0.1) cos (3, 0);
+ \fill [mblue, opacity=0.5] (-3, -1) -- (-3, 0) sin (-2, 0.1) cos (-1, 0) sin (0, -0.1) cos (1, 0) sin (2, 0.1) cos (3, 0) -- (3, -1);
+ \fill [mblue, opacity=0.2] (-3, 1) -- (-3, 0) sin (-2, 0.1) cos (-1, 0) sin (0, -0.1) cos (1, 0) sin (2, 0.1) cos (3, 0) -- (3, 1);
+
+ \node at (-1.7, -0.6) {$\rho_2$};
+ \node at (-1.7, 0.6) {$\rho_1$};
+
+ \draw [->] (1.1, 0.5) -- (1.9, 0.5) node [right] {$U_1$};
+ \draw [->] (1.1, -0.5) -- (1.9, -0.5) node [right] {$U_2$};
+ \end{tikzpicture}
+\end{center}
+
+Our analysis here will not be very detailed, partly because what we do is largely the same as the analysis of the Rayleigh--Taylor instability, but also because the analysis makes quite a few assumptions which we wish to examine more deeply.
+
+In this scenario, we can still use a velocity potential
+\[
+ (u, w) = \left(\frac{\partial \phi}{\partial x}, \frac{\partial \phi}{\partial z}\right).
+\]
+We shall only consider the system in $2D$. The velocity potentials in the two layers are now
+\begin{align*}
+ \phi_1 &= U_1 x + \phi_1'\\
+ \phi_2 &= U_2 x + \phi_2'
+\end{align*}
+with $\phi_1' \to 0$ as $z \to \infty$ and $\phi_2' \to 0$ as $z \to -\infty$.
+
+The boundary conditions are the same. Continuity of vertical velocity requires
+\[
+ \left.\frac{\partial \phi_{1, 2}}{\partial z} \right|_{z = \eta} = \frac{\D \eta}{\D t}.
+\]
+The dynamic boundary condition is that we have continuity of pressure at the interface if there is no surface tension, in which case Bernoulli tells us
+\[
+ \rho_1 \frac{\partial \phi_1}{\partial t} + \frac{1}{2} \rho_1 |\nabla \phi_1|^2 + g \rho_1 \eta = \rho_2 \frac{\partial \phi_2}{\partial t} + \frac{1}{2} \rho_2 |\nabla \phi_2|^2 + g \rho_2 \eta.
+\]
+The interface conditions are non-linear, and again we want to linearize. But since $U_{1, 2}$ is of order $1$, linearization will be different. We have
+\[
+ \left. \frac{\partial \phi'_{1, 2}}{\partial z} \right|_{z = 0} = \left(\frac{\partial}{\partial t} + U_{1, 2} \frac{\partial}{\partial x}\right)\eta.
+\]
+So the Bernoulli condition gives us
+\[
+ \rho_1 \left(\left(\frac{\partial}{\partial t} + U_1 \frac{\partial}{\partial x}\right) \phi_1' + g \eta\right) = \rho_2 \left(\left(\frac{\partial}{\partial t} + U_2 \frac{\partial}{\partial x}\right) \phi_2' + g \eta\right)
+\]
+This modifies our previous eigenvalue problem for the phase speed and wavenumber $\omega = k c$, $k = \frac{2\pi}{\lambda}$. We go exactly as before, and after some work, we find that we have
+\[
+ c = \frac{\rho_1 U_1 + \rho_2 U_2}{\rho_1 + \rho_2} \pm \frac{1}{\rho_1 + \rho_2} \left(\frac{g (\rho_2^2 - \rho_1^2)}{k} - \rho_1 \rho_2 (U_1 - U_2)^2\right)^{1/2}.
+\]
+So we see that we have instability if $c$ has an imaginary part, i.e.
+\[
+ k > \frac{g (\rho_2^2 - \rho_1^2)}{\rho_1 \rho_2 (U_1 - U_2)^2}.
+\]
+So we see that there is instability for sufficiently large wavenumbers, even for static stability. Writing $c = c_r + i c_i$ for the real and imaginary parts of $c$, we see that, similar to the Rayleigh--Taylor instability, the growth rate $k c_i$ grows monotonically with the wavenumber, and as $k \to \infty$, the instability becomes proportional to the difference $|U_1 - U_2|$ (as opposed to the Rayleigh--Taylor instability, where it grows unboundedly).
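We can check this behaviour numerically. The sketch below (in Python; the density and velocity values are arbitrary illustrative choices, not taken from the text) evaluates the expression for $c$ and verifies that an imaginary part, and hence instability, appears exactly for wavenumbers beyond the threshold:

```python
import numpy as np

def phase_speed(k, rho1=1.0, rho2=2.0, U1=1.0, U2=0.0, g=9.81):
    """Kelvin--Helmholtz phase speed c; parameter values are illustrative."""
    mean = (rho1 * U1 + rho2 * U2) / (rho1 + rho2)
    disc = g * (rho2**2 - rho1**2) / k - rho1 * rho2 * (U1 - U2)**2
    # take the root with non-negative imaginary part (the growing mode)
    return mean + np.sqrt(complex(disc)) / (rho1 + rho2)

# critical wavenumber from the instability condition k > g(rho2^2 - rho1^2) / (rho1 rho2 (U1 - U2)^2)
k_c = 9.81 * (2.0**2 - 1.0**2) / (1.0 * 2.0 * (1.0 - 0.0)**2)

assert abs(phase_speed(0.9 * k_c).imag) < 1e-12  # long waves: stable
assert phase_speed(1.1 * k_c).imag > 0.0         # short waves: unstable
```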
+
+How can we think about this result? If $U_1 \not= U_2$ and we have a discrete change in velocity, then this means there is a $\delta$-function of vorticity at the interface. So it should not be surprising that the result is unstable!
+
+Another way to think about it is to change our frame of reference so that $U_1 = U = - U_2$. In the Boussinesq limit, we have $c_r = 0$, and instability arises whenever $\frac{g \Delta \rho \lambda}{\rho U^2} < 4\pi$. We can see the numerator as the potential energy cost of moving a parcel from the bottom layer to the top layer, and the denominator as a measure of the kinetic energy available. So this says we are unstable if there is enough kinetic energy to move a parcel up.
+
+This analysis of the shear flow raises a few questions:
+\begin{itemize}
+ \item How might we regularize this expression? The vortex sheet is obviously wildly unstable.
+ \item Was it right to assume two-dimensional perturbations?
+ \item What happens if we regularize the depth of the shear?
+\end{itemize}
+
+\subsection{Finite depth shear flow}
+We now consider a finite depth shear flow. This means we have fluid moving between two layers, with a $z$-dependent velocity profile:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [black, fill] (0, 0) rectangle (8, -0.2);
+ \draw [black, fill] (0, 2) rectangle (8, 2.2);
+
+ \fill [mblue, opacity=0.3] (0, 0) rectangle (8, 2);
+
+ \node [left] at (0, 2.1) {$z = L$};
+ \node [left] at (0, -0.1) {$z = -L$};
+ \node [white, right] at (8, -0.1) {$z = -L$};
+
+ \draw [thick, mgreen] (3.2, 0) .. controls (3.5, 0) and (4, 0.5) .. (4, 1) .. controls (4, 1.5) and (4.5, 2) .. (4.8, 2);
+ \draw [dashed] (3.2, 0) -- ++(0, 2);
+
+ \draw [-latex'] (3.2, 0.25) -- +(0.46, 0);
+ \draw [-latex'] (3.2, 0.50) -- +(0.63, 0);
+ \draw [-latex'] (3.2, 0.75) -- +(0.78, 0);
+ \draw [-latex'] (3.2, 1.00) -- +(0.8, 0);
+ \draw [-latex'] (3.2, 1.25) -- +(0.83, 0);
+ \draw [-latex'] (3.2, 1.50) -- +(0.96, 0);
+ \draw [-latex'] (3.2, 1.75) -- +(1.15, 0);
+
+ \node at (4.7, 1) {$U(z)$};
+ \end{tikzpicture}
+\end{center}
+We first show that it suffices to work in $2$ dimensions. We will assume that the mean flow points in the $\hat{\mathbf{x}}$ direction, but the perturbations can point at an angle. The inviscid homogeneous incompressible Navier--Stokes equations are again
+\[
+ \left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u}\right) = \frac{\D \mathbf{u}}{\D t} = - \nabla \left(\frac{p'}{\rho}\right),\quad \nabla \cdot \mathbf{u} = 0.
+\]
+We linearize about a shear flow, and consider some 3D normal modes
+\[
+ \mathbf{u} = \bar{U}(z)\hat{\mathbf{x}} + \mathbf{u}'(x, y, z, t),
+\]
+where
+\[
+ (\mathbf{u}', p'/\rho) = [\hat{\mathbf{u}}(z), \hat{p}(z)]e^{i (kx + \ell y - kct)}.
+\]
+The phase speed is then
+\[
+ c_p = \frac{\omega}{\kappa} = \frac{\omega}{(k^2 + \ell^2)^{1/2}} = \frac{kc_r}{(k^2 + \ell^2)^{1/2}}
+\]
+and the growth rate of the perturbation is simply $\sigma_{3d} = k c_i$.
+
+We substitute our expression of $\mathbf{u}$ and $p'/\rho$ into the Navier--Stokes equations to obtain, in components,
+\begin{align*}
+ ik (\bar{U} - c) \hat{u} + \hat{w} \frac{\d \bar{U}}{\d z} &= - ik \hat{p}\\
+ ik (\bar{U} - c) \hat{v} &= - i\ell \hat{p}\\
+ ik (\bar{U} - c) \hat{w} &= - \frac{\d \hat{p}}{\d z}\\
+ ik \hat{u} + i\ell\hat{v} + \frac{\d\hat{w}}{\d z} &= 0
+\end{align*}
+Our strategy is to rewrite everything to express things in terms of new variables
+\[
+ \kappa = \sqrt{k^2 + \ell^2},\quad \kappa \tilde{u} = k\hat{u} + \ell\hat{v},\quad \tilde{p} = \frac{\kappa \hat{p}}{k}
+\]
+and if we are successful in expressing everything in terms of $\tilde{u}$, then we could see how we can reduce this to the two-dimensional problem.
+
+To this end, we can slightly rearrange our first two equations to say
+\begin{align*}
+ ik (\bar{U} - c) \hat{u} + \hat{w} \frac{\d \bar{U}}{\d z} &= - ik^2 \frac{\hat{p}}{k}\\
+ i\ell (\bar{U} - c) \hat{v} &= - i\ell^2 \frac{\hat{p}}{k}
+\end{align*}
+which combine to give
+\[
+ i (\bar{U} - c) \kappa \tilde{u} + \hat{w} \frac{\d \bar{U}}{\d z} = - i \kappa \tilde{p}.
+\]
+We can rewrite the remaining two equations in terms of $\tilde{p}$ as well:
+\begin{align*}
+ i \kappa (\bar{U} - c) \hat{w} &= - \frac{\d}{\d z} \tilde{p}\\
+ i\kappa \tilde{u} + \frac{\d \hat{w}}{\d z} &= 0.
+\end{align*}
+This looks just like a 2d system, but with an instability growth rate of $\sigma_{2d} = \kappa c_i > k c_i = \sigma_{3d}$. Thus, our 3d problem is ``equivalent'' to a 2d problem with greater growth rate. However, whether or not instability occurs is unaffected. One way to think about the difference in growth rate is that the $y$ component of the perturbation sees less of the original velocity $\bar{U}$, and so it is more stable. This result is known as \term{Squire's Theorem}.
+
+We now restrict to two-dimensional flows, and have equations
+\begin{align*}
+ ik (\bar{U} - c) \hat{u} + \hat{w} \frac{\d \bar{U}}{\d z} &= -ik\hat{p}\\
+ ik (\bar{U} - c) \hat{w} &= - \frac{\d \hat{p}}{\d z}\\
+ ik \hat{u} + \frac{\d \hat{w}}{\d z} &= 0.
+\end{align*}
+We can use the last incompressibility equation to eliminate $\hat{u}$ from the first equation, and be left with
+\[
+ -(\bar{U} - c) \frac{\d \hat{w}}{\d z} + \hat{w} \frac{\d \bar{U}}{\d z} = - ik \hat{p}.
+\]
+We wish to get rid of the $\hat{p}$, and to do so, we differentiate this equation with respect to $z$ and use the second equation to get
+\[
+ -(\bar{U} - c) \frac{\d^2 \hat{w}}{\d z^2} - \frac{\d \hat{w}}{\d z} \frac{\d \bar{U}}{\d z} + \frac{\d \hat{w}}{\d z} \frac{\d \bar{U}}{\d z} + \hat{w} \frac{\d^2 \bar{U}}{\d z^2} = -k^2 (\bar{U} - c) \hat{w}.
+\]
+The terms in the middle cancel, and we can rearrange to obtain the \term{Rayleigh equation}
+\[
+ \left((\bar{U} - c) \left(\frac{\d^2}{\d z^2} - k^2\right) - \frac{\d^2 \bar{U}}{\d z^2} \right) \hat{w} = 0.
+\]
+We see that when $\bar{U} = c$, we have a regular singular point. The natural boundary conditions are that $\hat{w} \to 0$ at the edge of the domain.
+
+Note that this differential operator is not self-adjoint! This is since $\bar{U}$ has non-trivial $z$-dependence. This means we do not have a complete basis of orthogonal eigenfunctions. This is manifested by the fact that it can have transient growth.
+
+To analyze this scenario further, we rewrite the Rayleigh equation in the conventional form
+\[
+ \frac{\d^2 \hat{w}}{\d z^2} - k^2 \hat{w} - \frac{\d^2 \bar{U}/\d z^2}{\bar{U} - c} \hat{w} = 0.
+\]
+The trick is to multiply by $\hat{w}^*$, integrate across the domain, and apply the boundary conditions to obtain
+\[
+ \int_{-L}^L \frac{\bar{U}''}{\bar{U} - c} |\hat{w}|^2 \;\d z = - \int_{-L}^L (|\hat{w}'|^2 + k^2 |\hat{w}|^2)\;\d z.
+\]
+We can split the LHS into real and imaginary parts:
+\[
+ \int_{-L}^L \left( \frac{\bar{U}''(\bar{U} - c_r)}{|\bar{U} - c|^2}\right) |\hat{w}|^2 \;\d z + i c_i \int_{-L}^L \left(\frac{\bar{U}''}{ |\bar{U} - c|^2}\right) |\hat{w}|^2 \;\d z.
+\]
+But since the RHS is purely real, we know the imaginary part must vanish.
+
+One way for the imaginary part to vanish is for $c_i$ to vanish, and this corresponds to stable flow. If we want $c_i$ to be non-zero, then the integral must vanish. So we obtain the \term{Rayleigh inflection point criterion}: $\frac{\d^2}{\d z^2} \bar{U}$ must change sign at least once in $-L < z < L$.
+
+Of course, this condition is not sufficient for instability. If we want to get more necessary conditions for instability to occur, it might be wise to inspect the imaginary part, as Fjortoft noticed. If instability occurs, then we know that we must have
+\[
+ \int_{-L}^L \left(\frac{\bar{U}''}{|\bar{U} - c|^2} \right) |\hat{w}|^2\;\d z = 0.
+\]
+Let's assume that there is a unique (non-degenerate) inflection point at $z = z_s$, with $\bar{U}_s = \bar{U}(z_s)$. We can then add $(c_r - \bar{U}_s)$ times the above equation to the real part to see that we must have
+\[
+ -\int_{-L}^L (|\hat{w}'|^2 + k^2 |\hat{w}|^2)\;\d z = \int_{-L}^L \left(\frac{\bar{U}''(\bar{U} - \bar{U}_s)}{ |\bar{U} - c|^2} \right) |\hat{w}|^2\;\d z.
+\]
+Assuming further that $\bar{U}$ is monotonic, we see that both $\bar{U} - \bar{U}_s$ and $\bar{U}''$ change sign at $z_s$, so the sign of the product is unchanged across the domain. So for the equation to be consistent, we must have $\bar{U}'' (\bar{U} - \bar{U}_s) \leq 0$, with equality only at $z_s$. This result is known as \term{Fjortoft's criterion}.
+
+We can look at some examples of different flow profiles:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [black, fill] (-0.5, 0) rectangle (2.5, -0.2);
+ \draw [black, fill] (-0.5, 2) rectangle (2.5, 2.2);
+
+ \fill [mblue, opacity=0.3] (-0.5, 0) rectangle (2.5, 2);
+
+ \draw [thick, mgreen] (0.2, 0) sin (1.5, 2);
+ \draw [dashed] (0.2, 0) -- ++(0, 2);
+
+ \draw [-latex'] (0.2, 0.25) -- +(0.1037, 0);
+ \draw [-latex'] (0.2, 0.50) -- +(0.2091, 0);
+ \draw [-latex'] (0.2, 0.75) -- +(0.3181, 0);
+ \draw [-latex'] (0.2, 1.00) -- +(0.433, 0);
+ \draw [-latex'] (0.2, 1.25) -- +(0.5587, 0);
+ \draw [-latex'] (0.2, 1.50) -- +(0.7019, 0);
+ \draw [-latex'] (0.2, 1.75) -- +(0.8818, 0);
+
+ \begin{scope}[shift={(3.5, 0)}]
+ \draw [black, fill] (-0.5, 0) rectangle (2.5, -0.2);
+ \draw [black, fill] (-0.5, 2) rectangle (2.5, 2.2);
+
+ \fill [mblue, opacity=0.3] (-0.5, 0) rectangle (2.5, 2);
+
+ \draw [thick, mgreen] (0.2, 0) .. controls (0.5, 0) and (1, 0.5) .. (1, 1) .. controls (1, 1.5) and (1.5, 2) .. (1.8, 2);
+ \draw [dashed] (0.2, 0) -- ++(0, 2);
+
+ \draw [-latex'] (0.2, 0.25) -- +(0.46, 0);
+ \draw [-latex'] (0.2, 0.50) -- +(0.63, 0);
+ \draw [-latex'] (0.2, 0.75) -- +(0.78, 0);
+ \draw [-latex'] (0.2, 1.00) -- +(0.8, 0);
+ \draw [-latex'] (0.2, 1.25) -- +(0.83, 0);
+ \draw [-latex'] (0.2, 1.50) -- +(0.96, 0);
+ \draw [-latex'] (0.2, 1.75) -- +(1.15, 0);
+ \end{scope}
+
+ \begin{scope}[shift={(7, 0)}]
+ \draw [black, fill] (-0.5, 0) rectangle (2.5, -0.2);
+ \draw [black, fill] (-0.5, 2) rectangle (2.5, 2.2);
+
+ \fill [mblue, opacity=0.3] (-0.5, 0) rectangle (2.5, 2);
+
+ \draw [thick, mgreen] (0.2, 0) sin (1, 1) cos (1.8, 2);
+
+ \draw [dashed] (0.2, 0) -- ++(0, 2);
+
+ \draw [-latex'] (0.2, 0.25) -- +(0.1287, 0);
+ \draw [-latex'] (0.2, 0.50) -- +(0.2667, 0);
+ \draw [-latex'] (0.2, 0.75) -- +(0.4319, 0);
+ \draw [-latex'] (0.2, 1.00) -- +(0.8, 0);
+ \draw [-latex'] (0.2, 1.25) -- +(1.1681, 0);
+ \draw [-latex'] (0.2, 1.50) -- +(1.3333, 0);
+ \draw [-latex'] (0.2, 1.75) -- +(1.4713, 0);
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+In the leftmost example, Rayleigh's criterion tells us it is stable, because there is no inflection point. The second example has an inflection point, but does not satisfy Fjortoft's criterion. So this is also stable. The third example satisfies both criteria, so it may be unstable. Of course, the condition is not sufficient, so we cannot make a conclusive statement.
+
+Is there more information we can extract from the Rayleigh equation? Suppose we indeed have an unstable mode. Can we give a bound on the growth rate $c_i$ given the phase speed $c_r$?
+
+The trick is to perform the substitution
+\[
+ \hat{W} = \frac{\hat{w}}{(\bar{U} - c)}.
+\]
+Note that this substitution is potentially singular when $\bar{U} = c$, which is the singular point of the equation. By expressing everything in terms of $\hat{W}$, we are essentially hiding the singularity in the definition of $\hat{W}$ instead of in the equation.
+
+Under this substitution, our Rayleigh equation then takes the \emph{self-adjoint} form
+\[
+ \frac{\d}{\d z} \left((\bar{U} - c)^2 \frac{\d \hat{W}}{\d z}\right) - k^2 (\bar{U} - c)^2 \hat{W} = 0.
+\]
+We can multiply by $\hat{W}^*$ and integrate over the domain to obtain
+\[
+ \int_{-L}^L (\bar{U} - c)^2 \underbrace{\left(\abs{\frac{\d \hat{W}}{\d z}}^2 + k^2 |\hat{W}|^2\right)}_{\equiv Q}\;\d z = 0.
+\]
+Since $Q \geq 0$, we may again take imaginary part to require
+\[
+ 2 c_i \int_{-L}^L (\bar{U} - c_r) Q \;\d z = 0.
+\]
+So if $c_i \not= 0$, we must have $U_{min} < c_r < U_{max}$, which gives a bound on the phase speed of an unstable mode.
+
+Taking the \emph{real} part implies
+\[
+ \int_{-L}^L [(\bar{U} - c_r)^2 - c_i^2]Q \;\d z = 0.
+\]
+But we can combine this with the imaginary part condition, which tells us
+\[
+ \int_{-L}^L \bar{U} Q\;\d z = c_r \int_{-L}^L Q\;\d z.
+\]
+So we can expand the real part to give us
+\[
+ \int_{-L}^L \bar{U}^2 Q\;\d z = \int_{-L}^L (c_r^2 + c_i^2)Q\;\d z.
+\]
+Putting this aside, we note that tautologically, we have $U_{min} \leq \bar{U} \leq U_{max}$. So we always have
+\[
+ \int_{-L}^L (\bar{U} - U_{max}) (\bar{U} - U_{min}) Q \;\d z \leq 0.
+\]
+Expanding this, and using our expression for $\int_{-L}^L \bar{U}^2 Q \;\d z$, we obtain
+\[
+ \int_{-L}^L ((c_r^2 + c_i^2) - (U_{max} + U_{min})c_r + U_{max} U_{min}) Q\;\d z \leq 0.
+\]
+But we see that we are just multiplying $Q\;\d z$ by a constant and integrating. Since we know that $\int Q\;\d z > 0$, we must have
+\[
+ (c_r^2 + c_i^2) - (U_{max} + U_{min})c_r + U_{max} U_{min} \leq 0.
+\]
+By completing the square, we can rearrange this to say
+\[
+ \left(c_r - \frac{U_{max} + U_{min}}{2}\right)^2 + (c_i - 0)^2 \leq \left(\frac{U_{max} - U_{min}}{2}\right)^2.
+\]
+This is just the equation of a circle! Since instability requires $c_i > 0$, the possible values of $(c_r, c_i)$ lie in a semicircle; this result is known as \term{Howard's semicircle theorem}. We can now concretely plot the region of possible $c_r$ and $c_i$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$c_r$};
+ \draw [->] (0, 0) -- (0, 2) node [above] {$c_i$};
+
+ \node [circ] at (2.5, 0) {};
+ \node [below] at (2.5, 0) {$\frac{U_{max} + U_{min}}{2}$};
+ \draw [fill=gray, fill opacity=0.2] (1, 0) arc (180:0:1.5);
+ \node [below] at (1, 0) {$U_{min}$};
+ \node [below] at (4, 0) {$U_{max}$};
+ \end{tikzpicture}
+\end{center}
+Of course, lying within this region is a \emph{necessary} condition for instability to occur, but not \emph{sufficient}. The actual region of instability is a subset of this semi-circle, and this subset depends on the actual $\bar{U}$. But this is already very helpful, since if we want to search for instabilities, say numerically, then we know where to look.
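The completing-the-square step above is routine, but easily confirmed symbolically; a minimal sketch using Python's sympy:

```python
import sympy as sp

cr, ci, Umax, Umin = sp.symbols('c_r c_i U_max U_min', real=True)

# left-hand side of the inequality before completing the square
lhs = (cr**2 + ci**2) - (Umax + Umin) * cr + Umax * Umin

# the circle form obtained by completing the square
circle = (cr - (Umax + Umin) / 2)**2 + ci**2 - ((Umax - Umin) / 2)**2

# the two expressions agree identically
assert sp.expand(lhs - circle) == 0
```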
+
+\subsection{Stratified flows}
+In the Kelvin--Helmholtz scenario, we had a varying density. Let's now try to model the situation in complete generality.
+
+Recall that the set of (inviscid) equations for Boussinesq stratified fluids is
+\[
+ \rho \left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u}\right) = - \nabla p' - g \rho' \hat{\mathbf{z}},
+\]
+with incompressibility equations
+\[
+ \nabla \cdot \mathbf{u} = 0,\quad \frac{\D \rho}{\D t} = 0.
+\]
+We consider a mean base flow and a $2D$ perturbation as a normal mode:
+\begin{align*}
+ \mathbf{u} &= \bar{U}(z) \hat{\mathbf{x}} + \mathbf{u}'(x, z, t)\\
+ p &= \bar{p}(z) + p'(x, z, t)\\
+ \rho &= \bar{\rho}(z) + \rho'(x, z, t),
+\end{align*}
+with
+\[
+ (\mathbf{u}', p', \rho') = [\hat{\mathbf{u}}(z), \hat{p}(z), \hat{\rho}(z)] e^{i(kx - \omega t)} = [\hat{\mathbf{u}}(z), \hat{p}(z), \hat{\rho}(z)] e^{ik(x - ct)}.
+\]
+We wish to obtain an equation that involves only $\hat{w}$. We first linearize our equations to obtain
+\begin{align*}
+ \bar{\rho} \left(\frac{\partial u'}{\partial t} + \bar{U} \frac{\partial u'}{\partial x} + w' \frac{\d \bar{U}}{\d z}\right) &= - \frac{\partial p'}{\partial x}\\
+ \bar{\rho} \left(\frac{\partial w'}{\partial t} + \bar{U} \frac{\partial w'}{\partial x} \right) &= - \frac{\partial p'}{\partial z} - g \rho'\\
+ \frac{\partial \rho'}{\partial t} + \bar{U} \frac{\partial \rho'}{\partial x} + w' \frac{\d \bar{\rho}}{\d z} &= 0\\
+ \frac{\partial u'}{\partial x} + \frac{\partial w'}{\partial z} &= 0.
+\end{align*}
+Plugging in our normal mode solution into the four equations, we obtain
+\begin{align*}
+ ik \bar{\rho} (\bar{U} - c) \hat{u} + \bar{\rho} \hat{w} \frac{\d}{\d z} \bar{U} &= - ik\hat{p}\\
+ ik \bar{\rho} (\bar{U} - c) \hat{w} &= - \frac{\d \hat{p}}{\d z} - g \hat{\rho}.\tag{$*$}\\
+ ik(\bar{U} - c) \hat{\rho} + \hat{w} \frac{\d \bar{\rho}}{\d z} &= 0 \tag{$\dagger$}\\
+ ik \hat{u} + \frac{\d \hat{w}}{\d z} &= 0.
+\end{align*}
+The last (incompressibility) equation helps us eliminate $\hat{u}$ from the first equation to obtain
+\begin{align*}
+ - \bar{\rho} (\bar{U} - c) \frac{\d \hat{w}}{\d z} + \bar{\rho} \hat{w} \frac{\d \bar{U}}{\d z} &= -ik \hat{p}.
+\end{align*}
+To eliminate $\hat{p}$ as well, we differentiate with respect to $z$ and apply $(*)$, and then further apply ($\dagger$) to get rid of the $\hat{\rho}$ term, and end up with
+\[
+ - \bar{\rho} (\bar{U} - c) \frac{\d^2 \hat{w}}{\d z^2} + \bar{\rho} \hat{w} \frac{\d^2 \bar{U}}{\d z^2} + k^2 \bar{\rho}(\bar{U} - c) \hat{w} = - \frac{g \hat{w}}{\bar{U} - c} \frac{\d \bar{\rho}}{\d z}.
+\]
+This allows us to write our equation as the \term{Taylor--Goldstein equation}
+\[
+ \left(\frac{\d^2}{\d z^2} - k^2\right) \hat{w} - \frac{\hat{w}}{(\bar{U} - c)} \frac{\d^2 \bar{U}}{\d z^2} + \frac{N^2 \hat{w}}{(\bar{U} - c)^2} = 0,
+\]
+where
+\[
+ N^2 = - \frac{g}{\bar{\rho}} \frac{\d \bar{\rho}}{\d z}.
+\]
+This $N$ is known as the \term{buoyancy frequency}, and has dimensions $T^{-1}$. This is the frequency at which a slab of stratified fluid would oscillate vertically.
+
+Indeed, if we have a slab of volume $V$, and we displace it by an infinitesimal amount $\zeta$ in the vertical direction, then the buoyancy force is
+\[
+ F = Vg \zeta \frac{\partial \rho}{\partial z}.
+\]
+Since the mass of the fluid is $\rho_0 V$, by Newton's second law, the acceleration of the parcel satisfies
+\[
+ \frac{\d^2 \zeta}{\d t^2} + \left(- \frac{g}{\rho_0} \frac{\partial \rho}{\partial z}\right) \zeta = 0.
+\]
+This is just simple harmonic oscillation of frequency $N$.
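We can check this parcel argument numerically. The following sketch uses made-up values for $g$, $\rho_0$ and the density gradient, purely for illustration, and verifies that the displacement oscillates with period $2\pi/N$:

```python
import numpy as np

# Illustrative, made-up values (not from the notes): a linearly,
# stably stratified fluid in SI units.
g = 9.81          # gravitational acceleration [m s^-2]
rho0 = 1000.0     # reference density [kg m^-3]
drho_dz = -5.0    # background density gradient [kg m^-4]; negative = stable

# Buoyancy frequency N, with N^2 = -(g/rho0) * drho/dz.
N = np.sqrt(-g / rho0 * drho_dz)

def displace(zeta0, t):
    """Parcel displacement solving zeta'' + N^2 zeta = 0,
    released from rest at zeta0: simple harmonic motion."""
    return zeta0 * np.cos(N * t)

# The parcel returns to its starting point after one period 2 pi / N.
period = 2 * np.pi / N
```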
+
+Crucially, what we see from this computation is that a density stratification of the fluid can lead to \term{internal waves}. We will later see that the presence of these internal waves can destabilize our system.
+
+\subsubsection*{Miles--Howard theorem}
+In 1961, John Miles published a series of papers establishing a sufficient condition for infinitesimal stability in the case of a stratified fluid. When he submitted this to review, Louis Howard read the paper, and replied with a 3-page proof of a more general result. Howard's proof was published as a follow-up paper titled ``Note on a paper of John W.\ Miles''.
+
+Recall that we just derived the Taylor--Goldstein equation
+\[
+ \left(\frac{\d^2}{\d z^2} - k^2\right) \hat{w} - \frac{\hat{w}}{(\bar{U} - c)} \frac{\d^2 \bar{U}}{\d z^2} + \frac{N^2 \hat{w}}{(\bar{U} - c)^2} = 0.
+\]
+The magical insight of Howard was to introduce the new variable
+\[
+ H = \frac{\hat{w}}{(\bar{U} - c)^{1/2}}.
+\]
+We can regretfully compute the derivatives
+\begin{align*}
+ \hat{w} &= (\bar{U} - c)^{1/2} H\\
+ \frac{\d}{\d z} \hat{w} &= \frac{1}{2}(\bar{U} - c)^{-1/2} H \frac{\d \bar{U}}{\d z} + (\bar{U} - c)^{1/2} \frac{\d H}{\d z}\\
+ \frac{\d^2}{\d z^2} \hat{w} &= - \frac{1}{4} (\bar{U} - c)^{-3/2} H \left(\frac{\d \bar{U}}{\d z}\right)^2 + \frac{1}{2} (\bar{U} - c)^{-1/2} H \frac{\d^2 \bar{U}}{\d z^2} \\
+ &\qquad + (\bar{U} - c)^{-1/2} \frac{\d H}{\d z} \frac{\d \bar{U}}{\d z} + (\bar{U} - c)^{1/2} \frac{\d^2 H}{\d z^2}.
+\end{align*}
+
+We can substitute this into the Taylor--Goldstein equation, and after some algebraic mess, we obtain the decent-looking equation
+\[
+ \frac{\d}{\d z}\left((\bar{U} - c) \frac{\d H}{\d z}\right) - H \left(k^2 (\bar{U} - c) + \frac{1}{2} \frac{\d^2 \bar{U}}{\d z^2} + \frac{\frac{1}{4} \left(\frac{\d \bar{U}}{\d z}\right)^2 - N^2}{\bar{U} - c}\right) = 0.
+\]
+This is now self-adjoint. Imposing boundary conditions $\hat{w}, \frac{\d \hat{w}}{\d z} \to 0$ in the far field, we multiply by $H^*$ and integrate over all space. The first term gives
+\[
+ \int H^* \frac{\d}{\d z} \left((\bar{U} - c)\frac{\d H}{\d z}\right)\;\d z = -\int (\bar{U} - c) \left|\frac{\d H}{\d z}\right|^2 \;\d z,
+\]
+while the second term is given by
+\[
+ \int \left(-k^2 |H|^2 (\bar{U} - c) - \frac{1}{2} \frac{\d^2\bar{U}}{\d z^2} |H|^2 - |H|^2 \frac{\left(\frac{1}{4} \left(\frac{\d \bar{U}}{\d z}\right)^2 - N^2\right) (\bar{U} - c^*)}{|\bar{U} - c|^2}\right)\;\d z.
+\]
+Both the real and imaginary parts of the sum of these two terms must be zero. In particular, the imaginary part reads
+\[
+ c_i \int \left(\abs{\frac{\d H}{\d z}}^2 + k^2 |H|^2 + |H|^2 \frac{N^2 - \frac{1}{4} \left(\frac{\d \bar{U}}{\d z}\right)^2}{|\bar{U} - c|^2}\right)\;\d z = 0.
+\]
+So a necessary condition for instability is that
+\[
+ N^2 - \frac{1}{4} \left(\frac{\d \bar{U}}{\d z}\right)^2 < 0
+\]
+somewhere. Defining the \term{Richardson number} to be
+\[
+ Ri(z) = \frac{N^2}{(\d \bar{U}/\d z)^2},
+\]
+the necessary condition is
+\[
+ Ri(z) < \frac{1}{4}.
+\]
+Equivalently, a \emph{sufficient} condition for stability is that $Ri(z) \geq \frac{1}{4}$ \emph{everywhere}.
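To see how the criterion is applied, here is a short numerical sketch. The profile $\bar{U} = \tanh z$ with uniform $N^2$ is an illustrative assumption, not one analysed above; we simply evaluate $Ri(z)$ on a grid and test the sufficient condition:

```python
import numpy as np

def min_richardson(N2, dUdz):
    """Minimum over z of Ri(z) = N^2 / (dU/dz)^2."""
    return (N2 / dUdz**2).min()

# Illustrative profile (an assumption, not from the notes):
# U(z) = tanh z, so dU/dz = sech^2 z, with N^2 uniform in z.
z = np.linspace(-5, 5, 2001)
dUdz = 1 / np.cosh(z)**2

# N^2 = 0.3: Ri >= 0.3 > 1/4 everywhere, so Miles--Howard guarantees
# stability.  N^2 = 0.1: Ri < 1/4 in the middle of the layer, so
# instability can no longer be ruled out.
ri_stable = min_richardson(0.3, dUdz)
ri_maybe = min_richardson(0.1, dUdz)
```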
+
+How can we think about this? When we move a parcel, there is the buoyancy force that attempts to move it back to its original position. However, if we move the parcel to the high velocity region, we can gain kinetic energy from the mean flow. Thus, if the velocity gradient is sufficiently high, it becomes ``advantageous'' for parcels to move around.
+
+\subsubsection*{Piecewise-linear profiles}
+If we want to make our lives simpler, we can restrict attention to profiles with piecewise-linear velocity and layered density. In this case, to solve the Taylor--Goldstein equation, we observe that away from the interfaces, we have $N = U'' = 0$. So the Taylor--Goldstein equation reduces to
+\[
+ \left(\frac{\d^2}{\d z^2} - k^2\right) \hat{w} = 0.
+\]
+This has trivial exponential solutions in each of the individual regions, and to find the correct solutions, we need to impose some matching conditions at the interface.
+
+So assume that the pressure is continuous across all interfaces. Then using the equation
+\[
+ - \bar{\rho} (\bar{U} - c) \frac{\d \hat{w}}{\d z} + \bar{\rho} \hat{w} \frac{\d \bar{U}}{\d z} = -ik\hat{p},
+\]
+we see that the left-hand side is almost the derivative of a quotient, except the sign is wrong. So we divide by $-\bar{\rho}(\bar{U} - c)^2$, and then we can write this as
+\[
+ \frac{ik}{\bar{\rho}} \frac{\hat{p}}{(\bar{U} - c)^2} = \frac{\d}{\d z}\left(\frac{\hat{w}}{\bar{U} - c}\right).
+\]
+For a general $c$, we integrate over an infinitesimal distance at the interface. We assume $\hat{p}$ is continuous, and that $\bar{\rho}$ and $(\bar{U} - c)$ have bounded discontinuity. Then the integral of the LHS vanishes in the limit, and so the integral of the right-hand side must be zero. This gives the matching condition
+\[
+ \left[ \frac{\hat{w}}{ \bar{U} - c}\right]^+_- = 0.
+\]
+To obtain a second matching condition, we rewrite the Taylor--Goldstein equation in the form of an exact derivative:
+\[
+ \frac{\d}{\d z} \left((\bar{U} - c) \frac{\d \hat{w}}{\d z} - \hat{w} \frac{\d \bar{U}}{\d z} - \frac{g \bar{\rho}}{\rho_0}\left(\frac{\hat{w}}{\bar{U} - c}\right)\right) = k^2 (\bar{U} - c)\hat{w} - \frac{g \bar{\rho}}{\rho_0} \frac{\d}{\d z} \left(\frac{\hat{w}}{\bar{U} - c}\right).
+\]
+Again integrating over an infinitesimal distance across the interface, we see that the quantity inside the derivative on the left must be continuous across the interface. So we obtain the second matching condition:
+\[
+ \left[(\bar{U} - c) \frac{\d \hat{w}}{\d z} - \hat{w} \frac{\d \bar{U}}{\d z} - \frac{g \bar{\rho}}{\rho_0}\left(\frac{\hat{w}}{\bar{U} - c}\right)\right]^+_- = 0.
+\]
+We begin by applying this to a relatively simple profile with constant density, and whose velocity profile looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-1, -2.5) -- (-1, -1) -- (1, 1) -- (1, 2.5);
+
+ \draw [dashed] (0, -2.5) -- (0, 2.5);
+
+ \foreach \y in {-1.3, -1.8, -2.3} {
+ \draw [-latex'] (0, \y) -- (-1, \y);
+ \draw [-latex'] (0, -\y) -- (1, -\y);
+ }
+ \foreach \x in {0.8, 0.3, -0.3, -0.8}{
+ \draw [-latex'] (0, \x) -- (\x, \x);
+ }
+
+ \draw [<->] (-1.5, 1) -- (-1.5, -1) node [pos=0.5, left] {$h$};
+
+ \draw [<->] (-1, -3) -- (1, -3) node [pos=0.5, below] {$\Delta U$};
+ \end{tikzpicture}
+\end{center}
+
+For convenience, we scale distances by $\frac{h}{2}$ and speeds by $\frac{\Delta U}{2}$, and define $\tilde{c}$ and $\alpha$ by
+\[
+ c = \frac{\Delta U}{2} \tilde{c},\quad \alpha = \frac{kh}{2}.
+\]
+These quantities $\alpha$ and $\tilde{c}$ (which we will shortly start writing as $c$ instead) are nice dimensionless quantities to work with. After the scaling, the profile can be described by
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (-2, 1) -- (2, 1) node [right] {$z = 1$};
+ \draw [dashed] (-2, -1) -- (2, -1) node [right] {$z = -1$};
+
+ \draw (-1, -3) -- (-1, -1) node [pos=0.5, right] {$\bar{U} = -1$} -- (1, 1) node [pos=0.5, right] {$\bar{U} = z$} -- (1, 3) node [pos=0.5, right] {$\bar{U} =1$};
+ \node at (-2, -2) {III};
+ \node at (-2, 0) {II};
+ \node at (-2, 2) {I};
+
+ \node [right] at (3.5, 2) {$\hat{w} = Ae^{-\alpha(z - 1)}$};
+ \node [right] at (3.5, 0) {$\hat{w} = Be^{\alpha z} + Ce^{-\alpha z}$};
+ \node [right] at (3.5, -2) {$\hat{w} = D e^{\alpha (z + 1)}$};
+ \end{tikzpicture}
+\end{center}
+We have also written down the exponential solutions in each of the three regions, using the requirement that the solution vanishes in the far field.
+
+We now apply the matching conditions. Since $\bar{U} - c$ is continuous, the first matching condition just says $\hat{w}$ has to be continuous. So we get
+\begin{align*}
+ A &= B e^\alpha + C e^{-\alpha}\\
+ D &= B e^{-\alpha} + C e^{\alpha}.
+\end{align*}
+The other matching condition is slightly messier. It says
+\[
+ \left[(\bar{U} - c)\frac{\d \hat{w}}{\d z} - \hat{w} \frac{\d \bar{U}}{\d z} - \frac{g \bar{\rho}}{\rho_0} \left(\frac{\hat{w}}{\bar{U} - c}\right)\right]^+_- = 0,
+\]
+which gives us two equations
+\begin{align*}
+ (B e^\alpha + C e^{-\alpha}) (1 - c)(-\alpha) &= (Be^\alpha - Ce^{-\alpha}) (1 - c)(\alpha) - (Be^\alpha + Ce^{-\alpha})\\
+ (B e^{-\alpha} + C e^{\alpha}) (-1 - c)(\alpha) &= (Be^{-\alpha} - Ce^{\alpha}) (-1 - c)(\alpha) - (Be^{-\alpha} + Ce^\alpha).
+\end{align*}
+Simplifying these gives us
+\begin{align*}
+ (2\alpha(1 - c) - 1) Be^\alpha &= Ce^{-\alpha}\\
+ (2\alpha(1 + c) - 1) Ce^\alpha &= B e^{-\alpha}.
+\end{align*}
+Thus, we find that
+\[
+ (2\alpha - 1)^2 - 4\alpha^2 c^2 = e^{-4\alpha},
+\]
+and hence we have the dispersion relation
+\[
+ c^2 = \frac{(2\alpha - 1)^2 - e^{-4\alpha}}{4\alpha^2}.
+\]
+We see that this has the possibility of instability. Indeed, we can expand the numerator to get
+\[
+ c^2 = \frac{(1 - 4\alpha + 4\alpha^2) - (1 - 4\alpha + 8 \alpha^2 + O(\alpha^3))}{4\alpha^2} = -1 + O(\alpha).
+\]
+So for small $\alpha$, we have instability. This is known as \term{Rayleigh instability}. On the other hand, as $\alpha$ grows very large, this is stable. We can plot these out in a graph
+\begin{center}
+ \begin{tikzpicture}[xscale=4, yscale=4]
+ \draw [->] (0, 0) -- (1, 0) node [right] {\small$k$};
+ \draw [->] (0, 0) -- (0, 0.5) node [above] {\small$\omega$};
+
+ \draw [mblue, thick, dashed] plot coordinates {(0.000,0.00000) (0.005,0.00497) (0.010,0.00987) (0.015,0.01470) (0.020,0.01947) (0.025,0.02417) (0.030,0.02881) (0.035,0.03339) (0.040,0.03789) (0.045,0.04234) (0.050,0.04672) (0.055,0.05104) (0.060,0.05529) (0.065,0.05948) (0.070,0.06361) (0.075,0.06767) (0.080,0.07167) (0.085,0.07561) (0.090,0.07949) (0.095,0.08331) (0.100,0.08706) (0.105,0.09076) (0.110,0.09439) (0.115,0.09796) (0.120,0.10147) (0.125,0.10492) (0.130,0.10831) (0.135,0.11163) (0.140,0.11490) (0.145,0.11811) (0.150,0.12126) (0.155,0.12434) (0.160,0.12737) (0.165,0.13034) (0.170,0.13325) (0.175,0.13609) (0.180,0.13888) (0.185,0.14161) (0.190,0.14428) (0.195,0.14689) (0.200,0.14944) (0.205,0.15193) (0.210,0.15436) (0.215,0.15673) (0.220,0.15905) (0.225,0.16130) (0.230,0.16349) (0.235,0.16563) (0.240,0.16770) (0.245,0.16971) (0.250,0.17167) (0.255,0.17356) (0.260,0.17540) (0.265,0.17717) (0.270,0.17888) (0.275,0.18053) (0.280,0.18213) (0.285,0.18366) (0.290,0.18513) (0.295,0.18653) (0.300,0.18788) (0.305,0.18916) (0.310,0.19038) (0.315,0.19154) (0.320,0.19264) (0.325,0.19367) (0.330,0.19464) (0.335,0.19554) (0.340,0.19638) (0.345,0.19715) (0.350,0.19786) (0.355,0.19850) (0.360,0.19908) (0.365,0.19958) (0.370,0.20002) (0.375,0.20039) (0.380,0.20069) (0.385,0.20092) (0.390,0.20108) (0.395,0.20117) (0.400,0.20118) (0.405,0.20112) (0.410,0.20099) (0.415,0.20077) (0.420,0.20048) (0.425,0.20011) (0.430,0.19967) (0.435,0.19914) (0.440,0.19852) (0.445,0.19782) (0.450,0.19704) (0.455,0.19617) (0.460,0.19520) (0.465,0.19415) (0.470,0.19300) (0.475,0.19175) (0.480,0.19040) (0.485,0.18895) (0.490,0.18739) (0.495,0.18572) (0.500,0.18394) (0.505,0.18204) (0.510,0.18002) (0.515,0.17787) (0.520,0.17559) (0.525,0.17317) (0.530,0.17061) (0.535,0.16789) (0.540,0.16502) (0.545,0.16197) (0.550,0.15875) (0.555,0.15533) (0.560,0.15171) (0.565,0.14786) (0.570,0.14377) (0.575,0.13943) (0.580,0.13479) (0.585,0.12983) (0.590,0.12452) (0.595,0.11880) (0.600,0.11260) 
(0.605,0.10586) (0.610,0.09844) (0.615,0.09019) (0.620,0.08084) (0.625,0.06997) (0.630,0.05670) (0.635,0.03862) (0.639,0.00909)};
+ \draw [mblue, thick] plot coordinates {(0.640,0.01655) (0.645,0.04562) (0.650,0.06270) (0.655,0.07632) (0.660,0.08809) (0.665,0.09868) (0.670,0.10844) (0.675,0.11757) (0.680,0.12622) (0.685,0.13447) (0.690,0.14240) (0.695,0.15005) (0.700,0.15747) (0.705,0.16469) (0.710,0.17174) (0.715,0.17863) (0.720,0.18538) (0.725,0.19201) (0.730,0.19854) (0.735,0.20496) (0.740,0.21129) (0.745,0.21755) (0.750,0.22373) (0.755,0.22984) (0.760,0.23588) (0.765,0.24187) (0.770,0.24781) (0.775,0.25370) (0.780,0.25954) (0.785,0.26534) (0.790,0.27110) (0.795,0.27682) (0.800,0.28251) (0.805,0.28816) (0.810,0.29378) (0.815,0.29938) (0.820,0.30495) (0.825,0.31049) (0.830,0.31601) (0.835,0.32151) (0.840,0.32698) (0.845,0.33244) (0.850,0.33787) (0.855,0.34329) (0.860,0.34869) (0.865,0.35407) (0.870,0.35944) (0.875,0.36480) (0.880,0.37014) (0.885,0.37546) (0.890,0.38078) (0.895,0.38608) (0.900,0.39137) (0.905,0.39665) (0.910,0.40192) (0.915,0.40718) (0.920,0.41242) (0.925,0.41767) (0.930,0.42290) (0.935,0.42812) (0.940,0.43333) (0.945,0.43854) (0.950,0.44374) (0.955,0.44894) (0.960,0.45412) (0.965,0.45930) (0.970,0.46448) (0.975,0.46964) (0.980,0.47480) (0.985,0.47996) (0.990,0.48511) (0.995,0.49026) (1.000,0.49540)};
+ % map (\x -> (showFFloat (Just 3) x "", showFFloat (Just 5) ((sqrt $ -((2*x-1)^2-exp(-4*x))/4)) "")) ([0.000001] ++ [0.005,0.01..0.635] ++ [0.639])
+ % map (\x -> (showFFloat (Just 3) x "", showFFloat (Just 5) ((sqrt $ ((2*x-1)^2-exp(-4*x))/4)) "")) ([0.64,0.645..1])
+
+ \node [above] at (0.4, 0.20118) {\small$\omega_i$};
+ \node [anchor = north west] at (0.79, 0.27110) {\small$\omega_r$};
+
+ \node [below] at (0.64, 0) {\small$0.64$};
+ \end{tikzpicture}
+\end{center}
+
+We see that the critical value of $\alpha$ is approximately $0.64$, and the most unstable mode is at $\alpha \approx 0.4$.
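These numbers can be read off directly from the dispersion relation; the following sketch locates the marginal and most unstable wavenumbers numerically:

```python
import numpy as np

def c_squared(alpha):
    """c^2 from the dispersion relation for the piecewise-linear
    shear layer; instability corresponds to c^2 < 0."""
    return ((2 * alpha - 1)**2 - np.exp(-4 * alpha)) / (4 * alpha**2)

alpha = np.linspace(1e-4, 1.0, 200001)
c2 = c_squared(alpha)

# Growth rate omega_i = alpha * Im(c) = alpha * sqrt(-c^2) when unstable.
growth = alpha * np.sqrt(np.maximum(-c2, 0.0))

alpha_crit = alpha[c2 < 0].max()        # marginal wavenumber, about 0.64
alpha_max = alpha[np.argmax(growth)]    # most unstable wavenumber, about 0.4
```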
+
+Let's try to understand where this picture came from. For large $k$, the wavelength of the oscillations is small, and so it seems reasonable that we can approximate the model by one where the two interfaces don't interact. So consider the case where we only have one interface.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (-2, 1) -- (2, 1) node [right] {$z = 1$};
+
+ \draw (-0.7, -0.7) -- (1, 1) node [pos=0.3, right] {$\bar{U} = z$} -- (1, 3) node [pos=0.5, right] {$\bar{U} =1$};
+
+ \draw [dashed] (-0.7, -0.7) -- (-1.5, -1.5);
+
+ \node at (-2, 0) {II};
+ \node at (-2, 2) {I};
+
+ \node [right] at (3.5, 2) {$Ae^{-\alpha(z - 1)}$};
+ \node [right] at (3.5, 0) {$Be^{\alpha (z - 1)}$};
+ \end{tikzpicture}
+\end{center}
+We can perform the same procedure as before, solving
+\[
+ \left[\frac{\hat{w}}{\bar{U} - c}\right]_-^+ = 0.
+\]
+This gives the condition $A = B$. The other condition then tells us
+\[
+ (1 - c) (-\alpha) A = (1 - c)\alpha A - A.
+\]
+We can cancel the $A$, and rearrange this to say
+\[
+ c = 1 - \frac{1}{2\alpha}.
+\]
+So we see that this interface supports a wave propagating at speed $c_+ = 1 - \frac{1}{2\alpha}$, slightly slower than the local flow speed $\bar{U} = 1$. Similarly, if we treated the other interface in isolation, it would support a wave with $c_- = -1 + \frac{1}{2\alpha}$.
+
+Crucially, these two waves are individually stable, and when $k$ is large, so that the two interfaces are effectively in isolation, we simply expect two independent and stable waves. Indeed, in the dispersion relation above, we can drop the $e^{-4\alpha}$ term when $\alpha$ is large, and approximate
+\[
+ c = \pm \frac{2\alpha - 1}{2\alpha} = \pm \left(1 - \frac{1}{2\alpha}\right).
+\]
+When $k$ is small, we expect the two modes to interact with each other, which manifests itself as the $-e^{-4\alpha}$ term. Moreover, the resonance should be greatest when the two modes have equal speed, i.e.\ at $\alpha \approx \frac{1}{2}$, which is quite close to the actual maximally unstable mode.
+%
+%Physically, what is going on? The key aspect of the instability is that the exponential term $e^{-4\alpha}$ is negative. Where did this come from? If we look through our derivations, we see that these come from the interaction between the two interfaces.
+%
+%Indeed, suppose our fluid flow instead looks like
+%\begin{center}
+% \begin{tikzpicture}
+% \draw [dashed] (-2, 0) -- (2, 0) node [right] {$z = 1$};
+%
+% \draw (-2, -1) -- (1, 2) node [pos=0.5, right] {$\bar{U} = z$} -- (1, 4) node [pos=0.5, right] {$\bar{U} =1$};
+% \node at (-2, 1) {II};
+% \node at (-2, 3) {I};
+%
+% %insert arrows?
+%
+% \node at (2, -1) {$B e^{\alpha (z + 1)}$};
+% \node at (2, 3) {$Ae^{-\alpha(z - 1)}$};
+% \end{tikzpicture}
+%\end{center}
+%The first condition
+%\[
+% \left[\frac{\hat{w}}{\bar{U} - c}\right]_-^+ = 0
+%\]
+%gives the condition $A = B$. The other condition then tell us
+%\[
+% (1 - c) (-\alpha) A = (1 - c)\alpha A - Ae^{\alpha(z - 1)}.
+%\]
+%We can cancel the $A$, and rearrange this to say
+%\[
+% c = 1 - \frac{1}{2\alpha}.
+%\]
+%So we see that this interface supports a wave at speed $c = 1 - \frac{1}{2\alpha}$ at a speed lower than $\bar{U}$. Similarly, if we treated the other interface at isolation, it would support a wave at $c_- = -1 + \frac{1}{2\alpha}$. % treating separately = large wavenumber.
+%
+%Now if $\alpha = \frac{1}{2}$, then $c_+ = c_- = 0$. Thus, the two speeds can resonate when $\alpha = \frac{1}{2}$, and this is when we expect to have the worst instability.
+%
+%We can also see this by writing our dispersion relation by
+%\[
+% c^2 = \left(1 - \frac{1}{2\alpha}\right)^2 - \frac{e^{-4\alpha}}{4\alpha^2}.
+%\]
+%We see that if we ignore the second term, then this is the same as saying that the possible speeds are $\pm \left(1 - \frac{1}{3\alpha}\right)$. The second term gives us the interaction between the two interfaces, and we see that this interaction is what gives rise to the instability.
+
+\subsubsection*{Density stratification}
+We next consider the case where we have \term{density stratification}. After scaling, we can draw our region and solutions as
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (-2, 0) -- (2, 0) node [right] {$z = -1$};
+ \draw [dashed] (-2, 1) -- (2, 1) node [right] {$z = 0$};
+
+ \draw [dashed] (-2, 2) -- (2, 2) node [right] {$z = 1$};
+ \draw (-1, -2) -- (-1, 0) node [pos=0.5, right] {$\bar{U} = -1$} -- (1, 2) node [pos=0.7, right] {$\bar{U} = z$} -- (1, 4) node [pos=0.5, right] {$\bar{U} =1$};
+ \node at (-2, 0.5) {III};
+ \node at (-2, -1) {IV};
+ \node at (-2, 1.5) {II};
+ \node at (-2, 3) {I};
+
+ \node [right] at (3.5, 3) {$Ae^{-\alpha(z - 1)}$};
+ \node [right] at (3.5, 1.5) {$Be^{\alpha(z - 1)} + C e^{-\alpha(z - 1)}$};
+ \node [right] at (3.5, 0.5) {$De^{\alpha (z + 1)} + E e^{-\alpha(z + 1)}$};
+ \node [right] at (3.5, -1) {$F e^{\alpha (z + 1)}$};
+
+ \draw [mred] (-1, 4) node [above] {$\bar{\rho} = -1$} -- (-1, 1) -- (1, 1) -- (1, -2) node [below] {$\bar\rho = +1$};
+ \end{tikzpicture}
+\end{center}
+Note that $\bar{\rho}$ here is the relative density, since it is only the density difference that matters (fluids cannot have negative density).
+
+Let's first try to understand how this would behave heuristically. As before, at the I-II and III-IV interfaces, we have waves with $c = \pm \left(1 - \frac{1}{2\alpha}\right)$. At the density interface, we previously saw that we can have \term{internal gravity waves} with $c_{igw} = \pm \sqrt{\frac{Ri_0}{\alpha}}$, where
+\[
+ Ri_0 = \frac{g \Delta \rho\, h}{\rho_0\, \Delta U^2}
+\]
+is the \term{bulk Richardson number}, here written in terms of the dimensional quantities before scaling.
+
+We expect instability to occur when the frequency of this internal gravity wave aligns with the frequency of the waves at the velocity interfaces, and this is given by
+\[
+ \left(1 - \frac{1}{2\alpha}\right)^2 = \frac{Ri_0}{\alpha}.
+\]
+Solving this for $Ri_0$ gives $Ri_0 = \alpha - 1 + \frac{1}{4\alpha}$, so for moderately large $\alpha$, the condition is
+\[
+ Ri_0 \simeq \alpha - 1.
+\]
+This is in fact what we find if we solve it exactly.
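The heuristic is a one-line computation, which we can sketch as:

```python
def resonance_Ri0(alpha):
    """Ri_0 at which the internal gravity wave speed matches the
    vorticity-interface wave speed: (1 - 1/(2 alpha))^2 = Ri_0 / alpha."""
    return alpha * (1 - 1 / (2 * alpha))**2

# Expanding the square gives Ri_0 = alpha - 1 + 1/(4 alpha),
# i.e. Ri_0 ~ alpha - 1 once alpha is moderately large.
```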
+
+So let's try to understand the situation more responsibly now. The procedure is roughly the same. The requirement that $\hat{w}$ is continuous gives
+\begin{align*}
+ A &= B + C\\
+ B e^{-\alpha} + C e^\alpha &= D e^\alpha + E e^{-\alpha}\\
+ D + E &= F.
+\end{align*}
+If we work things out, then the other matching condition
+\[
+ \left[(\bar{U} - c) \frac{\d \hat{w}}{\d z} - \hat{w} \frac{\d \bar{U}}{\d z} - Ri_0 \bar{\rho} \left(\frac{\hat{w}}{\bar{U} - c}\right)\right]^+_- = 0
+\]
+gives
+\begin{align*}
+ (1 - c)(-\alpha) A &= (1 - c)(\alpha) (B - C) - (B + C)\\
+ (-1 - c)(\alpha) F &= (-1 - c)(\alpha) (D - E) - (D + E)
+\end{align*}
+\begin{multline*}
+ (-c)(\alpha) (B e^{-\alpha} - C e^\alpha) + \frac{Ri_0}{(-c)} (Be^{-\alpha} + C e^\alpha) \\
+ = (-c)(\alpha) (De^\alpha - E e^{-\alpha}) - \frac{Ri_0}{(-c)} (De^\alpha + Ee^{-\alpha}).
+\end{multline*}
+%
+%
+%What happens when we have stratification? In that case, we have interfacial gravity waves too, with
+%\[
+% \omega^2 = \frac{g(\rho_1 - \rho_2) k}{\rho_1 + \rho_2}.
+%\]
+%These have % star on dimensional quantities % u^* = \Delta u^*/2 u,\quad c^* = \Delta u^*/2 c,\quad z^* = d^*/2 z,\quad \bar{\rho}^* = \Delta \rho^2/2 \bar{\rho}. So \bar{\rho} = \frac{2 \rho_0^*}{\Delta \rho^*} \mp 1
+%\[
+% c_{igw \pm} = \left(\frac{Ri_0}{\alpha}\right)^{1/2}.
+%\]
+%How does their presence modify stability? Recall we were scaling lengths by $\frac{h^*}{2}$, and
+%\[
+% J = Ri_0 = \frac{g^* \Delta \rho^* h^*}{\rho_0^* (\Delta U^*)^2}.
+%\]
+%Then the natural thing to say is that
+%\[
+% \rho_1^* = \rho^* + \frac{\Delta \rho^*}{2},\quad \rho_2^* = \rho_0^* - \frac{\Delta \rho^*}{2}.
+%\]
+%Then we have
+%\[
+% (\omega^*)^2 = c^{*2} h^{*2} = \frac{g^* \Delta \rho^* k^*}{2 \rho_0^*}.
+%\]
+%To non-dimensionalize this, we as usual put
+%\[
+% c^* = \frac{\Delta U}{2} c,\quad k^* = \frac{2h}{\alpha}.
+%\]
+%We then get
+%\[
+% \frac{(\Delta u^*)^2}{4} \frac{4}{h^{*2}} \alpha^2 c^2 = \frac{g^* \Delta \rho^*}{2 \rho_0^*} \frac{2}{h} \alpha.
+%\]
+%Cancelling terms, we are left with
+%\[
+% c^2 = \left(\frac{g^* \Delta \rho^* h^*}{\rho_0^*(\Delta u^*)^2}\right)/\alpha = \frac{Ri_0}{\alpha}.
+%\]
+%
+%We now have three interfaces. The condition $\left[\frac{\hat{w}}{\bar{U} - c}\right]^+_- = 0$ is simple. We get the conditions
+%\begin{align*}
+% A &= B + C\\
+% B e^{-\alpha} + C e^\alpha &= D e^\alpha + E e^{-\alpha}\\
+% D + E &= F.
+%\end{align*}
+%If we work things out, then the other (non-dimensionalized) matching condition
+%\[
+% \left[(\bar{U} - c) \frac{\d \hat{w}}{\d z} - \hat{w} \frac{\d \bar{U}}{\d z} - Ri_0 \bar{\rho} \left(\frac{\hat{w}}{\bar{U} - c}\right)\right]^+_- = 0
+%\]
+%gives
+%\begin{align*}
+% (1 - c)(-\alpha) A &= (1 - c)(\alpha) (B - C) - (B + C)\\
+% (-1 - c)(\alpha) F &= (-1 - c)(\alpha) (D - E) - (D + E)\\
+% &(-c)(\alpha) (B e^{-\alpha} - C e^\alpha) + \frac{Ri_0}{(-c)} (Be^{-\alpha} + C e^\alpha) \\
+% &= (-c)(\alpha) (De^\alpha - E e^{-\alpha}) - \frac{Ri_0}{(-c)} (De^\alpha + Ee^{-\alpha}).
+%\end{align*}
+This defines a $6 \times 6$ matrix problem, 4th order in $c$. Writing it as $S\mathbf{X} = 0$ with $\mathbf{X} = (A, B, C, D, E, F)^T$, the existence of non-trivial solutions is given by the requirement that $\det S = 0$, which one can check (!) is given by
+\[
+ c^4 + c^2 \left(\frac{e^{-4\alpha} - (2\alpha - 1)^2}{4\alpha^2} - \frac{Ri_0}{\alpha}\right) + \frac{Ri_0}{\alpha} \left(\frac{e^{-2\alpha} + (2\alpha - 1)}{2\alpha}\right)^2 = 0.
+\]
+This is a biquadratic equation, which we can solve.
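A sketch of solving it numerically, treating it as a quadratic in $c^2$, together with a sanity check that $Ri_0 = 0$ recovers the Rayleigh relation:

```python
import numpy as np

def holmboe_c2(alpha, Ri0):
    """Roots c^2 of the quartic dispersion relation, treated as a
    quadratic in c^2 with the coefficients read off from above."""
    b = (np.exp(-4 * alpha) - (2 * alpha - 1)**2) / (4 * alpha**2) - Ri0 / alpha
    c = (Ri0 / alpha) * ((np.exp(-2 * alpha) + (2 * alpha - 1)) / (2 * alpha))**2
    return np.roots([1.0, b, c])

def rayleigh_c2(alpha):
    """c^2 for the unstratified (Rayleigh) shear layer, for comparison."""
    return ((2 * alpha - 1)**2 - np.exp(-4 * alpha)) / (4 * alpha**2)

# At Ri0 = 0, one root of c^2 should be the Rayleigh value and the
# other should be 0, as claimed below.
roots = sorted(holmboe_c2(0.4, 0.0).real)
```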
+
+We can inspect the case when $Ri_0$ is very small. In this case, the dispersion relation becomes
+\[
+ c^4 + c^2 \left(\frac{e^{-4\alpha} - (2\alpha - 1)^2}{4\alpha^2}\right) = 0.
+\]
+Two of these solutions are simply given by $c = 0$, and the other two are just those we saw from Rayleigh instability. This is saying that when the density difference is very small, the internal gravity waves are roughly non-existent, and we can happily ignore them.
+
+In general, we have instability for all $Ri_0$! This is \term{Holmboe instability}. The scenario is particularly complicated if we look at small $\alpha$, where we expect the effects of Rayleigh instability to kick in as well. So let's fix some $0 < \alpha < 0.64$, and see what happens when we increase the Richardson number. % insert full contour picture
+
+As before, we use dashed lines to denote the imaginary parts and solid lines to denote the real part. For each $Ri_0$, any combination of imaginary part and real part gives a solution for $c$, and, except when there are double roots, there should be four such possible combinations.
+\newcommand{\AAA}{0.25296}
+\newcommand{\DDD}{1.57055}
+\begin{center}
+ \begin{tikzpicture}[xscale=100, yscale=2.8]
+ \pgfmathdeclarefunction{imsqrt}{2}{\pgfmathparse{((#1)^2 + (#2)^2)^(1/4) * sin (90 + 1/2 * atan (#2 / #1))}}
+ \pgfmathdeclarefunction{resqrt}{2}{\pgfmathparse{((#1)^2 + (#2)^2)^(1/4) * cos (90 + 1/2 * atan (#2 / #1))}}
+
+ \draw [->] (0, 0) -- (0.05, 0) node [right] {$Ri_0$}; % = Ri?
+ \draw [->] (0, -1) -- (0, 1) node [above] {$c$};
+
+ \draw [semithick, domain=0:0.0205, samples=50, dashed, mblue] plot (\x, {sqrt(-(\x - \AAA - sqrt(\x^2 - 2 * \DDD * \x + (\AAA)^2)))});
+ \draw [semithick, domain=0:0.0205, samples=50, dashed, mblue] plot (\x, {sqrt(-(\x - \AAA + sqrt(\x^2 - 2 * \DDD * \x + (\AAA)^2)))});
+
+ \draw [semithick, domain=0.0206:0.049, samples=50, dashed, mblue] plot (\x, {imsqrt(\x - \AAA, sqrt(-(\x^2 - 2 * \DDD * \x + (\AAA)^2)))});
+
+ \draw [semithick, mblue] (0, 0) -- (0.0205, 0);
+ \draw [semithick, domain=0.020506:0.049, samples=50, mblue] plot (\x, {resqrt(\x - \AAA, sqrt(-(\x^2 - 2 * \DDD * \x + (\AAA)^2)))});
+
+ \draw [semithick, domain=0:0.0205, samples=50, dashed, mblue] plot (\x, {-sqrt(-(\x - \AAA - sqrt(\x^2 - 2 * \DDD * \x + (\AAA)^2)))});
+ \draw [semithick, domain=0:0.0205, samples=50, dashed, mblue] plot (\x, {-sqrt(-(\x - \AAA + sqrt(\x^2 - 2 * \DDD * \x + (\AAA)^2)))});
+
+ \draw [semithick, domain=0.0206:0.049, samples=50, dashed, mblue] plot (\x, {-imsqrt(\x - \AAA, sqrt(-(\x^2 - 2 * \DDD * \x + (\AAA)^2)))});
+
+ \draw [semithick, domain=0.020506:0.049, samples=50, mblue] plot (\x, {-resqrt(\x - \AAA, sqrt(-(\x^2 - 2 * \DDD * \x + (\AAA)^2)))});
+ \end{tikzpicture}
+\end{center}
+We can understand this as follows. When $Ri_0 = 0$, all we have is Rayleigh instability, which gives the top and bottom curves. There are then two $c = 0$ solutions, as we have seen. As we increase $Ri_0$ to a small, non-zero amount, this mode becomes non-zero, and turns into a genuine unstable mode. As the value of $Ri_0$ increases, the two imaginary curves meet and merge to form a new curve. The solutions then start to have a non-zero real part, which gives a non-zero phase speed to our perturbation.
+
+Note that when $Ri_0$ becomes very large, then our modes become unstable again, though this is not clear from our graph.
+
+%
+%
+%In general, the phase speed of the perturbation is non-zero, and actually the instability lies between the interfaces.
+%
+%Note that this does not violate Miles--Howard, since $Ri = 0$ on either side of the density of the interface.
+%
+%Let's fix some $0 < \alpha < 0.64$, so that we have Rayleigh instability. Note that if $c$ is a complex root, then we know the roots are just $\{\pm c, \pm \bar{c}\}$. Plotting $c_r$ and $c_i$ separately,
+%
+%\begin{center}
+% \begin{tikzpicture}
+% \draw [->] (-1, 0) -- (5, 0) node [right] {$J$}; % = Ri?
+% \draw (0, -3) -- (0, 3);
+% \draw [morange, thick] (0, 2) edge [bend left] (2, 0.8);
+% \draw [morange, thick] (0, 0) edge [bend right] (2, 0.8) edge [bend left] (4, 0);
+%
+% \draw [mred, thick] (0, 0) -- (2, 0) edge [bend left] (4, 0.6); % reflect all these
+% \end{tikzpicture}
+%\end{center}
+%The orange lines are the imaginary parts, and the red lines are the real parts. When $J = 0$, we have Rayleigh instability, so we have one unstable mode, one decaying mode and two zero modes. As we increase $J$, the zero modes increase in imaginary parts, but the phase speed remains constant.
+%
+%When the two curves meet, they combine to give Holmboe instability, and the phase speed starts to increase.
+\subsubsection*{Taylor instability}
+While Holmboe instability is quite interesting, our system was unstable even without the density stratification. Since we saw that instability is in general triggered by the interaction between two interfaces, it should not be surprising that even in a Rayleigh stable scenario, if we have two layers of density stratification, then we can still get instability.
+
+Consider the following flow profile:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (-2, 1) -- (2, 1) node [right] {$z = 1$};
+ \draw [dashed] (-2, -1) -- (2, -1) node [right] {$z = -1$};
+
+ \draw (-1, -3) -- (1, 3) node [pos=0.5, right] {$\bar{U} =z$};
+
+ \foreach \y in {0.75, 1.25, 1.75, 2.25, 2.75} {
+ \draw [-latex'] (0, \y) -- (\y/3, \y);
+ \draw [-latex'] (0, -\y) -- (-\y/3, -\y);
+ }
+ \draw [dashed] (0, -3) -- (0, 3);
+
+ \node at (-2, -2) {III};
+ \node at (-2, 0) {II};
+ \node at (-2, 2) {I};
+
+ \node [right] at (3.5, 2) {$Ae^{-\alpha(z - 1)}$};
+ \node [right] at (3.5, 0) {$Be^{\alpha z} + Ce^{-\alpha z}$};
+ \node [right] at (3.5, -2) {$D e^{\alpha (z + 1)}$};
+
+ \draw [mred] (-0.7, 3) node [above] {\small$\bar{\rho} = R - 1$} -- (-0.7, 1) -- (0, 1) -- (0, -1) --(0.7, -1) -- (0.7, -3) node [below] {\small$\bar{\rho} = R + 1$};
+ \end{tikzpicture}
+\end{center}
+As before, continuity in $\hat{w}$ requires
+\begin{align*}
+ A &= Be^\alpha + Ce^{-\alpha}\\
+ D &= Be^{-\alpha} + Ce^{\alpha}.
+\end{align*}
+Now there is no discontinuity in vorticity, but the density field has jumps. So we need to apply the second matching condition at both interfaces. This gives
+\begin{align*}
+ (1 - c) (-\alpha) (Be^\alpha + C e^{-\alpha}) + \frac{Ri_0}{1 - c} (B e^\alpha + Ce^{-\alpha}) &= (1 - c)(\alpha) (Be^\alpha - C e^{-\alpha})\\
+ (-1 - c)(\alpha) (Be^{-\alpha} + Ce^\alpha) + \frac{Ri_0}{1 + c} (B e^{-\alpha} + C e^\alpha) &= (-1 - c)(\alpha)(Be^{-\alpha} - Ce^\alpha).
+\end{align*}
+These give us, respectively,
+\begin{align*}
+ (2\alpha(1 - c)^2 - Ri_0)B &= Ri_0 Ce^{-2\alpha}\\
+ (2\alpha(1 + c)^2 - Ri_0)C &= Ri_0 Be^{-2\alpha}.
+\end{align*}
+So we get the biquadratic equation
+\[
+ c^4 - c^2 \left(2 + \frac{Ri_0}{\alpha}\right) + \left(1 - \frac{Ri_0}{2\alpha}\right)^2 - \frac{Ri_0^2 e^{-4\alpha}}{4\alpha^2} = 0.
+\]
+We then apply the quadratic formula to say
+\[
+ c^2 = 1 + \frac{Ri_0}{2\alpha} \pm \sqrt{\frac{2Ri_0}{\alpha} + \frac{Ri_0^2 e^{-4\alpha}}{4\alpha^2}}.
+\]
+So it is possible to have instability with no inflection point in the velocity profile! Indeed, instability occurs when $c^2 < 0$, which is equivalent to requiring
+\[
+ \frac{2\alpha}{1 + e^{-2\alpha}} < Ri_0 < \frac{2\alpha}{1 - e^{-2\alpha}}.
+\]
+Thus, for any $Ri_0 > 0$, we can have instability. This is known as \term{Taylor instability}.
+
+Heuristically, the interfacial gravity waves at the two density jumps have speeds
+\[
+ c_{igw}^{\pm} = \pm 1 \mp \left(\frac{Ri_0}{2\alpha}\right)^{1/2}.
+\]
+These two waves resonate when $c^+_{igw} = c^-_{igw} = 0$, i.e.\ when $Ri_0 = 2\alpha$. This is in very good agreement with what our above analysis gives us.
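We can check the predicted instability band against the explicit formula for $c^2$, taking $\alpha = \frac{1}{2}$ as an arbitrary illustrative value:

```python
import numpy as np

def taylor_c2_minus(alpha, Ri0):
    """Smaller root for c^2 of the Taylor-instability dispersion
    relation; instability iff this root is negative."""
    return 1 + Ri0 / (2 * alpha) - np.sqrt(
        2 * Ri0 / alpha + Ri0**2 * np.exp(-4 * alpha) / (4 * alpha**2))

def unstable_band(alpha):
    """Band of bulk Richardson numbers Ri_0 giving instability."""
    lo = 2 * alpha / (1 + np.exp(-2 * alpha))
    hi = 2 * alpha / (1 - np.exp(-2 * alpha))
    return lo, hi

alpha = 0.5
lo, hi = unstable_band(alpha)
inside = taylor_c2_minus(alpha, 0.5 * (lo + hi))  # should be negative
below = taylor_c2_minus(alpha, 0.5 * lo)          # should be positive
above = taylor_c2_minus(alpha, 2.0 * hi)          # should be positive
```

Note that the resonance value $Ri_0 = 2\alpha$ lies inside the band, as the heuristic suggests.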
+
+% insert contour picture
+
+
+\section{Absolute and convective instabilities}
+
+So far, we have only been considering perturbations that are Fourier modes,
+\[
+ (\mathbf{u}', p', \rho') = [\hat{\mathbf{u}}(z), \hat{p}(z), \hat{\rho}(z)]e^{i(\mathbf{k}\cdot \mathbf{x} - \omega t)}.
+\]
+This gives rise to a dispersion relation $D(\mathbf{k}, \omega) = 0$. This is an eigenvalue problem for $\omega(\mathbf{k})$, and the $k$th mode is unstable if the solution $\omega(k)$ has positive imaginary part.
+
+When we focus on Fourier modes, they are necessarily non-local. In reality, perturbations tend to be local. We perturb the fluid at some point, and the perturbation spreads out. In this case, we might be interested in \emph{how} the perturbations spread.
+
+To understand this, we need the notion of \term{group velocity}. For simplicity, suppose we have a sum of two Fourier modes of slightly different wavenumbers $k_0 \pm k_\Delta$. The corresponding frequencies are then $\omega_0 \pm \omega_\Delta$, and for small $k_\Delta$, we may approximate
+\[
+ \omega_\Delta = k_\Delta \left.\frac{\partial \omega}{\partial k}\right|_{k = k_0}.
+\]
+We can then look at how our wave propagates:
+\begin{align*}
+ \eta &= \cos [(k_0 + k_\Delta)x - (\omega_0 + \omega_\Delta)t] + \cos [(k_0 - k_\Delta)x - (\omega_0 - \omega_\Delta)t]\\
+ &= 2 \cos (k_\Delta x - \omega_\Delta t) \cos (k_0 x - \omega_0 t)\\
+ &= 2 \cos \left(k_\Delta \left(x - \left.\frac{\partial \omega}{\partial k}\right|_{k = k_0}t\right)\right) \cos (k_0 x - \omega_0 t).
+\end{align*}
+Since $k_\Delta$ is small, we know the first term has a long wavelength, and determines the ``overall shape'' of the wave.
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6.5, 0);
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\eta$};
+
+ \draw [semithick, mblue, domain=0:6,samples=600] plot(\x, {1.5 * cos(1000 * \x) * sin (90 * \x)});
+ \draw [semithick, morange] (0, 0) sin (1, 1.5) cos (2, 0) sin (3, -1.5) cos (4, 0) sin (5, 1.5) cos (6, 0);
+ \draw [semithick, mgreen] (0, 0) sin (1, -1.5) cos (2, 0) sin (3, 1.5) cos (4, 0) sin (5, -1.5) cos (6, 0);
+ \end{tikzpicture}
+\end{center}
+
As time evolves, these ``packets'' propagate with \term{group velocity} $c_g = \frac{\partial \omega}{\partial k}$. This is also the speed at which the energy in the wave packets propagates.
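The sum-to-product identity used above can be checked numerically. This is a minimal sketch; the carrier and modulation values below are arbitrary illustrative choices, not tied to any particular flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative carrier (k0, w0) and small modulation (kd, wd)
k0, w0 = 5.0, 3.0
kd, wd = 0.1, 0.06

# sample random points in space and time
x = rng.uniform(-10, 10, 1000)
t = rng.uniform(0, 10, 1000)

# sum of two Fourier modes with nearby wavenumbers and frequencies ...
lhs = np.cos((k0 + kd) * x - (w0 + wd) * t) \
    + np.cos((k0 - kd) * x - (w0 - wd) * t)
# ... equals a slowly varying envelope times the carrier wave
rhs = 2 * np.cos(kd * x - wd * t) * np.cos(k0 * x - w0 * t)

assert np.allclose(lhs, rhs)
```

The envelope factor $\cos(k_\Delta x - \omega_\Delta t)$ travels at speed $\omega_\Delta / k_\Delta \approx \frac{\partial \omega}{\partial k}$, the group velocity.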
+
+In general, there are 4 key characteristics of interest for these waves:
\begin{itemize}
 \item The energy in a wave packet propagates at the group velocity.
 \item They can \emph{disperse} (different wavelengths travel at different speeds).
 \item They can be \emph{advected} by a streamwise flow.
 \item They can be \emph{unstable}, i.e.\ their amplitude can grow in time and space.
\end{itemize}
+
In general, if we have an unstable system, then we expect the perturbation to grow with time, but also to ``move away'' from the original source of the perturbation. There are two possibilities: \term{convective instability}, where the growing perturbation is swept away from its source; and \term{absolute instability}, where it grows at the source itself.
+
+We can imagine the evolution of a convective instability as looking like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, -1) -- (0, 4) node [above] {$t$};
+
+ \draw [semithick, mblue] (-2, 0) -- (5, 0);
+
+ \draw [semithick, mblue] (-2, 1) -- (0.5, 1);
+ \draw [semithick, mblue] (-2, 2) -- (1, 2);
+ \draw [semithick, mblue] (-2, 3) -- (1.5, 3);
+
+ \draw [semithick, mblue] (1.8, 1) -- (5, 1);
+ \draw [semithick, mblue] (3, 2) -- (5, 2);
+ \draw [semithick, mblue] (4, 3) -- (5, 3);
+
+ \node [circ, mblue] at (0, 0) {};
+ \draw [semithick, mblue, domain=0.5:1.8,samples=80] plot (\x, {1 + 0.2 * cos(2500 * \x) * exp (- 25 * ((\x - 1)^2))});
+ \draw [semithick, mblue, domain=1:3,samples=100] plot (\x, {2 + 0.3 * cos(2500 * \x) * exp (- 10 * ((\x - 2)^2))});
+ \draw [semithick, mblue, domain=1.5:4,samples=200] plot (\x, {3 + 0.4 * cos(2500 * \x) * exp (- 6 * ((\x - 2.9)^2))});
+
+ \draw [dashed] (0, 0) -- (2.6, 3.8);
+ \draw [dashed] (0, 0) -- (4.8, 3.8);
+ \end{tikzpicture}
+\end{center}
+whereas an absolute instability would look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, -1) -- (0, 4) node [above] {$t$};
+
+ \draw [semithick, mblue] (-2, 0) -- (5, 0);
+
+ \draw [semithick, mblue] (-2, 1) -- (-1, 1);
+ \draw [semithick, mblue] (-2, 2) -- (-1.5, 2);
+ \draw [semithick, mblue] (-2, 3) -- (-2, 3);
+
+ \draw [semithick, mblue] (1.2, 1) -- (5, 1);
+ \draw [semithick, mblue] (2, 2) -- (5, 2);
+ \draw [semithick, mblue] (3, 3) -- (5, 3);
+ \node [circ, mblue] at (0, 0) {};
+
+ \draw [semithick, mblue, domain=-1:1.2,samples=100] plot (\x, {1 + 0.2 * cos(2500 * \x) * exp (- 8 * ((\x - 0.2)^2))});
+ \draw [semithick, mblue, domain=-1.5:2,samples=200] plot (\x, {2 + 0.3 * cos(2500 * \x) * exp (- 2 * ((\x - 0.4)^2))});
+ \draw [semithick, mblue, domain=-2:3,samples=300] plot (\x, {3 + 0.4 * cos(2500 * \x) * exp (- ((\x - 0.6)^2))});
+
+ \draw [dashed] (0, 0) -- (-1.7, 3.8);
+ \draw [dashed] (0, 0) -- (3.2, 3.8);
+ \end{tikzpicture}
+\end{center}
+
Note that even in the absolute case, the perturbation may still have non-zero group velocity. It is just that the perturbation grows faster than it is carried away.
+
To make this more precise, we consider the response of the system to an \emph{impulse}. We can understand the dispersion relation as saying that in spectral space, a quantity $\chi$ (e.g.\ velocity, pressure, density) must satisfy
+\[
+ D(k, \omega) \tilde{\chi}(k, \omega) = 0,
+\]
+where
+\[
+ \tilde{\chi}(k, \omega) = \int_{-\infty}^\infty\int_{-\infty}^\infty \chi(x, t) e^{-i(kx - \omega t)}\;\d x\;\d t.
+\]
Indeed, this equation says we can have a non-zero $(k, \omega)$ mode iff $(k, \omega)$ satisfies $D(k, \omega) = 0$. The point of writing it this way is that it is linear in $\tilde{\chi}$, and so applies to any $\chi$, not necessarily a Fourier mode.
+
+Going back to position space, we can think of this as saying
+\[
+ \D \left(-i \frac{\partial}{\partial x}, i\frac{\partial}{\partial t}\right) \chi(x, t) = 0.
+\]
+This equation allows us to understand how the system responds to some external forcing $F(x, t)$. We simply replace the above equation by
+\[
+ \D \left(-i\frac{\partial}{\partial x}, i\frac{\partial}{\partial t}\right) \chi(x, t) = F(x,t).
+\]
+Usually, we want to solve this in Fourier space, so that
+\[
+ D(k, \omega) \tilde{\chi}(k, \omega) = \tilde{F}(k, \omega).
+\]
In particular, the response to the impulse $F_{\xi, \tau}(x, t) = \delta(x - \xi) \delta(t - \tau)$ is known as the \term{Green's function} or \term{impulse response}. We may wlog assume $\xi = \tau = 0$, and just call the forcing $F$. The solution $G(x, t)$ then satisfies
+\[
+ \D \left(-i\frac{\partial}{\partial x}, i\frac{\partial}{\partial t}\right) G(x, t) = \delta(x)\delta(t).
+\]
+Given the Green's function, the response to an arbitrary forcing $F$ is ``just'' given by
+\[
+ \chi(x, t) = \int G(x - \xi, t - \tau) F(\xi, \tau)\;\d \xi \;\d \tau.
+\]
Thus, the Green's function essentially controls all the behaviour of the system.
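As a sanity check of the convolution formula, here is a toy one-dimensional (time-only) example. The operator $\D = \frac{\d}{\d t} + a$ and the forcing below are assumptions chosen purely for illustration; the causal Green's function is then $G(t) = H(t) e^{-at}$.

```python
import numpy as np

# assumed toy operator D = d/dt + a, with causal Green's function H(t) e^{-a t}
a, dt = 1.3, 1e-3
t = np.arange(0, 10, dt)

G = np.exp(-a * t)                    # Green's function sampled on t >= 0
F = np.sin(2 * t) * np.exp(-0.1 * t)  # an arbitrary smooth forcing

# chi(t) = int G(t - tau) F(tau) d tau, discretised as a causal convolution
chi = np.convolve(G, F)[:len(t)] * dt

# verify that (d/dt + a) chi = F away from the boundaries
residual = np.gradient(chi, dt) + a * chi - F
assert np.max(np.abs(residual[10:-10])) < 1e-2
```

The same structure carries over to the full problem; only the operator and its Green's function are more complicated.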
+
+%Once there is the possibility of spatial instability, it is essential to consider response to perturbations. This gives a different (but equivalent) definitions of linear instability in terms of impulse response.
+%
+%Consider we are in (for simplicity) 2D and we have a dispersion relation $D(k, \omega; R) = 0$. Then the eigenfunction (dependent on $z$) is completely determined from $D$. We are interested in the response of some complex scalar field $\chi(x, t)$ to some forcing.
+%
+%The dispersion relation effectively defines a linear partial differential operator on $\chi(x, t)$:
+%Indeed, if $F = 0$ and $\chi(x, t) = Ae^{i(kx - \omega t)}$, then this simply says $D(k, \omega; R) = 0$.
+%
+%Then the central problem reduces to the determination of Green's function of this operator:
+%\[
+% \D \left(-i\frac{\partial}{\partial x}, i \frac{\partial}{\partial t}; R\right) G(x, t; \xi, \tau) = \delta(x - \xi) \delta(t - \tau).
+%\]
+%Then the solution to our PDE above is just
+%\[
+% \chi(x, t) = \int G(x, t; \xi, \tau) F(\xi, \tau)\;\d \xi\;\d \tau.
+%\]
+%of course, we may perform a shift to make the impulse at $x = 0, t = 0$. So $G$ is the \term{impulse response}.
+
+With the Green's function, we may now make some definitions.
+\begin{defi}[Linear stability]\index{linear stability}
+ The base flow of a system is \emph{linearly stable} if
+ \[
+ \lim_{t \to \infty} G(x, t) = 0
+ \]
+ along \emph{all} rays $\frac{x}{t} = C$.
+
+ A flow is unstable if it is not stable.
+\end{defi}
+
+\begin{defi}[Linearly convectively unstable]\index{linear convective instability}\index{convective instability!linear}
+ An unstable flow is \emph{linearly convectively unstable} if $\lim_{t \to \infty}G(x, t) = 0$ along the ray $\frac{x}{t} = 0$.
+\end{defi}
+\begin{defi}[Linearly absolutely unstable]\index{linear absolute instability}\index{absolute instability!linear}
+ An unstable flow is \term{linearly absolutely unstable} if $\lim_{t\to \infty}G(x, t) \not= 0$ along the ray $\frac{x}{t} = 0$.
+\end{defi}
+
+The first case is what is known as an \term{amplifier}, where the instability grows but travels away at the same time. In the second case, we have an \term{oscillator}.
+
+Even for a general $F$, it is easy to solve for $\tilde{\chi}$ in Fourier space. Indeed, we simply have
+\[
+ \tilde{\chi}(k, \omega) = \frac{\tilde{F}(k, \omega)}{D(k, \omega)}.
+\]
+To recover $\chi$ from this, we use the Fourier inversion formula
+\[
+ \chi(x, t) = \frac{1}{(2\pi)^2} \int_{L_\omega} \int_{F_k} \tilde{\chi}(k, \omega) e^{i(kx - \omega t)}\;\d k\;\d \omega.
+\]
+Note that here we put some general contours for $\omega$ and $k$ instead of integrating along the real axis, so this is not the genuine Fourier inversion formula. However, we notice that as we deform our contour, when we pass through a singularity, we pick up a multiple of $e^{i(kx - \omega t)}$. Moreover, since the singularities occur when $D(k, \omega) = 0$, it follows that these extra terms we pick up are in fact solutions to the homogeneous equation $D(-i\partial_x, i\partial_t) \chi = 0$. Thus, no matter which contour we pick, we do get a genuine solution to our problem.
+
+So how should we pick a contour? The key concept is \emph{causality} --- the response must come after the impulse. Thus, if $F(x, t) = 0$ for $t < 0$, then we also need $\chi(x, t) = 0$ in that region. To understand this, we perform only the temporal part of the Fourier inversion, so that
+\[
+ \tilde{\chi}(k, t) = \frac{1}{2\pi}\int_{L_\omega} \frac{\tilde{F}(k, \omega)}{D(k, \omega; R)} e^{-i\omega t}\;\d \omega.
+\]
+To perform this contour integral, we close our contour either upwards or downwards, compute residues, and then take the limit as our semi-circle tends to infinity. For this to work, it must be the case that the contribution by the circular part of the contour vanishes in the limit, and this determines whether we should close upwards or downwards.
+
If we close upwards, then the enclosed $\omega$ have positive imaginary part, and for $e^{-i\omega t}$ not to blow up on the semi-circle, we need $t < 0$. Similarly, we close downwards when $t > 0$. Thus, by Jordan's lemma, if we want $\chi$ to vanish whenever $t < 0$, we should pick $L_\omega$ to lie \emph{above} all the singularities of $\tilde{\chi}(k, \omega)$, so that closing upwards picks up no residue. This determines the choice of contour.
+
+%Now recall our Fourier transformed problem
+%\[
+% \tilde{D}(k, \omega; R) \tilde{\chi}(k, \omega) = \tilde{F}(k, \omega).
+%\]
+%In the absence of forcing, i.e.\ $F = 0$, we recover the normal modes with $\tilde{D}(k, \omega; R) = 0$.
+%
+%With forcing, we can formally construct
+%\[
+% \tilde{\chi}(k, \omega) = \frac{\tilde{F}(k, \omega)}{\tilde{D}(k, \omega, R)}.
+%\]
+%So we can consider the inverse Fourier transform
+%\[
+% \tilde{\chi}(k, t) = \frac{1}{2\pi}\int_{L_\omega} \frac{\tilde{F}(k, \omega)}{\tilde{D}(k, \omega; R)} e^{-i\omega t}\;\d \omega.
+%\]
+%Here $k$ is a parameter on the $F_k$ contour restricted to being real.
+%
+%Assuming analyticity, the integral is given by the residues, i.e.\ singularities of $\tilde{D}$. So they are at the \emph{temporal modes} $\omega(k)$.
+
+Assume that $D$ has a single simple zero for each $k$. Let $\omega(k)$ be the corresponding value of $\omega$. Then by complex analysis, we obtain
+\[
+ \tilde{\chi}(k, t) = -i\frac{\tilde{F}[k, \omega(k)]}{\frac{\partial \tilde{D}}{\partial \omega}[k, \omega(k)]} e^{-i\omega(k) t}.
+\]
+We finally want to take the inverse Fourier transform with respect to $x$:
+\[
 \chi(x, t) = \frac{-i}{2\pi}\int_{F_k} \frac{\tilde{F}[k, \omega(k)]}{\frac{\partial \tilde{D}}{\partial \omega}[k, \omega(k)]} e^{i(kx - \omega(k) t)}\;\d k.
+\]
+We are interested in the case $F = \delta(x) \delta(t)$, i.e.\ $\tilde{F}(k, \omega) = 1$. So the central question is the evaluation of the integral
+\[
 G(x, t) = -\frac{i}{2\pi}\int_{F_k} \frac{\exp(i(kx - \omega(k) t))}{\frac{\partial\tilde{D}}{\partial \omega}[k, \omega(k)]}\;\d k.
+\]
+Recall that our objective is to determine the behaviour of $G$ as $t \to \infty$ with $V = \frac{x}{t}$ fixed. Since we are interested in the large $t$ behaviour instead of obtaining exact values at finite $t$, we may use what is known as the \emph{method of steepest descent}.
+
+The method of steepest descent is a very general technique to approximate integrals of the form
+\[
+ H(t) = \frac{-i}{2\pi} \int_{F_k} f(k) \exp \left(t \rho(k)\right)\;\d k,
+\]
+in the limit $t \to \infty$. In our case, we take
+\begin{align*}
+ f(k) &= \frac{1}{\frac{\partial \tilde{D}}{\partial \omega}[k, \omega(k)]}\\
+ \rho\left(k\right) &= i \left(kV - \omega(k)\right).
+\end{align*}
+
In an integral of this form, there are different factors that affect the limiting behaviour as $t \to \infty$. First, as $t$ gets very large, the fast oscillations in the phase cause the integral to cancel, except near points of \emph{stationary phase}, where $\frac{\partial \rho_i}{\partial k} = 0$. On the other hand, since we are exponentiating $\rho$, we expect the largest contribution to the integral to come from the $k$ where $\rho_r(k)$ is greatest.
+
+The idea of the \term{method of steepest descent} is to deform the contour so that we integrate along paths of stationary phase, i.e.\ paths with constant $\rho_i$, so that we don't have to worry about the oscillating phase.
+
+To do so, observe that the Cauchy--Riemann equations tell us $\nabla \rho_r \cdot \nabla \rho_i = 0$, where in $\nabla$ we are taking the ordinary derivative with respect to $k_r$ and $k_i$, viewing the real and imaginary parts as separate variables.
+
Since the gradient of a function is normal to its contours, this tells us that the curves of constant $\rho_i$ are the curves along which $\nabla \rho_r$ points. In other words, the stationary phase curves are exactly the curves of steepest ascent or descent of $\rho_r$.
+
+Often, the function $\rho$ has some stationary points, i.e.\ points $k_*$ satisfying $\rho'(k_*) = 0$. Generically, we expect this to be a second-order zero, and thus $\rho$ looks like $(k - k_*)^2$ near $k_*$. We can plot the contours of $\rho_i$ near $k_*$ as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [gray] (-3, -3) rectangle (3, 3);
+
+ \draw [mblue] (-3, 0) -- (3, 0);
+ \draw [mblue] (0, -3) -- (0, 3);
+ \node [circ] at (0 ,0) {};
+ \node [anchor = south west] at (0, 0) {$k_*$};
+
+ \foreach \x in {0,90,180,270} {
+ \begin{scope}[rotate=\x]
+ \draw [mblue] (0.5, -3) .. controls (0.5, -1) and (1, -0.5) .. (3, -0.5);
+ \draw [mblue] (1.5, -3) .. controls (1.5, -2) and (2, -1.5) .. (3, -1.5);
+
+ \draw [mblue] (2.5, -3) arc(180:90:0.5);
+ \end{scope}
+ }
+
+ \draw [mblue, ->] (-1.3, 0) -- +(0.001, 0);
+ \draw [mblue, ->] (1.3, 0) -- +(-0.001, 0);
+ \draw [mblue, ->] (0, -1.7) -- +(0, -0.001);
+ \draw [mblue, ->] (0, 1.7) -- +(0, 0.001);
+ \end{tikzpicture}
+\end{center}
+where the arrows denote the direction of steepest descent of $\rho_r$. Note that since the real part satisfies Laplace's equation, such a stationary point must be a saddle, instead of a local maximum or minimum, even if the zero is not second-order.
+
+We now see that if our start and end points lie on opposite sides of the ridge, i.e.\ one is below the horizontal line and the other is above, then the only way to do so while staying on a path of stationary phase is to go through the stationary point.
+
+Along such a contour, we would expect the greatest contribution to the integral to occur when $\rho_r$ is the greatest, i.e.\ at $k_*$. We can expand $\rho$ about $k_*$ as
+%
+%Now there are many contours of $\rho_i$. Which of them should we pick? Along such contours, the greatest contribution comes from the point where $\rho_r(k)$ is the greatest. If we want to essentially approximate the whole integral by the value at the maximum, then we should pick a contour where $\rho_r(k)$ drops as quickly as possible when we move away from the maximum. After some work, we see that we should pick the contour that passes through the points $k_*$ with $\frac{\partial \rho}{\partial k}(k^*) = 0$.
+%
+%Note that since $\rho_r$ and $\rho_i$ satisfy Laplace's equation,
+%
+%
+%To approximate the integral, we expand $\rho$ about the point $k_*$, to get
+\[
+ \rho(k) \sim \rho(k_*) + \frac{1}{2} \frac{\partial^2 \rho}{\partial k^2} (k_*) (k - k_*)^2.
+\]
+We can then approximate
+\[
+ H(t) \sim \frac{-i}{2\pi}\int_{\varepsilon} f(k) e^{t \rho(k)}\;\d k,
+\]
+where we are just integrating over a tiny portion of our contour near $k_*$. Putting in our series expansion of $\rho$, we can write this as
+\[
+ H(t) \sim \frac{-i}{2\pi} f(k_*) e^{t\rho(k_*)} \int_{\varepsilon} \exp\left(\frac{t}{2} \frac{\partial^2 \rho}{\partial k^2} (k_*) (k - k_*)^2\right)\;\d k.
+\]
Recall that we picked our path to be the path of steepest descent on both sides of the ridge. So we can parametrize our path by $K$ such that
+\[
+ (iK)^2 = \frac{t}{2} \frac{\partial^2 \rho}{\partial k^2} (k_*) (k - k_*)^2,
+\]
+where $K$ is purely real. So our approximation becomes
+\[
+ H(t) \sim \frac{f(k_*) e^{t \rho(k_*)}}{\sqrt{2\pi^2 t \rho''(k_*)}} \int_{-\varepsilon}^\varepsilon e^{-K^2}\;\d K.
+\]
Since $e^{-K^2}$ falls off so quickly as $K$ moves away from $0$, we may approximate this integral by an integral over the whole real line, which we know gives $\sqrt{\pi}$. So our final approximation is
+\[
+ H(t) \sim \frac{f(k_*) e^{t \rho(k_*)}}{\sqrt{2\pi t \rho''(k_*)}}.
+\]
+We then look at the maxima of $\rho_r(k)$ along these paths and see which has the greatest contribution.
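Before applying this to the stability problem, here is a small numerical sanity check of the steepest descent formula. We take the toy dispersion relation $\omega(k) = -ik^2$ and $f = 1$ (assumptions made purely so that the $k$-integral can also be evaluated directly).

```python
import numpy as np

# toy dispersion relation omega(k) = -i k^2, so rho(k) = i(kV - omega) = i k V - k^2
V, t = 1.5, 20.0

# direct evaluation of H(t) = (-i/2pi) * int f(k) exp(t rho(k)) dk along real k
k = np.linspace(-20.0, 20.0, 400001)
dk = k[1] - k[0]
H_num = (-1j / (2 * np.pi)) * np.sum(np.exp(t * (1j * k * V - k**2))) * dk

# saddle point: rho'(k) = iV - 2k = 0 gives k_* = iV/2 (complex!), rho'' = -2
k_star = 1j * V / 2
rho_star = 1j * k_star * V - k_star**2          # equals -V^2/4
H_saddle = np.exp(t * rho_star) / np.sqrt(2 * np.pi * t * (-2) + 0j)

# for this Gaussian toy case, the saddle-point formula is essentially exact
assert abs(H_num - H_saddle) < 1e-6 * abs(H_saddle)
```

Note that the saddle point sits off the real axis, as it typically does in the stability problem below.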
+
+Now let's apply this to our situation. Our $\rho$ was given by
+\[
+ \rho(k) = i (kV - \omega(k)).
+\]
+So $k_*$ is given by solving
+\[
+ \frac{\partial \omega}{\partial k}(k_*) = V.\tag{$*$}
+\]
Thus, what we have shown is that the greatest contribution to the Green's function along the $V$ direction comes from the modes whose group velocity is $V$! Note that in general, this $k_*$ is complex.
+
+Thus, the conclusion is that given any $V$, we should find the $k_*$ such that ($*$) is satisfied. The temporal growth rate along $\frac{x}{t} = V$ is then
+\[
+ \sigma(V) = \omega_i(k_*) - k_{*i}V.
+\]
+This is the growth rate we would experience if we moved at velocity $V$. However, it is often more useful to consider the absolute growth rate. In this case, we should try to maximize $\omega_i$. Suppose this is achieved at $k = k_{max}$, possibly complex. Then we have
+\[
+ \frac{\partial \omega_i}{\partial k}(k_{max}) = 0.
+\]
But this means that $c_g = \frac{\partial \omega}{\partial k}$ is purely real at $k_{max}$. Thus, this maximally unstable mode is actually observed along a physically meaningful ray $V = c_g$.
+
+
+%
+%Remember that $\rho = \rho_r + i \rho_i$ is complex and analytic. So $\rho_r$ and $\rho_i$ satisfy the Cauchy--Riemann equations
+%\[
+% \frac{\partial \rho_r}{\partial k_r} = \frac{\partial \rho_i}{\partial k_i},\quad \frac{\partial \rho_r}{\partial k_i} = - \frac{\partial \rho_i}{\partial k_r}.
+%\]
+%In particular, we deduce that the gradients (with respect to $k$) are orthogonal: $\nabla_k \rho_r \cdot \nabla_k \rho_i = 0$. Also, $\rho_r$ and $\rho_i$ satisfy Laplace's equation
+%\[
+% \nabla_k^2 \rho_r = \nabla_k^2 \rho_i = 0.
+%\]
+%Thus, we know that any critical point $k_*$ for $\rho$ must be a saddle point.
+%
+%We know that the integral is dominated by the largest value of $\rho_r$ in vicinity of $k_*$, where we can expand
+%\[
+% \rho\left(k; \frac{x}{t}\right) \sim \rho\left(k_*; \frac{x}{t}\right) + \frac{1}{2} \frac{\partial^2}{\partial k^2} \rho\left(k_*; \frac{x}{t}\right)(k - k_*)^2.
+%\]
+%Since we are sitting on a complex plane, by Cauchy's thoerem, the value of the integral should be independent of the path. So we may distort the contour as we wish, and the trick is to follow the steepest descent for the real part. Then by orthogonality, we are \emph{guaranteed} that this is the direction of stationary phase!
+%
+%Now if we believe that the dominant contribution is due to what happens at $k_*$, then we can approximate
+%\[
+% G(x, t) \sim \frac{-i}{2\pi} \left(f(k_*) \exp \left(t \rho \left(k_*; \frac{x}{t}\right)\right)\right) \int_{F_k} \exp \left(\frac{t}{2} \frac{\partial^2}{\partial k^2} \rho\left(k_*; \frac{x}{t}\right)(k - k_*)^2\right)\;\d k.
+%\]
+%We can substitute
+%\[
+% (iK)^2 = \frac{t}{2} \frac{\partial^2}{\partial k^2}\rho\left(k_*; \frac{x}{t}\right)(k - k_*)^2.
+%\]
+%So we are left with
+%\[
+% G(x, t) \sim \frac{f(k_*) \exp \left(t \rho\left(k_*; \frac{x}{t}\right)\right)}{ \sqrt{2\pi^2 t \frac{\partial^2}{\partial k^2} \rho(k_*; \frac{x}{t})}} \int_{-\infty}^\infty e^{-K^2}\;\d K = \frac{f(k_*) \exp \left((t \rho\left((k_*; \frac{x}{t}\right)\right)}{ \sqrt{2\pi t \frac{\partial^2}{\partial k^2} \rho(k_*; \frac{x}{t})}}
+%\]
+%Now recall that we had
+%\[
+% \rho = i \left(k \left(\frac{x}{t}\right) - \omega(k)\right).
+%\]
+%So we simply have
+%\[
+% \frac{\partial^2 \rho}{\partial k^2} = -i \frac{\partial^2 \omega}{\partial k^2}.
+%\]
+%So for an observer at $V = \frac{x}{t}$, we have
+%\[
+% G(x, t) \sim \frac{e^{i\pi/4} e^{i(k_* x - \omega(k_*)t}}{\frac{\partial D}{\partial \omega}(k_*, \omega(k_*)) \sqrt{2\pi t \frac{\partial^2}{\partial k^2}\omega(k_*)}},
+%\]
+%and
+%\[
+% \frac{\partial \omega}{\partial k}(k_*) = \frac{x}{t}.
+%\]
+%We then see that the temporal growth rate is
+%\[
+% \sigma(V) = \omega_i(k_*) - k_{*i} V.
+%\]
+%Generically, we expect there is a maximum growth rate at some $k = k_{max}$. Then
+%\[
+% \omega_i(k_{max}) = \omega_{i, max}.
+%\]
+%So we have
+%\[
+% \frac{\partial \omega_i}{\partial k} (k_{max}) = 0.
+%\]
+%So it follows that the group velocity
+%\[
+% c_g = \frac{\partial \omega}{\partial k}(k_{max}) = V_{max}
+%\]
+%is real, and is thus well-defined.
+
+%Now if $\omega_{i, max} < 0$, then $\sigma(V) < 0$ for all $V = \frac{x}{t}$. So the flow is linearly stable.
+%
+%On the other hand, if $\omega_{i, max} > 0$, then $\sigma(V) > 0$ for all $V = \frac{x}{t}$. on the other hand, if $\omega_{i, max} > 0$, then it is linearly unstable.
+
+We can now say
+\begin{itemize}
+ \item If $\omega_{i, max} < 0$, then the flow is linearly stable.
+ \item If $\omega_{i, max} > 0$, then the flow is linearly unstable. In this case,
+ \begin{itemize}
+ \item If $\omega_{0, i} < 0$, then the flow is convectively unstable.
+ \item If $\omega_{0, i} > 0$, then the flow is absolutely unstable.
+ \end{itemize}
+\end{itemize}
+Here $\omega_0$ is defined by first solving $\frac{\partial \omega}{\partial k}(k_0) = 0$, thus determining the $k_0$ that leads to a zero group velocity, and then setting $\omega_0 = \omega(k_0)$. These quantities are known as the \term{absolute frequency} and \term{absolute growth rate} respectively.
+
+%For the convective/absolute issue, we can consider the (complex) \term{absolute wavenumber} $k_0$ defined by
+%\[
+% \frac{\partial \omega}{\partial k}(k_0) = 0.
+%\]
+%There is an associated (complex) \term{absolute frequency} and \term{absolute growth rate} $\omega_0 = \omega(k_0)$ and $\sigma(0) = \omega_{0, i}$.
+%
+%Therefore we can obtain the Briggs--Bers criterion for unstable flows: if $\omega_{_0, i} < 0$, then the flow is convectively unstable, and if $0 < V_- < V_+$. If $\omega_{0,i} > 0$, then the flow is absolutely unstable and $V_i < 0 < V_+$.
+%
+\begin{eg}
+ We can consider a ``model dispersion relation'' given by the \term{linear complex Ginzburg--Landau equation}
+ \[
+ \left(\frac{\partial}{\partial t} + U \frac{\partial}{\partial x}\right) \chi - \mu \chi - (1 + i c_d) \frac{\partial^2}{\partial x^2}\chi = 0.
+ \]
+ This has advection ($U$), dispersion ($c_d$) and instability ($\mu$). Indeed, if we replace $\frac{\partial}{\partial t} \leftrightarrow -i\omega$ and $\frac{\partial}{\partial x} \leftrightarrow ik$, then we have
+ \[
+ i(-\omega + Uk)\chi - \mu \chi + (1 + ic_d) k^2 \chi = 0.
+ \]
+ This gives
+ \[
+ \omega = Uk + c_d k^2 + i(\mu - k^2).
+ \]
 We see that we have temporal instability for $|k| < \sqrt{\mu}$, assuming $\mu > 0$. These modes have phase speed
+ \[
+ c_r = \frac{\omega_r}{k} = U + c_d k.
+ \]
+ On the other hand, if we force $\omega \in \R$, then we have spatial instability. Solving
+ \[
+ k^2 (c_d - i) + Uk + (i\mu - \omega) = 0,
+ \]
+ the quadratic equation gives us two branches
+ \[
 k^{\pm} = \frac{-U \pm \sqrt{U^2 - 4(i\mu - \omega)(c_d - i)}}{2(c_d - i)}.
+ \]
+ To understand whether this instability is convective or absolute, we can complete the square to obtain
+ \[
+ \omega = \omega_0 + \frac{1}{2} \omega_{kk}(k - k_0)^2,
+ \]
+ where
+ \begin{align*}
+ \omega_{kk} &= 2(c_d - i)\\
+ k_0 &= \frac{U}{2(i - c_d)}\\
+ \omega_0 &= -\frac{c_d U^2}{4(1 + c_d^2)} +i \left(\mu - \frac{U^2}{4(1 + c_d^2)}\right).
+ \end{align*}
 This $k_0$ and $\omega_0$ are then the absolute wavenumber and absolute frequency respectively. % interplay between advection and dispersion
+\end{eg}
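The closed forms in this example are easy to check numerically. The parameter values below are arbitrary illustrative choices; with them, $\mu < \frac{U^2}{4(1 + c_d^2)}$, so the flow is unstable but only convectively.

```python
import numpy as np

# illustrative parameters for the linear complex Ginzburg--Landau relation
U, c_d, mu = 2.0, 0.5, 0.3

def omega(k):
    # omega = U k + c_d k^2 + i (mu - k^2)
    return U * k + c_d * k**2 + 1j * (mu - k**2)

# absolute wavenumber: solve d omega / d k = U + 2 (c_d - i) k = 0
k0 = U / (2 * (1j - c_d))
omega0 = omega(k0)

# closed form for the absolute frequency
omega0_formula = (-c_d * U**2 / (4 * (1 + c_d**2))
                  + 1j * (mu - U**2 / (4 * (1 + c_d**2))))
assert abs(omega0 - omega0_formula) < 1e-12

# unstable (mu > 0) but absolutely stable (Im omega0 < 0): convective instability
assert mu > 0 and omega0.imag < 0
```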
+
+%Inverting this relation, we can write
+%\[
+% k^{\pm}(\omega) = k_0 \pm \left(\frac{2}{\omega_{kk}}\right)^{1/2} (\omega - \omega_0)^{1/2}.
+%\]
Note that after completing the square, it is clear that $k_0$ is a double root at $\omega = \omega_0$. Of course, there is nothing special about this example, since $k_0$ was defined to solve $\frac{\partial \omega}{\partial k}(k_0) = 0$!
+
+I really ought to say something about Bers' method at this point.
+
+%In 1983, Bers identified a geometric method to find $\omega_0, k_0$ --- we find \term{pinch points} as $L_\omega$ is lowered.
+%
+%Indeed, for each fixed $\omega_i$, the contour $L_\omega$ generically corresponds to two branches $k^{\pm}(L_\omega)$ in $k$ space as dictated by the dispersion relation. Conversely, the contour $F_k$ for a fixed $k_i$ corresopnds to a branch $\omega(F_k)$ in $\omega$ space.
+%
+%% insert picture
+%
+%By causlity, it must be the case that $\omega(F_k)$ lies below $L_\omega$, and $F_k$ lies in between $k^{\pm}(L_\omega)$.
+
In practice, it is not feasible to apply a $\delta$-function perturbation and observe how it grows. Instead, we can probe convective versus absolute instabilities using a periodic forcing switched on at $t = 0$,
+\[
+ F(x, t) = \delta(x) H(t) e^{-i\omega_f t},
+\]
+where $H$ is the Heaviside step function. The response in spectral space is then
+\[
+ \tilde{\chi}(k, \omega) = \frac{\tilde{F}(k, \omega)}{D(k, \omega)} = \frac{i}{D(k, \omega)(\omega - \omega_f)}.
+\]
+There is a new (simple) pole precisely at $\omega = \omega_f$ on the real axis. We can invert for $t$ to obtain
+\[
 \tilde{\chi}(k, t) = \frac{i}{2\pi} \int_{L_\omega} \frac{e^{-i\omega t}}{D(k, \omega) (\omega - \omega_f)}\;\d \omega = \frac{e^{-i\omega_f t}}{D(k, \omega_f)} + \frac{e^{-i\omega(k) t}}{(\omega(k) - \omega_f) \frac{\partial \tilde{D}}{\partial \omega}[k, \omega(k)]}.
+\]
+We can then invert for $x$ to obtain
+\[
 \chi(x, t) = \underbrace{\frac{e^{-i\omega_f t}}{2\pi} \int_{F_k} \frac{e^{ikx}}{D(k, \omega_f)}\;\d k}_{\chi_F(x, t)} + \underbrace{\frac{1}{2\pi} \int_{F_k} \frac{e^{i[kx - \omega(k) t]}}{[\omega(k) - \omega_f] \frac{\partial \tilde{D}}{\partial \omega} [k, \omega(k)]}\;\d k}_{\chi_T(x, t)}.
+\]
The second term is associated with the switch-on transients, and is very similar to what we obtained previously via steepest descent.
+
If the flow is absolutely unstable, then the transients dominate and invade the entire domain. But if the flow is convectively unstable, then this term is swept away, and all that is left is $\chi_F(x, t)$. This gives us a way to distinguish between the two types of instability.
+
+Note that in the forcing term, we get contributions at the singularities $k^{\pm}(\omega_f)$. Which one contributes depends on causality. One then checks that the correct result is
+\[
+ \chi_F(x, t) = iH(x) \frac{e^{i(k^+(\omega_f)x - \omega_f t)}}{\frac{\partial D}{\partial k} [k^+(\omega_f), \omega_f]} - i H(-x) \frac{e^{i(k^-(\omega_f) x - \omega_f t)}}{\frac{\partial D}{\partial k} [k^-(\omega_f), \omega_f]}.
+\]
+Note that $k^{\pm}(\omega_f)$ may have imaginary parts! If there is some $\omega_f$ such that $-k_i^+(\omega_f) > 0$, then we will see spatially growing waves in $x > 0$. Similarly, if there exists some $\omega_f$ such that $-k_i^-(\omega_f) < 0$, then we see spatially growing waves in $x < 0$.
+
+Note that we will see this effect only when we are convectively unstable.
+
+%We can plot the GLE branches of $k^+(\omega)$ and $k^-(\omega)$ branches, which are valid for the convectively unstable regime
+%\[
+% 0 \leq \mu \leq \frac{U^2}{4 (1 + c_d^2)}.
+%\]
+%% insert picture
+%
Perhaps it is wise to apply these ideas to an actual fluid dynamics problem. We revisit the broken line shear layer profile, scaled with the velocity jump, but now allowing a non-zero mean velocity $U_m$:
+\begin{center}
+ \begin{tikzpicture}
 \draw [dashed] (-2, -1) -- (2, -1) node [right] {$z = -1$};

 \draw [dashed] (-2, 1) -- (2, 1) node [right] {$z = 1$};
+ \draw (-1, -3) -- (-1, -1) node [pos=0.5, right] {$\bar{U} = -1 + U_m$} -- (1, 1) node [pos=0.5, right] {$\bar{U} = z + U_m$} -- (1, 3) node [pos=0.5, right] {$\bar{U} = 1 + U_m$};
+ \node at (-2, -2) {III};
+ \node at (-2, 0) {II};
+ \node at (-2, 2) {I};
+
+ \node [right] at (3.5, 2) {$Ae^{-\alpha(z - 1)}$};
+ \node [right] at (3.5, 0) {$Be^{\alpha z} + Ce^{-\alpha z}$};
+ \node [right] at (3.5, -2) {$D e^{\alpha (z + 1)}$};
+ \end{tikzpicture}
+\end{center}
+We do the same interface matching conditions, and after doing the same computations (or waving your hands with Galilean transforms), we get the dispersion relation
+\[
+ 4 (\omega - U_m \alpha)^2 = (2\alpha - 1)^2 - e^{-4\alpha}.
+\]
+It is now more natural to scale with $U_m$ rather than $\Delta U/2$, and this involves expressing everything in terms of the velocity ratio $R = \frac{\Delta U}{2 U_m}$. Then we can write the dispersion relation as
+\[
+ D(k, \omega; R) = 4(\omega - k)^2 - R^2[(2k - 1)^2 - e^{-4k}] = 0.
+\]
+Note that under this scaling, the velocity for $z < -1$ is $u = 1 - R$. In particular, if $R < 1$, then all of the fluid is flowing in the same direction, and we might expect the flow to ``carry away'' the perturbations, and this is indeed true.
+
The absolute/convective boundary is given by the frequency at the wavenumber of zero group velocity:
\[
 \frac{\partial \omega}{\partial k}(k_0) = 0.
\]
+This gives us
+\[
+ \omega_0 = k_0 - \frac{R^2}{2} [2k_0 - 1 + e^{-4k_0}].
+\]
+Plugging this into the dispersion relations, we obtain
\[
 R^2 [2k_0 - 1 + e^{-4k_0}]^2 - [(2k_0 - 1)^2 - e^{-4k_0}] = 0.
\]
We solve for $k_0$, which is in general \emph{complex}, and then substitute back into the dispersion relation to see if $\omega_{0, i} > 0$. This has to be done numerically, and when we do, we find that the convective/absolute boundary occurs precisely at $R = 1$.
+
+\subsubsection*{Gaster relation}
+Temporal and spatial instabilities are related close to a marginally stable state. This is flow at a critical parameter $R_c$ with critical (real) wavenumber and frequency: $D(k_c, \omega_c; R_c) = 0$ with $\omega_{c, i} = k_{c, i} = 0$.
+
+We can Taylor expand the dispersion relation
+\[
+ \omega = \omega_c + \frac{\partial \omega}{\partial k} (k_c; R_c) [k - k_c].
+\]
+We take the imaginary part
+\[
+ \omega_i = \frac{\partial \omega_i}{\partial k_r} (k_c, R_c) (k_r - k_c) + \frac{\partial \omega_r}{\partial k_r}(k_c, R_c)k_i.
+\]
+For the temporal mode, we have $k_i = 0$, and so
+\[
+ \omega_i^{(T)} = \frac{\partial \omega_i}{\partial k_r}(k_c, R_c)(k_r - k_c).
+\]
+For the spatial mode, we have $\omega_i = 0$, and so
+\[
+ 0 = \frac{\partial \omega_i}{\partial k_r}(k_c, R_c)[k_r - k_c] + \frac{\partial \omega_r}{\partial k_r} (k_c, R_c)k_i^{(S)}.
+\]
Remembering that $c_g = \frac{\partial \omega_r}{\partial k_r}$, we find that
+\[
+ \omega_i^{(T)} = - c_g k_i^{(S)}.
+\]
+This gives us a relation between the growth rates of the temporal mode and spatial mode when we are near the marginal stable state.
+
+Bizarrely, this is often a good approximation when we are \emph{far} from the marginal state.
+
+\subsubsection*{Global instabilities}
+So far, we have always been looking at flows that were parallel, i.e.\ the base flow depends on $z$ alone. However, in real life, flows tend to evolve downstream. Thus, we want to consider base flows $U = U(x, z)$.
+
+Let $\lambda$ be the characteristic wavelength of the perturbation, and $L$ be the characteristic length of the change in $U$. If we assume $\varepsilon \sim \frac{\lambda}{L} \ll 1$, then we may want to perform some local analysis.
+
+To leading order in $\varepsilon$, evolution is governed by \emph{frozen} dispersion relation at each $X = \varepsilon x$. We can then extend notions of stability/convective/absolute to local notions, e.g.\ a flow is locally convectively unstable if there is some $X$ such that $\omega_{i, max}(X) > 0$, but $\omega_{0, i}(X) < 0$ for all $X$.
+
However, we can certainly imagine some complicated interactions between the different regions. For example, a perturbation upstream may be swept downstream by the flow, and then get ``stuck'' somewhere down there. In general, we can define
+
+%% discuss patch of instability in locally absolutely unstable
+%
+%The WKBJ analysis can follow our previous approach. We can consider the impulse resupose
+%\[
+% \left[ D (-\partial_x, i \partial_t; X) + \varepsilon \D_\varepsilon (-i \partial_x, i \partial_t; X)\right] G(x, t) = \delta(x) \delta(t),
+%\]
+%where $X = \varepsilon x$.
+%
+%Now $X$ is not frozen, and we can define new concepts of stability:
+\begin{defi}[Global stability]\index{global stability}
+ A flow is \emph{globally stable} if $\lim_{t \to \infty} G(x, t) = 0$ for all $x$.
+
 A flow is \emph{globally unstable} if there is some $x$ such that $G(x, t) \to \infty$ as $t \to \infty$.
+\end{defi}
+
+For a steady spatially developing flow, global modes are
+\[
+ \chi(x, t) = \phi(x) e^{-i \omega_G t}.
+\]
+The complex global frequency $\omega_G$ is determined analogously to before using complex integration.
+
+It can be established that $\omega_{G, i} \leq \omega_{0, i, max}$. This then gives a necessary condition for global instability: there must be a region of local absolute instability within the flow.
+
Sometimes this is a good predictor, i.e.\ $R_{G_c} \simeq R_t$. For example, with mixing layers, we have $R_t = 1.315$ while $R_{G_c} = 1.34$. On the other hand, it is sometimes poor. For example, for bluff-body wakes, we have $Re_t = 25$ while $Re_{G_c} = 48.5$.
+
+\section{Transient growth}
+\subsection{Motivation}
So far, our model of stability is quite simple. We linearize our theory, look at the individual perturbation modes, and say the system is unstable if there is exponential growth. In certain circumstances, it works quite well. In others, it is just completely wrong.
+
+There are six billion kilometers of pipes in the United States alone, where the flow is turbulent. A lot of energy is spent pumping fluids through these pipes, and turbulence is not helping. So we might think we should try to understand flow in a pipe mathematically, and see if it gives ways to improve the situation.
+
+Unfortunately, we can prove that flow in a pipe is linearly stable for all Reynolds numbers. The analysis is not \emph{wrong}. The flow is indeed linearly stable. The real problem is that linear stability is not the right thing to consider.
+
+%In reality, we find that turbulence starts to occur for
+%\[
+% Re = \frac{QD}{\nu A} \sim 2000.
+%\]
+%If we On the other hand, this can be avoided for a quiet flow for $Re \sim 10^5$. What is going on?
+
+We get similar issues with plane Poiseuille flow, i.e.\ a pressure driven flow between horizontal plates. As we know from IB Fluids, the flow profile is a parabola:
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \fill [black] (0, 0) rectangle (6, -0.15);
+ \fill [black] (0, 1) rectangle (6, 1.15);
+ \fill [opacity=0.3, mblue] (0, 0) rectangle (6, 1);
+
+ \foreach \x in {0.14, 0.26, 0.38, 0.5, 0.62, 0.74, 0.86} {
+ \pgfmathsetmacro\len{\x * (1 - \x)}
+ \draw [-latex'] (2, \x) -- +(3 * \len, 0);
+ };
+ \draw [rotate=270, yscale=-1, shift={(-3,-3)}] (2.5, 0.25) parabola (2, 1);% magic
+ \draw [rotate=90, shift={(-2,-3)}] (2.5, 0.25) parabola (2, 1);
+ \draw [dashed] (2, 0) -- (2, 1);
+ \end{tikzpicture}
+\end{center}
+One can analyze this and prove that it is linearly unstable when
+\[
+ Re = \frac{U_c d}{\nu} > 5772.
+\]
+However, it is observed to be unstable at much lower $Re$.
+
We have an even more extreme issue for plane Couette flow. This is flow between two plates at $z = \pm 1$ driven at speeds of $\pm 1$ (after rescaling). Thus, the base flow is given by
+\[
+ \bar{U} = z,\quad |z| \leq 1.
+\]
+Assuming the fluid is inviscid, the Rayleigh equation then tells us perturbations obey
+\[
+ \left[ (\bar{U} - c) \left(\frac{\d^2}{\d z^2} - k^2\right) - \frac{\d^2}{\d z^2} \bar{U} \right] \hat{w} = 0.
+\]
+Since $\bar{U} = z$, the second derivative term drops out and this becomes
+\[
+ (\bar{U} - c) \left(\frac{\d^2}{\d z^2} - k^2\right) \hat{w} = 0.
+\]
+If we want our solution to be smooth, or even just continuously differentiable, then we need $\left(\frac{\d^2}{\d z^2} - k^2\right) \hat{w} = 0$. So the solution is of the form
+\[
+ \hat{w} = A \sinh k(z + 1) + B \sinh k(z - 1).
+\]
However, to satisfy the boundary conditions $\hat{w}(\pm 1) = 0$, we must have $A = B = 0$, i.e.\ $\hat{w} = 0$.
+
+Of course, it is \emph{not} true that there can be no perturbations. Instead, we have to relax the requirement that the eigenfunction is smooth. We shall allow it to be non-differentiable at certain points, but still require that it is continuous (alternatively, we relax differentiability to weak differentiability).
+
+The fundamental assumption that the eigenfunction is smooth must be relaxed. Let's instead consider a solution of the form
+\begin{align*}
+ \hat{w}_+ &= A_+ \sinh k (z - 1) & z &> z_c\\
+ \hat{w}_- &= A_- \sinh k(z + 1) & z &< z_c.
+\end{align*}
+If we require the vertical velocity to be continuous at the critical layer, then we must have the matching condition
+\[
 A_+ \sinh k (z_c - 1) = A_- \sinh k(z_c + 1).
+\]
+This still satisfies the Rayleigh equation if $\bar{U} = c$ at the critical layer. Note that $u$ is discontinuous at the critical layer, because incompressibility requires
+\[
+ \frac{\partial w}{\partial z} = - \frac{\partial u}{\partial x} = -iku.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \fill [black] (0, 0) rectangle (4, -0.2);
+ \fill [black] (0, 2) rectangle (4, 2.2);
+ \fill [opacity=0.3, mblue] (0, 0) rectangle (4, 2);
+
+ \draw [dashed] (1, 0) -- (1, 2);
+
+ \draw (2.5, 1.5) parabola (1, 0);
+ \draw (2.5, 1.5) parabola (1, 2);
+
+ \draw [dashed] (2.5, 1.5) -- (1, 1.5) node [left] {\small$c$};
+ \end{tikzpicture}
+\end{center} % improve this picture
+So for all $|c| = |\omega/k| < 1$, there is a (marginally stable) mode. The spectrum is \emph{continuous}. There is no discrete spectrum. This is quite weird, compared to what we have previously seen.
+
But still, we have only found modes with a real $c$, since $\bar{U}$ is real! Thus, we conclude that inviscid plane Couette flow is stable! Viscosity regularizes the flow, but it turns out this does not linearly destabilize the flow at \emph{any} Reynolds number (Romanov, 1973).
+
+Experimentally, and numerically, plane Couette flow is known to exhibit a rich array of dynamics.
+\begin{itemize}
+ \item Up to $Re \approx 280$, we have laminar flow.
+ \item Up to $Re \approx 325$, we have transient spots.
+ \item Up to $Re \approx 415$, we have sustained spots and stripes.
+ \item For $Re > 415$, we have fully-developed turbulence.
+\end{itemize}
+In this chapter, we wish to understand transient growth. This is the case when small perturbations can grow up to some visible, significant size, and then die off.
+
+\subsection{A toy model}
+Let's try to understand transient dynamics in a finite-dimensional setting. Ultimately, the existence of transient growth is due to the \emph{non-normality} of the operator.
+
+Recall that a matrix $A$ is \emph{normal}\index{normal operator} iff $A^\dagger A = A A^\dagger$. Of course, self-adjoint matrices are examples of normal operators. The spectral theorem says a normal operator has a complete basis of orthonormal eigenvectors, which is the situation we understand well. However, if our operator is not normal, then we don't necessarily have a basis of eigenvectors, and even if we do, they need not be orthonormal.
+
+So suppose we are in a $2$-dimensional world, and we have two eigenvectors that are very close to each other:
+\begin{center}
+ \begin{tikzpicture}[xscale=1.5]
+ \draw [-latex'] (0, 0) -- (0, 2) node [pos=0.5, right] {$\Phi_2$};
+ \draw [-latex'] (0, 0) -- (-0.3, 2) node [pos=0.5, left] {$\Phi_1$};
+
+ \draw [-latex', mred, semithick] (0, 0) -- (0.3, 0);
+ \end{tikzpicture}
+\end{center}
+Now suppose we have a small perturbation given by $\boldsymbol\varepsilon = (\varepsilon, 0)^T$. While this perturbation is very small in magnitude, if we want to expand this in the basis $\Phi_1$ and $\Phi_2$, we must use coefficients that are themselves quite large. In this case, we might have $\boldsymbol\varepsilon = \Phi_2 - \Phi_1$, as indicated in red in the diagram above.
+
+Let's let this evolve in time. Suppose both $\Phi_1$ and $\Phi_2$ are stable modes, but $\Phi_1$ decays much more quickly than $\Phi_2$. Then after some time, the perturbation will grow like
+\begin{center}
+ \begin{tikzpicture}[xscale=1.5]
+ \draw [-latex'] (0, 0) -- (0, 1.8);
+ \draw [-latex'] (0, 0) -- (-0.15, 1);
+
+ \draw [-latex', mred, semithick] (0, 0) -- (0.15, 0.8);
+ \draw [-latex', mred, opacity=0.5] (0, 0) -- (0.22, 0.5);
+ \draw [-latex', mred, opacity=0.5] (0, 0) -- (0.28, 0.2);
+ \draw [-latex', mred, opacity=0.5] (0, 0) -- (0.3, 0);
+ \end{tikzpicture}
+\end{center}
Note that the perturbation grows to reach a finite, large size, before it eventually decays again as the $\Phi_2$ component also dies off.
+
+Let's try to put this down more concretely in terms of equations. We shall consider a linear ODE of the form
+\[
+ \dot{\mathbf{x}} = A \mathbf{x}.
+\]
+We first begin by considering the matrix
+\[
+ A =
+ \begin{pmatrix}
+ 0 & 1\\
+ 0 & 0
+ \end{pmatrix},
+\]
+which does \emph{not} exhibit transient growth. We first check that $A$ is not normal. Indeed,
+\[
 AA^T =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 0
+ \end{pmatrix} \not=
+ \begin{pmatrix}
+ 0 & 0\\
+ 0 & 1
+ \end{pmatrix} = A^T A.
+\]
Note that the matrix $A$ has a repeated eigenvalue of $0$ with a \emph{single} eigenvector $(1\; 0)^T$.
+
+To solve this system, write the equations more explicitly as
+\begin{align*}
+ \dot{x}_1 &= x_2\\
 \dot{x}_2 &= 0.
+\end{align*}
+If we impose the initial condition
+\[
+ (x_1, x_2)(0) = (x_{10}, x_{20}),
+\]
+then the solution is
+\begin{align*}
+ x_1(t) &= x_{20}t + x_{10}\\
+ x_2(t) &= x_{20}.
+\end{align*}
+This exhibits linear, algebraic growth instead of the familiar exponential growth.
+
+Let's imagine we perturb this system slightly. We can think of our previous system as the $Re = \infty$ approximation, and this small perturbation as the effect of a large but finite Reynolds number. For $\varepsilon > 0$, set
+\[
+ A_\varepsilon =
+ \begin{pmatrix}
+ -\varepsilon & 1\\
+ 0 & -2\varepsilon
+ \end{pmatrix}.
+\]
+We then have eigenvalues
+\begin{align*}
+ \lambda_1 &= -\varepsilon\\
+ \lambda_2 &= -2\varepsilon,
+\end{align*}
+corresponding to eigenvectors
+\[
+ \mathbf{e}_1 =
+ \begin{pmatrix}
+ 1 \\0
+ \end{pmatrix},\quad \mathbf{e}_2 =
+ \begin{pmatrix}
 1 \\ -\varepsilon
+ \end{pmatrix}.
+\]
+Notice that the system is now stable. However, the eigenvectors are very close to being parallel.
+
As we previously discussed, if we have an initial perturbation $(0, \varepsilon)$, then it is expressed in this basis as $\mathbf{e}_1 - \mathbf{e}_2$. As we evolve this in time, the $\mathbf{e}_2$ component decays more quickly than $\mathbf{e}_1$. So after some time, the $\mathbf{e}_2$ term is mostly gone, and what is left is a finite multiple of $\mathbf{e}_1$, and generically, we expect this to have magnitude larger than $\varepsilon$!
+
+We try to actually solve this. The second row in $\dot{\mathbf{x}} = A\mathbf{x}$ gives us
+\[
+ \dot{x}_2 = -2\varepsilon x_2.
+\]
+This is easy to solve to get
+\[
+ x_2 = x_{20} e^{-2\varepsilon t}.
+\]
+Plugging this into the first equation, we have
+\[
+ \dot{x}_1 = -\varepsilon x_1 + x_{20} e^{-2\varepsilon t}.
+\]
+We would expect a solution of the form $x_1 = A e^{-\varepsilon t} - B e^{-2 \varepsilon t}$, where the first term is the homogeneous solution and the second comes from a particular solution. Plugging this into the equation and applying our initial conditions, we need
+\[
+ x_1 = \left(x_{10} + \frac{x_{20}}{\varepsilon}\right) e^{-\varepsilon t} - \frac{x_{20}}{\varepsilon} e^{-2 \varepsilon t}.
+\]
+Let us set
+\begin{align*}
+ y_{10} &= x_{10} + \frac{x_{20}}{\varepsilon}\\
+ y_{20} &= -\frac{x_{20}}{\varepsilon}.
+\end{align*}
+Then the full solution is
+\[
+ \mathbf{x} = y_{10} e^{-\varepsilon t} \mathbf{e}_1 + y_{20} e^{-2\varepsilon t} \mathbf{e}_2.
+\]
For $\varepsilon t \gg 1$, we know that $\mathbf{x} \sim y_{10} e^{-\varepsilon t} \mathbf{e}_1$. So our solution is an exponentially decaying solution.
+
+But how about early times? Let's consider the magnitude of $x$. We have
+\begin{align*}
 \|\mathbf{x}\|^2 &= y_{10}^2 e^{-2 \varepsilon t} \mathbf{e}_1 \cdot \mathbf{e}_1 + 2y_{10} y_{20} e^{-3\varepsilon t} \mathbf{e}_1 \cdot \mathbf{e}_2 + y_{20}^2 e^{-4\varepsilon t} \mathbf{e}_2 \cdot \mathbf{e}_2\\
+ &= y_{10}^2 e^{-2\varepsilon t} + 2y_{10} y_{20} e^{-3 \varepsilon t} + (1 + \varepsilon^2) y_{20}^2 e^{-4 \varepsilon t}.
+\end{align*}
+If $y_{10} = 0$ or $y_{20} = 0$, then this corresponds to pure exponential decay.
+
+Thus, consider the situation where $y_{20} = - a y_{10}$ for $a \not =0$. Doing some manipulations, we find that
+\[
+ \|\mathbf{x}\|^2 = y_{10}^2 (1 - 2a + a^2 (1 + \varepsilon^2)) + y_{10}^2 (-2 + 6a - 4a^2(1 + \varepsilon^2))\varepsilon t + O(\varepsilon^2 t^2).
+\]
+Therefore we have initial growth if
+\[
 4a^2 (1 + \varepsilon^2) - 6a + 2 < 0.
+\]
+Equivalently, if
+\[
+ a_- < a < a_+
+\]
+with
+\[
+ a_{\pm} = \frac{3 \pm \sqrt{1 - 8\varepsilon^2}}{4(1 + \varepsilon^2)}.
+\]
+Expanding in $\varepsilon$, we have
+\begin{align*}
+ a_+ &= 1 - 2 \varepsilon^2 + O(\varepsilon^4),\\
+ a_- &= \frac{1 + \varepsilon^2}{2} + O(\varepsilon^4).
+\end{align*}
These correspond to $x_{20} \simeq \frac{x_{10}}{2\varepsilon}$ and $x_{20} \simeq \varepsilon x_{10}$ respectively, for $\varepsilon \ll 1$. This is interesting, since in the first case, we have $x_{20} \gg x_{10}$, while in the second case, we have the opposite. So this covers a wide range of possible $x_{10}, x_{20}$.
+
+What is the best initial condition to start with if we want the largest possible growth? Let's write everything in terms of $a$ for convenience, so that the energy is given by
+\[
+ E = \frac{\mathbf{x} \cdot \mathbf{x}}{2} = \frac{y_{10}^2}{2} (e^{-2\varepsilon t} - 2 a e^{-3 \varepsilon t} + a^2(1 + \varepsilon^2) e^{-4\varepsilon t}).
+\]
+Take the time derivative of this to get
+\[
+ \frac{\d E}{\d t} = - \varepsilon \frac{y_{10}^2}{2} e^{-2\varepsilon t} (2 - 6 (a e^{-\varepsilon t}) + 4 (1 + \varepsilon^2) (ae^{-\varepsilon t})^2).
+\]
+Setting $\hat{a} = ae^{-\varepsilon t}$, we have $\frac{\d E}{\d t} > 0$ iff
+\[
+ 2 - 6 \hat{a} + 4 \hat{a}^2 (1 + \varepsilon^2) < 0.
+\]
+When $t = 0$, then $\hat{a} = a$, and we saw that there is an initial growth if $a_- < a < a_+$. We now see that we continue to have growth as long as $\hat{a}$ lies in this region. To see when $E$ reaches a maximum, we set $\frac{\d E}{\d t} = 0$, and so we have
+\[
+ (a_- - ae^{-\varepsilon t})(ae^{-\varepsilon t} - a_+) = 0.
+\]
+So a priori, this may happen when $\hat{a} = a_-$ or $a_+$. However, we know it must occur when $\hat{a}$ hits $a_-$, since $\hat{a}$ is decreasing with time. Call the time when this happens $t_{max}$, which we compute to be given by
+\[
+ \varepsilon t_{max} = \log \frac{a}{a_-}.
+\]
+Now consider the energy gain
+\[
 G = \frac{E(t_{max})}{E(0)} = \frac{a_-^2}{a^2} \frac{a_-^2(1 + \varepsilon^2) - 2a_- + 1}{a^2 (1 + \varepsilon^2) - 2a + 1}.
+\]
+Setting $\frac{\d G}{\d a} = 0$, we find that we need $a = a_+$, and so
+\[
+ \max_a G = \frac{(3a_- - 1)(1 - a_-)}{(3a_+ - 1)(1 - a_+)}.
+\]
+% use time translation invariance to argue this must be the case.
+
We can try to explicitly compute the value of the maximum energy gain, using our first-order approximations to $a_{\pm}$:
\[
 \max_a G = \frac{\left(3 \left(\frac{1 + \varepsilon^2}{2}\right) - 1\right)\left(1 - \frac{1 + \varepsilon^2}{2}\right)}{(3(1 - 2\varepsilon^2) - 1)(1 - (1 - 2 \varepsilon^2))} = \frac{(1 + 3 \varepsilon^2)(1 - \varepsilon^2)}{16(1 - 3 \varepsilon^2)\varepsilon^2} \sim \frac{1}{16 \varepsilon^2}.
+\]
+So we see that we can have very large transient growth for a small $\varepsilon$.
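We can verify this scaling numerically. The sketch below (using SciPy's matrix exponential; the value of $\varepsilon$ and the time grid are arbitrary choices) scans the gain $G(t) = \|e^{A_\varepsilon t}\|^2$ over time; the maximum should be close to $1/(16\varepsilon^2)$, attained near $\varepsilon t = \log 2$:

```python
import numpy as np
from scipy.linalg import expm

eps = 1e-3
A = np.array([[-eps, 1.0], [0.0, -2 * eps]])

# scan the gain G(t) = ||e^{At}||^2 (squared spectral norm) over time
ts = np.linspace(0.0, 2000.0, 2001)
G = np.array([np.linalg.norm(expm(A * t), 2) ** 2 for t in ts])

G_max = G.max()
t_max = ts[G.argmax()]
print(G_max, 1 / (16 * eps**2))      # these should nearly coincide
print(t_max, np.log(2) / eps)        # time of maximum gain
```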
+
+How about the case where there is an unstable mode? We can consider a perturbation of the form
+\[
+ A =
+ \begin{pmatrix}
+ \varepsilon_1 & 1\\
+ 0 & -\varepsilon_2
+ \end{pmatrix},
+\]
+with eigenvectors
+\[
+ \mathbf{e}_1 =
+ \begin{pmatrix}
+ 1 \\0
+ \end{pmatrix},\quad
+ \mathbf{e}_2 = \frac{1}{\sqrt{1 + (\varepsilon_1 + \varepsilon_2)^2}}
+ \begin{pmatrix}
+ 1\\
+ -(\varepsilon_1 + \varepsilon_2)
+ \end{pmatrix}
+\]
+We then have one growing mode and another decaying mode. Again, we have two eigenvectors that are very close to being parallel. We can do very similar computations, and see that what this gives us is the possibility of a large initial growth despite the fact that $\varepsilon_1$ is very small. In general, this growth scales as $\frac{1}{1 - \mathbf{e}_1 \cdot \mathbf{e}_2}$.
+
+\subsection{A general mathematical framework}
+Let's try to put this phenomenon in a general framework, which would be helpful since the vector spaces we deal with in fluid dynamics are not even finite-dimensional. Suppose $\mathbf{x}$ evolves under an equation
+\[
+ \dot{\mathbf{x}} = A \mathbf{x}.
+\]
+Given an inner product $\bra \ph, \ph\ket$ on our vector space, we can define the \term{adjoint} of $A$ by requiring
+\[
+ \bra \mathbf{x}, A\mathbf{y}\ket = \bra A^\dagger \mathbf{x}, \mathbf{y}\ket
+\]
for all $\mathbf{x}, \mathbf{y}$. To see why we should care about the adjoint, note that in our previous example, the optimal perturbation came from an eigenvector of $A^\dagger$ for $\lambda_1 = \varepsilon_1$, namely $\begin{pmatrix} \varepsilon_1 + \varepsilon_2 \\ 1 \end{pmatrix}$, and we might conjecture that this is a general phenomenon.
+
+For our purposes, we assume $A$ and hence $A^\dagger$ have a basis of eigenvectors. First of all, observe that the eigenvalues of $A$ are the complex conjugates of the eigenvalues of $A^\dagger$. Indeed, let $\mathbf{v}_1, \ldots, \mathbf{v}_n$ be a basis of eigenvectors of $A$ with eigenvalues $\lambda_1, \ldots, \lambda_n$, and $\mathbf{w}_1, \ldots, \mathbf{w}_n$ a basis of eigenvectors of $A^\dagger$ with eigenvalues $\mu_1, \ldots, \mu_n$. Then we have
+\[
+ \lambda_j \bra \mathbf{w}_i, \mathbf{v}_j\ket = \bra \mathbf{w}_i, A \mathbf{v}_j\ket = \bra A^\dagger \mathbf{w}_i, \mathbf{v}_j\ket = \mu_i^* \bra \mathbf{w}_i, \mathbf{v}_j\ket.
+\]
+But since the inner product is non-degenerate, for each $i$, it cannot be that $\bra \mathbf{w}_i, \mathbf{v}_j\ket = 0$ for all $j$. So there must be some $j$ such that $\lambda_j = \mu_i^*$.
+
By picking appropriate bases for each eigenspace, we can arrange the eigenvectors so that $\bra \mathbf{w}_i, \mathbf{v}_j\ket = 0$ unless $i = j$, and $\|\mathbf{w}_i\| = \|\mathbf{v}_i\| = 1$. This is the \term{biorthogonality property}. Crucially, the basis is \emph{not} orthonormal.
+
Now suppose we are given an initial condition $\mathbf{x}_0$, and we want to solve the equation $\dot{\mathbf{x}} = A\mathbf{x}$. Note that this is trivial to solve if $\mathbf{x}_0 = \mathbf{v}_j$ for some $j$. Then the solution is simply
+\[
+ \mathbf{x} = e^{\lambda_j t} \mathbf{v}_j.
+\]
+Thus, for a general $\mathbf{x}_0$, we should express $\mathbf{x}_0$ as a linear combination of the $\mathbf{v}_j$'s. If we want to write
+\[
 \mathbf{x}_0 = \sum_{i = 1}^n \alpha_i \mathbf{v}_i,
+\]
+then using the biorthogonality condition, we should set
+\[
+ \alpha_i = \frac{\bra \mathbf{x}_0, \mathbf{w}_i\ket}{\bra \mathbf{v}_i, \mathbf{w}_i\ket}.
+\]
+Note that since we normalized our eigenvectors so that each eigenvector has norm $1$, the denominator $\bra \mathbf{v}_i, \mathbf{w}_i\ket$ can be small, hence $\alpha_i$ can be large, even if the norm of $\mathbf{x}_0$ is quite small, and as we have previously seen, this gives rise to transient growth if the eigenvalue of $\mathbf{v}_i$ is also larger than the other eigenvalues.
+
+In our toy example, our condition for the existence of transient growth is that we have two eigenvectors that are very close to each other. In this formulation, the requirement is that $\bra \mathbf{v}_i, \mathbf{w}_i\ket$ is very small, i.e.\ $\mathbf{v}_i$ and $\mathbf{w}_i$ are close to being orthogonal. But these are essentially the same, since by the biorthogonality conditions, $\mathbf{w}_i$ is normal to all other eigenvectors of $A$. So if there is some eigenvector of $A$ that is very close to $\mathbf{v}_i$, then $\mathbf{w}_i$ must be very close to being orthogonal to $\mathbf{v}_i$.
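A small numerical illustration of the biorthogonal expansion (a sketch: the standard inner product, the value of $\varepsilon$ and the choice of $\mathbf{x}_0$ are assumptions of this example). For the earlier matrix $A_\varepsilon$, the adjoint under the standard inner product is simply the transpose, and the expansion coefficients of a tiny perturbation are far larger than its norm:

```python
import numpy as np

eps = 0.01
A = np.array([[-eps, 1.0], [0.0, -2 * eps]])

# eigenvectors of A and of its adjoint (here simply the transpose)
lam, V = np.linalg.eig(A)
mu, W = np.linalg.eig(A.T)

# match w_i to v_i by eigenvalue so that <w_i, v_j> = 0 for i != j
order = [int(np.argmin(abs(mu - l))) for l in lam]
W = W[:, order]

x0 = np.array([0.0, eps])               # a tiny perturbation

# expansion coefficients  alpha_i = <x0, w_i> / <v_i, w_i>
alpha = np.array([W[:, i] @ x0 / (W[:, i] @ V[:, i]) for i in range(2)])

# the coefficients are O(1) even though ||x0|| is O(eps)
print(alpha, np.linalg.norm(x0))
```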
+
+%In our actual fluid dynamics problems, our spaces are infinite-dimensional. Note that if we want to sensibly talk about the ``size'' of the growth, and eigenvectors being ``close to parallel'', it is crucial that we have fixed an inner product on our space of solutions. We may get completely different answers if we picked a different inner product.
+%
+%Now given an inner product $\bra \ph, \ph\ket$, we can define the \term{adjoint} of any operator $A$ by requiring
+%\[
+% \bra x, Ay\ket = \bra A^\dagger x, y\ket
+%\]
+%for all $x, y$. For a real matrix with real eigenvalues, $A^\dagger$ and $A$ share eigenvalues (proof: put in upper triangular form, and note that the diagonal is invariant under transpose). If we look at what we had previously, we see that the optimal perturbation comes from an eigenvector of $A^\dagger$ for $\lambda_1 = \varepsilon$, namely
+%\[
+% \begin{pmatrix}
+% \varepsilon_1 + \varepsilon_2 \\ 1
+% \end{pmatrix}
+%\]
+%We observe that this is orthogonal to $\mathbf{e}_2$. Indeed, eigenvectors of $A^\dagger$ and $A$ are orthogonal to each other if they do not have conjugate eigenvalues, and the proof is exactly the same as the case where $A$ is self-adjoint.
+%
+%With these in mind, let's go and actually do some fluids.
+
+Now assuming we have transient growth, the natural question to ask is how large this growth is. We can write the solution to the initial value problem as
+\[
+ \mathbf{x}(t) = e^{At} \mathbf{x}_0.
+\]
+The maximum gain at time $t$ is given by
+\[
 G(t) = \max_{\mathbf{x}_0 \not= 0} \frac{\|\mathbf{x}(t)\|^2}{\|\mathbf{x}_0\|^2} = \max_{\mathbf{x}_0 \not= 0} \frac{\|e^{At} \mathbf{x}_0\|^2}{\|\mathbf{x}_0\|^2}.
+\]
This is, by definition, the square of the matrix norm of $e^{At}$.
+
+\begin{defi}[Matrix norm]\index{matrix norm}
+ Let $B$ be an $n \times n$ matrix. Then the \emph{matrix norm} is
+ \[
+ \|B\| = \max_{\mathbf{v} \not= 0} \frac{\|B\mathbf{v}\|}{\|\mathbf{v}\|}.
+ \]
+\end{defi}
+
+To understand the matrix norm, we may consider the eigenvalues of the matrix. Order the eigenvalues of $A$ by their real parts, so that $\Re(\lambda_1) \geq \cdots \geq \Re(\lambda_n)$. Then the gain is clearly bounded below by
+\[
+ G(t) \geq e^{2 \Re(\lambda_1)t},
+\]
+achieved by the associated eigenvector.
+
If $A$ is normal, then this is the complete answer. We know the eigenvectors form an orthonormal basis, so we can write $A = V \Lambda V^{-1}$ for $\Lambda = \diag(\lambda_1,\ldots, \lambda_n)$ and $V$ unitary. Then we have
\[
 G(t) = \|e^{At}\|^2 = \|V e^{\Lambda t} V^{-1}\|^2 = \|e^{\Lambda t}\|^2 = e^{2 \Re (\lambda_1) t}.
\]
+But life gets enormously more complicated when matrices are non-normal. As a simple example, the matrix $\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}$ has $1$ as the unique eigenvalue, but applying it to $(0, 1)$ results in a vector of length $\sqrt{2}$.
+
+In the non-normal case, it would still be convenient to be able to diagonalize $e^{At}$ in some sense, so that we can read off its norm. To do so, we must relax what we mean by diagonalizing. Instead of finding a $U$ such that $U^\dagger e^{At} U$ is diagonal, we find unitary matrices $U, V$ such that
+\[
+ U^\dagger e^{At} V = \Sigma = \diag(\sigma_1, \ldots, \sigma_n),
+\]
+where $\sigma_i \in \R$ and $\sigma_1 > \cdots > \sigma_n \geq 0$. We can always do this. This is known as the \term{singular value decomposition}, and the diagonal entries $\sigma_i$ are called the \term{singular values}. We then have
+%
+%Recall the \term{condition number}
+%\[
+% \kappa(A) = \|A\| \|A^{-1}\|.
+%\]
+%Singular values and singular vectors air pairs of vectors $(\mathbf{v}, \mathbf{u})$ such that
+%\[
+% A\mathbf{v} = \sigma \mathbf{u},\quad A^\dagger \mathbf{u} = \sigma \mathbf{v}.
+%\]
+%Then we have
+%\[
+% A^\dagger A \mathbf{v} = \sigma^2 \mathbf{v},\quad AA^\dagger \mathbf{u} = \sigma^2 \mathbf{u}.
+%\]
+%Note $AA^\dagger$ and $A^\dagger A$. We may wlog assume singular values are normalized, so that $\|A\|= \sigma_{max}$, but consider the square invertible case:
+%\[
+% A^{-1}A \mathbf{v} = \sigma A^{-1} \mathbf{u},\quad (A^{-1})^\dagger A^\dagger \mathbf{u} = \sigma [A^{-1}]^{\dagger} \mathbf{v}.
+%\]
+%So in fact we have
+%\[
+% \frac{1}{\sigma} \mathbf{v} = A^{-1} \mathbf{u},\quad \frac{1}{\sigma} \mathbf{u} = [A^\dagger]^{-1} \mathbf{v}.
+%\]
+%Therefore we know
+%\[
+% \|A^{-1}\| = \frac{1}{\sigma_{min}}.
+%\]
+%So we have
+%\[
+% \kappa(A) = \frac{\sigma_{max}}{\sigma_{min}}.
+%\]
+%Therefore, returning to gain, where now eigenvectors do not form orthonormal basis, we have (with $V$ not necessarily unitary)
+%\[
+% G(t) = \|e^{Lt}\|^2 = \|V e^{\Lambda t} V^{-1}\|^2 \leq \|V\|^2 \|e^{\Lambda t}\|^2 \|V^{-1}\|^2 = \kappa^2(V) e^{2 \Re(\lambda_1) t},
+%\]
+%which may be much larger. So we have the possibility for transient growth.
+%
+%Actually to calculate transient growth, it is very tempting to use the idea of SVD. We can write $e^{Lt} = B$, with
+%\[
+% BV = U\Sigma, U^\dagger = U^{-1}, V^\dagger = V^{-1}, \Sigma = \diag(\sigma_1, \ldots, \sigma_n),
+%\]
+%with $\sigma_i \in \R$ and $\sigma_1 > \cdots > \sigma_n \geq 0$.
+%
+%Therefore, $B\mathbf{v}_1 = \sigma_1 \mathbf{u}_1$, and remembering the definition of matrix norms, we have
+\begin{multline*}
+ G(t) =\|e^{At}\|^2 = \max_{\mathbf{x} \not= 0} \frac{(e^{At}\mathbf{x}, e^{At}\mathbf{x})}{(\mathbf{x}, \mathbf{x})} = \max_{\mathbf{x} \not= 0} \frac{(U\Sigma V^\dagger \mathbf{x}, U\Sigma V^\dagger \mathbf{x})}{(\mathbf{x}, \mathbf{x})}\\
+ = \max_{\mathbf{x} \not= 0} \frac{(\Sigma V^\dagger \mathbf{x}, \Sigma V^\dagger \mathbf{x})}{(\mathbf{x}, \mathbf{x})} = \max_{\mathbf{y} \not= 0} \frac{(\Sigma \mathbf{y}, \Sigma \mathbf{y})}{(\mathbf{y}, \mathbf{y})} = \sigma_1^2(t).
+\end{multline*}
If we have an explicit singular value decomposition, then this tells us the optimal initial condition if we want to maximize $G(t)$, namely the first column of $V$.
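As a sketch of this procedure (the matrix, the time and the value of $\varepsilon$ below are arbitrary choices carried over from the toy model), we can compute the SVD of $e^{At}$ numerically and confirm that the first right singular vector attains the gain $\sigma_1^2$:

```python
import numpy as np
from scipy.linalg import expm

eps = 0.01
A = np.array([[-eps, 1.0], [0.0, -2 * eps]])
t = np.log(2) / eps                     # near the time of maximum gain

B = expm(A * t)
U, s, Vh = np.linalg.svd(B)             # B = U diag(s) Vh

v1 = Vh[0]                              # optimal initial condition (unit norm)
gain_opt = np.linalg.norm(B @ v1) ** 2  # equals sigma_1^2

# a generic initial condition achieves a smaller gain
rng = np.random.default_rng(0)
x0 = rng.standard_normal(2)
gain_rand = np.linalg.norm(B @ x0) ** 2 / np.linalg.norm(x0) ** 2
print(gain_opt, s[0] ** 2, gain_rand)
```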
+
+%The optimal initial condition is $\mathbf{v}_1$, power and appealing techniques, as SVD can be calculated. This can of course be applied to the Orr-Sommerfeld equation, but there are subtleties arising from discretization.
+
+\subsection{Orr-Sommerfeld and Squire equations}
+Let's now see how this is relevant to our fluid dynamics problems. For this chapter, we will use the ``engineering'' coordinate system, so that the $y$ direction is the vertical direction. The $x, y, z$ directions are known as the \term{streamwise direction}, \term{wall-normal direction} and \term{spanwise direction} respectively.
+\begin{center}
+ \begin{tikzpicture}[scale=0.8]
+ \draw [->] (0, 0) -- (0, 2) node [above] {$y$};
+ \draw [->] (0, 0) -- (2, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (-0.9, -1.386) node [below] {$z$};
+ \end{tikzpicture}
+\end{center}
+Again suppose we have a base shear flow $U(y)$ subject to some small perturbations $(u, v, w)$. We can write down our equations as
+\begin{align*}
+ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} &= 0\\
+ \frac{\partial u}{\partial t} + U \frac{\partial u}{\partial x} + v U' &= - \frac{\partial p}{\partial x} + \frac{1}{Re}\nabla^2 u\\
+ \frac{\partial v}{\partial t} + U \frac{\partial v}{\partial x} &= - \frac{\partial p}{\partial y} + \frac{1}{Re} \nabla^2 v\\
+ \frac{\partial w}{\partial t} + U \frac{\partial w}{\partial x} &= - \frac{\partial p}{\partial z} + \frac{1}{Re} \nabla^2 w.
+\end{align*}
+Again our strategy is to reduce these to a single, higher order equation in $v$. To get rid of the pressure term, we differentiate the second, third and fourth equation with respect to $x, y$ and $z$ respectively and apply incompressibility to obtain
+\[
+ \nabla^2 p = -2U' \frac{\partial v}{\partial x}.
+\]
+By applying $\nabla^2$ to the third equation, we get an equation for the wall-normal velocity:
+\[
+ \left[ \left(\frac{\partial}{\partial t} + U \frac{\partial}{\partial x}\right)\nabla^2 - U'' \frac{\partial}{\partial x} - \frac{1}{Re}\nabla^4\right] v = 0.
+\]
+We would like to impose the boundary conditions $v = \frac{\partial v}{\partial y} = 0$, but together with initial conditions, this is not enough to specify the solution uniquely as we have a fourth order equation. Thus, we shall require the vorticity to vanish at the boundary as well.
+
+The wall-normal vorticity is defined by
+\[
+ \eta = \omega_y = \frac{\partial u}{\partial z} - \frac{\partial w}{\partial x}.
+\]
By taking $\frac{\partial}{\partial z}$ of the second equation and then subtracting $\frac{\partial}{\partial x}$ of the last, we then get the equation
+\[
+ \left[\frac{\partial}{\partial t} + U \frac{\partial}{\partial x} - \frac{1}{Re} \nabla^2\right] \eta = - U' \frac{\partial v}{\partial z}.
+\]
+As before, we decompose our perturbations into Fourier modes:
+\begin{align*}
+ v(x, y, z, t) &= \hat{v}(y) e^{i(\alpha x + \beta z - \omega t)}\\
+ \eta(x, y, z, t) &= \hat{\eta}(y) e^{i(\alpha x + \beta z - \omega t)}.
+\end{align*}
For convenience, we set $k^2 = \alpha^2 + \beta^2$ and write $\mathcal{D}$ for the $y$ derivative. We then obtain the \term{Orr-Sommerfeld equation} and \term{Squire equation} with boundary conditions $\hat{v} = \mathcal{D} \hat{v} = \hat{\eta} = 0$:
+\begin{align*}
+ \left[(-i\omega + i \alpha U)(\mathcal{D}^2 - k^2) - i \alpha U'' - \frac{1}{Re} (\mathcal{D}^2 - k^2)^2\right] \hat{v} &= 0\\
+ \left[(-i\omega + i \alpha U) - \frac{1}{Re} (\mathcal{D}^2 - k^2) \right] \hat{\eta} &= -i\beta U' \hat{v}.
+\end{align*}
+Note that if we set $Re = \infty$, then this reduces to the Rayleigh equation.
+
+Let's think a bit more about the Squire equation. Notice that there is an explicit $\hat{v}$ term on the right. Thus, the equation is forced by the wall-normal velocity. In general, we can distinguish between two classes of modes:
+\begin{enumerate}
+ \item \term{Squire modes}, which are solutions to the homogeneous problem with $\hat{v} = 0$;
 \item \term{Orr-Sommerfeld modes}, which are particular integrals for the actual $\hat{v}$.
+\end{enumerate}
The Squire modes are always damped. Indeed, set $\hat{v} = 0$, write $\omega = \alpha c$, multiply the Squire equation by $\hat{\eta}^*$ and integrate across the domain:
\[
 c \int_{-1}^1 \hat{\eta}^* \hat{\eta} \;\d y = \int_{-1}^1 U \hat{\eta}^* \hat{\eta} \;\d y - \frac{i}{\alpha Re} \int_{-1}^1 \hat{\eta}^* (k^2 - \mathcal{D}^2) \hat{\eta}\;\d y.
+\]
+Taking the imaginary part and integrating by parts, we obtain
+\[
+ c_i \int_{-1}^1 |\hat{\eta}|^2 \;\d y = - \frac{1}{\alpha Re} \int_{-1}^1 |\mathcal{D} \hat{\eta}|^2 + k^2 |\hat{\eta}|^2 \;\d y < 0.
+\]
+So we see that the Squire modes are always stable, and instability in vorticity comes from the forcing due to the velocity.
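As a quick sanity check on this conclusion, the following sketch (my own toy computation, not part of the notes) discretizes the homogeneous Squire problem for plane Couette flow $U(y) = y$ with second-order finite differences, and confirms numerically that every Squire mode is damped. The profile, resolution and parameter values are arbitrary choices for illustration.

```python
import numpy as np

# Homogeneous Squire problem: [i*alpha*U + (1/Re)(k^2 - D^2)] eta = i*omega*eta,
# with eta(+-1) = 0.  Discretize on interior points of (-1, 1).
N = 200                                   # interior grid points (arbitrary)
y = np.linspace(-1.0, 1.0, N + 2)[1:-1]
h = y[1] - y[0]

# Second-order finite differences for D^2 with Dirichlet boundary conditions.
D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), 1)) / h**2

alpha, beta, Re = 1.0, 1.0, 1000.0        # beta only enters via the forcing
k2 = alpha**2 + beta**2
U = np.diag(y)                            # Couette profile U(y) = y

L_SQ = 1j * alpha * U + (k2 * np.eye(N) - D2) / Re

# The eigenvalues mu satisfy mu = i*omega, so omega = -i*mu.
omega = -1j * np.linalg.eigvals(L_SQ)
print(omega.imag.max())                   # strictly negative: all modes damped
```

The Hermitian part of the operator is $\frac{1}{Re}(k^2 - \mathcal{D}^2)$, which is positive definite under these boundary conditions, so every eigenvalue has $\mathrm{Im}(\omega) < 0$, as the integral argument predicts.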
+
+It is convenient to express the various operators in compact vector form. Define
+\[
+ \hat{\mathbf{q}} =
+ \begin{pmatrix}
+ \hat{v}\\\hat{\eta}
+ \end{pmatrix},\quad
+ M =
+ \begin{pmatrix}
+ k^2 - \mathcal{D}^2 & 0\\
+ 0 & 1
+ \end{pmatrix},\quad
+ L =
+ \begin{pmatrix}
+ \mathcal{L}_{OS} & 0\\
+ i \beta U' & \mathcal{L}_{SQ}
+ \end{pmatrix},
+\]
+where
+\begin{align*}
+ \mathcal{L}_{OS} &= i\alpha U(k^2 - \mathcal{D}^2) + i \alpha U'' + \frac{1}{Re} (k^2 - \mathcal{D}^2)^2\\
+ \mathcal{L}_{SQ} &= i\alpha U + \frac{1}{Re}(k^2 - \mathcal{D}^2).
+\end{align*}
+We can then write our equations as
+\[
+ L\hat{\mathbf{q}} = i \omega M \hat{\mathbf{q}}.
+\]
+This form of equation reminds us of what we saw in Sturm--Liouville theory, where we had an eigenvalue equation of the form $Lu = \lambda w u$ for some weight function $w$. Here $M$ is not just a weight function, but a differential operator. However, the principle is the same. First of all, this tells us the correct inner product to use on our space of functions is
+\[
+ \bra \mathbf{p}, \mathbf{q}\ket = \int \mathbf{p}^\dagger M \mathbf{q} \;\d y.\tag{$*$}
+\]
+We can make the above equation look like an actual eigenvalue problem by writing it as
+\[
+ M^{-1}L \hat{\mathbf{q}} = i \omega \hat{\mathbf{q}}.
+\]
+We want to figure out if the operator $M^{-1}L$ is self-adjoint under $(*)$, because this tells us something about its eigenvalues and eigenfunctions. So in particular, we should be able to figure out what the adjoint should be. By definition, we have
+\begin{align*}
+ \bra \mathbf{p}, M^{-1}L\mathbf{q}\ket &= \int \mathbf{p}^\dagger MM^{-1} L \mathbf{q}\;\d y\\
+ &= \int \mathbf{p}^\dagger L\mathbf{q}\;\d y\\
+ &= \int (\mathbf{q}^\dagger (L^\dagger \mathbf{p}))^*\;\d y\\
+ &= \int (\mathbf{q}^\dagger M (M^{-1} L^\dagger \mathbf{p}))^*\;\d y,
+\end{align*}
+where $L^\dagger$ is the adjoint of $L$ under the $L^2$ norm. So the adjoint eigenvalue equation is
+\[
+ L^\dagger \mathbf{q} = -i\omega M \mathbf{q}.
+\]
+Here we introduced a negative sign in the right hand side, which morally comes from the fact we ``took the conjugate'' of $i$. Practically, adopting this sign convention makes, for example, the statement of biorthogonality cleaner.
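The bookkeeping above can be checked at the level of matrices. The following sketch (toy random matrices of my own choosing, not the actual Orr--Sommerfeld operators) verifies that, with respect to the weighted inner product $\bra \mathbf{p}, \mathbf{q}\ket = \mathbf{p}^\dagger M \mathbf{q}$, the adjoint of $M^{-1}L$ is $M^{-1}L^\dagger$, where $L^\dagger$ is the adjoint (conjugate transpose) under the unweighted product.

```python
import numpy as np

# Random Hermitian positive definite weight M and random complex L.
rng = np.random.default_rng(4)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = B @ B.conj().T + n * np.eye(n)       # Hermitian positive definite
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Minv = np.linalg.inv(M)

def ip(p, q):
    """Weighted inner product <p, q> = p^dagger M q."""
    return p.conj() @ (M @ q)

p = rng.standard_normal(n) + 1j * rng.standard_normal(n)
q = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# <p, M^{-1} L q> = <M^{-1} L^dagger p, q>: both sides reduce to p^dagger L q.
lhs = ip(p, Minv @ L @ q)
rhs = ip(Minv @ L.conj().T @ p, q)
```

Both sides collapse to $\mathbf{p}^\dagger L \mathbf{q}$, exactly as in the integral computation above.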
+
+So \emph{mathematically}, we know we should take the inner product as $\int \mathbf{p}^\dagger M \mathbf{q}$. Physically, what does this mean? From incompressibility
+\[
+ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0,
+\]
+plugging in our series expansions of $v, \eta$ etc. gives us
+\[
+ i\alpha \hat{u} + i \beta \hat{w} = - \mathcal{D} \hat{v},\quad i\beta \hat{u} - i\alpha \hat{w} = \hat{\eta}.
+\]
+So we find that
+\[
+ \hat{u} = \frac{i}{k^2}(\alpha \mathcal{D} \hat{v} - \beta \hat{\eta}),\quad \hat{w} = \frac{i}{k^2}(\beta \mathcal{D} \hat{v} + \alpha \hat{\eta}).
+\]
+Thus, we have
+\[
+ \frac{1}{2}(|\hat{u}|^2 + |\hat{w}|^2) = \frac{1}{2k^2} \Big(|\mathcal{D} \hat{v}|^2 + |\hat{\eta}|^2\Big),
+\]
+and the total energy is
+\begin{multline*}
+ E = \int_{-1}^1 \frac{1}{2k^2} \Big(|\mathcal{D} \hat{v}|^2 + k^2 |\hat{v}|^2 + |\hat{\eta}|^2\Big)\;\d y\\
+ = \frac{1}{2k^2} \int_{-1}^1 (\hat{v}^*\; \hat{\eta}^*)
+ \begin{pmatrix}
+ k^2 - \mathcal{D}^2 & 0\\
+ 0 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ \hat{v}\\
+ \hat{\eta}
+ \end{pmatrix}\;\d y = \frac{1}{2k^2}\bra \mathbf{q}, \mathbf{q}\ket.
+\end{multline*}
+So the inner product the mathematics told us to use is in fact, up to a constant scaling, the energy!
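The pointwise algebra behind this identity is easy to sanity-check numerically. This sketch (random sample data of my own choosing, not a flow computation) reconstructs $\hat{u}, \hat{w}$ from $\mathcal{D}\hat{v}$ and $\hat{\eta}$ and confirms $|\hat{u}|^2 + |\hat{w}|^2 = (|\mathcal{D}\hat{v}|^2 + |\hat{\eta}|^2)/k^2$:

```python
import numpy as np

# Treat Dv (the y-derivative of v-hat) and eta-hat as arbitrary complex
# data; the identity is pure algebra, independent of the profile.
rng = np.random.default_rng(3)
alpha, beta = 1.3, 0.7
k2 = alpha**2 + beta**2
Dv = rng.standard_normal(100) + 1j * rng.standard_normal(100)
eta = rng.standard_normal(100) + 1j * rng.standard_normal(100)

# Reconstruction formulas from incompressibility and the vorticity definition.
u = 1j * (alpha * Dv - beta * eta) / k2
w = 1j * (beta * Dv + alpha * eta) / k2

lhs = np.abs(u)**2 + np.abs(w)**2
rhs = (np.abs(Dv)**2 + np.abs(eta)**2) / k2
```

The cross terms cancel because of the $(\alpha, -\beta)$ and $(\beta, \alpha)$ structure of the two formulas.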
+
+%We now see that the Orr--Sommerfeld modes and Squire modes are both eigenmodes, and can just be separately classified. Note that the eigenmodes of the Orr-Sommerfeld operators are in general not orthogonal, and has both discrete and continuous spectrum. The fundamental questions which we want to ask are:
+%\begin{itemize}
+% \item Is the Orr-Sommerfeld operator non-self-adjoint?
+% \item How do the eigenfunctions relate to adjoint eigenfunctions?
+% \item Does the adjoint help us to understand optimal transient growth?
+%\end{itemize}
+%Recall that the notion of an adjoint has to be defined with respect to an inner product. This is slightly subtle. Recall that the Orr--Sommerfeld equation was
+%\[
+% \mathcal{L}_{OS} \hat{\mathbf{q}} = i\omega (k^2 - \mathcal{D}^2) \hat{\mathbf{q}}
+%\]
+%This tells us the correct inner product is the \emph{energy} norm
+%\[
+% \bra \xi, v\ket = \int \xi^* (k^2 - \mathcal{D}^2) v \;\d y = \int \Big(k^2 \xi^* \cdot v + (\D \xi) \cdot (\D v)\Big)\;\d y.
+%\]
+%On the other hand, the actual eigenvalue problem is that associated to $(k^2 - \mathcal{D}^2)^{-1} \mathcal{L}_{OS}$. Then the adjoint is defined to be $(k^2 - \mathcal{D}^2)^{-1} \mathcal{L}_{OS}^\dagger$, where $\mathcal{L}_{OS}^\dagger$ satisfies
+%\[
+% \int_{-1}^1 \hat{\xi}^* \mathcal{L}_{OS} \hat{v} \;\d y = \int_{-1}^1 \hat{v} (\mathcal{L}_{OS}^\dagger \hat{\xi})^*\;\d y,
+%\]
+%since the $(k^2 - \mathcal{D}^2)^{-1}$ in the operator cancels with the $(k^2 - \mathcal{D}^2)$ in the inner product. Thus, we are equivalently asking if $\mathcal{L}_{OS}$ is self-adjoint in the $L^2$ norm.
+%
+%This might seem like a long-winded way of getting back to the $L^2$-norm, but we will see that the energy norm is indeed the correct norm to think about, such as when we think about orthogonality.
+%
+%Given this, computing $\mathcal{L}_{OS}^\dagger$ is just an exercise in integration by parts, with slight annoyances due to the fact that $U$ has non-zero derivative. Ultimately, we end up with
+%\[
+% \mathcal{L}_{OS}^\dagger = ik U (\mathcal{D}^2 - k^2) - 2i k U'\mathcal{D} + \frac{1}{Re} (k^2 - \mathcal{D}^2)^2.
+%\]
+%Compared to the original operator, we picked up some sign changes due to conjugation, but more significantly, we have a different advection term $2ik U' \mathcal{D}$.
+%
+%A perhaps slightly subtle point is that the eigenvalue problem for the adjoint should be
+%\[
+% \mathcal{L}_{OS}^\dagger \hat{\xi}_q = i \omega_q(\mathcal{D}^2 - k^2) \hat{\xi}_q.
+%\]
+%The change in sign is largely because we have decided to put an $i$ in from of our eigenvalues. To see this gives the correct definition, suppose $\hat{v}_p$ is an eigenvector of the original equation, so that
+%\[
+% \mathcal{L}_{OS}\hat{v}_p = i\omega_p (k^2 - \mathcal{D}^2) \hat{v}_p.
+%\]
+%Then we have
+%\[
+% 0 = \int_{-1}^1 \hat{\xi}_q^* \mathcal{L}_{OS} \hat{v}_p\;\d y - \int_{-1}^1 \hat{v}_p (\mathcal{L}_{OS}^\dagger \hat{\xi}_q)^* \;\d y = (\omega_p - \omega_q^*) \int_{-1}^1 \hat{\xi}_q^* (k^2 - \mathcal{D}^2) \hat{v}_p \;\d y.
+%\]
+%Thus, we recover the usual result that eigenvectors of the adjoint and the direct operator are orthogonal unless their eigenvalues are conjugate, provided we use the energy inner product. After normalizing, we obtain eigenvectors such that
+%
+%Using the boundary conditions, we have
+%\begin{align*}
+% \int_{-1}^1 \hat{\xi}^* [U(\mathcal{D}^2 - \alpha^2)\hat{v} - U'' \hat{v}]\;\d y &= \int_{-1}^1 \hat{v}(\hat{\xi}^*)'' \;\d y - \int_{-1}^1 \hat{\xi}^* (\alpha^2 U + U'') \hat{v}\;\d y\\
+% &= \int_{-1}^1 \hat{v} [U(\mathcal{D}^2 - \alpha^2) \hat{\xi}^* + 2 U' \mathcal{D} \hat{\xi}^*]\;\d y.
+%\end{align*}
+%We can similarly integrate the $Re$ term by parts $4$ times to obtain
+%\[
+% \frac{1}{Re} \int_{-1}^1 \hat{\xi}^* \mathcal{D}^4 \hat{v}\;\d y = \frac{1}{Re} \int_{-1}^1 \hat{v} \mathcal{D}^4 \hat{\xi}^* \;\d y.
+%\]
+%Here we need to impose the boundary condition that both $\hat{v}$ and $\mathcal{D}\hat{v}$ vanish at the boundary (and same for $\hat{\xi}$, of course). Therefore, remembering the complex conjugate, the adjoint Orr--Sommerfeld equation is
+%\[
+% \left[(-i\omega + i \alpha U) (\mathcal{D}^2 - \alpha^2) + 2i\alpha U' \mathcal{D} + \frac{1}{Re} ( \mathcal{D}^2 - \alpha^2)^2\right] \hat{\xi} = 0.
+%\]
+%Note that we have a different advection term, a relative sign of diffusion, and time-dependence switched.
+%
+%From the definitions of the direct and adjoint Orr-Sommerfeld operators, adjoint eigenvalues are complex conjugates of the direct eigenvalues. Indeed, a direct eigenfunction is a $\hat{v}_p$ such that
+%\[
+% \mathcal{L}_{OS} \hat{v} = i\omega_p (\alpha^2 - \mathcal{D}^2) \hat{v},
+%\]
+%while an adjoint eigenfunctino $\hat{\xi}_p$ is one such that
+%\[
+% \mathcal{L}_{OS}^\dagger \hat{\xi}_q = i\omega_q (\mathcal{D}^2 - \alpha^2) \hat{\xi}_q.
+%\]
+%So we have
+%\[
+% 0 = \int_{-1}^1 \hat{\xi}_q^* \mathcal{L}_{OS} \hat{v}_p\;\d y - \int_{-1}^1 \hat{v}_p (\mathcal{L}_{OS}^\dagger \hat{\xi}_q)^* \;\d y = (\omega_p - \omega_q)^* \int_{-1}^1 \hat{\xi}_q^* (\alpha^2 - \mathcal{D}^2) \hat{v}_p \;\d y.
+%\]
+%When defined this way, we now have identified a \emph{weighted} inner product such, provided the eigenvectors are normalized,
+%\[
+% \int_{-1}^1 \hat{\xi}_q^*( k^2 - \mathcal{D}^2) \hat{v}_p \;\d y = \delta_{pq}.
+%\]
+%This is called the \term{bi-orthogonality condition}.
+
+We can now try to discretize this, do SVD (numerically), and find out what the growth rates are like. However, in the next chapter, we shall see that there is a better way of doing so. Thus, in the remainder of this chapter, we shall look at ways in which transient growth can manifest itself physically.
+
+\subsubsection*{Orr's mechanism}
+Suppose we have a simple flow profile that looks like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.7, yscale=1.3]
+ \draw (-1.2, -1.2) -- (1.2, 1.2);
+
+ \draw [dashed] (0, -1.2) -- (0, 1.2);
+
+ \foreach \x in {1, 0.6, 0.2, -0.2, -0.6, -1}{
+ \draw [-latex'] (0, \x) -- (\x, \x);
+ }
+ \end{tikzpicture}
+\end{center}
+Recall that the $z$-direction vorticity is given by
+\[
+ \omega_3 = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y},
+\]
+and evolves as
+\[
+ \frac{\partial \omega_3}{\partial t} + U \frac{\partial \omega_3}{\partial x} = U' \left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right) + v U'' + \frac{1}{Re} \nabla^2 \omega_3.
+\]
+Assuming constant shear and $\beta = 0$ at high Reynolds number, we have $\frac{\D \omega_3}{\D t} \simeq 0$.
+
+Suppose we have some striped patch of vorticity:
+\begin{center}
+ \begin{tikzpicture}
+ \draw[rotate=-10] ellipse (2 and 0.15);
+ \begin{scope}[scale=0.7, yscale=1.3, opacity=0.5]
+ \draw (-1.2, -1.2) -- (1.2, 1.2);
+ \draw [dashed] (0, -1.2) -- (0, 1.2);
+ \foreach \x in {1, 0.6, 0.2, -0.2, -0.6, -1}{
+ \draw [-latex'] (0, \x) -- (\x, \x);
+ }
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+In this stripe, we always have $\omega_3 > 0$. Now over time, due to the shear flow, this evolves to become something that looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw[rotate=90] ellipse (1 and 0.3);
+ \begin{scope}[scale=0.7, yscale=1.3, opacity=0.5]
+ \draw (-1.2, -1.2) -- (1.2, 1.2);
+ \draw [dashed] (0, -1.2) -- (0, 1.2);
+ \foreach \x in {1, 0.6, 0.2, -0.2, -0.6, -1}{
+ \draw [-latex'] (0, \x) -- (\x, \x);
+ }
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Now by incompressibility, the area of this new region is the same as the area of the old. We also argued that $\omega_3$ does not change. So the total quantity $\int_\mathcal{D} \omega_3 \;\d A = \int_\mathcal{D} \nabla \times \mathbf{u} \cdot \d A$ does not change. But Stokes' theorem says
+\[
+ \int_{\mathcal{D}} \nabla \times \mathbf{u} \cdot \d A = \int_{\partial \mathcal{D}} \mathbf{u} \cdot \d \ell.
+\]
+Since the left-hand side didn't change, the same must be true for the right-hand side. But since the boundary $\partial \mathcal{D}$ decreased in length, this implies $\mathbf{u}$ must have increased in magnitude! This growth is only transient, since after some further time, the vorticity patch gets sheared into
+\begin{center}
+ \begin{tikzpicture}
+ \draw[rotate=10] ellipse (2 and 0.15);
+ \begin{scope}[scale=0.7, yscale=1.3, opacity=0.5]
+ \draw (-1.2, -1.2) -- (1.2, 1.2);
+ \draw [dashed] (0, -1.2) -- (0, 1.2);
+ \foreach \x in {1, 0.6, 0.2, -0.2, -0.6, -1}{
+ \draw [-latex'] (0, \x) -- (\x, \x);
+ }
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+But remember that Stokes' theorem tells us
+\[
+ \int_\mathcal{D} [\nabla \times \mathbf{u}] \;\d A = \int_{\partial \mathcal{D}} \mathbf{u} \cdot \d \ell.
+\]
+Thus, if we have vortices that are initially tilted into the shear, then this gets advected by the mean shear. In this process, the perimeter of each vortex sheet gets smaller, then grows again. Since $\int_{\partial \mathcal{D}} \mathbf{u} \cdot \d \ell$ is constant, we know $\mathbf{u}$ grows transiently, and then vanishes again.
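The geometry of this argument can be illustrated with a toy computation (my own sketch; the ellipse dimensions and tilt are arbitrary choices). We advect the boundary of a thin patch tilted against the shear $U = y$ by the map $(x, y) \mapsto (x + yt, y)$, and watch its perimeter shrink and then grow again; since the circulation is conserved, the velocity scale $\sim \Gamma/\text{perimeter}$ transiently amplifies.

```python
import numpy as np

# Boundary of a thin ellipse tilted against the shear (rotated by -30 degrees).
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
a, b, phi = 2.0, 0.15, -np.pi / 6
x0 = a * np.cos(theta) * np.cos(phi) - b * np.sin(theta) * np.sin(phi)
y0 = a * np.cos(theta) * np.sin(phi) + b * np.sin(theta) * np.cos(phi)

def perimeter(t):
    """Perimeter of the patch boundary passively advected by U = y."""
    x, y = x0 + y0 * t, y0
    return np.hypot(np.diff(x), np.diff(y)).sum()

ts = np.linspace(0.0, 3.0, 301)
perims = np.array([perimeter(t) for t in ts])
i_min = perims.argmin()
# Perimeter decreases, reaches a minimum, then increases: since the
# circulation around the boundary is fixed, |u| grows then decays.
```

The minimum occurs roughly when the shear has rotated the patch to the vertical, matching the middle figure above.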
+
+\subsubsection*{Lift up}
+Another mechanism is lift-up. This involves doing some mathematics. Suppose we instead expand our solutions as
+\[
+ v = \tilde{v}(y, t) e^{i\alpha x + i \beta z},\quad \eta = \tilde{\eta}(y, t) e^{i\alpha x + i \beta z}.
+\]
+Focusing on the $\alpha = 0$ and $Re = \infty$ case, the Squire and Orr-Sommerfeld equations become
+\begin{align*}
+ \frac{\partial}{\partial t} \tilde{\eta}(y, t) &= -i\beta U' \tilde{v}\\
+ \frac{\partial}{\partial t} (\mathcal{D}^2 - k^2) \tilde{v} &= 0.
+\end{align*}
+Since we have finite depth, except for a few specific values of $k$, the only solution to the second equation is $\tilde{v}(y, t) = \tilde{v}_0(y)$, and then
+\[
+ \tilde{\eta} = \tilde{\eta}_0 - i \beta U' \tilde{v}_0 t.
+\]
+This is an algebraic instability. The time-independent $\tilde{v}$ means fluid is constantly being lifted up, and we can attribute this to the streamwise vorticity $\omega_1$ of streamwise rolls.
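A minimal time-stepping check of this algebraic growth (toy parameter values of my own choosing): integrating $\partial_t \tilde{\eta} = -i\beta U' \tilde{v}_0$ with forward Euler from $\tilde{\eta}_0 = 0$ reproduces $|\tilde{\eta}| = \beta |U' \tilde{v}_0|\, t$ exactly, since the right-hand side is constant in time.

```python
import numpy as np

# Lift-up at alpha = 0, Re = infinity: d(eta)/dt = -i * beta * U' * v0.
beta, Uprime, v0 = 1.0, 1.0, 1.0          # arbitrary illustrative values
dt, nsteps = 1e-3, 10000
eta = 0.0 + 0.0j
for _ in range(nsteps):
    eta += dt * (-1j * beta * Uprime * v0)  # forward Euler (exact here)

t = dt * nsteps
# |eta| grows linearly in t: the algebraic instability.
```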
+
+%Recall the Squire equation in the large Re limit:
+%\[
+% \left[\frac{\partial}{\partial t} + i \alpha U \right] \tilde{\eta}(y, t) = -i \beta U' \tilde{v}(y, t),
+%\]
+%where we set
+%\[
+% v = \tilde{v}(y, t) e^{i\alpha x + i \beta z},\quad \eta = \tilde{\eta}(y, t) e^{i\alpha x + i \beta z}.
+%\]
+%We can formally integrate
+%\[
+% \tilde{\eta} = \tilde{\eta}_0 e^{-i\alpha Ut} - i \beta U' e^{-i\alpha Ut} \int_0^t \tilde{v}(y, \tau)e^{i \alpha U\tau} \;\d \tau.
+%\]
+%Let's suppose that $\alpha = 0$. Then the evolution is completely dominated by wall-normal velocity. We make similar assumptions for the Rayleigh equation, which was
+%\[
+% \left[\left(\frac{\partial}{\partial t} + i \alpha U\right) (\mathcal{D}^2 - k^2) - i \alpha U''\right] \tilde{v}(y, t) = 0.
+%\]
+%If $\alpha = 0$, then $\tilde{v} = \tilde{v}_0$ is a constant, and so $\tilde{\eta} = \tilde{\eta}_0 - i \beta U' \tilde{v}_0 t$. This is an \emph{algebraic} instability. Such dynamics can be associated with streamwise rolls with purely $\omega_1$ \emph{lifting up} fluid.
+
+We should be a bit careful when we consider the case where the disturbance is localized. In this case, we should consider quantities integrated over all $x$. We use a bar to denote this integral, so that, for example, $\bar{v} = \int_{-\infty}^\infty v\;\d x$. Of course, this makes sense only if the disturbance is local, so that the integral converges. Ultimately, we want to understand the growth in the energy, but it is convenient to first understand $\bar{v}, \bar{u}$, and the long-forgotten $\bar{p}$. We have three equations
+\begin{align*}
+ \nabla^2 p &= - 2U' \frac{\partial v}{\partial x}\\
+ \frac{\partial v}{\partial t} + U \frac{\partial v}{\partial x} &= - \frac{\partial p}{\partial y}\\
+ \frac{\partial u}{\partial t} + U \frac{\partial u}{\partial x} + v U' &= - \frac{\partial p}{\partial x}.
+\end{align*}
+Note that $U$ does not depend on $x$, and all our (small lettered) variables vanish at infinity since the disturbance is local. Thus, integrating the first equation over all $x$, we get $\nabla^2 \bar{p} = 0$. So $\bar{p} = 0$. Integrating the second equation then tells us $\frac{\partial \bar{v}}{\partial t} = 0$. Finally, plugging this into the integral of the last equation tells us
+\[
+ \bar{u} = \bar{u}_0 - \bar{v}_0 U' t.
+\]
+Thus, $\bar{u}$ grows linearly with time. However, this does not immediately imply that the energy is growing, since the domain is growing as well, and the velocity may be spread over a larger region of space.
+
+Let's suppose $u(x, 0) = 0$ for $|x| > \delta$. Then at time $t$, we would expect $u$ to be non-zero only for $U_{min} t - \delta < x < U_{max} t + \delta$. Recall that Cauchy--Schwarz says
+\[
+ \left|\int_{-\infty}^\infty f(x) g(x)\;\d x \right|^2 \leq \left(\int_{-\infty}^\infty |f(x)|^2\;\d x\right)\left(\int_{-\infty}^\infty |g(x)|^2 \;\d x\right).
+\]
+Here we can take $f = u$, and
+\[
+ g(x) =
+ \begin{cases}
+ 1 & U_{min} t - \delta < x < U_{max} t + \delta\\
+ 0 & \text{otherwise}
+ \end{cases}.
+\]
+Then applying Cauchy--Schwarz gives us
+\[
+ \bar{u}^2 \leq [\Delta U\, t + 2\delta] \int_{-\infty}^\infty u^2 \;\d x,
+\]
+where $\Delta U = U_{max} - U_{min}$.
+So we can bound
+\[
+ E \geq \frac{[\bar{u}]^2}{2 \Delta U t} = \frac{[\bar{v}_0 U']^2 t}{2 \Delta U}
+\]
+provided $t \gg \frac{2\delta}{\Delta U}$. Therefore energy grows at least as fast as $t$, but not necessarily as fast as $t^2$.
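The Cauchy--Schwarz step is easy to check numerically. This sketch uses a made-up compactly supported disturbance profile (my own choice) and verifies $\bar{u}^2 \leq \ell \int u^2 \;\d x$, where $\ell$ is the length of the support:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 10001)
dx = x[1] - x[0]
ell = 4.0                                 # support is (-ell/2, ell/2)
# Arbitrary localized disturbance, cut off outside its support.
u = np.where(np.abs(x) < ell / 2, np.exp(-x**2) * np.cos(3.0 * x), 0.0)

ubar = u.sum() * dx                       # int u dx  (Riemann sum)
energy_int = (u**2).sum() * dx            # int u^2 dx
# Cauchy-Schwarz with g the indicator of the support:
#   ubar^2 <= ell * int u^2 dx.
```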
+
+
+%Let's try to recall some linear algebra. We shall consider bounded domains, so that we don't have to worry about continuous spectra. Let us also assume that finite Fourier representations of a perturbation are adequate. Further, let us assume that the base state is independent of time, to make life easier.
+%
+%For a weight matrix $W$, we can define a weighted inner product by
+%\[
+% (\mathbf{a}, \mathbf{b}) = \mathbf{a}^\dagger W\mathbf{b},
+%\]
+%where $\mathbf{a}, \mathbf{b} \in \C^n$. Since the weight matrix is positive definite, we have a \term{Cholesky factorization} $W = FF^\dagger$, where $F$ is lower triangular. We now have a natural definition of an (energy) norm:
+%\[
+% \|\mathbf{q}(t)\|^2 = (\mathbf{q}, \mathbf{q}) = \mathbf{q}^\dagger W\mathbf{q} = \|F^\dagger \mathbf{q}(t)\|_2^2.
+%\]
+%Of course, if $W$ is just the identity matrix, then we are doing nothing. What this tells us is that we can always change our coordinate system so that we get the Euclidean norm, so let us assume that is the case. In particular, this removes problems to do with $\mathcal{D} = \frac{\partial}{\partial y}$.
+%
+%In this case, we can reduce the stability problem to solving a system of coupled linear ODEs. But unlike classic undergraduate problems, our operators are not normal.
+%
+%We are interested in a system
+%\[
+% \frac{\d \mathbf{q}}{\d t} = L \mathbf{q}
+%\]
+%for $\mathbf{q} \in \C^n$ and $\mathbf{q}(0) = \mathbf{q}_0$. We can solve this in terms of matrix exponentials:
+%\[
+% \mathbf{q}(t) = e^{Lt} \mathbf{q}_0,
+%\]
+%where the matrix exponential is defined by
+%\[
+% e^{Lt} = 1 + Lt + \frac{t^2}{2} L^2 + \cdots.
+%\]
+%The key question is what is the maximum gain
+%\[
+% G(t) = \max_{\mathbf{q}(0) \not= 0}\frac{\|\mathbf{q}(t)\|^2}{ \|\mathbf{q}(0)\|^2} = \max_{\mathbf{q}(0) \not= 0} \frac{\|e^{Lt} \mathbf{q}(0)\|^2}{\|\mathbf{q}(0)\|^2}.
+%\]
+%By definition, this is the natrix norm:
+%\begin{defi}[Matrix norm]\index{matrix norm}
+% Let $B$ be an $n \times n$ matrix. Then the \emph{matrix norm} is
+% \[
+% \|B\| = \max_{\mathbf{v} \not= 0} \frac{\|B\mathbf{v}\|}{\|\mathbf{v}\|}.
+% \]
+%\end{defi}
+%The fundamental issue is then to describe $q$ in terms of normalized eigenvectors of $L$. We order the associated eigenvalues by their real partrs, so that $\Re(\lambda_1) \geq \cdots \geq \Re(\lambda_n)$. Then the gain is clearly bounded below by
+%\[
+% G(t) \geq e^{2 \Re(\lambda_1)t}.
+%\]
+%If $L$ is normal, then the eigenvectors form an orthonormal basis. So we can write $L = V \Lambda V^{-1}$ for $\Lambda = \diag(\lambda_1,\ldots, \lambda_n)$ and $V$ is unitary. Then % condition number \kappa(V) = 1, but in general \kappa(V) = |V| |V^{-1}|
+%\[
+% G(t) = \|e^{Lt}\|^2 = \|V e^{\Lambda t} V^{-1}\|^2 = \|e^{\Lambda t}\|^2 = e^{2 \Re (\lambda_1) t}.
+%\]
+%By life gets enormously more complicated when matrices are non-normal. Recall the \term{condition number}
+%\[
+% \kappa(A) = \|A\| \|A^{-1}\|.
+%\]
+%Singluar values and singular vectors air pairs of vectors $(\mathbf{v}, \mathbf{u})$ such that
+%\[
+% A\mathbf{v} = \sigma \mathbf{u},\quad A^\dagger \mathbf{u} = \sigma \mathbf{v}.
+%\]
+%Then we have
+%\[
+% A^\dagger A \mathbf{v} = \sigma^2 \mathbf{v},\quad AA^\dagger \mathbf{u} = \sigma^2 \mathbf{u}.
+%\]
+%Note $AA^\dagger$ and $A^\dagger A$. We may wlog assume singular values are normalized, so that $\|A\|= \sigma_{max}$, but consider the square invertible case:
+%\[
+% A^{-1}A \mathbf{v} = \sigma A^{-1} \mathbf{u},\quad (A^{-1})^\dagger A^\dagger \mathbf{u} = \sigma [A^{-1}]^{\dagger} \mathbf{v}.
+%\]
+%Sio in fact we have
+%\[
+% \frac{1}{\sigma} \mathbf{v} = A^{-1} \mathbf{u},\quad \frac{1}{\sigma} \mathbf{u} = [A^\dagger]^{-1} \mathbf{v}.
+%\]
+%Therefore we know
+%\[
+% \|A^{-1}\| = \frac{1}{\sigma_{min}}.
+%\]
+%So we have
+%\[
+% \kappa(A) = \frac{\sigma_{max}}{\sigma_{min}}.
+%\]
+%Therefore, returning to gain, where now eigenvectors do not form orthonormal basis, we have (with $V$ not necessarily unitary)
+%\[
+% G(t) = \|e^{Lt}\|^2 = \|V e^{\Lambda t} V^{-1}\|^2 \leq \|V\|^2 \|e^{\Lambda t}\|^2 \|V^{-1}\|^2 = \kappa^2(V) e^{2 \Re(\lambda_1) t},
+%\]
+%which may be much larger. So we have the possibility for transient growth.
+%
+%Actually to calculate transient growth, it is very tempting to use the idea of SVD. We can write $e^{Lt} = B$, with
+%\[
+% BV = U\Sigma, U^\dagger = U^{-1}, V^\dagger = V^{-1}, \Sigma = \diag(\sigma_1, \ldots, \sigma_n),
+%\]
+%with $\sigma_i \in \R$ and $\sigma_1 > \cdots > \sigma_n \geq 0$.
+%
+%Therefore, $B\mathbf{v}_1 = \sigma_1 \mathbf{u}_1$, and remembering the definition of matrix norms, we have
+%\begin{multline*}
+% G(t) =\|e^{Lt}\|^2 = \max_{\mathbf{x} \not= 0} \frac{(B\mathbf{x}, B\mathbf{x})}{(\mathbf{x}, \mathbf{x}}) = \max_{\mathbf{x} \not= 0} \frac{(U\Sigma V^\dagger \mathbf{x}, U\Sigma V^\dagger \mathbf{x})}{(\mathbf{x}, \mathbf{x})}\\
+% = \max_{\mathbf{x} \not= 0} \frac{(\Sigma V^\dagger \mathbf{x}, \Sigma V^\dagger \mathbf{x})}{(\mathbf{x}, \mathbf{x})} = \max_{\mathbf{y} \not= 0} \frac{(\Sigma \mathbf{y}, \Sigma \mathbf{y})}{(\mathbf{y}, \mathbf{y})} = \sigma_1^2(t).
+%\end{multline*}
+%The optimal initial condition is $\mathbf{v}_1$, power and appealing techniques, as SVD can be calculated. This can of course be applied to the Orr-Sommerfeld equation, but there are subtleties arising from discretization.
+%
+%% say a bit more
+%But there are some further problems. Crucially, this assumes the system is linear.
+
+\section{A variational point of view}
+In this chapter, we are going to learn about a more robust and powerful way to approach and understand transient growth. Instead of trying to think about $G(t)$ as a function of $t$, let us fix some target time $T$, and just look at $G(T)$.
+
+For any two times $t_i < t_f$, we can define a \term{propagator function} such that
+\[
+ \mathbf{u}(t_f) = \Phi(t_f, t_i) \mathbf{u}(t_i)
+\]
+for any solution $\mathbf{u}$ of our equations. In the linear approximation, this propagator is a linear function between appropriate function spaces. Writing $\Phi = \Phi(T, T_0)$, the gain problem is then equivalent to maximizing
+\begin{multline*}
+ G(T, T_0) = \frac{E(T)}{E(T_0)} = \frac{\bra \mathbf{u}_p(T), \mathbf{u}_p(T)\ket}{ \bra \mathbf{u}_p(T_0), \mathbf{u}_p(T_0)\ket}\\
+ = \frac{\bra \Phi \mathbf{u}_p(T_0), \Phi \mathbf{u}_p(T_0)\ket}{\bra \mathbf{u}_p(T_0), \mathbf{u}_p(T_0)\ket} = \frac{\bra \mathbf{u}_p(T_0), \Phi^\dagger \Phi \mathbf{u}_p(T_0)\ket}{\bra \mathbf{u}_p(T_0), \mathbf{u}_p(T_0)\ket}.
+\end{multline*}
+Here the angled brackets denote the natural inner product leading to the energy norm. Note that the operator $\Phi^\dagger \Phi$ is necessarily self-adjoint, and so this $G$ is maximized when $\mathbf{u}_p(T_0)$ is chosen to be the eigenvector of $\Phi^\dagger \Phi$ with maximum eigenvalue.
+
+There is a general method to find the eigenvector of maximum eigenvalue of a self-adjoint operator $A$ (for us, $A = \Phi^\dagger \Phi$). We start with a random vector $\mathbf{x}$. Then we have $A^n \mathbf{x} \sim \lambda_1^n \mathbf{v}_1$ as $n \to \infty$, where $\lambda_1$ is the maximum eigenvalue with associated eigenvector $\mathbf{v}_1$. Indeed, if we write $\mathbf{x}$ as a linear combination of eigenvectors, then as we apply $A$ many times, the sum is dominated by the term with the largest eigenvalue.
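A minimal sketch of this power iteration, with a random symmetric matrix standing in for the self-adjoint operator $\Phi^\dagger \Phi$ (the matrix and sizes are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B.T @ B                       # symmetric, hence self-adjoint

x = rng.standard_normal(50)
for _ in range(5000):             # repeatedly apply A, renormalizing
    x = A @ x
    x /= np.linalg.norm(x)

lam = x @ A @ x                   # Rayleigh quotient -> largest eigenvalue
```

After convergence, `lam` agrees with the largest eigenvalue of `A`, and `x` is the corresponding eigenvector.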
+
+So if we want to find the mode with maximal transient growth, we only need to be able to compute $\Phi^\dagger \Phi$. The forward propagator $\Phi(T, T_0)$ is something we know how to compute (at least numerically). We simply numerically integrate the Navier--Stokes equation. So we need to understand $\Phi(T, T_0)^\dagger$.
+
+Let $\mathbf{u}(t)$ be a solution to the linear equation
+\[
+ \frac{\d \mathbf{u}}{\d t}(t) = L(t) \mathbf{u}(t).
+\]
+Here we may allow $L$ to depend on time. Let $L^\dagger$ be the adjoint, and suppose $\mathbf{v}(t)$ satisfies
+\[
+ \frac{\d \mathbf{v}}{\d t}(t) = - L(t)^\dagger \mathbf{v}(t).
+\]
+Then the product rule tells us
+\[
+ \frac{\d}{\d t} \bra\mathbf{v}(t), \mathbf{u}(t)\ket = \bra -L(t)^\dagger \mathbf{v}(t), \mathbf{u}(t)\ket + \bra \mathbf{v}(t), L(t) \mathbf{u}(t)\ket = 0.
+\]
+So we know that
+\[
+ \bra \mathbf{v}(T_0), \mathbf{u}(T_0)\ket = \bra \mathbf{v}(T), \mathbf{u}(T)\ket = \bra \mathbf{v}(T), \Phi(T, T_0) \mathbf{u}(T_0)\ket.
+\]
+Thus, given a $\mathbf{v}_0$, to compute $\Phi(T, T_0)^\dagger \mathbf{v}_0$, we have to integrate the adjoint equation $\frac{\d \mathbf{v}}{\d t} = - L(t)^\dagger \mathbf{v}$ \emph{backwards in time}, using the ``initial'' condition $\mathbf{v}(T) = \mathbf{v}_0$, and then we have
+\[
+ \Phi(T, T_0)^\dagger \mathbf{v}_0 = \mathbf{v}(T_0).
+\]
+What does the adjoint equation look like? For a time-dependent background shear flow, the linearized forward/direct equation for a perturbation $\mathbf{u}_p$ is given by
+\[
+ \frac{\partial \mathbf{u}_p}{\partial t} + (\mathbf{U}(t) \cdot \nabla) \mathbf{u}_p = -\nabla p_p - (\mathbf{u}_p \cdot \nabla) \mathbf{u}(t) + Re^{-1} \nabla^2 \mathbf{u}_p.
+\]
+The adjoint (linearized) Navier--Stokes equation is then
+\[
+ -\frac{\partial \mathbf{u}_a}{\partial t} = \boldsymbol\Omega \times \mathbf{u}_a - \nabla \times (\mathbf{U} \times \mathbf{u}_a) - \nabla p_a + Re^{-1} \nabla^2 \mathbf{u}_a,
+\]
+where we again have
+\[
+ \nabla \cdot \mathbf{u}_a = 0,\quad \boldsymbol\Omega = \nabla \times \mathbf{U}.
+\]
+This PDE is ill-posed if we wanted to integrate it forwards in time, but that does not concern us, because to compute $\Phi^\dagger$, we have to integrate it \emph{backwards} in time.
+
+Thus, to find the maximal transient mode, we have to run the \term{direct-adjoint loop}.
+\[
+ \begin{tikzcd}
+ \mathbf{u}_p(T_0) \ar[r, "\Phi"] & \mathbf{u}_p(T) \ar[d]\\
+ \Phi^\dagger \Phi (\mathbf{u}_p(T_0)) \ar[u, dashed] & \mathbf{u}_p(T) \ar[l, "\Phi^\dagger"]
+ \end{tikzcd},
+\]
+and keep running this until it converges.
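The loop can be sketched on a toy problem (my own construction, with a random matrix in place of the linearized Navier--Stokes operator). For $\frac{\d \mathbf{u}}{\d t} = L\mathbf{u}$ with constant $L$, the propagator is $\Phi = e^{LT}$; applying $\Phi$ is the forward solve and applying $\Phi^\dagger$ the backward adjoint solve, and iterating $\Phi^\dagger \Phi$ converges to the optimal initial condition, whose gain is the largest singular value of $\Phi$ squared.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 6, 1.0
L = rng.standard_normal((n, n)) - 2.0 * np.eye(n)   # decaying but non-normal

# Matrix exponential via diagonalization (a generic L is diagonalizable).
lam, V = np.linalg.eig(L)
Phi = (V @ np.diag(np.exp(lam * T)) @ np.linalg.inv(V)).real

u = rng.standard_normal(n)
for _ in range(10000):           # the direct-adjoint loop
    u = Phi.T @ (Phi @ u)        # forward propagate, then adjoint back
    u /= np.linalg.norm(u)       # renormalize the initial condition

G = np.linalg.norm(Phi @ u)**2   # optimal gain at the target time T
sigma_max = np.linalg.svd(Phi, compute_uv=False)[0]
```

The converged gain matches $\sigma_{max}^2$ from a direct SVD of the propagator, which is exactly the connection to the SVD calculation mentioned earlier.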
+
+Using this analysis, we can find some interesting results. For example, in a shear layer flow, 3-dimensional modes with both streamwise and spanwise perturbations exhibit the largest transient growth. However, in the long run, the Kelvin-Helmholtz modes dominate.
+
+%Why does this work? And how does this relate to the toy problems we were looking at before? Let's think about a simple linear version first with energy norms:
+%\[
+% \frac{\partial \mathbf{q}}{\partial t} = \mathcal{L} \mathbf{q}.
+%\]
+%Then it is again straightforward to define the adjoint operator $\mathcal{L}^\dagger$, and the eigenvalues of $\mathcal{L}^\dagger$ are thus complex conjugates of the eigenvalues of $\mathcal{L}$. We denoted the associated eigenvectors of $\mathcal{L}$ as $\mathbf{v}_k^{(d)}$ and of $\mathcal{L}^\dagger$ as $\mathbf{v}_k^{(a)}$. We have biorthogonality property
+%\[
+% (\mathbf{v}_i^{(a)}, \mathbf{v}_j^{(d)} = 0 \text{ if }i \not= j.
+%\]
+%The normalized initial condition approximation is then
+%\[
+% \mathbf{q}(0) = \sum_{k = 1}^n \alpha_k \mathbf{v}_k^{(d)},\quad \alpha_k = \frac{(\mathbf{v}_k^{(a)}, \mathbf{q})}{(\mathbf{v}_k^{(a)}, \mathbf{v}_k^{(d)}}.
+%\]
+%If there is a dominant mode within the system $\mathbf{v}_1^{(d)}$, and the system is strongly non-normal so that $(\mathbf{v}_1^{(a)}, \mathbf{v}_1^{(d)}) \ll 1$ (as in the toy problem), then $\alpha_k$ would be very large, and so there can be very strong amplification. In fact, the optimal initial condition should be the equivalent eigenvector of the adjoint, $\mathbf{v}_k^{(a)}$.
+%
+%But the adjoint is even more attractive if we pose it a variational way\ldots
+
+\subsubsection*{Variational formulation}
+We can also use variational calculus to find the maximally unstable mode. We think of the calculation as a \emph{constrained optimization} problem with the following requirements:
+\begin{enumerate}
+ \item For all $T_0 \leq t \leq T$, we have $\frac{\partial \mathbf{q}}{\partial t} = \mathcal{D}_t \mathbf{q} = \mathcal{L} \mathbf{q}$.
+ \item The initial state is given by $\mathbf{q}(0) = \mathbf{q}_0$.
+\end{enumerate}
+We will need Lagrange multipliers to impose these constraints, and so the augmented Lagrangian is
+\[
+ \mathcal{G} = \frac{\bra \mathbf{q}_T, \mathbf{q}_T\ket}{\bra \mathbf{q}_0, \mathbf{q}_0\ket} - \int_0^T \bra \tilde{\mathbf{q}}, (\mathcal{D}_t - \mathcal{L}) \mathbf{q}\ket \;\d t + \bra \tilde{\mathbf{q}}_0, \mathbf{q}(0) - \mathbf{q}_0\ket.
+\]
+Taking variations with respect to $\tilde{\mathbf{q}}$ and $\tilde{\mathbf{q}}_0$ recovers the evolution equation and the initial condition. Integrating by parts, the integral term can be rewritten as
+\[
+ \int_0^T \bra \tilde{\mathbf{q}}, (\mathcal{D}_t - \mathcal{L})\mathbf{q}\ket \;\d t = -\int_0^T \bra \mathbf{q}, (\mathcal{D}_t + \mathcal{L}^\dagger) \tilde{\mathbf{q}}\ket \;\d t - \bra \tilde{\mathbf{q}}_0, \mathbf{q}_0\ket + \bra \tilde{\mathbf{q}}_T, \mathbf{q}_T\ket.
+\]
+Now if we take a variation with respect to $\mathbf{q}$, we see that $\tilde{\mathbf{q}}$ has to satisfy
+\[
+ (\mathcal{D}_t + \mathcal{L}^\dagger) \tilde{\mathbf{q}} = 0.
+\]
+So the Lagrange multiplier within such a variational problem evolves according to the adjoint equation!
+
+Taking appropriate adjoints, we can write
+\[
+ \mathcal{G} = \frac{\bra \mathbf{q}_T, \mathbf{q}_T\ket}{\bra \mathbf{q}_0, \mathbf{q}_0\ket} + \int_0^T \bra \mathbf{q}, (\mathcal{D}_t + \mathcal{L}^\dagger) \tilde{\mathbf{q}}\ket \;\d t + \bra \tilde{\mathbf{q}}_0, \mathbf{q}_0\ket - \bra \tilde{\mathbf{q}}_T, \mathbf{q}_T\ket + \text{boundary terms}.
+\]
+But if we take variations with respect to our initial conditions, then $\frac{\delta \mathcal{G}}{\delta \mathbf{q}_0} = 0$ gives
+\[
+ \tilde{\mathbf{q}}_0 = \frac{2 \bra \mathbf{q}_T, \mathbf{q}_T\ket}{\bra \mathbf{q}_0, \mathbf{q}_0\ket^2} \mathbf{q}_0.
+\]
+Similarly, setting $\frac{\delta \mathcal{G}}{\delta \mathbf{q}_T} = 0$, we get
+\[
+ \tilde{\mathbf{q}}_T = \frac{2}{\bra \mathbf{q}_0, \mathbf{q}_0\ket }\mathbf{q}_T.
+\]
+Applying $\bra \ph, \mathbf{q}_0\ket$ to the first equation and $\bra \ph, \mathbf{q}_T\ket$ to the second, we find that
+\[
+ \bra \tilde{\mathbf{q}}_0, \mathbf{q}_0\ket = \bra \tilde{\mathbf{q}}_T, \mathbf{q}_T\ket.
+\]
+This is the conservation relation we previously derived for the adjoint equation, and it provides a consistency condition for checking whether we have found the optimal solution.
+
+Previously our algorithm used power iteration, which requires us to integrate forwards and backwards many times. However, we now have \emph{gradient information} for gain:
+\[
+ \frac{\delta \mathcal{G}}{\delta \mathbf{q}_0} = \tilde{\mathbf{q}}_0 - \frac{2 \bra \mathbf{q}_T, \mathbf{q}_T\ket}{\bra \mathbf{q}_0, \mathbf{q}_0\ket^2}\mathbf{q}_0.
+\]
+This allows us to exploit optimization algorithms (steepest descent, conjugate gradient, etc.), which offer the opportunity for faster convergence.
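A toy sketch of using this gradient information (my own construction: a random matrix as propagator and the Euclidean norm as energy norm). We do simple gradient ascent on the gain $G(\mathbf{q}_0) = \|\Phi \mathbf{q}_0\|^2/\|\mathbf{q}_0\|^2$, with $\Phi^T$ playing the role of the adjoint propagator, instead of pure power iteration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
Phi = rng.standard_normal((n, n))

def gain(q):
    return np.linalg.norm(Phi @ q)**2 / np.linalg.norm(q)**2

q = rng.standard_normal(n)
step = 0.005                     # small fixed step (arbitrary choice)
for _ in range(20000):
    # Gradient of the gain at q, as in the expression above.
    g = 2.0 * (Phi.T @ (Phi @ q) - gain(q) * q) / (q @ q)
    q = q + step * g
    q /= np.linalg.norm(q)       # keep the initial condition normalized

sigma_max = np.linalg.svd(Phi, compute_uv=False)[0]
```

The ascent converges to the same optimum as the direct-adjoint power iteration, namely $G = \sigma_{max}^2$; in practice one would use a smarter line search or conjugate gradients rather than a fixed step.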
+
+There are many ways we can modify this. One way is to modify the inner product. Note that we actually had to use the inner product in three occasions:
+\begin{enumerate}
+ \item Inner product in the objective functional $\mathcal{J}$.
+ \item Inner product in initial normalization/constraint of the state vector.
+ \item Inner product in the definition of adjoint operator.
+\end{enumerate}
+We used the same energy inner product for all of these, but there is no reason we have to. However, there are strong arguments that an energy norm is natural for inner product 2. It is a wide open research question as to whether there is an appropriate choice for inner product 3. On the other hand, variations of inner product 1 have been widely explored.
+
+As an example, consider $p$-norms
+\[
+ \mathcal{J} = \left( \frac{1}{V_\Omega} \int_\Omega e(x, T)^p \;\d \Omega\right)^{1/p},\quad e(x, T) = \frac{1}{2} |u(x, T)|^2. % tilde is dagger everywhere
+\]
+This has the attraction that for large values of $p$, this would be dominated by peak values of $e$, not average.
+
+When we set $p = 1$, i.e.\ we use the usual energy norm, then we get a beautiful example of the Orr mechanism, in perfect agreement with what SVD tells us. For, say, $p = 50$, we get more exotic center/wall modes. % insert pictures ?
+
+%Replacing the energy term in $\mathcal{G}$ with $\mathcal{J}$, we obtain
+%Then we have
+%\[
+% \mathcal{M} = \mathcal{J} - \int_0^T \left\bra \mathbf{u}^\dagger, \left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} + \mathbf{u} \cdot \nabla \mathbf{U} + \nabla p - \frac{1}{Re} \nabla^2 \mathbf{u}\right)\right\ket - \bra \mathbf{p}^\dagger, \nabla \cdot \mathbf{u}\ket \;\d t - \bra \mathbf{u}^\dagger_0, \mathbf{u}_0 - \mathbf{u}(0)\ket.
+%\]
+%Note that the pressure is a Lagrange multiplier.
+%
+%Here integration by parts with respect to time will lead to a term $\bra \tilde{\mathbf{u}} (0), \mathbf{u}(0)\ket - \bra \tilde{\mathbf{u}} (T), \mathbf{u}(T)\ket$. Variations with respect to $\mathbf{u}(0)$ , $\mathbf{u}(T)$ lead to interesting alternate conditions. We have
+%\[
+% \frac{\delta \mathcal{M}}{\delta \mathbf{u}_0} = \tilde{\mathbf{u}}_0.
+%\]
+%If we fix the norm of $\mathbf{u}_0$, then given $\frac{\delta \mathcal{M}}{\delta \mathbf{u}_0}$, we should ``improve'' our solution by letting $\mathbf{u}_0$ move in the direction of the projection of $\frac{\delta \mathcal{M}}{\delta \mathbf{u}_0}$ to the sphere of fixed norm. So the optimal solution occurs when $\tilde{\mathbf{u}}_0, \mathbf{u}_0$ are perfectly parallel.
+%
+%If we set $\frac{\delta \mathcal{M}}{\delta \mathbf{u}_T} = 0$, then we have
+%\[
+% \tilde{\mathbf{u}} (x, T) = \left(\frac{1}{V_\Omega} \int_\Omega e(x, T)^p \;\d \Omega\right)^{1/p - 1} e(x, T)^{p - 1} \mathbf{u}(x, T).
+%\]
+%So although there is a term $\bra \mathbf{u}^\dagger(0), \mathbf{u}(0)\ket - \bra \tilde{\mathbf{u}}(T), \mathbf{u}(T)\ket$, it doesn't affect the value of $\mathcal{M}$. Iterative process leads to optimal solutions for different objective functional.
+%
+%Therefore convergence occurs when $\tilde{\mathbf{u}}_0$, $\mathbf{u}_0$ are perfectly parallel if $\mathbf{u}_0$ is normalized.
+
+\subsubsection*{Non-linear adjoints}
+In the variational formulation, there is nothing in the world that stops us from using a non-linear evolution equation! We shall see that this results in slightly less pleasant formulas, but it can still be done.
+
+Consider plane Couette flow with $Re = 1000$, and allow arbitrary amplitude in perturbation
+\[
+ \mathbf{u}_{tot} = \mathbf{U} + \mathbf{u},\quad \partial_t \mathbf{u} + (\mathbf{u} + \mathbf{U}) \cdot \nabla (\mathbf{u} + \mathbf{U}) = - \nabla p + Re^{-1} \nabla^2 \mathbf{u},
+\]
+where $\mathbf{U} = y\hat{x}$. We can define a variational problem with Lagrangian
+\[
+ \mathcal{L} = \frac{E(T)}{E_0} - [\partial_t \mathbf{u} + N(\mathbf{u}) + \nabla p, \mathbf{v}] - [\nabla \cdot \mathbf{u}, q] - \left(\frac{1}{2} \bra \mathbf{u}_0, \mathbf{u}_0\ket - E_0\right) c - \bra \mathbf{u}(0) - \mathbf{u}_0, \mathbf{v}_0\ket,
+\]
+where
+\[
+ N(u_i) = U_j \partial_j u_i + u_j \partial_j U_i + u_j \partial_j u_i - \frac{1}{Re} \partial_j \partial_j u_i,\quad [\mathbf{v}, \mathbf{u}] = \frac{1}{T} \int_0^T \bra \mathbf{v}, \mathbf{u}\ket\;\d t.
+\]
+Variations with respect to the direct variable $\mathbf{u}$ once again define a non-linear adjoint equation
+\[
+ \frac{\delta \mathcal{L}}{\delta \mathbf{u}} = \partial_t \mathbf{v} + N^\dagger (\mathbf{v}, \mathbf{u}) + \nabla q + \left.\left(\frac{\mathbf{u}}{E_0} - \mathbf{v}\right)\right|_{t = T} + (\mathbf{v} - \mathbf{v}_0)|_{t = 0} = 0,
+\]
+where
+\[
+ N^\dagger (v_i, \mathbf{u}) = \partial_j (u_j v_i) - v_j \partial_i u_j + \partial_j (U_j v_i) - v_j \partial_i U_j + \frac{1}{Re} \partial_j \partial_j v_i.
+\]
+We also have
+\[
+ \frac{\delta \mathcal{L}}{\delta p} = \nabla \cdot \mathbf{v} = 0,\quad \frac{\delta \mathcal{L}}{\delta \mathbf{u}_0} = \mathbf{v}_0 - c \mathbf{u}_0 = 0.
+\]
+Note that the equation for the adjoint variable $\mathbf{v}$ depends on the direct variable $\mathbf{u}$, but is linear in $\mathbf{v}$. Computationally, this means that we need to remember our solution for $\mathbf{u}$ when we do our adjoint loop. If $T$ is large, then it may not be feasible to store all information about $\mathbf{u}$ across the whole period, as that is too much data. Instead, the method of \term{checkpointing} is used:
+\begin{enumerate}
+ \item Pick ``checkpoints'' $0 = T_0 < \cdots < T_K = T$.
+ \item When integrating $\mathbf{u}$ forwards, we remember high resolution data for $\mathbf{u}(x, T_k)$ at each checkpoint $T_k$.
+ \item When integrating $\mathbf{v}$ backwards in the interval $(T_{k - 1}, T_k)$, we use the data remembered at $T_{k - 1}$ to re-integrate $\mathbf{u}$, obtaining detailed information about $\mathbf{u}$ in the interval $(T_{k - 1}, T_k)$.
+\end{enumerate}
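The storage-for-recomputation trade can be sketched as follows (assumptions: a scalar model ODE $\dot{u} = u - u^3$ standing in for the Navier--Stokes solver, explicit Euler stepping, and a schematic adjoint step that is linear in $v$ but consumes the stored direct solution):

```python
def step_forward(u, dt):
    # one explicit Euler step of the direct (non-linear) equation du/dt = u - u^3
    return u + dt * (u - u**3)

def step_adjoint(v, u, dt):
    # one backwards step of the adjoint equation; linear in v, but it
    # needs the direct solution u at the corresponding time
    return v + dt * (1.0 - 3.0 * u**2) * v

def adjoint_loop(u0, vT, n_steps, n_checkpoints, dt=1e-2):
    per = n_steps // n_checkpoints
    # forward sweep: store u only at the checkpoints T_0 < ... < T_K
    checkpoints = [u0]
    u = u0
    for _ in range(n_checkpoints):
        for _ in range(per):
            u = step_forward(u, dt)
        checkpoints.append(u)
    # backward sweep: on each interval (T_{k-1}, T_k), re-integrate u from
    # the stored checkpoint, then step v backwards through the interval
    v = vT
    for k in reversed(range(n_checkpoints)):
        traj = [checkpoints[k]]
        for _ in range(per):
            traj.append(step_forward(traj[-1], dt))
        for u_here in reversed(traj[:-1]):
            v = step_adjoint(v, u_here, dt)
    return v
```

Storing $K$ checkpoints instead of all $n$ time steps costs one extra forward integration, but reduces the memory footprint from $O(n)$ to roughly $O(K + n/K)$, and the result is identical to the fully-stored computation.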
+This powerful algorithmic approach allows us to identify \emph{minimal} seed of turbulence, i.e.\ the minimal, finite perturbation required to enter a turbulence mode. Note that this is something that cannot be understood by linear approximations! This is rather useful, because in real life, it allows us to figure out how to modify our system to reduce the chance of turbulence arising.
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/local_fields.tex b/books/cam/III_M/local_fields.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3307bddb3f214e778eda6cb7f074328689a18525
--- /dev/null
+++ b/books/cam/III_M/local_fields.tex
@@ -0,0 +1,4434 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2016}
+\def\nlecturer {H.\ C.\ Johansson}
+\def\ncourse {Local Fields}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+The $p$-adic numbers $\Q_p$ (where $p$ is any prime) were invented by Hensel in the late 19th century, with a view to introduce function-theoretic methods into number theory. They are formed by completing $\Q$ with respect to the $p$-adic absolute value $|-|_p$, defined for non-zero $x \in \Q$ by $|x|_p = p^{-n}$, where $x = p^n a/b$ with $a, b, n \in \Z$ and $a$ and $b$ are coprime to $p$. The $p$-adic absolute value allows one to study congruences modulo all powers of $p$ simultaneously, using analytic methods. The concept of a local field is an abstraction of the field $\Q_p$, and the theory involves an interesting blend of algebra and analysis. Local fields provide a natural tool to attack many number-theoretic problems, and they are ubiquitous in modern algebraic number theory and arithmetic geometry.
+
+Topics likely to be covered include:
+\begin{itemize}[label={}]
+ \item The $p$-adic numbers. Local fields and their structure.
+ \item Finite extensions, Galois theory and basic ramification theory.
+ \item Polynomial equations; Hensel's Lemma, Newton polygons.
+ \item Continuous functions on the $p$-adic integers, Mahler's Theorem.
+ \item Local class field theory (time permitting).
+\end{itemize}
+
+\subsubsection*{Pre-requisites}
+Basic algebra, including Galois theory, and basic concepts from point set topology and metric spaces. Some prior exposure to number fields might be useful, but is not essential.
+}
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+What are local fields? Suppose we are interested in some basic number theoretic problem. Say we have a polynomial $f(x_1, \cdots, x_n) \in \Z[x_1, \cdots, x_n]$. We want to look for solutions $\mathbf{a} \in \Z^n$, or show that there are no solutions at all. We might try to view this polynomial as a real polynomial, look at its roots, and see if they are integers. In lucky cases, we might be able to show that there are no real solutions at all, and conclude that there cannot be any solutions at all.
+
+On the other hand, we can try to look at it modulo some prime $p$. If there are no solutions mod $p$, then there cannot be any solution. But sometimes mod $p$ is not enough. We might want to look at it mod $p^2$, or $p^3$, or \ldots. One important application of local fields is that we can package all this information together. In this course, we are not going to study the number theoretic problems, but just look at the properties of the local fields for their own sake.
+
+Throughout this course, all rings will be commutative with unity, unless otherwise specified.
+\section{Basic theory}
+We are going to start by making loads of definitions, which you may or may not have seen before.
+
+\subsection{Fields}
+
+\begin{defi}[Absolute value]\index{absolute value}
+ Let $K$ be a field. An \emph{absolute value} on $K$ is a function $|\ph|: K \to \R_{\geq 0}$ such that
+ \begin{enumerate}
+ \item $|x| = 0$ iff $x = 0$;
+ \item $|xy| = |x||y|$ for all $x, y \in K$;
+ \item $|x + y| \leq |x| + |y|$.\index{triangle inequality}
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[Valued field]\index{valued field}
+ A \emph{valued field} is a field with an absolute value.
+\end{defi}
+
+\begin{eg}
+ The rationals, reals and complex numbers with the usual absolute values are valued fields.
+\end{eg}
+
+\begin{eg}[Trivial absolute value]
+ The \emph{trivial absolute value}\index{trivial absolute value} on a field $K$ is the absolute value given by
+ \[
+ |x| =
+ \begin{cases}
+ 1 & x \not= 0\\
+ 0 & x = 0
+ \end{cases}.
+ \]
+\end{eg}
+The only reason we mention the trivial absolute value here is that from now on, we will assume that the absolute values are \emph{not} trivial, because trivial absolute values are boring and break things.
+
+There are some familiar basic properties of the absolute value such as
+\begin{prop}
+ $||x| - |y|| \leq |x - y|$. Here the outer absolute value on the left hand side is the usual absolute value of $\R$, while the others are the absolute values of the relevant field.
+\end{prop}
+
+An absolute value defines a metric $d(x, y) = |x - y|$ on $K$.
+
+\begin{defi}[Equivalence of absolute values]\index{absolute values!equivalence}\index{equivalent absolute values}
+ Let $K$ be a field, and let $|\ph|, |\ph|'$ be absolute values. We say they are \emph{equivalent} if they induce the same topology.
+\end{defi}
+
+\begin{prop}
+ Let $K$ be a field, and $|\ph|, |\ph|'$ be absolute values on $K$. Then the following are equivalent.
+ \begin{enumerate}
+ \item $|\ph|$ and $|\ph|'$ are equivalent
+ \item $|x| < 1$ implies $|x|' < 1$ for all $x \in K$
+ \item There is some $s \in \R_{> 0}$ such that $|x|^s = |x|'$ for all $x \in K$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ (i) $\Rightarrow$ (ii) and (iii) $\Rightarrow$ (i) are easy exercises. Assume (ii), and we shall prove (iii). First observe that since $|x^{-1}| = |x|^{-1}$, we know $|x| > 1$ implies $|x|' > 1$, and hence $|x| = 1$ implies $|x|' = 1$. To show (iii), we have to show that the ratio $\frac{\log |x|}{\log |x|'}$ is independent of $x$.
+
+ Suppose not. We may assume
+ \[
+ \frac{\log |x|}{\log |x|'} < \frac{\log |y|}{\log |y|'},
+ \]
+ and moreover the logarithms are positive. Then there are $m, n \in \Z_{>0}$ such that
+ \[
+ \frac{\log |x|}{\log |y|} < \frac{m}{n} < \frac{\log |x|'}{\log |y|'}.
+ \]
+ Then rearranging implies
+ \[
+ \left|\frac{x^n}{y^m}\right| < 1 < \left|\frac{x^n}{y^m}\right|',
+ \]
+ a contradiction.
+\end{proof}
+
+\begin{ex}
+ Let $K$ be a valued field. Then equivalent absolute values induce the same \emph{completion} $\hat{K}$ of $K$, and $\hat{K}$ is a valued field with an absolute value extending $|\ph|$.
+\end{ex}
+
+In this course, we are not going to be interested in the usual absolute values. Instead, we are going to consider some really weird ones, namely \emph{non-archimedean} ones.
+\begin{defi}[Non-archimedean absolute value]\index{non-archimedean absolute value}\index{absolute value!non-archimedean}\index{archimedean absolute value}\index{absolute value!archimedean}
+ An absolute value $|\ph|$ on a field $K$ is called \emph{non-archimedean} if $|x + y| \leq \max(|x|, |y|)$. This condition is called the \emph{strong triangle inequality}\index{strong triangle inequality}.
+
+ An absolute value which isn't non-archimedean is called \emph{archimedean}.
+\end{defi}
+Metrics satisfying $d(x, z) \leq \max(d(x, y), d(y, z))$ are often known as \emph{ultrametrics}\index{ultrametric}.
+
+\begin{eg}
+ $\Q$, $\R$ and $\C$ under the usual absolute values are archimedean.
+\end{eg}
+
+In this course, we will only consider non-archimedean absolute values. Thus, from now on, unless otherwise mentioned, an absolute value is assumed to be non-archimedean. The metric is weird!
+
+We start by proving some absurd properties of non-archimedean absolute values.
+
+Recall that the closed balls are defined by
+\[
+ B(x, r) = \{y: |x - y| \leq r\}.
+\]
+\begin{prop}
+ Let $(K, |\ph|)$ be a non-archimedean valued field, and let $x \in K$ and $r \in \R_{> 0}$. Let $z \in B(x, r)$. Then
+ \[
+ B(x, r) = B(z, r).
+ \]
+\end{prop}
+So closed balls do not have unique ``centers''. Every point can be viewed as the center.
+
+\begin{proof}
+ Let $y \in B(z, r)$. Then
+ \[
+ |x - y| = |(x - z) + (z - y)| \leq \max(|x - z|, |z - y|) \leq r.
+ \]
+ So $y \in B(x, r)$. By symmetry, $y \in B(x, r)$ implies $y \in B(z, r)$.
+\end{proof}
+
+\begin{cor}
+ Closed balls are open.
+\end{cor}
+
+\begin{proof}
+ To show that $B(x, r)$ is open, we let $z \in B(x, r)$. Then we have
+ \[
+ \{y: |y - z| < r\} \subseteq B(z, r) = B(x, r).
+ \]
+ So we know the open ball of radius $r$ around $z$ is contained in $B(x, r)$. So $B(x, r)$ is open.
+\end{proof}
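We can see this concretely. Anticipating the $p$-adic absolute value from the course description (for an integer $y$, we have $|y|_5 \leq 5^{-1}$ exactly when $5 \mid y$), the sketch below checks, within a window of integers, that every point of the closed $5$-adic ball $B(0, \frac{1}{5})$ is also a centre of it:

```python
p = 5
# In Z with the 5-adic metric, the closed ball B(0, 1/5) meets the window
# [-200, 200) in exactly the multiples of 5.
window = range(-200, 200)
ball = [y for y in window if y % p == 0]

# every z in the ball is also a "centre": B(z, 1/5) gives back the same set
for z in ball:
    assert [y for y in window if (y - z) % p == 0] == ball
```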
+
+Norms in non-archimedean valued fields are easy to compute:
+\begin{prop}
+ Let $K$ be a non-archimedean valued field, and $x, y \in K$. If $|x| > |y|$, then $|x + y| = |x|$.
+
+ More generally, if $x = \sum_{i = 0}^\infty x_i$ converges and the non-zero $|x_i|$ are distinct, then $|x| = \max |x_i|$.
+\end{prop}
+
+\begin{proof}
+ On the one hand, we have $|x + y| \leq \max\{|x|, |y|\}$. On the other hand, we have
+ \[
+ |x| = |(x + y) - y| \leq \max(|x + y|, |y|) = |x + y|,
+ \]
+ since we know that we cannot have $|x| \leq |y|$. So we must have $|x| = |x + y|$.
+\end{proof}
+
+Convergence is also easy for valued fields.
+\begin{prop}
+ Let $K$ be a valued field.
+ \begin{enumerate}
+ \item Let $(x_n)$ be a sequence in $K$. If $x_n - x_{n + 1} \to 0$, then $(x_n)$ is Cauchy.
+ \end{enumerate}
+ If we assume further that $K$ is complete, then
+ \begin{enumerate}[resume]
+ \item Let $(x_n)$ be a sequence in $K$. If $x_n - x_{n + 1} \to 0$, then $(x_n)$ converges.
+ \item Let $\sum_{n = 0}^\infty y_n$ be a series in $K$. If $y_n \to 0$, then $\sum_{n = 0}^\infty y_n$ converges.
+ \end{enumerate}
+\end{prop}
+The converses to all these are of course also true, with the usual proofs.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Pick $\varepsilon > 0$ and $N$ such that $|x_n - x_{n + 1}| < \varepsilon$ for all $n \geq N$. Then given $m\geq n \geq N$, we have
+ \begin{align*}
+ |x_m - x_n| &= |x_m - x_{m - 1} + x_{m - 1} - x_{m - 2} + \cdots - x_n| \\
+ &\leq \max(|x_m - x_{m - 1}|, \cdots, |x_{n + 1} - x_n|) \\
+ &< \varepsilon.
+ \end{align*}
+ So the sequence is Cauchy.
+ \item Follows from (1) and the definition of completeness.
+ \item Follows from the definition of convergence of a series and (2).\qedhere
+ \end{enumerate}
+\end{proof}
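For example, in $\Q_p$ we have $p^n \to 0$, so the geometric series $\sum_{n \geq 0} p^n$ converges, and its limit is $\frac{1}{1 - p}$. A quick sanity check of this, working modulo $p^N$ (a sketch; truncating mod $p^N$ is the standard way to represent $p$-adic numbers on a computer):

```python
p, N = 7, 12
modulus = p ** N

# partial sum of the geometric series 1 + p + p^2 + ..., truncated mod p^N
partial = sum(p ** n for n in range(N)) % modulus

# the claimed limit 1/(1 - p), computed as an element of Z/p^N Z
limit = pow(1 - p, -1, modulus)

assert partial == limit
```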
+
+The reason why we care about these weird non-archimedean fields is that they have very rich \emph{algebraic} structure. In particular, there is this notion of the \emph{valuation ring}.
+
+\begin{defi}[Valuation ring]\index{valuation ring}
+ Let $K$ be a valued field. Then the \emph{valuation ring} of $K$ is the open subring
+ \[
+ \mathcal{O}_K = \{x: |x| \leq 1\}.
+ \]
+\end{defi}
+
+We prove that it is actually a ring:
+\begin{prop}
+ Let $K$ be a valued field. Then
+ \[
+ \mathcal{O}_K = \{x: |x| \leq 1\}
+ \]
+ is an open subring of $K$. Moreover, for each $r \in (0, 1]$, the subsets $\{x : |x| < r\}$ and $\{x: |x| \leq r\}$ are open ideals of $\mathcal{O}_K$. Moreover, $\mathcal{O}_K^\times = \{x : |x| = 1\}$.
+\end{prop}
+Note that this is very false for usual absolute values. For example, if we take $\R$ with the usual absolute value, we have $1 \in \mathcal{O}_\R$, but $1 + 1 \not\in \mathcal{O}_\R$.
+
+\begin{proof}
+ We know that these sets are open since all balls are open.
+
+ To see $\mathcal{O}_K$ is a subring, we have $|1| = |-1| = 1$. So $1, -1 \in \mathcal{O}_K$. If $x, y \in \mathcal{O}_K$, then $|x + y| \leq \max(|x|, |y|) \leq 1$. So $x + y \in \mathcal{O}_K$. Also, $|xy| = |x||y| \leq 1 \cdot 1 = 1$. So $xy \in \mathcal{O}_K$.
+
+ That the other sets are ideals of $\mathcal{O}_K$ is checked in the same way.
+
+ To check the units, we have $x \in \mathcal{O}_K^\times \Leftrightarrow |x|, |x^{-1}| \leq 1 \Leftrightarrow |x| = |x|^{-1} = 1$.
+\end{proof}
+
+\subsection{Rings}
+
+\begin{defi}[Integral element]\index{integral element}
+ Let $R \subseteq S$ be rings and $s \in S$. We say $s$ is \emph{integral over $R$} if there is some monic $f \in R[x]$ such that $f(s) = 0$.
+\end{defi}
+
+\begin{eg}
+ Any $r \in R$ is integral (take $f(x) = x - r$).
+\end{eg}
+
+\begin{eg}
+ Take $\Z \subseteq \C$. Then $z \in \C$ is integral over $\Z$ iff it is an \term{algebraic integer} (by definition of algebraic integer). For example, $\sqrt{2}$ is an algebraic integer, but $\frac{1}{\sqrt{2}}$ is not.
+\end{eg}
+
+We would like to prove the following characterization of integral elements:
+\begin{thm}
+ Let $R \subseteq S$ be rings. Then $s_1, \cdots, s_n \in S$ are all integral iff $R[s_1, \cdots, s_n] \subseteq S$ is a finitely-generated $R$-module.
+\end{thm}
+Note that $R[s_1, \cdots, s_n]$ is by definition a finitely-generated $R$-algebra, but requiring it to be finitely-generated as a module is stronger.
+
+Here one direction is easy. It is not hard to show that if $s_1, \cdots, s_n$ are all integral, then $R[s_1, \cdots, s_n]$ is finitely-generated. However, to show the other direction, we need to find some clever trick to produce a \emph{monic} polynomial that kills the $s_i$.
+
+The trick we need is the adjugate matrix we know and love from IA Vectors and Matrices.
+
+\begin{defi}[Adjoint/Adjugate matrix]
+ Let $A = (a_{ij})$ be an $n \times n$ matrix with coefficients in a ring $R$. The \term{adjugate matrix} or \term{adjoint matrix} $A^* = (a_{ij}^*)$ of $A$ is defined by
+ \[
+ a_{ij}^* = (-1)^{i + j} \det (A_{ij}),
+ \]
+ where $A_{ij}$ is an $(n-1)\times (n-1)$ matrix obtained from $A$ by deleting the $i$th column and the $j$th row.
+\end{defi}
+
+As we know from IA, the following property holds for the adjugate matrix:
+\begin{prop}
+ For any $A$, we have $A^*A = AA^* = \det(A) I$, where $I$ is the identity matrix.
+\end{prop}
+
+With this, we can prove our claim:
+\begin{proof}[Proof of theorem]
+ Note that we can construct $R[s_1, \cdots, s_n]$ by a sequence
+ \[
+ R \subseteq R[s_1] \subseteq R[s_1, s_2] \subseteq \cdots \subseteq R[s_1, \cdots, s_n] \subseteq S,
+ \]
+ and each $s_i$ is integral over $R[s_1, \cdots, s_{i - 1}]$. Since a finite extension of a finite extension is still finite, it suffices to prove it for the case $n = 1$, and we write $s$ for $s_1$.
+
+ Suppose $f(x) \in R[x]$ is monic such that $f(s) = 0$. If $g(x) \in R[x]$, then there is some $q, r \in R[x]$ such that $g(x) = f(x)q(x) + r(x)$ with $\deg r < \deg f$. Then $g(s) = r(s)$. So any polynomial expression in $s$ can be written as a polynomial expression with degree less than $\deg f$. So $R[s]$ is generated by $1, s, \cdots, s^{\deg f - 1}$.
+
+ In the other direction, let $t_1, \cdots, t_d$ be $R$-module generators of $R[s_1, \cdots, s_n]$. We show that in fact any element of $R[s_1, \cdots, s_n]$ is integral over $R$. Consider any element $b \in R[s_1, \cdots, s_n]$. Then there is some $a_{ij} \in R$ such that
+ \[
+ b t_i = \sum_{j = 1}^d a_{ij} t_j.
+ \]
+ In matrix form, this says
+ \[
+ (bI - A)t = 0.
+ \]
+ We now multiply by $(bI - A)^*$ to obtain
+ \[
+ \det(bI - A) t_j = 0
+ \]
+ for all $j$. Now we know $1 \in R$. So $1 = \sum c_j t_j$ for some $c_j \in R$. Then we have
+ \[
+ \det(bI - A) = \det(bI - A) \sum c_j t_j = \sum c_j (\det (bI - A) t_j) = 0.
+ \]
+ Since $\det(bI - A)$ is a monic polynomial in $b$, it follows that $b$ is integral.
+\end{proof}
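A worked instance of this trick (a sketch with $R = \Z$, $S = \Z[\sqrt{2}]$, module generators $t_1 = 1$, $t_2 = \sqrt{2}$, and $b = 1 + \sqrt{2}$): multiplication by $b$ is represented by an integer matrix $A$, and $\det(xI - A) = x^2 - 2x - 1$ is a monic integer polynomial killing $b$.

```python
# elements of Z[sqrt(2)] as pairs (a, c) standing for a + c*sqrt(2)
def mul(u, v):
    return (u[0] * v[0] + 2 * u[1] * v[1], u[0] * v[1] + u[1] * v[0])

b = (1, 1)  # b = 1 + sqrt(2)

# b * t1 = 1 + sqrt(2) = 1*t1 + 1*t2,  b * t2 = 2 + sqrt(2) = 2*t1 + 1*t2
A = [[1, 1],
     [2, 1]]

# det(x I - A) = x^2 - tr(A) x + det(A): monic, with integer coefficients
tr = A[0][0] + A[1][1]                        # 2
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # -1

# check b^2 - 2b - 1 = 0 in Z[sqrt(2)], so b is integral over Z
b2 = mul(b, b)
poly_at_b = (b2[0] - tr * b[0] + det, b2[1] - tr * b[1])
assert poly_at_b == (0, 0)
```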
+
+Using this characterization, the following result is obvious:
+\begin{cor}
+ Let $R \subseteq S$ be rings. If $s_1, s_2 \in S$ are integral over $R$, then $s_1 + s_2$ and $s_1s_2$ are integral over $R$. In particular, the set $\tilde{R} \subseteq S$ of all elements in $S$ integral over $R$ is a ring, known as the integral closure of $R$ in $S$.
+\end{cor}
+
+\begin{proof}
+ If $s_1, s_2$ are integral, then $R[s_1, s_2]$ is a finitely-generated $R$-module. Since $s_1 + s_2$ and $s_1s_2$ are elements of $R[s_1, s_2]$, they are also integral over $R$.
+\end{proof}
+
+\begin{defi}[Integrally closed]
+ Given a ring extension $R \subseteq S$, we say $R$ is \term{integrally closed} in $S$ if $\tilde{R} = R$.
+\end{defi}
+
+\subsection{Topological rings}
+Recall that we previously constructed the valuation ring $\mathcal{O}_K$. Since the valued field $K$ itself has a topology, the valuation ring inherits a subspace topology. This is in fact a ring topology.
+
+\begin{defi}[Topological ring]\index{topological ring}\index{ring topology}
+ Let $R$ be a ring. A topology on $R$ is called a \emph{ring topology} if addition and multiplication are continuous maps $R \times R \to R$. A ring with a ring topology is a \emph{topological ring}.
+\end{defi}
+
+\begin{eg}
+ $\R$ and $\C$ with the usual topologies and usual ring structures are topological rings.
+\end{eg}
+
+\begin{ex}
+ Let $K$ be a valued field. Then $K$ is a topological ring. We can see this from the fact that the product topology on $K \times K$ is induced by the metric $d((x_0, y_0), (x_1, y_1)) = \max(|x_0 - x_1|, |y_0 - y_1|)$.
+\end{ex}
+
+Now if we are just randomly given a ring, there is a general way of constructing a ring topology. The idea is that we pick an ideal $I$ and declare its elements to be small. For example, in a valued field, we can pick $I = \{x \in \mathcal{O}_K: |x| < 1\}$. Now if you are not only in $I$, but in $I^2$, then you are even smaller. So we have a hierarchy of small sets
+\[
+ I \supseteq I^2 \supseteq I^3 \supseteq I^4 \supseteq\cdots
+\]
+Now to make this a topology on $R$, we say that a subset $U \subseteq R$ is open if every $x \in U$ is contained in some translation of $I^n$ (for some $n$). In other words, we need some $y \in R$ such that
+\[
+ x \in y + I^n \subseteq U.
+\]
+But since $I^n$ is additively closed, this is equivalent to saying $x + I^n \subseteq U$. So we make the following definition:
+
+\begin{defi}[$I$-adically open]\index{$I$-adically open}
+ Let $R$ be a ring and $I \subseteq R$ an ideal. A subset $U \subseteq R$ is called \emph{$I$-adically open} if for all $x \in U$, there is some $n \geq 1$ such that $x + I^n \subseteq U$.
+\end{defi}
+
+\begin{prop}
+ The set of all $I$-adically open sets form a topology on $R$, called the \term{$I$-adic topology}.
+\end{prop}
+Note that the $I$-adic topology isn't really the kind of topology we are used to thinking about, just like the topology on a valued field is also very weird. Instead, it is a ``filter'' for telling us how small things are.
+
+\begin{proof}
+ By definition, we have $\emptyset$ and $R$ are open, and arbitrary unions are clearly open. If $U, V$ are $I$-adically open, and $x \in U \cap V$, then there are $n, m$ such that $x + I^n \subseteq U$ and $x + I^m \subseteq V$. Then $x + I^{\max(m, n)} \subseteq U \cap V$.
+\end{proof}
+
+\begin{ex}
+ Check that the $I$-adic topology is a ring topology.
+\end{ex}
+
+In the special case where $I = xR$, we often call the $I$-adic topology the \term{$x$-adic topology}.
+
+Now we want to tackle the notion of completeness. We will consider the case of $I = xR$ for motivation, but the actual definition will be completely general.
+
+If we pick the $x$-adic topology, then we are essentially declaring that we take $x$ to be small. So intuitively, we would expect power series like
+\[
+ a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots
+\]
+to ``converge'', at least if the $a_i$ are ``of bounded size''. In general, the $a_i$ are ``not too big'' if $a_ix^i$ is genuinely a member of $x^i R$, as opposed to some silly thing like $x^{-i}$.
+
+As in the case of analysis, we would like to think of these infinite series as a sequence of partial sums
+\[
+ (a_0,\; a_0 + a_1 x,\; a_0 + a_1 x + a_2 x^2,\; \cdots)
+\]
+Now if we denote the limit as $L$, then we can think of this sequence alternatively as
+\[
+ (L \bmod I,\; L \bmod I^2,\; L \bmod I^3,\; \cdots).
+\]
+The key property of this sequence is that if we take $L \bmod I^k$ and reduce it mod $I^{k - 1}$, then we obtain $L \bmod I^{k - 1}$.
+
+In general, suppose we have a sequence
+\[
+ (b_n \in R/I^n)_{n = 1}^\infty
+\]
+such that $b_n \bmod I^{n - 1} = b_{n - 1}$. Then we want to say that the ring is \emph{$I$-adically complete} if every sequence satisfying this property is actually of the form
+\[
+ (L \bmod I,\; L \bmod I^2,\; L \bmod I^3,\; \cdots)
+\]
+for some $L$. Alternatively, we can take the \emph{$I$-adic completion} to be the collection of all such sequences, and then a space is $I$-adically complete if it is isomorphic to its $I$-adic completion.
+
+To do this, we need to build up some technical machinery. The kind of sequences we've just mentioned is a special case of an inverse limit.
+\begin{defi}[Inverse/projective limit]\index{inverse limit}\index{projective limit}
+ Let $R_1, R_2, \cdots$ be topological rings, with continuous homomorphisms $f_n: R_{n + 1} \to R_n$.
+ \[
+ \begin{tikzcd}
+ R_1 & R_2 \ar[l, "f_1"] & R_3 \ar[l, "f_2"] & R_4 \ar[l, "f_3"] & \ar[l] \cdots
+ \end{tikzcd}
+ \]
+ The \emph{inverse limit} or \emph{projective limit} of the $R_i$ is the ring
+ \[
+ \varprojlim R_n = \left\{(x_n) \in \prod_n R_n : f_n(x_{n + 1}) = x_n\right\},
+ \]
+ with coordinate-wise addition and multiplication, together with the subspace topology coming from the product topology of $\prod R_n$. This topology is known as the \term{inverse limit topology}.
+\end{defi}
+
+\begin{prop}
+ The inverse limit topology is a ring topology.
+\end{prop}
+
+\begin{proof}[Proof sketch]
+ We can fit the addition and multiplication maps into diagrams
+ \[
+ \begin{tikzcd}
+ \varprojlim R_n \times \varprojlim R_n \ar[r] & \varprojlim R_n\\
+ \prod R_n \times \prod R_n \ar[u, hook] \ar[r] & \prod R_n \ar[u, hook]
+ \end{tikzcd}
+ \]
+ By the definition of the subspace topology, it suffices to show that the corresponding maps on $\prod R_n$ are continuous. By the universal property of the product, it suffices to show that the composite $\prod R_n \times \prod R_n \to R_m$ is continuous for all $m$. But this map can alternatively be obtained by first projecting to $R_m \times R_m$, then doing multiplication in $R_m$, and both of these maps are continuous. So the result follows.
+\end{proof}
+
+It is easy to see the following universal property of the inverse limit topology:
+\begin{prop}
+ Giving a continuous ring homomorphism $g: S \to \varprojlim R_n$ is the same as giving a continuous ring homomorphism $g_n: S \to R_n$ for each $n$, such that each of the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ S \ar[r, "g_n"] \ar[rd, "g_{n - 1}"'] & R_n \ar[d, "f_{n - 1}"]\\
+ & R_{n - 1}
+ \end{tikzcd}
+ \]
+\end{prop}
+\begin{defi}[$I$-adic completion]
+ Let $R$ be a ring and $I$ be an ideal. The \term{$I$-adic completion} of $R$ is the topological ring
+ \[
+ \varprojlim R/I^n,
+ \]
+ where $R/I^n$ has the discrete topology, and $R/I^{n + 1} \to R/I^n$ is the quotient map. There is an evident map
+ \begin{align*}
+ \nu: R &\to \varprojlim R/I^n\\
+ r &\mapsto (r \bmod I^n)_n.
+ \end{align*}
+ This map is a continuous ring homomorphism if $R$ is given the $I$-adic topology.
+\end{defi}
+
+\begin{defi}[$I$-adically complete]\index{$I$-adically complete}
+ We say that $R$ is \emph{$I$-adically complete} if $\nu$ is a bijection.
+\end{defi}
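For example, elements of $\varprojlim \Z/p^n\Z$ are exactly such compatible sequences. The sketch below builds one digit by digit: a square root of $2$ modulo increasing powers of $7$, lifting at each stage by brute-force search over the next digit (Hensel's lemma, which we meet later, explains why each lift here exists and is unique):

```python
p = 7
seq = []   # seq[n-1] will be b_n, satisfying b_n^2 = 2 (mod p^n)
b = 0
for n in range(1, 9):
    mod = p ** n
    # choose the next p-adic digit so that the lift still squares to 2
    for digit in range(p):
        cand = b + digit * p ** (n - 1)
        if (cand * cand - 2) % mod == 0:
            b = cand
            break
    seq.append(b)

# compatibility: b_n reduces to b_{n-1} modulo p^{n-1}
for n in range(2, len(seq) + 1):
    assert seq[n - 1] % p ** (n - 1) == seq[n - 2]
```

The sequence $(3, 10, 108, \ldots)$ so obtained is an element of $\varprojlim \Z/7^n\Z$ that squares to $2$, even though $2$ has no square root in $\Z$.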
+
+\begin{ex}
+ If $\nu$ is a bijection, then $\nu$ is in fact a homeomorphism.
+\end{ex}
+
+\subsection{The \texorpdfstring{$p$}{p}-adic numbers}
+For the rest of this course, $p$ is going to be a prime number.
+
+We consider a particular case of valued fields, namely the $p$-adic numbers, and study some of its basic properties.
+
+Let $x \in \Q$ be non-zero. Then by uniqueness of factorization, we can write $x$ uniquely as
+\[
+ x =p^n \frac{a}{b},
+\]
+where $a, b, n \in \Z$, $b > 0$ and $a, b, p$ are pairwise coprime.
+
+\begin{defi}[$p$-adic absolute value]\index{$p$-adic absolute value}
+ The \emph{$p$-adic absolute value} on $\Q$ is the function $|\ph|_p: \Q \to \R_{\geq 0}$ given by
+ \[
+ |x|_p =
+ \begin{cases}
+ 0 & x = 0\\
+ p^{-n} & x = p^n \frac{a}{b}\text{ as above}
+ \end{cases}.
+ \]
+\end{defi}
+
+\begin{prop}
+ The $p$-adic absolute value is an absolute value.
+\end{prop}
+
+\begin{proof}
+ It is clear that $|x|_p = 0$ iff $x = 0$.
+
+ Suppose we have
+ \[
+ x = p^n \frac{a}{b},\quad y = p^m \frac{c}{d}.
+ \]
+ We wlog $m \geq n$. Then we have
+ \[
+ |xy|_p = \left|p^{n + m}\frac{ac}{bd}\right|_p = p^{-m-n} = |x|_p |y|_p.
+ \]
+ So this is multiplicative. Finally, we have
+ \[
+ |x + y|_p = \left|p^n \frac{ad + p^{m - n}cb}{bd}\right|_p \leq p^{-n} = \max(|x|_p, |y|_p).
+ \]
+ Note that we must have $bd$ coprime to $p$, but $ad + p^{m - n}cb$ need not be. However, any extra powers of $p$ could only decrease the absolute value, hence the above result.
+\end{proof}
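These axioms are also easy to test numerically. A sketch (representing rationals exactly with Python's \texttt{fractions} module; the helper \texttt{abs\_p} is our own, not a library function):

```python
from fractions import Fraction

def abs_p(x, p):
    # |x|_p = p^(-n) where x = p^n * a/b with a, b coprime to p; |0|_p = 0
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    n = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return Fraction(p) ** (-n)

# check multiplicativity and the strong triangle inequality on a sample
p = 5
xs = [Fraction(a, b) for a in range(-6, 7) for b in range(1, 7)]
for x in xs:
    for y in xs:
        assert abs_p(x * y, p) == abs_p(x, p) * abs_p(y, p)
        assert abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p))
```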
+
+Note that if $x \in \Z$ is an integer, then $|x|_p = p^{-n}$ iff $p^n \mid\mid x$ (we say $p^n \mid\mid x$ if $p^n \mid x$ and $p^{n + 1} \nmid x$).
+
+\begin{defi}[$p$-adic numbers]\index{$p$-adic numbers}
+ The \emph{$p$-adic numbers} $\Q_p$ is the completion of $\Q$ with respect to $|\ph|_p$.
+\end{defi}
+
+\begin{defi}[$p$-adic integers]\index{$p$-adic integers}
+ The valuation ring
+ \[
+ \Z_p = \{x \in \Q_p: |x|_p \leq 1\}
+ \]
+ is the \emph{$p$-adic integers}.
+\end{defi}
+
+\begin{prop}
+ $\Z_p$ is the closure of $\Z$ inside $\Q_p$.
+\end{prop}
+
+\begin{proof}
+ If $x \in \Z$ is non-zero, then $x = p^na$ with $n \geq 0$ and $(a, p) = 1$. So $|x|_p \leq 1$. So $\Z \subseteq \Z_p$.
+
+ We now want to show that $\Z$ is dense in $\Z_p$. We know the set
+ \[
+ \Z_{(p)} = \{x \in \Q: |x|_p \leq 1\}
+ \]
+ is dense inside $\Z_p$, essentially by definition. So it suffices to show that $\Z$ is dense in $\Z_{(p)}$. We let $x \in \Z_{(p)} \setminus\{0\}$, say
+ \[
+ x = p^n \frac{a}{b},\quad n \geq 0.
+ \]
+ It suffices to find $x_i \in \Z$ such that $x_i \to \frac{1}{b}$. Then we have $p^n ax_i \to x$.
+
+ Since $(b, p) = 1$, we can find $x_i, y_i \in \Z$ such that $b x_i + p^i y_i = 1$ for all $i \geq 1$. So
+ \[
+ \left|x_i - \frac{1}{b}\right|_p = \left|\frac{1}{b}\right|_p |bx_i - 1|_p = |p^i y_i|_p \leq p^{-i} \to 0.
+ \]
+ So done.
+\end{proof}
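The approximation in the proof is easy to compute: for $b$ coprime to $p$, the integer $x_i = b^{-1} \bmod p^i$ satisfies $|x_i - \frac{1}{b}|_p \leq p^{-i}$. A sketch using Python's built-in modular inverse (the valuation helper \texttt{vp} is our own):

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a non-zero rational x
    x = Fraction(x)
    n = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return n

p, b = 5, 3
for i in range(1, 10):
    x_i = pow(b, -1, p ** i)   # b * x_i = 1 (mod p^i)
    # |x_i - 1/b|_p = |b x_i - 1|_p <= p^(-i), i.e. the valuation is >= i
    assert vp(x_i - Fraction(1, b), p) >= i
```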
+
+\begin{prop}
+ The non-zero ideals of $\Z_p$ are $p^n \Z_p$ for $n \geq 0$. Moreover,
+ \[
+ \frac{\Z}{p^n \Z} \cong \frac{\Z_p}{p^n \Z_p}.
+ \]
+\end{prop}
+
+\begin{proof}
+ Let $0 \not= I \subseteq \Z_p$ be an ideal, and pick $x \in I$ such that $|x|_p$ is maximal. This supremum exists and is attained because the possible values of the absolute values are discrete and bounded above. If $y \in I$, then by maximality, we have $|y|_p \leq |x|_p$. So we have $|yx^{-1}|_p \leq 1$. So $yx^{-1} \in \Z_p$, and this implies that $y = (yx^{-1})x \in x\Z_p$. So $I \subseteq x\Z_p$, and we obviously have $x\Z_p \subseteq I$. So we have $I = x\Z_p$.
+
 Now if $|x|_p = p^{-n}$, then we can write $x = p^n u$, where $|u|_p = 1$ and hence $u$ is invertible in $\Z_p$. So $x\Z_p = p^n \Z_p$, and hence $I = p^n \Z_p$.
+
+ To show the second part, consider the map
+ \[
+ f_n: \Z \to \frac{\Z_p}{p^n \Z_p}
+ \]
 given by the inclusion map followed by quotienting. Now $p^n \Z_p = \{x : |x|_p \leq p^{-n}\}$. So we have
+ \[
+ \ker f_n = \{x \in \Z: |x|_p \leq p^{-n}\} = p^n \Z.
+ \]
 Now since $\Z$ is dense in $\Z_p$, we know the image of $f_n$ is dense in $\Z_p/p^n \Z_p$. But $\Z_p/p^n \Z_p$ has the discrete topology. So $f_n$ is surjective. So $f_n$ induces an isomorphism $\Z/p^n \Z \cong \Z_p/p^n \Z_p$.
+\end{proof}
+
+\begin{cor}
+ $\Z_p$ is a PID with a unique prime element $p$ (up to units).
+\end{cor}
+This is pretty much the point of the $p$-adic numbers --- there are a lot of primes in $\Z$, and by passing on to $\Z_p$, we are left with just one of them.
+
+\begin{prop}
+ The topology on $\Z$ induced by $|\ph|_p$ is the $p$-adic topology (i.e.\ the $p\Z$-adic topology).
+\end{prop}
+
+\begin{proof}
+ Let $U \subseteq \Z$. By definition, $U$ is open wrt $|\ph|_p$ iff for all $x \in U$, there is an $n\in \N$ such that
+ \[
+ \{y \in \Z: |y - x|_p \leq p^{-n}\} \subseteq U.
+ \]
+ On the other hand, $U$ is open in the $p$-adic topology iff for all $x \in U$, there is some $n \geq 0$ such that $x + p^n \Z \subseteq U$. But we have
+ \[
+ \{y \in \Z: |y - x|_p \leq p^{-n}\} = x + p^n \Z.
+ \]
+ So done.
+\end{proof}
+
+\begin{prop}
+ $\Z_p$ is $p$-adically complete and is (isomorphic to) the $p$-adic completion of $\Z$.
+\end{prop}
+
+\begin{proof}
+ The second part follows from the first as follows: we have the maps
+ \[
+ \begin{tikzcd}
 \Z_p \ar[r, "\nu"] & \varprojlim \Z_p/(p^n \Z_p) & \varprojlim \Z/(p^n\Z) \ar[l, "(f_n)_n"']
+ \end{tikzcd}
+ \]
 We know the map induced by $(f_n)_n$ is an isomorphism. So we just have to show that $\nu$ is an isomorphism.
+
+ To prove the first part, we have $x \in \ker \nu$ iff $x \in p^n \Z_p$ for all $n$ iff $|x|_p \leq p^{-n}$ for all $n$ iff $|x|_p = 0$ iff $x = 0$. So the map is injective.
+
+ To show surjectivity, we let
+ \[
 (z_n)_n \in \varprojlim \Z_p/p^n \Z_p.
+ \]
+ We define $a_i \in \{0, 1, \cdots, p - 1\}$ recursively such that
+ \[
+ x_n = \sum_{i = 0}^{n - 1} a_i p^i
+ \]
+ is the unique representative of $z_n$ in the set of integers $\{0, 1, \cdots, p^n - 1\}$. Then
+ \[
+ x = \sum_{i = 0}^\infty a_i p^i
+ \]
 exists in $\Z_p$ and satisfies $x \equiv x_n \equiv z_n \pmod{p^n}$ for all $n \geq 0$. So $\nu(x) = (z_n)_n$. So the map is surjective, and hence $\nu$ is bijective.
+\end{proof}
+
+\begin{cor}
+ Every $a \in \Z_p$ has a unique expansion
+ \[
 a = \sum_{i = 0}^\infty a_i p^i
 \]
 with $a_i \in \{0, \cdots, p - 1\}$.
+
 More generally, for any $a \in \Q_p^\times$, there is a unique expansion
+ \[
+ a = \sum_{i = n}^\infty a_i p^i
+ \]
+ for $a_i \in \{0, \cdots, p - 1\}$, $a_n \not= 0$ and
+ \[
+ n = - \log_p |a|_p \in \Z.
+ \]
+\end{cor}
+
+\begin{proof}
+ The second part follows from the first part by multiplying $a$ by $p^{-n}$.
+\end{proof}
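The digit expansion of an element of $\Z_p \cap \Q$ can be computed concretely: reduce $a$ modulo $p^N$ to its unique representative in $\{0, \cdots, p^N - 1\}$ and read off base-$p$ digits. A Python sketch (the helper \texttt{padic\_digits} is our own, assuming the denominator is coprime to $p$):

```python
# Sketch: first N p-adic digits of a rational a = num/den with den coprime
# to p, via the unique representative of a mod p^N in {0, ..., p^N - 1}.

def padic_digits(num, den, p, N):
    x = (num * pow(den, -1, p**N)) % p**N  # a mod p^N as an integer
    return [(x // p**i) % p for i in range(N)]

# -1 = (p-1) + (p-1)p + (p-1)p^2 + ... in Z_p:
assert padic_digits(-1, 1, 5, 4) == [4, 4, 4, 4]
# 1/(1 - p) = 1 + p + p^2 + ... (the geometric series example below):
assert padic_digits(1, 1 - 7, 7, 5) == [1, 1, 1, 1, 1]
```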
+
+\begin{eg}
+ We have
+ \[
+ \frac{1}{1 - p} = 1 + p + p^2 + p^3 + \cdots.
+ \]
+\end{eg}
+
+\section{Valued fields}
+\subsection{Hensel's lemma}
+We return to the discussion of general valued fields. We are now going to introduce an alternative to the absolute value that contains the same information, but is presented differently.
+\begin{defi}[Valuation]
+ Let $K$ be a field. A \term{valuation} on $K$ is a function $v: K \to \R \cup \{\infty\}$ such that
+ \begin{enumerate}
 \item $v(x) = \infty$ iff $x = 0$
+ \item $v(xy) = v(x) + v(y)$
+ \item $v(x + y) \geq \min\{v(x), v(y)\}$.
+ \end{enumerate}
+\end{defi}
Here we use the conventions that $r + \infty = \infty$ and $r \leq \infty$ for all $r \in \R \cup \{\infty\}$.
+
+In some sense, this definition is sort-of pointless, since if $v$ is a valuation, then the function
+\[
+ |x| = c^{-v(x)}
+\]
for any $c > 1$ is a (non-archimedean) absolute value. Conversely, if $|\ph|$ is a non-archimedean absolute value, then
+\[
+ v(x) = - \log_c |x|
+\]
+is a valuation.
+
+Despite this, sometimes people prefer to talk about the valuation rather than the absolute value, and often this is more natural. As we will later see, in certain cases, there is a canonical normalization of $v$, but there is no canonical choice for the absolute value.
+
+\begin{eg}
+ For $x \in \Q_p$, we define
+ \[
+ v_p(x) = -\log_p |x|_p.
+ \]
+ This is a valuation, and if $x \in \Z_p$, then $v_p(x) = n$ iff $p^n \mid\mid x$.
+\end{eg}
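For concreteness, the valuation $v_p$ on the non-zero rationals is easy to implement. A minimal Python sketch (the function name \texttt{v\_p} is our own):

```python
# Sketch of the p-adic valuation on non-zero rationals, so that
# |x|_p = p**(-v_p(x)).

from fractions import Fraction

def v_p(x, p):
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:   # count factors of p in the numerator
        num //= p
        v += 1
    while den % p == 0:   # subtract factors of p in the denominator
        den //= p
        v -= 1
    return v

assert v_p(50, 5) == 2                  # 5^2 || 50
assert v_p(Fraction(9, 2), 3) == 2      # 9/2 = 3^2 * (1/2)
assert v_p(Fraction(7, 2), 2) == -1     # |7/2|_2 = 2
```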
+
+\begin{eg}
 Let $k$ be a field, and define
+ \[
+ k((T)) = \left\{\sum_{i = n}^\infty a_i T^i: a_i \in k, n \in \Z\right\}.
+ \]
+ This is the field of \term{formal Laurent series} over $k$. We define
+ \[
+ v\left(\sum a_i T^i\right) = \min\{i: a_i \not= 0\}.
+ \]
+ Then $v$ is a valuation of $k((T))$.
+\end{eg}
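In the same spirit, the valuation on $k((T))$ depends only on the support of the series. A small Python sketch, representing a (truncated) Laurent series by a finite dictionary of its non-zero coefficients (a representation we choose purely for illustration):

```python
# Sketch: v(sum a_i T^i) = min{i : a_i != 0} on truncated Laurent series,
# represented as {exponent: coefficient} dictionaries.

def v(series):
    support = [i for i, a in series.items() if a != 0]
    return min(support) if support else float("inf")  # v(0) = infinity

f = {-2: 3, 0: 1, 5: 7}   # 3T^{-2} + 1 + 7T^5
g = {1: 4}                # 4T
fg = {i + j: a * b for i, a in f.items() for j, b in g.items()}
assert v(f) == -2
assert v(g) == 1
assert v(fg) == v(f) + v(g)  # multiplicativity (no cancellation here)
```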
+
+Recall that for a valued field $K$, the \term{valuation ring}\index{$\mathcal{O}_K$} is given by
+\[
+ \mathcal{O}_K = \{x \in K: |x|\leq 1\} = \{x \in K: v(x) \geq 0\}.
+\]
Since this is a subring of a field, and the absolute value is multiplicative, we notice that the units in $\mathcal{O}_K$ are exactly the elements of absolute value $1$. The remaining elements form an ideal (by the strong triangle inequality, since the absolute value is non-archimedean), and thus we have a \term{maximal ideal}\index{$\mathfrak{m}_K$}
\[
 \mathfrak{m} = \mathfrak{m}_K = \{x \in K: |x| < 1\}.
\]
+The quotient
+\[
+ k = k_K = \mathcal{O}_K/\mathfrak{m}_K
+\]
+is known as the \term{residue field}\index{$k_K$}.
+
+\begin{eg}
+ Let $K = \Q_p$. Then $\mathcal{O} = \Z_p$, and $\mathfrak{m} = p\Z_p$. So
+ \[
+ k = \mathcal{O}/\mathfrak{m} = \Z_p/p\Z_p \cong \Z/p\Z.
+ \]
+\end{eg}
+
+\begin{defi}[Primitive polynomial]\index{primitive polynomial}
+ If $K$ is a valued field and $f(x) = a_0 + a_1 x + \cdots + a_n x^n \in K[x]$ is a polynomial, we say that $f$ is \emph{primitive} if
+ \[
+ \max_i |a_i| = 1.
+ \]
+ In particular, we have $f \in \mathcal{O}[x]$.
+\end{defi}
+The point of a primitive polynomial is that such a polynomial is naturally, and non-trivially, an element of $k[x]$. Moreover, focusing on such polynomials is not that much of a restriction, since any polynomial is a constant multiple of a primitive polynomial.
+
+\begin{thm}[Hensel's lemma]\index{Hensel's lemma}
+ Let $K$ be a complete valued field, and let $f \in K[x]$ be primitive. Put $\bar{f} = f\bmod \mathfrak{m} \in k[x]$. If there is a factorization
+ \[
+ \bar{f}(x) = \bar{g}(x) \bar{h}(x)
+ \]
+ with $(\bar{g}, \bar{h}) = 1$, then there is a factorization
+ \[
+ f(x) = g(x) h(x)
+ \]
+ in $\mathcal{O}[x]$ with
+ \[
 \bar{g} = g \bmod \mathfrak{m},\quad \bar{h} = h \bmod \mathfrak{m},
+ \]
+ with $\deg g = \deg \bar{g}$.
+\end{thm}
+Note that requiring $\deg g = \deg \bar{g}$ is the best we can hope for --- we cannot guarantee $\deg h = \deg \bar{h}$, since we need not have $\deg f = \deg \bar{f}$.
+
+This is one of the most important results in the course.
+\begin{proof}
 Let $g_0, h_0$ be arbitrary lifts of $\bar{g}$ and $\bar{h}$ to $\mathcal{O}[x]$ with $\deg g_0 = \deg \bar{g}$ and $\deg h_0 = \deg \bar{h}$. Then we have
+ \[
 f \equiv g_0 h_0 \mod \mathfrak{m}.
+ \]
+ The idea is to construct a ``Taylor expansion'' of the desired $g$ and $h$ term by term, starting from $g_0$ and $h_0$, and using completeness to guarantee convergence. To proceed, we use our assumption that $\bar{g}, \bar{h}$ are coprime to find some $a, b \in \mathcal{O}[x]$ such that
+ \[
+ ag_0 + bh_0 \equiv 1 \mod \mathfrak{m}.\tag{$\dagger$}
+ \]
 It is easier to work modulo some element $\pi$ instead of modulo the ideal $\mathfrak{m}$, since we are used to doing Taylor expansion that way. Fortunately, since the equations above involve only finitely many coefficients, we can pick a $\pi \in \mathfrak{m}$ with absolute value large enough (i.e.\ close enough to $1$) such that the above equations hold with $\mathfrak{m}$ replaced with $\pi$. Thus, we can write
+ \[
+ f = g_0h_0 + \pi r_0,\quad r_0 \in \mathcal{O}[x].
+ \]
+ Plugging in $(\dagger)$, we get
+ \[
+ f = g_0 h_0 + \pi r_0 (a g_0 + b h_0) + \pi^2 (\text{something}).
+ \]
+ If we are lucky enough that $\deg r_0 b < \deg g_0$, then we group as we learnt in secondary school to get
+ \[
+ f = (g_0 + \pi r_0 b)(h_0 + \pi r_0 a) + \pi^2 (\text{something}).
+ \]
+ We can then set
+ \begin{align*}
+ g_1 &= g_0 + \pi r_0 b\\
+ h_1 &= h_0 + \pi r_0 a,
+ \end{align*}
+ and then we can write
+ \[
+ f = g_1 h_1 + \pi^2 r_1,\quad r_1 \in \mathcal{O}[x],\quad \deg g_1 = \deg \bar{g}.\tag{$*$}
+ \]
 If it is not true that $\deg r_0 b < \deg g_0$, we use the division algorithm to write
+ \[
+ r_0 b = q g_0 + p.
+ \]
+ Then we have
+ \[
+ f = g_0 h_0 + \pi ((r_0a + q)g_0 + p h_0),
+ \]
+ and then proceed as above.
+
+ Given the factorization $(*)$, we replace $r_1$ by $r_1(a g_0 + b h_0)$, and then repeat the procedure to get a factorization
+ \[
+ f \equiv g_2 h_2 \mod \pi^3,\quad \deg g_2 = \deg \bar{g}.
+ \]
 Inductively, we construct $g_k, h_k$ such that
+ \begin{align*}
+ f &\equiv g_k h_k \mod \pi^{k + 1}\\
+ g_k &\equiv g_{k - 1} \mod \pi^k\\
+ h_k &\equiv h_{k - 1} \mod \pi^k\\
+ \deg g_k &= \deg \bar{g}
+ \end{align*}
 Note that we may drop the terms of $h_k$ whose coefficients lie in $\pi^{k + 1}\mathcal{O}$, and the above equations still hold. Moreover, we can then bound $\deg h_k \leq \deg f - \deg g_k$. It now remains to set
+ \[
+ g = \lim_{k \to \infty} g_k,\quad h = \lim_{k \to \infty} h_k.\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ Let $f(x) = a_0 + a_1 x + \cdots + a_n x^n \in K[x]$ where $K$ is complete and $a_0, a_n \not= 0$. If $f$ is irreducible, then
+ \[
+ |a_\ell| \leq \max(|a_0|, |a_n|)
+ \]
+ for all $\ell$.
+\end{cor}
+
+\begin{proof}
 By scaling, we may assume wlog that $f$ is primitive. We then have to prove that $\max(|a_0|, |a_n|) = 1$. If not, let $r$ be minimal such that $|a_r| = 1$. Then $0 < r < n$. Moreover, we can write
+ \[
+ f(x) \equiv x^r (a_r + a_{r + 1}x + \cdots + a_n x^{n - r})\mod \mathfrak{m}.
+ \]
 The two factors are coprime mod $\mathfrak{m}$, since $\bar{a}_r \not= 0$. So Hensel's lemma says this lifts to a factorization of $f$ with a factor of degree $r$, contradicting irreducibility.
+\end{proof}
+
+\begin{cor}[of Hensel's lemma]
+ Let $f \in \mathcal{O}[x]$ be monic, and $K$ complete. If $f \mod \mathfrak{m}$ has a simple root $\bar{\alpha} \in k$, then $f$ has a (unique) simple root $\alpha \in \mathcal{O}$ lifting $\bar{\alpha}$.
+\end{cor}
+
+\begin{eg}
 Consider $x^{p - 1} - 1 \in \Z_p[x]$. We know $x^{p - 1} - 1$ splits into distinct linear factors in $\F_p[x]$. So all its roots lift to simple roots in $\Z_p$. So $x^{p - 1} - 1$ splits completely in $\Z_p$. So $\Z_p$ contains all $(p - 1)$th roots of unity.
+\end{eg}
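This lifting can be carried out explicitly: for $a$ coprime to $p$, the sequence $a, a^p, a^{p^2}, \cdots$ converges $p$-adically, and its limit is the root of $x^{p - 1} - 1$ lifting $a \bmod p$ (the Teichm\"uller lift). A Python sketch working modulo $p^k$ (the function name is our own):

```python
# Sketch: lift the roots of x^(p-1) - 1 from F_p to Z/p^k by iterating
# w -> w^p, which converges p-adically (Teichmuller lift).

def teichmuller(a, p, k):
    w = a % p**k
    for _ in range(k):          # k iterations suffice modulo p^k
        w = pow(w, p, p**k)
    return w

p, k = 7, 5
for a in range(1, p):
    w = teichmuller(a, p, k)
    assert w % p == a                      # w lifts a mod p
    assert (w**(p - 1) - 1) % p**k == 0    # w is a (p-1)-th root of unity
```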
+
+\begin{eg}
+ Since $2$ is a quadratic residue mod $7$, we know $\sqrt{2} \in \Q_7$.
+\end{eg}
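This lift of $\sqrt{2}$ can be computed to any precision. Below is a Python sketch of the standard Hensel/Newton iteration (our own helper, assuming $p$ odd and $a$ a quadratic residue mod $p$): starting from a root mod $p$, each Newton step roughly doubles the precision.

```python
# Sketch: Hensel lifting of a simple root of f(x) = x^2 - a over Z_p,
# via the Newton step x -> x - f(x)/f'(x) computed modulo powers of p.
# Assumes p is odd and a is a quadratic residue mod p (e.g. a = 2, p = 7).

def hensel_sqrt(a, p, k):
    # a simple root of x^2 - a mod p, found by brute force
    x = next(t for t in range(p) if (t * t - a) % p == 0)
    mod = p
    while mod < p**k:
        mod = min(mod * mod, p**k)
        # Newton step, valid since f'(x) = 2x is a unit mod p
        x = (x - (x * x - a) * pow(2 * x, -1, mod)) % mod
    return x

r = hensel_sqrt(2, 7, 6)
assert (r * r - 2) % 7**6 == 0   # a square root of 2 in Z/7^6
assert r % 7 == 3                # lifting the root 3 of x^2 - 2 mod 7
```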
+
+\subsection{Extension of norms}
+The main goal of this section is to prove the following theorem:
+\begin{thm}
+ Let $K$ be a complete valued field, and let $L/K$ be a finite extension. Then the absolute value on $K$ has a unique extension to an absolute value on $L$, given by
+ \[
+ |\alpha|_L = \sqrt[n]{|N_{L/K}(\alpha)|},
+ \]
+ where $n = [L:K]$ and $N_{L/K}$ is the field norm. Moreover, $L$ is complete with respect to this absolute value.
+\end{thm}
+
+\begin{cor}
+ Let $K$ be complete and $M/K$ be an algebraic extension of $K$. Then $|\ph|$ extends uniquely to an absolute value on $M$.
+\end{cor}
+This is since any algebraic extension is the union of finite extensions, and uniqueness means we can patch the absolute values together.
+
+\begin{cor}
+ Let $K$ be a complete valued field and $L/K$ a finite extension. If $\sigma \in \Aut(L/K)$, then $|\sigma(\alpha)|_L = |\alpha|_L$.
+\end{cor}
+
+\begin{proof}
+ We check that $\alpha \mapsto |\sigma(\alpha)|_L$ is also an absolute value on $L$ extending the absolute value on $K$. So the result follows from uniqueness.
+\end{proof}
+
Before we can prove the theorem, we need some preliminaries. Given a finite extension $L/K$, we would like to consider something more general than a field norm on $L$. Instead, we will look at norms of $L$ as a $K$-vector space. There are fewer axioms to check, so naturally there will be more choices for the norm. However, just as in the case of $\R$-vector spaces, we can show that all choices of norms are equivalent. So to prove things about the extended field norm, often we can just pick a convenient vector space norm, prove things about it, then apply equivalence.
+
+\begin{defi}[Norm on vector space]\index{norm on vector space}
+ Let $K$ be a valued field and $V$ a vector space over $K$. A \emph{norm} on $V$ is a function $\norm{\ph}: V \to \R_{\geq 0}$ such that
+ \begin{enumerate}
+ \item $\norm{x} = 0$ iff $x = 0$.
 \item $\norm{\lambda x} = |\lambda|\norm{x}$ for all $\lambda \in K$ and $x \in V$.
+ \item $\norm{x + y} \leq \max \{\norm{x}, \norm{y}\}$.
+ \end{enumerate}
+\end{defi}
+Note that our norms are also non-Archimedean.
+
+\begin{defi}[Equivalence of norms]\index{equivalence of norm}\index{norm!equivalent}
+ Let $\norm{\ph}$ and $\norm{\ph}'$ be norms on $V$. Then two norms are equivalent if they induce the same topology on $V$, i.e.\ there are $C, D > 0$ such that
+ \[
+ C\norm{x} \leq \norm{x}'\leq D\norm{x}
+ \]
+ for all $x \in V$.
+\end{defi}
+
+One of the most convenient norms we will work with is the max norm:
+\begin{eg}[Max norm]
+ Let $K$ be a complete valued field, and $V$ a finite-dimensional $K$-vector space. Let $x_1, \cdots, x_n$ be a basis of $V$. Then if
+ \[
+ x = \sum a_i x_i,
+ \]
+ then
+ \[
+ \norm{x}_{\mathrm{max}} = \max_i |a_i|
+ \]
+ defines a norm on $V$.
+\end{eg}
+
+\begin{prop}
+ Let $K$ be a complete valued field, and $V$ a finite-dimensional $K$-vector space. Then $V$ is complete under the max norm.
+\end{prop}
+
+\begin{proof}
+ Given a Cauchy sequence in $V$ under the max norm, take the limit of each coordinate to get the limit of the sequence, using the fact that $K$ is complete.
+\end{proof}
+That was remarkably easy. We can now immediately transfer this to all other norms we can think of by showing all norms are equivalent.
+
+\begin{prop}
+ Let $K$ be a complete valued field, and $V$ a finite-dimensional $K$-vector space. Then any norm $\norm{\ph}$ on $V$ is equivalent to $\norm{\ph}_{\mathrm{max}}$.
+\end{prop}
+
+\begin{cor}
+ $V$ is complete with respect to any norm.
+\end{cor}
+
+\begin{proof}
+ Let $\norm{\ph}$ be a norm. We need to find $C, D > 0$ such that
+ \[
+ C\norm{x}_{\mathrm{max}} \leq \norm{x} \leq D \norm{x}_{\mathrm{max}}.
+ \]
+ We set $D = \max_i(\norm{x_i})$. Then we have
+ \[
+ \norm{x} = \norm{\sum a_i x_i} \leq \max\left(\abs{a_i}\norm{x_i}\right) \leq (\max\abs{a_i}) D = \norm{x}_{\mathrm{max}}D.
+ \]
 We find $C$ by induction on $n$. If $n = 1$, then $\norm{x} = \norm{a_1 x_1} = \abs{a_1} \norm{x_1} = \norm{x}_{\mathrm{max}}\norm{x_1}$. So $C = \norm{x_1}$ works.
+
+ For $n \geq 2$, we let
+ \begin{align*}
+ V_i &= Kx_1 \oplus \cdots \oplus Kx_{i - 1} \oplus Kx_{i + 1} \oplus\cdots\oplus K x_n \\
+ &= \spn\{x_1, \cdots, x_{i - 1}, x_{i + 1}, \cdots, x_n\}.
+ \end{align*}
+ By the induction hypothesis, each $V_i$ is complete with respect to (the restriction of) $\norm{\ph}$. So in particular $V_i$ is closed in $V$. So we know that the union
+ \[
+ \bigcup_{i = 1}^n x_i + V_i
+ \]
+ is also closed. By construction, this does not contain $0$. So there is some $C > 0$ such that if $x \in \bigcup_{i = 1}^n x_i + V_i$, then $\norm{x} \geq C$. We claim that
+ \[
+ C\norm{x}_{\mathrm{max}} \leq \|x\|.
+ \]
+ Indeed, take $x = \sum a_i x_i \in V$. Let $r$ be such that
+ \[
+ |a_r| = \max_i (\abs{a_i}) = \norm{x}_{\mathrm{max}}.
+ \]
+ Then
+ \begin{align*}
+ \norm{x}_{\mathrm{max}}^{-1}\norm{x} &= \norm{a_r^{-1}x} \\
+ &= \norm{\frac{a_1}{a_r}x_1 + \cdots + \frac{a_{r - 1}}{a_r} x_{r - 1} + x_r + \frac{a_{r + 1}}{a_r} x_{r + 1} + \cdots + \frac{a_n}{a_r}x_n}\\
+ &\geq C,
+ \end{align*}
+ since the last vector is an element of $x_r + V_r$.
+\end{proof}
+
+Before we can prove our theorem, we note the following two easy lemmas:
+\begin{lemma}
+ Let $K$ be a valued field. Then the valuation ring $\mathcal{O}_K$ is integrally closed in $K$.
+\end{lemma}
+
+\begin{proof}
+ Let $x \in K$ and $|x| > 1$. Suppose we have $a_{n - 1}, \cdots, a_0 \in \mathcal{O}_K$. Then we have
+ \[
+ |x^n| > |a_0 + a_1x + \cdots + a_{n - 1} x^{n - 1}|.
+ \]
+ So we know
+ \[
+ x^n + a_{n - 1}x^{n - 1} + \cdots + a_1 x + a_0
+ \]
+ has non-zero norm, and in particular is non-zero. So $x$ is not integral over $\mathcal{O}_K$. So $\mathcal{O}_K$ is integrally closed.
+\end{proof}
+
+\begin{lemma}
 Let $L$ be a field and $|\ph|$ a function that satisfies all the axioms of a non-archimedean absolute value except possibly the strong triangle inequality. Then $|\ph|$ is a (non-archimedean) absolute value iff $|\alpha| \leq 1$ implies $|\alpha + 1| \leq 1$.
+\end{lemma}
+
+\begin{proof}
+ It is clear that if $|\ph|$ is an absolute value, then $|\alpha| \leq 1$ implies $|\alpha + 1| \leq 1$.
+
+ Conversely, if this holds, and $|x| \leq |y|$, then $|x/y| \leq 1$. So $|x/y + 1| \leq 1$. So $|x + y| \leq |y|$. So $|x + y| \leq \max\{|x|, |y|\}$.
+\end{proof}
+
+Finally, we get to prove our theorem.
+\begin{thm}
+ Let $K$ be a complete valued field, and let $L/K$ be a finite extension. Then the absolute value on $K$ has a unique extension to an absolute value on $L$, given by
+ \[
+ \abs{\alpha}_L = \sqrt[n]{\abs{N_{L/K}(\alpha)}},
+ \]
+ where $n = [L:K]$ and $N_{L/K}$ is the field norm. Moreover, $L$ is complete with respect to this absolute value.
+\end{thm}
+
+\begin{proof}
+ For uniqueness and completeness, if $\abs{\ph}_L$ is an absolute value on $L$, then it is in particular a $K$-norm on $L$ as a finite-dimensional vector space. So we know $L$ is complete with respect to $\abs{\ph}_L$.
+
 If $\abs{\ph}_L'$ is another absolute value extending $\abs{\ph}$, then we know $\abs{\ph}_L$ and $\abs{\ph}_L'$ are equivalent in the sense of inducing the same topology. But then from one of the early exercises, when \emph{field} absolute values are equivalent, we can find some $s > 0$ such that $\abs{\ph}_L^s = \abs{\ph}_L'$. But the two absolute values agree on $K$, and they are non-trivial. So we must have $s = 1$. So they are equal.
+
+ To show existence, we have to prove that
+ \[
+ \abs{\alpha}_L = \sqrt[n]{\abs{N_{L/K}(\alpha)}}
+ \]
+ is a norm.
+ \begin{enumerate}
+ \item If $\abs{\alpha}_L = 0$, then $N_{L/K}(\alpha) = 0$. This is true iff $\alpha = 0$.
 \item The multiplicativity of $\abs{\ph}_L$ follows from the multiplicativity of $N_{L/K}$, $\abs{\ph}$ and $\sqrt[n]{\ph}$.
+ \end{enumerate}
+ To show the strong triangle inequality, it suffices to show that $|\alpha|_L \leq 1$ implies $|\alpha + 1|_L \leq 1$.
+
+ Recall that
+ \[
+ \mathcal{O}_L = \{\alpha \in L: \abs{\alpha}_L \leq 1\} = \{\alpha \in L: N_{L/K}(\alpha) \in \mathcal{O}_K\}.
+ \]
+ We claim that $\mathcal{O}_L$ is the integral closure of $\mathcal{O}_K$ in $L$. This implies what we want, since the integral closure is closed under addition (and $1$ is in the integral closure).
+
+ Let $\alpha \in \mathcal{O}_L$. We may assume $\alpha \not= 0$, since that case is trivial. Let the minimal polynomial of $\alpha$ over $K$ be
+ \[
+ f(x) = a_0 + a_1x + \cdots + a_{n - 1}x^{n - 1} + x^n\in K[x].
+ \]
 We need to show that $a_i \in \mathcal{O}_K$ for all $i$, i.e.\ $|a_i| \leq 1$ for all $i$. Since $f$ is irreducible and monic, the previous corollary of Hensel's lemma gives, for each $i$,
 \[
 |a_i| \leq \max(|a_0|, |a_n|) = \max(|a_0|, 1).
 \]
 By general properties of the field norm, there is some $m \in \Z_{\geq 1}$ such that $N_{L/K}(\alpha) = \pm a_0^m$, and since $\alpha \in \mathcal{O}_L$, we have $|a_0| = \abs{N_{L/K}(\alpha)}^{1/m} \leq 1$. So we have
 \[
 |a_i| \leq \max\left(\abs{N_{L/K}(\alpha)}^{1/m}, 1\right) = 1.
 \]
+ So $f \in \mathcal{O}_K[x]$. So $\alpha$ is integral over $\mathcal{O}_K$.
+
+ On the other hand, suppose $\alpha$ is integral over $\mathcal{O}_K$. Let $\bar{K}/K$ be an algebraic closure of $K$. Note that
+ \[
+ N_{L/K}(\alpha) = \left(\prod_{\sigma: L \hookrightarrow \bar{K}} \sigma(\alpha)\right)^d,
+ \]
+ for some $d \in \Z_{\geq 1}$, and each $\sigma(\alpha)$ is integral over $\mathcal{O}_K$, since $\alpha$ is (apply $\sigma$ to the minimal polynomial). This implies that $N_{L/K}(\alpha)$ is integral over $\mathcal{O}_K$ (and lies in $K$). So $N_{L/K}(\alpha) \in \mathcal{O}_K$ since $\mathcal{O}_K$ is integrally closed in $K$.
+\end{proof}
+
+\begin{cor}[of the proof]
+ Let $K$ be a complete valued field, and $L/K$ a finite extension. We equip $L$ with $|\ph|_L$ extending $|\ph|$ on $K$. Then $\mathcal{O}_L$ is the integral closure of $\mathcal{O}_K$ in $L$.
+\end{cor}
+
+\subsection{Newton polygons}
We now take a small digression on Newton polygons. We will not make use of them in this course, but they are a cute visual device that tells us about the roots of a polynomial. It is rather annoying to write down a formal definition, so we first look at some examples. We will work with valuations rather than absolute values.
+
+\begin{eg}
+ Consider the valued field $(\Q_p, v_p)$, and the polynomial
+ \[
 t^4 + p^2 t^3 - p^3 t^2 + pt + p^3.
+ \]
 We then plot the valuation of the coefficient of each power of $t$, and then draw a ``convex polygon'' so that all points lie on or above it:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [step=1,gray, very thin] (-1, 4) grid (5, -1);
+
+ \draw [->] (-1, 0) -- (5, 0) node [right] {power of $t$};
+ \draw [->] (0, -1) -- (0, 4) node [above] {valuation of coefficient};
+
+ \node [circ] at (0, 3) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (2, 3) {};
+ \node [circ] at (3, 2) {};
+ \node [circ] at (4, 0) {};
+
+ \draw (0, 3) -- (1, 1) -- (4, 0);
+
+ \foreach \x in {1,2,3,4} {
+ \node at (\x, 0) [below] {$\x$};
+ }
+ \node at (0, 0) [anchor = north east] {$0$};
+ \foreach \y in {1,2,3} {
+ \node at (0, \y) [left] {$\y$};
+ }
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{eg}
+ Consider $(\Q_2, v_2)$ with the polynomial
+ \[
+ 4t^4 + 5t^3 + \frac{7}{2}t + \frac{9}{2}.
+ \]
+ Here there is no $t^2$ term, so we simply don't draw anything.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {power of $t$};
+ \draw [->] (0, -2) -- (0, 3) node [above] {valuation of coefficient};
+
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1, -1) {};
+ \node [circ] at (3, 0) {};
+ \node [circ] at (4, 2) {};
+
+ \draw [step=1,gray, very thin] (-1, 3) grid (5, -2);
+ \draw (0, -1) -- (1, -1) -- (3, 0) -- (4, 2);
+
+ \foreach \x in {1,2,3,4} {
+ \node at (\x, 0) [below] {$\x$};
+ }
+ \node at (0, 0) [anchor = north east] {$0$};
+ \foreach \y in {-1,1,2} {
+ \node at (0, \y) [left] {$\y$};
+ }
+
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
We now come up with a formal definition.
+
+\begin{defi}[Lower convex set]\index{lower convex set}
+ We say a set $S \subseteq \R^2$ is \emph{lower convex} if
+ \begin{enumerate}
+ \item Whenever $(x, y) \in S$, then $(x, z) \in S$ for all $z \geq y$.
+ \item $S$ is convex.
+ \end{enumerate}
+\end{defi}
+\begin{defi}[Lower convex hull]
 Given any set of points $T \subseteq \R^2$, there is a minimal lower convex set $S \supseteq T$, namely the intersection of all lower convex sets containing $T$ (this family is non-empty, since $\R^2$ itself is lower convex). This is known as the \term{lower convex hull} of the points.
+\end{defi}
+
+\begin{eg}
+ The lower convex hull of the points $(0, 3), (1, 1), (2, 3), (3, 2), (4, 0)$ is given by the region denoted below:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0);
+ \draw [->] (0, -1) -- (0, 4);
+
+ \node [circ] at (0, 3) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (2, 3) {};
+ \node [circ] at (3, 2) {};
+ \node [circ] at (4, 0) {};
+
+ \draw [step=1,gray, very thin] (-1, 4) grid (5, -1);
+ \draw [fill=morange, opacity=0.5] (0, 4) -- (0, 3) -- (1, 1) -- (4, 0) -- (4, 4);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{defi}[Newton polygon]\index{Newton polygon}
+ Let $f(x) = a_0 + a_1 x + \cdots + a_n x^n \in K[x]$, where $(K, v)$ is a valued field. Then the \emph{Newton polygon} of $f$ is the lower convex hull of $\{(i, v(a_i)): i = 0, \cdots, n, a_i \not= 0\}$.
+\end{defi}
This is the formal definition, so strictly speaking, the Newton polygon in our examples is the whole shaded region above, but most of the time, we only care about its lower boundary.
+
+\begin{defi}[Break points]\index{break points}
+ Given a polynomial, the points $(i, v(a_i))$ lying on the boundary of the Newton polygon are known as the \emph{break points}.
+\end{defi}
+
+\begin{defi}[Line segment]\index{line segment}
+ Given a polynomial, the line segment between two adjacent break points is a \term{line segment}.
+\end{defi}
+
+\begin{defi}[Multiplicity/length]\index{multiplicity}\index{length}
+ The \emph{length} or \emph{multiplicity} of a line segment is the horizontal length.
+\end{defi}
+
+\begin{defi}[Slope]\index{slope}
+ The \emph{slope} of a line segment is its slope.
+\end{defi}
+
+\begin{eg}
+ Consider again $(\Q_2, v_2)$ with the polynomial
+ \[
+ 4t^4 + 5t^3 + \frac{7}{2}t + \frac{9}{2}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {power of $t$};
+ \draw [->] (0, -2) -- (0, 3) node [above] {valuation of coefficient};
+
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1, -1) {};
+ \node [circ] at (3, 0) {};
+ \node [circ] at (4, 2) {};
+
+ \draw [step=1,gray, very thin] (-1, 3) grid (5, -2);
+ \draw (0, -1) -- (1, -1) -- (3, 0) -- (4, 2);
+ \end{tikzpicture}
+ \end{center}
+ The middle segment has length $2$ and slope $1/2$.
+\end{eg}
+\begin{eg}
+ In the following Newton polygon:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0);
+ \draw [->] (0, -1) -- (0, 4);
+
+ \node [circ] at (0, 3) {};
+ \node [circ] at (1, 1) {};
+ \node [circ] at (2, 3) {};
+ \node [circ] at (3, 2) {};
+ \node [circ] at (4, 0) {};
+
+ \draw [step=1,gray, very thin] (-1, 4) grid (5, -1);
+ \draw (0, 3) -- (1, 1) -- (4, 0);
+ \end{tikzpicture}
+ \end{center}
+ The second line segment has length $3$ and slope $-\frac{1}{3}$.
+\end{eg}
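The lower boundaries in the examples above can be computed mechanically with a standard lower-convex-hull sweep. A Python sketch (our own helper; it returns one \texttt{(length, slope)} pair per maximal segment, merging collinear points):

```python
# Sketch: compute the line segments of the Newton polygon from the points
# (i, v(a_i)), a_i != 0, sorted by i, via a monotone-chain lower hull.

def newton_polygon(points):
    hull = []
    for q in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            x3, y3 = q
            # pop the middle point if it lies on or above the chord
            if (x2 - x1) * (y3 - y1) <= (x3 - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(q)
    return [(x2 - x1, (y2 - y1) / (x2 - x1))
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

# 4t^4 + 5t^3 + (7/2)t + 9/2 over Q_2: segments of slopes 0, 1/2 and 2
assert newton_polygon([(0, -1), (1, -1), (3, 0), (4, 2)]) \
    == [(1, 0.0), (2, 0.5), (1, 2.0)]
```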
+
+It turns out the Newton polygon tells us something about the roots of the polynomial.
+\begin{thm}
 Let $K$ be a complete valued field, and $v$ the valuation on $K$. We let
+ \[
+ f(x) = a_0 + a_1 x + \cdots + a_n x^n \in K[x].
+ \]
+ Let $L$ be the splitting field of $f$ over $K$, equipped with the unique extension $w$ of $v$.
+
+ If $(r, v(a_r)) \to (s, v(a_s))$ is a line segment of the Newton polygon of $f$ with slope $-m \in \R$, then $f$ has precisely $s - r$ roots of valuation $m$.
+\end{thm}
+Note that by lower convexity, there can be at most one line segment for each slope. So this theorem makes sense.
+
+\begin{proof}
+ Dividing by $a_n$ only shifts the polygon vertically, so we may wlog $a_n = 1$. We number the roots of $f$ such that
+ \begin{align*}
+ w(\alpha_1) &= \cdots = w(\alpha_{s_1}) = m_1\\
+ w(\alpha_{s_1 + 1}) &= \cdots = w(\alpha_{s_2}) = m_2\\
+ &\vdots\\
 w(\alpha_{s_t + 1}) &= \cdots = w(\alpha_n) = m_{t + 1},
+ \end{align*}
+ where we have
+ \[
+ m_1 < m_2 < \cdots < m_{t + 1}.
+ \]
+ Then we know
+ \begin{align*}
+ v(a_n) &= v(1) = 0\\
+ v(a_{n - 1}) &= w\left(\sum \alpha_i\right) \geq \min_i w(\alpha_i) = m_1\\
+ v(a_{n - 2}) &= w\left(\sum \alpha_i\alpha_j\right) \geq \min_{i \not= j} w(\alpha_i \alpha_j) = 2 m_1\\
+ &\vdots\\
 v(a_{n - s_1}) &= w\left(\sum_{i_1 < \cdots < i_{s_1}} \alpha_{i_1} \cdots \alpha_{i_{s_1}}\right) = \min w(\alpha_{i_1}\cdots\alpha_{i_{s_1}}) = s_1 m_1.
+ \end{align*}
 It is important that in the last one, we have equality, not just an inequality, because there is exactly one term in the sum, namely $\alpha_1 \cdots \alpha_{s_1}$, whose valuation is strictly less than that of all the others.
+
+ We can then continue to get
+ \[
 v(a_{n - s_1 - 1}) \geq \min w(\alpha_{i_1} \cdots \alpha_{i_{s_1 + 1}}) = s_1 m_1 + m_2,
+ \]
+ until we reach
+ \[
 v(a_{n - s_2}) = s_1 m_1 + (s_2 - s_1) m_2.
+ \]
+ We keep going on.
+
+ We draw the Newton polygon.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (8, 0);
+ \draw [->] (0, -2) -- (0, 2);
+
+ \node [circ] at (7, 0) {};
+ \node [above] at (7, 0) {$(n, 0)$};
+
+ \node [circ] at (5, -1) {};
+ \node [right] at (5, -1) {$(n - s_1, s_1 m_1)$};
+
+ \node [circ] at (3, -1.5) {};
 \node [below] at (3, -1.5) {$(n - s_2, s_1 m_1 + (s_2 - s_1) m_2)$};
+
+ \node at (2, -1.5) {$\cdots$};
+
+ \draw (7, 0) -- (5, -1) -- (3, -1.5);
+ \end{tikzpicture}
+ \end{center}
+ We don't know where exactly the other points are, but the inequalities imply that the $(i, v(a_i))$ are above the lines drawn. So this is the Newton polygon.
+
+ Counting from the right, the first line segment has length $n - (n - s_1) = s_1$ and slope
+ \[
+ \frac{0 - s_1 m_1}{n - (n - s_1)} = -m_1.
+ \]
+ In general, the $k$th segment has length $(n - s_{k - 1}) - (n - s_k) = s_k - s_{k - 1}$, and slope
+ \begin{align*}
+ &\frac{\left(s_1 m_1 + \sum_{i = 1}^{k - 2} (s_{i + 1} - s_i) m_{i + 1}\right) - \left(s_1 m_1 + \sum_{i = 1}^{k - 1} (s_{i + 1} - s_i) m_{i + 1}\right)}{s_k - s_{k - 1}} \\
+ ={}& \frac{-(s_k - s_{k - 1})m_k}{s_k - s_{k - 1}} = - m_k.
+ \end{align*}
 So each line segment has the claimed length and slope.
+\end{proof}
+
+\begin{cor}
+ If $f$ is irreducible, then the Newton polygon has a single line segment.
+\end{cor}
+
+\begin{proof}
 We need to show that all roots have the same valuation. Let $\alpha, \beta$ be roots of $f$ in the splitting field $L$. Then there is some $\sigma \in \Aut(L/K)$ such that $\sigma(\alpha) = \beta$. Then $w(\beta) = w(\sigma(\alpha)) = w(\alpha)$. So done.
+\end{proof}
+
+Note that Eisenstein's criterion is a (partial) converse to this!
+
+\section{Discretely valued fields}
+We are now going to further specialize. While a valued field already has some nice properties, we can't really say much more about them without knowing much about their valuations.
+
+Recall our previous two examples of valued fields: $\Q_p$ and $\F_p((T))$. The valuations had the special property that they take values in $\Z$. Such fields are known as \emph{discretely valued fields}.
+
+\begin{defi}[Discretely valued field]\index{discretely valued field}\index{valued field!discretely}\index{DVF}
 Let $K$ be a valued field with valuation $v$. We say $K$ is a \emph{discretely valued field} (DVF) if $v(K^\times) \subseteq \R$ is a non-trivial discrete subgroup of $\R$, i.e.\ $v(K^\times)$ is infinite cyclic.
+\end{defi}
+Note that we do not require the image to be exactly $\Z \subseteq \R$. So we allow scaled versions of the valuation. This is useful because the property of mapping into $\Z$ is not preserved under field extensions in general, as we will later see. We will call those that do land in $\Z$ \emph{normalized valuations}.
+
+\begin{defi}[Normalized valuation]\index{normalized valuation}\index{valuation!normalized}
 Let $K$ be a DVF. The \emph{normalized valuation} $v_K$ on $K$ is the unique valuation on $K$ in the given equivalence class of valuations whose image is $\Z$.
+\end{defi}
+
+Note that the normalized valuation does not give us a preferred choice of absolute value, since to obtain an absolute value, we still have to arbitrarily pick the base $c > 1$ to define $|x| = c^{-v(x)}$.
+
+\begin{defi}[Uniformizer]\index{uniformizer}
 Let $K$ be a discretely valued field. We say $\pi \in K$ is a \emph{uniformizer} if $v(\pi) > 0$ and $v(\pi)$ generates $v(K^\times)$ (equivalently, $\pi$ has minimal positive valuation).
+\end{defi}
+So with a normalized valuation, we have $v_K(\pi) = 1$.
+
+\begin{eg}
+ The usual valuation on $\Q_p$ is normalized, and so is the usual valuation on $k((T))$. $p$ is a uniformizer for $\Q_p$ and $T$ is a uniformizer for $k((T))$.
+\end{eg}
+
The kinds of fields we will be interested in are \emph{local fields}. The definition we have here might seem rather ad hoc. This is just one of the many equivalent characterizations of a local field, and the one we pick here is the easiest to state.
+\begin{defi}[Local field]\index{local field}
+ A \emph{local field} is a complete discretely valued field with a finite residue field.
+\end{defi}
+
+\begin{eg}
+ $\Q$ and $\Q_p$ with $v_p$ are both discretely valued fields, and $\Q_p$ is a local field. $p$ is a uniformizer.
+\end{eg}
+
+\begin{eg}
+ The Laurent series field $k((T))$ with valuation
+ \[
+ v\left(\sum a_n T^n\right) = \inf\{n: a_n \not= 0\}
+ \]
+ is a discretely valued field, and is a local field if and only if $k$ is a finite field, as the residue field is exactly $k$. We have
+ \[
+ \mathcal{O}_{k((T))} = k[[T]] = \left\{\sum_{n = 0}^\infty a_n T^n: a_n \in k\right\}.
+ \]
+ Here $T$ is a uniformizer.
+\end{eg}
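+
+As a quick illustrative aside (ours, not part of the notes), the $T$-adic valuation is easy to compute mechanically if we represent a Laurent series by its dictionary of coefficients:
+
```python
# Sketch (not from the notes): the T-adic valuation on k((T)), with a
# Laurent series represented as {exponent: coefficient}; zero series = {}.
def v_T(f):
    """v(sum a_n T^n) = inf{n : a_n != 0}, with v(0) = +infinity."""
    support = [n for n, a in f.items() if a != 0]
    return min(support) if support else float("inf")

print(v_T({2: 1, 3: 4}))   # T^2 + 4T^3 has valuation 2
print(v_T({-1: 1, 0: 1}))  # T^{-1} + 1 has valuation -1, so it is not in k[[T]]
print(v_T({}))             # the zero series has valuation infinity
```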
+
+These discretely valued fields behave pretty much like the $p$-adic numbers.
+\begin{prop}
+ Let $K$ be a discretely valued field with uniformizer $\pi$. Let $S \subseteq \mathcal{O}_K$ be a set of coset representatives of $\mathcal{O}_K/\mathfrak{m}_K = k_K$ containing $0$. Then
+ \begin{enumerate}
+ \item The non-zero ideals of $\mathcal{O}_K$ are $\pi^n \mathcal{O}_K$ for $n \geq 0$.
+ \item The ring $\mathcal{O}_K$ is a PID with unique prime $\pi$ (up to units), and $\mathfrak{m}_K = \pi\mathcal{O}_K$.
+ \item The topology on $\mathcal{O}_K$ induced by the absolute value is the $\pi$-adic topology.
+ \item If $K$ is complete, then $\mathcal{O}_K$ is $\pi$-adically complete.
+ \item If $K$ is complete, then any $x \in K$ can be written uniquely as
+ \[
+ x = \sum_{n \gg -\infty}^\infty a_n \pi^n,
+ \]
+ where $a_n \in S$, and
+ \[
+ |x| = |\pi|^{\inf\{n: a_n \not= 0\}}.
+ \]
+ \item The completion $\hat{K}$ is also discretely valued and $\pi$ is a uniformizer, and moreover the natural map
+ \[
+ \begin{tikzcd}
+ \displaystyle\frac{\mathcal{O}_K}{\pi^n \mathcal{O}_K} \ar[r, "\sim"] & \displaystyle\frac{\mathcal{O}_{\hat{K}}}{\pi^n \mathcal{O}_{\hat{K}}}
+ \end{tikzcd}
+ \]
+ is an isomorphism.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ The same as for $\Q_p$ and $\Z_p$, with $\pi$ instead of $p$.
+\end{proof}
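+
+For $K = \Q_p$, $\pi = p$ and $S = \{0, 1, \cdots, p - 1\}$, the expansion in property (v) can be computed explicitly. A minimal sketch (the function and its interface are ours, not the notes'):
+
```python
# Sketch: the first N digits a_n in S = {0, ..., p-1} of x = a/b in Z_p,
# where p does not divide b (property (v) with K = Q_p, pi = p).
def p_adic_digits(a, b, p, N):
    x = (a * pow(b, -1, p**N)) % p**N  # a/b as an element of Z/p^N Z
    digits = []
    for _ in range(N):
        digits.append(x % p)
        x //= p
    return digits

print(p_adic_digits(-1, 1, 5, 4))  # [4, 4, 4, 4]: -1 = 4 + 4*5 + 4*5^2 + ...
print(p_adic_digits(1, 3, 5, 4))   # [2, 3, 1, 3]: 1/3 = 2 + 3*5 + 5^2 + ...
```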
+
+\begin{prop}
+ Let $K$ be a discretely valued field. Then $K$ is a local field iff $\mathcal{O}_K$ is compact.
+\end{prop}
+
+\begin{proof}
+ If $\mathcal{O}_K$ is compact, then $\pi^{-n}\mathcal{O}_K$ is compact for all $n \geq 0$ (where $\pi$ is the uniformizer), and in particular complete. So
+ \[
+ K = \bigcup_{n = 0}^\infty \pi^{-n} \mathcal{O}_K
+ \]
+ is complete, as this is an increasing union, and Cauchy sequences are bounded. Also, we know the quotient map $\mathcal{O}_K \to k_K$ is continuous when $k_K$ is given the discrete topology, by definition of the $\pi$-adic topology. So $k_K$ is compact and discrete, hence finite.
+
+ In the other direction, if $K$ is local, then we know $\mathcal{O}_K/\pi^n \mathcal{O}_K$ is finite for all $n \geq 0$ (by induction and finiteness of $k_K$). We let $(x_i)$ be a sequence in $\mathcal{O}_K$. Then by finiteness of $\mathcal{O}_K/\pi \mathcal{O}_K$, there is a subsequence $(x_{1, i})$ which is constant modulo $\pi$. We keep going, choosing a subsequence $(x_{n + 1, i})$ of $(x_{n, i})$ such that $(x_{n + 1, i})$ is constant modulo $\pi^{n + 1}$. Then the diagonal sequence $(x_{i, i})_{i = 1}^\infty$ converges, since it is Cauchy as
+ \[
+ |x_{ii} - x_{jj}| \leq |\pi|^j
+ \]
+ for $j \leq i$. So $\mathcal{O}_K$ is sequentially compact, hence compact.
+\end{proof}
+
+Now the valuation ring $\mathcal{O}_K$ inherits a valuation from $K$, and this gives it a structure of a \emph{discrete valuation ring}. We will define a discrete valuation ring in a funny way, but there are many equivalent definitions that we will not list.
+
+\begin{defi}[Discrete valuation ring]\index{discrete valuation ring}\index{DVR}
+ A ring $R$ is called a \emph{discrete valuation ring} (DVR) if it is a PID with a unique prime element up to units.
+\end{defi}
+
+\begin{prop}
+ $R$ is a DVR iff $R \cong \mathcal{O}_K$ for some DVF $K$.
+\end{prop}
+
+\begin{proof}
+ We have already seen that valuation rings of discretely valued fields are DVRs. In the other direction, let $R$ be a DVR, and $\pi$ a prime. Let $x \in R \setminus \{0\}$. Then we can find a unique unit $u \in R^\times$ and $n \in \Z_{\geq 0}$ such that $x = \pi^n u$ (say, by unique factorization in PIDs). We define
+ \[
+ v(x) =
+ \begin{cases}
+ n & x \not= 0\\
+ \infty & x = 0
+ \end{cases}
+ \]
+ This is then a discrete valuation of $R$. This extends uniquely to the field of fractions $K$. It remains to show that $R = \mathcal{O}_K$. First note that
+ \[
+ K = R\left[\frac{1}{\pi}\right].
+ \]
+ This is since any non-zero element in $R\left[\frac{1}{\pi}\right]$ looks like $\pi^n u, u \in R^\times, n \in \Z$, and is already invertible. So it must be the field of fractions. Then we have
+ \[
+ v(\pi^n u) = n \in \Z_{\geq 0} \Longleftrightarrow \pi^n u \in R.
+ \]
+ So we have $R = \mathcal{O}_K$.
+\end{proof}
+
+Now recall our two ``standard'' examples of valued fields --- $\F_p((T))$ and $\Q_p$. Both of their residue fields are $\F_p$, and in particular have characteristic $p$. However, $\F_p((T))$ itself is \emph{also} of characteristic $p$, while $\Q_p$ has characteristic $0$. It would thus be helpful to split these into two different cases:
+
+\begin{defi}[Equal and mixed characteristic]\index{equal characteristic}\index{mixed characteristic}
+ Let $K$ be a valued field with residue field $k_K$. Then $K$ has \emph{equal characteristic} if
+ \[
+ \Char K = \Char k_K.
+ \]
+ Otherwise, we say $K$ has \emph{mixed characteristic}.
+\end{defi}
+If $K$ has mixed characteristic, then necessarily $\Char K = 0$, and $\Char k_K > 0$.
+
+\begin{eg}
+ $\Q_p$ has mixed characteristic, since $\Char \Q_p = 0$ but $k_{\Q_p} = \Z/p\Z$ has characteristic $p$.
+\end{eg}
+
+We will also need the following definition:
+\begin{defi}[Perfect ring]\index{perfect ring}
+ Let $R$ be a ring of characteristic $p$. We say $R$ is \emph{perfect} if the Frobenius map $x \mapsto x^p$ is an automorphism of $R$, i.e.\ every element of $R$ has a $p$th root.
+\end{defi}
+
+\begin{fact}
+ Let $F$ be a field of characteristic $p$. Then $F$ is perfect if and only if every finite extension of $F$ is separable.
+\end{fact}
+
+\begin{eg}
+ $\F_q$ is perfect for every $q = p^n$.
+\end{eg}
+
+\subsection{Teichm\"uller lifts}
+Take our favorite discretely valued ring $\Z_p$. This is $p$-adically complete, so we can write each element as
+\[
+ x = a_0 + a_1 p + a_2 p^2 + \cdots,
+\]
+where each $a_i$ is in $\{0, 1, \cdots, p - 1\}$. The reason this works is that $0, 1, \cdots, p - 1$ are coset representatives of the ring $\Z_p/p \Z_p \cong \Z/p\Z$. While these coset representatives might feel like the ``natural'' choice in this context, this is only because we have implicitly identified $\Z_p/p\Z_p \cong \Z/p\Z$ with a particular subset of $\Z \subseteq \Z_p$. However, this identification respects effectively no algebraic structure at all. For example, we cannot multiply cosets by simply multiplying their representatives as elements of $\Z_p$, because, say, $(p - 1)^2 = p^2 - 2p + 1$, which is not the representative $1$. So this is actually quite bad, at least theoretically.
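+
+Numerically, the failure is easy to exhibit (a small sketch):
+
```python
# Multiplying the standard representatives {0, ..., p-1} in Z does not
# respect coset multiplication: the product of representatives need not
# be the representative of the product coset.
p = 7
a = b = p - 1                 # both represent the coset of -1 in Z/pZ
product_of_reps = a * b       # 36, which is not even in {0, ..., 6}
rep_of_product = (a * b) % p  # the coset (-1)(-1) = 1 has representative 1
print(product_of_reps, rep_of_product)  # 36 1
```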
+
+It turns out that we can actually construct ``natural'' lifts in a very general scenario.
+\begin{thm}
+ Let $R$ be a ring, and let $x \in R$. Assume that $R$ is $x$-adically complete and that $R/xR$ is perfect of characteristic $p$. Then there is a unique map $[-]: R/xR \to R$ such that
+ \[
+ [a] \equiv a \mod x
+ \]
+ and
+ \[
+ [ab] = [a][b].
+ \]
+ for all $a, b \in R/xR$. Moreover, if $R$ has characteristic $p$, then $[-]$ is a ring homomorphism.
+\end{thm}
+
+\begin{defi}[Teichm\"uller map]\index{Teichm\"uller map}\index{Teichm\"uller lift}\index{Teichm\"uller representative}
+ The map $[-]: R/xR \to R$ is called the \emph{Teichm\"uller map}. $[a]$ is called the \emph{Teichm\"uller lift} or \emph{representative} of $a$.
+\end{defi}
+
+The idea of the proof is as follows: suppose we have an $a \in R/xR$. If we randomly picked a lift $\alpha$, then chances are it would be a pretty ``bad'' choice, since any two such choices can differ by a multiple of $x$.
+
+Suppose we instead lifted a $p$th root of $a$ to $R$, and then take the $p$th power of it. We claim that this is a better way of picking a lift. Suppose we have picked two lifts of $a^{p^{-1}}$, say, $\alpha_1$ and $\alpha_1'$. Then $\alpha_1' = xc + \alpha_1$ for some $c$. So we have
+\[
+ (\alpha_1')^p - \alpha_1^p = \alpha_1^p + p\alpha_1^{p - 1} xc + O(x^2) - \alpha_1^p = p\alpha_1^{p - 1} xc + O(x^2),
+\]
+where we abuse notation and write $O(x^2)$ to mean terms that are multiples of $x^2$.
+
+We now recall that $R/xR$ has characteristic $p$, so $p \in xR$. Thus in fact $p\alpha_1^{p - 1} xc = O(x^2)$. So we have
+\[
+ (\alpha_1')^p - \alpha_1^p = O(x^2).
+\]
+So while the lift is still arbitrary, any two arbitrary choices can differ by at most $x^2$. Alternatively, our lift is now a well-defined element of $R/x^2 R$.
+
+We can, of course, do better. We can lift the $p^2$th root of $a$ to $R$, then take the $p^2$th power of it. Now any two lifts can differ by at most $O(x^3)$. More generally, we can try to lift the $p^n$th root of $a$, then take the $p^n$th power of it. We keep picking a higher and higher $n$, take the limit, and hopefully get something useful out!
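+
+For $R = \Z_p$ (so $x = p$, and the Frobenius on $R/pR = \F_p$ is the identity, so a lift of $a^{p^{-n}}$ is just $a$ plus any multiple of $p$), this scheme can be tested numerically, truncating $\Z_p$ to $\Z/p^N$. A sketch:
+
```python
# Sketch: beta_n = alpha_n^{p^n} for arbitrary lifts alpha_n of a^{1/p^n},
# in Z_5 truncated to Z/5^N. The result mod p^{n+1} does not depend on the
# lift, and consecutive terms agree mod p^{n+1}, so beta_n converges.
p, N, a = 5, 8, 2

def beta(n, junk):
    return pow(a + p * junk, p**n, p**N)  # (lift of a^{1/p^n})^{p^n}

for n in range(1, 5):
    assert beta(n, 0) % p**(n+1) == beta(n, 17) % p**(n+1)  # lift-independent
    assert (beta(n + 1, 0) - beta(n, 0)) % p**(n+1) == 0    # Cauchy
print("beta_n converges 5-adically, independently of the lifts")
```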
+
+To prove this result, we will need the following messy lemma:
+\begin{lemma}
+ Let $R$ be a ring with $x \in R$ such that $R/xR$ has characteristic $p$. Let $\alpha, \beta \in R$ be such that
+ \[
+ \alpha \equiv \beta \mod x^k\tag{$\dagger$}
+ \]
+ Then we have
+ \[
+ \alpha^p \equiv \beta^p \mod x^{k + 1}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ It is left as an exercise to modify the proof to work for $p = 2$ (it is actually easier). So suppose $p$ is odd. We take the $p$th power of $(\dagger)$ to obtain
+ \[
+ \alpha^p - \beta^p + \sum_{i = 1}^{p - 1} (-1)^i \binom{p}{i} \alpha^{p - i} \beta^i \in x^{pk}R.
+ \]
+ We can now write
+ \begin{align*}
+ \sum_{i = 1}^{p - 1} (-1)^i\binom{p}{i} \alpha^{p - i} \beta^i &= \sum_{i = 1}^{\frac{p - 1}{2}} (-1)^i\binom{p}{i} (\alpha \beta)^i \left(\alpha^{p - 2i} - \beta^{p - 2i}\right)\\
+ &= p (\alpha - \beta) (\text{something}).
+ \end{align*}
+ Now since $R/xR$ has characteristic $p$, we know $p \in xR$. By assumption, we know $\alpha - \beta \in x^k R$. So this whole mess is in $x^{k + 1} R$. Since $pk \geq k + 1$, we also have $x^{pk}R \subseteq x^{k + 1}R$, so $\alpha^p - \beta^p \in x^{k + 1}R$, and we are done.
+\end{proof}
+
+\begin{proof}[Proof of theorem]
+ Let $a \in R/xR$. For each $n$, there is a unique $a^{p^{-n}} \in R/xR$. We lift this arbitrarily to some $\alpha_n \in R$ such that
+ \[
+ \alpha_n \equiv a^{p^{-n}} \mod x.
+ \]
+ We define
+ \[
+ \beta_n = \alpha_n^{p^n}.
+ \]
+ The claim is that
+ \[
+ [a] = \lim_{n \to \infty}\beta_n
+ \]
+ exists and is independent of the choices.
+
+ Note that if the limit exists no matter how we choose the $\alpha_n$, then it must be independent of the choices. Indeed, if we had choices $\beta_n$ and $\beta_n'$, then $\beta_1, \beta_2', \beta_3, \beta_4', \beta_5, \beta_6', \cdots$ is also a respectable choice of lifts, and thus must converge. So $\beta_n$ and $\beta_n'$ must have the same limit.
+
+ Since the ring is $x$-adically complete, to show the limit exists, it suffices to show that $(\beta_n)$ is Cauchy, for which it is enough that $\beta_{n + 1} - \beta_n \to 0$ $x$-adically. Indeed, we have
+ \[
+ \beta_{n + 1} - \beta_n = (\alpha_{n + 1}^p)^{p^n} - \alpha_n^{p^n}.
+ \]
+ We now notice that
+ \[
+ \alpha_{n + 1}^p \equiv (a^{p^{-n - 1}})^p = a^{p^{-n}} \equiv \alpha_n \mod x.
+ \]
+ So by applying the previous lemma $n$ times, we obtain
+ \[
+ (\alpha_{n + 1}^p)^{p^n} \equiv \alpha_n^{p^n} \mod x^{n + 1}.
+ \]
+ So $\beta_{n + 1} - \beta_n \in x^{n + 1} R$. So $\lim \beta_n$ exists.
+
+ To see $[a] = a\mod x$, we just have to note that
+ \[
+ \lim_{n \to \infty} \alpha_n^{p^n} \equiv \lim_{n \to \infty} (a^{p^{-n}})^{p^n} = \lim a = a \mod x.
+ \]
+ (Here we are using the fact that the map $R \to R/xR$ is continuous when $R$ is given the $x$-adic topology and $R/xR$ is given the discrete topology.)
+
+ The remaining properties now follow easily from the fact that the limit does not depend on the choice of lifts.
+
+ For multiplicativity, if we have another element $b \in R/xR$, with $\gamma_n \in R$ lifting $b^{p^{-n}}$ for all $n$, then $\alpha_n \gamma_n$ lifts $(ab)^{p^{-n}}$. So
+ \[
+ [ab] = \lim\alpha_n^{p^n} \gamma_n^{p^n} = \lim \alpha_n^{p^n} \lim \gamma_n^{p^n} = [a][b].
+ \]
+ If $R$ has characteristic $p$, then $\alpha_n + \gamma_n$ lifts $a^{p^{-n}} + b^{p^{-n}} = (a + b)^{p^{-n}}$. So
+ \[
+ [a + b] = \lim (\alpha_n + \gamma_n)^{p^n} = \lim \alpha_n^{p^n} + \lim \gamma_n^{p^n} = [a] + [b].
+ \]
+ Since $1$ is a lift of $1$ and $0$ is a lift of $0$, it follows that this is a ring homomorphism.
+
+ Finally, to show uniqueness, suppose $\phi: R/xR \to R$ is a map with these properties. Then we note that $\phi(a^{p^{-n}}) \equiv a^{p^{-n}} \mod x$, and is thus a valid choice of $\alpha_n$. So we have
+ \[
+ [a] = \lim_{n \to \infty} \phi(a^{p^{-n}})^{p^n} = \lim \phi(a) = \phi(a).\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Let $R = \Z_p$ and $x = p$. Then $[-]: \F_p \to \Z_p$ satisfies
+ \[
+ [x]^{p - 1} = [x^{p - 1}] = [1] = 1.
+ \]
+ So for $x \not= 0$, the lift $[x]$ must be the unique $(p - 1)$th root of unity lifting $x$ (recall we proved their existence via Hensel's lemma).
+\end{eg}
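+
+We can verify this numerically (a sketch, truncating $\Z_p$ to $\Z/p^N$): since the Frobenius on $\F_p$ is the identity, the Teichm\"uller lift of $a$ is simply $\lim_n a^{p^n}$.
+
```python
# Sketch: the Teichmueller lift of 2 in Z_5, computed mod 5^8 by iterating
# x -> x^p until stable; it lifts 2 and is a 4th root of unity.
p, N = 5, 8
t = 2
for _ in range(N):
    t = pow(t, p, p**N)     # after N steps, t is stable mod p^N
print(t % p)                # 2: t is a lift of 2
print(pow(t, p - 1, p**N))  # 1: t^4 = 1 in Z/5^8, as [2]^4 = [2^4] = [1]
```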
+
+When proving theorems about these rings, the Teichm\"uller lifts would be very handy and natural things to use. However, when we want to do actual computations, there is absolutely no reason why these would be easier!
+
+As an application, we can prove the following characterization of equal characteristic complete DVF's.
+\begin{thm}
+ Let $K$ be a complete discretely valued field of equal characteristic $p$, and assume that $k_K$ is perfect. Then $K \cong k_K((T))$.
+\end{thm}
+
+\begin{proof}
+ Let $K$ be a complete DVF. Since every DVF is the field of fractions of its valuation ring, it suffices to prove that $\mathcal{O}_K \cong k_K[[T]]$. We know $\mathcal{O}_K$ has characteristic $p$. So $[-]: k_K \to \mathcal{O}_K$ is an injective ring homomorphism. We choose a uniformizer $\pi \in \mathcal{O}_K$, and define
+ \[
+ k_K[[T]] \to \mathcal{O}_K
+ \]
+ by
+ \[
+ \sum_{n = 0}^\infty a_n T^n \mapsto \sum_{n = 0}^\infty [a_n] \pi^n.
+ \]
+ Then this is a ring homomorphism since $[-]$ is. The bijectivity follows from property (v) in our list of properties of complete DVF's.
+\end{proof}
+
+\begin{cor}
+ Let $K$ be a local field of equal characteristic $p$. Then $k_K \cong \F_q$ for some $q$ a power of $p$, and $K \cong \F_q((T))$.
+\end{cor}
+
+\subsection{Witt vectors*}
+We are now going to look at the mixed characteristic analogue of this result. We want something that allows us to go from characteristic $p$ to characteristic $0$. The answer is given by \emph{Witt vectors}, which are non-examinable.
+
+We start with the notion of a \emph{strict $p$-ring}. Roughly this is a ring that satisfies all the good properties whose name has the word ``$p$'' in it.
+\begin{defi}[Strict $p$-ring]\index{strict $p$-ring}
+ Let $A$ be a ring. $A$ is called a \emph{strict $p$-ring} if it is $p$-torsion free, $p$-adically complete, and $A/pA$ is a perfect ring.
+\end{defi}
+
+Note that a strict $p$-ring in particular satisfies the conditions for the Teichm\"uller lift to exist, for $x = p$.
+\begin{eg}
+ $\Z_p$ is a strict $p$-ring.
+\end{eg}
+
+The next example we are going to construct is more complicated. This is in some sense a generalization of the usual polynomial rings $\Z[x_1, \cdots, x_n]$, or more generally,
+\[
+ \Z[x_i \mid i \in I],
+\]
+for $I$ possibly infinite. To construct the ``free'' strict $p$-ring, after adding all the variables $x_i$, we also need to add their $p$th roots, the $p^2$th roots, etc., and then take the $p$-adic completion, and hope for the best.
+
+\begin{eg}
+ Let $X = \{x_i: i \in I\}$ be a set. Let
+ \[
+ B = \Z[x_i^{p^{-\infty}}\mid i \in I] = \bigcup_{n = 0}^\infty \Z[x_i^{p^{-n}}\mid i \in I].
+ \]
+ Here the union on the right is taken by treating
+ \[
+ \Z[x_i \mid i \in I] \subseteq \Z [x_i^{p^{-1}} \mid i \in I] \subseteq \cdots
+ \]
+ in the natural way.
+
+ We let $A$ be the $p$-adic completion of $B$. We claim that $A$ is a strict $p$-ring and $A/pA \cong \F_p[x_i^{p^{-\infty}}\mid i \in I]$.
+
+ Indeed, we see that $B$ is $p$-torsion free. By Exercise 13 on Sheet 1, we know $A$ is $p$-adically complete and torsion free. Moreover,
+ \[
+ A/pA \cong B/pB \cong \F_p[x_i^{p^{-\infty}}\mid i \in I],
+ \]
+ which is perfect since every element has a $p$th root.
+\end{eg}
+
+If $A$ is a strict $p$-ring, then we know that we have a Teichm\"uller map
+\[
+ [-]: A/pA \to A.
+\]
+\begin{lemma}
+ Let $A$ be a strict $p$-ring. Then any element of $A$ can be written uniquely as
+ \[
+ a = \sum_{n = 0}^\infty [a_n] p^n,
+ \]
+ with $a_n \in A/pA$.
+\end{lemma}
+
+\begin{proof}
+ We recursively construct the $a_n$ by
+ \begin{align*}
+ a_0 &\equiv a \pmod p\\
+ a_1 &\equiv p^{-1}(a - [a_0]) \pmod p\\
+ &\vdots\qedhere
+ \end{align*}
+\end{proof}
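+
+For $A = \Z_p$, this recursion is easy to run. A sketch, truncating $\Z_p$ to $\Z/p^M$ with some extra working precision, and computing $[a]$ as the stable value of $a^{p^n}$:
+
```python
# Sketch: peeling off Teichmueller digits a_n with a = sum [a_n] p^n in Z_p.
p, N = 5, 6
M = p ** (2 * N)  # extra precision: each division by p costs one digit

def teich(a):
    for _ in range(2 * N):
        a = pow(a, p, M)  # stable value of a^{p^n} = the Teichmueller lift
    return a

def teich_digits(x):
    digits = []
    for _ in range(N):
        a0 = x % p
        digits.append(a0)
        x = ((x - teich(a0)) % M) // p  # x - [a_0] is divisible by p
    return digits

digits = teich_digits(7)
recon = sum(teich(d) * p**n for n, d in enumerate(digits))
assert (recon - 7) % p**N == 0  # 7 = [a_0] + [a_1] 5 + ... mod 5^6
print(digits)
```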
+
+\begin{lemma}
+ Let $A$ and $B$ be strict $p$-rings and let $f: A/pA \to B/pB$ be a ring homomorphism. Then there is a unique homomorphism $F: A \to B$ such that $f = F \bmod p$, given by
+ \[
+ F\left(\sum [a_n]p^n\right) = \sum [f(a_n)] p^n.
+ \]
+\end{lemma}
+
+\begin{proof}[Proof sketch]
+ We define $F$ by the given formula and check that it works. First of all, by the formula, $F$ is $p$-adically continuous, and the key thing is to check that it is additive (which is slightly messy). Multiplicativity then follows formally from the continuity and additivity.
+
+ To show uniqueness, suppose that we have some $\psi$ lifting $f$. Then $\psi(p) = p$. So $\psi$ is $p$-adically continuous. So it suffices to show that $\psi([a]) = [\psi(a)]$.
+
+ We take $\alpha_n \in A$ lifting $a^{p^{-n}} \in A/pA$. Then $\psi(\alpha_n)$ lifts $f(a)^{p^{-n}}$. So
+ \[
+ \psi([a]) = \lim \psi(\alpha_n^{p^n}) = \lim \psi(\alpha_n)^{p^n} = [f(a)].
+ \]
+ So done.
+\end{proof}
+
+There is a generalization of this result:
+\begin{prop}
+ Let $A$ be a strict $p$-ring and $B$ be a ring with an element $x$ such that $B$ is $x$-adically complete and $B/xB$ is perfect of characteristic $p$. If $f: A/pA \to B/xB$ is a ring homomorphism, then there exists a unique ring homomorphism $F: A \to B$ with $f = F \mod x$, i.e.\ the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ A \ar[r, "F"] \ar[d] & B \ar[d]\\
+ A/pA \ar[r, "f"] & B/xB
+ \end{tikzcd}.
+ \]
+\end{prop}
+Indeed, the conditions on $B$ are sufficient for Teichm\"uller lifts to exist, and we can at least write down the previous formula, then painfully check it works.
+
+We can now state the main theorem about strict $p$-rings.
+
+\begin{thm}
+ Let $R$ be a perfect ring. Then there is a unique (up to isomorphism) strict $p$-ring $W(R)$\index{$W(R)$} called the \term{Witt vector}\emph{s} of $R$ such that $W(R)/p W(R) \cong R$.
+
+ Moreover, for any other perfect ring $R'$, the reduction mod $p$ map gives a bijection
+ \[
+ \begin{tikzcd}
+ \Hom_{\mathrm{Ring}}(W(R), W(R')) \ar[r, "\sim"] & \Hom_{\mathrm{Ring}}(R, R')
+ \end{tikzcd}.
+ \]
+\end{thm}
+
+\begin{proof}[Proof sketch]
+ If $W(R)$ and $W(R')$ are such strict $p$-rings, then the second part follows from the previous lemma. Indeed, if $C$ is a strict $p$-ring with $C/pC \cong R \cong W(R)/pW(R)$, then the isomorphism $\bar{\alpha}: W(R)/pW(R) \to C/pC$ and its inverse $\bar{\alpha}^{-1}$ have unique lifts $\gamma: W(R) \to C$ and $\gamma^{-1}: C \to W(R)$, and these are inverses by uniqueness of lifts.
+
+ To show existence, let $R$ be a perfect ring. We form
+ \begin{align*}
+ \F_p[x_r^{p^{-\infty}}\mid r \in R] &\to R\\
+ x_r&\mapsto r
+ \end{align*}
+ Then we know that the $p$-adic completion of $\Z[x_r^{p^{-\infty}}\mid r \in R]$, written $A$, is a strict $p$-ring with
+ \[
+ A/pA \cong \F_p[x_r^{p^{-\infty}}\mid r \in R].
+ \]
+ We write
+ \[
+ I = \ker(\F_p[x_r^{p^{-\infty}}\mid r \in R] \to R).
+ \]
+ Then define
+ \[
+ J = \left\{\sum_{n = 0}^\infty [a_n] p^n \in A: a_n \in I\text{ for all $n$}\right\}.
+ \]
+ This turns out to be an ideal.
+ \[
+ \begin{tikzcd}
+ & J \ar[r, dashed] \ar[d] & A \ar[r] \ar[d] & R \ar[d, equals]\\
+ 0 \ar[r] & I \ar[r] & A/pA \ar[r] & R \ar[r] & 0
+ \end{tikzcd}
+ \]
+ We put $W(R) = A/J$. We can then painfully check that this has all the required properties. For example, if
+ \[
+ x = \sum_{n = 0}^\infty [a_n] p^n \in A,
+ \]
+ and
+ \[
+ px = \sum_{n = 0}^\infty [a_n] p^{n + 1} \in J,
+ \]
+ then by definition of $J$, we know $a_n \in I$ for all $n$. So $x \in J$. So $W(R) = A/J$ is $p$-torsion free. By a similar calculation, one checks that
+ \[
+ \bigcap_{n = 0}^\infty p^n W(R) = \{0\}.
+ \]
+ This implies that $W(R)$ injects into its $p$-adic completion. Using that $A$ is $p$-adically complete, one checks the surjectivity by hand.
+
+ Also, we have
+ \[
+ \frac{W(R)}{p W(R)} \cong \frac{A}{J + pA}.
+ \]
+ But we know
+ \[
+ J + pA = \left\{\sum_n [a_n] p^n \mid a_0 \in I\right\}.
+ \]
+ So we have
+ \[
+ \frac{W(R)}{pW(R)} \cong \frac{\F_p[x_r^{p^{-\infty}}\mid r \in R]}{I} \cong R.
+ \]
+ So we know that $W(R)$ is a strict $p$-ring.
+\end{proof}
+
+\begin{eg}
+ $W(\F_p) = \Z_p$, since $\Z_p$ satisfies all the properties $W(\F_p)$ is supposed to satisfy.
+\end{eg}
+
+\begin{prop}
+ A complete DVR $A$ of mixed characteristic with perfect residue field and such that $p$ is a uniformizer is the same as a strict $p$-ring $A$ such that $A/pA$ is a field.
+\end{prop}
+
+\begin{proof}
+ Let $A$ be a complete DVR such that $p$ is a uniformizer and $A/pA$ is perfect. Then $A$ is $p$-torsion free, as $A$ is an integral domain of characteristic $0$. Since it is also $p$-adically complete, it is a strict $p$-ring.
+
+ Conversely, if $A$ is a strict $p$-ring, and $A/pA$ is a field, then we have $A^\times \subseteq A \setminus pA$, and we claim that $A^\times = A \setminus pA$. Let
+ \[
+ x = \sum_{n = 0}^\infty [x_n] p^n
+ \]
+ with $x_0 \not= 0$, i.e.\ $x \not\in pA$. We want to show that $x$ is a unit. Since $A/pA$ is a field, we can multiply by $[x_0^{-1}]$, so we may wlog $x_0 = 1$. Then $x = 1 - py$ for some $y \in A$. So we can invert this with a geometric series
+ \[
+ x^{-1} = \sum_{n = 0}^\infty p^n y^n.
+ \]
+ So $x$ is a unit. Now, looking at Teichm\"uller expansions and factoring out multiples of $p$, any non-zero element $z$ can be written as $p^n u$ for a unique $n \in \Z_{\geq 0}$ and $u \in A^\times$. Then
+ \[
+ v(z) =
+ \begin{cases}
+ n & z \not= 0\\
+ \infty & z = 0
+ \end{cases}
+ \]
+ is a discrete valuation on $A$.
+\end{proof}
+
+\begin{defi}[Absolute ramification index]\index{absolute ramification index}
+ Let $R$ be a DVR with mixed characteristic $p$ with normalized valuation $v_R$. The integer $v_R(p)$ is called the \emph{absolute ramification index} of $R$.
+\end{defi}
+
+\begin{cor}
+ Let $R$ be a complete DVR of mixed characteristic with absolute ramification index $1$ and perfect residue field $k$. Then $R \cong W(k)$.
+\end{cor}
+
+\begin{proof}
+ Having absolute ramification index $1$ is the same as saying $p$ is a uniformizer. So $R$ is a strict $p$-ring with $R/pR \cong k$. By uniqueness of the Witt vector, we know $R \cong W(k)$.
+\end{proof}
+
+\begin{thm}
+ Let $R$ be a complete DVR of mixed characteristic $p$ with a perfect residue field $k$ and uniformizer $\pi$. Then $R$ is finite over $W(k)$.
+\end{thm}
+
+\begin{proof}
+ We need to first exhibit $W(k)$ as a subring of $R$. We know that $\id: k \to k$ lifts to a homomorphism $W(k) \to R$. The kernel is a prime ideal because $R$ is an integral domain. So it is either $0$ or $p W(k)$. But $R$ has characteristic $0$. So it can't be $pW(k)$. So this must be an injection.
+
+ Let $e$ be the absolute ramification index of $R$. We want to prove that
+ \[
+ R = \bigoplus_{i = 0}^{e - 1} \pi^i W(k).
+ \]
+ Looking at valuations, one sees that $1, \pi, \pi^2, \cdots, \pi^{e - 1}$ are linearly independent over $W(k)$. So we can form
+ \[
+ M = \bigoplus_{i = 0}^{e - 1} \pi^i W(k) \subseteq R.
+ \]
+ We consider $R/pR$. Looking at Teichm\"uller expansions
+ \[
+ \sum_{n = 0}^\infty [x_n] \pi^n \equiv \sum_{n = 0}^{e - 1} [x_n]\pi^n \mod pR,
+ \]
+ we see that $1, \pi, \cdots, \pi^{e - 1}$ generate $R/pR$ as a $W(k)$-module (all the Teichm\"uller lifts live in $W(k)$). Therefore $R = M + pR$. We iterate to get
+ \[
+ R = M + p(M + pR) = M + p^2 R = \cdots = M + p^m R
+ \]
+ for all $m \geq 1$. So $M$ is dense in $R$. But $M$ is also $p$-adically complete, hence closed in $R$. So $M = R$.
+\end{proof}
+
+The important statement to take away is
+\begin{cor}
+ Let $K$ be a mixed characteristic local field. Then $K$ is a finite extension of $\Q_p$.
+\end{cor}
+
+\begin{proof}
+ Let $\F_q$ be the residue field of $K$. Then $\mathcal{O}_K$ is finite over $W(\F_q)$ by the previous theorem. So it suffices to show that $W(\F_q)$ is finite over $W(\F_p) = \Z_p$. Again the inclusion $\F_p \subseteq \F_q$ gives an injection $W(\F_p) \hookrightarrow W(\F_q)$. Write $q = p^d$, and let $x_1, \cdots, x_d \in W(\F_q)$ be lifts of an $\F_p$-basis of $\F_q$. Then we have
+ \[
+ W(\F_q) = \bigoplus_{i = 1}^d x_i \Z_p + p W(\F_q),
+ \]
+ and then argue as in the end of the previous theorem to get
+ \[
+ W(\F_q) = \bigoplus_{i = 1}^d x_i \Z_p. \qedhere
+ \]
+\end{proof}
+
+\section{Some \texorpdfstring{$p$}{p}-adic analysis}
+We are now going to do some fun things that are not really related to the course. In ``normal'' analysis, the applied mathematicians hold the belief that every function can be written as a power series
+\[
+ f(x) = \sum_{n = 0}^\infty a_n x^n.
+\]
+When we move on to $p$-adic numbers, we do not get such a power series expansion. However, we obtain an analogous result using binomial coefficients.
+
+Before that, we have a quick look at our familiar functions $\exp$ and $\log$, which we shall continue to define as a power series:
+\[
+ \exp(x) = \sum_{n = 0}^\infty \frac{x^n}{n!},\quad \log(1 + x) = \sum_{n = 1}^\infty (-1)^{n - 1}\frac{x^n}{n}
+\]
+The domain will no longer be all of the field. Instead, we have the following result:
+\begin{prop}
+ Let $K$ be a complete valued field with an absolute value $|\ph|$, and assume that $K \supseteq \Q_p$ and $|\ph|$ restricts to the usual $p$-adic norm on $\Q_p$. Then $\exp(x)$ converges for $|x| < p^{-1/(p - 1)}$ and $\log(1 + x)$ converges for $|x| < 1$, and these define continuous maps
+ \begin{align*}
+ \exp: \{x \in K: |x| < p^{-1/(p - 1)}\} &\to \mathcal{O}_K\\
+ \log: \{1 + x \in K: |x| < 1 \} &\to K.
+ \end{align*}
+\end{prop}
+
+\begin{proof}
+ We let $v = -\log_p |\ph|$ be a valuation extending $v_p$. Then we have the dumb estimate
+ \[
+ v(n) \leq \log_p n.
+ \]
+ Then we have
+ \[
+ v\left(\frac{x^n}{n}\right) \geq n \cdot v(x) - \log_p n \to \infty
+ \]
+ if $v(x) > 0$. So $\log$ converges.
+
+ For $\exp$, we have
+ \[
+ v(n!) = \frac{n - s_p(n)}{p - 1},
+ \]
+ where $s_p(n)$ is the sum of the $p$-adic digits of $n$. Then we have
+ \[
+ v\left(\frac{x^n}{n!}\right) \geq n\cdot v(x) - \frac{n}{p - 1} = n\cdot\left(v(x) - \frac{1}{p - 1}\right)\to \infty
+ \]
+ if $v(x) > 1/(p - 1)$. Since $v\left(\frac{x^n}{n!}\right) \geq 0$, this lands in $\mathcal{O}_K$.
+
+ For the continuity, we just use uniform convergence as in the real case.
+\end{proof}
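+
+The formula for $v(n!)$ used above is Legendre's formula, which we can sanity-check numerically (a sketch):
+
```python
# Sketch: checking Legendre's formula v_p(n!) = (n - s_p(n))/(p - 1), where
# s_p(n) is the sum of the base-p digits of n.
from math import factorial

def v_p(m, p):
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def s_p(n, p):
    s = 0
    while n:
        s, n = s + n % p, n // p
    return s

for p in (2, 3, 5):
    for n in range(1, 60):
        assert v_p(factorial(n), p) == (n - s_p(n, p)) // (p - 1)
print("Legendre's formula checked for p = 2, 3, 5 and n < 60")
```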
+
+What we really want to talk about is binomial coefficients. Let $n \geq 1$. Then we know that
+\[
+ \binom{x}{n} = \frac{x(x - 1) \cdots (x - n + 1)}{n!}
+\]
+is a polynomial in $x$, and so defines a continuous function $\Z_p \to \Q_p$ by $x \mapsto \binom{x}{n}$. When $n = 0$, we set $\binom{x}{0} = 1$ for all $x \in \Z_p$.
+
+We know $\binom{x}{n} \in \Z$ if $x \in \Z_{\geq 0}$. So by density of $\Z_{\geq 0} \subseteq \Z_p$, we must have $\binom{x}{n} \in \Z_p$ for all $x \in \Z_p$.
+
+We will eventually want to prove the following result:
+\begin{thm}[Mahler's theorem]\index{Mahler's theorem}
+ Let $f: \Z_p \to \Q_p$ be any continuous function. Then there is a unique sequence $(a_n)_{n \geq 0}$ with $a_n \in \Q_p$ and $a_n \to 0$ such that
+ \[
+ f(x) = \sum_{n = 0}^\infty a_n \binom{x}{n},
+ \]
+ and moreover
+ \[
+ \sup_{x \in \Z_p}|f(x)| = \max_{k \in \N} |a_k|.
+ \]
+\end{thm}
+
+We write $C(\Z_p, \Q_p)$\index{$C(\Z_p, \Q_p)$} for the set of continuous functions $\Z_p \to \Q_p$ as usual. This is a $\Q_p$-vector space in the usual way, with
+\[
+ (\lambda f + \mu g)(x) = \lambda f(x) + \mu g(x)
+\]
+for all $\lambda, \mu \in \Q_p$ and $f, g \in C(\Z_p, \Q_p)$ and $x \in \Z_p$.
+
+If $f \in C(\Z_p, \Q_p)$, we set
+\[
+ \|f\| = \sup_{x \in \Z_p} |f(x)|_p.
+\]
+Since $\Z_p$ is compact, we know that $f$ is bounded. So the supremum exists and is attained.
+
+\begin{prop}
+ The norm $\|\ph\|$ defined above is indeed a (non-archimedean) norm, and $C(\Z_p, \Q_p)$ is complete under this norm.
+\end{prop}
+
+Let $c_0$\index{$c_0$} denote the set of sequences $(a_n)_{n = 0}^\infty$ in $\Q_p$ such that $a_n \to 0$. This is a $\Q_p$-vector space with a norm
+\[
+ \|(a_n)\| = \max_{n \in \N} |a_n|_p,
+\]
+and $c_0$ is complete. So what Mahler's theorem gives us is an isometric isomorphism between $c_0$ and $C(\Z_p, \Q_p)$.
+
+We define
+\[
+ \Delta: C(\Z_p, \Q_p) \to C(\Z_p, \Q_p)
+\]
+by
+\[
+ \Delta f(x) = f(x + 1) - f(x).
+\]
+By induction, we have
+\[
+ \Delta^n f(x) = \sum_{i = 0}^n (-1)^i \binom{n}{i} f(x + n - i).
+\]
+Note that $\Delta$ is a linear operator on $C(\Z_p, \Q_p)$, and moreover
+\[
+ |\Delta f(x)|_p = |f(x + 1) - f(x)|_p \leq \|f\|.
+\]
+So we have
+\[
+ \|\Delta f\| \leq \|f\|.
+\]
+In other words, we have
+\[
+ \|\Delta\| \leq 1.
+\]
+\begin{defi}[Mahler coefficient]\index{Mahler coefficient}
+ Let $f \in C(\Z_p, \Q_p)$. The $n$th Mahler coefficient $a_n(f) \in \Q_p$ is defined by the formula
+ \[
+ a_n(f) = \Delta^n(f)(0) = \sum_{i = 0}^n (-1)^i \binom{n}{i} f(n - i).
+ \]
+\end{defi}
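+
+As a sanity check (a sketch), we can compute the Mahler coefficients of $f(x) = x^2$ from this formula, and confirm that they reconstruct $f$ on $\Z_{\geq 0}$:
+
```python
# Sketch: Mahler coefficients a_n(f) = Delta^n f(0) of f(x) = x^2, and the
# reconstruction f(x) = sum_n a_n binom(x, n) on non-negative integers.
from math import comb

def mahler(f, N):
    return [sum((-1)**i * comb(n, i) * f(n - i) for i in range(n + 1))
            for n in range(N)]

a = mahler(lambda x: x**2, 6)
print(a)  # [0, 1, 2, 0, 0, 0], i.e. x^2 = binom(x, 1) + 2 binom(x, 2)
for x in range(20):
    assert x**2 == sum(a[n] * comb(x, n) for n in range(6))
```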
+We will eventually show that these are the $a_n$'s that appear in Mahler's theorem. The first thing to prove is that these coefficients do tend to $0$. We already know that they don't go up, so we just have to show that they always eventually go down.
+
+\begin{lemma}
+ Let $f \in C(\Z_p, \Q_p)$. Then there exists some $k\geq 1$ such that
+ \[
+ \|\Delta^{p^k}f\| \leq \frac{1}{p} \|f\|.
+ \]
+\end{lemma}
+
+\begin{proof}
+ If $f = 0$, there is nothing to prove. So we will wlog $\|f\| = 1$ by scaling (this is possible since the norm is attained at some $x_0$, so we can just divide by $f(x_0)$). We want to find some $k$ such that
+ \[
+ \Delta^{p^k}f(x) \equiv 0 \mod p
+ \]
+ for all $x$. To do so, we use the explicit formula
+ \[
+ \Delta^{p^k} f(x) = \sum_{i = 0}^{p^k} (-1)^i \binom{p^k}{i} f(x + p^k - i) \equiv f(x + p^k) - f(x)\pmod p
+ \]
+ because the binomial coefficients $\binom{p^k}{i}$ are divisible by $p$ for $i \not= 0, p^k$. Note that we do have a negative sign in front of $f(x)$, because $(-1)^{p^k}$ is $-1$ as long as $p$ is odd, and $1 \equiv -1 \pmod 2$ if $p = 2$.
+
+ Now $\Z_p$ is compact. So $f$ is uniformly continuous. So there is some $k$ such that $|x - y|_p \leq p^{-k}$ implies $|f(x) - f(y)|_p \leq p^{-1}$ for all $x, y \in \Z_p$. So take this $k$, and we're done.
+\end{proof}
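+
+Both ingredients of this proof are easy to check numerically (a sketch):
+
```python
# Sketch: binom(p^k, i) = 0 mod p for 0 < i < p^k, and hence
# Delta^{p^k} f(x) = f(x + p^k) - f(x) mod p.
from math import comb

p, k = 3, 2
q = p**k
assert all(comb(q, i) % p == 0 for i in range(1, q))

f = lambda x: x**3 + 2*x + 1      # an arbitrary integer-valued test function
for x in range(5):
    delta_qf = sum((-1)**i * comb(q, i) * f(x + q - i) for i in range(q + 1))
    assert delta_qf % p == (f(x + q) - f(x)) % p
print("Delta^{p^k} f = f(x + p^k) - f(x) mod p verified for p = 3, k = 2")
```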
+
+We can now prove that the Mahler coefficients tend to $0$.
+
+\begin{prop}
+ The map $f \mapsto (a_n(f))_{n = 0}^\infty$ defines an injective norm-decreasing linear map $C(\Z_p, \Q_p) \to c_0$.
+\end{prop}
+
+\begin{proof}
+ First we prove that $a_n(f) \to 0$. We know that
+ \[
+ |a_n(f)|_p \leq \|\Delta^n f\|.
+ \]
+ So it suffices to show that $\|\Delta^n f\| \to 0$. Since $\|\Delta\| \leq 1$, we know $\|\Delta^n f\|$ is monotonically decreasing. So it suffices to find a subsequence that tends to $0$. To do so, we simply apply the lemma repeatedly to get $k_1, k_2, \cdots$ such that
+ \[
+ \norm{\Delta^{p^{k_1} + \cdots + p^{k_n}} f} \leq \frac{1}{p^n}\|f\|.
+ \]
+ This gives the desired sequence.
+
+ Note that
+ \[
+ |a_n(f)|_p \leq \|\Delta^n f\| \leq \|f\|.
+ \]
+ So we know
+ \[
+ \|(a_n(f))_n\| = \max |a_n(f)|_p \leq \|f\|.
+ \]
+ So the map is norm-decreasing. Linearity follows from linearity of $\Delta$. To finish, we have to prove injectivity.
+
+ Suppose $a_n(f) = 0$ for all $n \geq 0$. Then
+ \[
+ a_0(f) = f(0) = 0,
+ \]
+ and by induction, we have
+ \[
+ f(n) = a_n(f) - \sum_{i = 1}^n (-1)^i \binom{n}{i} f(n - i) = 0
+ \]
+ for all $n \geq 1$, using $f(0) = \cdots = f(n - 1) = 0$. So $f$ is constantly zero on $\Z_{\geq 0}$. By continuity, it must be zero everywhere on $\Z_p$.
+\end{proof}
+
We are almost at Mahler's theorem. We have found some candidate coefficients already, and we want to see that they work. We start by proving a small, familiar lemma.
+\begin{lemma}
+ We have
+ \[
+ \binom{x}{n} + \binom{x}{n - 1} = \binom{x + 1}{n}
+ \]
+ for all $n \in \Z_{\geq 1}$ and $x \in \Z_p$.
+\end{lemma}
+
+\begin{proof}
 It is well known that this is true when $x \in \Z_{\geq n}$. Since both sides are polynomials in $x$, agreeing on infinitely many values implies that they agree as polynomials, and hence for all $x \in \Z_p$.
+\end{proof}
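Since the proof reduces to a polynomial identity, it can also be checked by direct computation. A Python sketch (illustrative only), evaluating the falling-factorial definition of $\binom{x}{n}$ over the rationals:

```python
from fractions import Fraction

def binom(x, n):
    """C(x, n) = x(x-1)...(x-n+1)/n! for arbitrary rational x."""
    out = Fraction(1)
    for i in range(n):
        out *= Fraction(x - i, n - i)
    return out

# Pascal's rule is a polynomial identity, so it holds at every point,
# including negative integers:
for n in range(1, 6):
    for x in range(-10, 11):
        assert binom(x, n) + binom(x, n - 1) == binom(x + 1, n)

# For integer x, C(x, n) is an integer -- the fact behind |C(x, n)|_p <= 1:
assert all(binom(x, n).denominator == 1
           for x in range(-10, 11) for n in range(6))
```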
+
+\begin{prop}
+ Let $a = (a_n)_{n = 0}^\infty \in c_0$. We define $f_a: \Z_p \to \Q_p$ by
+ \[
+ f_a(x) = \sum_{n = 0}^\infty a_n \binom{x}{n}.
+ \]
+ This defines a norm-decreasing linear map $c_0 \to C(\Z_p, \Q_p)$. Moreover $a_n(f_a) = a_n$ for all $n \geq 0$.
+\end{prop}
+
+\begin{proof}
+ Linearity is clear. Norm-decreasing follows from
+ \[
 |f_a(x)|_p = \abs{\sum a_n \binom{x}{n}}_p \leq \sup_n |a_n|_p \abs{\binom{x}{n}}_p \leq \sup_n |a_n|_p = \|a\|,
+ \]
+ where we used the fact that $\binom{x}{n} \in \Z_p$, hence $\abs{\binom{x}{n}}_p \leq 1$.
+
+ Taking the supremum, we know that
+ \[
+ \|f_a\| \leq \|a\|.
+ \]
+ For the last statement, for all $k \in \Z_{\geq 0}$, we define
+ \[
 a^{(k)} = (a_k, a_{k + 1}, a_{k + 2}, \cdots).
+ \]
+ Then we have
+ \begin{align*}
+ \Delta f_a(x) &= f_a(x + 1) - f_a(x) \\
+ &= \sum_{n = 1}^\infty a_n \left(\binom{x + 1}{n} - \binom{x}{n}\right)\\
+ &= \sum_{n = 1}^\infty a_n \binom{x}{n - 1} \\
+ &= \sum_{n = 0}^\infty a_{n + 1} \binom{x}{n}\\
 &= f_{a^{(1)}}(x).
+ \end{align*}
+ Iterating, we have
+ \[
+ \Delta^k f_a = f_{a^{(k)}}.
+ \]
+ So we have
+ \[
+ a_n(f_a) = \Delta^n f_a(0) = f_{a^{(n)}}(0) = a_n.\qedhere
+ \]
+\end{proof}
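For a polynomial $f$, only finitely many Mahler coefficients are non-zero, so the round trip $f \mapsto (a_n(f)) \mapsto f$ can be checked directly over $\Z$. A Python sketch (illustrative; $f(x) = x^2$ is an arbitrary choice):

```python
from math import comb

def mahler_coeffs(f, N):
    """First N Mahler coefficients a_n(f) = Delta^n f(0)."""
    return [sum((-1) ** i * comb(n, i) * f(n - i) for i in range(n + 1))
            for n in range(N)]

def from_coeffs(a, x):
    """f_a(x) = sum_n a_n C(x, n) (a finite sum here), for x in Z_{>=0}."""
    return sum(a_n * comb(x, n) for n, a_n in enumerate(a))

f = lambda x: x ** 2          # polynomial, so finitely many nonzero coefficients
a = mahler_coeffs(f, 5)
assert a == [0, 1, 2, 0, 0]   # i.e. x^2 = C(x, 1) + 2 C(x, 2)
assert all(from_coeffs(a, x) == f(x) for x in range(10))
```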
+Summing up, we now have maps
+\[
+ \begin{tikzcd}
+ C(\Z_p, \Q_p) \ar[r, yshift=2, "F"] & c_0 \ar[l, yshift=-2, "G"]
+ \end{tikzcd}
+\]
+with
+\begin{align*}
+ F(f) &= (a_n(f))\\
+ G(a) &= f_a.
+\end{align*}
We know that $F$ is injective and norm-decreasing, and $G$ is norm-decreasing and $FG = \id$. It then follows formally that $GF = \id$ and the maps are norm-preserving.
+
+\begin{lemma}
+ Suppose $V, W$ are normed spaces, and $F: V \to W$, $G: W \to V$ are maps such that $F$ is injective and norm-decreasing, and $G$ is norm-decreasing and $FG = \id_W$. Then $GF = \id_V$ and $F$ and $G$ are norm-preserving.
+\end{lemma}
+
+\begin{proof}
+ Let $v \in V$. Then
+ \[
+ F(v - GFv) = Fv - FGF v = (F - F)v = 0.
+ \]
+ Since $F$ is injective, we have
+ \[
+ v = GF v.
+ \]
+ Also, we have
+ \[
+ \|v\| \geq \|Fv\| \geq \|GFv\| = \|v\|.
+ \]
+ So we have equality throughout. Similarly, we have $\|v\| = \|Gv\|$.
+\end{proof}
This finishes the proof of Mahler's theorem, and also finishes this section on $p$-adic analysis.
+
+\section{Ramification theory for local fields}
+From now on, the characteristic of the residue field of any local field will be denoted $p$, unless stated otherwise.
+
+\subsection{Ramification index and inertia degree}
Suppose we have an extension $L/K$ of local fields. Then since $\mathcal{O}_K \subseteq \mathcal{O}_L$ and $\mathfrak{m}_K = \mathfrak{m}_L \cap \mathcal{O}_K$, we obtain an injection
+\[
 k_K = \frac{\mathcal{O}_K}{\mathfrak{m}_K} \hookrightarrow \frac{\mathcal{O}_L}{\mathfrak{m}_L} = k_L.
+\]
+So we also get an extension of residue fields $k_L/k_K$. The question we want to ask is how much of the extension is ``due to'' the extension of residue fields $k_L/k_K$, and how much is ``due to'' other things happening.
+
+It turns out these are characterized by the following two numbers:
+\begin{defi}[Inertia degree]\index{inertia degree}
+ Let $L/K$ be a finite extension of local fields. The \emph{inertia degree} of $L/K$ is
+ \[
+ f_{L/K} = [k_L:k_K].
+ \]
+\end{defi}
+
+\begin{defi}[Ramification index]\index{ramification index}
+ Let $L/K$ be a finite extension of local fields, and let $v_L$ be the normalized valuation of $L$ and $\pi_K$ a uniformizer of $K$. The integer
+ \[
+ e_{L/K} = v_L(\pi_K)
+ \]
+ is the \emph{ramification index} of $L/K$.
+\end{defi}
+
+The goal of the section is to show the following result:
+\begin{thm}
+ Let $L/K$ be a finite extension. Then
+ \[
+ [L:K] = e_{L/K}f_{L/K}.
+ \]
+\end{thm}
+
+We then have two extreme cases of ramification:
+\begin{defi}[Unramified extension]\index{unramified extension}
+ Let $L/K$ be a finite extension of local fields. We say $L/K$ is \emph{unramified} if $e_{L/K} = 1$, i.e.\ $f_{L/K} = [L:K]$.
+\end{defi}
+
+\begin{defi}[Totally ramified extension]\index{totally ramified extension}
+ Let $L/K$ be a finite extension of local fields. We say $L/K$ is \emph{totally ramified} if $f_{L/K} = 1$, i.e.\ $e_{L/K} = [L:K]$.
+\end{defi}
+
+In the next section we will, amongst many things, show that every extension of local fields can be written as an unramified extension followed by a totally ramified extension.
+
Recall the following: let $R$ be a PID and $M$ a finitely-generated $R$-module. Assume that $M$ is torsion-free. Then there is a unique integer $n \geq 0$ such that $M \cong R^n$. We say $M$ has \term{rank} $n$. Moreover, if $N \subseteq M$ is a submodule, then $N$ is finitely-generated and free, so $N\cong R^m$ for some $m \leq n$.
+
+\begin{prop}
 Let $K$ be a local field, and $L/K$ a finite extension of degree $n$. Then $\mathcal{O}_L$ is a finitely-generated free $\mathcal{O}_K$-module of rank $n$, and $k_L/k_K$ is an extension of degree $\leq n$.
+
+ Moreover, $L$ is also a local field.
+\end{prop}
+
+\begin{proof}
 Choose a $K$-basis $\alpha_1, \cdots, \alpha_n$ of $L$. Let $\|\ph\|$ denote the maximum norm on $L$:
+ \[
+ \norm{\sum_{i = 1}^n x_i \alpha_i} = \max_{i = 1, \ldots, n} |x_i|
+ \]
+ as before. Again, we know that $\|\ph\|$ is equivalent to the extended norm $|\ph|$ on $L$ as $K$-norms. So we can find $r > s > 0$ such that
+ \[
+ M = \{x \in L: \|x\| \leq s\} \subseteq \mathcal{O}_L \subseteq N = \{x \in L : \|x\| \leq r\}.
+ \]
+ Increasing $r$ and decreasing $s$ if necessary, we wlog $r = |a|$ and $s = |b|$ for some $a, b \in K$.
+
+ Then we can write
+ \[
 M = \bigoplus_{i = 1}^n \mathcal{O}_K b \alpha_i \subseteq \mathcal{O}_L \subseteq N = \bigoplus_{i = 1}^n \mathcal{O}_K a \alpha_i.
+ \]
+ We know that $N$ is finitely generated and free of rank $n$ over $\mathcal{O}_K$, and so is $M$. So $\mathcal{O}_L$ must be finitely generated and free of rank $n$ over $\mathcal{O}_K$.
+
 Since $\mathfrak{m}_K = \mathfrak{m}_L \cap \mathcal{O}_K$, we have a natural injection
+ \[
 k_K = \frac{\mathcal{O}_K}{\mathfrak{m}_K} \hookrightarrow \frac{\mathcal{O}_L}{\mathfrak{m}_L} = k_L.
+ \]
 Since $\mathcal{O}_L$ is generated over $\mathcal{O}_K$ by $n$ elements, we know that $k_L$ is generated by $n$ elements over $k_K$. So the extension $k_L/k_K$ has degree at most $n$.
+
 To see that $L$ is a local field: $k_L/k_K$ is a finite extension and $k_K$ is finite, so $k_L$ is finite. Moreover, $L$ is complete under the norm because it is a finite-dimensional vector space over a complete field.
+
 Finally, to see that the valuation is discrete, let $v_K$ be the normalized valuation on $K$, and $w$ the unique extension of $v_K$ to $L$. Then we have
+ \[
+ w(\alpha) = \frac{1}{n} v_K(N_{L/K}(\alpha)).
+ \]
+ So we have
+ \[
 w(L^\times) \subseteq \frac{1}{n} v_K(K^\times) = \frac{1}{n} \Z.
+ \]
+ So it is discrete.
+\end{proof}
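The formula $w(\alpha) = \frac{1}{n} v_K(N_{L/K}(\alpha))$ can be made concrete for quadratic extensions of $\Q_2$, where $N(a + b\sqrt{d}) = a^2 - db^2$. The following Python sketch (illustrative only) computes $w$ this way; the choices $d = 2$ (a ramified extension) and $d = -3$ (which generates the unramified quadratic extension of $\Q_2$) are sample inputs:

```python
from fractions import Fraction

def v2(q):
    """Normalized 2-adic valuation of a nonzero rational."""
    q = Fraction(q)
    n, d, v = q.numerator, q.denominator, 0
    while n % 2 == 0:
        n //= 2
        v += 1
    while d % 2 == 0:
        d //= 2
        v -= 1
    return v

def w(a, b, d):
    """w(a + b*sqrt(d)) in Q_2(sqrt(d)), computed as (1/2) v_2(N(alpha))
       with N(a + b sqrt d) = a^2 - d*b^2."""
    return Fraction(v2(a * a - d * b * b), 2)

# Q_2(sqrt 2): w(sqrt 2) = 1/2, so the value group contains (1/2)Z and e = 2.
assert w(0, 1, 2) == Fraction(1, 2)
# Q_2(sqrt -3): sample elements have integral valuation, consistent with e = 1.
assert w(0, 1, -3) == 0 and w(1, 1, -3) == 1
```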
+
+Note that we cannot just pick an arbitrary basis of $L/K$ and scale it to give a basis of $\mathcal{O}_L/\mathcal{O}_K$. For example, $\Q_2(\sqrt{2})/\Q_2$ has basis $1, \sqrt{2}$, but $|\sqrt{2}| = \frac{1}{\sqrt{2}}$ and cannot be scaled to $1$ by an element in $\Q_2$.
+
Even if such a scaled basis exists, it need not give a basis of the rings of integers. For example, $\Q_3(\sqrt{-1}) / \Q_3$ has a $\Q_3$-basis $1, 1 + 3\sqrt{-1}$ with $|1 + 3\sqrt{-1}| = 1$, but
+\[
+ \sqrt{-1} \not\in \Z_3 + \Z_3(1 + 3\sqrt{-1}).
+\]
+So this is not a basis of $\mathcal{O}_{\Q_3(\sqrt{-1})}$ over $\Z_3$.
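To see why, compare coefficients: writing $\sqrt{-1} = a + b(1 + 3\sqrt{-1})$ forces $3b = 1$, so $b = \frac{1}{3} \not\in \Z_3$. A short Python check of this arithmetic (illustrative only):

```python
from fractions import Fraction

# Comparing coefficients in sqrt(-1) = a + b*(1 + 3*sqrt(-1))
#                                    = (a + b) + 3b*sqrt(-1)
# forces a + b = 0 and 3b = 1:
b = Fraction(1, 3)
a = -b
assert a + b == 0 and 3 * b == 1
# b = 1/3 has 3-adic valuation -1, so b is not in Z_3:
assert b.denominator % 3 == 0
```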
+
+
+\begin{thm}
+ Let $L/K$ be a finite extension. Then
+ \[
+ [L:K] = e_{L/K}f_{L/K},
+ \]
+ and there is some $\alpha \in \mathcal{O}_L$ such that $\mathcal{O}_L = \mathcal{O}_K[\alpha]$.
+\end{thm}
+
+\begin{proof}
+ We will be lazy and write $e = e_{L/K}$ and $f = f_{L/K}$. We first note that $k_L/k_K$ is separable, so there is some $\bar{\alpha} \in k_L$ such that $k_L = k_K(\bar{\alpha})$ by the primitive element theorem. Let
+ \[
+ \bar{f}(x) \in k_K[x]
+ \]
 be the minimal polynomial of $\bar{\alpha}$ over $k_K$, and let $f \in \mathcal{O}_K[x]$ be a monic lift of $\bar{f}$ with $\deg f = \deg \bar{f}$ (we temporarily reuse the letter $f$ for this polynomial; context distinguishes it from the inertia degree).
+
+ We first claim that there is some $\alpha \in \mathcal{O}_L$ lifting $\bar{\alpha}$ such that $v_L(f(\alpha)) = 1$ (note that it is always $\geq 1$). To see this, we just take any lift $\beta$. If $v_L(f(\beta)) = 1$, then we are happy and set $\alpha = \beta$. If it doesn't work, we set $\alpha = \beta + \pi_L$, where $\pi_L$ is the uniformizer of $L$.
+
+ Then we have
+ \[
+ f(\alpha) = f(\beta + \pi_L) = f(\beta) + f'(\beta) \pi_L + b \pi_L^2
+ \]
+ for some $b \in \mathcal{O}_L$, by Taylor expansion around $\beta$. Since $v_L(f(\beta)) \geq 2$ and $v_L(f'(\beta)) = 0$ (since $\bar{f}$ is separable, we know $f'(\beta)$ does not vanish when we reduce mod $\mathfrak{m}$), we know $v_L(f(\alpha)) = 1$. So $f(\alpha)$ is a uniformizer of $L$.
+
 We write $\pi = f(\alpha)$ for the uniformizer just constructed. We now claim that the elements $\alpha^i \pi^j$ for $i = 0, \cdots, f - 1$ and $j = 0, \cdots, e - 1$ are an $\mathcal{O}_K$-basis of $\mathcal{O}_L$. To show linear independence, suppose we have
+ \[
+ \sum_{i, j} a_{ij} \alpha^i \pi^j = 0
+ \]
+ for some $a_{ij} \in K$ not all $0$. We put
+ \[
+ s_j = \sum_{i = 0}^{f - 1} a_{ij}\alpha^i.
+ \]
+ We know that $1, \alpha, \cdots, \alpha^{f - 1}$ are linearly independent over $K$ since their reductions are linearly independent over $k_K$. So there must be some $j$ such that $s_j \not= 0$.
+
+ The next claim is that if $s_j \not= 0$, then $e \mid v_L(s_j)$. We let $k$ be an index for which $|a_{kj}|$ is maximal. Then we have
+ \[
+ a_{kj}^{-1}s_j = \sum_{i = 0}^{f - 1} a_{kj}^{-1} a_{ij} \alpha^i.
+ \]
 Now note that by assumption, the coefficients on the right have absolute value $\leq 1$, and the coefficient is exactly $1$ when $i = k$. So we know that
+ \[
+ a_{kj}^{-1} s_j \not\equiv 0 \mod \pi_L,
+ \]
+ because $1, \bar{\alpha}, \cdots, \bar{\alpha}^{f - 1}$ are linearly independent. So we have
+ \[
+ v_L(a_{kj}^{-1} s_j) = 0.
+ \]
+ So we must have
+ \[
 v_L(s_j) = v_L(a_{kj}) + v_L(a_{kj}^{-1} s_j) = v_L(a_{kj}) \in v_L(K^\times) = e\, v_K(K^\times) = e\Z.
+ \]
+ Now we write
+ \[
+ \sum a_{ij} \alpha^i \pi^j = \sum_{j = 0}^{e - 1} s_j \pi^j = 0.
+ \]
+ If $s_j \not= 0$, then we have $v_L(s_j \pi^j) = v_L(s_j) + j \in j + e\Z$. So no two non-zero terms in $\sum_{j = 0}^{e - 1} s_j \pi^j$ have the same valuation. This implies that $\sum_{j = 0}^{e - 1} s_j \pi^j \not= 0$, which is a contradiction.
+
+ We now want to prove that
+ \[
+ \mathcal{O}_L = \bigoplus_{i, j} \mathcal{O}_K \alpha^i \pi^j.
+ \]
+ We let
+ \[
+ M = \bigoplus_{i, j} \mathcal{O}_K \alpha^i \pi^j,
+ \]
+ and put
+ \[
 N = \bigoplus_{i = 0}^{f - 1} \mathcal{O}_K \alpha^i.
+ \]
+ Then we have
+ \[
+ M = N + \pi N + \pi^2 N + \cdots + \pi^{e - 1}N.
+ \]
+ We are now going to use the fact that $1, \bar{\alpha}, \cdots, \bar{\alpha}^{f - 1}$ span $k_L$ over $k_K$. So we must have that $\mathcal{O}_L = N + \pi \mathcal{O}_L$. We iterate this to obtain
+ \begin{align*}
 \mathcal{O}_L &= N + \pi(N + \pi \mathcal{O}_L) \\
+ &= N + \pi N + \pi^2 \mathcal{O}_L \\
+ &= \cdots \\
 &= N + \pi N + \pi^2 N + \cdots + \pi^{e - 1}N + \pi^e \mathcal{O}_L\\
+ &= M + \pi_K \mathcal{O}_L,
+ \end{align*}
+ using the fact that $\pi_K$ and $\pi^e$ have the same valuation, and thus they differ by a unit in $\mathcal{O}_L$. Iterating this again, we have
+ \[
 \mathcal{O}_L = M + \pi_K^n \mathcal{O}_L
+ \]
+ for all $n \geq 1$. So $M$ is dense in $\mathcal{O}_L$. But $M$ is the closed unit ball in the subspace
+ \[
 \bigoplus_{i, j}K \alpha^i \pi^j \subseteq L
+ \]
 with respect to the maximum norm determined by the given basis. So $M$ is complete, hence closed in $\mathcal{O}_L$, and thus $M = \mathcal{O}_L$.
+
+ Finally, since $\alpha^i \pi^j = \alpha^i f(\alpha)^j$ is a polynomial in $\alpha$, we know that $\mathcal{O}_L = \mathcal{O}_K[\alpha]$.
+\end{proof}
+
+\begin{cor}
+ If $M/L/K$ is a tower of finite extensions of local fields, then
+ \begin{align*}
+ f_{M/K} &= f_{L/K}f_{M/L}\\
+ e_{M/K} &= e_{L/K}e_{M/L}
+ \end{align*}
+\end{cor}
+
+\begin{proof}
+ The multiplicativity of $f_{M/K}$ follows from the tower law for the residue fields, and the multiplicativity of $e_{M/K}$ follows from the tower law for the local fields and that $f_{M/K}e_{M/K} = [M:K]$.
+\end{proof}
+\subsection{Unramified extensions}
+Unramified extensions are easy to classify, since they just correspond to extensions of the residue field.
+\begin{thm}
+ Let $K$ be a local field. For every finite extension $\ell /k_K$, there is a \emph{unique} (up to isomorphism) finite unramified extension $L/K$ with $k_L \cong \ell$ over $k_K$. Moreover, $L/K$ is Galois with
+ \[
+ \Gal(L/K) \cong \Gal(\ell/k_K).
+ \]
+\end{thm}
+
+\begin{proof}
+ We start with existence. Let $\bar{\alpha}$ be a primitive element of $\ell/k_K$ with minimal polynomial $\bar{f} \in k_K[x]$. Take a monic lift $f \in \mathcal{O}_K[x]$ of $\bar{f}$ such that $\deg f = \deg \bar{f}$. Note that since $\bar{f}$ is irreducible, we know $f$ is irreducible. So we can take $L = K(\alpha)$, where $\alpha$ is a root of $f$ (i.e.\ $L = K[x]/f$). Then we have
+ \[
+ [L:K] = \deg f = \deg(\bar{f}) = [\ell:k_K].
+ \]
 Moreover, $k_L$ contains a root of $\bar{f}$, namely the reduction of $\alpha$. So there is an embedding $\ell \hookrightarrow k_L$, sending $\bar{\alpha}$ to the reduction of $\alpha$. So we have
+ \[
 [k_L:k_K] \geq [\ell:k_K] = [L:K].
+ \]
+ So $L/K$ must be unramified and $k_L \cong \ell$ over $k_K$.
+
+ Uniqueness and the Galois property follow from the following lemma:
+\end{proof}
+
+\begin{lemma}
+ Let $L/K$ be a finite unramified extension of local fields and let $M/K$ be a finite extension. Then there is a natural bijection
+ \[
+ \Hom_{K\operatorname{-}\mathrm{Alg}}(L, M) \longleftrightarrow \Hom_{k_K\operatorname{-}\mathrm{Alg}} (k_L, k_M)
+ \]
+ given in one direction by restriction followed by reduction.
+\end{lemma}
+
+\begin{proof}
+ By the uniqueness of extended absolute values, any $K$-algebra homomorphism $\varphi: L \hookrightarrow M$ is an isometry for the extended absolute values. In particular, we have $\varphi(\mathcal{O}_L) \subseteq \mathcal{O}_M$ and $\varphi(\mathfrak{m}_L) \subseteq \mathfrak{m}_M$. So we get an induced $k_K$-algebra homomorphism $\bar\varphi: k_L \to k_M$.
+
+ So we obtain a map
+ \[
 \Hom_{K\text{-}\mathrm{Alg}}(L, M) \to \Hom_{k_K\text{-}\mathrm{Alg}} (k_L, k_M).
+ \]
 To see this is bijective, we take a primitive element $\bar{\alpha} \in k_L$ over $k_K$, with minimal polynomial $\bar{f} \in k_K[x]$. We take a monic lift $f$ of $\bar{f}$ to $\mathcal{O}_K[x]$, and let $\alpha \in \mathcal{O}_L$ be the unique root of $f$ which lifts $\bar{\alpha}$, which exists by Hensel's lemma. Then by counting dimensions, the fact that the extension is unramified tells us that
+ \[
+ k_L = k_K(\bar{\alpha}),\quad L = K(\alpha).
+ \]
+ So we can construct the following diagram:
+ \[
+ \begin{tikzcd}
+ \varphi \ar[d, maps to] & \Hom_{K\text{-}\mathrm{Alg}}(L, M) \ar[d, "\cong"] \ar[r, "\text{reduction}"] & \Hom_{k_K\text{-}\mathrm{Alg}} (k_L, k_M) \ar[d, "\cong"] & \bar{\varphi} \ar[d, maps to]\\
+ \varphi(\alpha) & \{x \in M: f(x) = 0\} \ar[r, "\text{reduction}"] & \{\bar{x} \in k_M: \bar{f}(\bar{x}) = 0\} & \bar\varphi(\bar{\alpha})
+ \end{tikzcd}
+ \]
+ But the bottom map is a bijection by Hensel's lemma. So done.
+\end{proof}
+Alternatively, given a map $\bar\varphi: k_L \to k_M$, we can lift it to the map $\varphi: L \to M$ given by
+\[
 \varphi\left(\sum [a_n] \pi_K^n\right) = \sum [\bar\varphi(a_n)] \pi_K^n,
+\]
using the fact that $\pi_K$ is still a uniformizer in $L$ since the extension is unramified. So we get an explicit inverse.
+
+\begin{proof}[Proof of theorem (continued)]
 To finish off the proof of the theorem, suppose $L$ and $M$ are two unramified extensions of $K$ whose residue fields are isomorphic over $k_K$, and pick an isomorphism $\bar\varphi: k_L \to k_M$ over $k_K$. Then $\bar{\varphi}$ lifts to a $K$-embedding $\varphi: L \hookrightarrow M$, and $[L:K] = [M:K]$ implies that $\varphi$ is an isomorphism.
+
+ To see that the extension is Galois, we just notice that
+ \[
+ |\Aut_K(L)| = |\Aut_{k_K}(k_L)| = [k_L:k_K] = [L:K].
+ \]
 So $L/K$ is Galois. Moreover, the bijection $\Aut_K(L) \to \Aut_{k_K}(k_L)$ is a group homomorphism, hence an isomorphism.
+\end{proof}
+
+\begin{prop}
+ Let $K$ be a local field, and $L/K$ a finite unramified extension, and $M/K$ finite. Say $L, M$ are subfields of some fixed algebraic closure $\bar{K}$ of $K$. Then $LM/M$ is unramified. Moreover, any subextension of $L/K$ is unramified over $K$. If $M/K$ is unramified as well, then $LM/K$ is unramified.
+\end{prop}
+
+\begin{proof}
 Let $\bar{\alpha}$ be a primitive element of $k_L/k_K$, with minimal polynomial $\bar{f} \in k_K[x]$, and let $f \in \mathcal{O}_K[x]$ be a monic lift of $\bar{f}$, and $\alpha \in \mathcal{O}_L$ the unique root of $f$ lifting $\bar{\alpha}$. Then $L = K(\alpha)$. So $LM = M(\alpha)$.
+
+ Let $\bar{g}$ be the minimal polynomial of $\bar{\alpha}$ over $k_M$. Then $\bar{g} \mid \bar{f}$. By Hensel's lemma, we can factorize $f = gh$ in $\mathcal{O}_M[x]$, where $g$ is monic and lifts $\bar{g}$. Then $g(\alpha) = 0$ and $g$ is irreducible in $M[x]$. So $g$ is the minimal polynomial of $\alpha$ over $M$. So we know that
+ \[
+ [LM:M] = \deg g = \deg \bar{g} \leq [k_{LM}:k_M] \leq [LM:M].
+ \]
+ So we have equality throughout and $LM/M$ is unramified.
+
+ The second assertion follows from the multiplicativity of $f_{L/K}$, as does the third.
+\end{proof}
+
+\begin{cor}
+ Let $K$ be a local field, and $L/K$ finite. Then there is a unique maximal subfield $K \subseteq T \subseteq L$ such that $T/K$ is unramified. Moreover, $[T:K] = f_{L/K}$.
+\end{cor}
+
+\begin{proof}
+ Let $T/K$ be the unique unramified extension with residue field extension $k_L/k_K$. Then $\id: k_T = k_L \to k_L$ lifts to a $K$-embedding $T \hookrightarrow L$. Identifying $T$ with its image, we know
+ \[
+ [T:K] = f_{L/K}.
+ \]
+ Now if $T'$ is any other unramified extension, then $T'T$ is an unramified extension over $K$, so
+ \[
+ [T:K] \leq [TT':K] \leq f_{L/K} = [T:K].
+ \]
+ So we have equality throughout, and $T' \subseteq T$. So this is maximal.
+\end{proof}
+
+\subsection{Totally ramified extensions}
+We now quickly look at totally ramified extensions. Recall the following irreducibility criterion:
+\begin{thm}[Eisenstein criterion]\index{Eisenstein criterion}
+ Let $K$ be a local field, and $f(x) = x^n + a_{n - 1} x^{n - 1} + \cdots + a_0 \in \mathcal{O}_K[x]$. Let $\pi_K$ be the uniformizer of $K$. If $\pi_K \mid a_{n - 1}, \cdots, a_0$ and $\pi_K^2 \nmid a_0$, then $f$ is irreducible.
+\end{thm}
+
+\begin{proof}
+ Left as an exercise. You've probably seen this already in a much more general context, but in this case there is a neat proof using Newton polygons.
+\end{proof}
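The criterion itself is mechanical to check. A Python sketch (illustrative; representing a monic polynomial by its list of non-leading integer coefficients $[a_0, \ldots, a_{n-1}]$ is a convention chosen here):

```python
def v_p(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def is_eisenstein(coeffs, p):
    """Eisenstein test for x^n + coeffs[n-1] x^{n-1} + ... + coeffs[0]:
       p divides every a_i, and p^2 does not divide a_0."""
    a0 = coeffs[0]
    return (a0 != 0 and v_p(a0, p) == 1
            and all(a == 0 or v_p(a, p) >= 1 for a in coeffs))

assert is_eisenstein([-2, 0], 2)      # x^2 - 2, minimal polynomial of sqrt(2)
assert is_eisenstein([3, 3], 3)       # x^2 + 3x + 3, i.e. Phi_3(x + 1)
assert not is_eisenstein([-4, 0], 2)  # fails: 4 divides a_0
assert not is_eisenstein([-1, 0], 2)  # fails: a_0 is a unit
```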
+
+We will need to use the following characterization of the ramification index:
+\begin{prop}
 Let $L/K$ be an extension of local fields, and $v_K$ the normalized valuation of $K$. Let $w$ be the unique extension of $v_K$ to $L$. Then the ramification index $e_{L/K}$ is given by
+ \[
 e_{L/K}^{-1} = w(\pi_L) = \min \{w(x): x \in \mathfrak{m}_L\}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We know $w$ and $v_L$ differ by a constant. To figure out what this is, we have
+ \[
+ 1 = w(\pi_K) = e_{L/K}^{-1} v_L(\pi_K).
+ \]
+ So for any $x \in L$, we have
+ \[
+ w(x) = e_{L/K}^{-1} v_L(x).
+ \]
+ In particular, putting $x = \pi_L$, we have
+ \[
+ w(\pi_L) = e_{L/K}^{-1} v_L(\pi_L) = e_{L/K}^{-1}.
+ \]
+ The equality
+ \[
+ w(\pi_L) = \min \{w(x): x \in \mathfrak{m}_L\},
+ \]
+ is trivially true because the minimum is attained by $\pi_L$.
+\end{proof}
+
+\begin{defi}[Eisenstein polynomial]\index{Eisenstein polynomial}
+ A polynomial $f(x) \in \mathcal{O}_K[x]$ satisfying the assumptions of Eisenstein's criterion is called an \term{Eisenstein polynomial}.
+\end{defi}
+We can now state the proposition:
+\begin{prop}
+ Let $L/K$ be a totally ramified extension of local fields. Then $L = K(\pi_L)$ and the minimal polynomial of $\pi_L$ over $K$ is Eisenstein.
+
+ Conversely, if $L = K(\alpha)$ and the minimal polynomial of $\alpha$ over $K$ is Eisenstein, then $L/K$ is totally ramified and $\alpha$ is a uniformizer of $L$.
+\end{prop}
+
+\begin{proof}
+ Let $n = [L:K]$, $v_K$ be the valuation of $K$, and $w$ the unique extension to $L$. Then
+ \[
 [K(\pi_L): K]^{-1} \leq e_{K(\pi_L)/K}^{-1} = \min_{x \in \mathfrak{m}_{K(\pi_L)}} w(x) \leq \frac{1}{n},
+ \]
 where the last inequality follows from the fact that $\pi_L \in \mathfrak{m}_{K(\pi_L)}$ and $w(\pi_L) = e_{L/K}^{-1} = \frac{1}{n}$, as $L/K$ is totally ramified.
+
+ But we also know that
+ \[
+ [K(\pi_L):K] \leq [L:K].
+ \]
+ So we know that $L = K(\pi_L)$.
+
 Now let $f(x) = x^n + a_{n - 1} x^{n - 1} + \cdots + a_0 \in \mathcal{O}_K[x]$ be the minimal polynomial of $\pi_L$ over $K$. Then we have
+ \[
+ \pi_L^n = -(a_0 + a_1 \pi_L + \cdots + a_{n - 1}\pi_L^{n - 1}).
+ \]
+ So we have
+ \[
 1 = w(\pi_L^n) = w(a_0 + a_1 \pi_L + \cdots + a_{n - 1}\pi_L^{n - 1}) = \min_{i = 0, \ldots, n - 1} \left(v_K(a_i) + \frac{i}{n}\right).
+ \]
 This implies that $v_K(a_i) \geq 1$ for all $i$, and $v_K(a_0) = 1$. So the minimal polynomial is Eisenstein.
+
 For the converse, suppose $L = K(\alpha)$ and $n = [L:K]$. Let
+ \[
 g(x) = x^n + b_{n - 1} x^{n - 1} + \cdots + b_0 \in \mathcal{O}_K[x]
+ \]
 be the minimal polynomial of $\alpha$, which is Eisenstein by assumption. All roots of $g$ have the same valuation, since the extended absolute value is invariant under conjugation, and $\pm b_0$ is the product of the $n$ roots. So we have
+ \[
 1 = v_K(b_0) = w(b_0) = n \cdot w(\alpha).
+ \]
+ So we have $w(\alpha) = \frac{1}{n}$. So we have
+ \[
+ e_{L/K}^{-1} = \min_{x \in \mathfrak{m}_L} w(x) \leq \frac{1}{n} = [L:K]^{-1}.
+ \]
+ So $[L:K] = e_{L/K} = n$. So $L/K$ is totally ramified and $\alpha$ is a uniformizer.
+\end{proof}
+In fact, more is true. We have $\mathcal{O}_L = \mathcal{O}_K[\pi_L]$, since every element in $\mathcal{O}_L$ can be written as
+\[
+ \sum_{i \geq 0} a_i \pi_L^i,
+\]
+where $a_i$ is a lift of an element in $k_L = k_K$, which can be chosen to be in $\mathcal{O}_K$.
+
+\section{Further ramification theory}
+\subsection{Some filtrations}
If we have a local field $K$, then we have the unit group $U_K = \mathcal{O}_K^\times$. We would like to come up with a \emph{filtration} of subgroups of the unit group, namely a sequence
+\[
+ \cdots \subseteq U_K^{(2)} \subseteq U_K^{(1)} \subseteq U_K^{(0)} = U_K
+\]
+of subgroups that tells us how close a unit is to being $1$. The further down we are in the chain, the closer we are to being $1$.
+
+Similarly, given a field extension $L/K$, we want a filtration on the Galois group (the indexing is conventional)
+\[
+ \cdots\subseteq G_2(L/K) \subseteq G_1(L/K) \subseteq G_0(L/K) \subseteq G_{-1}(L/K) = \Gal(L/K).
+\]
+This time, the filtration tells us how close the automorphisms are to being the identity map.
+
+The key thing about these filtrations is that we can figure out information about the quotients $U_K^{(s)}/U_K^{(s + 1)}$ and $G_s(L/K)/G_{s + 1}(L/K)$, which is often easier. Later, we might be able to patch these up to get more useful information about $U_K$ and $\Gal(L/K)$.
+
+We start with the filtration of the unit group.
+
\begin{defi}[Higher unit groups]\index{higher unit groups}\index{$U_K^{(s)}$}
+ We define the \emph{higher unit groups} to be
+ \[
+ U_K^{(s)} = U^{(s)} = 1 + \pi_K^s\mathcal{O}_K.
+ \]
+ We also put
+ \[
+ U_K = U_K^{(0)} = U^{(0)} = \mathcal{O}_K^\times.
+ \]
+\end{defi}
+
The quotients of these unit groups are surprisingly simple:
+\begin{prop}
+ We have
+ \begin{align*}
 U_K/U_K^{(1)} &\cong (k_K^\times, \times),\\
+ U_K^{(s)}/U_K^{(s + 1)} &\cong (k_K, +).
+ \end{align*}
+ for $s \geq 1$.
+\end{prop}
+
+\begin{proof}
+ We have a surjective homomorphism $\mathcal{O}_K^\times \to k_K^\times$ which is just reduction mod $\pi_K$, and the kernel is just things that are $1$ modulo $\pi_K$, i.e.\ $U_K^{(1)}$. So this gives the first part.
+
+ For the second part, we define a surjection $U_K^{(s)} \to k_K$ given by
+ \[
 1 + \pi_K^s x \mapsto x \mod \pi_K.
+ \]
+ This is a group homomorphism because
+ \[
 (1 + \pi_K^s x)(1 + \pi_K^s y) = 1 + \pi_K^s(x + y + \pi_K^s xy),
+ \]
+ and this gets mapped to
+ \[
 x + y + \pi_K^s xy \equiv x + y \mod \pi_K.
+ \]
+ Then almost by definition, the kernel is $U_K^{(s + 1)}$.
+\end{proof}
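Working in $\Z/p^N$ as a stand-in for $\mathcal{O}_K = \Z_p$ (an illustrative simplification, with $\pi_K = p$), the homomorphism property in the proof can be verified numerically:

```python
p, s, N = 5, 2, 4      # work in Z/p^N as a proxy for Z_p; we need N >= 2s

def to_residue(u):
    """Send 1 + p^s x in U^(s) to x mod p in (k_K, +)."""
    assert (u - 1) % p ** s == 0
    return ((u - 1) // p ** s) % p

mod = p ** N
for x in range(7):
    for y in range(7):
        u, v = 1 + p ** s * x, 1 + p ** s * y
        # homomorphism property: residue of the product is the sum of residues
        assert to_residue(u * v % mod) == (to_residue(u) + to_residue(v)) % p
```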
+
+The next thing to consider is a filtration of the Galois group.
+\begin{defi}[Higher ramification group]\index{$s$th ramification group}\index{ramification group}
+ Let $L/K$ be a finite Galois extension of local fields, and $v_L$ the normalized valuation of $L$.
+
+ Let $s \in \R_{\geq -1}$. We define the \emph{$s$th ramification group} by
+ \[
+ G_s(L/K) = \{\sigma \in \Gal(L/K): v_L(\sigma(x) - x) \geq s + 1\text{ for all }x \in \mathcal{O}_L\}.
+ \]
+\end{defi}
+So if you belong to $G_s$ for a large $s$, then you move things less. Note that we could have defined these only for $s \in \Z_{\geq -1}$, but allowing fractional indices will be helpful in the future.
+
+Now since $\sigma(x) - x \in \mathcal{O}_L$ for all $x \in \mathcal{O}_L$, we know
+\[
+ G_{-1}(L/K) = \Gal(L/K).
+\]
+We next consider the case of $G_0(L/K)$. This is, by definition
+\begin{align*}
+ G_0(L/K) &= \{\sigma \in \Gal(L/K): v_L(\sigma(x) - x) \geq 1\text{ for all }x \in \mathcal{O}_L\}\\
+ &= \{\sigma \in \Gal(L/K): \sigma(x) \equiv x \bmod \mathfrak{m}\text{ for all }x \in \mathcal{O}_L\}.
+\end{align*}
In other words, these are all the automorphisms that reduce to the identity in $\Gal(k_L/k_K)$.
+
\begin{defi}[Inertia group]\index{inertia group}\index{$I(L/K)$}
+ Let $L/K$ be a finite Galois extension of local fields. Then the \emph{inertia group} of $L/K$ is the kernel of the natural homomorphism
+ \[
+ \Gal(L/K) \to \Gal(k_L/k_K)
+ \]
+ given by reduction. We write this as
+ \[
+ I(L/K) = G_0(L/K).
+ \]
+\end{defi}
+
+\begin{prop}
+ Let $L/K$ be a finite Galois extension of local fields. Then the homomorphism
+ \[
+ \Gal(L/K) \to \Gal(k_L/k_K)
+ \]
+ given by reduction is surjective.
+\end{prop}
+
+\begin{proof}
 Let $T/K$ be the maximal unramified subextension of $L/K$. Then by Galois theory, the map $\Gal(L/K) \to \Gal(T/K)$ is a surjection. Moreover, we know that $k_T = k_L$. So we have a commutative diagram
+ \[
+ \begin{tikzcd}
+ \Gal(L/K) \ar[r] \ar[d, two heads] & \Gal(k_L/k_K) \ar[d, equals]\\
+ \Gal(T/K) \ar[r, "\sim"] & \Gal(k_T/k_K).
+ \end{tikzcd}
+ \]
+ So the map $\Gal(L/K) \to \Gal(k_L/k_K)$ is surjective.
+\end{proof}
+
In particular, the inertia group is trivial if and only if $L/K$ is unramified. The field $T$ is sometimes called the \term{inertia field}.
+
+\begin{lemma}
 Let $L/K$ be a finite Galois extension of local fields, and let $\sigma \in I(L/K)$. Then $\sigma([x]) = [x]$ for all $x \in k_L$, where $[\ph]$ denotes the Teichm\"uller lift.
+
+ More generally, let $x \in k_L$ and $\sigma \in \Gal(L/K)$ with image $\bar{\sigma} \in \Gal(k_L/k_K)$. Then we have
+ \[
+ [\bar{\sigma}(x)] = \sigma([x]).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Consider the map $k_L \to \mathcal{O}_L$ given by
+ \[
+ f: x \mapsto \sigma^{-1}([\bar{\sigma}(x)]).
+ \]
+ This is multiplicative, because every term is multiplicative, and
+ \[
+ \sigma^{-1}([\bar{\sigma}(x)]) \equiv x\mod \pi_L.
+ \]
+ So this map $f$ has to be the Teichm\"uller lift by uniqueness.
+\end{proof}
+
+That's all we're going to say about the inertia group. We now consider the general properties of this filtration.
+\begin{prop}
+ Let $L/K$ be a finite Galois extension of local fields, and $v_L$ the normalized valuation of $L$. Let $\pi_L$ be the uniformizer of $L$. Then $G_{s + 1}(L/K)$ is a normal subgroup of $G_s(L/K)$ for $s \in \Z_{\geq 0}$, and the map
+ \[
+ \frac{G_s(L/K)}{G_{s + 1}(L/K)} \to \frac{U_L^{(s)}}{U_L^{(s + 1)}}
+ \]
+ given by
+ \[
+ \sigma \mapsto \frac{\sigma(\pi_L)}{\pi_L}
+ \]
+ is a well-defined injective group homomorphism, independent of the choice of $\pi_L$.
+\end{prop}
+
+\begin{proof}
+ We define the map
+ \begin{align*}
+ \phi: G_s(L/K) &\to \frac{U_L^{(s)}}{U_L^{(s + 1)}}\\
+ \sigma &\mapsto \sigma(\pi_L)/\pi_L.
+ \end{align*}
+ We want to show that this has kernel $G_{s + 1}(L/K)$.
+
+ First we show it is well-defined. If $\sigma \in G_s(L/K)$, we know
+ \[
+ \sigma(\pi_L) = \pi_L + \pi_L^{s + 1}x
+ \]
+ for some $x \in \mathcal{O}_L$. So we know
+ \[
+ \frac{\sigma(\pi_L)}{\pi_L} = 1 + \pi_L^s x \in U_L^{(s)}.
+ \]
+ So it has the right image. To see this is independent of the choice of $\pi_L$, we let $u \in \mathcal{O}_L^\times$. Then $\sigma(u) = u + \pi_L^{s + 1}y$ for some $y \in \mathcal{O}_L$.
+
+ Since any other uniformizer must be of the form $\pi_L u$, we can compute
+ \begin{align*}
 \frac{\sigma(\pi_L u)}{\pi_L u} &= \frac{(\pi_L + \pi_L^{s + 1}x)(u + \pi_L^{s + 1}y)}{\pi_L u} \\
+ &= (1 + \pi_L^s x) (1 + \pi_L^{s + 1} u^{-1}y) \\
 &\equiv 1 + \pi_L^s x \pmod {U_L^{(s + 1)}}.
+ \end{align*}
 So they represent the same element in $U_L^{(s)}/U_L^{(s + 1)}$.
+
+ To see this is a group homomorphism, we know
+ \[
 \phi(\sigma\tau) = \frac{\sigma(\tau(\pi_L))}{\pi_L} = \frac{\sigma(\tau(\pi_L))}{\tau(\pi_L)} \frac{\tau(\pi_L)}{\pi_L} = \phi(\sigma)\phi(\tau),
+ \]
+ using the fact that $\tau(\pi_L)$ is also a uniformizer.
+
+ Finally, we have to show that $\ker \phi = G_{s + 1}(L/K)$. We write down
+ \[
+ \ker \phi = \{\sigma \in G_s(L/K): v_L(\sigma(\pi_L) - \pi_L) \geq s + 2\}.
+ \]
+ On the other hand, we have
+ \[
+ G_{s + 1}(L/K) = \{\sigma \in G_s(L/K): v_L(\sigma(z) - z) \geq s + 2\text{ for all }z \in \mathcal{O}_L\}.
+ \]
+ So we trivially have $G_{s + 1}(L/K) \subseteq \ker \phi$. To show the converse, let $x \in \mathcal{O}_L$ and write
+ \[
+ x = \sum_{n = 0}^\infty [x_n] \pi_L^n.
+ \]
+ Take $\sigma \in \ker \phi \subseteq G_s(L/K) \subseteq I(L/K)$. Then we have
+ \[
+ \sigma(\pi_L) = \pi_L + \pi_L^{s + 2} y,\quad y \in \mathcal{O}_L.
+ \]
+ Then by the previous lemma, we know
+ \begin{align*}
+ \sigma(x) - x &= \sum_{n = 1}^\infty [x_n] \left((\sigma (\pi_L))^n - \pi_L^n\right)\\
+ &= \sum_{n = 1}^\infty [x_n] \left((\pi_L + \pi_L^{s + 2} y)^n - \pi_L^n\right)\\
+ &= \pi_L^{s + 2}(\text{things}).
+ \end{align*}
+ So we know $v_L(\sigma(x) - x) \geq s + 2$.
+\end{proof}
+
+\begin{cor}
+ $\Gal(L/K)$ is solvable.
+\end{cor}
+
+\begin{proof}
+ Note that
+ \[
+ \bigcap_s G_s(L/K) = \{\id\}.
+ \]
 So $(G_s(L/K))_{s \in \Z_{\geq -1}}$ is a subnormal series of $\Gal(L/K)$, and all quotients are abelian: for $s \geq 1$ they embed into $\frac{U_L^{(s)}}{U_L^{(s + 1)}} \cong (k_L, +)$, the $s = 0$ quotient embeds into $k_L^\times$, and the $s = -1$ quotient is $\Gal(k_L/k_K)$, which is cyclic.
+\end{proof}
+
+Thus if $L/K$ is a finite extension of local fields, then we have, for $s \geq 1$, injections
+\[
+ \frac{G_s(L/K)}{G_{s + 1}(L/K)} \hookrightarrow k_L.
+\]
Since $(k_L, +)$ is a $p$-group, it follows that
+\[
+ \frac{|G_s(L/K)|}{|G_{s + 1}(L/K)|}
+\]
is a power of $p$ for $s \geq 1$. So it follows that for any $t$, the quotient
+\[
+ \frac{|G_1(L/K)|}{|G_t(L/K)|}
+\]
is also a power of $p$. However, we know that the intersection of all $G_s(L/K)$ is $\{\id\}$, and also $\Gal(L/K)$ is finite. So for sufficiently large $t$, we know that $|G_t(L/K)| = 1$. So we conclude that
+\begin{prop}
+ $G_1(L/K)$ is always a $p$-group.
+\end{prop}
+
+We now use the injection
+\[
+ \frac{G_0(L/K)}{G_1(L/K)} \hookrightarrow k_L^\times,
+\]
and the fact that $k_L^\times$ has order prime to $p$. So $G_1(L/K)$ must be a Sylow $p$-subgroup of $G_0(L/K)$. Since it is normal, it is the unique Sylow $p$-subgroup.
+
+\begin{defi}[Wild inertia group and tame quotient]\index{wild inertia group}\index{tame quotient}
+ $G_1(L/K)$ is called the \term{wild inertia group}, and the quotient $G_0(L/K)/G_1(L/K)$ is the \term{tame quotient}.
+\end{defi}
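+
+To see the two extremes, here is a quick pair of examples (a sanity check using only what we have proven so far):
+\begin{eg}
+ Let $p$ be an odd prime and $L = \Q_p(\sqrt{p})$. This is totally ramified of degree $2$, so $G_0(L/\Q_p) = \Gal(L/\Q_p) \cong \Z/2$. Since $|G_0(L/\Q_p)| = 2$ is prime to $p$, the wild inertia group is trivial, and the tame quotient is all of $G_0(L/\Q_p)$.
+
+ On the other hand, let $L = \Q_2(\sqrt{2})$, so that $\mathcal{O}_L = \Z_2[\sqrt{2}]$ and $v_L(\sqrt{2}) = 1$. If $\sigma$ is the non-trivial automorphism and $x = a + b\sqrt{2} \in \mathcal{O}_L$, then $\sigma(x) - x = -2b\sqrt{2}$ has $v_L(\sigma(x) - x) \geq 3$. So $G_1(L/\Q_2) = G_2(L/\Q_2) = \Gal(L/\Q_2)$, which is indeed a $2$-group: the wild inertia group is everything and the tame quotient is trivial.
+\end{eg}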
+
+\subsection{Multiple extensions}
+Suppose we have a tower $M/L/K$ of finite extensions of local fields. How do the ramification groups of the different extensions relate? We first do the easy case.
+
+\begin{prop}
+ Let $M/L/K$ be finite extensions of local fields, and $M/K$ Galois. Then
+ \[
+ G_s(M/K) \cap \Gal(M/L) = G_s(M/L).
+ \]
+\end{prop}
+
+\begin{proof}
+ We have
+ \[
+ G_s(M/L) = \{\sigma \in \Gal(M/L): v_M(\sigma x - x) \geq s + 1\text{ for all }x \in \mathcal{O}_M\} = G_s(M/K) \cap \Gal(M/L).\qedhere
+ \]
+\end{proof}
+This is trivial, because the definition uses the valuation $v_M$ of the bigger field all the time. What's more difficult and interesting is quotients, namely going from $M/K$ to $L/K$.
+
+We want to prove the following theorem:
+\begin{thm}[Herbrand's theorem]
+ Let $M/L/K$ be finite extensions of local fields with $M/K$ and $L/K$ Galois. Then there is some function $\eta_{M/L}$ such that
+ \[
+ G_t(L/K) \cong \frac{G_s(M/K)}{G_s(M/L)}
+ \]
+ for all $s$, where $t = \eta_{M/L}(s)$.
+\end{thm}
+
+To better understand the situation, it helps to provide an alternative characterization of the Galois group.
+
+\begin{defi}[$i_{L/K}$]\index{$i_{L/K}$}
+ We define
+ \[
+ i_{L/K}(\sigma) = \min_{x \in \mathcal{O}_L} v_L(\sigma(x) - x).
+ \]
+\end{defi}
+It is then immediate that
+\[
+ G_s(L/K) = \{\sigma \in \Gal(L/K): i_{L/K}(\sigma) \geq s + 1\}.
+\]
+This is not very helpful. We now claim that we can compute $i_{L/K}$ using the following formula:
+\begin{prop}
+ Let $L/K$ be a finite Galois extension of local fields, and pick $\alpha \in \mathcal{O}_L$ such that $\mathcal{O}_L = \mathcal{O}_K[\alpha]$. Then
+ \[
+ i_{L/K}(\sigma) = v_L(\sigma(\alpha) - \alpha).
+ \]
+\end{prop}
+
+\begin{proof}
+ Fix a $\sigma$. It is clear that $i_{L/K}(\sigma) \leq v_L(\sigma(\alpha) - \alpha)$. Conversely, for any $x \in \mathcal{O}_L$, we can find a polynomial $g \in \mathcal{O}_K[t]$ such that
+ \[
+ x = g(\alpha) = \sum b_i \alpha^i,
+ \]
+ where $b_i \in \mathcal{O}_K$. In particular, $b_i$ is fixed by $\sigma$.
+
+ Then we have
+ \begin{align*}
+ v_L(\sigma (x) - x) &= v_L(\sigma g(\alpha) - g(\alpha)) \\
+ &= v_L\left( \sum_{i = 1}^n b_i (\sigma(\alpha)^i - \alpha^i)\right)\\
+ &\geq v_L(\sigma(\alpha) - \alpha),
+ \end{align*}
+ using the fact that $\sigma(\alpha) - \alpha \mid \sigma(\alpha)^i - \alpha^i$ for all $i$. So done.
+\end{proof}
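+
+This makes $i_{L/K}$ eminently computable. A small worked example (using the standard facts that $\Q_p(\zeta_p)/\Q_p$ is totally ramified of degree $p - 1$ with ring of integers $\Z_p[\zeta_p]$, and that $\zeta_p^j - 1$ is a uniformizer whenever $p \nmid j$):
+\begin{eg}
+ Let $p$ be an odd prime, $K = \Q_p$ and $L = \Q_p(\zeta_p)$. For $\sigma_m: \zeta_p \mapsto \zeta_p^m$ with $m \not\equiv 1 \bmod p$, we have
+ \[
+ i_{L/K}(\sigma_m) = v_L(\zeta_p^m - \zeta_p) = v_L(\zeta_p^{m - 1} - 1) = 1,
+ \]
+ since $\zeta_p$ is a unit and $\zeta_p^{m - 1}$ is again a primitive $p$th root of unity. So $G_0(L/K) = \Gal(L/K)$ and $G_s(L/K) = 1$ for all $s \geq 1$, i.e.\ the extension is tamely ramified, as expected since $p \nmid [L:K] = p - 1$.
+\end{eg}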
+
+Now if $M/L/K$ are finite Galois extensions of local fields, then $\mathcal{O}_M = \mathcal{O}_K[\alpha]$ implies $\mathcal{O}_M = \mathcal{O}_L[\alpha]$. So for $\sigma \in \Gal(M/L)$, we have
+\[
+ i_{M/L}(\sigma) = i_{M/K}(\sigma).
+\]
+Going in the other direction is more complicated.
+\begin{prop}
+ Let $M/L/K$ be a finite extension of local fields, such that $M/K$ and $L/K$ are Galois. Then for $\sigma \in \Gal(L/K)$, we have
+ \[
+ i_{L/K}(\sigma) = e_{M/L}^{-1} \sum_{\substack{\tau \in \Gal(M/K)\\ \tau|_L = \sigma}} i_{M/K}(\tau).
+ \]
+\end{prop}
+Here $e_{M/L}$ is just to account for the difference between $v_L$ and $v_M$. So the real content is that the value of $i_{L/K}(\sigma)$ is the sum of the values of $i_{M/K}(\tau)$ for all $\tau$ that restrict to $\sigma$.
+
+\begin{proof}
+ If $\sigma = 1$, then both sides are infinite by convention, and equality holds. So we assume $\sigma \not= 1$. Pick $\alpha \in \mathcal{O}_M$ such that $\mathcal{O}_M = \mathcal{O}_K[\alpha]$ (hence also $\mathcal{O}_M = \mathcal{O}_L[\alpha]$), and $\beta \in \mathcal{O}_L$ such that $\mathcal{O}_L = \mathcal{O}_K[\beta]$. Then we have
+ \[
+ e_{M/L} i_{L/K}(\sigma) = e_{M/L} v_L(\sigma \beta - \beta) = v_M(\sigma \beta - \beta).
+ \]
+ Now if $\tau \in \Gal(M/K)$, then
+ \[
+ i_{M/K}(\tau) = v_M(\tau \alpha - \alpha).
+ \]
+ Now fix a $\tau$ such that $\tau|_L = \sigma$. We set $H = \Gal(M/L)$. Then we have
+ \[
+ \sum_{\tau' \in \Gal(M/K), \tau'|_L = \sigma} i_{M/K}(\tau') = \sum_{g\in H}v_M(\tau g(\alpha) - \alpha) = v_M\left(\prod_{g \in H} (\tau g(\alpha) - \alpha)\right).
+ \]
+ We let
+ \[
+ b = \sigma(\beta) - \beta = \tau(\beta) - \beta
+ \]
+ and
+ \[
+ a = \prod_{g \in H} (\tau g(\alpha) - \alpha).
+ \]
+ We want to prove that $v_M(b) = v_M(a)$. We will prove that $a \mid b$ and $b \mid a$.
+
+ We start with a general observation about elements in $\mathcal{O}_L$. Given $z \in \mathcal{O}_L$, we can write
+ \[
+ z = \sum_{i = 0}^n z_i \beta^i,\quad z_i \in \mathcal{O}_K.
+ \]
+ Then we know
+ \[
+ \tau(z) - z = \sum_{i = 1}^n z_i(\tau(\beta)^i - \beta^i)
+ \]
+ is divisible by $\tau(\beta) - \beta = b$, since the $i = 0$ term cancels.
+
+ Now let $F(x) \in \mathcal{O}_L[x]$ be the minimal polynomial of $\alpha$ over $L$. Then explicitly, we have
+ \[
+ F(x) = \prod_{g \in H}(x - g(\alpha)).
+ \]
+ Then we have
+ \[
+ (\tau F)(x) = \prod_{g \in H} (x - \tau g(\alpha)),
+ \]
+ where $\tau F$ is obtained from $F$ by applying $\tau$ to all coefficients of $F$. Then all coefficients of $\tau F - F$ are of the form $\tau(z) - z$ for some $z \in \mathcal{O}_L$. So it is divisible by $b$. So $b$ divides every value of this polynomial, and in particular
+ \[
+ b \mid (\tau F - F)(\alpha) = (\tau F)(\alpha) - F(\alpha) = \prod_{g \in H}(\alpha - \tau g(\alpha)) = \pm a,
+ \]
+ using $F(\alpha) = 0$. So $b \mid a$.
+
+ In the other direction, we pick $f \in \mathcal{O}_K[x]$ such that $f(\alpha) = \beta$. Then $\alpha$ is a root of $f(x) - \beta$, so the (monic) minimal polynomial $F$ of $\alpha$ over $L$ divides $f(x) - \beta$ in $\mathcal{O}_L[x]$. So we have
+ \[
+ f(x) - \beta = F(x) h(x)
+ \]
+ for some $h \in \mathcal{O}_L[x]$.
+
+ Then noting that $f$ has coefficients in $\mathcal{O}_K$, we have
+ \[
+ (f - \tau \beta)(x) = (\tau f - \tau \beta)(x) = (\tau F)(x) (\tau h)(x).
+ \]
+ Finally, set $x = \alpha$. Then
+ \[
+ -b = \beta - \tau \beta = \pm a (\tau h)(\alpha).
+ \]
+ So $a \mid b$.
+\end{proof}
+
+Now that we understand how the $i_{L/K}$ behave when we take field extensions, we should be able to understand how the ramification groups behave!
+
+We now write down the right choice of $\eta_{L/K}: [-1, \infty) \to [-1, \infty)$:\index{$\eta_{L/K}$}
+\[
+ \eta_{L/K}(s) = \left(e_{L/K}^{-1} \sum_{\sigma \in \Gal(L/K)} \min(i_{L/K}(\sigma), s + 1)\right) - 1.
+\]
+\begin{thm}[Herbrand's theorem]\index{Herbrand's theorem}
+ Let $M/L/K$ be a finite extension of local fields with $M/K$ and $L/K$ Galois. We set
+ \[
+ H = \Gal(M/L),\quad t = \eta_{M/L}(s).
+ \]
+ Then we have
+ \[
+ \frac{G_s(M/K)H}{H} = G_t(L/K).
+ \]
+ By some isomorphism theorem, and the fact that $H \cap G_s(M/K) = G_s(M/L)$, this is equivalent to saying
+ \[
+ G_t(L/K) \cong \frac{G_s(M/K)}{G_s(M/L)}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Let $G = \Gal(M/K)$. Fix a $\sigma \in \Gal(L/K)$. We let $\tau \in \Gal(M/K)$ be an extension of $\sigma$ to $M$ that maximizes $i_{M/K}$, i.e.
+ \[
+ i_{M/K}(\tau) \geq i_{M/K}(\tau g)
+ \]
+ for all $g \in H$. This is possible since $H$ is finite.
+
+ We claim that
+ \[
+ i_{L/K}(\sigma) - 1 = \eta_{M/L}(i_{M/K}(\tau) - 1).
+ \]
+ If this were true, then we would have
+ \begin{align*}
+ \sigma \in \frac{G_s(M/K)H}{H} &\Leftrightarrow \tau \in G_s(M/K)\\
+ &\Leftrightarrow i_{M/K}(\tau) - 1 \geq s\\
+ \intertext{Since $\eta_{M/L}$ is strictly increasing, we have}
+ &\Leftrightarrow \eta_{M/L}(i_{M/K}(\tau) - 1) \geq \eta_{M/L}(s) = t\\
+ &\Leftrightarrow i_{L/K}(\sigma) - 1 \geq t \\
+ &\Leftrightarrow \sigma \in G_t(L/K),
+ \end{align*}
+ and we are done.
+
+ To prove the claim, we now use our known expressions for $i_{L/K}(\sigma)$ and $\eta_{M/L}(i_{M/K}(\tau) - 1)$ to rewrite it as
+ \[
+ e^{-1}_{M/L} \sum_{g \in H} i_{M/K}(\tau g) = e_{M/L}^{-1} \sum_{g \in H} \min (i_{M/L}(g), i_{M/K}(\tau)).
+ \]
+ We then make the \emph{stronger} claim
+ \[
+ i_{M/K}(\tau g) = \min (i_{M/L}(g), i_{M/K}(\tau)).
+ \]
+ We first note that since $g(\alpha)$ also generates $\mathcal{O}_M$ over $\mathcal{O}_K$, we have $v_M(\tau g(\alpha) - g(\alpha)) = i_{M/K}(\tau)$. Then
+ \begin{align*}
+ i_{M/K}(\tau g) &= v_M(\tau g(\alpha) - \alpha) \\
+ &= v_M(\tau g(\alpha) - g(\alpha) + g(\alpha) - \alpha) \\
+ &\geq \min(v_M(\tau g(\alpha) - g(\alpha)), v_M(g(\alpha) - \alpha))\\
+ &= \min(i_{M/K}(\tau), i_{M/K}(g)).
+ \end{align*}
+ We cannot conclude our (stronger) claim yet, since we have a $\geq$ in the middle. We now have to split into two cases.
+ \begin{enumerate}
+ \item If $i_{M/K}(g) \geq i_{M/K}(\tau)$, then the above shows that $i_{M/K}(\tau g) \geq i_{M/K}(\tau)$. But we also know that $i_{M/K}(\tau g) \leq i_{M/K}(\tau)$ by our maximal choice of $\tau$. So $i_{M/K}(\tau g) = i_{M/K}(\tau)$. So our claim holds.
+ \item If $i_{M/K}(g) < i_{M/K}(\tau)$, then the above inequality is in fact an equality as the two terms have different valuations. So our claim also holds.
+ \end{enumerate}
+ So done.
+\end{proof}
+
+We now prove an alternative characterization of the function $\eta_{L/K}$, using a funny integral.
+\begin{prop}
+ Write $G = \Gal(L/K)$. Then
+ \[
+ \eta_{L/K}(s) = \int_0^s \frac{\d x}{(G_0(L/K): G_x(L/K))}.
+ \]
+ When $-1 \leq x < 0$, our convention is that
+ \[
+ \frac{1}{(G_0(L/K):G_x(L/K))} = (G_x(L/K): G_0(L/K)),
+ \]
+ which is just equal to $1$ when $-1 < x < 0$. So
+ \[
+ \eta_{L/K}(s) = s\text{ if }-1 \leq s \leq 0.
+ \]
+\end{prop}
+
+\begin{proof}
+ We denote the RHS by $\theta(s)$. It is clear that both $\eta_{L/K}(s)$ and $\theta(s)$ are piecewise linear and the break points are integers (since $i_{L/K}(\sigma)$ is always an integer). So to see they are the same, we see that they agree at a point, and that they have equal derivatives. We have
+ \[
+ \eta_{L/K}(0) = \frac{|\{\sigma \in G: i_{L/K}(\sigma) \geq 1\}|}{e_{L/K}} - 1 = 0 = \theta(0),
+ \]
+ since the numerator is the size of the inertia group.
+
+ If $s \in [-1, \infty) \setminus \Z$, then
+ \begin{align*}
+ \eta_{L/K}'(s) &= e_{L/K}^{-1} (|\{\sigma \in G: i_{L/K}(\sigma) \geq s + 1\}|) \\
+ &= \frac{|G_s(L/K)|}{|G_0(L/K)|} \\
+ &= \frac{1}{(G_0(L/K):G_s(L/K))} \\
+ &= \theta'(s).
+ \end{align*}
+ So done.
+\end{proof}
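+
+As a sanity check, we can compute $\eta_{L/K}$ both from the definition and from the integral in a small example:
+\begin{eg}
+ Let $K = \Q_2$ and $L = \Q_2(\sqrt{2})$, so $\Gal(L/K) = \{1, \sigma\}$, $e_{L/K} = 2$ and $\mathcal{O}_L = \Z_2[\sqrt{2}]$. Then
+ \[
+ i_{L/K}(\sigma) = v_L(\sigma(\sqrt{2}) - \sqrt{2}) = v_L(-2\sqrt{2}) = 3,
+ \]
+ so $G_s(L/K) = \Gal(L/K)$ for $s \leq 2$ and $G_s(L/K) = 1$ for $s > 2$. The definition gives
+ \[
+ \eta_{L/K}(s) = \frac{\min(\infty, s + 1) + \min(3, s + 1)}{2} - 1 =
+ \begin{cases}
+ s & -1 \leq s \leq 2\\
+ \frac{s}{2} + 1 & s \geq 2,
+ \end{cases}
+ \]
+ and the integral formula agrees: the integrand is $1$ on $[0, 2]$, where $G_x(L/K) = G_0(L/K)$, and $\frac{1}{2}$ afterwards.
+\end{eg}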
+
+
+We now tidy up the proof by inventing a different numbering of the ramification groups. Recall that
+\[
+ \eta_{L/K}: [-1, \infty) \to [-1, \infty)
+\]
+is continuous, strictly increasing, and
+\[
+ \eta_{L/K}(-1) = -1,\quad \eta_{L/K}(s) \to \infty\text{ as } s \to \infty.
+\]
+So this is invertible. We set
+\begin{notation}\index{$\psi_{L/K}$}
+ \[
+ \psi_{L/K} = \eta_{L/K}^{-1}.
+ \]
+\end{notation}
+
+\begin{defi}[Upper numbering]\index{upper numbering of ramification group}\index{ramification group!upper numbering}\index{lower numbering of ramification group}\index{ramification group!lower numbering}
+ Let $L/K$ be a Galois extension of local fields. Then the \emph{upper numbering} of the ramification groups of $L/K$ is defined by
+ \[
+ G^t(L/K) = G_{\psi_{L/K}(t)} (L/K)
+ \]
+ for $t \in [-1, \infty)$. The original numbering is called the \emph{lower numbering}.
+\end{defi}
+
+To rephrase our previous theorem using the upper numbering, we need a little lemma:
+\begin{lemma}
+ Let $M/L/K$ be a finite extension of local fields, and $M/K$ and $L/K$ be Galois. Then
+ \[
+ \eta_{M/K} = \eta_{L/K} \circ \eta_{M/L}.
+ \]
+ Hence
+ \[
+ \psi_{M/K} = \psi_{M/L} \circ \psi_{L/K}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $s \in [-1 , \infty)$, and let $t = \eta_{M/L}(s)$, and $H = \Gal(M/L)$. By Herbrand's theorem, we know
+ \[
+ G_t(L/K) \cong \frac{G_s(M/K)H}{H} \cong \frac{G_s(M/K)}{H\cap G_s(M/K)} = \frac{G_s(M/K)}{G_s(M/L)}.
+ \]
+ Thus by multiplicativity of the ramification index, we have
+ \[
+ \frac{|G_s(M/K)|}{e_{M/K}} = \frac{|G_t(L/K)|}{e_{L/K}} \frac{|G_s(M/L)|}{e_{M/L}}.
+ \]
+ By the previous proposition and the fundamental theorem of calculus, we know that whenever the derivatives make sense, we have
+ \[
+ \eta_{M/K}'(s) = \frac{|G_s(M/K)|}{e_{M/K}}.
+ \]
+ So putting this in, we know
+ \[
+ \eta'_{M/K}(s) = \eta'_{L/K}(t) \eta_{M/L}'(s) = (\eta_{L/K} \circ \eta_{M/L})'(s).
+ \]
+ Since $\eta_{M/K}$ and $\eta_{L/K} \circ \eta_{M/L}$ agree at $0$ (they both take value $0$), we know that the functions must agree everywhere. So done.
+\end{proof}
+
+\begin{cor}
+ Let $M/L/K$ be finite Galois extensions of local fields, and $H = \Gal(M/L)$. Let $t \in [-1, \infty)$. Then
+ \[
+ \frac{G^t(M/K)H}{H} = G^t(L/K).
+ \]
+\end{cor}
+
+\begin{proof}
+ Put $s = \psi_{L/K}(t)$. Then by the previous lemma, $\psi_{M/K}(t) = \psi_{M/L}(\psi_{L/K}(t)) = \psi_{M/L}(s)$, so $\eta_{M/L}(\psi_{M/K}(t)) = s$. Then by Herbrand's theorem, we have
+ \begin{align*}
+ \frac{G^t(M/K)H}{H} &= \frac{G_{\psi_{M/K}(t)}(M/K) H}{H} \\
+ &\cong G_{\eta_{M/L}(\psi_{M/K}(t))}(L/K)\\
+ &= G_s(L/K)\\
+ &= G^t(L/K).\qedhere
+ \end{align*}
+\end{proof}
+
+This upper numbering might seem like an unwieldy beast that was invented just so that our theorem looks nice. However, it turns out that the upper numbering is often rather natural, as we can see in the example below:
+\begin{eg}
+ Consider $\zeta_{p^n}$ a primitive $p^n$th root of unity, and $K = \Q_p(\zeta_{p^n})$. The minimal polynomial of $\zeta_{p^n}$ is the $p^n$th cyclotomic polynomial
+ \[
+ \Phi_{p^n}(x) = x^{p^{n - 1}(p - 1)} + x^{p^{n - 1}(p - 2)} + \cdots + 1.
+ \]
+ It is an exercise on the example sheet to show that this is indeed irreducible. So $K/\Q_p$ is a Galois extension of degree $p^{n - 1}(p - 1)$. Moreover, it is totally ramified by question 6 on example sheet 2, with uniformizer
+ \[
+ \pi = \zeta_{p^n} - 1.
+ \]
+ So we know
+ \[
+ \mathcal{O}_K = \Z_p[\zeta_{p^n} - 1] = \Z_p[\zeta_{p^n}].
+ \]
+ We then have an isomorphism
+ \[
+ \left(\frac{\Z}{p^n \Z}\right)^\times \to \Gal(K/\Q_p)
+ \]
+ obtained by sending $m \mapsto \sigma_m$, where
+ \[
+ \sigma_m(\zeta_{p^n}) = \zeta_{p^n}^m.
+ \]
+ We have
+ \begin{align*}
+ i_{K/\Q_p}(\sigma_m) &= v_K(\sigma_m (\zeta_{p^n}) - \zeta_{p^n})\\
+ &= v_K(\zeta_{p^n}^m - \zeta_{p^n})\\
+ &= v_K(\zeta_{p^n}^{m - 1} - 1)
+ \end{align*}
+ since $\zeta_{p^n}$ is a unit. If $m = 1$, then this thing is infinity. If it is not $1$, then $\zeta_{p^n}^{m - 1}$ is a primitive $p^{n - k}$th root of unity for the maximal $k$ such that $p^k \mid m- 1$. So by Q6 on example sheet 2, we have
+ \[
+ v_K(\zeta_{p^n}^{m - 1} - 1) = \frac{p^{n - 1}(p - 1)}{p^{n - k - 1}(p - 1)} = p^k.
+ \]
+ Thus we have
+ \[
+ v_K(\zeta_{p^n}^{m - 1} - 1) \geq p^k \Leftrightarrow m \equiv 1\bmod {p^k}.
+ \]
+ It then follows that for
+ \[
+ p^k \geq s + 1 \geq p^{k - 1} + 1,
+ \]
+ we have
+ \[
+ G_s(K/\Q_p) \cong \{m \in (\Z/p^n)^\times: m \equiv 1 \bmod p^k\}.
+ \]
+ Now $m \equiv 1 \bmod p^k$ iff $\sigma_m(\zeta_{p^k}) = \zeta_{p^k}$. So in fact
+ \[
+ G_s(K/\Q_p) \cong \Gal(K/\Q_p(\zeta_{p^k})).
+ \]
+ Finally, when $s \geq p^{n - 1}$, we have
+ \[
+ G_s(K/\Q_p) = 1.
+ \]
+ We claim that
+ \[
+ \eta_{K/\Q_p}(p^k - 1) = k.
+ \]
+ So we have
+ \[
+ G^k(K/\Q_p) = \Gal(K/\Q_p(\zeta_{p^k})).
+ \]
+ This actually looks much nicer!
+
+ To actually compute $\eta_{K/\Q_p}$, we notice that the function we integrate to get $\eta$ looks something like this (not to scale):
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (7, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw (0, 2) -- (2, 2) node [pos=0.5, above] {$\frac{1}{p - 1}$} node [circ] {};
+ \draw (2, 1) -- (4, 1) node [pos=0.5, above] {$\frac{1}{p(p - 1)}$} node [circ] {};
+ \draw (4, 0.5) -- (6, 0.5) node [pos=0.5, above] {$\frac{1}{p^2(p - 1)}$} node [circ] {};
+
+ \node [circ] at (2, 0) {};
+ \node [below] at (2, 0) {$p - 1$};
+ \node [circ] at (4, 0) {};
+ \node [below] at (4, 0) {$p^2 - 1$};
+ \node [circ] at (6, 0) {};
+ \node [below] at (6, 0) {$p^3 - 1$};
+
+ \draw [fill=white] (0, 2) circle [radius=0.05];
+ \draw [fill=white] (2, 1) circle [radius=0.05];
+ \draw [fill=white] (4, 0.5) circle [radius=0.05];
+ \end{tikzpicture}
+ \end{center}
+ The jumps in the lower numbering are at $p^k - 1$ for $k = 1, \cdots, n -1 $. So we have
+ \begin{align*}
+ \eta_{K/\Q_p}(p^k - 1) &= (p - 1)\frac{1}{p - 1} + ((p^2 - 1) - (p - 1))\frac{1}{p(p - 1)} \\
+ &\quad\quad+ \cdots + ((p^k - 1) - (p^{k - 1} - 1)) \frac{1}{p^{k - 1}(p - 1)} \\
+ &= k.
+ \end{align*}
+\end{eg}
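+
+It is instructive to run the above computation in the smallest non-trivial case:
+\begin{eg}
+ Take $p = 3$ and $n = 2$, so $K = \Q_3(\zeta_9)$ and $\Gal(K/\Q_3) \cong (\Z/9\Z)^\times$ has order $6$. By the formula above, $i_{K/\Q_3}(\sigma_m) = 1$ for $m \in \{2, 5, 8\}$ (as $3 \nmid m - 1$) and $i_{K/\Q_3}(\sigma_m) = 3$ for $m \in \{4, 7\}$. So
+ \[
+ G_0 = (\Z/9\Z)^\times,\quad G_1 = G_2 = \{1, 4, 7\},\quad G_s = 1\text{ for }s \geq 3.
+ \]
+ Then
+ \[
+ \eta_{K/\Q_3}(2) = \frac{1}{6}(3 + 3 \cdot 1 + 2 \cdot 3) - 1 = \frac{12}{6} - 1 = 1,
+ \]
+ where the first $3$ comes from $\min(\infty, 3)$ for $m = 1$. This agrees with $\eta_{K/\Q_p}(p^k - 1) = k$ for $k = 1$, and indeed
+ \[
+ G^1(K/\Q_3) = G_2(K/\Q_3) = \{1, 4, 7\} = \Gal(K/\Q_3(\zeta_3)).
+ \]
+\end{eg}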
+
+\section{Local class field theory}
+Local class field theory is the study of abelian extensions of local fields, i.e.\ Galois extensions whose Galois groups are abelian.
+
+\subsection{Infinite Galois theory}
+It turns out that the best way of formulating this theory is to not only use finite extensions, but infinite extensions as well. So we need to begin with some infinite Galois theory. We will mostly just state the relevant results instead of proving them, because this is not a course on Galois theory.
+
+In this section, we will work with any field.
+\begin{defi}[Separable and normal extensions]\index{separable extension}\index{normal extension}
+ Let $L/K$ be an algebraic extension of fields. We say that $L/K$ is \emph{separable} if, for every $\alpha \in L$, the minimal polynomial $f_\alpha \in K[x]$ of $\alpha$ is separable. We say $L/K$ is \emph{normal} if $f_\alpha$ splits in $L$ for every $\alpha \in L$.
+\end{defi}
+
+\begin{defi}[Galois extension]\index{Galois extension}
+ Let $L/K$ be an algebraic extension of fields. Then it is \emph{Galois} if it is normal and separable. If so, we write
+ \[
+ \Gal(L/K) = \Aut_K(L).
+ \]
+\end{defi}
+These are all the same definitions as in the finite case.
+
+In finite Galois theory, the subgroups of $\Gal(L/K)$ match up with the intermediate extensions, but this is no longer true in the infinite case. The Galois group has too many subgroups. To fix this, we need to give $\Gal(L/K)$ a topology, and talk about closed subgroups.
+
+\begin{defi}[Krull topology]\index{Krull topology}
+ Let $M/K$ be a Galois extension. We define the \emph{Krull topology} on $\Gal(M/K)$ by the basis of open neighbourhoods of the identity
+ \[
+ \{\Gal(M/L): L/K\text{ is a finite subextension of }M/K\}.
+ \]
+ More explicitly, we say that $U \subseteq \Gal(M/K)$ is open if for every $\sigma \in U$, we can find a finite subextension $L/K$ of $M/K$ such that $\sigma \Gal(M/L) \subseteq U$.
+\end{defi}
+Note that any open subgroup of a topological group is automatically closed, but the converse does not hold.
+
+Note that when $M/K$ is finite, then the Krull topology is discrete, since we can just take the finite subextension to be $M$ itself.
+
+\begin{prop}
+ Let $M/K$ be a Galois extension. Then $\Gal(M/K)$ is compact and Hausdorff, and if $U \subseteq \Gal(M/K)$ is an open subset such that $1 \in U$, then there is an open normal subgroup $N \subseteq \Gal(M/K)$ such that $N \subseteq U$.
+\end{prop}
+Groups with the properties in this proposition are known as \term{profinite groups}.
+
+\begin{proof}
+ We will not prove the first part.
+
+ For the last part, note that by definition, there is a finite subextension $L/K$ of $M/K$ such that $\Gal(M/L) \subseteq U$. We then let $L'$ be the Galois closure of $L$ over $K$. Then $\Gal(M/L') \subseteq \Gal(M/L) \subseteq U$, and $\Gal(M/L')$ is open and normal.
+\end{proof}
+
+Recall that we previously defined the inverse limit of a sequence of rings. More generally, we can define such an inverse limit for any sufficiently nice poset of things. Here we are going to do it for topological groups (for those doing Category Theory, this is the filtered limit of topological groups).
+\begin{defi}[Directed system]\index{directed system}
+ Let $I$ be a set with a partial order. We say that $I$ is a \emph{directed system} if for all $i, j \in I$, there is some $k \in I$ such that $i \leq k$ and $j \leq k$.
+\end{defi}
+
+\begin{eg}
+ Any total order is a directed system.
+\end{eg}
+
+\begin{eg}
+ $\N$ with divisibility $\mid$ as the partial order is a directed system.
+\end{eg}
+
+\begin{defi}[Inverse limit]\index{inverse system}\index{inverse limit}
+ Let $I$ be a directed system. An \emph{inverse system} (of topological groups) indexed by $I$ is a collection of topological groups $G_i$ for each $i \in I$ and continuous homomorphisms
+ \[
+ f_{ij}: G_j \to G_i
+ \]
+ for all $i, j \in I$ such that $i \leq j$, such that
+ \[
+ f_{ii} = \id_{G_i}
+ \]
+ and
+ \[
+ f_{ik} = f_{ij} \circ f_{jk}
+ \]
+ whenever $i \leq j \leq k$.
+
+ We define the \emph{inverse limit} on the system $(G_i, f_{ij})$ to be
+ \[
+ \varprojlim_{i \in I} G_i = \left\{(g_i) \in \prod_{i \in I} G_i: f_{ij} (g_j) = g_i\text{ for all }i \leq j\right\} \subseteq \prod_{i \in I} G_i,
+ \]
+ which is a group under coordinate-wise multiplication and a topological space under the subspace topology of the product topology on $\prod_{i \in I} G_i$. This makes $\varprojlim_{i \in I} G_i$ into a topological group.
+\end{defi}
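+
+A standard example to keep in mind is the profinite completion of $\Z$:
+\begin{eg}
+ Take $I = \Z_{\geq 1}$ ordered by divisibility, let $G_n = \Z/n\Z$ with the discrete topology, and let $f_{mn}: \Z/n\Z \to \Z/m\Z$ be the reduction maps for $m \mid n$. Then
+ \[
+ \hat{\Z} = \varprojlim_n \Z/n\Z
+ \]
+ is the \emph{profinite completion} of $\Z$. Restricting to powers of a fixed prime $p$ recovers $\Z_p = \varprojlim_n \Z/p^n\Z$, the inverse limit of rings we met before.
+\end{eg}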
+
+\begin{prop}
+ Let $M/K$ be a Galois extension. The set $I$ of finite Galois subextensions $L/K$ is a directed system under inclusion. If $L, L' \in I$ and $L \subseteq L'$, then we have a restriction map
+ \[
+ \ph |_L^{L'} : \Gal(L'/K) \to \Gal(L/K).
+ \]
+ Then $(\Gal(L/K), \ph|_L^{L'})$ is an inverse system, and the map
+ \begin{align*}
+ \Gal(M/K) &\to \varprojlim_{L \in I} \Gal(L/K)\\
+ \sigma &\mapsto (\sigma|_L)_{L \in I}
+ \end{align*}
+ is an isomorphism of topological groups.
+\end{prop}
+
+We now state the main theorem of Galois theory.
+\begin{thm}[Fundamental theorem of Galois theory]\index{fundamental theorem of Galois theory}
+ Let $M/K$ be a Galois extension. Then the map $L \mapsto \Gal(M/L)$ defines a bijection between subextensions $L/K$ of $M/K$ and closed subgroups of $\Gal(M/K)$, with inverse given by sending $H \mapsto M^H$, the fixed field of $H$.
+
+ Moreover, $L/K$ is finite if and only if $\Gal(M/L)$ is open, and $L/K$ is Galois iff $\Gal(M/L)$ is normal, and then
+ \[
+ \frac{\Gal(M/K)}{\Gal(M/L)} \to \Gal(L/K)
+ \]
+ is an isomorphism of topological groups.
+\end{thm}
+
+\begin{proof}
+ This follows easily from the fundamental theorem for finite field extensions. We will only show that $\Gal(M/L)$ is closed and leave the rest as an exercise. We can write
+ \[
+ L = \bigcup_{\substack{L' \subseteq L\\L'/K\text{ finite}}} L'.
+ \]
+ Then we have
+ \[
+ \Gal(M/L) = \bigcap_{\substack{L' \subseteq L\\L'/K\text{ finite}}} \Gal(M/L'),
+ \]
+ and each $\Gal(M/L')$ is open, hence closed. So the whole thing is closed. % finish proof
+\end{proof}
+
+\subsection{Unramified extensions and Weil group}
+We first define what it means for an infinite extension to be unramified or totally ramified. To do so, we unexcitingly patch up the definitions for finite cases.
+\begin{defi}[Unramified extension]\index{unramified extension}
+ Let $K$ be a local field, and $M/K$ be algebraic. Then $M/K$ is \emph{unramified} if $L/K$ is unramified for every finite subextension $L/K$ of $M/K$.
+\end{defi}
+Note that since the extension is not necessarily finite, in general $M$ will not be a local field, since chances are its residue field would be infinite.
+
+\begin{defi}[Totally ramified extension]\index{totally ramified extension}
+ Let $K$ be a local field, and $M/K$ be algebraic. Then $M/K$ is \emph{totally ramified} if $L/K$ is totally ramified for every finite subextension $L/K$ of $M/K$.
+\end{defi}
+
+\begin{prop}
+ Let $M/K$ be an unramified extension of local fields. Then $M/K$ is Galois, and
+ \[
+ \Gal(M/K) \cong \Gal(k_M/k_K)
+ \]
+ via the reduction map.
+\end{prop}
+
+\begin{proof}
+ Every finite subextension of $M/K$ is unramified, so in particular is Galois. So $M/K$ is Galois (because normality and separability are checked element by element). Then we have a commutative diagram
+ \[
+ \begin{tikzcd}[column sep=large]
+ \Gal(M/K) \ar[r, "\text{reduction}"] \ar[d, "\sim"] & \Gal(k_M/k_K) \ar[d, "\sim"]\\
+ \displaystyle\varprojlim_{L/K} \Gal(L/K) \ar[r, "\text{reduction}", "\sim"'] & \displaystyle\varprojlim_{L/K} \Gal(k_L/k_K)
+ \end{tikzcd}
+ \]
+ The left hand map is an isomorphism by (infinite) Galois theory, and since all finite subextensions of $k_M/k_K$ are of the form $k_L/k_K$ by our finite theory, we know the right-hand map is an isomorphism. The bottom map is an isomorphism since it is an isomorphism in each component. So the top map must be an isomorphism.
+\end{proof}
+
+Since the compositum of unramified extensions is unramified, it follows that any algebraic extension $M/K$ has a maximal unramified subextension
+\[
+ T = T_{M/K}/K.
+\]
+In particular, every field $K$ has a maximal unramified extension $K^{ur}$\index{$K^{ur}$}.
+
+We now try to understand unramified extensions. For a finite unramified extension $L/K$, we have an isomorphism
+\[
+ \begin{tikzcd}
+ \Gal(L/K) \ar[r, "\sim"] & \Gal(k_L/k_K)
+ \end{tikzcd}.
+\]
+By general field theory, we know that $\Gal(k_L/k_K)$ is a cyclic group generated by
+\[
+ \Frob_{L/K}: x \mapsto x^q,
+\]
+where $q = |k_K|$ is the size of $k_K$. So by the isomorphism, we obtain a generator of $\Gal(L/K)$.
+
+\begin{defi}[Arithmetic Frobenius]\index{arithmetic Frobenius}
+ Let $L/K$ be a finite unramified extension of local fields. The \emph{(arithmetic) Frobenius} of $L/K$ is the lift of $\Frob_{L/K} \in \Gal(k_L/k_K)$ under the isomorphism $\Gal(L/K) \cong \Gal(k_L/k_K)$.
+\end{defi}
+There is also a geometric Frobenius, which is its inverse, but we will not use that in this course.
+
+We know $\Frob$ is compatible in towers, i.e.\ if $M/L/K$ is a tower of finite unramified extensions of local fields, then $\Frob_{M/K}|_L = \Frob_{L/K}$, since they both reduce to the map $x \mapsto x^{|k_K|}$ in $\Gal(k_L/k_K)$, and the map between $\Gal(k_L/k_K)$ and $\Gal(L/K)$ is a bijection.
+
+So if $M/K$ is an arbitrary unramified extension, then we have an element
+\[
+ (\Frob_{L/K})_{L/K} \in \varprojlim_{L/K} \Gal(L/K) \cong \Gal(M/K).
+\]
+So we get an element $\Frob_{M/K} \in \Gal(M/K)$. By tracing through the proof of $\Gal(M/K) \cong \Gal(k_M/k_K)$, we see that this is the unique lift of $x \mapsto x^{|k_K|}$.
+
+Note that while for finite unramified extensions $M/K$, the Galois group is generated by the Frobenius, this is not necessarily the case when the extension is infinite. However, powers of the Frobenius are the only things we want to think about, so we make the following definition:
+
+\begin{defi}[Weil group]\index{Weil group}\index{$W(M/K)$}
+ Let $K$ be a local field and $M/K$ be Galois. Let $T = T_{M/K}$ be the maximal unramified subextension of $M/K$. The \emph{Weil group} of $M/K$ is
+ \[
+ W(M/K) = \{\sigma \in \Gal(M/K): \sigma|_T = \Frob_{T/K}^n\text{ for some }n \in \Z\}.
+ \]
+ We define a topology on $W(M/K)$ by saying that $U$ is open iff for every $\sigma \in U$, there is a finite extension $L/T$ such that $\sigma \Gal(M/L) \subseteq U$.
+\end{defi}
+In particular, if $M/K$ is unramified, then $W(M/K) = \Frob_{T/K}^\Z$.
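+
+The simplest example already shows how the Weil group differs from the Galois group (using the standard fact that the absolute Galois group of a finite field is $\hat{\Z}$):
+\begin{eg}
+ Take $M = K^{ur}$, the maximal unramified extension. Then $\Gal(K^{ur}/K) \cong \Gal(\bar{k}_K/k_K) \cong \hat{\Z}$, topologically generated by the Frobenius, while
+ \[
+ W(K^{ur}/K) = \Frob_{K^{ur}/K}^\Z \cong \Z
+ \]
+ is the copy of $\Z$ inside $\hat{\Z}$, now carrying the discrete topology. It is a proper subgroup, but a dense one, illustrating the proposition below.
+\end{eg}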
+
+It is helpful to put these groups into a diagram of topological groups to see what is going on.
+\[
+ \begin{tikzcd}
+ \Gal(M/T) \ar[d, equals] \ar[r, hook] & W(M/K)\ar[d, hook] \ar[r, two heads] & \Frob_{T/K}^\Z \ar[d, hook]\\
+ \Gal(M/T) \ar[r, hook] & \Gal(M/K) \ar[r, two heads] & \Gal(T/K)
+ \end{tikzcd}
+\]
+Here we put the discrete topology on the subgroup generated by the Frobenius. The topology of $W(M/K)$ is then chosen so that all these maps are continuous homomorphisms of groups.
+
+In many ways, the Weil group works rather like the Galois group.
+\begin{prop}
+ Let $K$ be a local field, and $M/K$ Galois. Then $W(M/K)$ is dense in $\Gal(M/K)$. Equivalently, for any finite Galois subextension $L/K$ of $M/K$, the restriction map $W(M/K) \to \Gal(L/K)$ is surjective.
+
+ If $L/K$ is a finite subextension of $M/K$, then
+ \[
+ W(M/L) = W(M/K) \cap \Gal(M/L).
+ \]
+ If $L/K$ is also Galois, then
+ \[
+ \frac{W(M/K)}{W(M/L)} \cong \Gal(L/K)
+ \]
+ via restriction.
+\end{prop}
+
+\begin{proof}
+ We first prove density. To see that density is equivalent to $W(M/K) \to \Gal(L/K)$ being surjective for all finite Galois subextensions $L/K$, note that by the topology on $\Gal(M/K)$, density is equivalent to saying that $W(M/K)$ hits every coset of $\Gal(M/L)$, which is exactly surjectivity of $W(M/K) \to \Gal(L/K)$.
+
+ Let $L/K$ be a finite Galois subextension. We let $T = T_{M/K}$. Then $T_{L/K} = T \cap L$. Then we have a diagram
+ \[
+ \begin{tikzcd}
+ \Gal(M/T) \ar[r] \ar[d, two heads] & W(M/K) \ar[r]\ar[d] & \Frob_{T/K}^\Z \ar[d, two heads]\\
+ \Gal(L/T \cap L) \ar[r] & \Gal(L/K) \ar[r] & \Gal(T \cap L/K)
+ \end{tikzcd}
+ \]
+ Here the surjectivity of the left vertical arrow comes from field theory, and the right hand vertical map is surjective because $T \cap L/K$ is finite and hence the Galois group is generated by the Frobenius. Since the top and bottom rows are short exact sequences (top by definition, bottom by Galois theory), by diagram chasing (half of the five lemma), we get surjectivity in the middle.
+
+ \separator
+
+ To prove the second part, we again let $L/K$ be a finite subextension. Then $L \cdot T_{M/K} \subseteq T_{M/L}$. We then have maps
+ \[
+ \begin{tikzcd}
+ \Frob_{T_{M/K}/K}^\Z \ar[r, hook] & \Gal(T_{M/K}/K) \ar[r, "\cong"] & \Gal(k_M/k_K)\\
+ \Frob_{T_{M/L}/L}^\Z \ar[r, hook] \ar[u] & \Gal(T_{M/L}/L)\ar[u, hook] \ar[r, "\cong"] & \Gal(k_M/k_L) \ar[u, hook]
+ \end{tikzcd}
+ \]
+ So the left hand vertical map is an inclusion. So we know
+ \[
+ \Frob_{T_{M/L}/L}^\Z = \Frob_{T_{M/K}/K}^\Z \cap \Gal(T_{M/L}/L).
+ \]
+ Now if $\sigma \in \Gal(M/L)$, then we have
+ \begin{align*}
+ \sigma \in W(M/L) &\Leftrightarrow \sigma|_{T_{M/L}/L} \in \Frob_{T_{M/L}/L}^\Z\\
+ &\Leftrightarrow \sigma|_{T_{M/K}/K} \in \Frob_{T_{M/K}/K}^\Z\\
+ &\Leftrightarrow \sigma \in W(M/K).
+ \end{align*}
+ So this gives the second part.
+
+ \separator
+
+ Now $L/K$ is Galois as well. Then $\Gal(M/L)$ is normal in $\Gal(M/K)$. So $W(M/L)$ is normal in $W(M/K)$ by the second part. Then we can compute
+ \begin{align*}
+ \frac{W(M/K)}{W(M/L)} &= \frac{W(M/K)}{W(M/K) \cap \Gal(M/L)}\\
+ &\cong \frac{W(M/K)\Gal(M/L)}{\Gal(M/L)} \\
+ &= \frac{\Gal(M/K)}{\Gal(M/L)}\\
+ &\cong \Gal(L/K).
+ \end{align*}
+ The only non-trivial part in this chain is the assertion that $W(M/K) \Gal(M/L) = \Gal(M/K)$, i.e.\ that $W(M/K)$ hits every coset of $\Gal(M/L)$, which is what density tells us.
+\end{proof}
+
+\subsection{Main theorems of local class field theory}
+We now come to the main theorems of local class field theory.
+\begin{defi}[Abelian extension]\index{abelian extension}
+ Let $K$ be a local field. A Galois extension $L/K$ is \emph{abelian} if $\Gal(L/K)$ is abelian.
+\end{defi}
+
+We will fix an algebraic closure $\bar{K}$ of $K$, and all algebraic extensions we will consider will be taken to be subextensions of $\bar{K}/K$. We let $K^{\mathrm{sep}}$ be the separable closure of $K$ inside $\bar{K}$.
+
+If $L/K$ and $M/K$ are Galois extensions, then $LM/K$ is Galois, and the map given by restriction
+\[
+ \Gal(LM/K) \hookrightarrow \Gal(L/K) \times \Gal(M/K)
+\]
+is an injection. In particular, if $L/K$ and $M/K$ are both abelian, then so is $LM/K$. This implies that there is a maximal abelian extension $K^{\mathrm{ab}}$.
+
+Finally, note that we know an example of an abelian extension, namely the maximal unramified extension $K^{\mathrm{ur}} = T_{K^{\mathrm{sep}}/K} \subseteq K^{\mathrm{ab}}$, and we put $\Frob_K = \Frob_{K^{\mathrm{ur}}/K}$.
+
+\begin{thm}[Local Artin reciprocity]
+ There exists a unique topological isomorphism
+ \[
+ \Art_K: K^\times \to W(K^{\mathrm{ab}}/K)
+ \]
+ characterized by the properties
+ \begin{enumerate}
+ \item $\Art_K(\pi_K)|_{K^{\mathrm{ur}}} = \Frob_K$, where $\pi_K$ is \emph{any} uniformizer.
+ \item We have
+ \[
+ \Art_K(N_{L/K}(x))|_L = \id_L
+ \]
+ for all $L/K$ finite abelian and $x \in L^\times$.
+ \end{enumerate}
+ Moreover, if $M/K$ is finite, then for all $x \in M^\times$, we know $\Art_M(x)$ is an automorphism of $M^{\mathrm{ab}}/M$, and restricts to an automorphism of $K^{\mathrm{ab}}/K$. Then we have
+ \[
+ \Art_M(x)|_K^{K^{\mathrm{ab}}} = \Art_K(N_{M/K}(x)).
+ \]
+ Moreover, $\Art_K$ induces an isomorphism
+ \[
+ \frac{K^\times}{N_{M/K}(M^\times)} \to \Gal\left(\frac{M\cap K^{\mathrm{ab}}}{K}\right).
+ \]
+\end{thm}
+
+To simplify this, we will write $N(L/K) = N_{L/K}(L^\times)$ for $L/K$ finite\index{$N(L/K)$}. From this theorem, we can deduce a lot of more precise statements.
+\begin{cor}
+ Let $L/K$ be finite. Then $N(L/K) = N((L \cap K^{\mathrm{ab}})/K)$, and
+ \[
+ (K^\times : N(L/K)) \leq [L:K]
+ \]
+ with equality iff $L/K$ is abelian.
+\end{cor}
+
+\begin{proof}
+ To see this, we let $M = L \cap K^{\mathrm{ab}}$. Applying the isomorphism twice gives
+ \[
+ \frac{K^\times}{N(L/K)} \cong \Gal(M/K) \cong \frac{K^\times}{N(M/K)}.
+ \]
+ Since $N(L/K) \subseteq N(M/K)$, and $[L:K] \geq [M:K] = |\Gal(M/K)|$, we are done: the index is $[M:K] \leq [L:K]$, with equality exactly when $L = M$, i.e.\ when $L/K$ is abelian.
+\end{proof}
+
+The theorem tells us if we have a finite abelian extension $M/K$, then we obtain an open finite-index subgroup $N_{M/K}(M^\times) \leq K^\times$. Conversely, if we are given an open finite-index subgroup of $K^\times$, we might ask if there is an abelian extension of $K$ whose norm group is this subgroup. The following theorem tells us this is the case:
+\begin{thm}
+ Let $K$ be a local field. Then there is an isomorphism of posets
+ \[
+ \begin{tikzcd}[/tikz/column 1/.append style={anchor=base east}
+ ,/tikz/column 2/.append style={anchor=base west}
+ ,row sep=tiny]
+ \left\{\parbox{3cm}{\centering open finite index\\subgroups of $K^\times$}\right\} \ar[r, leftrightarrow] & \left\{\parbox{3cm}{\centering finite abelian\\extensions $L/K$}\right\}\\
+ H \ar[r, maps to] & (K^{\mathrm{ab}})^{\Art_K(H)}\\
+ N(L/K) & L/K \ar[l, maps to]
+ \end{tikzcd}.
+ \]
+ In particular, for $L/K$ and $M/K$ finite abelian extensions, we have
+ \begin{align*}
+ N(LM/K) &= N(L/K) \cap N(M/K),\\
+ N(L\cap M/K) &= N(L/K)N(M/K).
+ \end{align*}
+\end{thm}
+
+While proving this requires quite a bit of work, a small part of it follows from local Artin reciprocity:
+
+\begin{thm}
+ Let $L/K$ be a finite extension, and $M/K$ abelian. Then $N(L/K) \subseteq N(M/K)$ iff $M \subseteq L$.
+\end{thm}
+
+\begin{proof}
+ By the previous theorem, we may wlog assume $L/K$ is abelian by replacing $L$ with $L \cap K^{\mathrm{ab}}$. The $\Leftarrow$ direction is clear by the last part of Artin reciprocity.
+
+ For the other direction, we assume that we have $N(L/K) \subseteq N(M/K)$, and let $\sigma \in \Gal(K^{\mathrm{ab}}/L)$. We want to show that $\sigma|_M = \id_M$. This would then imply that $M$ is a subfield of $L$ by Galois theory.
+
+ We know $W(K^{\mathrm{ab}}/L)$ is dense in $\Gal(K^{\mathrm{ab}}/L)$. So it suffices to show this for $\sigma \in W(K^{\mathrm{ab}}/L)$. Then we have % why
+ \[
+ W(K^{\mathrm{ab}}/L) \cong \Art_K(N(L/K)) \subseteq \Art_K(N(M/K)).
+ \]
+ So we can find $x \in M^\times$ such that $\sigma = \Art_K(N_{M/K}(x))$. So $\sigma|_M = \id_M$ by local Artin reciprocity.
+\end{proof}
+
+Side note: Why is this called ``class field theory''? Usually, we call the field corresponding to the subgroup $H$ the \term{class field} of $H$. Historically, the first type of theorems like this are proved for number fields. The groups that appear on the left would be different, but in some cases, they are the class group of the number field.
+
+\section{Lubin--Tate theory}
+For the rest of the course, we will indicate how one can explicitly construct the field $K^{\mathrm{ab}}$ and the map $\Art_K$.
+
+There are many ways we can approach local class field theory. The approach we take, via Lubin--Tate theory, is the most accessible one. Another possible approach is via Galois cohomology, but that relies on more advanced machinery.
+
+\subsection{Motivating example}
+
+We will work out the details of local Artin reciprocity in the case of $\Q_p$ as a motivating example for the proof we are going to come up with later. Here we will need the results of local class field theory to justify our claims, but this is not circular since this is not really part of the proof.
+
+\begin{lemma}
+ Let $L/K$ be a finite abelian extension. Then we have
+ \[
+ e_{L/K} = (\mathcal{O}_K^\times : N_{L/K}(\mathcal{O}_L^\times)).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Pick $x \in L^\times$, let $w$ be the valuation on $L$ extending $v_K$, and set $n = [L:K]$. Then by construction of $w$, we know
+ \[
+ v_K(N_{L/K}(x)) = n w(x) = f_{L/K}v_L(x).
+ \]
+ So we have a surjection
+ \[
+ \begin{tikzcd}
+ \displaystyle\frac{K^\times}{N(L/K)} \ar[r, "v_K", two heads] & \displaystyle\frac{\Z}{f_{L/K}\Z}
+ \end{tikzcd}.
+ \]
+ The kernel of this map is equal to
+ \[
+ \frac{\mathcal{O}_K^\times N(L/K)}{N(L/K)} \cong \frac{\mathcal{O}_K^\times}{\mathcal{O}_K^\times \cap N(L/K)} = \frac{\mathcal{O}_K^\times}{N_{L/K}(\mathcal{O}_L^\times)}.
+ \]
+ So by local class field theory, we know
+ \[
+ n = (K^\times : N(L/K)) = f_{L/K} (\mathcal{O}_K^\times : N_{L/K}(\mathcal{O}_L^\times)),
+ \]
+ and this implies what we want.
+\end{proof}
+
+\begin{cor}
+ Let $L/K$ be a finite abelian extension. Then $L/K$ is unramified if and only if $N_{L/K}(\mathcal{O}_L^\times) = \mathcal{O}_K^\times$.
+\end{cor}
+
+Now we fix a uniformizer $\pi_K$. Then we have a topological group isomorphism
+\[
+ K^\times \cong \bra \pi_K\ket \times \mathcal{O}_K^\times.
+\]
+Since we know that the finite abelian extensions correspond exactly to finite index subgroups of $K^\times$ by taking the norm groups, we want to understand subgroups of $K^\times$. Now consider the subgroups of $K^\times$ of the form
+\[
+ \bra \pi_K^m\ket \times U_K^{(n)}.
+\]
+We know these form a basis of the topology of $K^\times$, so it follows that finite-index open subgroups must contain one of these guys. So we can find the maximal abelian extension as the union of all fields corresponding to these guys.
+
+Since we know that $N(LM/K) = N(L/K) \cap N(M/K)$, it suffices to further specialize to the cases
+\[
+ \bra \pi_K\ket \times U_K^{(n)}
+\]
+and
+\[
+ \bra \pi_K^m\ket \times \mathcal{O}_K^\times
+\]
+separately. The second case is easy, because this corresponds to an unramified extension by the above corollary, and unramified extensions are completely characterized by the extension of the residue field. Note that the norm group and the extension are both independent of the choice of uniformizer. The extensions corresponding to the first case are much more difficult to construct, and they depend on the choice of $\pi_K$. We will get them from Lubin--Tate theory.% Elaborated and fixed original mistake.
+
+\begin{lemma}
+ Let $K$ be a local field, and let $L_m/K$ be the extension corresponding to $\bra \pi_K^m \ket \times \mathcal{O}_K^\times$. Let
+ \[
+ L = \bigcup_m L_m. % find out correct notation % This is correct
+ \]
+ Then we have
+ \[
+ K^{\mathrm{ab}} = K^{\mathrm{ur}}L.
+ \]
+\end{lemma}
+
+\begin{lemma}
+ We have isomorphisms
+ \begin{align*}
+ W(K^{\mathrm{ab}}/K) &\cong W(K^{\mathrm{ur}} L/K) \\
+ &\cong W(K^{\mathrm{ur}}/K) \times \Gal(L/K)\\
+ &\cong \Frob_K^\Z \times \Gal(L/K).
+ \end{align*}
+\end{lemma}
+
+\begin{proof}
+ The first isomorphism follows from the previous lemma. The second follows from the fact that $K^{\mathrm{ur}} \cap L = K$, as $L$ is totally ramified. The last isomorphism follows from the fact that $T_{K^{\mathrm{ur}}/K} = K^{\mathrm{ur}}$ trivially, and then by definition $W(K^{\mathrm{ur}}/K) \cong \Frob_K^\Z$.
+\end{proof}
+
+\begin{eg}
+ We consider the special case of $K = \Q_p$ and $\pi_K = p$. We let
+ \[
+ L_n = \Q_p (\zeta_{p^n}),
+ \]
+ where $\zeta_{p^n}$ is a primitive $p^n$th root of unity. Then by question 6 on example sheet 2, we know this is a field with norm group
+ \[
+ N(\Q_p(\zeta_{p^n})/\Q_p) = \bra p\ket \times(1 + p^n\Z_p) = \bra p \ket \times U_{\Q_p}^{(n)},
+ \]
+ and thus this is a totally ramified extension of $\Q_p$.
+
+ We put
+ \[
+ \Q_p(\zeta_{p^\infty}) = \bigcup_{n = 1}^\infty \Q_p(\zeta_{p^n}).
+ \]
+ Then this is again a totally ramified extension, since it is the nested union of totally ramified extensions.
+
+ Then we have
+ \begin{align*}
+ \Gal(\Q_p(\zeta_{p^\infty})/\Q_p) &\cong \varprojlim_n \Gal(\Q_p(\zeta_{p^n})/\Q_p)\\
+ &= \varprojlim_n (\Z/p^n \Z)^\times \\
+ &= \Z_p^\times.
+ \end{align*}
+ Note that we are a bit sloppy in this deduction. While we know that it is true that $\Z_p^\times \cong \varprojlim_n (\Z/p^n\Z)^\times$, the inverse limit depends not only on the groups $(\Z/p^n\Z)^\times$ themselves, but also on the maps we use to connect the groups together. Fortunately, from the discussion below, we will see that the maps
+ \[
+ \Gal(\Q_p(\zeta_{p^n})/\Q_p) \to \Gal(\Q_p(\zeta_{p^{n - 1}})/\Q_p)
+ \]
+ indeed correspond to the natural reduction maps
+ \[
+ (\Z/p^n \Z)^\times \to (\Z/p^{n - 1}\Z)^\times.
+ \]
+ It is a fact that this is the \emph{inverse} of the Artin map of $\Q_p$ restricted to $\Z_p^\times$. Note that we have $W(\Q_p(\zeta_{p^\infty})/\Q_p) = \Gal(\Q_p(\zeta_{p^\infty})/\Q_p)$ because its maximal unramified subextension is trivial.
+
+ We can trace through the above chain of isomorphisms to figure out what the Artin map does. Let $m \in \Z_p^\times$. Then we can write
+ \[
+ m = a_0 + a_1p + \cdots,
+ \]
+ where $a_i \in \{0, \cdots, p - 1\}$ and $a_0 \not= 0$. Now for each $n$, we know
+ \[
+ m \equiv a_0 + a_1 p + \cdots + a_{n - 1}p^{n - 1} \mod p^n.
+ \]
+ By the usual isomorphism $\Gal(\Q_p(\zeta_{p^n})/\Q_p) \cong (\Z/p^n\Z)^\times$, we know $m$ acts as
+ \[
+ \zeta_{p^n} \mapsto \zeta_{p^n}^{a_0 + a_1 p + \ldots + a_{n - 1} p^{n - 1}} \qeq \zeta_{p^n}^m
+ \]
+ on $\Q_p(\zeta_{p^n})$, where we abuse notation since $\zeta_{p^n}^{p^k} = 1$ for all $k \geq n$. The action can also be interpreted as $\zeta_{p^n}^m = (1+\lambda_{p^n})^m$, where $\lambda_{p^n}=\zeta_{p^n}-1$ is a uniformizer, which makes sense using the binomial expansion.
+
+ So the above isomorphisms tells us that $\Art_{\Q_p}$ restricted to $\Z_p^\times$ acts on $\Q_p(\zeta_{p^\infty})$ as
+ \[
+ \Art_{\Q_p}(m)(\zeta_{p^n}) = \sigma_{m^{-1}} (\zeta_{p^n}) = \zeta_{p^n}^{m^{-1}}.
+ \]
+ The full Artin map can then be read off from the following diagram:
+ \[
+ \begin{tikzcd}
+ \Q_p^\times \ar[d, "\cong"] \ar[r, "\Art_{\Q_p}"] & W(\Q_p^{\mathrm{ab}}/\Q_p) \ar[d, "\text{restriction}", "\sim"']\\
+ \bra p\ket\times \Z_p^\times \ar[r] & W(\Q_p^{\mathrm{ur}} / \Q_p) \times \Gal(\Q_p(\zeta_{p^\infty}) / \Q_p)
+ \end{tikzcd}
+ \]
+ where the bottom map sends
+ \[
+ (p^n, m) \mapsto (\Frob_{\Q_p}^n, \sigma_{m^{-1}}).
+ \]
+\end{eg}
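The identification $\Gal(\Q_p(\zeta_{p^n})/\Q_p) \cong (\Z/p^n\Z)^\times$ used above can be sanity-checked numerically: the group $(\Z/p^n\Z)^\times$ has order $p^{n-1}(p-1)$ and is cyclic. A minimal Python sketch, with the arbitrary choice $p = 5$, $n = 2$:

```python
p, n = 5, 2
mod = p ** n
units = [a for a in range(1, mod) if a % p != 0]
assert len(units) == p ** (n - 1) * (p - 1)  # |(Z/p^n Z)^x| = p^(n-1)(p-1)

def order(a):
    # multiplicative order of a modulo p^n
    k, x = 1, a
    while x != 1:
        x = x * a % mod
        k += 1
    return k

# the group is cyclic: some unit has full order
assert any(order(a) == len(units) for a in units)
```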
+
+In fact, we have
+\begin{thm}[Local Kronecker-Weber theorem]\index{Local Kronecker-Weber theorem}
+ \begin{align*}
+ \Q_p^{\mathrm{ab}} &= \bigcup_{n \in \Z_{\geq 1}} \Q_p(\zeta_n),\\
+ \Q_p^{\mathrm{ur}} &= \bigcup_{\substack{n \in \Z_{\geq 1}\\ (n, p) = 1}} \Q_p(\zeta_n).
+ \end{align*}
+\end{thm}
+
+\begin{proof}[Not a proof]
+ We will comment on the proof of the generalized version later.
+\end{proof}
+
+\begin{remark}
+ There is another normalization of the Artin map which sends a uniformizer to the \term{geometric Frobenius}, defined to be the inverse of the arithmetic Frobenius. With this convention, $\Art_{\Q_p}(m)|_{\Q_p(\zeta_{p^\infty})}$ is $\sigma_m$.
+\end{remark}
+
+We can define higher ramification groups for general Galois extensions.
+\begin{defi}[Higher ramification groups]\index{Higher ramification groups}
+ Let $K$ be a local field and $M/K$ Galois. We define, for $s \in \R_{\geq -1}$,
+ \begin{gather*}
+ G^s(M/K) = \{ \sigma \in \Gal(M/K): \sigma|_L \in G^s(L/K)\text{ for all finite}\\
+ \text{Galois subextensions $L/K$ of $M/K$}\}.
+ \end{gather*}
+\end{defi}
+This definition makes sense, because the upper number behaves well when we take quotients. This is one of the advantages of upper numbering. Note that we can write the ramification group as the inverse limit
+\[
+ G^s(M/K) \cong \varprojlim_{L/K}G^s(L/K),
+\]
+as in the case of the Galois group.
+
+\begin{eg}
+ Going back to the case of $K = \Q_p$, we write $\Q_{p^n}$ for the unramified extension of degree $n$ over $\Q_p$. By question 11 of example sheet 3, we know that
+ \[
+ G^s(\Q_{p^n}(\zeta_{p^m})/\Q_p) =
+ \begin{cases}
+ \Gal(\Q_{p^n}(\zeta_{p^m}) / \Q_p) & s = -1\\
+ \Gal(\Q_{p^n}(\zeta_{p^m}) / \Q_{p^n}) & -1 < s \leq 0\\
+ \Gal(\Q_{p^n}(\zeta_{p^m}) / \Q_{p^n}(\zeta_{p^k})) & k - 1 < s \leq k \leq m - 1\\
+ 1 & s > m -1
+ \end{cases},
+ \]
+ which corresponds to
+ \[
+ \begin{cases}
+ \displaystyle \frac{\bra p\ket \times U^{(0)}}{\bra p^n \ket \times U^{(m)}} & s = -1\\
+ \displaystyle \frac{\bra p^n\ket \times U^{(0)}}{\bra p^n \ket \times U^{(m)}} & -1 < s \leq 0\\
+ \displaystyle \frac{\bra p^n\ket \times U^{(k)}}{\bra p^n \ket \times U^{(m)}} & k - 1 < s \leq k \leq m - 1\\
+ 1 & s > m - 1
+ \end{cases}
+ \]
+ under the Artin map.
+\end{eg}
+
+By taking the limit as $n, m \to \infty$, we get
+\begin{thm}
+ We have
+ \[
+ G^s(\Q_p^{\mathrm{ab}}/\Q_p) = \Art_{\Q_p}(1 + p^k \Z_p) = \Art_{\Q_p} (U^{(k)}),
+ \]
+ where $k$ is chosen such that $k - 1< s \leq k$, $k \in \Z_{\geq 0}$.
+\end{thm}
+
+\begin{cor}
+ If $L/\Q_p$ is a finite abelian extension, then
+ \[
+ G^s(L/\Q_p) = \Art_{\Q_p}\left(\frac{N(L/\Q_p)(1 + p^n \Z_p)}{N(L/\Q_p)}\right),
+ \]
+ where $n - 1 < s \leq n$.
+\end{cor}
+Here $\Art_{\Q_p}$ induces an isomorphism
+\[
+ \frac{\Q_p^\times}{N(L/\Q_p)} \to \Gal(L/\Q_p).
+\]
+So it follows that $L \subseteq \Q_{p^n}(\zeta_{p^m})$ for some $n$ if and only if $G^s(L/\Q_p) = 1$ for all $s > m - 1$. % why? % Basic Galois theory
+
+\subsection{Formal groups}
+The proof of local Artin reciprocity will be done by constructing the analogous versions of $L_n$ for an arbitrary local field, and then proving that it works. To do so, we will need the notion of a \emph{formal group}. The idea is that a formal group is a rule specifying how to multiply two elements via a power series over a ring $R$. If we then have a complete $R$-module, the formal group turns the $R$-module into an actual group. There is then a natural notion of a formal module, which is a formal group $F$ with an $R$-action.
+
+At the end, we will pick $R = \mathcal{O}_K$. The idea is then that we can fix an algebraic closure $\bar{K}$, and then a formal $\mathcal{O}_K$-module will turn $\mathfrak{m}_{\bar{K}}$ into an actual $\mathcal{O}_K$-module. Then if we adjoin the right elements of $\mathfrak{m}_{\bar{K}}$ to $K$, then we obtain an extension of $K$ with a natural $\mathcal{O}_K$ action, and we can hope that this restricts to field automorphisms when we restrict to the unit group.
+
+
+\begin{notation}
+ Let $R$ be a ring. We write
+ \[
+ R[[x_1, \cdots, x_n]] = \left\{ \sum_{k_1, \ldots, k_n \in \Z_{\geq 0}} a_{k_1, \ldots, k_n} x_1^{k_1} \cdots x_n^{k_n}: a_{k_1, \ldots, k_n} \in R\right\}
+ \]
+ for the ring of formal power series in $n$ variables over $R$.
+\end{notation}
+
+\begin{defi}[Formal group]\index{formal group}
+ A (one-dimensional, commutative) \term{formal group} over $R$ is a power series $F(X, Y) \in R[[X, Y]]$ such that
+ \begin{enumerate}
+ \item $F(X, Y) \equiv X + Y \mod (X^2, XY, Y^2)$
+ \item Commutativity: $F(X, Y) = F(Y, X)$
+ \item Associativity: $F(X, F(Y, Z)) = F(F(X, Y), Z)$.
+ \end{enumerate}
+\end{defi}
+This is most naturally understood from the point of view of algebraic geometry, as a generalization of the Lie algebra of a Lie group. Instead of talking about the tangent space of a group (the ``first-order neighbourhood''), we talk about its infinitesimal (formal) neighbourhood, which contains all higher-order information. A lot of the seemingly arbitrary compatibility conditions we later impose have geometric motivation that we unfortunately cannot go into.
+
+\begin{eg}
+ If $F$ is a formal group over $\mathcal{O}_K$, where $K$ is a complete valued field, then $F(x, y)$ converges for all $x, y \in \mathfrak{m}_K$. So $\mathfrak{m}_K$ becomes a (semi)group under the multiplication
+ \begin{align*}
+ (x, y) \mapsto F(x, y) \in \mathfrak{m}_K.
+ \end{align*}
+\end{eg}
+
+\begin{eg}
+ We can define
+ \[
+ \hat{\GG}_a(X, Y) = X + Y.
+ \]
+ This is called the \term{formal additive group}.
+
+ Similarly, we can have
+ \[
+ \hat{\GG}_m(X, Y) = X + Y + XY.
+ \]
+ This is called the \term{formal multiplicative group}. Note that
+ \[
+ X + Y + XY = (1 + X)(1 + Y) - 1.
+ \]
+ So if $K$ is a complete valued field, then $\mathfrak{m}_K$ bijects with $1 + \mathfrak{m}_K$ by sending $x \mapsto 1 + x$, and the rule sending $(x, y) \in \mathfrak{m}_K^2 \mapsto x + y + xy \in \mathfrak{m}_K$ is just the usual multiplication in $1 + \mathfrak{m}_K$ transported to $\mathfrak{m}_K$ via the bijection above.
+
+ We can think of this as looking at the group in a neighbourhood of the identity $1$.
+\end{eg}
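Since the formal group axioms for $\hat{\GG}_a$ and $\hat{\GG}_m$ are exact polynomial identities, they can be verified by evaluating at a few exact rational points. A minimal Python sketch (the sample points are an arbitrary choice):

```python
from fractions import Fraction as Fr
import itertools

def G_a(x, y):  # formal additive group: X + Y
    return x + y

def G_m(x, y):  # formal multiplicative group: (1 + X)(1 + Y) - 1
    return x + y + x * y

pts = [Fr(1, 3), Fr(-2, 7), Fr(5, 11)]
for law in (G_a, G_m):
    for x, y, z in itertools.product(pts, repeat=3):
        assert law(x, y) == law(y, x)                  # commutativity
        assert law(x, law(y, z)) == law(law(x, y), z)  # associativity
```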
+
+Note that we called this a formal \emph{group}, rather than a formal semi-group. It turns out that the existence of identity and inverses is automatic.
+\begin{lemma}
+ Let $R$ be a ring and $F$ a formal group over $R$. Then
+ \[
+ F(X, 0) = X.
+ \]
+ Also, there exists a power series $i(X) \in X \cdot R[[X]]$ such that
+ \[
+ F(X, i(X)) = 0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ See example sheet 4. % fill in
+\end{proof}
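For $\hat{\GG}_m$, the inverse series of the lemma can be written down explicitly: $i(X) = \frac{1}{1+X} - 1 = -X + X^2 - X^3 + \cdots$. The following Python sketch checks $F(X, i(X)) = 0$ modulo $X^9$ using truncated power series with exact rational coefficients (the truncation degree is an arbitrary choice):

```python
from fractions import Fraction as Fr

N = 8  # work modulo X^(N + 1)

def mul(a, b):
    # truncated product of power series given as coefficient lists
    c = [Fr(0)] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

# the series X, and the candidate inverse i(X) = -X + X^2 - X^3 + ...
X = [Fr(0), Fr(1)] + [Fr(0)] * (N - 1)
inv = [Fr(0)] + [Fr((-1) ** k) for k in range(1, N + 1)]

# G_m-hat: F(X, Y) = X + Y + XY, evaluated at Y = i(X)
F = [x + y + z for x, y, z in zip(X, inv, mul(X, inv))]
assert all(c == 0 for c in F)  # F(X, i(X)) = 0 mod X^(N + 1)
```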
+
+The next thing to do is to define homomorphisms of formal groups.
+
+\begin{defi}[Homomorphism of formal groups]\index{homomorphism!formal group}\index{formal group!homomorphism}
+ Let $R$ be a ring, and $F, G$ be formal groups over $R$. A \emph{homomorphism} $f: F \to G$ is an element $f \in R[[X]]$ such that $f(X) \equiv 0 \mod X$ and
+ \[
+ f(F(X, Y)) = G(f(X), f(Y)).
+ \]
+ The endomorphisms $f: F \to F$ form a ring $\End_R(F)$ with addition $+_F$ given by
+ \[
+ (f +_F g)(x) = F(f(x), g(x)),
+ \]
+ and multiplication given by composition.
+\end{defi}
+
+We can now define a formal module in the usual way, plus some compatibility conditions.
+\begin{defi}[Formal module]\index{formal module}\index{formal group!module}\index{module!formal group}
+ Let $R$ be a ring. A \emph{formal $R$-module} is a formal group $F$ over $R$ together with a ring homomorphism $R \to \End_{R}(F)$, written $a \mapsto [a]_F$, such that
+ \[
+ [a]_F(X) \equiv aX \mod X^2.
+ \]
+\end{defi}
+
+Those were all general definitions. We now restrict to the case we really care about. Let $K$ be a local field, and $q = |k_K|$. We let $\pi \in \mathcal{O}_K$ be a uniformizer.
+\begin{defi}[Lubin--Tate module]\index{Lubin--Tate module}
+ A \emph{Lubin--Tate module} over $\mathcal{O}_K$ with respect to $\pi$ is a formal $\mathcal{O}_K$-module $F$ such that
+ \[
+ [\pi]_F (X) \equiv X^q\mod \pi.
+ \]
+\end{defi}
+We can think of this condition as saying ``the uniformizer corresponds to the Frobenius''.
+
+\begin{eg}
+ The formal group $\hat{\GG}_m$ is a Lubin--Tate $\Z_p$-module with respect to $p$, with module structure given by the following formula: if $a \in \Z_p$, then we define
+ \[
+ [a]_{\hat{\GG}_m}(X) = (1 + X)^a - 1 = \sum_{n = 1}^\infty \binom{a}{n} X^n.
+ \]
+ The conditions
+ \[
+ (1 + X)^a - 1 \equiv aX \mod X^2
+ \]
+ and
+ \[
+ (1 + X)^p - 1 \equiv X^p \mod p
+ \]
+ are clear.
+
+ We also have to check that $a \mapsto [a]_F$ is a ring homomorphism. This follows from the identities
+ \[
+ ((1 + X)^a)^b = (1 + X)^{ab},\quad (1 + X)^a (1 + X)^b = (1 + X)^{a + b},
+ \]
+ which are on the second example sheet.
+\end{eg}
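The congruence $(1+X)^p - 1 \equiv X^p \bmod p$ comes down to the binomial coefficients $\binom{p}{n}$ being divisible by $p$ for $0 < n < p$. A short Python check, for the arbitrary choice $p = 7$:

```python
from math import comb

p = 7
# [p](X) = (1 + X)^p - 1 = sum_{n=1}^{p} comb(p, n) X^n
lt = [comb(p, n) for n in range(1, p + 1)]
assert lt[0] == p                        # linear term pX, so [p](X) = pX mod X^2
assert all(c % p == 0 for c in lt[:-1])  # lower coefficients vanish mod p
assert lt[-1] == 1                       # leading term X^p, so [p](X) = X^p mod p
```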
+
+The objective of the remainder of the section is to show that all Lubin--Tate modules are isomorphic.
+\begin{defi}[Lubin--Tate series]\index{Lubin--Tate series}
+ A \emph{Lubin--Tate series} for $\pi$ is a power series $e(X) \in \mathcal{O}_K[[X]]$ such that
+ \[
+ e(X) \equiv \pi X \mod X^2,\quad e(X) \equiv X^q\mod \pi.
+ \]
+ We denote the set of Lubin--Tate series for $\pi$ by $\mathcal{E}_\pi$.
+\end{defi}
+Now by definition, if $F$ is a Lubin--Tate $\mathcal{O}_K$ module for $\pi$, then $[\pi]_F$ is a Lubin--Tate series for $\pi$.
+
+\begin{defi}[Lubin--Tate polynomial]\index{Lubin--Tate polynomial}
+ A \emph{Lubin--Tate polynomial} is a polynomial of the form
+ \[
+ uX^q + \pi(a_{q - 1} X^{q - 1} + \cdots + a_2 X^2) + \pi X
+ \]
+ with $u \in U_K^{(1)}$, and $a_{q-1}, \cdots, a_2 \in \mathcal{O}_K$.
+
+ In particular, these are Lubin--Tate series.
+\end{defi}
+
+\begin{eg}
+ $X^q + \pi X$ is a Lubin--Tate polynomial.
+\end{eg}
+
+\begin{eg}
+ If $K = \Q_p$ and $\pi = p$, then $(1 + X)^p - 1$ is a Lubin--Tate polynomial.
+\end{eg}
+
+The result that allows us to prove that all Lubin--Tate modules are isomorphic is the following general result:
+\begin{lemma}
+ Let $e_1, e_2 \in \mathcal{E}_\pi$ and take a linear form
+ \[
+ L(x_1, \cdots, x_n) = \sum_{i = 1}^n a_i x_i, \quad a_i \in \mathcal{O}_K.
+ \]
+ Then there is a unique power series $F(x_1, \cdots, x_n) \in \mathcal{O}_K[[x_1, \cdots, x_n]]$ such that
+ \[
+ F(x_1, \cdots, x_n) \equiv L(x_1, \cdots, x_n)\mod (x_1, \cdots, x_n)^2,
+ \]
+ and
+ \[
+ e_1(F(x_1, \cdots, x_n)) = F(e_2(x_1), e_2(x_2), \cdots, e_2(x_n)).
+ \]
+\end{lemma}
+For reasons of time, we will not prove this. We just build $F$ by successive approximation, which is not terribly enlightening.
+
+\begin{cor}
+ Let $e \in \mathcal{E}_\pi$ be a Lubin--Tate series. Then there is a unique power series $F_e(X, Y) \in \mathcal{O}_K[[X, Y]]$ such that
+ \begin{align*}
+ F_e(X, Y) &\equiv X + Y \mod (X, Y)^2\\
+ e(F_e(X, Y)) &= F_e(e(X), e(Y))
+ \end{align*}
+\end{cor}
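For $e(X) = (1+X)^p - 1 \in \mathcal{E}_p$ over $\Z_p$, the corollary produces $F_e = \hat{\GG}_m$, and the defining identity $e(F_e(X, Y)) = F_e(e(X), e(Y))$ boils down to $(1+x)^p(1+y)^p = ((1+x)(1+y))^p$. A Python sketch checking this identity at exact rational points (the points and $p = 5$ are arbitrary choices):

```python
from fractions import Fraction as Fr
import itertools

p = 5
e = lambda t: (1 + t) ** p - 1   # a Lubin--Tate series for pi = p over Z_p
F = lambda x, y: x + y + x * y   # F_e = G_m-hat

pts = [Fr(1, 2), Fr(-1, 3), Fr(2, 7)]
for x, y in itertools.product(pts, repeat=2):
    assert e(F(x, y)) == F(e(x), e(y))  # e(F(X, Y)) = F(e(X), e(Y))
```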
+
+\begin{cor}
+ Let $e_1, e_2 \in \mathcal{E}_\pi$ be Lubin--Tate series and $a \in \mathcal{O}_K$. Then there exists a unique power series $[a]_{e_1, e_2} \in \mathcal{O}_K[[X]]$ such that
+ \begin{align*}
+ [a]_{e_1, e_2}(X) &\equiv aX \mod X^2\\
+ e_1([a]_{e_1, e_2}(X)) &= [a]_{e_1, e_2}(e_2(X)).
+ \end{align*}
+ To simplify notation, if $e_1 = e_2 = e$, we just write $[a]_e = [a]_{e, e}$.
+\end{cor}
+
+We now state the theorem that classifies all Lubin--Tate modules in terms of Lubin--Tate series.
+\begin{thm}
+ The Lubin--Tate $\mathcal{O}_K$-modules for $\pi$ are precisely the series $F_e$ for $e \in \mathcal{E}_\pi$, with formal $\mathcal{O}_K$-module structure given by
+ \[
+ a \mapsto [a]_e.
+ \]
+ Moreover, if $e_1, e_2 \in \mathcal{E}_\pi$ and $a \in \mathcal{O}_K$, then $[a]_{e_1, e_2}$ is a homomorphism $F_{e_2} \to F_{e_1}$.
+
+ If $a \in \mathcal{O}_K^\times$, then it is an isomorphism with inverse $[a^{-1}]_{e_2, e_1}$.
+\end{thm}
+So in some sense, there is only one Lubin--Tate module.
+
+\begin{proof}[Proof sketch]
+ If $F$ is a Lubin--Tate $\mathcal{O}_K$-module for $\pi$, then $e = [\pi]_F \in \mathcal{E}_\pi$ by definition, and $F$ satisfies the properties that characterize the series $F_e$. So $F = F_e$ by uniqueness.
+
+ For the remaining parts, one has to verify the following for all $e, e_1, e_2, e_3 \in \mathcal{E}_\pi$ and $a, b \in \mathcal{O}_K$.
+ \begin{enumerate}
+ \item $F_e(X, Y) = F_e(Y, X)$.
+ \item $F_e(X, F_e(Y, Z)) = F_e(F_e(X, Y), Z)$.
+ \item $[a]_{e_1, e_2}(F_{e_2}(X, Y)) = F_{e_1}([a]_{e_1, e_2}(X), [a]_{e_1, e_2}(Y))$.
+ \item $[ab]_{e_1, e_3}(X) = [a]_{e_1, e_2} ([b]_{e_2, e_3}(X))$.
+ \item $[a + b]_{e_1, e_2}(X) = [a]_{e_1, e_2}(X) +_{F_{e_1}} [b]_{e_1, e_2}(X)$.
+ \item $[\pi]_e(X) = e(X)$.
+ \end{enumerate}
+ The proof is just repeating the word ``uniqueness'' ten times.
+\end{proof}
+
+\subsection{Lubin--Tate extensions}
+We now use the Lubin--Tate modules to construct abelian extensions. As before, we fix an algebraic closure $\bar{K}$ of $K$. We let $\bar{\mathfrak{m}} = \mathfrak{m}_{\bar{K}}$ be the maximal ideal in $\mathcal{O}_{\bar{K}}$.
+
+\begin{prop}
+ If $F$ is a formal $\mathcal{O}_K$-module, then $\bar{\mathfrak{m}}$ becomes a (genuine) $\mathcal{O}_K$-module under the operations $+_F$ and $\ph$
+ \begin{align*}
+ x +_F y&= F(x, y)\\
+ a\cdot x &= [a]_F(x)
+ \end{align*}
+ for all $x, y \in \bar{\mathfrak{m}}$ and $a \in \mathcal{O}_K$.
+
+ We denote this $\bar{\mathfrak{m}}_F$.
+\end{prop}
+This isn't exactly immediate, because $\bar{K}$ need not be complete. However, this is not a problem, as each operation given by $F$ only involves finitely many elements (namely two of them).
+
+\begin{proof}
+ If $x, y \in \bar{\mathfrak{m}}$, then $F(x, y)$ is a series in $K(x, y) \subseteq \bar{K}$. Since $K(x, y) $ is a finite extension, we know it is complete. Since the terms in the sum have absolute value $< 1$ and $\to 0$, we know it converges to an element in $\mathfrak{m}_{K(x, y)} \subseteq \bar{\mathfrak{m}}$. The rest then essentially follows from definition.
+\end{proof}
+
+To prove local class field theory, we want to find elements with a $U_K/U_K^{(n)}$-action for each $n$, or equivalently elements with an $\mathcal{O}_K/\pi^n\mathcal{O}_K$-action. Note that the first quotient is a quotient of groups, while the second is a quotient of a ring by an ideal. So it is natural to consider the following elements:
+
+\begin{defi}[$\pi^n$-division points]\index{$\pi^n$-division points}
+ Let $F$ be a Lubin--Tate $\mathcal{O}_K$-module for $\pi$. Let $n \geq 1$. The group $F(n)$ of \emph{$\pi^n$-division points of $F$} is defined to be
+ \[
+ F(n) = \{x \in \bar{\mathfrak{m}}_F \mid [\pi^n]_F x= 0\} = \ker ([\pi^n]_F).
+ \]
+ This is a group under the operation given by $F$, and is indeed an $\mathcal{O}_K$ module.
+\end{defi}
+
+\begin{eg}
+ Let $F = \hat{\GG}_m, K = \Q_p$ and $\pi = p$. Then for $x \in \bar{\mathfrak{m}}_{\hat{\GG}_m}$, we have
+ \[
+ p^n \cdot x = (1 + x)^{p^n} - 1.
+ \]
+ So we know
+ \[
+ \hat{\GG}_m(n) = \{\zeta_{p^n}^i - 1 \mid i = 0, 1, \cdots, p^n - 1\},
+ \]
+ where $\zeta_{p^n} \in \bar{\Q}_p$ is a primitive $p^n$th root of unity.
+
+ So $\hat{\GG}_m(n)$ generates $\Q_p(\zeta_{p^n})$.
+\end{eg}
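The actual division points live in $\bar{\Q}_p$, but the underlying polynomial fact, that $(1+X)^{p^n} - 1$ has exactly $p^n$ distinct roots of the form $\zeta_{p^n}^i - 1$, can be illustrated over $\C$, where the roots of unity are also available. A Python sketch with the arbitrary choice $p = 3$, $n = 2$:

```python
import cmath

p, n = 3, 2
pn = p ** n
# zeta^k - 1 for the complex p^n-th roots of unity zeta^k
roots = [cmath.exp(2j * cmath.pi * k / pn) - 1 for k in range(pn)]
for x in roots:
    val = (1 + x) ** pn - 1      # "p^n . x" in G_m-hat
    assert abs(val) < 1e-9       # each point is killed by [p^n]
# the p^n points are pairwise distinct (up to rounding)
rounded = {round(z.real, 6) + 1j * round(z.imag, 6) for z in roots}
assert len(rounded) == pn
```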
+
+To prove this does what we want, we need the following lemma:
+\begin{lemma}
+ Let $e(X) = X^q + \pi X$. We let
+ \[
+ f_n(X) = \underbrace{(e\circ \cdots \circ e)}_{n\text{ times}}(X).
+ \]
+ Then $f_n$ has no repeated roots. Here we take $f_0$ to be the identity function.
+\end{lemma}
+
+\begin{proof}
+ Let $x \in \bar{K}$. We claim that if $|f_i(x)| < 1$ for $i = 0, \cdots, n - 1$, then $f_n'(x) \not= 0$.
+
+ We proceed by induction on $n$.
+ \begin{enumerate}
+ \item When $n = 1$, we assume $|x| < 1$. Then
+ \[
+ f_1'(x) = e'(x) = q x^{q - 1} + \pi = \pi\left(1 + \frac{q}{\pi} x^{q - 1}\right) \not= 0,
+ \]
+ since we know $\frac{q}{\pi}$ has absolute value $\leq 1$ ($q$ vanishes in $k_K$, so $q/\pi$ lives in $\mathcal{O}_K$), and $x^{q - 1}$ has absolute value $< 1$.
+ \item In the induction step, we have
+ \[
+ f'_{n + 1}(x) = (q f_n(x)^{q - 1} + \pi) f_n'(x) = \pi\left(1 + \frac{q}{\pi}f_n(x) ^{q - 1}\right) f_n'(x).
+ \]
+ By induction hypothesis, we know $f_n'(x) \not= 0$, and by assumption $|f_n(x)| < 1$. So the same argument works.
+ \end{enumerate}
+ We now prove the lemma. We assume that $f_n(x) = 0$. We want to show that $|f_i(x)| < 1$ for all $i = 0, \cdots, n - 1$. By induction, we have
+ \[
+ f_i(x) = x^{q^i} + \pi g_i(x)
+ \]
+ for some $g_i(x) \in \mathcal{O}_K[x]$ of degree less than $q^i$. It follows that if $f_i(x) = 0$, then $|x| < 1$. Applying this to $f_{n - i}(f_i(x)) = f_n(x) = 0$ gives $|f_i(x)| < 1$ for each $i$. So $f'_n(x) \not= 0$ by the claim.
+\end{proof}
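We can confirm the lemma computationally in a small case. The sketch below takes $K = \Q_2$, so $q = \pi = 2$, builds $f_n = e^{\circ n}$ for $e(X) = X^q + \pi X$, and checks that $\gcd(f_n, f_n')$ is constant over $\Q$, i.e.\ that $f_n$ has no repeated roots (the parameters are arbitrary choices):

```python
from fractions import Fraction as Fr

def trim(p):
    # drop trailing zero coefficients
    while p and p[-1] == 0:
        p.pop()
    return p

def polymul(a, b):
    c = [Fr(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def polyadd(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else Fr(0)) +
                 (b[i] if i < len(b) else Fr(0)) for i in range(n)])

def compose(p, g):
    # p(g(X)) by Horner; coefficient lists, lowest degree first
    res = [Fr(0)]
    for c in reversed(p):
        res = polyadd(polymul(res, g), [Fr(c)])
    return res

def deriv(p):
    return trim([Fr(i) * c for i, c in enumerate(p)][1:])

def polymod(a, b):
    a = trim([Fr(c) for c in a])
    while len(a) >= len(b):
        f, s = a[-1] / b[-1], len(a) - len(b)
        for i, bi in enumerate(b):
            a[s + i] -= f * bi
        a = trim(a)
    return a

def polygcd(a, b):
    a, b = trim(a[:]), trim(b[:])
    while b:
        a, b = b, polymod(a, b)
    return a

q, pi = 2, 2
e = [Fr(0), Fr(pi)] + [Fr(0)] * (q - 2) + [Fr(1)]  # e(X) = X^q + pi X
f = [Fr(0), Fr(1)]                                 # f_0(X) = X
for n in range(1, 4):
    f = compose(e, f)          # f_n = e o f_{n-1}
    g = polygcd(f, deriv(f))
    assert len(g) == 1         # gcd(f_n, f_n') constant: no repeated roots
```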
+
+The point of the lemma is to prove the following proposition:
+\begin{prop}
+ $F(n)$ is a free $\mathcal{O}_K/\pi^n \mathcal{O}_K$ module of rank $1$. In particular, it has $q^n$ elements.
+\end{prop}
+
+\begin{proof}
+ By definition, we know
+ \[
+ \pi^n \cdot F(n) = 0.
+ \]
+ So $F(n)$ is indeed an $\mathcal{O}_K/\pi^n \mathcal{O}_K$-module.
+
+ To prove that it is free of rank $1$, we note that all Lubin--Tate modules for $\pi$ are isomorphic, so the honest $\mathcal{O}_K$-modules $F(n)$ are isomorphic for all choices of $F$. We choose $F = F_e$, where $e = X^q + \pi X$. Then $F(n)$ consists of the roots of the polynomial $f_n = e^n(X)$, which has degree $q^n$ and no repeated roots. So $|F(n)| = q^n$. Now if $\lambda_n \in F(n) \setminus F(n - 1)$, then we have a homomorphism
+ \[
+ \mathcal{O}_K \to F(n)
+ \]
+ given by $a \mapsto a \cdot \lambda_n$. Its kernel is $\pi^n \mathcal{O}_K$ by our choice of $\lambda_n$. By counting, we get an $\mathcal{O}_K$-module isomorphism
+ \[
+ \frac{\mathcal{O}_K}{\pi^n \mathcal{O}_K} \to F(n)
+ \]
+ as desired.
+\end{proof}
+
+\begin{cor}
+ We have isomorphisms
+ \begin{align*}
+ \frac{\mathcal{O}_K}{\pi^n \mathcal{O}_K}&\cong \End_{\mathcal{O}_K}(F(n))\\
+ \frac{U_K}{U_K^{(n)}} &\cong \Aut_{\mathcal{O}_K}(F(n)).
+ \end{align*}
+\end{cor}
+
+Given a Lubin--Tate $\mathcal{O}_K$-module $F$ for $\pi$, we consider
+\[
+ L_{n, \pi} = L_n = K(F(n)),
+\]
+which is the field of $\pi^n$ division points of $F$. From the inclusions $F(n) \subseteq F(n + 1)$ for all $n$, we obtain a corresponding inclusion of fields
+\[
+ L_n \subseteq L_{n + 1}.
+\]
+The field $L_n$ depends only on $\pi$, and not on $F$. To see this, we let $G$ be another Lubin--Tate $\mathcal{O}_K$-module for $\pi$, and let $f: F \to G$ be an isomorphism. Then
+\[
+ G(n) = f(F(n)) \subseteq K(F(n))
+\]
+since the coefficients of $f$ lie in $K$. So we know
+\[
+ K(G(n)) \subseteq K(F(n)).
+\]
+By symmetry, we must have equality.
+
+\begin{thm}
+ $L_n/K$ is a totally ramified abelian extension of degree $q^{n - 1}(q - 1)$ with Galois group
+ \[
+ \Gal(L_n/K) \cong \Aut_{\mathcal{O}_K}(F(n)) \cong \frac{U_K}{U_K^{(n)}}.
+ \]
+ Explicitly, for any $\sigma \in \Gal(L_n/K)$, there is a unique $u \in U_K/U_K^{(n)}$ such that
+ \[
+ \sigma(\lambda) = [u]_F(\lambda)
+ \]
+ for all $\lambda \in F(n)$. Under this isomorphism, for $m \geq n$, we have
+ \[
+ \Gal(L_m/L_n) \cong \frac{U_K^{(n)}}{U_K^{(m)}}.
+ \]
+ Moreover, if $F = F_e$, where
+ \[
+ e(X) = X^q + \pi(a_{q - 1} X^{q - 1} + \cdots + a_2 X^2) + \pi X,
+ \]
+ and $\lambda_n \in F(n) \setminus F(n - 1)$, then $\lambda_n$ is a uniformizer of $L_n$ and
+ \[
+ \phi_n(X) = \frac{e^n(X)}{e^{n - 1}(X)} = X^{q^{n - 1}(q - 1)} + \cdots + \pi
+ \]
+ is the minimal polynomial of $\lambda_n$. In particular,
+ \[
+ N_{L_n/K}(-\lambda_n) = \pi.
+ \]
+\end{thm}
+
+\begin{proof}
+ Consider a Lubin--Tate polynomial
+ \[
+ e(X) = X^q + \pi(a_{q - 1}X^{q - 1} + \cdots + a_2 X^2) + \pi X.
+ \]
+ We set $F = F_e$. Then
+ \[
+ \phi_n(X) = \frac{e^n(X)}{e^{n - 1}(X)} = (e^{n - 1}(X))^{q - 1} + \pi(a_{q - 1} (e^{n - 1}(X))^{q - 2} + \cdots + a_2 e^{n - 1}(X)) + \pi
+ \]
+ is an Eisenstein polynomial of degree $q^{n - 1}(q - 1)$, as one sees by staring at it long enough. So if $\lambda_n \in F(n) \setminus F(n - 1)$, then $\lambda_n$ is a root of $\phi_n(X)$, so $K(\lambda_n)/K$ is totally ramified of degree $q^{n - 1}(q - 1)$, and $\lambda_n$ is a uniformizer, and
+ \[
+ N_{K(\lambda_n)/K}(-\lambda_n) = \pi
+ \]
+ as the norm is just the constant coefficient of the minimal polynomial.
+
+ Now let $\sigma \in \Gal(L_n/K)$. Then $\sigma$ induces a permutation of $F(n)$, as these are the roots of $e^n(X)$, which is in fact $\mathcal{O}_K$-linear, i.e.
+ \begin{gather*}
+ \sigma(x) +_F\sigma(y) = F(\sigma(x), \sigma(y)) = \sigma(F(x, y)) = \sigma(x +_F y)\\
+ \sigma(a \cdot x) = \sigma([a]_F(x)) = [a]_F(\sigma(x)) = a \cdot \sigma(x)
+ \end{gather*}
+ for all $x, y \in \mathfrak{m}_{L_n}$ and $a \in \mathcal{O}_K$.
+
+ So we have an injection of groups
+ \[
+ \Gal(L_n/K) \hookrightarrow \Aut_{\mathcal{O}_K}(F(n)) = \frac{U_K}{U_K^{(n)}}
+ \]
+ But we know
+ \[
+ \left|\frac{U_K}{U_K^{(n)}}\right| = q^{n - 1}(q - 1) = [K(\lambda_n): K] \leq [L_n: K] = |\Gal(L_n/K)|.
+ \]
+ So we must have equality throughout, the above map is an isomorphism, and $K(\lambda_n) = L_n$.
+
+ It is clear from the construction of the isomorphism that for $m \geq n$, the diagram
+ \[
+ \begin{tikzcd}
+ \Gal(L_m/K) \ar[r, "\sim"] \ar[d, "\text{restriction}"] & U_K/U_K^{(m)} \ar[d, "\text{quotient}"]\\
+ \Gal(L_n/K) \ar[r, "\sim"] & U_K/U_K^{(n)}
+ \end{tikzcd}
+ \]
+ commutes. So the isomorphism
+ \[
+ \Gal(L_m/L_n) \cong \frac{U_K^{(n)}}{U_K^{(m)}}
+ \]
+ follows by looking at the kernels.
+\end{proof}
+
+\begin{eg}
+ In the case where $K = \Q_p$ and $\pi = p$, recall that
+ \[
+ \hat{\GG}_m(n) = \{\zeta_{p^n}^i - 1 \mid i = 0, \cdots, p^n - 1\},
+ \]
+ where $\zeta_{p^n}$ is a primitive $p^n$-th root of unity. The theorem then gives
+ \[
+ \Gal(\Q_p(\zeta_{p^n})/\Q_p) \cong (\Z/p^n)^\times
+ \]
+ given as follows: if $a \in \Z_{\geq 0}$ and $(a, p) = 1$, then
+ \[
+ \sigma_a(\zeta_{p^n}^i - 1) = [a]_{\hat{\GG}_m} (\zeta_{p^n}^i - 1) = (1 + (\zeta_{p^n}^i - 1))^a - 1 = \zeta_{p^n}^{ai} - 1.
+ \]
+ This agrees with the isomorphism we previously constructed.
+\end{eg}
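The Eisenstein property of $\phi_n$ used in the proof above is easy to check by machine in the cyclotomic case. The following pure-Python sketch is our own illustration (all function names and the coefficient-list representation are our choices, not part of the notes): for $K = \Q_p$, $\pi = p$ and the Lubin--Tate polynomial $e(X) = (1 + X)^p - 1$ of the formal multiplicative group, it verifies that $\phi_n(X) = e^n(X)/e^{n - 1}(X)$ is a monic Eisenstein polynomial of degree $p^{n - 1}(p - 1)$ with constant term $p$.

```python
# Illustrative sanity check (our own code, not from the notes).
# Polynomials are lists of integer coefficients in ascending order of degree.
from math import comb

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def compose(e, f):
    """Compute e(f(X)) by Horner's rule on the coefficients of e."""
    out = [e[-1]]
    for c in reversed(e[:-1]):
        out = poly_add(poly_mul(out, f), [c])
    return out

def divide_exact(f, g):
    """Exact division of f by a monic polynomial g over the integers."""
    f, q = f[:], [0] * (len(f) - len(g) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = f[i + len(g) - 1]
        for j, c in enumerate(g):
            f[i + j] -= q[i] * c
    assert all(c == 0 for c in f[:len(g) - 1])   # remainder must vanish
    return q

def is_eisenstein_quotient(p, n):
    e = [0] + [comb(p, k) for k in range(1, p + 1)]   # (1 + X)^p - 1
    f_prev, f_n = [0, 1], [0, 1]                      # e^0(X) = X
    for _ in range(n):
        f_prev, f_n = f_n, compose(e, f_n)            # e^n = e(e^{n-1})
    phi = divide_exact(f_n, f_prev)
    return (len(phi) - 1 == p**(n - 1) * (p - 1)      # degree q^{n-1}(q - 1)
            and phi[-1] == 1                          # monic
            and all(c % p == 0 for c in phi[:-1])     # lower coefficients in (p)
            and phi[0] % (p * p) != 0)                # constant term p, not p^2

print(is_eisenstein_quotient(3, 2))  # True
```

For instance, with $p = 3$, $n = 2$ one finds $\phi_2(X) = X^6 + 6X^5 + 15X^4 + 21X^3 + 18X^2 + 9X + 3$, which is visibly Eisenstein at $3$.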
+
+Back to the general situation, setting
+\[
+ L_\infty = \bigcup_{n = 1}^\infty L_n,
+\]
+we know $L_\infty/K$ is Galois, and we have isomorphisms
+\[
+ \begin{tikzcd}[row sep=tiny]
+ \Gal(L_\infty/K) \ar[r, "\sim"] & \varprojlim \Gal(L_n/K) \ar[r, "\sim"] & \varprojlim_n U_K/U_K^{(n)} \cong U_K\\
+ \sigma \ar[r, maps to] & (\sigma|_{L_n})_n
+ \end{tikzcd}
+\]
+This map will be the inverse of the Artin map restricted to $L_\infty$.
+
+To complete the proof of Artin reciprocity, we need to use the following theorem without proof:
+\begin{thm}[Generalized local Kronecker-Weber theorem]\index{local Kronecker-Weber theorem}\index{generalized local Kronecker-Weber theorem}
+ We have
+ \[
+ K^{\mathrm{ab}} = K^{\mathrm{ur}} L_\infty
+ \]
+ (for any $\pi$).
+\end{thm}
+
+\begin{proof}[Comments on the proof]
+ One can prove this from the \term{Hasse--Arf theorem}, which states that in an abelian extension, the jumps in the upper ramification groups occur only at integer values. This, together with the calculation of ramification groups done later, easily implies the theorem. Essentially, $L_\infty$ maxed out all possible jumps of the upper ramification groups. However, the Hasse--Arf theorem is difficult to prove.
+
+ Another approach is to prove the existence of the Artin map using other techniques (e.g.\ Galois cohomology). Consideration of the norm group (cf.\ the next theorem) then implies the theorem. The content of this section then becomes an explicit construction of a certain family of abelian extensions.
+\end{proof}
+
+We can characterize the norm group by
+\begin{thm}
+ We have
+ \[
+ N(L_n/K) = \bra \pi\ket \times U_K^{(n)}.
+ \]
+\end{thm}
+\begin{proof}[Comments on the proof]
+ This can be done by defining \term{Coleman operators}, which are power series representations of the norm. Alternatively, assuming the description of the local Artin map given below and local Artin reciprocity, $U_K^{(n)}$ is in the kernel of $\Art|_{L_n}$, so $\bra\pi\ket\times U_K^{(n)}\subseteq N(L_n/K)$. The result follows by comparing orders.
+\end{proof}
+
+We can then construct the Artin map as follows:
+\begin{thm}
+ Let $K$ be a local field. Then we have an isomorphism $\Art: K^\times \to W(K^{\mathrm{ab}}/K)$ given by the composition
+ \[
+ \begin{tikzcd}
+ K^\times \ar[d, "\sim"] \ar[r, dashed, "\Art"] & W(K^{\mathrm{ab}}/K) \ar[d, "\sim"]\\
+ \bra \pi\ket \times U_K \ar[r] & \Frob_K^\Z \times \Gal(L_\infty/K)
+ \end{tikzcd}
+ \]
+ where the bottom map is given by $(\pi^m, u) \mapsto (\Frob_K^m, \sigma_{u^{-1}})$, where
+ \[
+ \sigma_u(\lambda) = [u]_F(\lambda)
+ \]
+ for all $\lambda \in \bigcup_{n = 1}^\infty F(n)$.
+\end{thm}
+The inverse shows up in the proof to make sure the map defined above is independent of the choice of uniformizer. We will not prove this, nor that the map obtained has the desired properties. Instead, we will end the course by computing the higher ramification groups of these extensions.
+
+\begin{thm}
+ We have
+ \[
+ G_s(L_n/K) =
+ \begin{cases}
+ \Gal(L_n/K) & -1 \leq s \leq 0\\
+ \Gal(L_n/L_k) & q^{k - 1} - 1 < s \leq q^k - 1,\; 1 \leq k \leq n - 1\\
+ 1 & s > q^{n - 1} - 1
+ \end{cases}
+ \]
+\end{thm}
+
+\begin{proof}
+ The case for $-1 \leq s \leq 0$ is clear.
+
+ For $0 < s \leq 1$ (where we may wlog take $s = 1$), we know that
+ \[
+ \Gal(L_n/L_k) \cong U_K^{(k)}/U_K^{(n)}
+ \]
+ under the isomorphism $\Gal(L_n/K) \cong U_K/U_K^{(n)}$. On the other hand, we know $G_1(L_n/K)$ is the Sylow $p$-subgroup of $\Gal(L_n/K)$. So we must have
+ \[
+ G_1(L_n/K) \cong U_K^{(1)}/U_K^{(n)}.
+ \]
+ So we know that $G_1(L_n/K) = \Gal(L_n/L_1)$. Thus we know that $G_s(L_n/K) = \Gal(L_n/L_1)$ for $0 < s \leq 1$.
+
+ \separator
+
+ We now let $\sigma = \sigma_u \in G_1(L_n/K)$ be a non-trivial element, with $u \in U_K^{(1)}/U_K^{(n)}$. We write
+ \[
+ u = 1 + \varepsilon \pi^k
+ \]
+ for some $\varepsilon \in U_K$ and some $k = k(u) \geq 1$. Since $\sigma$ is not the identity, we know $k < n$.
+ We claim that
+ \[
+ i_{L_n/K}(\sigma) = v_{L_n}(\sigma(\lambda) - \lambda) = q^k.
+ \]
+ Indeed, we let $\lambda \in F(n) \setminus F(n - 1)$, where $F$ is a choice of Lubin--Tate module for $\pi$. Then $\lambda$ is a uniformizer of $L_n$ and $\mathcal{O}_{L_n} = \mathcal{O}_K[\lambda]$. We can compute
+ \begin{align*}
+ \sigma_u(\lambda) &= [u]_F (\lambda) \\
+ &= [1 + \varepsilon \pi^k]_F(\lambda) \\
+ &= F(\lambda, [\varepsilon \pi^k]_F(\lambda))
+ \end{align*}
+ Now we can write
+ \[
+ [\varepsilon \pi^k]_F (\lambda) = [\varepsilon]_F ([\pi^k]_F(\lambda)) \in F(n - k) \setminus F(n - k - 1),
+ \]
+ since $[\varepsilon]_F$ is invertible, and applying $[\pi^{n - k}]_F$ to $[\pi^k]_F (\lambda)$ kills it, but applying $[\pi^{n - k - 1}]_F$ gives $[\pi^{n - 1}]_F(\lambda)$, which is non-zero since $\lambda \not\in F(n - 1)$.
+
+ So we know $[\varepsilon \pi^k]_F(\lambda)$ is a uniformizer of $L_{n - k}$. Since $L_n/L_{n - k}$ is totally ramified of degree $q^k$, we can find $\varepsilon_0 \in \mathcal{O}_{L_n}^\times$ such that
+ \[
+ [\varepsilon \pi^k]_F(\lambda) = \varepsilon_0 \lambda^{q^k}.
+ \]
+ Recall that $F(X, 0) = X$ and $F(0, Y) = Y$. So we can write
+ \[
+ F(X, Y) = X + Y + XYG(X, Y),
+ \]
+ where $G(X, Y) \in \mathcal{O}_K[[X, Y]]$. So we have
+ \begin{align*}
+ \sigma(\lambda) - \lambda &= F(\lambda, [\varepsilon \pi^k]_F (\lambda)) - \lambda\\
+ &= F(\lambda, \varepsilon_0 \lambda^{q^k}) - \lambda\\
+ &= \lambda + \varepsilon_0 \lambda^{q^k} + \varepsilon_0 \lambda^{q^k + 1} G(\lambda, \varepsilon_0 \lambda^{q^k}) - \lambda\\
+ &= \varepsilon_0 \lambda^{q^k} + \varepsilon_0 \lambda^{q^k + 1} G(\lambda, \varepsilon_0 \lambda^{q^k}).
+ \end{align*}
+ In terms of valuation, the first term is the dominating term, and
+ \[
+ i_{L_n/K}(\sigma) = v_{L_n}(\sigma(\lambda) - \lambda) = q^k.
+ \]
+ So we know
+ \[
+ i_{L_n/K}(\sigma_u) \geq s + 1 \Leftrightarrow q^{k(u)} - 1 \geq s.
+ \]
+ So we know
+ \[
+ G_s(L_n/K) = \{\sigma_u \in G_1(L_n/K): q^{k(u)} - 1 \geq s\} = \Gal(L_n/L_k),
+ \]
+ where $q^{k - 1} - 1 < s \leq q^k - 1$ for $k = 1, \cdots, n - 1$, and $G_s(L_n/K) = 1$ if $s > q^{n - 1} - 1$.
+\end{proof}
+
+\begin{cor}
+ We have
+ \[
+ G^t(L_n/K) =
+ \begin{cases}
+ \Gal(L_n/K) & -1 \leq t \leq 0\\
+ \Gal(L_n/L_k) & k - 1 < t \leq k,\quad k = 1, \cdots, n - 1\\
+ 1 & t > n - 1
+ \end{cases}
+ \]
+ In other words, we have
+ \[
+ G^t(L_n/K) =
+ \begin{cases}
+ \Gal(L_n/L_{\lceil t\rceil}) & -1 \leq t \leq n - 1\\
+ 1 & t > n - 1
+ \end{cases},
+ \]
+ where we set $L_0 = K$.
+\end{cor}
+Once again, the numbering is a bit more civilized in the upper numbering.
+
+\begin{proof}
+ We have to compute the integral of
+ \[
+ \frac{1}{(G_0(L_n/K): G_x(L_n/K))}.
+ \]
+ We again plot this out
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (7, 0);
+ \draw [->] (0, -1) -- (0, 3);
+
+ \draw (0, 2) -- (2, 2) node [pos=0.5, above] {$\frac{1}{q - 1}$} node [circ] {};
+ \draw (2, 1) -- (4, 1) node [pos=0.5, above] {$\frac{1}{q(q - 1)}$} node [circ] {};
+ \draw (4, 0.5) -- (6, 0.5) node [pos=0.5, above] {$\frac{1}{q^2(q - 1)}$} node [circ] {};
+
+ \node [circ] at (2, 0) {};
+ \node [below] at (2, 0) {$q - 1$};
+ \node [circ] at (4, 0) {};
+ \node [below] at (4, 0) {$q^2 - 1$};
+ \node [circ] at (6, 0) {};
+ \node [below] at (6, 0) {$q^3 - 1$};
+ \end{tikzpicture}
+ \end{center}
+ So by the same computation as the ones we did last time, we find that
+ \[
+ \eta_{L_n/K}(s) =
+ \begin{cases}
+ s & -1 \leq s \leq 0\\
+ (k - 1) + \frac{s - (q^{k - 1} - 1)}{q^{k - 1}(q - 1)} & q^{k - 1} - 1 \leq s \leq q^k - 1, \quad k = 1, \cdots, n - 1\\
+ (n - 1) + \frac{s - (q^{n - 1} - 1)}{q^{n - 1}(q - 1)} & s > q^{n - 1} - 1.
+ \end{cases}
+ \]
+ Inverting this, we find that
+ \[
+ \psi_{L_n/K}(t) =
+ \begin{cases}
+ t & -1 \leq t \leq 0\\
+ q^{\lceil t \rceil - 1}(q - 1)(t - (\lceil t \rceil - 1)) + q^{\lceil t \rceil - 1} - 1 & 0 < t \leq n - 1\\
+ q^{n - 1}(q - 1)(t - (n - 1)) + q^{n - 1} - 1 & t > n - 1
+ \end{cases}.
+ \]
+ Then we have
+ \[
+ G^t(L_n/K) = G_{\psi_{L_n/K}(t)} (L_n/K),
+ \]
+ which gives the desired result by the previous theorem.
+\end{proof}
+
+So we know that
+\[
+ \Art_K^{-1}(G^t(L_n/K)) =
+ \begin{cases}
+ U_K^{(\lceil t\rceil)} / U_K^{(n)} & -1 \leq t \leq n\\
+ 1 & t \geq n
+ \end{cases}.
+\]
+
+\begin{cor}
+ When $t > -1$, we have
+ \[
+ G^t(K^{\mathrm{ab}}/K) = \Gal(K^{\mathrm{ab}}/K^{\mathrm{ur}}L_{\lceil t\rceil}),
+ \]
+ and
+ \[
+ \Art_K^{-1}(G^t(K^{\mathrm{ab}}/K)) = U_K^{(\lceil t\rceil)}.
+ \]
+\end{cor}
+
+\begin{proof}
+ Recall the following fact from the examples class: If $L/K$ is finite unramified and $M/K$ is finite totally ramified, then $LM/L$ is totally ramified, and $\Gal(LM/L) \cong \Gal(M/K)$ by restriction, and % check this
+ \[
+ G^t(LM/K) \cong G^t(M/K).
+ \]
+ via this isomorphism (for $t > -1$).
+
+ Now let $K_m/K$ be the unramified extension of degree $m$. By the lemma and the previous corollary, we have
+ \begin{align*}
+ G^t(K_m L_n/K) \cong G^t(L_n/K) &=
+ \begin{cases}
+ \Gal(L_n/L_{\lceil t\rceil}) & -1 < t \leq n\\
+ 1 & t \geq n
+ \end{cases}\\
+ &=
+ \begin{cases}
+ \Gal(K_m L_n / K_m L_{\lceil t\rceil}) & -1 < t \leq n\\
+ 1 & t \geq n
+ \end{cases}
+ \end{align*}
+ So we have
+ \begin{align*}
+ G^t(K^{\mathrm{ab}}/K) &= G^t(K^{\mathrm{ur}} L_\infty/K) \\
+ &= \varprojlim_{m, n} G^t(K_m L_n/K) \\
+ &= \varprojlim_{\substack{m, n\\n \geq \lceil t\rceil}} \Gal(K_m L_n/K_m L_{\lceil t\rceil})\\
+ &= \Gal(K^{\mathrm{ur}}L_\infty/K^{\mathrm{ur}}L_{\lceil t\rceil})\\
+ &= \Gal(K^{\mathrm{ab}} / K^{\mathrm{ur}}L_{\lceil t\rceil}),
+ \end{align*}
+ and
+ \begin{align*}
+ \Art_K^{-1}(\Gal(K^{\mathrm{ab}}/K^{\mathrm{ur}} L_{\lceil t \rceil})) &= \Art_K^{-1} \left(\varprojlim_{\substack{m, n\\n \geq \lceil t\rceil}} \Gal(K_m L_n/K_m L_{\lceil t\rceil})\right)\\
+ &= \varprojlim_{\substack{m, n\\n \geq \lceil t\rceil}} \Art_K^{-1} \left(\Gal(K_m L_n/K_m L_{\lceil t\rceil})\right)\\
+ &= \varprojlim_{\substack{m, n\\n \geq \lceil t\rceil}} \frac{U_K^{(\lceil t\rceil)}}{U_K^{(n)}}\\
+ &= U_K^{(\lceil t \rceil)}.\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{cor}
+ Let $M/K$ be a finite abelian extension. Then we have an isomorphism
+ \[
+ \Art_K: \frac{K^\times}{N(M/K)} \cong \Gal(M/K).
+ \]
+ Moreover, for $t > -1$, we have
+ \[
+ G^t(M/K) = \Art_K\left(\frac{U_K^{(\lceil t\rceil)}N(M/K)}{N(M/K)}\right).
+ \]
+\end{cor}
+
+\begin{proof}
+ We have
+ \[
+ G^t(M/K) = \frac{G^t(K^{\mathrm{ab}}/K) G(K^{\mathrm{ab}}/M)}{G(K^{\mathrm{ab}}/M)} = \Art\left(\frac{U_K^{(\lceil t\rceil)}N(M/K)}{N(M/K)}\right).\qedhere
+ \]
+\end{proof}
+
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/modern_statistical_methods.tex b/books/cam/III_M/modern_statistical_methods.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b54e89f03151ec2336094b9f96020a43deee2ea3
--- /dev/null
+++ b/books/cam/III_M/modern_statistical_methods.tex
@@ -0,0 +1,3039 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2017}
+\def\nlecturer {R.\ D.\ Shah}
+\def\ncourse {Modern Statistical Methods}
+\def\nofficial {https://www.dpmms.cam.ac.uk/~rds37/modern_stat_methods.html}
+
+\input{header}
+
+\DeclareMathOperator*\argmin{argmin}
+\DeclareMathOperator\pa{pa}
+\DeclareMathOperator\child{ch}
+\DeclareMathOperator\de{de}
+\newcommand\ddo{\mathsf{do}}
+\tikzset{msstate/.style={circle, draw, blue, text=black, minimum width=0.5cm}}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+The remarkable development of computing power and other technology now allows scientists and businesses to routinely collect datasets of immense size and complexity. Most classical statistical methods were designed for situations with many observations and a few, carefully chosen variables. However, we now often gather data where we have huge numbers of variables, in an attempt to capture as much information as we can about anything which might conceivably have an influence on the phenomenon of interest. This dramatic increase in the number of variables makes modern datasets strikingly different, as well-established traditional methods perform either very poorly, or often do not work at all.
+
+Developing methods that are able to extract meaningful information from these large and challenging datasets has recently been an area of intense research in statistics, machine learning and computer science. In this course, we will study some of the methods that have been developed to analyse such datasets. We aim to cover some of the following topics.
+
+\begin{itemize}
+ \item Kernel machines: the kernel trick, the representer theorem, support vector machines, the hashing trick.
+ \item Penalised regression: Ridge regression, the Lasso and variants.
+ \item Graphical modelling: neighbourhood selection and the graphical Lasso. Causal inference through structural equation modelling; the PC algorithm.
+ \item High-dimensional inference: the closed testing procedure and the Benjamini--Hochberg procedure; the debiased Lasso
+\end{itemize}
+
+\subsubsection*{Pre-requisites}
+Basic knowledge of statistics, probability, linear algebra and real analysis. Some background in optimisation would be helpful but is not essential.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In recent years, there has been a rather significant change in what sorts of data we have to handle and what questions we ask about them, witnessed by the popularity of the buzzwords ``big data'' and ``machine learning''. In classical statistics, we usually have a small set of parameters, and a very large data set. We then use the large data set to estimate the parameters.
+
+However, nowadays we often see scenarios where we have a very large number of parameters, and the data set is relatively small. If we tried to apply our classical linear regression, then we would just be able to tune the parameters so that we have a perfect fit, and still have great freedom to change the parameters without affecting the fitting.
+
+One example is that we might want to test which genes are responsible for a particular disease. In this case, there is a huge number of genes to consider, and there is good reason to believe that most of them are irrelevant, i.e.\ the parameters should be set to zero. Thus, we want to develop methods that find the ``best'' fitting model that takes this into account.
+
+Another problem we might encounter is that we just have a large data set, and doing linear regression seems a bit silly. If we have so much data, we might as well try to fit more complicated curves, such as polynomial functions and friends. Perhaps more ambitiously, we might try to find the best \emph{continuously differentiable function} that fits the curve, or, as analysts will immediately suggest as an alternative, weakly differentiable functions.
+
+There are many things we can talk about, and we can't talk about all of them. In this course, we are going to cover $4$ different topics of different size:
+\begin{itemize}
+ \item Kernel machines
+ \item The Lasso and its extensions
+ \item Graphical modelling and causal inference
+ \item Multiple testing and high-dimensional inference
+\end{itemize}
+The four topics are rather disjoint, and draw on different mathematical skills.
+
+\section{Classical statistics}
+This is a course on \emph{modern} statistical methods. Before we study methods, we give a brief summary of what we are \emph{not} going to talk about, namely classical statistics.
+
+So suppose we are doing regression. We have some \term{predictors} $x_i \in \R^p$ and \term{responses} $Y_i \in \R$, and we hope to find a model that describes $Y$ as a function of $x$. For convenience, define the vectors
+\[
+ X = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T\end{pmatrix},\quad
+ Y = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n\end{pmatrix}.
+\]
+The linear model then assumes there is some $\beta^0 \in \R^p$ such that
+\[
+ Y = X \beta^0 + \varepsilon,
+\]
+where $\varepsilon$ is some (hopefully small) error random variable. Our goal is then to estimate $\beta^0$ given the data we have.
+
+If $X$ has full column rank, so that $X^TX$ is invertible, then we can use \term{ordinary least squares} to estimate $\beta^0$, with estimate
+\[
+ \hat{\beta}^{OLS} = \argmin_{\beta \in \R^p} \|Y - X \beta\|_2^2 = (X^TX)^{-1} X^TY.
+\]
+This assumes nothing about $\varepsilon$ itself, but if we assume that $\E \varepsilon = 0$ and $\var (\varepsilon) = \sigma^2 I$, then this estimate satisfies
+\begin{itemize}
+ \item $\E_\beta \hat{\beta}^{OLS} = (X^T X)^{-1} X^T X \beta^0 = \beta^0$
+ \item $\var_\beta(\hat{\beta}^{OLS}) = (X^T X)^{-1} X^T \var(\varepsilon) X (X^T X)^{-1} = \sigma^2 (X^T X)^{-1}$.
+\end{itemize}
+In particular, this is an unbiased estimator. Even better, this is the best linear unbiased estimator. More precisely, the Gauss--Markov theorem says any other linear unbiased estimator $\tilde{\beta} = AY$ has $\var(\tilde{\beta}) - \var(\hat{\beta}^{OLS})$ positive semi-definite.
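The Gauss--Markov statement is easy to illustrate numerically. In the sketch below (our own illustration, numpy only, with synthetic data), the competing linear unbiased estimator is weighted least squares with a deliberately arbitrary weight matrix; both estimators satisfy $AX = I$ and hence are unbiased.

```python
# Numerical illustration of Gauss--Markov: OLS has the smallest variance
# among linear unbiased estimators (synthetic data; the weight matrix W is
# an arbitrary illustrative choice).
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 30, 4, 1.0
X = rng.standard_normal((n, p))

A_ols = np.linalg.solve(X.T @ X, X.T)          # OLS: A = (X^T X)^{-1} X^T
W = np.diag(rng.uniform(0.5, 2.0, n))          # arbitrary positive weights
A_wls = np.linalg.solve(X.T @ W @ X, X.T @ W)  # also satisfies A X = I

var_ols = sigma**2 * A_ols @ A_ols.T           # Var(A Y) = sigma^2 A A^T
var_wls = sigma**2 * A_wls @ A_wls.T
gap = np.linalg.eigvalsh(var_wls - var_ols)    # eigenvalues of the difference
print(gap.min() >= -1e-10)  # difference is positive semi-definite: True
```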
+
+Of course, ordinary least squares is not the only way to estimate $\beta^0$. Another common method for estimating parameters is maximum likelihood estimation, and this works for more general models than linear regression. For people who are already sick of meeting likelihoods, this will be the last time we meet likelihoods in this course.
+
+Suppose we want to estimate a parameter $\theta$ via knowledge of some data $Y$. We assume $Y$ has density $f(y; \theta)$. We define the \term{log-likelihood} by
+\[
+ \ell(\theta) = \log f(Y; \theta).
+\]
+The \term{maximum likelihood estimator} then maximizes $\ell(\theta)$ over $\theta$ to get $\hat{\theta}$.
+
+Similar to ordinary least squares, there is a theorem that says maximum likelihood estimation is the ``best''. To state it, we introduce the \term{Fisher information matrix}. This is a family of $d \times d$ matrices indexed by $\theta$, where $d$ is the dimension of $\theta$, defined by
+\[
+ I_{jk}(\theta) = - \E_\theta\left[\frac{\partial^2}{\partial \theta_j \partial \theta_k} \ell(\theta)\right].
+\]
+The relevant theorem is
+\begin{thm}[Cram\'er--Rao bound]\index{Cram\'er--Rao bound}
+ If $\tilde{\theta}$ is an unbiased estimator for $\theta$, then $\var(\tilde{\theta}) - I^{-1}(\theta)$ is positive semi-definite.
+
+ Moreover, asymptotically, as $n \to \infty$, the maximum likelihood estimator is unbiased and achieves the Cram\'er--Rao bound.
+\end{thm}
+
+Another wonderful fact about the maximum likelihood estimator is that asymptotically, it is normally distributed, and so it is something we understand well.
+
+This might seem very wonderful, but there are a few problems here. The results we stated are asymptotic, but what we actually see in real life is that as $n \to \infty$, the value of $p$ also increases. In these contexts, the asymptotic properties don't tell us much. Another issue is that these results all talk about unbiased estimators. In a lot of situations of interest, it turns out biased methods do much better than these unbiased methods.
+
+Another thing we might be interested in is that as $n$ gets large, we might want to use more complicated models than simple parametric models, as we have much more data to mess with. This is not something ordinary least squares provides us with.
+
+\section{Kernel machines}
+We are going to start a little bit slowly, and think about our linear model $Y = X\beta^0 + \varepsilon$, where $\E(\varepsilon) = 0$ and $\var(\varepsilon) = \sigma^2 I$. Ordinary least squares is an unbiased estimator, so let's look at biased estimators.
+
+For a biased estimator, $\tilde{\beta}$, we should not study the variance, but the \term{mean squared error}
+\begin{align*}
+ \E[(\tilde{\beta} - \beta^0)(\tilde{\beta} - \beta^0)^T]
+ &= \E(\tilde{\beta} - \E \tilde{\beta} + \E \tilde{\beta} - \beta^0)(\tilde{\beta} - \E \tilde{\beta} + \E \tilde{\beta} - \beta^0)^T \\ &= \var(\tilde{\beta})+ (\E \tilde{\beta} - \beta^0)(\E \tilde{\beta} - \beta^0)^T
+\end{align*}
+The first term is, of course, just the variance, and the second is the squared bias. So the point is that if we pick a clever biased estimator with a tiny variance, then this might do better than unbiased estimators with large variance.
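The variance-plus-squared-bias decomposition above can be checked numerically. Below is a minimal Monte Carlo sketch (our own illustration, numpy only; the data, the shrinkage factor $0.8$ and the number of replications are arbitrary illustrative choices):

```python
# Monte Carlo check of E[(b - b0)(b - b0)^T] = Var(b) + bias bias^T for a
# deliberately biased linear estimator (OLS shrunk by a factor 0.8).
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 40, 3, 1.0
X = rng.standard_normal((n, p))
beta0 = np.array([1.0, 2.0, -1.0])
H = 0.8 * np.linalg.solve(X.T @ X, X.T)   # shrunk OLS: linear and biased

draws = np.array([H @ (X @ beta0 + sigma * rng.standard_normal(n))
                  for _ in range(20000)])
mse = np.mean([np.outer(d - beta0, d - beta0) for d in draws], axis=0)
var = np.cov(draws.T, bias=True)          # sample variance matrix (ddof = 0)
bias = draws.mean(axis=0) - beta0
# mean squared error matrix = variance + squared bias, entrywise:
print(np.allclose(mse, var + np.outer(bias, bias)))  # True
```

With the `bias=True` normalisation the identity holds exactly for the sample quantities, not just in expectation.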
+
+\subsection{Ridge regression}
+Ridge regression was introduced in around 1970. The idea of Ridge regression is to try to shrink our estimator a bit in order to lessen the variance, at the cost of introducing a bias. We would hope then that this will result in a smaller mean squared error.
+
+\begin{defi}[Ridge regression]
+ \term{Ridge regression} solves
+ \[
+ (\hat{\mu}^R_\lambda, \hat{\beta}^R_\lambda) = \argmin_{(\mu, \beta) \in \R \times \R^p} \{ \|Y - \mu \mathbf{1} - X \beta\|_2^2 + \lambda \|\beta\|_2^2\},
+ \]
+ where $\mathbf{1}$ is a vector of all $1$'s. Here $\lambda \geq 0$ is a tuning parameter, and it controls how much we penalize a large choice of $\beta$.
+\end{defi}
+
+Here we explicitly have an intercept term. Usually, we eliminate this by adding a column of $1$'s in $X$. But here we want to separate it out, since we do not want to give a penalty for a large $\mu$. For example, if the parameter is temperature, and we decide to measure in degrees Celsius rather than Kelvins, then we don't want the resulting $\hat{\mu}, \hat{\beta}$ to change.
+
+More precisely, our formulation of Ridge regression is so that if we make the modification
+\[
+ Y' = c\mathbf{1} + Y,
+\]
+then we have
+\[
+ \hat{\mu}^R_\lambda(Y') = \hat{\mu}_\lambda^R(Y) + c.
+\]
+Note also that the Ridge regression formula makes sense only if each entry of $\beta$ has the same order of magnitude, or else the penalty will only have a significant effect on the terms of large magnitude. Standard practice is to subtract from each column of $X$ its mean, and then scale it to have $\ell_2$ norm $\sqrt{n}$. The actual number is not important here, but it will be in the case of the Lasso.
+
+By differentiating, one sees that the solution to the optimization problem is
+\begin{align*}
+ \hat{\mu}_\lambda^R &= \bar{Y} = \frac{1}{n} \sum Y_i\\
+ \hat{\beta}^R_\lambda &= (X^T X + \lambda I)^{-1} X^T Y.
+\end{align*}
+Note that we can always pick $\lambda$ such that the matrix $(X^T X + \lambda I)$ is invertible. In particular, this can work even if we have more parameters than data points. This will be important when we work with the Lasso later on.
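To make the closed-form solution concrete, here is a minimal numerical sketch (our own illustration, numpy only; the synthetic data, the seed and the value $\lambda = 5$ are arbitrary choices, and the intercept is omitted for brevity):

```python
# Closed-form ridge estimate and its shrinkage effect on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:3] = [2.0, -1.0, 0.5]
Y = X @ beta0 + 0.1 * rng.standard_normal(n)

def ridge(X, Y, lam):
    # hat{beta}^R_lambda = (X^T X + lam I)^{-1} X^T Y  (intercept omitted)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

beta_ols = ridge(X, Y, 0.0)   # lam = 0 recovers OLS (X has full column rank)
beta_r = ridge(X, Y, 5.0)
# The penalty shrinks the estimate towards zero:
print(np.linalg.norm(beta_r) < np.linalg.norm(beta_ols))  # True
```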
+
+If some god-like being told us a suitable value of $\lambda$ to use, then Ridge regression always works better than ordinary least squares.
+\begin{thm}
+ Suppose $\rank(X) = p$. Then for $\lambda > 0$ sufficiently small (depending on $\beta^0$ and $\sigma^2$), the matrix
+ \[
+ \E(\hat{\beta}^{OLS} - \beta^0)(\hat{\beta}^{OLS} - \beta^0)^T - \E(\hat{\beta}^{R}_\lambda - \beta^0)(\hat{\beta}^{R}_\lambda - \beta^0)^T \tag{$*$}
+ \]
+ is positive definite.
+\end{thm}
+
+\begin{proof}
+ We know that the first term is just $\sigma^2(X^T X)^{-1}$. The right-hand-side has a variance term and a bias term. We first look at the bias:
+ \begin{align*}
+ \E [\hat{\beta} - \beta^0] &= (X^T X + \lambda I)^{-1} X^T X \beta^0 - \beta^0\\
+ &= (X^T X + \lambda I)^{-1}(X^T X + \lambda I - \lambda I) \beta^0- \beta^0\\
+ &= - \lambda (X^T X + \lambda I)^{-1} \beta^0.
+ \end{align*}
+ We can also compute the variance
+ \[
+ \var(\hat{\beta}) = \sigma^2(X^T X + \lambda I)^{-1} X^T X (X^T X + \lambda I)^{-1}.
+ \]
+ Note that both terms appearing in the squared error look like
+ \[
+ (X^T X + \lambda I)^{-1}\text{something}(X^T X + \lambda I)^{-1}.
+ \]
+ So let's try to write $\sigma^2 (X^T X)^{-1}$ in this form. Note that
+ \begin{align*}
+ (X^T X)^{-1} &= (X^T X + \lambda I)^{-1} (X^T X + \lambda I) (X^T X)^{-1} (X^T X + \lambda I) (X^T X + \lambda I)^{-1}\\
+ &= (X^T X + \lambda I)^{-1}(X^T X + 2 \lambda I + \lambda^2 (X^T X)^{-1}) (X^T X + \lambda I)^{-1}.
+ \end{align*}
+ Thus, we can write $(*)$ as
+ \begin{align*}
+ &\hphantom{ {} = {} }(X^T X + \lambda I)^{-1} \Big( \sigma^2(X^T X + 2 \lambda I + \lambda^2(X^T X)^{-1})\\
+ &\hphantom{adasdfafdasfdfadsfasdf}- \sigma^2 X^T X - \lambda^2 \beta^0 (\beta^0)^T\Big) (X^T X + \lambda I)^{-1}\\
+ &= \lambda (X^T X + \lambda I)^{-1} \Big( 2 \sigma^2 I + \sigma^2 \lambda(X^T X)^{-1} - \lambda \beta^0 (\beta^0)^T\Big) (X^T X + \lambda I)^{-1}
+ \end{align*}
+ Since $\lambda > 0$, this is positive definite iff
+ \[
+ 2 \sigma^2 I + \sigma^2 \lambda (X^T X)^{-1} - \lambda \beta^0 (\beta^0)^T
+ \]
+ is positive definite, which is true for $0 < \lambda < \frac{2\sigma^2}{\|\beta^0\|_2^2}$.
+\end{proof}
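One can sanity-check the theorem by evaluating both sides of $(*)$ in closed form. The sketch below (our own illustration, numpy only) uses synthetic $X$ and $\beta^0$ and a $\lambda$ chosen inside the range $(0, 2\sigma^2/\|\beta^0\|_2^2)$ from the proof, and checks that the difference of mean-squared-error matrices is positive definite:

```python
# Closed-form check that ridge beats OLS in matrix MSE for suitable lambda.
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 30, 4, 1.0
X = rng.standard_normal((n, p))
beta0 = np.array([1.0, -2.0, 0.5, 0.0])
lam = 0.9 * 2 * sigma**2 / np.sum(beta0**2)   # inside (0, 2 sigma^2/||b0||^2)

G = np.linalg.inv(X.T @ X + lam * np.eye(p))
mse_ols = sigma**2 * np.linalg.inv(X.T @ X)
bias = -lam * G @ beta0                        # bias of the ridge estimator
mse_ridge = sigma**2 * G @ X.T @ X @ G + np.outer(bias, bias)
# All eigenvalues of the difference should be strictly positive:
print(np.linalg.eigvalsh(mse_ols - mse_ridge).min() > 0)  # True
```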
+While this is nice, this is not really telling us much, because we don't know how to pick the correct $\lambda$. It also doesn't tell us when we should expect a big improvement from Ridge regression.
+
+To understand this better, we need to use the \emph{singular value decomposition}.
+
+\begin{thm}[Singular value decomposition]\index{singular value decomposition}\index{SVD}
+ Let $X \in \R^{n \times p}$ be any matrix. Then it has a \emph{singular value decomposition} (SVD)
+ \[
+ \underset{n \times p}{X} = \underset{n \times n}{U}\;\;\underset{n \times p}{D}\;\;\underset{p \times p}{V^T},
+ \]
+ where $U, V$ are orthogonal matrices, and $D_{11} \geq D_{22} \geq \cdots \geq D_{mm} \geq 0$, where $m = \min (n, p)$, and all other entries of $D$ are zero.
+\end{thm}
+
+When $n > p$, there is an alternative version where $U$ is an $n \times p$ matrix with orthonormal columns, and $D$ is a $p \times p$ diagonal matrix. This is done by replacing $U$ by its first $p$ columns and $D$ by its first $p$ rows. This is known as the \term{thin singular value decomposition}\index{thin SVD}. In this case, $U^T U = I_p$ but $UU^T$ is not the identity.
+
+Let's now try to understand Ridge regression a little better. Suppose $n > p$. Then using the thin SVD, the fitted values from ridge regression are
+\begin{align*}
+ X \hat{\beta}_\lambda^R &= X (X^T X + \lambda I)^{-1} X^T Y\\
+ &= UDV^T (VD^2 V^T + \lambda I)^{-1} VDU^T Y.
+\end{align*}
+We now note that $VD^2 V^T + \lambda I = V(D^2 + \lambda I)V^T$, since $V$ is still orthogonal. We then have
+\[
+ (V (D^2 + \lambda I)V^T)^{-1} = V(D^2 + \lambda I)^{-1} V^T.
+\]
+Since $D^2$ and $\lambda I$ are both diagonal, it is easy to compute their inverses as well. We then have
+\[
+ X\hat{\beta}_\lambda^R = UD^2 (D^2 + \lambda I)^{-1} U^T Y = \sum_{j = 1}^p U_j \frac{D_{jj}^2}{ D_{jj}^2 + \lambda} U_j^T Y.
+\]
+Here $U_j$ refers to the $j$th column of $U$.
+
+Now note that the columns of $U$ form an orthonormal basis for the column space of $X$. If $\lambda = 0$, then this is just a fancy formula for the projection of $Y$ onto the column space of $X$. Thus, what this formula is telling us is that we look at this projection, look at the coordinates in terms of the basis given by the columns of $U$, and scale accordingly.
+
+We can now concretely see the effect of our $\lambda$. The shrinking depends on the size of $D_{jj}$, and the larger $D_{jj}$ is, the less shrinking we do.
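This shrinkage formula is easy to verify numerically. The following sketch (our own illustration, numpy only, with synthetic data) compares the direct ridge fit with the SVD form:

```python
# Check that X beta^R_lambda = U D^2 (D^2 + lam I)^{-1} U^T Y via the thin SVD.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 30, 5, 2.0
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)

fitted_direct = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

U, d, Vt = np.linalg.svd(X, full_matrices=False)   # thin SVD: X = U diag(d) Vt
shrink = d**2 / (d**2 + lam)                       # factors D_jj^2/(D_jj^2 + lam)
fitted_svd = U @ (shrink * (U.T @ Y))

print(np.allclose(fitted_direct, fitted_svd))  # True
```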
+
+This is not very helpful if we don't have a concrete interpretation of the $D_{jj}$, or rather, the columns of $U$. It turns out the columns of $U$ are what are known as the \term{normalized principal components} of $X$.
+
+Take $u \in \R^p$, $\|u\|_2 = 1$. The sample variance of $Xu \in \R^n$ is then
+\[
+ \frac{1}{n} \|Xu\|_2^2 = \frac{1}{n} u^T X^T X u = \frac{1}{n} u^T VD^2 V^T u.
+\]
+We write $w = V^T u$. Then $\|w\|_2 = 1$ since $V$ is orthogonal. So we have
+\[
+ \frac{1}{n} \|Xu\|_2^2 = \frac{1}{n} w^T D^2 w = \frac{1}{n} \sum_j w_j^2 D_{jj}^2 \leq \frac{1}{n} D_{11}^2,
+\]
+and the bound is achieved when $w = e_1$, or equivalently, $u = V e_1 = V_1$. Thus, $V_1$ gives the coefficients of the linear combination of columns of $X$ that has largest sample variance. We can then write the result as
+\[
+ X V_1 = U DV^T V_1 = U_1 D_{11}.
+\]
+We can extend this to a description of the other columns of $U$, which is done on the example sheet. Roughly, the subsequent principal components obey the same optimality conditions with the added constraint of being orthogonal to all earlier principal components.
+
+The conclusion is that Ridge regression works best if $\E Y$ lies in the space spanned by the large principal components of $X$.
+
+\subsection{\texorpdfstring{$v$}{v}-fold cross-validation}
+In practice, the above analysis is not very useful, since it doesn't actually tell us what $\lambda$ we should pick. If we are given a concrete data set, how do we know what $\lambda$ we should pick?
+
+One common method is to use \term{$v$-fold cross-validation}. This is a very general technique that allows us to pick a regression method from a variety of options. We shall explain the method in terms of Ridge regression, where our regression methods are parametrized by a single parameter $\lambda$, but it should be clear that this is massively more general.
+
+Suppose we have some data set $(Y_i, x_i)_{i = 1}^n$, and we are asked to predict the value of $Y^*$ given a new predictor $x^*$. We may want to pick $\lambda$ to minimize the following quantity:
+\[
+ \E \left\{ (Y^* - (x^*)^T \hat{\beta}_\lambda^R(X, Y))^2 \mid X, Y \right\}.
+\]
+It is difficult to actually do this, and so an easier target to minimize is the expected prediction error
+\[
+ \E\left[\E \left\{ (Y^* - (x^*)^T \hat{\beta}_\lambda^R(\tilde{X}, \tilde{Y}))^2 \mid \tilde{X}, \tilde{Y} \right\}\right]
+\]
+One thing to note about this is that we are thinking of $\tilde{X}$ and $\tilde{Y}$ as arbitrary data sets of size $n$, as opposed to the one we have actually got. This might be a more tractable problem, since we are not working with our actual data set, but general data sets.
+
+The method of cross-validation estimates this by splitting the data into $v$ folds.
+\begin{center}
+ \begin{tikzpicture}[yscale=0.5]
+ \draw (0, 0) rectangle +(3, 1);
+ \draw (0, 5) rectangle +(3, 1);
+
+ \draw [dashed] (0, 1) -- (0, 5);
+ \draw [dashed] (3, 1) -- (3, 5);
+
+ \node at (4, 0.5) {$(X^{(1)}, Y^{(1)})$};
+ \node at (6, 0.5) {$A_1$};
+
+ \node at (4, 5.5) {$(X^{(v)}, Y^{(v)})$};
+ \node at (6, 5.5) {$A_v$};
+ \end{tikzpicture}
+\end{center}
$A_i$ is called the set of \term{observation indices} of the $i$th fold, i.e.\ the set of all indices $j$ such that the $j$th data point lies in the $i$th fold.
+
+We let $(X^{(-k)}, Y^{(-k)})$ be all data except that in the $k$th fold. We define
+\[
+ CV(\lambda) = \frac{1}{n} \sum_{k = 1}^v \sum_{i \in A_k} (Y_i - x_i^T \hat{\beta}_\lambda^R (X^{(-k)}, Y^{(-k)}))^2
+\]
We write $\lambda_{CV}$ for the minimizer of this, and pick $\hat{\beta}^R_{\lambda_{CV}}(X, Y)$ as our estimate. This tends to work very well in practice.
+
But we can ask ourselves --- why do we have to pick a single $\lambda$ to make the estimate? We can instead average over a range of different $\lambda$. Suppose we have computed $\hat{\beta}_\lambda^R$ on a grid of $\lambda$-values $\lambda_1 > \lambda_2 > \cdots > \lambda_L$. Our plan is to take a suitable weighted average of the corresponding estimates. Concretely, we want to minimize
+\[
 \frac{1}{n} \sum_{k = 1}^v \sum_{i \in A_k} \left(Y_i - \sum_{\ell = 1}^L w_\ell x_i^T \hat{\beta}_{\lambda_\ell}^R (X^{(-k)}, Y^{(-k)})\right)^2
+\]
over $w \in [0, \infty)^L$. This is known as \term{stacking}. This tends to work better than just doing $v$-fold cross-validation. Indeed, it must --- cross-validation is just a special case where we restrict $w$ to be zero in all but one entry. Of course, this method comes with some heavy computational costs.
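The cross-validation procedure above is easy to implement directly. Below is a minimal sketch assuming \texttt{numpy}; the helper names \texttt{ridge} and \texttt{cv\_score} are ours:

```python
import numpy as np

def ridge(X, Y, lam):
    """Ridge estimate (X^T X + lam I)^{-1} X^T Y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

def cv_score(X, Y, lam, v=5):
    """v-fold cross-validation score CV(lam)."""
    n = X.shape[0]
    folds = np.array_split(np.random.default_rng(1).permutation(n), v)
    err = 0.0
    for A_k in folds:
        mask = np.ones(n, dtype=bool)
        mask[A_k] = False  # train on all data outside the k-th fold
        beta = ridge(X[mask], Y[mask], lam)
        err += np.sum((Y[A_k] - X[A_k] @ beta) ** 2)
    return err / n

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta0 = rng.standard_normal(p)
Y = X @ beta0 + rng.standard_normal(n)

# Minimize CV(lambda) over a grid and keep the minimizer.
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
scores = {lam: cv_score(X, Y, lam) for lam in grid}
lam_cv = min(scores, key=scores.get)
```

Stacking would instead keep all the $\hat{\beta}^R_{\lambda_\ell}$ and optimize over nonnegative weights, at a higher computational cost.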
+
+\subsection{The kernel trick}
We now come to the actual, main topic of the chapter. We start with a very simple observation, based on the identity
+\[
+ (X^T X + \lambda I) X^T = X^T(XX^T + \lambda I).
+\]
One should be careful that the $\lambda I$'s on the two sides are different matrices, as they have different dimensions. Multiplying by the inverses on both sides, we have
+\[
+ X^T(XX^T + \lambda I)^{-1} = (X^T X + \lambda I)^{-1} X^T.
+\]
+We can multiply the right-hand-side by $Y$ to obtain the $\hat{\beta}$ from Ridge regression, and multiply on the left to obtain the fitted values. So we have
+\[
+ XX^T(XX^T + \lambda I)^{-1}Y = X (X^T X + \lambda I)^{-1} X^T Y = X \hat{\beta}^R_\lambda.
+\]
Note that $X^T X$ is a $p \times p$ matrix, and takes $O(np^2)$ time to compute. On the other hand, $XX^T$ is an $n \times n$ matrix, and takes $O(n^2 p)$ time to compute. If $p \gg n$, then this way of computing the fitted values would be much quicker. The key point is that the fitted values from Ridge regression depend only on $K = XX^T$ (and $Y$). Why is this important?
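The equality of the two expressions for the fitted values can be confirmed numerically. A minimal sketch using \texttt{numpy}, in the $p \gg n$ regime where the kernel form pays off:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 20, 100, 1.0  # p >> n, the regime where the trick helps
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)

# Standard Ridge fitted values: X (X^T X + lam I_p)^{-1} X^T Y  -- O(n p^2)
fitted_p = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

# Kernel form: K (K + lam I_n)^{-1} Y with K = X X^T           -- O(n^2 p)
K = X @ X.T
fitted_n = K @ np.linalg.solve(K + lam * np.eye(n), Y)

assert np.allclose(fitted_p, fitted_n)
```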
+
+Suppose we believe we have a quadratic signal
+\[
+ Y_i = x_i^T \beta + \sum_{k, \ell} x_{ik} x_{i\ell} \theta_{k\ell} + \varepsilon_i.
+\]
+Of course, we can do Ridge regression, as long as we add the products $x_{ik} x_{i\ell}$ as predictors. But this requires $O(p^2)$ many predictors. Even if we use the $XX^T$ way, this has a cost of $O(n^2 p^2)$, and if we used the naive method, it would require $O(np^4)$ operations.
+
+We can do better than this. The idea is that we might be able to compute $K$ directly. Consider
+\[
+ (1 + x_i^T x_j)^2 = 1 + 2 x_i^T x_j + \sum_{k, \ell} x_{ik} x_{i\ell} x_{jk}x_{j\ell}.
+\]
+This is equal to the inner product between vectors of the form
+\[
+ (1, \sqrt{2} x_{i1}, \ldots, \sqrt{2}x_{ip}, x_{i1} x_{i1}, \ldots, x_{i1} x_{ip}, x_{i2} x_{i1}, \ldots, x_{ip} x_{ip})\tag{$*$}
+\]
If we set $K_{ij} = (1 + x_i^T x_j)^2$ and form $K(K + \lambda I)^{-1} Y$, this is equivalent to performing Ridge regression with $(*)$ as our predictors. Note that here we don't scale our columns to have the same $\ell_2$ norm. This is pretty interesting, because computing this is only $O(n^2p)$. We managed to kill a factor of $p$ in this computation. The key idea here was that the fitted values depend only on $K$, and not on the values of $x_{ij}$ itself.
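One can check directly that $(1 + x_i^T x_j)^2$ equals the inner product of the explicit feature vectors $(*)$. A quick numerical sketch (\texttt{features} is our name for the map producing $(*)$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
xi, xj = rng.standard_normal(p), rng.standard_normal(p)

def features(x):
    # (1, sqrt(2) x_1, ..., sqrt(2) x_p, x_1 x_1, x_1 x_2, ..., x_p x_p)
    return np.concatenate(([1.0], np.sqrt(2) * x, np.outer(x, x).ravel()))

lhs = (1 + xi @ xj) ** 2          # the kernel evaluation, O(p) work
rhs = features(xi) @ features(xj)  # the explicit O(p^2) inner product
assert np.isclose(lhs, rhs)
```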
+
+Consider the general scenario where we try to predict the value of $Y$ given a predictor $x \in \mathcal{X}$. In general, we don't even assume $\mathcal{X}$ has some nice linear structure where we can do linear regression.
+
+If we want to do Ridge regression, then one thing we can do is that we can try to construct some map $\phi: \mathcal{X} \to \R^D$ for some $D$, and then run Ridge regression using $\{\phi(x_i)\}$ as our predictors. If we want to fit some complicated, non-linear model, then $D$ can potentially be very huge, and this can be costly. The observation above is that instead of trying to do this, we can perhaps find some magical way of directly computing
+\[
+ K_{ij} = k(x_i, x_j) = \bra \phi(x_i), \phi(x_j)\ket.
+\]
+If we can do so, then we can simply use the above formula to obtain the fitted values (we shall discuss the problem of making new predictions later).
+
+Since it is only the function $k$ that matters, perhaps we can specify a ``regression method'' simply by providing a suitable function $k: \mathcal{X} \times \mathcal{X} \to \R$. If we are given such a function $k$, when would it arise from a ``feature map'' $\phi: \mathcal{X} \to \R^D$?
+
+More generally, we will find that it is convenient to allow for $\phi$ to take values in infinite-dimensional vector spaces instead (since we don't actually have to compute $\phi$!).
+\begin{defi}[Inner product space]\index{inner product space}
 An \emph{inner product space} is a real vector space $\mathcal{H}$ endowed with a map $\bra \ph, \ph \ket: \mathcal{H} \times \mathcal{H} \to \R$ that obeys
+ \begin{itemize}
+ \item Symmetry: $\bra u, v \ket = \bra v, u\ket$
+ \item Linearity: If $a, b \in \R$, then $\bra au + bw, v\ket = a \bra u, v\ket + b \bra w, v\ket$.
+ \item Positive definiteness: $\bra u, u \ket \geq 0$ with $\bra u, u \ket = 0$ iff $u = 0$.
+ \end{itemize}
+\end{defi}
+
+If we want $k$ to come from a feature map $\phi$, then an immediate necessary condition is that $k$ has to be symmetric. There is also another condition that corresponds to the positive-definiteness of the inner product.
+
+\begin{prop}
 Given $\phi: \mathcal{X} \to \mathcal{H}$, define $k: \mathcal{X} \times \mathcal{X} \to \R$ by
+ \[
+ k(x, x') = \bra \phi(x), \phi(x')\ket.
+ \]
 Then for any $x_1, \ldots, x_n \in \mathcal{X}$, the matrix $K \in \R^{n \times n}$ with entries
+ \[
+ K_{ij} = k(x_i, x_j)
+ \]
+ is positive semi-definite.
+\end{prop}
+
+\begin{proof}
+ Let $x_1, \ldots, x_n \in \mathcal{X}$, and $\alpha \in \R^n$. Then
+ \begin{align*}
+ \sum_{i, j} \alpha_i k(x_i, x_j) \alpha_j &= \sum_{i, j} \alpha_i \bra \phi(x_i), \phi(x_j)\ket \alpha_j\\
+ &= \left\bra \sum_i \alpha_i \phi(x_i), \sum_j \alpha_j \phi(x_j)\right\ket\\
+ &\geq 0
+ \end{align*}
+ since the inner product is positive definite.
+\end{proof}
+
+This suggests the following definition:
+\begin{defi}[Positive-definite kernel]\index{positive-definite kernel}\index{kernel}
 A \emph{positive-definite kernel} (or simply \emph{kernel}) is a symmetric map $k\colon \mathcal{X} \times \mathcal{X} \to \R$ such that for all $n \in \N$ and $x_1, \ldots, x_n \in \mathcal{X}$, the matrix $K \in \R^{n \times n}$ with entries
+ \[
+ K_{ij} = k(x_i, x_j)
+ \]
+ is positive semi-definite.
+\end{defi}
+
+%It turns out the Cauchy--Schwarz inequality still holds for positive semi-definite matrices
+%\begin{prop}[Cauchy--Schwarz inequality]\index{Cauchy--Schwarz inequality}
+% We have
+% \[
+% k(x, x')^2 \leq k(x, x) k(x', x')
+% \]
+%\end{prop}
+%
+%\begin{proof}
+% The matrix
+% \[
+% \begin{pmatrix}
+% k(x, x) & k(x, x')\\
+% k(x', x) & k(x', x')
+% \end{pmatrix}
+% \]
+% has to be positive semi-definite by assumption, and expanding out, we are done.
+%\end{proof}
+%
+%\begin{prop}
+% $k$ defined by
+% \[
+% k(x, x') = \bra \phi(x), \phi(x')\ket
+% \]
+% is a kernel.
+%\end{prop}
+%
+%\begin{proof}
+% Let $x_1, \ldots, x_n \in \mathcal{X}$, and $\alpha \in \R^n$. Consider
+% \begin{align*}
+% \sum_{i, j} \alpha_i k(x_i, x_j) \alpha_jh &= \sum_{i, j} \alpha_i \bra \phi(x_i), \phi(x_j)\ket\\
+% &= \left\bra \sum_i \alpha_i \phi(x_i), \sum_j \alpha_j \phi(x_j)\right\ket\\
+% &\geq 0
+% \end{align*}
+% since the inner product is positive definite.
+%\end{proof}
+
+We will prove that every positive-definite kernel comes from a feature map. However, before that, let's look at some examples.
+\begin{eg}
+ Suppose $k_1, k_2, \ldots, k_n$ are kernels. Then
+ \begin{itemize}
+ \item If $\alpha_1, \alpha_2 \geq 0$, then $\alpha_1 k_1 + \alpha_2 k_2$ is a kernel. Moreover, if
+ \[
+ k(x, x') = \lim_{m \to \infty} k_m(x, x')
+ \]
+ exists, then $k$ is a kernel.
+ \item The pointwise product $k_1k_2$ is a kernel, where
+ \[
+ (k_1 k_2)(x, x') = k_1(x, x') k_2(x, x').
+ \]
+ \end{itemize}
+\end{eg}
+
+\begin{eg}
 The \term{linear kernel} is $k(x, x') = x^T x'$. Indeed, it is given by the feature map $\phi = \id$ together with the standard inner product on $\R^p$.
+\end{eg}
+
+\begin{eg}
+ The \term{polynomial kernel} is $k(x, x') = (1 + x^T x')^d$ for all $d \in \N$.
+
+ We saw this last time with $d = 2$. This is a kernel since both $1$ and $x^T x'$ are kernels, and sums and products preserve kernels.
+\end{eg}
+
+\begin{eg}
+ The \term{Gaussian kernel} is
+ \[
+ k(x, x') = \exp\left(- \frac{\|x- x'\|^2_2}{2\sigma^2}\right).
+ \]
+ The quantity $\sigma$ is known as the \term{bandwidth}.
+
+ To show that it is a kernel, we decompose
+ \[
+ \|x - x'\|_2^2 = \|x\|_2^2 + \|x'\|_2^2 - 2 x^T x'.
+ \]
+ We define
+ \[
+ k_1(x, x') = \exp\left(-\frac{\|x\|_2^2}{2\sigma^2}\right)\exp\left(-\frac{\|x'\|_2^2}{2\sigma^2}\right).
+ \]
+ This is a kernel by taking $\phi(\ph) = \exp\left(-\frac{\|\ph\|_2^2}{2\sigma^2}\right)$.
+
+ Next, we can define
+ \[
+ k_2(x, x') = \exp\left(\frac{x^T x'}{\sigma^2}\right) = \sum_{r = 0}^\infty \frac{1}{r!} \left(\frac{x^T x'}{\sigma^2}\right)^r.
+ \]
+ We see that this is the infinite linear combination of powers of kernels, hence is a kernel. Thus, it follows that $k = k_1 k_2$ is a kernel.
+
 Note also that any feature map giving this $k$ must take values in an infinite-dimensional inner product space. Roughly speaking, this is because we have arbitrarily large powers of $x$ and $x'$ in the expansion of $k$.
+\end{eg}
+
+\begin{eg}
+ The \term{Sobolev kernel} is defined as follows: we take $\mathcal{X} = [0, 1]$, and let
+ \[
+ k(x, x') = \min(x, x').
+ \]
+ A slick proof that this is a kernel is to notice that this is the covariance function of Brownian motion.
+\end{eg}
+
+\begin{eg}
+ The \term{Jaccard similarity} is defined as follows: Take $\mathcal{X}$ to be the set of all subsets of $\{1, \ldots, p\}$. For $x, x' \in \mathcal{X}$, define
+ \[
+ k(x, x') =
+ \begin{cases}
+ \frac{|x \cap x'|}{|x \cup x'|} & x \cup x' \not= \emptyset\\
+ 1 & \text{otherwise}
+ \end{cases}.
+ \]
+\end{eg}
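Positive semi-definiteness of these example kernels can be checked empirically on random inputs by inspecting the eigenvalues of the resulting Gram matrices. A sketch using \texttt{numpy} (the helper \texttt{is\_psd} is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.standard_normal((n, p))

def is_psd(K, tol=1e-8):
    # a symmetric matrix is PSD iff all its eigenvalues are >= 0
    return np.all(np.linalg.eigvalsh(K) >= -tol)

linear = X @ X.T                       # linear kernel
poly = (1 + X @ X.T) ** 3              # polynomial kernel, d = 3
sq = np.sum(X**2, axis=1)
gauss = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2)  # Gaussian
t = rng.uniform(0, 1, n)
sobolev = np.minimum(t[:, None], t[None, :])                    # Sobolev

for K in (linear, poly, gauss, sobolev):
    assert is_psd(K)
```

Of course, this is only evidence, not a proof: a kernel must be PSD for \emph{every} choice of inputs.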
+
+\begin{thm}[Moore--Aronszajn theorem]\index{Moore--Aronszajn theorem}
+ For every kernel $k: \mathcal{X} \times \mathcal{X} \to \R$, there exists an inner product space $\mathcal{H}$ and a feature map $\phi: \mathcal{X} \to \mathcal{H}$ such that
+ \[
+ k(x, x') = \bra \phi(x), \phi(x')\ket.
+ \]
+\end{thm}
+This is not actually the full Moore--Aronszajn theorem, but a simplified version of it.
+
+\begin{proof}
+ Let $\mathcal{H}$ denote the vector space of functions $f: \mathcal{X} \to \R$ of the form
+ \[
+ f(\ph) = \sum_{i = 1}^n \alpha_i k(\ph, x_i)\tag{$*$}
+ \]
+ for some $n \in \N$, $\alpha_1, \ldots, \alpha_n \in \R$ and $x_1, \ldots, x_n \in \mathcal{X}$. If
+ \[
 g(\ph) = \sum_{j = 1}^m \beta_j k(\ph, x_j') \in \mathcal{H},
+ \]
+ then we tentatively define the inner product of $f$ and $g$ by
+ \[
+ \bra f, g\ket = \sum_{i = 1}^n \sum_{j = 1}^m \alpha_i \beta_j k(x_i, x_j').
+ \]
+ We have to check that this is an inner product, but even before that, we need to check that this is well-defined, since $f$ and $g$ can be represented in the form $(*)$ in multiple ways. To do so, simply observe that
+ \[
+ \sum_{i = 1}^n \sum_{j = 1}^m \alpha_i \beta_j k(x_i, x_j') = \sum_{i = 1}^n \alpha_i g(x_i) = \sum_{j = 1}^m \beta_j f(x_j').\tag{$\dagger$}
+ \]
+ The first equality shows that the definition of our inner product does not depend on the representation of $g$, while the second equality shows that it doesn't depend on the representation of $f$.
+
+ To show this is an inner product, note that it is clearly symmetric and bilinear. To show it is positive definite, note that we have
+ \[
 \bra f, f \ket = \sum_{i = 1}^n \sum_{j = 1}^n \alpha_i \alpha_j k(x_i, x_j) \geq 0
+ \]
+ since the kernel is positive semi-definite. It remains to check that if $\bra f, f\ket = 0$, then $f = 0$ as a function. To this end, note the important \term{reproducing property}: by $(\dagger)$, we have
+ \[
+ \bra k(\ph, x), f\ket = f(x).
+ \]
+ This says $k(\cdot, x)$ represents the evaluation-at-$x$ linear functional.
+
+ In particular, we have
+ \[
+ f(x)^2 = \bra k(\ph, x), f\ket^2 \leq \bra k(\ph, x), k(\ph, x)\ket \bra f, f\ket = 0.
+ \]
+ Here we used the Cauchy--Schwarz inequality, which, if you inspect the proof, does not require positive definiteness, just positive semi-definiteness. So it follows that $f \equiv 0$. Thus, we know that $\mathcal{H}$ is indeed an inner product space.
+
+ We now have to construct a feature map. Define $\phi: \mathcal{X} \to \mathcal{H}$ by
+ \[
+ \phi(x) = k(\ph, x).
+ \]
+ Then we have already observed that
+ \[
+ \bra \phi(x), \phi(x')\ket = \bra k(\ph, x), k(\ph, x')\ket = k(x, x'),
+ \]
+ as desired.
+\end{proof}
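The reproducing property $\bra k(\ph, x), f\ket = f(x)$ is easy to see concretely for elements of the span $(*)$: the inner product of $k(\ph, x)$ with $f = \sum_i \alpha_i k(\ph, x_i)$ is $\sum_i \alpha_i k(x, x_i)$, which is exactly $f(x)$. A small numerical illustration with a Gaussian kernel (any kernel would do):

```python
import numpy as np

def k(x, y):
    # Gaussian kernel with bandwidth 1
    return np.exp(-np.sum((x - y) ** 2) / 2)

rng = np.random.default_rng(0)
pts = rng.standard_normal((5, 2))   # the x_i defining f
alpha = rng.standard_normal(5)      # the coefficients alpha_i

def f(x):
    # f = sum_i alpha_i k(., x_i), an element of H of the form (*)
    return sum(a * k(x, xi) for a, xi in zip(alpha, pts))

# <k(., x), f> = sum_i alpha_i k(x, x_i) by the definition of the
# inner product, and this equals f(x): the reproducing property.
x = rng.standard_normal(2)
inner = sum(a * k(x, xi) for a, xi in zip(alpha, pts))
assert np.isclose(inner, f(x))
```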
+
+We shall boost this theorem up to its full strength, where we say there is a ``unique'' inner product space with such a feature map. Of course, it will not be unique unless we impose appropriate quantifiers, since we can just throw in some random elements into $\mathcal{H}$.
+
+Recall (or learn) from functional analysis that any inner product space $\mathcal{B}$ is a normed space, with norm
+\[
+ \|f\|^2 = \bra f,f \ket_\mathcal{B}.
+\]
We say a sequence $(f_m)$ in a normed space $\mathcal{B}$ is \emph{Cauchy}\index{Cauchy sequence} if $\|f_m - f_n\|_\mathcal{B} \to 0$ as $n, m \to \infty$. A normed space is \emph{complete}\index{complete inner product space} if every Cauchy sequence has a limit (in the space), and a complete inner product space is called a \term{Hilbert space}.
+
+One important property of a Hilbert space $\mathcal{B}$ is that if $V$ is a closed subspace of $\mathcal{B}$, then every $f \in \mathcal{B}$ has a unique representation as $f = v + w$, where $v \in V$ and
+\[
+ w \in V^\perp = \{u \in \mathcal{B}: \bra u, y\ket = 0 \text{ for all } y \in V\}.
+\]
+By adding limits of Cauchy sequences to $\mathcal{H}$, we can obtain a Hilbert space. Indeed, if $(f_m)$ is Cauchy in $\mathcal{H}$, then
+\[
+ |f_m(x) - f_n(x)| \leq k^{1/2}(x, x) \|f_m - f_n\|_{\mathcal{H}} \to 0
+\]
+as $m, n \to \infty$. Since every Cauchy sequence in $\R$ converges (i.e.\ $\R$ is complete), it makes sense to define a limiting function
+\[
+ f(x) = \lim_{n \to \infty} f_n(x),
+\]
+and it can be shown that after augmenting $\mathcal{H}$ with such limits, we obtain a Hilbert space. In fact, it is a special type of Hilbert space.
+
\begin{defi}[Reproducing kernel Hilbert space (RKHS)]\index{reproducing kernel Hilbert space}\index{RKHS}
+ A Hilbert space $\mathcal{B}$ of functions $f: \mathcal{X} \to \R$ is a reproducing kernel Hilbert space if for each $x \in \mathcal{X}$, there exists a $k_x \in \mathcal{B}$ such that
+ \[
+ \bra k_x, f\ket = f(x)
+ \]
 for all $f \in \mathcal{B}$.
+
+ The function $k: \mathcal{X} \times \mathcal{X} \to \R$ given by
+ \[
 k(x, x') = \bra k_x, k_{x'}\ket = k_x(x') = k_{x'}(x)
+ \]
+ is called the \term{reproducing kernel} associated with $\mathcal{B}$.
+\end{defi}
+By the Riesz representation theorem, this condition is equivalent to saying that pointwise evaluation is continuous.
+
We know the reproducing kernel associated with an RKHS is a positive-definite kernel, and the Moore--Aronszajn theorem can be extended to show that to each kernel $k$, there is a unique reproducing kernel Hilbert space $\mathcal{H}$ and feature map $\phi: \mathcal{X} \to \mathcal{H}$ such that $k(x, x') = \bra \phi(x), \phi(x')\ket$.
+
+\begin{eg}
+ Take the linear kernel
+ \[
+ k(x, x') = x^T x'.
+ \]
+ By definition, we have
+ \[
 \mathcal{H} = \left\{f:\R^p \to \R \;\middle|\; f(x) = \sum_{i = 1}^n \alpha_i x_i^T x \text{ for some } n \in \N,\ \alpha_1, \ldots, \alpha_n \in \R \text{ and } x_1, \ldots, x_n \in \R^p\right\}.
 \]
 We then see that this is equal to
+ \[
+ \mathcal{H} = \{f: \R^p \to \R \mid f(x) = \beta^T x\text{ for some }\beta \in \R^p\},
+ \]
+ and if $f(x) = \beta^T x$, then $\|f\|_{\mathcal{H}}^2 = k(\beta, \beta) = \|\beta\|^2_2$.
+\end{eg}
+
+\begin{eg}
+ Take the Sobolev kernel, where $\mathcal{X} = [0, 1]$ and $k(x, x') = \min(x, x')$. Then $\mathcal{H}$ includes all linear combinations of functions of the form $x \mapsto \min(x, x')$, where $x' \in [0, 1]$, and their pointwise limits. These functions look like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 2) node [above] {$\min(x, x')$};
+
+ \draw (0, 0) -- (1.6, 1.6) -- (3, 1.6);
+
+ \draw [dashed] (1.6, 0) node [below] {$x'$} -- (1.6, 1.6);
+ \end{tikzpicture}
+ \end{center}
+ Since we allow arbitrary linear combinations of these things and pointwise limits, this gives rise to a large class of functions. In particular, this includes all Lipschitz functions that are $0$ at the origin.
+
+ In fact, the resulting space is a Sobolev space, with the norm given by% check this
+ \[
+ \|f\| = \left(\int_0^1 f'(x)^2 \;\d x \right)^{1/2}.
+ \]
+ For example, if we take $f = \min (\ph, x)$, then by definition we have
+ \[
+ \|\min(\ph, x)\|_\mathcal{H}^2 = \min(x, x) = x,
+ \]
+ whereas the formula we claimed gives
+ \[
+ \int_0^x 1^2 \;\d t = x.
+ \]
+\end{eg}
+
+In general, it is difficult to understand the RKHS, but if we can do so, this provides a lot of information on what we are regularizing in the kernel trick.
+
+\subsection{Making predictions}
Let's now try to understand what exactly we are doing when we do ridge regression with a kernel $k$. To do so, we first think carefully about what we were doing in ordinary ridge regression, which corresponds to using the linear kernel. For the linear kernel, the squared $\ell_2$ norm $\|\beta\|_2^2$ corresponds exactly to the squared norm $\|f\|_{\mathcal{H}}^2$ in the RKHS. Thus, an alternative way of expressing ridge regression is
+\[
+ \hat{f} = \argmin_{f \in \mathcal{H}} \left\{\sum_{i = 1}^n (Y_i - f(x_i))^2 + \lambda \|f\|_{\mathcal{H}}^2 \right\},\tag{$*$}
+\]
+where $\mathcal{H}$ is the RKHS of the linear kernel. Now this way of writing ridge regression makes sense for an arbitrary RKHS, so we might think this is what we should solve in general.
+
+But if $\mathcal{H}$ is infinite dimensional, then naively, it would be quite difficult to solve $(*)$. The solution is provided by the representer theorem.
+
+\begin{thm}[Representer theorem]\index{representer theorem}
+ Let $\mathcal{H}$ be an RKHS with reproducing kernel $k$. Let $c$ be an arbitrary loss function and $J: [0, \infty) \to \R$ any strictly increasing function. Then the minimizer $\hat{f} \in \mathcal{H}$ of
+ \[
+ Q_1(f) = c(Y, x_1, \ldots, x_n, f(x_1), \ldots, f(x_n)) + J(\|f\|_{\mathcal{H}}^2)
+ \]
+ lies in the linear span of $\{k(\ph, x_i)\}_{i = 1}^n$.
+\end{thm}
+
+Given this theorem, we know our optimal solution can be written in the form
+\[
+ \hat{f}(\ph) = \sum_{i = 1}^n \hat{\alpha}_i k(\ph, x_i),
+\]
+and thus we can rewrite our optimization problem as looking for the $\hat{\alpha} \in \R^n$ that minimizes
+\[
+ Q_2(\alpha) = c(Y, x_1, \ldots, x_n, K\alpha) + J(\alpha^T K \alpha),
+\]
+over $\alpha \in \R^n$ (with $K_{ij} = k(x_i, x_j)$).
+
+For an arbitrary $c$, this can still be quite difficult, but in the case of Ridge regression, this tells us that $(*)$ is equivalent to minimizing
+\[
+ \|Y - K\alpha\|_2^2 + \lambda \alpha^T K \alpha,
+\]
+and by differentiating, we find that this is given by solving
+\[
+ K \hat{\alpha} = K(K + \lambda I)^{-1} Y.
+\]
+Of course $K\hat{\alpha}$ is also our fitted values, so we see that if we try to minimize $(*)$, then the fitted values is indeed given by $K(K + \lambda I)^{-1}Y$, as in our original motivation.
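Putting this together gives a complete recipe for kernel ridge regression: form $K$, solve for $\hat{\alpha}$, and evaluate $\hat{f} = \sum_i \hat{\alpha}_i k(\ph, x_i)$ at any point. A minimal sketch with a Gaussian kernel on simulated one-dimensional data (assuming \texttt{numpy}; the data and bandwidth are our choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, sigma = 40, 0.5, 0.3
x = np.sort(rng.uniform(0, 1, n))
Y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

# Gaussian kernel matrix K_ij = k(x_i, x_j)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma**2))

# By the representer theorem, hat f = sum_i alpha_i k(., x_i), where
# alpha solves K alpha = K (K + lam I)^{-1} Y; one solution is:
alpha = np.linalg.solve(K + lam * np.eye(n), Y)
fitted = K @ alpha  # the fitted values K (K + lam I)^{-1} Y

def f_hat(xnew):
    """Evaluate hat f at a new point via the kernel expansion."""
    return np.exp(-(xnew - x) ** 2 / (2 * sigma**2)) @ alpha
```

Evaluating \texttt{f\_hat} at a training point recovers the corresponding fitted value, and evaluating it elsewhere gives predictions at new $x$.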
+
+\begin{proof}
+ Suppose $\hat{f}$ minimizes $Q_1$. We can then write
+ \[
+ \hat{f} = u + v
+ \]
+ where $u \in V = \spn\{k(\ph, x_i): i = 1, \ldots ,n\}$ and $v \in V^\perp$. Then
+ \[
 \hat{f}(x_i) = \bra \hat{f}, k(\ph, x_i)\ket = \bra u + v, k(\ph, x_i)\ket = \bra u, k(\ph, x_i)\ket = u(x_i).
+ \]
+ So we know that
+ \[
 c(Y, x_1, \ldots, x_n, \hat{f}(x_1), \ldots, \hat{f}(x_n)) = c(Y, x_1, \ldots, x_n, u(x_1), \ldots, u(x_n)).
+ \]
+ Meanwhile,
+ \[
 \|\hat{f}\|^2_{\mathcal{H}} = \|u + v\|^2_{\mathcal{H}} = \|u\|^2_{\mathcal{H}} + \|v\|^2_{\mathcal{H}},
+ \]
+ using the fact that $u$ and $v$ are orthogonal. So we know
+ \[
 J(\|\hat{f}\|_{\mathcal{H}}^2 ) \geq J(\|u\|_{\mathcal{H}}^2)
+ \]
 with equality iff $v = 0$. Hence $Q_1(\hat{f}) \geq Q_1(u)$ with equality iff $v = 0$, and so we must have $v = 0$ by optimality.
+
+ Thus, we know that the optimizer in fact lies in $V$, and then the formula of $Q_2$ just expresses $Q_1$ in terms of $\alpha$.
+\end{proof}
+
+How well does our kernel machine do? Let $\mathcal{H}$ be an RKHS with reproducing kernel $k$, and $f^0 \in \mathcal{H}$. Consider a model
+\[
+ Y_i = f^0(x_i) + \varepsilon_i
+\]
+for $i = 1, \ldots, n$, and assume $\E \varepsilon = 0$, $\var(\varepsilon) = \sigma^2 I$.
+
For convenience, we assume $\|f^0\|^2_{\mathcal{H}} \leq 1$. There is no loss of generality in assuming this, since we can always achieve it by rescaling.
+
+Let $K$ be the kernel matrix $K_{ij} = k(x_i, x_j)$ with eigenvalues $d_1 \geq d_2 \geq \cdots \geq d_n \geq 0$. Define
+\[
+ \hat{f}_\lambda = \argmin_{f \in \mathcal{H}} \left\{ \sum_{i = 1}^n (Y_i - f(x_i))^2 + \lambda \|f\|_\mathcal{H}^2\right\}.
+\]
+\begin{thm}
+ We have
+ \begin{align*}
+ \frac{1}{n} \sum_{i = 1}^n \E(f^0(x_i) - \hat{f}_\lambda (x_i))^2 &\leq \frac{\sigma^2}{n} \sum_{i = 1}^n \frac{d_i^2}{(d_i + \lambda)^2} + \frac{\lambda}{4n}\\
+ &\leq \frac{\sigma^2}{n} \frac{1}{\lambda} \sum_{i = 1}^n \min \left(\frac{d_i}{4}, \lambda\right) + \frac{\lambda}{4n}.
+ \end{align*}
+\end{thm}
+Given a data set, we can compute the eigenvalues $d_i$, and thus we can compute this error bound.
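Computing the bound from data is straightforward once we have the eigenvalues of $K$. The sketch below (assuming \texttt{numpy}; the Sobolev kernel and parameter choices are ours for illustration) evaluates both bounds and confirms the second dominates the first, as the proof shows term by term:

```python
import numpy as np

def bound1(d, lam, sigma2, n):
    """First bound: (sigma^2/n) sum d_i^2/(d_i+lam)^2 + lam/(4n)."""
    return sigma2 / n * np.sum(d**2 / (d + lam) ** 2) + lam / (4 * n)

def bound2(d, lam, sigma2, n):
    """Second bound: (sigma^2/(n lam)) sum min(d_i/4, lam) + lam/(4n)."""
    return sigma2 / (n * lam) * np.sum(np.minimum(d / 4, lam)) + lam / (4 * n)

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 1, n)
K = np.minimum(x[:, None], x[None, :])   # Sobolev kernel matrix
d = np.linalg.eigvalsh(K)[::-1]          # d_1 >= ... >= d_n
d = np.clip(d, 0, None)                  # clear tiny negative round-off

lam, sigma2 = 1.0, 1.0
b1, b2 = bound1(d, lam, sigma2, n), bound2(d, lam, sigma2, n)
assert b1 <= b2 + 1e-9   # the first bound is sharper
```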
+
+\begin{proof}
+ We know from the representer theorem that
+ \[
 (\hat{f}_\lambda (x_1), \ldots, \hat{f}_\lambda (x_n))^T = K(K + \lambda I)^{-1} Y.
+ \]
+ Also, there is some $\alpha \in \R^n$ such that
+ \[
+ (f^0(x_1), \ldots, f^0(x_n))^T = K \alpha.
+ \]
+ Moreover, on the example sheet, we show that
+ \[
+ 1 \geq \|f^0\|_{\mathcal{H}}^2 \geq \alpha^T K \alpha.
+ \]
+ Consider the eigen-decomposition $K = UDU^T$, where $U$ is orthogonal, $D_{ii} = d_i$ and $D_{ij} = 0$ for $i \not= j$. Then we have
+ \begin{align*}
+ \E \sum_{i = 1}^n (f^0(x_i) - \hat{f}_\lambda (x_i))^2 &= \E\|K\alpha - K(K + \lambda I)^{-1} (K\alpha + \varepsilon)\|_2^2\\
+ \intertext{Noting that $K\alpha= (K + \lambda I)(K + \lambda I)^{-1} K\alpha$, we obtain}
+ &= \E \norm{\lambda (K + \lambda I)^{-1} K\alpha - K(K + \lambda I)^{-1} \varepsilon}_2^2\\
+ &= \underbrace{\lambda^2 \norm{(K + \lambda I)^{-1} K\alpha}_2^2}_{\mathrm{(I)}} + \underbrace{\E \norm{K(K + \lambda I)^{-1} \varepsilon}_2^2}_{\mathrm{(II)}}.
+ \end{align*}
+ At this stage, we can throw in the eigen-decomposition of $K$ to write (I) as
+ \begin{align*}
+ \mathrm{(I)} &= \lambda^2 \norm{(UDU^T + \lambda I)^{-1} UDU^T \alpha }_2^2\\
+ &= \lambda^2 \|U (D + \lambda I)^{-1} \underbrace{D U^T \alpha}_{\theta} \|_2^2\\
+ &= \sum_{i = 1}^n \theta_i^2 \frac{\lambda^2}{(d_i + \lambda)^2}
+ \end{align*}
+ Now we have
+ \[
 \alpha^T K \alpha = \alpha^T U DU^T \alpha = \alpha^T U DD^+ DU^T \alpha,
+ \]
+ where $D^+$ is diagonal and
+ \[
+ D_{ii}^+ =
+ \begin{cases}
+ d_i^{-1} & d_i > 0\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ We can then write this as
+ \[
+ \alpha^T K \alpha = \sum_{d_i > 0} \frac{\theta_i^2}{d_i}.
+ \]
+ The key thing we know about this is that it is $\leq 1$.
+
+ Note that by definition of $\theta_i$, we see that if $d_i = 0$, then $\theta_i = 0$. So we can write
+ \[
 \mathrm{(I)} = \sum_{i: d_i > 0} \frac{\theta_i^2}{d_i} \frac{d_i \lambda^2}{(d_i + \lambda)^2} \leq \lambda \max_{i = 1, \ldots, n} \frac{d_i \lambda}{(d_i + \lambda)^2}
+ \]
+ by H\"older's inequality with $(p, q) = (1, \infty)$. Finally, use the inequality that
+ \[
+ (a + b)^2 \geq 4ab
+ \]
+ to see that we have
+ \[
+ \mathrm{(I)} \leq \frac{\lambda}{4}.
+ \]
+ So we are left with (II), which is a bit easier. Using the trace trick, we have
+ \begin{align*}
+ \mathrm{(II)} &= \E \varepsilon^T (K + \lambda I)^{-1} K^2 (K + \lambda I)^{-1} \varepsilon\\
+ &= \E \tr \left(K (K + \lambda I)^{-1} \varepsilon \varepsilon^T (K + \lambda I)^{-1} K\right)\\
+ &= \tr \left(K (K + \lambda I)^{-1} \E(\varepsilon \varepsilon^T) (K + \lambda I)^{-1} K\right)\\
+ &= \sigma^2 \tr \left(K^2(K + \lambda I)^{-2}\right)\\
+ &= \sigma^2 \tr \left(UD^2 U^T (UDU^T + \lambda I)^{-2}\right)\\
+ &= \sigma^2 \tr \left(D^2 (D + \lambda I)^{-2}\right)\\
+ &= \sigma^2 \sum_{i = 1}^n \frac{d_i^2}{(d_i + \lambda)^2}.
+ \end{align*}
+ Finally, writing $\frac{d_i^2}{(d_i + \lambda)^2} = \frac{d_i}{\lambda} \frac{d_i \lambda}{(d_i + \lambda)^2}$, we have
+ \[
+ \frac{d_i^2}{(d_i + \lambda)^2} \leq \min \left(1, \frac{d_i}{4 \lambda}\right),
+ \]
+ and we have the second bound.
+\end{proof}
How can we interpret this? Looking at the formula, we see that the bound is good if the $d_i$ decay quickly, and the exact values of the $d_i$ depend only on the choice of the $x_i$. So suppose the $x_i$ are random, i.i.d.\ and independent of $\varepsilon$. Then the entire analysis is still valid by conditioning on $x_1, \ldots, x_n$.
+
+We define $\hat{\mu}_i = \frac{d_i}{n}$, and $\lambda_n = \frac{\lambda}{n}$. Then we can rewrite our result to say
+\[
+ \frac{1}{n} \E \sum_{i = 1}^n (f^0(x_i) - \hat{f}_\lambda (x_i))^2 \leq \frac{\sigma^2}{\lambda_n} \frac{1}{n} \sum_{i = 1}^n \E \min\left(\frac{\hat{\mu}_i}{4}, \lambda_n\right) + \frac{\lambda_n}{4} \equiv \E \delta_n(\lambda_n).
+\]
+We want to relate this more directly to the kernel somehow. Given a density $p(x)$ on $\mathcal{X}$, \term{Mercer's theorem} guarantees the following eigenexpansion
+\[
+ k(x, x') = \sum_{j = 1}^\infty \mu_j e_j(x) e_j(x'),
+\]
+where the eigenvalues $\mu_j$ and eigenfunctions $e_j$ obey
+\[
+ \mu_j e_j(x') = \int_{\mathcal{X}} k(x, x') e_j(x) p(x)\;\d x
+\]
+and
+\[
+ \int_{\mathcal{X}} e_k(x) e_j(x) p(x)\;\d x = \mathbf{1}_{j = k}.
+\]
+One can then show that
+\[
+ \frac{1}{n} \E \sum_{i = 1}^n \min \left(\frac{\hat{\mu}_i}{4}, \lambda_n\right) \leq \frac{1}{n} \sum_{i = 1}^\infty \min\left(\frac{\mu_i}{4}, \lambda_n\right)
+\]
+up to some absolute constant factors. For a particular density $p(x)$ of the input space, we can try to solve the integral equation to figure out what the $\mu_j$'s are, and we can find the expected bound.
+
+When $k$ is the Sobolev kernel and $p(x)$ is the uniform density, then it turns out we have
+\[
+ \frac{\mu_j}{4} = \frac{1}{\pi^2(2j - 1)^2}.
+\]
+By drawing a picture, we see that we can bound $\sum_{i = 1}^\infty \min \left(\frac{\mu_i}{4} , \lambda_n\right)$ by
+\[
+ \sum_{i = 1}^\infty \min \left(\frac{\mu_i}{4} , \lambda_n\right) \leq \int_0^\infty \lambda_n \wedge \frac{1}{\pi^2(2j - 1)^2}\;\d j \leq \frac{\sqrt{\lambda_n}}{\pi} + \frac{\lambda_n}{2}.
+\]
+So we find that
+\[
+ \E(\delta_n(\lambda_n)) = O \left(\frac{\sigma^2}{n \lambda_n^{1/2}} + \lambda_n\right).
+\]
+We can then find the $\lambda_n$ that minimizes this, and we see that we should pick
+\[
+ \lambda_n \sim \left(\frac{\sigma^2}{n}\right)^{2/3},
+\]
+which gives an error rate of $\sim \left(\frac{\sigma^2}{n}\right)^{2/3}$.
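The balancing step at the end can be checked numerically: minimizing $g(\lambda) = \frac{\sigma^2}{n\sqrt{\lambda}} + \lambda$ by calculus gives $\lambda^* = (\sigma^2/(2n))^{2/3}$, which scales as $n^{-2/3}$. A quick grid-search sketch (assuming \texttt{numpy}; constants suppressed as in the $O(\ph)$ statement):

```python
import numpy as np

def err(lam, n, sigma2=1.0):
    # the bound sigma^2/(n sqrt(lam)) + lam, up to constants
    return sigma2 / (n * np.sqrt(lam)) + lam

lams = np.logspace(-6, 0, 2000)
for n in (10**3, 10**4, 10**5):
    lam_star = lams[np.argmin(err(lams, n))]
    # calculus: the minimizer is (sigma^2/(2n))^{2/3} ~ n^{-2/3}
    assert np.isclose(lam_star, (1 / (2 * n)) ** (2 / 3), rtol=0.05)
```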
+
+\subsection{Other kernel machines}
Recall that the representer theorem was much more general than what we used it for. We could apply it to any loss function. In particular, we can apply it to classification problems. For example, we might want to predict whether an email is spam. In this case, we have $Y \in \{-1, 1\}^n$.
+
Suppose we are trying to find a hyperplane that separates the two sets $\{x_i : Y_i = 1\}$ and $\{x_i : Y_i = -1\}$. Consider the really optimistic case where they can indeed be completely separated by a hyperplane through the origin. So there exists $\beta \in \R^p$ (the normal vector) with $Y_i x_i^T \beta > 0$ for all $i$. Moreover, it would be nice if we could maximize the minimum distance between any of the points and the hyperplane defined by $\beta$. This minimum distance is
+\[
 \min_{i = 1, \ldots, n} \frac{Y_i x_i^T \beta}{\|\beta\|_2}.
+\]
+Thus, we can formulate our problem as
+\begin{center}
 maximize $M$ over $\beta \in \R^p$ and $M \geq 0$, subject to $\frac{Y_i x_i^T \beta}{\|\beta\|_2} \geq M$ for all $i$.
+\end{center}
+
+This optimization problem gives the hyperplane that maximizes the \emph{margin} between the two classes.
+
Let's think about the more realistic case where we cannot completely separate the two sets. We then want to impose a penalty for each point on the wrong side of the margin, while still trying to keep the margin large.
+
+There are two options. We can use the penalty
+\[
+ \lambda \sum_{i = 1}^n \left(M - \frac{Y_i x_i^T \beta}{\|\beta\|_2}\right)_+
+\]
+where $t_+ = t \mathbf{1}_{t \geq 0}$. Another option is to take
+\[
+ \lambda \sum_{i = 1}^n \left(1 - \frac{Y_i x_i^T \beta}{M \|\beta\|_2}\right)_+
+\]
+which is perhaps more natural, since now everything is ``dimensionless''. We will actually use neither, but will start with the second form and massage it to something that can be attacked with our existing machinery. Starting with the second option, we can write our optimization problem as
+\[
+ \max_{M \geq 0, \beta \in \R^p} \left(M - \lambda \sum_{i = 1}^n \left(1 - \frac{Y_i x_i^T \beta}{M \|\beta\|_2}\right)_+\right).
+\]
+Since the objective function is invariant to any positive scaling of $\beta$, we may assume $\|\beta\|_2 = \frac{1}{M}$, and rewrite this problem as maximizing
+\[
+ \frac{1}{\|\beta\|_2} - \lambda \sum_{i = 1}^n (1 - Y_i x_i^T \beta)_+.
+\]
Replacing the maximization of $\frac{1}{\|\beta\|_2}$ with the minimization of $\|\beta\|_2^2$, and adding instead of subtracting the penalty part, we modify this to say
+\[
+ \min_{\beta \in \R^p} \left(\|\beta\|_2^2 + \lambda \sum_{i = 1}^n (1 - Y_i x_i^T \beta)_+\right).
+\]
+Next, we replace $\lambda$ with $\frac{1}{\lambda}$, and multiply the whole objective by $\lambda$ to get
+\[
+ \min_{\beta \in \R^p} \left(\lambda \|\beta\|_2^2 + \sum_{i = 1}^n (1 - Y_i x_i^T \beta)_+\right).
+\]
+This looks much more like what we've seen before, with $\lambda \|\beta\|_2^2$ being the penalty term and $\sum_{i = 1}^n (1 - Y_i x_i^T \beta)_+$ being the loss function.
+
+The final modification is that we want to allow hyperplanes that do not necessarily pass through the origin. To do this, we allow ourselves to translate all the $x_i$'s by a fixed vector $\delta \in \R^p$. This gives
+\[
+ \min_{\beta \in \R^p, \delta \in \R^p} \left(\lambda \|\beta\|_2^2 + \sum_{i = 1}^n \left(1 - Y_i (x_i - \delta)^T \beta\right)_+\right).
+\]
+Since $\delta$ enters the objective only through the scalar $-\delta^T \beta$, we can simply replace it with a constant $\mu$, and write our problem as
+\[
+ (\hat{\mu}, \hat{\beta}) = \argmin_{(\mu, \beta) \in \R \times \R^p} \sum_{i = 1}^n \left(1 - Y_i (x_i^T \beta + \mu)\right)_+ + \lambda \|\beta\|_2^2.\tag{$*$}
+\]
+This gives the \term{support vector classifier}.
+
+This is still just fitting a hyperplane. Now we would like to ``kernelize'' this, so that we can fit in a surface instead. Let $\mathcal{H}$ be the RKHS corresponding to the linear kernel. We can then write $(*)$ as
+\[
+ (\hat{\mu}_\lambda, \hat{f}_\lambda) = \argmin_{(\mu, f) \in \R \times \mathcal{H}} \sum_{i = 1}^n (1 - Y_i( f(x_i) + \mu))_+ + \lambda \|f\|_{\mathcal{H}}^2.
+\]
+Replacing $\mathcal{H}$ by a general RKHS, the representer theorem (or rather, a slight variant of it) tells us that the above optimization problem is equivalent to the \term{support vector machine}
+\[
+ (\hat{\mu}_\lambda, \hat{\alpha}_\lambda) = \argmin_{(\mu, \alpha) \in \R \times \R^n}\sum_{i = 1}^n (1 - Y_i (K_i^T \alpha + \mu))_+ + \lambda \alpha^T K \alpha
+\]
+where $K_{ij} = k(x_i, x_j)$ and $k$ is the reproducing kernel of $\mathcal{H}$. Predictions at a new $x$ are then given by
+\[
+ \mathrm{sign} \left(\hat{\mu}_\lambda + \sum_{i = 1}^n \hat{\alpha}_{\lambda, i} k(x, x_i)\right).
+\]
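+
+As a sanity check of the support vector classifier $(*)$, here is a minimal numerical sketch that fits it by plain subgradient descent on the hinge-loss objective. The toy data, step size and iteration count are illustrative choices only, not part of the course.
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Toy data: two well-separated Gaussian clouds in R^2, labels in {-1, +1}.
+n = 100
+X = np.vstack([rng.normal(-2.0, 1.0, size=(n // 2, 2)),
+               rng.normal(+2.0, 1.0, size=(n // 2, 2))])
+Y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])
+
+lam = 0.1
+beta = np.zeros(2)
+mu = 0.0
+step = 1e-3
+
+# Subgradient descent on  lam * ||beta||_2^2 + sum_i (1 - Y_i (x_i^T beta + mu))_+ .
+for _ in range(5000):
+    margins = Y * (X @ beta + mu)
+    active = margins < 1.0  # points whose hinge term is active
+    g_beta = 2 * lam * beta - (Y[active, None] * X[active]).sum(axis=0)
+    g_mu = -Y[active].sum()
+    beta -= step * g_beta
+    mu -= step * g_mu
+
+accuracy = np.mean(np.sign(X @ beta + mu) == Y)
+```
+In practice one would typically solve an equivalent quadratic program instead, but the subgradient view makes the ``penalty plus loss'' structure of $(*)$ concrete.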
+
+One possible alternative to this is to use \term{logistic regression}. Suppose that
+\[
+ \log \left(\frac{\P(Y_i = 1)}{\P(Y_i = -1)}\right) = x_i^T \beta^0,
+\]
+and that $Y_1, \ldots, Y_n$ are independent. An estimate of $\beta^0$ can be obtained through maximizing the likelihood, or equivalently,
+\[
+ \argmin_{\beta \in \R^p} \sum_{i = 1}^n \log (1 + \exp(-Y_i x_i^T \beta)).
+\]
+To kernelize this, we add a penalty term $\lambda \|\beta\|_2^2$ and then proceed as before. The resulting optimization problem is then given by
+\[
+ \argmin_{f \in \mathcal{H}} \left( \sum_{i = 1}^n \log (1 + \exp (-Y_i f(x_i))) + \lambda \|f\|_{\mathcal{H}}^2\right).
+\]
+We can then solve this using the representer theorem.
+
+\subsection{Large-scale kernel machines}
+How do we apply kernel machines at large scale? Whenever we want to make a prediction with a kernel machine, we need $O(n)$ many steps, which is quite a pain if we work with a large data set, say with a few million observations. But even before that, forming $K \in \R^{n \times n}$ and inverting $K + \lambda I$ or equivalent can be computationally too expensive.
+
+One of the more popular approaches to tackle these difficulties is to form a randomized feature map $\hat{\phi}: \mathcal{X} \to \R^b$ such that
+\[
+ \E \hat{\phi}(x)^T \hat{\phi}(x') = k(x, x').
+\]
+To increase the quality of the approximation, we can use
+\[
+ x \mapsto \frac{1}{\sqrt{L}}(\hat{\phi}_1(x), \ldots, \hat{\phi}_L(x))^T \in \R^{Lb},
+\]
+where the $\hat{\phi}_i$ are iid with the same distribution as $\hat{\phi}$.
+
+Let $\Phi$ be the matrix with $i$th row $\frac{1}{\sqrt{L}}(\hat{\phi}_1(x_i), \ldots, \hat{\phi}_L(x_i))$. When performing, e.g.\ kernel ridge regression, we can compute
+\[
+ \hat{\theta} = (\Phi^T \Phi + \lambda I)^{-1} \Phi^T Y.
+\]
+The computational cost of this is $O((Lb)^3 + n (Lb)^2)$. The key point is that this is linear in $n$, and we can choose $L$ to trade off computational cost against the accuracy of the approximation.
+
+Now to predict a new $x$, we form
+\[
+ \hat{\theta}^T \frac{1}{\sqrt{L}} (\hat{\phi}_1(x), \ldots, \hat{\phi}_L(x))^T,
+\]
+and this is $O(Lb)$.
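+
+The following sketch illustrates this pipeline with $b = 1$ random cosine features (the construction for shift-invariant kernels described below); the one-dimensional data, unit bandwidth and dimensions are arbitrary illustrative choices.
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+
+n, L, lam = 500, 50, 0.1
+X = rng.normal(size=(n, 1))                   # one-dimensional inputs for simplicity
+Y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
+
+# L iid random cosine features (b = 1), approximating a unit-bandwidth Gaussian kernel.
+W = rng.normal(size=(L, 1))
+u = rng.uniform(-np.pi, np.pi, size=L)
+Phi = np.sqrt(2.0 / L) * np.cos(X @ W.T + u)  # i-th row: scaled feature vector of x_i
+
+# Ridge regression in feature space: O(L^3 + n L^2), linear in n.
+theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(L), Phi.T @ Y)
+
+def predict(x):
+    return np.sqrt(2.0 / L) * np.cos(x @ W.T + u) @ theta  # O(L) per prediction
+
+train_mse = np.mean((Phi @ theta - Y) ** 2)
+```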
+
+In 2007, Rahimi and Recht proposed a random map for shift-invariant kernels, i.e.\ kernels $k$ such that $k(x, x') = h(x - x')$ for some $h$ (we work with $\mathcal{X} = \R^p$). A common example is the Gaussian kernel.
+
+One motivation of this comes from a classical result known as \emph{Bochner's theorem}.
+
+\begin{thm}[Bochner's theorem]\index{Bochner's theorem}
+ Let $k: \R^p \times \R^p \to \R$ be a continuous kernel. Then $k$ is shift-invariant if and only if there exists some distribution $F$ on $\R^p$ and $c > 0$ such that if $W \sim F$, then
+ \[
+ k(x, x') = c\E \cos((x - x')^T W).
+ \]
+\end{thm}
+Our strategy is then to find some $\hat{\phi}$ depending on $W$ (and some extra, independent randomness) such that
+\[
+ c\cos((x - x')^T W) = \E [\hat{\phi}(x) \hat{\phi}(x') \mid W].
+\]
+We are going to take $b = 1$, so we don't need a transpose on the right.
+
+The idea is then to play around with trigonometric identities to try to write $c \cos ((x - x')^T W)$ in this form. After some work, we find the following solution:
+
+Let $u \sim U[-\pi, \pi]$, and take $x, y \in \R$. Then
+\[
+ \E \cos(x + u)\cos (y + u) = \E (\cos x \cos u - \sin x \sin u)(\cos y \cos u - \sin y \sin u)
+\]
+Since $u$ has the same distribution as $-u$, we see that $\E \cos u\sin u = 0$.
+
+Also, $\cos^2 u + \sin^2 u = 1$. Since $u$ ranges uniformly in $[-\pi, \pi]$, by symmetry, we have $\E \cos^2 u = \E \sin^2 u = \frac{1}{2}$. So we find that
+\[
+ 2 \E \cos(x + u) \cos (y + u) = \cos x \cos y + \sin x \sin y = \cos (x - y).
+\]
+Thus, given a shift-invariant kernel $k$ with associated distribution $F$, we set $W \sim F$ independently of $u$. Define
+\[
+ \hat{\phi}(x) = \sqrt{2c} \cos (W^T x + u).
+\]
+Then
+\begin{align*}
+ \E \hat{\phi}(x) \hat{\phi}(x') &= 2c \E[\E[\cos (W^T x + u) \cos (W^T x' + u) \mid W]]\\
+ &= c \E \cos(W^T (x - x'))\\
+ &= k(x, x').
+\end{align*}
+
+In order to get this to work, we must find the appropriate $W$. In certain cases, this $W$ is actually not too hard to find:
+\begin{eg}
+ Take
+ \[
+ k(x, x') = \exp\left(-\frac{1}{2\sigma^2} \|x - x'\|_2^2\right),
+ \]
+ the Gaussian kernel. If $W \sim N(0, \sigma^{-2} I)$, then
+ \[
+ \E e^{it^TW} = e^{-\|t\|_2^2/(2\sigma^2)} = \E \cos(t^T W),
+ \]
+ so we can indeed take this $F$, with $c = 1$.
+\end{eg}
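+
+We can check this identity numerically: the following Monte Carlo sketch (with arbitrary test points and bandwidth) compares $\E \hat{\phi}(x)\hat{\phi}(x')$ against the Gaussian kernel, using $\hat{\phi}(x) = \sqrt{2}\cos(W^T x + u)$ with $c = 1$.
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+sigma = 1.5
+x = np.array([0.3, -0.7])
+xp = np.array([1.0, 0.2])
+
+L = 200_000
+W = rng.normal(0.0, 1.0 / sigma, size=(L, 2))  # W ~ N(0, sigma^{-2} I)
+u = rng.uniform(-np.pi, np.pi, size=L)
+
+phi_x = np.sqrt(2.0) * np.cos(W @ x + u)       # phi-hat(x) = sqrt(2c) cos(W^T x + u)
+phi_xp = np.sqrt(2.0) * np.cos(W @ xp + u)
+
+estimate = np.mean(phi_x * phi_xp)             # Monte Carlo E phi(x) phi(x')
+exact = np.exp(-np.sum((x - xp) ** 2) / (2 * sigma ** 2))
+```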
+
+\section{The Lasso and beyond}
+We are interested in the situation where we have a very large number of variables compared to the size of the data set. For example, the data set might contain all the genetic and environmental information about patients, and we want to predict whether they have diabetes. The key property is that we expect most of the variables to be irrelevant, and we want to select a small subset of relevant variables and form prediction models based on them.
+
+In these cases, ordinary least squares is not a very good tool. If there are more variables than the number of data points, then it is likely that we can pick the regression coefficients so that our model fits our data exactly, but this model is likely to be spurious, since we are fine-tuning our model based on the random fluctuations of a large number of irrelevant variables. Thus, we want to find a regression method that penalizes non-zero coefficients. Note that Ridge regression is not good enough --- it encourages small coefficients, but since it uses the $\ell_2$ norm, it's quite lenient towards small, non-zero coefficients, as the square of a small number is really small.
+
+There is another reason to favour models that set many coefficients exactly to zero. Consider the linear model
+\[
+ Y = X \beta^0 + \varepsilon,
+\]
+where as usual $\E \varepsilon = 0$ and $\var(\varepsilon) = \sigma^2 I$. Let $S = \{k : \beta^0_k \not= 0\}$, and let $s = |S|$. As before, we suppose we have some a priori belief that $s \ll p$.
+
+For the purposes of this investigation, suppose $X$ has full column rank, so that we can use ordinary least squares. Then the prediction error is
+\begin{align*}
+ \frac{1}{n} \E \|X \beta^0 - X \hat{\beta}^{OLS} \|_2^2 &= \frac{1}{n} \E (\hat{\beta}^{OLS} - \beta^0)^T X^T X (\hat{\beta}^{OLS} - \beta^0)\\
+ &= \frac{1}{n} \E \tr (\hat{\beta}^{OLS} - \beta^0) (\hat{\beta}^{OLS} - \beta^0)^T X^T X\\
+ &= \frac{1}{n} \tr \E(\hat{\beta}^{OLS} - \beta^0) (\hat{\beta}^{OLS} - \beta^0)^T X^T X\\
+ &= \frac{1}{n} \tr \left(\sigma^2 (X^T X)^{-1} X^T X\right)\\
+ &= \sigma^2 \frac{p}{n}.
+\end{align*}
+Note that this does not depend on what $\beta^0$ is, but only on $\sigma^2$, $p$ and $n$.
+
+In the situation we are interested in, we expect $s \ll p$. So if we could identify $S$ and run ordinary least squares on just those variables, then we would have a mean squared prediction error of $\sigma^2 \frac{s}{n}$, which would be much smaller.
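+
+The calculation above is easy to verify by simulation. The following sketch (with arbitrary choices of $n$, $p$ and $\sigma$) compares the Monte Carlo average of $\frac{1}{n}\|X\beta^0 - X\hat{\beta}^{OLS}\|_2^2$ with $\sigma^2 p / n$.
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+n, p, sigma = 100, 10, 2.0
+X = rng.normal(size=(n, p))          # a fixed design, full column rank almost surely
+beta0 = rng.normal(size=p)
+
+reps = 2000
+errs = np.empty(reps)
+for r in range(reps):
+    eps = rng.normal(0.0, sigma, size=n)
+    Y = X @ beta0 + eps
+    beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
+    errs[r] = np.sum((X @ (beta_hat - beta0)) ** 2) / n
+
+mean_err = errs.mean()               # should be close to sigma^2 p / n
+theory = sigma ** 2 * p / n
+```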
+
+We first discuss a few classical model selection methods that do this.
+
+The first approach we may think of is the \emph{best subsets method}, where we try to do regression on all possible choices of $S$ and see which is the ``best''. For any set $M$ of indices, we let $X_M$ be the submatrix of $X$ formed from the columns of $X$ with index in $M$. We then regress $Y$ on $X_M$ for every $M \subseteq \{1, \ldots, p\}$, and then pick the best model via cross-validation, for example. A big problem with this is that the number of subsets grows exponentially with $p$, and so becomes infeasible for, say, $p > 50$.
+
+Another method might be \emph{forward selection}. We start with an intercept-only model, and then add to the existing model the predictor that reduces the RSS the most, and then keep doing this until a fixed number of predictors have been added. This is quite a nice approach, and is computationally quite fast even if $p$ is large. However, this method is greedy: if we make a mistake at the beginning, we cannot undo it later. In general, this method is rather unstable, and is not great from a practical perspective.
+
+\subsection{The Lasso estimator}
+The Lasso (Tibshirani, 1996) is a seemingly small modification of Ridge regression that solves our problems. It solves
+\[
+ (\hat{\mu}_\lambda^L, \hat{\beta}_\lambda^L) = \argmin_{(\mu, \beta) \in \R \times \R^p} \frac{1}{2n} \|Y - \mu 1 - X \beta\|_2^2 + \lambda \|\beta\|_1.
+\]
+The key difference is that we use an $\ell_1$ norm on $\beta$ rather than the $\ell_2$ norm,
+\[
+ \|\beta\|_1 = \sum_{k = 1}^p |\beta_k|.
+\]
+This makes it drastically different from Ridge regression. We will see that for $\lambda$ large, it will make all of the entries of $\beta$ exactly zero, as opposed to being very close to zero.
+
+We can compare this to best subset regression, where we replace $\|\beta\|_1$ with something of the form $\sum_{k = 1}^p \mathbf{1}_{\beta_k \not= 0}$. But the beautiful property of the Lasso is that the optimization problem is now continuous, and in fact convex. This allows us to solve it using standard convex optimization techniques.
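+
+In practice, the convex problem is often solved by coordinate descent with soft-thresholding. The following is a minimal sketch (the data and the choice of $\lambda$ are illustrative only); note that the fitted coefficient vector comes out genuinely sparse.
+```python
+import numpy as np
+
+def soft_threshold(z, t):
+    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
+
+rng = np.random.default_rng(3)
+n, p = 200, 50
+X = rng.normal(size=(n, p))
+X = X * np.sqrt(n) / np.linalg.norm(X, axis=0)  # columns scaled to l2-norm sqrt(n)
+beta0 = np.zeros(p)
+beta0[:3] = [3.0, -2.0, 1.5]                    # sparse truth: s = 3
+Y = X @ beta0 + 0.5 * rng.normal(size=n)
+
+lam = 0.2
+beta = np.zeros(p)
+for _ in range(100):                            # coordinate descent sweeps
+    for j in range(p):
+        r = Y - X @ beta + X[:, j] * beta[j]    # residual with coordinate j removed
+        # exact minimizer in coordinate j of (1/2n)||r - X_j b||^2 + lam |b|,
+        # using ||X_j||_2^2 = n
+        beta[j] = soft_threshold(X[:, j] @ r / n, lam)
+
+n_nonzero = int(np.sum(beta != 0.0))            # most coordinates are exactly zero
+```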
+
+Why is the $\ell_1$ norm so different from the $\ell_2$ norm? Just as in Ridge regression, we may center and scale $X$, and center $Y$, so that we can remove $\mu$ from the objective. Define
+\[
+ Q_\lambda(\beta) = \frac{1}{2n} \|Y - X \beta\|_2^2 + \lambda \|\beta\|_1.
+\]
+Any minimizer $\hat{\beta}^L_\lambda$ of $Q_\lambda(\beta)$ must also be a solution to
+\[
+ \min \|Y - X \beta\|_2^2 \text{ subject to } \|\beta\|_1 \leq \|\hat{\beta}_\lambda^L\|_1.
+\]
+Similarly, $\hat{\beta}_\lambda^R$ is a solution of
+\[
+ \min \|Y - X \beta\|_2^2 \text{ subject to } \|\beta\|_2 \leq \|\hat{\beta}_\lambda^R\|_2.
+\]
+So imagine we are given a value of $\|\hat{\beta}_\lambda^L\|_1$, and we try to solve the above optimization problem with pictures. The region $\|\beta\|_1 \leq \|\hat{\beta}_\lambda^L\|_1$ is given by a rotated square
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-1.5, 0) -- (1.5, 0);
+ \draw (0, -1.5) -- (0, 1.5);
+
+ \draw [mred, fill, fill opacity=0.4] (1, 0) -- (0, 1) -- (-1, 0) -- (0, -1) -- cycle;
+ \end{tikzpicture}
+\end{center}
+On the other hand, the minimum of $\|Y - X \beta\|_2^2$ is at $\hat{\beta}^{OLS}$, and the contours are ellipses centered around this point.
+\begin{center}
+ \begin{tikzpicture}
+ \clip (-1.5, -1.5) rectangle (3, 3);
+ \draw (-1.5, 0) -- (3, 0);
+ \draw (0, -1.5) -- (0, 3);
+
+ \begin{scope}[rotate=-5, shift={(2.4, 1.74)}]
+ \node [circ] {};
+ \draw [gray] ellipse (0.5 and 0.1);
+ \draw [gray, opacity=0.9] ellipse (1.5 and 0.3);
+ \draw [gray, opacity=0.8] ellipse (2.5 and 0.5);
+ \draw [gray, opacity=0.7] ellipse (3.5 and 0.7);
+ \draw [gray, opacity=0.6] ellipse (4.5 and 0.9);
+ \draw [gray, opacity=0.5] ellipse (5.5 and 1.1);
+ \draw [gray, opacity=0.4] ellipse (6.5 and 1.3);
+ \draw [gray, opacity=0.3] ellipse (7.5 and 1.5);
+ \draw [gray, opacity=0.2] ellipse (8.5 and 1.7);
+ \draw [gray, opacity=0.1] ellipse (9.5 and 1.9);
+ \end{scope}
+ \draw [mred, fill, fill opacity=0.4] (1, 0) -- (0, 1) -- (-1, 0) -- (0, -1) -- cycle;
+ \end{tikzpicture}
+\end{center}
+To solve the minimization problem, we should pick the smallest contour that hits the square, and pick the intersection point as our estimate of $\beta^0$. The point is that since the unit ball in the $\ell_1$-norm has corners, the intersection is likely to occur at a corner, and hence our estimate has a lot of zeroes. Compare this to the case of Ridge regression, where the constraint set is a ball:
+\begin{center}
+ \begin{tikzpicture}
+ \clip (-1.5, -1.5) rectangle (3, 3);
+ \draw (-1.5, 0) -- (3, 0);
+ \draw (0, -1.5) -- (0, 3);
+
+ \begin{scope}[rotate=-5, shift={(2.4, 1.74)}]
+ \node [circ] {};
+ \draw [gray] ellipse (0.5 and 0.1);
+ \draw [gray, opacity=0.9] ellipse (1.5 and 0.3);
+ \draw [gray, opacity=0.8] ellipse (2.5 and 0.5);
+ \draw [gray, opacity=0.7] ellipse (3.5 and 0.7);
+ \draw [gray, opacity=0.6] ellipse (4.5 and 0.9);
+ \draw [gray, opacity=0.5] ellipse (5.5 and 1.1);
+ \draw [gray, opacity=0.4] ellipse (6.5 and 1.3);
+ \draw [gray, opacity=0.3] ellipse (7.5 and 1.5);
+ \draw [gray, opacity=0.2] ellipse (8.5 and 1.7);
+ \draw [gray, opacity=0.1] ellipse (9.5 and 1.9);
+ \end{scope}
+ \draw [mred, fill, fill opacity=0.4] circle [radius=1];
+ \end{tikzpicture}
+\end{center}
+Generically, for Ridge regression, we would expect the solution to be non-zero everywhere.
+
+More generally, we can try to use the $\ell_q$ norm given by
+\[
+ \|\beta\|_q = \left(\sum_{k = 1}^p |\beta_k|^q\right)^{1/q}.
+\]
+We can plot their unit balls, and see that they look like
+
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[shift={(0, 0)}]
+ \draw (-1, 0) -- (1, 0);
+ \draw (0, -1) -- (0, 1);
+ \draw [domain=-0.8:0.8, mred, fill, fill opacity=0.4] plot [smooth] (\x, {(0.8^0.5 - (abs(\x))^0.5)^2});
+ \draw [domain=-0.8:0.8, mred, fill, fill opacity=0.4] plot [smooth] (\x, {-(0.8^0.5 - (abs(\x))^0.5)^2});
+
+ \node at (0, -1.5) {$q = 0.5$};
+ \end{scope}
+ \begin{scope}[shift={(3, 0)}]
+ \draw (-1, 0) -- (1, 0);
+ \draw (0, -1) -- (0, 1);
+ \draw [mred, fill, fill opacity=0.4] (0.8, 0) -- (0, 0.8) -- (-0.8, 0) -- (0, -0.8) -- cycle;
+
+ \node at (0, -1.5) {$q = 1$};
+ \end{scope}
+ \begin{scope}[shift={(6, 0)}]
+ \draw (-1, 0) -- (1, 0);
+ \draw (0, -1) -- (0, 1);
+ \draw [domain=-0.8:0.8, mred, fill, fill opacity=0.4] plot [smooth] (\x, {(0.8^1.5 - (abs(\x))^1.5)^(1/1.5)});
+ \draw [domain=-0.8:0.8, mred, fill, fill opacity=0.4] plot [smooth] (\x, {-(0.8^1.5 - (abs(\x))^1.5)^(1/1.5)});
+
+ \node at (0, -1.5) {$q = 1.5$};
+ \end{scope}
+ \begin{scope}[shift={(9, 0)}]
+ \draw (-1, 0) -- (1, 0);
+ \draw (0, -1) -- (0, 1);
+ \draw [mred, fill, fill opacity=0.4] circle [radius=0.8];
+
+ \node at (0, -1.5) {$q = 2$};
+ \end{scope}
+
+ \end{tikzpicture}
+\end{center}
+
+We see that $q = 1$ is the smallest value of $q$ for which there are corners, and also the largest value of $q$ for which the constraint set is still convex. Thus, $q = 1$ is the sweet spot for doing regression.
+
+But is this actually good? Suppose the columns of $X$ are scaled to have $\ell_2$ norm $\sqrt{n}$, and, after centering, we have a normal linear model
+\[
+ Y = X \beta^0 + \varepsilon- \bar{\varepsilon} 1,
+\]
+with $\varepsilon \sim N_n(0, \sigma^2 I)$.
+
+\begin{thm}
+ Let $\hat{\beta}$ be the Lasso solution with
+ \[
+ \lambda = A \sigma \sqrt{\frac{\log p}{n}}
+ \]
+ for some $A$. Then with probability $1 - 2p^{-(A^2/2 - 1)}$, we have
+ \[
+ \frac{1}{n} \|X \beta^0 - X \hat{\beta}\|_2^2 \leq 4A \sigma \sqrt{\frac{\log p}{n}} \|\beta^0\|_1.
+ \]
+\end{thm}
+Crucially, this is proportional to $\log p$, and not just $p$. On the other hand, unlike what we usually see for ordinary least squares, we have $\frac{1}{\sqrt{n}}$, and not $\frac{1}{n}$.
+
+We will later obtain better bounds when we make some assumptions on the design matrix.
+
+\begin{proof}
+ We don't have a closed form solution for $\hat{\beta}$, and in general one does not exist. So the only thing we can use is that it minimizes $Q_\lambda(\beta)$. Thus, by definition, we have
+ \[
+ \frac{1}{2n} \|Y - X \hat{\beta}\|_2^2 + \lambda \|\hat{\beta}\|_1 \leq \frac{1}{2n} \|Y - X \beta^0\|^2_2 + \lambda \|\beta^0\|_1.
+ \]
+ We know exactly what $Y$ is. It is $X \beta^0 + \varepsilon - \bar{\varepsilon} 1$. If we plug this in, we get
+ \[
+ \frac{1}{2n} \|X \beta^0 - X \hat{\beta}\|_2^2 \leq \frac{1}{n} \varepsilon^T X(\hat{\beta} - \beta^0) + \lambda \|\beta^0\|_1 - \lambda \|\hat{\beta}\|_1.
+ \]
+ Here we use the fact that $X$ is centered, and so is orthogonal to $1$.
+
+ Now H\"older tells us
+ \[
+ |\varepsilon^T X (\hat{\beta} - \beta^0)| \leq \|X^T \varepsilon \|_\infty \|\hat{\beta} - \beta^0\|_1.
+ \]
+ We'd like to bound $\|X^T \varepsilon \|_\infty$, but it can be arbitrarily large since $\varepsilon$ is a Gaussian. However, with high probability, it is small. Precisely, define
+ \[
+ \Omega = \left\{\frac{1}{n} \|X^T \varepsilon \|_\infty \leq \lambda\right\}.
+ \]
+ In a later lemma, we will show that $\P(\Omega) \geq 1 - 2 p^{-(A^2/2 - 1)}$. Assuming $\Omega$ holds, we have
+ \[
+ \frac{1}{2n} \|X \beta^0 - X \hat{\beta}\|^2_2 \leq \lambda \|\hat{\beta} - \beta^0\|_1 - \lambda \|\hat{\beta}\|_1 + \lambda \|\beta^0\|_1 \leq 2 \lambda \|\beta^0\|_1.\qedhere
+ \]
+\end{proof}
+
+\subsection{Basic concentration inequalities}
+We now have to prove the lemma we left out in the theorem just now. In this section, we are going to prove a bunch of useful inequalities that we will later use to prove bounds.
+
+Consider the event $\Omega$ as defined before. Then
+\begin{align*}
+ \P\left(\frac{1}{n} \|X^T \varepsilon\|_\infty > \lambda\right) &= \P\left(\bigcup_{j = 1}^p \left\{\frac{1}{n}|X_j^T \varepsilon| > \lambda\right\}\right)\\
+ &\leq \sum_{j = 1}^p \P\left(\frac{1}{n} |X_j^T \varepsilon| > \lambda\right).
+\end{align*}
+Now $\frac{1}{n} X_j^T \varepsilon \sim N(0, \frac{\sigma^2}{n})$. So we just have to bound tail probabilities of normal random variables.
+
+The simplest tail bound we have is Markov's inequality.
+\begin{lemma}[Markov's inequality]\index{Markov's inequality}
+ Let $W$ be a non-negative random variable. Then
+ \[
+ \P(W \geq t) \leq \frac{1}{t} \E W.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We have
+ \[
+ t \mathbf{1}_{W \geq t} \leq W.
+ \]
+ The result then follows from taking expectations and then dividing both sides by $t$.
+\end{proof}
+
+While this is a very simple bound, it is actually quite powerful, because it assumes almost nothing about $W$. This allows for the following trick: given any strictly increasing function $\varphi: \R \to (0, \infty)$ and any random variable $W$, we have
+\[
+ \P(W \geq t) = \P(\varphi(W) \geq \varphi(t))\leq \frac{\E \varphi(W)}{\varphi(t)}.
+\]
+So we get a bound on the tail probability of $W$ for \emph{any} such function. Even better, we can minimize the right hand side over a class of functions to get an even better bound.
+
+In particular, applying this with $\varphi (t) = e^{\alpha t}$ gives
+\begin{cor}[Chernoff bound]\index{Chernoff bound}
+ For any random variable $W$, we have
+ \[
+ \P(W \geq t) \leq \inf_{\alpha > 0} e^{-\alpha t} \E e^{\alpha W}.
+ \]
+\end{cor}
+Note that $\E e^{\alpha W}$ is just the \term{moment generating function} of $W$, which we can compute quite straightforwardly.
+
+We can immediately apply this when $W$ is a normal random variable, $W \sim N(0, \sigma^2)$. Then
+\[
+ \E e^{\alpha W} = e^{\alpha^2 \sigma^2/2}.
+\]
+So we have
+\[
+ \P(W \geq t) \leq \inf_{\alpha > 0}\exp \left(\frac{\alpha^2 \sigma^2}{2} - \alpha t\right) = e^{-t^2/(2\sigma^2)}.
+\]
+Observe that in fact this tail bound works for any random variable whose moment generating function is bounded above by $e^{\alpha^2 \sigma^2/2}$.
+
+\begin{defi}[Sub-Gaussian random variable]\index{sub-Gaussian random variable}
+ A random variable $W$ is \emph{sub-Gaussian} (with parameter $\sigma$) if
+ \[
+ \E e^{\alpha (W - \E W)} \leq e^{\alpha^2 \sigma^2/2}
+ \]
+ for all $\alpha \in \R$.
+\end{defi}
+
+\begin{cor}
+ Any sub-Gaussian random variable $W$ with parameter $\sigma$ satisfies
+ \[
+ \P(W \geq t) \leq e^{-t^2/(2\sigma^2)}.\tag*{$\square$}
+ \]
+\end{cor}
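+
+For a concrete feel of how conservative this bound is, here is a quick Monte Carlo comparison for a standard Gaussian (with arbitrary choices of $\sigma$ and $t$):
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+sigma, t = 1.0, 2.0
+W = rng.normal(0.0, sigma, size=1_000_000)
+
+empirical = np.mean(W >= t)                 # Monte Carlo estimate of P(W >= t)
+bound = np.exp(-t ** 2 / (2 * sigma ** 2))  # the sub-Gaussian tail bound
+```
+The true tail $\P(W \geq 2) \approx 0.023$ sits comfortably below the bound $e^{-2} \approx 0.135$.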
+
+In general, bounded random variables are sub-Gaussian.
+\begin{lemma}[Hoeffding's lemma]\index{Hoeffding's lemma}
+ If $W$ has mean zero and takes values in $[a, b]$, then $W$ is sub-Gaussian with parameter $\frac{b - a}{2}$.\fakeqed
+\end{lemma}
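+
+For example, a Rademacher variable (uniform on $\{-1, +1\}$) has mean zero and takes values in $[-1, 1]$, so Hoeffding's lemma says it is sub-Gaussian with parameter $1$. Its moment generating function is $\cosh \alpha$, and we can check the bound numerically:
+```python
+import numpy as np
+
+alphas = np.linspace(-5.0, 5.0, 1001)
+mgf = np.cosh(alphas)              # E e^{alpha W} = (e^alpha + e^{-alpha}) / 2
+bound = np.exp(alphas ** 2 / 2.0)  # e^{alpha^2 sigma^2 / 2} with sigma = 1
+```
+(Indeed $\cosh \alpha \leq e^{\alpha^2/2}$ holds exactly, by comparing Taylor coefficients: $1/(2k)! \leq 1/(2^k k!)$.)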
+
+Recall that the sum of two independent Gaussians is still a Gaussian. This continues to hold for sub-Gaussian random variables.
+
+\begin{prop}
+ Let $(W_i)_{i = 1}^n$ be independent mean-zero sub-Gaussian random variables with parameters $(\sigma_i)_{i = 1}^n$, and let $\gamma \in \R^n$. Then $\gamma^T W$ is sub-Gaussian with parameter
+ \[
+ \left(\sum (\gamma_i \sigma_i)^2\right)^{1/2}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \E \exp \left(\alpha \sum_{i = 1}^n \gamma_i W_i\right) &= \prod_{i = 1}^n \E \exp \left(\alpha \gamma_i W_i\right)\\
+ &\leq \prod_{i = 1}^n \exp\left(\frac{\alpha^2}{2} \gamma_i^2 \sigma_i^2\right)\\
+ &= \exp\left(\frac{\alpha^2}{2} \sum_{i = 1}^n \sigma_i^2 \gamma_i^2\right).\qedhere
+ \end{align*}
+\end{proof}
+
+We can now prove our bound for the Lasso, which in fact works for any sub-Gaussian random variable.
+\begin{lemma}
+ Suppose $(\varepsilon_i)_{i = 1}^n$ are independent, mean-zero sub-Gaussian with common parameter $\sigma$. Let
+ \[
+ \lambda = A \sigma \sqrt{\frac{\log p}{n}}.
+ \]
+ Let $X$ be a matrix whose columns all have norm $\sqrt{n}$. Then
+ \[
+ \P\left(\frac{1}{n} \|X^T \varepsilon\|_{\infty} \leq \lambda\right) \geq 1 - 2p^{-(A^2/2 - 1)}.
+ \]
+\end{lemma}
+In particular, this includes $\varepsilon \sim N_n(0, \sigma^2I)$.
+
+\begin{proof}
+ We have
+ \[
+ \P\left(\frac{1}{n}\|X^T \varepsilon\|_{\infty} > \lambda \right) \leq \sum_{j = 1}^p \P\left(\frac{1}{n} |X_j^T \varepsilon| > \lambda\right).
+ \]
+ But $\pm \frac{1}{n} X_j^T \varepsilon$ are both sub-Gaussian with parameter
+ \[
+ \sigma \left(\sum_i \left(\frac{X_{ij}}{n}\right)^2\right)^{1/2} = \frac{\sigma}{\sqrt{n}}.
+ \]
+ Then by our previous corollary, we get
+ \[
+ \sum_{j = 1}^p \P\left(\frac{1}{n} |X_j^T \varepsilon| > \lambda\right) \leq 2p \exp \left(- \frac{\lambda^2 n}{2\sigma^2}\right).
+ \]
+ Note that we have the factor of $2$ since we need to consider the two cases $\frac{1}{n} X_j^T \varepsilon > \lambda$ and $-\frac{1}{n} X_j^T \varepsilon > \lambda$.
+
+ Plugging in our expression of $\lambda$, we write the bound as
+ \[
+ 2p \exp\left(-\frac{1}{2} A^2 \log p\right) = 2p^{1 - A^2/2}.\qedhere
+ \]
+\end{proof}
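+
+The lemma is also easy to check by simulation. In this sketch the Gaussian design and the choice $A = 2$ (so the guaranteed probability is $1 - 2/p$) are arbitrary:
+```python
+import numpy as np
+
+rng = np.random.default_rng(5)
+n, p, sigma, A = 100, 200, 1.0, 2.0
+X = rng.normal(size=(n, p))
+X = X * np.sqrt(n) / np.linalg.norm(X, axis=0)  # columns scaled to l2-norm sqrt(n)
+lam = A * sigma * np.sqrt(np.log(p) / n)
+
+reps = 2000
+hits = 0
+for _ in range(reps):
+    eps = rng.normal(0.0, sigma, size=n)
+    if np.max(np.abs(X.T @ eps)) / n <= lam:
+        hits += 1
+
+coverage = hits / reps                          # empirical P(Omega)
+guarantee = 1 - 2 * p ** (-(A ** 2 / 2 - 1))    # = 1 - 2/p = 0.99
+```
+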
+This is all we need for our result on the Lasso. We are going to go a bit further into this topic of concentration inequalities, because we will need them later when we impose conditions on the design matrix. In particular, we would like to bound the tail probabilities of products.
+
+\begin{defi}[Bernstein's condition]\index{Bernstein's condition}
+ We say that a random variable $W$ satisfies Bernstein's condition with parameters $(\sigma, b)$, where $\sigma, b > 0$, if
+ \[
+ \E[|W - \E W|^k] \leq \frac{1}{2} k!\, \sigma^2 b^{k - 2}
+ \]
+ for $k = 2, 3, \ldots$.
+\end{defi}
+The point is that these bounds on the moments let us bound the moment generating function of $W$.
+
+\begin{prop}[Bernstein's inequality]\index{Bernstein's inequality}
+ Let $W_1, W_2, \ldots, W_n$ be independent random variables with $\E W_i = \mu$, and suppose each $W_i$ satisfies Bernstein's condition with parameters $(\sigma, b)$. Then
+ \begin{align*}
+ \E e^{\alpha(W_i - \mu)} &\leq \exp \left(\frac{\alpha^2 \sigma^2/2}{1 - b|\alpha|}\right)\text{ for all }|\alpha| < \frac{1}{b},\\
+ \P\left(\frac{1}{n} \sum_{i = 1}^n W_i - \mu \geq t\right) &\leq \exp \left(-\frac{nt^2}{2 (\sigma^2 + bt)}\right)\text{ for all } t \geq 0.
+ \end{align*}
+\end{prop}
+Note that for large $t$, the bound decays like $e^{-ct}$ rather than $e^{-ct^2}$.
+
+\begin{proof}
+ For the first part, we fix $i$ and write $W = W_i$. Let $|\alpha| < \frac{1}{b}$. Then
+ \begin{align*}
+ \E e^{\alpha(W_i - \mu)} &= \sum_{k = 0}^\infty \frac{\alpha^k}{k!} \E \left[(W_i - \mu)^k\right]\\
+ &\leq 1 + \frac{\sigma^2 \alpha^2}{2} \sum_{k = 2}^\infty |\alpha|^{k - 2} b^{k - 2}\\
+ &= 1 + \frac{\sigma^2 \alpha^2}{2} \frac{1}{1 - |\alpha|b}\\
+ &\leq \exp \left(\frac{\alpha^2 \sigma^2/2}{1 - b|\alpha|}\right).
+ \end{align*}
+ Then
+ \begin{align*}
+ \E \exp\left(\frac{1}{n} \sum_{i = 1}^n \alpha (W_i - \mu)\right) &= \prod_{i = 1}^n \E \exp\left(\frac{\alpha}{n} (W_i - \mu)\right)\\
+ &\leq \exp \left(n \frac{\left(\frac{\alpha}{n}\right)^2 \sigma^2/2}{1 - b \left|\frac{\alpha}{n}\right|}\right),
+ \end{align*}
+ assuming $\left|\frac{\alpha}{n}\right| < \frac{1}{b}$. So it follows that
+ \[
+ \P \left(\frac{1}{n} \sum_{i = 1}^n W_i - \mu \geq t\right) \leq e^{-\alpha t}\exp \left(n \frac{\left(\frac{\alpha}{n}\right)^2 \sigma^2/2}{1 - b \left|\frac{\alpha}{n}\right|}\right).
+ \]
+ Setting
+ \[
+ \frac{\alpha}{n} = \frac{t}{bt + \sigma^2} \in \left[0, \frac{1}{b}\right)
+ \]
+ gives the result.
+\end{proof}
+
+\begin{lemma}
+ Let $W, Z$ be mean-zero sub-Gaussian random variables with parameters $\sigma_W$ and $\sigma_Z$ respectively. Then $WZ$ satisfies Bernstein's condition with parameters $(8 \sigma_W \sigma_Z, 4 \sigma_W \sigma_Z)$.
+\end{lemma}
+
+\begin{proof}
+ For any random variable $Y$ (which we will later take to be $WZ$), for $k > 1$, we know
+ \begin{align*}
+ \E |Y - \E Y|^k &= 2^k \E \left| \frac{1}{2}Y - \frac{1}{2}\E Y\right|^k\\
+ &\leq 2^k \E \left| \frac{1}{2} |Y| + \frac{1}{2} |\E Y|\right|^k.
+ \end{align*}
+ Note that
+ \[
+ \left| \frac{1}{2} |Y| + \frac{1}{2} |\E Y|\right|^k \leq \frac{|Y|^k + |\E Y|^k}{2}
+ \]
+ by Jensen's inequality. Applying Jensen's inequality again, we have
+ \[
+ |\E Y|^k \leq \E |Y|^k.
+ \]
+ Putting the whole thing together, we have
+ \[
+ \E |Y - \E Y|^k \leq 2^k \E |Y|^k.
+ \]
+ Now take $Y = WZ$. Then
+ \[
+ \E |WZ - \E WZ|^k \leq 2^k \E |WZ|^k \leq 2^k \left(\E W^{2k}\right)^{1/2} (\E Z^{2k})^{1/2},
+ \]
+ by the Cauchy--Schwarz inequality.
+
+ We know that sub-Gaussians satisfy a bound on the tail probability. We can then use this to bound the moments of $W$ and $Z$. First note that
+ \[
+ W^{2k} = \int_0^\infty \mathbf{1}_{x < W^{2k}} \;\d x.
+ \]
+ Taking expectations of both sides, we get
+ \[
+ \E W^{2k} = \int_0^\infty \P(x < W^{2k})\;\d x.
+ \]
+ Since we have a tail bound on $W$ instead of $W^{2k}$, we substitute $x = t^{2k}$. Then $\d x = 2k t^{2k - 1}\;\d t$. So we get
+ \begin{align*}
+ \E W^{2k} &= 2k \int_0^\infty t^{2k - 1} \P(|W| > t) \;\d t\\
+ &\leq 4k \int_0^\infty t^{2k - 1} \exp \left(-\frac{t^2}{2 \sigma_W^2}\right)\;\d t,
+ \end{align*}
+ where again we have a factor of $2$ to account for both signs. We perform yet another substitution
+ \[
+ x = \frac{t^2}{2 \sigma_W^2},\quad \d x = \frac{t}{\sigma_W^2}\;\d t.
+ \]
+ Then we get
+ \[
+ \E W^{2k} \leq 4k \sigma_W^2 (2\sigma_W^2)^{k - 1} \int_0^\infty x^{k - 1} e^{-x}\;\d x = 2^{k + 1} k!\, \sigma_W^{2k}.
+ \]
+ Plugging this back in, we have
+ \begin{align*}
+ \E|WZ - \E WZ|^k &\leq 2^k \cdot 2^{k + 1} k!\, \sigma_W^k \sigma_Z^k\\
+ &= \frac{1}{2} k! 2^{2k + 2} \sigma_W^k \sigma_Z^k\\
+ &= \frac{1}{2} k! (8 \sigma_W \sigma_Z)^2 (4 \sigma_W \sigma_Z)^{k - 2}.\qedhere
+ \end{align*}
+\end{proof}
+
+\subsection{Convex analysis and optimization theory}
+We'll leave these estimates aside for a bit, and give more background on convex analysis and convex optimization. Recall the following definition:
+\begin{defi}[Convex set]\index{convex set}
+ A set $A \subseteq \R^d$ is convex if for any $x, y \in A$ and $t \in [0, 1]$, we have
+ \[
+ (1 - t)x + ty \in A.
+ \]
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[shift={(-2, 0)}]
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0, 0) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \draw (-0.5, -0.5) node [circ] {} -- (0.5, -0.5) node [circ] {};
+
+ \node at (0, -1.5) {non-convex};
+ \end{scope}
+
+ \begin{scope}[shift={(2, 0)}]
+ \draw [fill=gray!50!white] plot [smooth cycle, tension=0.7] coordinates {(0, 1) (-0.7, 0.7) (-1, 0) (-0.6, -1) (0.6, -1) (1, 0) (0.7, 0.7)};
+ \node at (0, -1.5) {convex};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+
+We are actually more interested in convex functions. We shall allow our functions to take the value $\infty$, so let us define $\bar{\R} = \R \cup \{\infty\}$. The point is that if we want our function to be defined on $[a, b]$, then it is convenient to extend it to be defined on all of $\R$ by setting the function to be $\infty$ outside of $[a, b]$.
+
+\begin{defi}[Convex function]\index{convex function}
+ A function $f: \R^d \to \bar{\R}$ is \emph{convex} iff
+ \[
+ f((1 - t)x + ty) \leq (1 - t) f(x) + t f(y)
+ \]
+ for all $x, y \in \R^d$ and $t \in (0, 1)$. Moreover, we require that $f(x) < \infty$ for at least one $x$.
+
+ We say it is \emph{strictly convex} if the inequality is strict for all $x, y$ and $t \in (0, 1)$.
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0);
+ \draw (-1.3, 0.1) -- (-1.3, -0.1) node [below] {$x$};
+ \draw (1.3, 0.1) -- (1.3, -0.1) node [below] {$y$};
+ \draw (-1.7, 2) parabola bend (-.2, 1) (1.7, 3.3);
+ \draw [dashed] (-1.3, 0) -- (-1.3, 1.53) node [circ] {};
+ \draw [dashed] (1.3, 0) -- (1.3, 2.42) node [circ] {};
+ \draw (-1.3, 1.53) -- (1.3, 2.42);
+ \draw [dashed] (0, 0) node [below] {\tiny $(1 - t) x + t y$} -- (0, 1.975) node [above] {\tiny$(1 - t) f(x) + t f(y)\quad\quad\quad\quad\quad\quad$} node [circ] {};
+ \end{tikzpicture}
+\end{center}
+
+\begin{defi}[Domain]\index{domain}
+ Define the \emph{domain} of a function $f: \R^d \to \bar{\R}$ to be
+ \[
+ \dom f = \{x : f(x) < \infty\}.
+ \]
+\end{defi}
+One sees that the domain of a convex function is always convex.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Let $f_1, \ldots, f_m: \R^d \to \bar{\R}$ be convex with $\dom f_1 \cap \cdots \cap \dom f_m \not = \emptyset$, and let $c_1, \ldots, c_m \geq 0$. Then $c_1 f_1 + \cdots + c_m f_m$ is a convex function.
+ \item If $f: \R^d \to \R$ is twice continuously differentiable, then
+ \begin{enumerate}
+ \item $f$ is convex iff its Hessian is positive semi-definite everywhere.
+ \item $f$ is strictly convex if its Hessian is positive definite everywhere.\fakeqed
+ \end{enumerate}
+ \end{enumerate}
+\end{prop}
+Note that having a positive definite Hessian is not necessary for strict convexity, e.g.\ $x^4$ is strictly convex but has vanishing Hessian at $0$.
+
+Now consider a constrained optimization problem
+\begin{center}
+ minimize $f(x)$ subject to $g(x) = 0$
+\end{center}
+where $x \in \R^d$ and $g: \R^d \to \R^b$. The \term{Lagrangian} for this problem is
+\[
+ L(x, \theta) = f(x) + \theta^T g(x).
+\]
+Why is this helpful? Suppose $c^*$ is the minimum value of $f$ subject to the constraint. Then note that for any $\theta$, we have
+\[
+ \inf_{x \in \R^d} L(x, \theta) \leq \inf_{x \in \R^d, g(x) = 0} L(x, \theta) = \inf_{x \in \R^d: g(x) = 0} f(x) = c^*.
+\]
+Thus, if we can find some $\theta^*, x^*$ such that $x^*$ minimizes $L(x, \theta^*)$ and $g(x^*) = 0$, then this is indeed the optimal solution.
+
+This gives us a method to solve the optimization problem --- for each fixed $\theta$, solve the unconstrained optimization problem $\argmin_x L(x, \theta)$. If we are doing this analytically, then we would have a formula for $x$ in terms of $\theta$. We then seek a $\theta$ such that $g(x) = 0$ holds.
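The two-step recipe can be made concrete on a toy problem (a hypothetical example, not from the notes): minimizing $\|x\|_2^2$ subject to a single linear constraint, where both steps can be done analytically.

```python
import numpy as np

# Toy problem (hypothetical, not from the notes): minimize f(x) = ||x||^2
# subject to g(x) = a^T x - b = 0, via the Lagrangian
#   L(x, theta) = ||x||^2 + theta * (a^T x - b).
a = np.array([1.0, 2.0, 2.0])
b = 3.0

# Step 1: for fixed theta, minimize L over x analytically:
#   grad_x L = 2x + theta * a = 0  =>  x(theta) = -theta * a / 2.
def x_of_theta(theta):
    return -theta * a / 2

# Step 2: choose theta so that the constraint g(x(theta)) = 0 holds:
#   a^T x(theta) = -theta * ||a||^2 / 2 = b  =>  theta* = -2b / ||a||^2.
theta_star = -2 * b / (a @ a)
x_star = x_of_theta(theta_star)

print(x_star)          # [1/3, 2/3, 2/3], the projection of 0 onto a^T x = b
print(a @ x_star - b)  # constraint residual, essentially 0
```

The constrained minimizer recovered this way is exactly the orthogonal projection of the origin onto the constraint hyperplane, as expected.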
+
+\subsubsection*{Subgradients}
+Usually, when we have a function to optimize, we take its derivative and set it to zero. This works well if our function is actually differentiable. However, the $\ell_1$ norm is not a differentiable function, since $|x|$ is not differentiable at $0$. This is not some exotic case we may hope to avoid most of the time --- when solving the Lasso, we actively want our solutions to have zeroes, so we really want to get to these non-differentiable points.
+
+Thus, we seek some generalized notion of derivative that works on functions that are not differentiable.
+
+\begin{defi}[Subgradient]\index{subgradient}
+ A vector $v \in \R^d$ is a \emph{subgradient} of a convex function $f$ at $x$ if $f(y) \geq f(x) + v^T(y - x)$ for all $y \in \R^d$.
+
+ The set of subgradients of $f$ at $x$ is denoted $\partial f(x)$, and is called the \term{subdifferential}.
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, -0.5) -- (1.5, -0.5);
+ \draw (-1, 1.5) parabola bend (0, 0) (1, 1.5) node [above] {\small $f$};
+ \node [circ] at (0.4, 0.25) {};
+ \draw [mred] (-0.7, -1) -- (1.5, 1.5);
+
+ \begin{scope}[shift={(4, 0)}]
+ \draw [->] (-1.5, -0.5) -- (1.5, -0.5);
+
+ \draw [opacity=0.8, mred!0!mblue] (-1, 0.6) -- (1, -0.6);
+ \draw [opacity=0.8, mred!16!mblue] (-1, 0.4) -- (1, -0.4);
+ \draw [opacity=0.8, mred!33!mblue] (-1, 0.2) -- (1, -0.2);
+ \draw [opacity=0.8, mred!50!mblue] (-1, 0) -- (1, 0);
+ \draw [opacity=0.8, mred!67!mblue] (-1, -0.2) -- (1, 0.2);
+ \draw [opacity=0.8, mred!84!mblue] (-1, -0.4) -- (1, 0.4);
+ \draw [opacity=0.8, mred!100!mblue] (-1, -0.6) -- (1, 0.6);
+
+ \draw (-1, 1.5) -- (0, 0) -- (1, 1.5) node [above] {\small $f$};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+
+This is indeed a generalization of the derivative, since
+\begin{prop}
+ Let $f$ be convex and differentiable at $x \in \mathrm{int}(\dom f)$. Then $\partial f(x) = \{\nabla f(x)\}$.\fakeqed
+\end{prop}
+
+The following properties are immediate from definition.
+\begin{prop}
+ Suppose $f$ and $g$ are convex with $\mathrm{int}(\dom f) \cap \mathrm{int} (\dom g) \not= \emptyset$, and $\alpha > 0$. Then
+ \begin{align*}
+ \partial (\alpha f)(x) &= \alpha \partial f(x) = \{\alpha v: v \in \partial f(x)\}\\
+ \partial (f + g)(x) &= \partial g(x) + \partial f(x).\fakeqed
+ \end{align*}
+\end{prop}
+
+The condition (for convex differentiable functions) that ``$x$ is a minimum iff $f'(x) = 0$'' now becomes
+\begin{prop}
+ If $f$ is convex, then
+ \[
+ x^* \in \argmin_{x \in \R^d} f(x) \Leftrightarrow 0 \in \partial f(x^*).
+ \]
+\end{prop}
+
+\begin{proof}
+ Both sides are equivalent to the requirement that $f(y) \geq f(x^*)$ for all $y$.
+\end{proof}
+
+We are interested in applying this to the Lasso. So we want to compute the subdifferential of the $\ell_1$ norm. Let's first introduce some notation.
+
+\begin{notation}\index{$x_A$}\index{$x_{-j}$}
+ For $x \in \R^d$ and $A \subseteq \{1, \ldots, d\}$, we write $x_A$ for the sub-vector of $x$ formed by the components of $x$ indexed by $A$. We write $x_{-j} = x_{\{j\}^c} = x_{\{1, \ldots, d\} \setminus \{j\}}$. Similarly, we write $x_{-jk} = x_{\{j, k\}^c}$, etc.
+
+ We write\index{$\sgn$}
+ \[
+ \sgn(x_i) =
+ \begin{cases}
+ -1 & x_i < 0\\
+ 1 & x_i > 0\\
+ 0 & x_i = 0
+ \end{cases},
+ \]
+ and $\sgn(x) = (\sgn(x_1), \ldots, \sgn(x_d))^T$.
+\end{notation}
+
+First note that $\|\ph\|_1$ is convex, as it is a norm.
+
+\begin{prop}
+ For $x \in \R^d$ and $A = \{j: x_j \not= 0\}$, we have
+ \[
+ \partial \|x\|_1 = \{v \in \R^d: \|v\|_\infty \leq 1, v_A = \sgn (x_A)\}.
+ \]
+\end{prop}
+
+\begin{proof}
+ It suffices to look at the subdifferential of the absolute value function, and then add them up.
+
+ For $j = 1, \ldots, d$, we define $g_j: \R^d \to \R$ by sending $x$ to $|x_j|$. If $x_j \not= 0$, then $g_j$ is differentiable at $x$, and so we know $\partial g_j (x) = \{\sgn(x_j) e_j\}$, with $e_j$ the $j$th standard basis vector.
+
+ When $x_j = 0$, if $v \in \partial g_j(x)$, then
+ \[
+ g_j(y) \geq g_j(x) + v^T(y - x).
+ \]
+ So
+ \[
+ |y_j| \geq v^T (y - x).
+ \]
+ We claim this holds iff $v_j \in [-1, 1]$ and $v_{-j} = 0$. The $\Leftarrow$ direction is an immediate calculation, and to show $\Rightarrow$, we pick $y_{-j} = v_{-j} + x_{-j}$, and $y_j = 0$. Then we have
+ \[
+ 0 \geq v_{-j}^T v_{-j}.
+ \]
+ So we know that $v_{-j} = 0$. Once we know this, the condition says
+ \[
+ |y_j| \geq v_j y_j
+ \]
+ for all $y_j$. This is then true iff $v_j \in [-1, 1]$. Forming the set sum of the $\partial g_j(x)$ gives the result.
+\end{proof}
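The proposition can be sanity-checked numerically. The sketch below (hypothetical code with made-up numbers) tests the defining inequality $f(y) \geq f(x) + v^T(y - x)$ for $f = \|\cdot\|_1$ at a point with a zero coordinate, both for a valid subgradient and for a vector violating $\|v\|_\infty \leq 1$ off the support.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_subgradient(v, x, n_trials=2000):
    """Numerically test f(y) >= f(x) + v^T (y - x) for f the l1 norm."""
    fx = np.abs(x).sum()
    for _ in range(n_trials):
        y = 3 * rng.standard_normal(x.shape)
        if np.abs(y).sum() < fx + v @ (y - x) - 1e-12:
            return False       # found a y violating the inequality
    return True

x = np.array([1.5, 0.0, -2.0])
v_good = np.array([1.0, 0.3, -1.0])  # sgn(x_j) on the support, |v_j| <= 1 at 0
print(is_subgradient(v_good, x))     # True

# A v with |v_2| > 1 at the zero coordinate fails; here is an explicit witness y.
v_bad = np.array([1.0, 1.5, -1.0])
y = np.array([1.5, 1.0, -2.0])
print(np.abs(y).sum() >= np.abs(x).sum() + v_bad @ (y - x))  # False
```

The witness $y$ is found exactly as in the proof: match $x$ on the support and perturb the zero coordinate in the direction of the violation.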
+
+\subsection{Properties of Lasso solutions}
+Let's now apply this to the Lasso. Recall that the Lasso objective was
+\[
+ Q_\lambda (\beta) = \frac{1}{2n} \|Y - X \beta\|_2^2 + \lambda \|\beta\|_1.
+\]
+We know this is a convex function in $\beta$. So we know $\hat{\beta}_\lambda^L$ is a minimizer of $Q_\lambda(\beta)$ iff $0 \in \partial Q_\lambda(\hat{\beta}_\lambda^L)$.
+
+Let's subdifferentiate $Q_\lambda$ and see what this amounts to. We have
+\[
+ \partial Q_\lambda(\hat{\beta}) = \left\{-\frac{1}{n}X^T(Y - X \hat{\beta})\right\} + \lambda \left\{\hat{\nu} \in \R^p: \|\hat{\nu}\|_\infty \leq 1, \hat{\nu}_{\hat{S}_\lambda^L} = \sgn (\hat{\beta}_{\lambda, \hat{S}_\lambda^L}^L)\right\},
+\]
+where $\hat{S}_\lambda^L = \{j : \hat{\beta}_{\lambda, j}^L \not= 0\}$.
+
+Thus, $0 \in \partial Q_\lambda (\hat{\beta}_\lambda)$ is equivalent to
+\[
+ \hat{\nu} = \frac{1}{\lambda} \cdot \frac{1}{n} X^T (Y - X \hat{\beta}_\lambda)
+\]
+satisfying
+\[
+ \|\hat{\nu}\|_{\infty} \leq 1,\quad \hat{\nu}_{\hat{S}_\lambda^L} = \sgn (\hat{\beta}_{\lambda, \hat{S}^L_\lambda}^L).
+\]
+These are known as the \term{KKT conditions} for the Lasso.
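The KKT conditions can be verified numerically in a case where the Lasso solution is known in closed form. The hypothetical check below uses an orthonormal design, $X^T X / n = I$, where the Lasso solution is soft-thresholding of $X^T Y / n$ (a standard fact, not derived in the notes).

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding, the closed-form Lasso solution when X^T X / n = I."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

# Hypothetical orthonormal design: columns of norm sqrt(n), so X^T X / n = I.
n = 4
X = np.sqrt(n) * np.eye(n)
Y = np.array([2.0, -0.5, 0.1, -3.0])
lam = 0.4

beta_hat = soft(X.T @ Y / n, lam)

# KKT conditions: nu = X^T (Y - X beta) / (n lam) must have sup-norm <= 1
# and must equal sgn(beta_hat) on the support of beta_hat.
nu = X.T @ (Y - X @ beta_hat) / (n * lam)
support = beta_hat != 0
print(np.max(np.abs(nu)) <= 1 + 1e-9)                        # True
print(np.allclose(nu[support], np.sign(beta_hat[support])))  # True
```

On the support, $\hat{\nu}_k$ sits exactly at $\pm 1$; on the discarded coordinates, it lies strictly inside $(-1, 1)$.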
+
+In principle, there could be several solutions to the Lasso. However, at least the fitted values are always unique.
+
+\begin{prop}
+ $X \hat{\beta}_\lambda^L$ is unique.
+\end{prop}
+\begin{proof}
+ Fix $\lambda > 0$ and stop writing it. Suppose $\hat{\beta}^{(1)}$ and $\hat{\beta}^{(2)}$ are two Lasso solutions at $\lambda$. Then
+ \[
+ Q(\hat{\beta}^{(1)}) = Q(\hat{\beta}^{(2)}) = c^*.
+ \]
+ As $Q$ is convex, we know
+ \[
+ c^* = Q\left(\frac{1}{2} \hat{\beta}^{(1)} + \frac{1}{2} \hat{\beta}^{(2)}\right) \leq \frac{1}{2} Q(\hat{\beta}^{(1)}) + \frac{1}{2} Q(\hat{\beta}^{(2)}) = c^*.
+ \]
+ So $\frac{1}{2} \hat{\beta}^{(1)} + \frac{1}{2} \hat{\beta}^{(2)}$ is also a minimizer.
+
+ Since the two terms in $Q(\beta)$ are individually convex, it must be the case that
+ \begin{align*}
+ \norm{\frac{1}{2} (Y - X\hat{\beta}^{(1)}) + \frac{1}{2} (Y - X \hat{\beta}^{(2)})}_2^2 &= \frac{1}{2} \norm{Y - X \hat{\beta}^{(1)}}_2^2 + \frac{1}{2} \norm{Y - X \hat{\beta}^{(2)}}_2^2\\
+ \norm{\frac{1}{2} (\hat{\beta}^{(1)} + \hat{\beta}^{(2)})}_1 &= \frac{1}{2} \|\hat{\beta}^{(1)}\|_1 + \frac{1}{2} \|\hat{\beta}^{(2)}\|_1.
+ \end{align*}
+ Moreover, since $\|\ph\|_2^2$ is strictly convex, we can have equality only if $X \hat{\beta}^{(1)} = X \hat{\beta}^{(2)}$. So we are done.
+\end{proof}
+
+\begin{defi}[Equicorrelation set]\index{equicorrelation set}
+ Define the \emph{equicorrelation set} $\hat{E}_\lambda$ to be the set of $k$ such that
+ \[
+ \frac{1}{n} |X_k^T(Y - X \hat{\beta}_\lambda^L)| = \lambda,
+ \]
+ or equivalently, the $k$ with $\hat{\nu}_k = \pm 1$, which is well-defined since it depends only on the fitted values.
+\end{defi}
+By the KKT conditions, $\hat{E}_\lambda$ contains the set of non-zeroes of any Lasso solution, but may be strictly bigger than that.
+
+Note that if $\rk(X_{\hat{E}_\lambda}) = |\hat{E}_\lambda|$, then the Lasso solution must be unique since
+\[
+ X_{\hat{E}_\lambda} (\hat{\beta}^{(1)} - \hat{\beta}^{(2)}) = 0.
+\]
+So $\hat{\beta}^{(1)} = \hat{\beta}^{(2)}$.
+
+\subsection{Variable selection}
+So we have seen that the Lasso picks out some ``important'' variables and discards the rest. How well does it do the job?
+
+For simplicity, consider a noiseless linear model
+\[
+ Y = X \beta^0.
+\]
+Our objective is to find the set
+\[
+ S = \{k: \beta_k^0 \not= 0\}.
+\]
+We may wlog assume $S = \{1, \ldots, s\}$, and $N = \{1, \ldots, p\} \setminus S$ (as usual, $X$ is $n \times p$). We further assume that $\rk (X_S) = s$.
+
+In general, it is difficult to recover $S$ even if we know $|S|$. Under a certain condition, the Lasso can do the job correctly. This condition is dictated by the $\ell_\infty$ norm of the quantity
+\[
+ \Delta = X_N^T X_S (X_S^T X_S)^{-1} \sgn (\beta^0_S).
+\]
+We can understand this a bit as follows --- the $k$th entry of $\Delta$ is the dot product of $\sgn (\beta^0_S)$ with $(X_S^T X_S)^{-1} X_S^T X_k$, which is the coefficient vector we would obtain if we regressed $X_k$ on $X_S$. If this is large, then $X_k$ looks highly correlated with the columns of $X_S$, and so it would be difficult to determine whether $k$ belongs to $S$ or not.
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item If $\|\Delta\|_\infty \leq 1$, or equivalently
+ \[
+ \max_{k \in N} |\sgn(\beta^0_S)^T (X^T_S X_S)^{-1} X^T_S X_k| \leq 1,
+ \]
+ and moreover
+ \[
+ |\beta^0_k| > \lambda \left|\left(\sgn(\beta^0_S)^T \left(\frac{1}{n} X_S^T X_S\right)^{-1}\right)_k\right|
+ \]
+ for all $k \in S$, then there exists a Lasso solution $\hat{\beta}^L_\lambda$ with $\sgn(\hat{\beta}_\lambda^L) = \sgn(\beta^0)$.
+ \item If there exists a Lasso solution with $\sgn (\hat{\beta}_\lambda^L) = \sgn(\beta^0)$, then $\|\Delta\|_\infty \leq 1$.
+ \end{enumerate}
+\end{thm}
+We are going to make heavy use of the KKT conditions.
+\begin{proof}
+ Write $\hat{\beta} = \hat{\beta}^L_\lambda$, and write $\hat{S} = \{k: \hat{\beta}_k \not= 0\}$. Then the KKT conditions are that
+ \[
+ \frac{1}{n} X^T X (\beta^0 - \hat{\beta}) = \lambda \hat{\nu},
+ \]
+ where $\|\hat{\nu}\|_{\infty} \leq 1$ and $\hat{\nu}_{\hat{S}} = \sgn (\hat{\beta}_{\hat{S}})$.
+
+ We expand this to say
+ \[
+ \frac{1}{n}
+ \begin{pmatrix}
+ X_S^T X_S & X_S^T X_N\\
+ X_N^T X_S & X_N^T X_N
+ \end{pmatrix}
+ \begin{pmatrix}
+ \beta^0_S - \hat{\beta}_S\\
+ -\hat{\beta}_N
+ \end{pmatrix} = \lambda
+ \begin{pmatrix}
+ \hat{\nu}_S\\
+ \hat{\nu}_N
+ \end{pmatrix}.
+ \]
+ Call the top and bottom equations (1) and (2) respectively.
+
+ It is easier to prove (ii) first. If there is such a solution, then $\hat{\beta}_N = 0$. So from (1), we must have
+ \[
+ \frac{1}{n} X_S^T X_S (\beta^0_S - \hat{\beta}_S) = \lambda \hat{\nu}_S.
+ \]
+ Inverting $\frac{1}{n} X_S^T X_S$, we learn that
+ \[
+ \beta^0_S - \hat{\beta}_S = \lambda \left(\frac{1}{n} X_S^T X_S\right)^{-1} \sgn (\beta^0_S).
+ \]
+ Substitute this into (2) to get
+ \[
+ \lambda \frac{1}{n} X_N^T X_S \left(\frac{1}{n} X_S^T X_S\right)^{-1} \sgn (\beta^0_S) = \lambda \hat{\nu}_N.
+ \]
+ By the KKT conditions, we know $\|\hat{\nu}_N\|_\infty \leq 1$, and the LHS is exactly $\lambda\Delta$.
+
+ To prove (i), we need to exhibit a pair $(\hat{\beta}, \hat{\nu})$ such that $\hat{\beta}$ agrees in sign with $\beta^0$ and the equations (1) and (2) are satisfied. In particular, $\hat{\beta}_N = 0$. So we try
+ \begin{align*}
+ (\hat{\beta}_S, \hat{\nu}_S) &= \left(\beta^0_S - \lambda \left(\frac{1}{n} X_S^T X_S\right)^{-1} \sgn (\beta^0_S), \sgn (\beta^0_S)\right)\\
+ (\hat{\beta}_N, \hat{\nu}_N) &= (0, \Delta).
+ \end{align*}
+ This is by construction a solution. We then only need to check that
+ \[
+ \sgn (\hat{\beta}_S) = \sgn (\beta^0_S),
+ \]
+ which follows from the second condition.
+\end{proof}
+
+\subsubsection*{Prediction and estimation}
+We now consider the question of how well the Lasso performs as a regression method. Consider the model
+\[
+ Y = X \beta^0 + \varepsilon - \bar{\varepsilon} 1,
+\]
+where the $\varepsilon_i$ are independent and have common sub-Gaussian parameter $\sigma$. Let $S, s, N$ be as before.
+
+As before, the Lasso doesn't always behave well, and whether or not it does is controlled by the compatibility factor.
+
+\begin{defi}[Compatibility factor]\index{compatibility factor}
+ Define the \emph{compatibility factor} to be
+ \[
+ \phi^2 = \inf_{\substack{\beta \in \R^p\\ \|\beta_N\|_1 \leq 3 \|\beta_S\|_1\\ \beta_S \not= 0}} \frac{\frac{1}{n} \|X \beta\|_2^2}{ \frac{1}{s} \|\beta_S\|^2_1} = \inf_{\substack{\beta \in \R^p\\ \|\beta_S\|_1 = 1\\ \|\beta_N\|_1 \leq 3}} \frac{s}{n} \|X_S \beta_S - X_N \beta_N\|_2^2.
+ \]
+\end{defi}
+Note that we are free to use a minus sign inside the norm since the problem is symmetric in $\beta_N \leftrightarrow -\beta_N$.
+
+In some sense, this $\phi$ measures how well we can approximate $X_S \beta_S$ just with the noise variables.
+\begin{defi}[Compatibility condition]\index{compatibility condition}
+ The \emph{compatibility condition} is $\phi^2 > 0$.
+\end{defi}
+
+Note that if $\hat{\Sigma} = \frac{1}{n} X^T X$ has minimum eigenvalue $c_{min} > 0$, then we have $\phi^2 \geq c_{min}$. Indeed,
+\[
+ \|\beta_S\|_1 = \sgn (\beta_S)^T \beta_S \leq \sqrt{s} \|\beta_S\|_2 \leq \sqrt{s} \|\beta\|_2,
+\]
+and so
+\[
+ \phi^2 \geq \inf_{\beta \not= 0} \frac{\frac{1}{n} \|X \beta\|_2^2}{\|\beta\|_2^2} = c_{min}.
+\]
+Of course, in the high-dimensional setting we don't expect the minimum eigenvalue of $\hat{\Sigma}$ to be positive, but since the infimum in the definition of $\phi^2$ is restricted to a cone, we can still hope for a positive value of $\phi^2$.
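The bound $\phi^2 \geq c_{min}$ can be checked numerically. The following sketch (a hypothetical simulation with made-up dimensions, using $n > p$ so that $c_{min} > 0$) samples vectors from the cone $\|\beta_N\|_1 \leq 3 \|\beta_S\|_1$ and verifies that the compatibility ratio never falls below the minimum eigenvalue of $\hat{\Sigma}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulation: with n > p, the minimum eigenvalue of
# Sigma_hat = X^T X / n is positive, and every vector in the cone
# ||beta_N||_1 <= 3 ||beta_S||_1 has compatibility ratio >= c_min.
n, p, s = 50, 10, 3                # take S = {1, ..., s} wlog
X = rng.standard_normal((n, p))
Sigma_hat = X.T @ X / n
c_min = np.linalg.eigvalsh(Sigma_hat)[0]   # eigenvalues in ascending order

ratios = []
for _ in range(5000):
    beta = rng.standard_normal(p)
    norm_S, norm_N = np.abs(beta[:s]).sum(), np.abs(beta[s:]).sum()
    if norm_N > 3 * norm_S:                # rescale the N-block into the cone
        beta[s:] *= 3 * norm_S / norm_N
    ratios.append((beta @ Sigma_hat @ beta) / (norm_S ** 2 / s))

print(c_min > 0, min(ratios) >= c_min - 1e-9)   # True True
```

The inequality holds for every sampled vector because $\|\beta_S\|_1^2 \leq s \|\beta\|_2^2$, exactly the chain of inequalities used in the text.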
+
+\begin{thm}
+ Assume $\phi^2 > 0$, and let $\hat{\beta}$ be the Lasso solution with
+ \[
+ \lambda = A \sigma \sqrt{\log p/n}.
+ \]
+ Then with probability at least $1 - 2p^{-(A^2/8 - 1)}$, we have
+ \[
+ \frac{1}{n} \|X (\beta^0 - \hat{\beta})\|_2^2 + \lambda\|\hat{\beta} - \beta^0\|_1 \leq \frac{16 \lambda^2 s}{\phi^2} = \frac{16 A^2 \log p}{\phi^2} \frac{s \sigma^2}{n}.
+ \]
+\end{thm}
+This is actually two bounds: it simultaneously bounds the error in the fitted values and the $\ell_1$ estimation error $\|\hat{\beta} - \beta^0\|_1$.
+
+Recall that in our previous bound, we had a bound of $\sim \frac{1}{\sqrt{n}}$, and now we have $\sim \frac{1}{n}$. Note also that $\frac{s \sigma^2}{n}$ is the error we would get if we magically knew which were the non-zero variables and did ordinary least squares on those variables.
+
+This also tells us that if $\beta^0$ has a component that is large, then $\hat{\beta}$ must be non-zero in that component as well. So while the Lasso cannot predict exactly which variables are non-zero, we can at least get the important ones.
+
+\begin{proof}
+ Start with the basic inequality $Q_\lambda(\hat{\beta}) \leq Q_\lambda(\beta^0)$, which gives us
+ \[
+ \frac{1}{2n} \|X (\beta^0 - \hat{\beta})\|_2^2 + \lambda \|\hat{\beta}\|_1 \leq \frac{1}{n} \varepsilon^T X(\hat{\beta} - \beta^0) + \lambda \|\beta^0\|_1.
+ \]
+ We work on the event
+ \[
+ \Omega = \left\{\frac{1}{n} \|X^T \varepsilon \|_\infty \leq \frac{1}{2} \lambda\right\},
+ \]
+ where after applying H\"older's inequality, we get
+ \[
+ \frac{1}{n} \|X(\beta^0 - \hat{\beta})\|_2^2 + 2 \lambda \|\hat{\beta}\|_1 \leq \lambda \|\hat{\beta} - \beta^0\|_1 + 2 \lambda \|\beta^0\|_1.
+ \]
+ We can move $2 \lambda \|\hat{\beta}\|_1$ to the other side, and applying the triangle inequality, we have
+ \[
+ \frac{1}{n} \|X(\hat{\beta} - \beta^0)\|_2^2 \leq 3 \lambda \|\hat{\beta} - \beta^0\|_1.
+ \]
+ If we manage to bound the RHS from above as well, so that
+ \[
+ 3\lambda \|\hat{\beta} - \beta^0\|_1 \leq c \lambda \frac{1}{\sqrt{n}} \|X (\hat{\beta} - \beta^0)\|_2
+ \]
+ for some $c$, then we obtain the bound
+ \[
+ \frac{1}{n} \|X (\hat{\beta} - \beta^0)\|_2^2 \leq c^2 \lambda^2.
+ \]
+ Plugging this back into the second bound, we also have
+ \[
+ 3 \lambda \|\hat{\beta} - \beta^0\|_1 \leq c^2 \lambda^2.
+ \]
+ To obtain these bounds, we want to apply the definition of $\phi^2$ to $\hat{\beta} - \beta^0$. We thus need to show that $\hat{\beta} - \beta^0$ satisfies the cone condition appearing in the infimum defining $\phi^2$.
+
+ Write
+ \[
+ a = \frac{1}{n\lambda} \|X (\hat{\beta} - \beta^0)\|_2^2.
+ \]
+ Then we have
+ \[
+ a + 2 (\|\hat{\beta}_N\|_1 + \|\hat{\beta}_S\|_1) \leq \|\hat{\beta}_S - \beta^0_S \|_1 + \|\hat{\beta}_N\|_1 + 2 \|\beta^0_S\|_1.
+ \]
+ Simplifying, we obtain
+ \[
+ a + \|\hat{\beta}_N\|_1 \leq \|\hat{\beta}_S - \beta^0_S\|_1 + 2 \|\beta^0_S\|_1 - 2 \|\hat{\beta}_S\|_1.
+ \]
+ Using the triangle inequality, we write this as
+ \[
+ a + \|\hat{\beta}_N - \beta^0_N\|_1 \leq 3 \|\hat{\beta}_S - \beta_S^0\|_1.
+ \]
+ So we immediately know we can apply the compatibility condition, which gives us
+ \[
+ \phi^2 \leq \frac{\frac{1}{n} \|X (\hat{\beta} - \beta^0)\|_2^2}{\frac{1}{s} \|\hat{\beta}_S - \beta^0_S\|_1^2}.
+ \]
+ Also, we have
+ \[
+ \frac{1}{n} \|X(\hat{\beta} -\beta^0)\|_2^2 + \lambda \|\hat{\beta} - \beta^0\|_1 \leq 4 \lambda\|\hat{\beta}_S - \beta^0_S\|_1.
+ \]
+ Thus, using the compatibility condition, we have
+ \[
+ \frac{1}{n} \|X (\hat{\beta} - \beta^0)\|^2_2 + \lambda \|\hat{\beta} - \beta^0\|_1 \leq \frac{4 \lambda}{\phi} \sqrt{\frac{s}{n}} \|X (\hat{\beta} - \beta^0)\|_2.
+ \]
+ Thus, dividing through by $\frac{1}{\sqrt{n}} \|X(\hat{\beta} - \beta^0)\|_2$, we obtain
+ \[
+ \frac{1}{\sqrt{n}} \|X(\hat{\beta} - \beta^0)\|_2 \leq \frac{4 \lambda \sqrt{s}}{\phi}.\tag{$*$}
+ \]
+ So we substitute $(*)$ into the RHS and obtain
+ \[
+ \frac{1}{n} \|X (\hat{\beta} - \beta^0)\|_2^2 + \lambda \|\hat{\beta} - \beta^0\|_1 \leq \frac{16 \lambda^2 s}{\phi^2}.\qedhere
+ \]
+\end{proof}
+If we want to be impressed by this result, then we should make sure that the compatibility condition is a reasonable assumption to make on the design matrix.
+
+\subsubsection*{The compatibility condition and random design}
+For any $\Sigma \in \R^{p \times p}$, define
+\[
+ \phi_{\Sigma}^2(S) = \inf_{\beta: \|\beta_N\|_1 \leq 3 \|\beta_S\|_1,\; \beta_S \not= 0} \frac{\beta^T \Sigma \beta}{\|\beta_S\|_1^2/|S|}.
+\]
+Our original $\phi^2$ is then the same as $\phi^2_{\hat{\Sigma}}(S)$.
+
+If we want to analyze how $\phi^2_{\Sigma}(S)$ behaves for a ``random'' $\Sigma$, then it would be convenient to know that this depends continuously on $\Sigma$. For our purposes, the following lemma suffices:
+\begin{lemma}
+ Let $\Theta, \Sigma \in \R^{p \times p}$. Suppose $\phi^2_\Theta(S) > 0$ and
+ \[
+ \max_{j, k} |\Theta_{jk} - \Sigma_{jk}| \leq \frac{\phi^2_{\Theta}(S)}{32|S|}.
+ \]
+ Then
+ \[
+ \phi^2_\Sigma(S) \geq \frac{1}{2} \phi^2_\Theta(S).
+ \]
+\end{lemma}
+
+\begin{proof}
+ We suppress the dependence on $S$ for notational convenience. Let $s = |S|$ and
+ \[
+ t = \frac{\phi^2_\Theta(S)}{32s}.
+ \]
+ We have
+ \[
+ |\beta^T (\Sigma - \Theta) \beta| \leq \|\beta\|_1 \|(\Sigma - \Theta) \beta\|_\infty \leq t\|\beta\|_1^2,
+ \]
+ where we applied H\"older twice.
+
+ If $\|\beta_N\|_1 \leq 3 \|\beta_S\|_1$, then we have
+ \[
+ \|\beta\|_1 \leq 4 \|\beta_S\|_1 \leq 4\frac{\sqrt{\beta^T \Theta \beta}}{\phi_\Theta/\sqrt{s}}.
+ \]
+ Thus, we have
+ \[
+ \beta^T \Sigma \beta \geq \beta^T \Theta \beta - t \|\beta\|_1^2 \geq \beta^T \Theta \beta - \frac{\phi^2_\Theta}{32 s} \cdot \frac{16 \beta^T \Theta \beta}{\phi^2_\Theta /s} = \frac{1}{2} \beta^T \Theta \beta.\qedhere
+ \]
+\end{proof}
+
+Define
+\[
+ \phi^2_{\Sigma, s} = \min_{S: |S| = s} \phi^2_\Sigma(S).
+\]
+Note that if
+\[
+ \max_{jk} |\Theta_{jk} - \Sigma_{jk}| \leq \frac{\phi^2_{\Theta, s}}{32 s},
+\]
+then
+\[
+ \phi^2_\Sigma(S) \geq \frac{1}{2} \phi^2_\Theta(S)
+\]
+for all $S$ with $|S| = s$. In particular,
+\[
+ \phi^2_{\Sigma, s} \geq \frac{1}{2} \phi^2_{\Theta, s}.
+\]
+\begin{thm}
+ Suppose the rows of $X$ are iid and each entry is sub-Gaussian with parameter $v$. Let $\Sigma^0 = \E \hat{\Sigma}$, and suppose that $s \sqrt{\log p / n} \to 0$ as $n \to \infty$ and that $\phi^2_{\Sigma^0, s}$ is bounded away from $0$. Then
+ \[
+ \P\left(\phi^2_{\hat{\Sigma}, s} \geq \frac{1}{2} \phi^2_{\Sigma^0, s}\right) \to 1\text{ as }n \to \infty. % \geq 1 - O(s \sqrt{\log p / n})
+ \]
+\end{thm}
+
+\begin{proof}
+ It is enough to show that
+ \[
+ \P\left(\max_{jk} |\hat{\Sigma}_{jk} - \Sigma^0_{jk}| \geq \frac{\phi^2_{\Sigma^0, s}}{32 s}\right) \to 0
+ \]
+ as $n \to \infty$.
+
+ Let $t = \frac{\phi_{\Sigma^0, s}^2}{32 s}$. Then
+ \[
+ \P \left(\max_{j,k} |\hat{\Sigma}_{jk} - \Sigma^0_{jk}| \geq t\right) \leq \sum_{j, k} \P(|\hat{\Sigma}_{jk} - \Sigma^0_{jk}| \geq t).
+ \]
+ Recall that
+ \[
+ \hat{\Sigma}_{jk} = \frac{1}{n} \sum_{i = 1}^n X_{ij} X_{ik}.
+ \]
+ So we can apply Bernstein's inequality to bound
+ \[
+ \P(|\hat{\Sigma}_{jk} - \Sigma^0_{jk}| \geq t) \leq 2 \exp \left(-\frac{nt^2}{2 (64 v^4 + 4 v^2 t)}\right),
+ \]
+ since $\sigma = 8v^2$ and $b = 4 v^2$. So we can bound
+ \[
+ \P \left(\max_{j,k} |\hat{\Sigma}_{jk} - \Sigma^0_{jk}| \geq t\right) \leq 2 p^2 \exp \left( - \frac{c n}{s^2}\right) = 2 \exp \left(-\frac{n}{s^2}\left(c - \frac{2 s^2 \log p}{n}\right)\right) \to 0
+ \]
+ for some constant $c > 0$, since $s^2 \log p / n \to 0$ by assumption.
+\end{proof}
+
+\begin{cor}
+ Suppose the rows of $X$ are iid mean-zero multivariate Gaussian with covariance $\Sigma^0$. Suppose $\Sigma^0$ has minimum eigenvalue bounded from below by $c_{min} > 0$, and suppose the diagonal entries of $\Sigma^0$ are bounded from above. If $s \sqrt{\log p / n} \to 0$, then
+ \[
+ \P\left(\phi_{\hat{\Sigma}, s}^2 \geq \frac{1}{2} c_{min}\right) \to 1\text{ as }n \to \infty.
+ \]
+\end{cor}
+
+\subsection{Computation of Lasso solutions}
+We have had enough of bounding things. In this section, let's think about how we can actually run the Lasso. What we present here is actually a rather general method to find the minimum of a function, known as \term{coordinate descent}.
+
+Suppose we have a function $f: \R^d \to \R$. We start with an initial guess $x^{(0)}$ and repeat for $m = 1, 2, \ldots$
+\begin{align*}
+ x_1^{(m)} &= \argmin_{x_1} f(x_1, x_2^{(m - 1)}, \ldots, x_d^{(m - 1)})\\
+ x_2^{(m)} &= \argmin_{x_2} f(x_1^{(m)}, x_2, x_3^{(m - 1)}, \ldots, x_d^{(m - 1)})\\
+ &\rvdots\\
+ x_d^{(m)} &= \argmin_{x_d} f(x_1^{(m)}, x_2^{(m)}, \ldots, x_{d - 1}^{(m)}, x_d)
+\end{align*}
+until the result stabilizes.
+
+This was proposed for solving the Lasso a long time ago, and a Stanford group tried this out. However, instead of using $x_1^{(m)}$ when computing $x_2^{(m)}$, they used $x^{(m - 1)}_1$ instead. This turned out to be pretty useless, and so the group abandoned the method. After trying some other methods, which weren't very good, they decided to revisit this method and fixed their mistake.
+
+For this to work well, of course the coordinatewise minimizations have to be easy (which is the case for the Lasso, where we even have explicit solutions). This converges to the global minimizer if the minimizer is unique, $\{x : f(x) \leq f(x^{(0)})\}$ is compact, and if $f$ has the form
+\[
+ f(x) = g(x) + \sum_j h_j(x_j),
+\]
+where $g$ is convex and differentiable, and each $h_j$ is convex but not necessarily differentiable. In the case of the Lasso, $g$ is the least squares term and the $h_j$ are the $\ell_1$ penalty terms $\lambda |\beta_j|$.
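For the Lasso, the coordinatewise minimization has the explicit solution $\beta_j \mapsto \mathrm{soft}(X_j^T r / n, \lambda) / (X_j^T X_j / n)$, where $r$ is the residual with the $j$th coordinate's contribution removed and $\mathrm{soft}$ is soft-thresholding (a standard computation, not derived in the notes). A minimal sketch of the resulting algorithm:

```python
import numpy as np

def lasso_cd(X, Y, lam, n_sweeps=200):
    """Coordinate descent for Q(beta) = ||Y - X beta||^2 / (2n) + lam ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # X_j^T X_j / n
    resid = Y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            resid = resid + X[:, j] * beta[j]  # residual with coordinate j removed
            rho = X[:, j] @ resid / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            resid = resid - X[:, j] * beta[j]  # restore the full residual
    return beta

# Tiny check on a hypothetical orthonormal design (X^T X / n = I), where the
# answer is known: soft-thresholding of X^T Y / n.
n = 4
X = np.sqrt(n) * np.eye(n)
Y = np.array([2.0, -0.5, 0.1, -3.0])
beta = lasso_cd(X, Y, lam=0.4)
print(beta)    # approximately [0.6, 0, 0, -1.1]
```

Crucially, the update for coordinate $j$ uses the freshly updated values of the earlier coordinates within the same sweep, matching the scheme displayed above.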
+
+There are two things we can do to make this faster. We typically solve the Lasso on a grid of $\lambda$ values $\lambda_0 > \lambda_1 > \cdots > \lambda_L$, and then pick the appropriate $\lambda$ by $v$-fold cross-validation. In this case, we can start solving at $\lambda_0$, and then for each $i > 0$, we solve the $\lambda = \lambda_i$ problem by picking $x^{(0)}$ to be the optimal solution to the $\lambda_{i - 1}$ problem. In fact, even if we already have a fixed $\lambda$ value we want to use, it is often advantageous to solve the Lasso with a larger $\lambda$-value first, and then use that as a warm start to get to our desired $\lambda$ value.
+
+Another strategy is an \term{active set} strategy. If $p$ is large, then this loop may take a very long time. Since we know the Lasso should set a lot of things to zero, for $\ell = 1, \ldots, L$, we set
+\[
+ A = \{ k : \hat{\beta}^L_{\lambda_{\ell - 1}, k} \not= 0 \}.
+\]
+We then perform coordinate descent only on coordinates in $A$ to obtain a Lasso solution $\hat{\beta}$ with $\hat{\beta}_{A^c} = 0$. This may not be the actual Lasso solution. To check this, we use the KKT conditions. We set
+\[
+ V = \left\{k \in A^c: \frac{1}{n} |X_k^T(Y - X \hat{\beta})| > \lambda_\ell \right\}.
+\]
+If $V = \emptyset$, then we are done. Otherwise, we add $V$ to our active set $A$, and then run coordinate descent again on this active set.
+
+\subsection{Extensions of the Lasso}
+There are many ways we can modify the Lasso. The first thing we might want to change in the Lasso is to replace the least squares loss with other log-likelihoods. Another way to modify the Lasso is to replace the $\ell_1$ penalty with something else in order to encourage a different form of sparsity.
+
+\begin{eg}[Group Lasso]\index{group Lasso}
+ Given a partition
+ \[
+ G_1 \cup \cdots \cup G_q = \{1, \ldots, p\},
+ \]
+ the \emph{group Lasso} penalty is
+ \[
+ \lambda \sum_{j = 1}^q m_j \|\beta_{G_j}\|_2,
+ \]
+ where $\{m_j\}$ is some sort of weight to account for the fact that the groups have different sizes. Typically, we take $m_j = \sqrt{|G_j|}$.
+
+ If we take $G_i = \{i\}$, then we recover the original Lasso. If we take $q = 1$, then we recover Ridge regression. What this does is that it encourages the entire group to be all zero, or all non-zero.
+\end{eg}
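The all-or-nothing behaviour of a group can be seen from the proximal operator of a single group's penalty $v \mapsto t \|v\|_2$, which is block soft-thresholding: the whole group is scaled towards zero, and killed entirely once its norm drops below the threshold (a standard fact, illustrated by this hypothetical snippet).

```python
import numpy as np

def block_soft(v, t):
    """Proximal operator of t * ||.||_2: shrinks the whole group at once."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1 - t / nrm) * v

g = np.array([3.0, 4.0])     # ||g||_2 = 5
print(block_soft(g, 1.0))    # scaled down: approximately [2.4, 3.2]
print(block_soft(g, 6.0))    # threshold exceeds the norm: [0., 0.]
```

Contrast this with the coordinatewise soft-thresholding of the plain Lasso, which can zero out individual components of a group independently.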
+
+\begin{eg}
+ Another variation is the \term{fused Lasso}. If $\beta^0_{j + 1}$ is expected to be close to $\beta_j^0$, then a \emph{fused Lasso} penalty may be appropriate, with
+ \[
+ \lambda_1 \sum_{j = 1}^{p - 1} |\beta_{j + 1} - \beta_j| + \lambda_2 \|\beta\|_1.
+ \]
+ For example, if
+ \[
+ Y_i = \mu_i + \varepsilon_i,
+ \]
+ and we believe that $(\mu_i)_{i = 1}^n$ form a piecewise constant sequence, we can estimate $\mu^0$ by
+ \[
+ \argmin_\mu \left\{\|Y - \mu\|_2^2 + \lambda \sum_{i = 1}^{n - 1} |\mu_{i + 1} - \mu_i|\right\}.
+ \]
+\end{eg}
+
+\begin{eg}[Elastic net]\index{elastic net}
+ We can use
+ \[
+ \hat{\beta}_{\lambda, \alpha}^{EN} = \argmin_{\beta} \left\{\frac{1}{2n} \|Y - X \beta\|_2^2 + \lambda (\alpha \|\beta\|_1 + (1 - \alpha)\|\beta\|_2^2)\right\}.
+ \]
+ for $\alpha \in [0, 1]$. What the $\ell_2$ norm does is that it encourages highly positively correlated variables to have similar estimated coefficients.
+
+ For example, if we have duplicate columns, then the $\ell_1$ penalty encourages us to take one of the coefficients to be $0$, while the $\ell_2$ penalty encourages the coefficients to be the same.
+\end{eg}
+
+Another class of variations try to reduce the bias of the Lasso. Although the bias of the Lasso is a necessary by-product of reducing the variance of the estimate, it is sometimes desirable to reduce this bias.
+
+The \term{LARS-OLS hybrid} takes the $\hat{S}_\lambda$ obtained by the Lasso, and then re-estimates $\beta^0_{\hat{S}_\lambda}$ by OLS. We can also re-estimate using the Lasso on $X_{\hat{S}_\lambda}$, and this is known as the \term{relaxed Lasso}.
+
+In the \term{adaptive Lasso}, we obtain an initial estimate of $\beta^0$, e.g.\ with the Lasso, and then solve
+\[
+ \hat{\beta}^{\mathrm{adapt}}_\lambda = \argmin_{\beta : \beta_{\hat{S}^c} = 0} \left\{\frac{1}{2n} \|Y - X \beta\|_2^2 + \lambda \sum_{k \in \hat{S}} \frac{|\beta_k|}{|\hat{\beta}_k|}\right\}.
+\]
+We can also try to use a non-convex penalty. We can attempt to solve
+\[
+ \argmin_\beta \left\{\frac{1}{2n} \|Y - X \beta\|_2^2 + \sum_{k = 1}^p p_\lambda(|\beta_k|)\right\},
+\]
+where $p_\lambda: [0, \infty) \to [0, \infty)$ is a non-convex function. One common example is the \term{MCP}, given by
+\[
+ p_\lambda'(u) = \left(\lambda - \frac{u}{\gamma}\right)_+,
+\]
+where $\gamma$ is an extra tuning parameter. This tends to give estimates even sparser than the Lasso.
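+Integrating $p_\lambda'$ gives the penalty explicitly (a routine computation, included here for concreteness):
+\[
+ p_\lambda(u) = \int_0^u \left(\lambda - \frac{t}{\gamma}\right)_+ \;\mathrm{d}t =
+ \begin{cases}
+ \lambda u - \frac{u^2}{2\gamma} & 0 \leq u \leq \gamma \lambda\\
+ \frac{\gamma \lambda^2}{2} & u > \gamma \lambda.
+ \end{cases}
+\]
+So the penalty agrees with the Lasso penalty $\lambda u$ near $0$, but flattens out beyond $\gamma \lambda$, so large coefficients incur no additional shrinkage.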
+
+\section{Graphical modelling}
+\subsection{Conditional independence graphs}
+So far, we have been looking at prediction problems. But sometimes we may want to know more than that. For example, there is a positive correlation between the wine budget of a college, and the percentage of students getting firsts. This information allows us to make predictions, in the sense that if we happen to know the wine budget of a college, but forgot the percentage of students getting firsts, then we can make a reasonable prediction of the latter based on the former. However, this does not suggest any causal relation between the two --- increasing the wine budget is probably not a good way to increase the percentage of students getting firsts!
+
+Of course, it is unlikely that we can actually figure out causation just by looking at the data. However, there are some things we can try to answer. If we gather more data about the colleges, then we would probably find that the colleges that have larger wine budget and more students getting firsts are also the colleges with larger endowment and longer history. If we condition on all these other variables, then there is not much correlation left between the wine budget and the percentage of students getting firsts. This is what we are trying to capture in conditional independence graphs.
+
+We first introduce some basic graph terminology. For the purpose of conditional independence graphs, we only need undirected graphs. But later, we will need the notion of directed graphs as well, so our definitions will be general enough to include those.
+\begin{defi}[Graph]\index{graph}
+ A \emph{graph} is a pair $\mathcal{G} = (V, E)$, where $V$ is a set of vertices and $E \subseteq V \times V$ is such that $(v, v) \not \in E$ for all $v \in V$.
+\end{defi}
+
+\begin{defi}[Edge]\index{edge}
+ We say there is an \emph{edge} between $j$ and $k$ and that $j$ and $k$ are \term{adjacent} if $(j, k) \in E$ or $(k, j) \in E$.
+\end{defi}
+
+\begin{defi}[Undirected edge]\index{undirected edge}
+ An edge $(j, k)$ is \emph{undirected} if also $(k, j) \in E$. Otherwise, it is \emph{directed}\index{directed edge} and we write $j \to k$ to represent it. We also say that $j$ is a \term{parent} of $k$, and write $\pa(k)$ for the set of all parents of $k$.
+\end{defi}
+
+\begin{defi}[(Un)directed graph]\index{undirected graph}\index{directed graph}
+ A graph is \emph{(un)directed} if all its edges are (un)directed.
+\end{defi}
+
+\begin{defi}[Skeleton]\index{skeleton}
+ The \emph{skeleton} of $\mathcal{G}$ is a copy of $\mathcal{G}$ with every edge replaced by an undirected edge.
+\end{defi}
+
+\begin{defi}[Subgraph]\index{subgraph}
+ A graph $\mathcal{G}_1 = (V_1, E_1)$ is a \emph{subgraph} of $\mathcal{G} = (V, E)$ if $V_1 \subseteq V$ and $E_1 \subseteq E$. A \term{proper subgraph}\index{subgraph!proper} is one where either of the inclusions is proper.
+\end{defi}
+
+As discussed, we want a graph that encodes the conditional dependence of different variables. We first define what this means. In this section, we only work with undirected graphs.
+
+\begin{defi}[Conditional independence]\index{conditional independence}
+ Let $X, Y, Z$ be random vectors with joint density $f_{XYZ}$. We say that $X$ is \emph{conditionally independent} of $Y$ given $Z$, written $X \amalg Y \mid Z$, if
+ \[
+ f_{XY\mid Z}(x, y \mid z) = f_{X\mid Z}(x \mid z) f_{Y \mid Z} (y \mid z).
+ \]
+ Equivalently,
+ \[
+ f_{X\mid YZ} (x \mid y, z) = f_{X \mid Z}(x \mid z)
+ \]
+ for all $y$.
+\end{defi}
+We shall ignore all the technical difficulties, and take as an assumption that all these conditional densities exist.
+
+\begin{defi}[Conditional independence graph (CIG)]\index{conditional independence graph}\index{CIG}
+ Let $P$ be the law of $Z = (Z_1, \ldots, Z_p)^T$. The \emph{conditional independence graph} (CIG) is the graph whose vertices are $\{1, \ldots, p\}$, and which contains an edge between $j$ and $k$ iff $Z_j$ and $Z_k$ are conditionally dependent given $Z_{-jk}$.
+\end{defi}
+
+More generally, we make the following definition:
+\begin{defi}[Pairwise Markov property]\index{pairwise Markov property}
+ Let $P$ be the law of $Z = (Z_1, \ldots, Z_p)^T$. We say $P$ satisfies the \emph{pairwise Markov property} with respect to a graph $\mathcal{G}$ if for any distinct, non-adjacent vertices $j, k$, we have $Z_j \amalg Z_k \mid Z_{-jk}$.
+\end{defi}
+
+\begin{eg}
+ If $\mathcal{G}$ is a complete graph, then $P$ satisfies the pairwise Markov property with respect to $\mathcal{G}$.
+\end{eg}
+
+The conditional independence graph is thus the minimal graph satisfying the pairwise Markov property. It turns out that under mild conditions, the pairwise Markov property implies something much stronger.
+
+\begin{defi}[Separates]\index{separates}
+ Given a triple of (disjoint) subsets of nodes $A, B, S$, we say $S$ \emph{separates} $A$ from $B$ if every path from a node in $A$ to a node in $B$ contains a node in $S$.
+\end{defi}
+
+\begin{defi}[Global Markov property]\index{global Markov property}
+ We say $P$ satisfies the \emph{global Markov property} with respect to $\mathcal{G}$ if for any triple $(A, B, S)$ of disjoint subsets of $V$ such that $S$ separates $A$ from $B$, we have $Z_A \amalg Z_B \mid Z_S$.
+\end{defi}
+
+\begin{prop}
+ If $P$ has a positive density and satisfies the pairwise Markov property with respect to $\mathcal{G}$, then it also satisfies the global Markov property with respect to $\mathcal{G}$.
+\end{prop}
+This is a really nice result, but we will not prove this. However, we will prove a special case in the example sheet.
+
+So how do we actually construct the conditional independence graph? To do so, we need to test our variables for conditional dependence. In general, this is quite hard. However, in the case where we have Gaussian data, it is much easier, since independence is the same as vanishing covariance.
+
+\begin{notation}[$M_{A,B}$]
+ Let $M$ be a matrix. Then $M_{A,B}$ refers to the submatrix given by the rows in $A$ and columns in $B$.
+\end{notation}
+
+Since we are going to talk about conditional distributions a lot, the following calculation will be extremely useful.
+\begin{prop}
+ Suppose $Z \sim N_p(\mu, \Sigma)$ and $\Sigma$ is positive definite. Then
+ \[
+ Z_A \mid Z_B = z_B \sim N_{|A|}(\mu_A + \Sigma_{A, B} \Sigma_{B, B}^{-1}(z_B - \mu_B), \Sigma_{A, A} - \Sigma_{A, B} \Sigma_{B, B}^{-1} \Sigma_{B, A}).
+ \]
+\end{prop}
+
+\begin{proof}
+ Of course, we can just compute this directly, maybe using moment generating functions. But for pleasantness, we adopt a different approach. Note that for any $M$, we have
+ \[
+ Z_A = M Z_B + (Z_A - M Z_B).
+ \]
+ We shall pick $M$ such that $Z_A - M Z_B$ is independent of $Z_B$, i.e.\ such that
+ \[
+ 0 = \cov(Z_B, Z_A - M Z_B) = \Sigma_{B,A} - \Sigma_{B, B} M^T.
+ \]
+ So we should take
+ \[
+ M = (\Sigma_{B, B}^{-1}\Sigma_{B,A})^T = \Sigma_{A,B} \Sigma_{B,B}^{-1}.
+ \]
+ We already know that $Z_A - M Z_B$ is Gaussian, so to understand it, we only need to know its mean and variance. We have
+ \begin{align*}
+ \E[Z_A - M Z_B ] &= \mu_A - M \mu_B = \mu_A - \Sigma_{AB} \Sigma_{BB}^{-1} \mu_B\\
+ \var(Z_A - M Z_B) &= \Sigma_{A, A} - 2 \Sigma_{A, B} \Sigma_{B, B}^{-1} \Sigma_{B, A} + \Sigma_{A, B} \Sigma_{B, B}^{-1} \Sigma_{B, B} \Sigma_{B, B}^{-1} \Sigma_{B, A}\\
+ &= \Sigma_{A, A} - \Sigma_{A, B} \Sigma_{B, B}^{-1} \Sigma_{B, A}.
+ \end{align*}
+ Since $Z_A - M Z_B$ is independent of $Z_B$, conditional on $Z_B = z_B$ we have $Z_A = M z_B + (Z_A - M Z_B)$, where the second term retains the distribution just computed. This is exactly the claimed conditional distribution.
+\end{proof}
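The identities used in this proof are easy to verify numerically. Below is a minimal sketch in Python/NumPy; the dimension, index sets and the random positive definite covariance are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random positive definite covariance matrix for p = 4 variables.
p = 4
G = rng.standard_normal((p, p))
Sigma = G @ G.T + p * np.eye(p)

A = [0, 1]  # indices playing the role of the set A
B = [2, 3]  # indices playing the role of the set B

S_AA = Sigma[np.ix_(A, A)]
S_AB = Sigma[np.ix_(A, B)]
S_BA = Sigma[np.ix_(B, A)]
S_BB = Sigma[np.ix_(B, B)]

# M = Sigma_{A,B} Sigma_{B,B}^{-1} makes Z_A - M Z_B uncorrelated with Z_B:
# cov(Z_B, Z_A - M Z_B) = Sigma_{B,A} - Sigma_{B,B} M^T = 0.
M = S_AB @ np.linalg.inv(S_BB)
print(np.allclose(S_BA - S_BB @ M.T, 0))  # True

# The conditional covariance is the Schur complement, and is positive definite.
cond_cov = S_AA - S_AB @ np.linalg.inv(S_BB) @ S_BA
print(np.all(np.linalg.eigvalsh(cond_cov) > 0))  # True
```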
+
+\subsubsection*{Neighbourhood selection}
+We are going to specialize to $A = \{k\}$ and $B = \{1, \ldots, p\} \setminus \{k\}$. Then we can separate out the ``mean'' part and write
+\[
+ Z_k = M_k + Z_{-k}^T \Sigma_{-k, -k}^{-1} \Sigma_{-k, k} + \varepsilon_k,
+\]
+where
+\begin{align*}
+ M_k &= \mu_k - \mu_{-k}^T \Sigma_{-k, -k}^{-1} \Sigma_{-k, k},\\
+ \varepsilon_k \mid Z_{-k} &\sim N(0, \Sigma_{k, k} - \Sigma_{k, -k} \Sigma_{-k, -k}^{-1} \Sigma_{-k, k}).
+\end{align*}
+This now looks like we are doing regression.
+
+We observe that
+\begin{lemma}
+ Given $k$, let $j'$ be such that $(Z_{-k})_j = Z_{j'}$. This $j'$ is either $j$ or $j + 1$, depending on whether it comes before or after $k$.
+
+ If the $j$th component of $\Sigma_{-k, -k}^{-1} \Sigma_{-k, k}$ is $0$, then $Z_k \amalg Z_{j'} \mid Z_{-kj'}$.
+\end{lemma}
+
+\begin{proof}
+ If the $j$th component of $\Sigma_{-k, -k}^{-1} \Sigma_{-k, k}$ is $0$, then the distribution of $Z_k \mid Z_{-k}$ will not depend on $(Z_{-k})_j = Z_{j'}$ (here $j'$ is either $j$ or $j + 1$, depending on where $k$ is). So we know
+ \[
+ Z_k \mid Z_{-k} \overset{d}{=} Z_k \mid Z_{-kj'}.
+ \]
+ This is the same as saying $Z_k \amalg Z_{j'} \mid Z_{-kj'}$.
+\end{proof}
+
+Neighbourhood selection exploits this fact. Given $x_1, \ldots, x_n$ which are iid $\sim Z$ and
+\[
+ X = (x_1^T, \cdots, x_n^T)^T,
+\]
+we can estimate $\Sigma_{-k, -k}^{-1} \Sigma_{-k, k}$ by regressing $X_k$ on $X_{-k}$ using the Lasso (with an intercept term). We then obtain selected sets $\hat{S}_k$. There are two ways of estimating the CIG based on these:
+\begin{itemize}
+ \item OR rule: We add the edge $(j, k)$ if $j \in \hat{S}_k$ or $k \in \hat{S}_j$.
+ \item AND rule: We add the edge $(j, k)$ if $j \in \hat{S}_k$ and $k \in \hat{S}_j$.
+\end{itemize}
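A minimal sketch of neighbourhood selection in Python. The use of scikit-learn's LassoCV, and cross-validation to pick $\lambda$, are implementation choices not prescribed by the notes; the toy chain example at the end is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def neighbourhood_select(X, rule="or"):
    """Estimate the CIG by Lasso-regressing each variable on all the others.

    X    : (n, p) data matrix whose rows are i.i.d. copies of Z.
    rule : "or" or "and", as described above.
    Returns a boolean (p, p) adjacency matrix of the estimated graph.
    """
    n, p = X.shape
    # S_hat[k] = set of selected variables when regressing X_k on X_{-k}.
    S_hat = []
    for k in range(p):
        rest = [j for j in range(p) if j != k]
        fit = LassoCV(cv=5).fit(X[:, rest], X[:, k])  # fits an intercept
        S_hat.append({rest[j] for j in np.flatnonzero(fit.coef_)})
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        for k in range(j + 1, p):
            if rule == "or":
                adj[j, k] = (j in S_hat[k]) or (k in S_hat[j])
            else:
                adj[j, k] = (j in S_hat[k]) and (k in S_hat[j])
            adj[k, j] = adj[j, k]
    return adj

# Toy example: a chain Z1 -> Z2 -> Z3, whose CIG is the path 1 - 2 - 3.
rng = np.random.default_rng(1)
n = 500
z1 = rng.standard_normal(n)
z2 = z1 + 0.3 * rng.standard_normal(n)
z3 = z2 + 0.3 * rng.standard_normal(n)
adj = neighbourhood_select(np.column_stack([z1, z2, z3]), rule="and")
print(adj[0, 1], adj[1, 2])
```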
+
+\subsubsection*{The graphical Lasso}
+Another way of finding the conditional independence graph is to compute $\var(Z_{\{j, k\}} \mid Z_{-jk})$ directly. The following lemma will be useful:
+
+\begin{lemma}
+ Let $M \in \R^{p \times p}$ be positive definite, and write
+ \[
+ M =
+ \begin{pmatrix}
+ P & Q\\
+ Q^T & R
+ \end{pmatrix},
+ \]
+ where $P$ and $R$ are square. The \term{Schur complement} of $R$ is
+ \[
+ S = P - QR^{-1} Q^T.
+ \]
+ Note that this has the same size as $P$. Then
+ \begin{enumerate}
+ \item $S$ is positive definite.
+ \item
+ \[
+ M^{-1} =
+ \begin{pmatrix}
+ S^{-1} & -S^{-1} QR^{-1}\\
+ -R^{-1} Q^T S^{-1} & R^{-1} + R^{-1} Q^T S^{-1} Q R^{-1}
+ \end{pmatrix}.
+ \]
+ \item $\det(M) = \det (S) \det (R)$
+ \end{enumerate}
+\end{lemma}
+We have seen this Schur complement when we looked at $\var(Z_A \mid Z_{A^c})$ previously, where we got
+\[
+ \var(Z_A \mid Z_{A^c}) = \Sigma_{A, A} - \Sigma_{A, A^c} \Sigma_{A^c, A^c}^{-1} \Sigma_{A^c, A} = \Omega_{A, A}^{-1},
+\]
+where $\Omega = \Sigma^{-1}$ is the \term{precision matrix}.
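The three claims of the lemma are easy to check numerically. Here is a quick NumPy verification on a random positive definite matrix; the block sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 2, 3  # sizes of the P and R blocks
G = rng.standard_normal((p + q, p + q))
M = G @ G.T + (p + q) * np.eye(p + q)  # random positive definite matrix

P, Q, R = M[:p, :p], M[:p, p:], M[p:, p:]
S = P - Q @ np.linalg.inv(R) @ Q.T  # Schur complement of R

# (i) S is positive definite.
print(np.all(np.linalg.eigvalsh(S) > 0))  # True

# (ii) the blocks of M^{-1} are as stated in the lemma.
Minv = np.linalg.inv(M)
print(np.allclose(Minv[:p, :p], np.linalg.inv(S)))  # True
print(np.allclose(Minv[:p, p:],
                  -np.linalg.inv(S) @ Q @ np.linalg.inv(R)))  # True

# (iii) det M = det S * det R.
print(np.isclose(np.linalg.det(M),
                 np.linalg.det(S) * np.linalg.det(R)))  # True
```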
+
+Specializing to the case where $A = \{j, k\}$, we have
+\[
+ \var(Z_{\{j, k\}} \mid Z_{-jk}) = \frac{1}{\det (\Omega_{A, A})}
+ \begin{pmatrix}
+ \Omega_{k, k} & -\Omega_{j, k}\\
+ -\Omega_{j, k} & \Omega_{j, j}
+ \end{pmatrix}.
+\]
+This tells us $Z_k \amalg Z_j \mid Z_{-kj}$ iff $\Omega_{j k} = 0$. Thus, we can approximate the conditional independence graph by computing the precision matrix $\Omega$.
+
+Our method to estimate the precision matrix is similar to the Lasso. Recall that the density of $N_p(\mu, \Sigma)$ is
+\[
+ P(z) = \frac{1}{(2\pi)^{p/2} (\det \Sigma)^{1/2}} \exp \left(-\frac{1}{2} (z - \mu)^T \Sigma^{-1} (z - \mu)\right).
+\]
+The log-likelihood of $(\mu, \Omega)$ based on an i.i.d.\ sample $x_1, \ldots, x_n$ is (after dropping a constant)
+\[
+ \ell(\mu, \Omega) = \frac{n}{2} \log \det \Omega - \frac{1}{2} \sum_{i = 1}^n (x_i - \mu)^T \Omega (x_i - \mu).
+\]
+To simplify this, we let
+\[
+ \bar{x} = \frac{1}{n} \sum_{i = 1}^n x_i,\quad S = \frac{1}{n} \sum_{i = 1}^n (x_i - \bar{x}) (x_i - \bar{x})^T.
+\]
+Then
+\begin{align*}
+ \sum (x_i - \mu)^T \Omega (x_i -\mu) &= \sum (x_i - \bar{x} + \bar{x} - \mu)^T \Omega (x_i - \bar{x} + \bar{x} - \mu) \\
+ &= \sum (x_i - \bar{x})^T \Omega (x_i - \bar{x}) + n (\bar{x} - \mu)^T \Omega (\bar{x} - \mu).
+\end{align*}
+We have
+\[
+ \sum (x_i - \bar{x})^T \Omega (x_i - \bar{x}) = \sum \tr \Big((x_i - \bar{x})^T \Omega (x_i - \bar{x})\Big) = n \tr (S\Omega).
+\]
+So we now have
+\[
+ \ell(\mu, \Omega) = -\frac{n}{2} \Big(\tr (S\Omega) - \log \det \Omega + (\bar{x} - \mu)^T \Omega (\bar{x} - \mu)\Big).
+\]
+We are really interested in estimating $\Omega$, so we should maximize this over $\mu$. That is easy: since $\Omega$ is positive definite, maximizing is the same as minimizing $(\bar{x} - \mu)^T \Omega (\bar{x} - \mu)$, so we should set $\mu = \bar{x}$. Thus, we have
+\[
+ \ell(\Omega) = \max_{\mu \in \R^p} \ell(\mu, \Omega) = -\frac{n}{2} \Big(\tr (S\Omega) - \log \det \Omega\Big).
+\]
+So we can solve for the MLE of $\Omega$ by solving
+\[
+ \min_{\Omega: \Omega \succ 0} \Big(- \log \det \Omega + \tr (S \Omega)\Big).
+\]
+One can show that this is convex, and to find the MLE, we can just differentiate
+\[
+ \frac{\partial}{\partial \Omega_{jk}} \log \det \Omega = (\Omega^{-1})_{jk},\quad \frac{\partial}{\partial \Omega_{jk}} \tr (S\Omega) = S_{jk},
+\]
+using that $S$ and $\Omega$ are symmetric. Setting these derivatives to zero, we see that provided $S$ is positive definite, the maximum likelihood estimate is just
+\[
+ \Omega = S^{-1}.
+\]
+But we are interested in the high-dimensional situation, where we have many variables, and when $p \geq n$, the matrix $S$ cannot be positive definite. To solve this, we use the \term{graphical Lasso}.
+
+The \emph{graphical Lasso} solves
+\[
+ \argmin_{\Omega: \Omega \succ 0} \Big(-\log \det \Omega + \tr (S \Omega) + \lambda \|\Omega\|_1\Big),
+\]
+where
+\[
+ \|\Omega\|_1 = \sum_{j, k} |\Omega_{jk}|.
+\]
+Often, people don't sum over the diagonal elements, as we want to know if off-diagonal elements ought to be zero. Similar to the case of the Lasso, this gives a sparse estimate of $\Omega$ from which we may estimate the conditional independence graph.
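scikit-learn ships an implementation of this optimization as sklearn.covariance.GraphicalLasso; here is a minimal sketch. The chain-graph truth, the sample size and the penalty level are illustrative choices.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)

# Simulate from a Gaussian whose precision matrix is sparse: a chain
# Z1 - Z2 - Z3 - Z4, so the true Omega is tridiagonal.
p = 4
Omega = 2.0 * np.eye(p)
for j in range(p - 1):
    Omega[j, j + 1] = Omega[j + 1, j] = -0.8
Sigma = np.linalg.inv(Omega)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=500)

# Solve min over positive definite Omega of
#   -log det Omega + tr(S Omega) + lambda * ||Omega||_1.
gl = GraphicalLasso(alpha=0.05).fit(X)
Omega_hat = gl.precision_

# Entries for adjacent pairs in the chain should dominate the rest.
print(abs(Omega_hat[0, 1]) > abs(Omega_hat[0, 2]))
```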
+
+\subsection{Structural equation modelling}
+The conditional independence graph only tells us which variables are related to one another. However, it doesn't tell us any causal relation between the different variables. We first need to explain what we mean by a causal model. For this, we need the notion of a \emph{directed acyclic graph}.
+
+\begin{defi}[Path]\index{path}
+ A \emph{path} from $j$ to $k$ is a sequence $j = j_1, j_2, \ldots, j_m = k$ of (at least two) distinct vertices such that $j_\ell$ and $j_{\ell + 1}$ are adjacent.
+
+ A path is \emph{directed}\index{directed path} if $j_\ell \to j_{\ell + 1}$ for all $\ell$.
+\end{defi}
+
+\begin{defi}[Directed acyclic graph (DAG)]\index{DAG}
+ A \term{directed cycle} is (almost) a directed path but with the start and end points the same.
+
+ A \emph{directed acyclic graph (DAG)} is a directed graph containing no directed cycles.
+\end{defi}
+
+\begin{center}
+ \begin{tikzpicture}
+ \node [mstate] (2) at (0, 0) {$a$};
+ \node [mstate] (3) at (2, 0) {$b$};
+
+ \node [mstate] (1) at (1, -1.732) {$c$};
+
+ \draw [->] (2) -- (1);
+ \draw [->] (3) -- (1);
+ \draw [->] (3) -- (2);
+
+ \node at (1, -2.5) {DAG};
+ \begin{scope}[shift={(4, 0)}]
+ \node [mstate] (2) at (0, 0) {$a$};
+ \node [mstate] (3) at (2, 0) {$b$};
+
+ \node [mstate] (1) at (1, -1.732) {$c$};
+
+ \draw [->] (2) -- (1);
+ \draw [->] (1) -- (3);
+ \draw [->] (3) -- (2);
+
+ \node at (1, -2.5) {not DAG};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+
+We will use directed acyclic graphs to encode causal structures, where we have a path from $a$ to $b$ if $a$ ``affects'' $b$.
+
+\begin{defi}[Structural equation model (SEM)]\index{structural equation model}\index{SEM}
+ A \emph{structural equation model} $\mathcal{S}$ for a random vector $Z \in \R^p$ is a collection of equations
+ \[
+ Z_k = h_k(Z_{p_k}, \varepsilon_k),
+ \]
+ where $k = 1, \ldots, p$, the $\varepsilon_1, \ldots, \varepsilon_p$ are independent, and the $p_k \subseteq \{1, \ldots, p\} \setminus \{k\}$ are such that the graph with $\pa(k) = p_k$ is a directed acyclic graph.
+\end{defi}
+
+\begin{eg}
+ Consider three random variables:
+ \begin{itemize}
+ \item $Z_1 = 1$ if a student is taking a course, $0$ otherwise
+ \item $Z_2 = 1$ if a student is attending catch up lectures, $0$ otherwise
+ \item $Z_3 = 1$ if a student heard about machine learning before attending the course, $0$ otherwise.
+ \end{itemize}
+ Suppose
+ \begin{align*}
+ Z_3 &= \varepsilon_3 \sim \Bernoulli(0.25)\\
+ Z_2 &= \mathbf{1}_{\{\varepsilon_2(1 + Z_3) > \frac{1}{2}\}},\quad \varepsilon_2 \sim U[0, 1]\\
+ Z_1 &= \mathbf{1}_{\{\varepsilon_1(Z_2 + Z_3) > \frac{1}{2}\}},\quad \varepsilon_1 \sim U[0, 1].
+ \end{align*}
+ This is then an example of a structural equation model, and the corresponding DAG is
+ \begin{center}
+ \begin{tikzpicture}
+ \node [mstate] (2) at (0, 0) {$Z_2$};
+ \node [mstate] (3) at (2, 0) {$Z_3$};
+
+ \node [mstate] (1) at (1, -1.732) {$Z_1$};
+
+ \draw [->] (2) -- (1);
+ \draw [->] (3) -- (1);
+ \draw [->] (3) -- (2);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+Note that the structural equation model for $Z$ determines its distribution, but the converse is not true. For example, the following two distinct structural equation models give rise to the same distribution:
+\begin{align*}
+ Z_1 &= \varepsilon & Z_1 &= Z_2\\
+ Z_2 &= Z_1 & Z_2 &= \varepsilon
+\end{align*}
+Indeed, if we have two variables that are just always the same, it is hard to tell which is the cause and which is the effect.
+
+It would be convenient if we could order our variables in a way that $Z_k$ depends only on $Z_j$ for $j < k$. This is known as a \emph{topological ordering}:
+
+\begin{defi}[Descendant]\index{descendant}\index{$\mathrm{de}$}
+ We say $k$ is a \emph{descendant} of $j$ if there is a directed path from $j$ to $k$. The set of descendants of $j$ will be denoted $\de(j)$.
+\end{defi}
+
+\begin{defi}[Topological ordering]\index{topological ordering}
+ Given a DAG $\mathcal{G}$ with $V = \{1, \ldots, p\}$ we say that a permutation $\pi: V \to V$ is a \emph{topological ordering} if $\pi(j) < \pi(k)$ whenever $k \in \de(j)$.
+\end{defi}
+
+Thus, given a topological ordering $\pi$, we can write $Z_k$ as a function of $\varepsilon_{\pi^{-1}(1)}, \ldots, \varepsilon_{\pi^{-1}(\pi(k))}$.
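A topological ordering can be computed with Kahn's algorithm. The following is a short sketch; representing the DAG as a dictionary of parent sets is an implementation choice.

```python
from collections import deque

def topological_order(parents):
    """Kahn's algorithm: given pa(k) for each node k of a DAG, return a
    list of the nodes in which every parent appears before its children.
    `parents` maps node -> set of parent nodes."""
    nodes = list(parents)
    indeg = {k: len(parents[k]) for k in nodes}
    children = {k: [] for k in nodes}
    for k in nodes:
        for j in parents[k]:
            children[j].append(k)
    queue = deque(k for k in nodes if indeg[k] == 0)
    order = []
    while queue:
        j = queue.popleft()
        order.append(j)
        for k in children[j]:
            indeg[k] -= 1
            if indeg[k] == 0:
                queue.append(k)
    if len(order) != len(nodes):
        raise ValueError("graph has a directed cycle")
    return order

# The DAG from the example: pa(1) = {2, 3}, pa(2) = {3}, pa(3) = {}.
print(topological_order({1: {2, 3}, 2: {3}, 3: set()}))  # [3, 2, 1]
```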
+
+How do we understand structural equation models? They give us information that is not encoded in the distribution itself. One way to think about them is via \emph{interventions}. We can modify a structural equation model by replacing the equation for $Z_k$, e.g.\ by setting $Z_k = a$. In real life, this may correspond to forcing all students to go to catch-up workshops. This is called a \term{perfect intervention}\index{intervention!perfect}. The modified SEM gives a new joint distribution for $Z$. Expectations or probabilities with respect to the new distribution are written by adding ``$\ddo(Z_k = a)$''. For example, we write
+\[
+ \E(Z_j \mid \ddo(Z_k = a)).
+\]
+In general, this is different from $\E(Z_j \mid Z_k = a)$, since, for example, conditioning on $Z_2 = a$ in our example would tell us something about $Z_3$.
+
+\begin{eg}
+ After the intervention $\ddo(Z_2 = 1)$, i.e.\ we force everyone to go to the catch-up lectures, we have a new SEM with
+ \begin{align*}
+ Z_3 &= \varepsilon_3 \sim \Bernoulli(0.25)\\
+ Z_2 &= 1\\
+ Z_1 &= \mathbf{1}_{\varepsilon_1 (1 + Z_3) > \frac{1}{2}}, \quad \varepsilon_1 \sim U[0, 1].
+ \end{align*}
+ Then, for example, we can compute
+ \[
+ \P(Z_1 = 1 \mid \ddo(Z_2 = 1)) = \frac{1}{2} \cdot \frac{3}{4} + \frac{3}{4} \cdot \frac{1}{4} = \frac{9}{16},
+ \]
+ and by high school probability, we also have
+ \[
+ \P(Z_1 = 1 \mid Z_2 = 1) = \frac{7}{12}.
+ \]
+\end{eg}
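Both probabilities are easy to confirm by simulation; a sketch (the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10**6

def simulate(do_z2=None):
    """Draw n samples from the SEM; do_z2 = a performs do(Z_2 = a)."""
    z3 = (rng.random(n) < 0.25).astype(float)        # Z3 ~ Bernoulli(0.25)
    z2 = (rng.random(n) * (1 + z3) > 0.5).astype(float)
    if do_z2 is not None:                            # perfect intervention:
        z2 = np.full(n, float(do_z2))                # replace the equation for Z2
    z1 = (rng.random(n) * (z2 + z3) > 0.5).astype(float)
    return z1, z2, z3

# P(Z1 = 1 | do(Z2 = 1)) = 9/16 = 0.5625
z1, _, _ = simulate(do_z2=1)
p_do = z1.mean()

# P(Z1 = 1 | Z2 = 1) = 7/12, approximately 0.5833
z1, z2, _ = simulate()
p_cond = z1[z2 == 1].mean()

print(p_do, p_cond)
```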
+
+To understand the DAGs associated to structural equation models, we would like to come up with Markov properties analogous to what we had for undirected graphs. This, in particular, requires the correct notion of ``separation'', which we call d-separation. Our notion should be such that if $S$ d-separates $A$ and $B$ in the DAG, then $Z_A$ and $Z_B$ are conditionally independent given $Z_S$. Let's think about some examples. For example, we might have a DAG that looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \node [msstate] (1) at (-2, 0) {$a$};
+ \node [msstate] (2) at (-1, 0.5) {$p$};
+ \node [msstate] (3) at (0, 1) {$s$};
+ \node [msstate] (4) at (1, 0.5) {$q$};
+ \node [msstate] (5) at (2, 0) {$b$};
+
+ \draw [->] (1) -- (2);
+ \draw [->] (2) -- (3);
+ \draw [->] (4) -- (3);
+ \draw [->] (5) -- (4);
+ \end{tikzpicture}
+\end{center}
+Then we expect that
+\begin{enumerate}
+ \item $Z_a$ and $Z_s$ are not independent;
+ \item $Z_a$ and $Z_s$ are independent given $Z_p$;
+ \item $Z_a$ and $Z_b$ are independent;
+ \item $Z_a$ and $Z_b$ are not independent given $Z_s$.
+\end{enumerate}
+We can explain a bit more about the last one. For example, the structural equation model might tell us $Z_s = Z_a + Z_b + \varepsilon$. In this case, if we know that $Z_a$ is large but $Z_s$ is small, then chances are, $Z_b$ is also large (with the opposite sign). The point is that $Z_a$ and $Z_b$ both contribute to $Z_s$, and if we know one of the contributions and the result, we can say something about the other contribution as well.
+
+Similarly, if we have a DAG that looks like
+\begin{center}
+ \begin{tikzpicture}
+ \node [msstate] (1) at (-2, 0) {$a$};
+ \node [msstate] (4) at (0, 0) {};
+ \node [msstate] (3) at (0, 1) {$s$};
+ \node [msstate] (5) at (2, 0) {$b$};
+
+ \draw [->] (1) -- (4);
+ \draw [->] (4) -- (3);
+ \draw [->] (5) -- (4);
+ \end{tikzpicture}
+\end{center}
+then as above, we know that $Z_a$ and $Z_b$ are not independent given $Z_s$.
+
+Another example is
+\begin{center}
+ \begin{tikzpicture}
+ \node [msstate] (1) at (-2, 0) {$a$};
+ \node [msstate] (2) at (-1, 0.5) {$p$};
+ \node [msstate] (3) at (0, 1) {$s$};
+ \node [msstate] (4) at (1, 0.5) {$q$};
+ \node [msstate] (5) at (2, 0) {$b$};
+
+ \draw [->] (2) -- (1);
+ \draw [->] (3) -- (2);
+ \draw [->] (3) -- (4);
+ \draw [->] (4) -- (5);
+ \end{tikzpicture}
+\end{center}
+Here we expect
+\begin{enumerate}
+ \item $Z_a$ and $Z_b$ are not independent.
+ \item $Z_a$ and $Z_b$ are independent given $Z_s$.
+\end{enumerate}
+To see the first of these, we observe that knowing $Z_a$ allows us to predict some information about $Z_s$, which in turn lets us say something about $Z_b$.
+\begin{defi}[Blocked]\index{blocked}
+ In a DAG, we say a path $(j_1, \ldots, j_m)$ between $j_1$ and $j_m$ is \emph{blocked} by a set of nodes $S$ (with neither $j_1$ nor $j_m$ in $S$) if there is some $j_\ell \in S$ and the path is \emph{not} of the form
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circle, draw, blue, text=black, minimum width=1cm] (1) at (-2, 0) {\small $j_{\ell - 1}$};
+ \node [circle, draw, blue, text=black, minimum width=1cm, fill=red, fill opacity=0.5, text opacity=1] (2) at (0, 0) {\small $j_{\ell}$};
+ \node [circle, draw, blue, text=black, minimum width=1cm] (3) at (2, 0) {\small $j_{\ell + 1}$};
+ \draw [->] (1) -- (2);
+ \draw [->] (3) -- (2);
+ \end{tikzpicture}
+ \end{center}
+ or there is some $j_\ell$ such that the path \emph{is} of this form, but neither $j_\ell$ nor any of its descendants are in $S$.
+\end{defi}
+
+\begin{defi}[d-separate]\index{d-separate}
+ If $\mathcal{G}$ is a DAG, given a triple of (disjoint) subsets of nodes $A, B, S$, we say $S$ \emph{d-separates} $A$ from $B$ if $S$ blocks every path from $A$ to $B$.
+\end{defi}
+
+For convenience, we define
+\begin{defi}[$v$-structure]\index{$v$-structure}
+ A set of three nodes is called a \emph{$v$-structure} if one node is a child of the two other nodes, and these two nodes are not adjacent.
+\end{defi}
+
+It is now natural to define
+\begin{defi}[Markov properties]
+ Let $P$ be the distribution of $Z$ and let $f$ be the density. Given a DAG $\mathcal{G}$, we say $P$ satisfies
+ \begin{enumerate}
+ \item the \term{Markov factorization criterion} if
+ \[
+ f(z_1, \ldots, z_p) = \prod_{k = 1}^p f(z_k \mid z_{\pa(k)}).
+ \]
+ \item the \term{global Markov property} if for all disjoint $A, B, S$ such that $S$ d-separates $A$ from $B$, we have $Z_A \amalg Z_B \mid Z_S$.
+ \end{enumerate}
+\end{defi}
+
+\begin{prop}
+ If $P$ has a density with respect to a product measure, then (i) and (ii) are equivalent.
+\end{prop}
+
+How does this fit into the structural equation model?
+
+\begin{prop}
+ Let $P$ be generated by a structural equation model with DAG $\mathcal{G}$. Then $P$ obeys the Markov factorization criterion with respect to $\mathcal{G}$.
+\end{prop}
+
+\begin{proof}
+ We assume $\mathcal{G}$ is topologically ordered (i.e.\ the identity map is a topological ordering). Then we can always write
+ \[
+ f(z_1, \ldots, z_p) = f(z_1) f(z_2 \mid z_1) \cdots f(z_p \mid z_1, z_2, \ldots, z_{p - 1}).
+ \]
+ By definition of a topological ordering, we know $\pa(k) \subseteq \{1, \ldots, k - 1\}$. Since $Z_k$ is a function of $Z_{\pa(k)}$ and the independent noise $\varepsilon_k$, we know
+ \[
+ Z_{k} \amalg Z_{\{1, \ldots, k - 1\} \setminus \pa(k)} \mid Z_{\pa(k)}.
+ \]
+ Thus,
+ \[
+ f(z_{k} \mid z_1, \ldots, z_{k - 1}) = f(z_k \mid z_{\pa(k)}).\qedhere
+ \]
+\end{proof}
+
+\subsection{The PC algorithm}
+We now want to try to find out the structural equation model given some data, and in particular, determine the causal structure. As we previously saw, there is no hope of determining this completely, even if we know the distribution of the $Z$ completely. Let's consider the different obstacles to this problem.
+
+\subsubsection*{Causal minimality}
+If $P$ is generated by an SEM with DAG $\mathcal{G}$, then from the above, we know that $P$ is Markov with respect to $\mathcal{G}$. The converse is also true: if $P$ is Markov with respect to a DAG $\mathcal{G}$, then there exists an SEM with DAG $\mathcal{G}$ that generates $P$. This immediately implies that $P$ will be Markov with respect to many DAGs. For example, any DAG whose skeleton is complete will always work. This suggests the following definition:
+
+\begin{defi}[Causal minimality]\index{causal minimality}
+ A distribution $P$ satisfies \emph{causal minimality} with respect to $\mathcal{G}$ if it is Markov with respect to $\mathcal{G}$, but not with respect to any proper subgraph of $\mathcal{G}$ (with the same vertex set).
+\end{defi}
+
+\subsubsection*{Markov equivalent DAGs}
+
+It is natural to aim for finding a causally minimal DAG. However, this does not give a unique solution, as we saw previously with the two variables that are always the same.
+
+In general, two different DAGs may satisfy the same set of d-separations, and then a distribution is Markov with respect to one iff it is Markov with respect to the other, and we cannot distinguish between the two.
+
+\begin{defi}[Markov equivalence]\index{Markov equivalence}
+ For a DAG $\mathcal{G}$, we let
+ \[
+ \mathcal{M}(\mathcal{G}) = \{\text{distributions $P$ such that $P$ is Markov with respect to $\mathcal{G}$}\}.
+ \]
+ We say two DAGs $\mathcal{G}_1, \mathcal{G}_2$ are \emph{Markov equivalent} if $\mathcal{M}(\mathcal{G}_1) = \mathcal{M}(\mathcal{G}_2)$.
+\end{defi}
+What is nice is that there is a rather easy way of determining when two DAGs are Markov equivalent.
+
+\begin{prop}
+ Two DAGs are Markov equivalent iff they have the same skeleton and the same set of $v$-structures.
+\end{prop}
+The set of all DAGs that are Markov equivalent to a given DAG can be represented by a \term{CPDAG} (completed partial DAG), which contains an edge $(j, k)$ iff some member of the equivalence class does.
+
+\subsubsection*{Faithfulness}
+To describe the final issue, consider the SEM
+\[
+ Z_1 = \varepsilon_1,\quad Z_2 = \alpha Z_1 + \varepsilon_2,\quad Z_3 = \beta Z_1 + \gamma Z_2 + \varepsilon_3.
+\]
+We take $\varepsilon \sim N_3(0, I)$. Then we have $Z = (Z_1, Z_2, Z_3) \sim N(0, \Sigma)$, where
+\[
+ \Sigma =
+ \begin{pmatrix}
+ 1 & \alpha & \beta + \alpha \gamma\\
+ \alpha & \alpha^2 + 1 & \alpha(\beta + \alpha \gamma) + \gamma\\
+ \beta + \alpha\gamma & \alpha(\beta + \alpha \gamma) + \gamma & \beta^2 + \gamma^2(\alpha^2 + 1) + 1 + 2\alpha \beta \gamma
+ \end{pmatrix}.
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \node [mstate] (2) at (0, 0) {$Z_2$};
+ \node [mstate] (3) at (2, 0) {$Z_3$};
+
+ \node [mstate] (1) at (1, 1.732) {$Z_1$};
+
+ \draw [->] (1) -- (2);
+ \draw [->] (1) -- (3);
+ \draw [->] (2) -- (3);
+ \end{tikzpicture}
+\end{center}
+Now if we picked values of $\alpha, \beta, \gamma$ such that
+\[
+ \beta + \alpha \gamma = 0,
+\]
+then we obtain an extra independence relation $Z_1 \amalg Z_3$ in our system. For example, if we pick $\beta = -1$ and $\alpha = \gamma = 1$, then
+\[
+ \Sigma =
+ \begin{pmatrix}
+ 1 & 1 & 0\\
+ 1 & 2 & 1\\
+ 0 & 1 & 2
+ \end{pmatrix}.
+\]
+While there is an extra independence relation, we cannot remove any edge while still satisfying the Markov property. Indeed:
+\begin{itemize}
+ \item If we remove $1 \to 2$, then this would require $Z_1 \amalg Z_2$, but this is not true.
+ \item If we remove $2 \to 3$, then this would require $Z_2 \amalg Z_3 \mid Z_1$, but we have
+ \[
+ \var((Z_2, Z_3) \mid Z_1) =
+ \begin{pmatrix}
+ 2 & 1\\
+ 1 & 2
+ \end{pmatrix} -
+ \begin{pmatrix}
+ 1 \\0
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & 0
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1 & 1\\
+ 1 & 2
+ \end{pmatrix},
+ \]
+ and this is not diagonal.
+ \item If we remove $1 \to 3$, then this would require $Z_1 \amalg Z_3 \mid Z_2$, but
+ \[
+ \var((Z_1, Z_3) \mid Z_2) =
+ \begin{pmatrix}
+ 1 & 0 \\
+ 0 & 2
+ \end{pmatrix} -
+ \frac{1}{2}
+ \begin{pmatrix}
+ 1 \\ 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & 1
+ \end{pmatrix},
+ \]
+ which is again non-diagonal.
+\end{itemize}
+So this DAG satisfies causal minimality. However, $P$ can also be generated by the structural equation model
+\[
+ \tilde{Z}_1 = \tilde{\varepsilon}_1,\quad \tilde{Z}_2 = \tilde{Z}_1 + \frac{1}{2} \tilde{Z}_3 + \tilde{\varepsilon}_2,\quad \tilde{Z}_3 = \tilde{\varepsilon}_3,
+\]
+where the $\tilde{\varepsilon}_i$ are independent with
+\[
+ \tilde{\varepsilon}_1 \sim N(0, 1),\quad \tilde{\varepsilon}_2 \sim N(0, \tfrac{1}{2}),\quad \tilde{\varepsilon}_3 \sim N(0, 2).
+\]
+Then this has the DAG
+\begin{center}
+ \begin{tikzpicture}
+ \node [mstate] (2) at (0, 0) {$Z_2$};
+ \node [mstate] (3) at (2, 0) {$Z_3$};
+
+ \node [mstate] (1) at (1, 1.732) {$Z_1$};
+
+ \draw [->] (1) -- (2);
+ \draw [->] (3) -- (2);
+ \end{tikzpicture}
+\end{center}
+This is a strictly smaller DAG in terms of the number of edges involved. It is easy to see that this satisfies causal minimality.
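We can check numerically that the two SEMs induce the same covariance, writing each linear SEM as $Z = BZ + \varepsilon$, so that $\Sigma = (I - B)^{-1} \var(\varepsilon) (I - B)^{-T}$. The noise variances $\var(\tilde{\varepsilon}_1) = 1$, $\var(\tilde{\varepsilon}_2) = \frac{1}{2}$, $\var(\tilde{\varepsilon}_3) = 2$ used below are the values needed to reproduce $\Sigma$.

```python
import numpy as np

# First SEM (alpha = gamma = 1, beta = -1):
# Z1 = e1, Z2 = Z1 + e2, Z3 = -Z1 + Z2 + e3, with var(e_i) = 1.
B1 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [-1.0, 1.0, 0.0]])
A1 = np.linalg.inv(np.eye(3) - B1)
Sigma1 = A1 @ np.eye(3) @ A1.T

# Second SEM: Z1 = e1', Z2 = Z1 + Z3/2 + e2', Z3 = e3',
# with var(e1') = 1, var(e2') = 1/2, var(e3') = 2.
B2 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.5],
               [0.0, 0.0, 0.0]])
A2 = np.linalg.inv(np.eye(3) - B2)
Sigma2 = A2 @ np.diag([1.0, 0.5, 2.0]) @ A2.T

print(np.allclose(Sigma1, Sigma2))  # True: same distribution
print(np.allclose(Sigma1, [[1, 1, 0], [1, 2, 1], [0, 1, 2]]))  # True
```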
+
+\begin{defi}[Faithfulness]\index{faithfulness}
+ We say $P$ is \emph{faithful} to a DAG $\mathcal{G}$ if it is Markov with respect to $\mathcal{G}$ and for all $A, B, S$ disjoint, $Z_A \amalg Z_B \mid Z_S$ implies $A, B$ are d-separated by $S$.
+\end{defi}
+
+\subsubsection*{Determining the DAG}
+We shall assume our distribution is faithful to some DAG $\mathcal{G}^0$, and see if we can figure out $\mathcal{G}^0$ from $P$, or even better, from data.
+
+To find $\mathcal{G}^0$, the following proposition helps us compute the skeleton:
+\begin{prop}
+ If nodes $j$ and $k$ are adjacent in a DAG $\mathcal{G}$, then no set can $d$-separate them.
+
+ If they are not adjacent, and $\pi$ is a topological order for $\mathcal{G}$ with $\pi(j) < \pi(k)$, then they are $d$-separated by $\pa(k)$.
+\end{prop}
+
+\begin{proof}
+ Only the last part requires proof. Consider a path $j = j_1, \ldots, j_m = k$. Start reading the path from $k$ and go backwards. If it starts as $j_{m - 1} \to k$, then $j_{m - 1}$ is a parent of $k$ and blocks the path. Otherwise, it looks like $k \to j_{m - 1}$.
+
+ We keep going down along the path until we first see something of the form
+ \begin{center}
+ \begin{tikzpicture}
+ \node [msstate] (end) at (2, 1) {$k$};
+ \node [msstate] (1) at (1, 0.5) {};
+ \node [msstate] (2) at (-1, -0.5) {$a$};
+
+ \node [msstate] (start) at (-2, 0) {};
+
+ \draw [->] (end) -- (1);
+ \draw [->] (1) -- (2) node [pos=0.5, fill=white] {$\cdots$};
+ \draw [->] (start) -- (2);
+ \end{tikzpicture}
+ \end{center}
+ This must exist, since $j$ is not a descendant of $k$, by the topological ordering. So it suffices to show that $a$ does not have a descendant in $\pa(k)$; but if it did, this would create a directed cycle.
+\end{proof}
+
+Finding the $v$-structures is harder, and at best we can do so up to Markov equivalence. To do that, observe the following:
+\begin{prop}
+ Suppose we have $j - \ell - k$ in the skeleton of a DAG.
+ \begin{enumerate}
+ \item If $j \to \ell \leftarrow k$, then no $S$ that d-separates $j$ and $k$ can have $\ell \in S$.
+ \item If there exists $S$ that d-separates $j$ and $k$ and $\ell \not \in S$, then $j \to \ell \leftarrow k$.\fakeqed
+ \end{enumerate}
+\end{prop}
+Denote the set of nodes adjacent to the vertex $k$ in the graph $\mathcal{G}$ by $\adj(\mathcal{G}, k)$.
+
+We can now describe the first part of the \term{PC algorithm}, which finds the skeleton of the ``true DAG'':
+\begin{enumerate}
+ \item Set $\hat{\mathcal{G}}$ to be the complete undirected graph. Set $\ell = -1$.
+ \item Repeat the following steps:
+ \begin{enumerate}
+ \item Set $\ell = \ell + 1$.
+ \item Repeat the following steps:
+ \begin{enumerate}
+ \item Select a new ordered pair of nodes $j, k$ that are adjacent in $\hat{\mathcal{G}}$ and such that $|\adj(\hat{\mathcal{G}}, j) \setminus \{k\}| \geq \ell$.
+ \item Repeat the following steps:
+ \begin{enumerate}
+ \item Choose a new $S \subseteq \adj(\hat{\mathcal{G}}, j) \setminus \{k\}$ with $|S| = \ell$.
+ \item If $Z_j \amalg Z_k \mid Z_S$, then delete the edge $jk$, and store $S(k, j) = S(j, k) = S$.
+ \item Repeat until $j-k$ is deleted or all $S$ chosen.
+ \end{enumerate}
+ \item Repeat until all pairs of adjacent nodes are inspected.
+ \end{enumerate}
+ \item Repeat until $\ell \geq p - 2$.
+ \end{enumerate}
+\end{enumerate}
+Suppose $P$ is faithful to a DAG $\mathcal{G}^0$. At each stage of the algorithm, the skeleton of $\mathcal{G}^0$ will be a subgraph of $\hat{\mathcal{G}}$. On the other hand, edges $(j, k)$ remaining at termination will have
+\[
+ Z_j \not\amalg Z_k \mid Z_S\text{ for all }S \subseteq \adj(\hat{\mathcal{G}}, j) \setminus \{k\}\text{ and all }S \subseteq \adj(\hat{\mathcal{G}}, k) \setminus \{j\}.
+\]
+So they must be adjacent in $\mathcal{G}^0$. Thus, $\hat{\mathcal{G}}$ and $\mathcal{G}^0$ have the same skeleton.
+
+To find the $v$-structures, we perform:
+\begin{enumerate}
+ \item For all $j - \ell - k$ in $\hat{\mathcal{G}}$, do:
+ \begin{enumerate}
+ \item If $\ell \not \in S(j, k)$, then orient $j \to \ell \leftarrow k$.
+ \end{enumerate}
+\end{enumerate}
+This gives us the Markov equivalence class, and we may orient the other edges using other properties like acyclicity.
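The skeleton phase can be sketched as follows, with the conditional independence queries abstracted into an oracle function. The data structures and the toy oracle (for the $v$-structure $0 \to 2 \leftarrow 1$) are illustrative choices; with real data the oracle would be replaced by a conditional independence test.

```python
from itertools import combinations

def pc_skeleton(p, ci_oracle):
    """Skeleton phase of the PC algorithm on nodes 0, ..., p-1.

    ci_oracle(j, k, S) returns True iff Z_j is independent of Z_k given Z_S.
    Returns the adjacency sets and the stored separating sets S(j, k)."""
    adj = {j: set(range(p)) - {j} for j in range(p)}
    sep = {}
    l = 0
    while any(len(adj[j] - {k}) >= l for j in adj for k in adj[j]):
        for j in range(p):
            for k in list(adj[j]):          # ordered pairs (j, k)
                if k not in adj[j] or len(adj[j] - {k}) < l:
                    continue
                for S in combinations(sorted(adj[j] - {k}), l):
                    if ci_oracle(j, k, set(S)):
                        adj[j].discard(k)   # delete the edge j - k
                        adj[k].discard(j)
                        sep[(j, k)] = sep[(k, j)] = set(S)
                        break
        l += 1
        if l > p - 2:
            break
    return adj, sep

# Oracle for the DAG 0 -> 2 <- 1: Z_0 and Z_1 are independent,
# but become dependent once we condition on the collider 2.
def oracle(j, k, S):
    return {j, k} == {0, 1} and 2 not in S

adj, sep = pc_skeleton(3, oracle)
print(sorted(adj[2]))  # [0, 1]
print(sep[(0, 1)])     # set()
```

Since $2 \notin S(0, 1)$, the $v$-structure step would then orient $0 \to 2 \leftarrow 1$.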
+
+If we want to apply this to data sets, then we need to use conditional independence tests in place of an oracle telling us whether variables are conditionally independent. However, errors in the algorithm propagate, and the whole process may be derailed by early errors. Moreover, the result of the algorithm may depend on how we iterate through the nodes. People have tried many ways to fix these problems, but in general, this method is rather unstable. Yet, if we have large data sets, this can produce quite decent results.
+
+\section{High-dimensional inference}
+\subsection{Multiple testing}
+Finally, we talk about high-dimensional inference. Suppose we have come up with a large number of potential drugs, and want to see if they are effective in killing bacteria. Naively, we might run a hypothesis test on each of them, using a $p < 0.05$ threshold. But this is a terrible idea, since each test has up to a $0.05$ chance of giving a false positive, so even if all the drugs are actually useless, we would be likely to conclude, incorrectly, that many of them are useful.
+
+In general, suppose we have null hypotheses $H_1, \ldots, H_m$. By definition, a \emph{$p$-value} $p_i$ for $H_i$ is a random variable such that
+\[
+ \P_{H_i}(p_i \leq \alpha) \leq \alpha
+\]
+for all $\alpha \in [0, 1]$.
+
+Let $I_0 \subseteq \{1, \ldots, m\}$ be the set of indices of the true null hypotheses, and let $m_0 = |I_0|$. Given a procedure for rejecting hypotheses (a \term{multiple testing procedure}), we let $N$ be the number of false rejections (false positives), and $R$ the total number of rejections. One can also think about the number of false negatives, but we shall not do that here.
+
+Traditionally, multiple-testing procedures sought to control the \term{family-wise error rate} (\term{FWER}), defined by $\P(N \geq 1)$. The simplest way to control this is to use the \term{Bonferroni correction}, which rejects $H_i$ if $p_i \leq \frac{\alpha}{m}$. Usually, we might have $\alpha \sim 0.05$, so this threshold would be very small if we have lots of hypotheses (e.g.\ a million). Unsurprisingly, we have
+\begin{thm}
+ When using the Bonferroni correction, we have
+ \[
+ \mathrm{FWER} \leq \E(N) \leq \frac{m_0 \alpha}{m} \leq \alpha.
+ \]
+\end{thm}
+
+\begin{proof}
+ The first inequality is Markov's inequality, and the last is obvious. The second follows from
+ \[
+ \E(N) = \E \left(\sum_{i \in I_0} \mathbf{1}_{p_i \leq \alpha/m}\right) = \sum_{i \in I_0} \P\left(p_i \leq \frac{\alpha}{m}\right) \leq \frac{m_0 \alpha}{m}.\qedhere
+ \]
+\end{proof}
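+
+As a concrete sketch (the $p$-values below are made up), the correction itself is a one-liner:
+
+```python
+def bonferroni(p_values, alpha=0.05):
+    """Reject H_i iff p_i <= alpha / m; this controls FWER at level alpha."""
+    m = len(p_values)
+    return [p <= alpha / m for p in p_values]
+
+# With m = 4 the threshold is 0.05 / 4 = 0.0125, so only the first survives.
+print(bonferroni([0.001, 0.02, 0.04, 0.6]))  # [True, False, False, False]
+```
+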
+The Bonferroni correction is a rather conservative procedure, since all these inequalities can be quite loose. When we have a large number of hypotheses, the criterion for rejection is very strict. Can we do better?
+
+A more sophisticated approach is the \term{closed testing procedure}. For each non-empty subset $I \subseteq \{1, \ldots, m\}$, we let $H_I$ be the null hypothesis that $H_i$ is true for all $i \in I$. This is known as an \term{intersection hypothesis}. Suppose for each $I \subseteq \{1, \ldots, m\}$ non-empty, we have an $\alpha$-level test $\phi_I$ for $H_I$ (a \term{local test}), so that
+\[
+ \P_{H_I}(\phi_I = 1) \leq \alpha.
+\]
+Here $\phi_I$ takes values in $\{0, 1\}$, and $\phi_I = 1$ means rejection. The closed testing procedure then rejects $H_I$ iff for all $J \supseteq I$, we have $\phi_J = 1$.
+
+\begin{eg}
+ Consider the following tests, where the red ones are those rejected by the local tests:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [mred] at (0, 4) {$H_{1234}$};
+
+ \node [mred] at (-1.5, 3) {$H_{123}$};
+ \node [mred] at (-0.5, 3) {$H_{124}$};
+ \node [mred] at (0.5, 3) {$H_{134}$};
+ \node [mred] at (1.5, 3) {$H_{234}$};
+
+ \node [mred] at (-2.5, 2) {$H_{12}$};
+ \node [mred] at (-1.5, 2) {$H_{13}$};
+ \node [mred] at (-0.5, 2) {$H_{14}$};
+ \node [mred] at (0.5, 2) {$H_{23}$};
+ \node at (1.5, 2) {$H_{24}$};
+ \node at (2.5, 2) {$H_{34}$};
+
+ \node [mred] at (-1.5, 1) {$H_{1}$};
+ \node [mred] at (-0.5, 1) {$H_{2}$};
+ \node at (0.5, 1) {$H_{3}$};
+ \node at (1.5, 1) {$H_{4}$};
+ \end{tikzpicture}
+ \end{center}
+ In this case, we reject $H_1$ but not $H_2$ by closed testing. While $H_{23}$ is rejected, we cannot tell if it is $H_2$ or $H_3$ that should be rejected.
+\end{eg}
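+
+For small $m$, the closed testing procedure can be run by brute force over all non-empty subsets. The sketch below uses Bonferroni local tests, i.e.\ $\phi_J = 1$ iff $p_j \leq \alpha/|J|$ for some $j \in J$ (the $p$-values are hypothetical):
+
+```python
+from itertools import combinations
+
+def closed_testing(p_values, alpha=0.05):
+    """Reject H_i iff every intersection hypothesis H_J with i in J is
+    rejected by its local (Bonferroni) test."""
+    m = len(p_values)
+
+    def local_reject(J):
+        return min(p_values[j] for j in J) <= alpha / len(J)
+
+    rejections = []
+    for i in range(m):
+        others = [j for j in range(m) if j != i]
+        supersets = [(i,) + c for r in range(m) for c in combinations(others, r)]
+        rejections.append(all(local_reject(J) for J in supersets))
+    return rejections
+
+print(closed_testing([0.001, 0.01, 0.03, 0.9]))  # [True, True, False, False]
+```
+
+Note that the third hypothesis is accepted because the local test for the intersection with the fourth fails, even though its $p$-value alone is below $\alpha$.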
+
+This might seem like a very difficult procedure to analyze, but it turns out it is extremely simple.
+\begin{thm}
+ Closed testing makes no false rejections with probability $\geq 1 - \alpha$. In particular, FWER $\leq \alpha$.
+\end{thm}
+
+\begin{proof}
+ If there is a false rejection at all, say of a true hypothesis $H_i$ with $i \in I_0$, then $\phi_J = 1$ for all $J \supseteq \{i\}$; in particular, taking $J = I_0$, the local test must have falsely rejected $H_{I_0}$, which happens with probability at most $\alpha$.
+\end{proof}
+
+But of course this doesn't immediately give us an algorithm we can apply to data. Different choices for the local test give rise to different multiple testing procedures. One famous example is \term{Holm's procedure}. This takes $\phi_I$ to be the Bonferroni test, where $\phi_I = 1$ iff $p_i \leq \frac{\alpha}{|I|}$ for some $i \in I$.
+
+When $m$ is large, we don't want to compute all $\phi_I$, since there are $2^m - 1$ of them. So we might want to find a shortcut. With a moment of thought, we see that Holm's procedure amounts to the following:
+\begin{itemize}
+ \item Let $(i)$ be the index of the $i$th smallest $p$-value, so $p_{(1)} \leq p_{(2)} \leq \cdots \leq p_{(m)}$.
+ \item Step $1$: If $p_{(1)} \leq \frac{\alpha}{m}$, then reject $H_{(1)}$ and go to step $2$. Otherwise, accept all null hypotheses.
+ \item Step $i$: If $p_{(i)} \leq \frac{\alpha}{m - i + 1}$, then reject $H_{(i)}$ and go to step $i + 1$. Otherwise, accept $H_{(i)}, H_{(i + 1)}, \ldots, H_{(m)}$.
+ \item Step $m$: If $p_{(m)} \leq \alpha$, then reject $H_{(m)}$. Otherwise, accept $H_{(m)}$.
+\end{itemize}
+The interesting thing about this is that it has the same bound on FWER as the Bonferroni correction, but the rejection thresholds here are more lenient.
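+
+The step-down description translates directly into code (a sketch with hypothetical $p$-values):
+
+```python
+def holm(p_values, alpha=0.05):
+    """Holm's step-down procedure; controls FWER at level alpha."""
+    m = len(p_values)
+    order = sorted(range(m), key=lambda i: p_values[i])
+    reject = [False] * m
+    for step, i in enumerate(order):        # step uses threshold alpha/(m - step)
+        if p_values[i] <= alpha / (m - step):
+            reject[i] = True
+        else:
+            break                           # accept this and all remaining hypotheses
+    return reject
+
+print(holm([0.01, 0.001, 0.03, 0.9]))  # [True, True, False, False]
+```
+
+Holm's thresholds $\alpha/(m - i + 1)$ grow as we step down, so Holm always rejects at least as much as the plain Bonferroni correction.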
+
+But if $m$ is very large, then the criterion for rejecting $H_{(1)}$ is still quite strict. The problem is that controlling FWER is a very strong condition. Instead of controlling the probability of at least one false rejection, when $m$ is large, it might be more reasonable to control the proportion of false discoveries. Many modern multiple testing procedures aim to control the \term{false discovery rate}
+\[
+ \mathrm{FDR} = \E\left(\frac{N}{\max \{R, 1\}}\right).
+\]
+The funny maximum in the denominator is just to avoid division by zero. When $R = 0$, then we must have $N = 0$ as well, so what is put in the denominator doesn't really matter.
+
+The \term{Benjamini--Hochberg procedure} attempts to control the FDR at level $\alpha$ and works as follows:
+\begin{itemize}
+ \item Let $\hat{k} = \max \left\{i: p_{(i)} \leq \frac{\alpha i}{m}\right\}$. Then reject $H_{(1)}, \ldots, H_{(\hat{k})}$, or accept all hypotheses if the set is empty.
+\end{itemize}
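+
+A sketch of the procedure (with hypothetical $p$-values):
+
+```python
+def benjamini_hochberg(p_values, alpha=0.05):
+    """Reject the hypotheses with the k smallest p-values, where
+    k = max{i : p_(i) <= alpha * i / m}, or nothing if that set is empty."""
+    m = len(p_values)
+    order = sorted(range(m), key=lambda i: p_values[i])
+    k = 0
+    for i, idx in enumerate(order, start=1):   # i-th smallest p-value
+        if p_values[idx] <= alpha * i / m:
+            k = i
+    reject = [False] * m
+    for idx in order[:k]:
+        reject[idx] = True
+    return reject
+
+print(benjamini_hochberg([0.01, 0.02, 0.03, 0.04]))  # [True, True, True, True]
+```
+
+On these four $p$-values, Holm's procedure would reject only the smallest (since $0.02 > 0.05/3$), while Benjamini--Hochberg rejects all four; the price is that we only control the FDR rather than the FWER.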
+
+Under certain conditions, this does control the false discovery rate.
+\begin{thm}
+ Suppose that for each $i \in I_0$, $p_i$ is independent of $\{p_j: j \not= i\}$. Then using the Benjamini--Hochberg procedure, the false discovery rate satisfies
+ \[
+ \mathrm{FDR} = \E \frac{N}{\max (R, 1)} \leq \frac{\alpha m_0}{m} \leq \alpha.
+ \]
+\end{thm}
+Curiously, while the proof requires the $p_i$ to be independent of the others, in simulations, even when there is no hope that the $p_i$ are independent, the Benjamini--Hochberg procedure still appears to work very well, and people are still trying to understand what makes it work so well in general.
+
+\begin{proof}
+ The false discovery rate is
+ \begin{align*}
+ \E \frac{N}{\max (R, 1)} &= \sum_{r = 1}^m \E \frac{N}{r} \mathbf{1}_{R = r}\\
+ &= \sum_{r = 1}^m \frac{1}{r} \E \sum_{i \in I_0} \mathbf{1}_{p_i \leq \alpha r/m} \mathbf{1}_{R = r}\\
+ &= \sum_{i \in I_0} \sum_{r = 1}^m \frac{1}{r} \P\left(p_i \leq \frac{\alpha r}{m}, R = r\right).
+ \end{align*}
+ The brilliant idea is, for each $i \in I_0$, to let $R_i$ be the number of rejections when applying a modified Benjamini--Hochberg procedure to $p^{\setminus i} = \{p_1, \ldots, p_m\} \setminus \{p_i\}$ with cutoff
+ \[
+ \hat{k}_i = \max \left\{j: p_{(j)}^{\setminus i} \leq \frac{\alpha (j + 1)}{m}\right\}.
+ \]
+ We observe that for $i \in I_0$ and $r \geq 1$, we have
+ \begin{align*}
+ \left\{ p_i \leq \frac{\alpha r}{m},\, R = r\right\} &= \left\{ p_i \leq \frac{ar}{m},\, p_{(r)} \leq \frac{\alpha r}{m},\, p_{(s)} > \frac{\alpha s}{m}\text{ for all }s \geq r\right\}\\
+ &= \left\{p_i \leq \frac{\alpha r}{m},\, p_{(r - 1)}^{\setminus i} \leq \frac{\alpha r}{m},\, p^{\setminus i}_{(s - 1)} > \frac{\alpha s}{m}\text{ for all }s > r\right\}\\
+ &= \left\{p_i \leq \frac{\alpha r}{m},\, R_i = r - 1\right\}.
+ \end{align*}
+ The key point is that $R_i = r - 1$ depends only on the other $p$-values. So the FDR is equal to
+ \begin{align*}
+ \mathrm{FDR} &= \sum_{i \in I_0} \sum_{r = 1}^m \frac{1}{r} \P\left(p_i \leq \frac{\alpha r}{m}, R_i = r - 1\right)\\
+ &= \sum_{i \in I_0} \sum_{r = 1}^m \frac{1}{r} \P \left(p_i \leq \frac{\alpha r}{m}\right) \P(R_i = r - 1)\\
+ \intertext{Using that $\P(p_i \leq \frac{\alpha r}{m}) \leq \frac{\alpha r}{m}$ by definition, this is}
+ &\leq \frac{\alpha }{m} \sum_{i \in I_0} \sum_{r = 1}^m \P(R_i = r - 1)\\
+ &= \frac{\alpha}{m} \sum_{i \in I_0} \P(R_i \in \{0, \ldots, m - 1\})\\
+ &= \frac{\alpha m_0}{m}.\qedhere
+ \end{align*}
+\end{proof}
+This is one of the most used procedures in modern statistics, especially in the biological sciences.
+
+
+\subsection{Inference in high-dimensional regression}
+We now have a more-or-less complete answer to how to do hypothesis testing, given that we know how to obtain $p$-values. But how do we obtain them in the first place?
+
+For example, we might be trying to do regression, and want to figure out which coefficients are non-zero. Consider the normal linear model $Y = X\beta^0 + \varepsilon$, where $\varepsilon \sim N_n(0, \sigma^2 I)$. In the low-dimensional setting, we have $\sqrt{n}(\hat{\beta}^{OLS} - \beta^0) \sim N_p(0, \sigma^2(\frac{1}{n} X^T X)^{-1})$. Since this does not depend on $\beta^0$, we can use it to form confidence intervals and hypothesis tests.
+
+However, if we have more coefficients than there are data points, then we can't do ordinary least squares. So we need to look for something else. For example, we might want to replace the OLS estimate with the Lasso estimate. However, $\sqrt{n}(\hat{\beta}_\lambda^L - \beta^0)$ has an intractable distribution. In particular, since $\hat{\beta}_\lambda^L$ has a bias, the distribution will depend on $\beta^0$ in a complicated way.
+
+The recently introduced \term{debiased Lasso} tries to overcome these issues. See \emph{van de Geer, B\"uhlmann, Ritov, Dezeure} (2014) for more details. Let $\hat{\beta}$ be the Lasso solution at $\lambda > 0$. Recall the KKT conditions, which say that $\hat{\nu}$ defined by
+\[
+ \frac{1}{n} X^T (Y - X \hat{\beta}) = \lambda \hat{\nu}
+\]
+satisfies $\|\hat{\nu}\|_\infty \leq 1$ and $\hat{\nu}_{\hat{S}} = \sgn (\hat{\beta}_{\hat{S}})$, where $\hat{S} = \{k : \hat{\beta}_k \not= 0\}$.
+
+We set $\hat{\Sigma} = \frac{1}{n} X^T X$. Then we can rewrite the KKT conditions as
+\[
+ \hat{\Sigma} (\hat{\beta} - \beta^0) + \lambda \hat{\nu} = \frac{1}{n} X^T \varepsilon.
+\]
+What we are trying to understand is $\hat{\beta} - \beta^0$. So it would be nice if we could find some sort of inverse to $\hat{\Sigma}$. If so, then $\hat{\beta} - \beta^0$ plus some correction term involving $\hat{\nu}$ would be Gaussian.
+
+Of course, the problem is that in the high-dimensional setting, $\hat{\Sigma}$ has no hope of being invertible. So we want to find some approximate inverse $\hat{\Theta}$ so that the error we make is not too large. If we are equipped with such a $\hat{\Theta}$, then we have
+\[
+ \sqrt{n} (\hat{\beta} + \lambda \hat{\Theta} \hat{\nu} - \beta^0) = \frac{1}{\sqrt{n}} \hat{\Theta} X^T \varepsilon + \Delta,
+\]
+where
+\[
+ \Delta = \sqrt{n}(\hat{\Theta} \hat{\Sigma} - I)(\beta^0 - \hat{\beta}).
+\]
+We hope we can choose $\hat{\Theta}$ so that $\Delta$ is small. We can then use the quantity
+\[
+ \hat{b} = \hat{\beta} + \lambda \hat{\Theta} \hat{\nu} = \hat{\beta} + \frac{1}{n} \hat{\Theta} X^T (Y - X \hat{\beta})
+\]
+as our modified estimator, called the \term{debiased Lasso}.
+
+How do we bound $\Delta$? We already know that (under compatibility and sparsity conditions), we can make the $\ell_1$ norm $\|\beta^0 - \hat{\beta}\|_1$ small with high probability. So if the $\ell_\infty$ norm of each of the rows of $\hat{\Theta} \hat{\Sigma} - I$ is small, then H\"older allows us to bound $\Delta$.
+
+Write $\hat{\theta}_j$ for the $j$th row of $\hat{\Theta}$. Then
+\[
+ \|(\hat{\Sigma} \hat{\Theta}^T - I)_j \|_\infty \leq \eta
+\]
+is equivalent to $|(\hat{\Sigma} \hat{\Theta}^T)_{kj}| \leq \eta$ for $k \not= j$ and $|(\hat{\Sigma} \hat{\Theta}^T)_{jj} - 1 | \leq \eta$. Using the definition of $\hat{\Sigma}$, these are equivalent to
+\[
+ \frac{1}{n}|X_k^T X \hat{\theta}_j| \leq \eta,\quad \left|\frac{1}{n} X^T_j X \hat{\theta}_j - 1 \right| \leq \eta.
+\]
+The first is the same as saying
+\[
+ \frac{1}{n} \|X_{-j}^T X \hat{\theta}_j\|_\infty \leq \eta.
+\]
+This is quite reminiscent of the KKT conditions for the Lasso. So let us define
+\begin{align*}
+ \hat{\gamma}^{(j)} &= \argmin_{\gamma \in \R^{p - 1}} \left\{\frac{1}{2n} \|X_j - X_{-j} \gamma\|_2^2 + \lambda_j \|\gamma\|_1\right\}\\
+ \hat{\tau}_j^2 &= \frac{1}{n} X_j^T (X_j - X_{-j} \hat{\gamma}^{(j)}) = \frac{1}{n} \|X_j - X_{-j} \hat{\gamma}^{(j)}\|_2^2 + \lambda_j \|\hat{\gamma}^{(j)}\|_1.
+\end{align*}
+The second equality is an exercise on the example sheet.
+
+We can then set
+\[
+ \hat{\theta}_j = -\frac{1}{\hat{\tau}_j^2} (\hat{\gamma}_1^{(j)}, \ldots, \hat{\gamma}_{j - 1}^{(j)}, -1, \hat{\gamma}_j^{(j)}, \ldots, \hat{\gamma}_{p - 1}^{(j)})^T.
+\]
+The factor of $\hat{\tau}_j^{-2}$ is there so that the second condition holds: in fact, with this choice $\frac{1}{n} X_j^T X \hat{\theta}_j = 1$ exactly.
+
+Then by construction, we have
+\[
+ X \hat{\theta}_j = \frac{X_j - X_{-j} \hat{\gamma}^{(j)}}{X^T_j(X_j - X_{-j} \hat{\gamma}^{(j)})/n}.
+\]
+Thus, we have $\frac{1}{n} X_j^T X \hat{\theta}_j = 1$, and by the KKT conditions for the Lasso regression of $X_j$ on $X_{-j}$, we have
+\[
+ \frac{\hat{\tau}_j^2}{n} \|X_{-j}^T X \hat{\theta}_j\|_{\infty} \leq \lambda_j.
+\]
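+
+The normalization can be checked numerically: for \emph{any} vector $\gamma$ in place of the Lasso fit $\hat{\gamma}^{(j)}$, defining $\hat{\tau}_j^2 = \frac{1}{n} X_j^T(X_j - X_{-j}\gamma)$ and $\hat{\theta}_j$ as above forces $\frac{1}{n} X_j^T X \hat{\theta}_j = 1$. A pure-Python sketch with made-up data (the design and $\gamma$ are arbitrary stand-ins, not an actual Lasso solution):
+
+```python
+# Tiny made-up design matrix (n = 6, p = 4); we treat column j = 1 as X_j.
+X = [[ 0.5,  1.2, -0.3,  0.7],
+     [-1.1,  0.4,  0.9, -0.2],
+     [ 0.3, -0.8,  1.5,  0.6],
+     [ 1.0,  0.2, -0.7, -1.3],
+     [-0.4,  1.1,  0.8,  0.5],
+     [ 0.9, -0.6,  0.1,  1.4]]
+gamma = [0.3, -0.5, 0.2]          # stand-in for the Lasso fit gamma-hat^(j)
+n, p, j = 6, 4, 1
+
+X_mj = [[row[k] for k in range(p) if k != j] for row in X]   # X_{-j}
+
+# residual r = X_j - X_{-j} gamma, and tau^2 = X_j^T r / n
+r = [X[i][j] - sum(X_mj[i][k] * gamma[k] for k in range(p - 1)) for i in range(n)]
+tau2 = sum(X[i][j] * r[i] for i in range(n)) / n
+
+# theta_j has entry 1/tau^2 in position j and -gamma_k/tau^2 elsewhere,
+# so that X theta_j = r / tau^2
+theta = [-g / tau2 for g in gamma]
+theta.insert(j, 1 / tau2)
+
+check = sum(X[i][j] * sum(X[i][k] * theta[k] for k in range(p)) for i in range(n)) / n
+print(abs(check - 1) < 1e-9)   # True
+```
+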
+Thus, with the choice of $\hat{\Theta}$ above, we have
+\[
+ \|\Delta\|_{\infty} \leq \sqrt{n} \|\hat{\beta} - \beta^0\|_1 \max_j \frac{\lambda_j}{\hat{\tau}_j^2}.
+\]
+Now this is good as long as we can ensure that $\frac{\lambda_j}{\hat{\tau}_j^2}$ is small. When is this true?
+
+We can consider a random design setting, where each row of $X$ is iid $N_p(0, \Sigma)$ for some positive-definite $\Sigma$. Write $\Omega = \Sigma^{-1}$.
+
+Then by our study of the neighbourhood selection procedure, we know that for each $j$, we can write
+\[
+ X_j = X_{-j} \gamma^{(j)} + \varepsilon^{(j)},
+\]
+where $\varepsilon_i^{(j)}\mid X_{-j} \sim N(0, \Omega_{jj}^{-1})$ are iid and $\gamma^{(j)} = - \Omega_{jj}^{-1} \Omega_{-j, j}$. To apply our results, we need to ensure that $\gamma^{(j)}$ are sparse. Let us therefore define
+\[
+ s_j = \sum_{k \not= j} \mathbf{1}_{\Omega_{jk} \not =0},
+\]
+and set $s_{max} = \max (\max_j s_j, s)$.
+
+\begin{thm}
+ Suppose the minimum eigenvalue of $\Sigma$ is at least $c_{min} > 0$ and $\max_j \Sigma_{jj} \leq 1$. Suppose further that $s_{max} \sqrt{\log (p)/n} \to 0$. Then there exist constants $A_1, A_2$ such that, setting $\lambda = \lambda_j = A_1 \sqrt{\log(p) /n}$, we have
+ \begin{align*}
+ \sqrt{n} (\hat{b} - \beta^0) &= W + \Delta\\
+ W\mid X &\sim N_p(0, \sigma^2 \hat{\Theta} \hat{\Sigma} \hat{\Theta}^T),
+ \end{align*}
+ and as $n, p \to \infty$,
+ \[
+ \P\left(\|\Delta\|_\infty > A_2 s \frac{\log(p)}{\sqrt{n}}\right) \to 0.
+ \]
+\end{thm}
+Note that here $X$ is not centered and scaled.
+
+We see that in particular, $\sqrt{n}(\hat{b}_j - \beta_j^0) \sim N(0, \sigma^2 d_j)$, where $d_j = (\hat{\Theta} \hat{\Sigma} \hat{\Theta}^T)_{jj}$. In fact, one can show that
+\[
+ d_j = \frac{1}{n} \frac{\|X_j - X_{-j} \hat{\gamma}^{(j)}\|_2^2}{\hat{\tau}_j^4}.
+\]
+This suggests an approximate $(1 - \alpha)$-level confidence interval for $\beta^0_j$,
+\[
+ \mathrm{CI} = \left(\hat{b}_j - Z_{\alpha/2} \sigma \sqrt{d_j/n}, \hat{b}_j + Z_{\alpha/2} \sigma \sqrt{d_j/n}\right),
+\]
+where $Z_\alpha$ is the upper $\alpha$ point of $N(0, 1)$. Note that here we are getting confidence intervals of width $\sim \sqrt{1/n}$. In particular, there is no $\log p$ dependence if we are only trying to estimate a single coefficient $\beta^0_j$.
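+
+Given $\sigma$ and $d_j$, the interval is directly computable; \texttt{statistics.NormalDist} provides the normal quantile. A sketch (all numeric inputs are hypothetical):
+
+```python
+from math import sqrt
+from statistics import NormalDist
+
+def debiased_ci(b_j, d_j, sigma, n, alpha=0.05):
+    """Approximate (1 - alpha)-level interval for beta^0_j, based on
+    sqrt(n) (b_j - beta^0_j) being approximately N(0, sigma^2 d_j)."""
+    z = NormalDist().inv_cdf(1 - alpha / 2)   # upper alpha/2 point of N(0, 1)
+    half = z * sigma * sqrt(d_j / n)
+    return b_j - half, b_j + half
+
+lo, hi = debiased_ci(b_j=0.8, d_j=1.3, sigma=1.0, n=400)
+print(lo, hi)   # width ~ 1/sqrt(n), with no log(p) factor
+```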
+
+\begin{proof}
+ Consider the sequence of events $\Lambda_n$ defined by the following properties:
+ \begin{itemize}
+ \item $\phi^2_{\hat{\Sigma}, s} \geq c_{min}/2$ and $\phi^2_{\hat{\Sigma}_{-j, -j}, s_j} \geq c_{min}/2$ for all $j$;
+ \item $\frac{2}{n} \|X^T \varepsilon\|_\infty \leq \lambda$ and $\frac{2}{n} \|X_{-j}^T \varepsilon^{(j)} \|_\infty \leq \lambda$ for all $j$;
+ \item $\frac{1}{n} \|\varepsilon^{(j)}\|_2^2 \geq \Omega_{jj}^{-1} \left(1 - 4 \sqrt{(\log p) / n}\right)$ for all $j$.
+ \end{itemize}
+% The idea of the last condition is that we hope $\|X_j - X_{-j} \hat{\gamma}^{(j)}\|_2^2$ would be close to $\varepsilon$, so we want $\hat{\gamma}^{(j)}$ to be not too far off from the what it ought to be, and
+
+ Question 13 on example sheet 4 shows that $\P(\Lambda_n) \to 1$ for $A_1$ sufficiently large. So we will work on the event $\Lambda_n$.
+
+ By our results on the Lasso, we know
+ \[
+ \|\beta^0 - \hat{\beta}\|_1 \leq c_1 s \sqrt{\log p / n}.
+ \]
+ for some constant $c_1$. We now seek a lower bound for $\hat{\tau}_j^2$. Consider the linear model
+ \[
+ X_j = X_{-j} \gamma^{(j)} + \varepsilon^{(j)},
+ \]
+ where the sparsity of $\gamma^{(j)}$ is $s_j$, and $\varepsilon_i^{(j)}| X_{-j} \sim N(0, \Omega_{jj}^{-1})$. Note that
+ \[
+ \Omega_{jj}^{-1} = \var (X_{ij} \mid X_{i, -j}) \leq \var(X_{ij}) = \Sigma_{jj} \leq 1.
+ \]
+ Also, the maximum eigenvalue of $\Omega$ is at most $c_{min}^{-1}$, so $\Omega_{jj} \leq c_{min}^{-1}$, and hence $\Omega_{jj}^{-1} \geq c_{min}$. So by Lasso theory, we know
+ \[
+ \|\gamma^{(j)} - \hat{\gamma}^{(j)}\|_1 \leq c_2 s_j \sqrt{\frac{\log p}{n}}
+ \]
+ for some constant $c_2$. Then we have
+ \begin{align*}
+ \hat{\tau}_j^2 &= \frac{1}{n} \|X_j - X_{-j} \hat{\gamma}^{(j)} \|_2^2 + \lambda \|\hat{\gamma}^{(j)} \|_1\\
+ &\geq \frac{1}{n} \|\varepsilon^{(j)} + X_{-j} (\gamma^{(j)} - \hat{\gamma}^{(j)})\|_2^2\\
+ &\geq \frac{1}{n} \|\varepsilon^{(j)}\|_2^2 - \frac{2}{n} \|X_{-j}^T \varepsilon^{(j)}\|_\infty \|\gamma^{(j)} - \hat{\gamma}^{(j)}\|_1\\
+ &\geq \Omega_{jj}^{-1} \left(1 - 4\sqrt{\frac{\log p}{n}}\right) - A_1 c_2 s_j \frac{\log p}{n}.
+ \end{align*}
+ In the limit, this tends to $\Omega_{jj}^{-1}$. So for large $n$, this is $\geq \frac{1}{2} \Omega_{jj}^{-1} \geq \frac{1}{2} c_{min}$.
+
+ Thus, we have
+ \[
+ \|\Delta\|_\infty \leq 2\lambda \sqrt{n} c_1 s \sqrt{\frac{\log p}{n}} c_{min}^{-1} = A_2 s \frac{\log p}{\sqrt{n}}.\qedhere
+ \]
+\end{proof}
+
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/percolation_and_random_walks_on_graphs.tex b/books/cam/III_M/percolation_and_random_walks_on_graphs.tex
new file mode 100644
index 0000000000000000000000000000000000000000..deb8406a63c32622eb071b5e86e1ad5ec0cd2201
--- /dev/null
+++ b/books/cam/III_M/percolation_and_random_walks_on_graphs.tex
@@ -0,0 +1,2360 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2017}
+\def\nlecturer {P. Sousi}
+\def\ncourse {Percolation and Random Walks on Graphs}
+\def\nofficial {http://www.statslab.cam.ac.uk/~ps422/percolation.html}
+\input{header}
+
+\let\div\relax
+\DeclareMathOperator\div{div}
+
+\renewcommand\L{\mathbb{L}}
+\newcommand\Pci[1]{\P_{#1}(|\mathcal{C}(0)| = \infty)}
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+A phase transition means that a system undergoes a radical change when a continuous parameter passes through a critical value. We encounter such a transition every day when we boil water. The simplest mathematical model for phase transition is percolation. Percolation has a reputation as a source of beautiful mathematical problems that are simple to state but seem to require new techniques for a solution, and a number of such problems remain very much alive. Amongst connections of topical importance are the relationships to so-called Schramm--Loewner evolutions (SLE), and to other models from statistical physics. The basic theory of percolation will be described in this course with some emphasis on areas for future development.
+
+Our other major topic includes random walks on graphs and their intimate connection to electrical networks; the resulting discrete potential theory has strong connections with classical potential theory. We will develop tools to determine transience and recurrence of random walks on infinite graphs. Other topics include the study of spanning trees of connected graphs. We will present two remarkable algorithms to generate a uniform spanning tree (UST) in a finite graph $G$ via random walks, one due to Aldous-Broder and another due to Wilson. These algorithms can be used to prove an important property of uniform spanning trees discovered by Kirchhoff in the 19th century: the probability that an edge is contained in the UST of $G$, equals the effective resistance between the endpoints of that edge.
+
+\subsubsection*{Pre-requisites}
+There are no essential pre-requisites beyond probability and analysis at undergraduate levels, but a familiarity with the measure-theoretic basis of probability will be helpful.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+This course is naturally divided into two parts --- percolation, and random walks on graphs. Percolation is one of the simplest models that exhibit a phase transition --- an abrupt change in quantitative behaviour due to a continuous change of a parameter. More sophisticated examples of phase transitions include the boiling of water and the loss of long-range correlation in magnets when temperature increases.
+
+But we are not physicists. So let's talk about percolation. For reasons that become clear later, consider an $n \times (n + 1)$ lattice connected by edges:
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (0, 0) rectangle (7, 6);
+ \foreach \x in {1,2,3,4,5,6}{
+ \draw (\x, 0) -- (\x, 6);
+ \draw (0, \x) -- (7, \x);
+ }
+ \draw (7, 0) -- (7, 6);
+ \foreach \x in {0,...,7}{
+ \foreach \y in {0,...,6}{
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \end{tikzpicture}
+\end{center}
+We now fix some $p \in [0, 1]$, and for each edge in the graph, we keep it with probability $p$ and remove it with probability $1 - p$, independently of all other edges. There are many questions we may ask. For example, we may ask for the probability that there is a left-to-right crossing of open edges.
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw [gray] (0, 0) rectangle (7, 6);
+ \foreach \x in {1,2,3,4,5,6}{
+ \draw [gray] (\x, 0) -- (\x, 6);
+ \draw [gray] (0, \x) -- (7, \x);
+ }
+ \draw [gray] (7, 0) -- (7, 6);
+ \foreach \x in {0,...,7}{
+ \foreach \y in {0,...,6}{
+ \node [gray, circ] at (\x, \y) {};
+ }
+ }
+ \draw [thick, mred] (0, 3) node [circ] {} -- (1, 3) node [circ] {} -- (1, 4) node [circ] {} -- (2, 4) node [circ] {} -- (3, 4) node [circ] {} -- (3, 3) node [circ] {} -- (3, 2) node [circ] {} -- (4, 2) node [circ] {} -- (4, 3) node [circ] {} -- (5, 3) node [circ] {} -- (6, 3) node [circ] {} -- (7, 3) node [circ] {};
+ \end{tikzpicture}
+\end{center}
+Write $f_n(p)$ for the probability that there is such a left-right crossing. For example, we have $f_n(0) = 0$ and $f_n(1) = 1$.
+
+An interesting choice of $p$ to consider is $p = \frac{1}{2}$. We can argue that $f_n(\frac{1}{2})$ must be $\frac{1}{2}$, by symmetry. More precisely, consider the dual lattice:
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw [mblue] (0.5, -0.5) rectangle (6.5, 6.5);
+ \foreach \x in {1,2,3,4,5,6}{
+ \draw [mblue] (\x-0.5, -0.5) -- (\x - 0.5, 6.5);
+ \draw [mblue] (0.5, \x - 0.5) -- (6.5, \x - 0.5);
+ }
+
+ \draw [gray, opacity=0.5] (0, 0) rectangle (7, 6);
+ \foreach \x in {1,2,3,4,5,6}{
+ \draw [gray, opacity=0.5] (\x, 0) -- (\x, 6);
+ \draw [gray, opacity=0.5] (0, \x) -- (7, \x);
+ }
+ \draw [gray, opacity=0.5] (7, 0) -- (7, 6);
+ \foreach \x in {0,...,7}{
+ \foreach \y in {0,...,6}{
+ \node [gray, circ, opacity=0.5] at (\x, \y) {};
+ }
+ }
+
+ \foreach \x in {0,...,7}{
+ \foreach \y in {0,...,6}{
+ \node [mblue, opacity=0.6, circ] at (\y + 0.5, \x - 0.5) {};
+ }
+ }
+
+ \draw [thick, mred] (0, 3) node [circ] {} -- (1, 3) node [circ] {} -- (1, 4) node [circ] {} -- (2, 4) node [circ] {} -- (3, 4) node [circ] {} -- (3, 3) node [circ] {} -- (3, 2) node [circ] {} -- (4, 2) node [circ] {} -- (4, 3) node [circ] {} -- (5, 3) node [circ] {} -- (6, 3) node [circ] {} -- (7, 3) node [circ] {};
+ \end{tikzpicture}
+\end{center}
+Note that this lattice is isomorphic to the original lattice we were thinking about, by applying a rotation. Now each edge in the dual lattice crosses exactly one edge in the original lattice, and we declare an edge in the dual to be open iff the corresponding edge in the original lattice is closed. Since the original edges are open with probability $\frac{1}{2}$, this gives rise to a percolation on the dual lattice with $p = \frac{1}{2}$ as well.
+
+Now notice that there is a left-right crossing of open edges in the original lattice iff there is no top-bottom crossing of open edges in the dual lattice. Since the dual and original lattices are isomorphic (with the roles of left-right and top-bottom swapped), the probability of a top-bottom crossing of the dual equals the probability of a left-right crossing of the original. It follows that the probability that there is a left-right crossing in the original lattice is equal to the probability that there is \emph{no} left-right crossing in the original lattice. So both of these must be $\frac{1}{2}$.
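+
+The self-duality argument predicts that \emph{exactly} half of all $2^{|E|}$ edge configurations contain a left-right crossing. For small lattices this can be verified by exhaustive enumeration; the sketch below takes the vertex grid to have $n + 1$ columns and $n$ rows, matching the $n \times (n + 1)$ lattice above:
+
+```python
+from itertools import product
+
+def edges_of_grid(w, h):
+    """Edges of the w-by-h grid of vertices (w columns, h rows)."""
+    E = []
+    for x in range(w):
+        for y in range(h):
+            if x + 1 < w:
+                E.append(((x, y), (x + 1, y)))
+            if y + 1 < h:
+                E.append(((x, y), (x, y + 1)))
+    return E
+
+def has_lr_crossing(w, h, E, state):
+    """Is there a path of open edges from the column x = 0 to x = w - 1?"""
+    adj = {}
+    for (u, v), s in zip(E, state):
+        if s:
+            adj.setdefault(u, []).append(v)
+            adj.setdefault(v, []).append(u)
+    stack = [(0, y) for y in range(h)]
+    seen = set(stack)
+    while stack:
+        u = stack.pop()
+        if u[0] == w - 1:
+            return True
+        for v in adj.get(u, []):
+            if v not in seen:
+                seen.add(v)
+                stack.append(v)
+    return False
+
+def crossing_count(n):
+    w, h = n + 1, n                  # (n + 1) columns, n rows of vertices
+    E = edges_of_grid(w, h)
+    hits = sum(has_lr_crossing(w, h, E, state)
+               for state in product((0, 1), repeat=len(E)))
+    return hits, 2 ** len(E)
+
+print(crossing_count(2))   # (64, 128): exactly half the configurations cross
+```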
+
+The ability to talk about the dual graph is a very important property that is only true in $2$ dimensions. In general, there are many things known for $2$ dimensions via the dual, which do not generalize to higher dimensions.
+
+The other topic we are going to discuss is random walks on graphs. In IB Markov chains, and maybe IA Probability, we considered random walks on the integer lattice $\Z^d$. Here we shall consider random walks on \emph{any graph}. We shall mostly think about finite graphs, but we will also talk about how certain results can be extended to infinite graphs. It turns out that a rather useful way of thinking about this is to think of the graph as representing an electrical network. Then many concepts familiar from high school physics translate to interesting properties about the graph, and importing well-known (and elementary) results from electrical networks helps us understand graphs better.
+
+\section{Percolation}
+\subsection{The critical probability}
+There are two models of percolation --- \term{bond percolation} and \term{site percolation}. In this course, we will focus on bond percolation, but we will look at site percolation in the example sheets.
+
+The very basic set up of percolation theory involves picking a \term{graph} $G = (V, E)$, where \term{$V$} is the set of \term{vertices} and \term{$E$} is the set of \term{edges}. We also pick a \term{percolation probability} $p \in [0, 1]$. For each edge $e \in E$, we keep it with probability $p$ and throw it away with probability $1 - p$. In the first case, we say the edge is \emph{open}\index{open edge}\index{edge!open}, and in the latter, we say it is \emph{closed}\index{closed edge}\index{edge!closed}.
+
+More precisely, we define the probability space to be $\Omega= \{0, 1\}^E$, where $0$ denotes a closed edge and $1$ denotes an open one (in the case of site percolation, we have $\Omega = \{0, 1\}^V$). We endow $\Omega$ with the $\sigma$-algebra generated by \term{cylinder sets}
+\[
+ \{\omega \in \Omega: \omega(e) = x_e \text{ for all }e \in A\},
+\]
+where $A$ is a finite set and $x_e \in \{0, 1\}$ for all $e$. In other words, this is the product $\sigma$-algebra. As probability measure, we take the product measure $\P_p$, i.e.\ every edge is $1$ with probability $p$ and $0$ with probability $1 - p$. We will write \index{$\eta_p$}$\eta_p \in \{0, 1\}^E$ for the state of the system.
+
+Now what can we say about the graph resulting from this process? One question we may ask is whether we can connect two points in the graphs via the edges that remain. To further the discussion, we introduce some notation.
+
+\begin{notation}
+ We write \term{$x \leftrightarrow y$} if there is an open path of edges from $x$ to $y$.
+\end{notation}
+
+\begin{notation}
+ We write $\mathcal{C}(x) = \{y \in V: y \leftrightarrow x\}$\index{$\mathcal{C}(x)$}, the \term{cluster} of $x$.
+\end{notation}
+
+\begin{notation}
+ We write $x \leftrightarrow \infty$ if $|\mathcal{C}(x)| = \infty$.
+\end{notation}
+
+From now on, we shall take $G = \L^d = (\Z^d, E(\Z^d))$, the $d$-dimensional integer lattice.\index{$\L^d$} Then by translation invariance, $|\mathcal{C}(x)|$ has the same distribution as $|\mathcal{C}(0)|$ for all $x$. We now introduce a key piece of notation:
+\begin{defi}[$\theta(p)$]\index{$\theta_p$}
+ We define $\theta(p) = \P_p(|\mathcal{C}(0)| = \infty)$.
+\end{defi}
+
+Most of the questions we ask surround this $\theta(p)$. We first make the most elementary observations:
+\begin{eg}
+ $\theta(0) = 0$ and $\theta(1) = 1$.
+\end{eg}
+
+A natural question to ask is then if we can find $p \in (0, 1)$ such that $\theta(p) > 0$. But even before answering that question, we can ask a more elementary one --- is $\theta$ an increasing function of $p$?
+
+Intuitively, it must be. And we can prove it. The proof strategy is known as \term{coupling}. We have already seen coupling in IB Markov Chains, where we used it to prove the convergence to the invariant distribution under suitable conditions. Here we are going to couple all percolation processes for different values of $p$.
+
+\begin{lemma}
+ $\theta$ is an increasing function of $p$.
+\end{lemma}
+
+\begin{proof}
+ We let $(U(e))_{e \in E(\Z^d)}$ be iid $U[0, 1]$ random variables. For each $p \in [0, 1]$, we define
+ \[
+ \eta_p(e) =
+ \begin{cases}
+ 1 & U(e) \leq p\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ Then $\P(\eta_p(e) = 1) = \P(U(e) \leq p) = p$. Since the $U(e)$ are independent, so are the $\eta_p(e)$. Thus $\eta_p$ has the law of bond percolation with parameter $p$.
+
+ Moreover, if $p \leq q$, then $\eta_p(e) \leq \eta_q(e)$ for every edge $e$. So every open cluster of $\eta_p$ is contained in an open cluster of $\eta_q$, and hence $\theta(p) \leq \theta(q)$.
+\end{proof}
+Note that this is not only useful as a theoretical tool. If we want to simulate percolation with different probabilities $p$, we can simply generate a set of $U[0, 1]$ variables, and use it to produce a percolation for all $p$.
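+
+The coupling is easy to simulate: one draw of the $U(e)$'s determines the configuration for every $p$ simultaneously, and monotonicity holds pointwise. A toy sketch on a path graph (the graph and seed are arbitrary choices):
+
+```python
+import random
+
+random.seed(42)
+edges = [(i, i + 1) for i in range(20)]      # a toy path graph
+U = {e: random.random() for e in edges}      # one U[0, 1] variable per edge
+
+def eta(p):
+    """Percolation configuration at parameter p from the shared U's."""
+    return {e: int(U[e] <= p) for e in edges}
+
+# Raising p can only open more edges, never close any.
+for p, q in [(0.2, 0.5), (0.5, 0.8), (0.3, 0.9)]:
+    assert all(eta(p)[e] <= eta(q)[e] for e in edges)
+print("monotone coupling verified")
+```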
+
+If we wish, we can provide an abstract definition of what coupling is, but the detailed definition is not of much practical use:
+\begin{defi}[Coupling]\index{coupling}
+ Let $\mu$ and $\nu$ be two probability measures on (potentially) different probability spaces. A \emph{coupling} is a pair of random variables $(X, Y)$ defined on the same probability space such that the marginal distribution of $X$ is $\mu$ and the marginal distribution of $Y$ is $\nu$.
+\end{defi}
+
+%The obvious first question to ask is then:
+%\begin{question}
+% Is there $p \in (0, 1)$ such that $\theta(p) > 0$?
+%\end{question}
+%
+%In the degenerate case $d = 1$, we immediately see that we have
+%\begin{eg}
+% Take $d = 1$. Then for any $p < 1$, we have $\theta(p) = 0$.
+%\end{eg}
+%
+%If $\theta(p)$ is not always zero, then there is an immediate follow-up question we can ask:
+%\begin{question}
+% Is $\theta(p)$ increasing?
+%\end{question}
+%Intuitively, this must be the case, and indeed we will show that it is true.
+%
+%Finally, a questions we can ask is:
+%\begin{question}
+% Suppose $\theta(p) > 0$. Then how many infinite components are there?
+%\end{question}
+%
+%We first answer question $2$. What we are going to use is \emph{coupling}. The definition will look a little abstract, but we will use it a lot, and we shall see how it is useful in practice.
+
+With the lemma, we can make the definition
+\begin{defi}[Critical probability]\index{Critical probability}
+ We define $p_c(d) = \sup \{p \in [0, 1]: \theta(p) = 0\}$.
+\end{defi}
+
+Recall we initially asked whether $\theta(p)$ can be non-zero for $p \in (0, 1)$. We can now rephrase and strengthen this question by asking for the value of $p_c(d)$. There are a lot more questions we can ask about $p_c$ and $\theta(p)$.
+
+For example, we know that $\theta(p)$ is a $C^\infty$ function on $(p_c, 1]$. However, we do not know if $\theta$ is continuous at $p_c$ in $d = 3$. We will see soon that $p_c = \frac{1}{2}$ in $d = 2$, but the exact value of $p_c$ is not known in higher dimensions.
+
+Let's start actually proving things about $p_c$. We previously noted that
+\begin{prop}
+ $p_c(1) = 1$.
+\end{prop}
+
+The first actually interesting theorem is the following:
+\begin{thm}
+ For all $d \geq 2$, we have $p_c(d) \in (0, 1)$.
+\end{thm}
+
+We shall break this up into two natural parts:
+\begin{lemma}
+ For $d \geq 2$, $p_c(d) > 0$.
+\end{lemma}
+
+\begin{proof}
+ Write $\Sigma_n$ for the number of open self-avoiding paths of length $n$ starting at $0$. We then note that
+ \[
+ \P_p(|\mathcal{C}(0)| = \infty) = \P_p(\forall n \geq 1: \Sigma_n \geq 1) = \lim_{n \to \infty} \P_p(\Sigma_n \geq 1) \leq \lim_{n \to \infty} \E_p[\Sigma_n].
+ \]
+ We can now compute $\E_p[\Sigma_n]$. The point is that expectation is linear, which makes this much easier to compute. We let $\sigma_n$ be the number of self-avoiding paths\index{self-avoiding paths} of length $n$ from $0$. Then we simply have
+ \[
+ \E_p[\Sigma_n] = \sigma_n p^n.
+ \]
+ We can bound $\sigma_n$ by $2d \cdot (2d - 1)^{n - 1}$, since we have $2d$ choices of the first step, and at most $2d - 1$ choices in each subsequent step. So we have
+ \[
+ \E_p[\Sigma_n] \leq 2d (2d - 1)^{n - 1} p^n = \frac{2d}{2d - 1} (p(2d - 1))^n.
+ \]
+ So if $p (2d - 1) < 1$, then $\theta(p) = 0$. So we know that
+ \[
+ p_c(d) \geq \frac{1}{2d - 1}.\qedhere
+ \]
+\end{proof}
+
+Before we move on to the other half of the theorem, we talk a bit more about \term{self-avoiding paths}.
+\begin{defi}[$\sigma_n$]\index{$\sigma_n$}
+ We write $\sigma_n$ for the number of self-avoiding paths of length $n$ starting from $0$.
+\end{defi}
+In the proof, we used the rather crude bound
+\[
+ \sigma_n \leq 2d \cdot (2d - 1)^{n - 1}.
+\]
+More generally, we can make the following bound:
+
+\begin{lemma}
+ We have $\sigma_{n + m} \leq \sigma_n \sigma_m$.
+\end{lemma}
+
+\begin{proof}
+ The first $n$ steps of a self-avoiding path of length $n + m$ starting from $0$ form a self-avoiding path of length $n$ from $0$, and the last $m$ steps form a translate of a self-avoiding path of length $m$ from $0$. This decomposition is injective, so the bound follows.
+\end{proof}
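Both the crude bound and submultiplicativity are easy to check by brute force for small $n$. The following is a quick computational sanity check (our own illustration, not part of the notes), enumerating all self-avoiding paths from the origin in $\Z^2$:

```python
# Brute-force enumeration of self-avoiding paths in Z^2 (illustration only).

def count_saw(n, path=((0, 0),)):
    """Number of self-avoiding extensions of `path` by n further steps."""
    if n == 0:
        return 1
    x, y = path[-1]
    return sum(count_saw(n - 1, path + (step,))
               for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
               if step not in path)

d = 2
sigma = {n: count_saw(n) for n in range(1, 8)}
print(sigma)  # {1: 4, 2: 12, 3: 36, 4: 100, 5: 284, 6: 780, 7: 2172}

# Crude bound sigma_n <= 2d (2d - 1)^(n - 1) used in the proof.
assert all(sigma[n] <= 2 * d * (2 * d - 1) ** (n - 1) for n in sigma)
# Submultiplicativity sigma_{n + m} <= sigma_n sigma_m.
assert all(sigma[n + m] <= sigma[n] * sigma[m]
           for n in range(1, 4) for m in range(1, 4))
```

Note how quickly the enumeration becomes infeasible: this is precisely why the asymptotics of $\sigma_n$ are studied via bounds rather than exact counts.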
+Taking the logarithm, we know that $\log \sigma_n$ is a \term{subadditive sequence}. It turns out this property alone is already quite useful. It is an exercise in analysis to prove the following lemma:
+
+\begin{lemma}[Fekete's lemma]\index{Fekete's lemma}
+ If $(a_n)$ is a subadditive sequence of real numbers, then
+ \[
+ \lim_{n \to \infty} \frac{a_n}{n} = \inf\left\{\frac{a_k}{k}: k \geq 1\right\} \in [-\infty, \infty).
+ \]
+ In particular, the limit exists.
+\end{lemma}
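As a quick numerical illustration of Fekete's lemma (a toy example of our own choosing, not from the notes), take the subadditive sequence $a_n = \sqrt{n}$: here $a_n/n = 1/\sqrt{n}$ decreases to its infimum $0$, exactly as the lemma predicts.

```python
import math

# Toy subadditive sequence: sqrt(n + m) <= sqrt(n) + sqrt(m).
a = lambda n: math.sqrt(n)

assert all(a(n + m) <= a(n) + a(m)
           for n in range(1, 40) for m in range(1, 40))

# Fekete: a_n / n converges to inf_k a_k / k, here 0, approached from above.
ratios = [a(n) / n for n in range(1, 2001)]
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))  # decreasing
assert min(ratios) == ratios[-1]
print(ratios[-1])  # 1/sqrt(2000), already close to the infimum 0
```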
+
+This allows us to define
+\begin{defi}[$\lambda$ and $\kappa$]\index{$\lambda$}\index{$\kappa$}
+ We define
+ \[
+ \lambda = \lim_{n \to \infty} \frac{\log \sigma_n}{n},\quad \kappa = e^\lambda.
+ \]
+ $\kappa$ is known as the \term{connective constant}.
+\end{defi}
+
+Then by definition, we have
+\[
+ \sigma_n = e^{n \lambda (1 + o(1))} = \kappa^{n + o(n)}
+\]
+as $n \to \infty$.
+
+Thus, asymptotically, the growth rate of $\sigma_n$ is determined by $\kappa$. It is then natural to ask for the value of $\kappa$, but unfortunately, we do not know the value of $\kappa$ for the Euclidean lattice. On the positive side, the value for the hexagonal lattice was found recently:
+\begin{thm}[Duminil-Copin, Smirnov, 2010]
+ The hexagonal lattice has
+ \[
+ \kappa_{\mathrm{hex}} = \sqrt{2 + \sqrt{2}}.
+ \]
+\end{thm}
+
+We might want to be a bit more precise about how $\sigma_n$ grows. For $d \geq 5$, we have the following theorem:
+\begin{thm}[Hara and Slade, 1991]
+ For $d \geq 5$, there exists a constant $A$ such that
+ \[
+ \sigma_n = A \kappa^n (1 + O(n^{-\varepsilon}))
+ \]
+ for any $\varepsilon < \frac{1}{2}$.
+\end{thm}
+
+We don't really know what happens when $d < 5$, but we have the following conjecture:
+
+\begin{conjecture}
+ \[
+ \sigma_n \approx
+ \begin{cases}
+ n^{11/32} \kappa^n & d = 2\\
+ n^\gamma \kappa^n & d = 3\\
+ (\log n)^{1/4} \kappa^n & d = 4
+ \end{cases}
+ \]
+\end{conjecture}
+
+One can also instead try to bound $\sigma_n$ from above. We have the following classic theorem:
+\begin{thm}[Hammersley and Welsh, 1962]
+ For all $d \geq 2$, we have
+ \[
+ \sigma_n \leq C \kappa^n \exp(c' \sqrt{n})
+ \]
+ for some constants $C$ and $c'$.
+\end{thm}
+
+In fact, a better bound was recently found:
+\begin{thm}[Hutchcroft, 2017]
+ For $d \geq 2$, we have
+ \[
+ \sigma_n \leq C \kappa^n \exp(o(\sqrt{n})).
+ \]
+\end{thm}
+This will be proved in the example sheet.
+
+What would be a good way to understand self-avoiding walks? Fixing an $n$, there are only finitely many self-avoiding walks of length $n$. So we can sample such a self-avoiding walk uniformly at random. In general, we would expect the total displacement of the walk to be $\sim \sqrt{n}$. Thus, what we can try to do is to take $n \to \infty$ while simultaneously shrinking space by a factor of $\frac{1}{\sqrt{n}}$. We would then hope that the result converges toward some Brownian motion-like trajectory. If we can characterize the precise behaviour of this \term{scaling limit}, then we might be able to say something concrete about $\kappa$ and the asymptotic behaviour of $\sigma_n$.
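The naive way to sample a uniform self-avoiding walk is rejection sampling: run a simple random walk of length $n$ and keep it only if it is self-avoiding. Since every length-$n$ walk has probability $4^{-n}$, conditioning on self-avoidance gives exactly the uniform measure on length-$n$ SAWs. A sketch (our own illustration; the function name \texttt{sample\_saw} is ours, and the method is hopeless for large $n$, since the acceptance probability $\sigma_n/4^n$ decays exponentially):

```python
import math
import random

STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def sample_saw(n, rng):
    """Uniform self-avoiding walk of length n in Z^2 via rejection sampling."""
    while True:
        path = [(0, 0)]
        for _ in range(n):
            dx, dy = rng.choice(STEPS)
            x, y = path[-1]
            path.append((x + dx, y + dy))
        if len(set(path)) == n + 1:  # accept iff no self-intersection
            return path

rng = random.Random(0)
walks = [sample_saw(10, rng) for _ in range(200)]
# Empirical mean end-to-end displacement of a length-10 SAW.
mean_disp = sum(math.hypot(*w[-1]) for w in walks) / len(walks)
print(round(mean_disp, 2))
```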
+
+But we don't really know what the scaling limit is. In the case $d = 2$, it is conjectured to be $SLE(\frac{8}{3})$. Curiously, it was proven by Gwynne and Miller in 2016 that if we instead looked at self-avoiding walks on a \emph{random surface}, then the scaling limit is $SLE(\frac{8}{3})$.
+
+That's enough of a digression. Let's finish our proof and show that $p_c(d) < 1$. A first observation is that it suffices to show this for $d = 2$. Indeed, since $\Z^d$ embeds into $\Z^{d + 1}$ for all $d$, if we can find an infinite cluster in $\Z^d$, then the same is true for $\Z^{d + 1}$. Thus, it is natural to restrict to the case of $d = 2$, where \emph{duality} will prove to be an extremely useful tool.
+
+\begin{defi}[Planar graph]\index{planar graph}\index{graph!planar}
+ A graph $G$ is called planar if it can be embedded on the plane in such a way that no two edges cross.
+\end{defi}
+
+\begin{defi}[Dual graph]\index{dual graph}\index{graph!dual}
+ Let $G$ be a planar graph (which we call the \term{primal graph}\index{graph!primal}). We define the \emph{dual graph} by placing a vertex in each face of $G$, and connecting $2$ vertices if their faces share a boundary edge.
+\end{defi}
+
+\begin{eg}
+ The dual of $\Z^2$ is isomorphic to $\Z^2$:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (0, 0) grid (5, 5);
+ \begin{scope}[shift={(0.5, 0.5)}]
+ \clip (-0.5, -0.5) rectangle (4.5, 4.5);
+ \draw [mblue, opacity=0.5, step=1] (-1, -1) grid (5, 5);
+ \end{scope}
+ \foreach \x in {0.5, 1.5, 2.5, 3.5, 4.5} {
+ \foreach \y in {0.5, 1.5, 2.5, 3.5, 4.5} {
+ \node [mblue, opacity=0.5, circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {0, 1, 2, 3, 4, 5} {
+ \foreach \y in {0, 1, 2, 3, 4, 5} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+The dual lattice will help us prove a lot of properties for percolation in $\Z^2$.
+
+\begin{lemma}
+ $p_c(d) < 1$ for all $d \geq 2$.
+\end{lemma}
+
+\begin{proof}
+ It suffices to show this for $d = 2$. Suppose we perform percolation on $\Z^2$. Then this induces a percolation on the dual lattice by declaring an edge of the dual is open if it crosses an open edge of $\Z^2$, and closed otherwise.
+
+ Suppose $|\mathcal{C}(0)| < \infty$ in the primal lattice. Then there is a closed circuit in the dual lattice, given by the ``boundary'' of $\mathcal{C}(0)$, and any such circuit has length at least $4$. Let $D_n$ be the number of closed dual circuits of length $n$ that surround $0$. Then the union bound and Markov's inequality tell us
+ \[
+ \P_p(|\mathcal{C}(0)| < \infty) = \P_p(\exists n \geq 4: D_n \geq 1) \leq \sum_{n = 4}^\infty \E_p[D_n].
+ \]
+
+ It is a simple exercise to show the following:
+ \begin{ex}
+ Show that the number of dual circuits of length $n$ that contain $0$ is at most $n \cdot 4^n$.
+ \end{ex}
+ From this, it follows that
+ \[
+ \P_p(|\mathcal{C}(0)| < \infty) \leq \sum_{n = 4}^\infty n \cdot 4^n (1 - p)^n.
+ \]
+ Thus, if $p$ is sufficiently near $1$, then $\P_p(|\mathcal{C}(0)| < \infty)$ is bounded away from $1$.
+\end{proof}
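We can evaluate the final bound numerically (a sketch of the arithmetic only, not a sharp estimate of $p_c$): once $4(1 - p) < 1$ the series converges, and for $p$ close enough to $1$ its value drops below $1$.

```python
def peierls_bound(p, terms=5000):
    """Partial sum of sum_{n >= 4} n * 4^n * (1 - p)^n; the tail is
    geometrically small, so a truncation is accurate."""
    x = 4 * (1 - p)
    assert x < 1  # needed for convergence
    return sum(n * x ** n for n in range(4, terms))

# At p = 0.9 the bound on P_p(|C(0)| < infinity) is already below 1,
# so P_p(|C(0)| = infinity) > 0, i.e. theta(0.9) > 0 in d = 2.
print(peierls_bound(0.9))  # ~0.199
assert peierls_bound(0.9) < 1
```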
+
+By definition, if $p < p_c(d)$, then $0$ is almost surely not contained in an infinite cluster. If $p > p_c(d)$, then there is a positive probability that $0$ is contained in an infinite cluster. However, it is of course not necessarily the case that $0$ is connected to $\infty$ with probability $1$. In fact, there is at least probability $(1 - p)^{2d}$ that $0$ is not connected to $\infty$, since $0$ cannot be connected to $\infty$ if all its neighbouring edges are closed. However, it is still possible that there is some infinite cluster somewhere. It's just that it does not contain $0$.
+
+\begin{prop}
+ Let $A_\infty$ be the event that there is an infinite cluster.
+ \begin{enumerate}
+ \item If $\theta(p) = 0$, then $\P_p(A_\infty) = 0$.
+ \item If $\theta(p) > 0$, then $\P_p(A_\infty) = 1$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We have
+ \[
+ \P_p(A_\infty) = \P_p(\exists x: |\mathcal{C}(x)| = \infty) \leq \sum_{x \in \Z^d} \P_p(|\mathcal{C}(x)| = \infty) = \sum \theta(p) = 0.
+ \]
+ \item We need to apply the \term{Kolmogorov 0-1 law}. Recall that if $X_1, X_2, \ldots$ are independent random variables, and $\mathcal{F}_n = \sigma(X_k: k \geq n)$, $\mathcal{F}_\infty = \bigcap_{n \geq 0} \mathcal{F}_n$, then $\mathcal{F}_\infty$ is trivial, i.e.\ for all $A \in \mathcal{F}_\infty$, we have $\P(A) \in \{0, 1\}$.
+
+ So we order the edges of $\Z^d$ as $e_1, e_2, \ldots$ and denote their states
+ \[
+ w(e_1), w(e_2), \ldots.
+ \]
+ These are iid random variables. We certainly have $\P_p(A_\infty) \geq \theta(p) > 0$. So if we can show that $A_\infty \in \mathcal{F}_\infty$, then we are done. But this is clear, since changing the states of a finite number of edges does not affect the occurrence of $A_\infty$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+The next follow-up question is: how many infinite clusters do we expect to get?
+\begin{thm}[Burton and Keane]
+ If $p > p_c$, then there exists a unique infinite cluster with probability $1$.
+\end{thm}
+This proof is considerably harder than the ones we have previously done. We might think we can use the Kolmogorov 0-1 law, but we can't, since changing a finite number of edges can break up or join together infinite clusters, so the event that there are $k$ infinite clusters for $k > 0$ is not in $\mathcal{F}_\infty$. However, we can exploit the fact that the number $N$ of infinite clusters is translation invariant.
+
+\begin{ex}
+ Let $A$ be an event that is translation invariant. Show that $\P_p(A) \in \{0, 1\}$.
+\end{ex}
+
+\begin{proof}
+ Let $N$ be the number of infinite clusters. The event $\{N = k\}$ is translation invariant, so by the exercise above, $N$ is constant almost surely. So there is some $k \in \N \cup \{\infty\}$ such that $\P_p(N = k) = 1$. First of all, we know that $k \not= 0$, since $\theta(p) > 0$. We shall first exclude $2 \leq k < \infty$, and then exclude $k = \infty$.
+
+ Assume that $2 \leq k < \infty$. We will show that $\P_p(N = 1) > 0$, and hence it must be the case that $\P_p(N = 1) = 1$, a contradiction.
+
+ To bound this probability, we let $B(n) = [-n, n]^d \cap \Z^d$\index{$B(n)$} (which we will sometimes write as \term{$B_n$}), and let $\partial B(n)$ be its boundary. We know that
+ \[
+ \P_p(\text{all infinite clusters intersect $\partial B(n)$}) \to 1
+ \]
+ as $n \to \infty$. This is since with probability $1$, there are only finitely many infinite clusters by assumption, and for each such configuration, all infinite clusters intersect $\partial B(n)$ for sufficiently large $n$.
+
+ In particular, we can take $n$ large enough such that
+ \[
+ \P_p(\text{all infinite clusters intersect }\partial B(n)) \geq \frac{1}{2}.
+ \]
+ We can then bound
+ \begin{multline*}
+ \P_p(N = 1) \geq \P_p(\text{all infinite clusters intersect }\partial B(n)\\
+ \text{ and all edges in $B(n)$ are open}).
+ \end{multline*}
+ Finally, note that the two events in there are independent, since they involve different edges. But the probability that all edges in $B(n)$ are open is just $p^{E(B(n))}$. So
+ \[
+ \P_p(N = 1) \geq \frac{1}{2} p^{E(B(n))} > 0.
+ \]
+ So we are done.
+
+ We now have to show that $k \not= \infty$. This involves the notion of a \emph{trifurcation}. The idea is that we will show that if $k = \infty$, then the probability that a vertex is a trifurcation is positive. This implies the expected number of trifurcations is $\sim n^d$. We will then show deterministically that the number of trifurcations inside $B(n)$ must be $\leq |\partial B(n)|$, and so there are $O(n^{d - 1})$ trifurcations, which is a contradiction.
+
+ We say a vertex $x$ is a \term{trifurcation} if the following three conditions hold:
+ \begin{enumerate}
+ \item $x$ is in an infinite open cluster $\mathcal{C}_\infty$;
+ \item There exist exactly three open edges adjacent to $x$;
+ \item $\mathcal{C}_\infty \setminus \{x\}$ contains exactly three infinite clusters and no finite ones.
+ \end{enumerate}
+ This is clearly a translation invariant notion. So
+ \[
+ \P_p(0 \text{ is a trifurcation}) = \P_p(x \text{ is a trifurcation})
+ \]
+ for all $x \in \Z^d$.
+
+ \begin{claim}
+ $\P_p(0\text{ is a trifurcation}) > 0$.
+ \end{claim}
+ We need to use something slightly different from $B(n)$. We define $S(n) = \{x \in \Z^d: \|x\|_1 \leq n\}$.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw [mblue, fill opacity=0.2, fill=mblue] (-3, 0) -- (0, 3) -- (3, 0) -- (0, -3) -- cycle;
+ \foreach \x in {-4,..., 4} {
+ \foreach \y in {-4,..., 4} {
+ \node [circ, opacity=0.3] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-3,..., 3} {
+ \pgfmathsetmacro\n{3-abs(\x)};
+ \foreach \y in {-\n, ..., \n} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \end{tikzpicture}
+ \end{center}
+ The crucial property of this is that for any $x_1, x_2, x_3 \in \partial S(n)$, there exist three disjoint self-avoiding paths joining $x_i$ to $0$ (exercise!). For each triple $x_1, x_2, x_3$, we arbitrarily pick a set of three such paths, and define the event
+ \begin{multline*}
+ J(x_1, x_2, x_3) = \{\text{all edges on these 3 paths are open}\\\text{ and everything else inside $S(n)$ is closed}\}.
+ \end{multline*}
+ Next, for every possible infinite cluster in $\Z^d \setminus S(n)$ that intersects $\partial S(n)$ at at least one point, we pick a designated point of intersection arbitrarily.
+
+ Then we can bound
+ \begin{multline*}
+ \P_p(0 \text{ is a trifurcation}) \geq \P_p(\exists \mathcal{C}_\infty^1, \mathcal{C}_\infty^2, \mathcal{C}_\infty^3 \subseteq \Z^d \setminus S(n)\\ \text{ infinite clusters which intersect $\partial S(n)$ at $x_1, x_2, x_3$, and $J(x_1, x_2, x_3)$}).
+ \end{multline*}
+ Rewrite the right-hand probability as
+ \begin{multline*}
+ \P_p(J(x_1, x_2, x_3) \mid\exists \mathcal{C}_\infty^1, \mathcal{C}_\infty^2, \mathcal{C}_\infty^3 \subseteq \Z^d \setminus S(n)\text{ intersecting }\partial S(n))\\
+ \times \P_p(\exists \mathcal{C}_\infty^1, \mathcal{C}_\infty^2, \mathcal{C}_\infty^3 \subseteq \Z^d \setminus S(n)\text{ intersecting }\partial S(n)).
+ \end{multline*}
+ We can bound the first term by
+ \[
+ \min(p, 1 - p)^{E(S(n))}.
+ \]
+ To bound the second probability, we have already assumed that $\P_p(N = \infty) = 1$. So $\P_p(\exists \mathcal{C}_\infty^1, \mathcal{C}_\infty^2, \mathcal{C}_\infty^3 \subseteq \Z^d \setminus S(n)\text{ intersecting }\partial S(n)) \to 1$ as $n \to \infty$. We can then take $n$ large enough such that the probability is $\geq \frac{1}{2}$. So we have shown that $c \equiv \P_p(0\text{ is a trifurcation}) > 0$.
+
+ Using the linearity of expectation, it follows that
+ \[
+ \E_p[\text{number of trifurcations inside $B(n)$}] \geq c |B(n)| \sim n^d.
+ \]
+ On the other hand, we can bound the number of trifurcations in $B(n)$ by $|\partial B(n)|$. To see this, suppose $x_1$ is a trifurcation in $B(n)$. By definition, there exist $3$ disjoint open paths from $x_1$ to $\partial B(n)$. Fix three such paths. Let $x_2$ be another trifurcation. It also has $3$ open paths to $\partial B(n)$, and its paths to the boundary could intersect those of $x_1$. However, they cannot create a cycle, by the definition of a trifurcation. For simplicity, we add the rule that when we produce the paths for $x_2$, once we intersect a path of $x_1$, we continue by following the path of $x_1$.
+
+ Exploring all trifurcations this way, we obtain a forest inside $B(n)$, and the boundary points will be the leaves of the forest. Now the trifurcations have degree $3$ in this forest. The rest is just combinatorics.
+ \begin{ex}
+ For any tree, the number of degree $3$ vertices is always less than the number of leaves.\qedhere % prove this
+ \end{ex}
+\end{proof}
+
+\subsection{Correlation inequalities}
+In this section, we are going to prove some useful inequalities and equalities, and use them to prove some interesting results about $\theta(p)$ and the decay of $\P_p(0 \leftrightarrow \partial B_n)$.
+
+To motivate our first inequality, suppose we have $4$ points $x, y, u, v$, and we want to ask for the conditional probability
+\[
+ \P_p(x \leftrightarrow y \mid u \leftrightarrow v).
+\]
+Intuitively, we expect this to be greater than $\P_p(x \leftrightarrow y)$, since $u \leftrightarrow v$ tells us there are some open edges around, which is potentially helpful. The key property underlying this intuition is that having more edges is beneficial to both of the events. To quantify this, we need the notion of an increasing random variable.
+
+Again, let $G = (V, E)$ be a graph and $\Omega = \{0, 1\}^E$. We shall assume that $E$ is countable.
+
+\begin{notation}[$\leq$]\index{$\leq$}
+ Given $\omega, \omega' \in \Omega$, we write $\omega \leq \omega'$ if $\omega(e) \leq \omega'(e)$ for all $e \in E$.
+\end{notation}
+This defines a partial order on $\Omega$.
+
+\begin{defi}[Increasing random variable]\index{increasing random variable}\index{random variable!increasing}\index{decreasing random variable}\index{random variable!decreasing}
+ A random variable $X$ is increasing if $X(\omega) \leq X(\omega')$ whenever $\omega \leq \omega'$, and is decreasing if $-X$ is increasing.
+\end{defi}
+
+\begin{defi}[Increasing event]\index{increasing event}\index{event!increasing}\index{event!decreasing}\index{decreasing event}
+ An event $A$ is increasing (resp.\ decreasing) if the indicator $1(A)$ is increasing (resp.\ decreasing).
+\end{defi}
+
+\begin{eg}
+ $\{|\mathcal{C}(0)|= \infty\}$ is an increasing event.
+\end{eg}
+
+An immediate consequence of the definition is that
+\begin{thm}
+ If $N$ is an increasing random variable and $p_1 \leq p_2$, then
+ \[
+ \E_{p_1} [N] \leq \E_{p_2} [N],
+ \]
+ and if an event $A$ is increasing, then
+ \[
+ \P_{p_1}(A) \leq \P_{p_2}(A).
+ \]
+\end{thm}
+
+\begin{proof}
+ Immediate from coupling.
+\end{proof}
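For concreteness, here is the coupling in code form (our own sketch): attach an independent uniform $X(e)$ to each edge and declare $e$ open at level $p$ iff $X(e) \leq p$. Then $p_1 \leq p_2$ gives $\eta_{p_1} \leq \eta_{p_2}$ pointwise, so any increasing random variable can only grow with $p$.

```python
import random

rng = random.Random(42)
edges = list(range(30))
X = {e: rng.random() for e in edges}  # one shared uniform per edge

def eta(p):
    """Configuration at parameter p in the standard monotone coupling."""
    return {e: int(X[e] <= p) for e in edges}

w1, w2 = eta(0.3), eta(0.7)

# The coupling is pointwise monotone ...
assert all(w1[e] <= w2[e] for e in edges)
# ... so any increasing random variable, e.g. the number of open edges,
# can only grow with p.
assert sum(w1.values()) <= sum(w2.values())
```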
+
+What we want to prove is the following result, which will be extremely useful.
+\begin{thm}[Fortuin--Kasteleyn--Ginibre (FKG) inequality]\index{FKG inequality}
+ Let $X$ and $Y$ be increasing random variables with $\E_p[X^2], \E_p[Y^2] < \infty$. Then
+ \[
+ \E_p[XY] \geq \E_p[X] \E_p[Y].
+ \]
+ In particular, if $A$ and $B$ are increasing events, then
+ \[
+ \P_p(A \cap B) \geq \P_p(A) \P_p(B).
+ \]
+ Equivalently,
+ \[
+ \P_p(A \mid B) \geq \P_p(A).
+ \]
+\end{thm}
+
+\begin{proof}
+ The plan is to first prove this in the case where $X$ and $Y$ depend on a finite number of edges, and we do this by induction. Afterwards, the desired result can be obtained by an application of the martingale convergence theorem. In fact, the ``real work'' happens when we prove it for $X$ and $Y$ depending on a single edge. Everything else follows from messing around with conditional probabilities.
+
+ If $X$ and $Y$ depend only on a single edge $e_1$, then for any $\omega_1, \omega_2 \in \{0, 1\}$, we claim that
+ \[
+ (X(\omega_1) - X(\omega_2))(Y(\omega_1) - Y(\omega_2)) \geq 0.
+ \]
+ Indeed, $X$ and $Y$ are both increasing. So if $\omega_1 > \omega_2$, then both factors are non-negative; if $\omega_1 < \omega_2$, then both factors are non-positive; and if $\omega_1 = \omega_2$, the product vanishes. In all cases, the product is non-negative.
+
+ In particular, we have
+ \[
+ \sum_{\omega_1, \omega_2 \in \{0, 1\}} (X(\omega_1) - X(\omega_2)) (Y(\omega_1) - Y(\omega_2)) \P_p( \omega(e_1) = \omega_1) \P_p(\omega(e_1) = \omega_2) \geq 0.
+ \]
+ Expanding this, we find that the LHS is $2(\E_p[XY] - \E_p[X] \E_p[Y])$, and so we are done.
+
+ Now suppose the claim holds for $X$ and $Y$ that depend on $n < k$ edges. We shall prove the result when they depend on $k$ edges $e_1, \ldots, e_k$. We have
+ \[
+ \E_p[XY] = \E_p[\E_p[XY \mid \omega(e_1), \ldots, \omega(e_{k - 1})]].
+ \]
+ Now after conditioning on $\omega(e_1), \ldots, \omega(e_{k - 1})$, the random variables $X$ and $Y$ become increasing random variables of $\omega(e_k)$. Applying the first step, we get
+ \begin{multline*}
+ \E_p[X Y \mid \omega(e_1), \ldots, \omega(e_{k - 1})]\\
+ \geq \E_p[X \mid \omega(e_1), \ldots, \omega(e_{k - 1})] \E_p[Y \mid \omega(e_1), \ldots, \omega(e_{k - 1})].\tag{$*$}
+ \end{multline*}
+ But $\E_p[X \mid \omega(e_1), \ldots, \omega(e_{k - 1})]$ is a random variable depending on the edges $e_1, \ldots, e_{k - 1}$, and moreover it is increasing. So the induction hypothesis tells us
+ \begin{multline*}
+ \E_p\big[\E_p[X \mid \omega(e_1), \ldots, \omega(e_{k - 1})] \E_p[Y \mid \omega(e_1), \ldots, \omega(e_{k - 1})]\big] \\
+ \geq \E_p\big[\E_p[X \mid \omega(e_1), \ldots, \omega(e_{k - 1})]\big] \E_p\big[\E_p[Y \mid \omega(e_1), \ldots, \omega(e_{k - 1})]\big]
+ \end{multline*}
+ Combining this with (the expectation of) ($*$) then gives the desired result.
+
+ Finally, suppose $X$ and $Y$ depend on the states of a countable set of edges $e_1, e_2, \ldots$. Let's define
+ \begin{align*}
+ X_n &= \E_p [X \mid \omega(e_1), \ldots, \omega(e_n)]\\
+ Y_n &= \E_p [Y \mid \omega(e_1), \ldots, \omega(e_n)]
+ \end{align*}
+ Then $X_n$ and $Y_n$ are martingales, and depend on the states of only finitely many edges. So we know that
+ \[
+ \E_p[X_n Y_n] \geq \E_p [X_n] \E_p[Y_n] = \E_p[X] \E_p[Y].
+ \]
+ By the $L^2$-martingale convergence theorem, $X_n \to X$ and $Y_n \to Y$ in $L^2$ and almost surely. So taking the limit $n \to \infty$, we get
+ \[
+ \E_p[XY] \geq \E_p[X] \E_p[Y].\qedhere
+ \]
+\end{proof}
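On a finite edge set, FKG can be verified by exact enumeration. A sketch with three edges and two illustrative increasing events (the events are our own choices, not from the notes):

```python
from itertools import product

p, E = 0.3, 3
prob = lambda w: p ** sum(w) * (1 - p) ** (E - sum(w))
P = lambda ev: sum(prob(w) for w in product((0, 1), repeat=E) if ev(w))

# Two increasing events sharing the edge e_2.
A = lambda w: w[0] == 1 or w[1] == 1
B = lambda w: w[1] == 1 or w[2] == 1

pA, pB = P(A), P(B)                    # each equals 1 - 0.7^2 = 0.51
pAB = P(lambda w: A(w) and B(w))
print(pA, pB, pAB)
assert pAB >= pA * pB  # FKG: increasing events are positively correlated
```

The shared edge is what produces the strict inequality here; events with disjoint supports are independent and give equality.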
+
+What we want to consider next is the notion of disjoint occurrence. For example, we want to be able to ask for the probability that there exist two disjoint paths connecting a pair of points.
+
+To formulate this disjointness, suppose we have an event $A$, and let $\omega \in A$. To ask whether this occurrence of $A$ depends only on some set $S \subseteq E$ of edges, we can look at the set
+\[
+ [\omega]_S = \{\omega' \in \Omega: \omega'(e) = \omega(e)\text{ for all }e \in S\}.
+\]
+If $[\omega]_S \subseteq A$, then we can rightfully say this occurrence of $A$ depends only on the edges in $S$. Note that this depends explicitly and importantly on $\omega$, i.e.\ the ``reason'' $A$ happened. For example, if $A = \{x \leftrightarrow y\}$, and $\omega \in A$, then we can take $S$ to be the set of all edges in a chosen path from $x$ to $y$ in the configuration of $\omega$. This choice will be different for different values of $\omega$.
+
+Using this, we can define what it means for two events to occur disjointly.
+\begin{defi}[Disjoint occurrence]\index{disjoint occurrence}
+ Let $F$ be a set and $\Omega = \{0, 1\}^F$. If $A$ and $B$ are events, then the event that $A$ and $B$ occur \emph{disjointly} is
+ \[
+ A \circ B = \{\omega \in \Omega : \exists S \subseteq F\text{ s.t. } [\omega]_S \subseteq A \text{ and } [\omega]_{F\setminus S} \subseteq B\}.
+ \]
+\end{defi}
+
+\begin{thm}[BK inequality]\index{BK inequality} % van den Berg and Kesten
+ Let $F$ be a finite set and $\Omega = \{0, 1\}^F$. Let $A$ and $B$ be increasing events. Then
+ \[
+ \P_p(A \circ B) \leq \P_p(A) \P_p(B).
+ \]
+\end{thm}
+This says that if $A$ and $B$ are both events that ``need'' edges to occur, then requiring that they occur disjointly is harder than having each occur individually.
+
+The proof is completely magical. There exist saner proofs of the inequality, but they are rather longer.
+\begin{proof}[Proof (Bollob\'as and Leader)]
+ We prove this by induction on the size $n$ of the set $F$. For $n = 0$, the statement is trivial.
+
+ Suppose it holds for $n - 1$. We want to show it holds for $n$. For $D \subseteq \{0, 1\}^F$ and $i = 0, 1$, set
+ \[
+ D_i = \{(\omega_1, \ldots, \omega_{n - 1}) : (\omega_1, \ldots, \omega_{n - 1}, i) \in D\}.
+ \]
+ Let $A, B \subseteq \{0, 1\}^F$, and $C = A \circ B$. We check that
+ \[
+ C_0 = A_0 \circ B_0,\quad C_1 = (A_0 \circ B_1) \cup (A_1 \circ B_0).
+ \]
+ Since $A$ and $B$ are increasing, $A_0 \subseteq A_1$ and $B_0 \subseteq B_1$, and $A_i$ and $B_i$ are also increasing events. So
+ \begin{align*}
+ C_0 &\subseteq (A_0 \circ B_1) \cap (A_1 \circ B_0)\\
+ C_1 &\subseteq A_1 \circ B_1.
+ \end{align*}
+ By the induction hypothesis, we have
+ \begin{align*}
+ \P_p(C_0) &= \P_p(A_0 \circ B_0) \leq \P_p(A_0) \P_p(B_0)\\
+ \P_p(C_1) &\leq \P_p(A_1 \circ B_1) \leq \P_p(A_1) \P_p(B_1)\\
+ \P_p(C_0) + \P_p(C_1) &\leq \P_p((A_0 \circ B_1) \cap (A_1 \circ B_0)) + \P_p((A_0 \circ B_1) \cup (A_1 \circ B_0))\\
+ &= \P_p(A_0 \circ B_1) + \P_p(A_1 \circ B_0)\\
+ &\leq \P_p(A_0) \P_p(B_1) + \P_p(A_1) \P_p(B_0).
+ \end{align*}
+ Now note that for any $D$, we have
+ \[
+ \P_p(D) = p \P_p(D_1) + (1 - p) \P_p(D_0).
+ \]
+ By some black magic, we multiply the first inequality by $(1 - p)^2$, the second by $p^2$, and the third by $p(1 - p)$. This gives
+ \[
+ p\P_p(C_1) + (1 - p)\P_p(C_0) \leq (p\P_p(A_1) + (1- p)\P_p(A_0))(p\P_p(B_1) + (1 - p) \P_p(B_0)).
+ \]
+ Expand and we are done.
+\end{proof}
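Both the definition of $A \circ B$ and the BK inequality can be checked by brute force on a tiny edge set (our own illustration): for each $\omega$ we simply search for a certifying set $S$.

```python
from itertools import combinations, product

F = range(4)
Omega = list(product((0, 1), repeat=len(F)))
p = 0.5
P = lambda ev: sum(p ** sum(w) * (1 - p) ** (len(F) - sum(w)) for w in ev)

def cylinder(w, S):
    """[w]_S: all configurations agreeing with w on S."""
    return [v for v in Omega if all(v[e] == w[e] for e in S)]

def disjointly(A, B):
    """A o B, straight from the definition: some S certifies A,
    while its complement certifies B."""
    out = set()
    for w in Omega:
        for r in range(len(F) + 1):
            for S in combinations(F, r):
                Sc = tuple(e for e in F if e not in S)
                if set(cylinder(w, S)) <= A and set(cylinder(w, Sc)) <= B:
                    out.add(w)
    return out

A = {w for w in Omega if w[0] or w[1]}  # increasing
B = {w for w in Omega if w[1] or w[2]}  # increasing
C = disjointly(A, B)

assert C <= (A & B)          # disjoint occurrence implies joint occurrence
assert P(C) <= P(A) * P(B)   # the BK inequality
```

For instance, $(1, 0, 1, 0)$ lies in $A \circ B$ (edges $e_1$ and $e_3$ certify the two events separately), while $(1, 0, 0, 0)$ does not: the single open edge cannot certify both.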
+
+It turns out the increasing hypothesis is not necessary:
+\begin{thm}[Reimer]
+ For all events $A, B$ depending on a finite set, we have $\P_p(A \circ B) \leq \P_p(A) \P_p(B)$.
+\end{thm}
+But the proof is much harder, and in all the cases where we want to apply this, the events are increasing.
+
+As an application of the BK inequality, we first prove a preliminary result about the decay of $\P_p(0 \leftrightarrow \partial B(n))$. To prove our result, we will need a stronger condition than $p < p_c$. Recall that we defined
+\[
+ \theta(p) = \P_p( |\mathcal{C}(0)| = \infty).
+\]
+We also define\index{$\chi(p)$}
+\[
+ \chi(p) = \E_p[|\mathcal{C}(0)|].
+\]
+If $\chi(p)$ is finite, then of course $\theta(p) = 0$. However, the converse need not hold.
+
+\begin{thm}
+ If $\chi(p) < \infty$, then there exists a positive constant $c$ such that for all $n \geq 1$,
+ \[
+ \P_p(0 \leftrightarrow \partial B(n)) \leq e^{-cn}.
+ \]
+\end{thm}
+Later, we will show that in fact this holds under the assumption that $p < p_c$. However, that requires a bit more technology, which we will develop after this proof.
+
+The idea of the proof is that if we want a path from, say, $0$ to $\partial B(2n)$, then the path must hit a point on $\partial B(n)$. So there is a path from $0$ to a point on $\partial B(n)$, and a path from that point to $\partial B(2n)$. Moreover, these two paths are disjoint, which allows us to apply the BK inequality.
+\begin{proof}
+ Let
+ \[
+ X_n = \sum_{x \in \partial B(n)} 1(0 \leftrightarrow x).
+ \]
+ Now consider
+ \[
+ \sum_{n = 0}^\infty \E[X_n] = \sum_n \sum_{x \in \partial B(n)} \P_p(0 \leftrightarrow x) = \sum_{x \in \Z^d} \P_p(0 \leftrightarrow x) = \chi(p).
+ \]
+ Since $\chi(p)$ is finite, we in particular have $\E_p[X_n] \to 0$ as $n \to \infty$. Take $m$ large enough that $\E_p[X_m] \leq \delta$ for some $\delta < 1$.
+
+ Now we have
+ \begin{align*}
+ \P_p( 0 \leftrightarrow \partial B(m + k)) &= \P_p(\exists x \in \partial B(m) : 0 \leftrightarrow x \text{ and } x \leftrightarrow \partial B(m + k)\text{ disjointly})\\
+ &\leq \sum_{x \in \partial B(m)} \P_p(0 \leftrightarrow x) \P_p(x \leftrightarrow \partial B(m + k)) \tag{BK}\\
+ &\leq \sum_{x \in \partial B(m)} \P_p(0 \leftrightarrow x) \P_p(0 \leftrightarrow \partial B(k)) \tag{trans. inv.}\\
+ &\leq \P_p(0 \leftrightarrow \partial B(k)) \E_p[X_m].
+ \end{align*}
+ So for any $n > m$, write $n = qm + r$, where $r \in [0, m - 1]$. Then iterating the above result, we have
+ \[
+ \P_p(0 \leftrightarrow \partial B(n)) \leq \P_p(0 \leftrightarrow \partial B(mq)) \leq \delta^q \leq \delta^{-1 + \frac{n}{m}} \leq e^{-cn}.\qedhere
+ \]
+\end{proof}
+
+To replace the condition with the weaker condition $p < p_c$, we need to understand how $\theta(p)$ changes with $p$. We know $\theta(p)$ is an increasing function in $p$. It would be great if it were differentiable, and even better if we could have an explicit formula for $\frac{\d \theta}{\d p}$.
+
+To do so, suppose we again do coupling, and increase $p$ by a really tiny bit. Then perhaps we would expect that exactly one of the edges switches from being closed to open. Thus, we want to know if the state of this edge is \emph{pivotal} to the event $|\mathcal{C}(0)| = \infty$, and this should determine the rate of change of $\theta$.
+
+\begin{defi}[Pivotal edge]\index{pivotal edge}
+ Let $A$ be an event and $\omega$ a percolation configuration. The edge $e$ is \emph{pivotal} for $(A, \omega)$ if
+ \[
+ 1(\omega \in A) \not= 1(\omega' \in A),
+ \]
+ where $\omega'$ is defined by
+ \[
+ \omega'(f) =
+ \begin{cases}
+ \omega(f) & f \not= e\\
+ 1 - \omega(f) & f = e
+ \end{cases}.
+ \]
+ The event that $e$ is pivotal for $A$ is defined to be the set of all $\omega$ such that $e$ is pivotal for $(A, \omega)$.
+\end{defi}
+Note that whether or not $e$ is pivotal for $(A, \omega)$ is independent of $\omega(e)$.
+
+\begin{thm}[Russo's formula]
+ Let $A$ be an increasing event that depends on the states of a finite number of edges. Then
+ \[
+ \frac{\d}{\d p} \P_p(A) = \E_p[N(A)],
+ \]
+ where $N(A)$ is the number of pivotal edges for $A$.
+\end{thm}
+
+\begin{proof}
+ Assume that $A$ depends on the states of $m$ edges $e_1, \ldots, e_m$. The idea is to let each $e_i$ be open with probability $p_i$, where the $p_i$ may be distinct. We then vary the $p_i$ one by one and see what happens.
+
+ Writing $\bar{p} = (p_1, \ldots, p_m)$, we define
+ \[
+ f(p_1, \ldots, p_m) = \P_{\bar{p}} (A).
+ \]
+ Now $f$ is the sum of the probabilities of all configurations of $\{e_1, \ldots, e_m\}$ for which $A$ happens, and is hence a finite sum of polynomials. So in particular, it is differentiable.
+
+ We now couple all the percolation processes together. Let $(X(e): e \in \L^d)$ be iid $U[0, 1]$ random variables. For a vector $\bar{p} = (p(e): e \in \L^d)$, we write
+ \[
+ \eta_{\bar{p}}(e) = 1 (X(e) \leq p(e)).
+ \]
+ Then we have $\P_{\bar{p}}(A) = \P(\eta_{\bar{p}} \in A)$.
+
+ Fix an edge $f$ and let $\bar{p}' = (p'(e))$ be such that $p'(e) = p(e)$ for all $e \not= f$, and $p'(f) = p(f) + \delta$ for some $\delta > 0$. Then
+ \begin{align*}
+ \P_{\bar{p}'}(A) - \P_{\bar{p}}(A) &= \P(\eta_{\bar{p}'} \in A) - \P(\eta_{\bar{p}} \in A)\\
+ &= \P(\eta_{\bar{p}'} \in A, \eta_{\bar{p}} \in A) + \P(\eta_{\bar{p}'} \in A, \eta_{\bar{p}} \not\in A) - \P(\eta_{\bar{p}} \in A).
+ \end{align*}
+ But we know $A$ is increasing, so $\P(\eta_{\bar{p}'} \in A, \eta_{\bar{p}} \in A) = \P(\eta_{\bar{p}} \in A)$. So the first and last terms cancel, and we have
+ \[
+ \P_{\bar{p}'}(A) - \P_{\bar{p}}(A) = \P(\eta_{\bar{p}'} \in A, \eta_{\bar{p}} \not \in A).
+ \]
+ But we observe that we simply have
+ \[
+ \P (\eta_{\bar{p}'} \in A, \eta_{\bar{p}} \not \in A) = \delta \cdot \P_{\bar{p}} (f \text{ is pivotal for $A$}).
+ \]
+ Indeed, by the definition of pivotal edges, we have
+ \[
+ \P(\eta_{\bar{p}'} \in A, \eta_{\bar{p}} \not \in A) = \P_{\bar{p}}(f \text{ is pivotal for $A$, }p(f) < X(f) \leq p(f) + \delta).
+ \]
+ Since the event $\{f \text{ is pivotal for } A\}$ is independent of the state of the edge $f$, we obtain
+ \[
+ \P_{\bar{p}}(f\text{ is pivotal}, p(f) < X(f) \leq p(f) + \delta) = \P_{\bar{p}}(f\text{ is pivotal}) \cdot \delta.
+ \]
+ Therefore we have
+ \[
+ \frac{\partial}{\partial p(f)} \P_{\bar{p}}(A) = \lim_{\delta \to 0} \frac{\P_{\bar{p}'}(A) - \P_{\bar{p}}(A)}{\delta} = \P_{\bar{p}}(\text{$f$ is pivotal for $A$}).
+ \]
+ The desired result then follows from the chain rule:
+ \begin{align*}
+ \frac{\d}{\d p} \P_p(A) &= \left.\sum_{i = 1}^m \frac{\partial}{\partial p(e_i)} \P_{\bar{p}}(A)\right|_{\bar{p} = (p, \ldots, p)} \\
+ &= \sum_{i = 1}^m \P_p(\text{$e_i$ is pivotal for $A$})\\
+ &= \E_p[N(A)].\qedhere
+ \end{align*}
+\end{proof}
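Russo's formula is easy to verify exactly for a small event. A sketch with a three-edge increasing event of our own choosing: compare a numerical derivative of $p \mapsto \P_p(A)$ with $\E_p[N(A)]$ computed by enumeration.

```python
from itertools import product

m = 3
A = lambda w: (w[0] and w[1]) or w[2]  # an increasing event (our choice)

def P(p):
    """P_p(A) by summing over all 2^m configurations."""
    return sum(p ** sum(w) * (1 - p) ** (m - sum(w))
               for w in product((0, 1), repeat=m) if A(w))

def flip(w, e):
    return tuple(1 - v if i == e else v for i, v in enumerate(w))

def expected_pivotal(p):
    """E_p[N(A)]: average number of pivotal edges for A."""
    return sum(sum(bool(A(w)) != bool(A(flip(w, e))) for e in range(m))
               * p ** sum(w) * (1 - p) ** (m - sum(w))
               for w in product((0, 1), repeat=m))

p, h = 0.4, 1e-6
numerical = (P(p + h) - P(p - h)) / (2 * h)
print(numerical, expected_pivotal(p))  # both ~1.32
assert abs(numerical - expected_pivotal(p)) < 1e-6
```

Here $\P_p(A) = p + (1 - p)p^2$, so the derivative $1 + 2p - 3p^2$ can also be checked by hand.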
+If $A$ depends on an infinite number of edges, then the best we can say is that
+\[
+ \liminf_{\delta \downarrow 0} \frac{\P_{p + \delta}(A) - \P_p(A)}{\delta} \geq \E_p[N(A)].
+\]
+To see this, again set $B(n) = [-n, n]^d \cap \Z^d$. Define $\bar{p}_n$ by
+\[
+ \bar{p}_n(e) =
+ \begin{cases}
+ p & e \not \in B(n)\\
+ p + \delta & e \in B(n)
+ \end{cases}.
+\]
+Then since $A$ is increasing, we know
+\[
+ \frac{\P_{p + \delta}(A) - \P_p(A)}{\delta} \geq \frac{\P_{\bar{p}_n} (A) - \P_p(A)}{\delta}.
+\]
+We can then apply the previous claim, take successive differences, and take $n \to \infty$.
+
+\begin{cor}
+ Let $A$ be an increasing event that depends on $m$ edges. Let $p \leq q \in [0, 1]$. Then $\P_q(A) \leq \P_p(A) \left(\frac{q}{p}\right)^m$.
+\end{cor}
+
+\begin{proof}
+ We know that $\{f\text{ is pivotal for } A\}$ is independent of the state of $f$, and so
+ \[
+ \P_p(\omega(f) = 1, f\text{ is pivotal for $A$}) = p\P_p(f\text{ is pivotal for $A$}).
+ \]
+ But since $A$ is increasing, if $\omega(f) = 1$ and $f$ is pivotal for $A$, then $A$ occurs. Conversely, if $f$ is pivotal and $A$ occurs, then $\omega(f) = 1$.
+
+ Thus, by Russo's formula, we have
+ \begin{align*}
+ \frac{\d}{\d p} \P_p(A) &= \E_p[N(A)] \\
+ &= \sum_e \P_p(e\text{ is pivotal for $A$})\\
+ &= \sum_e \frac{1}{p}\P_p(\omega(e) = 1,\text{$e$ is pivotal for $A$})\\
+ &= \sum_e \frac{1}{p}\P_p(\text{$e$ is pivotal} \mid A) \P_p(A)\\
+ &= \P_p(A) \frac{1}{p} \E_p[N(A) \mid A].
+ \end{align*}
+ So we have
+ \[
+ \frac{\frac{\d}{\d p} \P_p(A)}{\P_p(A)} = \frac{1}{p} \E_p[N(A) \mid A].
+ \]
+ Integrating, we find that
+ \[
+ \log \frac{\P_q(A)}{\P_p(A)} = \int_{p}^q \frac{1}{u} \E_u[N(A) \mid A]\;\d u.
+ \]
+ Bounding $\E_u [N(A) \mid A] \leq m$, we obtain the desired bound.
+\end{proof}
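The corollary can also be checked exactly on a small example. The following sketch (illustrative only; the event and the $(p, q)$ pairs are arbitrary choices) verifies $\P_q(A) \leq \P_p(A)(q/p)^m$ for an increasing event on $m = 3$ edges by exhaustive enumeration:

```python
from itertools import product

# Illustrative check of the bound P_q(A) <= P_p(A) (q/p)^m for an increasing
# event depending on m = 3 edges.

def prob(event, p, m):
    """Exact probability of `event` when each of m edges is open independently w.p. p."""
    return sum(
        p ** sum(w) * (1 - p) ** (m - sum(w))
        for w in product([0, 1], repeat=m)
        if event(w)
    )

m = 3
A = lambda w: sum(w) >= 2  # increasing: opening more edges cannot destroy A

for p, q in [(0.2, 0.5), (0.3, 0.9), (0.5, 0.7)]:
    assert prob(A, q, m) <= prob(A, p, m) * (q / p) ** m + 1e-12
```

For the event ``all $m$ edges open'' the bound holds with equality, which shows the exponent $m$ cannot be improved in general.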
+
+With Russo's formula, we can now prove the desired theorem.
+\begin{thm}
+ Let $d \geq 2$ and $B_n = [-n, n]^d \cap \Z^d$.
+ \begin{enumerate}
  \item If $p < p_c$, then there exists a positive constant $c$ such that for all $n \geq 1$, we have $\P_p(0 \leftrightarrow \partial B_n) \leq e^{-cn}$.
+ \item If $p > p_c$, then
+ \[
+ \theta(p) = \P_p(0 \leftrightarrow \infty) \geq \frac{p - p_c}{p(1 - p_c)}.
+ \]
+ \end{enumerate}
+\end{thm}
This was first proved by Aizenman and Barsky, who worked in the more general framework of long-range percolation. Menshikov gave an alternative proof by analyzing the geometry of pivotal edges. The proof we will see is due to Duminil-Copin and Tassion. Recall that we defined
+\[
+ \chi(p) = \E_p[|\mathcal{C}(0)|].
+\]
+We saw that if $\chi(p) < \infty$, then $\P_p(0 \leftrightarrow \partial B_n) \leq e^{-cn}$. We now see that $\chi(p) < \infty$ iff $p < p_c$.
+
The strategy of the proof is to define a new critical probability $\tilde{p}_c$, whose definition makes it easier to prove the theorem. Once we have established the two claims, we see that (i) forces $\tilde{p}_c \leq p_c$, and (ii) forces $\tilde{p}_c \geq p_c$. So they must be equal.
+
+\begin{proof}[Proof (Duminil-Copin and Tassion)]
+ If $S \subseteq V$ is finite, we write
+ \[
+ \partial S = \{(x, y) \in E: x \in S, y \not \in S\}.
+ \]
+ We write $x \overset{S}{\leftrightarrow} y$ if there exists an open path of edges from $x$ to $y$ all of whose end points lie in $S$.
+
+ Now suppose that $0 \in S$. We define
+ \[
+ \varphi_p(S) = p \sum_{(x, y) \in \partial S} \P_p(0 \overset{S}{\leftrightarrow} x).
+ \]
+ Define
+ \[
  \tilde{p}_c = \sup \{p \in [0, 1]: \text{there exists a finite set $S$ with $0 \in S$ and $\varphi_p(S) < 1$}\}.
+ \]
+ \begin{claim}
+ It suffices to prove (i) and (ii) with $p_c$ replaced by $\tilde{p}_c$.
+ \end{claim}
+% Indeed, $\{p \in [0, 1]: \text{exists a finite set $S$ with $0 \in S$ and $\varphi_p(S) < 1$}\}$ is an open subset of $[0, 1]$, since for any fixed finite set $S$, $\varphi_p(S)$ is a continuous function in $p$. Thus, $\tilde{p}_c > 0$.
+
+ Indeed, from (i), if $p < \tilde{p}_c$, then $\P_p(0 \leftrightarrow \partial B_n) \leq e^{-cn}$. So taking the limit $n \to \infty$, we see $\theta(p) = 0$. So $\tilde{p}_c \leq p_c$. From (ii), if $p > \tilde{p}_c$, then $\theta(p) > 0$. So $p_c \leq \tilde{p}_c$. So $p_c = \tilde{p}_c$.
+
+ We now prove (i) and (ii):
+ \begin{enumerate}
+ \item Let $p < \tilde{p}_c$. Then there exists a finite set $S$ containing $0$ with $\varphi_p(S) < 1$. Since $S$ is finite, we can pick $L$ large enough so that $S \subseteq B_{L - 1}$. We will prove that $\P_p(0 \leftrightarrow \partial B_{kL}) \leq (\varphi_p(S))^{k - 1}$ for $k \geq 1$.
+
+ Define $\mathcal{C} = \{x \in S: 0 \overset{S}{\leftrightarrow} x\}$. Since $S \subseteq B_{L - 1}$, we know $S \cap \partial B_{kL} = \emptyset$.
+
  Now if we have an open path from $0$ to $\partial B_{kL}$, we let $x$ be the last element on the path that lies in $\mathcal{C}$. We can then replace the path up to $x$ by a path that lies entirely in $S$, by assumption. This is then a path that lies in $\mathcal{C}$ up to $x$, then takes an edge on $\partial S$, and then lies entirely in $\mathcal{C}^c$. Thus,
+ \[
   \P_p(0 \leftrightarrow \partial B_{kL}) \leq \sum_{\substack{A \subseteq S\\0\in A}} \sum_{(x, y) \in \partial S} \P_p(0 \overset{S}{\leftrightarrow} x,\ (x, y)\text{ open},\ \mathcal{C} = A,\ y \overset{A^c}{\leftrightarrow} \partial B_{kL}).
+ \]
+ Now observe that the events $\{\mathcal{C} = A, 0 \overset{S}{\leftrightarrow} x\}$, $\{(x, y)\text{ is open}\}$ and $\{y \overset{A^c}{\leftrightarrow} \partial B_{kL}\}$ are independent. So we obtain
+ \[
+ \P_p(0 \leftrightarrow \partial B_{kL}) \leq \sum_{A \subseteq S, 0 \in A} \sum_{(x, y) \in \partial S} p\ \P_p(0\overset{S}{\leftrightarrow} x, \mathcal{C} = A) \ \P_p(y \overset{A^c}{\leftrightarrow} \partial B_{kL}).
+ \]
+ Since we know that $y \in B_L$, we can bound
+ \[
+ \P_p(y \overset{A^c}{\leftrightarrow}\partial B_{kL}) \leq \P_p(0 \leftrightarrow \partial B_{(k - 1)L}).
+ \]
+ So we have
+ \begin{align*}
+ \P_p(0 \leftrightarrow \partial B_{kL}) &\leq p\ \P_p(0 \leftrightarrow \partial B_{(k-1)L}) \sum_{A \subseteq S, 0 \in A} \sum_{(x, y) \in \partial S} \P_p(0 \overset{S}{\leftrightarrow} x, \mathcal{C} = A)\\
+ &= \P_p(0 \leftrightarrow \partial B_{(k - 1)L})\ p\sum_{(x, y) \in \partial S} \P_p(0 \overset{S}{\leftrightarrow} x)\\
+ &= \P_p(0 \leftrightarrow \partial B_{(k - 1)L}) \varphi_p(S).
+ \end{align*}
  Iterating, we obtain the desired result.
+ \item We want to use Russo's formula. We claim that it suffices to prove that
+ \[
+ \frac{\d}{\d p} \P_p(0 \leftrightarrow \partial B_n) \geq \frac{1}{p(1 - p)} \inf_{S \subseteq B_n, 0 \in S} \varphi_p(S) (1 - \P_p(0 \leftrightarrow \partial B_n)).
+ \]
  Indeed, if $p > \tilde{p}_c$, we integrate from $\tilde{p}_c$ to $p$, using that in this range $\varphi_p(S) \geq 1$ for every finite $S$ containing $0$, and then take the limit as $n \to \infty$.
+
  The event $\{0 \leftrightarrow \partial B_n\}$ is increasing and only depends on a finite number of edges. So we can apply Russo's formula:
+ \begin{align*}
+ \frac{\d}{\d p} \P_p(0 \leftrightarrow \partial B_n) &= \sum_{e \in B_n} \P_p(e\text{ is pivotal for }\{0 \leftrightarrow \partial B_n\})\\
+ \intertext{Since being pivotal and being open/closed are independent, we can write this as}
+ &= \sum_{e \in B_n} \frac{1}{1 - p} \P_p(e\text{ is pivotal for }\{0 \leftrightarrow \partial B_n\},\ e\text{ is closed})\\
+ &= \sum_{e \in B_n} \frac{1}{1 - p} \P_p(e\text{ is pivotal for }\{0 \leftrightarrow \partial B_n\},\ 0 \not\leftrightarrow \partial B_n)
+ \end{align*}
  Define $\mathcal{S} = \{x \in B_n: x \not\leftrightarrow \partial B_n\}$. Then $\{0 \not\leftrightarrow \partial B_n\}$ implies $0 \in \mathcal{S}$. So
+ \[
+ \frac{\d}{\d p} \P_p(0 \leftrightarrow \partial B_n) = \frac{1}{1 - p} \sum_{e \in B_n} \sum_{A \subseteq B_n, 0 \in A} \P_p(e\text{ is pivotal},\ \mathcal{S} = A)
+ \]
+ Given that $\mathcal{S} = A$, an edge $e = (x, y)$ is pivotal iff $e \in \partial A$ and $0 \overset{A}{\leftrightarrow} x$. So we know
+ \[
+ \frac{\d}{\d p} \P_p(0 \leftrightarrow \partial B_n) = \frac{1}{1 - p} \sum_{A \subseteq B_n, 0 \in A} \sum_{(x, y) \in \partial A} \P_p(0 \overset{A}{\leftrightarrow} x,\ \mathcal{S} = A).
+ \]
  Observe that $\{0 \overset{A}{\leftrightarrow} x\}$ and $\{\mathcal{S} = A\}$ are independent, since $\{0 \overset{A}{\leftrightarrow} x\}$ depends only on edges with both endpoints in $A$, while to determine whether $\mathcal{S} = A$ we only need to examine edges with at least one endpoint outside $A$. So the above is equal to
+ \begin{align*}
+ &\hphantom{{}={}}\frac{1}{1 - p} \sum_{A \subseteq B_n, 0 \in A} \sum_{(x, y) \in \partial A} \P_p(0 \overset{A}{\leftrightarrow} x)\P_p(\mathcal{S} = A)\\
+ &= \frac{1}{p(1 - p)} \sum_{A \subseteq B_n, 0 \in A} \varphi_p(A) \P_p(\mathcal{S} = A)\\
  &\geq \frac{1}{p(1 - p)} \inf_{S \subseteq B_n, 0 \in S} \varphi_p(S) \P_p(0 \not\leftrightarrow \partial B_n),
+ \end{align*}
+ as desired.\qedhere
+ \end{enumerate}
+\end{proof}
We might ask whether exponential decay of $\P_p(0 \leftrightarrow \partial B_n)$ is the best possible rate when $p < p_c$. Indeed we cannot do better, since $\P_p(0 \leftrightarrow \partial B_n) \geq p^n$: this is the probability that a fixed path of $n$ edges from $0$ to $\partial B_n$ is entirely open.
+
+Also, if $p < p_c$, then we can easily bound
+\[
+ \P_p(|\mathcal{C}(0)| \geq n) \leq \P_p(0 \leftrightarrow \partial B_{n^{1/d}}) \leq \exp(-c n^{1/d}).
+\]
+However, this is not a good bound. In fact, $n^{1/d}$ can be replaced by $n$, but we will not prove it here. This tells us the largest cluster in $B_n$ will have size of order $\log n$ with high probability.
+
+\subsection{Two dimensions}
+We now focus on $2$ dimensions. As discussed previously, we can exploit duality to prove a lot of things specific to two dimensions. In particular, we will show that, at $p = p_c = \frac{1}{2}$, certain probabilities such as $\P_{\frac{1}{2}} (0 \leftrightarrow \partial B(n))$ exhibit a power law decay. This is in contrast to the exponential decay for subcritical percolation (and being bounded away from zero for supercritical percolation).
+
+First we establish that $p_c$ is actually $\frac{1}{2}$.
+\begin{thm}
+ In $\Z^2$, we have $\theta \left(\frac{1}{2}\right) = 0$ and $p_c = \frac{1}{2}$.
+\end{thm}
+It is conjectured that for all $d \geq 2$, we have $\theta(p_c(d)) = 0$. It is known to be true only in $d = 2$ and $d \geq 11$.
+
+This was proved first by Harris, Kesten, Russo, Seymour, Welsh in several iterations.
+\begin{proof}
+ First we prove that $\theta\left(\frac{1}{2}\right) = 0$. This will imply that $p_c \geq \frac{1}{2}$.
+
  Suppose instead that $\theta\left(\frac{1}{2}\right) > 0$. Recall that $B(n) = [-n, n]^2$, and we define
+ \[
+ C(n) = [-(n - 1), (n - 1)]^2 + \left(\tfrac{1}{2}, \tfrac{1}{2}\right)
+ \]
+ in the dual lattice. The appearance of the $-1$ is just a minor technical inconvenience. For the same $n$, our $B(n)$ and $C(n)$ look like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mblue] (0, 0) rectangle (3, 3);
+ \draw [mred] (0.25, 0.25) rectangle (2.75, 2.75);
+ \node [right, mblue] at (3, 1.5) {$B(n)$};
+ \node [right, mred] at (0.25, 1.5) {$C(n)$};
+ \node [left, white] at (0, 1.5) {$B(n)$};
+ \end{tikzpicture}
+ \end{center}
+
+ We claim that for large $n$, there is a positive probability that there are open paths from the left and right edges of $B(n)$ to $\infty$, and also there are \emph{closed} paths from the top and bottom edges of $C(n)$ to $\infty$. But we know that with probability $1$, there is a \emph{unique} infinite cluster in both the primal lattice and the dual lattice. To connect up the two infinite open paths starting from the left and right edges of $B(n)$, there must be an open left-right crossing of $B(n)$. To connect up the two infinite closed paths starting from the top and bottom of $C(n)$, there must be a closed top-bottom crossing. But these cannot both happen, since this would require an open primal edge crossing a closed dual edge, which is impossible.
+
+ To make this an actual proof, we need to show that these events do happen with positive probability. We shall always take $p = \frac{1}{2}$, and will not keep repeating it.
+
+ First note that since there is, in particular, an infinite cluster with probability $1$, we have
+ \[
+ \P(\partial B(n) \leftrightarrow \infty) \to 1.
+ \]
+ So we can pick $n$ large enough such that
+ \[
+ \P (\partial B(n) \leftrightarrow \infty),\ \P (\partial C(n) \leftrightarrow \infty) \geq 1 - \frac{1}{8^4}.
+ \]
  Let $A_\ell$/$A_r$/$A_t$/$A_b$ be the events that the left/right/top/bottom side of $B(n)$ is connected to $\infty$ via an open path of edges. Similarly, let $D_\ell$ be the event that the left of $C(n)$ is connected to $\infty$ via a closed path of dual edges, and similarly for $D_r$, $D_t$, $D_b$.
+
+ Of course, by symmetry, for $i, j \in \{\ell, r, t, b\}$, we have $\P(A_i) = \P(A_j)$. Using FKG, we can bound
+ \[
   \P(\partial B(n) \not\leftrightarrow \infty) = \P (A_\ell^c \cap A_r^c \cap A_t^c \cap A_b^c) \geq (\P(A_\ell^c))^4 = \left(1 - \P(A_\ell)\right)^4.
+ \]
+ Thus, by assumption on $n$, we have
+ \[
+ \left(1 - \P(A_\ell)\right)^{4} \leq \frac{1}{8^4},
+ \]
+ hence
+ \[
+ \P (A_\ell) \geq \frac{7}{8}.
+ \]
+ Of course, the same is true for other $A_i$ and $D_j$.
+
  Now let $G = A_\ell \cap A_r \cap D_t \cap D_b$ be the desired event. Then we have
+ \[
+ \P (G^c) \leq \P(A_\ell^c) + \P(A_r^c) + \P(D_t^c) + \P(D_b^c) \leq \frac{1}{2}.
+ \]
+ So it follows that
+ \[
+ \P(G) \geq \frac{1}{2},
+ \]
+ which, as argued, leads to a contradiction.
+
+%
+% By symmetry, we see that
+% \[
+% \P(A_i) = \P (D_j)
+% \]
+% for $i, j \in \{\ell, r, t, b\}$. We now have
+% \begin{align*}
+% \P(\partial S_n \not\leftrightarrow \infty) &= \P (A_\ell^c \cap A_r^c \cap A_t^c \cap A_b^c)\\
+% &\geq (\P(A_\ell^c))^4 \tag{FKG}\\
+% &= \left(1 - \P(A_\ell)\right)^4.
+% \end{align*}
+% By assumption on $n$, we know that
+% \[
+% \left(1 - \P(A_\ell)\right)^{4} \leq \frac{1}{8^4}.
+% \]
+% So we know that
+% \[
+% \P (A_\ell) \geq \frac{7}{8}.
+% \]
+% Now let $G = A_\ell \cap A_r \cap D_t \cap D_b$. Then
+% \[
+% \P (G^c) \leq \P(A_\ell^c) + \P(A_r^c) + \P(D_t^c) + \P(D_b^c) \leq \frac{1}{2}.
+% \]
+% So we know that
+% \[
+% \P(G) \geq \frac{1}{2}.
+% \]
+% On $G$, there are two infinite open clusters of primal lattice, by uniqueness of $\infty$ clusters, these two must connect inside the box. But if this happens, this creates a barrier for the two infinite closed clusters of the dual to join. This means that there are $2$ infinite disjoint closed clusters of the dual, but again this is impossible by uniqueness. So we have reached a contradiction.
+
  So we have $p_c \geq \frac{1}{2}$. It remains to prove that $p_c \leq \frac{1}{2}$. Suppose for contradiction that $p_c > \frac{1}{2}$. Then $p = \frac{1}{2}$ lies in the subcritical regime, so by the previous theorem (again with $p = \frac{1}{2}$ fixed) there exists a $c > 0$ such that for all $n \geq 1$,
+ \[
+ \P(0 \leftrightarrow \partial B(n)) \leq e^{-cn}.
+ \]
+ Consider $C_n = [0, n + 1] \times [0, n]$, and define $A_n$ to be the event that there exists a left-right crossing of $C_n$ by open edges.
+
+ Again consider the dual box $D_n = [0, n] \times [-1, n] + \left(\frac{1}{2}, \frac{1}{2}\right)$.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw [thick, mblue] (0, 0) rectangle (6, 5);
+ \draw [dashed, mred, thick] (0.5, -0.5) rectangle (5.5, 5.5);
+ \node [right, mblue] at (6, 2.5) {$C_n$};
+ \node [above, mred] at (3, 5.5) {$D_n$};
+ \draw [mblue] (0, 2.5) -- (6, 2.5);
+ \draw [mred] (3, -0.5) -- (3, 5.5);
+ \end{tikzpicture}
+ \end{center}
+ Define $B_n$ to be the event that there is a top-bottom crossing of $D_n$ by closed edges of the dual.
+
+ As before, it cannot be the case that $A_n$ and $B_n$ both occur. In fact, $A_n$ and $B_n$ partition the whole space, since if $A_n$ does not hold, then every path from left to right of $C_n$ is blocked by a closed path of the dual. Thus, we know
+ \[
+ \P(A_n) + \P(B_n) = 1.
+ \]
+ But also by symmetry, we have $\P(A_n) = \P(B_n)$. So
+ \[
+ \P (A_n) = \frac{1}{2}.
+ \]
  On the other hand, any left-right crossing of $C_n$ starts at one of the $n + 1$ points on the left edge and has length at least $n$. Thus,
  \[
    \P(A_n) \leq (n + 1) \P(0 \leftrightarrow \partial B(n)) \leq (n + 1) e^{-cn},
  \]
  which tends to $0$ as $n \to \infty$, contradicting $\P(A_n) = \frac{1}{2}$. So we are done.
+\end{proof}
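The self-duality identity $\P(A_n) = \frac{1}{2}$ is also easy to probe by simulation. The following sketch (illustrative only; the box size, seed and number of trials are arbitrary choices) samples bond percolation on $C_n = [0, n+1] \times [0, n]$ at $p = \frac{1}{2}$ and estimates the left-right crossing probability by breadth-first search:

```python
import random
from collections import deque

def has_left_right_crossing(n, p, rng):
    """Sample bond percolation on the box [0, n+1] x [0, n] and check by BFS
    whether an open path joins the left edge (x = 0) to the right edge (x = n + 1)."""
    # horiz[(x, y)] is the state of the edge (x, y)-(x+1, y); vert similarly.
    horiz = {(x, y): rng.random() < p for x in range(n + 1) for y in range(n + 1)}
    vert = {(x, y): rng.random() < p for x in range(n + 2) for y in range(n)}
    queue = deque((0, y) for y in range(n + 1))  # start from the whole left edge
    seen = set(queue)
    while queue:
        x, y = queue.popleft()
        if x == n + 1:
            return True
        neighbours = []
        if x <= n and horiz[(x, y)]:
            neighbours.append((x + 1, y))
        if x >= 1 and horiz[(x - 1, y)]:
            neighbours.append((x - 1, y))
        if y <= n - 1 and vert[(x, y)]:
            neighbours.append((x, y + 1))
        if y >= 1 and vert[(x, y - 1)]:
            neighbours.append((x, y - 1))
        for v in neighbours:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

rng = random.Random(0)
n, trials = 8, 2000
estimate = sum(has_left_right_crossing(n, 0.5, rng) for _ in range(trials)) / trials
# By the duality argument above, the true value is exactly 1/2 for every n.
```

With $2000$ trials the Monte Carlo estimate concentrates within a few percent of $\frac{1}{2}$.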
+
+So we now know that $p_c = \frac{1}{2}$. We now want to prove that
+\[
+ \P_{\frac{1}{2}}(0 \leftrightarrow \partial B(n)) \leq A n^{-\alpha}
+\]
for some $A, \alpha$. To prove this, we again consider the dual lattice. Observe that if we have a closed dual circuit around the origin, then this prevents the existence of an open path from the origin to any point far away. What we are going to do is to construct ``zones'' in the dual lattice like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.1]
+ \draw [fill=mred!30!white] (-13.5, -13.5) rectangle (13.5, 13.5);
+ \draw [fill=mgreen!30!white] (-4.5, -4.5) rectangle (4.5, 4.5);
+ \draw [fill=mblue!30!white] (-1.5, -1.5) rectangle (1.5, 1.5);
+ \draw [fill=mred!30!white] (-0.5, -0.5) rectangle (0.5, 0.5);
+ \node [fill, circle, inner sep = 0, minimum size = 1] at (0, 0) {};
+ \end{tikzpicture}
+\end{center}
The idea is to choose these zones in a way such that the probability that each zone contains a closed circuit around the origin is $\geq \zeta$ for some fixed $\zeta > 0$. Thus, if $B(n)$ contains $m$ of these zones, then the probability that $0$ is connected to $\partial B(n)$ is bounded above by $(1 - \zeta)^m$. We would then be done if we can show that $m \sim \log n$.
+
+The main strategy to bounding these probabilities is to use FKG. For example, if we want to bound the probability that there is a closed circuit in a region
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray!50!white] (0, 0) rectangle (1.5, 1.5);
+ \draw [fill=white] (0.5, 0.5) rectangle (1, 1);
+ \end{tikzpicture}
+\end{center}
+then we cut it up into the pieces
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=gray!50!white] (0, 0) rectangle (1.5, 0.5);
+ \draw [fill=gray!50!white] (0, 1) rectangle (1.5, 1.5);
+
+ \draw [fill=gray!50!white] (-1, 0) rectangle (-0.5, 1.5);
+ \draw [fill=gray!50!white] (2, 0) rectangle (2.5, 1.5);
+ \end{tikzpicture}
+\end{center}
+If we can bound the probability of there being a left-to-right crossing of a horizontal piece, and hence by symmetry the probability of there being a top-to-bottom crossing of a vertical piece, then FKG gives us a bound on the probability of there being a closed circuit.
+
+For convenience of notation, we prove these bounds for open paths in the primal lattice. We make the following definitions:
+\begin{align*}
+ B(k\ell, \ell) &= [-\ell, (2k - 1)\ell] \times [-\ell, \ell]\\
+ B(\ell) &= B(\ell, \ell) = [-\ell, \ell]^2\\
+ A(\ell) &= B(3\ell) \setminus B(\ell)\\
+ LR(k\ell ,\ell) &= \{\text{there exists left-right crossing of $B(k\ell, \ell)$ of open edges}\}\\
+ LR(\ell) &= LR(\ell, \ell)\\
+ O(\ell) &= \{\text{there exists open circuit in $A(\ell)$ that contains $0$ in its interior}\}.
+\end{align*}
+We first note that we have already proven the following:
+\begin{prop}
+ $\P_{\frac{1}{2}}(LR(\ell)) \geq \frac{1}{2}$ for all $\ell$.
+\end{prop}
+
+\begin{proof}
+ We have already seen that the probability of there being a left-right crossing of $[0, n + 1] \times [0, n]$ is at least $\frac{1}{2}$. But if there is a left-right crossing of $[0, n + 1] \times [0, n]$, then there is also a left-right crossing of $[0, n] \times [0, n]$!
+\end{proof}
+
For a general $p$, Russo--Seymour--Welsh (RSW) lets us bound $\P_p(O(\ell))$ by $\P_p(LR(\ell))$:
\begin{thm}[Russo--Seymour--Welsh (RSW) theorem]\index{Russo--Seymour--Welsh}\index{RSW}
+ If $\P_p(LR(\ell)) = \alpha$, then
+ \[
+ \P_p(O(\ell)) \geq \left(\alpha \left(1 - \sqrt{1 - \alpha}\right)^4\right)^{12}.
+ \]
+\end{thm}
+
+A large part of the proof is done by the cut-and-paste argument we sketched above. However, to successfully do cut-and-paste, it turns out we need bounds on the probability of a left-right crossing on something that is \emph{not} a square. The key, non-trivial estimate is the following:
+\begin{lemma}
+ If $\P_p(LR(\ell)) = \alpha$, then
+ \[
+ \P_p\left(LR\left(\tfrac{3}{2}\ell, \ell\right)\right) \geq (1 - \sqrt{1 - \alpha})^3.
+ \]
+\end{lemma}
+
+To prove this, we need a result from the first example sheet:
+\begin{lemma}[$n$th root trick]\index{$n$th root trick}
+ If $A_1, \ldots, A_n$ are increasing events all having the same probability, then
+ \[
+ \P_p(A_1) \geq 1 - \left(1 - \P_p\left(\bigcup_{i = 1}^n A_i\right)\right)^{1/n}.
+ \]
+\end{lemma}
+
+\begin{proof}[Proof sketch]
+ Observe that the proof of FKG works for \emph{decreasing} events as well, and then apply FKG to $A_i^c$.
+\end{proof}
+
+We can now prove our initial lemma.
+\begin{proof}[Proof sketch]
+ Let $\mathcal{A}$ be the set of left-right crossings of $B(\ell) = [-\ell, \ell]^2$. Define a partial order on $\mathcal{A}$ by $\pi_1 \leq \pi_2$ if $\pi_1$ is contained in the closed bounded region of $B(\ell)$ below $\pi_2$.
+
+ Note that given any configuration, if the set of open left-right crossings is non-empty, then there exists a lowest one. Indeed, since $\mathcal{A}$ must be finite, it suffices to show that meets exist in this partial order, which is clear.
+
  For a left-right crossing $\pi$, let $(0, y_\pi)$ be the last point at which $\pi$ meets the vertical axis, and let $\pi_r$ be the part of the path that connects $(0, y_\pi)$ to the right edge.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.7]
+
+ \draw (-2, -2) rectangle (2, 2);
+ \draw [dashed] (0, -2) -- (0, 2);
+ \draw [dashed] (-2, 0) -- (2, 0);
+ \node [circ] at (0, 0) {};
+ \node [anchor = north west] {\small $O$};
+
+ \draw [thick, mred] plot [smooth] coordinates {(-2, 1) (-0.5, 1.2) (0, 0.8) (-0.4, -1) (0, -1.3) (2, -1.1)};
+ \begin{scope}
+ \clip (-2, -2) -- (0, -2) -- (0, 0) -- (2, 0) -- (2, 2) -- (-2, 2) -- (-2, -2);
+ \draw [thick, black] plot [smooth] coordinates {(-2, 1) (-0.5, 1.2) (0, 0.8) (-0.4, -1) (0, -1.3) (2, -1.1)};
+ \end{scope}
+ \node [circ] at (0, -1.3) {};
+ \node [anchor = north east] at (0, -1.3) {\small$(0, y_\pi)$};
+ \node [below, mred] at (1, -1.2) {\small$\pi_r$};
+ \end{tikzpicture}
+ \end{center}
+ Let
+ \begin{align*}
+ \mathcal{A}_- &= \{\pi \in \mathcal{A}: y_\pi \leq 0\}\\
+ \mathcal{A}_+ &= \{\pi \in \mathcal{A}: y_\pi \geq 0\}
+ \end{align*}
+ Letting $B(\ell)' = [0, 2\ell] \times [-\ell, \ell]$, our goal is to find a left-right crossing of the form
+ \begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw (-2, -2) rectangle (2, 2);
+ \draw [dashed] (0, -2) -- (0, 2);
+ \draw [dashed] (-2, 0) -- (2, 0);
+ \node [circ] at (0, 0) {};
+ \node [anchor = north west] {$O$};
+
+ \draw [thick, mred] plot [smooth] coordinates {(-2, 1) (-0.5, 1.2) (-0.2, 0.8) (-0.4, -1) (0, -1.3) (2, -1.1)};
+
+ \draw [morange!50!black, opacity=0.8] (0, -2) rectangle (4, 2);
+
+ \draw [mblue, thick] (0, 1.5) edge [out=0, in=185] (4, 1);
+
+ \draw [mgreen, thick] (1.1, 2) edge [out=250, in=85] (1.3, -2);
+ \end{tikzpicture}
+ \end{center}
+ More precisely, we want the following paths:
+ \begin{enumerate}
+ \item Some $\pi \in \mathcal{A}_-$
    \item Some top-bottom crossing of $B(\ell)'$ that crosses $\pi_r$.
    \item Some left-right crossing of $B(\ell)'$ that starts at the positive (i.e.\ non-negative) $y$ axis.
+ \end{enumerate}
+ To understand the probabilities of these events happening, we consider the ``mirror'' events and then apply the square root trick.
+
+ Let $\pi_r'$ be the reflection of $\pi_r$ on $\{(\ell, k): k \in \Z\}$. For any $\pi \in \mathcal{A}$, we define
+ \begin{align*}
+ V_\pi &= \{\text{all edges of $\pi$ are open}\}\\
+ M_\pi &= \{\text{exists open crossing from top of $B(\ell)'$ to $\pi_r \cup \pi_r'$}\}\\
+ M_\pi^- &= \{\text{exists open crossing from top of $B(\ell)'$ to $\pi_r$}\}\\
    M_\pi^+ &= \{\text{exists open crossing from top of $B(\ell)'$ to $\pi_r'$}\}\\
+ L^+ &= \{\text{exists open path in $\mathcal{A}_+$}\}\\
+ L^- &= \{\text{exists open path in $\mathcal{A}_-$}\}\\
+ L_\pi &= \{\text{$\pi$ is the lowest open LR crossing of $B(\ell)$}\}\\
+ N &= \{\text{exists open LR crossing of $B(\ell)'$}\}\\
+ N^+ &= \{\text{exists open LR crossing in $B(\ell)'$ starting from positive vertical axis}\}\\
+ N^- &= \{\text{exists open LR crossing in $B(\ell)'$ starting from negative vertical axis}\}
+ \end{align*}
+ In this notation, our previous observation was that
+ \[
+ \underbrace{\bigcup_{\pi \in \mathcal{A}_-}(V_\pi \cap M_{\pi}^-)}_{G} \cap N^+ \subseteq LR\left(\frac{3}{2} \ell, \ell\right)
+ \]
+ So we know that
+ \[
    \P_p\left(LR\left(\frac{3}{2}\ell, \ell\right)\right) \geq \P_p(G \cap N^+) \geq \P_p(G) \P_p(N^+),
+ \]
+ using FKG.
+
+ Now by the ``square root trick'', we know
+ \[
+ \P_p(N^+) \geq 1 - \sqrt{1 - \P_p(N^+ \cup N^-)}.
+ \]
+ Of course, we have $\P_p(N^+ \cup N^-) = \P_p(LR(\ell)) = \alpha$. So we know that
+ \[
+ \P_p(N^+) \geq 1 - \sqrt{1 - \alpha}.
+ \]
+ We now have to understand $G$. To bound its probability, we try to bound it by the union of some disjoint events. We have
+ \begin{align*}
+ \P_p(G) &= \P_p\left(\bigcup_{\pi \in \mathcal{A}_-} (V_\pi \cap M_\pi^-)\right)\\
    &\geq \P_p\left(\bigcup_{\pi \in \mathcal{A}_-} (M_\pi^- \cap L_\pi)\right)\\
+ &= \sum_{\pi \in \mathcal{A}_-} \P_p(M_\pi^- \mid L_\pi) \P_p(L_\pi).
+ \end{align*}
+ \begin{claim}
+ \[
+ \P_p(M_\pi^- \mid L_\pi) \geq 1 - \sqrt{1 - \alpha}.
+ \]
+ \end{claim}
  Note that if $\pi$ intersects the vertical axis in one point, then $\P_p(M_\pi^- \mid L_\pi) = \P_p(M_\pi^- \mid V_\pi)$, since $L_\pi$ additionally tells us what happens below $\pi$, and this does not affect the occurrence of $M_\pi^-$.
+
+ Since $M_\pi^-$ and $V_\pi$ are increasing events, by FKG, we have
+ \[
+ \P_p(M_\pi^- \mid V_\pi) \geq \P_p(M_\pi^-) \geq 1 - \sqrt{1 - \P_p(M_\pi^- \cup M_\pi^+)} = 1 - \sqrt{1 - \P_p(M_\pi)}.
+ \]
+ Since $\P_p(M_\pi) \geq \P_p(LR(\ell)) = \alpha$, the claim follows.
+
+ In the case where $\pi$ is more complicated, we will need an extra argument, which we will not provide.
+
+ Finally, we have
+ \[
+ \P_p(G) \geq \sum_{\pi \in \mathcal{A}_-} \P_p(L_\pi) (1 - \sqrt{1 - \alpha}) = (1 - \sqrt{1 - \alpha})\P_p(L^-).
+ \]
+ But again by the square root trick,
+ \[
+ \P_p(L^-) \geq 1 - \sqrt{1 - \P_p(L^+ \cup L^-)} = 1 - \sqrt{1 - \alpha},
+ \]
+ and we are done.
+\end{proof}
+
+We now do the easy bit to finish off the theorem:
+\begin{lemma}
+ \begin{align*}
+ \P_p(LR(2\ell ,\ell)) &\geq \P_p(LR(\ell)) \left(\P_p\left(LR\left(\frac{3}{2}\ell, \ell\right)\right)\right)^2\\
+ \P_p(LR(3\ell ,\ell)) &\geq \P_p(LR(\ell)) \left(\P_p\left(LR\left(2\ell, \ell\right)\right)\right)^2\\
+ \P_p(O(\ell)) &\geq \P_p(LR(3\ell, \ell))^4
+ \end{align*}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ To prove the first inequality, consider the box $[0, 4\ell] \times [-\ell, \ell]$.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.6]
+ \draw (0, 0) rectangle (2, 4);
+ \draw (2, 0) rectangle (4, 4);
+ \draw (4, 0) rectangle (6, 4);
+ \draw (6, 0) rectangle (8, 4);
+ \node [below] at (6, 0) {$3\ell$};
+ \node [below] at (2, 0) {$\ell$};
+
+ \draw [mblue, thick] (0, 3) edge [out=-10, in=170] (6, 3);
+ \draw [mblue, thick] (2, 1) edge [out=15, in=191] (8, 1);
+ \draw [mblue, thick] (3, 4) edge [out=270, in=90] (5, 0);
+ \end{tikzpicture}
+ \end{center}
+ We let
+ \begin{align*}
+ LR_1 &= \{\text{exists left-right crossing of }[0, 3\ell] \times [-\ell, \ell]\}\\
+ LR_2 &= \{\text{exists left-right crossing of }[\ell, 4\ell] \times [-\ell, \ell]\}\\
+ TB_1 &= \{\text{exists top-bottom crossing of }[\ell, 3\ell] \times [-\ell, \ell]\}.
+ \end{align*}
+ Then by FKG, we find that
+ \[
+ \P_p(LR(2\ell, \ell)) \geq \P_p(LR_1 \cap LR_2 \cap TB_1) \geq \P_p(LR_1) \P_p(LR_2) \P_p(TB_1).
+ \]
+ The others are similar.
+\end{proof}
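Chaining these lemmas recovers exactly the constant stated in the RSW theorem. The following sketch (a pure algebra check; the test values of $\alpha$ are arbitrary) composes the three bounds and compares the result with $\left(\alpha(1 - \sqrt{1 - \alpha})^4\right)^{12}$:

```python
import math

# Chain the bounds: LR(3l/2, l) -> LR(2l, l) -> LR(3l, l) -> O(l), and check
# that the composition matches the constant in the RSW theorem.
for alpha in [0.3, 0.5, 0.9]:
    beta = 1 - math.sqrt(1 - alpha)
    lr_3half = beta ** 3              # key lemma: bound for LR(3l/2, l)
    lr_2 = alpha * lr_3half ** 2      # bound for LR(2l, l)
    lr_3 = alpha * lr_2 ** 2          # bound for LR(3l, l)
    circuit = lr_3 ** 4               # bound for O(l)
    stated = (alpha * beta ** 4) ** 12
    assert math.isclose(circuit, stated, rel_tol=1e-9)
```

Indeed, $\left(\alpha\left(\alpha\beta^6\right)^2\right)^4 = \alpha^{12}\beta^{48} = (\alpha\beta^4)^{12}$ with $\beta = 1 - \sqrt{1 - \alpha}$.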
+
+\begin{thm}
  There exist positive constants $\alpha_1, \alpha_2, \alpha_3, \alpha_4, A_1, A_2, A_4$ such that
+ \begin{align*}
+ \P_{\frac{1}{2}} (0 \leftrightarrow \partial B(n)) &\leq A_1 n^{-\alpha_1}\\
+ \P_{\frac{1}{2}} (|\mathcal{C}(0)| \geq n) &\leq A_2 n^{-\alpha_2}\\
    \E_{\frac{1}{2}}(|\mathcal{C}(0)|^{\alpha_3}) &< \infty
+ \end{align*}
+ Moreover, for $p > p_c = \frac{1}{2}$, we have
+ \[
+ \theta(p) \leq A_4 \left(p - \frac{1}{2}\right)^{\alpha_4}.
+ \]
+\end{thm}
It is an exercise on the example sheet to prove that $\P_{\frac{1}{2}}(0 \leftrightarrow \partial B(n)) \geq \frac{1}{2\sqrt{n}}$ using the BK inequality. So the true decay of $\P_{\frac{1}{2}}(0 \leftrightarrow \partial B(n))$ is indeed a power law.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We first prove the first inequality. Define dual boxes
+ \[
+ B(k)_d = B(k) + \left(\frac{1}{2}, \frac{1}{2}\right).
+ \]
+ The dual annuli $A(\ell)_d$ are defined by
+ \[
+ A(\ell)_d = B(3\ell)_d \setminus B(\ell)_d.
+ \]
+ We let $O(\ell)_d$ be the event that there is a closed dual circuit in $A(\ell)_d$ containing $\left(\frac{1}{2}, \frac{1}{2}\right)$. Then RSW tells us there is some $\zeta \in (0, 1)$ such that
+ \[
+ \P_{\frac{1}{2}}(O(\ell)_d) \geq \zeta,
+ \]
+ independent of $\ell$. Then observe that
+ \[
    \P_{\frac{1}{2}} (0 \leftrightarrow \partial B(3^k + 1)) \leq \P_{\frac{1}{2}}(O(3^r)_d\text{ does not occur for any }r < k).
+ \]
+ Since the annuli $(A(3^r)_d)$ are disjoint, the events above are independent. So
+ \[
+ \P_{\frac{1}{2}}(0 \leftrightarrow \partial B(3^k + 1)) \leq (1 - \zeta)^k,
+ \]
+ and this proves the first inequality.
+
+ \item The second inequality follows from the first inequality plus the fact that $|\mathcal{C}(0)| \geq n$ implies $0 \leftrightarrow \partial B(g(n))$, for some function $g(n) \sim \sqrt{n}$.
+
  \item To show that $\E_{\frac{1}{2}}[|\mathcal{C}(0)|^{\alpha_3}] < \infty$ for some $\alpha_3$, we observe that this expectation is at most
    \[
      1 + \sum_n \P_{\frac{1}{2}}(|\mathcal{C}(0)|^{\alpha_3} \geq n) = 1 + \sum_n \P_{\frac{1}{2}}(|\mathcal{C}(0)| \geq n^{1/\alpha_3}),
    \]
    which by the second inequality is finite whenever $\alpha_3 < \alpha_2$.
+ \item To prove the last part, note that
+ \[
+ \theta(p) = \P_p(|\mathcal{C}(0)| = \infty) \leq \P_p(0 \leftrightarrow \partial B_n)
+ \]
    for all $n$. By the corollary of Russo's formula, and since $\{0 \leftrightarrow \partial B_n\}$ only depends on the edges in $B_n$, of which there are at most $18 n^2$, we get that
+ \[
+ \P_{\frac{1}{2}}(0 \leftrightarrow \partial B_n) \geq \left(\frac{1}{2p}\right)^{18n^2}\P_p(0 \leftrightarrow \partial B_n).
+ \]
+ So
+ \[
+ \theta(p) \leq (2p)^{18n^2} \P_{\frac{1}{2}}(0 \leftrightarrow \partial B_n) \leq A_1 (2p)^{18n^2} n^{-\alpha_1}.
+ \]
+ Now take $n = \lfloor (\log 2p)^{-1/2} \rfloor $. Then as $p \searrow \frac{1}{2}$, we have
+ \[
+ n \sim \frac{1}{(2p - 1)^{\frac{1}{2}}}.
+ \]
+ Substituting this in, we get
+ \[
+ \theta(p) \leq C \left(p - \frac{1}{2}\right)^{\alpha_1/2}.\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
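The choice $n = \lfloor (\log 2p)^{-1/2}\rfloor$ in the last step is designed precisely so that the factor $(2p)^{18n^2}$ stays bounded as $p \searrow \frac{1}{2}$. A quick numerical check (illustrative only; the test values of $p$ are arbitrary):

```python
import math

# With n = floor((log 2p)^{-1/2}) we get n^2 <= 1 / log(2p), hence
# (2p)^{18 n^2} = exp(18 n^2 log(2p)) <= e^18, uniformly as p -> 1/2 from above.
for p in [0.501, 0.51, 0.55, 0.6]:
    n = math.floor(math.log(2 * p) ** -0.5)
    assert n >= 1
    assert (2 * p) ** (18 * n * n) <= math.exp(18) + 1e-9
```

So the bound $A_1 (2p)^{18n^2} n^{-\alpha_1}$ is a constant times $n^{-\alpha_1}$, and $n \sim (2p - 1)^{-1/2}$ converts this into a power of $p - \frac{1}{2}$.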
+
+By similar methods, we can prove that
+\begin{thm}
+ When $d = 2$ and $p > p_c$, there exists a positive constant $c$ such that
+ \[
+ \P_p(0 \leftrightarrow \partial B(n), |\mathcal{C}(0)| < \infty) \leq e^{-cn}.
+ \]
+\end{thm}
+
It is natural to ask if we have a similar result in higher dimensions. There, the planar duality techniques we used for $\Z^2$ no longer work.
+
+\subsubsection*{Higher dimensions}
+
+\index{slab percolation}In $d \geq 3$, define the \emph{slab}
+\[
+ S_k = \Z^2 \times \{0, 1, \ldots, k\}^{d - 2}.
+\]
+Then $S_k \subseteq S_{k + 1} \subseteq \Z^d$.
+
+In general, for a graph $G$, we let $p_c(G)$ be the critical probability of bond percolation in $G$. Then we have
+\[
+ p_c(S_k) \geq p_c(S_{k + 1}) \geq p_c.
+\]
+So the sequence $(p_c(S_k))$ must converge to a limit. Call this limit\index{$p_c^{slab}$}
+\[
+ p_c^{slab} = \lim_{k \to \infty} p_c(S_k).
+\]
+We know that $p_c^{slab} \geq p_c$.
+
A lot of the results we can prove for $\Z^2$ can be proven with $p_c^{slab}$ in place of $p_c$. So the natural question is how $p_c^{slab}$ is related to $p_c$, and in particular, whether they are equal. This was open for a long time, until Grimmett and Marstrand proved that they are.
+
+\begin{thm}[Grimmett--Marstrand]\index{Grimmett--Marstrand theorem}
  Let $F$ be an infinite connected subset of $\Z^d$ with $p_c(F) < 1$. Then for all $\eta > 0$, there exists $k \in \N$ such that
+ \[
+ p_c(2k F + B_k) \leq p_c + \eta.
+ \]
+ In particular, for all $d \geq 3$, $p_c^{slab} = p_c$.
+\end{thm}
+We shall not prove the theorem, but we shall indicate how the ``in particular'' part works.
+
+We take $F = \Z^2 \times \{0\}^{d - 2}$. Then
+\[
+ 2kF + B_k = \Z^2 \times ([-k, k]^{d - 2} \cap \Z^{d - 2}),
+\]
+a translate of $S_{2k}$. So $p_c(S_{2k}) = p_c(2kF + B_k) \to p_c$ as $k \to \infty$.
+
A consequence of Grimmett--Marstrand is that
+\begin{thm}
+ If $d \geq 3$ and $p > p_c$, then there exists $c > 0$ such that
+ \[
+ \P_p(0 \leftrightarrow \partial B(n), |\mathcal{C}(0)| < \infty) \leq e^{-cn}.
+ \]
+\end{thm}
+
+\subsection{Conformal invariance and SLE in \texorpdfstring{$d = 2$}{d = 2}}
+Instead of working on $\Z^2$, we will now work on the triangular lattice $\T$. We consider \term{site percolation}. So every vertex is open (black) with probability $p$ and closed (white) with probability $1 - p$, independently for different vertices.
+
Like for $\Z^2$, one can show that $p_c(\T) = \frac{1}{2}$.
+
+Let $D$ be an open simply connected domain in $\R^2$ with $\partial D$ a Jordan curve. Let $a, b, c \in \partial D$ be $3$ points labeled in anti-clockwise order. Consider the triangle $T$ with vertices $A = 0$, $B = 1$ and $C = e^{i\pi/3}$. By the Riemann mapping theorem, there exists a conformal map $\varphi: D \to T$ that maps $a \mapsto A$, $b \mapsto B$, $c \mapsto C$.
+
+Moreover, this can be extended to $\partial D$ such that $\varphi: \bar{D} \to \bar{T}$ is a homeomorphism. If $x$ is in the arc $bc$, then it will be mapped to $X = \varphi(x)$ on the edge $BC$ of $T$.
+
We focus on the case $p = \frac{1}{2}$. We can again put a triangular lattice inside $D$, with mesh size $\delta$. We write $ac \leftrightarrow bx$ for the event that there is an open path in $D$ joining the arc $ac$ to the arc $bx$.
+
+Then by RSW (for $\T$), we get that
+\[
+ \P_\delta(ac \leftrightarrow bx) \geq c > 0,
+\]
+independent of $\delta$. We might ask what happens when $\delta \to 0$. In particular, does it converge, and what does it converge to?
+
+Cardy, a physicist studying conformal field theories, conjectured that
+\[
+ \lim_{\delta \to 0} \P_\delta(ac \leftrightarrow bx) = |BX|.
+\]
He didn't really write it this way, but expressed it in terms of hypergeometric functions. It was Carleson who expressed it in this form.
+
+In 2001, Smirnov proved this conjecture.
+
+\begin{thm}[Smirnov, 2001]
+ Suppose $(\Omega, a, b, c, d)$ and $(\Omega', a', b', c', d')$ are conformally equivalent. Then
+ \[
+ \P(ac \leftrightarrow bd\text{ in }\Omega) = \P(a' c' \leftrightarrow b' d'\text{ in }\Omega').
+ \]
+\end{thm}
+This says percolation at criticality on the triangular lattice is conformally invariant.
+
We may also take the dual of the triangular lattice, which is the hexagonal lattice. Colour a hexagon black if its center is black, and white if its center is white. Suppose we impose the boundary condition on the upper half plane that the hexagons are black when $x > 0, y = 0$, and white when $x < 0, y = 0$.
+
Starting at $(x, y) = (0, 0)$, we explore the interface between the black and white regions by always keeping black on our right and white on our left. We can again take the limit $\delta \to 0$. What will this exploration path look like in the limit $\delta \to 0$? It turns out it is an SLE(6) curve.
+
+To prove this, we use the fact that the exploration curve satisfies the \emph{locality property}, namely that if we want to know where we will be in the next step, we only need to know where we currently are.
+
+%Let $\gamma_t$ be a continuous curve in $\H$, and let $K_t$ be the unbounded component of $\H \setminus \gamma [0, t]$. Then there is a conformal map $g_t: \H \setminus K_t \to \H$ such that
+%\[
+% \frac{\d}{\d t} g_t(x) = \frac{2}{g_t(z) - a(t)},
+%\]
+%and $a(t) = g_t(\gamma_t)$, $g_0(z) = z$.
+%
+%This is known as \term{Loewner's equation}.
+%
+%If we take $a(t) = \sqrt{\kappa} B_t$, where $B_t$ is a standard Brownian motion, this gives us SLE($\kappa$).
+
+\section{Random walks}
+\subsection{Random walks in finite graphs}
We shall consider weighted random walks on finite graphs. Let $G = (V, E)$ be a finite connected graph. We assign conductances (or weights) $(c(e))_{e \in E}$ to the edges. Here $c$ is a function of the unoriented edges, so we require $c(xy) = c(yx)$.
+
+In the weighted random walk on $G$, the probability of going from $x$ to $y$ (assuming $x \sim y$) is
+\[
+ \P(x, y) = \frac{c(x, y)}{ c(x)},\quad c(x) = \sum_{z \sim x} c(x, z).
+\]
+If we put $c(e) = 1$ for all $e$, then this is just a simple random walk, with
+\[
+ \P(x, y) =
+ \begin{cases}
+ \frac{1}{\deg x} & y \sim x\\
+ 0 & \text{otherwise}
+ \end{cases}.
+\]
+The weighted random walk is reversible with respect to $\pi(x) = \frac{c(x)}{c_G}$, where $c_G = \sum_x c(x)$, because
+\[
+ \pi(x) P(x, y) = \frac{c(x)}{c_G} \cdot \frac{c(x, y)}{c(x)} = \pi(y) \P(y, x).
+\]
Conversely, every reversible Markov chain can be represented as a random walk on a weighted graph --- place an edge $\{x, y\}$ if $\P(x, y) > 0$. By reversibility, $\P(x, y) > 0$ iff $\P(y, x) > 0$. Define weights
+\[
+ c(x, y) = \pi(x) \P(x, y)
+\]
+for all $x \sim y$.
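As a quick sanity check, this correspondence can be sketched in a few lines of Python. The three-vertex network and all names below are made up for illustration; the code builds $P$ and $\pi$ from the conductances, and one can then verify the detailed balance equations $\pi(x) \P(x, y) = \pi(y) \P(y, x)$ exactly.

```python
from fractions import Fraction

# A made-up weighted triangle: conductances on unoriented edges.
edges = {('a', 'b'): Fraction(1), ('b', 'z'): Fraction(2), ('a', 'z'): Fraction(3)}

c = {}                      # c(xy) = c(yx): store both orientations
for (x, y), w in edges.items():
    c[(x, y)] = w
    c[(y, x)] = w

V = sorted({v for e in c for v in e})
# c(x) = sum of conductances of edges at x
c_vert = {x: sum(w for (u, _), w in c.items() if u == x) for x in V}

# Transition matrix P(x, y) = c(x, y) / c(x); stationary pi(x) = c(x) / c_G.
P = {(x, y): c.get((x, y), Fraction(0)) / c_vert[x] for x in V for y in V}
c_G = sum(c_vert.values())
pi = {x: c_vert[x] / c_G for x in V}
```

Since everything is done with exact rationals, detailed balance holds as an identity rather than up to floating-point error.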
+
+One way to understand random walks on weighted graphs is to think of them as currents in electrical networks. We set the resistance on the edges by
+\[
+ r(e) = \frac{1}{c(e)}.
+\]
+Naturally, we would like to talk about currents flowing through the circuit.
+\begin{defi}[Flow]\index{flow}
+ A \emph{flow} $\theta$ on $G$ is a function defined on oriented edges which is anti-symmetric, i.e.\ $\theta(x, y) = - \theta(y, x)$.
+\end{defi}
+Not every flow can be thought of as a current. We fix two distinguished vertices $a$ and $z$, called the \term{source} and the \term{sink} respectively. We think of current as entering the network through $a$ and exiting through $z$. Through any other vertex, the amount of current going in should equal the amount of current going out.
+
+\begin{defi}[Divergence]\index{divergence}
+ The \emph{divergence} of a flow $\theta$ is
+ \[
+ \div \theta (x) = \sum_{y \sim x} \theta(x, y).
+ \]
+\end{defi}
By the antisymmetry property, we have
\[
 \sum_x \div \theta(x) = \sum_x \sum_{y \sim x} \theta(x, y) = 0,
\]
since each edge contributes $\theta(x, y) + \theta(y, x) = 0$ to the sum.
+\begin{defi}[Flow from $a$ to $z$]\index{flow}
+ A flow $\theta$ from $a$ to $z$ is a flow such that
+ \begin{enumerate}
+ \item $\div \theta(x) = 0$ for all $x \not \in \{a, z\}$. \hfill (\term{Kirchhoff's node law})
+ \item $\div \theta(a) \geq 0$.
+ \end{enumerate}
 The \term{strength} of the flow from $a$ to $z$ is $\|\theta\| = \div \theta(a)$.\index{$\|\theta\|$} We say this is a \term{unit flow} if $\|\theta\| = 1$.
+\end{defi}
+Since $\sum \div \theta(x) = 0$, a flow from $a$ to $z$ satisfies $\div \theta(z) = - \div \theta(a)$.
+
+Now if we believe in Ohm's law, then the voltage difference across each edge is $I(x, y) r(x, y)$. So we have
+\[
+ W(x) = W(y) + r(x, y) I(x, y)
+\]
+for all $y$. We know that if we sum $I(x, y)$ over all $y$, then the result vanishes. So we rewrite this as
+\[
+ c(x, y) W(x) = c(x, y) W(y) + I(x, y).
+\]
+Summing over all $y$, we find that the voltage satisfies
+\[
 W(x) = \sum_{y \sim x} \frac{c(x, y)}{c(x)} W(y) = \sum_{y \sim x} \P(x, y) W(y).
+\]
+\begin{defi}[Harmonic function]\index{harmonic function}
+ Let $P$ be a transition matrix on $\Omega$. We call $h$ \emph{harmonic} for $P$ at the vertex $x$ if
+ \[
+ h(x) = \sum_y \P(x, y) h(y).
+ \]
+\end{defi}
+
+We now take this as a \emph{definition} of what a voltage is.
+\begin{defi}[Voltage]\index{voltage}
+ A \emph{voltage} $W$ is a function on $\Omega$ that is harmonic on $\Omega \setminus \{a, z\}$.
+\end{defi}
+
+The first theorem to prove is that given any two values $W(a)$ and $W(z)$, we can find a unique voltage with the prescribed values at $a$ and $z$.
+
+More generally, if $B \subseteq \Omega$, and $X$ is a Markov chain with matrix $P$, then we write
+\[
 \tau_B = \min\{t \geq 0: X_t \in B\}.
+\]
+
+\begin{prop}
+ Let $P$ be an irreducible matrix on $\Omega$ and $B \subseteq \Omega$, $f: B \to \R$ a function. Then
+ \[
+ h(x) = \E_x [f(X_{\tau_B})]
+ \]
+ is the unique extension of $f$ which is harmonic on $\Omega \setminus B$.
+\end{prop}
+
+\begin{proof}
+ It is obvious that $h(x) = f(x)$ for $x \in B$. Let $x \not\in B$. Then
+ \begin{align*}
+ h(x) &= \E_x [f(X_{\tau_B})] \\
+ &= \sum_y \P(x, y) \E_x[f(X_{\tau_B}) \mid X_1 = y]\\
+ &= \sum_y \P(x, y) \E_y[f(X_{\tau_B})]\\
 &= \sum_y \P(x, y) h(y).
+ \end{align*}
+ So $h$ is harmonic.
+
 To show uniqueness, suppose $h'$ is another harmonic extension. Take $g = h - h'$. Then $g = 0$ on $B$. Set
+ \[
+ A = \left\{x: g(x) = \max_{y \in \Omega \setminus B} g(y) \right\},
+ \]
 and let $x \in A \setminus B$. Now since $g(x)$ is the weighted average of its neighbours, and $g(y) \leq g(x)$ for all neighbours $y$ of $x$, it must be the case that $g(x) = g(y)$ for all neighbours $y$ of $x$.
+
+ Since we assumed $G$ is connected, we can construct a path from $x$ to the boundary, where $g$ vanishes. So $g(x) = 0$. In other words, $\max g = 0$. Similarly, $\min g = 0$. So $g = 0$.
+\end{proof}
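The averaging characterization also gives a practical way to compute the harmonic extension: repeatedly replace each interior value by the average over the neighbours. Here is a minimal sketch (simple random walk on a path of four vertices, an example chosen so that the answer $h(x) = x/3$ is known from gambler's ruin):

```python
# Simple random walk on the path 0-1-2-3, with B = {0, 3}, f(0) = 0, f(3) = 1.
# The unique harmonic extension is h(x) = x/3 (gambler's ruin); we recover it
# by Jacobi iteration: replace each interior value by its neighbour average.
h = {0: 0.0, 1: 0.0, 2: 0.0, 3: 1.0}
for _ in range(200):
    h = {0: 0.0,
         1: (h[0] + h[2]) / 2,
         2: (h[1] + h[3]) / 2,
         3: 1.0}
```

The iteration is a contraction here, so after a couple of hundred rounds the values agree with $1/3$ and $2/3$ to machine precision.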
+
+Given a voltage, Ohm's law\index{Ohm's law} defines a natural flow, called the \emph{current flow}.
+\begin{defi}[Current flow]\index{current flow}
+ The \emph{current flow} associated to the voltage $W$ is
+ \[
+ I(x, y) = \frac{W(x) - W(y)}{r(x, y)} = c(x, y) (W(x) - W(y)).
+ \]
+\end{defi}
+The current flow has an extra property, namely that it satisfies the \term{cycle law}. For any cycle $e_1, e_2, \ldots, e_n$, we have
+\[
+ \sum_{i = 1}^n r(e_i) I (e_i) = 0.
+\]
+
+\begin{prop}
 Let $\theta$ be a flow from $a$ to $z$ satisfying the cycle law for any cycle. Let $I$ be the current flow associated to a voltage $W$. If $\|\theta\| = \|I\|$, then $\theta = I$.
+\end{prop}
+
+\begin{proof}
 Take $f = \theta - I$. Then $f$ is a flow which satisfies Kirchhoff's node law at all vertices and the cycle law for any cycle. We want to show that $f = 0$. Suppose not. Then we can find some oriented edge $e_1$ such that $f(e_1) > 0$. But
+ \[
+ \sum_{y\sim x} f(x, y) = 0
+ \]
+ for all $x$, so there must exist an edge $e_2$ that $e_1$ leads to such that $f(e_2) > 0$. Continuing this way, we will get a cycle of ordered edges where $f > 0$, which violates the cycle law.
+\end{proof}
+Let $W_0$ be a voltage with $W_0(a) = 1$ and $W_0(z) = 0$, and let $I_0$ be the current associated to $W_0$. Then any other voltage $W$ can be written as
+\[
+ W(x) = (W(a) - W(z)) W_0(x) + W(z).
+\]
+So, noting that $W_0(a) = 1$, we have
+\[
 W(a) - W(x) = (W(a) - W(z))(W_0(a) - W_0(x)).
+\]
+Thus, if $I$ is the current associated to $W$, then
+\[
+ \|I\| = \sum_{x \sim a} I(a, x) = \sum_{x \sim a} \frac{W(a) - W(x)}{r(a, x)} = (W(a) - W(z)) \|I_0\|.
+\]
+So we know that
+\[
+ \frac{W(a) - W(z)}{\|I\|} = \frac{1}{\|I_0\|},
+\]
+and in particular does not depend on $W$.
+
+\begin{defi}[Effective resistance]\index{Effective resistance}
+ The \emph{effective resistance} $R_{\mathrm{eff}}(a, z)$ of an electric network is defined to be the ratio
+ \[
+ R_{\mathrm{eff}}(a, z) = \frac{W(a) - W(z)}{\|I\|}
+ \]
+ for any voltage $W$ with associated current $I$. The \term{effective conductance} is $C_{\mathrm{eff}}(a, z) = R_{\mathrm{eff}}(a, z)^{-1}$.
+\end{defi}
+
+\begin{prop}
+ Take a weighted random walk on $G$. Then
+ \[
+ \P_a(\tau_z < \tau_a^+) = \frac{1}{c(a) R_{\mathrm{eff}}(a, z)},
+ \]
+ where $\tau_a^+ = \min \{t \geq 1: X_t = a\}$.
+\end{prop}
+
+\begin{proof}
+ Let
+ \[
+ f(x) = \P_x(\tau_z < \tau_a).
+ \]
+ Then $f(a) = 0$ and $f(z) = 1$. Moreover, $f$ is harmonic on $\Omega \setminus \{a, z\}$. Let $W$ be a voltage. By uniqueness, we know
+ \[
+ f(x) = \frac{W(a) - W(x)}{W(a) - W(z)}.
+ \]
+ So we know
+ \begin{align*}
+ \P_a(\tau_z < \tau_a^+) &= \sum_{x \sim a} \P(a, x) f(x) \\
+ &= \sum_{x \sim a} \frac{c(a, x)}{c(a)} \frac{W(a) - W(x)}{W(a) - W(z)}\\
+ &= \frac{1}{c(a) (W(a) - W(z))} \sum_{x \sim a}I(a, x)\\
+ &= \frac{\|I\|}{c(a) (W(a) - W(z))}\\
+ &= \frac{1}{c(a) R_{\mathrm{eff}}(a, z)}.\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{defi}[Green kernel]\index{Green kernel}
+ Let $\tau$ be a stopping time. We define the \emph{Green kernel} to be
+ \[
+ G_\tau(a, x) = \E_a \left[ \sum_{t = 0}^\infty \mathbf{1}(X_t = x, t < \tau)\right].
+ \]
+\end{defi}
+This is the expected number of times we hit $x$ before $\tau$ occurs.
+\begin{cor}
+ For any reversible chain and all $a, z$, we have
+ \[
+ G_{\tau_z}(a, a) = c(a) R_{\mathrm{eff}}(a, z).
+ \]
+\end{cor}
+
+\begin{proof}
 By the Markov property, the number of visits to $a$ starting from $a$ until $\tau_z$ is geometrically distributed with success probability $\P_a(\tau_z < \tau_a^+)$. So
+ \[
+ G_{\tau_z}(a, a) = \frac{1}{\P_a(\tau_z < \tau_a^+)} = c(a) R_{\mathrm{eff}}(a, z).\qedhere
+ \]
+\end{proof}
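For a concrete instance of the escape-probability formula, consider the triangle on vertices $a, b, z$ with unit conductances (an illustrative example, not part of the notes' development). The only interior vertex is $b$, so the voltage can be solved by hand, and the sketch below checks that $R_{\mathrm{eff}}(a, z) = \frac{2}{3}$ and $\P_a(\tau_z < \tau_a^+) = \frac{3}{4}$:

```python
from fractions import Fraction

# Triangle a-b-z with unit conductances (illustrative example).
# W is harmonic at the unique interior vertex b, with W(a) = 1, W(z) = 0.
Wa, Wz = Fraction(1), Fraction(0)
Wb = (Wa + Wz) / 2                  # average of the two neighbours of b

# Current leaving a (Ohm's law with unit resistances):
strength = (Wa - Wb) + (Wa - Wz)    # ||I||
R_eff = (Wa - Wz) / strength        # effective resistance

# Escape probability: P_a(tau_z < tau_a^+) = 1 / (c(a) R_eff), with c(a) = 2.
escape = 1 / (2 * R_eff)
```

One can confirm the escape probability directly: from $a$ we step to $b$ or $z$ with probability $\frac{1}{2}$ each, and from $b$ we reach $z$ before $a$ with probability $\frac{1}{2}$, giving $\frac{1}{2} + \frac{1}{2} \cdot \frac{1}{2} = \frac{3}{4}$.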
+
+Practically, to compute the effective resistance, it is useful to know some high school physics. For example, we know that conductances in parallel add:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (1, 0) -- (1, 0.5) -- (2, 0.5);
+ \draw (1, 0) -- (1, -0.5) -- (2, -0.5);
+ \draw [decorate, decoration={zigzag}] (2, 0.5) -- (2.5, 0.5) node [pos=0.5, above] {$c_1$};
+ \draw [decorate, decoration={zigzag}] (2, -0.5) -- (2.5, -0.5) node [pos=0.5, below] {$c_2$};
+
+ \draw (2.5, 0.5) -- (3.5, 0.5) -- (3.5, -0.5) -- (2.5, - 0.5);
+ \draw (3.5, 0) -- (4.5, 0);
+
+ \node at (5.5, 0) {$=$};
+
+ \draw (6.5, 0) -- (7.5, 0);
+ \draw [decorate, decoration={zigzag}] (7.5, 0) -- (8, 0) node [pos=0.5, above] {$c_1 + c_2$};
+ \draw (8, 0) -- (9, 0);
+ \end{tikzpicture}
+\end{center}
+
Thus, if $e_1$ and $e_2$ are two edges with the same endvertices, then we can replace them by a single edge of conductance the sum of the conductances. We can prove that this actually works by observing that the voltages and currents remain unchanged outside of $e_1, e_2$, and we check that Kirchhoff's and Ohm's laws are satisfied with
+\[
+ I(e) = I(e_1) + I(e_2).
+\]
Similarly, resistances in series add. Let $v$ be a node of degree $2$ and let $v_1$ and $v_2$ be its two neighbours. Then we can replace these two edges by a single edge of resistance $r_1 + r_2$. Again, to justify this, we check Kirchhoff's and Ohm's laws with
+\[
+ I(v_1 v_2) = I(v_1 v) + I(vv_2).
+\]
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (0.5, 0);
+ \draw [decorate, decoration={zigzag}] (0.5, 0) -- (1, 0) node [pos=0.5, above] {$r_1$};
+ \draw (1, 0) -- (2, 0);
+ \draw [decorate, decoration={zigzag}] (2, 0) -- (2.5, 0) node [pos=0.5, above] {$r_2$};
+ \draw (2.5, 0) -- (3, 0);
+
+ \node [below] at (0, 0) {$v_1$};
+ \node [below] at (1.5, 0) {$v$};
+ \node [below] at (3, 0) {$v_2$};
+
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1.5, 0) {};
+ \node [circ] at (3, 0) {};
+
+ \node at (4, 0) {$=$};
+ \draw (5, 0) -- (6, 0);
+ \draw [decorate, decoration={zigzag}] (6, 0) -- (6.5, 0) node [pos=0.5, above] {$r_1 + r_2$};
+ \draw (6.5, 0) -- (7.5, 0);
+ \node [circ] at (5, 0) {};
+ \node [circ] at (7.5, 0) {};
+ \end{tikzpicture}
+\end{center}
+
We can also glue two vertices at the same potential, keeping all other edges. Currents and potentials are unchanged, since no current flows between two vertices at the same potential.
+
+\begin{ex}
+ Let $T$ be a finite connected tree, and $a, z$ two vertices. Then $R_{\mathrm{eff}}(a, z)$ is the graph distance between $a$ and $z$.
+\end{ex}
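The two reduction rules are one-liners; composing them recovers, for example, the effective resistance of a triangle with unit resistances, and for a tree the series rule alone gives the graph-distance answer of the example above. (The function names here are my own.)

```python
from fractions import Fraction

def series(r1, r2):
    """Resistances in series add."""
    return r1 + r2

def parallel(r1, r2):
    """Conductances in parallel add, so 1/R = 1/r1 + 1/r2."""
    return 1 / (1 / r1 + 1 / r2)

one = Fraction(1)
# Triangle with unit resistances: the edge a-z in parallel with the path a-b-z.
R_triangle = parallel(one, series(one, one))
```

Using exact fractions keeps the identities exact; `R_triangle` is $\frac{2}{3}$, matching the earlier direct computation with voltages.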
+
It turns out our definition of effective resistance is often inconvenient to work with directly. Instead, we can use the well-known result from high school physics $P = IV = I^2 R$, which motivates the following definition.
+
+\begin{defi}[Energy]\index{energy}
+ Let $\theta$ be a flow on $G$ with conductances $(c(e))$. Then the \emph{energy} is
+ \[
+ \mathcal{E}(\theta) = \sum_e (\theta(e))^2 r(e).
+ \]
+ Here we sum over unoriented edges.
+\end{defi}
+Note that given any flow, we can always increase the energy by pumping more and more current along a cycle. Thus, we might expect the lowest energy configuration to be given by the flows satisfying the cycle law, and if we focus on unit flows, the following should not be surprising:
+
+\begin{thm}[Thomson's principle]\index{Thomson's principle}
+ Let $G$ be a finite connected graph with conductances $(c(e))$. Then for any $a, z$, we have
+ \[
+ R_{\mathrm{eff}}(a, z) = \inf \{\mathcal{E}(\theta): \text{$\theta$ is a unit flow from $a$ to $z$}\}.
+ \]
+ Moreover, the unit current flow from $a$ to $z$ is the unique minimizer.
+\end{thm}
+
+\begin{proof}
+ Let $i$ be the unit current flow associated to potential $\varphi$. Let $j$ be another unit flow $a \to z$. We need to show that $\mathcal{E}(j) \geq \mathcal{E}(i)$ with equality iff $j = i$.
+
+ Let $k = j - i$. Then $k$ is a flow of $0$ strength. We have
+ \[
+ \mathcal{E}(j) = \mathcal{E}(i + k) = \sum_e (i(e) + k(e))^2 r(e) = \mathcal{E}(i) + \mathcal{E}(k) + 2 \sum_e i(e) k(e) r(e).
+ \]
+ It suffices to show that the last sum is zero. By definition, we have
+ \[
+ i(x, y) = \frac{\varphi(x) - \varphi(y)}{r(x, y)}.
+ \]
+ So we know
+ \begin{align*}
 \sum_e i(e) k(e) r(e) &= \frac{1}{2} \sum_x \sum_{y \sim x} (\varphi(x) - \varphi(y)) k(x, y)\\
 &= \frac{1}{2} \sum_x \sum_{y \sim x} \varphi(x)k(x, y) - \frac{1}{2} \sum_x \sum_{y \sim x} \varphi(y) k(x, y)\\
 &= \sum_x \varphi(x) \sum_{y \sim x} k(x, y),
 \end{align*}
 where the last equality follows by swapping the roles of $x$ and $y$ in the second sum and using the antisymmetry of $k$. Now note that $\sum_{y \sim x} k(x, y) = \div k(x) = 0$ for any $x$, since $k$ is a flow of zero strength. So this vanishes.
+
+ Thus, we get
+ \[
+ \mathcal{E}(j) = \mathcal{E}(i) + \mathcal{E}(k),
+ \]
+ and so $\mathcal{E}(j) \geq \mathcal{E}(i)$ with equality iff $\mathcal{E}(k) = 0$, i.e.\ $k(e) = 0$ for all $e$.
+\end{proof}
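The computation in the proof can be seen numerically on the triangle with unit resistances: the unit current flow has energy $\frac{2}{3} = R_{\mathrm{eff}}(a, z)$, and adding any cycle flow keeps the strength but strictly increases the energy, by exactly $\mathcal{E}(k)$ as in the proof.

```python
from fractions import Fraction

# Unit current flow from a to z on the triangle with unit resistances:
# voltages (2/3, 1/3, 0) at (a, b, z) give i(a,z) = 2/3, i(a,b) = i(b,z) = 1/3.
i = {('a', 'z'): Fraction(2, 3), ('a', 'b'): Fraction(1, 3), ('b', 'z'): Fraction(1, 3)}

def energy(theta):
    # unit resistances, so E(theta) = sum over edges of theta(e)^2
    return sum(v ** 2 for v in theta.values())

# Add a cycle flow of strength 1/10 around a -> b -> z -> a: still a unit flow
# from a to z, but the energy increases by E(k) = 3 * (1/10)^2.
eps = Fraction(1, 10)
j = {('a', 'z'): i[('a', 'z')] - eps,
     ('a', 'b'): i[('a', 'b')] + eps,
     ('b', 'z'): i[('b', 'z')] + eps}
```

The cross term vanishes exactly as in the proof, which is why the energy increase is precisely $\mathcal{E}(k) = 3\varepsilon^2$.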
+Using this characterization, it is then immediate that
+\begin{thm}[Rayleigh's monotonicity principle]\index{Rayleigh's monotonicity principle}\index{monotonicity principle}
+ Let $G$ be a finite connected graph and $(r(e))_e$ and $(r'(e))_e$ two sets of resistances on the edges such that $r(e) \leq r'(e)$ for all $e$. Then
+ \[
+ R_{\mathrm{eff}}(a, z; r) \leq R_{\mathrm{eff}}(a, z; r').
+ \]
+ for all $a, z \in G$.
+\end{thm}
+
+\begin{proof}
 By definition of energy, for any flow $\theta$, we have
 \[
 \mathcal{E}(\theta; r) \leq \mathcal{E}(\theta; r').
 \]
 Then take the infimum over unit flows from $a$ to $z$ and conclude by Thomson's principle.
+\end{proof}
+
+\begin{cor}
+ Suppose we add an edge to $G$ which is not adjacent to $a$. This increases the escape probability
+ \[
+ \P_a(\tau_z < \tau_a^+).
+ \]
+\end{cor}
+
+\begin{proof}
+ We have
+ \[
 \P_a(\tau_z < \tau_a^+) = \frac{1}{c(a) R_{\mathrm{eff}}(a, z)}.
+ \]
+ We can think of the new edge as an edge having infinite resistance in the old graph. By decreasing it to a finite number, we decrease $R_{\mathrm{eff}}$.
+\end{proof}
+
+\begin{defi}[Edge cutset]\index{edge cutset}\index{cutset}
+ A set of edges $\Pi$ is an edge-cutset separating $a$ from $z$ if every path from $a$ to $z$ uses an edge of $\Pi$.
+\end{defi}
+
+\begin{thm}[Nash--Williams inequality]\index{Nash--Williams inequality}
+ Let $(\Pi_k)$ be disjoint edge-cutsets separating $a$ from $z$. Then
+ \[
+ R_{\mathrm{eff}}(a, z) \geq \sum_k \left(\sum_{e \in \Pi_k}c(e)\right)^{-1}.
+ \]
+\end{thm}
The idea is that every unit of current from $a$ to $z$ must cross each cutset $\Pi_k$. If we merge the edges of $\Pi_k$ into a single edge of conductance $\sum_{e \in \Pi_k} c(e)$, these merged edges are in series, which gives the lower bound on the effective resistance.
+
+\begin{proof}
 By Thomson's principle, it suffices to prove that for any unit flow $\theta$ from $a$ to $z$, we have
+ \[
+ \sum_e (\theta(e))^2 r(e) \geq \sum_k \left(\sum_{e \in \Pi_k}c(e)\right)^{-1}.
+ \]
+ Certainly, we have
+ \[
+ \sum_e (\theta(e))^2 r(e) \geq \sum_k \sum_{e \in \Pi_k}(\theta(e))^2 r(e).
+ \]
+ We now use Cauchy--Schwarz to say
+ \begin{align*}
+ \left(\sum_{e \in \Pi_k} (\theta(e))^2 r(e)\right) \left(\sum_{e \in \Pi_k}c(e)\right) &\geq \left(\sum_{e \in \Pi_k} |\theta(e)| \sqrt{r(e) c(e)} \right)^2\\
+ &= \left(\sum_{e \in \Pi_k} |\theta(e)|\right)^2.
+ \end{align*}
+ The result follows if we show that the final sum is $\geq 1$.
+
+ Let $F$ be the component of $G \setminus \Pi_k$ containing $a$. Let $F'$ be the set of all edges that start in $F$ and land outside $F$. Then we have
+ \[
+ 1 = \sum_{x \in F} \div \theta(x) \leq \sum_{e \in F'} |\theta(e)| \leq \sum_{e \in \Pi_k} |\theta(e)|.\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
 Consider $B_n = [1, n]^2 \cap \Z^2$, and let $a = (1, 1)$ and $z = (n, n)$ be opposite corners. Then
+ \[
+ R_{\mathrm{eff}}(a, z) \geq \frac{1}{2} \log (n - 1).
+ \]
+\end{cor}
+There is also an upper bound of the form $R_{\mathrm{eff}}(a, z) \leq c \log n$. So this is indeed the correct order. The proof involves constructing an explicit flow with this energy.
+
+\begin{proof}
 Take
 \[
 \Pi_k = \{(v, u): v, u \in B_n, \|v\|_\infty = k - 1, \|u\|_\infty = k\}
 \]
 for $k = 2, \ldots, n$ (recall $a = (1, 1)$, so $\|a\|_\infty = 1$). These are disjoint edge-cutsets separating $a$ from $z$, with
 \[
 |\Pi_k| = 2(k - 1).
 \]
 So by Nash--Williams,
 \[
 R_{\mathrm{eff}}(a, z) \geq \sum_{k = 2}^{n} \frac{1}{2(k - 1)} = \frac{1}{2} \sum_{j = 1}^{n - 1} \frac{1}{j} \geq \frac{1}{2} \log (n - 1).\qedhere
 \]
+\end{proof}
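The cutset sizes and the resulting bound are easy to check by brute force (illustrative code, with $\|v\|_\infty = \max(x, y)$ for $v = (x, y) \in [1, n]^2$):

```python
import math
from fractions import Fraction

def cutset_size(k, n):
    """Number of edges of B_n = [1, n]^2 from sup-norm level k - 1 to level k."""
    count = 0
    for x in range(1, n + 1):
        for y in range(1, n + 1):
            if max(x, y) == k - 1:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, w = x + dx, y + dy
                    if 1 <= u <= n and 1 <= w <= n and max(u, w) == k:
                        count += 1
    return count

n = 10
# Nash--Williams lower bound from the disjoint cutsets Pi_2, ..., Pi_n:
bound = sum(Fraction(1, cutset_size(k, n)) for k in range(2, n + 1))
```

Enumerating confirms $|\Pi_k| = 2(k - 1)$, and the resulting harmonic sum dominates $\frac{1}{2}\log(n - 1)$.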
+
+We shall need the following result:
+\begin{prop}
+ Let $X$ be an irreducible Markov chain on a finite state space. Let $\tau$ be a stopping time such that $\P_a(X_\tau = a) = 1$ and $\E_a[\tau] < \infty$ for some $a$ in the state space. Then
+ \[
+ G_\tau(a, x) = \pi(x) \E_a[\tau].
+ \]
+\end{prop}
A special case of this result was proven in IB Markov Chains, where $\tau$ was taken to be the first return time to $a$. The proof of this is very similar, and we shall not repeat it.
+
+Using this, we can prove the commute-time identity.
+\begin{thm}[Commute time identity]\index{commute time identity}
+ Let $X$ be a reversible Markov chain on a finite state space. Then for all $a, b$, we have
+ \[
+ \E_a [\tau_b] + \E_b[ \tau_a] = c(G) R_{\mathrm{eff}}(a, b),
+ \]
+ where
+ \[
+ c(G) = 2 \sum_e c(e).
+ \]
+\end{thm}
+
+\begin{proof}
+ Let $\tau_{a,b}$ be the first time we hit $a$ after we hit $b$. Then $\P_a(X_{\tau_{a, b}} = a) = 1$. By the proposition, we have
+ \[
+ G_{\tau_{a, b}}(a, a) = \pi(a) \E_a[\tau_{a,b}].
+ \]
 Since, starting from $a$, the number of returns to $a$ by time $\tau_{a, b}$ is the same as the number up to time $\tau_b$, we get
+ \[
+ G_{\tau_{a,b}}(a, a) = G_{\tau_b}(a, a) = c(a) R_{\mathrm{eff}}(a, b).
+ \]
+ Combining the two, we get
+ \[
+ \pi(a) \E_a[\tau_{a, b}] = c(a) R_{\mathrm{eff}}(a, b).
+ \]
 Since $\pi(a) = c(a)/c(G)$ and $\E_a[\tau_{a, b}] = \E_a[\tau_b] + \E_b[\tau_a]$, the result follows.
+\end{proof}
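On the path $0 - 1 - 2 - 3$ with unit conductances we have $c(G) = 6$ and $R_{\mathrm{eff}}(0, 3) = 3$, so the identity predicts a commute time of $18$. The sketch below computes the expected hitting times by value iteration, $h_{k + 1}(x) = 1 + \sum_y \P(x, y) h_k(y)$ for $x$ other than the target (a standard numerical method, not from the notes):

```python
# Path 0-1-2-3 with unit conductances: c(G) = 2 * (number of edges) = 6 and
# R_eff(0, 3) = 3 (three unit resistors in series), so the commute time
# identity predicts E_0[tau_3] + E_3[tau_0] = 18.
def hitting_times(target, n=4, iterations=20000):
    # value iteration for h(x) = E_x[tau_target] on the path 0, ..., n - 1
    h = [0.0] * n
    for _ in range(iterations):
        new = [0.0] * n
        for x in range(n):
            if x == target:
                continue
            nbrs = [y for y in (x - 1, x + 1) if 0 <= y < n]
            new[x] = 1 + sum(h[y] for y in nbrs) / len(nbrs)
        h = new
    return h

commute = hitting_times(3)[0] + hitting_times(0)[3]
```

The iterates increase monotonically to the true hitting times (they compute expectations of truncated hitting times), and the error decays geometrically.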
+In particular, from the commute-time identity, we see that the effective resistance satisfies the triangle inequality, and hence defines a metric on any graph.
+
+\subsection{Infinite graphs}
+Finally, let's think a bit about how we can develop the same theory for infinite graphs.
+
+Let $G$ be an infinite connected weighted graph with conductances $(c(e))_e$. Let $0$ be a distinguished vertex. Let $(G_k = (V_k, E_k))_k$ be an \term{exhaustion} of $G$ by finite graphs, i.e.
+\begin{itemize}
+ \item $V_k \subseteq V_{k + 1}$ for all $k$;
+ \item $0 \in V_k$ for all $k$;
+ \item $\bigcup_k V_k = V$;
+ \item $E_k$ is the set of edges of $E$ with both end points in $V_k$.
+\end{itemize}
+
+Let $G_n^*$ be the graph obtained by gluing all the vertices of $V \setminus V_n$ into a single point $z_n$ and setting
+\[
+ c(x, z_n) = \sum_{z \in V \setminus V_n} c(x, z)
+\]
+for all $x \in V_n$. Now observe that $R_{\mathrm{eff}}(0, z_n; G_n^*)$ is a non-decreasing function of $n$ (Rayleigh). So the limit as $n \to \infty$ exists. We set \index{$R_{\mathrm{eff}}(0, \infty)$}
+\[
 R_{\mathrm{eff}}(0, \infty) = \lim_{n \to \infty} R_{\mathrm{eff}}(0, z_n; G_n^*).
+\]
+This is independent of the choice of exhaustion by Rayleigh. % or by interlacing.
+
+Now we have
+\[
 \P_0(\tau_0^+ = \infty) = \lim_{n \to \infty} \P_0(\tau_{z_n} < \tau_0^+) = \lim_{n \to \infty} \frac{1}{c(0) R_{\mathrm{eff}}(0, z_n; G_n^*)} = \frac{1}{c(0) R_{\mathrm{eff}}(0, \infty)}.
+\]
+\begin{defi}[Flow]\index{flow}
+ Let $G$ be an infinite graph. A \emph{flow} $\theta$ from $0$ to $\infty$ is an anti-symmetric function on the edges such that $\div \theta(x) = 0$ for all $x \not= 0$.
+\end{defi}
+
+\begin{thm}
+ Let $G$ be an infinite connected graph with conductances $(c(e))_e$. Then
+ \begin{enumerate}
+ \item Random walk on $G$ is recurrent iff $R_{\mathrm{eff}}(0, \infty) = \infty$.
+ \item The random walk is transient iff there exists a unit flow $i$ from $0$ to $\infty$ of finite energy
+ \[
+ \mathcal{E}(i) = \sum_e (i(e))^2 r(e).
+ \]
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ The first part is immediate since
+ \[
+ \P_0(\tau_0^+ = \infty) = \frac{1}{c(0) R_{\mathrm{eff}}(0, \infty)}.
+ \]
 To prove the second part, let $\theta$ be a unit flow from $0$ to $\infty$ of finite energy. We want to show that the effective resistance is finite, and it suffices to extend Thomson's principle to infinite graphs.
+
 Let $i_n$ be the unit current flow from $0$ to $z_n$ in $G_n^*$. Let $v_n(x)$ be the associated potential. Then by Thomson's principle,
+ \[
+ R_{\mathrm{eff}}(0, z_n; G_n^*) = \mathcal{E}(i_n).
+ \]
 Let $\theta_n$ be the restriction of $\theta$ to $G_n^*$. Then this is a unit flow on $G_n^*$ from $0$ to $z_n$. By Thomson's principle, we have
+ \[
+ \mathcal{E}(i_n) \leq \mathcal{E}(\theta_n) \leq \mathcal{E}(\theta) < \infty.
+ \]
 So
 \[
 R_{\mathrm{eff}}(0, \infty) = \lim_{n \to \infty} \mathcal{E}(i_n) \leq \mathcal{E}(\theta) < \infty.
 \]
 So by the first part, the walk is transient.
+
 Conversely, if the random walk is transient, we want to construct a unit flow from $0$ to $\infty$ of finite energy. The idea is to define it as the limit of the $i_n(x, y)$.
+
+ By the first part, we know $R_{\mathrm{eff}}(0, \infty) = \lim \mathcal{E}(i_n) < \infty$. So there exists $M > 0$ such that $\mathcal{E}(i_n) \leq M$ for all $n$. Starting from $0$, let $Y_n(x)$ be the number of visits to $x$ up to time $\tau_{z_n}$. Let $Y(x)$ be the total number of visits to $x$. Then $Y_n(x) \nearrow Y(x)$ as $n \to \infty$ almost surely.
+
+ By monotone convergence, we get that
+ \[
+ \E_0[Y_n(x)] \nearrow \E_0[Y(x)] < \infty,
+ \]
+ since the walk is transient. On the other hand, we know
+ \[
 \E_0[Y_n(x)] = G_{\tau_{z_n}}(0, x).
+ \]
+ It is then easy to check that $\frac{G_{\tau_{z_n}}(0, x)}{c(x)}$ is a harmonic function outside of $0$ and $z_n$ with value $0$ at $z_n$. So it has to be equal to the voltage $v_n(x)$. So
+ \[
+ v_n(x) = \frac{G_{\tau_{z_n}}(0, x)}{c(x)}.
+ \]
 Therefore $v_n(x)$ converges pointwise to some function $v$, i.e.
+ \[
+ \lim_{n \to \infty} c(x) v_n(x) = c(x) v(x).
+ \]
+ We define
+ \[
+ i(x, y) = c(x, y) (v(x) - v(y)) = \lim_{n \to \infty} c(x, y) (v_n(x) - v_n(y)) = \lim_{n \to \infty} i_n(x, y).
+ \]
 Then by Fatou's lemma, we know $\mathcal{E}(i) \leq M$, and $i$ is a unit flow from $0$ to $\infty$.
+\end{proof}
+
+Note that this connection with electrical networks only works for reversible Markov chains.
+
+\begin{cor}
+ Let $G' \subseteq G$ be connected graphs.
+ \begin{enumerate}
+ \item If a random walk on $G$ is recurrent, then so is random walk on $G'$.
+ \item If random walk on $G'$ is transient, so is random walk on $G$.
+ \end{enumerate}
+\end{cor}
+
+\begin{thm}[Polya's theorem]\index{Polya's theorem}
 Random walk on $\Z^2$ is recurrent, and random walk on $\Z^d$ is transient for $d \geq 3$.
+\end{thm}
+
+\begin{proof}[Proof sketch]
+ For $d = 2$, if we glue all vertices at distance $n$ from $0$, then
+ \[
+ R_{\mathrm{eff}}(0, \infty) \geq \sum_{i = 1}^{n - 1} \frac{1}{8i - 4} \geq c \cdot \log n
+ \]
+ using the parallel and series laws. So $R_{\mathrm{eff}}(0, \infty) = \infty$. So we are recurrent.
+
 For $d = 3$, we can construct a flow as follows --- let $S$ be the sphere of radius $\frac{1}{4}$ centered at the origin. Given any edge $\mathbf{e}$, take the unit square centered at the midpoint $\mathbf{m}_e$ of $\mathbf{e}$ with normal in the direction of $\mathbf{e}$. We define $|\theta(\mathbf{e})|$ to be the area of the radial projection of this square onto the sphere, with positive sign iff $\bra \mathbf{m}_e, \mathbf{e} \ket > 0$.
+
 One checks that $\theta$ satisfies Kirchhoff's node law outside of $0$. Then we find that
+ \[
+ \mathcal{E}(\theta) \leq C \sum_n n^2 \cdot \left(\frac{1}{n^2}\right)^2 < \infty,
+ \]
+ since there are $\sim n^2$ edges at distance $n$, and the flow has magnitude $\sim \frac{1}{n^2}$.
+\end{proof}
+
+\begin{proof}[Another proof sketch]
 We consider a binary tree $T_\rho$ in which each edge joining generation $n$ to generation $n + 1$ has resistance $\rho^n$, for some $\rho$ to be determined. By symmetry, we may glue all vertices of the same generation; the $2^{n + 1}$ parallel edges between generations $n$ and $n + 1$ then combine into a single resistance $\rho^n/2^{n + 1}$, so
 \[
 R_{\mathrm{eff}}(T_\rho) = \sum_{n = 0}^\infty \frac{\rho^n}{2^{n + 1}},
 \]
 and this is finite when $\rho < 2$.
+
+ Now we want to embed this in $\Z^3$ in such a way that neighbours in $T_\rho$ of generation $n$ and $n + 1$ are separated by a path of length of order $\rho^n$. In generation $n$ of $T_\rho$, there are $2^n$ vertices. On the other hand, the number of vertices at distance $n$ in $\Z^3$ will be of order $(\rho^n)^2$. So we need $(\rho^n)^2 \geq 2^n$. So we need $\rho > \sqrt{2}$. We can then check that
+ \[
+ R_{\mathrm{eff}}(0, \infty; \Z^3) \leq c \cdot R_{\mathrm{eff}}(T_\rho) < \infty.\qedhere
+ \]
+\end{proof}
Note that this method is rather robust. We only need to be able to embed our $T_\rho$ into $\Z^3$. Using a similar method, Grimmett, Kesten and Zhang (1993) showed that simple random walk on a supercritical bond percolation cluster is transient for $d \geq 3$.
+
+\section{Uniform spanning trees}
+\subsection{Finite uniform spanning trees}
The relation between uniform spanning trees and electrical networks was discovered by Kirchhoff in the 1850s. Of course, he was not interested in uniform spanning trees, but in electrical networks themselves. One of the theorems we will prove is
+\begin{thm}[Foster's theorem]\index{Foster's theorem}
+ Let $G = (V, E)$ be a finite weighted graph on $n$ vertices. Then
+ \[
+ \sum_{e \in E}R_{\mathrm{eff}}(e) = n - 1.
+ \]
+\end{thm}
+We can prove this using the commute-time formula, but there is a one-line proof using this connection between electrical networks and uniform spanning trees.
+\begin{defi}[Spanning tree]\index{spanning tree}
+ Let $G = (V, E)$ be a finite connected graph. A \emph{spanning tree} $T$ of $G$ is a connected subgraph of $G$ which is a tree\index{tree} (i.e.\ there are no cycles) and contains all the vertices in $G$.
+\end{defi}
+
+Let $\mathcal{T}$ be the set of all spanning trees of $G$. Pick $T \in \mathcal{T}$ uniformly at random. We call it the \term{uniform spanning tree}. We shall prove the following theorem:
+\begin{thm}
+ Let $e \not= f \in E$. Then
+ \[
+ \P(e \in T \mid f \in T) \leq \P(e \in T).
+ \]
+\end{thm}
+It is tempting to ask a very similar question --- if we instead considered all spanning \emph{forests}, do we get the same result? Intuitively, we should, but it turns out this is an open question. So let's think about trees instead.
+
+In order to prove this, we need the following result:
+\begin{thm}[Kirchoff]
+ Let $T$ be a uniform spanning tree, $e$ an edge. Then
+ \[
 \P(e \in T) = R_{\mathrm{eff}}(e).
+ \]
+\end{thm}
+Of course, this immediately implies Foster's theorem, since every tree has $n - 1$ edges.
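Both Kirchhoff's theorem and Foster's theorem can be checked directly on a small example by enumerating spanning trees. For the $4$-cycle with unit resistances, each edge $e$ has $R_{\mathrm{eff}}(e) = \frac{3}{4}$ (one resistor in parallel with three in series), and there are $4$ spanning trees (the code and its names are illustrative):

```python
from fractions import Fraction
from itertools import combinations

# The 4-cycle with unit resistances (illustrative example).
V = range(4)
E = [(0, 1), (1, 2), (2, 3), (3, 0)]

def is_spanning_tree(edges):
    # n - 1 edges + connected => a spanning tree
    if len(edges) != len(V) - 1:
        return False
    adj = {v: [] for v in V}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(V)

trees = [t for t in combinations(E, len(V) - 1) if is_spanning_tree(t)]
p_edge = {e: Fraction(sum(e in t for t in trees), len(trees)) for e in E}

# Each edge of the cycle: one unit resistor in parallel with three in series.
R_eff_edge = 1 / (Fraction(1) + Fraction(1, 3))
```

Summing $\P(e \in T)$ over the $4$ edges gives $3 = n - 1$, which is Foster's theorem on this example.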
+
+\begin{notation}
 Fix two vertices $s, t$ of $G$. For every edge $e = (a, b)$, define $\mathcal{N}(s, a, b, t)$ to be the set of spanning trees of $G$ whose unique path from $s$ to $t$ passes along the edge $(a, b)$ in the direction from $a$ to $b$. Write
+ \[
+ N(s, a, b, t) = |\mathcal{N}(s, a, b, t)|,
+ \]
+ and $N$ the total number of spanning trees.
+\end{notation}
+
+\begin{thm}
+ Define, for every edge $e = (a, b)$,
+ \[
+ i(a, b) = \frac{N(s, a, b, t) - N(s, b, a, t)}{N}.
+ \]
 Then $i$ is a unit flow from $s$ to $t$ satisfying Kirchhoff's node law and the cycle law.
+\end{thm}
+Note that from the expression for $i$, we know that
+\[
+ i(a, b) = \P(T \in \mathcal{N}(s, a, b, t)) - \P(T \in \mathcal{N}(s, b, a, t)).
+\]
+\begin{proof}
 We first show that $i$ is a flow from $s$ to $t$. It is clear that $i$ is anti-symmetric. To check Kirchhoff's node law, pick $a \not \in \{s, t\}$. We need to show that
+ \[
+ \sum_{x \sim a} i(a, x) = 0.
+ \]
 To show this, we count how much each spanning tree contributes to the sum. In each spanning tree, the unique path from $s$ to $t$ may or may not contain $a$. If it does not contain $a$, it contributes $0$. If it contains $a$, then there is one edge entering $a$ and one edge leaving $a$, and it also contributes $0$. So every spanning tree contributes exactly zero to the sum.
+
+ So we have to prove that $i$ satisfies the cycle law. Let $C = (v_1, v_2, \ldots, v_{n + 1} = v_1)$ be a cycle. We need to show that
+ \[
 \sum_{j = 1}^n i(v_j, v_{j + 1}) = 0.
+ \]
+ To prove this, it is easier to work with ``bushes'' instead of trees. An \term{$s/t$ bush} is a forest that consists of exactly $2$ trees, $T_s$ and $T_t$, such that $s \in T_s$ and $t \in T_t$. Let $e = (a, b)$ be an edge. Define $\mathcal{B}(s, a, b, t)$ to be the set of $s/t$ bushes such that $a \in T_s$ and $b \in T_t$.
+
+ We claim that $|\mathcal{B}(s, a, b, t)| = N(s, a, b, t)$. Indeed, given a bush in $\mathcal{B}(s, a, b, t)$, we can add the edge $e = (a, b)$ to get a spanning tree whose unique path from $s$ to $t$ passes through $e$, and vice versa.
+
 Instead of considering the contribution of each tree to the sum, we count the contribution of each bush to the sum. Then this is easy. Fix a bush $B$ and let
 \begin{align*}
 F_+ &= |\{j: B \in \mathcal{B}(s, v_j, v_{j + 1}, t)\}|\\
 F_- &= |\{j: B \in \mathcal{B}(s, v_{j + 1}, v_j, t)\}|.
 \end{align*}
+ Then the contribution of $B$ is
+ \[
+ \frac{F_+ - F_-}{N}.
+ \]
+ Since $C$ is a cycle, it crosses from $T_s$ to $T_t$ exactly as many times as it crosses back from $T_t$ to $T_s$. So $F_+ = F_-$, and we are done.
+
+ Finally, we need to show that $i$ is a unit flow. In other words,
+ \[
+ \sum_{x \sim s} i(s, x) = 1.
+ \]
+ But this is clear, since in each spanning tree, the unique path from $s$ to $t$ uses exactly one edge leaving $s$, so each spanning tree contributes exactly $\frac{1}{N}$ to the sum, and there are $N$ spanning trees.
+\end{proof}
+
+We can now prove the theorem we wanted to prove.
+\begin{thm}
+ Let $e \not= f \in E$. Then
+ \[
+ \P(e \in T \mid f \in T) \leq \P(e \in T).
+ \]
+\end{thm}
+
+\begin{proof}
+ Define the new graph $G.f$ to be the graph obtained by gluing both endpoints of $f$ to a single vertex (keeping multiple edges if necessary). This gives a correspondence between spanning trees of $G$ containing $f$ and spanning trees of $G.f$. But
+ \[
+ \P(e \in T \mid f \in T) = \frac{\text{number of spanning trees of $G.f$ containing $e$}}{\text{number of spanning trees of $G.f$}}.
+ \]
+ But this is just $\P(e \in \text{UST of $G.f$})$, and this is just $R_{\mathrm{eff}}(e; G.f)$. So it suffices to show that
+ \[
+ R_{\mathrm{eff}}(e; G.f) \leq R_{\mathrm{eff}}(e; G).
+ \]
+ But this is clear by Rayleigh's monotone principle, since contracting $f$ is the same as setting the resistance of the edge to $0$.
+\end{proof}
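+For a small graph, this negative correlation can be checked directly by brute-force enumeration of spanning trees. The following Python sketch (the function names and the example graph are ours, purely for illustration) verifies it on a $4$-cycle with a chord:

```python
from itertools import combinations

def spanning_trees(vertices, edges):
    """Enumerate all spanning trees of a small graph by brute force."""
    n = len(vertices)
    for subset in combinations(edges, n - 1):
        # n - 1 edges form a spanning tree iff they contain no cycle
        # (checked here with a simple union-find).
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for a, b in subset:
            ra, rb = find(a), find(b)
            if ra == rb:
                acyclic = False
                break
            parent[ra] = rb
        if acyclic:
            yield set(subset)

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle plus the chord (0, 2)
trees = list(spanning_trees(V, E))
e, f = (0, 1), (0, 2)
p_e = sum(e in t for t in trees) / len(trees)
p_e_given_f = sum(e in t for t in trees if f in t) / sum(f in t for t in trees)
# p_e_given_f <= p_e, as the theorem predicts (here 1/2 <= 5/8).
```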
+
+In practice, how can we generate a uniform spanning tree? One way to do so is \term{Wilson's method}.
+
+\begin{defi}[Loop erasure]\index{loop erasure}
+ Let $x = \bra x_1, \ldots, x_n\ket$ be a finite path in the graph $G$. We define the \emph{loop erasure} as follows: for any pair $i < j$ such that $x_i = x_j$, remove $x_{i + 1}, x_{i+2}, \ldots, x_j$, and keep repeating until no such pairs exist.
+\end{defi}
+
+To implement Wilson's algorithm, distinguish a root vertex of $G$, called $r$, and take an ordering of $V$. Set $T_0 = \{r\}$. Define $T_{i + 1}$ inductively as follows: take the first vertex in the ordering that is not in $T_i$, start a simple random walk from this vertex, run it until it hits $T_i$, and erase the loops. Then set $T_{i + 1}$ to be the union of $T_i$ with this loop-erased path. Repeat until all vertices are used.
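+Loop erasure and Wilson's algorithm are short to write down in code. The following Python sketch (the adjacency-list representation and all names are ours, for illustration only) samples a spanning tree of a finite connected graph:

```python
import random

def loop_erase(path):
    """Erase loops in chronological order: closing a loop deletes it."""
    erased = []
    for v in path:
        if v in erased:
            # v was visited before: remove the loop just closed.
            erased = erased[:erased.index(v) + 1]
        else:
            erased.append(v)
    return erased

def wilson(graph, root):
    """Sample a uniform spanning tree of a finite connected graph.

    graph maps each vertex to the list of its neighbours."""
    in_tree = {root}
    tree_edges = set()
    for start in graph:              # any fixed ordering of V works
        if start in in_tree:
            continue
        # Simple random walk from `start` until it hits the current tree.
        path = [start]
        while path[-1] not in in_tree:
            path.append(random.choice(graph[path[-1]]))
        branch = loop_erase(path)
        for a, b in zip(branch, branch[1:]):
            tree_edges.add(frozenset((a, b)))
        in_tree.update(branch)
    return tree_edges
```

+On a finite connected graph this terminates with probability $1$, and by the theorem below the law of the output does not depend on the vertex ordering.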
+
+\begin{thm}[Wilson]
+ The resulting tree is a uniform spanning tree.
+\end{thm}
+Note that in particular, the choice of ordering is irrelevant to the resulting distribution.
+
+To prove this theorem, it is convenient to have a good model of how the simple random walk on $G$ is generated. Under each vertex of $G$, place an infinite stack of cards that contain iid instructions, i.e.\ each card tells us to which neighbour we jump. Different vertices have independent stacks. Whenever we are at a vertex $x$, look at the top card underneath it and go to the neighbour it instructs. Then throw away the top card and continue.
+
+Given stacks of cards under the vertices of the graph, by revealing the top card of each vertex, we get a directed graph, where directed edges are of the form $(x, y)$, where $y$ is the instruction on the top card under $x$.
+
+If there is no directed cycle, then we stop, and we have obtained a spanning tree. If there is a directed cycle, then we remove the top cards from all vertices on the cycle. We call this procedure \term{cycle popping}. Then we look at the newly revealed top cards, and keep popping cycles in the same way.
+
+We first prove a \emph{deterministic} lemma about cycle popping.
+\begin{lemma}
+ The order in which cycles are popped is irrelevant, in the sense that either the popping will never stop, or the same set of cycles will be popped, thus leaving the same spanning tree lying underneath.
+\end{lemma}
+
+\begin{proof}
+ Given an edge $(x, S_x^i)$, where $S_x^i$ is the $i$th instruction under $x$, colour $(x, S_x^i)$ with colour $i$. Every cycle that appears is then coloured, but not necessarily with a single colour.
+
+ Suppose $C$ is a cycle that can be popped in the order $C_1, C_2, \ldots, C_n$, with $C_n = C$. Let $C'$ be any cycle in the original directed graph. We claim that either $C'$ does not intersect $C_1, \ldots, C_n$, or $C' = C_k$ for some $k$, and $C'$ does not intersect $C_1, \ldots, C_{k - 1}$. Indeed, suppose they intersect, and let $x \in C' \cap C_k$, where $k$ is the smallest such index. Then the edge coming out of $x$ in $C'$ has colour $1$, so $S_x^1$ is also in the intersection of $C'$ and $C_k$, and the same is true for the edge coming out of $S_x^1$. Continuing this, we see that we must have $C_k = C'$. So popping $C_k, C_1, \ldots, C_{k - 1}, C_{k + 1}, \ldots, C_n$ gives the same result as popping $C_1, \ldots, C_n$.
+
+ Thus, by induction, if $C$ is a cycle that is popped, then after performing a finite number of pops, $C$ is still a cycle that can be popped. So either there is an infinite number of cycles that can be popped, so popping can never stop, or every cycle that can be popped will be popped, thus in this case giving the same spanning tree.
+\end{proof}
+
+Using this, we can prove that Wilson's method gives the correct distribution. In Wilson's algorithm, we pop cycles by erasing loops in the order of cycle creation. This procedure will stop with probability $1$. With this procedure, we will reveal a finite set of coloured cycles $O$ lying over a spanning tree. Let $X$ be the set of all $(O, T)$, where $O$ is the set of coloured cycles lying on top of $T$, which arise from running the random walks.
+
+Now if we can get to $(O, T)$, then we could also get to $(O, T')$ for any other spanning tree $T'$, since there is no restriction on what could be under the stacks. So we can write
+\[
+ X = X_1 \times X_2,
+\]
+where $X_1$ is a certain set of finite sets of coloured cycles, and $X_2$ is the set of all spanning trees.
+
+Define a function $\Psi$ on the set of cycles by
+\[
+ \Psi(C) = \prod_{e \in C} p(e),\quad p(e) = \frac{c(e)}{c(e_-)},
+\]
+where $e = (e_-, e_+)$, and $c(e_-)$ is the sum of the conductances of all edges incident to $e_-$.
+
+More generally, if $O$ is a set of cycles, then we write
+\[
+ \Psi(O) = \prod_{C \in O} \Psi(C).
+\]
+Then
+\[
+ \P(\text{get $(O, T)$ using Wilson's algorithm}) = \prod_{e \in \left(\bigcup_{C \in O} C\right) \cup T} p(e) = \Psi(O) \cdot \Psi(T).
+\]
+Since this probability factorizes, it follows that the distribution of $O$ is independent of $T$, and the probability of getting $T$ is proportional to $\Psi(T)$. So $T$ has the distribution of a uniform spanning tree.
+
+\begin{cor}[Cayley's formula]\index{Cayley's formula}
+ The number of labelled unrooted trees on $n$ vertices is equal to $n^{n - 2}$.
+\end{cor}
+
+\begin{proof}
+ Exercise.
+\end{proof}
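+For small $n$, the formula can at least be checked by brute force, enumerating all $(n - 1)$-edge subsets of the complete graph $K_n$ and counting those that are trees (a Python illustration, not the intended proof):

```python
from itertools import combinations

def count_labelled_trees(n):
    """Count the spanning trees of K_n by checking every (n-1)-edge subset."""
    edges = list(combinations(range(n), 2))
    count = 0
    for subset in combinations(edges, n - 1):
        # n - 1 edges form a tree iff they create no cycle (union-find check).
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for a, b in subset:
            ra, rb = find(a), find(b)
            if ra == rb:
                acyclic = False
                break
            parent[ra] = rb
        count += acyclic
    return count
```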
+
+How can we generalize Wilson's algorithm to infinite graphs? If the graph is recurrent, then we can still use the same algorithm, and with probability one, the algorithm terminates. If we have a transient graph, then we can still perform the same algorithm, but it is not clear what the resulting object will be.
+
+The \term{Aldous--Broder algorithm} is another way of generating a uniform spanning tree. Run a random walk that visits all vertices at least once. For every vertex other than the root, take the edge the walk crossed the first time it visited the vertex, and keep the edge in the spanning tree. It is an exercise to prove that this does give a uniform spanning tree. Again, this algorithm works only on a recurrent graph. A generalization to transient graphs was found by Hutchcroft.
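+The Aldous--Broder algorithm is even shorter to sketch in code (same illustrative Python style and naming as before):

```python
import random

def aldous_broder(graph, root):
    """Sample a uniform spanning tree: walk until every vertex has been
    visited, keeping the edge along which each vertex was first entered."""
    visited = {root}
    tree_edges = set()
    current = root
    while len(visited) < len(graph):
        nxt = random.choice(graph[current])
        if nxt not in visited:          # first visit to nxt
            visited.add(nxt)
            tree_edges.add(frozenset((current, nxt)))
        current = nxt
    return tree_edges
```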
+
+\subsection{Infinite uniform spanning trees and forests}
+Since recurrent graphs are boring, let $G$ be an infinite transient graph. Then a ``naive'' way to define a uniform spanning tree is by exhaustions.
+
+Let $G_n$ be an exhaustion of $G$ by finite graphs. Define $G_n^W$ (where $W$ stands for ``wired'') to be the graph $G_n$, where all of the vertices of $G \setminus G_n$ are glued to a single vertex $z_n$. The ``free'' $G_n$ is just $G_n$ considered as a subgraph. It turns out the wired one is much easier to study.
+
+Let $\mu_n$ be the uniform spanning tree measure on $G_n^W$. Let $B = \{e_1, \ldots, e_m\}$ be a finite set of edges. Then for $T$ a UST of $G_n^W$, we have
+\begin{align*}
+ \mu_n(B \subseteq T) &= \prod_{k = 1}^m \mu_n(e_k \in T \mid e_j \in T\text{ for all }j < k) \\
+ &= \prod_{k = 1}^m R_{\mathrm{eff}}(e_k; G_n^W/\{e_j: j < k\}),
+\end{align*}
+where $G_n^W/\{e_j: j < k\}$ is the graph obtained from $G_n^W$ by contracting the edges $\{e_j: j < k\}$.
+
+We want to show that $\mu_n$ as a sequence of measures converges. So we want to understand how $R_{\mathrm{eff}}(e_k; G_n^W/\{e_j: j < k\})$ changes when we replace $n$ with $n + 1$. By Rayleigh's monotonicity principle, since $G_n^W$ can be obtained from $G_{n + 1}^W$ by contracting more edges, we know the effective resistance increases when we replace $n$ with $n + 1$. So we know that
+\[
+ \mu_n(B \subseteq T) \leq \mu_{n + 1}(B \subseteq T).
+\]
+So $(\mu_n(B \subseteq T))$ is an increasing sequence. So we can define
+\[
+ \mu(B \subseteq \mathcal{F}) = \lim_{n \to \infty} \mu_n(B \subseteq T),
+\]
+and $\mathcal{F}$ is our uniform forest. We can then extend the definition of $\mu$ to cylinder events
+\[
+ \{\mathcal{F}: B_1 \subseteq \mathcal{F}, B_2 \cap \mathcal{F} = \emptyset\}
+\]
+with $B_1, B_2$ finite sets of edges, using inclusion-exclusion:
+\[
+ \mu(B_1 \subseteq \mathcal{F}, B_2 \cap \mathcal{F} = \emptyset) = \sum_{S \subseteq B_2} \mu(B_1 \cup S \subseteq \mathcal{F}) (-1)^{|S|}.
+\]
+Then by commuting limits, we find that in fact
+\[
+ \mu(B_1 \subseteq \mathcal{F}, B_2 \cap \mathcal{F} = \emptyset) = \lim_{n \to \infty} \mu_n (B_1 \subseteq T, B_2 \cap T = \emptyset).
+\]
+If $\mathcal{A}$ is a finite union or intersection of cylinder sets of this form, then we have
+\[
+ \mu(\mathcal{A}) = \lim_{n \to \infty} \mu_n(\mathcal{A}).
+\]
+So this gives a consistent family of measures.
+
+By Kolmogorov's theorem, we get a unique probability measure $\mu$ supported on the forests of $G$. $\mu$ is called the \term{wired uniform spanning forest}. One can also consider the \term{free uniform spanning forest}, where we do the same thing but without gluing; however, this is more complicated and we will not study it.
+
+We can adapt Wilson's algorithm to generate uniform spanning forests. This is called \term{Wilson's method rooted at infinity} (BLPS, 2001).
+
+Let $G$ be a transient graph, and start with $\mathcal{F}_0 = \emptyset$. Pick an ordering of the vertices $V = \{x_1, x_2, \ldots\}$. Inductively, we start a simple random walk from $x_n$, and run it until the first time it hits $\mathcal{F}_{n - 1}$, if it ever does; if it does not, we run it indefinitely. Call this (possibly infinite) path $\mathcal{P}_n$. Since $G$ is transient, $\mathcal{P}_n$ visits every vertex finitely many times with probability $1$. So it makes sense to define the loop erasure of $\mathcal{P}_n$. Call it $LE(\mathcal{P}_n)$. We set
+\[
+ \mathcal{F}_n = \mathcal{F}_{n - 1} \cup LE(\mathcal{P}_n),
+\]
+and take
+\[
+ \mathcal{F} = \bigcup_n \mathcal{F}_n.
+\]
+As before, the order of $V$ that we take does not affect the final distribution.
+\begin{prop}
+ Let $G$ be a transient graph. The wired uniform spanning forest is the same as the spanning forest generated using Wilson's method rooted at infinity.
+\end{prop}
+
+\begin{proof}
+ Let $e_1, \ldots, e_M$ be a finite set of edges. Let $T(n)$ be the uniform spanning tree on $G_n^W$. Let $\mathcal{F}$ be the limiting law of $T(n)$. Look at $G_n^W$ and generate $T(n)$ using Wilson's method rooted at $z_n$. Start the random walks from $u_1, u_2, \ldots, u_L$ in this order, where $(u_i)$ are all the endpoints of $e_1, \ldots, e_M$ ($L$ could be less than $2M$ if the edges share endpoints).
+
+ Start the first walk from $u_1$ and wait until it hits $z_n$; then start from the next vertex and wait until it hits the previous path. We use the same infinite paths to generate all the walks, i.e.\ for all $i$, let $(X_k(u_i))_{k \geq 0}$ be a simple random walk on $G$ started from $u_i$. When considering $G_n^W$, we stop these walks when they exit $G_n$, i.e.\ when they hit $z_n$. In this way, we couple all the walks, and hence all the spanning trees, together.
+
+ Let $\tau_i^n$ be the first time the $i$th walk hits the tree the previous $i - 1$ walks have generated in $G_n^W$. Now
+ \[
+ \P(e_1, \ldots, e_M \in T(n)) = \P\left(e_1, \ldots, e_M \in \bigcup_{j = 1}^L LE(X_k(u_j) : k \leq \tau_j^n) \right).
+ \]
+ Let $\tau_j$ be the stopping times corresponding to Wilson's method rooted at infinity. By induction on $j$, we have $\tau_j^n \to \tau_j$ as $n \to \infty$, and by transience, we have $LE(X_k(u_j): k \leq \tau_j^n) \to LE(X_k(u_j): k \leq \tau_j)$. So we are done.
+\end{proof}
+
+\begin{thm}[Pemantle, 1991]
+ The uniform spanning forest on $\Z^d$ is a single tree almost surely if and only if $d \leq 4$.
+\end{thm}
+
+Note that the above proposition tells us
+\begin{prop}[Pemantle]
+ The uniform spanning forest is a single tree iff starting from every vertex, a simple random walk intersects an independent loop-erased random walk infinitely many times with probability $1$. Moreover, the probability that $x$ and $y$ are in the same tree of the uniform spanning forest is equal to the probability that a simple random walk started from $x$ intersects an independent loop-erased random walk started from $y$.
+\end{prop}
+
+To further simplify the theorem, we use the following theorem of Lyons, Peres and Schramm:
+\begin{thm}[Lyons, Peres, Schramm]
+ Two independent simple random walks intersect infinitely often with probability $1$ if one walk intersects the loop erasure of the other one infinitely often with probability $1$.
+\end{thm}
+
+Using these, we can now prove the theorem we wanted. Note that proving things about intersections, rather than collisions, tends to be harder, because any attempt to, say, define the first intersection time would inherently introduce an asymmetry.
+
+We prove half of Pemantle's theorem.
+\begin{thm}
+ For $d \geq 5$, the uniform spanning forest is not a single tree with probability $1$.
+\end{thm}
+
+\begin{proof}[Proof of Pemantle's theorem]
+ Let $X, Y$ be two independent simple random walks in $\Z^d$. Write
+ \[
+ I = \sum_{t = 0}^\infty \sum_{s = 0}^\infty \mathbf{1}(X_t = Y_s).
+ \]
+ Then we have
+ \[
+ \E_{x, y}[I] = \sum_t \sum_s \P_{x - y} (X_{t + s} = 0) \approx \sum_{t \geq \|x - y\|} t \P_{x - y}(X_t = 0).
+ \]
+ It is an elementary exercise to show that
+ \[
+ \P_x(X_t = 0) \leq \frac{c}{t^{d/2}},
+ \]
+ so
+ \[
+ \E_{x, y}[I] \leq c \sum_{t \geq \|x - y\|} \frac{1}{t^{d/2 - 1}}.
+ \]
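+ For $d \geq 5$ we have $d/2 - 1 \geq \frac{3}{2} > 1$, so, writing $m = \|x - y\|$ and comparing the tail with an integral,
+ \[
+ \sum_{t \geq m} \frac{1}{t^{d/2 - 1}} \leq \int_{m - 1}^\infty \frac{\mathrm{d}t}{t^{d/2 - 1}} = \frac{(m - 1)^{2 - d/2}}{d/2 - 2} \to 0\quad\text{as }m \to \infty.
+ \]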
+ For $d \geq 5$, for all $\varepsilon > 0$, we can therefore take $x, y$ far enough apart that
+ \[
+ \E_{x, y} [I] < \varepsilon.
+ \]
+ Then
+ \[
+ \P(\text{USF is connected}) \leq \P_{x, y}(I > 0) \leq \E_{x, y}[I] < \varepsilon.
+ \]
+ Since this is true for every $\varepsilon$, it follows that $\P(\text{USF is connected}) = 0$.
+\end{proof}
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/quantum_computation.tex b/books/cam/III_M/quantum_computation.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bf370f865e6e61b494146b39f12e1caac726278b
--- /dev/null
+++ b/books/cam/III_M/quantum_computation.tex
@@ -0,0 +1,2242 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2016}
+\def\nlecturer {R.\ Jozsa}
+\def\ncourse {Quantum Computation}
+\def\nofficial {http://www.qi.damtp.cam.ac.uk/part-iii-quantum-computation}
+
+\input{header}
+
+\makeatletter
+\DeclareRobustCommand{\rvdots}{%
+ \vbox{
+ \baselineskip4\p@\lineskiplimit\z@
+ \kern-\p@
+ \hbox{.}\hbox{.}\hbox{.}
+ }
+}
+\newcommand\addstate[3]{
+ \pgfmathsetmacro{\y@y}{-#2 * 0.7};
+ \pgfmathsetmacro{\x@x}{#3};
+ \node [left] at (0, \y@y) {#1};
+ \draw (0, \y@y) -- (\x@x, \y@y);
+}
+\newcommand\addstateend[4]{
+ \pgfmathsetmacro{\y@y}{-#2 * 0.7};
+ \pgfmathsetmacro{\x@x}{#3};
+ \node [left] at (0, \y@y) {#1};
+ \node [right] at (\x@x, \y@y) {#4};
+ \draw (0, \y@y) -- (\x@x, \y@y);
+}
+
+\newcommand\addoperator[3]{
+ \node [draw, rectangle, fill=white] at (#3, -#2 * 0.7) {#1};
+}
+\newcommand\addbigoperator[5]{
+ \pgfmathsetmacro{\y@s}{-#2 * 0.7 + 0.2};
+ \pgfmathsetmacro{\y@t}{-#3 * 0.7 - 0.2};
+ \pgfmathsetmacro{\x@s}{#4};
+ \pgfmathsetmacro{\x@t}{#5};
+ \pgfmathsetmacro{\x@c}{(\x@s + \x@t)/2};
+ \pgfmathsetmacro{\y@c}{(\y@s + \y@t)/2};
+
+ \draw [fill=white] (\x@s, \y@s) rectangle (\x@t, \y@t);
+ \node at (\x@c, \y@c) {#1};
+}
+
+\newcommand\addMQCstate [3] {
+ \pgfmathsetmacro{\x@x}{#3 * 1.5 - 1};
+ \node [left] at (0, -#2) {#1};
+ \draw (0, -#2) -- (\x@x, -#2);
+}
+
+\newcommand\addJ[3]{
+ \pgfmathsetmacro{\x@x}{#3 * 1.5 - 0.5};
+ \node [draw, rectangle, fill=morange!30!white, minimum height=0.65cm, minimum width=1cm] at (\x@x, -#2) {$\qJ(#1)$};
+}
+\newcommand\addX[3]{
+ \pgfmathsetmacro{\x@x}{#3 * 1.5 - 0.5};
+ \node [draw, rectangle, fill=mgreen!30!white, minimum height=0.65cm, minimum width=1cm] at (\x@x, -#2) {$\qX^{#1}$};
+}
+\newcommand\addZ[3]{
+ \pgfmathsetmacro{\x@x}{#3 * 1.5 - 0.5};
+ \node [draw, rectangle, fill=mblue!30!white, minimum height=0.65cm, minimum width=1cm] at (\x@x, -#2) {$\qZ^{#1}$};
+}
+\newcommand\addPhantom[2]{
+ \pgfmathsetmacro{\x@x}{#2 * 1.5 - 0.5};
+ \node [minimum height=0.65cm, minimum width=1cm] at (\x@x, -#1) {};
+}
+\newcommand\drawvert[3]{
+ \pgfmathsetmacro{\x@x}{#3 * 1.5 - 0.5};
+ \draw (\x@x, -#1) node [circ] {} -- (\x@x, -#2) node [circ] {};
+}
+\newcommand\measurestate[4] {
+ \pgfmathsetmacro{\y@y}{-#3 - 0.05};
+ \pgfmathsetmacro{\x@x}{#4 + 0.05};
+ \draw [-latex'] (\x@x, \y@y) -- +(0.4, -0.4) node [right] {#2};
+ \node [above] at (#4, -#3) {#1};
+}
+
+\newcommand{\qCX}{\mathsf{CX}}
+\newcommand{\qCZ}{\mathsf{CZ}}
+\newcommand{\qE}{\mathsf{E}}
+\newcommand{\qJ}{\mathsf{J}}
+\newcommand{\qM}{\mathsf{M}}
+\newcommand{\qQFT}{\mathsf{QFT}}
+\newcommand{\qH}{\mathsf{H}}
+\newcommand{\qP}{\mathsf{P}}
+\newcommand{\qT}{\mathsf{T}}
+\newcommand{\qX}{\mathsf{X}}
+\newcommand{\qY}{\mathsf{Y}}
+\newcommand{\qZ}{\mathsf{Z}}
+\newcommand{\qcd}{\mathsf{c}\operatorname{-}}
+\newcommand{\AND}{\mathsf{AND}}
+\newcommand{\NOT}{\mathsf{NOT}}
+\newcommand{\OR}{\mathsf{OR}}
+
+\makeatother
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+Quantum mechanical processes can be exploited to provide new modes of information processing that are beyond the capabilities of any classical computer. This leads to remarkable new kinds of algorithms (so-called quantum algorithms) that can offer a dramatically increased efficiency for the execution of some computational tasks. Notable examples include integer factorisation (and consequent efficient breaking of commonly used public key crypto systems) and database searching. In addition to such potential practical benefits, the study of quantum computation has great theoretical interest, combining concepts from computational complexity theory and quantum physics to provide striking fundamental insights into the nature of both disciplines.
+
+The course will cover the following topics:
+
+Notion of qubits, quantum logic gates, circuit model of quantum computation. Basic notions of quantum computational complexity, oracles, query complexity.
+
+The quantum Fourier transform. Exposition of fundamental quantum algorithms including the Deutsch--Jozsa algorithm, Shor's factoring algorithm, Grover's searching algorithm.
+
+A selection from the following further topics (and possibly others):
+\begin{enumerate}
+ \item Quantum teleportation and the measurement-based model of quantum computation;
+ \item Lower bounds on quantum query complexity;
+ \item Phase estimation and applications in quantum algorithms;
+ \item Quantum simulation for local Hamiltonians.
+\end{enumerate}
+
+\subsubsection*{Pre-requisites}
+It is desirable to have familiarity with the basic formalism of quantum mechanics especially in the simple context of finite dimensional state spaces (state vectors, Dirac notation, composite systems, unitary matrices, Born rule for quantum measurements). Prerequisite notes will be provided on the course webpage giving an account of the necessary material including exercises on the use of notations and relevant calculational techniques of linear algebra. It would be desirable for you to look through this material at (or slightly before) the start of the course. Any encounter with basic ideas of classical theoretical computer science (complexity theory) would be helpful but is not essential.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Quantum computation is currently a highly significant and important subject, and a very active area of international research.
+
+First of all, it is a fundamental connection between physics and computing. We can think of physics as computing, where in physics, we label states with parameters (i.e.\ numbers), and physical evolution changes these parameters. So we can think of these parameters as encoding information, and physical evolution changes the information. Thus, this evolution can be thought of as a computational process.
+
+More strikingly, we can also view computing as physics! We all have computers, and usually represent information as bits, 0 or 1. We often think of computation as manipulation of these bits, i.e.\ as discrete maths. However, there are no actual discrete bits --- when we build a computer, we need physical devices to represent these bits. When we run a computation on a computer, it has to obey the laws of physics. So we arrive at the idea that the limits of computation are not a part of mathematics, but depend on the laws of physics. Thus, we can associate a ``computing power'' with any theory of physics!
+
+On the other hand, there is also a technology/engineering aspect of quantum computation. Historically, we have been trying to reduce the size of computers. Eventually, we will want to try to achieve miniaturization of computer components to essentially the subatomic scale. The usual boolean operations we base our computations on do not work so well on this small scale, since quantum effects start to kick in. We could try to mitigate these quantum issues and somehow force the bits to act classically, but we can also embrace the quantum effects, and build a quantum computer! There is a lot of recent progress in quantum technology. We are now expecting a 50-qubit quantum computer in full coherent control soon. However, we are not going to talk about implementation in this course.
+
+Finally, apart from the practical problem of building quantum computers, we also have theoretical quantum computer science, where we try to understand how quantum algorithms behave. This is about how we can actually exploit quantum physical facts for computational possibilities beyond classical computers. This will be the focus of the course.
+
+\section{Classical computation theory}
+To appreciate the difference between quantum and classical computing, we need to first understand classical computing. We will only briefly go over the main ideas instead of working out every single technical detail. Hence some of the definitions might be slightly vague.
+
+We start with the notion of ``computable''. To define computability, one has to come up with a sensible mathematical model of a computer, and then ``computable'' means that this theoretical computer can compute it. So far, all sensible mathematical models of computation we have managed to come up with are equivalent, so we can just pick any one of them. Consequently, we will not spend much time working out a technical definition of computability.
+
+\begin{eg}
+ Let $N$ be an integer. We want to figure out if $N$ is a prime. This is clearly computable, since we can try all numbers less than $N$ and see if any divides $N$.
+\end{eg}
+
+This is not too surprising, but it turns out there are some problems that are not computable! Most famously, we have the Halting problem.
+\begin{eg}[Halting problem]\index{Halting problem}
+ Given the code of a computer program, we want to figure out if the computer will eventually halt. In 1936, Turing proved that this problem is uncomputable! So we cannot have a program that determines if an arbitrary program halts.
+\end{eg}
+
+For a less arbitrary problem, we have
+\begin{eg}
+ Given a polynomial with integer coefficients in many variables, e.g.\ $2x^2 y - 17 zw^{19} + x^5 w^3 + 1$, does it have a root in the integers? It was shown in 1970 that this problem is uncomputable as well!
+\end{eg}
+
+These results are all for classical computing. If we expect quantum computing to be somehow different, can we get around these problems? This turns out not to be the case, for the very reason that all the laws of quantum physics (e.g.\ state descriptions, evolution equations) are computable on a classical computer (in principle). So it follows that quantum computing, being a quantum process, cannot compute any classically uncomputable problem.
+
+Despite this limitation, quantum computation is still interesting! In practice, we do not only care about computability. We care about how efficiently we can do the computation. This is the problem of complexity --- a quantum computation might have much lower complexity than its classical counterpart.
+
+To make sense of complexity, we need to make our notion of computations a bit more precise.
+
+\begin{defi}[Input string]\index{input string}
+ An \emph{input bit string} is a sequence of bits $x = i_1i_2 \cdots i_n$, where each $i_k$ is either $0$ or $1$. We write $B_n$ for the set of all $n$-bit strings, and $B = \bigcup_{n \in \N} B_n$. The \emph{input size} is the length $n$. So in particular, if the input is regarded as an actual number, the size is not the number itself, but its logarithm.
+\end{defi}
+
+\begin{defi}[Language]
+ A \term{language} is a subset $L\subseteq B$.
+\end{defi}
+
+\begin{defi}[Decision problem]
+ Given a language $L$, the \term{decision problem} is to determine whether an arbitrary $x \in B$ is a member of $L$. The output is thus 1 bit of information, namely yes or no.
+\end{defi}
+Of course, we can have a more general task with multiple outputs, but for simplicity, we will not consider that case here.
+
+\begin{eg}
+ If $L$ is the set of all prime numbers, then the corresponding decision problem is determining whether a number is prime.
+\end{eg}
+
+We also have to talk about models of computations. We will only give an intuitive and classical description of it.
+\begin{defi}[Computational model]
+ A \term{computational model} is a process with discrete steps (elementary computational steps), where each step requires a constant amount of effort/resources to implement.
+\end{defi}
+If we think about actual computers that work with bits, we can imagine a step as an operation such as ``and'' or ``or''. Note that addition and multiplication are \emph{not} considered single steps --- as the numbers get larger, it takes more effort to add or multiply them.
+
+Sometimes it is helpful to allow some randomness.
+\begin{defi}[Randomized/probabilistic computation]
+ This is the same as a usual computational model, but the process also has access to a string $r_1, r_2, r_3, \cdots$ of independent, uniform random bits. In this case, we will often require the answer/output to be correct with ``suitably good'' probability.
+\end{defi}
+
+In computer science, there is a separate notion of ``non-deterministic'' computation, which is \emph{different} from probabilistic computation. In probabilistic computation, every time we ask for a random bit, we just pick one of the possible outcomes and follow it. With a non-deterministic computer, we simultaneously consider \emph{all} possible choices with no extra overhead. This is extremely powerful, and also obviously physically impossible, but it is a convenient thing to consider theoretically.
+
+\begin{defi}[Complexity of a computational task (or an algorithm)]
+ The \term{complexity} of a computational task or algorithm is the ``consumption of resources as a function of input size $n$''. The resources are usually the time
+ \[
+ T(n) = \text{number of computational steps needed},
+ \]
+ and space
+ \[
+ Sp(n) = \text{amount of memory/work space needed}.
+ \]
+ In each case, we take the worst-case input of a given size $n$.
+\end{defi}
+We usually consider the worst-case scenario, since, e.g.\ for primality testing, there are always some numbers which we can easily rule out as being not prime (e.g.\ even numbers). Sometimes, we will also want to study the average complexity.
+
+In the course, we will mostly focus on the time complexity, and not work with the space complexity itself.
+
+As one would imagine, the actual time or space taken would vary a lot on the actual computational model. Thus, the main question we ask will be whether $T(n)$ grows polynomially or super-polynomially (``exponentially'') with $n$.
+\begin{defi}[Polynomial growth]\index{polynomial growth}
+ We say $T(n)$ \emph{grows polynomially}, and write
+ \[
+ T(n) = O(\poly(n)) = O(n^k)
+ \]
+ for some $k$, if there are a constant $c$, an integer $k$ and an integer $n_0$ such that $T(n) < c n^k$ for all $n > n_0$.
+\end{defi}
+
+The other possible cases are exponential growth, e.g.\ $T(n) = c_1 2^{c_2 n}$, or super-polynomial and sub-exponential growth such as $T(n) = 2^{\sqrt{n}}$ or $n^{\log n}$.
+
+We will usually regard polynomial time processes as ``feasible in practice'', while super-polynomial ones are considered ``infeasible''. Of course, this is not always actually true. For example, we might have a polynomial time of $n^{10^{10^{10}}}$, or an exponential time of $2^{0.0000\ldots0001 n}$. However, this distinction of polynomial vs non-polynomial is robust, since any computational model can ``simulate'' other computational models in polynomial time. So if something is polynomial in one computational model, it is polynomial in all models.
+
+In general, we can have a more refined complexity classes of decision problems:
+\begin{enumerate}
+ \item \term{\textbf{P}} (\term{polynomial time}): The class of decision problems having a \emph{deterministic} polynomial-time algorithm.
+ \item \term{\textbf{BPP}} (\term{bounded error, probabilistic polynomial time}): The class of decision problems having \emph{probabilistic} polynomial time algorithms such that for every input,
+ \[
+ \mathrm{Prob}(\text{answer is correct}) \geq \frac{2}{3}.
+ \]
+ The number $\frac{2}{3}$ is sort of arbitrary --- we see that we cannot put $\frac{1}{2}$, or else we can just guess an answer randomly. So we need something greater than $\frac{1}{2}$, and ``bounded'' refers to it being bounded away from $\frac{1}{2}$. We could replace $\frac{2}{3}$ with any other constant $\frac{1}{2} + \delta$ with $0 < \delta < \frac{1}{2}$, and \textbf{BPP} is the same. This is because if we have a $\frac{1}{2} + \delta$ algorithm, we simply repeat the algorithm $K$ times, and take the majority vote. By the Chernoff bound (a result in probability), the probability that the majority vote is correct is $> 1 - e^{-2 \delta^2 K}$. So as we do more and more runs, the probability of getting the right answer grows exponentially close to $1$. This can be made bigger than $1 - \varepsilon$ by choosing a suitably large $K$. Since $K$ times a polynomial time is still polynomial time, we still have a polynomial-time algorithm.
+\end{enumerate}
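+The majority-vote amplification is easy to simulate. The sketch below assumes a black-box randomized algorithm that is correct with probability $\frac{1}{2} + \delta$ on every run; all names are illustrative, not part of the course:

```python
import random

def amplify(algorithm, runs):
    """Repeat a (1/2 + delta)-correct algorithm and take the majority vote."""
    correct_votes = sum(algorithm() for _ in range(runs))
    return 2 * correct_votes > runs

# Toy algorithm: correct (returns True) with probability 0.6, i.e. delta = 0.1.
noisy = lambda: random.random() < 0.6

trials = 2000
success = sum(amplify(noisy, 51) for _ in range(trials)) / trials
# The Chernoff bound guarantees success probability > 1 - exp(-2 * 0.1**2 * 51);
# empirically the majority vote is correct well over 80% of the time.
```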
+These two are often considered as ``classically feasible computations'', or ``computable in practice''. In the second case, we tolerate small errors, but that is fine in practice, since in genuine computers, random cosmic rays and memory failures can also cause small errors in the result, even for a deterministic algorithm.
+
+It is clear that \textbf{P} is contained in \textbf{BPP}, but we do not know about the other direction. It is not known whether \textbf{P} and \textbf{BPP} are the same --- in general, not much is known about whether two complexity classes are the same.
+
+\begin{eg}[Primality testing]
+ Let $N$ be an integer. We want to determine if it is prime. The input size is $\log_2 N$. The naive method of primality testing is to test all numbers and see if any divides $N$. We only need to test up to $\sqrt{N}$, since if $N$ has a factor, it must have one below $\sqrt{N}$. This is \emph{not} polynomial time: since we need $\sqrt{N} = 2^{\frac{1}{2} \log_2 N}$ operations, this is in fact exponential in the input size.
+
+ How about a probabilistic algorithm? We can choose a random $k < N$, and see if $k$ divides $N$. This is a probabilistic, polynomial time algorithm, but it is not bounded, since the probability of getting a correct answer is not bounded away from $\frac{1}{2}$.
+
+ In reality, primality testing is known to be in \textbf{BPP} (1976), and it is also known to be in \textbf{P} (2004).
+\end{eg}
+
+Finally, we quickly describe a simple model of (classical) computation that we will use to build upon later on. While the most famous model for computation is probably the \term{Turing machine}, for our purposes, it is much simpler to work with the \term{circuit model}.
+
+The idea is simple. In general, we are working with bits, and a program is a function $f: B_m \to B_n$. It is a mathematical fact that any such function can be constructed by combinations of boolean $\AND$, $\OR$ and $\NOT$ gates. We say that this is a \term{universal set} of gates. Thus a ``program'' is a specification of how to arrange these gates in order to give the function we want, and the time taken by the circuit is simply the number of gates we need.
+
+Of course, we could have chosen a different universal set of gates, and the programs would be different. However, since only a fixed number of gates is needed to construct $\AND$, $\OR$ and $\NOT$ from any universal set, and vice versa, it follows that the difference in time is always just polynomial.
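To make the universality claim concrete, here is a small sketch (gates written as ordinary Python functions, with names of our own choosing) assembling exclusive-or from $\AND$, $\OR$ and $\NOT$:

```python
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

def XOR(a, b):
    # (a OR b) AND NOT (a AND b): true iff exactly one input is 1
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```

Any boolean function $f: B_m \to B_n$ can be built up in the same fashion, using a fixed number of gates per output bit of its truth table.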
+
+\section{Quantum computation}
+We are now going to start talking about quantum computation. Our model of quantum computation will be similar to the circuit model.
+
+The main difference is that instead of working with bits, we work with \emph{qubits}. A single qubit is an element of $\C^2$, with basis vectors
+\[
+ \bket{0} =
+ \begin{pmatrix}
+ 1 \\ 0
+ \end{pmatrix},\quad
+ \bket{1} =
+ \begin{pmatrix}
+ 0 \\ 1
+ \end{pmatrix}.
+\]
+When we have multiple qubits, we write them as $\bket{a}\bket{b}$, which is a shorthand for $\bket{a} \otimes \bket{b}$ etc.
+
+Now any classical bit string $x = i_1 i_2 \cdots i_n$ can be encoded as a qubit
+\[
+ \bket{i_1} \bket{i_2}\cdots\bket{i_n}\bket{0}\cdots\bket{0} \in \bigotimes_{i = 1}^{n + k} \C^2 \cong \C^{2^{n + k}},
+\]
+where we padded $k$ extra zeroes at the end. In classical computation, there was no such need, because within any computation, we were free to introduce or remove extra bits. However, one peculiarity of quantum computation is that all processes are invertible, and in particular, the number of qubits is always fixed. So if we want to do something ``on the side'' during the computations, the extra bits needed to do so must be supplied at the beginning.
+
+Now the quantum gates are not just boolean functions, but \emph{unitary operators} on the states. The standard gates will operate on one or two qubits only, and we can chain them together to get larger operators.
+
+We now list our standard unitary gates. The four main (families) single-qubit gates we will need are the following (in the standard $\bket{0}, \bket{1}$ basis):
+\[
+ \qX =
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix},\quad
+ \qZ =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix},\quad
+ \qH = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1 & 1\\
+ 1 & -1
+ \end{pmatrix},\quad
+ \qP_\varphi =
+ \begin{pmatrix}1 & 0\\0 & e^{i\varphi}\end{pmatrix}
+\]
+We also have two ``controlled'' gates. These controlled gates take in two qubits. Each leaves the first qubit unchanged, and decides whether or not to act on the second qubit depending on the value of the first. They are given by
+\[
+ \qCX\bket{i}\bket{j} = \bket{i} X^i\bket{j},\quad \qCZ \bket{i}\bket{j} = \bket{i} Z^i \bket{j}.
+\]
+In the basis $\{\bket{0}\bket{0}, \bket{0}\bket{1}, \bket{1}\bket{0}, \bket{1}\bket{1}\}$, we can write these operators as
+\[
+ \qCX =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & 1 & 0 & 0\\
+ 0 & 0 & 0 & 1\\
+ 0 & 0 & 1 & 0
+ \end{pmatrix},\quad
+ \qCZ =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & 1 & 0 & 0\\
+ 0 & 0 & 1 & 0\\
+ 0 & 0 & 0 & -1
+ \end{pmatrix},
+\]
+These will be taken as the basic unitary gates.
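The actions of these gates can be verified with a short numerical sketch (plain Python, no libraries; the helper `matvec` and the variable names are ours):

```python
from cmath import exp, pi

def matvec(M, v):
    """Apply a matrix (list of rows) to a column vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

s = 1 / 2**0.5
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
H = [[s, s], [s, -s]]
P = lambda phi: [[1, 0], [0, exp(1j * phi)]]

# H|0> = (|0> + |1>)/sqrt(2), and H is self-inverse
plus = matvec(H, [1, 0])
print(plus)              # [0.707..., 0.707...]
print(matvec(H, plus))   # approximately [1, 0]

# P(pi) puts the phase e^{i pi} = -1 on |1>, so it acts like Z
print(P(pi)[1][1])       # approximately -1

# CX in the basis |00>, |01>, |10>, |11>: flips the target iff control is 1
CX = [[1,0,0,0], [0,1,0,0], [0,0,0,1], [0,0,1,0]]
print(matvec(CX, [0, 0, 1, 0]))   # |1>|0> -> |1>|1>, i.e. [0, 0, 0, 1]
```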
+
+The other thing we are allowed to do in a quantum system is \emph{measurement}. What does this do? We wlog assume we are measuring the first qubit. Suppose the state before the measurement is given by
+\[
+ c_0 \bket{0} \bket{a} + c_1 \bket{1}\bket{b},
+\]
+where $\bket{a}, \bket{b}$ are $(n-1)$-qubit states of unit norm, and $|c_0|^2 + |c_1|^2 = 1$.
+
+Then when we measure the first qubit, we have probability $|c_0|^2$ of getting $0$, and probability $|c_1|^2$ of getting $1$. After measuring, the resulting state is either $\bket{0}\bket{a}$ or $\bket{1}\bket{b}$, depending on whether we measured $0$ or $1$ respectively.
+
+Measurements are irreversible, and in particular aren't given by unitary matrices. We will allow ourselves to make classical computations based on the results of the measurements, and decide which future quantum gates we apply based on the results of the classical computations.
+
+While this seems like a very useful thing to do, it is a mathematical fact that we can modify such a system to an equivalent quantum one where all the measurements are done at the end instead. So we are not actually adding anything new to our mathematical model of computation by allowing such classical manipulations.
+
+Now what is the analogous notion of a universal set of gates? In the classical case, the set of boolean functions is discrete, and therefore a finite set of gates can be universal. However, in the quantum case, the possible unitary matrices are continuous, so no finite set can be universal (more mathematically, there is an uncountable number of unitary matrices, but a finite collection of gates can only generate a countable subgroup of matrices).
+
+Thus, instead of asking for universality, we ask for approximate universality. To appreciate this, we can take the example of rotations --- there is no single rotation that generates all possible rotations in $\R^2$. However, we can pick a rotation by an irrational angle, and then the set of rotations generated by this rotation is dense in the set of all rotations, and this is good enough.
+
+\begin{defi}[Approximate universality]\index{approximate universality}
+ A collection of gates is \emph{approximately universal} if for any unitary matrix $U$ and any $\varepsilon > 0$, there is some circuit $\tilde{U}$ built out of the collection of gates such that
+ \[
+ \norm{U - \tilde{U}} < \varepsilon.
+ \]
+ In other words, we have
+ \[
+ \sup_{\norm{\psi} = 1} \norm{U\bket{\psi} - \tilde{U} \bket{\psi}} < \varepsilon,
+ \]
+ where we take the usual norm on the vectors (any two norms are equivalent if the state space is finite dimensional, so it doesn't really matter).
+\end{defi}
+
+We will provide some examples without proof.
+\begin{eg}
+ The infinite set $\{\qCX\} \cup \{\text{all $1$-qubit gates}\}$ is exactly universal.
+\end{eg}
+
+\begin{eg}
+ The collection
+ \[
+ \{\qH, \qT = \qP_{\pi/4}, \qCX\}
+ \]
+ is approximately universal.
+\end{eg}
+
+Similar to the case of classical computation, we can define the following complexity class:
+\begin{defi}[\textbf{BQP}]\index{\textbf{BQP}}
+ The complexity class \textbf{BQP} (bounded error, quantum polynomial time) is the class of all decision problems computable with polynomial quantum circuits with at least $2/3$ probability of being correct.
+\end{defi}
+We can show that \textbf{BQP} is independent of choice of approximately universal gate set. This is not as trivial as the classical case, since when we switch to a different set, we cannot just replace a gate with an equivalent circuit --- we can only do so approximately, and we have to make sure we control the error appropriately to maintain the bound of $2/3$.
+
+We will consider \textbf{BQP} to be the feasible computations with quantum computations.
+
+It is also a fact that \textbf{BPP} is a subset of \textbf{BQP}. This, again, is not a trivial result. In a quantum computation, we act by unitary matrices, which are invertible. However, boolean functions in classical computing are not invertible in general. So there isn't any straightforward plug-in replacement.
+
+However, it turns out that for any classical computation, there is an equivalent computation that uses reversible/invertible boolean gates, with a modest (i.e.\ polynomial) overhead of both space and time resources. Indeed, let $f: B_m \to B_n$ be a boolean function. We consider the function
+\begin{align*}
+ \tilde{f}: B_{m + n} &\to B_{m + n}\\
+ (x, y) &\mapsto (x, y \oplus f(x)),
+\end{align*}
+where $\oplus$ is the bitwise addition (i.e.\ addition in $(\Z/2\Z)^n$, e.g.\ $011 \oplus 110 = 101$). We call $x$ and $y$ the \term{input register} and \term{output register} respectively.
+
+Now if we set $y = 0$, then we get $f(x)$ in the second component of $\tilde{f}$. So we can easily obtain $f$ from $\tilde{f}$, and vice versa.
+
+\begin{lemma}
+ For any boolean function $f: B_m \to B_n$, the function
+ \begin{align*}
+ \tilde{f}: B_{m + n} &\to B_{m + n}\\
+ (x, y) &\mapsto (x, y \oplus f(x)),
+ \end{align*}
+ is invertible, and in fact an involution, i.e.\ is its own inverse.
+\end{lemma}
+
+\begin{proof}
+ Simply note that $x \oplus x = 0$ for any $x$, and bitwise addition is associative.
+\end{proof}
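The lemma is immediate to check numerically; here is a quick sketch (the helper `f_tilde` is our own name for $\tilde{f}$, with bit strings represented as integers so that $\oplus$ is Python's `^`):

```python
import random

def f_tilde(f, x, y):
    """(x, y) -> (x, y XOR f(x)); XOR on integers is bitwise addition."""
    return x, y ^ f(x)

# an arbitrary f: B_4 -> B_4, as a random lookup table
random.seed(0)
table = [random.randrange(16) for _ in range(16)]
f = lambda x: table[x]

# applying f_tilde twice returns the input: it is an involution
assert all(f_tilde(f, *f_tilde(f, x, y)) == (x, y)
           for x in range(16) for y in range(16))
# setting y = 0 recovers f in the output register
assert f_tilde(f, 5, 0) == (5, f(5))
print("f_tilde is an involution")
```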
+
+So we can just consider boolean functions that are invertible. There is an easy way of making this a unitary matrix.
+\begin{lemma}
+ Let $g: B_k \to B_k$ be a permutation of $k$-bit strings. Then the linear map on $\C^{2^k}$ defined by
+ \[
+ A:\bket{x} \mapsto \bket{g(x)}
+ \]
+ on $k$ qubits is unitary.
+\end{lemma}
+
+\begin{proof}
+ This is because the $x$th column of the matrix of $A$ is in fact $A\bket{x} = \bket{g(x)}$, and since $g$ is bijective, the collection of all the $\bket{g(x)}$ is orthonormal.
+\end{proof}
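We can also check this lemma for a sample permutation (here a cyclic successor map on $3$-bit strings, chosen arbitrarily for illustration):

```python
k = 3
g = lambda x: (x + 1) % (2**k)    # a reversible g: the 3-bit cyclic successor

N = 2**k
# A has columns A|x> = |g(x)>, i.e. A[i][x] = 1 iff i = g(x)
A = [[1 if i == g(x) else 0 for x in range(N)] for i in range(N)]

# The columns are orthonormal, so A^T A = I (the entries are real,
# so the transpose is the conjugate transpose, and this is unitarity).
AtA = [[sum(A[m][i] * A[m][j] for m in range(N)) for j in range(N)]
       for i in range(N)]
I = [[1 if i == j else 0 for j in range(N)] for i in range(N)]
print(AtA == I)   # True
```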
+
+Thus, given any $f: B_m \to B_n$, we get the $(m + n)$-qubit unitary matrix denoted by $U_f$, given by
+\[
+ U_f\bket{x}\bket{y} = \bket{x}\bket{y \oplus f(x)}.
+\]
+In particular, if we set $\bket{y} = \bket{0\cdots0}$, then we get
+\[
+ \begin{tikzcd}
+ \bket{x} \bket{0\cdots0} \ar[r, maps to, "U_f"] & \bket{x}\bket{f(x)}
+ \end{tikzcd},
+\]
+which we can use to evaluate $f(x)$.
+
+What does quantum computation give us? Our gate $U_f$ is unitary, and in particular acts linearly on the states. So by linearity, we have
+\[
+ \begin{tikzcd}
+ \displaystyle\frac{1}{\sqrt{2^n}}\sum_x \bket{x} \bket{0\cdots0} \ar[r, maps to, "U_f"] &\displaystyle \frac{1}{\sqrt{2^n}}\sum_x \bket{x}\bket{f(x)}
+ \end{tikzcd}.
+\]
+Now \emph{one} run of $U_f$ gives us this state that embodies all exponentially many values $f(x)$. Of course, we still need to figure out how we can extract useful information from this superposition, and we will figure that out later on.
+
+While the state
+\[
+ \bket{\psi} = \frac{1}{\sqrt{2^n}}\sum_x \bket{x}
+\]
+has exponentially many terms, it can be made in polynomial (and in fact linear) time by $n$ applications of $\qH$. Indeed, recall that $\qH$ is given by
+\[
+ \begin{tikzcd}
+ \bket{0} \ar[r, maps to, "\qH"] & \frac{1}{\sqrt{2}}(\bket{0} + \bket{1})
+ \end{tikzcd}
+\]
+So for an $n$-qubit state, we have
+\[
+ \begin{tikzcd}
+ \bket{0}\cdots\bket{0} \ar[r, maps to, "\qH\otimes \cdots\otimes \qH"] & \displaystyle\frac{1}{\sqrt{2^n}}(\bket{0} + \bket{1})\cdots(\bket{0} + \bket{1})
+ \end{tikzcd},
+\]
+and expanding the right hand side gives us exactly what we want.
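As a sanity check, the following sketch builds $\qH \otimes \cdots \otimes \qH$ by repeated Kronecker products and applies it to $\bket{0}\cdots\bket{0}$ (the helper `kron` is our own):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

s = 1 / 2**0.5
H = [[s, s], [s, -s]]

n = 3
Hn = H
for _ in range(n - 1):            # build H tensor H tensor ... (n factors)
    Hn = kron(Hn, H)

ket0n = [1] + [0] * (2**n - 1)    # |0...0>
state = [sum(Hn[y][x] * ket0n[x] for x in range(2**n)) for y in range(2**n)]
print(state)   # all 2^n amplitudes equal 1/sqrt(2^n)
```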
+
+\section{Some quantum algorithms}
+\subsection{Balanced vs constant problem}
+We are going to come to our first quantum algorithm.
+
+Here our computational task is a bit special. Instead of an input $i_1, \cdots, i_n \in B_n$, we are given a \emph{black box}/\term{oracle} that computes some $f: B_m \to B_n$. We may have some \emph{a priori} promise on $f$, and we want to determine some property of the function $f$. The only access to $f$ is querying the oracle with its inputs.
+
+The use of $f$ (classical) or $U_f$ (quantum) counts as one step of computation. The \term{query complexity} of this task is the least number of times the oracle needs to be queried. Usually, we do not care much about how many times the other gates are used.
+
+Obviously, if we just query all the values of the function, then we can determine anything about the function, since we have complete information. So the goal is to see if we can do it with fewer queries.
+
+The problem we are going to look at is the \term{balanced vs constant problem}. The input black box is a function $f: B_n \to B_1$. The promise is that $f$ is either
+\begin{enumerate}
+ \item a \emph{constant} function, i.e.\ $f(x) = 0$ for all $x$, or $f(x) = 1$ for all $x$; or
+ \item a \emph{balanced} function, i.e.\ exactly half of the values in $B_n$ are sent to $1$.
+\end{enumerate}
+We want to determine if $f$ is (i) or (ii) with certainty.
+
+Classically, if we want to find the answer \emph{with certainty}, in the worst case scenario, we will have to perform $2^{n - 1} + 1$ queries --- if you are really unlucky, you might query a balanced function $2^{n - 1}$ times and get $0$ all the time, and you can't distinguish it from a constant $0$ function.
+
+Quantumly, we have the \term{Deutsch-Jozsa algorithm}, that answers the question in 1 query!
+
+A trick we are going to use is something known as ``\term{phase kickback}''. Instead of encoding the result as a single bit, we encode them as $\pm$ signs, i.e.\ as phases of the quantum bits. The ``kickback'' part is about using the fact that we have
+\[
+ \bket{a} (e^{i\theta}\bket{b}) = (e^{i\theta}\bket{a})\bket{b},
+\]
+So we might do something to $\bket{b}$ to give it a phase, and then we ``kick back'' the phase on $\bket{a}$, and hopefully obtain something when we measure $\bket{a}$.
+
+Recall that we have
+\[
+ U_f \bket{x}\bket{y} = \bket{x} \bket{y \oplus f(x)}.
+\]
+Here $\bket{x}$ has $n$ qubits, and $\bket{y}$ has 1 qubit.
+
+The non-obvious thing to do is to set the output register to
+\[
+ \bket{\alpha} = \frac{\bket{0} - \bket{1}}{\sqrt{2}} = \qH\bket{1} = \qH \qX \bket{0}.
+\]
+We then note that $U_f$ acts by
+\begin{align*}
+ \bket{x}\left(\frac{\bket{0} - \bket{1}}{\sqrt{2}}\right) &\mapsto \bket{x} \frac{\bket{f(x)} - \bket{1 \oplus f(x)}}{\sqrt{2}}\\
+ &=
+ \begin{cases}
+ \bket{x}\frac{\bket{0} - \bket{1}}{\sqrt{2}}&\text{if } f(x) = 0\\
+ \bket{x}\frac{\bket{1} - \bket{0}}{\sqrt{2}}&\text{if } f(x) = 1\\
+ \end{cases}\\
+ &= (-1)^{f(x)}\bket{x}\bket{\alpha}.
+\end{align*}
+Now we do this to the superposition over all possible $x$:
+\[
+ \frac{1}{\sqrt{2^n}} \sum \bket{x}\bket{\alpha} \mapsto \left(\frac{1}{\sqrt{2^n}} \sum (-1)^{f(x)}\bket{x}\right) \bket{\alpha}.
+\]
+So one query gives us
+\[
+ \bket{\xi_f} = \frac{1}{\sqrt{2^n}} \sum (-1)^{f(x)}\bket{x}.
+\]
+The key observation now is simply that if $f$ is constant, then all the signs are the same. If $f$ is balanced, then exactly half of the signs are $+$ and half are $-$. The crucial thing is that $\bket{\xi_{f_\text{const}}}$ is orthogonal to $\bket{\xi_{f_\text{balanced}}}$. This is a good thing, since orthogonality is something we can perfectly distinguish with a quantum measurement.
+
+There is a slight technical problem here. We allow only measurements in the standard $\bket{0}$, $\bket{1}$ basis. So we want to ``rotate'' our states to the standard basis. Fortunately, recall that
+\[
+ \begin{tikzcd}
+ \bket{0}\cdots\bket{0} \ar[r, "\qH\otimes \cdots\otimes \qH"] &\frac{1}{\sqrt{2^n}}\sum \bket{x}
+ \end{tikzcd},
+\]
+Now recall that $\qH$ is self-inverse, so $\qH^2 = I$. Thus, if we apply $\qH \otimes\cdots \otimes \qH$ to $\frac{1}{\sqrt{2^n}}\sum \bket{x}$, then we obtain $\bket{0}\cdots\bket{0}$.
+
+We write
+\[
+ \bket{\eta_f} = \qH \otimes \cdots \otimes \qH \bket{\xi_f}.
+\]
+Since $\qH$ is unitary, we still have
+\[
+ \bket{\eta_{f_\mathrm{const}}} \perp \bket{\eta_{f_\mathrm{balanced}}}.
+\]
+Now we note that if $f$ is constant, then
+\[
+ \bket{\eta_{f_\mathrm{const}}} = \pm \bket{0}\cdots\bket{0}.
+\]
+If we look at what $\bket{\eta_{f_\mathrm{balanced}}}$ is, it will be a huge mess, but it doesn't really matter --- all that matters is that it is perpendicular to $\bket{0}\cdots\bket{0}$.
+
+Now when we measure $\bket{\eta_f}$, if $f$ is a constant function, then we obtain $0\cdots0$ with probability $1$. If it is balanced, then we obtain something that is not $0\cdots0$ with probability $1$. So we can determine the result with probability $1$.
+\begin{center}
+ \begin{tikzpicture}
+ \addstate{$\bket{0}$}{0}{5};
+ \addstate{$\rvdots\;$}{1}{5};
+ \addstate{$\bket{0}$}{2}{5};
+ \addstate{$\bket{0}$}{3}{5};
+ \addoperator{$\qH$}{0}{1};
+ \addoperator{$\qH$}{1}{1};
+ \addoperator{$\qH$}{2}{1};
+ \addoperator{$\qX$}{3}{0.6};
+ \addoperator{$\qH$}{3}{1.4};
+ \addbigoperator{$U_f$}{0}{3}{2}{3};
+ \addoperator{$\qH$}{0}{4};
+ \addoperator{$\qH$}{1}{4};
+ \addoperator{$\qH$}{2}{4};
+
+ \draw [decorate, decoration={brace, amplitude=6pt}] (-0.8, -1.7) -- (-0.8, 0.2) node [pos=0.5, left] {input\;\;};
+
+ \node [left] at (-0.8, -2.1) {output};
+
+ \draw [decorate, decoration={brace, amplitude=6pt}] (5.2, 0.2) -- (5.2, -1.7) node [pos=0.5, right] {\;\;measure};
+
+ \node [right] at (5.2, -2.1) {discard};
+ \end{tikzpicture}
+\end{center}
+This uses exactly one query, with $1 + (n + 1) + n + n = O(n)$ elementary gates and measurements.
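For small $n$, the whole algorithm can be simulated classically. The sketch below (our own illustration, of course defeating the quantum speedup) prepares the one-query state $\bket{\xi_f}$ directly, applies $\qH^{\otimes n}$, and reads off the probability of measuring $0\cdots 0$:

```python
def hadamard_n(n):
    """H tensored n times: entry (y, x) is (-1)^(y.x) / sqrt(2^n),
    where y.x is the bitwise inner product mod 2."""
    s = 1 / 2**(n / 2)
    dot = lambda y, x: bin(y & x).count("1") % 2
    return [[s * (-1)**dot(y, x) for x in range(2**n)] for y in range(2**n)]

def deutsch_jozsa_prob0(f, n):
    """Form |xi_f> = sum_x (-1)^f(x) |x> / sqrt(2^n) (one query),
    apply H^n, and return the probability of measuring 0...0."""
    N = 2**n
    xi = [(-1)**f(x) / N**0.5 for x in range(N)]
    H = hadamard_n(n)
    eta = [sum(H[y][x] * xi[x] for x in range(N)) for y in range(N)]
    return eta[0]**2

n = 3
const = lambda x: 0
balanced = lambda x: x & 1        # exactly half the inputs give 1
print(deutsch_jozsa_prob0(const, n))     # 1.0 up to rounding
print(deutsch_jozsa_prob0(balanced, n))  # 0.0 up to rounding
```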
+
+What if we tolerate error in the balanced vs constant problem? In other words, we only require that the answer is correct with probability $1 - \varepsilon$ with $0 < \varepsilon < \frac{1}{2}$.
+
+In the quantum case, nothing much changes, since we are probably not going to do better than $1$ query. However, we no longer have a huge benefit over classical algorithms. There is a classical randomized algorithm with $O(\log(1/\varepsilon))$ queries, which in particular does not depend on $n$.
+
+Indeed, we do it the obvious way --- we choose $K$ $x$-values uniformly at random from $B_n$, say $x_1, \cdots, x_K$ (where $K$ is fixed and determined later). We then evaluate $f(x_1), \cdots, f(x_K)$.
+
+If all the outputs are the same, then we say $f$ is constant. If they are not the same, then we say $f$ is balanced.
+
+If $f$ actually is constant, then the answer is correct with probability $1$. If $f$ is balanced, then each $f(x_i)$ is $0$ or $1$ with equal probability. So the probability of getting the same values for all $x_i$ is
+\[
+ \frac{2}{2^K} = 2^{1 - K}.
+\]
+This is our failure probability. So if we pick
+\[
+ K > \log_2 (\varepsilon^{-1}) + 1,
+\]
+then we have a failure probability of less than $\varepsilon$.
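The failure probability $2^{1 - K}$ is easy to confirm by Monte Carlo simulation (the sampling setup below is our own illustration, with a particular balanced $f$):

```python
import random

random.seed(0)
n, K, trials = 8, 3, 100_000
balanced = lambda x: x & 1        # a sample balanced f on B_8

def all_K_agree():
    """One run of the classical test: query f at K random inputs
    and report whether all the outputs agreed."""
    v0 = balanced(random.randrange(2**n))
    return all(balanced(random.randrange(2**n)) == v0 for _ in range(K - 1))

# fraction of runs fooled into answering "constant"
est = sum(all_K_agree() for _ in range(trials)) / trials
print(est)    # close to 2^(1 - K) = 0.25
```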
+
+Can we decide \emph{every} yes/no question about $f: B_n \to B_1$ by quantum algorithms with ``a few'' queries? The answer is no. One prominent example is the \term{SAT problem} (\term{satisfiability problem}) --- given an arbitrary $f$, we want to determine if there is an $x$ such that $f(x) = 1$. It can be shown that any quantum algorithm (even if we allow for bounded errors) needs at least $\sim \sqrt{2^n}$ queries, and this is achieved by Grover's algorithm. Classically, we need $O(2^n)$ queries. So we have achieved a square root speedup, which is good, but not as good as the Deutsch-Jozsa algorithm.
+
+In any case, the Deutsch-Jozsa algorithm demonstrates how we can achieve an exponential benefit with quantum algorithms, but it happens only when we have no error tolerance. In real-life scenarios, external factors will lead to potential errors anyway, and requiring that we are always correct is not a sensible requirement.
+
+There are other problems where quantum algorithms are better:
+\begin{eg}[Simon's algorithm]\index{Simon's algorithm}
+ \term{Simon's problem} is a promise problem about $f: B_n \to B_n$ with provably exponential separation between classical ($O(2^{n/4})$) and quantum ($O(n)$) query complexity, even with bounded error. The details are on the first example sheet.
+\end{eg}
+
+\subsection{Quantum Fourier transform and periodicities}
+We've just seen some nice examples of benefits of quantum algorithms. However, oracles are rather unnatural problems --- it is rare to just have a black-box access to a function without knowing anything else about the function.
+
+How about more ``normal'' problems? The issue with trying to compare quantum and classical algorithms for ``normal'' problems is that we don't actually have any method to find the lower bound for the computation complexity. For example, while we have not managed to find polynomial prime factorization algorithms, we cannot prove for sure that there isn't any classical algorithm that is polynomial time. However, for the prime factorization problem, we \emph{do} have a quantum algorithm that does much better than all known classical algorithms. This is \emph{Shor's algorithm}, which relies on the toolkit of the quantum Fourier transform.
+
+We start by defining the quantum Fourier transform.
+\begin{defi}[Quantum Fourier transform mod $N$]\index{quantum Fourier transform}
+ Suppose we have an $N$-dimensional state space with basis $\bket{0}, \bket{1}, \cdots, \bket{N - 1}$ labelled by $\Z/N\Z$. The \emph{quantum Fourier transform mod $N$} is defined by
+ \[
+ \qQFT: \bket{a} \mapsto \frac{1}{\sqrt{N}} \sum_{b = 0}^{N - 1} e^{2\pi i ab/N} \bket{b}.
+ \]
+ The matrix entries are
+ \[
+ [\qQFT]_{ab} = \frac{1}{\sqrt{N}} \omega^{ab},\quad \omega = e^{2\pi i/N},
+ \]
+ where $a, b = 0, 1, \cdots, N - 1$. We write \term{$\qQFT_n$} for the quantum Fourier transform mod $n$.
+\end{defi}
+Note that we are civilized and start counting at $0$, not $1$.
+
+We observe that the matrix $\sqrt{N}\qQFT$ is
+\begin{enumerate}
+ \item Symmetric
+ \item The first (i.e.\ $0$th) row and column are all $1$'s.
+ \item Each row and column is a geometric progression $1, r, r^2, \cdots, r^{N - 1}$, where $r = \omega^k$ for the $k$th row or column.
+\end{enumerate}
+
+\begin{eg}
+ If we look at $\qQFT_2$, then we get our good old $\qH$. However, $\qQFT_4$ is not $\qH \otimes \qH$.
+\end{eg}
+
+\begin{prop}
+ $\qQFT$ is unitary.
+\end{prop}
+
+\begin{proof}
+ We use the fact that
+ \[
+ 1 + r + \cdots + r^{N - 1} =
+ \begin{cases}
+ \frac{1 - r^N}{1 - r} & r \not= 1\\
+ N & r = 1
+ \end{cases}.
+ \]
+ So if $r = \omega^k$, then we get
+ \[
+ 1 + r + \cdots + r^{N - 1} =
+ \begin{cases}
+ 0 & k \not\equiv 0 \mod N\\
+ N & k \equiv 0 \mod N
+ \end{cases}.
+ \]
+ Then we have
+ \[
+ (\qQFT^\dagger \qQFT)_{ij} = \frac{1}{\sqrt{N}^2} \sum_k \omega^{-ik} \omega^{jk} =\frac{1}{N} \sum_k \omega^{(j-i)k} =
+ \begin{cases}
+ 1 & i = j\\
+ 0 & i \not= j
+ \end{cases}.\qedhere
+ \]
+\end{proof}
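This is easy to confirm numerically; the sketch below (function names ours) builds the $\qQFT$ matrix entry by entry and checks $\qQFT^\dagger \qQFT = I$:

```python
from cmath import exp, pi

def qft(N):
    """[QFT]_ab = omega^(ab) / sqrt(N), with omega = e^(2 pi i / N)."""
    w = exp(2j * pi / N)
    s = 1 / N**0.5
    return [[s * w**(a * b) for b in range(N)] for a in range(N)]

N = 4
F = qft(N)
FdF = [[sum(F[k][i].conjugate() * F[k][j] for k in range(N))
        for j in range(N)] for i in range(N)]
print(all(abs(FdF[i][j] - (i == j)) < 1e-9
          for i in range(N) for j in range(N)))    # True: QFT is unitary

# QFT_2 agrees with H up to rounding (while QFT_4 is not H tensor H)
print(qft(2))
```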
+
+We now use the quantum Fourier Transform to solve the periodicity problem.
+\begin{eg}\index{Periodicity problem}
+ Suppose we are given $f: \Z/N\Z \to Y$ (for some set $Y$). We are promised that $f$ is periodic with some period $r \mid N$, so that
+ \[
+ f(x + r) = f(x)
+ \]
+ for all $x$. We also assume that $f$ is injective in each period, so that
+ \[
+ 0 \leq x_1 \not= x_2 \leq r - 1\quad\text{ implies }\quad f(x_1) \not= f(x_2).
+ \]
+ The problem is to find $r$, with some constant success probability $1 - \varepsilon$ independent of $N$. Since this is not a decision problem, and we can check any candidate answer, we can even allow $\varepsilon > \frac{1}{2}$.
+
+ In the classical setting, if $f$ is viewed as an oracle, then $O(\sqrt{N})$ queries are necessary and sufficient. We are going to show that quantumly, $O(\log \log N)$ queries with $O(\mathrm{poly}(\log N))$ processing steps suffice. In later applications, we will see that the relevant input size is $\log N$, not $N$. So the classical algorithm is exponential time, while the quantum algorithm is polynomial time.
+
+ Why would we care about such a problem? It turns out that later we will see that we can reduce prime factorization into a periodicity problem. While we will actually have a very explicit formula for $f$, there isn't much we can do with it, and treating it as a black box and using a slight modification of what we have here will be much more efficient than any known classical algorithm.
+
+ The quantum algorithm is given as follows:
+ \begin{enumerate}
+ \item Make $\frac{1}{\sqrt{N}} \sum_{x = 0}^{N - 1} \bket{x}$. For example, if $N = 2^n$, then we can make this using $\qH \otimes \cdots \otimes \qH$. If $N$ is not a power of $2$, it is not immediately obvious how we can make this state, but we will discuss this problem later.
+
+ \item We make one query to get
+ \[
+ \bket{f} = \frac{1}{\sqrt{N}} \sum \bket{x} \bket{f(x)}.
+ \]
+
+ \item We now recall that $r \mid N$. Write $N = Ar$, so that $A$ is the number of periods. We measure the second register, and we will see some $y = f(x)$. We let $x_0$ be the \emph{least} $x$ with $f(x) = y$, i.e.\ it is in the first period. Note that we don't know what $x_0$ is. We just know what $y$ is.
+
+ By periodicity, we know there are exactly $A$ values of $x$ such that $f(x) = y$, namely
+ \[
+ x_0,\, x_0 + r,\, x_0 + 2r,\, \cdots,\, x_0 + (A - 1)r.
+ \]
+ By the Born rule, the first register is collapsed to
+ \[
+ \bket{\mathrm{per}} = \left(\frac{1}{\sqrt{A}} \sum_{j = 0}^{A - 1} \bket{x_0 + jr}\right) \bket{f(x_0)}.
+ \]
+ We throw the second register away. Note that $x_0$ is chosen randomly from the first period $0, 1, \cdots, r - 1$ with equal probability.
+
+ What do we do next? If we measure $\bket{\mathrm{per}}$, we obtain a random $j$-value, so what we actually get is a random element (the $x_0$th) of a random period (the $j$th), namely a uniformly chosen random number in $0, 1, \cdots, N - 1$. This is not too useful.
+
+ \item The solution is to use the quantum Fourier transform, which is not surprising, since Fourier transforms are classically used to extract periodicity information.
+
+ Applying $\qQFT_N$ to $\bket{\mathrm{per}}$ now gives
+ \begin{align*}
+ \qQFT_N \bket{\mathrm{per}} &= \frac{1}{\sqrt{NA}} \sum_{j = 0}^{A - 1} \sum_{y = 0}^{N - 1} \omega^{(x_0 + jr) y} \bket{y}\\
+ &= \frac{1}{\sqrt{NA}} \sum_{y = 0}^{N - 1} \omega^{x_0 y}\left(\sum_{j = 0}^{A - 1}\omega^{jry}\right) \bket{y}
+ \end{align*}
+ We now see the inner sum is a geometric series. If $\omega^{ry} = 1$, then this sum is just $A$. Otherwise, since $\omega^{rA} = \omega^N = 1$, we have
+ \[
+ \sum_{j = 0}^{A - 1} \omega^{jry} = \frac{1 - \omega^{ryA}}{1 - \omega^{ry}} = \frac{1 - 1}{1 - \omega^{ry}} = 0.
+ \]
+ So we are left with
+ \[
+ \qQFT_N \bket{\mathrm{per}} = \sqrt{\frac{A}{N}} \sum_{k = 0}^{r - 1}\omega ^{x_0 k N/r} \bket{k\frac{N}{r}}.
+ \]
+ Note that before the Fourier transform, the random shift $x_0$ lay in the label $\bket{x_0 + jr}$. After the Fourier transform, it is encoded in the phase instead.
+
+ \item Now we can measure the label, and we will get some $C$ which is a multiple of $\frac{N}{r}$, say $C = k_0 \frac{N}{r}$, where $0 \leq k_0 \leq r - 1$ is chosen uniformly at random. We rewrite this equation as
+ \[
+ \frac{k_0}{r} = \frac{C}{N}.
+ \]
+ We know $C$, because we just measured it, and $N$ is a given in the question. Also, $k_0$ is randomly chosen, and $r$ is what we want. So how do we extract that out?
+
+ If by good chance, we have $k_0$ coprime to $r$, then we can cancel $C/N$ to lowest terms and read off $r$ as the resulting denominator $\tilde{r}$. Note that cancelling $C/N$ to lowest terms can be done quickly by the Euclidean algorithm. But how likely are we to be so lucky? We can just find some number theory book, and figure out that the number of natural numbers $< r$ that are coprime to $r$ grows as $r/\log \log r$. More precisely, it is $\sim e^{-\gamma} r/\log \log r$, where $\gamma$ is the Euler--Mascheroni constant. So if $k_0$ is chosen uniformly at random, the probability that $k_0$ is coprime to $r$ is at least of order
+ \[
+ \frac{1}{\log \log r} \geq \frac{1}{\log \log N}.
+ \]
+
+ Note that if $k_0$ is \emph{not} coprime with $r$, then we have $\tilde{r} \mid r$, and in particular $\tilde{r} < r$. So we can check if $\tilde{r}$ is a true period --- we compute $f(0)$ and $f(\tilde{r})$, and see if they are the same. If $\tilde{r}$ is wrong, then they cannot be equal as $f$ is injective in the period.
+
+ While the probability of getting a right answer decreases as $N \to \infty$, we just have to do the experiment many times. From elementary probability, if an event has some (small) success probability $p$, then given any $0 < \varepsilon < 1$, after $M = - \frac{\log \varepsilon}{p}$ trials, the probability that there is at least one success is $> 1 - \varepsilon$. So if we repeat the quantum algorithm $O(\log \log N)$ times, and check $\tilde{r}$ each time, then we can find the true $r$ with any constant level of probability.
+ \item We can further improve this process --- if we have obtained two attempts $\tilde{r}, \tilde{r}'$, then $r$ is at least their least common multiple, and with a bit more number theory, one can show that a constant number of trials suffices. However, the other parts of the algorithm (e.g.\ cancelling $C/N$ down to lowest terms) still use time polynomial in $\log N$. So we have a polynomial time algorithm.
+ \end{enumerate}
+\end{eg}
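The measurement statistics of the final step can be simulated classically, since we know the measured label is $C = k_0 N/r$ with $k_0$ uniform. The following sketch recovers $r$ by cancelling $C/N$ to lowest terms and checking candidates against the oracle; the oracle $f(x) = x \bmod r$ here is an artificial stand-in of our own, and `Fraction` reduces fractions via the Euclidean algorithm:

```python
import random
from fractions import Fraction

random.seed(2)
N, r = 2**10, 16                  # promise: r divides N
f = lambda x: x % r               # artificial periodic oracle, injective per period

def measure_label():
    """The measured label is C = k0 * (N/r), with k0 uniform in 0..r-1."""
    k0 = random.randrange(r)
    return k0 * (N // r)

def candidate(C):
    """Cancel C/N to lowest terms; the denominator always divides r."""
    return Fraction(C, N).denominator

tries = 0
while True:
    tries += 1
    rt = candidate(measure_label())
    if f(rt) == f(0):             # passes only when rt is the true period
        break
print(rt)       # 16
```

A wrong candidate $\tilde{r}$ is a proper divisor of $r$, so $f(\tilde{r}) \ne f(0)$ by injectivity within a period, and the check rejects it.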
+
+There is one thing we swept under the carpet. We need to find an efficient way of computing the quantum Fourier transform, or else we just hid all our complexity in the quantum Fourier transform.
+
+In general, we would expect that a general unitary operation on $n$ qubits needs $\exp(n)$ elementary gates. However, the quantum Fourier transform is special.
+
+\begin{fact}
+ $\qQFT_{2^n}$ can be implemented by a quantum circuit of size $O(n^2)$.
+\end{fact}
+The idea of the construction is to mimic the classical fast Fourier transform. An important ingredient of it is:
+\begin{fact}
+ The state
+ \[
+ \qQFT_{2^n} \bket{x} = \frac{1}{2^{n/2}} \sum_{y = 0}^{2^n - 1} \omega^{xy}\bket{y}
+ \]
+ is in fact a product state.
+\end{fact}
+We will not go into the details of implementation.
+
+We can generalize the periodicity problem to arbitrary groups, known as the \term{hidden subgroup problem}. We are given some oracle for $f: G \to Y$, and we are promised that there is a subgroup $H < G$ such that $f$ is constant and distinct on cosets of $H$ in $G$. We want to find $H$ (we can make ``find'' more precise in two ways --- we can either ask for a set of generators, or provide a way of sampling uniformly from $H$).
+
+In our case, we had $G = (\Z/N\Z, +)$, and our subgroup was
+\[
+ H = \{0, r, 2r, \cdots, (A - 1) r\}.
+\]
+Unfortunately, we do not know how to do this efficiently for a group in general.
+
+\subsection{Shor's algorithm}
+All that was a warm up for \emph{Shor's algorithm}. This is a quantum algorithm that factorizes numbers in polynomial time. The crux of the algorithm will be a modified version of the quantum Fourier transform.
+
+The precise statement of the problem is as follows --- given an integer $N$ with $n = \log N$ digits, we want to find a factor $1 < K < N$. Shor's algorithm will achieve this with constant probability $(1 - \varepsilon)$ in $O(n^3)$ time. The best known classical algorithm is $e^{O(n^{1/3} (\log n)^{2/3})}$.
+
+To do this, we will use the periodicity algorithm. However, there is one subtlety involved. Instead of working in $\Z/N\Z$, we need to work in $\Z$. Since computers cannot work with infinitely many numbers, we will have to truncate it somehow. Since we have no idea what the period of our function will be, we must truncate it randomly, and we need to make sure we can control the error introduced by the truncation.
+
+We shall now begin. Given an $N$, we first choose some $1 < a < N$ uniformly randomly, and compute $\hcf(a, N)$. If it is not equal to $1$, then we are finished. Otherwise, by Euler's theorem, there is a least power $r$ of $a$ such that $a^r \equiv 1 \bmod N$. The number $r$ is called the \emph{order} of $a$ mod $N$. It follows that the function $f: \Z \to \Z/N\Z$ given by $f(k) = a^k \bmod N$ has period $r$, and is injective in each period.
+
+Note that $f(k)$ can be efficiently computed in $\poly(\log k)$ time, by repeated squaring. Also note that classically, it is hard to find $r$, even though $f$ has a simple formula!
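In Python, for instance, the built-in three-argument `pow` already does modular exponentiation by repeated squaring, so each value of $f$ costs only $O(\log k)$ multiplications mod $N$:

```python
# f(k) = a^k mod N; pow(a, k, N) uses repeated squaring internally
N, a = 21, 2
values = [pow(a, k, N) for k in range(12)]
print(values)   # [1, 2, 4, 8, 16, 11, 1, 2, 4, 8, 16, 11]: period r = 6
```

Note how the values repeat with period $6$, and are distinct within each period, exactly as the periodicity algorithm requires.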
+
+It was known to Legendre in 1800 that knowing $r$ means we can factor $N$. Suppose we can find $r$, and further suppose $r$ is even. Then we have
+\[
+ a^r - 1 \equiv (a^{r/2} + 1)(a^{r/2} - 1) \equiv 0 \pmod N.
+\]
+So $N$ divides the product. By minimality of $r$, we know $N$ does not divide $a^{r/2} - 1$. So if $N$ does not divide $a^{r/2} + 1$ either, then $\hcf(N, a^{r/2} \pm 1)$ are both non-trivial factors of $N$.
+
+For this to work, we needed two assumptions -- $r$ is even, and $a^{r/2} \not\equiv -1 \pmod N$. Fortunately, there is a theorem in number theory that says if $N$ is odd and not a prime power, and $a$ is chosen uniformly at random, then the probability that these two things happen is at least $\frac{1}{2}$. In fact, it is $\geq 1 - \frac{1}{2^{m - 1}}$, where $m$ is the number of prime factors of $N$.
+
+So if we repeat this $k$ times, the probability that they all fail to give a factor is less than $\frac{1}{2^k}$. So this can be as small as we wish.
+
+What about the other possibilities? If $N$ is even, then we would have noticed by looking at the last digit, and we can just write down $2$. If $N = c^\ell$ for some $c \geq 2$ and $\ell \geq 2$, then there is a classical polynomial time algorithm that outputs $c$, which is a factor. So these are the easy cases.
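The classical reduction from order-finding to factoring can be sketched in a few lines of Python (a sketch under the assumptions above; the function name is mine):

```python
from math import gcd

def factor_from_order(a, r, N):
    """Given the order r of a mod N, try to split N.

    Succeeds exactly when r is even and a^(r/2) is not -1 mod N,
    which for a random a happens with probability at least 1/2."""
    if r % 2 != 0:
        return None                      # r odd: no split available
    half = pow(a, r // 2, N)
    if half == N - 1:                    # a^(r/2) == -1 (mod N): the split fails
        return None
    for candidate in (gcd(half - 1, N), gcd(half + 1, N)):
        if 1 < candidate < N:
            return candidate             # a non-trivial factor of N
    return None
```

For example, factor_from_order(7, 12, 39) returns a non-trivial factor of 39, matching the worked example at the end of this subsection.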
+
+Everything we've done so far is classical! The quantum part comes in when we want to compute $r$. We know that $f(k) = a^k$ is periodic on $\Z$, which is an infinite domain. So we cannot just apply our periodicity algorithm.
+
+By number theory, we know that $r$ is at most $N$. But other than that, we have no idea what $r$ actually is, nor do we know of any multiple of $r$. So we cannot apply the periodicity argument directly. Instead, we pick a big number $2^m$, and work on the domain $D = \{0, 1, \cdots, 2^m - 1\} = \Z/2^m \Z$. How do we choose $m$? The idea is that we want $0, \cdots, 2^m - 1$ to contain $B$ full periods, plus some extra ``corrupt'' noise $b$, so
+\[
+ 2^m = Br + b,
+\]
+with $0 \leq b < r$. Since we want to separate out the periodicity information from the corrupt noise, we will want $b$ to be relatively small, compared to $Br$. We know the size of $b$ is bounded by $r$, hence by $N$. So we need $2^m$ to be ``much larger'' than $N$. It turns out picking $2^m > N^2$ is enough, and we will pick $m$ to be the smallest number such that this holds.
+
+We now study the effect of corruption on the periodicity algorithm. We again make the state
+\[
+ \bket{f} = \frac{1}{\sqrt{2^m}} \sum_x \bket{x} \bket{f(x)},
+\]
+and measure the value of $f$. We then get
+\[
+ \bket{\mathrm{per}} = \frac{1}{\sqrt{A}} \sum_{k = 0}^{A - 1} \bket{x_0 + kr},
+\]
+where $A = B$ or $B + 1$, depending on whether $x_0 < b$ or not. As before, we apply $\qQFT_{2^m}$ to obtain
+\[
+ \qQFT_{2^m}\bket{\mathrm{per}} = \sum_{c = 0}^{2^m - 1} \tilde{f}(c) \bket{c}.
+\]
+When we did this before, with an exact period, most of the $\tilde{f}(c)$ are zero. However, this time things are a bit more messy. As before, we have
+\[
+ \tilde{f}(c) = \frac{\omega^{c x_0}}{\sqrt{A}\sqrt{2^m}} [1 + \alpha + \cdots + \alpha^{A - 1}],\quad \alpha = e^{2\pi i cr/2^m}.
+\]
+The important question is, when we measure this, which $c$'s will we see with ``good probability''? With exact periodicity, we knew that $\frac{2^m}{r} = A$ is an exact integer. So $\tilde{f}(c) = 0$ except when $c$ is a multiple of $A$. Intuitively, we can think of this as interference, and we had totally destructive and totally constructive interference respectively.
+
+In the inexact case, we will get constructive interference for those $c$ such that the phase $\alpha$ is close to $1$. These are the $c$'s with $\frac{cr}{2^m}$ nearest to integers $k$, and the powers up to $\alpha^{A - 1}$ don't spread too far around the unit circle. So we avoid cancellations.
+
+So we look at those special $c$'s having this particular property. As $c$ increases from $0$ to $2^m - 1$, the angle $\frac{cr}{2^m}$ increments by $\frac{r}{2^m}$ each time from $0$ up to $r$. So we have $c_k$'s for each $k = 0, 1, \cdots, r - 1$ such that
+\[
+ \left|\frac{c_k r}{2^m} - k\right| < \frac{1}{2} \cdot \frac{r}{2^m}.
+\]
+In other words, we have
+\[
+ \left|c_k - k\frac{2^m}{r}\right| < \frac{1}{2}.
+\]
+So the $c_k$ are the integers nearest to the multiples of $2^m/r$.
+
+In $\tilde{f}(c)$, the $\alpha$'s corresponding to the $c_k$'s have the smallest phases, i.e.\ nearest to the positive real axis. We write
+\[
+ \frac{c_k r}{2^m} = k + \xi,
+\]
+where
+\[
+ k \in \Z,\quad |\xi| < \frac{1}{2} \frac{r}{2^m}.
+\]
+Then we have
+\[
+ \alpha^n = \exp\left(2\pi i \frac{c_k r}{2^m}n\right) = \exp\left(2\pi i(k + \xi)n\right) = \exp(2 \pi i \xi n).
+\]
+Now for $n < A$, we know that $|2\pi \xi n| < \pi$, and thus $1, \alpha, \alpha^2, \cdots, \alpha^{A - 1}$ all lie in the same (upper or lower) half plane, so there is no complete cancellation.
+
+Doing all the algebra needed, we find that if $\qQFT\bket{\mathrm{per}}$ is measured, then for any $c_k$ as above, we have
+\[
+ \mathrm{Prob}(c_k) > \frac{\gamma}{r},
+\]
+where
+\[
+ \gamma = \frac{4}{\pi^2} \approx 0.4.
+\]
+Recall that in the exact periodicity case, the points $c_k$ hit the integers exactly, and instead of $\gamma$ we had $1$. The distribution of the $c$'s then looks like:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0);
+ \draw [->] (0, 0) -- (0, 3);
+
+ \foreach \x in {0, 1, 2, 3} {
+ \draw [thick] (\x, 0) -- (\x, 2);
+ }
+ \draw [thick] (0, 0) -- (4, 0);
+ \end{tikzpicture}
+\end{center}
+With inexact periods, we obtain something like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0);
+ \draw [->] (0, 0) -- (0, 3);
+
+ \foreach \x/\y in {0/2.2, 1/2.1, 2/2.3, 3/2.25} {
+ \draw [thick] (\x, 0) -- (\x, \y);
+ }
+ \foreach \x\y in {0.2/0.1,0.4/0.15,0.6/0.13,0.8/0.2,1.2/0.12,1.4/0.16,1.6/0.22,1.8/0.09,2.2/0.11,2.4/0.13,2.6/0.2,2.8/0.18} {
+ \draw [thick] (\x, 0) -- (\x, \y);
+ }
+ \end{tikzpicture}
+\end{center}
+Now how do we get $r$ from a $c_k$? We know
+\[
+ \left| \frac{c_k}{2^m} - \frac{k}{r}\right| < \frac{1}{2^{m + 1}} < \frac{1}{2N^2}.
+\]
+We claim that there is at most one fraction $\frac{k}{r}$ with denominator $r < N$ such that this inequality holds. So this inequality uniquely determines $\frac{k}{r}$.
+
+Indeed, suppose $\frac{k}{r}$ and $\frac{k'}{r'}$ both work, and that they are distinct. Then $k'r - r'k$ is a non-zero integer, so we have
+\[
+ \left|\frac{k'}{r'} - \frac{k}{r}\right| = \frac{|k'r - r'k|}{rr'} \geq \frac{1}{rr'} > \frac{1}{N^2}.
+\]
+However, we also have
+\[
+ \left|\frac{k'}{r'} - \frac{k}{r}\right| \leq \left|\frac{k'}{r'} - \frac{c_k}{2^m}\right| + \left|\frac{c_k}{2^m} - \frac{k}{r}\right| < \frac{1}{2N^2} + \frac{1}{2N^2} = \frac{1}{N^2}.
+\]
+So it follows that we must have $\frac{k'}{r'} = \frac{k}{r}$.
+
+We introduce the notion of a ``good'' $c_k$ value, which is when $k$ is coprime to $r$. Since the proportion of $k < r$ coprime to $r$ is $\varphi(r)/r$, the probability of getting a good $c_k$ is again of order
+\[
+ 1/\log \log r \geq 1/\log \log N.
+\]
+Note that this is the same rate as the case of exact periodicity, since we have only lost a constant factor of $\gamma$! If we did have such a $c_k$, then now $r$ is uniquely determined.
+
+However, there is still the problem of finding $r$ from a good $c_k$ value. At this point, this is just classical number theory.
+
+We could certainly try all $\frac{k'}{r'}$ with $k' < r' < N$ and find the closest one to $c_k/2^m$, but there are $O(N^2)$ fractions to try, while we want an $O(\mathrm{poly}(\log N))$ algorithm. Indeed, if we were willing to spend that much time, we might as well try all numbers less than $N$ and see if they divide $N$, which takes only $O(N)$ trial divisions!
+
+The answer comes from the nice theory of continued fractions. Any rational number $\frac{s}{t} < 1$ has a continued fraction expansion
+\[
+ \frac{s}{t} = \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}}.
+\]
+Indeed to do this, we simply write
+\[
+ \frac{s}{t} = \cfrac{1}{\cfrac{t}{s}} = \cfrac{1}{a_1 + \cfrac{s_1}{t_1}},
+\]
+where we divide $t$ by $s$ to get $t = a_1s + s_1$, and then put $t_1 = s$. We then keep going on with $\frac{s_1}{t_1}$. Since the numbers $s_i, t_i$ keep getting smaller, it follows that this process will eventually terminate.
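This repeated-division procedure translates directly into code. A minimal Python sketch (function name mine):

```python
def continued_fraction(s, t):
    """Continued fraction coefficients [a1, a2, ...] of s/t, for 0 < s < t,
    computed by the repeated division described above."""
    coeffs = []
    while s != 0:
        a, rem = divmod(t, s)        # t = a*s + rem with 0 <= rem < s
        coeffs.append(a)
        t, s = s, rem                # continue with the fraction rem/s
    return coeffs
```

For instance, continued_fraction(853, 2048) returns [2, 2, 2, 42, 4], the expansion computed by hand in the example below.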
+
+Since it is very annoying to type these continued fractions in \LaTeX, we often write the continued fraction as
+\[
+ \frac{s}{t} = [a_1, a_2, a_3, \cdots, a_n].
+\]
+We define the $k$th convergent of $\frac{s}{t}$ to be
+\[
+ \frac{p_k}{q_k} = [a_1, a_2, \cdots, a_k].
+\]
+There are some magic results from number theory that give us a simple recurrence relation for the convergents.
+\begin{lemma}
+ For $a_1, a_2, \cdots, a_\ell$ any positive reals, we set
+ \begin{align*}
+ p_0 &= 0 & q_0 &= 1\\
+ p_1 &= 1 & q_1 &= a_1
+ \end{align*}
+ We then define
+ \begin{align*}
+ p_k &= a_k p_{k - 1} + p_{k - 2}\\
+ q_k &= a_k q_{k - 1} + q_{k - 2}
+ \end{align*}
+ Then we have
+ \begin{enumerate}
+ \item We have
+ \[
+ [a_1, \cdots, a_k] = \frac{p_k}{q_k}.
+ \]
+ \item We also have
+ \[
+ q_k p_{k - 1} - p_k q_{k - 1} = (-1)^k.
+ \]
+ In particular, $p_k$ and $q_k$ are coprime.
+ \end{enumerate}
+\end{lemma}
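The recurrence is straightforward to run. The following Python sketch (function name mine) lists the convergents $p_k/q_k$ of $[a_1, \ldots, a_n]$ using exact rational arithmetic:

```python
from fractions import Fraction

def convergents(coeffs):
    """Convergents p_k/q_k of [a_1, ..., a_n], via the recurrence in the lemma."""
    p_prev, q_prev = 0, 1          # p_0, q_0
    p, q = 1, coeffs[0]            # p_1, q_1
    result = [Fraction(p, q)]
    for a in coeffs[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        result.append(Fraction(p, q))
    return result
```

For example, convergents([2, 2, 2, 42, 4]) produces 1/2, 2/5, 5/12, 212/509, 853/2048, as in the example below; since each $p_k$, $q_k$ are coprime, the Fractions are already in lowest terms.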
+
+From a bit more number theory, we find that
+\begin{fact}
+ If $s < t$ are $m$-bit integers, then the continued fraction has length $O(m)$, and all convergents $\frac{p_k}{q_k}$ can be computed in $O(m^3)$ time.
+\end{fact}
+
+More importantly, we have the following result:
+\begin{fact}
+ Let $0 < x < 1$ be rational, and suppose $\frac{p}{q}$ is rational with
+ \[
+ \left|x - \frac{p}{q}\right| < \frac{1}{2q^2}.
+ \]
+ Then $\frac{p}{q}$ is a convergent of the continued fraction of $x$.
+\end{fact}
+
+Then by this theorem, for a good $c$, we know $\frac{k}{r}$ must be a convergent of $\frac{c}{2^m}$. So we compute all convergents, and find the (unique) one whose denominator is less than $N$ and which is within $\frac{1}{2N^2}$ of $\frac{c}{2^m}$. This gives us the value of $r$, and we are done.
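Putting these pieces together, the classical post-processing step can be sketched as follows, assuming the measured $c$ is good (the function name is mine):

```python
from fractions import Fraction

def recover_r(c, m, N):
    """Recover the period r from a good measured value c, i.e. one for which
    c/2^m lies within 1/(2N^2) of a fraction k/r with k, r coprime and r < N."""
    x = Fraction(c, 2 ** m)
    # continued fraction expansion [a1, a2, ...] of c/2^m by repeated division
    coeffs, s, t = [], c, 2 ** m
    while s:
        a, rem = divmod(t, s)
        coeffs.append(a)
        t, s = s, rem
    if not coeffs:
        return None
    # walk the convergents p/q and return the unique admissible denominator
    p_prev, q_prev = 0, 1
    p, q = 1, coeffs[0]
    for i, a in enumerate(coeffs):
        if i > 0:
            p, p_prev = a * p + p_prev, p
            q, q_prev = a * q + q_prev, q
        if q < N and abs(x - Fraction(p, q)) < Fraction(1, 2 * N * N):
            return q
    return None                      # c was not good after all
```

On the data of the example below, recover_r(853, 11, 39) returns 12.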
+
+In fact, this last classical part is the slowest part of the algorithm.
+
+\begin{eg}
+ Suppose we want to factor $N = 39$. Suppose the random $a$ we chose is $a = 7 < 39$, which is coprime to $N$. Let $r$ be the period of $f(x) = 7^x \bmod 39$.
+
+ We notice
+ \[
+ 1024 = 2^{10} < N^2 = 1521 < 2^{11} = 2048.
+ \]
+ So we pick $m = 11$. Suppose the measurement of $\qQFT_{2^m}\bket{\mathrm{per}}$ yields $c = 853$.
+
+ By the theory, this has a constant probability (approximately $0.4$) to satisfy
+ \[
+ \left|\frac{853}{2^m} - \frac{k}{r}\right| < \frac{1}{2^{m + 1}} = \frac{1}{2^{12}} < \frac{1}{2N^2}.
+ \]
+ We also have a probability of $O(1/\log \log r)$ to have $k$ and $r$ coprime. In this case, $c$ is indeed ``good''. So there is a unique $\frac{k}{r}$ satisfying
+ \[
+ \left|\frac{853}{2048} - \frac{k}{r}\right| < \frac{1}{2^{12}}.
+ \]
+ So to find $\frac{k}{r}$, we do the continued fraction expansion of $\frac{853}{2048}$. We have
+ \[
+ \frac{853}{2048} = \cfrac{1}{\cfrac{2048}{853}} = \cfrac{1}{2 + \cfrac{342}{853}} = \cfrac{1}{2 + \cfrac{1}{\cfrac{853}{342}}} = \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{169}{342}}} = \cdots = [2, 2, 2, 42, 4].
+ \]
+ We can then compute the convergents
+ \begin{align*}
+ [2] &= \frac{1}{2}\\
+ [2, 2] &= \frac{2}{5}\\
+ [2, 2, 2] &= \frac{5}{12}\\
+ [2, 2, 2, 42] &= \frac{212}{509}\\
+ [2, 2, 2, 42, 4] &= \frac{853}{2048}
+ \end{align*}
+ Of all these numbers, only $\frac{5}{12}$ is within $\frac{1}{2^{12}}$ of $\frac{853}{2048}$ and whose denominator is less than $N = 39$.
+
+ If we do not assume $k$ and $r$ are coprime, then the possible $\frac{k}{r}$ are
+ \[
+ \frac{5}{12}, \frac{10}{24}, \frac{15}{36}.
+ \]
+ If we assume that $k$ and $r$ are coprime, then $r = 12$. Indeed, we can check that
+ \[
+ 7^{12} \equiv 1 \pmod {39}.
+ \]
+ So we now know that
+ \[
+ 39 \mid (7^6 + 1)(7^6 - 1).
+ \]
+ We now hope/expect (this happens with probability $> \frac{1}{2}$) that $39$ goes partly into each factor, rather than wholly into one of them. We can compute
+ \begin{align*}
+ 7^6 + 1 &= 117650 \equiv 26 \pmod {39}\\
+ 7^6 - 1 &= 117648 \equiv 24 \pmod {39}
+ \end{align*}
+ We can then compute
+ \[
+ \hcf(26, 39) = 13,\quad \hcf(24, 39) = 3.
+ \]
+ We see that $3$ and $13$ are factors of $39$.
+\end{eg}
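The arithmetic in this example is easy to verify by machine; here is a short Python snippet reproducing the final steps (variable names are mine):

```python
from math import gcd

N, a, r = 39, 7, 12
assert pow(a, r, N) == 1           # 7^12 == 1 (mod 39), consistent with r = 12
half = pow(a, r // 2, N)           # 7^6 mod 39 = 25, so half + 1 = 26, half - 1 = 24
factors = gcd(half + 1, N), gcd(half - 1, N)
print(factors)                     # prints: (13, 3)
```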
+
+\subsection{Search problems and Grover's algorithm}
+We are now going to turn our attention to search problems. These are very important problems in computing, as we can formulate almost all problems as some sort of search problems.
+
+One important example is simultaneous constraint satisfaction. Here we have a large configuration space of options, and we want to find some configuration that satisfies some constraints. For example, when designing a lecture timetable for Part III courses, we need to schedule the courses so that we don't clash two popular courses in the same area, and the courses need to have big enough lecture halls, and we have to make sure a lecturer doesn't have to simultaneously lecture two courses at the same time. This is very complicated.
+
+In general, search problems have some common features:
+\begin{enumerate}
+ \item Given any candidate solution, it is easy to check whether it is good or not.
+ \item There are exponentially many possible instances to try out.
+\end{enumerate}
+
+One example is the boolean satisfiability problem, which we have already seen before.
+\begin{eg}[Boolean satisfiability problem]
+ The \term{boolean satisfiability problem} (\term{SAT}) is as follows --- given a Boolean formula $f: B_n \to B$, we want to know if there is a ``satisfying assignment'', i.e.\ if there is an $x$ with $f(x) = 1$.
+\end{eg}
+
+This has complexity class \textbf{NP}, standing for \emph{non-deterministic polynomial time}. There are many ways to define \textbf{NP}, and here we will provide two. The first definition of \textbf{NP} will involve the notion of a verifier:
+
+\begin{defi}[Verifier]
+ Suppose we have a language $L \subseteq B^*$, where
+ \[
+ B^* = \bigcup_{n \in \N} B_n
+ \]
+ is the set of all bit strings.
+
+ A \term{verifier} for $L$ is a computation $V(w, c)$ with two inputs $w, c$ such that
+ \begin{enumerate}
+ \item $V$ halts on all inputs.
+ \item If $w \in L$, then for \emph{some} $c$, $V(w, c)$ halts with ``accept''.
+ \item If $w \not\in L$, then for \emph{all} $c$, $V(w, c)$ halts with ``reject''.
+ \end{enumerate}
+ A \term{polynomial time verifier} is a $V$ that runs in polynomial time in $|w|$ (not $|w| + |c|$!).
+\end{defi}
+We can think of $c$ as a ``certificate of membership''. So if you are a member, you can exhibit a certificate of membership, and we can check if the certificate is valid. However, if you are not a member, you cannot ``fake'' a certificate.
+
+\begin{defi}[Non-deterministic polynomial time problem]\index{\textbf{NP}}\index{non-deterministic polynomial time}
+ \textbf{NP} is the class of languages that have polynomial time verifiers.
+\end{defi}
+
+\begin{eg}
+ The SAT problem is in \textbf{NP}. Here $c$ is the satisfying assignment, and $V(f, c)$ just computes $f(c)$ and checks whether it is $1$.
+\end{eg}
+
+\begin{eg}
+ Determining if a number is composite is in \textbf{NP}, where a certificate is a factor of the number.
+\end{eg}
+However, it is not immediately obvious that testing if a number is prime is in \textbf{NP}. It is an old result that it indeed is, and recent progress shows that it is in fact in \textbf{P}.
+
+It is rather clear that $\mathbf{P} \subseteq \mathbf{NP}$. Indeed, if we can check membership in polynomial time, then we can also construct a verifier in polynomial time that just throws the certificate away and checks membership directly.
+
+There is another model of \textbf{NP}, via \term{non-deterministic computation}. Recall that in probabilistic computation, in some steps, we had to pick a random number, and picking a different number would lead to a different ``branch''. In the case of non-deterministic computation, we are allowed to take \emph{all} paths at the same time. If \emph{some} of the paths end up being accepting, then we accept the input. If \emph{all} paths reject, then we reject the input. Then we can alternatively say a problem is in \textbf{NP} if there is a polynomial-time non-deterministic machine that checks if the string is in the language.
+
+It is not difficult to see that these definitions of \textbf{NP} are equivalent. Suppose we have a non-deterministic machine that checks if a string is in the language. Then we can construct a verifier whose certificate is a prescription of which particular branch we should follow. Then the verifier just takes the prescription, follows the path described and see if we end up being accepted.
+
+Conversely, if we have a verifier, we can construct a non-deterministic machine by testing a string on \emph{all} possible certificates, and check if any of them accepts.
+
+Unfortunately, we don't know anything about how these different complexity classes compare. We clearly have $\mathbf{P} \subseteq \mathbf{BPP} \subseteq \mathbf{BQP}$ and $\mathbf{P} \subseteq \mathbf{NP}$. However, we do not know if these inclusions are strict, or how $\mathbf{NP}$ compares to the others.
+
+\subsubsection*{Unstructured search problem and Grover's algorithm}
+Usually, when we want to search something, the search space we have is structured in some way, and this greatly helps our searching problem.
+
+For example, if we have a phone book, then the names are ordered alphabetically. If we want to find someone's phone number, we don't have to look through the whole book. We just open to the middle of the book, and see if the person's name is before or after the names on the page. By one lookup like this, we have already eliminated half of the phone book we have to search through, and we can usually very quickly locate the name.
+
+However, if we know someone's phone number and want to figure out their name, it is pretty much hopeless! This is the problem with unstructured data!
+
+So the problem is as follows: we are given an unstructured database with $N = 2^n$ items and a \emph{unique} good item (or no good items). We can query any item for good or bad-ness. The problem is to find the good item, or determine if one exists.
+
+Classically, $O(N)$ queries are necessary and sufficient. Even if we only ask for the right result with some fixed probability $c$, if we pick items randomly to check, then the probability of seeing the ``good'' one in $k$ queries is given by $k/N$. So we still need $O(N)$ queries for any fixed probability.
+
+Quantumly, we have \term{Grover's algorithm}. This needs $O(\sqrt{N})$ queries, and this is both necessary and sufficient.
+
+The database of $N = 2^n$ items will be considered as an oracle $f: B_n \to B_1$. It is promised that there is a unique $x_0 \in B_n$ with $f(x_0) = 1$. The problem is to find $x_0$. Again, we have the quantum version
+\[
+ U_f \bket{x}\bket{y} = \bket{x} \bket{y \oplus f(x)}.
+\]
+However, we'll use instead $I_{x_0}$ on $n$ qubits given by
+\[
+ I_{x_0}\bket{x} =
+ \begin{cases}
+ \bket{x} & x \not= x_0\\
+ -\bket{x} & x = x_0
+ \end{cases}.
+\]
+This can be constructed from $U_f$ as we've done before, and one use of $I_{x_0}$ can be done with one use of $U_f$.
+\begin{center}
+ \begin{tikzpicture}
+ \addstateend{$\bket{s}$}{0}{3} {$I_{x_0}\bket{s}$};
+ \addstateend{$\frac{\bket{0} - \bket{1}}{\sqrt{2}}$}{1}{3}{$\frac{\bket{0} - \bket{1}}{\sqrt{2}}$};
+ \addbigoperator{$U_f$}{0}{1}{1}{2};
+ \end{tikzpicture}
+\end{center}
+We can write $I_{x_0}$ as
+\[
+ I_{x_0} = I - 2 \bket{x_0} \brak{x_0},
+\]
+where $I$ is the identity operator.
+
+We are now going to state the \term{Grover's algorithm}, and then later prove that it works.
+
+For convenience, we write
+\[
+ \qH_n = \underbrace{\qH \otimes \cdots \otimes \qH}_{n\text{ times}}
+\]
+We start with a uniform superposition
+\[
+ \bket{\psi_0} = \qH_n \bket{0\cdots0} = \frac{1}{\sqrt{N}} \sum_{\text{all }x} \bket{x}.
+\]
+We consider the \term{Grover iteration operator} on $n$ qubits given by
+\[
+ \mathcal{Q} = -\qH_n I_0 \qH_n I_{x_0}.
+\]
+Here running $I_{x_0}$ requires one query (whereas $I_0$ is ``free'' because it is just $I - 2 \bket{0}\brak{0}$).
+
+Note that these operators are all real. So we can pretend we are living in the real world and have nice geometric pictures of what is going on. We let $\mathcal{P}(x_0)$ be the (real) plane spanned by $\bket{x_0}$ and $\bket{\psi_0}$. We claim that
+\begin{enumerate}
+ \item In this plane $\mathcal{P}(x_0)$, this operator $\mathcal{Q}$ is a rotation by $2\alpha$, where
+ \[
+ \sin\alpha = \frac{1}{\sqrt{N}} = \braket{x_0}{\psi_0}.
+ \]
+ \item In the orthogonal complement $\mathcal{P}(x_0)^\perp$, we have $\mathcal{Q} = -I$.
+\end{enumerate}
+We will prove these later on. But if we know this, then we can repeatedly apply $\mathcal{Q}$ to $\bket{\psi_0}$ to rotate it near to $\bket{x_0}$, and then measure. Then we will obtain $\bket{x_0}$ with very high probability:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (5, 5);
+ \node at (5, 5) [right] {$\mathcal{P}(x_0)$};
+ \draw [->] (1, 1) -- (4, 1) node [right] {$\bket{\psi_0}$};
+ \draw [dashed] (1, 1) -- (1, 4);
+ \draw [->] (1, 1) -- (2, 4) node [above] {$\bket{x_0}$};
+
+ \draw (1.4, 1) arc(0:71.57:0.4) node [pos=0.5, right] {$\beta$};
+ \draw (1, 1.6) arc (90:71.57:0.6) node [pos=0.5, above] {$\alpha$};
+ \end{tikzpicture}
+\end{center}
+The initial angle is
+\[
+ \cos \beta = \braket{x_0}{\psi_0} = \frac{1}{\sqrt{N}}.
+\]
+So the number of iterations needed is
+\[
+ \frac{\cos^{-1}(1/\sqrt{N})}{2 \sin^{-1}(1/\sqrt{N})} = \frac{\beta}{2\alpha}.
+\]
+In general, this is not an integer, but applying a good integer approximation to it will bring us very close to $\bket{x_0}$, and thus we measure $x_0$ with high probability. For large $N$, the number of iterations is approximately
+\[
+ \frac{\pi/2}{2/\sqrt{N}} = \frac{\pi}{4} \sqrt{N}.
+\]
+\begin{eg}
+ Let's do a boring example with $N = 4$. The initial angle satisfies
+ \[
+ \cos \beta = \frac{1}{\sqrt{4}} = \frac{1}{2}.
+ \]
+ So we know
+ \[
+ \beta = \frac{\pi}{3}.
+ \]
+ Similarly, we have
+ \[
+ 2\alpha = 2 \sin^{-1}\frac{1}{2} = \frac{\pi}{3}.
+ \]
+ So $1$ iteration of $\mathcal{Q}$ will rotate $\bket{\psi_0}$ \emph{exactly} to $\bket{x_0}$, so we can find it with \emph{certainty} with 1 lookup.
+\end{eg}
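Since $\mathcal{Q}$ is real, the whole algorithm can be simulated with a list of real amplitudes, using the identity $-\qH_n I_0 \qH_n = 2\bket{\psi_0}\brak{\psi_0} - I$ (``inversion about the mean'', derived below). A minimal Python sketch (function name mine):

```python
from math import acos, asin, sqrt

def grover(N, x0):
    """Simulate Grover's algorithm for a unique good item x0 among N items,
    tracking the N real amplitudes directly; returns P(measure x0) at the end."""
    amp = [1 / sqrt(N)] * N                        # |psi_0>, uniform superposition
    iterations = round(acos(1 / sqrt(N)) / (2 * asin(1 / sqrt(N))))  # beta/(2 alpha)
    for _ in range(iterations):
        amp[x0] = -amp[x0]                         # I_{x0}: flip the good amplitude
        mean = sum(amp) / N                        # -H_n I_0 H_n = 2|psi_0><psi_0| - I
        amp = [2 * mean - a for a in amp]          #   acts as inversion about the mean
    return amp[x0] ** 2
```

For $N = 4$, one iteration is performed and grover(4, x0) returns exactly $1$, matching the example above.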
+Now we prove that this thing actually works. In general, for any unitary $U$ and $I_{\bket{\psi}} = I - 2 \bket{\psi}\brak{\psi}$, we have
+\[
+ UI_{\bket{\psi}}U^\dagger = UIU^\dagger - 2U \bket{\psi} \brak{\psi} U^\dagger = I_{U \bket{\psi}}.
+\]
+In particular, since $H_n$ is self-adjoint, i.e.\ $H_n^\dagger = H_n$, and by definition $H_n \bket{0} = \bket{\psi_0}$, we know
+\[
+ \mathcal{Q} = -H_n I_0 H_n I_{x_0} = -I_{\bket{\psi_0}} I_{x_0}.
+\]
+Next we note that for any $\bket{\psi}$ and $\bket{\xi}$, we know by definition
+\[
+ I_{\bket{\psi}}\bket{\xi} = \bket{\xi} - 2 \bket{\psi} \braket{\psi}{\xi}.
+\]
+So this modifies $\bket{\xi}$ by some multiple of $\bket{\psi}$. So we know our operator
+\[
+ \mathcal{Q}\bket{\psi} = -I_{\bket{\psi_0}} I_{x_0}\bket{\psi}
+\]
+modifies $\bket{\psi}$ first by some multiple of $\bket{x_0}$, then by some multiple of $\bket{\psi_0}$. So if $\bket{\psi} \in \mathcal{P}(x_0)$, then $\mathcal{Q} \bket{\psi} \in \mathcal{P}(x_0)$ too! So $\mathcal{Q}$ preserves $\mathcal{P}(x_0)$.
+
+We know that $\mathcal{Q}$ is a unitary, and it is ``real''. So it must be a rotation or a reflection, since these are the only things in $\Or(2)$. We can explicitly figure out what it is. In the plane $\mathcal{P}(x_0)$, we know $I_{\bket{x_0}}$ is reflection in the mirror line perpendicular to $\bket{x_0}$. Similarly, $I_{\bket{\psi_0}}$ is reflection in the mirror line perpendicular to $\bket{\psi_0}$.
+
+We now use the following facts about 2D Euclidean geometry:
+\begin{enumerate}
+ \item If $R$ is a reflection in mirror $M$ along $\bket{M}$, then $-R$ is reflection in mirror $M^\perp$ along $\bket{M^\perp}$.
+
+ To see this, we know any vector can be written as $a\bket{M} + b \bket{M^\perp}$. Then $R$ sends this to $a \bket{M} - b\bket{M^\perp}$, while $-R$ sends it to $-a \bket{M} + b \bket{M^\perp}$, and this is reflection in $\bket{M^\perp}$.
+
+ \item Suppose we have mirrors $M_1$ and $M_2$ making an angle of $\theta$:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 1) -- (2, -1) node [right] {$M_1$};
+ \draw (-2, -1) -- (2, 1) node [right] {$M_2$};
+ \draw (0.4, -0.2) arc(-26.57:26.57:0.447) node [pos=0.5, right] {$\theta$};
+ \end{tikzpicture}
+ \end{center}
+ Then reflection in $M_1$ then reflection in $M_2$ is the same as rotating counterclockwise by $2\theta$.
+\end{enumerate}
+So we know
+\[
+ \mathcal{Q} = -I_{\bket{\psi_0}} I_{\bket{x_0}}
+\]
+is reflection in $\bket{x_0^\perp}$ followed by reflection in $(\bket{\psi_0}^\perp)^\perp = \bket{\psi_0}$. So this is a rotation by $2\alpha$, where $\alpha$ is the angle between $\bket{x_0^\perp}$ and $\bket{\psi_0}$, i.e.
+\[
+ \sin \alpha = \cos \beta = \braket{x_0}{\psi_0}.
+\]
+To prove our second claim that $\mathcal{Q}$ acts as $-\mathbf{1}$ in $\mathcal{P}(x_0)^\perp$, we simply note that if $\bket{\xi} \in \mathcal{P}(x_0)^\perp$, then $\bket{\xi} \perp \bket{\psi_0}$ and $\bket{\xi} \perp \bket{x_0}$. So both $I_{\bket{x_0}}$ and $I_{\bket{\psi_0}}$ fix $\bket{\xi}$.
+
+In fact, Grover's algorithm is the best algorithm we can achieve.
+\begin{thm}
+ Let $A$ be any quantum algorithm that solves the unique search problem with probability $1 - \varepsilon$ (for any constant $\varepsilon$), with $T$ queries. Then $T$ is at least of order $\sqrt{N}$. In fact, we have
+ \[
+ T \geq \frac{\pi}{4}(1 - \varepsilon) \sqrt{N}.
+ \]
+\end{thm}
+So Grover's algorithm is not only optimal in the growth rate, but in the constant as well, asymptotically.
+
+The proof is omitted.
+
+\subsubsection*{Further generalizations}
+Suppose we have multiple good items instead, say $r$ of them. We then replace $I_{x_0}$ with $I_f$, where
+\[
+ I_f \bket{x} =
+ \begin{cases}
+ -\bket{x} & x\text{ good}\\
+ \bket{x} & x\text{ bad}
+ \end{cases}
+\]
+We run the same algorithm as before. We let
+\[
+ \bket{\psi_{\mathrm{good}}} = \frac{1}{\sqrt{r}} \sum_{x\text{ good}} \bket{x}.
+\]
+Then now $\mathcal{Q}$ is a rotation through $2\alpha$ in the plane spanned by $\bket{\psi_{\mathrm{good}}}$ and $\bket{\psi_0}$ with
+\[
+ \sin \alpha = \braket{\psi_{\mathrm{good}}}{\psi_0} = \sqrt{\frac{r}{N}}.
+\]
+So for large $N$, we need
+\[
+ \frac{\pi/2}{2\sqrt{r/N}} = \frac{\pi}{4} \sqrt{\frac{N}{r}},
+\]
+i.e.\ we have a $\sqrt{r}$ reduction over the unique case. We will prove that these numbers are right later when we prove a much more general result.
+
+What if we don't know what $r$ is? The above algorithm would not work, because we will not know when to stop the rotation. However, there are some tricks we can do to fix it. This involves cleverly picking angles of rotation at random, and we will not go into the details.
+
+\subsection{Amplitude amplification}
+In fact, the techniques from Grover's algorithm are completely general. Let $G$ be any subspace (the ``good'' subspace) of the state space $\mathcal{H}$, and $G^\perp$ be its orthogonal complement (the ``bad'' subspace). Then
+\[
+ \mathcal{H} = G \oplus G^\perp.
+\]
+Given any normalized vector $\bket{\psi} \in \mathcal{H}$, we have a unique decomposition with real, non-negative coefficients
+\[
+ \bket{\psi} = \sin \theta \bket{\psi_g} + \cos \theta \bket{\psi_b}
+\]
+such that
+\[
+ \bket{\psi_g} \in G,\quad \bket{\psi_b} \in G^\perp
+\]
+are normalized.
+
+We define the reflections
+\[
+ I_{\bket{\psi}} = I -2\bket{\psi}\brak{\psi}, \quad I_G = I - 2P,
+\]
+where $P$ is the projection onto $G$ given by
+\[
+ P = \sum_b \bket{b}\brak{b}
+\]
+for any orthonormal basis $\{\bket{b}\}$ of $G$. This $P$ satisfies
+\[
+ P \bket{\psi} =
+ \begin{cases}
+ \bket{\psi} & \bket{\psi} \in G\\
+ 0 & \bket{\psi} \in G^\perp
+ \end{cases}.
+\]
+We now define the \term{Grover operator}
+\[
+ \mathcal{Q} = - I_\psi I_G.
+\]
+\begin{thm}[Amplitude amplification theorem]\index{amplitude amplification theorem}
+ In the $2$-dimensional subspace spanned by $\bket{\psi_g}$ and $\bket{\psi}$ (or equivalently by $\bket{\psi_g}$ and $\bket{\psi_b}$), where
+ \[
+ \bket{\psi} = \sin \theta \bket{\psi_g} + \cos \theta \bket{\psi_b},
+ \]
+ we have that $\mathcal{Q}$ is rotation by $2 \theta$.
+\end{thm}
+
+\begin{proof}
+ We have
+ \[
+ I_G \bket{\psi_g} = - \bket{\psi_g},\quad I_G \bket{\psi_b} = \bket{\psi_b}.
+ \]
+ So
+ \[
+ \mathcal{Q} \bket{\psi_g} = I_\psi \bket{\psi_g}, \quad \mathcal{Q} \bket{\psi_b} = - I_\psi \bket{\psi_b}.
+ \]
+ We know that
+ \[
+ I_\psi = I - 2\bket{\psi}\brak{\psi}.
+ \]
+ So we have
+ \begin{align*}
+ \mathcal{Q} \bket{\psi_g} &= I_\psi \bket{\psi_g}\\
+ &= \bket{\psi_g} - 2(\sin \theta \bket{\psi_g} + \cos \theta \bket{\psi_b})(\sin \theta)\\
+ &= (1 - 2 \sin^2 \theta) \bket{\psi_g} - 2 \sin \theta \cos \theta \bket{\psi_b}\\
+ &= \cos 2\theta \bket{\psi_g} - \sin 2\theta \bket{\psi_b}\\
+ \mathcal{Q} \bket{\psi_b} &= - I_\psi \bket{\psi_b}\\
+ &= -\bket{\psi_b} + 2(\sin \theta \bket{\psi_g} + \cos \theta \bket{\psi_b})(\cos \theta)\\
+ &= 2 \sin \theta \cos \theta \bket{\psi_g} + (2\cos^2 \theta - 1) \bket{\psi_b}\\
+ &= \sin 2 \theta \bket{\psi_g} + \cos 2 \theta \bket{\psi_b}.
+ \end{align*}
+ So this is rotation by $2 \theta$.
+\end{proof}
+
+If we iterate this $n$ times, then we have rotated by $2n \theta$, but we started at $\theta$ from the $\bket{\psi_b}$ direction. So we have
+\[
+ \mathcal{Q}^n \bket{\psi} = \sin (2n + 1) \theta \bket{\psi_g} + \cos (2n + 1) \theta \bket{\psi_b}.
+\]
+If we measure $\mathcal{Q}^n\bket{\psi}$ for good versus bad, we know
+\[
+ \P(\text{good}) = \sin^2 (2n + 1)\theta,
+\]
+and this is maximized when $(2n + 1) \theta = \frac{\pi}{2}$, i.e.
+\[
+ n = \frac{\pi}{4 \theta} - \frac{1}{2}.
+\]
+For a general $\theta$, this is not an integer. So we take $n$ to be the nearest integer to $\frac{\pi}{4 \theta} - \frac{1}{2}$, which is approximately
+\[
+ \frac{\pi}{4 \theta} = O(\theta^{-1}) = O(1/\sin \theta) = O\left(\frac{1}{\|\text{good projection of }\bket{\psi}\|}\right).
+\]
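As a quick numerical check of this choice of $n$ (a sketch; the function name and the $N = 2^{20}$ example are mine):

```python
from math import asin, pi, sin, sqrt

def amplification_schedule(theta):
    """Nearest-integer iteration count n ~ pi/(4 theta) - 1/2, together with the
    resulting success probability sin^2((2n + 1) theta)."""
    n = round(pi / (4 * theta) - 1 / 2)
    return n, sin((2 * n + 1) * theta) ** 2

# For Grover search with one good item in N = 2^20, theta = asin(1/sqrt(N)):
n, p = amplification_schedule(asin(1 / sqrt(2 ** 20)))
```

For $\theta = \pi/6$ (the $N = 4$ Grover example earlier), this gives $n = 1$ and success probability exactly $1$.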
+
+\begin{eg}
+ Suppose we want to do a Grover search for $r$ good items in $N$ objects. We start with
+ \[
+ \bket{\psi} = \frac{1}{\sqrt{N}} \sum_{\text{all }x} \bket{x} = \sqrt{\frac{r}{N}}\left(\frac{1}{\sqrt{r}} \sum_{\text{good }x} \bket{x}\right) + \sqrt{\frac{N - r}{N}}\left(\frac{1}{\sqrt{N - r}} \sum_{\text{bad }x} \bket{x}\right).
+ \]
+ Then $G$ is the subspace spanned by the good $x$'s, and
+ \[
+ \sin \theta =\sqrt{\frac{r}{N}},
+ \]
+ So $\mathcal{Q}$ is a rotation by $2\theta$, where
+ \[
+ \theta = \sin^{-1}\sqrt{\frac{r}{N}} \approx \sqrt{\frac{r}{N}}
+ \]
+ for $r \ll N$. So we will use $O(\sqrt{N/r})$ operations.
+\end{eg}
+
+\begin{eg}
+ Let $A$ be any quantum circuit on start state $\bket{0\cdots0}$. Then the final state is $A \bket{0\cdots0}$. The good states are the desired computational outcomes. For example, if $A$ is Shor's algorithm, then the desired outcomes might be the good $c$-values. We can write
+ \[
+ A\bket{0\cdots0} = a \bket{\psi_g} + b \bket{\psi_b}.
+ \]
+ The probability of a success in one run is $|a|^2$. So we normally need $O(1/|a|^2)$ repetitions of $A$ to succeed with a given constant probability $1 - \varepsilon$.
+
+ Instead of just measuring the result and hoping for the best, we can use amplitude amplification. We assume we can check if $x$ is good or bad, so we can implement $I_G$. We consider
+ \[
+ \bket{\psi} = A \bket{0\cdots0}.
+ \]
+ Then we define
+ \[
+ \mathcal{Q} = - I_{A\bket{0\cdots0}} I_G = - A I_{\bket{0\cdots0}} A^\dagger I_G.
+ \]
+ Here we can construct $A^\dagger$ just by running the gates of $A$ in reverse order, replacing each by its inverse. So all parts are implementable.
+
+ By amplitude amplification, $\mathcal{Q}$ is rotation by $2 \theta$, where $\sin \theta = |a|$. So after
+ \[
+ n \approx \frac{\pi}{4 \theta} = O(|a|^{-1})
+ \]
+ repetitions, $A \bket{0\cdots 0}$ will be rotated to very near to $\bket{\psi_g}$, and this will succeed with high probability. This gives us a square root speedup over the naive method.
+\end{eg}
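We can check numerically that $\mathcal{Q}$ really acts as a rotation by $2\theta$. In the following numpy sketch, $A$ is an arbitrary (random) unitary standing in for a circuit, and the choice of good outcomes is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 8
# A stands in for an arbitrary circuit: a random unitary from a QR decomposition.
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A, _ = np.linalg.qr(M)
psi = A[:, 0]                          # psi = A|0...0>

good = [0, 1]                          # hypothetical good basis outcomes
P = np.zeros((d, d))
P[good, good] = 1                      # projector onto the good subspace
I = np.eye(d)

I_G = I - 2 * P                                # flips sign on the good subspace
I_psi = I - 2 * np.outer(psi, psi.conj())      # reflection built from psi
Q = -I_psi @ I_G                               # Q = -I_{A|0..0>} I_G

a = np.linalg.norm(P @ psi)            # |a|, the good amplitude
theta = np.arcsin(a)
n = int(np.pi / (4 * theta))           # ~ pi/(4 theta) iterations

v = psi
for _ in range(n):
    v = Q @ v
p_good = np.linalg.norm(P @ v) ** 2
```

The final success probability is exactly $\sin^2((2n+1)\theta)$, which for this choice of $n$ is at least $1 - |a|^2$.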
+
+\section{Measurement-based quantum computing}
+In this chapter, we are going to look at an alternative model of quantum computation. This is rather weird. Instead of applying unitary gates to a state, we prepare a generic starting state known as a \emph{graph state}, and then just keep measuring qubits. By cleverly planning the measurements, we will be able to simulate any quantum computation in the usual sense with such things.
+
+We will need a bunch of new notation.
+\begin{notation}
+ We write
+ \[
+ \bket{\pm_\alpha} = \frac{1}{\sqrt{2}} (\bket{0} \pm e^{-i\alpha} \bket{1}).
+ \]
+ In particular, we have
+ \[
+ \bket{\pm_0} = \bket{\pm} = \frac{1}{\sqrt{2}}(\bket{0} \pm \bket{1})
+ \]
+ Then
+ \[
+ \mathcal{B}(\alpha) = \{\bket{+_\alpha}, \bket{-_\alpha}\}
+ \]
+ is an orthonormal basis. We have $1$-qubit gates
+ \[
+ \qJ(\alpha) = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1 & e^{i\alpha}\\
+ 1 & -e^{i\alpha}
+ \end{pmatrix} = \qH \qP(\alpha),
+ \]
+ where
+ \[
+ \qH = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1 & 1\\
+ 1 & -1
+ \end{pmatrix},\quad \qP(\alpha) =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & e^{i\alpha}
+ \end{pmatrix}.
+ \]
+ We also have the ``Pauli gates''
+ \[
+ \qX=
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix},\quad
+ \qZ =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix} = \qP(\pi)
+ \]
+ We also have the 2-qubit gates
+ \[
+ \qE = \qCZ = \diag(1, 1, 1, -1).
+ \]
+ We also have $1$-qubit measurements
+ \[
+ \qM_i(\alpha) = \text{measurement of qubit $i$ in basis $\mathcal{B}(\alpha)$}.
+ \]
+ The outcome $\bket{+_\alpha}$ is denoted $0$ and the outcome $\bket{-_\alpha}$ is denoted $1$.
+
+ We also have $\qM_i(\qZ)$, which is measurement of qubit $i$ in the standard basis $\{\bket{0}, \bket{1}\}$.
+
+ Finally, we have the notion of a graph state. Given an undirected graph $G = (V, E)$ with no self-loops and at most one edge between any two vertices, we define the \term{graph state} $\bket{\psi_G}$, a state of $|V|$ qubits, as follows: for each vertex $i \in V$, introduce a qubit in state $\bket{+}_i$. For each edge $e: i \to j$, we apply $\qE_{ij}$ (i.e.\ $\qE$ operating on the qubits $i$ and $j$). Since all these $\qE_{ij}$ commute, the order does not matter.
+
+ \begin{eg}
+ If $G_1$ is
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [above] {$1$};
+ \draw (0, 0) -- (1, 0) node [circ] {} node [above] {$2$};
+ \end{tikzpicture}
+ \end{center}
+ then we have
+ \[
+ \bket{\psi_{G_1}} = E_{12} \bket{+}_1 \bket{+}_2 = \frac{1}{2} [ \bket{00} + \bket{01} + \bket{10} - \bket{11}],
+ \]
+ and this is an entangled state.
+
+ If $G_2$ is
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [above] {$1$};
+ \draw (0, 0) -- (1, 0) node [circ] {} node [above] {$2$} -- (2, 0) node [circ] {} node [above] {$3$};
+ \end{tikzpicture}
+ \end{center}
+ then we have
+ \[
+ \bket{\psi_{G_2}} = E_{12}E_{23} \bket{+}_1 \bket{+}_2 \bket{+}_3.
+ \]
+ \end{eg}
+
+ A \emph{cluster state} is a graph state $\bket{\psi_G}$ for $G$ being a rectangular $2D$ grid.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [step=1cm] (0, 0) grid (4, 3);
+ \foreach \x in {0,1,2,3,4} {
+ \foreach \y in {0,1,2,3} { \node [circ] at (\x, \y) {}; };
+ };
+ \end{tikzpicture}
+ \end{center}
+\end{notation}
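The graph-state construction above is easy to check numerically; here $\qE$ is the two-qubit $\qCZ$ matrix:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
E = np.diag([1.0, 1, 1, -1])           # the CZ gate
I2 = np.eye(2)

# Single edge 1-2:
psi_G1 = E @ np.kron(plus, plus)

# Path 1-2-3: apply E_12 and E_23; they commute, so the order is irrelevant.
E12 = np.kron(E, I2)
E23 = np.kron(I2, E)
start = np.kron(np.kron(plus, plus), plus)
psi_G2 = E23 @ E12 @ start
```

The first state comes out as $\frac{1}{2}(1, 1, 1, -1)$, as in the example.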
+
+The main result of measurement-based quantum computation is the following:
+\begin{thm}
+ Let $C$ be any quantum circuit on $n$ qubits with a sequence of gates $U_1, \cdots, U_K$ (in order). We have an input state $\bket{\psi_{\mathrm{in}}}$, and we perform $\qZ$-measurements on the output states on specified qubits $j = i_1, \cdots, i_k$ to obtain a $k$-bit string.
+
+ We can always simulate the process as follows:
+ \begin{enumerate}
+ \item The starting resource is a graph state $\bket{\psi_G}$, where $G$ is chosen depending on the connectivity structure of $C$.
+ \item The computational steps are $1$-qubit measurements of the form $\qM_i(\alpha)$, i.e.\ measurement in the basis $\mathcal{B}(\alpha)$. This is adaptive --- $\alpha$ may depend on the (random) outcomes $s_1, s_2, \cdots$ of previous measurements.
+ \item The computational process is a prescribed (adaptive) sequence $\qM_{i_1}(\alpha_1)$, $\qM_{i_2}(\alpha_2)$, $\cdots$, $\qM_{i_N}(\alpha_N)$, where the qubit labels $i_1, i_2, \cdots, i_N$ are all distinct.
+ \item To obtain the output of the process, we perform further measurements $\qM(\qZ)$ on $k$ specified qubits not previously measured, and we get results $s_{i_1}, \cdots, s_{i_k}$, and finally the output is obtained by further (simple) \emph{classical} computations on $s_{i_1}, \cdots, s_{i_k}$ as well as the previous $\qM_i(\alpha)$ outcomes.
+ \end{enumerate}
+\end{thm}
+The idea of the last part is that the final measurement results $s_{i_1}, \cdots, s_{i_k}$ have to be re-interpreted in light of the earlier $\qM_i(\alpha)$ outcomes.
+
+This is a funny process, because the result of each measurement $M_i(\alpha)$ is uniformly random, with probability $\frac{1}{2}$ for each outcome, but somehow we can obtain useful information by doing adaptive measurements.
+
+We now start the process of constructing such a system. We start with the following result:
+\begin{fact}
+ The $1$-qubit gates $\qJ(\alpha)$ together with the $\qE_{i, i\pm 1}$ form a universal set of gates.
+
+ In particular, any $1$-qubit $U$ is a product of $3$ $\qJ$'s.
+\end{fact}
+
+We call these $E_{i, i \pm 1}$ \term{nearest neighbour $E_{ij}$'s}.
+\begin{proof}
+ This is just some boring algebra.
+\end{proof}
+
+So we can assume that our circuit $C$'s gates are all of the form $\qJ(\alpha)$ or $\qE_{ij}$, and it suffices to implement these gates in our weird system.
+
+The next result we need is what we call the $\qJ$-lemma:
+\begin{lemma}[$\qJ$-lemma]\index{$\qJ$-lemma}
+ Given any $1$-qubit state $\bket{\psi}$, consider the state
+ \[
+ \qE_{12} (\bket{\psi}_1 \bket{+}_2).
+ \]
+ Suppose we now measure $\qM_1(\alpha)$, and suppose the outcome is $s_1 \in \{0, 1\}$. Then after the measurement, the state of qubit $2$ is
+ \[
+ \qX^{s_1} \qJ(\alpha) \bket{\psi}.
+ \]
+ Also, the two outcomes $s_1 = 0, 1$ each occur with probability $\frac{1}{2}$, regardless of the values of $\bket{\psi}$ and $\alpha$.
+\end{lemma}
+
+\begin{proof}
+ We just write it out. We write
+ \[
+ \bket{\psi} = a \bket{0} + b \bket{1}.
+ \]
+ Then we have
+ \begin{align*}
+ \qE_{12} (\bket{\psi}_1 \bket{+}_2) &= \frac{1}{\sqrt{2}} \qE_{12}(a \bket{0}\bket{0} + a \bket{0}\bket{1} + b \bket{1}\bket{0} + b \bket{1}\bket{1})\\
+ &= \frac{1}{\sqrt{2}}(a \bket{0}\bket{0} + a \bket{0}\bket{1} + b \bket{1}\bket{0} - b \bket{1}\bket{1})
+ \end{align*}
+ So if we measured $0$, then we would get something proportional to
+ \begin{align*}
+ \brak{+_\alpha}_1\qE_{12} (\bket{\psi}_1 \bket{+}_2) &= \frac{1}{2}(a \bket{0} + a \bket{1} + b e^{i\alpha} \bket{0} - be^{i\alpha} \bket{1})\\
+ &= \frac{1}{2}
+ \begin{pmatrix}
+ 1 & e^{i\alpha}\\
+ 1 & -e^{i\alpha}
+ \end{pmatrix}
+ \begin{pmatrix}
+ a\\b
+ \end{pmatrix},
+ \end{align*}
+ as required. Similarly, if we measured $1$, then we get $\qX \qJ(\alpha) \bket{\psi}$.
+\end{proof}
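The computation in the proof can be verified numerically; in this sketch the angle and the input state are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def J(a):
    return np.array([[1, np.exp(1j * a)], [1, -np.exp(1j * a)]]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
E = np.diag([1.0, 1, 1, -1])
plus = np.array([1, 1]) / np.sqrt(2)

alpha = 0.7                                    # arbitrary measurement angle
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                     # arbitrary 1-qubit state

state = E @ np.kron(psi, plus)                 # E_12 (|psi>_1 |+>_2)

# Bras <+_alpha| and <-_alpha| acting on qubit 1:
bra_p = np.kron(np.array([1, np.exp(1j * alpha)]) / np.sqrt(2), np.eye(2))
bra_m = np.kron(np.array([1, -np.exp(1j * alpha)]) / np.sqrt(2), np.eye(2))

out0 = bra_p @ state                           # unnormalized post-measurement
out1 = bra_m @ state                           # states of qubit 2
```

Each outcome has probability $\frac{1}{2}$, and the normalized post-measurement states are $\qJ(\alpha)\bket{\psi}$ and $\qX\qJ(\alpha)\bket{\psi}$.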
+
+We will usually denote processes by diagrams. In this case, we started with the graph state
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [circ] at (2, 0) {};
+ \draw (0, 0) -- (2, 0);
+
+ \draw [gray, ->] (-0.5, -0.5) node [below] {$\bket{\psi}$} -- (-0.02, -0.02);
+ \draw [gray, ->] (2.5, -0.5) node [below] {$\bket{+}$} -- (2.02, -0.02);
+ \end{tikzpicture}
+\end{center}
+and the measurement can be pictured as
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [above] {$\alpha$};
+ \node [circ] at (2, 0) {};
+ \draw (0, 0) -- (2, 0);
+ \draw [->] (0.05, -0.05) -- +(0.5, -0.5) node [right] {$s_1$};
+
+ \draw [gray, ->] (-0.5, -0.5) node [below] {$\bket{\psi}$} -- (-0.02, -0.02);
+ \draw [gray, ->] (2.5, -0.5) node [below] {$\bket{+}$} -- (2.02, -0.02);
+ \end{tikzpicture}
+\end{center}
+If we measure $\qZ$, we denote that by
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [above] {$\qZ$};
+ \draw [->] (0.05, -0.05) -- (0.5, -0.5) node [below] {$i$};
+ \end{tikzpicture}
+\end{center}
+In fact, this can be extended to the case where qubit $1$ is just one part of a larger multi-qubit system $1\mathcal{S}$, i.e.
+\begin{lemma}
+ Suppose we start with a state
+ \[
+ \bket{\psi}_{1\mathcal{S}} = \bket{0}_1\bket{a}_\mathcal{S} + \bket{1}_1 \bket{b}_\mathcal{S}.
+ \]
+ We then apply the $\qJ$-lemma process by adding a new qubit $\bket{+}_2$ with $2 \not\in \mathcal{S}$, and then measuring qubit $1$. Then the resulting state is
+ \[
+ \qX_2^{s_1}\qJ_2(\alpha) \bket{\psi}_{2\mathcal{S}}.
+ \]
+\end{lemma}
+So the $\qJ$-lemma allows us to simulate $\qJ$-gates with measurements. But we want to do many $\qJ$ gates. So we need the concatenation lemma:
+\begin{lemma}[Concatenation lemma]
+ If we concatenate the process of $\qJ$-lemma on a row of qubits $1, 2, 3, \cdots$ to apply a sequence of $\qJ(\alpha)$ gates, then all the entangling operators $\qE_{12}, \qE_{23}, \cdots$ can be done \emph{first} before any measurements are applied.
+\end{lemma}
+
+It is a fact that for any composite quantum system $A \otimes B$, local actions (unitary gates or measurements) done on $A$ always commute with anything done on $B$, which is easy to check by expanding out the definitions. So the proof of this is simple:
+
+\begin{proof}
+ For a state $\bket{\psi}_1 \bket{+}_2 \bket{+}_3\cdots$, we can look at the sequence of $\qJ$-processes in the sequence of operations (left to right):
+ \[
+ \qE_{12} \qM_1(\alpha_1) \qE_{23}\qM_2(\alpha_2) \qE_{34} \qM_3(\alpha_3)\cdots
+ \]
+ It is then clear that each $\qE_{ij}$ commutes with all the measurements before it. So we are safe.
+\end{proof}
+
+We can now determine the measurement-based quantum computation process corresponding to a quantum circuit $C$ of gates $U_1, U_2, \cdots, U_K$ with each $U_i$ either a $\qJ(\alpha)$ or a nearest-neighbour $\qE_{ij}$. We may wlog assume the input state to $C$ is
+\[
+ \bket{+}\cdots\bket{+}
+\]
+as any $1$-qubit product state may be written as
+\[
+ \bket{\psi} = U \bket{+}
+\]
+for suitable $U$, which can then be represented as at most three $\qJ(\alpha)$'s. So we simply prefix $C$ with these $\qJ(\alpha)$ gates.
+
+\begin{eg}
+ We can write $\bket{j}$ for $j = 0, 1$ as
+ \[
+ \bket{j} = \qX^j\qH \bket{+}.
+ \]
+ We also have
+ \[
+ \qH = \qJ(0),\quad \qX = \qJ(\pi)\qJ(0).
+ \]
+\end{eg}
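These identities are easy to check numerically:

```python
import numpy as np

def J(alpha):
    # J(alpha) = H P(alpha)
    return np.array([[1, np.exp(1j * alpha)],
                     [1, -np.exp(1j * alpha)]]) / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
```

Indeed $\qJ(0) = \qH$ and $\qJ(\pi)\qJ(0) = \qX$.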
+
+So the idea is that we implement these $\qJ(\alpha)$ gates by the $\qJ$-processes we just described, and the nearest-neighbour $\qE_{ij}$ gates will just be performed when we create the graph state.
+
+We first do a simple example:
+\begin{eg}
+ Consider the circuit $C$ given by
+ \begin{center}
+ \begin{tikzpicture}
+ \addMQCstate{$\bket{+}$}{0}{4};
+ \addMQCstate{$\bket{+}$}{1}{4};
+ \addJ{\alpha_1}{0}{1};
+ \addJ{\alpha_2}{1}{1};
+ \addJ{\alpha_3}{0}{3};
+
+ \drawvert{0}{1}{2};
+ \end{tikzpicture}
+ \end{center}
+ where the vertical line denotes the $\qE_{12}$ operator. At the end, we measure the outputs $i_1, i_2$ by $\qM(\qZ)$ measurements.
+
+ We use the graph state
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1, -1) {};
+
+ \draw (0, 0) -- (2, 0);
+ \draw (0, -1) -- (1, -1) -- (1, 0);
+ \end{tikzpicture}
+ \end{center}
+ In other words, we put a node for a $\bket{+}$, horizontal line for a $\qJ(\alpha)$ and a vertical line for an $\qE$.
+
+ If we just measured all the qubits for the $\qJ$-processes in the order $\alpha_1, \alpha_2, \alpha_3$, and then finally read off the final results $i_1, i_2$:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1, -1) {};
+
+ \draw (0, 0) -- (2, 0);
+ \draw (0, -1) -- (1, -1) -- (1, 0);
+
+ \measurestate{$\alpha_1$}{$s_1$}{0}{0};
+ \measurestate{$\alpha_2$}{$s_2$}{1}{0};
+ \measurestate{$\alpha_3$}{$s_3$}{0}{1};
+ \measurestate{$\qZ$}{$i_1$}{0}{2};
+ \measurestate{\;\;\;$\qZ$}{$i_2$}{1}{1};
+ \end{tikzpicture}
+ \end{center}
+ then we would have effected the circuit
+ \begin{center}
+ \begin{tikzpicture}
+ \addMQCstate{$\bket{+}$}{0}{6};
+ \addMQCstate{$\bket{+}$}{1}{6};
+ \addJ{\alpha_1}{0}{1};
+ \addJ{\alpha_2}{1}{1};
+
+ \addX{s_1}{0}{2};
+ \addX{s_2}{1}{2};
+
+ \addJ{\alpha_3}{0}{4};
+ \addX{s_3}{0}{5};
+
+ \drawvert{0}{1}{3};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+Now the problem is to get rid of the $\qX^{s_i}$'s. Each appears with probability $\frac{1}{2}$, so the probability that none of them appears is tiny for more complicated circuits, and we cannot just rely on pure chance for things to turn out right.
+
+To deal with the unwanted $X^i$ ``errors'', we want to commute them out to the end of the circuit. But they do not commute, so we are going to use the following commutation relations:
+\[
+ J(\alpha) X = e^{i\alpha} Z J(-\alpha)
+\]
+In other words, up to an overall phase, the following diagrams are equivalent:
+\begin{center}
+ \raisebox{-0.5\height}{\begin{tikzpicture}
+ \addMQCstate{}{0}{3};
+ \addX{}{0}{1};
+ \addJ{\alpha}{0}{2};
+ \end{tikzpicture}}
+ \raisebox{-0.5\height}{is equivalent to}
+ \raisebox{-0.5\height}{\begin{tikzpicture}
+ \addMQCstate{}{0}{3};
+ \addJ{-\alpha}{0}{1};
+ \addX{}{0}{2};
+ \end{tikzpicture}}
+\end{center}
+More generally, we can write
+\begin{align*}
+ \qJ_i(\alpha) \qX_i^s &= e^{i\alpha s} \qZ_i^s \qJ_i((-1)^s \alpha)\\
+ \qJ_i(\alpha) \qZ_i^s &= \qX_i^s \qJ_i(\alpha)\\
+ \qE_{ij}\qZ_i^s &= \qZ_i^s \qE_{ij}\\
+ \qE_{ij} \qX_i^s &= \qX_i^s \qZ_j^s \qE_{ij}
+\end{align*}
+Here the subscripts tell us which qubit the gates are acting on.
+
+The last one corresponds to
+\begin{center}
+ \makecenter{\begin{tikzpicture}
+ \addMQCstate{}{0}{3};
+ \addMQCstate{}{1}{3};
+ \addX{}{0}{1};
+ \drawvert{0}{1}{2}
+ \addPhantom{1}{1};
+ \end{tikzpicture}}
+ \makecenter{is equivalent to}
+ \makecenter{\begin{tikzpicture}
+ \addMQCstate{}{0}{3};
+ \addMQCstate{}{1}{3};
+ \drawvert{0}{1}{1}
+ \addX{}{0}{2};
+ \addZ{}{1}{2};
+ \end{tikzpicture}}
+\end{center}
+All of these are good, except for the first one, where we have a funny phase and the angle is negated. The phase change is irrelevant because it doesn't affect measurement outcomes, but the sign change is problematic. To fix this, we need to use adaptive measurements.
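All four relations can be checked numerically; below is the $s = 1$ case of each, with an arbitrary test angle:

```python
import numpy as np

def J(a):
    return np.array([[1, np.exp(1j * a)], [1, -np.exp(1j * a)]]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)
E = np.diag([1.0, 1, 1, -1])   # CZ
a = 1.3                        # an arbitrary test angle
```

The checks below confirm the matrix identities; note that in the fourth relation the $\qZ$ lands on the \emph{other} qubit of the pair, as in the diagram above.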
+
+\begin{eg}
+ Consider the simpler $1$-qubit circuit
+ \begin{center}
+ \begin{tikzpicture}
+ \addMQCstate{$\bket{+}$}{0}{3};
+ \addJ{\alpha_1}{0}{1};
+ \addJ{\alpha_2}{0}{2};
+ \end{tikzpicture}
+ \end{center}
+ We first prepare the graph state
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+
+ \draw (0, 0) -- (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ We now measure the first qubit to get
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+
+ \measurestate{$\alpha_1$}{$r_1$}{0}{0};
+ \draw (0, 0) -- (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ We have thus done
+ \begin{center}
+ \begin{tikzpicture}
+ \addMQCstate{$\bket{+}$}{0}{3};
+ \addJ{\alpha_1}{0}{1};
+ \addX{r_1}{0}{2};
+ \end{tikzpicture}
+ \end{center}
+ To deal with the unwanted $X^{r_1}$, we note that
+ \begin{center}
+ \makecenter{\begin{tikzpicture}
+ \addMQCstate{}{0}{3};
+ \addX{r_1}{0}{1};
+ \addJ{\alpha_2}{0}{2};
+ \end{tikzpicture}}
+ \makecenter{is equivalent to}
+ \makecenter{\begin{tikzpicture}
+ \addMQCstate{}{0}{4};
+ \addJ{(-1)^{r_1}\alpha_2}{0}{1.5};
+ \addZ{r_1}{0}{3};
+ \end{tikzpicture}}
+ \end{center}
+ So we adapt the sign of the second measurement angle to depend on the previous measurement result:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+
+ \draw (0, 0) -- (2, 0);
+ \measurestate{$\alpha_1$}{$r_1$}{0}{0};
+ \measurestate{$(-1)^{r_1} \alpha_2$}{$r_2$}{0}{1};
+ \end{tikzpicture}
+ \end{center}
+ Then this measurement results in
+ \begin{center}
+ \begin{tikzpicture}
+ \addMQCstate{}{0}{6};
+ \addJ{\alpha_1}{0}{1};
+ \addX{r_1}{0}{2};
+ \addJ{(-1)^{r_1}\alpha_2}{0}{3.5};
+ \addX{r_2}{0}{5};
+ \end{tikzpicture}
+ \end{center}
+ which is equivalent to
+ \begin{center}
+ \begin{tikzpicture}
+ \addMQCstate{}{0}{5};
+ \addJ{\alpha_1}{0}{1};
+ \addJ{\alpha_2}{0}{2};
+ \addZ{r_1}{0}{3};
+ \addX{r_2}{0}{4};
+ \end{tikzpicture}
+ \end{center}
+ If we had further $\qJ$-gates, we need to commute both $\qZ^{r_1}$ and $\qX^{r_2}$ over.
+
+ Note that while we are introducing a lot of funny $\qX$'s and $\qZ$'s, these are all we've got, and the order of applying them does not matter, as they anti-commute:
+ \[
+ \qX\qZ = -\qZ\qX.
+ \]
+ So if we don't care about the phase, they effectively commute.
+
+ Also, since $\qX^2 = \qZ^2 = I$, we only need to count the number of $\qX$'s and $\qZ$'s mod 2, which is very helpful.
+
+ Now what do we do with the $\qZ$ and $\qX$ at the end? For the final $\qZ$-measurement, having moved everything to the end, we simply reinterpret the final, actual $\qZ$-measurement result $j$:
+ \begin{enumerate}
+ \item The $\qZ$-gate does not affect the outcome or probability of a $\qZ$-measurement, because if
+ \[
+ \bket{\psi} = a \bket{0} + b \bket{1},
+ \]
+ then
+ \[
+ \qZ \bket{\psi} = a \bket{0} - b \bket{1}.
+ \]
+ So the probabilities of $\bket{0}$ and $\bket{1}$ are $|a|^2$ and $|b|^2$ regardless.
+
+ \item The $\qX$ gate simply interchanges the labels, while leaving the probabilities the same, because if
+ \[
+ \bket{\psi} = a \bket{0} + b \bket{1},
+ \]
+ then
+ \[
+ \qX \bket{\psi} = a \bket{1} + b \bket{0}.
+ \]
+ \end{enumerate}
+ So we ignore all $\qZ$-errors, and for each $\qX^r$ error, we just modify the seen measurement outcome $j$ by $j \mapsto j \oplus r$.
+\end{eg}
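The whole adaptive procedure for this example can be simulated directly. The following numpy sketch (the two angles are arbitrary) builds the $3$-qubit chain graph state, performs the two adaptive measurements, and checks that the remaining qubit carries $\qX^{r_2}\qZ^{r_1}\qJ(\alpha_2)\qJ(\alpha_1)\bket{+}$ up to a global phase:

```python
import numpy as np

rng = np.random.default_rng(2)

def J(a):
    return np.array([[1, np.exp(1j * a)], [1, -np.exp(1j * a)]]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
plus = np.array([1, 1]) / np.sqrt(2)
E = np.diag([1.0, 1, 1, -1])
I2 = np.eye(2)

def measure(state, qubit, alpha, n):
    """Measure `qubit` (0-indexed) of an n-qubit state in basis B(alpha).
    Returns the outcome s and the collapsed state of the remaining qubits."""
    bras = [np.array([1, np.exp(1j * alpha)]) / np.sqrt(2),    # <+_alpha|
            np.array([1, -np.exp(1j * alpha)]) / np.sqrt(2)]   # <-_alpha|
    t = state.reshape([2] * n)
    outs = [np.tensordot(b, t, axes=([0], [qubit])).reshape(-1) for b in bras]
    p0 = np.vdot(outs[0], outs[0]).real
    s = int(rng.random() > p0)
    return s, outs[s] / np.linalg.norm(outs[s])

# Linear graph state on 3 qubits: E_12 E_23 |+>|+>|+>
state = np.kron(E, I2) @ np.kron(I2, E) @ np.kron(np.kron(plus, plus), plus)

alpha1, alpha2 = 0.4, 1.1          # the two J-gate angles (arbitrary)

r1, state = measure(state, 0, alpha1, 3)
# Adapt: negate the second angle if the first outcome was 1.
r2, state = measure(state, 0, (-1) ** r1 * alpha2, 2)

target = np.linalg.matrix_power(X, r2) @ np.linalg.matrix_power(Z, r1) \
    @ J(alpha2) @ J(alpha1) @ plus
```

Whatever the random outcomes $r_1, r_2$ turn out to be, the final state agrees with the target up to an overall phase.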
+
+If we actually implement measurement-based quantum computations, the measurements can always be done ``left to right'', implementing the gates in order. However, we don't have to do that. Recall that quantum operations on disjoint qubits always commute. Since the only things we are doing are measurements, \emph{all} $\qM_i(\alpha)$ measurements can be performed simultaneously if their angles $\alpha$ do not depend on other measurements. This gives us a novel way to parallelize a computation.
+
+For example, in our simple example, we can start by first measuring $r_1$ and $j$, and then measuring $r_2$ after we know $r_1$. In particular, we can \emph{first} measure the ``answer'' $j$, before we do any other thing! The remaining measurements just tell us how we should interpret the answer.
+
+In general, we can divide the measurements into ``layers'' --- the first layer consists of all measurements that do not require any adaptation. The second layer then consists of the measurements that depend only on the first layer, and so on. The \term{logical depth} is the least number of layers needed, and this somewhat measures the complexity of our circuit.
+
+\section{Phase estimation algorithm}
+\index{phase estimation}
+We now describe a quantum algorithm that estimates the eigenvalues of a unitary operator. Suppose we are given a unitary operator $U$ and an eigenstate $\bket{v_\varphi}$. Then we can write
+\[
+ U \bket{v_\varphi} = e^{2\pi i \varphi} \bket{v_\varphi}
+\]
+with $0 \leq \varphi < 1$. Our objective is to estimate $\varphi$ to $n$ binary bits of precision:
+\[
+ \varphi \approx 0.i_1 i_2 i_3 \cdots i_n = \frac{i_1}{2} + \frac{i_2}{2^2} + \cdots + \frac{i_n}{2^n}.
+\]
+We will need the controlled $U^k$ gate $\qcd U^k$ for integers $k$, defined by
+\begin{align*}
+ \qcd U^k\bket{0}\bket{\xi} &= \bket{0}\bket{\xi}\\
+ \qcd U^k\bket{1}\bket{\xi} &= \bket{1} U^k\bket\xi,
+\end{align*}
+where $\bket{0}, \bket{1}$ are 1-qubit states, and $\bket{\xi}$ is a general $d$-dimensional register.
+
+Note that we have
+\[
+ U^k \bket{v_\varphi} = e^{2\pi i k \varphi}\bket{v_\varphi},
+\]
+and we have
+\[
+ \qcd U^k = (\qcd U)^k.
+\]
+Note that if we are given $U$ as a formula or a circuit \emph{description}, then we can readily implement $\qcd U$ by adding control to each gate. However, if $U$ is a quantum black-box, then we need further information. For example, it suffices to have an eigenstate $\bket{\alpha}$ with \emph{known} eigenvalue $e^{i\alpha}$. However, we will not bother ourselves with that, and just assume that we can indeed implement it.
+
+In fact, we will use a ``generalized'' controlled $U$ given by
+\[
+ \bket{x} \bket{\xi} \mapsto \bket{x}U^x \bket{\xi},
+\]
+where $\bket{x}$ has $n$ qubits. We will make this from $\qcd U^k = (\qcd U)^k$ as follows: for
+\[
+ x = x_{n - 1} \cdots x_1 x_0 = x_0 + 2^1 x_1 + 2^2 x_2 + \cdots + 2^{n - 1}x_{n - 1},
+\]
+we write $\qcd U^k_i$ for the controlled $U^k$ controlled by $i$. Then we just construct
+\[
+ \qcd U^{2^0}_0 \,\qcd U^{2^1}_1 \cdots \qcd U^{2^{n - 1}}_{n - 1}.
+\]
+Now if the input is $\bket{\xi} = \bket{v_{\varphi}}$, then we get
+\[
+ e^{2\pi i \varphi x} \bket{x} \bket{v_\varphi}.
+\]
+To do phase estimation, we superpose the above over all $x = 0, 1, 2, \cdots, 2^n - 1$ and use $\bket{\xi} = \bket{v_\varphi}$. So we construct our starting state by
+\[
+ \bket{s} = \qH \otimes \cdots \otimes \qH \bket{0\cdots0} = \frac{1}{\sqrt{2^n}} \sum_{\text{all }x} \bket{x}.
+\]
+Now if we apply the generalized control $U$, we obtain
+\[
+ \underbrace{\left(\frac{1}{\sqrt{2^n}} \sum_x e^{2\pi i \varphi x}\bket{x}\right)}_{\bket{A}} \bket{v_\varphi}.
+\]
+Finally, we apply the inverse Fourier transform $\qQFT_{2^n}^{-1}$ to $\bket{A}$ and measure to see $y_0, y_1, \cdots, y_{n -1 }$ on lines $0, 1, \cdots, n - 1$. Then we simply output
+\[
+ 0.y_0 y_1 \cdots y_{n - 1} = \frac{y_0}{2} + \frac{y_1}{4} + \cdots + \frac{y_{n - 1}}{2^n} = \frac{y_0 y_1 \cdots y_{n - 1}}{2^n}
+\]
+as the estimate of $\varphi$.
+
+Why does this work? Suppose $\varphi$ actually only had $n$ binary digits. Then we have
+\[
+ \varphi = 0.z_0 z_1 \cdots z_{n - 1} = \frac{z}{2^n},
+\]
+where $z \in \Z_{2^n}$. Then we have
+\[
+ \bket{A} = \frac{1}{\sqrt{2^n}} \sum_x e^{2\pi i x z/2^n} \bket{x},
+\]
+which is the Fourier transform of $\bket{z}$. So the inverse Fourier transform of $\bket{A}$ is exactly $\bket{z}$, and we get $\varphi$ exactly with certainty.
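Both the exact case and the inexact case can be illustrated by simulating the state $\bket{A}$ directly; here $n$ and the phases are hypothetical example values:

```python
import numpy as np

n = 4
N = 2 ** n

# Inverse QFT as a matrix: QFT[x, y] = e^{2 pi i x y / N} / sqrt(N).
QFT = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
invQFT = QFT.conj().T

def A_state(phi):
    # |A> = (1/sqrt(N)) sum_x e^{2 pi i phi x} |x>
    return np.exp(2j * np.pi * phi * np.arange(N)) / np.sqrt(N)

# Exact case: phi = z/2^n is read off with certainty.
z = 5
probs_exact = np.abs(invQFT @ A_state(z / N)) ** 2

# Inexact case: the best n-bit value is still the likely outcome.
phi = 0.3141
probs = np.abs(invQFT @ A_state(phi)) ** 2
best = int(round(phi * N)) % N
```

In the exact case the outcome $z$ has probability $1$; in the inexact case the nearest $n$-bit value occurs with probability at least $4/\pi^2$.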
+
+If $\varphi$ has more than $n$ bits, say
+\[
+ \varphi = 0.z_0z_1 \cdots z_{n - 1}z_n z_{n + 1} \cdots,
+\]
+then we have
+\begin{thm}
+ If the measurements in the above algorithm give $y_0, y_1, \cdots, y_{n - 1}$ and we output
+ \[
+ \theta = 0.y_0 y_1 \cdots y_{n - 1},
+ \]
+ then
+ \begin{enumerate}
+ \item The probability that $\theta$ is $\varphi$ to $n$ digits is at least $\frac{4}{\pi^2}$.
+ \item The probability that $|\theta - \varphi| \geq \varepsilon$ is at most $O(1/(2^n \varepsilon))$.
+ \end{enumerate}
+\end{thm}
+The proofs are rather boring and easy algebra.
+
+So for any fixed desired accuracy $\varepsilon$, the probability to fail to get $\varphi$ to this accuracy falls \emph{exponentially} with $n$.
+
+Note that if $\qcd U^{2^k}$ is implemented as $(\qcd U)^{2^k}$, then the algorithm would need
+\[
+ 1 + 2 + 4 + \cdots + 2^{n - 1} = 2^n - 1
+\]
+many $\qcd U$ gates. But for some special $U$'s, this $\qcd U^{2^k}$ can be implemented in polynomial time in $k$.
+
+For example, in \term{Kitaev's factoring algorithm}, for $\hcf(a, N) = 1$, we will use the function
+\[
+ U: \bket{m} \mapsto \bket{am \bmod N}.
+\]
+Then we have
+\[
+ U^{2^k}\bket{m} = \bket{a^{2^k}m \bmod N},
+\]
+which we can implement by repeated squaring.
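A minimal classical sketch of the repeated squaring (the example numbers are arbitrary):

```python
def pow2k_mod(a, k, N):
    """Compute a^(2^k) mod N with k squarings, not 2^k multiplications."""
    r = a % N
    for _ in range(k):
        r = (r * r) % N
    return r
```

This is why the $\qcd U^{2^k}$ for this $U$ costs only $\poly(k)$ arithmetic rather than $2^k$ repetitions of $\qcd U$.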
+
+Now what if we didn't have an eigenstate to begin with? If instead of $\bket{v_\varphi}$, we use a general input state $\bket{\xi}$, then we can write
+\[
+ \bket{\xi} = \sum_j c_j \bket{v_{\varphi_j}},
+\]
+where
+\[
+ U \bket{v_{\varphi_j}} = e^{2\pi i \varphi_j}\bket{v_{\varphi_j}}.
+\]
+Then in the phase estimation algorithm, just before the final measurement, we have managed to get ourselves
+\[
+ \bket{0\cdots0}\bket{\xi} \to \sum_j c_j \bket{\varphi_j} \bket{v_{\varphi_j}}.
+\]
+Then when we measure, we get \emph{one} of the $\varphi_j$'s (or an approximation of it) with probability $|c_j|^2$. Note that this is \emph{not} some average of them. Of course, we don't know which one we got, but we still get some meaningful answer.
+
+\subsubsection*{Quantum counting}\index{quantum counting}
+An application of this is the quantum counting problem. Given $f: B_n \to B$ with $k$ good $x$'s, we want to estimate the number $k$.
+
+Recall the Grover iteration operator $\mathcal{Q}_G$ is a rotation through $2\theta$ in a $2$-dimensional plane spanned by
+\[
+ \bket{\psi_0} = \frac{1}{\sqrt{2^n}} \sum_x \bket{x}
+\]
+and its good projection, where $N = 2^n$ and $\theta$ is given by
+\[
+ \sin \theta = \sqrt{\frac{k}{N}} \approx \theta.
+\]
+Now the eigenvalues of this rotation in the plane are
+\[
+ e^{2i \theta}, e^{-2i\theta}.
+\]
+So either eigenvalue will suffice to get $k$.
+
+We will equivalently write
+\[
+ e^{i2\theta} = e^{2\pi i \varphi}
+\]
+with
+\[
+ 0 \leq \varphi < 1.
+\]
+Then $\pm 2 \theta$ is equivalent to $\varphi$ or $1 - \varphi$, where $\varphi$ is small.
+
+Now we don't have an eigenstate, but we can start with any state in the plane, $\bket{\psi_0}$. We then do phase estimation with it. We will then get either $\varphi$ or $1 - \varphi$ with some probabilities, but we don't mind which one we get, since we can obtain one from the other, and we can tell them apart because $\varphi$ is small.
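Converting a measured phase back into an estimate of $k$ is simple classical post-processing; a sketch with hypothetical values of $N$ and $k$:

```python
import numpy as np

# e^{2 i theta} = e^{2 pi i phi} means theta = pi * phi, and k = N sin^2(theta).
N, k = 2 ** 10, 13
theta = np.arcsin(np.sqrt(k / N))
phi = theta / np.pi                    # the "small" eigenvalue phase

def k_from_phase(phi_est, N):
    # phi and 1 - phi give the same answer, since sin(pi(1 - p)) = sin(pi p)
    return N * np.sin(np.pi * phi_est) ** 2
```

Feeding in either $\varphi$ or $1 - \varphi$ recovers $k$.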
+
+\section{Hamiltonian simulation}
+So. Suppose we did manage to invent a usable quantum computer. What would it be good for? Grover's algorithm is nice, but it seems a bit theoretical. You might say we can use Shor's algorithm to crack encryption, but if quantum computers were available, no one would be foolish enough to use encryption schemes susceptible to such attacks. So what can we actually do?
+
+One useful thing would be to simulate physical systems. If we have a quantum system with $n$ qubits, then we want to simulate its evolution over time, as governed by Schr\"odinger's equation. Classically, an $n$-qubit state is specified by $2^n$ complex numbers, so we would expect any classical algorithm to have performance at best $O(2^n)$. However, one would imagine that to simulate a quantum $n$-qubit system on a quantum computer, we only need $n$ qubits! Indeed, we will see that we will be able to simulate (local) quantum systems in time polynomial in $n$.
+
+In a quantum physical system, the state of the system is given by a state $\bket{\psi}$, and the evolution is governed by a \term{Hamiltonian} $H$. This is a self-adjoint (Hermitian) operator, and in physics, this represents the energy of the system. Thus, we have
+\[
+ \brak{\psi} H \bket{\psi} = \text{average value obtained in measurement of energy}.
+\]
+The time evolution of the particle is given by the Schr\"odinger equation
+\[
+ \frac{\d}{\d t} \bket{\psi(t)} = -i H \bket{\psi(t)}.
+\]
+We'll consider only time-independent Hamiltonians $H(t) = H$. Then the solution can be written down as
+\[
+ \bket{\psi(t)} = e^{-iHt} \bket{\psi(0)}.
+\]
+Here $e^{-iHt}$ is the matrix exponential given by
+\[
+ e^A = I + A + \frac{A^2}{2} + \cdots.
+\]
+Thus, given a Hamiltonian $H$ and a time $t$, we want to simulate $U(t) = e^{-iHt}$ to suitable approximations.
+
+Before we begin, we note the following useful definitions:
+\begin{defi}[Operator norm]\index{operator norm}\index{$\|A\|$}
+ The \emph{operator norm} of an operator $A$ is
+ \[
+ \|A\| = \max_{\|\bket{\psi}\| = 1} \norm{A \bket{\psi}}.
+ \]
+ If $A$ is normal (e.g.\ Hermitian or unitary), then this is the largest absolute value of an eigenvalue of $A$.
+\end{defi}
+
+The following properties are easy to see:
+\begin{prop}
+ \begin{align*}
+ \norm{A + B} &\leq \norm{A} + \norm{B}\\
+ \norm{AB} &\leq \norm{A}\norm{B}.
+ \end{align*}
+\end{prop}
+
+We now begin. There will be a slight catch in what we do. We will have to work with special Hamiltonians known as \emph{$k$-local Hamiltonians} for a fixed $k$. Then for this fixed $k$, the time required to simulate the system will be polynomial in $n$. However, we should not expect the complexity to grow nicely as we increase $k$!
+
+So what is a $k$-local Hamiltonian? This is a Hamiltonian in which each interaction governed by the Hamiltonian only involves $k$ qubits. In other words, this Hamiltonian can be written as a sum of operators, each of which only touches $k$ qubits. This is not too bad a restriction, because in real life, most Hamiltonians are indeed local, so that if each qubit represents a particle, then the behaviour of the particle will only be affected by the particles near it.
+
+\begin{defi}[$k$-local Hamiltonian]\index{$k$-local Hamiltonian}\index{Hamiltonian!$k$-local}
+ We say a Hamiltonian $H$ is $k$-local (for $k$ a fixed constant) on $n$ qubits if
+ \[
+ H = \sum_{j = 1}^m H_j,
+ \]
+ where each $H_j$ acts on at most $k$ qubits (not necessarily adjacent), i.e.\ we can write
+ \[
+ H_j = \tilde{H}_j \otimes I,
+ \]
+ where $\tilde{H}_j$ acts on some $k$ qubits, and $I$ acts on all other qubits as the identity.
+\end{defi}
+
+The number $m$ of terms we need is bounded by
+\[
+ m \leq \binom{n}{k} = O(n^k),
+\]
+which is polynomial in $n$.
+
+\begin{eg}
+ The Hamiltonian
+ \[
+ H = \qX \otimes I \otimes I - \qZ \otimes I \otimes \qY
+ \]
+ is $2$-local on $3$ qubits.
+\end{eg}
+
+We write $M_{(i)}$ to denote the operator $M$ acting on the $i$th qubit.
+\begin{eg}
+ We could write
+ \[
+ \qX \otimes I \otimes I = \qX_{(1)}.
+ \]
+\end{eg}
+
+\begin{eg}[Ising model]\index{Ising model}
+ The \emph{Ising model} on an $n \times n$ square lattice of qubits is given by
+ \[
+ H = J \sum_{i, j = 1}^{n - 1} \left(\qZ_{(i, j)} \qZ_{(i, j + 1)} + \qZ_{(i, j)} \qZ_{(i + 1, j)}\right).
+ \]
+\end{eg}
+
+\begin{eg}[Heisenberg model]\index{Heisenberg model}
+ The Heisenberg model on a line is given by
+ \[
+ H = \sum_{i = 1}^{n - 1} J_x \qX_{(i)} \qX_{(i + 1)} + J_y \qY_{(i)}\qY_{(i + 1)} + J_z \qZ_{(i)} \qZ_{(i + 1)},
+ \]
+ where $J_x, J_y$ and $J_z$ are real constants.
+
+ This is useful in modelling magnetic systems.
+\end{eg}
+
+The idea is that we simulate each $e^{iH_j t}$ separately, and then put them together. However, if $\{H_j\}$ doesn't commute, then in general
+\[
+ e^{-i\sum_j H_j t} \not= \prod_j e^{-iH_j t}.
+\]
+So we need to somehow solve this problem. But putting it aside, we can start working on the quantum simulation problem.
+
+We will make use of the following theorem:
+\begin{thm}[Solovay-Kitaev theorem]\index{Solovay-Kitaev theorem}
+ Let $U$ be a unitary operator on $k$ qubits and $S$ any universal set of quantum gates. Then $U$ can be approximated to within $\varepsilon$ using $O(\log^c \frac{1}{\varepsilon})$ gates from $S$, where $c < 4$.
+\end{thm}
+In other words, we can simulate each $e^{-iH_j t}$ with very modest overhead in circuit size for improved error, \emph{assuming we fix $k$}.
+
+\begin{proof}
+ Omitted.
+\end{proof}
+
+We will also need to keep track of the accumulation of errors. The following lemma will be useful:
+\begin{lemma}
+ Let $\{U_i\}$ and $\{V_i\}$ be sets of unitary operators with
+ \[
+ \norm{U_i - V_i} \leq \varepsilon.
+ \]
+ Then
+ \[
+ \norm{U_m \cdots U_1 - V_m \cdots V_1} \leq m\varepsilon.
+ \]
+\end{lemma}
+This is remarkable!
+
+\begin{proof}
+ See example sheet $2$. The idea is that unitary gates preserve the size of vectors, hence do not blow up errors.
+\end{proof}
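A numerical illustration of the lemma, using a deliberately simple error model: each $V_i$ is $U_i$ times a fixed diagonal unitary $D$ with $\norm{I - D} \leq \varepsilon$, so $\norm{U_i - V_i} \leq \varepsilon$:

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_unitary(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(M)
    return Q

def opnorm(A):
    return np.linalg.norm(A, 2)            # largest singular value

d, m, eps = 4, 10, 1e-3
D = np.diag([np.exp(1j * eps), 1, 1, 1])   # unitary, ||I - D|| = |1 - e^{i eps}| <= eps

U = [rand_unitary(d) for _ in range(m)]
V = [u @ D for u in U]                     # so ||U_i - V_i|| <= eps

err = opnorm(np.linalg.multi_dot(U) - np.linalg.multi_dot(V))
```

The error in the product of $m$ gates stays below $m\varepsilon$, as the lemma promises.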
+
+We start by doing a warm-up: we solve the easy case where the terms in the Hamiltonian commute.
+\begin{prop}
+ Let
+ \[
+ H = \sum_{j = 1}^m H_j
+ \]
+ be any $k$-local Hamiltonian with commuting terms.
+
+ Then for any $t$, $e^{-iHt}$ can be approximated to within $\varepsilon$ by a circuit of
+ \[
+ O\left(m \poly\left(\log\left(\frac{m}{\varepsilon}\right)\right)\right)
+ \]
+ gates from any given universal set.
+\end{prop}
+
+\begin{proof}
+ We pick $\varepsilon' = \frac{\varepsilon}{m}$, and approximate $e^{-iH_jt}$ to within $\varepsilon'$. Then the total error is bounded by $m\varepsilon' = \varepsilon$, and this uses
+ \[
+ O\left(m \poly\left(\log\left(\frac{m}{\varepsilon}\right)\right)\right)
+ \]
+ gates.
+\end{proof}
+
+We now do the full non-commutative case. To do so, we need to keep track of how much $e^{-iH_i t} e^{-i H_j t}$ differs from $e^{-i(H_i + H_j) t}$.
+
+\begin{notation}
+ For a matrix $X$, we write
+ \[
+ X + O (\varepsilon)
+ \]
+ for $X + E$ with $\|E\| = O(\varepsilon)$.
+\end{notation}
+
+Then we have
+\begin{lemma}[Lie-Trotter product formula]\index{Lie-Trotter product formula}
+ Let $A, B$ be matrices with $\|A\|, \|B\| \leq K < 1$. Then we have
+ \[
+ e^{-iA} e^{-iB} = e^{-i(A + B)} + O(K^2).
+ \]
+\end{lemma}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ e^{-iA} &= I - iA + \sum_{k = 2}^\infty \frac{(-iA)^k}{k!}\\
+ &= I - iA + (iA)^2 \sum_{k = 0}^\infty \frac{(-iA)^k}{(k + 2)!}
+ \end{align*}
+ We note that $\|(iA)^2\| \leq K^2$, and the final sum has norm bounded by $e^K < e$. So we have
+ \[
+ e^{-iA} = I - iA + O(K^2).
+ \]
+ Then we have
+ \begin{align*}
+ e^{-iA} e^{-iB} &= (I - iA + O(K^2))(I - iB + O(K^2)) \\
+ &= I - i(A + B) + O(K^2) \\
+ &= e^{-i(A + B)} + O(K^2).
+ \end{align*}
+ Here we needed the fact that $\|A + B\| \leq 2K = O(K)$ and $\|AB\| \leq K^2 = O(K^2)$.
+\end{proof}
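As a quick numerical sanity check of the $O(K^2)$ scaling (an illustrative sketch, not part of the course; the matrices here are arbitrarily chosen non-commuting Hermitian terms):

```python
import numpy as np

# Pauli matrices; K * X and K * Z do not commute, and have norm K.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def U(H):
    """exp(-iH) for Hermitian H, via the spectral decomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w)) @ v.conj().T

def trotter_error(K):
    """Operator norm of exp(-iA) exp(-iB) - exp(-i(A + B)) for A = KX, B = KZ."""
    A, B = K * X, K * Z
    return np.linalg.norm(U(A) @ U(B) - U(A + B), ord=2)

# Halving K should shrink the error by roughly a factor of 4.
ratio = trotter_error(0.1) / trotter_error(0.05)
```

The ratio comes out close to $4$, consistent with an error quadratic in $K$.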
+
+We now apply this repeatedly to accumulate the sum $H_1 + H_2 + \cdots + H_m$ in the exponent. First of all, we note that if each $\|H_j\| < K$, then $\|H_1 + \cdots + H_\ell\| < \ell K$. We want this to be $< 1$ for all $\ell \leq m$. So for now, we assume $K < \frac{1}{m}$. Also, we take $t = 1$ for now. Then consider
+\begin{align*}
+ e^{-iH_1} e^{-iH_2} \cdots e^{-iH_m} &= (e^{-i(H_1 + H_2)} + O(K^2)) e^{-i H_3} \cdots e^{-iH_m}\\
+ &= e^{-i(H_1 + H_2)} e^{-iH_3} \cdots e^{-i H_m} + O(K^2)\\
+ &= e^{-i(H_1 + H_2 + H_3)} e^{-iH_4} \cdots e^{-iH_m} + O((2K)^2) + O(K^2)\\
+ &= e^{-i\sum H_j} + O(m^3 K^2),
+\end{align*}
+where we used the fact that
+\[
+ 1^2 + 2^2 + \cdots + m^2 = O(m^3).
+\]
+We write the error as $C m^3 K^2$.
+
+This is fine if $K$ is super small, but it won't be in general. For general $K$ and $t$ values, we introduce a large $N$ such that
+\[
+ \norm{\frac{H_j t}{N}} < \frac{Kt}{N} \leq \tilde{K} < 1.
+\]
+In other words, we divide time up into small $\frac{t}{N}$ intervals. We then try to simulate
+\[
+ U = e^{-i (H_1 + \cdots + H_m)t } = \left(e^{-i \left(\frac{H_1 t}{N} + \cdots + \frac{H_m t}{N}\right)}\right)^N.
+\]
+This equality holds because we know that $\frac{t}{N}(H_1 + \cdots + H_m)$ commutes with itself (as does everything in the world).
+
+We now want to make sure the final error for $U$ is $< \varepsilon$. So each term $e^{-i \left(\frac{H_1 t}{N} + \cdots + \frac{H_m t}{N}\right)}$ needs to be approximated to within $\frac{\varepsilon}{N}$. Using our previous formula, we want
+\[
+ Cm^3 \tilde{K}^2 < \frac{\varepsilon}{N}.
+\]
+Substituting $\tilde{K} = \frac{Kt}{N}$ and rearranging, we find that we need
+\[
+ N > \frac{Cm^3 K^2 t^2}{\varepsilon}.
+\]
+We now have $Nm$ gates of the form $e^{-iH_j t/N}$. So the circuit size is at most
+\[
+ O\left(\frac{m^4(Kt)^2}{\varepsilon}\right).
+\]
+Recall for $n$-qubits, a general $k$-local Hamiltonian has $m = O(n^k)$. So the circuit size is
+\[
+ |\mathcal{C}| = O\left(\frac{n^{4k} (Kt)^2}{\varepsilon}\right).
+\]
+Now this is in terms of the number of $e^{-iH_j t/N}$ gates. If we want to express this in terms of universal gates, then each gate needs to be approximated to $O(\varepsilon/|\mathcal{C}|)$. We then need $O(\log^c(\frac{|\mathcal{C}|}{\varepsilon}))$ gates for each, for some $c < 4$. So we only get a modest extra multiplicative factor in $|\mathcal{C}|$.
+
+Note that for a fixed $n$ with a variable $t$, then a quantum process $e^{-iHt}$ runs in time $t$, but our simulation needs time $O(t^2)$. This can be improved to $O(t^{1 + \delta})$ for any $\delta > 0$ by using ``better'' Lie-Trotter expansions.
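The $N$-step recipe above is easy to test numerically (again an illustrative sketch with arbitrarily chosen $2 \times 2$ terms, using first-order Trotter steps):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def U(H):
    """exp(-iH) for Hermitian H, via the spectral decomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w)) @ v.conj().T

def trotterize(H_terms, t, N):
    """Approximate exp(-i sum(H_terms) t) by N first-order Trotter steps."""
    step = np.eye(2, dtype=complex)
    for H in H_terms:
        step = step @ U(H * t / N)
    return np.linalg.matrix_power(step, N)

t = 1.0
exact = U((X + Z) * t)
err = lambda N: np.linalg.norm(trotterize([X, Z], t, N) - exact, ord=2)

# Doubling N should roughly halve the error: the total error is O(t^2 / N).
ratio = err(100) / err(200)
```

The ratio comes out close to $2$, matching the $O(t^2/N)$ error of the first-order scheme.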
+
+\subsubsection*{Local Hamiltonian ground state problem}
+There are many other things we might want to do with a $k$-local Hamiltonian. One question we might be interested in is the eigenvalues of $H$. Suppose we are given a $5$-local Hamiltonian
+\[
+ H = \sum_{j = 1}^m H_j
+\]
+on $n$ qubits (this was the original result proved). We suppose $\|H_j\| < 1$, and we are given two numbers $a < b$, e.g.\ $a = \frac{1}{3}$ and $b = \frac{2}{3}$. We are promised that the smallest eigenvalue $E_0$ of $H$ is $ < a$ or $> b$. The problem is to decide whether $E_0 < a$.
+
+The reason we have these funny $a, b$ is so that we don't have to worry about precision problems. If we only had a single $a$ and we want to determine if $E_0 > a$ or $E_0 < a$, then it would be difficult to figure out if $E_0$ happens to be very close to $a$.
+
+Kitaev's theorem says that the above problem is complete for a complexity class known as \textbf{QMA}\index{\textbf{QMA}}, i.e.\ it is the ``hardest'' problem in \textbf{QMA}. In other words, any problem in \textbf{QMA} can be translated into a local Hamiltonian ground state problem with polynomial overhead. A brief survey can be found on \href{https://arxiv.org/abs/quant-ph/0210077}{arXiv:{}quant-ph/0210077}.
+
+What is this \textbf{QMA}? We will not go into details, but roughly, it is a quantum version of \textbf{NP}. In case you are wondering, MA stands for Merlin and Arthur\ldots
+\printindex
+\end{document}
diff --git a/books/cam/III_M/quantum_field_theory.tex b/books/cam/III_M/quantum_field_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..af93bc261fbcc2ce774652302ebf122681094e88
--- /dev/null
+++ b/books/cam/III_M/quantum_field_theory.tex
@@ -0,0 +1,5899 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2016}
+\def\nlecturer {B.\ Allanach}
+\def\ncourse {Quantum Field Theory}
+
+\input{header}
+\usepackage{simplewick}
+\usepackage[compat=1.1.0]{tikz-feynman}
+\tikzfeynmanset{/tikzfeynman/momentum/arrow shorten = 0.3}
+\tikzfeynmanset{/tikzfeynman/warn luatex = false}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+Quantum Field Theory is the language in which modern particle physics is formulated. It represents the marriage of quantum mechanics with special relativity and provides the mathematical framework in which to describe the interactions of elementary particles.
+
+This first Quantum Field Theory course introduces the basic types of fields which play an important role in high energy physics: scalar, spinor (Dirac), and vector (gauge) fields. The relativistic invariance and symmetry properties of these fields are discussed using Lagrangian language and Noether's theorem.
+
+The quantisation of the basic non-interacting free fields is firstly developed using the Hamiltonian and canonical methods in terms of operators which create and annihilate particles and anti-particles. The associated Fock space of quantum physical states is explained together with ideas about how particles propagate in spacetime and their statistics. How these fields interact with a classical electromagnetic field is described.
+
+Interactions are described using perturbation theory and Feynman diagrams. This is first illustrated for theories with a purely scalar field interaction, and then for couplings between scalar fields and fermions. Finally Quantum Electrodynamics, the theory of interacting photons, electrons and positrons, is introduced and elementary scattering processes are computed.
+
+\subsubsection*{Pre-requisites}
+You will need to be comfortable with the Lagrangian and Hamiltonian formulations of classical mechanics and with special relativity. You will also need to have taken an advanced course on quantum mechanics.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+%\emph{The greyed out parts are my own attempts to make more sense of the theory from a more mathematical/differential-geometric point of view}
+
+The idea of quantum mechanics is that photons and electrons behave similarly. We can make a photon interfere with itself in double-slit experiments, and similarly an electron can interfere with itself. However, as we know, light consists of ripples in the electromagnetic field. So photons should arise from the quantization of the electromagnetic field. If electrons are like photons, should we then have an electron field? The answer is yes!
+
+Quantum field theory is a quantization of a classical field. Recall that in quantum mechanics, we promote degrees of freedom to operators. The basic degrees of freedom of a quantum field theory are operator-valued functions of spacetime. Since there are infinitely many points in spacetime, there are infinitely many degrees of freedom. This infinity will come back and bite us as we try to develop quantum field theory.
+
+Quantum field theory describes creation and annihilation of particles. The interactions are governed by several basic principles --- locality, symmetry and \emph{renormalization group flow}. What the renormalization group flow describes is the decoupling of low and high energy processes.
+
+\subsubsection*{Why quantum field theory?}
+It appears that all particles of the same type are indistinguishable, e.g.\ all electrons are the same. It is difficult to justify why this is the case if each particle is considered individually, but if we view all electrons as excitations of the same field, this is (almost) automatic.
+
+Secondly, if we want to combine special relativity and quantum mechanics, then the number of particles is not conserved. Indeed, consider a particle trapped in a box of size $L$. By the Heisenberg uncertainty principle, we have $\Delta p \gtrsim \hbar/L$. We choose a particle with small rest mass so that $m \ll E$. Then we have
+\[
+ \Delta E = \Delta p \cdot c \gtrsim \frac{\hbar c}{L}.
+\]
+When $\Delta E \gtrsim 2 mc^2$, then we can pop a particle-antiparticle pair out of the vacuum. So when $L \lesssim \frac{\hbar}{2mc}$, we can't say for sure that there is only one particle.
+
+We say $\lambda = \hbar/(mc)$ is the ``Compton wavelength'' --- the minimum distance at which it makes sense to localize a particle. This is also the scale at which quantum effects kick in.
+
+This argument is somewhat circular, since we just assumed that if we have enough energy, then particle-antiparticle pairs would just pop out of existence. This is in fact something we can prove in quantum field theory.
+
+To reconcile quantum mechanics and special relativity, we can try to write a relativistic version of Schr\"odinger's equation for a single particle, but something goes wrong. Either the energy is unbounded from below, or we end up with some causality violation. This is bad. These are all fixed by quantum field theory by the introduction of particle creation and annihilation.
+
+\subsubsection*{What is quantum field theory good for?}
+Quantum field theory is used in (non-relativistic) condensed matter systems. It describes simple phenomena such as phonons, superconductivity, and the fractional quantum Hall effect.
+
+Quantum field theory is also used in high energy physics. The standard model of particle physics consists of electromagnetism (quantum electrodynamics), quantum chromodynamics and the weak forces. The standard model is tested to very high precision by experiments, sometimes up to $1$ part in $10^{10}$. So it is good. While there are many attempts to go beyond the standard model, e.g.\ Grand Unified Theories, they are mostly also quantum field theories.
+
+In cosmology, quantum field theory is used to explain the density perturbations. In quantum gravity, string theory is also primarily a quantum field theory in some aspects. It is even used in pure mathematics, with applications in topology and geometry.
+
+\subsubsection*{History of quantum field theory}
+In the 1930's, the basics of quantum field theory were laid down by Jordan, Pauli, Heisenberg, Dirac, Weisskopf etc. They encountered all sorts of infinities, which scared them. Back then, these sorts of infinities seemed impossible to work with.
+
+Fortunately, in the 1940's, renormalization and quantum electrodynamics were invented by Tomonaga, Schwinger, Feynman, Dyson, which managed to deal with the infinities. It was a sloppy process, and there was no understanding of why we can subtract infinities and get a sensible finite result. Yet, they managed to make experimental predictions, which were subsequently verified by actual experiments.
+
+In the 1960's, quantum field theory fell out of favour as new particles such as mesons and baryons were discovered. But in the 1970's, it had a golden age when the renormalization group was developed by Kadanoff and Wilson, which was really when the infinities became understood. At the same time, the standard model was invented, and a connection between quantum field theory and geometry was developed.
+
+\subsubsection*{Units and scales}
+We are going to do a lot of computations in the course, which are reasonably long. We do not want to have loads of $\hbar$ and $c$'s all over the place when we do the calculations. So we pick convenient units so that they all vanish.
+
+Nature presents us with three fundamental dimensionful constants that are relevant to us:
+\begin{enumerate}
+ \item The speed of light $c$ with dimensions $LT^{-1}$;
+ \item Planck's constant $\hbar$ with dimensions $L^2 MT^{-1}$;
+ \item The gravitational constant $G$ with dimensions $L^3 M^{-1} T^{-2}$.
+\end{enumerate}
+We see that these dimensions are independent. So we define units such that $c = \hbar = 1$. So we can express everything in terms of a mass, or an energy, as we now have $E = m$. For example, instead of $\lambda = \hbar/(mc)$, we just write $\lambda = m^{-1}$. We will work with electron volts $eV$. To convert back to the conventional SI units, we must insert the relevant powers of $c$ and $\hbar$. For example, for a mass of $m_e = \SI{e6}{\electronvolt}$, we have $\lambda_e = \SI{2e-13}{\meter}$.
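As a sanity check on the conversion (a sketch; the numerical value of $\hbar c$ is inserted by hand and is not from the notes):

```python
# Converting natural units back to SI: lambda = 1/m really means
# lambda = hbar / (m c) = hbar c / (m c^2).
# hbar * c ~ 197.33 MeV fm is a standard value (assumed here).
hbar_c_eV_m = 197.3269804e6 * 1e-15  # hbar * c in eV * m

m_eV = 1e6  # electron rest energy, rounded to 1 MeV as in the text
lam = hbar_c_eV_m / m_eV  # Compton wavelength in metres
```

This gives $\lambda \approx \SI{2e-13}{\meter}$ for a mass of $\SI{e6}{\electronvolt}$.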
+
+After getting rid of all factors of $\hbar$ and $c$, if a quantity $X$ has a mass dimension $d$, we write $[X] = d$. For example, we have $[G] = - 2$, since we have
+\[
+ G = \frac{\hbar c}{M_p^2} = \frac{1}{M_p^2},
+\]
+where $M_p \sim \SI{e19}{\giga\electronvolt}$ is the \emph{Planck scale}.
+
+\section{Classical field theory}
+Before we get into quantum nonsense, we will first start by understanding classical fields.
+
+\subsection{Classical fields}
+\begin{defi}[Field]\index{field}
+ A \emph{field} $\phi$ is a physical quantity defined at every point of spacetime $(\mathbf{x}, t)$. We write the value of the field at $(\mathbf{x}, t)$ as $\phi(\mathbf{x}, t)$.
+\end{defi}
+The ``physical quantities'' can be real or complex scalars, but later we will see that we might have to consider even more complicated stuff.
+
+In classical point mechanics, we have a finite number of generalized coordinates $q_a(t)$. In field theory, we are interested in the dynamics of $\phi_a(\mathbf{x}, t)$, where $a$ and $\mathbf{x}$ are \emph{both} labels. Here the $a$ labels the different fields we can have, and $(\mathbf{x}, t)$ labels a spacetime coordinate. Note that here position has been relegated from a dynamical variable (i.e.\ one of the $q_a$) to a mere label.
+
+\begin{eg}
+ The \term{electric field} $E_i(\mathbf{x}, t)$ and \term{magnetic field} $B_i(\mathbf{x}, t)$, for $i = 1, 2, 3$, are examples of fields. These six fields can in fact be derived from 4 fields $A_\mu(\mathbf{x}, t)$, for $\mu = 0, 1, 2, 3$, where
+ \[
+ E_i = \frac{\partial A_i}{\partial t} - \frac{\partial A_0}{\partial x_i},\quad B_i = \frac{1}{2} \varepsilon_{ijk}\frac{\partial A_k}{\partial x^j}.
+ \]
+ Often, we write
+ \[
+ A^\mu = (\phi, \mathbf{A}).
+ \]
+\end{eg}
+
+Just like in classical dynamics, we would specify the dynamics of a field through a \emph{Lagrangian}. For particles, the Lagrangian specifies a number at each time $t$. Since a field is a quantity at each point in space, we now have Lagrangian \emph{densities}, which gives a number at each point in spacetime $(t, \mathbf{x})$.
+
+\begin{defi}[Lagrangian density]\index{Lagrangian density}
+ Given a field $\phi (\mathbf{x}, t)$, a \emph{Lagrangian density} is a function $\mathcal{L}(\phi, \partial_\mu \phi)$ of $\phi$ and its derivative.
+\end{defi}
+
+Note that the Lagrangian density treats space and time symmetrically. However, if we already have a favorite time axis, we can look at the ``total'' Lagrangian at each time, and obtain what is known as the \emph{Lagrangian}.
+
+\begin{defi}[Lagrangian]\index{Lagrangian}
+ Given a Lagrangian density, the \emph{Lagrangian} is defined by
+ \[
+ L = \int \d^3 \mathbf{x}\; \mathcal{L}(\phi, \partial_\mu \phi).
+ \]
+\end{defi}
+For most of the course, we will only care about the Lagrangian density, and call it the Lagrangian.
+
+\begin{defi}[Action]\index{action}
+ Given a Lagrangian and a time interval $[t_1, t_2]$, the \emph{action} is defined by
+ \[
+ S = \int_{t_1}^{t_2} \;\d t\; L(t) = \int \d^4 x\; \mathcal{L}.
+ \]
+\end{defi}
+In general, we would want the units to satisfy $[S] = 0$. Since we have $[\d^4 x] = -4$, we must have $[\mathcal{L}] = 4$.
+
+The equations of motion are, as usual, given by the principle of least action.
+\begin{defi}[Principle of least action]\index{principle of least action}
+ The equation of motion of a Lagrangian system is given by the \emph{principle of least action} --- we vary the field slightly, keeping values at the boundary fixed, and require the first-order change $\delta S = 0$.
+\end{defi}
+
+For a small perturbation $\phi_a \mapsto \phi_a + \delta \phi_a$, we can compute
+\begin{align*}
+ \delta S &= \int \d^4 x\left(\frac{\partial \mathcal{L}}{\partial \phi_a} \delta \phi_a + \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi_a)} \delta(\partial_\mu\phi_a)\right)\\
+ &= \int \d^4 x \left\{\left(\frac{\partial \mathcal{L}}{\partial \phi_a} - \partial_\mu\left(\frac{\partial \mathcal{L}}{\partial (\partial_\mu\phi_a)}\right)\right) \delta \phi_a + \partial_\mu \left(\frac{\partial \mathcal{L}}{\partial (\partial_\mu\phi_a)} \delta\phi_a\right)\right\}.
+\end{align*}
+We see that the last term vanishes for any variation $\delta \phi_a$ that decays at spatial infinity and obeys $\delta \phi_a(\mathbf{x}, t_1) = \delta\phi_a(\mathbf{x}, t_2) = 0$.
+
+Requiring $\delta S = 0$ means that we need
+\begin{prop}[Euler-Lagrange equations]
+ The equations of motion for a field are given by the \term{Euler-Lagrange equations}:
+ \[
+ \partial_\mu\left(\frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi_a)}\right) - \frac{\partial \mathcal{L}}{\partial \phi_a} = 0.
+ \]
+\end{prop}
+
+We can begin by considering the simplest field one can imagine --- the Klein--Gordon field. We will later see that this is a ``free field'', and its ``particles'' don't interact with each other.
+
+\begin{eg}
+ The \term{Klein--Gordon equation} for a real scalar field $\phi(\mathbf{x}, t)$ is given by the Lagrangian
+ \[
+ \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2}m^2 \phi^2 = \frac{1}{2} \dot{\phi}^2 - \frac{1}{2} (\nabla \phi)^2 - \frac{1}{2} m^2 \phi^2.
+ \]
+ We can view this Lagrangian as saying $\mathcal{L} = T - V$, where
+ \[
+ T = \frac{1}{2} \dot{\phi}^2
+ \]
+ is the kinetic energy, and
+ \[
+ V = \frac{1}{2} (\nabla \phi)^2 + \frac{1}{2}m^2 \phi^2
+ \]
+ is the potential energy.
+
+ To find the Euler-Lagrange equation, we can compute
+ \[
+ \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)} = \partial^\mu \phi = (\dot{\phi}, -\nabla \phi)
+ \]
+ and
+ \[
+ \frac{\partial \mathcal{L}}{\partial \phi} = -m^2 \phi.
+ \]
+ So the Euler-Lagrange equation says
+ \[
+ \partial_\mu \partial^\mu \phi + m^2 \phi = 0.
+ \]
+ More explicitly, this says
+ \[
+ \ddot{\phi} - \nabla^2 \phi + m^2 \phi = 0.
+ \]
+\end{eg}
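+As a quick check (standard, though not spelled out here), plane waves solve this equation precisely when they obey the relativistic dispersion relation: substituting
+\[
+ \phi(\mathbf{x}, t) = e^{-i(Et - \mathbf{p}\cdot \mathbf{x})}
+\]
+into $\ddot{\phi} - \nabla^2 \phi + m^2 \phi = 0$ gives
+\[
+ (-E^2 + \mathbf{p}^2 + m^2)\phi = 0,
+\]
+i.e.\ $E^2 = \mathbf{p}^2 + m^2$, the energy-momentum relation of a relativistic particle of mass $m$.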
+We could generalize this and add more terms to the Lagrangian. An obvious generalization would be
+\[
+ \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - V(\phi),
+\]
+where $V(\phi)$ is an arbitrary potential function. Then we similarly obtain the equation
+\[
+ \partial_\mu \partial^\mu \phi + \frac{\partial V}{\partial \phi} = 0.
+\]
+
+\begin{eg}
+ \term{Maxwell's equations} in vacuum are given by the Lagrangian
+ \[
+ \mathcal{L} = -\frac{1}{2} (\partial_\mu A_\nu)(\partial^\mu A^\nu) + \frac{1}{2}(\partial_\mu A^\mu)^2.
+ \]
+ To find out the Euler-Lagrange equations, we need to figure out the derivatives with respect to each component of the $A$ field. We obtain
+ \[
+ \frac{\partial \mathcal{L}}{\partial(\partial_\mu A_\nu)} = -\partial^\mu A^\nu + (\partial_\rho A^\rho) \eta^{\mu\nu}.
+ \]
+ So we obtain
+ \[
+ \partial_\mu \left(\frac{\partial \mathcal{L}}{\partial(\partial_\mu A_\nu)}\right) = - \partial_\mu\partial^\mu A^\nu + \partial^\nu (\partial_\rho A^\rho) = - \partial_\mu (\partial^\mu A^\nu - \partial^\nu A^\mu).
+ \]
+ We write
+ \[
+ F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu.
+ \]
+ So we are left with
+ \[
+ \partial_\mu \left(\frac{\partial \mathcal{L}}{\partial(\partial_\mu A_\nu)}\right) = -\partial_\mu F^{\mu\nu}.
+ \]
+ It is an exercise to check that these Euler-Lagrange equations reproduce
+ \[
+ \partial_i E_i = 0,\quad \dot{E}_i = \varepsilon_{ijk} \partial_j B_k.
+ \]
+ Using our $F^{\mu\nu}$, we can rewrite the Lagrangian as
+ \[
+ \mathcal{L} = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu}.
+ \]
+\end{eg}
+How did we come up with these Lagrangians? In general, it is guided by two principles --- one is symmetry, and the other is renormalizability. We will discuss symmetries shortly; renormalizability will be discussed in the III Advanced Quantum Field Theory course.
+
+In all of these examples, the Lagrangian is local. In other words, the terms don't couple $\phi(\mathbf{x}, t)$ to $\phi(\mathbf{y}, t)$ if $\mathbf{x} \not= \mathbf{y}$.
+\begin{eg}
+ An example of a non-local Lagrangian would be
+ \[
+ \int \d^3 \mathbf{x} \int \d^3 \mathbf{y} \; \phi(\mathbf{x}) \phi(\mathbf{x} - \mathbf{y}).
+ \]
+\end{eg}
+A priori, there is no reason for this, but it happens that nature seems to be local. So we shall only consider local Lagrangians.
+
+Note that locality does \emph{not} mean that the Lagrangian at a point only depends on the value at the point. Indeed, it also depends on the derivatives at $\mathbf{x}$. So we can view this as saying the value of $\mathcal{L}$ at $\mathbf{x}$ only depends on the value of $\phi$ at an infinitesimal neighbourhood of $\mathbf{x}$ (formally, the jet at $\mathbf{x}$).
+
+\subsection{Lorentz invariance}
+If we wish to construct relativistic field theories such that $\mathbf{x}$ and $t$ are on an equal footing, the Lagrangian should be invariant under \term{Lorentz transformations} $x^\mu \mapsto x'^\mu = \Lambda^\mu\!_\nu x^\nu$, where
+\[
+ \Lambda^\mu\!_\sigma \eta^{\sigma\tau} \Lambda^\nu\!_\tau = \eta^{\mu\nu}.
+\]
+and $\eta^{\mu\nu}$ is the \term{Minkowski metric} given by
+\[
+ \eta^{\mu\nu} =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & -1 & 0 & 0\\
+ 0 & 0 & -1 & 0\\
+ 0 & 0 & 0 & -1
+ \end{pmatrix}.
+\]
+
+\begin{eg}
+ The transformation
+ \[
+ \Lambda^\mu\!_\sigma =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & 1 & 0 & 0\\
+ 0 & 0 & \cos \theta & -\sin \theta\\
+ 0 & 0 & \sin \theta & \cos \theta
+ \end{pmatrix}
+ \]
+ describes a rotation by an angle $\theta$ about the $x$-axis.
+\end{eg}
+
+\begin{eg}
+ The transformation
+ \[
+ \Lambda^\mu\!_\sigma =
+ \begin{pmatrix}
+ \gamma & -\gamma v & 0 & 0\\
+ -\gamma v & \gamma & 0 & 0\\
+ 0 & 0 & 1 & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix}
+ \]
+ describes a \term{boost} by $v$ along the $x$-axis.
+\end{eg}
+The Lorentz transformations form a Lie group under matrix multiplication --- see III Symmetries, Field and Particles.
+
+The Lorentz transformations have a representation on the fields. For a scalar field, this is given by
+\[
+ \phi(x) \mapsto \phi'(x) = \phi(\Lambda^{-1}x),
+\]
+where the indices are suppressed. This is an active transformation --- if $x_0$ is the point at which, say, the field is a maximum, then after applying the Lorentz transformation, the position of the new maximum is $\Lambda x_0$. The field itself actually moved.
+
+Alternatively, we can use passive transformations, where we just relabel the points. In this case, we have
+\[
+ \phi(x) \mapsto \phi(\Lambda x).
+\]
+However, this doesn't really matter, since if $\Lambda$ is a Lorentz transformation, then so is $\Lambda^{-1}$. So being invariant under active transformations is the same as being invariant under passive transformations.
+
+A Lorentz invariant theory should have equations of motion such that if $\phi(x)$ is a solution, then so is $\phi(\Lambda^{-1} x)$. This can be achieved by requiring that the action $S$ is invariant under Lorentz transformations.
+
+\begin{eg}
+ In the Klein--Gordon field, we have
+ \[
+ \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2}m^2 \phi^2.
+ \]
+ The Lorentz transformation is given by
+ \[
+ \phi(x) \mapsto \phi'(x) = \phi(\Lambda^{-1}x) = \phi(y),
+ \]
+ where
+ \[
+ y^\mu = (\Lambda^{-1})^\mu\!_\nu x^\nu.
+ \]
+ We then check that
+ \begin{align*}
+ \partial_\mu \phi(x) &\mapsto \frac{\partial}{\partial x^\mu}(\phi(\Lambda^{-1}x)) \\
+ &= \frac{\partial}{\partial x^\mu} (\phi(y))\\
+ &= \frac{\partial y^\nu}{\partial x^\mu} \frac{\partial}{\partial y^\nu} (\phi(y))\\
+ &= (\Lambda^{-1})^\nu\!_\mu(\partial_\nu \phi)(y).
+ \end{align*}
+ Since $\Lambda^{-1}$ is a Lorentz transformation, we have
+ \[
+ \partial_\mu \phi \partial^\mu \phi = \partial_\mu \phi' \partial^\mu \phi'.
+ \]
+\end{eg}
+In general, as long as we write everything in terms of tensors, we get Lorentz invariant theories.
+
+Symmetries play an important role in QFT. Different kinds of symmetries include Lorentz symmetries, gauge symmetries, global symmetries and supersymmetries (SUSY).
+
+\begin{eg}
+ Protons and neutrons are made up of quarks. Each type of quark comes in three flavors, which are called red, blue and green (these are arbitrary names). If we swap around red and blue everywhere in the universe, then the laws don't change. This is known as a global symmetry.
+
+ However, in light of relativity, swapping around red and blue \emph{everywhere} in the universe might be a bit absurd, since the universe is so big. What if we only do it locally? If we make the change differently at different points, the equations don't \emph{a priori} remain invariant, unless we introduce a \emph{gauge boson}. More of this will be explored in the AQFT course.
+\end{eg}
+
+\subsection{Symmetries and Noether's theorem for field theories}
+As in the case of classical dynamics, we get a Noether's theorem that tells us symmetries of the Lagrangian give us conserved quantities. However, we have to be careful here. If we want to treat space and time equally, saying that a quantity ``does not change in time'' is bad. Instead, what we have is a \emph{conserved current}, which is a $4$-vector. Then given any choice of spacetime frame, we can integrate this conserved current over all space at each time (with respect to the frame), and this quantity will be time-invariant.
+
+\begin{thm}[Noether's theorem]\index{Noether's theorem}
+ Every continuous symmetry of $\mathcal{L}$ gives rise to a \term{conserved current} $j^\mu(x)$ such that the equation of motion implies that
+ \[
+ \partial_\mu j^\mu = 0.
+ \]
+ More explicitly, this gives
+ \[
+ \partial_0 j^0 + \nabla \cdot \mathbf{j} = 0.
+ \]
+ A conserved current gives rise to a \emph{conserved charge}
+ \[
+ Q = \int_{\R^3} j^0 \d^3 \mathbf{x},
+ \]
+ since
+ \begin{align*}
+ \frac{\d Q}{\d t} &= \int_{\R^3} \frac{\d j^0}{\d t} \;\d^3 \mathbf{x}\\
+ &= -\int_{\R^3} \nabla \cdot \mathbf{j} \;\d ^3 \mathbf{x}\\
+ &= 0,
+ \end{align*}
+ assuming that $j^i \to 0$ as $|\mathbf{x}| \to \infty$.
+\end{thm}
+
+\begin{proof}
+ Consider making an arbitrary transformation of the field $\phi_a \mapsto \phi_a + \delta \phi_a$. We then have
+ \begin{align*}
+ \delta \mathcal{L} &= \frac{\partial \mathcal{L}}{\partial \phi_a} \delta \phi_a + \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_a)} \delta(\partial_\mu \phi_a)\\
+ &= \left[\frac{\partial \mathcal{L}}{\partial \phi_a} - \partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_a)}\right] \delta \phi_a + \partial_\mu \left(\frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_a)} \delta \phi_a\right).
+ \end{align*}
+ When the equations of motion are satisfied, we know the first term always vanishes. So we are left with
+ \[
+ \delta \mathcal{L} = \partial_\mu \left(\frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_a)} \delta \phi_a\right).
+ \]
+ If the specific transformation $\delta \phi_a = X_a$ we are considering is a \term{symmetry}, then $\delta\mathcal{L} = 0$ (this is the definition of a symmetry). In this case, we can define a conserved current by
+ \[
+ j^\mu = \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi_a)}X_a,
+ \]
+ and by the equations above, this is actually conserved.
+\end{proof}
+We can have a slight generalization where we relax the condition for a symmetry and still get a conserved current. We say that $X_a$ is a symmetry if $\delta \mathcal{L} = \partial_\mu F^\mu(\phi)$ for some $F^\mu(\phi)$, i.e.\ a total derivative. Replaying the calculations, we get
+\[
+ j^\mu = \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi_a)} X_a - F^\mu.
+\]
+\begin{eg}[Space-time invariance]
+ Recall that in classical dynamics, spatial invariance implies the conservation of momentum, and invariance under time translation implies the conservation of energy. We'll see something similar in field theory. Consider $x^\mu \mapsto x^\mu - \varepsilon^\mu$. Then we obtain
+ \[
+ \phi_a(x) \mapsto \phi_a(x) + \varepsilon^\nu \partial_\nu \phi_a(x).
+ \]
+ A Lagrangian that has no explicit $x^\mu$ dependence transforms as
+ \[
+ \mathcal{L}(x) \mapsto \mathcal{L}(x) + \varepsilon^\nu \partial_\nu \mathcal{L}(x),
+ \]
+ giving rise to 4 currents --- one for each $\nu = 0, 1, 2, 3$. We have
+ \[
+ (j^\mu)_\nu = \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi_a)} \partial_\nu \phi_a - \delta^\mu\!_\nu \mathcal{L}.
+ \]
+ This particular current is called $T^\mu\!_\nu$, the \term{energy-momentum tensor}\index{$T^\mu_\nu$}. This satisfies
+ \[
+ \partial_\mu T^\mu\!_\nu = 0.
+ \]
+ We obtain conserved quantities, namely the \term{energy}
+ \[
+ E = \int \d^3 \mathbf{x}\;T^{00},
+ \]
+ and the total \term{momentum}
+ \[
+ \mathbf{P}^i = \int \d^3 \mathbf{x}\; T^{0i}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Consider the Klein--Gordon field, with
+ \[
+ \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2}m^2 \phi^2.
+ \]
+ We then obtain
+ \[
+ T^{\mu\nu} = \partial^\mu \phi \partial^\nu \phi - \eta^{\mu\nu} \mathcal{L}.
+ \]
+ So we have
+ \[
+ E = \int \d^3 \mathbf{x} \left(\frac{1}{2}\dot{\phi}^2 + \frac{1}{2}(\nabla \phi)^2 + \frac{1}{2}m^2 \phi^2\right).
+ \]
+ The momentum is given by
+ \[
+ \mathbf{P}^i = \int\d^3 \mathbf{x}\; \dot{\phi} \partial^i \phi.
+ \]
+\end{eg}
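+For instance, the formula for $E$ follows by putting $\mu = \nu = 0$ in the general expression:
+\[
+ T^{00} = \dot{\phi}^2 - \eta^{00}\mathcal{L} = \dot{\phi}^2 - \frac{1}{2}\dot{\phi}^2 + \frac{1}{2}(\nabla \phi)^2 + \frac{1}{2}m^2\phi^2 = \frac{1}{2}\dot{\phi}^2 + \frac{1}{2}(\nabla \phi)^2 + \frac{1}{2}m^2\phi^2,
+\]
+which is indeed the expected energy density --- kinetic plus potential.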
+In this example, $T^{\mu\nu}$ comes out symmetric in $\mu$ and $\nu$. In general, it would not be, but we can always massage it into a symmetric form by defining
+\[
+ \sigma^{\mu\nu} = T^{\mu\nu} + \partial_\rho \Gamma^{\rho\mu\nu}
+\]
+with $\Gamma^{\rho\mu\nu}$ a tensor antisymmetric in $\rho$ and $\mu$. Then we have
+\[
+ \partial_\mu \partial_\rho \Gamma^{\rho\mu\nu} = 0.
+\]
+So this $\sigma^{\mu\nu}$ is also conserved.
+
+A symmetric energy-momentum tensor of this form is actually useful, and is found on the RHS of Einstein's field equation.
+
+\begin{eg}[Internal symmetries]
+ Consider a complex scalar field
+ \[
+ \psi(x) = \frac{1}{\sqrt{2}} (\phi_1(x) + i \phi_2(x)),
+ \]
+ where $\phi_1$ and $\phi_2$ are real scalar fields. We put
+ \[
+ \mathcal{L} = \partial_\mu \psi^* \partial^\mu \psi - V(\psi^*\psi),
+ \]
+ where
+ \[
+ V (\psi^*\psi) = m^2 \psi^* \psi + \frac{\lambda}{2}(\psi^*\psi)^2 + \cdots
+ \]
+ is some potential term.
+
+ To find the equations of motion, if we do all the complex analysis required, we will find that we obtain the same equations as in the real case if we treat $\psi$ and $\psi^*$ as independent variables. In this case, we obtain
+ \[
+ \partial_\mu \partial^\mu \psi + m^2 \psi + \lambda (\psi^*\psi) \psi + \cdots = 0
+ \]
+ and its complex conjugate.
+ The Lagrangian $\mathcal{L}$ has a symmetry given by $\psi \mapsto e^{i\alpha} \psi$. Infinitesimally, we have $\delta \psi = i\alpha \psi$, and $\delta \psi^* = -i\alpha \psi^*$.
+
+ This gives a current
+ \[
+ j^\mu = i(\partial^\mu \psi^*) \psi - i (\partial^\mu \psi)\psi^*.
+ \]
+ We will later see that charges of this type can be interpreted as electric charge (or particle number, e.g.\ baryon number or lepton number).
+\end{eg}
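We can verify that this current is indeed conserved on-shell. The following is a symbolic sanity check (my addition, not from the notes), done in $1+1$ dimensions with metric $(+,-)$ for brevity, treating $\psi$ and $\psi^*$ as independent functions and keeping only the $\lambda$ term of the potential:

```python
import sympy as sp

t, x, m, lam = sp.symbols('t x m lam', real=True)
p = sp.Function('psi')(t, x)       # plays the role of psi
q = sp.Function('psistar')(t, x)   # plays the role of psi^*, treated independently

# Noether current j^mu = i (d^mu psi^*) psi - i (d^mu psi) psi^*,
# with metric (+,-): d^0 = d_0, d^1 = -d_1.
j0 = sp.I*(sp.diff(q, t)*p - sp.diff(p, t)*q)
j1 = -sp.I*(sp.diff(q, x)*p - sp.diff(p, x)*q)

div = sp.diff(j0, t) + sp.diff(j1, x)   # d_mu j^mu

# Impose the equations of motion: box psi = -m^2 psi - lambda (psi^* psi) psi
eom = {
    sp.Derivative(p, (t, 2)): sp.diff(p, (x, 2)) - m**2*p - lam*q*p**2,
    sp.Derivative(q, (t, 2)): sp.diff(q, (x, 2)) - m**2*q - lam*p*q**2,
}
print(sp.simplify(div.subs(eom)))  # 0
```

Off-shell, $\partial_\mu j^\mu$ does not vanish; it is the equations of motion that make the $m^2$ and $\lambda$ terms cancel pairwise.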
+
+Note that this symmetry is an abelian symmetry, since it is a symmetry under the action of the abelian group $\U(1)$. There is a generalization to a non-abelian case.
+
+\begin{eg}[Non-abelian internal symmetries]
+ Suppose we have a theory with many fields, with the Lagrangian given by
+ \[
+ \mathcal{L} = \frac{1}{2} \sum_{a = 1}^N \partial_\mu \phi_a \partial^\mu \phi_a - \frac{1}{2} \sum_{a = 1}^N m^2 \phi_a^2 - g \left(\sum_{a = 1}^N \phi_a^2\right)^2.
+ \]
+ This theory is invariant under the bigger symmetry group $G = \SO(N)$. If we view the fields as components of complex fields, then we have a symmetry under $\U(N/2)$ or even $\SU(N/2)$. For example, the symmetry group $\SU(3)$ gives the \term{8-fold way}.
+\end{eg}
+
+\begin{eg}
+ There is a nice trick to determine the conserved current when our infinitesimal transformation is given by $\delta \phi = \alpha \phi$ for some real constant $\alpha$. Consider the case where we have an \emph{arbitrary} perturbation $\alpha = \alpha(x)$. In this case, $\delta\mathcal{L}$ is no longer invariant, but we know that whatever formula we manage to come up with, it has to vanish when $\alpha$ is constant. Assuming that we only have first-order derivatives, the change in Lagrangian must be of the form
+ \[
+ \delta \mathcal{L} = (\partial_\mu \alpha(x)) h^\mu(\phi)
+ \]
+ for some $h^\mu$. We claim that $h^\mu$ is the conserved current. Indeed, we have
+ \[
+ \delta S = \int \d^4 x\; \delta \mathcal{L} = - \int \d^4 x\; \alpha(x) \partial_\mu h^\mu,
+ \]
+ using integration by parts. We know that if the equations of motion are satisfied, then this vanishes for any $\alpha(x)$ as long as it vanishes at infinity (or the boundary). So we must have
+ \[
+ \partial_\mu h^\mu = 0.
+ \]
+\end{eg}
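We can test this trick on the complex scalar field: promoting $\alpha$ to $\alpha(x)$ in $\psi \mapsto e^{i\alpha(x)}\psi$ should give $\delta \mathcal{L} = (\partial_\mu \alpha) j^\mu$ with $j^\mu$ the current found earlier. Below is a symbolic sketch (my addition) in $1+1$ dimensions with metric $(+,-)$; the potential term is dropped since it is exactly invariant:

```python
import sympy as sp

t, x, eps = sp.symbols('t x epsilon', real=True)
A = sp.Function('alpha')(t, x)     # arbitrary spacetime-dependent alpha
u = sp.Function('u')(t, x)         # real and imaginary parts of psi
v = sp.Function('v')(t, x)
psi  = (u + sp.I*v)/sp.sqrt(2)
psic = (u - sp.I*v)/sp.sqrt(2)

def kinetic(p, pc):
    # d_mu psi^* d^mu psi with metric (+,-)
    return sp.diff(pc, t)*sp.diff(p, t) - sp.diff(pc, x)*sp.diff(p, x)

# first-order variation of L under psi -> exp(i eps alpha) psi
dL = kinetic(sp.exp(sp.I*eps*A)*psi,
             sp.exp(-sp.I*eps*A)*psic).diff(eps).subs(eps, 0)

# expected: dL = (d_mu alpha) h^mu with h^mu = i(d^mu psi^*)psi - i(d^mu psi)psi^*
h0 = sp.I*(sp.diff(psic, t)*psi - sp.diff(psi, t)*psic)
h1 = -sp.I*(sp.diff(psic, x)*psi - sp.diff(psi, x)*psic)
expected = sp.diff(A, t)*h0 + sp.diff(A, x)*h1

print(sp.simplify(dL - expected))  # 0
```

So $h^\mu$ read off this way agrees with the Noether current $j^\mu$ of the internal $\U(1)$ symmetry.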
+
+\subsection{Hamiltonian mechanics}
+We can now talk about the Hamiltonian formulation. This can be done for field theories as well. We define
+\begin{defi}[Conjugate momentum]\index{conjugate momentum}
+ Given a Lagrangian system for a field $\phi$, we define the \emph{conjugate momentum} by
+ \[
+ \pi(x) = \frac{\partial \mathcal{L}}{\partial \dot{\phi}}.
+ \]
+\end{defi}
+This is not to be confused with the total momentum $\mathbf{P}^i$.
+
+\begin{defi}[Hamiltonian density]\index{Hamiltonian density}
+ The \emph{Hamiltonian density} is given by
+ \[
+ \mathcal{H} = \pi(x) \dot{\phi}(x) - \mathcal{L}(x),
+ \]
+ where we replace all occurrences of $\dot{\phi}(x)$ by expressing it in terms of $\pi(x)$.
+\end{defi}
+
+\begin{eg}
+ Suppose we have a field Lagrangian of the form
+ \[
+ \mathcal{L} = \frac{1}{2} \dot{\phi}^2 - \frac{1}{2} (\nabla \phi)^2 - V(\phi).
+ \]
+ Then we can compute that
+ \[
+ \pi = \dot\phi.
+ \]
+ So we can easily find
+ \[
+ \mathcal{H} = \frac{1}{2}\pi^2 + \frac{1}{2}(\nabla \phi)^2 + V(\phi).
+ \]
+\end{eg}
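The Legendre transform in this example can be checked mechanically. Here is a small symbolic sketch (my addition), treating $\dot\phi$ and $\nabla\phi$ as plain symbols and $V$ as an unspecified function:

```python
import sympy as sp

phidot, gradphi, phi_s, pi_s = sp.symbols('phidot gradphi phi pi', real=True)
V = sp.Function('V')

# L = (1/2) phidot^2 - (1/2)(grad phi)^2 - V(phi)
L = sp.Rational(1, 2)*phidot**2 - sp.Rational(1, 2)*gradphi**2 - V(phi_s)

pi_expr = sp.diff(L, phidot)                # conjugate momentum: pi = phidot
# H = pi phidot - L, with phidot eliminated in favour of pi
phidot_of_pi = sp.solve(sp.Eq(pi_s, pi_expr), phidot)[0]
H = (pi_s*phidot - L).subs(phidot, phidot_of_pi)
print(sp.expand(H))   # pi**2/2 + gradphi**2/2 + V(phi)
```

This reproduces the Hamiltonian density $\mathcal{H} = \frac{1}{2}\pi^2 + \frac{1}{2}(\nabla\phi)^2 + V(\phi)$ quoted above.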
+
+\begin{defi}[Hamiltonian]\index{Hamiltonian}
+ The \emph{Hamiltonian} of a Hamiltonian system is
+ \[
+ H = \int \d^3 \mathbf{x}\; \mathcal{H}.
+ \]
+\end{defi}
+This agrees with the field energy we computed using Noether's theorem.
+
+\begin{defi}[Hamilton's equations]\index{Hamilton's equations}
+ Hamilton's equations are
+ \[
+ \dot{\phi} = \frac{\partial \mathcal{H}}{\partial \pi},\quad \dot{\pi} = -\frac{\partial \mathcal{H}}{\partial \phi}.
+ \]
+ These give us the equations of motion of $\phi$.
+\end{defi}
+There is an obvious problem with this, that the Hamiltonian formulation is not manifestly Lorentz invariant. However, we know it actually is because we derived it as an equivalent formulation of a Lorentz invariant theory. So we are safe, so far. We will later have to be more careful if we want to quantize the theories from the Hamiltonian formalism.
+
+
+%\subsection{Formal point of view*}
+%\begin{own}
+% Formally, we suppose the universe is given by a spacetime manifold $\mathcal{M}$ with a pseudo-Riemannian metric of signature $(+1, -1, -1 ,-1)$. All maps are assumed to be smooth.
+%
+% A field can then be defined as a function $\mathcal{M} \to V$, where $V$ is some vector space. This is, of course, equivalent to a section of the trivial bundle $V \times \mathcal{M} \to \mathcal{M}$. A natural generalization would be to define a field as a section of some bundle, and we will consider more general bundles later when we do gauge theory.
+%
+% Given a vector space $V$, we can define a corresponding (local) Lagrangian density by a function from the $n$th jet bundle $\mathcal{L}: j^n(\mathcal{M}, V) \to \R$ ($n$ can be taken to be infinite if one is not taunted by the horrors of infinite-dimensional spaces). Given a field $\phi: \mathcal{M} \to V$, we write $\mathcal{L}(\phi)$ for the composition $\mathcal{L} \circ j^n \phi$, where $j^n$ is the $n$th jet prolongation of $\phi$. It happens that nature seems to prefer Lagrangians with $n = 1$, and this is the only case we will consider.
+%
+% If we have found ourselves a preferred time axis $T$, then we can project each spacetime coordinate to the time axis via $\pi: \mathcal{M} \to T$. Given a field $\phi: \mathcal{M} \to V$, we can integrating $\mathcal{L}(\phi)$ along fibers to get the ``total'' Lagrangian at each point in time, and this gives a function $L: T \to V$. This is called the \emph{Lagrangian}.
+%
+% If we have a $V$ and a Lagrangian density $\mathcal{L}$, the principle of least action the ``allowed'' fields (i.e.\ the fields that will physically exist) are those $\phi:\mathcal{M} \to V$ satisfying the following property:
+%
+% For any compact region $C \subseteq \mathcal{M}$ and any $F: C \times (-\varepsilon, \varepsilon) \to V$ such that $F(\mathbf{x}, 0) = \phi(\mathbf{x})$ for all $\mathbf{x} \in C$, and $F(\mathbf{x}, s) = \phi(\mathbf{x})$ for all $\mathbf{x} \in \partial C$, we have
+% \[
+% 0 = \left.\frac{\d}{\d s}\right|_{s = 0} \int_C \d^4 x\; \mathcal{L}(F(\mathbf{x}, s)).
+% \]
+% We can now define symmetries of the Lagrangian as follows:
+% \begin{defi}[Continuous symmetry]\index{continuous symmetry}
+% A \emph{continuous symmetry} of the Lagrangian is an action of a Lie group $G$ on $j^n(\mathcal{M}, V)$ such that for every $g \in G$ and every compact set $C \subseteq \mathcal{M}$, we have
+% \[
+% \int_C \mathcal{L}(j^1\phi(\mathbf{x})) \;\d g = \int_C \mathcal{L}(g \cdot (j^1\phi(\mathbf{x})))\;\d g.
+% \]
+% In particular, if $L(g \cdot a) = L(a)$ for all $a \in j^n(\mathcal{M}, V)$, then this is a symmetry.
+%
+% A \emph{one-parameter symmetry} is one where the Lie algebra of $G$ is $\R$.
+% \end{defi}
+%\end{own}
+%
+\section{Free field theory}
+So far, we've just been talking about classical field theory. We now want to quantize this, and actually do \emph{quantum} field theory.
+\subsection{Review of simple harmonic oscillator}
+Michael Peskin once famously said ``Physics is the subset of human experience that can be reduced to coupled harmonic oscillators''. Thus, to understand quantum mechanics, it is important to understand how the quantum harmonic oscillator works.
+
+Classically, the simple harmonic oscillator is given by the Hamiltonian
+\[
+ H = \frac{1}{2}p^2 + \frac{1}{2} \omega^2 q^2,
+\]
+where $p$ is the momentum and $q$ is the position. To obtain the corresponding quantum system, \term{canonical quantization} tells us that we should promote $p$ and $q$ to ``operators'' $\hat{p}, \hat{q}$, and use the same formula for the Hamiltonian, so that we now have
+\[
+ \hat{H} = \frac{1}{2}\hat{p}^2 + \frac{1}{2} \omega^2 \hat{q}^2.
+\]
+In the classical system, the quantities $p$ and $q$ used to satisfy the Poisson brackets
+\[
+ \{q, p\} = 1.
+\]
+After promoting to operators, they satisfy the commutation relation
+\[
+ [\hat{q}, \hat{p}] = i.
+\]
+We will soon stop writing the hats because we are lazy.
+
+There are a few things to take note of.
+\begin{enumerate}
+ \item We said $p$ and $q$ are ``operators'', but did not say what they actually operate on! Instead, what we are going to do is to analyze these operators formally, and after some careful analysis, we show that there is a space the operators naturally act on, and then take that as our state space. This is, in general, how we are going to do quantum field theory (except we tend to replace the word ``careful'' with ``sloppy'').
+
+ During the analysis, we will suppose there are indeed some states our operators act on, and then try to figure out what properties the states must have.
+
+ \item The process of canonical quantization depends not only on the classical system itself, but also on how we decide to present our system. There is no immediate reason why, if we pick different coordinates for our classical system, the resulting quantum system would be equivalent. In particular, before quantization, all the terms commute, but after quantization that is no longer true. So how we decide to order the terms in the Hamiltonian matters.
+
+ Later, we will come up with the notion of \emph{normal ordering}. From then onwards, we can have a (slightly) more consistent way of quantizing operators.
+\end{enumerate}
+
+After we've done this, the time evolution of states is governed by the Schr\"odinger equation:
+\[
+ i\frac{\d}{\d t}\bket{\psi} = H\bket{\psi}.
+\]
+In practice, instead of trying to solve this thing, we want to find eigenstates $\bket{E}$ such that
+\[
+ H \bket{E} = E\bket{E}.
+\]
+If such states are found, then defining
+\[
+ \bket{\psi} = e^{-iEt}\bket{E}
+\]
+would give a nice, stable solution to the Schr\"odinger equation.
+
+The trick is to notice that in the classical case, we can factorize the Hamiltonian as
+\[
+ H = \omega \left(\sqrt{\frac{\omega}{2}}q + \frac{i}{\sqrt{2\omega}}p\right)\left(\sqrt{\frac{\omega}{2}}q + \frac{-i}{\sqrt{2\omega}}p\right).
+\]
+Now $H$ is a product of two terms that are complex conjugates of each other, which, in operator terms, means they are adjoints. So we have the benefit that we only have to deal with a single complex object $\sqrt{\frac{\omega}{2}}q + \frac{i}{\sqrt{2\omega}}p$ (and its conjugate), rather than two unrelated real objects. Also, products are nicer than sums. (If this justification for why this is a good idea doesn't sound convincing, just suppose we had the idea of doing this via divine inspiration, and note that it turns out to work well.)
+
+We now do the same factorization in the quantum case. We would not expect the result to be \emph{exactly} the above, since that working relies on $p$ and $q$ commuting. However, we can still try and define the operators
+\[
+ a = \frac{i}{\sqrt{2 \omega}}p + \sqrt{\frac{\omega}{2}}q,\quad a^\dagger = \frac{-i}{\sqrt{2\omega}}p + \sqrt{\frac{\omega}{2}}q.
+\]
+These are known as \term{creation} and \term{annihilation} operators for reasons that will become clear soon.
+
+We can invert these to find
+\[
+ q = \frac{1}{\sqrt{2\omega}}(a + a^\dagger),\quad p = -i\sqrt{\frac{\omega}{2}} (a - a^\dagger).
+\]
+We can substitute these equations into the commutator relation to obtain
+\[
+ [a, a^\dagger] = 1.
+\]
+Putting them into the Hamiltonian, we obtain
+\[
+ H = \frac{1}{2}\omega(a a^\dagger + a^\dagger a) = \omega\left(a^\dagger a + \frac{1}{2}[a, a^\dagger]\right) = \omega \left(a^\dagger a + \frac{1}{2}\right).
+\]
+We can now compute
+\[
+ [H, a^\dagger] = \omega a^\dagger,\quad [H, a] = -\omega a.
+\]
+These ensure that $a, a^\dagger$ take us between energy eigenstates --- if
+\[
+ H\bket{E} = E \bket{E},
+\]
+then
+\[
+ Ha^\dagger\bket{E} = (a^\dagger H + [H, a^\dagger]) \bket{E} = (E + \omega)a^\dagger \bket{E}.
+\]
+Similarly, we have
+\[
+ Ha\bket{E} = (E - \omega)a\bket{E}.
+\]
+So assuming we have some energy eigenstate $\bket{E}$, these operators give us loads more with eigenvalues
+\[
+ \cdots, E - 2\omega, E - \omega, E, E + \omega, E + 2\omega, \cdots.
+\]
+If the energy is bounded below, then there must be a ground state $\bket{0}$ satisfying $a \bket{0} = 0$. Then the other ``excited states'' would come from repeated applications of $a^\dagger$, labelled by
+\[
+ \bket{n} = (a^\dagger)^n \bket{0},
+\]
+with
+\[
+ H\bket{n} = \left(n + \frac{1}{2}\right)\omega \bket{n}.
+\]
+Note that we were lazy and ignored normalization, so we have $\braket{n}{n} \not= 1$.
+
+One important feature is that the ground state energy is \emph{non-zero}. Indeed, we have
+\[
+ H \bket{0} = \omega\left(a^\dagger a + \frac{1}{2}\right)\bket{0} = \frac{\omega}{2} \bket{0}.
+\]
+Notice that we managed to figure out what the eigenvalues of $H$ \emph{must be}, without having a particular state space (assuming the state space is non-trivial). Now we know the appropriate space to work in: the Hilbert space generated by the orthonormal basis
+\[
+ \{\bket{0}, \bket{1}, \bket{2}, \cdots\}.
+\]
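The whole ladder-operator story can be seen concretely in a truncated matrix representation. Below is a minimal numerical sketch (my addition), using the \emph{normalized} convention $a\bket{n} = \sqrt{n}\bket{n-1}$ rather than the unnormalized states above; the truncation dimension is illustrative:

```python
import numpy as np

omega, N = 1.0, 60                       # frequency; truncation dimension
n = np.arange(N)
a  = np.diag(np.sqrt(n[1:]), k=1)        # annihilation: a|n> = sqrt(n)|n-1>
ad = a.T                                 # creation operator a^dagger

# [a, a^dagger] = 1, up to the last diagonal entry (a truncation artifact)
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True

# H = omega (a^dagger a + 1/2) has spectrum (n + 1/2) omega
H = omega*(ad @ a + 0.5*np.eye(N))
E = np.linalg.eigvalsh(H)
print(E[:4])                             # [0.5 1.5 2.5 3.5]
```

The lowest eigenvalues are exactly $\left(n + \frac{1}{2}\right)\omega$, including the non-zero ground state energy $\frac{\omega}{2}$.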
+
+\subsection{The quantum field}
+We are now going to use canonical quantization to promote our classical fields to quantum fields. We will first deal with the case of a real scalar field.
+
+\begin{defi}[Real scalar quantum field]\index{quantum field!real scalar}\index{real scalar quantum field}
+ A \emph{(real, scalar) quantum field} is an operator-valued function of space $\phi$, with conjugate momentum $\pi$, satisfying the commutation relations
+ \[
+ [\phi(\mathbf{x}), \phi(\mathbf{y})] = 0 = [\pi(\mathbf{x}), \pi(\mathbf{y})]
+ \]
+ and
+ \[
+ [\phi(\mathbf{x}), \pi(\mathbf{y})] = i \delta^3(\mathbf{x} - \mathbf{y}).
+ \]
+ In case where we have many fields labelled by $a \in I$, the commutation relations are
+ \[
+ [\phi_a(\mathbf{x}), \phi_b(\mathbf{y})] = 0 = [\pi^a(\mathbf{x}), \pi^b(\mathbf{y})]
+ \]
+ and
+ \[
+ [\phi_a(\mathbf{x}), \pi^b(\mathbf{y})] = i \delta^3(\mathbf{x} - \mathbf{y}) \delta_a^b.
+ \]
+\end{defi}
+
+The evolution of states is again given by Schr\"odinger equation.
+\begin{defi}[Schr\"odinger equation]\index{Schr\"odinger equation}
+ The \emph{Schr\"odinger equation} says
+ \[
+ i \frac{\d}{\d t}\bket{\psi} = H \bket{\psi}.
+ \]
+\end{defi}
+However, we will, as before, usually not care and just look for eigenvalues of $H$.
+
+As in the case of the harmonic oscillator, our plan is to rewrite the field in terms of creation and annihilation operators. Note that in quantum mechanics, it is always possible to write the position and momentum in terms of some creation and annihilation operators for \emph{any} system. It's just that if the system is not a simple harmonic oscillator, these operators do not necessarily have the nice properties we want them to have. So we are just going to express everything in creation and annihilation operators nevertheless, and see what happens.
+
+\subsection{Real scalar fields}
+We start with simple problems. We look at \term{free theories}, where the Lagrangian is quadratic in the field, so that the equation of motion is linear. We will see that the whole field then decomposes into many independent harmonic oscillators.
+
+Before we jump into the quantum case, we first look at what happens in a classical free field.
+\begin{eg}
+ The simplest free theory is the classical Klein--Gordon theory for a real scalar field $\phi(\mathbf{x}, t)$. The equations of motion are
+ \[
+ \partial_\mu \partial^\mu \phi + m^2 \phi = 0.
+ \]
+ To see why this is free, we take the Fourier transform so that
+ \[
+ \phi(\mathbf{x}, t) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} e^{i \mathbf{p}\cdot \mathbf{x}} \tilde{\phi}(\mathbf{p}, t).
+ \]
+ We substitute this into the Klein--Gordon equation to obtain
+ \[
+ \left(\frac{\partial^2}{\partial t^2} + (\mathbf{p}^2 + m^2)\right)\tilde{\phi}(\mathbf{p}, t) = 0.
+ \]
+ This is just the usual equation for a simple harmonic oscillator for each $\mathbf{p}$, independently, with frequency $\omega_\mathbf{p} = \sqrt{\mathbf{p}^2 + m^2}$. So the solution to the classical Klein--Gordon equation is a superposition of simple harmonic oscillators, each vibrating at a different frequency (and a different amplitude).
+
+ For completeness, we will note that the Hamiltonian density for this field is given by
+ \[
+ \mathcal{H} = \frac{1}{2}(\pi^2 + (\nabla \phi)^2 + m^2 \phi^2).
+ \]
+\end{eg}
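The claim that each Fourier mode is an independent oscillator of frequency $\omega_\mathbf{p} = \sqrt{\mathbf{p}^2 + m^2}$ can be checked numerically. Here is a small sketch (my addition, with illustrative parameters in units where $m = 1$) that integrates one mode of $\ddot{\tilde\phi} = -\omega_\mathbf{p}^2 \tilde\phi$ with a leapfrog scheme and compares against the analytic oscillation:

```python
import numpy as np

# Each Fourier mode of the Klein-Gordon field obeys
#   phi'' = -(p^2 + m^2) phi = -omega_p^2 phi.
m, p = 1.0, 2.0                        # illustrative parameters
omega = np.sqrt(p**2 + m**2)

dt, steps = 1e-3, 10000
phi, pi = 1.0, 0.0                     # phi(0) = 1, phi'(0) = 0
for _ in range(steps):
    pi  -= 0.5*dt*omega**2*phi         # half kick
    phi += dt*pi                       # drift
    pi  -= 0.5*dt*omega**2*phi         # half kick

T = dt*steps
print(abs(phi - np.cos(omega*T)))      # small: the mode oscillates at omega_p
```

The numerical mode tracks $\cos(\omega_\mathbf{p} t)$ to within the $O(\d t^2)$ accuracy of the leapfrog integrator.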
+So to quantize the Klein--Gordon field, we just have to quantize this infinite number of harmonic oscillators!
+
+We are going to do this in two steps. First, we write our quantum fields $\phi(\mathbf{x})$ and $\pi(\mathbf{x})$ in terms of their Fourier transforms
+\begin{align*}
+ \phi(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} e^{i\mathbf{p}\cdot \mathbf{x}} \tilde{\phi}(\mathbf{p})\\
+ \pi(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} e^{i\mathbf{p}\cdot \mathbf{x}} \tilde{\pi}(\mathbf{p})
+\end{align*}
+Confusingly, $\pi$ represents both the conjugate momentum and the mathematical constant, but it should be clear from the context.
+
+If we believe in our classical analogy, then the operators $\tilde{\phi}(\mathbf{p})$ and $\tilde{\pi}(\mathbf{p})$ should represent the position and momentum of quantum harmonic oscillators. So we further write them as
+\begin{align*}
+ \phi(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 \omega_\mathbf{p}}} \left(a_\mathbf{p} e^{i \mathbf{p} \cdot \mathbf{x}} + a_\mathbf{p}^\dagger e^{-i \mathbf{p}\cdot \mathbf{x}}\right)\\
+ \pi(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} (-i) \sqrt{\frac{\omega_\mathbf{p}}{2}}\left(a_\mathbf{p} e^{i\mathbf{p}\cdot \mathbf{x}} - a_\mathbf{p}^\dagger e^{-i \mathbf{p}\cdot \mathbf{x}}\right),
+\end{align*}
+where we have
+\[
+ \omega_\mathbf{p}^2 = \mathbf{p}^2 + m^2.
+\]
+Note that despite what we said above, we are multiplying $a_\mathbf{p}^\dagger$ by $e^{-i\mathbf{p}\cdot \mathbf{x}}$, and not $e^{i\mathbf{p}\cdot \mathbf{x}}$. This is so that $\phi(\mathbf{x})$ will be manifestly a real quantity.
+
+%On the other hand, since we are integrating over all $\mathbf{p}$, we can relabel $\mathbf{p} \mapsto -\mathbf{p}$ for the second term in the expression for $\phi$, we can write
+%\[
+% \phi(x) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{e^{i \mathbf{p}\cdot \mathbf{x}}}{\sqrt{2 \omega_\mathbf{p}}} (a_\mathbf{p} + a_\mathbf{p}^\dagger).
+%\]
+%This then looks more like our good old
+%\[
+% q = \frac{1}{\sqrt{2\omega}}(a + a^\dagger)
+%\]
+%for a normal harmonic oscillator.
+% understand this sign flipping thing.
+
+We are now going to find the commutation relations for the $a_\mathbf{p}$ and $a_\mathbf{p}^{\dagger}$. Throughout the upcoming computations, we will frequently make use of the following result:
+\begin{prop}
+ We have
+ \[
+ \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} e^{-i\mathbf{p}\cdot \mathbf{x}} = \delta^3(\mathbf{x}).
+ \]
+\end{prop}
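This identity can be made plausible numerically in one dimension: cutting the momentum integral off at $|p| < P$ gives $\int_{-P}^{P} \frac{\d p}{2\pi} e^{-ipx} = \frac{\sin(Px)}{\pi x}$, a nascent delta function. The sketch below (my addition) smears it against a Gaussian test function and checks that the result tends to $f(0)$:

```python
import numpy as np

dx = 0.001
x = (np.arange(-200000, 200000) + 0.5)*dx   # midpoint grid, avoids x = 0
f = np.exp(-x**2)                            # test function, f(0) = 1

vals = []
for P in [5.0, 20.0, 80.0]:
    kernel = np.sin(P*x)/(np.pi*x)           # truncated delta function
    vals.append(np.sum(f*kernel)*dx)         # int f(x) kernel(x) dx
    print(P, vals[-1])                       # -> 1.0 as P grows
```

As the cutoff $P$ increases, the smeared value converges to $f(0) = 1$, as a delta function should give.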
+
+\begin{prop}
+ The canonical commutation relations of $\phi, \pi$, namely
+ \begin{align*}
+ [\phi(\mathbf{x}), \phi(\mathbf{y})] &= 0\\
+ [\pi(\mathbf{x}), \pi(\mathbf{y})] &= 0\\
+ [\phi(\mathbf{x}), \pi(\mathbf{y})] &= i \delta^3(\mathbf{x} - \mathbf{y})
+ \intertext{are equivalent to}
+ [a_\mathbf{p}, a_\mathbf{q}] &= 0\\
+ [a_\mathbf{p}^\dagger, a_\mathbf{q}^\dagger] &= 0\\
+ [a_\mathbf{p}, a_\mathbf{q}^\dagger] &= (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}).
+ \end{align*}
+\end{prop}
+
+\begin{proof}
+ We will only prove one small part of the equivalence, as the others are similarly tedious and boring computations, and you are not going to read them anyway. We will use the commutation relations for the $a_\mathbf{p}$ to obtain the commutation relations for $\phi$ and $\pi$. We can compute
+ \begin{align*}
+ &[\phi(\mathbf{x}), \pi(\mathbf{y})]\\
+ ={}& \int \frac{\d^3 \mathbf{p}\; \d^3 \mathbf{q}}{(2\pi)^6} \frac{(-i)}{2}\sqrt{\frac{\omega_\mathbf{q}}{\omega_\mathbf{p}}}\left(-[a_\mathbf{p}, a_\mathbf{q}^\dagger] e^{i\mathbf{p} \cdot \mathbf{x} - i \mathbf{q}\cdot \mathbf{y}} + [a_\mathbf{p}^\dagger, a_\mathbf{q}] e^{-i\mathbf{p}\cdot \mathbf{x} + i \mathbf{q}\cdot \mathbf{y}}\right)\\
+ ={}& \int \frac{\d^3 \mathbf{p}\; \d^3 \mathbf{q}}{(2\pi)^6} \frac{(-i)}{2}\sqrt{\frac{\omega_\mathbf{q}}{\omega_\mathbf{p}}}(2\pi)^3\left(-\delta^3(\mathbf{p} - \mathbf{q}) e^{i\mathbf{p} \cdot \mathbf{x} - i \mathbf{q}\cdot \mathbf{y}} - \delta^3(\mathbf{q} - \mathbf{p}) e^{-i\mathbf{p}\cdot \mathbf{x} + i \mathbf{q}\cdot \mathbf{y}}\right)\\
+ ={}& \frac{(-i)}{2} \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \left(-e^{-i\mathbf{p}\cdot (\mathbf{x} - \mathbf{y})} - e^{i\mathbf{p} \cdot(\mathbf{y} - \mathbf{x})}\right)\\
+ ={}& i \delta^3(\mathbf{x} - \mathbf{y}).
+ \end{align*}
+ Note that to prove the inverse direction, we have to invert the relation between $\phi(\mathbf{x}), \pi(\mathbf{x})$ and $a_\mathbf{p}, a_\mathbf{p}^\dagger$ and express $a_\mathbf{p}$ and $a_\mathbf{p}^\dagger$ in terms of $\phi$ and $\pi$ by using
+\begin{align*}
+ \int \d^3 \mathbf{x} \; \phi(\mathbf{x}) \, e^{i \mathbf{p} \cdot \mathbf{x}} &= \frac{1}{\sqrt{2 \omega_\mathbf{p}}} \left(a_{-\mathbf{p}} + a_\mathbf{p}^\dagger \right)\\
+ \int \d^3 \mathbf{x} \; \pi(\mathbf{x}) \, e^{i \mathbf{p} \cdot \mathbf{x}} &= (-i) \sqrt{\frac{\omega_\mathbf{p}}{2}}\left(a_{-\mathbf{p}} - a_\mathbf{p}^\dagger \right). \qedhere
+\end{align*}
+\end{proof}
+So our creation and annihilation operators do satisfy commutation relations similar to the case of a simple harmonic oscillator.
+
+The next thing to do is to express $H$ in terms of $a_\mathbf{p}$ and $a_\mathbf{p}^\dagger$. Before we plunge into the horrendous calculations that you are probably going to skip, it is a good idea to stop and think what we are going to expect. If we have enough faith, then we should expect that we are going to end up with infinitely many decoupled harmonic oscillators. In other words, we would have
+\[
+ H = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} (\text{a harmonic oscillator of frequency $\omega_\mathbf{p}$}).
+\]
+But if this were true, then we would have a serious problem. Recall that each harmonic oscillator has a non-zero ground state energy, and this ground state energy increases with frequency. Now that we have \emph{infinitely many} harmonic oscillators of arbitrarily high frequency, the ground state energy would add up to infinity! What's worse is that when we derived the ground state energy for the harmonic oscillator, the energy is $\frac{1}{2}\omega [a, a^\dagger]$. Now our $[a_\mathbf{p}, a_\mathbf{p}^\dagger]$ is $(2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q})$, i.e.\ the value is infinite. So our ground state energy is an infinite sum of infinities! This is so bad.
+
+These problems indeed will occur. We will discuss how we are going to avoid them later on, after we make ourselves actually compute the Hamiltonian.
+
+As in the classical case, we have
+\[
+ H = \frac{1}{2}\int \d^3 \mathbf{x}\;(\pi^2 + (\nabla \phi)^2 + m^2 \phi^2).
+\]
+For the sake of sanity, we will evaluate this piece by piece. We have
+\begin{align*}
+ \int \d^3 \mathbf{x}\;\pi^2 &=- \int \frac{\d^3 \mathbf{x}\;\d^3 \mathbf{p}\;\d^3 \mathbf{q}}{(2\pi)^6} \frac{\sqrt{\omega_\mathbf{p} \omega_\mathbf{q}}}{2} (a_\mathbf{p} e^{i\mathbf{p}\cdot \mathbf{x}} - a_\mathbf{p}^{\dagger} e^{-i\mathbf{p}\cdot \mathbf{x}})(a_\mathbf{q} e^{i\mathbf{q}\cdot \mathbf{x}} - a_\mathbf{q}^{\dagger} e^{-i\mathbf{q}\cdot \mathbf{x}})\\
+ &=- \int \frac{\d^3 \mathbf{x}\;\d^3 \mathbf{p}\;\d^3 \mathbf{q}}{(2\pi)^6} \frac{\sqrt{\omega_\mathbf{p} \omega_\mathbf{q}}}{2} (a_\mathbf{p} a_\mathbf{q}e^{i(\mathbf{p} + \mathbf{q})\cdot \mathbf{x}} - a_\mathbf{p}^\dagger a_\mathbf{q} e^{i(\mathbf{q} - \mathbf{p})\cdot \mathbf{x}} \\
+ &\hphantom{=- \int \frac{\d^3 \mathbf{x}\;\d^3 \mathbf{p}\;\d^3 \mathbf{q}}{(2\pi)^6} \frac{\sqrt{\omega_\mathbf{p} \omega_\mathbf{q}}}{2} (}- a_\mathbf{p}a_\mathbf{q}^\dagger e^{i(\mathbf{p} - \mathbf{q})\cdot \mathbf{x}} + a_\mathbf{p}^\dagger a_\mathbf{q}^\dagger e^{-i(\mathbf{p} + \mathbf{q})\cdot \mathbf{x}})\\
+ &=- \int \frac{\d^3 \mathbf{p}\;\d^3 \mathbf{q}}{(2\pi)^3} \frac{\sqrt{\omega_\mathbf{p} \omega_\mathbf{q}}}{2} (a_\mathbf{p} a_\mathbf{q} \delta^3(\mathbf{p} + \mathbf{q}) - a_\mathbf{p}^\dagger a_\mathbf{q} \delta^3(\mathbf{p} - \mathbf{q})\\
+ &\hphantom{=-\int \frac{\d^3 \mathbf{p}\;\d^3 \mathbf{q}}{(2\pi)^3} \frac{\sqrt{\omega_\mathbf{p} \omega_\mathbf{q}}}{2} (}-a_\mathbf{p} a_\mathbf{q}^\dagger \delta^3(\mathbf{p} - \mathbf{q}) + a_\mathbf{p}^\dagger a_\mathbf{q}^\dagger \delta^3(\mathbf{p} + \mathbf{q}))\\
+ &=\int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{\omega_\mathbf{p}}{2} ((a_\mathbf{p}^\dagger a_\mathbf{p} + a_\mathbf{p} a_\mathbf{p}^\dagger) - (a_\mathbf{p} a_{-\mathbf{p}} + a_\mathbf{p}^\dagger a_{-\mathbf{p}}^\dagger)).
+\end{align*}
+That was tedious. We similarly compute
+\begin{align*}
+ \int \d^3 \mathbf{x}\; (\nabla \phi)^2 &= \int \frac{\d^3 \mathbf{x} \;\d^3 \mathbf{p} \;\d^3 \mathbf{q}}{(2\pi)^6} \frac{1}{2\sqrt{\omega_\mathbf{p} \omega_\mathbf{q}}} (i\mathbf{p} a_\mathbf{p} e^{i\mathbf{p}\cdot \mathbf{x}} - i\mathbf{p} a_\mathbf{p}^\dagger e^{-i\mathbf{p}\cdot \mathbf{x}})\\
+ &\hphantom{= \int \frac{\d^3 \mathbf{x} \;\d^3 \mathbf{p} \;\d^3 \mathbf{q}}{(2\pi)^6} \frac{1}{2\sqrt{\omega_\mathbf{p} \omega_\mathbf{q}}}}\quad(i\mathbf{q} a_\mathbf{q}e^{i\mathbf{q}\cdot \mathbf{x}} - i\mathbf{q} a_\mathbf{q}^\dagger e^{-i\mathbf{q}\cdot \mathbf{x}})\\
+ &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{\mathbf{p}^2}{2\omega_\mathbf{p}}((a_\mathbf{p}^\dagger a_\mathbf{p} + a_\mathbf{p} a_\mathbf{p}^\dagger) + (a_\mathbf{p} a_{-\mathbf{p}} + a_\mathbf{p}^\dagger a_{-\mathbf{p}}^\dagger))\\
+ \int \d^3 \mathbf{x}\; m^2 \phi^2 &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{m^2}{2\omega_\mathbf{p}}((a_\mathbf{p}^\dagger a_\mathbf{p} + a_\mathbf{p} a_\mathbf{p}^\dagger) + (a_\mathbf{p} a_{-\mathbf{p}} + a_\mathbf{p}^\dagger a_{-\mathbf{p}}^\dagger)).
+\end{align*}
+Putting all these together, we have
+\begin{align*}
+ H ={}& \frac{1}{2}\int \d^3 \mathbf{x}\;(\pi^2 + (\nabla \phi)^2 + m^2 \phi^2)\\
+ ={}& \frac{1}{4}\int \frac{\d^3 \mathbf{p}}{(2\pi)^3}\left(\left(-\omega_\mathbf{p} + \frac{\mathbf{p}^2}{\omega_\mathbf{p}} + \frac{m^2}{\omega_\mathbf{p}}\right)(a_\mathbf{p} a_{-\mathbf{p}} + a_\mathbf{p}^\dagger a_{-\mathbf{p}}^\dagger)\right.\\
+ &\hphantom{{}=\frac{1}{4}\int \frac{\d^3 \mathbf{p}}{(2\pi)^3}}\left.{} + \left(\omega_\mathbf{p} + \frac{\mathbf{p}^2}{\omega_\mathbf{p}} + \frac{m^2}{\omega_\mathbf{p}}\right) (a_\mathbf{p} a_\mathbf{p}^\dagger + a_\mathbf{p}^\dagger a_\mathbf{p})\right)\\
+ \intertext{Now the first term vanishes, since we have $\omega_\mathbf{p}^2 = \mathbf{p}^2 + m^2$. So we are left with}
+ ={}& \frac{1}{4}\int \frac{\d^3 \mathbf{p}}{(2\pi)^3}\frac{1}{\omega_\mathbf{p}} (\omega_\mathbf{p}^2 + \mathbf{p}^2 + m^2)(a_\mathbf{p} a_\mathbf{p}^\dagger + a_\mathbf{p}^\dagger a_\mathbf{p})\\
+ ={}& \frac{1}{2} \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \omega_\mathbf{p}(a_\mathbf{p}a_\mathbf{p}^\dagger + a_\mathbf{p}^\dagger a_\mathbf{p})\\
+ ={}& \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \omega_\mathbf{p}\left(a_\mathbf{p}^\dagger a_\mathbf{p} + \frac{1}{2}[a_\mathbf{p}, a_\mathbf{p}^\dagger]\right)\\
+ ={}& \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \omega_\mathbf{p}\left(a_\mathbf{p}^\dagger a_\mathbf{p} + \frac{1}{2}(2\pi)^3 \delta^3(\mathbf{0})\right).
+\end{align*}
+Note that if we cover up the $\int \frac{\d^3 \mathbf{p}}{(2\pi)^3}$, then the final three lines are exactly what we got for a simple harmonic oscillator of frequency $\omega_\mathbf{p} = \sqrt{\mathbf{p}^2 + m^2}$.
+
+Following the simple harmonic oscillator, we postulate that we have a \term{vacuum state} $\bket{0}$ such that
+\[
+ a_\mathbf{p} \bket{0} = 0
+\]
+for all $\mathbf{p}$.
+
+When $H$ acts on this, the $a_\mathbf{p}^\dagger a_\mathbf{p}$ terms all vanish. So the energy of this ground state comes from the second term only, and we have
+\[
+ H\bket{0} = \frac{1}{2}\int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \omega_\mathbf{p} (2\pi)^3 \delta^3(\mathbf{0}) \bket{0} = \infty\bket{0},
+\]
+since all $\omega_\mathbf{p}$ are non-negative.
+
+Quantum field theory is always full of these infinities! But they tell us something important. Often, they tell us that we are asking a stupid question.
+
+Let's take a minute to explore this infinity and see what it means. This is bad, since what we want to do at the end is to compute some actual probabilities in real life, and infinities aren't exactly easy to calculate with.
+
+Let's analyze the infinities one by one. The first thing we want to tackle is the $\delta^3(\mathbf{0})$. Recall that our $\delta^3$ can be thought of as
+\[
+ (2\pi)^3 \delta^3(\mathbf{p}) = \int \d^3 \mathbf{x}\; e^{i\mathbf{p}\cdot \mathbf{x}}.
+\]
+When we evaluate this at $\mathbf{p} = \mathbf{0}$, we are then integrating the number $1$ over all space, and thus the result is infinite! You might say, duh, space is so big. If there is some energy everywhere, then of course the total energy is infinite. This problem is known as \term{infrared divergence}. So the idea is to look at the energy \emph{density}, i.e.\ the energy per unit volume. Heuristically, since the $(2\pi)^3\delta^3(\mathbf{p})$ is just measuring the ``volume'' of the universe, we would get rid of it by simply throwing away the factor of $(2\pi)^3 \delta^3(\mathbf{0})$.
+
+If we want to make this a bit more sophisticated, we would enclose our universe in a box, and thus we can replace the $(2\pi)^3\delta^3(\mathbf{0})$ with the volume $V$. This trick is known as an \term{infrared cutoff}. Then we have
+\[
+ \mathcal{E}_0 = \frac{E}{V} = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2} \omega_\mathbf{p}.
+\]
+We can then safely take the limit as $V \to \infty$, and forget about this $\delta^3(\mathbf{0})$ problem.
+
+This is \emph{still} infinite, since $\omega_\mathbf{p}$ gets unbounded as $\mathbf{p} \to \infty$. In other words, the ground state energies for each simple harmonic oscillator add up to infinity.
+
+These are high frequency divergences at short distances. These are called \term{ultraviolet divergences}. Fortunately, our quantum field theorists are humble beings and believe their theories are \emph{wrong}! This will be a recurring theme --- we will only assume that our theories are low-energy approximations of the real world. While this might seem pessimistic, it is practically a sensible thing to do --- our experiments can only access low-level properties of the universe, so what we really see is the low-energy approximation of the real theory.
+
+Under this assumption, we would want to cut off the integral at high momentum in some way.\index{ultraviolet cutoff} In other words, we just arbitrarily put a bound on the integral over $\mathbf{p}$, instead of integrating over all possible $\mathbf{p}$. While the cut-off point is arbitrary, it doesn't really matter. In (non-gravitational) physics, we only care about energy differences, and picking a different cut-off point would just add a constant energy to everything. Alternatively, we can also do something slightly more sophisticated to avoid this arbitrariness, as we will see in the example of the Casimir effect soon.
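To see how fast this ultraviolet divergence grows with the cutoff, we can evaluate the vacuum energy density under a hard momentum cutoff $\Lambda$, where $\frac{E_0}{V} = \frac{1}{4\pi^2}\int_0^\Lambda \d p\; p^2 \sqrt{p^2 + m^2} \sim \frac{\Lambda^4}{16\pi^2}$. The following numerical sketch (my addition, in units where $m = 1$, with illustrative cutoff values) confirms the $\Lambda^4$ scaling:

```python
import numpy as np

# Vacuum energy density with a UV cutoff Lambda:
#   E_0/V = (1/4 pi^2) int_0^Lambda dp p^2 sqrt(p^2 + m^2) ~ Lambda^4 / (16 pi^2)
m = 1.0
ratios = []
for Lam in [10.0, 100.0, 1000.0]:
    n = 200000
    dp = Lam/n
    p = (np.arange(n) + 0.5)*dp                      # midpoint rule
    eps0 = np.sum(p**2*np.sqrt(p**2 + m**2))*dp/(4*np.pi**2)
    ratios.append(eps0/(Lam**4/(16*np.pi**2)))
    print(Lam, ratios[-1])                           # ratio -> 1 as Lambda grows
```

The ratio approaches $1$ as $\Lambda \to \infty$: the divergence is quartic in the cutoff, with the mass contributing only subleading corrections.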
+
+Even more straightforwardly, if we just care about energy differences, we can just forget about the infinite term, and write
+\[
+ H = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \omega_\mathbf{p} a_\mathbf{p}^\dagger a_\mathbf{p}.
+\]
+Then we have
+\[
+ H \bket{0} = 0.
+\]
+While all this cancelling of infinities may sound bonkers, the theory actually fits experimental data very well. So we have to live with it.
+
+The difference between this $H$ and the previous one with infinities is an ordering ambiguity in going from the classical to the quantum theory. Recall that we did the quantization by replacing the terms in the classical Hamiltonian with operators. However, terms in the classical Hamiltonian are commutative, but not in the quantum theory. So if we write the classical Hamiltonian in a different way, we get a different quantized theory. Indeed, if we initially wrote
+\[
+ H = \frac{1}{2} (\omega q - ip)(\omega q + ip),
+\]
+for the classical Hamiltonian of a single harmonic oscillator, and then did the quantization, we would have obtained
+\[
+ H = \omega a^\dagger a.
+\]
+It is convenient to have the following notion:
+\begin{defi}[Normal order]\index{normal order}
+ Given a string of operators
+ \[
+ \phi_1(\mathbf{x}_1) \cdots \phi_n(\mathbf{x}_n),
+ \]
+ the \emph{normal order} is what you obtain when you put all the annihilation operators to the right of (i.e.\ acting before) all the creation operators. This is written as
+ \[
+ \normalorder{\phi_1(\mathbf{x}_1) \cdots \phi_n(\mathbf{x}_n)}.
+ \]
+\end{defi}
+So, for example,
+\[
+ \normalorder{H} = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \omega_\mathbf{p} a_\mathbf{p}^\dagger a_\mathbf{p}.
+\]
+In the future, we will assume that when we quantize our theory, we do so in a way that the resulting operators are in normal order.
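For a single oscillator mode, this ordering ambiguity can be made concrete numerically. Below is a minimal sketch using truncated matrices for $a$ and $a^\dagger$ (the truncation dimension and the choice $\omega = 1$ are arbitrary illustration choices): the symmetrically ordered Hamiltonian $\frac{\omega}{2}(a a^\dagger + a^\dagger a)$ has vacuum energy $\omega/2$, while the normal-ordered $\omega a^\dagger a$ has zero.

```python
import math

dim = 12  # truncation dimension (an artifact of the illustration, not the physics)

# Truncated annihilation operator: a|n> = sqrt(n) |n-1>
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(dim)] for i in range(dim)]
adag = [[a[j][i] for j in range(dim)] for i in range(dim)]  # Hermitian conjugate (real entries)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(dim)) for j in range(dim)] for i in range(dim)]

omega = 1.0
a_adag = matmul(a, adag)
adag_a = matmul(adag, a)

# Vacuum energy <0|H|0> for the symmetric ordering (omega/2)(a a† + a† a) ...
E0_sym = 0.5 * omega * (a_adag[0][0] + adag_a[0][0])
# ... versus the normal ordering :H: = omega a† a
E0_norm = omega * adag_a[0][0]
```

The difference $E_{0,\mathrm{sym}} - E_{0,\mathrm{norm}} = \omega/2$ is exactly the zero-point energy discarded by normal ordering.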
+
+%Note that in gravity, the sum of the zero point fluctuations should appear on the right hand side of Eisntein's Field Equations as $\Lambda$:
+%\[
+% R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu} -= -8\pi GT_{\mu\nu} + \Lambda g_{\mu\nu}.
+%\]
+%Here $\Lambda = E_0/V$ is the cosmological constant. Observations suggest that $\Lambda \sim (\SI{e-3}{\electronvolt})^4$. In the current models of the universe, this cosmological constant accounts for $\approx 75\%$ of the universe's energy budget! However, our actual standard model predicts a different $\Lambda$ than we actually observe. We do not know why. This is the cosmological constant problem.
+
+\subsubsection*{Applications --- The Casimir effect}
+\index{Casimir effect}
+Notice that we happily set $E_0 = 0$, claiming that only energy differences are measured. But there exists a situation where differences in the vacuum fluctuations themselves can be measured, when we have two separated regions of vacuum. In this example, we will also demonstrate how the infrared and ultraviolet cutoffs can be implemented. We are going to enclose the world in a box of length $L$, and then do the ultraviolet cutoff in a way parametrized by a constant $a$. We then do some computations under these cutoffs, derive an answer, and then take the limits $L \to \infty$ and $a \to 0$. If we have asked the right question, the answer will tend to a finite limit as we take these limits.
+
+To regulate the infrared divergences, we put the universe in a box again. We make the $x$ direction periodic, with a large period $L$. So this is a one-dimensional box. We impose periodic boundary conditions
+\[
+ \phi(x, y, z) = \phi(x + L, y, z).
+\]
+We are now going to put two reflecting plates in the box, some distance $d \ll L$ apart. The plates impose the boundary condition $\phi(\mathbf{x}) = 0$ on their surfaces.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (3, 0) -- (4, 0.5);
+ \draw [dashed] (4, 0.5) -- (1, 0.5) -- (0, 0);
+ \draw (0, 2) -- (3, 2) -- (4, 2.5) -- (1, 2.5) -- cycle;
+ \draw (0, 0) -- (0, 2);
+ \draw (3, 0) -- (3, 2);
+ \draw (1, 0.5) -- (1, 2.5);
+ \draw (4, 0.5) -- (4, 2.5);
+
+ \draw [fill=mblue, opacity=0.5] (1.25, 0) -- (2.25, 0.5) -- (2.25, 2.5) -- (1.25, 2) -- cycle;
+ \draw [fill=mblue, opacity=0.5] (1.75, 0) -- (2.75, 0.5) -- (2.75, 2.5) -- (1.75, 2) -- cycle;
+
+ \draw [latex'-latex'] (1.25, -0.2) -- (1.75, -0.2) node [pos=0.5, below] {$d$};
+
+ \draw [decorate, decoration={brace, amplitude=5pt}] (3, -0.7) -- (0, -0.7);
+ \node at (1.5, -0.9) [below] {$L$};
+ \end{tikzpicture}
+\end{center}
+The presence of the plates means that the momentum of the field inside them is quantized
+\[
+ \mathbf{p} = \left(\frac{\pi n}{d}, p_y, p_z\right)
+\]
+for positive integers $n$. For a massless scalar field, the energy per unit area between the plates is
+\[
+ E(d) = \sum_{n = 1}^\infty \int \frac{\d p_y\;\d p_z}{(2\pi)^2} \frac{1}{2} \sqrt{\left(\frac{\pi n}{d}\right)^2 + p_y^2 +p_z^2}.
+\]
+The energy outside the plates is then $E(L - d)$. The total energy is then
+\[
+ E = E(d) + E(L - d).
+\]
+This energy (at least naively) depends on $d$. So there is a force between the plates! This is the \emph{Casimir effect}, predicted in 1945, and observed in 1958. In the lab, this was done with the EM field, and the plates impose the boundary conditions.
+
+Note that as before, we had to neglect modes with $|\mathbf{p}|$ too high. More precisely, we pick some distance scale $a \ll d$, and suppress modes with $|\mathbf{p}| \gg a^{-1}$. This is known as the ultraviolet cut-off. This is physically reasonable, since modes of very high momentum simply pass through the plates. Then we have
+\[
+ E(d) = \sum_n \int \frac{\d p_y \;\d p_z}{(2\pi)^2} \frac{1}{2}|\mathbf{p}| e^{-a|\mathbf{p}|}.
+\]
+Note that as we set $a \to 0$, we get the previous expression.
+
+Since we are scared by this integral, we consider the special case where we live in the $1 + 1$ dimensional world. Then this becomes
+\begin{align*}
+ E_{1 + 1}(d) &= \frac{\pi}{2d} \sum_{n = 1}^\infty n e^{-an\pi/d} \\
+ &= -\frac{1}{2} \frac{\d}{\d a}\left(\sum_n e^{-an\pi/d}\right)\\
+ &= -\frac{1}{2} \frac{\d}{\d a} \frac{1}{1 - e^{-a\pi/d}}\\
+ &= \frac{\pi}{2d}\frac{e^{-a\pi/d}}{(1 - e^{-a\pi/d})^2}\\
+ &= \frac{d}{2\pi a^2} - \frac{\pi}{24 d} + O(a^2).
+\end{align*}
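We can sanity-check both the closed form and its small-$a$ expansion numerically. A quick sketch, where the values of $d$ and $a$ are arbitrary illustration choices:

```python
import math

d = 2.0    # plate separation (illustration)
a = 1e-3   # UV cutoff scale, a << d (illustration)

# Partial sum of the regulated mode sum (pi/2d) * sum_n n e^{-a n pi/d}
S = (math.pi / (2 * d)) * sum(n * math.exp(-a * n * math.pi / d)
                              for n in range(1, 200000))

# Closed form (pi/2d) e^{-a pi/d} / (1 - e^{-a pi/d})^2
x = a * math.pi / d
# expm1(-x) = e^{-x} - 1 avoids cancellation for small x; squaring fixes the sign
closed = (math.pi / (2 * d)) * math.exp(-x) / math.expm1(-x) ** 2

# Leading terms of the small-a expansion: d/(2 pi a^2) - pi/(24 d)
expansion = d / (2 * math.pi * a ** 2) - math.pi / (24 * d)
```

For $a \ll d$ the three numbers agree to many significant figures, confirming that the $O(a^2)$ corrections are tiny.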
+Our total energy is
+\[
+ E = E(d) + E(L - d) = \frac{L}{2\pi a^2} - \frac{\pi}{24}\left(\frac{1}{d} + \frac{1}{L - d}\right) + O(a^2).
+\]
+As $a \to 0$, this is still infinite, but the infinite term does not depend on $d$. The force itself is just
+\[
+ \frac{\partial E}{\partial d} = \frac{\pi}{24 d^2} + O\left(\frac{d^2}{L^2}\right) + O(a^2),
+\]
+which is finite as $a \to 0$ and $L \to \infty$. So as we remove both the infrared and UV cutoffs, we still get a sensible finite force.
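We can also check numerically that the divergent parts cancel in the force. Using the exact regulated closed form for $E(d)$ derived above, a central difference of the total energy should reproduce the derivative of the finite part $-\frac{\pi}{24}\left(\frac{1}{d} + \frac{1}{L - d}\right)$; the parameter values below are arbitrary illustration choices:

```python
import math

def E(d, a):
    """Regulated 1+1 dimensional vacuum energy between plates at separation d."""
    x = a * math.pi / d
    return (math.pi / (2 * d)) * math.exp(-x) / math.expm1(-x) ** 2

L = 100.0  # IR box size (illustration)
a = 1e-3   # UV cutoff scale (illustration)
d = 1.0    # plate separation (illustration)

def E_total(s):
    return E(s, a) + E(L - s, a)

# Central difference for dE/dd: the cutoff-dependent pieces cancel between the two regions
h = 1e-3
force = (E_total(d + h) - E_total(d - h)) / (2 * h)

# Derivative of the finite part -(pi/24)(1/d + 1/(L - d))
expected = (math.pi / 24) * (1 / d ** 2 - 1 / (L - d) ** 2)
```

Even though each of $E(d)$ and $E(L-d)$ is huge (of order $1/a^2$), their $d$-derivatives combine into a small finite number.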
+
+In $3 + 1$ dimensions, if we were to do the more complicated integral, it turns out that (with $E$ the total energy and $A$ the area of the plates) we get
+\[
+ \frac{1}{A}\frac{\partial E}{\partial d} = \frac{\pi^2}{480 d^4}.
+\]
+The Casimir force for the electromagnetic field is double this, due to the two polarization states of the photon.
+
+\subsubsection*{Recovering particles}
+We called the operators $a_\mathbf{p}$ and $a_\mathbf{p}^\dagger$ the creation and annihilation operators. Let us verify that they actually create and annihilate things! Recall that the Hamiltonian is given by
+\[
+ H = \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \omega_\mathbf{q} a_\mathbf{q}^\dagger a_\mathbf{q}.
+\]
+Then we can compute
+\begin{align*}
+ [H, a_\mathbf{p}^\dagger] &= \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \omega_\mathbf{q} [a_\mathbf{q}^\dagger a_\mathbf{q}, a_\mathbf{p}^\dagger]\\
+ &= \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \omega_\mathbf{q} a_\mathbf{q}^\dagger (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q})\\
+ &= \omega_\mathbf{p} a_\mathbf{p}^\dagger.
+\end{align*}
+Similarly, we obtain
+\[
+ [H, a_\mathbf{p}] = - \omega_\mathbf{p} a_\mathbf{p},
+\]
+which means that, as in the case of SHM, we can construct energy eigenstates by acting with $a_\mathbf{p}^\dagger$. We let
+\[
+ \bket{\mathbf{p}} = a_\mathbf{p}^\dagger \bket{0}.
+\]
+Then we have
+\[
+ H\bket{\mathbf{p}} = \omega_\mathbf{p} \bket{\mathbf{p}},
+\]
+where the eigenvalue is
+\[
+ \omega_\mathbf{p}^2 = \mathbf{p}^2 + m^2.
+\]
+But from special relativity, we also know that the energy of a particle of mass $m$ and momentum $\mathbf{p}$ is given by
+\[
+ E_\mathbf{p}^2 = \mathbf{p}^2 + m^2.
+\]
+So we interpret $\bket{\mathbf{p}}$ as the momentum eigenstate of a particle of mass $m$ and momentum $\mathbf{p}$. And we identify $m$ with the mass of the quantized particle. From now on, we will write $E_\mathbf{p}$ instead of $\omega_\mathbf{p}$.
+
+Let's check this interpretation. After normal ordering, we have
+\[
+ \mathbf{P} = \int \pi(\mathbf{x}) \nabla \phi(\mathbf{x}) \;\d^3 \mathbf{x} = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \mathbf{p} a_\mathbf{p}^\dagger a_\mathbf{p}.
+\]
+So we have
+\begin{align*}
+ \mathbf{P}\bket{\mathbf{p}} &= \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \mathbf{q} a_\mathbf{q}^\dagger a_\mathbf{q}(a_\mathbf{p}^\dagger \bket{0})\\
+ &= \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \mathbf{q} a_\mathbf{q}^\dagger (a_\mathbf{p}^\dagger a_\mathbf{q} + (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}))\bket{0}\\
+ &= \mathbf{p}a_\mathbf{p}^\dagger \bket{0}\\
+ &= \mathbf{p}\bket{\mathbf{p}}.
+\end{align*}
+So the state has total momentum $\mathbf{p}$.
+
+%We can also act with the angular momentum operator
+%\[
+% J^i = \varepsilon^{ijk} \int \d^3 x\; (\mathcal{J}^a)^{jk},
+%\]
+%and obtain
+%\[
+% J^i \bket{p} \to 0\text{ as }\mathbf{p} \to 0.
+%\]
+%So this particle has spin $0$. See example sheet $2$ for more details.
+
+What about multi-particle states? We just have to act with more $a_\mathbf{p}^\dagger$'s. We have the $n$-particle state
+\[
+ \bket{\mathbf{p}_1, \mathbf{p}_2, \cdots, \mathbf{p}_n} = a_{\mathbf{p}_1}^\dagger a_{\mathbf{p}_2}^\dagger \cdots a_{\mathbf{p}_n}^\dagger \bket{0}.
+\]
+Note that $\bket{\mathbf{p}, \mathbf{q}} = \bket{\mathbf{q}, \mathbf{p}}$ for any $\mathbf{p}, \mathbf{q}$. So the states are symmetric under interchange of any two particles, i.e.\ the particles are \term{bosons}.
+
+Now we can tell what our state space is. It is given by the span of states of the form
+\[
+ \bket{0}, a_\mathbf{p}^\dagger \bket{0}, a_\mathbf{p}^\dagger a_\mathbf{q}^{\dagger}\bket{0}, \cdots
+\]
+This is known as the \term{Fock space}. As in the case of SHM, there is also an operator which counts the number of particles. It is given by
+\[
+ N = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} a_\mathbf{p}^\dagger a_\mathbf{p}.
+\]
+So we have
+\[
+ N\bket{\mathbf{p}_1, \cdots, \mathbf{p}_n} = n\bket{\mathbf{p}_1, \cdots, \mathbf{p}_n}.
+\]
+It is easy to compute that
+\[
+ [N, H] = 0.
+\]
+So the particle number is conserved in the free theory. This is \emph{not} true in general! Usually, when particles are allowed to interact, the interactions may create or destroy particles. It is only because we are in a free theory that we have particle number conservation.
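The single-mode version of these statements is easy to check numerically --- the field-theoretic $N$ and $H$ are just integrals over momenta of independent copies of the oscillator below. A sketch, where the truncation dimension and $\omega$ are arbitrary illustration choices:

```python
import math

dim = 6  # truncation (illustration only); keep states well below the cutoff
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(dim)] for i in range(dim)]
adag = [[a[j][i] for j in range(dim)] for i in range(dim)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(dim)) for j in range(dim)] for i in range(dim)]

def matvec(X, v):
    return [sum(X[i][k] * v[k] for k in range(dim)) for i in range(dim)]

omega = 0.7
Nop = matmul(adag, a)  # number operator
H = [[omega * Nop[i][j] for j in range(dim)] for i in range(dim)]  # normal-ordered H, one mode

# [N, H] = 0
comm = max(abs(matmul(Nop, H)[i][j] - matmul(H, Nop)[i][j])
           for i in range(dim) for j in range(dim))

# N counts particles: N (a†)^n |0> = n (a†)^n |0>
vac = [1.0] + [0.0] * (dim - 1)
state = vac
eigvals = []
for n in range(1, 4):
    state = matvec(adag, state)       # add one more particle
    Ns = matvec(Nop, state)
    eigvals.append(Ns[n] / state[n])  # eigenvalue read off the only nonzero entry
```

The eigenvalues come out as $1, 2, 3$, and the commutator vanishes, as expected in the free theory.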
+
+Although we are calling these states ``particles'', they aren't localized --- they're momentum eigenstates. Theoretically, we can create a localized state via a Fourier transform:
+\[
+ \bket{\mathbf{x}} = \int\frac{\d^3 \mathbf{p}}{(2\pi)^3} e^{-i\mathbf{p}\cdot \mathbf{x}}\bket{\mathbf{p}}.
+\]
+More generally, we can create a wave-packet, and insert $\psi(\mathbf{p})$ to get
+\[
+ \bket{\psi} = \int\frac{\d^3 \mathbf{p}}{(2\pi)^3} e^{-i\mathbf{p}\cdot \mathbf{x}}\psi(\mathbf{p})\bket{\mathbf{p}}.
+\]
+Then this wave-packet can be both partially localized in space and in momentum.
+
+For example, we can take $\psi$ to be the Gaussian
+\[
+ \psi(\mathbf{p}) = e^{-\mathbf{p}^2/2m}.
+\]
+Note that neither $\bket{\mathbf{x}}$ nor $\bket{\psi}$ is an $H$-eigenstate, just like in non-relativistic quantum mechanics.
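In one spatial dimension (and setting $m = 1$), the Gaussian profile in momentum indeed Fourier-transforms to a Gaussian in position, so the wave-packet is partially localized in both. A quick numerical sketch, approximating $\int \d p\; e^{ipx} e^{-p^2/2} = \sqrt{2\pi}\, e^{-x^2/2}$ by a Riemann sum (the evaluation point $x$ and the step size are arbitrary choices):

```python
import math

x = 1.3    # evaluation point (illustration)
dp = 0.01  # momentum-space step (illustration)

# Riemann sum of the Fourier integral; the imaginary part vanishes by symmetry
total = sum(math.cos(p * x) * math.exp(-p * p / 2) * dp
            for p in (k * dp for k in range(-2000, 2001)))

# The exact Gaussian transform
exact = math.sqrt(2 * math.pi) * math.exp(-x * x / 2)
```

A Gaussian in momentum of width $\sigma_p$ gives a Gaussian in position of width $1/\sigma_p$, the familiar uncertainty trade-off.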
+
+\subsubsection*{Relativistic normalization}
+So far, in setting up our quantum field theory, we have not been paying attention to Lorentz invariance. Now let's try to recover some. We had a vacuum state $\bket{0}$, which we can reasonably assume to be Lorentz invariant. We then produced some $1$-particle states $\bket{\mathbf{p}} = a_\mathbf{p}^\dagger \bket{0}$. This is certainly not a ``scalar'' quantity, so it would probably transform as we change coordinates.
+
+However, we can still impose compatibility with relativity, by requiring that its \emph{norm} is Lorentz invariant. We have chosen $\bket{0}$ such that
+\[
+ \braket{0}{0} = 1.
+\]
+We can compute
+\[
+ \braket{\mathbf{p}}{\mathbf{q}} = \brak{0}a_\mathbf{p} a_\mathbf{q}^\dagger \bket{0} = \brak{0}a_\mathbf{q}^\dagger a_\mathbf{p} + (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q})\bket{0} = (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}).
+\]
+There is absolutely no reason to believe that this is a Lorentz invariant quantity, as $\mathbf{p}$ and $\mathbf{q}$ are $3$-vectors. Indeed, it isn't.
+
+However, we might hope that we can come up with rescaled states
+\[
+ \bket{p} = A_p \bket{\mathbf{p}}
+\]
+so that
+\[
+ \braket{p}{q} = (2\pi)^3 A_p^* A_q \delta^3 (\mathbf{p} - \mathbf{q})
+\]
+is a Lorentz invariant quantity. Note that we write non-bold characters for ``relativistic'' things, and bold characters for ``non-relativistic'' things. Thus, we think of the $p$ in $\bket{p}$ as the 4-vector $p$, and the $\mathbf{p}$ in $\bket{\mathbf{p}}$ as the 3-vector $\mathbf{p}$.
+
+The trick to figuring out the right normalization is by looking at an object we know is Lorentz invariant, and factoring it into small pieces. If we know that all but one of the pieces are Lorentz invariant, then the remaining piece must be Lorentz invariant as well.
+
+Our first example would be the following:
+\begin{prop}
+ The expression
+ \[
+ \int \frac{\d^3 \mathbf{p}}{2 E_\mathbf{p}}
+ \]
+ is Lorentz-invariant, where
+ \[
+ E_\mathbf{p}^2 = \mathbf{p}^2 + m^2
+ \]
+ for some fixed $m$.
+\end{prop}
+
+\begin{proof}
+ We know $\int \d^4 p$ certainly is Lorentz invariant, and
+ \[
+ m^2 = p_\mu p^\mu = p^2 = p_0^2 - \mathbf{p}^2
+ \]
+ is also a Lorentz-invariant quantity. So for any $m$, the expression
+ \[
+ \int \d^4 p\; \delta(p_0^2 - \mathbf{p}^2 - m^2)\, \theta(p_0)
+ \]
+ is also Lorentz invariant, where the restriction $\theta(p_0)$ to the branch $p_0 > 0$ is itself Lorentz invariant, since orthochronous Lorentz transformations preserve the sign of $p_0$ of an on-shell momentum. Using
+ \[
+ \delta(p_0^2 - \mathbf{p}^2 - m^2) = \frac{1}{2 E_\mathbf{p}}\bigl(\delta(p_0 - E_\mathbf{p}) + \delta(p_0 + E_\mathbf{p})\bigr),\quad E_\mathbf{p} = \sqrt{\mathbf{p}^2 + m^2},
+ \]
+ integrating over $p_0$ in the integral gives
+ \[
+ \int \frac{\d^3 \mathbf{p}}{2p_0} =\int \frac{\d^3 \mathbf{p}}{2 E_\mathbf{p}},
+ \]
+ and this is Lorentz invariant.
+\end{proof}
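Concretely, the invariance of the measure can be seen from the Jacobian of a boost: boosting along $z$ with velocity $\beta$ sends $p_z \mapsto p_z' = \gamma(p_z + \beta E_\mathbf{p})$ and $E_\mathbf{p} \mapsto E' = \gamma(E_\mathbf{p} + \beta p_z)$, and one finds $\partial p_z'/\partial p_z = E'/E_\mathbf{p}$, so $\d^3 \mathbf{p}/E_\mathbf{p}$ is unchanged. A quick numerical check (the mass, velocity and momentum values are arbitrary illustration choices):

```python
import math

m = 0.5
beta = 0.6
gamma = 1 / math.sqrt(1 - beta ** 2)

px, py, pz = 0.3, -0.2, 0.8

def Ep(pz_val):
    # on-shell energy as a function of p_z (p_x, p_y held fixed)
    return math.sqrt(px ** 2 + py ** 2 + pz_val ** 2 + m ** 2)

def boost_pz(pz_val):
    # boosted z-momentum: p_z' = gamma (p_z + beta E_p)
    return gamma * (pz_val + beta * Ep(pz_val))

# Numerical Jacobian dp'_z / dp_z
h = 1e-6
jac = (boost_pz(pz + h) - boost_pz(pz - h)) / (2 * h)

E = Ep(pz)
Eprime = gamma * (E + beta * pz)
ratio = Eprime / E  # the Jacobian should equal E'/E, so d^3p / (2E) is invariant
```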
+
+Thus, we have
+\begin{prop}
+ The expression
+ \[
+ 2E_\mathbf{p} \delta^3 (\mathbf{p} - \mathbf{q})
+ \]
+ is Lorentz invariant.
+\end{prop}
+
+\begin{proof}
+ We have
+ \[
+ \int \frac{\d^3 \mathbf{p}}{2 E_\mathbf{p}} \cdot (2E_\mathbf{p} \delta^3(\mathbf{p} - \mathbf{q})) = 1.
+ \]
+ Since the RHS is Lorentz invariant, and the measure is Lorentz invariant, we know $2E_\mathbf{p} \delta^3(\mathbf{p} - \mathbf{q})$ must be Lorentz invariant.
+\end{proof}
+
+From this, we learn that the correctly normalized states are
+\[
+ \bket{p} = \sqrt{2 E_\mathbf{p}}\bket{\mathbf{p}} = \sqrt{2 E_\mathbf{p}} a_\mathbf{p}^\dagger \bket{0}.
+\]
+These new states satisfy
+\[
+ \braket{p}{q} = (2\pi)^3 (2E_\mathbf{p}) \delta^3 (\mathbf{p} - \mathbf{q}),
+\]
+which is Lorentz invariant.
+
+We can now define relativistically normalized creation operators by
+\[
+ a^\dagger(p) = \sqrt{2E_\mathbf{p}} a_\mathbf{p}^\dagger.
+\]
+Then we can write our field as
+\[
+ \phi (\mathbf{x}) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3}\frac{1}{2E_\mathbf{p}} (a(p)e^{i\mathbf{p}\cdot \mathbf{x}} + a^\dagger(p) e^{-i\mathbf{p}\cdot \mathbf{x}}).
+\]
+\subsection{Complex scalar fields}
+What happens when we want to talk about a complex scalar field? A classical free complex scalar field would have Lagrangian
+\[
+ \mathcal{L} = \partial_\mu \psi^* \partial^\mu \psi - \mu^2 \psi^* \psi.
+\]
+Again, we want to write the quantized $\psi$ as an integral of annihilation and creation operators indexed by momentum. In the case of a real scalar field, we had the expression
+\[
+ \phi(\mathbf{x}) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 E_\mathbf{p}}} \left(a_\mathbf{p} e^{i \mathbf{p} \cdot \mathbf{x}} + a_\mathbf{p}^\dagger e^{-i \mathbf{p}\cdot \mathbf{x}}\right)
+\]
+We needed the ``coefficients'' of $e^{i\mathbf{p}\cdot \mathbf{x}}$ and those of $e^{-i\mathbf{p}\cdot \mathbf{x}}$ to be $a_\mathbf{p}$ and its conjugate $a_\mathbf{p}^\dagger$ so that the resulting $\phi$ would be real. Now we know that our $\psi$ is a complex quantity, so there is no reason to assert that the coefficients are conjugates of each other. Thus, we take the more general decomposition
+\begin{align*}
+ \psi(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3}\frac{1}{\sqrt{2 E_\mathbf{p}}}(b_\mathbf{p} e^{i \mathbf{p}\cdot \mathbf{x}} + c_\mathbf{p}^\dagger e^{-i\mathbf{p}\cdot \mathbf{x}})\\
+ \psi^\dagger(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2E_\mathbf{p}}} (b_\mathbf{p}^\dagger e^{-i\mathbf{p}\cdot \mathbf{x}} + c_\mathbf{p} e^{i\mathbf{p}\cdot \mathbf{x}}).
+\end{align*}
+Then the conjugate momentum is
+\begin{align*}
+ \pi(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \, i \, \sqrt{\frac{E_\mathbf{p}}{2}} (b_\mathbf{p}^\dagger e^{-i\mathbf{p}\cdot \mathbf{x}} - c_\mathbf{p}e^{i\mathbf{p}\cdot \mathbf{x}})\\
+ \pi^\dagger(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3}(-i) \sqrt{\frac{E_\mathbf{p}}{2}} (b_\mathbf{p} e^{i\mathbf{p}\cdot \mathbf{x}} - c_\mathbf{p}^\dagger e^{-i\mathbf{p}\cdot \mathbf{x}}).
+\end{align*}
+The commutator relations are
+\[
+ [\psi(\mathbf{x}), \pi(\mathbf{y})] = [\psi^\dagger(\mathbf{x}), \pi^\dagger(\mathbf{y})] = i \delta^3(\mathbf{x} - \mathbf{y}),
+\]
+with all other commutators zero.
+
+Similar tedious computations show that
+\[
+ [b_\mathbf{p}, b_\mathbf{q}^\dagger] = [c_\mathbf{p}, c_\mathbf{q}^\dagger] = (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}),
+\]
+with all other commutators zero.
+
+As before, we can find that the number operators
+\[
+ N_c = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3}c_\mathbf{p}^\dagger c_\mathbf{p},\quad N_b = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3}b_\mathbf{p}^\dagger b_\mathbf{p}
+\]
+are conserved.
+
+But in the classical case, we had an extra conserved charge
+\[
+ Q = i \int \d^3 \mathbf{x} \; (\dot{\psi}^* \psi - \psi^* \dot{\psi}) = i \int \d^3 \mathbf{x} \; (\pi \psi - \psi^* \pi^*).
+\]
+Again by tedious computations, we can replace the $\pi$ and $\psi$ with their operator analogues, expand everything in terms of $c_\mathbf{p}$ and $b_\mathbf{p}$, throw away the pieces of infinity by requiring normal ordering, and then obtain
+\[
+ Q = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} (c_\mathbf{p}^\dagger c_\mathbf{p} - b_\mathbf{p}^\dagger b_\mathbf{p}) = N_c - N_b.
+\]
+Fortunately, after quantization, this quantity is still conserved, i.e.\ we have
+\[
+ [Q, H] = 0.
+\]
+This is not a big deal, since $N_c$ and $N_b$ are separately conserved. However, in the interacting theory, we will find that $N_c$ and $N_b$ are \emph{not} separately conserved, but $Q$ still is.
+
+We can think of $c$ and $b$ particles as particle and anti-particle, and $Q$ computes the number of particles minus the number of antiparticles. Looking back, in the case of a real scalar field, we essentially had a system with $c = b$. So the particle is equal to its anti-particle.
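A two-mode caricature of this (one $b$ mode and one $c$ mode acting on separate tensor factors, each truncated; the single-mode energies $E_b, E_c$ are arbitrary illustration choices) shows $[Q, H] = 0$ and that $b^\dagger \bket{0}$ has charge $-1$. In the free theory the check is unexciting --- everything is diagonal in particle number --- but it fixes conventions:

```python
import math

d = 3  # per-mode truncation (illustration only)
a1 = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(d)] for i in range(d)]
I1 = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

def kron(X, Y):
    n = d * d
    return [[X[i // d][j // d] * Y[i % d][j % d] for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def T(X):
    return [[X[j][i] for j in range(len(X))] for i in range(len(X))]

b = kron(a1, I1)  # particle annihilation
c = kron(I1, a1)  # antiparticle annihilation

Nb = matmul(T(b), b)
Nc = matmul(T(c), c)
n = d * d
Q = [[Nc[i][j] - Nb[i][j] for j in range(n)] for i in range(n)]
Eb, Ec = 1.0, 1.3  # illustrative single-mode energies
H = [[Eb * Nb[i][j] + Ec * Nc[i][j] for j in range(n)] for i in range(n)]

# [Q, H] = 0
commQH = max(abs(matmul(Q, H)[i][j] - matmul(H, Q)[i][j])
             for i in range(n) for j in range(n))

# Q (b† |0>) = -1 * b† |0>: one particle, no antiparticle
vac = [1.0] + [0.0] * (n - 1)
bstate = [sum(T(b)[i][k] * vac[k] for k in range(n)) for i in range(n)]
Qb = [sum(Q[i][k] * bstate[k] for k in range(n)) for i in range(n)]
```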
+
+\subsection{The Heisenberg picture}
+We have tried to make our theory Lorentz invariant, but it's currently in a terrible state. Indeed, the time evolution happens in the states, while the operators are time-independent. Fortunately, from IID Principles of Quantum Mechanics, we know that there is a way to encode the time evolution in the operators instead, via the Heisenberg picture.
+
+Recall that the equation of motion is given by the Schr\"odinger equation
+\[
+ i \frac{\d}{\d t}\bket{\psi(t)} = H \bket{\psi(t)}.
+\]
+We can write the solution formally as
+\[
+ \bket{\psi(t)} = e^{-iHt} \bket{\psi(0)}.
+\]
+Thus we can think of $e^{-iHt}$ as the ``time evolution operator'' that sends $\bket{\psi(0)}$ forward in time by $t$.
+
+Now if we are given an initial state $\bket{\psi(0)}$, and we want to know what an operator $O_S$ does to it at time $t$, we can first apply $e^{-iHt}$ to let $\bket{\psi(0)}$ evolve to time $t$, then apply the operator $O_S$, then pull the result back to time $0$. So in total, we obtain the operator
+\[
+ O_H(t) = e^{iHt} O_S e^{-iHt}.
+\]
+To evaluate this expression, it is often convenient to note the following result:
+\begin{prop}
+ Let $A$ and $B$ be operators. Then
+ \[
+ e^A B e^{-A} = B + [A, B] + \frac{1}{2!}[A, [A, B]] + \frac{1}{3!}[A, [A, [A, B]]] + \cdots.
+ \]
+ In particular, if $[A, B] = c B$ for some constant $c$, then we have
+ \[
+ e^A B e^{-A} = e^c B.
+ \]
+\end{prop}
+
+\begin{proof}
+ For $\lambda$ a real variable, note that
+ \begin{align*}
+ \frac{\d}{\d \lambda} (e^{\lambda A}B e^{-\lambda A}) &= \lim_{\varepsilon \to 0} \frac{e^{(\lambda + \varepsilon) A} B e^{-(\lambda + \varepsilon) A} - e^{\lambda A}Be^{-\lambda A}}{\varepsilon}\\
+ &= \lim_{\varepsilon \to 0} e^{\lambda A}\frac{e^{\varepsilon A}B e^{-\varepsilon A}- B}{\varepsilon} e^{-\lambda A}\\
+ &= \lim_{\varepsilon \to 0} e^{\lambda A}\frac{(1 + \varepsilon A)B(1 - \varepsilon A) - B + o(\varepsilon)}{\varepsilon} e^{-\lambda A}\\
+ &= \lim_{\varepsilon \to 0} e^{\lambda A} \frac{(\varepsilon (AB - BA) + o(\varepsilon))}{\varepsilon}e^{-\lambda A}\\
+ &= e^{\lambda A}[A, B] e^{-\lambda A}.
+ \end{align*}
+ So by induction, we have
+ \[
+ \frac{\d^n}{\d \lambda^n}(e^{\lambda A}B e^{-\lambda A}) = e^{\lambda A} [A, [A, \cdots[A, B]\cdots]]e^{-\lambda A}.
+ \]
+ Evaluating these at $\lambda = 0$, we obtain a power series representation
+ \[
+ e^{\lambda A}B e^{-\lambda A} = B + \lambda [A, B] + \frac{\lambda^2}{2}[A, [A, B]] + \cdots.
+ \]
+ Putting $\lambda = 1$ then gives the desired result.
+\end{proof}
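A quick sanity check of the proposition with $2 \times 2$ matrices: if we choose $A$ nilpotent ($A^2 = 0$), then $e^{\pm A} = 1 \pm A$ exactly, and the commutator series terminates after the $[A, [A, B]]$ term (since $\mathrm{ad}_A^3 = 0$ when $A^2 = 0$), so both sides can be compared exactly. The particular matrices below are arbitrary choices:

```python
# 2x2 matrix helpers (plain lists; enough for a sanity check)
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scale(t, X):
    return [[t * X[i][j] for j in range(2)] for i in range(2)]

def comm(X, Y):
    return add(mul(X, Y), scale(-1.0, mul(Y, X)))

A = [[0.0, 1.0], [0.0, 0.0]]  # nilpotent: A^2 = 0, so e^{±A} = I ± A exactly
B = [[2.0, 0.0], [1.0, -1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

eA = add(I, A)
eAinv = add(I, scale(-1.0, A))

lhs = mul(mul(eA, B), eAinv)
# Series B + [A, B] + (1/2)[A, [A, B]] terminates because ad_A^3 = 0 here
rhs = add(add(B, comm(A, B)), scale(0.5, comm(A, comm(A, B))))
```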
+In the Heisenberg picture, one can directly verify that the commutation relations for our operators $\phi(\mathbf{x}, t)$ and $\pi(\mathbf{x}, t)$ now become \emph{equal time} commutation relations
+\[
+ [\phi(\mathbf{x}, t), \phi(\mathbf{y}, t)] = [\pi(\mathbf{x}, t), \pi(\mathbf{y}, t)] = 0,\quad [\phi(\mathbf{x}, t), \pi(\mathbf{y}, t)] = i \delta^3(\mathbf{x} - \mathbf{y}).
+\]
+For an arbitrary operator $O_H$, we can compute
+\begin{align*}
+ \frac{\d O_H}{\d t} &= \frac{\d}{\d t}(e^{iHt}O_S e^{-iHt})\\
+ &= iH e^{iHt}O_Se^{-iHt} + e^{iHt}O_S (-iH e^{-iHt})\\
+ &= i[H, O_H],
+\end{align*}
+where we used the fact that the Hamiltonian commutes with any function of itself. This gives the time evolution for the operators.
+
+Recall that in quantum mechanics, when we transformed into the Heisenberg picture, a miracle occurred, and the equations of motion of the operators became the usual classical equations of motion. This will happen again. For $O_S = \phi(\mathbf{x})$, we have
+\begin{align*}
+ \dot{\phi}(\mathbf{x}, t) &= i[H, \phi(\mathbf{x}, t)] \\
+ &= i\int \d^3 \mathbf{y}\;\frac{1}{2} \, [\pi^2(\mathbf{y}, t) + (\nabla_y \phi(\mathbf{y}, t))^2 + m^2 \phi^2(\mathbf{y}, t), \phi(\mathbf{x}, t)]\\
+ &= i\int \d^3 \mathbf{y}\;\frac{1}{2} \, [\pi^2(\mathbf{y}, t), \phi(\mathbf{x}, t)]\\
+ &= i\int \d^3 \mathbf{y}\;\frac{1}{2} \, \bigl( \pi(\mathbf{y}, t) [\pi(\mathbf{y}, t), \phi(\mathbf{x}, t)] + [\pi(\mathbf{y}, t), \phi(\mathbf{x}, t)] \pi(\mathbf{y}, t)\bigr)\\
+ &= i\int \d^3 \mathbf{y}\; \pi(\mathbf{y}, t) (-i \delta^3(\mathbf{x} - \mathbf{y}))\\
+ &= \pi(\mathbf{x}, t).
+\end{align*}
+Similarly, we have
+\begin{align*}
+ \dot{\pi}(\mathbf{x}, t) &= i[H, \pi(\mathbf{x}, t)]\\
+ &= \frac{i}{2}\int \d^3 \mathbf{y} \left( [(\nabla_y \phi(\mathbf{y}, t))^2, \pi(\mathbf{x}, t)] + m^2 [\phi^2(\mathbf{y}, t), \pi(\mathbf{x}, t)]\right)\\
+ \intertext{Now notice that $\nabla_y$ does not interact with $\pi(\mathbf{x}, t)$. So this becomes}
+ &= \int \d^3 \mathbf{y} \left( -(\nabla_y \phi(\mathbf{y}, t) \cdot \nabla_y \delta^3(\mathbf{x} - \mathbf{y})) - m^2 \phi(\mathbf{y}, t) \delta^3(\mathbf{x} - \mathbf{y})\right)\\
+ \intertext{Integrating the first term by parts, we obtain}
+ &= \int \d^3 \mathbf{y} \left( (\nabla^2_y \phi(\mathbf{y}, t)) \delta^3(\mathbf{x} - \mathbf{y}) - m^2 \phi(\mathbf{y}, t) \delta^3(\mathbf{x} - \mathbf{y}) \right)\\
+ &= \nabla^2 \phi(\mathbf{x}, t) - m^2 \phi(\mathbf{x}, t).
+\end{align*}
+Finally, noting that $\dot{\pi} = \ddot{\phi}$ and rearranging, we obtain
+\[
+ \partial_\mu \partial^\mu \phi(\mathbf{x}, t) + m^2 \phi(\mathbf{x}, t) = 0.
+\]
+This is just the usual Klein--Gordon equation!
+
+It is interesting to note that this final equation of motion \emph{is} Lorentz invariant, but our original
+\[
+ \frac{\d \phi}{\d t} = i[H, \phi]
+\]
+is not. Indeed, $\frac{\d\phi}{\d t}$ itself singles out time, and to define $H$ we also need to single out a preferred time direction. However, this statement tells us essentially that $H$ generates time evolution, which is true in any frame, for the appropriate definition of $H$ and $t$ in that frame.
+
+What happens to the creation and annihilation operators? We use our previous magic formula to find that
+\begin{align*}
+ e^{iHt}a_\mathbf{p} e^{-iHt} &= e^{-iE_\mathbf{p} t} a_\mathbf{p}\\
+ e^{iHt} a_\mathbf{p}^\dagger e^{-iHt} &= e^{iE_\mathbf{p} t} a_\mathbf{p}^\dagger.
+\end{align*}
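This is the single-mode analogue of the oscillator result, and is easy to verify numerically: for $H = \omega a^\dagger a$ in a truncated basis, $H$ is diagonal, so $e^{\pm iHt}$ are diagonal phase matrices and the conjugation acts entrywise. A sketch (truncation dimension, $\omega$ and $t$ are arbitrary illustration choices):

```python
import cmath
import math

dim = 5  # truncation (illustration only)
omega, t = 0.9, 1.7

a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(dim)] for i in range(dim)]

# H = omega a†a is diagonal with eigenvalues omega * n, so e^{iHt} is a diagonal phase matrix
phase = [cmath.exp(1j * omega * n * t) for n in range(dim)]

# e^{iHt} a e^{-iHt}: conjugation by diagonal matrices acts entrywise
a_H = [[phase[i] * a[i][j] * phase[j].conjugate() for j in range(dim)] for i in range(dim)]

# Expected: e^{-i omega t} a, the single-mode analogue of e^{iHt} a_p e^{-iHt} = e^{-iE_p t} a_p
expected = [[cmath.exp(-1j * omega * t) * a[i][j] for j in range(dim)] for i in range(dim)]

err = max(abs(a_H[i][j] - expected[i][j]) for i in range(dim) for j in range(dim))
```

The single-mode phases found here are exactly the factors $e^{\mp i E_\mathbf{p} t}$ attached to $a_\mathbf{p}$ and $a_\mathbf{p}^\dagger$ above.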
+These extra factors of $e^{\pm iE_\mathbf{p} t}$ might seem really annoying to carry around, but they are not! They merge perfectly into the rest of our formulas relativistically. Indeed, we have
+\begin{align*}
+ \phi(x) \equiv \phi(\mathbf{x}, t) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3}\frac{1}{\sqrt{2 E_\mathbf{p}}} \left(a_\mathbf{p}e^{-iE_\mathbf{p} t} e^{i\mathbf{p}\cdot \mathbf{x}} + a_\mathbf{p}^\dagger e^{iE_\mathbf{p} t} e^{-i\mathbf{p}\cdot \mathbf{x}}\right)\\
+ &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3}\frac{1}{\sqrt{2 E_\mathbf{p}}} \left(a_\mathbf{p} e^{-ip \cdot x} + a_\mathbf{p}^\dagger e^{i p \cdot x}\right),
+\end{align*}
+where now $p \cdot x$ is the inner product of the $4$-vectors $p$ and $x$! If we use our relativistically normalized
+\[
+ a^\dagger(p) = \sqrt{2 E_\mathbf{p}} a_\mathbf{p}^\dagger
+\]
+instead, then the equation becomes
+\begin{align*}
+ \phi(x) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2E_\mathbf{p}}\left(a(p) e^{-ip \cdot x} + a(p)^\dagger e^{i p \cdot x}\right),
+\end{align*}
+which is made up of Lorentz invariant quantities only!
+
+Unfortunately, one should note that we are not yet completely Lorentz-invariant, as our commutation relations are \emph{equal time} commutation relations. However, this is the best we can do, at least in this course.
+
+\subsubsection*{Causality}
+Since we are doing relativistic things, it is important to ask ourselves if our theory is causal. If two points $x, y$ in spacetime are space-like separated, then a measurement of the field at $x$ should not affect the measurement of the field at $y$. So measuring $\phi(x)$ after $\phi(y)$ should give the same thing as measuring $\phi(y)$ after $\phi(x)$. So we must have $[\phi(x), \phi(y)] = 0$.
+
+\begin{defi}[Causal theory]\index{causal theory}
+ A theory is \emph{causal} if for any space-like separated points $x, y$, and any two fields $\phi, \psi$, we have
+ \[
+ [\phi(x), \psi(y)] = 0.
+ \]
+\end{defi}
+Does our theory satisfy causality? For convenience, we will write
+\[
+ \Delta(x - y) = [\phi(x), \phi(y)].
+\]
+We now use the familiar trick that $\Delta(x - y)$ is Lorentz-invariant, so we can pick a convenient frame to do the computations. Suppose $x$ and $y$ are space-like separated. Then there is some frame where they are of the form
+\[
+ x = (\mathbf{x}, t),\quad y = (\mathbf{y}, t)
+\]
+for the \emph{same} $t$. Then our equal time commutation relations tell us that
+\[
+ \Delta(x - y) = [\phi(\mathbf{x}, t), \phi(\mathbf{y}, t)] = 0.
+\]
+So our theory is indeed causal!
+
+What happens for time-like separation? We can derive the more general formula
+\begin{align*}
+ \Delta(x - y) &= [\phi(x), \phi(y)]\\
+ &= \int \frac{\d^3 \mathbf{p}\;\d^3 \mathbf{q}}{(2\pi)^6}\frac{1}{\sqrt{4E_\mathbf{p} E_\mathbf{q}}} \left([a_\mathbf{p}, a_\mathbf{q}]e^{i(- p\cdot x - q\cdot y)} + [a_\mathbf{p}, a_\mathbf{q}^\dagger] e^{i(-p \cdot x + q\cdot y)}\right.\\
+ &\hphantom{= \int \frac{\d^3 \mathbf{p}\;\d^3 \mathbf{q}}{(2\pi)^6}\frac{1}{\sqrt{4E_\mathbf{p} E_\mathbf{q}}}} \left.+[a_\mathbf{p}^\dagger, a_\mathbf{q}]e^{i(p\cdot x - q\cdot y)} + [a_\mathbf{p}^\dagger, a_\mathbf{q}^\dagger] e^{i(p\cdot x + q \cdot y)}\right)\\
+ &= \int \frac{\d^3 \mathbf{p}\;\d^3 \mathbf{q}}{(2\pi)^6}\frac{1}{\sqrt{4E_\mathbf{p} E_\mathbf{q}}} (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}) \left(e^{i(-p \cdot x + q\cdot y)} - e^{i(p\cdot x - q \cdot y)}\right)\\
+ &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3}\frac{1}{2E_\mathbf{p}} (e^{ip\cdot(y - x)} - e^{-ip\cdot(y - x)}).
+\end{align*}
+This \emph{doesn't} vanish for time-like separation, which we may wlog take to be of the form $x = (\mathbf{x}, 0), y = (\mathbf{x}, t)$, since we have
+\[
+ [\phi(\mathbf{x}, 0), \phi (\mathbf{x}, t)] \propto e^{-imt} - e^{imt}.
+\]
+%but it does for space-like separation. We note that
+%\[
+% \Delta (x, y) = 0
+%\]
+%at equal times, but is Lorentz invariant. So it can only depend on the combination $(x - y)^2$. So if it vanishes at equal times (when $x \not= y$), it must vanish for all $(x - y)^2 < 0$.
+%
+%So this theory is causal.
+%
+%Note that we have
+%\[
+% [\phi(\mathbf{x}, t), \phi(\mathbf{y}, t)] = \frac{1}{2}\int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{\mathbf{p}^2 + m^2}} \left(e^{i\mathbf{p}\cdot (\mathbf{x} - \mathbf{y})} - e^{-i\mathbf{p}\cdot (\mathbf{x} - \mathbf{y})}\right).
+%\]
+%When $\mathbf{x} \not= \mathbf{y}$, we can change the sign of the $\mathbf{p}$ in the second $e^{-i\mathbf{p}\cdot (\mathbf{x} - \mathbf{y})}$, since we are integrating over all possible momentum. So the two terms cancel, and the result vanishes.
+%
+%This will also hold in an interacting theory.
+
+\subsection{Propagators}
+We now introduce the very important idea of a propagator. Suppose we are living in the vacuum. We want to know the amplitude for a particle to appear at $x$ and subsequently be destroyed at $y$. To quantify this, we introduce the \emph{propagator}.
+\begin{defi}[Propagator]\index{propagator}
+ The \emph{propagator} of a real scalar field $\phi$ is defined to be
+ \[
+ D(x - y) = \brak{0}\phi(x) \phi(y) \bket{0}.
+ \]
+\end{defi}
+When we study interactions later, we will see that the propagator indeed tells us the amplitude for a particle to ``propagate'' from $x$ to $y$.
+
+It is not difficult to compute the value of the propagator. We have
+\[
+ \brak{0}\phi(x) \phi(y) \bket{0} = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{\d^3 \mathbf{p}'}{(2\pi)^3} \frac{1}{\sqrt{4E_\mathbf{p} E_{\mathbf{p}'}}} \brak{0}a_\mathbf{p} a_{\mathbf{p}'}^\dagger \bket{0} e^{-i p \cdot x + i p' \cdot y},
+\]
+with all other terms in $\phi(x) \phi(y)$ vanishing because they annihilate the vacuum. We now use the fact that
+\[
+ \brak{0}a_\mathbf{p} a_{\mathbf{p}'}^\dagger\bket{0} = \brak{0}[a_\mathbf{p}, a_{\mathbf{p}'}^\dagger]\bket{0} = (2\pi)^3 \delta^3 (\mathbf{p} - \mathbf{p}'),
+\]
+since $a_\mathbf{p'}^\dagger a_\mathbf{p} \bket{0} = 0$. So we have
+\begin{prop}
+ \[
+ D(x - y) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2 E_\mathbf{p}} e^{-i p \cdot (x - y)}.
+ \]
+\end{prop}
+
+The propagator relates to the $\Delta(x - y)$ we previously defined by the following simple relationship:
+\begin{prop}
+ We have
+ \[
+ \Delta(x - y) = D(x - y) - D(y - x).
+ \]
+\end{prop}
+
+\begin{proof}
+ \[
+ \Delta(x - y) = [\phi(x), \phi(y)] = \brak{0}[\phi(x), \phi(y)]\bket{0} = D(x - y) - D(y - x),
+ \]
+ where the second equality follows as $[\phi(x), \phi(y)]$ is just an ordinary function.
+\end{proof}
+
+For a space-like separation $(x - y)^2 < 0$, one can show that it decays as
+\[
+ D(x - y) \sim e^{-m|\mathbf{x} - \mathbf{y}|}.
+\]
+This is non-zero! So the field ``leaks'' out of the light cone a bit. However, since there is no Lorentz-invariant way to order the events, if a particle can travel in a spacelike direction $\mathbf{x} \to \mathbf{y}$, it can just as easily travel in the other direction. So we would expect $D(x - y) = D(y - x)$, and in a measurement, the two amplitudes cancel. This indeed agrees with our previous computation that $\Delta(x - y) = 0$.
+
+%With a complex scalar field, we get something more interesting. We have
+%\[
+% [\psi(x), \psi^\dagger(y)] = 0
+%\]
+%outside the light cone. The interpretation now is that the propagation of the particle from $\mathbf{x} \to \mathbf{y}$ cancels the propagation of the antiparticle in the opposite direction.
+
+\subsubsection*{Feynman propagator}
+As we will later see, it turns out that what actually is useful is not the above propagator, but the Feynman propagator:
+\begin{defi}[Feynman propagator]\index{Feynman propagator}
+ The \emph{Feynman propagator} is
+ \[
+ \Delta_F (x - y) = \brak{0} T \phi(x) \phi(y) \bket{0} =
+ \begin{cases}
+ \brak{0} \phi(x) \phi(y) \bket{0} & x^0 > y^0\\
+ \brak{0} \phi(y) \phi(x) \bket{0} & y^0 > x^0
+ \end{cases},
+ \]
+ where $T$ denotes the time-ordered product.
+\end{defi}
+Note that it seems like this is not a Lorentz-invariant quantity, because we are directly comparing $x^0$ and $y^0$. However, this is actually not a problem, since when $x$ and $y$ are space-like separated, then $\phi(x)$ and $\phi(y)$ commute, and the two quantities agree.
+
+\begin{prop}
+ We have
+ \[
+ \Delta_F(x - y) = \int \frac{\d^4 p}{(2\pi)^4 } \frac{i}{p^2 - m^2} e^{-ip \cdot (x - y)}.
+ \]
+ This expression is \emph{a priori} ill-defined since for each $\mathbf{p}$, the integrand over $p^0$ has a pole whenever $(p^0)^2 = \mathbf{p}^2 + m^2$. So we need a prescription for avoiding this. We replace this with a complex contour integral with contour given by
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0);
+ \draw [->] (0, -2) -- (0, 2);
+
+ \node [circ] at (-2, 0) {};
+ \node [above] at (-2, 0) {$-E_\mathbf{p}$};
+ \node [circ] at (2, 0) {};
+ \node [below] at (2, 0) {$E_\mathbf{p}$};
+
+ \draw [thick, blue] (-4, 0) -- (-2.5, 0) arc (180:360:0.5) -- (1.5, 0) arc(180:0:0.5) -- (4, 0);
+ \end{tikzpicture}
+ \end{center}
+\end{prop}
+
+\begin{proof}
+ To compare with our previous computations of $D(x - y)$, we evaluate the $p^0$ integral for each $\mathbf{p}$. Writing
+ \[
+ \frac{1}{p^2 - m^2} = \frac{1}{(p^0)^2 - E_\mathbf{p}^2} = \frac{1}{(p^0 - E_\mathbf{p})(p^0 + E_\mathbf{p})},
+ \]
+ we see that the residue of the pole at $p^0 = \pm E_\mathbf{p}$ is $\pm \frac{1}{2E_\mathbf{p}}$.
+
+ When $x^0 > y^0$, we close the contour in the lower half-plane $p^0 \to -i\infty$, so that $e^{-ip^0(x^0 - y^0)} \to e^{-\infty} = 0$. Then $\int \d p^0$ picks up the residue at $p^0 = E_\mathbf{p}$. So the Feynman propagator is
+ \begin{align*}
+ \Delta_F (x - y) &= \int \frac{\d ^3 \mathbf{p}}{(2\pi)^4} \frac{(-2\pi i)}{2E_\mathbf{p}} i e^{-iE_\mathbf{p}(x^0 - y^0) + i \mathbf{p} \cdot (\mathbf{x} - \mathbf{y})}\\
+ &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2 E_\mathbf{p}} e^{-ip \cdot (x - y)}\\
+ &= D(x - y)
+ \end{align*}
+ When $x^0 < y^0$, we close the contour in the upper-half plane. Then we have
+ \begin{align*}
+ \Delta_F(x - y) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^4} \frac{2\pi i}{-2E_\mathbf{p}} i e^{iE_\mathbf{p} (x^0 - y^0) + i\mathbf{p} \cdot (\mathbf{x} - \mathbf{y})}\\
+ &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2E_\mathbf{p}} e^{-i E_\mathbf{p}(y^0 - x^0) - i \mathbf{p}\cdot (\mathbf{y} - \mathbf{x})}\\
+ \intertext{We again use the trick of flipping the sign of $\mathbf{p}$ to obtain}
+ &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2E_\mathbf{p}} e^{-ip\cdot(y - x)}\\
+ &= D(y - x). \qedhere
+ \end{align*}
+\end{proof}
+Usually, instead of specifying the contour, we write
+\[
+ \Delta_F(x - y) = \int \frac{\d^4 p}{(2\pi)^4} \frac{i e^{-ip\cdot(x - y)}}{p^2 - m^2 + i \varepsilon},
+\]
+where $\varepsilon$ is taken to be small, or infinitesimal. Then the poles are now located at
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0);
+ \draw [->] (0, -2) -- (0, 2);
+
+ \node [circ] at (-2, 0.1) {};
+ \node [above] at (-2, 0.1) {$-E_\mathbf{p}$};
+ \node [circ] at (2, -0.1) {};
+ \node [below] at (2, -0.1) {$E_\mathbf{p}$};
+ \end{tikzpicture}
+\end{center}
+So integrating along the real axis would give us the contour we had above. This is known as the ``$i\varepsilon$-prescription''.\index{$i\varepsilon$-prescription}
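+
+To see why this prescription reproduces the contour above, note that to first order in $\varepsilon$, we can factorize the denominator as
+\[
+ p^2 - m^2 + i\varepsilon = (p^0 - E_\mathbf{p} + i\varepsilon')(p^0 + E_\mathbf{p} - i\varepsilon'),\quad \varepsilon' = \frac{\varepsilon}{2E_\mathbf{p}} > 0.
+\]
+So the pole at $-E_\mathbf{p}$ is pushed slightly above the real axis, the pole at $+E_\mathbf{p}$ slightly below it, and the straight real-axis contour threads between them exactly as in the previous picture.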
+
+The propagator is in fact the Green's function of the Klein--Gordon operator:
+\begin{align*}
+ (\partial_t^2 - \nabla^2 + m^2) \Delta_F(x - y) &= \int \frac{\d^4 p}{(2\pi)^4} \frac{i}{p^2 - m^2}(-p^2 + m^2) e^{-ip\cdot(x - y)}\\
+ &= -i \int \frac{\d^4 p}{(2\pi)^4} e^{-ip\cdot(x - y)}\\
+ &= -i \delta^4(x - y).
+\end{align*}
+
+\subsubsection*{Other propagators}
+For some purposes, it's useful to pick other contours, e.g.\ the \term{retarded Green's function} defined as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-4, 0) -- (4, 0);
+ \draw [->] (0, -2) -- (0, 2);
+
+ \node [circ] at (-2, 0) {};
+ \node [below] at (-2, 0) {$-E_\mathbf{p}$};
+ \node [circ] at (2, 0) {};
+ \node [below] at (2, 0) {$E_\mathbf{p}$};
+
+ \draw [thick, blue] (-4, 0) -- (-2.5, 0) arc (180:0:0.5) -- (1.5, 0) arc(180:0:0.5) -- (4, 0);
+ \end{tikzpicture}
+\end{center}
+This can be given in terms of operators by
+\begin{defi}[Retarded Green's function]
+ The \emph{retarded Green's function} is given by
+ \[
+ \Delta_R(x - y) =
+ \begin{cases}
+ [\phi(x), \phi(y)]& x^0 > y^0\\
+ 0 & y^0 > x^0
+ \end{cases}
+ \]
+\end{defi}
+This is useful if we have some initial field configuration, and we want to see how it evolves in the presence of some source. This solves the ``inhomogeneous Klein--Gordon equation'', i.e.\ an equation of the form
+\[
+ \partial_\mu \partial^\mu \phi(x) + m^2 \phi(x) = J(x),
+\]
+where $J(x)$ is some background function.
+
+One also defines the \term{advanced Green's function} which vanishes when $y^0 < x^0$ instead. This is useful if we know the end-point of a field configuration and want to know where it came from.
+
+However, in general, the Feynman propagator is the most useful in quantum field theory, and it is the only one we will need.
+
+\section{Interacting fields}
+Free theories are special --- we can determine the exact spectrum, but nothing interacts. These facts are related. Free theories have at most quadratic terms in the Lagrangian. So the equations of motion are linear. So we get exact quantization, and we get multi-particle states with no interactions by superposition.
+
+\subsection{Interaction Lagrangians}
+How do we introduce field interactions? The short answer is to throw more things into the Lagrangian to act as the ``potential''.
+
+We start with the case where there is only one real scalar field, and the field interacts with itself. Then the general form of the Lagrangian can be given by
+\[
+ \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2}m^2 \phi^2 - \sum_{n = 3}^\infty \frac{\lambda_n}{n!} \phi^n.
+\]
+These $\lambda_n$ are known as the \term{coupling constants}.
+
+It is almost impossible to work with such a Lagrangian directly, so we want to apply perturbation theory to it. To do so, we need to make sure our interactions are ``small''. The obvious thing to require would be that $\lambda_n \ll 1$. However, this makes sense only if the $\lambda_n$ are dimensionless. So let's try to figure out what the dimensions are.
+
+Recall that we have
+\[
+ [S] = 0.
+\]
+Since we know $S = \int \d^4 x \mathcal{L}$, and $[\d^4 x] = -4$, we have
+\[
+ [\mathcal{L}] = 4.
+\]
+We also know that
+\[
+ [\partial_\mu] = 1.
+\]
+So the $\partial_\mu \phi \partial^\mu \phi$ term tells us that
+\[
+ [\phi] = 1.
+\]
+From these, we deduce that we must have
+\[
+ [\lambda_n] = 4 - n.
+\]
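+As a quick consistency check, every term in $\mathcal{L}$ must have dimension $4$, and for the interaction terms this gives
+\[
+ [\lambda_n \phi^n] = [\lambda_n] + n [\phi] = [\lambda_n] + n = 4,
+\]
+so, for example, $[\lambda_3] = 1$ (a mass), $[\lambda_4] = 0$ and $[\lambda_6] = -2$.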
+So $\lambda_n$ isn't always dimensionless. What we can do is to compare it with some \emph{energy scale} $E$. For example, if we are doing collisions in the LHC, then the energy of the particles would be our energy scale. If we have picked such an energy scale, then looking at $\lambda_n E^{n - 4}$ would give us a dimensionless parameter since $[E] = 1$.
+
+We can separate this into three cases:
+\begin{enumerate}
+ \item $n = 3$: here $E^{n - 4} = E^{-1}$ decreases with $E$. So $\lambda_3 E^{-1}$ would be small at high energies, and large at low energies.
+
+% This in fact happens in reality, as we have a term like this in the standard model for couplings of the Higgs boson.
+
+ Such perturbations are called \term{relevant perturbation}\emph{s} at low energies. In a relativistic theory, we always have $E > m$. So we can always make the perturbation small by picking $\lambda_3 \ll m$ (at least when we are studying general theory; we cannot go and change the coupling constant in nature so that our maths work out better).
+
+ \item $n = 4$: Here the dimensionless parameter is just $\lambda_4$ itself. So this is small whenever $\lambda_4 \ll 1$. This is known as a \term{marginal perturbation}.
+
+ \item $n > 4$: This has dimensionless parameter $\lambda_n E^{n - 4}$, which is an increasing function of $E$. So this is small at low energies, and large at high energies. These operators are called \term{irrelevant perturbation}\emph{s}.
+\end{enumerate}
+While we call them ``irrelevant'', they are indeed relevant, as it is typically difficult to avoid high energies in quantum field theory. Indeed, we have seen that we can have arbitrarily large vacuum fluctuations. In the Advanced Quantum Field Theory course, we will see that these theories are ``non-renormalizable''.
+
+In this course, we will consider only weakly coupled field theories --- ones that can truly be considered as small perturbations of the free theory at all energies.
+
+We will quickly describe some interaction Lagrangians that we will study in more detail later on.
+\begin{eg}[$\phi^4$ theory]\index{$\phi^4$ theory}
+ Consider the $\phi^4$ theory
+ \[
+ \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2}m^2 \phi^2 - \frac{\lambda}{4!}\phi^4,
+ \]
+ where $\lambda \ll 1$. We can already guess the effects of the final term by noting that here we have
+ \[
+ [H, N] \not= 0.
+ \]
+ So particle number is \emph{not} conserved. Expanding the last term in the Lagrangian in terms of $a_\mathbf{p}, a_\mathbf{p}^\dagger$, we get terms involving things like $a_\mathbf{p} a_\mathbf{p} a_\mathbf{p} a_\mathbf{p}$ or $a_\mathbf{p}^\dagger a_\mathbf{p}^\dagger a_\mathbf{p}^\dagger a_\mathbf{p}^\dagger$ or $a_\mathbf{p}^\dagger a_\mathbf{p}^\dagger a_\mathbf{p}^\dagger a_\mathbf{p}$, and all other combinations you can think of. These will create or destroy particles.
+\end{eg}
+
+We also have a Lagrangian that involves two fields --- a real scalar field and a complex scalar field.
+\begin{eg}[Scalar Yukawa theory]\index{scalar Yukawa theory}
+ In the early days, we found things called \emph{pions} that seemed to mediate nuclear reactions. At that time, people did not know that things are made up of quarks. So they modelled these pions in terms of a scalar field, and we have
+ \[
+ \mathcal{L} = \partial_\mu \psi^* \partial^\mu \psi + \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - M^2 \psi^* \psi - \frac{1}{2} m^2 \phi^2 - g \psi^* \psi \phi.
+ \]
+ We take $g \ll m, M$. We still have $[Q, H] = 0$, as all terms are at most quadratic in the $\psi$'s. So the number of $\psi$-particles minus the number of $\psi$-antiparticles is still conserved. However, there is no particle conservation for $\phi$.
+
+ A word of warning --- the potential
+ \[
+ V = M^2 \psi^* \psi + \frac{1}{2} m^2 \phi^2 + g \psi^*\psi \phi
+ \]
+ has a stable local minimum when the fields are zero, but it is unbounded below for large $-g \phi$. So we can't push the theory too far.
+\end{eg}
+%The study of strongly coupled field theories is a major research topic in quantum field theory. In some particular solids, we can get charge fractionalization --- Solids contain electrons, and each has $\pm 1$ charges. However, in complicated systems, we can get excitations with fractional charges, and this gives the fractional quantum hall effect. Another topic people study is confinement. We want to figure out what quarks and gluons are always stuck together in mesons and baryons at lower energies. Another issue is emergent gravity --- there are some theories in four dimensions that are somehow equivalent to 10-dimensional quantum gravity. This is known as the \term{AdS/CFT correspondence}, standing for anti-de Sitter space and conformal field theory.
+
+\subsection{Interaction picture}
+How do we actually study interacting fields? We can imagine that the Hamiltonian is split into two parts, one of which is the ``free field'' part $H_0$, and the other part is the ``interaction'' part $H_{\mathrm{int}}$. For example, in the $\phi^4$ theory, we can split it up as
+\[
+ H = \int\d^3 \mathbf{x} \left(\vphantom{\frac{\lambda}{4}}\right.\underbrace{\frac{1}{2} (\pi^2 + (\nabla \phi)^2 + m^2 \phi^2)}_{H_0} + \underbrace{\frac{\lambda}{4!} \phi^4}_{H_{\mathrm{int}}}\left.\vphantom{\frac{\lambda}{4!}}\right).
+\]
+The idea is that the \term{interaction picture} mixes the Schr\"odinger picture and the Heisenberg picture, where the simple time evolutions remain in the operators, whereas the complicated interactions live in the states. This is a very useful trick if we want to deal with small interactions.
+
+To see how this actually works, we revisit what the Schr\"odinger picture and the Heisenberg picture are.
+
+\separator
+
+In the Schr\"odinger picture, we have a state $\bket{\psi(t)}$ that evolves in time, and operators $O_S$ that are fixed. In anticipation of what we are going to do next, we are going to say that the operators are a function of time. It's just that they are constant functions:
+\[
+ \frac{\d}{\d t}O_S(t) = 0.
+\]
+The states then evolve according to the Schr\"odinger equation
+\[
+ i \frac{\d}{\d t} \bket{\psi(t)}_S = H \bket{\psi(t)}_S.
+\]
+Since $H$ is constant in time, this allows us to write the solution to this equation as
+\[
+ \bket{\psi(t)}_S = e^{-iHt} \bket{\psi(0)}_S.
+\]
+Here $e^{-iHt}$ is the \term{time evolution operator}, which is a unitary operator. More generally, we have
+\[
+ \bket{\psi(t)}_S = e^{-iH(t - t_0)} \bket{\psi(0)}_S = U_S(t, t_0) \bket{\psi(t_0)}_S,
+\]
+where we define
+\[
+ U_S(t, t_0) = e^{-iH(t - t_0)}.
+\]
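+These time evolution operators compose in the obvious way,
+\[
+ U_S(t_2, t_1) U_S(t_1, t_0) = e^{-iH(t_2 - t_1)} e^{-iH(t_1 - t_0)} = e^{-iH(t_2 - t_0)} = U_S(t_2, t_0),
+\]
+a property we will use without further comment.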
+To obtain information about the system at time $t$, we look at what happens when we apply $O_S(t) \bket{\psi(t)}_S$.
+
+\separator
+
+In the Heisenberg picture, we define a new set of operators, relating to the Schr\"odinger picture by
+\begin{align*}
+ O_H(t) &= e^{iHt} O_S(t) e^{-iHt}\\
+ \bket{\psi(t)}_H &= e^{iHt}\bket{\psi(t)}_S.
+\end{align*}
+In this picture, the states $\bket{\psi(t)}_H$ are actually constant in time, and are just equal to $\bket{\psi(0)}_S$ all the time. The equations of motion are then
+\begin{align*}
+ \frac{\d}{\d t}O_H &= i [H, O_H]\\
+ \frac{\d}{\d t}\bket{\psi(t)}_H &= 0.
+\end{align*}
+To know about the system at time $t$, we apply
+\[
+ O_H(t) \bket{\psi(t)}_H = e^{iHt}O_S(t) \bket{\psi(t)}_S.
+\]
+In this picture, the time evolution operator is just the identity map
+\[
+ U_H(t, t_0) = \id.
+\]
+\separator
+
+In the interaction picture, we do a mixture of both. We only shift the ``free'' part of the Hamiltonian to the operators. We define
+\begin{align*}
+ O_I(t) &= e^{iH_0 t}O_S(t) e^{-iH_0 t}\\
+ \bket{\psi(t)}_I &= e^{iH_0 t}\bket{\psi(t)}_S.
+\end{align*}
+In this picture, both the operators and the states change in time!
+
+Why is this a good idea? When we studied the Heisenberg picture of the free theory, we have already figured out what conjugating by $e^{iH_0 t}$ does to operators, and it was really simple! In fact, it made things look \emph{nicer}. On the other hand, time evolution of the state is now just generated by the interaction part of the Hamiltonian, rather than the whole thing, and this is much simpler (or at least shorter) to deal with:
+
+\begin{prop}
+ In the interaction picture, the equations of motion are
+ \begin{align*}
+ \frac{\d}{\d t}O_I &= i [H_0, O_I]\\
+ \frac{\d}{\d t}\bket{\psi(t)}_I &= H_I\bket{\psi(t)}_I,
+ \end{align*}
+ where $H_I$ is defined by
+ \[
+ H_I = (H_{\mathrm{int}})_I = e^{iH_0 t} (H_{\mathrm{int}})_S e^{-iH_0 t}.
+ \]
+\end{prop}
+
+\begin{proof}
+ The first part we've essentially done before, but we will write it out again for completeness.
+ \begin{align*}
+ \frac{\d}{\d t} (e^{iH_0 t}O_S e^{-iH_0 t}) &= \lim_{\varepsilon \to 0} \frac{e^{iH_0(t + \varepsilon)} O_S e^{-iH_0(t +\varepsilon)} - e^{iH_0 t}O_Se^{-iH_0 t}}{\varepsilon}\\
+ &= \lim_{\varepsilon \to 0} e^{iH_0 t}\frac{e^{iH_0 \varepsilon}O_S e^{-iH_0 \varepsilon}- O_S}{\varepsilon} e^{-iH_0 t}\\
+ &= \lim_{\varepsilon \to 0} e^{iH_0 t}\frac{(1 + iH_0 \varepsilon)O_S(1 - iH_0 \varepsilon) - O_S + o(\varepsilon)}{\varepsilon} e^{-iH_0 t}\\
+ &= \lim_{\varepsilon \to 0} e^{iH_0 t} \frac{i\varepsilon (H_0 O_S - O_S H_0) + o(\varepsilon)}{\varepsilon}e^{-iH_0 t}\\
+ &= e^{iH_0 t}i [H_0 , O_S] e^{-iH_0 t}\\
+ &= i [H_0, O_I].
+ \end{align*}
+ For the second part, we start with Schr\"odinger's equation
+ \[
+ i \frac{\d \bket{\psi(t)}_S}{\d t} = H_S \bket{\psi(t)}_S.
+ \]
+ Putting in our definitions, we have
+ \[
+ i \frac{\d}{\d t}(e^{-iH_0 t}\bket{\psi(t)}_I) = (H_0 + H_{\mathrm{int}})_S e^{-iH_0 t} \bket{\psi(t)}_I.
+ \]
+ Using the product rule and chain rule, we obtain
+ \[
+ H_0 e^{-iH_0 t}\bket{\psi(t)}_I + ie^{-iH_0t}\frac{\d}{\d t}\bket{\psi(t)}_I = (H_0 + H_{\mathrm{int}})_S e^{-iH_0 t} \bket{\psi(t)}_I.
+ \]
+ Rearranging gives us
+ \[
+ i \frac{\d \bket{\psi(t)}_I}{\d t} = e^{iH_0 t} (H_{\mathrm{int}})_S e^{-iH_0 t} \bket{\psi(t)}_I. \qedhere
+ \]
+\end{proof}
+To obtain information about the state at time $t$, we look at what happens when we apply
+\[
+ O_I(t) \bket{\psi(t)}_I = e^{iH_0t}O_S(t) \bket{\psi(t)}_S.
+\]
+Now what is the time evolution operator $U(t, t_0)$? In the case of the Schr\"odinger picture, we had the equation of motion
+\[
+ i\frac{\d}{\d t}\bket{\psi(t)}_S = H\bket{\psi(t)}_S.
+\]
+So we could immediately write down
+\[
+ \bket{\psi(t)} = e^{-iH(t - t_0)} \bket{\psi(t_0)}.
+\]
+In this case, we \emph{cannot} do that, because $H_I$ is now a function of time. The next best guess would be
+\[
+ U_I(t, t_0) = \exp\left(-i \int_{t_0}^t H_I(t') \;\d t'\right).
+\]
+For this to be right, it has to satisfy
+\[
+ i\frac{\d}{\d t}U_I(t, t_0) = H_I(t) U_I(t, t_0).
+\]
+We can try to check whether this satisfies the equation of motion. If we take the series expansion of the exponential, then the second-order term is
+\[
+ \frac{(-i)^2}{2} \left(\int_{t_0}^t H_I(t') \;\d t'\right)^2.
+\]
+For the equation of motion to hold, the derivative of this should evaluate to $-iH_I(t)$ times the first-order term, i.e.
+\[
+ -i H_I \left( (-i)\int_{t_0}^t H_I(t')\;\d t'\right).
+\]
+We can compute its derivative to be
+\[
+ \frac{(-i)^2}{2} H_I(t) \left(\int_{t_0}^t H_I(t')\;\d t'\right) + \frac{(-i)^2}{2} \left(\int_{t_0}^t H_I(t') \;\d t'\right) H_I(t) .
+\]
+This is \emph{not} equal to what we wanted, because $H_I(t)$ and $H_I(t')$ do not commute in general.
+
+Yet, this is a good guess, and to get the real answer, we just need to make a slight tweak:
+\begin{prop}[Dyson's formula]\index{Dyson's formula}
+ The solution to the Schr\"odinger equation in the interaction picture is given by
+ \[
+ U(t, t_0) = T\exp\left(-i \int_{t_0}^t H_I(t') dt'\right),
+ \]
+ where $T$ stands for \term{time ordering}: operators evaluated at earlier times appear to the right of operators evaluated at later times when we write out the power series. More explicitly,
+ \[
+ T\{O_1(t_1) O_2(t_2)\} =
+ \begin{cases}
+ O_1(t_1) O_2(t_2) & t_1 > t_2\\
+ O_2(t_2) O_1(t_1) & t_2 > t_1
+ \end{cases}.
+ \]
+ We do not specify what happens when $t_1 = t_2$, but it doesn't matter in our case since the operators are then equal.
+
+ Thus, we have
+ \begin{multline*}
+ U(t, t_0) = \mathbf{1} - i \int_{t_0}^t \d t'\; H_I (t') + \frac{(-i)^2}{2} \left\{\int_{t_0}^t \d t' \int_{t'}^t\;\d t'' H_I(t'') H_I(t')\right.\\
+ \left.+ \int_{t_0}^t \d t' \int_{t_0}^{t'} \d t''\; H_I(t') H_I(t'')\right\} + \cdots.
+ \end{multline*}
+ We now notice that
+ \[
+ \int_{t_0}^t \d t' \int_{t'}^t \;\d t''\; H_I(t'') H_I(t') = \int_{t_0}^t \d t'' \int_{t_0}^{t''}\d t'\; H_I(t'') H_I(t'),
+ \]
+ since we are integrating over all $t_0 \leq t' \leq t'' \leq t$ on both sides. Swapping $t'$ and $t''$ shows that the two second-order terms are indeed the same, so we have
+ \[
+ U(t, t_0) = \mathbf{1} - i \int_{t_0}^t \d t' \; H_I (t') + (-i)^2 \int_{t_0}^t \d t' \int_{t_0}^{t'} \d t'' \; H_I(t'') H_I(t') + \cdots.
+ \]
+\end{prop}
+
+\begin{proof}
+ This follows from the last presentation above, using the fundamental theorem of calculus.
+\end{proof}
+
+In theory, this time-ordered exponential is very difficult to compute. However, we have, at the beginning, made the assumption that the interactions $H_I$ are small. So often we can just consider the first one or two terms.
+
+Note that when we evaluate the time integral of the Hamiltonian, a convenient thing happens. Recall that our Hamiltonian is the integral of some operators over all space. Now that we are also integrating it over time, what we are doing is essentially integrating over spacetime --- a rather relativistic thing to do!
+
+\subsubsection*{Some computations}
+Now let's try to do some computations. By doing so, we will very soon realize that our theory is very annoying to work with, and then we will later come up with something more pleasant.
+
+We consider the scalar Yukawa theory. This has one complex scalar field and one real scalar field, with an interaction Hamiltonian of
+\[
+ H_I = g\psi^\dagger \psi \phi.
+\]
+Here the $\psi, \phi$ are the ``Heisenberg'' versions where we conjugated by $e^{iH_0 t}$, so we have $e^{ip\cdot x}$ rather than $e^{i\mathbf{p}\cdot \mathbf{x}}$ in the integral. We will pretend that the $\phi$ particles are mesons, and that the $\psi$-particles are nucleons.
+
+We now want to ask ourselves the following question --- suppose we start with a meson, and we let it evolve over time. After infinite time, what is the probability that we end up with a $\psi\bar\psi$ pair, where $\bar{\psi}$ is the anti-$\psi$ particle?
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i) {$\phi$};
+ \vertex [right=of i] (m);
+ \vertex [above right=of m] (f1) {$\psi$};
+ \vertex [below right=of m] (f2) {$\bar\psi$};
+
+ \diagram*{
+ (i) -- [scalar] (m),
+ (f2) -- [fermion] (m) -- [fermion] (f1),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+We denote our initial state and ``target'' final states by
+\begin{align*}
+ \bket{i} &= \sqrt{2E_\mathbf{p}} a_\mathbf{p}^\dagger \bket{0}\\
+ \bket{f} &= \sqrt{4E_{\mathbf{q}_1} E_{\mathbf{q}_2}} b_{\mathbf{q}_1}^\dagger c_{\mathbf{q}_2}^{\dagger} \bket{0}.
+\end{align*}
+What we now want to find is the quantity
+\[
+ \brak{f}U(\infty, -\infty)\bket{i}.
+\]
+\begin{defi}[$S$-matrix]\index{$S$-matrix}
+ The \emph{$S$-matrix} is defined as
+ \[
+ S = U(\infty, -\infty).
+ \]
+\end{defi}
+Here $S$ stands for ``scattering''. So we often just write our desired quantity as $\brak{f}S\bket{i}$ instead. When we evaluate this thing, we will end up with some numerical expression, which will very obviously not be a probability (it contains a delta function). We will come back to interpreting this quantity later. For the moment, we will just try to come up with ways to compute it in a sensible manner.
+
+Now let's look at that expression term by term in the Taylor expansion of $U$. The first term is the identity $\mathbf{1}$, and we have a contribution of
+\[
+ \brak{f}\mathbf{1}\bket{i} = \braket{f}{i} = 0
+\]
+since distinct eigenstates are orthogonal. In the next term, we are integrating over $\psi^\dagger \psi \phi$. If we expand this out, we will get loads of terms, each of which is a product of some $b, c, a$ (or their conjugates). But the only term that contributes would be the term $c_{\mathbf{q}_2}b_{\mathbf{q}_1}a_{\mathbf{p}}^\dagger$, so that we exactly destroy the $a_\mathbf{p}^\dagger$ particle and create the $b_{\mathbf{q}_1}$ and $c_{\mathbf{q}_2}$ particles. The other terms will end up transforming $\bket{i}$ to something that is not $\bket{f}$ (or perhaps it would kill $\bket{i}$ completely), and then the inner product with $\bket{f}$ will give $0$.
+
+Before we go on to discuss what the higher order terms do, we first actually compute the contribution of the first order term.
+
+We want to compute
+\begin{align*}
+ \brak{f}S\bket{i} &= \brak{f}\left(\mathbf{1} - i\int \d t\; H_I(t) + \cdots\right)\bket{i}\\
+ \intertext{We know the $\mathbf{1}$ contributes nothing, and we drop the higher order terms to get}
+ &\approx -i \brak{f} \int \d t\; H_I(t) \bket{i}\\
+ &= -ig \brak{f}\int \d^4 x\; \psi^\dagger(x) \psi(x) \phi(x) \bket{i}\\
+ &= -ig \int \d^4 x \brak{f}\psi^\dagger(x) \psi(x) \phi(x) \bket{i}.
+\end{align*}
+We now note that the $\bket{i}$ interacts with the $\phi$ terms only, and the $\brak{f}$ interacts with the $\psi$ terms only, so we can evaluate $\phi(x)\bket{i}$ and $\psi^\dagger (x)\psi(x) \bket{f}$ separately. Moreover, any leftover $a^\dagger, b^\dagger, c^\dagger$ terms in the expressions will end up killing the other object, as the $a^\dagger$'s commute with the $b^\dagger, c^\dagger$, and $\brak{0}a^\dagger = (a \bket{0})^\dagger = 0$.
+
+We can expand $\phi$ in terms of the creation and annihilation operators $a$ and $a^\dagger$ to get
+\[
+ \phi(x) \bket{i} = \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \frac{\sqrt{2 E_\mathbf{p}}}{\sqrt{2 E_\mathbf{q}}} (a_\mathbf{q} e^{-iq\cdot x} + a_\mathbf{q}^\dagger e^{iq\cdot x}) a_\mathbf{p}^\dagger \bket{0}.
+\]
+We notice that nothing happens to the $a_\mathbf{q}^\dagger$ terms and they will end up becoming $0$ by killing $\brak{0}$. Swapping $a_\mathbf{q}$ and $a_\mathbf{p}^\dagger$ and inserting the commutator, we obtain
+\begin{align*}
+ &\hphantom{=}\int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \frac{\sqrt{2 E_\mathbf{p}}}{\sqrt{2 E_\mathbf{q}}} ([a_\mathbf{q}, a_\mathbf{p}^\dagger] + a_\mathbf{p}^\dagger a_\mathbf{q}) e^{-iq\cdot x} \bket{0}\\
+ &= \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} \frac{\sqrt{2 E_{\mathbf{p}}}}{\sqrt{2 E_\mathbf{q}}} (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}) e^{-iq\cdot x} \bket{0}\\
+ &= e^{-ip\cdot x}\bket{0}.
+\end{align*}
+We will use this to describe the creation and destruction of mesons.
+
+We can similarly look at
+\begin{align*}
+ &\hphantom{=}\psi^\dagger (x) \psi(x) \bket{f} \\
+ &= \int \frac{\d^3 \mathbf{k}_1\;\d^3 \mathbf{k}_2}{(2\pi)^6} \frac{1}{\sqrt{4E_{\mathbf{k}_1} E_{\mathbf{k}_2}}}(b^\dagger_{\mathbf{k}_1} e^{ik_1\cdot x} + c_{\mathbf{k}_1} e^{-ik_1\cdot x})(b_{\mathbf{k}_2} e^{-ik_2 \cdot x} + c_{\mathbf{k}_2}^\dagger e^{i k_2 \cdot x})\bket{f}
+\end{align*}
+As before, the only interesting term for us is when we have both $c_{\mathbf{k}_1}$ and $b_{\mathbf{k}_2}$ present to ``kill'' $b_{\mathbf{q}_1}^\dagger c_{\mathbf{q}_2}^\dagger$ in $\bket{f}$. So we look at
+\begin{align*}
+ &\hphantom{=} \int \frac{\d^3 \mathbf{k}_1 \; \d^3 \mathbf{k}_2}{(2\pi)^6} \frac{\sqrt{4E_{\mathbf{q}_1}E_{\mathbf{q}_2}}}{\sqrt{4 E_{\mathbf{k}_1} E_{\mathbf{k}_2}}} c_{\mathbf{k}_1} b_{\mathbf{k}_2}b_{\mathbf{q}_1}^\dagger c_{\mathbf{q}_2}^\dagger e^{-i(k_1 + k_2) \cdot x}\bket{0}\\
+ &= \int \frac{\d^3 \mathbf{k}_1 \;\d^3 \mathbf{k}_2}{(2\pi)^6} (2\pi)^6 \delta^3(\mathbf{q}_1 - \mathbf{k}_2)\delta^3(\mathbf{q}_2 - \mathbf{k}_1) e^{-i(k_1 + k_2)\cdot x}\bket{0}\\
+ &= e^{-i(q_1 + q_2)\cdot x}\bket{0}.
+\end{align*}
+So we have
+\begin{align*}
+ \brak{f}S\bket{i} &\approx -ig \int \d^4 x\; (e^{-i(q_1 + q_2) \cdot x}\bket{0})^\dagger (e^{-ip\cdot x}\bket{0})\\
+ &= -ig \int \d^4 x\; e^{i(q_1 + q_2 - p) \cdot x}\\
+ &= -ig (2\pi)^4 \delta^4(q_1 + q_2 - p).
+\end{align*}
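+The delta function enforces $4$-momentum conservation, $p = q_1 + q_2$. In particular, in the rest frame of the meson we have $p = (m, \mathbf{0})$, so that
+\[
+ E_{\mathbf{q}_1} + E_{\mathbf{q}_2} = m,\quad \mathbf{q}_1 + \mathbf{q}_2 = \mathbf{0}.
+\]
+Since $E_{\mathbf{q}_i} \geq M$, this can only be satisfied when $m \geq 2M$, i.e.\ the decay $\phi \to \psi\bar\psi$ is kinematically possible only when the meson is heavy enough.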
+If we look carefully at what we have done, what allowed us to go from the initial $\phi$ state $\bket{i} = \sqrt{2 E_\mathbf{p}} a_\mathbf{p}^\dagger\bket{0}$ to the final $\psi\bar\psi$ state $\bket{f} = \sqrt{4 E_{\mathbf{q}_1} E_{\mathbf{q}_2}} b_{\mathbf{q}_1}^\dagger c_{\mathbf{q}_2}^{\dagger} \bket{0}$ was the presence of the term $b_{\mathbf{q}_1}^\dagger c_{\mathbf{q}_2}^\dagger a_\mathbf{p}$ in the $S$-matrix (what we appeared to have used in the calculation was $c_{\mathbf{q}_2} b_{\mathbf{q}_1}$ instead of their conjugates, but that is because we were working with the conjugate $\psi^\dagger \psi \bket{f}$ rather than $\brak{f} \psi^\dagger \psi$).
+
+If we expand the $S$-matrix to, say, second order, we will, for example, find a term that looks like $(c^\dagger b^\dagger a)(cba^\dagger)$. This corresponds to destroying a $\psi \bar{\psi}$ pair to produce a $\phi$, then destroying that to recover another $\psi \bar\psi$ pair, which is an interaction of the form
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i);
+ \vertex [right=of i] (m);
+ \vertex [above right=of m] (f1) {$\psi$};
+ \vertex [below right=of m] (f2) {$\bar\psi$};
+ \vertex [above left=of i] (s1) {$\psi$};
+ \vertex [below left=of i] (s2) {$\bar\psi$};
+
+ \diagram*{
+ (i) -- [scalar, edge label=$\phi$] (m),
+ (f2) -- [fermion] (m) -- [fermion] (f1),
+ (s1) -- [fermion] (i) -- [fermion] (s2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+
+\subsection{Wick's theorem}
+Now the above process was still sort-of fine, because we were considering first-order terms only. If we considered higher-order terms, we would have complicated expressions involving a mess of creation and annihilation operators, and we would have to commute them past each other to act on the states.
+
+Our objective is thus to write a time-ordered product $T \phi(x_1) \phi(x_2) \cdots \phi(x_n)$ as a sum of normal-ordered things, since it is easy to read off the action of a normal-ordered operator on a collection of states. Indeed, we know
+\[
+ \brak{f}\normalorder{O}\bket{i}
+\]
+is non-zero if the creation and annihilation operators in $\normalorder{O}$ match up exactly with those used to create $\bket{f}$ and $\bket{i}$ from the vacuum.
+
+We start with a definition:
+\begin{defi}[Contraction]\index{contraction}
+ The \emph{contraction} of two fields $\phi, \psi$ is defined to be
+ \[
+ \contraction{}{\phi}{}{\psi}\phi\psi = T(\phi\psi) - \normalorder{\phi\psi}.
+ \]
+ More generally, if we have a string of operators, we can contract just some of them:
+ \[
+ \contraction{\cdots}{\phi}{(x_1) \cdots}{\phi}\cdots\phi(x_1) \cdots \phi(x_2) \cdots,
+ \]
+ by replacing the two fields with the contraction.
+
+ In general, the contraction will be a c-number, i.e.\ just a number rather than an operator. So where we decide to put the result of the contraction doesn't matter.
+\end{defi}
+
+We now compute the contraction of our favorite scalar fields:
+\begin{prop}
+ Let $\phi$ be a real scalar field. Then
+ \[
+ \contraction{}{\phi}{(x_1)}{\phi}\phi(x_1)\phi(x_2) = \Delta_F(x_1 - x_2).
+ \]
+\end{prop}
+
+\begin{proof}
+ We write $\phi = \phi^+ + \phi^-$, where
+ \[
+ \phi^+(x) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 E_\mathbf{p}}} a_\mathbf{p} e^{-ip\cdot x},\quad \phi^-(x) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 E_\mathbf{p}}} a_\mathbf{p}^\dagger e^{ip\cdot x}.
+ \]
+ Then we get normal order if all the $\phi^-$ appear before the $\phi^+$.
+
+ When $x^0 > y^0$, we have
+ \begin{align*}
+ T\phi(x) \phi(y) ={}& \phi(x) \phi(y)\\
+ ={}& (\phi^+(x) + \phi^-(x))(\phi^+(y) + \phi^-(y))\\
+ ={}& \phi^+(x)\phi^+(y) + \phi^-(x) \phi^+(y) + [\phi^+(x), \phi^-(y)] \\
+ &\hphantom{\phi^+(x)\phi^+(y) + \phi^-(x) \phi^+(y) + [}+ \phi^-(y) \phi^+(x) + \phi^-(x) \phi^-(y).
+ \end{align*}
+ So we have
+ \[
+ T\phi(x) \phi(y) = \normalorder{\phi(x) \phi(y)} + \, D(x - y).
+ \]
+ By symmetry, for $y^0 > x^0$, we have
+ \[
+ T\phi(x) \phi(y) = \normalorder{\phi(x)\phi(y)} + \, D(y - x).
+ \]
+ Recalling the definition of the Feynman propagator, we have
+ \[
+ T\phi(x)\phi(y) = \normalorder{\phi(x) \phi(y)} + \, \Delta_F(x - y). \qedhere
+ \]
+\end{proof}
+
+Similarly, we have
+\begin{prop}
+ For a complex scalar field, we have
+ \[
+ \contraction{}{\psi}{(x)}{\psi^\dagger}\psi(x) \psi^\dagger(y) = \Delta_F(x - y) = \contraction{}{\psi^\dagger}{(y)}{\psi}\psi^\dagger(y) \psi(x),
+ \]
+ whereas
+ \[
+ \contraction{}{\psi}{(x)}{\psi} \psi(x) \psi(y) = 0 =\contraction{}{\psi^\dagger}{(x)}{\psi^\dagger} \psi^\dagger(x) \psi^\dagger(y).
+ \]
+\end{prop}
+
+It turns out these computations are all we need to figure out how to turn a time-ordered product into a sum of normal-ordered products.
+
+\begin{thm}[Wick's theorem]\index{Wick's theorem}
+ For any collection of fields $\phi_1 = \phi_1(x_1), \phi_2 = \phi_2(x_2), \cdots$, we have
+ \[
+ T(\phi_1 \phi_2 \cdots \phi_n) = \normalorder{\phi_1 \phi_2 \cdots \phi_n} + \text{ all possible contractions}.
+ \]
+\end{thm}
+
+\begin{eg}
+ We have
+ \[
+ T(\phi_1 \phi_2 \phi_3 \phi_4) = \normalorder{\phi_1 \phi_2 \phi_3 \phi_4} + \, \contraction{}{\phi_1}{}{\phi_2}\phi_1 \phi_2 \normalorder{ \phi_3 \phi_4} + \, \contraction{}{\phi_1}{}{\phi_3}\phi_1 \phi_3 \normalorder{\phi_2 \phi_4} + \cdots + \contraction{}{\phi_1}{\phi_2}{\phi_3}\contraction[1.5ex]{\phi_1}{\phi_2}{\phi_3}{\phi_4}\phi_1\phi_2\phi_3\phi_4 + \cdots
+ \]
+\end{eg}
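Since the fields here are bosonic, the combinatorics of Wick's theorem is just the combinatorics of partial pairings. As a sanity check (not part of the lectures; the helper below is a toy of our own that only counts contraction patterns, ignoring the operator content), we can enumerate the terms for four fields:

```python
from itertools import combinations

def partial_pairings(fields):
    """Enumerate every way of contracting zero or more disjoint pairs
    of the given fields; each result is a frozenset of contracted pairs."""
    results = set()
    def rec(remaining, chosen):
        results.add(chosen)
        for pair in combinations(sorted(remaining), 2):
            rec(remaining - set(pair), chosen | {pair})
    rec(set(fields), frozenset())
    return results

terms = partial_pairings([1, 2, 3, 4])
counts = {}
for t in terms:
    counts[len(t)] = counts.get(len(t), 0) + 1
# one purely normal-ordered term, six single contractions,
# and three double contractions: ten terms in total
print(dict(sorted(counts.items())))  # -> {0: 1, 1: 6, 2: 3}
```

This reproduces the expansion above: the $\normalorder{\phi_1\phi_2\phi_3\phi_4}$ term, the six singly-contracted terms, and the three fully-contracted terms.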
+
+\begin{proof}[Proof sketch]
+ We will provide a rough proof sketch.
+
+ By definition this is true for $n = 2$. Suppose this is true for $\phi_2 \cdots \phi_n$, and now add $\phi_1$ with
+ \[
+ x_1^0 > x_k^0 \text{ for all }k \in \{2, \cdots, n\}.
+ \]
+ Then we have
+ \[
+ T(\phi_1 \cdots \phi_n) = (\phi_1^+ + \phi_1^-)(\normalorder{\phi_2 \cdots \phi_n} + \text{ other contractions}).
+ \]
+ The $\phi_1^-$ term stays where it is, as that already gives normal ordering, and the $\phi_1^+$ term has to make its way past the $\phi_k^-$ operators. So we can write the RHS as a normal-ordered product. Each time we move past a $\phi_k^-$, we pick up a contraction $\contraction{}{\phi_1}{}{\phi_k}\phi_1\phi_k$. % insert an example
+\end{proof}
+
+\begin{eg}
+ We now consider the more complicated problem of nucleon scattering:
+ \[
+ \psi \psi \to \psi \psi.
+ \]
+ So we are considering interactions of the form
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$p_1$};
+ \vertex [below left=of m1] (i2) {$p_2$};
+ \vertex [right=of m1] (mm);
+ \vertex [right=of mm] (m2);
+ \vertex [above right=of m2] (f1) {$p_1'$};
+ \vertex [below right=of m2] (f2) {$p_2'$};
+
+ \node at (mm) {something happens};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [anti fermion] (i2),
+ (f2) -- [anti fermion] (m2) -- [fermion] (f1),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ Then we have initial and final states
+ \begin{align*}
+ \bket{i} &= \sqrt{4 E_{\mathbf{p}_1} E_{\mathbf{p}_2}} b_{\mathbf{p}_1}^\dagger b_{\mathbf{p}_2}^\dagger\bket{0}\\
+ \bket{f} &= \sqrt{4 E_{\mathbf{p}_1'} E_{\mathbf{p}_2'}} b_{\mathbf{p}_1'}^\dagger b_{\mathbf{p}_2'}^\dagger\bket{0}
+ \end{align*}
+ We look at the order $g^2$ term in $\brak{f}(S - \mathbf{1})\bket{i}$. We remove that $\mathbf{1}$ as we are not interested in the case with no scattering, and we look at order $g^2$ because there is no order $g$ term.
+
+ The second order term is given by
+ \[
+ \frac{(-ig)^2}{2}\int \d^4 x_1 \d^4 x_2 \; T\left[\psi^\dagger(x_1) \psi(x_1) \phi(x_1) \psi^\dagger(x_2) \psi(x_2) \phi(x_2)\right].
+ \]
+ Now we can use Wick's theorem to write the time-ordered product as a sum of normal-ordered products. The annihilation and creation operators fail to kill the vacuum only if we annihilate the two incoming $b_\mathbf{p}$-particles and then create two new ones. This only happens in the term
+ \[
+ \normalorder{\psi^\dagger(x_1) \psi(x_1) \psi^\dagger(x_2) \psi(x_2)} \contraction{}{\phi}{(x_1)}{\phi}\phi(x_1)\phi(x_2).
+ \]
+ We know the contraction is given by the Feynman propagator $\Delta_F(x_1 - x_2)$. So we now want to compute
+ \[
+ \brak{p_1', p_2'} \normalorder{\psi^\dagger(x_1) \psi(x_1) \psi^\dagger(x_2) \psi(x_2)}\bket{p_1, p_2}.
+ \]
+ The only relevant term in the normal-ordered product is
+ \[
+ \iiiint \frac{\d^3 \mathbf{k}_1\;\d^3 \mathbf{k}_2 \;\d^3 \mathbf{k}_3 \;\d^3 \mathbf{k}_4}{(2\pi)^{12}\sqrt{16 E_{\mathbf{k}_1} E_{\mathbf{k}_2} E_{\mathbf{k}_3} E_{\mathbf{k}_4}}} b_{\mathbf{k}_1}^\dagger b_{\mathbf{k}_2}^\dagger b_{\mathbf{k}_3} b_{\mathbf{k}_4} e^{ik_1 \cdot x_1 + i k_2 \cdot x_2 - i k_3 \cdot x_1 - i k_4 \cdot x_2}.
+ \]
+ Now letting this act on $\brak{p_1', p_2'}$ and $\bket{p_1, p_2}$ would then give
+ \[
+ e^{ix_1 \cdot (p_1' - p_1) + i x_2 \cdot (p_2' - p_2)} + e^{ix_1 \cdot (p_2' - p_1) + i x_2 \cdot (p_1' - p_2)}, \tag{$*$}
+ \]
+ plus the same with $x_1$ and $x_2$ swapped. This is a routine calculation: swap the $b$-operators in the $\psi$ with those that create $\bket{i}, \bket{f}$, insert the delta functions as commutators, then integrate.
+
+ What we really want is to integrate the Hamiltonian. So we want to integrate the result of the above computation over all of spacetime, to obtain
+ \begin{align*}
+ &\hphantom{=}\frac{(-ig)^2}{2} \int \d^4 x_1\;\d^4 x_2 \;(*)\; \Delta_F(x_1 - x_2)\\
+ &= \frac{(-ig)^2}{2}\int \d^4 x_1\;\d^4 x_2 \;(*)\; \int \frac{\d^4 k}{(2\pi)^4} \frac{e^{ik\cdot(x_1 - x_2)}}{k^2 - m^2 + i \varepsilon}\\
+ &=(-ig)^2 \int \frac{\d^4 k}{(2\pi)^4} \frac{i (2\pi)^8}{k^2 - m^2 + i \varepsilon} \\
+ &\hphantom{=(-ig)^2 \int}\Big(\delta^4(p_1' - p_1 + k) \delta^4(p_2' - p_2 - k) + \delta^4(p_2' - p_1 + k) \delta^4(p_1' - p_2 - k)\Big)\\
+ &= (-ig)^2 (2\pi)^4\left(\frac{i}{(p_1 - p_1')^2 - m^2} + \frac{i}{(p_1 - p_2')^2 - m^2}\right) \delta^4(p_1 + p_2 - p_1' - p_2').
+ \end{align*}
+ What we see again is a $\delta$-function that enforces momentum conservation.
+\end{eg}
+
+\subsection{Feynman diagrams}
+This is better, but still rather tedious. How can we make this better? In the previous example, we could imagine that the term that contributed to the integral above represented the interaction where the two $\psi$-particles ``exchanged'' a $\phi$-particle. In this process, we destroy the old $\psi$-particles and construct new ones with new momenta, and the $\phi$-particle is created and then swiftly destroyed. The fact that the $\phi$-particles were just ``intermediate'' corresponds to the fact that we included their contraction in the computation.
+
+We can pictorially represent the interaction as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=of i1] (i2) {$\psi$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (f1) {$\psi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (f2) {$\psi$};
+
+ \diagram* {
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [fermion] (m2) -- [fermion] (f2),
+ (m1) -- [scalar] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+\end{center}
+The magical insight is that \emph{every} term given by Wick's theorem can be interpreted as a diagram of this sort. Moreover, the term contributes to the process if and only if it has the right ``incoming'' and ``outgoing'' particles. So we can figure out what terms contribute by drawing the right diagrams.
+
+Moreover, we can do more than just count the diagrams: more importantly, we can read off how much each term contributes directly from the diagram! This simplifies the computation a lot.
+
+We will not provide a proof that Feynman diagrams do indeed work, as it would be purely technical, and would also involve the difficult work of making precise what it actually means to be a diagram. However, based on the computations we've done above, one should be confident that at least something of this sort works, perhaps up to some sign errors.
+
+We begin by specifying what diagrams are ``allowed'', and then specify how we assign numbers to diagrams.
+
+Given an initial state and final state, the possible Feynman diagrams are specified as follows:
+\begin{defi}[Feynman diagram]\index{Feynman diagram!scalar Yukawa theory}\index{Feynman diagram}
+ In the scalar Yukawa theory, given an initial state $\bket{i}$ and final state $\bket{f}$, a \emph{Feynman diagram} consists of:
+ \begin{itemize}
+ \item An external line for all particles in the initial and final states. A dashed line is used for a $\phi$-particle, and solid lines are used for $\psi$/$\bar\psi$-particles.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i) {$\phi$};
+ \vertex [right=of i] (b);
+ \vertex [above right=of b] (a1);
+ \vertex [below right=of b] (a2);
+
+ \vertex [right=of a1] (f1) {$\psi$};
+ \vertex [right=of a2] (f2) {$\bar{\psi}$};
+
+ \diagram* {
+ (i) -- [scalar] (b),
+ (a1) -- (f1),
+ (a2) -- (f2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ \item Each $\psi$-particle comes with an arrow. An initial $\psi$-particle has an incoming arrow, and a final $\psi$-particle has an outgoing arrow. The reverse is done for $\bar\psi$-particles.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i) {$\phi$};
+ \vertex [right=of i] (b);
+ \vertex [above right=of b] (a1);
+ \vertex [below right=of b] (a2);
+
+ \vertex [right=of a1] (f1) {$\psi$};
+ \vertex [right=of a2] (f2) {$\bar{\psi}$};
+
+ \diagram* {
+ (i) -- [scalar] (b),
+ (a1) -- [fermion] (f1),
+ (a2) -- [anti fermion] (f2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ \item We join the lines together with more lines and vertices so that the only loose ends are the initial and final states. The possible vertices correspond to the interaction terms in the Lagrangian. For example, the only interaction term in the Lagrangian here is $\psi^\dagger \psi \phi$, so the only possible vertex is one that joins a $\phi$ line with two $\psi$ lines that point in opposite directions:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i) {$\phi$};
+ \vertex [right=of i] (a);
+ \vertex [above right=of a] (f1) {$\psi$};
+ \vertex [below right=of a] (f2) {$\bar{\psi}$};
+
+ \diagram* {
+ (i) -- [scalar] (a),
+ (f1) -- [fermion] (a) -- [fermion] (f2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ Each such vertex represents an interaction, and the fact that the arrows match up in all possible interactions ensures that charge is conserved.
+
+ \item Assign a directed momentum $p$ to each line, i.e.\ an arrow into or out of the diagram to each line.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i) {$\phi$};
+ \vertex [right=of i] (a);
+ \vertex [above right=of a] (f1) {$\psi$};
+ \vertex [below right=of a] (f2) {$\bar{\psi}$};
+
+ \diagram* {
+ (i) -- [scalar, momentum=$p$] (a),
+ (f2) -- [fermion, reversed momentum=$p_2$] (a) -- [fermion, momentum=$p_1$] (f1),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ The initial and final particles already have momentum specified in the initial and final state, and the internal lines are given ``dummy'' momenta $k_i$ (which we will later integrate over).
+ \end{itemize}
+\end{defi}
+Note that there are infinitely many possible diagrams! However, when we lay down the Feynman rules later, we will see that the more vertices a diagram has, the less it contributes to the sum. In fact, the diagrams with $n$ vertices correspond to the $n$th-order term in the expansion of the $S$-matrix. So most of the time it suffices to consider ``simple'' diagrams.
+
+\begin{eg}[Nucleon scattering]
+ If we look at $\psi + \psi \to \psi + \psi$, the simplest diagram is
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=of i1] (i2) {$\psi$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (f1) {$\psi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (f2) {$\psi$};
+
+ \diagram* {
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [fermion] (m2) -- [fermion] (f2),
+ (m1) -- [scalar] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ On the other hand, we can also swap the two particles to obtain a diagram of the form:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=of i1] (i2) {$\psi$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (f1) {$\psi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (f2) {$\psi$};
+
+ \diagram* {
+ (i1) -- [fermion] (m1) -- [fermion] (f2),
+ (i2) -- [fermion] (m2) -- [fermion] (f1),
+ (m1) -- [scalar] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ These are the ones that correspond to second-order terms.
+
+ There are also more complicated ones such as
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=1.8cm of i1] (i2) {$\psi$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (f1) {$\psi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (f2) {$\psi$};
+
+ \vertex [below=0.5cm of m1] (c1);
+ \vertex [below=0.8cm of c1] (c2);
+
+ \diagram* {
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [fermion] (m2) -- [fermion] (f2),
+ (m1) -- [scalar] (c1) -- [fermion, half left] (c2) -- [fermion, half left] (c1),
+ (c2) -- [scalar] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ This is a \term{1-loop diagram}. We can also have a \term{2-loop diagram}:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=1.8cm of i1] (i2) {$\psi$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (f1) {$\psi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (f2) {$\psi$};
+
+ \vertex [below=0.5cm of m1] (c1);
+ \vertex [below=0.8cm of c1] (c2);
+
+ \vertex [below right=0.4cm and 0.4cm of c1] (r);
+ \vertex [below left=0.4cm and 0.4cm of c1] (l);
+ \diagram* {
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [fermion] (m2) -- [fermion] (f2),
+ (m1) -- [scalar] (c1) -- [fermion, half left] (c2) -- [fermion, half left] (c1),
+ (c2) -- [scalar] (m2),
+ (r) -- [scalar] (l),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ If we ignore the loops, we say we are looking at the \term{tree level}.
+\end{eg}
+
+To each such diagram, we associate a number using the \emph{Feynman rules}.
+\begin{defi}[Feynman rules]\index{Feynman rules}
+ To each Feynman diagram in the interaction, we write down a term like this:
+ \begin{enumerate}
+ \item To each vertex in the Feynman diagram, we write a factor of
+ \[
+ (-ig)(2\pi)^4 \delta^4\left(\sum_i k_i\right),
+ \]
+ where the sum runs over the momenta of all lines going into the vertex (with a negative sign for the momenta of lines going out).
+ \item For each internal line with momentum $k$, we integrate the product of all factors above by
+ \[
+ \int \frac{\d^4 k}{(2\pi)^4} D(k^2),
+ \]
+ where
+ \begin{align*}
+ D(k^2) &= \frac{i}{k^2 - m^2 + i\varepsilon}\text{ for }\phi\\
+ D(k^2) &= \frac{i}{k^2 - \mu^2 + i\varepsilon}\text{ for }\psi
+ \end{align*}
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}
+ We consider the case where $g$ is small. Then only the simple diagrams with very few vertices matter.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=of i1] (i2) {$\psi$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (f1) {$\psi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (f2) {$\psi$};
+
+ \diagram* {
+ (i1) -- [fermion, momentum=$p_1$] (m1) -- [fermion, momentum=$p_1'$] (f1),
+ (i2) -- [fermion, momentum'=$p_2$] (m2) -- [fermion, momentum'=$p_2'$] (f2),
+ (m1) -- [scalar, momentum'=$k$] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=of i1] (i2) {$\psi$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (f1) {$\psi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (f2) {$\psi$};
+
+ \diagram* {
+ (i1) -- [fermion, momentum=$p_1$] (m1) -- [fermion, momentum=$p_2'$] (f2),
+ (i2) -- [fermion, momentum'=$p_2$] (m2) -- [fermion, momentum'=$p_1'$] (f1),
+ (m1) -- [scalar, momentum'=$k$] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ As promised by the Feynman rules, the two diagrams give us
+ \[
+ (-ig)^2 (2\pi)^8(\delta^4(p_1 - p_1' - k) \delta^4(p_2 + k - p_2') + \delta^4(p_1 - p_2' - k)\delta^4(p_2 + k - p_1')).
+ \]
+ Now integrating gives us the formula
+ \begin{multline*}
+ (-ig)^2 \int \frac{\d^4 k}{(2\pi)^4} \frac{i(2\pi)^8}{k^2 - m^2 + i \varepsilon}\\
+ (\delta^4(p_1 - p_1' - k) \delta^4(p_2 + k - p_2')+ \delta^4(p_1 - p_2' - k)\delta^4(p_2 + k - p_1')).
+ \end{multline*}
+ Doing the integral gives us
+ \[
+ (-ig)^2 (2\pi)^4\left(\frac{i}{(p_1 - p_1')^2 - m^2} + \frac{i}{(p_1 - p_2')^2 - m^2}\right) \delta^4(p_1 + p_2 - p_1' - p_2'),
+ \]
+ which is what we found before.
+\end{eg}
+There is a nice physical interpretation of the diagrams. We can interpret the first diagram as saying that the nucleons exchange a meson of momentum $k = p_1 - p_1' = p_2 - p_2'$. This meson doesn't necessarily satisfy the relativistic dispersion relation $k^2 = m^2$ (note that $k$ is the $4$-momentum). When this happens, we say it is \term{off-shell}, or that it is a \term{virtual meson}. Heuristically, it can't live long enough for its energy to be measured accurately. In contrast, the external legs are \term{on-shell}, so they satisfy $p_i^2 = \mu^2$.
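The off-shell statement is easy to check concretely: for elastic scattering in the centre-of-mass frame, the exchanged momentum $k = p_1 - p_1'$ is spacelike, so $k^2 < 0$ can never equal $m^2 > 0$. A small Python sketch (the numbers are illustrative, not from the lectures):

```python
import math

def mink_sq(p):
    """Minkowski square of a 4-vector (E, px, py, pz), signature (+, -, -, -)."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

mu, pmag = 1.0, 0.8                      # nucleon mass and CM 3-momentum
E = math.sqrt(mu**2 + pmag**2)
p1 = (E, 0.0, 0.0, pmag)
for theta in (0.3, 1.0, 2.0):            # a few scattering angles
    p1_out = (E, pmag * math.sin(theta), 0.0, pmag * math.cos(theta))
    k = tuple(a - b for a, b in zip(p1, p1_out))
    assert abs(mink_sq(p1) - mu**2) < 1e-12   # external legs are on-shell
    assert mink_sq(k) < 0                     # exchanged meson is off-shell
```

Indeed $k^2 = -2\mathbf{p}^2(1 - \cos\theta) \leq 0$ here, so the internal meson is always virtual.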
+
+\subsection{Amplitudes}
+Now the important question is --- what do we do with that quantity we computed? We first define a more useful quantity known as the \emph{amplitude}:
+\begin{defi}[Amplitude]\index{amplitude}
+ The \emph{amplitude} $\mathcal{A}_{f,i}$ of a scattering process from $\bket{i}$ to $\bket{f}$ is defined by
+ \[
+ \brak{f}S - \mathbf{1}\bket{i} = i \mathcal{A}_{f,i} (2\pi)^4 \delta^4 (p_F - p_I),
+ \]
+ where $p_F$ is the sum of final state $4$-momenta, and $p_I$ is the sum of initial state $4$-momenta. The factor of $i$ sticking out is by convention, to match with non-relativistic quantum mechanics.
+\end{defi}
+
+We can now modify our Feynman rules accordingly to compute $\mathcal{A}_{f, i}$.
+
+\begin{itemize}
+ \item Draw all possible diagrams with the appropriate external legs and impose $4$-momentum conservation at each vertex.
+ \item Write a factor $(-ig)$ for each vertex.
+ \item For each internal line of momentum $p$ and mass $m$, put in a factor of
+ \[
+ \frac{1}{p^2 - m^2 + i \varepsilon}.
+ \]
+ \item Integrate over each undetermined $4$-momentum $k$ flowing around a loop.
+\end{itemize}
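As an illustration of these rules, the tree-level amplitude for $\psi\psi \to \psi\psi$ can be evaluated directly: one factor of $(-ig)$ per vertex and one propagator per internal line, with the internal momentum fixed by conservation. A Python sketch under the conventions above (function names are our own, and the $i\varepsilon$ is dropped since the denominators are nonzero for generic momenta):

```python
def mink_sq(p):
    """Minkowski square of a 4-vector (E, px, py, pz), signature (+, -, -, -)."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def tree_amplitude(p1, p2, p1p, p2p, g, m):
    """Sum of the two meson-exchange diagrams: (-ig)^2 times
    1/(k^2 - m^2) for each, with k = p1 - p1' and k = p1 - p2'."""
    term1 = 1 / (mink_sq(sub(p1, p1p)) - m**2)
    term2 = 1 / (mink_sq(sub(p1, p2p)) - m**2)
    return (-1j * g)**2 * (term1 + term2)

p1, p2 = (2, 0, 0, 1), (2, 0, 0, -1)
p1p, p2p = (2, 0, 1, 0), (2, 0, -1, 0)
A = tree_amplitude(p1, p2, p1p, p2p, g=1.0, m=0.5)
# swapping the identical outgoing particles just swaps the two diagrams
assert A == tree_amplitude(p1, p2, p2p, p1p, g=1.0, m=0.5)
```

The symmetry under exchanging the two final-state momenta is exactly the statement that both diagrams must be included for identical particles.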
+
+We again do some examples of computation.
+\begin{eg}[Nucleon-antinucleon scattering]
+ Consider another tree-level process
+ \[
+ \psi + \bar{\psi} \to \phi + \phi.
+ \]
+ We have a diagram of the form
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=of i1] (i2) {$\bar{\psi}$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (f1) {$\phi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (f2) {$\phi$};
+
+ \diagram* {
+ (i1) -- [fermion, momentum=$p_1$] (m1) -- [scalar, momentum=$p_1'$] (f1),
+ (f2) -- [scalar, rmomentum=$p_2'$] (m2) -- [fermion, rmomentum=$p_2$] (i2),
+ (m1) -- [fermion, momentum=$k$] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ and a similar one with the two $\phi$ particles crossing.
+
+ The two diagrams then say
+ \[
+ \mathcal{A} = (-ig)^2 \left(\frac{1}{(p_1 - p_1')^2 - \mu^2} + \frac{1}{(p_2 - p_2')^2 - \mu^2}\right).
+ \]
+ Note that we dropped the $i\varepsilon$ terms in the denominators because they never vanish here, and we don't need to integrate over anything because all momenta in the diagram are determined uniquely by momentum conservation.
+\end{eg}
+
+\begin{eg}[Meson scattering]
+ Consider
+ \[
+ \phi + \phi \to \phi + \phi.
+ \]
+ This is a bit more tricky. There is no tree-level diagram we can draw. The best we can get is a box diagram like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\phi$};
+ \vertex [below=of i1] (i2) {$\phi$};
+
+ \vertex [right=of i1] (m1);
+ \vertex [right=of m1] (n1);
+ \vertex [right=of n1] (f1) {$\phi$};
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m2] (n2);
+ \vertex [right=of n2] (f2) {$\phi$};
+
+ \diagram* {
+ (i1) -- [scalar, momentum=$p_1$] (m1),
+ (i2) -- [scalar, momentum'=$p_2$] (m2),
+ (n1) -- [scalar, momentum=$p_1'$] (f1),
+ (n2) -- [scalar, momentum'=$p_2'$] (f2),
+
+ (n1) -- [anti fermion, momentum=$k$] (n2) -- [anti fermion, momentum=$k - p_2'$] (m2) -- [anti fermion, momentum=$k + p_1' - p_1$] (m1) -- [anti fermion, momentum=$k + p_1'$] (n1);
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ We then integrate through all possible $k$.
+
+ This particular graph we've written down gives
+ \begin{multline*}
+ i\mathcal{A} = (-ig)^4 \int \frac{\d^4 k}{(2\pi)^4}\\\frac{i^4}{(k^2 - \mu^2 + i\varepsilon)((k + p_1')^2 - \mu^2 + i\varepsilon)((k + p_1' - p_1)^2 - \mu^2 + i \varepsilon)((k - p_2')^2 - \mu^2 + i \varepsilon)}.
+ \end{multline*}
+ For large $k$, the integrand behaves like $\frac{1}{k^8}$, so the integral $\int \frac{\d^4 k}{k^8}$ fortunately converges. However, this is usually not the case. For example, we might instead find $\int \frac{\d^4k}{k^4}$, which diverges logarithmically, or even $\int \frac{\d^4 k}{k^2}$, which diverges quadratically. This is when we need to do renormalization.
+\end{eg}
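The power counting above can be checked numerically: radially, $\d^4 k \sim k^3\, \d k$ in 4d Euclidean momentum space, so a $1/k^8$ integrand has a convergent large-$k$ tail while a $1/k^4$ integrand diverges logarithmically with the cutoff. A rough Python sketch (midpoint rule, with cutoffs chosen only for illustration):

```python
def radial_tail(power, k_min, k_max, n=100000):
    """Midpoint-rule approximation of int_{k_min}^{k_max} k^3 / k^power dk,
    the radial part of a 4d Euclidean integral with integrand ~ 1/k^power."""
    h = (k_max - k_min) / n
    return sum((k_min + (i + 0.5) * h)**(3 - power) for i in range(n)) * h

# 1/k^8 (the box diagram): the tail saturates as the cutoff grows
box_100, box_1000 = radial_tail(8, 1, 100), radial_tail(8, 1, 1000)
# 1/k^4: the tail keeps growing like log(cutoff)
log_100, log_1000 = radial_tail(4, 1, 100), radial_tail(4, 1, 1000)
```

Here `box_100` and `box_1000` agree to high accuracy (both near $1/4$), while `log_100` and `log_1000` differ by roughly $\ln 10$.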
+
+\begin{eg}[Feynman rules for $\phi^4$ theory]
+ Suppose our Lagrangian has a term
+ \[
+ \frac{\lambda}{4!} \phi^4.
+ \]
+ We then have single-vertex interactions
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (c);
+ \vertex [above right=of c] (1);
+ \vertex [above left=of c] (2);
+ \vertex [below right=of c] (3);
+ \vertex [below left=of c] (4);
+ \diagram* {
+ (1) -- [scalar] (c),
+ (2) -- [scalar] (c),
+ (3) -- [scalar] (c),
+ (4) -- [scalar] (c),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ Any such diagram contributes a factor of $(-i\lambda)$. Note that there is no $\frac{1}{4!}$. This is because if we expand the term
+ \[
+ \frac{-i \lambda}{4!} \brak{p_1', p_2'} \normalorder{\phi(x)\phi(x)\phi(x)\phi(x)}\bket{p_1, p_2},
+ \]
+ there are $4!$ ways of pairing the four $\phi(x)$ factors with the $p_i, p_i'$, so each possible interaction is counted $4!$ times, and this cancels the $\frac{1}{4!}$ factor.
+
+ For some diagrams, there are extra combinatoric factors, known as \term{symmetry factors}, that one must take into account.
+\end{eg}
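The $4!$ counting can be made explicit: attaching the four identical $\phi$ factors of one vertex to four distinct external particles is just a choice of permutation. A trivial Python check (the labels are our own toy notation):

```python
from itertools import permutations
from math import factorial

# toy labels: the four phi factors of a single phi^4 vertex, and the
# four external particles they can be paired with
factors = ("phi_a", "phi_b", "phi_c", "phi_d")
externals = ("p1", "p2", "p1'", "p2'")
attachments = list(permutations(externals))
# each distinct interaction is counted once per permutation,
# which is what cancels the 1/4! in the coupling
assert len(attachments) == factorial(len(factors))  # 24
```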
+
+These amplitudes can be further related to experimentally measurable quantities. We will not go into details here, but in The Standard Model course, we will see that if we have a single initial state $i$ and a collection of final states $\{f\}$, then the decay rate of $i$ to $\{f\}$ is given by
+\[
+ \Gamma_{i \to f} = \frac{1}{2 m_i} \int |\mathcal{A}_{f,i}|^2 \;\d \rho_f,
+\]
+where
+\[
+ \d \rho_f = (2\pi)^4 \delta^4(p_F - p_I) \prod_r \frac{\d^3 p_r}{(2\pi)^3 2 p_r^0},
+\]
+and $r$ runs over all momenta in the final states.
+
+%Now what do we do with the resulting expression? We start by writing down what we think should be the probability of transitioning from $\bket{i}$ to $\bket{f}$:
+%\[
+% P = \frac{|\brak{f}S - \mathbf{1}\bket{i}|^2}{\braket{f}{f} \braket{i}{i}}.
+%\]
+%Recall that the terms are given by
+%\begin{align*}
+% |\brak{f}S- \mathbf{1}\bket{i}|^2 &= |\mathcal{A}_{f, i}| (2\pi)^8 (\delta^4(p_F - p_I))^2\\
+% &= |\mathcal{A}_{f, i}| (2\pi)^8 \delta^4(p_F - p_I) \delta^4(0)\\
+% \braket{i}{i} &= \prod_{\text{initial particles}} (2\pi)^3 2 E_{\mathbf{p}_i} \delta^3(0)\\
+% \braket{f}{f} &= \prod_{\text{final particles}} (2\pi)^3 2 E_{\mathbf{p}_i} \delta^3(0)
+%\end{align*}
+%So we found that
+%\[
+% P \qeq |\mathcal{A}_{f, i}| (2\pi)^8 \delta^4(p_F - p_I) \delta^4(0)\prod_{\text{all particles}} \frac{1}{(2\pi)^3 2E_{\mathbf{p}_i}\delta^3(0)}.
+%\]
+%We again have a lot of scary $\delta$-functions. As before, if we trace through the calculations, we will find that $(2\pi)^3\delta^3(0)$ comes from integrating over the volume of the whole universe, so we use the trick of replacing $(2\pi)^3\delta^3(0)$ with the volume $V$ of the universe. Similarly, $(2\pi)^4\delta^4(0)$ is the total spacetime volume of the universe $VT$.
+%
+%So we have a slightly better expression
+%\[
+% |\mathcal{A}_{f, i}|^2 (2\pi)^4 \delta^4(p_F - p_I) VT \prod_{\text{all particles}} \frac{1}{2E_{\mathbf{p}_i} V}.
+%\]
+%Remember that all the $V$ and $T$ will be taken to be infinity eventually. So we would like to get rid of them. Here we can easily get rid of the $T$, because we can instead ask for the probability of decaying to $\bket{f}$ in \emph{unit time}. So we are left with
+%\[
+% |\mathcal{A}_{f, i}|^2 (2\pi)^4 \delta^4(p_F - p_I) V \prod_{\text{all particles}} \frac{1}{2E_{\mathbf{p}_i} V}.
+%\]
+%What do we do next? We are computing the probability of transitioning from $\bket{i}$ to exactly $\bket{f}$. As one might expect, since the momentum has a continuous range of values, it is virtually impossible to obtain exactly $\bket{f}$. Instead, a more sensible question might be to ask the probability of decaying into, say, two $\psi$ particles, and then we integrate the above expression over all conceivable pairs of momenta. We might want to just integrate against
+%\[
+% \d \Pi = \prod_{i \text{ final}}\d^3 \mathbf{p}_i,
+%\]
+%but this is not the right thing to integrate over, because $\int \d \Pi \not= 1$. Instead, the correctly noramlized version is
+%\[
+% \d \Pi = \prod_{i \text{ final}} \frac{V \d^3 \mathbf{p}_i}{(2\pi)^3}.
+%\]
+%I have not yet figured out how this can be made to work, actually. This was not done in lectures and explanations I have found seem unsatisfactory.
+
+%We can take our quantum calculations of the $\psi \psi \to \psi \psi$ and take a classical limit. This then gives us a potential for the interaction of two $\psi$ particles, given by
+%\[
+% U(r) = -\frac{\lambda^2 e^{-mr}}{4 \pi r}.
+%\]
+%Here
+%\[
+% \lambda = \frac{g}{2\mu},
+%\]
+%where the $2\mu$ comes from non-relativistic normalization. The negative sign tells us the force is attractive.
+%
+%We can do exactly the same calculation for nucleon-antinucleon scattering, which also gives an attractive force. In general, the exchange of scalar fields gives us an attractive force. When spin $1$ particles are being exchanged, we get attractive forces for like particles, and repulsive forces for unlike particles. When we get up to spin $2$ particles, everything is again attractive, and this can be used to model gravity. There are more details on David Tong's notes.
+%
+%Note that for the $\phi^4$, theory, we have
+%\[
+% i\mathcal{A} = -i \lambda.
+%\]
+%So we have
+%\[
+% U(\mathbf{r}) = - \lambda \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} e^{i\mathbf{p}\cdot \mathbf{r}} = -\lambda \delta^3 \mathbf{r}.
+%\]
+\subsection{Correlation functions and vacuum bubbles}
+The $S$-matrix elements are good and physical, because they correspond to probabilities of certain events happening. However, we are not always interested in these quantities. For example, we might want to ask for the conductivity of some quantum-field-theoretic object. To do these computations, quantities known as \emph{correlation functions} are useful. However, we are not going to use these objects in this course, so we are just going to state, rather than justify, many of the results, and you are free to skip this chapter.
+
+Before that, we need to study the vacuum. Previously, we have been working with the vacuum of the free theory $\bket{0}$. This satisfies the boring relation
+\[
+ H_0 \bket{0} = 0.
+\]
+However, when we introduce an interaction term, this is no longer the vacuum. Instead we have an interacting vacuum $\bket{\Omega}$, satisfying
+\[
+ H \bket{\Omega} = 0.
+\]
+As before, we normalize the vacuum so that
+\[
+ \braket{\Omega}{\Omega} = 1.
+\]
+Concretely, this interacting vacuum can be obtained by starting with the free vacuum and then letting it evolve for infinite time.
+\begin{lemma}
+ The free vacuum and interacting vacuum are related via
+ \[
+ \bket{\Omega} = \frac{1}{\braket{\Omega}{0}} U_I(t, -\infty)\bket{0} = \frac{1}{\braket{\Omega}{0}} U_S(t, -\infty)\bket{0}.
+ \]
+Similarly, we have
+ \[
+ \brak{\Omega} = \frac{1}{\braket{\Omega}{0}} \brak{0}U(\infty, t).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Note that we have the last equality because $U_I(t, -\infty)$ and $U_S(t, -\infty)$ differ by factors of $e^{-iH t_0}$, which act as the identity on $\bket{0}$. % I am not convinced.
+
+ Consider an arbitrary state $\bket{\Psi}$. We want to show that
+ \[
+ \brak{\Psi}U(t, -\infty)\bket{0} = \braket{\Psi}{\Omega}\braket{\Omega}{0}.
+ \]
+ The trick is to consider a complete set of eigenstates $\{\bket{\Omega}, \bket{x}\}$ for $H$ with energies $E_x$. Then we have
+ \[
+ U(t, t_0) \bket{x} = e^{iE_x(t_0 - t)} \bket{x}.
+ \]
+ Also, we know that
+ \[
+ \mathbf{1} = \bket{\Omega}\brak{\Omega} + \int \d x\; \bket{x}\brak{x}.
+ \]
+ So we have
+ \begin{align*}
+ \brak{\Psi}U(t, -\infty) \bket{0} &= \brak{\Psi}U(t, -\infty) \left(\bket{\Omega}\brak{\Omega} + \int \;\d x \bket{x}\brak{x}\right)\bket{0}\\
+ &= \braket{\Psi}{\Omega} \braket{\Omega}{0} + \lim_{t' \to -\infty} \int\d x\; e^{iE_x(t' - t)} \braket{\Psi}{x} \braket{x}{0}.
+ \end{align*}
+ We now claim that the second term vanishes in the limit. As with most things in this course, we do not have a rigorous justification for this, but it is not too far-fetched, since for any well-behaved (genuine) function $f(x)$, the \emph{Riemann-Lebesgue lemma} tells us that for any fixed $a, b \in \R$, we have
+ \[
+ \lim_{\mu \to \infty} \int_a^b f(x) e^{i\mu x}\;\d x = 0.
+ \]
+ If this were to hold, then
+ \[
+ \brak{\Psi}U(t, -\infty)\bket{0} = \braket{\Psi}{\Omega}\braket{\Omega}{0}.
+ \]
+ So the result follows. The other direction follows similarly.
+\end{proof}
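The Riemann-Lebesgue decay invoked above is easy to see numerically. For the smooth test function $f(x) = e^{-x^2}$ (our choice, purely for illustration), the integral $\int f(x) e^{i\mu x}\,\d x = \sqrt{\pi}\, e^{-\mu^2/4}$ dies off rapidly with $\mu$; a Python sketch:

```python
import math

def osc_integral(mu, a=-5.0, b=5.0, n=100000):
    """Midpoint-rule approximation of |int_a^b f(x) e^{i mu x} dx|
    for the smooth test function f(x) = exp(-x^2)."""
    h = (b - a) / n
    re = im = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        f = math.exp(-x * x)
        re += f * math.cos(mu * x) * h
        im += f * math.sin(mu * x) * h
    return math.hypot(re, im)

# the oscillatory integral decays as the frequency mu grows
slow, fast = osc_integral(1.0), osc_integral(20.0)
assert fast < slow / 1000
```

The rapid oscillations of $e^{i\mu x}$ cancel against each other wherever $f$ varies slowly, which is the mechanism behind the lemma.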
+
+Now given that we have this vacuum, we can define the \emph{correlation function}.
+\begin{defi}[Correlation/Green's function]\index{correlation function}\index{Green's function}
+ The \emph{correlation} or \emph{Green's function} is defined as
+ \[
+ G^{(n)}(x_1, \cdots, x_n) = \brak{\Omega}T \phi_H(x_1) \cdots \phi_H(x_n)\bket{\Omega},
+ \]
+ where $\phi_H$ denotes the operators in the Heisenberg picture.
+\end{defi}
+
+How can we compute these things? We claim the following:
+\begin{prop}
+ \[
+ G^{(n)}(x_1, \cdots, x_n) = \frac{\brak{0}T \phi_I(x_1) \cdots \phi_I(x_n)S \bket{0}}{\brak{0}S\bket{0}}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We can wlog consider the specific example $t_1 > t_2 > \cdots > t_n$. We start by working on the denominator. We have
+ \[
+ \brak{0}S\bket{0} = \brak{0}U(\infty, t) U(t, -\infty)\bket{0} = \braket{0}{\Omega}\braket{\Omega}{\Omega}\braket{\Omega}{0}.
+ \]
+ For the numerator, we note that $S$ can be written as
+ \[
+ S = U_I(\infty, t_1) U_I(t_1, t_2) \cdots U_I(t_n, -\infty).
+ \]
+ So after time-ordering, we know the numerator of the right hand side is
+ \begin{align*}
+ &\hphantom{=}\brak{0}U_I(\infty, t_1) \phi_I(x_1) U_I(t_1, t_2) \phi_I(x_2) \cdots \phi_I(x_n)U_I(t_n, -\infty)\bket{0}\\
+ &=\brak{0}U_I(\infty, t_1) \phi_H(x_1) \cdots \phi_H(x_n) U_I(t_n, -\infty)\bket{0}\\
+ &=\braket{0}{\Omega}\brak{\Omega}T \phi_H(x_1) \cdots \phi_H(x_n)\bket{\Omega}\braket{\Omega}{0}.
+ \end{align*}
+ So the result follows. % why can we convert like that?
+\end{proof}
+
+Now what does this quantity tell us? It turns out these have a really nice physical interpretation. Let's look at the terms $\brak{0}T \phi_I(x_1) \cdots \phi_I(x_n) S \bket{0}$ and $\brak{0}S\bket{0}$ individually and see what they tell us.
+
+For simplicity, we will work with the $\phi^4$ theory, so that we only have a single $\phi$ field, and we will, without risk of confusion, draw all diagrams with solid lines.
+
+Looking at $\brak{0}S\bket{0}$, we are looking at all transitions from $\bket{0}$ to $\bket{0}$. The Feynman diagrams corresponding to these would look like
+\begin{center}
+ \begin{tikzpicture}
+ \draw (2, 0.3) circle [radius=0.3];
+ \draw (2, -0.3) circle [radius=0.3];
+
+ \draw (4, 0.6) circle [radius=0.3];
+ \draw (4, 0) circle [radius=0.3];
+ \draw (4, -0.6) circle [radius=0.3];
+
+ \draw (5.5, 0) circle [radius=0.5];
+ \draw (5.5, 0) ellipse (0.2 and 0.5);
+
+ \draw (7, 0.3) circle [radius=0.3];
+ \draw (7, -0.3) circle [radius=0.3];
+ \draw (7.7, 0.3) circle [radius=0.3];
+ \draw (7.7, -0.3) circle [radius=0.3];
+
+ \draw [decorate, decoration={brace, amplitude=3pt, raise=-3pt}] (2.6, -1.2) -- (1.4, -1.2) node [pos=0.5, below] {$1$ vertex};
+ \draw [decorate, decoration={brace, amplitude=3pt, raise=-3pt}] (8.3, -1.2) -- (3.4, -1.2) node [pos=0.5, below] {$2$ vertices};
+ \end{tikzpicture}
+\end{center}
+These are known as \term{vacuum bubbles}. Then $\brak{0}S\bket{0}$ is the sum of the amplitudes of all these vacuum bubbles.
+
+While this sounds complicated, a miracle occurs. It happens that the different combinatoric factors piece together nicely so that we have
+\[
+ \brak{0}S \bket{0} = \exp(\text{sum of all distinct (connected) vacuum bubbles}).
+\]
+Similarly, magic tells us that
+\[
+ \brak{0}T\phi_I(x_1) \cdots \phi_I(x_n) S \bket{0} =\left(\sum \parbox{9em}{connected diagrams with $n$ loose ends}\right) \brak{0}S\bket{0}.
+\]
+So what $G^{(n)}(x_1, \cdots, x_n)$ really tells us is the sum of connected diagrams modulo these silly vacuum bubbles.
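The combinatorial ``miracle'' is the usual exponentiation of disconnected pieces: a configuration containing $n_i$ copies of the $i$th connected bubble carries a symmetry factor $\prod_i 1/n_i!$ for exchanging identical bubbles, and summing over all occupation numbers gives the exponential. A Python sketch with made-up bubble amplitudes:

```python
from math import exp, factorial
from itertools import product

def bubble_sum(values, nmax):
    """Sum over all configurations of connected vacuum bubbles: n_i copies
    of the i-th bubble contribute prod_i values[i]**n_i / n_i!, where the
    1/n_i! is the symmetry factor for exchanging identical bubbles."""
    total = 0.0
    for occ in product(range(nmax + 1), repeat=len(values)):
        term = 1.0
        for value, n in zip(values, occ):
            term *= value**n / factorial(n)
        total += term
    return total

bubbles = (0.4, 0.7, 1.1)   # made-up amplitudes of three connected bubbles
assert abs(bubble_sum(bubbles, 20) - exp(sum(bubbles))) < 1e-10
```

The truncated multi-bubble sum reproduces $\exp(\sum_i B_i)$, which is exactly the exponentiation claim above.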
+
+\begin{eg}
+ The diagrams that correspond to $G^{(4)}(x_1, \cdots, x_4)$ include things that look like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [left] {$x_1$} node [circ] {} -- (1, 0) node [circ] {} node [right] {$x_2$};
+ \draw (0, -1) node [left] {$x_3$} node [circ] {} -- (1, -1) node [circ] {} node [right] {$x_4$};
+
+ \draw (3, 0) node [left] {$x_1$} node [circ] {} -- (4, -1) node [circ] {} node [right] {$x_4$};
+ \draw (3, -1) node [left] {$x_3$} node [circ] {} -- (4, 0) node [circ] {} node [right] {$x_2$};
+
+ \draw (6, 0) node [left] {$x_1$} node [circ] {} -- (7, 0) node [circ] {} node [right] {$x_2$};
+ \draw (6, -1) node [left] {$x_3$} node [circ] {} -- (6.5, -0.7) -- (7, -1) node [circ] {} node [right] {$x_4$};
+ \draw (6.5, -0.5) circle [radius=0.2];
+ \end{tikzpicture}
+ \end{center}
+ Note that we define ``connected'' to mean every line is connected to some of the end points in some way, rather than everything being connected to everything.
+\end{eg}
+We can come up with analogous Feynman rules to figure out the contribution of all of these terms.
+
+%For scalar Yukawa theory, the vacuum bubbles may look like these:
+%\begin{center}
+% \begin{tikzpicture}
+% \begin{feynman}
+% \vertex (r);
+% \vertex [right=of r] (l);
+%
+% \diagram*{
+% (r) -- [fermion, half right] (l) -- [fermion, half right] (r),
+% (r) -- [scalar] (l),
+% };
+% \end{feynman}
+% \end{tikzpicture}
+% \quad
+% \begin{tikzpicture}
+% \begin{feynman}
+% \vertex (r);
+% \vertex [right=of r] (l);
+%
+% \vertex [right=of l] (r');
+% \vertex [right=of r'] (l');
+%
+% \diagram*{
+% (r) -- [fermion, half right] (l) -- [fermion, half right] (r),
+% (r') -- [fermion, half right] (l') -- [fermion, half right] (r'),
+% (l) -- [scalar] (r'),
+% };
+% \end{feynman}
+% \end{tikzpicture}
+%\end{center}
+%With a little thought, we can show that
+%Here connected means that every point in the diagram is connected to at least $1$ external le.g.\ For example, the following are connected:
+%\begin{center}
+% \begin{tikzpicture}
+% \begin{feynman}
+% \vertex (l);
+% \vertex [right=of l] (r);
+% \diagram*{
+% (l) -- [scalar] (r),
+% };
+% \end{feynman}
+% \end{tikzpicture}
+% \begin{tikzpicture}
+% \begin{feynman}
+% \vertex (l);
+% \vertex [right=of l] (m);
+% \vertex [right=of m] (r);
+% \vertex [above=of m] (t);
+%
+% \diagram*{
+% (l) -- [scalar] (m) -- [scalar] (r),
+% (m) -- [scalar, half left] (t) -- [scalar, half left] (m),
+% };
+% \end{feynman}
+% \end{tikzpicture}
+%\end{center}
+%So we learn that
+%\[
+% \brak{\Omega}T \phi_H(x_1), \cdots, \phi_H(x_n)\bket{\Omega} = \sum\text{connected Feynman diagrams}.
+%\]
+%Here the Feynman diagrams depend on $x_i$, whereas in the $S$-matrix we integrated them.
+%
+%It is simple to adopt the Feynman rules for momentum space to compute $G^{(n)}(x_1, \cdots, x_n)$ directly. We simply draw $n$ external points $x_1, \cdots, x_n$. We connect them with propagators (i.e.\ lines), and vertices.
+%
+%For example, to compute $\brak{\Omega} T \phi_H(x_1) \cdots \phi_H(x_4) \bket{\Omega}$, we have the $0$th order terms
+%\begin{center}
+% \begin{tikzpicture}
+% \begin{feynman}
+% \vertex (tl);
+% \vertex [right=of tl] (tr);
+% \vertex [below=of tl] (bl);
+% \vertex [right=of bl] (br);
+% \diagram* {
+% (tl) -- [scalar] (tr),
+% (bl) -- [scalar] (br),
+% };
+% \end{feynman}
+% \end{tikzpicture}
+%\end{center}
+%and similarly $3$ other diagrams that connect the dots. We then have higher order terms like
+%\begin{center}
+% \begin{tikzpicture}
+% \begin{feynman}
+% \vertex (tl);
+% \vertex [right=1.5cm of tl] (tr);
+% \vertex [below=1.5cm of tl] (bl);
+% \vertex [right=1.5cm of bl] (br);
+% \vertex [right=0.75cm of bl] (bm);
+% \vertex [above=0.75cm of bm] (bb);
+% \diagram* {
+% (tl) -- [scalar] (tr),
+% (bl) -- [scalar] (br),
+% (bm) -- [scalar, half left] (bb) -- [scalar, half left] (bm),
+% };
+% \end{feynman}
+% \end{tikzpicture}
+%\end{center}
+%Green's functions are a handy tool to describe scattering because they avoid the previous complications.
+
+There is also another way we can think about these Green's functions. Consider the theory with a source $J(x)$ added, so that
+\[
+ H = H_0 + H_{\mathrm{int}} - J(x) \phi(x).
+\]
+This $J$ is a fixed background function, called a source in analogy with electromagnetism.
+
+Consider the interaction picture, but now we choose $(H_0 + H_{\mathrm{int}})$ to be the ``free'' part, and $-J \phi$ as the ``interaction'' piece.
+
+Now the ``vacuum'' we use is not $\bket{0}$ but $\bket{\Omega}$, since this is the ``free vacuum'' for our theory.
+We define
+\[
+ W[J] = \brak{\Omega}U_I(-\infty, \infty) \bket{\Omega}.
+\]
+This is then a functional in $J$. We can compute
+\begin{align*}
+ W[J] &= \brak{\Omega}U_I(-\infty, \infty) \bket{\Omega}\\
+ &= \brak{\Omega} T \exp\left(\int \d^4 x\; -J(x) \phi_H(x)\right)\bket{\Omega}\\
+ &= 1 + \sum_{n = 1}^\infty \frac{(-1)^n}{n!} \int \d^4 x_1 \cdots \d^4 x_n \; J(x_1) \cdots J(x_n) G^{(n)}(x_1, \cdots, x_n).
+\end{align*}
+Thus by general variational calculus, we know
+\[
+ G^{(n)}(x_1, \cdots, x_n) = (-1)^n \left.\frac{\delta^n W[J]}{\delta J(x_1) \cdots \delta J(x_n)} \right|_{J = 0}.
+\]
+Thus we call $W[J]$ a \emph{generating functional} for the functions $G^{(n)}(x_1, \cdots, x_n)$.
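
The relation between $W[J]$ and the $G^{(n)}$ is the functional version of an ordinary generating function. As a toy illustration only (made-up numbers standing in for the $G^{(n)}$, with the spacetime integrals suppressed), we can check the zero-dimensional analogue, where functional derivatives reduce to reading off Taylor coefficients:

```python
from math import factorial

# Zero-dimensional toy analogue (illustration only): pretend the n-point
# functions G_n are plain numbers, so that
#     W(J) = sum_n (-1)^n / n! * J^n * G_n
# is an ordinary power series in a single source J.
G = [1.0, 0.5, 2.0, -0.25, 3.0]  # made-up stand-ins for G^(0), ..., G^(4)

# Taylor coefficients of W(J)
coeffs = [(-1) ** n * G[n] / factorial(n) for n in range(len(G))]

# "Functional differentiation" at J = 0 is just reading off Taylor
# coefficients: d^n W / dJ^n |_{J=0} = n! * coeffs[n], so that
#     G_n = (-1)^n * d^n W / dJ^n |_{J=0}.
recovered = [(-1) ** n * factorial(n) * coeffs[n] for n in range(len(G))]
```

Differentiating $n$ times kills the lower-order terms at $J = 0$ and produces the $n!$; the honest functional statement replaces the Taylor coefficient by $\delta^n W/\delta J(x_1) \cdots \delta J(x_n)$.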
+
+%We can compute $W[J]$ by using the rules prescribed earlier, amended to include the vertex including the source $iJ(x)$. Often, one first defines a related generating function
+%\[
+% Z[J] = \brak{0}S\bket{0}
+%\]
+%which sums over all Feynman diagrams, and order $n$, is given by
+%\[
+% Z[J] = e^{W[J]}.
+%\]
+%Here $W([J]$ sums over all connected diagrams.
+
+\section{Spinors}
+In this chapter, we return to the classical world where things make sense. We are going to come up with the notion of a \emph{spinor}, which is a special type of field that, when quantized, gives us particles with ``intrinsic spin''.
+
+\subsection{The Lorentz group and the Lorentz algebra}
+So far, we have only been working with scalar fields. These are pretty boring, since when we change coordinates, the values of the field remain unchanged. One can certainly imagine more exciting fields, like the electromagnetic potential. Ignoring issues of choice of gauge, we can imagine the electromagnetic potential as a $4$-vector $A^\mu$ at each point in space-time. When we change coordinates by a Lorentz transformation $\Lambda$, the field transforms by
+\[
+ A^\mu(x) \mapsto \Lambda^\mu\!_\nu A^\nu(\Lambda^{-1} x).
+\]
+Note that we had to put a $\Lambda^{-1}$ inside $A^\nu$ because the names of the points have changed. $\Lambda^{-1}x$ in the new coordinate system labels the same point as $x$ in the old coordinate system.
+
+In general, we can consider vector-valued fields that transform when we change coordinates. If we were to construct such a field, then given any Lorentz transformation $\Lambda$, we need to produce a corresponding transformation $D(\Lambda)$ of the field. If our field $\phi$ takes values in a vector space $V$ (usually $\R^n$), then this $D(\Lambda)$ should be a linear map $V \to V$. The transformation rule is then
+\begin{align*}
+ x &\mapsto \Lambda x,\\
+ \phi &\mapsto D(\Lambda) \phi.
+\end{align*}
+We want to be sure this assignment of $D(\Lambda)$ behaves sensibly. For example, if we apply the Lorentz transformation $\Lambda = \mathbf{1}$, i.e.\ we do nothing, then $D(\mathbf{1})$ should not do anything either. So we must have
+\[
+ D(\mathbf{1}) = \mathbf{1}.
+\]
+Now if we do $\Lambda_1$ and then $\Lambda_2$, then our field will transform first by $D(\Lambda_1)$, then $D(\Lambda_2)$. For the universe to make sense, this had better be equal to $D(\Lambda_2 \Lambda_1)$. So we require
+\[
+ D(\Lambda_1)D(\Lambda_2) = D(\Lambda_1 \Lambda_2)
+\]
+for any $\Lambda_1, \Lambda_2$. Mathematically, we call this a \emph{representation of the Lorentz group}.
+\begin{defi}[Lorentz group]\index{Lorentz group}\index{$\Or(1, 3)$}
+ The \emph{Lorentz group}, denoted $\Or(1, 3)$, is the group of all Lorentz transformations. Explicitly, it is given by
+ \[
+ \Or(1, 3) = \{\Lambda \in M_{4\times 4}: \Lambda^T \eta \Lambda = \eta\},
+ \]
+ where
+ \[
+ \eta = \eta_{\mu\nu} =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & -1 & 0 & 0\\
+ 0 & 0 & -1 & 0\\
+ 0 & 0 & 0 & -1
+ \end{pmatrix}
+ \]
+ is the \term{Minkowski metric}. Alternatively, $\Or(1, 3)$ is the group of all matrices $\Lambda$ such that
+ \[
+ \bra \Lambda x, \Lambda y\ket = \bra x, y\ket,
+ \]
+ for all $x, y \in \R^{1 + 3}$, where $\bra x, y\ket$ denotes the inner product given by the Minkowski metric
+ \[
+ \bra x, y\ket = x^T \eta y.
+ \]
+\end{defi}
+\begin{defi}[Representation of the Lorentz group]\index{Lorentz group!representation}\index{representation!Lorentz group}
+ A \emph{representation of the Lorentz group} is a vector space $V$ and a linear map $D(\Lambda): V \to V$ for each $\Lambda \in \Or(1, 3)$ such that
+ \[
+ D(\mathbf{1}) = \mathbf{1}, \quad D(\Lambda_1)D(\Lambda_2) = D(\Lambda_1 \Lambda_2)
+ \]
+ for any $\Lambda_1, \Lambda_2 \in \Or(1, 3)$.
+
+ The space $V$ is called the \term{representation space}.
+\end{defi}
+So to construct interesting fields, we need to find interesting representations! There is one representation that sends each $\Lambda$ to the matrix $\Lambda$ acting on $\R^{1 + 3}$, but that is really boring.
+
+To find more representations, we will consider ``infinitesimal'' Lorentz transformations. We will find representations of these infinitesimal Lorentz transformations, and then attempt to patch them up to get a representation of the Lorentz group. It turns out that we will \emph{fail} to patch them up, but instead we will end up with something more interesting!
+
+We write our Lorentz transformation $\Lambda$ as
+\[
+ \Lambda^\mu\!_\nu = \delta^\mu\!_\nu + \varepsilon \omega^\mu\!_\nu + O(\varepsilon^2).
+\]
+The definition of a Lorentz transformation requires
+\[
+ \Lambda^\mu\!_\sigma \Lambda^\nu\!_\rho \eta^{\sigma \rho} = \eta^{\mu\nu}.
+\]
+Putting it in, we have
+\[
+ (\delta^\mu\!_\sigma + \varepsilon \omega^\mu\!_\sigma) (\delta^\nu\!_\rho + \varepsilon \omega^\nu\!_\rho) \eta^{\rho\sigma} + O(\varepsilon^2) = \eta^{\mu\nu}.
+\]
+So we find that we need
+\[
+ \omega^{\mu\nu} + \omega^{\nu\mu} = 0.
+\]
+So $\omega$ is antisymmetric. Thus we find that an infinitesimal Lorentz transformation is given by an antisymmetric matrix. The collection of these infinitesimal transformations is known as the \term{Lie algebra} of the Lorentz group.
+\begin{defi}[Lorentz algebra]\index{Lorentz algebra}
+ The \emph{Lorentz algebra} is
+ \[
+ \ort(1, 3) = \{\omega \in M_{4 \times 4}: \omega^{\mu\nu} + \omega^{\nu\mu} = 0\}.
+ \]
+\end{defi}
+It is important that when we say that $\omega$ is antisymmetric, we mean exactly $\omega^{\mu\nu} + \omega^{\nu\mu} = 0$. Usually, when we write out a matrix, we write out the entries of $\omega^\mu\!_\nu$ instead, and the matrix we see will \emph{not} in general be antisymmetric, as we will see in the examples below.
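
A quick numerical sketch of this point (using numpy; the specific numbers are arbitrary): starting from an antisymmetric $\omega^{\mu\nu}$, the mixed-index matrix $\omega^\mu\!_\nu = \omega^{\mu\rho}\eta_{\rho\nu}$ is generally not antisymmetric as a matrix, yet exponentiating it still gives a genuine Lorentz transformation:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric

def expm(A, terms=60):
    # Matrix exponential via its Taylor series (adequate for these norms)
    result, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

rng = np.random.default_rng(0)
B = rng.normal(scale=0.5, size=(4, 4))
omega_upper = B - B.T             # omega^{mu nu}: antisymmetric by construction
omega_mixed = omega_upper @ eta   # omega^mu_nu = omega^{mu rho} eta_{rho nu}

Lambda = expm(omega_mixed)

# Lambda satisfies the defining relation Lambda^T eta Lambda = eta ...
is_lorentz = np.allclose(Lambda.T @ eta @ Lambda, eta)
# ... even though the mixed-index matrix is not antisymmetric as written out
mixed_antisymmetric = np.allclose(omega_mixed, -omega_mixed.T)
```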
+
+A $4 \times 4$ antisymmetric matrix has $6$ free components. These $6$ components in fact correspond to three infinitesimal rotations and three infinitesimal boosts. We introduce a basis given by the confusing expression
+\[
+ (M^{\rho\sigma})^\mu\!_\nu = \eta^{\rho\mu}\delta^\sigma\!_\nu - \eta^{\sigma\mu}\delta^{\rho}\!_\nu.
+\]
+Here the $\rho$ and $\sigma$ count which basis vector (i.e.\ matrix) we are talking about, and $\mu, \nu$ are the rows and columns. Usually, we will just refer to the matrix as $M^{\rho\sigma}$ and not mention the indices $\mu, \nu$ to avoid confusion.
+
+Each basis element will have exactly one pair of non-zero entries. For example, we have
+\[
+ (M^{01})^\mu\!_\nu =
+ \begin{pmatrix}
+ 0 & 1 & 0 & 0\\
+ 1 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix},\quad
+ (M^{12})^\mu\!_\nu =
+ \begin{pmatrix}
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & -1 & 0\\
+ 0 & 1 & 0 & 0\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix}.
+\]
+These generate a boost in the $x^1$ direction and a rotation in the $x^1$-$x^2$ plane respectively, as we will later see.
+
+By definition of a basis, we can write all matrices $\omega$ in the Lorentz algebra as a linear combination of these:
+\[
+ \omega = \frac{1}{2} \Omega_{\rho\sigma}M^{\rho\sigma}.
+\]
+Note that here $M^{\rho\sigma}$ and $M^{\sigma\rho}$ will be the negative of each other, so $\{M^{\rho\sigma}\}$ doesn't really form a basis. By symmetry, we will often choose $\Omega$ so that $\Omega_{\rho\sigma} = - \Omega_{\sigma \rho}$ as well.
+
+Now we can talk about representations of the Lorentz algebra. In the case of representations of the Lorentz group, we required that representations respect multiplication. In the case of the Lorentz algebra, the right thing to ask to be preserved is the commutator (cf.\ III Symmetries, Fields and Particles).
+\begin{defi}[Representation of Lorentz algebra]\index{representation!Lorentz algebra}\index{Lorentz algebra!representation}
+ A \emph{representation of the Lorentz algebra} is a collection of matrices that satisfy the same commutation relations as the Lorentz algebra.
+
+ Formally, this is given by a vector space $V$ and a linear map $R(\omega): V \to V$ for each $\omega \in \ort(1, 3)$ such that
+ \[
+ R(a \omega + b\omega') = aR(\omega) + bR(\omega'),\quad R([\omega, \omega']) = [R(\omega), R(\omega')]
+ \]
+ for all $\omega, \omega' \in \ort(1, 3)$ and $a, b \in \R$.
+
+\end{defi}
+It turns out finding representations of the Lorentz algebra isn't that bad. We first note the following magic formula about the basis vectors of the Lorentz algebra:
+\begin{prop}
+ \[
+ [M^{\rho\sigma}, M^{\tau\nu}] = \eta^{\sigma\tau} M^{\rho\nu} - \eta^{\rho\tau} M^{\sigma\nu} + \eta^{\rho\nu}M^{\sigma\tau} - \eta^{\sigma\nu}M^{\rho\tau}.
+ \]
+\end{prop}
+This can be proven, painfully, by writing out the matrices.
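
Instead of (or in addition to) the painful hand computation, the identity can be checked by brute force. Here is a sketch in Python/numpy that builds the matrices $(M^{\rho\sigma})^\mu\!_\nu$ from the defining formula and verifies the bracket for every index combination:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
delta = np.eye(4)

# (M^{rho sigma})^mu_nu = eta^{rho mu} delta^sigma_nu - eta^{sigma mu} delta^rho_nu
M = np.zeros((4, 4, 4, 4))  # indexed as M[rho, sigma][mu, nu]
for rho in range(4):
    for sigma in range(4):
        M[rho, sigma] = np.outer(eta[rho], delta[sigma]) - np.outer(eta[sigma], delta[rho])

def comm(A, B):
    return A @ B - B @ A

# Verify the proposition for all 4^4 index combinations
ok = all(
    np.allclose(
        comm(M[r, s], M[t, n]),
        eta[s, t] * M[r, n] - eta[r, t] * M[s, n]
        + eta[r, n] * M[s, t] - eta[s, n] * M[r, t],
    )
    for r in range(4) for s in range(4) for t in range(4) for n in range(4)
)
```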
+
+It turns out this is the only thing we need to satisfy:
+\begin{fact}
+ Given any vector space $V$ and collection of linear maps $L^{\rho\sigma}: V \to V$, they give a representation of the Lorentz algebra if and only if
+ \[
+ [L^{\rho\sigma}, L^{\tau\nu}] = \eta^{\sigma\tau} L^{\rho\nu} - \eta^{\rho\tau} L^{\sigma\nu} + \eta^{\rho\nu}L^{\sigma\tau} - \eta^{\sigma\nu}L^{\rho\tau}.
+ \]
+\end{fact}
+Suppose we have already found a representation of the Lorentz algebra. How can we find a representation of the Lorentz group? We will need to make use of the following fact:
+\begin{fact}
+ Let $\Lambda$ be a Lorentz transformation that preserves orientation and does not reverse time. Then we can write $\Lambda = \exp(\omega)$ for some $\omega \in \ort(1, 3)$. In coordinates, we can find some $\Omega_{\rho\sigma}$ such that
+ \[
+ \Lambda = \exp\left(\frac{1}{2} \Omega_{\rho\sigma} M^{\rho\sigma}\right).\tag{$*$}
+ \]
+\end{fact}
+Thus if we have found some representation $R(M^{\rho\sigma})$, we can try to define a representation of the Lorentz group by
+\[
+ R(\Lambda) = \exp(R(\omega)) = \exp \left(\frac{1}{2} \Omega_{\rho\sigma} R(M^{\rho\sigma})\right).\tag{$\dagger$}
+\]
+Now we see two potential problems with our approach. The first is that we can only lift a representation of the Lorentz algebra to the parity and time-preserving Lorentz transformations, and not all of them. Even worse, it might not even be well-defined for these nice Lorentz transformations. We just know that we can find \emph{some} $\Omega_{\rho\sigma}$ such that $(*)$ holds. In general, there can be many such $\Omega_{\rho\sigma}$, and they might not give the same answer when we evaluate $(\dagger)$!
+
+\begin{defi}[Restricted Lorentz group]\index{Restricted Lorentz group}\index{Lorentz group!restricted}\index{$\SO^+(1, 3)$}
+ The \emph{restricted Lorentz group} consists of the elements in the Lorentz group that preserve orientation and direction of time.
+\end{defi}
+
+\begin{eg}
+ Recall that we had
+ \[
+ (M^{12})^\mu\!_\nu =
+ \begin{pmatrix}
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & -1 & 0\\
+ 0 & 1 & 0 & 0\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix}
+ \]
+ We pick
+ \[
+ \Omega_{12} = -\Omega_{21} = -\phi_3,
+ \]
+ with all other entries zero. Then we have
+ \[
+ \frac{1}{2}\Omega_{\rho\sigma} M^{\rho\sigma} =
+ \begin{pmatrix}
+ 0 & 0 & 0 & 0\\
+ 0 & 0 & \phi_3 & 0\\
+ 0 & -\phi_3 & 0 & 0\\
+ 0 & 0 & 0 & 0
+ \end{pmatrix}
+ \]
+ So we know
+ \[
+ \Lambda = \exp\left(\frac{1}{2}\Omega_{\rho\sigma} M^{\rho\sigma}\right) =
+ \begin{pmatrix}
+ 1 & 0 & 0 & 0\\
+ 0 & \cos \phi_3 & \sin \phi_3 & 0\\
+ 0 & -\sin \phi_3 & \cos \phi_3 & 0\\
+ 0 & 0 & 0 & 1
+ \end{pmatrix},
+ \]
+ which is a rotation in the $x^1$-$x^2$ plane by $\phi_3$. It is clear that $\phi_3$ is not uniquely defined by $\Lambda$. Indeed, $\phi_3 + 2n \pi$ for any $n$ would give the same element of the Lorentz group.
+
+ More generally, given any rotation parameters $\boldsymbol\phi = (\phi_1, \phi_2, \phi_3)$, we obtain a rotation by setting
+ \[
+ \Omega_{ij} = - \varepsilon_{ijk} \phi_k.
+ \]
+\end{eg}
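
The example can also be checked numerically; the following sketch (with an arbitrary angle) exponentiates $\frac{1}{2}\Omega_{\rho\sigma}M^{\rho\sigma}$ and compares against the rotation matrix above:

```python
import numpy as np

def expm(A, terms=60):
    # Matrix exponential via its Taylor series
    result, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

phi3 = 0.7  # an arbitrary rotation angle

# (1/2) Omega_{rho sigma} M^{rho sigma} with Omega_12 = -Omega_21 = -phi3:
# only the (1,2) and (2,1) entries survive
generator = np.zeros((4, 4))
generator[1, 2], generator[2, 1] = phi3, -phi3

Lambda = expm(generator)
rotation = np.array([
    [1, 0, 0, 0],
    [0, np.cos(phi3), np.sin(phi3), 0],
    [0, -np.sin(phi3), np.cos(phi3), 0],
    [0, 0, 0, 1],
])
ok = np.allclose(Lambda, rotation)
```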
+
+\subsection{The Clifford algebra and the spin representation}
+It turns out there is a useful way of finding a representation of the Lorentz algebra which gives rise to nice properties of the representation. This is via the Clifford algebra.
+
+We will make use of the following notation:
+\begin{notation}[Anticommutator]\index{anticommutator}\index{$\{A, B\}$}
+ We write
+ \[
+ \{A, B\} = AB + BA
+ \]
+ for the \emph{anticommutator} of two matrices/linear maps.
+\end{notation}
+
+\begin{defi}[Clifford algebra]\index{Clifford algebra}
+ The \emph{Clifford algebra} is the algebra generated by $\gamma^0, \gamma^1, \gamma^2, \gamma^3$ subject to the relations
+ \[
+ \{\gamma^\mu, \gamma^\nu\} = \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu = 2 \eta^{\mu\nu}\mathbf{1}.
+ \]
+ More explicitly, we have
+ \[
+ \gamma^\mu \gamma^\nu = - \gamma^\nu \gamma^\mu\quad \text{for}\quad \mu \not= \nu,
+ \]
+ and
+ \[
+ (\gamma^0)^2 = \mathbf{1},\quad (\gamma^i)^2 = -\mathbf{1}.
+ \]
+ A \emph{representation} of the Clifford algebra is then a collection of matrices (or linear maps) satisfying the relations listed above.
+\end{defi}
+Note that if we have a representation of the Clifford algebra, we will also denote the matrices of the representation by $\gamma^\mu$, instead of, say, $R(\gamma^\mu)$.
+
+\begin{eg}[Chiral representation]
+ The simplest representation (in the sense that it is easy to write out, rather than in any mathematical sense) is the $4$-dimensional representation called the \term{chiral representation}. In block matrix form, this is given by
+ \[
+ \gamma^0 =
+ \begin{pmatrix}
+ \mathbf{0} & \mathbf{1}\\
+ \mathbf{1} & \mathbf{0}
+ \end{pmatrix},\quad
+ \gamma^i =
+ \begin{pmatrix}
+ \mathbf{0} & \sigma^i\\
+ -\sigma^i & \mathbf{0}
+ \end{pmatrix},
+ \]
+ where the $\sigma^i$ are the Pauli matrices given by
+ \[
+ \sigma^1 =
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix},\quad
+ \sigma^2 =
+ \begin{pmatrix}
+ 0 & -i\\
+ i & 0
+ \end{pmatrix},\quad
+ \sigma^3 =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix},
+ \]
+ which satisfy
+ \[
+ \{\sigma^i, \sigma^j\} = 2 \delta^{ij} \mathbf{1}.
+ \]
+\end{eg}
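
It is straightforward (if tedious) to verify the Clifford relations for the chiral representation by hand; as a sketch, here is a short numerical check:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Chiral representation of the gamma matrices, in block form
gamma = [np.block([[Z2, I2], [I2, Z2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

# {gamma^mu, gamma^nu} = 2 eta^{mu nu} 1 for all mu, nu
ok = all(
    np.allclose(gamma[m] @ gamma[n] + gamma[n] @ gamma[m],
                2 * eta[m, n] * np.eye(4))
    for m in range(4) for n in range(4)
)
```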
+Of course, this is not the only representation. Given any invertible $4 \times 4$ matrix $U$, we can construct a new representation of the Clifford algebra by transforming by
+\[
+ \gamma^\mu \mapsto U \gamma^\mu U^{-1}.
+\]
+It turns out any $4$-dimensional representation of the Clifford algebra comes from applying some similarity transformation to the chiral representation, but we will not prove that.
+
+We now claim that every representation of the Clifford algebra gives rise to a representation of the Lorentz algebra.
+
+\begin{prop}
+ Suppose $\gamma^\mu$ is a representation of the Clifford algebra. Then the matrices given by
+ \[
+ S^{\rho\sigma} = \frac{1}{4}[\gamma^\rho, \gamma^\sigma] =
+ \begin{cases}
+ 0 & \rho = \sigma\\
+ \frac{1}{2}\gamma^\rho \gamma^\sigma & \rho \not= \sigma
+ \end{cases}
+ = \frac{1}{2} \gamma^\rho \gamma^\sigma - \frac{1}{2}\eta^{\rho\sigma}
+ \]
+ define a representation of the Lorentz algebra.
+\end{prop}
+This is known as the \term{spin representation}.
+
+We will need the following technical result to prove the proposition:
+\begin{lemma}
+ \[
+ [S^{\mu\nu}, \gamma^\rho] = \gamma^\mu \eta^{\nu\rho} - \gamma^\nu \eta^{\rho\mu}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ \begin{align*}
+ [S^{\mu\nu}, \gamma^\rho] &= \left[\frac{1}{2} \gamma^\mu\gamma^\nu - \frac{1}{2}\eta^{\mu\nu}, \gamma^\rho\right]\\
+ &= \frac{1}{2} \gamma^\mu \gamma^\nu \gamma^\rho - \frac{1}{2} \gamma^\rho \gamma^\mu \gamma^\nu\\
+ &= \gamma^\mu (\eta^{\nu\rho} - \gamma^\rho \gamma^\nu) - (\eta^{\mu\rho} - \gamma^\mu \gamma^\rho)\gamma^\nu\\
+ &= \gamma^\mu \eta^{\nu\rho} - \gamma^\nu \eta^{\rho\mu}. \qedhere
+ \end{align*}
+\end{proof}
+
+Now we can prove our claim.
+
+\begin{proof}[Proof of proposition]
+ We have to show that
+ \[
+ [S^{\mu\nu}, S^{\rho\sigma}] = \eta^{\nu\rho} S^{\mu\sigma} - \eta^{\mu\rho} S^{\nu\sigma} + \eta^{\mu\sigma} S^{\nu\rho} - \eta^{\nu\sigma} S^{\mu\rho}.
+ \]
+ The proof involves, again, writing everything out. Using the fact that $\eta^{\rho\sigma}$ commutes with everything, we know
+ \begin{align*}
+ [S^{\mu\nu}, S^{\rho\sigma}] &= \frac{1}{2}[S^{\mu\nu}, \gamma^\rho \gamma^\sigma]\\
+ &= \frac{1}{2}\big([S^{\mu\nu}, \gamma^\rho] \gamma^\sigma + \gamma^\rho[S^{\mu\nu}, \gamma^\sigma]\big)\\
+ &= \frac{1}{2}\big(\gamma^\mu \eta^{\nu \rho} \gamma^\sigma - \gamma^\nu \eta^{\mu\rho} \gamma^\sigma + \gamma^\rho \gamma^\mu \eta^{\nu\sigma} - \gamma^\rho \gamma^\nu \eta^{\mu\sigma}\big).
+ \end{align*}
+ Then the result follows from the fact that
+ \[
+ \gamma^\mu \gamma^\sigma = 2 S^{\mu\sigma} + \eta^{\mu\sigma}. \qedhere
+ \]
+\end{proof}
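
Again, the algebra can be double-checked by brute force. The sketch below builds $S^{\rho\sigma} = \frac{1}{4}[\gamma^\rho, \gamma^\sigma]$ in the chiral representation and verifies the Lorentz algebra relations for every index combination:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[Z2, I2], [I2, Z2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

def comm(A, B):
    return A @ B - B @ A

# S^{rho sigma} = (1/4)[gamma^rho, gamma^sigma]
S = [[comm(gamma[r], gamma[s]) / 4 for s in range(4)] for r in range(4)]

# Verify that the S^{mu nu} satisfy the Lorentz algebra relations
ok = all(
    np.allclose(
        comm(S[m][n], S[r][s]),
        eta[n, r] * S[m][s] - eta[m, r] * S[n][s]
        + eta[m, s] * S[n][r] - eta[n, s] * S[m][r],
    )
    for m in range(4) for n in range(4) for r in range(4) for s in range(4)
)
```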
+
+So we now have a representation of the Lorentz algebra. Does this give us a representation of the (restricted) Lorentz group? Given any $\Lambda \in \SO^+(1, 3)$, we can write
+\[
+ \Lambda = \exp\left(\frac{1}{2}\Omega_{\rho\sigma} M^{\rho\sigma}\right).
+\]
+We now try to define
+\[
+ S[\Lambda] = \exp\left(\frac{1}{2} \Omega_{\rho\sigma} S^{\rho\sigma}\right).
+\]
+Is this well-defined?
+
+We can try using an example we have previously computed. Recall that given any rotation parameter $\boldsymbol\phi = (\phi_1, \phi_2, \phi_3)$, we can pick
+\[
+ \Omega_{ij} = - \varepsilon_{ijk} \phi_k
+\]
+to obtain a rotation denoted by $\boldsymbol\phi$. What does this give when we exponentiate with $S^{\rho\sigma}$? We use the chiral representation, so that
+\[
+ S^{ij} = \frac{1}{4}\left[
+ \begin{pmatrix}
+ \mathbf{0} & \sigma^i\\
+ -\sigma^i & \mathbf{0}
+ \end{pmatrix},
+ \begin{pmatrix}
+ \mathbf{0} & \sigma^j\\
+ -\sigma^j & \mathbf{0}
+ \end{pmatrix}\right] = -\frac{i}{2} \varepsilon^{ijk}
+ \begin{pmatrix}
+ \sigma^k & \mathbf{0}\\
+ \mathbf{0} & \sigma^k
+ \end{pmatrix}
+\]
+Then we obtain
+\begin{prop}
+ Let $\boldsymbol\phi = (\phi_1, \phi_2, \phi_3)$, and define
+ \[
+ \Omega_{ij} = - \varepsilon_{ijk} \phi_k.
+ \]
+ Then in the chiral representation of $S$, writing $\boldsymbol\sigma=(\sigma^1,\sigma^2,\sigma^3)$, we have
+ \[
+ S[\Lambda] = \exp\left(\frac{1}{2} \Omega_{\rho\sigma} S^{\rho \sigma}\right)
+ =
+ \begin{pmatrix}
+ e^{i\boldsymbol\phi \cdot \boldsymbol\sigma/2} & \mathbf{0}\\
+ \mathbf{0} & e^{i \boldsymbol\phi \cdot \boldsymbol \sigma/2}
+ \end{pmatrix}.
+ \]
+\end{prop}
+In particular, we can pick $\boldsymbol\phi = (0, 0, 2\pi)$. Then the corresponding Lorentz transformation is the identity, but
+\[
+ S[\mathbf{1}] =
+ \begin{pmatrix}
+ e^{i\pi \sigma^3} & \mathbf{0}\\
+ \mathbf{0} & e^{i\pi \sigma^3}
+ \end{pmatrix} = -\mathbf{1} \not= \mathbf{1}.
+\]
+So this does not give a well-defined representation of the Lorentz group, because different ways of representing the element $\mathbf{1}$ in the Lorentz group will give different values of $S[\mathbf{1}]$. Before we continue on to discuss this, we take note of the corresponding formula for boosts.
+
+Note that we have
+\[
+ S^{0i} = \frac{1}{2}
+ \begin{pmatrix}
+ \mathbf{0} & \mathbf{1}\\
+ \mathbf{1} & \mathbf{0}
+ \end{pmatrix}
+ \begin{pmatrix}
+ \mathbf{0} & \sigma^i\\
+ -\sigma^i & \mathbf{0}
+ \end{pmatrix}=
+ \frac{1}{2}
+ \begin{pmatrix}
+ -\sigma^i & \mathbf{0}\\
+ \mathbf{0} & \sigma^i
+ \end{pmatrix}.
+\]
+Then the corresponding result is
+\begin{prop}
+ Write $\boldsymbol\chi = (\chi_1, \chi_2, \chi_3)$. Then if
+ \[
+ \Omega_{0i} = -\Omega_{i0} = -\chi_i,
+ \]
+ then
+ \[
+ \Lambda = \exp\left( \frac{1}{2} \Omega_{\rho\sigma}M^{\rho\sigma}\right)
+ \]
+ is the boost in the $\boldsymbol\chi$ direction, and
+ \[
+ S[\Lambda] = \exp\left(\frac{1}{2} \Omega_{\rho\sigma} S^{\rho \sigma}\right) =
+ \begin{pmatrix}
+ e^{\boldsymbol\chi \cdot \boldsymbol\sigma/2} & \mathbf{0}\\
+ \mathbf{0} & e^{-\boldsymbol\chi \cdot \boldsymbol\sigma/2}
+ \end{pmatrix}.
+ \]
+\end{prop}
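
As with the rotations, the boost formula can be sanity-checked numerically (chiral representation, arbitrary rapidity values):

```python
import numpy as np

def expm(A, terms=60):
    # Matrix exponential via its Taylor series
    result = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[Z2, I2], [I2, Z2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

chi = np.array([0.3, -0.2, 0.5])  # arbitrary rapidity parameters

# (1/2) Omega_{rho sigma} S^{rho sigma} with Omega_{0i} = -Omega_{i0} = -chi_i
# reduces to Omega_{0i} S^{0i} = sum_i (-chi_i) (1/2) gamma^0 gamma^i
generator = sum(-chi[i - 1] * (gamma[0] @ gamma[i]) / 2 for i in range(1, 4))

chi_dot_sigma = sum(c * s for c, s in zip(chi, sigma))
expected = np.block([
    [expm(chi_dot_sigma / 2), Z2],
    [Z2, expm(-chi_dot_sigma / 2)],
])
ok = np.allclose(expm(generator), expected)
```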
+
+Now what is this $S[\Lambda]$ we have constructed? To begin with, writing $S[\Lambda]$ is a very bad notation, because $S[\Lambda]$ doesn't only depend on what $\Lambda$ is, but also on additional information on how we ``got to'' $\Lambda$, i.e.\ the values of $\Omega_{\rho\sigma}$. So going from $\mathbf{1}$ to $\mathbf{1}$ by rotating by $2\pi$ is different from going from $\mathbf{1}$ to $\mathbf{1}$ by doing nothing. However, physicists like to be confusing and this will be the notation used in the course.
+
+Yet, $S[\Lambda]$ is indeed a representation of \emph{something}. Each point of this ``something'' consists of an element of the Lorentz group, and also information on how we got there (formally, a (homotopy class of) paths from the identity to $\Lambda$). Mathematicians have come up with a fancy name for this, called the \term{universal cover}, and it can be naturally given the structure of a Lie group. It turns out in this case, this universal cover is a \term{double cover}, which means that for each Lorentz transformation $\Lambda$, there are only two non-equivalent ways of getting to $\Lambda$.
+
+Note that the previous statement depends crucially on what we mean by ways of getting to $\Lambda$ being ``equivalent''. We previously saw that for a rotation, we can always add $2n\pi$ to the rotation angle to get a different $\Omega_{\rho\sigma}$. However, it is a fact that whenever we add $4\pi$ to the angle, we will always get back the same $S[\Lambda]$ for \emph{any} representation $S$ of the Lorentz algebra. So the only two non-equivalent ways are the original one, and the original one plus $2\pi$.
+
+(The actual reason is backwards. We know for geometric reasons that adding $4\pi$ will give an equivalent path, and thus it must be the case that any representation must not change when we add $4\pi$. Trying to supply further details and justification for what is going on would bring us deep into the beautiful world of algebraic topology.)
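
We can see this $2\pi$ versus $4\pi$ behaviour concretely by exponentiating the rotation generator from the earlier proposition (a numerical sketch, in the chiral representation):

```python
import numpy as np

def expm(A, terms=80):
    # Matrix exponential via its Taylor series
    result = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

sigma3 = np.diag([1.0 + 0j, -1.0])

def spin_rotation(angle):
    # (1/2) Omega_{ij} S^{ij} for a rotation by `angle` about the x^3 axis
    # is (i angle / 2) sigma^3 on each 2x2 block of the chiral representation
    return expm(np.kron(np.eye(2), 1j * angle / 2 * sigma3))

# A 2*pi rotation is the identity in SO+(1, 3), but acts as -1 on spinors ...
two_pi_is_minus_one = np.allclose(spin_rotation(2 * np.pi), -np.eye(4))
# ... while adding another 2*pi brings us back to +1, as claimed
four_pi_is_identity = np.allclose(spin_rotation(4 * np.pi), np.eye(4))
```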
+
+We give a name to this group.
+\begin{defi}[Spin group]\index{spin group}\index{$\Spin(1, 3)$}
+ The \emph{spin group} $\Spin(1, 3)$ is the universal cover of $\SO^+(1, 3)$. This comes with a canonical surjection to $\SO^+(1, 3)$, sending ``$\Lambda$ and a path to $\Lambda$'' to $\Lambda$.
+\end{defi}
+As mentioned, physicists like to be confusing, and we will in the future keep talking about ``representation of the Lorentz group'', when we actually mean the representation of the spin group.
+
+So we have the following maps of (Lie) groups:
+\[
+ \begin{tikzcd}
+ \Spin(1, 3) \ar[r, two heads] & \SO^+(1, 3) \ar[r, hook] & \Or(1, 3)
+ \end{tikzcd}.
+\]
+This diagram pretty much summarizes the obstructions to lifting a representation of the Lorentz algebra to the Lorentz group. What a representation of the Lorentz algebra gives us is actually a representation of the group $\Spin(1, 3)$, and we want to turn this into a representation of $\Or(1, 3)$.
+
+If we have a representation $D$ of $\Or(1, 3)$, then we can easily produce a representation of $\Spin(1, 3)$ --- given an element $M \in \Spin(1, 3)$, we need to produce some matrix. We simply apply the above map to obtain an element of $\Or(1, 3)$, and then the representation $D$ gives us the matrix we wanted.
+
+However, going the other way round is harder. If we have a representation of $\Spin(1, 3)$, then for that to give a representation of $\SO^+(1, 3)$, we need to make sure that different ways of getting to a $\Lambda$ don't give different matrices in the representation. If this is true, then we have found ourselves a representation of $\SO^+(1, 3)$. After that, we then need to decide what we want to do with the elements of $\Or(1, 3)$ not in $\SO^+(1, 3)$, and this again involves some work.
+
+% some more philosophy
+
+\subsection{Properties of the spin representation}
+We have produced a representation of the Lorentz group, which acts on some vector space $V \cong \C^4$. Its elements are known as \emph{Dirac spinors}.
+
+% something about the lack of unitary representations
+
+\begin{defi}[Dirac spinor]\index{Dirac spinor}
+ A \emph{Dirac spinor} is a vector in the representation space of the spin representation. It may also refer to such a vector for each point in space.
+\end{defi}
+
+Our ultimate goal is to construct an action that involves a spinor. So we would want to figure out a way to get a number out of a spinor.
+
+In the case of a 4-vector, we had these things called covectors that lived in the ``dual space''. A covector $\lambda$ can eat up a vector $v$ and spurt out a number $\lambda(v)$. Often, we write the covector as $\lambda_\mu$ and the vector as $v^\mu$, and then $\lambda(v) = \lambda_\mu v^\mu$. When written out like a matrix, a covector is represented by a ``row vector''.
+
+Under a Lorentz transformation, these objects transform as
+\begin{align*}
+ \lambda &\mapsto \lambda \Lambda^{-1}\\
+ v &\mapsto \Lambda v
+\end{align*}
+(What do we mean by $\lambda \mapsto \lambda \Lambda^{-1}$? If we think of $\lambda$ as a row vector, then this is just matrix multiplication. However, we can think of it without picking a basis as follows --- $\lambda \Lambda^{-1}$ is a covector, so it is determined by what it does to a vector $v$. We then define $(\lambda \Lambda^{-1})(v) = \lambda(\Lambda^{-1}v)$.)
+
+Then the result $\lambda v$ transforms as
+\[
+ \lambda v \mapsto \lambda \Lambda^{-1} \Lambda v = \lambda v.
+\]
+So the resulting number does not change under Lorentz transformations. (Mathematically, this says a covector lives in the dual representation space of the Lorentz group.)
+
+Moreover, given a vector, we can turn it into a covector in a canonical way, by taking the transpose and then inserting some funny negative signs in the space components.
+
+We want to do the same for spinors. This terminology may or may not be standard:
+\begin{defi}[Cospinor]\index{cospinor}
+ A \emph{cospinor} is an element in the dual space to the space of spinors, i.e.\ a cospinor $X$ is a linear map that takes in a spinor $\psi$ as an argument and returns a number $X\psi$. A cospinor can be represented as a ``row vector'' and transforms under $\Lambda$ as
+ \[
+ X \mapsto X S[\Lambda]^{-1}.
+ \]
+\end{defi}
+This is a definition we can always make. The hard part is to produce some actual cospinors. To figure out how we can do so, it is instructive to figure out what $S[\Lambda]^{-1}$ is!
+
+We begin with some computations using the $\gamma^\mu$ matrices.
+\begin{prop}
+ We have
+ \[
+ \gamma^0 \gamma^\mu \gamma^0 = (\gamma^\mu)^\dagger.
+ \]
+\end{prop}
+
+\begin{proof}
+ This is true by checking all possible $\mu$.
+\end{proof}
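
That check can be done mechanically; here is a numerical version (the chiral representation is assumed throughout):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[Z2, I2], [I2, Z2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

# gamma^0 gamma^mu gamma^0 = (gamma^mu)^dagger, checked for each mu
ok = all(
    np.allclose(gamma[0] @ gamma[m] @ gamma[0], gamma[m].conj().T)
    for m in range(4)
)
```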
+
+\begin{prop}
+ \[
+ S[\Lambda]^{-1} = \gamma^0 S[\Lambda]^\dagger \gamma^0,
+ \]
+ where $S[\Lambda]^\dagger$ denotes the Hermitian conjugate as a matrix (under the usual basis).
+\end{prop}
+
+\begin{proof}
+ We note that
+ \[
+ (S^{\mu\nu})^\dagger = \frac{1}{4}[(\gamma^\nu)^\dagger, (\gamma^\mu)^\dagger] = -\gamma^0 \left(\frac{1}{4}[\gamma^\mu, \gamma^\nu]\right) \gamma^0 = - \gamma^0 S^{\mu\nu} \gamma^0.
+ \]
+ So we have
+ \[
+ S[\Lambda]^\dagger = \exp\left(\frac{1}{2} \Omega_{\mu\nu}(S^{\mu\nu})^\dagger\right) = \exp\left(-\frac{1}{2} \gamma^0 \Omega_{\mu\nu}S^{\mu\nu}\gamma^0\right) = \gamma^0 S[\Lambda]^{-1} \gamma^0,
+ \]
+ using the fact that $(\gamma^0)^2 = \mathbf{1}$ and $\exp(-A) = (\exp A)^{-1}$. Multiplying on both sides by $\gamma^0$ gives the desired formula.
+\end{proof}
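We can also verify this relation numerically for a concrete transformation. The sketch below (an illustration, not part of the notes; it assumes the chiral representation of the $\gamma^\mu$) builds $S[\Lambda] = \exp(\chi S^{03})$ for a boost of rapidity $\chi$ along $x^3$ and compares $S[\Lambda]^{-1}$ with $\gamma^0 S[\Lambda]^\dagger \gamma^0$:

```python
import numpy as np
from scipy.linalg import expm

# Chiral representation of the gamma matrices (assumed, as earlier).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sig]

# S^{03} = (1/4)[gamma^0, gamma^3]; a boost of rapidity chi along x^3
# is represented by S[Lambda] = exp(chi S^{03}).
S03 = 0.25 * (gamma[0] @ gamma[3] - gamma[3] @ gamma[0])
chi = 0.7  # arbitrary sample rapidity
SL = expm(chi * S03)

# S[Lambda]^{-1} should equal gamma^0 S[Lambda]^dagger gamma^0.
ok = np.allclose(np.linalg.inv(SL), gamma[0] @ SL.conj().T @ gamma[0])
```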
+
+We now come to the result we have been working towards:
+\begin{prop}
+ If $\psi$ is a Dirac spinor, then
+ \[
+ \bar\psi = \psi^\dagger \gamma^0
+ \]
+ is a cospinor.
+\end{prop}
+
+\begin{proof}
+ $\bar \psi$ transforms as
+ \[
+ \bar \psi \mapsto \psi^\dagger S[\Lambda]^\dagger \gamma^0 = \psi^\dagger \gamma^0 (\gamma^0 S[\Lambda]^\dagger \gamma^0) = \bar\psi S[\Lambda]^{-1}. \qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Dirac adjoint]\index{Dirac adjoint}
+ For any Dirac spinor $\psi$, its \emph{Dirac adjoint} is given by
+ \[
+ \bar\psi = \psi^\dagger \gamma^0.
+ \]
+\end{defi}
+Thus we immediately get
+\begin{cor}
+ For any spinor $\psi$, the quantity $\bar \psi \psi$ is a scalar, i.e.\ it doesn't transform under a Lorentz transformation.
+\end{cor}
+
+The next thing we want to do is to construct $4$-vectors out of spinors. While the spinors do have 4 components, they aren't really related to 4-vectors, and transform differently. However, we do have that thing called $\gamma^\mu$, and the indexing by $\mu$ should suggest that $\gamma^\mu$ transforms like a $4$-vector. Of course, it is a collection of matrices and is not actually a 4-vector, just like $\partial_\mu$ behaves like a 4-vector but isn't one. But it behaves sufficiently like a 4-vector that we can combine it with other things to get actual 4-vectors.
+
+% It is worth clarifying what it means for $\gamma^\mu$ to ``transform''. Fix a $\mu$. Then $\gamma^\mu$ is a matrix (linear maps) that acts on the space of spinors. Now when we apply $S[\Lambda]$, we can think of this as changing basis. Then the linear map $\gamma^\mu$ has a different representation under this new basis. We can
+
+\begin{prop}
+ We have
+ \[
+ S[\Lambda]^{-1} \gamma^\mu S[\Lambda] = \Lambda^\mu\!_\nu \gamma^\nu.
+ \]
+% $\gamma^\mu$ transforms as
+% \[
+% \gamma^\mu \mapsto \Lambda^\mu\!_\nu \gamma^\nu.
+% \]
+\end{prop}
+
+\begin{proof}
+% We have
+% \[
+% \gamma^\mu \mapsto S[\Lambda]^{-1} \gamma^\mu S[\Lambda].
+% \]
+% So we want to show
+% \[
+% S[\Lambda]^{-1} \gamma^\mu S[\Lambda] = \Lambda^\mu\!_\nu \gamma^\nu.
+% \]
+ We work infinitesimally. So this reduces to
+ \[
+ \left(\mathbf{1} - \frac{1}{2} \Omega_{\rho\sigma} S^{\rho\sigma}\right) \gamma^\mu \left(\mathbf{1} + \frac{1}{2} \Omega_{\rho\sigma}S^{\rho\sigma}\right) = \left(\mathbf{1} + \frac{1}{2}\Omega_{\rho\sigma}M^{\rho\sigma}\right)^\mu\!_\nu\gamma^\nu.
+ \]
+ This becomes
+ \[
+ [S^{\rho\sigma}, \gamma^\mu] = -(M^{\rho\sigma})^\mu\!_\nu\gamma^\nu.
+ \]
+ But we can use the explicit formula for $M$ to compute
+ \[
+ -(M^{\rho\sigma})^\mu\!_\nu \gamma^\nu = (\eta^{\sigma\mu} \delta^\rho\!_\nu - \eta^{\rho\mu}\delta^\sigma\!_\nu) \gamma^\nu = \gamma^\rho\eta^{\sigma\mu} - \gamma^\sigma\eta^{\rho\mu},
+ \]
+ and we have previously shown this is equal to $[S^{\rho\sigma}, \gamma^\mu]$.
+\end{proof}
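The commutator identity $[S^{\rho\sigma}, \gamma^\mu] = \gamma^\rho\eta^{\sigma\mu} - \gamma^\sigma\eta^{\rho\mu}$ that the proof relies on can be confirmed numerically over all index choices (a sketch, not from the notes, assuming the chiral representation):

```python
import numpy as np

# Chiral representation of the gamma matrices (assumed).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sig]

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # metric, signature (+,-,-,-)

def S(r, s):
    # S^{rho sigma} = (1/4)[gamma^rho, gamma^sigma]
    return 0.25 * (gamma[r] @ gamma[s] - gamma[s] @ gamma[r])

# Check [S^{rho sigma}, gamma^mu]
#     = gamma^rho eta^{sigma mu} - gamma^sigma eta^{rho mu}
ok = all(np.allclose(S(r, s) @ gamma[m] - gamma[m] @ S(r, s),
                     eta[s, m] * gamma[r] - eta[r, m] * gamma[s])
         for r in range(4) for s in range(4) for m in range(4))
```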
+
+\begin{cor}
+ The object $\bar{\psi} \gamma^\mu \psi$ is a Lorentz vector, and $\bar\psi \gamma^\mu \gamma^\nu \psi$ transforms as a Lorentz tensor.
+\end{cor}
+
+In $\bar\psi \gamma^\mu \gamma^\nu \psi$, the symmetric part is proportional to $\eta^{\mu\nu} \bar\psi \psi$, where $\bar\psi\psi$ is a Lorentz scalar, and the anti-symmetric part is proportional to $\bar\psi S^{\mu\nu} \psi$, which transforms as an antisymmetric Lorentz tensor.
+
+\subsection{The Dirac equation}
+Armed with these objects, we can now construct a Lorentz-invariant action. We will, as before, not provide justification for why we choose this action, but as we progress we will see some nice properties of it:
+\begin{defi}[Dirac Lagrangian]\index{Dirac Lagrangian}
+ The \emph{Dirac Lagrangian} is given by
+ \[
+ \mathcal{L} = \bar\psi (i \gamma^\mu \partial_\mu - m)\psi.
+ \]
+\end{defi}
+From this we can get the Dirac equation, which is the equation of motion obtained by varying $\psi$ and $\bar\psi$ independently. Varying $\bar\psi$, we obtain the equation
+\begin{defi}[Dirac equation]\index{Dirac equation}
+ The \emph{Dirac equation} is
+ \[
+ (i \gamma^\mu \partial_\mu - m)\psi = 0.
+ \]
+\end{defi}
+Note that this is first order in derivatives! This is different from the Klein--Gordon equation. This is only made possible by the existence of the $\gamma^\mu$ matrices. If we wanted to write down a first-order equation for a scalar field, there is nothing to contract $\partial_\mu$ with.
+
+We are often going to meet vectors contracted with $\gamma^\mu$. So we invent a notation for it:
+\begin{notation}[Slash notation]\index{slash notation}\index{$\slashed\psi$}
+ We write
+ \[
+ A_\mu \gamma^\mu \equiv \slashed A.
+ \]
+\end{notation}
+Then the Dirac equation says
+\[
+ (i \slashed\partial - m) \psi = 0.
+\]
+Note that the $m$ here means $m\mathbf{1}$, for $\mathbf{1}$ the identity matrix. Whenever we have a matrix equation and a number appears, that is what we mean.
+
+Note that the $\gamma^\mu$ matrices are not diagonal. So they mix up different components of the Dirac spinor. However, magically, it turns out that each individual component satisfies the Klein--Gordon equation! We know
+\[
+ (i \gamma^\mu \partial_\mu - m) \psi = 0.
+\]
+We now act on the left by another matrix to obtain
+\[
+ (i \gamma^\nu \partial_\nu + m)(i \gamma^\mu\partial_\mu - m)\psi = -(\gamma^\nu \gamma^\mu \partial_\nu \partial_\mu + m^2)\psi = 0.
+\]
+But using the fact that $\partial_\mu \partial_\nu$ commutes, we know that (after some relabelling)
+\[
+ \gamma^\nu \gamma^\mu \partial_\mu \partial_\nu = \frac{1}{2}\{\gamma^\mu, \gamma^\nu\} \partial_\mu \partial_\nu = \partial_\mu \partial^\mu.
+\]
+So this tells us
+\[
+ -(\partial_\mu \partial^\mu + m^2) \psi = 0.
+\]
+Now nothing mixes up different indices, and we know that each component of $\psi$ satisfies the Klein--Gordon equation.
+
+In some sense, the Dirac equation is the ``square root'' of the Klein--Gordon equation.
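The key step above is the Clifford algebra relation $\frac{1}{2}\{\gamma^\mu, \gamma^\nu\} = \eta^{\mu\nu}\mathbf{1}$, which can be confirmed numerically (a sketch, not part of the notes, assuming the chiral representation):

```python
import numpy as np

# Chiral representation of the gamma matrices (assumed).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sig]

eta = np.diag([1.0, -1.0, -1.0, -1.0])
I4 = np.eye(4, dtype=complex)

# (1/2){gamma^mu, gamma^nu} = eta^{mu nu} * identity, the relation that
# turns the squared Dirac operator into the Klein--Gordon operator.
ok = all(np.allclose(0.5 * (gamma[m] @ gamma[n] + gamma[n] @ gamma[m]),
                     eta[m, n] * I4)
         for m in range(4) for n in range(4))
```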
+
+\subsection{Chiral/Weyl spinors and \tph{$\gamma^5$}{gamma^5}{γ5}}
+Recall that if we picked the chiral representation of the Clifford algebra, the corresponding representation of the spin group is
+\[
+ S[\Lambda] =
+ \begin{cases}
+ \begin{pmatrix}
+ e^{\frac{1}{2}\boldsymbol\chi\cdot \boldsymbol\sigma} & \mathbf{0}\\
+ \mathbf{0} & e^{-\frac{1}{2} \boldsymbol\chi \cdot \boldsymbol\sigma}
+ \end{pmatrix}& \text{ for boosts}\\[11pt]
+ \begin{pmatrix}
+ e^{\frac{i}{2}\boldsymbol\phi\cdot \boldsymbol\sigma} & \mathbf{0}\\
+ \mathbf{0} & e^{\frac{i}{2} \boldsymbol\phi \cdot \boldsymbol\sigma}
+ \end{pmatrix}& \text{ for rotations}
+ \end{cases}.
+\]
+It is pretty clear from the presentation that this is actually just two independent representations put together, i.e.\ the representation is reducible. We can then write our spinor $\psi$ as
+\[
+ \psi=
+ \begin{pmatrix}
+ U_+\\ U_-
+ \end{pmatrix},
+\]
+where $U_+$ and $U_-$ are two-component complex objects. These objects are called \emph{Weyl spinors} or \emph{chiral spinors}.
+\begin{defi}[Weyl/chiral spinor]\index{Weyl spinor}\index{chiral spinor}
+ A \emph{left-handed (right-handed) chiral spinor} is a 2-component complex vector $U_+$ (respectively $U_-$) that transforms under the action of the Lorentz/spin group as follows:
+
+ Under a rotation with rotation parameters $\boldsymbol\phi$, both of them transform as
+ \[
+ U_\pm \mapsto e^{i \boldsymbol\phi \cdot \boldsymbol\sigma/2} U_\pm.
+ \]
+ Under a boost $\boldsymbol\chi$, they transform as
+ \[
+ U_{\pm} \mapsto e^{\pm \boldsymbol\chi\cdot \boldsymbol\sigma/2} U_{\pm}.
+ \]
+ So these are two two-dimensional representations of the spin group.
+\end{defi}
+We have thus discovered
+\begin{prop}
+ A Dirac spinor is the direct sum of a left-handed chiral spinor and a right-handed one.
+\end{prop}
+
+In group theory language, $U_+$ is the $(0, \frac{1}{2})$ representation of the Lorentz group, and $U_-$ is the $(\frac{1}{2}, 0)$ representation, and $\psi$ is in $(\frac{1}{2}, 0) \oplus (0, \frac{1}{2})$.
+
+As an element of the representation space, the left-handed part and right-handed part are indeed completely independent. However, we know that the evolution of spinors is governed by the Dirac equation. So a natural question to ask is if the Weyl spinors are coupled in the Dirac Lagrangian.
+
+Decomposing the Lagrangian in terms of our Weyl spinors, we have
+\begin{align*}
+ \mathcal{L} &= \bar\psi(i\slashed\partial - m)\psi\\
+ &=
+ \begin{pmatrix}
+ U_+^\dagger & U_-^\dagger
+ \end{pmatrix}
+ \begin{pmatrix}
+ \mathbf{0} & \mathbf{1}\\
+ \mathbf{1} & \mathbf{0}
+ \end{pmatrix}
+ \left[
+ i
+ \begin{pmatrix}
+ \mathbf{0} & \partial_t + \sigma^i \partial_i\\
+ \partial_t - \sigma^i \partial_i & \mathbf{0}
+ \end{pmatrix} - m
+ \begin{pmatrix}
+ \mathbf{1} & \mathbf{0}\\
+ \mathbf{0} & \mathbf{1}
+ \end{pmatrix}\right]
+ \begin{pmatrix}
+ U_+\\ U_-
+ \end{pmatrix}\\
+ &= i U_-^\dagger \sigma^\mu \partial_\mu U_- + i U_+^\dagger \bar\sigma^\mu \partial_\mu U_+ - m(U_+^\dagger U_- + U_-^\dagger U_+),
+\end{align*}
+where
+\[
+ \sigma^\mu = (\mathbf{1}, \boldsymbol\sigma),\quad \bar{\sigma}^\mu = (\mathbf{1}, -\boldsymbol\sigma).
+\]
+So the left and right-handed fermions are coupled if and only if the particle is massive. If the particle is massless, then we have two particles satisfying the Weyl equation:
+\begin{defi}[Weyl equation]\index{Weyl equation}
+ The \emph{Weyl equations} are
+ \[
+ i \bar{\sigma}^\mu \partial_\mu U_+ = 0,\quad i \sigma^\mu \partial_\mu U_- = 0.
+ \]
+\end{defi}
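The decoupling rests on the fact that, in the chiral representation, $\gamma^0\gamma^\mu$ is block diagonal with blocks $\bar\sigma^\mu$ and $\sigma^\mu$, which is exactly what produced the two separate kinetic terms above. A quick numerical check of this block structure (a sketch, not from the notes):

```python
import numpy as np

# Chiral representation of the gamma matrices (assumed).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sig]

sigma_mu = [I2] + sig                    # sigma^mu = (1, sigma)
sigmabar_mu = [I2] + [-s for s in sig]   # sigma-bar^mu = (1, -sigma)

# gamma^0 gamma^mu = diag(sigma-bar^mu, sigma^mu) in 2x2 blocks,
# so psi^dagger gamma^0 gamma^mu d_mu psi splits into the U_+ and U_- terms.
ok = all(np.allclose(gamma[0] @ gamma[m],
                     np.block([[sigmabar_mu[m], Z], [Z, sigma_mu[m]]]))
         for m in range(4))
```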
+
+This is all good, but we produced these Weyl spinors by noticing that in our particular chiral basis, the matrices $S[\Lambda]$ looked good. Can we produce a more intrinsic definition of these Weyl spinors that do not depend on a particular representation of the Dirac spinors?
+
+The solution is to introduce the magic quantity $\gamma^5$:
+\begin{defi}[$\gamma^5$]\index{$\gamma^5$}
+ \[
+ \gamma^5 = -i \gamma^0 \gamma^1 \gamma^2 \gamma^3.
+ \]
+\end{defi}
+
+\begin{prop}
+ We have
+ \[
+ \{\gamma^\mu, \gamma^5\} = 0,\quad (\gamma^5)^2 = \mathbf{1}
+ \]
+ for all $\gamma^\mu$, and
+ \[
+ [S^{\mu\nu}, \gamma^5] = 0.
+ \]
+\end{prop}
+
+Since $(\gamma^5)^2 = \mathbf{1}$, we can define projection operators
+\[
+ P_{\pm} = \frac{1}{2}(\mathbf{1} \pm \gamma^5).
+\]
+\begin{eg}
+ In the chiral representation, we have
+ \[
+ \gamma^5 =
+ \begin{pmatrix}
+ \mathbf{1} & \mathbf{0}\\
+ \mathbf{0} & -\mathbf{1}
+ \end{pmatrix}.
+ \]
+ Then we have
+ \[
+ P_+ =
+ \begin{pmatrix}
+ \mathbf{1} & \mathbf{0}\\
+ \mathbf{0} & \mathbf{0}
+ \end{pmatrix},
+ \quad P_- =
+ \begin{pmatrix}
+ \mathbf{0} & \mathbf{0}\\
+ \mathbf{0} & \mathbf{1}
+ \end{pmatrix}.
+ \]
+\end{eg}
+We can prove, in general, that these are indeed projections:
+\begin{prop}
+ \[
+ P_\pm^2 = P_{\pm}, \quad P_+ P_- = P_- P_+ = 0.
+ \]
+\end{prop}
+
+\begin{proof}
+ We have
+ \[
+ P_{\pm}^2 = \frac{1}{4}(\mathbf{1} \pm \gamma^5)^2 = \frac{1}{4}(\mathbf{1} + (\gamma^5)^2 \pm 2 \gamma^5) = \frac{1}{2}(\mathbf{1} \pm \gamma^5),
+ \]
+ and
+ \[
+ P_+P_- = \frac{1}{4}(\mathbf{1} + \gamma^5)(\mathbf{1} - \gamma^5) = \frac{1}{4}(\mathbf{1} - (\gamma^5)^2) = 0. \qedhere
+ \]
+\end{proof}
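These algebraic facts about $\gamma^5$ and $P_\pm$ can also be confirmed numerically in the chiral representation (a sketch, not part of the notes):

```python
import numpy as np

# Chiral representation of the gamma matrices (assumed).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sig]

I4 = np.eye(4, dtype=complex)
g5 = -1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
Pp, Pm = 0.5 * (I4 + g5), 0.5 * (I4 - g5)

ok = (np.allclose(g5, np.diag([1, 1, -1, -1]).astype(complex))  # chiral form
      and all(np.allclose(g @ g5 + g5 @ g, np.zeros((4, 4)))    # {g^mu, g^5}=0
              for g in gamma)
      and np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)  # projections
      and np.allclose(Pp @ Pm, np.zeros((4, 4))))                # orthogonal
```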
+We can think of these $P_{\pm}$ as projection operators to two orthogonal subspaces of the vector space $V$ of spinors. We claim that these are indeed representations of the spin group. We define
+\[
+ V_{\pm} = \{P_{\pm} \psi: \psi \in V\}.
+\]
+We claim that $S[\Lambda]$ maps $V_{\pm}$ to itself. To show this, we only have to compute infinitesimally, i.e.\ that $S^{\mu\nu}$ maps $V_{\pm}$ to itself. But it follows immediately from the fact that $S^{\mu\nu}$ commutes with $\gamma^5$ that
+\[
+ S^{\mu\nu} P_{\pm}\psi = P_{\pm} S^{\mu\nu} \psi.
+\]
+We can then define the chiral spinors as
+\[
+ \psi_{\pm} = P_{\pm}\psi.
+\]
+It is clear from our previous computation of the $P_{\pm}$ in the chiral basis that these agree with what we've defined before.
+\subsection{Parity operator}
+So far, we've considered only continuous transformations, i.e.\ transformations continuously connected to the identity. These are those that preserve direction of time and orientation. However, there are two discrete symmetries in the full Lorentz group --- time reversal and parity:
+\begin{align*}
+ T: (t, \mathbf{x}) &\mapsto (-t, \mathbf{x})\\
+ P: (t, \mathbf{x}) &\mapsto (t, -\mathbf{x})
+\end{align*}
+Since we defined the spin representation via exponentiating up infinitesimal transformations, it doesn't tell us what we are supposed to do for these discrete symmetries.
+
+However, we do have some clues. Recall that we figured that the $\gamma^\mu$ transformed like $4$-vectors under continuous Lorentz transformations. So we can \emph{postulate} that $\gamma^\mu$ also transforms like a 4-vector under these discrete symmetries.
+
+We will only do it for the parity transformation, since they behave interestingly for spinors. We will suppose that our parity operator acts on the $\gamma^\mu$ as
+\begin{align*}
+ P: \gamma^0 &\mapsto \gamma^0\\
+ \gamma^i &\mapsto -\gamma^i.
+\end{align*}
+Because of the Clifford algebra relations, we can write this as
+\[
+ P: \gamma^\mu \mapsto \gamma^0 \gamma^\mu \gamma^0.
+\]
+So we see that $P$ is actually conjugating by $\gamma^0$ (note that $(\gamma^0)^{-1} = \gamma^0$), and this is something we can generalize to everything. Since all the interesting matrices are generated by multiplying and adding the $\gamma^\mu$ together, \emph{all} matrices transform via conjugation by $\gamma^0$. So it is reasonable to assume that $P$ \emph{is} $\gamma^0$.
+\begin{axiom}
+ The parity operator $P$ acts on the spinors as $\gamma^0$.
+\end{axiom}
+
+So in particular, we have
+\begin{prop}
+ \[
+ P : \psi \mapsto \gamma^0 \psi,\quad P: \bar\psi \mapsto \bar\psi \gamma^0.
+ \]
+\end{prop}
+
+\begin{prop}
+ We have
+ \[
+ P:\gamma^5 \mapsto - \gamma^5.
+ \]
+\end{prop}
+
+\begin{proof}
+ The $\gamma^1, \gamma^2, \gamma^3$ each pick up a negative sign, while $\gamma^0$ does not change.
+\end{proof}
+
+Now something interesting happens. Since $P$ switches the sign of $\gamma^5$, it exchanges $P_+$ and $P_-$. So we have
+\begin{prop}
+ We have
+ \[
+ P: P_{\pm} \mapsto P_{\mp}.
+ \]
+ In particular, we have
+ \[
+ P \psi_\pm = \psi_{\mp}.
+ \]
+\end{prop}
+
+As $P$ still acts as right-multiplication-by-$P^{-1}$ on the cospinors, we know that scalar quantities such as $\bar\psi\psi$ are still preserved when we act by $P$. However, if we construct something with $\gamma^5$, then funny things happen, because $\gamma^5$ gains a sign when we transform by $P$. For example,
+\[
+ P: \bar \psi \gamma^5 \psi \mapsto -\bar \psi \gamma^5 \psi.
+\]
+Note that here it is important that we view $\gamma^5$ as a fixed matrix that does not transform, and $P$ only acts on $\bar\psi$ and $\psi$. Otherwise, the quantity does not change. If we make $P$ act on everything, then (almost) by definition the resulting quantity would remain unchanged under any transformation. Alternatively, we can think of $P$ as acting on $\gamma^5$ and leaving other things fixed. % think carefully.
+
+\begin{defi}[Pseudoscalar]\index{pseudoscalar}
+ A \emph{pseudoscalar} is a number that does not change under Lorentz boosts and rotations, but changes sign under a parity operator.
+\end{defi}
+Similarly, we can look at what happens when we apply $P$ to $\bar \psi \gamma^5 \gamma^\mu \psi$. This becomes
+\[
+ \bar \psi \gamma^5 \gamma^\mu \psi \mapsto \bar\psi \gamma^0 \gamma^5 \gamma^\mu \gamma^0 \psi =
+ \begin{cases}
+ - \bar\psi \gamma^5 \gamma^\mu \psi & \mu = 0\\
+ \bar\psi \gamma^5 \gamma^\mu \psi & \mu \not= 0
+ \end{cases}.
+\]
+This is known as an \emph{axial vector}.
+\begin{defi}[Axial vector]\index{axial vector}
+ An \emph{axial vector} is a quantity that transforms as a vector under rotations and boosts, but gains an additional sign when transforming under parity.
+\end{defi}
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ Type & Example\\
+ \midrule
+ Scalar & $\bar\psi \psi$\\
+ Vector & $\bar \psi \gamma^\mu \psi$\\
+ Tensor & $\bar \psi S^{\mu\nu} \psi$\\
+ Pseudoscalar & $\bar \psi \gamma^5 \psi$\\
+ Axial vector & $\bar\psi \gamma^5 \gamma^\mu \psi$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
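The parity behaviour of each bilinear reduces to how the matrix sandwiched between $\bar\psi$ and $\psi$ behaves under conjugation by $\gamma^0$. These sign flips can be checked numerically (a sketch, not from the notes, assuming the chiral representation):

```python
import numpy as np

# Chiral representation of the gamma matrices (assumed).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sig]

g5 = -1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# Pseudoscalar: gamma^0 gamma^5 gamma^0 = -gamma^5.
# Axial vector: gamma^0 (gamma^5 gamma^mu) gamma^0 flips sign for mu = 0 only.
signs = [-1, 1, 1, 1]
ok = (np.allclose(gamma[0] @ g5 @ gamma[0], -g5)
      and all(np.allclose(gamma[0] @ g5 @ gamma[m] @ gamma[0],
                          signs[m] * g5 @ gamma[m])
              for m in range(4)))
```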
+We can now add extra terms to $\mathcal{L}$ that use $\gamma^5$. These terms will typically break the parity invariance of the theory. Of course, it doesn't always break parity invariance, since we can multiply two pseudoscalars together to get a scalar.
+
+It turns out nature does use $\gamma^5$, and these terms do break parity. The classic example is the $W$-boson, a vector field that couples only to left-handed fermions. The Lagrangian is given by
+\[
+ \mathcal{L} = \cdots + \frac{g}{2} W_\mu \bar\psi \gamma^\mu(1 - \gamma^5) \psi,
+\]
+where $(1 - \gamma^5)$ acts as the left-handed projection.
+
+A theory which puts $\psi_{\pm}$ on an equal footing is known as \emph{vector-like}\index{vector-like theory}. Otherwise, it is known as \emph{chiral}\index{chiral theory}.
+
+\subsection{Solutions to Dirac's equation}
+\subsubsection*{Degrees of freedom}
+If we just look at the definition of a Dirac spinor, it has $4$ complex components, so $8$ real degrees of freedom. However, the equation of motion imposes some restriction on how a Dirac spinor can behave. We can compute the conjugate momentum of the Dirac spinor to find
+\[
+ \pi_\psi = \frac{\partial \mathcal{L}}{\partial \dot\psi} = i \psi^\dagger.
+\]
+So the phase space is parameterized by $\psi$ and $\psi^\dagger$, as opposed to $\psi$ and $\dot\psi$ in the usual case. But $\psi^\dagger$ is completely determined by $\psi$! So the phase space really just has 8 real degrees of freedom, as opposed to 16, and thus the number of degrees of freedom of $\psi$ itself is just 4 (in general, a field has half as many degrees of freedom as its phase space). We will see this phenomenon in the solutions we come up with below. % think about
+
+\subsubsection*{Plane wave solutions}
+We want to solve the Dirac equation
+\[
+ (i \slashed \partial - m) \psi = 0.
+\]
+We start with the simplest \emph{ansatz} (``guess'')
+\[
+ \psi = u_\mathbf{p} e^{- i p \cdot x},
+\]
+where $u_\mathbf{p}$ is a constant $4$-component spinor which, as the notation suggests, depends on momentum. Putting this into the equation, we get
+\[
+ (\gamma^\mu p_\mu - m) u_\mathbf{p} = 0.
+\]
+We write out the LHS to get
+\[
+ \begin{pmatrix}
+ -m & p_\mu \sigma^\mu\\
+ p_\mu \bar\sigma^\mu & -m
+ \end{pmatrix} u_\mathbf{p} = 0.
+\]
+\begin{prop}
+ We have a solution
+ \[
+ u_\mathbf{p} =
+ \begin{pmatrix}
+ \sqrt{p \cdot \sigma} \xi\\
+ \sqrt{p \cdot \bar\sigma} \xi
+ \end{pmatrix}
+ \]
+ for any $2$-component spinor $\xi$ normalized such that $\xi^\dagger \xi = 1$.
+\end{prop}
+
+\begin{proof}
+ We write
+ \[
+ u_\mathbf{p} =
+ \begin{pmatrix}
+ u_1\\
+ u_2
+ \end{pmatrix}.
+ \]
+ Then our equation gives
+ \begin{align*}
+ (p\cdot \sigma) u_2 &= m u_1\\
+ (p \cdot \bar\sigma) u_1 &= m u_2
+ \end{align*}
+ Either of these equations can be derived from the other, since
+ \begin{align*}
+ (p \cdot \sigma)(p \cdot \bar\sigma) &= p_0^2 - p_i p_j \sigma^i \sigma^j\\
+ &= p_0^2 - p_i p_j \delta^{ij}\\
+ &= p_\mu p^\mu \\
+ &= m^2.
+ \end{align*}
+ We try the ansatz
+ \[
+ u_1 = (p \cdot \sigma) \xi'
+ \]
+ for a spinor $\xi'$. Then our equation above gives
+ \[
+ u_2 = \frac{1}{m} (p \cdot \bar\sigma)(p \cdot \sigma) \xi' = m \xi'.
+ \]
+ So any vector of the form
+ \[
+ u_\mathbf{p} = A
+ \begin{pmatrix}
+ (p \cdot \sigma) \xi'\\
+ m \xi'
+ \end{pmatrix}
+ \]
+ is a solution, for $A$ a constant. To make this look more symmetric, we choose
+ \[
+ A = \frac{1}{m},\quad \xi' = \sqrt{p \cdot \bar\sigma} \xi,
+ \]
+ with $\xi$ another spinor. Then we have
+ \[
+ u_1 = \frac{1}{m} (p\cdot \sigma)\sqrt{p \cdot \bar\sigma} \xi = \sqrt{p \cdot \sigma} \xi.
+ \]
+ This gives our claimed solution.
+\end{proof}
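For momentum along $x^3$, the matrices $p\cdot\sigma$ and $p\cdot\bar\sigma$ are diagonal, so their square roots can be taken entrywise, and the claimed solution can be verified numerically (a sketch, not part of the notes, with arbitrary sample values $m = 4$, $p^3 = 3$):

```python
import numpy as np

# Chiral representation of the gamma matrices (assumed).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sig]

# Sample on-shell momentum along x^3: p^mu = (E, 0, 0, p3), E^2 = p3^2 + m^2.
m, p3 = 4.0, 3.0
E = np.hypot(m, p3)

# p.sigma = diag(E - p3, E + p3) and p.sigma-bar = diag(E + p3, E - p3),
# so the square roots are entrywise.
sqrt_ps = np.diag([np.sqrt(E - p3), np.sqrt(E + p3)]).astype(complex)
sqrt_psb = np.diag([np.sqrt(E + p3), np.sqrt(E - p3)]).astype(complex)
pslash = E * gamma[0] - p3 * gamma[3]  # p_mu gamma^mu

# Check (p_mu gamma^mu - m) u_p = 0 for both basis spinors xi.
ok = all(np.allclose((pslash - m * np.eye(4)) @
                     np.concatenate([sqrt_ps @ xi, sqrt_psb @ xi]), 0)
         for xi in (np.array([1, 0], dtype=complex),
                    np.array([0, 1], dtype=complex)))
```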
+
+Note that solving the Dirac equation has reduced the dimension of the solution space from $4$ to $2$, as promised.
+
+\begin{eg}
+ A stationary, massive particle of mass $m$ and $\mathbf{p} = 0$ has
+ \[
+ u_\mathbf{p} = \sqrt{m}
+ \begin{pmatrix}
+ \xi\\ \xi
+ \end{pmatrix}
+ \]
+ for any $2$-component spinor $\xi$. Under a spatial rotation, we have a transformation
+ \[
+ \xi \mapsto e^{i \boldsymbol \phi\cdot \boldsymbol\sigma/2} \xi,
+ \]
+ which rotates $\xi$.
+
+ Now let's pick $\boldsymbol\phi = (0, 0, \phi_3)$, so that $\boldsymbol\phi \cdot \boldsymbol\sigma$ is a multiple of $\sigma^3$. We then pick
+ \[
+ \xi =
+ \begin{pmatrix}
+ 1 \\0
+ \end{pmatrix}.
+ \]
+ This is an eigenvector of $\sigma^3$ with positive eigenvalue. So it is invariant under rotations about the $x^3$ axis (up to a phase). We thus say this particle has \emph{spin up} along the $x^3$ axis, and this will indeed give rise to quantum-mechanical spin as we go further on.
+\end{eg}
+
+Now suppose our particle is moving in the $x^3$ direction, so that
+\[
+ p^\mu = (E, 0, 0, p^3).
+\]
+Then the solution to the Dirac equation becomes
+\[
+ u_\mathbf{p} =
+ \begin{pmatrix}
+ \sqrt{p\cdot \sigma}
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix}\\[9pt]
+ \sqrt{p\cdot \bar\sigma}
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix}\\
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ \sqrt{E - p^3}
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix}\\[9pt]
+ \sqrt{E + p^3}
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix}\\
+ \end{pmatrix}
+\]
+In the massless limit, we have $E \to p^3$. So this becomes
+\[
+ \sqrt{2E}
+ \begin{pmatrix}
+ 0\\0\\1\\0
+ \end{pmatrix}.
+\]
+For a spin down field, i.e.
+\[
+ \xi =
+ \begin{pmatrix}
+ 0\\1
+ \end{pmatrix},
+\]
+we have
+\[
+ u_\mathbf{p} =
+ \begin{pmatrix}
+ \sqrt{E + p^3}
+ \begin{pmatrix}
+ 0\\1
+ \end{pmatrix}\\[9pt]
+ \sqrt{E - p^3}
+ \begin{pmatrix}
+ 0\\1
+ \end{pmatrix}\\
+ \end{pmatrix}
+ \to
+ \sqrt{2E}
+ \begin{pmatrix}
+ 0\\1\\0\\0
+ \end{pmatrix}.
+\]
+\subsubsection*{Helicity operator}
+\begin{defi}[Helicity operator]\index{helicity operator}
+ The helicity operator is a projection of angular momentum along the direction of motion
+ \[
+ h = \hat{\mathbf{p}}\cdot \mathbf{J} = \frac{1}{2} \hat{p}_i
+ \begin{pmatrix}
+ \sigma^i & \mathbf{0}\\
+ \mathbf{0} & \sigma^i
+ \end{pmatrix}.
+ \]
+\end{defi}
+From the above expressions, we know that a massless spin up particle has $h = + \frac{1}{2}$, while a massless spin down particle has helicity $-\frac{1}{2}$. % check if this is true.
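As a concrete check (a sketch, not part of the notes): for motion along $x^3$, the helicity operator is $\frac{1}{2}\operatorname{diag}(\sigma^3, \sigma^3)$, and the massless spinors found above are indeed eigenvectors with eigenvalues $\pm\frac{1}{2}$:

```python
import numpy as np

# Helicity for motion along x^3: h = (1/2) diag(sigma^3, sigma^3).
s3 = np.diag([1.0, -1.0])
Z2 = np.zeros((2, 2))
h = 0.5 * np.block([[s3, Z2], [Z2, s3]])

E = 1.0  # arbitrary sample energy
u_up = np.sqrt(2 * E) * np.array([0.0, 0.0, 1.0, 0.0])    # massless spin up
u_down = np.sqrt(2 * E) * np.array([0.0, 1.0, 0.0, 0.0])  # massless spin down

ok = (np.allclose(h @ u_up, 0.5 * u_up)
      and np.allclose(h @ u_down, -0.5 * u_down))
```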
+
+\subsubsection*{Negative frequency solutions}
+We now try a different ansatz. We try
+\[
+ \psi = v_\mathbf{p} e^{i p\cdot x}.
+\]
+These are known as \term{negative frequency solutions}. The $u_\mathbf{p}$ solutions we've found are known as \term{positive frequency solutions}.
+
+The Dirac equation then requires
+\[
+ v_\mathbf{p} =
+ \begin{pmatrix}
+ \sqrt{p \cdot \sigma} \eta\\
+ -\sqrt{p \cdot \bar\sigma} \eta
+ \end{pmatrix}
+\]
+for some $2$-component $\eta$ with $\eta^\dagger \eta = 1$.
+
+This is exactly the same as $u_\mathbf{p}$, but with a relative minus sign between $v_1$ and $v_2$.
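Substituting $\psi = v_\mathbf{p} e^{ip\cdot x}$ into the Dirac equation flips the sign of the momentum term, so $v_\mathbf{p}$ satisfies $(\gamma^\mu p_\mu + m) v_\mathbf{p} = 0$ instead. A numerical sketch (not part of the notes; chiral representation assumed, with sample values $m = 4$, $p^3 = 3$):

```python
import numpy as np

# Chiral representation of the gamma matrices (assumed).
I2 = np.eye(2, dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z, I2], [I2, Z]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in sig]

# On-shell momentum along x^3, so p.sigma and p.sigma-bar are diagonal.
m, p3 = 4.0, 3.0
E = np.hypot(m, p3)
sqrt_ps = np.diag([np.sqrt(E - p3), np.sqrt(E + p3)]).astype(complex)
sqrt_psb = np.diag([np.sqrt(E + p3), np.sqrt(E - p3)]).astype(complex)
pslash = E * gamma[0] - p3 * gamma[3]

# Negative frequency solutions satisfy (p_mu gamma^mu + m) v_p = 0;
# note the relative minus sign in the lower block of v_p.
ok = all(np.allclose((pslash + m * np.eye(4)) @
                     np.concatenate([sqrt_ps @ e, -sqrt_psb @ e]), 0)
         for e in (np.array([1, 0], dtype=complex),
                   np.array([0, 1], dtype=complex)))
```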
+
+\subsubsection*{A basis}
+It is useful to introduce a basis given by
+\[
+ \eta^1 = \xi^1 =
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix},\quad \eta^2 = \xi^2 =
+ \begin{pmatrix}
+ 0\\1
+ \end{pmatrix}
+\]
+We then define
+\[
+ u^s_\mathbf{p} =
+ \begin{pmatrix}
+ \sqrt{p\cdot \sigma} \xi^s\\
+ \sqrt{p\cdot \bar\sigma} \xi^s
+ \end{pmatrix}
+ ,\quad v^s_\mathbf{p} =
+ \begin{pmatrix}
+ \sqrt{p\cdot \sigma} \eta^s\\
+ -\sqrt{p\cdot \bar\sigma} \eta^s
+ \end{pmatrix}
+\]
+These then form a basis for the \emph{solution space} of the Dirac equation.
+\subsection{Symmetries and currents}
+We now look at what Noether's theorem tells us about spinor fields. We will only briefly state the corresponding results because the calculations are fairly routine. The only interesting one to note is that the Lagrangian is given by
+\[
+ \mathcal{L} = \bar\psi (i \slashed\partial - m) \psi,
+\]
+while the equations of motion say
+\[
+ (i \slashed\partial - m)\psi = 0.
+\]
+So whenever the equations of motion are satisfied, the Lagrangian vanishes. This is going to simplify quite a few of our results.
+
+\subsubsection*{Translation symmetry}
+As usual, spacetime translation is a symmetry, and the infinitesimal transformation is given by
+\begin{align*}
+ x^\mu &\mapsto x^\mu - \varepsilon^\mu\\
+ \psi &\mapsto \psi + \varepsilon^\mu \partial_\mu\psi.
+\end{align*}
+So we have
+\[
+ \delta \psi = \varepsilon^\mu \partial_\mu \psi,\quad \delta \bar\psi = \varepsilon^\mu \partial_\mu \bar\psi.
+\]
+Then we find a conserved current
+\[
+ T^{\mu\nu} = i \bar\psi \gamma^\mu \partial^\nu \psi - \eta^{\mu\nu}\mathcal{L} = i \bar\psi \gamma^\mu \partial^\nu \psi.
+\]
+%where
+%\[
+% \mathcal{L} = \bar\psi (i\slashed\partial - m)\psi.
+%\]
+%This only differentiates $\psi$ and not $\bar\psi$. To make it more symmetric, we can write the action as
+%\[
+% S = \frac{1}{2} \int \d^4 x [\bra \psi(i \overleftarrow{\slashed\partial} - m) \psi + \bar\psi (i \overrightarrow{\slashed\partial}- m)\psi],
+%\]
+%where we integrated by parts, and the arrows indicate which side the partial derivative acts. So we have
+%\[
+% T^{\mu\nu}= \frac{i}{2} \bar\psi (\gamma^\mu \gamma^\nu - \gamma^\nu \partial^\mu )\psi - \eta^{\mu\nu} \mathcal{L}.
+%\]
+%Since we get a conserved current when the equation motion are obeyed, we can impose the equations of motion into this conserved quantity. Our equations of motion is $i\slashed \partial - m)\psi = 0$. So we have
+%\[
+% T^{\mu\nu} = \frac{i}{2} \bar\psi \gamma^\mu \partial^\nu \psi.
+%\]
+In particular, we have a conserved energy of
+\begin{align*}
+ E &= \int T^{00}\;\d^3 \mathbf{x} \\
+ &= \int \d^3 \mathbf{x}\; i \bar\psi \gamma^0 \dot\psi \\
+ &= \int \d^3 \mathbf{x}\; \bar\psi(-i \gamma^i \partial_i + m)\psi,
+\end{align*}
+where we used the equation of motion in the last line.
+
+Similarly, we have a conserved total momentum
+\[
+ \mathbf{P}^i = \int \d^3 \mathbf{x}\; T^{0i} = \int \d^3 \mathbf{x}\; i \psi^\dagger \partial^i \psi.
+\]
+
+\subsubsection*{Lorentz transformations}
+We can also consider Lorentz transformations. This gives a transformation
+\[
+ \psi \mapsto S[\Lambda] \psi(x^\mu - \omega^\mu\!_\nu x^\nu).
+\]
+Taylor expanding, we get
+\begin{align*}
+ \delta \psi &=- \omega^\mu\!_\nu x^\nu \partial_\mu \psi + \frac{1}{2} \omega_{\rho\sigma} (S^{\rho\sigma}) \psi\\
+ &= - \omega^{\mu\nu}\left[x_\nu \partial_\mu \psi - \frac{1}{2}(S_{\mu\nu}) \psi\right].
+\end{align*}
+%where
+%\[
+% \omega^\mu\!_\nu = \frac{1}{2}\Omega_{\rho\sigma}(M^{\rho\sigma})^\mu\!_\nu.
+%\]
+%But we have an explicit representation for the $M$, as
+%\[
+% (M^{\rho\sigma})^\mu\!_\nu = \eta^{\rho\mu} \delta^\sigma\!_\nu - \eta^{\sigma\mu} \delta^\rho\!_\nu.
+%\]
+%So we have
+%\[
+% \omega^{\mu\nu} = \Omega^{\mu\nu}
+%\]
+%from the previous equation. So we have
+%\[
+% \delta \Psi^\alpha = - \omega^{\mu\nu}\left[x_\nu \partial_\mu \psi^\alpha - \frac{1}{2}(S_{\mu\nu})^\alpha\!_\beta \psi^\beta\right].
+%\]
+Similarly, we have
+\[
+ \delta \bar\psi = - \omega^{\mu\nu} \left[x_\nu \partial_\mu \bar\psi + \frac{1}{2} \bar\psi (S_{\mu\nu})\right].
+\]
+The change in sign arises because $\bar\psi$ transforms with $S[\Lambda]^{-1}$, and taking the inverse of an exponential gives us a negative sign.
+
+So we can write this as
+\begin{align*}
+ j^\mu &= - \omega^{\rho\sigma}\left[i \bar\psi \gamma^\mu (x_\sigma \partial_\rho \psi - S_{\rho\sigma} \psi)\right] + \omega^\mu\!_\nu x^\nu \mathcal{L}\\
+ &= - \omega^{\rho\sigma}\left[i \bar\psi \gamma^\mu (x_\sigma \partial_\rho \psi - S_{\rho\sigma} \psi)\right].
+\end{align*}
+So if we allow ourselves to pick different $\omega^{\rho\sigma}$, we would have one conserved current for each choice:
+\[
+ (\mathcal{J}^\mu)^{\rho\sigma} = x^\sigma T^{\mu\rho} - x^\rho T^{\mu\sigma} - i \bar\psi \gamma^\mu S^{\rho\sigma} \psi.
+\]
+In the case of a scalar field, we only have the first two terms. We will later see that the extra term will give us spin $\frac{1}{2}$ after quantization. For example, we have
+\[
+ (\mathcal{J}^0)^{ij} = -i \bar\psi \gamma^0 S^{ij} \psi = \frac{1}{2} \varepsilon^{ijk}\psi^\dagger
+ \begin{pmatrix}
+ \sigma^k & \mathbf{0}\\
+ \mathbf{0} & \sigma^k
+ \end{pmatrix}\psi.
+\]
+This is our angular momentum operator.
+
+\subsubsection*{Internal vector symmetry}
+We also have an internal vector symmetry
+\[
+ \psi \mapsto e^{i\alpha} \psi.
+\]
+So the infinitesimal transformation is
+\[
+ \delta \psi = i \alpha \psi.
+\]
+We then obtain
+\[
+ j_V^\mu = \bar\psi \gamma^\mu \psi.
+\]
+We can indeed check that
+\[
+ \partial_\mu j_V^\mu = (\partial_\mu \bar\psi) \gamma^\mu \psi + \bar\psi \gamma^\mu (\partial_\mu \psi) = im \bar\psi \psi - im \bar\psi \psi = 0.
+\]
+The conserved quantity is then
+\[
+ Q = \int \d^3 \mathbf{x}\; j_V^0 = \int \d^3 \mathbf{x}\; \bar\psi \gamma^0 \psi = \int \d^3 \mathbf{x}\; \psi^\dagger \psi.
+\]
+We will see that this is electric charge/particle number.
+
+\subsubsection*{Axial symmetry}
+When $m = 0$, we have an extra internal symmetry obtained by rotating left-handed and right-handed fermions in opposite signs. These symmetries are called \emph{chiral}\index{chiral symmetry}. We consider the transformations
+\[
+ \psi \mapsto e^{i \alpha \gamma^5} \psi,\quad \bar\psi \mapsto \bar\psi e^{i\alpha \gamma^5}.
+\]
+This gives us an axial current
+\[
+ j_A^\mu = \bar\psi \gamma^\mu \gamma^5 \psi.
+\]
+This is an axial vector. We claim that this is conserved only when $m = 0$. We have
+\begin{align*}
+ \partial_\mu j_A^\mu &= \partial_\mu \bar\psi \gamma^\mu \gamma^5 \psi + \bar\psi \gamma^\mu \gamma^5 \partial_\mu \psi\\
+ &= 2im \bar\psi\gamma^5 \psi.
+\end{align*}
+This time the two terms add rather than subtract. So this vanishes iff $m = 0$.
+
+It turns out that this is an \term{anomalous symmetry}, in the sense that the classical axial symmetry does not survive quantization.
+
+\section{Quantizing the Dirac field}
+\subsection{Fermion quantization}
+\subsubsection*{Quantization}
+We now start to quantize our spinor field! As it will turn out, this is complicated.
+
+Recall that for a complex scalar field, classically a solution of momentum $p$ can be written as
+\[
+ b_\mathbf{p} e^{-ip\cdot x} + c_\mathbf{p}^* e^{ip\cdot x}
+\]
+for some constants $b_\mathbf{p}, c_\mathbf{p}$. Here the first term is the positive frequency solution, and the second term is the negative frequency solution. To quantize this field, we then promoted the $b_\mathbf{p}, c_\mathbf{p}$ to operators, and we had
+\[
+ \phi(\mathbf{x}) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2E_\mathbf{p}}} (b_\mathbf{p} e^{i\mathbf{p}\cdot \mathbf{x}} + c_\mathbf{p}^\dagger e^{-i\mathbf{p}\cdot \mathbf{x}}).
+\]
+Similarly, classically, every positive frequency solution to Dirac's equation of momentum $p$ can be written as
+\[
+ (b_\mathbf{p}^1 u_\mathbf{p}^1 + b_\mathbf{p}^2 u_\mathbf{p}^2)e^{-ip\cdot x}
+\]
+for some $b_\mathbf{p}^s$, and similarly a negative frequency solution can be written as
+\[
+ (c_\mathbf{p}^1 v_\mathbf{p}^1 + c_\mathbf{p}^2 v_\mathbf{p}^2) e^{ip\cdot x}
+\]
+for some $c_\mathbf{p}^s$. So when we quantize our field, we could promote the $b_\mathbf{p}$ and $c_\mathbf{p}$ to operators and obtain
+\begin{align*}
+ \psi(\mathbf{x}) &= \sum_{s = 1}^2 \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 E_\mathbf{p}}} \left(b_\mathbf{p}^s u_\mathbf{p}^s e^{i\mathbf{p}\cdot \mathbf{x}} + c_\mathbf{p}^{s\dagger}v_\mathbf{p}^s e^{-i\mathbf{p}\cdot \mathbf{x}}\right)\\
+ \psi^\dagger (\mathbf{x}) &= \sum_{s = 1}^2 \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 E_\mathbf{p}}} \left(b_\mathbf{p}^{s\dagger} u_\mathbf{p}^{s\dagger} e^{-i\mathbf{p}\cdot \mathbf{x}} + c_\mathbf{p}^{s}v_\mathbf{p}^{s\dagger} e^{i\mathbf{p}\cdot \mathbf{x}}\right)
+\end{align*}
+In these expressions, $b_\mathbf{p}^s$ and $c_\mathbf{p}^s$ are operators, and $u_\mathbf{p}^s$ and $v_\mathbf{p}^s$ are the elements of the spinor space we have previously found, which are concrete classical objects.
+
+We can compute the conjugate momentum to be
+\[
+ \pi = \frac{\partial \mathcal{L}}{\partial \dot\psi} = i \bar\psi \gamma^0 = i \psi^\dagger.
+\]
+The Hamiltonian density is then given by
+\begin{align*}
+ \mathcal{H} &= \pi \dot\psi - \mathcal{L} \\
+ &= i\psi^\dagger \dot\psi - i \bar\psi \gamma^0 \dot\psi - i \bar\psi \gamma^i \partial_i \psi + m \bar\psi \psi\\
+ &= \bar\psi(-i \gamma^i \partial_i + m) \psi.
+\end{align*}
+What are the appropriate commutation relations to impose on the $\psi$, or, equivalently, $b_\mathbf{p}^s$ and $c_\mathbf{p}^{s\dagger}$? The naive approach would be to impose
+\[
+ [\psi_\alpha(\mathbf{x}), \psi_\beta(\mathbf{y})] = [\psi_\alpha^\dagger(\mathbf{x}), \psi^\dagger_\beta(\mathbf{y})] = 0
+\]
+and
+\[
+ [\psi_\alpha(\mathbf{x}), \psi_\beta^\dagger(\mathbf{y})] = \delta_{\alpha\beta}\delta^3(\mathbf{x} - \mathbf{y}).
+\]
+These are in turn equivalent to requiring
+\begin{align*}
+ [b_\mathbf{p}^r, b_\mathbf{q}^{s\dagger}] &= (2\pi)^3 \delta^{rs}\delta^3(\mathbf{p} - \mathbf{q}),\\
+ [c_\mathbf{p}^r, c_\mathbf{q}^{s\dagger}] &= -(2\pi)^3 \delta^{rs} \delta^3(\mathbf{p} - \mathbf{q}),
+\end{align*}
+and all others vanishing.
+
+If we do this, we can do the computations and find that (after normal ordering) we obtain a Hamiltonian of
+\[
+ H = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} E_\mathbf{p}(b_\mathbf{p}^{s\dagger}b_\mathbf{p}^s - c_\mathbf{p}^{s\dagger}c_\mathbf{p}^s).
+\]
Note the minus sign before $c_\mathbf{p}^{s\dagger}c_\mathbf{p}^s$. This is a disaster! While we can define a vacuum $\bket{0}$ such that $b_\mathbf{p}^s \bket{0} = c_\mathbf{p}^s \bket{0} = 0$, this isn't really a vacuum, because we can keep applying $c_\mathbf{p}^{s\dagger}$ to get \emph{lower and lower} energies. This is totally bad.
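We can see this failure concretely in a toy model. The following sketch (momentum and spin labels suppressed, with $E$ standing in for $E_\mathbf{p}$) represents a single bosonic $c$-mode on a truncated Fock space and diagonalizes the normal-ordered $-E c^\dagger c$ piece of the Hamiltonian:

```python
import numpy as np

# Toy model: a single bosonic mode c with [c, c^dagger] = 1, truncated to
# occupation numbers 0..N.  Momentum and spin labels are suppressed, and
# E stands in for E_p.
N, E = 5, 1.0
c = np.diag(np.sqrt(np.arange(1, N + 1)), k=1)   # annihilation operator
cdag = c.conj().T

# With the naive commutation relations, the c-modes enter H with a minus sign.
H = -E * (cdag @ c)

# Each application of c^dagger lowers the energy by E: the spectrum is
# 0, -E, -2E, ..., -NE, with no lower bound as the truncation N grows.
print(np.sort(np.diag(H)))
```

The spectrum keeps descending as $N$ grows, so the toy Hamiltonian is unbounded below, mirroring the problem with the quantized field.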
+
+%The equations of motion are first order in time, not second order. So all we have to do is to specify $\psi$ and $\psi^\dagger$ on some initial time slice to determine the full evolution.
+%
+%Imposing the canonical commutation relations, we have
+%\[
+% [\psi_\alpha, \psi_\beta(\mathbf{y})] = [\psi_\alpha^\dagger(\mathbf{x}), \psi^\dagger_\beta(\mathbf{y})] = 0
+%\]
+%and
+%\[
+% [\psi_\alpha(\mathbf{x}), \psi_\beta^\dagger(\mathbf{y})] = \delta_{\alpha\beta}\delta^3(\mathbf{x} - \mathbf{y}).
+%\]
+%It turns out these don't work.
+%
+%We know any classical solution is a sum of plane waves. So we can write the quantum operator as
+%\begin{align*}
+% \psi(\mathbf{x}) &= \sum_{s = 1}^2 \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 E_\mathbf{p}}} \left(b_\mathbf{p}^s u_\mathbf{p}^s e^{i\mathbf{p}\cdot \mathbf{x}} + c_\mathbf{p}^{s\dagger}v_\mathbf{p}^s e^{-i\mathbf{p}\cdot \mathbf{x}}\right)\\
+% \psi^\dagger (\mathbf{x}) &= \sum_{s = 1}^2 \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 E_\mathbf{p}}} \left(b_\mathbf{p}^{s\dagger} u_\mathbf{p}^{s\dagger} e^{-i\mathbf{p}\cdot \mathbf{x}} + c_\mathbf{p}^{s}v_\mathbf{p}^{s\dagger} e^{i\mathbf{p}\cdot \mathbf{x}}\right)
+%\end{align*}
+%We claim that
+%\begin{align*}
+% [b_\mathbf{p}^r, b_\mathbf{q}^{s\dagger}] &= (2\pi)^3 \delta^{rs}\delta^3(\mathbf{p} - \mathbf{q}),\\
+% [c_\mathbf{p}^r, c_\mathbf{q}^{s\dagger}] &= -(2\pi)^3 \delta^{rs} \delta^3(\mathbf{p} - \mathbf{q}),
+%\end{align*}
+%and all others vanish.
+%
+%Now let's try to work out the Hamiltonian
+%\begin{align*}
+% \mathcal{H} &= \pi \dot\psi - \mathcal{L} \\
+% &= i\psi^\dagger \dot\psi - i \bar\psi \gamma^0 \dot\psi - i \bar\psi \gamma^i \partial_i \psi + m \bar\psi \psi\\
+% &= \bar\psi(-i \gamma^i \partial_i + m) \psi.
+%\end{align*}
+%Then we have
+%\[
+% H = \int \d^3 \mathbf{x}\; \mathcal{H}.
+%\]
+%We can plug our expansion of $\psi$ in terms of expansion and annihilation operators into this.
+%
+%We first look at
+%\begin{align*}
+% (-i \gamma^i \partial_i m)\psi &= \iint \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 E_\mathbf{p}}} (b_\mathbf{p}^s(- \gamma^i p_i + m)u_\mathbf{p}^s e^{i\mathbf{p}\cdot \mathbf{x}} \\
+% &\quad\quad+ c_\mathbf{p}^{s\dagger} (\gamma^i p_i + m) v_\mathbf{p}^s e^{-i\mathbf{p}\cdot \mathbf{x}}),
+%\end{align*}
+%with an implicit sum over $s$.
+%
+%We now note that
+%\[
+% (\slashed p - m) u_\mathbf{p} = 0,
+%\]
+%by definition. So we have
+%\[
+% -\gamma^i p_i + m)u_\mathbf{p}^s = \gamma^0 p_0 u_\mathbf{p}^s.
+%\]
+%Similarly, for the $v_\mathbf{p}$'s, we have
+%\[
+% (\slashed p + m) v_\mathbf{p}= 0,
+%\]
+%which implies
+%\[
+% (\gamma^i p_i + m) v_\mathbf{p}^s = - \gamma^0 p_0 v_\mathbf{p}^s.
+%\]
+%So we get
+%\[
+% (i \gamma^i p_i + m)\psi = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \sqrt{\frac{E_\mathbf{p}}{2}} \left(b_\mathbf{p}^s u_\mathbf{p}^s e^{i\mathbf{p}\cdot \mathbf{x}} - c_\mathbf{p}^{s\dagger} v_\mathbf{p}^s e^{-i\mathbf{p}\cdot \mathbf{x}}\right).
+%\]
+%Integrating over $\mathbf{x}$, we obtain
+%\begin{align*}
+% H &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} E_\mathbf{p} (b_\mathbf{p}^{s\dagger}b_\mathbf{p}^s) - c_\mathbf{p}^s c_\mathbf{p}^{s\dagger})\\
+% &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} E_\mathbf{p}(b_\mathbf{p}^{s\dagger}b_\mathbf{p}^s - c_\mathbf{p}^{s\dagger}c_\mathbf{p}^s +(2\pi)^3 \delta^3(0)).
+%\end{align*}
+%As in the case of the scalar Hamiltonian, we get rid of the $(2\pi)^3 \delta^3(0)$.
+%
+%The problem with this is that we have a negative sign there, which means that we can reduce the Hamiltonian by adding in more and more particles, and so the Hamiltonian is not bounded below. So the quantum theory makes no sense.
+
+What went wrong? The answer comes from actually looking at particles in the Real World\textsuperscript{TM}. In the case of scalar fields, the commutation relation
+\[
 [a_\mathbf{p}^\dagger, a_\mathbf{q}^\dagger] = 0
+\]
+tells us that
+\[
+ a_\mathbf{p}^\dagger a_\mathbf{q}^\dagger \bket{0} = \bket{\mathbf{p}, \mathbf{q}} = \bket{\mathbf{q}, \mathbf{p}}.
+\]
+This means the particles satisfy \emph{Bose statistics}, namely swapping two particles gives the same state.
+
+However, we know that fermions actually satisfy Fermi statistics. Swapping two particles gives us a negative sign. So what we really need is
+\[
+ b_\mathbf{p}^rb_\mathbf{q}^s = - b_\mathbf{q}^s b_\mathbf{p}^r.
+\]
+In other words, instead of setting the commutator zero, we want to set the \term{anticommutator} to zero:
+\[
+ \{b_\mathbf{p}^r, b_\mathbf{q}^s\} = 0.
+\]
In general, we are going to replace all commutators with anti-commutators, and it turns out this is what fixes our theory:
+\begin{axiom}
+ The spinor field operators satisfy
+ \[
+ \{\psi_\alpha (\mathbf{x}), \psi_\beta(\mathbf{y})\} = \{\psi_\alpha^\dagger(\mathbf{x}), \psi_\beta^\dagger(\mathbf{y})\} = 0,
+ \]
+ and
+ \[
+ \{\psi_\alpha(\mathbf{x}), \psi_\beta^\dagger(\mathbf{y})\} = \delta_{\alpha\beta} \delta^3(\mathbf{x} - \mathbf{y}).
+ \]
+\end{axiom}
+
+\begin{prop}
+ The anti-commutation relations above are equivalent to
+\[
+ \{c_\mathbf{p}^r, c_\mathbf{q}^{s\dagger}\} = \{b_\mathbf{p}^r, b_\mathbf{q}^{s\dagger}\} = (2\pi)^3 \delta^{rs} \delta^3(\mathbf{p} - \mathbf{q}),
+\]
+and all other anti-commutators vanishing.
+\end{prop}
+Note that from a computational point of view, these anti-commutation relations are horrible. When we manipulate our operators, we will keep on introducing negative signs to our expressions, and as we all know, keeping track of signs is the hardest thing in mathematics. Indeed, these negative signs will come and haunt us all the time, and we have to insert funny negative signs everywhere.
+
+If we assume these anti-commutation relations with the same Hamiltonian as before, then we would find
+\begin{prop}
+ \[
+ H = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} E_\mathbf{p} \left(b_\mathbf{p}^{s\dagger} b_\mathbf{p}^s + c_\mathbf{p}^{s\dagger}c_\mathbf{p}^s\right).
+ \]
+\end{prop}
+We now define the vacuum as in the bosonic case, where it is annihilated by the $b$ and $c$ operators:
+\[
+ b_\mathbf{p}^s\bket{0} = c_\mathbf{p}^s \bket{0} = 0.
+\]
Although the $b$'s and $c$'s satisfy anti-commutation relations, the Hamiltonian satisfies \emph{commutation} relations with them:
+\begin{align*}
+ [H, b_\mathbf{p}^{r\dagger}] &= H b_\mathbf{p}^{r\dagger} - b_\mathbf{p}^{r\dagger} H \\
 &= \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} E_\mathbf{q} (b_\mathbf{q}^{s\dagger} b_\mathbf{q}^s + c_\mathbf{q}^{s\dagger}c_\mathbf{q}^s) b_\mathbf{p}^{r\dagger}\\
+ &\hphantom{=}\quad\quad - b_\mathbf{p}^{r\dagger} \int \frac{\d^3 \mathbf{q}}{(2\pi)^3} E_\mathbf{q}(b_\mathbf{q}^{s\dagger} b_\mathbf{q}^s + c_\mathbf{q}^{s\dagger}c_\mathbf{q}^s)\\
+ &= E_\mathbf{p} b_\mathbf{p}^{r\dagger}.
+\end{align*}
+Similarly, we have
+\[
+ [H, b_\mathbf{p}^r] = -E_\mathbf{p}b_\mathbf{p}^r
+\]
+and the corresponding relations for $c_\mathbf{p}^r$ and $c_\mathbf{p}^{r\dagger}$.
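These commutation relations follow purely from the anti-commutator algebra. As a sanity check, we can represent a single fermionic mode on its two-dimensional Fock space (a toy model: momentum and spin labels suppressed, $E$ standing in for $E_\mathbf{p}$) and verify the relations numerically:

```python
import numpy as np

# A single fermionic mode is a two-state system spanned by |0> and bdag|0>.
# Momentum and spin labels are suppressed; E stands in for E_p.
b = np.array([[0., 1.], [0., 0.]])   # annihilation operator
bdag = b.T
E = 1.0
H = E * (bdag @ b)

anti = b @ bdag + bdag @ b           # {b, b^dagger}
comm = H @ bdag - bdag @ H           # [H, b^dagger]

print(np.allclose(anti, np.eye(2)))   # True: {b, b^dagger} = 1
print(np.allclose(b @ b, 0))          # True: b^2 = 0 (Pauli exclusion)
print(np.allclose(comm, E * bdag))    # True: [H, b^dagger] = E b^dagger
```

So even though the mode operators anti-commute, the (quadratic) Hamiltonian still raises and lowers energies through ordinary commutators.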
+\subsubsection*{Heisenberg picture}
+As before, we would like to put our Dirac field in the Heisenberg picture. We have a spinor operator at each point $x^\mu$
+\[
+ \psi(x) = \psi(\mathbf{x}, t).
+\]
+The time evolution of the field is given by
+\[
+ \frac{\partial \psi}{\partial t} = i [H, \psi],
+\]
+which is solved by
+\begin{align*}
+ \psi(x) &= \sum_{s = 1}^2 \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2E_\mathbf{p}}} \left[b_\mathbf{p}^s u_\mathbf{p}^s e^{-ip\cdot x} + c_\mathbf{p}^{s\dagger}v_\mathbf{p}^s e^{ip\cdot x}\right]\\
+ \psi^\dagger(x) &= \sum_{s = 1}^2 \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2E_\mathbf{p}}} \left[b_\mathbf{p}^{s\dagger} u_\mathbf{p}^{s\dagger} e^{ip\cdot x} + c_\mathbf{p}^{s}v_\mathbf{p}^{s\dagger} e^{-ip\cdot x}\right].
+\end{align*}
+We now look at the anti-commutators of the fields in the Heisenberg picture. We define
+\begin{defi}
+ \[
+ i S_{\alpha\beta}(x - y) = \{\psi_\alpha(x), \bar\psi_\beta(y)\}.
+ \]
+\end{defi}
Dropping the indices, we write
\[
 i S(x - y) = \{\psi(x), \bar\psi(y)\},
\]
where we have to remember that $S$ is a $4 \times 4$ matrix with indices $\alpha, \beta$.
+
+Substituting $\psi$ and $\bar\psi$ in using the expansion, we obtain
+\begin{align*}
+ iS(x - y) &= \sum_{s, r} \int \frac{\d^3 \mathbf{p}\; \d^3 \mathbf{q}}{(2\pi)^6} \frac{1}{\sqrt{4E_\mathbf{p} E_\mathbf{q}}} \left(\{b_\mathbf{p}^s, b_\mathbf{q}^{r\dagger}\} u_\mathbf{p}^s \bar u_\mathbf{q}^r e^{-i(p\cdot x - q \cdot y)}\right.\\
+ &\hphantom{=\sum_{s, r} \int \frac{\d^3 \mathbf{p}\; \d^3 \mathbf{q}}{(2\pi)^6} \frac{1}{\sqrt{4E_\mathbf{p} E_\mathbf{q}}} } +\left.\{c_\mathbf{p}^{s\dagger},c_\mathbf{q}^r \} v_\mathbf{p}^s \bar v_\mathbf{q}^r e^{i(p\cdot x - q \cdot y)}\right)\\
 &= \sum_s \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2 E_\mathbf{p}} \left(u_\mathbf{p}^s \bar u_\mathbf{p}^s e^{-ip\cdot(x - y)} + v_\mathbf{p}^s \bar v_\mathbf{p}^s e^{ip\cdot(x - y)}\right)\\
+ &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2 E_\mathbf{p}} \left[(\slashed p + m) e^{-ip\cdot (x - y)} + (\slashed p - m) e^{i p\cdot (x - y)}\right]
+\end{align*}
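The last step uses the spin sums $\sum_s u_\mathbf{p}^s \bar u_\mathbf{p}^s = \slashed p + m$ and $\sum_s v_\mathbf{p}^s \bar v_\mathbf{p}^s = \slashed p - m$. Here is a numerical sanity check of the first of these, a sketch assuming the chiral representation and the normalization $u_\mathbf{p}^s = (\sqrt{p\cdot \sigma}\, \xi^s, \sqrt{p \cdot \bar\sigma}\, \xi^s)$:

```python
import numpy as np

# Check sum_s u_p^s ubar_p^s = pslash + m numerically, in the chiral
# representation with u_p^s = (sqrt(p.sigma) xi^s, sqrt(p.sigmabar) xi^s).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

def sqrtm_herm(M):
    """Square root of a Hermitian positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

m, p = 1.0, np.array([0.3, -0.4, 0.5])
E = np.sqrt(m**2 + p @ p)                                       # on-shell energy

p_sigma = E * s0 - sum(pi * si for pi, si in zip(p, sigma))      # p . sigma
p_sigmabar = E * s0 + sum(pi * si for pi, si in zip(p, sigma))   # p . sigmabar

Z = np.zeros((2, 2), dtype=complex)
gamma0 = np.block([[Z, s0], [s0, Z]])
gammas = [np.block([[Z, si], [-si, Z]]) for si in sigma]
pslash = E * gamma0 - sum(pi * gi for pi, gi in zip(p, gammas))

spin_sum = np.zeros((4, 4), dtype=complex)
for xi in (np.array([1, 0]), np.array([0, 1])):   # s = 1, 2
    u = np.concatenate([sqrtm_herm(p_sigma) @ xi, sqrtm_herm(p_sigmabar) @ xi])
    spin_sum += np.outer(u, u.conj() @ gamma0)    # u ubar, with ubar = u^dag gamma0

print(np.allclose(spin_sum, pslash + m * np.eye(4)))   # True
```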
+Recall that we had a scalar propagator
+\[
+ D(x - y) = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2 E_\mathbf{p}} e^{-ip\cdot (x - y)}.
+\]
So we can write the above quantity in terms of $D(x - y)$ as
+\[
+ iS(x - y) = (i \slashed \partial_x + m)(D(x - y) - D(y - x)).
+\]
+Some comments about this: firstly, for spacelike separations where
+\[
+ (x - y)^2 < 0,
+\]
+we had $D(x - y) - D(y - x) = 0$.
+
+For bosonic fields, we made a big deal of this, since it ensured that $[\phi(x), \phi(y)] = 0$. We then interpreted this as saying the theory was causal. For spinor fields, we have anti-commutation relations rather than commutation relations. What does this say about causality? The best we can say is that all our observables are bilinear (or rather quadratic) in fermions, e.g.
+\[
 H = \int \d^3 \mathbf{x} \; \bar\psi (-i \gamma^i \partial_i + m) \psi.
+\]
+So these observables will commute for spacelike separations.
+
+%Also, at least away form singularities, we have
+%\[
+% (i \slashed\partial_x - m) S(x - y) = 0,
+%\]
+%since
+%\[
+% (i \slashed\partial_x - m)(i \slashed \partial + m)(D(x - y) - D(y - x)) = -(\partial_x^2 + m^2) (D(x - y) - D(y - x)),
+%\]
+%which vanishes since $p^2 = m^2$.
+
+\subsubsection*{Propagators}
+The next thing we need to figure out is the Feynman propagator. By a similar computation to that above, we can determine the \term{vacuum expectation value}
+\begin{align*}
+ \brak{0} \psi(x) \bar\psi(y)\bket{0} &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2E_\mathbf{p}}(\slashed p + m) e^{-ip\cdot (x - y)}\\
+ \brak{0} \bar\psi(y) \psi(x)\bket{0} &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{2E_\mathbf{p}}(\slashed p - m) e^{ip\cdot (x - y)}.
+\end{align*}
+We define the Feynman propagator by
+\begin{defi}[Feynman propagator]\index{Feynman propagator!spinor field}
+ The \emph{Feynman propagator} of a spinor field is the time-ordered product
+ \[
+ S_F(x - y) = \brak{0} T \psi_\alpha(x) \bar\psi_\beta(y)\bket{0} =
+ \begin{cases}
+ \brak{0}\psi_\alpha(x) \bar\psi_\beta(y)\bket{0} & x^0 > y^0\\
+ -\brak{0} \bar\psi_\beta(y) \psi_\alpha(x) \bket{0} & y^0 > x^0
+ \end{cases}
+ \]
+\end{defi}
+Note that we have a funny negative sign! This is necessary for Lorentz invariance --- when $(x - y)^2 < 0$, then there is no invariant way to determine whether one time is bigger than the other. So we need the expressions for the two cases to agree. In the case of a boson, we had $\brak{0}\phi(x) \phi(y) \bket{0} = \brak{0} \phi(y) \phi(x)\bket{0}$, but here we have anti-commutation relations, so we have
+\[
+ \psi(x) \bar\psi(y) = - \bar\psi (y) \psi(x).
+\]
So we need to insert the negative sign to ensure that the time-ordered product, as defined, is Lorentz invariant.
+
+For normal ordered products, we have the same behaviour. Fermionic operators anti-commute, so
+\[
+ \normalorder{\psi_1\psi_2} = - \normalorder{\psi_2\psi_1}.
+\]
+As before, the Feynman propagator appears in Wick's theorem as the contraction:
+\begin{prop}
+ \[
+ \contraction{}{\psi}{(x)}{\bar\psi}\psi(x) \bar\psi(y) = T(\psi(x) \bar\psi(y)) - \normalorder{\psi(x) \bar\psi(y)} = S_F(x - y).
+ \]
+\end{prop}
+When we apply Wick's theorem, we need to remember the minus sign. For example,
+\[
+ \normalorder{\contraction{}{\psi}{_1\psi_2}{\psi}\psi_1 \psi_2\psi_3\psi_4} = -\normalorder{\contraction{}{\psi}{_1}{\psi_3}\psi_1\psi_3 \psi_2\psi_4} = -\psi_1 \psi_3 \normalorder{\psi_2 \psi_4}.
+\]
+Again, $S_F$ can be expressed as a $4$-momentum integral
+\[
+ S_F(x - y) = i \int \frac{\d^4 p}{(2\pi)^4} e^{-ip\cdot (x - y)} \frac{\slashed p + m}{p^2 - m^2 + i \varepsilon}.
+\]
+As in the case of a real scalar field, it is a Green's function of Dirac's equation:
+\[
+ (i \slashed\partial_x - m) S_F(x - y) = i \delta^4(x - y).
+\]
+
+\subsection{Yukawa theory}
+The interactions between a Dirac fermion and a real scalar field are governed by the Yukawa interaction. The Lagrangian is given by
+\[
 \mathcal{L} = \frac{1}{2}\partial_\mu \phi \partial^\mu \phi - \frac{1}{2} \mu^2 \phi^2 + \bar\psi (i \gamma^\mu \partial_\mu - m) \psi - \lambda \phi \bar\psi \psi,
\]
where $\mu$ is the mass of the scalar and $m$ is the mass of the fermion. This is the full version of the \term{Yukawa theory}. Note that the kinetic term implies that
+\[
+ [\psi] = [\bar\psi] = \frac{3}{2}.
+\]
+Since $[\phi] = 1$ and $[\mathcal{L}] = 4$, we know that
+\[
+ [\lambda] = 0.
+\]
+So this is a dimensionless coupling constant, which is good.
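The dimension counting can be spelled out explicitly. A minimal sketch, with dimensions measured in powers of mass and $d = 4$:

```python
# Mass-dimension bookkeeping for the Yukawa Lagrangian in d = 4.
# Every term in the action S = int d^4x L must be dimensionless,
# so [L] = d, and each derivative carries dimension 1.
d = 4
dim_phi = (d - 2) / 2                    # from (partial phi)^2:  2*[phi] + 2 = d
dim_psi = (d - 1) / 2                    # from psibar gamma.partial psi:  2*[psi] + 1 = d
dim_lambda = d - dim_phi - 2 * dim_psi   # from lambda phi psibar psi

print(dim_phi, dim_psi, dim_lambda)      # 1.0 1.5 0.0
```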
+
+Note that we could have written the interaction term of the Lagrangian as
+\[
+ \mathcal{L}_{\mathrm{int}} =- \lambda \phi \bar\psi \gamma^5 \psi.
+\]
We then get a pseudoscalar Yukawa theory.
+
We again do the painful computations directly to get a feel for how things work, and then state the Feynman rules for the theory.
+\begin{eg}
+ Consider a fermion scattering $\psi\psi \to \psi\psi$.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$\psi$};
+ \vertex [below=of i1] (i2) {$\psi$};
+ \vertex [right=of i1] (l1);
+ \vertex [right=of l1] (r1);
+ \vertex [right=of r1] (f1) {$\psi$};
+
+ \vertex [right=of i2] (l2);
+ \vertex [right=of l2] (r2);
+ \vertex [right=of r2] (f2) {$\psi$};
+
+
+ \diagram*{
+ (i1) -- [fermion, momentum=$p$] (l1),
+ (i2) -- [fermion, momentum=$q$] (l2),
+
+ (r1) -- [fermion, momentum=$p'$] (f1),
+ (r2) -- [fermion, momentum=$q'$] (f2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ We have initial and final states
+ \begin{align*}
+ \bket{i} &= \sqrt{4 E_\mathbf{p} E_\mathbf{q}} b_\mathbf{p}^{s\dagger}b_\mathbf{q}^{r\dagger} \bket{0}\\
+ \bket{f} &= \sqrt{4 E_{\mathbf{p}'} E_{\mathbf{q}'}} b_{\mathbf{p}'}^{s'\dagger}b_{\mathbf{q}'}^{r'\dagger} \bket{0}.
+ \end{align*}
+ Here we have to be careful about ordering the creation operators, because they anti-commute, not commute. We then have
+ \[
 \brak{f} = \sqrt{4 E_{\mathbf{p}'} E_{\mathbf{q}'}} \brak{0} b_{\mathbf{q}'}^{r'} b_{\mathbf{p}'}^{s'}.
+ \]
+ We can then look at the $O(\lambda^2)$ term in $\brak{f}(S - \mathbf{1})\bket{i}$. We have
+ \[
+ \frac{(-i\lambda)^2}{2} \int \d^4 x_1 \;\d^4 x_2\; T\Big(\bar\psi(x_1) \psi(x_1) \phi(x_1) \bar\psi(x_2) \psi(x_2) \phi(x_2)\Big)
+ \]
+ The contribution to scattering comes from the contraction
+ \[
+ \normalorder{\bar\psi(x_1) \psi(x_1) \bar\psi(x_2) \psi(x_2)} \contraction{}{\phi}{(x_1)}{\phi}\phi(x_1) \phi(x_2).
+ \]
+ The two $\psi$'s annihilate the initial state, whereas the two $\bar\psi$ create the final state. This is just like the bosonic case, but we have to be careful with minus signs and spinor indices.
+
+ Putting in $\bket{i}$ and ignoring $c$ operators as they don't contribute, we have
+ \begin{align*}
 &\hphantom{=}\normalorder{\bar\psi(x_1)\psi(x_1) \bar\psi(x_2) \psi(x_2)} b_\mathbf{p}^{s\dagger}b_{\mathbf{q}}^{r\dagger}\bket{0}\\
 &= -\int \frac{\d^3 \mathbf{k}_1\;\d^3 \mathbf{k}_2}{(2\pi)^6 2\sqrt{E_{\mathbf{k}_1}E_{\mathbf{k}_2}}} [\bar\psi (x_1) u_{\mathbf{k}_1}^m] [\bar\psi(x_2) u_{\mathbf{k}_2}^n] e^{-ik_1 \cdot x_1 - i k_2 \cdot x_2} b_{\mathbf{k}_1}^m b_{\mathbf{k}_2}^n b_{\mathbf{p}}^{s\dagger} b_\mathbf{q}^{r\dagger}\bket{0}\\
+ \intertext{where the square brackets show contraction of spinor indices}
+ &= -\frac{1}{2\sqrt{E_\mathbf{p} E_\mathbf{q}}} \left([\bar\psi(x_1) u_\mathbf{q}^r] [\bar\psi(x_2) u_\mathbf{p}^s] e^{-iq\cdot x_1 - ip\cdot x_2}\right. \\
 &\hphantom{= -\frac{1}{2\sqrt{E_\mathbf{p} E_\mathbf{q}}} \Big(}\left.- [\bar\psi(x_1) u_\mathbf{p}^s][\bar\psi(x_2) u_\mathbf{q}^r] e^{-ip\cdot x_1 - iq\cdot x_2}\right)\bket{0}.
+ \end{align*}
+ The negative sign, which arose from anti-commuting the $b$'s, is crucial. We put in the left hand side to get
+ \begin{align*}
+ &\hphantom{=}\brak{0}b_{\mathbf{q}'}^{r'} b_{\mathbf{p}'}^{s'} [\bar\psi(x_1) u_\mathbf{q}^r][\bar\psi(x_2) u_\mathbf{p}^s]\\
 &= \frac{1}{2\sqrt{E_{\mathbf{p}'} E_{\mathbf{q}'}}} \left([\bar u_{\mathbf{p}'}^{s'} u_\mathbf{q}^r][\bar u_{\mathbf{q}'}^{r'} u_\mathbf{p}^s] e^{ip'\cdot x_1 + i q'\cdot x_2} -[\bar{u}_{\mathbf{q}'}^{r'} u_\mathbf{q}^r][\bar u_{\mathbf{p}'}^{s'} u_{\mathbf{p}}^s] e^{ip'\cdot x_2 + i q'\cdot x_1}\right).
+ \end{align*}
 Putting all of this together, including the relativistic normalization of the initial and final states and the propagator, we have
+ \begin{align*}
 &\hphantom{=}\brak{f}(S - \mathbf{1})\bket{i} \\
 &= (-i\lambda)^2 \int \d^4 x_1 \; \d^4 x_2\; \frac{\d^4 k}{(2\pi)^4} \frac{ie^{ik\cdot (x_1 - x_2)}}{k^2 - \mu^2 + i \varepsilon}\\
 &\quad\left([\bar u_{\mathbf{p}'}^{s'} \cdot u_{\mathbf{p}}^s][\bar u_{\mathbf{q}'}^{r'} \cdot u_\mathbf{q}^r] e^{i x_1 \cdot (q' - q) + i x_2 \cdot (p' - p)} - [\bar{u}_{\mathbf{p}'}^{s'}u_\mathbf{q}^r][\bar u_{\mathbf{q}'}^{r'} u_\mathbf{p}^s] e^{ix_1 \cdot (p' - q) + i x_2 \cdot (q' - p)}\right)\\
 &= i(-i\lambda)^2 \int \d^4 k\, \frac{(2\pi)^4}{k^2 - \mu^2 + i \varepsilon}\left([\bar u_{\mathbf{p}'}^{s'} \cdot u_\mathbf{p}^s][\bar u_{\mathbf{q}'}^{r'} \cdot u_{\mathbf{q}}^r] \delta^4(q' - q + k)\delta^4(p' - p + k)\right.\\
 &\hphantom{= i(-i\lambda)^2 \int \d^4 k\, \frac{(2\pi)^4}{k^2 - \mu^2 + i \varepsilon}\Big(}\left.- [\bar u_{\mathbf{p}'}^{s'} \cdot u_\mathbf{q}^r][\bar u_{\mathbf{q}'}^{r'} \cdot u_{\mathbf{p}}^s] \delta^4(p' - q + k)\delta^4(q' - p + k)\right).
+ \end{align*}
+ So we get
+ \[
 \brak{f}(S - \mathbf{1})\bket{i} = i \mathcal{A}(2\pi)^4 \delta^4(p + q - p' - q'),
+ \]
+ where
+ \[
+ \mathcal{A} = (-i \lambda)^2 \left( \frac{[\bar u_{\mathbf{p}'}^{s'} \cdot u_\mathbf{p}^s][\bar u_{\mathbf{q}'}^{r'} \cdot u_\mathbf{q}^r]}{(p' - p)^2 - \mu^2 + i \varepsilon} - \frac{[\bar u_{\mathbf{p}'}^{s'} \cdot u_\mathbf{q}^r][\bar u_{\mathbf{q}'}^{r'} \cdot u_\mathbf{p}^s]}{(q' - p)^2 - \mu^2 + i \varepsilon}\right).
+ \]
+\end{eg}
+\subsection{Feynman rules}
+As always, the Feynman rules are much better:
+\begin{enumerate}
+ \item An incoming fermion is given a spinor $u_\mathbf{p}^r$, and an outgoing fermion is given a $\bar{u}_\mathbf{p}^r$.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (l) {$u_\mathbf{p}^r$};
+ \vertex [right=of l] (m);
+ \vertex [above right=of m] (f1);
+ \vertex [below right=of m] (f2);
+ \diagram*{
+ (l) -- [fermion, momentum=$p$] (m) -- [fermion] (f1),
+ (m) -- [scalar] (f2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad\quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m);
+ \vertex [above left=of m] (i1);
+ \vertex [below left=of m] (i2);
+ \vertex [right=of m] (f) {$\bar{u}_\mathbf{p}^s$};
+ \diagram*{
+ (i1) -- [fermion] (m) -- [fermion, momentum=$p$] (f),
+ (m) -- [scalar] (i2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ \item For an incoming \emph{anti}-fermion, we put in a $\bar{v}_\mathbf{p}^r$, and for an outgoing anti-fermion we put a $v_\mathbf{p}^r$.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (l) {$\bar v_\mathbf{p}^r$};
+ \vertex [right=of l] (m);
+ \vertex [above right=of m] (f1);
+ \vertex [below right=of m] (f2);
+ \diagram*{
+ (l) -- [anti fermion, momentum=$p$] (m) -- [anti fermion] (f1),
+ (m) -- [scalar] (f2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad\quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m);
+ \vertex [above left=of m] (i1);
+ \vertex [below left=of m] (i2);
+ \vertex [right=of m] (f) {$v_\mathbf{p}^r$};
+ \diagram*{
+ (i1) -- [anti fermion] (m) -- [anti fermion, momentum=$p$] (f),
+ (m) -- [scalar] (i2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ \item For each vertex we get a factor of $(-i\lambda)$.
+
+ \item Each internal scalar line gets a factor of
+ \[
+ \frac{i}{p^2 - \mu^2 + i \varepsilon},
+ \]
 and each internal fermion line gets a factor of
+ \[
+ \frac{i(\slashed p + m)}{p^2 - m^2 + i \varepsilon}.
+ \]
+ \item The arrows on fermion lines must flow consistently, ensuring fermion conservation.
 \item We impose energy-momentum conservation at each vertex, and if we have a loop, we integrate over all possible loop momenta.
+ \item We add an extra minus sign for a loop of fermions.
+\end{enumerate}
Note that the Feynman propagator is a $4 \times 4$ matrix. At each vertex, its indices are contracted either with further propagators or with external spinors.
+
+We look at computations using Feynman rules.
+\begin{eg}[Nucleon scattering]
+ For nucleon scattering, we have diagrams
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$p', s'$};
+ \vertex [right=of m2] (f2) {$q', r'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [fermion] (m2) -- [fermion] (f2),
+ (m1) -- [scalar, momentum=$p - p'$] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad\quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$p', s'$};
+ \vertex [right=of m2] (f2) {$q', r'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (f2),
+ (i2) -- [fermion] (m2) -- [fermion] (f1),
+ (m1) -- [scalar, momentum'=$p - q'$] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ In the second case, the fermions are swapped, so we get a relative minus sign. So by the Feynman rules, this contributes an amplitude of
+ \[
+ \mathcal{A} = (-i \lambda)^2 \left( \frac{[\bar u_{\mathbf{p}'}^{s'} \cdot u_\mathbf{p}^s][\bar u_{\mathbf{q}'}^{r'} \cdot u_\mathbf{q}^r]}{(p' - p)^2 - \mu^2 + i \varepsilon} - \frac{[\bar u_{\mathbf{p}'}^{s'} \cdot u_\mathbf{q}^r][\bar u_{\mathbf{q}'}^{r'} \cdot u_\mathbf{p}^s]}{(q' - p)^2 - \mu^2 + i \varepsilon}\right).
+ \]
+\end{eg}
+
+\begin{eg}
 We next look at nucleon--anti-nucleon annihilation into mesons,
+ \[
+ \psi\bar\psi \to \phi \phi.
+ \]
+ We have two diagrams
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$p'$};
+ \vertex [right=of m2] (f2) {$q'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (m2) -- [fermion] (i2),
+ (m1) -- [scalar] (f1),
+ (m2) -- [scalar] (f2)
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$p'$};
+ \vertex [right=of m2] (f2) {$q'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (m2) -- [fermion] (i2),
+ (m1) -- [scalar] (f2),
+ (m2) -- [scalar] (f1)
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
 This time, we swapped two bosons, so we do not pick up a relative minus sign. Then the Feynman rules give us
+ \[
+ \mathcal{A} = (-i\lambda)^2 \left(\frac{\bar{v}_\mathbf{q}^r (\slashed p - \slashed p' + m) u_\mathbf{p}^s}{(p - p')^2 - m^2 + i \varepsilon} + \frac{\bar v_\mathbf{q}^r (\slashed p - \slashed q' + m) u_\mathbf{p}^s}{(p - q')^2 - m^2 + i \varepsilon}\right).
+ \]
+\end{eg}
+
+\begin{eg}
+ We can also do nucleon anti-nucleon scattering
+ \[
+ \psi\bar\psi \to \psi\bar\psi.
+ \]
+ As before, we have two contributions
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$p', s'$};
+ \vertex [right=of m2] (f2) {$q', r'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [anti fermion] (m2) -- [anti fermion] (f2),
+ (m1) -- [scalar, momentum=$p - p'$] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$p, s$};
+ \vertex [below left=of m1] (i2) {$q, r$};
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1) {$p', s'$};
+ \vertex [below right=of m2] (f2) {$q', r'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (i2),
+ (f2) -- [fermion] (m2) -- [fermion] (f1),
+ (m1) -- [scalar] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ This time we have an amplitude of
+ \[
 \mathcal{A} = (-i\lambda)^2 \left(\frac{-[\bar u_{\mathbf{p}'}^{s'} \cdot u_\mathbf{p}^s][\bar v_\mathbf{q}^r \cdot v_{\mathbf{q}'}^{r'}]}{(p - p')^2 - \mu^2 + i \varepsilon} + \frac{[\bar{v}_\mathbf{q}^r \cdot u_\mathbf{p}^s][\bar u_{\mathbf{p}'}^{s'} \cdot v_{\mathbf{q}'}^{r'}]}{(p + q)^2 - \mu^2 + i \varepsilon}\right).
+ \]
+ We have these funny signs that we have to make sure are right. We have an initial state
+ \begin{align*}
+ \bket{i} &= \sqrt{4 E_\mathbf{p} E_\mathbf{q}} b_\mathbf{p}^{s\dagger}c_\mathbf{q}^{r\dagger}\bket{0} \equiv \bket{\mathbf{p}, s; \mathbf{q}, r}\\
+ \bket{f} &= \sqrt{4E_{\mathbf{p}'} E_{\mathbf{q}'}} b_{\mathbf{p}'} ^{s'\dagger} c_{\mathbf{q}'}^{r'\dagger} \bket{0} \equiv \bket{\mathbf{p}', s'; \mathbf{q}', r'}
+ \end{align*}
+ To check the signs, we work through the computations, but we can ignore all factors because we only need the final sign. Then we have
+ \begin{align*}
+ \psi &\sim b + c^\dagger\\
+ \bar\psi &\sim b^\dagger + c
+ \end{align*}
+ So we can check
+ \begin{align*}
+ &\hphantom{=} \brak{f} \normalorder{\bar\psi(x_1) \psi(x_1) \bar\psi(x_2) \psi(x_2)} b_\mathbf{p}^{s\dagger} c_{\mathbf{q}}^{r\dagger}\bket{0}\\
+ &\sim \brak{f} [\bar{v}_{\mathbf{k}_1}^m \psi(x_1)][\bar\psi(x_2) u_{\mathbf{k}_2}^n] c_{\mathbf{k}_1}^m b_{\mathbf{k}_2}^n b_{\mathbf{p}}^{s\dagger} c_{\mathbf{q}}^{r\dagger}\bket{0}\\
 &\sim \brak{f}[\bar{v}_{\mathbf{q}}^r \psi(x_1)][\bar\psi(x_2) u_{\mathbf{p}}^s]\bket{0}\\
 &\sim \brak{0}c_{\mathbf{q}'}^{r'} b_{\mathbf{p}'}^{s'} c_{\boldsymbol\ell_1}^{m\dagger} b_{\boldsymbol\ell_2}^{n\dagger} [\bar{v}_{\mathbf{q}}^r \cdot v_{\boldsymbol{\ell}_1}^m][\bar{u}_{\boldsymbol{\ell}_2}^n \cdot u_\mathbf{p}^s ]\bket{0}\\
+ &\sim -[\bar{v}_\mathbf{q}^r \cdot v_{\mathbf{q}'}^{r'}][\bar{u}_{\mathbf{p}'}^{s'} \cdot u_\mathbf{p}^s],
+ \end{align*}
 where we got the final sign by anti-commuting $c_{\boldsymbol\ell_1}^{m\dagger}$ past $b_{\mathbf{p}'}^{s'}$ to bring the $c$'s and the $b$'s together.
+
+ We can follow a similar contraction to get a positive sign for the second diagram.
+\end{eg}
+
+\section{Quantum electrodynamics}
+Finally, we get to the last part of the course, where we try to quantize electromagnetism. Again, this will not be so straightforward. This time, we will have to face the fact that the electromagnetic potential $A$ is not uniquely defined, but can have many gauge transformations. The right way to encode this information when we quantize the theory is not immediate, and will require some experimentation.
+
+\subsection{Classical electrodynamics}
+We begin by reviewing what we know about classical electrodynamics. Classically, the electromagnetic field is specified by an \term{electromagnetic potential} $A$, from which we derive the \term{electromagnetic field strength tensor}
+\[
 F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu.
+\]
The dynamics of a free electromagnetic field are then determined by the Lagrangian
+\[
+ \mathcal{L} = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu}.
+\]
+The Euler-Lagrange equations give
+\[
+ \partial_\mu F^{\mu\nu} = 0.
+\]
+It happens that $F$ satisfies the mystical \term{Bianchi identity}
+\[
+ \partial_\lambda F_{\mu\nu} + \partial_\mu F_{\nu\lambda} + \partial_\nu F_{\lambda\mu} = 0,
+\]
+but we are not going to use this.
+
We can express these things in terms of the electric and magnetic fields. We write $A = (\phi, \mathbf{A})$. Then we define
+\[
+ \mathbf{E} = - \nabla \phi - \dot{\mathbf{A}},\quad \mathbf{B} = \nabla\wedge \mathbf{A}.
+\]
+Then $F_{\mu\nu}$ can be written as
+\[
+ F_{\mu\nu} =
+ \begin{pmatrix}
+ 0 & E_x & E_y & E_z\\
+ -E_x & 0 & -B_z & B_y\\
+ -E_y & B_z & 0 & -B_x\\
+ -E_z & -B_y & B_x & 0
+ \end{pmatrix}
+\]
+Writing out our previous equations, we find that
+\begin{align*}
+ \nabla \cdot \mathbf{E} &= 0\\
+ \nabla \cdot \mathbf{B} &= 0\\
+ \dot{\mathbf{B}} &= - \nabla \wedge \mathbf{E}\\
+ \dot{\mathbf{E}} &= \nabla \wedge \mathbf{B}.
+\end{align*}
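We can check the matrix form of $F_{\mu\nu}$ against these definitions symbolically. The following sketch uses sympy, assuming the signature $(+, -, -, -)$, so that $A_\mu = (\phi, -\mathbf{A})$:

```python
import sympy as sp

# Check the matrix form of F_{mu nu} from the definitions of E and B,
# with signature (+,-,-,-) so that A_mu = (phi, -A^i).
t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
phi = sp.Function('phi')(*X)
Avec = [sp.Function(f'A{i}')(*X) for i in range(3)]

A_lower = [phi] + [-a for a in Avec]                       # A_mu (lower index)
F = sp.Matrix(4, 4, lambda mu, nu:
              sp.diff(A_lower[nu], X[mu]) - sp.diff(A_lower[mu], X[nu]))

E = [-sp.diff(phi, X[i + 1]) - sp.diff(Avec[i], t) for i in range(3)]
B = [sp.diff(Avec[2], y) - sp.diff(Avec[1], z),            # curl A
     sp.diff(Avec[0], z) - sp.diff(Avec[2], x),
     sp.diff(Avec[1], x) - sp.diff(Avec[0], y)]

expected = sp.Matrix([[0, E[0], E[1], E[2]],
                      [-E[0], 0, -B[2], B[1]],
                      [-E[1], B[2], 0, -B[0]],
                      [-E[2], -B[1], B[0], 0]])
print(sp.simplify(F - expected) == sp.zeros(4, 4))   # True
```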
+We notice something wrong. Photons are excitations of the electromagnetic field. However, the photon only has two polarization states, i.e.\ two degrees of freedom, while this vector field $A^\mu$ has four. How can we resolve this?
+
There are two parts to the resolution. Firstly, note that the $A_0$ field is not dynamical: since $F_{\mu\nu}$ is antisymmetric, the Lagrangian contains no $\dot{A}_0$ term. Thus, if we're given $A_i$ and $\dot{A}_i$ at some initial time $t_0$, then $A_0$ is fully determined by $\nabla \cdot \mathbf{E} = 0$, which says
+\[
+ \nabla \cdot \dot{\mathbf{A}} + \nabla^2 A_0 = 0.
+\]
+This has a solution
+\[
 A_0(\mathbf{x}) = \int \d^3 \mathbf{x}' \frac{\nabla \cdot \dot{\mathbf{A}}(\mathbf{x}')}{4\pi|\mathbf{x} - \mathbf{x}'|}.
+\]
+So $A_0$ is \emph{not} independent, and we've got down to three degrees of freedom. What's the remaining one?
+
The next thing to note is that the theory is invariant under the gauge transformation
\[
 A_\mu (x) \mapsto A_\mu(x) + \partial_\mu \lambda(x)
\]
for any function $\lambda(x)$. Indeed, the only ``observable'' thing is $F_{\mu\nu}$, and we can see that it transforms as
+\[
+ F_{\mu\nu} \mapsto \partial_\mu (A_\nu + \partial_\nu \lambda) - \partial_\nu(A_\mu + \partial_\mu \lambda) = F_{\mu\nu}.
+\]
It looks like we now have an infinite number of symmetries.
+
+This is a different kind of symmetry. Previously, we had symmetries acting at all points in the universe in the same way, say $\psi \mapsto e^{i\alpha}\psi$ for some $\alpha \in \R$. This gave rise to conservation laws by Noether's theorem.
+
So do we get infinitely many conserved quantities from Noether's theorem? The answer is no. The local symmetries we're considering now have a different interpretation. Rather than taking a physical state to another physical state, they are really a redundancy in our description. These are known as \emph{local} or \term{gauge symmetries}\index{local symmetries}.
+%
+%One way to see this redundancy is to notice that Maxwell's equations do not specify the evolutions of $A_\mu$. The equations read
+%\[
+% (\eta_{\mu\nu} \partial_\rho \partial^\rho - \partial_\mu \partial_\nu) A^\nu = 0.
+%\]
+%The differential operator is not invertible, and in fact annihilates any function of the form $\partial_\mu \lambda(x)$.
+%
+%This means that given $A_i$ and $\dot{A}_i$ at $t_0$, we have no way to uniquely determine the evolution of $A_\mu$. So we can't distinguish between $A_\mu$ and $A_\mu + \partial_\mu \lambda(x)$.
+%
+%This might seem problematic, but it is fine as long as $A_\mu$ and $A_\mu + \partial_\mu \lambda(x)l$ correspond to the same physical state.
+%
+Seeing this, we might be tempted to try to formulate the theory only in terms of gauge invariant objects like $\mathbf{E}$ and $\mathbf{B}$, but it turns out this doesn't work. To describe nature, it appears that we have to introduce quantities that we cannot measure.
+
+Now to work with our theory, it is common to \emph{fix a gauge}. The idea is that we specify some additional rules we want $A_\mu$ to satisfy, and hopefully this will make sure there is a unique $A_\mu$ we can pick for each equivalence class. Picking the right gauge for the right problem can help us a lot. This is somewhat like picking a coordinate system. For example, in a system with spherical symmetry, using spherical polar coordinates will help us a lot.
+
+There are two gauges we would be interested in.
+\begin{defi}[Lorenz gauge]\index{Lorenz gauge}
+ The \emph{Lorenz gauge} is specified by
+ \[
+ \partial_\mu A^\mu = 0
+ \]
+\end{defi}
+Note that this is Lorenz, \emph{not} Lorentz!
+
+To make sure this is a valid gauge, we need to make sure that each electromagnetic potential has a representation satisfying this condition.
+
+Suppose we start with $A_\mu'$ such that $\partial_\mu A'^\mu = f$. If we introduce a gauge transformation
+\[
+ A_\mu = A_\mu' + \partial_\mu \lambda,
+\]
+then we have
+\[
+ \partial_\mu A^\mu = \partial_\mu \partial^\mu \lambda + f.
+\]
+So we need to find a $\lambda$ such that
+\[
+ \partial_\mu \partial^\mu \lambda = -f.
+\]
+By general PDE theory, such a $\lambda$ exists. So we are safe.
+
However, it turns out this requirement does not pick a unique representation in the gauge orbit. We are free to make a further gauge transformation with any $\lambda$ satisfying
+\[
+ \partial_\mu \partial^\mu \lambda = 0,
+\]
+which has non-trivial solutions (e.g.\ $\lambda(x) = x^0$).
+
+Another important gauge is the Coulomb gauge:
+\begin{defi}[Coulomb gauge]\index{Coulomb gauge}
+ The \emph{Coulomb gauge} requires
+ \[
+ \nabla \cdot \mathbf{A} = 0.
+ \]
+\end{defi}
+Of course, this is not a Lorentz-invariant condition.
+
+Similar to the previous computations, we know that this is a good gauge. Looking at the integral we've found for $A_0$ previously, namely
+\[
+ A_0 = \int \d^3 \mathbf{x}' \frac{\nabla \cdot \dot{\mathbf{A}}(\mathbf{x}')}{4\pi|\mathbf{x} - \mathbf{x}'|},
+\]
we find that $A_0 = 0$ at all times, since the gauge condition $\nabla \cdot \mathbf{A} = 0$ forces $\nabla \cdot \dot{\mathbf{A}} = 0$. Note that this happens only because we do not have matter.
+
+Here it is easy to see the physical degrees of freedom --- the three components in $\mathbf{A}$ satisfy a single constraint $\nabla \cdot \mathbf{A} = 0$, leaving behind two physical degrees of freedom, which gives the desired two polarization states.
+
+\subsection{Quantization of the electromagnetic field}
+We now try to quantize the field, using the Lorenz gauge. The particles we create in this theory would be photons, namely quanta of light. Things will go wrong really soon.
+
+We first try to compute the conjugate momentum $\pi^\mu$ of the vector field $A^\mu$. We have
+\[
+ \pi^0 = \frac{\partial \mathcal{L}}{\partial \dot{A}_0} = 0.
+\]
+This is slightly worrying, because we would later want to impose the commutation relation
+\[
+ [A_0(\mathbf{x}), \pi^0(\mathbf{y})] = i \delta^3(\mathbf{x} - \mathbf{y}),
+\]
+but this is clearly not possible if $\pi^0$ vanishes identically!
+
+We need to try something else. Note that under the Lorenz gauge $\partial_\mu A^\mu = 0$, the equations of motion tell us
+\[
+ \partial_\mu \partial^\mu A^\nu = 0.
+\]
+The trick is to construct a Lagrangian where this is actually the equation of motion, and then later impose $\partial_\mu A^\mu = 0$ after quantization.
+
+This is not too hard. We can pick the Lagrangian as
+\[
+ \mathcal{L} = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu} - \frac{1}{2}(\partial_\mu A^\mu)^2.
+\]
+We can work out the equations of motion of this, and we get
+\[
+ \partial_\mu F^{\mu\nu} + \partial^\nu(\partial_\mu A^\mu) = 0.
+\]
Writing out the definition of $F^{\mu\nu}$, we have $\partial_\mu F^{\mu\nu} = \partial_\mu \partial^\mu A^\nu - \partial^\nu (\partial_\mu A^\mu)$, so the second term cancels with the gauge-fixing term, and we get
+\[
+ \partial_\mu \partial^\mu A^\nu = 0.
+\]
+We are now going to work with this Lagrangian, and only later impose $\partial_\mu A^\mu = 0$ at the operator level.
+
+More generally, we can use a Lagrangian
+\[
+ \mathcal{L} =- \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - \frac{1}{2\alpha} (\partial_\mu A^\mu)^2,
+\]
+where $\alpha$ is a fixed constant. Confusingly, the choice of $\alpha$ is also known as a gauge. We picked $\alpha = 1$, which is called the \term{Feynman gauge}. If we take $\alpha \to 0$, we obtain the \term{Landau gauge}.
+
+This new theory has no gauge symmetry, and both $A_0$ and $\mathbf{A}$ are dynamical fields. We can find the conjugate momenta as
+\begin{align*}
+ \pi^0 &= \frac{\partial\mathcal{L}}{\partial \dot{A}_0} = - \partial_\mu A^\mu\\
+ \pi^i &= \frac{\partial \mathcal{L}}{\partial \dot{A}_i} = \partial^i A^0 - \dot{A}^i.
+\end{align*}
+We now apply the usual commutation relations
+\begin{gather*}
+ [A_\mu (\mathbf{x}), A_\nu(\mathbf{y})] = [\pi^\mu(\mathbf{x}), \pi^\nu(\mathbf{y})] = 0\\
+ [A_\mu(\mathbf{x}), \pi^\nu(\mathbf{y})] = i\delta^3(\mathbf{x} - \mathbf{y}) \delta_\mu^\nu.
+\end{gather*}
+Equivalently, the last commutation relation is
+\[
+ [A_\mu(\mathbf{x}), \pi_\nu(\mathbf{y})] = i\delta^3(\mathbf{x} - \mathbf{y}) \eta_{\mu\nu}.
+\]
+As before, in the Heisenberg picture, we get equal time commutation relations
+\[
+ [A_\mu (\mathbf{x}, t), \dot{A}_\nu(\mathbf{y}, t)] = i\eta_{\mu\nu} \delta^3(\mathbf{x} - \mathbf{y}).
+\]
+As before, we can write our operators in terms of creation and annihilation operators:
+\begin{align*}
+ A_\mu(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2 |\mathbf{p}|}} \sum_{\lambda = 0}^3 \mathcal{E}_\mu^{(\lambda)}(\mathbf{p}) [a_\mathbf{p}^\lambda e^{i\mathbf{p}\cdot \mathbf{x}} + a_\mathbf{p}^{\lambda\dagger} e^{-i\mathbf{p}\cdot \mathbf{x}}],\\
+ \pi^\nu(\mathbf{x}) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} i\sqrt{\frac{|\mathbf{p}|}{2}} \sum_{\lambda = 0}^3 (\mathcal{E}^{(\lambda)}(\mathbf{p}))^\nu [a_\mathbf{p}^\lambda e^{i\mathbf{p}\cdot \mathbf{x}} - a_\mathbf{p}^{\lambda\dagger} e^{-i\mathbf{p}\cdot \mathbf{x}}],
+\end{align*}
+where for each $\mathbf{p}$, the vectors $\{\mathcal{E}^{(0)}(\mathbf{p}), \mathcal{E}^{(1)}(\mathbf{p}), \mathcal{E}^{(2)}(\mathbf{p}), \mathcal{E}^{(3)}(\mathbf{p})\}$ form a basis of $\R^{3, 1}$. The exact choice doesn't really matter much, but we will suppose we pick a basis with the following properties:
+\begin{enumerate}
+ \item $\mathcal{E}^{(0)}(\mathbf{p})$ will be a timelike vector, while the others are spacelike.
+ \item Viewing $\mathbf{p}$ as the direction of motion, we will pick $\mathcal{E}^{(3)}(\mathbf{p})$ to be a longitudinal polarization, while $\mathcal{E}^{(1),(2)}(\mathbf{p})$ are transverse to the direction of motion, i.e.\ we require
+ \[
 \mathcal{E}^{(1), (2)}_\mu(\mathbf{p})\, p^\mu = 0.
+ \]
+ \item The normalization of the vectors is given by
+ \[
 \mathcal{E}^{(\lambda)}_\mu (\mathcal{E}^{(\lambda')})^\mu = \eta^{\lambda\lambda'}.
 \]
+\end{enumerate}
+
+We can explicitly write down a choice of such basis vectors. When $p \propto (1, 0, 0, 1)$, then we choose
+\[
+ \mathcal{E}^{(0)}_\mu(\mathbf{p}) =
+ \begin{pmatrix}
+ 1\\0\\0\\0
+ \end{pmatrix},\quad
+ \mathcal{E}^{(1)}_\mu(\mathbf{p}) =
+ \begin{pmatrix}
+ 0\\1\\0\\0
+ \end{pmatrix},\quad
+ \mathcal{E}^{(2)}_\mu(\mathbf{p}) =
+ \begin{pmatrix}
+ 0\\0\\1\\0
+ \end{pmatrix},\quad
+ \mathcal{E}^{(3)}_\mu(\mathbf{p}) =
+ \begin{pmatrix}
+ 0\\0\\0\\1
+ \end{pmatrix}.
+\]
Now for a general $p$, we pick a frame in which $p \propto (1, 0, 0, 1)$, which is always possible, and then we define $\mathcal{E}(\mathbf{p})$ as above in that frame. This gives us, for each $\mathbf{p}$, a choice of basis vectors satisfying the desired properties.
+
+One can do the tedious computations required to find the commutation relations for the creation and annihilation operators:
+\begin{thm}
+ \[
+ [a_\mathbf{p}^\lambda, a_\mathbf{q}^{\lambda'}] = [a_\mathbf{p}^{\lambda\dagger}, a_\mathbf{q}^{\lambda'\dagger}] = 0
+ \]
+ and
+ \[
+ [a_\mathbf{p}^\lambda, a_\mathbf{q}^{\lambda' \dagger}] = -\eta^{\lambda\lambda'} (2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}).
+ \]
+\end{thm}
+Notice the strange negative sign!
+
+We again define a vacuum $\bket{0}$ such that
+\[
+ a_\mathbf{p}^\lambda \bket{0} = 0
+\]
+for $\lambda = 0, 1, 2, 3$, and we can create one-particle states
+\[
+ \bket{\mathbf{p}, \lambda} = a_\mathbf{p}^{\lambda\dagger} \bket{0}.
+\]
+This makes sense for $\lambda = 1, 2, 3$, but for $\lambda = 0$, we have states with negative norm:
+\begin{align*}
+ \braket{\mathbf{p}, 0}{\mathbf{q}, 0} &\equiv \brak{0} a_\mathbf{p}^0 a_\mathbf{q}^{0\dagger}\bket{0}\\
+ &= -(2\pi)^3 \delta^3(\mathbf{p} - \mathbf{q}).
+\end{align*}
+A Hilbert space with a negative norm means that we have negative probabilities. This doesn't make sense.
+
+Here is where the Lorenz gauge comes in. By imposing $\partial_\mu A^\mu = 0$, we are going to get rid of bad things. But how do we implement this constraint? We can try implementing it in a number of ways. We will start off in the obvious way, which turns out to be too strong, and we will keep on weakening it until it works.
+
+If we just asked for $\partial_\mu A^\mu = 0$, for $A^\mu$ the operator, then this doesn't work, because
+\[
+ \pi^0 = - \partial_\mu A^\mu,
+\]
+and if this vanishes, then the commutation conditions cannot possibly be obeyed.
+
+Instead, we can try to impose this on the Hilbert space rather than on the operators. After all, that's where the trouble lies. Maybe we can try to split the Hilbert space up into good states and bad states, and then just look at the good states only?
+
+How do we define the good, physical states? Maybe we can impose
+\[
+ \partial_\mu A^\mu \bket{\psi} = 0
+\]
+on all physical (``good'') states, but it turns out even this condition is a bit too strong, as the vacuum will not be physical! To see this, we decompose
+\[
+ A_\mu(x) = A_\mu^+ (x) + A_\mu^-(x),
+\]
+where $A_\mu^+$ has the annihilation operators and $A_\mu^-$ has the creation operators. Explicitly, we have
+\begin{align*}
+ A_\mu^+(x) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2|\mathbf{p}|}} \mathcal{E}_\mu^{(\lambda)} a_\mathbf{p}^\lambda e^{-i p\cdot x}\\
+ A_\mu^-(x) &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2|\mathbf{p}|}} \mathcal{E}_\mu^{(\lambda)} a_\mathbf{p}^{\lambda\dagger} e^{ip\cdot x},
+\end{align*}
+where summation over $\lambda$ is implicit. Then we have
+\[
+ \partial^\mu A_\mu^+\bket{0} = 0,
+\]
+but we have
+\[
+ \partial^\mu A_\mu^- \bket{0} \not= 0.
+\]
+So not even the vacuum is physical! This is very bad.
+
+Our final attempt at weakening this is to ask the physical states to satisfy
+\[
+ \partial^\mu A_\mu^+(x) \bket{\psi} = 0.
+\]
+This ensures that
+\[
+ \brak{\psi}\partial^\mu A_\mu \bket{\psi} = 0,
+\]
+as $\partial^\mu A_\mu^+$ will kill the right hand side and $\partial^\mu A_\mu^-$ will kill the left. So $\partial_\mu A^\mu$ has vanishing matrix elements between physical states.
+
+This is known as the \term{Gupta-Bleuler condition}. The linearity of this condition ensures that the physical states span a vector space $\mathcal{H}_{\mathrm{phys}}$.
+
What does $\mathcal{H}_{\mathrm{phys}}$ look like? To understand this better, we write out what $\partial^\mu A_\mu^+$ is. We have
\begin{align*}
 \partial^\mu A_\mu^+ &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{1}{\sqrt{2|\mathbf{p}|}} (-ip^\mu) \mathcal{E}_\mu^{(\lambda)} a_\mathbf{p}^{\lambda} e^{-ip\cdot x}\\
 &= \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \sqrt{\frac{|\mathbf{p}|}{2}}\; i(a_\mathbf{p}^3 - a_\mathbf{p}^0) e^{-ip\cdot x},
\end{align*}
+using the properties of our particular choice of the $\mathcal{E}^\mu$. Thus the condition is equivalently
+\[
+ (a_\mathbf{p}^3 - a_\mathbf{p}^0)\bket{\psi} = 0.
+\]
This means that it is okay to have timelike or longitudinal photons, as long as they come in pairs of the same momentum!
+
+By restricting to these physical states, we have gotten rid of the negative norm states. However, we still have the problem of certain non-zero states having zero norm. Consider a state
+\[
+ \bket{\phi} = a_\mathbf{p}^{0\dagger} a_\mathbf{p}^{3\dagger}\bket{0}.
+\]
This is an allowed state, since it has exactly one timelike and one longitudinal photon, each of momentum $\mathbf{p}$. This has zero norm, as the norm contributions from the $a_\mathbf{p}^0$ part cancel those from the $a_\mathbf{p}^3$ part. However, this state is non-zero! We wouldn't want this to happen if we want a positive definite inner product.
+
+The solution is to declare that if two states differ only in the longitudinal and timelike photons, then we consider them \emph{physically equivalent}, or gauge equivalent! In other words, we are quotienting the state space out by these zero-norm states.
+
+Of course, if we want to do this, we need to go through everything we do and make sure that our physical observables do not change when we add or remove these silly zero-norm states. Fortunately, this is indeed the case, and we are happy.
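For example, one can compute the Hamiltonian of this theory, which turns out to be
\[
 H = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} |\mathbf{p}| \left(\sum_{i = 1}^3 a_\mathbf{p}^{i\dagger} a_\mathbf{p}^i - a_\mathbf{p}^{0\dagger}a_\mathbf{p}^0\right).
\]
The physical state condition $(a_\mathbf{p}^3 - a_\mathbf{p}^0)\bket{\psi} = 0$ implies
\[
 \brak{\psi} a_\mathbf{p}^{3\dagger}a_\mathbf{p}^3 \bket{\psi} = \brak{\psi}a_\mathbf{p}^{0\dagger}a_\mathbf{p}^0 \bket{\psi},
\]
so the timelike and longitudinal contributions to the energy cancel, and only the transverse photons contribute.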
+
+%This suggests we should treat the longitudinal/timelike parts and the transverse parts separately. For any basis state $\bket{\psi}$ in the Fock space, we write it as
+%\[
+% \bket{\psi} = \bket{\psi_T} \bket{\phi},
+%\]
+%where $\bket{\psi_T}$ contains the transverse photons, and $\bket{\phi}$ contains the timelike or longitudinal ones. Then we can rewrite our condition as
+%\[
+% (a_\mathbf{p}^3 - a_\mathbf{p}^0) \bket{\phi} = 0.
+%\]
+%Assuming this condition, we can write
+%\[
+% \bket{\phi} = \sum_{n = 0}^\infty c_n \bket{\phi_n},
+%\]
+%where each $\phi_n$ is a sum of $2n$-particle states, with each state containing $n$ timelike and $n$ longitudinally polarized photons of the same momenta. We have $\brak{\phi_0} = \bket{0}$, and by convention, we pick $c_0 = 1$ (we can shift the factors to $\bket{\psi_T}$). These states satisfy
+%\[
+% \braket{\phi_n}{\phi_m} = \delta_{n0} \delta_{m0}.
+%\]
+%So in particular, we have
+%\[
+% \braket{\phi}{\phi} = c_0 \braket{\phi_0}{\phi_0} = c_0 = 1,
+%\]
+%Thus, we know that
+%\[
+% \braket{\psi}{\psi} = \braket{\psi_T}{\psi_T}.
+%\]
+%This means, in particular, that
+%and all negative normed states are moved by $(*)$. We treat the zero normed states as being \emph{gauge equivalent} to the vacuum. % are we quotienting out by the subspace?
+%
+%Two states which differ only in the longitudinal or timelike photons are said to be \term{physically equivalent}. Of course, this makes sense only if no physical observables depend on these $\phi_n$ (for $n > 0$). In particular, we should check that the Hamiltonian indeed doesn't depend on these states, but this is boring, and we will just state the result. We have
+%\[
+% H = \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} |\mathbf{p}| \left(\sum_{i = 1}^3 a_\mathbf{p}^{i\dagger} a_\mathbf{p}^i - a_\mathbf{p}^{0\dagger}a_\mathbf{p}^0\right).
+%\]
+%But we require that $a_\mathbf{k}^3 - a_\mathbf{k}^0 \bket{\psi} = 0$. So we have
+%\[
+% \bra \psi a_\mathbf{p}^{3\dagger}a_\mathbf{p}^3 \bket{\psi} = \bra{\psi}a_\mathbf{p}^{0\dagger}a_\mathbf{p}^0 \bket{\psi}.
+%\]
+%So time-like and longitudinal pieces cancel in $H$. We just get contributions from the transverse states.
+%
+%In general, any matrix elements involving any gauge-invariant operator evalluated on physical states are independent of the $c_n$.
+
+Before we move on, we note the value of the Feynman propagator:
+
+\begin{thm}
+ The Feynman propagator for the electromagnetic field, under a general gauge $\alpha$, is
+ \[
+ \brak{0} TA_\mu(x) A_\nu(y) \bket{0} = \int \frac{\d^4 p}{(2\pi)^4} \frac{-i}{p^2 + i\varepsilon} \left(\eta_{\mu\nu} + (\alpha - 1) \frac{p_\mu p_\nu}{p^2}\right) e^{-ip\cdot(x - y)}.
+ \]
+\end{thm}
+
+\subsection{Coupling to matter in classical field theory}
We now move on to couple our EM field with matter. We first do it in the clear and sensible universe of classical field theory. We will tackle two cases --- in the first case, we couple with fermions; in the second, we couple with a mere complex scalar field. It should be clear how one can generalize these further to other fields.
+
+Suppose the resulting coupled Lagrangian looked like
+\[
+ \mathcal{L} = - \frac{1}{4} F_{\mu\nu}F^{\mu\nu} - A_\mu j^\mu,
+\]
+plus some kinetic and self-interaction terms for the field we are coupling with. Then the equations of motion give us
+\[
+ \partial_\mu F^{\mu\nu} = j^\nu.
+\]
+Since $F_{\mu\nu}$ is anti-symmetric, we know we must have
+\[
+ 0 = \partial_\mu \partial_\nu F^{\mu\nu} = \partial_\nu j^\nu.
+\]
+So we know $j^\mu$ is a conserved current. So to couple the EM field to matter, we need to find some conserved current.
+
+\subsubsection*{Coupling with fermions}
+Suppose we had a spinor field $\psi$ with Lagrangian
+\[
+ \mathcal{L} = \bar\psi (i \slashed\partial -m ) \psi.
+\]
+This has an internal symmetry
+\begin{align*}
 \psi &\mapsto e^{-i\alpha} \psi\\
+ \bar\psi &\mapsto e^{i \alpha} \bar\psi,
+\end{align*}
+and this gives rise to a conserved current
+\[
+ j^\mu = \bar\psi \gamma^\mu \psi.
+\]
+So let's try
+\[
+ \mathcal{L} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \bar\psi(i \slashed \partial- m)\psi - e \bar\psi \gamma^\mu A_\mu \psi,
+\]
+where $e$ is some coupling factor.
+
Have we lost gauge invariance now that we have an extra term? If we just substituted $A_\mu$ for $A_\mu + \partial_\mu \lambda$, then obviously the Lagrangian changes. However, something deep happens. When we couple the two terms, we don't just add an extra term to the Lagrangian. We have secretly introduced a new gauge symmetry to the fermion field, and now when we take gauge transformations, we have to transform both fields at the same time.
+
+To see this better, we rewrite the Lagrangian as
+\[
+ \mathcal{L} = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu} + \bar\psi(i \slashed \D - m)\psi,
+\]
+where $\D$ is the \term{covariant derivative} given by
+\[
+ \D_\mu \psi = \partial_\mu \psi + i e A_\mu \psi.
+\]
+We now claim that $\mathcal{L}$ is invariant under the simultaneous transformations
+\begin{align*}
+ A_\mu &\mapsto A_\mu + \partial_\mu \lambda(x)\\
+ \psi &\mapsto e^{-ie \lambda(x)} \psi.
+\end{align*}
To check that this is indeed invariant, we only have to check the $\bar\psi\slashed \D \psi$ term. We look at how $\D_\mu \psi$ transforms. We have
+\begin{align*}
+ \D_\mu \psi &= \partial_\mu \psi + ie A_\mu \psi\\
+ &\mapsto \partial_\mu (e^{-ie\lambda(x)} \psi) + ie (A_\mu + \partial_\mu \lambda(x)) e^{-ie\lambda(x)}\psi\\
+ &= e^{-ie\lambda(x)} \D_\mu\psi.
+\end{align*}
+So we have
+\[
+ \bar\psi \slashed \D \psi \mapsto \bar\psi \slashed \D \psi,
+\]
+i.e.\ this is gauge invariant.
+
+So as before, we can use the gauge freedom to remove unphysical states after quantization. The coupling constant $e$ has the interpretation of electric charge, as we can see from the equation of motion
+\[
+ \partial_\mu F^{\mu\nu} = ej^\nu.
+\]
+In electromagnetism, $j^0$ is the charge density, but after quantization, we have
+\begin{align*}
+ Q &= e \int \d^3 \mathbf{x}\; \bar\psi \gamma^0 \psi \\
 &= e\int \frac{\d^3 \mathbf{p}}{(2\pi)^3} (b_\mathbf{p}^{s\dagger} b_\mathbf{p}^s - c_\mathbf{p}^{s\dagger} c_\mathbf{p}^s)\\
+ &= e \times (\text{number of electrons} - \text{number of anti-electrons}),
+\end{align*}
+with an implicit sum over the spin $s$. So this is the total charge of the electrons, where anti-electrons have the opposite charge!
+
+For QED, we usually write $e$ in terms of the fine structure constant
+\[
+ \alpha = \frac{e^2}{4\pi} \approx \frac{1}{137}
+\]
+for an electron.
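Numerically, this corresponds to a coupling
\[
 e = \sqrt{4\pi \alpha} \approx 0.303
\]
in natural units. The smallness of $\alpha$ is what makes perturbation theory in QED work so well.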
+
+\subsubsection*{Coupling with complex scalars}
Now let's try to couple with a scalar field. For a real scalar field, there is no suitable conserved current to couple $A^\mu$ to. For a complex scalar field $\varphi$, we can use the current coming from $\varphi \to e^{i\alpha} \varphi$, namely
\[
 j_\mu = i[(\partial_\mu \varphi)^* \varphi - \varphi^* \partial_\mu \varphi].
+\]
+We try the obvious thing with an interaction term
+\[
 \mathcal{L}_{\mathrm{int}} = -i [(\partial_\mu \varphi)^* \varphi - \varphi^* \partial_\mu \varphi] A^\mu.
+\]
+This doesn't work. The problem is that we have introduced a new term $j^\mu A_\mu$ to the Lagrangian. If we compute the conserved current due to $\varphi \mapsto e^{i\alpha} \varphi$ under the new Lagrangian, it turns out we will obtain a \emph{different} $j^\mu$, which means in this theory, we are not really coupling to the conserved current, and we find ourselves going in circles.
+
+To solve this problem, we can try random things and eventually come up with the extra term we need to add to the system to make it consistent, and then be perpetually puzzled about why that worked. Alternatively, we can also learn from what we did in the previous example. What we did, at the end, was to invent a new covariant derivative:
+\[
+ \D_\mu \varphi = \partial_\mu \varphi + ie A_\mu \varphi,
+\]
+and then replace all occurrences of $\partial_\mu$ with $\D_\mu$. Under a simultaneous gauge transformation
+\begin{align*}
+ A_\mu &\mapsto A_\mu + \partial_\mu \lambda(x)\\
+ \varphi &\mapsto e^{-ie \lambda(x)} \varphi,
+\end{align*}
+the covariant derivative transforms as
+\[
 \D_\mu \varphi \mapsto e^{-ie\lambda(x)} \D_\mu \varphi,
+\]
+as before. So we can construct a gauge invariant Lagrangian by
+\[
+ \mathcal{L} = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu} + (\D_\mu \varphi)^\dagger (\D^\mu \varphi) - m^2 \varphi^* \varphi.
+\]
The conserved current is the same as before, except with the partial derivatives promoted to covariant derivatives:
\[
 j^\mu = i[(\D^\mu \varphi)^\dagger \varphi - \varphi^\dagger \D^\mu \varphi].
\]
+In general, for any field $\phi$ taking values in a complex vector space, we have a $\U(1)$ gauge symmetry
+\[
 \phi \mapsto e^{-ie\lambda(x)} \phi.
+\]
+Then we can couple with the EM field by replacing
+\[
 \partial_\mu \phi \mapsto \D_\mu \phi = \partial_\mu \phi + ie A_\mu \phi.
+\]
+This process of just taking the old Lagrangian and then replacing partial derivatives with covariant derivatives is known as \term{minimal coupling}. More details on how this works can be found in the Symmetries, Fields and Particles course.
+
+\subsection{Quantization of interactions}
+Let's work out some quantum amplitudes for light interacting with electrons. We again have the Lagrangian
+\[
+ \mathcal{L} = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu} + \bar\psi (i \slashed \D - m)\psi,
+\]
+where
+\[
 \D_\mu = \partial_\mu + i e A_\mu.
+\]
+For a change, we work in the Coulomb gauge $\nabla \cdot \mathbf{A} = 0$. So the equation of motion for $A_0$ is
+\[
 \nabla^2 A_0 = -e \bar\psi \gamma^0 \psi = -ej^0.
+\]
+This equation has a solution
+\[
+ A_0(x) = e \int \d^3 \mathbf{x}' \frac{j^0(\mathbf{x}', t)}{4\pi |\mathbf{x} - \mathbf{x}'|}.
+\]
+In the Coulomb gauge, we can rewrite the Maxwell part (i.e.\ $F_{\mu\nu}F^{\mu\nu}$ part) of the Lagrangian (not density!) as
+\begin{align*}
+ L_A &= \int \d^3 \mathbf{x}\; \frac{1}{2}(\mathbf{E}^2 - \mathbf{B}^2)\\
+ &= \int \d^3 \mathbf{x}\; \left(\frac{1}{2} (\dot{\mathbf{A}} + \nabla A_0)^2 - \frac{1}{2} \mathbf{B}^2\right)\\
+ &= \int \d^3 \mathbf{x}\; \left(\frac{1}{2}\dot{\mathbf{A}}^2 + \frac{1}{2}(\nabla A_0)^2 - \frac{1}{2}\mathbf{B}^2\right),
+\end{align*}
+where the gauge condition means that the cross-term vanishes by integration by parts.
+
+Integrating by parts and substituting our $A_0$, we get that
+\[
+ L_A = \int \d^3 \mathbf{x} \left(\frac{1}{2} \dot{\mathbf{A}}^2 + \frac{e^2}{2} \left(\int \d^3 \mathbf{x}'\; \frac{j_0(\mathbf{x}) j_0(\mathbf{x}')}{4\pi|\mathbf{x} - \mathbf{x}'|}\right) - \frac{1}{2}\mathbf{B}^2\right).
+\]
+This is weird, because we now have a non-local term in the Lagrangian. This term arises as an artifact of working in Coulomb gauge. This doesn't appear in Lorenz gauge.
+
+Let's now compute the Hamiltonian. We will use capital Pi ($\boldsymbol\Pi$) for the conjugate momentum of the electromagnetic potential $\mathbf{A}$, and lower case pi ($\pi$) for the conjugate momentum of the spinor field. We have
+\begin{align*}
+ \boldsymbol\Pi &= \frac{\partial \mathcal{L}}{\partial \dot{\mathbf{A}}} = \dot{\mathbf{A}}\\
+ \pi_\psi &= \frac{\partial \mathcal{L}}{\partial \dot{\psi}} = i \psi^\dagger.
+\end{align*}
+Putting all these into the Hamiltonian, we get
+\begin{multline*}
+ H = \int \d^3 \mathbf{x}\; \left(\frac{1}{2} \dot{\mathbf{A}}^2 + \frac{1}{2} \mathbf{B}^2 + \bar\psi (-i \gamma^i \partial_i + m) \psi - e\mathbf{j}\cdot \mathbf{A} \right.\\
+ \left.+ \frac{e^2}{2} \int \d^3 \mathbf{x}'\;\frac{j_0(\mathbf{x}) j_0(\mathbf{x}')}{4\pi |\mathbf{x} - \mathbf{x}'|}\right),
+\end{multline*}
+where
+\[
+ \mathbf{j} = \bar\psi \boldsymbol\gamma \psi,\quad j_0 = \bar\psi \gamma_0 \psi.
+\]
+After doing the tedious computations, one finds that the Feynman rules are as follows:
+\begin{enumerate}
+ \item The photons are denoted as squiggly lines:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (l);
+ \vertex [right=of l] (r);
+ \diagram*{
+ (l) -- [photon] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ Each line comes with an index $i$ that tells us which component of $\mathbf{A}$ we are talking about. Each internal line gives us a factor of
+ \[
+ D_{ij}^{tr} = \frac{i}{p^2 + i \varepsilon} \left(\delta_{ij} - \frac{p_i p_j}{|\mathbf{p}|^2}\right),
+ \]
+ while for each external line,
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (l) {$\mathcal{E}^{(i)}$};
+ \vertex [right=of l] (r);
+ \diagram*{
+ (l) -- [photon] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ we simply write down a polarization vector $\mathcal{E}^{(i)}$ corresponding to the polarization of the particle.
+
+ \item The $e \mathbf{j}\cdot \mathbf{A}$ term gives us an interaction of the form
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m);
+ \vertex [above left=of m] (i1);
+ \vertex [below left=of m] (i2);
+ \vertex [right=of m] (f);
+
+ \diagram*{
+ (i1) -- [fermion] (m) -- [fermion] (i2),
+ (m) -- [photon] (f),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ This contributes a factor of $-ie\gamma^i$, where $i$ is the index of the squiggly line.
+
+ \item The non-local term of the Lagrangian gives us instantaneous non-local interactions, denoted by dashed lines:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1);
+ \vertex [below left=of m1] (i2);
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1);
+ \vertex [below right=of m2] (f2);
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (i2),
+ (f2) -- [fermion] (m2) -- [fermion] (f1),
+ (m1) -- [scalar] (m2),
+ };
+ \node at (m1) [above] {$x$};
+ \node at (m2) [above] {$y$};
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ The contribution of this factor, in position space, is given by
+ \[
+ \frac{i(e \gamma_0)^2 \delta(x^0 - y^0)}{4\pi|\mathbf{x} - \mathbf{y}|}.
+ \]
+\end{enumerate}
+Whoa. What is this last term saying? Recall that when we previously derived our Feynman rules, we first obtained some terms from Wick's theorem that contained things like $e^{ip\cdot x}$, and then integrated over $x$ to get rid of these terms, so that the resulting formula only included momentum things. What we've shown here in the last rule is what we have before integrating over $x$ and $y$. We now want to try to rewrite things so that we can get something nicer.
+
+We note that this piece comes from the $A_0$ term. So one possible strategy is to make it into a $D_{00}$ piece of the photon propagator. So we treat this as a photon indexed by $\mu = 0$. We then rewrite the above rules and replace all indices $i$ with $\mu$ and let it range over $0, 1, 2, 3$.
+
+To do so, we note that
+\[
+ \frac{\delta(x^0 - y^0)}{4\pi |\mathbf{x} - \mathbf{y}|} = \int \frac{\d^4 p}{(2\pi)^4} \frac{e^{ip\cdot(x - y)}}{|\mathbf{p}|^2}.
+\]
+We now define the $\gamma$-propagator
+\[
+ D_{\mu\nu}(p) =
+ \begin{cases}
+ \frac{i}{p^2 + i\varepsilon} \left(\delta_{\mu\nu} - \frac{p_\mu p_\nu}{|\mathbf{p}|^2}\right) & \mu, \nu \not= 0\\
+ \frac{i}{|\mathbf{p}|^2} & \mu = \nu = 0\\
+ 0 & \text{otherwise}
+ \end{cases}
+\]
+We now have the following Feynman rules:
+\begin{enumerate}
+ \item The photons are denoted as squiggly lines:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (l);
+ \vertex [right=of l] (r);
+ \diagram*{
+ (l) -- [photon] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
 Each internal line comes with indices $\mu, \nu$ ranging over $0, 1, 2, 3$ that tell us which components of $\mathbf{A}$ we are talking about, and gives us a factor of $D_{\mu\nu}$, while for each external line,
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (l) {$\mathcal{E}^{(\mu)}$};
+ \vertex [right=of l] (r);
+ \diagram*{
+ (l) -- [photon] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ we simply write down a polarization vector $\mathcal{E}^{(\mu)}$ corresponding to the polarization of the particle.
+
+ \item The $e \mathbf{j}\cdot \mathbf{A}$ term gives us an interaction of the form
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m);
+ \vertex [above left=of m] (i1);
+ \vertex [below left=of m] (i2);
+ \vertex [right=of m] (f);
+
+ \diagram*{
+ (i1) -- [fermion] (m) -- [fermion] (i2),
+ (m) -- [photon] (f),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
 This contributes a factor of $-ie\gamma^\mu$, where $\mu$ is the index of the squiggly line.
+\end{enumerate}
+
+Now the final thing to deal with is the annoying $D_{\mu\nu}$ formula. We claim that it can always be replaced by
+\[
+ D_{\mu\nu}(p) = -i \frac{\eta_{\mu\nu}}{p^2},
+\]
+i.e.\ in all contractions involving $D_{\mu\nu}$ we care about, contracting with $D_{\mu\nu}$ gives the same result as contracting with $-i \frac{\eta_{\mu\nu}}{p^2}$. This can be proved in full generality using momentum conservation, but we will only look at some particular cases.
+
+\begin{eg}
+ Consider the process
+ \[
+ e^- e^- \to e^- e^-.
+ \]
+ We look at one particular diagram
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$p', s'$};
+ \vertex [right=of m2] (f2) {$q', r'$};
+
+ \node [above] at (m1) {$\mu$};
+ \node [below] at (m2) {$\nu$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [fermion] (m2) -- [fermion] (f2),
+ (m1) -- [photon] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ We have two vertices, which contribute to a term of
+ \[
+ e^2 [\bar{u}(p') \gamma^\mu u(p)] D_{\mu\nu}(k)[\bar{u}(q') \gamma^\nu u(q)],
+ \]
+ where
+ \[
+ k = p - p' = -q + q'.
+ \]
+ We show that in this case, we can replace $D_{\mu\nu}(k)$ by
+ \[
+ D_{\mu\nu}(k) = \frac{-i \eta_{\mu\nu}}{k^2}.
+ \]
+ The proof follows from current conservation. Recall that $u(p)$ satisfies
+ \[
+ (\slashed p - m) u(p) = 0.
+ \]
+ We define the spinor combinations
+ \begin{align*}
+ \alpha^\mu &= \bar{u}(p') \gamma^\mu u(p)\\
+ \beta^\mu &= \bar{u}(q') \gamma^\mu u(q)
+ \end{align*}
+ What we have is that
+ \[
+ k_\mu \alpha^\mu = \bar{u} (p')(\slashed p - \slashed p') u(p) = \bar{u}(p')(m - m) u(p) = 0,
+ \]
+ since $k = p - p'$, $\slashed p u(p) = m u(p)$ and $\bar{u}(p') \slashed p' = m \bar{u}(p')$.
+ \[
+ \]
+ Similarly, $k_\mu \beta^\mu$ is also zero. So our Feynman diagram is given by
+ \[
+ \alpha^\mu D_{\mu\nu} \beta^\nu = i\left(\frac{\boldsymbol\alpha \cdot \boldsymbol\beta}{k^2} - \frac{(\boldsymbol\alpha\cdot \mathbf{k})(\boldsymbol\beta \cdot \mathbf{k})}{|\mathbf{k}|^2 k^2} + \frac{\alpha^0 \beta^0}{|\mathbf{k}|^2}\right).
+ \]
+ But we know that $\alpha^\mu k_\mu = \beta^\mu k_\mu = 0$. So this is equal to
+ \begin{align*}
+ &\hphantom{=}i\left(\frac{\boldsymbol\alpha \cdot \boldsymbol\beta}{k^2} - \frac{k_0^2 \alpha^0 \beta^0}{|\mathbf{k}|^2 k^2} + \frac{\alpha^0 \beta^0}{|\mathbf{k}|^2}\right)\\
+ &= i\left(\frac{\boldsymbol \alpha\cdot \boldsymbol\beta}{k^2} - \frac{1}{|\mathbf{k}|^2 k^2}(k_0^2 - k^2) \alpha^0 \beta^0\right)\\
+ &= i\left(\frac{\boldsymbol \alpha\cdot \boldsymbol\beta}{k^2} - \frac{|\mathbf{k}|^2}{|\mathbf{k}|^2 k^2} \alpha^0 \beta^0\right)\\
+ &= -i \frac{\alpha^0 \beta^0 - \boldsymbol\alpha \cdot \boldsymbol\beta}{k^2} = -i \frac{\alpha \cdot \beta}{k^2} \\
+ &= \alpha^\mu \left(\frac{-i \eta_{\mu\nu}}{k^2}\right)\beta^\nu.
+ \end{align*}
+ What really is giving us this simple form is current conservation.
+\end{eg}
+In general, in Lorenz gauge, we have
+\[
+ D_{\mu\nu} = -\frac{i}{p^2} \left(\eta_{\mu\nu} + (\alpha - 1) \frac{p_\mu p_\nu}{p^2}\right),
+\]
+and the second term cancels in all physical processes, for similar reasons.
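+
+For instance, contracting the second term with the conserved currents of the previous example gives
+\[
+ \alpha^\mu \left(-\frac{i(\alpha - 1)}{k^2} \frac{k_\mu k_\nu}{k^2}\right) \beta^\nu = -\frac{i(\alpha - 1)}{k^4} (k_\mu \alpha^\mu)(k_\nu \beta^\nu) = 0,
+\]
+since current conservation gives $k_\mu \alpha^\mu = k_\nu \beta^\nu = 0$.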
+
+\subsubsection*{Charged scalars}
+We quickly go through the Feynman rules for charged complex scalar fields. We will not use these for anything. The Lagrangian coming from minimal coupling is
+\[
+ \mathcal{L} = (\D_\mu \psi)^\dagger \D^\mu \psi - \frac{1}{4} F_{\mu\nu}F^{\mu\nu}.
+\]
+We can expand the first term to get
+\[
+ (\D_\mu \psi)^\dagger \D^\mu \psi = \partial_\mu \psi^\dagger \partial^\mu \psi - ie A_\mu(\psi^\dagger \partial^\mu \psi - \psi \partial^\mu \psi^\dagger) + e^2 A_\mu A^\mu \psi^\dagger \psi.
+\]
+The Feynman rules for these are:
+\begin{enumerate}
+ \item The first interaction term gives us a possible vertex
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m);
+ \vertex [below left=of m] (l);
+ \vertex [below right=of m] (r);
+ \vertex [above=of m] (t);
+ \diagram*{
+ (l) -- [momentum=$p$] (m) -- [momentum=$q$] (r),
+ (m) -- [photon] (t),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ This contributes a factor of $-ie(p + q)_\mu$.
+ \item The $A_\mu A^\mu \psi^\dagger \psi$ term gives diagrams of the form
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m);
+ \vertex [below left=of m] (l);
+ \vertex [below right=of m] (r);
+ \vertex [above left=of m] (tl);
+ \vertex [above right=of m] (tr);
+ \diagram*{
+ (l) -- [momentum=$p$] (m) -- [momentum=$q$] (r),
+ (m) -- [photon] (tr),
+ (m) -- [photon] (tl),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ This contributes a factor of $2ie^2 \eta_{\mu\nu}$.
+\end{enumerate}
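+
+The factor of $2$ in the last vertex comes from the two ways of attaching the photon lines: the term $ie^2 A_\mu A^\mu \psi^\dagger \psi$ in $i\mathcal{L}$ contains two factors of $A$, and either can be contracted with either external photon, giving a combinatorial factor of $2!$:
+\[
+ ie^2 \eta_{\mu\nu} \times 2! = 2ie^2 \eta_{\mu\nu}.
+\]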
+
+\subsection{Computations and diagrams}
+We now do some examples. Here we will not explain where the positive/negative signs come from, since tracking them requires some tedious work with Wick's theorem, if you are not smart enough to just ``see'' them.
+
+\begin{eg}
+ Consider the process
+ \[
+ e^- e^- \to e^- e^-.
+ \]
+ We again can consider the two diagrams
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$p', s'$};
+ \vertex [right=of m2] (f2) {$q', r'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [fermion] (m2) -- [fermion] (f2),
+ (m1) -- [photon] (m2),
+ };
+ \node at (m1) [above] {$\mu$};
+ \node at (m2) [below] {$\nu$};
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$p', s'$};
+ \vertex [right=of m2] (f2) {$q', r'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (f2),
+ (i2) -- [fermion] (m2) -- [fermion] (f1),
+ (m1) -- [photon] (m2),
+ };
+ \node at (m1) [above] {$\mu$};
+ \node at (m2) [below] {$\nu$};
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ We will do the first diagram slowly. The top vertex gives us a factor of
+ \[
+ -ie [\bar{u}^{s'}_{\mathbf{p}'}\gamma^\mu u^s_{\mathbf{p}}].
+ \]
+ The bottom vertex gives us
+ \[
+ -ie [\bar{u}^{r'}_{\mathbf{q}'} \gamma^\nu u^r_\mathbf{q}].
+ \]
+ The middle squiggly line gives us
+ \[
+ -\frac{i \eta_{\mu\nu}}{(p' - p)^2}.
+ \]
+ So putting all these together, the first diagram gives us a term of
+ \[
+ -i(-ie)^2 \left(\frac{[\bar{u}^{s'}_{\mathbf{p}'} \gamma^\mu u^s_\mathbf{p}][\bar{u}^{r'}_{\mathbf{q}'} \gamma_\mu u^r_\mathbf{q}]}{(p' - p)^2}\right).
+ \]
+ Similarly, the second diagram, in which the outgoing electrons are exchanged, gives
+ \[
+ -i(-ie)^2 \left(\frac{[\bar{u}^{r'}_{\mathbf{q}'} \gamma^\mu u^s_\mathbf{p}][\bar{u}^{s'}_{\mathbf{p}'} \gamma_\mu u^r_\mathbf{q}]}{(p - q')^2}\right),
+ \]
+ up to a relative minus sign between the two diagrams that comes from Wick's theorem.
+\end{eg}
+
+\begin{eg}
+ Consider the process
+ \[
+ e^+ e^- \to \gamma \gamma.
+ \]
+ We have diagrams of the form
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (i1) {$p, s$};
+ \vertex [below=of i1] (i2) {$q, r$};
+ \vertex [right=of i1] (m1);
+ \vertex [right=of i2] (m2);
+ \vertex [right=of m1] (f1) {$\mathcal{E}^\mu, p'$};
+ \vertex [right=of m2] (f2) {$\mathcal{E}^\nu, q'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (m2) -- [fermion] (i2),
+ (m1) -- [photon] (f1),
+ (m2) -- [photon] (f2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ This diagram gives us
+ \[
+ i(-ie)^2 \left(\frac{[\bar{v}^r_{\mathbf{q}} \gamma^\nu(\slashed p - \slashed p' + m) \gamma^{\mu} u^s_{\mathbf{p}}]}{(p - p')^2 - m^2}\right) \mathcal{E}_\mu(\mathbf{p}') \mathcal{E}_\nu(\mathbf{q}').
+ \]
+ Usually, since the mass of an electron is so tiny relative to our energy scales, we simply ignore it.
+\end{eg}
+
+\begin{eg}[Bhabha scattering]
+ This is definitely not named after an elephant.
+
+ We want to consider the scattering process
+ \[
+ e^+ e^- \to e^+ e^-
+ \]
+ with diagrams
+ \begin{center}
+ \begin{tikzpicture}[yscale=0.5]
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [below=of m1] (m2);
+
+ \vertex [left=of m1] (i1) {$p, s$};
+ \vertex [left=of m2] (i2) {$q, r$};
+ \vertex [right=of m1] (f1) {$p', s'$};
+ \vertex [right=of m2] (f2) {$q', r'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [anti fermion] (m2) -- [anti fermion] (f2),
+ (m1) -- [photon] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$p, s$};
+ \vertex [below left=of m1] (i2) {$q, r$};
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1) {$p', s'$};
+ \vertex [below right=of m2] (f2) {$q', r'$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (i2),
+ (f2) -- [fermion] (m2) -- [fermion] (f1),
+ (m1) -- [photon] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ These contribute
+ \[
+ -i(-i e)^2 \left(-\frac{[\bar{u}^{s'}_{\mathbf{p}'} \gamma^\mu u^s_{\mathbf{p}}][\bar{v}^r_{\mathbf{q}} \gamma_\mu v^{r'}_{\mathbf{q}'}]}{(p - p')^2} + \frac{[\bar{v}^r_{\mathbf{q}} \gamma^\mu u^s_{\mathbf{p}}][\bar{u}^{s'}_{\mathbf{p}'} \gamma_\mu v^{r'}_{\mathbf{q}'}]}{(p + q)^2}\right).
+ \]
+\end{eg}
+
+\begin{eg}[Compton scattering]
+ Consider scattering of the form
+ \[
+ \gamma e^- \to \gamma e^-.
+ \]
+ We have diagrams
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex at (0, 0) (a);
+ \vertex at (1.5, 0) (b);
+ \vertex at (3, 0) (c);
+ \vertex at (4.5, 0) (d);
+
+ \node at (a) [left] {$u_{\mathbf{q}}$};
+ \node at (d) [right] {$\bar{u}_{\mathbf{q}'}$};
+
+ \vertex at (0, 1.5) (l) {$\mathcal{E}^\mu(\mathbf{p})$};
+ \vertex at (4.5, 1.5) (r) {$\mathcal{E}^\nu(\mathbf{p'})$};
+
+ \diagram*{
+ (a) -- [fermion] (b) -- [fermion, momentum=$p + q$] (c) -- [fermion] (d),
+ (l) -- [photon] (b),
+ (c) -- [photon] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \quad\quad
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex at (0, 0) (a);
+ \vertex at (1.5, 0) (b);
+ \vertex at (3, 0) (c);
+ \vertex at (4.5, 0) (d);
+
+ \node at (a) [left] {$u_{\mathbf{q}}$};
+ \node at (d) [right] {$\bar{u}_{\mathbf{q}'}$};
+
+ \vertex at (1, 1.5) (l) {$\mathcal{E}^\mu(\mathbf{p})$};
+ \vertex at (3.5, 1.5) (r) {$\mathcal{E}^\nu(\mathbf{p'})$};
+
+ \diagram*{
+ (a) -- [fermion] (b) -- [fermion, momentum'=$q - p'$] (c) -- [fermion] (d),
+ (l) -- [photon] (c),
+ (b) -- [photon] (r),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{eg}
+ Consider the
+ \[
+ \gamma\gamma \to \gamma\gamma
+ \]
+ scattering process. We have a loop diagram
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (tl);
+ \vertex [below=of tl] (bl);
+ \vertex [right=of tl] (tr);
+ \vertex [below=of tr] (br);
+ \vertex [above left=of tl] (ttll);
+ \vertex [above right=of tr] (ttrr);
+ \vertex [below left=of bl] (bbll);
+ \vertex [below right=of br] (bbrr);
+
+ \diagram*{
+ (tl) -- [fermion] (tr) -- [fermion] (br) -- [fermion] (bl) -- [fermion] (tl),
+ (tl) -- [photon] (ttll),
+ (bl) -- [photon] (bbll),
+ (tr) -- [photon] (ttrr),
+ (br) -- [photon] (bbrr),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ Naively, we might expect the contribution to be proportional to
+ \[
+ \int \frac{\d^4 k}{k^4}.
+ \]
+ This integral diverges logarithmically, but it turns out that if we do the computation explicitly, the gauge symmetry causes things to cancel out and renders the diagram finite.
+\end{eg}
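+
+The naive estimate comes from power counting: at large loop momentum $k$, each of the four fermion propagators behaves like
+\[
+ \frac{i(\slashed k + m)}{k^2 - m^2} \sim \frac{1}{k},
+\]
+so the four propagators together with the loop measure $\d^4 k$ give an integrand scaling like $\d^4 k / k^4$, which is logarithmically divergent.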
+
+\begin{eg}[Muon scattering]
+ We could also put muons into the picture. These behave like electrons, but are actually different things. We can consider scattering of the form
+ \[
+ e^- \mu^- \to e^- \mu^-
+ \]
+ This has a diagram
+ \begin{center}
+ \begin{tikzpicture}[yscale=0.5]
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [below=of m1] (m2);
+
+ \vertex [left=of m1] (i1) {$e^-$};
+ \vertex [left=of m2] (i2) {$e^-$};
+ \vertex [right=of m1] (f1) {$\mu^-$};
+ \vertex [right=of m2] (f2) {$\mu^-$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (f1),
+ (i2) -- [fermion] (m2) -- [fermion] (f2),
+ (m1) -- [photon] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+ We don't get the diagram obtained by swapping two particles because electrons are not muons.
+
+ We can also get interactions of the form
+ \[
+ e^+ e^- \to \mu^+ \mu^-
+ \]
+ by
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{feynman}
+ \vertex (m1);
+ \vertex [above left=of m1] (i1) {$e^-$};
+ \vertex [below left=of m1] (i2) {$e^+$};
+ \vertex [right=of m1] (m2);
+ \vertex [above right=of m2] (f1) {$\mu^-$};
+ \vertex [below right=of m2] (f2) {$\mu^+$};
+
+ \diagram*{
+ (i1) -- [fermion] (m1) -- [fermion] (i2),
+ (f2) -- [fermion] (m2) -- [fermion] (f1),
+ (m1) -- [photon] (m2),
+ };
+ \end{feynman}
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+%\subsection{The Coulomb potential}
+%Consider an $e^- e^- \to e^- e^-$ scattering process. In the relativistic limit, we have
+%\[
+% u(\mathbf{p}) \to \sqrt{m}
+% \begin{pmatrix}
+% \xi\\ \xi
+% \end{pmatrix}.
+%\]
+%So
+%\begin{align*}
+% \bar{u}^s(\mathbf{p}') \gamma^0 u^r(\mathbf{p}) &\to 2m \delta^{r s'}\\
+% \bar{u}^s(\mathbf{p}') \gamma^i u^r(\mathbf{p}) &\to 0
+%\end{align*}
+%We now work in this non-relativistic limit, and try to get out the Coulomb potential.
+%
+%We average over initial and final spins, and we end up with an amplitude
+%\[
+% \mathcal{A} \sim \frac{i e^2 (2m)^2}{|\mathbf{p} - \mathbf{p}'|^2}.
+%\]
+%When we turn that into a potential as a function of their separation (and skipping over thousands of steps), we get
+%\[
+% U(r) = e^2 \int \frac{\d^3 \mathbf{p}}{(2\pi)^3} \frac{e^{i\mathbf{p}\cdot \mathbf{r}}}{|\mathbf{p}|^2} = \frac{e^2}{4\pi r}.
+%\]
+%So we recover the Coulomb potential.
+%
+%What happens when we look at $e^+ e^- \to e^+ e^-$? Going through the same computations, we find that we get an extra minus sign, so we get negative the potential, and thus they repel.
+
+\printindex
+\end{document}
diff --git a/books/cam/III_M/symmetries_fields_and_particles.tex b/books/cam/III_M/symmetries_fields_and_particles.tex
new file mode 100644
index 0000000000000000000000000000000000000000..03eb76cc62b1df3c658b90407888db78c074c254
--- /dev/null
+++ b/books/cam/III_M/symmetries_fields_and_particles.tex
@@ -0,0 +1,4267 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Michaelmas}
+\def\nyear {2016}
+\def\nlecturer {N.\ Dorey}
+\def\ncourse {Symmetries, Fields and Particles}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+This course introduces the theory of Lie groups and Lie algebras and their applications to high energy physics. The course begins with a brief overview of the role of symmetry in physics. After reviewing basic notions of group theory we define a Lie group as a manifold with a compatible group structure. We give the abstract definition of a Lie algebra and show that every Lie group has an associated Lie algebra corresponding to the tangent space at the identity element. Examples arising from groups of orthogonal and unitary matrices are discussed. The case of $\SU(2)$, the group of rotations in three dimensions is studied in detail. We then study the representations of Lie groups and Lie algebras. We discuss reducibility and classify the finite dimensional, irreducible representations of $\SU(2)$ and introduce the tensor product of representations. The next part of the course develops the theory of complex simple Lie algebras. We define the Killing form on a Lie algebra. We introduce the Cartan-Weyl basis and discuss the properties of roots and weights of a Lie algebra. We cover the Cartan classification of simple Lie algebras in detail. We describe the finite dimensional, irreducible representations of simple Lie algebras, illustrating the general theory for the Lie algebra of $\SU(3)$. The last part of the course discusses some physical applications. After a general discussion of symmetry in quantum mechanical systems, we review the approximate $\SU(3)$ global symmetry of the strong interactions and its consequences for the observed spectrum of hadrons. We introduce gauge symmetry and construct a gauge-invariant Lagrangian for Yang-Mills theory coupled to matter. The course ends with a brief introduction to the Standard Model of particle physics.
+
+\subsubsection*{Pre-requisites}
+Basic finite group theory, including subgroups and orbits. Special relativity and quantum theory, including orbital angular momentum theory and Pauli spin matrices. Basic ideas about manifolds, including coordinates, dimension, tangent spaces.
+}
+\tableofcontents
+
+\section{Introduction}
+In this course, we are, unsurprisingly, going to talk about symmetries. Unlike what the course name suggests, there will be relatively little discussion of fields or particles.
+
+So what is a symmetry? There are many possible definitions, but we are going to pick one that is relevant to physics.
+\begin{defi}[Symmetry]\index{symmetry}
+ A \emph{symmetry} of a physical system is a transformation of the dynamical variables which leaves the physical laws invariant.
+\end{defi}
+Symmetries are very important. As Noether's theorem tells us, every symmetry gives rise to a conserved current. But there is more to symmetry than this. It seems that the whole physical universe is governed by symmetries. We believe that the forces in the universe are given by gauge fields, and a gauge field is uniquely specified by the gauge symmetry group it works with.
+
+In general, the collection of all symmetries will form a \emph{group}.
+\begin{defi}[Group]\index{group}
+ A \emph{group} is a set $G$ of elements with a multiplication rule, obeying the axioms
+ \begin{enumerate}
+ \item For all $g_1, g_2 \in G$, we have $g_1 g_2 \in G$.\hfill (closure)
+ \item There is a (necessarily unique) element $e \in G$ such that for all $g \in G$, we have $eg = ge = g$.\hfill (identity)
+ \item For every $g \in G$, there exists some (necessarily unique) $g^{-1} \in G$ such that $gg^{-1} = g^{-1}g = e$.\hfill (inverse)
+ \item For every $g_1, g_2, g_3 \in G$, we have $g_1 (g_2 g_3) = (g_1 g_2) g_3$.\hfill (associativity)
+ \end{enumerate}
+\end{defi}
+Physically, these mean
+\begin{enumerate}
+ \item The composition of two symmetries is also a symmetry.
+ \item ``Doing nothing'' is a symmetry.
+ \item A symmetry can be ``undone''.
+ \item Composing functions is always associative.
+\end{enumerate}
+
+Note that the set of elements $G$ may be finite or infinite.
+\begin{defi}[Commutative/abelian group]\index{commutative group}\index{abelian group}
+ A group is \emph{abelian} or \emph{commutative} if $g_1 g_2 = g_2 g_1$ for all $g_1, g_2 \in G$. A group is \emph{non-abelian} if it is not abelian.
+\end{defi}
+
+In this course, we are going to focus on \emph{smooth} symmetries. These are symmetries that ``vary smoothly'' with some parameters. One familiar example is rotation, which can be described by the rotation axis and the angle of rotation. These are the symmetries that can go into Noether's theorem. These smooth symmetries form a special kind of group, known as a \emph{Lie group}.
+
+One nice thing about these smooth symmetries is that they can be studied by looking at the ``infinitesimal'' symmetries. They form a vector space, known as a \emph{Lie algebra}. This is a much simpler mathematical structure, and by reducing the study of a Lie group to its Lie algebra, we make our lives much easier. In particular, one thing we can do is to classify \emph{all} (simple) Lie algebras. It turns out this isn't too hard, as the notion of (simple) Lie algebra is very restrictive.
+
+After understanding Lie algebras very well, we will move on to study gauge theories. These are theories obtained when we require that our theories obey some sort of symmetry condition, and then magically all the interactions come up automatically.
+
+\section{Lie groups}
+\subsection{Definitions}
+A Lie group is a group and a manifold, where the group operations define smooth maps. We already know what a group is, so let's go into a bit more detail about what a manifold is.
+
+The canonical example of a manifold one might want to keep in mind is the sphere $S^2$. Having lived on Earth, we all know that near any point on Earth, $S^2$ looks like $\R^2$. But we also know that $S^2$ is \emph{not} $\R^2$. We cannot specify a point on the surface of the Earth uniquely by two numbers. Longitude/latitude might be a good start, but adding $2\pi$ to the longitude would give you the same point, and things also break down at the poles.
+
+Of course, this will not stop us from producing a map of Cambridge. One can reasonably come up with consistent coordinates just for Cambridge itself. So what is true about $S^2$ is that near each point, we can come up with some coordinate system, but we cannot do so for the whole space itself.
+
+\begin{defi}[Smooth map]\index{smooth map}
+ We say a map $f:\R^n \to \R^m$ is \emph{smooth} if all partial derivatives of all orders exist.
+\end{defi}
+
+\begin{defi}[Manifold]\index{manifold}
+ A \emph{manifold} (of dimension $n$) is a set $M$ together with the following data:
+ \begin{enumerate}
+ \item A collection $U_\alpha$ of subsets of $M$ whose union is $M$;
+ \item A collection of bijections $\varphi_\alpha: U_\alpha \to V_\alpha$, where $V_\alpha$ is an open subset of $\R^n$. These are known as \emph{charts}.
+ \end{enumerate}
+ The charts have to satisfy the following compatibility condition:
+ \begin{itemize}
+ \item For all $\alpha, \beta$, we have $\varphi_\alpha(U_\alpha \cap U_\beta)$ is open in $\R^n$, and the transition function
+ \[
+ \varphi_\alpha \circ \varphi_\beta^{-1}: \varphi_\beta(U_\alpha \cap U_\beta) \to \varphi_\alpha(U_\alpha \cap U_\beta)
+ \]
+ is smooth.
+ \end{itemize}
+ We write $(M, \varphi_\alpha)$ for the manifold if we need to specify the charts explicitly.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \draw (-0.3, 0) [fill=mgreen, opacity=0.5] circle [radius=0.4];
+ \draw (0.3, 0) [fill=mblue, opacity=0.5] circle [radius=0.4];
+
+ \node [left] at (-0.7, 0) {$U_\beta$};
+ \node [right] at (0.7, 0) {$U_\alpha$};
+
+ \begin{scope}[shift={(-4.75, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=morange, opacity=0.5] circle [radius=0.4];
+ \begin{scope}
+ \clip (1.75, 0.75) circle [radius=0.4];
+ \draw (2.35, 0.75) [fill=mred, opacity=0.5] circle [radius=0.4];
+ \end{scope}
+ \end{scope}
+
+ \draw [->] (-0.7, -0.5) -- (-2.5, -2.6) node [pos=0.5, left] {$\varphi_\beta$};
+ \draw [->] (0.7, -0.5) -- (2.5, -2.6) node [pos=0.5, right] {$\varphi_\alpha$};
+
+ \begin{scope}[shift={(1.25, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=mred, opacity=0.5] circle [radius=0.4];
+ \begin{scope}
+ \clip (1.75, 0.75) circle [radius=0.4];
+ \draw (1.15, 0.75) [fill=morange, opacity=0.5] circle [radius=0.4];
+ \end{scope}
+ \end{scope}
+ \draw [->] (-2, -3.25) -- (2, -3.25) node [pos=0.5, above] {$\varphi_\alpha \varphi_\beta^{-1}$};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+\begin{defi}[Smooth map]\index{smooth map}
+ Let $M$ be a manifold. Then a map $f: M \to \R$ is \emph{smooth} if it is smooth in each coordinate chart. Explicitly, for each chart $(U_\alpha, \varphi_\alpha)$, the composition $f \circ \varphi_\alpha^{-1}: \varphi_\alpha(U_\alpha) \to \R$ is smooth (in the usual sense).
+ \begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \draw [fill=mgreen, opacity=0.5] circle [radius=0.4];
+ \node [right] {$p$};
+ \node [circ] {};
+ \node at (0.4, 0) [right] {$U$};
+
+ \begin{scope}[shift={(-1.75, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=morange, opacity=0.5] circle [radius=0.4];
+ \node at (1.75, 0.75) [right] {$\varphi(p)$};
+ \node at (1.75, 0.75) [circ] {};
+
+ \draw [->] (4, 0.75) -- (7, 0.75) node [pos=0.5, above] {$f \circ \varphi^{-1}$};
+ \node at (8, 0.75) {$\R$};
+ \draw [->] (4, 3.5) -- (7, 1.5) node [pos=0.5, anchor = south west] {$f$};
+ \end{scope}
+
+ \draw [->] (0, -1) -- +(0, -1.3) node [pos=0.5, right] {$\varphi$};
+
+ \end{tikzpicture}
+ \end{center}
+ More generally, if $M, N$ are manifolds, we say $f: M \to N$ is smooth if for any chart $(U, \varphi)$ of $M$ and any chart $(V, \xi)$ of $N$, the composition $\xi \circ f \circ \varphi^{-1}: \varphi(U) \to \xi(V)$ is smooth.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \draw [fill=mgreen, opacity=0.5] circle [radius=0.4];
+
+ \begin{scope}[shift={(-1.75, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=morange, opacity=0.5] circle [radius=0.4];
+ \end{scope}
+
+ \draw [->] (0, -1) -- +(0, -1.3) node [pos=0.5, right] {$\varphi$};
+ \begin{scope}[shift={(5, 0)}]
+ \draw plot [smooth cycle] coordinates {(1.4, -0.4) (-0.8, -0.9) (-1.5, -0.3) (-1.6, 0.5) (-0.2, 1.3) (1.8, 0.7)};
+
+ \draw [fill=mblue, opacity=0.5] circle [radius=0.4];
+
+ \begin{scope}[shift={(-1.75, -4)}]
+ \draw (0, 0) -- (3, 0) -- (3.5, 1.5) -- (0.5, 1.5) -- cycle;
+
+ \draw (1.75, 0.75) [fill=mred, opacity=0.5] circle [radius=0.4];
+ \end{scope}
+
+ \draw [->] (0, -1) -- +(0, -1.3) node [pos=0.5, right] {$\xi$};
+ \end{scope}
+
+ \draw [->] (1, 0) -- (4, 0) node [above, pos=0.5] {$f$};
+
+ \draw [->] (1, -3.25) -- (4, -3.25) node [above, pos=0.5] {$\xi \circ f \circ \varphi^{-1}$};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+Finally, we note that if $M, N$ are manifolds of dimensions $m$ and $n$ respectively, then $M \times N$ is a manifold of dimension $m + n$, where charts $(U, \varphi: U \to \R^m), (V, \xi: V \to \R^n)$ of $M$ and $N$ respectively give us a chart $\varphi \times \xi: U \times V \to \R^m \times \R^n = \R^{m + n}$ of $M \times N$.
+
+All those definitions were made so that we can define what a Lie group is:
+
+\begin{defi}[Lie group]\index{Lie group}
+ A \emph{Lie group} is a group $G$ whose underlying set is given a manifold structure, and such that the multiplication map $m: G \times G \to G$ and inverse map $i: G \to G$ are smooth maps. We sometimes write $\mathcal{M}(G)$ for the underlying manifold of $G$.
+\end{defi}
+
+\begin{eg}
+ The unit $2$-sphere
+ \[
+ S^2 = \{(x, y, z) \in \R^3: x^2 + y^2 + z^2 = 1\}
+ \]
+ is a manifold. Indeed, we can construct a coordinate patch near $N = (0, 0, 1)$. Near this point, we have $z = \sqrt{1 - x^2 - y^2}$. This works, since near the north pole, the $z$-coordinate is always positive. In this case, $x$ and $y$ are good coordinates near the north pole.
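+
+ The same trick covers all of $S^2$: on each of the six hemispheres $\{\pm x > 0\}$, $\{\pm y > 0\}$ and $\{\pm z > 0\}$, we can solve for one coordinate in terms of the other two. For example,
+ \[
+ U_{z^+} = \{(x, y, z) \in S^2: z > 0\},\quad \varphi_{z^+}(x, y, z) = (x, y),
+ \]
+ and these six charts are smoothly compatible, exhibiting $S^2$ as a $2$-dimensional manifold.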
+
+ However, it is a fact that the $2$-sphere $S^2$ has no Lie group structure.
+\end{eg}
+
+In general, most of our manifolds will be given by subsets of Euclidean space specified by certain equations, but note that not all subsets given by equations are manifolds! It is possible that they have some singularities.
+
+\begin{defi}[Dimension of Lie group]\index{Dimension of Lie group}\index{Lie group!dimension}
+ The \emph{dimension} of a Lie group $G$ is the dimension of the underlying manifold.
+\end{defi}
+
+\begin{defi}[Subgroup]\index{subgroup}
+ A \emph{subgroup} $H$ of $G$ is a subset of $G$ that is also a group under the same operations. We write $H \leq G$ if $H$ is a subgroup of $G$.
+\end{defi}
+
+The really interesting thing is when the subgroup is also a manifold!
+\begin{defi}[Lie subgroup]\index{Lie subgroup}
+ A subgroup is a \emph{Lie subgroup} if it is also a manifold (under the induced smooth structure).
+\end{defi}
+
+%Given a Lie group $G$, we can introduce coordinates $\boldsymbol\theta = \{\theta^i\}_{i = 1, \ldots, D}$, where $D = \dim(G)$, in a patch $P$ containing $e$. We write $g(\boldsymbol\theta) \in G$ for the element of $G$ specified by the coordinates $\boldsymbol\theta$. We by convention set $g(0) = e$.
+%
+%Suppose we have two elements $g(\boldsymbol\theta), g(\boldsymbol\theta') \in G$. Suppose they are small enough so that their product is also in the patch $P$. So we an write
+%\[
+% g(\boldsymbol\theta) g(\boldsymbol\theta') = g(\boldsymbol\varphi)
+%\]
+%This corresponds to a smooth map $G \times G \to G$. In coordinates, we can write this as
+%\[
+% \varphi^i = \varphi^i(\boldsymbol\theta, \boldsymbol\theta').
+%\]
+%This map is smooth.
+%
+%Similarly, group inversion also defines a smooth map.
+
+We now look at some examples.
+\begin{eg}
+ Let $G = (\R^D, +)$ be the $D$-dimensional Euclidean space with addition as the group operation. The inverse of a vector $\mathbf{x}$ is $-\mathbf{x}$, and the identity is $\mathbf{0}$. This is obviously locally homeomorphic to $\R^D$, since it \emph{is} $\R^D$, and addition and negation are obviously smooth.
+\end{eg}
+
+This is a rather boring example, since $\R^D$ is a rather trivial manifold, and the operation is commutative. The largest source of examples we have will be matrix Lie groups.
+
+\subsection{Matrix Lie groups}
+We write $\Mat_n(\F)$ for the set of $n\times n$ matrices with entries in a field $\F$ (usually $\R$ or $\C$). Matrix multiplication is certainly associative, and has an identity, namely the identity matrix $I$. However, it doesn't always have inverses --- not all matrices are invertible! So this is not a group (instead, we call it a monoid). Thus, we are led to consider the \emph{general linear group}:
+
+\begin{defi}[General linear group]\index{general linear group}
+ The \emph{general linear group} is
+ \[
+ \GL(n, \F) = \{M \in \Mat_n(\F): \det M \not= 0\}.
+ \]
+\end{defi}
+This is closed under multiplication since the determinant is multiplicative, and matrices with non-zero determinant are invertible.
+
+\begin{defi}[Special linear group]\index{special linear group}
+ The \emph{special linear group} is
+ \[
+ \SL(n, \F) = \{M \in \Mat_n(\F): \det M =1\} \leq \GL(n, \F).
+ \]
+\end{defi}
+While these are obviously groups, less obviously, these are in fact Lie groups! In the remainder of the section, we are going to casually claim that all our favorite matrix groups are Lie groups. Proving this takes some work and machinery, and we will not bother ourselves with that too much. However, we can do it explicitly for certain special cases:
+
+\begin{eg}
+ Explicitly, we can write
+ \[
+ \SL(2, \R) = \left\{
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}:
+ a, b, c, d \in \R, ad - bc = 1\right\}.
+ \]
+ The identity is the matrix with $a = d = 1$ and $b = c = 0$. For $a \not= 0$, we have
+ \[
+ d = \frac{1 + bc}{a}.
+ \]
+ This gives us a coordinate patch for all points where $a \not= 0$ in terms of $b, c, a$, which, in particular, contains the identity $I$. By considering the case where $b \not= 0$, we obtain a separate coordinate chart, and these together cover all of $\SL(2, \R)$, as a matrix in $\SL(2, \R)$ cannot have $a = b = 0$.
+
+ Thus, we see that $\SL(2, \R)$ has dimension $3$.
+\end{eg}
+
+In general, by a similar counting argument, we have
+\begin{align*}
+ \dim (\SL(n, \R)) &= n^2 - 1& \dim (\SL(n, \C)) &= 2n^2 - 2\\
+ \dim (\GL(n, \R)) &= n^2,& \dim(\GL(n, \C)) &= 2n^2.
+\end{align*}
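+These dimensions come from the same kind of counting: $\Mat_n(\R) \cong \R^{n^2}$, the condition $\det M \not= 0$ is open and so costs no dimensions, and the single real equation $\det M = 1$ removes one dimension. Over $\C$, each entry contributes two real dimensions, and $\det M = 1$ is one complex, hence two real, conditions:
+\[
+ \dim(\SL(n, \C)) = \underbrace{2n^2}_{\dim \Mat_n(\C)} - \underbrace{2}_{\det M = 1}.
+\]
+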
+Most of the time, we are interested in matrix Lie groups, which will be subgroups of $\GL(n, \R)$.
+
+\subsubsection*{Subgroups of \texorpdfstring{$\GL(n, \R)$}{GL(n, R)}}
+\begin{lemma}
+ The \term{general linear group}\index{$\GL(n, \R)$}:
+ \[
+ \GL(n, \R) = \{M \in \Mat_n(\R): \det M \not= 0\}
+ \]
+ and \term{orthogonal group}\index{$\Or(n)$}:
+ \[
+ \Or(n) = \{M \in \GL(n, \R): M^TM = I\}
+ \]
+ are Lie groups.
+\end{lemma}
+Note that we write $\Or(n)$ instead of $\Or(n, \R)$ since orthogonal matrices make sense only when talking about real matrices.
+
+The orthogonal matrices are those that preserve the lengths of vectors. Indeed, for $\mathbf{v}\in \R^n$, we have
+\[
+ |M\mathbf{v}|^2 = \mathbf{v}^T M^T M \mathbf{v} = \mathbf{v}^T \mathbf{v} = |\mathbf{v}|^2.
+\]
+We notice something interesting. If $M \in \Or(n)$, we have
+\[
+ 1 = \det(I) = \det(M^TM) = \det(M)^2.
+\]
+So $\det(M) = \pm 1$. Now $\det$ is a continuous function, and it is easy to see that $\det$ takes both values $\pm 1$. So $\Or(n)$ has (at least) two connected components. Only one of these pieces contains the identity, namely the piece with $\det M = 1$. We might expect this to be a group in its own right, and indeed it is, because $\det$ is multiplicative.
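+
+To see explicitly that $\det$ attains $-1$, take the reflection
+\[
+ M = \mathrm{diag}(-1, 1, \ldots, 1) \in \Or(n),
+\]
+which satisfies $M^T M = I$ and $\det M = -1$, and so cannot be joined to the identity by a continuous path in $\Or(n)$.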
+\begin{lemma}
+ The \term{special orthogonal group} \term{$\SO(n)$}:
+ \[
+ \SO(n) = \{M \in \Or(n): \det M = 1\}
+ \]
+ is a Lie group.
+\end{lemma}
+
+Given a frame $\{\mathbf{v}_1, \cdots, \mathbf{v}_n\}$ in $\R^n$ (i.e.\ an ordered basis), any orthogonal matrix $M \in \Or(n)$ acts on it to give another frame $\mathbf{v}_a \in \R^n \mapsto \mathbf{v}_a' = M\mathbf{v}_a \in \R^n$.
+\begin{defi}[Volume element]\index{volume element}
+ Given a frame $\{\mathbf{v}_1, \cdots, \mathbf{v}_n\}$ in $\R^n$, the \emph{volume element} is
+ \[
+ \Omega = \varepsilon_{i_1 \ldots i_n} v_1^{i_1} v_2^{i_2} \cdots v_n^{i_n}.
+ \]
+\end{defi}
+
+By direct computation, we see that an orthogonal matrix preserves the sign of the volume element iff its determinant is $+1$, i.e.\ $M \in \SO(n)$.
+
+We now want to find a more explicit description of these Lie groups, at least in low dimensions. Often, it is helpful to classify these matrices by their eigenvalues:
+
+\begin{defi}[Eigenvalue]\index{eigenvalue}
+ A complex number $\lambda$ is an \emph{eigenvalue} of $M \in \Mat_n(\F)$ if there is some (possibly complex) vector $\mathbf{v}_\lambda \not= 0$ such that
+ \[
+ M \mathbf{v}_\lambda = \lambda \mathbf{v}_\lambda.
+ \]
+\end{defi}
+
+\begin{thm}
+ Let $M$ be a real matrix. Then $\lambda$ is an eigenvalue iff $\lambda^*$ is an eigenvalue. Moreover, if $M$ is orthogonal, then $|\lambda|^2 = 1$.
+\end{thm}
+
+\begin{proof}
+ Suppose $M \mathbf{v}_\lambda = \lambda \mathbf{v}_\lambda$. Taking the complex conjugate, and using the fact that $M$ is real, gives $M \mathbf{v}_\lambda^* = \lambda^* \mathbf{v}_\lambda^*$. So $\lambda^*$ is an eigenvalue.
+
+ Now suppose $M$ is orthogonal. Then $M\mathbf{v}_\lambda = \lambda \mathbf{v}_\lambda$ for some non-zero $\mathbf{v}_\lambda$. We take the norm to obtain $|M\mathbf{v}_\lambda| = |\lambda| |\mathbf{v}_\lambda|$. Using the fact that $|\mathbf{v}_\lambda| = |M\mathbf{v}_\lambda|$, we have $|\lambda| = 1$. So done.
+\end{proof}
+
+\begin{eg}
+ Let $M \in \SO(2)$. Since $\det M = 1$, the eigenvalues must be of the form $\lambda = e^{i\theta}, e^{-i\theta}$. In this case, we have
+ \[
+ M = M(\theta) =
+ \begin{pmatrix}
+ \cos \theta & -\sin \theta\\
+ \sin \theta & \cos \theta
+ \end{pmatrix},
+ \]
+ where $\theta$ is the rotation angle in $S^1$. Here we have
+ \[
+ M(\theta_1)M(\theta_2) = M(\theta_2) M(\theta_1) = M(\theta_1 + \theta_2).
+ \]
+ So we have $\mathcal{M}(\SO(2)) = S^1$.
+\end{eg}
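We can sanity-check this example numerically (a sketch using numpy; the angles are arbitrary sample values):

```python
import numpy as np

def M(theta):
    """The rotation matrix M(theta) in SO(2) from the example."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = 0.4, 1.1
# SO(2) is abelian, with M(t1) M(t2) = M(t1 + t2)
assert np.allclose(M(t1) @ M(t2), M(t1 + t2))
assert np.allclose(M(t1) @ M(t2), M(t2) @ M(t1))

# the eigenvalues are e^{i t1} and e^{-i t1}
eig = np.sort_complex(np.linalg.eigvals(M(t1)))
expected = np.sort_complex(np.array([np.exp(1j * t1), np.exp(-1j * t1)]))
assert np.allclose(eig, expected)
```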
+
+\begin{eg}
+ Consider $G = \SO(3)$. Suppose $M \in \SO(3)$. Since $\det M = +1$, and the eigenvalues have to come in complex conjugate pairs, we know one of them must be $1$. Then the other two must be of the form $e^{i\theta}, e^{-i\theta}$, where $\theta \in S^1$.
+
+ We pick a normalized eigenvector $\mathbf{n}$ for $\lambda = 1$. Then $M\mathbf{n} = \mathbf{n}$, and $\mathbf{n} \cdot \mathbf{n} = 1$. This is known as the \emph{axis of rotation}. Similarly, $\theta$ is the angle of rotation. We write $M(\mathbf{n}, \theta)$ for this matrix, and it turns out this is
+ \[
+ M(\mathbf{n}, \theta)_{ij} = \cos \theta \delta_{ij} + (1 - \cos \theta)n_i n_j - \sin \theta \varepsilon_{ijk} n_k.
+ \]
+ Note that this does not uniquely specify a matrix. We have
+ \[
+ M(\mathbf{n}, 2\pi - \theta) = M(-\mathbf{n}, \theta).
+ \]
+ Thus, to uniquely specify a matrix, we need to restrict the range of $\theta$ to $0 \leq \theta \leq \pi$, with the further identification that
+ \[
+ (\mathbf{n}, \pi) \sim (-\mathbf{n}, \pi).
+ \]
+ Also note that $M(\mathbf{n}, 0) = I$ for any $\mathbf{n}$.
+
+ Given such a matrix, we can produce a vector $\mathbf{w} = \theta \mathbf{n}$. Then $\mathbf{w}$ lies in the region
+ \[
+ B_3 = \{\mathbf{w} \in \R^3: \norm{\mathbf{w}} \leq \pi\} \subseteq \R^3.
+ \]
+ This has a boundary
+ \[
+ \partial B_3 = \{\mathbf{w} \in \R^3: \norm{\mathbf{w}} = \pi\} \cong S^2.
+ \]
+ Now we identify antipodal points on $\partial B_3$. Then each vector in the resulting space corresponds to exactly one element of $\SO(3)$.
+\end{eg}
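The claimed properties of $M(\mathbf{n}, \theta)$ can be checked numerically. The following sketch (using numpy; the axis and angle are arbitrary sample values) verifies that the formula lands in $\SO(3)$, fixes the axis, and obeys the identification above:

```python
import numpy as np

# the Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

def M(n, theta):
    """M(n, theta)_ij = cos(theta) d_ij + (1 - cos(theta)) n_i n_j - sin(theta) eps_ijk n_k."""
    n = np.asarray(n, dtype=float)
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(n, n)
            - np.sin(theta) * np.einsum('ijk,k->ij', eps, n))

n = np.array([1.0, 2.0, 2.0]) / 3.0           # a unit vector
theta = 0.9
R = M(n, theta)
assert np.allclose(R.T @ R, np.eye(3))        # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)      # det = +1, so R lies in SO(3)
assert np.allclose(R @ n, n)                  # n is fixed: the axis of rotation
assert np.allclose(M(n, 2 * np.pi - theta), M(-n, theta))   # the identification
assert np.allclose(M(n, 0.0), np.eye(3))      # M(n, 0) = I for any n
```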
+
+\subsubsection*{Subgroups of \texorpdfstring{$\GL(n, \C)$}{GL(n, C)}}
+We can similarly consider subgroups of $\GL(n, \C)$. Common examples include
+\begin{defi}[Unitary group]\index{unitary group}\index{$\U(n)$}
+ The \emph{unitary group} is defined by
+ \[
+ \U(n) = \{U \in \GL(n, \C): U^\dagger U = I\}.
+ \]
+\end{defi}
+These are important in physics, because unitary matrices are exactly those that preserve the norms of vectors, namely $\norm{\mathbf{v}} = \norm{U\mathbf{v}}$ for all $\mathbf{v}$.
+
+Again, if $U^\dagger U = I$, then $|\det(U)|^2 = 1$. So $\det U = e^{i\delta}$ for some $\delta \in \R$. Unlike the real case, the determinant can now take a continuous range of values, and this no longer disconnects the group. In fact, $\U(n)$ is indeed connected.
+
+\begin{defi}[Special unitary group]\index{special unitary group}\index{$\SU(n)$}
+ The \emph{special unitary group} is defined by
+ \[
+ \SU(n) = \{U \in \U(n): \det U = 1\}.
+ \]
+\end{defi}
+
+It is an easy exercise to show that
+\[
+ \dim[\U(n)] = 2n^2 - n^2 = n^2.
+\]
+For $\SU(n)$, the determinant condition imposes an additional single constraint, so we have
+\[
+ \dim[\SU(n)] = n^2 - 1.
+\]
+\begin{eg}
+ Consider the group $G = \U(1)$. This is given by
+ \[
+ \U(1) = \{z \in \C: |z| = 1\}.
+ \]
+ Therefore we have
+ \[
+ \mathcal{M}[\U(1)] = S^1.
+ \]
+ However, we also know another Lie group with underlying manifold $S^1$, namely $\SO(2)$. So are they ``the same''?
+\end{eg}
+
+\subsection{Properties of Lie groups}
+The first thing we want to consider is when two Lie groups are ``the same''. We take the obvious definition of isomorphism.
+\begin{defi}[Homomorphism of Lie groups]\index{homomorphism of Lie groups}\index{Lie group!homomorphism}
+ Let $G, H$ be Lie groups. A map $J: G \to H$ is a \emph{homomorphism} if it is smooth and for all $g_1, g_2 \in G$, we have
+ \[
+ J(g_1 g_2) = J(g_1) J(g_2).
+ \]
+ (The second condition says that $J$ is a homomorphism of groups.)
+\end{defi}
+
+\begin{defi}[Isomorphic Lie groups]\index{isomorphism of Lie groups}\index{Lie groups!isomorphism}
+ An \emph{isomorphism} of Lie groups is a bijective homomorphism whose inverse is also a homomorphism. Two Lie groups are \emph{isomorphic} if there is an isomorphism between them.
+\end{defi}
+
+\begin{eg}
+ We define the map $J: \U(1) \to \SO(2)$ by
+ \[
+ J(e^{i\theta}) =
+ \begin{pmatrix}
+ \cos \theta & -\sin \theta\\
+ \sin \theta & \cos \theta
+ \end{pmatrix} \in \SO(2).
+ \]
+ This is easily seen to be a homomorphism, and we can construct an inverse similarly.
+\end{eg}
+
+\begin{ex}
+ Show that $\mathcal{M}(\SU(2)) \cong S^3$.
+\end{ex}
+
+We now look at some words that describe manifolds. Usually, manifolds that satisfy these properties are considered nice.
+
+The first notion is the idea of compactness. The actual definition is a bit weird and takes time to get used to, but there is an equivalent characterization if the manifold is a subset of $\R^n$.
+\begin{defi}[Compact]\index{compact}
+ A manifold (or topological space) $X$ is \emph{compact} if every open cover of $X$ has a finite subcover.
+
+ If the manifold is a subspace of some $\R^n$, then it is compact iff it is closed and bounded.
+\end{defi}
+
+\begin{eg}
+ The sphere $S^2$ is compact, but the hyperboloid given by $x^2 - y^2 - z^2 = 1$ (as a subset of $\R^3$) is not.
+\end{eg}
+
+\begin{eg}
+ The orthogonal groups are compact. Recall that the definition of an orthogonal matrix requires $M^T M = I$. Since this is a system of polynomial equations in the entries, $\Or(n)$ is closed. It is also bounded, since each entry of $M$ has magnitude at most $1$.
+
+ Similarly, the special orthogonal groups are compact.
+\end{eg}
+
+\begin{eg}
+ Sometimes we want to study more exciting spaces such as Minkowski spaces. Let $n = p + q$, and consider the matrices that preserve the metric on $\R^n$ of signature $(p, q)$, namely
+ \[
+ \Or(p, q) = \{M \in \GL(n, \R): M^T \eta M = \eta\},
+ \]
+ where
+ \[
+ \eta =
+ \begin{pmatrix}
+ I_p & 0\\
+ 0 & -I_q
+ \end{pmatrix}.
+ \]
+ For $p, q$ both non-zero, this group is non-compact. For example, the matrices in the identity component of $\SO(1, 1)$ are those of the form
+ \[
+ M =
+ \begin{pmatrix}
+ \cosh\theta & \sinh \theta\\
+ \sinh \theta & \cosh \theta
+ \end{pmatrix},
+ \]
+ where $\theta \in \R$. So this space is homeomorphic to $\R$, which is not compact.
+
+ Another common example of a non-compact group is the \term{Lorentz group} $\Or(3, 1)$.
+\end{eg}
+
+Another important property is simply-connectedness.
+\begin{defi}[Simply connected]\index{simply connected}
+ A manifold $M$ is \emph{simply connected} if it is connected (there is a path between any two points), and every loop $l: S^1 \to M$ can be contracted to a point. Equivalently, any two paths between any two points can be continuously deformed into each other.
+\end{defi}
+
+\begin{eg}
+ The circle $S^1$ is not simply connected, as the loop that goes around the circle once cannot be continuously deformed to the loop that does nothing (this is non-trivial to prove).
+\end{eg}
+
+\begin{eg}
+ The $2$-sphere $S^2$ is simply connected, but the torus is not. $\SO(3)$ is also not simply connected. In the ball model of $\SO(3)$ constructed above, a non-contractible loop is given by
+ \[
+ l(\theta) =
+ \begin{cases}
+ \theta \mathbf{n} & \theta \in [0, \pi)\\
+ -(2\pi - \theta) \mathbf{n} & \theta \in [\pi, 2\pi)
+ \end{cases}
+ \]
+ This is a continuous loop because we identify antipodal points on the boundary.
+\end{eg}
+
+The failure of simply-connectedness is measured by the \emph{fundamental group}.
+\begin{defi}[Fundamental group/First homotopy group]\index{fundamental group}\index{homotopy group}\index{first homotopy group}
+ Let $M$ be a manifold, and $x_0 \in M$ be a preferred point. We define $\pi_1(M)$ to be the equivalence classes of loops starting and ending at $x_0$, where two loops are considered equivalent if they can be continuously deformed into each other.
+
+ This has a group structure, with the identity given by the ``loop'' that stays at $x_0$ all the time, and composition given by doing one after the other.
+\end{defi}
+
+\begin{eg}
+ $\pi_1(S^2) = \{0\}$ and $\pi_1(T^2) = \Z \times \Z$. We also have $\pi_1(\SO(3)) = \Z/2\Z$. We will not prove these results.
+\end{eg}
+
+
+\section{Lie algebras}
+It turns out that in general, the study of a Lie group can be greatly simplified by studying its associated \emph{Lie algebra}, which can be thought of as an infinitesimal neighbourhood of the identity in the Lie group.
+
+To get to that stage, we need to develop some theory of Lie algebra, and also of differentiation.
+
+\subsection{Lie algebras}
+We begin with a rather formal and weird definition of a Lie algebra.
+
+\begin{defi}[Lie algebra]\index{Lie algebra}
+ A \emph{Lie algebra} $\mathfrak{g}$ is a vector space (over $\R$ or $\C$) with a \term{bracket}
+ \[
+ [\ph,\ph] : \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}
+ \]
+ satisfying
+ \begin{enumerate}
+ \item $[X, Y] = -[Y, X]$ for all $X, Y \in \mathfrak{g}$ \hfill(antisymmetry)
+ \item $[\alpha X + \beta Y, Z] = \alpha [X, Z] + \beta [Y, Z]$ for all $X, Y, Z \in \mathfrak{g}$ and $\alpha, \beta \in \F$ \hfill((bi)linearity)
+ \item $[X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0$ for all $X, Y, Z \in \mathfrak{g}$.\hfill(Jacobi identity\index{Jacobi identity})
+ \end{enumerate}
+ Note that linearity in the second argument follows from linearity in the first argument and antisymmetry.
+\end{defi}
+Some (annoying) pure mathematicians will complain that we should state anti-symmetry as $[X, X] = 0$ instead, which is a stronger condition if we are working over a field of characteristic $2$, but I do not care about such fields.
+
+There isn't much one can say to motivate the Jacobi identity. It is a property that our naturally-occurring Lie algebras have, and turns out to be useful when we want to prove things about Lie algebras.
+
+\begin{eg}
+ Suppose we have a vector space $V$ with an associative product (e.g.\ a space of matrices with matrix multiplication). We can then turn $V$ into a Lie algebra by defining
+ \[
+ [X, Y] = XY - YX.
+ \]
+ We can then prove the axioms by writing out the expressions.
+\end{eg}
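Rather than writing out the expressions by hand, we can spot-check the axioms numerically on random matrices (a sketch using numpy):

```python
import numpy as np

def bracket(X, Y):
    """The commutator bracket [X, Y] = XY - YX."""
    return X @ Y - Y @ X

rng = np.random.default_rng(0)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

# antisymmetry
assert np.allclose(bracket(X, Y), -bracket(Y, X))

# (bi)linearity in the first argument
a, b = 2.0, -1.5
assert np.allclose(bracket(a * X + b * Y, Z), a * bracket(X, Z) + b * bracket(Y, Z))

# Jacobi identity
jacobi = (bracket(X, bracket(Y, Z))
          + bracket(Y, bracket(Z, X))
          + bracket(Z, bracket(X, Y)))
assert np.allclose(jacobi, 0)
```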
+
+\begin{defi}[Dimension of Lie algebra]\index{dimension of Lie algebra}\index{Lie algebra!dimension}
+ The \emph{dimension} of a Lie algebra is the dimension of the underlying vector space.
+\end{defi}
+
+Given a finite-dimensional Lie algebra, we can pick a basis $\mathcal{B}$ for $\mathfrak{g}$.
+\[
+ \mathcal{B} = \{T^a: a = 1, \cdots, \dim \mathfrak{g}\}.
+\]
+Then any $X \in \mathfrak{g}$ can be written as
+\[
+ X = X_a T^a = \sum_{a = 1}^n X_a T^a,
+\]
+where $X_a \in \F$ and $n = \dim \mathfrak{g}$.
+
+By linearity, the bracket of elements $X, Y \in \mathfrak{g}$ can be computed via
+\[
+ [X, Y] = X_a Y_b [T^a, T^b].
+\]
+In other words, the whole structure of the Lie algebra can be given by the bracket of basis vectors. We know that $[T^a, T^b]$ is again an element of $\mathfrak{g}$. So we can write
+\[
+ [T^a, T^b] = f^{ab}\!_c T^c,
+\]
+where $f^{ab}\!_c\in \F$ are the \emph{structure constants}.
+\begin{defi}[Structure constants]\index{structure constants}
+ Given a Lie algebra $\mathfrak{g}$ with a basis $\mathcal{B} = \{T^a\}$, the \emph{structure constants} are $f^{ab}\!_c$ given by
+ \[
+ [T^a, T^b] = f^{ab}\!_c T^c.
+ \]
+\end{defi}
+
+By the antisymmetry of the bracket, we know
+\begin{prop}
+ \[
+ f^{ba}\!_c = -f^{ab}\!_c.
+ \]
+\end{prop}
+
+By writing out the Jacobi identity, we obtain
+\begin{prop}
+ \[
+ f^{ab}\!_c f^{cd}\!_e + f^{da}\!_c f^{cb}\!_e + f^{bd}\!_c f^{ca}\!_e = 0.
+ \]
+\end{prop}
+
+As before, we would like to know when two Lie algebras are the same.
+\begin{defi}[Homomorphism of Lie algebras]\index{homomorphism of Lie algebras}
+ A \emph{homomorphism} of Lie algebras $\mathfrak{g}, \mathfrak{h}$ is a linear map $f: \mathfrak{g} \to \mathfrak{h}$ such that
+ \[
+ [f(X), f(Y)] = f([X, Y]).
+ \]
+\end{defi}
+
+\begin{defi}[Isomorphism of Lie algebras]\index{isomorphism of Lie algebras}\index{Lie algebra!isomorphism}
+ An \emph{isomorphism} of Lie algebras is a homomorphism with an inverse that is also a homomorphism. Two Lie algebras are \emph{isomorphic} if there is an isomorphism between them.
+\end{defi}
+
+Similar to how we can have a subgroup, we can also have a subalgebra $\mathfrak{h}$ of $\mathfrak{g}$.
+
+\begin{defi}[Subalgebra]\index{subalgebra}\index{Lie algebra!subalgebra}
+ A \emph{subalgebra} of a Lie algebra $\mathfrak{g}$ is a vector subspace that is also a Lie algebra under the bracket.
+\end{defi}
+
+Recall that in group theory, we have the stronger notion of a normal subgroup, i.e.\ a subgroup invariant under conjugation. There is an analogous notion for subalgebras.
+\begin{defi}[Ideal]\index{ideal of Lie algebra}\index{Lie algebra!ideal}\index{Lie algebra!invariant subalgebra}\index{invariant subalgebra}
+ An \emph{ideal} of a Lie algebra $\mathfrak{g}$ is a subalgebra $\mathfrak{h}$ such that $[X, Y] \in \mathfrak{h}$ for all $X \in \mathfrak{g}$ and $Y \in \mathfrak{h}$.
+\end{defi}
+
+\begin{eg}
+ Every Lie algebra $\mathfrak{g}$ has two \emph{trivial}\index{trivial ideal}\index{ideal!trivial} ideals $\mathfrak{h} = \{0\}$ and $\mathfrak{h} = \mathfrak{g}$.
+\end{eg}
+
+\begin{defi}[Derived algebra]
+ The \term{derived algebra} of a Lie algebra $\mathfrak{g}$ is
+ \[
+ \mathfrak{i} = [\mathfrak{g}, \mathfrak{g}] = \spn_\F\{[X, Y]: X, Y \in \mathfrak{g}\},
+ \]
+ where $\F = \R$ or $\C$ depending on the underlying field.
+\end{defi}
+It is clear that this is an ideal. Note that this may or may not be trivial.
+
+\begin{defi}[Center of Lie algebra]\index{center of Lie algebra}\index{Lie algebra!center}
+ The \emph{center} of a Lie algebra $\mathfrak{g}$ is given by
+ \[
+ \xi(\mathfrak{g}) = \{X \in \mathfrak{g}: [X, Y] = 0\text{ for all }Y \in \mathfrak{g}\}.
+ \]
+\end{defi}
+This is an ideal, by the Jacobi identity.
+
+\begin{defi}[Abelian Lie algebra]\index{abelian Lie algebra}\index{Lie algebra!abelian}
+ A Lie algebra $\mathfrak{g}$ is \emph{abelian} if $[X, Y] = 0$ for all $X, Y \in \mathfrak{g}$. Equivalently, if $\xi(\mathfrak{g}) = \mathfrak{g}$.
+\end{defi}
+
+\begin{defi}[Simple Lie algebra]\index{simple Lie algebra}\index{Lie algebra!simple}
+ A \emph{simple Lie algebra} is a Lie algebra $\mathfrak{g}$ that is non-abelian and possesses no non-trivial ideals.
+\end{defi}
+If $\mathfrak{g}$ is simple, then since the center is always an ideal, and it is not $\mathfrak{g}$ since $\mathfrak{g}$ is not abelian, we must have $\xi(\mathfrak{g}) = \{0\}$. On the other hand, the derived algebra is also an ideal, and it is non-zero since $\mathfrak{g}$ is non-abelian. So we must have $\mathfrak{i} = [\mathfrak{g}, \mathfrak{g}] = \mathfrak{g}$.
+
+We will later see that these are the Lie algebras on which we can define a non-degenerate invariant inner product. In fact, there is a more general class, known as the \emph{semi-simple} Lie algebras, that are exactly those for which non-degenerate invariant inner products can exist.
+
+These are important in physics because, as we will later see, to define the Lagrangian of a gauge theory, we need to have a non-degenerate invariant inner product on the Lie algebra. In other words, we need a (semi-)simple Lie algebra.
+
+
+\subsection{Differentiation}
+We are eventually going to get a Lie algebra from a Lie group. This is obtained by looking at the tangent vectors at the identity. When we have homomorphisms $f: G \to H$ of Lie groups, they are in particular smooth, and taking the derivative will give us a map from tangent vectors in $G$ to tangent vectors in $H$, which in turn restricts to a map of their Lie algebras. So we need to understand how differentiation works.
+
+Before that, we need to understand how tangent vectors work. This is completely general and can be done for manifolds which are not necessarily Lie groups. Let $M$ be a smooth manifold of dimension $D$ and $p \in M$ a point. We want to formulate a notion of a ``tangent vector'' at the point $p$. We know how we can do this if the space is $\R^n$ --- a tangent vector is just any vector in $\R^n$. By definition of a manifold, near a point $p$, the manifold looks just like $\R^n$. So we can just pretend it is $\R^n$, and use tangent vectors in $\R^n$.
+
+However, this definition of a tangent vector requires us to pick a particular coordinate chart. It would be nice to have a more ``intrinsic'' notion of vectors. Recall that in $\R^n$, if we have a function $f: \R^n \to \R$ and a tangent vector $\mathbf{v}$ at $p$, then we can ask for the directional derivative of $f$ along $\mathbf{v}$. We have a correspondence
+\[
+ \mathbf{v} \longleftrightarrow \frac{\partial}{\partial \mathbf{v}}.
+\]
+This directional derivative takes in a function and returns its derivative at a point, and is sort-of an ``intrinsic'' notion. Thus, instead of talking about $\mathbf{v}$, we will talk about the associated directional derivative $\frac{\partial}{\partial \mathbf{v}}$.
+
+It turns out the characterizing property of this directional derivative is the product rule:
+\[
+ \frac{\partial}{\partial \mathbf{v}}(fg) = f(p) \frac{\partial}{\partial \mathbf{v}} g + g(p) \frac{\partial}{\partial \mathbf{v}}f.
+\]
+So a ``directional derivative'' is a linear map from the space of smooth functions $M \to \R$ to $\R$ that satisfies the Leibniz rule.
+\begin{defi}[Tangent vector]\index{tangent vector}
+ Let $M$ be a manifold and write $C^\infty(M)$ for the vector space of smooth functions on $M$. For $p \in M$, a \emph{tangent vector} is a linear map $v: C^\infty(M) \to \R$ such that for any $f, g \in C^\infty(M)$, we have
+ \[
+ v(fg) = f(p) v(g) + v(f) g(p).
+ \]
+ It is clear that this forms a vector space, and we write $T_p M$ for the vector space of tangent vectors at $p$.
+\end{defi}
+Now of course one would be worried that this definition is too inclusive, in that we might have included things that are not genuinely directional derivatives. Fortunately, this is not the case, as the following proposition tells us.
+
+In the case where $M$ is a submanifold of $\R^n$, we can identify the tangent space with an actual linear subspace of $\R^n$. This is easily visualized when $M$ is a surface in $\R^3$, where the tangent vectors consists of the vectors in $\R^3$ ``parallel to'' the surface at the point, and in general, a ``direction'' in $M$ is also a ``direction'' in $\R^n$, and tangent vectors of $\R^n$ can be easily identified with elements of $\R^n$ in the usual way.
+
+This will be useful when we study matrix Lie groups, because this means the tangent space will consist of matrices again.
+\begin{prop}
+ Let $M$ be a manifold with local coordinates $\{x^i\}_{i = 1, \cdots, D}$ for some region $U \subseteq M$ containing $p$. Then $T_p M$ has basis
+ \[
+ \left\{\frac{\partial}{\partial x^j}\right\}_{j = 1, \cdots, D}.
+ \]
+ In particular, $\dim T_p M = \dim M$.
+\end{prop}
+This result on the dimension is extremely useful. Usually, we can manage to find a bunch of things that we know lie in the tangent space, and to show that we have found all of them, we simply count the dimensions.
+
+One way we can obtain tangent vectors is by differentiating a curve.
+\begin{defi}[Smooth curve]\index{smooth curve}
+ A \emph{smooth curve} is a smooth map $\gamma: \R \to M$. More generally, a \emph{curve} is a $C^1$ function $\R \to M$.
+\end{defi}
+Since we only want the first derivative, being $C^1$ is good enough.
+
+There are two ways we can try to define the derivative of the curve at time $t = 0 \in \R$. Using the definition of a tangent vector, to specify $\dot{\gamma}(0)$ is to tell how we can differentiate a function $f: M \to \R$ at $p = \gamma(0)$ in the direction of $\dot{\gamma}(0)$. This is easy. We define
+\[
+ \dot{\gamma}(0)(f) = \left.\frac{\d}{\d t}\right|_{t = 0} f(\gamma(t)) \in \R.
+\]
+If this seems too abstract, we can also do it in local coordinates.
+
+We introduce some coordinates $\{x^i\}$ near $p \in M$, chosen so that $p$ corresponds to the origin. We then refer to $\gamma$ by coordinates (at least near $p$), by
+\[
+ \gamma: t \in \R \mapsto \{x^i(t) \in \R: i = 1, \cdots, D\}.
+\]
+By the smoothness condition, we know $x^i(t)$ is differentiable, with $x^i(0) = 0$. Then the \emph{tangent vector} of the curve $\gamma$ at $p$ is
+\[
+ v_\gamma = \dot{x}^i(0) \frac{\partial}{\partial x^i} \in T_p (M),\quad \dot{x}^i(t) = \frac{\d x^i}{\d t}.
+\]
+It follows from the chain rule that this is exactly the same thing as what we described before.
+
+More generally, we can define the derivative of a map between manifolds.
+\begin{defi}[Derivative]\index{derivative}
+ Let $f: M \to N$ be a map between manifolds. The \emph{derivative} of $f$ at $p \in M$ is the linear map
+ \[
+ \D f_p: T_p M \to T_{f(p)} N
+ \]
+ given by the formula
+ \[
+ (\D f_p)(v)(g) = v(g \circ f)
+ \]
+ for $v \in T_p M$ and $g \in C^\infty(N)$.
+\end{defi}
+This will be useful later when we want to get a map of Lie algebras from a map of Lie groups.
+
+\subsection{Lie algebras from Lie groups}
+We now try to get a Lie algebra from a Lie group $G$, by considering $T_e(G)$.
+
+\begin{thm}
+ The tangent space of a Lie group $G$ at the identity naturally admits a Lie bracket
+ \[
+ [\ph, \ph]: T_e G \times T_e G \to T_e G
+ \]
+ such that
+ \[
+ \mathcal{L}(G) = (T_e G, [\ph, \ph])
+ \]
+ is a Lie algebra.
+\end{thm}
+
+\begin{defi}[Lie algebra of a Lie group]\index{Lie algebra of a Lie group}
+ Let $G$ be a Lie group. The \emph{Lie algebra} of $G$, written $\mathcal{L}(G)$ or $\mathfrak{g}$, is the tangent space $T_e G$ under the natural Lie bracket.
+\end{defi}
+The general convention is that if the name of a Lie group is in upper case letters, then the corresponding Lie algebra is the same name with lower case letters in fraktur font. For example, the Lie algebra of $\SO(n)$ is $\so(n)$.
+
+\begin{proof}
+ We will only prove it for the case of a matrix Lie group $G \subseteq \Mat_n(\F)$. Then $T_I G$ can be naturally identified as a subspace of $\Mat_n(\F)$. There is then an obvious candidate for the Lie bracket --- the actual commutator:
+ \[
+ [X, Y] = XY - YX.
+ \]
+ The basic axioms of a Lie algebra can be easily (but painfully) checked.
+
+ However, we are not yet done. We have to check that if we take the bracket of two elements in $T_I(G)$, then it still stays within $T_I(G)$. This will be done by producing a curve in $G$ whose derivative at $0$ is the commutator $[X, Y]$.
+
+ In general, let $\gamma$ be a smooth curve in $G$ with $\gamma(0) = I$. Then we can Taylor expand
+ \[
+ \gamma(t) = I + \dot{\gamma}(0) t + \frac{1}{2} \ddot{\gamma}(0) t^2 + O(t^3).
+ \]
+ Now given $X, Y \in T_e G$, we take curves $\gamma_1, \gamma_2$ such that $\dot{\gamma}_1(0) = X$ and $\dot{\gamma}_2(0) = Y$. Consider the curve given by
+ \[
+ \gamma(t) = \gamma_1^{-1}(t) \gamma_2^{-1}(t) \gamma_1(t)\gamma_2(t) \in G.
+ \]
+ We can Taylor expand this to find that
+ \[
+ \gamma(t) = I + [X, Y] t^2 + O(t^3).
+ \]
+ This isn't too helpful, as $[X, Y]$ is not the coefficient of $t$. We now do the slightly dodgy step, where we consider the curve
+ \[
+ \tilde{\gamma}(t) = \gamma(\sqrt{t}) = I + [X, Y] t + O(t^{3/2}).
+ \]
+ Now this is only defined for $t \geq 0$, but it is good enough, and we see that its derivative at $t = 0$ is $[X, Y]$. So the commutator is in $T_I(G)$. So we know that $\mathcal{L}(G)$ is a Lie algebra.
+\end{proof}
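We can check the key expansion in this proof numerically. Below is a sketch using numpy, where we take the concrete curves $\gamma_1(t) = I + tX$ and $\gamma_2(t) = I + tY$ (our own choice of curves with the right derivatives at $0$, valid in $\GL(n, \R)$ for small $t$):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

def gamma(t):
    """gamma_1(t)^{-1} gamma_2(t)^{-1} gamma_1(t) gamma_2(t),
    with gamma_1(t) = I + tX and gamma_2(t) = I + tY."""
    g1 = np.eye(3) + t * X
    g2 = np.eye(3) + t * Y
    return np.linalg.inv(g1) @ np.linalg.inv(g2) @ g1 @ g2

t = 1e-4
# the linear term cancels, and the t^2 coefficient is the commutator [X, Y]
approx = (gamma(t) - np.eye(3)) / t ** 2
assert np.allclose(approx, X @ Y - Y @ X, atol=1e-2)
```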
+
+\begin{eg}
+ Let $G = \GL(n, \F)$, where $\F = \R$ or $\C$. Then $\mathcal{L}(\GL(n, \F)) = \gl(n, \F) = \Mat_n(\F)$, since $\GL(n, \F)$ is an open subset of $\Mat_n(\F)$, and so its tangent space at the identity is all of $\Mat_n(\F)$.
+
+ More generally, for a vector space $V$, say of dimension $n$, we can consider the group of invertible linear maps $V \to V$, written $\GL(V)$. By picking a basis of $V$, we can construct an isomorphism $\GL(V) \cong \GL(n, \F)$, and this gives us a smooth structure on $\GL(V)$ (this does not depend on the basis chosen). Then the Lie algebra $\gl(V)$ is the collection of all linear maps $V \to V$.
+\end{eg}
+
+\begin{eg}
+ If $G = \SO(2)$, then the curves are of the form
+ \[
+ g(t) = M(\theta(t)) =
+ \begin{pmatrix}
+ \cos \theta(t) & -\sin \theta (t)\\
+ \sin \theta(t) & \cos \theta(t)
+ \end{pmatrix} \in \SO(2).
+ \]
+ So we have
+ \[
+ \dot{g}(0) =
+ \begin{pmatrix}
+ 0 & -1\\
+ 1 & 0
+ \end{pmatrix} \dot{\theta}(0).
+ \]
+ Since the Lie algebra has dimension $1$, these are all the matrices in the Lie algebra. So the Lie algebra is given by
+ \[
+ \so(2) = \left\{
+ \begin{pmatrix}
+ 0 & -c\\
+ c & 0
+ \end{pmatrix}, c \in \R
+ \right\}.
+ \]
+\end{eg}
+
+\begin{eg}
+ More generally, suppose $G = \SO(n)$, and we have a path $R(t) \in \SO(n)$.
+
+ By definition, we have
+ \[
+ R^T(t) R(t) = I.
+ \]
+ Differentiating gives
+ \[
+ \dot{R}^T(t) R(t) + R^T(t) \dot{R}(t) = 0.
+ \]
+ for all $t \in \R$. Evaluating at $t = 0$, and noting that $R(0) = I$, we have
+ \[
+ X^T + X = 0,
+ \]
+ where $X = \dot{R}(0)$ is a tangent vector. There are no further constraints from demanding that $\det R = +1$, since this is obeyed anyway for any matrix in $\Or(n)$ near $I$.
+
+ By dimension counting, we know the antisymmetric matrices are exactly the matrices in $\mathcal{L}(\Or(n))$ or $\mathcal{L}(\SO(n))$. So we have
+ \[
+ \ort(n) = \so(n) = \{X \in \Mat_n(\R): X^T = -X\}.
+ \]
+\end{eg}
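As a quick numerical illustration (using numpy; the particular antisymmetric matrix is an arbitrary choice):

```python
import numpy as np

X = np.array([[ 0.0, -2.0,  1.0],
              [ 2.0,  0.0,  3.0],
              [-1.0, -3.0,  0.0]])          # an antisymmetric matrix: X^T = -X
assert np.allclose(X.T, -X)

# R(t) = I + tX is orthogonal to first order:
# R^T R = I + t (X^T + X) + O(t^2) = I + O(t^2)
t = 1e-6
R = np.eye(3) + t * X
assert np.allclose(R.T @ R, np.eye(3), atol=1e-10)

# dimension count: antisymmetric n x n matrices have n(n-1)/2 independent entries
n = 3
assert n * (n - 1) // 2 == 3                # matches dim SO(3) = 3
```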
+
+\begin{eg}
+ Consider $G = \SU(n)$. Suppose we have a path $U(t) \in \SU(n)$, with $U(0) = I$. Then we have
+ \[
+ U^\dagger (t) U(t) = I.
+ \]
+ Then again by differentiation, we obtain
+ \[
+ Z^\dagger + Z = 0,
+ \]
+ where $Z = \dot{U}(0) \in \su(n)$. So we must have
+ \[
+ \su(n) \subset \{Z \in \Mat_n(\C): Z^\dagger = -Z\}.
+ \]
+ What does the condition $\det U(t) = 1$ give us? We can do a Taylor expansion by
+ \[
+ \det U(t) = 1 + \tr Z \cdot t + O(t^2).
+ \]
+ So requiring that $\det U(t) = 1$ gives the condition
+ \[
+ \tr Z = 0.
+ \]
+ By dimension counting, we know traceless anti-Hermitian matrices are all the elements in the Lie algebra. So we have
+ \[
+ \su(n) = \{Z \in \Mat_n(\C): Z^\dagger = -Z, \tr Z = 0\}.
+ \]
+\end{eg}
+
+\begin{eg}
+ We look at $\SU(2)$ in detail. We know that $\su(2)$ is the $2 \times 2$ traceless anti-Hermitian matrices.
+
+ A basis can be built from the \term{Pauli matrices} $\sigma_j$, for $j = 1, 2, 3$, which satisfy
+ \[
+ \sigma_i \sigma_j = \delta_{ij}I + i \varepsilon_{ijk} \sigma_k.
+ \]
+ They can be written explicitly as
+ \[
+ \sigma_1 =
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix},\quad
+ \sigma_2 =
+ \begin{pmatrix}
+ 0 & -i\\
+ i & 0
+ \end{pmatrix},\quad
+ \sigma_3 =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix}.
+ \]
+ One can check manually that the generators for the Lie algebra are given by
+ \[
+ T^a = -\frac{1}{2} i \sigma_a.
+ \]
+ Indeed, each $T^a$ is in $\su(2)$, and they are independent. Since we know $\dim \su(2) = 3$, these must generate everything.
+
+ We have
+ \[
+ [T^a, T^b] = -\frac{1}{4}[\sigma_a, \sigma_b] = -\frac{1}{2} i \varepsilon_{abc} \sigma_c = f^{ab}\!_c T^c,
+ \]
+ where the structure constants are
+ \[
+ f^{ab}\!_c = \varepsilon_{abc}.
+ \]
+\end{eg}
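All of these identities can be verified numerically. Here is a sketch using numpy:

```python
import numpy as np

# the Pauli matrices sigma_1, sigma_2, sigma_3
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

# the Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# sigma_i sigma_j = delta_ij I + i eps_ijk sigma_k
for i in range(3):
    for j in range(3):
        rhs = (i == j) * np.eye(2) + 1j * np.einsum('k,kab->ab', eps[i, j], sigma)
        assert np.allclose(sigma[i] @ sigma[j], rhs)

# T^a = -(i/2) sigma_a is traceless and anti-Hermitian, with [T^a, T^b] = eps_abc T^c
T = -0.5j * sigma
for a in range(3):
    assert np.isclose(np.trace(T[a]), 0)
    assert np.allclose(T[a].conj().T, -T[a])
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        assert np.allclose(comm, np.einsum('c,cab->ab', eps[a, b], T))
```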
+
+\begin{eg}
+ Take $G = \SO(3)$. Then $\so(3)$ is the space of $3 \times 3$ real anti-symmetric matrices, which one can manually check are generated by
+ \[
+ \tilde{T}^1 =
+ \begin{pmatrix}
+ 0 & 0 & 0\\
+ 0 & 0 & -1\\
+ 0 & 1 & 0
+ \end{pmatrix},\quad
+ \tilde{T}^2 =
+ \begin{pmatrix}
+ 0 & 0 & 1\\
+ 0 & 0 & 0\\
+ -1 & 0 & 0
+ \end{pmatrix},\quad
+ \tilde{T}^3 =
+ \begin{pmatrix}
+ 0 & -1 & 0\\
+ 1 & 0 & 0\\
+ 0 & 0 & 0
+ \end{pmatrix}.
+ \]
+ We then have
+ \[
+ (\tilde{T}^a)_{bc} = -\varepsilon_{abc}.
+ \]
+ The brackets are then
+ \[
+ [\tilde{T}^a, \tilde{T}^b] = f^{ab}\!_c \tilde{T}^c,
+ \]
+ where the structure constants are
+ \[
+ f^{ab}\!_{c} = \varepsilon_{abc}.
+ \]
+\end{eg}
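As a sanity check of this example, the following numpy sketch verifies that the generators $(\tilde{T}^a)_{bc} = -\varepsilon_{abc}$ are antisymmetric and satisfy $[\tilde{T}^a, \tilde{T}^b] = \varepsilon_{abc} \tilde{T}^c$:

```python
import numpy as np

# the Levi-Civita symbol eps_abc
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

T = -eps                    # (T^a)_bc = -eps_abc, the three generators above
for a in range(3):
    assert np.allclose(T[a].T, -T[a])      # each generator is antisymmetric
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        # [T^a, T^b] = eps_abc T^c, i.e. f^{ab}_c = eps_abc
        assert np.allclose(comm, np.einsum('c,cde->de', eps[a, b], T))
```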
+Note that the structure constants are the same! Since the structure constants completely determine the brackets of the Lie algebra, if the structure constants are the same, then the Lie algebras are isomorphic. Of course, the structure constants depend on which basis we choose. So the real statement is that if there are some bases in which the structure constants are equal, then the Lie algebras are isomorphic.
+
+So we get that $\so(3) \cong \su(2)$, but $\SO(3)$ is \emph{not} isomorphic to $\SU(2)$. Indeed, the underlying manifold of $\SU(2)$ is the $3$-sphere, but the underlying manifold of $\SO(3)$ has a fancier construction. They are not even homeomorphic, since $\SU(2)$ is simply connected, but $\SO(3)$ is not. More precisely, we have
+\begin{align*}
+ \pi_1(\SO(3)) &= \Z/2\Z\\
+ \pi_1(\SU(2)) &= \{0\}.
+\end{align*}
+So we see that we don't have a perfect correspondence between Lie algebras and Lie groups. However, usually, two Lie groups with the same Lie algebra have some covering relation. For example, in this case we have
+\[
+ \SO(3) = \frac{\SU(2)}{\Z/2\Z},
+\]
+where
+\[
+ \Z/2\Z = \{I, -I\} \subseteq \SU(2)
+\]
+is the \emph{center} of $\SU(2)$.
+
+We can explicitly construct this bijection as follows. We define the map $d: \SU(2) \to \SO(3)$ by
+\[
+ d(A)_{ij} = \frac{1}{2} \tr (\sigma_i A\sigma_j A^\dagger) \in \SO(3).
+\]
+This is globally a 2-to-1 map. It is easy to see that
+\[
+ d(A) = d(-A),
+\]
+and conversely if $d(A) = d(B)$, then $A = \pm B$. By the first isomorphism theorem, this gives an isomorphism
+\[
+ \SO(3) = \frac{\SU(2)}{\Z/2\Z},
+\]
+where $\Z/2\Z = \{I, -I\}$ is the center of $\SU(2)$.
+
+Geometrically, we know $\mathcal{M}(\SU(2)) \cong S^3$. Then the manifold of $\SO(3)$ is obtained by identifying antipodal points of $S^3$.
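We can verify the properties of the map $d$ numerically. In the sketch below (using numpy), the particular $\SU(2)$ element is an arbitrary choice of the form $\cos t\, I + i \sin t\, \mathbf{n} \cdot \boldsymbol{\sigma}$ with $\mathbf{n}$ a unit vector:

```python
import numpy as np

# the Pauli matrices
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def d(A):
    """The double-cover map d(A)_ij = (1/2) tr(sigma_i A sigma_j A^dagger)."""
    return np.array([[0.5 * np.trace(sigma[i] @ A @ sigma[j] @ A.conj().T)
                      for j in range(3)] for i in range(3)]).real

# an element of SU(2): A = cos(t) I + i sin(t) n . sigma
t = 0.6
n = np.array([2.0, 1.0, 2.0]) / 3.0
A = np.cos(t) * np.eye(2) + 1j * np.sin(t) * np.einsum('k,kab->ab', n, sigma)
assert np.allclose(A.conj().T @ A, np.eye(2))   # A is unitary
assert np.isclose(np.linalg.det(A), 1.0)        # with det 1, so A lies in SU(2)

R = d(A)
assert np.allclose(R.T @ R, np.eye(3))          # d(A) is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)        # in fact d(A) lies in SO(3)
assert np.allclose(d(-A), R)                    # A and -A have the same image
```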
+
+\subsection{The exponential map}
+So far, we have been talking about vectors in the tangent space of the identity $e$. It turns out that the group structure means this tells us about the tangent space of all points. To see this, we make the following definition:
+
+\begin{defi}[Left and right translation]\index{left translation}\index{right translation}\index{translation}
+ For each $h \in G$, we define the \emph{left and right translation maps}
+ \begin{align*}
+ L_h: G &\to G\\
+ g &\mapsto hg,\\
+ R_h: G &\to G\\
+ g &\mapsto gh.
+ \end{align*}
+ These maps are bijections, and in fact diffeomorphisms (i.e.\ smooth maps with smooth inverses), because they have smooth inverses $L_{h^{-1}}$ and $R_{h^{-1}}$ respectively.
+\end{defi}
+In general, there is no reason to prefer left translation over right, or vice versa. By convention, we will mostly talk about left translation, but the results work equally well for right translation.
+
+Then since $L_h(e) = h$, the derivative $\D_e L_h$ gives us a linear isomorphism between $T_e G$ and $T_h G$, with inverse given by $\D_h L_{h^{-1}}$. Then in particular, if we are given a single tangent vector $X \in \mathcal{L}(G)$, we obtain a tangent vector at all points in $G$, i.e.\ a vector field.
+
+\begin{defi}[Vector field]\index{vector field}
+ A \emph{vector field} $V$ of $G$ specifies a tangent vector
+ \[
+ V(g) \in T_gG
+ \]
+ at each point $g \in G$. Suppose we can pick coordinates $\{x^i\}$ on some subset of $G$, and write
+ \[
+ V(g) = v^i (g) \frac{\partial}{\partial x^i} \in T_g G.
+ \]
+ The vector field is \emph{smooth} if the components $v^i(g) \in \R$ are smooth for any coordinate chart.
+\end{defi}
+
+As promised, given any $X \in T_eG$, we can define a vector field by using $L_g^* := \D_e L_g$ to move this to all places in the world. More precisely, we can define
+\[
+ V(g) = L_g^*(X).
+\]
+This has the interesting property that if $X$ is non-zero, then $V$ is non-zero everywhere, because $L_g^*$ is a linear isomorphism. So we found that
+
+\begin{prop}
+ Let $G$ be a Lie group of dimension $> 0$. Then $G$ has a nowhere-vanishing vector field.
+\end{prop}
+This might seem like a useless thing to know, but it tells us that certain manifolds \emph{cannot} be made into Lie groups.
+
+\begin{thm}[Poincar\'e--Hopf theorem]\index{Poincar\'e--Hopf theorem}
+ Let $M$ be a compact manifold. If $M$ has non-zero Euler characteristic, then any vector field on $M$ has a zero.
+\end{thm}
+The Poincar\'e--Hopf theorem in fact tells us how to count the zeroes, but we will not go into that. We will neither prove this, nor use it for anything useful. But in particular, it has the following immediate corollary:
+\begin{thm}[Hairy ball theorem]\index{hairy ball theorem}
+ Any smooth vector field on $S^2$ has a zero. More generally, any smooth vector field on $S^{2n}$ has a zero.
+\end{thm}
+Thus, it follows that $S^{2n}$ can never be a Lie group. In fact, the full statement of the Poincar\'e--Hopf theorem implies that if we have a compact Lie group of dimension $2$, then it must be the torus! (The compact $2$-dimensional manifolds have long been classified, and we can check that only the torus works.)
+
+That was all just for our own amusement. We go back to serious work. What does this left-translation map look like when we have a matrix Lie group $G \subseteq \Mat_n(\F)$? For all $h \in G$ and $X \in \mathcal{L} (G)$, we can represent $X$ as a matrix in $\Mat_n(\F)$. We then have a concrete representation of the left translation by
+\[
+ L_h^*X = hX \in T_hG.
+\]
+Indeed, $h$ acts on $G$ by left multiplication, which is a linear map if we view $G$ as a subset of the vector space $\Mat_n(\F)$. Then we just note that the derivative of any linear map is ``itself'', and the result follows.
+
+Now we may ask ourselves the question --- given a tangent vector $X \in T_eG$, can we find a path that ``always'' points in the direction $X$? More concretely, we want to find a path $\gamma: \R \to G$ such that
+\[
+ \dot{\gamma}(t) = L_{\gamma(t)}^* X.
+\]
+Here we are using $L_{\gamma(t)}$ to identify $T_{\gamma(t)}G$ with $T_e G = \mathcal{L}(G)$. In the case of a matrix Lie group, this just says that
+\[
+ \dot{\gamma}(t) = \gamma(t) X.
+\]
+We also specify the boundary condition $\gamma(0) = e$.
+
+Now this is just an ODE, and by general theory of ODE's, we know a solution always exists and is unique. Even better, in the case of a matrix Lie group, there is a concrete construction of this curve.
+\begin{defi}[Exponential]\index{exponential}
+ Let $M \in \Mat_n(\F)$ be a matrix. The \emph{exponential} is defined by
+ \[
+ \exp(M) = \sum_{\ell = 0}^\infty \frac{1}{\ell!} M^\ell \in \Mat_n(\F).
+ \]
+\end{defi}
+The convergence properties of these series are very good, just like our usual exponential.
+
+\begin{thm}
+ For any matrix Lie group $G$, the map $\exp$ restricts to a map $\mathcal{L}(G) \to G$.
+\end{thm}
+
+\begin{proof}
+ We will not prove this, but on the first example sheet, we will prove this manually for $G = \SU(n)$.
+\end{proof}
+
+We now let
+\[
+ g(t) = \exp(tX).
+\]
+We claim that this is the curve we were looking for.
+
+To check that it satisfies the desired properties, we simply have to compute
+\[
+ g(0) = \exp(0) = I,
+\]
+and also
+\[
+ \frac{\d g(t)}{\d t} = \sum_{\ell = 1}^\infty \frac{1}{(\ell - 1)!} t^{\ell - 1} X^\ell = \exp(tX) X = g(t) X.
+\]
+So we are done.
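+We can also check this numerically. The sketch below (the helper \texttt{mexp} is our own truncation of the defining series, not a library routine) verifies that $g(t) = \exp(tX)$ satisfies $g(0) = I$ and $\dot{g}(t) = g(t) X$, approximating the derivative by a central finite difference:

```python
import numpy as np

def mexp(M, terms=40):
    """Matrix exponential via its defining power series (truncated)."""
    out = term = np.eye(len(M), dtype=complex)
    for l in range(1, terms):
        term = term @ M / l
        out = out + term
    return out

# any matrix works as the tangent vector X for this check
X = np.array([[0.0, 1.0], [-1.0, 0.5]], dtype=complex)
g = lambda t: mexp(t * X)

t, h = 0.8, 1e-6
deriv = (g(t + h) - g(t - h)) / (2 * h)          # central finite difference
assert np.allclose(g(0.0), np.eye(2))            # g(0) = I
assert np.allclose(deriv, g(t) @ X, atol=1e-6)   # g'(t) = g(t) X
```
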
+
+We now consider the set
+\[
+ S_{X} = \{\exp(tX): t \in \R\}.
+\]
+This is an abelian Lie subgroup of $G$ with multiplication given by
+\[
+ \exp(tX) \exp(sX) = \exp((t + s)X)
+\]
+by the usual proof. These are known as \term{one-parameter subgroups}.
+
+Unfortunately, it is not true in general that
+\[
+ \exp(X) \exp(Y) = \exp(X + Y),
+\]
+since the usual proof assumes that $X$ and $Y$ commute. Instead, what we've got is the \term{Baker--Campbell--Hausdorff formula}.
+\begin{thm}[Baker--Campbell--Hausdorff formula]
+ We have
+ \[
+ \exp(X) \exp(Y) = \exp\!\left(\!X + Y + \frac{1}{2}[X, Y] + \frac{1}{12}([X, [X, Y]] - [Y, [X, Y]]) + \cdots\!\right).
+ \]
+\end{thm}
+It is possible to find the general formula for all the terms, but it is messy. We will not prove this.
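+Although we will not prove the formula, we can at least test it numerically to the order shown. For small $X$ and $Y$, the omitted higher brackets are negligible, and the truncated formula agrees with $\exp(X)\exp(Y)$ while the naive $\exp(X + Y)$ does not (the helpers \texttt{mexp} and \texttt{br} below are ours):

```python
import numpy as np

def mexp(M, terms=30):
    out = term = np.eye(len(M))
    for l in range(1, terms):
        term = term @ M / l
        out = out + term
    return out

br = lambda A, B: A @ B - B @ A             # the Lie bracket [A, B]

eps = 0.01                                  # small, so higher brackets are negligible
X = eps * np.array([[0.0, 1.0], [0.0, 0.0]])
Y = eps * np.array([[0.0, 0.0], [1.0, 0.0]])

lhs = mexp(X) @ mexp(Y)
bch = X + Y + br(X, Y) / 2 + (br(X, br(X, Y)) - br(Y, br(X, Y))) / 12
assert np.allclose(lhs, mexp(bch), atol=1e-8)        # agrees to the order kept
assert not np.allclose(lhs, mexp(X + Y), atol=1e-6)  # naive exp(X + Y) does not
```
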
+
+By the inverse function theorem, we know the map $\exp$ is locally bijective. So we know $\mathcal{L}(G)$ completely determines $G$ in some neighbourhood of $e$. However, $\exp$ is not \emph{globally} bijective. Indeed, we already know that the Lie algebra doesn't completely determine the Lie group, as $\SO(3)$ and $\SU(2)$ have the same Lie algebra but are different Lie groups.
+
+In general, $\exp$ can fail to be bijective in two ways. If $G$ is not connected, then $\exp$ cannot be surjective, since by continuity, the image of $\exp$ must be connected.
+
+\begin{eg}
+ Consider the groups $\Or(n)$ and $\SO(n)$. Then the Lie algebra of $\Or(n)$ is
+ \[
+ \ort(n) = \{X \in \Mat_n(\R): X + X^T = 0\}.
+ \]
+ So if $X \in \ort(n)$, then $\tr X = 0$. Then we have
+ \[
+ \det (\exp(X)) = \exp(\tr X) = \exp(0) = +1.
+ \]
+ So any matrix in the image of $\exp$ has determinant $+1$, and hence can only lie inside $\SO(n)$. It turns out that the image of $\exp$ is indeed $\SO(n)$.
+\end{eg}
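+The identity $\det(\exp(X)) = \exp(\tr X)$ used in the example is easy to test numerically. The sketch below (with our own series-based \texttt{mexp}) draws a random antisymmetric matrix and checks that its exponential has determinant $+1$:

```python
import numpy as np

def mexp(M, terms=40):
    out = term = np.eye(len(M))
    for l in range(1, terms):
        term = term @ M / l
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = A - A.T                                      # X + X^T = 0, so X is in o(3)
assert np.isclose(np.trace(X), 0.0)              # antisymmetry forces tr X = 0
assert np.isclose(np.linalg.det(mexp(X)), 1.0)   # det(exp X) = exp(tr X) = +1
```
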
+
+More generally, we have
+\begin{prop}
+ Let $G$ be a Lie group, and $\mathfrak{g}$ be its Lie algebra. Then the image of $\mathfrak{g}$ under $\exp$ is the connected component of $e$.
+\end{prop}
+
+On the other hand, $\exp$ can also fail to be injective. This happens when $G$ has a $\U(1)$ subgroup.
+\begin{eg}
+ Let $G = \U(1)$. Then
+ \[
+ \mathfrak{u}(1) = \{ix: x \in \R\}.
+ \]
+ We then have
+ \[
+ \exp(ix) = e^{ix}.
+ \]
+ This is certainly not injective. In particular, we have
+ \[
+ \exp(ix) = \exp(i(x + 2\pi))
+ \]
+ for any $x$.
+\end{eg}
+
+\section{Representations of Lie algebras}
+\subsection{Representations of Lie groups and algebras}
+So far, we have just talked about Lie groups and Lie algebras abstractly. But we know these groups don't just sit there doing nothing. They \emph{act} on things. For example, the group $\GL(n,\R)$ acts on the vector space $\R^n$ in the obvious way. In general, the action of a group on a vector space is known as a \emph{representation}.
+
+\begin{defi}[Representation of group]\index{representation!group}\index{group!representation}
+ Let $G$ be a group and $V$ be a (finite-dimensional) vector space over a field $\F$. A \emph{representation} of $G$ on $V$ is given by specifying invertible linear maps $D(g): V \to V$ (i.e.\ $D(g) \in \GL(V)$) for each $g \in G$ such that
+ \[
+ D(gh) = D(g)D(h)
+ \]
+ for all $g, h \in G$. In the case where $G$ is a Lie group and $\F = \R$ or $\C$, we require that the map $D: G \to \GL(V)$ is smooth.
+
+ The space $V$ is known as the \emph{representation space}, and we often write the representation as the pair $(V, D)$.
+\end{defi}
+Here if we pick a basis $\{e_1, \cdots, e_n\}$ for $V$, then we can identify $\GL(V)$ with $\GL(n, \F)$, which has a canonical smooth structure when $\F = \R$ or $\C$. This smooth structure does not depend on the basis chosen.
+
+In general, the map $D$ need not be injective or surjective.
+
+\begin{prop}
+ Let $D$ be a representation of a group $G$. Then $D(e) = I$ and $D(g^{-1}) = D(g)^{-1}$.
+\end{prop}
+
+\begin{proof}
+ We have
+ \[
+ D(e) = D(ee) = D(e) D(e).
+ \]
+ Since $D(e)$ is invertible, multiplying by the inverse gives
+ \[
+ D(e) = I.
+ \]
+ Similarly, we have
+ \[
+ D(g) D(g^{-1}) = D(gg^{-1})= D(e) = I.
+ \]
+ So it follows that $D(g)^{-1} = D(g^{-1})$.
+\end{proof}
+
+Now why do we care about representations? Mathematically, we can learn a lot about a group in terms of its possible representations. However, from a physical point of view, knowing about representations of a group is also very important. When studying field theory, our fields take value in a fixed vector space $V$, and when we change coordinates, our field will ``transform accordingly'' according to some rules. For example, we say scalar fields ``don't transform'', but, say, the electromagnetic field tensor transforms ``as a $2$-tensor''.
+
+We can describe the spacetime symmetries by a group $G$, so that specifying a ``change of coordinates'' is equivalent to giving an element of $G$. For example, in special relativity, changing coordinates corresponds to giving an element of the Lorentz group $\Or(3, 1)$.
+
+Now if we want to say how our objects in $V$ transform when we change coordinates, this is exactly the same as specifying a representation of $G$ on $V$! So understanding what representations are available lets us know what kinds of fields we can have.
+
+The problem, however, is that representations of Lie groups are very hard. Lie groups are very big geometric structures with a lot of internal complexity. So instead, we might try to find representations of their Lie algebras instead.
+\begin{defi}[Representation of Lie algebra]\index{representation!Lie algebra}
+ Let $\mathfrak{g}$ be a Lie algebra. A \emph{representation} $\rho$ of $\mathfrak{g}$ on a vector space $V$ is a collection of linear maps
+ \[
+ \rho(X) \in \gl(V),
+ \]
+ for each $X \in \mathfrak{g}$, i.e.\ $\rho(X): V \to V$ is a linear map, not necessarily invertible. These are required to satisfy the conditions
+ \[
+ [\rho(X_1), \rho(X_2)] = \rho([X_1, X_2])
+ \]
+ and
+ \[
+ \rho(\alpha X_1 + \beta X_2) = \alpha \rho(X_1) + \beta \rho(X_2).
+ \]
+ The vector space $V$ is known as the \emph{representation space}. Similarly, we often write the representation as $(V, \rho)$.
+\end{defi}
+Note that it is possible to talk about a complex representation of a real Lie algebra, because any complex Lie algebra (such as $\gl(V)$ for a complex vector space $V$) can be thought of as a real Lie algebra by ``forgetting'' that we can multiply by complex numbers, and indeed this is often what we care about.
+
+\begin{defi}[Dimension of representation]\index{dimension!representation}\index{representation!dimension}
+ The \emph{dimension} of a representation is the dimension of the representation space.
+\end{defi}
+
+We will later see that a representation of a Lie group gives rise to a representation of the Lie algebra. The representation is not too hard to obtain --- if we have a representation $D: G \to \GL(V)$ of the Lie group, taking the derivative of this map gives us the confusingly-denoted $\D_e D: T_e G \to T_e (\GL(V))$, which is a map $\mathfrak{g} \to \gl(V)$. To check that this is indeed a representation, we will have to see that it respects the Lie bracket. We will do this later when we study the relation between representations of Lie groups and Lie algebras.
+
+Before we do that, we look at some important examples of representations of Lie algebras.
+\begin{defi}[Trivial representation]\index{trivial representation}\index{representation!trivial}
+ Let $\mathfrak{g}$ be a Lie algebra of dimension $D$. The \emph{trivial representation} is the representation $d_0: \mathfrak{g} \to \F$ given by $d_0(X) = 0$ for all $X \in \mathfrak{g}$. This has dimension $1$.
+\end{defi}
+
+\begin{defi}[Fundamental representation]\index{fundamental representation}\index{representation!fundamental}
+ Let $\mathfrak{g} = \mathcal{L}(G)$ for $G \subseteq \Mat_n(\F)$. The \emph{fundamental representation} is given by $d_f: \mathfrak{g} \to \Mat_n(\F)$ given by
+ \[
+ d_f (X) = X.
+ \]
+ This has $\dim (d_f) = n$.
+\end{defi}
+
+\begin{defi}[Adjoint representation]\index{adjoint representation}\index{representation!adjoint}
+ All Lie algebras come with an \emph{adjoint representation} $d_\mathrm{Adj}$ of dimension $\dim(\mathfrak{g}) = D$. This is given by mapping $X \in \mathfrak{g}$ to the linear map
+ \begin{align*}
+ \ad_X: \mathfrak{g} &\to \mathfrak{g}\\
+ Y &\mapsto [X, Y]
+ \end{align*}
+ By linearity of the bracket, this is indeed a linear map $\mathfrak{g} \to \gl(\mathfrak{g})$.
+\end{defi}
+There is a better way of thinking about this. Suppose our Lie algebra $\mathfrak{g}$ comes from a Lie group $G$. Writing $\Aut(G)$ for all the isomorphisms $G \to G$, we know there is a homomorphism
+\begin{align*}
+ \Phi: G &\to \Aut(G)\\
+ g &\mapsto \Phi_g
+\end{align*}
+given by conjugation:
+\[
+ \Phi_g(x) = gxg^{-1}.
+\]
+Now by taking the derivative, we can turn each $\Phi_g$ into a linear isomorphism $\mathfrak{g} \to \mathfrak{g}$, i.e.\ an element of $\GL(\mathfrak{g})$. So we found ourselves a homomorphism
+\[
+ \mathrm{Ad}: G \to \GL(\mathfrak{g}),
+\]
+which is a representation of the Lie group $G$! It is an exercise to show that the corresponding representation of the Lie algebra $\mathfrak{g}$ is indeed the adjoint representation.
+
+Thus, if we view conjugation as a natural action of a group on itself, then the adjoint representation is the natural representation of $\mathfrak{g}$ over itself.
+
+\begin{prop}
+ The adjoint representation is a representation.
+\end{prop}
+
+\begin{proof}
+ Since the bracket is linear in both components, we know the adjoint representation is a linear map $\mathfrak{g} \to \gl(\mathfrak{g})$. It remains to show that
+ \[
+ [\ad_X, \ad_Y] = \ad_{[X, Y]}.
+ \]
+ But the Jacobi identity says
+ \[
+ [\ad_X, \ad_Y](Z) = [X, [Y, Z]] - [Y, [X, Z]] = [[X, Y], Z] = \ad_{[X, Y]}(Z).\qedhere
+ \]
+\end{proof}
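+We can test this computation numerically as well. Writing $\mathrm{vec}$ for the (row-major) vectorisation of a matrix, we have $\mathrm{vec}([X, Z]) = (X \otimes I - I \otimes X^T)\,\mathrm{vec}(Z)$, so $\ad_X$ becomes an honest matrix and we can check $[\ad_X, \ad_Y] = \ad_{[X, Y]}$ directly (the helper \texttt{ad} is ours):

```python
import numpy as np

def ad(X):
    """ad_X as a matrix on vectorised Z: vec([X, Z]) = (X kron I - I kron X^T) vec(Z)."""
    n = len(X)
    return np.kron(X, np.eye(n)) - np.kron(np.eye(n), X.T)

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

lhs = ad(X) @ ad(Y) - ad(Y) @ ad(X)   # [ad_X, ad_Y]
rhs = ad(X @ Y - Y @ X)               # ad_{[X, Y]}
assert np.allclose(lhs, rhs)
```
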
+
+We will eventually want to find all representations of a Lie algebra. To do so, we need the notion of when two representations are ``the same''.
+
+Again, we start with the definition of a homomorphism.
+\begin{defi}[Homomorphism of representations]\index{representation!homomorphism}
+ Let $(V_1, \rho_1), (V_2, \rho_2)$ be representations of $\mathfrak{g}$. A \emph{homomorphism} $f: (V_1, \rho_1) \to (V_2, \rho_2)$ is a linear map $f: V_1 \to V_2$ such that for all $X \in \mathfrak{g}$, we have
+ \[
+ f(\rho_1(X)(v)) = \rho_2(X)(f(v))
+ \]
+ for all $v \in V_1$. Alternatively, we can write this as
+ \[
+ f \circ \rho_1(X) = \rho_2(X) \circ f.
+ \]
+ In other words, the following diagram commutes for all $X \in \mathfrak{g}$:
+ \[
+ \begin{tikzcd}
+ V_1 \ar[r, "f"] \ar[d, "\rho_1(X)"] & V_2 \ar[d, "\rho_2(X)"]\\
+ V_1 \ar[r, "f"] & V_2
+ \end{tikzcd}
+ \]
+\end{defi}
+
+Then we can define
+\begin{defi}[Isomorphism of representations]\index{representation!isomorphism}
+ Two $\mathfrak{g}$-vector spaces $V_1, V_2$ are \emph{isomorphic} if there is an invertible homomorphism $f: V_1 \to V_2$.
+\end{defi}
+In particular, isomorphic representations have the same dimension.
+
+If we pick a basis for $V_1$ and $V_2$, and write the matrices for the representations as $R_1(X)$ and $R_2(X)$, then they are isomorphic if there exists a non-singular matrix $S$ such that
+\[
+ R_2(X) = SR_1(X) S^{-1}
+\]
+for all $X \in \mathfrak{g}$.
+
+We are going to look at special representations that are ``indecomposable''.
+\begin{defi}[Invariant subspace]\index{invariant subspace}
+ Let $\rho$ be a representation of a Lie algebra $\mathfrak{g}$ with representation space $V$. An \emph{invariant subspace} is a subspace $U \subseteq V$ such that
+ \[
+ \rho(X) u \in U
+ \]
+ for all $X \in \mathfrak{g}$ and $u \in U$.
+
+ The \term{trivial subspaces}\index{subspace!trivial} are $U = \{0\}$ and $V$.
+\end{defi}
+
+\begin{defi}[Irreducible representation]\index{irreducible representation}\index{representation!irreducible}
+ An \emph{irreducible representation} is a representation with no non-trivial invariant subspaces. They are referred to as \emph{irreps}\index{irrep}.
+\end{defi}
+
+\subsection{Complexification and correspondence of representations}
+So far, we have two things --- Lie algebras and Lie groups. Ultimately, the thing we are interested in is the Lie group, but we hope to simplify the study of a Lie group by looking at the Lie algebra instead. So we want to understand how representations of Lie groups correspond to the representations of their Lie algebras.
+
+If we have a representation $D: G \to \GL(V)$ of a Lie group $G$, then taking the derivative at the identity gives us a linear map $\rho: T_e G \to T_I \GL(V)$, i.e.\ a map $\rho: \mathfrak{g} \to \gl(V)$. To show this is a representation, we need to show that it preserves the Lie bracket.
+
+\begin{lemma}
+ Given a representation $D: G \to \GL(V)$, the induced representation $\rho: \mathfrak{g} \to \gl(V)$ is a Lie algebra representation.
+\end{lemma}
+
+\begin{proof}
+ We will again only prove this in the case of a matrix Lie group, so that we can use the construction we had for the Lie bracket.
+
+ We have to check that the bracket is preserved. We take curves $\gamma_1, \gamma_2: \R \to G$ passing through $I$ at $0$ such that $\dot{\gamma}_i(0) = X_i$ for $i = 1, 2$. We write
+ \[
+ \gamma(t) = \gamma_1^{-1}(t) \gamma_2^{-1}(t)\gamma_1(t)\gamma_2(t) \in G.
+ \]
+ We can again Taylor expand this to obtain
+ \[
+ \gamma(t) = I + t^2 [X_1, X_2] + O(t^3).
+ \]
+ Essentially by the definition of the derivative, applying $D$ to this gives
+ \[
+ D(\gamma(t)) = I + t^2 \rho([X_1, X_2]) + O(t^3).
+ \]
+ On the other hand, we can apply $D$ to the defining product of $\gamma$ before Taylor expanding. We get
+ \[
+ D(\gamma) = D(\gamma_1^{-1}) D(\gamma_2^{-1}) D(\gamma_1) D(\gamma_2).
+ \]
+ So as before, since
+ \[
+ D(\gamma_i) = I + t \rho(X_i) + O(t^2),
+ \]
+ it follows that
+ \[
+ D(\gamma)(t) = I + t^2 [\rho(X_1), \rho(X_2)] + O(t^3).
+ \]
+ So we must have
+ \[
+ \rho([X_1, X_2]) = [\rho(X_1), \rho(X_2)]. \qedhere
+ \]
+\end{proof}
+
+How about the other way round? We know that if $\rho: \mathfrak{g} \to \gl(V)$ is induced by $D: G \to \GL(V)$, then we have
+\[
+ D(\exp(tX)) = I + t \rho(X) + O(t^2),
+\]
+while we also have
+\[
+ \exp(t\rho(X)) = I + t \rho(X) + O(t^2).
+\]
+So we might expect that we indeed have
+\[
+ D(\exp(X)) = \exp(\rho(X))
+\]
+for all $X \in \mathfrak{g}$.
+
+So we can try to use this formula to construct a representation of $G$. Given an element $g \in G$, we try to write it as $g = \exp(X)$ for some $X \in \mathfrak{g}$. We then define
+\[
+ D(g) = \exp(\rho(X)).
+\]
+For this to be well-defined, we need two things to happen:
+\begin{enumerate}
+ \item Every element in $G$ can be written as $\exp(X)$ for some $X$
+ \item The value of $\exp(\rho(X))$ does not depend on which $X$ we choose.
+\end{enumerate}
+We first show that if this is well-defined, then it indeed gives us a representation of $G$.
+
+To see this, we use the Baker-Campbell-Hausdorff formula to say
+\begin{align*}
+ D(\exp(X) \exp(Y)) &= \exp(\rho(\log(\exp(X)\exp(Y))))\\
+ &= \exp\left(\rho\left(\log\left(\exp\left(X + Y + \frac{1}{2}[X, Y] + \cdots\right)\right)\right)\right)\\
+ &= \exp\left(\rho\left(X + Y + \frac{1}{2}[X, Y] + \cdots\right)\right)\\
+ &= \exp\left(\rho(X) + \rho(Y) + \frac{1}{2}[\rho(X), \rho(Y)] + \cdots\right)\\
+ &= \exp(\rho(X))\exp(\rho(Y))\\
+ &= D(\exp(X)) D(\exp(Y)),
+\end{align*}
+where $\log$ is the inverse to $\exp$. By well-defined-ness, it doesn't matter which log we pick. Here we need to use the fact that $\rho$ preserves the Lie bracket, and all terms in the Baker-Campbell-Hausdorff formula are made up of Lie brackets.
+
+So when are the two conditions satisfied? For the first condition, we note that $\mathfrak{g}$ is a vector space, hence connected, and the continuous image of a connected space is connected. So a necessary condition is that $G$ must be a connected Lie group. This rules out groups such as $\Or(n)$. It turns out that for the compact groups we will study, connectedness is also sufficient. The second condition is harder, and we will take note of the following result without proof:
+\begin{thm}
+ Let $G$ be a simply connected Lie group with Lie algebra $\mathfrak{g}$, and let $\rho: \mathfrak{g} \to \gl(V)$ be a representation of $\mathfrak{g}$. Then there is a unique representation $D: G \to \GL(V)$ of $G$ that induces $\rho$.
+\end{thm}
+So if we only care about simply connected Lie groups, then studying the representations of its Lie algebra is exactly the same as studying the representations of the group itself. But even if we care about other groups, we know that all representations of the group give representations of the algebra. So to find a representation of the group, we can look at all representations of the algebra, and see which lift to a representation of the group.
+
+It turns out Lie algebras aren't simple enough. Real numbers are terrible, and complex numbers are nice. So what we want to do is to look at the \emph{complexification} of the Lie algebra, and study the complex representations of the complexified Lie algebra.
+
+We will start with the definition of a complexification of a real vector space. We will provide three different versions of the definition, from concrete to abstract, to suit different people's tastes.
+
+\begin{defi}[Complexification I]\index{complexification of vector space}
+ Let $V$ be a real vector space. We pick a basis $\{T^a\}$ of $V$. We define the \emph{complexification} of $V$, written $V_\C$ as the \emph{complex} linear span of $\{T^a\}$, i.e.
+ \[
+ V_\C = \left\{\sum \lambda_a T^a: \lambda_a \in \C\right\}.
+ \]
+ There is a canonical inclusion $V \hookrightarrow V_\C$ given by sending $\sum \lambda_a T^a$ to $\sum \lambda_a T^a$ for $\lambda_a \in \R$.
+\end{defi}
+
+This is a rather concrete definition, but the pure mathematicians will not be happy with such a definition. We can try another definition that does not involve picking a basis.
+
+\begin{defi}[Complexification II]\index{complexification of vector space}
+ Let $V$ be a real vector space. The \emph{complexification} of $V$ has underlying vector space $V_\C = V \oplus V$. Then the action of a complex number $\lambda = a + bi$ on $(u_1, u_2)$ is given by
+ \[
+ \lambda (u_1, u_2) = (au_1 - bu_2, a u_2 + bu_1).
+ \]
+ This gives $V_\C$ the structure of a complex vector space. We have an inclusion $V \to V_\C$ by inclusion into the first factor.
+\end{defi}
+
+Finally, we have a definition that uses some notions from commutative algebra, which you may know about if you are taking the (non-existent) Commutative Algebra course. Otherwise, do not bother reading.
+\begin{defi}[Complexification III]\index{complexification of vector space}
+ Let $V$ be a real vector space. The \emph{complexification} of $V$ is the tensor product $V \otimes_\R \C$, where $\C$ is viewed as an $(\R, \C)$-bimodule.
+\end{defi}
+
+To define the Lie algebra structure on the complexification, we simply declare that
+\[
+ [X + iY, X' + i Y'] = [X, X'] + i([X, Y'] + [Y, X']) - [Y, Y']
+\]
+for $X, Y \in V \subseteq V_\C$.
+
+Whichever definition we decide to choose, we have now a definition, and we want to look at the representations of the complexification.
+
+\begin{thm}
+ Let $\mathfrak{g}$ be a real Lie algebra. Then the complex representations of $\mathfrak{g}$ are exactly the (complex) representations of $\mathfrak{g}_\C$.
+
+ Explicitly, if $\rho: \mathfrak{g} \to \gl(V)$ is a complex representation, then we can extend it to $\mathfrak{g}_\C$ by declaring
+ \[
+ \rho(X + iY) = \rho(X) + i \rho(Y).
+ \]
+ Conversely, if $\rho_\C: \mathfrak{g}_\C \to \gl(V)$ is a representation of $\mathfrak{g}_\C$, then restricting it to $\mathfrak{g} \subseteq \mathfrak{g}_\C$ gives a representation of $\mathfrak{g}$.
+\end{thm}
+
+\begin{proof}
+ Just stare at it and see that the formula works.
+\end{proof}
+
+So if we only care about complex representations, which is the case most of the time, we can study the representations of the complexification instead. This is much easier. In fact, in the next chapter, we are going to classify \emph{all} simple complex Lie algebras and their representations.
+
+Before we end, we note the following definition:
+\begin{defi}[Real form]\index{real form}
+ Let $\mathfrak{g}$ be a \emph{complex} Lie algebra. A \emph{real form} of $\mathfrak{g}$ is a real Lie algebra $\mathfrak{h}$ such that $\mathfrak{h}_\C = \mathfrak{g}$.
+\end{defi}
+Note that a complex Lie algebra can have multiple non-isomorphic real forms in general.
+
+\subsection{Representations of \texorpdfstring{$\su(2)$}{su(2)}}
+We are now going to study the complex representations of $\su(2)$, which are equivalently the representations of the complexification $\su_\C(2)$. In this section, we will write $\su(2)$ when we actually mean $\su_\C(2)$ for brevity. There are a number of reasons why we care about this.
+\begin{enumerate}
+ \item We've already done this before, in the guise of ``spin''/``angular momentum'' in quantum mechanics.
+ \item The representations are pretty easy to classify, and form a motivation for our later general classification of representations of complex simple Lie algebras.
+ \item Our later work on the study of simple Lie algebras will be based on our knowledge of representations of $\su(2)$.
+\end{enumerate}
+
+Recall that the uncomplexified $\su(2)$ has a basis
+\[
+ \su(2) = \spn_\R\left\{T^a = -\frac{1}{2}i \sigma_a: a = 1, 2, 3\right\},
+\]
+where $\sigma_a$ are the Pauli matrices. Since the Pauli matrices are independent over $\C$, they give us a basis of the complexification of $\su(2)$:
+\[
+ \su_\C(2) = \spn_\C\{\sigma_a: a = 1, 2, 3\}.
+\]
+However, it is more convenient to use the following basis:
+\[
+ H = \sigma_3 =
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix},\quad E_{\pm} = \frac{1}{2} (\sigma_1 \pm i \sigma_2) =
+ \begin{pmatrix}
+ 0 & 1 \\
+ 0 & 0
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0 & 0\\
+ 1 & 0
+ \end{pmatrix}.
+\]
+We then have
+\[
+ [H, E_{\pm}] = \pm 2 E_{\pm},\quad [E_+, E_-] = H.
+\]
+We can rewrite the first relation as saying
+\[
+ \ad_H(E_{\pm}) = \pm 2 E_{\pm}.
+\]
+Together with the trivial result that $\ad_H(H) = 0$, we know $\ad_H$ has eigenvalues $\pm 2$ and $0$, with eigenvectors $E_{\pm}, H$. These are known as \term{roots} of $\su(2)$.
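+These commutation relations are a one-line computation with explicit matrices. As a sanity check, the following sketch builds $H$ and $E_{\pm}$ from the Pauli matrices and verifies all three brackets:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

H = sigma3
Ep = 0.5 * (sigma1 + 1j * sigma2)   # E_+ = [[0, 1], [0, 0]]
Em = 0.5 * (sigma1 - 1j * sigma2)   # E_- = [[0, 0], [1, 0]]

br = lambda A, B: A @ B - B @ A
assert np.allclose(br(H, Ep), 2 * Ep)    # [H, E_+] = +2 E_+
assert np.allclose(br(H, Em), -2 * Em)   # [H, E_-] = -2 E_-
assert np.allclose(br(Ep, Em), H)        # [E_+, E_-] = H
```
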
+
+Now let's look at irreducible representations of $\su(2)$. Suppose $(V, \rho)$ is a representation. By linear algebra, we know $\rho(H)$ has an eigenvector $v_\lambda$, say
+\[
+ \rho(H) v_\lambda = \lambda v_\lambda.
+\]
+The eigenvalues of $\rho(H)$ are known as the \term{weights} of the representation $\rho$. The operators $E_{\pm}$ are known as \term{step operators}. We have
+\begin{align*}
+ \rho(H) \rho(E_{\pm}) v_\lambda &= (\rho(E_{\pm})\rho(H) + [\rho(H), \rho(E_{\pm})])v_\lambda \\
+ &= (\rho(E_{\pm})\rho(H) + \rho([H, E_{\pm}])) v_\lambda \\
+ &= (\lambda \pm 2) \rho(E_{\pm}) v_\lambda.
+\end{align*}
+So $\rho(E_{\pm}) v_\lambda$ are also eigenvectors of $\rho(H)$ with eigenvalues $\lambda \pm 2$, \emph{provided $\rho(E_{\pm})v_\lambda$ are non-zero}. This constrains what can happen in a finite-dimensional representation. By finite-dimensionality, we cannot have infinitely many eigenvalues. So at some point, we have to stop. In other words, a finite-dimensional representation must have a \term{highest weight}\index{weight!highest} $\Lambda \in \C$, with
+\[
+ \rho(H)v_\Lambda = \Lambda v_\Lambda,\quad \rho(E_+) v_\Lambda = 0.
+\]
+Now if we start off with $v_\Lambda$ we can keep applying $E_-$ to get the lower weights. Indeed, we get
+\[
+ v_{\Lambda - 2n} = (\rho(E_-))^n v_\Lambda
+\]
+for each $n$. Again, this sequence must terminate somewhere, as we only have finitely many eigenvectors. We might think that the irrep would consist of the basis vectors
+\[
+ \{v_\Lambda, v_{\Lambda - 2}, v_{\Lambda - 4}, \cdots, v_{\Lambda - 2n}\}.
+\]
+However, we need to make sure we don't create something new when we act on these by $\rho(E_+)$ again. We would certainly get that $\rho(E_+)v_{\Lambda - 2n}$ is an eigenvector of eigenvalue $\Lambda - 2n + 2$, but we might get some \emph{other} eigenvector that is not $v_{\Lambda - 2n + 2}$. So we have to check.
+
+We can compute
+\begin{align*}
+ \rho(E_+) v_{\Lambda - 2n} &= \rho(E_+) \rho(E_-) v_{\Lambda - 2n + 2} \\
+ &= (\rho(E_-)\rho(E_+) + [\rho(E_+), \rho(E_-)]) v_{\Lambda - 2n + 2} \\
+ &= \rho(E_-) \rho(E_+) v_{\Lambda - 2n + 2} + (\Lambda - 2n + 2)v_{\Lambda - 2n + 2},
+\end{align*}
+using the fact that $[E_+, E_-] = H$. We now get a recursive relation. We can analyze this by looking at small cases. When $n = 1$, the first term is
+\[
+ \rho(E_-)\rho(E_+) v_{\Lambda - 2n + 2} = \rho(E_-) \rho(E_+) v_\Lambda = 0,
+\]
+by definition of $v_\Lambda$. So we have
+\[
+ \rho(E_+) v_{\Lambda - 2} = \Lambda v_\Lambda.
+\]
+When $n = 2$, we have
+\[
+ \rho(E_+) v_{\Lambda - 4} = \rho(E_-) \rho(E_+) v_{\Lambda - 2} + (\Lambda - 2) v_{\Lambda - 2} = (2\Lambda - 2) v_{\Lambda - 2}.
+\]
+In general, $\rho(E_+)v_{\Lambda - 2n}$ is some multiple of $v_{\Lambda - 2n + 2}$, say
+\[
+ \rho(E_+) v_{\Lambda - 2n} = r_n v_{\Lambda - 2n + 2}.
+\]
+Plugging this into our equation gives
+\[
+ r_n = r_{n - 1} + \Lambda - 2n + 2,
+\]
+with the boundary condition $\rho(E_+)v_\Lambda = 0$, i.e.\ $r_1 = \Lambda$. Solving this recurrence gives the explicit formula
+\[
+ r_n = (\Lambda + 1 - n)n.
+\]
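+It is a quick exercise to check this formula against the recurrence; the following sketch does so for a range of (hypothetical, small) values of $\Lambda$:

```python
# check that r_n = (Lam + 1 - n) n solves r_n = r_{n-1} + Lam - 2n + 2 with r_1 = Lam
for Lam in range(8):
    r = lambda n, Lam=Lam: (Lam + 1 - n) * n
    assert r(1) == Lam                                 # boundary condition
    for n in range(2, 12):
        assert r(n) == r(n - 1) + Lam - 2 * n + 2      # the recurrence
    assert r(Lam + 1) == 0                             # r_{N+1} = 0 exactly at N = Lam
```

+Note that the last assertion anticipates the termination condition derived below: $r_{N+1}$ vanishes precisely when $N = \Lambda$.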
+Now returning to the problem that the sequence must terminate, we figure that we also need to have a \term{lowest weight}\index{weight!lowest} of the form $\Lambda - 2N$. Hence we must have some non-zero vector $v_{\Lambda - 2N} \not= 0$ such that
+\[
+ \rho(E_-) v_{\Lambda - 2N} = 0.
+\]
+For this to be true, we need $r_{N + 1} = 0$ for some non-negative integer $N$. So we have
+\[
+ (\Lambda + 1 - (N + 1))(N + 1) = 0.
+\]
+This is equivalent to saying
+\[
+ (\Lambda - N)(N + 1) = 0.
+\]
+Since $N + 1$ is a positive integer, we must have $\Lambda - N = 0$, i.e.\ $\Lambda = N$. So in fact the highest weight is a non-negative integer!
+
+In summary, we've got
+\begin{prop}\index{$\su(2)$}
+ The finite-dimensional irreducible representations of $\su(2)$ are labelled by $\Lambda \in \Z_{\geq 0}$, which we call $\rho_\Lambda$, with weights given by
+ \[
+ \{-\Lambda, -\Lambda + 2, \cdots, \Lambda - 2, \Lambda\}.
+ \]
+ The weights are all non-degenerate, i.e.\ each only has one eigenvector. We have $\dim (\rho_\Lambda) = \Lambda + 1$.
+\end{prop}
+
+\begin{proof}
+ We've done most of the work. Given any irrep, we can pick any eigenvector of $\rho(H)$ and keep applying $\rho(E_+)$ to get a highest weight vector $v_\Lambda$; the above computations then show that
+ \[
+ \{v_\Lambda, v_{\Lambda - 2}, \cdots, v_{-\Lambda}\}
+ \]
+ is a subspace of the irrep closed under the action of $\rho$. By irreducibility, this must be the whole of the representation space.
+\end{proof}
+
+The representation $\rho_0$ is the trivial representation; $\rho_1$ is the fundamental one, and $\rho_2$ is the adjoint representation.
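+We can make the classification completely concrete by writing down the matrices of $\rho_\Lambda$ in the basis $v_\Lambda, v_{\Lambda - 2}, \cdots, v_{-\Lambda}$, using $\rho(E_-)v_{\Lambda - 2n} = v_{\Lambda - 2(n+1)}$ and $\rho(E_+)v_{\Lambda - 2n} = r_n v_{\Lambda - 2n + 2}$ from above. The sketch below (the helper \texttt{irrep} is ours) builds these matrices and checks that they satisfy the $\su(2)$ commutation relations:

```python
import numpy as np

def irrep(Lam):
    """Matrices of rho_Lam in the basis v_Lam, v_{Lam-2}, ..., v_{-Lam}."""
    d = Lam + 1
    H = np.diag([float(Lam - 2 * n) for n in range(d)])
    Ep, Em = np.zeros((d, d)), np.zeros((d, d))
    for n in range(d - 1):
        Em[n + 1, n] = 1.0                  # rho(E_-) v_n = v_{n+1}
        Ep[n, n + 1] = (Lam - n) * (n + 1)  # rho(E_+) v_{n+1} = r_{n+1} v_n
    return H, Ep, Em

br = lambda A, B: A @ B - B @ A
for Lam in range(5):
    H, Ep, Em = irrep(Lam)
    assert np.allclose(br(H, Ep), 2 * Ep)
    assert np.allclose(br(H, Em), -2 * Em)
    assert np.allclose(br(Ep, Em), H)
```

+For $\Lambda = 1$ this recovers the fundamental representation, and for $\Lambda = 2$ the adjoint, as claimed.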
+
+Now what do these tell us about representations of the Lie \emph{group} $\SU(2)$? We know $\SU(2)$ is simply connected, so by our discussion previously, we know the complex representations of $\SU(2)$ are exactly the representations of $\su(2)$. This is not too exciting.
+
+We know there are other Lie groups with Lie algebra $\su(2)$, namely $\SO(3)$. Since $\SO(3)$ is not simply connected, when does a representation of $\su(2)$ give us a representation of $\SO(3)$? Recall that we had
+\[
+ \SO(3) \cong \frac{\SU(2)}{\Z/2\Z}.
+\]
+So an element in $\SO(3)$ is a pair $\{A, -A\}$ of elements in $\SU(2)$. Given a representation $\rho_\Lambda$ of $\su(2)$, we obtain a corresponding representation $D_\Lambda$ of $\SU(2)$. Now we get a representation of $\SO(3)$ if and only if $D_\Lambda$ respects the identification between $A$ and $-A$. In particular, we need
+\[
+ D_\Lambda(-I) = D_\Lambda(I).\tag{$*$}
+\]
+On the other hand, if this were true, then multiplying both sides by $D_\Lambda(A)$ we get
+\[
+ D_\Lambda(-A) = D_\Lambda(A).
+\]
+So $(*)$ is a necessary and sufficient condition for $D_\Lambda$ to descend to a representation of $\SO(3)$.
+
+We know that
+\[
+ -I = \exp(i\pi H).
+\]
+So we have
+\[
+ D_\Lambda(-I) = \exp(i\pi \rho_\Lambda(H)).
+\]
+We know that $\rho_\Lambda(H)$ has eigenvalues $\lambda \in S_\Lambda$. So $D_\Lambda(-I)$ has eigenvalues
+\[
+ \exp(i\pi\lambda) = (-1)^\lambda = (-1)^\Lambda,
+\]
+since $\lambda$ and $\Lambda$ differ by an even integer. So we know $D_\Lambda(-I) = D_\Lambda (I)$ if and only if $\Lambda$ is even. In other words, we get a representation of $\SO(3)$ iff $\Lambda$ is even.
+
+In physics, we have already seen this as integer and half-integer spin. We have integer spin exactly when we have a representation of $\SO(3)$. The fact that half-integer spin particles exist means that ``spin'' is really about $\SU(2)$, rather than $\SO(3)$.
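+
+The parity argument above is easy to check numerically. This is a quick sketch (not part of the notes, assuming NumPy), exponentiating the diagonal weight matrix $\rho_\Lambda(H)$ entrywise:
+
+```python
+import numpy as np
+
+# rho_Lambda(H) is diagonal with weights -Lambda, -Lambda + 2, ..., Lambda,
+# so D_Lambda(-I) = exp(i pi rho_Lambda(H)) can be exponentiated entrywise.
+def D_minus_I(L):
+    weights = np.arange(-L, L + 1, 2)
+    return np.diag(np.exp(1j * np.pi * weights))
+
+for L in range(6):
+    # D_Lambda(-I) = (-1)^Lambda I, so D_Lambda descends to SO(3) iff Lambda is even
+    assert np.allclose(D_minus_I(L), (-1) ** L * np.eye(L + 1))
+```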
+
+\subsection{New representations from old}
+We are now going to look at different ways of obtaining new representations from old ones. We start with a rather boring one.
+
+\begin{defi}[Conjugate representation]\index{conjugate representation}\index{representation!conjugate}
+ Let $\rho$ be a representation of a real Lie algebra $\mathfrak{g}$ on $\C^n$. We define the \emph{conjugate representation} by
+ \[
+ \bar{\rho}(X) = \rho(X)^*
+ \]
+ for all $X \in \mathfrak{g}$.
+\end{defi}
+Note that this need not give a genuinely new representation, since it may be equivalent to $\rho$ itself.
+
+To obtain more interesting new representations, we recall the following definitions from linear algebra:
+\begin{defi}[Direct sum]\index{direct sum}
+ Let $V, W$ be vector spaces. The \emph{direct sum} $V \oplus W$ is given by
+ \[
+ V \oplus W = \{v \oplus w: v \in V, w \in W\}
+ \]
+ with operations defined by
+ \begin{align*}
+ (v_1 \oplus w_1) + (v_2 \oplus w_2) &= (v_1 + v_2) \oplus (w_1 + w_2)\\
+ \lambda(v \oplus w) &= (\lambda v) \oplus (\lambda w).
+ \end{align*}
+ We often suggestively write $v \oplus w$ as $v + w$. This has dimension
+ \[
+ \dim(V \oplus W) = \dim V + \dim W.
+ \]
+\end{defi}
+
+\begin{defi}[Sum representation]\index{sum representation}\index{representation!sum}
+ Suppose $\rho_1$ and $\rho_2$ are representations of $\mathfrak{g}$ with representation spaces $V_1$ and $V_2$ of dimensions $d_1$ and $d_2$. Then $V_1 \oplus V_2$ is a representation space with representation $\rho_1 \oplus \rho_2$ given by
+ \[
+ (\rho_1 \oplus \rho_2)(X) \cdot (v_1 \oplus v_2) = (\rho_1(X)(v_1)) \oplus (\rho_2(X)(v_2)).
+ \]
+ In coordinates, if $R_i$ is the matrix for $\rho_i$, then the matrix of $\rho_1 \oplus \rho_2$ is given by
+ \[
+ (R_1 \oplus R_2)(X) =
+ \begin{pmatrix}
+ R_1(X) & 0\\
+ 0 & R_2(X)
+ \end{pmatrix}.
+ \]
+ The dimension of this representation is $d_1 + d_2$.
+\end{defi}
+Of course, sum representations are in general not irreducible! However, they are still rather easy to understand, since they decompose into smaller, nicer bits.
+
+In contrast, tensor products do not behave as nicely.
+\begin{defi}[Tensor product]\index{tensor product}
+ Let $V, W$ be vector spaces. The \emph{tensor product} $V \otimes W$ is spanned by elements $v \otimes w$, where $v \in V$ and $w \in W$, where we identify
+ \begin{align*}
+ v \otimes (\lambda_1 w_1 + \lambda_2 w_2) &= \lambda_1 (v \otimes w_1) + \lambda_2 (v \otimes w_2)\\
+ (\lambda_1 v_1 + \lambda_2 v_2) \otimes w &= \lambda_1 (v_1 \otimes w) + \lambda_2 (v_2 \otimes w)
+ \end{align*}
+ This has dimension
+ \[
+ \dim (V \otimes W) = (\dim V)(\dim W).
+ \]
+ More explicitly, if $e^1, \cdots, e^n$ is a basis of $V$ and $f^1 ,\cdots, f^m$ is a basis for $W$, then $\{e^i \otimes f^j: 1 \leq i \leq n, 1 \leq j \leq m\}$ is a basis for $V \otimes W$.
+
+ Given any two maps $F: V \to V'$ and $G: W \to W'$, we define $F \otimes G: V \otimes W \to V' \otimes W'$ by
+ \[
+ (F \otimes G) (v \otimes w) = (F(v)) \otimes (G(w)),
+ \]
+ and then extending linearly.
+\end{defi}
+The operation of tensor products should be familiar in quantum mechanics, where we combine two state spaces to get a third. From a mathematical point of view, the tensor product is characterized by the fact that a bilinear map $V \times W \to U$ is the same as a linear map $V \otimes W \to U$.
+
+\begin{defi}[Tensor product representation]\index{tensor product representation}\index{representation!tensor product}\index{product representation}\index{representation!product}
+ Let $\mathfrak{g}$ be a Lie algebra, and $\rho_1, \rho_2$ be representations of $\mathfrak{g}$ with representation spaces $V_1, V_2$. We define the \emph{tensor product representation} $\rho_1 \otimes \rho_2$ with representation space $V_1 \otimes V_2$ given by
+ \[
+ (\rho_1 \otimes \rho_2)(X) = \rho_1(X) \otimes I_2 + I_1 \otimes \rho_2(X): V_1 \otimes V_2 \to V_1 \otimes V_2,
+ \]
+ where $I_1$ and $I_2$ are the identity maps on $V_1$ and $V_2$.
+\end{defi}
+
+Note that the tensor product representation is \emph{not} given by the ``obvious'' formula $\rho_1(X) \otimes \rho_2(X)$. That formula is not even linear in $X$, so it cannot define a representation.
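+
+To see why the Kronecker-sum formula is the right one, we can check the bracket numerically. The sketch below (not part of the notes, assuming NumPy) uses the fundamental representation of $\su(2)$ with the concrete basis $T^a = -\frac{i}{2}\sigma_a$:
+
+```python
+import numpy as np
+
+sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
+         np.array([[0, -1j], [1j, 0]]),
+         np.array([[1, 0], [0, -1]], dtype=complex)]
+T = [-0.5j * s for s in sigma]        # basis of the fundamental rep of su(2)
+
+def comm(A, B):
+    return A @ B - B @ A
+
+I2 = np.eye(2)
+def tensor_rep(X):                    # (rho1 x rho2)(X) = X (x) I + I (x) X
+    return np.kron(X, I2) + np.kron(I2, X)
+
+rng = np.random.default_rng(0)
+X = sum(c * t for c, t in zip(rng.normal(size=3), T))
+Y = sum(c * t for c, t in zip(rng.normal(size=3), T))
+
+# The Kronecker sum respects the bracket, so it is a representation ...
+assert np.allclose(comm(tensor_rep(X), tensor_rep(Y)), tensor_rep(comm(X, Y)))
+# ... while the "obvious" guess X -> X (x) X is not even linear in X:
+naive = lambda Z: np.kron(Z, Z)
+assert not np.allclose(naive(X + Y), naive(X) + naive(Y))
+```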
+
+Now suppose $(\rho, V)$ is a reducible representation. Then it has a non-trivial invariant subspace $U \subseteq V$. Now we pick a basis of $U$ and extend it to one of $V$. Since each $\rho(X)$ maps $U$ into itself, the matrix representation of $\rho$ looks like
+\[
+ R(X) =
+ \begin{pmatrix}
+ A(X) & B(X)\\
+ 0 & C(X)
+ \end{pmatrix}.
+\]
+However, there is no \emph{a priori} reason why $B(X)$ should vanish as well. When $B(X)$ does vanish, we say the representation is completely reducible.
+
+\begin{defi}[Completely reducible representation]\index{representation!completely reducible}\index{completely reducible representation}
+ If $(\rho, V)$ is a representation such that there is a basis of $V$ in which $\rho$ looks like
+ \[
+ \rho(X) =
+ \begin{pmatrix}
+ \rho_1(X) & 0 & \cdots & 0\\
+ 0 & \rho_2(X) & \cdots & 0\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & \rho_n(X)
+ \end{pmatrix},
+ \]
+ then it is called \emph{completely reducible}. In this case, we have
+ \[
+ \rho = \rho_1 \oplus \rho_2 \oplus \cdots \oplus \rho_n.
+ \]
+\end{defi}
+
+Here is an important fact:
+\begin{thm}
+ If $\rho_i$ for $i = 1,\cdots, m$ are finite-dimensional irreps of a simple Lie algebra $\mathfrak{g}$, then $\rho_1 \otimes \cdots \otimes \rho_m$ is completely reducible to irreps, i.e.\ we can find $\tilde{\rho}_1, \cdots, \tilde{\rho}_k$ such that
+ \[
+ \rho_1 \otimes \cdots \otimes \rho_m = \tilde{\rho}_1 \oplus \tilde{\rho}_2 \oplus \cdots \oplus \tilde{\rho}_k.
+ \]
+\end{thm}
+We will not prove this.
+\subsection{Decomposition of tensor product of \texorpdfstring{$\su(2)$}{su(2)} representations}
+We now try to find an explicit description of the decomposition of tensor products of irreps of $\su(2)$.
+
+We let $\rho_\Lambda$ and $\rho_{\Lambda'}$ be irreps of $\su(2)$, where $\Lambda, \Lambda' \in \Z_{\geq 0}$. We call the representation spaces $V_\Lambda$ and $V_{\Lambda'}$.
+
+We can form the tensor product $\rho_\Lambda \otimes \rho_{\Lambda'}$ with representation space
+\[
+ V_\Lambda \otimes V_{\Lambda'} = \spn_\C \{v \otimes v': v \in V_\Lambda, v' \in V_{\Lambda'}\}.
+\]
+By definition, for $X \in \su(2)$, the representation is given by
+\[
+ (\rho_\Lambda \otimes \rho_{\Lambda'})(X)(v \otimes v') = (\rho_\Lambda(X)v) \otimes v' + v \otimes (\rho_{\Lambda'}(X)v').
+\]
+This gives us a completely reducible representation of $\su(2)$ of dimension
+\[
+ \dim (\rho_\Lambda \otimes \rho_{\Lambda'}) = (\Lambda + 1)(\Lambda' + 1).
+\]
+We can then write
+\[
+ \rho_\Lambda \otimes \rho_{\Lambda'} = \bigoplus_{\Lambda'' \in \Z, \Lambda'' \geq 0} \mathcal{L}^{\Lambda''}_{\Lambda, \Lambda'} \rho_{\Lambda''},
+\]
+where $\mathcal{L}^{\Lambda''}_{\Lambda, \Lambda'}$ are some non-negative integers we want to find out. In general, such coefficients are known as \term{Littlewood-Richardson coefficients}.
+
+Recall that $V_\Lambda$ has a basis $\{v_\lambda\}$, where
+\[
+ \lambda \in S_\Lambda = \{-\Lambda, -\Lambda + 2, \cdots, +\Lambda\}.
+\]
+Similarly, $V_{\Lambda'}$ has a basis $\{v_{\lambda'}'\}$.
+
+Then we know that the tensor product space has basis
+\[
+ \mathcal{B} = \{v_\lambda \otimes v_{\lambda'}': \lambda \in S_{\Lambda}, \lambda' \in S_{\Lambda'}\}.
+\]
+We now see what $H$ does to our basis vectors. We have
+\begin{align*}
+ (\rho_\Lambda \otimes \rho_{\Lambda'})(H)(v_\lambda \otimes v_{\lambda'}') &= (\rho_\Lambda(H)v_\lambda) \otimes v_{\lambda'}' + v_\lambda \otimes (\rho_{\Lambda'}(H)v_{\lambda'}')\\
+ &= (\lambda + \lambda')(v_\lambda \otimes v_{\lambda'}').
+\end{align*}
+We thus see that the weights of the tensor product are just the sum of the weights of the individual components. In other words, we have
+\[
+ S_{\Lambda, \Lambda'} = \{\lambda + \lambda': \lambda \in S_\Lambda, \lambda' \in S_{\Lambda'}\}.
+\]
+Note that here we count the weights with multiplicity, so that each weight can appear multiple times.
+
+We see that the highest weight is just the sum of the largest weights of the irreps, and this appears with multiplicity $1$. Thus we know
+\[
+ \mathcal{L}_{\Lambda, \Lambda'}^{\Lambda + \Lambda'} = 1,
+\]
+i.e.\ we have one copy of $\rho_{\Lambda + \Lambda'}$ in the decomposition of the tensor product. We write
+\[
+ \rho_\Lambda \otimes \rho_{\Lambda'} = \rho_{\Lambda + \Lambda'} \oplus \tilde{\rho}_{\Lambda, \Lambda'},
+\]
+where $\tilde{\rho}_{\Lambda, \Lambda'}$ has weight set $\tilde{S}_{\Lambda, \Lambda'}$ satisfying
+\[
+ S_{\Lambda, \Lambda'} = S_{\Lambda + \Lambda'} \cup \tilde{S}_{\Lambda, \Lambda'}.
+\]
+We now notice that there is only one $\Lambda + \Lambda' - 2$ term in $\tilde{S}_{\Lambda, \Lambda'}$. So there must be a copy of $\rho_{\Lambda + \Lambda' - 2}$ as well. We keep on going.
+
+\begin{eg}
+ Take $\Lambda = \Lambda' = 1$. Then we have
+ \[
+ S_1 = \{-1, +1\}.
+ \]
+ So we have
+ \[
+ S_{1, 1} = \{-2, 0, 0, 2\}.
+ \]
+ We see that the highest weight is $2$, and this corresponds to a factor of $\rho_2$. In doing so, we write
+ \[
+ S_{1, 1} = \{-2, 0, +2\} \cup \{0\} = S_2 \cup S_0.
+ \]
+ So we have
+ \[
+ \rho_1 \otimes \rho_1 = \rho_2 \oplus \rho_0.
+ \]
+\end{eg}
+
+From the above, one can see (after some thought) that in general, we have
+\begin{prop}
+ \[
+ \rho_M \otimes \rho_N = \rho_{|N - M|} \oplus \rho_{|N - M| + 2} \oplus \cdots \oplus \rho_{N + M}.
+ \]
+\end{prop}
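+
+This formula is easy to verify at the level of weights, which determine the decomposition completely. This is a short sketch (not part of the notes), comparing the weight multiset of $\rho_M \otimes \rho_N$ with that of the claimed direct sum:
+
+```python
+from collections import Counter
+
+def weights(L):
+    # weight multiset of rho_Lambda: {-Lambda, -Lambda + 2, ..., Lambda}
+    return range(-L, L + 1, 2)
+
+def tensor_weights(M, N):
+    # weights of rho_M (x) rho_N are sums of weights, with multiplicity
+    return Counter(a + b for a in weights(M) for b in weights(N))
+
+def sum_weights(M, N):
+    # weights of rho_|N-M| (+) rho_{|N-M|+2} (+) ... (+) rho_{N+M}
+    c = Counter()
+    for L in range(abs(N - M), N + M + 1, 2):
+        c.update(weights(L))
+    return c
+
+for M in range(5):
+    for N in range(5):
+        assert tensor_weights(M, N) == sum_weights(M, N)
+```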
+
+\section{Cartan classification}
+We now move on to the grand scheme of classifying all complex simple Lie algebras. The starting point of everything is that we define a natural inner product on our Lie algebra $\mathfrak{g}$. We will find a subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ that plays the role of the $H$ we had when we studied $\su(2)$. The remainder of $\mathfrak{g}$ will be controlled by things known as \emph{roots}, which live in $\mathfrak{h}^*$. We will see that the Killing form induces an inner product on $\mathfrak{h}^*$, which allows us to think of these roots as ``geometric'' objects that live in $\R^n$. We can then find some strong conditions that restrict what these roots and their inner products can be, and it turns out these completely characterize our possible Lie algebras.
+
+\subsection{The Killing form}
+The first thing we want to figure out is an ``invariant'' inner product on our Lie algebra $\mathfrak{g}$. We will do so by writing down a formula, and then checking that for a (semi-)simple Lie algebra, it is non-degenerate. Having a non-degenerate inner product will be very useful. Amongst many things, it provides us with a bijection between a vector space and its dual.
+
+We recall the following definitions:
+\begin{defi}[Inner product]\index{inner product}
+ Given a vector space $V$ over $\F$, an \emph{inner product} is a symmetric bilinear map $i: V \times V \to \F$.
+\end{defi}
+
+\begin{defi}[Non-degenerate inner product]\index{non-degenerate inner product}\index{inner product!non-degenerate}
+ An inner product $i$ is said to be \emph{non-degenerate} if for all $v \in V$ non-zero, there is some $w \in V$ such that
+ \[
+ i(v, w) \not= 0.
+ \]
+\end{defi}
+
+The question we would like to ask is if there is a ``natural'' inner product on $\mathfrak{g}$. We try the following:
+
+\begin{defi}[Killing form]\index{Killing form}
+ The \emph{Killing form} of a Lie algebra $\mathfrak{g}$ is the inner product $\kappa: \mathfrak{g} \times \mathfrak{g} \to \F$ given by
+ \[
+ \kappa(X, Y) = \tr(\ad_X \circ \ad_Y),
+ \]
+ where $\tr$ is the usual trace of a linear map. Since $\ad$ is linear, this is bilinear in both arguments, and the cyclicity of the trace tells us this is symmetric.
+\end{defi}
+
+We can try to write this more explicitly. The map $\ad_X \circ \ad_Y: \mathfrak{g} \to \mathfrak{g}$ is given by
+\[
+ Z \mapsto [X, [Y, Z]].
+\]
+We pick a basis $\{T^a\}_{a = 1, \ldots, D}$ for $\mathfrak{g}$. We write
+\[
+ X = X_aT^a,\quad Y = Y_aT^a,\quad Z = Z_aT^a.
+\]
+We again let $f^{ab}\!_c$ be the structure constants satisfying
+\[
+ [T^a, T^b] = f^{ab}\!_c T^c.
+\]
+We then have
+\begin{align*}
+ [X, [Y, Z]] &= X_a Y_b Z_c [T^a, [T^b, T^c]]\\
+ &= X_a Y_b Z_c f^{ad}\!_e f^{bc}\!_d T^e\\
+ &= M(X, Y)^c_e Z_c T^e,
+\end{align*}
+where
+\[
+ M(X, Y)^c_e = X_a Y_b f^{ad}\!_e f^{bc}\!_d.
+\]
+So the trace of this thing is
+\[
+ \kappa(X, Y) = \tr(M(X, Y)) = \kappa^{ab}X_a Y_b,\quad \kappa^{ab} = f^{ad}\!_c f^{bc}\!_d.
+\]
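+
+As a concrete check of this formula (a sketch, not part of the notes, assuming NumPy), we can work out $\kappa^{ab}$ for $\su(2)$ in the basis $T^a = -\frac{i}{2}\sigma_a$, where the structure constants are $f^{ab}\!_c = \epsilon_{abc}$:
+
+```python
+import numpy as np
+
+# Structure constants of su(2) in the basis T^a = -(i/2) sigma_a:
+# [T^a, T^b] = f^{ab}_c T^c with f^{ab}_c = epsilon_{abc}.
+f = np.zeros((3, 3, 3))
+for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
+    f[a, b, c], f[b, a, c] = 1.0, -1.0
+
+# kappa^{ab} = f^{ad}_c f^{bc}_d
+kappa = np.einsum('adc,bcd->ab', f, f)
+assert np.allclose(kappa, -2 * np.eye(3))
+
+# Cross-check against the definition kappa(T^a, T^b) = tr(ad_a ad_b),
+# where the matrix of ad_a has entries (ad_a)^c_b = f^{ab}_c:
+ad = [f[a].T for a in range(3)]
+kappa2 = np.array([[np.trace(A @ B) for B in ad] for A in ad])
+assert np.allclose(kappa, kappa2)
+```
+
+So $\kappa^{ab} = -2\delta^{ab}$ here; in particular it is negative definite, which anticipates the notion of compact type discussed below.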
+Why is this a natural thing to consider? This is natural because it obeys an invariance condition:
+\begin{defi}[Invariant inner product]\index{invariant inner product}\index{inner product!invariant}
+ An inner product $\kappa$ on a Lie algebra $\mathfrak{g}$ is \emph{invariant} if for any $X, Y, Z \in \mathfrak{g}$, we have
+ \[
+ \kappa([Z, X], Y)+ \kappa(X, [Z, Y]) = 0.
+ \]
+ Equivalently, we have
+ \[
+ \kappa(\ad_Z X, Y) + \kappa(X, \ad_Z Y) = 0.
+ \]
+\end{defi}
+What does this condition actually mean? If one were to arbitrarily write down a definition of invariance, they might try
+\[
+ \kappa(\ad_Z X, Y) = \kappa(X, \ad_Z Y)
+\]
+instead. However, this is not the right thing to ask for.
+
+Usually, we think of elements of the Lie algebra as some sort of ``infinitesimal transformation'', and as we have previously discussed, the adjoint representation is how an element $Z \in \mathfrak{g}$ naturally acts on $\mathfrak{g}$. So under an infinitesimal transformation, the elements $X, Y \in \mathfrak{g}$ transform as
+\begin{align*}
+ X &\mapsto X + \ad_Z X\\
+ Y &\mapsto Y + \ad_Z Y
+\end{align*}
+What is the effect on the Killing form? It transforms infinitesimally as
+\[
+ \kappa(X, Y) \mapsto \kappa(X + \ad_Z X, Y + \ad_Z Y) \approx \kappa(X, Y) + \kappa(\ad_Z X, Y)+ \kappa(X, \ad_Z Y),
+\]
+where we dropped the ``higher order terms'' because we want to think of the $\ad_Z$ terms as being infinitesimally small (justifying this properly will require going to the more global picture involving an actual Lie group, which we shall not go into. This is, after all, just a motivation). So invariance of the Killing form says it doesn't transform under this action.
+
+So we now check that the Killing form does satisfy the invariance condition.
+\begin{prop}
+ The Killing form is invariant.
+\end{prop}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \kappa([Z, X], Y) &= \tr(\ad_{[Z, X]} \circ \ad_Y)\\
+ &= \tr([\ad_Z, \ad_X] \circ \ad_Y)\\
+ &= \tr(\ad_Z \circ \ad_X \circ \ad_Y - \ad_X \circ \ad_Z \circ \ad_Y)\\
+ &= \tr(\ad_Z \circ \ad_X \circ \ad_Y) - \tr(\ad_X \circ \ad_Z \circ \ad_Y)
+ \intertext{Similarly, we have}
+ \kappa(X, [Z, Y]) &= \tr(\ad_X \circ \ad_Z \circ \ad_Y) - \tr(\ad_X \circ \ad_Y \circ \ad_Z).
+ \end{align*}
+ Adding them together, we obtain
+ \[
+ \kappa([Z, X], Y) + \kappa(X, [Z, Y]) = \tr(\ad_Z \circ \ad_X \circ \ad_Y) - \tr(\ad_X \circ \ad_Y \circ \ad_Z).
+ \]
+ By the cyclicity of $\tr$, this vanishes.
+\end{proof}
+
+The next problem is to figure out when the Killing form is degenerate. This is related to the notion of simplicity of Lie algebras.
+
+\begin{defi}[Semi-simple Lie algebra]\index{semi-simple Lie algebra}\index{Lie algebra!semi-simple}
+ A Lie algebra is \emph{semi-simple} if it has no abelian non-trivial ideals.
+\end{defi}
+This is weaker than the notion of simplicity --- simplicity requires that there are no non-trivial ideals at all!
+
+In fact, it is true that $\mathfrak{g}$ being semi-simple is equivalent to $\mathfrak{g}$ being the direct sum of simple Lie algebras. This is on the second example sheet.
+\begin{thm}[Cartan]
+ The Killing form of a Lie algebra $\mathfrak{g}$ is non-degenerate iff $\mathfrak{g}$ is semi-simple.
+\end{thm}
+
+\begin{proof}
+ We are only going to prove one direction --- if $\kappa$ is non-degenerate, then $\mathfrak{g}$ is semi-simple.
+
+ Suppose we had an abelian ideal $\mathfrak{a} \subseteq \mathfrak{g}$. We want to show that $\kappa(A, X) = 0$ for all $A \in \mathfrak{a}$ and $X \in \mathfrak{g}$. Indeed, we pick a basis of $\mathfrak{a}$, and extend it to a basis of $\mathfrak{g}$. Then since $[X, A] \in \mathfrak{a}$ for all $X \in \mathfrak{g}$ and $A \in \mathfrak{a}$, we know the matrix of $\ad_X$ must look like
+ \[
+ \ad_X =
+ \begin{pmatrix}
+ * & *\\
+ 0 & *
+ \end{pmatrix}.
+ \]
+ Also, if $A \in \mathfrak{a}$, then since $\mathfrak{a}$ is an abelian ideal, $\ad_A$ kills everything in $\mathfrak{a}$, and $\ad_A(X) \in \mathfrak{a}$ for all $X \in \mathfrak{g}$. So the matrix must look something like
+ \[
+ \ad_A =
+ \begin{pmatrix}
+ 0 & *\\
+ 0 & 0
+ \end{pmatrix}.
+ \]
+ So we know
+ \[
+ \ad_A \circ \ad_X =
+ \begin{pmatrix}
+ 0 & *\\
+ 0 & 0
+ \end{pmatrix},
+ \]
+ and the trace vanishes. So $\kappa(A, X) = 0$ for all $X \in \mathfrak{g}$ and $A \in \mathfrak{a}$. Since $\kappa$ is non-degenerate, this forces $A = 0$. So $\mathfrak{a}$ is trivial.
+\end{proof}
+Now if $\kappa$ is non-degenerate, then $\kappa^{ab}$ is ``invertible''. So we can find a $\kappa_{ab}$ such that
+\[
+ \kappa_{ab}\kappa^{bc} = \delta_a^c.
+\]
+We can then use this to raise and lower indices.
+
+%\subsubsection*{Complexification}
+%Our initial source of Lie algebras is from Lie groups. The Lie algebras coming this way is always a real Lie algebra.
+%
+%\begin{eg}
+% Consider
+% \[
+% \su(2) = \spn_\R\left\{T^a = -\frac{i}{2} \sigma_a: a = 1, 2, 3\right\},
+% \]
+% which is equivalently the $2 \times 2$ traceless anti-Hermitian matrices. Then the complexification is given by
+% \[
+% \mathcal{L}_\C (\SU(2)) = \su_\C(2) = \spn_\C \{T^a = -\frac{i}{2} \sigma_a: a = 1, 2, 3\}.
+% \]
+% Then the matrices in $\su_\C(2)$ are still traceless, since complex multiplication preserves tracelessness, but they no longer have to be anti-Hermitian. In fact, they are just the $2 \times 2$ traceless complex matrices.
+%\end{eg}
+%
+%We can compute $\sl_\C(2, \R)$ similarly, and find that
+%\[
+% \su_\C(2) \cong \sl_\C(2, \R),
+%\]
+%but $\su(2)$ is not isomorphic to $\sl(2, \R)$.
+
+Note that this definition so far does not care if this is a real or complex Lie algebra. From linear algebra, we know that any symmetric matrix can be diagonalized. If we are working over $\C$, then the diagonal entries can be whatever we like if we choose the right basis. However, if we are working over $\R$, then by Sylvester's law of inertia, the number of positive (or negative) diagonal entries is always fixed, while the magnitudes can be whatever we like.
+
+Thus in the case of a real Lie algebra, it is interesting to ask when the diagonal entries all have the same sign. It turns out that the case where the sign is always negative is the interesting one.
+
+\begin{defi}[Real Lie algebra of compact type]\index{Lie algebra!compact type}\index{compact type}
+ We say a real Lie algebra is of \emph{compact type} if there is a basis such that
+ \[
+ \kappa^{ab} = - \kappa \delta^{ab},
+ \]
+ for some $\kappa \in \R^+$.
+\end{defi}
+By general linear algebra, we can always pick a basis so that $\kappa = 1$.
+
+The reason why it is called ``compact'' is because these naturally arise when we study compact Lie groups.
+
+We will note the following fact without proof:
+\begin{thm}
+ Every complex semi-simple Lie algebra (of finite dimension) has a real form of compact type.
+\end{thm}
+We will not use it for any mathematical results, but it will be a helpful thing to note when we develop gauge theories later on.
+
+\subsection{The Cartan basis}
+From now on, we restrict to the study of finite-dimensional simple complex Lie algebras. Every time we write the symbol $\mathfrak{g}$ or say ``Lie algebra'', we mean a finite-dimensional simple complex Lie algebra.
+
+Recall that we have already met such a Lie algebra
+\[
+ \su_\C(2) = \spn_\C \{H, E_+, E_-\}
+\]
+with the brackets
+\[
+ [H, E_{\pm}] = \pm 2E_{\pm},\quad [E_+, E_-] = H.
+\]
+These are known as the \emph{Cartan basis} for $\su_\C(2)$. We will try to mimic this construction in an arbitrary Lie algebra.
+
+Recall that when we studied $\su_\C(2)$, we used the fact that $H$ is a diagonal matrix, and $E_{\pm}$ acted as step operators. However, when we study Lie algebras in general, we want to think of them abstractly, rather than as matrices, so it doesn't make sense to ask if an element is diagonal.
+
+So to develop the corresponding notions, we look at the $\ad$ map associated to them instead. Recall that the adjoint map of $H$ is also diagonal, with eigenvectors given by
+\begin{align*}
+ \ad_H(E_{\pm}) &= \pm 2 E_{\pm}\\
+ \ad_H(H) &= 0.
+\end{align*}
+This is the structure we are trying to generalize.
+
+\begin{defi}[$\ad$-diagonalizable]\index{$\ad$-diagonalizable}
+ Let $\mathfrak{g}$ be a Lie algebra. We say that an element $X \in \mathfrak{g}$ is \emph{$\ad$-diagonalizable} if the associated map
+ \[
+ \ad_X: \mathfrak{g} \to \mathfrak{g}
+ \]
+ is diagonalizable.
+\end{defi}
+
+\begin{eg}
+ In $\su_\C(2)$, we know $H$ is $\ad$-diagonalizable, but $E_{\pm}$ is not.
+\end{eg}
+
+Now we might be tempted to just look at all $\ad$-diagonalizable elements. However, this doesn't work. In the case of $\su(2)$, each of $\sigma_1, \sigma_2, \sigma_3$ is $\ad$-diagonalizable, but we only want to pick one of them as our $H$. Instead, what we want is the following:
+
+\begin{defi}[Cartan subalgebra]\index{Cartan subalgebra}
+ A \emph{Cartan subalgebra} $\mathfrak{h}$ of $\mathfrak{g}$ is a maximal abelian subalgebra containing only $\ad$-diagonalizable elements.
+\end{defi}
+A Cartan subalgebra always exists, since the dimension of $\mathfrak{g}$ is finite, and the trivial subalgebra $\{0\} \subseteq \mathfrak{g}$ is certainly abelian and contains only $\ad$-diagonalizable elements. However, as we have seen, this is not necessarily unique. Fortunately, we will later see that in fact all possible Cartan subalgebras have the same dimension, and the dimension of $\mathfrak{h}$ is called the \term{rank} of $\mathfrak{g}$. % verify
+
+From now on, we will just assume that we have fixed one such Cartan subalgebra.
+
+It turns out that Cartan subalgebras satisfy a stronger property.
+\begin{prop}
+ Let $\mathfrak{h}$ be a Cartan subalgebra of $\mathfrak{g}$, and let $X \in \mathfrak{g}$. If $[X, H] = 0$ for all $H \in \mathfrak{h}$, then $X \in \mathfrak{h}$.
+\end{prop}
+Note that this does not follow immediately from $\mathfrak{h}$ being maximal, because maximality only says that $\ad$-diagonalizable elements satisfy that property.
+
+\begin{proof}
+ Omitted. % include
+\end{proof}
+
+
+\begin{eg}
+ In the case of $\su_\C(2)$, one possible Cartan subalgebra is
+ \[
+ \mathfrak{h} = \spn_\C \{H\}.
+ \]
+ However, recall our basis is given by
+ \begin{align*}
+ H &= \sigma_3\\
+ E_{\pm} &= \frac{1}{2}(\sigma_1 \pm i \sigma_2),
+ \end{align*}
+ where the $\sigma_i$ are the Pauli matrices. Then, by symmetry, we know that $\sigma_1 = E_+ + E_-$ gives an equally good Cartan subalgebra, and so does $\sigma_2$. So we have many choices, but they all have the same dimension.
+\end{eg}
+
+Now we know that everything in $\mathfrak{h}$ commutes with each other, i.e.\ for any $H, H' \in \mathfrak{h}$, we have
+\[
+ [H, H'] = 0.
+\]
+Since $\ad$ is a Lie algebra representation, it follows that
+\[
+ \ad_{H} \circ \ad_{H'} - \ad_{H'} \circ \ad_H = 0.
+\]
+In other words, all these $\ad$ maps commute. By assumption, we know each $\ad_H$ is diagonalizable. So we know they are in fact \emph{simultaneously} diagonalizable. So $\mathfrak{g}$ is spanned by simultaneous eigenvectors of the $\ad_H$. Can we find a basis of eigenvectors?
+
+We know that everything in $\mathfrak{h}$ is a zero-eigenvector of $\ad_H$ for all $H \in \mathfrak{h}$, since for $H, H' \in\mathfrak{h}$, we have
+\[
+ \ad_{H}(H') = [H, H'] = 0.
+\]
+We can arbitrarily pick a basis
+\[
+ \{H^i: i = 1, \cdots, r\},
+\]
+where $r = \dim \mathfrak{h}$ is the rank of $\mathfrak{g}$. Moreover, by maximality, there are no other eigenvectors that are killed by all $H \in \mathfrak{h}$.
+
+We are now going to label the remaining eigenvectors by their eigenvalue. Given any eigenvector $E \in \mathfrak{g}$ and $H \in \mathfrak{h}$, we have
+\[
+ \ad_H(E) = [H, E] = \alpha(H) E
+\]
+for some constant $\alpha(H)$ depending on $H$ (and $E$). We call $\alpha: \mathfrak{h} \to \C$ the \term{root} of $E$. We will use the following fact without proof:
+\begin{fact}
+ The simultaneous eigenvectors of $\mathfrak{h}$ with non-zero roots are non-degenerate, i.e.\ there is a unique (up to scaling) eigenvector for each such root.
+\end{fact}
+Thus, we can refer to this eigenvector unambiguously by $E^\alpha$, where $\alpha$ designates the root.
+
+What are these roots $\alpha$? It is certainly a \emph{function} $\mathfrak{h} \to \C$, but it is actually a linear map! Indeed, we have
+\begin{align*}
+ \alpha(H + H')E &= [H + H', E] \\
+ &= [H, E] + [H', E] \\
+ &= \alpha(H) E + \alpha(H') E \\
+ &= (\alpha(H) + \alpha(H'))E,
+\end{align*}
+by linearity of the bracket, and homogeneity $\alpha(\lambda H) = \lambda \alpha(H)$ follows in the same way.
+
+We write $\Phi$\index{$\Phi$} for the collection of all roots. So we can write the remaining basis eigenvectors as
+\[
+ \{E^\alpha: \alpha \in \Phi\}.
+\]
+\begin{eg}
+ In the case of $\su(2)$, the roots are $\pm 2$, and the eigenvectors are $E_{\pm}$.
+\end{eg}
+
+We can now define a \term{Cartan-Weyl basis} for $\mathfrak{g}$ given by
+\[
+ \mathcal{B} = \{H^i: i = 1, \cdots, r\} \cup \{E^\alpha: \alpha \in \Phi\}.
+\]
+Recall that we have a Killing form
+\[
+ \kappa(X, Y) = \frac{1}{\mathcal{N}} \tr(\ad_X \circ \ad_Y),
+\]
+where $X, Y \in \mathfrak{g}$. Here we put in a normalization factor $\mathcal{N}$ for convenience later on. Since $\mathfrak{g}$ is simple, it is in particular semi-simple. So $\kappa$ is non-degenerate.
+
+We are going to evaluate $\kappa$ in the Cartan-Weyl basis.
+
+\begin{lemma}
+ Let $H \in \mathfrak{h}$ and $\alpha \in \Phi$. Then
+ \[
+ \kappa(H, E^\alpha) = 0.
+ \]
+\end{lemma}
+\begin{proof}
+ Let $H' \in \mathfrak{h}$. Then
+ \begin{align*}
+ \alpha(H')\kappa(H, E^\alpha) &= \kappa(H, \alpha(H') E^\alpha) \\
+ &= \kappa(H, [H', E^\alpha])\\
+ &= -\kappa([H', H], E^\alpha)\\
+ &= -\kappa(0, E^\alpha)\\
+ &= 0.
+ \end{align*}
+ But since $\alpha \not= 0$, there is some $H'$ such that $\alpha(H') \not= 0$. So we must have $\kappa(H, E^\alpha) = 0$.
+\end{proof}
+
+\begin{lemma}
+ For any roots $\alpha, \beta \in \Phi$ with $\alpha + \beta \not= 0$, we have
+ \[
+ \kappa(E^\alpha, E^\beta) = 0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Again let $H \in \mathfrak{h}$. Then we have
+ \begin{align*}
+ (\alpha(H) + \beta(H)) \kappa(E^\alpha, E^\beta) &= \kappa([H, E^\alpha], E^\beta) + \kappa(E^\alpha, [H, E^\beta])\\
+ &= 0,
+ \end{align*}
+ where the final line comes from the invariance of the Killing form. Since $\alpha + \beta$ does not vanish by assumption, we must have $\kappa(E^\alpha, E^\beta) = 0$.
+\end{proof}
+
+\begin{lemma}
+ If $H \in \mathfrak{h}$, then there is some $H' \in \mathfrak{h}$ such that $\kappa(H, H') \not= 0$.
+\end{lemma}
+
+\begin{proof}
+ Given an $H$, since $\kappa$ is non-degenerate, there is some $X \in \mathfrak{g}$ such that $\kappa (H, X) \not= 0$. Write $X = H' + E$, where $H' \in \mathfrak{h}$ and $E$ is in the span of the $E^\alpha$. Then by the first lemma,
+ \[
+ 0\not= \kappa(H, X) = \kappa(H, H') + \kappa(H, E) = \kappa(H, H'). \qedhere
+ \]
+\end{proof}
+What does this tell us? $\kappa$ started life as a non-degenerate inner product on $\mathfrak{g}$. But now we know that $\kappa$ is a non-degenerate inner product on $\mathfrak{h}$. By non-degeneracy, we can invert it \emph{within $\mathfrak{h}$}.
+
+In coordinates, we can find some $\kappa^{ij}$ such that
+\[
+ \kappa(e_i H^i, e_j' H^j) = \kappa^{ij} e_i e_j'
+\]
+for any $e_i$, $e_j'$. The fact that the inner product is non-degenerate means that we can invert the matrix $\kappa$, and find some $(\kappa^{-1})_{ij}$ such that
+\[
+ (\kappa^{-1})_{ij} \kappa^{jk} = \delta_i\!^k.
+\]
+Since $\kappa^{-1}$ is non-degenerate, this gives a non-degenerate inner product on $\mathfrak{h}^*$. In particular, this gives us an inner product between the roots! So given two roots $\alpha, \beta \in \Phi \subseteq \mathfrak{h}^*$, we write the inner product as
+\[
+ (\alpha, \beta) = (\kappa^{-1})_{ij} \alpha^i \beta^j,
+\]
+%How can we describe this without coordinates? Recall that whenever we have a non-degenerate inner product, we obtain an isomorphism between a vector space and its dual. Given by
+%\[
+% H \in \mathfrak{h} \mapsto A(H) \in \mathfrak{h}^*
+%\]
+%gievn by
+%\[
+% A(H)(\ph) = \kappa(H, \ph).
+%\]
+where $\alpha^i := \alpha(H^i)$. We will later show that the inner products of roots will always be real, and hence we can talk about the ``geometry'' of roots.
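+
+We can evaluate this inner product in our favourite example. Here we take the normalization $\mathcal{N} = 1$ in the Killing form.
+\begin{eg}
+ For $\su_\C(2)$, the Cartan subalgebra is spanned by $H^1 = H$, and $\ad_H$ has eigenvalues $2, -2, 0$ on $E_+, E_-, H$ respectively. So
+ \[
+ \kappa^{11} = \tr(\ad_H \circ \ad_H) = 2^2 + (-2)^2 = 8,\quad (\kappa^{-1})_{11} = \frac{1}{8}.
+ \]
+ The root $\alpha$ of $E_+$ has component $\alpha^1 = \alpha(H) = 2$, so
+ \[
+ (\alpha, \alpha) = (\kappa^{-1})_{11} \alpha^1 \alpha^1 = \frac{4}{8} = \frac{1}{2},
+ \]
+ which is indeed real.
+\end{eg}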
+
+We note the following final result:
+\begin{lemma}
+ Let $\alpha \in \Phi$. Then $-\alpha \in \Phi$. Moreover,
+ \[
+ \kappa(E^\alpha, E^{-\alpha}) \not= 0.
+ \]
+\end{lemma}
+This holds for stupid reasons.
+
+\begin{proof}
+ We know that
+ \[
+ \kappa(E^\alpha, E^\beta) = \kappa(E^\alpha, H^i) = 0
+ \]
+ for all $\beta \not= -\alpha$ and all $i$. But $\kappa$ is non-degenerate, and $\{E^\beta, H^i\}$ span $\mathfrak{g}$. So there must be some $E^{-\alpha}$ in the basis set, and
+ \[
+ \kappa(E^\alpha, E^{-\alpha}) \not= 0. \qedhere
+ \]
+\end{proof}
+
+So far, we know that
+\begin{align*}
+ [H^i, H^j] &= 0\\
+ [H^i, E^\alpha] &= \alpha^i E^\alpha
+\end{align*}
+for all $\alpha \in \Phi$ and $i, j = 1, \cdots, r$. Now it remains to evaluate $[E^\alpha, E^\beta]$.
+
+Recall that in the case of $\su_\C(2)$, we had
+\[
+ [E_+, E_-] = H.
+\]
+What can we get here? For any $H \in \mathfrak{h}$ and $\alpha, \beta \in \Phi$, we have
+\begin{align*}
+ [H, [E^\alpha, E^\beta]] &= -[E^\alpha, [E^\beta, H]] - [E^\beta, [H, E^\alpha]]\\
+ &= (\alpha(H)+ \beta(H))[E^\alpha, E^\beta].
+\end{align*}
+Now if $\alpha + \beta \not= 0$, then either $[E^\alpha, E^\beta] = 0$, or $\alpha + \beta \in \Phi$ and
+\[
+ [E^\alpha, E^\beta] = N_{\alpha, \beta} E^{\alpha + \beta}
+\]
+for some $N_{\alpha, \beta}$.
+
+What if $\alpha + \beta = 0$? We claim that this time, $[E^\alpha, E^{-\alpha}] \in \mathfrak{h}$. Indeed, for any $H \in \mathfrak{h}$, we have
+\begin{align*}
+ [H, [E^\alpha, E^{-\alpha}]] &= [[H, E^\alpha], E^{-\alpha}] + [[E^{-\alpha}, H], E^\alpha]\\
+ &= \alpha(H) [E^\alpha, E^{-\alpha}] + \alpha(H) [E^{-\alpha}, E^\alpha]\\
+ &= 0.
+\end{align*}
+Since $H$ was arbitrary, by the (strong) maximality property of $\mathfrak{h}$, we know that $[E^\alpha, E^{-\alpha}] \in \mathfrak{h}$.
+
+Now we can compute
+\begin{align*}
+ \kappa([E^\alpha, E^{-\alpha}], H) &= \kappa(E^\alpha, [E^{-\alpha}, H])\\
+ &= \alpha(H) \kappa(E^\alpha, E^{-\alpha}).
+\end{align*}
+We can view this as an equation for $[E^\alpha, E^{-\alpha}]$. Now since we know that $[E^\alpha, E^{-\alpha}] \in \mathfrak{h}$, and the Killing form is non-degenerate when restricted to $\mathfrak{h}$, we know $[E^\alpha, E^{-\alpha}]$ is uniquely determined by this relation.
+
+We thus define the normalized element
+\[
+ H^\alpha = \frac{[E^\alpha, E^{-\alpha}]}{\kappa(E^\alpha, E^{-\alpha})}.
+\]
+Then our equation tells us
+\[
+ \kappa(H^\alpha, H) = \alpha(H).
+\]
Writing $H^\alpha$ and $H$ in components as
\[
 H^\alpha = e_i^\alpha H^i,\quad H = e_i H^i,
\]
the equation reads
+\[
+ \kappa^{ij}e^\alpha_i e_j = \alpha^i e_i.
+\]
+Since the $e_j$ are arbitrary, we know
+\[
+ e^\alpha_i = (\kappa^{-1})_{ij} \alpha^j.
+\]
+So we know
+\[
+ H^\alpha = (\kappa^{-1})_{ij} \alpha^j H^i.
+\]
+We now have a complete set of relations:
+\begin{thm}
+ \begin{align*}
+ [H^i, H^j] &= 0\\
+ [H^i, E^\alpha] &= \alpha^i E^\alpha\\
+ [E^\alpha, E^\beta] &=
+ \begin{cases}
+ N_{\alpha, \beta}E^{\alpha + \beta} & \alpha + \beta \in \Phi\\
+ \kappa(E^\alpha, E^\beta) H^\alpha & \alpha + \beta = 0\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \end{align*}
+\end{thm}
+Now we notice that there are special elements $H^\alpha$ in the Cartan subalgebra associated to the roots. We can compute the brackets as
+\begin{align*}
+ [H^\alpha, E^\beta] &= (\kappa^{-1})_{ij} \alpha^i [H^j, E^\beta]\\
+ &= (\kappa^{-1})_{ij}\alpha^i \beta^j E^\beta\\
+ &= (\alpha, \beta) E^\beta,
+\end{align*}
+where we used the inner product on the dual space $\mathfrak{h}^*$ induced by the Killing form $\kappa$.
+
+Note that so far, we have picked $H^i$ and $E^\alpha$ arbitrarily. Any scalar multiple of them would have worked as well for what we did above. However, it is often convenient to pick a normalization such that the numbers we get turn out to look nice. It turns out that the following normalization is useful:
+\begin{align*}
+ e^\alpha &= \sqrt{\frac{2}{(\alpha, \alpha) \kappa(E^\alpha, E^{-\alpha})}} E^\alpha\\
+ h^\alpha &= \frac{2}{(\alpha, \alpha)} H^\alpha.
+\end{align*}
Here it is important that $(\alpha, \alpha) \not= 0$; we will only prove this in the next chapter, where the proof fits in more naturally.
+
+Under this normalization, we have
+\begin{align*}
+ [h^\alpha, h^\beta] &= 0\\
+ [h^\alpha, e^\beta] &= \frac{2(\alpha, \beta)}{(\alpha, \alpha)} e^\beta\\
+ [e^\alpha, e^\beta] &=
+ \begin{cases}
+ n_{\alpha\beta} e^{\alpha + \beta} & \alpha + \beta \in \Phi\\
+ h^\alpha & \alpha + \beta = 0\\
+ 0 & \text{otherwise}
+ \end{cases}
+\end{align*}
Note that the number of roots is $d - r$, where $d$ is the dimension of $\mathfrak{g}$ and $r$ is its rank, and this is typically greater than $r$. So in general, there are too many of them to be a basis, even if they spanned $\mathfrak{h}^*$ (we don't know that yet). However, we are still allowed to talk about them, and the above relations are still true. It's just that they might not specify everything about the Lie algebra.
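\begin{eg}
 As a quick sanity check: for $\su_\C(2)$, we have $d = 3$ and $r = 1$, so there are $3 - 1 = 2$ roots, namely $\pm \alpha$, corresponding to the step operators $E_\pm$. For $\mathcal{L}_\C(\SU(3))$, we have $d = 8$ and $r = 2$, giving $6$ roots, which is indeed more than $r = 2$.
\end{eg}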
+
+\subsection{Things are real}
+We are now going to prove that many things are real. When we do so, we can then restrict to a real form of $\mathfrak{h}$, and then we can talk about their geometry.
+
+The first result we want to prove in this section is the following:
+\begin{thm}
+ \[
+ (\alpha, \beta) \in \R
+ \]
+ for all $\alpha, \beta \in \Phi$.
+\end{thm}
+
+To prove this, we note that any Lie algebra contains many copies of $\su(2)$. If $\alpha \in \Phi$, then $-\alpha \in \Phi$. For each pair $\pm \alpha \in \Phi$, we can consider the subalgebra with basis $\{h^\alpha, e^\alpha, e^{-\alpha}\}$. Then we have
+\begin{align*}
+ [h^\alpha, e^{\pm \alpha}] &= \pm 2 e^{\pm \alpha}\\
+ [e^{\alpha}, e^{-\alpha}] &= h^\alpha.
+\end{align*}
+These are exactly the relations for the Lie algebra of $\su(2)$ (or rather, $\su_\C(2)$, but we will not write the $_\C$ every time). So this gives a subalgebra isomorphic to $\su(2)$. We call this $\su(2)_\alpha$\index{$\su(2)_\alpha$}.
+
+It turns out we also get a lot of representations of $\su(2)_\alpha$.
+\begin{defi}[String]\index{string}\index{$\alpha$-string passing through $\beta$}
+ For $\alpha, \beta \in \Phi$, we define the \emph{$\alpha$-string passing through $\beta$} to be
+ \[
+ S_{\alpha, \beta} = \{\beta + \rho\alpha \in \Phi: \rho \in \Z\}.
+ \]
+\end{defi}
+We will consider the case where $\beta$ is not proportional to $\alpha$. The remaining case, where we may wlog take $\beta = 0$, is left as an exercise in the example sheet.
+
+We then have a corresponding vector subspace
+\[
+ V_{\alpha, \beta} = \spn_\C\{e^{\beta + \rho \alpha}: \beta + \rho \alpha \in S_{\alpha,\beta}\}.
+\]
+Consider the action of $\su(2)_\alpha$ on $V_{\alpha, \beta}$. We have
+\[
+ [h^\alpha, e^{\beta + \rho \alpha}] = \frac{2(\alpha, \beta + \rho \alpha)}{(\alpha, \alpha)}e^{\beta + \rho \alpha} = \left(\frac{2 (\alpha, \beta)}{(\alpha, \alpha)} + 2\rho\right)e^{\beta + \rho \alpha}.\tag{$*$}
+\]
+We also have
+\[
+ [e^{\pm \alpha}, e^{\beta + \rho \alpha}] \propto
+ \begin{cases}
+ e^{\beta + (\rho \pm 1)\alpha} & \beta + (\rho \pm 1) \alpha \in \Phi\\
+ 0 & \text{otherwise}
+ \end{cases} \in V_{\alpha, \beta}.
+\]
+So $V_{\alpha, \beta}$ is invariant under the action of $\su(2)_\alpha$. So $V_{\alpha, \beta}$ is the representation space for some representation of $\su(2)_\alpha$.
+
+Moreover, $(*)$ tells us the weight set of this representation. We have
+\[
 S = \left\{ \frac{2(\alpha , \beta)}{(\alpha, \alpha)} + 2\rho: \rho \in \Z,\ \beta + \rho \alpha \in \Phi\right\}.
+\]
+This is not to be confused with the string $S_{\alpha, \beta}$ itself!
+
+Since the Lie algebra itself is finite-dimensional, this representation must be finite-dimensional. We also see from the formula that the weights are non-degenerate and are spaced by $2$. So it must be an irreducible representation of $\su(2)$. Then if its highest weight is $\Lambda \in \Z$, we have
+\[
+ S = \{-\Lambda, -\Lambda + 2, \cdots, +\Lambda\}.
+\]
+What does this tell us? We know that the possible values of $\rho$ are bounded above and below, so we can write
+\[
 S_{\alpha, \beta} = \left\{ \beta + \rho \alpha \in \Phi: n_- \leq \rho \leq n_+\right\}
+\]
+for some $n_{\pm} \in \Z$. In particular, we know that
+\begin{prop}
+ For any $\alpha, \beta \in \Phi$, we have
+ \[
+ \frac{2(\alpha, \beta)}{(\alpha,\alpha)} \in \Z.
+ \]
+\end{prop}
+
+\begin{proof}
+ For $\rho = n_{\pm}$, we have
+ \[
+ \frac{2(\alpha, \beta)}{(\alpha,\alpha)} + 2n_- = -\Lambda \quad \text{ and } \quad \frac{2(\alpha, \beta)}{(\alpha,\alpha)} + 2n_+ = \Lambda.
+ \]
+ Adding the two equations yields
+ \[
+ \frac{2(\alpha, \beta)}{(\alpha,\alpha)} = - (n_+ + n_-) \in \Z. \qedhere
+ \]
+\end{proof}
+
+We are next going to prove that the inner products themselves are in fact real. This will follow from the following lemma:
+\begin{lemma}
+ We have
+ \[
+ (\alpha, \beta) = \frac{1}{\mathcal{N}} \sum_{\delta \in \Phi} (\alpha, \delta) (\delta, \beta),
+ \]
+ where $\mathcal{N}$ is the normalization factor appearing in the Killing form
+ \[
+ \kappa(X, Y) = \frac{1}{\mathcal{N}} \tr(\ad_X \circ \ad_Y).
+ \]
+\end{lemma}
+
+\begin{proof}
+ We pick the Cartan-Weyl basis, with
+ \[
+ [H^i, E^\delta] = \delta^i E^\delta
+ \]
+ for all $i = 1, \cdots, r$ and $\delta \in \Phi$. Then the inner product is defined by
+ \[
+ \kappa^{ij} = \kappa(H^i, H^j) = \frac{1}{\mathcal{N}} \tr [\ad_{H^i} \circ \ad_{H^j}].
+ \]
+ But we know that these matrices $\ad_{H^i}$ are diagonal in the Cartan-Weyl basis, and the non-zero diagonal entries are exactly the $\delta^i$. So we can write this as
+ \[
+ \kappa^{ij} = \frac{1}{\mathcal{N}} \sum_{\delta \in \Phi} \delta^i \delta^j.
+ \]
+ Now recall that our inner product was defined by
+ \[
+ (\alpha, \beta) = \alpha^i \beta^j (\kappa^{-1})_{ij} = \kappa^{ij} \alpha_i \beta_j,
+ \]
+ where we define
+ \[
+ \beta_j = (\kappa^{-1})_{jk} \beta^k.
+ \]
+ Putting in our explicit formula for the $\kappa^{ij}$, this is
+ \[
+ (\alpha, \beta) = \frac{1}{\mathcal{N}}\sum_{\delta \in \Phi} \alpha_i \delta^i \delta^j \beta_j = \frac{1}{\mathcal{N}}\sum_{\delta \in \Phi}(\alpha, \delta) (\delta, \beta). \qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ \[
+ (\alpha, \beta) \in \R
+ \]
+ for all $\alpha, \beta \in \Phi$.
+\end{cor}
+
+\begin{proof}
+ We write
+ \[
+ R_{\alpha, \beta} = \frac{2(\alpha, \beta)}{(\alpha, \alpha)} \in \Z.
+ \]
 Then the previous formula tells us
 \[
  \frac{2}{(\beta, \beta)} R_{\alpha, \beta} = \frac{1}{\mathcal{N}} \sum_{\delta \in \Phi} R_{\alpha, \delta} R_{\beta, \delta}.
 \]
 Taking $\alpha = \beta$, the left-hand side becomes $\frac{4}{(\beta, \beta)}$, while the right-hand side is $\frac{1}{\mathcal{N}}$ times a sum of squares of integers, which is real and non-zero (the $\delta = \beta$ term alone contributes $R_{\beta, \beta}^2 = 4$). So $(\beta, \beta)$ must be real. It then follows that $(\alpha, \beta) = \frac{1}{2} R_{\alpha, \beta}(\alpha, \alpha)$ is real as well, since $R_{\alpha, \beta}$ is an integer.
+\end{proof}
+
+\subsection{A real subalgebra}
+Let's review what we know so far. The roots $\alpha \in \Phi$ are elements of $\mathfrak{h}^*$. In general, the roots will not be linearly independent elements of $\mathfrak{h}^*$, because there are too many of them. However, it is true that the roots span $\mathfrak{h}^*$.
+
+\begin{prop}
+ The roots $\Phi$ span $\mathfrak{h}^*$. In particular, we know
+ \[
+ |\Phi| \geq \dim \mathfrak{h}^*.
+ \]
+\end{prop}
+
+\begin{proof}
+ Suppose the roots do not span $\mathfrak{h}^*$. Then the space spanned by the roots would have a non-trivial orthogonal complement. So we can find $\lambda \in \mathfrak{h}^*$ such that $(\lambda, \alpha) = 0$ for all $\alpha \in \Phi$.
+ We now define
+ \[
+ H_{\lambda} = \lambda_i H^i \in \mathfrak{h}.
+ \]
+ Then as usual we have
+ \[
+ [H_\lambda, H] = 0\text{ for all }H \in \mathfrak{h}.
+ \]
+ Also, we know
+ \[
 [H_\lambda, E^\alpha] = (\lambda, \alpha) E^\alpha = 0
+ \]
+ for all roots $\alpha \in \Phi$ by assumption. So $H_\lambda$ commutes with everything in the Lie algebra. This would make $\bra H_\lambda\ket$ a non-trivial ideal, which is a contradiction since $\mathfrak{g}$ is simple.
+\end{proof}
+Since the $\alpha \in \Phi$ span, we can find a basis $\{\alpha_{(i)} \in \Phi: i = 1, \cdots, r\}$ of roots. Again, this choice is arbitrary. We now define a \emph{real} vector subspace $\mathfrak{h}_\R^* \subseteq \mathfrak{h}^*$ by
+\[
+ \mathfrak{h}_\R^* = \spn_\R\{\alpha_{(i)}: i = 1, \cdots, r\}.
+\]
+One might be worried that this would depend on our choice of the basis $\alpha_{(i)}$, which was arbitrary. However, it does not, since any choice will give us the same space:
+\begin{prop}
+ $\mathfrak{h}_\R^*$ contains all roots.
+\end{prop}
+So $\mathfrak{h}_\R^*$ is alternatively the real span of all roots.
+
+\begin{proof}
 We know that $\mathfrak{h}^*$ is spanned by the $\alpha_{(i)}$ as a complex vector space. So given any root $\beta \in \Phi$, we can find some $\beta^i \in \C$ such that
+ \[
+ \beta = \sum_{i = 1}^r \beta^i \alpha_{(i)}.
+ \]
+ Taking the inner product with $\alpha_{(j)}$, we know
+ \[
+ (\beta, \alpha_{(j)}) = \sum_{i = 1}^r \beta^i(\alpha_{(i)}, \alpha_{(j)}).
+ \]
+ We now use the fact that the inner products are all real! So $\beta^i$ is the solution to a set of real linear equations, and the equations are non-degenerate since the $\alpha_{(i)}$ form a basis and the Killing form is non-degenerate. So $\beta^i$ must be real. So $\beta \in \mathfrak{h}^*_\R$.
+\end{proof}
+
+Now the inner product of any two elements of $\mathfrak{h}_\R^*$ is real, since an element in $\mathfrak{h}_\R^*$ is a real linear combination of the $\alpha_{(i)}$, and the inner product of the $\alpha_{(i)}$ is always real.
+
+\begin{prop}
+ The Killing form induces a positive-definite inner product on $\mathfrak{h}_\R^*$.
+\end{prop}
+
+\begin{proof}
 It remains to show that $(\lambda, \lambda) \geq 0$ for all $\lambda$, with equality iff $\lambda = 0$. Since $\lambda \in \mathfrak{h}_\R^*$ is a real linear combination of roots, each $(\lambda, \delta)$ is real, and we can write
+ \[
+ (\lambda, \lambda) = \frac{1}{\mathcal{N}} \sum_{\delta \in \Phi}(\lambda, \delta)^2 \geq 0.
+ \]
+ If this vanishes, then $(\lambda, \delta) = 0$ for all $\delta \in \Phi$. But the roots span, so this implies that $\lambda$ kills everything, and is thus $0$ by non-degeneracy.
+\end{proof}
+
+Now this $\mathfrak{h}_\R^*$ is just like any other real inner product space we know and love from IA Vectors and Matrices, and we can talk about the lengths and angles between vectors.
+
+\begin{defi}[Norm of root]\index{norm!of root}
+ Let $\alpha \in \Phi$ be a root. Then its \emph{length} is
+ \[
+ |\alpha| = \sqrt{(\alpha, \alpha)} > 0.
+ \]
+\end{defi}
+
+Then for any $\alpha, \beta$, there is some ``angle'' $\varphi \in [0, \pi]$ between them given by
+\[
+ (\alpha, \beta) = |\alpha||\beta| \cos \varphi.
+\]
+Now recall that we had the quantization result
+\[
+ \frac{2(\alpha, \beta)}{(\alpha, \alpha)} \in \Z.
+\]
+Then in terms of the lengths, we have
+\[
+ \frac{2 |\beta|}{|\alpha|} \cos \varphi \in \Z.
+\]
+Since the quantization rule is not symmetric in $\alpha, \beta$, we obtain a second quantization constraint
+\[
+ \frac{2|\alpha|}{|\beta|} \cos \varphi \in \Z.
+\]
+Since the product of two integers is an integer, we know that
+\[
+ 4 \cos^2 \varphi \in \Z.
+\]
+So
+\[
 \cos \varphi = \pm \frac{\sqrt{n}}{2}, \quad n \in \{0, 1, 2, 3, 4\}.
+\]
+So we have some boring solutions
+\[
+ \varphi = 0, \frac{\pi}{2}, \pi,
+\]
+and non-boring ones
+\[
+ \varphi = \frac{\pi}{6}, \frac{\pi}{4}, \frac{\pi}{3}, \frac{2\pi}{3}, \frac{3 \pi}{4}, \frac{5 \pi}{6}.
+\]
+These are the only possibilities!
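To spell out the correspondence: $n = 0$ gives $\varphi = \frac{\pi}{2}$; $n = 4$ gives $\cos \varphi = \pm 1$, i.e.\ $\varphi = 0, \pi$; and $n = 1, 2, 3$ give $\cos \varphi = \pm\frac{1}{2}, \pm\frac{1}{\sqrt{2}}, \pm\frac{\sqrt{3}}{2}$, i.e.\ $\varphi = \frac{\pi}{3}, \frac{2\pi}{3}$; $\frac{\pi}{4}, \frac{3\pi}{4}$; and $\frac{\pi}{6}, \frac{5\pi}{6}$ respectively.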
+
+\subsection{Simple roots}
Now we have a lot of roots that span $\mathfrak{h}^*_\R$, but they obviously do not form a basis. For example, whenever $\alpha \in \Phi$, so is $-\alpha$. So it might be helpful to get rid of half of these things. The divide is again rather arbitrary.
+
+To do so, we pick a hyperplane in $\mathfrak{h}_\R^* \cong \R^r$, i.e.\ a subspace of dimension $r - 1$. Since there are finitely many roots, we can pick it so that it doesn't contain any root. Then the plane divides $\mathfrak{h}_\R^*$ into $2$ sides. We then pick a distinguished side, and say that $\alpha$ is ``\emph{positive}\index{positive root}'' if it lies in that side, and \emph{negative}\index{negative root} otherwise.
+
Then if $\alpha$ is positive, $-\alpha$ is negative. More interestingly, if $\alpha, \beta$ are positive roots, then $\alpha + \beta$ also lies on the positive side of the hyperplane, so if it is a root, it is positive. Similarly, if they are both negative, then so is their sum.
+
+It turns out restricting to positive roots is not enough. However, the following trick does the job:
+
+\begin{defi}[Simple root]\index{simple root}
+ A \emph{simple root} is a positive root that cannot be written as a sum of two positive roots. We write $\Phi_S$ for the set of simple roots.
+\end{defi}
+
+We can immediately deduce a few properties about these simple roots.
+\begin{prop}
+ Any positive root can be written as a linear combination of simple roots with positive integer coefficients. So every root can be written as a linear combination of simple roots.
+\end{prop}
+
+\begin{proof}
+ Given any positive root, if it cannot be decomposed into a positive sum of other roots, then it is simple. Otherwise, do so, and further decompose the constituents. This will have to stop because there are only finitely many roots, and then you are done.
+\end{proof}
+
+\begin{cor}
+ The simple roots span $\mathfrak{h}^*_{\R}$.
+\end{cor}
+
+To show that they are independent, we need to do a bit of work that actually involves Lie algebra theory.
+\begin{prop}
+ If $\alpha, \beta \in \Phi$ are simple, then $\alpha - \beta$ is \emph{not} a root.
+\end{prop}
+
+\begin{proof}
+ Suppose $\alpha - \beta$ were a root. By swapping $\alpha$ and $\beta$ if necessary, we may wlog assume that $\alpha - \beta$ is a positive root. Then
+ \[
+ \alpha = \beta + (\alpha - \beta)
+ \]
+ is a sum of two positive roots, which is a contradiction.
+\end{proof}
+
+\begin{prop}
+ If $\alpha, \beta \in \Phi_S$, then the $\alpha$-string through $\beta$, namely
+ \[
+ S_{\alpha, \beta} = \{\beta + n \alpha \in \Phi\},
+ \]
+ has length
+ \[
+ \ell_{\alpha\beta} = 1 - \frac{2 (\alpha, \beta)}{(\alpha, \alpha)} \in \N.
+ \]
+\end{prop}
+
+\begin{proof}
+ Recall that there exists $n_{\pm}$ such that
+ \[
 S_{\alpha, \beta} = \{\beta + n \alpha: n_- \leq n \leq n_+\}.
+ \]
+ We have shown before that
+ \[
+ n_+ + n_- = - \frac{2(\alpha, \beta)}{(\alpha, \alpha)} \in \Z.
+ \]
+ In the case where $\alpha, \beta$ are simple roots, we know that $\beta - \alpha$ is not a root. So $n_- \geq 0$. But we know $\beta$ is a root. So we know that $n_- = 0$ and hence
+ \[
+ n_+ = -\frac{2(\alpha, \beta)}{(\alpha, \alpha)} \in \N.
+ \]
+ So there are
+ \[
+ n_+ + 1 = 1 - \frac{2(\alpha, \beta)}{(\alpha, \alpha)}
+ \]
+ things in the string.
+\end{proof}
+From this formula, we learn that
+
+\begin{cor}
+ For any distinct simple roots $\alpha, \beta$, we have
+ \[
+ (\alpha, \beta) \leq 0.
+ \]
+\end{cor}
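Combining this with the quantization of angles, the angle between two distinct simple roots must be one of
\[
 \varphi = \frac{\pi}{2}, \frac{2\pi}{3}, \frac{3\pi}{4}, \frac{5\pi}{6},
\]
since $\cos \varphi \leq 0$, and $\varphi = \pi$ is impossible as two positive roots cannot be anti-parallel. Note also that if $(\alpha, \beta) = 0$, then $\ell_{\alpha\beta} = 1$, so the string is just $\{\beta\}$ and $\beta + \alpha$ is not a root either.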
+
+We are now in a position to check that we do get a basis this way.
+
+\begin{prop}
+ Simple roots are linearly independent.
+\end{prop}
+
+\begin{proof}
+ Suppose we have a non-trivial linear combination $\lambda$ of the simple roots. We write
+ \[
+ \lambda = \lambda_+ - \lambda_- = \sum_{i \in I_+} c_i \alpha_{(i)} - \sum_{j \in I_-} b_j \alpha_{(j)},
+ \]
 where $c_i, b_j \geq 0$ and $I_+, I_-$ are disjoint. Each simple root lies strictly on the positive side of the hyperplane used to define positivity. So if $I_-$ is empty, then $\lambda = \lambda_+$ lies strictly on the positive side as well, and in particular is non-zero; similarly if $I_+$ is empty. So it suffices to focus on the case where both $I_\pm$ are non-empty and $c_i, b_j > 0$, in which case $\lambda_\pm$ are both non-zero.
+
+ Then we have
+ \begin{align*}
+ (\lambda, \lambda) &= (\lambda_+, \lambda_+) + (\lambda_-, \lambda_-) - 2 (\lambda_+, \lambda_-)\\
+ &> -2(\lambda_+, \lambda_-)\\
+ &= -2 \sum_{i \in I_+}\sum_{j \in I_-} c_i b_j (\alpha_{(i)}, \alpha_{(j)})\\
+ &\geq 0,
+ \end{align*}
+ since $(\alpha_{(i)}, \alpha_{(j)}) \leq 0$ for all simple roots $\alpha_{(i)}, \alpha_{(j)}$. So in particular $\lambda$ is non-zero.
+\end{proof}
+
+\begin{cor}
 There are exactly $r = \rank \mathfrak{g}$ simple roots, i.e.
+ \[
+ |\Phi_S| = r.
+ \]
+\end{cor}
+
+\subsection{The classification}
+We now have a (not so) canonical choice of basis of $\mathfrak{h}_\R^*$
+\[
+ \mathcal{B} = \{\alpha \in \Phi_S\} = \{\alpha_{(i)}: i = 1, \cdots, r\}.
+\]
+We want to re-express the Lie algebra in terms of this basis.
+\begin{defi}[Cartan matrix]\index{Cartan matrix}
+ The \emph{Cartan matrix} $A^{ij}$ is defined as the $r \times r$ matrix
+ \[
+ A^{ij} = \frac{2(\alpha_{(i)}, \alpha_{(j)})}{(\alpha_{(j)}, \alpha_{(j)})}.
+ \]
+\end{defi}
+Note that this is \emph{not} a symmetric matrix. We immediately know that $A^{ij} \in \Z$.
+
+For each root $\alpha_{(i)}$, we have an $\su(2)$ subalgebra given by
+\[
+ \{h^i = h^{\alpha_{(i)}}, e_{\pm}^i = e^{\pm \alpha_{(i)}}\}.
+\]
+We then have
+\[
+ [h^i, e_{\pm}^i] = \pm 2 e^i_{\pm},\quad [e_+^i, e_-^i] = h^i.
+\]
+Looking at everything in our Lie algebra, we have the relations
+\begin{align*}
+ [h^i, h^j] &= 0\\
+ [h^i, e^j_{\pm}] &= \pm A^{ji} e^j_{\pm}\\
 [e_+^i, e_-^j] &= \delta_{ij} h^i,
+\end{align*}
+with no summation implied in the second row. Everything here we've seen, except for the last expression $[e_+^i, e_-^j]$. We are claiming that if $i \not= j$, then the bracket in fact vanishes! This just follows from the fact that if $\alpha_{(i)}$ and $\alpha_{(j)}$ are simple, then $\alpha_{(i)} - \alpha_{(j)}$ is not a root.
+
+Note that this does not \emph{a priori} tell us everything about the Lie algebra. We have only accounted for step generators of the form $e_{\pm}^i$, and also we did not mention how, say, $[e_+^i, e_+^j]$ behaves.
+
+We know
+\[
+ [e_+^i, e_+^j] = \ad_{e^i_+}(e_+^j) \propto e^{\alpha_{(i)} + \alpha_{(j)}}
+\]
+if $\alpha_{(i)} + \alpha_{(j)} \in \Phi$. We know that if $\alpha_{(i)} + \alpha_{(j)} \in \Phi$, then it belongs to a string. In general, we have
+\[
+ \ad_{e_+^i}^n (e_+^j) \propto e^{\alpha_{(j)} + n \alpha_{(i)}}
+\]
+if $\alpha_{(j)} + n \alpha_{(i)} \in \Phi$, and we know how long this string is. So we have
+\[
+ (\ad_{e_{\pm}^i})^{1 - A^{ji}} e^j_{\pm} = 0.
+\]
+This is the \term{Serre relation}. It turns out this is all we need to completely characterize the Lie algebra.
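For instance, in the rank $2$ case with $A^{12} = A^{21} = -1$ (the first possibility in the previous example), the Serre relation reads
\[
 (\ad_{e_+^1})^2 e_+^2 = [e_+^1, [e_+^1, e_+^2]] = 0,
\]
i.e.\ the $\alpha_{(1)}$-string through $\alpha_{(2)}$ stops after $\alpha_{(1)} + \alpha_{(2)}$.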
+
+\begin{thm}[Cartan]
+ Any finite-dimensional, simple, complex Lie algebra is uniquely determined by its Cartan matrix.
+\end{thm}
+
+We will not prove this.
+
To achieve the Cartan classification, we need to classify all Cartan matrices, and then reconstruct $\mathfrak{g}$ from the Cartan matrix.
+
+We first do the first part. Recall that
+\[
+ A^{ij} = \frac{2 (\alpha_{(i)}, \alpha_{(j)})}{(\alpha_{(j)}, \alpha_{(j)})} \in \Z.
+\]
+We first read off properties of the Cartan matrix.
+\begin{prop}
+ We always have $A^{ii} = 2$ for $i = 1, \cdots, r$.
+\end{prop}
+
+This is not a symmetric matrix in general, but we do have the following:
+\begin{prop}
+ $A^{ij} = 0$ if and only if $A^{ji} = 0$.
+\end{prop}
+
+We also have
+\begin{prop}
+ $A^{ij} \in \Z_{\leq 0}$ for $i \not= j$.
+\end{prop}
+
+We now get to a less trivial fact:
+\begin{prop}
+ We have $\det A > 0$.
+\end{prop}
+
+\begin{proof}
+ Recall that we defined our inner product as
+ \[
+ (\alpha, \beta) = \alpha^T \kappa^{-1} \beta,
+ \]
+ and we know this is positive definite. So we know $\det \kappa^{-1} > 0$. We now write the Cartan matrix as
+ \[
+ A = \kappa^{-1}D,
+ \]
+ where we have
+ \[
+ D_k^j = \frac{2}{(\alpha_{(j)}, \alpha_{(j)})} \delta^j_k.
+ \]
+ Then we have
+ \[
+ \det D = \prod_j \frac{2}{(\alpha_{(j)}, \alpha_{(j)})} > 0.
+ \]
+ So it follows that
+ \[
+ \det A = \det \kappa^{-1} \det D > 0. \qedhere
+ \]
+\end{proof}
+
+It is an exercise to check that if the Cartan matrix is reducible, i.e.\ it looks like
+\[
+ A =
+ \begin{pmatrix}
+ A^{(1)} & 0\\
+ 0 & A^{(2)}
+ \end{pmatrix},
+\]
+then in fact the Lie algebra is not simple.
+
+So we finally have
+\begin{prop}
+ The Cartan matrix $A$ is irreducible.
+\end{prop}
+
+So in total, we have the following five properties:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $A^{ii} = 2$ for all $i$.
+ \item $A^{ij} = 0$ if and only if $A^{ji} = 0$.
+ \item $A^{ij} \in \Z_{\leq 0}$ for $i \not= j$.
+ \item $\det A > 0$.
+ \item $A$ is irreducible.
+ \end{enumerate}
+\end{prop}
+
+These are hugely restrictive.
+\begin{eg}
 For a rank $1$ Lie algebra, the Cartan matrix has only one entry, and it is on the diagonal. So we must have
+ \[
+ A =
+ \begin{pmatrix}
+ 2
+ \end{pmatrix}.
+ \]
+ This corresponds to the simple Lie algebra $\su(2)$.
+\end{eg}
+
+\begin{eg}
+ For a rank $2$ matrix, we know the diagonals must be $2$, so we must have something of the form
+ \[
+ A =
+ \begin{pmatrix}
+ 2 & m\\
+ \ell & 2
+ \end{pmatrix}.
+ \]
 We know that the off-diagonals are not all zero, but if one is non-zero, then the other also is. So they must be both non-zero. Moreover, since $\det A = 4 - m\ell > 0$, we have $m\ell < 4$. We thus have
+ \[
+ (m, \ell) = (-1, -1), (-1, -2), (-1, -3).
+ \]
+ Note that we do not write out the cases where we swap the two entries, because we will get the same Lie algebra but with a different ordering of the basis.
+\end{eg}
+
+Now we see that we have a redundancy in the description of the Cartan matrix given by permutation. There is a neat solution to this problem by drawing \emph{Dynkin diagrams}.
+\begin{defi}[Dynkin diagram]\index{Dynkin diagram}
+ Given a Cartan matrix, we draw a diagram as follows:
+ \begin{enumerate}
+ \item For each simple root $\alpha_{(i)} \in \Phi_S$, we draw a node
+ \begin{center}
+ \begin{tikzpicture}
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (0, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ \item We join the nodes corresponding to $\alpha_{(i)}, \alpha_{(j)}$ with $A^{ij} A^{ji}$ many lines.
+ \item If the roots have different lengths, we draw an arrow from the longer root to the shorter root. This happens when $A^{ij} \not= A^{ji}$.
+ \end{enumerate}
+\end{defi}
+Note that we wouldn't have to draw too many lines. We have
+\[
+ A^{ij} = \frac{2|\alpha_{(i)}|}{|\alpha_{(j)}|} \cos \varphi_{ij},\quad A^{ji} = \frac{2|\alpha_{(j)}|}{|\alpha_{(i)}|} \cos \varphi_{ij},
+\]
+where $\varphi_{ij}$ is the angle between them. So we have
+\[
+ \cos^2 \varphi_{ij} = \frac{1}{4} A^{ij} A^{ji}.
+\]
+But we know $\cos^2 \varphi_{ij} \in [0, 1]$. So we must have
+\[
+ A^{ij} A^{ji} \in \{0, 1, 2, 3\}.
+\]
+So we have to draw at most $3$ lines, which isn't that bad. Moreover, we have the following information:
+\begin{prop}
+ A simple Lie algebra has roots of at most $2$ distinct lengths.
+\end{prop}
+
+\begin{proof}
+ See example sheet.
+\end{proof}
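Concretely, for distinct simple roots we have $(\alpha_{(i)}, \alpha_{(j)}) \leq 0$, so a single line corresponds to $\varphi_{ij} = \frac{2\pi}{3}$ and equal lengths; two lines to $\varphi_{ij} = \frac{3\pi}{4}$ and a length ratio of $\sqrt{2}$; and three lines to $\varphi_{ij} = \frac{5\pi}{6}$ and a length ratio of $\sqrt{3}$.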
+
+It is an exercise to see that all the information about $A^{ji}$ can be found from the Dynkin diagram.
+
+We can now revisit the case of a rank 2 simple Lie algebra.
+\begin{eg}
+ The Cartan matrices of rank $2$ are given by
+ \[
+ \begin{pmatrix}
+ 2 & -1\\
+ -1 & 2\\
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ 2 & -2\\
+ -1 & 2
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ 2 & -3\\
+ -1 & 2
+ \end{pmatrix}
+ \]
+ These correspond to the Dynkin diagrams
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (3, 0) -- (4, 0);
+ \foreach \x in {3,4} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \draw (0, 0.05) -- (1, 0.05);
+ \draw (0, -0.05) -- (1, -0.05);
+
+ \draw [thick] (0.45, 0.1) -- (0.55, 0) -- (0.45, -0.1);
+
+ \foreach \x in {0,1} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+ \quad
+ \begin{tikzpicture}
+ \draw (0, 0.05) -- (1, 0.05);
+ \draw (0, 0) -- (1, 0);
+ \draw (0, -0.05) -- (1, -0.05);
+
+ \draw [thick] (0.45, 0.1) -- (0.55, 0) -- (0.45, -0.1);
+
+ \foreach \x in {0,1} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+The conditions on the matrices $A^{ij}$ now translate to conditions on the Dynkin diagrams which we will not write out, and it turns out we can classify all the allowed Dynkin diagrams as follows:
+\begin{thm}[Cartan classification]
 The possible Dynkin diagrams are precisely the following infinite families (where $n$ is the number of vertices):
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0) {$A_n:$};
+ \draw (0, 0) -- (2, 0);
+ \draw [dashed] (2, 0) -- (3, 0);
+ \draw (3, 0) -- (4, 0);
+ \foreach \x in {0,1,2,3,4} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0) {$B_n:$};
+ \draw (0, 0) -- (2, 0);
+ \draw [dashed] (2, 0) -- (3, 0);
+ \draw (3, 0.05) -- (4, 0.05);
+ \draw (3, -0.05) -- (4, -0.05);
+
+ \draw [thick] (3.45, 0.1) -- (3.55, 0) -- (3.45, -0.1);
+
+ \foreach \x in {0,1,2,3,4} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0) {$C_n:$};
+ \draw (0, 0) -- (2, 0);
+ \draw [dashed] (2, 0) -- (3, 0);
+ \draw (3, 0.05) -- (4, 0.05);
+ \draw (3, -0.05) -- (4, -0.05);
+
+ \draw [thick] (3.55, 0.1) -- (3.45, 0) -- (3.55, -0.1);
+
+ \foreach \x in {0,1,2,3,4} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0) {$D_n:$};
+ \draw (0, 0) -- (2, 0);
+ \draw [dashed] (2, 0) -- (3, 0);
+ \draw (3, 0) -- (4, 0);
+ \draw (4.5, 0.5) -- (4, 0) -- (4.5, -0.5);
+
+ \foreach \x in {0,1,2,3,4} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (4.5, 0.5) {};
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (4.5, -0.5) {};
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ And there are also five exceptional cases:
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0.5) {$E_6:$};
+ \draw (0, 0) -- (4, 0);
+ \draw (2, 0) -- (2, 1);
+ \foreach \x in {0,1,2,3,4} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (2, 1) {};
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0.5) {$E_7:$};
+ \draw (0, 0) -- (5, 0);
+ \draw (2, 0) -- (2, 1);
+ \foreach \x in {0,1,2,3,4,5} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (2, 1) {};
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0.5) {$E_8:$};
+ \draw (0, 0) -- (6, 0);
+ \draw (2, 0) -- (2, 1);
+ \foreach \x in {0,1,2,3,4,5,6} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (2, 1) {};
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0) {$F_4:$};
+ \draw (0, 0) -- (1, 0);
+ \draw (2, 0) -- (3, 0);
+
+ \draw (1, 0.05) -- (2, 0.05);
+ \draw (1, -0.05) -- (2, -0.05);
+
+ \draw [thick] (1.45, 0.1) -- (1.55, 0) -- (1.45, -0.1);
+
+ \foreach \x in {0,1,2,3} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0) {$G_2:$};
+ \draw (0, 0.05) -- (1, 0.05);
+ \draw (0, 0) -- (1, 0);
+ \draw (0, -0.05) -- (1, -0.05);
+
+ \draw [thick] (0.45, 0.1) -- (0.55, 0) -- (0.45, -0.1);
+
+ \foreach \x in {0,1} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \node at (7, 0) {};
+ \end{tikzpicture}
+ \end{center}
+\end{thm}
+
+\begin{eg}
 The infinite families $A_n, B_n, C_n, D_n$ correspond to the complexified Lie algebras of well-known Lie groups by
+ \begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ Family & Lie Group\\
+ \midrule
+ $A_n$ & $\mathcal{L}_\C(\SU(n + 1))$\\
+ $B_n$ & $\mathcal{L}_\C(\SO(2n + 1))$\\
+ $C_n$ & $\mathcal{L}_\C (\Sp(2n))$\\
+ $D_n$ & $\mathcal{L}_\C (\SO(2n))$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ where the $\Sp(2n)$ are the \term{symplectic matrices}.
+\end{eg}
+
Note that there is some repetition in our list. For example, we have $A_1 = B_1 = C_1 = D_1$ and $B_2 = C_2$. Also, $D_2$ does not give a simple Lie algebra, since its Dynkin diagram is disconnected. We have $D_2 \cong A_1 \oplus A_1$. So we have
+\[
+ \mathcal{L}_\C(\SO(4)) \cong \mathcal{L}_\C(\SU(2)) \oplus \mathcal{L}_\C(\SU(2)).
+\]
+Finally, we also have $D_3 = A_3$, and this reflects an isomorphism
+\[
+ \mathcal{L}_\C(\SU(4)) \cong \mathcal{L}_\C(\SO(6)).
+\]
+Hence, a list without repetitions is given by
+\begin{center}
+ \begin{tabular}{cl}
+ $A_n$ & for $n\geq 1$,\\
+ $B_n$ & for $n\geq 2$,\\
+ $C_n$ & for $n\geq 3$,\\
+ $D_n$ & for $n\geq 4$.
+ \end{tabular}
+\end{center}
+
+This classification is very important in modern theoretical physics, since in many theories, we need to pick a Lie group as, say, our gauge group. So knowing what Lie groups are around lets us know what theories we can have.
+
+\subsection{Reconstruction}
+Now given the Cartan matrix, we want to reconstruct the Lie algebra itself.
+
+Recall that we had a Cartan-Weyl basis
+\[
 \{H^i, E^\alpha: i = 1, \cdots, r;\ \alpha \in \Phi\}.
+\]
+The first thing to do in the reconstruction is to figure out what the set of roots is.
+
+By definition, the Cartan matrix determines simple roots $\alpha_{(i)}$ for $i = 1, \cdots, r$. We can read off the inner products from the Cartan matrix as
+\[
+ A^{ij} = \frac{2(\alpha_{(i)}, \alpha_{(j)})}{(\alpha_{(j)}, \alpha_{(j)})} = \frac{2|\alpha_{(i)}|}{|\alpha_{(j)}|} \cos \varphi_{ij}.
+\]
+This allows us to find the simple roots.
+
+How about the other roots? We can find them by considering the root strings, as we know that the length of the $\alpha_{(i)}$-string through $\alpha_{(j)}$ is given by
+\[
 \ell_{ij} = 1 - A^{ji} \in \N.
+\]
+So we can work out the length of the root string from each simple root. By Cartan's theorem, this gives us all roots.
+
+Instead of going through a formal and general reconstruction, we do an example.
+
+\begin{eg}
+ Consider $\mathfrak{g} = A_2$. We have a Dynkin diagram
+ \begin{center}
+ \begin{tikzpicture}
+ \node at (-1, 0) {$A_2:$};
+ \draw (0, 0) -- (1, 0);
+ \foreach \x in {0,1} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+ \end{center}
+ So we see that the Cartan matrix is
+ \[
+ A =
+ \begin{pmatrix}
+ 2 & -1\\
+ -1 & 2
+ \end{pmatrix}.
+ \]
+ So we know that $\mathfrak{g}$ has two simple roots $\alpha, \beta \in \Phi$, and we know
+ \[
+ A_{12} = A_{21} = \frac{2(\alpha, \beta)}{(\alpha, \alpha)} = \frac{2(\beta, \alpha)}{(\beta, \beta)} = -1.
+ \]
+ So we find that $(\alpha, \alpha) = (\beta, \beta)$, and $\varphi_{\alpha\beta} = \frac{2\pi}{3}$. So we can display the roots as
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (2, 0) node [right] {$\alpha$};
+ \draw [->] (0, 0) -- (-1, 1.732) node [above] {$\beta$};
+ \draw (0.6, 0) arc(0:120:0.6) node [pos=0.5, anchor=south west] {$\frac{2\pi}{3}$};
+ \end{tikzpicture}
+ \end{center}
+ Since $\alpha, \beta \in \Phi_S$, we know $\pm(\alpha - \beta) \not\in \Phi$, and also we have
+ \[
+ \ell_{\alpha, \beta} = \ell_{\beta, \alpha} = 1 - \frac{2 (\alpha, \beta)}{(\alpha, \alpha)} = 2.
+ \]
+ So we have roots $\beta + n \alpha, \alpha + \tilde{n}\beta$ for $n, \tilde{n} \in \{0, 1\}$. So in fact our list of roots contains
+ \[
+ \alpha, \beta, \alpha + \beta \in \Phi.
+ \]
+ These are the positive ones. We know each of these roots has a negative counterpart. So we have more roots
+ \[
+ -\alpha, -\beta, -\alpha - \beta \in \Phi.
+ \]
+ Then by our theorem, we know these are all the roots. We can find the length of $\alpha + \beta$ by
+ \begin{align*}
+ (\alpha + \beta, \alpha + \beta) &= (\alpha, \alpha) + (\beta, \beta) + 2(\alpha, \beta)\\
+ &= (\alpha, \alpha) + (\beta, \beta) - (\beta, \beta)\\
+ &= (\alpha, \alpha).
+ \end{align*}
+ So in fact all roots have the same length. So if we draw all our roots, we get
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (2, 0) node [right] {$\alpha$};
+ \draw [->] (0, 0) -- (-1, 1.732) node [above] {$\beta$};
+ \draw [->] (0, 0) -- (1, 1.732) node [right] {$\alpha + \beta$};
+ \draw [->] (0, 0) -- (-2, 0) node [left] {$-\alpha$};
+ \draw [->] (0, 0) -- (1, -1.732) node [below] {$-\beta$};
+ \draw [->] (0, 0) -- (-1, -1.732) node [below] {$-\alpha - \beta$};
+ \end{tikzpicture}
+ \end{center}
+ So the Cartan-Weyl basis consists of
+ \[
+ \mathcal{B}_{\mathrm{CW}} = \{H^1, H^2, E^{\pm \alpha}, E^{\pm \beta}, E^{\pm(\alpha + \beta)}\}.
+ \]
 This is the complete Cartan-Weyl basis of $A_2$, so $\dim A_2 = 2 + 6 = 8$.
+
+ If we want to find out the brackets of these things, we look at the ones we have previously written down, and use the Jacobi identity many times.
+\end{eg}
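The root-string procedure just carried out can be mechanised. Here is a sketch in Python (an illustration, not part of the course) that grows the positive roots of $A_2$ from the Cartan matrix; roots are represented by their integer coordinates in the simple-root basis, and the string formula $p - q = \sum_i \beta^i A^{ij}$ decides when the $\alpha_{(j)}$-string through $\beta$ extends upwards:

```python
# Cartan matrix of A_2; the procedure itself works for any Cartan matrix.
A = [[2, -1],
     [-1, 2]]
r = len(A)

# Roots are stored by their integer coordinates in the simple-root basis.
simple = [tuple(int(k == i) for k in range(r)) for i in range(r)]
roots = set(simple)                  # the positive roots found so far
frontier = set(simple)

while frontier:
    new = set()
    for b in frontier:
        for j in range(r):
            # p = number of times alpha_(j) can be subtracted from b
            # while staying inside the root set
            p, c = 0, tuple(b[k] - simple[j][k] for k in range(r))
            while c in roots:
                p += 1
                c = tuple(c[k] - simple[j][k] for k in range(r))
            # The alpha_(j)-string through b extends q steps upwards, where
            # p - q = sum_i b^i A^{ij}.
            pairing = sum(b[i] * A[i][j] for i in range(r))
            if p - pairing > 0:      # so b + alpha_(j) is also a root
                new.add(tuple(b[k] + simple[j][k] for k in range(r)))
    new -= roots
    roots |= new
    frontier = new

positive = sorted(roots)
print(positive)   # [(0, 1), (1, 0), (1, 1)], i.e. beta, alpha, alpha + beta
```

Together with their negatives, this recovers all six roots of $A_2$ found above.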
+
\section{Representations of Lie algebras}
We now try to understand the possible representations of Lie algebras. It turns out that these are again pretty restricted.
+
+\subsection{Weights}
+Let $\rho$ be a representation of $\mathfrak{g}$ on $V$. Then it is completely determined by the images
+\begin{align*}
+ H^i &\mapsto \rho(H^i) \in \gl(V)\\
+ E^\alpha &\mapsto \rho(E^\alpha) \in \gl(V).
+\end{align*}
+Again, we know that
+\[
+ [\rho(H^i), \rho(H^j)] = \rho([H^i, H^j]) = 0.
+\]
+So all $\rho(H^i)$ commute, and by linear algebra, we know they have a common eigenvector $v_\lambda$. Again, for each $H \in \mathfrak{h}$, we know
+\[
+ \rho(H) v_\lambda = \lambda(H) v_\lambda
+\]
+for some $\lambda(H)$, and $\lambda$ lives in $\mathfrak{h}^*$.
+\begin{defi}[Weight of representation]\index{weight of representation}\index{representation!weight}
+ Let $\rho: \mathfrak{g} \to \gl(V)$ be a representation of $\mathfrak{g}$. Then if $v_\lambda \in V$ is an eigenvector of $\rho(H)$ for all $H \in \mathfrak{h}$, we say $\lambda\in \mathfrak{h}^*$ is a \emph{weight} of $\rho$, where
+ \[
+ \rho(H) v_\lambda = \lambda(H) v_\lambda
+ \]
+ for all $H \in \mathfrak{h}$.
+
+ The \emph{weight set} $S_\rho$ of $\rho$ is the set of all weights.
+\end{defi}
+For a weight $\lambda$, we write $V_\lambda$ for the subspace that consists of vectors $v$ such that
+\[
+ \rho(H) v = \lambda(H) v.
+\]
+Note that the weights can have multiplicity, i.e.\ $V_\lambda$ need not be $1$-dimensional. We write
+\[
+ m_\lambda = \dim V_\lambda \geq 1
+\]
+for the \term{multiplicity} of the weight.
+
+\begin{eg}
+ By definition, we have
+ \[
 [H^i, E^\alpha] = \alpha^i E^\alpha.
+ \]
+ So we have
+ \[
 \ad_{H^i} E^\alpha = \alpha^i E^\alpha.
+ \]
+ In terms of the adjoint representation, this says
+ \[
 \rho_{\mathrm{adj}} (H^i) E^\alpha = \alpha^i E^\alpha.
+ \]
+ So the roots $\alpha$ are the weights of the adjoint representation.
+\end{eg}
+Recall that for $\su_\C(2)$, we know that the weights are always integers. This will be the case in general as well.
+
+Given any representation $\rho$, we know it has at least one weight $\lambda$ with an eigenvector $v \in V_\lambda$. We'll see what happens when we apply the step operators $\rho(E^\alpha)$ to it, for $\alpha \in \Phi$. We have
+\[
+ \rho(H^i) \rho(E^\alpha)v = \rho(E^\alpha) \rho(H^i) v + [\rho(H^i), \rho(E^\alpha)] v.
+\]
+We know that
+\[
+ [\rho(H^i), \rho(E^\alpha)] = \rho([H^i, E^\alpha]) = \alpha^i \rho(E^\alpha).
+\]
+So we know that if $v \in V_\lambda$, then
+\[
+ \rho(H^i)\rho(E^\alpha)v = (\lambda^i + \alpha^i) \rho(E^\alpha) v.
+\]
+So the weight of the vector has been shifted by $\alpha$. Thus, for all vectors $v \in V_\lambda$, we find that
+\[
+ \rho(E^\alpha) v \in V_{\lambda + \alpha}.
+\]
+However, we do not know a priori if $V_{\lambda + \alpha}$ is a thing at all. If $V_{\lambda + \alpha} = \{0\}$, i.e.\ $\lambda + \alpha$ is not a weight, then we know that $\rho(E^\alpha) v = 0$.
+
+So the Cartan elements $H^i$ preserve the weights, and the step operators $E^\alpha$ increment the weights by $\alpha$.
+
+Consider the action of our favorite $\su(2)_\alpha$ subalgebra on the representation space. In other words, we consider the action of $\{\rho(h^\alpha), \rho(e^\alpha), \rho(e^{-\alpha})\}$ on $V$. Then $V$ becomes the representation space for some representation $\rho_\alpha$ for $\su(2)_\alpha$.
+
+Now we can use what we know about the representations of $\su(2)$ to get something interesting about $V$. Recall that we defined
+\[
+ h^\alpha = \frac{2}{(\alpha, \alpha)} H^\alpha = \frac{2}{(\alpha, \alpha)} (\kappa^{-1})_{ij} \alpha^j H^i.
+\]
+So for any $v \in V_\lambda$, after some lines of algebra, we find that
+\[
+ \rho(h^\alpha)(v) = \frac{2(\alpha, \lambda)}{(\alpha, \alpha)} v.
+\]
+So we know that we must have
+\[
+ \frac{2(\alpha, \lambda)}{(\alpha, \alpha)} \in \Z
+\]
+for all $\lambda \in S_\rho$ and $\alpha \in \Phi$. So in particular, we know that $(\alpha_{(i)}, \lambda) \in \R$ for all simple roots $\alpha_{(i)}$. In particular, it follows that $\lambda \in \mathfrak{h}^*_\R$ (if we write $\lambda$ as a sum of the $\alpha_{(i)}$, then the coefficients are the unique solution to a real system of equations, hence are real).
+
Again, from these calculations, we see that $\bigoplus_{\lambda \in S_\rho} V_\lambda$ is a subrepresentation of $V$. So if $\rho$ is irreducible, then everything is a linear combination of these simultaneous eigenvectors, i.e.\ we must have
+\[
+ V = \bigoplus_{\lambda \in S_\rho} V_\lambda.
+\]
+In particular, this means all $\rho(H)$ are diagonalizable for $H \in \mathfrak{h}$.
+
+\subsection{Root and weight lattices}
+Recall that the simple roots are a basis for the whole root space. In particular, any positive root can be written as a positive sum of simple roots. So for any $\beta \in \Phi$, we can write
+\[
+ \beta = \sum_{i = 1}^r \beta^i \alpha_{(i)},
+\]
+where $\beta^i \in \Z$ for $i = 1, \cdots, r$. Hence all roots lie in a \emph{root lattice}:
+\begin{defi}[Root lattice]\index{root lattice}
+ Let $\mathfrak{g}$ be a Lie algebra with simple roots $\alpha_{(i)}$. Then the \emph{root lattice} is defined as
+ \[
+ \mathcal{L}[\mathfrak{g}] = \spn_\Z\{\alpha_{(i)}: i = 1, \cdots, r\}.
+ \]
+\end{defi}
+Note that here we are taking the integer span, not the real span.
+
+What we want to do is to define an analogous \emph{weight lattice}, where all possible weights lie.
+
+We start by defining some funny normalization of the roots:
+\begin{defi}[Simple co-root]\index{simple co-root}\index{co-root}
+ The \emph{simple co-roots} are
+ \[
+ \check{\alpha}_{(i)} = \frac{2 \alpha_{(i)}}{(\alpha_{(i)},\alpha_{(i)})}.
+ \]
+\end{defi}
+Similarly, we can define the co-root lattice:
+
+\begin{defi}[Co-root lattice]\index{co-root lattice}
+ Let $\mathfrak{g}$ be a Lie algebra with simple roots $\alpha_{(i)}$. Then the \emph{co-root lattice} is defined as
+ \[
+ \check{\mathcal{L}}[\mathfrak{g}] = \spn_\Z\{\check\alpha_{(i)}: i = 1, \cdots, r\}.
+ \]
+\end{defi}
+
+\begin{defi}[Weight lattice]\index{weight lattice}
+ The \emph{weight lattice} $\mathcal{L}_W[\mathfrak{g}]$ is the \emph{dual} to the co-root lattice:
+ \[
+ \mathcal{L}_W [\mathfrak{g}] = \check{\mathcal{L}}^*[\mathfrak{g}] = \{\lambda \in \mathfrak{h}^*_\R: (\lambda, \mu) \in \Z \text{ for all }\mu \in \check{\mathcal{L}}[\mathfrak{g}]\}.
+ \]
+\end{defi}
+So we know $\lambda \in \mathcal{L}_W[\mathfrak{g}]$ iff for all $\alpha_{(i)}$, we have
+\[
+ (\lambda , \check{\alpha}_{(i)}) = \frac{2(\alpha_{(i)}, \lambda)}{(\alpha_{(i)}, \alpha_{(i)})} \in \Z.
+\]
+So by what we have previously computed, we know that all weights $\lambda \in S_\rho$ are in $\mathcal{L}_W [\mathfrak{g}]$.
+
+Given the basis
+\[
+ \mathcal{B} = \{\check{\alpha}_{(i)}: i = 1, \cdots, r\}
+\]
+for $\check{\mathcal{L}}[\mathfrak{g}]$, we define the dual basis
+\[
+ \mathcal{B}^* = \{\omega_{(i)} : i = 1, \cdots, r\}
+\]
+by demanding
+\[
+ (\check{\alpha}_{(i)}, \omega_{(j)}) = \frac{2(\alpha_{(i)}, \omega_{(j)})}{(\alpha_{(i)}, \alpha_{(i)})} = \delta_{ij}.
+\]
+These $\omega_{(i)}$ are known as the \term{fundamental weights}. As the simple roots span $\mathfrak{h}^*_\R$, we can write
+\[
+ \omega_{(i)} = \sum_{j = 1}^r B_{ij} \alpha_{(j)}
+\]
+for some $B_{ij} \in \R$. Plugging this into the definition, we find
+\[
+ \sum_{k = 1}^r \frac{2(\alpha_{(i)}, \alpha_{(k)})}{(\alpha_{(i)}, \alpha_{(i)})} B_{jk} = \delta^i_j.
+\]
+So we know
+\[
+ \sum_{k = 1}^r B_{jk}A^{ki} = \delta^i_j,
+\]
+i.e.\ $B$ is the inverse of the Cartan matrix $A$. Thus we can write
+\[
+ \alpha_{(i)} = \sum_{j = 1}^r A^{ij} \omega_{(j)}.
+\]
+\begin{eg}
+ Consider $\mathfrak{g} = A_2$. Then we have
+ \[
+ A =
+ \begin{pmatrix}
+ 2 & -1\\
+ -1 & 2
+ \end{pmatrix}.
+ \]
+ Then we have
+ \begin{align*}
+ \alpha &= \alpha_{(1)} = 2 \omega_{(1)} - \omega_{(2)}\\
+ \beta &= \alpha_{(2)} = - \omega_{(1)} + 2 \omega_{(2)}.
+ \end{align*}
+ So we have
+ \begin{align*}
+ \omega_{(1)} &= \frac{1}{3} (2 \alpha + \beta)\\
+ \omega_{(2)} &= \frac{1}{3} (\alpha + 2 \beta).
+ \end{align*}
+ We can then draw the weight lattice:
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, 0, 2} {
+ \foreach \y in {0, 1.155, -1.155} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1, 1} {
+ \foreach \y in {0.577, -0.577, 1.732} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw [mred, ->] (0, 0) -- (0, 1.155) node [above] {$\omega_{(2)}$};
+ \draw [mred, ->] (0, 0) -- (1, 0.577) node [right] {$\omega_{(1)}$};
+ \draw [mblue, ->] (0, 0) -- (2, 0) node [right] {$\alpha$};
+ \draw [mblue, ->] (0, 0) -- (-1, 1.732) node [left] {$\beta$};
+ \end{tikzpicture}
+ \end{center}
+ Note that the roots are also in the weight lattice, because they are weights of the adjoint representation.
+\end{eg}
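Since $B$ is just the inverse Cartan matrix, the expressions for $\omega_{(1)}$ and $\omega_{(2)}$ above amount to a one-line matrix inversion (numpy assumed, purely as a check):

```python
import numpy as np

A = np.array([[2, -1], [-1, 2]])     # Cartan matrix of A_2
B = np.linalg.inv(A)                 # omega_(i) = sum_j B_ij alpha_(j)

# Matches the example: omega_(1) = (2 alpha + beta)/3, omega_(2) = (alpha + 2 beta)/3
print(np.allclose(B, [[2/3, 1/3], [1/3, 2/3]]))   # True
```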
+
+Given any weight $\lambda \in S_\rho \subseteq \mathcal{L}_W[\mathfrak{g}]$, we can write
+\[
+ \lambda = \sum_{i = 1}^r \lambda^i \omega_{(i)}
+\]
+for some $\lambda^i \in \Z$. These $\{\lambda^i\}$ are called \term{Dynkin labels} of the weight $\lambda$.
+
+%What do these coefficients tell us?
+%\begin{prop}
+% Suppose $v_\lambda$ has weight
+% \[
+% \lambda = \sum_{i = 1}^r \lambda^i \omega_{(i)}.
+% \]
+% Then $\lambda(h^i) = \lambda^i$, i.e.
+% \[
+% \rho(h^i) v_\lambda = \lambda^i v_\lambda.
+% \]
+%\end{prop}
+%
+%\begin{proof}
+% What we want to show is exactly that
+% \[
+% \omega_{(i)}(h^j) = \delta_i^j.
+% \]
+% In other words, we need
+% \[
+% \omega_{(i)} (h^\alpha) = (\check{\alpha}, \omega_{(i)}).
+% \]
+%\end{proof}
+\subsection{Classification of representations}
Suppose we have a representation $\rho$ of $\mathfrak{g}$. We can start with any eigenvector $v \in V_\lambda$. We then keep applying things of the form $\rho(E^\alpha)$, where $\alpha \in \Phi_+$ is a \emph{positive} root. Doing so brings us further away from the hyperplane separating the positive and negative roots, and since there are only finitely many weights, we must eventually stop. The weight at which we stop is known as a \emph{highest weight}.
+
+\begin{defi}[Highest weight]\index{highest weight}
+ A \emph{highest weight} of a representation $\rho$ is a weight $\Lambda \in S_\rho$ whose associated eigenvector $v_\Lambda$ is such that
+ \[
+ \rho(E^\alpha) v_\Lambda = 0
+ \]
+ for all $\alpha \in \Phi_+$.
+\end{defi}
+The \emph{Dynkin labels} (or \emph{indices}) of a representation are the Dynkin labels of its highest weight.
+
+Now if $v \in V_\lambda$, then we know
+\[
+ \rho(E^\alpha) v \in V_{\lambda + \alpha},
+\]
+if $\lambda + \alpha \in S_\rho$ (and vanishes otherwise). So acting with the step operators translates us in the weight lattice by the vector corresponding to the root.
+
+In the case of $\su(2)$, we found out an explicit formula for when this will stop. In this general case, we have the following result which we will not prove:
+
+\begin{thm}
+ For any finite-dimensional representation of $\mathfrak{g}$, if
+ \[
+ \lambda = \sum_{i = 1}^r \lambda^i \omega_{(i)} \in S_\rho,
+ \]
+ then we know
+ \[
+ \lambda - m_{(i)} \alpha_{(i)} \in S_\rho,
+ \]
+ for all $m_{(i)} \in \Z$ and $0 \leq m_{(i)} \leq \lambda^i$.
+
+ If we know further that $\rho$ is irreducible, then we can in fact obtain all weights by starting at the highest weight and applying this procedure.
+
+ Moreover, for any
+ \[
+ \Lambda = \sum \Lambda^i \omega_{(i)} \in \mathcal{L}_W[\mathfrak{g}],
+ \]
+ this is the highest weight of some irreducible representation if and only if $\Lambda^i \geq 0$ for all $i$.
+\end{thm}
+
This gives us a very concrete way of finding, at least, the weights of an irreducible representation. In general, though, we don't immediately know the multiplicities of the weights.
+
+\begin{defi}[Dominant integral weight]\index{dominant integral weight}
+ A \emph{dominant integral weight} is a weight
+ \[
+ \Lambda = \sum \Lambda^i \omega_{(i)} \in \mathcal{L}_W[\mathfrak{g}],
+ \]
+ such that $\Lambda^i \geq 0$ for all $i$.
+\end{defi}
+
+\begin{eg}
+ Take the example of $\mathfrak{g} = A_2$. It is a fact that the \emph{fundamental representation} $f$ has Dynkin labels $(1, 0)$. In other words,
+ \[
+ \Lambda = \omega_{(1)}.
+ \]
+ To get the remaining weights, we subtract roots:
+ \begin{enumerate}
+ \item $\Lambda = \omega_{(1)} \in S_f$. So we can subtract by $\alpha_{(1)}$ exactly once.
+ \item So
+ \begin{align*}
+ \lambda &= \omega_{(1)} - (2 \omega_{(1)} - \omega_{(2)}) \\
+ &= - \omega_{(1)} + \omega_{(2)} \in S_f.
+ \end{align*}
+ This has Dynkin labels $(-1, 1)$. So we can do one further subtraction to get a new weight
+ \begin{align*}
+ \lambda - \alpha_{(2)} &= - \omega_{(1)} + \omega_{(2)} - (2 \omega_{(2)} - \omega_{(1)})\\
 &= -\omega_{(2)}.
+ \end{align*}
+ \end{enumerate}
+ In the weight diagram, these are given by
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, 0, 2} {
+ \foreach \y in {0, 1.155, -1.155} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1, 1} {
+ \foreach \y in {0.577, -0.577, 1.732} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw [opacity=0.5, mred, ->] (0, 0) -- (0, 1.155) node [above] {$\omega_{(2)}$};
+ \draw [opacity=0.5, mred, ->] (0, 0) -- (1, 0.577) node [right] {$\omega_{(1)}$};
+ \draw [opacity=0.5, mblue, ->] (0, 0) -- (2, 0) node [right] {$\alpha$};
+ \draw [opacity=0.5, mblue, ->] (0, 0) -- (-1, 1.732) node [left] {$\beta$};
+
+ \draw [mgreen, thick, ->] (1, 0.577) node [above] {$(1, 0)$} -- (-1, 0.577);
+ \draw [mgreen, thick, ->] (-1, 0.577) node [left] {$(-1, 1)$} -- (0, -1.155) node [below] {$(0, -1)$};
+
+ \draw (0, -1.155) circle [radius=0.2];
+ \draw (1, 0.577) circle [radius=0.2];
+ \draw (-1, 0.577) circle [radius=0.2];
+ \end{tikzpicture}
+ \end{center}
 In general, a dominant integral weight of $A_2$ has the form
 \[
 \Lambda = \Lambda^1 \omega_{(1)} + \Lambda^2 \omega_{(2)} \in \mathcal{L}_W[A_2]
 \]
 with $\Lambda^1, \Lambda^2 \in \Z_{\geq 0}$, and each such weight is the highest weight of an irrep $\rho_{(\Lambda^1, \Lambda^2)}$ of $A_2$.
+
+ One can show, via characters, that the dimension of this representation is given by
+ \[
+ \dim \rho_{(\Lambda^1, \Lambda^2)} = \frac{1}{2} (\Lambda^1 + 1)(\Lambda^2 + 1)(\Lambda^1 + \Lambda^2 + 2).
+ \]
+ This is symmetric in $\Lambda^1$ and $\Lambda^2$. So if $\Lambda^1\not= \Lambda^2$, then we get a pair of distinct representations of the same dimension. It is a fact that
+ \[
+ \lambda \in S_{(\Lambda^1, \Lambda^2)} \Leftrightarrow -\lambda \in S_{(\Lambda^2, \Lambda^1)},
+ \]
+ and we have
+ \[
 \rho_{(\Lambda^1, \Lambda^2)} = \bar{\rho}_{(\Lambda^2, \Lambda^1)}.
+ \]
 On the other hand, if $\Lambda^1 = \Lambda^2$, then this representation is self-conjugate.
+
+ The first few representations of $A_2$ are given by
+ \begin{center}
+ \begin{tabular}{ccc}
+ \toprule
 Representation & Dimension & Name\\
+ \midrule
+ $\rho_{(0, 0)}$ & $\mathbf{1}$ & Trivial\\
+ $\rho_{(1, 0)}$ & $\mathbf{3}$ & Fundamental\\
+ $\rho_{(0, 1)}$ & $\bar{\mathbf{3}}$ & Anti-fundamental\\
+ $\rho_{(2, 0)}$ & $\mathbf{6}$\\
+ $\rho_{(0, 2)}$ & $\bar{\mathbf{6}}$\\
+ $\rho_{(1, 1)}$ & $\mathbf{8}$ & Adjoint\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
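As a sanity check, the dimension formula reproduces the table above (plain Python, illustration only):

```python
def dim_A2(l1, l2):
    """Dimension of the A_2 irrep with Dynkin labels (l1, l2)."""
    return (l1 + 1) * (l2 + 1) * (l1 + l2 + 2) // 2

dims = [dim_A2(*labels) for labels in [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]]
print(dims)   # [1, 3, 3, 6, 6, 8]
```

Note in particular the symmetry $\dim \rho_{(\Lambda^1, \Lambda^2)} = \dim \rho_{(\Lambda^2, \Lambda^1)}$ showing up in the conjugate pairs.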
+ We can figure out the weights of the adjoint representation with $\Lambda = (1, 1) \in S_\rho$. Since both labels are positive, we can subtract both $\alpha_{(1)}$ and $\alpha_{(2)}$ to get
+ \begin{align*}
+ \Lambda - \alpha_{(1)} &= (-1, 2)\\
+ \Lambda - \alpha_{(2)} &= (2, -1)
+ \end{align*}
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, 0, 2} {
+ \foreach \y in {0, 1.155, -1.155, -2.310} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1, 1} {
+ \foreach \y in {0.577, -0.577, 1.732, -1.732} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw [opacity=0.5, mred, ->] (0, 0) -- (0, 1.155) node [above] {$\omega_{(2)}$};
+ \draw [opacity=0.5, mred, ->] (0, 0) -- (1, 0.577) node [right] {$\omega_{(1)}$};
+ \draw [opacity=0.5, mblue, ->] (0, 0) -- (2, 0) node [below] {$\alpha$};
+ \draw [opacity=0.5, mblue, ->] (0, 0) -- (-1, 1.732) node [left] {$\beta$};
+
+ \draw [mgreen, thick, ->] (1, 1.732) -- (-1, 1.732);
+ \draw [mgreen, thick, ->] (1, 1.732) -- (2, 0);
+
+ \node [above, mgreen] at (1, 1.732) {$(1, 1)$};
+ \node [above, mgreen] at (-1, 1.732) {$(-1, 2)$};
+ \node [right, mgreen] at (2, 0) {$(2, -1)$};
+
+ \draw (1, 1.732) circle [radius=0.15];
+ \draw (2, 0) circle [radius=0.15];
+ \draw (-1, 1.732) circle [radius=0.15];
+ \end{tikzpicture}
+ \end{center}
+ Now we have a $+2$ in the Dynkin labels. So we can subtract twice. So we have the following weights:
+ \begin{align*}
+ \Lambda - \alpha_{(1)} - \alpha_{(2)} &= (0, 0) \\
+ \Lambda - \alpha_{(1)} - 2\alpha_{(2)} &= (1, -2)\\
+ \Lambda - 2\alpha_{(1)} - \alpha_{(2)} &= (-2, 1)
+ \end{align*}
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, 0, 2} {
+ \foreach \y in {0, 1.155, -1.155, -2.310} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1, 1} {
+ \foreach \y in {0.577, -0.577, 1.732, -1.732} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw [opacity=0.5, mred, ->] (0, 0) -- (0, 1.155) node [above] {$\omega_{(2)}$};
+ \draw [opacity=0.5, mred, ->] (0, 0) -- (1, 0.577) node [right] {$\omega_{(1)}$};
+ \draw [opacity=0.5, mblue, ->] (0, 0) -- (2, 0) node [below] {$\alpha$};
+ \draw [opacity=0.5, mblue, ->] (0, 0) -- (-1, 1.732) node [left] {$\beta$};
+
+ \draw [mgreen, thick, ->] (-1, 1.732) -- (0, 0);
+ \draw [mgreen, thick, ->] (2, 0) -- (0, 0);
+ \draw [mgreen, thick, ->] (0, 0) -- (-2, 0);
+ \draw [mgreen, thick, ->] (0, 0) -- (1, -1.732);
+
+ \node [above, mgreen] at (1, 1.732) {$(1, 1)$};
+ \node [above, mgreen] at (-1, 1.732) {$(-1, 2)$};
+ \node [right, mgreen] at (2, 0) {$(2, -1)$};
+ \node [anchor = south west, mgreen] at (0, 0) {$(0, 0)$};
+ \node [below, mgreen] at (1, -1.732) {$(1, -2)$};
+ \node [left, mgreen] at (-2, 0) {$(-2, 1)$};
+
+ \draw (1, 1.732) circle [radius=0.15];
+ \draw (2, 0) circle [radius=0.15];
+ \draw (-1, 1.732) circle [radius=0.15];
+ \draw (-2, 0) circle [radius=0.15];
+ \draw (1, -1.732) circle [radius=0.15];
+ \draw (0, 0) circle [radius=0.15];
+ \end{tikzpicture}
+ \end{center}
+ Finally, we can subtract $\alpha_{(1)}$ from the second or $\alpha_{(2)}$ from the third to get
+ \[
+ \Lambda - 2\alpha_{(1)} - 2 \alpha_{(2)} = (-1, -1) \in S_\rho.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, 0, 2} {
+ \foreach \y in {0, 1.155, -1.155, -2.310} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1, 1} {
+ \foreach \y in {0.577, -0.577, 1.732, -1.732} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw [opacity=0.5, mred, ->] (0, 0) -- (0, 1.155) node [above] {$\omega_{(2)}$};
+ \draw [opacity=0.5, mred, ->] (0, 0) -- (1, 0.577) node [right] {$\omega_{(1)}$};
+ \draw [opacity=0.5, mblue, ->] (0, 0) -- (2, 0) node [below] {$\alpha$};
+ \draw [opacity=0.5, mblue, ->] (0, 0) -- (-1, 1.732) node [left] {$\beta$};
+
+ \draw [mgreen, thick, ->] (1, -1.732) -- (-1, -1.732);
+ \draw [mgreen, thick, ->] (-2, 0) -- (-1, -1.732);
+
+ \node [above, mgreen] at (1, 1.732) {$(1, 1)$};
+ \node [above, mgreen] at (-1, 1.732) {$(-1, 2)$};
+ \node [right, mgreen] at (2, 0) {$(2, -1)$};
+ \node [anchor = south west, mgreen] at (0, 0) {$(0, 0)$};
 \node [below, mgreen] at (1, -1.732) {$(1, -2)$};
+ \node [left, mgreen] at (-2, 0) {$(-2, 1)$};
+ \node [below, mgreen] at (-1, -1.732) {$(-1, -1)$};
+
+ \draw (1, 1.732) circle [radius=0.15];
+ \draw (2, 0) circle [radius=0.15];
+ \draw (-1, 1.732) circle [radius=0.15];
+ \draw (-1, -1.732) circle [radius=0.15];
+ \draw (-2, 0) circle [radius=0.15];
+ \draw (1, -1.732) circle [radius=0.15];
+ \draw (0, 0) circle [radius=0.15];
+ \end{tikzpicture}
+ \end{center}
+ Now notice that we only have seven weights, not eight. This is because our algorithm did not tell us the multiplicity of the weights. However, in this particular case, we know, because this is the adjoint representation, and the Cartan generators give us two things of weight $0$. So we can plot all weights with multiplicity as
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, 0, 2} {
+ \foreach \y in {0, 1.155, -1.155, -2.310} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1, 1} {
+ \foreach \y in {0.577, -0.577, 1.732, -1.732} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw (1, 1.732) circle [radius=0.15];
+ \draw (2, 0) circle [radius=0.15];
+ \draw (-1, 1.732) circle [radius=0.15];
+ \draw (-1, -1.732) circle [radius=0.15];
+ \draw (-2, 0) circle [radius=0.15];
+ \draw (1, -1.732) circle [radius=0.15];
+ \draw (0, 0) circle [radius=0.15];
+ \draw (0, 0) circle [radius=0.25];
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
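The subtraction algorithm used above works entirely in Dynkin labels: subtracting $\alpha_{(i)}$ from a weight means subtracting the $i$-th row of the Cartan matrix from its labels. A short sketch (Python, illustration only; it finds the distinct weights but not their multiplicities):

```python
A = [[2, -1],
     [-1, 2]]                        # Cartan matrix of A_2
r = len(A)

def weights(highest):
    """All distinct weights reachable from a highest weight, in Dynkin labels."""
    found, frontier = {highest}, {highest}
    while frontier:
        new = set()
        for lam in frontier:
            for i in range(r):
                # subtract alpha_(i) between 1 and lam[i] times (only if lam[i] > 0)
                w = lam
                for _ in range(max(lam[i], 0)):
                    w = tuple(w[k] - A[i][k] for k in range(r))
                    new.add(w)
        new -= found
        found |= new
        frontier = new
    return found

adjoint = weights((1, 1))
print(len(adjoint))        # 7 distinct weights; (0, 0) has multiplicity 2
print((0, 0) in adjoint)   # True
```

Running the same function on $(1, 0)$ recovers the three weights of the fundamental representation found earlier.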
+
It is a general fact that the weights of an irrep trace out a polygon like this, and the weights on the perimeter are always non-degenerate. So in this case, we could also have figured out the multiplicity by using this general fact.
+
+\subsection{Decomposition of tensor products}
The last thing we need to know is how to decompose tensor products of irreps. This is not hard. It is just what we did for $\su(2)$, but in a more general context.
+
+We let $\rho_\Lambda$ and $\rho_{\Lambda'}$ be irreps of $\mathfrak{g}$ with corresponding representation spaces $V_\Lambda$ and $V_{\Lambda'}'$. Then we can write
+\[
+ V_\Lambda = \bigoplus_{\lambda \in S_\Lambda} V_\lambda,\quad V_{\Lambda'}' = \bigoplus_{\lambda' \in S_{\Lambda'}} V'_{\lambda'}.
+\]
+If $v_\lambda \in V_\Lambda$ and $v_{\lambda'} \in V_{\Lambda'}'$, then for $H \in \mathfrak{h}$, we have
+\begin{align*}
+ (\rho_\Lambda \otimes \rho_{\Lambda'})(H)(v_\lambda \otimes v_{\lambda'}) &= \rho_\Lambda(H)v_\lambda \otimes v_{\lambda'} + v_\lambda \otimes \rho_{\Lambda'}(H) v_{\lambda'}\\
+ &= \lambda(H) v_\lambda \otimes v_{\lambda'} + \lambda'(H) v_\lambda \otimes v_{\lambda'}\\
+ &= (\lambda + \lambda')(H) v_\lambda \otimes v_{\lambda'}.
+\end{align*}
+Since things of the form $v_\lambda \otimes v_{\lambda'}$ span $V_\Lambda \otimes V_{\Lambda'}$, we know
+\[
+ S_{\Lambda \otimes \Lambda'} = \{\lambda + \lambda': \lambda \in S_\Lambda, \lambda' \in S_{\Lambda'}\}
+\]
with multiplicities. To figure out the decomposition, we find the highest weight of the tensor product, remove all the weights of the corresponding irrep from the weight set, and then repeat.
+\begin{eg}
+ Consider $\mathfrak{g} = A_2$, and the fundamental representation $\rho_{(1, 0)}$. We consider
+ \[
+ \rho_{(1, 0)} \otimes \rho_{(1, 0)}.
+ \]
+ Recall that the weight set is given by
+ \[
+ S_{(1, 0)} = \{\omega_{(1)}, -\omega_{(1)} + \omega_{(2)}, -\omega_{(2)}\}.
+ \]
+ We can then draw the collection of all weights in $\rho_{(1, 0)} \otimes \rho_{(1, 0)}$ as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, 0, 2} {
+ \foreach \y in {0, 1.155, -1.155, -2.310} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1, 1} {
+ \foreach \y in {0.577, -0.577, 1.732, -1.732} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw [mblue, ->] (0, 0) -- (2, 0) node [right] {$\alpha$};
+ \draw [mblue, ->] (0, 0) -- (-1, 1.732) node [left] {$\beta$};
+
+ \draw (0, 1.155) circle [radius=0.2];
+ \draw (0, 1.155) circle [radius=0.3];
+ \draw (2, 1.155) circle [radius=0.2];
+ \draw (-2, 1.155) circle [radius=0.2];
+ \draw (1, -0.577) circle [radius=0.2];
+ \draw (-1, -0.577) circle [radius=0.2];
+ \draw (1, -0.577) circle [radius=0.3];
+ \draw (-1, -0.577) circle [radius=0.3];
+ \draw (0, -2.310) circle [radius=0.2];
+ \end{tikzpicture}
+ \end{center}
 So we see that we have a highest weight of $(2, 0)$. We then use the algorithm to compute all the weights of such an irrep, remove them from the diagram, and are left with
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, 0, 2} {
+ \foreach \y in {0, 1.155, -1.155, -2.310} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1, 1} {
+ \foreach \y in {0.577, -0.577, 1.732, -1.732} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw [mblue, ->] (0, 0) -- (2, 0) node [right] {$\alpha$};
+ \draw [mblue, ->] (0, 0) -- (-1, 1.732) node [left] {$\beta$};
+
+ \draw (0, 1.155) circle [radius=0.3];
+ \draw (1, -0.577) circle [radius=0.3];
+ \draw (-1, -0.577) circle [radius=0.3];
+ \end{tikzpicture}
+ \end{center}
+ This is just the anti-fundamental representation. So we have
+ \[
+ \mathbf{3} \otimes \mathbf{3} = \mathbf{6} \oplus \bar{\mathbf{3}}.
+ \]
+\end{eg}
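The first step of the decomposition, forming the weight set of the tensor product with multiplicities, is simple bookkeeping. A sketch in Python (illustration only; the weight list of the fundamental is the one computed earlier):

```python
from collections import Counter

# Weights of the fundamental 3 of A_2, in Dynkin labels.
S_fund = [(1, 0), (-1, 1), (0, -1)]

# Weight set of 3 (x) 3: all pairwise sums, counted with multiplicity.
S_prod = Counter(tuple(a + b for a, b in zip(lam, mu))
                 for lam in S_fund for mu in S_fund)

print(sum(S_prod.values()))   # 9 = 3 * 3 weights in total
print(S_prod[(2, 0)])         # 1: the highest weight (2, 0) appears once
print(S_prod[(0, 1)])         # 2: e.g. from (1,0)+(-1,1) and (-1,1)+(1,0)
```

Peeling off the $9$ weights as $6 + 3$ then matches the decomposition $\mathbf{3} \otimes \mathbf{3} = \mathbf{6} \oplus \bar{\mathbf{3}}$.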
+
+%In our theory of the strong interaction, the quarks carry two $\su(3)$ representations, called $\su(3)_{\mathrm{flavour}}$ and $\su(3)_{\mathrm{color}}$. The quarks have representations $\mathbf{3}, \mathbf{3}$, and the anti-quarks have $\bar{\mathbf{3}}, \bar{\mathbf{3}}$.
+%
+%Since colour is confined, only singlets appear, i.e.\ things in a $\mathbf{1}$-representation. If we have a quark and an anti-quark, we can form
+%\[
+% \mathbf{3} \otimes \bar{\mathbf{3}} = \mathbf{1} \oplus \mathbf{8}.
+%\]
+%So we have a $\mathbf{1}$, and this corresponds to $q \bar{q}$.
+%
+%If we have three quarks, then we get
+%\[
+% \mathbf{3} \otimes \mathbf{3} \otimes \mathbf{3} = \mathbf{1} \oplus \cdots.
+%\]
+%So this is a $qqq$ state.
+%
+
+\section{Gauge theories}
We are now going to do something rather amazing. We are going to show that Maxwell's equations are ``forced'' upon us by requiring that our (charged) fields possess something known as a \emph{$\U(1)$ gauge symmetry}. We are going to write down essentially the only possible theory under the assumption of this gauge symmetry, and unwrapping it gives us Maxwell's equations.
+
When we do it for Maxwell's equations, it is not entirely clear that what we wrote down was ``the only possible thing'', but it will be when we try to do it for general gauge theories.
+
+Before we begin, it is important to note that everything we do here has a very nice geometric interpretation in terms of connections on principal $G$-bundles, and there are very good geometric reasons to pick the definitions (and names) we use here. Unfortunately, we do not have the time to introduce all the necessary geometry in order to do so.
+
+\subsection{Electromagnetism and \texorpdfstring{$\U(1)$}{U(1)} gauge symmetry}
In electromagnetism, we had two fields $\mathbf{E}$ and $\mathbf{B}$. There are four Maxwell's equations governing how they behave. Two of them specify the evolution of the fields and how they interact with matter, while the other two just tell us that we can write the fields in terms of a scalar potential $\Phi$ and a vector potential $\mathbf{A}$. Explicitly, they are related by
+\begin{align*}
+ \mathbf{E} &= - \nabla \Phi + \frac{\partial \mathbf{A}}{\partial t}\\
+ \mathbf{B} &= \nabla \times \mathbf{A}.
+\end{align*}
+The choice of potentials is not unique. We know that $\mathbf{E}$ and $\mathbf{B}$ are invariant under the transformations
+\begin{align*}
+ \Phi &\mapsto \Phi + \frac{\partial \alpha}{\partial t}\\
+ \mathbf{A} &\mapsto \mathbf{A} + \nabla \alpha
+\end{align*}
+for any \emph{gauge function} $\alpha = \alpha(\mathbf{x}, t)$.
+
+Now we believe in relativity, so we should write things as $4$-vectors. It so happens that there are four degrees of freedom in the potentials, so we can produce a $4$-vector
+\[
+ a_\mu =
+ \begin{pmatrix}
+ \Phi\\
+ A_i
+ \end{pmatrix}.
+\]
+We can then succinctly write the gauge transformations as
+\[
+ a_\mu \mapsto a_\mu + \partial_\mu \alpha.
+\]
+We can recover our fields via something known as the \emph{electromagnetic field tensor}, given by
+\[
+ f_{\mu\nu} = \partial_\mu a_\nu - \partial_\nu a_\mu.
+\]
+It is now easy to see that $f$ is invariant under gauge transformations. We can expand the definitions out to find that we actually have
+\[
+ f_{\mu\nu} =
+ \begin{pmatrix}
+ 0 & E_x & E_y & E_z\\
+ -E_x & 0 & -B_z & B_y\\
+ -E_y & B_z & 0 & -B_x\\
+ -E_z & -B_y & B_x & 0
 \end{pmatrix}.
+\]
+So we do recover the electric and magnetic fields this way.
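Gauge invariance of $f_{\mu\nu}$ boils down to the symmetry of mixed partial derivatives, which we can confirm symbolically. (This sketch assumes the Python library sympy, which is of course no part of the course.)

```python
import sympy as sp

X = sp.symbols('t x y z')
alpha = sp.Function('alpha')(*X)
a = [sp.Function(f'a{mu}')(*X) for mu in range(4)]

def field_tensor(a):
    # f_{mu nu} = d_mu a_nu - d_nu a_mu
    return sp.Matrix(4, 4, lambda m, n: sp.diff(a[n], X[m]) - sp.diff(a[m], X[n]))

# Gauge transformation a_mu -> a_mu + d_mu alpha leaves f unchanged,
# because partial derivatives commute.
a_gauged = [a[mu] + sp.diff(alpha, X[mu]) for mu in range(4)]
print(sp.simplify(field_tensor(a_gauged) - field_tensor(a)) == sp.zeros(4, 4))   # True
```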
+
+The free field theory of electromagnetism then has a Lagrangian of
+\[
+ \mathcal{L}_{\mathrm{EM}} = - \frac{1}{4 g^2} f_{\mu\nu} f^{\mu\nu} = \frac{1}{2g^2}(\mathbf{E}^2 - \mathbf{B}^2).
+\]
+For reasons we will see later on, it is convenient to rescale the $a_\mu$ by
+\[
+ A_\mu = -i a_\mu \in i\R = \mathcal{L}(\U(1))
+\]
+and write
+\[
+ F_{\mu\nu}=-if_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu.
+\]
+Now the magic happens when we want to couple this to matter.
+
+Suppose we have a complex scalar field $\phi: \R^{3, 1} \to \C$ with Lagrangian
+\[
+ \mathcal{L}_\phi = \partial_\mu \phi^* \partial^\mu \phi - W(\phi^* \phi).
+\]
+The interesting thing is that this has a \emph{global} $\U(1)$ symmetry by
+\begin{align*}
+ \phi &\mapsto g \phi\\
+ \phi^* &\mapsto g^{-1}\phi^*
+\end{align*}
+for $g = e^{i\delta} \in \U(1)$, i.e.\ the Lagrangian is invariant under this action.
+
In general, it is more convenient to talk about infinitesimal transformations. Consider an element
+\[
+ g = \exp(X) \approx 1 + X,
+\]
+where we think of $X$ as ``small''. In our case, we have $X \in \uu(1) \cong i\R$, and
+\begin{align*}
+ \phi &\mapsto \phi + \delta_X \phi,\\
+ \phi^* &\mapsto \phi ^*+ \delta_X \phi^*,
+\end{align*}
+where
+\begin{align*}
+ \delta_X \phi &= X \phi\\
+ \delta_X \phi^* &= -X \phi^*.
+\end{align*}
+Now this is a \emph{global} symmetry, i.e.\ this is a symmetry if we do the same transformation at all points in the space. Since we have
+\[
+ \delta_X \mathcal{L}_\phi = 0,
+\]
+we get a conserved charge. What if we want a \emph{local} symmetry? We want to have a different transformation at \emph{every} point in space, i.e.\ we now have a function
+\[
+ g: \R^{3, 1} \to \U(1),
+\]
+and we consider the transformations given by
+\begin{align*}
+ \phi(x) &\mapsto g(x) \phi(x)\\
+ \phi^*(x) &\mapsto g^{-1}(x) \phi^*(x).
+\end{align*}
+This is in general no longer a symmetry. Under an infinitesimal variation $X: \R^{3, 1} \to \uu(1)$, we have
+\[
+ \delta_X \phi = X \phi.
+\]
+So the derivative transforms as
+\[
+ \delta_X (\partial_\mu \phi) = \partial_\mu (\delta_X \phi) = (\partial_\mu X) \phi + X \partial_\mu \phi.
+\]
+This is bad. What we really want is for this to transform like $\phi$, so that
+\[
+ \partial_\mu \phi \mapsto g(x) \partial_\mu \phi.
+\]
+Then the term $\partial_\mu \phi^* \partial^\mu \phi$ will be preserved.
+
+It turns out the solution is to couple $\phi$ with $A_\mu$. Recall that both of these things had gauge transformations. We now demand that under any gauge transformation, both should transform the same way. So from now on, a gauge transformation $X: \R^{3, 1} \to \uu(1)$ transforms \emph{both} $\phi$ and $A_\mu$ by
+\begin{align*}
+ \phi &\mapsto \phi + X \phi\\
+ A_\mu &\mapsto A_\mu - \partial_\mu X.
+\end{align*}
+However, this does not fix the problem, since everything we know so far that involves the potential $A$ is invariant under gauge transformation. We now do the funny thing. We introduce something known as the \emph{covariant derivative}:
+\[
+ \D_\mu = \partial_\mu + A_\mu.
+\]
+Similar to the case of general relativity (if you are doing that course), the covariant derivative is the ``right'' notion of derivative we should use whenever we want to differentiate fields that are coupled with $A$. We shall now check that this derivative transforms in the same way as $\phi$. Indeed, we have
+\begin{align*}
+ \delta_X(\D_\mu \phi) &= \delta_X(\partial_\mu \phi + A_\mu \phi)\\
+ &= \partial_\mu (\delta_X \phi) + A_\mu \delta_X \phi - \partial_\mu X \phi\\
+ &= X \partial_\mu \phi + X A_\mu \phi\\
+ &= X \D_\mu \phi.
+\end{align*}
+This implies that the kinetic term
+\[
+ (\D_\mu \phi)^* \D^\mu \phi
+\]
+is gauge invariant. So we can put together a gauge-invariant Lagrangian
+\[
+ \mathcal{L} = -\frac{1}{4 g^2} F_{\mu\nu} F^{\mu\nu} + (\D_\mu \phi)^* (\D^\mu \phi) - W(\phi^* \phi).
+\]
+So what the electromagnetic potential $A$ gives us is a covariant derivative $\D$ which then allows us to ``gauge'' these complex fields to give them a larger symmetry group.
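As a sanity check, one can verify symbolically that the failure of $\D_\mu \phi$ to transform covariantly is second order in the gauge parameter, i.e.\ that to first order $\D \phi \mapsto (1 + \epsilon X)\D\phi$. The following sketch is our own (not part of the notes); one spacetime dimension suffices to see the mechanism:

```python
# Symbolic check (sympy) that, to first order in the gauge parameter X,
# the abelian covariant derivative transforms covariantly.  One spacetime
# dimension is enough to see the mechanism; names here are our own.
import sympy as sp

x, eps = sp.symbols('x epsilon')
phi = sp.Function('phi')(x)
A = sp.Function('A')(x)
X = sp.Function('X')(x)

def D(f, a):
    """Covariant derivative D = d/dx + a, applied to f."""
    return sp.diff(f, x) + a * f

phi_t = (1 + eps * X) * phi          # phi -> phi + X phi
A_t = A - eps * sp.diff(X, x)        # A   -> A - dX

# D'phi' - (1 + eps X) D phi should vanish to first order in eps
residual = sp.expand(D(phi_t, A_t) - (1 + eps * X) * D(phi, A))
print(residual.coeff(eps, 0))  # -> 0
print(residual.coeff(eps, 1))  # -> 0
```

The leftover term is $-\epsilon^2 X (\partial X) \phi$, which is second order, as expected.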
+
+\subsection{General case}
+We now want to do what we had above for a general Lie group $G$. In particular, this can be non-abelian, and thus the Lie algebra has non-trivial bracket. It turns out in the general case, it is appropriate to insert some brackets into what we have done so far, but otherwise most things follow straight through.
+
+In the case of electromagnetism, we had a Lie group $\U(1)$, and it acted on $\C$ in the obvious way. In general, we start with a Lie group $G$, and we have to pick a representation $D$ of $G$ on a vector space $V$. Since we want to have a real Lagrangian, we will assume our vector space $V$ also comes with an inner product.
+
+Given such a representation, we consider fields that take values in $V$, i.e.\ functions $\phi: \R^{3, 1} \to V$. We can, as before, try to write down a (scalar) Lagrangian
+\[
+ \mathcal{L}_\phi = (\partial_\mu \phi, \partial^\mu \phi) - W((\phi, \phi)).
+\]
+Writing out the summation explicitly, we have
+\[
+ \mathcal{L}_\phi = \sum_{\mu = 0}^3 (\partial_\mu \phi, \partial^\mu \phi) - W((\phi, \phi)),
+\]
+i.e.\ the sum is performed outside the inner product.
+
+This is not always going to be invariant under an arbitrary action of the group $G$, because the inner product is not necessarily preserved. Fortunately, this is fixed by requiring that our representation is \emph{unitary}, i.e.\ each $D(g)$ is unitary for all $g \in G$.
+
+It is a theorem that any representation of a compact Lie group is equivalent to one that is unitary, so we are not losing too much by assuming this (it is also true that a non-compact simple Lie group has no non-trivial finite-dimensional unitary representations).
+
+Again, it will be convenient to think about this in terms of infinitesimal transformations. Near the identity, we can write
+\[
+ g = \exp(X)
+\]
+for some $X \in \mathfrak{g}$. Then we can write
+\[
+ D(g) = \exp(\rho(X)) \in \gl(n, \C).
+\]
+We write $\rho: \mathfrak{g} \to \gl(n, \C)$ for the associated representation of the Lie algebra $\mathfrak{g}$. If $D$ is unitary, then $\rho$ will be anti-Hermitian. Then the infinitesimal transformation is given by
+\[
+ \phi \mapsto \phi + \delta_X \phi = \phi + \rho(X) \phi.
+\]
+In general, an infinitesimal gauge transformation is then given by specifying a function
+\[
+ X: \R^{3, 1} \to \mathfrak{g}.
+\]
+Our transformation is now
+\[
+ \delta_X \phi = \rho(X(x))\phi.
+\]
+Just as in the abelian case, we know our Lagrangian is no longer gauge invariant in general.
+
+We now try to copy our previous fix. Again, we suppose our universe comes with a gauge field
+\[
+ A_\mu : \R^{3, 1} \to \mathcal{L}(G).
+\]
+Again, we can define a covariant derivative
+\[
+ \D_\mu \phi = \partial_\mu \phi + \rho(A_\mu) \phi.
+\]
+In the case of an abelian gauge symmetry, our gauge field transformed under $X$ as
+\[
+ \delta_X A_\mu = - \partial_\mu X.
+\]
+This expression still makes sense, but it turns out it isn't what we want, as we can see by trying to compute $\delta_X(\D_\mu \phi)$. Since the gauge field is non-abelian, there is one extra term we can include, namely $[X, A_\mu]$. It turns out this is exactly what we need. We claim that the right definition is
+\[
+ \delta_X A_\mu = - \partial_\mu X + [X, A_\mu].
+\]
+To see this, we show that
+\begin{prop}
+ We have
+ \[
+ \delta_X (\D_\mu \phi) = \rho(X) \D_\mu \phi.
+ \]
+\end{prop}
+
+The proof involves writing out the terms and seeing that it works.
+\begin{proof}
+ We have
+ \begin{align*}
+ \delta_X (\D_\mu \phi) &= \delta_X(\partial_\mu \phi + \rho(A_\mu) \phi)\\
+ &= \partial_\mu (\delta_X\phi) + \rho(A_\mu) \delta_X \phi + \rho(\delta_X A_\mu)\phi\\
+ &= \partial_\mu (\rho(X) \phi) + \rho(A_\mu) \rho(X) \phi - \rho(\partial_\mu X) \phi + \rho([X, A_\mu]) \phi\\
+ &= \rho(\partial_\mu X) \phi + \rho(X) \partial_\mu \phi + \rho(X)\rho(A_\mu) \phi \\
+ &\quad\quad+ [\rho(A_\mu), \rho(X)] \phi - \rho(\partial_\mu X) \phi + \rho([X, A_\mu])\phi\\
+ &= \rho(X)(\partial_\mu \phi + \rho(A_\mu) \phi)\\
+ &= \rho(X) \D_\mu \phi,
+ \end{align*}
+ as required.
+\end{proof}
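The crucial step in this proof is the homomorphism property $[\rho(X), \rho(Y)] = \rho([X, Y])$, which produces the cancellation. As an illustrative numerical check (our own construction, not from the notes), we can verify this property for the adjoint representation of $\su(2)$, built out of the Pauli matrices:

```python
import numpy as np

# Anti-Hermitian basis of su(2): T_a = -(i/2) sigma_a, so [T_a, T_b] = eps_{abc} T_c.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-0.5j * s for s in sigma]

def comm(a, b):
    return a @ b - b @ a

def ad(X):
    """Matrix of the adjoint representation ad_X in the basis T."""
    M = np.zeros((3, 3))
    for b in range(3):
        C = comm(X, T[b])
        for a in range(3):
            # coefficient of T_a in C, extracted via the trace inner product
            M[a, b] = (np.trace(T[a].conj().T @ C) /
                       np.trace(T[a].conj().T @ T[a])).real
    return M

rng = np.random.default_rng(0)
X = sum(c * t for c, t in zip(rng.normal(size=3), T))
Y = sum(c * t for c, t in zip(rng.normal(size=3), T))

# rho([X, Y]) = [rho(X), rho(Y)], the property used in the cancellation above
print(np.allclose(ad(comm(X, Y)), comm(ad(X), ad(Y))))  # -> True
```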
+
+Thus, we know that $(\D_\mu \phi, \D^\mu \phi)$ is gauge invariant. We can then write down a gauge invariant ``matter'' part of the action
+\[
+ \mathcal{L}_\phi = (\D_\mu \phi, \D^\mu \phi) - W((\phi, \phi)).
+\]
+Finally, we need to produce a gauge invariant kinetic term for $A_\mu: \R^{3, 1} \to \mathcal{L}(G)$.
+
+For the case of an abelian gauge theory, we had
+\[
+ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \in \mathcal{L}(G).
+\]
+In this case, there is one extra term we can add, namely $[A_\mu, A_\nu] \in \mathfrak{g}$, and it turns out we need it for things to work out. Our field strength tensor is
+\[
+ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu].
+\]
+How does the field strength tensor transform? In the abelian case, we had that $F_{\mu\nu}$ was invariant. Let's see if that is the case here. It turns out this time the field strength tensor transforms in the adjoint representation.
+\begin{lemma}
+ We have
+ \[
+ \delta_X (F_{\mu\nu}) = [X, F_{\mu\nu}] \in \mathcal{L}(G).
+ \]
+\end{lemma}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \delta_X(F_{\mu\nu}) &= \partial_\mu (\delta_X A_\nu) - \partial_\nu (\delta_X A_\mu) + [\delta_X A_\mu, A_\nu] + [A_\mu, \delta_X A_\nu]\\
+ &= -\partial_\mu \partial_\nu X + \partial_\mu ([X, A_\nu]) + \partial_\nu \partial_\mu X - \partial_\nu ([X, A_\mu]) - [\partial_\mu X, A_\nu]\\
+ &\quad\quad- [A_\mu, \partial_\nu X] + [[X, A_\mu ], A_\nu] + [A_\mu, [X, A_\nu]]\\
+ &= [X, \partial_\mu A_\nu] - [X, \partial_\nu A_\mu] + [X, [A_\mu, A_\nu]]\\
+ &= [X, F_{\mu\nu}],
+ \end{align*}
+ where we used the Jacobi identity in the last step.
+\end{proof}
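The Jacobi identity used in the last step, in the form $[[X, A], B] + [A, [X, B]] = [X, [A, B]]$, can be sanity-checked numerically with random matrices standing in for Lie algebra elements (an illustration, not part of the notes):

```python
import numpy as np

# The last step of the proof uses the Jacobi identity in the form
# [[X, A], B] + [A, [X, B]] = [X, [A, B]].
def comm(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(1)
X, A, B = (rng.normal(size=(4, 4)) for _ in range(3))

lhs = comm(comm(X, A), B) + comm(A, comm(X, B))
rhs = comm(X, comm(A, B))
print(np.allclose(lhs, rhs))  # -> True
```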
+
+So to construct a scalar quantity out of $F_{\mu\nu}$, we need an inner product invariant under an adjoint representation. We know one of these --- the Killing form! So we can just pick
+\[
+ \mathcal{L}_A = \frac{1}{g^2} \kappa(F_{\mu\nu}, F^{\mu\nu}).
+\]
+This is known as the \emph{Yang-Mills Lagrangian}. Note that for each fixed $\mu, \nu$, we have that $F_{\mu\nu} \in \mathfrak{g}$. So we should read this as
+\[
+ \mathcal{L}_A = \frac{1}{g^2} \sum_{\mu, \nu} \kappa(F_{\mu\nu}, F^{\mu\nu}).
+\]
+Putting all these together, we get ourselves an invariant Lagrangian of the system with a $G$ gauge symmetry.
+
+So the final Lagrangian looks like
+\[
+ \mathcal{L} = \frac{1}{g^2} \sum_{\mu, \nu} \kappa(F_{\mu\nu}, F^{\mu\nu}) + (\D_\mu \phi, \D^\mu \phi) - W((\phi, \phi)).
+\]
+Now if we are further told that we actually have a simple complex Lie algebra, then it is a fact that we can find a real form of compact type. So in particular we can find a basis $\mathcal{B} = \{T^a, a = 1, \cdots, \dim \mathfrak{g}\}$ with
+\[
+ \kappa^{ab} = \kappa(T^a, T^b) = - \kappa \delta^{ab}.
+\]
+In this basis, we have
+\[
+ \mathcal{L}_A = \frac{-\kappa}{g^2} \sum_{a = 1}^d F_{\mu\nu, a} F^{\mu\nu, a},
+\]
+and this does look like $d = \dim \mathfrak{g}$ copies of the electromagnetic field.
+
+So to construct (a sensible) gauge theory, we need to get a (semi-)simple Lie algebra, and then find some representations of $\mathfrak{g}$. These are things we have already studied well before!
+
+According to our current understanding, the gauge group of the universe is
+\[
+ G = \U(1) \times \SU(2) \times \SU(3),
+\]
+and, for example, the Higgs boson has representation
+\[
+ \phi = (+1, \rho_1, \rho_0).
+\]
+
+\section{Lie groups in nature}
+We are now going to look around and see how the things we have done exhibit themselves in nature. This chapter, by design, is very vague, and one shall not expect to learn anything concrete from here.
+
+\subsection{Spacetime symmetry}
+In special relativity, we have a metric
+\[
+ \d s^2 = -\d t^2 + \d x^2 + \d y^2 + \d z^2.
+\]
+The group of (orientation-preserving and) metric-preserving symmetries gives us the \term{Lorentz group} $\SO(3, 1)$. However, in certain cases, we can get something more (or perhaps less) interesting. Sometimes it makes sense to substitute $\tau = i t$, so that
+\[
+ \d s^2 = \d \tau^2 + \d x^2 + \d y^2 + \d z^2.
+\]
+This technique is known as \term{Wick rotation}. If we put it this way, we now have a symmetry group of $\SO(4)$ instead.
+
+It turns out that after complexification, this difference no longer matters. The resulting Lie algebra is
+\[
+ \mathcal{L}_\C(\SO(3, 1)) = \mathcal{L}_\C(\SO(4)) = D_2.
+\]
+Its Dynkin diagram is
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {0,1} {
+ \node [draw, fill=morange, circle, inner sep = 0, minimum size = 6] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+\end{center}
+This is not simple, and we have
+\[
+ D_2 = A_1 \oplus A_1.
+\]
+In other words, we have
+\[
+ \so(4) = \su(2) \oplus \su(2).
+\]
+At the Lie group level, $\SO(4)$ does not decompose as $\SU(2) \times \SU(2)$. Instead, $\SU(2) \times \SU(2)$ is a double cover of $\SO(4)$, and we have
+\[
+ \SO(4) = \frac{\SU(2) \times \SU(2)}{\Z_2}.
+\]
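The double cover can be made concrete with quaternions: a pair of unit quaternions $(q, p)$ acts on $\R^4$ (identified with the quaternions) by $x \mapsto q x p$, this map lies in $\SO(4)$, and $(-q, -p)$ gives the same map, which is the $\Z_2$ being quotiented out. A numerical sketch of this (our own, not from the notes):

```python
import numpy as np

# Identify R^4 with the quaternions a + bi + cj + dk.
def L(q):
    """Matrix of left multiplication x -> q x."""
    a, b, c, d = q
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

def R(p):
    """Matrix of right multiplication x -> x p."""
    a, b, c, d = p
    return np.array([[a, -b, -c, -d],
                     [b,  a,  d, -c],
                     [c, -d,  a,  b],
                     [d,  c, -b,  a]])

rng = np.random.default_rng(2)
q = rng.normal(size=4); q /= np.linalg.norm(q)   # unit quaternion
p = rng.normal(size=4); p /= np.linalg.norm(p)   # unit quaternion

M = L(q) @ R(p)   # the element of SO(4) determined by the pair (q, p)
print(np.allclose(M.T @ M, np.eye(4)))      # preserves the metric
print(np.isclose(np.linalg.det(M), 1.0))    # orientation-preserving
print(np.allclose(L(-q) @ R(-p), M))        # (-q, -p) gives the same map
```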
+Now in general, our fields in physics transform when we change coordinates. There is the boring case of a scalar, which never transforms, and the less boring case of a vector, which transforms like a vector. More excitingly, we have objects known as spinors. The weird thing is that spinors do have an $\so(3, 1)$ action, but this does not lift to an action of $\SO(3, 1)$. Instead, we need to do something funny with double covers, which we shall not go into.
+
+In general, spinors decompose into ``left-handed'' and ``right-handed'' components, known as \emph{Weyl fermions}, and these correspond to two different representations of $A_1$. We can summarize the things we have in the following table:
+\begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ Field & $\so(3, 1)$ & $A_1 \oplus A_1$\\
+ \midrule
+ Scalar & $\mathbf{1}$ & $(\rho_0, \rho_0)$\\
+ Dirac Fermion & LH $\oplus$ RH & $(\rho_1, \rho_0) \oplus (\rho_0, \rho_1)$\\
+ Vector & $\mathbf{4}$ & $(\rho_1, \rho_1)$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+However, the Lorentz group $\SO(3, 1)$ is not all the symmetries we can do to Minkowski spacetime. These are just those symmetries that fix the origin. If we add in the translations, then we obtain what is known as the \term{Poincar\'e group}. The Poincar\'e algebra is \emph{not} simple, as the translations form a non-trivial ideal.
+
+\subsection{Possible extensions}
+Now we might wonder how we can expand the symmetry group of our theory to get something richer. In very special cases, we will get what is known as \emph{conformal symmetry}, but in general, we don't. To expand the symmetry further, we must consider supersymmetry.
+\subsubsection*{Conformal field theory}
+If all our fields are massless, then our theory gains \term{conformal invariance}. Before we look into conformal invariance, we first have a look at a weaker notion of scale invariance.
+
+A massless free field $\phi: \R^{3, 1} \to V$ has a Lagrangian that looks like
+\[
+ \mathcal{L} = -\frac{1}{2}(\partial_\mu \phi, \partial^\mu \phi).
+\]
+This is invariant under scaling, namely the following simultaneous transformations parameterized by $\lambda$:
+\begin{align*}
+ x &\mapsto x' = \lambda^{-1} x\\
+ \phi(x) &\mapsto \lambda^\Delta \phi(x'),
+\end{align*}
+where $\Delta$ is the \emph{dimension} of the field.
+
+Often in field theory, whenever we have scale invariance, we also get \emph{conformal} invariance. In general, a conformal transformation is one that preserves all angles. This includes, for example, combinations of rotations and scalings, and is more general than Lorentz invariance and scale invariance.
+
+In the case of 4 (spacetime) dimensions, it turns out the conformal group is $\SO(4, 2)$, and has dimension $15$. In general, members of this group can be written as
+\[
+ \begin{pmatrix}
+ \begin{matrix}
+ 0 & D\\
+ -D & 0\\
+ \end{matrix} & P_\mu + K_\mu\\
+ P_\nu + K_\nu & \begin{matrix} & & \\ & \mathcal{M}_{\mu\nu} & \\ & & &\end{matrix}
+ \end{pmatrix}.
+\]
+Here the $K_\mu$ are the \term{special conformal generators}, and the $P_\mu$ are translations. We will not go deep into this.
+
+Conformal symmetry gives rise to \term{conformal field theory}. Theoretically, this is very important, since they provide ``end points'' of renormalization group flow. This will be studied more in depth in the Advanced Quantum Field Theory course.
+
+\subsubsection*{Supersymmetry}
+Can we add even more symmetries? It turns out there is a no-go theorem that says we can't.
+\begin{thm}[Coleman-Mandula]
+ In an interacting quantum field theory (satisfying a few sensible conditions), the largest possible symmetry group is the Poincar\'e group times some internal symmetry that commutes with the Poincar\'e group.
+\end{thm}
+But this is not the end of the world. The theorem assumes that we are working with traditional Lie groups and Lie algebras. The idea of supersymmetry is to prefix everything we talk about with ``super-'', so that this theorem no longer applies.
+
+To begin with, supersymmetry replaces Lie algebras with graded Lie algebras, or \term{Lie superalgebras}. This is a graded vector space, which we can write as
+\[
+ \mathfrak{g} = \mathfrak{g}_0 \oplus \mathfrak{g}_1,
+\]
+where the elements in $\mathfrak{g}_0$ are said to have \emph{grade} $0$, and elements in $\mathfrak{g}_1$ have grade $1$. We write $|X| = 0, 1$ for $X \in \mathfrak{g}_0, \mathfrak{g}_1$ respectively. These will correspond to bosonic and fermionic operators respectively.
+
+Now the Lie bracket respects a graded anti-commutation relation
+\[
+ [X, Y] = -(-1)^{|X||Y|}[Y, X].
+\]
+So the bracket of two bosonic generators is antisymmetric as usual, but the bracket of two fermionic generators is symmetric. We can similarly formulate a super-Jacobi identity. These can be used to develop field theories that have these ``supersymmetries''.
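For matrices with a declared grading, the graded bracket can be realized as the \emph{supercommutator} $[X, Y] = XY - (-1)^{|X||Y|} YX$, for which the graded antisymmetry above holds identically. A quick check of the sign rule (an illustration, not part of the notes):

```python
import numpy as np

def supercomm(X, gX, Y, gY):
    """Graded bracket [X, Y] = XY - (-1)^{|X||Y|} YX for declared grades."""
    return X @ Y - (-1) ** (gX * gY) * (Y @ X)

rng = np.random.default_rng(3)
X, Y = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))

# graded antisymmetry [X, Y] = -(-1)^{|X||Y|} [Y, X] in all four grade cases
for gX in (0, 1):
    for gY in (0, 1):
        lhs = supercomm(X, gX, Y, gY)
        rhs = -(-1) ** (gX * gY) * supercomm(Y, gY, X, gX)
        print(np.allclose(lhs, rhs))  # -> True in every case
```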
+
+Unfortunately, there is not much experimental evidence for supersymmetry, and unlike string theory, the LHC is already probing the energy scales at which we would expect to see supersymmetry\ldots
+
+\subsection{Internal symmetries and the eightfold way}
+Finally, we get to the notion of internal symmetries. Recall that for a complex scalar field, our field was invariant under a global $\U(1)$ action given by phase change. More generally, fields can carry some representation of a Lie group $G$. In general, if this representation has dimension greater than $1$, this means that we have multiple different ``particles'' which are related under this symmetry transformation. Then by symmetry, all these particles will have the same mass, and we get a degeneracy in the mass spectrum.
+
+When we have such degeneracies, we would want to distinguish the different particles of the same mass. This can be done by looking at the weights of the representation. It turns out the different weights correspond to the different quantum numbers of the particles.
+
+Often, we do not have an exact internal symmetry. Instead, we have an approximate symmetry. This is the case when the dominant terms in the Lagrangian are invariant under the symmetry, while some of the lesser terms are not. In particular, the different particles usually have different (but very similar) masses.
+
+These internal symmetries are famously present in the study of hadrons, i.e.\ things made out of quarks. The basic examples we all know are the nucleons, namely protons and neutrons. We can list them as
+\begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ & Charge (Q) & Mass (M)\\
+ \midrule
+ $p$ & $+1$ & $938$ MeV\\
+ $n$ & $0$ & $940$ MeV\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+Note that they have very similar masses. Later, we found, amongst many other things, the pions:
+\begin{center}
+ \begin{tabular}{ccc}
+ \toprule
+ & Charge (Q) & Mass (M)\\
+ \midrule
+ $\pi^+$ & +1 & 139 MeV\\
+ $\pi^0$ & 0 & 135 MeV\\
+ $\pi^-$ & -1 & 139 MeV\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+Again, these have very similar masses. We might expect that there is some approximate internal symmetry going on. This would imply that there is some conserved quantity corresponding to the weights of the representation. Indeed, we later found one, known as \emph{isospin}:
+\begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ & Charge (Q) & Isospin (J) & Mass (M)\\
+ \midrule
+ $p$ & $+1$ & $+\frac{1}{2}$ & $938$ MeV\\
+ $n$ & $0$ & $-\frac{1}{2}$ & $940$ MeV\\
+ $\pi^+$ & +1 & +1 & 139 MeV\\
+ $\pi^0$ & 0 & 0 & 135 MeV\\
+ $\pi^-$ & -1 & -1 & 139 MeV\\
+ \bottomrule % check numbers
+ \end{tabular}
+\end{center}
+Isospin comes from an approximate $\SU(2)_I$ symmetry, with a generator given by
+\[
+ H = 2J.
+\]
+The nucleons then have the fundamental $\rho_1$ representation, and the pions have the $\rho_2$ representation.
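In terms of weights: the eigenvalues of $H$ in $\rho_n$ are $n, n - 2, \ldots, -n$, so with $H = 2J$ the isospins appearing in $\rho_1$ and $\rho_2$ are exactly those of the nucleon and pion multiplets. A small illustrative computation (not from the notes):

```python
# Weights of H in the (n+1)-dimensional su(2) irrep rho_n are n, n-2, ..., -n.
# With H = 2J, the isospins J of a multiplet are half the weights.
def isospins(n):
    return sorted((n - 2 * k) / 2 for k in range(n + 1))

print(isospins(1))  # nucleons (n, p):        [-0.5, 0.5]
print(isospins(2))  # pions (pi-, pi0, pi+):  [-1.0, 0.0, 1.0]
```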
+
+Eventually, we got smarter, and discovered an extra conserved quantum number known as \emph{hypercharge}. We can plot out the values of the isospin and the hypercharge for our pions and some other particles we discovered, and we found a pattern:
+\begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -2.5) -- (0, 2.5);
+ \node [circ] at (2, 0) {};
+ \node [above] at (2, 0) {$\pi^+$};
+ \node [circ] at (1, 1.732) {};
+ \node [above] at (1, 1.732) {$K^+$};
+ \node [circ] at (1, -1.732) {};
+ \node [below] at (1, -1.732) {$\bar K^0$};
+ \node [circ] at (-2, 0) {};
+ \node [above] at (-2, 0) {$\pi^-$};
+ \node [circ] at (-1, 1.732) {};
+ \node [above] at (-1, 1.732) {$K^0$};
+ \node [circ] at (-1, -1.732) {};
+ \node [below] at (-1, -1.732) {$K^-$};
+ \node [circ] at (-0.2, 0) {};
+ \node [anchor = north west] at (0.2, 0) {$\eta$};
+ \node [circ] at (0.2, 0) {};
+ \node [anchor = south east] at (-0.2, 0) {$\pi^0$};
+ \end{tikzpicture}
+\end{center}
+At first, physicists were confused by the appearance of this pattern, and tried very hard to figure out generalizations. Of course, now that we know about Lie algebras, we know this is the weight diagram of a representation of $\su(3)$.
+
+However, the appearance of $\SU(3)$ hasn't resolved all the mystery. In reality, we only observed representations of dimension $1$, $8$ and $10$. We did not see anything else. So physicists hypothesized that there are substructures known as quarks. Each quark ($q$) carries a $\mathbf{3}$ representation (flavour), and each antiquark ($\bar{q}$) carries a $\bar{\mathbf{3}}$ representation.
+
+The mesons correspond to $q\bar{q}$ particles, and the representation decomposes as
+\[
+ \mathbf{3} \otimes \bar{\mathbf{3}} = \mathbf{1} \oplus \mathbf{8}.
+\]
+Baryons correspond to $qqq$ particles, and these decompose as
+\[
+ \mathbf{3} \otimes \mathbf{3} \otimes \mathbf{3} = \mathbf{1} \oplus \mathbf{8} \oplus \mathbf{8} \oplus \mathbf{10}.
+\]
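The dimensions in these decompositions can be checked by symmetry bookkeeping: the symmetric cube of the $\mathbf{3}$ gives the $\mathbf{10}$, the antisymmetric cube gives the $\mathbf{1}$, and the two mixed-symmetry pieces give the $\mathbf{8}$'s. A quick check (illustrative, not from the notes):

```python
from math import comb

n = 3                                  # dimension of the fundamental of su(3)
sym3 = comb(n + 2, 3)                  # totally symmetric cube   -> the 10
alt3 = comb(n, 3)                      # totally antisymmetric    -> the 1
mixed = (n ** 3 - sym3 - alt3) // 2    # two mixed-symmetry copies -> the 8's

print(sym3, alt3, mixed)               # -> 10 1 8
print(n * n == 1 + 8)                  # 3 x 3bar = 1 + 8: True
print(sym3 + alt3 + 2 * mixed == n ** 3)  # -> True
```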
+Of course, we now have to explain why only $q\bar{q}$ and $qqq$ appear in nature, and why we don't see isolated quarks. To do so, we have to go deep into QCD. This theory says that quarks have an $\SU(3)$ gauge symmetry (which is a different $\SU(3)$), and each quark again carries the fundamental representation $\mathbf{3}$. More details can be found in the Lent Standard Model course.
+\printindex
+\end{document}
diff --git a/books/cam/II_L/logic_and_set_theory.tex b/books/cam/II_L/logic_and_set_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..df90c13f2489e85c36de687e1619cff8280409d0
--- /dev/null
+++ b/books/cam/II_L/logic_and_set_theory.tex
@@ -0,0 +1,3026 @@
+\documentclass[a4paper]{article}
+
+\def\npart {II}
+\def\nterm {Lent}
+\def\nyear {2015}
+\def\nlecturer {I.\ B.\ Leader}
+\def\ncourse {Logic and Set Theory}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{No specific prerequisites.}
+\vspace{10pt}
+
+\noindent\textbf{Ordinals and cardinals}\\
+Well-orderings and order-types. Examples of countable ordinals. Uncountable ordinals and Hartogs' lemma. Induction and recursion for ordinals. Ordinal arithmetic. Cardinals; the hierarchy of alephs. Cardinal arithmetic.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Posets and Zorn's lemma}\\
+Partially ordered sets; Hasse diagrams, chains, maximal elements. Lattices and Boolean algebras. Complete and chain-complete posets; fixed-point theorems. The axiom of choice and Zorn's lemma. Applications of Zorn's lemma in mathematics. The well-ordering principle.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Propositional logic}\\
+The propositional calculus. Semantic and syntactic entailment. The deduction and completeness theorems. Applications: compactness and decidability.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Predicate logic}\\
+The predicate calculus with equality. Examples of first-order languages and theories. Statement of the completeness theorem; *sketch of proof*. The compactness theorem and the Lowenheim-Skolem theorems. Limitations of first-order logic. Model theory.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Set theory}\\ Set theory as a first-order theory; the axioms of ZF set theory. Transitive closures, epsilon-induction and epsilon-recursion. Well-founded relations. Mostowski's collapsing theorem. The rank function and the von Neumann hierarchy.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{Consistency}\\
+*Problems of consistency and independence*\hspace*{\fill} [1]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+Most people are familiar with the notion of ``sets'' (here ``people'' is defined to be mathematics students). However, most of the time, we only have an intuitive picture of what set theory should look like --- there are sets, we can take intersections, unions and subsets. We can write down sets like $\{x: \phi(x)\text{ is true}\}$.
+
+Historically, mathematicians were content with this vague notion of sets. However, it turns out that our last statement wasn't really correct. We cannot just arbitrarily write down sets we like. This is evidenced by the famous Russell's paradox, where the set $X$ is defined as $X = \{x: x\not \in x\}$. Then we have $X\in X \Leftrightarrow X\not\in X$, which is a contradiction.
+
+This led to the formal study of set theory, where set theory is given a formal foundation based on some axioms of set theory. This is known as \emph{axiomatic set theory}. This is similar to Euclid's axioms of geometry, and, in some sense, the group axioms. Unfortunately, while axiomatic set theory appears to avoid paradoxes like Russell's paradox, G\"odel proved in his incompleteness theorem that we cannot prove our axioms are free of contradictions.
+
+Closely related to set theory is \emph{formal logic}. Similarly, we want to put logic on a solid foundation. We want to formally define our everyday notions such as propositions, truth and proofs. The main result we will have is that a statement is true if and only if we can prove it. This assures that looking for proofs is a sensible way of showing that a statement is true.
+
+It is important to note that having studied formal logic does \emph{not} mean that we should always reason with formal logic. In fact, this is impossible, as we ultimately need informal logic to reason about formal logic itself!
+
+Throughout the course, we will interleave topics from set theory and formal logic. This is necessary as we need tools from set theory to study formal logic, while we also want to define set theory within the framework of formal logic. One is not allowed to complain that this involves circular reasoning.
+
+As part of the course, we will also side-track to learn about well-orderings and partial orders, as these are very useful tools in the study of logic and set theory. Their importance will become evident as we learn more about them.
+
+\section{Propositional calculus}
+\label{sec:propositional}\index{propositional logic}
+Propositional calculus is the study of logical statements such as $p \Rightarrow p$ and $p \Rightarrow (q\Rightarrow p)$. As opposed to \emph{predicate} calculus, which will be studied in Chapter~\ref{sec:predicate}, the statements will not have quantifier symbols like $\forall$, $\exists$.
+
+When we say ``$p \Rightarrow p$ is correct'', there are two ways we can interpret this. We can interpret this as ``no matter what truth value $p$ takes, $p \Rightarrow p$ always has the truth value of ``true''.'' Alternatively, we can interpret this as ``there is a proof that $p \Rightarrow p$''.
+
+The first notion is concerned with truth, and does not care about whether we can prove things. The second notion, on the other hand, only talks about proofs. We do not, in any way, assert that $p \Rightarrow p$ is ``true''. One of the main objectives of this chapter is to show that these two notions are consistent with each other. A statement is true (in the sense that it always has the truth value of ``true'') if and only if we can prove it. It turns out that this equivalence has a few rather striking consequences.
+
+Before we start, it is important to understand that there is no ``standard'' logical system. What we present here is just one of the many possible ways of doing formal logic. In particular, do not expect anyone else to know exactly how your system works without first describing it. Fortunately, no one really writes proofs with formal logic, and the important theorems we prove (completeness, compactness etc.) do not depend much on the exact implementation details of the systems.
+
+\subsection{Propositions}
+We'll start by defining propositions, which are the statements we will consider in propositional calculus.
+\begin{defi}[Propositions]\index{propositions}
+ Let $P$ be a set of \emph{primitive propositions}. These are a bunch of (meaningless) symbols (e.g.\ $p$, $q$, $r$), which are used as the basic building blocks of our more interesting propositions. These are usually interpreted to take a truth value. Usually, any symbol (composed of letters and subscripts) is in the set of primitive propositions.
+
+ The set of \emph{propositions}, written as $L$ or $L(P)$, is defined inductively by
+ \begin{enumerate}
+ \item If $p\in P$, then $p\in L$.
+ \item $\bot\in L$, where $\bot$ is read as ``false'' (also a meaningless symbol).
+ \item If $p, q\in L$, then $(p\Rightarrow q)\in L$.
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}
+ If our set of primitive propositions is $P = \{p, q, r\}$, then $p\Rightarrow q$, $p\Rightarrow \bot$, $((p\Rightarrow q)\Rightarrow (p\Rightarrow r))$ are propositions.
+\end{eg}
+
+To define $L$ formally, we let
+\begin{align*}
+ L_0 &= \{\bot\}\cup P\\
+ L_{n + 1} &= L_n\cup \{(p\Rightarrow q): p, q\in L_n\}.
+\end{align*}
+Then we define $L = L_0 \cup L_1\cup L_2\cup \cdots$.
+
+In formal language terms, $L$ is the set of finite strings of symbols from the alphabet $\bot$ $\Rightarrow $ $($ $)$ $p_1$ $p_2 \cdots$ that satisfy some formal grammar rule (e.g.\ brackets have to match).
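This inductive construction is easy to model concretely. In the following illustrative sketch (our own encoding, not part of the notes), primitive propositions are strings, $\bot$ is the string \texttt{'bot'}, and $(p \Rightarrow q)$ is a tuple:

```python
def implies(p, q):
    """The proposition (p => q), encoded as a tuple."""
    return ('->', p, q)

def L_n(P, n):
    """Return L_n: L_0 = {bot} union P, and
    L_{k+1} = L_k union {(p => q) : p, q in L_k}."""
    L = {'bot'} | set(P)
    for _ in range(n):
        L = L | {implies(p, q) for p in L for q in L}
    return L

L1 = L_n({'p', 'q'}, 1)
print(implies('p', 'q') in L1)     # p => q is a proposition: True
print(implies('p', 'bot') in L1)   # (p => bot), i.e. "not p": True
print(len(L_n({'p', 'q'}, 0)))     # |L_0| = 3
```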
+
+Note here that officially, the only relation we have is $\Rightarrow$. The familiar ``not'', ``and'' and ``or'' do not exist. Instead, we define them to be \emph{abbreviations} of certain expressions:
+\begin{defi}[Logical symbols]\leavevmode
+ \begin{center}
+ \begin{tabular}{cccc}
+ $\neg p$ & (``not $p$'') & is an abbreviation for & $(p\Rightarrow \bot)$\\
+ $p\wedge q$ & (``$p$ and $q$'') & is an abbreviation for & $\neg(p\Rightarrow (\neg q))$\\
+ $p\vee q$ & (``$p$ or $q$'') & is an abbreviation for & $(\neg p)\Rightarrow q$
+ \end{tabular}
+ \end{center}
+\end{defi}
+The advantage of having just one symbol $\Rightarrow $ is that when we prove something about our theories, we only have to prove it for $\Rightarrow $, instead of all $\Rightarrow , \neg, \wedge$ and $\vee$ individually.
+\subsection{Semantic entailment}
+The idea of \emph{semantic entailment} is to assign \emph{truth values} to propositions, where we declare each proposition to be ``true'' or ``false''. This assignment is performed by a \emph{valuation}.
+
+\begin{defi}[Valuation]\index{valuation}
+ A \emph{valuation} on $L$ is a function $v: L\to \{0, 1\}$ such that:
+ \begin{itemize}
+ \item $v(\bot) = 0$,
+ \item $v(p\Rightarrow q) = \begin{cases} 0 & \text{if }v(p) = 1, v(q) = 0,\\1 & \text{otherwise}\end{cases}$
+ \end{itemize}
+ We interpret $v(p)$ to be the truth value of $p$, with 0 denoting ``false'' and 1 denoting ``true''.
+
+ Note that we do not impose any restriction on $v(p)$ when $p$ is a primitive proposition.
+\end{defi}
+For those people who like homomorphisms, we can first give the set $\{0, 1\}$ a binary operation $\Rightarrow $ by
+\[
+ a\Rightarrow b = \begin{cases}
+ 0 & \text{if }a = 1, b = 0\\
+ 1 & \text{otherwise}
+ \end{cases}
+\]
+as well as a constant $\bot = 0$. Then a valuation can be defined as a homomorphism between $L$ and $\{0, 1\}$ that preserves $\bot$ and $\Rightarrow $.
+
+It should be clear that a valuation is uniquely determined by its values on the primitive propositions, as the values on all other propositions follow from the definition of a valuation. In particular, we have
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If $v$ and $v'$ are valuations with $v(p) = v'(p)$ for all $p\in P$, then $v = v'$.
+ \item For any function $w: P \to \{0, 1\}$, we can extend it to a valuation $v$ such that $v(p) = w(p)$ for all $p\in P$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Recall that $L$ is defined inductively. We are given that $v(p) = v'(p)$ on $L_0$. Then for all $p\in L_1$, $p$ must be of the form $q\Rightarrow r$ for $q, r\in L_0$. Then $v(q\Rightarrow r) = v'(q\Rightarrow r)$, since the value of $v$ on $q \Rightarrow r$ is uniquely determined by $v(q)$ and $v(r)$, which agree with $v'(q)$ and $v'(r)$. So for all $p\in L_1$, $v(p) = v'(p)$.
+
+ Continue inductively to show that $v(p) = v'(p)$ for all $p\in L_n$ for any $n$.
+
+ \item Set $v$ to agree with $w$ for all $p\in P$, and set $v(\bot) = 0$. Then define $v$ on $L_n$ inductively according to the definition.\qedhere
+ \end{enumerate}
+\end{proof}
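The second part of this proposition is effectively an algorithm, and can be sketched in code. Here propositions are encoded as nested tuples, with the string "bot" for $\bot$, a plain string for a primitive, and ("imp", p, q) for $p\Rightarrow q$; this encoding is an assumption for illustration only, not notation from the course.

```python
# Sketch (assumed encoding): "bot" is the false proposition, a plain string
# is a primitive, and ("imp", a, b) stands for a => b.
def extend(w):
    """Extend an assignment w on the primitives to a valuation v on L."""
    def v(p):
        if p == "bot":
            return 0
        if isinstance(p, str):               # primitive proposition
            return w[p]
        _, a, b = p                          # p is ("imp", a, b)
        return 0 if v(a) == 1 and v(b) == 0 else 1
    return v

v = extend({"p": 1, "q": 1, "r": 0})
assert v(("imp", ("imp", "p", "q"), "r")) == 0   # v((p => q) => r) = 0
```

The value on any proposition is computed recursively, so the valuation is indeed uniquely determined by $w$.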
+
+\begin{eg}
+ Suppose $v$ is a valuation with $v(p) = v(q) = 1$, $v(r) = 0$. Then
+ \[
+ v((p\Rightarrow q)\Rightarrow r) = 0.
+ \]
+\end{eg}
+
+Often, we are interested in propositions that are always true, such as $p \Rightarrow p$. These are known as \emph{tautologies}.
+\begin{defi}[Tautology]\index{tautology}
+ $t$ is a \emph{tautology}, written as $\models t$, if $v(t) = 1$ for all valuations $v$.
+\end{defi}
+
+To show that a statement is a tautology, we can use a \emph{truth table}, where we simply list out all possible valuations and find the value of $v(t)$.
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\models p\Rightarrow (q\Rightarrow p)$ ``A true statement is implied by anything''
+ \begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ $v(p)$ & $v(q)$ & $v(q\Rightarrow p)$ & $v(p\Rightarrow (q\Rightarrow p))$\\
+ \midrule
+ 1 & 1 & 1 & 1\\
+ 1 & 0 & 1 & 1\\
+ 0 & 1 & 0 & 1\\
+ 0 & 0 & 1 & 1\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ \item $\models (\neg \neg p)\Rightarrow p$. Recall that $\neg\neg p$ is defined as $((p\Rightarrow \bot)\Rightarrow \bot)$.
+ \begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ $v(p)$ & $v(p\Rightarrow \bot)$ & $v((p\Rightarrow \bot)\Rightarrow \bot)$ & $v(((p\Rightarrow \bot)\Rightarrow \bot)\Rightarrow p)$\\
+ \midrule
+ 1 & 0 & 1 & 1\\
+ 0 & 1 & 0 & 1\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ \item $\models [p\Rightarrow (q\Rightarrow r)]\Rightarrow [(p\Rightarrow q)\Rightarrow (p\Rightarrow r)]$.
+
+ Instead of creating a truth table, which would be horribly long, we show this by reasoning: Suppose it is not a tautology. So there is a $v$ such that $v(p\Rightarrow (q\Rightarrow r)) = 1$ and $v((p\Rightarrow q)\Rightarrow (p\Rightarrow r)) = 0$. For the second equality to hold, we must have $v(p\Rightarrow q) = 1$ and $v(p\Rightarrow r) = 0$. The latter gives $v(p) = 1$ and $v(r) = 0$, and then the former forces $v(q) = 1$. But then $v(p\Rightarrow (q \Rightarrow r)) = 0$, a contradiction.
+ \end{enumerate}
+\end{eg}
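Truth tables are entirely mechanical, so tautology checking can be sketched as a short program that enumerates all valuations on the primitives occurring in $t$. The encoding of propositions as nested tuples ("bot" for $\bot$, ("imp", a, b) for $a\Rightarrow b$) is an assumed convention for illustration:

```python
from itertools import product

def primitives(t):
    """Set of primitive propositions occurring in t."""
    if t == "bot":
        return set()
    if isinstance(t, str):
        return {t}
    return primitives(t[1]) | primitives(t[2])

def evaluate(t, w):
    """Value of t under the valuation extending the assignment w."""
    if t == "bot":
        return 0
    if isinstance(t, str):
        return w[t]
    return 0 if evaluate(t[1], w) == 1 and evaluate(t[2], w) == 0 else 1

def is_tautology(t):
    """Enumerate every row of the truth table for t."""
    ps = sorted(primitives(t))
    return all(evaluate(t, dict(zip(ps, bits))) == 1
               for bits in product([0, 1], repeat=len(ps)))

imp = lambda a, b: ("imp", a, b)
neg = lambda a: imp(a, "bot")
assert is_tautology(imp("p", imp("q", "p")))                  # example (i)
assert is_tautology(imp(neg(neg("p")), "p"))                  # example (ii)
assert is_tautology(imp(imp("p", imp("q", "r")),
                        imp(imp("p", "q"), imp("p", "r"))))   # example (iii)
assert not is_tautology(imp("p", "q"))
```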
+
+Sometimes, we don't want to make statements as strong as ``$t$ is always true''. Instead, we might want to say ``$t$ is true whenever $S$ is true''. This is known as \emph{semantic entailment}.
+\begin{defi}[Semantic entailment]\index{semantic entailment}
+ For $S\subseteq L$, $t\in L$, we say $S$ \emph{entails} $t$, $S$ \emph{semantically implies} $t$ or $S\models t$ if, for any $v$ such that $v(s) = 1$ for all $s\in S$, we have $v(t) = 1$.
+\end{defi}
+Here we are using the symbol $\models$ again. This is not an attempt to confuse students. $\models t$ is equivalent to the statement $\emptyset \models t$.
+
+\begin{eg}
+ $\{p\Rightarrow q, q\Rightarrow r\}\models (p\Rightarrow r).$
+
+ We want to show that for any valuation $v$ with $v(p\Rightarrow q) = v(q\Rightarrow r) = 1$, we have $v(p\Rightarrow r) = 1$. We prove the contrapositive.
+
+ If $v(p\Rightarrow r) = 0$, then $v(p) = 1$ and $v(r) = 0$. If $v(q) = 0$, then $v(p\Rightarrow q) = 0$. If $v(q) = 1$, then $v(q\Rightarrow r) = 0$. So $v(p\Rightarrow r) = 0$ only if one of $v(p\Rightarrow q)$ or $v(q\Rightarrow r)$ is zero.
+\end{eg}
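For a finite $S$, semantic entailment is likewise decidable by enumerating all valuations on the primitives involved. The following sketch, under an assumed tuple encoding of propositions ("bot" for $\bot$, ("imp", a, b) for $a\Rightarrow b$), checks the example above:

```python
from itertools import product

def primitives(t):
    if t == "bot":
        return set()
    if isinstance(t, str):
        return {t}
    return primitives(t[1]) | primitives(t[2])

def evaluate(t, w):
    if t == "bot":
        return 0
    if isinstance(t, str):
        return w[t]
    return 0 if evaluate(t[1], w) == 1 and evaluate(t[2], w) == 0 else 1

def entails(S, t):
    """Check S |= t by testing every valuation on the primitives involved."""
    ps = sorted(set().union(*(primitives(s) for s in S), primitives(t)))
    return all(evaluate(t, w) == 1
               for bits in product([0, 1], repeat=len(ps))
               for w in [dict(zip(ps, bits))]
               if all(evaluate(s, w) == 1 for s in S))

imp = lambda a, b: ("imp", a, b)
assert entails([imp("p", "q"), imp("q", "r")], imp("p", "r"))
assert not entails([imp("p", "q")], imp("q", "p"))
assert entails([], imp("p", "p"))   # emptyset |= t is just "t is a tautology"
```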
+
+Note that $\{p\}\models q$ and $p \Rightarrow q$ both intuitively mean ``if $p$ is true, then $q$ is true''. However, these are very distinct notions. $p\Rightarrow q$ is a proposition \emph{within} our theory. It is true (or false) in the sense that valuations take the value 0 (or 1).
+
+On the other hand, when we say $\{p\} \models q$, this is a statement in the \emph{meta-theory}. It is true (or false) in the sense that we decided it is true (or false) based on some arguments and (informal) proofs, performed in the real world instead of inside propositional calculus.
+
+The same distinction should be made when we define syntactic implication later.
+
+Before we dive into syntactic implication, we will define a few convenient terms for later use.
+\begin{defi}[Truth and model]
+ If $v(t) = 1$, then we say that $t$ is \emph{true} in $v$, or $v$ is a \emph{model} of $t$. For $S\subseteq L$, a valuation $v$ is a \emph{model} of $S$ if $v(s) = 1$ for all $s\in S$.
+\end{defi}
+\subsection{Syntactic implication}
+While semantic implication captures the idea of truthfulness, syntactic implication captures the idea of proofs. We want to say $S$ \emph{syntactically implies} $t$ if we can prove $t$ from $S$.
+
+To prove propositions, we need two things. Firstly, we need \emph{axioms}, which are statements we can assume to be true in a proof. These are the basic building blocks from which we will prove our theorems.
+
+Other than axioms, we also need \emph{deduction rules}. These allow us to deduce statements from other statements.
+
+Our system of deduction consists of the following three axioms:
+\begin{enumerate}[label=\arabic{*}.]
+ \item $p\Rightarrow(q\Rightarrow p)$
+ \item $[p\Rightarrow(q\Rightarrow r)]\Rightarrow[(p\Rightarrow q)\Rightarrow(p \Rightarrow r)]$
+ \item $(\neg\neg p)\Rightarrow p$
+\end{enumerate}
+and the deduction rule of \emph{modus ponens}\index{modus ponens}: from $p$ and $p\Rightarrow q$, we can deduce $q$.
+
+At first sight, our axioms look a bit weird, especially the second one. We will later see how this particular choice of axioms allows us to prove certain theorems more easily. This choice of axioms can also be motivated by combinatory logic, but we shall not go into the details.
+
+\begin{defi}[Proof and syntactic entailment]\index{proof}\index{syntactic entailment}
+ For any $S\subseteq L$, a \emph{proof} of $t$ from $S$ is a finite sequence $t_1, t_2, \cdots, t_n$ of propositions, with $t_n = t$, such that each $t_i$ is one of the following:
+ \begin{enumerate}
+ \item An axiom
+ \item A member of $S$
+ \item A proposition $t_i$ such that there exist $j, k < i$ with $t_j$ being $t_k\Rightarrow t_i$.
+ \end{enumerate}
+ If there is a proof of $t$ from $S$, we say that $S$ \emph{proves} or \emph{syntactically entails} $t$, written $S\vdash t$.
+
+ If $\emptyset \vdash t$, we say $t$ is a \emph{theorem}\index{theorem} and write $\vdash t$.
+
+ In a proof of $t$ from $S$, $t$ is the \emph{conclusion} and $S$ is the set of \emph{hypotheses} or \emph{premises}.
+\end{defi}
+\begin{eg}
+ $\{p\Rightarrow q, q\Rightarrow r\} \vdash p\Rightarrow r$
+
+ We go for $(p\Rightarrow q)\Rightarrow (p\Rightarrow r)$ via Axiom 2.
+ \begin{enumerate}[label=\arabic{*}.]
+ \item $[p\Rightarrow (q\Rightarrow r)]\Rightarrow [(p\Rightarrow q) \Rightarrow (p\Rightarrow r)]$ \hfill Axiom 2
+ \item $q\Rightarrow r$ \hfill Hypothesis
+ \item $(q\Rightarrow r)\Rightarrow [p\Rightarrow (q\Rightarrow r)]$\hfill Axiom 1
+ \item $p\Rightarrow (q\Rightarrow r)$ \hfill MP on 2, 3
+ \item $(p\Rightarrow q)\Rightarrow (p\Rightarrow r)$\hfill MP on 1, 4
+ \item $p\Rightarrow q$\hfill Hypothesis
+ \item $p\Rightarrow r$\hfill MP on 5, 6
+ \end{enumerate}
+\end{eg}
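Checking a purported proof is mechanical: each line must be an instance of an axiom, a hypothesis, or follow from two earlier lines by modus ponens. A minimal sketch, assuming a tuple encoding of propositions and a small pattern-matcher for the axiom schemes (both illustrative choices, not notation from the course), verifies the proof above:

```python
# Sketch (assumed encoding): propositions are nested tuples, ("imp", a, b)
# for a => b and "bot" for falsity; axiom schemes use metavariables A, B, C.
imp = lambda a, b: ("imp", a, b)

AXIOMS = [
    imp("A", imp("B", "A")),                                   # Axiom 1
    imp(imp("A", imp("B", "C")),
        imp(imp("A", "B"), imp("A", "C"))),                    # Axiom 2
    imp(imp(imp("A", "bot"), "bot"), "A"),                     # Axiom 3
]

def match(schema, prop, env):
    """Match prop against an axiom scheme, binding metavariables in env."""
    if schema in ("A", "B", "C"):
        if schema in env:
            return env[schema] == prop
        env[schema] = prop
        return True
    if isinstance(schema, tuple):
        return (isinstance(prop, tuple)
                and match(schema[1], prop[1], env)
                and match(schema[2], prop[2], env))
    return schema == prop

def is_axiom(t):
    return any(match(a, t, {}) for a in AXIOMS)

def check_proof(lines, hyps):
    """Each line: axiom instance, hypothesis, or MP on two earlier lines."""
    for i, t in enumerate(lines):
        if not (is_axiom(t) or t in hyps or
                any(lines[j] == imp(lines[k], t)
                    for j in range(i) for k in range(i))):
            return False
    return True

p, q, r = "p", "q", "r"
proof = [
    imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r))),         # Axiom 2
    imp(q, r),                                                 # hypothesis
    imp(imp(q, r), imp(p, imp(q, r))),                         # Axiom 1
    imp(p, imp(q, r)),                                         # MP on 2, 3
    imp(imp(p, q), imp(p, r)),                                 # MP on 1, 4
    imp(p, q),                                                 # hypothesis
    imp(p, r),                                                 # MP on 5, 6
]
assert check_proof(proof, {imp(p, q), imp(q, r)})
assert not check_proof([imp(p, r)], {imp(p, q), imp(q, r)})
```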
+
+\begin{eg}
+ $\vdash (p\Rightarrow p)$
+
+ We go for $[p\Rightarrow (p\Rightarrow p)]\Rightarrow (p\Rightarrow p)$.
+ \begin{enumerate}[label=\arabic{*}.]
+ \item $[p\Rightarrow ((p\Rightarrow p)\Rightarrow p)]\Rightarrow [(p\Rightarrow (p\Rightarrow p))\Rightarrow (p\Rightarrow p)]$ \hfill Axiom 2
+ \item $p\Rightarrow ( (p\Rightarrow p)\Rightarrow p)$\hfill Axiom 1
+ \item $[p\Rightarrow (p\Rightarrow p)]\Rightarrow (p\Rightarrow p)$ \hfill MP on 1, 2
+ \item $p\Rightarrow (p\Rightarrow p)$\hfill Axiom 1
+ \item $p\Rightarrow p$ \hfill MP on 3, 4
+ \end{enumerate}
+\end{eg}
+
+This seems like a really tedious way to prove things. We now prove the \emph{deduction theorem}, which allows us to find proofs much more easily.
+
+\begin{prop}[Deduction theorem]\index{deduction theorem}
+ Let $S\subset L$ and $p, q\in L$. Then we have
+ \[
+ S\vdash (p\Rightarrow q)\quad \Leftrightarrow\quad S\cup \{p\} \vdash q.
+ \]
+ This says that $\vdash$ behaves like the connective $\Rightarrow $ in the language.
+\end{prop}
+
+\begin{proof}
+ ($\Rightarrow $) Given a proof of $p\Rightarrow q$ from $S$, append the lines
+ \begin{itemize}
+ \item $p$\hfill Hypothesis
+ \item $q$\hfill MP
+ \end{itemize}
+ to obtain a proof of $q$ from $S\cup \{p\}$.
+
+ ($\Leftarrow$) Let $t_1, t_2, \cdots, t_n = q$ be a proof of $q$ from $S\cup \{p\}$. We'll show that $S\vdash p\Rightarrow t_i$ for all $i$.
+
+ We consider different possibilities of $t_i$:
+ \begin{itemize}
+ \item $t_i$ is an axiom: Write down
+ \begin{itemize}
+ \item $t_i\Rightarrow (p\Rightarrow t_i)$\hfill Axiom 1
+ \item $t_i$\hfill Axiom
+ \item $p\Rightarrow t_i$ \hfill MP
+ \end{itemize}
+ \item $t_i\in S$: Write down
+ \begin{itemize}
+ \item $t_i\Rightarrow (p\Rightarrow t_i)$\hfill Axiom 1
+ \item $t_i$\hfill Hypothesis
+ \item $p\Rightarrow t_i$ \hfill MP
+ \end{itemize}
+ \item $t_i = p$: Write down our proof of $p\Rightarrow p$ from our example above.
+ \item $t_i$ is obtained by MP: we have some $j, k< i$ such that $t_k = (t_j\Rightarrow t_i)$. We can assume that $S\vdash (p\Rightarrow t_j)$ and $S\vdash (p\Rightarrow t_k)$ by induction on $i$. Now we can write down
+ \begin{itemize}
+ \item $[p\Rightarrow (t_j\Rightarrow t_i)]\Rightarrow [(p\Rightarrow t_j)\Rightarrow (p\Rightarrow t_i)]$\hfill Axiom 2
+ \item $p\Rightarrow (t_j\Rightarrow t_i)$\hfill Known already
+ \item $(p\Rightarrow t_j)\Rightarrow (p\Rightarrow t_i)$\hfill MP
+ \item $p\Rightarrow t_j$\hfill Known already
+ \item $p\Rightarrow t_i$\hfill MP
+ \end{itemize}
+ to get $S\vdash (p\Rightarrow t_i)$.
+ \end{itemize}
+ This is the reason why we have this weird-looking Axiom 2 --- it enables us to easily prove the deduction theorem.
+\end{proof}
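The ($\Leftarrow$) direction of this proof is itself an algorithm: it transforms, line by line, a proof of $q$ from $S\cup \{p\}$ into a proof of $p\Rightarrow q$ from $S$. Here is a sketch under an assumed tuple encoding of propositions; all helper names are illustrative:

```python
# Sketch (assumed encoding): ("imp", a, b) stands for a => b, "bot" for
# falsity; axiom schemes use metavariables A, B, C.
imp = lambda a, b: ("imp", a, b)

AXIOMS = [
    imp("A", imp("B", "A")),                                   # Axiom 1
    imp(imp("A", imp("B", "C")),
        imp(imp("A", "B"), imp("A", "C"))),                    # Axiom 2
    imp(imp(imp("A", "bot"), "bot"), "A"),                     # Axiom 3
]

def match(schema, prop, env):
    if schema in ("A", "B", "C"):                              # metavariable
        if schema in env:
            return env[schema] == prop
        env[schema] = prop
        return True
    if isinstance(schema, tuple):
        return (isinstance(prop, tuple)
                and match(schema[1], prop[1], env)
                and match(schema[2], prop[2], env))
    return schema == prop

def is_axiom(t):
    return any(match(a, t, {}) for a in AXIOMS)

def id_proof(p):
    """The five-line proof of p => p given earlier."""
    pp = imp(p, p)
    return [imp(imp(p, imp(pp, p)), imp(imp(p, pp), pp)),      # Axiom 2
            imp(p, imp(pp, p)),                                # Axiom 1
            imp(imp(p, pp), pp),                               # MP
            imp(p, pp),                                        # Axiom 1
            pp]                                                # MP

def deduce(lines, S, p):
    """Turn a proof (lines) from S + {p} into one of p => lines[-1] from S."""
    out = []
    for t in lines:
        if t == p:
            out += id_proof(p)
        elif is_axiom(t) or t in S:
            out += [imp(t, imp(p, t)), t, imp(p, t)]           # Axiom 1, then MP
        else:                                  # t came by modus ponens
            tj = next(a for a in lines if imp(a, t) in lines)  # a premise t_j
            out += [imp(imp(p, imp(tj, t)), imp(imp(p, tj), imp(p, t))),
                    imp(imp(p, tj), imp(p, t)),                # MP, Axiom 2
                    imp(p, t)]                                 # MP
    return out

def check_proof(lines, hyps):
    return all(is_axiom(t) or t in hyps or
               any(lines[j] == imp(lines[k], t)
                   for j in range(i) for k in range(i))
               for i, t in enumerate(lines))

# {p => q, q => r, p} proves r; transform into {p => q, q => r} proves p => r.
S = {imp("p", "q"), imp("q", "r")}
proof = [imp("p", "q"), "p", "q", imp("q", "r"), "r"]
new = deduce(proof, S, "p")
assert new[-1] == imp("p", "r") and check_proof(new, S)
```

Note how the three branches of `deduce` mirror the three cases of the proof, and how the $t_i = p$ case splices in the proof of $p\Rightarrow p$.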
+
+This theorem has a ``converse''. Suppose we have a deduction system that admits \emph{modus ponens}, and the deduction theorem holds for this system. Then axioms (1) and (2) must hold in the system, since we can prove them using the deduction theorem and \emph{modus ponens}. However, we are not able to deduce axiom (3) from just \emph{modus ponens} and the deduction theorem.
+
+\begin{eg}
+ We want to show $\{p\Rightarrow q, q\Rightarrow r\} \vdash (p\Rightarrow r)$. By the deduction theorem, it is enough to show that $\{p\Rightarrow q, q\Rightarrow r, p\}\vdash r$, which is trivial by applying MP twice.
+\end{eg}
+
+Now we have two notions: $\models$ and $\vdash$. How are they related? We want to show that they are equal: if something is true, we can prove it; if we can prove something, it must be true.
+
+\begin{aim}
+ Show that $S\vdash t$ if and only if $S\models t$.
+ \end{aim}
+This is known as the \emph{completeness theorem}, which is made up of two directions:
+\begin{enumerate}
+ \item Soundness: If $S\vdash t$, then $S\models t$. ``Our axioms aren't absurd''
+ \item Adequacy: If $S\models t$, then $S\vdash t$. ``Our axioms are strong enough to be able to deduce, from $S$, \emph{all} semantic consequences of $S$.''
+\end{enumerate}
+
+We first prove the easy bit:
+\begin{prop}[Soundness theorem]\index{soundness theorem}
+ If $S\vdash t$, then $S\models t$.
+\end{prop}
+
+\begin{proof}
+ Given valuation $v$ with $v(s) = 1$ for all $s\in S$, we need to show that $v(t) = 1$. We will show that every line $t_i$ in the proof has $v(t_i) = 1$.
+
+ If $t_i$ is an axiom, then $v(t_i) = 1$ since axioms are tautologies. If $t_i$ is a hypothesis, then by assumption $v(s) = 1$. If $t_i$ is obtained by modus ponens, say from $t_j \Rightarrow t_i$, since $v(t_j) = 1$ and $v(t_j \Rightarrow t_i) = 1$, we must have $v(t_i) = 1$.
+\end{proof}
+Note that soundness holds whenever our axioms are all tautologies. Even if we had silly axioms that could prove almost nothing, as long as they are all tautologies, the system would be sound.
+
+Now we have to prove adequacy. It seems like a big and scary thing to prove. Given that a statement is true, we have to find a proof for it, but we all know that finding proofs is hard!
+
+We first prove a special case. To do so, we first define \emph{consistency}.
+\begin{defi}[Consistent]\index{consistent}\index{inconsistent}
+ $S$ is \emph{inconsistent} if $S\vdash \bot$. $S$ is \emph{consistent} if it is not inconsistent.
+\end{defi}
+
+The special case we will prove is the following:
+\begin{thm}[Model existence theorem]\index{model existence theorem}
+ If $S\models \bot$, then $S\vdash \bot$. i.e.\ if $S$ has no model, then $S$ is inconsistent. Equivalently, if $S$ is consistent, then $S$ has a model.
+\end{thm}
+
+While we called this a ``special case'', it is in fact all we need to know to prove adequacy. If we are given $S\models t$, then $S\cup \{\neg t\} \models \bot$. Hence using the model existence theorem, we know that $S\cup \{\neg t\} \vdash \bot$. Hence by the deduction theorem, we know that $S\vdash \neg \neg t$. But $\vdash (\neg\neg t)\Rightarrow t$ by Axiom 3. So $S\vdash t$.
+
+As a result, some books call this the ``completeness theorem'' instead, because the rest of the completeness theorem follows trivially from this.
+
+The idea of the proof is that we'd like to define $v: L \to \{0, 1\}$ by
+\[
+ p\mapsto
+ \begin{cases}
+ 1 & \text{if } p\in S\\
+ 0 & \text{if } p\not\in S
+ \end{cases}
+\]
+However, this is obviously going to fail, because we could have some $p$ such that $S\vdash p$ but $p\not\in S$, i.e.\ $S$ is not \emph{deductively closed}. Yet this is not a serious problem --- we take the deductive closure first, i.e.\ add all the statements that $S$ can prove.
+
+But there is a more serious problem. There might be a $p$ with $S\not\vdash p$ and $S\not\vdash \neg p$. This is the case if, say, $p$ never appears in $S$. The idea here is to arbitrarily declare that $p$ is true or false, and add $p$ or $\neg p$ to $S$. What we have to prove is that we can do so without making $S$ inconsistent.
+
+We'll prove this in the following lemma:
+\begin{lemma}
+ For consistent $S\subset L$ and $p\in L$, at least one of $S\cup \{p\}$ and $S\cup \{\neg p\}$ is consistent.
+\end{lemma}
+
+\begin{proof}
+ Suppose instead that both $S\cup \{p\} \vdash \bot$ and $S\cup \{\neg p\}\vdash \bot$. Then by the deduction theorem, $S\vdash p$ and $S\vdash \neg p$. So $S\vdash \bot$, contradicting consistency of $S$.
+\end{proof}
+
+Now we can prove the completeness theorem. Here we'll assume that the set of primitives $P$, and hence the language $L$, is countable. This is a reasonable thing to assume, since we can only talk about finitely many primitives (we only have a finite life), so uncountably many primitives would be of no use.
+
+However, this is not a good enough excuse to not prove our things properly. To prove the whole completeness theorem, we will need to use Zorn's lemma, which we will discuss in Chapter~\ref{sec:poset}. For now, we will just prove the countable case.
+
+\begin{proof}
+ Assuming that $L$ is countable, list $L$ as $\{t_1, t_2, \cdots\}$.
+
+ Let $S_0 = S$. Then at least one of $S\cup \{t_1\}$ and $S\cup \{\neg t_1\}$ is consistent. Pick $S_1$ to be the consistent one. Then let $S_2 = S_1 \cup \{t_2\}$ or $S_1\cup \{\neg t_2\}$ such that $S_2$ is consistent. Continue inductively.
+
+ Set $\bar{S} = S_0\cup S_1\cup S_2\cup \cdots$. Then $p\in \bar{S}$ or $\neg p\in \bar{S}$ for each $p\in L$ by construction. Also, we know that $\bar S$ is consistent. If we had $\bar S\vdash \bot$, then since proofs are finite, there is some $S_n$ that contains all the assumptions used in the proof of $\bar S \vdash \bot$. Hence $S_n \vdash \bot$, but we know that all $S_n$ are consistent. Contradiction.
+
+ Finally, we check that $\bar S$ is deductively closed: if $\bar S\vdash p$, we must have $p \in \bar S$. Otherwise, $\neg p\in \bar S$. But this implies that $\bar S$ is inconsistent.
+
+ Define $v: L\to \{0, 1\}$ by
+ \[
+ p \mapsto
+ \begin{cases}
+ 1 & \text{if }p\in \bar S\\
+ 0 & \text{if not}
+ \end{cases}.
+ \]
+ All that is left to show is that this is indeed a valuation.
+
+ First of all, we have $v(\bot) = 0$ as $\bot \not\in \bar S$ (since $\bar S$ is consistent).
+
+ For $p\Rightarrow q$, we check all possible cases.
+ \begin{enumerate}
+ \item If $v(p) = 1, v(q) = 0$, we have $p\in \bar S$, $q\not\in \bar S$. We want to show $p\Rightarrow q\not\in \bar S$. Suppose instead that $p\Rightarrow q\in \bar S$. Then $\bar S \vdash q$ by modus ponens. Hence $q\in \bar S$ since $\bar S$ is deductively closed. This is a contradiction. Hence we must have $v(p\Rightarrow q) = 0$.
+ \item If $v(q) = 1$, then $q\in \bar S$. We want to show $p\Rightarrow q\in \bar S$. By our first axiom, we know that $\vdash q\Rightarrow (p\Rightarrow q)$. So $\bar S \vdash p\Rightarrow q$. So $p\Rightarrow q \in \bar S$ by deductive closure. Hence we have $v(p \Rightarrow q) = 1$.
+ \item If $v(p) = 0$, then $p\not\in \bar S$. So $\neg p\in \bar S$. We want to show $p\Rightarrow q\in \bar S$.
+ \begin{itemize}
+ \item
+ This is equivalent to showing $\neg p \vdash p\Rightarrow q$.
+ \item By the deduction theorem, this is equivalent to proving $\{p, \neg p\} \vdash q$.
+ \item We know that $\{p, \neg p\} \vdash \bot$. So it is sufficient to show $\bot \vdash q$.
+ \item By axiom 3, this is equivalent to showing $\bot \vdash \neg \neg q$.
+ \item By the deduction theorem, this is again equivalent to showing $\vdash \bot \Rightarrow \neg \neg q$.
+ \item By definition of $\neg$, this is equivalent to showing $\vdash \bot \Rightarrow (\neg q\Rightarrow \bot)$.
+ \end{itemize}
+ But this is just an instance of the first axiom. So we know that $\bar S\vdash p\Rightarrow q$. So $v(p\Rightarrow q) = 1$.\qedhere
+ \end{enumerate}
+\end{proof}
+Note that the last case is the first time we really use Axiom 3.
+
+By the remark before the proof, we have
+\begin{cor}[Adequacy theorem]\index{adequacy theorem}
+ Let $S\subset L$, $t\in L$. Then $S\models t$ implies $S\vdash t$.
+\end{cor}
+
+\begin{thm}[Completeness theorem]\index{completeness theorem}
+ Let $S\subset L$ and $t\in L$. Then $S\models t$ if and only if $S\vdash t$.
+\end{thm}
+
+This theorem has two nice consequences.
+\begin{cor}[Compactness theorem]\index{compactness theorem}
+ Let $S\subset L$ and $t\in L$ with $S\models t$. Then there is some finite $S'\subseteq S$ such that $S'\models t$.
+\end{cor}
+
+\begin{proof}
+ Trivial with $\models$ replaced by $\vdash$, because proofs are finite.
+\end{proof}
+Sometimes when people say compactness theorem, they mean the special case where $t = \bot$. This says that if every finite subset of $S$ has a model, then $S$ has a model. This result can be used to prove some rather unexpected results in other fields such as graph theory, but we will not go into the details.
+
+\begin{cor}[Decidability theorem]\index{decidability theorem}
+ Let $S\subset L$ be a finite set and $t\in L$. Then there exists an algorithm that determines, in finite and bounded time, whether or not $S\vdash t$.
+\end{cor}
+\begin{proof}
+ Trivial with $\vdash$ replaced by $\models$, by making a truth table.
+\end{proof}
+This is a rather nice result, because we know that proofs are hard to find in general. However, this theorem only tells you whether a proof exists, without giving you the proof itself!
+
+\section{Well-orderings and ordinals}
+In the coming two chapters, we will study different orderings. The focus of this chapter is \emph{well-orders}, while the focus of the next is \emph{partial orders}.
+
+A well-order on a set $S$ is a special type of total order where every non-empty subset of $S$ has a least element. Among the many nice properties of well-orders is that we can do induction and recursion on them.
+
+Our interest, however, does not lie in the well-orders themselves. Instead, we are interested in the ``lengths'' of well-orders. Officially, we call them the \emph{order types} of the well-orders. Each order type is known as an \emph{ordinal}.
+
+There are many things we can do with ordinals. We can add and multiply them to form ``longer'' well-orders. While we will not make much use of them in this chapter, in later chapters, we will use ordinals to count ``beyond infinity'', similar to how we count finite things using natural numbers.
+
+\subsection{Well-orderings}
+We start with a few definitions.
+\begin{defi}[(Strict) total order]\index{total order}
+ A \emph{(strict) total order} or \emph{linear order} is a pair $(X, <)$, where $X$ is a set and $<$ is a relation on $X$ that satisfies
+ \begin{enumerate}
+ \item $x \not< x$ for all $x$ \hfill (irreflexivity)
+ \item If $x < y$, $y < z$, then $x < z$ \hfill (transitivity)
+ \item $x < y$ or $x = y$ or $y < x$ \hfill (trichotomy)
+ \end{enumerate}
+\end{defi}
+
+We have the usual shorthands for total orders. For example, $x > y$ means $y < x$ and $x \leq y$ means ($x < y$ or $x = y$).
+
+It is also possible for a total order to be defined in terms of $\leq$ instead of $<$.
+\begin{defi}[(Non-strict) total order]
+ A \emph{(non-strict) total order} is a pair $(X, \leq)$, where $X$ is a set and $\leq$ is a relation on $X$ that satisfies
+ \begin{enumerate}
+ \item $x \leq x$ \hfill (reflexivity)
+ \item $x\leq y$ and $y \leq z$ implies $x\leq z$ \hfill (transitivity)
+ \item $x\leq y$ and $y\leq x$ implies $x = y$ \hfill (antisymmetry)
+ \item $x\leq y$ or $y\leq x$ \hfill (trichotomy)
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\N, \Z, \Q, \R$ with the usual orders are total orders.
+ \item On $\N^+$ (the positive integers), `$x < y$ if $x\mid y$ and $x \not=y$' is not trichotomous, and so not a total order.
+ \item On $\P(S)$, define `$x\leq y$' if $x\subseteq y$. This is not a total order since it is not trichotomous (for $|S| > 1$).
+ \end{enumerate}
+\end{eg}
+
+While there can be different total orders, the particular kind we are interested in is well-orders.
+\begin{defi}[Well order]\index{well order}
+ A total order $(X, <)$ is a \emph{well-ordering} if every (non-empty) subset has a least element, i.e.
+ \[
+ (\forall S\subseteq X)[S\not= \emptyset \Rightarrow (\exists x\in S)(\forall y\in S)\, y \geq x].
+ \]
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\N$ with usual ordering is a well-order.
+ \item $\Z, \Q, \R$ are not well-ordered because the whole set itself does not have a least element.
+ \item $\{x\in \Q: x\geq 0\}$ is not well-ordered. For example, $\{x\in \Q: x> 0\}$ has no least element.
+ \item In $\R$, $\{1 - 1/n: n = 2, 3, 4, \cdots\}$ is well-ordered because it is isomorphic to the naturals.
+ \item $\{1 - 1/n: n = 2, 3, 4, \cdots\} \cup \{1\}$ is also well-ordered. If a non-empty subset contains only $1$, then $1$ is its least element. Otherwise, take the least element of the remaining part of the subset.
+ \item Similarly, $\{1 - 1/n: n = 2, 3, 4, \cdots\}\cup \{2\}$ is well-ordered.
+ \item $\{1 - 1/n: n = 2, 3, 4, \cdots\} \cup \{2 - 1/n: n = 2, 3, 4, \cdots\}$ is also well-ordered. This is a good example to keep in mind.
+ \end{enumerate}
+\end{eg}
+
+There is another way to characterize well-orderings, in terms of infinite decreasing sequences.
+\begin{prop}
+ A total order is a well-ordering if and only if it has no infinite strictly decreasing sequence.
+\end{prop}
+
+\begin{proof}
+ If $x_1 > x_2 > x_3 > \cdots$, then $\{x_i: i\in \N\}$ has no least element.
+
+ Conversely, if non-empty $S\subset X$ has no least element, then each $x\in S$ has some $x'\in S$ with $x' < x$. Similarly, we can find some $x'' < x'$ \emph{ad infinitum}. So
+ \[
+ x > x' > x'' > x''' > \cdots
+ \]
+ is an infinite decreasing sequence.
+\end{proof}
+
+As with all other axiomatic theories we study, we regard two total orders as \emph{isomorphic} if they are ``the same up to renaming of elements''.
+\begin{defi}[Order isomorphism]\index{order isomorphism}
+ Say the total orders $X, Y$ are \emph{isomorphic} if there exists a bijection $f: X\to Y$ that is order-preserving, i.e.\ $x < y \Rightarrow f(x) < f(y)$.
+\end{defi}
+
+\begin{eg}\leavevmode
+\begin{enumerate}
+ \item $\N$ and $\{1 - 1/n: n = 2, 3, 4, \cdots\}$ are isomorphic.
+ \item $\{1 - 1/n:n = 2, 3, 4, \cdots\}\cup \{1\}$ is isomorphic to $\{1 - 1/n: n = 2, 3, 4, \cdots\} \cup \{2\}$.
+ \item $\{1 - 1/n: n = 2, 3, 4, \cdots\}$ and $\{1 - 1/n: n = 2, 3, 4, \cdots\}\cup \{1\}$ are not isomorphic because the second has a greatest element but the first doesn't.
+\end{enumerate}
+\end{eg}
+
+Recall from IA Numbers and Sets that in $\N$, the well-ordering principle is equivalent to the principle of induction. We proved that we can do induction simply by assuming that $\N$ is well-ordered. Using the same proof, we should be able to prove that we can do induction on \emph{any} well-ordered set.
+
+Of course, a general well-ordered set does not have the concept of ``$+1$'', so we won't be able to formulate weak induction. Instead, our principle of induction is the form taken by strong induction.
+\begin{prop}[Principle of induction]\index{induction}
+ Let $X$ be a well-ordered set. Suppose $S \subseteq X$ has the property:
+ \[
+ (\forall x)\Big(\big((\forall y)\, y < x \Rightarrow y\in S\big) \Rightarrow x\in S\Big),
+ \]
+ then $S = X$.
+
+ In particular, if a property $P(x)$ satisfies
+ \[
+ (\forall x)\Big(\big((\forall y)\, y < x \Rightarrow P(y)\big)\Rightarrow P(x)\Big),
+ \]
+ then $P(x)$ for all $x$.
+\end{prop}
+
+\begin{proof}
+ Suppose $S\not=X$. Let $x$ be the least element of $X\setminus S$. Then by minimality of $x$, for all $y$, $y < x \Rightarrow y\in S$. Hence $x\in S$. Contradiction.
+\end{proof}
+
+Using proof by induction, we can prove the following property of well-orders:
+\begin{prop}
+ Let $X$ and $Y$ be isomorphic well-orderings. Then there is a unique isomorphism between $X$ and $Y$.
+\end{prop}
+This is something special to well-orders. This is not true for general total orderings. For example, $x\mapsto x$ and $x\mapsto x - 13$ are both isomorphisms $\Z\to \Z$. It is also not true for, say, groups. For example, there are two possible isomorphisms from $\Z_3$ to itself.
+
+\begin{proof}
+ Let $f$ and $g$ be two isomorphisms $X\to Y$. To show that $f = g$, it is enough, by induction, to show $f(x) = g(x)$ given $f(y) = g(y)$ for all $y < x$.
+
+ Given a fixed $x$, let $S = \{f(y): y < x\}$. We know that $Y\setminus S$ is non-empty since $f(x) \not\in S$. So let $a$ be the least member of $Y\setminus S$. Then we must have $f(x) = a$. Otherwise, we will have $a < f(x)$ by minimality of $a$, which implies that $f^{-1}(a) < x$ since $f$ is order-preserving. However, by definition of $S$, this implies that $a = f(f^{-1}(a)) \in S$. This is a contradiction since $a \in Y\setminus S$.
+
+ By the induction hypothesis, for $y < x$, we have $f(y) = g(y)$. So we have $S = \{g(y): y < x\}$ as well. Hence $g(x) = \min(Y\setminus S) = f(x)$.
+\end{proof}
+
+If we have an ordered set, we can decide to cut off the top of the set and keep the bottom part. What is left is an initial segment.
+\begin{defi}[Initial segment]\index{initial segment}
+ A subset $Y$ of a totally ordered $X$ is an \emph{initial segment} if
+ \[
+ x\in Y, y< x \Rightarrow y\in Y,
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [gray!50!white] (-0.5, 0) arc (180:360:0.5 and 1.3) -- cycle;
+ \draw (0, 0) ellipse (0.5 and 1.3);
+ \draw (-0.5, 0) -- (0.5, 0);
+ \node at (0, -0.5) {$Y$};
+ \node at (0.7, 1) {$X$};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+\begin{eg}
+ For any $x\in X$, the set $I_x = \{y\in X: y < x\}$ is an initial segment. However, not every initial segment of $X$ needs to be of this form. For example, $\{x: x\leq 3\}\subseteq \R$ and $\{x: x < 0\text{ or }x^2 < 2\}\subseteq \Q$ are both initial segments not of this form.
+\end{eg}
+
+The next nice property of well-orders we have is that every proper initial segment is of this form.
+\begin{prop}
+ Every proper initial segment $Y$ of a well-ordered set $X$ is of the form $I_x = \{y\in X: y < x\}$.
+\end{prop}
+
+\begin{proof}
+ Take $x = \min (X\setminus Y)$, which exists since $Y \not= X$. Then for any $y\in I_x$, we have $y < x$, so $y\not\in X\setminus Y$ by minimality of $x$. Hence $y \in Y$, and so $I_x \subseteq Y$.
+
+ On the other hand, if $y \in Y$, then definitely $y \not= x$. We also cannot have $y > x$ since this implies $x \in Y$. Hence we must have $y < x$. So $y \in I_x$. Hence $Y \subseteq I_x$. So $Y = I_x$.
+\end{proof}
+
+The next nice property we want to show is that in a well-ordering $X$, \emph{every} subset $S$ is isomorphic to an initial segment.
+
+Note that this is \emph{very} false for a general total order. For example, in $\Z$, no initial segment is isomorphic to the subset $\{1, 2, 3\}$ since every initial segment of $\Z$ is either infinite or empty. Alternatively, in $\R$, $\Q$ is not isomorphic to an initial segment.
+
+It is intuitively obvious how we can prove this. We simply send the minimum element of $S$ to the minimum of $X$, and the continue recursively. However, how can we justify recursion? If we have the well-order $\{\frac{1}{2}, \frac{2}{3}, \frac{3}{4}, \cdots, 1\}$, then we will never reach $1$ if we attempt to write down each step of the recursion, since we can only get there after infinitely many steps. We will need to define some sort of ``infinite recursion'' to justify our recursion on general well-orders.
+
+We first define the restriction of a function:
+\begin{defi}[Restriction of function]
+ For $f: A\to B$ and $C\subseteq A$, the \emph{restriction} of $f$ to $C$ is
+ \[
+ f|_C = \{(x, f(x)): x\in C\}.
+ \]
+\end{defi}
+In the following theorem (and the subsequent proof), we are treating functions explicitly as sets of ordered pairs $(x, f(x))$. We will perform set operations on functions and use unions to ``stitch together'' functions.
+
+\begin{thm}[Definition by recursion]\index{recursion}
+ Let $X$ be a well-ordered set and $Y$ be any set. Then for any function $G: \P(X\times Y)\to Y$, there exists a function $f:X\to Y$ such that
+ \[
+ f(x) = G(f|_{I_x})
+ \]
+ for all $x$.
+
+ This is a rather weird definition. Intuitively, it means that $G$ takes previous values of $f(x)$ and returns the desired output. This means that in defining $f$ at $x$, we are allowed to make use of values of $f$ on $I_x$. For example, we define $f(n) = n f(n - 1)$ for the factorial function, with $f(0) = 1$.
+\end{thm}
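For a finite well-order, the recursion in the theorem can be carried out directly. Here is a minimal sketch with $X = \{0, \ldots, n - 1\}$ under the usual order and with restrictions represented as dictionaries (an assumed encoding for illustration), recovering the factorial example:

```python
# Sketch (assumed encoding): X = {0, 1, ..., n-1} with the usual order, and
# the restriction f|_{I_x} represented as a dict on {y : y < x}.
def recurse(n, G):
    """Build f on {0, ..., n-1} satisfying f(x) = G(f restricted to I_x)."""
    f = {}
    for x in range(n):           # visit elements in increasing order
        f[x] = G(dict(f))        # G sees exactly the values of f below x
    return f

# The factorial example from the notes: G returns 1 on the empty restriction,
# and otherwise x * f(x - 1), where x = len(h) is the element being defined.
G = lambda h: 1 if not h else len(h) * h[len(h) - 1]
f = recurse(5, G)
assert f == {0: 1, 1: 1, 2: 2, 3: 6, 4: 24}
```

For infinite well-orders we cannot simply loop like this, which is exactly why the ``attempts'' in the proof below are needed.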
+
+\begin{proof}
+ We might want to jump into the proof and define $f(0) = G(\emptyset)$, where $0$ is the minimum element. Then we define $f(1) = G(f(0))$ etc. But doing so is simply recursion, which is the very thing we want to justify!
+
+ Instead, we use the following clever trick: We define ``$h$ is an attempt'' to mean
+ \begin{center}
+ $h: I \to Y$ for some initial segment $I$ of $X$, and $h(x) = G(h|_{I_x})$ for $x\in I$.
+ \end{center}
+ The idea is to show that for any $x$, there is an attempt $h$ that is defined at $x$. Then take the value $f(x)$ to be $h(x)$. However, we must show this is well-defined first:
+
+ \begin{claim}
+ If attempts $h$ and $h'$ are defined at $x$, then $h(x) = h'(x)$.
+ \end{claim}
+ By induction on $x$, it is enough to show that $h(x) = h'(x)$ assuming $h(y) = h'(y)$ for all $y < x$. But then $h(x) = G(h|_{I_x}) = G(h'|_{I_x}) = h'(x)$. So done.
+
+ \begin{claim}
+ For any $x$, there must exist an attempt $h$ that is defined at $x$.
+ \end{claim}
+ Again, we may assume (by induction) that for each $y < x$, there exists an attempt $h_y$ defined at $y$. Then we put all these functions together, and take $h' = \bigcup_{y < x} h_y$. This is defined for all $y < x$, and is well-defined since the $h_y$ never disagree.
+
+ Finally, add to it the pair $(x, G(h'|_{I_x}))$. Then $h = h'\cup \{(x, G(h'|_{I_x}))\}$ is an attempt defined at $x$.
+
+ Now define $f:X\to Y$ by $f(x) = y$ if there exists an attempt $h$, defined at $x$, with $h(x) = y$.
+
+ \begin{claim}
+ There is a unique such $f$.
+ \end{claim}
+ Suppose $f$ and $f'$ both work. Then if $f(y) = f'(y)$ for all $y < x$, then $f(x) = f'(x)$ by definition. So by induction, we know for all $x$, we have $f'(x) = f(x)$.
+\end{proof}
+With the tool of recursion, we can prove that every subset of a well-ordering is isomorphic to an initial segment of it.
+
+\begin{lemma}[Subset collapse]\index{subset collapse lemma}
+ Let $X$ be a well-ordering and let $Y\subseteq X$. Then $Y$ is isomorphic to an initial segment of $X$. Moreover, this initial segment is unique.
+\end{lemma}
+
+\begin{proof}
+ For $f: Y\to X$ to be an order-preserving bijection with an initial segment of $X$, we need to map $x$ to the smallest thing not yet mapped to, i.e.
+ \[
+ f(x) = \min (X\setminus \{f(y): y < x\}).
+ \]
+ To be able to take the minimum, we have to make sure the set is non-empty, i.e.\ $\{f(y): y < x\} \not= X$. We can show this by proving that $f(z) < x$ for all $z < x$ by induction, and hence $x \not\in \{f(y): y < x\}$.
+
+ Then by the recursion theorem, this function exists and is unique.
+\end{proof}
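For finite well-orderings, the collapse map can be computed directly from the formula $f(x) = \min (X\setminus \{f(y): y < x\})$. A hedged Python sketch (the name `collapse` is ours, and the example sets are made up):

```python
# Sketch of the subset collapse map f(x) = min(X \ {f(y): y < x}) for a
# finite well-ordering X, given as a sorted list.

def collapse(X, Y):
    """Map the subset Y of X order-isomorphically onto an initial segment of X."""
    f = {}
    for x in sorted(Y):                        # recurse in increasing order
        used = set(f.values())
        f[x] = min(v for v in X if v not in used)
    return f

X = list(range(10))
Y = [2, 3, 5, 6]
print(collapse(X, Y))  # {2: 0, 3: 1, 5: 2, 6: 3} -- the initial segment {0, 1, 2, 3}
```

Note how the image really is an initial segment: at each step the smallest unused element of $X$ is taken, so no gaps can appear.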
+This implies that a well-ordering $X$ can \emph{never} be isomorphic to a proper initial segment of itself: $X$ is isomorphic to the initial segment $X$ via the identity function, and uniqueness of the initial segment shows that $X$ cannot also be isomorphic to a proper one.
+
+Using the idea of initial segments, we can define an order comparing different well-orders themselves.
+\begin{notation}
+ Write $X\leq Y$ if $X$ is isomorphic to an initial segment of $Y$.
+
+ We write $X < Y$ if $X\leq Y$ but $X$ is not isomorphic to $Y$, i.e.\ $X$ is isomorphic to a proper initial segment of $Y$.
+\end{notation}
+
+\begin{eg}
+ If $X = \N$, $Y = \{\frac{1}{2}, \frac{2}{3}, \frac{3}{4},\cdots\}\cup \{1\}$, then $X \leq Y$.
+\end{eg}
+
+We will show that this is a total order. Of course, we identify two well-orders as ``equal'' when they are isomorphic.
+
+Reflexivity and transitivity are straightforward. So we prove trichotomy and antisymmetry:
+\begin{thm}
+ Let $X, Y$ be well-orderings. Then $X\leq Y$ or $Y \leq X$.
+\end{thm}
+
+\begin{proof}
+ We attempt to define $f: X\to Y$ by
+ \[
+ f(x) = \min (Y\setminus \{f(y): y < x\}).
+ \]
+ By the law of excluded middle, this function is either well-defined or not.
+
+ If it is well-defined, then it is an isomorphism from $X$ to an initial segment of $Y$.
+
+ If it is not, then there is some $x$ such that $\{f(y): y < x\} = Y$ and we cannot take the minimum. Then $f$ restricts to an order-preserving bijection between $I_x = \{y: y < x\}$ and $Y$. So its inverse is an isomorphism between $Y$ and an initial segment of $X$.
+
+ Hence either $X \leq Y$ or $Y \leq X$.
+\end{proof}
+
+\begin{thm}
+ Let $X, Y$ be well-orderings with $X\leq Y$ and $Y \leq X$. Then $X$ and $Y$ are isomorphic.
+\end{thm}
+
+\begin{proof}
+ Since $X \leq Y$, there is an order-preserving function $f: X\to Y$ that bijects $X$ with an initial segment of $Y$. Similarly, since $Y \leq X$, we get an analogous $g: Y\to X$. Then $g\circ f: X\to X$ defines a bijection between $X$ and an initial segment of $X$.
+
+ Since $X$ is not isomorphic to a \emph{proper} initial segment of itself, and $g\circ f$ is an order isomorphism onto an initial segment of $X$, the image of $g\circ f$ must be $X$ itself. Hence $g\circ f$ is a bijection.
+
+ Similarly, $f\circ g$ is a bijection. Hence $f$ and $g$ are both bijections, and $X$ and $Y$ are isomorphic.
+\end{proof}
+
+\subsection{New well-orderings from old}
+Given a well-ordering $X$, we want to create more well-orderings. We've previously shown that we can create a shorter one by taking an initial segment. In this section, we will explore two ways to make \emph{longer} well-orderings.
+
+\subsubsection*{Add one element}
+We can extend a well-ordering by exactly one element. This is known as the \emph{successor}.
+\begin{defi}[Successor]\index{successor}
+ Given $X$, choose some $x\not\in X$ and define a well-ordering on $X\cup \{x\}$ by setting $y < x$ for all $y \in X$. This is the \emph{successor} of $X$, written $X^+$.
+\end{defi}
+We clearly have $X < X^+$.
+\subsubsection*{Put some together}
+More interestingly, we want to ``stitch together'' many well-orderings. However, we cannot just arbitrarily stitch well-orderings together. The well-orderings must satisfy certain nice conditions for this to be well-defined.
+
+\begin{defi}[Extension]\index{extension}
+ For well-orderings $(X, <_X)$ and $(Y, <_Y)$, we say $Y$ \emph{extends} $X$ if $X$ is a proper initial segment of $Y$ and $<_X$ and $<_Y$ agree when defined.
+\end{defi}
+Note that we explicitly require $X$ to be an initial segment of $Y$. $X$ simply being a \emph{subset} of $Y$ will not work, for reasons that will become clear shortly.
+
+\begin{defi}[Nested family]\index{nested family}
+ We say well-orderings $\{X_i: i\in I\}$ form a \emph{nested} family if for any $i, j\in I$, either $X_i$ extends $X_j$, or $X_j$ extends $X_i$.
+\end{defi}
+
+\begin{prop}
+ Let $\{X_i: i\in I\}$ be a nested set of well-orderings. Then there exists a well-ordering $X$ with $X_i\leq X$ for all $i$.
+\end{prop}
+
+\begin{proof}
+ Let $X = \bigcup_{i\in I}X_i$ with $<$ defined on $X$ as $\bigcup_{i\in I} <_i$ (where $<_i$ is the ordering of $X_i$), i.e.\ we inherit the orders from the $X_i$'s. This is clearly a total ordering. Since $\{X_i: i \in I\}$ is a nested family, each $X_i$ is an initial segment of $X$.
+
+ To show that it is a well-ordering, let $S\subseteq X$ be a non-empty subset of $X$. Then $S\cap X_i$ is non-empty for some $i$. Let $x$ be the minimum element (in $X_i$) of $S\cap X_i$. Then also for any $y\in S$, we must have $x \leq y$, as $X_i$ is an initial segment of $X$.
+\end{proof}
+Note that if we did not require $X$ to be an initial segment of $Y$ when defining ``extension'', then the above proof would not work. For example, we can take the collection of all subsets $X_n = \{x \geq -n: x\in \Z\}$, and their union would be $\Z$, which is not well-ordered.
+
+\subsection{Ordinals}
+We have already shown that the collection of all well-orderings is a total order. But is it a well-ordering itself? To investigate this issue further, we first define ourselves a convenient way of talking about well-orderings.
+
+\begin{defi}[Ordinal]\index{ordinal}
+ An \emph{ordinal} is a well-ordered set, with two regarded as the same if they are isomorphic. We write ordinals as Greek letters $\alpha, \beta$ etc.
+\end{defi}
+We would want to define ordinals as equivalence classes of well-orders under isomorphism, but we cannot, because they do not form a set. We will provide a formal definition of ordinals later when we study set theory.
+
+\begin{defi}[Order type]\index{order type}
+ If a well-ordering $X$ has corresponding ordinal $\alpha$, we say $X$ has \emph{order type} $\alpha$, and write $\otp(X) = \alpha$.
+\end{defi}
+
+\begin{notation}
+ For each $k\in \N$, we write $k$ for the order type of the (unique) well-ordering of size $k$. We write $\omega$ for the order type of $\N$.
+\end{notation}
+
+\begin{eg}
+ In $\R$, $\{2, 3, 5 ,6\}$ has order type 4. $\{\frac{1}{2}, \frac{2}{3}, \frac{3}{4}, \cdots\}$ has order type $\omega$.
+\end{eg}
+
+\begin{notation}
+ For ordinals $\alpha, \beta$, write $\alpha \leq \beta$ if $X\leq Y$ for some $X$ of order type $\alpha$, $Y$ of order type $\beta$. This does not depend on the choice of $X$ and $Y$ (since any two choices must be isomorphic).
+\end{notation}
+
+\begin{prop}
+ Let $\alpha$ be an ordinal. Then the ordinals $<\alpha$ form a well-ordering of order type $\alpha$.
+\end{prop}
+
+\begin{notation}
+ Write $I_\alpha = \{\beta: \beta < \alpha\}$.
+\end{notation}
+
+\begin{proof}
+ Let $X$ have order type $\alpha$. The well-orderings $< X$ are precisely (up to isomorphism) the proper initial segments of $X$ (by uniqueness of subset collapse). But these are the $I_x$ for all $x\in X$. So we can biject $X$ with the well-orderings $< X$ by $x\mapsto I_x$.
+\end{proof}
+
+Finally, we can prove that the ordinals are well-ordered.
+\begin{prop}
+ Let $S$ be a non-empty set of ordinals. Then $S$ has a least element.
+\end{prop}
+
+\begin{proof}
+ Choose $\alpha\in S$. If it is minimal, done.
+
+ If not, then $S\cap I_\alpha$ is non-empty. But $I_\alpha$ is well-ordered. So $S\cap I_\alpha$ has a least element, $\beta$. Then $\beta$ is in fact the least element of $S$: any $\gamma \in S$ with $\gamma < \alpha$ lies in $S\cap I_\alpha$, so $\gamma \geq \beta$, while any other $\gamma\in S$ satisfies $\gamma \geq \alpha > \beta$.
+\end{proof}
+
+However, it would be wrong to say that the ordinals form a well-ordered \emph{set}, for the very reason that they don't form a set.
+\begin{thm}[Burali-Forti paradox]\index{Burali-Forti paradox}
+ The ordinals do not form a set.
+\end{thm}
+
+\begin{proof}
+ Suppose not. Let $X$ be the set of ordinals. Then $X$ is a well-ordering. Let its order type be $\alpha$. Then $X$ is isomorphic to $I_\alpha$, a proper initial segment of $X$. Contradiction.
+\end{proof}
+
+Recall that we could create new well-orderings from old via adding one element and taking unions. We can translate these into ordinal language.
+
+Given an ordinal $\alpha$, suppose that $X$ is the corresponding well-order. Then we define $\alpha^+$ to be the order type of $X^+$.
+
+If we have a set $\{\alpha_i: i \in I\}$ of ordinals, we can stitch them together to form a new well-order. In particular, we apply ``nested well-orders'' to the initial segments $\{I_{\alpha_i}: i \in I\}$. This produces an upper bound of the ordinals $\alpha_i$. Since the ordinals are well-ordered, we know that there is a \emph{least} upper bound. We call this the \emph{supremum} of the set $\{\alpha_i: i \in I\}$, written $\sup\{\alpha_i: i \in I\}$. In fact, the upper bound created by nesting well-orders is the least upper bound.
+
+\begin{eg}
+ $\{2, 4, 6, 8, \cdots\}$ has supremum $\omega$.
+\end{eg}
+
+Now we have two ways of producing ordinals: $+1$ and supremum.
+
+We can generate a lot of ordinals now:
+\begin{center}
+ \begin{tabular}{ccccccc}
+ 0 & $\omega\cdot 2 + 1$ & $\omega^2 + 1$ & $\omega^2\cdot 3$ & $\omega^{\omega + 2}$ & $\varepsilon_0 + 1$ \\
+ 1 & $\omega\cdot 2 + 2$ & $\omega^2 + 2$ & $\omega^2\cdot 4$ & $\vdots$ & $\vdots$ \\
+ 2 & $\omega\cdot 2 + 3$ & $\omega^2 + 3$ & $\omega^2\cdot 5$ & $\omega^{\omega \cdot 2}$ & $\varepsilon_0 \cdot 2$ \\
+ $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
+ $\omega$ & $\omega\cdot 3$ & $\omega^2 + \omega$ & $\omega^3$ & $\omega^{\omega^2}$ & $\varepsilon_0^2$ \\
+ $\omega + 1$ & $\omega\cdot 4$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
+ $\omega + 2$ & $\omega\cdot 5$ & $\omega^2 + \omega \cdot 2$ & $\omega^\omega$ & $\omega^{\omega^{\omega}}$ & $\varepsilon_0^{\varepsilon_0}$ \\
+ $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
+ $\omega + \omega = \omega\cdot 2$ & $\omega\cdot \omega = \omega^2$ & $\omega^2 \cdot 2$ & $\omega^{\omega + 1}$ & $\omega^{\omega^{.^{.^.}}} = \varepsilon_0$ & $\varepsilon_0^{\varepsilon_0^{.^{.^.}}} = \varepsilon_1$ \\
+ \end{tabular}
+\end{center}
+Here we introduced a lot of different notations. For example, we wrote $\omega + 1$ to mean $\omega^+$, and $\omega\cdot 2 = \sup\{\omega, \omega + 1, \omega + 2, \cdots\}$. We will formally define these notations later.
+
+We have written a lot of ordinals above, some of which are really huge. However, all the ordinals above are countable. The operations we have performed so far are adding one element and taking countable unions, so the results are all countable. So is there an uncountable ordinal?
+
+\begin{thm}
+ There is an uncountable ordinal.
+\end{thm}
+
+\begin{proof}
+ This is easy by looking at the supremum of the set of all countable ordinals. However, this works only if the collection of countable ordinals is a set.
+
+ Let $A = \{R\in \P(\N\times \N): R \text{ is a well-ordering of a subset of }\N\}$. So $A \subseteq \P(\N\times \N)$. Then $B = \{\text{order type of }R: R\in A\}$ is the set of all countable ordinals.
+
+ Let $\omega_1 = \sup B$. Then $\omega_1$ is uncountable. Indeed, if $\omega_1$ were countable, then it would be the greatest countable ordinal, but $\omega_1 + 1$ is greater and is also countable.
+\end{proof}
+
+By definition, $\omega_1$ is the \emph{least} uncountable ordinal, and everything in our previous big list of ordinals is less than $\omega_1$.
+
+There are two strange properties of $\omega_1$:
+\begin{enumerate}
+ \item $\omega_1$ is an uncountable ordering, yet for every $x\in \omega_1$, the set $\{y: y< x\}$ is countable.
+ \item Every sequence in $\omega_1$ is bounded, since its supremum is a countable union of countable sets, which is countable.
+\end{enumerate}
+
+In general, we have the following theorem:
+\begin{thm}[Hartogs' lemma]\index{Hartogs' lemma}
+ For any set $X$, there is an ordinal that does not inject into $X$.
+\end{thm}
+
+\begin{proof}
+ As before: let $A = \{R\in \P(X\times X): R \text{ is a well-ordering of a subset of }X\}$, and let $B$ be the set of order types of members of $A$, i.e.\ the set of ordinals that inject into $X$ (any injection of an ordinal into $X$ well-orders a subset of $X$). Then $(\sup B)^+$ does not inject into $X$.
+\end{proof}
+
+\begin{notation}
+ Write $\gamma(X)$ for the least ordinal that does not inject into $X$. e.g.\ $\gamma(\omega) = \omega_1$.
+\end{notation}
+
+\subsection{Successors and limits}
+In general, we can divide ordinals into two categories. The criterion is as follows:
+
+Given an ordinal $\alpha$, is there a greatest element of $\alpha$? i.e.\ does $I_\alpha = \{\beta: \beta < \alpha\}$ have a greatest element?
+
+If yes, say $\beta$ is the greatest element. Then $\gamma\in I_\alpha \Leftrightarrow \gamma \leq \beta$. So $I_\alpha = \{\beta\}\cup I_\beta$. In other words, $\alpha = \beta^+$.
+
+\begin{defi}[Successor ordinal]\index{successor ordinal}
+ An ordinal $\alpha$ is a \emph{successor ordinal} if there is a greatest element $\beta$ below it. Then $\alpha = \beta^+$.
+\end{defi}
+
+On the other hand, if no, then for any $\gamma < \alpha$, there exists $\beta < \alpha$ such that $\beta > \gamma$. So $\alpha = \sup \{\beta: \beta < \alpha\}$.
+\begin{defi}[Limit ordinal]\index{limit ordinal}
+ An ordinal $\alpha$ is a limit if it has no greatest element below it. We usually write $\lambda$ for limit ordinals.
+\end{defi}
+
+\begin{eg}
+ $5$ and $\omega^+$ are successors. $\omega$ and $0$ are limits ($0$ is a limit because it has no element below it, let alone a greatest one!).
+\end{eg}
+
+\subsection{Ordinal arithmetic}
+We want to define ordinal arithmetic, such as $+$ and $\times$, so that we can make formal sense of notations such as $\omega + \omega$ in our huge list of ordinals.
+
+We first start with addition.
+\begin{defi}[Ordinal addition (inductive)]\index{ordinal addition}
+ Define $\alpha + \beta$ by recursion on $\beta$ ($\alpha$ is fixed):
+ \begin{itemize}
+ \item $\alpha + 0 = \alpha$.
+ \item $\alpha + \beta^+ = (\alpha + \beta)^+$.
+ \item $\alpha + \lambda = \sup \{\alpha + \gamma: \gamma < \lambda\}$ for non-zero limit $\lambda$.
+ \end{itemize}
+\end{defi}
+Note that officially, we cannot do ``recursion on the ordinals'', since the ordinals don't form a set. So what we officially do is that we define $\alpha + \gamma$ on $\{\gamma: \gamma < \beta\}$ recursively for each ordinal $\beta$. Then by uniqueness of recursions, we can show that this addition is well-defined.
+
+\begin{eg}
+ $\omega + 1 = (\omega + 0)^+ = \omega^+$.
+
+ $\omega + 2 = (\omega + 1)^+ = \omega^{++}$.
+
+ $1 + \omega = \sup\{ 1 + n: n < \omega\} = \sup\{1, 2, 3, \cdots\} = \omega$.
+\end{eg}
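The failure of commutativity can be made quite tangible by computing with ordinals below $\omega^\omega$ in Cantor normal form. The following Python sketch is our own illustration (the representation as tuples of (exponent, coefficient) pairs with strictly decreasing exponents is an assumption of the sketch, not something from the notes):

```python
# Ordinal addition in Cantor normal form, for ordinals below omega^omega.
# An ordinal is a tuple of (exponent, coefficient) pairs, exponents strictly
# decreasing, coefficients positive naturals; () represents 0.

def add(a, b):
    if not b:
        return a
    e = b[0][0]                                 # leading exponent of b
    head = [(k, c) for (k, c) in a if k > e]    # terms of a that survive
    if any(k == e for (k, c) in a):
        m = next(c for (k, c) in a if k == e)   # merge the matching term
        return tuple(head) + ((e, m + b[0][1]),) + tuple(b[1:])
    return tuple(head) + tuple(b)

omega = ((1, 1),)
one = ((0, 1),)

print(add(omega, one))   # ((1, 1), (0, 1))  i.e. omega + 1
print(add(one, omega))   # ((1, 1),)         i.e. 1 + omega = omega
```

Adding on the right appends terms; adding on the left can swallow the smaller summand entirely, which is exactly the asymmetry noted above.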
+It is very important to note that addition is not commutative! This asymmetry arises from our decision to perform recursion on $\beta$ instead of $\alpha$.
+
+On the other hand, addition \emph{is} associative.
+\begin{prop}
+ Addition is associative, i.e.\ $(\alpha + \beta) + \gamma = \alpha + (\beta + \gamma)$.
+\end{prop}
+
+\begin{proof}
+ Since we define addition by recursion, it makes sense to prove this by induction. Since we recursed on the right-hand term in the definition, it only makes sense to induct on $\gamma$ (and fix $\alpha$ and $\beta$).
+
+ \begin{enumerate}
+ \item If $\gamma = 0$, then $\alpha + (\beta + 0) = \alpha + \beta = (\alpha + \beta) + 0$.
+ \item If $\gamma = \delta^+$ is a successor, then
+ \begin{align*}
+ \alpha + (\beta + \delta^+) &= \alpha + (\beta + \delta)^+\\
+ &= [\alpha + (\beta + \delta)]^+\\
+ &= [(\alpha + \beta) + \delta]^+\\
+ &= (\alpha + \beta) + \delta^+\\
+ &= (\alpha + \beta) + \gamma.
+ \end{align*}
+ \item If $\gamma = \lambda$ is a non-zero limit ordinal, we have
+ \begin{align*}
+ (\alpha + \beta) + \lambda &= \sup\{(\alpha + \beta) + \gamma: \gamma < \lambda\}\\
+ &= \sup\{\alpha + (\beta + \gamma): \gamma < \lambda\}
+ \end{align*}
+ If we want to evaluate $\alpha + (\beta + \lambda)$, we have to first know whether $\beta + \lambda$ is a successor or a limit. We now claim it is a limit:
+
+ $\beta + \lambda = \sup\{\beta + \gamma: \gamma < \lambda\}$. We show that this cannot have a greatest element: for any $\beta + \gamma$, since $\lambda$ is a limit ordinal, we can find a $\gamma'$ such that $\gamma < \gamma' < \lambda$. So $\beta + \gamma' > \beta + \gamma$. So $\beta + \gamma$ cannot be the greatest element.
+
+ So
+ \[
+ \alpha + (\beta + \lambda) = \sup\{\alpha + \delta: \delta < \beta + \lambda\}.
+ \]
+ We need to show that
+ \[
+ \sup\{\alpha + \delta: \delta < \beta + \lambda\} = \sup\{\alpha + (\beta + \gamma): \gamma < \lambda\}.
+ \]
+ Note that the two sets are not equal. For example, if $\beta = 3$ and $\lambda = \omega$, then the left contains $\alpha + 2$ but the right does not.
+
+ So we show that the left is both $\geq$ and $\leq$ the right.
+
+ $\geq$: Each element of the right hand set is an element of the left.
+
+ $\leq$: For $\delta < \beta + \lambda$, we have $\delta < \sup \{\beta + \gamma: \gamma < \lambda\}$. So $\delta < \beta + \gamma$ for some $\gamma < \lambda$. Hence $\alpha + \delta < \alpha + (\beta + \gamma)$.\qedhere
+ \end{enumerate}
+\end{proof}
+Note that it is easy to prove that $\beta < \gamma \Rightarrow \alpha + \beta < \alpha + \gamma$ by induction on $\gamma$ (which we implicitly assumed above). But it is not true if we add on the right: $1 < 2$ but $1 + \omega = 2 + \omega$.
+
+The definition we had above is called the \emph{inductive} definition. There is an alternative definition of $+$ based on actual well-orders. This is known as the \emph{synthetic} definition.
+
+Intuitively, we first write out all the elements of $\alpha$, then write out all the elements of $\beta$ after it. Then $\alpha + \beta$ is the order type of the combined mess.
+
+\begin{defi}[Ordinal addition (synthetic)]\index{ordinal addition}
+ $\alpha + \beta$ is the order type of $\alpha \sqcup \beta$ ($\alpha$ disjoint union $\beta$, e.g.\ $\alpha\times \{0\}\cup \beta\times \{1\}$), with all of $\alpha$ before all of $\beta$:
+ \[
+ \alpha + \beta = \underbracket{\quad\quad\vphantom{\beta}\alpha\vphantom{\beta}\quad\quad}\underbracket{\quad\;\beta\;\quad}
+ \]
+\end{defi}
+\begin{eg}
+ $\omega + 1 = \omega^+$.
+
+ $1 + \omega = \omega$.
+\end{eg}
+
+With this definition, associativity is trivial:
+\[
+ \alpha + (\beta + \gamma) = \underbracket{\quad\quad\vphantom{\beta}\alpha\vphantom{\beta}\quad\quad}\underbracket{\quad\;\beta\;\quad}\underbracket{\quad\gamma\quad} = (\alpha + \beta) + \gamma.
+\]
+Now that we have given two definitions, we must show that they are the same:
+\begin{prop}
+ The inductive and synthetic definition of $+$ coincide.
+\end{prop}
+
+\begin{proof}
+ Write $+$ for inductive definition, and $+'$ for synthetic. We want to show that $\alpha + \beta = \alpha +' \beta$. We induct on $\beta$.
+
+ \begin{enumerate}
+ \item $\alpha + 0 = \alpha = \alpha +' 0$.
+ \item $\alpha + \beta^+ = (\alpha + \beta)^+ = (\alpha +' \beta)^+ = \otp\underbracket{\quad\vphantom\beta\alpha\vphantom\beta\quad}\underbracket{\quad\beta\quad}\underbracket{\vphantom\beta\cdot\vphantom\beta} = \alpha +' \beta^+$
+ \item $\alpha + \lambda = \sup\{\alpha + \gamma: \gamma < \lambda\} = \sup \{\alpha +' \gamma: \gamma < \lambda\} = \alpha +' \lambda$. This works because taking the supremum is the same as taking the union.
+ \[
+ \underbracket{\quad\vphantom\gamma\alpha\vphantom\gamma\quad}\underbracket{\gamma\quad\gamma'\;\gamma''\cdots \lambda}\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+The synthetic definition is usually easier to work with, if possible. For example, it was very easy to show associativity using the synthetic definition. It is also easier to see why addition is not commutative. However, if we want to do induction, the inductive definition is usually easier.
+
+After addition, we can define multiplication. Again, we first give an inductive definition, and then a synthetic one.
+\begin{defi}[Ordinal multiplication (inductive)]\index{ordinal multiplication}
+ We define $\alpha\cdot \beta$ by induction on $\beta$ by:
+ \begin{enumerate}
+ \item $\alpha\cdot 0 = 0$.
+ \item $\alpha\cdot (\beta^+) = \alpha\cdot \beta + \alpha$.
+ \item $\alpha\cdot \lambda = \sup\{\alpha\cdot \gamma: \gamma < \lambda\}$ for $\lambda$ a non-zero limit.
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item $\omega \cdot 1 = \omega\cdot 0 + \omega = 0 + \omega = \omega$.
+ \item $\omega \cdot 2 = \omega \cdot 1 + \omega = \omega + \omega$.
+ \item $2\cdot \omega = \sup\{2\cdot n: n < \omega\} = \omega$.
+ \item $\omega\cdot \omega = \sup \{\omega\cdot n: n < \omega\} = \sup\{\omega, \omega\cdot 2, \omega\cdot 3, \cdots\} = \omega^2$.
+ \end{itemize}
+\end{eg}
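The examples above can also be checked mechanically for ordinals below $\omega^\omega$ in Cantor normal form. This Python sketch is our own illustration (same made-up (exponent, coefficient) representation as before); it uses the identities $\alpha\cdot(\omega^k c) = \omega^{a_1 + k}c$ for $k > 0$ and $\alpha\cdot n = \omega^{a_1}(c_1 n) + \rho$ for finite $n$, where $\alpha = \omega^{a_1}c_1 + \rho$ with $\rho < \omega^{a_1}$:

```python
# Ordinal multiplication for ordinals below omega^omega, in Cantor normal
# form: a tuple of (exponent, coefficient) pairs with strictly decreasing
# exponents; () represents 0.

def mul(a, b):
    if not a or not b:                   # alpha * 0 = 0 * alpha = 0
        return ()
    a1, c1 = a[0]                        # leading term omega^a1 * c1 of alpha
    out = []
    for (k, c) in b:
        if k > 0:
            out.append((a1 + k, c))      # alpha * (omega^k * c) = omega^(a1+k) * c
        else:
            out.append((a1, c1 * c))     # alpha * n: multiply the leading
            out.extend(a[1:])            # coefficient, keep the tail of alpha
    return tuple(out)

omega = ((1, 1),)
two = ((0, 2),)

print(mul(two, omega))    # ((1, 1),)  i.e. 2 * omega = omega
print(mul(omega, two))    # ((1, 2),)  i.e. omega * 2 = omega + omega
print(mul(omega, omega))  # ((2, 1),)  i.e. omega * omega = omega^2
```

Again multiplication on the right behaves very differently from multiplication on the left, matching the pictures from the synthetic definition.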
+
+We also have a synthetic definition.
+\begin{defi}[Ordinal multiplication (synthetic)]\index{ordinal multiplication}
+ \[
+ \beta\left\{
+ \begin{array}{c}
+ \underbracket{\quad\alpha\quad} \\
+ \vdots \\
+ \underbracket{\quad\alpha\quad} \\
+ \underbracket{\quad\alpha\quad}
+ \end{array}\right.
+ \]
+ Formally, $\alpha\cdot \beta$ is the order type of $\alpha\times \beta$, with $(x, y) < (x', y')$ if $y < y'$ or ($y = y'$ and $x < x'$).
+\end{defi}
+
+\begin{eg}
+ $\displaystyle\omega\cdot 2 =
+ \begin{array}{c}
+ \underbracket{\quad\omega\quad}\\
+ \underbracket{\quad\omega\quad}
+ \end{array} = \omega + \omega$.
+
+ Also $2\cdot \omega = \omega\left\{
+ \begin{array}{c}
+ \underbracket{\,\cdot\,\cdot\,} \\
+ \vdots \\
+ \underbracket{\,\cdot\,\cdot\,} \\
+ \underbracket{\,\cdot\,\cdot\,} \\
+ \end{array}\right. = \omega$
+\end{eg}
+We can check that the definitions coincide, prove associativity etc., similarly to what we did for addition.
+
+We can define ordinal exponentiation, towers etc. similarly:
+\begin{defi}[Ordinal exponentiation (inductive)]\index{ordinal exponentiation}
+ $\alpha^\beta$ is defined as
+ \begin{enumerate}
+ \item $\alpha^0 = 1$
+ \item $\alpha^{\beta^+} = \alpha^\beta \cdot \alpha$
+ \item $\alpha^{\lambda} = \sup \{\alpha^\gamma: \gamma< \lambda\}$ for non-zero limit $\lambda$.
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}
+ $\omega^1 = \omega^0\cdot \omega = 1\cdot \omega = \omega$.
+
+ $\omega^2 = \omega^1\cdot \omega = \omega\cdot \omega$.
+
+ $2^{\omega} = \sup \{2^n: n < \omega\} = \omega$.
+\end{eg}
+
+\subsection{Normal functions*}
+\begin{own}
+ \emph{Note: This content was not lectured during the year.}
+
+ When we have ordinals, we would like to consider functions $\mathrm{On} \to \mathrm{On}$. Since the ordinals are totally ordered, it would make sense to consider the order-preserving functions, i.e.\ the increasing ones. However, ordinals have an additional property --- we could take suprema of ordinals. If we want our function to preserve this as well, we are led to the following definition:
+
+ \begin{defi}[Normal function]\index{normal function}
+ A function $f: \mathrm{On} \to \mathrm{On}$ is \emph{normal} if
+ \begin{enumerate}
+ \item For any ordinal $\alpha$, we have $f(\alpha) < f(\alpha^+)$.
+ \item If $\lambda$ is a non-zero limit ordinal, then $f(\lambda) = \sup \{f(\gamma): \gamma < \lambda\}$.
+ \end{enumerate}
+ \end{defi}
+ Some replace the first condition by requiring that $f$ be strictly increasing. The two definitions are easily seen to be equivalent by transfinite induction.
+
+ \begin{eg}
+ By definition, we see that for each $\beta > 1$, the function $\alpha \mapsto \beta^\alpha$ is normal.
+ \end{eg}
+
+ We start by a few technical lemmas.
+ \begin{lemma}
+ Let $f$ be a normal function. Then $f$ is strictly increasing.
+ \end{lemma}
+
+ \begin{proof}
+ Let $\alpha$ be a fixed ordinal. We prove, by induction on $\beta$, that $f(\alpha) < f(\beta)$ for all $\beta > \alpha$.
+
+ If $\beta = \alpha^+$, then the result is obvious.
+
+ If $\beta = \gamma^+$ with $\gamma \not= \alpha$, then $\alpha < \gamma$. So $f(\alpha) < f(\gamma) < f(\gamma^+) = f(\beta)$ by induction.
+
+ If $\beta$ is a limit and is greater than $\alpha$, then
+ \[
+ f(\beta) = \sup\{f(\gamma): \gamma < \beta\} \geq f(\alpha^+) > f(\alpha),
+ \]
+ since $\alpha^+ < \beta$. So the result follows.
+ \end{proof}
+
+ \begin{lemma}
+ Let $f$ be a normal function, and $\alpha$ an ordinal. Then $f(\alpha) \geq \alpha$.
+ \end{lemma}
+
+ \begin{proof}
+ We prove by induction. It is trivial for zero. For successors, we have $f(\alpha^+) > f(\alpha) \geq \alpha$, so $f(\alpha^+) \geq \alpha^+$. For limits, we have
+ \[
+ f(\lambda) = \sup \{f(\gamma): \gamma < \lambda\} \geq \sup\{\gamma: \gamma < \lambda\} = \lambda.\qedhere
+ \]
+ \end{proof}
+
+ The following is a convenient refinement of the continuity result:
+ \begin{lemma}
+ If $f$ is a normal function, then for any non-empty set $\{\alpha_i\}_{i \in I}$, we have
+ \[
+ f(\sup\{\alpha_i: i \in I\}) = \sup\{f(\alpha_i): i \in I\}.
+ \]
+ \end{lemma}
+
+ \begin{proof}
+ If $\{\alpha_i\}$ has a maximal element, then the result is obvious, as $f$ is increasing, and the supremum is a maximum.
+
+ Otherwise, let
+ \[
+ \alpha = \sup\{\alpha_i: i \in I\}.
+ \]
+ Since the $\alpha_i$ have no maximal element, we know $\alpha$ must be a limit ordinal. So we have
+ \[
+ f(\alpha) = \sup \{f(\beta): \beta < \alpha\}.
+ \]
+ So it suffices to prove that
+ \[
+ \sup \{f(\beta): \beta < \alpha\} = \sup\{f(\alpha_i): i \in I\}.
+ \]
+ Since all $\alpha_i < \alpha$, we have $\sup \{f(\beta): \beta < \alpha\} \geq \sup\{f(\alpha_i): i \in I\}$.
+
+ For the other direction, it suffices, by definition, to show that
+ \[
+ f(\beta) \leq \sup \{f(\alpha_i): i \in I\}
+ \]
+ for all $\beta < \alpha$.
+
+ Given such a $\beta$, since $\alpha$ is the supremum of the $\alpha_i$, we can find some particular $\alpha_i$ such that $\beta < \alpha_i$. So $f(\beta) < f(\alpha_i) \leq \sup \{f(\alpha_i): i \in I\}$. So we are done.
+ \end{proof}
+ Because of these results, some define normal functions to be functions that are strictly increasing and preserve all suprema.
+
+ We now proceed to prove two important properties of normal functions (with easy proofs!):
+
+ \begin{lemma}[Fixed-point lemma]\index{ordinal!fixed point lemma}\index{normal function!fixed point lemma}\index{fixed point}\index{fixed point lemma for normal functions}
+ Let $f$ be a normal function. Then for each ordinal $\alpha$, there is some $\beta \geq \alpha$ such that $f(\beta) = \beta$.
+ \end{lemma}
+ Since the supremum of fixed points is also a fixed point (by normality), it follows that we can define a function $g: \mathrm{On} \to \mathrm{On}$ that enumerates the fixed points. Now this function itself is again normal, so it has fixed points as well\ldots
+
+ \begin{proof}
+ We thus define
+ \[
+ \beta = \sup \{f(\alpha), f(f(\alpha)), f(f(f(\alpha))), \cdots\}.
+ \]
+ If the sequence eventually stops, then we have found a fixed point. Otherwise, $\beta$ is a limit ordinal, and thus normality gives
+ \[
+ f(\beta) = \sup\{f(f(\alpha)), f(f(f(\alpha))), f(f(f(f(\alpha)))), \cdots\} = \beta.
+ \]
+ So $\beta$ is a fixed point, and $\beta \geq f(\alpha) \geq \alpha$.
+ \end{proof}
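+ As a concrete illustration of the construction (our own example, not from the lectures), take the normal function $f(\alpha) = \omega + \alpha$ and start the iteration at $\alpha = 0$. Then
+ \[
+ \beta = \sup\{\omega,\; \omega + \omega,\; \omega + \omega + \omega,\; \cdots\} = \sup\{\omega\cdot n: n < \omega\} = \omega^2,
+ \]
+ and indeed $\omega + \omega^2 = \omega\cdot 1 + \omega\cdot \omega = \omega\cdot (1 + \omega) = \omega\cdot \omega = \omega^2$, using left distributivity of ordinal multiplication (which can be checked from the synthetic definitions). So $\omega^2$ is a fixed point of $f$, and in fact the least one.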
+
+ \begin{lemma}[Division algorithm for normal functions]
+ Let $f$ be a normal function. Then for all $\alpha \geq f(0)$, there is some maximal $\gamma$ such that $\alpha \geq f(\gamma)$.
+ \end{lemma}
+
+ \begin{proof}
+ Let $\gamma = \sup\{\beta: f(\beta) \leq \alpha\}$. Then we have
+ \[
+ f(\gamma) = \sup\{f(\beta): f(\beta) \leq \alpha\} \leq \alpha.
+ \]
+ This is clearly maximal.
+ \end{proof}
+\end{own}
+\section{Posets and Zorn's lemma}
+\label{sec:poset}
+In this chapter, we study \emph{partial orders}. While there are many examples of partial orders, the most important example is the power set $\P(X)$ for any set $X$, ordered under inclusion. We will also consider subsets of the power set.
+
+The two main theorems of this chapter are the Knaster-Tarski fixed point theorem and Zorn's lemma. We will use Zorn's lemma to prove many useful results in different fields, including the completeness theorem in propositional calculus. Finally, we will investigate the relationship between Zorn's lemma and the Axiom of Choice.
+
+\subsection{Partial orders}
+\begin{defi}[Partial ordering (poset)]\index{partial order}\index{poset}
+ A \emph{partially ordered set} or \emph{poset} is a pair $(X, \leq)$, where $X$ is a set and $\leq$ is a relation on $X$ that satisfies
+ \begin{enumerate}
+ \item $x\leq x$ for all $x\in X$ \hfill (reflexivity)
+ \item $x \leq y$ and $y \leq z \Rightarrow x \leq z$ \hfill (transitivity)
+ \item $x \leq y$ and $y \leq x \Rightarrow x = y$ \hfill (antisymmetry)
+ \end{enumerate}
+ We write $x < y$ to mean $ x\leq y$ and $x\not= y$. We can also define posets in terms of~$<$:
+ \begin{enumerate}
+ \item $x \not< x$ for all $x\in X$ \hfill (irreflexive)
+ \item $x < y$ and $y < z\Rightarrow x < z$ \hfill (transitive)
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Any total order is (trivially) a partial order.
+ \item $\N$ with ``$x\leq y$'' if $x \mid y$ is a partial order.
+ \item $\P(S)$ with $\subseteq$ for any set $S$ is a partial order.
+ \item Any subset of $\P(S)$ with inclusion is a partial order.
+ \item We can use a diagram
+ \begin{center}
+ \begin{tikzpicture}[grow=up]
+ \node {a}
+ child {
+ node {b} edge from parent
+ child {
+ node {c} edge from parent
+ }
+ }
+ child {
+ node {d} edge from parent
+ child {
+ node {e} edge from parent
+ }
+ };
+ \end{tikzpicture}
+ \end{center}
+ where ``above'' means ``greater''. So $a \leq b\leq c$, $a \leq d\leq e$, and what follows by transitivity. This is a Hasse diagram.
+
+ \begin{defi}[Hasse diagram]
+ A \emph{Hasse diagram} for a poset $X$ consists of a drawing of the points of $X$ in the plane with an upwards line from $x$ to $y$ if $y$ \emph{covers} $x$:
+ \end{defi}
+ \begin{defi}[Cover]
+ In a poset, $y$ \emph{covers} $x$ if $y > x$ and no $z$ has $y > z > x$.
+ \end{defi}
+ Hasse diagrams can be useful, e.g.\ for $\N$, or useless, e.g.\ for $\Q$ (where no element covers another, so the diagram contains no lines at all).
+ \item The following example shows that we cannot assign ``heights'' or ``ranks'' to posets:
+ \begin{center}
+ \begin{tikzpicture}
+ \node (a) {$a$};
+ \node[above=0.5 of a] (dummy1) {};
+ \node[above=0.5 of dummy1] (dummy2) {};
+ \node[above=0.5 of dummy2] (dummy3) {};
+
+ \node[right=of dummy1] (d) {$d$} edge (a);
+ \node[left=of dummy2] (b) {$b$} edge (a);
+ \node[right=of dummy3] (e) {$e$} edge (d);
+ \node [above=0.5 of dummy3] {$c$} edge (b) edge (e);
+ \end{tikzpicture}
+ \end{center}
+ \item We can also have complicated structures:
+ \begin{center}
+ \begin{tikzpicture}[node distance=1]
+ \node at (-1, 0) (a) {$a$};
+ \node at (1, 0) (b) {$b$};
+ \node at (-1, 1) (c) {$c$} edge (a) edge (b);
+ \node at (1, 1) (d) {$d$} edge (a) edge (b);
+ \node at (0, 2) {$e$} edge (c) edge (d);
+ \end{tikzpicture}
+ \end{center}
+ \item Or the empty poset (let $X$ be any set and nothing is less than anything else).
+ \end{enumerate}
+\end{eg}
+While there are many examples of posets, the ones we really care about are power sets and their subsets.
+
+Often, we want to study subsets of posets. For example, we might want to know if a subset has a least element. All subsets are equal, but some subsets are more equal than others. A particular interesting class of subsets is a \emph{chain}.
+\begin{defi}[Chain and antichain]\index{chain}\index{antichain}
+ In a poset, a subset $S$ is a \emph{chain} if it is totally ordered, i.e.\ for all $x$, $y$, $x\leq y$ or $y\leq x$. An \emph{antichain} is a subset in which no two things are related.
+\end{defi}
+
+\begin{eg}
+ In $(\N, \mid )$, $1, 2, 4, 8, 16, \cdots$ is a chain.
+
+ In (v), $\{a, b, c\}$ or $\{a, c\}$ are chains.
+
+ $\R$ is a chain in $\R$.
+\end{eg}
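For finite subsets these conditions are easy to check mechanically. A small Python sketch for the divisibility poset $(\N, \mid)$ (the function names and the antichain example are our own):

```python
# Checking the chain/antichain conditions in the poset (N, |) of positive
# naturals ordered by divisibility.

def divides(x, y):
    return y % x == 0

def is_chain(S):
    # every pair is comparable
    return all(divides(x, y) or divides(y, x) for x in S for y in S)

def is_antichain(S):
    # no two distinct elements are comparable
    return all(not (divides(x, y) or divides(y, x))
               for x in S for y in S if x != y)

print(is_chain([1, 2, 4, 8, 16]))       # True: powers of 2 form a chain
print(is_antichain([4, 6, 9, 10, 15]))  # True: none divides another
```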
+
+\begin{defi}[Upper bound and supremum]\index{upper bound}\index{supremum}
+ For $S\subset X$, an \emph{upper bound} for $S$ is an $x\in X$ such that $\forall y\in S: x \geq y$.
+
+ $x\in X$ is a \emph{least upper bound}, \emph{supremum} or \emph{join} of $S$, written $x = \sup S$ or $x = \bigvee S$, if $x$ is an upper bound for $S$, and for all $y \in X$, if $y$ is an upper bound, then $y \geq x$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item In $\R$, $\{x: x < \sqrt{2}\}$ has an upper bound $7$, and has a supremum $\sqrt{2}$.
+
+ \item In (v) above, consider $\{a, b\}$. Upper bounds are $b$ and $c$. So $\sup = b$. However, $\{b, d\}$ has no upper bound!
+ \item In (vii), $\{a, b\}$ has upper bounds $c, d, e$, but has no \emph{least} upper bound.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Complete poset]\index{complete poset}
+ A poset $X$ is \emph{complete} if every $S\subseteq X$ has a supremum. In particular, it has a greatest element (i.e.\ $x$ such that $\forall y: x \geq y$), namely $\sup X$, and least element (i.e.\ $x$ such that $\forall y: x \leq y$), namely $\sup \emptyset$.
+\end{defi}
+It is very important to remember that this definition does \emph{not} require that the subset $S$ is bounded above or non-empty. This is different from the definition of metric space completeness.
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item $\R$ is not complete because $\R$ itself has no supremum.
+ \item $[0, 1]$ is complete because every subset is bounded above, and so has a least upper bound. Also, $\emptyset$ has a supremum of $0$.
+ \item $(0, 1)$ is not complete because $(0, 1)$ has no upper bound.
+ \item $\P(S)$ for any $S$ is \emph{always} complete, because given any $\{A_i: i\in I\}$, where each $A_i\subseteq S$, $\bigcup A_i$ is its supremum.
+ \end{itemize}
+\end{eg}
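+
+As a further example of how completeness can fail, consider the poset $\Q\cap [0, 1]$: the subset
+\[
+ \{x\in \Q\cap [0, 1]: x^2 < \tfrac{1}{2}\}
+\]
+is bounded above (by $1$), but has no \emph{least} upper bound, since below any rational upper bound there is a smaller rational upper bound.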
+
+Now we are going to derive fixed-point theorems for complete posets. We start with a few definitions:
+
+\begin{defi}[Fixed point]\index{fixed point}
+ A \emph{fixed point} of a function $f:X\to X$ is an $x$ such that $f(x) = x$.
+\end{defi}
+
+\begin{defi}[Order-preserving function]\index{order-preserving function}
+ For a poset $X$, $f: X\to X$ is \emph{order-preserving} if $x \leq y \Rightarrow f(x) \leq f(y)$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item On $\N$, $x\mapsto x + 1$ is order-preserving
+ \item On $\Z$, $x\mapsto x - 1$ is order-preserving
+ \item On $(0, 1)$, $x\mapsto \frac{1 + x}{2}$ is order-preserving (this function halves the distance from $x$ to $1$).
+ \item On $\P(S)$, fix some $i\in S$. Then $A\mapsto A\cup \{i\}$ is order-preserving.
+ \end{itemize}
+\end{eg}
+Not every order-preserving $f$ has a fixed point (e.g.\ first two above). However, we have
+\begin{thm}[Knaster-Tarski fixed point theorem]
+ Let $X$ be a complete poset, and $f: X\to X$ be an order-preserving function. Then $f$ has a fixed point.
+\end{thm}
+
+\begin{proof}
+ To show that $f(x) = x$, we need $f(x) \leq x$ and $f(x) \geq x$. Let's not be too greedy and just want half of it:
+
+ Let $E = \{x: x \leq f(x)\}$. Let $s = \sup E$. We claim that this is a fixed point, by showing $f(s) \leq s$ and $s \leq f(s)$.
+
+ To show $s \leq f(s)$, we use the fact that $s$ is the least upper bound. So if we can show that $f(s)$ is also an upper bound, then $s \leq f(s)$. Now let $x \in E$. So $x\leq s$. Therefore $f(x) \leq f(s)$ by order-preservingness. Since $x \leq f(x)$ (by definition of $E$) $x \leq f(x) \leq f(s)$. So $f(s)$ is an upper bound.
+
+ To show $f(s) \leq s$, we simply have to show $f(s) \in E$, since $s$ is an upper bound. But we already know $s \leq f(s)$. By order-preservingness, $f(s) \leq f(f(s))$. So $f(s)\in E$ by definition.
+\end{proof}
+While this proof looks rather straightforward, we need to first establish that $s \leq f(s)$, then use this fact to show $f(s) \leq s$. If we decided to show $f(s) \leq s$ first, then we would fail!
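+
+To see the proof in action in a small case, take $X = \P(\{1, 2\})$ and $f(A) = A\cup \{1\}$, which is order-preserving. Then $E = \{A: A\subseteq f(A)\}$ is all of $X$, so $s = \sup E = \{1, 2\}$, and indeed $f(\{1, 2\}) = \{1, 2\}$. Note also that completeness cannot be dropped: on the incomplete poset $(0, 1)$, the order-preserving map $x\mapsto \frac{1 + x}{2}$ has no fixed point, since $\frac{1 + x}{2} = x$ forces $x = 1$.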
+
+The very typical application of Knaster-Tarski is the quick, magic proof of the Cantor-Schr\"oder-Bernstein theorem.
+\begin{cor}[Cantor-Schr\"oder-Bernstein theorem]\index{Cantor-Schr\"oder-Bernstein}
+ Let $A, B$ be sets. Let $f: A\to B$ and $g: B\to A$ be injections. Then there is a bijection $h: A\to B$.
+\end{cor}
+
+\begin{proof}
+ We try to partition $A$ into $P$ and $Q$, and $B$ into $R$ and $S$, such that $f(P) = R$ and $g(S) = Q$. Then we let $h = f$ on $P$ and $g^{-1}$ on $Q$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-1, 0) ellipse (0.5 and 1.3);
+ \draw (-1.5, 0) -- (-0.5, 0);
+ \node at (-1, -0.5) {$Q$};
+ \node at (-1, 0.5) {$P$};
+ \draw (1, 0) ellipse (0.5 and 1.3);
+ \draw (0.5, 0) -- (1.5, 0);
+ \node at (1, -0.5) {$S$};
+ \node at (1, 0.5) {$R$};
+ \draw [->] (-0.4, 0.7) -- (0.4, 0.7) node [pos = 0.5, above] {$f$};
+ \draw [->] (0.4, -0.7) -- (-0.4, -0.7) node [pos = 0.5, above] {$g$};
+ \end{tikzpicture}
+ \end{center}
+ Since $S = B\setminus R$ and $Q = A \setminus P$, we want
+ \[
+ P = A\setminus g(B\setminus f(P))
+ \]
+ Since the function $P \mapsto A\setminus g(B\setminus f(P))$ from $\P(A)$ to $\P(A)$ is order-preserving (and $\P(A)$ is complete), it has a fixed point by Knaster-Tarski, and the result follows.
+\end{proof}
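+
+For a concrete instance, take $A = B = \N$ with $f(n) = g(n) = n + 1$. Then $P = \{\text{even numbers}\}$ is a fixed point of $P\mapsto A\setminus g(B\setminus f(P))$: we get $R = f(P) = \{\text{odd numbers}\}$, $S = \{\text{even numbers}\}$ and $Q = g(S) = \{\text{odd numbers}\}$. The resulting bijection $h: \N\to \N$ sends each even $n$ to $n + 1$ and each odd $n$ to $n - 1$.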
+
+The next result we have is Zorn's lemma. The main focus of Zorn's lemma is on \emph{maximal elements}.
+\begin{defi}[Maximal element]\index{maximal}
+ In a poset $X$, $x\in X$ is maximal if no $y\in X$ has $y > x$.
+\end{defi}
+Caution! Under no circumstances confuse a maximal element with a maximum element, except under confusing circumstances! A maximum element is defined as an $x$ such that all $y\in X$ satisfies $y \leq x$. These two notions are the same in totally ordered sets, but are very different in posets.
+
+\begin{eg}
+ In the poset
+ \begin{center}
+ \begin{tikzpicture}[grow=up]
+ \node {a}
+ child {
+ node {b} edge from parent
+ child {
+ node {c} edge from parent
+ }
+ }
+ child {
+ node {d} edge from parent
+ child {
+ node {e} edge from parent
+ }
+ };
+ \end{tikzpicture}
+ \end{center}
+ $c$ and $e$ are maximal.
+\end{eg}
+
+Not every poset has a maximal element, e.g.\ $\N, \Q, \R$. Not only are these incomplete; they have chains that are not bounded above.
+
+\begin{thm}[Zorn's lemma]\index{Zorn's lemma}
+ Assuming Axiom of Choice, let $X$ be a (non-empty) poset in which every chain has an upper bound. Then it has a maximal element.
+\end{thm}
+Note that ``non-empty'' is not a strictly necessary condition, because if $X$ is an empty poset, then the empty chain has no upper bound. So the conditions can never be satisfied.
+
+The actual proof of Zorn's lemma is rather simple, given what we've had so far. We ``hunt'' for the maximal element. We start with $x_0$. If it is maximal, done. If not, we find a bigger $x_1$. If $x_1$ is maximal, done. Otherwise, keep going.
+
+If we never meet a maximal element, then we have an infinite chain. This has an upper bound $x_\omega$. If this is maximal, done. If not, find $x_{\omega + 1} > x_\omega$. Keep going.
+
+We have not yet reached a contradiction. But suppose we never meet a maximal element. If $X$ is countable, and we can reach $x_{\omega_1}$, then we have found uncountably many elements in a countable set, which is clearly nonsense!
+
+Since the ordinals can be arbitrarily large (Hartogs' lemma), if we never reach a maximal element, then we would find more elements than $X$ has.
+
+\begin{proof}
+ Suppose not. So for each $x\in X$, we have $x'\in X$ with $x' > x$. We denote the-element-larger-than-$x$ by $x'$.
+
+ We know that each chain $C$ has an upper bound, say $u(C)$.
+
+ Let $\gamma = \gamma(X)$, the ordinal-larger-than-$X$ by Hartogs' lemma.
+
+ We pick $x\in X$, and define $x_\alpha$ for $\alpha < \gamma$ recursively by
+ \begin{itemize}
+ \item $x_0 = x$
+ \item $x_{\alpha^+} = x_\alpha'$
+ \item $x_{\lambda} = u(\{x_\alpha: \alpha < \lambda\})'$ for non-zero limit $\lambda$
+ \end{itemize}
+ Of course, we have to show that $\{x_\alpha: \alpha < \lambda\}$ is a chain. This is trivial by induction.
+
+ Then $\alpha \mapsto x_\alpha$ is an injection from $\gamma$ to $X$. Contradiction.
+\end{proof}
+Note that we could as well have defined $x_\lambda = u(\{x_\alpha : \alpha < \lambda\})$, and we can easily prove it is still an injection. However, we are lazy and put the ``prime'' just to save a few lines of proof.
+
+This proof was rather easy. However, this is only because we are given ordinals, definition by recursion, and Hartogs' lemma. Without these tools, it is rather difficult to prove Zorn's lemma.
+
+A typical application of Zorn's lemma is: Does every vector space have a basis? Recall that a \emph{basis} of $V$ is a subset of $V$ that is linearly independent (no non-trivial finite linear combination is $0$) and spanning (i.e.\ every $x\in V$ is a finite linear combination from it).
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Let $V$ be the space of all real polynomials. A basis is $\{1, x, x^2, x^3, \cdots\}$.
+ \item Let $V$ be the space of all real sequences. Let $\mathbf{e}_i$ be the sequence with all entries $0$ except a $1$ in the $i$th place. However, $\{\mathbf{e}_i\}$ is not a basis, since $1, 1, 1, \cdots$ cannot be written as a finite linear combination of them. In fact, there is no countable basis (easy exercise). It turns out that there is no ``explicit'' basis.
+ \item Take $\R$ as a vector space over $\Q$. A basis here, if it exists, is called a Hamel basis.
+ \end{itemize}
+\end{eg}
+
+Using Zorn's lemma, we can prove that the answer is positive.
+\begin{thm}
+ Every vector space $V$ has a basis.
+\end{thm}
+
+\begin{proof}
+ We go for a maximal linearly independent subset.
+
+ Let $X$ be the set of all linearly independent subsets of $V$, ordered by inclusion. We want to find a maximal $B\in X$. Such a $B$ is a basis: if $B$ did not span $V$, we could choose $x\not\in \spn B$, and then $B\cup \{x\}$ would be independent, contradicting maximality.
+
+ So we have to find such a maximal $B$. By Zorn's lemma, we simply have to show that every chain has an upper bound.
+
+ Given a chain $\{A_i: i\in I\}$ in $X$, a reasonable guess is to try the union. Let $A = \bigcup A_i$. Then $A\supseteq A_i$ for all $i$, by definition. So it is enough to check that $A\in X$, i.e.\ that $A$ is linearly independent.
+
+ Suppose not. Say $\lambda_1x_1 + \cdots + \lambda_nx_n = 0$ for some scalars $\lambda_1, \cdots, \lambda_n$ (not all $0$). Suppose $x_1 \in A_{i_1}, \cdots, x_n \in A_{i_n}$ for some $i_1, \cdots, i_n \in I$. Then there is some $A_{i_m}$ that contains all $A_{i_k}$, since they form a finite chain. So $A_{i_m}$ contains all $x_i$. This contradicts the independence of $A_{i_m}$.
+
+ Hence by Zorn's lemma, $X$ has a maximal element. Done.
+\end{proof}
+
+Another application is the completeness theorem for propositional logic when $P$, the primitives, can be uncountable.
+
+\begin{thm}[Model existence theorem (uncountable case)]\index{model existence theorem}
+ Let $S\subseteq L(P)$ for any set of primitive propositions $P$. Then if $S$ is consistent, $S$ has a model.
+\end{thm}
+
+\begin{proof}
+ We need a consistent $\bar S \supseteq S$ such that $\forall t\in L$, $t\in \bar S$ or $\neg t\in \bar S$. Then we have a valuation $v(t) = \begin{cases} 1 & t\in \bar S \\ 0 & t\not\in \bar S\end{cases}$, as in our original proof for the countable case.
+
+ So we seek a \emph{maximal} consistent $\bar S\supseteq S$. If $\bar S$ is maximal, then if $t\not\in \bar S$, we must have $\bar S \cup \{t\}$ inconsistent, i.e.\ $\bar S \cup \{t\}\vdash \bot$. By the deduction theorem, this means that $\bar S \vdash \neg t$. So $\bar S\cup \{\neg t\}$ is consistent, and by maximality we must have $\neg t \in \bar S$. So either $t$ or $\neg t$ is in $\bar S$.
+
+ Now we show that there is such a maximal $\bar S$. Let $X = \{ T\subseteq L: T\text{ is consistent }, T\supseteq S\}$. Then $X\not=\emptyset$ since $S\in X$. We show that any non-empty chain has an upper bound. An obvious choice is, again the union.
+
+ Let $\{T_i: i\in I\}$ be a non-empty chain. Let $T = \bigcup T_i$. Then $T\supseteq T_i$ for all $i$. So to show that $T$ is an upper bound, we have to show $T\in X$.
+
+ Certainly, $T\supseteq S$, as any $T_i$ contains $S$ (and the chain is non-empty). So we want to show $T$ is consistent. Suppose $T\vdash \bot$. So we have $t_1, \cdots, t_n \in T$ with $\{t_1, \cdots, t_n\} \vdash \bot$, since proofs are finite. Then some $T_k$ contains all $t_i$ since $T_i$ are nested. So $T_k$ is inconsistent. This is a contradiction. Therefore $T$ must be consistent.
+
+ Hence by Zorn's lemma, there is a maximal element of $X$.
+\end{proof}
+This proof is basically the same proof that every vector space has a basis! In fact, most proofs involving Zorn's lemma are similar.
+
+\subsection{Zorn's lemma and axiom of choice}
+Recall that in the proof of Zorn's, we picked $x$, then picked $x'$, then picked $x''$, \emph{ad infinitum}. Here we are making arbitrary choices of $x'$. In particular, we have made infinitely many arbitrary choices.
+
+We did the same in IA Numbers and Sets, when proving a countable union of countable sets is countable, because we chose, for each $A_n$, a listing of $A_n$, and then count them diagonally. We needed to make a choice because each $A_n$ has a lot of possible listings, and we have to pick exactly one.
+
+In terms of ``rules for producing sets'', we are appealing to the axiom of choice, which states that you can pick an element of each $A_i$ whenever $\{A_i: i\in I\}$ is a family of non-empty sets. Formally,
+\begin{axiom}[Axiom of choice]\index{axiom!of choice}
+ Given any family $\{A_i: i\in I\}$ of non-empty sets, there is a \emph{choice function} $f: I \to \bigcup A_i$ such that $f(i)\in A_i$.
+\end{axiom}
+This is of a different character from the other set-building rules (e.g.\ unions and power sets exist). The difference is that the other rules are concrete. We know exactly what $A\cup B$ is, and there is only one possible candidate for what $A\cup B$ might be. ``Union'' uniquely specifies what it produces. However, the choice function is not uniquely specified: $\{A_i:i\in I\}$ can have many choice functions, and the axiom of choice does not give us a solid, explicit one. We say the axiom of choice is \emph{non-constructive}.
+
+We are not saying that it's wrong, but it's \emph{weird}. For this reason, it is often of interest to ask ``Did I use AC?'' and ``Do I need AC?''.
+
+(It is important to note that the Axiom of Choice is needed only to make \emph{infinite} choices. It is trivially true if $|I| = 1$, since $A\not=\emptyset$ by definition means $\exists x\in A$. We can also do it for two sets. Similarly, for $|I|$ finite, we can do it by induction. However, in general, AC is required to make infinite choices, i.e.\ it cannot be deduced from the other axioms of set theory)
+
+In the proof of Zorn's we used Choice. However, do we \emph{need} it? Is it possible to prove it without Choice?
+
+The answer is it is necessary, since we can deduce AC from Zorn's. In other words, we can write down a proof of AC from Zorn's, using only the other set-building rules.
+
+\begin{thm}
+ Zorn's Lemma $\Leftrightarrow$ Axiom of choice.
+\end{thm}
+
+As in the past uses of Zorn's lemma, we have a big scary choice function to produce. We know that we can do it for small cases, such as when $|I| = 1$. So we start with small attempts and show that the maximal attempt is what we want.
+\begin{proof}
+ We have already proved that AC $\Rightarrow$ Zorn. We now prove the other direction.
+
+ Given a family $\{A_i: i\in I\}$ of non-empty sets, we say a \emph{partial choice function} is a function $f: J\to \bigcup_{i\in I}A_i$ (for some $J\subseteq I$) such that $f(j)\in A_j$ for all $j\in J$.
+
+ Let $X = \{(J, f): f\text{ is a partial choice function with domain }J\}$. We order by extension, i.e.\ $(J, f) \leq (J', f')$ iff $J\subseteq J'$ and $f'$ agrees with $f$ when both are defined.
+
+ Given a chain $\{(J_k, f_k): k\in K\}$, we have an upper bound $\left(\bigcup J_k, \bigcup f_k\right)$, i.e.\ the function obtained by combining all functions in the chain. So by Zorn's lemma, $X$ has a maximal element $(J, f)$.
+
+ Suppose $J \not = I$. Then pick $i\in I\setminus J$. Then pick $x\in A_i$. Set $J' = J\cup \{i\}$ and $f' = f\cup\{(i, x)\}$. Then this is greater than $(J, f)$. This contradicts the maximality of $(J, f)$. So we must have $J = I$, i.e.\ $f$ is a full choice function.
+\end{proof}
+
+We have shown that Zorn's lemma is equivalent to the Axiom of Choice. There is a \emph{third} statement that is also equivalent to both of these:
+\begin{thm}[Well-ordering theorem]\index{well-ordering theorem}
+ Axiom of choice $\Rightarrow$ every set $X$ can be well-ordered.
+\end{thm}
+This might be very surprising at first for, say $X = \R$, since there is no obvious way we can well-order $\R$. However, it is much less surprising given Hartogs' lemma, since Hartogs' lemma says that there is a (well-ordered) ordinal even bigger than $\R$. So well-ordering $\R$ shouldn't be hard.
+
+\begin{proof}
+ The idea is to pick an element from $X$ and call it the first; pick another element and call it the second, and continue transfinitely until we pick everything.
+
+ For each $A\subseteq X$ with $A\not= X$, we let $y_A$ be an element of $X\setminus A$. Here we are using Choice to pick out $y_A$.
+
+ Define $x_\alpha$ recursively: Having defined $x_{\beta}$ for all $\beta < \alpha$, if $\{x_\beta: \beta < \alpha\} = X$, then stop. Otherwise, set $x_\alpha = y_{\{x_\beta: \beta< \alpha\}}$, i.e.\ pick $x_\alpha$ to be something not yet chosen.
+
+ We must stop at some point. Otherwise, we would have injected $\gamma(X)$ (i.e.\ the ordinal larger than $X$) into $X$, which is a contradiction. So when we stop, we have bijected $X$ with a well-ordered set (namely $I_\alpha$, where $\alpha$ is the stage at which we stopped). Hence we have well-ordered $X$.
+\end{proof}
+
+Did we need AC? Yes, trivially.
+\begin{thm}
+ Well-ordering theorem $\Rightarrow $ Axiom of Choice.
+\end{thm}
+
+\begin{proof}
+ Given non-empty sets $\{A_i: i\in I\}$, well-order $\bigcup A_i$. Then define $f(i)$ to be the least element of $A_i$.
+\end{proof}
+Our conclusion is:
+\begin{center}
+ Axiom of Choice $\Leftrightarrow$ Zorn's lemma $\Leftrightarrow$ Well-ordering theorem.
+\end{center}
+Before we end, we need to do a small sanity check: we showed that these three statements are equivalent using a lot of ordinal theory. Our proofs above make sense only if we did not use AC when building our ordinal theory. Fortunately, we did not, apart from the remark that $\omega_1$ is not a countable supremum --- which used the fact that a countable union of countable sets is countable.
+
+\subsection{Bourbaki-Witt theorem*}
+Finally, we'll quickly present a second (non-examinable) fixed-point theorem. This time, we are concerned about \emph{chain-complete} posets and \emph{inflationary} functions.
+
+\begin{defi}[Chain-complete poset]\index{chain-complete}
+ We say a poset $X$ is \emph{chain-complete} if $X \not=\emptyset$ and every non-empty chain has a supremum.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Every complete poset is chain-complete.
+ \item Any finite (non-empty) poset is chain-complete, since every chain is finite and has a greatest element.
+ \item $\{A\subseteq V: A\text{ is linearly independent}\}$ for any vector space $V$ is chain-complete, as shown in the proof that every vector space has a basis.
+ \end{itemize}
+\end{eg}
+
+\begin{defi}[Inflationary function]\index{inflationary function}
+ A function $f: X\to X$ is \emph{inflationary} if $f(x) \geq x$ for all $x$.
+\end{defi}
+
+\begin{thm}[Bourbaki-Witt theorem]\index{Bourbaki-Witt theorem}
+ If $X$ is chain-complete and $f: X\to X$ is inflationary, then $f$ has a fixed point.
+\end{thm}
+This follows instantly from Zorn's lemma, since $X$ has a maximal element $x$, and since $f(x) \geq x$, we must have $f(x) = x$. However, we can prove Bourbaki-Witt without choice. In the proof of Zorn's, we had to ``pick'' an $x_1 > x_0$. Here, we can simply let
+\[
+ x_0 \xmapsto{\,f\,} x_1 \xmapsto{\,f\,} x_2 \xmapsto{\,f\,} x_3 \cdots
+\]
+Since each chain has a \emph{supremum} instead of an upper bound, we also don't need Choice to pick our favorite upper bound of each chain.
+
+Then we can do the same proof as Zorn's to find a fixed point.
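+
+As a small example of the iteration, take the chain-complete poset $[0, 1]$ and the inflationary map $f(x) = \frac{1 + x}{2}$. Starting from $x_0 = 0$, we get $x_n = 1 - 2^{-n}$, and the chain $\{x_n: n\in \N\}$ has supremum $x_\omega = 1$, which is indeed a fixed point of $f$. On $(0, 1)$ this would fail, since that chain has no supremum --- $(0, 1)$ is not chain-complete.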
+
+We can view this as the ``AC-free'' part of Zorn's. It can be used to prove Zorn's lemma, but the proof is totally magic.
+
+\section{Predicate logic}
+\label{sec:predicate}\index{predicate logic}
+In the first chapter, we studied propositional logic. However, it isn't sufficient for most mathematics we do.
+
+In, say, group theory, we have a set of objects, operations and constants. For example, we have the operations multiplication $m: A^2 \to A$, inverse $i: A\to A$, and a constant $e\in A$. For each of these, we assign a number known as the \emph{arity}, which specifies how many inputs each operation takes. For example, multiplication has arity 2, inverse has arity 1 and $e$ has arity $0$ (we can view $e$ as a function $A^0 \to A$, that takes no inputs and gives a single output).
+
+The study of these objects is known as \emph{predicate logic}. Compared to propositional logic, we have a much richer language, which includes all the operations and possibly relations. For example, with group theory, we have $m, i, e$ in our language, as well as things like $\forall$, $\Rightarrow$ etc. Note that unlike propositional logic, different theories give rise to different languages.
+
+Instead of a valuation, now we have a \emph{structure}, which is a solid object plus the operations and relations required. For example, a structure of group theory will be an actual concrete group with the group operations.
+
+Similar to what we did in propositional logic, we will take $S\models t$ to mean ``for any structure in which $S$ is true, $t$ is true''. For example, ``Axioms of group theory'' $\models m(e, e) = e$, i.e.\ in any set that satisfies the group axioms, $m(e, e) = e$. We also have $S\vdash t$ meaning we can prove $t$ from $S$.
+
+\subsection{Language of predicate logic}
+We start with the definition of the language. This is substantially more complicated than what we've got in propositional logic.
+\begin{defi}[Language]\index{language}
+ Let $\Omega$ (function symbols) and $\Pi$ (relation symbols) be disjoint sets, and $\alpha : \Omega\cup\Pi \to \N$ a function (``arity'').
+
+ The \emph{language} $L = L(\Omega, \Pi, \alpha)$ is the set of formulae, defined as follows:
+
+ \begin{itemize}
+ \item \emph{Variables}\index{variable}: we have some variables $x_1, x_2, \cdots$. Sometimes (i.e.\ always), we write $x, y, z, \cdots$ instead.
+ \item \emph{Terms}\index{term}: these are defined inductively by
+ \begin{enumerate}
+ \item Every variable is a term
+ \item If $f\in \Omega$, $\alpha(f) = n$, and $t_1, \cdots, t_n$ are terms, then $ft_1\cdots t_n$ is a term. We often write $f(t_1, \cdots, t_n)$ instead.
+ \end{enumerate}
+ \begin{eg}
+ In the language of groups $\Omega = \{m, i, e\}$, $\Pi = \emptyset$, and $\alpha (m) = 2, \alpha(i) = 1, \alpha(e) = 0$. Then $e, x_1, m(x_1, x_2), i(m(x_1, x_1))$ are terms.
+ \end{eg}
+ \item \emph{Atomic formulae}\index{atomic formulae}: there are three sorts:
+ \begin{enumerate}
+ \item $\bot$
+ \item $(s = t)$ for any terms $s, t$.
+ \item $(\phi t_1 \cdots t_n)$ for any $\phi\in \Pi$ with $\alpha(\phi) = n$ and $t_1, \cdots, t_n$ terms.
+ \end{enumerate}
+ \begin{eg}
+ In the language of posets, $\Omega = \emptyset$, $\Pi=\{\leq\}$ and $\alpha(\leq) = 2$. Then $(x_1 = x_1)$, $x_1\leq x_2$ (really means $(\leq x_1x_2)$) are atomic formulae.
+ \end{eg}
+ \item \emph{Formulae}\index{formulae}: defined inductively by
+ \begin{enumerate}
+ \item Atomic formulae are formulae
+ \item $(p \Rightarrow q)$ is a formula for any formulae $p, q$.
+ \item $(\forall x) p$ is a formula for any formula $p$ and variable $x$.
+ \end{enumerate}
+ \begin{eg}
+ In the language of groups, $e = e$, $x_1 = e$, $m(e, e) = e$, $(\forall x) m(x, i(x)) = e$, $(\forall x)(m(x, x) = e \Rightarrow (\exists y) (m(y, y) = x))$ are formulae.
+ \end{eg}
+ \end{itemize}
+\end{defi}
+It is important to note that a formula is a string of meaningless symbols. It doesn't make sense to ask whether it is true or false. In particular, the function and relation symbols are not assigned any meaning. The only thing the language cares about is the arity of each symbol.
+
+Again, we have the usual abbreviations $\neg p$, $p\wedge q$, $p\vee q$ etc. Also, we have $(\exists x)p$ for $\neg(\forall x)(\neg p)$.
+
+\begin{defi}[Closed term]\index{closed term}
+ A term is \emph{closed} if it has no variables.
+\end{defi}
+
+\begin{eg}
+ In the language of groups, $e, m(e, e)$ are closed terms. However, $m(x, i(x))$ is not closed even though we think it is always $e$. Apart from the fact that it is by definition not closed (it has a variable $x$), we do not have the groups axioms stating that $m(x, i(x)) = e$.
+\end{eg}
+
+\begin{defi}[Free and bound variables]\index{free variable}\index{bound variable}
+ An occurrence of a variable $x$ in a formula $p$ is \emph{bound} if it is inside brackets of a $(\forall x)$ quantifier. It is \emph{free} otherwise.
+\end{defi}
+
+\begin{eg}
+ In $(\forall x)(m(x, x) = e)$, $x$ is a bound variable.
+
+ In $(\forall y)(m(y, y) = x \Rightarrow m(x, y) = m(y, x))$, $y$ is bound while $x$ is free.
+
+ We are technically allowed to have a formula with $x$ both bound and free, but \emph{DO NOT DO IT}. For example, $m(x, x) = e \Rightarrow (\forall x)(\forall y)(m(x, y) = m(y, x))$ is a valid formula (first two $x$ are free, while the others are bound).
+\end{eg}
+
+\begin{defi}[Sentence]\index{sentence}
+ A \emph{sentence} is a formula with no free variables.
+\end{defi}
+
+\begin{eg}
+ $m(e, e) = e$ and $(\forall x)(m(x, x) = e)$ are sentences, while $m(x, i(x)) = e$ is not.
+\end{eg}
+
+\begin{defi}[Substitution]\index{substitution}
+ For a formula $p$, a variable $x$ and a term $t$, the \emph{substitution} $p[t/x]$ is obtained by replacing each free occurrence of $x$ with $t$.
+\end{defi}
+
+\begin{eg}
+ If $p$ is the statement $(\exists y)(m(y, y) = x)$, then $p[e/x]$ is $(\exists y)(m(y, y) = e)$.
+\end{eg}
+\subsection{Semantic entailment}
+In propositional logic, we can't say whether a proposition is true or false unless we have a valuation. What would be a ``valuation'' in the case of predicate logic? It is a set with the operations of the right arity. We call this a structure.
+
+For example, in the language of groups, a structure is a set $G$ with the correct operations. Note that it does not have to satisfy the group axioms in order to qualify as a structure.
+
+\begin{defi}[Structure]\index{structure}
+ An $L$-\emph{structure} is a non-empty set $A$ with a function $f_A: A^n \to A$ for each $f\in \Omega, \alpha(f) = n$, and a relation $\phi_A\subseteq A^n$, for each $\phi\in \Pi$, $\alpha(\phi) = n$.
+\end{defi}
+Note that we explicitly forbid $A$ from being empty. It is possible to formulate predicate logic that allows empty structures, but we will have to make many exceptions when defining our axioms, as we will see later. Since empty structures are mostly uninteresting (and don't exist if there is at least one constant), it isn't a huge problem if we ignore it. (There is a small caveat here --- we are working with single-sorted logic here, so everything is of the same ``type''. If we want to work with multi-sorted logic, where there can be things of different types, it would then be interesting to consider the case where some of the types could be empty).
+
+\begin{eg}
+ In the language of posets $A$, a structure is a set $A$ with a relation $\leq_A \subseteq A\times A$.
+
+ In the language of groups, a structure is a set $A$ with functions $m_A: A\times A\to A$, $i_A: A\to A$ and $e_A\in A$.
+\end{eg}
+Again, these need not be genuine posets/groups since we do not have the axioms yet.
+
+Now we want to define ``$p$ holds in $A$'' for a sentence $p\in L$ and an $L$-structure $A$.
+
+For example, we want $(\forall x)(m(x, x) = e)$ to be true in $A$ iff for each $a\in A$, we have $m_A(a, a) = e_A$. So to translate $p$ into something about $A$, you ``add subscript $_A$ to each function-symbol and relation-symbol, insert $\in A$ after the quantifiers, and say it aloud''. We call this the interpretation of the sentence $p$.
+
+This is not great as a definition. So we define it formally, and then quickly forget about it.
+
+\begin{defi}[Interpretation]\index{interpretation}
+ To define the \emph{interpretation} $p_A\in \{0, 1\}$ for each sentence $p$ and $L$-structure $A$, we define inductively:
+ \begin{enumerate}
+ \item Closed terms: define $t_A\in A$ for each closed term $t$ by
+ \[
+ (ft_1\cdots t_n)_A = f_A(t_{1_A}, \cdots, t_{n_A})
+ \]
+ for any $f\in \Omega$, $\alpha(f) = n$, and closed terms $t_1, \cdots, t_n$.
+
+ \begin{eg}
+ $(m(m(e,e),e))_A = m_A(m_A(e_A, e_A),e_A)$.
+ \end{eg}
+ \item Atomic formulae:
+ \begin{align*}
+ \bot_A &= 0\\
+ (s = t)_A &=
+ \begin{cases}
+ 1 & s_A = t_A\\
+ 0 & s_A \not= t_A
+ \end{cases}\\
+ (\phi t_1 \cdots t_n)_A &=
+ \begin{cases}
+ 1 & (t_{1_A}, \cdots, t_{n_A})\in \phi_A \\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \end{align*}
+ \item Sentences:
+ \begin{align*}
+ (p\Rightarrow q)_A &=
+ \begin{cases}
+ 0 & p_A = 1, q_A = 0\\
+ 1 & \text{otherwise}
+ \end{cases}\\
+ ((\forall x)p)_A &=
+ \begin{cases}
+ 1 & p[\bar a/x]_{\bar A} = 1 \text{ for all }a\in A\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \end{align*}
+ where for any $a\in A$, we define a new language $L'$ by adding a constant $\bar a$ and make $A$ into an $L'$ structure $\bar A$ by setting $\bar a_{\bar A} = a$.
+ \end{enumerate}
+\end{defi}
+Now that we have formally defined truth, just forget about it!
+
+Note that we have only defined the interpretation for sentences. We can also define it for formulae with free variables. For any formula $p$ with $n$ free variables, we can define the interpretation as the set of all things that satisfy $p$. For example, if $p$ is $(\exists y)(m(y, y) = a)$, then
+\[
+ p_A = \{a\in A: \exists b\in A\text{ such that } m(b, b) = a\}.
+\]
+However, we are mostly interested in sentences, and don't have to worry about these.
+
+Now we can define models and entailment as in propositional logic.
+\begin{defi}[Theory]\index{theory}
+ A \emph{theory} is a set of sentences.
+\end{defi}
+
+\begin{defi}[Model]\index{model}
+ If a sentence $p$ has $p_A = 1$, we say that $p$ \emph{holds} in $A$, or $p$ is \emph{true} in $A$, or $A$ is a model of $p$.
+
+ For a theory $S$, a \emph{model} of $S$ is a structure that is a model for each $s\in S$.
+\end{defi}
+
+\begin{defi}[Semantic entailment]\index{semantic entailment}
+ For a theory $S$ and a sentence $t$, $S$ \emph{entails} $t$, written as $S\models t$, if every model of $S$ is a model of $t$.
+
+ ``Whenever $S$ is true, $t$ is also true''.
+\end{defi}
+
+\begin{defi}[Tautology]\index{tautology}
+ $t$ is a \emph{tautology}, written $\models t$, if $\emptyset\models t$, i.e.\ it is true everywhere.
+\end{defi}
+
+\begin{eg}
+ $(\forall x)(x = x)$ is a tautology.
+\end{eg}
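+
+Slightly less trivially, $(\forall x)(\forall y)\big((x = y) \Rightarrow (y = x)\big)$ is also a tautology: in any structure $A$ and for any $a, b\in A$, either $a\not= b$, in which case the implication holds vacuously, or $a = b$, in which case both sides of the implication hold.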
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Groups:
+ \begin{itemize}
+ \item The language $L$ is $\Omega = (m, i, e)$ and $\Pi = \emptyset$, with arities 2, 1, 0 respectively.
+ \item Let $T$ be
+ \begin{align*}
+ \{&(\forall x)(\forall y)(\forall z) m(x,m(y, z)) = m(m(x, y), z),\\
+ &(\forall x)(m(x, e) = x \wedge m(e, x) = x),\\
+ & (\forall x)(m(x, i(x)) = e \wedge m(i(x), x) = e)\}.
+ \end{align*}
+ \end{itemize}
+ Then an $L$-structure $A$ is a model for $T$ iff $A$ is a group. We say $T$ \emph{axiomatizes} the theory of groups/class of groups. Sometimes we call the members of $T$ the \emph{axioms} of $T$.
+
+ Note that we could use a different language and theory to axiomatize group theory. For example, we can have $\Omega = (m, e)$ and change the last axiom to $(\forall x)(\exists y)(m(x, y) = e\wedge m(y, x) = e)$.
+ \item Fields:
+ \begin{itemize}
+ \item The language $L$ is $\Omega = (+, \times, -, 0, 1)$ and $\Pi = \emptyset$, with arities 2, 2, 1, 0, 0.
+ \item The theory $T$ consists of:
+ \begin{itemize}
+ \item Abelian group under $+$
+ \item $\times$ is commutative, associative, and distributes over $+$
+ \item $\neg (0 = 1)$
+ \item $(\forall x)((\neg(x = 0)) \Rightarrow (\exists y)(y\times x = 1))$.
+ \end{itemize}
+ Then an $L$-structure is a model of $T$ iff it is a field. We then have $T\models$ ``inverses are unique'', i.e.
+ \[
+ T\models (\forall x)\Big(\big(\neg(x = 0)\big) \Rightarrow (\forall y)(\forall z)\big((xy = 1 \wedge xz = 1)\Rightarrow (y = z)\big)\Big)
+ \]
+ \end{itemize}
+ \item Posets:
+ \begin{itemize}
+ \item The language is $\Omega = \emptyset$, and $\Pi=\{\leq\}$ with arity 2.
+ \item The theory $T$ is
+ \begin{align*}
+ \{& (\forall x)(x \leq x),\\
+ & (\forall x)(\forall y)(\forall z)\big((x \leq y)\wedge(y \leq z) \Rightarrow x \leq z\big),\\
+ & (\forall x)(\forall y)\big((x \leq y \wedge y \leq x) \Rightarrow x = y\big)\}
+ \end{align*}
+ Then $T$ axiomatizes the theory of posets.
+ \end{itemize}
+ \item Graphs:
+ \begin{itemize}
+ \item The language $L$ is $\Omega = \emptyset$ and $\Pi = \{a\}$ with arity 2. This relation is ``adjacent to''. So $a(x, y)$ means there is an edge between $x$ and $y$.
+ \item The theory is
+ \begin{align*}
+ \{& (\forall x)(\neg a(x, x)),\\
+ & (\forall x)(\forall y)(a(x, y)\Leftrightarrow a(y, x))\}
+ \end{align*}
+ \end{itemize}
+\end{enumerate}
+Predicate logic is also called ``first-order logic''. It is ``first-order'' because our quantifiers range over \emph{elements} of the structure only, and not subsets. It would be difficult (and in fact impossible) to axiomatize, say, a complete ordered field, since the definition says every bounded \emph{subset} has a least upper bound.
+\end{eg}
+
+\subsection{Syntactic implication}
+Again, to define syntactic implication, we need axioms and deduction rules.
+\begin{defi}[Axioms of predicate logic]
+ The \emph{axioms of predicate logic} consists of the 3 usual axioms, 2 to explain how $=$ works, and 2 to explain how $\forall$ works. They are
+ \begin{enumerate}[label=\arabic{*}.]
+ \item $p\Rightarrow (q\Rightarrow p)$ for any formulae $p, q$.
+ \item $[p\Rightarrow (q\Rightarrow r)] \Rightarrow [(p\Rightarrow q)\Rightarrow (p\Rightarrow r)]$ for any formulae $p, q, r$.
+ \item $(\neg \neg p\Rightarrow p)$ for any formula $p$.
+ \item $(\forall x)(x = x)$ for any variable $x$.
+ \item $(\forall x)(\forall y)\big((x = y)\Rightarrow (p \Rightarrow p[y/x])\big)$ for any variables $x, y$ and formula $p$, with $y$ not occurring bound in $p$.
+ \item $[(\forall x)p] \Rightarrow p[t/x]$ for any formula $p$, variable $x$, term $t$ with no free variable of $t$ occurring bound in $p$.
+ \item $[(\forall x)(p\Rightarrow q)]\Rightarrow [p\Rightarrow (\forall x)q]$ for any formulae $p, q$ with variable $x$ not occurring free in $p$.
+ \end{enumerate}
+ The \emph{deduction rules} are
+ \begin{enumerate}[label=\arabic{*}.]
+ \item Modus ponens\index{modus ponens}: From $p$ and $p\Rightarrow q$, we can deduce $q$.
+ \item Generalization\index{generalization}: From $r$, we can deduce $(\forall x)r$, provided that no premise used in the proof so far had $x$ as a free variable.
+ \end{enumerate}
+\end{defi}
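+To see why the side condition on generalization is needed, suppose we dropped it. Then from the premise $x = y$, Gen would let us deduce $(\forall x)(x = y)$, giving
+\[
+ \{x = y\} \vdash (\forall x)(x = y),
+\]
+i.e.\ from ``this particular $x$ equals $y$'' we could conclude ``\emph{everything} equals $y$''. The side condition ensures we only generalize over a variable about which we have assumed nothing.
+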
+Again, we can define proofs, theorems etc.
+
+\begin{defi}[Proof]\index{proof}
+ A \emph{proof} of $p$ from $S$ is a sequence of statements, in which each statement is either an axiom, a statement in $S$, or obtained via modus ponens or generalization.
+\end{defi}
+\begin{defi}[Syntactic implication]\index{syntactic entailment}
+ If there exists a proof of a formula $p$ from a set of formulae $S$, we write $S\vdash p$, read ``$S$ proves $p$''.
+\end{defi}
+
+\begin{defi}[Theorem]\index{theorem}
+ If $S\vdash p$, we say $p$ is a \emph{theorem} of $S$. (e.g.\ a theorem of group theory)
+\end{defi}
+
+Note that these definitions are exactly the same as those we had in propositional logic. The only thing that changed is the set of axioms and deduction rules.
+
+\begin{eg}
+ $\{x = y, x = z\}\vdash y = z$.
+
+ The idea is to use Axiom 5 to turn $x = z$ into $y = z$, i.e.\ to obtain $(x = y)\Rightarrow (x = z \Rightarrow y = z)$.
+
+ \begin{enumerate}[label=\arabic{*}.]
+ \item $(\forall x)(\forall y)((x = y)\Rightarrow (x = z\Rightarrow y=z))$\hfill Axiom 5
+ \item $[(\forall x)(\forall y)((x = y)\Rightarrow (x = z\Rightarrow y=z))]\Rightarrow (\forall y)((x = y)\Rightarrow (x = z\Rightarrow y=z))$ \hfill Axiom 6
+ \item $(\forall y)((x = y) \Rightarrow x = z \Rightarrow y = z)$\hfill MP on 1, 2
+ \item $[(\forall y)((x = y) \Rightarrow x = z \Rightarrow y = z)]\Rightarrow [(x = y)\Rightarrow (x = z\Rightarrow y = z)]$\hfill Axiom 6
+ \item $(x = y) \Rightarrow [(x = z) \Rightarrow (y = z)]$\hfill MP on 3, 4
+ \item $x = y$\hfill Premise
+ \item $(x = z) \Rightarrow (y = z)$\hfill MP on 5, 6
+ \item $x = z$ \hfill Premise
+ \item $y = z$ \hfill MP on 7, 8
+ \end{enumerate}
+
+ Note that in the first 5 rows, we are merely doing tricks to get rid of the $\forall$ signs.
+\end{eg}
+
+We can now revisit why we forbid $\emptyset$ from being a structure. If we allowed $\emptyset$, then $(\forall x)\bot$ holds in $\emptyset$. However, Axiom 6 states that $((\forall x)\bot )\Rightarrow \bot$. So we can deduce $\bot$ in the empty structure! To fix this, we will have to add some weird clauses to our axioms, or simply forbid the empty structure!
+
+Now we will prove the theorems we had for propositional logic.
+\begin{prop}[Deduction theorem]\index{deduction theorem}
+ Let $S\subseteq L$, and $p, q\in L$. Then $S\cup \{p\}\vdash q$ if and only if $S\vdash p\Rightarrow q$.
+\end{prop}
+
+\begin{proof}
+ The proof is exactly the same as the one for propositional logic, except that we now have a new deduction rule, Gen, to check.
+
+ Suppose we have lines
+ \begin{itemize}
+ \item $r$
+ \item $(\forall x) r$\hfill Gen
+ \end{itemize}
+ and we have a proof of $S\vdash p\Rightarrow r$ (by induction). We seek a proof of $p\Rightarrow (\forall x)r$ from $S$.
+
+ We know that no premise used in the proof of $r$ from $S\cup \{p\}$ had $x$ as a free variable, as required by the conditions of the use of Gen. Hence no premise used in the proof of $p\Rightarrow r$ from $S$ had $x$ as a free variable.
+
+ Hence $S\vdash (\forall x)(p\Rightarrow r)$.
+
+ If $x$ is not free in $p$, then we get $S\vdash p\Rightarrow (\forall x)r$ by Axiom 7 (and MP).
+
+ If $x$ \emph{is} free in $p$, then we did not use the premise $p$ in our proof of $r$ from $S\cup \{p\}$ (by the conditions of the use of Gen). So $S\vdash r$, and hence $S\vdash (\forall x)r$ by Gen. So $S\vdash p\Rightarrow (\forall x)r$.
+\end{proof}
+
+Now we want to show $S\vdash p$ iff $S\models p$. For example, a sentence that holds in all groups should be deducible from the axioms of group theory.
+
+\begin{prop}[Soundness theorem]\index{soundness theorem}
+ Let $S$ be a set of sentences, $p$ a sentence. Then $S\vdash p$ implies $S\models p$.
+\end{prop}
+
+\begin{proof}(non-examinable)
+ We have a proof of $p$ from $S$, and want to show that for every model of $S$, $p$ holds.
+
+ This is an easy induction on the lines of the proof, since our axioms are tautologies and our rules of deduction are sane.
+\end{proof}
+
+The hard part is proving
+\[
+ S\models p \Rightarrow S\vdash p.
+\]
+This is, by the deduction theorem,
+\[
+ S\cup \{\neg p\}\models \bot \Rightarrow S\cup \{\neg p\}\vdash \bot.
+\]
+This is equivalent to the contrapositive:
+\[
+ S\cup \{\neg p\} \not\vdash \bot \Rightarrow S\cup \{\neg p\}\not\models \bot.
+\]
+
+\begin{thm}[Model existence lemma]\index{model existence theorem}
+ Let $S$ be a consistent set of sentences. Then $S$ has a model.
+\end{thm}
+
+We need several ideas to prove the lemma:
+\begin{enumerate}
+ \item We need to find a structure. Where can we start from? The only thing we have is the language. So we start from the language. Let $A$ be the set of all closed terms, with the obvious operations.
+
+ For example, in the theory of fields, we have ``$1 + 1$'', ``$0 + 1$'' etc.\ in the structure. Then $(1 + 1) +_A (0 + 1) = (1 + 1) + (0 + 1)$.
+ \item However, we have a problem. In, say, the language of fields, with $S$ our field axioms, our $A$ has distinct elements ``$1 + 0$'', ``$0 + 1$'', ``$0 + 1 + 0$'' etc. However, $S\vdash 1 + 0 = 0 + 1$ etc. So we can't have them as distinct elements. The solution is to quotient out by the equivalence relation $s\sim t$ if $S\vdash (s = t)$, i.e.\ our structure is the set of equivalence classes. It is a trivial check that the $+$, $\times$ operations are well-defined on the equivalence classes.
+ \item We have the next problem: suppose $S$ is ``fields of characteristic 2 or 3'', i.e.\ $S$ is the field axioms plus $(1 + 1 = 0) \vee (1 + 1 + 1 = 0)$. Then $S\not\vdash 1 + 1 = 0$, and also $S\not\vdash 1 + 1 + 1 = 0$. So $[1 + 1] \not = [0]$, and $[1 + 1 + 1] \not= [0]$. But then our $A$ has neither characteristic $2$ nor $3$.
+
+ This is similar to the problem we had in the propositional logic case, where we didn't know what to do with $p_3$ if $S$ only talks about $p_1$ and $p_2$. So we first extend $S$ to a maximal consistent (or \emph{complete}) $\bar S$.
+
+ \item Next problem: Let $S =$ ``fields with a square root of 2'', i.e.\ $S$ is the field axioms plus $(\exists x)(xx = 1 + 1)$. However, there is no closed term $t$ which is equivalent to $\sqrt{2}$. We say we lack \emph{witnesses} to the statement $(\exists x)(xx = 1 + 1)$. So we add a witness. We add a constant $c$ to the language, and add the axiom ``$cc = 1 + 1$'' to $S$. We do this for each such existential statement.
+
+ \item Now what? We have added new symbols to $S$, so our new $S$ is no longer complete! Of course, we go back to (iii), and take the completion again. Then we have new existential statements to take care of, and we do (iv) again. Then we're back to (iii) again! It won't terminate!
+
+ So we keep on going, and finally take the union of all stages.
+ \end{enumerate}
+
+\begin{proof}(non-examinable)
+ Suppose we have a consistent $S$ in the language $L = L(\Omega, \Pi)$. Extend $S$ to a consistent $S_1$ such that $p\in S_1$ or $(\neg p)\in S_1$ for each sentence $p\in L$ (by applying Zorn's lemma to get a maximal consistent $S_1$). In particular, $S_1$ is \emph{complete}, meaning $S_1\vdash p$ or $S_1 \vdash \neg p$ for all $p$.
+
+ Then for each sentence of the form $(\exists x)p$ in $S_1$, add a new constant $c$ to $L$ and add $p[c/x]$ to $S_1$ --- obtaining $T_1$ in language $L_1 = L(\Omega \cup C_1, \Pi)$. It is easy to check that $T_1$ is consistent.
+
+ Extend $T_1$ to a complete theory $S_2\subseteq L_1$, and add witnesses to form $T_2 \subseteq L_2 = L(\Omega \cup C_1 \cup C_2, \Pi)$. Continue inductively.
+
+ Let $\bar S = S_1 \cup S_2 \cup \cdots$ in language $\bar L = L_1\cup L_2\cup \cdots$ (i.e.\ $\bar L = L(\Omega\cup C_1\cup C_2\cup\cdots, \Pi)$).
+
+ \begin{claim}
+ $\bar S$ is consistent, complete, and has witnesses, i.e.\ if $(\exists x)p \in \bar S$, then $p[t/x]\in \bar S$ for some term $t$.
+ \end{claim}
+
+ It is consistent since if $\bar S \vdash \bot$, then some $S_n \vdash \bot$ since proofs are finite. But all $S_n$ are consistent. So $\bar S$ is consistent.
+
+ To show completeness, for sentence $p\in \bar L$, we have $p\in L_n$ for some $n$, as $p$ has only finitely many symbols. So $S_{n + 1} \vdash p$ or $S_{n + 1}\vdash \neg p$. Hence $\bar S \vdash p$ or $\bar S \vdash \neg p$.
+
+ To show existence of witnesses, if $(\exists x)p \in \bar S$, then $(\exists x) p\in S_n$ for some $n$. Hence (by construction of $T_n$), we have $p[c/x] \in T_n$ for some constant $c$.
+
+ Now define an equivalence relation $\sim$ on closed terms of $\bar L$ by $s\sim t$ if $\bar S\vdash (s = t)$. It is easy to check that this is indeed an equivalence relation. Let $A$ be the set of equivalence classes. Define
+ \begin{enumerate}
+ \item $f_A([t_1],\cdots, [t_n]) = [f(t_1, \cdots, t_n)]$ for each function symbol $f\in \Omega$ with $\alpha(f) = n$.
+ \item $\phi_A = \{([t_1], \cdots, [t_n]): \bar S \vdash \phi(t_1, \cdots, t_n)\}$ for each relation symbol $\phi \in \Pi$ with $\alpha (\phi) = n$.
+ \end{enumerate}
+ It is easy to check that this is well-defined.
+
+ \begin{claim}
+ For each sentence $p$, $\bar S\vdash p$ (i.e.\ $p\in \bar S$) if and only if $p$ holds in $A$, i.e.\ $p_A = 1$.
+ \end{claim}
+ We prove this by an easy induction.
+ \begin{itemize}
+ \item Atomic sentences:
+ \begin{itemize}
+ \item $\bot$: $\bar S \not\vdash \bot$, and $\bot_A = 0$. So good.
+ \item $s = t$: $\bar S \vdash s = t$ iff $[s] = [t]$ (by definition) iff $s_A = t_A$ (by definition of $s_A$) iff $(s = t)_A$. So done.
+ \item $\phi t_1, \cdots, t_n$ is the same.
+ \end{itemize}
+ \item Induction step:
+ \begin{itemize}
+ \item $p\Rightarrow q$: $\bar S\vdash (p \Rightarrow q)$ iff $\bar S \vdash (\neg p)$ or $\bar S\vdash q$ (justification: if $\bar S \not\vdash \neg p$ and $\bar S\not\vdash q$, then $\bar S\vdash p$ and $\bar S \vdash \neg q$ by completeness, hence $\bar S\vdash \neg(p\Rightarrow q)$, contradiction). This is true iff $p_A = 0$ or $q_A = 1$ iff $(p\Rightarrow q)_A = 1$.
+ \item $(\exists x)p$: $\bar S \vdash (\exists x)p$ iff $\bar S\vdash p[t/x]$ for some closed term $t$. This is true since $\bar S$ has witnesses. Now this holds iff $p[t/x]_A = 1$ for some closed term $t$ (by induction). This is the same as saying $(\exists x)p$ holds in $A$, because $A$ is the set of (equivalence classes of) closed terms.
+ \end{itemize}
+ Here it is convenient to pretend $\exists$ is the primitive symbol instead of $\forall$. Then we can define $(\forall x) p$ to be $\neg (\exists x)\neg p$, instead of the other way round. It is clear that the two approaches are equivalent, but using $\exists $ as primitive makes the proof look clearer here.
+ \end{itemize}
+ Hence $A$ is a model of $\bar S$. Hence it is also a model of $S$. So $S$ has a model.
+\end{proof}
+Again, if $L$ is countable (i.e.\ $\Omega, \Pi$ are countable), then Zorn's Lemma is not needed.
+
+From the Model Existence lemma, we obtain:
+\begin{cor}[Adequacy theorem]\index{adequacy theorem}
+ Let $S$ be a theory, and $p$ a sentence. Then $S\models p$ implies $S\vdash p$.
+\end{cor}
+
+\begin{thm}[G\"odel's completeness theorem (for first order logic)]\index{completeness theorem}\index{G\"odel's completeness theorem}
+ Let $S$ be a theory, $p$ a sentence. Then $S\vdash p$ if and only if $S \models p$.
+\end{thm}
+
+\begin{proof}
+ $(\Rightarrow)$ Soundness, $(\Leftarrow)$ Adequacy.
+\end{proof}
+
+\begin{cor}[Compactness theorem]\index{compactness theorem}
+ Let $S$ be a theory such that every finite subset of $S$ has a model. Then so does $S$.
+\end{cor}
+
+\begin{proof}
+ Trivial if we replace ``has a model'' with ``is consistent'', because proofs are finite.
+\end{proof}
+
+We can look at some applications of this:
+
+Can we axiomatize the theory of finite groups (in the language of groups)? i.e.\ is there a set of sentences $T$ whose models are exactly the finite groups?
+
+\begin{cor}
+ The theory of finite groups cannot be axiomatized (in the language of groups).
+\end{cor}
+It is extraordinary that we can \emph{prove} this, as opposed to just ``believing it should be true''.
+
+\begin{proof}
+ Suppose theory $T$ has models all finite groups and nothing else. Let $T'$ be $T$ together with
+ \begin{itemize}
+ \item $(\exists x_1)(\exists x_2)(x_1 \not = x_2)$ (intuitively, $|G| \geq 2$)
+ \item $(\exists x_1)(\exists x_2)(\exists x_3)(x_1 \not= x_2 \wedge x_1 \not= x_3 \wedge x_2 \not= x_3)$ (intuitively, $|G| \geq 3$)
+ \item $\cdots$
+ \end{itemize}
+ Then $T'$ has no model, since each model has to be simultaneously arbitrarily large and finite. But every finite subset of $T'$ does have a model (e.g.\ $\Z_n$ for some $n$). Contradiction.
+\end{proof}
+This proof looks rather simple, but it is not ``easy'' in any sense. We are using the full power of completeness (via compactness), and this is not easy to prove!
+
+\begin{cor}
+ Let $S$ be a theory with arbitrarily large models. Then $S$ has an infinite model.
+
+ ``Finiteness is not a first-order property''
+\end{cor}
+
+\begin{proof}
+ Same as above.
+\end{proof}
+
+Similarly, we have
+\begin{cor}[Upward L\"owenheim-Skolem theorem]\index{L\"owenheim-Skolem theorem}\index{upward L\"owenheim-Skolem theorem}
+ Let $S$ be a theory with an infinite model. Then $S$ has an uncountable model.
+\end{cor}
+
+\begin{proof}
+ Add constants $\{c_i: i\in I\}$ to $L$ for some uncountable $I$.
+
+ Let $T = S\cup\{\text{``}c_i \not= c_j\text{''}: i, j\in I, i \not = j\}$.
+
+ Then any finite $T' \subseteq T$ has a model, since it can only mention finitely many of the $c_i$. So any infinite model of $S$ will do. Hence by compactness, $T$ has a model.
+\end{proof}
+Similarly, we have a model for $S$ that does not inject into $X$, for any chosen set $X$. For example, we can add $\gamma(X)$ constants, or $\P(X)$ constants.
+
+\begin{eg}
+ There exists an infinite field ($\Q$). So there exists an uncountable field (e.g.\ $\R$). Also, there is a field that does not inject into $\P(\P(\R))$, say.
+\end{eg}
+
+\begin{thm}[Downward L\"owenheim-Skolem theorem]\index{L\"owenheim-Skolem theorem}\index{downward L\"owenheim-Skolem theorem}
+ Let $L$ be a countable language (i.e.\ $\Omega$ and $\Pi$ are countable). Then if $S$ has a model, then it has a countable model.
+\end{thm}
+
+\begin{proof}
+ The model constructed in the proof of model existence theorem is countable.
+\end{proof}
+Note that the proof of the model existence theorem is non-examinable, but the proof of this is examinable! So we are supposed to magically know that the model constructed in the proof is countable without knowing what the proof itself is!
+
+\subsection{Peano Arithmetic}
+As an example, we will make the usual axioms for $\N$ into a first-order theory. We take $\Omega = \{0, s, +, \times\}$, and $\Pi = \emptyset$. The arities are $\alpha(0) = 0, \alpha(s) = 1, \alpha(+) = \alpha(\times) = 2$.
+
+The operation $s$ is the \emph{successor}\index{successor} operation, which should be thought of as ``+1''.
+
+Our axioms are
+\begin{defi}[Peano's axioms]\index{Peano's axioms}\index{PA}
+ The axioms of \emph{Peano's arithmetic} (PA) are
+ \begin{enumerate}
+ \item $(\forall x)\neg(s(x) = 0)$.
+ \item $(\forall x)(\forall y)((s(x) = s(y)) \Rightarrow (x = y))$.
+ \item ${\color{gray}(\forall y_1)\cdots (\forall y_n)}[p[0/x] \wedge (\forall x)(p \Rightarrow p[s(x)/x])]\Rightarrow (\forall x)p$. This is actually infinitely many axioms --- one for each formula $p$, free variables $y_1, \cdots, y_n, x$, i.e.\ it is an axiom scheme.
+ \item $(\forall x)(x + 0 = x)$.
+ \item $(\forall x)(\forall y)(x + s(y) = s(x + y))$.
+ \item $(\forall x)(x \times 0 = 0)$.
+ \item $(\forall x)(\forall y)(x\times s(y) = (x \times y) + x)$.
+ \end{enumerate}
+ Note that our third axiom looks rather funny with all the $(\forall y_i)$ in front. Our first guess at writing it would be
+ \[
+ [p[0/x] \wedge (\forall x)(p\Rightarrow p[s(x)/x])] \Rightarrow (\forall x)p.
+ \]
+ However, this is in fact not sufficient. Suppose we want to prove that for all $x$ and $y$, $x + y = y + x$. The natural thing to do would be to fix a $y$ and induct on $x$ (or the other way round). We want to be able to fix \emph{any} $y$ to do so. So we need a $(\forall y)$ in front of our induction axiom, so that we can prove it for all values of $y$ all at once, instead of proving it once for $y = 0$, once for $y = 1$, once for $y = 1 + 1$ etc. This is important, since we might have an uncountable model of PA, and we cannot name all $y$. When we actually think about it, we can just forget about the $(\forall y_i)$s. But just remember that formally we need them.
+\end{defi}
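+
+As an example of the induction scheme at work, we can show PA $\vdash (\forall x)(0 + x = x)$. Take $p$ to be the formula $0 + x = x$ (no parameters $y_i$ are needed). Axiom 4 gives $0 + 0 = 0$, which is $p[0/x]$. For the induction step, if $0 + x = x$, then Axiom 5 gives
+\[
+ 0 + s(x) = s(0 + x) = s(x),
+\]
+which is $p[s(x)/x]$. So the induction scheme yields $(\forall x)(0 + x = x)$.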
+We know that PA has a model $\N$ that is infinite. So it has an uncountable model by Upward L\"owenheim-Skolem. Then clearly this model is not isomorphic to $\N$. However, we are also told that the axioms of arithmetic characterize $\N$ completely. Why is this so?
+
+This is since Axiom 3 is not \emph{full} induction, but a ``first-order'' version. The proper induction axiom talks about \emph{subsets} of $\N$, i.e.
+\[
+ (\forall S\subseteq \N )\big(\big(0 \in S \wedge (\forall x)(x\in S \Rightarrow s(x)\in S)\big)\Rightarrow S=\N\big).
+\]
+However, there are uncountably many subsets of $\N$, and countably many formulae $p$. So our Axiom 3 only talks about \emph{some} subsets of $\N$, not all.
+
+Now the important question is: is PA complete?
+
+G\"odel's \emph{in}completeness theorem says: no! There exists some $p$ with PA $\not\vdash p$ and PA $\not\vdash \neg p$.
+
+Hence, there is a $p$ that is true in $\N$, but PA $\not\vdash p$.
+
+Note that this does not contradict G\"odel's completeness theorem. The completeness theorem tells us that if $p$ holds in \emph{every} model of PA (not just in $\N$), then PA $\vdash p$.
+
+\subsection{Completeness and categoricity*}
+\begin{own}
+ We now study some ``completeness'' property of theories. For convenience, we will assume that the language is countable.
+
+ \begin{defi}[Complete theory]\index{complete theory}\index{theory!complete}
+ A theory $T$ is \emph{complete} if for all propositions $p$ in the language, either $T \vdash p$ or $T \vdash \neg p$.
+ \end{defi}
+ Complete theories are rather hard to find. So for the moment, we will content ourselves by just looking at theories that are not complete.
+ \begin{eg}
+ The theory of groups is not complete, since the proposition
+ \[
+ (\forall g)(\forall h)(gh = hg)
+ \]
+ can neither be proven or disproven (there are abelian groups and non-abelian groups).
+ \end{eg}
+ Another ``completeness'' notion we might be interested in is \emph{categoricity}. We will need to use the notion of a cardinal, which will be introduced later in the course. Roughly, a cardinal is a mathematical object that denotes the sizes of sets, and a set ``has cardinality $\kappa$'' if it bijects with $\kappa$.
+ \begin{defi}[$\kappa$-categorical]\index{$\kappa$-categorical}
+ Let $\kappa$ be an infinite cardinal. Then a theory $T$ is \emph{$\kappa$-categorical} if there is exactly one model of $T$ of cardinality $\kappa$, up to isomorphism.
+ \end{defi}
+
+ Here the notion of ``isomorphism'' is the obvious one --- a homomorphism of models is a function between the structures that preserves everything, and an isomorphism is a homomorphism with an inverse (that is also a homomorphism).
+
+ \begin{eg}[Pure identity theory]\index{pure identity theory}
+ The \emph{pure identity theory} has an empty language and no axioms. This theory is $\kappa$-categorical for any $\kappa$, since if two models have the same cardinality, then by definition there is a bijection between them, and any such bijection is an isomorphism because there are no properties to be preserved.
+ \end{eg}
+
+ One nice thing about categorical theories is that they are complete!
+ \begin{prop}
+ Let $T$ be a theory that is $\kappa$-categorical for some $\kappa$, and suppose $T$ has no finite models. Then $T$ is complete.
+ \end{prop}
+ Note that the requirement that $T$ has no finite models is necessary. For example, pure identity theory is $\kappa$-categorical for all $\kappa$ but is not complete, since the statement $(\exists x)(\exists y)(\neg (x = y))$ is neither provable nor disprovable.
+ \begin{proof}
+ Let $p$ be a proposition. Suppose $T \not\vdash p$ and $T \not\vdash \neg p$. Then there are infinite models of $T \cup \{p\}$ and $T \cup \{\neg p\}$ (since the models cannot be finite), and so by the L\"owenheim--Skolem theorems, we can find such models of cardinality $\kappa$. But since one satisfies $p$ and the other does not, they cannot be isomorphic. This contradicts $\kappa$-categoricity.
+ \end{proof}
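+
+ For example, let $T$ be the pure identity theory together with, for each $n$, a sentence
+ \[
+ (\exists x_1)\cdots(\exists x_n)\bigwedge_{i < j}\neg(x_i = x_j)
+ \]
+ asserting that there are at least $n$ elements. Then $T$ has no finite models, and $T$ is $\kappa$-categorical for every infinite $\kappa$, since any bijection between two models is an isomorphism. So by the proposition, $T$ is complete.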
+
+ We are now going to use this idea to prove the \term{Ax-Grothendieck theorem}.
+ \begin{thm}[Ax-Grothendieck theorem]
+ Let $f: \C^n \to \C^n$ be a complex polynomial. If $f$ is injective, then it is in fact a bijection.
+ \end{thm}
+
+ We will use the following result from field theory without proof:
+ \begin{lemma}
+ Any two algebraically closed fields of the same characteristic and the same uncountable cardinality are isomorphic. In other words, the theory of algebraically closed fields of characteristic $p$ (for $p$ a prime or $0$) is $\kappa$-categorical for all uncountable cardinals $\kappa$, and in particular complete.
+ \end{lemma}
+ The rough idea (for the field theorists) is that an algebraically closed field is uniquely determined by its transcendence degree over the base field $\Q$ or $\F_p$.
+
+ In the following proof, we will also use some (very) elementary results about field theory that can be found in IID Galois Theory.
+
+ \begin{proof}[Proof of Ax-Grothendieck]
+ We will use compactness and completeness to show that we only have to prove this for fields of positive characteristic, and the result can be easily proven since we end up dealing with finite fields.
+
+ Let $\mathrm{ACF}$ be the theory of algebraically closed fields. The language is the language of rings, and the axioms are the usual axioms of a field, plus the following axiom for each $n > 0$:
+ \[
+ (\forall a_0, a_1, \cdots, a_{n - 1})(\exists x)(x^n + a_{n - 1}x^{n - 1} + \cdots + a_1 x + a_0 = 0).
+ \]
+ Let $\mathrm{ACF}_0$ denote the theory of algebraically closed fields of characteristic $0$, where we add the axiom
+ \[
+ \underbrace{1 + 1 + \cdots + 1}_{n\text{ times}} \not= 0\tag{$*$}
+ \]
+ for all $n$ to $\mathrm{ACF}$.
+
+ Let $\mathrm{ACF}_p$ denote the theory of algebraically closed fields of characteristic $p$, where we add the axiom
+ \[
+ \underbrace{1 + 1 + \cdots + 1}_{p\text{ times}} = 0
+ \]
+ to $\mathrm{ACF}$.
+
+ We now notice the following fact: if $r$ is a proposition that is a theorem of $\mathrm{ACF}_p$ for all primes $p$, then it is a theorem of $\mathrm{ACF}_0$. Indeed, we know that $\mathrm{ACF}_0$ is complete. So if $r$ is not a theorem of $\mathrm{ACF}_0$, then $\neg r$ is a theorem. But the proof is finite, so it can only use finitely many instances of $(*)$. So for some large $p$, the proof of $\neg r$ is also valid in $\mathrm{ACF}_p$, contradicting the consistency of $\mathrm{ACF}_p$.
+
+ Now the statement ``If $f$ is a polynomial of degree $d$ and $f$ is injective, then $f$ is surjective'' can be expressed as a first-order statement. So we just have to prove it for all fields of characteristic $p > 0$. Moreover, by completeness, for each $p$, we only need to prove it for \emph{some} algebraically closed field of characteristic $p$.
+
+ Fix a prime $p$, and consider $F = \bar{\F}_p$, the algebraic closure of $\F_p$. This is an algebraically closed field with the property that every element is algebraic over $\F_p$, i.e.\ the field generated by any finite subset of elements is finite.
+
+ Let $f: F^n \to F^n$ be a polynomial function involving coefficients $a_1, \cdots, a_K$. Let $b = (b_1, \cdots, b_n) \in F^n$ be a point. Then $f$ restricts to a function $\tilde{F}^n \to \tilde{F}^n$, where $\tilde{F}$ is the field generated by $\{b_1, \cdots, b_n, a_1, \cdots, a_K\}$. But $\tilde{F}$ is finite, so $\tilde{F}^n$ is finite, and the restriction $f|_{\tilde{F}^n}: \tilde{F}^n \to \tilde{F}^n$, being injective, must also be surjective. So $b$ is in the image of $f$. So $f$ is surjective. So done.
+ \end{proof}
+
+ We conclude the section by stating, without proof, a theorem by Morley:
+ \begin{thm}[Morley's categoricity theorem]\index{Morley's categoricity theorem}
+ Let $T$ be a theory with a countable language. If $T$ is $\kappa$-categorical for \emph{some} uncountable cardinal $\kappa$, then it is $\mu$-categorical for \emph{all} uncountable cardinals $\mu$.
+ \end{thm}
+ Hence we often just say a theory is uncountably categorical when the theory is categorical for some (hence all) uncountable cardinals.
+\end{own}
+\section{Set theory}
+Here we'll axiomatize set theory as ``just another first-order theory'', with signatures, structures etc. There are many possible formulations, but the most common one is \emph{Zermelo-Fraenkel set theory} (with the axiom of choice), which is what we will study.
+
+\subsection{Axioms of set theory}
+\begin{defi}[Zermelo-Fraenkel set theory]\index{Zermelo-Fraenkel set theory}\index{ZF}
+ \emph{Zermelo-Fraenkel set theory} (ZF) has language $\Omega = \emptyset$, $\Pi = \{\in\}$, with arity 2.
+\end{defi}
+Then a ``universe of sets'' will simply mean a model of these axioms, a pair $(V, \in)$, where $V$ is a set and $\in$ is a binary relation on $V$ in which the axioms are true (officially, we should write $\in_V$, but it's so \emph{weird} that we don't do it). Each model $(V, \in)$ will (hopefully) contain a copy of all of maths, and so will look very complicated!
+
+ZF will have 2 axioms to get started with, 4 to build things, and 3 more weird ones one might not realize are needed.
+
+\begin{axiom}[Axiom of extension]\index{axiom!of extension}
+ ``If two sets have the same elements, they are the same set''.
+ \[
+ (\forall x)(\forall y)((\forall z)(z\in x\Leftrightarrow z\in y) \Rightarrow x = y).
+ \]
+\end{axiom}
+We could replace the $\Rightarrow $ with an $\Leftrightarrow$, but the converse statement $x = y \Rightarrow (z\in x \Leftrightarrow z\in y)$ is an instance of a logical axiom, and we don't have to explicitly state it.
+
+\begin{axiom}[Axiom of separation]\index{axiom!of separation}
+ ``Can form subsets of sets''. More precisely, for any set $x$ and a formula $p$, we can form $\{z\in x: p(z)\}$.
+ \[
+ {\color{gray}{(\forall t_1)\cdots(\forall t_n)}}(\forall x)(\exists y)(\forall z)(z\in y \Leftrightarrow (z\in x\wedge p)).
+ \]
+ This is an axiom scheme, with one instance for each formula $p$ with free variables $t_1, \cdots, t_n, z$.
+
+ Note again that we have those funny $(\forall t_i)$. We do need them to form, e.g.\ $\{z\in x: t\in z\}$, where $t$ is a parameter.
+
+ This is sometimes known as Axiom of comprehension.
+\end{axiom}
+\begin{axiom}[Axiom of empty set]\index{axiom!of empty set}
+ ``The empty set exists''.
+ \[
+ (\exists x)(\forall y)(y\not\in x).
+ \]
+ We write $\emptyset$ for the (unique, by extension) set with no members. This is an abbreviation: $p(\emptyset)$ means $(\exists x)(x\text{ has no members}\wedge p(x))$. Similarly, we tend to write $\{z\in x: p(z)\}$ for the set given by separation.
+\end{axiom}
+\begin{axiom}[Axiom of pair set]\index{axiom!of pair set}
+ ``Can form $\{x, y\}$''.
+ \[
+ (\forall x)(\forall y)(\exists z)(\forall t)(t\in z \Leftrightarrow (t= x\vee t = y)).
+ \]
+ We write $\{x, y\}$ for this set. We write $\{x\}$ for $\{x, x\}$.
+\end{axiom}
+
+We can now define ordered pairs:
+\begin{defi}[Ordered pair]\index{ordered pair}
+ An \emph{ordered pair} $(x, y)$ is $\{\{x\}, \{x, y\}\}$.
+
+ We define ``$x$ is an ordered pair'' to mean $(\exists y)(\exists z)(x = (y, z))$.
+\end{defi}
+We can show that $(a, b) = (c, d) \Leftrightarrow (a = c) \wedge (b = d)$.
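+Indeed, suppose
+\[
+ \{\{a\}, \{a, b\}\} = \{\{c\}, \{c, d\}\}.
+\]
+If $a = b$, the left-hand side is $\{\{a\}\}$, so $\{c\} = \{c, d\} = \{a\}$, forcing $a = b = c = d$. Otherwise, $\{a, b\}$ has two elements, so the right-hand side must also contain a two-element set, i.e.\ $c \not= d$; then $\{a\} = \{c\}$ and $\{a, b\} = \{c, d\}$, forcing $a = c$ and $b = d$. The converse is immediate.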
+
+\begin{defi}[Function]\index{function}
+ We define ``$f$ is a \emph{function}'' to mean
+ \begin{align*}
+ & (\forall x)(x\in f\Rightarrow x\text{ is an ordered pair})\wedge\\
+ & (\forall x)(\forall y)(\forall z)\big([(x, y)\in f \wedge (x, z)\in f]\Rightarrow y = z\big).
+ \end{align*}
+ We define $x = \dom f$ to mean $f$ is a function and $(\forall y)(y\in x \Leftrightarrow (\exists z)((y, z)\in f))$.
+
+ We define $f: x\to y$ to mean $f$ is a function and $\dom f = x$ and
+ \[
+ (\forall z)[(\exists t)((t, z)\in f)\Rightarrow z\in y].
+ \]
+\end{defi}
+Once we've defined them formally, let's totally forget about the definition and move on with life.
+
+\begin{axiom}[Axiom of union]\index{axiom!of union}
+ ``We can form unions''. Intuitively, we have $a\cup b\cup c = \{x: x\in a \text{ or }x\in b\text{ or }x\in c\}$, but instead of $a\cup b\cup c$, we write $\bigcup\{a, b, c\}$ so that we can express infinite unions as well.
+ \[
+ (\forall x)(\exists y)(\forall z)(z\in y \Leftrightarrow (\exists t)(t\in x \wedge z\in t)).
+ \]
+ We tend to write $\bigcup x$ for the set given above. We also write $x\cup y$ for $\bigcup\{x, y\}$.
+\end{axiom}
+
+Note that we can define intersection $\bigcap x$ as a subset of $y$ (for any $y\in x$) by separation, so we don't need an axiom for that.
+\begin{axiom}[Axiom of power set]\index{axiom!of power set}
+ ``Can form power sets''.
+ \[
+ (\forall x)(\exists y)(\forall z)(z\in y \Leftrightarrow z\subseteq x),
+ \]
+ where $z\subseteq x$ means $(\forall t)(t\in z \Rightarrow t\in x)$.
+
+ We tend to write $\P(x)$ for the set given above.
+\end{axiom}
+We can now form $x\times y$, as a subset of $\P(\P(x\cup y))$, because for $t\in x, s\in y$, we have $(t, s) = \{\{t\},\{t, s\}\}\in \P(\P(x\cup y))$.
+
+Similarly, we can form the set of all functions from $x$ to $y$ as a subset of $\P(x\times y)$ (or $\P(\P(\P(x\cup y)))$!).
+
+Now we've got the easy axioms. Time for the weird ones.
+
+\subsubsection*{Axiom of infinity}\index{axiom!of infinity}
+From the axioms so far, we cannot prove that there is an infinite set! We start from the empty set, and all the operations above can only produce finite sets. So we need an axiom to say that the natural numbers exist.
+
+But how could we do so in finitely many words? So far, we do have infinitely many sets. For example, if we write $x^+$ for $x\cup \{x\}$, it is easy to check that $\emptyset, \emptyset^+, \emptyset^{++}, \cdots$ are all distinct.
+
+(Writing them out explicitly, we have $\emptyset^+ = \{\emptyset\}$, $\emptyset^{++} = \{\emptyset, \{\emptyset\}\}$, $\emptyset^{+++} = \{\emptyset, \{\emptyset\}, \{\emptyset, \{\emptyset\}\}\}$. We can also write $0$ for $\emptyset$, $1$ for $\emptyset^{+}$, $2$ for $\emptyset^{++}$. So $0 = \emptyset$, $1 = \{0\}$, $2 = \{0, 1\}, \cdots$)
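The successor operation can be tried out concretely. A sketch (not part of the notes), again modelling hereditarily finite sets as frozensets; `succ` is an invented name for $x\mapsto x^+$.

```python
# x^+ = x ∪ {x}
def succ(x):
    return x | frozenset({x})

zero = frozenset()
nats = [zero]
for _ in range(4):
    nats.append(succ(nats[-1]))

# ∅, ∅⁺, ∅⁺⁺, ... are all distinct,
assert len(set(nats)) == 5
# and n = {0, 1, ..., n-1}:
for n in range(5):
    assert nats[n] == frozenset(nats[:n])
```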
+
+Note that all of these are individually sets, and so $V$ is definitely infinite. However, inside $V$, $V$ is not a set, or else we can apply separation to obtain Russell's paradox.
+
+So the collection of all $0, 1, \cdots, $ need not be a set. Therefore we want an axiom that declares this is a set. So we want an axiom stating $\exists x$ such that $\emptyset\in x, \emptyset^+\in x, \emptyset^{++}\in x, \cdots$. But this is an infinite statement, and we need a smarter way of formulating this.
+
+\begin{axiom}[Axiom of infinity]
+ ``There is an infinite set''.
+ \[
+ (\exists x)(\emptyset\in x \wedge (\forall y)(y\in x \Rightarrow y^+ \in x)).
+ \]
+ We say any set that satisfies the above axiom is a successor set.
+\end{axiom}
+A successor set can contain a lot of nonsense elements. How do we want to obtain $\N$, without nonsense?
+
+We know that any intersection of successor sets is also a successor set. So there is a \emph{least} successor set, i.e.\ the intersection of all successor sets. Call this $\omega$. This will be our copy of $\N$ in $V$. So
+\[
+ (\forall x)(x\in \omega \Leftrightarrow (\forall y)(y\text{ is a successor set} \Rightarrow x \in y)).
+\]
+Therefore we have
+\[
+ (\forall x)[(x \subseteq \omega\wedge x\text{ is a successor set}) \Rightarrow x = \omega],
+\]
+by definition of $\omega$. By the definition of ``is a successor set'', we can write this as
+\[
+ (\forall x)[(x\subseteq \omega \wedge \emptyset\in x \wedge (\forall y)(y\in x \Rightarrow y^+ \in x))\Rightarrow x = \omega].
+\]
+This is genuine induction (in $V$)! It is not just our weak first-order axiom in Peano's axioms.
+
+Also, we have
+\[
+ (\forall x)(x\in \omega \Rightarrow x^+ \not = \emptyset)\text{ and }(\forall x)(\forall y)((x\in \omega \wedge y\in \omega \wedge x^+ = y^+) \Rightarrow x = y)
+\]
+by $\omega$-induction (i.e.\ induction on $\omega$). Hence $\omega$ satisfies the axioms of natural numbers.
+
+We can now write, e.g.\ ``$x$ is finite'' for $(\exists y)(\exists f)(y\in \omega \wedge f\text{ bijects }x\text{ with }y)$.
+
+Similarly, we define ``$x$ is countable'' to mean ``$x$ is finite or $x$ bijects with $\omega$''.
+
+That's about it for this axiom. Time for the next one:
+\subsubsection*{Axiom of foundation}\index{axiom!of foundation}
+``Sets are built out of simpler sets''. We want to ban weird things like $x\in x$, or $x\in y\wedge y\in x$, or similarly for longer chains. We also don't want infinite descending chains like $\cdots x_3 \in x_2 \in x_1 \in x_0$.
+
+How can we capture the ``wrongness'' of these weird things all together? In the first case $x\in x$, we see that $\{x\}$ has no $\in$-minimal element (we say $y$ is $\in$-minimal in $x$ if $(\forall z\in x)(z\not\in y)$). In the second case, $\{x, y\}$ has no minimal element. In the last case, $\{x_0, x_1, x_2, \cdots\}$ has no $\in$-minimal element.
+
+
+\begin{axiom}[Axiom of foundation]
+ ``Every (non-empty) set has an $\in$-minimal member''.
+ \[
+ (\forall x)(x\not= \emptyset \Rightarrow (\exists y)(y\in x\wedge (\forall z)(z\in x \Rightarrow z\not\in y))).
+ \]
+ This is sometimes known as the Axiom of regularity.
+\end{axiom}
+Note that most of the time, we don't actually use the Axiom of Foundation. It's here just so that our universe ``looks nice''. Most results in set theory don't rely on foundation.
+
+We will later show that this entails that all sets in $V$ can be ``built out of'' the empty set, but that's left for a later time.
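On finite examples, finding an $\in$-minimal member is a simple search. A sketch under the usual frozenset modelling; `eps_minimal` is an invented helper name.

```python
# Return some y ∈ x such that no z ∈ x satisfies z ∈ y.
def eps_minimal(x):
    return next(y for y in x if not any(z in y for z in x))

zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})

# In {∅, {∅}}, only ∅ is minimal: {∅} contains the member ∅.
assert eps_minimal(two) == zero
# In a singleton {x}, x itself is minimal (x ∉ x by Foundation).
assert eps_minimal(frozenset({two})) == two
```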
+
+\subsubsection*{Axiom of replacement}
+In ordinary mathematics, we often say things like ``for each $i\in I$, we have some $A_i$. Now take $\{A_i: i\in I\}$''. For example, (after defining ordinals), we want to have the set $\{\omega + i: i\in \omega\}$.
+
+How do we know that $\{A_i: i\in I\}$ is a set? How do we know that $i\mapsto A_i$ is a function, i.e.\ that $\{(i, A_i): i\in I\}$ is a set? It feels like it should be. We want an axiom that says something along the line of ``the image of a set under a function is a set''. However, we do not know that the thing is a function yet. So we will have ``the image of a set under something that looks like a function is a set''.
+
+To formally talk about ``something that looks like a function'', we need a digression on classes:
+
+\subsubsection*{Digression on classes}
+$x\mapsto \{x\}$ for all $x$ looks like a function, but isn't one, since every function $f$ has a (set) domain, defined as a suitable subset of $\bigcup \bigcup f$, but our function here has domain $V$.
+
+So what is this $x\mapsto \{x\}$? We call it a class.
+\begin{defi}[Class]\index{class}
+ Let $(V, \in)$ be an $L$-structure. A \emph{class} is a collection $C$ of points of $V$ such that, for some formula $p$ with free variable $x$ (and maybe more funny parameters), we have
+ \[
+ x\in C\Leftrightarrow p\text{ holds}.
+ \]
+ Intuitively, everything of the form $\{x\in V: p(x)\}$ is a class.
+\end{defi}
+Note that here we are abusing notation. When we say $x\in C$, the symbol $\in$ does not mean the membership relation in $V$. Inside the theory, we should view $x\in C$ as a shorthand of ``$p(x)$ holds''.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $V$ is a class, by letting $p$ be ``$x = x$''.
+ \item For any $t$, $\{x\in V: t\in x\}$ is a class, with $p$ being $t\in x$. Here $t$ is a parameter.
+ \item For any set $y\in V$, $y$ is a class --- let $p$ be ``$x\in y$''.
+ \end{enumerate}
+\end{eg}
+
+\begin{defi}[Proper class]\index{proper class}
+ We say $C$ is a \emph{proper class} if $C$ is not a set (in $V$), i.e.\
+ \[
+ \neg (\exists y)(\forall x)(x\in y\Leftrightarrow p).
+ \]
+\end{defi}
+
+Similarly, we can have
+\begin{defi}[Function-class]\index{function class}
+ A \emph{function-class} $F$ is a collection of ordered pairs such that there is a formula $p$ with free variables $x, y$ (and maybe more) such that
+ \[
+ (x, y) \in F \Leftrightarrow p\text{ holds, and } (x, y)\in F\wedge (x, z)\in F \Rightarrow y = z.
+ \]
+\end{defi}
+
+\begin{eg}
+ $x\mapsto \{x\}$ is a function-class: take $p$ to be $y = \{x\}$.
+\end{eg}
+
+\subsubsection*{Back to replacement}\index{axiom!of replacement}
+How do we talk about function-classes? We cannot refer to classes inside $V$. Instead, we must refer to the first-order formula $p$.
+
+\begin{axiom}[Axiom of replacement]
+ ``The image of a set under a function-class is a set''. This is an axiom scheme, with an instance for each first-order formula $p$:
+ \begin{align*}
+ \underbrace{{\color{gray}{(\forall t_1)\cdots(\forall t_n)}}}_{\text{parameters }}\big(&\underbrace{[(\forall x)(\forall y)(\forall z)((p\wedge p[z/y])\Rightarrow y=z)]}_{p\text{ defines a function-class}}\\
+ & \Rightarrow \underbrace{[(\forall x)(\exists y)(\forall z)(z\in y \Leftrightarrow (\exists t)(t\in x\wedge p[t/x, z/y]))]}_{\text{image of }x\text{ under }F\text{ is a set}}\big).
+ \end{align*}
+\end{axiom}
+
+\begin{eg}
+ For any set $x$, we can form $\{\{t\}: t\in x\}$ by $p$ being ``$y = \{x\}$''. However, this is a \emph{bad} use of replacement, because we can already obtain that by power set (and separation).
+
+ We will give a good example later.
+\end{eg}
+
+So that's all the axioms of ZF. Note that we did not include the Axiom of choice!
+
+\subsubsection*{Axiom of choice}\index{axiom!of choice}
+\begin{defi}[ZFC]\index{ZFC}
+ ZFC is the axioms ZF + AC, where AC is the axiom of choice, ``every family of non-empty sets has a choice function''.
+ \begin{align*}
+ (\forall f)[&(\forall x)(x\in \dom f \Rightarrow f(x)\not= \emptyset) \Rightarrow \\
+ &(\exists g)((\dom g = \dom f) \wedge (\forall x)(x\in \dom g \Rightarrow g(x) \in f(x)))]
+ \end{align*}
+ Here we define a family of sets $\{A_i: i \in I\}$ to be a function $f$ with $\dom f = I$ and $f(i) = A_i$.
+\end{defi}
+
+\subsection{Properties of ZF}
+Now, what does $V$ look like? We start with the following definition:
+\begin{defi}[Transitive set]\index{transitive}
+ A set $x$ is \emph{transitive} if every member of a member of $x$ is a member of $x$:
+ \[
+ (\forall y)((\exists z)(y\in z\wedge z\in x)\Rightarrow y \in x).
+ \]
+ This can be more concisely written as $\bigcup x \subseteq x$, but is very confusing and impossible to understand!
+\end{defi}
+This notion seems absurd --- and it is. It is only of importance in the context of set theory and understanding what $V$ looks like. It is an utterly useless notion in, say, group theory.
+
+\begin{eg}
+ $\{\emptyset, \{\emptyset\}\}$ is transitive. In general, for any $n\in \omega$, $n$ is transitive. $\omega$ is also transitive.
+\end{eg}
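The definition is easy to check mechanically on small examples. A sketch (not part of the notes), modelling sets as frozensets; `is_transitive` is an invented helper name.

```python
# Transitive: every member of a member of x is a member of x, i.e. ⋃x ⊆ x.
def is_transitive(x):
    return all(y in x for z in x for y in z)

zero = frozenset()
one = frozenset({zero})

# {∅, {∅}} (i.e. the ordinal 2) is transitive.
assert is_transitive(frozenset({zero, one}))
# {{∅}} is not: its member {∅} has the member ∅, which is missing.
assert not is_transitive(frozenset({one}))
```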
+
+\begin{lemma}
+ Every $x$ is contained in a transitive set.
+\end{lemma}
+Note that this is a theorem of ZF, i.e.\ it officially means: let $(V, \in)$ be a model of ZF. Then in $V$, $<$stuff above$>$. Equivalently, ZF $\vdash$ $<$stuff above$>$.
+
+We know that any intersection of transitive sets is transitive. So this lemma will tell us that $x$ is contained in a \emph{least} transitive set, called the \emph{transitive closure} of $x$, or $TC(x)$.
+
+\begin{proof}
+ We'd like to form ``$x\cup (\bigcup x)\cup (\bigcup\bigcup x)\cup (\bigcup\bigcup\bigcup x)\cup \cdots$''. If this makes sense, then we are done, since the final product is clearly transitive. This will be a set by the union axiom applied to $\{x, \bigcup x, \bigcup\bigcup x, \cdots\}$, which itself is a set by replacement applied to $\omega$, for the function-class $0\mapsto x$, $1\mapsto \bigcup x$, $2\mapsto \bigcup\bigcup x$ etc.
+
+ Of course we have to show that the above is a function class, i.e.\ can be expressed as a first order relation. We might want to write the sentence as:
+ \[
+ p(s, t)\text{ is }(s = 0\wedge t = x)\vee (\exists u)(\exists v)(s = u + 1\wedge t = \textstyle{\bigcup} v \wedge p(u, v)),
+ \]
+ but this is complete nonsense! We are defining $p$ in terms of itself!
+
+ The solution would be to use attempts, as we did previously for recursion. We define ``$f$ is an attempt'' to mean ``$f$ is a function and $\dom f\in \omega$ and $\dom f\not=\emptyset$ and $f(0) = x$ and $(\forall n)((n\in \omega \wedge n\in \dom f \wedge n \not= 0)\Rightarrow f(n) = \bigcup f(n - 1))$'', i.e.\ $f$ is defined for some natural numbers and meets our requirements where it is defined.
+
+ Then it is easy to show that two attempts $f$ and $f'$ agree whenever both are defined. Also, $\forall n\in \omega$, there is an attempt $f$ defined for $n$ (both by $\omega$-induction).
+
+ Note that the definition of an attempt is a first-order formula. So our function class is
+ \[
+ p(s, t)\text{ is } (\exists f)(f\text{ is an attempt}\wedge s\in \dom f \wedge f(s) = t).\qedhere
+ \]
+\end{proof}
+This is a good use of replacement, unlike our example above.
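The iterated-union construction $x\cup (\bigcup x)\cup (\bigcup\bigcup x)\cup \cdots$ can be run directly on hereditarily finite sets, where it stabilises after finitely many steps. A sketch under the frozenset modelling; `union` and `tc` are invented helper names.

```python
# ⋃x = {z : z ∈ t for some t ∈ x}
def union(x):
    return frozenset(z for t in x for z in t)

# TC as x ∪ ⋃x ∪ ⋃⋃x ∪ ..., accumulating until nothing new appears.
def tc(x):
    result = frozenset(x)
    while True:
        layer = union(result)
        if layer <= result:       # <= is subset for frozensets
            return result
        result |= layer

zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})

# Ordinals are transitive, so TC(n) = n.
assert tc(two) == two
# TC({2}) = {0, 1, 2}; note x ∈ TC({x}), as used in the proof above.
assert tc(frozenset({two})) == frozenset({zero, one, two})
```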
+
+We previously said that Foundation captures the notion ``every set is built out of simpler sets''. What does that exactly mean? If this is true, we should be able to do induction on it: if $p(y)$ for all $y\in x$, then $p(x)$.
+
+If this sounds weird, think about the natural numbers: everything is built from $0$ using $+1$. So we can do induction on $\omega$.
+
+\begin{thm}[Principle of $\in$-induction]\index{$\in$-induction}
+ For each formula $p$, with free variables $t_1, \cdots, t_n, x$,
+ \[
+ {\color{gray}{(\forall t_1)\cdots(\forall t_n)}}\big([(\forall x)((\forall y)(y\in x \Rightarrow p(y)) \Rightarrow p(x))] \Rightarrow (\forall x)(p(x))\big)
+ \]
+ Note that officially, $p(y)$ means $p[y/x]$ and $p(x)$ is simply $p$.
+\end{thm}
+
+\begin{proof}
+ Given $t_1, \cdots, t_n$, suppose $\neg (\forall x)p(x)$. So we have some $x$ with $\neg p(x)$. Similar to how we proved regular induction on naturals from the well-ordering principle (in IA Numbers and Sets), we want to find a minimal $x$ such that $p(x)$ does not hold.
+
+ While foundation allows us to take the minimal element of a \emph{set}, $\{y: \neg p(y)\}$ need not be a set --- e.g.\ if $p(y)$ is $y \not= y$.
+
+ Instead, we pick a single $x$ such that $\neg p(x)$. Let $u = TC(\{x\})$. Then $\{y\in u: \neg p(y)\} \not= \emptyset$, since $x\in u$. So it has an $\in$-minimal element, say $y$, by Foundation. Then each $z\in y$ has $z\in u$ since $u$ is transitive. Hence $p(z)$ by minimality of $y$. But this implies $p(y)$. Contradiction.
+\end{proof}
+Note that here we used the transitive closure to reduce from reasoning about the whole scary $V$, to an actual set $TC(x)$. This technique is rather useful.
+
+We have now used Foundation to prove $\in$-induction. It turns out that the two are in fact equivalent.
+
+\begin{prop}
+ $\in$-induction $\Rightarrow $ Foundation.
+\end{prop}
+
+\begin{proof}
+ To deduce Foundation from $\in$-induction, the obvious choice of $p(x)$, namely ``$x$ has an $\in$-minimal member'', doesn't work.
+
+ Instead, consider $p(x)$ given by
+ \[
+ (\forall y)(x\in y \Rightarrow y\text{ has an }\in\text{-minimal member}).
+ \]
+ If $p(x)$ is true, we say $x$ is \emph{regular}. To show that $(\forall x)p(x)$, it is enough to show that: if every $y\in x$ is regular, then $x$ is regular.
+
+ Given any $z$ with $x\in z$, we want to show that $z$ has an $\in$-minimal member.
+
+ If $x$ is itself minimal in $z$, then done. Otherwise, $y\in z$ for some $y\in x$. But since $y\in x$, $y$ is regular. So $z$ has a minimal element.
+
+ Hence every $x$ is regular. Since every non-empty set contains at least one element (by definition), every non-empty set has an $\in$-minimal member.
+\end{proof}
+This looked rather simple to prove. However, it is because we had the clever idea of regularity. If we didn't we would be stuck for a long time!
+
+Now what about recursion? Can we define $f(x)$ using $f(y)$ for all $y\in x$?
+
+\begin{thm}[$\in$-recursion theorem]\index{$\in$-recursion}
+ Let $G$ be a function-class, everywhere defined. Then there is a function-class $F$ such that $F(x) = G(F|_x)$ for all $x$. Moreover, $F$ is unique (cf.\ definition of recursion on well-orderings).
+\end{thm}
+Note that $F|_x = \{(y, F(y)): y\in x\}$ is a set, by replacement.
+
+\begin{proof}
+ We first show existence. Again, we prove this with attempts. Define ``$f$ is an attempt'' to mean ``$f$ is a function and $\dom f$ is transitive and $(\forall x)(x\in \dom f \Rightarrow f(x) = G(f|_x))$''.
+
+ Then by simple $\in$-induction, we have
+ \begin{align*}
+ (\forall x)(\forall f)(\forall f')[(&f\text{ an attempt defined at }x\wedge\\
+ & f'\text{ an attempt defined at $x$}) \Rightarrow f(x) = f'(x)].
+ \end{align*}
+ Also, $(\forall x)(\exists f)(f\text{ an attempt defined at }x)$, again by $\in$-induction: suppose for each $y\in x$, there exists an attempt defined at $y$. So there exists a unique attempt $f_y$ with domain $TC(\{y\})$. Set $f = \bigcup_{y\in x}f_y$, and let $f' = f\cup \{(x, G(f|_x))\}$. Then this is an attempt defined at $x$.
+
+ So we take $q(x, y)$ to be
+ \[
+ (\exists f)(f\text{ is an attempt defined at }x\text{ with }f(x) = y).
+ \]
+ Uniqueness follows from $\in$-induction.
+\end{proof}
+Note that this is exactly the same as the proof of recursion on well-orderings. It is just that we have some extra mumbling about transitive closures.
+
+So we proved recursion and induction for $\in$. What property of the relation-class $\in$ (with $p(x, y)$ defined as $x\in y$) did we use? We most importantly used the Axiom of foundation, which says that
+\begin{enumerate}
+ \item $p$ is \emph{well-founded}: every non-empty set has a $p$-minimal element.
+ \item $p$ is \emph{local}: i.e.\ $\{x: p(x, y)\}$ is a set for each $y$. We needed this to form the transitive closure.
+\end{enumerate}
+
+\begin{defi}[Well-founded relation]\index{well-founded relation}
+ A relation-class $R$ is \emph{well-founded} if every non-empty set has an $R$-minimal element.
+\end{defi}
+
+\begin{defi}[Local relation]\index{local relation}
+ A relation-class $R$ is \emph{local} if $\{x: p(x, y)\}$ is a set for each $y$.
+\end{defi}
+
+So we will have
+\begin{prop}
+ $p$-induction and $p$-recursion are well-defined and valid for any $p(x, y)$ that is well-founded and local.
+\end{prop}
+
+\begin{proof}
+ Same as above.
+\end{proof}
+Note that if $r$ is a relation on a set $a$, then $r$ is trivially local. So we just need $r$ to be well-founded. In particular, our induction and recursion theorems for well-orderings are special cases of this.
+
+We have almost replicated everything we proved for well-orderings, except for subset collapse. We will now do that.
+
+This is motivated by the following question: can we model a given relation on a set by $\in$?
+
+For example, let $a = \{b, c, d\}$, with $b\,r\,c$ and $c\,r\,d$. Can we find a set $\{b', c', d'\}$ such that $b'\in c'$ and $c'\in d'$? Yes. We can put $b' = \emptyset$, $c' = \{\emptyset\}$, $d' = \{\{\emptyset\}\}$. Moreover, $a' = \{b', c', d'\}$ is transitive.
+
+\begin{defi}[Extensional relation]\index{extensional relation}
+ We say a relation $r$ on set $a$ is \emph{extensional} if
+ \[
+ (\forall x\in a)(\forall y\in a)((\forall z\in a)(z\,r\,x \Leftrightarrow z\,r\,y)\Rightarrow x = y).
+ \]
+ i.e.\ it obeys the axiom of extension.
+\end{defi}
+
+\begin{thm}[Mostowski collapse theorem]\index{Mostowski collapse theorem}
+ Let $r$ be a relation on a set $a$ that is well-founded and extensional. Then there exists a transitive $b$ and a bijection $f: a \to b$ such that $(\forall x, y\in a)(x\, r\, y\Leftrightarrow f(x) \in f(y))$. Moreover, $b$ and $f$ are unique.
+\end{thm}
+Note that the two conditions ``well-founded'' and ``extensional'' are trivially necessary, since $\in$ is both well-founded and extensional.
+
+\begin{proof}
+ Existence: define $f$ on $a$ in the obvious way --- $f(x) = \{f(y): y\,r\,x\}$. This is well-defined by $r$-recursion, and is a genuine function, not just a function-class, by replacement --- its graph is the image of the set $a$ under $x\mapsto (x, f(x))$.
+
+ Let $b = \{f(x): x\in a\}$ (this is a set by replacement). We need to show that it is transitive and bijective.
+
+ By definition of $f$, $b$ is transitive, and $f$ is surjective as $b$ is \emph{defined} to be the image of $f$. So we have to show that $f$ is injective.
+
+ We'll show that $(\forall y\in a)(f(y) = f(x) \Rightarrow y = x)$ for each $x\in a$, by $r$-induction on $x$. Given $y\in a$ with $f(y) = f(x)$, we have $\{f(t): t\,r\, y\} = \{f(s): s\,r\,x\}$ by definition of $f$. So $\{t: t\,r \, y\}=\{s: s\,r \, x\}$ by the induction hypothesis. Hence $x = y$ since $r$ is extensional.
+
+ So we have constructed such a $b$ and $f$. Now we show it is unique: for any suitable $f, f'$, we have $f(x) = f'(x)$ for all $x\in a$ by $r$-induction.
+\end{proof}
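On a finite well-founded extensional relation, the collapse $f(x) = \{f(y): y\,r\,x\}$ can be computed directly. A sketch (not the course's formal construction), using the earlier example $b\,r\,c$, $c\,r\,d$; `mostowski` is an invented helper name.

```python
from functools import cache

# Collapse (a, r): compute f(x) = {f(y) : y r x} by recursion on r.
def mostowski(a, r):
    @cache
    def f(x):
        return frozenset(f(y) for y in a if (y, x) in r)
    return {x: f(x) for x in a}

# b r c and c r d, as in the example above.
f = mostowski(('b', 'c', 'd'), {('b', 'c'), ('c', 'd')})
e = frozenset()
assert f['b'] == e                             # b ↦ ∅
assert f['c'] == frozenset({e})                # c ↦ {∅}
assert f['d'] == frozenset({frozenset({e})})   # d ↦ {{∅}}
```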
+
+Recall that we defined the ordinals to be the ``equivalence classes'' of all well-orderings. But this is not a good definition since the ordinals won't be sets. Hence we define them (formally) as follows:
+\begin{defi}[Ordinal]\index{ordinal}
+ An \emph{ordinal} is a transitive set, totally ordered by $\in$.
+\end{defi}
+This is automatically well-ordered by $\in$, by foundation.
+
+\begin{eg}
+ $\emptyset$, $\{\emptyset\}$, $\{\emptyset, \{\emptyset\}\}$ are ordinals. Any $n\in \omega$ is an ordinal, since $n = \{0, 1, \cdots, n - 1\}$; $\omega$ itself is also an ordinal.
+\end{eg}
+
+Why is this a good definition? Mostowski says that each well-ordering is order-isomorphic to a unique ordinal (using our definition of ordinal above) --- this is its order-type. So here, instead of saying that the ordinals are the equivalence classes of well-orderings, we simply choose one representative of each equivalence class (given by Mostowski collapse), and call that the ordinal.
+
+For any ordinal $\alpha$, $I_\alpha = \{\beta: \beta < \alpha \}$ is a well-ordering of order-type $\alpha$. Applying Mostowski collapse, we get $\alpha = \{\beta: \beta < \alpha\}$. So $\beta < \alpha$ iff $\beta \in \alpha$.
+
+So, for example, $\alpha^+ = \alpha \cup \{\alpha\}$, and $\sup \{\alpha_i: i\in I\} = \bigcup\{\alpha_i: i\in I\}$. Set theorists often write suprema as unions, but this is a totally unhelpful notation!
+
+\subsection{Picture of the universe}
+What does the universe look like? We start with the empty set, and take the power set, repeatedly, transfinitely.
+
+\begin{defi}[von Neumann hierarchy]\index{von Neumann hierarchy}
+ Define sets $V_\alpha$ for $\alpha \in \mathrm{On}$ (where $\mathrm{On}$ is the class of ordinals) by $\in$-recursion:
+ \begin{enumerate}
+ \item $V_0 = \emptyset$.
+ \item $V_{\alpha + 1} = \P(V_\alpha)$.
+ \item $V_\lambda = \bigcup\{V_\gamma: \gamma < \lambda\}$ for $\lambda$ a non-zero limit ordinal.
+ \end{enumerate}
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (0, 6) node [above] {On};
+ \draw (-3, 6) -- (0, 0) -- (3, 6);
+ \draw (-0.15, 0.3) -- (0.15, 0.3) node [right] {$V_0 = \emptyset$};
+ \draw (-0.375, 0.75) -- (0.375, 0.75) node [right] {$V_1 = \{\emptyset\}$};
+ \draw (-1, 2) -- (1, 2) node [right] {$V_7$};
+ \node at (-0.75, 2.6) {$\vdots$};
+ \node at (0.75, 2.6) {$\vdots$};
+ \draw (-1.5, 3) -- (1.5, 3) node [right] {$V_\omega$};
+ \draw (-1.75, 3.5) -- (1.75, 3.5) node [right] {$V_{\omega + 1}$};
+ \node at (-0.75, 4.1) {$\vdots$};
+ \node at (0.75, 4.1) {$\vdots$};
+ \draw (-2.3, 4.6) -- (2.3, 4.6) node [right] {$V_{\omega + \omega}$};
+ \node at (-0.75, 5.3) {$\vdots$};
+ \node at (0.75, 5.3) {$\vdots$};
+ \end{tikzpicture}
+\end{center}
+Note that by definition, we have $x\subseteq V_\alpha \Leftrightarrow x\in V_{\alpha + 1}$.
+
+We would like every $x$ to be in some $V_\alpha$, and that is indeed true. We prove this through a series of lemmas:
+
+\begin{lemma}
+ Each $V_\alpha$ is transitive.
+\end{lemma}
+
+\begin{proof}
+ Since we define $V_\alpha$ by recursion, it is sensible to prove this by induction:
+
+ By induction on $\alpha$:
+ \begin{enumerate}
+ \item Zero: $V_0 = \emptyset$ is transitive.
+ \item Successors: If $x$ is transitive, then so is $\P(x)$: given $y\in z\in \P(x)$, we want to show that $y\in \P(x)$. Since $y$ is in a member of $\P(x)$, i.e.\ a subset of $x$, we must have $y\in x$. So $y\subseteq x$ since $x$ is transitive. So $y\in \P(x)$.
+ \item Limits: Any union of transitive sets is transitive.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{lemma}
+ If $\alpha \leq \beta$, then $V_\alpha \subseteq V_\beta$.
+\end{lemma}
+
+\begin{proof}
+ Fix $\alpha$, and induct on $\beta$.
+ \begin{enumerate}
+ \item $\beta = \alpha$: trivial
+ \item Successors: $V_\beta\subseteq V_{\beta^+}$ since $x\subseteq \P(x)$ for transitive $x$. So $V_\alpha \subseteq V_\beta \Rightarrow V_\alpha \subseteq V_{\beta^+}$.
+ \item Limits: Trivial by definition.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Finally we are ready for:
+\begin{thm}
+ Every $x$ belongs to some $V_\alpha$. Intuitively, we want to say
+ \[
+ V = \bigcup_{\alpha \in \mathrm{On}} V_\alpha.
+ \]
+\end{thm}
+We'll need some terminology to start with. If $x\subseteq V_\alpha$ for some $\alpha$, then there is a least $\alpha$ with $x\subseteq V_\alpha$. We call this the \emph{rank} of $x$.
+
+For example, $\rank(\emptyset) = 0$, $\rank(\{\emptyset\}) = 1$. Also $\rank(\omega) = \omega$. In fact, $\rank(\alpha) = \alpha$ for all $\alpha\in \mathrm{On}$.
+
+Note that we want the least $\alpha$ such that $x\subseteq V_\alpha$, \emph{not} $x\in V_\alpha$.
+
+\begin{proof}
+ We'll show that $(\forall x)(\exists \alpha)(x\in V_\alpha)$ by $\in$-induction on $x$.
+
+ So we are allowed to assume that for each $y\in x$, we have $y\in V_\alpha$ for some $\alpha$, and hence (as $V_\alpha$ is transitive) $y\subseteq V_\alpha$. So $y\subseteq V_{\rank(y)}$, i.e.\ $y\in V_{\rank(y) + 1}$.
+
+ Let $\alpha = \sup\{\rank(y)^+: y\in x\}$. Then $y\in V_\alpha$ for every $y\in x$. So $x\subseteq V_\alpha$, i.e.\ $x\in V_{\alpha + 1}$.
+\end{proof}
+
+Our definition of rank is easy to get wrong --- it is easy to be off by 1. So the official definition is
+\begin{defi}[Rank]\index{rank}
+ The \emph{rank} of a set $x$ is defined recursively by
+ \[
+ \rank(x) = \sup\{(\rank y)^+ : y\in x\}.
+ \]
+\end{defi}
+
+Then the initial definition we had is now a proposition.
+\begin{prop}
+ $\rank(x)$ is the first $\alpha$ such that $x\subseteq V_\alpha$.
+\end{prop}
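The official recursive definition computes directly on hereditarily finite sets. A sketch under the frozenset modelling; `rank` here is an invented helper mirroring the definition above, with $\sup\emptyset = 0$.

```python
# rank(x) = sup{rank(y)⁺ : y ∈ x}, with sup of the empty set being 0.
def rank(x):
    return max((rank(y) + 1 for y in x), default=0)

zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})

assert rank(zero) == 0
assert rank(two) == 2                 # rank(n) = n for ordinals
assert rank(frozenset({one})) == 2    # {{∅}} ⊆ V_2 but {{∅}} ⊄ V_1
```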
+
+\section{Cardinals}
+In this chapter, we will look at the ``sizes'' of (infinite) sets (finite sets are boring!). We work in ZFC, since things become really weird without Choice.
+
+Since we will talk about bijections a lot, we will have the following notation:
+\begin{notation}
+ Write $x\leftrightarrow y$ for $\exists f: f$ is a bijection from $x$ to $y$.
+\end{notation}
+
+\subsection{Definitions}
+We want to define $\card(x)$ (the \emph{cardinality}, or \emph{size} of $x$) in such a way that $\card(x) = \card(y) \Leftrightarrow x\leftrightarrow y$. We can't define $\card(x) = \{y: y\leftrightarrow x\}$ as it may not be a set. So we want to pick a ``representative'' of the sets that biject with $x$, like how we defined the ordinals.
+
+So why not use the ordinals? By Choice, we know that all $x$ is well-orderable. So $x\leftrightarrow \alpha$ for some $\alpha$. So we define:
+\begin{defi}[Cardinality]\index{cardinality}
+ The \emph{cardinality} of a set $x$, written $\card(x)$, is the least ordinal $\alpha$ such that $x\leftrightarrow \alpha$.
+\end{defi}
+Then we have (trivially) $\card(x) = \card(y) \Leftrightarrow x\leftrightarrow y$.
+
+(What if we don't have Choice, i.e.\ if we are in ZF? This will need a really clever trick, called the \emph{Scott trick}. In our universe of ZF, there is a huge blob of things that biject with $x$. We cannot take the whole blob (it won't be a set), or pick one of them (requires Choice). So we ``chop off'' the blob at a fixed, determined point, so that we are left with a set.
+
+Define the essential rank of $x$ to be the least rank of all $y$ such that $y\leftrightarrow x$. Then set $\card (x) = \{y\in V_{\mathrm{essrank}(x)^+}: y\leftrightarrow x\}$.)
+
+So what cardinals are there? Clearly we have $1, 2, 3, \cdots$. What else?
+
+\begin{defi}[Initial ordinal]\index{initial ordinal}
+ We say an ordinal $\alpha$ is \emph{initial} if
+ \[
+ (\forall \beta < \alpha)(\neg \beta \leftrightarrow \alpha),
+ \]
+ i.e.\ it is the smallest ordinal of that cardinality.
+\end{defi}
+
+Then $0, 1, 2, 3, \cdots, \omega, \omega_1, \gamma(X)$ for any $X$ are all initial. However, $\omega^\omega$ is not initial, as it bijects with $\omega$ (both are countable).
+
+Can we find them all? Yes!
+\begin{defi}[Omega ordinals]\index{$\omega_\alpha$}
+ We define $\omega_\alpha$ for each $\alpha \in \mathrm{On}$ by
+ \begin{enumerate}
+ \item $\omega_0 = \omega$;
+ \item $\omega_{\alpha + 1} = \gamma(\omega_\alpha)$;
+ \item $\omega_\lambda = \sup\{\omega_\alpha: \alpha < \lambda\}$ for non-zero limit $\lambda$.
+ \end{enumerate}
+\end{defi}
+It is an easy induction to show that each $\omega_\alpha$ is initial. We can also show that every initial $\delta$ (for $\delta \geq \omega$) is an $\omega_\alpha$. We know that the $\omega_\alpha$ are unbounded since, say, $\omega_\alpha \geq \alpha$ for all $\alpha$. So there is a least $\alpha$ with $\omega_\alpha \geq \delta$.
+
+If $\alpha$ is a successor, then let $\alpha = \beta^+$. Then $\omega_\beta < \delta \leq \omega_\alpha$. But there is no initial ordinal between $\omega_\beta$ and $\omega_\alpha = \gamma(\omega_\beta)$, since $\gamma(X)$ is defined as the \emph{least} ordinal that does not biject with $X$. So we must have $\delta = \omega_\alpha$.
+
+If $\alpha$ is a limit, then since $\omega_\alpha$ is defined as a supremum, we cannot have $\delta < \omega_\alpha$, or else there would be some $\beta < \alpha$ with $\delta < \omega_\beta$, contradicting the minimality of $\alpha$. So $\omega_\alpha = \delta$ as well.
+
+\begin{defi}[Aleph number]\index{$\aleph_\alpha$}
+ Write $\aleph_\alpha$ (``aleph-$\alpha$'') for $\card(\omega_\alpha)$.
+\end{defi}
+
+Then from the argument above, we have
+\begin{thm}
+ The $\aleph_\alpha$ are the cardinals of all infinite sets (or, in ZF, the cardinals of all infinite well-orderable sets). For example, $\card(\omega) = \aleph_0$, $\card(\omega_1) = \aleph_1$.
+\end{thm}
+
+We will use lower case letters to denote cardinalities and upper case for the sets with that cardinality. e.g.\ $\card(N) = n$.
+
+\begin{defi}[Cardinal (in)equality]\index{cardinal inequality}
+ For cardinals $n$ and $m$, write $m \leq n$ if $M$ injects into $N$, where $\card M = m, \card N = n$. This clearly does not depend on $M$ and $N$.
+
+ So $m \leq n$ and $n\leq m$ implies $n = m$ by Schr\"oder-Bernstein. Write $m < n$ if $m \leq n$ and $m\not = n$.
+\end{defi}
+
+\begin{eg}
+ $\card(\P(\omega)) > \card(\omega)$.
+\end{eg}
+
+This $\leq$ is a partial order. Moreover, it is a total order (assuming AC).
+
+\subsection{Cardinal arithmetic}
+\begin{defi}[Cardinal addition, multiplication and exponentiation]\index{cardinal addition}\index{cardinal multiplication}\index{cardinal exponentiation}
+ For cardinals $m, n$, write $m + n$ for $\card(M\sqcup N)$; $mn$ for $\card(M\times N)$; and $m^n$ for $\card(M^N)$, where $M^N = \{f: f\text{ is a function }N\to M\}$. Note that this coincides with our usual definition of $X^n$ for finite $n$.
+\end{defi}
+
+\begin{eg}
+ $\R \leftrightarrow \P(\omega) \leftrightarrow 2^\omega$. So $\card(\R) = \card(\P(\omega)) = 2^{\aleph_0}$.
+\end{eg}
+
+Similarly, define
+\[
+ \sum_{i \in I} m_i = \card\left(\bigsqcup_{i\in I} M_i\right).
+\]
+
+\begin{eg}
+ How many sequences of reals are there? A real sequence is a function from $\omega \to \R$. We have
+ \[
+ \card(\R^\omega) = (2^{\aleph_0})^{\aleph_0} = 2^{\aleph_0 \times \aleph_0} = 2^{\aleph_0} = \card(\R).
+ \]
+\end{eg}
+Note that we used facts like
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $m + n = n + m$, since $M\sqcup N \leftrightarrow N\sqcup M$ with the obvious bijection.
+ \item $mn = nm$, using the obvious bijection.
+ \item $(m^n)^p = m^{np}$, as $(M^N)^P \leftrightarrow M^{N\times P}$, since both objects take in a $P$ and an $N$ and return an $M$.
+ \end{enumerate}
+\end{prop}
+It is important to note that cardinal exponentiation is different from ordinal exponentiation. For example, $\omega^\omega$ (ordinal exponentiation) is countable, but $\aleph_0^{\aleph_0} \geq 2^{\aleph_0} > \aleph_0$ (cardinal exponentiation).
+
+From IA Numbers and sets, we know that $\aleph_0 \aleph_0 = \aleph_0$. What about $\aleph_1 \aleph_1$? Or $\aleph_3 \aleph_0$?
+
+It turns out that cardinal sums and multiplications are utterly boring:
+\begin{thm}
+ For every ordinal $\alpha$,
+ \[
+ \aleph_\alpha \aleph_\alpha = \aleph_\alpha.
+ \]
+ This is the best we could ever ask for. What can be simpler?
+\end{thm}
+
+\begin{proof}
+ Since the Alephs are defined by induction, it makes sense to prove it by induction.
+
+ In the following proof, there is a small part that doesn't work nicely with $\alpha = 0$. But the $\alpha = 0$ case (i.e.\ $\aleph_0\aleph_0 = \aleph_0$) is already done. So assume $\alpha \not= 0$.
+
+ Induct on $\alpha$. We want $\omega_\alpha \times \omega_\alpha$ to biject with $\omega_\alpha$, i.e.\ we want to well-order $\omega_\alpha \times \omega_\alpha$ with order type (at most) $\omega_\alpha$.
+
+ Using the ordinal product clearly doesn't work. The ordinal product counts the product in rows, so we have many copies of $\omega_\alpha$. When we proved $\aleph_0\aleph_0 = \aleph_0$, we counted them diagonally. But counting diagonally here doesn't look very nice, since we will have to ``jump over'' infinities. Instead, we count in squares
+ \begin{center}
+ \begin{tikzpicture}
+ \draw rectangle (3, 3);
+ \node at (1.5, 3) [above] {$\omega_\alpha$};
+ \node at (3, 1.5) [right] {$\omega_\alpha$};
+ \draw (0, 1) -- (1, 1) -- (1, 0);
+ \draw (0, 1.5) -- (1.5, 1.5) -- (1.5, 0);
+ \draw (0, 1.8) -- (1.8, 1.8) -- (1.8, 0);
+ \end{tikzpicture}
+ \end{center}
+ We set $(x, y) < (x', y')$ if \emph{either} $\max(x, y) < \max(x', y')$ (this says that $(x', y')$ is in a bigger square), \emph{or} $\max(x, y) = \max(x', y') = \beta$, say, and one of the following holds: $y < \beta = y'$; or $x = x' = \beta$ and $y < y'$; or $y = y' = \beta$ and $x < x'$ (a rule used to order things within the same square --- the details are utterly unimportant).
+
+ How do we show that this has order type $\omega_\alpha$? We show that any initial segment has order type $ < \omega_\alpha$.
+
+ For any proper initial segment $I_{(x, y)}$, we have
+ \[
+ I_{(x, y)} \subseteq \beta\times \beta
+ \]
+ for some $\beta < \omega_\alpha$, since $\omega_\alpha$ is a limit, with wlog $\beta$ infinite. So
+ \[
+ \beta\times \beta \leftrightarrow \beta
+ \]
+ by the induction hypothesis (as $\card(\beta) < \aleph_\alpha$). So
+ \[
+ \card(\beta\times \beta) < \card(\omega_\alpha).
+ \]
+ Hence $I_{(x, y)}$ has order type $ < \omega_\alpha$. Thus the order type of our well-order is $\leq \omega_\alpha$. So $\omega_\alpha \times \omega_\alpha$ injects into $\omega_\alpha$. Since trivially $\omega_\alpha$ injects into $\omega_\alpha \times \omega_\alpha$, we have $\omega_\alpha \times \omega_\alpha \leftrightarrow \omega_\alpha$.
+\end{proof}
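For $\alpha = 0$, the square ordering can be written down completely explicitly on $\N \times \N$, and one can check by hand that it bijects with $\N$. A quick sketch in Python (the function name and the exact tie-break within each square are my own choices, following the rule in the proof):

```python
def square_index(x, y):
    """Position of (x, y) in the 'counting in squares' well-order on N x N.

    Pairs are compared first by b = max(x, y), i.e. by which square's border
    they lie on; within the border of max b, the points (b, 0), ..., (b, b - 1)
    come first, then (0, b), ..., (b, b).  The border has 2b + 1 points and
    starts at index b^2, so every natural number is hit exactly once.
    """
    b = max(x, y)
    return b * b + (y if y < b else b + x)

# sanity check: on a finite N x N grid the indices are exactly 0, ..., N^2 - 1
N = 30
assert {square_index(x, y) for x in range(N) for y in range(N)} == set(range(N * N))
```

The square of max $b$ occupies exactly the indices $b^2, \ldots, (b + 1)^2 - 1$, which is why the count works out.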
+So why did we say cardinal arithmetic is boring? We have
+\begin{cor}
+ Let $\alpha \leq \beta$. Then
+ \[
+ \aleph_\alpha + \aleph_\beta = \aleph_\alpha\aleph_\beta = \aleph_\beta.
+ \]
+\end{cor}
+
+\begin{proof}
+ \[
+ \aleph_\beta \leq \aleph_\alpha + \aleph_\beta \leq \aleph_\beta + \aleph_\beta = 2\aleph_\beta \leq \aleph_\beta\times \aleph_\beta = \aleph_\beta,
+ \]
 So done.
+\end{proof}
+
+\begin{eg}
+ $X\sqcup X$ bijects with $X$, for infinite $X$ (in ZFC).
+\end{eg}
+
+However, cardinal exponentiation is \emph{very hard}. For example, is $2^{\aleph_0} = \aleph_1$? This is the continuum hypothesis, and cannot be proved or disproved in ZFC!
+
+Even today, not all implications among values of $\aleph_\alpha^{\aleph_\beta}$ are known, i.e.\ we don't know whether they are true, false or independent!
+\section{Incompleteness*}
+The big goal of this (non-examinable) chapter is to show that PA is incomplete, i.e.\ there is a sentence $p$ such that $\mathrm{PA}\not\vdash p$ and $\mathrm{PA}\not\vdash \neg p$.
+
+The strategy is to find a $p$ that is true in $\N$, but $\mathrm{PA}\not\vdash p$.
+
+Here we say ``true'' to mean ``true in $\N$'', and ``provable'' to mean ``PA proves it''.
+
+The idea is to find a $p$ that says ``I am not provable''. More precisely, we want a $p$ such that $p$ is true if and only if $p$ is not provable. We are then done: $p$ must be true, since if $p$ were false, then it is provable, i.e.\ $\mathrm{PA}\vdash p$. So $p$ holds in every model of PA, and in particular, $p$ holds in $\N$. Contradiction. So $p$ is true. So $p$ is not provable.
+
+We'll have to ``code'' formulae, proofs etc. inside PA, i.e.\ as numbers. But this doesn't seem possible --- it seems like, in any format, ``$p$ is not provable'' must be longer than $p$. So $p$ cannot say ``$p$ is not provable''!
+
+So be prepared for some magic to come up in the middle of the proof!
+
+\subsection*{Definability}
+We first start with some notions of definability.
+
+\begin{defi}[Definability]\index{definability}
+ A subset $S\subseteq \N$ is \emph{definable} if there is a formula $p$ with one free variable such that
+ \[
+ \forall m\in \N:\; m\in S \Leftrightarrow p(m)\text{ holds}.
+ \]
+ Similarly, $f: \N \to \N$ is \emph{definable} if there exists a formula $p(x, y)$ such that
+ \[
+ \forall m, n\in \N:\; f(m) = n \Leftrightarrow p(m, n)\text{ holds}.
+ \]
+\end{defi}
+
+\begin{eg}
+ The set of primes is definable: $p(x)$ is
+ \[
+ x\not = 1 \wedge (\forall y)(\forall z)(yz = x \Rightarrow (y = 1\vee z = 1)).
+ \]
+ We can say ``$m$ is prime'' is definable.
+
+ How about powers of $2$? We don't have exponentiation here. What can we do? We can take $p(x)$ to be
+ \[
+ (\forall y)((y\text{ is prime}\wedge y \mid x) \Rightarrow y = 2),
+ \]
+ where $2$ is a shorthand for $s(s(0))$, and $y\mid x$ is a shorthand for $(\exists z)(yz = x)$. So this is also definable.
+
+ The function $m\mapsto m^2$ is also definable: take $p(x, y)$ to be $yy = x$.
+\end{eg}
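Since these defining formulae only involve bounded searches, we can evaluate them directly. A sketch in Python (names are mine; the unbounded quantifiers are truncated to $0, \ldots, m$, which is harmless here since any divisor of $m$ is at most $m$):

```python
def is_prime_by_formula(m):
    """Evaluate  x != 1 and (forall y)(forall z)(yz = x => y = 1 or z = 1)
    at x = m, with the quantifiers truncated to 0..m."""
    return m != 1 and all(y == 1 or z == 1
                          for y in range(m + 1)
                          for z in range(m + 1)
                          if y * z == m)

def is_power_of_two_by_formula(m):
    """Evaluate  (forall y)((y prime and y | x) => y = 2)  at x = m."""
    if m == 0:
        return False  # every prime divides 0, so the formula fails at x = 0
    return all(y == 2 for y in range(2, m + 1)
               if is_prime_by_formula(y) and m % y == 0)

print([m for m in range(1, 40) if is_power_of_two_by_formula(m)])  # [1, 2, 4, 8, 16, 32]
```

Note that the second formula is also satisfied by $1 = 2^0$, vacuously.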
+
+Here we will assume:
+\begin{fact}
+ Any function given by an algorithm is definable.
+\end{fact}
+Proof will not be given here. See, e.g., PTJ's book for a detailed proof.
+
+\begin{eg}
+ $m\mapsto 2^m$ is definable.
+\end{eg}
+
+\subsection*{Coding}
+Our language has 12 symbols: $s, 0, \times, +, \bot, \Rightarrow, (, ), =, x, ', \forall$ (where the variables are now called $x, x', x'', x''', \cdots$).
+
+We assign values to them, say $v(s) = 1, v(0) = 2, \cdots, v(\forall) = 12$. To code a formula, we can take
+\[
+ 2^{v(\text{first symbol})}\cdot 3^{v(\text{second symbol})}\cdots (n\text{th prime})^{v(n\text{th symbol})}.
+\]
+For example, $(\forall x)(x = x)$ is coded as
+\[
+ 2^7 3^{12}5^{10}7^8 11^7 13^{10}17^9 19^{10} 23^8.
+\]
+Not every $m$ codes a formula, e.g.\ $2^7 3^{12}5^{10}$ is translated to $(\forall x$, which is clearly nonsense. Similarly, $2^7 7^3$ or $2^{100}$ can't even be translated at all.
+
+However, ``$m$ codes a formula'' is definable, as there is an algorithm that checks that.
+
+Write $S_m$ for the formula coded by $m$ (and set $S_m$ to be ``$\bot$'' if $m$ does not code a formula). Similarly, write $c(p)$ for the code of $p$.
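This coding is entirely mechanical, and a toy implementation makes the decoding condition concrete. A sketch in Python (the ASCII spellings of $\bot$, $\Rightarrow$, $\forall$ as \texttt{\_|\_}, \texttt{=>}, \texttt{A}, and the function names, are my own):

```python
SYMBOLS = ['s', '0', '*', '+', '_|_', '=>', '(', ')', '=', 'x', "'", 'A']
V = {sym: i + 1 for i, sym in enumerate(SYMBOLS)}  # v(s) = 1, ..., v(forall) = 12

def gen_primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for short formulae)."""
    found, p = [], 2
    while True:
        if all(p % q for q in found):
            found.append(p)
            yield p
        p += 1

def code(formula):
    """Code a symbol list as 2^v(1st symbol) * 3^v(2nd symbol) * ..."""
    m = 1
    for p, sym in zip(gen_primes(), formula):
        m *= p ** V[sym]
    return m

def decode(m):
    """Recover the symbol list; raise if m does not code one."""
    out, ps = [], gen_primes()
    while m > 1:
        p, e = next(ps), 0
        while m % p == 0:
            m, e = m // p, e + 1
        if not 1 <= e <= 12:
            raise ValueError('not the code of a formula')
        out.append(SYMBOLS[e - 1])
    return out
```

For any symbol list \texttt{f} we get \texttt{decode(code(f)) == f}, and \texttt{code} applied to the nine symbols of $(\forall x)(x = x)$ returns $2^7 3^{12} 5^{10} 7^8 11^7 13^{10} 17^9 19^{10} 23^8$; numbers with a skipped prime or an exponent above $12$ are rejected.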
+
+Given a finite sequence $p_1, \cdots, p_n$ of formulae, code it as
+\[
+ 2^{c(p_1)}3^{c(p_2)}\cdots (n\text{th prime})^{c(p_n)}.
+\]
+Alternatively, we can add a separator character to our 12 symbols and concatenate the sequence of formulae with the separator character.
+
+Now, ``$m$ codes an axiom (either logical or axiom of PA)'' is definable, as there exists an algorithm to check it. For example, an instance of the first logical axiom can be validated by the following regex:
+\[
+ \verb/^\([s0()=x'\+/\mathtt{\times\bot\Rightarrow\forall}\verb/]+\)/\mathtt{\Rightarrow}\verb/([s0()=x'\+/\mathtt{\times\bot\Rightarrow\forall}\verb/]+/\mathtt{\Rightarrow}\verb/\1)$/
+\]
+(after translating to a sentence and verifying it is a valid logical formula).
+
+Also,
+\begin{center}
+ $\phi(\ell, m, n) = $``$S_n$ is obtained from $S_\ell$, $S_m$ via MP''
+\end{center}
+is definable, and similarly for generalization.
+
+So
+\[
+ \Theta(m, n) =\text{``}n\text{ codes a proof of }S_m\text{''}
+\]
+is definable.
+
+Thus
+\[
+ \Psi(m) = \text{``}S_m\text{ is provable''}
+\]
+is definable as
+\[
+ \Psi(m) \Leftrightarrow (\exists n)\Theta(m, n).
+\]
+So far, everything is rather boring. We all \emph{know} that strings of symbols can be coded into numbers --- that's what computers do!
+
+\subsection*{Clever bit}
+Consider the statement $\chi(m)$ that states
+\begin{center}
+ ``$m$ codes a formula $S_m$ with one free variable, and $S_m(m)$ is not provable.''
+\end{center}
+This is clearly definable. Suppose this is defined by $p$. So
+\[
+ \chi(n) \Leftrightarrow p[n/x].
+\]
+Suppose $c(p) = N$. Then $\chi(N)$ asserts
+\begin{center}
+ ``$N$ codes a formula $S_N$ with one free variable and $S_N(N)$ is not provable.''
+\end{center}
+But we already know that $N$ codes the formula $\chi$. So $\chi(N)$ asserts that $\chi(N)$ is not provable.
+
+So
+\begin{thm}[G\"odel's incompleteness theorem]\index{G\"odel's incompleteness theorem}
+ PA is incomplete.
+\end{thm}
+
+Maybe that's because PA is rubbish. Could we add some clever axiom $t$ (true in $\N$) to PA, so that PA$\cup \{t\}$ is complete? No! Just run the same proof with ``PA'' replaced by ``PA$\cup \{t\}$'' to show that PA$\cup\{t\}$ is incomplete.
+
+But we can certainly extend PA to a complete theory --- let $T = \{p: p\text{ holds in }\N\}$. What will go wrong in our proof? It can only be because
+\begin{thm}
+ ``Truth is not definable''
+
+ $T= \{p: p\text{ holds in }\N\}$ is not definable. This officially means
+ \[
+ \{m: m\text{ codes a member of }T\}
+ \]
+ is not a definable set.
+\end{thm}
+
+Next question: we proved that our clever statement $p$ is true but not provable. Why doesn't this formalize into PA? The answer is, in the proof, we also used the fact that PA has a model, $\N$. By completeness, this means that we used the statement con(PA), i.e.\ the statement that PA is consistent, or
+\[
+ (\forall m)(m\text{ does not code a proof of }\bot).
+\]
+With this statement, the proof does formalize to PA. So PA$\cup \{\text{con(PA)}\}\vdash p$. Hence
+\begin{thm}
+ PA $\not \vdash$ con(PA).
+\end{thm}
+
+The same proof works in ZF. So ZF is incomplete and ZF does not prove its own consistency.
+
+\printindex
+\end{document}
diff --git a/books/cam/II_L/number_fields.tex b/books/cam/II_L/number_fields.tex
new file mode 100644
index 0000000000000000000000000000000000000000..360751133242f186654c55cc97fa251c26d6e3b5
--- /dev/null
+++ b/books/cam/II_L/number_fields.tex
@@ -0,0 +1,3223 @@
+\documentclass[a4paper]{article}
+
+\def\npart {II}
+\def\nterm {Lent}
+\def\nyear {2016}
+\def\nlecturer {I.\ Grojnowski}
+\def\ncourse {Number Fields}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Part IB Groups, Rings and Modules is essential and Part II Galois Theory is desirable}
+\vspace{10pt}
+
+\noindent Definition of algebraic number fields, their integers and units. Norms, bases and discriminants.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Ideals, principal and prime ideals, unique factorisation. Norms of ideals.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Minkowski's theorem on convex bodies. Statement of Dirichlet's unit theorem. Determination of units in quadratic fields.\hspace*{\fill} [2]
+
+\vspace{5pt}
+\noindent Ideal classes, finiteness of the class group. Calculation of class numbers using statement of the Minkowski bound.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Dedekind's theorem on the factorisation of primes. Application to quadratic fields.\hspace*{\fill} [2]
+
+\vspace{5pt}
+\noindent Discussion of the cyclotomic field and the Fermat equation or some other topic chosen by the lecturer.\hspace*{\fill} [3]}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+Technically, IID Galois Theory is not a prerequisite of this course. However, many results we have are analogous to what we did in Galois Theory, and we will not refrain from pointing out the correspondence. If you have not learnt Galois Theory, then you can ignore them.
+
+\section{Number fields}
+The focus of this course is, unsurprisingly, number fields. Before we define what number fields are, we look at some motivating examples. Suppose we wanted to find all numbers of the form $x^2 + y^2$, where $x, y \in \Z$. For example, if $a, b$ can both be written in this form, does it follow that $ab$ can?
+
+In IB Groups, Rings and Modules, we did the clever thing of working with $\Z[i]$. The integers of the form $x^2 + y^2$ are exactly the norms of integers in $\Z[i]$, where the norm of $x + iy$ is
+\[
+ N(x + iy) = |x + iy|^2 = x^2 + y^2.
+\]
+Then the previous result is obvious --- if $a = N(z)$ and $b = N(w)$, then $ab = N(zw)$. So $ab$ is of the form $x^2 + y^2$.
+
+Similarly, in the IB Groups, Rings and Modules example sheet, we found all solutions to the equation $x^2 + 2 = y^3$ by working in $\Z[\sqrt{-2}]$. This is a very general technique --- working with these rings, and corresponding fields $\Q(\sqrt{-d})$ can tell us a lot about arithmetic we care about.
+
+In this chapter, we will begin by writing down some basic definitions and proving elementary properties about number fields.
+
+\begin{defi}[Field extension]\index{field extension}
+ A \emph{field extension} is an inclusion of fields $K \subseteq L$. We sometimes write this as $L/K$.
+\end{defi}
+
+\begin{defi}[Degree of field extension]\index{degree}
+ Let $K \subseteq L$ be fields. Then $L$ is a vector space over $K$, and the \emph{degree} of the field extension is
+ \[
+ [L:K] = \dim_K (L).
+ \]
+\end{defi}
+
+\begin{defi}[Finite extension]\index{finite extension}
+ A \emph{finite field extension} is a field extension with finite degree.
+\end{defi}
+
+\begin{defi}[Number field]\index{number field}
+ A \emph{number field} is a finite field extension over $\Q$.
+\end{defi}
+
+A field is the most boring kind of ring --- the only ideals are the trivial one and the whole field itself. Thus, if we want to do something interesting with number fields algebraically, we need to come up with something more interesting.
+
+In the case of $\Q$ itself, one interesting thing to talk about is the integers $\Z$. It turns out the right generalization to number fields is \emph{algebraic integers}.
+
+\begin{defi}[Algebraic integer]\index{algebraic integer}
+ Let $L$ be a number field. An \emph{algebraic integer} is an $\alpha \in L$ such that there is some monic $f \in \Z[x]$ with $f(\alpha) = 0$. We write $\mathcal{O}_L$ for the set of algebraic integers in $L$.
+\end{defi}
+
+\begin{eg}
+ It is a fact that if $L = \Q(i)$, then $\mathcal{O}_L = \Z[i]$. We will prove this in the next chapter after we have the necessary tools.
+\end{eg}
+
+These are in fact the main objects of study in this course. Since we say this is a generalization of $\Z \subseteq \Q$, the following had better be true:
+
+\begin{lemma}
+ $\mathcal{O}_\Q = \Z$, i.e.\ $\alpha \in \Q$ is an algebraic integer if and only if $\alpha \in \Z$.
+\end{lemma}
+
+\begin{proof}
+ If $\alpha \in \Z$, then $x - \alpha \in \Z[x]$ is a monic polynomial. So $\alpha \in \mathcal{O}_\Q$.
+
+ On the other hand, let $\alpha \in \Q$. Then there are some coprime $r, s \in \Z$ such that $\alpha = \frac{r}{s}$. If $\alpha$ is an algebraic integer, then there is some
+ \[
+ f(x) = x^n + a_{n - 1} x^{n - 1} + \cdots + a_0
+ \]
+ with $a_i \in \Z$ such that $f(\alpha) = 0$. Substituting in and multiplying by $s^n$, we get
+ \[
+ r^n + \underbrace{a_{n - 1} r^{n - 1}s + \cdots + a_0 s^n}_{\text{divisible by }s} = 0,
+ \]
+ So $s\mid r^n$. But if $s\not= 1$, there is a prime $p$ such that $p \mid s$, and hence $p \mid r^n$. Thus $p \mid r$. So $p$ is a common factor of $s$ and $r$. This is a contradiction. So $s = 1$, and $\alpha$ is an integer.
+\end{proof}
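The proof is effectively the rational root theorem for monic polynomials: any rational root is an integer dividing the constant term, so all rational roots can be found by a finite search. A sketch in Python (names mine, assuming a non-zero constant term):

```python
def integer_roots(coeffs):
    """Rational (hence integer) roots of x^n + a_{n-1} x^{n-1} + ... + a_0.

    coeffs = [a_0, a_1, ..., a_{n-1}], all integers, with a_0 != 0.
    By the lemma, a rational root r/s in lowest terms has s = 1, and then
    substituting shows r divides a_0, so we only test divisors of a_0.
    """
    a0, n = coeffs[0], len(coeffs)
    assert a0 != 0
    f = lambda x: x ** n + sum(a * x ** i for i, a in enumerate(coeffs))
    divisors = [d for d in range(1, abs(a0) + 1) if a0 % d == 0]
    return sorted(r for d in divisors for r in (d, -d) if f(r) == 0)

print(integer_roots([4, -4, -1]))  # [-2, 1, 2], the roots of x^3 - x^2 - 4x + 4
```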
+
+How else is this a generalization of $\Z$? We know $\Z$ is a ring. So perhaps $\mathcal{O}_L$ also is.
+
+\begin{thm}
+ $\mathcal{O}_L$ is a ring, i.e.\ if $\alpha, \beta \in \mathcal{O}_L$, then so are $\alpha \pm \beta$ and $\alpha\beta$.
+\end{thm}
+Note that in general $\mathcal{O}_L$ is not a field. For example, $\Z = \mathcal{O}_\Q$ is not a field.
+
+The proof of this theorem is not as straightforward as the previous one. Recall we have proved a similar theorem in IID Galois Theory before with ``algebraic integer'' replaced with ``algebraic number'', namely that if $L/K$ is a field extension with $\alpha, \beta \in L$ algebraic over $K$, then so are $\alpha\beta$ and $\alpha \pm \beta$, as well as $\frac{1}{\alpha}$ if $\alpha \not= 0$.
+
+To prove this, we notice that $\alpha \in K$ is algebraic if and only if $K[\alpha]$ is a finite extension --- if $\alpha$ is algebraic, with
+\[
+ f(\alpha) = a_n \alpha^n + \cdots + a_0 = 0,\quad a_n \not= 0
+\]
+then $K[\alpha]$ has degree at most $n$, since $\alpha^n$ (and similarly $\alpha^{-1}$) can be written as a linear combination of $1, \alpha, \cdots, \alpha^{n - 1}$, and thus these generate $K[\alpha]$. On the other hand, if $K[\alpha]$ is finite, say of degree $k$, then $1, \alpha, \cdots, \alpha^k$ cannot be linearly independent, hence some non-trivial linear combination of them vanishes, and this gives a polynomial for which $\alpha$ is a root. Moreover, by the same proof, if $K'$ is any finite extension over $K$, then any element in $K'$ is algebraic.
+
+Thus, to prove the result, notice that if $K[\alpha]$ is generated by $1, \alpha, \cdots, \alpha^n$ and $K[\beta]$ is generated by $1, \beta, \cdots, \beta^m$, then $K[\alpha, \beta]$ is generated by $\{\alpha^i \beta^j\}$ for $0 \leq i \leq n, 0 \leq j \leq m$. Hence $K[\alpha, \beta]$ is a finite extension, hence $\alpha\beta, \alpha \pm \beta \in K[\alpha, \beta]$ are algebraic.
+
+We would like to prove this theorem in an analogous way. We will consider $\mathcal{O}_L$ as a \emph{ring} extension of $\Z$. We will formulate the general notion of ``being an algebraic integer'' in general ring extensions:
+
+\begin{defi}[Integrality]\index{integral}
+ Let $R \subseteq S$ be rings. We say $\alpha \in S$ is \emph{integral over $R$} if there is some monic polynomial $f \in R[x]$ such that $f(\alpha) = 0$.
+
+ We say $S$ is \emph{integral over $R$} if all $\alpha \in S$ are integral over $R$.
+\end{defi}
+
+\begin{defi}[Finitely-generated]\index{finitely-generated}
+ We say $S$ is \emph{finitely-generated} over $R$ if there exist elements $\alpha_1, \cdots, \alpha_n \in S$ such that the function $R^n \to S$ defined by $(r_1, \cdots, r_n) \mapsto \sum r_i \alpha_i$ is surjective, i.e.\ every element of $S$ can be written as an $R$-linear combination of $\alpha_1, \cdots, \alpha_n$. In other words, $S$ is finitely-generated as an $R$-module.
+\end{defi}
+This is a refinement of the idea of being algebraic. We allow the use of rings and restrict to monic polynomials. In Galois theory, we showed that finiteness and algebraicity ``are the same thing''. We will generalize this to integrality of rings.
+
+\begin{eg}
+ In a number field $\Z \subseteq \Q \subseteq L$, $\alpha \in L$ is an algebraic integer if and only if $\alpha$ is integral over $\Z$ by definition, and $\mathcal{O}_L$ is integral over $\Z$.
+\end{eg}
+
+\begin{notation}
+ If $\alpha_1, \cdots, \alpha_r \in S$, we write $R[\alpha_1, \cdots, \alpha_r]$ for the subring of $S$ generated by $R, \alpha_1, \cdots, \alpha_r$. In other words, it is the image of the homomorphism from the polynomial ring $R[x_1, \cdots, x_r] \to S$ given by $x_i \mapsto \alpha_i$.
+\end{notation}
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Let $R \subseteq S$ be rings. If $S = R[s]$ and $s$ is integral over $R$, then $S$ is finitely-generated over $R$.
+ \item If $S = R[s_1, \cdots, s_n]$ with $s_i$ integral over $R$, then $S$ is finitely-generated over $R$.
+ \end{enumerate}
+\end{prop}
+This is the easy direction in identifying integrality with being finitely-generated.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We know $S$ is spanned by $1, s, s^2, \cdots$ over $R$. However, since $s$ is integral, there exist $a_0, \cdots, a_{n - 1} \in R$ such that
+ \[
+ s^n = a_0 + a_1 s + \cdots + a_{n - 1}s^{n - 1}.
+ \]
+ So the $R$-submodule generated by $1, s, \cdots, s^{n - 1}$ is stable under multiplication by $s$. So it contains $s^n, s^{n + 1}, s^{n + 2}, \cdots$. So it is $S$.
+ \item Let $S_i = R[s_1, \cdots, s_i]$. So $S_i = S_{i - 1}[s_i]$. Since $s_i$ is integral over $R$, it is integral over $S_{i - 1}$. By the previous part, $S_i$ is finitely-generated over $S_{i - 1}$. To finish, it suffices to show that being finitely-generated is transitive. More precisely, if $A \subseteq B \subseteq C$ are rings, $B$ is finitely generated over $A$ and $C$ is finitely generated over $B$, then $C$ is finitely generated over $A$. This is not hard to see, since if $x_1, \cdots, x_n$ generate $B$ over $A$, and $y_1, \cdots, y_m$ generate $C$ over $B$, then $C$ is generated by $\{x_i y_j\}_{1 \leq i \leq n,1 \leq j \leq m}$ over $A$.\qedhere
+ \end{enumerate}
+\end{proof}
+The other direction is harder.
+
+\begin{thm}
+ If $S$ is finitely-generated over $R$, then $S$ is integral over $R$.
+\end{thm}
+The idea of the proof is as follows: if $s \in S$, we need to find a monic polynomial which it satisfies. In Galois theory, we have fields and vector spaces, and the proof is easy. We can just consider $1, s, s^2, \cdots$, and linear dependence kicks in and gives us a relation. But even if this worked in our case, there is no way we can make this polynomial monic.
+
+Instead, consider the multiplication-by-$s$ map: $m_s: S \to S$ by $\gamma \mapsto s\gamma$. If $S$ were a finite-dimensional vector space over $R$, then Cayley-Hamilton tells us $m_s$, and thus $s$, satisfies its characteristic polynomial, which is monic. Even though $S$ is not a finite-dimensional vector space, the proof of Cayley-Hamilton will work.
+
+\begin{proof}
+ Let $\alpha_1, \cdots, \alpha_n$ generate $S$ as an $R$-module. wlog take $\alpha_1 = 1 \in S$. For any $s \in S$, write
+ \[
+ s \alpha_i = \sum b_{ij}\alpha_j
+ \]
+ for some $b_{ij} \in R$. We write $B = (b_{ij})$. This is the ``matrix of multiplication by $s$''. By construction, we have
+ \[
+ (sI - B)
+ \begin{pmatrix}
+ \alpha_1\\\vdots\\\alpha_n
+ \end{pmatrix} = 0.\tag{$*$}
+ \]
+ Now recall that for any matrix $X$, we have $\adj(X)X = (\det X) I$, where the $(i, j)$th entry of $\adj(X)$ is $(-1)^{i + j}$ times the determinant of the matrix obtained by removing the $j$th row and $i$th column of $X$.
+
+ We now multiply $(*)$ by $\adj(s I - B)$. So we get
+ \[
+ \det(sI - B)
+ \begin{pmatrix}
+ \alpha_1\\\vdots\\\alpha_n
+ \end{pmatrix} = 0.
+ \]
+ In particular, $\det(sI - B) \alpha_1 = 0$. Since we picked $\alpha_1 = 1$, we get $\det(sI - B) = 0$. Hence if $f(x) = \det(xI - B)$, then $f(x) \in R[x]$, and $f(s) = 0$.
+\end{proof}
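The proof is completely constructive, and we can run it on a concrete example, say $s = \sqrt{2} + \sqrt{3}$ acting on $\Z[\sqrt 2, \sqrt 3]$ with generators $1, \sqrt 2, \sqrt 3, \sqrt 6$. A sketch in Python (the example, and the use of the Faddeev--LeVerrier recursion to evaluate $\det(xI - B)$ exactly, are my own choices):

```python
from fractions import Fraction

def charpoly(B):
    """Coefficients [c1, ..., cn] of det(xI - B) = x^n + c1 x^{n-1} + ... + cn,
    computed exactly by the Faddeev-LeVerrier recursion (no eigenvalues needed)."""
    n = len(B)
    A = [[Fraction(v) for v in row] for row in B]
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # M_1 = I
    cs = []
    for k in range(1, n + 1):
        AM = mul(A, M)
        c = -sum(AM[i][i] for i in range(n)) / k  # c_k = -tr(A M_k) / k
        cs.append(c)
        M = [[AM[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]  # M_{k+1} = A M_k + c_k I
    return cs

# Multiplication by s = sqrt(2) + sqrt(3), written in the generators
# (1, sqrt(2), sqrt(3), sqrt(6)); row i lists the b_{ij} with
# s * alpha_i = sum_j b_{ij} alpha_j, as in the proof.
B = [[0, 1, 1, 0],
     [2, 0, 0, 1],
     [3, 0, 0, 1],
     [0, 3, 2, 0]]
print([int(c) for c in charpoly(B)])  # [0, -10, 0, 1]
```

The resulting polynomial $x^4 - 10x^2 + 1$ is monic with integer coefficients and kills $s$, witnessing that $\sqrt 2 + \sqrt 3$ is an algebraic integer.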
+
+Hence we obtain the following:
+\begin{cor}
+ Let $L \supseteq \Q$ be a number field. Then $\mathcal{O}_L$ is a ring.
+\end{cor}
+
+\begin{proof}
+ If $\alpha, \beta \in \mathcal{O}_L$, then $\Z[\alpha, \beta]$ is finitely-generated by the proposition. But then $\Z[\alpha, \beta]$ is integral over $\Z$, by the previous theorem. So $\alpha \pm \beta, \alpha\beta \in \Z[\alpha, \beta]$.
+\end{proof}
+
+Note that it is not necessarily true that if $S \supseteq R$ is an integral extension, then $S$ is finitely-generated over $R$. For example, if $S$ is the set of all algebraic integers in $\C$, and $R = \Z$, then by definition $S$ is an integral extension of $\Z$, but $S$ is not finitely generated over $\Z$.
+
+Thus the following corollary isn't as trivial as the case with ``integral'' replaced by ``finitely generated'':
+\begin{cor}
+ Let $A \subseteq B \subseteq C$ be ring extensions such that $B$ over $A$ and $C$ over $B$ are integral extensions. Then $C$ is integral over $A$.
+\end{cor}
+
+The idea of the proof is that while the extensions might not be finitely generated, only finitely many things are needed to produce the relevant polynomials witnessing integrality.
+
+\begin{proof}
+ If $c \in C$, let
+ \[
+ f(x) = \sum_{i = 0}^N b_i x^i \in B[x]
+ \]
+ be a monic polynomial such that $f(c) = 0$. Let $B_0 = A[b_0, \cdots, b_N]$ and let $C_0 = B_0[c]$. Then $B_0/A$ is finitely generated as $b_0, \cdots, b_N$ are integral over $A$. Also, $C_0$ is finitely-generated over $B_0$, since $c$ is integral over $B_0$. Hence $C_0$ is finitely-generated over $A$. So $c$ is integral over $A$. Since $c$ was arbitrary, we know $C$ is integral over $A$.
+\end{proof}
+
+Now how do we recognize algebraic integers? If we want to show something is an algebraic integer, we just have to exhibit a monic polynomial that vanishes on the number. However, if we want to show that something is \emph{not} an algebraic integer, we have to make sure \emph{no} monic polynomial kills the number. How can we do so?
+
+It turns out that to check whether something is an algebraic integer, we don't have to check all monic polynomials. We just have to check one. Recall that if $K \subseteq L$ is a field extension with $\alpha \in L$, then the \term{minimal polynomial}\index{$p_\alpha$} is the \emph{monic} polynomial $p_\alpha(x) \in K[x]$ of minimal degree such that $p_\alpha(\alpha) = 0$.
+
+Note that we can always make the polynomial monic. It's just that the coefficients need not lie in $\Z$.
+
+Recall that we had the following lemma about minimal polynomials:
+\begin{lemma}
+ If $f \in K[x]$ with $f(\alpha) = 0$, then $p_\alpha \mid f$.
+\end{lemma}
+
+\begin{proof}
+ Write $f = p_\alpha h + r$, with $r \in K[x]$ and $\deg(r) < \deg (p_\alpha)$. Then we have
+ \[
+ 0 = f(\alpha) = p_\alpha(\alpha) h(\alpha) + r(\alpha) = r(\alpha).
+ \]
+ So if $r \not= 0$, this contradicts the minimality of $\deg p_\alpha$.
+\end{proof}
+
+In particular, this lemma implies $p_\alpha$ is unique. One nice application of this result is the following:
+\begin{prop}
+ Let $L$ be a number field. Then $\alpha \in \mathcal{O}_L$ if and only if the minimal polynomial $p_\alpha(x) \in \Q[x]$ for the field extension $\Q \subseteq L$ is in fact in $\Z[x]$.
+\end{prop}
+This is a nice proposition. It gives us a necessary and sufficient condition for whether something is an algebraic integer.
+\begin{proof}
+ $(\Leftarrow)$ is trivial, since this is just the definition of an algebraic integer.
+
+ $(\Rightarrow)$ Let $\alpha \in \mathcal{O}_L$ and $p_\alpha \in \Q[x]$ be the minimal polynomial of $\alpha$, and $h(x) \in \Z[x]$ be a monic polynomial which $\alpha$ satisfies. The idea is to use $h$ to show that the coefficients of $p_\alpha$ are algebraic, thus in fact integers.
+
+ Now there exists a bigger field $M \supseteq L$ such that
+ \[
+ p_\alpha(x) = (x - \alpha_1) \cdots (x - \alpha_r)
+ \]
+ factors in $M[x]$. But by our lemma, $p_\alpha \mid h$. So $h(\alpha_i) = 0$ for all $\alpha_i$. So $\alpha_i \in \mathcal{O}_M$ is an algebraic integer. But $\mathcal{O}_M$ is a ring, i.e.\ sums and products of the $\alpha_i$'s are still algebraic integers. So the coefficients of $p_\alpha$ are algebraic integers (in $\mathcal{O}_M$). But they are also in $\Q$. Thus the coefficients must be integers.
+\end{proof}
+Alternatively, we can deduce this proposition from the previous lemma plus Gauss' lemma.
+
+Another relation between $\Z$ and $\Q$ is that $\Q$ is the fraction field of $\Z$. This is true for general number fields:
+\begin{lemma}
+ We have
+ \[
+ \Frac \mathcal{O}_L = \left\{\frac{\alpha}{\beta}: \alpha, \beta \in \mathcal{O}_L, \beta \not= 0\right\} = L.
+ \]
+ In fact, for any $\alpha \in L$, there is some $n \in \Z$ such that $n\alpha \in \mathcal{O}_L$.
+\end{lemma}
+
+\begin{proof}
+ If $\alpha \in L$, let $g(x) \in \Q[x]$ be its monic minimal polynomial. Then there exists $n \in \Z$ non-zero such that $ng(x) \in \Z[x]$ (pick $n$ to be the least common multiple of the denominators of the coefficients of $g(x)$). Now the magic is to put
+ \[
+ h(x) = n^{\deg(g)}g\left(\frac{x}{n}\right).
+ \]
+ Then this is a monic polynomial with integral coefficients --- in effect, we have just multiplied the coefficient of $x^i$ by $n^{\deg(g) - i}$! Then $h(n\alpha) = 0$. So $n\alpha$ is integral.
+\end{proof}
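The ``magic'' substitution in the proof is easy to run. A sketch in Python (the function name and the example $\alpha = \sqrt{2}/3$ are mine):

```python
from fractions import Fraction
from math import lcm

def clear_denominators(g):
    """Given the monic g(x) = x^deg + a_{deg-1} x^{deg-1} + ... + a_0 as the
    Fraction list [a_0, ..., a_{deg-1}], return (n, h) with h monic over Z
    (same list convention) and h(x) = n^deg * g(x/n), so h(n * alpha) = 0
    whenever g(alpha) = 0.  Here n is the lcm of the denominators."""
    deg = len(g)
    n = lcm(*(Fraction(a).denominator for a in g))
    # the coefficient of x^i gets multiplied by n^(deg - i), hence is integral
    return n, [int(Fraction(a) * n ** (deg - i)) for i, a in enumerate(g)]

# alpha = sqrt(2)/3 has minimal polynomial g(x) = x^2 - 2/9; here n = 9 and
# h(x) = x^2 - 18, which indeed kills 9 * alpha = 3 * sqrt(2)
print(clear_denominators([Fraction(-2, 9), Fraction(0)]))  # (9, [-18, 0])
```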
+
+\section{Norm, trace, discriminant, numbers}
+Recall that in our motivating example of $\Z[i]$, one important tool was the norm of an algebraic integer $x + iy$, given by $N(x + iy) = x^2 + y^2$. This can be generalized to arbitrary number fields, and will prove itself to be a very useful notion to consider. Apart from the norm, we will also consider a number known as the \emph{trace}, which is also useful. We will also study numbers associated with the number field itself, rather than particular elements of the field, and it turns out they tell us a lot about how the field behaves.
+
+\subsubsection*{Norm and trace}
+Recall the following definition from IID Galois Theory:
+\begin{defi}[Norm and trace]\index{norm}\index{trace}
+ Let $L/K$ be a field extension, and $\alpha \in L$. We write $m_\alpha: L \to L$ for the map $\ell \mapsto \alpha \ell$. Viewing this as a linear map of $K$-vector spaces, we define the \emph{norm} of $\alpha$ to be
+ \[
+ N_{L/K}(\alpha) = \det m_\alpha,
+ \]
+ and the \emph{trace} to be
+ \[
+ \tr_{L/K}(\alpha) = \tr m_\alpha.
+ \]
+\end{defi}
+
+The following property is immediate:
+\begin{prop}
+ For a field extension $L/K$ and $a, b \in L$, we have $N(ab) = N(a)N(b)$ and $\tr(a + b) = \tr(a) + \tr(b)$.
+\end{prop}
+
+We can alternatively define the norm and trace as follows:
+\begin{prop}
+ Let $p_\alpha \in K[x]$ be the minimal polynomial of $\alpha$. Then the characteristic polynomial of $m_\alpha$ is
+ \[
+ \det(xI - m_\alpha) = p_\alpha^{[L:K(\alpha)]}.
+ \]
+ Hence if $p_\alpha(x)$ splits in some field $L'\supseteq K(\alpha)$, say
+ \[
+ p_\alpha(x) = (x - \alpha_1) \cdots (x - \alpha_r),
+ \]
+ then
+ \[
+ N_{K(\alpha)/K}(\alpha) = \prod \alpha_i,\quad \tr_{K(\alpha)/K}(\alpha) = \sum \alpha_i,
+ \]
+ and hence
+ \[
+ N_{L/K}(\alpha) = \left(\prod \alpha_i\right)^{[L:K(\alpha)]},\quad \tr_{L/K}(\alpha) = [L:K(\alpha)] \left(\sum \alpha_i\right).
+ \]
+\end{prop}
+This was proved in the IID Galois Theory course, and we will just use it without proof.
+
+\begin{cor}
+ Let $L \supseteq \Q$ be a number field. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\alpha \in \mathcal{O}_L$.
+ \item The minimal polynomial $p_\alpha$ is in $\Z[x]$.
+ \item The characteristic polynomial of $m_\alpha$ is in $\Z[x]$.
+ \end{enumerate}
+ This in particular implies $N_{L/\Q}(\alpha) \in \Z$ and $\tr_{L/\Q}(\alpha) \in \Z$.
+\end{cor}
+
+\begin{proof}
+ The equivalence between the first two was already proven. For the equivalence between (ii) and (iii), if the characteristic polynomial of $m_\alpha$ is in $\Z[x]$, then $\alpha \in \mathcal{O}_L$, since $\alpha$ is a root of this monic polynomial. On the other hand, if $p_\alpha \in \Z[x]$, then so is the characteristic polynomial, since it is just $p_\alpha^{[L:\Q(\alpha)]}$.
+
+ The final implication comes from the fact that the norm and trace are just coefficients of the characteristic polynomial.
+\end{proof}
+
+It would be nice if the last implication is an if and only if. This is in general not true, but it occurs, obviously, when the characteristic polynomial is quadratic, since the norm and trace would be the only coefficients.
+\begin{eg}
+ Let $L = K(\sqrt{d}) = K[z]/(z^2 - d)$, where $d$ is not a square in $K$. As a vector space over $K$, we can take $1, \sqrt{d}$ as our basis. So every $\alpha$ can be written as
+ \[
+ \alpha = x + y\sqrt{d}.
+ \]
+ Hence the matrix of multiplication by $\alpha$ is
+ \[
+ m_\alpha =
+ \begin{pmatrix}
+ x & dy\\
+ y & x
+ \end{pmatrix}.
+ \]
+ So the trace and norm are given by
+ \begin{align*}
+ \tr_{L/K} (x + y\sqrt{d}) &= 2x = (x + y\sqrt{d}) + (x - y\sqrt{d})\\
+ N_{L/K} (x + y \sqrt{d}) &= x^2 - dy^2 = (x + y\sqrt{d})(x- y\sqrt{d})
+ \end{align*}
+ We can also obtain this by considering the roots of the minimal polynomial of $\alpha = x + y \sqrt{d}$, namely $(\alpha - x)^2 - y^2 d = 0$, which has roots $x \pm y \sqrt{d}$.
+\end{eg}
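This matrix computation is easy to replicate. A small sketch in Python (names mine) that builds $m_\alpha$ in the basis $1, \sqrt d$ and reads off the trace and determinant:

```python
def mult_matrix(x, y, d):
    """Matrix of multiplication by alpha = x + y*sqrt(d) in the basis (1, sqrt(d)):
    alpha * 1 = x * 1 + y * sqrt(d),  alpha * sqrt(d) = d*y * 1 + x * sqrt(d)."""
    return [[x, d * y],
            [y, x]]

def trace_and_norm(x, y, d):
    """tr m_alpha = 2x  and  det m_alpha = x^2 - d y^2, as in the example."""
    M = mult_matrix(x, y, d)
    return M[0][0] + M[1][1], M[0][0] * M[1][1] - M[0][1] * M[1][0]

print(trace_and_norm(1, 1, -1))  # (2, 2): for 1 + i, trace 2 and norm 2
```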
+In particular, if $L = \Q(\sqrt{d})$, with $d < 0$, then the norm of an element is just the square of its modulus as a complex number.
+
+Now that we have computed the general trace and norm, we can use the proposition to find out what the algebraic integers are. It turns out the result is (slightly) unexpected:
+\begin{lemma}
+ Let $L = \Q(\sqrt{d})$, where $d \in \Z$ is not $0, 1$ and is square-free. Then
+ \[
+ \mathcal{O}_L =
+ \begin{cases}
+ \Z[\sqrt{d}] & d \equiv 2 \text{ or }3\pmod 4\\
+ \Z\left[\frac{1}{2}(1 + \sqrt{d})\right] & d \equiv 1 \pmod 4
+ \end{cases}
+ \]
+\end{lemma}
+
+\begin{proof}
+ We know $x + y \sqrt{d} \in \mathcal{O}_L$ if and only if $2x, x^2 - dy^2 \in \Z$ by the previous example. These imply $4dy^2 = (2x)^2 - 4(x^2 - dy^2) \in \Z$. So if $y = \frac{r}{s}$ with $r, s$ coprime, $r, s \in \Z$, then we must have $s^2 \mid 4d$. But $d$ is square-free. So $s = 1$ or $2$. So
+ \[
+ x = \frac{u}{2},\quad y = \frac{v}{2}
+ \]
+ for some $u, v \in \Z$. Then we know $u^2 - dv^2 \in 4\Z$, i.e.\ $u^2 \equiv dv^2 \pmod 4$. But we know the squares mod $4$ are always $0$ and $1$. So if $d \not\equiv 1 \pmod 4$, then $u^2 \equiv dv^2\pmod 4$ implies that $u^2 \equiv v^2 \equiv 0\pmod 4$, and hence $u, v$ are even. So $x, y \in \Z$, giving $\mathcal{O}_L = \Z[\sqrt{d}]$.
+
+ On the other hand, if $d \equiv 1\pmod 4$, then $u^2 \equiv v^2 \pmod 4$, so $u$ and $v$ have the same parity. Writing $u = v + 2k$, we get $x + y\sqrt{d} = k + v \cdot \frac{1}{2}(1 + \sqrt{d})$, i.e.\ $x + y \sqrt{d}$ is a $\Z$-combination of $1$ and $\frac{1}{2}(1 + \sqrt{d})$.
+
+ As a sanity check, we find that the minimal polynomial of $\frac{1}{2}(1 + \sqrt{d})$ is $x^2 - x + \frac{1}{4}(1 - d)$, which has coefficients in $\Z$ if and only if $d \equiv 1 \pmod 4$.
+\end{proof}
+
+\subsubsection*{Field embeddings}
+Recall the following theorem from IID Galois Theory:
+\begin{thm}[Primitive element theorem]\index{primitive element theorem}
+ Let $K \subseteq L$ be a finite separable field extension. Then there exists an $\alpha \in L$ such that $K(\alpha) = L$.
+\end{thm}
+For example, $\Q(\sqrt{2}, \sqrt{3}) = \Q(\sqrt{2} + \sqrt{3})$.
+
+Since $\Q$ has characteristic zero, it follows that all number fields are separable extensions. So any number field $L/\Q$ is of the form $L = \Q(\alpha)$. This makes it much easier to study number fields, as the only extra ``stuff'' we have on top of $\Q$ is a single element $\alpha$.
+
+One particular thing we can do is to look at the number of ways we can embed $L \hookrightarrow \C$. For example, for $\Q(\sqrt{-1})$, there are two such embeddings --- one sends $\sqrt{-1}$ to $i$ and the other sends $\sqrt{-1}$ to $-i$.
+
+\begin{lemma}
+ The degree $[L:\Q] = n$ of a number field is the number of field embeddings $L \hookrightarrow \C$.
+\end{lemma}
+
+\begin{proof}
+ Let $\alpha$ be a primitive element, and $p_\alpha(x) \in \Q[x]$ its minimal polynomial. Then we have $\deg p_\alpha = [L:\Q] = n$, as $1, \alpha, \alpha^2, \cdots, \alpha^{n - 1}$ is a basis. Moreover,
+ \[
+ \frac{\Q[x]}{(p_\alpha)} \cong \Q(\alpha) = L.
+ \]
+ Since $L/\Q$ is separable, we know $p_\alpha$ has $n$ distinct roots in $\C$. Write
+ \[
+ p_\alpha(x) = (x - \alpha_1) \cdots (x - \alpha_n).
+ \]
+ Now an embedding $\Q[x]/(p_\alpha) \hookrightarrow \C$ is uniquely determined by the image of $x$, and $x$ must be sent to one of the roots of $p_\alpha$. So for each $i$, the map $x \mapsto \alpha_i$ gives us a field embedding, and these are all. So there are $n$ of them.
+\end{proof}
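For instance, $\theta = \sqrt{2} + \sqrt{3}$ has minimal polynomial $x^4 - 10x^2 + 1$, whose four distinct roots $\pm\sqrt{2} \pm \sqrt{3}$ give the four embeddings of $\Q(\sqrt{2}, \sqrt{3})$ into $\C$. A quick numerical check of this (a sketch, not part of the notes):

```python
import math

# minimal polynomial of θ = √2 + √3 over Q (degree 4, so [L:Q] = 4)
def p(x):
    return x ** 4 - 10 * x ** 2 + 1

# candidate images of θ under the four embeddings
roots = [s2 * math.sqrt(2) + s3 * math.sqrt(3)
         for s2 in (1, -1) for s3 in (1, -1)]

print(all(abs(p(r)) < 1e-9 for r in roots))   # True: all four are roots of p
print(len({round(r, 9) for r in roots}))      # 4: they are pairwise distinct
```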
+
+Using these field embeddings, we can come up with the following alternative formula for the norm and trace.
+\begin{cor}
+ Let $L/\Q$ be a number field. If $\sigma_1, \cdots, \sigma_n:L \to \C$ are the different field embeddings and $\beta \in L$, then
+ \[
+ \tr_{L/\Q}(\beta) = \sum \sigma_i(\beta),\quad N_{L/\Q}(\beta) = \prod_i \sigma_i(\beta).
+ \]
+ We call $\sigma_1(\beta), \cdots, \sigma_n (\beta)$ the \emph{conjugates}\index{conjugate} of $\beta$ in $\C$.
+\end{cor}
+The proof is in the Galois theory course.
+
+Using this characterization, we have the following very concrete test for when something is a unit.
+\begin{lemma}
+ Let $x \in \mathcal{O}_L$. Then $x$ is a unit if and only if $N_{L/\Q}(x) = \pm 1$.
+\end{lemma}
+
+\begin{notation}
+ Write $\mathcal{O}_L^\times = \{x \in \mathcal{O}_L: x^{-1} \in \mathcal{O}_L\}$, the units in $\mathcal{O}_L$.
+\end{notation}
+
+\begin{proof}
+ $(\Rightarrow)$ We know $N(a b) = N(a)N(b)$. So if $x \in \mathcal{O}_L^\times$, then there is some $y \in \mathcal{O}_L$ such that $xy = 1$. So $N(x) N(y) = 1$. So $N(x)$ is a unit in $\Z$, i.e.\ $\pm 1$.
+
+ $(\Leftarrow)$ Let $\sigma_1, \cdots, \sigma_n: L \to \C$ be the $n$ embeddings of $L$ in $\C$. For notational convenience, we suppose that $L$ is already a subfield of $\C$, and $\sigma_1$ is the inclusion map. Then for each $x \in \mathcal{O}_L$, we have
+ \[
+ N(x) = x \sigma_2(x) \cdots \sigma_n(x).
+ \]
+ Now if $N(x) = \pm 1$, then $x^{-1} = \pm \sigma_2(x) \cdots \sigma_n(x)$. So we have $x^{-1} \in \mathcal{O}_L$, since this is a product of algebraic integers. So $x$ is a unit in $\mathcal{O}_L$.
+\end{proof}
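As an illustration of the lemma: $1 + \sqrt{2}$ has norm $1 - 2 = -1$ in $\Z[\sqrt{2}]$, so it is a unit, and the proof even exhibits the inverse as $\pm$ the product of the remaining conjugates, here $-(1 - \sqrt{2}) = -1 + \sqrt{2}$. A quick check (helper names ours):

```python
def norm(a, b, d):
    """Norm of a + b*sqrt(d): (a + b√d)(a - b√d) = a^2 - d*b^2."""
    return a * a - d * b * b

def mul(p, q, d):
    """Product of two elements of Q(√d), given as coefficient pairs."""
    (a, b), (c, e) = p, q
    return (a * c + d * b * e, a * e + b * c)

u, u_inv = (1, 1), (-1, 1)        # 1 + √2 and -1 + √2
print(norm(1, 1, 2))              # -1, so 1 + √2 is a unit
print(mul(u, u_inv, 2))           # (1, 0), i.e. (1 + √2)(-1 + √2) = 1
```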
+
+\begin{cor}
+ If $x \in \mathcal{O}_L$ is such that $N(x)$ is prime, then $x$ is irreducible.
+\end{cor}
+
+\begin{proof}
+ If $x = ab$, then $N(a)N(b) = N(x)$. Since $N(x)$ is prime, either $N(a) = \pm 1$ or $N(b) = \pm 1$. So $a$ or $b$ is a unit.
+\end{proof}
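For example, $N(\sqrt{-5}) = 5$ is prime, so $\sqrt{-5}$ is irreducible in $\Z[\sqrt{-5}]$. Note that the test only goes one way: $N(1 + \sqrt{-5}) = 6$ is not prime, yet $1 + \sqrt{-5}$ is still irreducible, since no element has norm $2$ or $3$. A small sketch (helper names ours):

```python
def norm(a, b):
    """Norm of a + b*sqrt(-5) in Z[sqrt(-5)]: a^2 + 5*b^2."""
    return a * a + 5 * b * b

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

print(norm(0, 1), is_prime(norm(0, 1)))   # 5 True: √-5 is irreducible
print(norm(1, 1), is_prime(norm(1, 1)))   # 6 False: the corollary says nothing
# ...but no element has norm 2 or 3 (the search box suffices, as a^2 + 5b^2 grows)
print(any(norm(a, b) in (2, 3) for a in range(-3, 4) for b in range(-3, 4)))  # False
```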
+We can consider a more refined notion than just the number of field embeddings.
+
+\begin{defi}[$r$ and $s$]\index{$r$}\index{$s$}
+ We write $r$ for the number of field embeddings $L \hookrightarrow \R$, and $s$ the number of pairs of non-real field embeddings $L \hookrightarrow \C$. Then
+ \[
+ n = r + 2s.
+ \]
+ Alternatively, $r$ is the number of real roots of $p_\alpha$, and $s$ is the number of pairs of complex conjugate roots.
+\end{defi}
+The distinction between real embeddings and complex embeddings will be important in the second half of the course.
+
+\subsubsection*{Discriminant}
+The final invariant we will look at in this chapter is the discriminant. It is based on the following observation:
+
+\begin{prop}
+ Let $L/K$ be a separable extension. Then the $K$-bilinear form $L \times L \to K$ defined by $(x, y) \mapsto \tr_{L/K}(xy)$ is non-degenerate. Equivalently, if $\alpha_1,\cdots, \alpha_n$ is a $K$-basis for $L$, then the Gram matrix $(\tr(\alpha_i\alpha_j))_{i, j = 1, \cdots, n}$ has non-zero determinant.
+\end{prop}
+Recall from Galois theory that if $L/K$ is \emph{not} separable, then $\tr_{L/K} = 0$, and it is very \emph{very} degenerate. Also, note that if $K$ is of characteristic $0$, then there is a quick and dirty proof of this fact --- the trace form is non-degenerate, because for any non-zero $x \in L$, we have $\tr_{L/K}(x\cdot x^{-1}) = \tr_{L/K}(1) = n \not= 0$. This is really the only case we care about, but in the proof of the general result, we will also find a useful formula for the discriminant when the basis is $1, \theta, \theta^2, \ldots, \theta^{n - 1}$.
+
+We will use the following important notation:
+\begin{notation}
+ \[
+ \Delta(\alpha_1, \cdots, \alpha_n) = \det(\tr_{L/K}(\alpha_i \alpha_j)).
+ \]
+\end{notation}
+
+\begin{proof}
+ Let $\sigma_1, \cdots, \sigma_n: L \to \bar{K}$ be the $n$ distinct $K$-linear field embeddings $L \hookrightarrow \bar{K}$. Put
+ \[
+ S = (\sigma_i(\alpha_j))_{i, j = 1, \cdots, n} =
+ \begin{pmatrix}
+ \sigma_1(\alpha_1) & \cdots & \sigma_1(\alpha_n)\\
+ \vdots & \ddots & \vdots\\
+ \sigma_n(\alpha_1) & \cdots & \sigma_n(\alpha_n)
+ \end{pmatrix}.
+ \]
+ Then
+ \[
+ S^T S = \left(\sum_{k = 1}^n \sigma_k(\alpha_i)\sigma_k(\alpha_j)\right)_{i,j = 1, \cdots, n}.
+ \]
+ We know $\sigma_k$ is a field homomorphism. So
+ \[
+ \sum_{k = 1}^n \sigma_k (\alpha_i)\sigma_k(\alpha_j) = \sum_{k = 1}^n \sigma_k(\alpha_i \alpha_j) = \tr_{L/K}(\alpha_i \alpha_j).
+ \]
+ So
+ \[
+ S^T S = (\tr(\alpha_i\alpha_j))_{i, j = 1, \cdots, n}.
+ \]
+ So we have
+ \[
+ \Delta(\alpha_1, \cdots, \alpha_n) = \det(S^T S) = \det(S)^2.
+ \]
+ Now we use the theorem of primitive elements to write $L = K(\theta)$ such that $1, \theta, \cdots, \theta^{n - 1}$ is a basis for $L$ over $K$, with $[L:K] = n$. Now $S$ is just
+ \[
+ S =
+ \begin{pmatrix}
+ 1 & \sigma_1(\theta) & \cdots & \sigma_1(\theta)^{n - 1}\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 1 & \sigma_n(\theta) & \cdots & \sigma_n(\theta)^{n - 1}
+ \end{pmatrix}.
+ \]
+ This is a Vandermonde matrix, and so
+ \[
+ \Delta(1, \theta, \cdots, \theta^{n - 1}) = (\det S)^2 = \prod_{i < j} (\sigma_i(\theta) - \sigma_j(\theta))^2.
+ \]
+ Since the field extension is separable, we have $\sigma_i \not= \sigma_j$ for all $i \not= j$, and since $\theta$ generates the field, this implies $\sigma_i (\theta) \not= \sigma_j(\theta)$. So the product above is non-zero.
+\end{proof}
+So we have this nice canonical bilinear map. However, this determinant is not canonical. Recall that if $\alpha_1, \cdots, \alpha_n$ is a basis for $L/K$, and $\alpha_1', \cdots, \alpha_n'$ is another basis, then
+\[
+ \alpha_i' = \sum a_{ij}\alpha_j
+\]
+for some $A = (a_{ij}) \in \GL_n(K)$. So
+\[
+ \Delta(\alpha_1', \cdots, \alpha_n') = (\det A)^2 \Delta(\alpha_1, \cdots, \alpha_n).
+\]
+However, for number fields, we shall see that we can pick a ``canonical'' basis, and get a canonical value for $\Delta$. We will call this the discriminant.
+
+\begin{defi}[Integral basis]\index{integral basis}
+ Let $L/\Q$ be a number field. Then a basis $\alpha_1, \cdots, \alpha_n$ of $L$ is an \emph{integral basis} if
+ \[
+ \mathcal{O}_L = \left\{\sum_{i = 1}^n m_i \alpha_i: m_i \in \Z\right\} = \bigoplus_1^n \Z\alpha_i.
+ \]
+ In other words, it is simultaneously a basis for $L$ over $\Q$ and $\mathcal{O}_L$ over $\Z$.
+\end{defi}
+Note that integral bases are not unique, just as with usual bases. Given one basis, you can get any other by acting by $\GL_n(\Z)$.
+
+\begin{eg}
+ Consider $\Q(\sqrt{d})$ with $d$ square-free, $d \not= 0, 1$. If $d \equiv 1\pmod 4$, we've seen that $1, \frac{1}{2}(1 + \sqrt{d})$ is an integral basis. Otherwise, if $d \equiv 2, 3 \pmod 4$, then $1, \sqrt{d}$ is an integral basis.
+\end{eg}
+
+The important theorem is that an integral basis always exists.
+
+\begin{thm}
+ Let $L/\Q$ be a number field. Then there exists an integral basis for $\mathcal{O}_L$. In particular, $\mathcal{O}_L \cong \Z^n$ with $n = [L:\Q]$.
+\end{thm}
+
+\begin{proof}
+ Let $\alpha_1, \cdots, \alpha_n$ be any basis of $L$ over $\Q$. We have proved that for each $i$ there is some $n_i \in \Z$ such that $n_i \alpha_i \in \mathcal{O}_L$. So wlog $\alpha_1, \cdots, \alpha_n \in \mathcal{O}_L$, and they are a basis of $L$ over $\Q$. Since the $\alpha_i$ are integral, so are the $\alpha_i \alpha_j$, and so all these have integer trace, as we have previously shown. Hence $\Delta(\alpha_1, \cdots, \alpha_n)$, being the determinant of a matrix with integer entries, is an integer.
+
+ Now choose a $\Q$-basis $\alpha_1, \cdots, \alpha_n \in \mathcal{O}_L$ such that $\Delta(\alpha_1, \cdots, \alpha_n) \in \Z\setminus \{0\}$ has minimal absolute value. We will show that these are an integral basis.
+
+ Let $x \in \mathcal{O}_L$, and write
+ \[
+ x = \sum \lambda_i \alpha_i
+ \]
+ for some $\lambda_i \in \Q$. These $\lambda_i$ are necessarily unique since $\alpha_1, \cdots, \alpha_n$ is a basis.
+
+ Suppose some $\lambda_i \not\in \Z$. wlog say $\lambda_1 \not \in \Z$. We write
+ \[
+ \lambda_1 = n_1 + \varepsilon_1,
+ \]
+ for $n_1 \in \Z$ and $0 < \varepsilon_1 < 1$. We put
+ \[
+ \alpha_1' = x - n_1 \alpha_1 = \varepsilon_1 \alpha_1 + \lambda_2 \alpha_2 + \cdots + \lambda_n \alpha_n \in \mathcal{O}_L.
+ \]
+ So $\alpha_1', \alpha_2, \cdots, \alpha_n$ is still a basis for $L/\Q$, and are still in $\mathcal{O}_L$. But then
+ \[
+ \Delta(\alpha_1', \alpha_2, \cdots, \alpha_n) = \varepsilon_1^2 \cdot \Delta(\alpha_1, \cdots, \alpha_n).
+ \]
+ Since $0 < \varepsilon_1^2 < 1$, this is still a non-zero integer, but of strictly smaller absolute value. This contradicts minimality. So we must have $\lambda_i \in \Z$ for all $i$. So this is a basis for $\mathcal{O}_L$.
+\end{proof}
+
+Now if $\alpha_1', \cdots, \alpha_n'$ is another integral basis of $L$ over $\Q$, then there is some $g \in \GL_n(\Z)$ such that $g\alpha_i = \alpha_i'$. Since $\det (g)$ is invertible in $\Z$, it must be $1$ or $-1$, and hence
+\[
+ \Delta(\alpha_1', \cdots, \alpha_n') = \det(g)^2 \Delta (\alpha_1, \cdots, \alpha_n) = \Delta(\alpha_1, \cdots, \alpha_n),
+\]
+and is independent of the choice of integral basis.
+\begin{defi}[Discriminant]\index{discriminant}\index{$D_L$}
+ The \emph{discriminant} $D_L$ of a number field $L$ is defined as
+ \[
+ D_L = \Delta(\alpha_1, \cdots, \alpha_n)
+ \]
+ for any integral basis $\alpha_1, \cdots, \alpha_n$.
+\end{defi}
+
+\begin{eg}
+ Let $L = \Q(\sqrt{d})$, where $d \not= 0, 1$ and $d$ is square-free. If $d \equiv 2, 3 \pmod 4$, then it has an integral basis $1, \sqrt{d}$. So
+ \[
+ D_L = \det
+ \begin{pmatrix}
+ 1 & \sqrt{d}\\
+ 1 & -\sqrt{d}
+ \end{pmatrix}^2 = 4d.
+ \]
+ Otherwise, if $d \equiv 1 \pmod 4$, then
+ \[
+ D_L = \det
+ \begin{pmatrix}
+ 1 & \frac{1}{2}(1 + \sqrt{d})\\
+ 1 & \frac{1}{2}(1 - \sqrt{d})
+ \end{pmatrix}^2 = d.
+ \]
+\end{eg}
+
+Recall that we have seen the word discriminant before, and let's make sure these concepts are more-or-less consistent. Recall that the discriminant of a polynomial $f(x) = \prod (x - \alpha_i)$ is defined as
+\[
+ \disc(f) = \prod_{i < j} (\alpha_i - \alpha_j)^2 = (-1)^{n(n - 1)/2}\prod_{i \not= j}(\alpha_i - \alpha_j).
+\]
+If $p_\theta(x) \in K[x]$ is the minimal polynomial of $\theta$ (where $L = K[\theta]$), then the roots of $p_\theta$ are $\sigma_i(\theta)$. Hence we get
+\[
+ \disc(p_\theta) = \prod_{i < j}(\sigma_i(\theta) - \sigma_j(\theta))^2.
+\]
+In other words,
+\[
+ \disc(p_\theta) = \Delta(1, \theta, \cdots, \theta^{n - 1}).
+\]
+So this makes sense.
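We can verify this agreement numerically for $L = \Q(\sqrt{2})$: the discriminant of $p(x) = x^2 - 2$ and the trace-form determinant $\Delta(1, \sqrt{2})$ should both equal $4d = 8$. A sketch (outside the notes' own notation):

```python
import math

d = 2
r1, r2 = math.sqrt(d), -math.sqrt(d)     # roots of p(x) = x^2 - d

disc_p = (r1 - r2) ** 2                  # disc(p) = (r1 - r2)^2

# Δ(1, √d) = det [[tr(1), tr(√d)], [tr(√d), tr(d)]] = det [[2, 0], [0, 2d]]
gram = [[2, 0],
        [0, 2 * d]]
delta = gram[0][0] * gram[1][1] - gram[0][1] * gram[1][0]

print(round(disc_p), delta)   # 8 8, i.e. both equal 4d
```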
+
+
+\section{Multiplicative structure of ideals}
+Again, let $L/\Q$ be a number field. It turns out that in general, the ring of integers $\mathcal{O}_L$ is not too well-behaved as a ring. In particular, it fails to be a UFD in general.
+
+\begin{eg}
+ Let $L = \Q(\sqrt{-5})$. Then $\mathcal{O}_L = \Z[\sqrt{-5}]$. Then we find
+ \[
+ 3 \cdot 7 = (1 + 2\sqrt{-5})(1 - 2\sqrt{-5}).
+ \]
+ These have norms $9, 49, 21, 21$ respectively. So none of $3, 7, 1 \pm 2\sqrt{-5}$ are associates.
+
+ Moreover, $3, 7, 1 \pm 2\sqrt{-5}$ are all irreducibles. The proof is just a straightforward check on the norms.
+
+ For example, to show that $3$ is irreducible, if $3 = \alpha \beta$, then $9 = N(3) = N(\alpha) N(\beta)$. Since none of the terms on the right are $\pm 1$, we must have $N(\alpha) = \pm 3$. But there are no solutions to
+ \[
+ x^2 + 5y^2 = \pm 3
+ \]
+ where $x, y$ are integers. So there is no $\alpha = x + y\sqrt{-5}$ such that $N(\alpha) = \pm 3$.
+
+ So unique factorization fails.
+\end{eg}
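Both sides of this failed factorization are easy to verify mechanically: multiply out $(1 + 2\sqrt{-5})(1 - 2\sqrt{-5})$, compute the norms, and brute-force the non-existence of elements of norm $3$. A sketch (helper names ours):

```python
def norm(a, b):
    """Norm of a + b*sqrt(-5): a^2 + 5*b^2."""
    return a * a + 5 * b * b

def mul(p, q):
    """Product in Z[√-5], as coefficient pairs (a, b) for a + b√-5."""
    (a, b), (c, d) = p, q
    return (a * c - 5 * b * d, a * d + b * c)

print(mul((1, 2), (1, -2)))                                # (21, 0) = 3 · 7
print([norm(3, 0), norm(7, 0), norm(1, 2), norm(1, -2)])   # [9, 49, 21, 21]

# no element of norm 3 (the box suffices, since a^2 + 5b^2 = 3 bounds |a|, |b|)
print(any(norm(a, b) == 3 for a in range(-2, 3) for b in range(-1, 2)))  # False
```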
+
+Note that it is still possible to factor any element into irreducibles, just not uniquely --- we induct on $|N(\alpha)|$. If $|N(\alpha)| = 1$, then $\alpha$ is a unit. Otherwise, $\alpha$ is either irreducible, or $\alpha = \beta \gamma$. Since $N(\beta)N(\gamma) = N(\alpha)$, and none of them are $\pm 1$, we must have $|N(\beta)|, |N(\gamma)| < |N(\alpha)|$. So done by induction.
+
+To fix the lack of unique factorization, we instead look at ideals in $\mathcal{O}_L$. This has a natural multiplicative structure --- the product of two ideals $\mathfrak{a}, \mathfrak{b}$ is generated by products $ab$, with $a \in \mathfrak{a}, b \in \mathfrak{b}$. The big theorem is that every ideal can be written uniquely as a product of prime ideals.
+\begin{defi}[Ideal multiplication]\index{ideal!multiplication}\index{multiplication of ideals}
+ Let $\mathfrak{a}, \mathfrak{b}\lhd \mathcal{O}_L$ be ideals. Then we define the product $\mathfrak{a}\mathfrak{b}$ as
+ \[
+ \mathfrak{a} \mathfrak{b} = \left\{\sum_{i, j} \alpha_i \beta_j: \alpha_i \in \mathfrak{a}, \beta_j \in \mathfrak{b}\right\}.
+ \]
+ We write $\mathfrak{a} \mid \mathfrak{b}$ if there is some ideal $\mathfrak{c}$ such that $\mathfrak{a}\mathfrak{c} = \mathfrak{b}$, and say $\mathfrak{a}$ \term{divides} $\mathfrak{b}$.
+\end{defi}
+The proof of unique factorization is the same as the proof that $\Z$ is a UFD. Usually, when we want to prove factorization is unique, we write an object as
+\[
+ a = x_1 x_2 \cdots x_m = y_1 y_2 \cdots y_n.
+\]
+We then use primality to argue that $x_1$ must be equal to some of the $y_i$, and then \emph{cancel them from both sides}. We can usually do this because we are working with an integral domain. However, we don't have this luxury when working with ideals.
+
+Thus, what we are going to do is to find inverses for our ideals. Of course, given any ideal $\mathfrak{a}$, there is no ideal $\mathfrak{a}^{-1}$ such that $\mathfrak{a}\mathfrak{a}^{-1} = \mathcal{O}_L$, as for any $\mathfrak{b}$, we know $\mathfrak{a}\mathfrak{b}$ is contained in $\mathfrak{a}$. Thus we are going to consider more general objects known as \emph{fractional ideals}, and then this will allow us to prove unique factorization.
+
+Even better, we will show that $\mathfrak{a} \mid \mathfrak{b}$ is equivalent to $\mathfrak{b} \subseteq \mathfrak{a}$. This is a very useful result, since it is often very easy to show that $\mathfrak{b} \subseteq \mathfrak{a}$, but it is usually very hard to actually find the quotient $\mathfrak{a}^{-1}\mathfrak{b}$.
+
+%\begin{defi}[Addition of ideals]
+% Let $\mathfrak{a}, \mathfrak{b}$ be ideas of $R$. Then
+% \[
+% \mathfrak{a} + \mathfrak{b} = \{a + b: a \in \mathfrak{a}, b \in \mathfrak{b}\}
+% \]
+% is an ideal.
+%\end{defi}
+
+We first look at some examples of multiplication and factorization of ideals to get a feel of what these things look like.
+\begin{eg}
+ We have
+ \[
+ \bra x_1, \cdots, x_n\ket \bra y_1, \cdots, y_m\ket = \bra x_i y_j: 1 \leq i \leq n, 1 \leq j \leq m\ket.
+ \]
+ In particular,
+ \[
+ \bra x\ket \bra y\ket = \bra xy\ket.
+ \]
+\end{eg}
+It is also an easy exercise to check $(\mathfrak{a}\mathfrak{b})\mathfrak{c} = \mathfrak{a}(\mathfrak{b}\mathfrak{c})$.
+
+\begin{eg}
+ In $\Z[\sqrt{-5}]$, we claim that we have
+ \[
+ \bra 3\ket = \bra 3, 1 + \sqrt{-5}\ket \bra 3, 1 - \sqrt{-5}\ket.
+ \]
+ So $\bra 3\ket$ is not irreducible.
+
+ Indeed, we can compute
+ \[
+ \bra 3, 1 + \sqrt{-5}\ket \bra 3, 1 - \sqrt{-5}\ket = \bra 9, 3(1 + \sqrt{-5}), 3(1 - \sqrt{-5}), 6\ket.
+ \]
+ Each generator on the right is a multiple of $3$, and conversely $\gcd(9, 6) = 3$, so $3 = 9 - 6$ lies in the right-hand side. So this is in fact equal to $\bra 3\ket$.
+\end{eg}
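The product can be checked by multiplying the generators pairwise: every product is divisible by $3$, and conversely $3$ is an integer combination of them. A sketch (helper names ours):

```python
def mul(p, q):
    """Product in Z[√-5], as coefficient pairs (a, b) for a + b√-5."""
    (a, b), (c, d) = p, q
    return (a * c - 5 * b * d, a * d + b * c)

g1 = [(3, 0), (1, 1)]     # generators of <3, 1 + √-5>
g2 = [(3, 0), (1, -1)]    # generators of <3, 1 - √-5>

prods = [mul(x, y) for x in g1 for y in g2]
print(prods)              # [(9, 0), (3, -3), (3, 3), (6, 0)]

# every generator of the product is a multiple of 3 ...
print(all(a % 3 == 0 and b % 3 == 0 for a, b in prods))   # True
# ... and 3 = 9 - 6 lies in the product, so the product is exactly <3>
```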
+Notice that when we worked with \emph{elements}, the number $3$ was irreducible, as there is no element of norm $3$. Thus, scenarios such as $2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5})$ could appear and mess up unique factorization. By passing to ideals, we can further factorize $\bra 3\ket$ into a product of smaller ideals. Of course, these cannot be principal ideals, or else we would have obtained a factorization of $3$ itself. So we can think of these ideals as ``generalized elements'' that allow us to further break elements down.
+
+Indeed, given any element in $\alpha \in \mathcal{O}_L$, we obtain an ideal $\bra \alpha \ket$ corresponding to $\alpha$. This map is not injective --- if two elements differ by a unit, i.e.\ they are \term{associates}, then they would give us the same ideal. However, this is fine, as we usually think of associates as being ``the same''.
+
+We recall the following definition:
+\begin{defi}[Prime ideal]\index{prime ideal}\index{ideal!prime}
+ Let $R$ be a ring. An ideal $\mathfrak{p} \subseteq R$ is \emph{prime} if $R/\mathfrak{p}$ is an integral domain. Equivalently, $\mathfrak{p} \not= R$ and for all $x, y \in R$, $xy \in \mathfrak{p}$ implies $x \in \mathfrak{p}$ or $y \in \mathfrak{p}$.
+
+ In this course, we take the convention that a prime ideal is \emph{non-zero}. This is not standard, but it saves us from saying ``non-zero'' all the time.
+\end{defi}
+
+It turns out that the ring of integers $\mathcal{O}_L$ is a very special kind of ring, known as a \emph{Dedekind domain}:
+\begin{defi}[Dedekind domain]\index{Dedekind domain}
+ A ring $R$ is a \emph{Dedekind domain} if
+ \begin{enumerate}
+ \item $R$ is an integral domain.
+ \item $R$ is a Noetherian ring.
+ \item $R$ is integrally closed in $\Frac R$, i.e.\ if $x \in \Frac R$ is integral over $R$, then $x \in R$.
+ \item Every non-zero prime ideal is maximal.
+ \end{enumerate}
+\end{defi}
+This is a rather specific list of properties $\mathcal{O}_L$ happens to satisfy, and it turns out most interesting properties of $\mathcal{O}_L$ can be extended to arbitrary Dedekind domains. However, we will not do the general theory, and just study number fields in particular.
+
+The important result is, of course:
+\begin{prop}
+ Let $L / \Q$ be a number field, and $\mathcal{O}_L$ be its ring of integers. Then $\mathcal{O}_L$ is a Dedekind domain.
+\end{prop}
+The first three parts of the definition are just bookkeeping and not too interesting. The last one is what we really want. This says that $\mathcal{O}_L$ is ``one dimensional'', if you know enough algebraic geometry.
+
+\begin{proof}[Proof of (i) to (iii)]\leavevmode
+ \begin{enumerate}
+ \item Obvious, since $\mathcal{O}_L \subseteq L$.
+ \item We showed that as an abelian group, $\mathcal{O}_L = \Z^n$. So if $\mathfrak{a} \leq \mathcal{O}_L$ is an ideal, then $\mathfrak{a} \leq \Z^n$ as a subgroup. So it is finitely generated as an abelian group, and hence finitely generated as an ideal.
+ \item Note that $\Frac \mathcal{O}_L = L$. If $x \in L$ is integral over $\mathcal{O}_L$, as $\mathcal{O}_L$ is integral over $\Z$, $x$ is also integral over $\Z$. So $x \in \mathcal{O}_L$, by definition of $\mathcal{O}_L$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+To prove the last part, we need the following lemma, which is also very important in its own right.
+\begin{lemma}
+ Let $\mathfrak{a}\lhd \mathcal{O}_L$ be a non-zero ideal. Then $\mathfrak{a} \cap \Z \not= \{0\}$ and $\mathcal{O}_L/\mathfrak{a}$ is finite.
+\end{lemma}
+
+\begin{proof}
+ Let $\alpha \in \mathfrak{a}$ and $\alpha \not= 0$. Let
+ \[
+ p_\alpha = x^m + a_{m - 1}x^{m - 1} + \cdots + a_0
+ \]
+ be its minimal polynomial. Then $p_\alpha \in \Z[x]$. We know $a_0 \not= 0$ as $p_\alpha$ is irreducible.
+
+ Since $p_\alpha(\alpha) = 0$, we know
+ \[
+ a_0 = -\alpha(\alpha^{m - 1} + a_{m - 1} \alpha^{m - 2} + \cdots + a_2 \alpha + a_1).
+ \]
+ We know $\alpha \in \mathfrak{a}$ by assumption, and the mess in the brackets is in $\mathcal{O}_L$. So the whole thing is in $\mathfrak{a}$. But $a_0 \in \Z$. So $a_0 \in \Z \cap \mathfrak{a}$.
+
+ Thus, we know $\bra a_0\ket \subseteq \mathfrak{a}$. Thus we get a surjection
+ \[
+ \frac{\mathcal{O}_L}{\bra a_0\ket} \rightarrow \frac{\mathcal{O}_L}{\mathfrak{a}}.
+ \]
+ Hence it suffices to show that $\mathcal{O}_L/\bra a_0\ket$ is finite. But for every $d \in \Z$, we know
+ \[
+ \frac{\mathcal{O}_L}{\bra d\ket} = \frac{\Z^n}{d\Z^n} = \left(\frac{\Z}{d\Z}\right)^n,
+ \]
+ which is finite.
+\end{proof}
+
+Finally, recall that a finite integral domain must be a field --- let $x \in R$ with $x \not= 0$. Then $m_x: y \mapsto xy$ is injective, as $R$ is an integral domain. So it is a bijection, as $R$ is finite. So there is some $y \in R$ such that $xy = 1$.
+
+This allows us to prove the last part:
+\begin{proof}[Proof of (iv)]
+ Let $\mathfrak{p}$ be a prime ideal. Then $\mathcal{O}_L/\mathfrak{p}$ is an integral domain. Since the lemma says $\mathcal{O}_L/\mathfrak{p}$ is finite, we know $\mathcal{O}_L/\mathfrak{p}$ is a field. So $\mathfrak{p}$ is maximal.
+\end{proof}
+
+We now continue on to prove a few more technical results.
+\begin{lemma}
+ Let $\mathfrak{p}$ be a prime ideal in a ring $R$, and let $\mathfrak{a}, \mathfrak{b}\lhd R$ be ideals. Then $\mathfrak{a}\mathfrak{b} \subseteq \mathfrak{p}$ implies $\mathfrak{a} \subseteq \mathfrak{p}$ or $\mathfrak{b}\subseteq \mathfrak{p}$.
+\end{lemma}
+Once we've shown that inclusion of ideals is equivalent to divisibility, this in effect says ``prime ideals are primes''.
+
+\begin{proof}
+ If not, then there are some $a \in \mathfrak{a}\setminus \mathfrak{p}$ and $b \in \mathfrak{b}\setminus \mathfrak{p}$. Then $ab \in \mathfrak{a}\mathfrak{b} \subseteq \mathfrak{p}$. Since $\mathfrak{p}$ is prime, $a \in \mathfrak{p}$ or $b \in \mathfrak{p}$. Contradiction.
+\end{proof}
+
+Eventually, we will prove that every ideal is a product of prime ideals. However, we cannot prove that just yet. Instead, we will prove the following ``weaker'' version of that result:
+\begin{lemma}
+ Let $0 \not= \mathfrak{a} \lhd \mathcal{O}_L$ be a non-zero ideal. Then $\mathfrak{a}$ contains a product of non-zero prime ideals, i.e.\ there are prime ideals $\mathfrak{p}_1, \cdots, \mathfrak{p}_r$ with $\mathfrak{p}_1 \cdots \mathfrak{p}_r \subseteq \mathfrak{a}$.
+\end{lemma}
+
+The proof is some unenlightening abstract nonsense.
+\begin{proof}
+ We are going to use the fact that $\mathcal{O}_L$ is Noetherian. If the result does not hold, then there must exist an ideal $\mathfrak{a}$ maximal with the property of not containing a product of prime ideals (by which we mean any ideal strictly bigger than $\mathfrak{a}$ contains a product of prime ideals, \emph{not} that $\mathfrak{a}$ is itself a maximal ideal). In particular, $\mathfrak{a}$ is not prime. So there are some $x,y \in \mathcal{O}_L$ such that $x, y \not\in \mathfrak{a}$ but $xy \in \mathfrak{a}$.
+
+ Consider $\mathfrak{a} + \bra x\ket$. This is an ideal strictly bigger than $\mathfrak{a}$. So there exist prime ideals $\mathfrak{p}_1, \cdots, \mathfrak{p}_r$ such that $\mathfrak{p}_1\cdots \mathfrak{p}_r \subseteq \mathfrak{a} + \bra x\ket$, by the maximality of $\mathfrak{a}$.
+
+ Similarly, there exist prime ideals $\mathfrak{q}_1, \cdots, \mathfrak{q}_s$ such that $\mathfrak{q}_1 \cdots \mathfrak{q}_s \subseteq \mathfrak{a} + \bra y\ket$.
+
+ But then
+ \[
+ \mathfrak{p}_1\cdots\mathfrak{p}_r \mathfrak{q}_1\cdots\mathfrak{q}_s \subseteq (\mathfrak{a} + \bra x\ket)(\mathfrak{a} + \bra y\ket) \subseteq \mathfrak{a} + \bra xy\ket = \mathfrak{a}
+ \]
+ So $\mathfrak{a}$ contains a product of prime ideals. Contradiction.
+\end{proof}
+
+Recall that for integers, we can multiply, but not divide. To make life easier, we would like to formally add inverses to the elements. If we do so, we obtain things like $\frac{1}{3}$, and obtain the rationals.
+
+Now we have ideals. What can we do? We can \emph{formally} add some inverse and impose some nonsense rules to make sure it is consistent, but it is helpful to actually construct something explicitly that acts as an inverse. We can then understand what significance these inverses have in terms of the rings.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Let $0 \not= \mathfrak{a} \lhd \mathcal{O}_L$ be an ideal. If $x \in L$ has $x\mathfrak{a} \subseteq \mathfrak{a}$, then $x \in \mathcal{O}_L$.
+ \item Let $0 \not= \mathfrak{a} \lhd \mathcal{O}_L$ be a \emph{proper} ideal. Then
+ \[
+ \{y \in L: y \mathfrak{a} \leq \mathcal{O}_L\}
+ \]
+ contains elements that are not in $\mathcal{O}_L$. In other words,
+ \[
+ \frac{\{y \in L: y \mathfrak{a} \leq \mathcal{O}_L\}}{\mathcal{O}_L} \not = 0.
+ \]
+ \end{enumerate}
+\end{prop}
+We will see that the object $\{y \in L: y \mathfrak{a} \leq \mathcal{O}_L\}$ is in some sense an inverse to $\mathfrak{a}$.
+
+Before we prove this, it is helpful to see what this means in a concrete setting.
+\begin{eg}
+ Consider $\mathcal{O}_L = \Z$ and $\mathfrak{a} = 3\Z$. Then the first part says if $\frac{a}{b} \cdot 3\Z \subseteq 3\Z$, then $\frac{a}{b} \in \Z$. The second says
+ \[
+ \left\{\frac{a}{b}: \frac{a}{b}\cdot 3 \in \Z\right\}
+ \]
+ contains something not in $\Z$, say $\frac{1}{3}$. These are both ``obviously true''.
+\end{eg}
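A quick numerical rendering of this example using exact rationals: membership of $y$ in $\{y : y \cdot 3\Z \subseteq \Z\}$ just means $3y \in \Z$, and the set is $\frac{1}{3}\Z$. (The helper name is ours; we only test membership on a sample of $3\Z$.)

```python
from fractions import Fraction

def maps_3Z_into_Z(y, sample=range(-20, 21)):
    """Does y · 3Z land inside Z (tested on a sample of multiples)?"""
    return all((y * 3 * n).denominator == 1 for n in sample)

print(maps_3Z_into_Z(Fraction(1, 3)))   # True:  (1/3) · 3 = 1 ∈ Z
print(maps_3Z_into_Z(Fraction(2, 3)))   # True:  2/3 also lies in (1/3)Z
print(maps_3Z_into_Z(Fraction(1, 2)))   # False: (1/2) · 3 = 3/2 ∉ Z
```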
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Since $\mathcal{O}_L$ is Noetherian, we know $\mathfrak{a}$ is finitely generated, say by $\alpha_1, \cdots, \alpha_n$. We consider the multiplication-by-$x$ map $m_x: \mathfrak{a} \to \mathfrak{a}$, i.e.\ write
+ \[
+ x\alpha_i = \sum a_{ij} \alpha_j,
+ \]
+ where $A = (a_{ij})$ is an $n \times n$ matrix with entries in $\mathcal{O}_L$. So we know
+ \[
+ (xI - A)
+ \begin{pmatrix}
+ \alpha_1\\\vdots\\ \alpha_n
+ \end{pmatrix}
+ = 0.
+ \]
+ By multiplying by the adjugate matrix, this implies $\det (xI - A) = 0$. So $x$ satisfies a monic polynomial with coefficients in $\mathcal{O}_L$, i.e.\ $x$ is integral over $\mathcal{O}_L$. Since $\mathcal{O}_L$ is integrally closed, $x \in \mathcal{O}_L$.
+
+ \item It is clear that if the result is true for $\mathfrak{a}$, then it is true for all $\mathfrak{a}' \subseteq \mathfrak{a}$. So it is enough to prove this for $\mathfrak{a} = \mathfrak{p}$, a maximal, and in particular prime, ideal.
+
+ Let $\alpha \in \mathfrak{p}$ be non-zero. By the previous lemma, there exist prime ideals $\mathfrak{p}_1, \cdots, \mathfrak{p}_r$ such that $\mathfrak{p}_1 \cdots \mathfrak{p}_r \subseteq \bra \alpha\ket$, and we may assume $r$ is minimal with this property. We also have $\bra \alpha\ket \subseteq \mathfrak{p}$ by definition. Since $\mathfrak{p}$ is prime, there is some $i$ such that $\mathfrak{p}_i \subseteq \mathfrak{p}$. wlog, we may as well assume $i = 1$, i.e.\ $\mathfrak{p}_1 \subseteq \mathfrak{p}$. But $\mathfrak{p}_1$ is a prime ideal, and hence maximal. So $\mathfrak{p}_1 = \mathfrak{p}$.
+
+ Also, since $r$ is minimal, we know $\mathfrak{p}_2 \cdots \mathfrak{p}_r \not\subseteq \bra \alpha\ket$.
+
+ Pick $\beta \in \mathfrak{p}_2 \cdots \mathfrak{p}_r \setminus \bra \alpha\ket$. Then
+ \[
+ \beta \mathfrak{p} = \beta \mathfrak{p}_1 \subseteq \mathfrak{p}_1 \mathfrak{p}_2 \cdots \mathfrak{p}_r \subseteq \bra \alpha\ket.
+ \]
+ Dividing by $\alpha$, we get $\frac{\beta}{\alpha}\mathfrak{p} \subseteq \mathcal{O}_L$. But $\beta \not\in \bra \alpha\ket$. So we know $\frac{\beta}{\alpha} \not\in \mathcal{O}_L$. So done.\qedhere
+ \end{enumerate}
+\end{proof}
+What is this $\{x \in L: x \mathfrak{a} \leq \mathcal{O}_L\}$? It is not an ideal, but it almost is --- the only way it fails to be an ideal is that it need not be contained in $\mathcal{O}_L$. It is still closed under addition, and under multiplication by elements of $\mathcal{O}_L$. So it is an $\mathcal{O}_L$-module, which is finitely generated (we will see this in a moment), and a subset of $L$. We call such an object a ``fractional ideal''.
+
+\begin{defi}[Fractional ideal]\index{fractional ideal}\index{ideal!fractional}
+ A \emph{fractional ideal} of $\mathcal{O}_L$ is a finitely-generated $\mathcal{O}_L$-submodule of $L$.
+\end{defi}
+
+\begin{defi}[Integral/honest ideal]\index{integral ideal}\index{honest ideal}\index{ideal!honest}\index{ideal!integral}
+ If we want to emphasize that $\mathfrak{a} \lhd \mathcal{O}_L$ is an ideal, we say it is an \emph{integral or honest ideal}. But we never use ``ideal'' to mean fractional ideal.
+\end{defi}
+
+Note that the definition of fractional ideal makes sense only because $\mathcal{O}_L$ is Noetherian. Otherwise, the non-finitely-generated honest ideals would not qualify as fractional ideals, which is bad. Rather, in the general case, the following characterization is more helpful:
+
+\begin{lemma}
+ An $\mathcal{O}_L$ module $\mathfrak{q} \subseteq L$ is a fractional ideal if and only if there is some $c \in L^\times$ such that $c\mathfrak{q}$ is an ideal in $\mathcal{O}_L$. Moreover, we can pick $c$ such that $c \in \Z$.
+\end{lemma}
+In other words, each fractional ideal is of the form $\frac{1}{c} \mathfrak{a}$ for some honest ideal $\mathfrak{a}$ and integer $c$.
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item[$(\Leftarrow)$] We have to prove that $\mathfrak{q}$ is finitely generated. If $c \in L^\times$, then $c\mathfrak{q} \cong \mathfrak{q}$ as $\mathcal{O}_L$-modules. Since $\mathcal{O}_L$ is Noetherian, every ideal is finitely generated. So $c\mathfrak{q}$, and hence $\mathfrak{q}$, is finitely generated.
+ \item[$(\Rightarrow)$] Suppose $x_1, \cdots, x_k$ generate $\mathfrak{q}$ as an $\mathcal{O}_L$-module. Write $x_i = \frac{y_i}{n_i}$, with $y_i \in \mathcal{O}_L$ and $n_i \in \Z$, $n_i \not= 0$, which we have previously shown is possible.
+
+ We let $c = \lcm(n_1, \cdots, n_k)$. Then $c\mathfrak{q} \subseteq \mathcal{O}_L$, and is an $\mathcal{O}_L$-submodule of $\mathcal{O}_L$, i.e.\ an ideal.\qedhere
+ \end{itemize}
+\end{proof}
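The recipe in the proof is concrete: clear denominators by the lcm. For instance, take the fractional $\Z$-ideal $\mathfrak{q}$ generated by $\frac{1}{2}$ and $\frac{1}{3}$; scaling by $c = 6$ gives the honest ideal $\bra 3, 2\ket = \bra 1\ket = \Z$, so $\mathfrak{q} = \frac{1}{6}\Z$. A sketch of this computation:

```python
from fractions import Fraction
from math import gcd, lcm

gens = [Fraction(1, 2), Fraction(1, 3)]   # generators of q over Z

c = lcm(*(g.denominator for g in gens))   # clear denominators, as in the proof
scaled = [int(c * g) for g in gens]       # generators of c·q, an ideal of Z
print(c, scaled)                          # 6 [3, 2]

# <3, 2> = <gcd(3, 2)> = <1> = Z, hence q = (1/6) Z
print(gcd(*scaled))                       # 1
```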
+
+\begin{cor}
+ Let $\mathfrak{q}$ be a fractional ideal. Then as an abelian group, $\mathfrak{q} \cong \Z^n$, where $n = [L:\Q]$.
+\end{cor}
+
+\begin{proof}
+ There is some $c \in L^\times$ such that $c\mathfrak{q} \lhd \mathcal{O}_L$ as an ideal, with $c\mathfrak{q} \cong \mathfrak{q}$ as abelian groups. So it suffices to show that any non-zero ideal $\mathfrak{q} \leq \mathcal{O}_L$ is isomorphic to $\Z^n$. Since $\mathfrak{q} \leq \mathcal{O}_L \cong \Z^n$ as abelian groups, we know $\mathfrak{q} \cong \Z^m$ for some $m \leq n$. But also there is some non-zero $a_0 \in \Z \cap \mathfrak{q}$, and $\Z^n \cong \bra a_0\ket \leq \mathfrak{q}$. So we must have $m = n$, and $\mathfrak{q} \cong \Z^n$.
+\end{proof}
+
+\begin{cor}
+ Let $\mathfrak{a} \leq \mathcal{O}_L$ be a non-zero ideal. Then $\{x \in L: x\mathfrak{a} \leq \mathcal{O}_L\}$ is a fractional ideal.
+\end{cor}
+
+\begin{proof}
+ Pick a non-zero $a \in \mathfrak{a}$. Then $a \cdot \{x \in L: x\mathfrak{a} \leq \mathcal{O}_L\} \subseteq \mathcal{O}_L$ is an $\mathcal{O}_L$-submodule of $\mathcal{O}_L$, i.e.\ an ideal in $\mathcal{O}_L$.
+\end{proof}
+Finally, we can state the proposition we want to prove, after all that nonsense work.
+
+\begin{defi}[Invertible fractional ideal]\index{invertible fractional ideal}\index{fractional ideal!invertible}\index{ideal!invertible}
+ A fractional ideal $\mathfrak{q}$ is \emph{invertible} if there exists a fractional ideal $\mathfrak{r}$ such that $\mathfrak{q}\mathfrak{r} = \mathcal{O}_L = \bra 1\ket$.
+\end{defi}
+Notice we can multiply fractional ideals using the same definition as for integral ideals.
+
+\begin{prop}
+ Every non-zero fractional ideal is invertible. The inverse of $\mathfrak{q}$ is
+ \[
+ \{x \in L: x\mathfrak{q} \subseteq \mathcal{O}_L\}.
+ \]
+\end{prop}
+This is good.
+
+Note that if $\mathfrak{q} = \frac{1}{n} \mathfrak{a}$ and $\mathfrak{r} = \frac{1}{m} \mathfrak{b}$, and $\mathfrak{a}, \mathfrak{b} \lhd \mathcal{O}_L$ are integral ideals, then
+\[
+ \mathfrak{q} \mathfrak{r} = \frac{1}{mn} \mathfrak{a}\mathfrak{b} = \mathcal{O}_L
+\]
+if and only if $\mathfrak{a}\mathfrak{b} = \bra mn\ket$. So the proposition is equivalent to the statement that for every $\mathfrak{a} \lhd \mathcal{O}_L$, there exists an ideal $\mathfrak{b} \lhd \mathcal{O}_L$ such that $\mathfrak{a}\mathfrak{b}$ is principal.
+
+\begin{proof}
+ Note that for any $n \in \mathcal{O}_L$ non-zero, we know $\mathfrak{q}$ is invertible if and only if $n\mathfrak{q}$ is invertible. So if the proposition is false, there is an integral ideal $\mathfrak{a} \lhd \mathcal{O}_L$ which is not invertible. Moreover, as $\mathcal{O}_L$ is Noetherian, we can assume $\mathfrak{a}$ is maximal with this property, i.e.\ if $\mathfrak{a} < \mathfrak{a}' < \mathcal{O}_L$, then $\mathfrak{a}'$ is invertible.
+
+ Let $\mathfrak{b} = \{x \in L: x \mathfrak{a} \subseteq \mathcal{O}_L\}$, a fractional ideal. We clearly have $\mathcal{O}_L \subseteq \mathfrak{b}$, and by our previous proposition, we know this inclusion is strict.
+
+ As $\mathcal{O}_L \subseteq \mathfrak{b}$, we know $\mathfrak{a} \subseteq \mathfrak{a} \mathfrak{b}$. Again, this inclusion is strict --- if $\mathfrak{a} \mathfrak{b} = \mathfrak{a}$, then for all $x \in \mathfrak{b}$, we have $x\mathfrak{a} \subseteq \mathfrak{a}$, and we have shown that this implies $x \in \mathcal{O}_L$, but we cannot have $\mathfrak{b} \subseteq \mathcal{O}_L$.
+
+ So $\mathfrak{a} \subsetneq \mathfrak{a} \mathfrak{b}$. By assumption, we also have $\mathfrak{a} \mathfrak{b} \subseteq \mathcal{O}_L$, and since $\mathfrak{a}$ is not invertible, this is strict. But then by definition of $\mathfrak{a}$, we know $\mathfrak{a}\mathfrak{b}$ is invertible, which implies $\mathfrak{a}$ is invertible (if $\mathfrak{c}$ is an inverse of $\mathfrak{a}\mathfrak{b}$, then $\mathfrak{b}\mathfrak{c}$ is an inverse of $\mathfrak{a}$). This is a contradiction. So all fractional ideals must be invertible.
+
+ Finally, we have to show that the formula for the inverse holds. We write
+ \[
+ \mathfrak{c} = \{x \in L: x\mathfrak{q} \subseteq \mathcal{O}_L \}.
+ \]
+ Then by definition, we know $\mathfrak{q}^{-1} \subseteq \mathfrak{c}$. So
+ \[
+ \mathcal{O}_L = \mathfrak{q}\mathfrak{q}^{-1}\subseteq \mathfrak{q} \mathfrak{c} \subseteq \mathcal{O}_L.
+ \]
+ Hence we must have $\mathfrak{q}\mathfrak{c} = \mathcal{O}_L$, i.e.\ $\mathfrak{c} = \mathfrak{q}^{-1}$.
+\end{proof}
+
+We're now done with the annoying commutative algebra, and can finally prove something interesting.
+\begin{cor}
+ Let $\mathfrak{a}, \mathfrak{b}, \mathfrak{c}\lhd \mathcal{O}_L$ be ideals, $\mathfrak{c} \not= 0$. Then
+ \begin{enumerate}
+ \item $\mathfrak{b} \subseteq \mathfrak{a}$ if and only if $\mathfrak{b}\mathfrak{c} \subseteq \mathfrak{a}\mathfrak{c}$
+ \item $\mathfrak{a} \mid \mathfrak{b}$ if and only if $\mathfrak{a} \mathfrak{c} \mid \mathfrak{b}\mathfrak{c}$
+ \item $\mathfrak{a} \mid \mathfrak{b}$ if and only if $\mathfrak{b} \subseteq \mathfrak{a}$.
+ \end{enumerate}
+\end{cor}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $(\Rightarrow)$ is clear, and $(\Leftarrow)$ is obtained by multiplying with $\mathfrak{c}^{-1}$.
+ \item $(\Rightarrow)$ is clear, and $(\Leftarrow)$ is obtained by multiplying with $\mathfrak{c}^{-1}$.
+ \item $(\Rightarrow)$ is clear. For the other direction, we notice that the result is easy if $\mathfrak{a} = \bra \alpha\ket$ is principal. Indeed, if $\mathfrak{b} = \bra \beta_1, \cdots, \beta_r\ket$, then $\mathfrak{b} \subseteq \bra \alpha\ket$ means there are some $\beta_1', \cdots, \beta_r' \in \mathcal{O}_L$ such that $\beta_i = \beta_i' \alpha$. But this says
+ \[
+ \bra \beta_1, \cdots, \beta_r\ket = \bra \beta_1', \cdots, \beta_r' \ket \bra \alpha\ket.
+ \]
+ So $\bra \alpha\ket \mid \mathfrak{b}$.
+
+ In general, suppose we have $\mathfrak{b} \subseteq \mathfrak{a}$. By the proposition, there exists an ideal $\mathfrak{c} \lhd \mathcal{O}_L$ such that $\mathfrak{a}\mathfrak{c} = \bra \alpha\ket$ is principal with $\alpha \in \mathcal{O}_L, \alpha \not= 0$. Then
+ \begin{itemize}
+ \item $\mathfrak{b}\subseteq \mathfrak{a}$ if and only if $\mathfrak{b}\mathfrak{c} \subseteq \bra \alpha\ket$ by (i); and
+ \item $\mathfrak{a} \mid \mathfrak{b}$ if and only if $\bra \alpha\ket \mid \mathfrak{b}\mathfrak{c}$ by (ii).
+ \end{itemize}
+ So the result follows.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Finally, we can prove the unique factorization of prime ideals:
+\begin{thm}\index{unique factorization of ideals}\index{ideal!unique factorization}
+ Let $\mathfrak{a} \lhd \mathcal{O}_L$ be an ideal, $\mathfrak{a} \not= 0$. Then $\mathfrak{a}$ can be written uniquely as a product of prime ideals.
+\end{thm}
+
+\begin{proof}
+ To show existence: if $\mathfrak{a}$ is prime, then there is nothing to do. Otherwise, $\mathfrak{a}$ is not maximal, since maximal ideals are prime. So there is some $\mathfrak{b} \lhd \mathcal{O}_L$ with $\mathfrak{a} \subsetneq \mathfrak{b} \subsetneq \mathcal{O}_L$. Hence $\mathfrak{b} \mid \mathfrak{a}$, i.e.\ there is some $\mathfrak{c} \lhd \mathcal{O}_L$ with $\mathfrak{a} = \mathfrak{b} \mathfrak{c}$, and $\mathfrak{c} \supsetneq \mathfrak{a}$ (if $\mathfrak{c} = \mathfrak{a}$, then multiplying by $\mathfrak{c}^{-1}$ gives $\mathfrak{b} = \mathcal{O}_L$). We can continue factoring this way, and the process must terminate, for otherwise we would get an infinite chain of strictly ascending ideals, contradicting that $\mathcal{O}_L$ is Noetherian.
+
+ We prove uniqueness the usual way. We have shown $\mathfrak{p} \mid \mathfrak{a}\mathfrak{b}$ implies $\mathfrak{p} \mid \mathfrak{a}$ or $\mathfrak{p} \mid \mathfrak{b}$. So if $\mathfrak{p}_1 \cdots \mathfrak{p}_r = \mathfrak{q}_1 \cdots \mathfrak{q}_s$, with $\mathfrak{p}_i, \mathfrak{q}_j$ prime, then we know $\mathfrak{p}_1 \mid \mathfrak{q}_1 \cdots \mathfrak{q}_s$, which implies $\mathfrak{p}_1 \mid \mathfrak{q}_i$ for some $i$, and wlog $i = 1$. So $\mathfrak{q}_1 \subseteq \mathfrak{p}_1$. But $\mathfrak{q}_1$ is prime and hence maximal. So $\mathfrak{p}_1 = \mathfrak{q}_1$.
+
+ Multiply the equation $\mathfrak{p}_1 \cdots \mathfrak{p}_r = \mathfrak{q}_1 \cdots \mathfrak{q}_s$ by $\mathfrak{p}_1^{-1}$, and we get $\mathfrak{p}_2 \cdots \mathfrak{p}_r = \mathfrak{q}_2 \cdots \mathfrak{q}_s$. Repeat, and we get $r = s$ and $\mathfrak{p}_i = \mathfrak{q}_i$ for all $i$ (after renumbering).
+\end{proof}
+
+\begin{cor}
+ The non-zero fractional ideals form a group under multiplication. We denote this $I_L$\index{$I_L$}. This is a free abelian group generated by the prime ideals, i.e.\ any fractional ideal $\mathfrak{q}$ can be written uniquely as $\mathfrak{p}_1^{a_1} \cdots \mathfrak{p}_r^{a_r}$, with $\mathfrak{p}_i$ distinct prime ideals and $a_i \in \Z$.
+
+ Moreover, if $\mathfrak{q}$ is an integral ideal, i.e.\ $\mathfrak{q} \lhd \mathcal{O}_L$, then $a_1, \cdots, a_r \geq 0$.
+\end{cor}
+
+\begin{proof}
+ We already have unique factorization of honest ideals. Now take any fractional ideal, and write it as $\mathfrak{q} = \mathfrak{a} \mathfrak{b}^{-1}$, with $\mathfrak{a}, \mathfrak{b} \lhd \mathcal{O}_L$ (e.g.\ take $\mathfrak{b} = \bra n\ket$ for some $n$ with $n\mathfrak{q} \lhd \mathcal{O}_L$), and the result follows.
+\end{proof}
+
+Unimportant side note: we have shown that there are two ways we can partially order the ideals of $\mathcal{O}_L$ --- by inclusion and by division. We have shown that these two orders are actually the same. Thus, it follows that the ``least common multiple'' of two ideals $\mathfrak{a}, \mathfrak{b}$ is their intersection $\mathfrak{a} \cap \mathfrak{b}$, and the ``greatest common divisor'' of two ideals is their sum\index{sum of ideals}\index{ideal!sum}\index{addition of ideals}
+\[
+ \mathfrak{a} + \mathfrak{b} = \{a + b: a \in \mathfrak{a}, b \in \mathfrak{b}\}.
+\]
+
+\begin{eg}
+ Again let $[L:\Q] = 2$, i.e.\ $L = \Q(\sqrt{d})$ with $d \not= 0, 1$ and square-free.
+
+ While we proved that every ideal can be factorized into prime ideals, we have essentially no idea what prime ideals look like. We just used their very abstract properties, like being prime and maximal. So we would like to play with some actual ideals.
+
+ Recall we had the example
+ \[
+ \bra 3, 1 + 2\sqrt{-5}\ket \bra 3, 1 - 2\sqrt{-5}\ket = \bra 3\ket.
+ \]
+ This is an example where we multiply two ideals together to get a principal ideal, and the key to this working is that $1 + 2\sqrt{-5}$ is conjugate to $1 - 2\sqrt{-5}$. We will use this idea to prove the previous result for number fields of this form.
+
+ Let $\mathfrak{a} \lhd \mathcal{O}_L$ be a non-zero ideal. We want to find some $\mathfrak{b} \lhd \mathcal{O}_L$ such that $\mathfrak{a}\mathfrak{b}$ is principal.
+
+ We know $\mathcal{O}_L \cong \Z^2$, and $\mathfrak{a} \leq \mathcal{O}_L$ as a subgroup. Moreover, we have shown that we must have $\mathfrak{a} \cong \Z^2$ as abelian groups. So it is generated by $2$ elements as a subgroup of $\Z^2$. Since $\Z$ is a subring of $\mathcal{O}_L$, we know $\mathfrak{a}$ is generated by at most $2$ elements as an $\mathcal{O}_L$-module, i.e.\ as an ideal of $\mathcal{O}_L$. If it is generated by one element, then it is already principal. Otherwise, suppose $\mathfrak{a} = \bra \alpha, \beta\ket$ for some $\alpha, \beta \in \mathcal{O}_L$.
+
+ Further, we claim that we can pick $\alpha, \beta$ such that $\beta \in \Z$. We write $\alpha = a + b \sqrt{d}$ and $\beta = a' + b' \sqrt{d}$. Then let $\ell = \gcd(b, b') = mb + m' b'$, with $m, m' \in \Z$ (by Euclid's algorithm). We set
+ \begin{align*}
+ \beta' &= (m\alpha + m' \beta)\cdot \frac{-b'}{\ell} + \beta\\
+ &= (ma + m' a' + \ell \sqrt{d}) \frac{-b'}{\ell} + a' + b'\sqrt{d}\\
+ &= (ma + m'a') \frac{-b'}{\ell} + a' \in \Z
+ \end{align*}
+ using the fact that $-\frac{b'}{\ell} \in \Z$. Then $\bra \alpha, \beta'\ket = \bra \alpha, \beta\ket$.
+
+ So suppose $\mathfrak{a} = \bra b, \alpha\ket$, with $b \in \Z$ and $\alpha \in \mathcal{O}_L$. We now claim
+ \[
+ \bra b, \alpha\ket \bra b, \bar{\alpha}\ket
+ \]
+ is principal (where $\alpha = x + y \sqrt{d}$, $\bar{\alpha} = x - y \sqrt{d}$). In particular, if $\mathfrak{a} \lhd \mathcal{O}_L$, then $\mathfrak{a} \bar{\mathfrak{a}}$ is principal, so the proposition is proved by hand.
+
+ To show this, we can manually check
+ \begin{align*}
+ \bra b, \alpha\ket \bra b, \bar{\alpha}\ket &= \bra b^2, b\alpha, b\bar{\alpha}, \alpha\bar{\alpha}\ket\\
+ &= \bra b^2, b\alpha, b\tr(\alpha), N(\alpha)\ket,\\
+ \intertext{using the fact that $\tr(\alpha) = \alpha + \bar{\alpha}$ and $N(\alpha) = \alpha \bar{\alpha}$. Now note that $b^2, b\tr(\alpha)$ and $N(\alpha)$ are all integers. So we can take the gcd $c = \gcd(b^2, b\tr(\alpha), N(\alpha))$. Then this ideal is equal to}
+ &= \bra c, b\alpha\ket.
+ \end{align*}
+ Finally, we claim that $b\alpha \in \bra c\ket$.
+
+ Write $b\alpha = cx$, with $x \in L$. Then $\tr x = \frac{b \tr \alpha}{c} \in \Z$, since $c \mid b\tr(\alpha)$ by the definition of $c$, and
+ \[
+ N(x) = N\left(\frac{b\alpha}{c}\right) = \frac{b^2 N(\alpha)}{c^2} = \frac{b^2}{c} \frac{N(\alpha)}{c} \in \Z.
+ \]
+ So $x \in \mathcal{O}_L$. So $c \mid b\alpha$ in $\mathcal{O}_L$. So $\bra c, b\alpha\ket = \bra c\ket$.
+\end{eg}
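+The gcd appearing above can be checked numerically. The following is a minimal sketch, assuming the field $\Q(\sqrt{-5})$ and the ideal $\bra 3, 1 + 2\sqrt{-5}\ket$ from the example: it computes $c = \gcd(b^2, b\tr(\alpha), N(\alpha))$ and confirms $c = 3$, consistent with $\bra 3, 1 + 2\sqrt{-5}\ket \bra 3, 1 - 2\sqrt{-5}\ket = \bra 3\ket$.

```python
from math import gcd

# Sketch check in Q(sqrt(d)) with d = -5 (an illustrative choice, not the
# general proof): take a = <b, alpha> with b = 3, alpha = 1 + 2*sqrt(-5).
d, b = -5, 3
x, y = 1, 2                # alpha = x + y*sqrt(d)
tr = 2 * x                 # tr(alpha) = alpha + conj(alpha) = 2x
N = x * x - d * y * y      # N(alpha) = alpha * conj(alpha) = x^2 - d*y^2
c = gcd(gcd(b * b, b * tr), N)
# The argument above shows <b, alpha><b, conj(alpha)> = <c, b*alpha> = <c>.
print(b * b, b * tr, N, c)   # 9 6 21 3
```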
+
+Finally, after all these results, we can get to the important definition of the course.
+
+\begin{defi}[Class group]\index{class group}\index{$\cl_L$}\index{ideal class group}
+ The \emph{class group} or \emph{ideal class group} of a number field $L$ is
+ \[
+ \cl_L = I_L/P_L,
+ \]
+ where $I_L$ is the group of fractional ideals, and $P_L$ is the subgroup of principal fractional ideals.
+\end{defi}
+If $\mathfrak{a} \in I_L$, we write $[\mathfrak{a}]$ for its equivalence class in $\cl_L$. So $[\mathfrak{a}] = [\mathfrak{b}]$ if and only if there is some $\gamma\in L^\times$ such that $\gamma \mathfrak{a} = \mathfrak{b}$.
+
+The significance is that $\cl_L$ measures the failure of unique factorization:
+\begin{thm}
+ The following are equivalent:
+ \begin{enumerate}
+ \item $\mathcal{O}_L$ is a principal ideal domain
+ \item $\mathcal{O}_L$ is a unique factorization domain
+ \item $\cl_L$ is trivial.
+ \end{enumerate}
+\end{thm}
+\begin{proof}
+ (i) and (iii) are equivalent by definition, while (i) implies (ii) is well-known from IB Groups, Rings and Modules. So the real content is (ii) to (i), which is specific to Dedekind domains.
+
+ If $\mathfrak{p} \lhd \mathcal{O}_L$ is a non-zero prime, and $x \in \mathfrak{p} \setminus \{0\}$, we factor $x = \alpha_1 \cdots \alpha_k$ such that $\alpha_i$ is irreducible in $\mathcal{O}_L$. As $\mathfrak{p}$ is prime, there is some $\alpha_i \in \mathfrak{p}$. But then $\bra \alpha_i\ket \subseteq \mathfrak{p}$, and $\bra \alpha_i\ket$ is prime as $\mathcal{O}_L$ is a UFD. So we must have $\bra \alpha_i\ket = \mathfrak{p}$ as prime ideals are maximal. So every non-zero prime is principal, and since every ideal is a product of primes, every ideal is principal.
+\end{proof}
+
+In the next few chapters, we will come up with methods to explicitly compute the class group of any number field.
+
+\section{Norms of ideals}
+In the previous chapter, we defined the class group, and we know it is generated by prime ideals of $\mathcal{O}_L$. So we now want to figure out what the prime ideals are. In the case of finding irreducible elements, one very handy tool was the norm --- we know an element of $\mathcal{O}_L$ is a unit iff it has norm $\pm 1$. So if $x \in \mathcal{O}_L$ is not irreducible, then there must be some element whose norm strictly divides $N(x)$. Similarly, we would want to come up with a notion of the norm of an ideal, which turns out to be incredibly useful.
+
+\begin{defi}[Norm of ideal]\index{norm!of ideal}\index{ideal!norm}
+ Let $\mathfrak{a} \lhd \mathcal{O}_L$ be an ideal. We define
+ \[
+ N(\mathfrak{a}) = |\mathcal{O}_L/\mathfrak{a}| \in \N.
+ \]
+\end{defi}
+Recall that we've already proved that $|\mathcal{O}_L/\mathfrak{a}|$ is finite. So this definition makes sense. It is also clear that $N(\mathfrak{a}) = 1$ iff $\mathfrak{a} = \mathcal{O}_L$ (i.e.\ $\mathfrak{a}$ is a ``unit'').
+
+\begin{eg}
+ Let $d \in \Z$ be non-zero. Then since $\mathcal{O}_L \cong \Z^n$, we have $d \mathcal{O}_L \cong d \Z^n$. So we have
+ \[
+ N(\bra d\ket) = |\Z^n/(d\Z)^n| = |\Z/d\Z|^n = |d|^n.
+ \]
+\end{eg}
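+This formula can be sanity-checked by counting cosets directly in a small case. A minimal sketch, assuming $n = 2$ (so $\mathcal{O}_L \cong \Z^2$ as an abelian group, e.g.\ $\Z[i]$) and $d = 3$; the cosets of $3\Z^2$ in $\Z^2$ are represented by pairs $(a, b)$ with $0 \leq a, b < 3$.

```python
# Brute-force count of |Z^2 / (3Z)^2|; n = 2 and d = 3 are illustrative choices.
d, n = 3, 2
cosets = {(a % d, b % d) for a in range(-10, 10) for b in range(-10, 10)}
assert len(cosets) == abs(d) ** n   # 9 = 3^2, matching N(<d>) = |d|^n
```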
+
+We start with a simple observation:
+\begin{prop}
+ For any ideal $\mathfrak{a}$, we have $N(\mathfrak{a}) \in \mathfrak{a} \cap \Z$.
+\end{prop}
+
+\begin{proof}
+ It suffices to show that $N(\mathfrak{a}) \in \mathfrak{a}$. Viewing $\mathcal{O}_L/\mathfrak{a}$ as an additive group, the order of $1$ is a factor of $N(\mathfrak{a})$. So $N(\mathfrak{a}) = N(\mathfrak{a}) \cdot 1 = 0 \in \mathcal{O}_L/\mathfrak{a}$. Hence $N(\mathfrak{a}) \in \mathfrak{a}$.
+\end{proof}
+
+The most important property of the norm is the following:
+\begin{prop}
+ Let $\mathfrak{a}, \mathfrak{b} \lhd \mathcal{O}_L$ be ideals. Then $N(\mathfrak{a} \mathfrak{b}) = N(\mathfrak{a})N(\mathfrak{b})$.
+\end{prop}
+
+We will provide two proofs of the result.
+\begin{proof}
+ By the factorization into prime ideals, it suffices to prove this for $\mathfrak{b} = \mathfrak{p}$ prime, i.e.
+ \[
+ N(\mathfrak{a}\mathfrak{p}) = N(\mathfrak{a}) N(\mathfrak{p}).
+ \]
+ In other words, we need to show that
+ \[
+ \left|\frac{\mathcal{O}_L}{\mathfrak{a}}\right| = \left|\frac{\mathcal{O}_L}{\mathfrak{a}\mathfrak{p}}\right|\Big/ \left|\frac{\mathcal{O}_L}{\mathfrak{p}}\right|.
+ \]
+ By the third isomorphism theorem, we already know that
+ \[
+ \frac{\mathcal{O}_L}{\mathfrak{a}} \cong \left(\frac{\mathcal{O}_L}{\mathfrak{a}\mathfrak{p}}\right) \big/ \left(\frac{\mathfrak{a}}{\mathfrak{a}\mathfrak{p}}\right).
+ \]
+ So it suffices to show that $\mathcal{O}_L/\mathfrak{p} \cong \mathfrak{a}/\mathfrak{a}\mathfrak{p}$ as abelian groups.
+
+ In the case of the integers, we could have, say, $\mathfrak{p} = 7\Z$, $\mathfrak{a} = 12\Z$. We would then simply define
+ \[
+ \begin{tikzcd}[cdmap]
+ \displaystyle\frac{\Z}{7\Z} \ar[r] & \displaystyle\frac{12\Z}{7\cdot 12\Z}\\
+ x \ar[r, maps to] & 12x
+ \end{tikzcd}
+ \]
+ However, in general, we do not know that $\mathfrak{a}$ is principal, but it turns out it doesn't really matter. We can just pick an arbitrary element to multiply with.
+
+ By unique factorization, we know $\mathfrak{a} \not= \mathfrak{a} \mathfrak{p}$. So we can find some $\alpha \in \mathfrak{a} \setminus \mathfrak{a} \mathfrak{p}$.
+
+ We now claim that the homomorphism of abelian groups
+ \[
+ \begin{tikzcd}[cdmap]
+ \displaystyle \frac{\mathcal{O}_L}{\mathfrak{p}} \ar[r] & \displaystyle \frac{\mathfrak{a}}{\mathfrak{a}\mathfrak{p}}\\
+ x + \mathfrak{p} \ar[r, maps to]& \alpha x + \mathfrak{a}\mathfrak{p}
+ \end{tikzcd}
+ \]
+ is an isomorphism. We first check this is well-defined --- if $p \in \mathfrak{p}$, then $\alpha p \in \mathfrak{a} \mathfrak{p}$ since $\alpha \in \mathfrak{a}$. So the images of $x + \mathfrak{p}$ and $(x + p) + \mathfrak{p}$ are equal. So this is well-defined.
+
+ To prove our claim, we have to show injectivity and surjectivity. To show injectivity, since $\bra \alpha\ket \subseteq \mathfrak{a}$, we have $\mathfrak{a} \mid \bra \alpha\ket$, i.e.\ there is an ideal $\mathfrak{c} \lhd \mathcal{O}_L$ such that $\mathfrak{a}\mathfrak{c} = \bra \alpha\ket$. If $x \in \mathcal{O}_L$ is in the kernel of the map, then $\alpha x \in \mathfrak{a}\mathfrak{p}$. So
+ \[
+ x \mathfrak{a}\mathfrak{c} \subseteq \mathfrak{a}\mathfrak{p}.
+ \]
+ So
+ \[
+ x\mathfrak{c} \subseteq \mathfrak{p}.
+ \]
+ As $\mathfrak{p}$ is prime, either $\mathfrak{c} \subseteq \mathfrak{p}$ or $x \in \mathfrak{p}$. But $\mathfrak{c} \subseteq \mathfrak{p}$ implies $\bra \alpha\ket = \mathfrak{a}\mathfrak{c} \subseteq \mathfrak{a}\mathfrak{p}$, contradicting the choice of $\alpha$. So we must have $x \in \mathfrak{p}$, and the map is injective.
+
+ To show this is surjective, we notice that surjectivity means $\bra \alpha\ket/\mathfrak{a}\mathfrak{p} = \mathfrak{a}/\mathfrak{a}\mathfrak{p}$, or equivalently $\mathfrak{a}\mathfrak{p} + \bra \alpha\ket = \mathfrak{a}$.
+
+ Using our knowledge of fractional ideals, this is equivalent to saying $(\mathfrak{a}\mathfrak{p} + \bra \alpha\ket) \mathfrak{a}^{-1} = \mathcal{O}_L$. But we know
+ \[
+ \mathfrak{a}\mathfrak{p} < \mathfrak{a}\mathfrak{p} + \bra \alpha\ket \subseteq \mathfrak{a}.
+ \]
+ We now multiply by $\mathfrak{a}^{-1}$ to obtain
+ \[
+ \mathfrak{p} < (\mathfrak{a} \mathfrak{p} + \bra \alpha\ket) \mathfrak{a}^{-1} = \mathfrak{p} + \mathfrak{c} \subseteq \mathcal{O}_L.
+ \]
+ Since $\mathfrak{p}$ is a prime, and hence maximal ideal, the last inclusion must be an equality. So $\mathfrak{a}\mathfrak{p} + \bra\alpha\ket = \mathfrak{a}$, and we are done.
+\end{proof}
+
+Now we provide the sketch of a proof that makes sense. The details are left as an exercise in the second example sheet.
+\begin{proof}
+ It is enough to show that $N(\mathfrak{p}_1^{a_1} \cdots \mathfrak{p}_r^{a_r}) = N(\mathfrak{p}_1)^{a_1} \cdots N(\mathfrak{p}_r)^{a_r}$ by unique factorization.
+
+ By the Chinese remainder theorem, we have
+ \[
+ \frac{\mathcal{O}_L}{\mathfrak{p}_1^{a_1} \cdots \mathfrak{p}_r^{a_r}} \cong \frac{\mathcal{O}_L}{\mathfrak{p}_1^{a_1}} \times \cdots \times \frac{\mathcal{O}_L}{\mathfrak{p}_r^{a_r}}
+ \]
+ where $\mathfrak{p}_1, \cdots, \mathfrak{p}_r$ are distinct prime ideals.
+
+ Next, we show by hand that
+ \[
+ \left|\frac{\mathcal{O}_L}{\mathfrak{p}^r}\right| = \left|\frac{\mathcal{O}_L}{\mathfrak{p}}\right| \times \left|\frac{\mathfrak{p}}{\mathfrak{p}^2}\right| \times \cdots \times \left|\frac{\mathfrak{p}^{r - 1}}{\mathfrak{p}^r}\right| = \left|\frac{\mathcal{O}_L}{\mathfrak{p}}\right|^r,
+ \]
+ by showing that $\mathfrak{p}^k/\mathfrak{p}^{k + 1}$ is a $1$-dimensional vector space over the field $\mathcal{O}_L/\mathfrak{p}$. Then the result follows. % complete
+\end{proof}
+This is actually the same proof, but written in a much saner form. This is better because we are combining a general statement (the Chinese remainder theorem) with a special property of the ring $\mathcal{O}_L$. In the first proof, what we really did was prove both parts simultaneously using algebraic magic.
+
+We've taken an obvious invariant of an ideal, the size, and found it is multiplicative. How does this relate to the other invariants?
+
+Recall that
+\[
+ \Delta(\alpha_1, \cdots, \alpha_n) = \det(\tr_{L/\Q}(\alpha_i\alpha_j)) = \det(\sigma_i(\alpha_j))^2.
+\]
+\begin{prop}
+ Let $\mathfrak{a} \lhd \mathcal{O}_L$ be an ideal, $n = [L:\Q]$. Then
+ \begin{enumerate}
+ \item There exists $\alpha_1, \cdots, \alpha_n \in \mathfrak{a}$ such that
+ \[
+ \mathfrak{a} = \left\{ \sum r_i \alpha_i : r_i \in \Z\right\} = \bigoplus_{i = 1}^n \alpha_i\Z,
+ \]
+ and $\alpha_1, \cdots, \alpha_n$ are a basis of $L$ over $\Q$. In particular, $\mathfrak{a}$ is a free $\Z$-module of $n$ generators.
+ \item For any such $\alpha_1, \cdots, \alpha_n$,
+ \[
+ \Delta (\alpha_1, \cdots, \alpha_n) = N(\mathfrak{a})^2 D_L.
+ \]
+ \end{enumerate}
+\end{prop}
+
+To prove this, we recall the following lemma from IB Groups, Rings and Modules:
+\begin{lemma}
+ Let $M$ be a $\Z$-module (i.e.\ abelian group), and suppose $M \leq \Z^n$. Then $M \cong \Z^r$ for some $0 \leq r \leq n$.
+
+ Moreover, if $r = n$, then we can choose a basis $v_1, \cdots, v_n$ of $M$ such that the change of basis matrix $A = (a_{ij}) \in M_{n \times n}(\Z)$ is upper triangular, where
+ \[
+ v_j = \sum a_{ij} e_i,
+ \]
+ where $e_1, \cdots, e_n$ is the standard basis of $\Z^n$.
+
+ In particular,
+ \[
+ |\Z^n/M| = |a_{11} a_{22} \cdots a_{nn}| = |\det A|.
+ \]
+\end{lemma}
+
+\begin{proof}[Proof of proposition]
+ Let $d \in \mathfrak{a} \cap \Z$ be non-zero, say $d = N(\mathfrak{a})$. Then $d \mathcal{O}_L \subseteq \mathfrak{a} \subseteq \mathcal{O}_L$. As abelian groups, after picking an integral basis $\alpha_1', \cdots, \alpha_n'$ of $\mathcal{O}_L$, we have
+ \[
+ \Z^n \cong d \Z^n \leq \mathfrak{a} \leq \Z^n.
+ \]
+ So $\mathfrak{a} \cong \Z^n$. Then the lemma gives us a basis $\alpha_1, \cdots, \alpha_n$ of $\mathfrak{a}$ as a $\Z$-module. As a $\Q$-module, since the $\alpha_i$ are obtained from linear combinations of $\alpha_i'$, by basic linear algebra, $\alpha_1, \cdots, \alpha_n$ is also a basis of $L$ over $\Q$.
+
+ Moreover, we know that we have
+ \[
+ \Delta(\alpha_1, \cdots, \alpha_n) = \det(A)^2 \Delta(\alpha_1', \cdots, \alpha_n').
+ \]
+ Since $|\det(A)| = |\mathcal{O}_L/\mathfrak{a}| = N(\mathfrak{a})$ and $D_L = \Delta(\alpha_1', \cdots, \alpha_n')$ by definition, the second part follows.
+\end{proof}
+
+This result is very useful for the following reason:
+\begin{cor}
+ Suppose $\mathfrak{a} \lhd \mathcal{O}_L$ has basis $\alpha_1, \cdots, \alpha_n$, and $\Delta(\alpha_1, \cdots, \alpha_n)$ is square-free. Then $\mathfrak{a} = \mathcal{O}_L$ (and $D_L$ is square-free).
+\end{cor}
+This is a nice trick, since it allows us to determine immediately whether a particular basis is an integral basis.
+
+\begin{proof}
+ Immediate, since this forces $N(\mathfrak{a})^2 = 1$.
+\end{proof}
+
+Note that nothing above required $\mathfrak{a}$ to be an actual ideal. It merely had to be a subgroup of $\mathcal{O}_L$ that is isomorphic to $\Z^n$, since the quotient $\mathcal{O}_L/\mathfrak{a}$ is well-defined as long as $\mathfrak{a}$ is a subgroup. With this, we can have the following useful result:
+
+\begin{eg}
+ Let $\alpha$ be an algebraic integer and $L = \Q(\alpha)$. Let $n = [\Q(\alpha):\Q]$. Then $\mathfrak{a} = \Z[\alpha] \leq \mathcal{O}_L$ is such a subgroup (a subring, though not in general an ideal). We have
+ \[
+ \disc(p_\alpha) = \Delta(1, \alpha, \alpha^2, \cdots, \alpha^{n - 1}) = \text{discriminant of minimal polynomial of }\alpha.
+ \]
+ Thus if $\disc(p_\alpha)$ is square-free, then $\Z[\alpha] = \mathcal{O}_L$.
+
+ Even if $\disc(p_\alpha)$ is not square-free, it still says something: let $d \in \Z$ be such that $d^2 \mid \disc(p_\alpha)$ and $\disc(p_\alpha)/d^2$ is square-free. Then $N(\Z[\alpha])$ divides $d$.
+
+ Let $x \in \mathcal{O}_L$. Then the order of $x + \Z[\alpha] \in \mathcal{O}_L/\Z[\alpha]$ divides $N(\Z[\alpha])$, hence $d$. So $d \cdot x \in \Z[\alpha]$. So $x \in \frac{1}{d} \Z[\alpha]$. Hence we have
+ \[
+ \Z[\alpha] \subseteq \mathcal{O}_L \subseteq \frac{1}{d} \Z[\alpha].
+ \]
+ For example, if $\alpha = \sqrt{a}$ for some square-free $a$, then $\disc(\sqrt{a})$ is the discriminant of $x^2 - a$, which is $4a$. So the $d$ above is $2$, and we have
+ \[
+ \Z[\alpha] \subseteq \mathcal{O}_{\Q(\sqrt{a})} \subseteq \frac{1}{2} \Z[\alpha],
+ \]
+ as we have previously seen.
+
+\end{eg}
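+The discriminants quoted here are easy to recompute: for a monic quadratic $x^2 + bx + c$, the discriminant is $b^2 - 4c$. A minimal sketch (the specific polynomials are just the ones appearing in this chapter):

```python
# Discriminant of a monic quadratic x^2 + b*x + c.
def disc_quadratic(b, c):
    return b * b - 4 * c

assert disc_quadratic(0, -2) == 8    # x^2 - a with a = 2 gives 4a = 8
assert disc_quadratic(0, 5) == -20   # x^2 - a with a = -5 gives 4a = -20
```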
+
+We shall prove one more lemma, and start factoring things. Recall that we had two different notions of norm. Given $\alpha \in \mathcal{O}_L$, we can take the norm $N(\bra \alpha\ket)$, or $N_{L/\Q}(\alpha)$. It would be great if they were related, say equal. However, that cannot literally be true, since $N(\bra \alpha\ket)$ is always positive, but $N_{L/\Q}(\alpha)$ can be negative. So we take the absolute value.
+\begin{lemma}
+ If $\alpha \in \mathcal{O}_L$, then
+ \[
+ N(\bra \alpha\ket) = |N_{L/\Q}(\alpha)|.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $\alpha_1, \cdots, \alpha_n$ be an integral basis of $\mathcal{O}_L$. Then $\alpha\alpha_1, \cdots, \alpha\alpha_n$ is a $\Z$-basis of $\bra \alpha\ket$. So by the previous proposition,
+ \[
+ \Delta(\alpha \alpha_1, \cdots, \alpha\alpha_n) = N(\bra \alpha\ket)^2 D_L.
+ \]
+ But
+ \begin{align*}
+ \Delta (\alpha \alpha_1, \cdots, \alpha\alpha_n) &= \det(\sigma_i(\alpha\alpha_j)_{ij})^2 \\
+ &= \det(\sigma_i(\alpha)\sigma_i(\alpha_j))^2\\
+ &= \left(\prod_{i = 1}^n \sigma_i(\alpha)\right)^2 \Delta(\alpha_1, \cdots, \alpha_n)\\
+ &= N_{L/\Q}(\alpha)^2 D_L.
+ \end{align*}
+ So
+ \[
+ N_{L/\Q}(\alpha)^2 = N(\bra \alpha\ket)^2.
+ \]
+ But $N(\bra \alpha\ket)$ is positive. So the result follows.
+\end{proof}
+
+\section{Structure of prime ideals}
+We can now move on to find \emph{all} prime ideals of $\mathcal{O}_L$. We know that every ideal factors as a product of prime ideals, but we don't know what the prime ideals are. The only obvious way we've had to obtain prime ideals is to take an ordinary prime $p \in \Z$, take the principal ideal it generates, and factor it in $\mathcal{O}_L$ into prime ideals.
+
+It turns out this gives us all prime ideals.
+\begin{lemma}
+ Let $\mathfrak{p} \lhd \mathcal{O}_L$ be a prime ideal. Then there exists a unique $p \in \Z$, $p$ prime, with $\mathfrak{p} \mid \bra p\ket$. Moreover, $N(\mathfrak{p}) = p^f$ for some $1 \leq f \leq n$.
+\end{lemma}
+This is not really too exciting, as soon as we realize that $\mathfrak{p} \mid \bra p\ket$ is the same as saying $\bra p\ket \subseteq \mathfrak{p}$, and we already know $\mathfrak{p} \cap \Z$ contains a non-zero element, namely $N(\mathfrak{p})$.
+
+\begin{proof}
+ Well $\mathfrak{p} \cap \Z$ is an ideal in $\Z$, and hence principal. Since $N(\mathfrak{p}) \in \mathfrak{p} \cap \Z$ is non-zero, we have $\mathfrak{p} \cap \Z = p\Z$ for some non-zero $p \in \Z$, which we may take to be positive.
+
+ We now claim $p$ is a prime integer. Note $p \not= 1$, since $1 \in \mathfrak{p}$ would give $\mathfrak{p} = \mathcal{O}_L$. Suppose $p = ab$ with $a, b \in \Z$. Then since $p \in \mathfrak{p}$, either $a \in \mathfrak{p}$ or $b \in \mathfrak{p}$. So $a \in \mathfrak{p} \cap \Z = p\Z$ or $b \in \mathfrak{p} \cap \Z = p\Z$. So $p \mid a$ or $p \mid b$.
+
+ Since $\bra p\ket \subseteq \mathfrak{p}$, we know $\bra p\ket = \mathfrak{p}\mathfrak{a}$ for some ideal $\mathfrak{a}$ by factorization. Taking norms, we get
+ \[
+ p^n = N(\bra p\ket) = N(\mathfrak{p}) N(\mathfrak{a}).
+ \]
+ So the result follows.
+\end{proof}
+This is all good. So all we have to do is to figure out how principal ideals $\bra p\ket$ factor into prime ideals.
+
+We write
+\[
+ \bra p\ket = \mathfrak{p}_1^{e_1} \cdots \mathfrak{p}_m^{e_m}
+\]
+for some distinct prime ideals $\mathfrak{p}_i$ and positive integers $e_i$, where $N(\mathfrak{p}_i) = p^{f_i}$ by the previous lemma. Taking norms, we get
+\[
+ p^n = \prod p^{f_i e_i}.
+\]
+So
+\[
+ n = \sum e_i f_i.
+\]
+We start by giving some names to the possible scenarios.
+\begin{defi}[Ramification indices]\index{ramification index}
+ Let $\bra p\ket = \mathfrak{p}_1^{e_1} \cdots \mathfrak{p}_m^{e_m}$ be the factorization into prime ideals. Then $e_1, \cdots, e_m$ are the \emph{ramification indices}.
+\end{defi}
+
+\begin{defi}[Ramified prime]\index{ramified prime}
+ We say $p$ is \emph{ramified} if some $e_i > 1$.
+\end{defi}
+
+\begin{defi}[Inert prime]\index{inert prime}
+ We say $p$ is \emph{inert} if $m = 1$ and $e_m = 1$, i.e.\ $\bra p\ket$ remains prime.
+\end{defi}
+
+\begin{defi}[Splitting prime]\index{splitting prime}
+ We say $p$ \emph{splits completely} if $e_1 = \cdots = e_m = 1 = f_1 = \cdots = f_m$. So $m = n$.
+\end{defi}
+Note that this does not exhaust all possibilities. The importance of these terms, especially ramification, will become clear later.
+
+So how do we actually compute $\mathfrak{p}_i$ and $e_i$? In other words, how can we factor the ideal $\bra p\ket \lhd \mathcal{O}_L$ into prime ideals? The answer is given \emph{very} concretely by Dedekind's criterion.
+
+\begin{thm}[Dedekind's criterion]\index{Dedekind's criterion}
+ Let $\alpha \in \mathcal{O}_L$ and $g(x) \in \Z[x]$ be its minimal polynomial. Suppose $\Z[\alpha] \subseteq \mathcal{O}_L$ has finite index, coprime to $p$ (i.e.\ $p \nmid |\mathcal{O}_L/\Z[\alpha]|$). We write
+ \[
+ \bar{g}(x) = g(x) \pmod p,
+ \]
+ so $\bar{g}(x) \in \F_p[x]$. We factor
+ \[
+ \bar{g}(x) = \varphi_1^{e_1} \cdots \varphi_m^{e_m}
+ \]
+ into distinct irreducibles in $\F_p[x]$. We define the ideal
+ \[
+ \mathfrak{p}_i = \bra p, \tilde{\varphi}_i(\alpha)\ket \lhd \mathcal{O}_L,
+ \]
+ generated by $p$ and $\tilde{\varphi}_i(\alpha)$, where $\tilde{\varphi}_i$ is any polynomial in $\Z[x]$ such that $\tilde{\varphi}_i \bmod p = \varphi_i$. Notice that if $\tilde{\varphi}_i'$ is another such polynomial, then $p$ divides every coefficient of $\tilde{\varphi}_i' - \tilde{\varphi}_i$, so $\bra p, \tilde{\varphi}_i'(\alpha)\ket = \bra p, \tilde{\varphi}_i(\alpha)\ket$.
+
+ Then the $\mathfrak{p}_i$ are prime, and
+ \[
+ \bra p\ket = \mathfrak{p}_1^{e_1} \cdots \mathfrak{p}_m^{e_m}.
+ \]
+ Moreover, $f_i = \deg \varphi_i$, so $N(\mathfrak{p}_i) = p^{\deg \varphi_i}$.
+\end{thm}
+If we are lucky, we might just find an $\alpha$ such that $\Z[\alpha] = \mathcal{O}_L$. If not, we can find something close, and as long as $p$ is not involved, we are fine. After finding $\alpha$, we get its minimal polynomial, factor it, and immediately get the prime factorization of $\bra p\ket$.
+
+\begin{eg}
+ Consider $L = \Q(\sqrt{-11})$. We want to factor $\bra 5\ket$ in $\mathcal{O}_L$. We consider $\Z[\sqrt{-11}] \subseteq \mathcal{O}_L$. This has index $2$, and (hopefully) $5\nmid 2$. So this is good enough. The minimal polynomial is $x^2 + 11$. Taking mod $5$, this reduces to $x^2 - 4 = (x - 2)(x + 2)$. So Dedekind says
+ \[
+ \bra 5\ket = \bra 5, \sqrt{-11} + 2\ket\bra 5, \sqrt{-11} - 2\ket.
+ \]
+\end{eg}
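+The factorization of $\bar{g}$ can be found by brute force over $\F_5$. A minimal sketch, assuming the example above ($g = x^2 + 11$, $p = 5$); each root $r$ of $\bar{g}$ contributes a factor $(x - r)$:

```python
# Find the roots of x^2 + 11 over F_5 by exhaustive search.
p = 5
roots = [r for r in range(p) if (r * r + 11) % p == 0]
print(roots)   # [2, 3]; so x^2 + 11 = (x - 2)(x - 3) = (x - 2)(x + 2) mod 5
```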
+
+In general, consider $L = \Q(\sqrt{d})$, $d \not= 0, 1$ and square-free, and $p$ an odd prime. Then $\Z[\sqrt{d}] \subseteq \mathcal{O}_L$ has index $1$ or $2$, both of which are coprime to $p$. So Dedekind says factor $x^2 - d \bmod p$. What are the possibilities?
+\begin{enumerate}
+ \item There are two distinct roots mod $p$, i.e.\ $d$ is a square mod $p$, i.e.\ $\left(\frac{d}{p}\right) = 1$. Then
+ \[
+ x^2 - d = (x + r)(x - r)\pmod p
+ \]
+ for some $r$. So Dedekind says
+ \[
+ \bra p\ket = \mathfrak{p}_1 \mathfrak{p}_2,
+ \]
+ where
+ \[
+ \mathfrak{p}_1 = \bra p, \sqrt{d} - r\ket,\quad \mathfrak{p}_2 = \bra p, \sqrt{d} + r\ket,
+ \]
+ and $N(\mathfrak{p}_1) = N(\mathfrak{p}_2) = p$. So $p$ splits.
+ \item $x^2 - d$ is irreducible, i.e.\ $d$ is not a square mod $p$, i.e.\ $\left(\frac{d}{p}\right) = -1$. Then Dedekind says $\bra p \ket = \mathfrak{p}$ is prime in $\mathcal{O}_L$. So $p$ is \emph{inert}.
+ \item $x^2 - d$ has a repeated root mod $p$, i.e.\ $p \mid d$, or alternatively $\left(\frac{d}{p}\right) = 0$. Then by Dedekind, we know
+ \[
+ \bra p\ket = \mathfrak{p}^2,
+ \]
+ where
+ \[
+ \mathfrak{p} = \bra p, \sqrt{d}\ket.
+ \]
+ So $p$ ramifies.
+\end{enumerate}
+So in fact, we see that the Legendre symbol encodes the ramification behaviour of the primes.
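The trichotomy above is easy to compute in practice. As an illustration (a sketch, not part of the notes), Euler's criterion $\left(\frac{d}{p}\right) \equiv d^{(p-1)/2} \pmod p$ gives the Legendre symbol, and hence the splitting behaviour, directly:

```python
# Sketch: classify an odd prime p in Q(sqrt(d)) via the Legendre symbol,
# computed with Euler's criterion d^((p-1)/2) mod p.

def legendre(d, p):
    """(d/p) for an odd prime p: 1 (square), -1 (non-square), 0 (p | d)."""
    ls = pow(d % p, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls

def behaviour(d, p):
    return {1: "splits", -1: "inert", 0: "ramifies"}[legendre(d, p)]

print(behaviour(-11, 5))  # splits, matching the earlier example
print(behaviour(-5, 11))  # inert
print(behaviour(-5, 5))   # ramifies, since 5 | d
```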
+
+What about the case where $p = 2$, for $L = \Q(\sqrt{d})$? How do we factor $\bra 2\ket$?
+
+\begin{lemma}
+ In $L = \Q(\sqrt{d})$,
+ \begin{enumerate}
+ \item $2$ splits in $L$ if and only if $d \equiv 1 \pmod 8$;
+ \item $2$ is inert in $L$ if and only if $d \equiv 5 \pmod 8$;
 \item $2$ ramifies in $L$ if and only if $d \equiv 2, 3\pmod 4$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
 \item If $d \equiv 1 \pmod 4$, then $\mathcal{O}_L = \Z[\alpha]$, where $\alpha = \frac{1}{2}(1 + \sqrt{d})$. This has minimal polynomial
+ \[
+ x^2 - x + \frac{1}{4}(1 - d).
+ \]
+ We reduce this mod $2$.
+ \begin{itemize}
+ \item If $d \equiv 1\pmod 8$, we get $x(x + 1)$. So $2$ splits.
+ \item If $d \equiv 5 \pmod 8$, then we get $x^2 + x + 1$, which is irreducible. So $\bra 2\ket$ is prime, hence $2$ is inert.
+ \end{itemize}
+ \item If $d \equiv 2, 3\pmod 4$, then $\mathcal{O}_L = \Z[\sqrt{d}]$, and $x^2 - d$ is the minimal polynomial. Taking mod $2$, we get $x^2$ or $x^2 + 1 = (x + 1)^2$. In both cases, $2$ ramifies.\qedhere
+ \end{itemize}
+\end{proof}
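The case analysis in the proof mechanizes directly. The following sketch (not from the notes; it assumes $d$ square-free and $d \neq 0, 1$) reduces the relevant minimal polynomial mod $2$ and reads off the behaviour:

```python
# Sketch: behaviour of 2 in Q(sqrt(d)), by factoring the minimal polynomial
# of the generator of O_L modulo 2, exactly as in the proof above.

def behaviour_of_2(d):
    if d % 4 == 1:
        # alpha = (1 + sqrt(d))/2 has minimal polynomial x^2 - x + (1-d)/4
        c = ((1 - d) // 4) % 2
        return "splits" if c == 0 else "inert"  # x(x+1) vs x^2 + x + 1
    else:
        # alpha = sqrt(d) has minimal polynomial x^2 - d
        return "ramifies"                       # x^2 or (x+1)^2 mod 2

print(behaviour_of_2(17))  # 17 = 1 mod 8: splits
print(behaviour_of_2(5))   # 5 mod 8: inert
print(behaviour_of_2(-5))  # -5 = 3 mod 4: ramifies
```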
+
+Note how important $p \nmid |\mathcal{O}_L/\Z[\alpha]|$ is. If we used $\Z[\sqrt{d}]$ when $d \equiv 1 \pmod 4$, we would have gotten the wrong answer.
+
+Recall
+\[
+ D_L =
+ \begin{cases}
+ 4d & d \equiv 2, 3\pmod 4\\
+ d & d \equiv 1 \pmod 4
+ \end{cases}
+\]
+The above computations show that $p \mid D_L$ if and only if $p$ ramifies in $L$. This happens to be true in general. This starts to hint how important these invariants like $D_L$ are.
+
+Now we get to prove Dedekind's theorem.
+
+%We fix $L$ and $p \in \Z$ a prime integer. Consider the surjective homomorphism
+%\[
+% q: \mathcal{O}_L \to \frac{\mathcal{O}_L}{p \mathcal{O}_L}.
+%\]
+%This gives a bijection between ideals of $\mathcal{O}_L/p \mathcal{O}_L$ and ideals of $\mathcal{O}_L$ containing $\bra p\ket$ by $I \mapsto q^{-1}(I)$. But these are just the ideals $\mathfrak{p}$ dividing $\bra p\ket$. So we are just trying to understand ideals in $\mathcal{O}_L/p \mathcal{O}_L$!
+%
+
+\begin{proof}[Proof of Dedekind's criterion]
+ The key claim is that
+ \begin{claim}
+ We have
+ \[
+ \frac{\mathcal{O}_L}{\mathfrak{p}_i} \cong \frac{\F_p[x]}{\bra \varphi_i\ket}.
+ \]
+ \end{claim}
+ Suppose this is true. Then since $\varphi_i$ is irreducible, we know $\frac{\F_p[x]}{\bra \varphi_i\ket}$ is a field. So $\mathfrak{p}_i$ is maximal, hence prime.
+
+ Next notice that
+ \[
 \mathfrak{p}_i^{e_i} = \bra p, \tilde{\varphi}_i(\alpha)\ket^{e_i} \subseteq \bra p, \tilde{\varphi}_i(\alpha)^{e_i}\ket.
+ \]
+ So we have
+ \[
+ \mathfrak{p}_1^{e_1} \cdots \mathfrak{p}_m^{e_m} \subseteq \bra p, \tilde{\varphi}_1(\alpha)^{e_1} \cdots \tilde{\varphi}_m(\alpha)^{e_m}\ket = \bra p, g(\alpha)\ket = \bra p\ket,
+ \]
+ using the fact that $g(\alpha) = 0$.
+
+ So to prove equality, we notice that if we put $f_i = \deg \varphi_i$, then $N(\mathfrak{p}_i) = p^{f_i}$, and
+ \[
+ N(\mathfrak{p}_1^{e_1} \cdots \mathfrak{p}_m^{e_m}) = N(\mathfrak{p}_1)^{e_1} \cdots N(\mathfrak{p}_m)^{e_m} = p^{\sum e_i f_i} = p^{\deg g}.
+ \]
+ Since $N(\bra p\ket) = p^n$, it suffices to show that $\deg g = n$. Since $\Z[\alpha] \subseteq \mathcal{O}_L$ has finite index, we know $\Z[\alpha] \cong \Z^n$. So $1,\alpha, \cdots, \alpha^{n - 1}$ are independent over $\Z$, hence $\Q$. So $\deg g = [\Q(\alpha):\Q] = n = [L:\Q]$, and we are done.
+
+ So it remains to prove that
+ \[
+ \frac{\mathcal{O}_L}{\mathfrak{p}_i} \cong \frac{\Z[\alpha]}{\mathfrak{p}_i \cap \Z[\alpha]} \cong \frac{\F_p[x]}{\bra\varphi_i\ket}.
+ \]
+ The second isomorphism is clear, since
+ \[
+ \frac{\Z[\alpha]}{\bra p, \tilde{\varphi}_i(\alpha)\ket} \cong \frac{\Z[x]}{\bra p, \tilde{\varphi}_i(x), g(x)\ket} \cong \frac{\F_p[x]}{\bra \tilde{\varphi}_i(x), g(x)\ket} = \frac{\F_p[x]}{\bra \varphi_i(x), \bar{g}(x)\ket} = \frac{\F_p[x]}{\bra \varphi_i\ket}.
+ \]
+ To prove the first isomorphism, it suffices to show that the following map is an isomorphism:
+ \begin{align*}
+ \frac{\Z[\alpha]}{p\Z[\alpha]} &\to \frac{\mathcal{O}_L}{p\mathcal{O}_L}\tag{$*$}\\
+ x + p\Z[\alpha] &\mapsto x + p \mathcal{O}_L
+ \end{align*}
+ If this is true, then quotienting further by $\tilde{\varphi}_i$ gives the desired isomorphism.
+
+ To prove the claim, we consider a slightly different map. We notice $p \nmid |\mathcal{O}_L/\Z[\alpha]|$ means the ``multiplication by $p$'' map
+ \[
+ \begin{tikzcd}
+ \displaystyle\frac{\mathcal{O}_L}{\Z[\alpha]} \ar[r, "p"] & \displaystyle\frac{\mathcal{O}_L}{\Z[\alpha]}
+ \end{tikzcd}\tag{$\dagger$}
+ \]
+ is injective. But $\mathcal{O}_L/\Z[\alpha]$ is a finite abelian group. So the map is an isomorphism.
+
+ By injectivity of $(\dagger)$, we have $\Z[\alpha] \cap p\mathcal{O}_L = p \Z[\alpha]$. By surjectivity, we have $\Z[\alpha] + p\mathcal{O}_L = \mathcal{O}_L$. It thus follows that $(*)$ is injective and surjective respectively. So it is an isomorphism. We have basically applied the snake lemma to the diagram
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z[\alpha] \ar[r, hook] \ar[d, "p"] & \mathcal{O}_L \ar[r, two heads] \ar[d, "p"] & \frac{\mathcal{O}_L}{\Z[\alpha]} \ar[r] \ar[d, "p"] & 0\\
+ 0 \ar[r] & \Z[\alpha] \ar[r, hook] & \mathcal{O}_L \ar[r, two heads] & \frac{\mathcal{O}_L}{\Z[\alpha]} \ar[r] & 0
+ \end{tikzcd}%\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
+ If $p$ is prime and $p < n = [L:\Q]$, and $\Z[\alpha] \subseteq \mathcal{O}_L$ has finite index coprime to $p$, then $p$ does \emph{not} split completely in $\mathcal{O}_L$.
+\end{cor}
+
+\begin{proof}
+ By Dedekind's theorem, if $g(x)$ is the minimal polynomial of $\alpha$, then the factorization of $\bar{g}(x) = g(x)\bmod p$ determines the factorization of $\bra p\ket$ into prime ideals. In particular, $p$ splits completely if and only if $\bar{g}$ factors into distinct linear factors, i.e.
+ \[
+ \bar{g}(x) = (x - \alpha_1) \cdots (x- \alpha_n),
+ \]
+ where $\alpha_i \in \F_p$ and $\alpha_i$ are distinct. But if $p < n$, then there aren't $n$ distinct elements of $\F_p$!
+\end{proof}
+
+\begin{eg}
 Let $L = \Q(\alpha)$, where $\alpha$ has minimal polynomial $x^3 - x^2 - 2x - 8$. This is the case where $n = 3 > 2 = p$. On example sheet $2$, you will see that $2$ splits completely, i.e.\ $\mathcal{O}_L/2\mathcal{O}_L = \F_2 \times \F_2 \times \F_2$. But then this corollary shows that for all $\beta \in \mathcal{O}_L$, $\Z[\beta] \subseteq \mathcal{O}_L$ has even index, i.e.\ there does not exist a $\beta \in \mathcal{O}_L$ with $|\mathcal{O}_L/\Z[\beta]|$ odd.
+\end{eg}
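We can at least see numerically (a sketch, not from the notes) why Dedekind's criterion says nothing here: the reduction of $g$ mod $2$ has a repeated factor, which would wrongly suggest ramification, whereas $2$ in fact splits completely.

```python
# Sketch: reduce g(x) = x^3 - x^2 - 2x - 8 modulo 2. The reduction is
# x^3 + x^2 = x^2 (x + 1), with the repeated factor x -- so the conclusion
# of Dedekind's criterion (which needs index coprime to p) fails at p = 2.

g = [-8, -2, -1, 1]  # coefficients of g, constant term first

print([c % 2 for c in g])  # [0, 0, 1, 1], i.e. x^3 + x^2 mod 2
```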
+
+%Note that even without Dedekind's criterion, we know if
+%\[
+% \bra p\ket = \mathfrak{p}_1^{e_1} \cdots \mathfrak{p}_n^{e_n},
+%\]
+%then $\mathfrak{p}_i$ is prime, hence maximal. So
+%\[
+% \frac{\mathcal{O}_L}{\mathfrak{p}_i} \cong \frac{\F_p[x]}{\varphi_i(x)},
+%\]
+%for some $\varphi_i$ irreducible in $\F_p[x]$. Writing $\deg \varphi_i = f_i$, we have
+%\[
+% \frac{\mathcal{O}_L}{p\mathcal{O}_L} = \prod_{i = 1}^m \frac{\F_p[x]}{\varphi_i(x)^{e_i}} \cong \prod_{i = 1}^m \frac{\F_{p^{f_i}}[t]}{t^{e_i}},
+%\]
+%with the isomorphisms as ring isomorphisms.
+%
+%Bonus exercise: in the case that Dedekind's theorem applies, prove this. In general, we need slightly different ideas from commutative algebra to prove it. % what is this saying?
+
+As we previously alluded to, the following is true:
+\begin{thm}
+ $p \mid D_L$ if and only if $p$ ramifies in $\mathcal{O}_L$.
+\end{thm}
+
+We will not prove this.
+
+\section{Minkowski bound and finiteness of class group}
+Dedekind's criterion allowed us to find all prime factors of $\bra p\ket$, but if we want to figure out if, say, the class group of a number field is trivial, or even finite, we still have no idea how to do so, because we cannot go and check every single prime $p$ and see what happens.
+
+%In the case of $\Z[i]$, we proved that it is in fact Euclidean by a rather geometric argument. We used the norm as the Euclidean function. Given $a, b\in \Z[i]$, $b\not= 0$. We consider the complex number
+%\[
+% \frac{a}{b} \in \C.
+%\]
+%We then looked at the complex plane, indicating points in $\Z[i]$ by red dots.
+%\begin{center}
+% \begin{tikzpicture}
+% \draw [->] (-2.5, 0) -- (2.5, 0) node [right] {$\Re$};
+% \draw [->] (0, -2.5) -- (0, 2.5) node [above] {$\Im$};
+% \foreach \x in {-2,...,2}{
+% \foreach \y in {-2,...,2}{
+% \node [mred, circ] at (\x, \y) {};
+% }
+% }
+% \node [circ] at (1.6, 0.7) {};
+% \node [right] at (1.6, 0.7) {$\frac{a}{b}$};
+% \end{tikzpicture}
+%\end{center}
+%By looking at the picture, we know that there is some $q \in \Z[i]$ such that $\left|\frac{a}{b} - q\right| < 1$. So we can write
+%\[
+% \frac{a}{b} = q + c
+%\]
+%with $|c| < 1$. Then we have
+%\[
+% a = b\cdot q + \underbrace{b\cdot c}_r.
+%\]
+%We know $r = a - bq \in \Z[i]$, and $N(r) = N(bc) = N(b)N(c) < N(b)$. So done.
+
+What we are now going to do is the following --- we are going to use purely \emph{geometric} arguments to reason about ideals, and figure that each element of the class group $\cl_L = I_L/P_L$ has a representative whose norm is bounded by some number $c_L$, which we will find rather explicitly. After finding the $c_L$, to understand the class group, we just need to factor all prime numbers less than $c_L$ and see what they look like.
+
+We are first going to do the case of quadratic extensions explicitly, since 2-dimensional pictures are easier to draw. We will then do the full general case afterwards.
+
+\subsection*{Quadratic extensions}
+Consider again the case $L = \Q(\sqrt{d})$, where $d < 0$. Then $\mathcal{O}_L = \Z[\alpha]$, where
+\[
+ \alpha =
+ \begin{cases}
+ \sqrt{d} & d \equiv 2, 3\pmod 4\\
+ \frac{1}{2}(1 + \sqrt{d})& d \equiv 1 \pmod 4
+ \end{cases}
+\]
+We can embed this as a subfield $L \subseteq \C$. We can then plot the points on the complex plane. For example, if $d \equiv 2, 3 \pmod 4$, then the points look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2,...,2}{
+ \foreach \y in {-2,...,2}{
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \node [scale=0.6, right] at (0, 0) {$0$};
+ \node [scale=0.6, right] at (0, 1) {$\sqrt{d}$};
+ \node [scale=0.6, right] at (1, 0) {$1$};
+ \node [scale=0.6, right] at (1, 1) {$1 + \sqrt{d}$};
+ \node [scale=0.6, right] at (0, 2) {$2\sqrt{d}$};
+ \end{tikzpicture}
+\end{center}
+Then an ideal of $\mathcal{O}_L$, say $\mathfrak{a} = \bra 2, \sqrt{d}\ket$, would then be the sub-lattice given by the blue crosses.
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2,...,2}{
+ \foreach \y in {-2,...,2}{
+ \ifodd\x
+ \node [circ] at (\x, \y) {};
+ \else
+ \node [mblue] at (\x, \y) {$\times$};
+ \fi
+ }
+ }
+ \end{tikzpicture}
+\end{center}
+We always get this picture, since any ideal of $\mathcal{O}_L$ is isomorphic to $\Z^2$ as an abelian group.
+
If we are in the case where $d \equiv 1 \pmod 4$, then the lattice is instead generated by $1$ and $\frac{1}{2}(1 + \sqrt{d})$, so it also contains the half-integer points:
+\begin{center}
+ \begin{tikzpicture}[scale=1.3]
+ \foreach \x in {-1.5,...,1.5}{
+ \foreach \y in {-1.5,...,1.5}{
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \foreach \x in {-1,...,1}{
+ \foreach \y in {-1,...,1}{
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \node [scale=0.6, right] at (0, 0) {$0$};
+ \node [scale=0.6, right] at (0, 1) {$\sqrt{d}$};
+ \node [scale=0.6, right] at (1, 0) {$1$};
+ \node [scale=0.6, right] at (0.5, 0.5) {$\frac{1}{2}(1 + \sqrt{d})$};
+ \end{tikzpicture}
+\end{center}
+The key result is the following purely geometric lemma:
\begin{lemma}[Minkowski's lemma]\index{Minkowski's lemma}
 Let $\Lambda = \Z \mathbf{v}_1 + \Z \mathbf{v}_2 \subseteq \R^2$ be a lattice, with $\mathbf{v}_1, \mathbf{v}_2$ linearly independent over $\R$ (i.e.\ $\R \mathbf{v}_1 + \R \mathbf{v}_2 = \R^2$). We write $\mathbf{v}_i = a_i \mathbf{e}_1 + b_i \mathbf{e}_2$. Then let
+ \[
+ A(\Lambda) = \text{area of fundamental parallelogram} = \left|\det
+ \begin{pmatrix}
+ a_1 & a_2\\
+ b_1 & b_2
+ \end{pmatrix}\right|,
+ \]
+ where the fundamental parallelogram is the following:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (.5, 1) node [above] {$\mathbf{v}_1$};
+ \draw [->] (0, 0) -- (2, .7) node [right] {$\mathbf{v}_2$};
+ \draw (.5, 1) -- (2.5, 1.7) node [anchor=south west] {$\mathbf{v}_1 + \mathbf{v}_2$} -- (2, .7);
+ \end{tikzpicture}
+ \end{center}
+ Then a closed disc $S$ around $0$ contains a non-zero point of $\Lambda$ if
+ \[
+ \area(S) \geq 4 A(\Lambda).
+ \]
+ In particular, there exists an $\alpha \in \Lambda$ with $\alpha \not= 0$, such that
+ \[
+ 0 < |\alpha|^2 \leq \frac{4 A(\Lambda)}{\pi}.
+ \]
+\end{lemma}
+This is just an easy piece of geometry. What is remarkable is that the radius of the disc needed depends only on the area of the fundamental parallelogram, and not its shape.
+
+\begin{proof}
+ We will prove a general result in any dimensions later.
+\end{proof}
+
+We now apply this to ideals $\mathfrak{a} \leq \mathcal{O}_L$, regarded as a subset of $\C = \R^2$ via some embedding $L \hookrightarrow \C$. The following proposition gives us the areas of the relevant lattices:
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
 \item If $\alpha = a + b\sqrt{d}$, then as a complex number,
 \[
 |\alpha|^2 = (a + b\sqrt{d})(a - b\sqrt{d}) = a^2 - d b^2 = N(\alpha).
 \]
+ \item For $\mathcal{O}_L$, we have
+ \[
+ A(\mathcal{O}_L) = \frac{1}{2}\sqrt{|D_L|}.
+ \]
+ \item In general, we have
+ \[
+ A(\mathfrak{a}) = \frac{1}{2} \sqrt{|\Delta(\alpha_1, \alpha_2)|},
+ \]
 where $\alpha_1, \alpha_2$ is an integral basis of $\mathfrak{a}$.
+ \item We have
+ \[
+ A(\mathfrak{a}) = N(\mathfrak{a}) A(\mathcal{O}_L).
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item This is clear.
+ \item We know $\mathcal{O}_L$ has basis $1, \alpha$, where again
+ \[
+ \alpha =
+ \begin{cases}
+ \sqrt{d} & d \equiv 2, 3\pmod 4\\
+ \frac{1}{2}(1 + \sqrt{d})& d \equiv 1 \pmod 4
+ \end{cases}.
+ \]
+ So we can just look at the picture of the lattice, and compute to get
+ \[
+ A(\mathcal{O}_L) =
+ \begin{cases}
+ \sqrt{|d|} & d \equiv 2, 3 \pmod 4\\
+ \frac{1}{2}\sqrt{|d|} & d \equiv 1\pmod 4
+ \end{cases} = \frac{1}{2}\sqrt{|D_L|}.
+ \]
 \item If $\alpha_1, \alpha_2$ is an integral basis of $\mathfrak{a}$, then the lattice of $\mathfrak{a}$ is in fact spanned by the vectors $\alpha_1 = a + bi, \alpha_2 = a' + b' i$. This has area
 \[
 A(\mathfrak{a}) = \left|\det
 \begin{pmatrix}
 a & b\\
 a' & b'
 \end{pmatrix}\right|,
 \]
+ whereas we have
 \begin{align*}
 \Delta(\alpha_1, \alpha_2) &=
 \det
 \begin{pmatrix}
 \alpha_1 & \bar{\alpha}_1\\
 \alpha_2 & \bar{\alpha}_2
 \end{pmatrix}^2\\
 &= (\alpha_1 \bar{\alpha}_2 - \alpha_2 \bar{\alpha}_1)^2\\
 &= \left(2i\Im(\alpha_1 \bar{\alpha}_2)\right)^2\\
 &= -4(a'b - ab')^2\\
 &= -4A(\mathfrak{a})^2,
 \end{align*}
 so $|\Delta(\alpha_1, \alpha_2)| = 4 A(\mathfrak{a})^2$.
+ \item This follows from (ii) and (iii), as
+ \[
+ \Delta(\alpha_1, \cdots, \alpha_n) = N(\mathfrak{a})^2 D_L
+ \]
+ in general.\qedhere
+ \end{enumerate}
+\end{proof}
+
Now what does Minkowski's lemma tell us? We know there is a non-zero $\alpha \in \mathfrak{a}$ such that
\[
 N(\alpha) = |\alpha|^2 \leq \frac{4A(\mathfrak{a})}{\pi} = N(\mathfrak{a}) c_L,
\]
+\]
+where
+\[
+ c_L = \frac{2\sqrt{|D_L|}}{\pi}.
+\]
+But $\alpha \in \mathfrak{a}$ implies $\bra \alpha\ket \subseteq \mathfrak{a}$, which implies $\bra \alpha\ket = \mathfrak{a} \mathfrak{b}$ for some ideal $\mathfrak{b}$. So
+\[
+ |N(\alpha)| = N(\bra \alpha\ket) = N(\mathfrak{a})N(\mathfrak{b}).
+\]
+So this implies
+\[
+ N(\mathfrak{b}) \leq c_L = \frac{2\sqrt{|D_L|}}{\pi}.
+\]
+Recall that the class group is $\cl_L = I_L/P_L$, the fractional ideals quotiented by principal ideals, and we write $[\mathfrak{a}]$ for the class of $\mathfrak{a}$ in $\cl_L$. Then if $\mathfrak{a}\mathfrak{b} = \bra \alpha\ket$, then we have
+\[
+ [\mathfrak{b}] = [\mathfrak{a}^{-1}]
+\]
+in $\cl_L$. So we have just shown,
+\begin{prop}[Minkowski bound]\index{Minkowski bound}
+ For all $[\mathfrak{a}] \in \cl_L$, there is a representative $\mathfrak{b}$ of $[\mathfrak{a}]$ (i.e.\ an ideal $\mathfrak{b} \leq \mathcal{O}_L$ such that $[\mathfrak{b}] = [\mathfrak{a}]$) such that
+ \[
+ N(\mathfrak{b}) \leq c_L = \frac{2\sqrt{|D_L|}}{\pi}.
+ \]
+\end{prop}
+
+\begin{proof}
 Apply the above to an ideal $\mathfrak{c}$ with $[\mathfrak{c}] = [\mathfrak{a}]^{-1}$: we obtain some $\mathfrak{b}$ with $\mathfrak{c}\mathfrak{b}$ principal and $N(\mathfrak{b}) \leq c_L$, so that $[\mathfrak{b}] = [\mathfrak{c}]^{-1} = [\mathfrak{a}]$.
+\end{proof}
+
+Combining this with the following easy lemma, we know that the class group is finite!
+\begin{lemma}
 For every $m \in \Z_{\geq 1}$, there are only finitely many ideals $\mathfrak{a} \leq \mathcal{O}_L$ with $N(\mathfrak{a}) = m$.
+\end{lemma}
+
+\begin{proof}
+ If $N(\mathfrak{a}) = m$, then by definition $|\mathcal{O}_L/\mathfrak{a}| = m$. So $m \in \mathfrak{a}$ by Lagrange's theorem. So $\bra m \ket \subseteq \mathfrak{a}$, i.e.\ $\mathfrak{a} \mid \bra m\ket$. Hence $\mathfrak{a}$ is a factor of $\bra m\ket$. By unique factorization of prime ideals, there are only finitely many such ideals.
+\end{proof}
+
+Another proof is as follows:
+\begin{proof}
 Each such ideal contains $m$, hence corresponds to an ideal of the finite ring $\mathcal{O}_L/m\mathcal{O}_L \cong (\Z/m\Z)^n$. So there are only finitely many.
+\end{proof}
+
+Thus, we have proved
+\begin{thm}
+ The class group $\cl_L$ is a finite group, and the divisors of ideals of the form $\bra p\ket$ for $p \in \Z$, $p$ a prime, and $0 < p < c_L$, collectively generate $\cl_L$.
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
 \item Each element is represented by an ideal of norm at most $2\sqrt{|D_L|}/\pi$, and there are only finitely many ideals of each norm.
+ \item Given any element of $\cl_L$, we pick a representative $\mathfrak{a}$ such that $N(\mathfrak{a}) < c_L$. We factorize
+ \[
+ \mathfrak{a}= \mathfrak{p}_1^{e_1} \cdots \mathfrak{p}_r^{e_r}.
+ \]
+ Then
+ \[
+ N(\mathfrak{p}_i) \leq N(\mathfrak{a}) < c_L.
+ \]
 Suppose $\mathfrak{p}_i \mid \bra p\ket$. Then $N(\mathfrak{p}_i)$ is a power of $p$, and is thus at least $p$. So $p < c_L$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+We now try to work with some explicit examples, utilizing Dedekind's criterion and the Minkowski bound.
+\begin{eg}
+ Consider $d = -7$. So $\Q(\sqrt{-7}) = L$, and $D_L = -7$. Then we have
+ \[
+ 1 < c_L = \frac{2\sqrt{7}}{\pi} < 2.
+ \]
+ So $\cl_L = \{1\}$, since there are no primes $p < c_L$. So $\mathcal{O}_L$ is a UFD.
+
+ Similarly, if $d = -1, -2, -3$, then $\mathcal{O}_L$ is a UFD.
+\end{eg}
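These computations are mechanical, so here is a quick numerical sketch (not part of the notes) evaluating the quadratic Minkowski bound for the values of $d$ in the example:

```python
# Sketch: the quadratic Minkowski bound c_L = 2*sqrt(|D_L|)/pi for imaginary
# quadratic fields Q(sqrt(d)); when c_L < 2 there is no prime below the
# bound, so the class group is trivial and O_L is a UFD.

import math

def c_L(d):
    D = d if d % 4 == 1 else 4 * d  # discriminant of Q(sqrt(d))
    return 2 * math.sqrt(abs(D)) / math.pi

for d in (-1, -2, -3, -7):
    print(d, round(c_L(d), 3))  # every value printed is below 2
```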
+
+\begin{eg}
+ Let $d = -5$. Then $D_L = -20$. We have
+ \[
+ 2 < c_L = \frac{4\sqrt{5}}{\pi} < 3.
+ \]
+ So $\cl_L$ is generated by primes dividing $\bra 2\ket$.
+
 Recall that Dedekind's criterion implies
+ \[
+ \bra 2\ket = \bra 2, 1 + \sqrt{-5}\ket^2 = \mathfrak{p}^2.
+ \]
 Also, $\mathfrak{p} = \bra 2, 1 + \sqrt{-5}\ket$ is not principal. If it were, then $\mathfrak{p} = \bra \beta\ket$, with $\beta = x + y \sqrt{-5}$, and $N(\beta) = 2$. But there are no solutions in $\Z$ of $x^2 + 5y^2 = 2$. So $\cl_L = \bra [\mathfrak{p}]\ket \cong \Z/2$.
+\end{eg}
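The non-existence of solutions is a finite check (a sketch, not from the notes): $x^2 \leq 2$ forces $|x| \leq 1$, and $5y^2 > 2$ whenever $y \neq 0$, so a tiny search range suffices.

```python
# Sketch: brute-force check that x^2 + 5y^2 = 2 has no integer solutions,
# so no element of Z[sqrt(-5)] has norm 2 and <2, 1+sqrt(-5)> is not
# principal. The bounds |x|, |y| <= 2 are already more than enough.

solutions = [(x, y) for x in range(-2, 3) for y in range(-2, 3)
             if x * x + 5 * y * y == 2]
print(solutions)  # []
```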
+
+\begin{eg}
 Consider $d = -17 \equiv 3 \pmod 4$. So $D_L = -68$, and $c_L = 4\sqrt{17}/\pi \approx 5.25$. So $\cl_L$ is generated by primes dividing $\bra 2\ket, \bra 3\ket, \bra 5\ket$. We factor
+ \[
+ x^2 + 17 \equiv x^2 + 1 \equiv (x + 1)^2 \pmod 2.
+ \]
+ So
+ \[
+ \bra 2\ket = \mathfrak{p}^2 = \bra 2, 1 + \sqrt{d}\ket^2.
+ \]
+ Doing this mod $3$, we have
+ \[
+ x^2 + 17 \equiv x^2 - 1 \equiv (x - 1)(x + 1) \pmod 3.
+ \]
+ So we have
+ \[
+ \bra 3\ket = \mathfrak{q}\bar{\mathfrak{q}} = \bra 3, 1 + \sqrt{d}\ket \bra 3, 1 - \sqrt{d}\ket.
+ \]
+ Finally, mod $5$, we have
+ \[
+ x^2 + 17 \equiv x^2 + 2 \pmod 5.
+ \]
+ So $5$ is inert, and $[\bra 5\ket] = 1$ in $\cl_L$. So
+ \[
+ \cl_L = \bra [\mathfrak{p}], [\mathfrak{q}]\ket,
+ \]
+ and we need to compute what this is. We can just compute powers $\mathfrak{q}^2, \mathfrak{q}^3, \cdots$, $\mathfrak{p}\mathfrak{q}, \mathfrak{p}\mathfrak{q}^2, \cdots$, and see what happens.
+
+ But a faster way is to look for principal ideals with small norms that are multiples of $2$ and $3$. For example,
+ \[
+ N(\bra 1 + \sqrt{d}\ket) = 18 = 2 \cdot 3^2.
+ \]
+ But we have
+ \[
+ 1 + \sqrt{d} \in \mathfrak{p}, \mathfrak{q}.
+ \]
+ So $\mathfrak{p}, \mathfrak{q} \mid \bra 1 + \sqrt{d}\ket$. Thus we know $\mathfrak{p}\mathfrak{q} \mid \bra 1 + \sqrt{d}\ket$. We have $N(\mathfrak{p}\mathfrak{q}) = 2 \cdot 3 = 6$. So there is another factor of $3$ to account for. In fact, we have
+ \[
+ \bra 1 + \sqrt{d}\ket = \mathfrak{p}\mathfrak{q}^2,
+ \]
+ which we can show by either thinking hard or expanding it out. So we must have
+ \[
+ [\mathfrak{p}] = [\mathfrak{q}]^{-2}
+ \]
 in $\cl_L$. So we have $\cl_L = \bra [\mathfrak{q}]\ket$. Also, $[\mathfrak{q}]^{-2} = [\mathfrak{p}] \not= 1$ in $\cl_L$: if $\mathfrak{p}$ were principal, say $\mathfrak{p} = \bra x + y\sqrt{d}\ket$, then $2 = N(\mathfrak{p}) = x^2 + 17y^2$, which has no solution in the integers. Also, we know $[\mathfrak{p}]^2 = [\bra 2\ket] = 1$. So $[\mathfrak{q}]$ has order $4$, and
 \[
 \cl_L \cong \Z/4\Z.
 \]
+\end{eg}
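The mod-$p$ factorizations and the norm computation above can be checked in a few lines (an illustrative sketch, not part of the notes):

```python
# Sketch: redo the d = -17 computations numerically: the roots of x^2 + 17
# modulo 2, 3, 5 determine the splitting, and N(1 + sqrt(-17)) = 18.

def roots(p):
    return [x for x in range(p) if (x * x + 17) % p == 0]

print(roots(2))              # [1]: a repeated root, so 2 ramifies
print(roots(3))              # [1, 2]: two distinct roots, so 3 splits
print(roots(5))              # []: no root, so 5 is inert
print(1 ** 2 + 17 * 1 ** 2)  # 18 = 2 * 3^2, the norm of 1 + sqrt(-17)
```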
+
+In fact, we have
+\begin{thm}
+ Let $L = \Q(\sqrt{d})$ with $d < 0$. Then $\mathcal{O}_L$ is a UFD if
+ \[
+ -d \in \{1, 2, 3, 7, 11, 19, 43, 67, 163\}.
+ \]
+ Moreover, this is actually an ``if and only if''.
+\end{thm}
+The first part is a straightforward generalization of what we have been doing, but the proof that no other $d$'s work is \emph{hard}.
+
+\subsection*{General case}
+Now we want to extend these ideas to higher dimensions. We are really just doing the same thing, but we need a bit harder geometry and proper definitions.
+
+\begin{defi}[Discrete subset]\index{discrete subset}
+ A subset $X \subseteq \R^n$ is \emph{discrete} if for every $x \in X$, there is some $\varepsilon > 0$ such that $B_\varepsilon(x) \cap X = \{x\}$. This is true if and only if for every compact $K \subseteq \R^n$, $K\cap X$ is finite.
+\end{defi}
+
+We have the following very useful characterization of discrete subgroups of $\R^n$:
+\begin{prop}
+ Suppose $\Lambda \subseteq \R^n$ is a subgroup. Then $\Lambda$ is a discrete subgroup of $(\R^n, +)$ if and only if
+ \[
+ \Lambda = \left\{\sum_1^m n_i \mathbf{x}_i : n_i \in \Z\right\}
+ \]
+ for some $\mathbf{x}_1, \cdots, \mathbf{x}_m$ linearly independent over $\R$.
+\end{prop}
+
+Note that linear independence is important. For example, $\Z \sqrt{2} + \Z \sqrt{3} \subseteq \R$ is not discrete. On the other hand, if $\Lambda = \mathfrak{a} \lhd \mathcal{O}_L$ is an ideal, where $L = \Q(\sqrt{d})$ and $d < 0$, then this is discrete.
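To see concretely why $\Z\sqrt{2} + \Z\sqrt{3}$ fails to be discrete, here is a quick numerical sketch (not part of the notes): small integer combinations already come very close to $0$.

```python
# Sketch: numerical evidence that Z*sqrt(2) + Z*sqrt(3) is not discrete in R.
# Integer combinations a*sqrt(2) + b*sqrt(3) with |a|, |b| <= 50 get very
# close to 0, and the minimum keeps shrinking as the range grows.

import math

best = min(abs(a * math.sqrt(2) + b * math.sqrt(3))
           for a in range(-50, 51) for b in range(-50, 51)
           if (a, b) != (0, 0))
print(best)  # far smaller than sqrt(2), the shortest generator
```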
+
+\begin{proof}
+ Suppose $\Lambda$ is generated by $\mathbf{x}_1, \cdots, \mathbf{x}_m$. By linear independence, there is some $g \in \GL_n(\R)$ such that $g \mathbf{x}_i = \mathbf{e}_i$ for all $1 \leq i \leq m$, where $\mathbf{e}_1, \cdots, \mathbf{e}_n$ is the standard basis. We know acting by $g$ preserves discreteness, since it is a homeomorphism, and $g\Lambda = \Z^m \subseteq \R^m \times \R^{n - m}$ is clearly discrete (take $\varepsilon = \frac{1}{2}$). So this direction follows.
+
+ For the other direction, suppose $\Lambda$ is discrete. We pick $\mathbf{y}_1, \cdots, \mathbf{y}_m \in \Lambda$ which are linearly independent over $\R$, with $m$ maximal (so $m \leq n$). Then by maximality, we know
+ \[
+ \left\{\sum_{i = 1}^m \lambda_i \mathbf{y}_i: \lambda_i \in \R\right\} = \left\{ \sum_1^m \lambda_i \mathbf{z}_i: \lambda_i \in \R, \mathbf{z}_i \in \Lambda\right\},
+ \]
+ and this is the smallest vector subspace of $\R^n$ containing $\Lambda$. We now let
+ \[
+ X = \left\{\sum_{i = 1}^m \lambda_i \mathbf{y}_i: \lambda_i \in [0, 1]\right\} \cong [0, 1]^m.
+ \]
+ This is closed and bounded, and hence compact. So $X \cap \Lambda$ is finite.
+
+ Also, we know
+ \[
+ \bigoplus \Z \mathbf{y}_i = \Z^m \subseteq \Lambda,
+ \]
+ and if $\gamma$ is any element of $\Lambda$, we can write it as $\gamma = \gamma_0 + \gamma_1$, where $\gamma_0 \in X$ and $\gamma_1 \in \Z^m$. So
+ \[
+ \left|\frac{\Lambda}{\Z^m}\right| \leq |X \cap \Lambda| < \infty.
+ \]
+ So let $d = |\Lambda/\Z^m|$. Then $d \Lambda \subseteq \Z^m$, i.e.\ $\Lambda \subseteq \frac{1}{d} \Z^m$. So
+ \[
+ \Z^m \subseteq \Lambda \subseteq \frac{1}{d} \Z^m.
+ \]
 So $\Lambda$ is a free abelian group of rank $m$. So there exist $\mathbf{x}_1, \cdots, \mathbf{x}_m \in \frac{1}{d}\Z^m$ which form an integral basis of $\Lambda$ and are linearly independent over $\R$.
+\end{proof}
+
+\begin{defi}[Lattice]\index{lattice}
+ If $\rank \Lambda = n = \dim \R^n$, then $\Lambda$ is a \emph{lattice} in $\R^n$.
+\end{defi}
+
+\begin{defi}[Covolume and fundamental domain]\index{covolume}\index{fundamental domain}
+ Let $\Lambda \subseteq \R^n$ be a lattice, and $\mathbf{x}_1, \cdots, \mathbf{x}_n$ be a basis of $\Lambda$, then let
+ \[
+ P = \left\{\sum_{i = 1}^n \lambda_i \mathbf{x}_i: \lambda_i \in [0, 1]\right\},
+ \]
+ and define the \emph{covolume} of $\Lambda$ to be
+ \[
+ \covol(\Lambda) = \vol(P) = |\det A|,
+ \]
+ where $A$ is the matrix such that $\mathbf{x}_i = \sum a_{ij} \mathbf{e}_j$.
+
+ We say $P$ is a \emph{fundamental domain} for the action of $\Lambda$ on $\R^n$, i.e.
+ \[
+ \R^n = \bigcup_{\gamma \in \Lambda} (\gamma + P),
+ \]
 and, for distinct $\gamma, \mu \in \Lambda$,
 \[
 (\gamma + P) \cap (\mu + P) \subseteq \partial (\gamma + P).
 \]
+ In particular, the intersection has zero volume.
+\end{defi}
+This is called the covolume since if we consider the space $\R^n/ \Lambda$, which is an $n$-dimensional torus, then this has volume $\covol (\Lambda)$.
+
+Observe now that if $\mathbf{x}_1', \cdots, \mathbf{x}_n'$ is a different basis of $\Lambda$, then the transition matrix $\mathbf{x}_i' = \sum b_{ij} \mathbf{x}_j$ has $B \in \GL_n (\Z)$. So we have $\det B = \pm 1$, and $\covol (\Lambda)$ is independent of the basis choice.
+
+With these notations, we can now state Minkowski's theorem.
+
+\begin{thm}[Minkowski's theorem]\index{Minkowski's theorem}
+ Let $\Lambda \subseteq \R^n$ be a lattice, and $P$ be a fundamental domain. We let $S \subseteq \R^n$ be a measurable set, i.e.\ one for which $\vol(S)$ is defined.
+ \begin{enumerate}
 \item Suppose $\vol(S) > \covol(\Lambda)$. Then there exist distinct $\mathbf{x}, \mathbf{y} \in S$ such that $\mathbf{x} - \mathbf{y} \in \Lambda$.
+ \item Suppose $\mathbf{0} \in S$, and $S$ is symmetric around $0$, i.e.\ $\mathbf{s} \in S$ if and only if $-\mathbf{s} \in S$, and $S$ is convex, i.e.\ for all $\mathbf{x}, \mathbf{y} \in S$ and $\lambda \in [0, 1]$, then
+ \[
+ \lambda \mathbf{x} + (1 - \lambda)\mathbf{y} \in S.
+ \]
+ Then suppose either
+ \begin{enumerate}
+ \item $\vol (S) > 2^n \covol(\Lambda)$; or
+ \item $\vol (S) \geq 2^n \covol(\Lambda)$ and $S$ is closed.
+ \end{enumerate}
+ Then $S$ contains a $\gamma \in \Lambda$ with $\gamma \not= 0$.
+ \end{enumerate}
+\end{thm}
+
+Note that for $n = 2$, this is what we used for quadratic fields.
+
+By considering $\Lambda = \Z^n \subseteq \R^n$ and $S = [-1, 1]^n$, we know the bounds are sharp.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Suppose $\vol(S) > \covol(\Lambda) = \vol(P)$. Since $P \subseteq \R^n$ is a fundamental domain, we have
+ \[
+ \vol(S) = \vol(S \cap \R^n) = \vol\left(S \cap \sum_{\gamma \in \Lambda} (P + \gamma)\right) = \sum_{\gamma \in \Lambda} \vol(S \cap (P + \gamma)).
+ \]
+ Also, we know
+ \[
+ \vol(S \cap (P + \gamma)) = \vol((S - \gamma) \cap P),
+ \]
+ as volume is translation invariant. We now claim the sets $(S - \gamma) \cap P$ for $\gamma \in \Lambda$ are \emph{not} pairwise disjoint. If they were, then
+ \[
+ \vol(P) \geq \sum_{\gamma \in \Lambda} \vol((S - \gamma) \cap P) = \sum_{\gamma \in \Lambda} \vol(S \cap (P + \gamma)) = \vol(S),
+ \]
+ contradicting our assumption.
+
 Then in particular, there are some distinct $\gamma$ and $\mu$ such that $(S - \gamma)$ and $(S - \mu)$ are not disjoint. In other words, there are $\mathbf{x}, \mathbf{y} \in S$ such that $\mathbf{x} - \gamma = \mathbf{y} - \mu$, i.e.\ $\mathbf{x} - \mathbf{y} = \gamma - \mu \in \Lambda \setminus \{0\}$.
+ \item We now let
+ \[
+ S' = \frac{1}{2} S = \left\{\frac{1}{2}s: s \in S\right\}.
+ \]
+ So we have
+ \[
+ \vol(S') = 2^{-n} \vol(S) > \covol(\Lambda),
+ \]
+ by assumption.
+ \begin{enumerate}
 \item So there exist some distinct $\mathbf{y}, \mathbf{z} \in S'$ such that $\mathbf{y} - \mathbf{z} \in \Lambda \setminus \{0\}$. We now write
 \[
 \mathbf{y} - \mathbf{z} = \frac{1}{2} (2\mathbf{y} + (-2\mathbf{z})).
 \]
 Since $2\mathbf{y}, 2\mathbf{z} \in S$, symmetry around $\mathbf{0}$ gives $-2\mathbf{z} \in S$, and then convexity gives $\mathbf{y} - \mathbf{z} \in S$.
+ \item We apply the previous part to $S_m = \left(1 + \frac{1}{m}\right)S$ for all $m \in \N$, $m > 0$. So we get a non-zero $\gamma_m \in S_m \cap \Lambda$.
+
 By convexity, we know $S_m \subseteq S_1 = 2S$ for all $m$. So $\gamma_1, \gamma_2, \cdots \in S_1 \cap \Lambda$. But $S_1$ is a compact set. So $S_1 \cap \Lambda$ is finite. So there exists $\gamma$ such that $\gamma_m = \gamma$ infinitely often. Since $S$ is closed, we get
 \[
 \gamma \in \bigcap_{m \geq 1} S_m = S.
 \]
 So $\gamma \in S$.\qedhere
+ \end{enumerate}%\qedhere
+ \end{enumerate}
+\end{proof}
+
+We are now going to use this to mimic our previous proof that the class group of an imaginary quadratic field is finite.
+
+To begin with, we need to produce lattices from ideals of $\mathcal{O}_L$. Let $L$ be a number field, and $[L:\Q] = n$. We let $\sigma_1, \cdots,\sigma_r: L \to \R$ be the real embeddings, and $\sigma_{r + 1}, \cdots, \sigma_{r + s},\bar{\sigma}_{r + 1}, \cdots, \bar{\sigma}_{r + s}: L \to \C$ be the complex embeddings (note that which embedding is $\sigma_{r + i}$ and which is $\bar{\sigma}_{r + i}$ is an arbitrary choice).
+
+Then this defines an embedding
+\[
+ \sigma = (\sigma_1, \sigma_2, \cdots, \sigma_r, \sigma_{r + 1}, \cdots, \sigma_{r + s}): L \hookrightarrow \R^r \times \C^s \cong \R^r \times \R^{2s} = \R^{r + 2s} = \R^n,
+\]
+under the isomorphism $\C \to \R^2$ by $x + iy \mapsto (x, y)$.
+
+Just as we did for quadratic fields, we can relate the norm of ideals to their covolume.
+
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item $\sigma(\mathcal{O}_L)$ is a lattice in $\R^n$ of covolume $2^{-s} |D_L|^{\frac{1}{2}}$.
+ \item More generally, if $\mathfrak{a} \lhd \mathcal{O}_L$ is an ideal, then $\sigma(\mathfrak{a})$ is a lattice and the covolume
+ \[
+ \covol(\sigma(\mathfrak{a})) = 2^{-s} |D_L|^{\frac{1}{2}} N(\mathfrak{a}).
+ \]
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}
+ Obviously (ii) implies (i). So we just prove (ii). Recall that $\mathfrak{a}$ has an integral basis $\gamma_1, \cdots, \gamma_n$. Then $\mathfrak{a}$ is the integer span of the vectors
+ \[
+ (\sigma_1(\gamma_i), \sigma_2(\gamma_i), \cdots, \sigma_{r + s}(\gamma_i))
+ \]
+ for $i = 1, \cdots, n$, and they are independent as we will soon see when we compute the determinant. So it is a lattice.
+
+ We also know that
+ \[
+ \Delta(\gamma_1, \cdots, \gamma_n) = \det (\sigma_i(\gamma_j))^2 = N(\mathfrak{a})^2 D_L,
+ \]
+ where the $\sigma_i$ run over all $\sigma_1, \cdots, \sigma_r, \sigma_{r + 1}, \cdots, \sigma_{r + s}, \bar{\sigma}_{r + 1}, \cdots \bar{\sigma}_{r + s}$.
+
+ So we know
+ \[
+ |\det (\sigma_i(\gamma_j))| = N(\mathfrak{a}) |D_L|^{\frac{1}{2}}.
+ \]
+ So what we have to do is to relate $\det(\sigma_i(\gamma_j))$ to the covolume of $\sigma(\mathfrak{a})$. But these two expressions are very similar.
+
+ In the $\sigma_i (\gamma_j)$ matrix, we have columns that look like
+ \[
+ \begin{pmatrix}
+ \sigma_{r + i}(\gamma_j) & \bar{\sigma}_{r + i}(\gamma_j)
+ \end{pmatrix} =
+ \begin{pmatrix}
+ z & \bar{z}
+ \end{pmatrix}.
+ \]
+ On the other hand, the matrix of $\sigma(\gamma)$ has corresponding entries
+ \[
+ \begin{pmatrix}
+ \Re(z) & \Im(z)
+ \end{pmatrix} =
+ \begin{pmatrix}
+ \frac{1}{2}(z + \bar{z}) &\frac{i}{2}(\bar{z} - z)
+ \end{pmatrix} =
+ \frac{1}{2}
+ \begin{pmatrix}
+ 1 & 1\\
+ i & -i
+ \end{pmatrix}
+ \begin{pmatrix}
+ z\\\bar{z}
+ \end{pmatrix}
+ \]
+ We call the last matrix $A = \displaystyle\frac{1}{2}\begin{pmatrix}1 & 1\\i & -i\end{pmatrix}$. We can compute the determinant as
+ \[
+ |\det A| = \left|\det \frac{1}{2}
+ \begin{pmatrix}
+ 1 & 1\\
+ i & -i
+ \end{pmatrix}\right| = \frac{1}{2}.
+ \]
 Hence the change of basis matrix from $(\sigma_i(\gamma_j))$ to the matrix of $\sigma(\gamma_j)$ is block diagonal, with the identity on the first $r$ coordinates and $s$ copies of $A$, so its determinant has absolute value $2^{-s}$. So this proves the lemma.
+\end{proof}
+
+\begin{prop}
+ Let $\mathfrak{a} \lhd \mathcal{O}_L$ be an ideal. Then there exists an $\alpha \in \mathfrak{a}$ with $\alpha \not= 0$ such that
+ \[
+ |N(\alpha)| \leq c_L N(\mathfrak{a}),
+ \]
+ where
+ \[
+ c_L = \left(\frac{4}{\pi}\right)^s \frac{n!}{n^n} |D_L|^{\frac{1}{2}}.
 \]
+\end{prop}
+This is the \emph{Minkowski bound}\index{Minkowski bound}.
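We can sanity-check the bound numerically; the following quick sketch (with the discriminants of quadratic fields quoted from earlier) shows, for instance, that $\Q(\sqrt{2})$ has $c_L < 2$, so every ideal class contains an ideal of norm $1$ and the class group is trivial.

```python
import math

def minkowski_bound(n, s, disc):
    """c_L = (4/pi)^s * n!/n^n * |D_L|^(1/2)."""
    return (4 / math.pi) ** s * math.factorial(n) / n ** n * abs(disc) ** 0.5

# Q(sqrt(-5)): n = 2, s = 1 pair of complex embeddings, D_L = -20
print(minkowski_bound(2, 1, -20))   # ~2.85, so primes of norm <= 2 generate cl_L
# Q(sqrt(2)): n = 2, s = 0, D_L = 8
print(minkowski_bound(2, 0, 8))     # ~1.41 < 2, so cl_L is trivial
```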
+
+\begin{proof}
+ Let
+ \[
+ B_{r, s}(t) = \left\{(y_1, \cdots, y_r, z_1, \cdots, z_s) \in \R^r \times \C^s: \sum |y_i| + 2 \sum |z_i| \leq t\right\}.
+ \]
+ This
+ \begin{enumerate}
+ \item is closed and bounded;
+ \item is measurable (it is defined by polynomial inequalities);
+ \item has volume
+ \[
+ \vol(B_{r, s}(t)) = 2^r \left(\frac{\pi}{2}\right)^s \frac{t^n}{n!};
+ \]
+ \item is convex and symmetric about $0$.
+ \end{enumerate}
 Only (iii) requires proof, and it is on the second example sheet; it is just a matter of computing the integral.
+
+ We now choose $t$ so that
+ \[
+ \vol B_{r, s}(t) = 2^n \covol(\sigma(\mathfrak{a})).
+ \]
+ Explicitly, we let
+ \[
+ t^n = \left(\frac{4}{\pi}\right)^s n! |D_L|^{1/2}N(\mathfrak{a}).
+ \]
+ Then by Minkowski's lemma, there is some $\alpha \in \mathfrak{a}$ non-zero such that $\sigma(\alpha) \in B_{r, s}(t)$. We write
+ \[
+ \sigma(\alpha) = (y_1, \cdots, y_r, z_1, \cdots, z_s).
+ \]
+ Then we observe
+ \[
+ N(\alpha) = y_1\cdots y_r z_1 \bar{z}_1 z_2 \bar{z}_2 \cdots z_s \bar{z}_s = \prod y_i \prod|z_j|^2.
+ \]
+ By the AM-GM inequality, we know
+ \[
 |N(\alpha)|^{1/n} \leq \frac{1}{n}\left(\sum |y_i| + 2 \sum |z_j|\right) \leq \frac{t}{n},
+ \]
 as we know $\sigma(\alpha) \in B_{r, s}(t)$. So we get
+ \[
+ |N(\alpha)| \leq \frac{t^n}{n^n} = c_L N(\mathfrak{a}).\qedhere
+ \]
+\end{proof}
+
+\begin{cor}
 Every $[\mathfrak{a}] \in \cl_L$ has a representative $\mathfrak{a} \lhd \mathcal{O}_L$ with $N(\mathfrak{a}) \leq c_L$.
+\end{cor}
+
+\begin{thm}[Dirichlet]\index{finiteness of class group}\index{class group!finiteness}
+ The class group $\cl_L$ is finite, and is generated by prime ideals of norm $\leq c_L$.
+\end{thm}
+
+\begin{proof}
+ Just as the case for imaginary quadratic fields.
+\end{proof}
+
+\section{Dirichlet's unit theorem}
We have previously characterized the units in $\mathcal{O}_L$ as the elements with unit norm, i.e.\ $\alpha \in \mathcal{O}_L$ is a unit if and only if $|N(\alpha)| = 1$. However, this doesn't tell us much about how many units there are, and how they are distributed. The answer to this question is given by Dirichlet's unit theorem.
+
+\begin{thm}[Dirichlet unit theorem]\index{Dirichlet unit theorem}
+ We have the isomorphism
+ \[
+ \mathcal{O}_L^\times \cong \mu_L \times \Z^{r + s - 1},
+ \]
+ where
+ \[
+ \mu_L = \{\alpha \in L: \alpha^N = 1\text{ for some }N > 0\}
+ \]
+ is the group of roots of unity in $L$, and is a finite cyclic group.
+\end{thm}
+
+Just as in the finiteness of the class group, we do it for an example first, or else it will be utterly incomprehensible.
+
+We do the example of \emph{real} quadratic fields, $L = \Q(\sqrt{d})$, where $d > 1$ is square-free. So $r = 2, s = 0$, and $L \subseteq \R$ implies $\mu_L = \{\pm 1\}$. So
+\[
+ \mathcal{O}_L^\times \cong \{\pm 1\} \times \Z.
+\]
+Also, we know that
+\[
+ N(x + y\sqrt{d}) = (x + y\sqrt{d})(x - y\sqrt{d}) = x^2 - dy^2.
+\]
So Dirichlet's theorem is saying that there are infinitely many solutions of $x^2 - dy^2 = \pm 1$, and that they are all (up to sign) powers of a single element.
+
+\begin{thm}[Pell's equation]\index{Pell's equation}
+ There are infinitely many $x + y\sqrt{d} \in \mathcal{O}_L$ such that $x^2 - dy^2 = \pm 1$.
+\end{thm}
You might have seen this in IIC Number Theory, where we proved it directly by continued fractions. We will provide a totally non-constructive proof here, since it generalizes more easily to arbitrary number fields.
+
+This is actually just half of Dirichlet's theorem. The remaining part is to show that they are all powers of the same element.
+\begin{proof}
+ Recall that $\sigma: \mathcal{O}_L \to \R^2$ sends
+ \[
+ \alpha = x + y\sqrt{d} \mapsto (\sigma_1(\alpha), \sigma_2(\alpha)) = (x + y\sqrt{d}, x - y\sqrt{d}).
+ \]
+ (in the domain, $\sqrt{d}$ is a formal symbol, while in the codomain, it is a real number, namely the positive square root of $d$)
+
+ Also, we know
+ \[
+ \covol(\sigma(\mathcal{O}_L)) = |D_L|^{\frac{1}{2}}.
+ \]
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-6, 0) -- (6, 0);
+ \draw (0, -6) -- (0, 6);
+ \foreach \x in {-6,...,6} {
+ \node [mred, circ] at (\x, \x) {};
+ }
+ \foreach \y/\n in {1/4,2/3, 3/1, 4/0} {
+ \foreach \x in {-\n,...,\n} {
+ \node [circ] at (\x + \y *1.414, \x - \y *1.414) {};
+ }
+ \foreach \x in {-\n,...,\n} {
+ \node [circ] at (\x - \y *1.414, \x + \y *1.414) {};
+ }
+ }
+
+ \node [mred, right] at (1, 1) {(1, 1)};
+ \node [mred, anchor = south west] at (6, 6) {$\Z[X]$};
+
+ \draw [mblue, domain=0.167:6] plot [smooth] (\x, {1/\x});
+ \draw [mblue, domain=0.167:6] plot [smooth] ({-\x}, {-1/\x});
+
+ \node [mblue, right] at (6, 0.167) {$N(\alpha) = 1$};
+ \end{tikzpicture}
+ \end{center}
+ Consider
+ \[
+ s_t = \left\{(y_1, y_2) \in \R^2: |y_1|\leq t, |y_2| \leq \frac{|D_L|^{1/2}}{t}\right\}.
+ \]
+ So
+ \[
 \vol(s_t) = 4|D_L|^{\frac{1}{2}} = 2^n \covol(\sigma(\mathcal{O}_L)),
+ \]
+ as $n = [L:\Q] = 2$. Now Minkowski implies there is an $\alpha \in \mathcal{O}_L$ non-zero such that $\sigma(\alpha) \in s_t$. Also, if we write
+ \[
+ \sigma(\alpha) = (y_1, y_2),
+ \]
+ then
+ \[
+ N(\alpha) = y_1y_2.
+ \]
+ So such an $\alpha$ will satisfy
+ \[
+ 1 \leq |N(\alpha)| \leq |D_L|^{1/2}.
+ \]
 This is not quite what we want, since we need $|N(\alpha)| = 1$ exactly. Nevertheless, this is a good start. So let's try to find infinitely many such elements.
+
 First notice that no point of the lattice (apart from the origin) lies on the $x$ or $y$ axis, since such a point would satisfy $x \pm y\sqrt{d} = 0$, but $\sqrt{d}$ is irrational. Also, $s_t$ is compact. So $s_t \cap \sigma(\mathcal{O}_L)$ contains finitely many points. So we can find a $t_2$ such that for each $(y_1, y_2) \in s_t \cap \sigma(\mathcal{O}_L)$, we have $|y_1| > t_2$. In particular, $s_{t_2}$ does not contain any point of $s_t \cap \sigma(\mathcal{O}_L)$. So we get a new set of non-zero elements $\alpha \in \mathcal{O}_L$ with $\sigma(\alpha) \in s_{t_2}$ such that $1 \leq |N(\alpha)| \leq |D_L|^{1/2}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) rectangle (3, 2);
+ \node at (3, 1) [right] {$s_1$};
+
+ \node [circ, mred] at (0.9, 0.8) {};
+ \node [circ, mred] at (1.7, 0.3) {};
+ \node [circ, mred] at (1.2, 1.4) {};
+ \node [circ, mred] at (2.4, 1.0) {};
+
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) rectangle (0.8, 4);
+ \node at (0.8, 3) [right] {$s_2$};
+
+ \node [circ, mred] at (0.2, 3.4) {};
+ \node [circ, mred] at (0.6, 2.8) {};
+ \end{tikzpicture}
+ \end{center}
 We can do the same thing for $s_{t_2}$ and get a new $t_3$. In general, given $t_1 > \cdots > t_n$, pick $t_{n + 1}$ such that
+ \[
+ 0 < t_{n + 1} < \min\left\{|y_1|: (y_1, y_2) \in \bigcup_{i = 1}^n s_{t_i} \cap \sigma(\mathcal{O}_L)\right\},
+ \]
 and the minimum is attained and positive, since each $s_{t_i}$ is compact and hence contains only finitely many points of $\sigma(\mathcal{O}_L)$, none of which lies on an axis.
+
+ Then we get an infinite sequence of $t_i$ such that $s_{t_i} \cap \sigma(\mathcal{O}_L)$ are disjoint for different $i$. Since each must contain at least one point, we have got infinitely many points in $\mathcal{O}_L$ satisfying $1 \leq |N(\alpha)| \leq |D_L|^{1/2}$.
+
 Since there are only finitely many integers between $1$ and $|D_L|^{1/2}$ in absolute value, the pigeonhole principle gives some integer $m$ with $1 \leq |m| \leq |D_L|^{1/2}$ such that there exist infinitely many $\alpha \in \mathcal{O}_L$ with $N(\alpha) = m$.
+
+ This is not quite good enough. We consider
+ \[
+ \mathcal{O}_L/m\mathcal{O}_L \cong (\Z/m\Z)^{[L:\Q]},
+ \]
 another finite set. We notice that each $\alpha \in \mathcal{O}_L$ must fall into one of the finitely many cosets of $m\mathcal{O}_L$ in $\mathcal{O}_L$. In particular, each $\alpha$ such that $N(\alpha) = m$ must belong to one of these cosets.
+
 So again by the pigeonhole principle, there exists a $\beta \in \mathcal{O}_L$ with $N(\beta) = m$, and infinitely many $\alpha \in \mathcal{O}_L$ with $N(\alpha) = m$ and $\alpha \equiv \beta \pmod {m\mathcal{O}_L}$.
+
+ Now of course $\alpha$ and $\beta$ are not necessarily units, if $m \not= 1$. However, we will show that $\alpha/\beta$ is. The hard part is of course showing that it is in $\mathcal{O}_L$ itself, because it is clear that $\alpha/\beta$ has norm $1$ (alternatively, by symmetry, $\beta/\alpha$ is in $\mathcal{O}_L$, so an inverse exists).
+
 Hence all that remains is to prove the general fact that if
+ \[
+ \alpha = \beta + m\gamma,
+ \]
+ where $\alpha, \beta, \gamma \in \mathcal{O}_L$ and $N(\alpha) = N(\beta) = m$, then $\alpha/\beta \in \mathcal{O}_L$.
+
+ To show this, we just have to compute
+ \[
+ \frac{\alpha}{\beta} = 1 + \frac{m}{\beta}\gamma = 1 + \frac{N(\beta)}{\beta}\gamma = 1 + \bar{\beta} \gamma \in \mathcal{O}_L,
+ \]
+ since $N(\beta) = \beta\bar{\beta}$. So done.
+\end{proof}
+We have thus constructed infinitely many units.
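As a quick numerical sanity check (a sketch, not part of the course), we can brute-force the solutions for $d = 2$; every solution found in a search box is a power of $1 + \sqrt{2}$, previewing the remaining claim.

```python
d = 2
# All solutions of x^2 - 2 y^2 = ±1 with 1 <= x, y < 200
sols = {(x, y) for x in range(1, 200) for y in range(1, 200)
        if abs(x * x - d * y * y) == 1}

# Powers of eps = 1 + sqrt(2), using
# (a + b sqrt 2)(1 + sqrt 2) = (a + 2b) + (a + b) sqrt 2
powers = set()
a, b = 1, 0
for _ in range(12):
    a, b = a + 2 * b, a + b
    powers.add((a, b))

print(sorted(sols))   # (1,1), (3,2), (7,5), (17,12), (41,29), (99,70)
assert sols <= powers # every solution found is a power of 1 + sqrt(2)
```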
+
+We now prove the remaining part
\begin{thm}[Dirichlet's unit theorem for real quadratic fields]\index{Dirichlet's unit theorem}
+ Let $L = \Q(\sqrt{d})$. Then there is some $\varepsilon_0 \in \mathcal{O}_L^\times$ such that
+ \[
+ \mathcal{O}_L^\times = \{\pm \varepsilon_0^n : n \in \Z\}.
+ \]
+ We call such an $\varepsilon_0$ a \emph{fundamental unit} (which is not unique). So
+ \[
+ \mathcal{O}_L^\times \cong \{\pm 1\} \times \Z.
+ \]
+\end{thm}
+
+\begin{proof}
 We have just proved the really powerful theorem that there are infinitely many $\varepsilon$ with $N(\varepsilon) = \pm 1$. We are not going to need the full theorem. All we need is that there are at least three --- in particular, one that is not $\pm 1$.
+
+ We pick some $\varepsilon \in \mathcal{O}_L^\times$ with $\varepsilon \not= \pm 1$. This exists by what we just proved. Then we know
+ \[
+ |\sigma_1(\varepsilon)| \not= 1,
+ \]
 as $|\sigma_1(\varepsilon)| = 1$ if and only if $\varepsilon = \pm 1$. Replacing $\varepsilon$ by $\varepsilon^{-1}$ if necessary, we may wlog assume $E = |\sigma_1(\varepsilon)| > 1$. Now consider
+ \[
+ \{\alpha \in \mathcal{O}_L: N(\alpha) = \pm 1, 1 \leq |\sigma_1(\alpha)| \leq E\}.
+ \]
 This is again finite, since it corresponds to the points of the lattice $\sigma(\mathcal{O}_L)$ lying in a compact set. So we pick $\varepsilon_0$ in this set with $\varepsilon_0 \not= \pm 1$ and $|\sigma_1(\varepsilon_0)|$ minimal ($> 1$). Replacing $\varepsilon_0$ by $-\varepsilon_0$ if necessary, we can assume $\sigma_1(\varepsilon_0) > 1$.
+
+ Finally, we claim that if $\varepsilon \in \mathcal{O}_L^\times$ and $\sigma_1(\varepsilon) > 0$, then $\varepsilon = \varepsilon_0^N$ for some $N \in \Z$. This is obvious if we have addition instead of multiplication. So we take logs.
+
+ Suppose
+ \[
+ \frac{\log \varepsilon}{\log \varepsilon_0} = N + \gamma,
+ \]
+ where $N \in \Z$ and $0 \leq \gamma < 1$. Then we know
+ \[
+ \varepsilon\varepsilon_0^{-N} = \varepsilon_0^\gamma \in \mathcal{O}_L^{\times},
+ \]
+ but $|\varepsilon_0^\gamma| = |\varepsilon_0|^\gamma < |\varepsilon_0|$, as $|\varepsilon_0| > 1$. By our choice of $\varepsilon_0$, we must have $\gamma = 0$. So done.
+\end{proof}
+
+Now we get to prove the Dirichlet unit theorem in its full glory.
+\begin{thm}[Dirichlet unit theorem]\index{Dirichlet's unit theorem}
+ We have the isomorphism
+ \[
+ \mathcal{O}_L^\times \cong \mu_L \times \Z^{r + s - 1},
+ \]
+ where
+ \[
+ \mu_L = \{\alpha \in L: \alpha^N = 1\text{ for some }N > 0\}
+ \]
+ is the group of roots of unity in $L$, and is a finite cyclic group.
+\end{thm}
+
+\begin{proof}
+ We do the proof in the opposite order. We throw in the logarithm at the very beginning. We define
+ \[
+ \ell: \mathcal{O}_L^\times \to \R^{r + s}
+ \]
+ by
+ \[
+ x\mapsto (\log|\sigma_1(x)|, \cdots, \log|\sigma_r(x)|, 2\log|\sigma_{r + 1}(x)|, \cdots, 2 \log|\sigma_{r + s}(x)|).
+ \]
 Note that $|\sigma_{r + i}(x)| = |\overline{\sigma_{r + i}(x)}|$. So this is independent of the choice of one of $\sigma_{r + i}, \bar{\sigma}_{r + i}$.
+
+ \begin{claim}
+ We now claim that $\im \ell$ is a discrete group in $\R^{r + s}$ and $\ker \ell = \mu_L$ is a finite cyclic group.
+ \end{claim}
+ We note that
+ \[
+ \log|ab| = \log|a| + \log|b|.
+ \]
+ So this is a group homomorphism, and the image is a subgroup. To prove the first part, it suffices to show that $\im \ell \cap [-A, A]^{r + s}$ is finite for all $A > 0$. We notice $\ell$ factors as
+ \[
+ \begin{tikzcd}
+ \mathcal{O}_L^\times \ar[r, hook] & \mathcal{O}_L \ar[r, hook, "\sigma"] & \R^r \times \C^s \ar[r, "j"] & \R^{r + s}.
+ \end{tikzcd}
+ \]
+ where $\sigma$ maps $\alpha \mapsto (\sigma_1(\alpha), \cdots, \sigma_{r + s}(\alpha))$, and
+ \[
 j: (y_1, \cdots, y_r, z_1, \cdots, z_s) \mapsto (\log|y_1|, \cdots, \log|y_r|, 2\log|z_1|, \cdots, 2\log|z_s|).
+ \]
+ We see
+ \[
 j^{-1}([-A, A]^{r + s}) = \{(y_i, z_j): e^{-A} \leq |y_i| \leq e^A, e^{-A/2} \leq |z_j| \leq e^{A/2}\}
+ \]
+ is a compact set, and $\sigma(\mathcal{O}_L)$ is a lattice, in particular discrete. So $\sigma(\mathcal{O}_L) \cap j^{-1}([-A, A]^{r + s})$ is finite. This also shows the kernel is finite, since the kernel is the inverse image of a compact set.
+
 Now as $\ker \ell$ is finite, all elements are of finite order. So $\ker \ell \subseteq \mu_L$. Conversely, it is clear that $\mu_L \subseteq \ker \ell$. So it remains to show that $\mu_L$ is cyclic. Since $L$ embeds in $\C$, we know $\mu_L$ is contained in the roots of unity in $\C$. Since $\mu_L$ is finite, it is generated by the root of unity in it with the smallest positive argument (from, say, IA Groups).
+
+ \begin{claim}
+ We claim that
+ \[
+ \im \ell \subseteq \left\{(y_1, \cdots, y_{r + s}): \sum y_i = 0\right\} \cong \R^{r + s - 1}.
+ \]
+ \end{claim}
+ To show this, note that if $\alpha \in \mathcal{O}_L^\times$, then
+ \[
 N(\alpha) = \prod_{i = 1}^r \sigma_i(\alpha) \prod_{\ell = 1}^s \sigma_{r + \ell}(\alpha) \bar{\sigma}_{r + \ell}(\alpha) = \pm 1.
+ \]
+ Taking the log of the absolute values, we get
+ \[
+ 0 = \sum \log |\sigma_i(\alpha)| + 2 \sum \log |\sigma_{r + i}(\alpha)|.
+ \]
 So we know $\im \ell$ is a discrete subgroup of the hyperplane $\R^{r + s - 1}$. So it is isomorphic to $\Z^a$ for some $a \leq r + s - 1$. Then what we want to show is that $\im \ell \subseteq \R^{r + s - 1}$ is a lattice, i.e.\ it is isomorphic to $\Z^{r + s - 1}$.
+
+ Note that so far what we have done is the second part of what we did for the real quadratic fields. We took the logarithm to show that these form a discrete subgroup. Next, we want to find $r + s - 1$ independent elements to show it is a lattice.
+
+ \begin{claim}
+ Fix a $k$ such that $1 \leq k \leq r + s$ and $\alpha \in \mathcal{O}_L$ with $\alpha \not= 0$. Then there exists a $\beta \in \mathcal{O}_L$ such that
+ \[
+ |N(\beta)| \leq \left(\frac{2}{\pi}\right)^s |D_L|^{1/2},
+ \]
+ and moreover if we write
+ \begin{align*}
+ \ell(\alpha) &= (a_1, \cdots, a_{r + s})\\
+ \ell(\beta) &= (b_1, \cdots, b_{r + s}),
+ \end{align*}
+ then we have $b_i < a_i$ for all $i \not= k$.
+ \end{claim}
+ We can apply Minkowski to the region
+ \[
+ S = \{(y_1, \cdots, y_r, z_1, \cdots, z_s) \in \R^r \times \C^s: |y_i| \leq c_i, |z_j| \leq c_{r + j}\}
+ \]
+ (we will decide what values of $c_i$ to take later). Then this has volume
+ \[
+ \vol(S) = 2^r \pi^s c_1 \cdots c_{r + s}.
+ \]
 We notice $S$ is convex and symmetric around $0$. We choose $0 < c_i < e^{a_i}$ for $i \not= k$, and set
 \[
 c_k = \left(\frac{2}{\pi}\right)^s |D_L|^{1/2} \frac{1}{c_1 \cdots \hat{c}_k \cdots c_{r + s}},
 \]
 so that $\vol(S) = 2^n \covol(\sigma(\mathcal{O}_L))$. Then Minkowski's lemma gives a non-zero $\beta \in \mathcal{O}_L$ with $\sigma(\beta) \in S$, satisfying the two conditions above.
+
+ \begin{claim}
 For any $k = 1, \cdots, r + s$, there is a unit $u_k \in \mathcal{O}_L^\times$ such that if $\ell(u_k) = (y_1, \cdots, y_{r + s})$, then $y_i < 0$ for all $i \not= k$ (and hence $y_k > 0$ since $\sum y_i = 0$).
+ \end{claim}
+ This is just as in the proof for the real quadratic case. We can repeatedly apply the previous claim to get a sequence $\alpha_1, \alpha_2, \cdots \in \mathcal{O}_L$ such that $N(\alpha_t)$ is bounded for all $t$, and for all $i \not= k$, the $i$th coordinate of $\ell(\alpha_1), \ell(\alpha_2), \cdots$ is strictly decreasing. But then as with real quadratic fields, the pigeonhole principle implies we can find $t, t'$ such that
+ \[
+ N(\alpha_t) = N(\alpha_{t'}) = m,
+ \]
+ say, \emph{and}
+ \[
+ \alpha_t \equiv \alpha_{t'} \pmod {m\mathcal{O}_L},
+ \]
+ i.e.\ $\alpha_t = \alpha_{t'}$ in $\mathcal{O}_L/m\mathcal{O}_L$. Hence for each $k$, we get a unit $u_k = \alpha_t/\alpha_{t'}$ such that
+ \[
 \ell(u_k) = \ell(\alpha_t) - \ell(\alpha_{t'}) = (y_1, \cdots, y_{r + s})
+ \]
+ has $y_i < 0$ if $i \not= k$ (and hence $y_k > 0$, since $\sum y_i = 0$). We need a final trick to show the following:
+ \begin{claim}
 The vectors $\ell(u_1), \cdots, \ell(u_{r + s - 1})$ are linearly independent. Hence the rank of $\ell(\mathcal{O}_L^\times)$ is $r + s - 1$, and Dirichlet's theorem is proved.
+ \end{claim}
+ We let $A$ be the $(r + s)\times (r + s)$ matrix whose $j$th row is $\ell(u_j)$, and apply the following lemma:
+
+ \begin{claim}
+ Let $A \in \Mat_{m}(\R)$ be such that $a_{ii} > 0$ for all $i$ and $a_{ij} < 0$ for all $i\not= j$, and $\sum_j a_{ij} \geq 0$ for each $i$. Then $\rank(A) \geq m - 1$.
+ \end{claim}
+
+ To show this, we let $\mathbf{v}_i$ be the $i$th column of $A$. We show that $\mathbf{v}_1, \cdots, \mathbf{v}_{m - 1}$ are linearly independent. If not, there exists a sequence $t_i \in \R$ such that
+ \[
+ \sum_{i = 1}^{m - 1} t_i \mathbf{v}_i = 0,\tag{$*$}
+ \]
 with not all of the $t_i$ zero. We choose $k$ so that $|t_k|$ is maximal among $t_1, \cdots, t_{m - 1}$; in particular $t_k \not= 0$. Dividing the whole equation by $t_k$, we can wlog assume $t_k = 1$ and $t_i \leq 1$ for all $i$.
+
+ Now consider the $k$th row of $(*)$. We get
+ \[
+ 0 = \sum_{i = 1}^{m - 1} t_i a_{ki} \geq \sum_{i = 1}^{m - 1} a_{ki},
+ \]
 as $a < 0$ and $t \leq 1$ imply $at \geq a$. Moreover, we know $a_{km} < 0$ strictly. So we get
 \[
 \sum_{i = 1}^{m - 1} a_{ki} = \sum_{i = 1}^m a_{ki} - a_{km} > \sum_{i = 1}^m a_{ki} \geq 0.
 \]
+ This is a contradiction. So done.
+\end{proof}
+You should not expect this to be examinable.
+
+We make a quick definition that we will need later.
+\begin{defi}[Regulator]\index{regulator}
+ The \emph{regulator} of a number field $L$ is
+ \[
+ R_L = \covol (\ell(\mathcal{O}_L^\times) \subseteq \R^{r + s - 1}).
+ \]
+\end{defi}
+More concretely, we pick fundamental units $\varepsilon_1, \cdots, \varepsilon_{r + s - 1} \in \mathcal{O}_L^\times$ so that
+\[
+ \mathcal{O}_L^\times = \mu_L \times \{\varepsilon_1^{n_1} \cdots \varepsilon_{r + s - 1}^{n_{r + s - 1}} : n_i \in \Z\}.
+\]
We take any $(r + s - 1)\times(r + s - 1)$ minor of the $(r + s) \times (r + s - 1)$ matrix $\begin{pmatrix} \ell(\varepsilon_1) & \cdots & \ell(\varepsilon_{r + s - 1})\end{pmatrix}$. Their determinants all have the same absolute value, and
+\[
+ |\det(\text{subminor})| = R_L.
+\]
+This is a definition we will need later.
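For a real quadratic field, $r + s - 1 = 1$ and the regulator is a single logarithm, which we can sanity-check numerically (taking for granted that $1 + \sqrt{2}$ is a fundamental unit of $\Q(\sqrt{2})$, as verified in an example below):

```python
import math

# ell(eps) = (log|sigma_1(eps)|, log|sigma_2(eps)|) = (log eps, -log eps),
# so any 1x1 minor of the 2x1 matrix has absolute value |log eps| = R_L.
eps = 1 + math.sqrt(2)
R_L = abs(math.log(eps))
print(R_L)   # ~0.8814
```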
+
+We quickly look at some examples with quadratic fields. Consider $L = \Q(\sqrt{d})$, where $d \not= 0, 1$ square-free.
+\begin{eg}
+ If $d < 0$, then $r = 0$ and $s = 1$. So $r + s - 1 = 0$. So $\mathcal{O}_L^\times = \mu_L$ is a finite group. So $R_L = 1$.
+\end{eg}
+
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $d = -1$, then $\Z[i]^{\times} = \{\pm 1, \pm i\} = \Z/4\Z$.
+ \item If $d = -3$, then let $\omega = \frac{1}{2}(1 + \sqrt{d})$, and we have $\omega^6 = 1$. So $\Z[\omega]^{\times} = \{1, \omega, \cdots, \omega^5\} \cong \Z/6\Z$.
+ \item For any other $d < 0$, we have $\mathcal{O}_L^\times = \{\pm 1\}$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}
+ This is just a direct check.
+
+ If $d \equiv 2, 3 \pmod 4$, then by looking at the solution of $x^2 - dy^2 = \pm 1$ in the integers, we get (i) and (iii).
+
+ If $d\equiv 1 \pmod 4$, then by looking at the solutions to $\left(x + \frac{y}{2}\right)^2 - \frac{d}{4}y^2 = \pm 1$ in the integers, we get (ii) and (iii).
+\end{proof}
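The case check is easy to do by machine. A small sketch, solving the norm equations over a box of integer coordinates (using that $\mathcal{O}_L = \Z[\sqrt{d}]$ or $\Z[\frac{1 + \sqrt{d}}{2}]$ according to $d$ mod $4$; note Python's `%` returns a non-negative residue even for negative $d$):

```python
def units(d):
    """Unit group of O_L for L = Q(sqrt(d)), d < 0 squarefree."""
    out = []
    for x in range(-2, 3):
        for y in range(-2, 3):
            if d % 4 in (2, 3):
                # O_L = Z[sqrt(d)]: solve x^2 - d y^2 = 1
                if x * x - d * y * y == 1:
                    out.append((x, y))
            else:
                # O_L = Z[w], w = (1 + sqrt(d))/2:
                # N(x + y w) = (x + y/2)^2 - d (y/2)^2, cleared of denominators
                if (2 * x + y) ** 2 - d * y * y == 4:
                    out.append((x, y))
    return out

print(len(units(-1)), len(units(-3)), len(units(-5)))   # 4 6 2
```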
+
Now if $d > 0$, then $R_L = |\log|\varepsilon||$, where $\varepsilon$ is a fundamental unit. So how do we find a fundamental unit? In general, there is no good algorithm for finding the fundamental unit of a number field. The best known algorithm takes exponential time. We do have a good algorithm for quadratic fields using continued fractions, but we are not allowed to use that.
+
+Instead, we could just guess a solution --- we find a unit by guessing, and then show there is no smaller one by direct check.
+
+\begin{eg}
 Consider the field $\Q(\sqrt{2})$. We can try $\varepsilon = 1 + \sqrt{2}$. We have $N(\varepsilon) = 1 - 2 = -1$. So this is a unit. We claim it is fundamental. If not, then there exists $u = a + b\sqrt{2}$, where $a, b \in \Z$ and $1 < u < \varepsilon$ (as real numbers). Then we have
+ \[
+ \bar{u} = a - b\sqrt{2}
+ \]
 has $u\bar{u} = \pm 1$. Since $u > 1$, we know $|\bar{u}| < 1$. Then we must have $u \pm \bar{u} > 0$. So we need $a, b > 0$. There can only be finitely many possibilities for
+ \[
+ 1 < a + b\sqrt{2} < 1 + \sqrt{2},
+ \]
 where $a, b$ are positive integers. But there are none, since $a, b \geq 1$ forces $a + b\sqrt{2} \geq 1 + \sqrt{2}$. So done.
+\end{eg}
+
+\begin{eg}
+ Consider $\Q(\sqrt{11})$. We guess $\varepsilon = 10 - 3\sqrt{11}$ is a unit. We can compute $N(\varepsilon) = 100 - 99 = 1$. Note that $\varepsilon < 1$ and $\varepsilon^{-1} > 1$.
+
+ Suppose this is not fundamental. Then we have some $u$ such that
+ \[
+ 1 < u = a + b\sqrt{11} < 10 + 3\sqrt{11} = \varepsilon^{-1} < 20.\tag{$*$}
+ \]
+ We can check all the cases, but there is a faster way.
+
+ We must have $N(u) = \pm 1$. If $N(u) = -1$, then $a^2 - 11b^2 = -1$. But $-1$ is not a square mod $11$.
+
 So we must have $N(u) = 1$. Then $u^{-1} = \bar{u}$. We get $0 < \varepsilon < u^{-1} = \bar{u} < 1$ also. So
+ \[
+ -1 < -a + b\sqrt{11} < 0.
+ \]
+ Adding this to $(*)$, we get
+ \[
+ 0 < 2b\sqrt{11} < 10 + 3\sqrt{11} < 7\sqrt{11}.
+ \]
 So $b = 1, 2$ or $3$. For $b = 1, 2$, the number $11b^2 + 1$ is not a square, and $b = 3$ gives $a = 10$, i.e.\ $u = 10 + 3\sqrt{11} = \varepsilon^{-1}$, which is excluded by the strict inequality in $(*)$. So done.
+\end{eg}
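The final case check is easy to mechanize; this sketch also confirms that $b = 3$ recovers $\varepsilon^{-1}$ itself rather than a smaller unit.

```python
import math

# Units u = a + b*sqrt(11) with N(u) = 1 and 1 < u < 10 + 3*sqrt(11):
# the argument above forces b in {1, 2, 3} and a^2 = 11 b^2 + 1.
smaller = []
for b in range(1, 4):
    a2 = 11 * b * b + 1
    a = math.isqrt(a2)
    if a * a == a2:
        print(b, a)   # only b = 3 passes, giving a = 10
        if a + b * math.sqrt(11) < 10 + 3 * math.sqrt(11):
            smaller.append((a, b))
assert smaller == []  # no smaller unit, so 10 - 3*sqrt(11) is fundamental
```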
+
+\section{\texorpdfstring{$L$}{L}-functions, Dirichlet series*}
+This section is non-examinable.
+
+We start by proving the exciting fact that there are infinitely many primes.
+\begin{thm}[Euclid]
+ There are infinitely many primes.
+\end{thm}
+
+\begin{proof}
+ Consider the function
+ \[
+ \prod_{p\text{ primes}} \left(1 - \frac{1}{p}\right)^{-1} = \prod_{p\text{ prime}}\left(1 + \frac{1}{p} + \frac{1}{p^2} + \cdots\right) = \sum_{n > 0}\frac{1}{n}.
+ \]
+ This is since every $n = p_1^{e_1} \cdots p_r^{e_r}$ factors uniquely as a product of primes, and each such product appears exactly once in this. If there were finitely many primes, as $\sum \frac{1}{p^n}$ converges to $\left(1 - \frac{1}{p}\right)^{-1}$, the sum
+ \[
 \sum_{n \geq 1}\frac{1}{n} = \prod_{p\text{ prime}} \left(1 - \frac{1}{p}\right)^{-1}
+ \]
+ must be finite. But the harmonic series diverges. This is a contradiction.
+\end{proof}
+
+We all knew that. What we now want to prove is something more interesting.
+\begin{thm}[Dirichlet's theorem]\index{Dirichlet's theorem}
 Let $a, q \in \Z$ be coprime. Then there exist infinitely many primes in the sequence
+ \[
+ a, a + q, a + 2q, \cdots,
+ \]
+ i.e.\ there are infinitely many primes in any such arithmetic progression.
+\end{thm}
+
+We want to imitate the Euler proof, but then that would amount to showing that
+\[
+ \prod_{\substack{p \equiv a \text{ mod } q\\ p\text{ prime}}} \left(1 - \frac{1}{p}\right)^{-1}
+\]
+is divergent, and there is no nice expression for this. So it will be a lot more work.
+
+To begin with, we define the Riemann zeta function.
+\begin{defi}[Riemann zeta function]\index{Riemann zeta function}\index{$\zeta$}
+ The \emph{Riemann zeta function} is defined as
+ \[
+ \zeta(s) = \sum_{n \geq 1}n^{-s}
+ \]
+ for $s \in \C$.
+\end{defi}
+
+There are some properties we will show (or assert):
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item The Riemann zeta function $\zeta(s)$ converges for $\Re(s) > 1$.
+ \item The function
+ \[
+ \zeta(s) - \frac{1}{s - 1}
+ \]
+ extends to a holomorphic function when $\Re(s) > 0$.
+
+ In other words, $\zeta(s)$ extends to a meromorphic function on $\Re(s) > 0$ with a simple pole at $1$ with residue $1$.
+ \item We have the expression
+ \[
+ \zeta(s) = \prod_{p\text{ prime}} \left(1 - \frac{1}{p^s}\right)^{-1}
+ \]
+ for $\Re(s) > 1$, and the product is absolutely convergent. This is the \emph{Euler product}.
+ \end{enumerate}
+\end{prop}
+
+The first part follows from the following general fact about Dirichlet series.
+\begin{defi}[Dirichlet series]\index{Dirichlet series}
+ A \emph{Dirichlet series} is a series of the form $\sum a_n n^{-s}$, where $a_1, a_2, \cdots\in \C$.
+\end{defi}
+
+\begin{lemma}
+ If there is a real number $r \in \R$ such that
+ \[
+ a_1 + \cdots + a_N = O(N^r),
+ \]
+ then
+ \[
+ \sum a_n n^{-s}
+ \]
+ converges for $\Re(s) > r$, and is a holomorphic function there.
+\end{lemma}
+Then (i) is immediate by picking $r = 1$, since in the Riemann zeta function, $a_1 = a_2 = \cdots = 1$.
+
+Recall that $x^s = e^{s \log x}$ has
+\[
 |x^s| = x^{\Re(s)}
+\]
+if $x \in \R, x > 0$.
+
+\begin{proof}
+ This is just IA Analysis. Suppose $\Re(s) > r$. Then we can write
+ \begin{multline*}
+ \sum_{n = 1}^N a_n n^{-s} = a_1 (1^{-s} - 2^{-s}) + (a_1 + a_2)(2^{-s} - 3^{-s}) + \cdots\\ + (a_1 + \cdots + a_{N - 1})((N - 1)^{-s} - N^{-s}) + R_N,
+ \end{multline*}
+ where
+ \[
+ R_N = \frac{a_1 + \cdots + a_N}{N^s}.
+ \]
+ This is getting annoying, so let's write
+ \[
+ T(N) = a_1 + \cdots + a_N.
+ \]
+ We know
+ \[
+ \left|\frac{T(N)}{N^s}\right| = \left|\frac{T(N)}{N^r}\right| \frac{1}{N^{\Re(s) - r}} \to 0
+ \]
+ as $N \to \infty$, by assumption. Thus we have
+ \[
+ \sum_{n \geq 1} a_n n^{-s} = \sum_{n \geq 1} T(n) (n^{-s} - (n + 1)^{-s})
+ \]
 if $\Re(s) > r$. But again by assumption, $|T(n)| \leq B \cdot n^r$ for some constant $B$ and all $n$. So it is enough to show that
 \[
 \sum_n n^r |n^{-s} - (n + 1)^{-s}|
 \]
 converges. But
 \[
 n^{-s} - (n + 1)^{-s} = \int_n^{n + 1} \frac{s}{x^{s + 1}}\;\d x,
 \]
 and if $x \in [n, n + 1]$, then $n^r \leq x^r$. So we have
 \[
 n^r |n^{-s} - (n + 1)^{-s}| \leq \int_n^{n + 1} x^r \frac{|s|}{x^{\Re(s) + 1}} \;\d x = |s| \int_n^{n + 1} \frac{\d x}{x^{\Re(s) + 1 - r}}.
 \]
 It thus suffices to show that
 \[
 \int_1^n \frac{\d x}{x^{\Re(s) + 1 - r}}
 \]
 converges as $n \to \infty$, which it does (to $\frac{1}{\Re(s) - r}$).
+\end{proof}
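As an illustration of the lemma beyond $\zeta$, take $a_n = (-1)^{n - 1}$: the partial sums are bounded, so we may take $r = 0$, and $\sum (-1)^{n - 1} n^{-s}$ converges for $\Re(s) > 0$. A numerical sketch at $s = 1$, where the sum is the alternating harmonic series $\log 2$:

```python
import math

# Partial sums of a_n = (-1)^(n-1) are 0 or 1, so r = 0 in the lemma,
# and sum (-1)^(n-1) n^(-s) converges for Re(s) > 0. At s = 1 this is
# the alternating harmonic series, with sum log 2.
N = 200000
eta = sum((-1) ** (n - 1) / n for n in range(1, N + 1))
print(abs(eta - math.log(2)))   # of order 1/(2N)
```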
+
+We omit the proof of (ii). The idea is to write
+\[
+ \frac{1}{s - 1} = \sum_{n = 1}^\infty \int_n^{n + 1} \frac{\d x}{x^s},
+\]
+and show that $\sum \phi_n$ is uniformly convergent when $\Re(s) > 0$, where
+\[
+ \phi_n = n^{-s} - \int_n^{n + 1} \frac{\d x}{x^s}.
+\]
+For (iii), consider the first $r$ primes $p_1, \cdots, p_r$, and
+\[
+ \prod_{i = 1}^r (1 - p_i^{-s})^{-1} = \sum n^{-s},
+\]
+where the sum is over the positive integers $n$ whose prime divisors are among $p_1, \cdots, p_r$. Notice that $1, \cdots, r$ are certainly in the set.
+
+So
+\[
 \left|\zeta(s) - \prod_{i = 1}^r (1 - p_i^{-s})^{-1}\right| \leq \sum_{n > r} |n^{-s}| = \sum_{n > r} n^{-\Re(s)}.
+\]
But $\sum_{n > r} n^{-\Re(s)} \to 0$ as $r \to \infty$, proving the result, provided we also show that the product converges absolutely. We omit this proof, but it follows from the fact that
\[
 \sum_{p \text{ prime}} |p^{-s}| \leq \sum_n |n^{-s}|,
\]
and the latter converges for $\Re(s) > 1$, plus the fact that $\prod (1 - a_n)$ converges to a non-zero limit if $\sum |a_n|$ converges, by IA Analysis I.
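For a quick numerical look at the Euler product (a sketch): the partial product over primes up to $1000$ at $s = 2$ already agrees with $\zeta(2) = \pi^2/6$ to about three decimal places.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

# Partial Euler product for zeta(2), against the true value pi^2/6
prod = 1.0
for p in primes_up_to(1000):
    prod *= 1 / (1 - p ** -2)
print(prod, math.pi ** 2 / 6)   # agree to ~3 decimal places
```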
+
+This is good, but not what we want. Let's mimic this definition for an arbitrary number field!
+
+\begin{defi}[Zeta function]\index{zeta function}\index{$\zeta_L$}
+ Let $L \supseteq \Q$ be a number field, and $[L:\Q] = n$. We define the \emph{zeta function of $L$} by
+ \[
+ \zeta_L(s) = \sum_{\mathfrak{a} \lhd \mathcal{O}_L} N(\mathfrak{a})^{-s}.
+ \]
+\end{defi}
+It is clear that if $L = \Q$ and $\mathcal{O}_L = \Z$, then this is just the Riemann zeta function.
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item $\zeta_L(s)$ converges to a holomorphic function if $\Re(s) > 1$.
+ \item \emph{Analytic class number formula}\index{analytic class number formula}: $\zeta_L(s)$ is a meromorphic function if $\Re(s) > 1 - \frac{1}{n}$ and has a simple pole at $s = 1$ with residue
+ \[
+ \frac{|\cl_L| 2^r(2\pi)^s R_L}{|D_L|^{1/2} |\mu_L|},
+ \]
 where $\cl_L$ is the class group, $r$ and $s$ are the numbers of real embeddings and pairs of complex embeddings, you know what $\pi$ is, $R_L$ is the regulator, $D_L$ is the discriminant and $\mu_L$ is the group of roots of unity in $L$.
+ \item
+ \[
+ \zeta_L(s) = \prod_{\mathfrak{p} \lhd \mathcal{O}_L\text{ prime ideal}} (1 - N(\mathfrak{p})^{-s})^{-1}.
+ \]
+ This is again known as the \emph{Euler product}.
+ \end{enumerate}
+\end{thm}
+We will not prove this, but the proof does not actually require any new ideas. Note that
+\[
+ \sum_{\mathfrak{a} \lhd \mathcal{O}_L} N(\mathfrak{a})^{-s} = \prod_{\mathfrak{p} \lhd \mathcal{O}_L,\mathfrak{p}\text{ prime}} (1 - N(\mathfrak{p})^{-s})^{-1}
+\]
holds ``formally'', as in the terms match up when you expand, as an immediate consequence of the unique factorization of ideals into a product of prime ideals. The issue is to study the convergence of $\sum N(\mathfrak{a})^{-s}$, and this comes down to estimating the number of ideals of fixed norm geometrically, which is where all the factors in the residue at the pole come in.
+
+\begin{eg}
+ We try to compute $\zeta_L(s)$, where $L = \Q(\sqrt{d})$. This has discriminant $D$, which may be $d$ or $4d$. We first look at the prime ideals.
+
 If $\mathfrak{p}$ is a prime ideal in $\mathcal{O}_L$, then $\mathfrak{p} \mid \bra p\ket$ for a unique prime $p \in \Z$. So let's enumerate the factors of $\zeta_L$ controlled by each $p \in \Z$.
+
+ Now if $p \mid |D_L|$, then $\bra p\ket = \mathfrak{p}^2$ ramifies, and $N(\mathfrak{p}) = p$. So this contributes a factor of $(1 - p^{-s})^{-1}$.
+
+ Now if $p$ remains prime, then we have $N(\bra p\ket) = p^2$. So we get a factor of
+ \[
+ (1 - p^{-2s})^{-1} = (1 - p^{-s})^{-1} (1 + p^{-s})^{-1}.
+ \]
+ If $p$ splits completely, then
+ \[
+ \bra p\ket = \mathfrak{p}_1 \mathfrak{p}_2.
+ \]
+ So \[
+ N(\mathfrak{p}_i) = p,
+ \]
+ and so we get a factor of
+ \[
+ (1 - p^{-s})^{-1}(1 - p^{-s})^{-1}.
+ \]
+ So we find that
+ \[
+ \zeta_L(s) = \zeta(s) L(\chi_D, s),
+ \]
+ where we define
+ \begin{defi}[$L$-function]\index{$L$-function}
+ We define the \emph{$L$-function} by
+ \[
+ L(\chi, s) = \prod_{p\text{ prime}} (1 - \chi(p)p^{-s})^{-1}.
+ \]
+ \end{defi}
+ In our case, $\chi$ is given by
+ \[
+ \chi_D(p) =
+ \begin{cases}
+ 0 & p \mid D\\
+ -1 & p\text{ remains prime}\\
+ 1 & p\text{ splits}
+ \end{cases} =
+ \begin{cases}
+ \left(\frac{D}{p}\right) & p\text{ is odd}\\
+ \text{depends on }d \text{ mod } 8 & p = 2
+ \end{cases}.
+ \]
+\end{eg}
+
+\begin{eg}
+ If $L = \Q(\sqrt{-1})$, then we know
+ \[
+ \left(\frac{-4}{p}\right) = \left(\frac{-1}{p}\right) = (-1)^{\frac{p - 1}{2}}\text{ if }p \not= 2,
+ \]
+ and $\chi_D(2) = 0$ as $2$ ramifies. We then have
+ \[
+ L(\chi_D, s) = \prod_{p > 2\text{ prime}} (1 - (-1)^{\frac{p - 1}{2}}p^{-s})^{-1} = 1 - \frac{1}{3^s} + \frac{1}{5^s} - \frac{1}{7^s} + \cdots.
+ \]
+\end{eg}
+
+Note that $\chi_D$ was defined for primes only, but we can extend it to a function $\chi_D: \Z \to \C$ by imposing
+\[
+ \chi_D(nm) = \chi_D(n)\chi_D(m),
+\]
+i.e.\ we define
+\[
+ \chi_D(p_1^{e_1} \cdots p_r^{e_r}) = \chi_D(p_1)^{e_1} \cdots \chi_D(p_r)^{e_r}.
+\]
+\begin{eg}
+ Let $L = \Q(\sqrt{-1})$. Then
+ \[
+ \chi_{-4}(m) =
+ \begin{cases}
+ (-1)^{\frac{m - 1}{2}} & m\text{ odd}\\
+ 0 & m\text{ even}.
+ \end{cases}
+ \]
+ It is an exercise to show that this is really the extension, i.e.
+ \[
+ \chi_{-4}(mn) = \chi_{-4}(m) \chi_{-4}(n).
+ \]
+ Notice that this has the property that
+ \[
+ \chi_{-4}(m - 4) = \chi_{-4}(m).
+ \]
+\end{eg}
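+For concreteness, here is a small computational sketch (our own illustration, not part of the notes; the function name is ours) checking that $\chi_{-4}$ is completely multiplicative and periodic modulo $4$:
+
+```python
+def chi_minus4(m):
+    # chi_{-4}(m) = (-1)^((m-1)/2) for odd m, and 0 for even m
+    if m % 2 == 0:
+        return 0
+    return 1 if m % 4 == 1 else -1
+
+# complete multiplicativity: chi(mn) = chi(m) chi(n)
+assert all(chi_minus4(m * n) == chi_minus4(m) * chi_minus4(n)
+           for m in range(1, 50) for n in range(1, 50))
+
+# periodicity: chi(m - 4) = chi(m)
+assert all(chi_minus4(m - 4) == chi_minus4(m) for m in range(5, 200))
+```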
+We give such functions a special name.
+\begin{defi}[Dirichlet character]\index{Dirichlet character}
+ A function $\chi: \Z \to \C$ is a \emph{Dirichlet character of modulus $D$} if there exists a group homomorphism
+ \[
+ w: \left(\frac{\Z}{D\Z}\right)^\times \to \C^\times
+ \]
+ such that
+ \[
+ \chi(m) =
+ \begin{cases}
+ w(m\text{ mod } D) & \gcd(m, D) = 1\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ We say $\chi$ is \emph{non-trivial} if $w$ is non-trivial.
+\end{defi}
+
+\begin{eg}
+ $\chi_{-4}$ is a Dirichlet character of modulus $4$.
+\end{eg}
+
+Note that
+\[
+ \chi(mn) = \chi(m)\chi(n)
+\]
+for such Dirichlet characters, and so
+\[
+ L(\chi, s) = \prod_{p\text{ prime}} (1 - \chi(p)p^{-s})^{-1} = \sum_{n \geq 1} \frac{\chi(n)}{n^s}
+\]
+for such $\chi$.
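+As a numerical sanity check (our own illustration, not from the notes), one can compare truncations of the Euler product and of the Dirichlet series for $\chi_{-4}$ at $s = 2$, where both converge quickly:
+
+```python
+def chi(m):
+    # the character chi_{-4}
+    return 0 if m % 2 == 0 else (1 if m % 4 == 1 else -1)
+
+def dirichlet_series(s, N):
+    # partial sum of sum_{n >= 1} chi(n) / n^s
+    return sum(chi(n) / n**s for n in range(1, N))
+
+def euler_product(s, N):
+    # partial product over primes p < N of (1 - chi(p) p^{-s})^{-1}
+    prod, sieve = 1.0, [True] * N
+    for p in range(2, N):
+        if sieve[p]:
+            for k in range(p * p, N, p):
+                sieve[k] = False
+            prod /= 1 - chi(p) * p**(-s)
+    return prod
+
+# both truncations approximate L(chi_{-4}, 2), which is Catalan's constant
+assert abs(dirichlet_series(2, 10**4) - euler_product(2, 10**4)) < 1e-3
+```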
+
+\begin{prop}
+ $\chi_D$, as defined for $L = \Q(\sqrt{d})$, is a Dirichlet character of modulus $D$.
+\end{prop}
+Note that this is a very special Dirichlet character, as it only takes values $0, \pm 1$. We call this a \emph{quadratic Dirichlet character}.\index{quadratic Dirichlet character}
+
+\begin{proof}
+ We must show that
+ \[
+ \chi_D(p + Da) = \chi_D(p)
+ \]
+ for all $p, a$.
+ \begin{enumerate}
+ \item If $d \equiv 3 \pmod 4$, then $D = 4d$. Then
+ \[
+ \chi_D(2) = 0,
+ \]
+ as $\bra 2\ket$ ramifies. So $\chi_D(m) = 0$ for all even $m$. For $p > 2$, we have
+ \[
+ \chi_D(p) = \left(\frac{D}{p}\right) = \left(\frac{d}{p}\right) = \left(\frac{p}{d}\right) (-1)^{\frac{p - 1}{2}}
+ \]
+ as $\frac{d - 1}{2} \equiv 1 \pmod 2$, by quadratic reciprocity. So
+ \[
+ \chi_D(p + Da) = \left(\frac{p + Da}{d}\right) (-1)^{\frac{p - 1}{2}} (-1)^{2da} = \chi_D(p).
+ \]
+ \item If $d \equiv 1, 2\pmod 4$, see example sheet.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{lemma}
+ Let $\chi$ be any non-trivial Dirichlet character. Then $L(\chi, s)$ is holomorphic for $\Re(s) > 0$.
+\end{lemma}
+
+\begin{proof}
+ By our lemma on convergence of Dirichlet series, we have to show that
+ \[
+ \sum_{i = 1}^N \chi(i) = O(1),
+ \]
+ i.e.\ it is bounded. Recall from Representation Theory that distinct irreducible characters of a finite group $G$ are orthogonal, i.e.
+ \[
+ \frac{1}{|G|} \sum_{g \in G} \overline{\chi_1(g)} \chi_2(g) =
+ \begin{cases}
+ 1 & \chi_1 = \chi_2\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ We apply this to $G = (\Z/D\Z)^{\times}$, where $\chi_1$ is trivial and $\chi_2 = \chi$. So orthogonality gives
+ \[
+ \sum_{aD < i \leq (a + 1)D} \chi(i) = \sum_{i \in (\Z/D\Z)^\times} \chi(i) = 0,
+ \]
+ using that $\chi(i) = 0$ if $i$ is not coprime to $D$. So we are done.
+\end{proof}
+
+\begin{cor}
+ For quadratic characters $\chi_D$, we have
+ \[
+ L(\chi_D, 1) \not= 0.
+ \]
+\end{cor}
+For example, if $D < 0$, then
+\[
+ L(\chi_D, 1) = \frac{2 \pi |\cl_{\Q(\sqrt{d})}|}{|D|^{1/2} |\mu_{\Q(\sqrt{d})}|}.
+\]
+\begin{proof}
+ We have shown that
+ \[
+ \zeta_{\Q(\sqrt{d})}(s) = \zeta_{\Q}(s) L(\chi_D, s).
+ \]
+ Note that $\zeta_{\Q(\sqrt{d})}(s)$ and $\zeta_{\Q}(s)$ have simple poles at $s = 1$, while $L(\chi_D, s)$ is holomorphic at $s = 1$.
+
+ The residue of $\zeta_{\Q}(s)$ at $s = 1$ is $1$, while the residue of $\zeta_{\Q(\sqrt{d})}(s)$ at $s = 1$ is non-zero by the analytic class number formula. So $L(\chi_D, 1)$ is non-zero, and is given by the analytic class number formula.
+\end{proof}
+
+\begin{eg}
+ If $L = \Q(\sqrt{-1})$, then
+ \[
+ 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots= \frac{2\pi \cdot 1}{2 \cdot 4} = \frac{\pi}{4}.
+ \]
+\end{eg}
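+To see just how slow this is, here is a quick sketch (our own illustration): the error after $N$ terms of this series is roughly $1/(4N)$, so each extra correct digit of $\pi$ costs a tenfold increase in the number of terms.
+
+```python
+import math
+
+def leibniz_partial(N):
+    # partial sum 1 - 1/3 + 1/5 - 1/7 + ... with N terms
+    return sum((-1)**k / (2 * k + 1) for k in range(N))
+
+for N in (10, 1000, 100000):
+    # the error shrinks only like 1/(4N)
+    print(N, abs(leibniz_partial(N) - math.pi / 4))
+```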
+In general, for any field whose class number we know, we can get a series expansion for $\pi$. And it converges incredibly slowly.
+
+Note that this corollary required two things --- the analytic input for the analytic class number formula, and quadratic reciprocity (to show that $\chi_D$ is a Dirichlet character).
+
+More ambitiously, we now compute the zeta function of a cyclotomic field, $L = \Q(\omega_q)$, where $\omega_q$ is a primitive $q$th root of unity and $q\in \N$. We need to know the following facts about cyclotomic extensions:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item We have $[L:\Q] = \varphi(q)$, where
+ \[
+ \varphi(q) = |(\Z/q\Z)^\times|.
+ \]
+ \item $L \supseteq \Q$ is a Galois extension, with
+ \[
+ \Gal(L/\Q) = (\Z/q\Z)^\times,
+ \]
+ where if $r \in (\Z/q\Z)^\times$, then $r$ acts on $\Q(\omega_q)$ by sending $\omega_q \mapsto \omega_q^r$. This is what plays the role of quadratic reciprocity for cyclotomic fields.
+ \item The ring of integers is
+ \[
+ \mathcal{O}_L = \Z[\omega_q] = \Z[x]/\Phi_q(x),
+ \]
+ where
+ \[
+ \Phi_q(x) = \frac{x^q - 1}{\prod_{d \mid q, d \not= q}\Phi_d(x)}
+ \]
+ is the $q$th cyclotomic polynomial.
+ \item Let $p$ be a prime. Then $p$ ramifies in $\mathcal{O}_L$ if and only if $p \mid D_L$, if and only if $p \mid q$. So while $D$ might be messy, the prime factors of $D$ are the prime factors of $q$.
+ \item Let $p$ be a prime and $p \nmid q$. Then $\bra p\ket$ factors as a product of $\varphi(q)/f$ distinct prime ideals, each of norm $p^f$, where $f$ is the order of $p$ in $(\Z/q\Z)^{\times}$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item In the Galois theory course.
+ \item In the Galois theory course.
+ \item In the example sheet.
+ \item In the example sheet.
+ \item Requires proof, but is easy Galois theory, and is omitted.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ Let $q = 8$. Then
+ \[
+ \Phi_8 = \frac{x^8 - 1}{(x + 1)(x - 1)(x^2 + 1)} = \frac{x^8 - 1}{x^4 - 1} = x^4 + 1.
+ \]
+ So given a prime $p$ (that is not $2$), we need to understand
+ \[
+ \mathcal{O}_L/p = \frac{\F_p[x]}{\Phi_8},
+ \]
+ i.e.\ how $\Phi_8$ factors mod $p$ (Dedekind's criterion). We have
+ \[
+ (\Z/8)^\times = \{1, 3, 5, 7\} = \{1, 3, -3, -1\} = \Z/2 \times \Z/2.
+ \]
+ Then (v) says if $p = 17$, then $x^4 + 1$ factors into $4$ linear factors, which it does.
+
+ If $p = 3$, then (v) says $x^4 + 1$ factors into $2$ quadratic factors. Indeed, we have
+ \[
+ (x^2 - x - 1)(x^2 + x - 1) = (x^2 - 1)^2 - x^2 = x^4 - 3x^2 + 1 \equiv x^4 + 1 \pmod 3.
+ \]
+\end{eg}
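+These factorization patterns are easy to verify by brute force. The following sketch (our own, not from the notes) counts the roots of $x^4 + 1$ in $\F_p$ and checks the quadratic factorization modulo $3$:
+
+```python
+def roots_mod(p):
+    # roots of x^4 + 1 over F_p
+    return [x for x in range(p) if (x**4 + 1) % p == 0]
+
+# 17 has order 1 in (Z/8)^x, so x^4 + 1 splits into 4 linear factors mod 17
+assert len(roots_mod(17)) == 4
+
+# 3 has order 2 in (Z/8)^x, so two quadratic factors and no roots mod 3
+assert roots_mod(3) == []
+
+# the quadratic factorization: (x^2 - x - 1)(x^2 + x - 1) = x^4 - 3x^2 + 1,
+# whose coefficients reduce to those of x^4 + 1 modulo 3
+assert [c % 3 for c in [1, 0, -3, 0, 1]] == [1, 0, 0, 0, 1]
+```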
+
+Given all of these, let's compute the zeta function! Recall that
+\[
+ \zeta_{\Q(\omega_q)}(s) = \prod_{\mathfrak{p}}(1 - N(\mathfrak{p})^{-s})^{-1}.
+\]
+We consider the prime ideals $\mathfrak{p}$ dividing $\bra p\ket$, where $p$ is a fixed integer prime number. If $p \nmid q$, then (v) says this contributes a factor of
+\[
+ (1 - p^{-fs})^{-\varphi(q)/f},
+\]
+to the zeta function, where $f$ is the order of $p$ in $(\Z/q\Z)^\times$. We observe that this thing factors, since
+\[
+ 1 - t^f = \prod_{\gamma \in \mu_f}(1 - \gamma t),
+\]
+with
+\[
+ \mu_f = \{\gamma \in \C: \gamma^f = 1\},
+\]
+and we can put $t = p^{-s}$.
+
+We let
+\[
+ \omega_1, \cdots, \omega_{\varphi(q)}: (\Z/q\Z)^\times \to \C^\times
+\]
+be the distinct irreducible (one-dimensional) representations of $(\Z/q\Z)^\times$, with $\omega_1$ being the trivial representation, i.e.\ $\omega_1(a) = 1$ for all $a \in (\Z/q\Z)^\times$.
+
+The claim is that $\omega_1(p), \cdots, \omega_{\varphi(q)}(p)$ are $f$th roots of $1$, each repeated $\varphi(q)/f$ times. We either say this is obvious, or we can use some representation theory.
+
+We know $p$ generates a cyclic subgroup $\bra p\ket$ of $(\Z/q\Z)^\times$ of order $f$, by definition of $f$. So this is equivalent to saying the restrictions of $\omega_1, \cdots, \omega_{\varphi(q)}$ to $\bra p\ket$ are the $f$ distinct irreducible characters of $\bra p\ket \cong \Z/f$, each repeated $\varphi(q)/f$ times.
+
+Equivalently, note that
+\[
+ \Res^{(\Z/q\Z)^\times}_{\bra p\ket} (\omega_1 \oplus \cdots \oplus \omega_{\varphi(q)}) = \Res_{\bra p\ket}^{(\Z/q\Z)^\times}(\text{regular representation of }(\Z/q\Z)^\times).
+\]
+So this claims that
+\[
+ \Res_{\bra p\ket}^{(\Z/q\Z)^\times} (\text{regular rep. of }(\Z/q\Z)^\times) = \frac{\varphi(q)}{f} (\text{regular rep. of }\Z/f).
+\]
+But this is true for any group, since
+\[
+ \Res_H^G \C G = |G/H| \C H,
+\]
+as the character of both sides is $|G| \delta_e$.
+
+So we have
+\[
+ (1 - p^{-fs})^{-\varphi(q)/f} = \prod_{i = 1}^{\varphi(q)}(1 - \omega_i(p) p^{-s})^{-1}.
+\]
+So we let
+\[
+ \chi_i(n) =
+ \begin{cases}
+ \omega_i(n\text{ mod }q) & \gcd(n, q) = 1\\
+ 0 & \text{otherwise}
+ \end{cases}
+\]
+be the corresponding Dirichlet characters. So we have just shown that
+\begin{prop} We have
+ \[
+ \zeta_{\Q(\omega_q)}(s) = \prod_{i = 1}^{\varphi(q)} L(\chi_i, s) \cdot (\text{corr.\ factor}) = \zeta_\Q(s) \prod_{i = 2}^{\varphi(q)} L(\chi_i, s) \cdot (\text{corr.\ factor})
+ \]
+ where the correction factor is a finite product coming from the primes that divide $q$.
+\end{prop}
+By defining the $L$ functions in a slightly more clever way, we can hide the correction factors into the $L(\chi, s)$, and then the $\zeta$ function is just the product of these $L$-functions.
+
+\begin{proof}
+ Our analysis covered all primes $p \nmid q$, and the correction factor is just to include the terms with $p \mid q$. The second part is just saying that
+ \[
+ \zeta_\Q(s) = L(\chi_1, s) \prod_{p \mid q}(1 - p^{-s})^{-1}.\qedhere
+ \]
+\end{proof}
+
+This allows us to improve our result on the non-vanishing of $L(\chi, 1)$ to all Dirichlet characters, and not just quadratic Dirichlet characters.
+\begin{cor}
+ If $\chi$ is any non-trivial Dirichlet character, then $L(\chi, 1) \not= 0$.
+\end{cor}
+
+\begin{proof}
+ By definition, Dirichlet characters come from representations of some $(\Z/q\Z)^\times$, so they appear in the formula of the $\zeta$ function of some cyclotomic extension.
+
+ Consider the formula
+ \[
+ \zeta_{\Q(\omega_q)}(s) = \zeta_\Q(s) \prod_{i = 2}^{\varphi(q)} L(\chi_i, s) \cdot (\text{corr.\ factor})
+ \]
+ at $s = 1$. We know that the $L(\chi_i, s)$ are all holomorphic at $s = 1$. Moreover, both $\zeta_{\Q(\omega_q)}$ and $\zeta_\Q$ have a simple pole at $s = 1$. Since the correction factor is a finite product that is non-zero at $s = 1$, it must be the case that all $L(\chi_i, 1)$ are non-zero.
+\end{proof}
+
+\begin{thm}[Dirichlet, 1839]\index{Dirichlet's theorem on primes in AP}
+ Let $a, q \in \N$ be coprime, i.e.\ $\gcd(a, q) = 1$. Then there are infinitely many primes in the arithmetic progression
+ \[
+ a, a + q, a + 2q, a + 3q, \cdots.
+ \]
+\end{thm}
+
+\begin{proof}
+ As before, let
+ \[
+ \omega_1, \cdots, \omega_{\varphi(q)}: (\Z/q\Z)^\times \to \C^\times
+ \]
+ be the irreducible characters, and let
+ \[
+ \chi_1, \cdots, \chi_{\varphi(q)}: \Z \to \C
+ \]
+ be the corresponding Dirichlet characters, with $\omega_1$ the trivial one.
+
+ Recall the orthogonality of columns of the character table, which says that if $\gcd(p, q) = 1$, then
+ \[
+ \frac{1}{\varphi(q)} \sum_i \overline{\omega_i(a)} \omega_i(p) =
+ \begin{cases}
+ 1 & a \equiv p \pmod q\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ Hence we know
+ \[
+ \frac{1}{\varphi(q)} \sum_i \overline{\chi_i(a)} \chi_i(p) =
+ \begin{cases}
+ 1 & a \equiv p \pmod q\\
+ 0 & \text{otherwise}
+ \end{cases},
+ \]
+ even if $\gcd(p, q) \not= 1$, as then $\chi_i(p) = 0$. So
+ \[
+ \sum_{\substack{p \equiv a \text{ mod } q\\ p \text{ prime}}} p^{-s} = \frac{1}{\varphi(q)} \sum_i \overline{\chi_i(a)} \sum_{\text{all primes } p} \chi_i(p) p^{-s}.\tag{$\ddagger$}
+ \]
+ We want to show this has a pole at $s = 1$, as in Euclid's proof.
+
+ To do so, we show that $\sum_p \chi_i(p) p^{-s}$ is ``essentially'' $\log L(\chi_i, s)$, up to some bounded terms. We Taylor expand
+ \[
+ \log L(\chi, s) = -\sum_p \log (1 - \chi(p) p^{-s}) = \sum_{\substack{n \geq 1\\ p\text{ prime}}} \frac{\chi(p)^n}{n p^{ns}} = \sum_{\substack{n \geq 1\\ p\text{ prime}}} \frac{\chi(p^n)}{np^{ns}}.
+ \]
+ What we care about is the $n = 1$ term. So we claim that
+ \[
+ \sum_{n \geq 2, p\text{ prime}} \frac{\chi(p^n)}{np^{ns}}
+ \]
+ converges at $s = 1$. This follows from the geometric sum
+ \[
+ \left|\sum_p\sum_{n\geq 2} \frac{\chi(p^n)}{np^{ns}}\right| \leq \sum_p \sum_{n \geq 2} p^{-ns} = \sum_{p\text{ prime}} \frac{1}{p^s(p^s - 1)} \leq \sum_{n \geq 2} \frac{1}{n^s(n^s - 1)} < \infty.
+ \]
+ Hence we know
+ \[
+ \log L(\chi, s) = \sum_p \chi(p) p^{-s} + \text{bounded stuff}
+ \]
+ near $s = 1$.
+
+ So at $s = 1$, we have
+ \[
+ (\ddagger) \sim \frac{1}{\varphi(q)} \sum_i \overline{\chi_i (a)} \log L(\chi_i, s).
+ \]
+ and we have to show that the right hand side has a pole at $s = 1$.
+
+ We know that for $i \not= 1$, i.e.\ $\chi_i$ non-trivial, $L(\chi_i, s)$ is holomorphic and non-zero at $s = 1$. So we just have to show that $\log L(\chi_1, s)$ has a pole. Note that $L(\chi_1, s)$ is essentially $\zeta_\Q(s)$. Precisely, we have
+ \[
+ L(\chi_1, s) = \zeta_{\Q}(s) \prod_{p \mid q}(1 - p^{-s}).
+ \]
+ Moreover, we already know that $\zeta_\Q(s)$ blows up at $s = 1$. We have
+ \begin{align*}
+ \zeta_\Q(s) &= \frac{1}{s - 1} + \text{holomorphic function} \\
+ &= \frac{1}{s - 1}(1 + (s - 1)(\text{holomorphic function})).
+ \end{align*}
+ So we know
+ \[
+ \log L(\chi_1, s) \sim \log \zeta_\Q(s) \sim \log\left(\frac{1}{s - 1}\right),
+ \]
+ and this does blow up at $s = 1$.
+\end{proof}
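+Dirichlet's theorem is easy to probe empirically. The sketch below (our own illustration) counts primes below $10^5$ in the two residue classes modulo $4$; the theorem in fact implies the primes distribute roughly equally among the $\varphi(q)$ classes:
+
+```python
+def primes_upto(N):
+    # simple sieve of Eratosthenes
+    sieve = [True] * N
+    sieve[0] = sieve[1] = False
+    for p in range(2, int(N**0.5) + 1):
+        if sieve[p]:
+            for k in range(p * p, N, p):
+                sieve[k] = False
+    return [n for n in range(N) if sieve[n]]
+
+counts = {1: 0, 3: 0}
+for p in primes_upto(10**5):
+    if p % 2 == 1:
+        counts[p % 4] += 1
+print(counts)  # the two classes have roughly equal counts
+```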
+
+So far, we have been working with \emph{abelian} extensions over $\Q$, i.e.\ extensions $L/\Q$ whose Galois group is abelian. By the \term{Kronecker--Weber theorem}, every abelian extension of $\Q$ is contained within some cyclotomic extension. So in some sense, we have considered the ``most general'' abelian extension.
+
+Can we move on to consider more complicated number fields? In general, suppose $L/\Q$ is Galois, and $G = \Gal(L/\Q)$. We can still make sense of the $\zeta$ functions, and it turns out it always factors as
+\[
+ \zeta_L(s) = \prod_{\rho} L(\rho, s)^{\dim \rho},
+\]
+where $\rho$ ranges over all the irreducible representations of $G$, and $L(\rho, s)$ is the \term{Artin $L$-function}. It takes some effort to define the Artin $L$-function, and we shall not do so here. However, it is worth noting that $L(1, s)$ is just $\zeta_\Q(s)$, and for $\rho \not= 1$, we still have a factorization of the form
+\[
+ L(\rho, s) = \prod_{p \text{ prime}} L_p(\rho, s).
+\]
+This $L_p(\rho, s)$ is known as the \term{Euler factor}.
+
+One can show that $L(\rho, s)$ is always a meromorphic function of $s$, and is \emph{conjectured} to be holomorphic for all $s$ (if $\rho \not= 1$, of course).
+
+If $\rho$ is one-dimensional, then $L(\rho, s)$ is a Dirichlet series $L(\chi, s)$ for some $\chi$. Recall that to establish this fact for quadratic fields, we had to use quadratic reciprocity. In general, given a $\rho$, finding $\chi$ is a higher version of ``quadratic reciprocity''. This area is known as \term{class field theory}. If $\dim \rho > 1$, then this is ``non-abelian class field theory'', known as the \term{Langlands programme}.
+
+%We end with a random assortment of facts. Suppose $L \supseteq \Q$ is a number field, and $L/\Q$ is a Galois extension. Let $G = \Gal(L/\Q)$. Then
+%
+%\begin{enumerate}
+% \item
+% \[
+% \zeta_L(s) = \prod_{\rho} L(\rho, s)^{\dim \rho},
+% \]
+% where $\rho$ ranges over the irreducible representations of $G$, and $L$ is \emph{Artin's $L$-function}, given by
+% \[
+% L(1, s) = \zeta_\Q(s),\quad L(\rho, s) = \prod_{p\text{ prime}}L_p(\rho,s)
+% \]
+% where $L_p(\rho,s)$ is the \emph{Euler factor} at $p$. Its definition requires some set-up: we have a normal subgroup $\ker\rho\subseteq G$, so by Galois theory, it corresponds to a Galois extension $K/\Q$. Pick an prime $\mathfrak{p}$ above $p$. The following are standard facts from algebraic number theory, which we unfortunately do not have the time to prove:
+% \begin{itemize}
+% \item The group $D_{\mathfrak{p}/p}$ of all $\sigma\in\Gal(K/\Q)$ fixing $\mathfrak{p}$ is called the \emph{decomposition group}. They are conjugates of each other as $\mathfrak{p}$ varies.
+% \item There exists a canonical surjection $D_{\mathfrak{p}/p}\to\Gal(\F_\mathfrak{p}/\F_p)$ via the reduction mod $\mathfrak{p}$ map, where $\F_\mathfrak{p}$ is the residue field of $K$ at $\mathfrak{p}$.
+% \item Its kernel is the \emph{inertia group}, denoted by $I_{\mathfrak{p}/p}$.
+% \item $I_{\mathfrak{p}/p}=\{0\}$ iff $\mathfrak{p}$ is unramified.
+% \item If $p$ is unramified, then there exists $\sigma_{\mathfrak{p}}\in\Gal(K/\Q)$ such that
+% \[
+% \sigma_{\mathfrak{p}}x\equiv x^p\pmod{\mathfrak{p}}
+% \]
+% This is just a lift of the Frobenius element of $\Gal(\F_\mathfrak{p}/\F_p)$ to $D_{\mathfrak{p}/p}$, and it is still called the Frobenius element.
+% \item If $p$ is unramified, then $\sigma_{\mathfrak{p}}$ as $\mathfrak{p}$ ranges over all primes over $p$ form a conjugacy class of $\Gal(K/\Q)$. By abuse of notation, we write $\rho(\sigma_p)$ for $\rho(\sigma_\mathfrak{p})$ for any $\mathfrak{p}$ above $p$.
+% \end{itemize}
+% Given these facts, the Euler factor at $p$ can be defined as
+% \[
+% L_p(\rho,s)=\det(1-\rho(\sigma_p)|_{V^{I_p}}p^{-s})^{-1}
+% \]
+% where $V$ is a vector space on which $G$ acts by $\rho$, and $V^{I_p}$ is the subspace fixed by $I_{\mathfrak{p}/p}$ for a given $\mathfrak{p}$ lying above $p$. If $p$ is unramified, then the factor is just the characteristic polynomial of the Frobenius evaluated at $p^{-s}$.
+%
+% The upshot is that the zeta function always factors, with one factor for each irreducible representation $\rho$ of $G$.
+% \item Also, $L(\rho, s)$ is a meromorphic function of $s$, and is \emph{conjectured} to be holomorphic for all $s$, if $\rho \not= 1$. There is a \emph{function equation} relating the values of $L(\rho,s)$ with the values of $L(\bar{\rho},1-s)$, taking the form
+% \[
+% \Lambda(\rho,s)=W(\rho)\Lambda(\bar{\rho},1-s)
+% \]
+% where $\Lambda(\rho,s)$ is $L(\rho,s)$ multiplied by some factors involving the $\Gamma$-function, interpreted to correspond to the embeddings into $\R$ or $\C$ (also known as the ``primes at infinity''), and $W(\rho)$ is a \term{root number} of modulus 1.
+% \item If $\rho$ is one-dimensional, then $L(\rho, s)$ is a Dirichlet series $L(\chi, s)$ for some $\chi$. But given a $\rho$, finding $\chi$ is a higher version of ``quadratic reciprocity''. This area is known as class field theory. If you are keen, you can go back and check this for subfields of the cyclotomic extension.
+% \item If $\dim \rho > 1$, then this is ``non-abelian class field theory'', known as \emph{Langlands programme}.
+%\end{enumerate}
+%
+%Note that
+%\[
+% e^{\pi\sqrt{163}} = 262537412640768743.99999999999925\cdots
+%\]
+%with the deviation from an integer being of the order $10^{-12}$. This is related to the fact that $163$ is the largest $d$ such that $\Q(\sqrt{-d})$ has class number $1$.
+\printindex
+\end{document}
diff --git a/books/cam/II_L/representation_theory.tex b/books/cam/II_L/representation_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..373031bcefad82f2c20617344cbe6a5d106f5e8d
--- /dev/null
+++ b/books/cam/II_L/representation_theory.tex
@@ -0,0 +1,4315 @@
+\documentclass[a4paper]{article}
+
+\def\npart {II}
+\def\nterm {Lent}
+\def\nyear {2016}
+\def\nlecturer {S.\ Martin}
+\def\ncourse {Representation Theory}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Linear Algebra and Groups, Rings and Modules are essential}
+\vspace{10pt}
+
+\noindent\textbf{Representations of finite groups}\\
+Representations of groups on vector spaces, matrix representations. Equivalence of representations. Invariant subspaces and submodules. Irreducibility and Schur's Lemma. Complete reducibility for finite groups. Irreducible representations of Abelian groups.
+
+\vspace{10pt}
+\noindent\textbf{Character theory}\\
+Determination of a representation by its character. The group algebra, conjugacy classes, and orthogonality relations. Regular representation. Permutation representations and their characters. Induced representations and the Frobenius reciprocity theorem. Mackey's theorem. Frobenius's Theorem.\hspace*{\fill}[12]
+
+\vspace{10pt}
+\noindent\textbf{Arithmetic properties of characters}\\
+Divisibility of the order of the group by the degrees of its irreducible characters. Burnside's $p^a q^b$ theorem.\hspace*{\fill}[2]
+
+\vspace{10pt}
+\noindent\textbf{Tensor products}\\
+Tensor products of representations and products of characters. The character ring. Tensor, symmetric and exterior algebras.\hspace*{\fill}[3]
+
+\vspace{10pt}
+\noindent\textbf{Representations of $S^1$ and $\SU_2$}\\
+The groups $S^1$, $\SU_2$ and $\SO(3)$, their irreducible representations, complete reducibility. The Clebsch-Gordan formula. *Compact groups.*\hspace*{\fill}[4]
+
+\vspace{10pt}
+\noindent\textbf{Further worked examples}\\
+The characters of one of $\GL_2(\F_q), S_n$ or the Heisenberg group.\hspace*{\fill}[3]%
+}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+The course studies how groups act as groups of linear transformations on vector spaces. Hopefully, you understand all the words in this sentence. If so, this is a good start.
+
+In our case, groups are usually either finite groups or topological compact groups (to be defined later). Topological compact groups are typically subgroups of the general linear group over some infinite fields. It turns out the tools we have for finite groups often work well for these particular kinds of infinite groups. The vector spaces are always finite-dimensional, and usually over $\C$.
+
+Prerequisites of this course include knowledge of group theory (as much as the IB Groups, Rings and Modules course), linear algebra, and, optionally, geometry, Galois theory and metric and topological spaces. There is one lemma where we must use something from Galois theory, but if you don't know about Galois theory, you can just assume the lemma to be true without bothering yourself too much about the proofs.
+
+\section{Group actions}
+We start by reviewing some basic group theory and linear algebra.
+
+\subsubsection*{Basic linear algebra}
+\begin{notation}
+ $\F$ always represents a field.
+\end{notation}
+
+Usually, we take $\F = \C$, but sometimes it can also be $\R$ or $\Q$. These fields all have characteristic zero, and in this case, we call what we're doing \emph{ordinary} representation theory. Sometimes, we will take $\F = \F_p$ or $\bar{\F}_p$, the algebraic closure of $\F_p$. This is called \emph{modular} representation theory.
+
+\begin{notation}
+ We write $V$ for a vector space over $\F$ --- this will always be finite dimensional over $\F$. We write $\GL(V)$ for the group of invertible linear maps $\theta: V \to V$. This is a group with the operation given by composition of maps, with the identity as the identity map (and inverse by inverse).
+\end{notation}
+
+\begin{notation}
+ Let $V$ be a finite-dimensional vector space over $\F$. We write $\End(V)$ for the endomorphism algebra, the set of all linear maps $V \to V$.
+\end{notation}
+
+We recall a couple of facts from linear algebra:
+
+If $\dim_\F V = n < \infty$, we can choose a basis $\mathbf{e}_1, \cdots, \mathbf{e}_n$ of $V$ over $\F$. So we can identify $V$ with $\F^n$. Then every $\theta \in \GL(V)$ corresponds to a matrix $A_\theta = (a_{ij}) \in M_n(\F)$ given by
+\[
+ \theta(\mathbf{e}_j) = \sum_i a_{ij} \mathbf{e}_i.
+\]
+In fact, we have $A_\theta \in \GL_n(\F)$, the general linear group. It is easy to see the following:
+\begin{prop}
+ As groups, $\GL(V) \cong \GL_n(\F)$, with the isomorphism given by $\theta \mapsto A_\theta$.
+\end{prop}
+
+Of course, picking a different basis of $V$ gives a different isomorphism to $\GL_n(\F)$, but we have the following fact:
+\begin{prop}
+ Matrices $A_1, A_2$ represent the same element of $\GL(V)$ with respect to different bases if and only if they are \emph{conjugate}, namely there is some $X \in \GL_n(\F)$ such that
+ \[
+ A_2 = XA_1 X^{-1}.
+ \]
+\end{prop}
+
+Recall that $\tr(A) = \sum_i a_{ii}$, where $A = (a_{ij}) \in M_n(\F)$, is the trace of $A$. A nice property of the trace is that it doesn't notice conjugacy:
+\begin{prop}
+ \[
+ \tr(XAX^{-1}) = \tr (A).
+ \]
+\end{prop}
+
+Hence we can define the trace of an operator $\tr(\theta) = \tr(A_\theta)$, which is independent of our choice of basis. This is an important result. When we study representations, we will have matrices flying all over the place, which are scary. Instead, we often just look at the traces of these matrices. This reduces our problem of studying matrices to plain arithmetic.
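+As a tiny numeric sanity check (our own, with arbitrary $2 \times 2$ matrices), conjugation indeed leaves the trace unchanged:
+
+```python
+def mul(A, B):
+    # product of 2x2 matrices given as nested lists
+    return [[sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2)]
+            for r in range(2)]
+
+def tr(A):
+    return A[0][0] + A[1][1]
+
+A = [[1, 2], [3, 4]]
+X = [[2, 1], [1, 1]]        # det X = 1, so its inverse is integral
+Xinv = [[1, -1], [-1, 2]]
+assert mul(X, Xinv) == [[1, 0], [0, 1]]
+assert tr(mul(mul(X, A), Xinv)) == tr(A)  # both equal 5
+```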
+
+When we have too many matrices, we get confused. So we want to put a matrix into a form as simple as possible. One of the simplest forms a matrix can take is diagonal. So we want to know something about diagonalizing matrices.
+
+\begin{prop}
+ Let $\alpha \in \GL(V)$, where $V$ is a finite-dimensional vector space over $\C$ and $\alpha^m = \id$ for some positive integer $m$. Then $\alpha$ is diagonalizable.
+\end{prop}
+
+This follows from the following more general fact:
+\begin{prop}
+ Let $V$ be a finite-dimensional vector space over $\C$, and $\alpha \in \End(V)$, not necessarily invertible. Then $\alpha$ is diagonalizable if and only if there is a polynomial $f$ with distinct linear factors such that $f(\alpha) = 0$.
+\end{prop}
+Indeed, we have $x^m - 1 = \prod(x - \omega^j)$, where $\omega = e^{2\pi i/m}$.
+
+Instead of just one endomorphism, we can look at many endomorphisms.
+
+\begin{prop}
+ A finite family of individually diagonalizable endomorphisms of a vector space over $\C$ can be simultaneously diagonalized if and only if they commute.
+\end{prop}
+
+\subsubsection*{Basic group theory}
+We will not review the definition of a group. Instead, we look at some of our favorite groups, since they will be handy examples later on.
+
+\begin{defi}[Symmetric group $S_n$]\index{symmetric group}
+ The \emph{symmetric group} $S_n$ is the set of all permutations of $X = \{1, \cdots, n\}$, i.e.\ the set of all bijections $X \to X$. We have $|S_n| = n!$.
+\end{defi}
+
+\begin{defi}[Alternating group $A_n$]\index{alternating group}
+ The \emph{alternating group} $A_n$ is the set of products of an even number of transpositions $(i\; j)$ in $S_n$. We know $|A_n| = \frac{n!}{2}$. So this is a subgroup of index 2 and hence normal.
+\end{defi}
+
+\begin{defi}[Cyclic group $C_m$]\index{cyclic group}
+ The \emph{cyclic group} of order $m$, written $C_m$ is
+ \[
+ C_m = \bra x: x^m = 1\ket.
+ \]
+ This also occurs naturally, as $\Z/m\Z$ under addition, and also as the group of $m$th roots of unity in $\C$. We can view this as a subgroup of $\GL_1(\C) \cong \C^\times$. Alternatively, this is the group of rotation symmetries of a regular $m$-gon in $\R^2$, and can be viewed as a subgroup of $\GL_2(\R)$.
+\end{defi}
+
+\begin{defi}[Dihedral group $D_{2m}$]\index{dihedral group}
+ The \emph{dihedral group} $D_{2m}$ of order $2m$ is
+ \[
+ D_{2m} = \bra x, y: x^m = y^2 = 1, yxy^{-1} = x^{-1}\ket.
+ \]
+ This is the symmetry group of a regular $m$-gon. The $x^i$ are the rotations and $x^i y$ are the reflections. For example, in $D_8$, $x$ is rotation by $\frac{\pi}{2}$ and $y$ is any reflection.
+
+ This group can be viewed as a subgroup of $\GL_2(\R)$, but since it also acts on the vertices, it can be viewed as a subgroup of $S_m$.
+\end{defi}
+
+\begin{defi}[Quaternion group]\index{quaternion group}
+ The \emph{quaternion group} is given by
+ \[
+ Q_8 = \bra x, y: x^4 = 1, y^2 = x^2, yxy^{-1} = x^{-1}\ket.
+ \]
+ This has order $8$, and we write $i = x, j = y$, $k = ij$, $-1 = i^2$, with
+ \[
+ Q_8 = \{\pm 1, \pm i, \pm j, \pm k\}.
+ \]
+ We can view this as a subgroup of $\GL_2(\C)$ via
+ \begin{align*}
+ 1 &= \begin{pmatrix}
+ 1&0\\0&1
+ \end{pmatrix},&
+ i &= \begin{pmatrix}
+ i & 0\\0&-i
+ \end{pmatrix},&
+ j &= \begin{pmatrix}
+ 0&1\\-1&0
+ \end{pmatrix},&
+ k &= \begin{pmatrix}
+ 0&i\\i&0
+ \end{pmatrix},\\
+ -1 &= \begin{pmatrix}
+ -1&0\\0&-1
+ \end{pmatrix},&
+ -i &= \begin{pmatrix}
+ -i & 0\\0&i
+ \end{pmatrix},&
+ -j &= \begin{pmatrix}
+ 0&-1\\1&0
+ \end{pmatrix},&
+ -k &= \begin{pmatrix}
+ 0&-i\\-i&0
+ \end{pmatrix}
+ \end{align*}
+\end{defi}
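+One can check the defining relations of $Q_8$ directly on these matrices; a short sketch (our own illustration):
+
+```python
+def mat_mul(A, B):
+    # product of 2x2 complex matrices given as nested lists
+    return [[sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2)]
+            for r in range(2)]
+
+i = [[1j, 0], [0, -1j]]
+j = [[0, 1], [-1, 0]]
+k = [[0, 1j], [1j, 0]]
+minus1 = [[-1, 0], [0, -1]]
+minus_j = [[0, -1], [1, 0]]   # j^{-1} = -j, since j^2 = -1
+
+assert mat_mul(i, i) == minus1                          # i^2 = -1
+assert mat_mul(j, j) == mat_mul(i, i)                   # y^2 = x^2
+assert mat_mul(i, j) == k                               # ij = k
+assert mat_mul(mat_mul(j, i), minus_j) == [[-1j, 0], [0, 1j]]  # yxy^{-1} = x^{-1}
+```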
+
+\begin{defi}[Conjugacy class]\index{conjugacy class}\index{$\mathcal{C}_G$}
+ The \emph{conjugacy class} of $g \in G$ is
+ \[
+ \mathcal{C}_G(g)=\{xgx^{-1}: x \in G\}.
+ \]
+\end{defi}
+
+\begin{defi}[Centralizer]\index{centralizer}\index{$C_g$}
+ The \emph{centralizer} of $g \in G$ is
+ \[
+ C_G(g) = \{x \in G: xg = gx\}.
+ \]
+\end{defi}
+Then by the orbit-stabilizer theorem, we have $|\mathcal{C}_G(g)| = |G: C_G(g)|$.
+
+\begin{defi}[Group action]\index{group action}
+ Let $G$ be a group and $X$ a set. We say $G$ \emph{acts on} $X$ if there is a map $*: G \times X \to X$, written $(g, x) \mapsto g * x = gx$ such that
+ \begin{enumerate}
+ \item $1x = x$
+ \item $g(hx) = (gh)x$
+ \end{enumerate}
+ for all $x \in X$ and $g, h \in G$.
+\end{defi}
+
+The group action can also be characterised in terms of a homomorphism.
+\begin{lemma}
+ Given an action of $G$ on $X$, we obtain a homomorphism $\theta: G \to \Sym(X)$, where $\Sym(X)$ is the set of all permutations of $X$.
+\end{lemma}
+
+\begin{proof}
+ For $g \in G$, define $\theta(g) = \theta_g \in \Sym (X)$ as the function $X \to X$ by $x \mapsto gx$. This is indeed a permutation of $X$ because $\theta_{g^{-1}}$ is an inverse.
+
+ Moreover, for any $g_1, g_2 \in G$, we get $\theta_{g_1g_2} = \theta_{g_1} \theta_{g_2}$, since $(g_1g_2) x = g_1(g_2x)$.
+\end{proof}
+
+\begin{defi}[Permutation representation]\index{permutation representation}
+ The \emph{permutation representation} of a group action $G$ on $X$ is the homomorphism $\theta: G \to \Sym (X)$ obtained above.
+\end{defi}
+
+In this course, $X$ is often a finite-dimensional vector space over $\F$ (and we write it as $V$), and we want the action to satisfy some more properties. We will require the action to be linear, i.e.\ for all $g \in G$, $\mathbf{v}_1, \mathbf{v}_2 \in V$, and $\lambda \in \F$,
+\[
+ g(\mathbf{v}_1 + \mathbf{v}_2) = g \mathbf{v}_1 + g \mathbf{v}_2,\quad g(\lambda \mathbf{v}_1) = \lambda (g\mathbf{v}_1).
+\]
+Alternatively, instead of asking for a map $G \to \Sym(X)$, we would require a map $G \to \GL(V)$ instead.
+
+\section{Basic definitions}
+We now start doing representation theory. We boringly start by defining a representation. In fact, we will come up with several equivalent definitions of a representation. As always, $G$ will be a finite group and $\F$ will be a field, usually $\C$.
+
+\begin{defi}[Representation]\index{representation}\index{linear representation}
+ Let $V$ be a finite-dimensional vector space over $\F$. A \emph{(linear) representation} of $G$ on $V$ is a group homomorphism
+ \[
+ \rho = \rho_V: G \to \GL(V).
+ \]
+ We sometimes write $\rho_g$ for $\rho_V(g)$, so for each $g \in G$, $\rho_g \in \GL(V)$, and $\rho_g \rho_h = \rho_{gh}$ and $\rho_{g^{-1}} = (\rho_g)^{-1}$ for all $g, h \in G$.
+\end{defi}
+
+\begin{defi}[Dimension or degree of representation]\index{dimension}\index{degree}
+ The \emph{dimension} (or \emph{degree}) of a representation $\rho: G \to \GL(V)$ is $\dim_\F(V)$.
+\end{defi}
+
+Recall that $\ker \rho \lhd G$ and $G/\ker \rho \cong \rho(G) \leq \GL(V)$. In the very special case where $\ker \rho$ is trivial, we give it a name:
+\begin{defi}[Faithful representation]\index{faithful representation}
+ A \emph{faithful} representation is a representation $\rho$ such that $\ker \rho = 1$.
+\end{defi}
+These are the representations where the identity is the only element that does nothing.
+
+An alternative (and of course equivalent) definition of a representation is to observe that a linear representation is ``the same'' as a linear action of $G$.
+\begin{defi}[Linear action]\index{linear action}
+ A group $G$ \emph{acts linearly} on a vector space $V$ if it acts on $V$ such that
+ \[
+ g(\mathbf{v}_1 + \mathbf{v}_2) = g \mathbf{v}_1 + g \mathbf{v}_2,\quad g(\lambda \mathbf{v}_1) = \lambda (g\mathbf{v}_1)
+ \]
+ for all $g \in G$, $\mathbf{v}_1, \mathbf{v}_2 \in V$ and $\lambda \in \F$. We call this a \emph{linear action}.
+\end{defi}
+Now if $g$ acts linearly on $V$, the map $G \to \GL(V)$ defined by $g \mapsto \rho_g$, with $\rho_g: \mathbf{v} \mapsto g\mathbf{v}$, is a representation in the previous sense. Conversely, given a representation $\rho: G \to \GL(V)$, we have a linear action of $G$ on $V$ via $g\mathbf{v} = \rho(g) \mathbf{v}$.
+
+In other words, a representation is just a linear action.
+
+\begin{defi}[$G$-space/$G$-module]\index{$G$-space}\index{$G$-module}
 If there is a linear action of $G$ on $V$, we say $V$ is a \emph{$G$-space} or \emph{$G$-module}.
+\end{defi}
+
+Alternatively, we can define a $G$-space as a module over a (not so) cleverly picked ring.
+\begin{defi}[Group algebra]\index{group algebra}\index{$\F G$}
+ The \emph{group algebra} $\F G$ is defined to be the algebra (i.e.\ a vector space with a bilinear multiplication operation) of formal sums
+ \[
 \F G = \left\{ \sum_{g \in G} \alpha_g g: \alpha_g \in \F\right\}
+ \]
+ with the obvious addition and multiplication.
+\end{defi}
+Then we can regard $\F G$ as a ring, and a $G$-space is just an $\F G$-module in the sense of IB Groups, Rings and Modules.
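To make the formal sums concrete, here is a minimal computational sketch of $\F G$ for $G = C_3$, with group elements encoded as integers mod $3$ and algebra elements as coefficient dictionaries. All names here are illustrative, not from any library.

```python
# Sketch of the group algebra F[C_3]: elements of C_3 are 0, 1, 2 under
# addition mod 3, and an algebra element sum_g a_g g is a dict {g: a_g}.

def multiply(a, b, n=3):
    """Convolution product: (sum a_g g)(sum b_h h) = sum a_g b_h (g + h)."""
    out = {}
    for g, ag in a.items():
        for h, bh in b.items():
            k = (g + h) % n
            out[k] = out.get(k, 0) + ag * bh
    return out

# (1 + x)(1 - x) = 1 - x^2 in F[C_3], writing x for the generator 1.
print(multiply({0: 1, 1: 1}, {0: 1, 1: -1}))  # {0: 1, 1: 0, 2: -1}
```

The product is just convolution of coefficients over the group law, which is exactly ``the obvious multiplication'' on formal sums.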
+
+\begin{defi}[Matrix representation]\index{matrix representation}
 $R$ is a \emph{matrix representation} of $G$ of degree $n$ if $R$ is a homomorphism $G \to \GL_n(\F)$.
+\end{defi}
+We can view this as a representation that acts on $\F^n$. Since all finite-dimensional vector spaces are isomorphic to $\F^n$ for some $n$, every representation is equivalent to some matrix representation. In particular, given a linear representation $\rho: G \to \GL(V)$ with $\dim V = n$, we can get a matrix representation by fixing a basis $\mathcal{B}$, and then define the matrix representation $G \to \GL_n(\F)$ by $g \mapsto [\rho(g)]_{\mathcal{B}}$.
+
+Conversely, given a matrix representation $R$, we get a linear representation $\rho$ in the obvious way --- $\rho: G \to \GL(\F^n)$ by $g \mapsto \rho_g$ via $\rho_g(\mathbf{v}) = R_g \mathbf{v}$.
+
+We have defined representations in four ways --- as a homomorphism to $\GL(V)$, as linear actions, as $\F G$-modules and as matrix representations. Now let's look at some examples.
+
+\begin{eg}[Trivial representation]\index{trivial representation}
+ Given any group $G$, take $V = \F$ (the one-dimensional space), and $\rho: G \to \GL(V)$ by $g \mapsto (\id: \F \to \F)$ for all $g$. This is the \emph{trivial representation} of $G$, and has degree $1$.
+\end{eg}
Despite being called trivial, trivial representations are highly non-trivial in representation theory. The way they interact with other representations, geometrically, topologically etc., means they cannot be disregarded. This is a very important representation, despite looking silly.
+
+\begin{eg}
+ Let $G = C_4 = \bra x: x^4 = 1\ket$. Let $n = 2$, and work over $\F = \C$. Then we can define a representation by picking a matrix $A$, and then define $R: x \mapsto A$. Then the action of other elements follows directly by $x^j \mapsto A^j$. Of course, we cannot choose $A$ arbitrarily. We need to have $A^4 = I_2$, and this is easily seen to be the only restriction. So we have the following possibilities:
+ \begin{enumerate}
+ \item $A$ is diagonal: the diagonal entries can be chosen freely from $\{\pm 1, \pm i\}$. Since there are two diagonal entries, we have $16$ choices.
 \item $A$ is not diagonal: then it is conjugate to a diagonal matrix, since $A^4 = I_2$ means the minimal polynomial of $A$ divides $X^4 - 1$, which has distinct roots. So we don't really get anything new.
+ \end{enumerate}
+\end{eg}
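A small numerical check of this example (using NumPy; the variable names are our own):

```python
import numpy as np

# Sketch: a degree-2 matrix representation of C_4 = <x : x^4 = 1> over C is
# determined by A = R(x) with A^4 = I.  Diagonal matrices with entries drawn
# from the fourth roots of unity {1, -1, i, -i} satisfy this.
A = np.diag([1j, -1])
assert np.allclose(np.linalg.matrix_power(A, 4), np.eye(2))

# The 16 diagonal choices: each diagonal entry ranges over the fourth roots.
roots = [1, -1, 1j, -1j]
reps = [np.diag([a, b]) for a in roots for b in roots]
assert all(np.allclose(np.linalg.matrix_power(M, 4), np.eye(2)) for M in reps)
print(len(reps))  # 16
```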
What we would like to say above is that any matrix representation in which $A$ is not diagonal is ``equivalent'' to one in which $A$ is. To make this notion precise, we need to define what it means for representations to be equivalent, or ``isomorphic''.
+
+As usual, we will define the notion of a homomorphism of representations, and then an isomorphism is just an invertible homomorphism.
+
+\begin{defi}[$G$-homomorphism/intertwine]\index{$G$-homomorphism}\index{intertwine}
+ Fix a group $G$ and a field $\F$. Let $V, V'$ be finite-dimensional vector spaces over $\F$ and $\rho: G \to \GL(V)$ and $\rho': G \to \GL(V')$ be representations of $G$. The linear map $\varphi: V \to V'$ is a \emph{$G$-homomorphism} if
+ \[
+ \varphi \circ \rho(g) = \rho'(g) \circ \varphi\tag{$*$}
+ \]
+ for all $g \in G$. In other words, the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ V \ar[r, "\rho_g"] \ar[d, "\varphi"] & V \ar[d, "\varphi"]\\
+ V' \ar[r, "\rho_g'"] & V'
+ \end{tikzcd}
+ \]
 i.e.\ no matter which way we go from $V$ (top left) to $V'$ (bottom right), we still get the same map.
+
+ We say $\varphi$ \emph{intertwines} $\rho$ and $\rho'$. We write $\Hom_G(V, V')$ for the $\F$-space of all these maps.
+\end{defi}
+
+\begin{defi}[$G$-isomorphism]\index{$G$-isomorphism}
+ A $G$-homomorphism is a \emph{$G$-isomorphism} if $\varphi$ is bijective.
+\end{defi}
+
+\begin{defi}[Equivalent/isomorphic representations]\index{equivalent representation}\index{isomorphic representation}
+ Two representations $\rho, \rho'$ are \emph{equivalent} or \emph{isomorphic} if there is a $G$-isomorphism between them.
+\end{defi}
+
+If $\varphi$ is a $G$-isomorphism, then we can write $(*)$ as
+\[
 \rho'(g) = \varphi \circ \rho(g) \circ \varphi^{-1}\quad\text{for all } g \in G.\tag{$\dagger$}
+\]
+\begin{lemma}
+ The relation of ``being isomorphic'' is an equivalence relation on the set of all linear representations of $G$ over $\F$.
+\end{lemma}
+This is an easy exercise left for the reader.
+
+\begin{lemma}
+ If $\rho, \rho'$ are isomorphic representations, then they have the same dimension.
+\end{lemma}
+
+\begin{proof}
+ Trivial since isomorphisms between vector spaces preserve dimension.
+\end{proof}
+
+The converse is false.
+\begin{eg}
+ $C_4$ has four non-isomorphic one-dimensional representations: if $\omega = e^{2 \pi i/4}$, then we have the representations
+ \[
+ \rho_j (x^i) = \omega^{ij},
+ \]
 for $0 \leq i \leq 3$ and $j = 0, 1, 2, 3$. These are pairwise non-equivalent, since the scalars $\rho_j(x) = \omega^j$ are distinct.
+\end{eg}
+Our other formulations of representations give us other formulations of isomorphisms.
+
+Given a group $G$, field $\F$, a vector space $V$ of dimension $n$, and a representation $\rho: G \to \GL(V)$, we fix a basis $\mathcal{B}$ of $V$. Then we get a linear $G$-isomorphism $\varphi: V \to \F^n$ by $\mathbf{v} \mapsto [\mathbf{v}]_{\mathcal{B}}$, i.e.\ by writing $\mathbf{v}$ as a column vector with respect to $\mathcal{B}$. Then we get a representation $\rho': G \to \GL(\F^n)$ isomorphic to $\rho$. In other words, every representation is isomorphic to a matrix representation:
+\[
+ \begin{tikzcd}
+ V \ar[r, "\rho"] \ar[d, "\varphi"] & V \ar[d, "\varphi"]\\
+ \F^n \ar[r, "\rho'"] & \F^n
+ \end{tikzcd}
+\]
+Thus, in terms of matrix representations, the representations $R: G \to \GL_n(\F)$ and $R': G \to \GL_n(\F)$ are $G$-isomorphic if there exists some non-singular matrix $X \in \GL_n(\F)$ such that
+\[
+ R'(g) = X R(g) X^{-1}
+\]
+for all $g$.
+
+Alternatively, in terms of linear $G$-actions, the actions of $G$ on $V$ and $V'$ are $G$-isomorphic if there is some isomorphism $\varphi: V \to V'$ such that
+\[
 g \varphi(\mathbf{v}) = \varphi(g\mathbf{v})
+\]
+for all $g \in G, \mathbf{v} \in V$. It is an easy check that this is just a reformulation of our previous definition.
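As an illustrative check of the matrix formulation, we can take a non-diagonal matrix of order $4$ (generating a representation of $C_4$) and conjugate it to a diagonal one via its eigenvector matrix. This sketch and its variable names are our own, not from the notes:

```python
import numpy as np

# The rotation matrix R below satisfies R^4 = I but is not diagonal;
# conjugating by the eigenvector matrix X gives an equivalent (isomorphic)
# diagonal matrix representation of C_4: X^{-1} R X is diagonal.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.allclose(np.linalg.matrix_power(R, 4), np.eye(2))

eigvals, X = np.linalg.eig(R)
R_diag = np.linalg.inv(X) @ R @ X
assert np.allclose(R_diag, np.diag(eigvals))
print(eigvals)  # the eigenvalues i and -i, in some order
```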
+
+Just as we have subgroups and subspaces, we have the notion of sub-representation.
+\begin{defi}[$G$-subspace]\index{$G$-subspace}
+ Let $\rho: G \to \GL(V)$ be a representation of $G$. We say $W \leq V$ is a \emph{$G$-subspace} if it is a subspace that is $\rho(G)$-invariant, i.e.
+ \[
+ \rho_g(W) \leq W
+ \]
+ for all $g \in G$.
+\end{defi}
+
+Obviously, $\{0\}$ and $V$ are $G$-subspaces. These are the trivial $G$-subspaces.
+
+\begin{defi}[Irreducible/simple representation]\index{irreducible representation}\index{simple representation}
+ A representation $\rho$ is \emph{irreducible} or \emph{simple} if there are no proper non-zero $G$-subspaces.
+\end{defi}
+
+\begin{eg}
+ Any $1$-dimensional representation of $G$ is necessarily irreducible, but the converse does not hold, or else life would be very boring. We will later see that $D_8$ has a two-dimensional irreducible complex representation.
+\end{eg}
+
+\begin{defi}[Subrepresentation]\index{subrepresentation}
 If $W$ is a $G$-subspace, then the corresponding map $G \to \GL(W)$ given by $g \mapsto \rho(g)|_W$ gives us a representation of $G$ on $W$. This is a \emph{subrepresentation} of $\rho$.
+\end{defi}
+
+There is a nice way to characterize this in terms of matrices.
+\begin{lemma}
+ Let $\rho: G \to \GL(V)$ be a representation, and $W$ be a $G$-subspace of $V$. If $\mathcal{B} = \{\mathbf{v}_1, \cdots, \mathbf{v}_n\}$ is a basis containing a basis $\mathcal{B}_1 = \{\mathbf{v}_1, \cdots, \mathbf{v}_m\}$ of $W$ (with $0 < m < n$), then the matrix of $\rho(g)$ with respect to $\mathcal{B}$ has the block upper triangular form
+ \[
+ \begin{pmatrix}
+ * & *\\
+ 0 & *
+ \end{pmatrix}
+ \]
+ for each $g \in G$.
+\end{lemma}
+This follows directly from definition.
+
+However, we do not like block triangular matrices. What we really like is block diagonal matrices, i.e.\ we want the top-right block to vanish. There is no \emph{a priori} reason why this has to be true --- it is possible that we cannot find another $G$-invariant complement to $W$.
+
+\begin{defi}[(In)decomposable representation]\index{decomposable representation}\index{indecomposable representation}
+ A representation $\rho: G \to \GL(V)$ is \emph{decomposable} if there are proper $G$-invariant subspaces $U, W \leq V$ with
+ \[
+ V = U \oplus W.
+ \]
 We say $\rho$ is the direct sum $\rho_u \oplus \rho_w$ of the subrepresentations $\rho_u$ on $U$ and $\rho_w$ on $W$.
+
+ If no such decomposition exists, we say that $\rho$ is \emph{indecomposable}.
+\end{defi}
+It is clear that irreducibility implies indecomposability. The converse is not necessarily true. However, over a field of characteristic zero, it turns out irreducibility is the same as indecomposability for finite groups, as we will see in the next chapter.
+
+Again, we can formulate this in terms of matrices.
+\begin{lemma}
+ Let $\rho: G \to \GL(V)$ be a decomposable representation with $G$-invariant decomposition $V = U \oplus W$. Let $\mathcal{B}_1 = \{\mathbf{u}_1, \cdots, \mathbf{u}_k\}$ and $\mathcal{B}_2 = \{\mathbf{w}_1, \cdots, \mathbf{w}_\ell\}$ be bases for $U$ and $W$, and $\mathcal{B} = \mathcal{B}_1 \cup \mathcal{B}_2$ be the corresponding basis for $V$. Then with respect to $\mathcal{B}$, we have
+ \[
+ [\rho(g)]_{\mathcal{B}} =
+ \begin{pmatrix}
+ [\rho_u(g)]_{\mathcal{B}_1} & 0\\
 0 & [\rho_w (g)]_{\mathcal{B}_2}
+ \end{pmatrix}
+ \]
+\end{lemma}
+
+\begin{eg}
+ Let $G = D_6$. Then every irreducible complex representation has dimension at most $2$.
+
+ To show this, let $\rho: G \to \GL(V)$ be an irreducible $G$-representation. Let $r \in G$ be a (non-identity) rotation and $s\in G$ be a reflection. These generate $D_6$.
+
 Take an eigenvector $\mathbf{v}$ of $\rho(r)$. So $\rho(r) \mathbf{v} = \lambda \mathbf{v}$ for some $\lambda \not= 0$ (since $\rho(r)$ is invertible, it cannot have zero eigenvalues). Let
 \[
 W = \bra \mathbf{v}, \rho(s) \mathbf{v}\ket \leq V
+ \]
+ be the space spanned by the two vectors. We now check this is fixed by $\rho$. Firstly, we have
+ \[
+ \rho(s)\rho(s) \mathbf{v} = \rho(e) \mathbf{v} = \mathbf{v} \in W,
+ \]
+ and
+ \[
+ \rho(r)\rho(s) \mathbf{v} = \rho(s) \rho(r^{-1}) \mathbf{v} = \lambda^{-1} \rho(s) \mathbf{v} \in W.
+ \]
+ Also, $\rho(r) \mathbf{v} = \lambda \mathbf{v} \in W$ and $\rho(s) \mathbf{v} \in W$. So $W$ is $G$-invariant. Since $V$ is irreducible, we must have $W = V$. So $V$ has dimension at most $2$.
+\end{eg}
+
+The reverse operation of decomposition is taking direct sums.
+\begin{defi}[Direct sum]\index{direct sum}
+ Let $\rho: G\to \GL(V)$ and $\rho': G \to \GL(V')$ be representations of $G$. Then the \emph{direct sum} of $\rho, \rho'$ is the representation
+ \[
+ \rho \oplus \rho': G \to \GL(V \oplus V')
+ \]
+ given by
+ \[
+ (\rho \oplus \rho')(g)(\mathbf{v} + \mathbf{v}') = \rho(g) \mathbf{v} + \rho'(g) \mathbf{v}'.
+ \]
+\end{defi}
+In terms of matrices, for matrix representations $R: G \to \GL_n(\F)$ and $R': G \to \GL_{n'}(\F)$, define $R \oplus R': G \to \GL_{n + n'}(\F)$ by
+\[
+ (R\oplus R') (g) =
+ \begin{pmatrix}
+ R(g) & 0\\
+ 0 & R'(g)
+ \end{pmatrix}.
+\]
The direct sum was easy to define. It turns out we can also multiply two representations, known as the tensor product. However, to do this, we need to know what the tensor product of two vector spaces is. We will not do this yet.
+
+\section{Complete reducibility and Maschke's theorem}
+In representation theory, we would like to decompose a representation into sums of irreducible representations. Unfortunately, this is not always possible. When we can, we say the representation is completely reducible.
+
+\begin{defi}[Completely reducible/semisimple representation]\index{completely reducible representation}\index{semisimple representation}
+ A representation $\rho: G \to \GL(V)$ is \emph{completely reducible} or \emph{semisimple} if it is the direct sum of irreducible representations.
+\end{defi}
+Clearly, irreducible implies completely reducible.
+
+Not all representations are completely reducible. An example is to be found on example sheet 1. These are in fact not too hard to find. For example, there are representations of $\Z$ over $\C$ that are not completely reducible, and also a non-completely reducible representation of $C_p$ over $\F_p$.
+
+However, it turns out we have the following theorem:
+\begin{thm}
+ Every finite-dimensional representation $V$ of a finite group over a field of characteristic $0$ is completely reducible, namely, $V \cong V_1 \oplus \cdots \oplus V_r$ is a direct sum of irreducible representations.
+\end{thm}
+By induction, it suffices to prove the following:
+\begin{thm}[Maschke's theorem]\index{Maschke's theorem}
+ Let $G$ be a finite group, and $\rho: G \to \GL(V)$ a representation over a finite-dimensional vector space $V$ over a field $\F $ with $\Char \F = 0$. If $W$ is a $G$-subspace of $V$, then there exists a $G$-subspace $U$ of $V$ such that $V = W \oplus U$.
+\end{thm}
+
+We will prove this many times.
+\begin{proof}
+ From linear algebra, we know $W$ has a complementary sub\emph{space}. Let $W'$ be any vector subspace complement of $W$ in $V$, i.e.\ $V = W \oplus W'$ \emph{as vector spaces}.
+
+ Let $q: V \to W$ be the projection of $V$ onto $W$ along $W'$, i.e.\ if $\mathbf{v} = \mathbf{w} + \mathbf{w}'$ with $\mathbf{w} \in W, \mathbf{w}' \in W'$, then $q(\mathbf{v}) = \mathbf{w}$.
+
+ The clever bit is to take this $q$ and tweak it a little bit. Define
+ \[
+ \bar{q}: \mathbf{v} \mapsto \frac{1}{|G|} \sum_{g \in G} \rho(g) q (\rho(g^{-1})\mathbf{v}).
+ \]
 This is in some sense an averaging operator, averaging over what $\rho(g)$ does. Here we need the field to have characteristic zero so that $\frac{1}{|G|}$ is well-defined. In fact, this theorem holds as long as $\Char \F \nmid |G|$.
+
+ For simplicity of expression, we drop the $\rho$'s, and simply write
+ \[
+ \bar{q}: \mathbf{v} \mapsto \frac{1}{|G|} \sum_{g \in G} g q (g^{-1}\mathbf{v}).
+ \]
+ We first claim that $\bar{q}$ has image in $W$. This is true since for $\mathbf{v} \in V$, $q(g^{-1} \mathbf{v}) \in W$, and $gW \leq W$. So this is a little bit like a projection.
+
+ Next, we claim that for $\mathbf{w} \in W$, we have $\bar{q}(\mathbf{w}) = \mathbf{w}$. This follows from the fact that $q$ itself fixes $W$. Since $W$ is $G$-invariant, we have $g^{-1} \mathbf{w} \in W$ for all $\mathbf{w} \in W$. So we get
+ \[
+ \bar{q}(\mathbf{w}) = \frac{1}{|G|} \sum_{g \in G} g q(g^{-1}\mathbf{w}) = \frac{1}{|G|} \sum_{g \in G} gg^{-1}\mathbf{w} = \frac{1}{|G|} \sum_{g \in G}\mathbf{w} = \mathbf{w}.
+ \]
+ Putting these together, this tells us $\bar{q}$ is a projection onto $W$.
+
+ Finally, we claim that for $h \in G$, we have $h \bar{q}(\mathbf{v}) = \bar{q}(h\mathbf{v})$, i.e.\ it is invariant under the $G$-action. This follows easily from definition:
+ \begin{align*}
+ h \bar{q} (\mathbf{v}) &= h\frac{1}{|G|} \sum_{g \in G} g q (g^{-1}\mathbf{v})\\
+ &= \frac{1}{|G|} \sum_{g \in G} hg q (g^{-1} \mathbf{v})\\
+ &= \frac{1}{|G|} \sum_{g \in G} (hg) q ((hg)^{-1} h\mathbf{v})\\
+ \intertext{We now put $g' = hg$. Since $h$ is invertible, summing over all $g$ is the same as summing over all $g'$. So we get}
+ &= \frac{1}{|G|} \sum_{g' \in G} g' q(g'^{-1}(h\mathbf{v}))\\
+ &= \bar{q} (h\mathbf{v}).
+ \end{align*}
+ We are pretty much done. We finally show that $\ker \bar{q}$ is $G$-invariant. If $\mathbf{v} \in \ker \bar{q}$ and $h \in G$, then $\bar{q}(h\mathbf{v}) = h\bar{q}(\mathbf{v}) = 0$. So $h\mathbf{v} \in \ker \bar{q}$.
+
+ Thus
+ \[
+ V = \im \bar{q} \oplus \ker \bar{q} = W \oplus \ker\bar{q}
+ \]
+ is a $G$-subspace decomposition.
+\end{proof}
+The crux of the whole proof is the definition of $\bar{q}$. Once we have that, everything else follows easily.
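Here is a small numerical sketch of the averaging trick, for the (assumed) example of $C_2$ acting on $\C^2$ by swapping coordinates, with $W$ spanned by $(1,1)$ and a non-invariant vector space complement spanned by $(1,0)$:

```python
import numpy as np

I2 = np.eye(2)
S = np.array([[0.0, 1.0], [1.0, 0.0]])    # the coordinate swap; G = {I, S}
G = [I2, S]

# q projects onto W = span{(1,1)} along the non-invariant span{(1,0)}.
q = np.array([[0.0, 1.0], [0.0, 1.0]])

# The averaged operator q_bar = (1/|G|) sum_g g q g^{-1}.
q_bar = sum(g @ q @ np.linalg.inv(g) for g in G) / len(G)

assert np.allclose(q_bar @ q_bar, q_bar)       # q_bar is a projection
assert np.allclose(q_bar @ S, S @ q_bar)       # q_bar commutes with G
assert np.allclose(q_bar @ np.array([1.0, 1.0]), [1.0, 1.0])  # fixes W
print(q_bar)  # the G-equivariant projection onto W
```

Note that averaging turned the non-equivariant projection $q$ into a $G$-equivariant one, whose kernel is the $G$-invariant complement of $W$.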
+
+Yet, for the whole proof to work, we need $\frac{1}{|G|}$ to exist, which in particular means $G$ must be a finite group. There is no obvious way to generalize this to infinite groups. So let's try a different proof.
+
+The second proof uses inner products, and hence we must take $\F = \C$. This can be generalized to infinite compact groups, as we will later see.
+
+Recall the definition of an inner product:
+\begin{defi}[Hermitian inner product]\index{Hermitian inner product}
+ For $V$ a complex space, $\bra\ph, \ph\ket$ is a \emph{Hermitian inner product} if
+ \begin{enumerate}
+ \item $\bra \mathbf{v}, \mathbf{w}\ket = \overline{\bra \mathbf{w}, \mathbf{v}\ket}$ \hfill (Hermitian)
+ \item $\bra \mathbf{v}, \lambda_1 \mathbf{w}_1 + \lambda_2 \mathbf{w}_2\ket = \lambda_1 \bra \mathbf{v}, \mathbf{w}_1\ket + \lambda_2 \bra \mathbf{v}, \mathbf{w}_2\ket$ \hfill (sesquilinear)
+ \item $\bra \mathbf{v}, \mathbf{v}\ket > 0$ if $\mathbf{v} \not= 0$\hfill (positive definite)
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[$G$-invariant inner product]\index{$G$-invariant inner product}
+ An inner product $\bra \ph, \ph \ket$ is in addition \emph{$G$-invariant} if
+ \[
+ \bra g\mathbf{v}, g\mathbf{w}\ket = \bra \mathbf{v}, \mathbf{w}\ket.
+ \]
+\end{defi}
+
+\begin{prop}
 Let $W$ be a $G$-invariant subspace of $V$, and suppose $V$ has a $G$-invariant inner product. Then $W^\perp$ is also $G$-invariant.
+\end{prop}
+
+\begin{proof}
+ To prove this, we have to show that for all $\mathbf{v} \in W^\perp$, $g \in G$, we have $g \mathbf{v} \in W^\perp$.
+
+ This is not hard. We know $\mathbf{v} \in W^\perp$ if and only if $\bra \mathbf{v}, \mathbf{w}\ket = 0$ for all $\mathbf{w} \in W$. Thus, using the definition of $G$-invariance, for $\mathbf{v} \in W^\perp$, we know
+ \[
+ \bra g\mathbf{v}, g\mathbf{w}\ket = 0
+ \]
+ for all $g \in G, \mathbf{w}\in W$.
+
+ Thus for all $\mathbf{w}' \in W$, pick $\mathbf{w} = g^{-1} \mathbf{w}' \in W$, and this shows $\bra g\mathbf{v}, \mathbf{w}'\ket = 0$. Hence $g\mathbf{v} \in W^\perp$.
+\end{proof}
+
+Hence if there is a $G$-invariant inner product on any complex $G$-space $V$, then we get another proof of Maschke's theorem.
+\begin{thm}[Weyl's unitary trick]\index{Weyl's unitary trick}
+ Let $\rho$ be a complex representation of a finite group $G$ on the complex vector space $V$. Then there is a $G$-invariant Hermitian inner product on $V$.
+\end{thm}
+
+Recall that the unitary group is defined by
+\begin{align*}
+ \U(V) &= \{f \in \GL(V): \bra f(\mathbf{u}), f(\mathbf{v})\ket = \bra \mathbf{u}, \mathbf{v}\ket \text{ for all }\mathbf{u}, \mathbf{v} \in V\}\\
+ &= \{A \in \GL_n(\C): AA^\dagger = I\}\\
+ &= \U(n).
+\end{align*}
+Then we have an easy corollary:
+\begin{cor}
+ Every finite subgroup of $\GL_n(\C)$ is conjugate to a subgroup of $\U(n)$.
+\end{cor}
+
+\begin{proof}
 We prove the theorem; the corollary then follows by taking an orthonormal basis with respect to the invariant inner product. We start by defining an arbitrary inner product on $V$: take a basis $\mathbf{e}_1, \cdots, \mathbf{e}_n$. Define $(\mathbf{e}_i, \mathbf{e}_j) = \delta_{ij}$, and extend it sesquilinearly. Define a new inner product
+ \[
+ \bra \mathbf{v}, \mathbf{w}\ket = \frac{1}{|G|} \sum_{g \in G} (g\mathbf{v}, g\mathbf{w}).
+ \]
+ We now check this is sesquilinear, positive-definite and $G$-invariant. Sesquilinearity and positive-definiteness are easy. So we just check $G$-invariance: we have
+ \begin{align*}
+ \bra h\mathbf{v}, h\mathbf{w}\ket &= \frac{1}{|G|} \sum_{g \in G} ((gh)\mathbf{v}, (gh)\mathbf{w})\\
+ &= \frac{1}{|G|} \sum_{g' \in G} (g' \mathbf{v}, g' \mathbf{w})\\
+ &= \bra \mathbf{v}, \mathbf{w}\ket.\qedhere
+ \end{align*}
+\end{proof}
+Note that this trick also works for real representations.
+
Again, we had to use the factor $\frac{1}{|G|}$. To generalize this to compact groups, we will later replace the sum by an integral, and $\frac{1}{|G|}$ by a volume element. This is fine since $(g' \mathbf{v}, g'\mathbf{w})$ is a complex number and we know how to integrate complex numbers. This cannot be easily done in the case of $\bar{q}$.
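The averaging in Weyl's trick can also be checked numerically. In the sketch below (our own example), $G = \{I, A\}$ with $A$ of order $2$ but not unitary for the standard inner product; averaging the Gram matrix produces a $G$-invariant inner product $\bra \mathbf{v}, \mathbf{w}\ket = \mathbf{v}^\dagger H \mathbf{w}$:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, -1.0]])
assert np.allclose(A @ A, np.eye(2))           # A has order 2
G = [np.eye(2), A]

# Averaged Gram matrix: <v, w> = (1/|G|) sum_g (gv)^† (gw) = v^† H w.
H = sum(g.conj().T @ g for g in G) / len(G)

# G-invariance of the new inner product: g^† H g = H for all g in G.
for g in G:
    assert np.allclose(g.conj().T @ H @ g, H)
print(H)  # the averaged Gram matrix
```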
+
+Recall we defined the group algebra of $G$ to be the $\F$-vector space
+\[
+ \F G = \bra \mathbf{e}_g: g \in G\ket,
+\]
+i.e.\ its basis is in one-to-one correspondence with the elements of $G$. There is a linear $G$-action defined in the obvious way: for $h \in G$, we define
+\[
+ h \sum_g a_g \mathbf{e}_g = \sum_g a_g \mathbf{e}_{hg} = \sum_{g'} a_{h^{-1}g'} \mathbf{e}_{g'}.
+\]
+This gives a representation of $G$.
+\begin{defi}[Regular representation and regular module]\index{regular representation}\index{regular module}
+ The \emph{regular representation} of a group $G$, written $\rho_{\mathrm{reg}}$, is the natural action of $G$ on $\F G$. $\F G$ is called the \emph{regular module}.
+\end{defi}
+It is a nice faithful representation of dimension $|G|$. Far more importantly, it turns out that \emph{every} irreducible representation of $G$ is a subrepresentation of the regular representation.
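As a sketch, the regular representation of $C_3$ consists of $3 \times 3$ permutation matrices, and we can verify the homomorphism property and faithfulness directly (the function name is our own):

```python
import numpy as np

# Regular representation of C_3 (elements 0, 1, 2 under addition mod 3):
# rho_reg(h) sends the basis vector e_g to e_{hg}.
n = 3
def rho_reg(h):
    P = np.zeros((n, n))
    for g in range(n):
        P[(h + g) % n, g] = 1.0
    return P

# Homomorphism: rho_reg(h) rho_reg(k) = rho_reg(hk).
for h in range(n):
    for k in range(n):
        assert np.allclose(rho_reg(h) @ rho_reg(k), rho_reg((h + k) % n))

# Faithful: only the identity element maps to the identity matrix.
assert all(not np.allclose(rho_reg(h), np.eye(n)) for h in range(1, n))
```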
+
+\begin{prop}
+ Let $\rho$ be an irreducible representation of the finite group $G$ over a field of characteristic 0. Then $\rho$ is isomorphic to a subrepresentation of $\rho_{\mathrm{reg}}$.
+\end{prop}
+
+This will also follow from a more general result using character theory later.
+\begin{proof}
 Let $\rho: G \to \GL(V)$ be irreducible, and pick our favorite $0 \not= \mathbf{v} \in V$. Now define $\theta: \F G \to V$ by
 \[
 \sum_g a_g \mathbf{e}_g \mapsto \sum_g a_g (g\mathbf{v}).
+ \]
+ It is not hard to see this is a $G$-homomorphism. We are now going to exploit the fact that $V$ is irreducible. Thus, since $\im \theta$ is a $G$-subspace of $V$ and non-zero, we must have $\im \theta = V$. Also, $\ker \theta$ is a $G$-subspace of $\F G$. Now let $W$ be the $G$-complement of $\ker \theta$ in $\F G$, which exists by Maschke's theorem. Then $W \leq \F G$ is a $G$-subspace and
+ \[
+ \F G = \ker \theta \oplus W.
+ \]
+ Then the isomorphism theorem gives
+ \[
+ W\cong \F G / \ker \theta \cong \im \theta = V.\qedhere
+ \]
+\end{proof}
More generally, $G$ doesn't have to act on the vector space generated by itself. If $G$ acts on any set $X$, we can form the vector space with basis indexed by $X$, and $G$ acts on it.
+\begin{defi}[Permutation representation]\index{permutation representation}
+ Let $\F$ be a field, and let $G$ act on a set $X$. Let $\F X = \bra \mathbf{e}_x: x \in X\ket$ with a $G$-action given by
+ \[
+ g \sum_x a_x \mathbf{e}_x = \sum_x a_x \mathbf{e}_{gx}.
+ \]
+ So we have a $G$-space on $\F X$. The representation $G \to \GL(\F X)$ is the corresponding \emph{permutation representation}.
+\end{defi}
+
+\section{Schur's lemma}
+The topic of this chapter is \emph{Schur's lemma}, an easy yet extremely useful lemma in representation theory.
+
+\begin{thm}[Schur's lemma]\index{Schur's lemma}\leavevmode
+ \begin{enumerate}
+ \item Assume $V$ and $W$ are irreducible $G$-spaces over a field $\F$. Then any $G$-homomorphism $\theta: V \to W$ is either zero or an isomorphism.
+ \item If $\F$ is algebraically closed, and $V$ is an irreducible $G$-space, then any $G$-endomorphism $V \to V$ is a scalar multiple of the identity map $\iota_V$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $\theta: V \to W$ be a $G$-homomorphism between irreducibles. Then $\ker \theta$ is a $G$-subspace of $V$, and since $V$ is irreducible, either $\ker \theta = 0$ or $\ker \theta = V$. Similarly, $\im \theta$ is a $G$-subspace of $W$, and as $W$ is irreducible, we must have $\im \theta = 0$ or $\im \theta = W$. Hence either $\ker \theta = V$, in which case $\theta = 0$, or $\ker \theta = 0$ and $\im \theta = W$, i.e.\ $\theta$ is a bijection.
+ \item Since $\F$ is algebraically closed, $\theta$ has an eigenvalue $\lambda$. Then $\theta - \lambda \iota_V$ is a singular $G$-endomorphism of $V$. So by (i), it must be the zero map. So $\theta = \lambda \iota_V$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Recall that the $\F$-space $\Hom_G(V, W)$ is the space of all $G$-homomorphisms $V \to W$. If $V = W$, we write $\End_G(V)$ for the $G$-endomorphisms of $V$.
+
+\begin{cor}
+ If $V, W$ are irreducible complex $G$-spaces, then
+ \[
+ \dim_\C \Hom_G(V, W) =
+ \begin{cases}
+ 1 & V, W\text{ are $G$-isomorphic}\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+\end{cor}
+
+\begin{proof}
+ If $V$ and $W$ are not isomorphic, then the only possible map between them is the zero map by Schur's lemma.
+
+ Otherwise, suppose $V \cong W$ and let $\theta_1, \theta_2 \in \Hom_G(V, W)$ be both non-zero. By Schur's lemma, they are isomorphisms, and hence invertible. So $\theta_2^{-1}\theta_1 \in \End_G(V)$. Thus $\theta_2^{-1}\theta_1 = \lambda \iota_V$ for some $\lambda \in \C$. Thus $\theta_1 = \lambda \theta_2$.
+\end{proof}
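This corollary can be verified computationally: the conditions $\varphi \circ \rho(g) = \rho'(g) \circ \varphi$ are linear in $\varphi$, so $\dim \Hom_G(V, W)$ is the dimension of a null space, computable with Kronecker products. Below is a sketch for $G = C_2$ and its two one-dimensional irreducibles; the helper \texttt{hom\_dim} is our own:

```python
import numpy as np

def hom_dim(rho, rho_prime):
    """dim Hom_G(V, W): null space of vec(phi R_g - S_g phi) = 0 over all g,
    using vec(A X B) = (B^T kron A) vec(X)."""
    n, m = rho[0].shape[0], rho_prime[0].shape[0]
    blocks = [np.kron(Rg.T, np.eye(m)) - np.kron(np.eye(n), Sg)
              for Rg, Sg in zip(rho, rho_prime)]
    M = np.vstack(blocks)
    return (n * m) - np.linalg.matrix_rank(M)

# The two irreducibles of C_2 = {e, s}: trivial and sign, listed as [rho(e), rho(s)].
triv = [np.array([[1.0]]), np.array([[1.0]])]
sign = [np.array([[1.0]]), np.array([[-1.0]])]
assert hom_dim(triv, triv) == 1   # isomorphic: dimension 1
assert hom_dim(sign, sign) == 1
assert hom_dim(triv, sign) == 0   # non-isomorphic: dimension 0
```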
+
+We have another less obvious corollary.
+\begin{cor}
+ If $G$ is a finite group and has a faithful complex irreducible representation, then its center $Z(G)$ is cyclic.
+\end{cor}
This is a useful result --- it allows us to transfer our representation-theoretic knowledge (the existence of a faithful complex irreducible representation) to group-theoretic knowledge (the center of the group being cyclic). This will become increasingly common in the future, and is a good thing since representations are easy and groups are hard.
+
+The converse, however, is not true. For this, see example sheet 1, question 10.
+\begin{proof}
+ Let $\rho: G \to \GL(V)$ be a faithful irreducible complex representation. Let $z \in Z(G)$. So $zg = gz$ for all $g \in G$. Hence $\phi_z: \mathbf{v} \mapsto z\mathbf{v}$ is a $G$-endomorphism on $V$. Hence by Schur's lemma, it is multiplication by a scalar $\mu_z$, say. Thus $z\mathbf{v} = \mu_z \mathbf{v}$ for all $\mathbf{v}\in V$.
+
+ Then the map
+ \begin{align*}
+ \sigma: Z(G) &\to \C^\times\\
 z &\mapsto \mu_z
+ \end{align*}
 is a representation of $Z(G)$. Since $\rho$ is faithful, so is $\sigma$. So $Z(G) \cong \sigma(Z(G)) = \{\mu_z: z \in Z(G)\}$, which is a finite subgroup of $\C^\times$, hence cyclic.
+\end{proof}
+
+\begin{cor}
+ The irreducible complex representations of a finite abelian group $G$ are all $1$-dimensional.
+\end{cor}
+
+\begin{proof}
+ We can use the fact that commuting diagonalizable matrices are simultaneously diagonalizable. Thus for every irreducible $V$, we can pick some $\mathbf{v} \in V$ that is an eigenvector for each $g \in G$. Thus $\bra \mathbf{v}\ket$ is a $G$-subspace. As $V$ is irreducible, we must have $V = \bra \mathbf{v}\ket$.
+
+ Alternatively, we can prove this in a representation-theoretic way. Let $V$ be an irreducible complex representation. For each $g \in G$, the map
+ \begin{align*}
+ \theta_g: V &\to V\\
+ \mathbf{v} &\mapsto g\mathbf{v}
+ \end{align*}
+ is a $G$-endomorphism of $V$, since it commutes with the other group elements. Since $V$ is irreducible, $\theta_g = \lambda_g \iota_V$ for some $\lambda_g \in \C$. Thus
+ \[
+ g\mathbf{v} = \lambda_g \mathbf{v}
+ \]
 for any $g$. So every subspace of $V$ is $G$-invariant; in particular, $\bra \mathbf{v}\ket$ is a $G$-subspace for any non-zero $\mathbf{v} \in V$. As $V$ is irreducible, we must have $V = \bra \mathbf{v}\ket$.
+\end{proof}
Note that this result fails over $\R$. For example, $C_3$ has two irreducible real representations, one of dimension $1$ and one of dimension $2$.
+
We can do something else. Recall that every finite abelian group $G$ is isomorphic to a product of cyclic groups. In fact, it can be written as a product of $C_{p^\alpha}$ for various primes $p$ and $\alpha \geq 1$, and the factors are uniquely determined up to order.
+
This you already know from IB Groups, Rings and Modules. You might be born knowing it --- it's such a fundamental fact of nature.
+
+\begin{prop}
+ The finite abelian group $G = C_{n_1} \times \cdots \times C_{n_r}$ has precisely $|G|$ irreducible representations over $\C$.
+\end{prop}
This is not a coincidence. We will later show that the number of irreducible representations is the number of conjugacy classes of the group. In an abelian group, each conjugacy class is a singleton, so the number of classes is $|G|$, hence this result.
+
+\begin{proof}
+ Write
+ \[
+ G = \bra x_1\ket \times \cdots \times \bra x_r\ket,
+ \]
 where $|x_j| = n_j$. By the corollary above, any irreducible representation $\rho$ must be one-dimensional. So we have
+ \[
+ \rho: G \to \C^\times.
+ \]
+ Let $\rho(1, \cdots, x_j, \cdots, 1) = \lambda_j$. Then since $\rho$ is a homomorphism, we must have $\lambda_j^{n_j} = 1$. Therefore $\lambda_j$ is an $n_j$th root of unity.
+
+ Now the values $(\lambda_1, \cdots, \lambda_r)$ determine $\rho$ completely, namely
+ \[
+ \rho(x_1^{j_1}, \cdots, x_r^{j_r}) = \lambda_1^{j_1} \cdots \lambda_r ^{j_r}.
+ \]
+ Also, whenever $\lambda_i$ is an $n_i$th root of unity for each $i$, then the above formula gives a well-defined representation. So there is a one-to-one correspondence $\rho \leftrightarrow (\lambda_1, \cdots, \lambda_r)$, with $\lambda_j^{n_j} = 1$.
+
+ Since for each $j$, there are $n_j$ many $n_j$th roots of unity, it follows that there are $|G| = n_1\cdots n_r$ many choices of the $\lambda_i$. Thus the proposition.
+\end{proof}
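The proof is constructive, and we can sketch the enumeration for, say, $G = C_2 \times C_3$ (all names below are illustrative):

```python
import numpy as np
from itertools import product

# Enumerate the irreducible (1-dimensional) complex representations of
# G = C_2 x C_3: each is determined by a pair (l1, l2) with l1^2 = l2^3 = 1,
# giving |G| = 6 representations in total.
n1, n2 = 2, 3
roots1 = [np.exp(2j * np.pi * k / n1) for k in range(n1)]
roots2 = [np.exp(2j * np.pi * k / n2) for k in range(n2)]

reps = []
for l1, l2 in product(roots1, roots2):
    # rho(x1^a, x2^b) = l1^a * l2^b
    reps.append(lambda a, b, l1=l1, l2=l2: l1**a * l2**b)

assert len(reps) == n1 * n2   # |G| irreducible representations

# Spot-check the homomorphism property rho(g + h) = rho(g) rho(h) for one rep.
rho = reps[3]
for a, b, c, d in product(range(n1), range(n2), range(n1), range(n2)):
    assert np.isclose(rho((a + c) % n1, (b + d) % n2), rho(a, b) * rho(c, d))
```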
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Consider $G = C_4 = \bra x\ket$. The four $1$-dimensional irreducible representations are given by
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ & $1$ & $x$ & $x^2$ & $x^3$\\
+ \midrule
 $\rho_1$ & $1$ & $1$ & $1$ & $1$\\
 $\rho_2$ & $1$ & $i$ & $-1$ & $-i$\\
 $\rho_3$ & $1$ & $-1$ & $1$ & $-1$\\
 $\rho_4$ & $1$ & $-i$ & $-1$ & $i$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
 \item Consider the Klein four group $G = V_4 = \bra x_1 \ket \times \bra x_2\ket$. The irreducible representations are
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ & $1$ & $x_1$ & $x_2$ & $x_1x_2$\\
+ \midrule
 $\rho_1$ & $1$ & $1$ & $1$ & $1$\\
 $\rho_2$ & $1$ & $1$ & $-1$ & $-1$\\
 $\rho_3$ & $1$ & $-1$ & $1$ & $-1$\\
 $\rho_4$ & $1$ & $-1$ & $-1$ & $1$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ \end{enumerate}
+\end{eg}
+These are also known as character tables, and we will spend quite a lot of time computing these for non-abelian groups.
+
Note that there is no ``natural'' one-to-one correspondence between the elements of $G$ and the representations of $G$ (for $G$ finite abelian). If we choose an isomorphism $G \cong C_{n_1} \times \cdots \times C_{n_r}$, then we can identify the two sets, but it depends on the choice of the isomorphism (while the decomposition is unique, we can pick a different generator of, say, $C_{n_1}$ and get a different isomorphism to the same decomposition).
+
+\subsubsection*{Isotypical decompositions}
+Recall that we proved we can decompose any $G$-representation into a sum of irreducible representations. Is this decomposition unique? If it isn't, can we say anything about, say, the size of the irreducible representations, or the number of factors in the decomposition?
+
+We know any diagonalizable endomorphism $\alpha: V \to V$ for a \emph{vector space} $V$ gives us a vector space decomposition
+\[
+ V = \bigoplus_{\lambda} V(\lambda),
+\]
+where
+\[
+ V(\lambda) = \{\mathbf{v} \in V: \alpha(\mathbf{v}) = \lambda \mathbf{v}\}.
+\]
+This is canonical in that it depends on $\alpha$ alone, and nothing else.
+
+If $V$ is moreover a $G$-representation, how does this tie in to the decomposition of $V$ into the irreducible representations?
+
+Let's do an example.
+\begin{eg}
+ Consider $G = D_6 \cong S_3 = \bra r, s: r^3 = s^2 = 1, rs = sr^{-1}\ket$. We have previously seen that each irreducible representation has dimension at most $2$. We spot at least three irreducible representations:
+ \begin{center}
+ \begin{tabular}{cccc}
+ 1 & trivial & $r \mapsto 1$ & $s \mapsto 1$\\
+ S & sign & $r \mapsto 1$ & $s \mapsto -1$\\
+ W & $2$-dimensional &
+ \end{tabular}
+ \end{center}
+ The last representation is the action of $D_6$ on $\R^2$ in the natural way, i.e.\ the rotations and reflections of the plane that corresponds to the symmetries of the triangle. It is helpful to view this as a complex representation in order to make the matrix look nice. The $2$-dimensional representation $(\rho, W)$ is defined by $W = \C^2$, where $r$ and $s$ act on $W$ as
+ \[
+ \rho(r) =
+ \begin{pmatrix}
+ \omega & 0\\
+ 0 & \omega^2
+ \end{pmatrix},\quad
+ \rho(s) =
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix},
+ \]
+ and $\omega = e^{2\pi i/3}$ is a third root of unity. We will now show that these are indeed all the irreducible representations, by decomposing any representation into sum of these.
+
+ So let's decompose an arbitrary representation. Let $(\rho', V)$ be any complex representation of $G$. Since $\rho'(r)^3 = \id$, the map $\rho'(r)$ is diagonalizable, with eigenvalues among $1, \omega, \omega^2$. We diagonalize $\rho'(r)$, and then $V$ splits as a vector space into the eigenspaces
+ \[
+ V = V(1) \oplus V(\omega) \oplus V(\omega^2).
+ \]
+ Since $srs^{-1}= r^{-1}$, we know $\rho'(s)$ preserves $V(1)$ and interchanges $V(\omega)$ and $V(\omega^2)$.
+
+ Now we decompose $V(1)$ into $\rho'(s)$ eigenspaces, with eigenvalues $\pm1$. Since $r$ has to act trivially on these eigenspaces, $V(1)$ splits into sums of copies of the irreducible representations 1 and S.
+
+ For the remaining mess, choose a basis $\mathbf{v}_1, \cdots, \mathbf{v}_n$ of $V(\omega)$, and let $\mathbf{v}_j' = \rho'(s) \mathbf{v}_j$. Then $\rho'(s)$ acts on the two-dimensional space $\bra \mathbf{v}_j, \mathbf{v}_j'\ket$ as $\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}$, while $\rho'(r)$ acts as $\begin{pmatrix} \omega & 0\\0 & \omega^2\end{pmatrix}$. This means $V(\omega) \oplus V(\omega^2)$ decomposes into many copies of W.
+\end{eg}
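The brute-force decomposition above rests on the matrices for $W$ genuinely satisfying the defining relations of $D_6$; this is easy to check numerically. A minimal sketch (plain $2\times 2$ complex matrices, an illustration rather than part of the notes):

```python
import cmath

# Check the defining relations r^3 = s^2 = 1 and s r s^{-1} = r^{-1}
# for the matrices defining the 2-dimensional representation W of D_6.
w = cmath.exp(2j * cmath.pi / 3)
R = [[w, 0], [0, w ** 2]]        # rho(r)
S = [[0, 1], [1, 0]]             # rho(s)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, eps=1e-12):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

I = [[1, 0], [0, 1]]
assert close(mul(mul(R, R), R), I)        # r^3 = 1
assert close(mul(S, S), I)                # s^2 = 1
# s r s^{-1} = s r s (as s^2 = 1) should equal r^{-1} = r^2
assert close(mul(mul(S, R), S), mul(R, R))
print("W is a genuine representation of D_6")
```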
+
+We did this for $D_6$ by brute force. How do we generalize this? We first have the following lemma:
+\begin{lemma}
+ Let $V, V_1, V_2$ be $G$-vector spaces over $\F$. Then
+ \begin{enumerate}
+ \item $\Hom_G(V, V_1 \oplus V_2) \cong \Hom_G(V, V_1) \oplus \Hom_G(V, V_2)$
+ \item $\Hom_G(V_1 \oplus V_2, V) \cong \Hom_G(V_1, V) \oplus \Hom_G(V_2, V)$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}
+ The proof is to write down the obvious homomorphisms and inverses.
+
+ Define the projection map
+ \[
+ \pi_i: V_1 \oplus V_2 \to V_i,
+ \]
+ which is the $G$-linear projection onto $V_i$.
+
+ Then we can define the $G$-homomorphism
+ \begin{align*}
+ \Hom_G (V, V_1 \oplus V_2) &\to \Hom_G(V, V_1) \oplus \Hom_G(V, V_2)\\
+ \varphi &\mapsto (\pi_1 \varphi, \pi_2 \varphi).
+ \end{align*}
+ Then the map $(\psi_1, \psi_2) \mapsto \psi_1 + \psi_2$ is an inverse.
+
+ For the second part, we have the homomorphism $\varphi \mapsto (\varphi|_{V_1}, \varphi|_{V_2})$ with inverse $(\psi_1, \psi_2) \mapsto \psi_1 \pi_1 + \psi_2 \pi_2$.
+\end{proof}
+
+\begin{lemma}
+ Let $\F$ be an algebraically closed field, and $V$ be a representation of $G$. Suppose $V = \bigoplus_{i = 1}^n V_i$ is its decomposition into irreducible components. Then for each irreducible representation $S$ of $G$,
+ \[
+ |\{j: V_j \cong S\}| = \dim \Hom_G(S, V).
+ \]
+\end{lemma}
+This tells us we can count the multiplicity of $S$ in $V$ by looking at the homomorphisms.
+
+\begin{proof}
+ We induct on $n$. If $n = 0$, then $V$ is the zero space and both sides vanish. If $n = 1$, then $V$ itself is irreducible, and by Schur's lemma, $\dim \Hom_G(S, V) = 1$ if $V \cong S$, and $0$ otherwise. Otherwise, for $n > 1$, we have
+ \[
+ V = \left(\bigoplus_{i = 1}^{n - 1} V_i\right) \oplus V_n.
+ \]
+ By the previous lemma, we know
+ \[
+ \dim \Hom_G\left(S, \left(\bigoplus_{i = 1}^{n - 1} V_i\right) \oplus V_n\right) = \dim \Hom_G\left(S, \bigoplus_{i = 1}^{n - 1} V_i\right) + \dim \Hom_G(S, V_n).
+ \]
+ The result then follows by induction.
+\end{proof}
+
+\begin{defi}[Canonical decomposition/decomposition into isotypical components]\index{canonical decomposition}\index{isotypical components}
+ A decomposition of $V$ as $\bigoplus W_j$, where each $W_j$ is (isomorphic to) $n_j$ copies of the irreducible $S_j$ (with $S_j \not \cong S_i$ for $i \not= j$) is the \emph{canonical decomposition} or \emph{decomposition into isotypical components}.
+\end{defi}
+For an algebraically closed field $\F$, we know we must have
+\[
+ n_j = \dim \Hom_G(S_j, V),
+\]
+and hence this decomposition is well-defined.
+
+We've finally finished all the introductory stuff. The course now begins.
+
+\section{Character theory}
+In topology, we want to classify spaces. To do so, we come up with invariants of spaces, like the number of holes. Then we know that a torus is not the same as a sphere. Here, we want to attach invariants to a representation $\rho$ of a finite group $G$ on $V$.
+
+One thing we might want to do is to look at the matrix coefficients of $\rho(g)$. However, these are highly basis-dependent, so they are not a true invariant. We need to do something better than that.
+
+Let $\F = \C$, and $G$ be a finite group. Let $\rho = \rho_V: G \to \GL(V)$ be a representation of $G$. The clue is to look at the characteristic polynomial of the matrix. Its coefficients are functions of the eigenvalues --- on one extreme, the determinant is the product of all eigenvalues; on the other extreme, the trace is the sum of all of them. Surprisingly, it is the trace that works. We don't have to bother ourselves with the other coefficients.
+
+\begin{defi}[Character]\index{character}\index{afford}
+ The \emph{character} of a representation $\rho: G \to \GL(V)$, written $\chi_\rho = \chi_V = \chi$, is defined as
+ \[
+ \chi(g) = \tr \rho(g).
+ \]
+ We say $\rho$ \emph{affords} the character $\chi$.
+
+ Alternatively, the character is $\tr R(g)$, where $R(g)$ is any matrix representing $\rho(g)$ with respect to any basis.
+\end{defi}
+
+\begin{defi}[Degree of character]\index{degree of character}
+ The \emph{degree} of $\chi_V$ is $\dim V$.
+\end{defi}
+
+Thus, $\chi$ is a function $G \to \C$.
+\begin{defi}[Linear character]\index{linear character}
+ We say $\chi$ is \emph{linear} if $\dim V = 1$, in which case $\chi$ is a homomorphism $G \to \C^\times = \GL_1(\C)$.
+\end{defi}
+
+Various properties of representations are inherited by characters.
+\begin{defi}[Irreducible character]\index{irreducible character}
+ A character $\chi$ is \emph{irreducible} if $\rho$ is \emph{irreducible}.
+\end{defi}
+
+\begin{defi}[Faithful character]\index{faithful character}
+ A character $\chi$ is \emph{faithful} if $\rho$ is \emph{faithful}.
+\end{defi}
+
+\begin{defi}[Trivial/principal character]\index{trivial character}\index{principal character}
+ A character $\chi$ is \emph{trivial} or \emph{principal} if $\rho$ is the trivial representation. We write $\chi = 1_G$.
+\end{defi}
+
+$\chi$ is a complete invariant in the sense that it determines $\rho$ up to isomorphism. This is staggering. We reduce the whole matrix to a single number --- the trace --- and yet we have not lost any information at all! We will prove this later, after we have developed some useful properties of characters.
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item $\chi_V(1) = \dim V$.
+ \item $\chi_V$ is a \emph{class function}, namely it is conjugation invariant, i.e.\index{class function}
+ \[
+ \chi_V(hgh^{-1}) = \chi_V(g)
+ \]
+ for all $g, h \in G$. Thus $\chi_V$ is constant on conjugacy classes.
+ \item $\chi_V(g^{-1}) = \overline{\chi_V(g)}$.
+ \item For two representations $V, W$, we have
+ \[
+ \chi_{V \oplus W} = \chi_V + \chi_W.
+ \]
+ \end{enumerate}
+\end{thm}
+These results, despite being rather easy to prove, are very useful, since they save us a lot of work when computing the characters of representations.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Obvious since $\rho_V(1) = \id_V$.
+ \item Let $R_g$ be the matrix representing $g$. Then
+ \[
+ \chi(hgh^{-1}) = \tr(R_h R_g R_h^{-1}) = \tr(R_g) = \chi(g),
+ \]
+ as we know from linear algebra.
+ \item Since $g \in G$ has finite order, we know $\rho(g)$ is represented by a diagonal matrix
+ \[
+ R_g =
+ \begin{pmatrix}
+ \lambda_1\\
+ & \ddots\\
+ && \lambda_n
+ \end{pmatrix},
+ \]
+ and $\chi(g) = \sum \lambda_i$. Now $g^{-1}$ is represented by
+ \[
+ R_{g^{-1}} =
+ \begin{pmatrix}
+ \lambda_1^{-1}\\
+ & \ddots\\
+ && \lambda_n^{-1}
+ \end{pmatrix}.
+ \]
+ Noting that each $\lambda_i$ is a root of unity, hence $|\lambda_i| = 1$, we know
+ \[
+ \chi(g^{-1}) = \sum \lambda_i^{-1} = \sum \overline{\lambda_i} = \overline{\sum \lambda_i} = \overline{\chi(g)}.
+ \]
+ \item Suppose $V = V_1 \oplus V_2$, with $\rho: G \to \GL(V)$ splitting into $\rho_i: G \to \GL(V_i)$. Pick a basis $\mathcal{B}_i$ for $V_i$, and let $\mathcal{B} = \mathcal{B}_1 \cup \mathcal{B}_2$. Then with respect to $\mathcal{B}$, we have
+ \[
+ [\rho(g)]_{\mathcal{B}} =
+ \begin{pmatrix}
+ [\rho_1(g)]_{\mathcal{B}_1} & 0\\
+ 0 & [\rho_2(g)]_{\mathcal{B}_2}
+ \end{pmatrix}.
+ \]
+ So $\chi(g) = \tr(\rho(g)) = \tr(\rho_1(g)) + \tr(\rho_2(g)) = \chi_1(g) + \chi_2(g)$.\qedhere
+ \end{enumerate}
+\end{proof}
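Parts of this theorem can be illustrated numerically on the two-dimensional $D_6$ representation $W$ from before (the matrices are carried over from that example; this is a sanity check, not part of the proof):

```python
import cmath

# Illustrate chi(1) = dim V, conjugation invariance, and
# chi(g^{-1}) = conj(chi(g)) on the D_6 representation W.
w = cmath.exp(2j * cmath.pi / 3)
R = [[w, 0], [0, w ** 2]]        # rho(r)
S = [[0, 1], [1, 0]]             # rho(s)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

I = [[1, 0], [0, 1]]
assert abs(tr(I) - 2) < 1e-12                      # chi(1) = dim V = 2
assert abs(tr(mul(mul(S, R), S)) - tr(R)) < 1e-12  # class function (s = s^{-1})
R_inv = mul(R, R)                                  # r^{-1} = r^2 since r^3 = 1
assert abs(tr(R_inv) - tr(R).conjugate()) < 1e-12  # chi(g^{-1}) = conj(chi(g))
print("character identities verified on W")
```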
+We will see later that if we take characters $\chi_1, \chi_2$ of $G$, then $\chi_1 \chi_2$ is also a character of $G$. This uses the notion of tensor products, which we will do later.
+
+\begin{lemma}
+ Let $\rho: G \to \GL(V)$ be a complex representation affording the character $\chi$. Then
+ \[
+ |\chi(g)| \leq \chi(1),
+ \]
+ with equality if and only if $\rho(g) = \lambda I$ for some $\lambda \in \C$, a root of unity. Moreover, $\chi(g) = \chi(1)$ if and only if $g \in \ker \rho$.
+\end{lemma}
+
+\begin{proof}
+ Fix $g$, and pick a basis of eigenvectors of $\rho(g)$. Then the matrix of $\rho(g)$ is diagonal, say
+ \[
+ \rho(g) =
+ \begin{pmatrix}
+ \lambda_1\\
+ & \ddots\\
+ && \lambda_n
+ \end{pmatrix}.
+ \]
+ Hence
+ \[
+ |\chi(g)| = \left|\sum \lambda_i\right| \leq \sum |\lambda_i| = \sum 1 = \dim V = \chi(1).
+ \]
+ In the triangle inequality, we have equality if and only if all the $\lambda_i$'s are equal, to $\lambda$, say. So $\rho(g) = \lambda I$. Since all the $\lambda_i$'s are roots of unity, so is $\lambda$.
+
+ Moreover, if $\chi(g) = \chi(1)$, then $|\chi(g)| = \chi(1)$, so by the above $\rho(g) = \lambda I$. Taking the trace gives $\chi(g) = \lambda \chi(1)$. So $\lambda = 1$, i.e.\ $\rho(g) = I$. So $g \in \ker \rho$.
+\end{proof}
+
+The following lemma allows us to generate new characters from old.
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item If $\chi$ is a complex (irreducible) character of $G$, then so is $\bar{\chi}$.
+ \item If $\chi$ is a complex (irreducible) character of $G$, then so is $\varepsilon \chi$ for any linear ($1$-dimensional) character $\varepsilon$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $R: G \to \GL_n(\C)$ is a complex matrix representation, then so is $\bar{R}: G \to \GL_n(\C)$, where $g \mapsto \overline{R(g)}$. Then the character of $\bar{R}$ is $\bar{\chi}$.
+ \item Similarly, $R': g \mapsto \varepsilon(g) R(g)$ for $g \in G$ is a representation with character $\varepsilon \chi$.
+ \end{enumerate}
+ It is left as an exercise for the reader to check the details.
+\end{proof}
+
+We now have to justify why we care about characters. We said it was a complete invariant, in that two representations are isomorphic if and only if they have the same character. Before we prove this, we first need some definitions.
+\begin{defi}[Space of class functions]\index{class function}\index{$\mathcal{C}(G)$}
+ Define the \emph{complex space of class functions} of $G$ to be
+ \[
+ \mathcal{C}(G) = \{f: G \to \C: f(hgh^{-1}) = f(g)\text{ for all }h, g \in G\}.
+ \]
+ This is a vector space by $f_1 + f_2 : g \mapsto f_1(g) + f_2(g)$ and $\lambda f: g \mapsto \lambda f(g)$.
+\end{defi}
+
+\begin{defi}[Class number]\index{class number}\index{$k(G)$}
+ The \emph{class number} $k = k(G)$ is the number of conjugacy classes of $G$.
+\end{defi}
+We can list the conjugacy classes as $\mathcal{C}_1, \cdots, \mathcal{C}_k$. Without loss of generality, we let $\mathcal{C}_1 = \{1\}$. We choose $g_1, g_2, \cdots, g_k$ to be representatives of the conjugacy classes. Note also that $\dim \mathcal{C}(G) = k$, since the characteristic functions $\delta_j$ of the classes form a basis, where
+\[
+ \delta_j(g) =
+ \begin{cases}
+ 1 & g \in \mathcal{C}_j\\
+ 0 & g \not\in \mathcal{C}_j.
+ \end{cases}
+\]
+We define a Hermitian inner product on $\mathcal{C}(G)$ by\index{inner product of class functions}
+\begin{align*}
+ \bra f, f'\ket &= \frac{1}{|G|} \sum_{g \in G}\overline{f(g)} f'(g) \\
+ &= \frac{1}{|G|} \sum_{j = 1}^k |\mathcal{C}_j|\overline{f(g_j)} f'(g_j)\\
+ \intertext{By the orbit-stabilizer theorem, we can write this as}
+ &= \sum_{j = 1}^k \frac{1}{|C_G(g_j)|} \overline{f(g_j)} f'(g_j).
+\end{align*}
+In particular, for characters,
+\[
+ \bra \chi, \chi'\ket = \sum_{j = 1}^k \frac{1}{|C_G(g_j)|} \chi(g_j^{-1}) \chi'(g_j).
+\]
+It should be clear (especially using the original formula) that $\bra \chi, \chi'\ket = \overline{\bra \chi', \chi\ket}$. So when restricted to characters, this is a real symmetric form.
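The equality of the group-sum and centralizer-sum formulas can be sanity-checked on the characters of $S_3 \cong D_6$ found earlier (a numerical illustration; these characters are real, so conjugation is omitted):

```python
from fractions import Fraction

# The characters of S_3 on its classes {1}, {s, sr, sr^2}, {r, r^2}.
class_sizes = [1, 3, 2]
cent_orders = [6, 2, 3]                 # |C_G(g_j)| = |G| / |C_j|
G = 6
chi_W = [2, 0, -1]
sign  = [1, -1, 1]

def by_group_sum(f, g):
    # (1/|G|) sum over the group, grouping elements by conjugacy class
    return Fraction(sum(n * fv * gv for n, fv, gv in zip(class_sizes, f, g)), G)

def by_centralizers(f, g):
    # sum over class representatives weighted by 1/|C_G(g_j)|
    return sum(Fraction(fv * gv, c) for c, fv, gv in zip(cent_orders, f, g))

assert by_group_sum(chi_W, sign) == by_centralizers(chi_W, sign) == 0
assert by_group_sum(chi_W, chi_W) == by_centralizers(chi_W, chi_W) == 1
print("both formulas agree on S_3 characters")
```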
+
+The main result is the following theorem:
+\begin{thm}[Completeness of characters]
+ The complex irreducible characters of $G$ form an orthonormal basis of $\mathcal{C}(G)$, namely
+ \begin{enumerate}
+ \item If $\rho: G \to \GL(V)$ and $\rho': G \to \GL(V')$ are two complex irreducible representations affording characters $\chi, \chi'$ respectively, then
+ \[
+ \bra \chi, \chi'\ket =
+ \begin{cases}
+ 1 & \text{if $\rho$ and $\rho'$ are isomorphic representations}\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ This is the (row) orthogonality of characters.
+ \item Each class function of $G$ can be expressed as a linear combination of irreducible characters of $G$.
+ \end{enumerate}
+\end{thm}
+Proof is deferred. We first look at some corollaries.
+\begin{cor}
+ Complex representations of finite groups are characterised by their characters.
+\end{cor}
+
+\begin{proof}
+ Let $\rho: G \to \GL(V)$ afford the character $\chi$. We know we can write $\rho = m_1 \rho_1 \oplus \cdots \oplus m_k \rho_k$, where $\rho_1, \cdots, \rho_k$ are (distinct) irreducible and $m_j \geq 0$ are the multiplicities. Then we have
+ \[
+ \chi = m_1 \chi_1 + \cdots + m_k \chi_k,
+ \]
+ where $\chi_j$ is afforded by $\rho_j$. Then by orthogonality, we know
+ \[
+ m_j = \bra \chi, \chi_j\ket.
+ \]
+ So we can obtain the multiplicity of each $\rho_j$ in $\rho$ just by looking at the inner products of the characters.
+\end{proof}
+This is not true for infinite groups. For example, if $G = \Z$, then the representations
+\[
+ 1 \mapsto
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 1
+ \end{pmatrix},\quad
+ 1 \mapsto
+ \begin{pmatrix}
+ 1 & 1\\
+ 0 & 1
+ \end{pmatrix}
+\]
+are non-isomorphic, but have the same character $2$.
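One can watch this failure numerically: both representations assign every integer the trace $2$, yet the matrices themselves differ. A quick sketch (an illustration, not from the notes; $n \in \Z$ is represented by the $n$th matrix power):

```python
# Two non-isomorphic representations of Z with identical characters.
def mat_pow(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = [[sum(A[i][k] * R[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return R

D = [[1, 0], [0, 1]]   # the identity representation
U = [[1, 1], [0, 1]]   # the unipotent representation
for n in range(10):
    Dn, Un = mat_pow(D, n), mat_pow(U, n)
    assert Dn[0][0] + Dn[1][1] == 2 and Un[0][0] + Un[1][1] == 2  # same trace
    assert Un[0][1] == n   # but the representations differ for n > 0
print("both characters are constantly 2")
```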
+
+\begin{cor}[Irreducibility criterion]
+ If $\rho: G \to \GL(V)$ is a complex representation of $G$ affording the character $\chi$, then $\rho$ is irreducible if and only if $\bra \chi, \chi \ket = 1$.
+\end{cor}
+
+\begin{proof}
+ If $\rho$ is irreducible, then orthogonality says $\bra \chi, \chi \ket = 1$. For the other direction, suppose $\bra \chi, \chi\ket = 1$. We use complete reducibility to get
+ \[
+ \chi = \sum m_j \chi_j,
+ \]
+ with $\chi_j$ irreducible, and $m_j \geq 0$ the multiplicities. Then by orthogonality, we get
+ \[
+ \bra \chi, \chi\ket = \sum m_j^2.
+ \]
+ But $\bra \chi, \chi\ket = 1$. So exactly one of $m_j$ is $1$, while the others are all zero, and $\chi = \chi_j$. So $\chi$ is irreducible.
+\end{proof}
+
+\begin{thm}
+ Let $\rho_1, \cdots, \rho_k$ be the irreducible complex representations of $G$, and let their dimensions be $n_1, \cdots, n_k$. Then
+ \[
+ |G| = \sum n_i^2.
+ \]
+\end{thm}
+Recall that for abelian groups, each irreducible representation has dimension $1$, and there are $|G|$ of them. So this is trivially satisfied.
+
+\begin{proof}
+ Recall that $\rho_{\mathrm{reg}}: G \to \GL(\C G)$, given by $G$ acting on itself by multiplication, is the regular representation of $G$ of dimension $|G|$. Let its character be $\pi_{\mathrm{reg}}$, the \emph{regular character} of $G$.
+
+ First note that we have $\pi_{\mathrm{reg}}(1) = |G|$, and $\pi_{\mathrm{reg}}(h) = 0$ if $h\not= 1$. The first part is obvious, and the second is easy to show, since we have only $0$s along the diagonal.
+
+ Next, we decompose $\pi_{\mathrm{reg}}$ as
+ \[
+ \pi_{\mathrm{reg}} = \sum a_j \chi_j.
+ \]
+ We now want to find $a_j$. We have
+ \[
+ a_j = \bra \pi_{\mathrm{reg}}, \chi_j\ket = \frac{1}{|G|} \sum_{g \in G} \overline{\pi_{\mathrm{reg}}(g)} \chi_j(g) = \frac{1}{|G|} \cdot |G| \chi_j(1) = \chi_j(1).
+ \]
+ Then we get
+ \[
+ |G| = \pi_{\mathrm{reg}}(1) = \sum a_j \chi_j(1) = \sum \chi_j(1)^2 = \sum n_j^2.\qedhere
+ \]
+\end{proof}
+From this proof, we also see that each irreducible representation is a subrepresentation of the regular representation.
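The counting in this proof can be replayed concretely for $G = S_3$, whose character table appears in the next example (a numerical sketch; the characters are real, so conjugation is omitted):

```python
# For G = S_3 (classes of sizes 1, 3, 2): the regular character is
# (6, 0, 0) on class representatives, each multiplicity a_j equals the
# degree chi_j(1), and |G| equals the sum of the squared degrees.
class_sizes = [1, 3, 2]
G = 6
pi_reg = [6, 0, 0]
irreducibles = {"1": [1, 1, 1], "S": [1, -1, 1], "W": [2, 0, -1]}

def inner(f, g):
    return sum(n * fv * gv for n, fv, gv in zip(class_sizes, f, g)) / G

for chi in irreducibles.values():
    assert inner(pi_reg, chi) == chi[0]      # a_j = chi_j(1)
assert sum(chi[0] ** 2 for chi in irreducibles.values()) == G
print("pi_reg = 1 + S + 2 chi_W, and 1^2 + 1^2 + 2^2 = 6")
```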
+
+\begin{cor}
+ The number of irreducible characters of $G$ (up to equivalence) is $k$, the number of conjugacy classes.
+\end{cor}
+
+\begin{proof}
+ The irreducible characters and the characteristic functions of the conjugacy classes are both bases of $\mathcal{C}(G)$.
+\end{proof}
+
+\begin{cor}
+ Two elements $g_1, g_2$ are conjugate if and only if $\chi(g_1) = \chi(g_2)$ for all irreducible characters $\chi$ of $G$.
+\end{cor}
+
+\begin{proof}
+ If $g_1, g_2$ are conjugate, since characters are class functions, we must have $\chi(g_1) = \chi(g_2)$.
+
+ For the other direction, let $\delta$ be the characteristic function of the class of $g_1$. Then since $\delta$ is a class function, we can write
+ \[
+ \delta = \sum m_j \chi_j,
+ \]
+ where $\chi_j$ are the irreducible characters of $G$. Then
+ \[
+ \delta(g_2) = \sum m_j \chi_j(g_2) = \sum m_j \chi_j(g_1) = \delta(g_1) = 1.
+ \]
+ So $g_2$ is in the same conjugacy class as $g_1$.
+\end{proof}
+
+Before we move on to the proof of the completeness of characters, we first make a few more definitions:
+\begin{defi}[Character table]\index{character table}
+ The \emph{character table} of $G$ is the $k \times k$ matrix $X = (\chi_i (g_j))$, where $1 = \chi_1, \chi_2, \cdots, \chi_k$ are the irreducible characters of $G$, and $\mathcal{C}_1 = \{1\}, \mathcal{C}_2, \cdots, \mathcal{C}_k$ are the conjugacy classes with $g_j \in \mathcal{C}_j$.
+\end{defi}
+We seldom think of this as a matrix, but as an actual table, as we will see in the table below.
+
+We will spend the next few lectures coming up with techniques to compute these character tables.
+
+\begin{eg}
+ Consider $G = D_6 \cong S_3 = \bra r, s\mid r^3 = s^2 = 1, srs^{-1} = r^{-1}\ket$. As in all computations, the first thing to do is to compute the conjugacy classes. This is not too hard. We have
+ \[
+ \mathcal{C}_1 = \{1\},\quad \mathcal{C}_2 = \{s, sr, sr^2\},\quad \mathcal{C}_3 = \{r, r^{-1}\}.
+ \]
+ Alternatively, we can view this as $S_3$ and use the fact that two elements are conjugate if and only if they have the same cycle type. We have found the three representations: 1, the trivial representation; S, the sign; and W, the two-dimensional representation. In $W$, each reflection $sr^j$ acts by a matrix with eigenvalues $\pm 1$. So the sum of the eigenvalues is $0$, hence $\chi_W(sr^j) = 0$. It is also not hard to see
+ \[
+ r^m \mapsto
+ \begin{pmatrix}
+ \cos \frac{2m \pi}{3} & - \sin \frac{2m\pi}{3}\\
+ \sin \frac{2m\pi}{3} & \cos \frac{2m \pi}{3}
+ \end{pmatrix}.
+ \]
+ So $\chi_W(r^m) = 2\cos \frac{2m\pi}{3} = -1$ for $m = 1, 2$.
+
+ Fortunately, after developing some theory, we will not need to find all the irreducible representations in order to compute the character table.
+ \begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ & 1 & $\mathcal{C}_2$ & $\mathcal{C}_3$\\
+ \midrule
+ 1 & $1$ & $1$ & $1$\\
+ S & $1$ & $-1$ & $1$\\
+ $\chi_W$ & $2$ & $0$ & $-1$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We see that the sum of the squares of the first column is $1^2 + 1^2 + 2^2 = 6 = |D_6|$, as expected. We can also check that W is genuinely an irreducible representation. The centralizers of elements of $\mathcal{C}_1, \mathcal{C}_2$ and $\mathcal{C}_3$ have orders $6, 2, 3$ respectively. So the inner product is
+ \[
+ \bra \chi_W, \chi_W\ket = \frac{2^2}{6} + \frac{0^2}{2} + \frac{(-1)^2}{3} = 1,
+ \]
+ as expected.
+\end{eg}
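The whole table can be fed through the inner product at once. A small check (the characters are real-valued, so complex conjugation is omitted; an illustration, not part of the notes):

```python
from fractions import Fraction

# Verify that all three rows of the D_6 character table are orthonormal
# under <chi, chi'> = sum_j chi(g_j) chi'(g_j) / |C_G(g_j)|.
cent = [6, 2, 3]                          # centralizer orders for C_1, C_2, C_3
table = [[1, 1, 1],                       # trivial
         [1, -1, 1],                      # sign
         [2, 0, -1]]                      # chi_W

def inner(f, g):
    return sum(Fraction(fv * gv, c) for c, fv, gv in zip(cent, f, g))

for i in range(3):
    for j in range(3):
        assert inner(table[i], table[j]) == (1 if i == j else 0)
print("all rows of the D_6 table are orthonormal")
```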
+So we now need to prove orthogonality.
+\section{Proof of orthogonality}
+We will do the proof in parts.
+
+\begin{thm}[Row orthogonality relations]\index{row orthogonality}
+ If $\rho: G \to \GL(V)$ and $\rho': G \to \GL(V')$ are two complex irreducible representations affording characters $\chi, \chi'$ respectively, then
+ \[
+ \bra \chi, \chi'\ket =
+ \begin{cases}
+ 1 & \text{if $\rho$ and $\rho'$ are isomorphic representations}\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+\end{thm}
+
+\begin{proof}
+ We fix a basis of $V$ and of $V'$. Write $R(g), R'(g)$ for the matrices of $\rho(g)$ and $\rho'(g)$ with respect to these bases respectively. Then by definition, we have
+ \begin{align*}
+ \bra \chi', \chi\ket &= \frac{1}{|G|} \sum_{g \in G} \chi'(g^{-1}) \chi(g)\\
+ &= \frac{1}{|G|} \sum_{g \in G}\;\;\sum_{\substack{\mathllap{1 \leq}i\mathrlap{\leq n'}\\ \mathllap{1\leq}j\mathrlap{\leq n}}}R'(g^{-1})_{ii} R(g)_{jj}.
+ \end{align*}
+ For any linear map $\varphi: V \to V'$, we define a new map by averaging by $\rho'$ and $\rho$.
+ \begin{align*}
+ \tilde{\varphi}: V &\to V'\\
+ \mathbf{v} &\mapsto \frac{1}{|G|} \sum \rho'(g^{-1}) \varphi \rho(g) \mathbf{v}
+ \end{align*}
+ We first check $\tilde{\varphi}$ is a $G$-homomorphism --- if $h \in G$, we need to show
+ \[
+ \rho'(h^{-1}) \tilde{\varphi} \rho(h) (\mathbf{v}) = \tilde{\varphi}(\mathbf{v}).
+ \]
+ We have
+ \begin{align*}
+ \rho'(h^{-1})\tilde{\varphi}\rho(h) (\mathbf{v}) &= \frac{1}{|G|} \sum_{g \in G} \rho'((gh)^{-1})\varphi\rho(gh)\mathbf{v}\\
+ &= \frac{1}{|G|} \sum_{g' \in G} \rho'(g'^{-1}) \varphi \rho(g') \mathbf{v}\\
+ &= \tilde{\varphi}(\mathbf{v}).
+ \end{align*}
+ \begin{enumerate}
+ \item Now we first consider the case where $\rho, \rho'$ are \emph{not} isomorphic. Then by Schur's lemma, we must have $\tilde{\varphi} = 0$ for any linear $\varphi: V \to V'$.
+
+ We now pick a very nice $\varphi$, where everything disappears. We let $\varphi = \varepsilon_{\alpha\beta}$, the operator having matrix $E_{\alpha\beta}$ with entries $0$ everywhere except $1$ in the $(\alpha,\beta)$ position.
+
+ Then $\tilde{\varepsilon}_{\alpha\beta} = 0$. So for each $i, j$, we have
+ \[
+ \frac{1}{|G|} \sum_{g \in G} (R'(g^{-1}) E_{\alpha\beta}R(g))_{ij} = 0.
+ \]
+ Using our choice of $\varepsilon_{\alpha\beta}$, we get
+ \[
+ \frac{1}{|G|} \sum_{g \in G} R'(g^{-1})_{i\alpha} R(g)_{\beta j} = 0
+ \]
+ for all $i, j$. We now pick $\alpha = i$ and $\beta = j$. Then
+ \[
+ \frac{1}{|G|} \sum_{g \in G}R'(g^{-1})_{ii} R(g)_{jj} = 0.
+ \]
+ We can sum this thing over all $i$ and $j$ to get that $\bra \chi', \chi\ket = 0$.
+
+ \item Now suppose $\rho, \rho'$ are isomorphic. So we might as well take $\chi = \chi'$, $V = V'$ and $\rho = \rho'$. If $\varphi: V \to V$ is linear, then $\tilde{\varphi} \in \End_G(V)$.
+
+ We first claim that $\tr \tilde{\varphi} = \tr \varphi$. To see this, we have
+ \[
+ \tr \tilde{\varphi} = \frac{1}{|G|} \sum_{g \in G} \tr(\rho(g^{-1}) \varphi \rho(g)) = \frac{1}{|G|} \sum_{g \in G} \tr \varphi = \tr \varphi,
+ \]
+ using the fact that traces don't see conjugacy (and $\rho(g^{-1}) = \rho(g)^{-1}$ since $\rho$ is a group homomorphism).
+
+ By Schur's lemma, we know $\tilde{\varphi} = \lambda \iota_V$ for some $\lambda \in \C$ (which depends on $\varphi$). So if $n = \dim V$, then
+ \[
+ \lambda = \frac{1}{n} \tr \varphi.
+ \]
+ Let $\varphi = \varepsilon_{\alpha\beta}$. Then $\tr \varphi = \delta_{\alpha\beta}$. Hence
+ \[
+ \tilde{\varepsilon}_{\alpha\beta} = \frac{1}{|G|} \sum_g \rho(g^{-1}) \varepsilon_{\alpha\beta}\rho(g) = \frac{1}{n} \delta_{\alpha\beta} \iota.
+ \]
+ In terms of matrices, we take the $(i, j)$th entry to get
+ \[
+ \frac{1}{|G|} \sum R(g^{-1})_{i\alpha} R(g)_{\beta j} = \frac{1}{n} \delta_{\alpha\beta} \delta_{ij}.
+ \]
+ We now put $\alpha = i$ and $\beta = j$. Then we are left with
+ \[
+ \frac{1}{|G|} \sum_g R(g^{-1})_{ii} R(g)_{jj} = \frac{1}{n} \delta_{ij}.
+ \]
+ Summing over all $i$ and $j$, we get $\bra \chi, \chi \ket = 1$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+After learning about tensor products and duals later on, one can provide a shorter proof of this result as follows:
+\begin{proof}[Alternative proof]
+ Consider two representation spaces $V$ and $W$. Then
+ \[
+ \bra \chi_{W}, \chi_V\ket = \frac{1}{|G|} \sum \overline{\chi_W(g)} \chi_V(g) = \frac{1}{|G|} \sum \chi_{V \otimes W^*}(g).
+ \]
+ We notice that there is a natural isomorphism $V \otimes W^* \cong \Hom(W, V)$, and the action of $g$ on this space is by conjugation. Thus, a $G$-invariant element is just a $G$-homomorphism $W \to V$. Thus, we can decompose $\Hom(W, V) = \Hom_G(W, V) \oplus U$ for some $G$-space $U$ with no non-zero $G$-invariant element. Hence in the decomposition of $\Hom(W, V)$ into irreducibles, we know there are exactly $\dim \Hom_G(W, V)$ copies of the trivial representation. By Schur's lemma, this number is $1$ if $V \cong W$, and $0$ if $V \not\cong W$.
+
+ So it suffices to show that if $\chi$ is a non-trivial irreducible character, then
+ \[
+ \sum_{g \in G} \chi(g) = 0.
+ \]
+ But if $\rho$ affords $\chi$, then any element in the image of $\sum_{g \in G} \rho(g)$ is fixed by $G$. By irreducibility, the image must be trivial. So $\sum_{g \in G} \rho(g) = 0$.
+\end{proof}
+What we have done is show the orthogonality of the rows. There is a similar one for the columns:
+
+\begin{thm}[Column orthogonality relations]\index{column orthogonality}
+ We have
+ \[
+ \sum_{i = 1}^k \overline{\chi_i (g_j)}\chi_i (g_\ell) = \delta_{j\ell} |C_G(g_\ell)|.
+ \]
+\end{thm}
+This is analogous to the fact that if a matrix has orthonormal rows, then it also has orthonormal columns. We get the extra factor of $|C_G(g_\ell)|$ because of the way we count.
+
+This has an easy corollary, which we have already shown previously using the regular representation:
+\begin{cor}
+ \[
+ |G| = \sum_{i = 1}^k \chi_i^2 (1).
+ \]
+\end{cor}
+
+\begin{proof}[Proof of column orthogonality]
+ Consider the character table $X = (\chi_i(g_j))$. We know
+ \[
+ \delta_{ij} = \bra \chi_i, \chi_j\ket = \sum_\ell \frac{1}{|C_G(g_\ell)|}\overline{\chi_i(g_\ell)} \chi_j(g_\ell).
+ \]
+ Then
+ \[
+ \bar{X} D^{-1} X^T = I_{k\times k},
+ \]
+ where
+ \[
+ D =
+ \begin{pmatrix}
+ |C_G(g_1)| & \cdots & 0\\
+ \vdots & \ddots & \vdots\\
+ 0 & \cdots & |C_G(g_k)|
+ \end{pmatrix}.
+ \]
+ Conjugating (noting $D$ is real) gives $X D^{-1} \bar{X}^T = I_{k \times k}$. Since $X$ is square, $D^{-1} \bar{X}^T$ is a two-sided inverse of $X$. So $\bar{X}^T X = D$, which is exactly the theorem.
+\end{proof}
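Column orthogonality can be checked directly on the $D_6 \cong S_3$ table (entries are real, so the conjugate is omitted; a numerical illustration, not part of the proof):

```python
# Check sum_i chi_i(g_j) chi_i(g_l) = delta_{jl} |C_G(g_l)| on the
# D_6 character table.
table = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]
cent = [6, 2, 3]                 # |C_G(g_l)| for the three classes
for j in range(3):
    for l in range(3):
        s = sum(table[i][j] * table[i][l] for i in range(3))
        assert s == (cent[l] if j == l else 0)
print("columns orthogonal with norms |C_G(g_l)|")
```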
+The proof requires that $X$ is square, i.e.\ there are $k$ many irreducible representations. So we need to prove the last part of the completeness of characters.
+
+\begin{thm}
+ Each class function of $G$ can be expressed as a linear combination of irreducible characters of $G$.
+\end{thm}
+
+\begin{proof}
+ We list all the irreducible characters $\chi_1, \cdots, \chi_\ell$ of $G$. Note that we don't know the number of irreducibles is $k$. This is essentially what we have to prove here. We now claim these span $\mathcal{C}(G)$, the space of class functions.
+
+ Now recall that $\mathcal{C}(G)$ has an inner product. So it suffices to show that the orthogonal complement to the span $\bra \chi_1, \cdots, \chi_\ell\ket$ in $\mathcal{C}(G)$ is trivial. To see this, assume $f \in \mathcal{C}(G)$ satisfies
+ \[
+ \bra f, \chi_j\ket = 0
+ \]
+ for all $\chi_j$ irreducible. We let $\rho: G \to \GL(V)$ be an irreducible representation affording $\chi \in \{\chi_1, \cdots, \chi_\ell\}$. Then $\bra f, \chi\ket = 0$.
+
+ Consider the function
+ \[
+ \varphi = \frac{1}{|G|} \sum_g \overline{f}(g) \rho(g): V \to V.
+ \]
+ For any $h \in G$, we can compute
+ \[
+ \rho(h)^{-1} \varphi \rho(h) = \frac{1}{|G|} \sum_g \bar{f}(g) \rho (h^{-1} gh) = \frac{1}{|G|} \sum_g \bar{f}(h^{-1}gh) \rho(h^{-1}gh) = \varphi,
+ \]
+ using the fact that $\bar{f}$ is a class function. So this is a $G$-homomorphism. So as $\rho$ is irreducible, Schur's lemma says it must be of the form $\lambda \iota_V$ for some $\lambda \in \C$.
+
+ Now we take the trace of this thing. So we have
+ \[
+ n\lambda = \tr\left(\frac{1}{|G|} \sum_g \overline{f(g)} \rho(g)\right) = \frac{1}{|G|} \sum_g \overline{f(g)} \chi(g) = \bra f, \chi\ket = 0.
+ \]
+ So $\lambda = 0$, i.e.\ $\sum_g \overline{f(g)} \rho(g) = 0$, the zero endomorphism on $V$. This is valid for any irreducible representation, and hence for every representation, by complete reducibility.
+
+ In particular, take $\rho = \rho_{\mathrm{reg}}$, where $\rho_{\mathrm{reg}}(g): \mathbf{e}_1 \mapsto \mathbf{e}_g$ for each $g \in G$. Hence
+ \[
+ \sum \overline{f(g)} \rho_{\mathrm{reg}}(g): \mathbf{e}_1 \mapsto \sum_g \overline{f(g)} \mathbf{e}_g.
+ \]
+ Since this is zero, it follows that we must have $\sum \overline{f(g)} \mathbf{e}_g = 0$. Since the $\mathbf{e}_g$'s are linearly independent, we must have $\overline{f(g)} = 0$ for all $g \in G$, i.e.\ $f = 0$.
+\end{proof}
+
+\section{Permutation representations}
+\index{permutation representation}
+In this chapter, we are going to study permutation representations in a bit more detail, since we can find some explicit formulas for the character of them. This is particularly useful if we are dealing with the symmetric group.
+
+Let $G$ be a finite group acting on a finite set $X = \{x_1, \cdots, x_n\}$ (sometimes known as a $G$-set). We define $\C X$ to be the complex vector space with basis $\{\mathbf{e}_{x_1}, \cdots, \mathbf{e}_{x_n}\}$. More explicitly,
+\[
+ \C X = \left\{ \sum_j a_j \mathbf{e}_{x_j}: a_j \in \C\right\}.
+\]
+There is a corresponding permutation representation
+\begin{align*}
+ \rho_X: G &\to \GL(\C X)\\
+ g &\mapsto \rho(g)
+\end{align*}
+where $\rho(g): \mathbf{e}_{x_j} \mapsto \mathbf{e}_{g x_j}$, extended linearly. We say $\rho_X$ is the representation corresponding to the action of $G$ on $X$.
+
+This is a generalization of the group algebra --- if we let $G$ act on itself by left multiplication, then this is in fact the group algebra.
+
+The corresponding matrix representations $\rho_X(g)$ with respect to the basis $\{\mathbf{e}_x\}_{x \in X}$ are permutation matrices --- each is $0$ everywhere except for a single $1$ in each row and column. In particular, $(\rho(g))_{ij} = 1$ precisely when $g x_j = x_i$. So the only non-zero diagonal entries in $\rho(g)$ are those $i$ such that $x_i$ is fixed by $g$. Thus we have
+
+\begin{defi}[Permutation character]\index{permutation character}
+ The \emph{permutation character} $\pi_X$ is
+ \[
+ \pi_X(g) = |\fix(g)| = |\{x \in X: gx = x\}|.
+ \]
+\end{defi}
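The definition is easy to compute with. A small sketch for $S_3$ acting on $X = \{0, 1, 2\}$ (a standard example, chosen here for illustration):

```python
from itertools import permutations

# pi_X(g) = |fix(g)|, computed by brute force over all of S_3.
fix_counts = {}
for g in permutations(range(3)):
    fix_counts[g] = sum(1 for x in range(3) if g[x] == x)

assert fix_counts[(0, 1, 2)] == 3            # the identity fixes everything
# each transposition fixes 1 point, each 3-cycle fixes 0
assert sorted(fix_counts.values()) == [0, 0, 1, 1, 1, 3]
print("pi_X takes values 3 (identity), 1 (transpositions), 0 (3-cycles)")
```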
+
+\begin{lemma}
+ $\pi_X$ always contains the trivial character $1_G$ (when decomposed in the basis of irreducible characters). In particular, $\spn\{\mathbf{e}_{x_1}+ \cdots+ \mathbf{e}_{x_n}\}$ is a trivial $G$-subspace of $\C X$, with $G$-invariant complement $\left\{\sum_x a_x \mathbf{e}_x: \sum a_x = 0\right\}$.
+\end{lemma}
+
+\begin{lemma}
+ $\bra \pi_X, 1\ket$, which is the multiplicity of $1$ in $\pi_X$, is the number of orbits of $G$ on $X$.
+\end{lemma}
+
+\begin{proof}
+ We write $X$ as the disjoint union of orbits, $X = X_1 \cup \cdots \cup X_\ell$. Then it is clear that the permutation representation on $X$ is just the sum of the permutation representations on the $X_i$, i.e.
+ \[
+ \pi_X = \pi_{X_1} + \cdots + \pi_{X_\ell},
+ \]
+ where $\pi_{X_j}$ is the permutation character of $G$ on $X_j$. So to prove the lemma, it is enough to consider the case where the action is transitive, i.e.\ there is just one orbit.
+
+ So suppose $G$ acts transitively on $X$. We want to show $\bra \pi_X, 1\ket = 1$. By definition, we have
+ \begin{align*}
+ \bra \pi_X, 1 \ket &= \frac{1}{|G|} \sum_g \pi_X(g) \\
+ &= \frac{1}{|G|} \left|\{(g, x) \in G\times X: gx = x\}\right|\\
+ &= \frac{1}{|G|} \sum_{x \in X} |G_x|,\\
+ \intertext{where $G_x$ is the stabilizer of $x$. By the orbit-stabilizer theorem, we have $|G_x||X| = |G|$. So we can write this as}
+ &= \frac{1}{|G|} \sum_{x \in X} \frac{|G|}{|X|}\\
+ &= \frac{1}{|G|} \cdot |X| \cdot \frac{|G|}{|X|}\\
+ &= 1.
+ \end{align*}
+ So done.
+\end{proof}
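This orbit-counting argument (Burnside's lemma) is easy to check by brute force. Below is a quick sanity check, not part of the notes: permutations are tuples acting on $0$-indexed points, and we use the order-$2$ subgroup of $S_4$ generated by a transposition, which has three orbits on four points.

```python
# G = subgroup of S_4 generated by the transposition (0 1),
# acting on X = {0, 1, 2, 3} with orbits {0,1}, {2}, {3}.
identity = (0, 1, 2, 3)
swap01 = (1, 0, 2, 3)
G = [identity, swap01]

def pi_X(g):
    """Permutation character: the number of fixed points of g."""
    return sum(1 for x in range(4) if g[x] == x)

# <pi_X, 1> = (1/|G|) * sum_g pi_X(g)
inner = sum(pi_X(g) for g in G) / len(G)

# count the orbits of G on X directly
orbits = {frozenset(g[x] for g in G) for x in range(4)}
```

Here `inner` evaluates to $3 = (4 + 2)/2$, matching the three orbits, as the lemma predicts.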
+
+\begin{lemma}
+ Let $G$ act on the sets $X_1, X_2$. Then $G$ acts on $X_1 \times X_2$ by
+ \[
+ g(x_1, x_2) = (g x_1, g x_2).
+ \]
+ Then the character
+ \[
+ \pi_{X_1 \times X_2} = \pi_{X_1}\pi_{X_2},
+ \]
+ and so
+ \[
+ \bra \pi_{X_1}, \pi_{X_2}\ket = \text{number of orbits of }G\text{ on }X_1 \times X_2.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We know $\pi_{X_1 \times X_2}(g)$ is the number of pairs $(x_1, x_2) \in X_1 \times X_2$ fixed by $g$. This is exactly the number of things in $X_1$ fixed by $g$ times the number of things in $X_2$ fixed by $g$. So we have
+ \[
+ \pi_{X_1 \times X_2}(g) = \pi_{X_1}(g) \pi_{X_2}(g).
+ \]
+ Then using the fact that $\pi_{X_1}, \pi_{X_2}$ are real-valued, we get
+ \begin{align*}
+ \bra \pi_{X_1}, \pi_{X_2}\ket &= \frac{1}{|G|} \sum_g \overline{\pi_{X_1}(g)}\pi_{X_2}(g)\\
+ &= \frac{1}{|G|} \sum_g \overline{\pi_{X_1}(g) \pi_{X_2}(g)} 1_G(g)\\
+ &= \bra \pi_{X_1}\pi_{X_2}, 1\ket \\
+ &= \bra \pi_{X_1\times X_2}, 1\ket.
+ \end{align*}
+ So the previous lemma gives the desired result.
+\end{proof}
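Again, both claims can be verified computationally. A quick sketch of our own, with $G = S_3$ acting on $X = \{0, 1, 2\}$, where the two orbits on $X \times X$ are the diagonal and its complement:

```python
from itertools import permutations

n = 3
G = list(permutations(range(n)))   # S_3 acting on X = {0, 1, 2}

def fix(g):
    return sum(1 for x in range(n) if g[x] == x)

def fix_pairs(g):
    # fixed points of g on X x X under g(x, y) = (gx, gy)
    return sum(1 for x in range(n) for y in range(n)
               if g[x] == x and g[y] == y)

# pi_{X x X} = pi_X * pi_X pointwise
product_formula = all(fix_pairs(g) == fix(g) ** 2 for g in G)

# <pi_X, pi_X> = (1/|G|) sum_g pi_X(g)^2
inner = sum(fix(g) ** 2 for g in G) // len(G)

# orbits of G on X x X, counted directly
orbit_reps = {frozenset((g[x], g[y]) for g in G)
              for x in range(n) for y in range(n)}
```

Both `inner` and `len(orbit_reps)` come out as $2$, the number of orbits on $X \times X$.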
+
+Recall that a $2$-transitive action is an action that is transitive on pairs, i.e.\ it can send any pair to any other pair, as long as the elements in the pair are distinct (it can never send $(x, x)$ to $(x', x'')$ if $x' \not= x''$, since $g(x, x) = (gx, gx)$). More precisely,
+\begin{defi}[$2$-transitive]\index{$2$-transitive action}
+ Let $G$ act on $X$, $|X| > 2$. Then $G$ is \emph{$2$-transitive} on $X$ if $G$ has two orbits on $X \times X$, namely $\{(x, x): x \in X\}$ and $\{(x_1, x_2): x_i \in X, x_1 \not= x_2\}$.
+\end{defi}
+Then we have the following lemma:
+
+\begin{lemma}
+ Let $G$ act on $X$, with $|X| > 2$. Then
+ \[
+ \pi_X = 1_G + \chi,
+ \]
+ with $\chi$ irreducible if and only if $G$ is $2$-transitive on $X$.
+\end{lemma}
+
+\begin{proof}
+ We know
+ \[
+ \pi_X = m_1 1_G + m_2 \chi_2 + \cdots + m_\ell \chi_\ell,
+ \]
+ with $1_G, \chi_2, \cdots, \chi_\ell$ distinct irreducible characters and $m_i \in \N$ non-zero. Then by orthogonality,
+ \[
+ \bra \pi_X, \pi_X\ket = \sum_{i = 1}^\ell m_i^2.
+ \]
+ Since $\bra \pi_X, \pi_X\ket$ is the number of orbits of $G$ on $X \times X$, we know $G$ is $2$-transitive on $X$ if and only if $\ell = 2$ and $m_1 = m_2 = 1$.
+\end{proof}
+
+This is useful if we want to understand the symmetric group, since its natural action is always $2$-transitive (modulo the silly case of $S_1$).
+\begin{eg}
+ $S_n$ acting on $X = \{1, \cdots, n\}$ is $2$-transitive for $n \geq 2$. So $\pi_X = 1 + \chi$, with $\chi$ irreducible of degree $n - 1$ (since $\pi_X$ has degree $n$). This works similarly for $A_n$, whenever $n \geq 3$.
+\end{eg}
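We can confirm the example numerically. The sketch below (our own check, taking $n = 4$) verifies that $\chi = \pi_X - 1_G$ satisfies $\bra \chi, \chi\ket = 1$, i.e.\ that it is irreducible:

```python
from itertools import permutations

n = 4
G = list(permutations(range(n)))   # S_n acting on {0, ..., n-1}

def chi(g):
    # chi = pi_X - 1_G, the degree n - 1 character
    return sum(1 for x in range(n) if g[x] == x) - 1

# chi is real-valued, so <chi, chi> = (1/|G|) sum_g chi(g)^2,
# and <chi, chi> = 1 precisely when chi is irreducible
norm = sum(chi(g) ** 2 for g in G) / len(G)
```

Here `norm` is $1$, as expected from $2$-transitivity.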
+This tells us we can find an irreducible character of $S_n$ by finding $\pi_X$, and then subtracting $1$ off.
+
+\begin{eg}
+ Consider $G = S_4$. The conjugacy classes correspond to different cycle types. We already know three characters --- the trivial one, the sign, and $\pi_X - 1_G$.
+ \begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
+ & & 1 & 3 & 8 & 6 & 6\\
+ & & $1$ & $(1\; 2)(3\; 4)$ & $(1\; 2\; 3)$ & $(1\; 2\; 3\; 4)$ & $(1\; 2)$\\
+ \midrule
+ trivial & $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$\\
+ sign & $\chi_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$\\
+ $\pi_X - 1_G$ & $\chi_3$ & $3$ & $-1$ & $0$ & $-1$ & $1$\\
+ & $\chi_4$\\
+ & $\chi_5$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We know the product of a one-dimensional irreducible character and another character is another irreducible character, as shown on the first example sheet. So we get
+ \begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
+ & & 1 & 3 & 8 & 6 & 6\\
+ & & $1$ & $(1\; 2)(3\; 4)$ & $(1\; 2\; 3)$ & $(1\; 2\; 3\; 4)$ & $(1\; 2)$\\
+ \midrule
+ trivial & $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$\\
+ sign & $\chi_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$\\
+ $\pi_X - 1_G$ & $\chi_3$ & $3$ & $-1$ & $0$ & $-1$ & $1$\\
+ $\chi_3 \chi_2$ & $\chi_4$ & $3$ & $-1$ & $0$ & $1$ & $-1$\\
+ & $\chi_5$ &\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ For the last representation we can find the dimension by computing $24 - (1^2 + 1^2 + 3^2 + 3^2) = 2^2$. So it has dimension $2$. To obtain the whole of $\chi_5$, we can use column orthogonality --- for example, we let the entry in the second column be $x$. Then column orthogonality says $1 + 1 - 3 - 3 + 2x = 0$, so $x = 2$. In the end, we find
+ \begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
+ & & 1 & 3 & 8 & 6 & 6\\
+ & & $1$ & $(1\; 2)(3\; 4)$ & $(1\; 2\; 3)$ & $(1\; 2\; 3\; 4)$ & $(1\; 2)$\\
+ \midrule
+ trivial & $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$\\
+ sign & $\chi_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$\\
+ $\pi_X - 1_G$ & $\chi_3$ & $3$ & $-1$ & $0$ & $-1$ & $1$\\
+ $\chi_3 \chi_2$ & $\chi_4$ & $3$ & $-1$ & $0$ & $1$ & $-1$\\
+ & $\chi_5$ & $2$ & $2$ & $-1$ & $0$ & $0$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ Alternatively, we have previously shown that $\chi_{\mathrm{reg}} = \chi_1 + \chi_2 + 3\chi_3 + 3 \chi_4 + 2\chi_5$. By computing $\chi_{\mathrm{reg}}$ directly, we can find $\chi_5$.
+
+ Finally, we can obtain $\chi_5$ by observing $S_4 / V_4 \cong S_3$, and ``lifting'' characters. We will do this in the next chapter. In fact, this $\chi_5$ is the degree-$2$ irreducible representation of $S_3 \cong D_6$ lifted up.
+\end{eg}
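The finished table can be sanity-checked against both row and column orthogonality. A short check of our own, with rows $\chi_1, \ldots, \chi_5$ and columns the five classes with the sizes listed above:

```python
# character table of S_4; columns: 1, (12)(34), (123), (1234), (12)
table = [
    [1,  1,  1,  1,  1],   # chi_1, trivial
    [1,  1,  1, -1, -1],   # chi_2, sign
    [3, -1,  0, -1,  1],   # chi_3 = pi_X - 1
    [3, -1,  0,  1, -1],   # chi_4 = chi_3 * chi_2
    [2,  2, -1,  0,  0],   # chi_5
]
sizes = [1, 3, 8, 6, 6]    # conjugacy class sizes; |G| = 24

def inner(i, j):
    # row orthogonality: <chi_i, chi_j> = delta_ij
    return sum(s * table[i][c] * table[j][c] for c, s in enumerate(sizes)) / 24

rows_ok = all(inner(i, j) == (1 if i == j else 0)
              for i in range(5) for j in range(5))

# column orthogonality: sum_i chi_i(g) chi_i(h) is |C_G(g)| if g, h are
# conjugate, and 0 otherwise
cols_ok = all(sum(table[i][c] * table[i][d] for i in range(5))
              == (24 // sizes[c] if c == d else 0)
              for c in range(5) for d in range(5))
```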
+
+We can also do similar things with the alternating group. The issue of course is that the conjugacy classes of the symmetric group may split when we move to the alternating group.
+
+Suppose $g \in A_n$. Then
+\begin{align*}
+ |\mathcal{C}_{S_n}(g)| &= |S_n: C_{S_n}(g)|\\
+ |\mathcal{C}_{A_n}(g)| &= |A_n: C_{A_n}(g)|.
+\end{align*}
+We know $\mathcal{C}_{S_n}(g) \supseteq \mathcal{C}_{A_n}(g)$, but we need not have equality, since elements needed to conjugate $g$ to $h \in \mathcal{C}_{S_n}(g)$ might not be in $A_n$. For example, consider $\sigma = (1\; 2\; 3) \in A_3$. We have $\mathcal{C}_{A_3}(\sigma) = \{\sigma\}$ and $\mathcal{C}_{S_3} (\sigma) = \{\sigma, \sigma^{-1}\}$.
+
+We know $A_n$ is an index 2 subgroup of $S_n$. So $C_{A_n}(g) \subseteq C_{S_n}(g)$ either has index $2$ or $1$. If the index is $1$, then $|\mathcal{C}_{A_n}| = \frac{1}{2}|\mathcal{C}_{S_n}|$. Otherwise, the sizes are equal.
+
+A useful criterion for determining which case happens is the following:
+
+\begin{lemma}
+ Let $g \in A_n$, $n > 1$. If $g$ commutes with some odd permutation in $S_n$, then $\mathcal{C}_{S_n}(g) = \mathcal{C}_{A_n}(g)$. Otherwise, $\mathcal{C}_{S_n}(g)$ splits into two conjugacy classes in $A_n$ of equal size.
+\end{lemma}
+
+This is quite useful for doing the examples on the example sheet, but we will not go through any more examples here.
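As a computational aside (not from the notes), the criterion is easy to check by machine: the $3$-cycle $(0\;1\;2)$ in $A_4$ commutes with no odd permutation, so its $S_4$-class of size $8$ splits into two $A_4$-classes of size $4$.

```python
from itertools import permutations

n = 4
S_n = list(permutations(range(n)))

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    q = [0] * n
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def sign(p):
    # parity via inversion count
    s = 1
    for i in range(n):
        for j in range(i + 1, n):
            if p[i] > p[j]:
                s = -s
    return s

A_n = [p for p in S_n if sign(p) == 1]
g = (1, 2, 0, 3)           # the 3-cycle (0 1 2)

commutes_with_odd = any(compose(g, h) == compose(h, g)
                        for h in S_n if sign(h) == -1)

def conj_class(group, g):
    return {compose(compose(h, g), inverse(h)) for h in group}

class_S = conj_class(S_n, g)   # all 8 three-cycles
class_A = conj_class(A_n, g)   # only 4 of them
```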
+
+\section{Normal subgroups and lifting}
+The idea of this section is that there is some correspondence between the irreducible characters of $G/N$ (with $N \lhd G$) and those of $G$ itself. In most cases, $G/N$ is a simpler group, and hence we can use this correspondence to find irreducible characters of $G$.
+
+The main result is the following lemma:
+
+\begin{lemma}\index{lifting}
+ Let $N \lhd G$. Let $\tilde{\rho}: G/N \to \GL(V)$ be a representation of $G/N$. Then the composition
+ \[
+ \begin{tikzcd}
+ \rho: G \ar[r, "\text{natural}"] & G/N \ar[r, "\tilde{\rho}"] & \GL(V)
+ \end{tikzcd}
+ \]
+ is a representation of $G$, where $\rho(g) = \tilde{\rho}(gN)$. Moreover,
+ \begin{enumerate}
+ \item $\rho$ is irreducible if and only if $\tilde{\rho}$ is irreducible.
+ \item The corresponding characters satisfy $\chi(g) = \tilde{\chi}(gN)$.
+ \item $\deg \chi = \deg \tilde{\chi}$.
+ \item The lifting operation $\tilde{\chi} \mapsto \chi$ is a bijection
+ \[
+ \{\text{irreducibles of }G/N\} \longleftrightarrow \{\text{irreducibles of $G$ with $N$ in their kernel}\}.
+ \]
+ \end{enumerate}
+ We say $\tilde{\chi}$ \emph{lifts to} $\chi$.
+\end{lemma}
+
+\begin{proof}
+ Since a representation of $G$ is just a homomorphism $G \to \GL(V)$, and the composition of homomorphisms is a homomorphism, it follows immediately that $\rho$ as defined in the lemma is a representation.
+ \begin{enumerate}
+ \item We can compute
+ \begin{align*}
+ \bra \chi, \chi\ket &= \frac{1}{|G|} \sum_{g \in G} \overline{\chi(g)}\chi(g)\\
+ &= \frac{1}{|G|} \sum_{gN \in G/N} \sum_{k \in N} \overline{\chi(gk)}\chi(gk)\\
+ &= \frac{1}{|G|} \sum_{gN \in G/N} \sum_{k \in N} \overline{\tilde{\chi}(gN)}\tilde{\chi}(gN)\\
+ &= \frac{1}{|G|} \sum_{gN \in G/N} |N| \overline{\tilde{\chi}(gN)}\tilde{\chi}(gN)\\
+ &= \frac{1}{|G/N|} \sum_{gN \in G/N} \overline{\tilde{\chi}(gN)}\tilde{\chi}(gN)\\
+ &= \bra \tilde{\chi}, \tilde{\chi}\ket.
+ \end{align*}
+ So $\bra \chi, \chi\ket = 1$ if and only if $\bra \tilde{\chi}, \tilde{\chi}\ket = 1$. So $\rho$ is irreducible if and only if $\tilde{\rho}$ is irreducible.
+ \item
+ We can directly compute
+ \[
+ \chi(g) = \tr \rho(g) = \tr (\tilde{\rho}(gN)) = \tilde{\chi}(gN)
+ \]
+ for all $g \in G$.
+ \item To see that $\chi$ and $\tilde{\chi}$ have the same degree, we just notice that
+ \[
+ \deg \chi = \chi(1) = \tilde{\chi}(N) = \deg \tilde{\chi}.
+ \]
+ Alternatively, to show they have the same dimension, just note that $\rho$ and $\tilde{\rho}$ map to the general linear group of the same vector space.
+ \item To show this is a bijection, suppose $\tilde{\chi}$ is a character of $G/N$ and $\chi$ is its lift to $G$. We need to show the kernel contains $N$. By definition, we know $\tilde{\chi}(N) = \chi(1)$. Also, if $k \in N$, then $\chi(k) = \tilde{\chi}(kN) = \tilde{\chi}(N) = \chi(1)$. So $N \leq \ker \chi$.
+
+ Now let $\chi$ be a character of $G$ with $N \leq \ker \chi$. Suppose $\rho: G \to \GL(V)$ affords $\chi$. Define
+ \begin{align*}
+ \tilde{\rho}: G/N &\to \GL(V)\\
+ gN &\mapsto \rho(g)
+ \end{align*}
+ Of course, we need to check this is well-defined. If $gN = g'N$, then $g^{-1} g' \in N$. So $\rho(g) = \rho(g')$ since $N \leq \ker \rho$. So this is indeed well-defined. It is also easy to see that $\tilde{\rho}$ is a homomorphism, hence a representation of $G/N$.
+
+ Finally, if $\tilde{\chi}$ is a character of $\tilde{\rho}$, then $\tilde{\chi}(gN) = \chi(g)$ for all $g \in G$ by definition. So $\tilde{\chi}$ lifts to $\chi$. It is clear that these two operations are inverses to each other.\qedhere
+ \end{enumerate}
+\end{proof}
+There is one particular normal subgroup lying around that proves to be useful.
+
+\begin{lemma}\index{derived subgroup}\index{commutator subgroup}
+ Given a group $G$, the \emph{derived subgroup} or \emph{commutator subgroup}
+ \[
+ G' = \bra [a, b]: a, b \in G\ket,
+ \]
+ where $[a, b] = aba^{-1}b^{-1}$, is the unique minimal normal subgroup of $G$ such that $G/G'$ is abelian. So if $G/N$ is abelian, then $G' \leq N$.
+
+ Moreover, $G$ has precisely $\ell = |G:G'|$ representations of dimension $1$, all with kernel containing $G'$, and all obtained by lifting from $G/G'$.
+
+ In particular, by Lagrange's theorem, $\ell \mid |G|$.
+\end{lemma}
+
+\begin{proof}
+ Consider $[a, b] = aba^{-1}b^{-1} \in G'$. Then for any $h \in G$, we have
+ \[
+ h(aba^{-1}b^{-1})h^{-1} = \Big((ha)b (ha)^{-1} b^{-1}\Big) \Big(bhb^{-1}h^{-1}\Big) = [ha, b][b, h]\in G'.
+ \]
+ So in general, let $[a_1, b_1][a_2, b_2]\cdots[a_n, b_n] \in G'$. Then
+ \[
+ h [a_1, b_1][a_2, b_2]\cdots[a_n, b_n] h^{-1} = (h [a_1, b_1]h^{-1})(h [a_2, b_2] h^{-1}) \cdots (h[a_n, b_n]h^{-1}),
+ \]
+ which is in $G'$. So $G'$ is a normal subgroup.
+
+ Let $N \lhd G$. Let $g, h \in G$. Then $[g, h] \in N$ if and only if $ghg^{-1}h^{-1} \in N$ if and only if $gh N = hg N$, if and only if $(gN)(hN) = (hN)(gN)$ by normality.
+
+ Since $G'$ is generated by all $[g, h]$, we know $G' \leq N$ if and only if $G/N$ is abelian.
+
+ Since $G/G'$ is abelian, we know it has exactly $\ell$ irreducible characters, $\tilde{\chi}_1, \cdots, \tilde{\chi}_{\ell}$, all of degree $1$. The lifts of these to $G$ also have degree $1$, and by the previous lemma, these are precisely the irreducible characters $\chi_i$ of $G$ such that $G' \leq \ker \chi_i$.
+
+ But any degree $1$ character of $G$ is a homomorphism $\chi: G \to \C^\times$, hence $\chi(ghg^{-1}h^{-1}) = 1$. So for any $1$-dimensional character, $\chi$, we must have $G' \leq \ker \chi$. So the lifts $\chi_1, \cdots, \chi_\ell$ are all $1$-dimensional characters of $G$.
+\end{proof}
+
+\begin{eg}
+ Let $G$ be $S_n$, with $n \geq 3$. We claim that $S_n' = A_n$. Since $\sgn(a) = \sgn(a^{-1})$, we know that $[a, b] \in A_n$ for any $a, b \in S_n$. So $S_n' \leq A_n$ (alternatively, since $S_n / A_n \cong C_2$ is abelian, we must have $S_n' \leq A_n$). We now notice that
+ \[
+ (1\; 2\; 3) = (1\; 3\; 2) (1\; 2) (1\; 3\; 2)^{-1} (1\; 2)^{-1} \in S_n'.
+ \]
+ So by symmetry, all $3$-cycles are in $S_n'$. Since the $3$-cycles generate $A_n$, we know we must have $S_n' = A_n$.
+
+ Since $S_n/S_n' = S_n/A_n \cong C_2$, we get $\ell = 2$. So $S_n$ must have exactly two linear characters, namely the trivial character and the sign.
+\end{eg}
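The claim $S_4' = A_4$ can also be verified by brute force. A sketch of our own, generating the subgroup spanned by all commutators and comparing it with $A_4$:

```python
from itertools import permutations

n = 4
S_n = list(permutations(range(n)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    q = [0] * n
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def sign(p):
    s = 1
    for i in range(n):
        for j in range(i + 1, n):
            if p[i] > p[j]:
                s = -s
    return s

# all commutators [a, b] = a b a^{-1} b^{-1}
derived = {compose(compose(a, b), compose(inverse(a), inverse(b)))
           for a in S_n for b in S_n}

# close under composition to get the subgroup they generate
# (the set of commutators contains the identity and all inverses)
while True:
    new = {compose(x, y) for x in derived for y in derived}
    if new <= derived:
        break
    derived |= new

A_n = {p for p in S_n if sign(p) == 1}
```

The closure `derived` comes out as exactly the $12$ even permutations.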
+
+\begin{lemma}
+ $G$ is not simple if and only if $\chi(g) = \chi(1)$ for some irreducible character $\chi \not= 1_G$ and some $1 \not= g \in G$. Any normal subgroup of $G$ is the intersection of the kernels of some of the irreducible characters of $G$, i.e.\ $N = \bigcap \ker \chi_i$.
+\end{lemma}
+In other words, $G$ is simple if and only if all non-trivial irreducible characters have trivial kernel.
+
+\begin{proof}
+ Suppose $\chi(g) = \chi(1)$ for some non-trivial irreducible character $\chi$, and $\chi$ is afforded by $\rho$. Then $g \in \ker \rho$. So if $g \not= 1$, then $1 \not= \ker \rho \lhd G$, and $\ker \rho \not= G$. So $G$ cannot be simple.
+
+ If $1 \not= N \lhd G$ is a non-trivial proper normal subgroup, take an irreducible character $\tilde{\chi}$ of $G/N$ with $\tilde{\chi} \not= 1_{G/N}$. Lift this to get an irreducible character $\chi$, afforded by the representation $\rho$ of $G$. Then $N \leq \ker \rho \lhd G$. So $\chi(g) = \chi(1)$ for $g \in N$.
+
+ Finally, let $1 \not= N \lhd G$. We claim that $N$ is the intersection of the kernels of the lifts $\chi_1, \cdots, \chi_\ell$ of all the irreducibles of $G/N$. Clearly, we have $N \leq \bigcap_i \ker \chi_i$. If $g \in G \setminus N$, then $gN \not= N$. So $\tilde{\chi}(gN) \not= \tilde{\chi}(N)$ for some irreducible $\tilde{\chi}$ of $G/N$ --- if every irreducible character of $G/N$ took the same value at $gN$ as at $N$, then column orthogonality would put $gN$ in the same conjugacy class as the identity, forcing $gN = N$. Lifting $\tilde{\chi}$ to $\chi$, we have $\chi(g) \not= \chi(1)$. So $g$ is not in the intersection of the kernels.
+\end{proof}
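For $S_4$ we can read the kernels off the character table computed earlier: $\ker \chi_2 = A_4$ and $\ker \chi_5 = V_4$, recovering the proper non-trivial normal subgroups of $S_4$. A quick check of our own, keying the class of an element by its cycle type:

```python
from itertools import permutations

n = 4
G = list(permutations(range(n)))

def cycle_type(g):
    seen, lens = set(), []
    for x in range(n):
        if x not in seen:
            y, length = x, 0
            while y not in seen:
                seen.add(y)
                y = g[y]
                length += 1
            lens.append(length)
    return tuple(sorted(lens, reverse=True))

# values of chi_2 (sign) and chi_5 on the five classes of S_4
chi2 = {(1, 1, 1, 1): 1, (2, 2): 1, (3, 1): 1, (4,): -1, (2, 1, 1): -1}
chi5 = {(1, 1, 1, 1): 2, (2, 2): 2, (3, 1): -1, (4,): 0, (2, 1, 1): 0}

# kernel of a character: elements where it attains chi(1)
ker2 = {g for g in G if chi2[cycle_type(g)] == chi2[(1, 1, 1, 1)]}
ker5 = {g for g in G if chi5[cycle_type(g)] == chi5[(1, 1, 1, 1)]}
```

Here `ker2` has $12$ elements ($A_4$) and `ker5` has $4$ ($V_4$), with $V_4 \leq A_4$.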
+
+\section{Dual spaces and tensor products of representations}
+\sectionmark{Dual spaces and tensor products}
+In this chapter, we will consider several linear algebra constructions, and see how we can use them to construct new representations and characters.
+
+\subsection{Dual spaces}
+Our objective of this section is to try to come up with ``dual representations''. Given a representation $\rho: G \to \GL(V)$, we would like to turn $V^* = \Hom(V, \F)$ into a representation. So for each $g$, we want to produce a $\rho^*(g): V^* \to V^*$. So given $\varphi \in V^*$, we shall get a $\rho^*(g) \varphi: V \to \F$. Given $\mathbf{v} \in V$, a natural guess for the value of $(\rho^*(g)\varphi)(\mathbf{v})$ might be
+\[
+ (\rho^*(g) \varphi)(\mathbf{v}) = \varphi(\rho(g)\mathbf{v}),
+\]
+since this is how we can patch together $\varphi, \rho, g$ and $\mathbf{v}$. However, if we do this, we will not get $\rho^*(g) \rho^*(h) = \rho^*(gh)$. Instead, we will obtain $\rho^*(g) \rho^*(h) = \rho^*(hg)$, which is the wrong way round. The right definition to use is actually the following:
+
+\begin{lemma}\index{dual representation}
+ Let $\rho: G \to \GL(V)$ be a representation over $\F$, and let $V^* = \Hom_\F (V, \F)$ be the dual space of $V$. Then $V^*$ is a $G$-space under
+ \[
+ (\rho^*(g) \varphi)(\mathbf{v}) = \varphi(\rho(g^{-1})\mathbf{v}).
+ \]
+ This is the \emph{dual representation} to $\rho$. Its character is $\chi_{\rho^*}(g) = \chi_\rho(g^{-1})$.
+\end{lemma}
+
+\begin{proof}
+ We have to check $\rho^*$ is a homomorphism. We check
+ \begin{align*}
+ \rho^*(g_1)(\rho^*(g_2)\varphi)(\mathbf{v}) &= (\rho^*(g_2)\varphi)(\rho(g_1^{-1})(\mathbf{v}))\\
+ &= \varphi(\rho(g_2^{-1})\rho(g_1^{-1}) \mathbf{v})\\
+ &= \varphi(\rho((g_1g_2)^{-1})(\mathbf{v}))\\
+ &= (\rho^*(g_1g_2) \varphi)(\mathbf{v}).
+ \end{align*}
+ To compute the character, fix a $g \in G$, and let $\mathbf{e}_1, \cdots, \mathbf{e}_n$ be a basis of eigenvectors of $V$ of $\rho(g)$, say
+ \[
+ \rho(g) \mathbf{e}_j = \lambda_j \mathbf{e}_j.
+ \]
+ Let $\varepsilon_1, \cdots, \varepsilon_n$ be the basis of $V^*$ dual to $\mathbf{e}_1, \cdots, \mathbf{e}_n$. Then
+ \[
+ (\rho^*(g) \varepsilon_j)(\mathbf{e}_i) = \varepsilon_j(\rho(g^{-1}) \mathbf{e}_i) = \varepsilon_j (\lambda_i^{-1} \mathbf{e}_i) = \lambda_i^{-1} \delta_{ij} = \lambda_j^{-1} \delta_{ij} = \lambda_j^{-1} \varepsilon_j(\mathbf{e}_i).
+ \]
+ Thus we get
+ \[
+ \rho^*(g) \varepsilon_j = \lambda_j^{-1} \varepsilon_j.
+ \]
+ So
+ \[
+ \chi_{\rho^*}(g) = \sum \lambda_j^{-1} = \chi_\rho(g^{-1}).\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Self-dual representation]
+ A representation $\rho: G \to \GL(V)$ is \emph{self-dual} if there is an isomorphism of $G$-spaces $V\cong V^*$.
+\end{defi}
+Note that as (finite-dimensional) vector spaces, we always have $V \cong V^*$. However, what we require here is an isomorphism that preserves the $G$-action, which does not necessarily exist.
+
+Over $\F = \C$, this holds if and only if
+\[
+ \chi_\rho(g) = \chi_{\rho^{*}} (g) = \chi_\rho(g^{-1}) = \overline{\chi_\rho(g)}.
+\]
+So a representation is self-dual if and only if its character is real-valued.
+
+\begin{eg}
+ All irreducible representations of $S_n$ are self-dual, since every element is conjugate to its own inverse. This is not always true for $A_n$.
+\end{eg}
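Both halves of this example can be checked directly. In the sketch below (our own, for $n = 4$), every $g \in S_4$ is conjugate to $g^{-1}$ in $S_4$, but the $3$-cycle $(0\;1\;2)$ is not conjugate to its inverse inside $A_4$:

```python
from itertools import permutations

n = 4
S_n = list(permutations(range(n)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    q = [0] * n
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def sign(p):
    s = 1
    for i in range(n):
        for j in range(i + 1, n):
            if p[i] > p[j]:
                s = -s
    return s

def conjugate_in(group, g, h):
    # is g conjugate to h by some element of `group`?
    return any(compose(compose(k, g), inverse(k)) == h for k in group)

# every element of S_4 is conjugate to its own inverse ...
all_real_S = all(conjugate_in(S_n, g, inverse(g)) for g in S_n)

# ... but in A_4, the 3-cycle (0 1 2) is not conjugate to its inverse
A_n = [p for p in S_n if sign(p) == 1]
g = (1, 2, 0, 3)
real_in_A = conjugate_in(A_n, g, inverse(g))
```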
+
+\subsection{Tensor products}
+The next idea we will tackle is the concept of \emph{tensor products}. We first introduce it as a linear algebra concept, and then later see how representations fit into this framework.
+
+There are many ways we can define the tensor product. The definition we will take here is a rather hands-on construction of the space, which involves picking a basis. We will later describe some other ways to define the tensor product.
+
+\begin{defi}[Tensor product]\index{tensor product}
+ Let $V, W$ be vector spaces over $\F$. Suppose $\dim V = m$ and $\dim W = n$. We fix a basis $\mathbf{v}_1, \cdots, \mathbf{v}_m$ and $\mathbf{w}_1, \cdots, \mathbf{w}_n$ of $V$ and $W$ respectively.
+
+ The \emph{tensor product space} $V \otimes W = V\otimes_\F W$ is an $nm$-dimensional vector space (over $\F$) with basis given by formal symbols
+ \[
+ \{\mathbf{v}_i \otimes \mathbf{w}_j: 1\leq i \leq m, 1 \leq j \leq n\}.
+ \]
+ Thus
+ \[
+ V\otimes W = \left\{\sum \lambda_{ij} \mathbf{v}_i \otimes \mathbf{w}_j: \lambda_{ij} \in \F\right\},
+ \]
+ with the ``obvious'' addition and scalar multiplication.
+
+ If
+ \[
+ \mathbf{v} = \sum \alpha_i \mathbf{v}_i \in V, \quad \mathbf{w} = \sum \beta_j \mathbf{w}_j,
+ \]
+ we define
+ \[
+ \mathbf{v} \otimes \mathbf{w} = \sum_{i, j} \alpha_i\beta_j (\mathbf{v}_i \otimes \mathbf{w}_j).
+ \]
+\end{defi}
+Note that not all elements of $V\otimes W$ are of this form; some are genuine linear combinations. For example, $\mathbf{v}_1 \otimes \mathbf{w}_1 + \mathbf{v}_2 \otimes \mathbf{w}_2$ cannot be written as a tensor product of an element in $V$ and another in $W$.
+
+We can imagine our formula of the tensor of two elements as writing
+\[
+ \left(\sum \alpha_i \mathbf{v}_i\right) \otimes \left(\sum \beta_j \mathbf{w}_j\right),
+\]
+and then expand this by letting $\otimes$ distribute over $+$.
+
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item For $\mathbf{v} \in V$, $\mathbf{w} \in W$ and $\lambda \in \F$, we have
+ \[
+ (\lambda \mathbf{v}) \otimes \mathbf{w} = \lambda (\mathbf{v} \otimes \mathbf{w}) = \mathbf{v} \otimes (\lambda \mathbf{w}).
+ \]
+ \item If $\mathbf{x}, \mathbf{x}_1, \mathbf{x}_2 \in V$ and $\mathbf{y}, \mathbf{y}_1, \mathbf{y}_2 \in W$, then
+ \begin{align*}
+ (\mathbf{x}_1 + \mathbf{x}_2) \otimes \mathbf{y} &= (\mathbf{x}_1 \otimes \mathbf{y}) + (\mathbf{x}_2 \otimes \mathbf{y})\\
+ \mathbf{x}\otimes (\mathbf{y}_1 + \mathbf{y}_2) &= (\mathbf{x}\otimes \mathbf{y}_1) + (\mathbf{x} \otimes \mathbf{y}_2).
+ \end{align*}
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $\mathbf{v} = \sum \alpha_i \mathbf{v}_i$ and $\mathbf{w} = \sum \beta_j \mathbf{w}_j$. Then
+ \begin{align*}
+ (\lambda \mathbf{v}) \otimes \mathbf{w} &= \sum_{ij} (\lambda \alpha_i) \beta_j \mathbf{v}_i \otimes \mathbf{w}_j \\
+ \lambda (\mathbf{v}\otimes \mathbf{w}) &= \lambda \sum_{ij} \alpha_i \beta_j \mathbf{v}_i \otimes \mathbf{w}_j\\
+ \mathbf{v}\otimes (\lambda \mathbf{w}) &= \sum \alpha_i (\lambda \beta_j) \mathbf{v}_i \otimes \mathbf{w}_j,
+ \end{align*}
+ and these three things are obviously all equal.
+ \item Similar nonsense.\qedhere
+ \end{enumerate}
+\end{proof}
+We can also define the tensor product using these properties. We consider the space of formal linear combinations of symbols $\mathbf{v} \otimes \mathbf{w}$ for \emph{all} $\mathbf{v} \in V, \mathbf{w} \in W$, and then quotient out by the relations we had above. It can be shown that this produces the same space as the one constructed above.
+
+Alternatively, we can define the tensor product using its universal property, which says for any $U$, a bilinear map from $V \times W$ to $U$ is ``naturally'' equivalent to a linear map $V \otimes W \to U$. This intuitively makes sense, since the distributivity and linearity properties of $\otimes$ we showed above are exactly the properties required for a bilinear map if we replace the $\otimes$ with a ``$,$''.
+
+It turns out this uniquely defines $V \otimes W$, as long as we provide a sufficiently precise definition of ``naturally''. We can do it concretely as follows --- in our explicit construction, we have a canonical bilinear map given by
+\begin{align*}
+ \phi: V\times W &\to V \otimes W\\
+ (\mathbf{v}, \mathbf{w}) &\mapsto \mathbf{v}\otimes \mathbf{w}.
+\end{align*}
+Given a linear map $V \otimes W \to U$, we can compose it with $\phi: V \times W \to V \otimes W$ in order to get a bilinear map $V \times W \to U$. Then universality says \emph{every} bilinear map $V \times W \to U$ arises this way, uniquely.
+
+In general, we say an equivalence between bilinear maps $V \times W \to U$ and linear maps $V \otimes W \to U$ is ``natural'' if it is mediated by such a universal map. Then we say a vector space $X$ is the tensor product of $V$ and $W$ if there is a natural equivalence between bilinear maps $V \times W \to U$ and linear maps $X \to U$.
+
+We will stick with our definition for concreteness, but we prove basis-independence here so that we feel happier:
+\begin{lemma}
+ Let $\{\mathbf{e}_1, \cdots, \mathbf{e}_m\}$ be any other basis of $V$, and $\{\mathbf{f}_1, \cdots, \mathbf{f}_n\}$ be another basis of $W$. Then
+ \[
+ \{\mathbf{e}_i \otimes \mathbf{f}_j: 1 \leq i \leq m, 1 \leq j \leq n\}
+ \]
+ is a basis of $V\otimes W$.
+\end{lemma}
+
+\begin{proof}
+ Writing
+ \[
+ \mathbf{v}_k = \sum \alpha_{ik} \mathbf{e}_i,\quad \mathbf{w}_\ell = \sum \beta_{j\ell} \mathbf{f}_j,
+ \]
+ we have
+ \[
+ \mathbf{v}_k \otimes \mathbf{w}_\ell = \sum \alpha_{ik} \beta_{j\ell} \mathbf{e}_i \otimes \mathbf{f}_j.
+ \]
+ Therefore $\{\mathbf{e}_i \otimes \mathbf{f}_j\}$ spans $V \otimes W$. Moreover, there are $nm$ of these. Therefore they form a basis of $V\otimes W$.
+\end{proof}
+
+That's enough of linear algebra. We shall start doing some representation theory.
+\begin{prop}
+ Let $\rho: G \to \GL(V)$ and $\rho': G \to \GL(V')$. We define
+ \[
+ \rho \otimes \rho': G \to \GL(V\otimes V')
+ \]
+ by
+ \[
+ (\rho \otimes \rho')(g) : \sum \lambda_{ij} \mathbf{v}_i \otimes \mathbf{w}_j \mapsto \sum \lambda_{ij} (\rho(g)\mathbf{v}_i)\otimes (\rho'(g) \mathbf{w}_j).
+ \]
+ Then $\rho \otimes \rho'$ is a representation of $G$, with character
+ \[
+ \chi_{\rho\otimes \rho'}(g) = \chi_\rho(g) \chi_{\rho'}(g)
+ \]
+ for all $g \in G$.
+\end{prop}
+As promised, the product of two characters of $G$ is also a character of $G$.
+
+Just before we prove this, recall we showed in sheet 1 that if $\rho$ is irreducible and $\rho'$ is irreducible of degree $1$, then $\rho' \rho = \rho \otimes \rho'$ is irreducible. However, if $\rho'$ is not of degree $1$, then this is almost always false (or else we can produce arbitrarily many irreducible representations).
+
+\begin{proof}
+ It is clear that $(\rho \otimes \rho')(g) \in \GL(V \otimes V')$ for all $g \in G$. So $\rho \otimes \rho'$ is a homomorphism $G \to \GL(V \otimes V')$.
+
+ To check the character is indeed as stated, let $g \in G$. Let $\mathbf{v}_1, \cdots, \mathbf{v}_m$ be a basis of $V$ of eigenvectors of $\rho(g)$, and let $\mathbf{w}_1, \cdots, \mathbf{w}_n$ be a basis of $V'$ of eigenvectors of $\rho'(g)$, say
+ \[
+ \rho(g) \mathbf{v}_i = \lambda_i \mathbf{v}_i,\quad \rho'(g) \mathbf{w}_j = \mu_j \mathbf{w}_j.
+ \]
+ Then
+ \begin{align*}
+ (\rho\otimes \rho')(g)(\mathbf{v}_i \otimes \mathbf{w}_j) &= \rho(g) \mathbf{v}_i \otimes \rho'(g) \mathbf{w}_j\\
+ &= \lambda_i \mathbf{v}_i \otimes \mu_j \mathbf{w}_j\\
+ &= (\lambda_i \mu_j) (\mathbf{v}_i \otimes \mathbf{w}_j).
+ \end{align*}
+ So
+ \[
+ \chi_{\rho\otimes \rho'}(g) = \sum_{i, j} \lambda_i \mu_j = \left(\sum \lambda_i\right)\left(\sum \mu_j\right) = \chi_\rho(g) \chi_{\rho'}(g).\qedhere
+ \]
+\end{proof}
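In matrix terms, $(\rho \otimes \rho')(g)$ is the Kronecker product of the matrices of $\rho(g)$ and $\rho'(g)$, and the proposition says its trace is the product of the two traces. A small check of our own, using permutation matrices of $S_3$:

```python
from itertools import permutations

def perm_matrix(g):
    # matrix of rho(g): e_j -> e_{g(j)}
    n = len(g)
    return [[1 if i == g[j] else 0 for j in range(n)] for i in range(n)]

def kron(A, B):
    # Kronecker product: the matrix of the tensor product of two maps
    m, p = len(A), len(B)
    return [[A[i // p][j // p] * B[i % p][j % p]
             for j in range(m * p)] for i in range(m * p)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

G = list(permutations(range(3)))   # S_3

tensor_trace_ok = all(
    trace(kron(perm_matrix(g), perm_matrix(g)))
    == trace(perm_matrix(g)) ** 2
    for g in G)
```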
+\subsection{Powers of characters}
+We work over $\C$. Take $V = V'$ to be $G$-spaces, and define
+\begin{notation}\index{$V^{\otimes 2}$}
+ \[
+ V^{\otimes 2} = V\otimes V.
+ \]
+\end{notation}
+We define $\tau: V^{\otimes 2} \to V^{\otimes 2}$ by
+\begin{align*}
+ \tau: \sum \lambda_{ij} \mathbf{v}_i \otimes \mathbf{v}_j \mapsto \sum \lambda_{ij} \mathbf{v}_j \otimes \mathbf{v}_i.
+\end{align*}
+This is a linear endomorphism of $V^{\otimes 2}$ such that $\tau^2 = 1$. So the eigenvalues are $\pm 1$.
+
+\begin{defi}[Symmetric and exterior square]\index{symmetric square}\index{exterior square}\index{$S^2 V$}\index{$\Lambda^2 V$}
+ We define the \emph{symmetric square} and \emph{exterior square} of $V$ to be, respectively,
+ \begin{align*}
+ S^2 V &= \{\mathbf{x} \in V^{\otimes 2}: \tau(\mathbf{x}) = \mathbf{x}\}\\
+ \Lambda^2 V &= \{\mathbf{x} \in V^{\otimes 2}: \tau(\mathbf{x}) = -\mathbf{x}\}.
+ \end{align*}
+ The exterior square is also known as the \emph{anti-symmetric square} and \emph{wedge power}.
+\end{defi}
+
+\begin{lemma}
+ For any $G$-space $V$, $S^2 V$ and $\Lambda^2 V$ are $G$-subspaces of $V^{\otimes 2}$, and
+ \[
+ V^{\otimes 2} = S^2 V \oplus \Lambda^2 V.
+ \]
+ The space $S^2 V$ has basis
+ \[
+ \{\mathbf{v}_i \mathbf{v}_j = \mathbf{v}_i \otimes \mathbf{v}_j + \mathbf{v}_j \otimes \mathbf{v}_i: 1 \leq i \leq j \leq n\},
+ \]
+ while $\Lambda^2 V$ has basis
+ \[
+ \{\mathbf{v}_i \wedge \mathbf{v}_j = \mathbf{v}_i \otimes \mathbf{v}_j - \mathbf{v}_j \otimes \mathbf{v}_i: 1 \leq i < j \leq n\}.
+ \]
+ Note that we have a strict inequality for $i < j$, since $\mathbf{v}_i \otimes \mathbf{v}_j - \mathbf{v}_j \otimes \mathbf{v}_i = 0$ if $i = j$.
+ Hence
+ \[
+ \dim S^2 V = \frac{1}{2}n(n + 1),\quad \dim \Lambda^2 V = \frac{1}{2}n(n - 1).
+ \]
+\end{lemma}
+Sometimes, a factor of $\frac{1}{2}$ appears in front of the definition of the basis elements $\mathbf{v}_i \mathbf{v}_j$ and $\mathbf{v}_i \wedge \mathbf{v}_j$, i.e.
+\[
+ \mathbf{v}_i \mathbf{v}_j = \frac{1}{2}(\mathbf{v}_i \otimes \mathbf{v}_j + \mathbf{v}_j \otimes \mathbf{v}_i),\quad \mathbf{v}_i \wedge \mathbf{v}_j = \frac{1}{2}(\mathbf{v}_i \otimes \mathbf{v}_j - \mathbf{v}_j \otimes \mathbf{v}_i).
+\]
+However, this is not important.
+
+\begin{proof}
+ This is elementary linear algebra. For the decomposition of $V^{\otimes 2}$, given $\mathbf{x} \in V^{\otimes 2}$, we can write it as
+ \[
+ \mathbf{x} = \underbrace{\frac{1}{2}(\mathbf{x} + \tau(\mathbf{x}))}_{\in S^2 V} + \underbrace{\frac{1}{2}(\mathbf{x} - \tau(\mathbf{x}))}_{\in \Lambda^2 V}.\qedhere
+ \]
+\end{proof}
+
+How is this useful for character calculations?
+\begin{lemma}\index{$\chi_{\Lambda^2}$}\index{$\chi_{S^2}$}\index{$S^2 \chi$}\index{$\Lambda^2 \chi$}
+ Let $\rho: G \to \GL(V)$ be a representation affording the character $\chi$. Then $\chi^2 = \chi_S + \chi_{\Lambda}$, where $\chi_S = S^2 \chi$ is the character of the subrepresentation of $G$ on $S^2 V$, and $\chi_{\Lambda} = \Lambda^2 \chi$ is the character of the subrepresentation on $\Lambda^2 V$. Moreover, for $g \in G$,
+ \[
+ \chi_S(g) = \frac{1}{2} (\chi^2 (g) + \chi(g^2)),\quad \chi_\Lambda(g) = \frac{1}{2}(\chi^2(g) - \chi(g^2)).
+ \]
+\end{lemma}
+The real content of the lemma is, of course, the formula for $\chi_S$ and $\chi_\Lambda$.
+
+\begin{proof}
+ The fact that $\chi^2 = \chi_S + \chi_\Lambda$ is immediate from the decomposition $V^{\otimes 2} = S^2 V \oplus \Lambda^2 V$ of $G$-spaces.
+
+ We now compute the characters $\chi_S$ and $\chi_\Lambda$. For $g \in G$, we let $\mathbf{v}_1, \cdots, \mathbf{v}_n$ be a basis of $V$ of eigenvectors of $\rho(g)$, say
+ \[
+ \rho(g) \mathbf{v}_i = \lambda_i \mathbf{v}_i.
+ \]
+ We'll be lazy and just write $g \mathbf{v}_i$ instead of $\rho(g) \mathbf{v}_i$. Then, acting on $\Lambda^2 V$, we get
+ \[
+ g(\mathbf{v}_i \wedge \mathbf{v}_j) = \lambda_i \lambda_j \mathbf{v}_i \wedge \mathbf{v}_j.
+ \]
+ Thus
+ \[
+ \chi_\Lambda(g) = \sum_{1 \leq i < j \leq n} \lambda_i \lambda_j.
+ \]
+ Since the answer involves the square of the character, let's write that down:
+ \begin{align*}
+ (\chi(g))^2 &= \left(\sum \lambda_i\right)^2 \\
+ &= \sum \lambda_i^2 + 2 \sum_{i < j} \lambda_i \lambda_j \\
+ &= \chi(g^2) + 2 \sum_{i < j} \lambda_i \lambda_j\\
+ &= \chi(g^2) + 2 \chi_\Lambda(g).
+ \end{align*}
+ Then we can solve to obtain
+ \[
+ \chi_\Lambda(g) = \frac{1}{2}(\chi^2(g) - \chi(g^2)).
+ \]
+ Then we can get
+ \[
+ \chi_S(g) = \chi^2(g) - \chi_\Lambda(g) = \frac{1}{2}(\chi^2(g) + \chi(g^2)).\qedhere
+ \]
+\end{proof}
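These formulas are easy to test numerically. The sketch below (our own check) takes $\chi = \chi_3 = \pi_X - 1_G$ for $S_4$ and verifies that $\chi_S + \chi_\Lambda = \chi^2$, that $\Lambda^2 \chi_3$ is irreducible (norm $1$), and that $S^2 \chi_3$ has norm $3$, consistent with the decomposition $S^2 \chi_3 = 1 + \chi_3 + \chi_5$ in the example that follows:

```python
from itertools import permutations

n = 4
G = list(permutations(range(n)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(n))

def chi3(g):
    # the degree-3 character pi_X - 1_G of S_4
    return sum(1 for x in range(n) if g[x] == x) - 1

def chi_S(g):
    return (chi3(g) ** 2 + chi3(compose(g, g))) / 2

def chi_L(g):
    return (chi3(g) ** 2 - chi3(compose(g, g))) / 2

sum_ok = all(chi_S(g) + chi_L(g) == chi3(g) ** 2 for g in G)

# both characters are real-valued, so norms are plain sums of squares
norm_L = sum(chi_L(g) ** 2 for g in G) / len(G)   # 1: irreducible
norm_S = sum(chi_S(g) ** 2 for g in G) / len(G)   # 3: three irreducible pieces
```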
+
+\begin{eg}
+ Consider again $G = S_4$. As before, we have the following table:
+ \begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
+ & & 1 & 3 & 8 & 6 & 6\\
+ & & $1$ & $(1\; 2)(3\; 4)$ & $(1\; 2\; 3)$ & $(1\; 2\; 3\; 4)$ & $(1\; 2)$\\
+ \midrule
+ trivial & $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$\\
+ sign & $\chi_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$\\
+ $\pi_X - 1_G$ & $\chi_3$ & $3$ & $-1$ & $0$ & $-1$ & $1$\\
+ $\chi_3 \chi_2$ & $\chi_4$ & $3$ & $-1$ & $0$ & $1$ & $-1$\\
+ & $\chi_5$ &\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We now compute $\chi_5$ in a different way, by decomposing $\chi_3^2$. We have
+ \begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
+ & & 1 & 3 & 8 & 6 & 6\\
+ & & $1$ & $(1\; 2)(3\; 4)$ & $(1\; 2\; 3)$ & $(1\; 2\; 3\; 4)$ & $(1\; 2)$\\
+ \midrule
+ & $\chi_3^2$ & $9$ & $1$ & $0$ & $1$ & $1$\\
+ & $\chi_3(g^2)$ & $3$ & $3$ & $0$ & $-1$ & $3$\\
+ & $S^2 \chi_3$ & $6$ & $2$ & $0$ & $0$ & $2$\\
+ & $\Lambda^2 \chi_3$ & $3$ & $-1$ & $0$ & $1$ & $-1$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ We notice $\Lambda^2 \chi_3$ is just $\chi_4$. Now $S^2 \chi_3$ is not irreducible, which we can easily show by computing the inner product. By taking the inner product with the other irreducible characters, we find $S^2\chi_3 = 1 + \chi_3 + \chi_5$, where $\chi_5$ is our new irreducible character. So we obtain
+ \begin{center}
+ \begin{tabular}{ccccccc}
+ \toprule
+ & & 1 & 3 & 8 & 6 & 6\\
+ & & $1$ & $(1\; 2)(3\; 4)$ & $(1\; 2\; 3)$ & $(1\; 2\; 3\; 4)$ & $(1\; 2)$\\
+ \midrule
+ trivial & $\chi_1$ & $1$ & $1$ & $1$ & $1$ & $1$\\
+ sign & $\chi_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$\\
+ $\pi_X - 1_G$ & $\chi_3$ & $3$ & $-1$ & $0$ & $-1$ & $1$\\
+ $\chi_3 \chi_2$ & $\chi_4$ & $3$ & $-1$ & $0$ & $1$ & $-1$\\
+ $S^2 \chi_3 - \chi_1 - \chi_3$ & $\chi_5$ & $2$ & $2$ & $-1$ & $0$ & $0$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+\end{eg}
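The arithmetic in this example is easy to check mechanically. Here is a small Python sketch (the class sizes, character values and variable names all come from the tables above; nothing else is assumed) that recomputes $S^2\chi_3$ and $\Lambda^2\chi_3$ from the formula $\frac{1}{2}(\chi^2(g) \pm \chi(g^2))$, and then decomposes $S^2\chi_3$ by taking inner products:

```python
from fractions import Fraction

# Conjugacy classes of S_4, in the order used in the tables above:
# [e, (1 2)(3 4), (1 2 3), (1 2 3 4), (1 2)], with these sizes.
sizes = [1, 3, 8, 6, 6]
order = sum(sizes)  # |S_4| = 24
chi = [
    [1, 1, 1, 1, 1],    # chi_1, trivial
    [1, 1, 1, -1, -1],  # chi_2, sign
    [3, -1, 0, -1, 1],  # chi_3
    [3, -1, 0, 1, -1],  # chi_4
    [2, 2, -1, 0, 0],   # chi_5
]

# chi_3 evaluated at g^2: squaring sends the classes to
# [e, e, (1 3 2), (1 3)(2 4), e] respectively.
chi3_gsq = [3, 3, 0, -1, 3]

# Symmetric and exterior square characters, from the proposition.
S2 = [Fraction(a * a + b, 2) for a, b in zip(chi[2], chi3_gsq)]
L2 = [Fraction(a * a - b, 2) for a, b in zip(chi[2], chi3_gsq)]

def inner(f, g):
    # <f, g> = (1/|G|) sum over classes of size * f * conj(g); all values real here
    return sum(s * x * y for s, x, y in zip(sizes, f, g)) / Fraction(order)

assert L2 == chi[3]  # Lambda^2 chi_3 = chi_4

# Multiplicities of chi_1, ..., chi_5 in S^2 chi_3:
decomposition = [inner(S2, c) for c in chi]
assert decomposition == [1, 0, 1, 0, 1]  # S^2 chi_3 = chi_1 + chi_3 + chi_5
```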
+
+\subsection{Characters of \texorpdfstring{$G\times H$}{G x H}}
+We have looked at characters of direct products a bit before, when we decomposed an abelian group into a product of cyclic groups. We will now consider this in the general case.
+
+\begin{prop}
+ Let $G$ and $H$ be two finite groups with irreducible characters $\chi_1, \cdots, \chi_k$ and $\psi_1, \cdots, \psi_r$ respectively. Then the irreducible characters of the direct product $G\times H$ are precisely
+ \[
+ \{\chi_i \psi_j: 1 \leq i \leq k, 1 \leq j \leq r\},
+ \]
+ where
+ \[
+ (\chi_i \psi_j)(g, h) = \chi_i(g) \psi_j(h).
+ \]
+\end{prop}
+
+\begin{proof}
+ Take $\rho: G \to \GL(V)$ affording $\chi$, and $\rho': H \to \GL(W)$ affording $\psi$. Then define
+ \begin{align*}
+ \rho \otimes \rho': G \times H &\to \GL(V\otimes W)\\
+ (g, h) &\mapsto \rho(g) \otimes \rho'(h),
+ \end{align*}
+ where
+ \[
+ (\rho(g)\otimes \rho'(h))(\mathbf{v}_i \otimes \mathbf{w}_j) = \rho(g) \mathbf{v}_i \otimes \rho'(h) \mathbf{w}_j.
+ \]
+ This is a representation of $G \times H$ on $V\otimes W$, and $\chi_{\rho\otimes \rho'} = \chi \psi$. The proof is similar to the case where $\rho, \rho'$ are both representations of $G$, and we will not repeat it here.
+
+ Now we need to show $\chi_i \psi_j$ are distinct and irreducible. It suffices to show they are orthonormal. We have
+ \begin{align*}
+ \bra \chi_i \psi_j, \chi_r \psi_s\ket_{G\times H} &= \frac{1}{|G\times H|} \sum_{(g, h) \in G\times H} \overline{\chi_i \psi_j (g, h)}\chi_r \psi_s(g, h)\\
+ &= \left(\frac{1}{|G|} \sum_{g \in G} \overline{\chi_i (g)}\chi_r(g)\right)\left(\frac{1}{|H|}\sum_{h \in H} \overline{\psi_j(h)} \psi_s(h)\right)\\
+ &= \delta_{ir} \delta_{js}.
+ \end{align*}
+ So it follows that $\{\chi_i \psi_j\}$ are distinct and irreducible. To see that the list is complete, we compare the sum of the squares of the degrees with the order of the group:
+ \[
+ \sum_{i, j} (\chi_i \psi_j(1))^2 = \sum_{i, j} \chi_i(1)^2 \psi_j(1)^2 = \left(\sum_i \chi_i(1)^2\right)\left(\sum_j \psi_j(1)^2\right) = |G||H| = |G\times H|.
+ \]
+ So done.
+\end{proof}
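For abelian groups this is easy to check numerically. The sketch below (the choice of $C_2$ and $C_3$ is an arbitrary small illustration) builds the irreducible characters of $C_2$ and $C_3$ as lists of root-of-unity values, forms the six product characters on $C_2 \times C_3$, and verifies that they are orthonormal:

```python
import cmath

def cyclic_chars(n):
    # irreducible characters of C_n: chi_j(k) = exp(2*pi*i*j*k / n)
    w = cmath.exp(2j * cmath.pi / n)
    return [[w ** (j * k) for k in range(n)] for j in range(n)]

chars_G, chars_H = cyclic_chars(2), cyclic_chars(3)

# product characters (chi_i psi_j)(g, h) = chi_i(g) psi_j(h) on C_2 x C_3,
# stored as lists indexed by the pairs (g, h)
prods = [[x * y for x in chi for y in psi] for chi in chars_G for psi in chars_H]

def inner(f, g):
    # every element of an abelian group is its own conjugacy class
    return sum(x * y.conjugate() for x, y in zip(f, g)) / len(f)

for a, p in enumerate(prods):
    for b, q in enumerate(prods):
        expected = 1 if a == b else 0
        assert abs(inner(p, q) - expected) < 1e-9

assert len(prods) == 6  # k * r = 2 * 3 irreducibles, all of degree 1
```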
+\subsection{Symmetric and exterior powers}
+We have introduced the objects $S^2 V$ and $\Lambda^2 V$. We can do this in a more general setting, for higher powers. Consider a vector space $V$ over $\F$, and $\dim_\F V = d$. We let $\{\mathbf{v}_1, \cdots, \mathbf{v}_d\}$ be a basis, and let\index{$V^{\otimes n}$}\index{$T^n V$}
+\[
+ T^nV = V^{\otimes n} = \underbrace{V\otimes \cdots \otimes V}_{n\text{ times}}.
+\]
+The basis for this space is just
+\[
+ \{\mathbf{v}_{i_1} \otimes \cdots \otimes \mathbf{v}_{i_n}: i_1, \cdots, i_n \in \{1, \cdots, d\}\}.
+\]
+First of all, there is an obvious $S_n$-action on this space, permuting the multiplicands of $V^{\otimes n}$. For any $\sigma \in S_n$, we can define an action $\sigma: V^{\otimes n} \to V^{\otimes n}$ given by
+\[
+ \mathbf{v}_1 \otimes \cdots \otimes \mathbf{v}_n \mapsto \mathbf{v}_{\sigma(1)} \otimes \cdots \otimes \mathbf{v}_{\sigma(n)}.
+\]
+Note that this is a left action only if we compose permutations right-to-left. If we compose left-to-right instead, we have to use $\sigma^{-1}$ instead of $\sigma$ (or use $\sigma$ and obtain a right action). In fact, extending this action linearly gives a representation of $S_n$ on $V^{\otimes n}$.
+
+If $V$ itself is a $G$-space, then we also get a $G$-action on $V^{\otimes n}$. Let $\rho: G \to \GL(V)$ be a representation. Then we obtain the action of $G$ on $V^{\otimes n}$ by
+\[
+ \rho^{\otimes n}(g): \mathbf{v}_1 \otimes \cdots \otimes \mathbf{v}_n \mapsto \rho(g) \mathbf{v}_1 \otimes \cdots \otimes \rho(g) \mathbf{v}_n.
+\]
+Staring at this carefully, it is clear that these two actions commute with each other. This rather simple, innocent-looking observation is the basis of an important theorem of Schur, and has many applications. However, that would be for another course.
+
+Getting back on track, since the two actions commute, we can decompose $V^{\otimes n}$ as an $S_n$-module, and each isotypical component is a $G$-invariant subspace of $V^{\otimes n}$. We don't really need this, but it is a good generalization of what we've had before.
+
+\begin{defi}[Symmetric and exterior power]\index{symmetric power}\index{exterior power}\index{$S^n V$} \index{$\Lambda^n V$}
+ For a $G$-space $V$, define
+ \begin{enumerate}
+ \item The $n$th symmetric power of $V$ is
+ \[
+ S^n V = \{x \in V^{\otimes n}: \sigma(x) = x\text{ for all }\sigma \in S_n\}.
+ \]
+ \item The $n$th exterior power of $V$ is
+ \[
+ \Lambda^n V = \{x \in V^{\otimes n}: \sigma(x) = (\sgn \sigma) x \text{ for all }\sigma \in S_n\}.
+ \]
+ \end{enumerate}
+ Both of these are $G$-subspaces of $V^{\otimes n}$.
+\end{defi}
+Recall that in the $n = 2$ case, we have $V^{\otimes 2} = S^2 V \oplus \Lambda^2 V$. However, for $n > 2$, we get $S^n V \oplus \Lambda^n V \lneq V^{\otimes n}$. In general, the $S_n$-action gives rise to many other $G$-invariant subspaces of $V^{\otimes n}$.
+
+\begin{eg}
+ The bases for $S^n V$ and $\Lambda^n V$ are obtained as follows: take $\mathbf{e}_1, \cdots, \mathbf{e}_d$ a basis of $V$. Then
+ \[
+ \left\{\frac{1}{n!}\sum_{\sigma \in S_n} \mathbf{e}_{i_{\sigma(1)}}\otimes \cdots \otimes \mathbf{e}_{i_{\sigma(n)}} : 1 \leq i_1 \leq \cdots \leq i_n \leq d\right\}
+ \]
+ is a basis for $S^n V$. So the dimension is
+ \[
+ \dim S^n V = \binom{d + n - 1}{n}.
+ \]
+ For $\Lambda^n V$, we get a basis of
+ \[
+ \left\{\frac{1}{n!} \sum_{\sigma \in S_n} (\sgn \sigma) \mathbf{e}_{i_{\sigma(1)}} \otimes \cdots \otimes \mathbf{e}_{i_{\sigma(n)}}: 1 \leq i_1 < \cdots < i_n \leq d\right\}.
+ \]
+ The dimension is
+ \[
+ \dim \Lambda^n V =
+ \begin{cases}
+ 0 & n > d\\
+ \binom{d}{n} & n \leq d
+ \end{cases}
+ \]
+ Details are left for the third example sheet.
+\end{eg}
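The two dimension counts correspond to choosing multisets and subsets of basis indices respectively, which is easy to verify by brute force (the values $d = 4$, $n = 3$ below are an arbitrary small example):

```python
from itertools import combinations, combinations_with_replacement
from math import comb

d, n = 4, 3  # dim V = 4, third power

# multisets i_1 <= ... <= i_n of basis indices give a basis of S^n V
assert len(list(combinations_with_replacement(range(d), n))) == comb(d + n - 1, n)

# strictly increasing tuples i_1 < ... < i_n give a basis of Lambda^n V
assert len(list(combinations(range(d), n))) == comb(d, n)

# and Lambda^n V = 0 once n > d
assert len(list(combinations(range(d), d + 1))) == 0
```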
+
+\subsection{Tensor algebra}
+This part is for some unknown reason included in the schedules. We do not need it here, but it has many applications elsewhere.
+
+Consider a field $\F$ with characteristic $0$.
+
+\begin{defi}[Tensor algebra]\index{tensor algebra}
+ Let $T^nV = V^{\otimes n}$. Then the \emph{tensor algebra} of $V$ is
+ \[
+ T^{\Cdot}(V) = T(V) = \bigoplus_{n\geq 0} T^nV,
+ \]
+ with $T^0V = \F$ by convention.
+
+ This is a vector space over $\F$ with the obvious addition and multiplication by scalars. $T(V)$ is also a (non-commutative) (graded) ring with product $x \cdot y = x\otimes y$. This is graded in the sense that if $x \in T^n V$ and $y \in T^m V$, $x \cdot y = x\otimes y \in T^{n + m}V$.
+\end{defi}
+
+\begin{defi}[Symmetric and exterior algebra]\index{symmetric algebra}\index{exterior algebra}
+ We define the \emph{symmetric algebra} of $V$ to be
+ \[
+ S(V) = T(V) / (\text{ideal of $T(V)$ generated by all $u\otimes v - v\otimes u$}).
+ \]
+ The \emph{exterior algebra} of $V$ is
+ \[
+ \Lambda(V) = T(V) / (\text{ideal of $T(V)$ generated by all $v \otimes v$}).
+ \]
+ Note that $v$ and $u$ are not elements of $V$, but arbitrary elements of $T^n V$ for some $n$.
+\end{defi}
+The symmetric algebra is commutative, while the exterior algebra is \emph{graded-commutative}, i.e.\ if $x \in \Lambda^n(V)$ and $y \in \Lambda^m(V)$, then $x\cdot y = (-1)^{nm} y \cdot x$.
+
+\subsection{Character ring}
+Recall that $\mathcal{C}(G)$ is a commutative ring containing all the characters, and that sums and products of characters are again characters. However, the characters do not quite form a subring, since we are not allowed to subtract.
+
+\begin{defi}[Character ring]\index{character ring}\index{$R(G)$}
+ The \emph{character ring} of $G$ is the $\Z$-submodule of $\mathcal{C}(G)$ spanned by the irreducible characters and is denoted $R(G)$.
+\end{defi}
+Recall that we can obtain all characters by adding irreducible characters, but subtraction is not allowed. So the character ring includes elements that are not actually characters.
+
+\begin{defi}[Generalized/virtual characters]\index{generalized character}\index{virtual character}
+ The elements of $R(G)$ are called \emph{generalized or virtual characters}. These are class functions of the form
+ \[
+ \psi = \sum_{\chi} n_\chi \chi,
+ \]
+ summing over all irreducibles $\chi$, and $n_\chi \in \Z$.
+\end{defi}
+
+We know $R(G)$ is a ring, and any generalized character is a difference of two characters (we let
+\[
+ \alpha = \sum_{n_\chi \geq 0} n_\chi \chi,\quad \beta = \sum_{n_\chi < 0}(-n_\chi) \chi.
+\]
+Then $\psi = \alpha - \beta$, and $\alpha$ and $\beta$ are characters). The irreducible characters $\{\chi_i\}$ then form a $\Z$-basis of $R(G)$ as a free $\Z$-module, since we have shown that they are linearly independent, and they generate $R(G)$ by definition.
+
+\begin{lemma}
+ Suppose $\alpha$ is a generalized character and $\bra \alpha, \alpha\ket = 1$ and $\alpha (1) > 0$. Then $\alpha$ is actually a character of an irreducible representation of $G$.
+\end{lemma}
+
+\begin{proof}
+ We list the irreducible characters as $\chi_1, \cdots, \chi_k$. We then write
+ \[
+ \alpha = \sum n_i \chi_i.
+ \]
+ Since the $\chi_i$'s are orthonormal, we get
+ \[
+ \bra \alpha, \alpha\ket = \sum n_i^2 = 1.
+ \]
+ So exactly one $n_i$ is $\pm 1$, while the others are all zero. So $\alpha = \pm \chi_i$ for some $i$. Finally, since $\alpha(1) > 0$ and also $\chi_i(1) > 0$, we must have $n_i = +1$. So $\alpha = \chi_i$.
+\end{proof}
+Henceforth we don't distinguish between a character and its negative, and often study generalized characters of inner product $1$ rather than irreducible characters.
+
+\section{Induction and restriction}
+This is the last chapter on methods of calculating characters. Afterwards, we shall start applying these to do something useful.
+
+Throughout the chapter, we will take $\F = \C$. Suppose $H \leq G$. How do characters of $G$ and $H$ relate? It is easy to see that every representation of $G$ can be restricted to give a representation of $H$. More surprisingly, given a representation of $H$, we can induce a representation of $G$.
+
+We first deal with the easy case of restriction.
+\begin{defi}[Restriction]\index{restriction}
+ Let $\rho: G \to \GL(V)$ be a representation affording $\chi$. We can think of $V$ as an $H$-space by restricting $\rho$'s attention to $h \in H$. We get a representation $\Res^G_H \rho = \rho_H = \rho\downarrow_H$, the \emph{restriction} of $\rho$ to $H$. It affords the character $\Res^G_H \chi = \chi_H = \chi\downarrow_H$.
+\end{defi}
+It is a fact of life that the restriction of an irreducible character is in general not irreducible. For example, restricting any non-linear irreducible character to the trivial subgroup will clearly produce a reducible character.
+
+\begin{lemma}
+ Let $H \leq G$. If $\psi$ is any non-zero irreducible character of $H$, then there exists an irreducible character $\chi$ of $G$ such that $\psi$ is a constituent of $\Res_H^G \chi$, i.e.
+ \[
+ \bra \Res^G_H \chi, \psi \ket \not= 0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We list the irreducible characters of $G$ as $\chi_1, \cdots, \chi_k$. Recall the regular character $\pi_{\mathrm{reg}}$. Consider
+ \[
+ \bra \Res^G_H\pi_{\mathrm{reg}}, \psi\ket = \frac{|G|}{|H|} \psi(1) \not= 0.
+ \]
+ On the other hand, we also have
+ \[
+ \bra \Res^G_H\pi_{\mathrm{reg}}, \psi\ket_H = \sum_{i = 1}^k \deg \chi_i \bra \Res^G_H \chi_i, \psi\ket.
+ \]
+ Since this sum is non-zero, there must be some $i$ such that $\bra \Res^G_H \chi_i, \psi\ket \not= 0$.
+\end{proof}
+
+\begin{lemma}
+ Let $\chi$ be an irreducible character of $G$, and let
+ \[
+ \Res^G_H \chi = \sum_i c_i \chi_i,
+ \]
+ with $\chi_i$ irreducible characters of $H$, and $c_i$ non-negative integers. Then
+ \[
+ \sum c_i^2 \leq |G:H|,
+ \]
+ with equality iff $\chi(g) = 0$ for all $g \in G\setminus H$.
+\end{lemma}
+This is useful only if the index is small, e.g.\ $2$ or $3$.
+\begin{proof}
+ We have
+ \[
+ \bra \Res^G_H \chi, \Res^G_H \chi\ket_H = \sum c_i^2.
+ \]
+ However, by definition, we also have
+ \[
+ \bra \Res^G_H \chi, \Res^G_H \chi\ket_H = \frac{1}{|H|}\sum_{h \in H} |\chi(h)|^2.
+ \]
+ On the other hand, since $\chi$ is irreducible, we have
+ \begin{align*}
+ 1 &= \bra \chi, \chi\ket_G\\
+ &= \frac{1}{|G|}\sum_{g \in G} |\chi(g)|^2 \\
+ &= \frac{1}{|G|} \left(\sum_{h \in H} |\chi(h)|^2 + \sum_{g \in G\setminus H} |\chi(g)|^2\right)\\
+ &= \frac{|H|}{|G|} \sum c_i^2 + \frac{1}{|G|} \sum_{g \in G \setminus H} |\chi(g)|^2\\
+ &\geq \frac{|H|}{|G|} \sum c_i^2.
+ \end{align*}
+ So the result follows.
+\end{proof}
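As a quick illustration of the bound, take the character $\chi_3$ of $S_4$ from the tables above and restrict it to $A_4$, which has index $2$ (the class data below is standard; the snippet only checks the arithmetic):

```python
# Restrict chi_3 = [3, -1, 0, -1, 1] of S_4 (classes as in the tables above)
# to A_4, whose classes are [e, (1 2)(3 4), (1 2 3), (1 3 2)].
sizes_A4 = [1, 3, 4, 4]
res_chi3 = [3, -1, 0, 0]  # the 3-cycles split into two A_4-classes, value 0 on both

# <Res chi_3, Res chi_3>_{A_4} = sum c_i^2
norm_sq = sum(s * x * x for s, x in zip(sizes_A4, res_chi3)) / sum(sizes_A4)
assert norm_sq == 1.0

# The lemma says sum c_i^2 <= |S_4 : A_4| = 2.  Here it is 1 < 2, consistent
# with the equality condition, since chi_3((1 2)) = 1 != 0 off A_4.
```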
+
+Now we shall go in the other direction --- given a character of $H$, we want to induce a character of the larger group $G$. Perhaps rather counter-intuitively, it is easier for us to first construct the character, and then later find a representation that affords this character.
+
+We start with a slightly more general notion of an induced \emph{class function}.
+\begin{defi}[Induced class function]\index{induced class function}
+ Let $\psi \in \mathcal{C}(H)$. We define the \emph{induced class function} $\Ind^G_H \psi = \psi\uparrow^G = \psi^G$ by
+ \[
+ \Ind_H^G \psi(g) = \frac{1}{|H|} \sum_{x \in G} \mathring{\psi} (x^{-1}gx),
+ \]
+ where
+ \[
+ \mathring{\psi}(y) =
+ \begin{cases}
+ \psi(y) & y \in H\\
+ 0 & y \not\in H
+ \end{cases}.
+ \]
+\end{defi}
+The first thing to check is, of course, that $\Ind^G_H$ is indeed a class function.
+
+\begin{lemma}
+ Let $\psi \in \mathcal{C}(H)$. Then $\Ind_H^G \psi \in \mathcal{C}(G)$, and $\Ind_H^G\psi(1) = |G:H| \psi(1)$.
+\end{lemma}
+
+\begin{proof}
+ The fact that $\Ind_H^G\psi$ is a class function follows from direct inspection of the formula. Then we have
+ \[
+ \Ind_H^G \psi(1) = \frac{1}{|H|} \sum_{x \in G} \mathring{\psi}(1) = \frac{|G|}{|H|}\psi(1) = |G:H|\psi(1).\qedhere
+ \]
+\end{proof}
+As it stands, the formula is not terribly useful. So we need to find an alternative formula for it.
+
+If we have a subgroup, a sensible thing to do is to look at its cosets. We let $n = |G:H|$, and let $1 = t_1, \cdots, t_n$ be a \emph{left transversal} of $H$ in $G$, i.e.\ $t_1 H = H, \cdots, t_n H$ are precisely the $n$ left cosets of $H$ in $G$.\index{left transversal}
+
+\begin{lemma}
+ Given a (left) transversal $t_1, \cdots, t_n$ of $H$, we have
+ \[
+ \Ind_H^G\psi(g) = \sum_{i = 1}^n \mathring{\psi}(t_i^{-1} g t_i).
+ \]
+\end{lemma}
+
+\begin{proof}
+ We can express every $x \in G$ as $x = t_i h$ for some $h \in H$ and $i$. We then have
+ \[
+ \mathring{\psi}((t_i h)^{-1} g(t_i h)) = \mathring{\psi}(h^{-1} (t_i^{-1} g t_i) h) = \mathring{\psi}(t_i^{-1} gt_i),
+ \]
+ since $\psi$ is a class function of $H$, and $h^{-1} (t_i^{-1} g t_i) h \in H$ if and only if $t_i^{-1} g t_i \in H$, as $h \in H$. So the result follows.
+\end{proof}
+
+This is still not very useful. But we are building up to something that is.
+
+\begin{thm}[Frobenius reciprocity]\index{Frobenius reciprocity}
+ Let $\psi \in \mathcal{C}(H)$ and $\varphi \in \mathcal{C}(G)$. Then
+ \[
+ \bra \Res_H^G \varphi, \psi\ket_H = \bra \varphi, \Ind_H^G \psi\ket_G.
+ \]
+\end{thm}
+Alternatively, we can write this as
+\[
+ \bra \varphi_H, \psi\ket = \bra \varphi, \psi^G\ket.
+\]
+If you are a category theorist, and view restriction and induction as functors, then this says restriction is a right adjoint to induction, and this is just a special case of the tensor-Hom adjunction. If you neither understand nor care about this, then this is a good sign [sic].
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \bra \varphi, \psi^G\ket &= \frac{1}{|G|} \sum_{g \in G} \overline{\varphi(g)}\psi^G(g) \\
+ &= \frac{1}{|G||H|} \sum_{x, g \in G} \overline{\varphi(g)} \mathring{\psi}(x^{-1}gx)\\
+ \intertext{We now write $y = x^{-1} gx$. Then summing over $g$ is the same as summing over $y$. Since $\varphi$ is a $G$-class function, this becomes}
+ &= \frac{1}{|G||H|} \sum_{x, y \in G} \overline{\varphi(y)} \mathring{\psi}(y)\\
+ \intertext{Now note that the sum is independent of $x$. So this becomes}
+ &=\frac{1}{|H|} \sum_{y \in G} \overline{\varphi}(y) \mathring{\psi}(y)\\
+ \intertext{Now this only has contributions when $y \in H$, by definition of $\mathring{\psi}$. So}
+ &= \frac{1}{|H|} \sum_{y \in H} \overline{\varphi(y)}\psi(y)\\
+ &= \bra \varphi_H, \psi\ket_H.\qedhere
+ \end{align*}
+\end{proof}
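Frobenius reciprocity is also easy to check numerically in a small case. The sketch below (permutations represented $0$-indexed as tuples; the choice $G = S_3$, $H = A_3 \cong C_3$ with a faithful character $\psi$ is just an illustration) computes $\Ind_H^G \psi$ directly from the defining formula and checks $\bra \Res^G_H \chi, \psi\ket_H = \bra \chi, \Ind_H^G \psi\ket_G$ for each irreducible $\chi$ of $S_3$:

```python
import cmath
from itertools import permutations

def compose(p, q):  # (p * q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))  # S_3
w = cmath.exp(2j * cmath.pi / 3)
r = (1, 2, 0)                     # a 3-cycle generating H = A_3
psi = {(0, 1, 2): 1, r: w, compose(r, r): w * w}  # a faithful character of C_3

def ind_psi(g):
    # defining formula: (1/|H|) * sum over x in G of psi extended by 0 outside H
    return sum(psi.get(compose(inverse(x), compose(g, x)), 0) for x in G) / len(psi)

def parity(p):
    return (-1) ** sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])

# the three irreducible characters of S_3 (all real-valued)
chars = [
    lambda g: 1,                                              # trivial
    parity,                                                   # sign
    lambda g: sum(1 for i, v in enumerate(g) if i == v) - 1,  # standard
]

for chi in chars:
    lhs = sum(chi(h) * psi[h] for h in psi) / len(psi)  # <Res chi, psi>_H
    rhs = sum(chi(g) * ind_psi(g) for g in G) / len(G)  # <chi, Ind psi>_G
    assert abs(lhs - rhs) < 1e-9
```

Here $\Ind_H^G \psi$ comes out as the $2$-dimensional character of $S_3$, so both sides are $1$ for the standard character and $0$ for the two linear ones.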
+
+\begin{cor}
+ Let $\psi$ be a character of $H$. Then $\Ind_H^G \psi$ is a character of $G$.
+\end{cor}
+
+\begin{proof}
+ Let $\chi$ be an irreducible character of $G$. Then
+ \[
+ \bra \Ind_H^G \psi, \chi\ket = \bra \psi, \Res_H^G \chi\ket.
+ \]
+ Since $\psi$ and $\Res_H^G \chi$ are characters, the right-hand side is in $\Z_{\geq 0}$. Hence $\Ind_H^G \psi$ is a linear combination of irreducible characters with non-negative coefficients, and is hence a character.
+\end{proof}
+
+Recall we denote the conjugacy class of $g \in G$ as $\mathcal{C}_G(g)$, while the centralizer is $C_G(g)$. If we take a conjugacy class $\mathcal{C}_G(g)$ in $G$ and restrict it to $H$, then the result $\mathcal{C}_G(g)\cap H$ need not be a conjugacy class, since the element needed to conjugate $x, y \in \mathcal{C}_G(g) \cap H$ need not be present in $H$, as is familiar from the case of $A_n \leq S_n$. However, we know that $\mathcal{C}_G(g) \cap H$ is a union of conjugacy classes in $H$, since elements conjugate in $H$ are also conjugate in $G$.
+
+\begin{prop}
+ Let $\psi$ be a character of $H \leq G$, and let $g \in G$. Let
+ \[
+ \mathcal{C}_G(g) \cap H = \bigcup_{i = 1}^m \mathcal{C}_H(x_i),
+ \]
+ where the $x_i$ are representatives of the $H$-conjugacy classes of elements of $H$ conjugate to $g$. If $m = 0$, then $\Ind_H^G\psi(g) = 0$. Otherwise,
+ \[
+ \Ind_H^G \psi(g) = |C_G(g)| \sum_{i = 1}^m \frac{\psi(x_i)}{|C_H(x_i)|}.
+ \]
+\end{prop}
+
+This is all just group theory. Note that some people will think this proof is excessive --- everything shown is ``obvious''. In some sense it is. Some steps seem very obvious to certain people, but we are spelling out all the details so that everyone is happy.
+
+\begin{proof}
+ If $m = 0$, then $\{x \in G: x^{-1}gx \in H\} = \emptyset$. So $\mathring{\psi}(x^{-1}gx) = 0$ for all $x$. So $\Ind_H^G \psi(g) = 0$ by definition.
+
+ Now assume $m > 0$. We let
+ \[
+ X_i = \{x \in G: x^{-1}gx \in H\text{ and is conjugate in $H$ to $x_i$}\}.
+ \]
+ By definition of $x_i$, we know the $X_i$'s are pairwise disjoint, and their union is $\{x \in G: x^{-1}gx \in H\}$. Hence by definition,
+ \begin{align*}
+ \Ind_H^G \psi(g) &= \frac{1}{|H|} \sum_{x \in G} \mathring{\psi}(x^{-1}gx) \\
+ &= \frac{1}{|H|}\sum_{i = 1}^m \sum_{x \in X_i} \psi(x^{-1}gx)\\
+ &= \frac{1}{|H|}\sum_{i = 1}^m \sum_{x \in X_i} \psi(x_i)\\
+ &= \sum_{i = 1}^m \frac{|X_i|}{|H|} \psi(x_i).
+ \end{align*}
+ So we now have to show that in fact
+ \[
+ \frac{|X_i|}{|H|} = \frac{|C_G(g)|}{|C_H(x_i)|}.
+ \]
+ We fix some $1 \leq i \leq m$. Choose some $g_i \in G$ such that $g_i^{-1} gg_i = x_i$. This exists by definition of $x_i$. So for every $c \in C_G(g)$ and $h \in H$, we have
+ \begin{align*}
+ (cg_i h)^{-1} g(cg_i h) &= h^{-1} g_i^{-1} c^{-1}gcg_i h \\
+ \intertext{We now use the fact that $c$ commutes with $g$, since $c \in C_G (g)$, to get}
+ &= h^{-1}g_i^{-1} c^{-1}cg g_ih \\
+ &= h^{-1} g_i^{-1}gg_i h \\
+ &= h^{-1} x_i h.
+ \end{align*}
+ Hence by definition of $X_i$, we know $cg_i h \in X_i$. Hence
+ \[
+ C_G(g) g_i H \subseteq X_i.
+ \]
+ Conversely, if $x \in X_i$, then $x^{-1}gx = h^{-1} x_i h = h^{-1}(g_i^{-1}g g_i) h$ for some $h \in H$. Thus $xh^{-1}g_i^{-1} \in C_G(g)$, and so $x \in C_G(g) g_i h \subseteq C_G(g) g_i H$.
+ So we conclude
+ \[
+ X_i = C_G(g) g_i H.
+ \]
+ Thus, using the formula for the size of a double coset (stated, without proof, after this proof), we get
+ \[
+ |X_i| = |C_G(g) g_iH| = \frac{|C_G(g)||H|}{|H \cap g_i^{-1}C_G(g)g_i|}.
+ \]
+ Finally, we note
+ \[
+ g_i^{-1}C_G(g) g_i = C_G(g_i^{-1} gg_i) = C_G(x_i).
+ \]
+ Thus
+ \[
+ |X_i| = \frac{|H||C_G(g)|}{|H\cap C_G(x_i)|} = \frac{|H||C_G(g)|}{|C_H(x_i)|}.
+ \]
+ Dividing, we get
+ \[
+ \frac{|X_i|}{|H|} = \frac{|C_G(g)|}{|C_H(x_i)|}.
+ \]
+ So done.
+\end{proof}
+To clarify matters, if $H, K \leq G$, then a \emph{double coset}\index{double coset} of $H, K$ in $G$ is a set
+\[
+ HxK = \{hxk: h \in H, k \in K\}
+\]
+for some $x \in G$. Facts about them include
+\begin{enumerate}
+ \item Two double cosets are either disjoint or equal.
+ \item The sizes are given by
+ \[
+ \frac{|H||K|}{|H\cap xKx^{-1}|} = \frac{|H||K|}{|x^{-1}Hx \cap K|}.
+ \]
+\end{enumerate}
+
+\begin{lemma}
+ Let $\psi = 1_H$, the trivial character of $H$. Then $\Ind_H^G 1_H = \pi_X$, the permutation character of $G$ on the set $X$, where $X = G/H$ is the set of left cosets of $H$.
+\end{lemma}
+
+\begin{proof}
+ We let $n = |G:H|$, and $t_1, \cdots, t_n$ be representatives of the cosets. By definition, we know
+ \begin{align*}
+ \Ind_H^G 1_H(g) &= \sum_{i = 1}^n \mathring{1}_H(t_i^{-1} g t_i)\\
+ &= |\{i: t_i^{-1}g t_i \in H\}|\\
+ &= |\{i: g \in t_i H t_i^{-1}\}|\\
+ \intertext{But $t_i H t_i^{-1}$ is the stabilizer in $G$ of the coset $t_i H \in X$. So this is equal to}
+ &= |\fix_X(g)|\\
+ &= \pi_X(g).\qedhere
+ \end{align*}
+\end{proof}
+By Frobenius reciprocity, we know
+\[
+ \bra \pi_X, 1_G\ket_G = \bra \Ind_H^G 1_H, 1_G\ket_G = \bra 1_H, 1_H\ket_H = 1.
+\]
+So the multiplicity of $1_G$ in $\pi_X$ is indeed $1$, as we have shown before.
+
+\begin{eg}
+ Let $H = C_4 = \bra (1\; 2\; 3\; 4)\ket \leq G = S_4$. The index is $|S_4: C_4| = 6$. We consider the character of the induced representation $\Ind_H^G (\alpha)$, where $\alpha$ is the faithful $1$-dimensional representation of $C_4$ given by
+ \[
+ \alpha((1\; 2\; 3\; 4)) = i.
+ \]
+ Then the character of $\alpha$ is
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ & $1$ & $(1\; 2\; 3\; 4)$ & $(1\; 3)(2\; 4)$ & $(1\; 4\; 3\; 2)$\\
+ \midrule
+ $\chi_\alpha$ & $1$ & $i$ & $-1$ & $-i$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ What is the induced character in $S_4$? We know that
+ \[
+ \Ind_H^G(\alpha)(1) = |G:H| \alpha(1) = |G:H| = 6.
+ \]
+ So the degree of $\Ind_H^G(\alpha)$ is $6$. Also, no transposition or $3$-cycle is conjugate to an element of $C_4$. So the induced character vanishes on the classes of $(1\; 2)$ and $(1\; 2\; 3)$.
+
+ For $(1\; 2)(3\; 4)$, only one of the three elements in its conjugacy class lies in $H$, namely $(1\; 3)(2\; 4)$. So, using the sizes of the centralizers, we obtain
+ \[
+ \Ind_H^G(\alpha)((1\; 2)(3\; 4)) = 8 \left(\frac{-1}{4}\right) = -2.
+ \]
+ For $(1\; 2\; 3\; 4)$, its conjugacy class has $6$ elements, two of which lie in $C_4$, namely $(1\; 2\; 3\; 4)$ and $(1\; 4\; 3\; 2)$. So
+ \[
+ \Ind_H^G(\alpha)((1\; 2\; 3\; 4)) = 4 \left(\frac{i}{4} + \frac{-i}{4}\right) = 0.
+ \]
+ So we get the following character table:
+ \begin{center}
+ \begin{tabular}{cccccc}
+ \toprule
+ & 1 & 6 & 8 & 3 & 6\\
+ & $1$ & $(1\; 2)$ & $(1\; 2\; 3)$ & $(1\; 2)(3\; 4)$ & $(1\; 2\; 3\; 4)$\\
+ \midrule
+ $\Ind_H^G(\alpha)$ & $6$ & $0$ & $0$ & $-2$ & $0$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+\end{eg}
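We can confirm the whole row by brute force, inducing $\alpha$ directly from the defining formula over all $24$ elements of $S_4$ (permutations are written $0$-indexed as tuples, so $(1\; 2\; 3\; 4)$ becomes `(1, 2, 3, 0)`; this is only a sanity check, not part of the development):

```python
from itertools import permutations

def compose(p, q):  # (p * q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(4)))  # S_4
c = (1, 2, 3, 0)                  # the 4-cycle (1 2 3 4), 0-indexed

# H = C_4 = <c>, with alpha(c^k) = i^k
H, alpha = [], {}
g = (0, 1, 2, 3)
for k in range(4):
    H.append(g)
    alpha[g] = 1j ** k
    g = compose(c, g)

def ind_alpha(x):
    # defining formula: (1/|H|) * sum over t in G of alpha extended by zero
    return sum(alpha.get(compose(inverse(t), compose(x, t)), 0) for t in G) / len(H)

reps = {  # one representative per conjugacy class of S_4
    'e': (0, 1, 2, 3),
    '(1 2)': (1, 0, 2, 3),
    '(1 2 3)': (1, 2, 0, 3),
    '(1 2)(3 4)': (1, 0, 3, 2),
    '(1 2 3 4)': (1, 2, 3, 0),
}
values = {name: ind_alpha(p) for name, p in reps.items()}
assert values['e'] == 6 and values['(1 2)(3 4)'] == -2
assert all(abs(values[k]) < 1e-9 for k in ['(1 2)', '(1 2 3)', '(1 2 3 4)'])
```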
+\subsubsection*{Induced representations}
+We have constructed a class function, and showed it is indeed the character of some representation. However, we currently have no idea what this corresponding representation is. Here we will try to construct such a representation explicitly. This is not too enlightening, but it's nice to know it can be done. We will also need this explicit construction when proving Mackey's theorem later.
+
+Let $H \leq G$ have index $n$, and $1 = t_1, \cdots, t_n$ be a left transversal. Let $W$ be a $H$-space. Define the vector space
+\[
+ V = \Ind_H^G W = W \oplus (t_2 \otimes W) \oplus \cdots \oplus (t_n \otimes W),
+\]
+where
+\[
+ t_i \otimes W = \{t_i \otimes \mathbf{w}: \mathbf{w} \in W\}.
+\]
+So $\dim V = n \dim W$.
+
+To define the $G$-action on $V$, let $g \in G$. Then for every $i$, there is a unique $j$ such that $t_j^{-1} g t_i \in H$, i.e.\ $gt_iH = t_j H$. We define
+\[
+ g(t_i \otimes \mathbf{w}) = t_j \otimes ((t_j^{-1}gt_i) \mathbf{w}).
+\]
+We tend to drop the tensor signs, and just write
+\[
+ g(t_i \mathbf{w}) = t_j (t_j^{-1} g t_i \mathbf{w}).
+\]
+We claim this is a $G$-action. We have
+\[
+ g_1(g_2 t_i \mathbf{w}) = g_1(t_j(t_j^{-1} g_2 t_i)\mathbf{w}),
+\]
+where $j$ is the unique index such that $g_2 t_i H = t_j H$. This is then equal to
+\[
+ t_\ell ((t_\ell^{-1} g_1 t_j)(t_j^{-1} g_2 t_i) \mathbf{w}) = t_\ell((t_\ell^{-1}(g_1g_2)t_i) \mathbf{w}) = (g_1g_2)(t_i \mathbf{w}),
+\]
+where $\ell$ is the unique index such that $g_1 t_j H = t_\ell H$ (and hence $g_1 g_2 t_i H = g_1 t_j H = t_\ell H$).
+
+\begin{defi}[Induced representation]\index{induced representation}
+ Let $H \leq G$ have index $n$, and $1 = t_1, \cdots, t_n$ be a left transversal. Let $W$ be a $H$-space. Define the \emph{induced representation} to be the vector space
+ \[
+ \Ind_H^G W = W \oplus t_2 \otimes W \oplus \cdots \oplus t_n \otimes W,
+ \]
+ with the $G$-action
+ \[
+ g: t_i \mathbf{w} \mapsto t_j ((t_j^{-1} g t_i)\mathbf{w}),
+ \]
+ where $t_j$ is the unique element (among $t_1, \cdots, t_n$) such that $t_j^{-1} gt_i \in H$.
+\end{defi}
+
+This has the ``right'' character. Suppose $W$ has character $\psi$. Since $g: t_i \mathbf{w} \mapsto t_j ((t_j^{-1} gt_i)\mathbf{w})$, the contribution to the character is $0$ unless $j = i$, i.e.\ unless $t_i^{-1} g t_i \in H$. In that case, it contributes
+\[
+ \psi (t_i^{-1} g t_i).
+\]
+Thus we get
+\[
+ \Ind_H^G \psi(g) = \sum_{i = 1}^n \mathring{\psi}(t_i^{-1} g t_i).
+\]
+Note that this construction is rather ugly. It could be made much nicer if we knew a bit more algebra --- we can write the induced module simply as
+\[
+ \Ind_H^G W = \F G \otimes_{\F H} W,
+\]
+where we view $W$ as a left-$\F H$ module, and $\F G$ as a $(\F G, \F H)$-bimodule. Alternatively, this is the \emph{extension of scalars} from $\F H$ to $\F G$.
+
+\section{Frobenius groups}
+We will now use character theory to prove some major results in finite group theory. We first prove Frobenius' theorem here. Later, we will prove Burnside's $p^a q^b$ theorem.
+
+This is a theorem with lots and lots of proofs, but all of them involve some sort of representation theory --- it seems to be an unavoidable ingredient.
+
+\begin{thm}[Frobenius' theorem (1891)]\index{Frobenius' theorem}
+ Let $G$ be a transitive permutation group on a finite set $X$, with $|X| = n$. Assume that each non-identity element of $G$ fixes \emph{at most one} element of $X$. Then the set of fixed point-free elements (``derangements'')
+ \[
+ K = \{1\} \cup \{g \in G: g\alpha \not= \alpha\text{ for all }\alpha \in X\}
+ \]
+ is a normal subgroup of $G$ with order $n$.
+\end{thm}
+On the face of it, it is not clear $K$ is even a subgroup at all. It turns out normality isn't really hard to prove --- the hard part is indeed showing it is a subgroup.
+
+Note that we did not explicitly say $G$ is finite. But these conditions imply $G \leq S_n$, which forces $G$ to be finite.
+
+\begin{proof}
+ The idea of the proof is to construct a character $\Theta$ whose kernel is $K$. First note that by definition of $K$, we have
+ \[
+ G = K \cup \bigcup_{\alpha \in X} G_\alpha,
+ \]
+ where $G_\alpha$ is, as usual, the stabilizer of $\alpha$. Also, we know that $G_\alpha \cap G_\beta = \{1\}$ if $\alpha \not= \beta$ by assumption, and by definition of $K$, we have $K \cap G_\alpha = \{1\}$ as well.
+
+ Next note that all the $G_\alpha$ are conjugate. Indeed, we know $G$ is transitive, and $g G_\alpha g^{-1} = G_{g\alpha}$. We set $H = G_\alpha$ for some arbitrary choice of $\alpha$. Then the above tells us that
+ \[
+ |G| = |K| + |X|(|H| - 1).
+ \]
+ On the other hand, by the orbit-stabilizer theorem, we know $|G| = |X| |H|$. So it follows that we have
+ \[
+ |K| = |X| = n.
+ \]
+ We first compute what induced characters look like.
+ \begin{claim}
+ Let $\psi$ be a character of $H$. Then
+ \[
+ \Ind_H^G \psi(g) =
+ \begin{cases}
+ n \psi (1) & g = 1\\
+ \psi(g) & g \in H \setminus \{1\}\\
+ 0 & g \in K \setminus \{1\}
+ \end{cases}.
+ \]
+ Since every element in $G$ is either in $K$ or conjugate to an element in $H$, this uniquely specifies what the induced character is.
+ \end{claim}
+ This is a matter of computation. Since $|G:H| = n$, the case $g = 1$ follows immediately. Using the definition of the induced character, since no non-identity element of $K$ is conjugate to any element of $H$, the induced character vanishes on $K \setminus \{1\}$.
+
+ Finally, suppose $g \in H \setminus \{1\}$. Note that if $x \in G$, then $xgx^{-1} \in G_{x\alpha}$. So this lies in $H$ if and only if $x \in H$. So we can write the induced character as
+ \[
+ \Ind_H^G \psi(g) = \frac{1}{|H|} \sum_{x \in G} \mathring{\psi}(xgx^{-1}) = \frac{1}{|H|} \sum_{h \in H} \psi(hgh^{-1}) = \psi(g).
+ \]
+ \begin{claim}
+ Let $\psi$ be an irreducible character of $H$, and define
+ \[
+ \theta = \psi^G - \psi(1) (1_H)^G + \psi(1) 1_G.
+ \]
+ Then $\theta$ is a character, and
+ \[
+ \theta(g) =
+ \begin{cases}
+ \psi(g) & g \in H\\
+ \psi(1) & g \in K
+ \end{cases}.
+ \]
+ \end{claim}
+ Note that we chose the coefficients exactly so that the final property of $\theta$ holds. This is a matter of computation:
+ \begin{center}
+ \begin{tabular}{cccc}
+ \toprule
+ & $1$ & $h \in H \setminus \{1\}$ & $K \setminus \{1\}$\\
+ \midrule
+ $\psi^G$ & $n \psi(1)$ & $\psi(h)$ & $0$\\
+ $\psi(1) (1_H)^G$ & $n \psi(1)$ & $\psi(1)$ & $0$\\
+ $\psi(1) 1_G$ & $\psi(1)$ & $\psi(1)$ & $\psi(1)$\\
+ $\theta$ & $\psi(1)$ & $\psi(h)$ & $\psi(1)$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ The less obvious part is that $\theta$ is a character. From the way we wrote it, we already know it is a virtual character. We then compute the inner product
+ \begin{align*}
+ \bra \theta, \theta\ket_G &= \frac{1}{|G|} \sum_{g \in G} |\theta(g)|^2\\
+ &= \frac{1}{|G|} \left(\sum_{g \in K} |\theta(g)|^2 + \sum_{g \in G \setminus K} |\theta(g)|^2\right)\\
+ &= \frac{1}{|G|} \left(n |\psi(1)|^2 + n\sum_{h \in H \setminus \{1\}} |\psi(h)|^2\right)\\
+ &= \frac{1}{|G|} \left(n \sum_{h \in H} |\psi(h)|^2 \right)\\
+ &= \frac{1}{|G|} (n |H| \bra \psi, \psi\ket_H)\\
+ &= 1.
+ \end{align*}
+ So either $\theta$ or $-\theta$ is a character. But $\theta(1) = \psi(1) > 0$. So $\theta$ is a character.
+
+ Finally, we have
+ \begin{claim}
+ Let $\psi_1, \cdots, \psi_t$ be the irreducible characters of $H$, and $\theta_i$ the corresponding characters of $G$ constructed above. Set
+ \[
+ \Theta = \sum_{i = 1}^t \theta_i(1) \theta_i.
+ \]
+ Then we have
+ \[
+ \Theta(g) =
+ \begin{cases}
+ |H| & g \in K\\
+ 0 & g \not \in K
+ \end{cases}.
+ \]
+ From this, it follows that the kernel of the representation affording $\Theta$ is $K$, and in particular $K$ is a normal subgroup of $G$.
+ \end{claim}
+ This is again a computation using column orthogonality, recalling that $\theta_i(h) = \psi_i(h)$ for $h \in H$ and $\theta_i(y) = \psi_i(1)$ for $y \in K$. For $1 \not= h \in H$, we have
+ \[
+ \Theta(h) = \sum_{i = 1}^t \psi_i(1) \psi_i(h) = 0,
+ \]
+ and for any $y \in K$, we have
+ \[
+ \Theta(y) = \sum_{i = 1}^t \psi_i(1)^2 = |H|.
+ \]
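The smallest example is worth checking by hand: take $G = S_3$ acting on $X = \{1, 2, 3\}$, so $H \cong C_2$ and the theorem should produce $K = A_3$. The sketch below works on the classes $[e, (1\; 2), (1\; 2\; 3)]$; the induced-character values are taken from the first claim, and everything else is direct arithmetic:

```python
# G = S_3 on 3 points: classes [e, (1 2), (1 2 3)] with sizes [1, 3, 2].
# H = stabilizer of a point, H = C_2; n = |X| = 3; K should be A_3.
n = 3
one_G = [1, 1, 1]
ind_triv = [3, 1, 0]   # (1_H)^G: n*psi(1) at e, psi(h) on H-classes, 0 on K \ {1}
ind_sign = [3, -1, 0]  # (sign character of H)^G, by the same claim
assert ind_triv[0] == n

def theta(ind, deg):
    # theta = psi^G - psi(1) (1_H)^G + psi(1) 1_G, computed per class
    return [a - deg * b + deg * c for a, b, c in zip(ind, ind_triv, one_G)]

t1 = theta(ind_triv, 1)  # from psi_1 = 1_H
t2 = theta(ind_sign, 1)  # from psi_2 = sign character of H
assert t1 == [1, 1, 1]   # the trivial character of S_3
assert t2 == [1, -1, 1]  # the sign character of S_3

# Theta = sum of theta_i(1) * theta_i; here both degrees are 1
Theta = [a + b for a, b in zip(t1, t2)]
assert Theta == [2, 0, 2]  # |H| = 2 on K = {e} and the 3-cycles, 0 off K
```

So $\Theta$ is $|H|$ exactly on $K = A_3$ and $0$ elsewhere, and $A_3 \lhd S_3$ has order $n = 3$, as the theorem predicts.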
+%
+% \separator
+% We need to show that $K \lhd G$. The idea is to construct a clever representation whose kernel is exactly $K$.
+%
+% We start by doing some combinatorics to understand the group-theoretic properties of $G$. Let $H = G_\alpha$ be the stabilizer of some $\alpha \in X$. So conjugates of $H$ are the stabilizers of single elements of $x$, since $g G_\alpha g^{-1} = G_{g\alpha}$ and $G$ acts transitively. This means anything not in $K$ is conjugate to something in $H$.
+%
+% Also, by the hypothesis, no two conjugates can share a non-identity element. So $H$ has $n$ distinct conjugates and $G$ has $n(|H| - 1)$ elements that fix exactly one element of $X$. But by the orbit-stabilizer theorem, since the action is transitive, we know
+% \[
+% |G| = |X| |H| = n|H|.
+% \]
+% Hence
+% \[
+% |K| = |G| - n(|H| - 1) = n.
+% \]
+% Let $\{1_H = \psi_1, \cdots, \psi_t\}$ be the irreducible characters of $H$. Since every non-identity element in $G$ is either conjugate to something in $H$ or is in $K$, we want to construct a representation that is trivial on $K$ and non-trivial on $H$. It turns out that the conditions on $G$ result in a very nice formula for inducing characters from $H$ to $G$. We will then use the induced representations to construct characters $\theta_i$ of $G$ so that $\theta_i$ acts trivially on $K$, and acts like $\psi_i$ on $H$. Then by taking an appropriate sum of the $\theta_i$, column orthogonality of the $\psi_i$ will imply that the character vanishes on $H \setminus \{0\}$, and it particular the corresponding representation does not act trivially, and then we are done, since $K$ is a kernel.
+%
+% So we want to understand how induced characters from $H$ behave. Recall our magic formula required us to understand the sizes of the centralizers. So we need to understand the conjugacy classes and centralizers in $H$.
+%
+% We do yet more group theory. We first look at conjugacy classes. Let $1 \not= h \in H$, and suppose $h, h' \in H$ are conjugate in $G$. So there is some $g \in G$ such that $h = g h' g^{-1}$. Then $h$ lies in both $H = G_\alpha$ and $gHg^{-1} = G_{g\alpha}$. Since $h$ can fix at most one thing, we must have $g\alpha = \alpha$. Hence $g \in H$. Therefore the conjugacy class in $G$ of $h$ is precisely the conjugacy class in $H$.
+%
+% Similarly, if $g \in C_G(h)$, then $h = ghg^{-1} \in G_{g\alpha}$. So $g\alpha = \alpha$. So $g \in H$. So $C_G(h) = C_H(h)$.
+%
+% Using this, if $1 \not= h \in H$ and $\psi$ is a character of $H$, then we know
+% \[
+% \Ind_H^G \psi(h) = \psi (h).
+% \]
+% For $h = 1$, we have the usual formula
+% \[
+% \Ind_H^G \psi(1) = n \psi (1).
+% \]
+% Finally, since no conjugate of $K$ lies in $H$ (apart from the identity), we must have
+% \[
+% \Ind_H^G \psi(k) = 0
+% \]
+% for $1 \not= k \in K$.
+%
+% Now every element of $G$ either lies in $K$ or it lies in one of the $n$ stabilizers, each of which is conjugate to $H$. So we can list the representatives of the conjugacy classes of $G$ as
+% \[
+% \{1, h_2, \cdots, h_t, y_1, \cdots, y_u\},
+% \]
+% where $1 = h_1, \cdots, h_t$ are representatives of the conjugacy classes of $H$, while the remaining are the representatives of conjugacy classes of $G$ which comprise $K \setminus \{1\}$.
+%
+% We now do the magic. We take $\theta_1 = 1_G$, and again let $\{1_H = \psi_1, \cdots, \psi_t\}$ be the irreducible characters of $H = G_\alpha$. Now for each $1 \leq i \leq t$, we can write the induced character explicitly as
+% \[
+% \Ind_H^G \psi_i(g) =
+% \begin{cases}
+% |G:H| \psi_i(1) = n \psi_i (1) & g = 1\\
+% \psi_i(h_j) & g = h_j\quad (2 \leq j \leq t)\\
+% 0 & g = y_k\quad (1 \leq k \leq u)
+% \end{cases}
+% \]
+% Now for each $2 \leq i \leq t$, we define the magic character
+% \[
+% \theta_i = \psi_i^G - \psi_i(1) \psi_1^G + \psi_i(1) \theta_1 \in R(G).
+% \]
+% We then can compute
+% \begin{center}
+% \begin{tabular}{cccc}
+% \toprule
+% & $1$ & $h_j$ & $y_k$\\
+% \midrule
+% $\psi_i^G$ & $n \psi_i(1)$ & $\psi_i(h_j)$ & $0$\\
+% $\psi_i(1) \psi_1^G$ & $n \psi_i(1)$ & $\psi_i(1)$ & $0$\\
+% $\psi_i(1) \theta_1$ & $\psi_i(1)$ & $\psi_i(1)$ & $\psi_i(1)$\\
+% $\theta_i$ & $\psi_i(1)$ & $\psi_i(h_j)$ & $\psi_i(1)$\\
+% \bottomrule
+% \end{tabular}
+% \end{center}
+% The last line is also valid for $i = 1$, by direct inspection. We see that this $\theta_i$ has the property we sought, namely that it acts trivially on $K$ and acts like $\psi_i$ on $H$.
+%
+% The first thing to check is that this is a character. In fact, this is an irreducible one. We compute
+% \begin{align*}
+% \bra \theta_i, \theta_i\ket &= \frac{1}{|g|} \sum_{g \in G} |\theta_i(g)|^2\\
+% &= \frac{1}{|G|} \left(\sum_{g \in K} |\theta_i(g)|^2 + \sum_{\alpha \in X}\sum_{1 \not= g \in G_\alpha}|\theta_i(g)|^2\right)\\
+% &= \frac{1}{|G|} \left(n \psi_i(1)^2 + n \sum_{1 \not= h \in H} |\theta_i (h)|^2\right)\\
+% &= \frac{1}{|H|} \sum_{h \in H} |\psi_i(h)|^2\\
+% &= \bra \psi_i, \psi_i \ket\\
+% &= 1.
+% \end{align*}
+% So either $\theta_i$ or $-\theta_i$ is an irreducible character of $G$. Since $\theta_i(1) = \psi_i(1) > 0$, we know $\theta_i$ is a genuine irreducible character.
+%
+% We now let
+% \[
+% \theta = \sum_{i = 1}^t \theta_i(1) \theta_i.
+% \]
+% We hit this with column orthogonality --- for $1 \not= h \in H$, we have
+% \[
+% \theta(h) = \sum_{i = 1}^t \psi_i(1) \psi_i(h) = 0,
+% \]
+% and for any $y \in K$, we have
+% \[
+% \theta(y) = \sum_{i = 1}^t \psi_i(1)^2 = |H|.
+% \]
+% Hence we know
+% \[
+% \theta(g) =
+% \begin{cases}
+% |H| & g \in K\\
+% 0 & g \not\in K
+% \end{cases}.
+% \]
+% Thus
+% \[
+% K = \{g \in G: \theta(g) = \theta(1)\} \lhd G.
+% \]
+% With this established, we can see $\theta$ is just the lift to $G$ of $\chi_{\mathrm{reg}}$ on $G/K$.
+\end{proof}
+
+\begin{defi}[Frobenius group and Frobenius complement]\index{Frobenius group}\index{Frobenius complement}
+ A \emph{Frobenius group} is a group $G$ having a subgroup $H$ such that $H\cap gHg^{-1} = 1$ for all $g \not\in H$. We say $H$ is a \emph{Frobenius complement} of $G$.
+\end{defi}
+How does this relate to the previous theorem?
+
+\begin{prop}
+ The left action of any finite Frobenius group on the cosets of the Frobenius complement satisfies the hypothesis of Frobenius' theorem.
+\end{prop}
+
+\begin{defi}[Frobenius kernel]\index{Frobenius kernel}
+ The \emph{Frobenius kernel} of a Frobenius group $G$ is the $K$ obtained from Frobenius' theorem.
+\end{defi}
+
+\begin{proof}
+ Let $G$ be a Frobenius group, having a complement $H$. Then the action of $G$ on the cosets $G/H$ is transitive. Furthermore, if $1 \not= g\in G$ fixes $xH$ and $yH$, then we have $g \in xHx^{-1} \cap yHy^{-1}$. This implies $H \cap (y^{-1}x) H (y^{-1}x)^{-1} \not= 1$. By the Frobenius condition, this forces $y^{-1}x \in H$, and hence $xH = yH$.
+\end{proof}
+
+Note that J. Thompson (in his 1959 thesis) proved any finite group having a fixed-point free automorphism of prime order is nilpotent. This implies $K$ is nilpotent, which means $K$ is a direct product of its Sylow subgroups.
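+ These definitions are easy to check by brute force on a small example. The following snippet is an editorial sanity check, not part of the notes; the choices $G = S_3$ and $H = \langle(0\;1)\rangle$ are ours. It verifies the Frobenius condition $H \cap gHg^{-1} = 1$ for $g \not\in H$, and that the set $K$ of Frobenius' theorem (the identity together with the fixed-point-free elements in the natural action) has exactly $n = |G:H|$ elements.

```python
from itertools import permutations

# S_3 as permutations of {0,1,2}; compose(p, q) means "p after q".
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
e = (0, 1, 2)

H = [e, (1, 0, 2)]  # candidate Frobenius complement: the stabilizer of the point 2

# Frobenius condition: H ∩ gHg⁻¹ = 1 for every g ∉ H.
for g in G:
    if g in H:
        continue
    conj = {compose(compose(g, h), inverse(g)) for h in H}
    assert set(H) & conj == {e}

# K = {1} ∪ {elements fixing no point} in the natural action on {0,1,2},
# which is isomorphic to the action on G/H.  Here K = {1, the two 3-cycles}.
K = [g for g in G if g == e or all(g[i] != i for i in range(3))]
n = len(G) // len(H)  # the index n = |G : H|
assert len(K) == n == 3
```

Here $K$ comes out as $A_3 \lhd S_3$, in agreement with the theorem.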
+
+\section{Mackey theory}
+We work over $\C$. We wish to describe the restriction to a subgroup $K\leq G$ of an induced representation $\Ind_H^G W$, i.e.\ the representation $\Res_K^G \Ind_H^G W$. In general, $K$ and $H$ can be unrelated, but in many applications, we have $K = H$, in which case we can characterize when $\Ind_H^G W$ is irreducible.
+
+It is quite helpful to first look at a special case, where $W = 1_H$ is the trivial representation. Thus $\Ind_H^G 1_H$ is the permutation representation of $G$ on $G/H$ (coset action on the left cosets of $H$ in $G$).
+
+Recall that by the orbit-stabilizer theorem, if $G$ is transitive on the set $X$, and $H = G_\alpha$ for some $\alpha \in X$, then the action of $G$ on $X$ is isomorphic to the action on $G/H$, namely the correspondence
+\[
+ g\alpha \leftrightarrow gH
+\]
+ is a well-defined bijection, and commutes with the $g$-action (i.e.\ $x (g\alpha) = (xg)\alpha \leftrightarrow x(gH) = (xg)H$).
+
+We now consider the action of $G$ on $G/H$ and let $K \leq G$. Then $K$ also acts on $G/H$, and $G/H$ splits into $K$-orbits. The $K$-orbit of $gH$ contains precisely $kgH$ for $k \in K$. So it is the double coset
+\[
+ KgH = \{kgh: k \in K, h \in H\}.
+\]
+The set of double cosets $K \backslash G/H$ partitions $G$, and the number of double cosets is
+\[
+ |K \backslash G/H| = \bra \pi_{G/K}, \pi_{G/H}\ket.
+\]
+We will not need this fact in what follows, but it is true.
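+ Both facts can be confirmed computationally. Below is a small check (ad hoc helpers, not from the notes) with $G = S_3$, $K = \langle(0\;1)\rangle$ and $H = \langle(1\;2)\rangle$: the double cosets $KgH$ partition $G$, and their number equals $\bra \pi_{G/K}, \pi_{G/H}\ket$, where the permutation character counts fixed cosets.

```python
from itertools import permutations
from fractions import Fraction

G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
e = (0, 1, 2)
K = [e, (1, 0, 2)]   # the subgroup generated by (0 1)
H = [e, (0, 2, 1)]   # the subgroup generated by (1 2)

# The double cosets KgH partition G.
double_cosets = set()
for g in G:
    double_cosets.add(frozenset(compose(compose(k, g), h) for k in K for h in H))
assert sum(len(c) for c in double_cosets) == len(G)

def cosets(J):  # left cosets of J in G, as frozensets
    return {frozenset(compose(x, j) for j in J) for x in G}

def perm_char(J):  # permutation character: number of cosets fixed by g
    cs = cosets(J)
    return {g: sum(1 for c in cs if {compose(g, x) for x in c} == set(c))
            for g in G}

pK, pH = perm_char(K), perm_char(H)
inner = sum(Fraction(pK[g] * pH[g], len(G)) for g in G)
assert inner == len(double_cosets) == 2
```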
+
+If it happens that $H = K$, and $H$ is normal, then we just have $K\backslash G/H = G/H$.
+
+What about stabilizers? Clearly, $G_{gH} = gHg^{-1}$. Thus, restricting to the action of $K$, we have $K_{gH} = gHg^{-1} \cap K$. We call $H_g =K_{gH}$.
+
+So by our correspondence above, the action of $K$ on the orbit containing $gH$ is isomorphic to the action of $K$ on $K/(gHg^{-1} \cap K) = K/H_g$. From this, and using the fact that $\Ind_H^G 1_H = \C(G/H)$, we get the special case of Mackey's theorem:
+\begin{prop}
+ Let $G$ be a finite group and $H, K \leq G$. Let $g_1, \cdots, g_k$ be the representatives of the double cosets $K \backslash G/H$. Then
+ \[
+ \Res_K^G \Ind_H^G 1_H \cong \bigoplus_{i = 1}^k \Ind_{g_iHg_i^{-1} \cap K}^K 1.
+ \]
+\end{prop}
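+ The proposition is a statement about characters of permutation representations, so it can be tested directly on a toy example (our choice, not the notes'): with $G = S_3$, $K = \langle(0\;1)\rangle$, $H = \langle(1\;2)\rangle$, we check for each $k \in K$ that the number of cosets in $G/H$ fixed by $k$ equals the sum, over double coset representatives $g_i$, of the number of cosets in $K/(g_iHg_i^{-1}\cap K)$ fixed by $k$.

```python
from itertools import permutations

G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
e = (0, 1, 2)
K = [e, (1, 0, 2)]
H = [e, (0, 2, 1)]

def fixed_cosets(g, group, J):
    cs = {frozenset(compose(x, j) for j in J) for x in group}
    return sum(1 for c in cs if frozenset(compose(g, x) for x in c) == c)

# Pick double coset representatives of K\G/H.
reps, seen = [], set()
for g in G:
    dc = frozenset(compose(compose(k, g), h) for k in K for h in H)
    if dc not in seen:
        seen.add(dc)
        reps.append(g)

# Character identity: (Res_K Ind_H^G 1)(k) = Σ_i (Ind_{H_{g_i}}^K 1)(k).
for k in K:
    lhs = fixed_cosets(k, G, H)                          # k acting on G/H
    rhs = 0
    for g in reps:
        Hg = [x for x in K
              if compose(compose(inverse(g), x), g) in H]  # gHg⁻¹ ∩ K
        rhs += fixed_cosets(k, K, Hg)                    # k acting on K/H_g
    assert lhs == rhs
```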
+
+The general form of Mackey's theorem is the following:
+\begin{thm}[Mackey's restriction formula]\index{Mackey's restriction formula}
+ In general, for $K, H \leq G$, we let $\mathcal{S} = \{1, g_1, \cdots, g_r\}$ be a set of double coset representatives, so that
+ \[
+ G = \bigcup K g_i H.
+ \]
+ We write $H_g = gHg^{-1} \cap K \leq G$. We let $(\rho, W)$ be a representation of $H$. For each $g \in G$, we define $(\rho_g, W_g)$ to be a representation of $H_g$, with the same underlying vector space $W$, but now the action of $H_g$ is
+ \[
+ \rho_g(x) = \rho(g^{-1} xg),
+ \]
+ where $h = g^{-1}xg \in H$ by construction.
+
+ This is clearly well-defined. Since $H_g \leq K$, we obtain an induced representation $\Ind_{H_g}^K W_g$.
+
+ Let $G$ be finite, $H, K \leq G$, and $W$ be a $H$-space. Then
+ \[
+ \Res_K^G \Ind_H^G W = \bigoplus_{g \in \mathcal{S}} \Ind_{H_g}^K W_g.
+ \]
+\end{thm}
+We will defer the proof of this for a minute or two. We will first derive some corollaries of this, starting with the character version of the theorem.
+\begin{cor}
+ Let $\psi$ be a character of a representation of $H$. Then
+ \[
+ \Res_K^G \Ind_H^G \psi = \sum_{g \in \mathcal{S}} \Ind_{H_g}^K \psi_g,
+ \]
+ where $\psi_g$ is the class function (and a character) on $H_g$ given by
+ \[
+ \psi_g(x) = \psi(g^{-1} xg).
+ \]
+\end{cor}
+These characters $\psi_g$ are sometimes known as the \emph{conjugate characters}\index{conjugate character}. This obviously follows from Mackey's restriction formula.
+
+\begin{cor}[Mackey's irreducibility criterion]
+ Let $H \leq G$ and $W$ be a $H$-space. Then $V = \Ind_H^G W$ is irreducible if and only if
+ \begin{enumerate}
+ \item $W$ is irreducible; and
+ \item For each $g \in \mathcal{S} \setminus H$, the two $H_g$ spaces $W_g$ and $\Res_{H_g}^H W$ have no irreducible constituents in common, where $H_g = gHg^{-1}\cap H$.
+ \end{enumerate}
+\end{cor}
+Note that the set $\mathcal{S}$ of representatives was chosen arbitrarily. So we may require (ii) to hold for \emph{all} $g \in G \setminus H$, instead of $g \in \mathcal{S} \setminus H$. While there are more things to check in $G$, it is sometimes convenient not to have to pick out an $\mathcal{S}$ explicitly.
+
+Note that the only $g \in \mathcal{S} \cap H$ is the identity, since for any $h \in H$, $KhH = KH = K1H$.
+
+\begin{proof}
+ We use characters, and let $W$ afford the character $\psi$. We take $K = H$ in Mackey's restriction formula. Then we have $H_g = gHg^{-1} \cap H$.
+
+ Using Frobenius reciprocity, we can compute the inner product as
+ \begin{align*}
+ \bra \Ind_H^G \psi, \Ind_H^G \psi\ket_G &= \bra \psi, \Res_H^G \Ind_H^G \psi\ket_H\\
+ &= \sum_{g \in \mathcal{S}} \bra \psi, \Ind_{H_g}^H \psi_g\ket_H\\
+ &= \sum_{g \in \mathcal{S}} \bra \Res_{H_g}^H \psi , \psi_g\ket_{H_g}\\
+ &= \bra\psi, \psi\ket + \sum_{g \in \mathcal{S} \setminus H} \bra \Res_{H_g}^H \psi, \psi_g \ket_{H_g}
+ \end{align*}
+ We can write this because if $g = 1$, then $H_g = H$, and $\psi_g = \psi$.
+
+ This is a sum of non-negative integers, since the inner products of characters always are. So $\Ind_H^G \psi$ is irreducible if and only if $\bra \psi, \psi\ket = 1$, and all the other terms in the sum are $0$. In other words, $W$ is an irreducible representation of $H$, and for all $g \not \in H$, $W$ and $W_g$ are disjoint representations of $H_g$.
+\end{proof}
+
+\begin{cor}
+ Let $H \lhd G$, and suppose $\psi$ is an irreducible character of $H$. Then $\Ind_H^G \psi$ is irreducible if and only if $\psi$ is distinct from all its conjugates $\psi_g$ for $g \in G \setminus H$ (where $\psi_g(h) = \psi(g^{-1}hg)$ as before).
+\end{cor}
+
+\begin{proof}
+ We take $K = H \lhd G$. So the double cosets are just left cosets. Also, $H_g = H$ for all $g$. Moreover, $W_g$ is irreducible since $W$ is irreducible.
+
+ So, by Mackey's irreducibility criterion, $\Ind_H^G W$ is irreducible precisely if $W \not\cong W_g$ for all $g \in G \setminus H$. This is equivalent to $\psi \not= \psi_g$.
+\end{proof}
+
+Note that again we could check the conditions on a set of representatives of (double) cosets. In fact, the isomorphism class of $W_g$ (for $g \in G$) depends only on the coset $gH$.
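+ In the smallest interesting case we can see this corollary numerically (an editorial illustration; the example $G = S_3$, $H = A_3$ is ours). A non-trivial linear character $\psi$ of $A_3 \lhd S_3$ differs from its conjugate $\psi_g$ for $g \not\in A_3$, so $\Ind_{A_3}^{S_3}\psi$ should be irreducible; indeed its norm is $1$ and it is the $2$-dimensional irreducible of $S_3$.

```python
from itertools import permutations
import cmath

G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
e = (0, 1, 2)
c, c2 = (1, 2, 0), (2, 0, 1)           # the two 3-cycles
H = [e, c, c2]                          # A_3 ◁ S_3
w = cmath.exp(2j * cmath.pi / 3)
psi = {e: 1, c: w, c2: w ** 2}          # a non-trivial linear character of A_3

# ψ_g(h) = ψ(g⁻¹ h g); for g ∉ H this is the other non-trivial character.
g = (1, 0, 2)
psi_g = {h: psi[compose(compose(inverse(g), h), g)] for h in H}
assert any(abs(psi_g[h] - psi[h]) > 1e-9 for h in H)   # ψ ≠ ψ_g

# Induced character: Ind ψ(x) = (1/|H|) Σ_{y ∈ G} ψ°(y⁻¹ x y).
def ind(x):
    return sum(psi.get(compose(compose(inverse(y), x), y), 0)
               for y in G) / len(H)

# ⟨Ind ψ, Ind ψ⟩ = 1, and Ind ψ(1) = 2: the 2-dimensional irreducible.
ip = sum(ind(x) * ind(x).conjugate() for x in G) / len(G)
assert abs(ip - 1) < 1e-9
assert abs(ind(e) - 2) < 1e-9
```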
+
+We now prove Mackey's theorem.
+\begin{thm}[Mackey's restriction formula]
+ In general, for $K, H \leq G$, we let $\mathcal{S} = \{1, g_1, \cdots, g_r\}$ be a set of double coset representatives, so that
+ \[
+ G = \bigcup K g_i H.
+ \]
+ We write $H_g = gHg^{-1} \cap K \leq G$. We let $(\rho, W)$ be a representation of $H$. For each $g \in G$, we define $(\rho_g, W_g)$ to be a representation of $H_g$, with the same underlying vector space $W$, but now the action of $H_g$ is
+ \[
+ \rho_g(x) = \rho(g^{-1} xg),
+ \]
+ where $h = g^{-1}xg \in H$ by construction.
+
+ This is clearly well-defined. Since $H_g \leq K$, we obtain an induced representation $\Ind_{H_g}^K W_g$.
+
+ Let $G$ be finite, $H, K \leq G$, and $W$ be a $H$-space. Then
+ \[
+ \Res_K^G \Ind_H^G W = \bigoplus_{g \in \mathcal{S}} \Ind_{H_g}^K W_g.
+ \]
+\end{thm}
+This is possibly the hardest and most sophisticated proof in the course.
+
+\begin{proof}
+ Write $V = \Ind_H^G W$. Pick $g \in G$, so that $KgH \in K \backslash G / H$. Given a left transversal $\mathcal{T}$ of $H$ in $G$, we can obtain $V$ explicitly as a direct sum
+ \[
+ V = \bigoplus_{t \in \mathcal{T}} t\otimes W.
+ \]
+ The idea is to ``coarsen'' this direct sum decomposition using double coset representatives, by collecting together the $t\otimes W$'s with $t \in KgH$. We define
+ \[
+ V(g) = \bigoplus_{t \in KgH \cap \mathcal{T}} t\otimes W.
+ \]
+ Now each $V(g)$ is a $K$-space --- given $k \in K$ and $t \otimes \mathbf{w} \in t \otimes W$, since $t \in KgH$, we have $kt \in KgH$. So there is some $t' \in \mathcal{T}$ such that $ktH = t' H$. Then $t' \in ktH \subseteq KgH$. So we can define
+ \[
+ k \cdot (t \otimes \mathbf{w}) = t' \otimes (\rho(t'^{-1} kt) \mathbf{w}),
+ \]
+ where $t'^{-1}kt \in H$.
+
+ Viewing $V$ as a $K$-space (forgetting its whole $G$-structure), we have
+ \[
+ \Res_K^G V = \bigoplus_{g \in \mathcal{S}} V(g).
+ \]
+ The left hand side is what we want, but the right hand side looks absolutely nothing like $\Ind_{H_g}^K W_g$. So we need to show
+ \[
+ V(g) = \bigoplus_{t \in KgH \cap \mathcal{T}} t\otimes W \cong \Ind_{H_g}^K W_g,
+ \]
+ as $K$ representations, for each $g \in \mathcal{S}$.
+
+ Now for $g$ fixed, each $t \in KgH$ can be represented by some $kgh$, and by restricting to elements in the transversal $\mathcal{T}$ of $H$, we are really summing over cosets $kgH$. Now cosets $kgH$ are in bijection with cosets $k(gHg^{-1})$ in the obvious way. So we are actually summing over elements in $K/(gHg^{-1} \cap K) = K/H_g$. So we write
+ \[
+ V(g) = \bigoplus_{k \in K/H_g} (kg)\otimes W.
+ \]
+ We claim that there is an isomorphism $k \otimes W_g \cong (kg) \otimes W$. We define the map $k \otimes W_g \to (kg) \otimes W$ by $k \otimes \mathbf{w} \mapsto kg \otimes \mathbf{w}$. This is an isomorphism of vector spaces almost by definition, so we only check that it is compatible with the action. The action of $x \in K$ on the left is given by
+ \[
+ \rho_g(x) (k \otimes \mathbf{w}) = k' \otimes (\rho_g(k'^{-1} xk) \mathbf{w}) = k' \otimes (\rho(g^{-1} k'^{-1} xkg) \mathbf{w}),
+ \]
+ where $k' \in K$ is such that $k'^{-1} xk \in H_g$, i.e.\ $g^{-1} k'^{-1} xkg \in H$. On the other hand,
+ \[
+ \rho(x) (kg \otimes \mathbf{w}) = k''g \otimes (\rho(g^{-1} k''^{-1} x kg) \mathbf{w}),
+ \]
+ where $k'' \in K$ is such that $g^{-1} k''^{-1} xkg \in H$. Since there is a unique choice of $k''$ (after picking a particular transversal), and $k'' = k'$ works, we know this is equal to
+ \[
+ k' g \otimes (\rho(g^{-1} k'^{-1} xkg) \mathbf{w}).
+ \]
+ So the actions are the same. So we have an isomorphism.
+
+ Then
+ \[
+ V(g) = \bigoplus_{k \in K/H_g} k \otimes W_g = \Ind_{H_g}^K W_g,
+ \]
+ as required.
+\end{proof}
+
+\begin{eg}
+ Let $G = S_n$ and $H = A_n$. Consider a $\sigma \in S_n$. Then its conjugacy class in $S_n$ is determined by its cycle type.
+
+ If the class of $\sigma$ splits into two classes in $A_n$, and $\chi$ is an irreducible character of $A_n$ that takes different values on the two classes, then by the irreducibility criterion, $\Ind_{A_n}^{S_n} \chi$ is irreducible.
+\end{eg}
+
+\section{Integrality in the group algebra}
+The next big result we are going to have is that the degree of an irreducible character always divides the group order. This is not at all an obvious fact. To prove this, we make the cunning observation that character degrees and friends are not just regular numbers, but \emph{algebraic integers}.
+\begin{defi}[Algebraic integers]\index{algebraic integer}
+ A complex number $a \in \C$ is an \emph{algebraic integer} if $a$ is a root of a monic polynomial with integer coefficients. Equivalently, $a$ is such that the subring of $\C$ given by
+ \[
+ \Z[a] = \{f(a): f \in \Z[X]\}
+ \]
+ is finitely generated. Equivalently, $a$ is the eigenvalue of a matrix, all of whose entries lie in $\Z$.
+\end{defi}
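+ A concrete (editorial) example of the equivalence between the first and last characterizations: the golden ratio $\varphi = \frac{1 + \sqrt 5}{2}$ is a root of the monic integer polynomial $x^2 - x - 1$, and is accordingly an eigenvalue of the integer companion matrix of that polynomial.

```python
import math

# φ = (1+√5)/2 is an algebraic integer: a root of the monic
# integer polynomial x² - x - 1.
phi = (1 + math.sqrt(5)) / 2
assert abs(phi ** 2 - phi - 1) < 1e-9

# Equivalently, φ is an eigenvalue of a matrix with integer entries:
# the companion matrix of x² - x - 1.
M = [[0, 1],
     [1, 1]]
# det(M - φI) = 0, so φ is an eigenvalue of M.
det = (M[0][0] - phi) * (M[1][1] - phi) - M[0][1] * M[1][0]
assert abs(det) < 1e-9
```

By contrast, $\frac{1}{2}$ is rational but satisfies no monic integer polynomial, consistent with fact (ii) below.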
+We will quote some results about algebraic integers which we will not prove.
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item The algebraic integers form a subring of $\C$.
+ \item If $a \in \C$ is both an algebraic integer and rational, then $a$ is in fact an integer.
+ \item Any subring of $\C$ which is a finitely generated $\Z$-module consists of algebraic integers.
+ \end{enumerate}
+\end{prop}
+The strategy is to show that given a group $G$ and an irreducible character $\chi$, the value $\frac{|G|}{\chi(1)}$ is an algebraic integer. Since it is also a rational number, it must be an integer. So $|G|$ is a multiple of $\chi(1)$.
+
+We make the first easy step.
+\begin{prop}
+ If $\chi$ is a character of $G$ and $g \in G$, then $\chi(g)$ is an algebraic integer.
+\end{prop}
+
+\begin{proof}
+ We know $\chi(g)$ is the sum of $n$th roots of unity (where $n$ is the order of $g$). Each root of unity is an algebraic integer, since it is by definition a root of $x^n - 1$. Since algebraic integers are closed under addition, the result follows.
+\end{proof}
+
+We now have a look at the center of the group algebra $\C G$. Recall that the group algebra $\C G$ of a finite group $G$ is a complex vector space generated by elements of $G$. More precisely,
+\[
+ \C G = \left\{\sum_{g \in G} \alpha_g g: \alpha_g \in \C\right\}.
+\]
+Borrowing the multiplication of $G$, this forms a ring, hence an algebra. We now list the conjugacy classes of $G$ as
+\[
+ \{1\} = \mathcal{C}_1, \cdots, \mathcal{C}_k.
+\]
+\begin{defi}[Class sum]\index{class sum}
+ The \emph{class sum} of a conjugacy class $\mathcal{C}_j$ of a group $G$ is
+ \[
+ C_j = \sum_{g \in \mathcal{C}_j} g \in \C G.
+ \]
+\end{defi}
+
+The claim is now $C_j$ lives in the center of $\C G$ (note that the center of the group algebra is different from the group algebra of the center). Moreover, they form a basis:
+\begin{prop}
+ The class sums $C_1, \cdots, C_k$ form a basis of $Z(\C G)$. There exist non-negative \emph{integers} $a_{ij\ell}$ (with $1 \leq i, j, \ell \leq k$) with
+ \[
+ C_i C_j = \sum_{\ell = 1}^k a_{ij\ell} C_\ell.
+ \]
+\end{prop}
+The content of this proposition is not that $C_i C_j$ can be written as a sum of $C_\ell$'s. This is true because the $C_\ell$'s form a basis. The content is that these are actually integers, not arbitrary complex numbers.
+
+\begin{defi}[Class algebra/structure constants]\index{class algebra constant}\index{structure constant}
+ The constants $a_{ij\ell}$ as defined above are the \emph{class algebra constants} or \emph{structure constants}.
+\end{defi}
+
+\begin{proof}
+ It is clear from definition that $g C_j g^{-1} = C_j$. So we have $C_j \in Z(\C G)$. Also, since the $C_j$'s are produced from disjoint conjugacy classes, they are linearly independent.
+
+ Now suppose $z \in Z(\C G)$. So we can write
+ \[
+ z = \sum_{g \in G} \alpha_g g.
+ \]
+ By definition, this commutes with all elements of $\C G$. So for all $h \in G$, we must have
+ \[
+ \alpha_{h^{-1}gh} = \alpha_g.
+ \]
+ So the function $g \mapsto \alpha_g$ is constant on conjugacy classes of $G$. So we can write $\alpha_j = \alpha_g$ for $g \in \mathcal{C}_j$. Then
+ \[
+ z = \sum_{j = 1}^k \alpha_j C_j.
+ \]
+ Finally, the center $Z(\C G)$ is an algebra. So
+ \[
+ C_i C_j = \sum_{\ell = 1}^k a_{ij\ell} C_\ell
+ \]
+ for some complex numbers $a_{ij\ell}$, since the $C_j$ span. The claim is that $a_{ij\ell} \in \Z_{\geq 0}$ for all $i, j, \ell$. To see this, we fix $g_\ell \in \mathcal{C}_\ell$. Then by definition of multiplication, we know
+ \[
+ a_{ij\ell} = |\{(x, y) \in \mathcal{C}_i\times \mathcal{C}_j: xy = g_\ell\}|,
+ \]
+ which is clearly a non-negative integer.
+\end{proof}
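+ The proof's two claims (the multiplicity function of $C_iC_j$ is constant on conjugacy classes, and the coefficients are non-negative integers) are directly checkable by machine. A sketch, with $S_3$ as our chosen example:

```python
from itertools import permutations
from collections import Counter

G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

# Conjugacy classes of S_3: {1}, the transpositions, the 3-cycles.
classes, seen = [], set()
for g in G:
    if g not in seen:
        cl = {compose(compose(x, g), inverse(x)) for x in G}
        seen |= cl
        classes.append(sorted(cl))

# Product of two class sums in the group algebra, recorded as a
# multiset of group elements with multiplicities.
def class_sum_product(ci, cj):
    return Counter(compose(x, y) for x in ci for y in cj)

# The multiplicity function of C_i C_j is constant on each class, so
# C_i C_j = Σ_ℓ a_{ijℓ} C_ℓ with non-negative integer a_{ijℓ}.
for ci in classes:
    for cj in classes:
        prod = class_sum_product(ci, cj)
        for cl in classes:
            assert len({prod[g] for g in cl}) == 1

# Concretely, (C_transpositions)² = 3·C_1 + 0·C_transp + 3·C_3-cycles.
prod = class_sum_product(classes[1], classes[1])
assert prod[(0, 1, 2)] == 3 and prod[(1, 2, 0)] == 3 and prod[(0, 2, 1)] == 0
```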
+
+Let $\rho: G \to \GL(V)$ be an irreducible representation of $G$ over $\C$ affording the character $\chi$. Extending linearly, we get a map $\rho: \C G \to \End V$, an algebra homomorphism. In general, we have the following definition:
+\begin{defi}[Representation of algebra]\index{representation of algebra}
+ Let $A$ be an algebra. A \emph{representation of $A$} is a homomorphism of algebras $\rho: A \to \End V$.
+\end{defi}
+
+Let $z \in Z(\C G)$. Then $\rho(z)$ commutes with all $\rho(g)$ for $g \in G$. Hence, by Schur's lemma, we can write $\rho(z) = \lambda_z I$ for some $\lambda_z \in \C$. We then obtain the algebra homomorphism
+\begin{align*}
+ \omega_\rho = \omega_\chi= \omega: Z(\C G) &\to \C\\
+ z &\mapsto \lambda_z.
+\end{align*}
+By definition, we have $\rho(C_i) = \omega(C_i) I$. Taking traces of both sides, we know
+\[
+ \sum_{g \in \mathcal{C}_i} \chi(g) = \chi(1) \omega(C_i).
+\]
+We also know the character is a class function. So we in fact get
+\[
+ \chi(1) \omega(C_i) = |\mathcal{C}_i| \chi(g_i),
+\]
+where $g_i \in \mathcal{C}_i$ is a representative of $\mathcal{C}_i$. So we get
+\[
+ \omega(C_i) = \frac{\chi(g_i)}{\chi(1)} |\mathcal{C}_i|.
+\]
+Why should we care about this? The thing we are looking at is in fact an algebraic integer.
+\begin{lemma}
+ The values of
+ \[
+ \omega_\chi(C_i) = \frac{\chi(g_i)}{\chi(1)} |\mathcal{C}_i|
+ \]
+ are algebraic integers.
+\end{lemma}
+Note that all the time, we are requiring $\chi$ to be an irreducible character.
+\begin{proof}
+ Using the definition of $a_{ij\ell} \in \Z_{\geq 0}$, and the fact that $\omega_\chi$ is an algebra homomorphism, we get
+ \[
+ \omega_\chi(C_i) \omega_\chi(C_j) = \sum_{\ell = 1}^k a_{ij\ell} \omega_\chi (C_\ell).
+ \]
+ Thus the $\Z$-span of $\{\omega_\chi(C_j): 1 \leq j \leq k\}$ is a subring of $\C$ (note that $\omega_\chi(C_1) = 1$), and it is a finitely generated $\Z$-module by construction. So we know this consists of algebraic integers.
+\end{proof}
+That's magic.
+
+In fact, we can compute the structure constants $a_{ij\ell}$ from the character table, namely for all $i, j, \ell$, we have
+\[
+ a_{ij\ell} = \frac{|G|}{|C_G(g_i)||C_G(g_j)|}\sum_{s = 1}^k \frac{\chi_s (g_i) \chi_s(g_j) \chi_s(g_\ell^{-1})}{\chi_s(1)},
+\]
+where we sum over the irreducible characters of $G$. We will neither prove it nor use it. But if you want to try, the key idea is to use column orthogonality.
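+ For the sceptical, here is a machine verification of this formula on $S_3$ (our example; the character table of $S_3$ and the helpers are supplied by hand, and exact rational arithmetic avoids floating-point issues). The combinatorial count $a_{ij\ell} = |\{(x,y) \in \mathcal{C}_i \times \mathcal{C}_j : xy = g_\ell\}|$ agrees with the character-table expression for all $i, j, \ell$:

```python
from itertools import permutations
from fractions import Fraction

G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

# Conjugacy classes of S_3, with representatives g_i.
classes, seen = [], set()
for g in G:
    if g not in seen:
        cl = sorted({compose(compose(x, g), inverse(x)) for x in G})
        seen |= set(cl)
        classes.append(cl)
reps = [cl[0] for cl in classes]

# Character table of S_3 (rows: trivial, sign, 2-dimensional; columns:
# 1, transpositions, 3-cycles, matching the class order found above).
table = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]
cent = [len(G) // len(cl) for cl in classes]  # |C_G(g_i)| = |G|/|𝒞_i|

def col(g):  # column index of the class of g
    if g == (0, 1, 2):
        return 0
    return 1 if sum(g[i] == i for i in range(3)) == 1 else 2

for i in range(3):
    for j in range(3):
        for l in range(3):
            count = sum(1 for x in classes[i] for y in classes[j]
                        if compose(x, y) == reps[l])
            s = sum(Fraction(chi[i] * chi[j] * chi[col(inverse(reps[l]))],
                             chi[0])
                    for chi in table)
            formula = Fraction(len(G), cent[i] * cent[j]) * s
            assert formula == count
```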
+
+Finally, we get to the main result:
+\begin{thm}
+ The degree of any irreducible character of $G$ divides $|G|$, i.e.
+ \[
+ \chi_j(1) \mid |G|
+ \]
+ for each irreducible $\chi_j$.
+\end{thm}
+It is highly non-obvious why this should be true.
+
+\begin{proof}
+ Let $\chi$ be an irreducible character. By orthogonality, we have
+ \begin{align*}
+ \frac{|G|}{\chi(1)} &= \frac{1}{\chi(1)}\sum_{g \in G} \chi(g) \chi(g^{-1})\\
+ &= \frac{1}{\chi(1)} \sum_{i = 1}^k |\mathcal{C}_i| \chi(g_i) \chi(g_i^{-1})\\
+ &= \sum_{i = 1}^k \frac{|\mathcal{C}_i| \chi(g_i)}{\chi(1)} \chi(g_i^{-1}).
+ \end{align*}
+ Now we notice
+ \[
+ \frac{|\mathcal{C}_i|\chi(g_i)}{\chi(1)}
+ \]
+ is an algebraic integer, by the previous lemma. Also, $\chi(g_i^{-1})$ is an algebraic integer. So the whole mess is an algebraic integer since algebraic integers are closed under addition and multiplication.
+
+ But we also know $\frac{|G|}{\chi(1)}$ is rational. So it must be an integer!
+\end{proof}
+
+\begin{eg}
+ Let $G$ be a $p$-group. Then $\chi(1)$ is a power of $p$ for each irreducible $\chi$. In particular, if $|G| = p^2$, then $\chi(1) = 1, p$ or $p^2$. But the sum of the squares of the degrees is $|G| = p^2$, and the trivial character already contributes $1$. So it cannot be that $\chi(1) = p$ or $p^2$. So $\chi(1) = 1$ for all irreducible characters. So $G$ is abelian.
+\end{eg}
+
+\begin{eg}
+ No simple group has an irreducible character of degree $2$. Proof is left as an exercise for the reader.
+\end{eg}
+
+\section{Burnside's theorem}
+Finally, we get to Burnside's theorem.
+
+\begin{thm}[Burnside's $p^a q^b$ theorem]
+ Let $p, q$ be primes, and let $|G| = p^a q^b$, where $a, b \in \Z_{\geq 0}$, with $a + b \geq 2$. Then $G$ is not simple.
+\end{thm}
+Note that if $a + b = 1$ or $0$, then the group is trivially simple.
+
+In fact even more is true, namely that $G$ is soluble, but this follows easily from above by induction.
+
+This result is the best possible in the sense that $|A_5| = 60 = 2^2 \cdot 3 \cdot 5$, and $A_5$ is simple. So we cannot allow for three different prime factors (in fact, there are exactly $8$ simple groups whose order has exactly three prime factors). Also, if $a = 0$ or $b = 0$, then $G$ is a $p$-group, which has a non-trivial center. So these cases trivially work.
+
+Later, in 1963, Feit and Thompson proved that every group of odd order is soluble. The proof was 255 pages long. We will not prove this.
+
+In 1972, H. Bender found the first proof of Burnside's theorem without the use of representation theory, but the proof is much more complicated.
+
+This theorem follows from two lemmas, and one of them involves some Galois theory, and is hence non-examinable.
+\begin{lemma}
+ Suppose
+ \[
+ \alpha = \frac{1}{m} \sum_{j = 1}^m \lambda_j,
+ \]
+ is an algebraic integer, where $\lambda_j^n = 1$ for all $j$ and some $n$. Then either $\alpha = 0$ or $|\alpha| = 1$.
+\end{lemma}
+
+\begin{proof}[Proof (non-examinable)]
+ Observe $\alpha \in \F = \Q(\varepsilon)$, where $\varepsilon = e^{2\pi i/n}$ (since $\lambda_j \in \F$ for all $j$). We let $\mathcal{G} = \Gal(\F/\Q)$. Then
+ \[
+ \{\beta \in \F: \sigma(\beta) = \beta \text{ for all }\sigma \in \mathcal{G}\} = \Q.
+ \]
+ We define the ``norm''
+ \[
+ N(\alpha) = \prod_{\sigma \in \mathcal{G}} \sigma(\alpha).
+ \]
+ Then $N(\alpha)$ is fixed by every element $\sigma \in \mathcal{G}$. So $N(\alpha)$ is rational.
+
+ Now $N(\alpha)$ is an algebraic integer, since Galois conjugates $\sigma(\alpha)$ of algebraic integers are algebraic integers. So in fact $N(\alpha)$ is an integer. But for $\sigma \in \mathcal{G}$, we know
+ \[
+ |\sigma(\alpha)| = \left|\frac{1}{m} \sum \sigma(\lambda_j)\right| \leq 1.
+ \]
+ So if $\alpha \not= 0$, then $N(\alpha)$ is a non-zero integer, so $|N(\alpha)| \geq 1$. But each factor in the product has modulus at most $1$. So every factor must have modulus exactly $1$, and in particular $|\alpha| = 1$.
+\end{proof}
+
+\begin{lemma}
+ Suppose $\chi$ is an irreducible character of $G$, and $\mathcal{C}$ is a conjugacy class in $G$ such that $\chi(1)$ and $|\mathcal{C}|$ are coprime. Then for $g \in \mathcal{C}$, we have
+ \[
+ |\chi(g)| = \chi(1) \text{ or }0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Of course, we want to consider the quantity
+ \[
+ \alpha = \frac{\chi(g)}{\chi(1)}.
+ \]
+ Since $\chi(g)$ is the sum of $\deg \chi = \chi(1)$ many roots of unity, it suffices to show that $\alpha$ is an algebraic integer.
+
+ By B\'ezout's theorem, there exist $a, b \in \Z$ such that
+ \[
+ a \chi(1) + b |\mathcal{C}| = 1.
+ \]
+ So we can write
+ \[
+ \alpha = \frac{\chi(g)}{\chi(1)} = a\chi(g) + b\frac{\chi(g)}{\chi(1)} |\mathcal{C}|.
+ \]
+ Since $\chi(g)$ and $\frac{\chi(g)}{\chi(1)} |\mathcal{C}|$ are both algebraic integers, we know $\alpha$ is.
+\end{proof}
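+ The lemma can be seen at work in the character table of $S_3$ (supplied by hand below as an editorial check). The $2$-dimensional character has degree coprime to the transposition class size $3$, and indeed vanishes there; on the class of $3$-cycles the coprimality hypothesis fails (both are $2$), and the conclusion fails too.

```python
from math import gcd

# Character table of S_3: rows (trivial, sign, 2-dim),
# columns (1, transpositions, 3-cycles); class sizes 1, 3, 2.
table = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]
sizes = [1, 3, 2]

# Whenever gcd(χ(1), |𝒞|) = 1, we must have |χ(g)| = χ(1) or 0.
for chi in table:
    deg = chi[0]
    for cls in range(3):
        if gcd(deg, sizes[cls]) == 1:
            assert abs(chi[cls]) in (deg, 0)

# The hypothesis matters: for the 2-dimensional character on the
# 3-cycles, gcd(2, 2) = 2, and indeed |χ(g)| = 1 is neither 0 nor χ(1).
assert abs(table[2][2]) == 1
```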
+
+\begin{prop}
+ If, in a finite group $G$, the number of elements in a conjugacy class $\mathcal{C} \not= \{1\}$ is a non-trivial prime power, then $G$ is not a non-abelian simple group.
+\end{prop}
+
+\begin{proof}
+ Suppose $G$ is a non-abelian simple group, and let $1 \not= g \in G$ be living in the conjugacy class $\mathcal{C}$ of size $p^r$. If $\chi \not= 1_G$ is a non-trivial irreducible character of $G$, then either $\chi(1)$ and $|\mathcal{C}| = p^r$ are not coprime, in which case $p \mid \chi(1)$, or they are coprime, in which case $|\chi(g)| = \chi(1)$ or $\chi(g) = 0$.
+
+ However, it cannot be that $|\chi(g)| = \chi(1)$. If so, then since $\chi(g)$ is a sum of $\chi(1)$ roots of unity, equality in the triangle inequality forces all eigenvalues of $\rho(g)$ to be equal, i.e.\ $\rho(g) = \lambda I$ for some $\lambda$. So it commutes with everything, i.e.\ for all $h$, we have
+ \[
+ \rho(gh) = \rho(g)\rho(h) = \rho(h)\rho(g) = \rho(hg).
+ \]
+ Moreover, since $G$ is simple, $\rho$ must be faithful. So we must have $gh = hg$ for all $h$. So $Z(G)$ is non-trivial. This is a contradiction. So either $p \mid \chi(1)$ or $\chi(g) = 0$.
+
+ By column orthogonality applied to $\mathcal{C}$ and $1$, we get
+ \[
+ 0 = 1 + \sum_{1 \not= \chi\text{ irreducible, } p \mid \chi(1)} \chi(1) \chi(g),
+ \]
+ where we have deleted the $0$ terms. So we get
+ \[
+ -\frac{1}{p} = \sum_{\substack{\chi \not= 1\\ p \mid \chi(1)}} \frac{\chi(1)}{p} \chi(g).
+ \]
+ But this is both an algebraic integer and a rational number that is not an integer. This is a contradiction.
+\end{proof}
+
+We can now prove Burnside's theorem.
+\begin{thm}[Burnside's $p^a q^b$ theorem]\index{Burnside's theorem}\index{$p^a q^b$ theorem}
+ Let $p, q$ be primes, and let $|G| = p^a q^b$, where $a, b \in \Z_{\geq 0}$, with $a + b \geq 2$. Then $G$ is not simple.
+\end{thm}
+
+\begin{proof}
+ Let $|G| = p^a q^b$. If $a = 0$ or $b = 0$, then the result is trivial. Suppose $a, b > 0$, and suppose for contradiction that $G$ is simple. Since $|G|$ is not prime, $G$ cannot be abelian, so $Z(G) = 1$. We let $Q \in \Syl_q(G)$. Since $Q$ is a non-trivial $q$-group, we know $Z(Q)$ is non-trivial. Hence there is some $1 \not= g \in Z(Q)$. By definition of center, we know $Q \leq C_G(g)$. Also, $C_G(g)$ is not the whole of $G$, since $Z(G)$ is trivial. Since $Q \leq C_G(g)$, the index $|G : C_G(g)|$ divides $|G : Q| = p^a$. So the conjugacy class of $g$ has size
+ \[
+ |\mathcal{C}(g)| = |G : C_G(g)| = p^r
+ \]
+ for some $0 < r \leq a$. This contradicts the previous proposition. So done.
+\end{proof}
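+ As a concrete (editorial) illustration of the mechanism in the proof: $A_4$ has order $12 = 2^2 \cdot 3$, and indeed it has a conjugacy class of prime-power size $3$ (the double transpositions), whose elements together with the identity form a proper non-trivial normal subgroup, the Klein four-group $V_4$.

```python
from itertools import permutations

def compose(p, q): return tuple(p[q[i]] for i in range(4))
def inverse(p): return tuple(sorted(range(4), key=lambda i: p[i]))
def sign(p):   # parity via inversion count
    return (-1) ** sum(1 for i in range(4)
                       for j in range(i + 1, 4) if p[i] > p[j])

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
assert len(A4) == 12
e = (0, 1, 2, 3)

# The class of (0 1)(2 3) in A_4 has size 3, a prime power.
cls = {compose(compose(x, (1, 0, 3, 2)), inverse(x)) for x in A4}
assert len(cls) == 3

# V_4 = {1} ∪ (that class) is a non-trivial proper normal subgroup.
V4 = {e} | cls
assert all(compose(a, b) in V4 for a in V4 for b in V4)     # closed
assert all(compose(compose(g, v), inverse(g)) in V4
           for g in A4 for v in V4)                         # normal
```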
+This is all we've got to say about finite groups.
+\section{Representations of compact groups}
+We now consider the case of \emph{infinite groups}. It turns out infinite groups don't behave well if we don't impose additional structures. When talking about representations of infinite groups, things work well only if we impose a topological structure on the group, and require the representations to be continuous. Then we get nice results if we only concentrate on the groups that are compact. This is a generalization of our previous results, since we can give any finite group the discrete topology, and then all representations are continuous, and the group is compact due to finiteness.
+
+We start with the definition of a topological group.
+\begin{defi}[Topological group]\index{topological group}
+ A \emph{topological group} is a group $G$ which is also a topological space, and for which multiplication $G \times G \to G$ ($(h, g) \mapsto hg$) and inverse $G \to G$ ($g \mapsto g^{-1}$) are continuous maps.
+\end{defi}
+
+\begin{eg}
+ Any finite group $G$ with the discrete topology is a topological group.
+\end{eg}
+
+\begin{eg}
+ The matrix groups $\GL_n(\R)$ and $\GL_n(\C)$ are topological groups (inheriting the topology of $\R^{n^2}$ and $\C^{n^2}$).
+\end{eg}
+
+However, topological groups are often too huge. We would like to concentrate on ``small'' topological groups.
+\begin{defi}[Compact group]\index{compact group}
+ A topological group is a \emph{compact group} if it is compact as a topological space.
+\end{defi}
+There are some fairly nice examples of these.
+\begin{eg}
+ All finite groups with discrete topology are compact.
+\end{eg}
+
+\begin{eg}
+ The group $S^1 = \{z \in \C: |z| = 1\}$ under multiplication is a compact group. This is known as the \emph{circle group}\index{circle group}, for mysterious reasons. Thus the torus $S^1 \times S^1 \times \cdots \times S^1$ is also a compact group.
+\end{eg}
+
+\begin{eg}
+ The orthogonal group $\Or(n) \leq \GL_n(\R)$ is compact, and so is $\SO(n) = \{A \in \Or(n) : \det A = 1\}$. These are compact since they are closed and bounded as a subspace of $\R^{n^2}$, as the entries can be at most $1$ in magnitude. Note that $\SO(2) \cong S^1$, mapping a rotation by $\theta$ to $e^{i\theta}$.
+\end{eg}
+
+\begin{eg}
+ The unitary group $\U(n) = \{A \in \GL_n(\C): AA^\dagger = I\}$ is compact, and so is $\SU(n) = \{A \in \U(n): \det A = 1\}$.
+\end{eg}
+
+Note that we also have $\U(1) \cong \SO(2) \cong S^1$. These are not only isomorphisms of groups, but also homeomorphisms of topological spaces.
+
+The one we are going to spend most of our time on is
+\[
+ \SU(2) = \{(z_1, z_2) \in \C^2: |z_1|^2 + |z_2|^2 = 1\} \subseteq \C^2 \cong \R^4.
+\]
+We can see this is homeomorphic to $S^3$.
+
+It turns out that the only spheres on which we can define a group operation are $S^1$ and $S^3$, but we will not prove this, since this is a course on algebra, not topology.
+
+We now want to develop a representation theory of these compact groups, similar to what we've done for finite groups.
+
+\begin{defi}[Representation of topological group]\index{representation}
+ A \emph{representation} of a topological group $G$ on a finite-dimensional space $V$ is a continuous group homomorphism $G \to \GL(V)$.
+\end{defi}
+It is important that the group homomorphism is continuous. Note that for a general topological space $X$, a map $\rho: X \to \GL(V) \cong \GL_n(\C)$ is continuous if and only if the component maps $x \mapsto \rho(x)_{ij}$ are continuous for all $i, j$.
+
+The key that makes this work is the compactness of the group and the continuity of the representations. If we throw them away, then things will go very bad. To see this, we consider the simple example
+\[
+ S^1 = U(1) = \{g \in \C^\times : |g| = 1\} \cong \R/\Z,
+\]
+where the last isomorphism is an abelian group isomorphism via the map $x \mapsto e^{2 \pi ix}: \R/\Z \to S^1$.
+
+What happens if we just view $S^1$ as an abstract group, and not a topological group? We can view $\R$ as a vector space over $\Q$, and this has a basis (by Hamel's basis theorem), say, $\mathcal{A} \subseteq \R$. Moreover, we can assume the basis is ordered, and we can assume $1 \in \mathcal{A}$.
+
+As abelian groups, we then have
+\[
+ \R \cong \Q \oplus \bigoplus_{\alpha \in \mathcal{A}\setminus\{1\}} \Q\alpha.
+\]
+Then this induces an isomorphism
+\[
+ \R/\Z \cong \Q/\Z \oplus \bigoplus_{\alpha \in \mathcal{A}\setminus \{1\}} \Q\alpha.
+\]
+Thus, as abstract groups, $S^1$ has uncountably many irreducible representations, namely for each $\lambda \in \mathcal{A} \setminus \{1\}$, there exists a one-dimensional representation given by
+\[
+ \rho_\lambda(e^{2\pi i \mu}) =
+ \begin{cases}
+ 1 & \mu \not\in \Q \lambda\\
+ e^{2\pi i\mu} & \mu \in \Q\lambda
+ \end{cases}
+\]
+We see $\rho_\lambda = \rho_{\lambda'}$ if and only if $\Q \lambda = \Q \lambda'$. Hence there are indeed uncountably many of these. There are in fact even \emph{more} irreducible representations we haven't listed. This is bad.
+
+The idea is then to topologize $S^1$ as a subset of $\C$, and study how it acts naturally on complex spaces in a continuous way. Then we can ensure that we have at most countably many representations. In fact, we can characterize all irreducible representations in a rather nice way:
+
+\begin{thm}
+ Every one-dimensional (continuous) representation of $S^1$ is of the form
+ \[
+ \rho: z \mapsto z^n
+ \]
+ for some $n \in \Z$.
+\end{thm}
+This is a nice countable family of representations. To prove this, we need two lemmas from real analysis.
+
+\begin{lemma}
+ If $\psi: (\R, +) \to (\R, +)$ is a continuous group homomorphism, then there exists a $c \in \R$ such that
+ \[
+ \psi(x) = cx
+ \]
+ for all $x \in \R$.
+\end{lemma}
+
+\begin{proof}
+ Given $\psi: (\R, +) \to (\R, +)$ continuous, we let $c = \psi(1)$. We now claim that $\psi(x) = cx$.
+
+ Since $\psi$ is a homomorphism, for every $n \in \Z_{\geq 0}$ and $x \in \R$, we know
+ \[
+ \psi(nx) = \psi(x + \cdots + x) = \psi(x) + \cdots + \psi(x) = n \psi(x).
+ \]
+ In particular, when $x = 1$, we know $\psi(n) = cn$. Also, we have
+ \[
+ \psi(-n) = -\psi(n) = -cn.
+ \]
+ Thus $\psi(n) = cn$ for all $n \in \Z$.
+
+ We now put $x = \frac{m}{n} \in \Q$. Then we have
+ \[
+ n\psi(x) = \psi(nx) = \psi(m) = cm.
+ \]
+ So we must have
+ \[
+ \psi\left(\frac{m}{n}\right) = c \frac{m}{n}.
+ \]
+ So we get $\psi(q) = cq$ for all $q \in \Q$. But $\Q$ is dense in $\R$, and $\psi$ is continuous. So we must have $\psi(x) = cx$ for all $x \in \R$.
+\end{proof}
+
+\begin{lemma}
+ Continuous homomorphisms $\varphi: (\R, +) \to S^1$ are of the form
+ \[
+ \varphi(x) = e^{icx}
+ \]
+ for some $c \in \R$.
+\end{lemma}
+
+\begin{proof}
+ Let $\varepsilon: (\R, +) \to S^1$ be defined by $x \mapsto e^{ix}$. This homomorphism wraps the real line around $S^1$ with period $2\pi$.
+
+ We now claim that given any continuous \emph{function} $\varphi: \R \to S^1$ such that $\varphi(0) = 1$, there exists a unique continuous \emph{lifting} homomorphism $\psi: \R \to \R$ such that
+ \[
+ \varepsilon \circ \psi = \varphi,\quad \psi(0) = 0.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-4, 0) -- (4, 0) node [pos=0.1, above] {$(\R, +)$};
+ \node [circ] {};
+ \node [above] {0};
+
+ \draw [samples=80, domain=0:6] plot [smooth] ({-3 + 0.3 * cos (360 * \x) + \x / 2}, {-2 + 0.6 * sin (360 * \x)});
+ \draw (3, -2) ellipse (0.3 and 0.6);
+
+ \node [below] at (-1.5, -2.6) {$(\R, +)$};
+ \node [below] at (3, -2.6) {$S^1$};
+
+ \draw [->] (1, -2) -- +(1, 0) node [pos=0.5, above] {$\varepsilon$};
+ \draw [->] (1, -0.5) -- (2.33, -1.5) node [pos=0.5, anchor = south west] {$\varphi$};
+ \draw [->, dashed] (-1.5, -0.2) -- (-1.5, -1.2) node [pos=0.5, right] {$\exists ! \psi$};
+
+ \end{tikzpicture}
+ \end{center}
+ The lifting is constructed by starting with $\psi(0) = 0$, and then extending a small interval at a time to get a continuous map $\R \to \R$. We will not go into the details. Alternatively, this follows from the lifting criterion from IID Algebraic Topology.
+
+ We now claim that if in addition $\varphi$ is a homomorphism, then so is its continuous lifting $\psi$. If this is true, then we can conclude that $\psi(x) = cx$ for some $c\in \R$. Hence
+ \[
+ \varphi(x) = e^{icx}.
+ \]
+ To show that $\psi$ is indeed a homomorphism, we have to show that $\psi(x + y) = \psi(x) + \psi(y)$.
+
+ By definition, we know
+ \[
+ \varphi(x + y) = \varphi(x) \varphi(y).
+ \]
+ By definition of $\psi$, this means
+ \[
+ \varepsilon(\psi(x + y) - \psi(x) - \psi(y)) = 1.
+ \]
+ We now look at our definition of $\varepsilon$ to get
+ \[
+ \psi(x + y) - \psi(x) - \psi(y) = 2 k \pi
+ \]
+ for some integer $k \in \Z$, depending \emph{continuously} on $x$ and $y$. But $k$ can only be an integer. So it must be constant. Now we pick our favorite $x$ and $y$, namely $x = y = 0$. Then we find $k = 0$. So we get
+ \[
+ \psi(x + y) = \psi(x) + \psi(y).
+ \]
+ So $\psi$ is a group homomorphism.
+\end{proof}
+
+With these two lemmas, we can prove our characterization of the (one-dimensional) representations of $S^1$.
+\begin{thm}
+ Every one-dimensional (continuous) representation $S^1$ is of the form
+ \[
+ \rho: z \mapsto z^n
+ \]
+ for some $n \in \Z$.
+\end{thm}
+
+\begin{proof}
+ Let $\rho: S^1 \to \C^\times$ be a continuous representation. We now claim that $\rho$ actually maps $S^1$ to $S^1$. Since $S^1$ is compact, its image $\rho(S^1)$ is compact, hence closed and bounded. Also,
+ \[
+ \rho(z^n) = (\rho(z))^n
+ \]
+ for all $n \in \Z$. Thus for each $z \in S^1$, if $|\rho(z)| > 1$, then the sequence $\rho(z^n) = \rho(z)^n$ is unbounded. Similarly, if $|\rho(z)| < 1$, then $\rho(z^{-n})$ is unbounded. So we must have $\rho(S^1) \subseteq S^1$. So we get a continuous homomorphism
+ \begin{align*}
+ \R &\to S^1\\
+ x &\mapsto \rho(e^{ix}).
+ \end{align*}
+ So we know there is some $c \in \R$ such that
+ \[
+ \rho(e^{ix}) = e^{icx}.
+ \]
+ Now in particular,
+ \[
+ 1 = \rho(e^{2 \pi i}) = e^{2\pi i c}.
+ \]
+ This forces $c \in \Z$. Putting $n = c$, we get
+ \[
+ \rho(z) = z^n.\qedhere
+ \]
+\end{proof}
+That has taken nearly an hour, and we've used two not-so-trivial facts from analysis. So we actually had to work quite hard to get this. But the result is good. We have a \emph{complete} list of representations, and we don't have to fiddle with things like characters.
+
+Our next objective is to repeat this for $\SU(2)$, and this time we cannot just get away with doing some analysis. We have to do representation theory properly. So we study the general theory of representations of complex spaces.
+
+In studying finite groups, we often took the ``average'' over the group via the operation $\frac{1}{|G|} \sum_{g \in G}\text{something}$. Of course, we can't do this now, since the group is infinite. As we all know, the continuous analogue of summing is called integration. Informally, we want to be able to write something like $\int_G \;\d g$.
+
+This is actually a measure, called the ``Haar measure'', which always exists as long as the group is compact and Hausdorff. However, we will not need or use this result, since we can construct it by hand for $S^1$ and $\SU(2)$.
+
+\begin{defi}[Haar measure]\index{Haar measure}
+ Let $G$ be a topological group, and let
+ \[
+ \mathcal{C}(G) = \{f: G \to \C: f\text{ is continuous}, f(gxg^{-1}) = f(x)\text{ for all }g, x \in G\}.
+ \]
+ A non-trivial linear functional $\int_G: \mathcal{C}(G) \to \C$, written as
+ \[
+ \int_G f = \int_G f(g) \;\d g
+ \]
+ is called a \emph{Haar measure} if
+ \begin{enumerate}
+ \item It satisfies the normalization condition
+ \[
+ \int_G 1 \;\d g = 1
+ \]
+ so that the ``total volume'' is $1$.
+ \item It is left and right translation invariant, i.e.
+ \[
+ \int_G f(xg)\;\d g = \int_G f(g)\;\d g = \int_G f(gx) \;\d g
+ \]
+ for all $x \in G$.
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}
+ For a finite group $G$, we can define
+ \[
+ \int_G f(g)\;\d g = \frac{1}{|G|} \sum_{g \in G} f(g).
+ \]
+ The division by the group order is the thing we need to make the normalization hold.
+\end{eg}
+
+\begin{eg}
+ Let $G = S^1$. Then we can have
+ \[
+ \int_G f(g)\;\d g = \frac{1}{2\pi} \int_0^{2\pi} f(e^{i\theta})\;\d \theta.
+ \]
+ Again, division by $2\pi$ is the normalization required.
+\end{eg}
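Since we now have two concrete Haar measures, it is worth sanity-checking the defining properties. The following is a small numerical sketch (entirely our own, with ad hoc helper names), approximating $\int_{S^1}$ by an equally spaced Riemann sum:

```python
import cmath

# A numerical sketch (our own) of the Haar measure on S^1: approximate the
# integral of f by averaging over equally spaced points of the circle.
def haar_integral_S1(f, steps=10000):
    total = 0
    for k in range(steps):
        total += f(cmath.exp(2j * cmath.pi * k / steps))
    return total / steps  # the 1/(2 pi) normalization is the averaging itself

# Normalization: the constant function 1 integrates to 1.
print(abs(haar_integral_S1(lambda z: 1) - 1) < 1e-9)  # True

# Translation invariance: integrating f(xg) agrees with integrating f(g).
x = cmath.exp(0.7j)
f = lambda z: (z ** 3).real ** 2
print(abs(haar_integral_S1(lambda z: f(x * z)) - haar_integral_S1(f)) < 1e-6)  # True
```

For trigonometric polynomials like the $f$ used here, equally spaced sampling is in fact exact up to floating point error.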
+
+We have explicitly constructed them here, but there is a theorem that guarantees the existence of such a measure for sufficiently nice topological groups.
+
+\begin{thm}
+ Let $G$ be a compact Hausdorff topological group. Then there exists a unique Haar measure on $G$.
+\end{thm}
+From now on, the word ``compact'' will mean ``compact and Hausdorff'', so that the conditions of the above theorem hold.
+
+We will not prove this theorem, and just assume it to be true. If you don't like this, whenever you see ``compact group'', replace it with ``compact group with Haar measure''. We will explicitly construct it for $\SU(2)$ later, which is all we really care about.
+
+Once we know the existence of such a thing, most of the things we know for finite groups hold for compact groups, as long as we replace the sums with the appropriate integrals.
+
+\begin{cor}[Weyl's unitary trick]
+ Let $G$ be a compact group. Then every representation $(\rho, V)$ has a $G$-invariant Hermitian inner product.
+\end{cor}
+
+\begin{proof}
+ As for the finite case, take any inner product $(\ph, \ph)$ on $V$, then define a new inner product by
+ \[
+ \bra \mathbf{v}, \mathbf{w}\ket = \int_G (\rho(g)\mathbf{v}, \rho(g)\mathbf{w})\;\d g.
+ \]
+ Then this is a $G$-invariant inner product.
+\end{proof}
+
+\begin{thm}[Maschke's theorem]
+ Let $G$ be a compact group. Then every representation of $G$ is completely reducible.
+\end{thm}
+
+\begin{proof}
+ Given a representation $(\rho, V)$, choose a $G$-invariant inner product. If $V$ is not irreducible, let $W \leq V$ be a proper non-zero subrepresentation. Then $W^\perp$ is also $G$-invariant, and
+ \[
+ V = W \oplus W^\perp.
+ \]
+ Then the result follows by induction.
+\end{proof}
+
+We can use the Haar measure to endow $\mathcal{C}(G)$ with an inner product given by
+\[
+ \bra f, f'\ket = \int_G \overline{f(g)} f'(g) \;\d g.
+\]
+\begin{defi}[Character]
+ If $\rho: G \to \GL(V)$ is a representation, then the \emph{character} $\chi_\rho = \tr \rho$ is a continuous class function, since each component $\rho(g)_{ij}$ is continuous.
+\end{defi}
+
+So characters make sense, and we can ask if all the things about finite groups hold here. For example, we can ask about orthogonality.
+
+Schur's lemma also holds, with the same proof, since we didn't really need finiteness there. Using that, we can prove the following:
+
+\begin{thm}[Orthogonality]
+ Let $G$ be a compact group, and $V$ and $W$ be irreducible representations of $G$. Then
+ \[
+ \bra \chi_V, \chi_W\ket =
+ \begin{cases}
+ 1 & V \cong W\\
+ 0 & V \not\cong W
+ \end{cases}.
+ \]
+\end{thm}
+
+Now do irreducible characters form a basis for $\mathcal{C}(G)$?
+\begin{eg}
+ We take the only (infinite) compact group we know about --- $G = S^1$. We have found that the one-dimensional representations are
+ \[
+ \rho_n: z \mapsto z^n
+ \]
+ for $n \in \Z$. As $S^1$ is abelian, these are all the (continuous) irreducible representations --- given \emph{any} representation $\rho$, we can find a simultaneous eigenvector for each $\rho(g)$.
+
+ The ``character table of $S^1$'' has rows $\chi_n$ indexed by $\Z$, with $\chi_n(e^{i\theta}) = e^{in\theta}$.
+
+ Now given any representation of $S^1$, say $V$, we can break $V$ as a direct sum of $1$-dimensional subrepresentations. So the character $\chi_V$ of $V$ is of the form
+ \[
+ \chi_V(z) = \sum_{n \in \Z} a_n z^n,
+ \]
+ where $a_n \in \Z_{\geq 0}$, and only finitely many $a_n$ are non-zero (since we are assuming finite-dimensionality).
+
+ Actually, $a_n$ is the number of copies of $\rho_n$ in the decomposition of $V$. We can find out the value of $a_n$ by computing
+ \[
+ a_n = \bra \chi_n, \chi_V\ket = \frac{1}{2\pi} \int_0^{2\pi} e^{-in\theta}\chi_V(e^{i\theta})\;\d \theta.
+ \]
+ Hence we know
+ \[
+ \chi_V(e^{i\theta}) = \sum_{n \in \Z} \left(\frac{1}{2\pi} \int_0^{2\pi} \chi_V(e^{i\theta'}) e^{-in\theta'}\;\d \theta'\right) e^{in\theta}.
+ \]
+ This is really just a Fourier decomposition of $\chi_V$. This gives a decomposition of $\chi_V$ into irreducible characters, and the ``Fourier mode'' $a_n$ is the number of each irreducible character occurring in this decomposition.
+
+ It is possible to show that $\chi_n$ form a complete orthonormal set in the Hilbert space $L^2(S^1)$, i.e.\ square-integrable functions on $S^1$. This is the Peter-Weyl theorem, and is highly non-trivial.
+\end{eg}
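The Fourier extraction of the $a_n$ can also be carried out numerically. Here is a sketch (our own; the sample character $\chi_V(z) = (z + z^{-1})^2 = z^2 + 2 + z^{-2}$ is chosen purely for illustration):

```python
import cmath

# A sketch (our own) of recovering a_n = <chi_n, chi_V> by numerical
# integration, for the illustrative character chi_V(z) = (z + 1/z)^2.
def multiplicity(n, chi, steps=4096):
    total = 0
    for k in range(steps):
        theta = 2 * cmath.pi * k / steps
        total += cmath.exp(-1j * n * theta) * chi(cmath.exp(1j * theta))
    return round((total / steps).real)

chi_V = lambda z: (z + 1 / z) ** 2
print([multiplicity(n, chi_V) for n in range(-2, 3)])  # [1, 0, 2, 0, 1]
```

This recovers $a_{-2} = a_2 = 1$ and $a_0 = 2$, matching the expansion $z^2 + 2 + z^{-2}$.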
+
+\subsection{Representations of \texorpdfstring{$\SU(2)$}{SU(2)}}
+In the rest of the time, we would like to talk about $G = \SU(2)$.
+
+Recall that
+\[
+ \SU(2) = \{A \in \GL_2(\C): A^\dagger A = I, \det A = 1\}.
+\]
+Note that if
+\[
+ A =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix} \in \SU(2),
+\]
+then since $\det A = 1$, we have
+\[
+ A^{-1} =
+ \begin{pmatrix}
+ d & -b\\
+ -c & a
+ \end{pmatrix}.
+\]
+So we need $d = \bar{a}$ and $c = -\bar{b}$. Moreover, we have
+\[
+ a\bar{a} + b\bar{b} = 1.
+\]
+Hence we can write $\SU(2)$ explicitly as
+\[
+ G = \left\{
+ \begin{pmatrix}
+ a & b\\
+ -\bar{b} & \bar{a}
+ \end{pmatrix}: a, b \in \C, |a|^2 + |b|^2 = 1
+ \right\}.
+\]
+Topologically, we know $G \cong S^3 \subseteq \C^2 \cong \R^4$.
+
+Instead of thinking about $\C^2$ in the usual vector space way, we can think of it as a subalgebra of $M_2(\C)$ via
+\[
+ \H = \left\{
+ \begin{pmatrix}
+ z & w\\
+ -\bar{w} & \bar{z}
+ \end{pmatrix}:
+ w, z \in \C\right\} \leq M_2(\C).
+\]
+This is known as \emph{Hamilton's quaternion algebra}\index{Hamilton's quaternion algebra}\index{quaternion algebra}. Then $\H$ is a 4-dimensional Euclidean space (two real components from each of $z$ and $w$), with a norm on $\H$ given by
+\[
+ \|A\|^2 = \det A.
+\]
+We now see that $\SU(2) \subseteq \H$ is exactly the unit sphere $\|A\|^2 = 1$ in $\H$. If $A \in \SU(2)$ and $x \in \H$, then $\|Ax\|^2 = \det (Ax) = \det A \det x = \|x\|^2$. So elements of $G$ act as isometries on the space.
+
+After normalization (by $\frac{1}{2\pi^2}$), the usual integration of functions on $S^3$ defines a Haar measure on $G$. It is an exercise on the last example sheet to write this out explicitly.
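The claims about the norm are easy to verify by direct computation. Below is a small numerical check (our own, and not one of the example sheet exercises) that $\|A\|^2 = \det A = |z|^2 + |w|^2$, and that multiplication by an element of $\SU(2)$ preserves the norm:

```python
# A numerical check (our own) that the quaternion norm ||A||^2 = det A equals
# |z|^2 + |w|^2, and that multiplication by an element of SU(2) preserves it,
# so SU(2) acts by isometries on H. A quaternion is stored as its first row (z, w).
def det(q):
    z, w = q
    return (z * z.conjugate() + w * w.conjugate()).real  # = |z|^2 + |w|^2

def mul(p, q):
    # product of the matrices ((a, b), (-conj b, conj a)), ((z, w), (-conj w, conj z)),
    # read off from its first row; the product is again of quaternion form
    (a, b), (z, w) = p, q
    return (a * z - b * w.conjugate(), a * w + b * z.conjugate())

A = (0.6 + 0.8j, 0)        # |0.6 + 0.8i|^2 = 1, so A lies in SU(2)
x = (1 + 2j, 3 - 1j)       # an arbitrary quaternion, of norm 5 + 10 = 15
print(abs(det(A) - 1) < 1e-12, abs(det(mul(A, x)) - det(x)) < 1e-9)  # True True
```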
+
+We now look at conjugacy in $G$. We let
+\[
+ T = \left\{
+ \begin{pmatrix}
+ a & 0\\
+ 0 & \bar{a}
+ \end{pmatrix}: a \in \C, |a|^2 = 1
+ \right\} \cong S^1.
+\]
+This is a \term{maximal torus} in $G$, and it plays a fundamental role, since we happen to know about $S^1$. We also have a favorite element
+\[
+ s =
+ \begin{pmatrix}
+ 0 & 1\\
+ -1 & 0
+ \end{pmatrix} \in \SU(2).
+\]
+We now prove some easy linear algebra results about $\SU(2)$.
+\begin{lemma}[$\SU(2)$-conjugacy classes]\leavevmode
+ \begin{enumerate}
+ \item Let $t \in T$. Then $sts^{-1} = t^{-1}$.
+ \item $s^2 = -I \in Z(\SU(2))$.
+ \item The normalizer
+ \[
+ N_G(T) = T \cup sT =
+ \left\{
+ \begin{pmatrix}
+ a & 0\\
+ 0 & \bar{a}
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0 & a\\
+ -\bar{a} & 0
+ \end{pmatrix}: a \in \C, |a| = 1\right\}.
+ \]
+ \item Every conjugacy class $\mathcal{C}$ of $\SU(2)$ contains an element of $T$, i.e.\ $\mathcal{C} \cap T \not= \emptyset$.
+
+ \item In fact,
+ \[
+ \mathcal{C} \cap T = \{t, t^{-1}\}
+ \]
+ for some $t \in T$, and $t = t^{-1}$ if and only if $t = \pm I$, in which case $\mathcal{C} = \{t\}$.
+ \item There is a bijection
+ \[
+ \{\text{conjugacy classes in }\SU(2)\} \leftrightarrow [-1, 1],
+ \]
+ given by
+ \[
+ A \mapsto \frac{1}{2} \tr A.
+ \]
+ We can see that if
+ \[
+ A =
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \bar{\lambda}
+ \end{pmatrix},
+ \]
+ then
+ \[
+ \frac{1}{2}\tr A = \frac{1}{2}(\lambda + \bar{\lambda}) = \Re(\lambda).
+ \]
+ \end{enumerate}
+\end{lemma}
+Note that what (iv) and (v) really say is that matrices in $\SU(2)$ are diagonalizable via conjugation within $\SU(2)$.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Write it out.
+ \item Write it out.
+ \item Direct verification.
+ \item It is well-known from linear algebra that every unitary matrix $X$ has an orthonormal basis of eigenvectors, and hence is conjugate in $U(2)$ to one in $T$, say
+ \[
+ Q X Q^\dagger \in T.
+ \]
+ We now want to force $Q$ into $\SU(2)$, i.e.\ make $Q$ have determinant $1$.
+
+ We put $\delta = \det Q$. Since $Q$ is unitary, i.e.\ $QQ^\dagger = I$, we know $|\delta| = 1$. So we let $\varepsilon$ be a square root of $\delta$, and define
+ \[
+ Q_1 = \varepsilon^{-1} Q.
+ \]
+ Then we have
+ \[
+ Q_1 X Q_1^{\dagger} \in T.
+ \]
+ \item We let $g \in G$, and suppose $g \in \mathcal{C}$. If $g = \pm I$, then $\mathcal{C} \cap T = \{g\}$. Otherwise, $g$ has two distinct eigenvalues $\lambda, \lambda^{-1}$. Note that the two eigenvalues must be inverses of each other, since their product is $\det g = 1$. Then we know
+ \[
+ \mathcal{C} = \left\{h
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda^{-1}
+ \end{pmatrix}h^{-1} : h \in G
+ \right\}.
+ \]
+ Thus we find
+ \[
+ \mathcal{C} \cap T = \left\{
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda^{-1}
+ \end{pmatrix},
+ \begin{pmatrix}
+ \lambda^{-1} & 0\\
+ 0 & \lambda
+ \end{pmatrix}
+ \right\}.
+ \]
+ This is true since eigenvalues are preserved by conjugation, so if
+ \[
+ \begin{pmatrix}
+ \mu & 0\\
+ 0 & \mu^{-1}
+ \end{pmatrix} \in \mathcal{C} \cap T,
+ \]
+ then $\{\mu, \mu^{-1}\} = \{\lambda, \lambda^{-1}\}$. Also, we can get the second matrix from the first by conjugating with $s$.
+ \item Consider the map
+ \[
+ \frac{1}{2} \tr: \{\text{conjugacy classes}\} \to [-1, 1].
+ \]
+ By (v), matrices are conjugate in $G$ iff they have the same set of eigenvalues. Now
+ \[
+ \frac{1}{2}\tr
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda^{-1}
+ \end{pmatrix}
+ = \frac{1}{2}(\lambda + \bar{\lambda}) = \Re(\lambda) = \cos \theta,
+ \]
+ where $\lambda = e^{i\theta}$. Hence the map is a surjection onto $[-1, 1]$.
+
+ Now we have to show it is injective. This is also easy. If $g$ and $g'$ have the same image, i.e.
+ \[
+ \frac{1}{2} \tr g = \frac{1}{2} \tr g',
+ \]
+ then $g$ and $g'$ have the same characteristic polynomial, namely
+ \[
+ x^2 - (\tr g)x + 1.
+ \]
+ Hence they have the same eigenvalues, and hence they are similar.\qedhere
+ \end{enumerate}
+\end{proof}
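Parts (v) and (vi) can be illustrated numerically: conjugation preserves the half-trace, which for an element of $T$ with eigenvalue $\lambda = e^{i\theta}$ is $\cos \theta$. A sketch (our own, with the matrices chosen ad hoc):

```python
import cmath

# A numerical illustration (our own) of (v) and (vi): conjugating an element
# of the torus T by an element of SU(2) preserves the half-trace cos(theta).
def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def su2(a, b):
    # the matrix ((a, b), (-conj b, conj a))
    return [[a, b], [-b.conjugate(), a.conjugate()]]

theta = 0.9
t = su2(cmath.exp(1j * theta), 0)                    # an element of the torus T
g = su2(0.6 + 0.48j, 0.64j)                          # |a|^2 + |b|^2 = 1, so g in SU(2)
g_inv = [[g[1][1], -g[0][1]], [-g[1][0], g[0][0]]]   # inverse, using det g = 1

conj = mat_mul(mat_mul(g, t), g_inv)
half_trace = (conj[0][0] + conj[1][1]).real / 2
print(abs(half_trace - cmath.cos(theta).real) < 1e-9)  # True
```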
+
+We now write, for $t \in [-1, 1]$,
+\[
+ \mathcal{C}_t = \left\{g \in G: \frac{1}{2} \tr g = t\right\}.
+\]
+In particular, we have
+\[
+ \mathcal{C}_1 = \{I\}, \quad \mathcal{C}_{-1} = \{-I\}.
+\]
+\begin{prop}
+ For $t \in (-1, 1)$, the class $\mathcal{C}_t \cong S^2$ as topological spaces.
+\end{prop}
+This is not really needed, but is a nice thing to know.
+\begin{proof}
+ Exercise! % exercise for the reader
+\end{proof}
+
+We now move on to classify the irreducible representations of $\SU(2)$.
+
+We let $V_n$ be the complex space of all homogeneous polynomials of degree $n$ in variables $x, y$, i.e.
+\[
+ V_n = \{r_0 x^n + r_1 x^{n - 1}y + \cdots + r_n y^n: r_0, \cdots, r_n \in \C\}.
+\]
+This is an $(n + 1)$-dimensional complex vector space with basis $x^n, x^{n - 1}y, \cdots, y^n$.
+
+We want to get an action of $\SU(2)$ on $V_n$. It is easier to think about the action of $\GL_2(\C)$ in general, and then restrict to $\SU(2)$. We define the action of $\GL_2(\C)$ on $V_n$ by
+\[
+ \rho_n: \GL_2(\C) \to \GL(V_n)
+\]
+given by the rule
+\[
+ \rho_n\left(
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}\right) f(x, y) = f(ax + cy, bx + dy).
+\]
+In other words, we have
+\[
+ \rho_n\left(
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}\right) f\left(
+ \begin{pmatrix}
+ x & y
+ \end{pmatrix}\right) = f\left(
+ \begin{pmatrix}
+ x & y
+ \end{pmatrix}
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}\right),
+\]
+where the multiplication in $f$ is matrix multiplication.
+
+\begin{eg}
+ When $n = 0$, then $V_0 \cong \C$, and $\rho_0$ is the trivial representation.
+
+ When $n = 1 $, then this is the natural two-dimensional representation, and
+ \[
+ \rho_1\left(
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \right)
+ \]
+ has matrix
+ \[
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \]
+ with respect to the standard basis $\{x, y\}$ of $V_1 = \C^2$.
+
+ More interestingly, when $n = 2$, we know
+ \[
+ \rho_2\left(
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix}
+ \right)
+ \]
+ has matrix
+ \[
+ \begin{pmatrix}
+ a^2 & ab & b^2\\
+ 2ac & ad + bc & 2bd\\
+ c^2 & cd & d^2
+ \end{pmatrix},
+ \]
+ with respect to the basis $x^2, xy, y^2$ of $V_2 = \C^3$. We obtain the matrix by computing, e.g.
+ \[
+ \rho_2(g) (x^2) = (ax + cy)^2 = a^2 x^2 + 2ac xy + c^2 y^2.
+ \]
+\end{eg}
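The matrix of $\rho_n$ can be computed mechanically for any $n$ by expanding $(ax + cy)^{n - j}(bx + dy)^j$. The following sketch (helper names are our own) reproduces the $n = 2$ matrix above:

```python
from math import comb

# A sketch (our own) of the matrix of rho_n: the column for the basis vector
# x^{n-j} y^j holds the coefficients of (a x + c y)^{n-j} (b x + d y)^j.
def rho_matrix(n, a, b, c, d):
    cols = []
    for j in range(n + 1):
        coeff = [0] * (n + 1)  # coefficient of x^{n-i} y^i sits in slot i
        for p in range(n - j + 1):      # choose c*y from the first factor p times
            for q in range(j + 1):      # choose d*y from the second factor q times
                coeff[p + q] += (comb(n - j, p) * a ** (n - j - p) * c ** p
                                 * comb(j, q) * b ** (j - q) * d ** q)
        cols.append(coeff)
    # transpose so that rows are indexed by the target monomial x^{n-i} y^i
    return [[cols[j][i] for j in range(n + 1)] for i in range(n + 1)]

a, b, c, d = 2, 3, 5, 7
expected = [[a * a, a * b, b * b],
            [2 * a * c, a * d + b * c, 2 * b * d],
            [c * c, c * d, d * d]]
print(rho_matrix(2, a, b, c, d) == expected)  # True
```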
+
+Now we know $\SU(2) \leq \GL_2(\C)$. So we can view $V_n$ as a representation of $\SU(2)$ by restriction.
+
+Now we've got some representations. The claim is that these are all the irreducibles. Before we prove that, we look at the character of these things.
+
+\begin{lemma}
+ A continuous class function $f: G \to \C$ is determined by its restriction to $T$, and $f|_T$ is even, i.e.
+ \[
+ f\left(
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda^{-1}
+ \end{pmatrix}
+ \right) =
+ f\left(
+ \begin{pmatrix}
+ \lambda^{-1} & 0\\
+ 0 & \lambda
+ \end{pmatrix}
+ \right).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Each conjugacy class in $\SU(2)$ meets $T$. So a class function is determined by its restriction to $T$. Evenness follows from the fact that the two elements are conjugate.
+\end{proof}
+
+In particular, a character of a representation $(\rho, V)$ is also an even function $\chi_\rho: S^1 \to \C$.
+
+\begin{lemma}
+ If $\chi$ is a character of a representation of $\SU(2)$, then its restriction $\chi|_T$ is a Laurent polynomial, i.e.\ a finite $\N$-linear combination of functions
+ \[
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda^{-1}
+ \end{pmatrix} \mapsto \lambda^n
+ \]
+ for $n \in \Z$.
+\end{lemma}
+
+\begin{proof}
+ If $V$ is a representation of $\SU(2)$, then $\Res_T^{\SU(2)} V$ is a representation of $T$, and its character $\Res_T^{\SU(2)} \chi$ is the restriction of $\chi_V$ to $T$. But every representation of $T$ has its character of the given form. So done.
+\end{proof}
+
+\begin{notation}
+ We write $\N[z, z^{-1}]$ for the set of all Laurent polynomials, i.e.
+ \[
+ \N[z, z^{-1}] = \left\{\sum a_n z^n: a_n \in \N,\text{ only finitely many $a_n$ non-zero}\right\}.
+ \]
+ We further write
+ \[
+ \N[z, z^{-1}]_{\mathrm{ev}} = \{f \in \N[z, z^{-1}]: f(z) = f(z^{-1})\}.
+ \]
+\end{notation}
+Then by our lemma, for every continuous representation $V$ of $\SU(2)$, the character $\chi_V \in \N[z, z^{-1}]_{\mathrm{ev}}$ (by identifying it with its restriction to $T$).
+
+We now actually calculate the character $\chi_n$ of $(\rho_n, V_n)$ as a representation of $\SU(2)$. We have
+\[
+ \chi_{V_n}(g) = \tr \rho_n (g),
+\]
+where
+\[
+ g \sim
+ \begin{pmatrix}
+ z & 0\\
+ 0 & z^{-1}
+ \end{pmatrix} \in T.
+\]
+Then we have
+\[
+ \rho_n \left(
+ \begin{pmatrix}
+ z & 0\\
+ 0 & z^{-1}
+ \end{pmatrix}\right)(x^i y^j) = (zx)^i(z^{-1}y)^j = z^{i - j} x^i y^j.
+\]
+So each $x^i y^j$ is an eigenvector of $\rho_n (g)$, with eigenvalue $z^{i - j}$. So we know $\rho_n (g)$ has a matrix
+\[
+ \begin{pmatrix}
+ z^n\\
+ & z^{n - 2}\\
+ & & \ddots\\
+ & & & z^{2 - n}\\
+ & & & & z^{-n}
+ \end{pmatrix},
+\]
+with respect to the standard basis. Hence the character is just
+\[
+ \chi_n\left(
+ \begin{pmatrix}
+ z & 0\\
+ 0 & z^{-1}
+ \end{pmatrix}\right) = z^n + z^{n - 2} + \cdots + z^{-n} = \frac{z^{n + 1} - z^{-(n + 1)}}{z - z^{-1}},
+\]
+where the last expression is valid unless $z = \pm 1$.
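The closed form is just a geometric series, and can be checked numerically (a sketch of our own):

```python
import cmath

# Check (our own sketch) that the geometric-series closed form agrees with
# z^n + z^{n-2} + ... + z^{-n} away from z = +-1.
def chi_sum(n, z):
    return sum(z ** (n - 2 * k) for k in range(n + 1))

def chi_closed(n, z):
    return (z ** (n + 1) - z ** (-(n + 1))) / (z - 1 / z)

z = cmath.exp(0.3j)
print(all(abs(chi_sum(n, z) - chi_closed(n, z)) < 1e-9 for n in range(8)))  # True
```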
+
+We can now state the result we are aiming for:
+\begin{thm}
+ The representations $\rho_n : \SU(2) \to \GL(V_n)$ of dimension $n + 1$ are irreducible for $n \in \Z_{\geq 0}$.
+\end{thm}
+Again, we get a complete list of all the irreducible representations (completeness will be proven later), given in a really nice form. This is spectacular.
+
+\begin{proof}
+ Let $0 \not= W \leq V_n$ be a $G$-invariant subspace, i.e.\ a subrepresentation of $V_n$. We will show that $W = V_n$.
+
+ All we know about $W$ is that it is non-zero. So we take some non-zero vector of $W$.
+ \begin{claim}
+ Let
+ \[
+ 0 \not= w = \sum_{j = 0}^n r_j x^{n - j}y^j \in W.
+ \]
+ Since this is non-zero, there is some $i$ such that $r_i \not= 0$. The claim is that $x^{n - i}y^i \in W$.
+ \end{claim}
+ We argue by induction on the number of non-zero coefficients $r_j$. If there is only one non-zero coefficient, then we are already done, as $w$ is a non-zero scalar multiple of $x^{n - i}y^i$.
+
+ So assume there is more than one, and choose one $i$ such that $r_i \not= 0$. We pick $z \in S^1$ with $z^n, z^{n - 2}, \cdots, z^{2 - n}, z^{-n}$ all distinct in $\C$. Now
+ \[
+ \rho_n\left(
+ \begin{pmatrix}
+ z\\&z^{-1}
+ \end{pmatrix}\right)w = \sum r_j z^{n - 2j} x^{n - j}y^j \in W.
+ \]
+ Subtracting a copy of $w$, we find
+ \[
+ \rho_n\left(
+ \begin{pmatrix}
+ z\\&z^{-1}
+ \end{pmatrix}\right)w - z^{n - 2i}w = \sum r_j (z^{n - 2j} - z^{n - 2i})x^{n - j}y^j \in W.
+ \]
+ We now look at the coefficient
+ \[
+ r_j (z^{n - 2j} - z^{n - 2i}).
+ \]
+ This is non-zero if and only if $r_j$ is non-zero and $j \not= i$. So we can use this to remove any non-zero coefficient. Thus by induction, we get
+ \[
+ x^{n - j}y^j \in W
+ \]
+ for all $j$ such that $r_j \not= 0$.
+
+ This gives us one basis vector inside $W$, and we need to get the rest.
+
+ \begin{claim}
+ $W = V_n$.
+ \end{claim}
+ We now know that $x^{n - i}y^i \in W$ for some $i$. We consider
+ \[
+ \rho_n\left(\frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1 & -1\\
+ 1 & 1
+ \end{pmatrix}\right) x^{n - i}y^i = \frac{1}{2^{n/2}} (x + y)^{n - i}(-x + y)^i \in W.
+ \]
+ It is clear that the coefficient of $x^n$ is non-zero. So we can use the first claim to deduce $x^n \in W$.
+
+ Finally, for general $a, b \in \C$ with $a, b \not= 0$ and $|a|^2 + |b|^2 = 1$, we apply
+ \[
+ \rho_n\left(
+ \begin{pmatrix}
+ a & -\bar{b}\\
+ b &\bar{a}
+ \end{pmatrix}\right) x^n = (ax + by)^n \in W,
+ \]
+ and every coefficient $\binom{n}{j} a^{n - j} b^j$ of $x^{n - j}y^j$ is non-zero. So by the first claim, all basis vectors are in $W$. So $W = V_n$.
+\end{proof}
+This proof is surprisingly elementary. It does not use any technology at all.
+
+Alternatively, to prove this, we can identify
+\[
+ \mathcal{C}_{\cos \theta} = \left\{A \in G: \frac{1}{2}\tr A = \cos \theta \right\}
+\]
+with the $2$-sphere
+\[
+ \{(\Im a)^2 + |b|^2 = \sin^2 \theta\}
+\]
+of radius $\sin \theta$. So if $f$ is a class function on $G$, $f$ is constant on each class $\mathcal{C}_{\cos \theta}$. It turns out we get
+\[
+ \int_G f(g)\;\d g = \frac{1}{2\pi^2}\int_0^{2\pi} \frac{1}{2}f\left(
+ \begin{pmatrix}
+ e^{i\theta}\\
+ &e^{-i\theta}
+ \end{pmatrix}\right) 4\pi \sin^2 \theta\;\d \theta.
+\]
+This is the \emph{Weyl integration formula}, which computes the Haar integral of class functions on $\SU(2)$. Then if $\chi_n$ is the character of $V_n$, we can use this to show that $\bra \chi_n, \chi_n\ket = 1$. Hence $\chi_n$ is irreducible. We will not go into the details of this construction.
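While we skip the details, the orthogonality computation is easy to check numerically: on $T$ we have $\chi_n(\mathrm{diag}(e^{i\theta}, e^{-i\theta})) = \sin((n + 1)\theta)/\sin \theta$, so the formula reduces $\bra \chi_n, \chi_m\ket$ to $\frac{1}{\pi} \int_0^{2\pi} \sin((n + 1)\theta)\sin((m + 1)\theta)\;\mathrm{d} \theta$. A numerical sketch (our own):

```python
import math

# A numerical sketch (our own) of <chi_n, chi_m> = delta_{nm} via the Weyl
# integration formula: chi_n * chi_m * sin^2(theta) simplifies to
# sin((n+1) theta) * sin((m+1) theta).
def inner(n, m, steps=20000):
    total = 0.0
    for k in range(steps):
        theta = 2 * math.pi * k / steps
        total += math.sin((n + 1) * theta) * math.sin((m + 1) * theta)
    return 2 * total / steps  # = (1/pi) * (2 pi / steps) * sum

print(abs(inner(2, 2) - 1) < 1e-6, abs(inner(2, 4)) < 1e-6)  # True True
```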
+
+\begin{thm}
+ Every finite-dimensional continuous irreducible representation of $G$ is one of the $\rho_n: G \to \GL(V_n)$ as defined above.
+\end{thm}
+
+\begin{proof}
+ Assume $\rho_V: G \to \GL(V)$ is an irreducible representation affording a character $\chi_V \in \N[z, z^{-1}]_{\mathrm{ev}}$. We will show that $\chi_V = \chi_n$ for some $n$. Now we see
+ \begin{align*}
+ \chi_0 &= 1\\
+ \chi_1 &= z + z^{-1}\\
+ \chi_2 &= z^2 +1 + z^{-2}\\
+ &\;\;\vdots
+ \end{align*}
+ form a basis of $\Q[z, z^{-1}]_{\mathrm{ev}}$, which is an infinite-dimensional vector space over $\Q$. Hence we can write
+ \[
+ \chi_V = \sum_n a_n \chi_n,
+ \]
+ a finite sum with finitely many $a_n\not= 0$. Note that a priori we only know $a_n \in \Q$, not $a_n \in \N$. So we clear denominators, and move the summands with negative coefficients to the left hand side. So we get
+ \[
+ m \chi_V + \sum_{i \in I} m_i \chi_i = \sum_{j \in J} n_j \chi_j,
+ \]
+ with $I, J$ disjoint finite subsets of $\N$, and $m, m_i, n_j \in \N$.
+
+ We know the left and right-hand side are characters of representations of $G$. So we get
+ \[
+ m V \oplus \bigoplus_I m_i V_i = \bigoplus_J n_j V_j.
+ \]
+ Since $V$ is irreducible and the decomposition into irreducibles is unique, we must have $V \cong V_n$ for some $n \in J$.
+\end{proof}
+
+We've got a complete list of irreducible representations of $\SU(2)$. So we can look at what happens when we take products.
+
+We know that for $V, W$ representations of $\SU(2)$, if
+\[
+ \Res_T^{\SU(2)}V \cong \Res_T^{\SU(2)}W,
+\]
+then in fact
+\[
+ V \cong W.
+\]
+This gives us the following result:
+\begin{prop}
+ Let $G = \SU(2)$ or $G = S^1$, and let $V, W$ be representations of $G$. Then
+ \[
+ \chi_{V \otimes W} = \chi_V \chi_W.
+ \]
+\end{prop}
+
+\begin{proof}
+ By the previous remark, it is enough to consider the case $G = S^1$. Suppose $V$ and $W$ have eigenbases $\mathbf{e}_1, \cdots, \mathbf{e}_n$ and $\mathbf{f}_1, \cdots, \mathbf{f}_m$ respectively such that
+ \[
+ \rho(z) \mathbf{e}_i = z^{n_i} \mathbf{e}_i,\quad \rho(z) \mathbf{f}_j = z^{m_j} \mathbf{f}_j
+ \]
+ for each $i, j$. Then
+ \[
+ \rho(z)(\mathbf{e}_i\otimes \mathbf{f}_j) = z^{n_i + m_j} \mathbf{e}_i \otimes \mathbf{f}_j.
+ \]
+ Thus the character is
+ \[
+ \chi_{V \otimes W}(z) = \sum_{i, j} z^{n_i + m_j} = \left(\sum_i z^{n_i}\right) \left(\sum_j z^{m_j}\right) = \chi_V(z) \chi_W(z).\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ We have
+ \[
+ \chi_{V_1 \otimes V_1}(z) = (z + z^{-1})^2 = z^2 + 2 + z^{-2} = \chi_{V_2} + \chi_{V_0}.
+ \]
+ So we have
+ \[
+ V_1 \otimes V_1 \cong V_2 \oplus V_0.
+ \]
+ Similarly, we can compute
+ \[
+ \chi_{V_1 \otimes V_2}(z) = (z^2 + 1 + z^{-2})(z + z^{-1}) = z^3 + 2z + 2z^{-1} + z^{-3} = \chi_{V_3} + \chi_{V_1}.
+ \]
+ So we get
+ \[
+ V_1 \otimes V_2 \cong V_3 \oplus V_1.
+ \]
+\end{eg}
+
+\begin{prop}[Clebsch-Gordan rule]\index{Clebsch-Gordan rule}
+ For $n, m \in \N$, we have
+ \[
+ V_n \otimes V_m \cong V_{n + m} \oplus V_{n + m - 2} \oplus \cdots \oplus V_{|n - m| + 2} \oplus V_{|n - m|}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We just check this works for characters. Without loss of generality, we assume $n \geq m$. We can compute
+ \begin{align*}
+ (\chi_n \chi_m)(z) &= \frac{z^{n + 1} - z^{-n - 1}}{z - z^{-1}} (z^m + z^{m - 2} + \cdots + z^{-m})\\
+ &= \sum_{j = 0}^m \frac{z^{n + m + 1 - 2j} - z^{2j - n - m - 1}}{z - z^{-1}}\\
+ &= \sum_{j = 0}^m \chi_{n + m - 2j}(z).
+ \end{align*}
+ Note that the condition $n \geq m$ ensures there are no cancellations in the sum.
+\end{proof}
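As a quick sanity check (an illustration, not part of the notes), the Clebsch--Gordan rule can be verified computationally by treating the characters $\chi_n$ as Laurent polynomials in $z$:

```python
# Characters of SU(2) restricted to the maximal torus, as Laurent
# polynomials in z stored as {exponent: coefficient} dictionaries.

def chi(n):
    # chi_n(z) = z^n + z^(n-2) + ... + z^(-n)
    return {k: 1 for k in range(-n, n + 1, 2)}

def multiply(p, q):
    # character of a tensor product: multiply the Laurent polynomials
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            out[a + b] = out.get(a + b, 0) + ca * cb
    return out

def direct_sum(p, q):
    # character of a direct sum: add the Laurent polynomials
    out = dict(p)
    for b, cb in q.items():
        out[b] = out.get(b, 0) + cb
    return out

def clebsch_gordan(n, m):
    # chi_{n+m} + chi_{n+m-2} + ... + chi_{|n-m|}
    out = {}
    for j in range(abs(n - m), n + m + 1, 2):
        out = direct_sum(out, chi(j))
    return out

# chi_n * chi_m agrees with the claimed decomposition for small n, m
for n in range(6):
    for m in range(6):
        assert multiply(chi(n), chi(m)) == clebsch_gordan(n, m)
```

For instance, `multiply(chi(1), chi(1))` gives `{-2: 1, 0: 2, 2: 1}`, the character of $V_2 \oplus V_0$, matching the example above.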
+
+\subsection{Representations of \texorpdfstring{$\SO(3)$}{SO(3)}, \texorpdfstring{$\SU(2)$}{SU(2)} and \texorpdfstring{$U(2)$}{U(2)}}
+We now have a complete list of irreducible representations of $S^1$ and $\SU(2)$. Let's see if we can make some deductions about some related groups. We will not give too many details about these groups, and the proofs are just sketches. Moreover, we will rely on the following facts which we shall not prove:
+
+\begin{prop}
+ There are isomorphisms of topological groups:
+ \begin{enumerate}
+ \item $\SO(3) \cong \SU(2)/ \{\pm I\} = \PSU(2)$
+ \item $\SO(4) \cong \SU(2) \times \SU(2)/\{\pm (I, I)\}$
+ \item $\U(2) \cong \U(1) \times \SU(2) / \{\pm (I, I)\}$
+ \end{enumerate}
+ All maps are group isomorphisms, but in fact also homeomorphisms. To show this, we can use the fact that a continuous bijection from a compact space to a Hausdorff space is automatically a homeomorphism.
+\end{prop}
+Assuming this is true, we obtain the following corollary:
+
+\begin{cor}
+ Every irreducible representation of $\SO(3)$ has the following form:
+ \[
+ \rho_{2m}: \SO(3) \to \GL(V_{2m}),
+ \]
+ for some $m \geq 0$, where $V_n$ are the irreducible representations of $\SU(2)$.
+\end{cor}
+
+\begin{proof}
+ Irreducible representations of $\SO(3)$ correspond to irreducible representations of $\SU(2)$ such that $-I$ acts trivially by lifting. But $-I$ acts on $V_n$ as $-1$ when $n$ is odd, and as $1$ when $n$ is even, since
+ \[
+ \rho(-I) =
+ \begin{pmatrix}
+ (-1)^n\\
+ & (-1)^{n - 2}\\
+ & & \ddots\\
+ & & & (-1)^{-n}
+ \end{pmatrix} = (-1)^n I.\qedhere
+ \]
+\end{proof}
+
+For the sake of completeness, we provide a (sketch) proof of the isomorphism $\SO(3) \cong \SU(2) / \{\pm I\}$.
+
+\begin{prop}
+ $\SO(3) \cong \SU(2)/\{\pm I\}$.
+\end{prop}
+
+\begin{proof}[Proof sketch]
+ Recall that $\SU(2)$ can be viewed as the sphere of unit-norm quaternions in $\H \cong \R^4$.
+
+ Let
+ \[
+ \H^0 = \{A \in \H: \tr A = 0\}.
+ \]
+ These are the ``pure'' quaternions. This is a three-dimensional subspace of $\H$. It is not hard to see this is
+ \[
+ \H^0 = \R\left\bra
+ \begin{pmatrix}
+ i & 0\\
+ 0 & -i
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0 & 1\\
+ -1 & 0
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0 & i\\
+ i & 0
+ \end{pmatrix}\right\ket = \R\left\bra \mathbf{i}, \mathbf{j}, \mathbf{k}\right\ket,
+ \]
+ where $\R\bra \cdots\ket$ is the $\R$-span of the things.
+
+ This is equipped with the norm
+ \[
+ \|A\|^2 = \det A.
+ \]
+ This gives a nice $3$-dimensional Euclidean space, and $\SU(2)$ acts as isometries on $\H^0$ by conjugation, i.e.
+ \[
+ X\cdot A = XAX^{-1},
+ \]
+ giving a group homomorphism
+ \[
+ \varphi: \SU(2) \to \Or(3),
+ \]
+ and the kernel of this map is $Z(\SU(2)) = \{\pm I\}$. We also know that $\SU(2)$ is compact, and $\Or(3)$ is Hausdorff. Hence the continuous group isomorphism
+ \[
+ \bar{\varphi}: \SU(2)/\{\pm I\} \to \im \varphi
+ \]
+ is a homeomorphism. It remains to show that $\im \varphi = \SO(3)$.
+
+ But we know $\SU(2)$ is connected, and $\det (\varphi(X))$ is a continuous function that can only take values $1$ or $-1$. So $\det (\varphi(X))$ is either always $1$ or always $-1$. But $\det (\varphi(I)) = 1$. So we know $\det(\varphi(X)) = 1$ for all $X$. Hence $\im \varphi \leq \SO(3)$.
+
+ To show that equality indeed holds, we have to show that all possible rotations in $\H^0$ are possible. We first show all rotations in the $(\mathbf{j}, \mathbf{k})$-plane are implemented by elements of the form $a + b\mathbf{i}$, and similarly for any permutation of $\mathbf{i}, \mathbf{j}, \mathbf{k}$. Since all such rotations generate $\SO(3)$, we are then done. Now consider
+ \[
+ \begin{pmatrix}
+ e^{i\theta} & 0\\
+ 0 & e^{-i\theta}
+ \end{pmatrix}
+ \begin{pmatrix}
+ ai & b\\
+ -\bar{b} & -ai
+ \end{pmatrix}
+ \begin{pmatrix}
+ e^{-i\theta} & 0\\
+ 0 & e^{i\theta}
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ ai & e^{2i\theta}b\\
+ -\bar{b} e^{-2i\theta} & -ai
+ \end{pmatrix}.
+ \]
+ So
+ \[
+ \begin{pmatrix}
+ e^{i\theta} & 0\\
+ 0 & e^{-i\theta}
+ \end{pmatrix}
+ \]
+ acts on $\R\bra \mathbf{i}, \mathbf{j}, \mathbf{k}\ket$ by a rotation in the $(\mathbf{j}, \mathbf{k})$-plane through an angle $2\theta$. We can check that
+ \[
+ \begin{pmatrix}
+ \cos \theta & \sin \theta\\
+ -\sin \theta & \cos \theta
+ \end{pmatrix},\quad
+ \begin{pmatrix}
+ \cos \theta & i \sin \theta\\
+ i\sin \theta & \cos \theta
+ \end{pmatrix}
+ \]
+ act by rotation of $2\theta$ in the $(\mathbf{i}, \mathbf{k})$-plane and $(\mathbf{i}, \mathbf{j})$-plane respectively. So done.
+\end{proof}
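The conjugation computation above can be checked numerically. The following sketch (an illustration, not part of the notes) represents pure quaternions as $2 \times 2$ complex matrices in the basis above, and verifies that conjugating by $\operatorname{diag}(e^{i\theta}, e^{-i\theta})$ fixes the $\mathbf{i}$-component and rotates the $(\mathbf{j}, \mathbf{k})$-plane through $2\theta$:

```python
import cmath
import math

def matmul(A, B):
    # product of 2x2 complex matrices given as nested lists
    return [[sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

def pure(x, y, z):
    # x*i + y*j + z*k in the matrix representation used above
    return [[x * 1j, y + z * 1j], [-y + z * 1j, -x * 1j]]

def coords(A):
    # recover (x, y, z) from a pure quaternion matrix
    return (A[0][0].imag, A[0][1].real, A[0][1].imag)

theta = 0.3
X = [[cmath.exp(1j * theta), 0], [0, cmath.exp(-1j * theta)]]
Xinv = [[cmath.exp(-1j * theta), 0], [0, cmath.exp(1j * theta)]]

x, y, z = 0.5, -1.0, 2.0
xr, yr, zr = coords(matmul(matmul(X, pure(x, y, z)), Xinv))

c, s = math.cos(2 * theta), math.sin(2 * theta)
assert abs(xr - x) < 1e-12                 # i-component unchanged
assert abs(yr - (c * y - s * z)) < 1e-12   # (j, k)-plane rotated by 2*theta
assert abs(zr - (s * y + c * z)) < 1e-12
```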
+
+We can adapt this proof to prove the other isomorphisms. However, it is slightly more difficult to get the irreducible representations, since it involves taking some direct products. We need a result about products $G \times H$ of two compact groups. As in the finite case, we get the complete list of irreducible representations by taking the tensor products $V \otimes W$, where $V$ and $W$ range over the irreducibles of $G$ and $H$ independently.
+
+We will just assert the results.
+\begin{prop}
+ The complete list of irreducible representations of $\SO(4)$ is $\rho_m \times \rho_n$, where $m, n \geq 0$ and $m \equiv n \pmod 2$.
+\end{prop}
+
+\begin{prop}
+ The complete list of irreducible representations of $\U(2)$ is
+ \[
+ \det{ }^{\otimes m} \otimes \rho_n,
+ \]
+ where $m, n \in \Z$ and $n \geq 0$, and $\det$ is the obvious one-dimensional representation.
+\end{prop}
+
+\printindex
+\end{document}
diff --git a/books/cam/II_L/statistical_physics.tex b/books/cam/II_L/statistical_physics.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d9ad7a88135003fa2f452df5e7983ebf3fed2f74
--- /dev/null
+++ b/books/cam/II_L/statistical_physics.tex
@@ -0,0 +1,5534 @@
+\documentclass[a4paper]{article}
+
+\def\npart {II}
+\def\nterm {Lent}
+\def\nyear {2017}
+\def\nlecturer {H.\ S.\ Reall}
+\def\ncourse {Statistical Physics}
+
+\input{header}
+
+\def\di{{\mathchar'26\mkern-12mu d}}
+\def\isothermia{(1.6,0.3) (1.7,0.19847) (1.8,0.14429) (1.9,0.11496) (2.0,0.10000) (2.1,0.09349) (2.2,0.09194)}
+\def\isothermib{(2.2,0.09194) (2.3,0.09328) (2.4,0.09623) (2.5,0.10000) (2.6,0.10411) (2.7,0.10825) (2.8,0.11224) (2.9,0.11600) (3.0,0.11944) (3.1,0.12257) (3.2,0.12536) (3.3,0.12782) (3.4,0.12997) (3.5,0.13184) (3.6,0.13343) (3.7,0.13477) (3.8,0.13588) (3.9,0.13679) (4.0,0.13750) (4.1,0.13804) (4.2,0.13843) (4.3,0.13867) (4.4,0.13879) (4.5,0.13880)}
+\def\isothermic{(4.5,0.13880) (4.6,0.13871) (4.7,0.13852) (4.8,0.13825) (4.9,0.13791) (5.0,0.13750) (5.1,0.13703) (5.2,0.13652) (5.3,0.13595) (5.4,0.13535) (5.5,0.13471) (5.6,0.13404) (5.7,0.13334) (5.8,0.13262) (5.9,0.13187) (6.0,0.13111) (6.1,0.13033) (6.2,0.12954) (6.3,0.12874) (6.4,0.12793) (6.5,0.12711) (6.6,0.12629) (6.7,0.12546) (6.8,0.12463) (6.9,0.12379) (7.0,0.12296) (7.1,0.12212) (7.2,0.12129) (7.3,0.12046) (7.4,0.11963) (7.5,0.11880) (7.6,0.11798) (7.7,0.11716) (7.8,0.11635) (7.9,0.11554) (8.0,0.11473) (8.1,0.11393) (8.2,0.11314) (8.3,0.11235) (8.4,0.11157) (8.5,0.11080) (8.6,0.11003) (8.7,0.10927) (8.8,0.10851) (8.9,0.10776) (9.0,0.10702) (9.1,0.10629) (9.2,0.10556) (9.3,0.10484) (9.4,0.10413) (9.5,0.10342) (9.6,0.10272) (9.7,0.10203) (9.8,0.10135) (9.9,0.10067) (10.0,0.10000) (10.1,0.09934) (10.2,0.09868) (10.3,0.09803) (10.4,0.09739) (10.5,0.09675) (10.6,0.09613) (10.7,0.09550) (10.8,0.09489) (10.9,0.09428) (11.0,0.09368) (11.1,0.09308) (11.2,0.09249) (11.3,0.09191) (11.4,0.09133) (11.5,0.09076) (11.6,0.09020) (11.7,0.08964) (11.8,0.08909) (11.9,0.08854) (12.0,0.08801) (12.1,0.08747) (12.2,0.08694) (12.3,0.08642) (12.4,0.08590) (12.5,0.08539) (12.6,0.08489) (12.7,0.08438) (12.8,0.08389) (12.9,0.08340) (13.0,0.08291) (13.1,0.08243) (13.2,0.08196) (13.3,0.08149) (13.4,0.08103) (13.5,0.08057) (13.6,0.08011) (13.7,0.07966) (13.8,0.07921) (13.9,0.07877) (14.0,0.07834) (14.1,0.07790) (14.2,0.07748) (14.3,0.07705) (14.4,0.07663) (14.5,0.07622) (14.6,0.07581) (14.7,0.07540) (14.8,0.07500) (14.9,0.07460) (15.0,0.07421)}
+\def\isothermi{\isothermia \isothermib \isothermic}
+\def\isothermii{(1.8,0.3) (1.9,0.26107) (2.0,0.23150) (2.1,0.21303) (2.2,0.20153) (2.3,0.19444) (2.4,0.19016) (2.5,0.18767) (2.6,0.18629) (2.7,0.18560) (2.8,0.18530) (2.9,0.18521) (3.0,0.18519) (3.1,0.18518) (3.2,0.18513) (3.3,0.18499) (3.4,0.18477) (3.5,0.18444) (3.6,0.18401) (3.7,0.18347) (3.8,0.18285) (3.9,0.18213) (4.0,0.18133) (4.1,0.18046) (4.2,0.17952) (4.3,0.17852) (4.4,0.17747) (4.5,0.17637) (4.6,0.17523) (4.7,0.17406) (4.8,0.17285) (4.9,0.17163) (5.0,0.17038) (5.1,0.16911) (5.2,0.16783) (5.3,0.16654) (5.4,0.16524) (5.5,0.16393) (5.6,0.16263) (5.7,0.16132) (5.8,0.16001) (5.9,0.15871) (6.0,0.15741) (6.1,0.15612) (6.2,0.15483) (6.3,0.15355) (6.4,0.15228) (6.5,0.15102) (6.6,0.14977) (6.7,0.14853) (6.8,0.14730) (6.9,0.14608) (7.0,0.14488) (7.1,0.14368) (7.2,0.14250) (7.3,0.14133) (7.4,0.14018) (7.5,0.13903) (7.6,0.13790) (7.7,0.13679) (7.8,0.13568) (7.9,0.13459) (8.0,0.13352) (8.1,0.13245) (8.2,0.13140) (8.3,0.13037) (8.4,0.12934) (8.5,0.12833) (8.6,0.12733) (8.7,0.12634) (8.8,0.12537) (8.9,0.12441) (9.0,0.12346) (9.1,0.12252) (9.2,0.12160) (9.3,0.12068) (9.4,0.11978) (9.5,0.11889) (9.6,0.11801) (9.7,0.11715) (9.8,0.11629) (9.9,0.11545) (10.0,0.11461) (10.1,0.11379) (10.2,0.11297) (10.3,0.11217) (10.4,0.11138) (10.5,0.11060) (10.6,0.10982) (10.7,0.10906) (10.8,0.10831) (10.9,0.10756) (11.0,0.10683) (11.1,0.10610) (11.2,0.10539) (11.3,0.10468) (11.4,0.10398) (11.5,0.10329) (11.6,0.10261) (11.7,0.10193) (11.8,0.10127) (11.9,0.10061) (12.0,0.09996) (12.1,0.09932) (12.2,0.09868) (12.3,0.09806) (12.4,0.09744) (12.5,0.09683) (12.6,0.09622) (12.7,0.09562) (12.8,0.09503) (12.9,0.09445) (13.0,0.09387) (13.1,0.09330) (13.2,0.09274) (13.3,0.09218) (13.4,0.09163) (13.5,0.09109) (13.6,0.09055) (13.7,0.09001) (13.8,0.08949) (13.9,0.08897) (14.0,0.08845) (14.1,0.08794) (14.2,0.08744) (14.3,0.08694) (14.4,0.08645) (14.5,0.08596) (14.6,0.08548) (14.7,0.08500) (14.8,0.08453) (14.9,0.08406) (15.0,0.08360)}
+\def\isothermiii{(2.2,0.3) (2.3,0.28559) (2.4,0.27480) (2.5,0.26667) (2.6,0.26036) (2.7,0.25531) (2.8,0.25113) (2.9,0.24757) (3.0,0.24444) (3.1,0.24161) (3.2,0.23899) (3.3,0.23652) (3.4,0.23414) (3.5,0.23184) (3.6,0.22958) (3.7,0.22736) (3.8,0.22517) (3.9,0.22299) (4.0,0.22083) (4.1,0.21869) (4.2,0.21655) (4.3,0.21443) (4.4,0.21232) (4.5,0.21023) (4.6,0.20815) (4.7,0.20609) (4.8,0.20404) (4.9,0.20201) (5.0,0.20000) (5.1,0.19801) (5.2,0.19604) (5.3,0.19409) (5.4,0.19217) (5.5,0.19027) (5.6,0.18839) (5.7,0.18653) (5.8,0.18470) (5.9,0.18289) (6.0,0.18111) (6.1,0.17935) (6.2,0.17762) (6.3,0.17591) (6.4,0.17423) (6.5,0.17257) (6.6,0.17093) (6.7,0.16932) (6.8,0.16773) (6.9,0.16617) (7.0,0.16463) (7.1,0.16311) (7.2,0.16161) (7.3,0.16014) (7.4,0.15869) (7.5,0.15726) (7.6,0.15586) (7.7,0.15447) (7.8,0.15311) (7.9,0.15177) (8.0,0.15045) (8.1,0.14914) (8.2,0.14786) (8.3,0.14660) (8.4,0.14535) (8.5,0.14413) (8.6,0.14292) (8.7,0.14173) (8.8,0.14056) (8.9,0.13941) (9.0,0.13827) (9.1,0.13715) (9.2,0.13605) (9.3,0.13496) (9.4,0.13389) (9.5,0.13283) (9.6,0.13179) (9.7,0.13077) (9.8,0.12976) (9.9,0.12876) (10.0,0.12778) (10.1,0.12681) (10.2,0.12585) (10.3,0.12491) (10.4,0.12398) (10.5,0.12307) (10.6,0.12217) (10.7,0.12128) (10.8,0.12040) (10.9,0.11953) (11.0,0.11868) (11.1,0.11783) (11.2,0.11700) (11.3,0.11618) (11.4,0.11537) (11.5,0.11457) (11.6,0.11379) (11.7,0.11301) (11.8,0.11224) (11.9,0.11148) (12.0,0.11073) (12.1,0.10999) (12.2,0.10926) (12.3,0.10854) (12.4,0.10783) (12.5,0.10713) (12.6,0.10644) (12.7,0.10575) (12.8,0.10508) (12.9,0.10441) (13.0,0.10375) (13.1,0.10310) (13.2,0.10245) (13.3,0.10182) (13.4,0.10119) (13.5,0.10057) (13.6,0.09995) (13.7,0.09934) (13.8,0.09875) (13.9,0.09815) (14.0,0.09757) (14.1,0.09699) (14.2,0.09642) (14.3,0.09585) (14.4,0.09529) (14.5,0.09474) (14.6,0.09419) (14.7,0.09365) (14.8,0.09312) (14.9,0.09259) (15.0,0.09206)}
+\def\isothermiv{(1.7,0.3) (1.8,0.21929) (1.9,0.18163) (2.0,0.16000) (2.1,0.14803) (2.2,0.14194) (2.3,0.13944) (2.4,0.13909) (2.5,0.14000) (2.6,0.14161) (2.7,0.14354) (2.8,0.14558) (2.9,0.14757) (3.0,0.14944) (3.1,0.15114) (3.2,0.15263) (3.3,0.15391) (3.4,0.15497) (3.5,0.15584) (3.6,0.15651) (3.7,0.15699) (3.8,0.15731) (3.9,0.15748) (4.0,0.15750) (4.1,0.15740) (4.2,0.15718) (4.3,0.15686) (4.4,0.15644) (4.5,0.15594) (4.6,0.15537) (4.7,0.15473) (4.8,0.15404) (4.9,0.15329) (5.0,0.15250) (5.1,0.15167) (5.2,0.15080) (5.3,0.14991) (5.4,0.14899) (5.5,0.14804) (5.6,0.14708) (5.7,0.14611) (5.8,0.14512) (5.9,0.14412) (6.0,0.14311) (6.1,0.14210) (6.2,0.14108) (6.3,0.14006) (6.4,0.13904) (6.5,0.13802) (6.6,0.13700) (6.7,0.13599) (6.8,0.13497) (6.9,0.13396) (7.0,0.13296) (7.1,0.13196) (7.2,0.13097) (7.3,0.12998) (7.4,0.12900) (7.5,0.12803) (7.6,0.12707) (7.7,0.12612) (7.8,0.12517) (7.9,0.12423) (8.0,0.12330) (8.1,0.12238) (8.2,0.12147) (8.3,0.12057) (8.4,0.11968) (8.5,0.11880) (8.6,0.11792) (8.7,0.11706) (8.8,0.11620) (8.9,0.11536) (9.0,0.11452) (9.1,0.11369) (9.2,0.11288) (9.3,0.11207) (9.4,0.11127) (9.5,0.11048) (9.6,0.10970) (9.7,0.10893) (9.8,0.10817) (9.9,0.10741) (10.0,0.10667) (10.1,0.10593) (10.2,0.10520) (10.3,0.10448) (10.4,0.10377) (10.5,0.10307) (10.6,0.10238) (10.7,0.10169) (10.8,0.10101) (10.9,0.10034) (11.0,0.09968) (11.1,0.09902) (11.2,0.09838) (11.3,0.09774) (11.4,0.09710) (11.5,0.09648) (11.6,0.09586) (11.7,0.09525) (11.8,0.09465) (11.9,0.09405) (12.0,0.09346) (12.1,0.09288) (12.2,0.09230) (12.3,0.09173) (12.4,0.09117) (12.5,0.09061) (12.6,0.09006) (12.7,0.08951) (12.8,0.08897) (12.9,0.08844) (13.0,0.08791) (13.1,0.08739) (13.2,0.08688) (13.3,0.08637) (13.4,0.08586) (13.5,0.08537) (13.6,0.08487) (13.7,0.08438) (13.8,0.08390) (13.9,0.08342) (14.0,0.08295) (14.1,0.08248) (14.2,0.08202) (14.3,0.08156) (14.4,0.08111) (14.5,0.08066) (14.6,0.08022) (14.7,0.07978) (14.8,0.07935) (14.9,0.07892) (15.0,0.07849)}
+% map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (1.35 / (x - 1) - 5 / x^2) "")) [1.6,1.7..15]
+% map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (1.4815/ (x - 1) - 5 / x^2) "")) [1.8,1.9..15]
+% map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (1.6/ (x - 1) - 5 / x^2) "")) [2.2,2.3..15]
+% map (\x -> (showFFloat (Just 1) x "", showFFloat (Just 5) (1.6/ (x - 1) - 5 / x^2) "")) [1.7,1.8..15]
+
+\begin{document}
+\maketitle
+{\small
+ \noindent\emph{Part IB Quantum Mechanics and ``Multiparticle Systems'' from Part II Principles of Quantum Mechanics are essential}
+
+ \vspace{10pt}
+ \noindent\textbf{Fundamentals of statistical mechanics}\\
+ Microcanonical ensemble. Entropy, temperature and pressure. Laws of thermodynamics. Example of paramagnetism. Boltzmann distribution and canonical ensemble. Partition function. Free energy. Specific heats. Chemical Potential. Grand Canonical Ensemble.\hspace*{\fill} [5]
+
+ \vspace{10pt}
+ \noindent\textbf{Classical gases}\\
+ Density of states and the classical limit. Ideal gas. Maxwell distribution. Equipartition of energy. Diatomic gas. Interacting gases. Virial expansion. Van der Waals equation of state. Basic kinetic theory.\hspace*{\fill} [3]
+
+ \vspace{10pt}
+ \noindent\textbf{Quantum gases}\\
+ Density of states. Planck distribution and black body radiation. Debye model of phonons in solids. Bose--Einstein distribution. Ideal Bose gas and Bose--Einstein condensation. Fermi--Dirac distribution. Ideal Fermi gas. Pauli paramagnetism.\hspace*{\fill}[8]
+
+ \vspace{10pt}
+ \noindent\textbf{Thermodynamics}\\
+ Thermodynamic temperature scale. Heat and work. Carnot cycle. Applications of laws of thermodynamics. Thermodynamic potentials. Maxwell relations.\hspace*{\fill} [4]
+
+ \vspace{10pt}
+ \noindent\textbf{Phase transitions}\\
+ Liquid-gas transitions. Critical point and critical exponents. Ising model. Mean field theory. First and second order phase transitions. Symmetries and order parameters.\hspace*{\fill} [4]%
+}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+In all of our previous physics courses, we mostly focused on ``microscopic'' physics. For example, we used the Schr\"odinger equation to describe the mechanics of a single hydrogen atom. We had to do quite a bit of hard work to figure out exactly how it behaved. But what if we wanted to study a much larger system? Let's say we have a box of hydrogen gas, consisting of $\sim 10^{23}$ molecules. If we tried to solve the Schr\"odinger equation for this system, then, even numerically, it would be completely intractable.
+
+So how can we understand such a system? Of course, given these $\sim 10^{23}$ molecules, we are not interested in the detailed dynamics of each molecule. We are only interested in some ``macroscopic'' phenomena. For example, we might want to know its pressure and temperature. So what we want to do is to describe the whole system using just a few ``macroscopic'' variables, and still capture the main properties we are interested in.
+
+In the first part of the course, we will approach this subject rather ``rigorously''. We will start from some microscopic laws we know about, and use them to deduce properties of the macroscopic system. This turns out to be rather successful, at least if we treat things sufficiently properly. Surprisingly, it even applies to scenarios where it is absolutely not obvious it should apply!
+
+Historically, this was not how statistical physics was first developed. Instead, we only had the ``macroscopic'' properties we tried to understand. Back in the day, we didn't even know things were made up of atoms! We will try to understand statistical phenomena in purely macroscopic and ``wordy'' terms, and it turns out we can reproduce the same predictions as before.
+
+Finally, we will turn our attention to something rather different in nature --- phase transitions. As we all know, water turns from liquid to gas as we raise the temperature. This transition is an example of a \emph{phase transition}. Of course, this is still a ``macroscopic'' phenomenon, fitting in line with the theme of the course. It doesn't make sense to talk about a single molecule transitioning from liquid to gas. Interestingly, when we look at many different systems undergoing phase transitions, they seem to all behave ``in the same way'' near the phase transition. We will figure out why. At least, partially.
+
+Statistical physics is important. As we mentioned, we can use this to make macroscopic predictions from microscopic laws. Thus, this allows us to put our microscopic laws to experimental test. It turns out methods of statistical physics have some far-reaching applications elsewhere. It can be used to study black holes, or even biology!
+
+\section{Fundamentals of statistical mechanics}
+\subsection{Microcanonical ensemble}
+We begin by considering a rather general system. Suppose we have an isolated system containing $N$ particles, where $N$ is a Large Number\textsuperscript{TM}. The canonical example to keep in mind is a box of gas detached from reality.
+
+\begin{defi}[Microstate]\index{microstate}
+ The \emph{microstate} of a system is the actual (quantum) state of the system. This gives a complete description of the system.
+\end{defi}
+
+As one would expect, the microstate is very complicated and infeasible to describe, especially when we have \emph{many} particles. In statistical physics, we observe that many microstates are indistinguishable macroscopically. Thus, we only take note of some macroscopically interesting quantities, and use these macroscopic quantities to put a probability distribution on the microstates.
+
+More precisely, we let $\{\bket{n}\}$ be a basis of normalized eigenstates, say
+\[
+ \hat{H}\bket{n} = E_n \bket{n}.
+\]
+We let $p(n)$ be the probability that the microstate is $\bket{n}$. Note that this probability is \emph{not} the quantum probability we are used to. It is some probability assigned to reflect our ignorance of the system. Given such probabilities, we can define the expectation of an operator in the least imaginative way:
+\begin{defi}[Expectation value]\index{expectation value}
+ Given a probability distribution $p(n)$ on the states, the expectation value of an operator $\mathcal{O}$ is
+ \[
+ \bra \mathcal{O}\ket = \sum_n p(n) \brak{n}\mathcal{O}\bket{n}.
+ \]
+\end{defi}
+If one knows about density operators, we can describe the system as a mixed state with density operator
+\[
+ \rho = \sum_n p(n) \bket{n}\brak{n}.
+\]
+There is an equivalent way of looking at this. We can consider an \term{ensemble} consisting of $W \gg 1$ independent copies of our system such that $Wp(n)$ many copies are in the microstate $\bket{n}$. Then the expectation is just the average over the ensemble. For most purposes, how we think about this doesn't really matter.
+
+We shall further assume our system is in \term{equilibrium}, i.e.\ the probability distribution $p(n)$ does not change in time. So in particular $\bra O\ket$ is independent of time. Of course, this does not mean the particles stop moving. The particles are still whizzing around. It's just that the statistical distribution does not change. In this course, we will mostly be talking about equilibrium systems. When we get out of equilibrium, things become very complicated.
+
+The idea of statistical physics is that we have some partial knowledge about the system. For example, we might know its total energy. The microstates that are compatible with this partial knowledge are called \emph{accessible}\index{accessible state}. The \term{fundamental assumption of statistical mechanics} is then
+\begin{significant}
+ An isolated system in equilibrium is equally likely to be in any of the accessible microstates.
+\end{significant}
+Thus, different probability distributions, or different ensembles, are distinguished by the partial knowledge we have.
+\begin{defi}[Microcanonical ensemble]\index{microcanonical ensemble}
+ In a \emph{microcanonical ensemble}, we know the energy is between $E$ and $E + \delta E$, where $\delta E$ is the accuracy of our measuring device. The accessible microstates are those with energy $E \leq E_n \leq E + \delta E$. We let $\Omega(E)$ be the number of such states.
+\end{defi}
+In practice, $\delta E$ is much much larger than the spacing of energy levels, and so $\Omega(E) \gg 1$. A priori, it seems like our theory will depend on what the value of $\delta E$ is, but as we develop the theory, we will see that this doesn't really matter.
+
+It is crucial here that we are working with a quantum system, so the set of possible states is discrete, and it makes sense to count the number of states. We need to do quite a bit more work if we want to do this classically.
+
+\begin{eg}
+ Suppose we have $N = 10^{23}$ particles, and each particle can occupy two states $\bket{\uparrow}$ and $\bket{\downarrow}$, which have the same energy $\varepsilon$. Then we always have $N\varepsilon$ total energy, and we have
+ \[
+ \Omega(N\varepsilon) = 2^{10^{23}}.
+ \]
+ This is a fantastically huge, mind-boggling number. This is the kind of number we are talking about.
+\end{eg}
+
+By the fundamental assumption, we can write
+\[
+ p(n) =
+ \begin{cases}
+ \frac{1}{\Omega(E)} & \text{if } E \leq E_n \leq E + \delta E\\
+ 0 & \text{otherwise}
+ \end{cases}.
+\]
+This is the characteristic distribution of the microcanonical ensemble.
+
+It turns out it is not very convenient to work with $\Omega(E)$. In particular, $\Omega(E)$ is not linear in $N$, the number of particles. Instead, it scales as an exponential of $N$. So we take the logarithm.
+
+\begin{defi}[Boltzmann entropy]\index{Boltzmann entropy}\index{entropy}
+ The \emph{(Boltzmann) entropy} is defined as
+ \[
+ S(E) = k \log \Omega(E),
+ \]
+ where $k = \SI{1.381e-23}{\joule\per\kelvin}$ is \term{Boltzmann's constant}\index{$k$}.
+\end{defi}
+This annoying constant $k$ is necessary because when people started doing thermodynamics, they didn't know about statistical physics, and picked weird conventions.
+
+We wrote our expressions as $S(E)$, instead of $S(E, \delta E)$. As promised, the value of $\delta E$ doesn't really matter. We know that $\Omega(E)$ will scale approximately linearly with $\delta E$. So if we, say, double $\delta E$, then $S(E)$ will increase by $k \log 2$, which is incredibly tiny compared to $S(E) = k \log \Omega(E)$. So it doesn't matter which value of $\delta E$ we pick.
+
+Even if you are not so convinced that multiplying $10^{10^{23}}$ by a factor of $2$ or adding $\log 2$ to $10^{23}$ do not really matter, you should be reassured that at the end, we will rarely talk about $\Omega(E)$ or $S(E)$ itself. Instead, we will often divide two different $\Omega$'s to get probabilities, or differentiate $S$ to get other interesting quantities. In these cases, the factors really do not matter.
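To see concretely how insensitive $S$ is to $\delta E$, here is a toy computation (the numbers are illustrative assumptions, not derived in the notes): doubling $\delta E$ doubles $\Omega$, which shifts $S/k$ by only $\log 2$ against a baseline of order $10^{23}$.

```python
import math

# Illustrative numbers (assumptions): for a macroscopic system,
# S/k = log(Omega) is of order 10^23.  Doubling delta E doubles Omega,
# shifting S/k by log 2 -- a relative change of about 10^{-23}.
log_Omega = 1e23             # S/k for ~10^23 particles
shift = math.log(2)          # effect of Omega -> 2 * Omega
relative_change = shift / log_Omega

assert 0 < relative_change < 1e-22
```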
+
+
+The second nice property of the entropy is that it is additive --- if we have two \emph{non-interacting} systems with energies $E_{(1)}$ and $E_{(2)}$, then the total number of states of the combined system is
+\[
+ \Omega(E_{(1)}, E_{(2)}) = \Omega_{1}(E_{(1)})\Omega_{2} (E_{(2)}).
+\]
+So when we take the logarithm, we find
+\[
+ S(E_{(1)}, E_{(2)}) = S(E_{(1)}) + S(E_{(2)}).
+\]
+Of course, this is not very interesting, until we bring our systems together and let them interact with each other.
+
+\subsubsection*{Interacting systems}
+Suppose we bring the two systems together, and let them exchange energy. Then the energy of the individual systems is no longer fixed, and only the total energy
+\[
+ E_{\mathrm{total}} = E_{(1)} + E_{(2)}
+\]
+is fixed. Then we find that
+\[
+ \Omega(E_{\mathrm{total}}) = \sum_{E_i} \Omega_1(E_i) \Omega_2(E_{\mathrm{total}} - E_i),
+\]
+where we sum over all possible energy levels of the first system. In terms of the entropy of the system, we have
+\[
+ \Omega(E_{\mathrm{total}}) = \sum_{E_i} \exp\left(\frac{S_1(E_i)}{k} + \frac{S_2(E_{\mathrm{total}} - E_i)}{k}\right).
+\]
+We can be a bit more precise with what the sum means. We are not summing over all eigenstates. Recall that we have previously fixed an accuracy $\delta E$. So we can imagine dividing the whole energy spectrum into chunks of size $\delta E$, and here we are summing over the chunks.
+
+We know that $S_{1, 2}/k \sim N_{1, 2} \sim 10^{23}$, which is a ridiculously large number. So the sum is overwhelmingly dominated by the term with the largest exponent. Suppose this is maximized when $E_i = E_*$. Then we have
+\[
+ S(E_{\mathrm{total}}) = k \log \Omega(E_{\mathrm{total}}) \approx S_1(E_*) + S_2(E_{\mathrm{total}} - E_*).
+\]
+Again, we are not claiming that only the factor coming from $E_*$ has significant contribution. Maybe one or two energy levels next to $E_*$ are also very significant, but taking these into account will only multiply $\Omega(E_{\mathrm{total}})$ by a (relatively) small constant, hence contributes a small additive factor to $S(E_{\mathrm{total}})$, which can be neglected.
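We can illustrate this dominance numerically with a toy model (an assumption for illustration, not from the course): take two systems of $N$ two-state particles, so that $\Omega_i(E) = \binom{N}{E}$, and compare the logarithm of the full sum with the logarithm of its largest term.

```python
import math

# Toy model (illustrative assumption): two systems of N two-state
# particles, with Omega_i(E) = C(N, E) microstates at energy E.
# The sum over ways of splitting E_total between the systems is
# dominated by its largest term: log(sum) differs from log(max term)
# by at most log(number of terms), negligible against log(max term).
N = 1000
E_total = N  # split E_total units of energy between the two systems

terms = [math.comb(N, E) * math.comb(N, E_total - E) for E in range(N + 1)]
log_sum = math.log(sum(terms))
log_max = math.log(max(terms))

assert log_sum - log_max < math.log(N + 1)       # small additive error
assert (log_sum - log_max) / log_max < 0.01      # tiny relative error
```

Here `log_max` is of order $N \log 2 \approx 1400$ while the discrepancy is of order $\log N$, mirroring the claim that the neglected terms contribute only a small additive correction to $S(E_{\mathrm{total}})$.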
+
+Now given any $E_{(1)}$, what is the probability that the actual energy of the first system is $E_{(1)}$? For convenience, we write $E_{(2)} = E_{\mathrm{total}} - E_{(1)}$, the energy of the second system. Then the probability desired is
+\[
+ \frac{\Omega_1(E_{(1)})\Omega_2(E_{(2)})}{\Omega(E_{\mathrm{total}})} = \exp\left(\frac{1}{k}\left(S_1(E_{(1)}) + S_2(E_{(2)}) - S(E_{\mathrm{total}})\right)\right).
+\]
+Again recall that the numbers at stake are unimaginably huge. So if $S_1(E_{(1)}) + S_2(E_{(2)})$ is even slightly different from $S(E_{\mathrm{total}})$, then the probability is effectively zero. And by above, for the two quantities to be close, we need $E_{(1)} = E_*$. So for all practical purposes, the value of $E_{(1)}$ is fixed into $E_*$.
+
+Now imagine we prepare two systems separately with energies $E_{(1)}$ and $E_{(2)}$ such that $E_{(1)} \not= E_*$, and then bring the systems together. Then we are no longer in equilibrium: $E_{(1)}$ will change until it takes the value $E_*$, and the entropy of the system will increase from $S_1(E_{(1)}) + S_2(E_{(2)})$ to $S_1(E_*) + S_2(E_{\mathrm{total}} - E_*)$. In particular, the entropy increases.
+\begin{law}[Second law of thermodynamics]\index{second law of thermodynamics}
+ The entropy of an isolated system increases (or remains the same) in any physical process. In equilibrium, the entropy attains its maximum value.
+\end{law}
+This prediction is verified by virtually all observations of physics.
+
+While our derivation did not show it is \emph{impossible} to violate the second law of thermodynamics, it is very very very very very very very very unlikely to be violated.
+
+\subsubsection*{Temperature}
+Having defined entropy, the next interesting thing we can define is the \emph{temperature}. We assume that $S$ is a smooth function in $E$. Then we can define the temperature as follows:
+\begin{defi}[Temperature]\index{temperature}
+ The \emph{temperature} is defined to be
+ \[
+ \frac{1}{T} = \frac{\d S}{\d E}.
+ \]
+\end{defi}
+Why do we call this the temperature? Over the course, we will see that this quantity we decide to call ``temperature'' does behave as we would expect temperature to behave. It is difficult to give further justification of this definition, because even though we vaguely have some idea what temperature is like in daily life, those ideas are very far from anything we can concretely write down or even describe.
+
+One reassuring property we can prove is the following:
+\begin{prop}
+ Two interacting systems in equilibrium have the same temperature.
+\end{prop}
+
+\begin{proof}
+ Recall that the equilibrium energy $E_*$ is found by maximizing
+ \[
+ S_1(E_i) + S_2(E_{\mathrm{total}} - E_i)
+ \]
+ over all possible $E_i$. Thus, at an equilibrium, the derivative of this expression has to vanish, and the derivative is exactly
+ \[
+ \left.\frac{\d S_1}{\d E}\right|_{E_{(1)} = E_*} - \left.\frac{\d S_2}{\d E} \right|_{E_{(2)} = E_{\mathrm{total}} - E_*} = 0.
+ \]
+ So we need
+ \[
+ \frac{1}{T_1} = \frac{1}{T_2}.
+ \]
+ In other words, we need
+ \[
+ T_1 = T_2.\qedhere
+ \]
+\end{proof}
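Here is a numerical illustration of this proposition, using the toy entropy $S_i(E) = N_i \log E$ (an assumed model, chosen only so that the maximum can be found in closed form): maximizing the total entropy over a grid recovers $E_*$, and the two inverse temperatures $\d S_i / \d E = N_i / E$ agree there.

```python
import math

# Toy model (assumption for illustration): S_i(E) = N_i * log(E).
# Maximize S_1(E) + S_2(E_total - E) numerically and check that
# the inverse temperatures dS_i/dE agree at the maximizing energy.
N1, N2 = 3.0, 7.0
E_total = 10.0

def total_entropy(E):
    return N1 * math.log(E) + N2 * math.log(E_total - E)

grid = [E_total * k / 100000 for k in range(1, 100000)]
E_star = max(grid, key=total_entropy)

# analytically, N1/E = N2/(E_total - E) gives E_* = N1*E_total/(N1 + N2)
assert abs(E_star - N1 * E_total / (N1 + N2)) < 1e-3

inv_T1 = N1 / E_star                  # dS_1/dE at E_*
inv_T2 = N2 / (E_total - E_star)      # dS_2/dE at E_total - E_*
assert abs(inv_T1 - inv_T2) < 1e-3    # equal temperatures in equilibrium
```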
+
+Now suppose initially, our systems have different temperature. We would expect energy to flow from the hotter system to the cooler system. This is indeed the case.
+
+\begin{prop}
+ Suppose two systems with initial energies $E_{(1)}, E_{(2)}$ and temperatures $T_1, T_2$ are put into contact. If $T_1 > T_2$, then energy will flow from the first system to the second.
+\end{prop}
+
+\begin{proof}
+ Since we are not in equilibrium, there must be some energy transfer from one system to the other. Suppose after time $\delta t$, the energy changes by
+ \begin{align*}
+ E_{(1)} & \mapsto E_{(1)} + \delta E\\
+ E_{(2)} & \mapsto E_{(2)} - \delta E,
+ \end{align*}
+ keeping the total energy constant. Then the change in entropy is given by
+ \[
+ \delta S = \frac{\d S_1}{\d E} \delta E_{(1)} + \frac{\d S_2}{\d E} \delta E_{(2)} = \left(\frac{1}{T_1} - \frac{1}{T_2}\right) \delta E.
+ \]
+ By assumption, we know
+ \[
+ \frac{1}{T_1} - \frac{1}{T_2} < 0,
+ \]
+ but by the second law of thermodynamics, we know $\delta S$ must be positive. So we must have $\delta E < 0$, i.e.\ energy flows from the first system to the second.
+\end{proof}
+So this notion of temperature agrees with the basic properties of temperature we expect.
+
+Note that these properties we've derived only depend on the fact that $\frac{1}{T}$ is a monotonically decreasing function of $T$. In principle, we could have picked \emph{any} monotonically decreasing function of $T$, and set it to $\frac{\d S}{\d E}$. We will later see that this definition will agree with the other definitions of temperature we have previously seen, e.g.\ via the ideal gas law, and so this is indeed the ``right'' one.
+
+\subsubsection*{Heat capacity}
+As we will keep on doing later, we can take different derivatives to get different interesting quantities. This time, we are going to get heat capacity. Recall that $T$ was a function of energy, $T = T(E)$. We will assume that we can invert this function, at least locally, to get $E$ as a function of $T$.
+\begin{defi}[Heat capacity]\index{heat capacity}
+ The \emph{heat capacity} of a system is
+ \[
+ C = \frac{\d E}{\d T}.
+ \]
+ The \term{specific heat capacity} is
+ \[
+ \frac{C}{\text{mass of system}}.
+ \]
+\end{defi}
+The specific heat capacity is a property of the substance that makes up the system, and not how much stuff there is, as both $C$ and the mass scale linearly with the size of the system.
+
+This is some quantity we can actually physically measure. We can measure the temperature with a thermometer, and it is usually not too difficult to see how much energy we are pumping into a system. Then by measuring the temperature change, we can figure out the heat capacity.
+
+In doing so, we can indirectly measure the entropy, or at least the changes in entropy. Note that we have
+\[
+ \frac{\d S}{\d T} = \frac{\d S}{\d E} \frac{\d E}{\d T} = \frac{C}{T}.
+\]
+Integrating up, if the temperature changes from $T_1$ to $T_2$, we know
+\[
+ \Delta S = \int_{T_1}^{T_2} \frac{C(T)}{T}\;\d T.
+\]
+As promised, by measuring heat capacity experimentally, we can measure the change in entropy.
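As a quick numerical sanity check of $\Delta S = \int C/T \;\d T$, here is a sketch in Python assuming, purely for illustration, a system with constant heat capacity, for which the integral evaluates to $C \log(T_2/T_1)$:

```python
import math

# Toy check of ΔS = ∫ C(T)/T dT, assuming (for illustration only) a
# constant heat capacity C, where the integral is C log(T2/T1).
C = 2.5
T1, T2 = 100.0, 300.0

# Midpoint-rule integration of C/T from T1 to T2.
n = 100000
dT = (T2 - T1) / n
delta_S = sum(C / (T1 + (i + 0.5) * dT) * dT for i in range(n))

print(delta_S, C * math.log(T2 / T1))
```

For a real material one would replace the constant by the measured $C(T)$ and integrate the data numerically in the same way.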
+
+The heat capacity is useful in other ways. Recall that to find the equilibrium energy $E_*$, a necessary condition was that it satisfies
+\[
+ \frac{\d S_1}{\d E} - \frac{\d S_2}{\d E} = 0.
+\]
+However, we only know that the solution is an extremum, and not necessarily a maximum. To figure out if it is a maximum, we take the second derivative. Note that for a single system, we have
+\[
+ \frac{\d^2 S}{\d E^2} = \frac{\d}{\d E} \left(\frac{1}{T}\right) = -\frac{1}{T^2 C}.
+\]
+Applying this to two systems, one can check that entropy is maximized at $E_{(1)} = E_*$ if $C_1, C_2 > 0$. The actual computation is left as an exercise on the first example sheet.
+
+Let's look at some actual systems and try to calculate these quantities.
+\begin{eg}
+ Consider a $2$-state system, where we have $N$ non-interacting particles with fixed positions. Each particle is either in $\bket{\uparrow}$ or $\bket{\downarrow}$. We can think of these as spins, for example. These two states have different energies
+ \[
+ E_{\uparrow} = \varepsilon,\quad E_{\downarrow} = 0.
+ \]
+ We let $N_{\uparrow}$ and $N_{\downarrow}$ be the number of particles in $\bket{\uparrow}$ and $\bket{\downarrow}$ respectively. Then the total energy of the system is
+ \[
+ E = \varepsilon N_{\uparrow}.
+ \]
+ We want to calculate this quantity $\Omega(E)$. Here in this very contrived example, it is convenient to pick $\delta E < \varepsilon$, so that $\Omega(E)$ is just the number of ways of choosing $N_{\uparrow}$ particles from $N$. By basic combinatorics, we have
+ \[
+ \Omega(E) = \frac{N!}{N_{\uparrow}! (N - N_\uparrow)!},
+ \]
+ and
+ \[
+ S(E) = k \log \left(\frac{N!}{N_{\uparrow}! (N - N_\uparrow)!}\right).
+ \]
+ This is not an incredibly useful formula. Since we assumed that $N$ and $N_\uparrow$ are huge, we can use \term{Stirling's approximation}
+ \[
+ N! = \sqrt{2\pi N} N^N e^{-N} \left(1 + O\left(\frac{1}{N}\right)\right).
+ \]
+ Then we have
+ \[
+ \log N! = N \log N - N + \frac{1}{2}\log (2\pi N) + O\left(\frac{1}{N}\right).
+ \]
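The quality of this approximation is easy to check numerically; the following sketch compares it against $\log N!$ (computed via \texttt{math.lgamma}) for a few values of $N$:

```python
import math

# Numerical check of Stirling's approximation
#   log N! ≈ N log N − N + (1/2) log(2πN),
# whose error is O(1/N).
def stirling(N):
    return N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)

for N in (10, 100, 1000):
    exact = math.lgamma(N + 1)   # log N! without computing the huge N!
    print(N, abs(exact - stirling(N)))   # error shrinks like ~1/(12N)
```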
+ We just use the approximation three times to get
+ \begin{align*}
+ S(E) &= k\left(N \log N - N - N_\uparrow \log N_\uparrow + N_\uparrow - (N - N_\uparrow) \log(N - N_\uparrow) + N - N_\uparrow\right)\\
+ &= -k \left((N - N_\uparrow) \log \left(\frac{N - N_\uparrow}{N}\right) + N_\uparrow \log\left(\frac{N_{\uparrow}}{N}\right)\right)\\
+ &= -kN\left(\left(1 - \frac{E}{N\varepsilon}\right) \log\left(1 - \frac{E}{N\varepsilon}\right) + \frac{E}{N\varepsilon} \log\left(\frac{E}{N\varepsilon}\right)\right).
+ \end{align*}
+ This is better, but we can get much more out of it if we plot it:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$E$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$S(E)$};
+
+ \draw [mblue, thick, domain=0.001:4.999] plot [smooth] (\x, {(-4.5) * ((1 - \x/5) * ln (1 - \x/5) + (\x / 5) * ln (\x / 5))});
+
+ \node [left] at (0, 0) {$0$};
+ \node [below] at (5, 0) {$N\varepsilon$};
+ \node [below] at (2.5, 0) {$N\varepsilon/2$};
+ \draw [dashed] (2.5, 0) -- (2.5, 3);
+
+ \draw [dashed] (2.5, 3.119) -- (0, 3.119) node [left] {$Nk \log 2$};
+ \end{tikzpicture}
+ \end{center}
+ The temperature is
+ \[
+ \frac{1}{T} = \frac{\d S}{\d E} = \frac{k}{\varepsilon} \log \left(\frac{N \varepsilon}{E} - 1\right),
+ \]
+ and we can invert to get
+ \[
+ \frac{N_{\uparrow}}{N} = \frac{E}{N\varepsilon} = \frac{1}{e^{\varepsilon /kT} + 1}.
+ \]
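We can check this inversion numerically. The sketch below (in units where $k = \varepsilon = 1$) verifies the round trip between the occupancy $E/N\varepsilon$ and $1/T$, and the two limiting behaviours discussed next:

```python
import math

# Two-state occupancy check, in units where k = ε = 1.
def frac_up(T):
    """N_up/N = E/(Nε) = 1/(e^{ε/kT} + 1)."""
    return 1.0 / (math.exp(1.0 / T) + 1.0)

def inv_temperature(x):
    """Invert: 1/T = (k/ε) log(Nε/E − 1) = log(1/x − 1)."""
    return math.log(1.0 / x - 1.0)

# Round trip: recover 1/T from the occupancy at T = 0.7.
x = frac_up(0.7)
print(inv_temperature(x), 1.0 / 0.7)

# Limits: almost no spins up at low T, half up at very high T.
print(frac_up(0.05), frac_up(1e6))
```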
+ Suppose we get to control the temperature of the system, e.g.\ if we put it with a heat reservoir. What happens as we vary our temperature?
+ \begin{itemize}
+ \item As $T \to 0$, we have $N_\uparrow \to 0$. So the states all try to go to the ground state.
+ \item As $T \to \infty$, we find $N_\uparrow/N \to \frac{1}{2}$, and $E \to N\varepsilon/2$.
+ \end{itemize}
+ The second result is a bit weird. As $T \to \infty$, we might expect all things to go to the maximum energy level, and not just half of them.
+
+ To confuse ourselves further, we can plot another graph, for $\frac{1}{T}$ vs $E$. The graph looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$E$};
+ \draw [->] (0, -2.5) -- (0, 2.5) node [above] {$\frac{1}{T}$};
+
+ \draw [mblue, thick, domain=0.1:4.9] plot [smooth] (\x, {(-0.6) * (ln (\x) - ln (5 - \x))});
+
+ \node [left] at (0, 0) {$0$};
+ \node [below] at (5, 0) {$N\varepsilon$};
+ \node [below] at (2.5, 0) {$N\varepsilon/2$};
+
+ \end{tikzpicture}
+ \end{center}
+ We see that having energy $> N\varepsilon/2$ corresponds to negative temperature, and to go from positive temperature to negative temperature, we need to pass through infinite temperature. So in some sense, negative temperature is ``hotter'' than infinite temperature.
+
+ What is going on? By definition, negative $T$ means $\Omega(E)$ is a decreasing function of energy. This is a very unusual situation. In this system, all the particles are fixed, and have no kinetic energy. Consequently, the possible energy levels are bounded. If we included kinetic energy into the system, then kinetic energy can be arbitrarily large. In this case, $\Omega(E)$ is usually an increasing function of $E$.
+
+ Negative $T$ has indeed been observed experimentally. This requires setups where the kinetic energy is not so important in the range of energies we are talking about. One particular scenario where this is observed is in nuclear spins of crystals in magnetic fields. If we have a magnetic field, then naturally, most of the spins will align with the field. We now suddenly flip the field, and then most of the spins are anti-aligned, and this can give us a negative temperature state.
+
+ Now we can't measure negative temperature by sticking a thermometer into the material and getting a negative answer. Something that \emph{can} be interestingly measured is the heat capacity
+ \[
+ C = \frac{\d E}{\d T} = \frac{N\varepsilon^2}{ kT^2 } \frac{e^{\varepsilon/kT}}{(e^{\varepsilon/kT} + 1)^2}.
+ \]
+ This again exhibits some peculiar properties. We begin by looking at a plot:
+ \begin{center}
+ \begin{tikzpicture}[xscale=1.5]
+ \draw [->] (0, 0) -- (4, 0) node [right] {$T$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$C$};
+
+ \draw [mblue, thick ] plot coordinates {(0.01,0.00000) (0.04,0.00000) (0.07,0.00102) (0.10,0.03632) (0.13,0.21581) (0.16,0.60094) (0.19,1.13589) (0.22,1.71794) (0.25,2.26083) (0.28,2.71418) (0.31,3.05901) (0.34,3.29686) (0.37,3.44009) (0.40,3.50519) (0.43,3.50895) (0.46,3.46653) (0.49,3.39067) (0.52,3.29165) (0.55,3.17751) (0.58,3.05435) (0.61,2.92675) (0.64,2.79804) (0.67,2.67060) (0.70,2.54609) (0.73,2.42564) (0.76,2.30996) (0.79,2.19945) (0.82,2.09433) (0.85,1.99462) (0.88,1.90026) (0.91,1.81111) (0.94,1.72699) (0.97,1.64766) (1.00,1.57290) (1.03,1.50244) (1.06,1.43606) (1.09,1.37350) (1.12,1.31453) (1.15,1.25893) (1.18,1.20647) (1.21,1.15697) (1.24,1.11022) (1.27,1.06605) (1.30,1.02429) (1.33,0.98478) (1.36,0.94739) (1.39,0.91196) (1.42,0.87839) (1.45,0.84654) (1.48,0.81631) (1.51,0.78760) (1.54,0.76031) (1.57,0.73436) (1.60,0.70966) (1.63,0.68614) (1.66,0.66373) (1.69,0.64237) (1.72,0.62198) (1.75,0.60252) (1.78,0.58393) (1.81,0.56617) (1.84,0.54918) (1.87,0.53292) (1.90,0.51735) (1.93,0.50244) (1.96,0.48815) (1.99,0.47445) (2.02,0.46130) (2.05,0.44868) (2.08,0.43656) (2.11,0.42492) (2.14,0.41372) (2.17,0.40295) (2.20,0.39259) (2.23,0.38262) (2.26,0.37302) (2.29,0.36376) (2.32,0.35484) (2.35,0.34624) (2.38,0.33795) (2.41,0.32994) (2.44,0.32221) (2.47,0.31475) (2.50,0.30753) (2.53,0.30056) (2.56,0.29382) (2.59,0.28731) (2.62,0.28100) (2.65,0.27490) (2.68,0.26899) (2.71,0.26326) (2.74,0.25772) (2.77,0.25235) (2.80,0.24714) (2.83,0.24209) (2.86,0.23719) (2.89,0.23243) (2.92,0.22782) (2.95,0.22334) (2.98,0.21899) (3.01,0.21477) (3.04,0.21066) (3.07,0.20667) (3.10,0.20280) (3.13,0.19902) (3.16,0.19536) (3.19,0.19179) (3.22,0.18832) (3.25,0.18494) (3.28,0.18165) (3.31,0.17844) (3.34,0.17532) (3.37,0.17228) (3.40,0.16932) (3.43,0.16644) (3.46,0.16362) (3.49,0.16088) (3.52,0.15820) (3.55,0.15559) (3.58,0.15305) (3.61,0.15056) (3.64,0.14814) (3.67,0.14577) (3.70,0.14346) (3.73,0.14120) (3.76,0.13899) (3.79,0.13684) (3.82,0.13474) (3.85,0.13268) (3.88,0.13067) 
(3.91,0.12870) (3.94,0.12678) (3.97,0.12490) (4.00,0.12307)};
+ %% map (\x -> (showFFloat (Just 2) x "", showFFloat (Just 5) (8 / (x*x) * exp (1/x) / (exp(1/x) + 1)^2) "")) [0.01,0.04..4]
+ \node [left] at (0, 0) {$0$};
+
+ \draw [dashed] (0.43, 0) node [below] {\small $kT \sim \varepsilon$} -- (0.43, 3.50895);
+ \end{tikzpicture}
+ \end{center}
+ By looking at the formula, we see that the maximum occurs at $kT \sim \varepsilon$, i.e.\ the location of the peak is set by the microscopic energy scale $\varepsilon$. If we know about the value of $k$, then we can use the macroscopic observation of $C$ to deduce something about the microscopic $\varepsilon$.
+
+ Note that $C$ is proportional to $N$. As $T \to 0$, we have
+ \[
+ C \propto T^{-2} e^{-\varepsilon/kT},
+ \]
+ and this is a function that decreases very rapidly as $T \to 0$, and in fact this is one of the favorite examples in analysis where all derivatives of the function at $0$ vanish. Ultimately, this is due to the energy gap between the ground state and the first excited state.
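These features can be verified directly. The sketch below (units $k = \varepsilon = 1$, and $N = 1$ for simplicity) compares the closed form for $C$ against a finite-difference derivative of $E(T)$, and exhibits the low-temperature suppression:

```python
import math

# Check the two-state heat capacity C = dE/dT against the closed form
#   C = (Nε²/kT²) e^{ε/kT} / (e^{ε/kT} + 1)²,
# in units where k = ε = 1 and N = 1, via finite differences.
N = 1.0

def energy(T):
    return N / (math.exp(1.0 / T) + 1.0)     # E = Nε/(e^{ε/kT} + 1)

def heat_capacity(T):
    e = math.exp(1.0 / T)
    return N / T**2 * e / (e + 1.0) ** 2

T, h = 0.4, 1e-6
numeric = (energy(T + h) - energy(T - h)) / (2 * h)   # central difference
print(numeric, heat_capacity(T))

# Low-temperature suppression: C ∝ T^{-2} e^{-1/T} is tiny as T → 0.
print(heat_capacity(0.05))
```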
+
+ Another peculiarity of this plot is that the heat capacity vanishes at high temperature. This is again because the energy levels of this system are bounded above. In a general system, we expect the heat capacity to increase with temperature.
+
+ How much of this is actually physical? The answer is ``not much''. This is not surprising, because we didn't really do much physics in these computations. For most solids, the contribution to $C$ from spins is swamped by other effects such as contributions of phonons (quantized vibrations in the solid) or electrons. In this case, $C(T)$ is monotonic in $T$.
+
+ However, there are some very peculiar materials for which we obtain a small local maximum in $C(T)$ for very small $T$, before increasing monotonically, which is due to the contributions of spin:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$T$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$C$};
+
+ \draw [mblue, thick ] plot coordinates {(0.01,0.00501) (0.04,0.02066) (0.07,0.07551) (0.10,0.21670) (0.13,0.36877) (0.16,0.47499) (0.19,0.53051) (0.22,0.54983) (0.25,0.54810) (0.28,0.53607) (0.31,0.52028) (0.34,0.50437) (0.37,0.49016) (0.40,0.47848) (0.43,0.46957) (0.46,0.46340) (0.49,0.45979) (0.52,0.45850) (0.55,0.45931) (0.58,0.46196) (0.61,0.46624) (0.64,0.47196) (0.67,0.47896) (0.70,0.48707) (0.73,0.49618) (0.76,0.50617) (0.79,0.51695) (0.82,0.52844) (0.85,0.54056) (0.88,0.55325) (0.91,0.56646) (0.94,0.58014) (0.97,0.59425) (1.00,0.60875) (1.03,0.62362) (1.06,0.63882) (1.09,0.65434) (1.12,0.67014) (1.15,0.68622) (1.18,0.70255) (1.21,0.71912) (1.24,0.73592) (1.27,0.75293) (1.30,0.77015) (1.33,0.78756) (1.36,0.80515) (1.39,0.82293) (1.42,0.84087) (1.45,0.85899) (1.48,0.87725) (1.51,0.89568) (1.54,0.91425) (1.57,0.93297) (1.60,0.95183) (1.63,0.97082) (1.66,0.98995) (1.69,1.00922) (1.72,1.02861) (1.75,1.04812) (1.78,1.06776) (1.81,1.08752) (1.84,1.10740) (1.87,1.12740) (1.90,1.14752) (1.93,1.16775) (1.96,1.18809) (1.99,1.20854) (2.02,1.22910) (2.05,1.24978) (2.08,1.27056) (2.11,1.29145) (2.14,1.31244) (2.17,1.33354) (2.20,1.35475) (2.23,1.37606) (2.26,1.39747) (2.29,1.41898) (2.32,1.44060) (2.35,1.46232) (2.38,1.48413) (2.41,1.50605) (2.44,1.52807) (2.47,1.55019) (2.50,1.57240) (2.53,1.59471) (2.56,1.61713) (2.59,1.63964) (2.62,1.66224) (2.65,1.68495) (2.68,1.70775) (2.71,1.73064) (2.74,1.75364) (2.77,1.77672) (2.80,1.79991) (2.83,1.82319) (2.86,1.84656) (2.89,1.87003) (2.92,1.89360) (2.95,1.91726) (2.98,1.94101) (3.01,1.96486) (3.04,1.98880) (3.07,2.01283) (3.10,2.03696) (3.13,2.06118) (3.16,2.08550) (3.19,2.10991) (3.22,2.13441) (3.25,2.15901) (3.28,2.18370) (3.31,2.20848) (3.34,2.23335) (3.37,2.25832) (3.40,2.28338) (3.43,2.30853) (3.46,2.33377) (3.49,2.35911) (3.52,2.38454) (3.55,2.41006) (3.58,2.43567) (3.61,2.46138) (3.64,2.48717) (3.67,2.51306) (3.70,2.53904) (3.73,2.56512) (3.76,2.59128) (3.79,2.61754) (3.82,2.64388) (3.85,2.67032) (3.88,2.69685) 
(3.91,2.72348) (3.94,2.75019) (3.97,2.77699) (4.00,2.80389) (4.03,2.83088) (4.06,2.85796) (4.09,2.88513) (4.12,2.91239) (4.15,2.93974) (4.18,2.96718) (4.21,2.99472) (4.24,3.02234) (4.27,3.05006) (4.30,3.07787) (4.33,3.10577) (4.36,3.13376) (4.39,3.16184) (4.42,3.19001) (4.45,3.21827) (4.48,3.24662) (4.51,3.27507) (4.54,3.30360) (4.57,3.33223) (4.60,3.36094) (4.63,3.38975) (4.66,3.41865) (4.69,3.44764) (4.72,3.47672) (4.75,3.50589) (4.78,3.53515) (4.81,3.56450) (4.84,3.59394) (4.87,3.62347) (4.90,3.65310) (4.93,3.68281) (4.96,3.71261) (4.99,3.74251)};
+ %% map (\x -> (showFFloat (Just 2) x "", showFFloat (Just 5) (1 / (4*x*x) * exp (1/(2*x)) / (exp(1/(2*x)) + 1)^2 + x/2 + x^2/20) "")) [0.01,0.04..5]
+
+ \node [left] at (0, 0) {$0$};
+ \end{tikzpicture}
+ \end{center}
+
+\end{eg}
+
+\subsection{Pressure, volume and the first law of thermodynamics}
+So far, our system only had one single parameter --- the energy. Usually, our systems have other external parameters which can be varied. Recall that our ``standard'' model of a statistical system is a box of gas. If we allow ourselves to move the walls of the box, then the volume of the system may vary. As we change the volume, the allowed energies eigenstates will change. So now $\Omega$, and hence $S$ are functions of energy \emph{and} volume:
+\[
+ S(E, V) = k \log \Omega(E, V).
+\]
+We now need to modify our definition of temperature to account for this dependence:
+\begin{defi}[Temperature]\index{temperature}
+ The \emph{temperature} of a system with variable volume is
+ \[
+ \frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_V,
+ \]
+ with $V$ fixed.
+\end{defi}
+
+But now we can define a different thermodynamic quantity by taking the derivative with respect to $V$.
+
+\begin{defi}[Pressure]\index{pressure}\index{$p$}
+ We define the \emph{pressure} of a system with variable volume to be
+ \[
+ p = T \left(\frac{\partial S}{\partial V}\right)_E.
+ \]
+\end{defi}
+Is this thing we call the ``pressure'' anything like what we used to think of as pressure, namely force per unit area? We will soon see that this is indeed the case.
+
+
+We begin by deducing some familiar properties of pressure.
+\begin{prop}
+ Consider as before two interacting systems where the total volume $V = V_1 + V_2$ is fixed but the individual volumes can vary. Then the entropy of the combined system is maximized when $T_1 = T_2$ and $p_1 = p_2$.
+\end{prop}
+
+\begin{proof}
+ We have previously seen that we need $T_1 = T_2$. We also want
+ \[
+ \left(\frac{\partial S}{\partial V}\right)_E = 0.
+ \]
+ So we need
+ \[
+ \left(\frac{\partial S_1}{\partial V}\right)_E = \left(\frac{\partial S_2}{\partial V}\right)_E.
+ \]
+ Since the temperatures are equal, we know that we also need $p_1 = p_2$.
+\end{proof}
+
+For a single system, we can use the chain rule to write
+\[
+ \d S = \left(\frac{\partial S}{\partial E}\right)_V \d E + \left(\frac{\partial S}{\partial V}\right)_E \d V.
+\]
+Then we can use the definitions of temperature and pressure to write
+\begin{prop}[First law of thermodynamics]\index{first law of thermodynamics}
+ \[
+ \d E = T\; \d S - p\; \d V.
+ \]
+\end{prop}
+This law relates two infinitesimally close equilibrium states. This is sometimes called the \term{fundamental thermodynamics relation}.
+
+\begin{eg}
+ Consider a box with one side a movable piston of area $A$. We apply a force $F$ to keep the piston in place.
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.2] (0, 0) rectangle (2.5, -2);
+
+ \draw (0, 0) -- (4, 0);
+ \draw (0, 0) -- (0, -2) -- (4, -2);
+ \draw [thick, mred] (2.5, 0) -- (2.5, -2);
+
+ \node at (3.2, -1) {$\Longleftarrow F$};
+
+ \draw [dashed] (2, 0) -- (2, -2);
+
+ \draw [latex'-latex'] (2, 0.1) -- +(0.5, 0) node [pos=0.5, above] {\small $\d x$};
+ \end{tikzpicture}
+ \end{center}
+ What happens if we move the piston for a little bit? If we move through a distance $\d x$, then the volume of the gas has increased by $A \;\d x$. We assume $S$ is constant. Then the first law tells us
+ \[
+ \d E = -pA \;\d x.
+ \]
+ This formula should be very familiar to us. This is just the work done by the force, so we must have $F = pA$. So our definition of pressure in terms of partial derivatives reproduces the mechanics definition of force per unit area.
+\end{eg}
+
+One has to be cautious here. It is not always true that $-p \;\d V$ can be equated with the work done on a system. For this to be true, we require the change to be \emph{reversible}, which is a notion we will study more in depth later. For example, this is not true when there is friction.
+
+In the case of a reversible change, if we equate $-p \;\d V$ with the work done, then there is only one possible thing $T \;\d S$ can be --- it is the heat supplied to the system.
+
+It is important to remember that the first law holds for \emph{any} change. It's just that this interpretation does not.
+\begin{eg}
+ Consider the irreversible change, where we have a ``free expansion'' of gas into vacuum. We have a box
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [opacity=0.3, mblue] (0, 0) rectangle (2, 2);
+
+ \draw (0, 0) rectangle (4, 2);
+ \draw (2, 0) -- (2, 2);
+ \node at (1, 1) {gas};
+ \node at (3, 1) {vacuum};
+ \end{tikzpicture}
+ \end{center}
+ We have a valve in the partition, and as soon as we open up the valve, the gas flows to the other side of the box.
+
+ In this system, no energy has been supplied. So $\d E = 0$. However, $\d V > 0$, as volume clearly increased. But there is no work done on or by the gas. So in this case, $-p \;\d V$ is certainly not the work done. Using the first law, we know that
+ \[
+ T \;\d S = p \;\d V.
+ \]
+ So as the volume increases, the entropy increases as well.
+\end{eg}
+
+We now revisit the concept of heat capacity. We previously defined it as $\d E/\d T$, but now we need to decide what we want to keep fixed. We can keep $V$ fixed, and get\index{$C_V$}
+\[
+ C_V = \left(\frac{\partial E}{\partial T}\right)_V = T \left(\frac{\partial S}{\partial T}\right)_V.
+\]
+While this is the obvious generalization of what we previously had, it is not a very useful quantity. We do not usually do experiments with fixed volume. For example, if we do a chemistry experiment in a test tube, say, then the volume is not fixed, as the gas in the test tube is free to go around. Instead, what is fixed is the pressure. We can analogously define\index{$C_p$}\index{heat capacity}
+\[
+ C_p = T \left(\frac{\partial S}{\partial T}\right)_p.
+\]
+Note that we cannot write this as some sort of $\frac{\partial E}{\partial T}$.
+
+\subsection{The canonical ensemble}
+So far, we have been using the microcanonical ensemble. The underlying assumption is that our system is totally isolated, and we know what the energy of the system is. However, in real life, this is most likely not the case. Even if we produce a sealed box of gas, and try to do experiments with it, the system is not isolated. It can exchange heat with the environment.
+
+On the other hand, there is one thing that \emph{is} fixed --- the temperature. The box is in thermal equilibrium with the environment. If we assume the environment is ``large'', then we can assume that the environment is not really affected by the box, and so the box is forced to have the same temperature as the environment.
+
+Let's try to study this property. Consider a system $S$ interacting with a much larger system $R$. We call this $R$ a \emph{heat reservoir}. Since $R$ is assumed to be large, the energy of $S$ is negligible compared to that of $R$, and we will assume $R$ always has a fixed temperature $T$. Then in this setup, the systems can exchange energy without changing $T$.
+
+As before, we let $\bket{n}$ be a basis of microstates with energy $E_n$. We suppose we fix a total energy $E_{\mathrm{total}}$, and we want to find the total number of microstates of the combined system with this total energy. To do so, we fix some state $\bket{n}$ of $S$, and ask how many states of $S + R$ there are for which $S$ is in $\bket{n}$. We then later sum over all $\bket{n}$.
+
+By definition, we can write this as
+\[
+ \Omega_R(E_{\mathrm{total}} - E_n) = \exp\left(k^{-1} S_R(E_{\mathrm{total}} - E_n)\right).
+\]
+By assumption, we know $R$ is a much larger system than $S$. So we only get significant contributions when $E_n \ll E_{\mathrm{total}}$. In these cases, we can Taylor expand to write
+\[
+ \Omega_R(E_{\mathrm{total}} - E_n) = \exp\left(k^{-1} S_R(E_{\mathrm{total}}) - k^{-1}\left(\frac{\partial S_R}{\partial E}\right)_V E_n\right).
+\]
+But we know what $\frac{\partial S_R}{\partial E}$ is --- it is just $T^{-1}$. So we finally get
+\[
+ \Omega_R(E_{\mathrm{total}} - E_n) = e^{k^{-1}S_R(E_{\mathrm{total}})} e^{-\beta E_n},
+\]
+where we define
+\begin{notation}[$\beta$]\index{$\beta$}
+ \[
+ \beta = \frac{1}{kT}.
+ \]
+\end{notation}
+Note that while we derived this formula under the assumption that $E_n$ is small, it is effectively still valid when $E_n$ is large, because both sides are very tiny, and even if they are very tiny in different ways, it doesn't matter when we sum over all states.
+
+Now we can write the \emph{total} number of microstates of $S + R$ as
+\[
+ \Omega(E_{\mathrm{total}}) = \sum_n \Omega_R(E_{\mathrm{total}} - E_n) = e^{k^{-1} S_R(E_{\mathrm{total}})} \sum_n e^{-\beta E_n}.
+\]
+Note that we are summing over all states, not energy.
+
+We now use the fundamental assumption of statistical mechanics that all states of $S + R$ are equally likely. Then we know the probability that $S$ is in state $\bket{n}$ is
+\[
+ p(n) =\frac{\Omega_R(E_{\mathrm{total}} - E_n)}{\Omega(E_{\mathrm{total}})} = \frac{e^{-\beta E_n}}{\sum_k e^{-\beta E_k}}.
+\]
+This is called the \term{Boltzmann distribution} for the canonical ensemble. Note that at the end, all the details have dropped out apart from the temperature. This describes the energy distribution of a system with fixed temperature.
+
+Note that if $E_n \gg kT = \frac{1}{\beta}$, then the exponential is small. So only states with $E_n \sim kT$ have significant probability. In particular, as $T \to 0$, we have $\beta \to \infty$, and so only the ground state can be occupied.
+
+We now define an important quantity.
+\begin{defi}[Partition function]\index{partition function}
+ The \emph{partition function} is
+ \[
+ Z = \sum_n e^{-\beta E_n}.
+ \]
+\end{defi}
+It turns out most of the interesting things we are interested in can be expressed in terms of $Z$ and its derivatives. Thus, to understand a general system, what we will do is to compute the partition function and express it in some familiar form. Then we can use standard calculus to obtain quantities we are interested in. To begin with, we have
+\[
+ p(n) = \frac{e^{-\beta E_n}}{Z}.
+\]
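As a minimal illustration, the following sketch computes $Z$ and the Boltzmann probabilities for a small, hypothetical list of energy levels (in units where $k = 1$):

```python
import math

# Minimal sketch of the canonical ensemble for an assumed toy spectrum.
# Everything follows directly from Z = Σ_n e^{-βE_n}.
energies = [0.0, 1.0, 1.5, 3.0]   # hypothetical microstate energies
beta = 2.0                         # β = 1/kT, with k = 1

Z = sum(math.exp(-beta * E) for E in energies)
p = [math.exp(-beta * E) / Z for E in energies]   # Boltzmann distribution

# Probabilities are normalised, and lower-energy states are more likely.
print(sum(p))
print(p)
```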
+\begin{prop}
+ For two non-interacting systems, we have $Z(\beta) = Z_1(\beta) Z_2(\beta)$.
+\end{prop}
+
+\begin{proof}
+ Since the systems are not interacting, we have
+ \[
+ Z = \sum_{n, m} e^{-\beta (E_n^{(1)} + E_m^{(2)})} = \left(\sum_n e^{-\beta E_n^{(1)}}\right)\left(\sum_m e^{-\beta E_m^{(2)}}\right) = Z_1 Z_2.\qedhere
+ \]
+\end{proof}
+
+Note that in general, energy is \emph{not} fixed, but we can compute the average value:
+\[
+ \bra E\ket = \sum_n p(n) E_n = \sum_n \frac{E_n e^{-\beta E_n}}{Z} = -\frac{\partial}{\partial \beta} \log Z.
+\]
+This partial derivative is taken with all $E_i$ fixed. Of course, in the real world, we don't get to directly change the energy eigenstates and see what happens. However, they do depend on some ``external'' parameters, such as the volume $V$, the magnetic field $B$ etc. So when we take this derivative, we have to keep all those parameters fixed.
+
+We look at the simple case where $V$ is the only parameter we can vary. Then $Z = Z(\beta, V)$. We can rewrite the previous formula as
+\[
+ \bra E \ket = - \left(\frac{\partial}{\partial \beta} \log Z\right)_V.
+\]
+This gives us the average, but we also want to know the variance of $E$. We have
+\[
+ \Delta E^2 = \bra (E - \bra E\ket)^2 \ket = \bra E^2\ket - \bra E\ket^2.
+\]
+On the first example sheet, we calculate that this is in fact
+\[
+ \Delta E^2 = \left(\frac{\partial^2}{\partial \beta^2} \log Z\right)_V = - \left(\frac{\partial \bra E\ket}{\partial \beta}\right)_V.
+\]
+We can now convert $\beta$-derivatives to $T$-derivatives using the chain rule. Then we get
+\[
+ \Delta E^2 = kT^2 \left(\frac{\partial \bra E\ket}{\partial T}\right)_V = kT^2 C_V.
+\]
+From this, we can learn something important. We would expect $\bra E\ket \sim N$, the number of particles of the system. But we also know $C_V \sim N$. So
+\[
+ \frac{\Delta E}{\bra E\ket} \sim \frac{1}{\sqrt{N}}.
+\]
+Therefore, the fluctuations are negligible if $N$ is large enough. This is called the \term{thermodynamic limit} $N \to \infty$. In this limit, we can ignore the fluctuations in energy. So we expect the microcanonical ensemble and the canonical ensemble to give the same result. And for all practical purposes, $N \sim 10^{23}$ is a large number.
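We can see this scaling explicitly for $N$ independent two-state spins (units $k = \varepsilon = 1$), where $\log Z = N \log(1 + e^{-\beta})$ gives both $\bra E\ket$ and $\Delta E^2$ in closed form:

```python
import math

# ΔE² = ∂²(log Z)/∂β² = kT² C_V for N independent two-state spins,
# in units k = ε = 1; log Z = N log(1 + e^{-β}) gives closed forms.
def stats(N, beta):
    e = math.exp(beta)
    mean = N / (e + 1.0)               # ⟨E⟩ = −∂(log Z)/∂β
    var = N * e / (e + 1.0) ** 2       # ΔE² = ∂²(log Z)/∂β²
    return mean, var

beta = 1.0
for N in (100, 10000, 1000000):
    mean, var = stats(N, beta)
    print(N, math.sqrt(var) / mean)    # relative fluctuation, ∝ 1/√N
```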
+
+Because of that, we are often going to just write $E$ instead of $\bra E\ket$.
+
+\begin{eg}
+ Suppose we had particles with
+ \[
+ E_{\uparrow} = \varepsilon,\quad E_{\downarrow} = 0.
+ \]
+ So for one particle, we have
+ \[
+ Z_1 = \sum_n e^{-\beta E_n} = 1 + e^{-\beta \varepsilon} = 2 e^{-\beta \varepsilon/2} \cosh \frac{\beta \varepsilon}{2}.
+ \]
+ If we have $N$ non-interacting systems, then since the partition function is multiplicative, we have
+ \[
+ Z = Z_1^N = 2^N e^{-\beta \varepsilon N/2} \cosh^N \frac{\beta \varepsilon}{2}.
+ \]
+ From the partition function, we can compute
+ \[
+ \bra E \ket = - \frac{\d \log Z}{\d \beta} = \frac{N\varepsilon}{2}\left(1 - \tanh \frac{\beta \varepsilon}{2}\right).
+ \]
+ We can check that this agrees with the value we computed with the microcanonical ensemble (where we wrote the result using exponentials directly), but the calculation is much easier.
+\end{eg}
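A quick numerical check (with $\varepsilon = 1$ and an arbitrary $N$) that the $\tanh$ form above agrees with the $1/(e^{\beta\varepsilon} + 1)$ form found microcanonically:

```python
import math

# Check ⟨E⟩ = (Nε/2)(1 − tanh(βε/2)) against Nε/(e^{βε} + 1), with ε = 1.
N = 50.0
results = []
for beta in (0.1, 1.0, 5.0):
    canonical = N / 2 * (1 - math.tanh(beta / 2))
    microcanonical = N / (math.exp(beta) + 1)
    results.append((canonical, microcanonical))
    print(beta, canonical, microcanonical)
```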
+
+\subsubsection*{Entropy}
+When we first began our journey to statistical physics, the starting point of everything was the entropy. When we derived the canonical ensemble, we used the entropy of the everything, including that of the reservoir. However, we are not really interested in the reservoir, so we need to come up with an alternative definition of the entropy.
+
+We can motivate our new definition as follows. We use our previous picture of an ensemble. We have $W \gg 1$ many worlds, and our probability distribution says there are $W p(n)$ copies of the world living in state $\bket{n}$. We can ask what is the number of ways of picking a state for each copy of the world so as to reach this distribution.
+
+We apply the Boltzmann definition of entropy to this counting:
+\[
+ S = k \log \Omega.
+\]
+This time, $\Omega$ is given by
+\[
+ \Omega = \frac{W!}{\prod_n (W p(n))!}.
+\]
+We can use Stirling's approximation, and find that
+\[
+ S_{\mathrm{ensemble}} = -kW \sum_n p(n) \log p(n).
+\]
+This suggests that we should define the entropy of a single copy as follows:
+\begin{defi}[Gibbs entropy]\index{Gibbs entropy}
+ The \emph{Gibbs entropy} of a probability distribution $p(n)$ is
+ \[
+ S = -k \sum_n p(n) \log p(n).
+ \]
+\end{defi}
+If the density operator is given by
+\[
+ \hat{\rho} = \sum_n p(n) \bket{n}\brak{n},
+\]
+then we have
+\[
+ S = - \Tr(\hat{\rho} \log \hat{\rho}).
+\]
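For a diagonal density operator, the trace formula reduces to the Gibbs sum; a small numpy sketch makes this concrete (with $k = 1$ and an arbitrary illustrative distribution):

```python
import numpy as np

# For a diagonal density operator ρ = Σ p(n)|n⟩⟨n|, the operator formula
# S = −k Tr(ρ log ρ) reduces to the Gibbs sum −k Σ p log p (here k = 1).
p = np.array([0.5, 0.3, 0.2])
rho = np.diag(p)

# The log of a diagonal matrix is the diagonal matrix of logs.
log_rho = np.diag(np.log(p))

S_trace = -np.trace(rho @ log_rho)
S_gibbs = -np.sum(p * np.log(p))
print(S_trace, S_gibbs)
```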
+We now check that this definition makes sense, in that when we have a microcanonical ensemble, we do get what we expect.
+\begin{eg}
+ In the microcanonical ensemble, we have
+ \[
+ p(n) =
+ \begin{cases}
+ \frac{1}{\Omega(E)} & E \leq E_n \leq E + \delta E\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ Then we have
+ \begin{align*}
+ S &= -k \sum_{n: E \leq E_n \leq E + \delta E} \frac{1}{\Omega(E)} \log \frac{1}{\Omega(E)} \\
+ &= -k \Omega(E) \cdot \frac{1}{\Omega(E)} \log \frac{1}{\Omega(E)} \\
+ &= k \log \Omega(E).
+ \end{align*}
+ So the Gibbs entropy reduces to the Boltzmann entropy.
+\end{eg}
+
+How about the canonical ensemble?
+\begin{eg}
+ In the canonical ensemble, we have
+ \[
+ p(n) = \frac{e^{-\beta E_n}}{Z}.
+ \]
+ Plugging this into the definition, we find that
+ \begin{align*}
+ S &= -k \sum_n p(n) \log \left(\frac{e^{-\beta E_n}}{Z}\right)\\
+ &= -k \sum_n p(n) (-\beta E_n - \log Z)\\
+ &= k \beta \bra E\ket + k\log Z,
+ \end{align*}
+ using the fact that $\sum p(n) = 1$.
+
+ Using the formula of the expected energy, we find that this is in fact
+ \[
+ S = k \frac{\partial}{\partial T} (T \log Z)_V.
+ \]
+\end{eg}
+So again, if we want to compute the entropy, it suffices to find a nice closed form of $Z$.
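Since the identity $S = k\beta \bra E\ket + k \log Z$ is exact, it is easy to verify numerically for any toy spectrum (here $k = 1$, with arbitrary illustrative energies):

```python
import math

# Exact identity check: S = kβ⟨E⟩ + k log Z vs the Gibbs sum, k = 1.
energies = [0.0, 0.5, 1.3, 2.0]   # arbitrary illustrative levels
beta = 1.5

Z = sum(math.exp(-beta * e) for e in energies)
p = [math.exp(-beta * e) / Z for e in energies]
E_avg = sum(pi * e for pi, e in zip(p, energies))

S_formula = beta * E_avg + math.log(Z)
S_direct = -sum(x * math.log(x) for x in p)
print(S_formula, S_direct)
```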
+
+\subsubsection*{Maximizing entropy}
+It turns out we can reach the canonical ensemble in a different way. The second law of thermodynamics suggests we should always seek to maximize entropy. Now if we take the optimization problem of ``maximizing entropy'', what probability distribution will we end up with?
+
+The answer depends on what constraints we put on the optimization problem. We can try to maximize $S_{\mathrm{Gibbs}}$ over all probability distributions such that $p(n) = 0$ unless $E \leq E_n \leq E + \delta E$. Of course, we also have the constraint $\sum p(n) = 1$. Then we can use a Lagrange multiplier $\alpha$ and extremize
+\[
+ k^{-1}S_{\mathrm{Gibbs}} + \alpha \left(\sum_n p(n) - 1\right).
+\]
+Differentiating with respect to $p(n)$ and solving, we get
+\[
+ p(n) = e^{\alpha - 1}.
+\]
+In particular, this is independent of $n$. So all microstates with energy in this range are equally likely, and this gives the microcanonical ensemble.
+
+What about the canonical ensemble? It turns out this is obtained by maximizing the entropy over all $p(n)$ such that $\bra E\ket$ is fixed. The computation is equally straightforward, and is done on the first example sheet.
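To see the canonical case concretely: for three states, the constraints $\sum_n p(n) = 1$ and fixed $\bra E\ket$ leave a one-parameter family of distributions, and we can check that the Boltzmann distribution maximizes the Gibbs entropy along it. The energies and $\beta$ below are arbitrary illustrative choices:

```python
import math

# Among 3-state distributions with Σp = 1 and fixed ⟨E⟩, the Gibbs
# entropy is maximised at the Boltzmann distribution.
E = [0.0, 1.0, 2.5]
beta = 0.8

Z = sum(math.exp(-beta * e) for e in E)
p_star = [math.exp(-beta * e) / Z for e in E]   # Boltzmann candidate

# Directions preserving both constraints span the null space of
# (1,1,1) and (E0,E1,E2); for 3 states that is their cross product.
v = (E[1] - E[2], E[2] - E[0], E[0] - E[1])

def entropy(p):
    return -sum(x * math.log(x) for x in p)

S_star = entropy(p_star)
for t in (-0.01, -0.001, 0.001, 0.01):
    p = [x + t * vi for x, vi in zip(p_star, v)]
    print(t, S_star - entropy(p))   # positive: perturbations lower S
```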
+
+\subsection{Helmholtz free energy}
+In the microcanonical ensemble, we discussed the second law of thermodynamics, namely the entropy increases with time and the maximum is achieved in an equilibrium.
+
+But this is no longer true in the case of the canonical ensemble, because we now want to maximize the total entropy of the system plus the heat bath, instead of just the system itself. Then is there a proper analogous quantity for the canonical ensemble?
+
+%The answer is given by the \emph{free energy}. In the canonical ensemble, the probability of energy being between $E$ and $E + \delta E$ (for $\delta E \ll kT$) is given by
+%\[
+% p(E) = \Omega_S(E) \frac{e^{-\beta E}}{Z} = \frac{1}{Z} e^{k^{-1} S - \beta E} = \frac{1}{Z} e^{-\beta (E - TS)}
+%\]
+%where $\Omega_S(E)$ is the number of states with energy between $E$ and $\delta E$. Here we assumed that $\delta E$ is small so that the probabilities of the different particles are similar.
+%
+%We define
+
+The answer is given by the Helmholtz free energy.
+\begin{defi}[Helmholtz free energy]\index{Helmholtz free energy}\index{free energy!Helmholtz}
+ The \emph{Helmholtz free energy} is
+ \[
+ F = \bra E\ket - TS.
+ \]
+\end{defi}
+As before, we will often drop the $\bra\ph\ket$.
+%Then we have
+%\[
+% p(E) = \frac{1}{Z} e^{-\beta F}.
+%\]
+%So in the canonical ensemble, the most likely $E$ is the one that minimizes $F$.
+
+In general, in an isolated system, $S$ increases, and $S$ is maximized in equilibrium. In a system in contact with a reservoir, $F$ decreases, and $F$ is minimized in equilibrium. In some sense, $F$ captures the competition between entropy and energy.
+
+Now is there anything analogous to the first law
+\[
+ \d E = T\; \d S - p\; \d V?
+\]
+There is. Using the first law, we can write
+\[
+ \d F = \d E - \d(TS) = -S\; \d T - p\; \d V.
+\]
+When we wrote down the original first law, we had $\d S$ and $\d V$ on the right, and thus it is natural to consider energy as a function of entropy and volume (instead of pressure and temperature). Similarly, it is natural to think of $F$ as a function of $T$ and $V$. Mathematically, the relation between $F$ and $E$ is that $F$ is the \term{Legendre transform} of $E$.
+
+From this expression, we can immediately write down
+\[
+ S = - \left(\frac{\partial F}{\partial T}\right)_V,
+\]
+and the pressure is
+\[
+ p = -\left(\frac{\partial F}{\partial V}\right)_T.
+\]
+As always, we can express the free energy in terms of the partition function.
+\begin{prop}
+ \[
+ F = -kT \log Z.
+ \]
+ Alternatively,
+ \[
+ Z = e^{-\beta F}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We use the fact that
+ \[
+ \frac{\d}{\d \beta} = -kT^2 \frac{\d}{\d T}.
+ \]
+ Then we can start from
+ \begin{align*}
+ F &= E - TS \\
+ &= - \left(\frac{\partial \log Z}{\partial \beta}\right) - TS \\
+ &= kT^2 \left(\frac{\partial \log Z}{\partial T}\right)_V - kT \frac{\partial}{\partial T}(T \log Z)_V\\
+ &= -k T \log Z,
+ \end{align*}
+ and we are done.
+\end{proof}
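As an aside, we can check this identity symbolically on a toy example. The sketch below uses a two-level system with energies $0$ and $\epsilon$ (an illustrative choice, not something fixed by the notes) and verifies $F = E - TS = -kT \log Z$ with sympy:

```python
import sympy as sp

T, k, eps = sp.symbols('T k epsilon', positive=True)
beta = 1 / (k * T)

# toy partition function: two-level system with energies 0 and epsilon
Z = 1 + sp.exp(-beta * eps)

# E = -d(log Z)/d(beta); since d/d(beta) = -kT^2 d/dT, this is +kT^2 d(log Z)/dT
E = k * T**2 * sp.diff(sp.log(Z), T)
# Gibbs entropy: S = k d/dT (T log Z)
S = k * sp.diff(T * sp.log(Z), T)
F = sp.simplify(E - T * S)

assert sp.simplify(F + k * T * sp.log(Z)) == 0   # F = -kT log Z
```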
+
+\subsection{The chemical potential and the grand canonical ensemble}
+So far we have considered situations where we had fixed energy or fixed volume. However, there are often other things that are fixed. For example, the number of particles, or the electric charge of the system would be fixed. If we measure these things, then this restricts which microstates are accessible.
+
+Let's consider $N$. This quantity was held fixed in the microcanonical and canonical ensembles. But the number of microstates, and hence the entropy, certainly depends on $N$, and we can write
+\[
+ S(E, V, N) = k \log \Omega(E, V, N).
+\]
+Previously, we took this expression, and asked what happens when we varied $E$, and we got temperature. We then asked what happens when we vary $V$, and we got pressure. Now we ask what happens when we vary $N$.
+\begin{defi}[Chemical potential]\index{chemical potential}
+ The \emph{chemical potential} of a system is given by
+ \[
+ \mu = -T \left(\frac{\partial S}{\partial N}\right)_{E, V}.
+ \]
+\end{defi}
+
+Why is this significant? Recall that when we only varied $E$, then we figured that two systems in equilibrium must have equal temperature. Then we varied $V$ as well, and found that two systems in equilibrium must have equal temperature \emph{and} pressure. Then it shouldn't be surprising that if we have two interacting systems that can exchange particles, then we must have equal temperature, pressure \emph{and} chemical potential. Indeed, the same argument works.
+
+If we want to consider what happens to the first law when we vary $N$, we can just straightforwardly write
+\[
+ \d S = \left(\frac{\partial S}{\partial E}\right)_{V, N} \d E + \left(\frac{\partial S}{\partial V}\right)_{E, N} \d V + \left(\frac{\partial S}{\partial N}\right)_{E, V} \d N.
+\]
+Then as before, we get
+\[
+ \d E = T \;\d S - p \;\d V + \mu \;\d N.
+\]
+From this expression, we can get some feel for what $\mu$ is: it is the energy cost of adding one particle at fixed $S, V$. We will actually see later that $\mu$ is usually negative. This might seem counter-intuitive, because we shouldn't be able to gain energy by putting in particles in general. However, this $\mu$ is the cost of adding a particle \emph{at fixed entropy and volume}. In general, adding a particle will cause the entropy to increase. So to keep $S$ fixed, we will have to take out energy.
+
+Of course, we can do the same thing with other sorts of external variables. For example, we can change $N$ to $Q$, the electric charge, and then we use $\Phi$, the \term{electrostatic potential} instead of $\mu$. The theory behaves in exactly the same way.
+
+From the first law, we can write
+\[
+ \mu = \left(\frac{\partial E}{\partial N}\right)_{S, V}.
+\]
+In the canonical ensemble, we have fixed $T$, but the free energy will also depend on the number of particles:
+\[
+ F(T, V, N) = E - TS.
+\]
+Again, we have
+\[
+ \d F = \d E - \d (TS) = -S\;\d T - p\;\d V + \mu\;\d N.
+\]
+So we have
+\[
+ \mu = \left(\frac{\partial F}{\partial N}\right)_{T, V}.
+\]
+But in this case, the canonical ensemble is not the most natural thing to consider. Instead of putting our system in a heat reservoir, we put it in a ``heat and particle'' reservoir $R$. In some sense, this is a completely open system --- it can exchange both heat and particles with the external world.
+
+As before, $\mu$ and $T$ are fixed by their values in $R$. We repeat the argument with the canonical ensemble, and we will find that the probability that a system is in state $n$ is
+\[
+ p(n) = \frac{e^{-\beta (E_n - \mu N_n)}}{\mathcal{Z}},
+\]
+where $N_n$ is the number of particles in $\bket{n}$, and we can define the \term{grand canonical partition function}
+\[
+ \mathcal{Z} = \sum_n e^{-\beta(E_n- \mu N_n)}.
+\]
+Of course, we can introduce more and more quantities after $V, N$, and then get more and more terms in the partition function, but they are really just the same. We can quickly work out how we can compute quantities from $\mathcal{Z}$. By writing out the expressions, we have
+\begin{prop}
+ \[
+ \bra E \ket - \mu \bra N\ket = -\left(\frac{\partial \log \mathcal{Z}}{\partial \beta}\right)_{\mu, V}.
+ \]
+\end{prop}
+
+\begin{prop}
+ \[
+ \bra N\ket = \sum_n p(n) N_n = \frac{1}{\beta} \left(\frac{\partial \log \mathcal{Z}}{\partial \mu}\right)_{T, V}.
+ \]
+\end{prop}
+
+As in the canonical ensemble, there is a simple formula for variance:
+\begin{prop}
+ \[
+ \Delta N^2 = \bra N^2 \ket - \bra N\ket^2 = \frac{1}{\beta^2} \left(\frac{\partial^2 \log \mathcal{Z}}{\partial \mu^2}\right)_{T, V} = \frac{1}{\beta} \left(\frac{\partial \bra N\ket}{\partial \mu}\right)_{T, V} \sim N.
+ \]
+ So we have
+ \[
+ \frac{\Delta N}{ \bra N\ket} \sim \frac{1}{\sqrt{N}}.
+ \]
+\end{prop}
+
+So again in the thermodynamic limit, the fluctuations in $\bra N\ket$ are negligible.
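As an aside, the variance identity is easy to verify symbolically. The sketch below borrows the grand canonical partition function of the classical ideal gas, $\log \mathcal{Z} = e^{\beta\mu}V/\lambda^3$, which is derived later in the chapter:

```python
import sympy as sp

T, k, mu, V, lam = sp.symbols('T k mu V lambda_', positive=True)
beta = 1 / (k * T)

# ideal-gas grand canonical partition function (derived later in the chapter)
logZ = sp.exp(beta * mu) * V / lam**3

N = sp.diff(logZ, mu) / beta              # <N> = (1/beta) d(log Z)/d(mu)
var = sp.diff(logZ, mu, 2) / beta**2      # Delta N^2 = (1/beta^2) d^2(log Z)/d(mu)^2

assert sp.simplify(var - N) == 0          # Delta N^2 = <N>, so Delta N / <N> ~ 1/sqrt(N)
```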
+
+We can also calculate the Gibbs entropy:
+\begin{prop}
+ \[
+ S = k \frac{\partial}{\partial T}(T \log \mathcal{Z})_{\mu, V}.
+ \]
+\end{prop}
+
+With the canonical ensemble, we had the free energy. There is an analogous thing we can define for the grand canonical ensemble.
+\begin{defi}[Grand canonical potential]\index{grand canonical potential}\index{$\Phi$}
+ The \emph{grand canonical potential} is
+ \[
+ \Phi = F - \mu N = E - TS - \mu N.
+ \]
+\end{defi}
+Then we have
+\begin{prop}
+ \[
+ \d \Phi = - S \;\d T - p \;\d V - N \;\d \mu.
+ \]
+\end{prop}
+We thus see that it is natural to view $\Phi$ as a function of $T$, $V$ and $\mu$. Using the formula for $\bra E\ket - \mu \bra N\ket$ and the entropy, and proceeding exactly as in the proof for the free energy, we find
+\[
+ \Phi = (\bra E\ket - \mu \bra N\ket) - TS = kT^2 \left(\frac{\partial \log \mathcal{Z}}{\partial T}\right)_{\mu, V} - kT \frac{\partial}{\partial T}(T \log \mathcal{Z})_{\mu, V} = -kT \log \mathcal{Z},
+\]
+which is exactly the form we had for the free energy in the canonical ensemble. In particular, we have
+\[
+ \mathcal{Z} = e^{-\beta \Phi}.
+\]
+\subsection{Extensive and intensive properties}
+So far, we have defined a lot of different quantities --- $p, V, \mu, N, T, S$ etc. In general, we can separate these into two different types. Quantities such as $V, N$ scale with the size of the volume, while $\mu$ and $p$ do not scale with the size.
+
+\begin{defi}[Extensive quantity]\index{extensive quantity}
+ An \emph{extensive quantity} is one that scales proportionally to the size of the system.
+\end{defi}
+
+\begin{defi}[Intensive quantity]\index{intensive quantity}
+ An \emph{intensive quantity} is one that is independent of the size of the system.
+\end{defi}
+
+\begin{eg}
+ $N, V, E, S$ are all extensive quantities.
+\end{eg}
+
+Now note that the entropy is a function of $E, V, N$. So if we scale a system by $\lambda$, we find that
+\[
+ S(\lambda E, \lambda V, \lambda N) = \lambda S(E, V, N).
+\]
+\begin{eg}
+ Recall that we defined
+ \[
+ \frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V, N}.
+ \]
+ So if we scale the system by $\lambda$, then both $S$ and $E$ scale by $\lambda$, and so $T$ does not change, i.e.\ $T$ is intensive. Similarly,
+ \[
+ p = T \left(\frac{\partial S}{\partial V}\right)_{E, N},\quad \mu = -T \left(\frac{\partial S}{\partial N}\right)_{E, V}
+ \]
+ are both intensive.
+\end{eg}
+
+\begin{eg}
+ The \emph{free energy} is defined by
+ \[
+ F = E - TS.
+ \]
+ Since $E$ and $S$ are both extensive, and $T$ is intensive, we find that $F$ is extensive. So
+ \[
+ F (T, \lambda V, \lambda N) = \lambda F(T, V, N).
+ \]
+ Similarly, the grand canonical potential is
+ \[
+ \Phi = F - \mu N.
+ \]
+ Since $F$ and $N$ are extensive and $\mu$ is intensive, we know $\Phi$ is extensive:
+ \[
+ \Phi(T, \lambda V, \mu) = \lambda \Phi(T, V, \mu).
+ \]
+\end{eg}
+This tells us something useful. We see that $\Phi$ must be proportional to $V$. Indeed, differentiating the above equation with respect to $\lambda$, we find
+\[
+ V \left(\frac{\partial \Phi}{\partial V}\right)_{T, \mu} (T, \lambda V, \mu) = \Phi(T, V, \mu).
+\]
+Now setting $\lambda = 1$, we find
+\[
+ \Phi(T, V, \mu) = V \left(\frac{\partial \Phi}{\partial V} \right)_{T, \mu} = -pV.
+\]
+Since $p$ is an intensive quantity, it cannot depend on $V$. So we have
+\[
+ \Phi(T, V, \mu) = -p(T, \mu) V.
+\]
+This is quite a strong result.
+
+\section{Classical gases}
+So far, we have been doing statistical physics starting from quantum mechanics. But statistical mechanics was invented \emph{before} quantum mechanics. So we should be able to do statistical mechanics classically, and we will apply it to the case of classical gases. This classical theory agrees quite well with experiment, except for a few small things that go wrong, and it turns out we have to solve these problems by going quantum.
+
+To do statistical physics classically, the idea is to figure out what the classical version of the partition function should be. We can then use this to derive the different thermodynamic quantities, as we have previously managed to express them in terms of the partition function.
+
+After figuring out the partition function, we are going to study three types of classical gases. We begin by looking at monoatomic ideal gases, which are the simplest gases one can think of. Their atoms have no internal structure and do not interact with each other.
+
+After understanding monoatomic ideal gases, we will move on to consider two possible modifications. The first is the case of a diatomic (ideal) gas, where the molecules now have some internal structure, and hence translational kinetic energy is not the only possible kind of energy. It turns out the theory works out pretty similarly to the monoatomic case, except that the average energy of each particle is higher than in the monoatomic case.
+
+Finally, we will consider what happens if we have gas molecules that do interact with each other, and we will do so perturbatively, assuming the interactions are small (which they are).
+
+\subsection{The classical partition function}
+In the canonical ensemble, the quantum partition function is
+\[
+ Z = \sum_n e^{-\beta E_n}.
+\]
+What is the classical analogue? Classically, we can specify the state of a system by a point in \term{phase space}, which is the space of all positions and momenta.
+
+For example, if we have a single particle, then a point in phase space is just $(\mathbf{q}(t), \mathbf{p}(t))$, the position and momentum of the particle. It is conventional to use $\mathbf{q}$ instead of $\mathbf{x}$ when talking about a point in the phase space. So in this case, the phase space is a $6$-dimensional space.
+
+The equation of motion determines the trajectory of the particle through phase space, given the initial position in phase space. This suggests that we should replace the sum over states by an integral over phase space. Classically, we also know what the energy is. For a single particle, the energy is given by the Hamiltonian
+\[
+ H = \frac{\mathbf{p}^2}{2m} + V(\mathbf{q}).
+\]
+So it seems that we know what we need to know to make sense of the partition function classically. We might want to define the partition function as
+\[
+ Z_1 = \int \d^3 q\; \d^3 p\; e^{-\beta H(\mathbf{p}, \mathbf{q})}.
+\]
+This seems to make sense, except that we expect the partition function to be dimensionless. The solution is to introduce a quantity $h$, which has dimensions of length times momentum. Then we have
+\begin{defi}[Partition function (single particle)]\index{partition function!single particle}
+ We define the \emph{single particle partition function} as
+ \[
+ Z_1 = \frac{1}{h^3} \int \d^3 q\; \d^3 p\; e^{-\beta H(\mathbf{p}, \mathbf{q})}.
+ \]
+\end{defi}
+We notice that whenever we use the partition function, we usually differentiate the log of $Z$. So the factor of $h^3$ doesn't really matter for observable quantities.
+
+However, recall that the entropy comes from $\log Z$ itself, and not just its derivatives, so we might worry that the entropy depends on $h$. But it doesn't matter, because entropy is not actually observable. Only entropy differences are. So we are fine.
+
+The more careful reader might worry that our choice to integrate $e^{-\beta H(\mathbf{p}, \mathbf{q})}$ against $\d^3 q\; \d^3 p$ is rather arbitrary, and there is no good \emph{a priori} reason why we shouldn't integrate it against, say, $\d^3 q\; \d^3 p^5$ instead.
+
+However, it is possible to show that this is indeed the ``correct'' partition function to use, by taking the quantum partition function, and then taking the limit $\hbar \to 0$. Moreover, we find that the correct value of $h$ should be
+\[
+ h = 2\pi \hbar.
+\]
+We will from now on use this value of $h$. In the remainder of the chapter, we will mostly spend our time working out the partition function of several different systems as laid out in the beginning of the chapter.
+
+\subsection{Monoatomic ideal gas}
+We now begin considering ideal gases.
+\begin{defi}[Ideal gas]\index{ideal gas}
+ An \emph{ideal gas} is a gas where the particles do not interact with each other.
+\end{defi}
+Of course, this is never true, but we can hope this is a good approximation when the particles are far apart.
+
+We begin by considering a \term{monoatomic ideal gas}\index{ideal gas!monoatomic}. These gases have no internal structure, and are made up of single atoms. In this case, the only energy we have is the kinetic energy, and we get
+\[
+ H = \frac{\mathbf{p}^2}{2m}.
+\]
+We just have to plug this into our partition function and evaluate the integral. We have
+\begin{align*}
+ Z_1(V, T) &= \frac{1}{(2\pi \hbar)^3} \int \d^3 p\; \d^3 q\; e^{-\beta \mathbf{p}^2 /2m}\\
+ &= \frac{V}{(2\pi \hbar)^3} \int \d^3 p\; e^{-\beta \mathbf{p}^2/2m}.
+\end{align*}
+Here $V$ is the volume of the box containing the particle, which is what we obtain when we do the $\d^3 q$ integral.
+
+This remaining integral is just the Gaussian integral. Recall that we have
+\[
+ \int_{-\infty}^\infty \d x\; e^{-a x^2} = \sqrt{\frac{\pi}{a}}.
+\]
+Using this three times, we find
+\begin{prop}
+ For a monoatomic gas, we have
+ \[
+ Z_1(V, T) = V\left(\frac{mkT}{2\pi \hbar^2}\right)^{3/2} = \frac{V}{\lambda^3},
+ \]
+\end{prop}
+where we define
+\begin{defi}[Thermal de Broglie wavelength]\index{thermal de Broglie wavelength}\index{$\lambda$}\index{de Broglie wavelength!thermal}
+ The \emph{thermal de Broglie wavelength} of a gas at temperature $T$ is
+ \[
+ \lambda = \sqrt{\frac{2\pi \hbar^2}{mkT}}.
+ \]
+\end{defi}
+If we think of the wavelength as the ``size'' of the particle, then we see that the partition function counts the number of particles we can fit in the volume $V$. We notice that this partition function involves $\hbar$, which is a bit weird since we are working classically, but we will see that the $\hbar$ doesn't appear in the formulas we derive from this.
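To get a feel for the numbers (an illustrative aside, using textbook values for the constants), we can evaluate $\lambda$ for helium at room temperature:

```python
import math

hbar = 1.0546e-34   # J s
k = 1.381e-23       # J / K, Boltzmann's constant
u = 1.6605e-27      # kg, atomic mass unit

def thermal_wavelength(m, T):
    """lambda = sqrt(2 pi hbar^2 / (m k T))"""
    return math.sqrt(2 * math.pi * hbar ** 2 / (m * k * T))

lam = thermal_wavelength(4 * u, 300)   # helium at room temperature
assert 4e-11 < lam < 6e-11             # about half an angstrom
```

So at everyday temperatures the thermal wavelength is comparable to atomic sizes, and much smaller than typical interparticle distances in a gas.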
+
+The generalization to multiple particles is straightforward. If we have $N$ particles, since the partition function is again multiplicative, we have
+\[
+ Z(N, V, T) = Z_1^N = V^N \lambda^{-3N}.
+\]
+There is a small caveat at this point. We will later see that this is not quite right. When we think about the quantum version, if the particles are indistinguishable, then we would have counted each state $N!$ times, which would give the wrong answer. Again, this doesn't affect any observable quantity, so we will put this issue aside for the moment, until we get to studying the entropy itself, in which case this $N!$ \emph{does} matter.
+
+We can similarly \emph{define} the pressure to be\index{$p$}\index{pressure}
+\[
+ p = - \left(\frac{\partial F}{\partial V}\right)_T = \frac{\partial}{\partial V} (k T \log Z)_T.
+\]
+Then plugging our partition function into this definition, we find
+\[
+ p = \frac{NkT}{V}.
+\]
+Rearranging, we obtain
+\begin{prop}[Ideal gas law]\index{ideal gas law}
+ \[
+ pV = NkT.
+ \]
+\end{prop}
+Notice that in this formula, the $\lambda$ has dropped out, and there is no dependence on $\hbar$.
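As a quick symbolic check of this derivation (an aside, not part of the notes), we can recover the ideal gas law from $F = -kT \log Z$ with sympy. Note that $\lambda$ depends only on $T$, so it is a constant for the $V$-derivative:

```python
import sympy as sp

T, k, V, lam, N = sp.symbols('T k V lambda_ N', positive=True)

Z = (V / lam**3) ** N          # Z = Z_1^N; lambda depends only on T
F = -k * T * sp.log(Z)         # Helmholtz free energy
p = -sp.diff(F, V)             # p = -(dF/dV)_T

assert sp.simplify(p - N * k * T / V) == 0   # pV = NkT
```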
+
+\begin{defi}[Equation of state]\index{equation of state}
+ An \emph{equation of state} is an equation that relates state variables, i.e.\ variables that depend only on the current state of the system, as opposed to how we obtained this system.
+\end{defi}
+The ideal gas law is an example of an equation of state.
+
+Let's now look at the energy of the ideal gas. We can similarly compute
+\[
+ \bra E\ket = - \left(\frac{\partial \log Z}{\partial \beta}\right)_V = \frac{3}{2} NkT = 3N \left(\frac{1}{2} kT\right).
+\]
+This is a general phenomenon. Our system has $N$ particles, and each particle has three independent directions it can move in. So there are $3N$ degrees of freedom.
+\begin{law}[Equipartition of energy]\index{Equipartition of energy}
+ Each degree of freedom of an ideal gas contributes $\frac{1}{2}kT$ to the average energy.
+\end{law}
+In the next section, we will study gases with internal structure, hence internal degrees of freedom, and each such degree of freedom will again contribute $\frac{1}{2}kT$ to the average energy.
+
+Of course, this law requires some hidden assumptions we do not make precise. If we add a degree of freedom $s$ with a term $s^{5.3} \log (2s + 1)$ in the Hamiltonian, then there is no reason to believe the contribution to the average energy would still be $\frac{1}{2} kT$. We will also see in the next section that if the degree of freedom has some potential energy, then there will be even more contribution to the energy.
+
+There are other quantities of the gas we can compute. We know the average energy of a single particle is
+\[
+ \frac{\bra \mathbf{p}^2\ket}{2m} = \frac{3}{2} kT.
+\]
+So we have
+\[
+ \bra \mathbf{p}^2 \ket \sim mkT.
+\]
+Thus, for a single particle, we have
+\[
+ |\mathbf{p}| \sim \sqrt{mkT},
+\]
+and so
+\[
+ \lambda \sim \frac{h}{|\mathbf{p}|}.
+\]
+This is the usual formula for the de Broglie wavelength. So our thermal de Broglie wavelength is indeed related to the de Broglie wavelength.
+
+Finally, we can compute the heat capacity
+\[
+ C_V = \left(\frac{\partial E}{\partial T}\right)_V = \frac{3}{2}Nk.
+\]
+\subsubsection*{Boltzmann's constant}
+Recall that Boltzmann's constant is
+\[
+ k = \SI{1.381e-23}{\joule\per\kelvin}.
+\]
+This number shows that we were terrible at choosing units. If we were to invent physics again, we would pick energy to have the same unit as temperature, so that $k = 1$. This number $k$ is just a conversion factor between temperature and energy because we chose the wrong units.
+
+But of course, the units were not chosen randomly in order to mess up our thermodynamics. The units were chosen to relate to scales we meet in everyday life. So it is still reasonable to ask why $k$ has such a small value. We look at the ideal gas law.
+\[
+ \frac{pV}{T} = Nk.
+\]
+We would expect when we plug in some everyday values for the left hand side, the result would be somewhat sensible, because our ancestors were sane when picking units (hopefully).
+
+Indeed, we can put in numbers
+\begin{align*}
+ p &= \SI{e5}{\newton\per\meter\squared}\\
+ V &= \SI{1}{\meter\cubed}\\
+ T &= \SI{300}{\kelvin},
+\end{align*}
+and we find that the LHS is $\sim 300$.
+
+So what makes $k$ such a tiny number is that $N$ is huge. The number of particles is of the order $10^{23}$. Thus, for $Nk$ to have a sensible value, $k$ must be tiny.
+
+The fact that $k$ is small tells us that everyday lumps of matter contain a lot of particles, and in turn, this tells us that atoms are small.
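The arithmetic here is a one-liner to check (illustrative everyday values only):

```python
k = 1.381e-23     # J/K, Boltzmann's constant

p = 1e5           # N/m^2, roughly atmospheric pressure
V = 1.0           # m^3
T = 300.0         # K, room temperature

lhs = p * V / T   # this is N k
N = lhs / k       # so N must be enormous

assert 330 < lhs < 340        # an everyday-sized number, as promised
assert 2e25 < N < 2.5e25      # ... because N is of order 10^25
```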
+
+\subsubsection*{Entropy}
+The next thing to study is the entropy of an ideal gas. We previously wrote down
+\[
+ Z = Z_1^N,
+\]
+and briefly noted that this isn't actually right. In quantum mechanics, we know that if we swap two indistinguishable particles, then we get back the same state, at least up to a sign. Similarly, if we permute any of our particles, which are indistinguishable, then we get the same system. We are over-counting the states. What we really should do is to divide by the number of ways to permute the particles, namely $N!$:
+\[
+ Z = \frac{1}{N!} Z_1^N.
+\]
+Just as in the constant $h$ in our partition function, this $N!$ doesn't affect any of our observables. In particular, $p$ and $\bra E\ket$ are unchanged. However, this $N!$ does affect the entropy
+\[
+ S = \frac{\partial}{\partial T} (kT \log Z).
+\]
+Plugging the partition function in and using Stirling's formula, we get
+\[
+ S = Nk \left(\log \left(\frac{V}{N\lambda^3}\right) + \frac{5}{2}\right).
+\]
+This is known as the \term{Sackur-Tetrode equation}.
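The Stirling computation can be checked symbolically (an aside, not part of the notes). Here $c$ stands for the $T$-independent part of the thermal wavelength, so $\lambda = c/\sqrt{T}$:

```python
import sympy as sp

T, k, V, N, c = sp.symbols('T k V N c', positive=True)

lam = c / sp.sqrt(T)                  # lambda = sqrt(2 pi hbar^2 / (m k)) / sqrt(T)
# Stirling: log(Z_1^N / N!) ~ N log(V / (N lambda^3)) + N
logZ = N * sp.log(V / (N * lam**3)) + N

S = sp.diff(k * T * logZ, T)          # S = d/dT (kT log Z)
expected = N * k * (sp.log(V / (N * lam**3)) + sp.Rational(5, 2))

assert sp.simplify(S - expected) == 0   # the Sackur-Tetrode equation
```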
+
+Recall that the entropy is an extensive property. If we re-scale the system by a factor of $\alpha$, then
+\[
+ N \mapsto \alpha N,\quad V \mapsto \alpha V.
+\]
+Since $\lambda$ depends on $T$ only, it is an intensive quantity, and this indeed scales as $S \mapsto \alpha S$. But for this to work, we really needed the $N$ inside the logarithm, and the reason we have the $N$ inside the logarithm is that we had an $N!$ in the partition function.
+
+When people first studied statistical mechanics of ideal gases, they didn't know about quantum mechanics, and didn't know they should put in the $N!$. Then the resulting value of $S$ is no longer extensive. This leads to \term{Gibbs paradox}. The actual paradox is as follows:
+
+Suppose we have a box of gas with entropy $S$. We introduce a partition between the gases, so that the two halves have entropy $S_1$ and $S_2$. Then the fact that the entropy is not extensive means
+\[
+ S \not= S_1 + S_2.
+\]
+This means by introducing or removing a partition, we have increased or decreased the entropy, which violates the second law of thermodynamics.
+
+This $N!$, which comes from quantum effects, fixes this problem. This is a case where quantum mechanics is needed to understand something that really should be classical.
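We can see the paradox, and its resolution, numerically. The sketch below (illustrative values, using Stirling's formula) compares the entropy of a doubled system with twice the entropy of the original, with and without the $N!$:

```python
import math

def entropy_over_k(N, V, lam, indistinguishable):
    """S/k for the ideal gas, from Z = Z_1^N, optionally divided by N!
    (using Stirling's formula log N! ~ N log N - N)."""
    logZ = N * math.log(V / lam**3)
    if indistinguishable:
        logZ -= N * math.log(N) - N
    # S = d/dT (kT log Z); the T-dependence lambda ~ T^{-1/2} contributes 3N/2
    return logZ + 1.5 * N

N, V, lam = 1e23, 1.0, 5e-11   # illustrative values

def is_extensive(indist):
    S1 = entropy_over_k(N, V, lam, indist)
    S2 = entropy_over_k(2 * N, 2 * V, lam, indist)
    return abs(S2 - 2 * S1) < 1e-6 * abs(S1)

assert not is_extensive(False)   # without the N!, doubling the system gains entropy
assert is_extensive(True)        # the N! restores extensivity
```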
+
+\subsubsection*{Grand canonical ensemble}
+We now consider the case where we have a grand canonical ensemble, so that we can exchange heat \emph{and} particles. In the case of gas, we can easily visualize this as a small open box of gas where gas is allowed to freely flow around. The grand ensemble has partition function
+\begin{align*}
+ \mathcal{Z}_{\mathrm{ideal}}(\mu, V, T) &= \sum_{N = 0}^\infty e^{\beta \mu N} Z_{\mathrm{ideal}}(N, V, T) \\
+ &= \sum_{N = 0}^\infty \frac{1}{N!} \left(\frac{e^{\beta \mu} V}{\lambda^3}\right)^N\\
+ &= \exp\left(\frac{e^{\beta \mu}V}{\lambda^3}\right)
+\end{align*}
+Armed with this, we can now calculate the average number of particles in our system. Doing the same computations as before, we have
+\[
+ N = \frac{1}{\beta} \left(\frac{\partial \log \mathcal{Z}_{\mathrm{ideal}}}{\partial \mu}\right)_{V, T} = \frac{e^{\beta \mu}V}{\lambda^3}.
+\]
+So we can work out the value of $\mu$:
+\[
+ \mu = kT \log \left(\frac{\lambda^3 N}{V}\right).
+\]
+Now we can use this to get some idea of what the chemical potential actually means. For a classical gas, we need the wavelength to be significantly less than the average distance between particles, i.e.
+\[
+ \lambda \ll \left(\frac{V}{N}\right)^{1/3},
+\]
+so that the particles are sufficiently separated. If this is not true, then quantum effects are important, and we will look at them later. If we plug this into the logarithm, we find that $\mu < 0$.
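Plugging in rough numbers for helium at room temperature (illustrative values) confirms that $\mu < 0$ deep in the classical regime:

```python
import math

k = 1.381e-23        # J/K
T = 300.0            # K
lam = 5.0e-11        # m, thermal wavelength of helium at 300 K
n = 2.4e25           # m^-3, number density at roughly atmospheric pressure

mu = k * T * math.log(lam ** 3 * n)   # mu = kT log(lambda^3 N / V)

assert lam ** 3 * n < 1   # classical regime: lambda << interparticle spacing
assert mu < 0             # so the logarithm, hence mu, is negative
```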
+
+Remember that $\mu$ is defined by
+\[
+ \mu = \left(\frac{\partial E}{\partial N}\right)_{S, V}.
+\]
+It might seem odd that we get energy out when we add a particle. But note that this derivative is taken with $S$ fixed. Normally, we would expect adding a particle to increase the entropy. So to keep the entropy fixed, we must simultaneously take out energy of the system, and so $\mu$ is negative.
+
+Continuing our exploration of the grand canonical ensemble, we can look at the fluctuations in $N$, and find
+\[
+ \Delta N^2 = \frac{1}{\beta^2} \left(\frac{\partial^2 \log \mathcal{Z}_{\mathrm{ideal}}}{\partial \mu^2}\right)_{T, V} = N.
+\]
+So we find that
+\[
+ \frac{\Delta N}{N} = \frac{1}{\sqrt{N}} \to 0
+\]
+as $N \to \infty$. So in the thermodynamic limit, the fluctuations are negligible.
+
+We can now obtain our equation of state. Recall the grand canonical potential is
+\[
+ \Phi = -kT \log \mathcal{Z},
+\]
+and that
+\[
+ pV = -\Phi.
+\]
+Since we know $\log \mathcal{Z}$, we can work out what $pV$ is. We find that
+\[
+ pV = kT \frac{e^{\beta \mu}V}{\lambda^3} = NkT.
+\]
+So the ideal gas law\index{ideal gas law} is also true in the grand canonical ensemble. Also, cancelling the $V$ from both sides, we see that this determines $p$ as a function of $T$ and $\mu$:
+\[
+ p = \frac{kT e^{\beta\mu}}{\lambda^3}.
+\]
+
+\subsection{Maxwell distribution}
+We previously calculated the average energy of our gas, and hence the average energy of an atom in the gas. But that is just the average. We might want to figure out the distribution of energy in the atoms of our gas. Alternatively, what is the distribution of particle speed in a gas?
+
+We can get that fairly straightforwardly from what we've got so far.
+
+We ask what's the probability of a given particle being in a region of phase space of volume $\d^3 q \;\d^3 p$ centered at $(\mathbf{q}, \mathbf{p})$. We know what this is. It is just
+\[
+ C e^{-\beta \mathbf{p}^2/2m}\;\d^3 \mathbf{q}\;\d^3 \mathbf{p}
+\]
+for some normalization constant $C$, since the kinetic energy of a particle is $\mathbf{p}^2/2m$. Now suppose we don't care about the position, and just want to know about the momentum. So we integrate over $q$, and the probability that the momentum is within $\d^3 \mathbf{p}$ of $\mathbf{p}$ is
+\[
+ CV \d^3 \mathbf{p}\; e^{-\beta \mathbf{p}^2/2m}.
+\]
+Let's say we are interested in velocity instead of momentum, so this is equal to
+\[
+ CV m^3 \d^3 \mathbf{v}\; e^{-\beta m\mathbf{v}^2/2}.
+\]
+Moreover, we are only interested in the speed, not the velocity itself. So we introduce spherical polar coordinates $(v, \theta, \phi)$ for $\mathbf{v}$. Then we get
+\[
+ CV m^3 \sin \theta\; \d \theta\;\d \varphi\;v^2 \d v\;e^{-mv^2/(2kT)}.
+\]
+Now we don't care about the direction, so we again integrate over all possible values of $\theta$ and $\phi$. Thus, the probability that the speed is within $\d v$ of $v$ is
+\[
+ f(v)\;\d v = \mathcal{N} v^2 e^{-mv^2/(2kT)} \;\d v,
+\]
+where we absorbed all our numbers into the constant $\mathcal{N}$. Then $f$ is the probability density function of $v$. We can fix this constant $\mathcal{N}$ by normalizing:
+\[
+ \int_0^\infty f(v) \;\d v = 1.
+\]
+So this fixes
+\[
+ \mathcal{N} = 4\pi \left(\frac{m}{2\pi kT}\right)^{3/2}.
+\]
+This $f(v)$ is called the \term{Maxwell distribution}.
+
+We can try to see what $f$ looks like. We see that for large $v$, it is exponentially decreasing, and for small $v$, it is quadratic in $v$. We can plot it for a few monoatomic ideal gases:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$v$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$f(v)$};
+
+ \foreach \m/\colour in {4/mblue, 20/mgreen, 40/morange, 132/mred} {
+ \draw [\colour, thick, domain=0:5.8, samples=50] plot [smooth] (\x, {1/10 * (\m/4)^(3/2) * \x^2 * exp (-\m / 4 * (\x^2 / 16))});
+ }
+ \node [mblue] at (4.3, 0.9) {$^{4}$He};
+ \node [mgreen] at (2.9, 1.1) {$^{20}$Ne};
+ \node [morange] at (1.5, 2.1) {$^{40}$Ar};
+ \node [mred] at (0.8, 3.7) {$^{132}$Xe};
+ \end{tikzpicture}
+\end{center}
+We see that as we increase the mass, the distribution shifts towards lower speeds. This is expected, because the average kinetic energy is $\frac{3}{2}kT$ regardless of the mass, and so lighter particles must move faster on average.
+
+We can sanity-check the distribution by computing the mean square speed. We have
+\[
+ \bra v^2\ket = \int_0^\infty v^2 f(v) \;\d v = \frac{3kT}{m}.
+\]
+So we find that
+\[
+ \bra E\ket = \frac{1}{2} m\bra v^2\ket = \frac{3}{2} kT.
+\]
+This agrees with the equipartition of energy, as it must. But now we have an actual distribution, we can compute other quantities like $\bra v^4\ket$.
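As a further numerical sanity check (an aside, not part of the notes), we can integrate the Maxwell distribution directly and confirm both the normalization and $\bra v^2\ket = 3kT/m$, here for helium at room temperature:

```python
import math

k = 1.381e-23            # J/K
T = 300.0                # K
m = 4 * 1.6605e-27       # kg, helium

def f(v):
    """Maxwell speed distribution f(v) = N v^2 exp(-m v^2 / 2kT)."""
    norm = 4 * math.pi * (m / (2 * math.pi * k * T)) ** 1.5
    return norm * v * v * math.exp(-m * v * v / (2 * k * T))

# crude Riemann sum; typical speeds are ~ sqrt(kT/m) ~ 800 m/s,
# so the tail beyond 8000 m/s is negligible
dv = 1.0
total = sum(f(v) * dv for v in range(8000))
mean_v2 = sum(v * v * f(v) * dv for v in range(8000))

assert abs(total - 1) < 1e-3                     # f is normalized
assert abs(mean_v2 * m / (3 * k * T) - 1) < 1e-3  # <v^2> = 3kT/m
```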
+
+Note that in these derivations, we assumed we were dealing with a monoatomic ideal gas, but it happens that in fact this holds for a much more general family of gases. We will not go much into details.
+
+\subsection{Diatomic gases}
+We now move on to consider more complicated gases. If we have molecules, then they can contain other forms of energy such as rotational energy.
+
+Everything we are doing is classical, so we need to come up with a model of a diatomic molecule, instead of studying the actual quantum system of molecules. For simplicity, we can model them as two point masses joined together by a massless spring.
+\begin{center}
+ \begin{tikzpicture}
+ \fill circle [radius=0.1];
+ \fill (2, 0) circle [radius=0.1];
+ \node [above] at (0, 0.1) {$m_1$};
+ \node [above] at (2, 0.1) {$m_2$};
+ \draw [decorate, decoration={snake, amplitude=1, segment length=5}] (0, 0) -- (2, 0);
+ \end{tikzpicture}
+\end{center}
+As we know from classical dynamics, we can decompose the motion into different components:
+\begin{itemize}
+ \item Translation of the center of mass. We can view this as a single free particle of mass $M = m_1 + m_2$.
+ \item Rotations about the center of mass. There are two axes of rotation orthogonal to the spring, each with moment of inertia $I$. We will ignore rotation about the axis parallel to the spring, because we assume the masses are point masses.
+ \item Vibrations along the axis of symmetry. The important quantity here is the reduced mass
+ \[
+ m = \frac{m_1 m_2}{m_1 + m_2}.
+ \]
+\end{itemize}
+We will assume all these motions are independent. In reality, the translation is indeed independent from the others, but the rotations and vibrations can couple in complicated ways. But we are lazy, and will make this simplifying assumption. We then have
+\[
+ Z_1 = Z_{\mathrm{trans}} Z_{\mathrm{rot}} Z_{\mathrm{vib}}.
+\]
+We can obtain $Z_{\mathrm{trans}}$ just as the partition function we obtained previously for a single particle, with mass $M$. The remaining two terms depend only on the relative position of the masses. So they do not depend on the motion of the molecule as a whole, and are going to be independent of $V$.
+
+Now when we differentiate $\log Z_1$ with respect to $V$, then the latter two terms drop out, and thus we deduce that the ideal gas law still holds for diatomic gases.
+
+We now try to figure out how rotations and vibrations contribute to the partition function. For rotations, we can parametrize this using spherical polars $(\theta, \varphi)$. The Lagrangian for this rotational motion is given by
+\[
+ \mathcal{L}_{\mathrm{rot}} = \frac{1}{2} I (\dot{\theta}^2 + \sin^2 \theta \dot{\varphi}^2).
+\]
+The conjugate momenta are given by
+\begin{align*}
+ p_\theta &= \frac{\partial \mathcal{L}}{\partial \dot{\theta}} = I \dot{\theta}\\
+ p_\varphi &= \frac{\partial \mathcal{L}}{\partial \dot{\varphi}} = I \sin^2 \theta \dot\varphi.
+\end{align*}
+So we have
+\[
+ H_{\mathrm{rot}} = \dot{\theta} p_\theta + \dot{\varphi} p_\varphi - \mathcal{L}_{\mathrm{rot}} = \frac{p_\theta^2}{2I} + \frac{p_\varphi^2}{2I \sin^2 \theta}.
+\]
+We can then work out the partition function
+\[
+ Z_{\mathrm{rot}} = \frac{1}{(2\pi \hbar)^2} \int \d \theta\; \d \varphi\; \d p_\theta \; \d p_\varphi\; e^{-\beta H_{\mathrm{rot}}}.
+\]
+We note that the $p_\theta$ and $p_\varphi$ integrals are just Gaussians, and so we get
+\[
+ Z_{\mathrm{rot}} = \frac{1}{(2\pi \hbar)^2} \sqrt{\frac{2\pi I}{\beta}} \int_0^\pi \d \theta \sqrt{\frac{2\pi I \sin^2 \theta}{\beta}} \int_0^{2\pi} \d \varphi = \frac{2IkT}{\hbar^2}.
+\]
+Then we get
+\[
+ E_{\mathrm{rot}} = - \frac{\partial}{\partial \beta} \log Z_{\mathrm{rot}} = \frac{1}{\beta} = kT.
+\]
+This agrees with the equipartition of energy, as here we have two degrees of freedom, and each contributes $\frac{1}{2}kT$.
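As a quick numerical sanity check (not part of the original derivation), we can evaluate $Z_{\mathrm{rot}}$ by doing the remaining $\theta$ integral numerically, and confirm both the closed form $2IkT/\hbar^2$ and the equipartition result $E_{\mathrm{rot}} = kT$. The units $\hbar = I = k = 1$ and the value $\beta = 0.7$ are arbitrary illustrative choices.

```python
import math

HBAR = I = 1.0  # illustrative units: hbar = I = k_B = 1

def z_rot(beta, n=2000):
    # The p_theta and p_phi integrals are Gaussians, done in closed form;
    # only the theta integral is evaluated numerically (midpoint rule).
    pref = (2 * math.pi / (2 * math.pi * HBAR) ** 2) * math.sqrt(2 * math.pi * I / beta)
    h = math.pi / n
    theta_int = sum(math.sqrt(2 * math.pi * I / beta) * math.sin((i + 0.5) * h) * h
                    for i in range(n))
    return pref * theta_int

beta = 0.7
# closed form: Z_rot = 2 I k T / hbar^2 = 2 I / (beta hbar^2)
assert abs(z_rot(beta) - 2 * I / beta) < 1e-3

# E_rot = -d(log Z_rot)/d(beta) should equal 1/beta = kT
eps = 1e-5
E_rot = -(math.log(z_rot(beta + eps)) - math.log(z_rot(beta - eps))) / (2 * eps)
assert abs(E_rot - 1 / beta) < 1e-3
```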
+
+\begin{eg}
+ In the case of a system where vibrations are not important, e.g.\ if the spring is very rigid, then we are done, and we have found
+ \[
+ Z_1 = Z_{\mathrm{trans}} Z_{\mathrm{rot}} \propto (kT)^{5/2}.
+ \]
+ Then the partition function for $N$ particles is
+ \[
+ Z = \frac{1}{N!} Z_1^N,
+ \]
+ and the total energy is
+ \[
+ -\frac{\partial}{\partial \beta} \log Z = \frac{5}{2} NkT.
+ \]
+ This is exactly as we expect. There is $\frac{3}{2}NkT$ from translation, and $NkT$ from rotation. From this, we obtain the heat capacity
+ \[
+ C_V = \frac{5}{2} Nk.
+ \]
+\end{eg}
+We now put in the vibrations. Since we are modelling it by a spring, we can treat it as a harmonic oscillator, with mass $m$ and frequency $\omega$, which is determined by the bond strength. If we let $\zeta$ be the displacement from equilibrium, then the Hamiltonian is
+\[
+ H_{\mathrm{vib}} = \frac{p_\zeta^2}{2m} + \frac{1}{2}m \omega^2 \zeta^2.
+\]
+So we have
+\[
+ Z_{\mathrm{vib}} = \frac{1}{2\pi \hbar} \int \d \zeta\;\d p_\zeta\;e^{-\beta H_{\mathrm{vib}}} = \frac{kT}{\hbar \omega}.
+\]
+From the partition function, we can get the energy of a single molecule, and find it to be
+\[
+ E_{\mathrm{vib}} = kT.
+\]
+This is the average energy in the vibrational motion of a molecule. This looks a bit funny. The vibration is only one degree of freedom, but the equipartition of energy seems to think this has two degrees of freedom. It turns out equipartition of energy behaves differently when we have potential energy. In general, we should think of having one degree of freedom for each quadratic term in the Hamiltonian, and so we have one degree of freedom for kinetic energy and another for potential.
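This counting of quadratic terms can be checked numerically (an illustrative sketch, not in the notes): in the canonical ensemble, each quadratic term $\frac{1}{2}a q^2$ in the Hamiltonian is distributed as a Gaussian with mean energy $\frac{1}{2}kT$. We sample the two quadratic terms of $H_{\mathrm{vib}}$ directly; the values of $m$, $\omega$ and $kT$ are arbitrary.

```python
import math
import random

random.seed(0)
kT = 1.3             # arbitrary temperature, units with k_B = 1
m, omega = 2.0, 0.5  # arbitrary oscillator parameters

# For H = p^2/(2m) + (1/2) m w^2 z^2, the Boltzmann distribution factorizes
# into independent Gaussians: p ~ N(0, m kT), z ~ N(0, kT/(m w^2)).
N = 200_000
kin = pot = 0.0
for _ in range(N):
    p = random.gauss(0.0, math.sqrt(m * kT))
    z = random.gauss(0.0, math.sqrt(kT / (m * omega ** 2)))
    kin += p * p / (2 * m) / N
    pot += 0.5 * m * omega ** 2 * z * z / N

# each quadratic term contributes kT/2, so E_vib = kin + pot = kT
assert abs(kin - kT / 2) < 0.02
assert abs(pot - kT / 2) < 0.02
```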
+
+Putting all three types of motion together, we get
+\[
+ E = \frac{7}{2} NkT,
+\]
+and the heat capacity is
+\[
+ C_V = \frac{7}{2}Nk.
+\]
+Note that these results are completely independent of the parameters describing the molecule!
+
+Does this agree with experiments? The answer is no! If we go and measure the heat capacity of, say, molecular hydrogen, we find something like
+\begin{center}
+ \begin{tikzpicture}[yscale=0.7]
+ \draw [->] (0, 0) -- (5, 0) node [right] {$T$};
+ \draw [->] (0, 0) -- (0, 5) node [above] {$C_V/Nk$};
+
+ \draw [dashed] (1.75, 0) node [below] {$200$} -- +(0, 5);
+ \draw [dashed] (3.5, 0) node [below] {$2000$} -- +(0, 5);
+
+ \draw [dashed] (0, 1.5) node [left] {$1.5$} -- +(5, 0);
+ \draw [dashed] (0, 2.5) node [left] {$2.5$} -- +(5, 0);
+ \draw [dashed] (0, 3.5) node [left] {$3.5$} -- +(5, 0);
+
+ \draw [thick, mblue] (0, 1.5) -- (1.5, 1.5) .. controls (1.75, 1.5) and (1.75, 2.5) .. (2, 2.5) -- (3.25, 2.5) .. controls (3.5, 2.5) and (3.5, 3.5) .. (3.75, 3.5) -- (5, 3.5);
+ \end{tikzpicture}
+\end{center}
+So it seems like our prediction only works when we have high enough temperature. At lower temperatures, the vibration modes freeze out. Then as we further lower the temperature, the rotation modes also go away. This is a big puzzle classically! It is explained by quantum effects, which we will discuss later.
+
+\subsection{Interacting gases}
+So far, we have been talking about ideal gases. What happens if there \emph{are} interactions?
+
+For a real gas, if it is sufficiently dilute, i.e.\ $N/V$ is small, then we expect the interactions to be negligible. This suggests that we can try to capture the effects of interactions perturbatively in $\frac{N}{V}$. We can write the ideal gas law as
+\[
+ \frac{p}{kT} = \frac{N}{V}.
+\]
+We can think of this as a first term in an expansion, and add higher order terms
+\[
+ \frac{p}{kT} = \frac{N}{V} + B_2(T) \frac{N^2}{V^2} + B_3 (T) \frac{N^3}{V^3} + \cdots.
+\]
+Note that the coefficients depend on $T$ only, as they should be intensive quantities. This is called the \term{Virial expansion}, and the coefficients $B_k(T)$ are the \term{Virial coefficients}. Our goal is to figure out what these $B_k(T)$ are.
+
+We suppose the interaction is given by a potential energy $U(r)$ between two neutral atoms (assuming the gas is monatomic) at separation $r$.
+
+\begin{eg}
+ In genuine atoms, for large $r$ (relative to atomic size), we have
+ \[
+ U(r) \propto -\frac{1}{r^6}.
+ \]
+ This comes from dipole-dipole interactions. Heuristically, we can understand this power of $6$ as follows --- while the expectation value of the electric dipole of an atom vanishes, there is a non-trivial probability that the dipole $p_1$ is non-zero. This gives an electric field of
+ \[
+ E \sim \frac{p_1}{r^3}.
+ \]
+ This induces a dipole $p_2$ in atom $2$. So we have
+ \[
+ p_2 \propto E \sim \frac{p_1}{r^3}.
+ \]
+ So the resulting potential energy is
+ \[
+ U \propto -p_2 E \sim -\frac{p_1^2}{r^6}.
+ \]
+ This is called the \term{van der Waals interaction}. Note that the negative sign means this is an attractive force.
+
+ For small $r$, the electron orbitals of the atoms start to overlap, and then we get repulsion due to the Pauli principle. All together, we obtain the \term{Lennard-Jones potential} given by
+ \[
+ U(r) = U_0\left(\left(\frac{r_0}{r}\right)^{12} - \left(\frac{r_0}{r}\right)^6\right).
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$r$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$U(r)$};
+
+ \draw [mblue, thick ] plot coordinates {(0.955,2.00000) (0.960,1.77280) (0.965,1.47565) (0.970,1.20365) (0.975,0.95482) (0.980,0.72738) (0.985,0.51965) (0.990,0.33010) (0.995,0.15732) (1.000,0.00000) (1.005,-0.14306) (1.010,-0.27298) (1.015,-0.39077) (1.020,-0.49739) (1.025,-0.59370) (1.030,-0.68052) (1.035,-0.75859) (1.040,-0.82859) (1.045,-0.89116) (1.050,-0.94689) (1.055,-0.99632) (1.060,-1.03996) (1.065,-1.07826) (1.070,-1.11165) (1.075,-1.14054) (1.080,-1.16528) (1.085,-1.18622) (1.090,-1.20366) (1.095,-1.21791) (1.100,-1.22922) (1.105,-1.23784) (1.110,-1.24400) (1.115,-1.24792) (1.120,-1.24978) (1.125,-1.24977) (1.130,-1.24806) (1.135,-1.24480) (1.140,-1.24014) (1.145,-1.23420) (1.150,-1.22710) (1.155,-1.21897) (1.160,-1.20990) (1.165,-1.19999) (1.170,-1.18932) (1.175,-1.17799) (1.180,-1.16606) (1.185,-1.15361) (1.190,-1.14069) (1.195,-1.12737) (1.200,-1.11371) (1.205,-1.09974) (1.210,-1.08553) (1.215,-1.07110) (1.220,-1.05650) (1.225,-1.04177) (1.230,-1.02693) (1.235,-1.01202) (1.240,-0.99707) (1.245,-0.98210) (1.250,-0.96712) (1.255,-0.95217) (1.260,-0.93727) (1.265,-0.92242) (1.270,-0.90764) (1.275,-0.89296) (1.280,-0.87837) (1.285,-0.86390) (1.290,-0.84956) (1.295,-0.83534) (1.300,-0.82127) (1.305,-0.80735) (1.310,-0.79358) (1.315,-0.77997) (1.320,-0.76652) (1.325,-0.75325) (1.330,-0.74015) (1.335,-0.72722) (1.340,-0.71448) (1.345,-0.70191) (1.350,-0.68953) (1.355,-0.67733) (1.360,-0.66532) (1.365,-0.65349) (1.370,-0.64184) (1.375,-0.63039) (1.380,-0.61911) (1.385,-0.60803) (1.390,-0.59712) (1.395,-0.58640) (1.400,-0.57586) (1.405,-0.56550) (1.410,-0.55532) (1.415,-0.54531) (1.420,-0.53548) (1.425,-0.52583) (1.430,-0.51635) (1.435,-0.50703) (1.440,-0.49789) (1.445,-0.48891) (1.450,-0.48009) (1.455,-0.47144) (1.460,-0.46294) (1.465,-0.45460) (1.470,-0.44642) (1.475,-0.43838) (1.480,-0.43050) (1.485,-0.42277) (1.490,-0.41518) (1.495,-0.40773) (1.500,-0.40042) (1.530,-0.35940) (1.560,-0.32284) (1.590,-0.29030) (1.620,-0.26131) (1.650,-0.23550) 
(1.680,-0.21250) (1.710,-0.19198) (1.740,-0.17367) (1.770,-0.15732) (1.800,-0.14268) (1.830,-0.12958) (1.860,-0.11784) (1.890,-0.10729) (1.920,-0.09782) (1.950,-0.08929) (1.980,-0.08160) (2.010,-0.07467) (2.040,-0.06841) (2.070,-0.06275) (2.100,-0.05762) (2.130,-0.05297) (2.160,-0.04875) (2.190,-0.04491) (2.220,-0.04142) (2.250,-0.03824) (2.280,-0.03534) (2.310,-0.03269) (2.340,-0.03027) (2.370,-0.02806) (2.400,-0.02603) (2.430,-0.02417) (2.460,-0.02246) (2.490,-0.02089) (2.520,-0.01945) (2.550,-0.01812) (2.580,-0.01690) (2.610,-0.01577) (2.640,-0.01473) (2.670,-0.01376) (2.700,-0.01287) (2.730,-0.01205) (2.760,-0.01129) (2.790,-0.01058) (2.820,-0.00992) (2.850,-0.00931) (2.880,-0.00875) (2.910,-0.00822) (2.940,-0.00773) (2.970,-0.00727) (3.000,-0.00685) (3.100,-0.00563) (3.200,-0.00465) (3.300,-0.00387) (3.400,-0.00323) (3.500,-0.00272) (3.600,-0.00230) (3.700,-0.00195) (3.800,-0.00166) (3.900,-0.00142) (4.000,-0.00122) (4.100,-0.00105) (4.200,-0.00091) (4.300,-0.00079) (4.400,-0.00069) (4.500,-0.00060) (4.600,-0.00053) (4.700,-0.00046) (4.800,-0.00041) (4.900,-0.00036) (5.000,-0.00032)};
+
+ % map (\x -> (showFFloat (Just 3) x "", showFFloat (Just 5) (min 2 (5 * (1/(x^12) - 1/(x^6)))) "")) [0.955,0.96..1.5]
+ % map (\x -> (showFFloat (Just 3) x "", showFFloat (Just 5) (5 * (1/(x^12) - 1/(x^6))) "")) [1.53,1.56..3]
+ % map (\x -> (showFFloat (Just 3) x "", showFFloat (Just 5) (5 * (1/(x^12) - 1/(x^6))) "")) [3.1,3.2..5]
+ \end{tikzpicture}
+ \end{center}
+ To make life easy, we will actually use a ``hard core repulsion'' potential instead, given by
+ \[
+ U(r) =
+ \begin{cases}
+ \infty & r < r_0\\
+ -U_0 \left(\frac{r_0}{r}\right)^6 & r > r_0
+ \end{cases}
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$r$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$U(r)$};
+
+ \draw [mblue, thick, domain=0.9:5, samples=50] plot [smooth] (\x, {-1/\x^6});
+
+ \draw [mblue, thick] (0.9, -1.8817) -- (0.9, 2);
+
+ \node [anchor = north east] at (0.9, 0) {$r_0$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+For a general $U$, the Hamiltonian of the gas is
+\[
+ H = \sum_{i = 1}^N \frac{\mathbf{p}_i^2}{2m} + \sum_{i > j} U(r_{ij}),
+\]
+where
+\[
+ r_{ij} = |\mathbf{r}_i - \mathbf{r}_j|
+\]
+is the separation between particle $i$ and particle $j$.
+
+Because of this interaction, it is no longer the case that the partition function for $N$ particles is the $N$th power of the partition function for one particle. We have
+\begin{align*}
+ Z(N, V, T) &= \frac{1}{N!} \frac{1}{(2\pi \hbar)^{3N}} \int \prod_{i = 1}^N \d^3 p_i\; \d^3 r_i\; e^{-\beta H}\\
+ &= \frac{1}{N!} \frac{1}{(2\pi \hbar)^{3N}} \left(\int \prod_{i} \d^3 p_i\; e^{-\beta \mathbf{p}_i^2/2m}\right)\left(\int \prod_i \d^3 r_i\; e^{-\beta \sum_{j < k} U(r_{jk})} \right)\\
+ &= \frac{1}{N! \lambda^{3N}} \int \prod_i \d^3 r_i\; e^{-\beta \sum_{j < k} U(r_{jk})},
+\end{align*}
+where again
+\[
+ \lambda = \sqrt{\frac{2\pi \hbar^2}{mkT}}.
+\]
+Since we want to get an expansion, we might want to expand this in terms of the potential, but that is not helpful, because the potential is infinite for small $r$. Instead it is useful to consider the \term{Mayer $f$-function}:
+\[
+ f(r) = e^{-\beta U(r)} - 1.
+\]
+This function has the property that $f(r) = -1$ for $r < r_0$ (in the case of the hardcore repulsion), and $f(r) \to 0$ as $r \to \infty$. So this is a nicer function as it only varies within this finite range.
+
+We further simplify notation by defining
+\[
+ f_{ij} = f(r_{ij}).
+\]
+Then we have
+\begin{align*}
+ Z(N, V, T) &= \frac{1}{N! \lambda^{3N}} \left(\int \prod_i \d^3 r_i\prod_{j < k} (1 + f_{jk})\right)\\
+ &= \frac{1}{N!\lambda^{3N}} \int \prod_i \d^3 r_i \left(1 + \sum_{j < k} f_{jk} + \sum_{j < k} \sum_{\ell < m} f_{jk} f_{\ell m} + \cdots\right).
+\end{align*}
+The first term is just
+\[
+ \int \prod_i \d^3 r_i = V^N,
+\]
+and this gives the ideal gas term. Now each term in the second sum is the same, e.g.\ for $j = 1, k = 2$, the term is
+\[
+ \int \prod_i \d^3 r_i f_{12} = V^{N - 2} \int \d^3 r_1 \; \d^3 r_2\; f_{12} = V^{N - 1} I,
+\]
+where we set
+\[
+ I = \int\d^3 r\; f(r).
+\]
+Since $f(r) \to 0$ as $r \to \infty$, we might as well integrate over all space. Summing over all terms, and approximating $N(N - 1)/2 \sim N^2/2$, we find that the first two terms of the partition function are
+\[
+ Z(N, V, T) = \frac{V^N}{N! \lambda^{3N}} \left(1 + \frac{N^2}{2V}I + \cdots\right).
+\]
+Up to first order, we can write this as
+\begin{align*}
+ Z(N, V, T) &= \frac{V^N}{N! \lambda^{3N}} \left(1 + \frac{N}{2V}I + \cdots\right)^N\\
+ &= Z_{\mathrm{ideal}} \left(1 + \frac{N}{2V}I + \cdots \right)^N.
+\end{align*}
+Pulling the correction up into the $N$th power might seem a bit arbitrary, but writing it this way makes it much clearer that $S$, $F$ etc.\ are extensive quantities. For example, we can write down the free energy as
+\[
+ F = -kT \log Z = F_{\mathrm{ideal}} - NkT \log \left(1 + \frac{N}{2V} I + \cdots\right).
+\]
+Without actually computing $I$, we can expect that it is of order $I \sim r_0^3$. Since we are expanding in terms of $NI/V$, we need
+\[
+ \frac{N}{V} \ll \frac{1}{r_0^3}.
+\]
+So we know the expansion is valid if the density of the gas is much less than the density of an atom. In real life, to determine the density of an atom, we need to find a system where the atoms are closely packed. Thus, it suffices to measure the density of the substance in liquid or solid form.
+
+Assuming that the gas is indeed not dense, we can further use the approximation
+\[
+ \log(1 + x) \approx x.
+\]
+So we have
+\[
+ p = - \left(\frac{\partial F}{\partial V}\right)_T = \frac{NkT}{V}\left(1 - \frac{N}{2V} I + \cdots\right).
+\]
+So we have
+\[
+ \frac{pV}{NkT} = 1 - \frac{N}{2V} I + \cdots.
+\]
+So we can now read off what the second Virial coefficient is:
+\[
+ B_2 = -\frac{1}{2} I.
+\]
+We can consider what we get with different potentials. If we have a completely repulsive potential, then $U(r) > 0$ everywhere. So $f < 0$, and thus $B_2(T) > 0$. In other words, having a repulsive interaction tends to increase the pressure, which makes sense. On the other hand, if we have an attractive potential, then the pressure decreases.
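As a small numerical illustration of the sign argument (not in the notes), for a pure hard-sphere potential ($U = \infty$ for $r < r_0$, and $U = 0$ otherwise) the integral $I$ can be evaluated directly, giving $B_2 = 2\pi r_0^3/3 > 0$; the cutoff and step count are arbitrary choices.

```python
import math

r0 = 1.0  # hard-sphere radius (arbitrary units)

def f(r):
    # Mayer f-function of a pure hard sphere: e^{-beta U} - 1 is -1 inside, 0 outside
    return -1.0 if r < r0 else 0.0

# I = int d^3 r f(r) = 4 pi int_0^infty r^2 f(r) dr, via the midpoint rule
n, R = 100_000, 3.0
h = R / n
I = 4 * math.pi * sum(((i + 0.5) * h) ** 2 * f((i + 0.5) * h) * h for i in range(n))
B2 = -I / 2

assert B2 > 0  # repulsion increases the pressure
assert abs(B2 - 2 * math.pi * r0 ** 3 / 3) < 1e-3
```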
+
+If we have a hardcore repulsion, then we have
+\[
+ I = \int_{r = 0}^{r = r_0} \d^3 r\; (-1) + \int_{r = r_0}^\infty \d^3 r\; \left(e^{\beta U_0 (r_0/r)^6} - 1\right).
+\]
+The second term is slightly tricky, so we'll restrict to the case of high temperature, hence small $\beta$. We can write
+\[
+ e^{\beta U_0 (r_0/r)^6} \approx 1 + \beta U_0 \left(\frac{r_0}{r}\right)^6 + \cdots
+\]
+Then we get
+\[
+ I \approx -\frac{4}{3}\pi r_0^3 + \frac{4 \pi U_0}{kT} \int_{r_0}^\infty \d r\; \frac{r_0^6}{r^4} = \frac{4\pi r_0^3}{3}\left(\frac{U_0}{kT} - 1\right).
+\]
+For large $T$, this is negative, and so the repulsive interaction dominates. We can plug this into our equation of state, and find
+\[
+ \frac{pV}{NkT} \approx 1 - \frac{N}{V}\left(\frac{a}{kT} - b\right),
+\]
+where
+\[
+ a = \frac{2\pi r_0^3}{3}U_0,\quad b = \frac{2\pi r_0^3}{3}.
+\]
+We can invert this to get
+\[
+ kT = \frac{V}{N}\left(p + \frac{N^2}{V^2} a\right) \left(1 + \frac{N}{V}b\right)^{-1}.
+\]
+Taking the Taylor expansion of the inverse and truncating higher order terms, we obtain
+\[
+ kT = \left(p + \frac{N^2}{V^2}a \right)\left(\frac{V}{N} - b\right).
+\]
+This is the \term{van der Waals equation of state}\index{equation of state!van der Waals}, which is valid at low density ($Nr_0^3/V \ll 1$) and high temperature ($\beta U_0 \ll 1$).
+
+We can also write this as
+\[
+ p = \frac{NkT}{V - bN} - a \frac{N^2}{V^2}.
+\]
+In this form, we see that $a$ gives a reduced pressure due to long distance attractive force, while we can view the $b$ contribution as saying that the atoms take up space, and hence reduces the volume.
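We can also check the high-temperature formula for $I$ numerically (an illustrative sketch with arbitrary parameter values): integrate the Mayer $f$-function of the hard-core potential directly and compare with $\frac{4\pi r_0^3}{3}\left(\frac{U_0}{kT} - 1\right)$.

```python
import math

r0, U0, kT = 1.0, 1.0, 20.0   # high temperature: beta U0 = 0.05 << 1 (arbitrary)
beta = 1.0 / kT

def f(r):
    # Mayer f-function of the hard-core potential
    if r < r0:
        return -1.0
    return math.exp(beta * U0 * (r0 / r) ** 6) - 1.0

# I = int d^3 r f(r) = 4 pi int_0^infty r^2 f(r) dr; the integrand decays like r^-4
R, n = 60.0, 200_000
h = R / n
I_num = 4 * math.pi * sum(((i + 0.5) * h) ** 2 * f((i + 0.5) * h) * h for i in range(n))

I_highT = (4 * math.pi * r0 ** 3 / 3) * (U0 / kT - 1)
assert abs(I_num - I_highT) / abs(I_highT) < 0.01
```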
+
+But why exactly the factor of $bN$? Imagine two atoms living right next to each other.
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw circle [radius=1];
+ \draw (2, 0) circle [radius=1];
+ \draw [dashed] circle [radius=2];
+
+ \draw (0, 0) node [circ] {} -- (2, 0) node [circ] {} node [pos=0.3, above] {$r_0$};
+ \end{tikzpicture}
+\end{center}
+The value $r_0$ was chosen to be the distance between the two cores, so the volume of each atom is
+\[
+ \frac{4\pi}{3}\left(\frac{r_0}{2}\right)^3,
+\]
+which is not $b$. We might think the right way to do this problem is to look at how much volume the atom excludes, i.e.\ the dashed volume. This is
+\[
+ \Omega = \frac{4}{3}\pi r_0^3 = 2b.
+\]
+This is again not right. So we probably want to think more carefully.
+
+Suppose we have $N$ particles. When we put in the first atom, the amount of space available to put it is $V$. When we now try to put in the second, the available space is $V - \Omega$ (assuming the two atoms are not too close together so that they don't exclude the same volume). Similarly, when we put in the third, the available space is $V - 2 \Omega$.
+
+Dividing by $N!$ for indistinguishability, we find that the total phase space volume available for placing our particles is
+\begin{align*}
+ \frac{1}{N!}V(V - \Omega) (V -2\Omega) \cdots (V - (N - 1) \Omega) &\approx \frac{1}{N!} V^N\left(1 - \frac{N^2}{2} \frac{\Omega}{V} + \cdots\right)\\
+ &\approx \frac{1}{N!} \left(V - \frac{N\Omega}{2}\right)^N,
+\end{align*}
+which explains why the reduced volume for each particle is $\Omega/2$ instead of $\Omega$.
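The approximation of the product by $(V - N\Omega/2)^N$ is easy to check numerically; this is an illustrative sketch with arbitrary values of $V$, $\Omega$, $N$ in the dilute regime.

```python
import math

V = 1.0e6
Omega = 1.0
N = 100   # dilute regime: N^2 Omega / V = 0.01 << 1 (arbitrary values)

# exact product V (V - Omega) (V - 2 Omega) ... (V - (N-1) Omega), in log form
log_exact = sum(math.log(V - k * Omega) for k in range(N))
# approximation (V - N Omega / 2)^N, in log form
log_approx = N * math.log(V - N * Omega / 2)

assert abs(log_exact - log_approx) < 1e-4
```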
+
+Now suppose we want to find higher-order corrections to the partition function in the Virial expansion. The obvious thing might be to take them to be the contributions by $\sum \sum f_{jk} f_{\ell m}$. However, this is not quite right. If we try to keep track of how large the terms are carefully, we will get a different answer.
+
+The way to do it properly would be via the \term{cluster expansion}, which involves using nice diagrams to figure out the right terms to include, similar to how Feynman diagrams are used to do perturbation theory in quantum field theory. Unfortunately, we do not have the time to go into the details.
+
+\section{Quantum gases}
+We now move on to study quantum gases. As before, we are going to spend a lot of time evaluating the partition function for different systems. Recall that the partition function is defined as a sum over all eigenstates. In the world of classical gases, we had the benefit of working over a continuous phase space, and thus we could replace the sum by integrals. In the quantum world, this is no longer the case. However, most of the time, the states are very closely packed, and we might as well approximate the sum by an integral. Of course, we cannot replace the sum just by $\int \d E$. We have to make sure we preserve the ``density'' of the energy levels.
+
+\subsection{Density of states}
+Consider an ideal gas in a cubic box with side length $L$. So we have $V = L^3$. Since there are no interactions, the wavefunction of a multi-particle state can be built up from the wavefunctions of single-particle states,
+\[
+ \psi(\mathbf{x}) = \frac{1}{\sqrt{V}} e^{i\mathbf{k}\cdot \mathbf{x}}.
+\]
+We impose periodic boundary conditions, so the wavevectors are quantized by
+\[
+ k_i = \frac{2\pi n_i}{L},
+\]
+with $n_i \in \Z$.
+
+The one-particle energy is given by
+\[
+ E_\mathbf{n} = \frac{\hbar^2 \mathbf{k}^2}{2m} = \frac{4\pi^2 \hbar^2}{2mL^2}(n_1^2 + n_2^2 + n_3^2).
+\]
+So if we are interested in a one-particle partition function, we have
+\[
+ Z_1 = \sum_\mathbf{n} e^{-\beta E_\mathbf{n}}.
+\]
+We now note that
+\[
+ \beta E_\mathbf{n} \sim \frac{\lambda^2}{L^2}\mathbf{n}^2,
+\]
+where
+\[
+ \lambda = \sqrt{\frac{2\pi \hbar^2}{mkT}}
+\]
+is the thermal de Broglie wavelength.
+
+This $\frac{\lambda^2}{L^2}$ gives the spacing between the energy levels. But we know that $\lambda \ll L$. So the energy levels are very finely spaced, and we can replace the sum by an integral. Thus we can write
+\[
+ \sum_\mathbf{n} \approx \int \d^3 \mathbf{n} \approx \frac{V}{(2\pi)^3} \int \d^3 \mathbf{k} \approx \frac{4\pi V}{(2\pi)^3} \int_0^\infty \d |\mathbf{k}|\; |\mathbf{k}|^2,
+\]
+where in the last step, we switched to spherical polars and integrated over the angles. The final step is to replace $|\mathbf{k}|$ by $E$. We know that
+\[
+ E = \frac{\hbar^2 |\mathbf{k}|^2}{2m}.
+\]
+So we get that
+\[
+ \d E = \frac{\hbar^2 |\mathbf{k}|}{m} \d |\mathbf{k}|.
+\]
+Therefore we can write
+\[
+ \sum_\mathbf{n} = \frac{4\pi V}{(2\pi)^3} \int_0^\infty \d E\; \sqrt{\frac{2mE}{\hbar^2}}\frac{m}{\hbar^2}.
+\]
+We then set
+\[
+ g(E) = \frac{V}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} E^{1/2}.
+\]
+This is called the \term{density of states}. Approximately, $g(E) \;\d E$ is the number of single particle states with energy between $E$ and $E + \d E$. Then we have
+\[
+ \sum_\mathbf{n} = \int g(E)\;\d E.
+\]
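We can verify this density of states by brute force (an illustrative numerical sketch in units $\hbar = m = 1$, with arbitrary $L$ and $E_{\max}$): count the modes $\mathbf{n} \in \Z^3$ with $E_\mathbf{n} \leq E_{\max}$, and compare against $\int_0^{E_{\max}} g(E)\;\d E$.

```python
import math

# illustrative units: hbar = m = 1; arbitrary box size and energy cutoff
L = 1.0
V = L ** 3
E_max = 32000.0

# count modes n in Z^3 with E_n = 2 pi^2 |n|^2 / L^2 <= E_max,
# i.e. lattice points inside a sphere of squared radius R2
R2 = E_max * L ** 2 / (2 * math.pi ** 2)
R = int(math.sqrt(R2)) + 1
count = sum(1
            for nx in range(-R, R + 1)
            for ny in range(-R, R + 1)
            for nz in range(-R, R + 1)
            if nx * nx + ny * ny + nz * nz <= R2)

# integral of g(E) = (V / 4 pi^2) (2m / hbar^2)^{3/2} E^{1/2} from 0 to E_max
predicted = V * 2 ** 1.5 * E_max ** 1.5 / (6 * math.pi ** 2)
assert abs(count - predicted) / predicted < 0.02
```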
+The same result (and derivation) holds if the sides of the box have different lengths. In general, in $d$ dimensions, the density of states is given by
+\[
+ g(E) = \frac{V \vol(S^{d - 1})}{2\cdot \pi^{d/2}} \left(\frac{m}{2\pi \hbar^2}\right)^{d/2} E^{d/2 - 1}.
+\]
+All these derivations work if we have a non-relativistic free particle, since we assumed the \term{dispersion relation} between $E$ and $\mathbf{k}$, namely
+\[
+ E = \frac{\hbar^2 |\mathbf{k}|^2}{2m}.
+\]
+For a relativistic particle, we instead have
+\[
+ E = \sqrt{\hbar^2 |\mathbf{k}|^2 c^2 + m^2c^4}.
+\]
+In this case, $|\mathbf{k}|$ is still quantized as before, and repeating the previous argument, we find that
+\[
+ g(E) = \frac{VE}{2\pi^2 \hbar^3 c^3} \sqrt{E^2 - m^2 c^4}.
+\]
+We will be interested in the special case where $m = 0$. Then we simply have
+\[
+ g(E) = \frac{VE^2}{2\pi^2 \hbar^3 c^3}.
+\]
+We can start doing some physics with this.
+
+\subsection{Black-body radiation}
+Unfortunately, in this chapter, we do not have much opportunity to deal with genuine gases. Instead, we are going to take some system that is \emph{not} a gas, and pretend it is a gas. Of course, nothing in our previous derivations relied on the system actually being a gas. It just relied on the fact that we had a lot of things. So the results should still hold.
+
+In this chapter, we suppose we have a gas of photons (which is just a fancy way to say ``light'') in a box with opaque walls.
+\begin{center}
+ \begin{tikzpicture}[photon/.style={morange!50!black, decorate, decoration={snake, amplitude=1, segment length=3, post=lineto, post length=3}, -latex'}]
+ \draw [thick] (0, 0) rectangle (2, 2);
+
+ \draw [photon] (0.3, 1.7) -- +(0.4, -0.2);
+ \draw [photon] (1, 0.7) -- +(0.05, 0.5);
+
+ \draw [photon] (0.2, 0.3) -- +(0.5, -0.05);
+ \draw [photon] (1.7, 0.7) -- +(0, 0.55);
+ \draw [photon] (1.8, 0.2) -- +(-0.3, 0.3);
+ \draw [photon] (0.2, 1) -- +(0.55, 0);
+ \draw [photon] (1.7, 1.8) -- +(-0.4, -0.2);
+ \end{tikzpicture}
+\end{center}
+We suppose this box of photons is at equilibrium with temperature $T$. We will use the box of photons as a model of a ``\term{black body}'', i.e.\ a perfectly absorbing object. The gas of photons inside is called \term{black-body radiation}.
+ Later on, we will argue that the photons emitted by a black body must indeed follow the same distribution as the photons inside our box.
+
+We begin by reminding ourselves of some basic properties of photons: they are massless and have energy
+\[
+ E = \hbar \omega,\quad \omega = \frac{2\pi c}{\lambda},
+\]
+and $\lambda$ is the (genuine) wavelength of the photon.
+
+Interactions between photons are negligible, so we can treat them as an ideal gas.
+
+Photons have two polarization states, since given any direction of travel $\mathbf{k}$, the electric and magnetic fields have to obey
+\[
+ \mathbf{E} \cdot \mathbf{k} = \mathbf{B} \cdot \mathbf{k} = \mathbf{B} \cdot \mathbf{E} = 0,
+\]
+and there are two independent choices for $\mathbf{E}$ and $\mathbf{B}$. This has implications for our counting, as we need an extra factor of $2$ in our density of states. So the density of states is given by
+\[
+ g(E) \;\d E = \frac{VE^2}{\pi^2 \hbar^3 c^3}\;\d E.
+\]
+Using the fact that
+\[
+ E = \hbar \omega,
+\]
+it is often convenient to instead write this as
+\[
+ g(\omega)\; \d \omega = \frac{V \omega^2}{\pi^2 c^3}\;\d \omega.
+\]
+This is an abuse of notation, as the two $g$'s we wrote down are different functions, but this is a physics course.
+
+The final important property of the photons is that photon numbers are not conserved. Indeed, they are absorbed and emitted by the walls of the box. So we must sum over all possible photon numbers, even though we are in the canonical ensemble. In practice, what we get is the same as the grand canonical ensemble, but with $\mu = 0$.
+
+We can begin. While we have figured out the density of states, we will not use that immediately. Instead, we still begin by assuming that we have a discrete set of states. Only after doing all the manipulations do we replace all remaining occurrences of sums with integrals, so that we can actually evaluate them.
+
+We can work with this in more generality. Consider a system of non-interacting particles with one-particle states $\bket{i}$ of energies $E_i$. Then the general accessible state can be labelled by
+\[
+ \{n_1, n_2, \cdots\},
+\]
+where $n_i$ is the number of particles in $\bket{i}$. Since particle numbers are not conserved, we do not impose any restrictions on the possible values of the $n_i$. The energy of such a state is just
+\[
+ \sum_i n_i E_i.
+\]
+As before, in the canonical ensemble, the probability of being in such a state is
+\[
+ p(\{n_i\}) = \frac{1}{Z} e^{-\beta \sum n_j E_j},
+\]
+and
+\[
+ Z = \sum_{\{n_k\}} e^{-\beta \sum n_j E_j} = \sum_{n_1 = 0}^\infty e^{-\beta n_1 E_1} \sum_{n_2 = 0}^\infty e^{-\beta n_2 E_2} \cdots = \prod_i \frac{1}{1 - e^{-\beta E_i}}.
+\]
+So we find that
+\[
+ \log Z = - \sum_i \log (1 - e^{-\beta E_i}).
+\]
+Using this, we have
+\begin{align*}
+ \bra n_i\ket &= \sum_{\{n_k\}} n_i p (\{n_k\}) \\
+ &= \sum_{\{n_k\}} \frac{n_i e^{-\beta \sum_j n_j E_j}}{Z}\\
+ &= - \frac{1}{\beta} \frac{\partial}{\partial E_i} \log Z.
+\end{align*}
+Using the formula of $\log Z$ we had, we find
+\[
+ \bra n_i\ket = \frac{1}{e^{\beta E_i} - 1}.
+\]
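This Bose--Einstein occupation number can be checked directly against truncated sums (a small numerical sketch; the values of $\beta$ and $E$ are arbitrary):

```python
import math

beta, E = 0.8, 1.5   # arbitrary illustrative values
x = math.exp(-beta * E)

# truncate the geometric sums; terms decay like e^{-beta E n}
N = 200
Z_i = sum(x ** n for n in range(N))                 # sum_n e^{-beta n E}
n_avg = sum(n * x ** n for n in range(N)) / Z_i     # <n> = sum_n n p(n)

assert abs(Z_i - 1 / (1 - x)) < 1e-12               # Z_i = 1/(1 - e^{-beta E})
assert abs(n_avg - 1 / (math.exp(beta * E) - 1)) < 1e-12
```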
+Applying this to photons, we replace the sum with an integral, and $E_i = \hbar \omega$. Using our density of states, we have
+\[
+ \log Z = - \frac{V}{\pi^2 c^3} \int_0^\infty \d \omega\; \omega^2 \log(1 - e^{-\beta \hbar \omega}).
+\]
+Similarly, the average number of photons with frequency between $\omega$ and $\omega + \d \omega$ is
+\[
+ n(\omega)\; \d \omega = g(\omega) \;\d \omega \cdot \frac{1}{e^{\beta \hbar \omega} - 1} = \frac{V \omega^2\;\d \omega}{\pi^2 c^3(e^{\beta\hbar\omega} - 1)}.
+\]
+Thus, the total energy in this range is
+\[
+ E(\omega) \;\d \omega = \hbar \omega n(\omega) \;\d \omega = \frac{V \hbar}{\pi^2 c^3} \frac{\omega^3}{e^{\beta \hbar \omega} - 1}\;\d \omega.
+\]
+This is the \term{Planck distribution}.
+
+Let's check that this makes sense. Let's try to compute the total energy of all photons. We have
+\[
+ E = -\left(\frac{\partial \log Z}{\partial \beta}\right)_V = \frac{V \hbar}{\pi^2 c^3}\int_0^\infty \d \omega\; \frac{\omega^3}{e^{\beta \hbar \omega} - 1} = \int_0^\infty E(\omega) \;\d \omega,
+\]
+as we would expect.
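In fact, substituting $x = \beta \hbar \omega$ and using $\int_0^\infty x^3/(e^x - 1)\;\d x = \pi^4/15$ gives the closed form $E = \pi^2 V (kT)^4 / (15 \hbar^3 c^3)$. A quick numerical quadrature confirms the dimensionless integral (the cutoff and step count below are arbitrary choices):

```python
import math

def planck_integral(upper=60.0, n=600_000):
    # midpoint rule for int_0^infty x^3/(e^x - 1) dx; the integrand behaves
    # like x^2 near 0 and decays like x^3 e^{-x}, so the cutoff error is tiny
    h = upper / n
    return sum(((i + 0.5) * h) ** 3 / math.expm1((i + 0.5) * h) * h for i in range(n))

assert abs(planck_integral() - math.pi ** 4 / 15) < 1e-6
```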
+
+Putting $\omega = \frac{2\pi c}{\lambda}$, we find that the average energy with wavelength between $\lambda$ and $\lambda + \d \lambda$ is given by the horrendous formula
+\[
+ \tilde{E} (\lambda) \;\d \lambda = \frac{V \hbar (2\pi c)^4}{\pi^2 c^3 \lambda^5} \frac{\d \lambda}{e^{\beta \hbar 2 \pi c/\lambda} - 1}.
+\]
+We can plot this for some values of $T$:
+\begin{center}
+ \begin{tikzpicture}[xscale=3]
+ \draw [->] (0, 0) -- (2, 0) node [right] {$\lambda$};
+
+ \draw [->] (0, 0) -- (0, 4) node [above] {$E(\lambda)$};
+
+ \draw [mred, thick] plot coordinates {(0.01,0.00000) (0.03,0.00000) (0.05,0.00000) (0.07,0.00000) (0.09,0.00000) (0.11,0.00000) (0.13,0.00010) (0.15,0.00109) (0.17,0.00611) (0.19,0.02244) (0.21,0.06120) (0.23,0.13450) (0.25,0.25167) (0.27,0.41663) (0.29,0.62717) (0.31,0.87585) (0.33,1.15188) (0.35,1.44304) (0.37,1.73730) (0.39,2.02399) (0.41,2.29438) (0.43,2.54192) (0.45,2.76219) (0.47,2.95266) (0.49,3.11234) (0.51,3.24152) (0.53,3.34137) (0.55,3.41373) (0.57,3.46083) (0.59,3.48514) (0.61,3.48919) (0.63,3.47550) (0.65,3.44649) (0.67,3.40444) (0.69,3.35144) (0.71,3.28941) (0.73,3.22004) (0.75,3.14488) (0.77,3.06525) (0.79,2.98233) (0.81,2.89713) (0.83,2.81052) (0.85,2.72324) (0.87,2.63592) (0.89,2.54907) (0.91,2.46314) (0.93,2.37848) (0.95,2.29538) (0.97,2.21406) (0.99,2.13472) (1.01,2.05748) (1.03,1.98244) (1.05,1.90968) (1.07,1.83922) (1.09,1.77111) (1.11,1.70532) (1.13,1.64185) (1.15,1.58068) (1.17,1.52177) (1.19,1.46506) (1.21,1.41052) (1.23,1.35808) (1.25,1.30769) (1.27,1.25928) (1.29,1.21279) (1.31,1.16816) (1.33,1.12531) (1.35,1.08419) (1.37,1.04473) (1.39,1.00687) (1.41,0.97054) (1.43,0.93569) (1.45,0.90225) (1.47,0.87016) (1.49,0.83938) (1.51,0.80984) (1.53,0.78149) (1.55,0.75429) (1.57,0.72818) (1.59,0.70312) (1.61,0.67906) (1.63,0.65596) (1.65,0.63378) (1.67,0.61247) (1.69,0.59201) (1.71,0.57235) (1.73,0.55346) (1.75,0.53530) (1.77,0.51785) (1.79,0.50108) (1.81,0.48494) (1.83,0.46943) (1.85,0.45450) (1.87,0.44014) (1.89,0.42633) (1.91,0.41303) (1.93,0.40022) (1.95,0.38789) (1.97,0.37602) (1.99,0.36458)};
+ \draw [mred!75!mblue, thick] plot coordinates {(0.01,0.00000) (0.03,0.00000) (0.05,0.00000) (0.07,0.00000) (0.09,0.00000) (0.11,0.00000) (0.13,0.00000) (0.15,0.00002) (0.17,0.00018) (0.19,0.00095) (0.21,0.00351) (0.23,0.00990) (0.25,0.02283) (0.27,0.04515) (0.29,0.07922) (0.31,0.12643) (0.33,0.18696) (0.35,0.25984) (0.37,0.34317) (0.39,0.43442) (0.41,0.53075) (0.43,0.62931) (0.45,0.72742) (0.47,0.82274) (0.49,0.91332) (0.51,0.99764) (0.53,1.07459) (0.55,1.14345) (0.57,1.20381) (0.59,1.25557) (0.61,1.29883) (0.63,1.33386) (0.65,1.36107) (0.67,1.38097) (0.69,1.39409) (0.71,1.40103) (0.73,1.40238) (0.75,1.39871) (0.77,1.39061) (0.79,1.37861) (0.81,1.36323) (0.83,1.34492) (0.85,1.32414) (0.87,1.30127) (0.89,1.27669) (0.91,1.25071) (0.93,1.22363) (0.95,1.19571) (0.97,1.16717) (0.99,1.13822) (1.01,1.10904) (1.03,1.07978) (1.05,1.05056) (1.07,1.02152) (1.09,0.99274) (1.11,0.96431) (1.13,0.93629) (1.15,0.90875) (1.17,0.88173) (1.19,0.85528) (1.21,0.82941) (1.23,0.80415) (1.25,0.77953) (1.27,0.75555) (1.29,0.73221) (1.31,0.70953) (1.33,0.68750) (1.35,0.66611) (1.37,0.64537) (1.39,0.62527) (1.41,0.60578) (1.43,0.58692) (1.45,0.56865) (1.47,0.55097) (1.49,0.53387) (1.51,0.51733) (1.53,0.50133) (1.55,0.48587) (1.57,0.47092) (1.59,0.45648) (1.61,0.44252) (1.63,0.42903) (1.65,0.41599) (1.67,0.40340) (1.69,0.39123) (1.71,0.37948) (1.73,0.36812) (1.75,0.35716) (1.77,0.34656) (1.79,0.33632) (1.81,0.32643) (1.83,0.31687) (1.85,0.30763) (1.87,0.29871) (1.89,0.29008) (1.91,0.28175) (1.93,0.27369) (1.95,0.26590) (1.97,0.25837) (1.99,0.25109)};
+ \draw [mred!50!mblue, thick] plot coordinates {(0.01,0.00000) (0.03,0.00000) (0.05,0.00000) (0.07,0.00000) (0.09,0.00000) (0.11,0.00000) (0.13,0.00000) (0.15,0.00000) (0.17,0.00000) (0.19,0.00002) (0.21,0.00013) (0.23,0.00047) (0.25,0.00139) (0.27,0.00338) (0.29,0.00709) (0.31,0.01322) (0.33,0.02241) (0.35,0.03516) (0.37,0.05174) (0.39,0.07217) (0.41,0.09624) (0.43,0.12354) (0.45,0.15350) (0.47,0.18547) (0.49,0.21877) (0.51,0.25270) (0.53,0.28661) (0.55,0.31991) (0.57,0.35209) (0.59,0.38274) (0.61,0.41150) (0.63,0.43812) (0.65,0.46243) (0.67,0.48432) (0.69,0.50373) (0.71,0.52066) (0.73,0.53515) (0.75,0.54728) (0.77,0.55713) (0.79,0.56484) (0.81,0.57053) (0.83,0.57433) (0.85,0.57639) (0.87,0.57685) (0.89,0.57585) (0.91,0.57354) (0.93,0.57003) (0.95,0.56547) (0.97,0.55997) (0.99,0.55365) (1.01,0.54660) (1.03,0.53893) (1.05,0.53072) (1.07,0.52206) (1.09,0.51303) (1.11,0.50369) (1.13,0.49410) (1.15,0.48433) (1.17,0.47441) (1.19,0.46440) (1.21,0.45434) (1.23,0.44426) (1.25,0.43420) (1.27,0.42418) (1.29,0.41423) (1.31,0.40436) (1.33,0.39461) (1.35,0.38497) (1.37,0.37548) (1.39,0.36613) (1.41,0.35694) (1.43,0.34792) (1.45,0.33907) (1.47,0.33040) (1.49,0.32191) (1.51,0.31360) (1.53,0.30549) (1.55,0.29755) (1.57,0.28981) (1.59,0.28226) (1.61,0.27489) (1.63,0.26770) (1.65,0.26071) (1.67,0.25389) (1.69,0.24725) (1.71,0.24078) (1.73,0.23449) (1.75,0.22837) (1.77,0.22242) (1.79,0.21663) (1.81,0.21100) (1.83,0.20553) (1.85,0.20021) (1.87,0.19504) (1.89,0.19001) (1.91,0.18513) (1.93,0.18038) (1.95,0.17577) (1.97,0.17129) (1.99,0.16693)};
+
+ \draw [mred!25!mblue, thick] plot coordinates {(0.01,0.00000) (0.03,0.00000) (0.05,0.00000) (0.07,0.00000) (0.09,0.00000) (0.11,0.00000) (0.13,0.00000) (0.15,0.00000) (0.17,0.00000) (0.19,0.00000) (0.21,0.00000) (0.23,0.00001) (0.25,0.00004) (0.27,0.00012) (0.29,0.00032) (0.31,0.00072) (0.33,0.00147) (0.35,0.00269) (0.37,0.00454) (0.39,0.00718) (0.41,0.01072) (0.43,0.01523) (0.45,0.02077) (0.47,0.02733) (0.49,0.03485) (0.51,0.04326) (0.53,0.05244) (0.55,0.06226) (0.57,0.07257) (0.59,0.08321) (0.61,0.09404) (0.63,0.10491) (0.65,0.11569) (0.67,0.12625) (0.69,0.13649) (0.71,0.14632) (0.73,0.15567) (0.75,0.16446) (0.77,0.17267) (0.79,0.18025) (0.81,0.18719) (0.83,0.19347) (0.85,0.19910) (0.87,0.20407) (0.89,0.20841) (0.91,0.21213) (0.93,0.21526) (0.95,0.21781) (0.97,0.21982) (0.99,0.22132) (1.01,0.22234) (1.03,0.22290) (1.05,0.22305) (1.07,0.22281) (1.09,0.22221) (1.11,0.22128) (1.13,0.22005) (1.15,0.21855) (1.17,0.21680) (1.19,0.21483) (1.21,0.21266) (1.23,0.21032) (1.25,0.20781) (1.27,0.20518) (1.29,0.20242) (1.31,0.19956) (1.33,0.19661) (1.35,0.19359) (1.37,0.19051) (1.39,0.18738) (1.41,0.18421) (1.43,0.18102) (1.45,0.17781) (1.47,0.17459) (1.49,0.17137) (1.51,0.16815) (1.53,0.16494) (1.55,0.16175) (1.57,0.15858) (1.59,0.15544) (1.61,0.15233) (1.63,0.14924) (1.65,0.14620) (1.67,0.14319) (1.69,0.14023) (1.71,0.13730) (1.73,0.13443) (1.75,0.13159) (1.77,0.12881) (1.79,0.12607) (1.81,0.12338) (1.83,0.12074) (1.85,0.11815) (1.87,0.11561) (1.89,0.11311) (1.91,0.11067) (1.93,0.10828) (1.95,0.10594) (1.97,0.10364) (1.99,0.10140)};
+
+ \draw [mblue, thick] plot coordinates {(0.01,0.00000) (0.03,0.00000) (0.05,0.00000) (0.07,0.00000) (0.09,0.00000) (0.11,0.00000) (0.13,0.00000) (0.15,0.00000) (0.17,0.00000) (0.19,0.00000) (0.21,0.00000) (0.23,0.00000) (0.25,0.00000) (0.27,0.00000) (0.29,0.00001) (0.31,0.00003) (0.33,0.00007) (0.35,0.00015) (0.37,0.00030) (0.39,0.00055) (0.41,0.00093) (0.43,0.00149) (0.45,0.00225) (0.47,0.00326) (0.49,0.00453) (0.51,0.00609) (0.53,0.00795) (0.55,0.01011) (0.57,0.01255) (0.59,0.01528) (0.61,0.01825) (0.63,0.02145) (0.65,0.02483) (0.67,0.02837) (0.69,0.03203) (0.71,0.03576) (0.73,0.03954) (0.75,0.04332) (0.77,0.04708) (0.79,0.05078) (0.81,0.05440) (0.83,0.05791) (0.85,0.06130) (0.87,0.06454) (0.89,0.06762) (0.91,0.07053) (0.93,0.07327) (0.95,0.07581) (0.97,0.07817) (0.99,0.08033) (1.01,0.08230) (1.03,0.08409) (1.05,0.08568) (1.07,0.08709) (1.09,0.08833) (1.11,0.08939) (1.13,0.09028) (1.15,0.09102) (1.17,0.09160) (1.19,0.09204) (1.21,0.09235) (1.23,0.09252) (1.25,0.09257) (1.27,0.09251) (1.29,0.09234) (1.31,0.09207) (1.33,0.09171) (1.35,0.09126) (1.37,0.09073) (1.39,0.09014) (1.41,0.08947) (1.43,0.08875) (1.45,0.08797) (1.47,0.08714) (1.49,0.08626) (1.51,0.08535) (1.53,0.08440) (1.55,0.08342) (1.57,0.08241) (1.59,0.08137) (1.61,0.08032) (1.63,0.07925) (1.65,0.07816) (1.67,0.07707) (1.69,0.07596) (1.71,0.07485) (1.73,0.07373) (1.75,0.07261) (1.77,0.07148) (1.79,0.07036) (1.81,0.06925) (1.83,0.06813) (1.85,0.06702) (1.87,0.06592) (1.89,0.06482) (1.91,0.06374) (1.93,0.06266) (1.95,0.06159) (1.97,0.06053) (1.99,0.05949)};
+ %% mapM_ (\n -> print $ map (\x -> (showFFloat (Just 2) x "", showFFloat (Just 5) (24 / (x^5 * (exp(n / x) - 1))) "")) ([0.01,0.03..2])) [3, 3.9, 5, 6.6, 8.6]
+ \end{tikzpicture}
+\end{center}
+Note that the maximum shifts to the right as temperature lowers, and the total energy released also decreases. For the maximum to be near the visible range, we need $T \sim \SI{6000}{\kelvin}$, which is quite a lot.
+
+In general, if we want to figure out when $E(\omega)$ is maximized, then we set
+\[
+ \frac{\d E}{\d \omega} = 0.
+\]
+This gives us
+\[
+ \omega_{\mathrm{max}} = \zeta \frac{kT}{\hbar},
+\]
+where $\zeta \approx 2.822$ is the solution to
+\[
+ 3 - \zeta = 3 e^{-\zeta}.
+\]
+This is known as \term{Wien's displacement law}. In particular, we see that this is linear in $T$.
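Since the constant $\zeta$ is only specified implicitly, here is a quick numerical check (an addition for verification, not part of the notes) that the nonzero root of $3 - \zeta = 3e^{-\zeta}$ is indeed close to $2.822$, by bisection:

```python
import math

# f(z) = 3 - z - 3 exp(-z) has its nonzero root near 2.82
def f(z):
    return 3 - z - 3 * math.exp(-z)

lo, hi = 1.0, 3.0  # f(1) > 0 and f(3) < 0, so the root is bracketed
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

zeta = (lo + hi) / 2
print(f"{zeta:.3f}")  # approximately 2.82
```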
+
+Recall that we didn't actually find the value of $E$. To do so, we have to do the integral. We perform the change of variables $x = \beta \hbar \omega$ to get
+\[
+ E = \frac{V(kT)^4}{\pi^2 c^3 \hbar^3} \int_0^\infty \frac{x^3\;\d x}{e^x - 1}.
+\]
+The remaining integral is just some constant, which isn't really that important, but it happens that we can actually evaluate it, and find the value to be
+\[
+ \int_0^\infty \frac{x^3\; \d x}{e^x - 1} = \Gamma(4) \zeta(4) = \frac{\pi^4}{15},
+\]
+where $\zeta$ is the \term{Riemann zeta function}.
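We can sanity-check this value numerically: expanding $\frac{1}{e^x - 1} = \sum_{m \geq 1} e^{-mx}$ and using $\int_0^\infty x^3 e^{-mx}\;\d x = 6/m^4$ turns the integral into $6\zeta(4)$, which should equal $\pi^4/15$ (a verification sketch, not from the notes):

```python
import math

# sum 6/m^4 = Gamma(4) zeta(4); compare with pi^4/15 ≈ 6.4939
series = sum(6 / m**4 for m in range(1, 10000))
print(series, math.pi**4 / 15)  # the two agree
```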
+
+The energy density of the box is thus
+\[
+ \mathcal{E} = \frac{E}{V} = \frac{\pi^2 k^4}{15 \hbar^3 c^3} T^4 \propto T^4.
+\]
+Now if we are experimentalists, then we have no way to figure out if the above result is correct, because the box is all closed. To look into the box, we cut a small hole in the box:
+\begin{center}
+ \begin{tikzpicture}[photon/.style={morange!50!black, decorate, decoration={snake, amplitude=1, segment length=3, post=lineto, post length=3}, -latex'}]
+ \draw [thick] (0, 0) rectangle (2, 2);
+
+ \draw [photon] (0.3, 1.7) -- +(0.4, -0.2);
+ \draw [photon] (1, 0.7) -- +(0.05, 0.5);
+
+ \draw [photon] (0.2, 0.3) -- +(0.5, -0.05);
+ \draw [photon] (1.7, 0.7) -- +(0, 0.55);
+ \draw [photon] (1.8, 0.2) -- +(-0.3, 0.3);
+ \draw [photon] (0.2, 1) -- +(0.55, 0);
+ \draw [photon] (1.7, 1.8) -- +(-0.4, -0.2);
+
+ \fill [white] (1.95, 0.95) rectangle (2.05, 1.05);
+ \end{tikzpicture}
+\end{center}
+Let's say this hole has area $A$, which is sufficiently small that it doesn't mess with what is going on in the box. How much energy do we expect to leak out of the hole?
+
+For the purposes of this analysis, for a wave vector $\mathbf{k}$, we let $S(\mathbf{k})$ be the set of all photons with wave-vector within $\d^3 \mathbf{k}$ of $\mathbf{k}$. We further suppose $\mathbf{k}$ makes an angle $\theta$ with the normal to the hole.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [thick] (0, 0) -- (0, 2);
+
+ \fill [white] (-0.05, 0.8) rectangle (0.05, 1.2);
+
+ \draw [mred, -latex'] (-1, 1.7) -- +(0.6, -0.45) node [pos=0.5, anchor=south west] {$\mathbf{k}$};
+
+ \draw [dashed] (-0.4, 1.25) -- +(-0.7, 0);
+
+ \draw (-0.7, 1.25) arc(180:143:0.3);
+
+ \draw (-0.7, 0.6) node [right] {$\theta$} edge [out=180, in=180, -latex'] (-0.685, 1.345);
+ \end{tikzpicture}
+\end{center}
+What we want to know is what portion of $S(\mathbf{k})$ manages to get out of the hole, say in a time period $\d t$. To do so, we look at the following volume:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [thick] (0, 0) -- (0, 2);
+
+ \fill [white] (-0.05, 0.8) rectangle (0.05, 1.2);
+
+ \draw [dashed] (0, 0.8) -- (-0.6, 1.25) -- (-0.6, 1.65) -- (0, 1.2) -- cycle;
+ \end{tikzpicture}
+\end{center}
+The volume of this region is $A c \cos \theta \;\d t$. So the proportion of energy that gets out is simply
+\[
+ \frac{A c \cos \theta}{V} \;\d t.
+\]
+We now write
+\[
+ E(|\mathbf{k}|) \;\d^3 \mathbf{k} = \text{total energy in $S(\mathbf{k})$}.
+\]
+Then the total energy leaving the hole in time $\d t$ is
+\[
+ \int_{\theta \in [0, \pi/2]} \d^3 \mathbf{k}\; \frac{A c \cos \theta}{V}\;\d t\; E(|\mathbf{k}|).
+\]
+To evaluate this integral, we simply have to introduce polar coordinates, and we get
+\[
+ \frac{cA \;\d t}{V} \int_0^{2\pi}\d \varphi \int_0^{\pi/2} \d \theta \sin \theta \cos \theta \int_0^\infty \d |\mathbf{k}|\; |\mathbf{k}|^2 E(|\mathbf{k}|).
+\]
+The first two integrals give $2\pi$ and $\frac{1}{2}$ respectively, and we can rewrite the last integral back into Cartesian coordinates, and it is
+\[
+ \frac{1}{4\pi}\int \d^3 \mathbf{k}\; E(|\mathbf{k}|) = \frac{E}{4\pi}.
+\]
+Putting these all together, we find that the energy emitted in time $\d t$ is
+\[
+ \frac{1}{4} cA \mathcal{E} \;\d t.
+\]
+If we define the \term{energy flux} as energy per unit time per unit area leaving the hole, then the energy flux is given by
+\[
+ \frac{c}{4} \mathcal{E} = \sigma T^4,
+\]
+where
+\[
+ \sigma = \frac{\pi^2 k^4}{60 \hbar^3 c^2} \approx \SI{5.67e-8}{\joule\per\second\per\meter\squared\per\kelvin\tothe{4}}.
+\]
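As a numerical check (an addition, not part of the notes), plugging CODATA values of $k$, $\hbar$ and $c$ into this formula reproduces the familiar value of the Stefan--Boltzmann constant:

```python
import math

k = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s

sigma = math.pi**2 * k**4 / (60 * hbar**3 * c**2)
print(f"{sigma:.4e}")  # approximately 5.670e-08 W m^-2 K^-4
```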
+What was the point of computing this? We don't really care that much about the experimentalists. Suppose we had any black body, and we want to figure out how much radiation it is emitting. We imagine we put it inside that box. Then to the box, the surface of the black body is just like the hole we drilled through the box. So we know that the black body is absorbing $\sigma T^4$ energy per unit time per unit area.
+
+But the system is in equilibrium, so the black body must emit the exact same amount of radiation out. So what we have derived is in fact how black bodies behave!
+
+The best example of a black body we know of is the cosmic microwave background radiation of the universe. This is black body radiation to incredibly high accuracy, with a temperature of $T = \SI{2.7}{\kelvin}$. This is the temperature of space.
+
+Let's quickly calculate some other thermodynamic quantities we might be interested in. We have
+\begin{align*}
+ F &= - kT \log Z\\
+ &= \frac{VkT}{\pi^2 c^3}\int_0^\infty \d \omega\; \omega^2 \log (1 - e^{-\beta \hbar \omega})\\
+ \intertext{Integrating by parts, we obtain}
+ &= -\frac{V \hbar}{3\pi^2 c^3}\int_0^\infty \d \omega\; \frac{\omega^3 e^{-\beta \hbar \omega}}{1 - e^{-\beta \hbar \omega}}\\
+ &= - \frac{V \hbar}{3\pi^2 c^3} \frac{1}{\beta^4 \hbar^4} \int_0^\infty \frac{x^3 \;\d x}{e^x - 1} \\
+ &= - \frac{V \pi^2 k^4}{45 \hbar^3 c^3}T^4.
+\end{align*}
+The free energy is useful, because we can differentiate it to get pressure:
+\[
+ p = - \left(\frac{\partial F}{\partial V}\right)_T = \frac{E}{3V} = \frac{1}{3}\mathcal{E} = \frac{4\sigma}{3c} T^4.
+\]
+This is \term{radiation pressure}. Since $c$ is a big number, we see that radiation pressure is small.
+
+We can also get the entropy from this. We compute
+\[
+ S = -\left(\frac{\partial F}{\partial T}\right)_V = \frac{16 V \sigma}{3c}T^3.
+\]
+Another quantity we can be interested in is the heat capacity, which is
+\[
+ C_V = \left(\frac{\partial E}{\partial T}\right)_V = \frac{16 V \sigma}{c} T^3.
+\]
+The classical/high temperature limit of black body radiation happens when $\hbar \omega \ll kT$. In this case, we have
+\[
+ \frac{1}{e^{\beta \hbar \omega} - 1} \approx \frac{1}{\beta \hbar \omega}.
+\]
+So we have
+\[
+ E(\omega) \;\d \omega \approx \frac{V \omega^2}{\pi^2 c^3}kT \;\d \omega \equiv E_{\mathrm{classical}} (\omega) \;\d \omega.
+\]
+This is known as the \emph{Rayleigh--Jeans law}. Note that there is no $\hbar$ in it. So it is indeed a classical result. It also agrees with equipartition of energy, as it does give $kT$ per normal mode of the electromagnetic field, with each mode viewed as a harmonic oscillator. We get $kT$ rather than $\frac{1}{2}kT$ because each mode has both a kinetic and a potential contribution.
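The classical limit can be seen directly from the Planck factor: writing $y = \beta\hbar\omega$, the ratio of the Planck occupation $\frac{1}{e^y - 1}$ to its Rayleigh--Jeans approximation $\frac{1}{y}$ is $\frac{y}{e^y - 1}$, which tends to $1$ as $y \to 0$. A small numerical illustration (not from the notes):

```python
import math

# y/(e^y - 1) -> 1 as y = beta*hbar*omega -> 0
# (expm1 avoids cancellation for small y)
for y in [1.0, 0.1, 0.001]:
    print(y, y / math.expm1(y))
```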
+
+However, this has the obvious problem that $E(\omega)$ is unbounded as $\omega \to \infty$. So if we tried to compute the total energy, then we get infinity. This is called the \term{ultraviolet catastrophe}. This showed that classical reasoning doesn't actually work at high energies, and this is what eventually led Planck to come up with Planck's constant.
+
+\subsection{Phonons and the Debye model}
+One could still reasonably imagine a ``gas of photons'' as a gas. In this section, we are going to study solids, and it turns out the same ideas work there too.
+
+In the first example sheet, we studied a model of a solid, by treating it as some harmonic oscillators. We saw it was good at high temperatures, but rubbish at low temperatures. Here we are going to use a better model.
+
+If we have a crystal lattice, then they vibrate, and these give sound waves in the crystal. In quantum mechanics, we know that light is made out of particles, called photons. Similarly, in quantum mechanics, sound is made up of ``particles'' called \term{phonons}. Similar to the case of photons, the energy of a phonon of frequency $\omega$ is
+\[
+ E = \hbar \omega,
+\]
+and the frequency of a phonon is described by a \term{dispersion relation} involving the wavevector $\mathbf{k}$.
+\[
+ \omega = \omega(\mathbf{k}).
+\]
+In general, this function is very complicated. We learn how to compute these in the IID Applications of Quantum Mechanics course, but we will not do that.
+
+Suppose the spacing in the crystal is $a$. If $|\mathbf{k}| a \ll 1$, then the dispersion relation becomes linear, and we have
+\[
+ \omega \approx |\mathbf{k}| c_s,
+\]
+where $c_s$ is the speed of sound in the solid. This is all very similar to photons, except we replace $c$ with $c_s$.
+
+However, there is some important difference compared to photons. While photons have two polarizations, phonons have 3. Two of these are transverse, while the third is longitudinal. As one may know from the IID Waves course, they travel at different speeds, which we write as $c_T$ and $c_L$ for transverse and longitudinal respectively.
+
+We now want to count the number of phonons, and obtain the density of states. Repeating what we did before, we find
+\[
+ g(\omega) \;\d \omega = \frac{V \omega^2}{2\pi^2} \left(\frac{2}{c_T^3} + \frac{1}{c_L^3}\right)\;\d \omega.
+\]
+This is valid for $|\mathbf{k}| a \ll 1$. Equivalently, we need $\omega a/ c_s \ll 1$.
+
+It is convenient to write this as
+\[
+ g(\omega) \;\d \omega = \frac{3 V \omega^2}{2\pi^2 \bar{c}_s^3}\;\d \omega,\tag{$*$}
+\]
+where
+\[
+ \frac{3}{\bar{c}_s^3} = \frac{2}{c_T^3} + \frac{1}{c_L^3}
+\]
+defines $\bar{c}_s$ as the \emph{average speed}.
+
+The Debye model ignores the $|\mathbf{k}|a \ll 1$ assumption, and supposes it is also valid for $\omega a/c_s \sim 1$.
+
+There is another difference between photons and phonons --- for phonons, $\omega$ cannot be arbitrarily large. We know high $\omega$ corresponds to small wavelength, but if the wavelength gets too short, it is less than the separation of the atoms, which doesn't make sense. So we have a minimum possible wavelength set by the atomic spacing. Consequently, there is a maximum frequency $\omega_D$ ($D$ for \emph{Debye}).
+
+We would expect the minimum wavelength to be $\sim a \sim \left(\frac{V}{N}\right)^{1/3}$. So we expect
+\[
+ \omega_D \sim \bar{c}_s \left(\frac{N}{V}\right)^{1/3}.
+\]
+Here we are assuming that the different speeds of sound are of similar orders of magnitude.
+
+Is this actually true? Given a cut-off $\omega_0$, the total number of $1$-phonon states is
+\[
+ \int_0^{\omega_0} g(\omega) \;\d \omega = \frac{V \omega_0^3}{2\pi^2 \bar{c}_s^3}.
+\]
+This is the number of different ways the lattice can vibrate. In other words, the number of normal modes of the lattice. Thus, to find $\omega_0$, it suffices to find the number of normal modes, and equate it with the above quantity.
+
+\begin{claim}
+ There are in fact $3N$ normal modes.
+\end{claim}
+
+\begin{proof}
+ We consider the big vector
+ \[
+ \mathbf{X} =
+ \begin{pmatrix}
+ \mathbf{x}_1\\
+ \mathbf{x}_2\\
+ \vdots\\
+ \mathbf{x}_N
+ \end{pmatrix},
+ \]
+ where each $\mathbf{x}_i$ is the position of the $i$th atom. Then the Lagrangian is
+ \[
+ L = \frac{1}{2} m \dot{\mathbf{X}}^2 - V(\mathbf{X}),
+ \]
+ where $V$ is the interaction potential that keeps the solid a solid.
+
+ We suppose $V$ is minimized at some particular $\mathbf{X} = \mathbf{X}_0$. We let
+ \[
+ \delta \mathbf{X} = \mathbf{X} - \mathbf{X}_0.
+ \]
+ Then we can write
+ \[
+ L = \frac{1}{2} m \delta \dot{\mathbf{X}}^2 - V_0 - \frac{1}{2} \delta \mathbf{X}^T \mathbf{V} \delta \mathbf{X} + \cdots,
+ \]
+ where $\mathbf{V}$ is the Hessian of $V$ at $\mathbf{X}_0$, which is a symmetric positive definite matrix since we are at a minimum. The equation of motion is then, to first order,
+ \[
+ m \delta \ddot{\mathbf{X}} = - \mathbf{V} \delta \mathbf{X}.
+ \]
+ We assume that $\delta \mathbf{X} = \Re(e^{i \omega t} \mathbf{Q})$ for some $\mathbf{Q}$ and $\omega$, and then this reduces to
+ \[
+ \mathbf{V} \mathbf{Q} = m \omega^2 \mathbf{Q}.
+ \]
+ This is an eigenvalue equation for $\mathbf{V}$. Since $\mathbf{V}$ is a $3N \times 3N$ symmetric matrix, it has $3N$ independent eigenvectors, and so we are done, and these are the $3N$ normal modes.
+
+ If we wanted to, we can diagonalize $\mathbf{V}$, and then the system becomes $3N$ independent harmonic oscillators.
+\end{proof}
+
+If we accept that there are $3N$ normal modes, then this tells us
+\[
+ \frac{V \omega_0^3}{2\pi^2 \bar{c}_s^3} = 3N.
+\]
+Using this, we can determine
+\[
+ \omega_0 = \left(\frac{6 \pi^2 N}{V}\right)^{1/3} \bar{c}_s,
+\]
+which is of the form we predicted above by handwaving. If we go to temperatures with $kT \gtrsim \hbar \omega_0$, then we are exciting even the highest-frequency phonons. We also define the \term{Debye temperature}
+\[
+ T_0 = \frac{\hbar \omega_0}{k}.
+\]
+How high is this temperature? This depends only on $N, V$ and $\bar{c}_s$, and these are not hard to find. For example, $T_0$ for lead is $\sim \SI{100}{\kelvin}$, since lead is soft and sound travels slowly in it. On the other hand, $T_0 \sim \SI{2000}{\kelvin}$ for diamond, since it is hard.
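To see where the $\sim \SI{100}{\kelvin}$ figure for lead comes from, we can plug rough numbers into $\omega_0 = (6\pi^2 N/V)^{1/3} \bar{c}_s$; the number density and sound speed below are rough textbook values assumed for illustration, not taken from these notes:

```python
import math

hbar = 1.054571817e-34  # J s
k = 1.380649e-23        # J/K
n = 3.3e28              # atoms per m^3 for lead (rough assumed value)
c_s = 1200.0            # average sound speed in m/s (rough assumed value)

omega_0 = (6 * math.pi**2 * n) ** (1 / 3) * c_s
T_0 = hbar * omega_0 / k
print(round(T_0))  # on the order of 100 K
```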
+
+We can compute partition functions for these, entirely analogous to that of photons. The only important difference is that we have that cutoff $\omega \leq \omega_0$. What we get is
+\[
+ \log Z = - \int_0^{\omega_0} \d \omega \; g(\omega) \log\left(1 - e^{-\beta \hbar \omega}\right).
+\]
+From the partition function, we can get the energy
+\[
+ E = -\left(\frac{\partial \log Z}{\partial \beta}\right)_V = \int_0^{\omega_0} \frac{\d \omega\; \hbar \omega g(\omega)}{e^{\beta \hbar \omega} - 1} = \frac{3V \hbar}{2\pi^2 \bar{c}_s^3} \int_0^{\omega_0} \frac{\d \omega\; \omega^3}{e^{\beta \hbar \omega} - 1}.
+\]
+Again we set $x = \beta \hbar \omega$. Then the limit becomes $x = T_0/T$. So the integral becomes
+\[
+ E = \frac{3V (kT)^4}{2\pi^2 (\hbar \bar{c}_s)^3} \int_0^{T_0/T} \frac{\d x\; x^3}{e^x - 1}.
+\]
+This is the same as the case of a photon, but the integral has an upper limit related to the temperature. This integral is not one we can evaluate in closed form, but we can easily analyze what happens at extreme temperatures.
+\begin{itemize}
+ \item If $T \gg T_0$, then $x$ takes a very small range, and we can Taylor expand
+ \[
+ \int_0^{T_0/T}\frac{\d x\; x^3}{e^x - 1}= \int_0^{T_0/T} \d x\; (x^2 + \cdots) = \frac{1}{3} \left(\frac{T_0}{T}\right)^3.
+ \]
+ So we find that $E \propto T$, and
+ \[
+ C_V = \left(\frac{\partial E}{\partial T}\right)_V = \frac{V k^4 T_0^3}{2\pi^2(\hbar \bar{c}_s)^3} = 3 Nk.
+ \]
+ This agrees reasonably well with experiment, and is called the \term{Dulong--Petit law}.
+ This is also predicted by Einstein's model. This is essentially just a consequence of the equipartition of energy, as there are $3N$ degrees of freedom, and we have $kT$ for each.
+ \item If $T \ll T_0$, then we might as well replace the upper limit of the integral by infinity, and we have
+ \[
+ \int_0^{T_0/T} \frac{\d x\; x^3}{e^x - 1} \approx \int_0^\infty \frac{\d x\; x^3}{e^x - 1} = \frac{\pi^4}{15}.
+ \]
+ So we have $E \propto T^4$, which is exactly like photons. If we work out the heat capacity, then we find
+ \[
+ C_V = \frac{2\pi^2 V k^4}{5(\hbar \bar{c}_s)^3} T^3 = Nk \frac{12 \pi^4}{5}\left(\frac{T}{T_0}\right)^3.
+ \]
+ Remarkably, this $T^3$ behaviour also agrees with experiment for many substances, unlike the Einstein model.
+\end{itemize}
+
+ % insert heat capacity C_V/Nk vs T, max at 3 approached near $T_D$.
+This pattern is observed for most solids. There is one important exception, which is metals. In metals, we have electrons which are free to move in the lattice. In this case, they can also be considered to form a gas, and they are important at low temperatures.
+
+Note that in the model, we made the assumption $\omega \approx |\mathbf{k}| c_s$. This is valid for $\omega \ll \omega_D$. Because of this, we would expect the Debye model to work at low temperatures $T \ll T_0$. At high temperature, we saw it also happens to work, but as we said, this is just equipartition of energy with $3N$ oscillators, and any model that involves $3N$ harmonic oscillators should give the same prediction.
+
+\subsection{Quantum ideal gas}
+Finally, we go back to talk about actual gases. Previously, for an ideal gas, we wrote
+\[
+ Z = Z_1^N,
+\]
+and we found there was a problem with entropy, and then we argued that we should have a factor of $\frac{1}{N!}$ there, because we over-counted identical states. However, it turns out this is still not quite true. This is really just an approximation that is valid in certain circumstances.
+
+We let single particle states be $\bket{i}$, with energies $E_i$. For simplicity, we assume they are bosons. Let's consider the simplest non-trivial example, which is when $N = 2$. Then since we have two bosons, we know the states of the particles must be symmetric. So the possible states are of the form
+\[
+ \bket{i}\bket{i},\quad \frac{1}{\sqrt{2}} (\bket{i} \bket{j} + \bket{j} \bket{i}),
+\]
+with $i \not= j$. Let's now calculate the partition function by summing over all these states. We have
+\[
+ Z = \sum_i e^{-\beta 2 E_i} + \frac{1}{2}\sum_{i \not= j}e^{-\beta (E_i + E_j)},
+\]
+where we had to divide by $2$ to avoid double counting. We compare this with
+\[
+ \frac{1}{2!} Z_1^2 = \frac{1}{2} \left( \sum_i e^{-\beta E_i} \right)\left(\sum_j e^{-\beta E_j}\right) = \frac{1}{2} \sum_i e^{-\beta 2E_i} + \frac{1}{2} \sum_{i \not= j} e^{-\beta(E_i + E_j)}.
+\]
+We see that the second parts are the same, but the first terms differ by a factor of $\frac{1}{2}$. Thus, for the approximation
+\[
+ Z \approx \frac{Z_1^2}{2!}
+\]
+to be valid, we need that the probability of two particles being in the same one-particle state is negligible. Similarly, for $N$ particles, the approximation
+\[
+ Z = \frac{Z_1^N}{N!}\tag{$*$}
+\]
+is valid if the probability of $2$ or more particles being in the same state is negligible. This would be true if $\bra n_i\ket \ll 1$ for all $i$, where $n_i$ is the number of particles in $\bket{i}$. Under these circumstances, we can indeed use $(*)$. This is also true for Fermions, but we will not do those computations.
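We can see this effect concretely in a toy example (an illustration with an assumed spectrum, not from the notes): take single-particle energies $E_i = i$ in arbitrary units, compute the exact two-boson partition function by summing over unordered pairs, and compare with $Z_1^2/2!$. At low temperature the two differ noticeably; at high temperature, where double occupancy is unlikely, they agree well.

```python
import math
from itertools import combinations_with_replacement

def compare(beta, levels=200):
    E = [float(i) for i in range(levels)]  # assumed spectrum E_i = i
    Z1 = sum(math.exp(-beta * e) for e in E)
    # exact two-boson sum: one term per unordered pair (i, j) with i <= j
    Z2 = sum(math.exp(-beta * (E[i] + E[j]))
             for i, j in combinations_with_replacement(range(levels), 2))
    return Z2 / (Z1**2 / 2)

print(compare(2.0))   # low temperature: ratio well above 1
print(compare(0.05))  # high temperature: ratio close to 1
```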
+
+When does this assumption hold? We will somewhat circularly assume that our model is valid, and then derive some relation between $\bra n_i \ket$ and other known quantities.
+
+Last time, we found
+\[
+ \bra n_i\ket = - \frac{1}{\beta} \frac{\partial \log Z}{\partial E_i} \approx -\frac{N}{\beta} \frac{\partial \log Z_1}{\partial E_i} = \frac{N}{Z_1} e^{-\beta E_i}
+\]
+for all $i$, where we used the approximation $(*)$ in deriving it.
+
+For a monoatomic gas, we had
+\[
+ Z_1 = \sum_i e^{-\beta E_i} \approx \int_0^\infty \d E \; g(E) e^{-\beta E} = \frac{V}{\lambda^3},
+\]
+where
+\[
+ \lambda = \sqrt{\frac{2\pi \hbar^2}{mkT}}.
+\]
+Substituting this into our expression for $\bra n_i\ket$, we need
+\[
+ 1 \gg \frac{N \lambda^3}{V} e^{-\beta E_i}
+\]
+for all $i$. So we need $\lambda \ll \left(\frac{V}{N}\right)^{1/3}$. So we see that this is valid when our gas is not too dense. And by our definition of $\lambda$, this is valid at high temperatures.
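To get a feel for the numbers (standard constants assumed; this example is not from the notes): for helium at room temperature and atmospheric pressure, $\lambda$ is far smaller than the interparticle spacing, so the classical treatment is safe.

```python
import math

hbar = 1.054571817e-34  # J s
k = 1.380649e-23        # J/K
m = 6.6464731e-27       # mass of a helium-4 atom, kg
T = 300.0               # K
p = 101325.0            # Pa

lam = math.sqrt(2 * math.pi * hbar**2 / (m * k * T))  # thermal wavelength
spacing = (k * T / p) ** (1 / 3)                      # (V/N)^{1/3}, ideal gas
print(lam / spacing)  # roughly 0.015, comfortably less than 1
```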
+
+Recall that when we did diatomic gases classically, we had
+\begin{align*}
+ H &= H_{\mathrm{trans}} + H_{\mathrm{rot}} + H_{\mathrm{vib}}\\
+ Z_1 &= Z_{\mathrm{trans}} Z_{\mathrm{rot}} Z_{\mathrm{vib}}.
+\end{align*}
+We re-examine this in the quantum case.
+
+We still have
+\[
+ H_{\mathrm{trans}} = \frac{\mathbf{p}^2}{2m}.
+\]
+So we find that
+\[
+ Z_{\mathrm{trans}} = \int_0^\infty \d E\; g(E) e^{-\beta E} = \frac{V}{\lambda^3},
+\]
+as before. So this part is unchanged.
+
+We now look at the rotational part. We have
+\[
+ H_{\mathrm{rot}} = \frac{\mathbf{J}^2}{2I}.
+\]
+We know what the eigenvalues of $\mathbf{J}^2$ are. The energy levels are
+\[
+ \frac{\hbar^2 j(j + 1)}{2I},
+\]
+where $j = 0, 1, 2, \cdots$, and the degeneracy of each level is $2j + 1$. Using this, we can write down the partition function
+\[
+ Z_{\mathrm{rot}} = \sum_{j = 0}^\infty (2j + 1) e^{-\beta \hbar^2 j(j + 1)/2I}.
+\]
+Let's look at the high and low energy limits. If
+\[
+ kT \gg \frac{\hbar^2}{2I},
+\]
+then the exponents up there are small. So we can approximate the sum by an integral,
+\[
+ Z_{\mathrm{rot}} \approx \int_0^\infty \d x\; (2x + 1) e^{-\beta \hbar^2 x(x + 1)/2I}.
+\]
+This fortunately is an integral we can evaluate, because
+\[
+ \frac{\d}{\d x} x(x + 1) = 2x + 1.
+\]
+So we find that
+\[
+ Z_{\mathrm{rot}} = \frac{2I}{\beta \hbar^2}.
+\]
+This is exactly what we saw classically.
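Both limits of $Z_{\mathrm{rot}}$ are easy to verify numerically. With $b = \beta\hbar^2/(2I)$, the sum $\sum_j (2j+1) e^{-b j(j+1)}$ should approach $1/b$ for small $b$ and $1$ for large $b$ (a verification sketch, not from the notes):

```python
import math

def z_rot(b, jmax=5000):
    # rotational partition function with b = beta * hbar^2 / (2I)
    return sum((2 * j + 1) * math.exp(-b * j * (j + 1)) for j in range(jmax + 1))

print(z_rot(0.01) * 0.01)  # high temperature: b * Z_rot ≈ 1
print(z_rot(10.0))         # low temperature: Z_rot ≈ 1, only j = 0 survives
```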
+
+But if $kT \ll \frac{\hbar^2}{2I}$, all terms are exponentially suppressed except for $j = 0$. So we find
+\[
+ Z_{\mathrm{rot}} \approx 1.
+\]
+This is what we meant when we said the rotational modes are frozen out at low temperatures.
+
+We can do the same thing for the vibrational degrees of freedom. Recall we described them by harmonic oscillators, and that the energy levels are given by
+\[
+ E_n = \hbar \omega\left(n + \frac{1}{2}\right).
+\]
+So we can write down the partition function
+\[
+ Z_{\mathrm{vib}} = \sum_{n = 0}^\infty e^{-\beta \hbar\omega\left(n + \frac{1}{2}\right)} = e^{-\beta \hbar \omega/2} \frac{1}{1 - e^{-\beta \hbar \omega}} = \frac{1}{2 \sinh (\hbar \beta \omega/2)}.
+\]
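The geometric sum can be checked against the closed form directly, in units where $\hbar\omega = 1$ (a small verification, not from the notes):

```python
import math

def z_vib(beta, nmax=1000):
    # truncated sum over oscillator levels E_n = n + 1/2
    return sum(math.exp(-beta * (n + 0.5)) for n in range(nmax))

beta = 0.7
closed = 1 / (2 * math.sinh(beta / 2))
print(z_vib(beta), closed)  # the two agree
```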
+We can do the same thing as we did for rotational motion. Now the relevant energy scale is $\hbar \omega/2$.
+
+At high temperatures $kT \gg \hbar \omega/2$, we can replace $\sinh$ by the first term in its Taylor expansion, and we have
+\[
+ Z_{\mathrm{vib}} \approx \frac{1}{\beta \hbar \omega}.
+\]
+On the other hand, at low temperatures, we have a sum of exponentially small terms, and so
+\[
+ Z_{\mathrm{vib}} \approx e^{-\beta \hbar \omega /2}.
+\]
+So we find that
+\[
+ E_{\mathrm{vib}} = - \frac{\partial}{\partial \beta}\log Z_{\mathrm{vib}} = \frac{\hbar \omega}{2}.
+\]
+So for $N$ molecules, we have
+\[
+ E_{\mathrm{vib}} \approx \frac{N \hbar \omega}{2}.
+\]
+But this is not measurable, as it is independent of $T$. So this doesn't affect the heat capacity. So the vibrational modes freeze out.
+
+We can now understand the phenomenon we saw previously:
+\begin{center}
+ \begin{tikzpicture}[yscale=0.7]
+ \draw [->] (0, 0) -- (5, 0) node [right] {$T$};
+ \draw [->] (0, 0) -- (0, 5) node [above] {$C_V/Nk$};
+
+ \draw [dashed] (1.75, 0) node [below] {$\hbar^2/2I$} -- +(0, 5);
+ \draw [dashed] (3.5, 0) node [below] {$\hbar \omega/2$} -- +(0, 5);
+
+ \draw [dashed] (0, 1.5) node [left] {$1.5$} -- +(5, 0);
+ \draw [dashed] (0, 2.5) node [left] {$2.5$} -- +(5, 0);
+ \draw [dashed] (0, 3.5) node [left] {$3.5$} -- +(5, 0);
+
+ \draw [thick, mblue] (0, 1.5) -- (1.5, 1.5) .. controls (1.75, 1.5) and (1.75, 2.5) .. (2, 2.5) -- (3.25, 2.5) .. controls (3.5, 2.5) and (3.5, 3.5) .. (3.75, 3.5) -- (5, 3.5);
+ \end{tikzpicture}
+\end{center}
+Note that in theory it is certainly possible that the vibrational mode comes first, instead of what we drew. However, in practice, this rarely happens.
+
+\subsection{Bosons}
+In the previous derivation, we needed to assume that
+\[
+ \lambda = \sqrt{\frac{2\pi \hbar^2}{mkT}} \ll \left(\frac{V}{N}\right)^{1/3}.
+\]
+At \emph{very} low temperatures, this is no longer true. Then quantum effects become important. If there are no interactions between particles, then only one effect is important --- quantum statistics.
+\begin{itemize}
+ \item Bosons have integer spin, and the states have to be symmetric with respect to interchange of $2$ particles. For example, photons are bosons.
+ \item Fermions have spin $\frac{1}{2}$, and the states have to be antisymmetric, e.g.\ $e, p, n$.
+\end{itemize}
+Since spins add up, atoms made from an even (odd resp.) number of electrons, protons and neutrons are bosons (fermions resp.). In an atom, since the charge is neutral, we have the same number of electrons and protons. So we are really counting the number of neutrons.
+
+For example, hydrogen has no neutrons, so it is a boson. On the other hand, deuterium has 1 neutron, and so it is a fermion.
+
+We suppose we have bosons, and the single particles states are $\bket{r}$ with energy $E_r$. We let $n_r$ be the number of particles in $\bket{r}$. We assume the particles are indistinguishable, so to specify the state of the $n$-particle system, we just need to specify how many particles are in each $1$-particle state by
+\[
+ \{n_1, n_2, n_3, \cdots\}.
+\]
+Once we specify these numbers, then the energy is just
+\[
+ \sum_r n_r E_r.
+\]
+So we can write down the partition function in the canonical ensemble by summing over all possible states:
+\[
+ Z = \sum_{\{n_r\}} e^{-\beta \sum_s n_s E_s}.
+\]
+This is exactly the same as we had for photons. But there is one very important difference. For photons, the photon numbers are not conserved. But here the atoms cannot be destroyed. So we assume that the particle number is conserved. Thus, we are only summing over $\{n_r\}$ such that
+\[
+ \sum n_r = N.
+\]
+This makes it rather difficult to evaluate the sum. The trick is to use the grand canonical ensemble instead of the canonical ensemble. We have
+\[
+ \mathcal{Z} = \sum_{\{n_r\}} e^{-\beta \sum_s n_s (E_s - \mu)},
+\]
+and now there is no restriction on the allowed values of $\{n_i\}$. So we can factorize this sum and write it as
+\[
+ \mathcal{Z} = \prod_s \mathcal{Z}_s,
+\]
+where
+\[
+ \mathcal{Z}_s = \sum_{n_s = 0}^\infty e^{-\beta (E_s - \mu) n_s} = \frac{1}{1 - e^{-\beta(E_s - \mu)}}.
+\]
+For this sum to actually converge, we need $\mu < E_s$, and this has to be true for any $s$. So if the ground state energy is $E_0$, which we can always choose to be $0$, then we need $\mu < 0$.
+
+Taking the logarithm, we can write
+\[
+ \log \mathcal{Z} = - \sum_r \log(1 - e^{-\beta (E_r - \mu)}).
+\]
+We can now compute the expectation values $\bra n_r\ket$. This is
+\[
+ \bra n_r \ket = - \frac{1}{\beta} \frac{\partial}{\partial E_r} \log \mathcal{Z} = \frac{1}{e^{\beta(E_r - \mu)} - 1}.
+\]
+This is called the \term{Bose--Einstein distribution}. This is the expected number of particles in the $1$-particle state $r$.
+
+We can also ask for the expected total number of particles,
+\[
+ \bra N\ket = \frac{1}{\beta} \frac{\partial}{\partial \mu}\log \mathcal{Z} = \sum_r \frac{1}{e^{\beta(E_r - \mu)} - 1} = \sum_r \bra n_r\ket.
+\]
+Similarly, we have
+\[
+ \bra E\ket = \sum_r E_r \bra n_r\ket,
+\]
+as expected. As before, we will stop writing the brackets to denote expectation values.
+
+It is convenient to introduce the \term{fugacity}\index{$z$}
+\[
+ z = e^{\beta\mu}.
+\]
+Since $\mu < 0$, we know
+\[
+ 0 < z < 1.
+\]
+We now replace the sum with an integral using the density of states. Then we obtain
+\[
+ \log \mathcal{Z} = - \int_0^\infty \d E\; g(E) \log (1 - z e^{-\beta E}).
+\]
+The total number of particles is similarly
+\[
+ N = \int_0^\infty \d E\; \frac{g(E)}{z^{-1} e^{\beta E} - 1} = N(T, V, \mu).
+\]
+Now we are using the grand canonical ensemble just because it is convenient, not because we like it. So we hope that we can invert this relation to write % monotonicity?
+\[
+ \mu = \mu(T, V, N).\tag{$*$}
+\]
+So we can write
+\[
+ E = \int_0^\infty \d E\; \frac{E g(E)}{z^{-1} e^{\beta E} - 1} = E(T, V, \mu) = E(T, V, N),
+\]
+using $(*)$. As before, we have the grand canonical potential
+\[
+ pV = -\Phi = \frac{1}{\beta} \log \mathcal{Z} = -\frac{1}{\beta} \int_0^\infty \d E\; g(E) \log(1 - ze^{-\beta E}).
+\]
+We now focus on the case of monatomic, non-relativistic particles. Then the density of states is given by
+\[
+ g(E) = \frac{V}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} E^{1/2}.
+\]
+There is a nice trick we can do. We integrate by parts, and by integrating $g(E)$ and differentiating the $\log$, we can obtain
+\[
+ pV = \frac{2}{3} \int_0^\infty \frac{\d E\; Eg(E)}{z^{-1} e^{\beta E} - 1} = \frac{2}{3} E.
+\]
+So if we can express $E$ as a function of $T, V, N$, then we can use this to give us an equation of state!
+
+So what we want to do now is to actually evaluate those integrals, and then invert the relation we found. Often, the integrals cannot be done exactly. However, we can express them in terms of some rather more standardized function. We first introduce the \term{Gamma functions}:
+\[
+ \Gamma(s) = \int_0^\infty t^{s - 1} e^{-t} \;\d t
+\]
+This function is pretty well-known in, say, number-theoretic circles. If $n$ is a positive integer, then we have $\Gamma(n) = (n - 1)!$, and it also happens that
+\[
+ \Gamma\left(\frac{3}{2}\right) = \frac{\sqrt{\pi}}{2},\quad \Gamma\left(\frac{5}{2}\right) = \frac{3\sqrt{\pi}}{4}.
+\]
+We shall also introduce the seemingly-arbitrary functions\index{$g_n(z)$}
+\[
+ g_n(z) = \frac{1}{\Gamma(n)} \int_0^\infty \frac{\d x\; x^{n - 1}}{z^{-1} e^x - 1},
+\]
+where $n$ need not be an integer. It turns out these functions appear every time we try to compute something. For example, we have
+\begin{align*}
+ \frac{N}{V} &= \frac{1}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} \int_0^\infty \frac{\d E\; E^{1/2}}{z^{-1} e^{\beta E} - 1}\\
+ &= \frac{1}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} \frac{1}{\beta^{3/2}} \int_0^\infty \frac{\d x\;x^{1/2}}{z^{-1}e^x - 1}\\
+ &= \frac{1}{\lambda^3} g_{3/2}(z).
+\end{align*}
+Similarly, the energy is given by
+\[
+ \frac{E}{V} = \frac{3}{2\lambda^3 \beta} g_{5/2}(z).
+\]
+It is helpful to obtain a series expansion for the $g_n(z)$ when $0 \leq z \leq 1$. Then we have
+\begin{align*}
+ g_n(z) &= \frac{1}{\Gamma(n)} \int_0^\infty \d x\; \frac{z x^{n - 1} e^{-x}}{1 - z e^{-x}}\\
+ &= \frac{z}{\Gamma(n)} \int_0^\infty \d x\; x^{n - 1} e^{-x} \sum_{m = 0}^\infty z^m e^{-mx}\\
+ &= \frac{1}{\Gamma(n)} \sum_{m = 1}^\infty z^m \int_0^\infty \d x\; x^{n - 1} e^{-mx}\\
+ &= \frac{1}{\Gamma(n)} \sum_{m = 1}^\infty \frac{z^m}{m^n}\int_0^\infty \d u\; u^{n - 1} e^{-u}\\
+ &= \sum_{m = 1}^\infty \frac{z^m}{m^n}.
+\end{align*}
+Note that it is legal to exchange the integral and the summation because everything is non-negative.
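As a sanity check, we can compare the series against the defining integral numerically. A small Python sketch, assuming nothing beyond the definitions above (the truncation points are arbitrary):

```python
import math

def g_series(n, z, terms=5000):
    # g_n(z) = sum_{m >= 1} z^m / m^n, as derived above
    return sum(z**m / m**n for m in range(1, terms + 1))

def g_integral(n, z, steps=100000, xmax=40.0):
    # defining integral: (1/Gamma(n)) int_0^inf x^{n-1} / (z^{-1} e^x - 1) dx
    h = xmax / steps
    total = sum(((i + 0.5) * h)**(n - 1) / (math.exp((i + 0.5) * h) / z - 1)
                for i in range(steps))
    return h * total / math.gamma(n)

print(g_series(1.5, 0.5), g_integral(1.5, 0.5))
```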
+
+In particular, this gives series expansions
+\begin{align*}
+ \frac{N}{V} &= \frac{z}{\lambda^3} \left(1 + \frac{z}{2\sqrt{2}} + O(z^2)\right)\\
+ \frac{E}{V} &= \frac{3z}{2\lambda^3 \beta}\left(1 + \frac{z}{4\sqrt{2}} + O(z^2)\right).
+\end{align*}
+
+Let's now consider the different possible scenarios. In the limit $z \ll 1$, this gives
+\[
+ z \approx \frac{\lambda^3 N}{V}.
+\]
+By the assumption on $z$, this implies that
+\[
+ \lambda \ll \left(\frac{V}{N}\right)^{1/3},
+\]
+which is our good old classical high-temperature limit. So $z \ll 1$ corresponds to \emph{high} temperature.
+
+This might seem a bit counter-intuitive, since $z = e^{\beta \mu}$. So if $T \to \infty$, then we should have $\beta \to 0$, and hence $z \to 1$. But this is not a valid argument, because we are fixing $N$, and hence as we change $T$, the value of $\mu$ also varies.
+
+In this high temperature limit, we can invert the $\frac{N}{V}$ equation to obtain
+\[
+ z = \frac{\lambda^3N}{V} \left(1 - \frac{1}{2\sqrt{2}} \frac{\lambda^3 N}{V} + O\left(\left(\frac{\lambda^3 N}{V}\right)^2\right)\right).
+\]
+Plugging this into our expression for $E/V$, we find that
+\[
+ E = \frac{3}{2} \frac{N}{\beta}\left(1 - \frac{1}{2\sqrt{2}} \frac{\lambda^3N}{V} + \cdots\right) \left(1 + \frac{1}{4\sqrt{2}} \frac{\lambda^3 N}{V} + \cdots\right).
+\]
+So we find that
+\[
+ pV = \frac{2}{3}E = NkT \left(1 - \frac{1}{4\sqrt{2}} \frac{\lambda^3 N}{V} + O\left(\left(\frac{\lambda^3N}{V}\right)^2\right)\right).
+\]
+We see that at leading order, we recover the \term{ideal gas law}. The first correction term is the \term{second Virial coefficient}\index{Virial coefficient!second}. These corrections do not come from interactions, but from quantum statistics. Thus Bose statistics reduces the pressure.
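We can check the series inversion behind this numerically: solve $g_{3/2}(z) = \lambda^3 N/V$ for $z$ by bisection and compare with the second-order formula. A Python sketch (the value of $\lambda^3 N/V$ is an arbitrary small number chosen for illustration):

```python
import math

def g32(z, terms=1000):
    # g_{3/2}(z) = sum z^m / m^{3/2}
    return sum(z**m / m**1.5 for m in range(1, terms + 1))

a = 0.01  # a = lambda^3 N / V, small in the classical limit
# solve g_{3/2}(z) = a by bisection (g_{3/2} is increasing in z)
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if g32(mid) < a:
        lo = mid
    else:
        hi = mid
z_exact = (lo + hi) / 2
# second-order inversion from the text
z_series = a * (1 - a / (2 * math.sqrt(2)))
print(z_exact, z_series)
```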
+
+This is good, but it is still in the classical limit. What we really want to understand is when things become truly quantum, and $z$ is large. This leads to the phenomenon of \emph{Bose--Einstein condensation}.
+
+
+\subsection{Bose--Einstein condensation}
+We can write our previous $\frac{N}{V}$ equation as
+\[
+ g_{3/2}(z) = \frac{N}{V} \lambda^3.\tag{$\dagger$}
+\]
+Recall that $\lambda \propto T^{-1/2}$. As we decrease $T$ with $N/V$ fixed, the right hand side grows. So $g_{3/2}(z)$ must grow as well. Looking at the series expansion, $g_{3/2}$ is evidently an increasing function of $z$. But $z$ cannot grow without bound! It is bounded by $z \leq 1$.
+
+How does $g_{3/2}$ grow as $z \to 1$? By definition, we have
+\[
+ g_n(1) = \zeta(n),
+\]
+the \term{Riemann $\zeta$-function}. It is a standard result that this is finite at $n = \frac{3}{2}$. In fact,
+\[
+ \zeta\left(\frac{3}{2}\right) \approx 2.612.
+\]
+We have secretly met this $\zeta$ function before. Recall that when we did the black body radiation, we had an integral to evaluate, which was
+\[
+ \int_0^\infty \frac{\d x\; x^3}{e^x - 1} = \Gamma(4) g_4(1) = \Gamma(4) \zeta(4) = 3! \cdot \frac{\pi^4}{90} = \frac{\pi^4}{15}.
+\]
+Of course, getting the final answer requires knowledge of the value of $\zeta(4)$, which fortunately the number theorists have figured out for us in their monumental special values of $L$-functions programme.
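These values are easy to verify numerically. Here is a Python sketch summing the series directly, with a crude Euler--Maclaurin tail correction (the correction is a standard trick, not from the notes):

```python
import math

def zeta(s, terms=100000):
    # sum_{k <= N} k^{-s}, plus tail estimate int_N^inf x^{-s} dx - (1/2) N^{-s}
    partial = sum(k**-s for k in range(1, terms + 1))
    tail = terms**(1 - s) / (s - 1) - 0.5 * terms**-s
    return partial + tail

print(zeta(1.5))                # approximately 2.612
print(math.gamma(4) * zeta(4))  # the black body integral, pi^4 / 15
```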
+
+Going back to physics, we have found that when $T$ hits some \emph{finite} value $T = T_c$, the \term{critical temperature}, we have $z = 1$. We can simply find $T_c$ by inverting the relation $(\dagger)$, and obtain
+\[
+ T_c = \left(\frac{2\pi \hbar^2}{k m}\right) \left(\frac{1}{\zeta(3/2)} \frac{N}{V}\right)^{2/3}.
+\]
+Then we can express
+\[
+ \frac{T_c}{T} = \left(\frac{\lambda^3}{\zeta(3/2)} \frac{N}{V}\right)^{2/3}.
+\]
+So $T_c$ is the temperature at which $\lambda^3$ becomes comparable to the volume per particle $V/N$.
+
+Physically, nothing should stop us from cooling \emph{below} $T_c$, yet our equation forbids it. What does this mean? Perhaps $N$ should decrease, but particles cannot just disappear.
+So something has gone wrong. What has gone wrong?
+
+The problem happened when we replaced the sum of states with the integral over the density of states
+\[
+ \sum_\mathbf{k} \approx \frac{V(2m)^{3/2}}{4\pi^2 \hbar^3} \int_0^\infty \d E\; E^{1/2}.
+\]
+The right hand side ignores the ground state $E = 0$. Using the formula for $\bra n_r\ket$, we would expect the number of things in the ground state to be
+\[
+ n_0 = \frac{1}{z^{-1} - 1} = \frac{z}{1 - z}.
+\]
+For most values of $z \in [0, 1)$, this is not a problem, as the ground state contains only a few particles.
+
+However, for $z$ very very very close to $1$, this becomes large. This is very unusual. Recall that when we described the validity of the classical approximation, we used the fact the number of particles in each state is unlikely to be greater than $1$. Here the exact opposite happens --- a lot of states try to lump into the ground state.
+
+Now the solution is to manually put back these ground states. This might seem a bit dodgy --- do we have to fix the first excited state as well? And the second? However, we shall not worry ourselves with these problems. Just fixing the ground state is a decent approximation.
+
+We now just put
+\[
+ N = \frac{V}{\lambda^3} g_{3/2}(z) + \frac{z}{1 - z}.
+\]
+Then there is no problem keeping $N$ fixed as we take $T \to 0$, and we find $z \to 1 - \frac{1}{N}$ as $T \to 0$.
+
+For $T < T_c$, we can set $z = 1$ in the formula for $g_{3/2}(z)$, and we can approximate
+\[
+ N = \frac{V}{\lambda^3} g_{3/2}(1) + \frac{1}{1 - z},
+\]
+and
+\[
+ n_0 = \frac{1}{1 - z}.
+\]
+If we divide this expression through by $N$, we find
+\[
+ \frac{n_0}{N} = 1 - \frac{V}{N \lambda^3} \zeta\left(\frac{3}{2}\right) = 1 - \left(\frac{T}{T_c}\right)^{3/2}.
+\]
+This is the fraction of the particles that are in the ground state, and we see that as $T \to 0$, they all go to the ground state.
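We can see this behaviour numerically: fix $N$ and solve the corrected equation $N = \frac{V}{\lambda^3} g_{3/2}(z) + \frac{z}{1 - z}$ for $z$ at various $T/T_c$. A Python sketch, with $N = 10^4$ an arbitrary illustrative size (finite-$N$ effects smear the transition slightly):

```python
ZETA_32 = 2.612  # zeta(3/2)
N = 10_000       # illustrative particle number

def g32(z, terms=30000):
    # g_{3/2}(z) = sum z^m / m^{3/2}; many terms are needed as z -> 1
    total, zm = 0.0, 1.0
    for m in range(1, terms + 1):
        zm *= z
        total += zm / m**1.5
    return total

def n0_fraction(t):
    # t = T / T_c, so V / lambda^3 = N t^{3/2} / zeta(3/2)
    A = N * t**1.5 / ZETA_32
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(50):  # bisect: A g_{3/2}(z) + z/(1-z) is increasing in z
        z = (lo + hi) / 2
        if A * g32(z) + z / (1 - z) < N:
            lo = z
        else:
            hi = z
    z = (lo + hi) / 2
    return (z / (1 - z)) / N

frac_below = n0_fraction(0.5)  # below T_c: macroscopic ground state occupation
frac_above = n0_fraction(1.5)  # above T_c: negligible occupation
print(frac_below, 1 - 0.5**1.5, frac_above)
```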
+
+For $T < T_c$, there is a macroscopic number of particles that occupy the ground state. This is called a \term{Bose--Einstein condensate}. This is a theoretical prediction made in the early 20th century, and was confirmed experimentally much later in 1995. It took so long to do this because the critical temperature is $\sim \SI{e-7}{\kelvin}$, which is very low!
+
+Let's look at the equation of state in this condensate. We just have to take our previous expression for the equation of state, but put back in the contribution of the ground state. For example, we had
+\[
+ pV = - \Phi = -\frac{1}{\beta} \sum_r \log(1 - e^{-\beta(E_r - \mu)}).
+\]
+We previously just replaced the sum with an integral, but this time we shall not forget the ground state term. Then we have
+\[
+ pV = \frac{2}{3}E - \frac{1}{\beta} \log(1 - z).
+\]
+Recall that we had
+\[
+ \frac{E}{V} = \frac{3kT}{2 \lambda^3} g_{5/2}(z).\tag{$\ddagger$}
+\]
+We don't need to add terms for the ground state since the ground state has zero energy. Substitute this into the equation of state, and use the fact that $z \approx 1$ for $T < T_c$ to obtain
+\[
+ pV = Nk T \left(\frac{V}{N\lambda^3}\right) \zeta\left(\frac{5}{2}\right) - kT \log(1 - z).
+\]
+For $T < T_c$, the first term is $O(N)$, and the second term is $O(\log N)$, which is much much smaller than $N$. So the second term is negligible. Thus, we can approximate
+\[
+ p = \frac{kT}{\lambda^3} \zeta\left(\frac{5}{2}\right) \sim T^{5/2}.
+\]
+There is a remarkable thing about this expression --- there is no $V$ or $N$ in here. The pressure is a function of the temperature alone, very unlike an ideal gas.
+
+We can also compute the heat capacity. We have
+\[
+ C_V = \left(\frac{\partial E}{\partial T}\right)_{V, N}.
+\]
+Using $(\ddagger)$, we find that
+\[
+ \frac{C_V}{V} = \frac{15k}{4 \lambda^3} g_{5/2}(z) + \frac{3kT}{2\lambda^3} g_{5/2}'(z) \left(\frac{\partial z}{\partial T}\right)_{V, N}.
+\]
+When $T \ll T_c$, we have $z \approx 1$. So the second term is negligible. So
+\[
+ \frac{C_V}{V} = \frac{15k}{4\lambda^3} \zeta\left(\frac{5}{2}\right) \propto T^{3/2}.
+\]
+That wasn't too exciting. Now consider the case where $T$ is slightly greater than $T_c$. We want to find an expression for $z$ in terms of $T$. Our previous expressions were expansions around $z = 0$, which aren't very helpful here. Instead, we have to find an expression for $z$ near $1$.
+
+Recall that $z$ is determined by
+\[
+ g_{3/2}(z) = \frac{\lambda^3 N}{V}.
+\]
+It turns out the best way to understand the behaviour of $g_{3/2}$ near $z = 1$ is to consider its derivative. Using the series expansion, it is not hard to see that
+\[
+ g_n'(z) = \frac{1}{z} g_{n - 1}(z).
+\]
+Taking $n = 3/2$, we find
+\[
+ g_{3/2}'(z) = \frac{1}{z} g_{1/2}(z).
+\]
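This identity follows by differentiating the series term by term, and is easy to check numerically. A Python sketch comparing a central difference of $g_{3/2}$ against $g_{1/2}(z)/z$ (the point $z = 0.6$ and the step size are arbitrary choices):

```python
def g(n, z, terms=4000):
    # g_n(z) = sum z^m / m^n
    return sum(z**m / m**n for m in range(1, terms + 1))

z, h = 0.6, 1e-6
numeric = (g(1.5, z + h) - g(1.5, z - h)) / (2 * h)  # central difference
identity = g(0.5, z) / z                             # claimed g_{3/2}'(z)
print(numeric, identity)
```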
+But $g_{1/2}(z)$ is not a very nice function near $z = 1$. If we look at the series expansion, it is clear that it diverges as $z \to 1$. What we want to know is \emph{how} it diverges. Using the integral formula, we have
+\begin{align*}
+ g_{1/2}(z) &= \frac{1}{\Gamma(1/2)} \int_0^\infty \d x\; \frac{x^{-1/2}}{z^{-1}e^x - 1} \\
+ &= \frac{1}{\Gamma(1/2)} \int_0^\varepsilon \frac{\d x\; x^{-1/2}}{z^{-1}(1 + x) - 1} + \text{bounded terms}\\
+ &= \frac{z}{\Gamma(1/2)} \int_0^\varepsilon \frac{\d x\; x^{-1/2}}{1 - z + x} + \cdots\\
+ &= \frac{2z}{\Gamma(1/2)\sqrt{1 - z}} \int_0^{\sqrt{\varepsilon/(1 - z)}} \frac{\d u}{1 + u^2} + \cdots
+\end{align*}
+where we made the substitution $u = \sqrt{\frac{x}{1 - z}}$. As $z \to 1$, the upper limit tends to infinity, and since the integrand integrates to $\arctan$, the integral tends to $\pi/2$. So we find that
+\[
+ g'_{3/2}(z) = \frac{\pi}{\Gamma(1/2)\sqrt{1 - z}} + \text{finite}.
+\]
+We can then integrate this back up and find
+\[
+ g_{3/2}(z) = g_{3/2}(1) - \frac{2\pi}{\Gamma(1/2)} \sqrt{1 - z} + O(1 - z).
+\]
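We can test this square-root behaviour against the series numerically, using $\Gamma(1/2) = \sqrt{\pi}$ and $\zeta(3/2) \approx 2.612$. A Python sketch (the point $z = 0.999$ is an arbitrary choice close to $1$):

```python
import math

ZETA_32 = 2.612  # zeta(3/2)

def g32(z, terms=50000):
    # g_{3/2}(z) = sum z^m / m^{3/2}; convergence is slow near z = 1
    total, zm = 0.0, 1.0
    for m in range(1, terms + 1):
        zm *= z
        total += zm / m**1.5
    return total

z = 0.999
approx = ZETA_32 - 2 * math.sqrt(math.pi) * math.sqrt(1 - z)
print(g32(z), approx)
```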
+So from $(\dagger)$, we know that
+\[
+ \frac{2\pi}{\Gamma(1/2)} \sqrt{1 - z} \approx \zeta(3/2) - \frac{\lambda^3 N}{V}.
+\]
+Therefore we find
+\begin{align*}
+ z &\approx 1 - \left(\frac{\Gamma(1/2) \zeta(3/2)}{2\pi}\right)^2 \left(1 - \frac{\lambda^3 N}{\zeta(3/2) V}\right)^2 \\
+ &= 1 - \left(\frac{\Gamma(1/2) \zeta(3/2)}{2\pi}\right)^2 \left(1 - \left(\frac{T_c}{T}\right)^{3/2}\right)^2.
+\end{align*}
+We can expand
+\[
+ 1 - \left(\frac{T_c}{T}\right)^{3/2} \approx \frac{3}{2} \frac{T - T_c}{T_c}
+\]
+for $T \approx T_c$. So we find that
+\[
+ z \approx 1 - B \left(\frac{T - T_c}{T_c}\right)^2
+\]
+for some positive number $B$, and this is valid for $T \to T_c^+$.
+
+We can now use this to compute the quantity in the heat capacity:
+\[
+ \left(\frac{\partial z}{\partial T}\right)_{V, N} \approx -\frac{2B}{T_c^2}(T - T_c).
+\]
+So we find that
+\[
+ \frac{C_V}{V} \approx \frac{15 k}{4 \lambda^3} g_{5/2}(z) - \frac{3Bk}{\lambda^3} g'_{5/2}(1) \frac{T - T_c}{T_c}.
+\]
+Finally, we can compare this with what we have got when $T < T_c$. The first part is the same, but the second term is not. It is only present when $T > T_c$.
+
+Since the second term vanishes when $T = T_c$, we see that $C_V$ is continuous at $T = T_c$, but the derivative is not.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$T$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$C_V$};
+
+ \draw [mblue, thick, domain=0.001:2] plot [smooth] (\x, {\x^(3/2)});
+
+ \draw [dashed] (2, 0) node [below] {$T_c$} -- (2, 2.83);
+
+ \draw [mblue, thick] (2, 2.83) .. controls (2.5, 2) .. (6, 2);
+
+ \draw [dashed] (0, 2) node [left] {$\frac{3}{2} Nk$} -- (6, 2);
+ \end{tikzpicture}
+\end{center}
+This is an example of a \term{phase transition} --- a discontinuity in a thermodynamic quantity.
+
+Note that this is possible only in the thermodynamic limit $N \to \infty$. If we work with finite $N$, then the partition function is just a finite sum of analytic functions, so all thermodynamic quantities will be analytic.
+
+\subsubsection*{Superfluid Helium}
+There are two important isotopes of Helium. There is $^4$He, which has two protons, two neutrons and two electrons --- six fermions in total, so it is a boson. There is also a second isotope, namely $^3$He, which has only one neutron --- five fermions in total, so it is a fermion. Both of these are liquids at low temperatures.
+
+It turns out that at a very low temperature, namely $\SI{2.17}{\kelvin}$, helium-4 exhibits a phase transition. It then becomes what is known as a \term{superfluid}. This is a strange state of matter where the liquid appears to have zero viscosity. It does weird things. For example, if you put it in a cup, it just flows out of the cup.
+
+But we don't really care about that. What is relevant to us is that there is a phase transition at this temperature. On the other hand, we see that $^3$He doesn't. Thus, we figure that this difference is due to difference in quantum statistics! One natural question to ask is --- is this superfluid a Bose condensation?
+
+One piece of evidence that suggests they might be related is that when we look at the heat capacity, we get a similar discontinuity in derivative. However, when we look at this in detail, there are differences. First of all, we are talking about a liquid, not a gas. Thus, liquid interactions would be important. The details of the $C_V$ graph are also different from those of a Bose gas. For example, the low-temperature part of the curve goes like $C_V \sim T^3$. So in some sense, we found a fluid analogue of Bose condensates.
+
+It turns out $^3$He becomes superfluid when the temperature is very \emph{very} low, at $T \sim \SI{e-3}{\kelvin}$. Why is this? Apparently, there are very weak forces between helium-3 atoms. At very low temperatures, this causes them to pair up, and when they pair up, they become bosons, and these can condense.
+
+\subsection{Fermions}
+We now move on to study fermions. One major application of this is the study of electrons in a metal, which we can model as gases. But for the moment, we can work in complete generality, and suppose we again have a non-interacting fermion gas. Since they are fermions, they obey the Pauli exclusion principle. Two fermions cannot be in the same state. Mathematically, this means the occupation numbers $\bra n_r\ket$ cannot be arbitrary --- they are either $0$ or $1$. As in the case of bosons, let's study these fermions using the grand canonical ensemble. The partition function is
+\[
+ \mathcal{Z} = \prod_r \mathcal{Z}_r,
+\]
+where
+\[
+ \mathcal{Z}_r = \sum_{n_r = 0}^1 e^{-\beta n_r (E_r - \mu)} = 1 + e^{-\beta (E_r - \mu)}.
+\]
+This is just like the bosonic case, but we only have two terms in the sum. We then find that
+\[
+ \log \mathcal{Z} = \sum_r \log (1 + e^{-\beta (E_r - \mu)}).
+\]
+Then we have
+\[
+ \bra n_r\ket = -\frac{1}{\beta} \frac{\partial}{\partial E_r} \log \mathcal{Z} = \frac{1}{e^{\beta(E_r - \mu)} + 1}.
+\]
+This is called the \term{Fermi--Dirac distribution}. The difference between this and the bosonic case is that we have a difference in sign!
+
+We can get the total number of particles in the usual way,
+\[
+ \bra N\ket = \frac{1}{\beta} \frac{\partial}{\partial \mu}\log \mathcal{Z} = \sum_r \frac{1}{e^{\beta(E_r - \mu)} + 1} = \sum_r \bra n_r\ket.
+\]
+As before, we will stop drawing the $\bra \ph\ket$.
+
+There is another difference between this and the bosonic case. In the bosonic case, we required $\mu < 0$ for the sum $\mathcal{Z}_r$ to converge. However, here the sum always converges, as there are just two terms. So there is no restriction on $\mu$.
+
+We assume that our particles are non-relativistic, so the energy is
+\[
+ E = \frac{\hbar^2 \mathbf{k}^2}{2m}.
+\]
+Another thing to notice is that our particles have a spin
+\[
+ s \in \Z + \frac{1}{2}.
+\]
+This gives a degeneracy of $g_s = 2s + 1$. This means the density of states has to be multiplied by $g_s$. We have
+\[
+ g(E) = \frac{g_s V}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} E^{1/2}.
+\]
+We can now write down the total number of particles in terms of the density of states. We have
+\[
+ N = \int_0^\infty \frac{\d E\; g(E)}{ z^{-1} e^{\beta E} + 1},
+\]
+where again
+\[
+ z = e^{\beta\mu}.
+\]
+Similarly, the energy is given by
+\[
+ E = \int_0^\infty \frac{\d E\; E g(E)}{z^{-1} e^{\beta E} + 1}.
+\]
+Again, we have the grand canonical potential
+\[
+ pV = kT \log \mathcal{Z} = kT \int_0^\infty \d E\; g(E) \log (1 + z e^{-\beta E}).
+\]
+Just as for bosons, we can integrate this by parts, and we obtain
+\[
+ pV = \frac{2}{3} E.
+\]
+We now look at the high temperature limit. In the case of bosons, we identified this with a small $z$, and we can do the same here. This is done on example sheet 3, and we find that to leading order, we have
+\[
+ z \approx \lambda^3 \frac{N}{V},
+\]
+exactly as for bosons. Continuing as in the example sheet, we find
+\[
+ pV = NkT \left(1 + \frac{\lambda^3 N}{4 \sqrt{2} g_s V} + \cdots\right).
+\]
+We see the sign there is positive. So Fermi statistics \emph{increases} pressure. This is the opposite of what we saw for bosons, which makes sense.
+
+We now go to the opposite extreme, with zero temperature. Then $\beta \to \infty$. So
+\[
+ \frac{1}{e^{\beta (E - \mu)} + 1} =
+ \begin{cases}
+ 1 & E < \mu\\
+ 0 & E > \mu
+ \end{cases}
+\]
+Note that here $\mu$ is evaluated at $T = 0$, as $\mu$ depends on $T$. Clearly the value of $\mu$ at $T = 0$ is important.
+\begin{defi}[Fermi energy]\index{Fermi energy}
+ The \emph{Fermi energy} is
+ \[
+ E_f = \mu(T = 0) = \lim_{T \to 0} \mu(T, V, N).
+ \]
+\end{defi}
+Here we see that at $T = 0$, all states with $E \leq E_f$ are occupied, and states with higher energy are \emph{not} occupied. Now it is easy to work out what this Fermi energy is, because we know the total number of particles is $N$. We have
+\[
+ N = \int_0^{E_f} \d E\; g(E) = \frac{g_s V}{6 \pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} E_f^{3/2}.
+\]
+Inverting this, we find that
+\[
+ E_f = \frac{\hbar^2}{2m} \left(\frac{6 \pi^2}{g_s} \frac{N}{V}\right)^{2/3}.
+\]
+We note that this is very much like the critical temperature of the Bose--Einstein distribution. We can define the characteristic temperature scale
+\[
+ T_f = \frac{E_f}{k}.
+\]
+Now whenever we talk about whether the temperature is high or low, we mean relative to this scale.
+
+How high is this temperature? We saw that for bosons, this temperature is very low.
+\begin{eg}
+ For $^3$He, the Fermi temperature is
+ \[
+ T_f \sim \SI{4.3}{\kelvin}.
+ \]
+ While this is very low compared to everyday temperatures, it is pretty high compared to our previous $T_c$.
+
+ For electrons in a metal, we have $T_f \sim \SI{e4}{\kelvin}$. This is very high! Thus, when studying metals, we cannot take the classical limit.
+
+ For electrons in white dwarf stars, we have $T_f \sim \SI{e7}{\kelvin}$! One reason this is big is that $m$ is the electron mass, which is very tiny. So it follows that $E_f$ is very large.
+\end{eg}
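For concreteness, we can plug numbers into the formula for $E_f$. A Python sketch, using SI values for the constants and a copper-like conduction-electron density of $\SI{8.5e28}{\per\metre\cubed}$ (an illustrative value, not from the notes):

```python
import math

hbar = 1.054571817e-34  # J s
m_e = 9.1093837015e-31  # kg
k_B = 1.380649e-23      # J/K

def fermi_temperature(n, g_s=2):
    # T_f = E_f / k with E_f = (hbar^2 / 2m) (6 pi^2 n / g_s)^{2/3}
    E_f = hbar**2 / (2 * m_e) * (6 * math.pi**2 * n / g_s)**(2 / 3)
    return E_f / k_B

print(fermi_temperature(8.5e28))  # of order 1e4 to 1e5 K
```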
+
+From the Fermi energy, we can define a momentum scale
+\[
+ \hbar k_f = (2m E_f)^{1/2}.
+\]
+While we will not be using this picture, one way to understand this is that in momentum space, at $T = 0$, all states with $|\mathbf{k}| \leq k_f$ are occupied, and these states are known as a \term{Fermi sea}. The boundary of this region is known as the \term{Fermi surface}.
+
+Let's continue working at zero temperature and work out the equations of state. We have
+\[
+ E = \int_0^{E_f} \d E\; E g(E) = \frac{3}{5}NE_f.
+\]
+This is the total energy at zero temperature. We saw earlier that
+\[
+ pV = \frac{2}{3} E = \frac{2}{5} N E_f.
+\]
+This shows the pressure is non-zero, even at zero temperature! This is called the \term{degeneracy pressure}. This is pressure coming from the Fermi statistics, which is completely unlike the bosons.
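The factors of $\frac{3}{5}$ and $\frac{2}{5}$ are quick to verify numerically, since the density-of-states prefactor cancels in the ratio. A Python sketch in units where $E_f = 1$:

```python
def quad(f, a, b, n=100000):
    # midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# g(E) = C sqrt(E); the constant C cancels between numerator and denominator
N_particles = quad(lambda E: E**0.5, 0, 1)
E_total = quad(lambda E: E * E**0.5, 0, 1)

print(E_total / N_particles)            # E / (N E_f) = 3/5
print((2 / 3) * E_total / N_particles)  # pV / (N E_f) = 2/5
```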
+
+Now let's see what happens when we are at low, but non-zero temperature, say $T \ll T_f$. While we call this low temperature, since we saw that $T_f$ can be pretty high, this ``low temperature'' might still be reasonably high by everyday standards.
+
+In this case, we have
+\[
+ \frac{\mu}{kT} \approx \frac{E_f}{kT} = \frac{T_f}{T} \gg 1.
+\]
+Therefore we find that $z \gg 1$.
+
+Our goal now is to compute the heat capacity. We begin by trying to plot the Fermi-Dirac distribution at different temperature scales based on heuristic arguments. At $T = 0$, it is simply given by
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$E$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$n(E)$};
+
+ \draw [thick, mblue](0, 2) -- (2, 2) -- (2, 0) node [below] {$E_f$} -- (4.9, 0);
+ \end{tikzpicture}
+\end{center}
+When we increase the temperature a bit, we would expect something that looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$E$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$n(E)$};
+
+ \draw [thick, mblue] (0, 2) -- (1.8, 2) .. controls (2, 2) and (2, 0) .. (2.2, 0) -- (4.9, 0);
+
+ \node [below] at (2, 0) {$E_f$};
+ \draw [decoration={markings, mark=at position 0 with {\arrowreversed{latex'}}, mark=at position 1 with {\arrow{latex'}}}, postaction={decorate}] (1.8, 2.2) -- (2.2, 2.2) node [pos=0.5, above] {$kT$};
+ \end{tikzpicture}
+\end{center}
+Since $kT$ is small compared to $E_f$, that transition part is very small. Only states within $kT$ of $E_f$ are affected by $T$.
+
+We can now give a heuristic argument for what the heat capacity should look like. The number of states in that range is $\sim g(E_f) kT$. Relative to where they are at $T = 0$, each of these contributes an excess energy of order $kT$. Therefore the total energy is
+\[
+ E \approx E|_{T = 0} + g(E_f) (kT)^2.
+\]
+So we expect the heat capacity to be
+\[
+ C_V \sim k^2 g(E_f) T \sim kN \frac{T}{T_f}.
+\]
+This rather crude argument suggests that this heat capacity is linear in temperature. We shall now show that this is indeed correct by direct calculation. It is the same kind of computation as we did for bosons, except it is slightly more complicated. We have
+\[
+ \frac{N}{V} = \frac{g_s}{4\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2} \int_0^\infty \frac{\d E\; E^{1/2}}{z^{-1} e^{\beta E} + 1}.
+\]
+Similarly, we find
+\[
+ \frac{E}{V} = \frac{g_s}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} \int_0^\infty \frac{\d E \; E^{3/2}}{z^{-1} e^{\beta E} + 1}.
+\]
+We again do a substitution $x = \beta E$, and then we find
+\begin{align*}
+ \frac{N}{V} &= \frac{g_s}{\lambda^3} f_{3/2}(z)\\
+ \frac{E}{V} &= \frac{3}{2} \frac{g_s}{\lambda^3}kT f_{5/2}(z),
+\end{align*}
+where we similarly have
+\[
+ f_n(z) = \frac{1}{\Gamma(n)} \int_0^\infty \d x\; \frac{x^{n - 1}}{z^{-1}e^x + 1}.
+\]
+Recall that we have
+\[
+ \Gamma\left(\frac{3}{2}\right) = \frac{\sqrt{\pi}}{2},\quad \Gamma\left(\frac{5}{2}\right) = \frac{3\sqrt{\pi}}{4}.
+\]
+We know that $z \gg 1$, and we want to expand $f_n$ for large $z$. This is called the \term{Sommerfeld expansion}. We have
+\[
+ \Gamma(n) f_n(z) = \int_0^{\beta\mu} \d x\; \frac{x^{n - 1}}{z^{-1}e^x + 1} + \underbrace{\int_{\beta\mu}^\infty \d x\; \frac{x^{n - 1}}{z^{-1}e^x + 1}}_{(2)}.
+\]
+We can rewrite the first term as
+\[
+ \int_0^{\beta \mu} \d x\; x^{n - 1}\left(1 - \frac{1}{1 + z e^{-x}}\right) = \frac{(\log z)^n}{n} - \underbrace{\int_0^{\beta\mu} \d x\; \frac{x^{n - 1}}{1 + z e^{-x}}}_{(1)},
+\]
+using the fact that $z = e^{\beta\mu}$.
+
+We now see that the two remaining integrals (1) and (2) look very similar. We are going to make a change of variables to make them look more-or-less the same. We let
+\[
+ \eta_1 = \beta \mu - x
+\]
+in (1), and
+\[
+ \eta_2 = x - \beta \mu
+\]
+in (2). Then we find that
+\[
+ \Gamma(n) f_n(z) = \frac{(\log z)^n}{n} - \int_0^{\beta\mu} \d \eta_1 \frac{(\beta \mu - \eta_1)^{n - 1}}{1 + e^{\eta_1}} + \int_0^\infty \d \eta_2 \frac{(\beta \mu + \eta_2)^{n - 1}}{e^{\eta_2} + 1}.
+\]
+Since $\beta\mu \gg 1$, we may approximate the first integral in this expression by $\int_0^\infty$. The error is $\sim e^{-\beta\mu} \sim \frac{1}{z} \ll 1$, so it is fine.
+
+Calling $\eta_1 = \eta_2 = \eta$, we get
+\[
+ \Gamma(n) f_n(z) = \frac{(\log z)^n}{n} + \int_0^\infty \d \eta \frac{(\beta \mu + \eta)^{n - 1} - (\beta \mu - \eta)^{n - 1}}{1 + e^\eta}.
+\]
+Let's now look at the numerator in this expression here. We have
+\begin{align*}
+ &\hphantom{={}}(\beta \mu + \eta)^{n - 1} - (\beta \mu - \eta)^{n - 1} \\
+ &= (\beta\mu)^{n - 1}\left[\left(1 + \frac{\eta}{\beta\mu}\right)^{n - 1} - \left(1 - \frac{\eta}{\beta\mu}\right)^{n -1 }\right]\\
+ &= (\beta \mu)^{n - 1}\left[\left(1 + \frac{(n - 1) \eta}{\beta\mu} + \cdots\right) - \left(1 - \frac{(n - 1)\eta}{\beta \mu} + \cdots\right)\right]\\
+ &= 2(n - 1) (\beta\mu)^{n - 2} \eta \left(1 + O\left(\frac{\eta}{\beta\mu}\right)\right).
+\end{align*}
+We just plug this into our integral, and we find
+\[
+ \Gamma(n) f_n(z) = \frac{(\log z)^n}{n} + 2(n - 1) (\log z)^{n - 2}\int_0^\infty \d \eta\; \frac{\eta}{e^{\eta} + 1} + \cdots.
+\]
+We have to be slightly careful. Our expansion was in $\frac{\eta}{\beta\mu}$, and while $\beta\mu$ is large, $\eta$ can get arbitrarily large in the integral. Fortunately, for large $\eta$, the integrand is exponentially suppressed.
+
+We can compute
+\begin{align*}
+ \int_0^\infty \d \eta \frac{\eta}{e^{\eta} + 1} &= \int_0^\infty \d \eta \frac{\eta e^{-\eta}}{1 + e^{-\eta}} \\
+ &= \int_0^\infty \d \eta\; \eta \sum_{m = 1}^\infty e^{-m\eta}(-1)^{m + 1}\\
+ &= \sum_{m = 1}^\infty \frac{(-1)^{m + 1}}{m^2}\\
+ &= \frac{\pi^2}{12}.
+\end{align*}
+Note that it was valid to swap the sum and integral because the sum is absolutely convergent.
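+
+As a sanity check, both the alternating sum and the original integral can be evaluated numerically (a quick sketch; the function names are just illustrative):

```python
import math

# Partial sum of the alternating series sum_{m >= 1} (-1)^(m+1) / m^2.
def eta_2(terms=10**6):
    return sum((-1) ** (m + 1) / m ** 2 for m in range(1, terms + 1))

# Midpoint-rule approximation to the integral of eta / (e^eta + 1)
# over [0, infinity); the tail beyond eta = 40 is ~ 40 e^{-40}.
def fermi_integral(h=1e-3, cutoff=40.0):
    n = int(cutoff / h)
    return h * sum((k + 0.5) * h / (math.exp((k + 0.5) * h) + 1)
                   for k in range(n))
```

+Both agree with $\pi^2/12 \approx 0.8225$ to the expected accuracy.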
+
+So we find that
+\[
+ f_n(z) = \frac{(\log z)^n}{\Gamma(n + 1)} \left(1 + \frac{\pi^2}{6}\frac{n(n - 1)}{(\log z)^2} + \cdots\right).
+\]
+Hence we find that
+\[
+ \frac{N}{V} = \frac{g_s}{6\pi^2 \hbar^3} (2m\mu)^{3/2} \left(1 + \frac{\pi^2}{8}\left(\frac{kT}{\mu}\right)^2 + \cdots\right).\tag{$*$}
+\]
+Recall that the reason we did this is that we didn't like $\mu$. We want to express everything in terms of $N$ instead. It is an exercise on the example sheet to invert this, and find
+\[
+ \mu = E_f \left(1 - \frac{\pi^2}{12} \left(\frac{T}{T_f}\right)^2 + \cdots\right).\tag{$\dagger$}
+\]
+We can plug this into the expression for $\frac{E}{V}$ and obtain
+\[
+ \frac{E}{V} = \frac{g_s}{10 \pi^2 \hbar^3} (2m)^{3/2} \mu^{5/2} \left(1 + \frac{5 \pi^2}{8} \left(\frac{kT}{\mu}\right)^2 + \cdots\right).
+\]
+Dividing by $(*)$ and using $(\dagger)$, we find
+\[
+ \frac{E}{N} = \frac{3 E_f}{5} \left(1 + \frac{5 \pi^2}{12}\left(\frac{T}{T_f}\right)^2 + \cdots\right).
+\]
+We can then differentiate this with respect to $T$, and find the heat capacity to be
+\[
+ C_V = \left(\frac{\partial E}{\partial T}\right)_{V, N} = \frac{\pi^2}{2} Nk\frac{T}{T_f} + \cdots,
+\]
+and this is valid for $T \ll T_f$.
+
+\subsubsection*{Heat capacity of metals}
+In a metal, we have a lattice of positive charges, and electrons that are free to move throughout the lattice. We can model them as a Fermi ideal gas. As we mentioned previously, we have
+\[
+ T_f \sim \SI{e4}{\kelvin}.
+\]
+So we usually have $T \ll T_f$. In particular, this means any classical treatment of the electrons would be totally rubbish! We have the lattice as well, and we have to think about the phonons. The Debye temperature is
+\[
+ T_D \sim \SI{e2}{\kelvin}.
+\]
+Now if
+\[
+ T_D \ll T \ll T_f,
+\]
+then, assuming an equal number of electrons and atoms, we have
+\[
+ C_V \approx \underbrace{\frac{\pi^2}{2} Nk \frac{T}{T_f}}_{\mathrm{electrons}} + 3Nk \approx 3Nk.
+\]
+This agrees with the Dulong--Petit result.
+
+But if $T \ll T_D, T_f$, then we need to use the low energy formula for the phonons, and we get
+\[
+ C_V \approx \frac{\pi^2}{2} Nk \frac{T}{T_f} + \frac{12 \pi^4}{5} Nk \left(\frac{T}{T_D}\right)^3.
+\]
+In this case, we cannot just throw the terms away. We can ask ourselves when the contributions of the two terms are comparable. This is not hard to figure out. We just have to equate them and solve for $T$, and we find that this happens at
+\[
+ T^2 = \frac{5 T_D^3}{24 \pi^2 T_f}.
+\]
+If we plug in actual values, we find that this is just a few Kelvins.
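+
+We can check this with the rough numbers quoted earlier, $T_D \sim \SI{e2}{\kelvin}$ and $T_f \sim \SI{e4}{\kelvin}$ (order-of-magnitude values only, so the answer is itself only indicative):

```python
import math

# Order-of-magnitude inputs from the text.
T_D = 1e2   # Debye temperature, K
T_f = 1e4   # Fermi temperature, K

# Temperature at which the electron and phonon heat capacities balance.
T_cross = math.sqrt(5 * T_D ** 3 / (24 * math.pi ** 2 * T_f))

# Verify it really equates the two contributions (per N k).
electron = (math.pi ** 2 / 2) * T_cross / T_f
phonon = (12 * math.pi ** 4 / 5) * (T_cross / T_D) ** 3
```

+This gives $T_{\mathrm{cross}} \approx \SI{1.5}{\kelvin}$, i.e.\ a few Kelvins, as claimed.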
+
+While this agrees with experiments very well, there are some flaws. Most notably, we neglected interactions with the lattice. We need to correct for the periodic potential, which is not too difficult. This is addressed in the AQM course.
+
+Even worse, we are ignoring the interaction between electrons! It is a puzzle why our model works despite ignoring these. This is explained by the Landau--Fermi liquid theory.
+
+\subsubsection*{White dwarf stars}
+Another important application of this is to astrophysics. A star is a big ball of hot gas that is not collapsing under gravity. Why does it not collapse under gravity?
+
+In \emph{our} sun, there are nuclear reactions happening in the core of the star, and these produce pressure that resists gravitational collapse. But this process eventually ends. As the star runs out of fuel, the star will shrink. As this happens, the density of the star increases, and thus the Fermi energy $E_f$ for the electrons increases. Eventually this becomes so big that it exceeds the energy required to ionize the atoms in the star. This happens at around $T_f \sim \SI{e7}{\kelvin}$. At this stage, then, the electrons are no longer bound to the atoms, and all electrons can move through the star. We can therefore attempt to model these electrons as a low temperature Fermi gas (because $T_f$ is now very high).
+
+One important feature of Fermi gases is that they have pressure even at zero temperature. In a white dwarf star, the degeneracy pressure of this gas supports the star against gravitational collapse, even if the star cools down to $T = 0$! A particularly important example of a star we think will end this way is our own sun.
+
+Let's try and understand some properties of these stars. We construct a very crude model of a white dwarf star by treating it as a ball of constant density. In addition to the kinds of energies we've talked about, there is also gravitational energy. This is given by
+\[
+ E_{\mathrm{grav}} = -\frac{3 GM^2}{5R}.
+\]
+This is just a statement about any star of constant density. To see this, we ask ourselves what energy it takes to destroy the star by moving shells of mass $\delta M$ off to infinity. The work done is just
+\[
+ \delta W = -\mathrm{PE} = \frac{GM}{R} \delta M.
+\]
+So we have
+\[
+ \frac{\d W}{\d M} = \frac{GM}{R(M)} = \frac{GM}{(M/(\frac{4}{3} \pi \rho))^{1/3}},
+\]
+where $\rho$ is the fixed density. Solving this, and using the expression of $R$ in terms of $M$ gives
+\[
+ -E_{\mathrm{grav}} = W = \frac{3 GM^2}{5R}.
+\]
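+As a numerical check on this (a sketch in arbitrary units, with $G = \rho = 1$):

```python
import math

# Integrate dW/dM = G M / R(M), with R(M) = (3 M / (4 pi rho))^(1/3),
# from 0 to M_tot using the midpoint rule, and compare with the
# claimed closed form 3 G M_tot^2 / (5 R(M_tot)).
G, rho = 1.0, 1.0
M_tot = 2.0

def R(M):
    return (3 * M / (4 * math.pi * rho)) ** (1 / 3)

n = 100_000
dM = M_tot / n
W = sum(G * ((k + 0.5) * dM) / R((k + 0.5) * dM) * dM for k in range(n))
expected = 3 * G * M_tot ** 2 / (5 * R(M_tot))
```

+The two agree to the accuracy of the quadrature.
+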
+Let's see what stuff we have got in our star. We have $N$ electrons, hence $N$ protons. This gives $\alpha N$ neutrons for some $\alpha \sim 1$. In the third example sheet, we argue that we can ignore the effect of the protons and neutrons, for reasons related to their mass.
+
+Thus, we are left with electrons, and the total energy is
+\[
+ E_{\mathrm{total}} = E_{\mathrm{grav}} + E_{\mathrm{kinetic}}.
+\]
+We can then ask ourselves what radius minimizes the energy. We find that
+\[
+ R \sim M^{-1/3} \sim N^{-1/3}.
+\]
+In other words, the more massive the star is, the smaller it is! This is pretty unusual, because we usually expect more massive things to be bigger.
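+
+To see where this scaling comes from, here is a sketch. Non-relativistically, $E_f \propto (N/V)^{2/3}$, so the kinetic energy scales as $E_{\mathrm{kinetic}} \sim N E_f \sim \hbar^2 N^{5/3}/(m R^2)$, while $E_{\mathrm{grav}} \sim -G m_p^2 N^2/R$. Writing
+\[
+ E_{\mathrm{total}}(R) = \frac{a N^{5/3}}{R^2} - \frac{b N^2}{R}
+\]
+for some positive constants $a$ and $b$, the minimum is at
+\[
+ \frac{\d E_{\mathrm{total}}}{\d R} = -\frac{2a N^{5/3}}{R^3} + \frac{b N^2}{R^2} = 0 \implies R = \frac{2a}{b} N^{-1/3}.
+\]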
+
+Now we can ask ourselves --- how massive can the star be? As the mass goes up, the radius decreases, and the density goes up. So $E_f$ goes up. For more and more massive stars, the Fermi energy is larger and larger. Eventually, this $E_f$ becomes comparable to the rest mass of the electron, and relativity becomes important.
+
+Consider $E_f \gg mc^2$. Then we have $E \gg mc^2$, and we can expand
+\[
+ g(E) \approx \frac{V}{\pi^2 \hbar^3 c^3}\left(E^2 - \frac{m^2 c^4}{2} + \cdots\right),
+\]
+where we plugged in $g_s = 2$. We can work out the number of electrons to be
+\[
+ N \approx \int_0^{E_f} \d E\; g(E) = \frac{V}{\pi^2 \hbar^3 c^3} \left(\frac{1}{3} E^3_f - \frac{m^2 c^4}{2} E_f\right).
+\]
+Here we are assuming that we are at ``low'' temperatures in the sense that $T \ll T_f$, which holds because the Fermi temperature is very high.
+
+Recall this is the equation that gives us what the Fermi energy is. We can invert this to get
+\[
+ E_f = \hbar c \left(\frac{3 \pi^2 N}{V}\right)^{1/3} + O\left(\left(\frac{N}{V}\right)^{-1/3}\right).
+\]
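+As a quick consistency check on this inversion (a sketch only), we can work in natural units $\hbar = c = 1$ with energies measured in units of $mc^2$, so that the relation reads $\pi^2 N/V = \frac{1}{3}E_f^3 - \frac{1}{2}E_f$:

```python
# Natural units: hbar = c = 1, energies in units of m c^2, so
# n := pi^2 N / V = E_f^3 / 3 - E_f / 2.  Invert by bisection and
# compare with the leading-order result E_f ~ (3 n)^(1/3).
def n_of_E(E):
    return E ** 3 / 3 - E / 2

E_true = 10.0                    # pick an ultra-relativistic E_f >> 1
target = n_of_E(E_true)

lo, hi = 1.0, 100.0              # n_of_E is increasing on this range
for _ in range(200):
    mid = (lo + hi) / 2
    if n_of_E(mid) < target:
        lo = mid
    else:
        hi = mid
E_bisect = (lo + hi) / 2
E_leading = (3 * target) ** (1 / 3)
```

+The leading-order formula recovers $E_f$ to about half a percent here, with the error shrinking as $E_f$ grows, consistent with the correction being subleading.
+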
+Similarly, we find
+\begin{align*}
+ E_{\mathrm{kinetic}} &\approx \int_0^{E_f}\d E\; E g(E) \\
+ &= \frac{V}{\pi^2 \hbar^3 c^3} \left(\frac{1}{4} E_f^4 - \frac{m^2 c^4}{4} E_f^2 + \cdots\right) \\
+ &= \frac{\hbar c}{4\pi^2}\frac{(3\pi^2 N)^{4/3}}{V^{1/3}} + O\left( V \left(\frac{N}{V}\right)^{2/3}\right).
+\end{align*}
+We now note that $V = \frac{4}{3} \pi R^3$, and $m \ll m_p \approx m_n$, where $m, m_p, m_n$ are the masses of an electron, proton and neutron respectively. Then we find that
+\[
+ E_{\mathrm{total}} = \left(\frac{3\hbar c}{4} \left(\frac{9 \pi M^4}{4(\alpha + 1)^4 m_p^4}\right)^{1/3} - \frac{3GM^2}{5}\right) \frac{1}{R} + O(R).
+\]
+This is the total energy once the electrons are very energetic. We can plot what this looks like. If the coefficient of $\frac{1}{R}$ is positive, then we get something that looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$R$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$E$};
+
+ \draw [thick, mblue] (0.2, 2) .. controls (1, 0) .. (4, 0.8); % improve
+ \end{tikzpicture}
+\end{center}
+There is a stable minimum in there, and this looks good. However, if the coefficient is negative, then we run into trouble. It looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$R$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$E$};
+
+ \draw [thick, mblue] (0.2, -2) .. controls (1, 0.5) .. (4, 0.8);
+ \end{tikzpicture}
+\end{center}
+This is bad! This collapses to small $R$. In other words, the degeneracy pressure of the electrons is not enough to withstand the gravitational collapse.
+
+For stability, we need the coefficient of $\frac{1}{R}$ to be positive. This gives us a maximum possible mass
+\[
+ M_c \sim \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{m_p^2}.
+\]
+This is called the \term{Chandrasekhar limit}.
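+
+Plugging in numbers shows that, despite the suppressed $O(1)$ factors, this estimate already lands at the right scale (standard constant values; the comparison is order-of-magnitude only):

```python
# Order-of-magnitude Chandrasekhar mass, M_c ~ (hbar c / G)^(3/2) / m_p^2.
hbar = 1.055e-34    # J s
c = 2.998e8         # m / s
G = 6.674e-11       # m^3 kg^-1 s^-2
m_p = 1.673e-27     # kg
M_sun = 1.989e30    # kg

M_c = (hbar * c / G) ** 1.5 / m_p ** 2
ratio = M_c / M_sun   # comes out of order 1, i.e. of order M_sun
```

+So the estimate is of order a solar mass, consistent with the more careful value quoted below.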
+
+Should we believe this? The above calculation neglects the pressure. We assumed the density is constant, but if that were truly the case, the pressure would also be constant, which is nonsense --- a constant pressure exerts no net force!
+
+A more careful treatment shows that
+\[
+ M_c \approx 1.4 M_{\mathrm{sun}}.
+\]
+What if we are bigger than this? In the case of a very very very dense star, we get a neutron star, where it is the neutrons that support the star. In this case, we have protons combining with electrons to get neutrons. Analyzing this would involve actually doing some general relativity, which we will not do here.
+
+This neutron star also has a maximum mass, which is now
+\[
+ M_{\mathrm{max}} \approx 25 M_{\mathrm{sun}}.
+\]
+If we are bigger than this, then we have to either explode and lose a lot of mass, or become a black hole.
+
+\subsection{Pauli paramagnetism}
+What happens when we put a gas of electrons (e.g.\ electrons in a metal) in a magnetic field? There are two effects --- the Lorentz force, which is $\sim \mathbf{v} \times \mathbf{B}$, and the spin of the electron coupling to $\mathbf{B}$. We will examine the second effect first.
+
+For simplicity, we assume there is a constant $\mathbf{B}$ field in the $z$-direction, and the electron can either be aligned or anti-aligned. The energy contribution from the spin is
+\[
+ E_{\mathrm{spin}} = \mu_B B s,
+\]
+where
+\[
+ s =
+ \begin{cases}
+ 1 & \text{spin up}\\
+ -1 & \text{spin down}
+ \end{cases}
+\]
+Here
+\[
+ \mu_B = \frac{|e| \hbar}{2mc}.
+\]
+This is the \term{Bohr magneton}. We see that being aligned gives negative energy, because the electron has negative charge.
+
+Due to the asymmetry, the $\uparrow$ and $\downarrow$ states have different occupation numbers. They are no longer degenerate. Again, we have
+\[
+ \frac{N_{\uparrow}}{V} = \frac{1}{4\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} \int_0^\infty \d E \frac{E^{1/2}}{e^{\beta(E + \mu_B B - \mu)} + 1} = \frac{1}{\lambda^3} f_{3/2}(z e^{-\beta \mu_B B}).
+\]
+Similarly, for the down spin, we have
+\[
+ \frac{N_\downarrow}{V} = \frac{1}{\lambda^3} f_{3/2} (z e^{+\beta \mu_B B}).
+\]
+Compared to what we have been doing before, we have an extra external parameter $B$, which the experimenter can vary. Now in the microcanonical ensemble, we can write
+\[
+ E = E(S, V, N, B).
+\]
+As before, we can introduce a new quantity
+\begin{defi}[Magnetization]\index{magnetization}
+ The \emph{magnetization} is
+ \[
+ M = -\left(\frac{\partial E}{\partial B}\right)_{S, V, N}.
+ \]
+\end{defi}
+Using this definition, we can write out a more general form of the first law:
+\[
+ \d E = T \;\d S - p \;\d V + \mu \;\d N - M \;\d B.
+\]
+In the canonical ensemble, we worked with the free energy
+\[
+ F = E - TS.
+\]
+Then we have
+\[
+ \d F = - S \;\d T - p \;\d V + \mu \;\d N - M \;\d B.
+\]
+Thus, we find that
+\[
+ M = -\left(\frac{\partial F}{\partial B}\right)_{T, V, N}.
+\]
+The grand canonical potential is
+\[
+ \Phi = E - TS - \mu N.
+\]
+So we find
+\[
+ \d\Phi = - S \;\d T - p \;\d V - N \;\d \mu - M \;\d B.
+\]
+So we see that in the grand canonical ensemble, we can write
+\[
+ M = -\left(\frac{\partial \Phi}{\partial B}\right)_{T, V, \mu}.
+\]
+This is what we want, because we are working in the grand canonical ensemble.
+
+How do we compute the magnetization? Recall that we had an expression for $\Phi$ in terms of $\mathcal{Z}$. This gives us
+\[
+ M = kT \left(\frac{\partial \log \mathcal{Z}}{\partial B}\right)_{T, V, \mu}.
+\]
+So we just have to compute the partition function. We have
+\[
+ \log \mathcal{Z} = \int_0^\infty \d E\; g(E) \left[\log \left(1 + ze^{-\beta(E + \mu_B B)}\right) + \log \left(1 + z e^{-\beta (E - \mu_B B)}\right)\right].
+\]
+To compute the magnetization, we simply have to differentiate this with respect to $B$, and we find
+\[
+ M = - \mu_B (N_{\uparrow} - N_{\downarrow}) = \frac{\mu_B V}{\lambda^3} (f_{3/2}(z e^{\beta \mu_B B}) - f_{3/2}(z e^{-\beta\mu_B B})).
+\]
+So we see that the magnetization counts how many spins are pointing up or down.
+
+As before, this function $f_{3/2}$ is not very friendly. So let's try to approximate it in different limits.
+
+First suppose $ze^{\pm \beta \mu_BB} \ll 1$. For a small argument, this is essentially what we had in the high temperature limit, and if we redo that calculation, we find
+\[
+ f_{3/2}(z e^{\pm \beta\mu_BB}) \approx z e^{\pm \beta\mu_BB}.
+\]
+Plugging this back in, we find
+\[
+ M \approx \frac{2 \mu_B V}{\lambda^3} z \sinh (\beta \mu_B B).
+\]
+This expression still involves $z$, and $z$ is something we don't want to work with. We want to get rid of it and work with particle number instead. We have
+\[
+ N = N_\uparrow + N_\downarrow \approx \frac{2 V z}{\lambda^3} \cosh (\beta \mu_BB).
+\]
+Now we can ask what our assumption $z e^{\pm \beta \mu_BB} \ll 1$ means. It implies $z \cosh(\beta \mu_B B)$ is small, and so this means
+\[
+ \frac{\lambda^3 N}{V} \ll 1.
+\]
+It also requires high temperature, or weak field, or else the quantity will be big for one choice of the sign. In this case, we find
+\[
+ M \approx \mu_B N \tanh (\beta \mu_B B).
+\]
+There is one useful quantity we can define from $M$, namely the magnetic susceptibility.
+\begin{defi}[Magnetic susceptibility]\index{magnetic susceptibility}
+ The \emph{magnetic susceptibility} is
+ \[
+ \chi = \left(\frac{\partial M}{\partial B}\right)_{T, V, N}.
+ \]
+\end{defi}
+This tells us how easy it is to magnetize a substance.
+
+Let's evaluate this in the limit $B = 0$. It is just
+\[
+ \chi|_{B = 0} = \frac{N \mu_B^2}{kT}.
+\]
+The fact that $\chi$ is proportional to $\frac{1}{T}$ is called \term{Curie's law}.
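+
+We can check this numerically by differentiating $M \approx \mu_B N \tanh(\beta \mu_B B)$ at $B = 0$ (a sketch with arbitrary but representative values):

```python
import math

mu_B = 9.274e-24   # Bohr magneton, J / T
k = 1.381e-23      # Boltzmann constant, J / K
N = 1e23           # particle number
T = 300.0          # temperature, K

def M(B):
    # High-temperature magnetization from the text.
    return mu_B * N * math.tanh(mu_B * B / (k * T))

# Central finite difference for chi = dM/dB at B = 0.
h = 1e-4
chi_numeric = (M(h) - M(-h)) / (2 * h)
chi_curie = N * mu_B ** 2 / (k * T)
```

+The numerical derivative reproduces $N \mu_B^2 / kT$ essentially exactly, since $\tanh$ is linear for such small arguments.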
+
+Let's now look at the opposite limit. Suppose $z e^{\pm \beta \mu_B B} \gg 1$. Recall that for large $z$, we had the result
+\[
+ f_n(z) \approx \frac{(\log z)^n}{\Gamma(n + 1)}.
+\]
+Then this gives
+\[
+ f_{3/2} (ze^{\pm \beta \mu_BB}) \approx \frac{[\beta(\mu \pm \mu_BB)]^{3/2}}{\Gamma(5/2)} = \frac{(\beta \mu)^{3/2}}{\Gamma(5/2)} \left( 1 \pm \frac{3 \mu_BB}{2\mu} + \cdots\right).
+\]
+To get $N$, we need to sum these things up, and we find
+\[
+ N = N_{\uparrow} + N_{\downarrow} \approx \frac{2V}{\lambda^3} \frac{(\beta \mu)^{3/2}}{\Gamma(5/2)}.
+\]
+So we find that $\mu \approx E_f$. Taking the $\log$ of our assumption $ze^{\pm \beta\mu_BB} \gg 1$, we know
+\[
+ \beta(\mu \pm \mu_BB) \gg 1,
+\]
+which is equivalent to
+\[
+ T_f \pm \frac{\mu_BB}{k} \gg T.
+\]
+Since this has to be true for both choices of sign, we must have
+\[
+ T_f - \frac{\mu_B|B|}{k} \gg T.
+\]
+In particular, we need $T \ll T_f$. So this is indeed a low temperature expansion. We also need
+\[
+ \mu_B|B| < k T_f = E_f.
+\]
+So the magnetic field cannot be too strong. It is usually the case that $\mu_B|B| \ll E_f$.
+
+In this case, we find that
+\begin{align*}
+ M &\approx \frac{\mu_B V}{\lambda^3} \frac{(\beta \mu)^{3/2}}{\Gamma(5/2)} \frac{3\mu_BB}{\mu} \\
+ &\approx \frac{\mu_B^2 V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2} E_f^{1/2} B\\
+ &= \mu_B^2 g(E_f) B.
+\end{align*}
+This is the magnetization, and from this we get the susceptibility
+\[
+ \chi \approx \mu_B^2 g(E_f).
+\]
+This is the low temperature result. We see that $\chi$ approaches a constant, and doesn't obey Curie's law.
+
+There is a fairly nice heuristic explanation for why this happens. Suppose we start from a zero $B$ field, and we switch it on. Now the spin down electrons have lower energy than the spin up electrons. So the spin up electrons will want to become spin down, but they can't all do that, because of Fermi statistics. It is only those at the Fermi surface who can do so. Hence the magnetic susceptibility is proportional to how many things there are on the Fermi surface.
+
+Does this match what we see? The first thing we see is that $\chi$ is always non-negative in our expression. Substances with $\chi > 0$ are called \term{paramagnetic}. These are not permanently magnetic, but whenever we turn on a magnetic field, they become magnetized. They are weakly attracted by a magnetic field. Examples of such substances include aluminium.
+
+We can also have paramagnetism coming from other sources, e.g.\ from the metal ions. Most paramagnetic substances actually have complicated contributions from all sorts of things.
+
+\subsubsection*{Landau diamagnetism*}
+There is another effect we haven't discussed yet. The magnetic field induces a Lorentz force, and this tends to make electrons go around in circles. We will not go into full details, as this is non-examinable. The Hamiltonian is
+\[
+ H = \frac{1}{2m} \left(\mathbf{p} + \frac{e}{c} \mathbf{A}(\mathbf{x})\right)^2,
+\]
+where $\mathbf{p}$ is the momentum, $-e$ is the charge, and $\mathbf{A}$ is the magnetic potential,
+\[
+ \mathbf{B} = \nabla \times \mathbf{A}.
+\]
+To understand the effect of the $B$-field, we can look at what happens classically. We look at the $1$-particle partition function
+\[
+ Z_1 = \frac{1}{(2\pi \hbar)^{3}} \int \d^3 x\; \d^3 p\; e^{-\beta H(x, p)}.
+\]
+By making a change of variables in this integral,
+\[
+ \mathbf{p}' = \mathbf{p} + \frac{e}{c} \mathbf{A}(\mathbf{x}),
+\]
+then we can get rid of $\mathbf{A}$, and the partition function $Z_1$ is independent of $A$! Classically, there is no magnetization.
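+
+Explicitly, the shift $\mathbf{p} \mapsto \mathbf{p}' = \mathbf{p} + \frac{e}{c}\mathbf{A}(\mathbf{x})$ at fixed $\mathbf{x}$ has unit Jacobian, so
+\[
+ Z_1 = \frac{1}{(2\pi \hbar)^3} \int \d^3 x\; \d^3 p'\; e^{-\beta p'^2/2m},
+\]
+which makes no reference to $\mathbf{A}$ at all. This absence of classical magnetization is the Bohr--van Leeuwen theorem.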
+
+However, when we go to a quantum treatment, this is no longer true. This leads to Landau levels, which is discussed more thoroughly in IID AQM. When we take these into account, and see what happens at $T \ll T_f$, then we find that
+\[
+ M \approx - \frac{\mu_B^2}{3} g(E_f) B.
+\]
+This gives a \emph{negative} susceptibility, an effect opposite to that coming from the spins. But since we have the factor of $-\frac{1}{3}$, the net effect is still positive, and we still have $M_{\mathrm{total}}, \chi_{\mathrm{total}} > 0$.
+
+However, there do exist substances that have negative $\chi$. They are repelled by magnetic fields. These are called \term{diamagnetic}. Bismuth is an example of a substance with such effects.
+
+\section{Classical thermodynamics}
+In this chapter, we are going to forget about statistical physics, and return to the early 19th century, when we didn't know that things are made up of atoms. Instead, people just studied macroscopic objects and looked at macroscopic properties only.
+
+Why are we doing this? Now that we have statistical physics, we can start from microscopic properties and then build up to the macroscopic properties. The problem is that we can't always do so. So far, we have been working with rather simple systems, and managed to derive their macroscopic properties from the microscopic ones. However, in real life, a lot of systems have very complex interactions, and we cannot expect to work with them sensibly. However, if we develop the theory only on the macroscopic scale, then we can be sure that it applies to all systems whatsoever.
+
+\subsection{Zeroth and first law}
+We begin by defining some words.
+
+\begin{defi}[Wall]\index{wall}
+ A \emph{wall} is a rigid boundary that matter cannot cross.
+\end{defi}
+
+\begin{defi}[Adiabatic wall]\index{adiabatic wall}\index{wall!adiabatic}
+ \emph{Adiabatic walls} isolate the system completely from external influences, i.e.\ the system is \emph{insulated}\index{insulated system}.
+\end{defi}
+\begin{defi}[Diathermal wall]\index{diathermal wall}\index{wall!diathermal}
+ A non-adiabatic wall is called \emph{diathermal}. Systems separated by a diathermal wall are said to be in \term{thermal contact}.
+\end{defi}
+
+\begin{defi}[Equilibrium]\index{equilibrium}
+ An isolated system with a time-independent state is said to be in \emph{equilibrium}.
+
+ Two systems are said to be in equilibrium if when they are put in thermal contact, then the whole system is in equilibrium.
+\end{defi}
+
+We will assume that a system in equilibrium can be completely specified by a few macroscopic variables. For our purposes, we assume our system is a gas, and we will take these variables to be pressure $p$ and volume $V$. Historically, these systems are of great interest, because people were trying to build good steam engines.
+
+There will be 4 laws of thermodynamics, which, historically, were discovered experimentally. Since we are actually mathematicians, not physicists, we will not perform, or even talk about such experiments. Instead, we will take these laws as ``axioms'' of the subject, and try to derive consequences of them. Also, back in the days, physicists were secretly programmers. So we start counting at $0$.
+
+\begin{law}[Zeroth law of thermodynamics]\index{zeroth law of thermodynamics}\index{law of thermodynamics!zeroth}
+ If systems $A$ and $B$ are individually in equilibrium with $C$, then $A$ and $B$ are in equilibrium.
+\end{law}
+In other words, ``equilibrium'' is an equivalence relation (since reflexivity and symmetry are immediate).
+
+In rather concise terms, this allows us to define temperature.
+\begin{defi}[Temperature]\index{temperature}
+ \emph{Temperature} is an equivalence class of systems with respect to the ``equilibrium'' relation.
+\end{defi}
+More explicitly, the \emph{temperature} of a system is a quantity (usually a number) assigned to each system, such that two systems have the same temperature iff they are in equilibrium. If we assume any system is uniquely specified by the pressure and volume, then we can write the temperature of a system as $T(p, V)$.
+
+We have a rather large freedom in defining what the temperature of a system is, as a number. If $T$ is a valid temperature-assigning function, then so is $f(T(p, V))$ for any injective function $f$ whatsoever. We can impose some further constraints, e.g.\ require that $T$ is a smooth function in $p$ and $V$, but we are still free to pick almost any function we like.
+
+We will later see there is a rather natural temperature scale to adopt, which is well defined up to a constant, i.e.\ a choice of units. But for now, we can just work with an abstract ``temperature'' function $T$.
+
+%If we have three arbitrary systems $A, B, C$, say specified by $(p_1, V_1)$, $(p_2, V_2)$ and $(p_3, V_3)$ are in general not in equilibrium. There is some constraint, say
+%\[
+% F_{AC} (p_1, V_1; p_3, V_3) = 0
+%\]
+%and
+%\[
+% F_{BC}(p_2, V_2; p_3, V_3) = 0.
+%\]
+%Say we can solve these to get
+%\[
+% V_3 = f_{AC}(p_1, V_1; p_3)
+%\]
+%and
+%\[
+% V_3 = f_{BC}(p_2, V_2; p_3).
+%\]
+%Since $V_3$ is the same thing in both expressions, we need
+%\[
+% f_{AC}(p_1, V_1; p_3) = f_{BC}(p_2, v_2; p_3).\tag{$*$}
+%\]
+%Then the 0th law says whenever this holds, then $A$ and $B$ are in equilibrium. This implies
+%\[
+% F_{AB}(p_1, V_1; p_2; V_2) = 0.
+%\]
+%This has to be true for any $p_3$. But $(*)$ has $p_3$ involved, while our conclusion does not. So we would expect $(*)$ to cancel out, and so $(*)$ becomes
+%\[
+% \mathcal{B}_A(p_1, V_1) = \mathcal{B}_B(p_2, V_2).
+%\]
+%This $\mathcal{B}$ is defined to be the \term{temperature}.
+%
+%This $\mathcal{B}(p, V)$ is not unique. Given an injective function $f$, we obtain a new one by $f(\mathcal{B}(p, V))$.
+%
+%But we can just construct our favorite thermometer, and use that as a reference scale, and this discussion fixes $T$ for all other systems.
+
+We can now move on in ascending order, and discuss the first law.
+
+\begin{law}[First law of thermodynamics]\index{first law of thermodynamics}\index{law of thermodynamics!first}
+ The amount of work required to change an isolated system from one state to another is independent of how the work is done, and depends only on the initial and final states.
+\end{law}
+
+From this, we deduce that there is some function of state $E(p, V)$ such that the work done is just the change in $E$,\index{$E$}\index{$W$}
+\[
+ W = \Delta E.
+\]
+For example, we can pick some reference system $(p_0, V_0)$, and define $E(p, V)$ to be the work required to get us from $(p_0, V_0)$ to $(p, V)$.
+
+What if the system is not isolated? Then in general $\Delta E \not= W$. We account for the difference by introducing a new quantity \term{$Q$}, defined by
+\[
+ \Delta E = Q + W.
+\]
+It is important to keep in mind which directions these quantities refer to. $Q$ is the heat \emph{supplied} to the system, and $W$ is the work \emph{done} on the system. Sometimes, this relation $\Delta E = Q + W$ is called the first law instead, but here we take it as the definition of $Q$.
+
+It is important to note that $E$ is a \emph{function of state} --- it depends only on $p$ and $V$. However, $Q$ and $W$ are not. They are descriptions of \emph{how} a state changes to another. If we are just given some fixed state, it doesn't make sense to say there is some amount of heat and some amount of work in the state.
+
+For an infinitesimal change, we can write
+\[
+ \d E = \di Q + \di W.
+\]
+Here we write the $\di$ with a slash to emphasize it is not an exact differential in any sense, because $Q$ and $W$ aren't ``genuine variables''.
+
+Most of the time, we are interested in studying how objects change. We will assign different labels to different possible changes.
+\begin{defi}[Quasi-static change]\index{quasi-static change}
+ A change is \emph{quasi-static} if it is done so slowly that the system remains in equilibrium throughout the change.
+\end{defi}
+
+\begin{defi}[Reversible change]\index{reversible change}
+ A change is \emph{reversible} if the time-reversal process is possible.
+\end{defi}
+For example, consider a box of gas with a frictionless piston. We can very very slowly compress or expand the gas by moving the piston.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.2] (0, 0) rectangle (2.5, -2);
+
+ \draw (0, 0) -- (4, 0);
+ \draw (0, 0) -- (0, -2) -- (4, -2);
+ \draw [thick, mred] (2.5, 0) -- (2.5, -2);
+
+ \draw [dashed] (2, 0) -- (2, -2);
+
+ \draw [fill, mred] (2.5, -0.9) rectangle (4, -1.1);
+ \end{tikzpicture}
+\end{center}
+If we take pressure to mean the force per unit area on the piston, then the work done is given by
+\[
+ \di W = - p \;\d V.
+\]
+Now consider the following two reversible paths in $pV$ space:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$V$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$p$};
+
+ \node [circ] at (1, 1) {};
+ \node [below] at (1, 1) {$A$};
+
+ \node [circ] at (2.5, 3) {};
+ \node [right] at (2.5, 3) {$B$};
+
+ \draw (1, 1) edge [bend left, ->-=0.55] (2.5, 3);
+ \draw (1, 1) edge [bend right, ->-=0.55] (2.5, 3);
+ \end{tikzpicture}
+\end{center}
+The change in energy $E$ is independent of the path, as it is a function of state. On the other hand, the work done on the gas does depend on the path --- it is
+\[
+ \int \di W = - \int p\; \d V.
+\]
+This is path-dependent.
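+
+As a concrete illustration (a toy example with made-up numbers), take two rectangular paths from $A$ to $B$: one that first raises the pressure at constant volume and then expands, and one that first expands and then raises the pressure. No work is done on the constant-volume legs, so the two paths give different amounts of work:

```python
# States A = (V1, p1) and B = (V2, p2) in the p-V plane.
V1, V2 = 1.0, 3.0
p1, p2 = 1.0, 2.0

# Work done ON the gas is W = -(integral of p dV); only the
# constant-pressure legs contribute.
W_path1 = -p2 * (V2 - V1)   # pressurize first, then expand at p2
W_path2 = -p1 * (V2 - V1)   # expand at p1, then pressurize
```

+Here $W$ differs by a factor of two between the two paths, even though the endpoints are the same.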
+
+Now if we go around a cycle instead:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$V$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$p$};
+
+ \node [circ] at (1, 1) {};
+ \node [below] at (1, 1) {$A$};
+
+ \node [circ] at (2.5, 3) {};
+ \node [right] at (2.5, 3) {$B$};
+
+ \draw (1, 1) edge [bend left, ->-=0.55] (2.5, 3);
+ \draw (1, 1) edge [bend right, -<-=0.45] (2.5, 3);
+ \end{tikzpicture}
+\end{center}
+then we have $\Delta E = 0$. So we must have
+\[
+ \oint \di Q = \oint p \;\d V.
+\]
+In other words, the heat supplied to the gas is equal to the work done by the gas. Thus, if we repeat this process many times, then we can convert heat to work, or vice versa. However, there is some restriction on how much of this we can perform, and this is given by the second law.
+
+\subsection{The second law}
+We want to talk about the second law, but we don't know about entropy yet. The ``original'' versions of the second law were much more primitive and direct. There are two versions of the second law, which we will show are equivalent.
+
+\begin{law}[Kelvin's second law]\index{second law of thermodynamics!Kelvin's}\index{Kelvin's second law of thermodynamics}\index{law of thermodynamics!second (Kelvin's)}
+ There exists no process whose sole effect is to extract heat from a heat reservoir and convert it into work.
+\end{law}
+
+\begin{law}[Clausius's second law]\index{second law of thermodynamics!Clausius's}\index{Clausius's second law of thermodynamics}\index{law of thermodynamics!second (Clausius's)}
+ There is no process whose sole effect is to transfer heat from a colder body to a hotter body.
+\end{law}
+Both of these statements are motivated by experiments, but from the point of view of our course, we are just treating them as axioms.
+
+Note that the words ``sole effect'' in the statement of the second laws are important. For example, a fridge transfers heat from a colder body to a hotter body. But of course, a fridge needs work input --- we need to get electricity from somewhere.
+
+Since they are given the same name, the two statements are actually equivalent! This relies on some rather mild assumptions, namely that fridges and heat engines exist. As mentioned, fridges take in work, and transfer heat from a colder body to a hotter body. Heat engines transfer heat from a hotter body to a cooler body, and do work in the process. We will later construct some explicit examples of such machines, but for now we will just take it that they exist.
+
+\begin{prop}
+ Clausius's second law implies Kelvin's second law.
+\end{prop}
+
+\begin{proof}
+ Suppose there were some process that violated Kelvin's second law. Let's use it to run a fridge:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (4, 0) node [pos=0.5, above] {hot reservoir};
+ \draw (0, -3) -- (4, -3) node [pos=0.5, below] {cold reservoir};
+
+ \draw [fill=yellow, fill opacity=0.2] (0.5, -1) rectangle (1.5, -2);
+ \node [align=center]at (1, -1.5) {\small not\\\small Kelvin};
+
+ \draw [fill=yellow, fill opacity=0.2] (3, -1) rectangle (4, -2);
+ \node at (3.5, -1.5) {fridge};
+
+ \draw [->-=0.65] (1, 0) -- (1, -1) node [pos=0.5, right] {\small $Q_H$};
+
+ \draw [->-=0.6] (1.5, -1.5) -- (3, -1.5) node [pos=0.5, above] {\small $W\! =\! Q_H$};
+
+ \draw [->-=0.65] (3.5, -3) -- (3.5, -2) node [pos=0.5, right] {\small $Q_C$};
+ \draw [->-=0.65] (3.5, -1) -- (3.5, 0) node [pos=0.5, right] {\small $Q_C\! +\! Q_H$};
+ \end{tikzpicture}
+ \end{center}
+ and this violates Clausius's law.
+\end{proof}
+
+Similarly, we can prove the other direction.
+\begin{prop}
+ Kelvin's second law implies Clausius's second law.
+\end{prop}
+
+\begin{proof}
+ If we have a not Clausius machine, then we can do
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (4, 0) node [pos=0.5, above] {hot reservoir};
+ \draw (0, -3) -- (4, -3) node [pos=0.5, below] {cold reservoir};
+
+ \draw [fill=yellow, fill opacity=0.2] (0.5, -1) rectangle (1.5, -2);
+ \node [align=center] at (1, -1.5) {\small heat\\\small engine};
+
+ \draw [fill=yellow, fill opacity=0.2] (3, -1) rectangle (4, -2);
+ \node [align=center] at (3.5, -1.5) {\small not\\\scriptsize Clausius};
+
+ \draw [->-=0.65] (1, 0) -- (1, -1) node [pos=0.5, right] {\small $Q_H$};
+ \draw [->-=0.65] (1, -2) -- (1, -3) node [pos=0.5, right] {\small $Q_H\! -\! W$};
+
+ \draw [->-=0.65] (0.5, -1.5) -- (-0.5, -1.5) node [pos=0.5, above] {\small $W$};
+ \draw [->-=0.65] (3.5, -3) -- (3.5, -2) node [pos=0.5, right] {\small $Q_H\! -\! W$};
+ \draw [->-=0.65] (3.5, -1) -- (3.5, 0) node [pos=0.5, right] {\small $Q_H\! -\! W$};
+ \end{tikzpicture}
+ \end{center}
+ The net effect of this is to take heat $W$ from the hot reservoir and do work $W$. This violates Kelvin's law.
+\end{proof}
+
+\subsection{Carnot cycles}
+We now construct our ``canonical'' example of a heat engine, known as the \term{Carnot cycle}. The important feature of this heat engine is that it is reversible, so we can run it backwards and turn it into a fridge.
+
+The system consists of a box of gas that can be expanded or compressed. The Carnot cycle will involve compressing and expanding this box of gas in different environments in a clever way in order to extract work out of it. Before we explain how it works, we can plot the ``trajectory'' of the system on the $p\mdash V$ plane as follows:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$V$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$p$};
+
+ \draw [domain=0.66:2.5, thick, mblue] plot (\x, {2/\x});
+ \draw [domain=4:5, thick, mblue] plot (\x, {2/\x});
+ \draw [domain=1.66:2, thick, mblue] plot (\x, {5/\x});
+ \draw [domain=3.5:5, thick, mblue] plot (\x, {5/\x});
+
+ \draw [domain=2.5:4, thick, mred, -<-=0.35] plot (\x, {2/\x});
+ \draw [domain=2:3.5, thick, mred, ->-=0.68] plot (\x, {5/\x});
+ \draw [mred, thick] (3.5, 1.4286) edge [bend right=10, ->-=0.7] (4, 0.5);
+ \draw [mred, thick] (2, 2.5) edge [bend right=10, -<-=0.4] (2.5, 0.8);
+
+ \node [circ] at (4, 0.5) {};
+ \node [circ] at (3.5, 1.4286) {};
+ \node [circ] at (2.5, 0.8) {};
+ \node [circ] at (2, 2.5) {};
+
+ \node [below] at (4, 0.5) {$C$};
+ \node [above] at (3.5, 1.4286) {$B$};
+ \node [below] at (2.5, 0.8) {$D$};
+ \node [anchor=south west] at (2, 2.5) {$A$};
+
+ \node [right] at (5, 1) {$T = T_H$};
+ \node [right] at (5, 0.4) {$T = T_C$};
+ \end{tikzpicture}
+\end{center}
+\begin{itemize}
+ \item We start at point $A$, and perform \term{isothermal expansion}\index{expansion!isothermal} to get to $B$. In other words, we slowly expand the gas while in thermal contact with the heat reservoir at the hot temperature $T_H$. As it expands, the gas is doing work, which we can use to run a TV. Since it remains at constant temperature, it absorbs some heat $Q_H$.
+ \item When we reach $B$, we perform \term{adiabatic expansion}\index{expansion!adiabatic} to get to $C$. To do this, we isolate the system and allow it to expand slowly. This does work, and no heat is absorbed.
+ \item At $C$, we perform \term{isothermal compression}\index{compression!isothermal} to get to $D$. We put the system in thermal contact with the cold reservoir at the cold temperature $T_C$. Then we compress the gas slowly. This time, we are doing work on the gas, and as we compress it, it gives out heat $Q_C$ to the cold reservoir.
+ \item From $D$ to $A$, we perform \term{adiabatic compression}\index{compression!adiabatic}. We compress the gas slowly in an isolated way. We are doing work on the system. Since it is adiabatic, no heat is given out.
+\end{itemize}
+Note that we are not just running in a cycle here. We are also transferring heat from the hot reservoir to the cold reservoir.
+
+Since these steps together form a cycle, we saw that we can write
+\[
+ - \oint \di W = \oint \di Q.
+\]
+The total amount of work done is
+\[
+ W = Q_H - Q_C.
+\]
+If we are building a steam engine, we want to find the most efficient way of doing this. Given a fixed amount of $Q_H$, we want to minimize the heat $Q_C$ we return to the cold reservoir.
+
+\begin{defi}[Efficiency]\index{efficiency}
+ The \emph{efficiency} of a heat engine is
+ \[
+ \eta = \frac{W}{Q_H} = 1 - \frac{Q_C}{Q_H}.
+ \]
+\end{defi}
+Ideally, we would want to build a system where $\eta = 1$. This is, of course, impossible, by Kelvin's second law. So we can ask --- what is the best we can do? The answer is given by Carnot's theorem.
+
+\begin{thm}
+ Of all engines operating between two heat reservoirs, reversible engines are the most efficient. In particular, all reversible engines have the same efficiency, which depends only on the temperatures of the two reservoirs.
+\end{thm}
+
+\begin{proof}
+ Consider any other engine, Ivor. We use this to drive Carnot (an arbitrary reversible engine) backwards:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (4, 0) node [pos=0.5, above] {hot reservoir};
+ \draw (0, -3) -- (4, -3) node [pos=0.5, below] {cold reservoir};
+
+ \draw [fill=yellow, fill opacity=0.2] (0.5, -1) rectangle (1.5, -2);
+ \node at (1, -1.5) {Ivor};
+
+ \draw [fill=yellow, fill opacity=0.2] (3, -1) rectangle (4, -2);
+ \node [align=center] at (3.5, -1.5) {\small reverse\\\small Carnot};
+
+ \draw [->-=0.65] (1, 0) -- (1, -1) node [pos=0.5, right] {$Q_H'$};
+ \draw [->-=0.65] (1, -2) -- (1, -3) node [pos=0.5, right] {$Q_C'$};
+
+
+ \draw [->-=0.6] (1.5, -1.5) -- (3, -1.5) node [pos=0.5, above] {$W$};
+
+ \draw [->-=0.65] (3.5, -3) -- (3.5, -2) node [pos=0.5, right] {$Q_C$};
+ \draw [->-=0.65] (3.5, -1) -- (3.5, 0) node [pos=0.5, right] {$Q_H$};
+ \end{tikzpicture}
+ \end{center}
+ Now we have heat $Q_H' - Q_H$ extracted from the hot reservoir, and $Q_C' - Q_C$ deposited in the cold reservoir, and we have
+ \[
+ Q_H' - Q_H = Q_C' - Q_C.
+ \]
+ Then Clausius' law says we must have $Q_H' - Q_H \geq 0$. Then the efficiency of Ivor is
+ \[
+ \eta_{\mathrm{Ivor}} = \frac{Q_H' - Q_C'}{Q_H'} = \frac{Q_H - Q_C}{Q_H'} \leq \frac{Q_H - Q_C}{Q_H} = \eta_{\mathrm{Carnot}}.
+ \]
+ Now if Ivor is also reversible, then we can swap the argument around, and deduce that $\eta_{\mathrm{Carnot}} \leq \eta_{\mathrm{Ivor}}$. Combining the two inequalities, they must be equal.
+\end{proof}
+In this case, we know $\eta$ is just a function of $T_H$ and $T_C$. We call this $\eta(T_H, T_C)$. We can use this to define $T$.
+
+To do so, we need \emph{three} heat reservoirs, $T_1 > T_2 > T_3$, and run some Carnot engines.
+\begin{center}
+ \begin{tikzpicture}[yscale=0.7]
+ \draw (0, 0) -- (4, 0) node [right] {$T_1$};
+ \draw (0, -3) -- (4, -3) node [right] {$T_2$};
+ \draw (0, -6) -- (4, -6) node [right] {$T_3$};
+
+ \draw [fill=yellow, fill opacity=0.2] (1.5, -1) rectangle (2.5, -2);
+ \draw [fill=yellow, fill opacity=0.2] (1.5, -4) rectangle (2.5, -5);
+ \node at (2, -1.5) {\small Carnot};
+ \node at (2, -4.5) {\small Carnot};
+ \draw [->-=0.7] (2, 0) -- (2, -1) node [pos=0.5, right] {$Q_1$};
+ \draw [->-=0.7] (2, -2) -- (2, -3) node [pos=0.5, right] {$Q_2$};
+ \draw [->-=0.7] (2, -3) -- (2, -4) node [pos=0.5, right] {$Q_2$};
+ \draw [->-=0.7] (2, -5) -- (2, -6) node [pos=0.5, right] {$Q_3$};
+
+ \draw [->] (2.5, -1.5) -- +(1, 0) node [right] {$W$};
+ \draw [->] (2.5, -4.5) -- +(1, 0) node [right] {$W'$};
+ \end{tikzpicture}
+\end{center}
+Then we have
+\[
+ Q_2 = Q_1 (1 - \eta(T_1, T_2))
+\]
+and thus
+\[
+ Q_3 = Q_2( 1 - \eta(T_2, T_3)) = Q_1(1 - \eta(T_1, T_2))(1 - \eta(T_2, T_3)).
+\]
+But since this composite Carnot engine is also reversible, we must have
+\[
+ Q_3 = Q_1(1 - \eta(T_1, T_3)).
+\]
+So we deduce that
+\[
+ 1 - \eta(T_1, T_3) = (1 - \eta(T_1, T_2))( 1 - \eta(T_2, T_3)).\tag{$*$}
+\]
+We now fix a $T_*$. We define
+\begin{align*}
+ f(T) &= 1 - \eta(T, T_*)\\
+ g(T) &= 1 - \eta(T_*, T).
+\end{align*}
+So we have
+\[
+ 1 - \eta(T, T') = f(T) g(T').
+\]
+Plugging this into $(*)$, we find
+\[
+ f(T_1) g(T_3) = f(T_1) g(T_2) f(T_2) g(T_3).
+\]
+So we find that
+\[
+ g(T_2) f(T_2) = 1
+\]
+for any $T_2$.
+
+Thus, we can write
+\[
+ 1 - \eta(T, T') = f(T) g(T') = \frac{f(T)}{f(T')}.
+\]
+The idea is to use this to define temperature. We now \emph{define} $T$ such that
+\[
+ f(T) \propto \frac{1}{T}.
+\]
+In other words, such that a reversible engine operating between $T_H$ and $T_C$ has
+\[
+ \eta = 1 - \frac{T_C}{T_H}.
+\]
+Of course, this is defined only up to a constant. We can fix this constant by saying the ``triple point'' of water has\index{temperature}
+\[
+ T = \SI{273.16}{\kelvin}.
+\]
+The triple point is a very precisely defined temperature, where water, ice and water vapour are in equilibrium. We will talk about this later when we discuss phase transitions.
+
+This is rather remarkable. We started from a rather wordy and vague statement of the second law, and we came up with a rather canonical way of defining temperature.
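The composition rule $(*)$ can be sanity-checked numerically: once we adopt the definition $\eta(T, T') = 1 - T'/T$, we have $1 - \eta(T, T') = T'/T$, and the rule follows because the intermediate temperature cancels. A minimal sketch with made-up reservoir temperatures:

```python
# Check the composition rule (*) for reversible efficiencies, assuming the
# thermodynamic definition eta(T, T') = 1 - T'/T.
def eta(T_hot, T_cold):
    return 1 - T_cold / T_hot

T1, T2, T3 = 500.0, 400.0, 250.0  # arbitrary reservoir temperatures

lhs = 1 - eta(T1, T3)
rhs = (1 - eta(T1, T2)) * (1 - eta(T2, T3))
# both sides equal T3/T1, since the intermediate T2 cancels
```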
+
+\subsection{Entropy}
+We've been talking about the second law in the previous chapter all along, but we didn't mention entropy! Where does it come from?
+
+Recall that in the Carnot cycle, we had
+\[
+ \eta = 1 - \frac{Q_C}{Q_H} = 1 - \frac{T_C}{T_H}.
+\]
+So we get
+\[
+ \frac{Q_H}{T_H} = \frac{Q_C}{T_C}.
+\]
+But $Q_H$ is the heat absorbed, and $Q_C$ is the heat emitted. This is rather asymmetric. So we set
+\begin{align*}
+ Q_1 &= Q_H & T_1 &= T_H\\
+ Q_2 &= -Q_C & T_2 &= T_C.
+\end{align*}
+Then both $Q_i$ are the heat absorbed, and then we get
+\[
+ \sum_{i = 1}^2 \frac{Q_i}{T_i} = 0.
+\]
+We now consider a more complicated engine.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$V$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$p$};
+
+ \draw [domain=0.66:5, dashed] plot (\x, {2/\x});
+ \draw [domain=1.66:5, dashed] plot (\x, {5/\x});
+
+ \draw [domain=2.5:4, thick, mred] plot (\x, {2/\x});
+ \draw [domain=2:2.8, thick, mred] plot (\x, {5/\x});
+ \draw [mred, thick] (2, 2.5) edge [bend right=10] (2.5, 0.8);
+
+ \node [circ] at (4, 0.5) {};
+ \node [circ] at (3.5, 1.4286) {};
+ \node [circ] at (2.5, 0.8) {};
+ \node [circ] at (2, 2.5) {};
+
+ \node [below] at (4, 0.5) {$C$};
+ \node [above] at (3.5, 1.4286) {$B$};
+ \node [below] at (2.5, 0.8) {$D$};
+ \node [anchor=south west] at (2, 2.5) {$A$};
+
+ \draw [domain=1.1667:5, dashed] plot (\x, {3.5/\x});
+ \draw [domain=3:3.7, thick, mred] plot (\x, {3.5/\x});
+ \draw [mred, thick] (3, 1.167) edge [bend left=10] (2.8, 1.7857);
+
+ \draw [mred, thick] (4, 0.5) edge [bend left=10] (3.7, 0.9459);
+ \draw [dashed] (3.7, 0.9459) edge [bend left=10] (3.5, 1.4286);
+
+ \node [circ] at (2.8, 1.7857) {};
+ \node [above] at (2.8, 1.7857) {$E$};
+
+ \node [circ] at (3, 1.167) {};
+ \node [below] at (3, 1.167) {$F$};
+ \node [circ] at (3.7, 0.9459) {};
+ \node [right] at (3.7, 0.9459) {$G$};
+ \end{tikzpicture}
+\end{center}
+We want to consider AEFGCD as a new engine. We saw that Carnot ABCD had
+\[
+ \frac{Q_{AB}}{T_H} + \frac{Q_{CD}}{T_C} = 0.\tag{$1$}
+\]
+Similarly, Carnot EBGF gives
+\[
+ \frac{Q_{EB}}{T_H} + \frac{Q_{GF}}{T_M} = 0.\tag{$2$}
+\]
+But we have
+\[
+ Q_{AB} = Q_{AE} + Q_{EB},\quad Q_{GF} = - Q_{FG}.
+\]
+Subtracting $(2)$ from $(1)$, we get
+\[
+ \frac{Q_{AE}}{T_H} + \frac{Q_{FG}}{T_M} + \frac{Q_{CD}}{T_C} = 0.
+\]
+But this is exactly the sum of the $\frac{Q}{T}$ for the cycle AEFGCD. So we again find
+\[
+ \sum_{i = 1}^3 \frac{Q_i}{T_i} = 0.
+\]
+By repeatedly cutting corners like this, we can approximate any simple closed curve in the plane. So we learn that
+\[
+ \oint \frac{\di Q}{T} = 0
+\]
+for \emph{any} reversible cycle. This allows us to define a new function of state, called the \term{entropy}.
+\begin{defi}[Entropy]\index{entropy}
+ The \emph{entropy} of a system at $A = (p, V)$ is given by
+ \[
+ S(A) =\int_0^A \frac{\di Q}{T},
+ \]
+ where $0$ is some fixed reference state, and the integral is evaluated along any reversible path.
+\end{defi}
+We next want to see that this entropy we've defined actually has the properties of entropy we've found previously.
+
+It is immediate from this definition that we have
+\[
+ T\;\d S = \di Q.
+\]
+We also see that for reversible changes, we have
+\[
+ -p \;\d V = \di W.
+\]
+So we know that
+\[
+ \d E = T\;\d S - p \;\d V,
+\]
+which is what we previously called the first law of thermodynamics.
+
+Since this is a relation between functions of state, it must hold for \emph{any} infinitesimal change, whether reversible or not!
+
+Since the same first law holds for the entropy we've previously defined, this must be the same entropy as the one we've discussed previously, at least up to a constant. This is quite remarkable.
+
+Historically, this was how we first defined entropy. And then Boltzmann came along and explained that entropy is a measure of how many states we've got. No one believed him, and he committed suicide, but we now know he was correct.
+
+Let's compare an irreversible Ivor and the Carnot cycle between $T_H$ and $T_C$. We assume the Carnot cycle does the same work as Ivor, say $W$. Now we have
+\[
+ W = Q_H' - Q_C'= Q_H - Q_C.
+\]
+By Carnot's theorem, since Carnot is more efficient, we must have
+\[
+ Q_H' > Q_H.
+\]
+Note that we have
+\begin{align*}
+ \frac{Q_H'}{T_H} - \frac{Q_C'}{T_C} &= \frac{Q_H'}{T_H} + \frac{Q_H - Q_C - Q_H'}{T_C} \\
+ &= \frac{Q_H}{T_H} - \frac{Q_C}{T_C} + (Q_H' - Q_H)\left(\frac{1}{T_H} - \frac{1}{T_C}\right) \\
+ &= (Q_H' - Q_H)\left(\frac{1}{T_H} - \frac{1}{T_C}\right).
+\end{align*}
+The first factor is positive, while the second factor is negative. So we see that the sum of the $\frac{Q}{T}$ is negative.
+
+The same method of ``cutting off corners'' as above shows that
+\[
+ \oint \frac{\di Q}{T} \leq 0
+\]
+for \emph{any} cycle. This is called the \term{Clausius inequality}.
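As a toy illustration of the inequality, take an irreversible engine that does the same work as a Carnot engine but with a lower efficiency. All numbers below are made up; only the signs matter:

```python
# Clausius inequality, toy numbers: Carnot's sum of Q/T vanishes, while an
# irreversible engine with lower efficiency gives a strictly negative sum.
T_H, T_C = 400.0, 300.0
W = 100.0

eta_carnot = 1 - T_C / T_H      # 0.25
Q_H = W / eta_carnot            # Carnot absorbs 400
Q_C = Q_H - W                   # and emits 300

eta_ivor = 0.2                  # assumed, lower than Carnot's
QH_prime = W / eta_ivor         # Ivor absorbs 500
QC_prime = QH_prime - W         # and emits 400

carnot_sum = Q_H / T_H - Q_C / T_C
ivor_sum = QH_prime / T_H - QC_prime / T_C
```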
+
+Now consider two paths from $A$ to $B$, say $I$ and $II$. Suppose $I$ is irreversible and $II$ is reversible.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$V$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$p$};
+
+ \node [circ] at (1, 1) {};
+ \node [below] at (1, 1) {$A$};
+
+ \node [circ] at (2.5, 3) {};
+ \node [right] at (2.5, 3) {$B$};
+
+ \draw (1, 1) edge [bend left, ->-=0.55] (2.5, 3);
+ \node at (1, 2) {$II$};
+ \draw [->-=0.6] plot [smooth] coordinates {(1, 1) (2, 0.6) (3, 0.9) (2.2, 2) (2.5, 3)};
+
+ \node [right] at (3, 0.9) {$I$};
+ \end{tikzpicture}
+\end{center}
+Then we can run $II$ backwards, and we get
+\[
+ \int_I \frac{\di Q}{T} - \int_{II} \frac{\di Q}{T} = \oint \frac{\di Q}{T} \leq 0.
+\]
+So we know that
+\[
+ \int_I \frac{\di Q}{T} \leq S(B) - S(A).
+\]
+Let's now assume that $I$ is adiabatic. In other words, the system is isolated and is not absorbing any heat. So $\di Q = 0$. So the left hand side of the inequality is $0$. So we found that
+\[
+ S(B) \geq S(A)
+\]
+for isolated systems! This is precisely how we previously stated the second law.
+
+If the change is reversible, then we obtain equality. As an example, we can plot the Carnot cycle in the $TS$ plane, where it is just a rectangle.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$S$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$T$};
+
+ \draw [->-=0.55] (1.3, 2.5) node [circ] {} node [above] {$A$} -- (4, 2.5);
+ \draw [->-=0.6] (4, 2.5) node [circ] {} node [above] {$B$} -- (4, 0.8);
+ \draw [->-=0.55] (4, 0.8) node [circ] {} node [below] {$C$} -- (1.3, 0.8);
+ \draw [->-=0.6] (1.3, 0.8) node [circ] {} node [below] {$D$} -- (1.3, 2.5);
+ \end{tikzpicture}
+\end{center}
+We see that the periods of constant entropy are exactly those when the system is adiabatic.
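In the $TS$ plane, the bookkeeping is particularly simple: the heat absorbed along $AB$ is $T_H(S_B - S_A)$, the heat emitted along $CD$ is $T_C(S_B - S_A)$, and the work done per cycle is the area of the rectangle. A short numerical sketch with arbitrary values:

```python
# Carnot cycle in the TS plane: heats are areas under the isotherms, and
# the work done per cycle is the area of the enclosed rectangle.
T_H, T_C = 500.0, 300.0
S_A, S_B = 1.0, 3.0             # arbitrary entropy values

dS = S_B - S_A
Q_H = T_H * dS                  # heat absorbed along AB
Q_C = T_C * dS                  # heat emitted along CD
W = Q_H - Q_C                   # work done = enclosed area
eta = W / Q_H
```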
+
+We can look at some examples of non-reversible processes.
+\begin{eg}
+ Suppose we have a box of gas:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [opacity=0.3, mblue] (0, 0) rectangle (2, 2);
+
+ \draw (0, 0) rectangle (4, 2);
+ \draw (2, 0) -- (2, 2);
+ \node at (1, 1) {gas};
+ \node at (3, 1) {vacuum};
+ \end{tikzpicture}
+ \end{center}
+ On the left, there is gas, and on the right, there is a vacuum. If we remove the partition, then the gas expands to fill the whole box. If we insulate this well enough, then the process is adiabatic, but it is clearly irreversible. This is not a quasi-static process.
+
+ In this scenario, we have $\di Q = 0 = \di W$. So we know that $\d E = 0$, i.e.\ $E$ is constant. Experimentally, except at low temperatures, we find that the temperature does not change. So $E$ is a function of temperature only. This is \emph{Joule's law}. We can plot the initial and final states in a $TS$ plane:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0) node [right] {$S$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$T$};
+
+ \node [circ] at (1.2, 1.5) {};
+ \node [circ] at (2.8, 1.5) {};
+ \node [above] at (1.2, 1.5) {initial};
+ \node [above] at (2.8, 1.5) {final};
+
+ \draw [dashed] (1.2, 1.5) -- (2.8, 1.5);
+ \end{tikzpicture}
+ \end{center}
+ Note that it doesn't make sense to draw a line from the initial state to the final state, as the process is not quasi-static.
+\end{eg}
+
+\subsection{Thermodynamic potentials}
+So far, the state of the gas is specified by $p$ and $V$. These determine our other quantities $T, E, S$. But we could instead use other variables. For example, when drawing the $TS$ plots, we are using $T$ and $S$.
+
+We can choose any $2$ of $p, V, E, T, S$ to specify the state. Recall that we had
+\[
+ \d E = T \;\d S - p \;\d V.
+\]
+So it is natural to regard $E = E(S, V)$, or $S = S(E, V)$. Taking derivatives of these expressions, we find
+\[
+ \left(\frac{\partial E}{\partial S}\right)_V = T,\quad \left(\frac{\partial E}{\partial V}\right)_S = -p.
+\]
+In statistical mechanics, we \emph{defined} $T$ and $p$ this way, but here we derived them as a consequence. It is good that they agree.
+
+We now further differentiate these objects. Note that since mixed partial derivatives are symmetric, we know
+\[
+ \frac{\partial^2 E}{\partial V\partial S} = \frac{\partial^2 E}{\partial S\partial V}.
+\]
+This tells us that
+\[
+ \left(\frac{\partial T}{\partial V}\right)_S = - \left(\frac{\partial p}{\partial S}\right)_V.
+\]
+Relations derived this way are known as \term{Maxwell relations}.
+
+There are $3$ more sets. We can, as before, define the \term{Helmholtz free energy} by
+\[
+ F = E - TS,
+\]
+and then we have
+\[
+ \d F = - S \;\d T - p \;\d V.
+\]
+So we view $F = F(T, V)$.
+
+What does this $F$ tell us? For a reversible change at constant temperature, we have
+\[
+ F(B) - F(A) = - \int_A^B p \;\d V.
+\]
+We now use the fact that the change is reversible to write this as $\int_A^B \di W$, which is the work done on the system. So the free energy measures the amount of energy available to do work at constant temperature.
+
+Using this, we can express
+\[
+ \left(\frac{\partial F}{\partial T}\right)_V = - S,\quad \left(\frac{\partial F}{\partial V}\right)_T =- p.
+\]
+Again taking mixed partials, we find that
+\[
+ \left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial p}{\partial T}\right)_V.
+\]
+We obtained $F$ from $E$ by subtracting away the conjugate pair $T, S$. We can look at what happens when we mess with another conjugate pair, $p$ and $V$.
+
+We first motivate this a bit. Consider a system in equilibrium with a reservoir $R$ of fixed temperature $T$ and pressure $p$. The volume of the system and the reservoir $R$ can vary, but the total volume is fixed.
+
+We can consider the total entropy
+\[
+ S_{\mathrm{total}}(E_{\mathrm{total}}, V_{\mathrm{total}}) = S_R(E_{\mathrm{total}} - E, V_{\mathrm{total}} - V) + S(E, V).
+\]
+Again, since the energy of the system is small compared to the total energy of the reservoir, we can Taylor expand $S_R$, and have
+\[
+ S_{\mathrm{total}}(E_{\mathrm{total}}, V_{\mathrm{total}}) = S_R(E_{\mathrm{total}}, V_{\mathrm{total}}) - \left(\frac{\partial S_R}{\partial E}\right)_V E - \left(\frac{\partial S_R}{\partial V}\right)_E V + S(E, V).
+\]
+Using the definition of $T$ and $p$, we obtain
+\[
+ S_{\mathrm{total}}(E_{\mathrm{total}}, V_{\mathrm{total}}) = S_R(E_{\mathrm{total}}, V_{\mathrm{total}}) - \frac{E + pV - TS}{T}.
+\]
+The second law of thermodynamics says we want to maximize this expression. But the first term is constant, and we are assuming we are working at fixed temperature and pressure. So we should minimize
+\[
+ G = F + pV = E + pV - TS.
+\]
+This is the \term{Gibbs free energy}. This is important in chemistry, since we usually do experiments in a test tube, which is open to the atmosphere. It is easy to find that
+\[
+ \d G = - S \;\d T + V \;\d p.
+\]
+So the natural variables for $G$ are $G = G(T, p)$. We again find
+\[
+ S = -\left(\frac{\partial G}{\partial T}\right)_p,\quad V = \left(\frac{\partial G}{\partial p}\right)_T.
+\]
+Using the symmetry of mixed partials again, we find
+\[
+ - \left(\frac{\partial S}{\partial p}\right)_T = \left(\frac{\partial V}{\partial T}\right)_p.
+\]
+We now look at the final possibility, where we just add $pV$ and not subtract $TS$. This is called the \term{enthalpy}\index{$H$}
+\[
+ H = E + pV.
+\]
+Then we have
+\[
+ \d H = T\;\d S + V \;\d p.
+\]
+So we naturally view $H = H(S, p)$. We again have
+\[
+ T = \left(\frac{\partial H}{\partial S}\right)_p,\quad V = \left(\frac{\partial H}{\partial p}\right)_S.
+\]
+So we find
+\[
+ \left(\frac{\partial T}{\partial p}\right)_S = \left(\frac{\partial V}{\partial S}\right)_p.
+\]
+For the benefit of mankind, we collect all the Maxwell relations in the following proposition:
+\begin{prop}
+ \begin{align*}
+ \left(\frac{\partial T}{\partial V}\right)_S &= - \left(\frac{\partial p}{\partial S}\right)_V\\
+ \left(\frac{\partial S}{\partial V}\right)_T &= \left(\frac{\partial p}{\partial T}\right)_V\\
+ - \left(\frac{\partial S}{\partial p}\right)_T &= \left(\frac{\partial V}{\partial T}\right)_p\\
+ \left(\frac{\partial T}{\partial p}\right)_S &= \left(\frac{\partial V}{\partial S}\right)_p.
+ \end{align*}
+\end{prop}
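We can check one of these relations on a concrete system. The sketch below assumes a monatomic ideal gas, whose entropy takes the form $S = Nk(\log V + \frac{3}{2}\log T)$ up to an additive constant (this explicit form is an assumption here, not something we have derived), and compares the two sides of the second relation by central differences:

```python
import math

Nk = 1.0  # the combination Nk, set to 1 for the check

def S(T, V):
    # entropy of a monatomic ideal gas, up to an additive constant
    # (an assumed explicit form, used only to illustrate the relation)
    return Nk * (math.log(V) + 1.5 * math.log(T))

def p(T, V):
    return Nk * T / V  # ideal gas law

T0, V0, h = 300.0, 2.0, 1e-5
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)  # (dS/dV) at fixed T
dp_dT = (p(T0 + h, V0) - p(T0 - h, V0)) / (2 * h)  # (dp/dT) at fixed V
# both sides equal Nk/V = 0.5
```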
+
+\subsubsection*{Ideal gases}
+We now return to gases. We previously defined ideal gases as those whose atoms don't interact. But this is a microscopic definition. Before we knew atoms existed, we couldn't have made this definition. How can we define it using classical thermodynamics?
+
+\begin{defi}[Ideal gas]\index{ideal gas}
+ An \emph{ideal gas} is a gas that satisfies \term{Boyle's law}, which says $pV$ is just a function of $T$, say $pV = f(T)$, and \term{Joule's law}, which says $E$ is a function of $T$.
+\end{defi}
+These are true for real gases at low enough pressure.
+
+Let's try to take this definition and attempt to recover the equation of state. We have
+\[
+ \d E = T\;\d S - p \;\d V.
+\]
+Since $E$ is just a function of $T$, we know that
+\[
+ \left(\frac{\partial E}{\partial V}\right)_T = 0.
+\]
+In other words, plugging in the formula for $\d E$, we find
+\[
+ T\left(\frac{\partial S}{\partial V}\right)_T - p = 0.
+\]
+Using the Maxwell relations, we find
+\[
+ T\left(\frac{\partial p}{\partial T}\right)_V - p = 0.
+\]
+Rearranging this tells us
+\[
+ \left(\frac{\partial p}{\partial T}\right)_V = \frac{p}{T}.
+\]
+Using Boyle's law $pV = f(T)$, this tells us
+\[
+ \frac{f'(T)}{V} = \frac{f(T)}{TV}.
+\]
+So we must have
+\[
+ f(T) = CT
+\]
+for some $C$. So we have
+\[
+ pV = CT.
+\]
+Since the left hand side is extensive, we must have $C \propto N$. If we don't know about atoms, then we can talk about the number of moles of gas rather than the number of atoms. So we have
+\[
+ C = kN
+\]
+for some $k$. Then we obtain
+\[
+ pV = NkT,
+\]
+as desired. In statistical physics, we found the same equation, where $k$ is the Boltzmann constant.
+
+\subsubsection*{Carnot cycle for the ideal gas}
+Let's now look at the Carnot cycle for ideal gases.
+
+\begin{itemize}
+ \item On $AB$, we have $\d T = 0$. So $\d E = 0$. So $\di Q = - \di W$. Then we have
+ \[
+ Q_H = \int_A^B \di Q = - \int_A^B \di W = \int_A^B p \;\d V,
+ \]
+ where we used the fact that the change is reversible in the last equality. Using the equation of state, we have
+ \[
+ Q_H = \int_A^B \frac{NkT}{V}\;\d V = NkT_H \log \frac{V_B}{V_A}.
+ \]
+ Similarly, the heat emitted along $CD$ is
+ \[
+ Q_C = NkT_C \log \frac{V_C}{V_D}.
+ \]
+ \item Along $BC$, since the system is isolated, we have $\di Q = 0$. So $\d E = \di W = - p\;\d V$. From Joule's law, we know $E$ is just a function of temperature. So we have
+ \[
+ E'(T) \;\d T = - \frac{NkT}{V}\;\d V.
+ \]
+ We now have a differential equation involving $T$ and $V$. Dividing by $NkT$, we obtain
+ \[
+ \frac{E'(T)}{NkT}\;\d T = - \frac{\d V}{V}.
+ \]
+ Integrating this from $B$ to $C$, we find
+ \[
+ \int_{T_H}^{T_C} \frac{E'(T)}{NkT}\;\d T = - \int_B^C \frac{\d V}{V}.
+ \]
+ So we find
+ \[
+ \log \frac{V_C}{V_B} = \int_{T_C}^{T_H} \frac{E'(T)}{NkT}\;\d T.
+ \]
+ Note that the right hand side depends only on the temperatures of the hot and cold reservoirs. So we have
+ \[
+ \log \frac{V_D}{V_A} = \log \frac{V_C}{V_B}.
+ \]
+ In other words, we have
+ \[
+ \frac{V_D}{V_A} = \frac{V_C}{V_B}.
+ \]
+ Alternatively, we have
+ \[
+ \frac{V_C}{V_D} = \frac{V_B}{V_A}.
+ \]
+\end{itemize}
+Finally, we calculate the efficiency. We have
+\[
+ \eta = 1 - \frac{Q_C}{Q_H} = 1 - \frac{T_C}{T_H},
+\]
+as expected. Of course, this must have been the case.
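The whole cycle can be checked numerically for a monatomic ideal gas, where $E = \frac{3}{2}NkT$, so that the adiabats satisfy $TV^{2/3} = \text{constant}$. The temperatures and volumes below are arbitrary:

```python
import math

Nk = 1.0
T_H, T_C = 400.0, 300.0
V_A, V_B = 1.0, 2.0             # arbitrary endpoints of the hot isotherm

# adiabats of a monatomic ideal gas: T V^(2/3) = const, so cooling from
# T_H down to T_C multiplies the volume by (T_H/T_C)^(3/2)
r = (T_H / T_C) ** 1.5
V_C, V_D = V_B * r, V_A * r

Q_H = Nk * T_H * math.log(V_B / V_A)  # heat absorbed along AB
Q_C = Nk * T_C * math.log(V_C / V_D)  # heat emitted along CD
eta = 1 - Q_C / Q_H
```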
+
+We can again talk about heat capacities. We again define
+\[
+ C_V = \left(\frac{\partial E}{\partial T}\right)_V = T \left(\frac{\partial S}{\partial T}\right)_V.
+\]
+We can also define\index{$C_V$}\index{$C_p$}
+\[
+ C_p = T \left(\frac{\partial S}{\partial T}\right)_p.
+\]
+We can use the Maxwell relations to find that
+\[
+ \left(\frac{\partial C_V}{\partial V}\right)_T = T \left(\frac{\partial^2 p}{\partial T^2}\right)_V,\quad
+ \left(\frac{\partial C_p}{\partial p}\right)_T = - T \left(\frac{\partial^2 V}{\partial T^2}\right)_p.
+\]
+More interestingly, we find that
+\[
+ C_p - C_V = T \left(\frac{\partial V}{\partial T}\right)_p \left(\frac{\partial p}{\partial T}\right)_V.
+\]
+For an ideal gas, we have $pV = NkT$. Then we have
+\[
+ \left(\frac{\partial V}{\partial T}\right)_p = \frac{Nk}{p},\quad \left(\frac{\partial p}{\partial T}\right)_V = \frac{Nk}{V}.
+\]
+Plugging this into the expression, we find
+\[
+ C_p - C_V = \frac{T(Nk)^2}{pV} = Nk.
+\]
+So for any ideal gas, the difference between $C_p$ and $C_V$ is always the same. Note that $C_V$ and $C_p$ themselves need not be constant.
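A quick central-difference check of the formula $C_p - C_V = T \left(\frac{\partial V}{\partial T}\right)_p \left(\frac{\partial p}{\partial T}\right)_V$ for the ideal gas, evaluated at an arbitrary state:

```python
Nk = 1.0                        # the combination Nk, set to 1
T0, p0 = 300.0, 2.0             # arbitrary state
V0 = Nk * T0 / p0

h = 1e-4
# (dV/dT) at fixed p and (dp/dT) at fixed V, by central differences
dV_dT = (Nk * (T0 + h) / p0 - Nk * (T0 - h) / p0) / (2 * h)
dp_dT = (Nk * (T0 + h) / V0 - Nk * (T0 - h) / V0) / (2 * h)

diff = T0 * dV_dT * dp_dT       # should equal Nk for any (T0, p0)
```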
+
+\subsection{Third law of thermodynamics}
+We end with a quick mention of the third law. The first three laws (starting from $0$!) are ``fundamental'' in some sense, but the third one is rather different.
+\begin{law}[Third law of thermodynamics]\index{third law of thermodynamics}\index{law of thermodynamics!third}
+ As $T \to 0$, we have
+ \[
+ \lim_{T \to 0}S = S_0,
+ \]
+ which is independent of other parameters (e.g.\ $V, B$ etc). In particular, the limit is finite.
+\end{law}
+Recall that classically, $S$ is only defined up to a constant. So we can set $S_0 = 0$, and fix this constant.
+
+What does this imply for the heat capacities? We have
+\[
+ C_V = T \left(\frac{\partial S}{\partial T}\right)_V.
+\]
+Integrating this equation up, we find that
+\[
+ S(T_2, V) - S(T_1, V) = \int_{T_1}^{T_2} \frac{C_V(T, V)}{T}\;\d T.
+\]
+Let $T_1 \to 0$. Then the left hand side is finite. But on the right, as $T \to 0$, the $\frac{1}{T}$ term diverges. For the integral to be finite, we must have $C_V \to 0$ as $T \to 0$. Similarly, doing this with $C_p$, we find that
+\[
+ S(T_2, p) - S(T_1, p) = \int_{T_1}^{T_2} \frac{C_p(T, p)}{T}\;\d T.
+\]
+By the same argument, we deduce that the third law implies $C_p(T, p) \to 0$ as $T \to 0$.
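The divergence argument can be illustrated with a crude numerical integral: a heat capacity with a nonzero limit makes $\int_{T_1}^{T_2} C_V/T \;\d T$ grow logarithmically as $T_1 \to 0$, while one vanishing linearly in $T$ keeps it finite. (The midpoint rule and the toy forms of $C_V$ are just for illustration.)

```python
def entropy_diff(C_V, T1, T2, n=200000):
    # midpoint-rule estimate of the integral of C_V(T)/T from T1 to T2
    h = (T2 - T1) / n
    total = 0.0
    for i in range(n):
        T = T1 + (i + 0.5) * h
        total += C_V(T) / T * h
    return total

diverging = entropy_diff(lambda T: 1.0, 1e-6, 1.0)  # grows like log(1/T1)
finite = entropy_diff(lambda T: T, 1e-6, 1.0)       # stays close to 1
```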
+
+But for an ideal gas, we know that $C_p - C_V$ must be constant! In particular, $C_p$ and $C_V$ cannot both tend to $0$. So if we believe in the third law, then the ideal gas approximation must break down at low temperatures.
+
+If we go back and check, we find that quantum gases do satisfy the third law. The statistical mechanical interpretation of this is just that as $T \to 0$, the number of states goes to $1$, as we are forced to the ground state, and so $S = 0$. So this says the system has a unique ground state.
+
+\section{Phase transitions}
+In the remainder of the course, we are going to study \emph{phase transitions}. A phase transition is a \emph{discrete} change in the properties of a system, characterized by discontinuities in some of the variables describing the system. Macroscopically, these often correspond to rather drastic changes, such as freezing and boiling.
+
+We have already met some examples of phase transitions before, such as Bose--Einstein condensation. However, in this chapter, we are mainly going to focus on two examples --- the liquid-gas transition, and the Ising model. At first sight, these two examples should be rather unrelated to each other, but it turns out they behave in a very similar way. In fact, it seems like \emph{all} phase transitions look somewhat alike. After examining these systems in some detail, our goal is to understand where this similarity came from.
+
+\subsection{Liquid-gas transition}
+We are now going to understand the liquid-gas transition. This is sometimes known as \term{boiling} and \term{condensation}. We will employ the van der Waals equation of state,
+\[
+ p = \frac{kT}{v - b} - \frac{a}{v^2},
+\]
+where\index{$v$}
+\[
+ v = \frac{V}{N}
+\]
+is the volume per particle. This is only valid for low density gases, but let's now just ignore that, and assume we can use this for any density. This provides a toy model of the liquid-gas phase transition.
+
+Let's look at the isotherms of this system in the $pv$ diagram. Note that in the equation of state, there is a minimum value of $v$, namely $v = b$. Depending on the value of $T$, the isotherms can look different:
+
+\begin{center}
+ \begin{tikzpicture}[yscale=12, xscale=0.3]
+ \draw [->] (0, 0.05) -- (15, 0.05) node [right] {$v$};
+ \draw [->] (0, 0.05) -- (0, 0.3) node [above] {$p$};
+
+ \draw [mblue, thick] plot coordinates {\isothermi};
+ \draw [mblue!50!mred, thick] plot coordinates {\isothermii};
+ \draw [mred, thick] plot coordinates {\isothermiii};
+ \node [above, mred] at (11, 0.13) {$T > T_c$};
+ \node [below, mblue] at (7,0.11) {$T < T_c$};
+
+ \draw [dashed] (1.58, 0.05) node [below] {$b$} -- +(0, 0.25);
+ \end{tikzpicture}
+\end{center}
+There is a critical temperature $T_c$ that separates two possible behaviours. If $T > T_c$, then the isotherms are monotonic, whereas when $T < T_c$, there are two turning points. At $T = T_c$, the two extrema merge into a single point of inflection. To find this critical point, we have to solve
+\[
+ \left(\frac{\partial p}{\partial v}\right)_T = \left(\frac{\partial^2 p}{\partial v^2}\right)_T = 0.
+\]
+Using the equation of state, we can determine the critical temperature to be
+\[
+ kT_c = \frac{8a}{27 b}.
+\]
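Solving these two equations is routine; the standard result (stated here without derivation) is $v_c = 3b$, $kT_c = 8a/(27b)$, and hence $p_c = a/(27b^2)$. We can verify numerically that both derivatives vanish there, with arbitrary values of $a$ and $b$:

```python
a, b = 2.0, 0.5                 # arbitrary van der Waals parameters

kT_c = 8 * a / (27 * b)
v_c = 3 * b                     # standard critical volume per particle
p_c = a / (27 * b ** 2)

def p(v, kT):
    return kT / (v - b) - a / v ** 2

def dp_dv(v, kT):
    return -kT / (v - b) ** 2 + 2 * a / v ** 3

def d2p_dv2(v, kT):
    return 2 * kT / (v - b) ** 3 - 6 * a / v ** 4

# both derivatives vanish at the critical point, and the pressure there is p_c
```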
+We are mainly interested in what happens when $T < T_c$. We see that there exists a range of pressures where there are three states with the same $p, T$ but different $v$.
+\begin{center}
+ \begin{tikzpicture}[yscale=12, xscale=0.3]
+ \draw [->] (0, 0.05) -- (15, 0.05) node [right] {$v$};
+ \draw [->] (0, 0.05) -- (0, 0.3) node [above] {$p$};
+
+ \draw [morange, thick] plot coordinates {\isothermia};
+ \draw [mred, thick] plot coordinates {\isothermib};
+ \draw [mblue, thick] plot coordinates {\isothermic};
+
+ \draw [dashed] (0, 0.12) -- +(15, 0);
+
+ \node [circ] at (1.9, 0.12) {};
+ \node [circ] at (3, 0.12) {};
+ \node [circ] at (7.2, 0.12) {};
+
+ \node [anchor = south east] at (1.9, 0.12) {\scriptsize $L$};
+ \node [above] at (3, 0.12) {\scriptsize $U$};
+ \node [above] at (7.2, 0.12) {\scriptsize $G$};
+ \end{tikzpicture}
+\end{center}
+At $U$, we have
+\[
+ \left(\frac{\partial p}{\partial v}\right)_T > 0.
+\]
+Thus, when we increase the volume a bit, the pressure goes up, which makes it expand further. Similarly, if we compress it a bit, then $p$ decreases, and it collapses further. So this state is unstable.
+
+How about $L$? Here we have $v \sim b$. So the atoms are closely packed. Also, the quantity
+\[
+ \left|\left(\frac{\partial p}{\partial v}\right)_T\right|
+\]
+is large. So this is very hard to compress. We call this a \term{liquid}.
+
+It is no surprise what $G$ is. Here we have $v \gg b$, and $\left|\frac{\partial p}{\partial v}\right|$ is small. So we get a \term{gas}.
+
+Note that once we've made this identification with liquids and gases, we see that above $T_c$, there is no distinction between liquids and gases!
+
+\subsubsection*{Phase equilibria}
+What actually happens when we try to go to this region? Suppose we fix a pressure $p$ and temperature $T$. Ruling out the possibility of living at $U$, we deduce that we must either be at $L$ or at $G$. But actually, there is a third possibility. It could be that part of the system is at $L$ and the other part is at $G$.
+
+We suppose we have two separate systems, namely $L$ and $G$. They have the same $T$ and $p$. To determine which of the above possibilities happen, we have to consider the chemical potentials $\mu_L$ and $\mu_G$. If we have
+\[
+ \mu_L = \mu_G,
+\]
+then as we have previously seen, this implies the two systems can live in equilibrium. Thus, it follows that we can have any combination of $L$ and $G$ we like, and we will see that the actual combination will be dictated by the volume $v$.
+
+What if they are \emph{not} equal? Say, suppose $\mu_L > \mu_G$. Then essentially by definition of $\mu$, this says we would want to have as little liquid as possible. Of course, since the number of liquid molecules has to be non-negative, this implies we have no liquid molecules, i.e.\, we have a pure gas. Similarly, if $\mu_G > \mu_L$, then we have a pure liquid state.
+
+Now let's try to figure out when these happen. Since we work at fixed $T$ and $p$, the right thermodynamic potential to use is the Gibbs free energy
+\[
+ G = E + pV - TS.
+\]
+We have previously discussed this for a fixed amount of gas, but we would now let it vary. Then
+\[
+ \d E = T\;\d S - p\;\d V + \mu \;\d N.
+\]
+So we have
+\[
+ \d G = - S\;\d T + V \;\d p + \mu \;\d N.
+\]
+So we have $G = G(T, p, N)$, and we also know $G$ is extensive. As before, we can argue using intensivity and extensivity that we must have
+\[
+ G = f(T, p) N
+\]
+for some function $f$. Then we have
+\[
+ f = \left(\frac{\partial G}{\partial N}\right)_{T, p} = \mu.
+\]
+Therefore\index{$g(T, p)$}
+\[
+ \mu(T, p) = g(T, p) \equiv \frac{G}{N}.
+\]
+This is the Gibbs free energy per particle. Of course, this notation is misleading, since we know $T$ and $p$ don't uniquely specify the state. We should have a copy of this equation for $L$, and another for $G$ (and also $U$, but nobody cares).
+
+Using the first law, we get
+\[
+ \left(\frac{\partial \mu}{\partial p}\right)_T = \frac{1}{N} \left(\frac{\partial G}{\partial p}\right)_{T, N} = \frac{V}{N} = v(T, p).\tag{$\dagger$}
+\]
+This allows us to compute $\mu(T, p)$ (up to a constant). To do so, we fix an arbitrary starting point $O$, and then integrate $(\dagger)$ along the isotherm starting at $O$. At some other point $Q$, we can write an equation for the chemical potential
+\[
+ \mu_Q = \mu_O + \int_O^Q \d p\; v(T, p).
+\]
+Geometrically, this integral is just the area between the isotherm and the $p$ axis between the two endpoints.
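To make this concrete, here is a small numerical sketch (not part of the course; it uses the reduced van der Waals variables introduced later, with $p$, $v$, $T$ measured in units of their critical values, which just fixes $a$ and $b$ to convenient numbers). It evaluates $\int v \;\d p$ along a subcritical isotherm by parameterizing the integral by $v$:

```python
import numpy as np

# Reduced van der Waals isotherm: p, v, T in units of their critical values.
def p_bar(v, T):
    return 8 * T / (3 * v - 1) - 3 / v**2

# simple trapezoid rule (works for a decreasing x array as a signed integral)
def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# mu(Q) - mu(O) = integral of v dp along the isotherm; parameterize by v,
# so dp = (dp/dv) dv, with the derivative taken by central differences.
def dmu(v_Q, v_O=10.0, T=0.9, n=100001):
    vs = np.linspace(v_O, v_Q, n)
    h = 1e-6
    dpdv = (p_bar(vs + h, T) - p_bar(vs - h, T)) / (2 * h)
    return trapz(vs * dpdv, vs)

# Moving from the low-pressure gas state O (large v) to smaller volume
# raises p along this stretch, so mu increases.
```

Along the stretch from $O$ towards $N$ the pressure rises, so the integral, and hence $\mu$, increases, in line with the geometric picture.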
+
+We will pick $O$ to be a gas state of high volume (and hence low pressure). Referring to the upcoming diagram, we see that as we go from $O$ to $N$, the pressure is increasing. So the integral increases. But once we reach $N$, the pressure starts decreasing until we reach $J$. It then increases again along $JM$.
+\begin{center}
+ \begin{tikzpicture}[yscale=12, xscale=0.3]
+ \draw [->] (0, 0.05) -- (15, 0.05) node [right] {$v$};
+ \draw [->] (0, 0.05) -- (0, 0.3) node [above] {$p$};
+
+ \draw [morange, thick] plot coordinates {\isothermia};
+ \draw [mred, thick] plot coordinates {\isothermib};
+ \draw [mblue, thick] plot coordinates {\isothermic};
+
+ \node [circ] at (2.2, 0.09194) {};
+ \node [circ] at (4.5, 0.13880) {};
+ \node [circ] at (13.2,0.08196) {};
+ \node [circ] at (1.85, 0.13880) {};
+ \node [circ] at (11.3, 0.09194) {};
+
+ \node [below] at (2.2, 0.09194) {\scriptsize$J$};
+ \node [above] at (4.5, 0.13880) {\scriptsize$N$};
+ \node [below] at (13.2,0.08196) {\scriptsize$O$};
+ \node [anchor=south east] at (1.85, 0.13880) {\scriptsize$M$};
+ \node [above] at (11.3, 0.09194) {\scriptsize$K$};
+
+ \draw [dashed] (0, 0.09194) -- +(15, 0);
+ \draw [dashed] (0, 0.13880) -- +(15, 0);
+ \end{tikzpicture}
+\end{center}
+We can now sketch what $\mu$ looks like.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$p$};
+ \draw [->] (0, 0) -- (0, 3) node [above] {$\mu$};
+
+ \draw (0.5, 0.5) edge [bend right=10, mblue, thick] (3, 2.3);
+ \draw (3, 2.3) edge [bend left=10, mred, thick] (2, 2);
+ \draw (2, 2) edge [bend left = 10, morange, thick] (4.5, 1.3);
+
+ \node [mred] (label) at (2.5, 2.7) {\scriptsize unstable};
+
+ \draw (label) edge [bend right, -latex', mred] (2.5, 2.2);
+
+ \node [circ] at (3, 2.3) {};
+ \node [right] at (3, 2.3) {\scriptsize $N$};
+
+ \node [circ] at (2, 2) {};
+ \node [left] at (2, 2) {\scriptsize $J$};
+
+ \node [circ] at (2.63, 1.92) {};
+ \node [anchor = north west] at (2.63, 1.92) {\!\!\scriptsize $X$};
+
+ \draw [dashed] (2.63, 0) node [below] {\tiny $p(T)$} -- (2.63, 1.92);
+ \draw [dashed] (3, 0) node [below] {\tiny $p_1$} -- (3, 2.3);
+ \draw [dashed] (2, 0) node [below] {\tiny $p_2$} -- (2, 2);
+
+ \node [circ] at (2, 1.39) {};
+ \node [circ] at (3, 1.85) {};
+
+ \node [left] at (2, 1.39) {\scriptsize $K$};
+ \node [right] at (3, 1.85) {\scriptsize $M$};
+ \node [below, mblue] at (0.5, 0.5) {\scriptsize gas};
+ \node [below, morange] at (4.5, 1.3) {\scriptsize liquid};
+ \end{tikzpicture}
+\end{center}
+We see that at a unique point $X$, we have $\mu_L = \mu_G$. We define the \term{vapour pressure} $p(T)$ to be the pressure at $X$.
+
+Geometrically, the equilibrium condition is equivalent to saying
+\[
+ \int_G^L \d p\; v = 0.
+\]
+Thus, $p(T)$ is determined by the condition that the shaded regions have equal area. This is known as the \term{Maxwell construction}.
+\begin{center}
+ \begin{tikzpicture}[yscale=12, xscale=0.3]
+ \draw [->] (0, 0.05) -- (15, 0.05) node [right] {$v$};
+ \draw [->] (0, 0.05) -- (0, 0.3) node [above] {$p$};
+
+ \fill [opacity=0.5, gray] plot coordinates {(1.87, 0.12) (1.9,0.11496) (2.0,0.10000) (2.1,0.09349) (2.2,0.09194)(2.2,0.09194) (2.3,0.09328) (2.4,0.09623) (2.5,0.10000) (2.6,0.10411) (2.7,0.10825) (2.8,0.11224) (2.9,0.11600) (3.0,0.11944)};
+
+ \fill [opacity=0.5, gray] plot coordinates {(3.0, 0.11944) (3.1,0.12257) (3.2,0.12536) (3.3,0.12782) (3.4,0.12997) (3.5,0.13184) (3.6,0.13343) (3.7,0.13477) (3.8,0.13588) (3.9,0.13679) (4.0,0.13750) (4.1,0.13804) (4.2,0.13843) (4.3,0.13867) (4.4,0.13879) (4.5,0.13880)(4.6,0.13871) (4.7,0.13852) (4.8,0.13825) (4.9,0.13791) (5.0,0.13750) (5.1,0.13703) (5.2,0.13652) (5.3,0.13595) (5.4,0.13535) (5.5,0.13471) (5.6,0.13404) (5.7,0.13334) (5.8,0.13262) (5.9,0.13187) (6.0,0.13111) (6.1,0.13033) (6.2,0.12954) (6.3,0.12874) (6.4,0.12793) (6.5,0.12711) (6.6,0.12629) (6.7,0.12546) (6.8,0.12463) (6.9,0.12379) (7.0,0.12296) (7.1,0.12212) (7.2,0.12129) (7.3,0.12046)};
+
+ \draw [morange, thick] plot coordinates {\isothermia};
+ \draw [mred, thick] plot coordinates {\isothermib};
+ \draw [mblue, thick] plot coordinates {\isothermic};
+
+ \draw [dashed] (0, 0.12) -- +(15, 0);
+ \end{tikzpicture}
+\end{center}
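The equal-area condition is easy to implement numerically. The following sketch (illustrative, again in reduced van der Waals variables with $p$, $v$, $T$ in units of their critical values) finds the vapour pressure by bisecting on a trial pressure $p^*$ until the two shaded areas cancel:

```python
import numpy as np

# Reduced van der Waals isotherm (p, v, T in units of their critical values).
def p_bar(v, T):
    return 8 * T / (3 * v - 1) - 3 / v**2

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def maxwell_pressure(T, v_max=50.0, n=200000):
    """Vapour pressure p(T) from the equal-area construction, for T < 1."""
    vs = np.linspace(1/3 + 1e-3, v_max, n)
    ps = p_bar(vs, T)
    # p* must lie between the local minimum of the isotherm (liquid side,
    # v < 1) and its local maximum (gas side, v > 1)
    lo = ps[vs < 1.0].min()
    hi = ps[vs > 1.0].max()

    def area(p_star):
        idx = np.where(np.diff(np.sign(ps - p_star)) != 0)[0]
        vL, vG = vs[idx[0]], vs[idx[-1]]          # outermost crossings
        grid = np.linspace(vL, vG, 20000)
        return trapz(p_bar(grid, T) - p_star, grid)

    for _ in range(60):       # the area decreases with p*, so bisect on it
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if area(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

The bisection works because $\frac{\d}{\d p^*} \int_{v_L}^{v_G} (p(v) - p^*)\;\d v = -(v_G - v_L) < 0$, so the signed area is monotone in $p^*$.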
+We write $N_L$ for the number of particles in the liquid phase. Then we have
+\[
+ N_G = N - N_L = \text{number of particles in G phase}.
+\]
+Then we have
+\[
+ G = N_L g_L + (N - N_L) g_G = N_L \mu_L + (N - N_L) \mu_G.
+\]
+The second law tells us we want to try to minimize $G$. Consider the part of the plot where $p < p(T)$. Since $\mu_G < \mu_L$, we find that $G$ is minimized when $N_L = 0$. What does this mean? If we live in the bit of the liquid curve where $p < p(T)$, namely $JX$, then we are not on an unstable part of the curve, as the liquid obeys the stability condition
+\[
+ \left(\frac{\partial p}{\partial v}\right)_T < 0.
+\]
+However, it has higher Gibbs free energy than the gas phase. So it seems like we wouldn't want to live there. Thus, the liquid is locally stable, but not globally stable. It is \term{meta-stable}. We can indeed prepare such states experimentally. They are long-lived but delicate, and they evaporate whenever perturbed. This is called a \term{superheated liquid}.
+
+Dually, when $p > p(T)$, global stability is achieved when $N_G = 0$. If we have a gas living on $XN$, then this is \term{supercooled vapour}.
+
+The interesting part happens when we look at the case $p = p(T)$. Here if we look at the equation for $G$, we see that $N_L$ is undetermined. We can have an arbitrary portion of $L$ and $G$. To have a better picture of what is going on, we go back to the phase diagram we had before, and add two curves. We define the \term{coexistence curve} to be the line in the $pV$ plane where liquid and gas are in equilibrium, and the \term{spinodal curve} to be the line where $\left(\frac{\partial p}{\partial v}\right)_T = 0$.
+\begin{center}
+ \begin{tikzpicture}[yscale=12, xscale=0.3]
+ \draw [mblue, thick] plot coordinates {\isothermi};
+ \draw [mblue, thick] plot coordinates {\isothermiv};
+ \draw [mblue, thick] plot coordinates {\isothermii};
+
+ \draw [->] (0, 0.05) -- (15, 0.05) node [right] {$v$};
+ \draw [->] (0, 0.05) -- (0, 0.3) node [above] {$p$};
+
+
+ \draw [mred] plot [smooth] coordinates {(1.63, 0.05) (1.87, 0.12) (3.2, 0.18513) (7, 0.12) (9.5, 0.05)};
+ \draw [mgreen] plot [smooth] coordinates {(2.05, 0.05) (2.2, 0.09194) (2.4,0.13909) (3.2, 0.18513) (4.5, 0.13880) (6.6, 0.05) };
+
+ \end{tikzpicture}
+\end{center}
+The unstable states are the states inside the spinodal curve, and the metastable ones are those in between the spinodal curve and coexistence curve. Now suppose we only work with stable states. Then we want to remove the metastable states.
+\begin{center}
+ \begin{tikzpicture}[yscale=12, xscale=0.3]
+ \draw [mblue, thick] plot coordinates {(1.6,0.3) (1.7,0.19847) (1.8,0.14429) (1.87, 0.12)};
+ \draw [mblue, thick] plot coordinates {(7.3,0.12) (7.4,0.11963) (7.5,0.11880) (7.6,0.11798) (7.7,0.11716) (7.8,0.11635) (7.9,0.11554) (8.0,0.11473) (8.1,0.11393) (8.2,0.11314) (8.3,0.11235) (8.4,0.11157) (8.5,0.11080) (8.6,0.11003) (8.7,0.10927) (8.8,0.10851) (8.9,0.10776) (9.0,0.10702) (9.1,0.10629) (9.2,0.10556) (9.3,0.10484) (9.4,0.10413) (9.5,0.10342) (9.6,0.10272) (9.7,0.10203) (9.8,0.10135) (9.9,0.10067) (10.0,0.10000) (10.1,0.09934) (10.2,0.09868) (10.3,0.09803) (10.4,0.09739) (10.5,0.09675) (10.6,0.09613) (10.7,0.09550) (10.8,0.09489) (10.9,0.09428) (11.0,0.09368) (11.1,0.09308) (11.2,0.09249) (11.3,0.09191) (11.4,0.09133) (11.5,0.09076) (11.6,0.09020) (11.7,0.08964) (11.8,0.08909) (11.9,0.08854) (12.0,0.08801) (12.1,0.08747) (12.2,0.08694) (12.3,0.08642) (12.4,0.08590) (12.5,0.08539) (12.6,0.08489) (12.7,0.08438) (12.8,0.08389) (12.9,0.08340) (13.0,0.08291) (13.1,0.08243) (13.2,0.08196) (13.3,0.08149) (13.4,0.08103) (13.5,0.08057) (13.6,0.08011) (13.7,0.07966) (13.8,0.07921) (13.9,0.07877) (14.0,0.07834) (14.1,0.07790) (14.2,0.07748) (14.3,0.07705) (14.4,0.07663) (14.5,0.07622) (14.6,0.07581) (14.7,0.07540) (14.8,0.07500) (14.9,0.07460) (15.0,0.07421)};
+ \draw [mblue, thick] plot coordinates {(1.7,0.3) (1.8,0.21929) (1.9,0.18163) (2.0,0.16000) (2.1,0.14803)};
+ \draw [mblue, thick] plot coordinates {(5.4,0.14899) (5.5,0.14804) (5.6,0.14708) (5.7,0.14611) (5.8,0.14512) (5.9,0.14412) (6.0,0.14311) (6.1,0.14210) (6.2,0.14108) (6.3,0.14006) (6.4,0.13904) (6.5,0.13802) (6.6,0.13700) (6.7,0.13599) (6.8,0.13497) (6.9,0.13396) (7.0,0.13296) (7.1,0.13196) (7.2,0.13097) (7.3,0.12998) (7.4,0.12900) (7.5,0.12803) (7.6,0.12707) (7.7,0.12612) (7.8,0.12517) (7.9,0.12423) (8.0,0.12330) (8.1,0.12238) (8.2,0.12147) (8.3,0.12057) (8.4,0.11968) (8.5,0.11880) (8.6,0.11792) (8.7,0.11706) (8.8,0.11620) (8.9,0.11536) (9.0,0.11452) (9.1,0.11369) (9.2,0.11288) (9.3,0.11207) (9.4,0.11127) (9.5,0.11048) (9.6,0.10970) (9.7,0.10893) (9.8,0.10817) (9.9,0.10741) (10.0,0.10667) (10.1,0.10593) (10.2,0.10520) (10.3,0.10448) (10.4,0.10377) (10.5,0.10307) (10.6,0.10238) (10.7,0.10169) (10.8,0.10101) (10.9,0.10034) (11.0,0.09968) (11.1,0.09902) (11.2,0.09838) (11.3,0.09774) (11.4,0.09710) (11.5,0.09648) (11.6,0.09586) (11.7,0.09525) (11.8,0.09465) (11.9,0.09405) (12.0,0.09346) (12.1,0.09288) (12.2,0.09230) (12.3,0.09173) (12.4,0.09117) (12.5,0.09061) (12.6,0.09006) (12.7,0.08951) (12.8,0.08897) (12.9,0.08844) (13.0,0.08791) (13.1,0.08739) (13.2,0.08688) (13.3,0.08637) (13.4,0.08586) (13.5,0.08537) (13.6,0.08487) (13.7,0.08438) (13.8,0.08390) (13.9,0.08342) (14.0,0.08295) (14.1,0.08248) (14.2,0.08202) (14.3,0.08156) (14.4,0.08111) (14.5,0.08066) (14.6,0.08022) (14.7,0.07978) (14.8,0.07935) (14.9,0.07892) (15.0,0.07849)};
+ \draw [mblue, thick] plot coordinates {\isothermii};
+
+ \draw [->] (0, 0.05) -- (15, 0.05) node [right] {$v$};
+ \draw [->] (0, 0.05) -- (0, 0.3) node [above] {$p$};
+ \end{tikzpicture}
+\end{center}
+It seems like there are some states missing from the curve. But really, there aren't. What the phase diagram shows is the plots of the pure states, i.e.\ those that are purely liquid or gas. The missing portion is where $p = p(T)$, and we just saw that at this particular pressure, we can have any mixture of liquid and gas. Suppose the volumes per particle of the liquid and gas phases are
+\[
+ v_L = \lim_{p \to p(T)^-} v(T, p),\quad v_G = \lim_{p \to p(T)^+} v(T, p).
+\]
+Then if we have $N_L$ many liquid particles and $N_G$ many gas particles, then the total volume is
+\[
+ V = V_L + V_G = N_L v_L + N_G v_G,
+\]
+or equivalently,
+\[
+ v = \frac{N_L}{N} v_L + \frac{N - N_L}{N} v_G.
+\]
+Since $N_L$ is not fixed, $v$ can take any value between $v_L$ and $v_G$. Thus, we should fill the gap with horizontal lines corresponding to liquid-gas equilibrium at $p = p(T)$.
+\begin{center}
+ \begin{tikzpicture}[yscale=12, xscale=0.3]
+ \draw [mblue, thick] plot coordinates {(1.6,0.3) (1.7,0.19847) (1.8,0.14429) (1.87, 0.12)};
+ \draw [mblue, thick] plot coordinates {(7.3,0.12) (7.4,0.11963) (7.5,0.11880) (7.6,0.11798) (7.7,0.11716) (7.8,0.11635) (7.9,0.11554) (8.0,0.11473) (8.1,0.11393) (8.2,0.11314) (8.3,0.11235) (8.4,0.11157) (8.5,0.11080) (8.6,0.11003) (8.7,0.10927) (8.8,0.10851) (8.9,0.10776) (9.0,0.10702) (9.1,0.10629) (9.2,0.10556) (9.3,0.10484) (9.4,0.10413) (9.5,0.10342) (9.6,0.10272) (9.7,0.10203) (9.8,0.10135) (9.9,0.10067) (10.0,0.10000) (10.1,0.09934) (10.2,0.09868) (10.3,0.09803) (10.4,0.09739) (10.5,0.09675) (10.6,0.09613) (10.7,0.09550) (10.8,0.09489) (10.9,0.09428) (11.0,0.09368) (11.1,0.09308) (11.2,0.09249) (11.3,0.09191) (11.4,0.09133) (11.5,0.09076) (11.6,0.09020) (11.7,0.08964) (11.8,0.08909) (11.9,0.08854) (12.0,0.08801) (12.1,0.08747) (12.2,0.08694) (12.3,0.08642) (12.4,0.08590) (12.5,0.08539) (12.6,0.08489) (12.7,0.08438) (12.8,0.08389) (12.9,0.08340) (13.0,0.08291) (13.1,0.08243) (13.2,0.08196) (13.3,0.08149) (13.4,0.08103) (13.5,0.08057) (13.6,0.08011) (13.7,0.07966) (13.8,0.07921) (13.9,0.07877) (14.0,0.07834) (14.1,0.07790) (14.2,0.07748) (14.3,0.07705) (14.4,0.07663) (14.5,0.07622) (14.6,0.07581) (14.7,0.07540) (14.8,0.07500) (14.9,0.07460) (15.0,0.07421)};
+ \draw [mblue, thick] plot coordinates {(1.7,0.3) (1.8,0.21929) (1.9,0.18163) (2.0,0.16000) (2.1,0.14803)};
+ \draw [mblue, thick] plot coordinates {(5.4,0.14899) (5.5,0.14804) (5.6,0.14708) (5.7,0.14611) (5.8,0.14512) (5.9,0.14412) (6.0,0.14311) (6.1,0.14210) (6.2,0.14108) (6.3,0.14006) (6.4,0.13904) (6.5,0.13802) (6.6,0.13700) (6.7,0.13599) (6.8,0.13497) (6.9,0.13396) (7.0,0.13296) (7.1,0.13196) (7.2,0.13097) (7.3,0.12998) (7.4,0.12900) (7.5,0.12803) (7.6,0.12707) (7.7,0.12612) (7.8,0.12517) (7.9,0.12423) (8.0,0.12330) (8.1,0.12238) (8.2,0.12147) (8.3,0.12057) (8.4,0.11968) (8.5,0.11880) (8.6,0.11792) (8.7,0.11706) (8.8,0.11620) (8.9,0.11536) (9.0,0.11452) (9.1,0.11369) (9.2,0.11288) (9.3,0.11207) (9.4,0.11127) (9.5,0.11048) (9.6,0.10970) (9.7,0.10893) (9.8,0.10817) (9.9,0.10741) (10.0,0.10667) (10.1,0.10593) (10.2,0.10520) (10.3,0.10448) (10.4,0.10377) (10.5,0.10307) (10.6,0.10238) (10.7,0.10169) (10.8,0.10101) (10.9,0.10034) (11.0,0.09968) (11.1,0.09902) (11.2,0.09838) (11.3,0.09774) (11.4,0.09710) (11.5,0.09648) (11.6,0.09586) (11.7,0.09525) (11.8,0.09465) (11.9,0.09405) (12.0,0.09346) (12.1,0.09288) (12.2,0.09230) (12.3,0.09173) (12.4,0.09117) (12.5,0.09061) (12.6,0.09006) (12.7,0.08951) (12.8,0.08897) (12.9,0.08844) (13.0,0.08791) (13.1,0.08739) (13.2,0.08688) (13.3,0.08637) (13.4,0.08586) (13.5,0.08537) (13.6,0.08487) (13.7,0.08438) (13.8,0.08390) (13.9,0.08342) (14.0,0.08295) (14.1,0.08248) (14.2,0.08202) (14.3,0.08156) (14.4,0.08111) (14.5,0.08066) (14.6,0.08022) (14.7,0.07978) (14.8,0.07935) (14.9,0.07892) (15.0,0.07849)};
+ \draw [mblue, thick] plot coordinates {\isothermii};
+
+ \draw [->] (0, 0.05) -- (15, 0.05) node [right] {$v$};
+ \draw [->] (0, 0.05) -- (0, 0.3) node [above] {$p$};
+
+
+ \draw [mblue, thick] (1.87, 0.12) -- (7.3, 0.12);
+ \draw [mblue, thick] (2.1, 0.149) -- (5.4, 0.149);
+ \end{tikzpicture}
+\end{center}
+Thus, inside the coexistence curve, the value of $v$ (i.e.\ the size of the container) determines $N_L$.
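In code, the relation between $v$ and the liquid fraction is a one-liner (this weighted-average relation is sometimes called the lever rule; the numbers below are made up for illustration):

```python
# v = x v_L + (1 - x) v_G with x = N_L/N, so the container size fixes the
# liquid fraction. The numerical values here are purely illustrative.
def liquid_fraction(v, v_L, v_G):
    assert v_L <= v <= v_G, "v must lie in the coexistence region"
    return (v_G - v) / (v_G - v_L)

# e.g. with v_L = 0.5, v_G = 1.5 and v = 0.6, the system is mostly liquid
```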
+
+Practically, we can take our gas, and try to study its behaviour at different temperatures and pressures, and see how the phase transition behaves. By doing so, we might be able to figure out $a$ and $b$. Once we know their values, we can use them to predict the value of $T_C$, as well as the temperature needed to liquefy a gas.
+
+\subsubsection*{Clausius--Clapeyron equation}
+We now look at the liquid-gas phase transition in the $(p, T)$ plane.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$T$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$p$};
+
+ \node [circ] at (3, 2.2) {};
+ \node [right] at (3, 2.2) {critical point};
+ \draw (0, 1.4) edge [bend right=10] (3, 2.2);
+ \node at (1, 1.8) {\small $p = p(T)$};
+ \draw [dashed] (3, 0) node [below] {$T_c$} -- (3, 2.2);
+
+ \node at (1.5, 3) {liquid};
+ \node at (1.5, 1) {gas};
+ \end{tikzpicture}
+\end{center}
+Crossing the line $p = p(T)$ results in a \term{phase transition}. The volume changes discontinuously from $v_L(T)$ to $v_G(T)$, or vice versa. On the other hand, $g = \frac{G}{N}$ changes continuously, since we know $g_L = g_G$.
+
+In general, we can write
+\[
+ \d g = \left(\frac{\partial g}{\partial T}\right)_p \;\d T + \left(\frac{\partial g}{\partial p}\right)_T \;\d p.
+\]
+Using
+\[
+ \d G = -S\;\d T + V \;\d p + \mu \;\d N,
+\]
+we find
+\[
+ \d g = - s\;\d T + v\;\d p,
+\]
+where\index{$s$}
+\[
+ s = \frac{S}{N}
+\]
+is the entropy per particle.
+
+Along $p = p(T)$, we know that $g_L = g_G$, and hence
+\[
+ \d g_L = \d g_G.
+\]
+Plugging in our expressions for $\d g$, we find
+\[
+ - s_L \;\d T + v_L \;\d p = - s_G \;\d T + v_G \;\d p,
+\]
+where $s_G$ and $s_L$ are the entropy (per particle) right below and above the line $p = p(T)$.
+
+We can rearrange this to get
+\[
+ \frac{\d p}{\d T} = \frac{s_G - s_L}{v_G - v_L}.
+\]
+This is the \term{Clausius--Clapeyron equation}. Alternatively, we can write this as
+\[
+ \frac{\d p}{\d T} = \frac{S_G - S_L}{V_G - V_L}.
+\]
+There is another way of writing it, in terms of the \term{latent heat of vaporization}
+\[
+ L = T(S_G - S_L),
+\]
+the heat we need to convert things of entropy $S_L$ to things of entropy $S_G$ at temperature $T$. Then we can write it as
+\[
+ \frac{\d p}{\d T} = \frac{L}{T(V_G - V_L)}.
+\]
+This is another form of the Clausius--Clapeyron equation.
+
+\begin{eg}
+ Suppose $L$ does not depend on temperature, and also $V_G \gg V_L$. We further assume that the gas is ideal. Then we have
+ \[
+ \frac{\d p}{\d T} \approx \frac{L}{T V_G} = \frac{L p}{ NkT^2}.
+ \]
+ So we have
+ \[
+ p = p_0 \exp \left(- \frac{L}{NkT}\right)
+ \]
+ for some $p_0$.
+\end{eg}
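We can sanity-check this solution numerically. The sketch below (illustrative only; the value $L/Nk = \SI{5000}{\kelvin}$ is an arbitrary choice) integrates $\frac{\d p}{\d T} = \frac{Lp}{NkT^2}$ by forward Euler and compares against the closed form:

```python
import math

# Forward-Euler integration of dp/dT = L p/(N k T^2), compared with the
# closed form p = p0 exp(-L/(NkT)). L/(Nk) = 5000 K is an arbitrary choice.
L_over_Nk = 5000.0                    # in kelvin
T, p = 300.0, 1.0                     # start from p = 1 (arbitrary units)
p0 = p * math.exp(L_over_Nk / T)      # fixes the integration constant
dT = 0.001
for _ in range(50000):                # integrate from 300 K up to 350 K
    p += dT * L_over_Nk * p / T**2
    T += dT
# p now matches p0 * exp(-L_over_Nk / T) to well under 0.1%
```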
+
+\subsection{Critical point and critical exponents}
+Eventually, we want to understand phase transitions in general. To begin, we will classify phase transitions into different orders. The definition we are using is a more ``modern'' one. There is a more old-fashioned way of defining the order, which we will not use.
+
+\begin{defi}[First-order phase transition]\index{first-order phase transition}
+ A \emph{first-order} phase transition is one with a discontinuity in a first derivative of $G$ (or $F$).
+\end{defi}
+For example, if we have a discontinuity in $S$, then this gives rise to latent heat.
+
+\begin{eg}
+ The liquid-gas phase transition is first-order, provided we stay away from the critical point.
+\end{eg}
+
+\begin{defi}[Second-order phase transition]\index{second-order phase transition}
+ A \emph{second-order phase transition} is one with continuous first order derivatives, but some second (or higher) derivative of $G$ (or $F$) exhibits some kind of singularity.
+\end{defi}
+
+For example, we can look at the \term{isothermal compressibility}\index{$\kappa$}
+\[
+ \kappa = -\frac{1}{v} \left(\frac{\partial v}{\partial p}\right)_T = -\frac{1}{v} \left(\frac{\partial^2 g}{\partial p^2}\right)_T.
+\]
+This is a second-order derivative in $G$. We look at how this behaves at the critical point, at $(T_c, p(T_c))$.
+\begin{center}
+ \begin{tikzpicture}[yscale=12, xscale=0.3]
+ \draw [mblue, thick] plot coordinates {\isothermii};
+
+ \draw [->] (0, 0.05) -- (15, 0.05) node [right] {$v$};
+ \draw [->] (0, 0.05) -- (0, 0.3) node [above] {$p$};
+ \node [circ] at (3.2, 0.18513) {};
+ \node [above] at (3.2, 0.18513) {CP};
+ \end{tikzpicture}
+\end{center}
+Here we have
+\[
+ \left(\frac{\partial p}{\partial v}\right)_T = 0
+\]
+at the critical point. Since $\kappa$ is the inverse of this, it diverges at the critical point.
+
+So the liquid-gas transition is second-order at the critical point.
+
+\begin{eg}
+ For water, we have $T_C = \SI{647}{\kelvin}$, and $p(T_C) = 218$ atm.
+\end{eg}
+
+Experimentally, we observe that liquids and gases are not the only possible phases. For most materials, it happens that solid phases are also possible. In this case, the phase diagram of a normal material looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$T$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$p$};
+
+ \node [circ] at (3, 2.2) {};
+ \draw (0.59, 1.4) node [circ] {} edge [bend right=10] (3, 2.2);
+ \draw [dashed] (3, 0) node [below] {$T_c$} -- (3, 2.2);
+
+ \node at (1, 3.3) {solid};
+ \node at (2, 2.2) {liquid};
+ \node at (1.5, 1) {gas};
+
+ \draw (0, 0) .. controls (1, 2.5) .. (3, 4);
+ \end{tikzpicture}
+\end{center}
+There is clearly another special point on the phase diagram, namely the \term{triple point}. This is the point where all three phases meet, and this is what we previously used to fix the temperature scale in classical thermodynamics.
+
+\subsubsection*{Critical point}
+We are going to spend some more time studying the critical point. Let's first carefully figure out where this point is.
+
+If we re-arrange the van der Waals equation of state, we find
+\[
+ pv^3 - (pb + kT) v^2 + av - ab = 0.\tag{$*$}
+\]
+When $T < T_c$, this has three real roots, $(L, G, U)$. At $T > T_C$, we have a single real root, and two complex conjugate roots.
+
+Thus at $T = T_C$, the three real roots must coincide. Hence, the equation must take the form
+\[
+ p_C(v - v_C)^3 = 0.
+\]
+Equating coefficients, we find
+\[
+ k T_C = \frac{8a}{27 b},\quad v_C = 3b,\quad p_C = \frac{a}{27 b^2} = p(T_C).
+\]
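These values are easy to verify numerically: at $(v_C, T_C)$ both $\left(\frac{\partial p}{\partial v}\right)_T$ and $\left(\frac{\partial^2 p}{\partial v^2}\right)_T$ should vanish, and $p$ should equal $p_C$. A quick sketch, with $a = b = k = 1$ for convenience:

```python
# Numerical check of the van der Waals critical point: at
# (v, T) = (3b, 8a/(27 b k)), both dp/dv and d2p/dv2 vanish,
# and p equals a/(27 b^2). Units chosen so a = b = k = 1.
a = b = k = 1.0
def p(v, T):
    return k * T / (v - b) - a / v**2

vC, TC, pC = 3 * b, 8 * a / (27 * b * k), a / (27 * b**2)
h = 1e-4
d1 = (p(vC + h, TC) - p(vC - h, TC)) / (2 * h)            # central difference
d2 = (p(vC + h, TC) - 2 * p(vC, TC) + p(vC - h, TC)) / h**2
# d1 and d2 vanish up to truncation error, and p(vC, TC) equals pC
```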
+We introduce some new dimensionless ``reduced'' quantities by
+\[
+ \bar{T} = \frac{T}{T_C},\quad \bar{p} = \frac{p}{p_C},\quad \bar{v} = \frac{v}{v_C}.
+\]
+At the critical point, all these values are $1$. Then the van der Waals equation becomes
+\[
+ \bar{p} = \frac{8}{3} \frac{\bar{T}}{\bar{v} - \frac{1}{3}} - \frac{3}{\bar{v}^2}.
+\]
+We have gotten rid of all free parameters! This is called the \term{law of corresponding states}.
+
+Using this, we can compute the ratio
+\[
+ \frac{p_C v_C}{k T_C} = \frac{3}{8} = 0.375.
+\]
+This depends on \emph{no} external parameters. Thus, this is something we can experimentally try to determine, which can put our van der Waals model to test. Experimentally, we find
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ Substance & $p_C v_C/(kT_C)$\\
+ \midrule
+ H$_2$O & 0.23\\
+ He, H$_2$ & 0.3\\
+ Ne, N$_2$, Ar & 0.29\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+So we got the same order of magnitude, but the actual numbers are clearly off. Indeed, there is no reason to expect van der Waals to give quantitative agreement at the critical point, where the density is not small.
+
+But something curious happens. The van der Waals model predicted all gases should look more-or-less the same, but we saw van der Waals is wrong. However, it is still the case that all gases look more-or-less the same. In particular, if we try to plot $\bar{T}$ against $\bar{\rho} = \bar{v}^{-1}$, then we find that all substances seem to give the same coexistence curve! This is the famous \term{Guggenheim plot}. Why is this the case? There are certainly some pieces we are missing.
+
+\subsubsection*{Critical exponents}
+There are more similarities between substances we can talk about. As we tend towards the critical point, the quantities $v_G - v_L$, $T_C - T$ and $p - p_C$ all tend to $0$. On the other hand, we saw the isothermal compressibility diverges as $T \to T_C^+$. One natural question to ask is \emph{how} these tend to $0$ or diverge. Often, these are given by some power law, and the exponents are called the \term{critical exponents}. Mysteriously, it appears that these exponents for all our systems are all the same.
+
+For example, as the critical point is approached along the coexistence curve, we always have
+\[
+ v_G - v_L \sim (T_C - T)^\beta,
+\]
+where $\beta \approx 0.32$.
+
+On the other hand, if we approach the critical point along the critical isotherm $T = T_C$, then we get
+\[
+ p - p_C \sim (v - v_C)^\delta,
+\]
+where $\delta \approx 4.8$.
+
+Finally, we can vary the isothermal compressibility
+\[
+ \kappa = - \frac{1}{v} \left(\frac{\partial v}{\partial p}\right)_T
+\]
+along $v = v_C$ and let $T \to T_C^+$, then we find
+\[
+ \kappa \sim (T - T_C)^{-\gamma},
+\]
+where $\gamma \approx 1.2$.
+
+These numbers $\beta, \gamma, \delta$ are the \emph{critical exponents}, and appear to be the same for all substances. The challenge is to understand why we have this universal behaviour, and, more ambitiously, calculate them.
+
+We start by making a rather innocent assumption. We suppose we can describe a system near the critical point using \emph{some} equation of state, which we assume is analytic. We also assume that it is ``non-degenerate'', in the sense that if there is no reason for the derivative at a point to vanish, then it doesn't.
+
+If this equation of state gives the same qualitative behaviour as van der Waals, i.e.\ we have a critical point which is an inflection point of the isotherm $T = T_C$, then we have
+\[
+ \left(\frac{\partial p}{\partial v}\right)_T = \left(\frac{\partial^2 p}{\partial v^2}\right)_T = 0
+\]
+at the critical point. Therefore, using the non-degeneracy condition, we find
+\[
+ p - p_C \sim (v - v_C)^3
+\]
+near $T = T_C$. So we predict $\delta = 3$, which is wrong.
+
+But let's continue and try to make other predictions. We look at how $\gamma$ behaves. We have
+\[
+ \left(\frac{\partial p}{\partial v}\right)_T (T, v_C) \approx - a(T - T_C)
+\]
+for some $a$ near the critical point. Therefore we find
+\[
+ \kappa \sim (T - T_C)^{-1}
+\]
+if $v = v_C$ and $T \to T_C^+$, and so $\gamma = 1$. This is again wrong.
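For the van der Waals case, this $\gamma = 1$ behaviour can be seen explicitly in reduced variables: at $\bar{v} = 1$, the isotherm has $\frac{\partial \bar{p}}{\partial \bar{v}} = -\frac{24\bar{T}}{(3 - 1)^2} + 6 = 6(1 - \bar{T})$, so $\kappa \propto (T - T_C)^{-1}$ exactly. A quick finite-difference confirmation (a sketch, in reduced units):

```python
# Reduced van der Waals isotherm; at v = 1 (i.e. v = v_C) we have
# dp/dv = 6(1 - T), so kappa is proportional to 1/(T - T_C): gamma = 1.
def p_bar(v, T):
    return 8 * T / (3 * v - 1) - 3 / v**2

def kappa_bar(T, h=1e-6):
    dpdv = (p_bar(1 + h, T) - p_bar(1 - h, T)) / (2 * h)
    return -1.0 / dpdv        # kappa = -(1/v)(dv/dp) at reduced v = 1

# kappa_bar(1 + t) * t stays at 1/6 as t -> 0+, confirming the power law
```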
+
+This is pretty bad. While we do predict universal behaviour, we are always getting the wrong values of the exponents! We might as well work out what $\beta$ is, which is more complicated, because we have to figure out the coexistence curve. So we just compute it using van der Waals along the coexistence curve
+\[
+ \bar{p} = \frac{8 \bar{T}}{ 3 \bar{v}_L - 1} - \frac{3}{\bar{v}_L^2} = \frac{8 \bar{T}}{ 3 \bar{v}_G - 1} - \frac{3}{\bar{v}_G^2}.
+\]
+Rearranging, we find
+\[
+ \bar{T} = \frac{(3 \bar{v}_L - 1)( 3 \bar{v}_G - 1) (\bar{v}_L + \bar{v}_G)}{8 \bar{v}_G^2 \bar{v}_L^2}.\tag{$*$}
+\]
+To understand how this behaves as $T \to T_C$, we need to understand the coexistence curve. We use the Maxwell construction, and near the critical point, we have
+\[
+ \bar{v}_L = 1 + \delta \bar{v}_L,\quad \bar{v}_G = 1 + \delta \bar{v}_G.
+\]
+In the final example sheet, we find that we get
+\[
+ \delta \bar{v}_G = - \delta \bar{v}_L = \frac{\varepsilon}{2}
+\]
+for some $\varepsilon$. Then using $(*)$, we find that
+\[
+ \bar{T} \approx 1 - \frac{1}{16}\varepsilon^2 = 1 - \frac{1}{16} (\bar{v}_G - \bar{v}_L)^2.
+\]
+So we find that
+\[
+ v_G - v_L \sim (T_C - T)^{1/2}.
+\]
+This again doesn't agree with experiment.
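The quadratic approach of $\bar{T}$ to $1$ can be checked directly by substituting $\bar{v}_{L,G} = 1 \mp \varepsilon/2$ into the coexistence relation $(*)$ and expanding (a small sketch, in reduced units):

```python
# Substitute v_L = 1 - eps/2, v_G = 1 + eps/2 into the coexistence relation
# T = (3 v_L - 1)(3 v_G - 1)(v_L + v_G) / (8 v_G^2 v_L^2):
# the gap 1 - T should be eps^2/16 to leading order.
def T_bar(vL, vG):
    return (3 * vL - 1) * (3 * vG - 1) * (vL + vG) / (8 * vG**2 * vL**2)

eps = 1e-3
gap = 1 - T_bar(1 - eps / 2, 1 + eps / 2)
# gap agrees with eps**2/16 up to O(eps**4) corrections,
# and doubling eps quadruples the gap, i.e. beta = 1/2
```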
+
+Why are all these exponents wrong? The answer is fluctuations. Near the critical point, fluctuations are large. So it is not good enough to work with mean values $\bra v\ket$, $\bra p \ket$. Recall that in the grand canonical ensemble, we had
+\[
+ \Delta N^2 = \frac{1}{\beta} \left(\frac{\partial N}{\partial \mu}\right)_{T, V} = \frac{1}{\beta} \left(\frac{\partial N}{\partial p}\right)_{T, V} \left(\frac{\partial p}{\partial \mu}\right)_{T, V}.
+\]
+We can try to work out these terms. Recall that we had the grand canonical potential
+\[
+ \Phi = E - TS - \mu N,
+\]
+which satisfies the first law-like expression
+\[
+ \d \Phi = - S\;\d T - p\;\d V - N \;\d \mu.
+\]
+We also showed before, using extensivity, that
+\[
+ \Phi = - p V.
+\]
+So if we plug this into the first law, we find
+\[
+ -V \;\d p = - S\;\d T - N \;\d \mu.
+\]
+Therefore we know that
+\[
+ \left(\frac{\partial p}{\partial \mu}\right)_{T, V} = \frac{N}{V}.
+\]
+So we find
+\[
+ \Delta N^2 = \frac{N}{\beta V} \left(\frac{\partial N}{\partial p}\right)_{T, V}.
+\]
+To figure out the remaining term, we use the magic identity
+\[
+ \left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1.
+\]
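This identity can be checked on any explicit equation of state. For instance, for the ideal gas $pV = NkT$ (a side check; $k = T = 1$ is an arbitrary choice), with the partial derivatives approximated by central differences:

```python
# Triple product identity checked on the ideal gas p = N k T / V,
# with k = T = 1 (arbitrary), using central finite differences.
k_B = T0 = 1.0
h = 1e-6
p_of = lambda V, N: N * k_B * T0 / V        # p(V, N)
V_of = lambda p, N: N * k_B * T0 / p        # V(p, N), inverting the above
N_of = lambda p, V: p * V / (k_B * T0)      # N(p, V)

V0, N0 = 2.0, 5.0
p0 = p_of(V0, N0)

dp_dV = (p_of(V0 + h, N0) - p_of(V0 - h, N0)) / (2 * h)   # at fixed N
dV_dN = (V_of(p0, N0 + h) - V_of(p0, N0 - h)) / (2 * h)   # at fixed p
dN_dp = (N_of(p0 + h, V0) - N_of(p0 - h, V0)) / (2 * h)   # at fixed V
# the product dp_dV * dV_dN * dN_dp comes out as -1, as the identity demands
```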
+This gives us
+\begin{align*}
+ \Delta N^2 &=- \frac{N}{\beta V} \frac{1}{ \left(\frac{\partial p}{\partial V}\right)_{T, N} \left(\frac{\partial V}{\partial N}\right)_{T, p}} \\
+ &= -\frac{N}{\beta V}\left(\frac{\partial N}{\partial V}\right)_{T, p} \left(\frac{\partial V}{\partial p}\right)_{T, N}\\
+ &= \frac{\kappa N}{\beta}\left(\frac{\partial N}{\partial V}\right)_{T, p}.
+\end{align*}
+Recall that the density is
+\[
+ \rho (T, p, N) = \frac{N}{V},
+\]
+which is an intensive quantity. So we can write it as $\rho(T, p)$. Since we have
+\[
+ N = \rho V,
+\]
+we simply have
+\[
+ \left(\frac{\partial N}{\partial V}\right)_{T, p} = \rho.
+\]
+Therefore we have
+\[
+ \Delta N^2 = \frac{\kappa N}{\beta} \rho.
+\]
+So we find that
+\[
+ \frac{\Delta N^2}{N^2} = \frac{\kappa kT}{V}.
+\]
+This is how big the fluctuations of $N$ are compared to $N$.
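For example, for the ideal gas $pv = kT$ we get $\kappa = 1/p$, and the formula reduces to $\Delta N^2 / N^2 = 1/N$, the familiar $1/\sqrt{N}$ relative fluctuation (a tiny sketch with illustrative numbers):

```python
# For an ideal gas, p v = kT gives kappa = -(1/v)(dv/dp)_T = 1/p, so
# Delta N^2 / N^2 = kappa kT / V = kT/(pV) = 1/N. Illustrative numbers:
kT, p = 1.0, 1.0
N = 10**6
V = N * kT / p              # ideal gas equation of state
kappa = 1.0 / p
rel_fluct_sq = kappa * kT / V
# rel_fluct_sq equals 1/N, so Delta N / N ~ 1/sqrt(N): tiny for macroscopic N
```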
+
+Crucially, this is proportional to $\kappa$, which we have already seen diverges at the critical point. This means near the critical point, we cannot ignore the fluctuations in $N$.
+
+Before we say anything more about this, let's study another system that experiences a phase transition.
+
+\subsection{The Ising model}
+The \emph{Ising model} was invented as a model of a ferromagnet. Consider a $d$-dimensional lattice with $N$ sites. On each of these sites, we have a degree of freedom called the ``spin''
+\[
+ s_i =
+ \begin{cases}
+ +1 & \text{spin up}\\
+ -1 & \text{spin down}
+ \end{cases}.
+\]
+The Hamiltonian is given by
+\[
+ H = -J \sum_{\bra i j\ket} s_i s_j - B \sum_i s_i
+\]
+for some $J, B$. We should think of $B$ as the magnetic field, and for simplicity, we just assume that the magnetic moment is $1$. The first term describes interactions between the spins themselves, where $\sum_{\bra i j\ket}$ denotes the sum over nearest neighbours.
+
+For example in $d = 1$, a lattice looks like
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2,-1,0,1,2} {
+ \node [circ] at (\x, 0) {};
+ }
+ \end{tikzpicture}
+\end{center}
+and the nearest neighbours are the literal neighbours.
+
+In $d = 2$, we can have many different lattices. We could have a square grid
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2,-1,0,1,2} {
+ \foreach \y in {-2,-1,0,1,2} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \node [circ, mred] at (0, 0) {};
+ \node [circ, morange] at (1, 0) {};
+ \node [circ, morange] at (-1, 0) {};
+ \node [circ, morange] at (0, 1) {};
+ \node [circ, morange] at (0, -1) {};
+ \end{tikzpicture}
+\end{center}
+The nearest neighbours of the central red dot are the orange ones.
+
We let $q$ be the number of nearest neighbours. So for example, when $d = 1$, we have $q = 2$; for a $d = 2$ square lattice, we have $q = 4$.
+
+What is $J$? It is clearly a measure of the strength of the interaction. But the sign matters. The case $J > 0$ corresponds to the case where neighbouring spins prefer to align. This is called a \term{ferromagnet}; and the $J < 0$ case corresponds to the case where they prefer to \emph{anti}-align, and this is called an \term{anti-ferromagnet}.
+
+It doesn't really matter a lot, but for the sake of definiteness, we assume $J > 0$.
+
Now if we turn on the $B$ field, then on the basis of energy, the spins will try to align with the $B$ field, and the interactions will further encourage them to align.
+
+But energy is not the only thing that matters. There is also entropy. If all spins align with the field, then entropy will be zero, which is not very good. So there is a competition between energy, which wants to align, and entropy, which prefers to anti-align, and the end result depends on how this competition turns out.
+
+To understand this properly, we use the canonical ensemble
+\[
+ Z = \sum_{\{s_i\}} e^{-\beta E[\{s_i\}]}.
+\]
+We define the \emph{average spin}, or the \term{magnetization}, to be
+\[
+ m = \frac{1}{N} \sum_i \bra s_i\ket = \frac{1}{N\beta} \left(\frac{\partial \log Z}{\partial B}\right)_T.\tag{$\dagger$}
+\]
+It turns out there is a different possible interpretation of the Ising model. It can also be viewed as a description of a gas! Consider ``hard core'' particles living on a lattice. Then there is either $0$ or $1$ particle living on each site $i$. Let's call it $n_i$. We suppose there is an attractive force between the particles, and suppose the kinetic energy is not important. The model is then defined by the Hamiltonian
+\[
+ H = - 4J \sum_{\bra ij\ket} n_i n_j.
+\]
+This is a rather crude model for studying gas on a lattice. To understand this system of gas, we use the grand canonical ensemble
+\[
+ \mathcal{Z}_{\mathrm{gas}} = \sum_{\{n_i\}} e^{-\beta (E [\{n_i\}] - \mu \sum_i n_i)}.
+\]
+We look at the exponent
+\[
+ E[\{n_i\}] - \mu \sum_i n_i = -4J \sum_{\bra ij\ket }n_i n_j - \mu \sum_i n_i.
+\]
We can compare this with the Ising model, and they look very similar. Unsurprisingly, this is equal to $E_{\mathrm{Ising}} [\{s_i\}]$, up to an additive constant (which only contributes an overall factor to the partition function), if we set
+\[
+ n_i = \frac{s_i + 1}{2},
+\]
+and let
+\[
+ B = \frac{\mu}{2} + qJ.
+\]
+Then we have
+\[
+ \mathcal{Z}_{\mathrm{gas}} = Z_{\mathrm{Ising}}.
+\]
+So if we understand one, then we understand the other.
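As a quick sanity check, we can verify this mapping by brute force on a small periodic chain (so $q = 2$). The values of $J$, $\mu$ and $N$ below are purely illustrative; the two exponents agree up to an additive constant, which only contributes an overall factor to the partition function.

```python
import itertools

# Illustrative values (not from the notes); a periodic 1d chain, so q = 2.
J, mu, N = 1.0, 0.7, 6
q = 2
B = mu / 2 + q * J                    # the Ising field from the mapping

def ising_energy(s):
    return (-J * sum(s[i] * s[(i + 1) % N] for i in range(N))
            - B * sum(s))

def gas_exponent(n):
    # E[{n_i}] - mu * sum_i n_i, with E = -4J * (sum over edges of n_i n_j)
    return (-4 * J * sum(n[i] * n[(i + 1) % N] for i in range(N))
            - mu * sum(n))

# The two exponents differ by a constant, which only rescales Z.
offset = -N * (J * q / 2 + mu / 2)
for s in itertools.product([-1, 1], repeat=N):
    n = tuple((si + 1) // 2 for si in s)   # n_i = (s_i + 1)/2
    assert abs(gas_exponent(n) - ising_energy(s) - offset) < 1e-9
print("mapping verified on all", 2 ** N, "configurations")
```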
+
+We now concentrate on $Z_{\mathrm{Ising}}$. Can we actually compute it? It turns out it is possible to do it for $d = 1$, which we will do some time later. In the case $d = 2$ with $B = 0$, we can still manage, but in general, we can't. So we want to develop some approximate method to study them.
+
+We'll do so using \emph{mean field theory}. In general, we would expect $\bra s_i\ket = m$. So, being a good physicist, we expand around this equilibrium, and write
+\begin{align*}
 s_i s_j &= [(s_i - m) + m][(s_j - m) + m] \\
+ &= (s_i - m)(s_j - m) + m (s_j - m) + m (s_i - m) + m^2.
+\end{align*}
+We will assume that
+\[
+ \sum_{\bra ij\ket} (s_i - m)(s_j - m)
+\]
is negligible compared to other terms in $H$. This is a subtle assumption. It is not true that $\bra (s_i - m)^2 \ket$ is negligible. In fact, since $s_i^2 = 1$ always, we must have $\bra s_i^2\ket = 1$. Thus,
+\[
+ \bra (s_i - m)^2\ket = \bra s_i^2\ket - 2m \bra s_i\ket + m^2 = 1 - m^2,
+\]
+and for small magnetization, this is actually pretty huge. However, $\bra s_i s_j\ket$ can behave rather differently than $\bra s_i\ket^2$. If we make suitable assumptions about the fluctuations, this can potentially be small.
+
+Assuming we can indeed neglect that term, we have
+\[
+ H \approx - J \sum_{\bra i j\ket} (m(s_i + s_j) - m^2) - B \sum_i s_i.
+\]
+Now the system decouples! We can simply write this as
+\[
+ H = \frac{1}{2} J N qm^2 - (Jqm + B) \sum s_i.
+\]
+Now this is just the 2-state model we saw at the very beginning of the course, with an \term{effective magnetic field}
+\[
+ B_{\mathrm{eff}} = B + J qm.
+\]
+Thus, we have ``averaged out'' the interactions, and turned it into a shifted magnetic field.
+
+Using this, we get
+\begin{align*}
 Z &\approx \sum_{\{s_i\}} e^{-\frac{1}{2} \beta J N q m^2 + \beta B_{\mathrm{eff}}\sum_i s_i}\\
 &= e^{-\frac{1}{2} \beta J N q m^2} \left(e^{-\beta B_{\mathrm{eff}}} + e^{\beta B_{\mathrm{eff}}}\right)^N\\
 &= e^{-\frac{1}{2} \beta J N q m^2} 2^N \cosh^N (\beta B + \beta Jqm).
+\end{align*}
+Note that the partition function contains a variable $m$, which we do not know about. But we can determine $m(T, B)$ using $(\dagger)$, and this gives
+\[
+ m = \tanh (\beta B + \beta Jqm).\tag{$**$}
+\]
+It is not easy to solve this analytically, but we can graph this and understand it qualitatively:
+
+We first consider the case $B = 0$. Note that for small $x$, we have
+\[
+ \tanh x \approx x - \frac{1}{3} x^3.
+\]
+We can then plot both sides of the equation as a function of $m$. If $\beta Jq < 1$, then we see that the only solution is $m = 0$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -2) -- (0, 2);
+
+ \draw [mblue, thick, domain=-3:3] plot [smooth] (\x, {tanh(0.7*\x)});
+ \draw [mblue, thick] (-2, -2) -- (2, 2);
+ \node [circ] at (0, 0) {};
+ \end{tikzpicture}
+\end{center}
+On the other hand, if $\beta J q > 1$, then we have three possible states:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -2) -- (0, 2);
+
+ \draw [mblue, thick, domain=-3:3] plot [smooth] (\x, {tanh(2*\x)});
+ \draw [mblue, thick] (-2, -2) -- (2, 2);
+
+ \node [circ] at (0.9575, 0.9575) {};
+ \node [circ] at (-0.9575, -0.9575) {};
+ \node [circ] at (0, 0) {};
+
+ \draw [dashed] (0.9575, 0) node [below] {$m_0(T)$} -- (0.9575, 0.9575);
+ \end{tikzpicture}
+\end{center}
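We can check this picture numerically. The sketch below (with illustrative values of $\beta Jq$) iterates $m \mapsto \tanh(\beta Jq m)$ from a non-zero starting point: the iteration collapses to $0$ when $\beta Jq < 1$, and finds the non-trivial intersection when $\beta Jq > 1$.

```python
import math

def iterate_m(bJq, m=0.9, steps=500):
    """Fixed-point iteration of m -> tanh(bJq * m)."""
    for _ in range(steps):
        m = math.tanh(bJq * m)
    return m

print(iterate_m(0.7))   # beta J q < 1: collapses to m = 0
print(iterate_m(2.0))   # beta J q > 1: non-trivial solution m_0 = 0.9575...
```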
+Something interesting happens when $\beta Jq = 1$. We define $T_c$ by
+\[
+ k T_c = Jq.
+\]
+Then we have
+\[
+ \beta Jq = \frac{T_C}{T}.
+\]
We can now rephrase our previous observation as follows --- if $T > T_C$, then there is only one solution to the system, namely $m = 0$. We interpret this as saying that at high temperature, thermal fluctuations prevent the spins from aligning. In this case, entropy wins.
+
+If $T < T_c$, then there are $3$ solutions. One is $m = 0$, and then there are non-trivial solutions $\pm m_0(T)$. Let's look at the $m = 0$ solution first. This we should think of as the unstable solution we had for the liquid-gas equation. Indeed, taking the derivative of $(**)$ with respect to $B$, we find
+\[
+ \left.\left(\frac{\partial m}{\partial B}\right)_T\right|_{B = 0} = \frac{\beta}{1 - \beta J q} < 0.
+\]
So for the same reason as before, this is unstable, and only $\pm m_0(T)$ are physical. In these states the spins align. We have $m_0 \to \pm 1$ as $T \to 0$, and the two signs correspond to pointing up and down.
+
+The interesting part is, of course, what happens near the critical temperature. When we are close to the critical temperature, we can Taylor expand the equation $(**)$. Then we have
+\[
+ m_0 \approx \beta J q m_0 - \frac{1}{3} (\beta J q m_0)^3.
+\]
+We can rearrange this, and we obtain
+\[
+ m_0 \sim (T_c - T)^{1/2}.
+\]
+We can plot the solutions as
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$T$};
+ \draw [->] (0, -3) -- (0, 3) node [above] {$m$};
+
+ \draw [mblue, thick] (0, -2) node [left, black] {$-1$} arc(-90:90:2) node [left, black] {$1$};
+
+ \draw [mblue, thick] (2, 0) -- (5, 0);
+ \node [anchor = north east] at (2, 0) {$T_c$};
+
+ \node [circ] at (2, 0) {};
+ \end{tikzpicture}
+\end{center}
+There is a phase transition at $T = T_C$ for $B = 0$. For $T > T_C$, the stable states have $m = 0$, while for $T < T_C$ the stable states have non-zero $m$. Thus, the magnetization disappears when we are above the critical temperature $T_C$.
+
+We now want to understand the order of the phase transition. For general $B$, we can write the free energy as
+\[
+ F(T, B) = -kT \log Z = \frac{1}{2} JNqm^2 - NkT \log\left(2 \cosh (\beta B + \beta J q m)\right),
+\]
+where $m = m(T, B)$, which we found by solving $(**)$.
+
+Restricting back to $B = 0$, for $T \approx T_C$, we have a small $m$. Then we can expand the expression in powers of $m$, and get
+\[
+ F(T, 0) \approx \frac{1}{2} NkT_C \left(1 - \frac{T_C}{T}\right) m^2 - NkT \log 2.
+\]
+The second term does not depend on $m$, and behaves in a non-singular way. So we are mostly interested in the behaviour of the other part. We have
+\[
+ F + NkT \log 2 \sim
+ \begin{cases}
+ -(T_C - T)^2 & T < T_C\\
+ 0 & T > T_C
+ \end{cases}
+\]
+If we look at the first derivative, then this is continuous at $T = T_C$! However, the second derivative is discontinuous. Hence we have a second-order phase transition at $B = 0$, $T = T_C$.
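We can exhibit this discontinuity numerically from the mean-field free energy itself. A sketch in units $N = k = Jq = 1$ (so $T_c = 1$): finite-differencing $F(T, 0)$ per spin shows the heat capacity $C = -T\,\partial^2 F/\partial T^2$ jumping to $0$ above $T_c$ from roughly $\frac{3}{2}$ just below it (the standard mean-field value $\Delta C = \frac{3}{2}Nk$, not derived in these notes).

```python
import math

def m0(bJq):
    """Positive solution of m = tanh(bJq * m), or 0 if bJq <= 1."""
    if bJq <= 1:
        return 0.0
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.tanh(bJq * mid) > mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def F(T):
    """Mean-field F(T, 0) per spin, in units N = k = Jq = 1 (so Tc = 1)."""
    m = m0(1.0 / T)
    return 0.5 * m * m - T * math.log(2 * math.cosh(m / T))

def C(T, h=1e-4):
    # heat capacity C = -T * d^2F/dT^2, by central differences
    return -T * (F(T + h) - 2 * F(T) + F(T - h)) / h ** 2

print(C(0.999), C(1.001))   # roughly 3/2 just below Tc, 0 just above
```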
+
+We've understood what happens when $B = 0$. Now let's turn on $B$, and put $B \not= 0$. Then this merely shifts the $\tanh$ curve horizontally:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -2) -- (0, 2);
+
+ \draw [mblue, thick, domain=-3:3] plot [smooth] (\x, {tanh(2*\x + 1)});
+ \draw [mblue, thick] (-2, -2) -- (2, 2);
+
+ \node [circ] at (0.99495, 0.99495) {};
+
+ \draw [dashed] (0.99495, 0) node [below] {$m(T, B)$} -- (0.99495, 0.99495);
+ \end{tikzpicture}
+\end{center}
+We see that we always have a unique solution of $m$ of the same sign as $B$. We call this $m(T, B)$. However, when we are at sufficiently low temperature, and $B$ is not too big, then we also have other solutions $U$ and $M$, which are of opposite sign to $B$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -2) -- (0, 2);
+
+ \draw [mblue, thick, domain=-3:3] plot [smooth] (\x, {tanh(2.5*\x + 0.5)});
+ \draw [mblue, thick] (-2, -2) -- (2, 2);
+
+ \node [circ] at (-0.955198, -0.955198) {};
+ \node [circ] at (-0.34299, -0.34299) {};
+
+ \node [above] at (-0.955198, -0.955198) {\scriptsize $M$};
+ \node [left] at (-0.34299, -0.34299) {\scriptsize $U$};
+ \draw [dashed] (0.99493, 0) node [below] {$m(T, B)$} -- (0.99493, 0.99493) node [circ] {};
+ \end{tikzpicture}
+\end{center}
+As before, $U$ is unstable. But $M$ is something new. It is natural to ask ourselves which of the two states have lower free energy. It turns out $M$ has higher $F$, and so it is metastable. This state can indeed be constructed by doing very delicate experiments.
+
+This time, at least qualitatively from the pictures, we see that $m(T, B)$ depends smoothly on $T$ and $B$. So there is no phase transition.
+
+We can also see what happens at high temperature. If $\beta J q \ll 1$ and $\beta |B| \ll 1$, then we can expand $(**)$ and get
+\[
+ m \approx \beta B = \frac{B}{kT}.
+\]
+This is \term{Curie's law}. But now let's do something different. We fix $T$, and vary $B$. If $T > T_c$, then $m(T, B)$ depends smoothly on $B$, and in particular
+\[
+ \lim_{B \to 0} m(T, B) = 0.
+\]
+However, if we start lower than the critical temperature, and start at $B > 0$, then our magnetization is positive. As we decrease $B$, we have $B \to 0^+$, and so $m \to m_0(T)$. However, if we start with negative $B$, we have a negative magnetization. We have
+\[
 \lim_{B \to 0^+} m(T, B) = m_0(T) \not= - m_0(T) = \lim_{B \to 0^-} m(T, B).
+\]
+So we indeed have a phase transition at $B = 0$ for $T < T_C$.
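We can see this jump directly from $(**)$. A sketch with an illustrative $\beta Jq = 2$ (so $T < T_c$): iterating with a tiny field of either sign, starting on the same side as the field so that we land on the stable branch, gives $\pm m_0(T)$.

```python
import math

def solve_m(bJq, bB, m):
    """Iterate m -> tanh(bJq * m + bB), starting on the same side as the field."""
    for _ in range(500):
        m = math.tanh(bJq * m + bB)
    return m

bJq = 2.0                              # below the critical temperature
m_plus = solve_m(bJq, +1e-8, +0.5)     # B -> 0+
m_minus = solve_m(bJq, -1e-8, -0.5)    # B -> 0-
print(m_plus, m_minus)                 # +m_0(T) and -m_0(T), about +/-0.9575
```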
+
+We now want to determine the order of this transition. We note that we have
+\[
 m = \frac{1}{N\beta}\left(\frac{\partial \log Z}{\partial B}\right)_T = - \frac{1}{N} \left(\frac{\partial F}{\partial B}\right)_T.
+\]
+Since $m$ is discontinuous, we know this first derivative of the free energy is discontinuous. So this is a first-order phase transition.
+
+We can plot our rather unexciting phase diagram:
+\begin{center}
+ \begin{tikzpicture}
 \draw [->] (0, 0) -- (5, 0) node [right] {$T$};
 \draw [->] (0, -2) -- (0, 2) node [above] {$B$};
+
+ \node [circ] at (3, 0) {};
+ \draw [mblue, thick] (0, 0) -- (3, 0);
+ \end{tikzpicture}
+\end{center}
+We can compare this with the liquid-gas phase diagram we had:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (5, 0) node [right] {$T$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$p$};
+
+ \node [circ] at (3, 2.2) {};
+ \node [right] at (3, 2.2) {critical point};
+ \draw [mblue, thick] (0, 1.4) edge [bend right=10] (3, 2.2);
+
+ \node at (1.5, 3) {liquid};
+ \node at (1.5, 1) {gas};
+ \end{tikzpicture}
+\end{center}
+It looks \emph{roughly} the same.
+
Let's now compute some critical exponents. We saw that for $B = 0$ and $T \to T_C^-$, we had
+\[
+ m_0 \sim (T_C - T)^\beta,
+\]
+with $\beta = \frac{1}{2}$.
+
+We can also set $T = T_C$, and let $B \to 0$. Then our equation $(**)$ gives
+\[
 m = \tanh\left(\frac{B}{Jq} + m\right).
+\]
+We can invert this equation by taking $\tanh^{-1}$, and get
+\[
+ \frac{B}{Jq} + m = \tanh^{-1} m = m + \frac{1}{3} m^3 + \cdots.
+\]
+So we find
+\[
+ m \approx \left(\frac{3B}{Jq}\right)^{1/3}.
+\]
+So we find $B \sim m^\delta$, where $\delta = 3$.
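This too can be checked numerically. In units with $Jq = 1$, we solve $m = \tanh(B + m)$ by bisection at $T = T_c$ for decreasing $B$ (illustrative values) and compare with $(3B)^{1/3}$:

```python
import math

def m_at_Tc(b):
    """Positive solution of m = tanh(b + m) for b = B/(Jq) > 0, by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.tanh(b + mid) > mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for b in (1e-4, 1e-6, 1e-8):
    print(b, m_at_Tc(b) / (3 * b) ** (1 / 3))   # ratio tends to 1
```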
+
+Finally, we can consider the susceptibility
+\[
+ \chi = N \left(\frac{\partial m}{\partial B}\right)_T.
+\]
+We set $B = 0$, and let $T \to T_C^+$. We then find that
+\[
+ \chi = \frac{N \beta}{1 - \beta J q} \sim (T - T_C)^{-\gamma},
+\]
+where $\gamma = 1$.
+
These were exactly the (\emph{incorrect}) critical exponents we computed for the liquid-gas phase transition using the van der Waals equation!
+
+In the case of van der Waals, we saw that our result was wrong because we made the approximation of ignoring fluctuations. Here we made the mean field approximation. Is this a problem? There are two questions we should ask --- is the qualitative picture of the phase transition correct? And are the critical exponents correct?
+
+The answer depends on the dimension.
+\begin{itemize}
+ \item If $d = 1$, then this is completely wrong! We can solve the Ising model exactly, and the exact solution has no phase transition.
 \item If $d \geq 2$, then the phase diagram is qualitatively correct. Moreover, the critical exponents are correct for $d \geq 4$. To learn more, one should take III Statistical Field Theory.
+ \item In $d = 2$, we can indeed solve the Ising model exactly, and the exact solution gives
+ \[
+ \beta = \frac{1}{8},\quad \gamma = \frac{7}{4},\quad \delta = 15.
+ \]
+ So the mean field approximation is pretty useless at predicting the critical exponents.
+ \item In $d = 3$, then there is no exact solution known, but there has been a lot of progress on this recently, within the last 2 or 3 years! This led to a very accurate calculation of the critical exponents, using a combination of analytic and numerical methods, which gives
+ \[
+ \beta = 0.326,\quad \gamma = 1.237,\quad \delta = 4.790.
+ \]
+ These are actually known to higher accuracy.
+
+ These are exactly the same as the measured critical exponents for the liquid-gas transition!
+\end{itemize}
+So this funny model of spins on a lattice is giving us exactly the same number as real-world liquid-gas! This is evidence for \term{universality}. Near the critical point, the system is losing knowledge of what the individual microscopic description of the system is, and all systems exhibit the same kind of physics. We say the critical points of the three-dimensional Ising model and liquid-gas system belong to the same \term{universality class}. This is something described by a \term{conformal field theory}.
+
+\subsubsection*{One-dimensional Ising model}
+Let's now solve the one-dimensional Ising model. Here we just have
+\[
 H = -J \sum_{i = 1}^N s_i s_{i + 1} - \frac{B}{2} \sum_{i = 1}^N (s_i + s_{i + 1}).
+\]
+To make our study easier, we impose the periodic boundary condition $s_{N + 1} \equiv s_1$. We then have
+\[
 Z = \sum_{s_1 = \pm 1} \sum_{s_2 = \pm 1} \cdots \sum_{s_N = \pm 1} \prod_{i = 1}^N \exp\left(\beta J s_i s_{i + 1} + \frac{\beta B}{2} (s_i + s_{i + 1})\right).
+\]
+We define the symmetric $2 \times 2$ matrix $T$ by
+\[
 T_{st} = \exp\left(\beta Jst + \frac{\beta B}{2} (s + t)\right),
+\]
+where $s, t = \pm 1$. We can then rewrite this $Z$ using matrix multiplication ---
+\[
+ Z = \sum_{s_1} \cdots \sum_{s_N} T_{s_1 s_2} T_{s_2 s_3} \cdots T_{s_N s_1} = \Tr (T^N).
+\]
The trace is the sum of the eigenvalues, and if we know the eigenvalues of $T$, then we know those of $T^N$. We can compute them directly. We have
+\[
+ \lambda_{\pm} = e^{\beta J} \cosh \beta B \pm \sqrt{e^{2\beta J} \cosh^2 \beta B - 2 \sinh 2 \beta J}.
+\]
+This is not terribly attractive, but we can indeed write it down. As expected, these eigenvalues are real, and we picked them so that $\lambda_+ > \lambda_-$. Then we have
+\[
+ Z = \lambda_+^N + \lambda_-^N.
+\]
+We have thus solved the $1d$ Ising model completely. We can now see if there is a phase transition. We have
+\[
+ m = \frac{1}{N\beta} \left(\frac{\partial \log Z}{\partial B}\right)_T = \frac{1}{\beta Z} \left(\lambda_+^{N - 1} \left(\frac{\partial \lambda_+}{\partial B}\right)_T + \lambda_-^{N - 1} \left(\frac{\partial \lambda_-}{\partial B}\right)_T\right).
+\]
+To evaluate this derivative, we see that the $B$-dependence lives in the $\cosh$ terms. So we have
+\[
+ \left( \frac{\partial \lambda_{\pm}}{\partial B}\right)_T \propto \sinh \beta B.
+\]
But this is all we need: we only have to evaluate the derivative at $B = 0$, where $\sinh$ vanishes. So we know that when $B = 0$, we have $m = 0$ for all $T$! So there is no phase transition at $B = 0$.
+
+More generally, we can look at the free energy
+\begin{align*}
+ F &= - \frac{1}{\beta} \log Z \\
+ &= -\frac{1}{\beta} \log \left(\lambda_+^N \left( 1 + \left(\frac{\lambda_-}{\lambda_+}\right)^N\right)\right)\\
+ &= - \frac{N}{\beta} \log \lambda_+ - \frac{1}{\beta} \log \left(1 + \left(\frac{\lambda_-}{\lambda_+}\right)^N\right).
+\end{align*}
+Recall that we said phase transitions are possible only in the thermodynamic limit. So we want to take $N \to \infty$, and see what we get. In this case, we have
+\[
+ \left(\frac{\lambda_-}{\lambda_+}\right)^N \to 0.
+\]
+So we have
+\[
 \frac{F}{N} \to - \frac{1}{\beta} \log \lambda_+.
+\]
+We have $\lambda_+ > 0$, and this depends smoothly on $T$ and $B$ (as $\sqrt{\ph}$ never vanishes). So there is no phase transition.
+
+In fact, this holds for any 1d system without long-range interactions (Peierls).
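In the thermodynamic limit we then have $m = \frac{1}{\beta} \left(\frac{\partial \log \lambda_+}{\partial B}\right)_T$, and we can watch the magnetization vanish smoothly as $B \to 0$ even at fairly low temperature. A sketch with illustrative values, differentiating numerically:

```python
import math

def log_lam_plus(beta, J, B):
    c = math.exp(beta * J) * math.cosh(beta * B)
    return math.log(c + math.sqrt(c * c - 2 * math.sinh(2 * beta * J)))

def m(beta, J, B, h=1e-6):
    # m = (1/beta) * d(log lambda_+)/dB, by central differences
    return (log_lam_plus(beta, J, B + h)
            - log_lam_plus(beta, J, B - h)) / (2 * beta * h)

beta, J = 2.0, 1.0
for B in (1e-2, 1e-3, 1e-4, 0.0):
    print(B, m(beta, J, B))   # no spontaneous magnetization: m -> 0 with B
```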
+
+\subsection{Landau theory}
+What is it that made the Ising model behave so similarly to the liquid-gas model? To understand this, we try to fit these into a ``general'' theory of phase transitions, known as Landau theory. We will motivate Landau theory by trying to capture the essential properties of the mean-field Ising model that lead to the phase transition behaviour. Consequently, Landau theory will predict the same ``wrong'' critical exponents, so perhaps it isn't that satisfactory. Nevertheless, doing Landau theory will help us understand better ``where'' the phase transition came from.
+
+In the mean field approximation for the Ising model, the free energy was given by
+\[
 F (T, B; m) = \frac{1}{2} JNqm^2 - \frac{N}{\beta} \log \left(2 \cosh (\beta B + \beta Jqm)\right),
+\]
+where $m = m(T, B)$ is determined by the equation
+\[
+ m = \frac{1}{N\beta} \left(\frac{\partial \log Z}{\partial B}\right)_T.\tag{$*$}
+\]
+It is convenient to distinguish between the function $F$ we wrote above, and the value of $F$ when we set $m = m(T, B)$. So we write
+\begin{align*}
 \tilde{F} (T, B; m) &= \frac{1}{2} JNqm^2 - \frac{N}{\beta} \log \left(2 \cosh (\beta B + \beta Jqm)\right)\\
 F(T, B) &= \tilde{F}(T, B; m(T, B)).
+\end{align*}
+The idea of Landau theory is to see what happens when we don't impose $(*)$, and take $\tilde{F}$ seriously. There are some questions we can ask about $\tilde{F}$. For example, we can try to minimize $\tilde{F}$. We set
+\[
+ \left(\frac{\partial \tilde{F}}{\partial m}\right)_{T, B} = 0.
+\]
+Then we find
+\[
+ m = \tanh (\beta B + \beta Jqm),
+\]
+which is exactly the same thing we get by imposing $(*)$! Thus, another way to view the mean-field Ising model is that we have a system with free parameters $m, T, B$, and the equilibrium systems are those where $m$ minimizes the free energy.
+
+To do Landau theory, we need a generalization of $m$ in an arbitrary system. The role of $m$ in the Ising model is that it is the \term{order parameter} --- if $m \not= 0$, then we are ordered, i.e.\ spins are aligned; if $m = 0$, then we are disordered.
+
+%We can also identify the order parameter for the liquid gas system. Suppose we always work with
+%
+%Let's fix $v = v_c$, so $\rho = \rho_c$. We imagine varying the temperature. The order parameter, which we shall also call $m$, is $m = \rho - \rho_c$. For $T > T_C$, we just have $m = 0$. However, below $T_C$, we are in a mixture of liquid in gas. The average density is still $\rho_C$, but the values in the individual liquid and gas phases are different!
+%
+%Near the critical point, the volumes can be written as
+%\[
+% v_G = v_C + \delta c_G,\quad v_L = v_C + \delta v_L.
+%\]
+%On the example sheet, we found that
+%\[
+% \delta v_G = - \delta v_L.
+%\]
+%So the variation in the density is
+%\[
+% \delta \rho_G = - \delta \rho_L.
+%\]
+%So we have
+%\[
+% m_G(T) = - m_L(T).
+%\]
After we have identified an order parameter $m$, we need to have some function $\tilde{F}$ such that the equilibria are exactly the minima of $\tilde{F}$. In the most basic setup, $\tilde{F}$ is an analytic function of $T$ and $m$, but in general, we can incorporate some other external parameters, such as $B$. Finally, we assume that we have a $\Z/2\Z$ symmetry, namely $\tilde{F}(T, m) = \tilde{F}(T, -m)$, and moreover that $m$ is sufficiently small near the critical point that we can analyze $\tilde{F}$ using its Taylor expansion.
+
+Since $\tilde{F}$ is an even function in $m$, we can write the Taylor expansion as
+\[
+ \tilde{F} (T, m) = F_0(T) + a(T) m^2 + b(T) m^4 + \cdots.
+\]
+\begin{eg}
+ In the Ising model with $B = 0$, we take
+ \[
 \tilde{F}_{\mathrm{Ising}} (T, m) = -Nk T \log 2 + \frac{NJq}{2} (1 - \beta Jq) m^2 + \frac{N\beta^3 J^4 q^4}{12} m^4 + \cdots.
+ \]
+\end{eg}
+Now of course, the value of $F_0(T)$ doesn't matter. We will assume $b(T) > 0$. Otherwise, we need to look at the $m^6$ terms to see what happens, which will be done in the last example sheet.
+
+There are now two cases to consider. If $a(T) > 0$ as well, then it looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -1.2) -- (0, 3) node [above] {$\tilde{F}$};
+
+ \draw [mblue, thick, domain=-1.414:1.414, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 + (\x/2)^2)});
+ \end{tikzpicture}
+\end{center}
However, if $a(T) < 0$, then $m = 0$ is now a local maximum, and there are two other minima near $m = 0$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -1.2) -- (0, 3) node [above] {$\tilde{F}$};
+
+ \draw [mblue, thick, domain=-2.4:2.4, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 - (\x/2)^2)});
+
+ \draw [dashed] (1.414, 0) node [above] {$m_0(T)$} -- (1.414, -1);
+ \end{tikzpicture}
+\end{center}
+We call the minima $\pm m_0(T)$.
+
Thus, we expect a rather discrete change in behaviour when we transition from $a(T) > 0$ to $a(T) < 0$. We let $T_C$ be the temperature such that $a(T_C) = 0$. This is the \term{critical temperature}. Since we are only interested in the behaviour near the critical point, we may wlog assume that
+\[
+ a(T)
+ \begin{cases}
+ > 0& \text{ if }T > T_C\\
+ = 0& \text{ if }T = T_C\\
+ < 0& \text{ if }T < T_C\\
+ \end{cases}.
+\]
In other words, $a(T)$ has the same sign as $T - T_C$. We will further assume that $a$ has a simple zero at $T = T_C$.
+
+\begin{eg}
+ In the Ising model, we have $a(T) = 0$ iff $kT = Jq$.
+\end{eg}
+We can work out what the values of $m_0(T)$ are, assuming we ignore $O(m^6)$ terms. This is simply given by
+\[
+ m_0(T) = \sqrt{\frac{-a}{2b}}.
+\]
+Having determined the minimum, we plug it back into $\tilde{F}$ to determine the free energy $F(T)$. This gives
+\[
+ F(T) =
+ \begin{cases}
+ F_0(T) & T > T_C\\
+ F_0(T) - \frac{a^2}{4b} & T < T_C
+ \end{cases}.
+\]
+Recall that $a$ passes through $0$ when $T = T_C$, so this is in fact a continuous function.
+
+Now we want to determine the order of this phase transition. Since we assumed $F$ is analytic, we know in particular $F_0$, $a$ and $b$ are smooth functions in $T$. Then we can write
+\[
 a(T) \approx a_0(T - T_C),\quad b(T) \approx b_0 > 0
+\]
+near $T = T_C$, where $a_0 > 0$. Then we get
+\[
+ F(T) =
+ \begin{cases}
+ F_0(T) & T > T_C\\
 F_0(T) - \frac{a_0^2}{4b_0}(T - T_C)^2 & T < T_C
+ \end{cases}.
+\]
+So we see that
+\[
+ S = - \frac{\d F}{\d T}
+\]
+is continuous. So this is not a first-order phase transition, as $S$ is continuous. However, the second-order derivative
+\[
+ C = T \frac{\d S}{\d T}
+\]
+is discontinuous. This is a second-order phase transition. We can calculate critical exponents as well. We have
+\[
+ m_0 \approx \sqrt{\frac{a_0}{2 b_0}} (T_C - T)^{1/2}.
+\]
So we have $\beta = \frac{1}{2}$, which is exactly what we saw for the mean field approximation and van der Waals. So Landau theory gives us the same answer, which is still wrong. However, it does give a qualitative description of a phase transition. In particular, we see where the non-analyticity of the result comes from.
+
+One final comment is that we started with this $\tilde{F}$ which has a reflection symmetry in $m$. However, below the critical temperature, the ground state does not have such symmetry. It is either $+m_0$ or $- m_0$, and this breaks the symmetry. This is known as \term{spontaneous symmetry breaking}.
+
+\subsubsection*{Non-symmetric free energy}
+In our Ising model, we had an external parameter $B$. We saw that when $T < T_C$, then we had a first-order phase transition when $B$ passes through $0$. How can we capture this behaviour in Landau theory?
+
+The key observation is that in the Ising model, when $B \not= 0$, then $\tilde{F}$ is not symmetric under $m \leftrightarrow -m$. Instead, it has the symmetry
+\[
+ \tilde{F}(T, B, m) = \tilde{F}(T, -B, -m).
+\]
+Consider a general Landau theory, where $\tilde{F}$ has an external parameter $B$ with this same symmetry property. We can expand
+\[
+ \tilde{F} = F_0(T, B) + F_1(T, B)m + F_2(T, B) m^2 + F_3(T, B) m^3 + F_4(T, B)m^4 + \cdots.
+\]
In this case, $F_n(T, B)$ is odd/even in $B$ if $n$ is odd/even respectively. We assume $F_4 > 0$, and ignore $O(m^5)$ terms. As before, for any fixed $B$, we either have a single minimum, or a single maximum and two minima.
+
+As before, at high temperature, we assume we have a single minimum. Then the $\tilde{F}$ looks something like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -1.2) -- (0, 3) node [above] {$\tilde{F}$};
+
+ \draw [mblue, thick, domain=-1.3:1.5, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 + (\x/2)^2) - 0.5*\x - 0.3});
+
+ \end{tikzpicture}
+\end{center}
+At low temperature, we assume we go to a situation where $\tilde{F}$ has multiple extrema. Then for $B > 0$, we have
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\tilde{F}$};
+
+ \draw [mblue, thick, domain=-2.23:2.4, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 - (\x/2)^2) - 0.3*\x - 0.3});
+ \node [circ] at (1.48397, -1.73497) {};
+ \node [circ] at (-0.151747, -0.27737) {};
+ \node [circ] at (-1.33222, -0.887656) {};
+
+ \node [above] at (1.48397, -1.73497) {\scriptsize $M$};
+ \node [below] at (-0.151747, -0.27737) {\scriptsize $U$};
+ \node [above] at (-1.33222, -0.887656) {\scriptsize $G$};
+ \end{tikzpicture}
+\end{center}
+So we have a ground state $G$; a metastable state $M$ and an unstable state $U$. When we go to $B = 0$, this becomes symmetric, and it now looks like
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -1.2) -- (0, 3) node [above] {$\tilde{F}$};
+
+ \draw [mblue, thick, domain=-2.4:2.4, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 - (\x/2)^2)});
+
+ \draw [dashed] (1.414, 0) node [above] {$m_0(T)$} -- (1.414, -1);
+
+ \end{tikzpicture}
+\end{center}
+Now we have two ground states. When we move to $B < 0$, the ground state now shifts to the left:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->](-3, 0) -- (3, 0) node [right] {$m$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$\tilde{F}$};
+
+ \draw [mblue, thick, domain=-2.4:2.22, samples=50] plot [smooth] (\x, {4 * ((\x/2)^4 - (\x/2)^2) + 0.3*\x - 0.3});
+ \node [circ] at (-1.48397, -1.73497) {};
+ \node [circ] at (0.151747, -0.27737) {};
+ \node [circ] at (1.33222, -0.887656) {};
+
+ \node [above] at (-1.48397, -1.73497) {\scriptsize $M$};
+ \node [below] at (0.151747, -0.27737) {\scriptsize $U$};
+ \node [above] at (1.33222, -0.887656) {\scriptsize $G$};
+ \end{tikzpicture}
+\end{center}
+Now we can see a first-order phase transition. Our $m$ is discontinuous at $B = 0$. In particular,
+\[
+ \lim_{B \to 0^+} m(T, B) = m_0(T),\quad \lim_{B \to 0^-} m(T, B) = -m_0(T).
+\]
This is exactly the phenomenon we observed previously.
+
+\subsubsection*{Landau--Ginzburg theory*}
The key idea of Landau theory was that we had a single order parameter $m$ that describes the whole system. This ignores fluctuations over large scales, and leads to the ``wrong'' answers. We now want to understand fluctuations better.
+
+\begin{defi}[Correlation function]\index{correlation function}
+ We define the \emph{correlation function}
+ \[
+ G_{ij} = \bra s_i s_j\ket - \bra s_i \ket\bra s_j\ket.
+ \]
+\end{defi}
+Either numerically, or by inspecting the exact solution in low dimensions, we find that
+\[
+ G_{ij} \sim e^{-r_{ij}/\xi},
+\]
+where $r_{ij}$ is the separation between the sites, and $\xi$ is some function known as the \term{correlation length}. The key property of this function is that as we approach the critical point $T = T_C$, $B = 0$, we have $\xi \to \infty$. So we have correlations over arbitrarily large distance.
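For the $1$d Ising model, this decay is easy to see by brute force. A sketch at $B = 0$ with illustrative $\beta J$ (here $\bra s_i\ket = 0$ by symmetry, so $G_{ij} = \bra s_0 s_r\ket$): the correlations decay geometrically, and the fitted $\xi$ is close to the known infinite-chain result $\xi = -1/\log \tanh(\beta J)$ (not derived in these notes), which indeed diverges as $T \to 0$.

```python
import itertools
import math

def corr(beta, J, N, r):
    """Brute-force <s_0 s_r> on a periodic chain at B = 0 (where <s_i> = 0)."""
    num = den = 0.0
    for s in itertools.product([-1, 1], repeat=N):
        w = math.exp(beta * J * sum(s[i] * s[(i + 1) % N] for i in range(N)))
        num += w * s[0] * s[r]
        den += w
    return num / den

beta, J, N = 0.5, 1.0, 12    # illustrative values
G = [corr(beta, J, N, r) for r in range(5)]
xi = -1 / math.log(G[2] / G[1])                  # fitted correlation length
print(G)                                         # decays roughly geometrically
print(xi, -1 / math.log(math.tanh(beta * J)))    # close to each other
```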
+
In this case, it doesn't make much sense to use a single $m$ to represent the average over all lattice sites, as long-range correlations mean there is some significant variation from site to site that we want to take into account. But we still want to do some averaging. The key idea is to average over \emph{short} distances. This is sometimes called \term{coarse graining}.
+
+To do this, we pick some length scale that is large relative to the lattice separation, but small relative to the correlation scale. In other words, we pick a $b$ such that
+\[
+ \xi \gg b \gg a = \text{lattice spacing}.
+\]
+Let $m_i$ be the average of all $s_j$ such that $|\mathbf{r}_i - \mathbf{r}_j| < b$.
+
+Now the idea is to promote these numbers to a smooth function. Given some set of values $\{m_i\}$, we let $m(\mathbf{r})$ be a smooth function such that
+\[
+ m(\mathbf{r}_i) = m_i,
+\]
+with $m$ slowly varying over distances $ < b$. Of course, there is no unique way of doing this, but let's just pick a sensible one.
+
+We now want to regard this $m$ as an order parameter, but this is more general than what we did before. This is a spatially varying order parameter. We define a functional $\tilde{F}[T, m]$ by
+\[
 e^{-\beta \tilde{F}[T, m]} = \sum_{\{s_i\}} e^{-\beta E[\{s_i\}]},
+\]
+but this time we sum only over the $\{s_i\}$ giving $\{m_i\}$ such that $m_i = m(\mathbf{r}_i)$.
+
+In principle, this determines the Landau functional $\tilde{F}$. In practice, we can't actually evaluate this.
+
+To get the full partition function, we want to sum over all $m(\mathbf{r})$. But $m(\mathbf{r})$ is a function! We are trying to do a sum over all possible choices of a smooth function. We formally write this sum as
+\[
+ Z = \int \D m\; e^{-\beta \tilde{F}[T, m(\mathbf{r})]}.
+\]
+This is an example of a \term{functional integral}, or a \term{path integral}. We shall not mathematically define what we mean by this, because this is a major unsolved problem of theoretical physics. In the context we are talking about here, it isn't really a huge problem. Indeed, our problem was initially discrete, and we started with the finitely many points $m_i$. It made sense to sum over all possible combinations of $\{m_i\}$. It's just that for some reason, we decided to promote this to a function $m(\mathbf{r})$ and cause ourselves trouble. However, writing a path integral like this makes it easier to manipulate, at least formally, and in certain other scenarios in physics, such path integrals are inevitable.
+
+Ignoring the problem, for small $m$, if we have reflection symmetry, then we can expand
+\[
+ \tilde{F} = \int \d^d r\; (a(T) m^2 + b(T) m^4 + c(T) (\nabla m)^2 + \cdots).
+\]
+To proceed with this mathematically fearsome path integral, we make an approximation, namely the \term{saddle point approximation}. This says the integral should be dominated by the minimum of $\tilde{F}$, so that
+\[
+ Z \approx e^{-\beta \tilde{F}[T, m]},
+\]
+where $m$ is determined by minimizing $\tilde{F}$, i.e.
+\[
+ \frac{\delta \tilde{F}}{\delta m} = 0.
+\]
+To solve this, we have to vary $m$. A standard variational calculus manipulation (integrating the gradient term by parts, with the boundary terms assumed to vanish) gives
+\begin{align*}
+ \delta \tilde{F} &= \int \d^d r\left(2a m\delta m + 4b m^3\delta m + 2c \nabla m \cdot \nabla (\delta m) + \cdots\right)\\
+ &= \int \d^d r\; (2am + 4b m^3 - 2c \nabla^2 m + \cdots) \delta m.
+\end{align*}
+So the minimum point of $\tilde{F}$ is given by the solution to
+\[
+ c \nabla^2 m = am + 2b m^3 + \cdots .\tag{$*$}
+\]
+Let's first think about what happens when $m$ is constant. Then this equation just reproduces the equation for Landau theory. This ``explains'' why Landau theory works. But in this set up, we can do more than that. We can study corrections to the saddle point approximation, and these are the fluctuations. We find that the fluctuations are negligible if we are in $d \geq 4$. So Landau theory is reliable. However, this is not the case in $d < 4$. It is an unfortunate fact that we live in $3$ dimensions, not $4$.
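+
+To spell out the constant case: if $m$ is constant, the $\nabla^2 m$ term drops out, and $(*)$ reduces to
+\[
+ 0 = am + 2b m^3 + \cdots,
+\]
+which is exactly the condition that $m$ is a stationary point of the Landau free energy density $a m^2 + b m^4 + \cdots$, with solutions $m = 0$ and, for $a < 0$, $m = \pm \sqrt{-a/2b}$.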
+
+But we can also go beyond Landau theory in another way. We can consider the case where $m$ is not constant. Really, we have been doing this all along for the liquid-gas system, because below the critical temperature, we can have a mixture of liquid and gas in our system. We can try to describe it in this set up.
+
+Non-constant solutions are indeed possible for appropriate boundary conditions. For example, suppose $T < T_C$, and assume $m \to m_0(T)$ as $x \to \infty$ and $m \to - m_0(T)$ as $x \to -\infty$. So we have gas on one side and liquid on the other. We can take $m = m(x)$, and then the equation $(*)$ becomes the second-order ODE
+\[
+ \frac{\d^2 m}{\d x^2} = \frac{am}{c} + \frac{2b m^3}{c},
+\]
+and we can check that the solution is given by
+\[
+ m = m_0(T) \tanh\left(\sqrt{\frac{-a}{2c}} x\right),
+\]
+and
+\[
+ m_0 = \sqrt{\frac{-a}{2b}}.
+\]
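+Indeed, the check is a short computation: writing $m = m_0 \tanh(kx)$, we have
+\[
+ \frac{\d^2 m}{\d x^2} = -2k^2 m_0 \tanh(kx)\left(1 - \tanh^2(kx)\right) = -2k^2 \left(m - \frac{m^3}{m_0^2}\right),
+\]
+and comparing coefficients with the differential equation for $m$ gives $-2k^2 = \frac{a}{c}$ and $\frac{2k^2}{m_0^2} = \frac{2b}{c}$, which are solved exactly by $k = \sqrt{-a/2c}$ and $m_0 = \sqrt{-a/2b}$ as stated.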
+This configuration is known as the \term{domain wall} that interpolates between the two ground states. This describes an interface between the liquid and gas phases.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$x$};
+ \draw [->] (0, -2) -- (0, 2) node [above] {$m$};
+
+ \draw [mblue, thick, domain=-3:3] plot [smooth] (\x, {tanh(1.5*\x)});
+
+ \draw [dashed] (-3, 1) -- (3, 1);
+ \draw [dashed] (-3, -1) -- (3, -1);
+
+ \node [circ] at (0, 1) {};
+ \node [anchor = north east] at (0.1, 1) {\small $m_0$};
+ \node [circ] at (0, -1) {};
+ \node [anchor = north east] at (0.1, -1) {\small $-m_0$};
+ \end{tikzpicture}
+\end{center}
+To learn more about this, take Part III Statistical Field Theory.
+\printindex
+\end{document}
diff --git a/books/cam/II_M/algebraic_topology.tex b/books/cam/II_M/algebraic_topology.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e8002c490081cc819377ebed61eef61734b203d7
--- /dev/null
+++ b/books/cam/II_M/algebraic_topology.tex
@@ -0,0 +1,4968 @@
+\documentclass[a4paper]{article}
+
+\def\npart {II}
+\def\nterm {Michaelmas}
+\def\nyear {2015}
+\def\nlecturer {H.\ Wilton}
+\def\ncourse {Algebraic Topology}
+
+\input{header}
+\usetikzlibrary{knots}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Part IB Analysis II is essential, and Metric and Topological Spaces is highly desirable}
+\vspace{10pt}
+
+\noindent\textbf{The fundamental group}\\
+Homotopy of continuous functions and homotopy equivalence between topological spaces. The fundamental group of a space, homomorphisms induced by maps of spaces, change of base point, invariance under homotopy equivalence.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Covering spaces}\\
+Covering spaces and covering maps. Path-lifting and homotopy-lifting properties, and their application to the calculation of fundamental groups. The fundamental group of the circle; topological proof of the fundamental theorem of algebra. *Construction of the universal covering of a path-connected, locally simply connected space*. The correspondence between connected coverings of $X$ and conjugacy classes of subgroups of the fundamental group of $X$.\hspace*{\fill} [5]
+
+\vspace{10pt}
+\noindent\textbf{The Seifert-Van Kampen theorem}\\
+Free groups, generators and relations for groups, free products with amalgamation. Statement *and proof* of the Seifert-Van Kampen theorem. Applications to the calculation of fundamental groups.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{Simplicial complexes}\\
+Finite simplicial complexes and subdivisions; the simplicial approximation theorem.\hspace*{\fill} [3]
+
+\vspace{10pt}
+\noindent\textbf{Homology}\\
+Simplicial homology, the homology groups of a simplex and its boundary. Functorial properties for simplicial maps. *Proof of functoriality for continuous maps, and of homotopy invariance*.\hspace*{\fill} [4]
+
+\vspace{10pt}
+\noindent\textbf{Homology calculations}\\
+The homology groups of $S^n$, applications including Brouwer's fixed-point theorem. The Mayer-Vietoris theorem. *Sketch of the classification of closed combinatorial surfaces*; determination of their homology groups. Rational homology groups; the Euler-Poincar\'e characteristic and the Lefschetz fixed-point theorem.\hspace*{\fill} [5]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In topology, a typical problem is that we have two spaces $X$ and $Y$, and we want to know if $X\cong Y$, i.e.\ if $X$ and $Y$ are homeomorphic. If they are homeomorphic, we can easily show this by writing down a homeomorphism. But what if they are not? How can we prove that two spaces are not homeomorphic?
+
+For example, are $\R^m$ and $\R^n$ homeomorphic (for $m\not= n$)? Intuitively, they should not be, since they have different dimensions, and in fact they are not. But how can we actually prove this?
+
+The idea of algebraic topology is to translate these non-existence problems in topology to non-existence problems in algebra. It turns out we are much better at algebra than topology. It is \emph{much} easier to show that two groups are not isomorphic. For example, we will be able to reduce the problem of whether $\R^m$ and $\R^n$ are homeomorphic (for $m \not= n$) to the question of whether $\Z$ and $\{e\}$ are isomorphic, which is very easy.
+
+While the statement that $\R^m \not\cong \R^n$ for $n \not= m$ is intuitively obvious, algebraic topology can be used to prove some less obvious results.
+
+Let $D^n$ be the $n$-dimensional unit disk, and $S^{n - 1}$ the $(n - 1)$-dimensional unit sphere. We will be able to show that there is no continuous map $F: D^n \to S^{n - 1}$ such that the composition
+\[
+ \begin{tikzcd}
+ S^{n - 1} \ar[r, hook] & D^n \ar[r, "F"] & S^{n - 1}
+ \end{tikzcd}
+\]
+is the identity, where the first arrow is the inclusion map. Alternatively, this says that we cannot continuously map the disk onto the boundary sphere such that the boundary sphere is fixed by the map.
+
+Using algebraic topology, we can translate this statement into an algebraic statement: there is no homomorphism $F: \{0\} \to \Z$ such that
+\[
+ \begin{tikzcd}
+ \Z \ar[r, hook] & \{0\} \ar[r, "F"] & \Z
+ \end{tikzcd}
+\]
+is the identity. This is something we can prove in 5 seconds.
+
+By translating a non-existence problem of a continuous map to a non-existence problem of a homomorphism, we have made our life much easier.
+
+In algebraic topology, we will be developing a lot of machinery to do this sort of translation. However, this machinery is not easy. It will take some hard work, and will be rather tedious and boring at the beginning. So keep in mind that the point of all that hard work is to prove all these interesting theorems.
+
+In case you are completely uninterested in topology, and don't care if $\R^m$ and $\R^n$ are homeomorphic, further applications of algebraic topology include \emph{solving equations}. For example, we will be able to prove the fundamental theorem of algebra (but we won't), as well as Brouwer's fixed point theorem (which says that every continuous function $D^2 \to D^2$ has a fixed point). If you are not interested in these either, you may as well drop this course.
+
+\section{Definitions}
+\subsection{Some recollections and conventions}
+We will start with some preliminary definitions and conventions.
+
+\begin{defi}[Map]
+ In this course, the word \emph{map} will always refer to continuous maps. We are doing topology, and never care about non-continuous functions.
+\end{defi}
+
+We are going to build a lot of continuous maps in algebraic topology. To do so, we will often need to glue maps together. The \emph{gluing lemma} tells us that this works.
+\begin{lemma}[Gluing lemma]
+ If $f: X\to Y$ is a function between topological spaces, and $X = C\cup K$ with $C$ and $K$ both closed in $X$, then $f$ is continuous if and only if the restrictions $f|_C$ and $f|_K$ are both continuous.
+\end{lemma}
+
+\begin{proof}
+ Suppose $f$ is continuous. Then for any closed $A \subseteq Y$, we have
+ \[
+ f|_C^{-1}(A) = f^{-1}(A) \cap C,
+ \]
+ which is closed. So $f|_C$ is continuous. Similarly, $f|_K$ is continuous.
+
+ If $f|_C$ and $f|_K$ are continuous, then for any closed $A \subseteq Y$, we have
+ \[
+ f^{-1}(A) = f|_C^{-1}(A) \cup f|_K^{-1}(A),
+ \]
+ which is closed. So $f$ is continuous.
+\end{proof}
+This lemma is also true with ``closed'' replaced with ``open''. The proof is the same proof with ``closed'' replaced with ``open''.
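+
+The closedness hypothesis cannot simply be dropped. For example, take $X = [0, 1]$ with $C = [0, \frac{1}{2})$ and $K = [\frac{1}{2}, 1]$, and define
+\[
+ f(x) =
+ \begin{cases}
+ 0 & x \in C\\
+ 1 & x \in K.
+ \end{cases}
+\]
+Then $f|_C$ and $f|_K$ are both continuous, being constant, but $f$ is not continuous at $\frac{1}{2}$. Of course, here $C$ is not closed (and $K$ is not open), so neither version of the lemma applies.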
+
+We will also need the following technical lemma about metric spaces.
+
+\begin{lemma}
+ Let $(X, d)$ be a compact metric space. Let $\mathcal{U} = \{U_\alpha\}_{\alpha \in A}$ be an open cover of $X$. Then there is some $\delta$ such that for each $x \in X$, there is some $\alpha \in A$ such that $B_\delta(x) \subseteq U_\alpha$. We call $\delta$ a \emph{Lebesgue number} of this cover.
+\end{lemma}
+
+\begin{proof}
+ Suppose not. Then for each $n \in \N$, there is some $x_n \in X$ such that $B_{1/n} (x_n)$ is not contained in any $U_\alpha$. Since $X$ is compact, the sequence $(x_n)$ has a convergent subsequence. Suppose this subsequence converges to $y$.
+
+ Since $\mathcal{U}$ is an open cover, there is some $\alpha \in A$ such that $y \in U_\alpha$. Since $U_\alpha$ is open, there is some $r > 0$ such that $B_r(y) \subseteq U_\alpha$. Picking $n$ along the subsequence large enough that $\frac{1}{n} < \frac{r}{2}$ and $d(x_n, y) < \frac{r}{2}$, we then have
+ \[
+ B_{1/n}(x_n) \subseteq B_r(y) \subseteq U_\alpha.
+ \]
+ This contradicts our choice of $x_n$.
+\end{proof}
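+
+Compactness is essential here. For example, the open cover
+\[
+ \mathcal{U} = \left\{\left(\tfrac{1}{n}, 1\right): n \geq 2\right\}
+\]
+of the (non-compact) space $(0, 1)$ has no Lebesgue number: for any $\delta > 0$, a point $x \in (0, 1)$ with $x < \delta$ satisfies $B_\delta(x) \supseteq (0, x]$, which contains points arbitrarily close to $0$, so $B_\delta(x)$ is contained in no single $\left(\tfrac{1}{n}, 1\right)$.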
+
+\subsection{Cell complexes}
+In topology, we can construct some really horrible spaces. Even if we require them to be compact, Hausdorff etc, we can often still produce really ugly topological spaces with weird, unexpected behaviour. In algebraic topology, we will often restrict our attention to some nice topological spaces, known as \emph{cell complexes}.
+
+To build cell complexes, we are not just gluing maps, but spaces.
+
+\begin{defi}[Cell attachment]
+ For a space $X$, and a map $f: S^{n - 1}\to X$, the space obtained by \emph{attaching an $n$-cell} to $X$ along $f$ is
+ \[
+ X\cup_{f}D^n = (X\amalg D^n)/{\sim},
+ \]
+ where the equivalence relation $\sim$ is the equivalence relation generated by $x\sim f(x)$ for all $x\in S^{n - 1}\subseteq D^n$ (and $\amalg$ is the disjoint union).
+
+ Intuitively, a map $f: S^{n - 1}\to X$ just picks out a subset of $X$ that looks like the sphere. So we are just sticking a disk onto $X$ by attaching the boundary of the disk onto a sphere within $X$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw ellipse (0.5 and 1);
+
+ \draw [thick, mred, decorate, decoration={snake}] circle [radius=0.2];
+
+ \draw [thick, mred] (2, 0.3) arc (90:270:0.1 and 0.3);
+ \draw [dashed, thick, mred] (2, -0.3) arc (270:450:0.1 and 0.3);
+
+ \fill [mblue, opacity=0.3] (2, 0.3) arc (90:270:0.1 and 0.3) arc (270:450:0.3);
+ \draw [thick, mblue] (2, -0.3) arc (270:450:0.3);
+
+ \draw [dashed] (2, 0.3) -- (0.12, 0.35);
+ \draw [dashed] (2, -0.3) -- (-0.05, -0.23);
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
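+
+\begin{eg}
+ Attaching an $n$-cell to the one-point space $*$ along the unique map $f: S^{n - 1}\to *$ collapses the whole boundary sphere of $D^n$ to a point, so
+ \[
+ *\cup_f D^n = D^n/S^{n - 1}\cong S^n.
+ \]
+\end{eg}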
+
+\begin{defi}[Cell complex]
+ A (finite) \emph{cell complex} is a space $X$ obtained by
+ \begin{enumerate}
+ \item Start with a discrete finite set $X^{(0)}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (-1, -1.2) {};
+ \node [circ] at (1, -1) {};
+ \node [circ] at (0.3, -2) {};
+ \end{tikzpicture}
+ \end{center}
+ \item Given $X^{(n - 1)}$, form $X^{(n)}$ by taking a finite set of maps $\{f_\alpha: S^{n - 1} \to X^{(n - 1)}\}$ and attaching $n$-cells along the $f_\alpha$:
+ \[
+ X^{(n)} = \left(X^{(n - 1)}\amalg \coprod_\alpha D_{\alpha}^n\right)/\{x\sim f_\alpha(x)\}.
+ \]
+ For example, given the $X^{(0)}$ above, we can attach some loops and lines to obtain the following $X^{(1)}$
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (-1, -1.2) {};
+ \node [circ] at (1, -1) {};
+ \node [circ] (3) at (0.3, -2) {};
+
+ \draw (0, 0) -- (-1, -1.2) -- (1, -1) -- (0, 0);
+ \draw (3) edge [loop, looseness=30] (3);
+ \end{tikzpicture}
+ \end{center}
+ We can add surfaces to obtain the following $X^{(2)}$
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (-1, -1.2) {};
+ \node [circ] at (1, -1) {};
+ \node [circ] (3) at (0.3, -2) {};
+
+ \draw [fill = mblue, fill opacity=0.5] (0, 0) -- (-1, -1.2) -- (1, -1) -- (0, 0);
+ \draw (3) edge [loop, looseness=30] (3);
+ \end{tikzpicture}
+ \end{center}
+ \item Stop at some $X = X^{(k)}$. The minimum such $k$ is the \emph{dimension} of $X$.
+ \end{enumerate}
+ To define non-finite cell complexes, we just have to remove the words ``finite'' in the definition and remove the final stopping condition.
+\end{defi}
+
+We have just had an example of a cell complex above, and it is easy to produce many more examples, such as the $n$-sphere. So let's instead look at a space that is \emph{not} a cell complex.
+
+\begin{eg}
+ The following is \emph{not} a cell complex: we take $\R^2$, and add a circle with radius $\frac{1}{2}$ and center $(0, \frac{1}{2})$. Then we add another circle with radius $\frac{1}{4}$ and center $(0, \frac{1}{4})$, then a circle with radius $\frac{1}{8}$ and center $(0, \frac{1}{8})$ etc. We obtain something like
+ \begin{center}
+ \begin{tikzpicture}[scale=2]
+ \foreach \n in {1,...,5} {
+ \pgfmathsetmacro\x{1/(2^\n)};
+ \draw (0, \x) circle [radius=\x];
+ \node [circ, mblue] at (0, 2*\x) {};
+ }
+ \draw [gray] (-2, 0) -- (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ This is known as the \emph{Hawaiian Earring}.
+
+ Why is this not an (infinite) cell complex? We did obtain it by attaching lots of 1-cells to the single point $(0, 0)$. However, in the definition of a cell complex, the cells are supposed to be completely unrelated and disjoint, apart from intersecting at the origin. However, here the circles clump together at the origin.
+
+ In particular, the sequence $(0, 1), (0, \frac{1}{2}), (0, \frac{1}{4}), \cdots$, with one point from each circle, converges to $(0, 0)$. If this were a cell complex, this shouldn't happen, because the cells are unrelated, and picking a point from each cell should not produce a convergent sequence (if you are not convinced, note that if we really had obtained this space by attaching cells, we needn't have attached them this way: we could have made the $n$th cell have radius $n$, and then picking the topmost point of each cell clearly does not produce a convergent sequence).
+
+ We will see that the Hawaiian Earring will be a counterexample to a lot of our theorems here.
+\end{eg}
+
+\section{Homotopy and the fundamental group}
+This will be our first trick to translate topological spaces into groups.
+
+\setcounter{subsection}{-1}
+\subsection{Motivation}
+Recall that we wanted to prove that $\R^n \not\cong \R^m$ for $n\not= m$. Let's first do the simple case, where $m = 1, n = 2$. We want to show that $\R\not\cong \R^2$.
+
+This is not hard. We know that $\R$ is a line, while $\R^2$ is a plane. Let's try to remove a point from each of them. If we remove a point from $\R$, the space stops being path connected. However, removing a point does not have this effect on $\R^2$. Since being path connected is a topological property, we have now shown that $\R$ and $\R^2$ are not homeomorphic.
+
+Unfortunately, this does not extend very far. We cannot use this to show that $\R^2$ and $\R^3$ are not homeomorphic. What else can we do?
+
+Notice that when we remove a point from $\R^2$, sure it is still connected, but something has changed.
+
+Consider a circle containing the origin in $\R^2 \setminus \{0\}$. If the origin were there, we can keep shrinking the circle down until it becomes a point. However, we cannot do this if the origin is missing.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0);
+ \draw (0, -2) -- (0, 2);
+
+ \draw [fill=white] circle [radius=0.075];
+ \draw circle [radius=1.8];
+ \draw [opacity=0.9] circle [radius=1.2];
+ \draw [opacity=0.8] circle [radius=0.9];
+ \draw [opacity=0.7] circle [radius=0.7];
+ \draw [opacity=0.6] circle [radius=0.55];
+ \end{tikzpicture}
+\end{center}
+The strategy now is to exploit the fact that $\R^2 \setminus \{0\}$ has circles which cannot be deformed to points.
+
+\subsection{Homotopy}
+We have just talked about the notion of ``deforming'' circles to a point. We can think of a circle in $X$ as a map $S^1 \to X$, and we want to ``deform'' this map to a point. This process of deformation is known as \emph{homotopy}. Here we are going to use the interval $[0, 1]\subseteq \R $ a lot, and we will just call it $I$.
+\begin{notation}
+ \[
+ I = [0, 1]\subseteq \R.
+ \]
+\end{notation}
+
+\begin{defi}[Homotopy]
+ Let $f, g: X\to Y$ be maps. A \emph{homotopy} from $f$ to $g$ is a map
+ \[
+ H: X\times I \to Y
+ \]
+ such that
+ \[
+ H(x, 0) = f(x),\quad H(x, 1) = g(x).
+ \]
+ We think of the interval $I$ as time. For each time $t$, $H(\ph, t)$ defines a map $X\to Y$. So we want to start from $f$, move with time, and eventually reach $g$.
+
+ If such an $H$ exists, we say $f$ is \emph{homotopic} to $g$, and write $f\simeq g$. If we want to make it explicit that the homotopy is $H$, we write $f \simeq_H g$.
+\end{defi}
+As mentioned at the beginning, by calling $H$ a map, we are requiring it to be continuous.
+
+Sometimes, we don't want a general homotopy. We might want to make sure that when we are deforming from a path $f$ to $g$, the end points of the path don't move. In general, we have
+\begin{defi}[Homotopy $\rel$ A]
+ We say $f$ is \emph{homotopic to} $g$ $\rel A$, written $f\simeq g\rel A$, if there is a homotopy $H: X \times I \to Y$ from $f$ to $g$ such that for all $a \in A\subseteq X$ and all $t \in I$, we have
+ \[
+ H(a, t) = f(a) = g(a).
+ \]
+\end{defi}
+This notion will find itself useful later, but we don't have to pay too much attention to this yet.
+
+Our notation suggests that homotopy is an equivalence relation. Indeed, we have
+\begin{prop}
+ For spaces $X, Y$, and $A\subseteq X$, the ``homotopic $\rel A$'' relation is an equivalence relation. In particular, when $A = \emptyset$, homotopy is an equivalence relation.
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Reflexivity: $f \simeq f$ since $H(x, t) = f(x)$ is a homotopy.
+ \item Symmetry: if $H(x, t)$ is a homotopy from $f$ to $g$, then $H(x, 1 - t)$ is a homotopy from $g$ to $f$.
+ \item Transitivity: Suppose $f, g, h: X\to Y$ and $f\simeq_H g\rel A$, $g\simeq_{H'} h \rel A$. We want to show that $f\simeq h \rel A$. The idea is to ``glue'' the two maps together.
+
+ We know how to continuously deform $f$ to $g$, and from $g$ to $h$. So we just do these one after another.
+ We define $H'': X\times I \to Y$ by
+ \[
+ H''(x, t) =
+ \begin{cases}
+ H(x, 2t) & 0 \leq t \leq \frac{1}{2}\\
+ H'(x, 2t - 1) & \frac{1}{2} \leq t \leq 1
+ \end{cases}
+ \]
+ This is well-defined since $H(x, 1) = g(x) = H'(x, 0)$. This is also continuous by the gluing lemma. It is easy to check that $H''$ is a homotopy $\rel A$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+We now have a notion of equivalence of maps --- two maps are equivalent if they are homotopic. We can extend the notion of homotopy to spaces as well.
+
+Recall that when we defined homeomorphism, we required that there be some $f, g$ such that $f\circ g = \id$, $g\circ f = \id$. Here, we replace equality by homotopy.
+\begin{defi}[Homotopy equivalence]
+ A map $f: X\to Y$ is a \emph{homotopy equivalence} if there exists a $g: Y\to X$ such that $f\circ g \simeq \id_Y$ and $g\circ f \simeq \id_X$. We call $g$ a \emph{homotopy inverse} for $f$.
+
+ If a homotopy equivalence $f: X\to Y$ exists, we say that $X$ and $Y$ are homotopy equivalent and write $X\simeq Y$.
+\end{defi}
+We are soon going to prove that this is indeed an equivalence relation on spaces, but we first look at some examples of homotopy equivalent spaces. Clearly, homeomorphic spaces are homotopy equivalent. However, we will see that we can do much more ``violent'' things to a space and still be homotopy equivalent.
+\begin{eg}
+ Let $X = S^1$, $Y = \R^2 \setminus \{0\}$. We have a natural inclusion map $i: X\hookrightarrow Y$. To obtain a map $Y \to X$, we can project each point onto the circle.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0);
+ \draw (0, -2) -- (0, 2);
+
+ \draw [fill=white] circle [radius=0.075];
+ \draw circle [radius=1.5];
+
+ \node [circ] at (1.06, 1.06) {};
+ \node [circ] at (2, 2) {};
+ \node at (1.06, 1.06) [right] {$r(y)$};
+ \node at (2, 2) [right] {$y$};
+ \draw [dashed] (0.053, 0.053) -- (2, 2);
+ \end{tikzpicture}
+ \end{center}
+ In particular, we define $r: Y\to X$ by
+ \[
+ r(y) = \frac{y}{\|y\|}.
+ \]
+ We immediately have $r\circ i = \id_X$. We will now see that $i\circ r \simeq \id_Y$. The composition $i\circ r$ first projects each point onto $S^1$, and then includes it back into $\R^2 \setminus \{0\}$. So it is just the radial projection map. We can define a homotopy $H: Y\times I \to Y$ by
+ \[
+ H(y, t) = \frac{y}{t + (1 - t)\|y\|}.
+ \]
+ This is continuous, since the denominator $t + (1 - t)\|y\|$ never vanishes for $\|y\| > 0$ and $t \in [0, 1]$, and $H(\ph, 0) = i\circ r$, $H(\ph, 1) = \id_Y$.
+\end{eg}
+As we have said, homotopy equivalence can do really ``violent'' things to a space. We started with a 2-dimensional $\R^2\setminus \{0\}$ space, and squashed it into a one-dimensional sphere.
+
+Hence we see that homotopy equivalence doesn't \emph{care} about dimensions. Dimensions seem to be a rather fundamental thing in geometry, and we are discarding it here. So what is left? What does homotopy equivalence preserve?
+
+While $S^1$ and $\R^2 \setminus \{0\}$ seem rather different, they have something in common --- they both have a ``hole''. We will later see that this is what homotopy equivalence preserves.
+
+\begin{eg}
+ Let $Y = \R^n$, $X = \{0\} = *$. Let $X\hookrightarrow Y$ be the inclusion map, and $r: Y\to X$ be the unique map that sends everything to $\{0\}$. Again, we have $r\circ i = \id_X$. We can also obtain a homotopy from $i \circ r$ to $\id_Y$ by
+ \[
+ H(y, t) = ty.
+ \]
+\end{eg}
+Again, from the point of view of homotopy theory, $\R^n$ is just the same as a point! You might think that this is something crazy to do --- we have just given up a lot of structure of topological spaces. However, by giving up these structures, it is easier to focus on what we really want to care about --- holes. For example, it is often much easier to argue about the one-point space $*$ than the whole of $\R^2$! By studying properties that are preserved by homotopy equivalence, and not just homeomorphism, we can simplify our problems by reducing complicated spaces to simpler ones via homotopy equivalence.
+
+In general, spaces homotopy equivalent to a single point are known as \emph{contractible spaces}.
+\begin{notation}
+ $*$ denotes the one-point space $\{0\}$.
+\end{notation}
+
+\begin{defi}[Contractible space]
+ If $X\simeq *$, then $X$ is \emph{contractible}.
+\end{defi}
+
+We now show that homotopy equivalence of spaces is an equivalence relation. To do this, we first need a lemma.
+\begin{lemma}
+ Consider the spaces and arrows
+ \[
+ \begin{tikzcd}
+ X \ar[r, bend left, "f_0"] \ar[r, bend right, "f_1"'] & Y \ar[r, bend left, "g_0"] \ar[r, bend right, "g_1"'] & Z
+ \end{tikzcd}
+ \]
+ If $f_0\simeq_H f_1$ and $g_0\simeq_{H'} g_1$, then $g_0\circ f_0 \simeq g_1 \circ f_1$.
+\end{lemma}
+
+\begin{proof}
+ We will show that $g_0 \circ f_0 \simeq g_0 \circ f_1 \simeq g_1 \circ f_1$. Then we are done since homotopy between maps is an equivalence relation. So we need to write down two homotopies.
+
+ \begin{enumerate}
+ \item Consider the following composition:
+ \[
+ \begin{tikzcd}
+ X\times I \ar[r, "H"] & Y \ar[r, "g_0"] & Z
+ \end{tikzcd}
+ \]
+ It is easy to check that this is the first homotopy we need to show $g_0\circ f_0 \simeq g_0 \circ f_1$.
+ \item The following composition is a homotopy from $g_0 \circ f_1$ to $g_1 \circ f_1$:
+ \[
+ \begin{tikzcd}
+ X\times I \ar[r, "f_1\times \id_I"] & Y\times I \ar[r, "H'"] & Z
+ \end{tikzcd}\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{prop}
+ Homotopy equivalence of spaces is an equivalence relation.
+\end{prop}
+
+\begin{proof}
+ Symmetry and reflexivity are trivial. To show transitivity, let $f: X \to Y$ and $h: Y \to Z$ be homotopy equivalences, and $g: Y \to X$ and $k: Z \to Y$ be their homotopy inverses. We will show that $h \circ f: X \to Z$ is a homotopy equivalence with homotopy inverse $g \circ k$. We have
+ \[
+ (h \circ f) \circ (g \circ k) = h \circ (f \circ g) \circ k \simeq h \circ {\id_Y} \circ k = h \circ k \simeq \id_Z.
+ \]
+ Similarly,
+ \[
+ (g \circ k) \circ (h \circ f) = g \circ (k \circ h) \circ f \simeq g \circ {\id_Y} \circ f = g \circ f \simeq \id_X.
+ \]
+ So done.
+\end{proof}
+
+\begin{defi}[Retraction]
+ Let $A\subseteq X$ be a subspace. A \emph{retraction} $r: X\to A$ is a map such that $r\circ i = \id_A$, where $i: A\hookrightarrow X$ is the inclusion. If such an $r$ exists, we call $A$ a \emph{retract} of $X$.
+\end{defi}
+This map sends everything in $X$ to $A$ without moving things in $A$. Roughly speaking, if such a retraction exists, then $A$ is no more complicated than $X$.
+
+\begin{defi}[Deformation retraction]
+ The retraction $r$ is a \emph{deformation retraction} if $i\circ r \simeq \id_X$. A deformation retraction is \emph{strong} if we require this homotopy to be a homotopy rel $A$.
+\end{defi}
+Roughly, this says that $A$ is as complicated as $X$.
+
+\begin{eg}
+ Take $X$ any space, and $A = \{x\}\subseteq X$. Then the constant map $r: X\to A$ is a retraction. If $X$ is contractible, then $A$ is a deformation retract of $X$.
+\end{eg}
+
+\subsection{Paths}
+\begin{defi}[Path]
+ A \emph{path} in a space $X$ is a map $\gamma: I\to X$. If $\gamma(0) = x_0$ and $\gamma(1) = x_1$, we say $\gamma$ is a path from $x_0$ to $x_1$, and write $\gamma: x_0 \rightsquigarrow x_1$.
+
+ If $\gamma(0) = \gamma(1)$, then $\gamma$ is called a \emph{loop} (based at $x_0$).
+ \begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.6) (0.3, 0.9) (-1.7, 0.8)};
+
+ \node [circ] at (-1, 0.2) {};
+ \node [above] at (-1, 0.2) {$x_0$};
+ \node [circ] at (0.8, 0.1) {};
+ \node [above] at (0.8, 0.1) {$x_1$};
+ \draw plot [smooth] coordinates {(-1, 0.2) (-0.3, 0) (0.1, 0.3) (0.8, 0.1)};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+Note that this map does not have to be injective. It can be self-intersecting or do all sorts of weird stuff.
+
+Recall that the basic idea of algebraic topology is to assign to each space $X$ an (algebraic) object, which is (hopefully) easier to deal with. In homotopy theory, we do so using the idea of paths.
+
+To do so, we need to be able to perform operations on paths.
+\begin{defi}[Concatenation of paths]
+ If we have two paths $\gamma_1$ from $x_0$ to $x_1$; and $\gamma_2$ from $x_1$ to $x_2$, we define the \emph{concatenation} to be
+ \[
+ (\gamma_1\cdot \gamma_2)(t) =
+ \begin{cases}
+ \gamma_1(2t) & 0 \leq t \leq \frac{1}{2}\\
+ \gamma_2(2t - 1) & \frac{1}{2} \leq t \leq 1.
+ \end{cases}
+ \]
+ This is continuous by the gluing lemma.
+\end{defi}
+Note that we concatenate left to right, but function composition goes from right to left.
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-2.4, -0.7) (0, -0.7) (1.4, -1) (2.6, -0.9) (2.8, 0.6) (0.6, 1.9) (-3.4, 0.8)};
+
+ \node [circ] at (-2, 0.2) {};
+ \node [above] at (-2, 0.2) {$x_0$};
+ \node [circ] at (-0.2, 0.1) {};
+ \node [above] at (-0.2, 0.1) {$x_1$};
+ \node [circ] at (2, 0.5) {};
+ \node [above] at (2, 0.5) {$x_2$};
+ \draw plot [smooth] coordinates {(-2, 0.2) (-1.3, 0) (-0.9, 0.3) (-0.2, 0.1) (1, 0.6) (2, 0.5)};
+ \end{tikzpicture}
+\end{center}
+\begin{defi}[Inverse of path]
+ The \emph{inverse} of a path $\gamma: I \to X$ is defined by
+ \[
+ \gamma^{-1}(t) = \gamma(1 - t).
+ \]
+ This is exactly the same path but going in the opposite direction.
+\end{defi}
+
+What else do we need? We need an identity.
+\begin{defi}[Constant path]
+ The \emph{constant path} at a point $x\in X$ is given by $c_x(t) = x$.
+\end{defi}
+
+We haven't actually got a good algebraic system. We have $\gamma$ and $\gamma^{-1}$, but when we concatenate them, we get a path from $x_0$ to $x_1$ and back, not the constant path. Also, we are not able to combine arbitrary paths in a space.
+
+Before we make these into proper algebraic operations, we will talk about something slightly different. We can view this as a first attempt at associating things to topological spaces.
+
+\begin{defi}[Path components]
+ We can define a relation on $X$: $x_1 \sim x_2$ if there exists a path from $x_1$ to $x_2$. By the concatenation, inverse and constant paths, $\sim$ is an equivalence relation. The equivalence classes $[x]$ are called \emph{path components}. We denote the quotient $X/{\sim}$ by $\pi_0(X)$.
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw [fill=mblue, opacity=0.5] circle [radius = 1];
+ \draw (3, 2.5) [fill=mblue, opacity=0.5] circle [radius = 1.3];
+ \draw (6, 1) [fill=mblue, opacity=0.5] circle [radius = 1.2];
+ \end{tikzpicture}
+\end{center}
+In the above space, we have three path components.
+
+This isn't really a very useful definition, since most spaces we care about are path-connected, i.e.\ only have one path component. However, this is a first step at associating \emph{something} to spaces. We can view this as a ``toy model'' for the more useful definitions we will later have.
+
+One important property of this $\pi_0$ is that not only does it associate a set to each topological space, but also associates a function between the corresponding sets to each continuous map.
+\begin{prop}
+ For any map $f: X\to Y$, there is a well-defined function
+ \[
+ \pi_0(f): \pi_0(X) \to \pi_0(Y),
+ \]
+ defined by
+ \[
+ \pi_0(f)([x]) = [f(x)].
+ \]
+ Furthermore,
+ \begin{enumerate}
+ \item If $f\simeq g$, then $\pi_0(f) = \pi_0(g)$.
+ \item For any maps $\begin{tikzcd} A \ar[r, "h"] & B\ar[r, "k"] & C\end{tikzcd}$, we have $\pi_0(k\circ h) = \pi_0(k)\circ \pi_0 (h)$.
+ \item $\pi_0(\id_X) = \id_{\pi_0(X)}$
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ To show this is well-defined, suppose $[x] = [y]$. Then let $\gamma: I \to X$ be a path from $x$ to $y$. Then $f \circ \gamma$ is a path from $f(x)$ to $f(y)$. So $[f(x)] = [f(y)]$.
+ \begin{enumerate}
+ \item If $f \simeq g$, let $H: X \times I \to Y$ be a homotopy from $f$ to $g$. Let $x \in X$. Then $H(x, \ph)$ is a path from $f(x)$ to $g(x)$. So $[f(x)] = [g(x)]$, i.e.\ $\pi_0(f)([x]) = \pi_0(g)([x])$. So $\pi_0(f) = \pi_0(g)$.
+ \item $\pi_0(k \circ h)([x]) = [k\circ h(x)] = [k(h(x))] = \pi_0(k)([h(x)]) = \pi_0(k) \circ \pi_0(h)([x])$.
+ \item $\pi_0(\id_X)([x]) = [\id_X(x)] = [x]$. So $\pi_0(\id_X) = \id_{\pi_0(X)}$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{cor}
+ If $f: X\to Y$ is a homotopy equivalence, then $\pi_0(f)$ is a bijection.
+\end{cor}
+
+\begin{eg}
+ The two point space $X = \{-1, 1\}$ is not contractible, because $|\pi_0(X)| = 2$, but $|\pi_0(*)| = 1$.
+\end{eg}
+This is a rather silly example, since we can easily prove it directly. However, this is an example to show how we can use this machinery to prove topological results.
+
+Now let's return to our operations on paths, and try to make them algebraic.
+
+\begin{defi}[Homotopy of paths]
+ Paths $\gamma, \gamma': I\to X$ are \emph{homotopic as paths} if they are homotopic rel $\{0, 1\}\subseteq I$, i.e.\ the end points are fixed. We write $\gamma\simeq \gamma'$.
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.6) (0.3, 1.2) (-1.7, 0.8)};
+
+ \node [circ] at (-1, 0.2) {};
+ \node [left] at (-1, 0.2) {$x_0$};
+ \node [circ] at (0.8, 0.1) {};
+ \node [right] at (0.8, 0.1) {$x_1$};
+ \draw [semithick, mred] plot [smooth] coordinates {(-1, 0.2) (-0.5, 0.6) (0.1, 0.8) (0.8, 0.1)};
+ \draw [semithick, mred!50!mblue] plot [smooth] coordinates {(-1, 0.2) (-0.3, 0) (0.1, 0.3) (0.8, 0.1)};
+ \draw [semithick, mblue] plot [smooth] coordinates {(-1, 0.2) (-0.3, -0.5) (0.4, -0.5) (0.8, 0.1)};
+ \end{tikzpicture}
+\end{center}
+Note that we would necessarily want to fix the two end points. Otherwise, if we allow end points to move, we can shrink any path into a constant path, and our definition of homotopy would be rather silly.
+
+This homotopy works well with our previous operations on paths.
+\begin{prop}
+ Let $\gamma_1, \gamma_2: I \to X$ be paths with $\gamma_1(1) = \gamma_2(0)$. If $\gamma_1\simeq \gamma_1'$ and $\gamma_2 \simeq \gamma_2'$, then $\gamma_1 \cdot \gamma_2 \simeq \gamma_1' \cdot \gamma_2'$.
+\end{prop}
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-2.4, -0.7) (0, -0.7) (1.4, -1) (2.6, -0.9) (2.8, 0.6) (0.6, 1.9) (-3.4, 0.8)};
+
+ \node [circ] at (-2, 0.2) {};
+ \node [anchor = south east] at (-2, 0.2) {$x_0$};
+ \node [circ] at (-0.2, 0.1) {};
+ \node [above] at (-0.2, 0.1) {$x_1$};
+ \node [circ] at (2, 0.5) {};
+ \node [above] at (2, 0.5) {$x_2$};
+ \draw [semithick, mred] (-2, 0.2) parabola bend (-1.1, -0.3) (-0.2, 0.1);
+ \draw [semithick, mred] (-0.2, 0.1) parabola bend (1, -0.5) (2, 0.5);
+
+ \draw [semithick, mblue] (-2, 0.2) parabola bend (-1.3, 0.7) (-0.2, 0.1);
+ \draw [semithick, mblue] (-0.2, 0.1) parabola bend (1.3, 0.7) (2, 0.5);
+
+ \node [mred, below] at (-1.1, -0.3) {$\gamma_1$};
+ \node [mred, below] at (1, -0.5) {$\gamma_2$};
+
+ \node [mblue, above] at (-1.3, 0.7) {$\gamma_1'$};
+ \node [mblue, above] at (1, 0.7) {$\gamma_2'$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{proof}
+ Suppose that $\gamma_1 \simeq_{H_1}\gamma_1'$ and $\gamma_2\simeq_{H_2}\gamma_2'$. Then we have the diagram
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (2, 2);
+ \draw (2, 0) rectangle (4, 2);
+ \node at (1, 0) [below] {\small$\gamma_1$};
+ \node at (3, 0) [below] {\small$\gamma_2$};
+ \node at (1, 2) [above] {\small$\gamma_1'$};
+ \node at (3, 2) [above] {\small$\gamma_2'$};
+ \node at (0, 1) [left] {\small$x_0$};
+ \node at (2, 1) [left] {\small$x_1$};
+ \node at (4, 1) [right] {\small$x_2$};
+
+ \node at (1, 1) {\large $H_1$};
+ \node at (3, 1) {\large $H_2$};
+ \end{tikzpicture}
+ \end{center}
+ We can thus construct a homotopy by
+ \[
+ H(s, t) =
+ \begin{cases}
+ H_1(s, 2t) & 0 \leq t \leq \frac{1}{2}\\
+ H_2(s, 2t - 1) & \frac{1}{2} \leq t \leq 1
+ \end{cases}.\qedhere
+ \]
+\end{proof}
+
+To solve all our previous problems about operations on paths not behaving well, we can look at paths \emph{up to homotopy}.
+\begin{prop}
+ Let $\gamma_0: x_0 \leadsto x_1, \gamma_1: x_1 \leadsto x_2, \gamma_2: x_2 \leadsto x_3$ be paths. Then
+ \begin{enumerate}
+ \item $(\gamma_0 \cdot \gamma_1)\cdot \gamma_2 \simeq \gamma_0 \cdot (\gamma_1\cdot \gamma_2)$
+ \item $\gamma_0 \cdot c_{x_1}\simeq \gamma_0 \simeq c_{x_0}\cdot \gamma_0$.
+ \item $\gamma_0 \cdot \gamma_0^{-1}\simeq c_{x_0}$ and $\gamma_0^{-1}\cdot \gamma_0 \simeq c_{x_1}$.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Consider the following diagram:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw rectangle (4, 4);
+
+ \node at (0, 2) [left] {\small$x_0$};
+ \node at (4, 2) [right] {\small$x_3$};
+
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (2, 4) {};
+ \node [circ] at (3, 4) {};
+
+ \node [below] at (0.5, 0) {\small$\gamma_0$};
+ \node [below] at (1.5, 0) {\small$\gamma_1$};
+ \node [below] at (3, 0) {\small$\gamma_2$};
+
+ \node [above] at (1, 4) {\small$\gamma_0$};
+ \node [above] at (2.5, 4) {\small$\gamma_1$};
+ \node [above] at (3.5, 4) {\small$\gamma_2$};
+
+ \draw (1, 0) -- (2, 4) node [pos=0.5, left] {\small$x_1$};
+ \draw (2, 0) -- (3, 4) node [pos=0.5, left] {\small$x_2$};
+ \end{tikzpicture}
+ \end{center}
+ \item Consider the following diagram:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw rectangle (4, 4);
+ \node [circ] at (2, 0) {};
+ \node at (1, 0) [below] {\small$\gamma_0$};
+ \node at (3, 0) [below] {\small$c_{x_1}$};
+ \node at (2, 4) [above] {\small$\gamma_0$};
+ \node at (0, 2) [left] {\small$x_0$};
+ \node at (4, 2) [right] {\small$x_1$};
+
+ \draw (2, 0) -- (4, 4) node [pos=0.5, left] {\small$x_1$};
+ \end{tikzpicture}
+ \end{center}
+ \item Consider the following diagram:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw rectangle (4, 4);
+ \node [circ] at (2, 0) {};
+ \node at (1, 0) [below] {\small$\gamma_0$};
+ \node at (3, 0) [below] {\small$\gamma_0^{-1}$};
+ \node at (2, 4) [above] {\small$c_{x_0}$};
+ \node at (0, 2) [left] {\small$x_0$};
+ \node at (4, 2) [right] {\small$x_0$};
+
+ \draw (0, 4) -- (2, 0) -- (4, 4);
+ \end{tikzpicture}
+ \end{center}
+ \end{enumerate}
+ Turning these into proper proofs is left as an exercise for the reader.
+\end{proof}
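+
+For example, for (i), one possible explicit homotopy, which can be read off from the first diagram, is the reparametrisation
+\[
+ H(s, t) =
+ \begin{cases}
+ \gamma_0\left(\frac{4s}{1 + t}\right) & 0 \leq s \leq \frac{1 + t}{4}\\
+ \gamma_1(4s - 1 - t) & \frac{1 + t}{4} \leq s \leq \frac{2 + t}{4}\\
+ \gamma_2\left(1 - \frac{4(1 - s)}{2 - t}\right) & \frac{2 + t}{4} \leq s \leq 1
+ \end{cases},
+\]
+where $s$ is the path parameter and $t$ the homotopy parameter. At $t = 0$ this is $(\gamma_0\cdot \gamma_1)\cdot \gamma_2$, at $t = 1$ it is $\gamma_0\cdot (\gamma_1\cdot \gamma_2)$, and along the two slanted lines it is constantly $x_1$ and $x_2$ respectively, so the end points are fixed throughout.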
+
+\subsection{The fundamental group}
+The idea is to take spaces and turn them into groups. We want to try to do this using paths. We've seen that if we want to do this, we should not work directly with paths, but paths up to homotopy. According to our proposition, this operation satisfies associativity, inverses and identity. The last issue we have to resolve is that we can't actually put two paths together unless they have the same start and end points.
+
+The idea is to fix one of the points $x_0$ in our space, and only think about loops that start and end at $x_0$. So we can always join two paths together.
+
+This tells us that we aren't going to just think about spaces, but spaces \emph{with basepoints}. We've now got everything we need.
+
+\begin{defi}[Fundamental group]
+ Let $X$ be a space and $x_0 \in X$. The \emph{fundamental group} of $X$ (based at $x_0$), denoted $\pi_1(X, x_0)$, is the set of homotopy classes of loops in $X$ based at $x_0$ (i.e.\ $\gamma(0) = \gamma(1) = x_0$). The group operations are defined as follows:
+
+ We define an operation by $[\gamma_0][\gamma_1] = [\gamma_0\cdot \gamma_1]$; inverses by $[\gamma]^{-1} = [\gamma^{-1}]$; and the identity as the constant path $e = [c_{x_0}]$.
+\end{defi}
+Often, when we write the homotopy classes of paths $[\gamma]$, we just get lazy and write $\gamma$.
+
+\begin{thm}
+ The fundamental group is a group.
+\end{thm}
+
+\begin{proof}
+ Immediate from our previous lemmas.
+\end{proof}
+
+Often in mathematics, after defining a term, we give lots of examples of it. Unfortunately, it is rather difficult to prove that a space has a non-trivial fundamental group, until we have developed some relevant machinery. Hence we will have to wait for a while before we have some concrete examples. Instead, we will look at some properties of the fundamental group first.
+
+\begin{defi}[Based space]
+ A \emph{based space} is a pair $(X, x_0)$ of a space $X$ and a point $x_0\in X$, the \emph{basepoint}. A \emph{map of based spaces}
+ \[
+ f: (X, x_0) \to (Y, y_0)
+ \]
+ is a continuous map $f: X\to Y$ such that $f(x_0) = y_0$. A \emph{based homotopy} is a homotopy rel $\{x_0\}$.
+\end{defi}
+
+Recall that for $\pi_0$, to every map $f: X\to Y$, we can associate a function $\pi_0(f): \pi_0(X) \to \pi_0(Y)$. We can do the same for $\pi_1$.
+\begin{prop}
+ To a based map
+ \[
+ f: (X, x_0) \to (Y, y_0),
+ \]
+ there is an associated function
+ \[
+ f_* = \pi_1(f): \pi_1(X, x_0) \to \pi_1(Y, y_0),
+ \]
+ defined by $[\gamma] \mapsto [f\circ \gamma]$. Moreover, it satisfies
+ \begin{enumerate}
+ \item $\pi_1(f)$ is a homomorphism of groups.
+ \item If $f \simeq f'$, then $\pi_1(f) = \pi_1(f')$.
+ \item For any maps $\begin{tikzcd} (A, a) \ar[r, "h"] & (B, b) \ar[r, "k"] & (C, c) \end{tikzcd}$, we have $\pi_1(k\circ h) = \pi_1(k)\circ \pi_1 (h)$.
+ \item $\pi_1(\id_X) = \id_{\pi_1(X, x_0)}$
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ Exercise. % fill in
+\end{proof}
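+
+As a sketch of (i): with the usual formula for concatenation, composing with $f$ respects it, i.e.
+\[
+ f\circ (\gamma_0\cdot \gamma_1) = (f\circ \gamma_0)\cdot (f\circ \gamma_1),
+\]
+since both sides equal $f\circ \gamma_0(2t)$ for $t \leq \frac{1}{2}$ and $f\circ \gamma_1(2t - 1)$ for $t \geq \frac{1}{2}$. Hence
+\[
+ f_*([\gamma_0][\gamma_1]) = [f\circ (\gamma_0\cdot \gamma_1)] = [f\circ \gamma_0][f\circ \gamma_1] = f_*([\gamma_0])\,f_*([\gamma_1]).
+\]
+Well-definedness is similar to the $\pi_0$ case: if $\gamma \simeq_H \gamma'$ as paths, then $f\circ H$ is a path homotopy from $f\circ \gamma$ to $f\circ \gamma'$.
+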
+In category-theoretic language, we say that $\pi_1$ is a \emph{functor}.
+
+So far so good. However, to define the fundamental group, we had to make a compromise and pick a basepoint. But we just care about the space. We don't want a basepoint! Hence, we should look carefully at what happens when we change the basepoint, and see if we can live without it.
+
+The first observation is that $\pi_1(X, x_0)$ only ``sees'' the path component of $x_0$, since all loops based at $x_0$ can only live inside the path component of $x_0$. Hence, the first importance of picking a basepoint is picking a path component.
+
+For all practical purposes, we just assume that $X$ is path connected, since if it weren't, the fundamental group would just describe a particular path component of the original space.
+
+Now we want to compare fundamental groups with different basepoints. Suppose we have two basepoints $x_0$ and $x_1$. Suppose we have a loop $\gamma$ at $x_0$. How can we turn this into a loop based at $x_1$? This is easy. We first pick a path $u: x_0 \leadsto x_1$. Then we can produce a new loop at $x_1$ by going along $u^{-1}$ to $x_0$, taking the path $\gamma$, and then returning to $x_1$ along $u$, i.e.\ consider $u^{-1}\cdot \gamma\cdot u$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-2.4, -0.7) (0, -0.7) (1.4, -1) (2.6, -0.9) (2.8, 0.6) (0.6, 1.9) (-3.4, 0.8)};
+
+ \node [circ] at (-2, 0.2) {};
+ \node [anchor = south west] at (-2, 0.2) {$x_0$};
+ \node [circ] at (1.5, 0.1) {};
+ \node [above] at (1.5, 0.1) {$x_1$};
+
+ \draw [->-=0.5] (-2.4, 0.2) circle [radius = 0.4];
+ \draw [->-=0.5] (-2, 0.2) parabola bend (-0.25, 0) (1.5, 0.1) node [pos=0.5, below] {$u$};
+ \draw [mred, ->-=0.3, ->-=0.7] (1.5, 0.1) -- (1.5, 0.2) parabola bend (-0.25, 0.1) (-2, 0.3) arc (14:346:0.4123) parabola bend (-0.25, -0.1) (1.5, 0) -- (1.5, 0.1);
+ \end{tikzpicture}
+\end{center}
+
+\begin{prop}
+ A path $u: x_0 \leadsto x_1$ induces a group \emph{isomorphism}
+ \[
+ u_\#: \pi_1(X, x_0) \to \pi_1(X, x_1)
+ \]
+ by
+ \[
+ [\gamma] \mapsto [u^{-1}\cdot \gamma \cdot u].
+ \]
+ This satisfies
+ \begin{enumerate}
+ \item If $u\simeq u'$, then $u_\# = u'_\#$.
+ \item $(c_{x_0})_\# = \id_{\pi_1(X, x_0)}$
+ \item If $v: x_1 \leadsto x_2$ is a path, then $(u\cdot v)_\# = v_\# \circ u_\#$.
+ \item If $f: X\to Y$ with $f(x_0) = y_0$, $f(x_1) = y_1$, then
+ \[
+ (f\circ u)_\# \circ f_* = f_* \circ u_\#: \pi_1(X, x_0) \to \pi_1(Y, y_1).
+ \]
+ A nicer way of writing this is
+ \[
+ \begin{tikzcd}[row sep=large]
+ \pi_1(X, x_0) \ar[r, "f_*"] \ar[d, "u_\#"] & \pi_1(Y, y_0)\ar [d, "(f\circ u)_\#"]\\
+ \pi_1(X, x_1) \ar[r, "f_*"] & \pi_1(Y, y_1)
+ \end{tikzcd}
+ \]
+ The property says that the composition is the same no matter which way we go from $\pi_1(X, x_0)$ to $\pi_1(Y, y_1)$. We say that the square is a \emph{commutative diagram}. These diagrams will appear all of the time in this course.
+ \item If $x_1 = x_0$, then $u_\#$ is an automorphism of $\pi_1(X, x_0)$ given by conjugation by $u$.
+ \end{enumerate}
+\end{prop}
+It is important (yet difficult) to get the order of concatenation and composition right. Path concatenation is from left to right, while function composition is from right to left.
+
+\begin{proof}
+ Yet another exercise. Note that $(u^{-1})_\# = (u_\#)^{-1}$, which is why we have an isomorphism. %fill in
+\end{proof}
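+
+To see, for instance, why $u_\#$ is a homomorphism, we cancel $u\cdot u^{-1} \simeq c_{x_0}$ in the middle:
+\[
+ u_\#([\gamma_1])\, u_\#([\gamma_2]) = [u^{-1}\cdot \gamma_1\cdot u\cdot u^{-1}\cdot \gamma_2\cdot u] = [u^{-1}\cdot \gamma_1\cdot \gamma_2\cdot u] = u_\#([\gamma_1][\gamma_2]).
+\]
+Similarly, $(u^{-1})_\# \circ u_\#([\gamma]) = [u\cdot u^{-1}\cdot \gamma\cdot u\cdot u^{-1}] = [\gamma]$, which gives the claimed inverse.
+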
+The main takeaway is that if $x_0$ and $x_1$ are in the same path component, then
+\[
+ \pi_1(X, x_0) \cong \pi_1(X, x_1).
+\]
+So the basepoint isn't really too important. However, we have to be careful. While the two groups are isomorphic, the actual isomorphism depends on \emph{which} path $u: x_0 \leadsto x_1$ we pick. So there is no natural isomorphism between the two groups. In particular, we cannot say ``let $\alpha \in \pi_1(X, x_0)$. Now let $\alpha'$ be the corresponding element in $\pi_1(X, x_1)$''.
+
+We can also see that if $(X, x_0)$ and $(Y, y_0)$ are based homotopy equivalent, then $\pi_1(X, x_0)\cong \pi_1(Y, y_0)$. Indeed, if they are homotopy equivalent, then there are some $f: X \to Y$, $g: Y \to X$ such that
+\[
+ f\circ g \simeq \id_Y,\quad g\circ f \simeq \id_X.
+\]
+So
+\[
+ f_*\circ g_* = \id_{\pi_1(Y, y_0)},\quad g_*\circ f_* = \id_{\pi_1(X, x_0)},
+\]
+and $f_*$ and $g_*$ are the isomorphisms we need.
+
+However, can we do this with non-based homotopies? Suppose we have spaces $X$ and $Y$, and maps $f, g: X \to Y$, with a homotopy $H: X\times I \to Y$ from $f$ to $g$. If this were a based homotopy, we would know
+\[
+ f_* = g_*: \pi_1(X, x_0) \to \pi_1(Y, f(x_0)).
+\]
+Now we don't insist that this homotopy fixes $x_0$, and we could have $f(x_0) \not= g(x_0)$. How can we relate $f_*: \pi_1(X, x_0) \to \pi_1(Y, f(x_0))$ and $g_*: \pi_1(X, x_0) \to \pi_1(Y, g(x_0))$?
+
+First of all, we need to relate the groups $\pi_1(Y, f(x_0))$ and $\pi_1(Y, g(x_0))$. To do so, we need to find a path from $f(x_0)$ to $g(x_0)$. To produce this, we can use the homotopy $H$. We let $u: f(x_0) \leadsto g(x_0)$ with $u = H(x_0, \ph)$. Now we have three maps $f_*, g_*$ and $u_\#$. Fortunately, these do fit together well.
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-3.7, -1.2) (-3.7, 0) (-4, 0.7) (-3.9, 1.3) (-2.4, 1.4) (-1.8, 0.3) (-2.2, -1.7)};
+ \draw plot [smooth cycle] coordinates {(1.3, -1.2) (1.3, 0) (0.7, 0.7) (0.6, 1.3) (2.6, 1.4) (3, 0.3) (2.8, -1.7)};
+
+ \node [circ] at (-3, 0) {};
+ \node [below] at (-3, 0) {$x_0$};
+ \node [above] at (-3, 1.6) {$X$};
+
+ \node [above] at (1.8, 1.6) {$Y$};
+
+ \node [circ] at (1.8, 0.8) {};
+ \node [below] at (1.8, 0.8) {$f(x_0)$};
+
+ \node [circ] at (2, -0.8) {};
+ \node [below] at (2, -0.8) {$g(x_0)$};
+
+ \draw [->] (-1.6, 1) parabola bend (-0.7, 1.3) (0.2, 1.2);
+ \node [above] at (-0.7, 1.3) {$f$};
+
+ \draw [->] (-1.6, -1) parabola bend (-0.3, -1.3) (1, -1.1);
+ \node [below] at (-0.3, -1.3) {$g$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{lemma}
+ The following diagram commutes:
+ \[
+ \begin{tikzcd}
+ & \pi_1(Y, f(x_0)) \ar[dd, "u_\#"]\\
+ \pi_1(X, x_0) \ar[ru, "f_*"] \ar [rd, "g_*"']& \\
+ & \pi_1(Y, g(x_0))
+ \end{tikzcd}
+ \]
+ In algebra, we say
+ \[
+ g_* = u_\# \circ f_*.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Suppose we have a loop $\gamma: I\to X$ based at $x_0$.
+
+ We need to check that
+ \[
+ g_*([\gamma]) = u_\#\circ f_*([\gamma]).
+ \]
+ In other words, we want to show that
+ \[
+ g\circ \gamma \simeq u^{-1}\cdot (f\circ \gamma)\cdot u.
+ \]
+ To prove this result, we want to build a homotopy.
+
+ Consider the composition:
+ \[
+ F:
+ \begin{tikzcd}
+ I\times I \ar[r, "\gamma\times \id_I"] & X\times I \ar[r, "H"] & Y.
+ \end{tikzcd}
+ \]
+ Our plan is to exhibit two homotopic paths $\ell^+$ and $\ell^-$ in $I\times I$ such that
+ \[
+ F\circ \ell^+ = g \circ \gamma,\quad F\circ \ell^- = u^{-1}\cdot (f\circ \gamma) \cdot u.
+ \]
+ This is in general a good strategy --- $Y$ is a complicated and horrible space we don't understand. So to construct a homotopy, we make ourselves work in a much nicer space $I\times I$.
+
+ Our $\ell^+$ and $\ell^-$ are defined in a rather simple way.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \node at (2, 5) {$I\times I$};
+ \draw [mgreen, ->-=0.5] (0, 4) -- (0, 0);
+ \draw [mgreen, ->-=0.5] (0, 0) -- (4, 0) node [pos=0.5, below] {$\ell^-$};
+ \draw [mgreen, ->-=0.5] (4, 0) -- (4, 4);
+
+ \draw [mred, ->-=0.5] (0, 4) -- (4, 4) node [pos=0.5, above] {$\ell^+$};
+
+ \draw [->] (4.5, 2) -- (6.5, 2) node [pos=0.5, above] {$F$};
+ \begin{scope}[shift={(7, 0)}]
+ \node at (2, 5) {$Y$};
+ \draw [->-=0.5] (0, 4) -- (0, 0) node [pos=0.5, right] {$u^{-1}$};
+ \draw [->-=0.5] (0, 0) -- (4, 0) node [pos=0.5, below] {$f$};
+ \draw [->-=0.5] (4, 0) -- (4, 4) node [pos=0.5, right] {$u$};
+
+ \draw [->-=0.5] (0, 4) -- (4, 4) node [pos=0.5, above] {$g$};
+ \end{scope}
+
+ \end{tikzpicture}
+ \end{center}
+ More precisely, $\ell^+$ is the path $s\mapsto (s, 1)$, and $\ell^-$ is the concatenation of the paths $s\mapsto (0, 1- s)$, $s \mapsto (s, 0)$ and $s\mapsto (1, s)$.
+
+ Note that $\ell^+$ and $\ell^-$ are homotopic as paths. If this is not obvious, we can manually check the homotopy
+ \[
+ L(s, t) = t\ell^+(s) + (1 - t)\ell^- (s).
+ \]
+ This works because $I\times I$ is convex. Hence $F\circ \ell^+\simeq_{F\circ L} F\circ \ell^-$ as paths.
+
+ Now we check that the compositions $F\circ \ell^{\pm}$ are indeed what we want. We have
+ \[
+ F\circ \ell^+ (s) = H(\gamma(s), 1) = g\circ \gamma (s).
+ \]
+ Similarly, we can show that
+ \[
+ F\circ \ell^-(s) = u^{-1}\cdot (f\circ \gamma)\cdot u (s).
+ \]
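+ Indeed, unwrapping the definition of $\ell^-$ (and using $\gamma(0) = \gamma(1) = x_0$), its three pieces map, up to reparametrisation, to $s\mapsto F(0, 1 - s) = H(x_0, 1 - s) = u^{-1}(s)$, then $s\mapsto F(s, 0) = H(\gamma(s), 0) = f\circ \gamma(s)$, and finally $s\mapsto F(1, s) = H(x_0, s) = u(s)$.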
+ So done.
+\end{proof}
+It is worth looking at this proof hard and truly understand what is going on, since this is a really good example of how we can construct interesting homotopies.
+
+With this lemma, we can show that fundamental groups really respect homotopies.
+
+\begin{thm}
+ If $f: X\to Y$ is a homotopy equivalence, and $x_0 \in X$, then the induced map
+ \[
+ f_*: \pi_1(X, x_0) \to \pi_1(Y, f(x_0)).
+ \]
+ is an isomorphism.
+\end{thm}
+While this seems rather obvious, it is actually non-trivial if we want to do it from scratch. While we are given a homotopy equivalence, we are given no guarantee that the homotopy respects our basepoints. So this proof involves some real work.
+
+\begin{proof}
+ Let $g: Y\to X$ be a homotopy inverse. So $f\circ g\simeq_H \id_Y$ and $g\circ f\simeq_{H'} \id_X$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-3.7, -1.2) (-3.7, 0) (-4, 0.7) (-3.9, 1.3) (-2.4, 1.4) (-1.8, 0.3) (-2.2, -1.7)};
+ \draw plot [smooth cycle] coordinates {(1.3, -1.2) (1.3, 0) (0.7, 0.7) (0.6, 1.3) (2.6, 1.4) (3, 0.3) (2.8, -1.7)};
+
+ \node [circ] at (-3, 0.7) {};
+ \node [above] at (-3, 0.7) {$x_0$};
+ \node [above] at (-3, 1.6) {$X$};
+
+ \node [above] at (1.8, 1.6) {$Y$};
+
+ \node [circ] at (1.8, 0.8) {};
+ \node [below] at (1.8, 0.8) {$f(x_0)$};
+
+ \node [circ] at (-3, -0.8) {};
+ \node [below] at (-3, -0.8) {$g\circ f(x_0)$};
+
+ \draw [decoration={snake}, decorate] (-3, 0.7) -- (-3, -0.8) node [right, pos=0.5] {$u'$};
+
+ \draw [->] (-1.6, 1) parabola bend (-0.7, 1.3) (0.2, 1.2);
+ \node [above] at (-0.7, 1.3) {$f$};
+
+ \draw [->] (1, -1.1) parabola bend (-0.3, -1.3) (-1.6, -1);
+ \node [below] at (-0.3, -1.3) {$g$};
+ \end{tikzpicture}
+ \end{center}
+ We have no guarantee that $g\circ f(x_0) = x_0$, but our homotopy $H'$ (run in the direction from $\id_X$ to $g\circ f$) gives us $u' = H'(x_0, \ph): x_0 \leadsto g\circ f(x_0)$.
+
+ Applying our previous lemma with $\id_X$ for ``$f$'' and $g \circ f$ for ``$g$'', we get
+ \[
+ u'_\# \circ (\id_X)_* = (g\circ f)_*.
+ \]
+ Using the properties of the $_*$ operation, we get that
+ \[
+ g_*\circ f_* = u'_\#.
+ \]
+ However, we know that $u'_\#$ is an isomorphism. So $f_*$ is injective and $g_*$ is surjective.
+
+ Doing it the other way round with $f\circ g$ instead of $g\circ f$, we know that $g_*$ is injective and $f_*$ is surjective. So both of them are isomorphisms.
+\end{proof}
+With this theorem, we can finally be sure that the fundamental group is a property of the space, without regards to the basepoint (assuming path connectedness), and is preserved by arbitrary homotopies.
+
+We now use the fundamental group to define several simple notions.
+\begin{defi}[Simply connected space]
+ A space $X$ is \emph{simply connected} if it is path connected \emph{and} $\pi_1(X, x_0) \cong 1$ for some (any) choice of $x_0 \in X$.
+\end{defi}
+
+\begin{eg}
+ Clearly, a point $*$ is simply connected since there is only one path in $*$ (the constant path). Hence, any contractible space is simply connected, since it is homotopy equivalent to $*$. For example, $\R^n$ is simply connected for all $n$.
+\end{eg}
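+
+To be concrete, the contraction of $\R^n$ can be written down directly: the straight-line homotopy
+\[
+ H: \R^n \times I \to \R^n,\quad H(\mathbf{x}, t) = (1 - t)\mathbf{x}
+\]
+satisfies $H(\ph, 0) = \id_{\R^n}$ and $H(\ph, 1) = c_{\mathbf{0}}$, so $\id_{\R^n} \simeq c_{\mathbf{0}}$ and $\R^n$ is contractible.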
+
+There is a useful characterization of simply connected spaces:
+\begin{lemma}
+ A path-connected space $X$ is simply connected if and only if for any $x_0, x_1\in X$, there exists a unique homotopy class of paths $x_0 \leadsto x_1$.
+\end{lemma}
+
+\begin{proof}
+ Suppose $X$ is simply connected, and let $u, v: x_0 \leadsto x_1$ be paths. Note that $u \cdot v^{-1}$ is a loop based at $x_0$, so it is homotopic to the constant path since $X$ is simply connected, while $v^{-1} \cdot v$ is trivially homotopic to the constant path at $x_1$. So we have
+ \[
+ u \simeq u \cdot v^{-1} \cdot v \simeq v.
+ \]
+ On the other hand, suppose there is a unique homotopy class of paths $x_0 \leadsto x_1$ for all $x_0, x_1 \in X$. Then in particular there is a unique homotopy class of loops based at $x_0$. So $\pi_1(X, x_0)$ is trivial.
+\end{proof}
+
+\section{Covering spaces}
+We can ask ourselves a question --- what are groups? We can write down a definition in terms of operations and axioms, but this is not what groups \emph{are}. Groups were created to represent \emph{symmetries} of objects. In particular, a group should be \emph{acting} on something. If not, something is probably wrong.
+
+We have created the fundamental group. So what does it act on? Can we find something on which these fundamental groups act?
+
+An answer to this would also be helpful in more practical terms. So far we have not exhibited a non-trivial fundamental group. This would be easy if we can make the group act on something --- if the group acts non-trivially on our thing, then clearly the group cannot be trivial.
+
+These things we act on are \emph{covering spaces}.
+
+\subsection{Covering space}
+Intuitively, a \emph{covering space} of $X$ is a pair $(\tilde{X}, p: \tilde{X} \to X)$, such that if we take any $x_0 \in X$, there is some neighbourhood $U$ of $x_0$ such that the pre-image of the neighbourhood is ``many copies'' of $U$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \draw [fill=morange] ellipse (0.6 and 0.3);
+ \node at (0.6, 0.3) [anchor = south east] {$U$};
+
+ \draw [densely dashed] (0, 1.5) -- (0, 0);
+ \foreach \y in {1.5, 2, 2.5, 3} {
+ \node [circ] at (0, \y) {};
+ \pgfmathsetmacro\c{100 - \y * 30};
+ \draw [densely dashed] (0, \y) -- (0, \y - 0.5);
+ \draw [fill=morange!\c!yellow] (0, \y) ellipse (0.6 and 0.3);
+ \node [circ] at (0, \y) {};
+ }
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$x_0$};
+
+ \draw [->] (2.3, 2.5) node [above] {$\tilde{X}$}-- +(0, -2.3) node [pos=0.5, right] {$p$} node [below] {$X$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{defi}[Covering space]
+ A \emph{covering space} of $X$ is a pair $(\tilde{X}, p: \tilde{X} \to X)$, such that each $x\in X$ has a neighbourhood $U$ which is \emph{evenly covered}.
+\end{defi}
+Whether we require $p$ to be surjective is a matter of taste. However, it will not matter if $X$ is path-connected, which is the case we really care about.
+
+\begin{defi}[Evenly covered]
+ $U \subseteq X$ is \emph{evenly covered} by $p: \tilde{X} \to X$ if
+ \[
+ p^{-1}(U) \cong \coprod_{\alpha \in \Lambda}V_\alpha,
+ \]
+ where $p|_{V_\alpha}: V_\alpha \to U$ is a homeomorphism, and each of the $V_\alpha \subseteq \tilde{X}$ is open.
+\end{defi}
+
+\begin{eg}
+ Homeomorphisms are covering maps. Duh.
+\end{eg}
+
+\begin{eg}
+ Consider $p: \R \to S^1\subseteq \C$ defined by $t \mapsto e^{2\pi i t}$. This is a covering space.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [samples=80, domain=0:5] plot [smooth] ({0.6 * sin (360 * \x)}, {1 + 0.3 * cos (360 * \x) + \x / 2});
+
+ \draw ellipse (0.6 and 0.3);
+
+ \node [circ] at (0.6, 0) {};
+ \node [right] at (0.6, 0) {\small$1$};
+
+ \foreach \y in {0,1,...,4} {
+ \node [mred, circ] at (0.6, 1 + \y/2 + 0.125) {};
+ }
+ \node [mred, right] at (0.6, 2.125) {\small$p^{-1}(1)$};
+
+ \draw [->] (1.8, 2.7) node [above] {$\R$}-- +(0, -2.3) node [pos=0.5, right] {$p$} node [below] {$S^1$};
+ \end{tikzpicture}
+ \end{center}
+ Here we have $p^{-1}(1) = \Z$.
+\end{eg}
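+
+To check the covering property explicitly, take, say, $U = S^1\setminus \{-1\}$. Then
+\[
+ p^{-1}(U) = \coprod_{n\in \Z}\left(n - \tfrac{1}{2}, n + \tfrac{1}{2}\right),
+\]
+and $p$ restricted to each interval $(n - \frac{1}{2}, n + \frac{1}{2})$ is a homeomorphism onto $U$, with continuous inverse given by a branch of $\frac{1}{2\pi i}\log$. A similar check works for $S^1\setminus \{z\}$ for any $z\in S^1$.
+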
+As we said, we are going to use covering spaces to determine non-trivial fundamental groups. Before we do that, we can guess what the fundamental group of $S^1$ is. If we have a loop on $S^1$, it can do nothing, loop around the circle once, or loop around many times. So we can characterize each loop by the number of times it loops around the circle, and it should not be difficult to convince ourselves that two loops that loop around the same number of times can be continuously deformed to one another. So it is not unreasonable to guess that $\pi_1(S^1, 1) \cong \Z$. However, we also know that $p^{-1}(1) = \Z$. In fact, for any point $z\in S^1$, $p^{-1}(z)$ is just ``$\Z$ many copies'' of $z$. We will show that this is not a coincidence.
+
+\begin{eg}
+ Consider $p_n: S^1 \to S^1$ (for any $n \in \Z\setminus\{0\}$) defined by $z \mapsto z^n$. We can consider this as ``winding'' the circle $n$ times, or as the following covering map:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [samples=80, domain=0:5] plot [smooth] ({0.6 * sin (360 * \x)}, {1 + 0.3 * cos (360 * \x) + \x / 2});
+ \node [circ, mred] at (0, 1.3) {};
+ \node [circ, mred] at (0, 3.8) {};
+
+ \draw ellipse (0.6 and 0.3);
+
+ \draw [->] (1.8, 2.7) node [above] {$S^1$}-- +(0, -2.3) node [pos=0.5, right] {$p_n$} node [below] {$S^1$};
+ \end{tikzpicture}
+ \end{center}
+ where we join the two red dots together.
+
+ This time the pre-image of $1$ would be $n$ copies of $1$, instead of $\Z$ copies of $1$.
+\end{eg}
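+
+Concretely, $p_n^{-1}(1) = \{e^{2\pi i k/n}: k = 0, 1, \ldots, n - 1\}$, and the complement of these $n$ points in the covering circle consists of $n$ open arcs, each of which is mapped homeomorphically by $p_n$ onto $S^1\setminus \{1\}$.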
+
+\begin{eg}
+ Consider $X = \RP^2$, which is the real projective plane. This is defined by $S^2/{\sim}$, where we identify every $x\sim -x$, i.e.\ every pair of antipodal points. We can also think of this as the space of lines in $\R^3$. This is actually really difficult to draw (and in fact impossible to fully embed in $\R^3$), so we will not attempt. There is, however, a more convenient way of thinking about $\RP^2$. We just use the definition --- we imagine an $S^2$, but this time, instead of a point being a ``point'' on the sphere, it is a \emph{pair} of antipodal points.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2];
+
+ \draw [dashed] (2, 0) arc (0:180:2 and 0.5);
+ \draw (-2, 0) arc (180:360:2 and 0.5);
+
+ \draw [dashed, mred] (1, 1.1) node [circ] {} -- (-1, -1.1) node [circ] {};
+
+ \draw [rotate around={-45:(1, 1.1)}, mblue, fill=mblue, fill opacity=0.5] (1, 1.1) ellipse (0.3 and 0.1) node [opacity=1, anchor = south west] {$U$};
+
+ \draw [dashed, rotate around={-45:(-1, -1.1)}, mblue, fill=mblue, fill opacity=0.5] (-1, -1.1) ellipse (0.3 and 0.1) node [opacity=1, anchor = north east] {$U$};
+
+ \node [circ, mred] at (0, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ We define $p: S^2 \to \RP^2$ to be the quotient map. Then this is a covering map, since the pre-image of a small neighbourhood of any $x_0$ is just two copies of the neighbourhood.
+\end{eg}
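+
+Explicitly, if $U \subseteq \RP^2$ is the image of an open set $\tilde{U} \subseteq S^2$ small enough to be contained in an open hemisphere, then $\tilde{U}$ and $-\tilde{U}$ are disjoint, and
+\[
+ p^{-1}(U) = \tilde{U} \amalg (-\tilde{U}),
+\]
+with $p$ restricting to a homeomorphism on each of the two pieces.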
+
+As we mentioned, a covering space of $X$ is (locally) like many ``copies'' of $X$. Hence, given any map $f: Y \to X$, a natural question is whether we can ``lift'' it to a map $\tilde{f}: Y \to \tilde{X}$, by mapping into one of the copies of $X$ in $\tilde{X}$.
+
+This is not always possible, but when it is, we call this a lift.
+\begin{defi}[Lifting]
+ Let $f: Y\to X$ be a map, and $p: \tilde{X} \to X$ a covering space. A \emph{lift} of $f$ is a map $\tilde{f}: Y\to \tilde{X}$ such that $f = p\circ \tilde{f}$, i.e.\ the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ & \tilde{X} \ar[d, "p"]\\
+ Y \ar[ur, "\tilde{f}"] \ar [r, "f"'] & X
+ \end{tikzcd}
+ \]
+\end{defi}
+We can visualize this in the case where $Y$ is the unit interval $I$ and the map is just a path in $X$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=morange, fill opacity=0.9] ellipse (0.6 and 0.3);
+
+ \foreach \y in {1.5, 2, 2.5, 3} {
+ \pgfmathsetmacro\c{100 - \y * 30};
+ \draw [fill=morange!\c!yellow, fill opacity=0.9] (0, \y) ellipse (0.6 and 0.3);
+ }
+
+ \draw [->] (2, 2.5) node [above] {\Large $\tilde{X}$} -- +(0, -2.2) node [below] {\Large $X$} node [pos=0.5, right] {$p$};
+
+ \draw [thick] (-4.5, 0) -- (-3, 0) node [pos=0.5, below] {\Large $Y$};
+
+ \draw [->] (-2.5, 0) -- (-1, 0) node [below, pos=0.5] {$f$} ;
+ \draw [->] (-2.5, 1.5) -- (-1, 2.5) node [anchor = south east, pos=0.5] {$\tilde{f}$};
+
+ \draw [thick, mred, decoration={snake}, decorate] (0, 0) -- +(0.5, 0.1) node [anchor = south west] {$f(Y)$};
+ \draw [thick, mblue, decoration={snake}, decorate] (0, 2.5) -- +(0.5, 0.1) node [anchor = south west] {$\tilde{f}(Y)$};
+
+ \draw [mblue, dashed, path fading=south] (0, 0) -- +(0, 2.5);
+ \draw [mred, dashed, path fading=north] (0, 0) -- +(0, 2.5);
+ \draw [mblue, dashed, path fading=south] (0.5, 0.1) -- +(0, 2.5);
+ \draw [mred, dashed, path fading=north] (0.5, 0.1) -- +(0, 2.5);
+
+ \end{tikzpicture}
+\end{center}
+It feels that if we know which ``copy'' of $X$ we lifted our map to, then we already know everything about $\tilde{f}$, since we are just moving our $f$ from $X$ to that particular copy of $X$. This is made precise by the following lemma:
+
+\begin{lemma}
+ Let $p: \tilde{X} \to X$ be a covering map, $f: Y\to X$ be a map, and $\tilde{f}_1 ,\tilde{f}_2$ be both lifts of $f$. Then
+ \[
+ S = \{y\in Y: \tilde{f}_1(y) = \tilde{f}_2(y)\}
+ \]
+ is both open and closed. In particular, if $Y$ is connected, $\tilde{f}_1$ and $\tilde{f}_2$ agree either everywhere or nowhere.
+\end{lemma}
+This is sort of a ``uniqueness statement'' for a lift. If we know a point in the lift, then we know the whole lift. This is because once we've decided our starting point, i.e.\ which ``copy'' of $X$ we work in, the rest of $\tilde{f}$ has to follow what $f$ does.
+
+\begin{proof}
+ First we show it is open. Let $y$ be such that $\tilde{f}_1(y) = \tilde{f}_2(y)$. Then there is an evenly covered open neighbourhood $U \subseteq X$ of $f(y)$. Let $\tilde{U}$ be such that $\tilde{f}_1(y) \in \tilde{U}$, $p(\tilde{U}) = U$ and $p|_{\tilde{U}} : \tilde{U} \to U$ is a homeomorphism. Let $V = \tilde{f}_1^{-1}(\tilde{U}) \cap \tilde{f}_2^{-1}(\tilde{U})$. We will show that $\tilde{f}_1 = \tilde{f}_2$ on $V$.
+
+ Indeed, by construction
+ \[
+ p|_{\tilde{U}} \circ \tilde{f}_1|_V = p|_{\tilde{U}} \circ \tilde{f}_2|_V.
+ \]
+ Since $p|_{\tilde{U}}$ is a homeomorphism, it follows that
+ \[
+ \tilde{f}_1|_V = \tilde{f}_2|_V.
+ \]
+ Now we show $S$ is closed. Suppose not. Then there is some $y \in \bar{S} \setminus S$. So $\tilde{f}_1(y) \not= \tilde{f}_2(y)$. Let $U$ be an evenly covered neighbourhood of $f(y)$, and write $p^{-1}(U) = \coprod U_\alpha$. Since each sheet contains exactly one point of $p^{-1}(f(y))$ and $\tilde{f}_1(y) \not= \tilde{f}_2(y)$, we have $\tilde{f}_1(y) \in U_\beta$ and $\tilde{f}_2(y) \in U_\gamma$ for some $\beta \not= \gamma$. Then $V = \tilde{f}_1^{-1}(U_\beta) \cap \tilde{f}_2^{-1}(U_\gamma)$ is an open neighbourhood of $y$, and hence intersects $S$ by definition of closure. So there is some $x \in V$ such that $\tilde{f}_1(x) = \tilde{f}_2(x)$. But $\tilde{f}_1(x) \in U_\beta$ and $\tilde{f}_2(x) \in U_\gamma$, so $U_\beta$ and $U_\gamma$ have non-empty intersection. This is a contradiction. So $S$ is closed.
+\end{proof}
+
+We just had a uniqueness statement. How about existence? Given a map, are we guaranteed that we can lift it to something? Moreover, if I have fixed a ``copy'' of $X$ I like, can I also lift my map to that copy? We will later come up with a general criterion for when lifts exist. However, it turns out homotopies can always be lifted.
+
+\begin{lemma}[Homotopy lifting lemma]
+ Let $p: \tilde{X} \to X$ be a covering space, $H: Y\times I \to X$ be a homotopy from $f_0$ to $f_1$. Let $\tilde{f}_0$ be a lift of $f_0$. Then there exists a \emph{unique} homotopy $\tilde{H}: Y\times I \to \tilde{X}$ such that
+ \begin{enumerate}
+ \item $\tilde{H}(\ph, 0) = \tilde{f}_0$; and
+ \item $\tilde{H}$ is a lift of $H$, i.e.\ $p\circ\tilde{H} = H$.
+ \end{enumerate}
+\end{lemma}
+This lemma might be difficult to comprehend at first. We can look at the special case where $Y = *$. Then a homotopy is just a path. So the lemma specializes to
+\begin{lemma}[Path lifting lemma]
+ Let $p: \tilde{X} \to X$ be a covering space, $\gamma: I\to X$ a path, and $\tilde{x}_0 \in \tilde{X}$ such that $p(\tilde{x}_0) = x_0 = \gamma(0)$. Then there exists a \emph{unique} path $\tilde{\gamma}: I\to \tilde{X}$ such that
+ \begin{enumerate}
+ \item $\tilde{\gamma}(0) = \tilde{x}_0$; and
+ \item $\tilde{\gamma}$ is a lift of $\gamma$, i.e.\ $p\circ \tilde{\gamma} = \gamma$.
+ \end{enumerate}
+\end{lemma}
+This is exactly the picture we were drawing before. We just have to start at a point $\tilde{x}_0$, and then everything is determined because locally, everything upstairs in $\tilde{X}$ is just like $X$. Note that we have already proved uniqueness. So we just need to prove existence.
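+
+As a quick sanity check of both existence and uniqueness, consider the standard covering $p: \R \to S^1$ given by $p(t) = e^{2\pi i t}$, which we will study in detail later. The loop
+\[
+ \gamma(t) = e^{4\pi i t}
+\]
+goes twice around the circle, and for each choice of starting point $n \in p^{-1}(1) = \Z$, its unique lift is
+\[
+ \tilde{\gamma}(t) = n + 2t,
+\]
+which starts at $n$ and ends at $n + 2$. In particular, the lift of a loop need not be a loop.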
+
+In theory, it makes sense to prove homotopy lifting, and path lifting comes immediately as a corollary. However, the proof of homotopy lifting is big and scary. So instead, we will prove path lifting, which is something we can more easily visualize and understand, and then use that to prove homotopy lifting.
+
+\begin{proof}
+ Let
+ \[
+ S = \{s \in I: \tilde{\gamma}\text{ exists on }[0, s] \subseteq I\}.
+ \]
+ Observe that
+ \begin{enumerate}
+ \item $0\in S$.
+ \item $S$ is open. If $s\in S$, let $U$ be an evenly covered neighbourhood of $\gamma(s)$, and let $V_\beta$ be the sheet of $p^{-1}(U)$ containing $\tilde{\gamma}(s)$. Then we can extend $\tilde{\gamma}$ to some small neighbourhood of $s$ by
+ \[
+ \tilde{\gamma}(t) = (p|_{V_\beta})^{-1}\circ \gamma(t),
+ \]
+ which makes sense for all $t$ near $s$ with $\gamma(t) \in U$.
+
+ \item $S$ is closed. Suppose $s \not \in S$. Pick an evenly covered neighbourhood $U$ of $\gamma(s)$, and pick $\varepsilon > 0$ such that $\gamma((s - \varepsilon, s]) \subseteq U$. If we had $s - \frac{\varepsilon}{2} \in S$, then we could extend the lift over $(s - \frac{\varepsilon}{2}, s]$ using the inverse of the sheet homeomorphism as above, so that $s \in S$, a contradiction. So $s - \frac{\varepsilon}{2} \not \in S$. Since $t \in S$ implies $[0, t] \subseteq S$, we get $(s - \frac{\varepsilon}{2}, 1] \cap S = \emptyset$. So $S$ is closed.
+ \end{enumerate}
+ Since $S$ is both open and closed, and is non-empty, we have $S = I$. So $\tilde{\gamma}$ exists.
+\end{proof}
+How can we promote this to a proof of the homotopy lifting lemma? At every point $y\in Y$, we know what to do, since we have path lifting. So $\tilde{H}(y, \ph)$ is defined. What remains is to show that this is continuous. The steps of the proof are:
+\begin{enumerate}
+ \item Use compactness of $I$ to argue that the proof of path lifting works on small neighbourhoods in $Y$.
+ \item For each $y$, we pick an open neighbourhood $U$ of $y$, and define a good path lifting on $U\times I$.
+ \item By uniqueness of lifts, these path liftings agree when they overlap. So we have one big continuous lifting.
+\end{enumerate}
+ % could expand a bit
+
+With the homotopy lifting lemma in our toolkit, we can start to use it to do stuff. So far, we have covering spaces and fundamental groups. We are now going to build a bridge between these two, and show how covering spaces can be used to reflect some structures of the fundamental group.
+
+At least one payoff of this work is that we are going to exhibit some non-trivial fundamental groups.
+
+We have just shown that we are allowed to lift homotopies. However, what we are really interested in is homotopies \emph{of paths}. The homotopy lifting lemma does not tell us that the lifted homotopy preserves basepoints. This is what we are going to show.
+
+\begin{cor}
+ Suppose $\gamma, \gamma': I\to X$ are paths $x_0 \leadsto x_1$ and $\tilde{\gamma}, \tilde{\gamma}': I\to \tilde{X}$ are lifts of $\gamma$ and $\gamma'$ respectively, both starting at $\tilde{x}_0 \in p^{-1}(x_0)$.
+
+ If $\gamma\simeq \gamma'$ as \emph{paths}, then $\tilde{\gamma}$ and $\tilde{\gamma}'$ are homotopic as paths. In particular, $\tilde{\gamma}(1) = \tilde{\gamma}'(1)$.
+\end{cor}
+Note that if we cover up the words ``as paths'' and just talk about homotopies, then this is just the homotopy lifting lemma. So we can view this as a stronger form of the homotopy lifting lemma.
+
+\begin{proof}
+ Let $H: I \times I \to X$ be a homotopy of paths from $\gamma$ to $\gamma'$. The homotopy lifting lemma gives us a lift $\tilde{H}$ of $H$ with $\tilde{H}(\ph, 0) = \tilde{\gamma}$.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw (0, 0) rectangle (2, 2);
+ \node at (1, 0) [below] {$\gamma$};
+ \node at (1, 2) [above] {$\gamma'$};
+ \node at (0, 1) [left] {$c_{x_0}$};
+ \node at (2, 1) [right] {$c_{x_1}$};
+ \node at (1, 1) {\Large $H$};
+ \end{scope}
+
+ \draw [->] (3, 1) -- (5, 1) node [pos=0.5, above] {lift};
+ \begin{scope}[shift={(6, 0)}]
+ \draw (0, 0) rectangle (2, 2);
+ \node at (1, 0) [below] {$\tilde{\gamma}$};
+ \node [gray] at (1, 2) [above] {$\tilde{\gamma'}$};
+ \node [gray] at (0, 1) [left] {$c_{\tilde{x}_0}$};
+ \node [gray] at (2, 1) [right] {$c_{\tilde{x}_1}$};
+ \node at (1, 1) {\Large $\tilde{H}$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ In this diagram, we by assumption know the bottom of the $\tilde{H}$ square is $\tilde{\gamma}$. To show that this is a path homotopy from $\tilde{\gamma}$ to $\tilde{\gamma}'$, we need to show that the other edges are $c_{\tilde{x}_0}$, $c_{\tilde{x}_1}$ and $\tilde{\gamma}'$ respectively.
+
+ Now $\tilde{H}(\ph, 1)$ is a lift of $H(\ph, 1) = \gamma'$, starting at $\tilde{x}_0$. Since lifts are unique, we must have $\tilde{H}(\ph, 1) = \tilde{\gamma}'$. So this is indeed a homotopy between $\tilde{\gamma}$ and $\tilde{\gamma}'$. Now we need to check that this is a homotopy of paths.
+
+ We know that $\tilde{H}(0, \ph)$ is a lift of $H(0, \ph) = c_{x_0}$. We are aware of one lift of $c_{x_0}$, namely $c_{\tilde{x}_0}$. By uniqueness of lifts, we must have $\tilde{H}(0, \ph) = c_{\tilde{x}_0}$. Similarly, $\tilde{H}(1, \ph) = c_{\tilde{x}_1}$. So this is a homotopy of paths.
+\end{proof}
+
+So far, our picture of covering spaces is like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1, -0.7) (0, -0.7) (1.2, -1) (2.6, -0.9) (3, 0.4) (0.6, 1) (-1.3, 0.6)};
+
+ \draw [fill=morange, fill opacity=0.9] ellipse (0.6 and 0.3);
+ \node [circ] at (0, 0) {};
+ \node [right] at (0, 0) {$x_0$};
+ \foreach \y in {1.5, 2, 2.5, 3} {
+ \pgfmathsetmacro\c{100 - \y * 30};
+ \draw [fill=morange!\c!yellow, fill opacity=0.9] (0, \y) ellipse (0.6 and 0.3);
+ \node [circ] at (0, \y) {};
+ }
+
+ \begin{scope}[shift={(2, 0)}]
+ \draw [fill=morange, fill opacity=0.9] ellipse (0.6 and 0.3);
+ \node [circ] at (0, 0) {};
+ \node [right] at (0, 0) {$x_1$};
+ \foreach \y in {1.7, 2.2, 2.7} {
+ \pgfmathsetmacro\c{100 - \y * 30};
+ \draw [fill=morange!\c!yellow, fill opacity=0.9] (0, \y) ellipse (0.6 and 0.3);
+ \node [circ] at (0, \y) {};
+ }
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Except\ldots is it? Is it possible that we have four copies of $x_0$ but just three copies of $x_1$? This is obviously possible if $X$ is not path connected --- the component containing $x_0$ and the one containing $x_1$ are completely unrelated. But what if $X$ is path connected?
+
+\begin{cor}
+ If $p: \tilde{X} \to X$ is a covering map, $X$ is a path connected space and $x_0, x_1 \in X$, then there is a bijection $p^{-1}(x_0) \to p^{-1}(x_1)$.
+\end{cor}
+
+\begin{proof}
+ Let $\gamma: x_0 \leadsto x_1$ be a path. We want to use this to construct a bijection between each preimage of $x_0$ and each preimage of $x_1$. The obvious thing to do is to use lifts of the path $\gamma$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1, -0.7) (0, -0.7) (1.2, -1) (2.6, -0.9) (3, 0.4) (0.6, 1) (-1.3, 0.6)};
+
+ \draw [fill=morange, fill opacity=0.9] ellipse (0.6 and 0.3);
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$x_0$};
+ \foreach \y in {1.5, 2, 2.5, 3} {
+ \pgfmathsetmacro\c{100 - \y * 30};
+ \draw [fill=morange!\c!yellow, fill opacity=0.9] (0, \y) ellipse (0.6 and 0.3);
+ \node [circ] at (0, \y) {};
+ }
+
+ \begin{scope}[shift={(2, 0)}]
+ \draw [fill=morange, fill opacity=0.9] ellipse (0.6 and 0.3);
+ \node [circ] at (0, 0) {};
+ \node [right] at (0, 0) {$x_1$};
+ \foreach \y in {1.5, 2, 2.5, 3} {
+ \pgfmathsetmacro\c{100 - \y * 30};
+ \draw [fill=morange!\c!yellow, fill opacity=0.9] (0, \y) ellipse (0.6 and 0.3);
+ \node [circ] at (0, \y) {};
+ }
+ \end{scope}
+
+ \draw plot [smooth] coordinates {(0, 0) (0.7, 0.3) (1.6, -0.1) (2, 0)};
+ \node at (1, 0.3) [above] {$\gamma$};
+
+ \draw (0, 1.5) parabola (2, 3);
+ \draw [decorate, decoration={snake, segment length=3cm}] (0, 2) -- (2, 1.5);
+ \draw (0, 3) parabola bend (1, 2.4) (2, 2);
+ \draw (0, 2.5) -- (2, 2.5);
+ \end{tikzpicture}
+ \end{center}
+ Define a map $f_\gamma: p^{-1}(x_0) \to p^{-1}(x_1)$ that sends $\tilde{x}_0$ to the end point of the unique lift of $\gamma$ at $\tilde{x}_0$.
+
+ The inverse map is obtained by replacing $\gamma$ with $\gamma^{-1}$, i.e.\ $f_{\gamma^{-1}}$. To show this is an inverse, suppose we have some lift $\tilde{\gamma}: \tilde{x}_0 \leadsto \tilde{x}_1$, so that $f_\gamma(\tilde{x}_0) = \tilde{x}_1$. Now notice that $\tilde{\gamma}^{-1}$ is a lift of $\gamma^{-1}$ starting at $\tilde{x}_1$ and ending at $\tilde{x}_0$. So $f_{\gamma^{-1}}(\tilde{x}_1) = \tilde{x}_0$. So $f_{\gamma^{-1}}$ is an inverse to $f_\gamma$, and hence $f_\gamma$ is bijective.
+\end{proof}
+
+\begin{defi}[$n$-sheeted]
+ A covering space $p: \tilde{X} \to X$ of a path-connected space $X$ is $n$-\emph{sheeted} if $|p^{-1}(x)| = n$ for any (and hence all) $x \in X$.
+\end{defi}
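+Some examples to keep in mind: for each $n \geq 1$, the map
+\[
+ p_n: S^1 \to S^1, \quad z \mapsto z^n
+\]
+(viewing $S^1$ as the unit circle in $\C$) is an $n$-sheeted covering, since every point of $S^1$ has exactly $n$ $n$th roots. On the other hand, the covering $t \mapsto e^{2\pi i t}: \R \to S^1$, which we will use heavily in the next section, has countably infinite fibres.
+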
+Each covering space has a number associated to it, namely the number of sheets. Is there any number we can assign to fundamental groups? Well, the index of a subgroup might be a good candidate. We'll later see if this is the case.
+
+One important property of covering spaces is the following:
+\begin{lemma}
+ If $p: \tilde{X} \to X$ is a covering map and $\tilde{x}_0 \in \tilde{X}$, then
+ \[
+ p_*: \pi_1(\tilde{X}, \tilde{x}_0) \to \pi_1(X, x_0)
+ \]
+ is injective.
+\end{lemma}
+
+\begin{proof}
+ To show that a group homomorphism $p_*$ is injective, we have to show that if $p_*(x)$ is trivial, then $x$ must be trivial.
+
+ Consider a based loop $\tilde{\gamma}$ in $\tilde{X}$. We let $\gamma = p\circ \tilde{\gamma}$. If $\gamma$ is trivial, i.e.\ $\gamma \simeq c_{x_0}$ as paths, then our corollary to the homotopy lifting lemma gives us a homotopy of paths upstairs between $\tilde{\gamma}$ and $c_{\tilde{x}_0}$. So $\tilde{\gamma}$ is trivial.
+\end{proof}
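+Looking ahead to the next section, where we will show $\pi_1(S^1, 1) \cong \Z$, the $n$-fold covering $p_n: S^1 \to S^1$, $z \mapsto z^n$ illustrates this lemma concretely: under this identification, the induced map
+\[
+ (p_n)_*: \pi_1(S^1, 1) \to \pi_1(S^1, 1)
+\]
+is multiplication by $n$ on $\Z$, which is indeed injective, with image the index-$n$ subgroup $n\Z$.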
+
+As we have originally said, our objective is to make our fundamental group act on something. We are almost there already.
+
+Let's look again at the proof that there is a bijection between $p^{-1}(x_0)$ and $p^{-1}(x_1)$. What happens if $\gamma$ is a loop? For any $\tilde{x}_0 \in p^{-1}(x_0)$, we can look at the end point of the lift. This end point may or may not be our original $\tilde{x}_0$. So each loop $\gamma$ ``moves'' our $\tilde{x}_0$ to another $\tilde{x}_0'$.
+
+However, we are not really interested in paths themselves. We are interested in equivalence classes of paths under homotopy of paths. However, this is fine. If $\gamma$ is homotopic to $\gamma'$, then this homotopy can be lifted to get a homotopy between $\tilde{\gamma}$ and $\tilde{\gamma}'$. In particular, these have the same end points. So each (based) homotopy class gives a well-defined endpoint.
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+ \draw [fill=morange, fill opacity=0.9] ellipse (0.6 and 0.3);
+
+ \foreach \y in {1.5, 2, 2.5, 3} {
+ \pgfmathsetmacro\c{100 - \y * 23};
+ \draw [fill=morange!\c!yellow, fill opacity=0.9] (0, \y) ellipse (0.6 and 0.3);
+ \node [circ] at (0, \y) {};
+ }
+
+ \draw [->] (2, 2.5) node [above] {\Large $\tilde{X}$} -- +(0, -2.2) node [below] {\Large $X$} node [pos=0.5, right] {$p$};
+
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$x_0$};
+
+ \draw plot [tension=0.9, smooth] coordinates {(0, 0) (1, -0.3) (1.2, 0) (0.8, 0.3) (0, 0)};
+ \node at (0.8, 0.3) [above] {$\gamma$};
+
+ \draw plot [tension=0.9, smooth] coordinates {(0, 2.5) (1, 2.3) (1.2, 2) (0.8, 1.7) (0, 1.5)};
+ \node [right] at (1.2, 2) {$\tilde{\gamma}$};
+
+ \node [left] at (0, 2.5) {$\tilde{x}_0$};
+ \node [left] at (0, 1.5) {$\tilde{x}_0'$};
+ \end{tikzpicture}
+\end{center}
+Now this gives an action of $\pi_1(X, x_0)$ on $p^{-1}(x_0)$! Note, however, that this will not be the sort of actions we are familiar with. We usually work with \emph{left}-actions, where the group acts on the left, but now we will have \emph{right}-actions, which may confuse you a lot. To see this, we have to consider what happens when we perform two operations one after another, which you shall check yourself. We write this action as $\tilde{x}_0 \cdot [\gamma]$.
+
+When we have an action, we are interested in two things --- the orbits, and the stabilizers. This is what the next lemma tells us about.
+
+\begin{lemma}
+ Suppose $p: \tilde{X} \to X$ is a covering map, $X$ is path connected and $x_0 \in X$.
+ \begin{enumerate}
+ \item The action of $\pi_1(X, x_0)$ on $p^{-1}(x_0)$ is transitive if and only if $\tilde{X}$ is path connected. Alternatively, we can say that the orbits of the action correspond to the path components.
+ \item The stabilizer of $\tilde{x}_0 \in p^{-1}(x_0)$ is $p_*(\pi_1(\tilde{X}, \tilde{x}_0)) \subseteq \pi_1(X, x_0)$.
+ \item If $\tilde{X}$ is path connected, then there is a bijection
+ \[
+ p_* (\pi_1(\tilde{X}, \tilde{x}_0))\backslash \pi_1(X, x_0) \to p^{-1}(x_0).
+ \]
+ Note that $p_*(\pi_1(\tilde{X}, \tilde{x}_0))\backslash \pi_1(X, x_0)$ is not a quotient, but simply the set of cosets. We write it the ``wrong way round'' because we have right cosets instead of left cosets.
+ \end{enumerate}
+\end{lemma}
+Note that this is great! If we can find a covering space $p$ and a point $x_0$ such that $p^{-1}(x_0)$ is non-trivial, then we immediately know that $\pi_1(X, x_0)$ is non-trivial!
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Suppose $\tilde{X}$ is path connected and $\tilde{x}_0, \tilde{x}'_0 \in p^{-1}(x_0)$. Then there is some path $\tilde{\gamma}: \tilde{x}_0 \leadsto \tilde{x}_0'$. We can project this to $\gamma = p\circ \tilde{\gamma}$, which is a loop based at $x_0$. Then by the definition of the action, $\tilde{x}_0 \cdot [\gamma] = \tilde{\gamma}(1) = \tilde{x}_0'$. So the action is transitive. Conversely, suppose the action is transitive. Given any $\tilde{x} \in \tilde{X}$, we can pick a path from $p(\tilde{x})$ to $x_0$ in $X$ and lift it starting at $\tilde{x}$; this connects $\tilde{x}$ to some point of $p^{-1}(x_0)$. Transitivity then lets us connect any two points of $p^{-1}(x_0)$ by lifted loops. So $\tilde{X}$ is path connected.
+
+ \item Suppose $[\gamma] \in \stab(\tilde{x}_0)$. Then the lift $\tilde{\gamma}$ of $\gamma$ at $\tilde{x}_0$ is a loop based at $\tilde{x}_0$. So $\tilde{\gamma}$ defines a class $[\tilde{\gamma}] \in \pi_1(\tilde{X}, \tilde{x}_0)$ with $p_*[\tilde{\gamma}] = [\gamma]$. Conversely, if $[\gamma] = p_*[\tilde{\gamma}]$ for some loop $\tilde{\gamma}$ based at $\tilde{x}_0$, then by uniqueness of lifts, the lift of $\gamma$ at $\tilde{x}_0$ is $\tilde{\gamma}$ itself, which ends at $\tilde{x}_0$. So $[\gamma] \in \stab(\tilde{x}_0)$.
+
+ \item This follows directly from the orbit-stabilizer theorem.\qedhere
+ \end{enumerate}
+\end{proof}
+
+We now want to use this to determine that the fundamental group of a space is non-trivial. We can be more ambitious, and try to actually \emph{find} $\pi_1(X, x_0)$. In the best possible scenario, we would have $\pi_1(\tilde{X}, \tilde{x}_0)$ trivial. Then we have a bijection between $\pi_1(X, x_0)$ and $p^{-1}(x_0)$. In other words, we want our covering space $\tilde{X}$ to be simply connected.
+
+\begin{defi}[Universal cover]
+ A covering map $p: \tilde{X} \to X$ is a \emph{universal cover} if $\tilde{X}$ is simply connected.
+\end{defi}
+We will look into universal covers in depth later and see what they really are.
+
+\begin{cor}
+ If $p: \tilde{X} \to X$ is a universal cover, then there is a bijection $\ell: \pi_1(X, x_0) \to p^{-1}(x_0)$.
+\end{cor}
+Note that the orbit-stabilizer theorem does not provide a canonical bijection between $p^{-1}(x_0)$ and $p_* \pi_1(\tilde{X}, \tilde{x}_0)\backslash \pi_1(X,x_0)$. To obtain a bijection, we need to pick a starting point $\tilde{x}_0 \in p^{-1}(x_0)$. So the above bijection $\ell$ depends on a choice of $\tilde{x}_0$.
+
+\subsection{The fundamental group of the circle and its applications}
+Finally, we can exhibit a non-trivial fundamental group. We are going to consider the space $S^1$ and its universal cover $\R$, with covering map $p(t) = e^{2\pi i t}$, viewing $S^1$ as the unit circle in $\C$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [samples=80, domain=0:5] plot [smooth] ({0.6 * sin (360 * \x)}, {1 + 0.3 * cos (360 * \x) + \x / 2});
+
+ \draw ellipse (0.6 and 0.3);
+
+ \node [circ] at (0.6, 0) {};
+ \node [right] at (0.6, 0) {$1$};
+
+ \foreach \y in {0,1,...,4} {
+ \node [mred, circ] at (0.6, 1 + \y/2 + 0.125) {};
+ }
+ \node [mred, right] at (0.6, 2.125) {$p^{-1}(1)$};
+
+ \draw [->] (1.8, 2.7) node [above] {$\R$} -- +(0, -2.3) node [pos=0.5, right] {$p$} node [below] {$S^1$};
+ \end{tikzpicture}
+\end{center}
+Then our previous corollary gives
+\begin{cor}
+ There is a bijection $\pi_1(S^1, 1) \to p^{-1}(1) = \Z$.
+\end{cor}
+What's next? We just know that $\pi_1(S^1, 1)$ is countably infinite, but can we work out the group structure?
+
+We can, in fact, prove a stronger statement:
+\begin{thm}
+ The map $\ell: \pi_1(S^1, 1) \to p^{-1}(1) = \Z$ is a group isomorphism.
+\end{thm}
+
+\begin{proof}
+ We know it is a bijection. So we need to check it is a group homomorphism. The idea is to write down representatives for what we think the elements should be.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [samples=80, domain=0:5, opacity=0.7] plot [smooth] ({0.6 * sin (360 * \x)}, {1 + 0.3 * cos (360 * \x) + \x / 2});
+
+ \draw [mblue, semithick, domain=2.25:4.25, samples=30] plot [smooth] ({0.6 * sin (360 * \x)}, {1 + 0.3 * cos (360 * \x) + \x / 2});
+
+ \node [mblue] at (-1, 2.6) {$\tilde{u}_2$};
+ \foreach \y in {-2,-1,...,2} {
+ \node [mred, circ] at (0.6, 2 + \y/2 + 0.125) {};
+ \node [mred, right] at (0.6, 2 + \y/2 + 0.125) {$\y$};
+ }
+
+ \draw ellipse (0.6 and 0.3);
+
+ \draw [->] (1.8, 2.7) node [above] {$\R$} -- +(0, -2.3) node [pos=0.5, right] {$p$} node [below] {$S^1$};
+
+ \draw [domain=0.25:2.25, mblue, samples=50] plot [smooth] ({(1.05 + \x / 10) * 0.6 * sin (360 * \x)}, {(1.05 + \x / 10) * 0.3 * cos (360 * \x)});
+ \node [mblue] at (-1, 0) {$u_2$};
+ \end{tikzpicture}
+ \end{center}
+ Let $\tilde{u}_n: I \to \R$ be defined by $t \mapsto nt$, and let $u_n = p\circ \tilde{u}_n$. Since $\R$ is simply connected, there is a unique homotopy class of paths between any two points. So for any $[\gamma] \in \pi_1(S^1, 1)$, if $\tilde{\gamma}$ is the lift of $\gamma$ to $\R$ starting at $0$ and $\tilde{\gamma}(1) = n$, then $\tilde{\gamma} \simeq \tilde{u}_n$ as paths. So $[\gamma] = [u_n]$.
+
+ To show that this has the right group operation, note that the lift of $u_m \cdot u_n$ starting at $0$ first runs from $0$ to $m$, then from $m$ to $m + n$, and so ends at $m + n$, just like $\tilde{u}_{m + n}$. Therefore
+ \[
+ \ell([u_m][u_n]) = \ell([u_m \cdot u_n]) = m + n = \ell([u_m]) + \ell([u_n]).
+ \]
+ So $\ell$ is a group isomorphism.
+\end{proof}
+What have we done? In general, we might be given a horrible, crazy loop in $S^1$. It would be rather difficult to work with it directly in $S^1$. So we pull it up to the universal covering $\R$. Since $\R$ is nice and simply connected, we can easily produce a homotopy that ``straightens out'' the path. We then project this homotopy down to $S^1$, to get a homotopy from $\gamma$ to $u_n$.
+
+It is indeed possible to produce a homotopy directly inside $S^1$ from each loop to some $u_n$, but that would be tedious work that involves messing with a lot of algebra and weird, convoluted formulas.
+
+With the fundamental group of the circle, we can do many things. An immediate application is that we can properly define the ``winding number'' of a closed curve. Since $\C \setminus \{0\}$ is homotopy equivalent to $S^1$, its fundamental group is $\Z$ as well. Any closed curve $S^1 \to \C \setminus \{0\}$ thus induces a group homomorphism $\Z \to \Z$. Any such group homomorphism must be of the form $t \mapsto nt$, and the winding number is given by $n$. If we stare at it long enough, it is clear that this is exactly the number of times the curve winds around the origin.
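+
+For those who have met complex analysis, we remark (without proof) that this topological winding number agrees with the usual integral formula: for a closed curve $\gamma$ in $\C \setminus \{0\}$,
+\[
+ n = \frac{1}{2\pi i}\oint_\gamma \frac{1}{z}\;\mathrm{d}z.
+\]
+For example, the curve $\gamma(t) = e^{2\pi i n t}$ for $t \in [0, 1]$ has winding number $n$.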
+
+Also, we have the following classic application:
+\begin{thm}[Brouwer's fixed point theorem]
+ Let $D^2 = \{(x, y) \in \R^2: x^2 + y^2 \leq 1\}$ be the unit disk. If $f: D^2 \to D^2$ is continuous, then there is some $x\in D^2$ such that $f(x) = x$.
+\end{thm}
+
+\begin{proof}
+ Suppose not. So $x \not= f(x)$ for all $x\in D^2$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2cm];
+ \node [circ] at (-0.4, -0.3) {};
+ \node at (-0.4, -0.3) [below] {$x$};
+
+ \node [circ] at (0.4, 0.3) {};
+ \node at (0.4, 0.3) [right] {$f(x)$};
+ \draw (0.4, 0.3) -- (-1.6, -1.2);
+ \node at (-1.6, -1.2) [circ] {};
+ \node at (-1.6, -1.2) [anchor = north east] {$g(x)$};
+ \end{tikzpicture}
+ \end{center}
+ We define $g: D^2 \to S^1$ as in the picture above. Then we know that $g$ is continuous and $g$ is a retraction from $D^2$ onto $S^1$. In other words, the following composition is the identity:
+ \[
+ \begin{tikzcd}
+ S^1 \ar[r, hook, "\iota"] \ar[rr, bend right, "\id_{S^1}"'] & D^2 \ar[r, "g"] & S^1
+ \end{tikzcd}
+ \]
+ Then this induces a homomorphism of groups whose composition is the identity:
+ \[
+ \begin{tikzcd}
+ \Z \ar[r, "\iota_*"] \ar[rr, bend right, "\id_\Z"'] & \{0\} \ar[r, "g_*"] & \Z
+ \end{tikzcd}
+ \]
+ But this is clearly nonsense! So we must have had a fixed point.
+\end{proof}
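+In case one is worried about whether the pictorially-defined $g$ is actually continuous, we can write it down explicitly. Let $v = x - f(x)$, which is non-zero by assumption. Then $g(x) = x + tv$, where $t \geq 0$ is the unique non-negative solution of $|x + tv|^2 = 1$, namely
+\[
+ t = \frac{-\langle x, v\rangle + \sqrt{\langle x, v\rangle^2 + |v|^2(1 - |x|^2)}}{|v|^2}.
+\]
+Since $f$ is continuous, $v \neq 0$, and the expression under the square root is non-negative (as $|x| \leq 1$), this is a continuous function of $x$.
+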
+But we have a problem. What about $D^3$? Can we prove a similar theorem? Here the fundamental group is of little use, since we can show that the fundamental group of $S^n$ for $n \geq 2$ is trivial. Later in the course, we will be able to prove this theorem for higher dimensions, when we have developed more tools to deal with stuff.
+
+\subsection{Universal covers}
+We have defined universal covers mysteriously as covers that are simply connected. We have just shown that $p: \R\to S^1$ is a universal cover. In general, what do universal covers look like?
+
+Let's consider a slightly more complicated example. What would be a universal cover of the torus $S^1 \times S^1$? An obvious guess would be $p\times p: \R\times \R \to S^1 \times S^1$. How can we visualize this?
+
+First of all, how can we visualize a torus? Often, we just picture it as the surface of a doughnut. Alternatively, we can see it as a quotient of the square, where we identify the following edges:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.55, mred] (0, 0) -- (3, 0);
+ \draw [->-=0.55, mred] (0, 3) -- (3, 3);
+
+ \draw [->-=0.55, mblue] (0, 0) -- (0, 3);
+ \draw [->-=0.55, mblue] (3, 0) -- (3, 3);
+ \end{tikzpicture}
+\end{center}
+Then what does it \emph{feel} like to live in the torus? If you live in a torus and look around, you don't see a boundary. The space just extends indefinitely, somewhat like $\R^2$. The difference is that in the torus, you aren't actually seeing free space out there, but just seeing copies of the same space over and over again. If you live inside the square, the universe actually looks like this:
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2,-1,...,2} {
+ \foreach \y in {-2,-1,...,2} {
+ \pgfmathsetmacro\c{1 - sqrt((\x)^2 + (\y)^2)/6};
+ \begin{scope}[shift={(\x * 2, \y * 2)}, opacity=\c]
+ \draw (0, 0.4) circle [radius=0.15];
+ \draw (0, 0.25) -- (0, -0.25);
+ \draw (0, -0.25) -- (-0.15, -0.7);
+ \draw (0, -0.25) -- (0.15, -0.7);
+ \draw (0, 0.25) -- (-0.15, -0.1);
+ \draw (0, 0.25) -- (0.15, -0.1);
+ \end{scope}
+ }
+ }
+ \end{tikzpicture}
+\end{center}
+As we said, this looks somewhat like $\R^2$, but we \emph{know} that this is not $\R^2$, since we can see some symmetry in this space. Whenever we move one unit horizontally or vertically, we get back to ``the same place''. In fact, we can move horizontally by $n$ units and vertically by $m$ units, for any $n, m \in \Z$, and still get back to the same place. This space has a huge translation symmetry. What is this symmetry? It is exactly $\Z \times \Z$.
+
+We see that if we live inside the torus $S^1 \times S^1$, it feels like we are actually living in the universal covering space $\R \times \R$, except that we have an additional symmetry given by the fundamental group $\Z \times \Z$.
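+
+If we take for granted the fact (which we will not prove here) that the fundamental group of a product is the product of the fundamental groups, this matches the computation
+\[
+ \pi_1(S^1 \times S^1) \cong \pi_1(S^1) \times \pi_1(S^1) \cong \Z \times \Z,
+\]
+and the bijection from our corollary identifies $\Z \times \Z$ with the fibre of $p \times p$, i.e.\ with the grid of copies of the basepoint in $\R \times \R$.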
+
+Hopefully, you are convinced that universal covers are nice. We would like to say that universal covers always exist. However, this is not always true.
+
+Firstly, we should think --- what would having a universal cover imply? Suppose $X$ has a universal cover $\tilde{X}$. Pick any point $x_0 \in X$, and pick an evenly covered neighbourhood $U$ in $X$. This lifts to some $\tilde{U} \subseteq \tilde{X}$. If we draw a teeny-tiny loop $\gamma$ around $x_0$ inside $U$, we can lift this $\gamma$ to $\tilde{\gamma}$ in $\tilde{U}$. But we know that $\tilde{X}$ is simply connected. So $\tilde{\gamma}$ is homotopic to the constant path. Hence $\gamma$ is also homotopic to the constant path. So all loops (contained in $U$) at $x_0$ are homotopic to the constant path.
+
+It seems like for every $x_0 \in X$, there is some neighbourhood of $x_0$ that is simply connected. Except that's not what we just showed above. The homotopy from $\tilde{\gamma}$ to the constant path is a homotopy in $\tilde{X}$, and can pass through anything in $\tilde{X}$, not just $\tilde{U}$. Hence the homotopy induced in $X$ is also a homotopy in $X$, not a homotopy in $U$. So $U$ itself need not be simply connected. What we have is a slightly weaker notion.
+
+\begin{defi}[Locally simply connected]
+ $X$ is \emph{locally simply connected} if for all $x_0\in X$, there is some neighbourhood $U$ of $x_0$ such that $U$ is simply connected.
+\end{defi}
+
+As we mentioned, what we actually want is a weaker condition.
+\begin{defi}[Semi-locally simply connected]
+ $X$ is \emph{semi-locally simply connected} if for all $x_0 \in X$, there is some neighbourhood $U$ of $x_0$ such that any loop $\gamma$ based at $x_0$ is homotopic to $c_{x_0}$ as paths \emph{in $X$}.
+\end{defi}
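+A standard example of a space that is not semi-locally simply connected is the \emph{Hawaiian earring}
+\[
+ E = \bigcup_{n \geq 1} C_n \subseteq \R^2,
+\]
+where $C_n$ is the circle of radius $\frac{1}{n}$ centred at $\left(\frac{1}{n}, 0\right)$. Every neighbourhood of the origin contains all but finitely many of the circles $C_n$, and these small circles are not null-homotopic even in the whole space $E$. So by the argument above, $E$ has no universal cover. On the other hand, the cone on $E$ is contractible, hence semi-locally simply connected, but it is still not locally simply connected, so the two notions genuinely differ.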
+
+We have just argued that if a universal cover $p: \tilde{X} \to X$ exists, then $X$ is semi-locally simply connected. On its own this is not very interesting, since we don't care whether a space is semi-locally simply connected. What is important is the converse direction: this condition, plus a little more, guarantees that a universal cover exists. We still need one more additional condition:
+
+\begin{defi}[Locally path connected]
+ A space $X$ is \emph{locally path connected} if for any point $x$ and any neighbourhood $V$ of $x$, there is some open path connected $U \subseteq V$ such that $x \in U$.
+\end{defi}
+
+It is important to note that a path connected space need not be locally path connected. It is an exercise for the reader to come up with a counterexample.
+
+\begin{thm}
+ If $X$ is path connected, locally path connected and semi-locally simply connected, then $X$ has a universal covering.
+\end{thm}
+Note that we can alternatively define a universal covering as a covering space of $X$ that is also a covering space of all other covers of $X$. If we use this definition, then we can prove this result easily using Zorn's lemma. However, that proof is not too helpful since it does not tell us where the universal covering comes from. Instead, we will provide a constructive proof (sketch) that will hopefully be more indicative of what universal coverings are like.
+
+\begin{proof}(idea)
+ We pick a basepoint $x_0 \in X$ for ourselves. Suppose we have a universal covering $\tilde{X}$. Then $x_0$ lifts to some $\tilde{x}_0$ in $\tilde{X}$. If we have any other point $\tilde{x} \in \tilde{X}$, since $\tilde{X}$ should be path connected, there is a path $\tilde{\alpha}: \tilde{x}_0 \leadsto \tilde{x}$. If we have another path, then since $\tilde{X}$ is simply connected, the paths are homotopic. Hence, we can identify each point in $\tilde{X}$ with a path from $\tilde{x}_0$, i.e.
+ \[
+ \{\text{points of }\tilde{X}\} \longleftrightarrow \{\text{paths }\tilde{\alpha}\text{ from }\tilde{x}_0\in \tilde{X}\}/{\simeq}.
+ \]
+ This is not too helpful though, since we are defining $\tilde{X}$ in terms of things in $\tilde{X}$. However, by path lifting, we know that paths $\tilde{\alpha}$ from $\tilde{x}_0$ in $\tilde{X}$ biject with paths $\alpha$ from $x_0$ in $X$. Also, by homotopy lifting, homotopies of paths in $X$ can be lifted to homotopies of paths in $\tilde{X}$. So we have
+ \[
+ \{\text{points of }\tilde{X}\} \longleftrightarrow \{\text{paths }\alpha\text{ from }x_0 \in X\}/{\simeq}.
+ \]
+ So we can produce our $\tilde{X}$ by picking a basepoint $x_0 \in X$, and defining
+ \[
+ \tilde{X} = \{\text{paths }\alpha: I\to X\text{ such that }\alpha(0) = x_0\}/{\simeq}.
+ \]
+ The covering map $p: \tilde{X} \to X$ is given by $[\alpha] \mapsto \alpha(1)$.
+
+ One then has to work hard to define the topology, and then show this is simply connected. % complete!!!
+\end{proof}
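+For the curious, here is a sketch of how the topology is usually defined (this follows Hatcher's book). For each $[\alpha] \in \tilde{X}$ and each path connected open $U \subseteq X$ containing $\alpha(1)$, we set
+\[
+ U_{[\alpha]} = \{[\alpha \cdot \beta]: \beta\text{ a path in }U\text{ with }\beta(0) = \alpha(1)\}.
+\]
+One checks that these sets form a basis for a topology on $\tilde{X}$; that when $U$ is additionally chosen so that loops in $U$ are null-homotopic in $X$ (which semi-local simple connectedness allows), the map $p$ restricts to a homeomorphism $U_{[\alpha]} \to U$; and that the resulting space is simply connected. Local path connectedness of $X$ is what guarantees that enough such $U$ exist to cover $X$.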
+
+\subsection{The Galois correspondence}
+Recall that at the beginning, we wanted to establish a correspondence between covering spaces and fundamental groups. We have already established the result that covering maps are injective on $\pi_1$. Therefore, given a (based) covering space $p: (\tilde{X}, \tilde{x}_0) \to (X, x_0)$, we can give a subgroup $p_*\pi_1(\tilde{X}, \tilde{x}_0)\leq \pi_1(X, x_0)$. It turns out that as long as we define carefully what we mean for based covering spaces to be ``the same'', this is a one-to-one correspondence --- each subgroup corresponds to a covering space.
+
+We can have the following table of correspondences:
+\begin{center}
+ \begin{tabular}{rcl}
+ \textbf{Covering spaces} & & \textbf{Fundamental group}\\
+ (Based) covering spaces & $\longleftrightarrow$ & Subgroups of $\pi_1$\\
+ Number of sheets & $\longleftrightarrow$ & Index\\
+ Universal covers & $\longleftrightarrow$ & Trivial subgroup\\
+ \end{tabular}
+\end{center}
+We now want to look at some of these correspondences.
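+
+The circle already illustrates every row of this table. The subgroups of $\pi_1(S^1, 1) \cong \Z$ are exactly the subgroups $n\Z$ for $n \geq 0$. The subgroup $n\Z$ with $n \geq 1$ corresponds to the $n$-sheeted cover $z \mapsto z^n: S^1 \to S^1$, matching the index $n$ of $n\Z$ in $\Z$, while the trivial subgroup corresponds to the universal cover $\R \to S^1$.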
+
+Recall that we have shown that $\pi_1(X, x_0)$ acts on $p^{-1}(x_0)$. However, this is not too interesting an action, since $p^{-1}(x_0)$ is just a discrete set with no structure. Having groups acting on a cube is fun because the cube has some structure. So we want something more ``rich'' for $\pi_1(X, x_0)$ to act on.
+
+We note that we can make $\pi_1(X, x_0)$ ``act on'' the universal cover. How? Recall that in the torus example, each element of the fundamental group corresponds to translating the whole universal covering by some amount. In general, a point on $\tilde{X}$ can be thought of as a path $\alpha$ on $X$ starting from $x_0$. Then it is easy to make a loop $\gamma: x_0 \leadsto x_0$ act on this: use the concatenation $\gamma\cdot \alpha$: $[\gamma]\cdot [\alpha] = [\gamma \cdot \alpha]$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw plot [smooth cycle] coordinates {(-1.2, -0.7) (0, -0.7) (0.7, -1) (1.3, -0.9) (1.4, 0.4) (0.3, 1) (-1.7, 0.6)};
+
+ \begin{scope}[shift={(-1, 3)}]
+ \draw plot [smooth cycle] coordinates {(-1, -0.7) (0, -0.7) (1.2, -1) (2.6, -0.9) (3, 0.4) (0.6, 1) (-1.3, 0.6)};
+ \node [below] at (-0.5, 0) {$\tilde{x}_0$};
+ \node [below] at (0.8, 0.1) {$\tilde{x}_0'$};
+ \draw (-0.5, 0) parabola (0.8, 0.1) node [pos=0.5, below] {$\tilde{\gamma}$};
+
+ \node [circ] at (-0.5, 0) {};
+ \node [circ] at (0.8, 0.1) {};
+
+ \draw plot [smooth] coordinates {(0.8, 0.1) (1.3, 0.2) (1.8, 0.1)};
+ \node [circ] at (1.8, 0.1) {};
+ \node [above] at (1.3, 0.2) {$\alpha$};
+ \end{scope}
+
+ \draw [->] (3, 2.5) node [above] {\Large $\tilde{X}$} -- +(0, -2.2) node [below] {\Large $X$} node [pos=0.5, right] {$p$};
+
+ \node [circ] at (0, 0) {};
+ \node [below] at (0, 0) {$x_0$};
+
+ \draw plot [tension=0.9, smooth] coordinates {(0, 0) (-1, -0.3) (-1.2, 0) (-0.8, 0.3) (0, 0)};
+ \node at (-0.8, 0.3) [above] {$\gamma$};
+
+ \draw plot [smooth] coordinates {(0, 0) (0.5, 0.1) (1, 0)};
+ \node [circ] at (1, 0) {};
+ \node [above] at (0.5, 0.1) {$\alpha$};
+ \end{tikzpicture}
+\end{center}
+We will use this idea and return to the initial issue of making subgroups correspond to covering spaces. We want to show that this is surjective --- every subgroup arises from some cover. We want to say ``For any subgroup $H \leq \pi_1(X, x_0)$, there is a based covering map $p: (\tilde{X}, \tilde{x}_0)\to (X, x_0)$ such that $p_* \pi_1(\tilde{X}, \tilde{x}_0) = H$''. Except, this cannot possibly be true, since by taking the trivial subgroup, this would imply that there is a universal covering for every space. So we need some additional assumptions.
+
+\begin{prop}
+ Let $X$ be a path connected, locally path connected and semi-locally simply connected space. For any subgroup $H \leq \pi_1(X, x_0)$, there is a based covering map $p: (\tilde{X}, \tilde{x}_0)\to (X, x_0)$ such that $p_* \pi_1(\tilde{X}, \tilde{x}_0) = H$.
+\end{prop}
+
+\begin{proof}
+ Since $X$ is path connected, locally path connected and semi-locally simply connected, it has a universal cover $\bar{X}$. We then have an intermediate group $H$ satisfying $\pi_1(\bar{X}, \bar{x}_0) = 1 \leq H \leq \pi_1(X, x_0)$. How can we obtain a corresponding covering space?
+
+ Note that if we have $\bar{X}$ and we want to recover $X$, we can quotient $\bar{X}$ by the action of $\pi_1(X, x_0)$. Since $\pi_1(X, x_0)$ acts on $\bar{X}$, so does $H \leq \pi_1(X, x_0)$. Now we can define our covering space by taking quotients. We define $\sim_H$ on $\bar{X}$ to be the orbit relation for the action of $H$, i.e.\ $\tilde{x} \sim_H \tilde{y}$ if there is some $h \in H$ such that $\tilde{y} = h\tilde{x}$. We then let $\tilde{X}$ be the quotient space $\bar{X}/{\sim_H}$.
+
+ We can now do the messy algebra to show that this is the covering space we want. % complete again
+\end{proof}
+We have just shown that every subgroup comes from some covering space, i.e.\ the map from the set of (based) covering spaces to the subgroups of $\pi_1$ is surjective. Now we want to prove injectivity. To do so, we need a generalization of the homotopy lifting lemma.
+
+Suppose we have path-connected spaces $(Y, y_0)$, $(X, x_0)$ and $(\tilde{X}, \tilde{x}_0)$, with $f: (Y, y_0) \to (X, x_0)$ a continuous map, $p: (\tilde{X}, \tilde{x}_0) \to (X, x_0)$ a covering map. When does a lift of $f$ to $\tilde{f}: (Y, y_0) \to (\tilde{X}, \tilde{x}_0)$ exist? The answer is given by the lifting criterion.
+
+\begin{lemma}[Lifting criterion]
+ Let $p: (\tilde{X}, \tilde{x}_0) \to (X, x_0)$ be a covering map of path-connected based spaces, and $(Y, y_0)$ a path-connected, locally path connected based space. If $f: (Y, y_0) \to (X, x_0)$ is a continuous map, then there is a (unique) lift $\tilde{f}: (Y, y_0) \to (\tilde{X}, \tilde{x}_0)$ such that the diagram below commutes (i.e.\ $p\circ \tilde{f} = f$):
+ \[
+ \begin{tikzcd}[row sep=large]
+ & (\tilde{X}, \tilde{x}_0) \ar[d, "p"]\\
+ (Y, y_0) \ar [r, "f"'] \ar [ru, dashed, "\tilde{f}"] & (X, x_0)
+ \end{tikzcd}
+ \]
+ if and only if the following condition holds:
+ \[
+ f_* \pi_1(Y, y_0) \leq p_*\pi_1(\tilde{X}, \tilde{x}_0).
+ \]
+\end{lemma}
+Note that uniqueness comes from the uniqueness of lifts. So this lemma is really about existence.
+
+Also, note that the condition holds trivially when $Y$ is simply connected, e.g.\ when it is an interval (path lifting) or a square (homotopy lifting). So paths and homotopies can always be lifted.
+
+\begin{proof}
+ One direction is easy: if $\tilde{f}$ exists, then $f = p \circ \tilde{f}$. So $f_* = p_* \circ \tilde{f}_*$. So we know that $\im f_* \subseteq \im p_*$. So done.
+
+ In the other direction, uniqueness follows from the uniqueness of lifts. So we only need to prove existence. We define $\tilde{f}$ as follows:
+
+ Given a $y \in Y$, there is some path $\alpha_y: y_0 \leadsto y$. Then $f$ maps this to $\beta_y: x_0 \leadsto f(y)$ in $X$. By path lifting, this path lifts uniquely to $\tilde{\beta}_y$ in $\tilde{X}$. Then we set $\tilde{f}(y) = \tilde{\beta}_y(1)$. Note that if $\tilde{f}$ exists, then this \emph{must} be what $\tilde{f}$ sends $y$ to. What we need to show is that this is well-defined.
+
+ Suppose we picked a different path $\alpha'_y: y_0 \leadsto y$. Then this $\alpha_y'$ would have differed from $\alpha_y$ by a loop $\gamma$ in $Y$.
+
+ Our condition that $f_* \pi_1(Y, y_0) \leq p_*\pi_1(\tilde{X}, \tilde{x}_0)$ says that $f\circ \gamma$ lifts to a \emph{loop} in $\tilde{X}$. So $\tilde{\beta}_y$ and $\tilde{\beta}'_y$ differ by a loop in $\tilde{X}$, and hence have the same end point. This shows that $\tilde{f}$ is well-defined.
+
+ Finally, we show that $\tilde{f}$ is continuous. First, observe that any open set $U \subseteq \tilde{X}$ can be written as a union of $\tilde{V}$ such that $p|_{\tilde{V}}: \tilde{V} \to p(\tilde{V})$ is a homeomorphism. Thus, it suffices to show that if $p|_{\tilde{V}}: \tilde{V} \to p(\tilde{V}) = V$ is a homeomorphism, then $\tilde{f}^{-1}(\tilde{V})$ is open.
+
+ Let $y \in \tilde{f}^{-1}(\tilde{V})$, and let $x = f(y)$. Since $f^{-1}(V)$ is open and $Y$ is locally path-connected, we can pick an open $W \subseteq f^{-1}(V)$ such that $y \in W$ and $W$ is path connected. We claim that $W \subseteq \tilde{f}^{-1}(\tilde{V})$.
+
+ Indeed, if $z \in W$, then we can pick a path $\gamma$ from $y$ to $z$. Then $f$ sends this to a path from $x$ to $f(z)$. The lift of this path to $\tilde{X}$ is given by $p|_{\tilde{V}}^{-1} (f(\gamma))$, whose end point is $p|_{\tilde{V}}^{-1}(f(z)) \in \tilde{V}$. So it follows that $\tilde{f}(z) = p|_{\tilde{V}}^{-1}(f(z)) \in \tilde{V}$.
+
+\end{proof}
+
+Now we prove that every subgroup of $\pi_1$ comes from exactly one covering space. What this statement properly means is made precise in the following proposition:
+\begin{prop}
+ Let $(X, x_0)$, $(\tilde{X}_1, \tilde{x}_1)$, $(\tilde{X}_2, \tilde{x}_2)$ be path-connected based spaces, and $p_i: (\tilde{X}_i, \tilde{x}_i) \to (X, x_0)$ be covering maps. Then we have
+ \[
+ p_{1*}\pi_1(\tilde{X}_1, \tilde{x}_1) = p_{2*} \pi_1(\tilde{X}_2, \tilde{x}_2)
+ \]
+ if and only if there is some \emph{homeomorphism} $h$ such that the following diagram commutes:
+ \[
+ \begin{tikzcd}[row sep=large, column sep=tiny]
+ (\tilde{X}_1, \tilde{x}_1) \ar[rr, dashed, "h"] \ar[rd, "p_1"'] & & (\tilde{X}_2, \tilde{x}_2) \ar[ld, "p_2"]\\
+ & (X, x_0)
+ \end{tikzcd}
+ \]
+ i.e.\ $p_1 = p_2 \circ h$.
+\end{prop}
+Note that this is a stronger statement than just saying the two covering spaces are homeomorphic. We are saying that we can find a nice homeomorphism that works well with the covering map $p$.
+
+\begin{proof}
+ If such a homeomorphism exists, then clearly the subgroups are equal. If the subgroups are equal, we rotate our diagram a bit:
+ \[
+ \begin{tikzcd}[row sep=large]
+ & (\tilde{X}_2, \tilde{x}_2) \ar[d, "p_2"]\\
+ (\tilde{X}_1, \tilde{x}_1) \ar [r, "p_1"'] \ar [ru, dashed, "h = \tilde{p}_1"] & (X, x_0)
+ \end{tikzcd}
+ \]
+ Then $h = \tilde{p}_1$ exists by the lifting criterion. By symmetry, we can get $h^{-1} = \tilde{p}_2$. To show $\tilde{p}_2$ is indeed the inverse of $\tilde{p}_1$, note that $\tilde{p}_2 \circ \tilde{p}_1$ is a lift of $p_2 \circ \tilde{p}_1 = p_1$. Since $\id_{\tilde{X}_1}$ is also a lift, by the uniqueness of lifts, we know $\tilde{p}_2 \circ \tilde{p}_1$ is the identity map. Similarly, $\tilde{p}_1 \circ \tilde{p}_2$ is also the identity.
+ \[
+ \begin{tikzcd}[row sep=large]
+ & & (\tilde{X}_1, \tilde{x}_1) \ar[d, "p_1"]\\
+ (\tilde{X}_1, \tilde{x}_1) \ar[rr, bend right, "p_1"'] \ar [r, "\tilde{p}_1"'] & (\tilde{X}_2, \tilde{x}_2) \ar [ru, "\tilde{p}_2"] \ar[r, "p_2"'] & (X, x_0)
+ \end{tikzcd}%\qedhere
+ \]
+\end{proof}
+We would now like to forget about the basepoints. What happens when we change basepoint? Recall that changing the basepoint conjugates the group. This does not change the group itself, but conjugation can send a subgroup to a different subgroup. Hence, if we do not specify the basepoint, we don't get a subgroup of $\pi_1$, but a \emph{conjugacy class} of subgroups.
+
+\begin{prop}
+ Unbased covering spaces correspond to conjugacy classes of subgroups.
+\end{prop}
+
+\section{Some group theory}
+Algebraic topology is about translating topology into group theory. Unfortunately, you don't know group theory. Well, maybe you do, but not the right group theory. So let's learn group theory!
+
+\subsection{Free groups and presentations}
+Recall that in IA Groups, we defined, say, the dihedral group to be
+\[
+ D_{2n} = \bra r, s \mid r^n = s^2 = e, srs = r^{-1}\ket.
+\]
+What does this expression actually mean? Can we formally assign a meaning to this expression?
+
+We start with a simple case --- the free group. This is, in some sense, the ``freest'' group we can have. It is defined in terms of an alphabet and words.
+
+\begin{defi}[Alphabet and words]
+ We let $S = \{s_\alpha: \alpha \in \Lambda\}$ be our \emph{alphabet}, and we have an extra set of symbols $S^{-1} = \{s_\alpha^{-1}: \alpha \in \Lambda\}$. We assume that $S\cap S^{-1} = \emptyset$. What do we do with alphabets? We write words with them!
+
+ We define $S^*$ to be the set of \emph{words} over $S\cup S^{-1}$, i.e.\ it contains $n$-tuples $x_1 \cdots x_n$ for any $0 \leq n < \infty$, where each $x_i \in S \cup S^{-1}$.
+\end{defi}
+
+\begin{eg}
+ Let $S = \{a, b\}$. Then words could be the empty word $\emptyset$, or $a$, or $aba^{-1}b^{-1}$, or $aa^{-1}aaaaabbbb$, etc. We are usually lazy and write $aa^{-1}aaaaabbbb$ as $aa^{-1}a^5 b^4$.
+\end{eg}
+
+When we see things like $aa^{-1}$, we would want to cancel them. This is called \emph{elementary reduction}.
+\begin{defi}[Elementary reduction]
+ An \emph{elementary reduction} takes a word $us_\alpha s_\alpha^{-1}v$ and gives $uv$, or turns $us_\alpha^{-1}s_\alpha v$ into $uv$.
+\end{defi}
+Since each reduction shortens the word, and the word has finite length, we cannot keep reducing forever. Eventually, we reach a \emph{reduced} word.
+
+\begin{defi}[Reduced word]
+ A word is \emph{reduced} if it does not admit an elementary reduction.
+\end{defi}
+
+\begin{eg}
+ $\emptyset$, $a$, $aba^{-1}b^{-1}$ are reduced words, while $aa^{-1}aaaaabbbb$ is not.
+\end{eg}
+Note that there is an inclusion map $S \to S^*$ that sends the symbol $s_\alpha$ to the word $s_\alpha$.
+
+\begin{defi}[Free group]
+ The \emph{free group} on the set $S$, written $F(S)$, is the set of reduced words in $S^*$ together with the following operations:
+ \begin{enumerate}
+ \item Multiplication is given by concatenation followed by elementary reductions to get a reduced word. For example, $(aba^{-1}b^{-1}) \cdot (bab) = aba^{-1}b^{-1}bab = ab^2$.
+ \item The identity is the empty word $\emptyset$.
+ \item The inverse of $x_1\cdots x_n$ is $x_n^{-1}\cdots x_1^{-1}$, where, of course, $(s_\alpha^{-1})^{-1} = s_\alpha$.
+ \end{enumerate}
+ The elements of $S$ are called the \emph{generators} of $F(S)$.
+\end{defi}
+Note that we have not shown that multiplication is well-defined --- we might reduce the same word in different ways and reach two different reduced words. We will show later, using topology, that it is indeed well-defined!
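+Elementary reduction and free group multiplication are easy to experiment with on a computer. Below is a small Python sketch (our own illustration, not part of the course); we encode a generator as a lowercase letter and its inverse as the corresponding uppercase letter. The left-to-right stack scan is one particular reduction strategy; that \emph{every} strategy yields the same reduced word is exactly the well-definedness question above.

```python
def inv(x):
    """Inverse of a single symbol: 'a' <-> 'A'."""
    return x.swapcase()

def reduce_word(word):
    """Apply elementary reductions left to right using a stack.

    Whenever the next symbol is the inverse of the symbol on top of
    the stack, the adjacent pair cancels, mirroring the reduction
    u s s^{-1} v -> u v.
    """
    stack = []
    for x in word:
        if stack and stack[-1] == inv(x):
            stack.pop()          # elementary reduction
        else:
            stack.append(x)
    return "".join(stack)

def multiply(u, v):
    """Multiplication in F(S): concatenate, then reduce."""
    return reduce_word(u + v)

# (a b a^{-1} b^{-1}) . (b a b) = a b^2, as in the definition above
print(multiply("abAB", "bab"))   # abb
```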
+
+Some people like to define the free group in a different way. This is a cleaner way to define the free group without messing with alphabets and words, but is (for most people) less intuitive. This definition also does not make it clear that the free group $F(S)$ of any set $S$ exists. We will state this definition as a lemma.
+\begin{lemma}
+ If $G$ is a group and $\phi: S\to G$ is a set map, then there exists a unique \emph{homomorphism} $f: F(S) \to G$ such that the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ F(S) \ar[dr, dashed, "f"] \\
+ S \ar[u] \ar[r, "\phi"'] & G
+ \end{tikzcd}
+ \]
+ where the unlabeled arrow is the natural inclusion map that sends $s_\alpha$ (as a symbol from the alphabet) to $s_\alpha$ (as a word).
+\end{lemma}
+
+\begin{proof}
+ Clearly if $f$ exists, then $f$ must send each $s_\alpha$ to $\phi(s_\alpha)$ and $s_\alpha^{-1}$ to $\phi(s_\alpha)^{-1}$. Then the values of $f$ on all other elements must be determined by
+ \[
+ f(x_1\cdots x_n) = f(x_1)\cdots f(x_n)
+ \]
+ since $f$ is a homomorphism. So if $f$ exists, it must be unique. So it suffices to show that this $f$ is a well-defined homomorphism.
+
+ This is well-defined if we define $F(S)$ to be the set of all reduced words, since each reduced word has a unique representation (since it is \emph{defined} to be the representation itself).
+
+ To show this is a homomorphism, suppose
+ \[
+ x = x_1\cdots x_n a_1\cdots a_k,\quad y = a_k^{-1} \cdots a_1^{-1} y_1\cdots y_m,
+ \]
+ where $y_1 \not= x_n^{-1}$. Then
+ \[
+ xy = x_1 \cdots x_n y_1\cdots y_m.
+ \]
+ Then we can compute
+ \begin{align*}
+ f(x)f(y) &= \big(\phi(x_1)\cdots \phi(x_n) \phi(a_1) \cdots \phi(a_k)\big) \big(\phi(a_k)^{-1} \cdots \phi(a_1)^{-1}\phi(y_1)\cdots\phi(y_m)\big) \\
+ &= \phi(x_1)\cdots\phi(x_n) \cdots \phi(y_1)\cdots \phi(y_m) \\
+ &= f(xy).
+ \end{align*}
+ So $f$ is a homomorphism.
+\end{proof}
+We call this a ``universal property'' of $F(S)$. We can show that $F(S)$ is the unique group satisfying the conditions of this lemma (up to isomorphism), by taking $G = F(S)$ and using the uniqueness properties.
+
+\begin{defi}[Presentation of a group]
+ Let $S$ be a set, and let $R \subseteq F(S)$ be any sub\emph{set}. We denote by $\bra \bra R\ket\ket$ the \emph{normal closure} of $R$, i.e.\ the smallest normal subgroup of $F(S)$ containing $R$. This can be given explicitly by
+ \[
+ \bra \bra R\ket \ket = \left\{\prod_{i = 1}^n g_i r_i g_i^{-1}: n \in \N, r_i \in R, g_i \in F(S)\right\}.
+ \]
+ Then we write
+ \[
+ \bra S \mid R\ket = F(S)/\bra\bra R\ket\ket.
+ \]
+\end{defi}
+This is just the usual notation we have for groups. For example, we can write
+\[
+ D_{2n} = \bra r, s \mid r^n, s^2, srsr\ket.
+\]
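+As a quick check that this matches the earlier presentation $\bra r, s \mid r^n = s^2 = e, srs = r^{-1}\ket$: in the quotient, $s^2 = e$ gives $s^{-1} = s$, so the relator $srsr$ encodes the same relation:

```latex
srsr = e \iff srs = r^{-1} \iff srs^{-1} = r^{-1} \quad (\text{using } s^{-1} = s).
```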
+Again, we can define this with a universal property.
+\begin{lemma}
+ If $G$ is a group and $\phi: S \to G$ is a set map such that $\phi(r) = 1$ for all $r \in R$ (i.e.\ if $r = s_1^{\pm 1}s_2^{\pm 1}\cdots s_m^{\pm 1}$, then $\phi(r) = \phi(s_1)^{\pm 1}\phi(s_2)^{\pm 1} \cdots \phi(s_m)^{\pm 1} = 1$), then there exists a unique homomorphism $f: \bra S \mid R\ket \to G$ such that the following triangle commutes:
+ \[
+ \begin{tikzcd}
+ \bra S\mid R\ket \ar[dr, dashed, "f"] \\
+ S \ar[u] \ar[r, "\phi"'] & G
+ \end{tikzcd}
+ \]
+\end{lemma}
+The proof is similar to the previous one.
+
+In some sense, this says that the group $\bra S\mid R \ket$ satisfies the relations in $R$, and nothing else.
+
+\begin{eg}[The \st{stupid} canonical presentation]
+ Let $G$ be a group. We can view $G$ as a set, and hence obtain a free group $F(G)$. The identity map $G \to G$ is in particular a set map, so by the universal property of the free group, there is a surjective homomorphism $f: F(G) \to G$. Let $R = \ker f$. Since kernels are normal, we have $\bra \bra R\ket \ket = R$. Then $\bra G \mid R \ket$ is a presentation for $G$, since the first isomorphism theorem says
+ \[
+ G \cong F(G) / \ker f.
+ \]
+\end{eg}
+This is a really stupid example. For example, even the simplest non-trivial group $\Z/2$ will be written as a quotient of a free group on two generators. However, this tells us that every group has a presentation.
+
+\begin{eg}
+ $\bra a, b\mid b \ket \cong \bra a\ket \cong \Z$.
+\end{eg}
+
+\begin{eg}
+ With a bit of work, we can show that $\bra a, b\mid ab^{-3}, ba^{-2}\ket \cong \Z/5$.
+\end{eg}
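+A sketch of the ``bit of work'' (our own, not from the lectures): in the quotient, the relators say $a = b^3$ and $b = a^2$, so

```latex
a = b^3 = (a^2)^3 = a^6 \implies a^5 = e.
```

+Since $b = a^2$, the group is generated by $a$ alone, so it is a quotient of $\bra a \mid a^5\ket \cong \Z/5$. Conversely, sending $a \mapsto 2$ and $b \mapsto 4$ in $(\Z/5, +)$ kills both relators, so the group is all of $\Z/5$.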
+
+\subsection{Another view of free groups}
+Recall that we have not yet properly defined free groups, since we did not show that multiplication is well-defined. We are now going to do this using topology.
+
+Again let $S$ be a set. For the following illustration, we will just assume $S = \{ a, b\}$, but what we will do works for any set $S$. We define $X$ by
+\begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw (-1, 0) circle [radius=1];
+ \draw (1, 0) circle [radius=1];
+ \node [circ] {};
+ \node [left] at (-2, 0) {$a$};
+ \node [right] at (2, 0) {$b$};
+ \draw [->] (2, 0.1) -- (2, 0.11);
+ \draw [->] (-2, 0.1) -- (-2, 0.11);
+ \node [right] {$x_0$};
+ \end{tikzpicture}
+\end{center}
+We call this a ``rose with 2 petals''. This is a cell complex, with one $0$-cell and $|S|$ 1-cells. For each $s \in S$, we have one $1$-cell, $e_s$, and we fix a path $\gamma_s: [0, 1] \to e_s$ that goes around the 1-cell once. We will call the $0$-cells and $1$-cells \emph{vertices} and \emph{edges}, and call the whole thing a \emph{graph}.
+
+What's the universal cover of $X$? Since we are just lifting a $1$-complex, the result should be a $1$-complex, i.e.\ a graph. Moreover, this graph is connected and simply connected, i.e.\ it's a tree. We also know that every vertex in the universal cover is a copy of the vertex in our original graph. So it must have 4 edges attached to it. So it has to look something like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, -3/0/180, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \node [circ] {};
+ \node [anchor = south west] {$\tilde{x}_0$};
+ \end{tikzpicture}
+\end{center}
+In $X$, we know that at each vertex, there should be an edge labeled $a$ going in; an edge labeled $a$ going out; an edge labeled $b$ going in; an edge labeled $b$ going out. This should be the case in $\tilde{X}$ as well. So $\tilde{X}$ looks like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, -3/0/180, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \node [circ] {};
+ \node [anchor = south west] {$\tilde{x}_0$};
+ \draw [->] (1.7, 0) -- (1.71, 0);
+ \node [above] at (1.5, 0) {$a$};
+ \draw [->] (-1.3, 0) -- (-1.29, 0);
+ \node [above] at (-1.5, 0) {$a$};
+
+ \draw [->] (0, 1.7) -- (0, 1.71);
+ \node [left] at (0, 1.5) {$b$};
+ \draw [->] (0, -1.3) -- (0, -1.29);
+ \node [left] at (0, -1.5) {$b$};
+
+ \draw [->] (3.9, 0) -- (4, 0);
+ \node [above] at (3.75, 0) {$a$};
+
+ \draw [->] (3, 0.9) -- (3, 1);
+ \node [left] at (3, 0.75) {$b$};
+ \draw [->] (3, -0.5) -- (3, -0.4);
+ \node [left] at (3, -0.75) {$b$};
+ \end{tikzpicture}
+\end{center}
+The projection map is then obvious --- we send all the vertices in $\tilde{X}$ to $x_0 \in X$, and then the edges according to the labels they have, in a way that respects the direction of the arrow. It is easy to show this is really a covering map.
+
+We are now going to show that this tree ``is'' the free group. Notice that every word $w \in S^*$ denotes a unique ``edge path'' in $\tilde{X}$ starting at $\tilde{x}_0$, where an edge path is a sequence of oriented edges $\tilde{e}_1, \cdots, \tilde{e}_n$ such that the ``origin'' of $\tilde{e}_{i + 1}$ is equal to the ``terminus'' of $\tilde{e}_i$.
+
+For example, the following path corresponds to $w = abb^{-1}b^{-1}b a^{-1}b^{-1}$.
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, -3/0/180, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \node [circ] {};
+ \node [anchor = south west] {$\tilde{x}_0$};
+ \draw [mred, ->-=0.1, ->-=0.45, ->-=0.7, ->-=0.9] (0, 0.1) -- (2.9, 0.1) -- (2.9, 1.5) -- (3.1, 1.5) -- (3.1, -1.5) -- (2.9, -1.5) -- (2.9, -0.1) -- (0.1, -0.1) -- (0.1, -3);
+ \end{tikzpicture}
+\end{center}
+We can note a few things:
+\begin{enumerate}
+ \item $\tilde{X}$ is connected. So for all $\tilde{x} \in p^{-1}(x_0)$, there is an edge-path $\tilde{\gamma}: \tilde{x}_0 \leadsto \tilde{x}$.
+ \item If an edge-path $\tilde{\gamma}$ fails to be locally injective, we can simplify it. How can an edge-path fail to be locally injective? It is fine if we just walk along a path, since we are just tracing out a line. So it fails to be locally injective precisely when two consecutive edges in the path are the same edge with opposite orientations:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.16, ->-=0.9] (-1, 0) -- (-0.05, 0) node [pos=0.5, below] {$\tilde{e}_{i - 1}$} -- (-0.05, 1) node [pos=0.5, left] {$\tilde{e}_{i}$} -- (0.05, 1) -- (0.05, 0) node [pos=0.5, right] {$\tilde{e}_{i + 1}$} -- (1, 0) node [pos=0.5, below] {$\tilde{e}_{i+2}$};
+ \end{tikzpicture}
+ \end{center}
+ We can just remove the redundant lines and get
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.3, ->-=0.85] (-1, 0) -- (0, 0) node [pos=0.5, below] {$\tilde{e}_{i - 1}$} -- (1, 0) node [pos=0.5, below] {$\tilde{e}_{i + 2}$};
+ \end{tikzpicture}
+ \end{center}
+ This reminds us of two things --- homotopy of paths \emph{and} elementary reduction of words.
+ \item Each point $\tilde{x} \in p^{-1}(x_0)$ is joined to $\tilde{x}_0$ by a \emph{unique} locally injective edge-path.
+ \item For any $w \in S^*$, from (ii), we know that the corresponding edge-path $\tilde{\gamma}_w$ is locally injective if and only if $w$ is reduced.
+\end{enumerate}
+We can thus conclude that there are bijections
+\[
+ \begin{tikzcd}
+ F(S) & p^{-1}(x_0) \ar[l] \ar [r] & \pi_1(X, x_0)
+ \end{tikzcd}
+\]
+that send $\tilde{x}$ to the word $w \in F(S)$ such that $\tilde{\gamma}_w$ is a locally injective edge-path $\tilde{x}_0 \leadsto \tilde{x}$; and $\tilde{x}$ to $[\gamma] \in \pi_1(X, x_0)$ such that $\tilde{x}_0 \cdot [\gamma] = \tilde{x}$.
+
+So there is a bijection between $F(S)$ and $\pi_1(X, x_0)$. It is easy to see that the operations on $F(S)$ and $\pi_1(X, x_0)$ are the same, since they are just concatenating words or paths. So this bijection identifies the two group structures. So this induces an isomorphism $F(S)\cong \pi_1(X, x_0)$.
+
+\subsection{Free products with amalgamation}
+We managed to compute the fundamental group of the circle. But we want to find the fundamental group of more things. Recall that at the beginning, we defined cell complexes, and said these are the things we want to work with. Cell complexes are formed by gluing things together. So we want to know what happens when we glue things together.
+
+Suppose a space $X$ is constructed from two spaces $A, B$ by gluing (i.e.\ $X = A\cup B$). We would like to describe $\pi_1(X)$ in terms of $\pi_1(A)$ and $\pi_1(B)$. To understand this, we need to understand how to ``glue'' groups together.
+
+\begin{defi}[Free product]
+ Suppose we have groups $G_1 = \bra S_1 \mid R_1\ket, G_2 = \bra S_2 \mid R_2\ket$, where we assume $S_1 \cap S_2 = \emptyset$. The \emph{free product} $G_1 * G_2$ is defined to be
+ \[
+ G_1 * G_2 = \bra S_1 \cup S_2 \mid R_1 \cup R_2\ket.
+ \]
+\end{defi}
+This is not a very satisfactory definition: a group can have many different presentations, and it is not clear that this is well-defined. However, it is clear that this group exists. Note that there are natural homomorphisms $j_i: G_i \to G_1 * G_2$ that send generators to generators. We then have the following universal property of the free product:
+
+\begin{lemma}
+ $G_1 * G_2$ is the group such that for any group $K$ and homomorphisms $\phi_i: G_i \to K$, there exists a unique homomorphism $f: G_1 * G_2 \to K$ such that the following diagram commutes:
+ \[
+ \begin{tikzcd}[row sep=large]
+ & G_2 \ar[rdd, bend left, "\phi_2"] \ar[d, "j_2"]\\
+ G_1 \ar[rrd, bend right, "\phi_1"] \ar [r, "j_1"] & G_1 * G_2 \ar [rd, dashed, "f"]\\
+ && K
+ \end{tikzcd}
+ \]
+\end{lemma}
+
+\begin{proof}
+ It is immediate from the universal property of the definition of presentations.
+\end{proof}
+
+\begin{cor}
+ The free product is well-defined.
+\end{cor}
+
+\begin{proof}
+ The conclusion of the universal property can be seen to characterize $G_1 * G_2$ up to isomorphism.
+\end{proof}
+
+Again, we have a definition in terms of a concrete construction of the group, without making it clear this is well-defined; then we have a universal property that makes it clear this is well-defined, but not clear that the object actually exists. Combining the two would give everything we want.
+
+However, this free product is not exactly what we want, since there is little interaction between $G_1$ and $G_2$. In terms of gluing spaces, this corresponds to gluing $A$ and $B$ when $A \cap B$ is trivial (i.e.\ simply connected). What we really need is the free product with amalgamation, as you might have guessed from the title.
+
+\begin{defi}[Free product with amalgamation]
+ Suppose we have groups $G_1$, $G_2$ and $H$, with the following homomorphisms:
+ \[
+ \begin{tikzcd}[row sep=large]
+ H \ar[r, "i_2"] \ar[d, "i_1"] & G_2\\
+ G_1
+ \end{tikzcd}
+ \]
+ The \emph{free product with amalgamation} is defined to be
+ \[
+ G_1 \underset{H}{*} G_2 = G_1 * G_2/ \bra \bra \{(j_2 \circ i_2)(h)^{-1} (j_1 \circ i_1)(h): h\in H\}\ket \ket.
+ \]
+\end{defi}
+Here we are attempting to identify the images of $H$ in $G_1$ and $G_2$, using the maps $j_k$ and $i_k$ to push elements of $H$ into $G_1 * G_2$. We want to say ``for any $h$, $j_1 \circ i_1 (h) = j_2 \circ i_2(h)$'', i.e.\ $(j_2 \circ i_2)(h)^{-1} (j_1 \circ i_1)(h) = e$. So we quotient by the normal closure of these elements.
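+A small illustrative example (our own, not from the lectures): take $G_1 = \bra a\ket \cong \Z$, $G_2 = 1$ and $H = \bra t\ket \cong \Z$, with $i_1(t) = a^2$ and $i_2$ trivial. The relators are then $(j_2 \circ i_2)(t^k)^{-1}(j_1 \circ i_1)(t^k) = a^{2k}$, whose normal closure in $\bra a\ket$ is $\bra a^2\ket$. So

```latex
\Z \underset{\Z}{*} 1 \cong \bra a \mid a^2\ket \cong \Z/2.
```

+Geometrically, this corresponds to gluing a disc to a circle along a loop that goes around twice, which is one way to compute $\pi_1(\R P^2)$ once we have the Seifert-van Kampen theorem.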
+
+We have the universal property
+\begin{lemma}
+ $G_1 \underset{H}{*} G_2$ is the group such that for any group $K$ and homomorphisms $\phi_i: G_i \to K$ satisfying $\phi_1 \circ i_1 = \phi_2 \circ i_2$, there exists a unique homomorphism $f: G_1 \underset{H}{*} G_2 \to K$ such that the following diagram commutes:
+ \[
+ \begin{tikzcd}[row sep=large]
+ H \ar[r, "i_2"] \ar[d, "i_1"] & G_2 \ar[rdd, bend left, "\phi_2"] \ar[d, "j_2"]\\
+ G_1 \ar[rrd, bend right, "\phi_1"] \ar [r, "j_1"] & G_1 \underset{H}{*} G_2 \ar [rd, dashed, "f"]\\
+ && K
+ \end{tikzcd}
+ \]
+\end{lemma}
+This is the language we will need to compute fundamental groups. These definitions will (hopefully) become more concrete as we see more examples.
+\section{Seifert-van Kampen theorem}
+\subsection{Seifert-van Kampen theorem}
+The Seifert-van Kampen theorem is the theorem that tells us what happens when we glue spaces together.
+
+Here we let $X = A\cup B$, where $A, B, A\cap B$ are path-connected.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mred, fill opacity=0.3] (-1, 0) ellipse (1.5 and 0.6);
+ \draw [fill=mblue, fill opacity=0.3] (1, 0) ellipse (1.5 and 0.6);
+ \node [circ] {};
+ \node [above] {$x_0$};
+ \node at (-2.5, 0.6) {$A$};
+ \node at (2.5, 0.6) {$B$};
+ \end{tikzpicture}
+\end{center}
+We pick a basepoint $x_0 \in A\cap B$ for convenience. Since we like diagrams, we can write this as a commutative diagram:
+\[
+ \begin{tikzcd}[row sep=large]
+ A\cap B \ar [r, hook] \ar[d, hook] & B\ar[d, hook]\\
+ A \ar [r, hook] & X
+ \end{tikzcd}
+\]
+where all arrows are inclusion (i.e.\ injective) maps. We can consider what happens when we take the fundamental groups of each space. Then we have the induced homomorphisms
+\[
+ \begin{tikzcd}[row sep=large]
+ \pi_1(A\cap B, x_0) \ar [r] \ar[d] & \pi_1(B, x_0)\ar[d]\\
+ \pi_1(A, x_0) \ar [r] & \pi_1(X, x_0)
+ \end{tikzcd}
+\]
+We might guess that $\pi_1(X, x_0)$ is just the free product with amalgamation
+\[
+ \pi_1(X, x_0) = \pi_1(A, x_0) \underset{\pi_1(A\cap B, x_0)}{*} \pi_1(B, x_0).
+\]
+The Seifert-van Kampen theorem says that, under mild hypotheses, this guess is correct.
+
+\begin{thm}[Seifert-van Kampen theorem]
+ Let $A, B$ be open subspaces of $X$ such that $X = A\cup B$, and $A, B, A\cap B$ are path-connected. Then for any $x_0 \in A\cap B$, we have
+ \[
+ \pi_1(X, x_0) = \pi_1(A, x_0) \underset{\pi_1(A\cap B, x_0)}{*} \pi_1(B, x_0).
+ \]
+\end{thm}
+Note that by the universal property of the free product with amalgamation, we by definition know that there is a unique map $\pi_1(A, x_0) \underset{\pi_1(A\cap B, x_0)}{*} \pi_1(B, x_0) \to \pi_1(X, x_0)$. The theorem asserts that \emph{this} map is an isomorphism.
+
+The proof is omitted because time is short. % put proof.
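+As a first sanity check (with a standard technicality: the two petals themselves are not open, so we take $A$ and $B$ to be slightly larger open neighbourhoods that deformation retract to them), consider the rose with 2 petals $X = S^1 \vee S^1$ from the previous section. Then $A \simeq B \simeq S^1$ and $A\cap B$ is contractible, so

```latex
\pi_1(X, x_0) \cong \Z \underset{1}{*} \Z = \Z * \Z \cong F(\{a, b\}),
```

+agreeing with our earlier identification of $\pi_1$ of the rose with the free group $F(S)$.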
+
+\begin{eg}
+ Consider a higher-dimensional sphere $S^n = \{\mathbf{v} \in \R^{n + 1}: |\mathbf{v}| = 1\}$ for $n \geq 2$. We want to find $\pi_1(S^n)$.
+
+ The idea is to write $S^n$ as a union of two open sets. We let $n = \mathbf{e}_1 \in S^n \subseteq \R^{n + 1}$ be the North pole, and $s = -\mathbf{e}_1$ be the South pole. We let $A = S^n \setminus \{n\}$, and $B = S^n \setminus\{s\}$. By stereographic projection, we know that $A, B \cong \R^n$. The hard part is to understand the intersection.
+
+ To do so, we can draw a cylinder $S^{n - 1} \times (-1, 1)$, and project our $A\cap B$ onto the cylinder. We can similarly project the cylinder onto $A\cap B$. So $A\cap B\cong S^{n - 1} \times (-1, 1) \simeq S^{n - 1}$, since $(-1, 1)$ is contractible.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2];
+ \draw [dashed] (2, 0) arc (0:180:2 and 0.3);
+ \draw (-2, 0) arc (180:360:2 and 0.3);
+
+ \node [circ] at (0, 2) {};
+ \node [circ] at (0, -2) {};
+
+ \node [above] at (0, 2) {$n$};
+ \node [below] at (0, -2) {$s$};
+
+ \draw [mblue, shading=axis, left color=mblue, right color=mblue!60!white, fill opacity=0.2] (-2, -2) -- (-2, 2) arc (180:360:2 and 0.3) -- (2, -2) arc (360:180:2 and 0.3);
+ \draw [mblue] (2, 2) arc (0:180:2 and 0.3);
+ \draw [mblue, dashed] (2, -2) arc (0:180:2 and 0.3);
+ \end{tikzpicture}
+ \end{center}
+ We can now apply the Seifert-van Kampen theorem. Note that this works only if $S^{n - 1}$ is path-connected, i.e.\ $n \geq 2$. Then this tells us that
+ \[
+ \pi_1(S^n) \cong \pi_1(\R^n) \underset{\pi_1(S^{n - 1})}{*} \pi_1(\R^n) \cong 1 \underset{\pi_1(S^{n - 1})}{*} 1
+ \]
+ It is easy to see this is the trivial group. We can see this directly from the universal property of the amalgamated free product, or note that it is the quotient of $1 * 1$, which is $1$.
+
+ So for $n \geq 2$, $\pi_1(S^n) \cong 1$.
+\end{eg}
+We have found yet another simply connected space. However, this is unlike our previous examples. Our previous spaces were simply connected because they were contractible. However, we will later see that $S^n$ is not contractible. So this is genuinely a new, interesting example.
+
+Why did we go through all this work to prove that $\pi_1(S^n) \cong 1$? It feels like we can just prove this directly --- pick a point not in the image of the curve as the North pole, project stereographically to $\R^n$, and contract the curve there. However, the problem is that space-filling curves exist. We cannot guarantee that we can pick a point not on the curve! It is indeed possible to prove directly that given any curve on $S^n$ (with $n \geq 2$), we can deform it slightly so that it is no longer surjective. Then the above proof strategy works. However, using Seifert-van Kampen is much neater.
+
+\begin{eg}[$\RP^n$]
+ Recall that $\RP^n \cong S^n/\{\pm \id\}$, and the quotient map $S^n \to \RP^n$ is a covering map. Now that we have proved that $S^n$ is simply connected, we know that $S^n$ is a universal cover of $\RP^n$.
+
+ For any $x_0 \in \RP^n$, we have a bijection
+ \[
+ \pi_1(\RP^n, x_0) \leftrightarrow p^{-1}(x_0).
+ \]
+ Since $p^{-1}(x_0)$ has two elements by definition, we know that $|\pi_1(\RP^n, x_0)| = 2$. So $\pi_1(\RP^n, x_0) = \Z/2$.
+\end{eg}
+You will prove a generalization of this in example sheet 2.
+
+\begin{eg}[Wedge of two circles]
+ We are going to consider the operation of \emph{wedging}. Suppose we have two topological spaces, and we want to join them together. The natural way to join them is to take the disjoint union. What if we have based spaces? If we have $(X, x_0)$ and $(Y, y_0)$, we cannot just take the disjoint union, since we will lose our base point. What we do is take the \emph{wedge sum}, where we take the disjoint union and then identify the base points:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [above] at (-1.5, 1) {$X$};
+ \node [above] at (1.5, 1) {$Y$};
+ \draw [fill=mblue, fill opacity=0.2] (-1.5, 0) circle [radius=1];
+ \draw [fill=mred, fill opacity=0.2] (1.5, 0) circle [radius=1];
+
+ \node [circ] at (-0.5, 0) {};
+ \node [circ] at (0.5, 0) {};
+
+ \node [left] at (-0.5, 0) {$x_0$};
+ \node [right] at (0.5, 0) {$y_0$};
+
+ \draw [->] (0, -1.5) -- (0, -2.5);
+ \begin{scope}[shift={(0, -4)}]
+ \node [right] at (2, 0) {$X\wedge Y$};
+ \draw [fill=mblue, fill opacity=0.2] (-1, 0) circle [radius=1];
+ \draw [fill=mred, fill opacity=0.2] (1, 0) circle [radius=1];
+
+ \node [circ] {};
+ \node [left] {$x_0 \sim y_0$};
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ Suppose we take the wedge sum of two circles $S^1 \wedge S^1$. We would like to pick $A, B$ to be each of the circles, but we cannot, since $A$ and $B$ have to be open. So we take slightly more, and get the following:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw (-1, 0) circle [radius=1];
+ \draw (1, 0) circle [radius=1];
+ \node [circ] {};
+ \node [right] {$x_0$};
+ \draw [mred] (-1, 0) circle [radius=1.2];
+ \draw [mblue] (1, 0) circle [radius=1.2];
+ \node [below, mred] at (-1, -1.2) {$A$};
+ \node [below, mblue] at (1, -1.2) {$B$};
+ \end{tikzpicture}
+ \end{center}
+ Each of $A$ and $B$ now look like this:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw (0, 0) circle [radius=1];
+ \draw (1, 0) arc (180:150:1);
+ \draw (1, 0) arc (180:210:1);
+ \node [circ] at (1, 0) {};
+ \end{tikzpicture}
+ \end{center}
+ We see that both $A$ and $B$ deformation retract to the circle. So $\pi_1(A) \cong \pi_1(B) \cong \Z$, while $A\cap B$ is a cross, which deformation retracts to a point. So $\pi_1 (A\cap B) = 1$.
+
+ Hence by the Seifert-van Kampen theorem, we get
+ \[
+ \pi_1(S^1 \wedge S^1, x_0) = \pi_1(A, x_0)\underset{\pi_1(A\cap B, x_0)}{*} \pi_1(B, x_0) \cong \Z \underset{1}{*} \Z \cong \Z * \Z \cong F_2,
+ \]
+ where $F_2$ is just $F(S)$ for $|S| = 2$. We can see that $\Z*\Z \cong F_2$ by showing that they satisfy the same universal property.
+
+ Note that we had already figured this out when we studied the free group, where we realized $F_2$ is the fundamental group of this thing.
+
+ More generally, as long as $x_0, y_0$ in $X$ and $Y$ are ``reasonable'', $\pi_1(X\wedge Y) \cong \pi_1(X) * \pi_1(Y)$.
+\end{eg}
+
+Next, we will exhibit some nice examples of the covering spaces of $S^1 \wedge S^1$, i.e.\ the ``rose with 2 petals''.
+
+Recall that $\pi_1(S^1 \wedge S^1, x_0) \cong F_2 \cong \bra a, b\ket$.
+
+\begin{eg}
+ Consider the map $\phi: F_2 \to \Z/3$ which sends $a \mapsto 1, b \mapsto 1$. Note that $1$ is \emph{not} the identity, since this is an abelian group, and the identity is $0$. This exists since the universal property tells us we just have to say where the generators go, and the map exists (and is unique).
+
+ Now $\ker\phi$ is a subgroup of $F_2$. So there is a based covering space of $S^1 \wedge S^1$ corresponding to it, say, $\tilde{X}$. Let's work out what it is.
+
+ First, we want to know how many sheets it has, i.e.\ how many copies of $x_0$ we have. There are three, since we know that the number of sheets is the index of the subgroup, and the index of $\ker \phi$ is $|\Z/3| = 3$ by the first isomorphism theorem.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 1) {};
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1.87, 0) {};
+ \node [right] at (1.87, 0) {$\tilde{x}_0$};
+ \end{tikzpicture}
+ \end{center}
+ Let's try to lift the loop $a$ at $\tilde{x}_0$. Since $a \not\in \ker\phi = \pi_1(\tilde{X}, \tilde{x}_0)$, $a$ does not lift to a loop. So it goes to another vertex.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 1) {};
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1.87, 0) {};
+ \node [right] at (1.87, 0) {$\tilde{x}_0$};
+
+ \draw [->] (1.87, 0) -- (0, -1) node [pos=0.5, below] {$a$};
+ \end{tikzpicture}
+ \end{center}
+ Similarly, $a^2 \not\in \ker\phi = \pi_1(\tilde{X}, \tilde{x}_0)$. So we get the following
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 1) {};
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1.87, 0) {};
+ \node [right] at (1.87, 0) {$\tilde{x}_0$};
+
+ \draw [->] (1.87, 0) -- (0, -1) node [pos=0.5, below] {$a$};
+ \draw [->] (0, -1) -- (0, 1) node [pos=0.5, left] {$a$};
+ \end{tikzpicture}
+ \end{center}
+ Since $a^3 \in \ker \phi$, the third lift of $a$ returns to $\tilde{x}_0$, and we get a loop whose edges are labelled $a$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 1) {};
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1.87, 0) {};
+ \node [right] at (1.87, 0) {$\tilde{x}_0$};
+
+ \draw [->] (1.87, 0) -- (0, -1) node [pos=0.5, below] {$a$};
+ \draw [->] (0, -1) -- (0, 1) node [pos=0.5, left] {$a$};
+ \draw [->] (0, 1) -- (1.87, 0) node [pos=0.5, above] {$a$};
+ \end{tikzpicture}
+ \end{center}
+ Note that $ab^{-1} \in \ker \phi$. So $ab^{-1}$ gives a loop. So $b$ goes in the same direction as $a$:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 1) {};
+ \node [circ] at (0, -1) {};
+ \node [circ] at (1.87, 0) {};
+ \node [right] at (1.87, 0) {$\tilde{x}_0$};
+
+ \draw [->] (1.87, 0) -- (0, -1) node [pos=0.5, above] {$a$};
+ \draw [->] (0, -1) -- (0, 1) node [pos=0.5, right] {$a$};
+ \draw [->] (0, 1) -- (1.87, 0) node [pos=0.5, below] {$a$};
+
+ \draw (1.87, 0) to [bend left, ->] node [pos=0.5, below] {$b$} (0, -1);
+ \draw (0, -1) edge [bend left, ->] node [pos=0.5, left] {$b$} (0, 1);
+ \draw (0, 1) edge [bend left, ->] node [pos=0.5, above] {$b$} (1.87, 0);
+ \end{tikzpicture}
+ \end{center}
+ This is our covering space.
+\end{eg}
+This is a fun game to play at home:
+\begin{enumerate}
+ \item Pick a group $G$ (finite groups are recommended).
+ \item Then pick $\alpha, \beta\in G$ and let $\phi: F_2 \to G$ send $a \mapsto \alpha, b \mapsto \beta$.
+ \item Compute the covering space corresponding to $\phi$.
+\end{enumerate}
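As a sanity check on step 3, the game can be mechanized: when $\phi$ is surjective, the sheets of the covering space are the cosets of $\ker\phi$, which we may label by the elements of $G$, and the lift of the loop $a$ starting at the sheet $g$ ends at the sheet $g\alpha$. Below is a minimal Python sketch of this bookkeeping (the helper \texttt{covering\_graph} is our own, purely illustrative); running it with $G = \Z/3$ and $\alpha = \beta = 1$ reproduces the triangular cover from the example above.

```python
# Sketch: the covering space of the rose with two petals corresponding to
# phi: F_2 -> G, a |-> alpha, b |-> beta (phi assumed surjective, so the
# sheets are labelled by the elements of G).

def covering_graph(elements, op, alpha, beta):
    """For each sheet g, record where the lifts of the loops a and b end:
    the lift of a starting at sheet g ends at the sheet g * alpha."""
    return {g: {'a': op(g, alpha), 'b': op(g, beta)} for g in elements}

# The example above: G = Z/3 with alpha = beta = 1.
graph = covering_graph(range(3), lambda x, y: (x + y) % 3, 1, 1)
```

Following the $a$-edges three times returns to the start (since $a^3 \in \ker\phi$), and the $a$- and $b$-edges agree (since $ab^{-1} \in \ker\phi$), exactly as in the picture.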
+
+\subsection{The effect on \texorpdfstring{$\pi_1$}{pi1} of attaching cells}
+We have started the course by talking about cell complexes, but largely ignored them afterwards. Finally, we are getting back to them.
+
+The process of attaching cells is as follows: we start with a space $X$, pick a map $f: S^{n - 1} \to X$, and attach $D^n$ to $X$ by gluing along the image of $f$ to get $X\cup_f D^n$:
+\begin{center}
+ \begin{tikzpicture}[scale=2]
+ \draw ellipse (0.3 and 0.5);
+
+ \draw [thick, mred, decorate, decoration={snake, segment length=1cm}] circle [radius=0.2];
+
+ \draw [thick, mred] (2, 0.3) arc (90:270:0.1 and 0.3);
+ \draw [dashed, thick, mred] (2, -0.3) arc (270:450:0.1 and 0.3);
+
+ \fill [mblue, opacity=0.3] (2, 0.3) arc (90:270:0.1 and 0.3) arc (270:450:0.3);
+ \draw [thick, mblue] (2, -0.3) arc (270:450:0.3);
+
+ \draw [dashed] (2, 0.3) -- (0, 0.4);
+ \draw [dashed] (2, -0.3) -- (0.1, -0.23);
+
+ \node [circ] at (2.3, 0) {};
+ \node [right] at (2.3, 0) {$0$};
+
+ \node [circ] at (1.9, 0) {};
+ \node [left] at (1.9, 0) {$y_0$};
+
+ \node [circ] at (-0.19, 0) {};
+ \node [left] at (-0.19, 0) {$x_0$};
+ \end{tikzpicture}
+\end{center}
+Since we are attaching stuff, we can use the Seifert-van Kampen theorem to analyse this.
+\begin{thm}
+ If $n \geq 3$, then $\pi_1(X\cup_f D^n) \cong \pi_1(X)$. More precisely, the map $\pi_1(X, x_0) \to \pi_1(X\cup_f D^n, x_0)$ induced by inclusion is an isomorphism, where $x_0$ is a point on the image of $f$.
+\end{thm}
+This tells us that attaching a high-dimensional disk does not do anything to the fundamental group.
+
+\begin{proof}
+ Again, the difficulty of applying Seifert-van Kampen theorem is that we need to work with open sets.
+
+ Let $0 \in D^n$ be any point in the interior of $D^n$. We let $A = X\cup_f (D^n \setminus \{0\})$. Note that $D^n \setminus \{0\}$ deformation retracts to the boundary $S^{n - 1}$. So $A$ deformation retracts to $X$. Let $B = \mathring{D}^n$, the interior of $D^n$. Then
+ \[
+ A\cap B = \mathring{D}^n \setminus \{0\} \cong S^{n - 1}\times (-1, 1).
+ \]
+ We cannot use $y_0$ as our basepoint, since this point is not in $A\cap B$. Instead, pick an arbitrary $y_1 \in A\cap B$. Since $D^n$ is path connected, we have a path $\gamma: y_1 \leadsto y_0$, and we can use this to recover the fundamental groups based at $y_0$.
+
+ Now Seifert-van Kampen theorem says
+ \[
+ \pi_1(X\cup_f D^n, y_1) \cong \pi_1(A, y_1) \underset{\pi_1(A\cap B, y_1)}{*} \pi_1(B, y_1).
+ \]
+ Since $B$ is just a disk, and $A\cap B$ is simply connected ($n \geq 3$ implies $S^{n - 1}$ is simply connected), their fundamental groups are trivial. So we get
+ \[
+ \pi_1(X\cup_f D^n, y_1) \cong \pi_1(A, y_1).
+ \]
+ We can now use $\gamma$ to change base points from $y_1$ to $y_0$. So
+ \[
+ \pi_1(X\cup_f D^n, y_0) \cong \pi_1(A, y_0) \cong \pi_1(X, y_0).\qedhere
+ \]
+\end{proof}
+The more interesting case is when we have smaller dimensions.
+\begin{thm}
+ If $n = 2$, then the natural map $\pi_1(X, x_0) \to \pi_1(X\cup_f D^n, x_0)$ is \emph{surjective}, and the kernel is $\bra\bra [f] \ket \ket$. Note that this statement makes sense, since $S^{n - 1}$ is a circle, and $f: S^{n - 1} \to X$ is a loop in $X$.
+\end{thm}
+This is what we would expect, since if we attach a disk onto the loop given by $f$, this loop just dies.
+
+\begin{proof}
+ As before, we get
+ \[
+ \pi_1(X\cup_f D^n, y_1) \cong \pi_1(A, y_1) \underset{\pi_1(A\cap B, y_1)}{*} \pi_1(B, y_1).
+ \]
+ Again, $B$ is contractible, so $\pi_1(B, y_1) \cong 1$. However, this time $\pi_1(A\cap B, y_1) \cong \Z$, and its generator is (up to the change of base point along $\gamma$) the loop induced by $f$. It follows that
+ \[
+ \pi_1(A, y_1) \underset{\pi_1(A\cap B, y_1)}{*} 1 = (\pi_1(A, y_1) * 1)/\bra \bra \pi_1(A\cap B, y_1)\ket \ket \cong \pi_1(X, x_0)/\bra \bra [f]\ket\ket.\qedhere
+ \]
+\end{proof}
+
+In summary, we have
+\[
+ \pi_1(X \cup_f D^n) =
+ \begin{cases}
+ \pi_1(X) & n \geq 3\\
+ \pi_1(X)/\bra\bra f\ket\ket & n = 2
+ \end{cases}
+\]
+This is a useful result, since this is how we build up cell complexes. If we want to compute the fundamental groups, we can just work up to the two-cells, and know that the higher-dimensional cells do not have any effect. Moreover, whenever $X$ is a cell complex, we should be able to write down the presentation of $\pi_1(X)$.
+
+\begin{eg}
+ Let $X$ be the $2$-torus. Our favourite picture of the torus is possibly not the doughnut, but the following square with identifications:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.55, mred] (0, 0) -- (3, 0);
+ \draw [->-=0.55, mred] (0, 3) -- (3, 3);
+
+ \draw [->-=0.55, mblue] (0, 0) -- (0, 3);
+ \draw [->-=0.55, mblue] (3, 0) -- (3, 3);
+ \end{tikzpicture}
+ \end{center}
+ This is already a description of the torus as a cell complex!
+
+ We start with our zero complex $X^{(0)}$:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ We then add our $1$-cells to get $X^{(1)}$:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \draw [mred] (1, 0) ellipse (1 and 0.4);
+ \draw [mblue] (-0.5, 0) circle [radius=0.5];
+
+ \draw [->, mred] (1, -0.4) -- (1.01, -0.4);
+ \draw [->, mblue] (-1, 0) -- (-1, 0.01);
+ \node [right, mred] at (2, 0) {$a$};
+ \node [left, mblue] at (-1, 0) {$b$};
+ \end{tikzpicture}
+ \end{center}
+ We now glue our square to the cell complex to get $X = X^{(2)}$:
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw [->-=0.55, mred] (0, 0) -- (3, 0);
+ \draw [->-=0.55, mred] (0, 3) -- (3, 3);
+
+ \draw [->-=0.55, mblue] (0, 0) -- (0, 3);
+ \draw [->-=0.55, mblue] (3, 0) -- (3, 3);
+ \end{tikzpicture}
+ \end{center}
+ matching up the colors and directions of arrows.
+
+ So we have our torus as a cell complex. What is its fundamental group? There are many ways we can do this computation, but this time we want to do it as a cell complex.
+
+ We start with $X^{(0)}$. This is a single point. So its fundamental group is $\pi_1(X^{(0)}) = 1$.
+
+ When we add our two 1-cells, we get $\pi_1(X^{(1)}) = F_2 \cong \bra a, b\ket$.
+
+ Finally, to get $\pi_1(X)$, we have to quotient out by the boundary of our square, which is just $aba^{-1}b^{-1}$. So we have
+ \[
+ \pi_1(X^{(2)}) = F_2/\bra \bra aba^{-1}b^{-1}\ket\ket = \bra a, b \mid aba^{-1}b^{-1}\ket \cong \Z^2.
+ \]
+ We have the last isomorphism since we have two generators, and then we make them commute by quotienting out the commutator.
+\end{eg}
+This procedure can be reversed --- given a presentation of a group, we can just add the right edges and squares to produce a cell complex with that presentation.
+
+\begin{cor}
+ For any (finite) group presentation $\bra S\mid R\ket$, there exists a (finite) cell complex (of dimension 2) $X$ such that $\pi_1(X) \cong \bra S\mid R \ket$.
+\end{cor}
+There really isn't anything in this proof that requires finiteness, but finiteness makes us feel more comfortable.
+
+\begin{proof}
+ Let $S = \{a_1, \cdots, a_m\}$ and $R = \{r_1, \cdots, r_n\}$. We start with a single point, and get our $X^{(1)}$ by adding a loop about the point for each $a_i \in S$. We then take $2$-cells $e_j^2$ for $j = 1, \cdots, n$, and attach them to $X^{(1)}$ by maps $f_j: S^1 \to X^{(1)}$ given by based loops representing $r_j \in F(S)$.
+\end{proof}
+Since all groups have presentations, this tells us that all groups are fundamental groups of some spaces (at least those with finite presentations).
+
+\subsection{A refinement of the Seifert-van Kampen theorem}
+We are going to make a refinement of the theorem so that we don't have to worry about that openness problem. We first start with a definition.
+
+\begin{defi}[Neighbourhood deformation retract]
+ A subset $A\subseteq X$ is a \emph{neighbourhood deformation retract} if there is an open set $A\subseteq U\subseteq X$ such that $A$ is a strong deformation retract of $U$, i.e.\ there exists a retraction $r: U \to A$ and $r \simeq \id_U \rel A$.
+\end{defi}
+This is something that is true most of the time, in sufficiently sane spaces.
+
+\begin{eg}
+ If $Y$ is a subcomplex of a cell complex, then $Y$ is a neighbourhood deformation retract.
+\end{eg}
+
+\begin{thm}
+ Let $X$ be a space, $A, B\subseteq X$ closed subspaces. Suppose that $A$, $B$ and $A\cap B$ are path connected, and $A\cap B$ is a neighbourhood deformation retract of $A$ and of $B$. Then for any $x_0 \in A\cap B$,
+ \[
+ \pi_1(X, x_0) = \pi_1(A, x_0) \underset{\pi_1(A\cap B, x_0)}{*} \pi_1(B, x_0).
+ \]
+\end{thm}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mred, fill opacity=0.3] (-1, 0) ellipse (1.5 and 0.6);
+ \draw [fill=mblue, fill opacity=0.3] (1, 0) ellipse (1.5 and 0.6);
+ \node [circ] {};
+ \node [above] {$x_0$};
+ \node at (-2.5, 0.6) {$A$};
+ \node at (2.5, 0.6) {$B$};
+ \end{tikzpicture}
+\end{center}
+This is just like Seifert-van Kampen theorem, but usually easier to apply, since we no longer have to ``fatten up'' our $A$ and $B$ to make them open.
+
+\begin{proof}
+ Pick open neighbourhoods $A\cap B \subseteq U \subseteq A$ and $A \cap B \subseteq V \subseteq B$ that strongly deformation retract to $A\cap B$. Since $U$ deformation retracts to $A \cap B$, which is path connected, $U$ is itself path connected, and similarly for $V$.
+
+ Let $A' = A \cup V$ and $B' = B \cup U$. Since $A' = (X \setminus B) \cup V$, and $B' = (X \setminus A) \cup U$, it follows that $A'$ and $B'$ are open.
+
+ Since $U$ and $V$ deformation retract to $A \cap B$, we know $A' \simeq A$ and $B' \simeq B$. Also, $A' \cap B' = (A \cup V) \cap (B \cup U) = U \cup V \simeq A \cap B$. In particular, it is path connected. So by the Seifert-van Kampen theorem, we get
+ \[
+ \pi_1(X, x_0) = \pi_1(A', x_0) \underset{\pi_1(A' \cap B', x_0)}{*} \pi_1(B', x_0) = \pi_1(A, x_0) \underset{\pi_1(A\cap B, x_0)}{*} \pi_1(B, x_0).\qedhere
+ \]
+\end{proof}
+This is basically what we've done all the time when we enlarge our $A$ and $B$ to become open.
+
+\begin{eg}
+ Let $X = S^1 \wedge S^1$, the rose with two petals. Let $A, B\cong S^1$ be the circles.
+ \begin{center}
+ \begin{tikzpicture}[scale=0.75]
+ \draw [mred] (-1, 0) circle [radius=1];
+ \draw [mblue] (1, 0) circle [radius=1];
+ \node [circ] {};
+ \node [right] {$x_0$};
+ \node [below, mred] at (-1, -1) {$A$};
+ \node [below, mblue] at (1, -1) {$B$};
+ \end{tikzpicture}
+ \end{center}
+ Then since $\{x_0\} = A \cap B$ is a neighbourhood deformation retract of $A$ and $B$, we know that
+ \[
+ \pi_1 X \cong \pi_1 S^1 * \pi_1S^1.
+ \]
+\end{eg}
+\subsection{The fundamental group of all surfaces}
+We have found that the torus has fundamental group $\Z^2$, but we already knew this, since the torus is just $S^1 \times S^1$, and the fundamental group of a product is the product of the fundamental groups, as you have shown in the example sheet. So we want to look at something more interesting. We look at \emph{all} surfaces.
+
+We start by defining what a surface is. It is surprisingly difficult to get mathematicians to agree on how we can define a surface. Here we adopt the following definition:
+\begin{defi}[Surface]
+ A \emph{surface} is a Hausdorff topological space such that every point has a neighbourhood $U$ that is homeomorphic to $\R^2$.
+\end{defi}
+Some people like $\C$ more than $\R$, and sometimes they put $\C^2$ instead of $\R^2$ in the definition, which is confusing, since that would have two complex dimensions and hence \emph{four} real dimensions. Needless to say, the actual surfaces will also be different and have different homotopy groups. We will just focus on surfaces with two real dimensions.
+
+To find the fundamental group of all surfaces, we rely on the following theorem that tells us what surfaces there are.
+\begin{thm}[Classification of compact surfaces]
+ If $X$ is a compact surface, then $X$ is homeomorphic to a space in one of the following two families:
+ \begin{enumerate}
+ \item The \emph{orientable surface of genus $g$}, $\Sigma_g$. The first few members of this family look as follows (please excuse my drawing skills):
+ \begin{center} % improve
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw circle [radius=1];
+ \draw [dashed] (1, 0) arc (0:180:1 and 0.25);
+ \draw (-1, 0) arc (180:360:1 and 0.25);
+ \end{scope}
+
+ \begin{scope}[shift={(3, 0)}]
+ \draw (0,0) ellipse (1 and 0.56);
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0);
+ \end{scope}
+
+ \begin{scope}[shift={(6.5, 0)}, yscale=1.5]
+ \draw plot [smooth cycle, tension=0.8] coordinates {(-1.7, 0) (-1.4, -0.27) (-0.7, -0.35) (0, -0.23) (0.7, -0.35) (1.4, -0.27) (1.7, 0) (1.4, 0.27) (0.7, 0.35) (0, 0.23) (-0.7, 0.35) (-1.4, 0.27)};
+
+ \foreach \x in {0.8, -0.8}{
+ \begin{scope}[shift={(\x, -0.03)}]
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0);
+ \end{scope}
+ }
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ A more formal definition of this family is the following: we start with the 2-sphere, and remove $g$ disjoint open discs from it to get $S^2 \setminus \bigcup_{i = 1}^g D^2$. Then we take $g$ tori with an open disc removed, and glue them along the boundary circles.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2];
+ \draw [dashed] (2, 0) arc (0:180:2 and 0.5);
+ \draw (-2, 0) arc (180:360:2 and 0.5);
+
+ \foreach \y/\r/\a/\b in {0.8/-60/1.573/1.23, -0.8/60/1.23/1.573} {
+ \begin{scope}[shift={(0, \y)}]
+ \draw [rotate around={\r:(1.4, 0)}, mblue, fill=mblue, fill opacity=0.5] (1.4, 0) ellipse (0.346 and 0.11);
+ \draw [dashed] (\a, -0.3) -- (3.6, -0.3);
+ \draw [dashed] (\b, 0.3) -- (3.6, 0.3);
+ \begin{scope}[shift={(4.5, 0)}, xscale=0.75]
+ \draw plot [smooth, tension=0.8] coordinates {(-1.2, 0.3) (-1, 0.3) (0, 0.56) (1, 0.3) (1, -0.3) (0, -0.56) (-1, -0.3) (-1.2, -0.3)};
+ \draw [mblue, fill=mblue, fill opacity=0.5] (-1.2, 0) ellipse (0.1 and 0.3);
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0);
+ \end{scope}
+ \end{scope}
+ }
+ \end{tikzpicture}
+ \end{center}
+
+ \item The \emph{non-orientable surface of genus $n$}, $E_n$; for instance, $E_1 = \RP^2$ and $E_2 = K$, the Klein bottle. This has a similar construction as above: we start with the sphere $S^2$, make $n$ holes, and then glue M\"obius strips to them.
+ \end{enumerate}
+\end{thm}
+It would be nice to be able to compute fundamental groups of these guys. To do so, we need to write them as polygons with identification.
+
+\begin{eg}
+ To obtain a surface of genus two, written $\Sigma_2$, we start with what we had for a torus:
+ \begin{center}
+ \begin{tikzpicture}[rotate=22.5]
+ \coordinate (0) at (1, 0);
+ \coordinate (1) at (0, 0);
+ \coordinate (2) at (-0.707, -0.707);
+ \coordinate (3) at (-0.707, -1.707);
+ \coordinate (4) at (0, -2.414);
+
+ \draw [mred, ->-=0.6] (0) -- (1) node [above, pos=0.5] {$a$};
+ \draw [mblue, ->-=0.6] (1) -- (2) node [left, pos=0.5] {$b$};
+ \draw [mred, ->-=0.6] (3) -- (2) node [left, pos=0.5] {$a$};
+ \draw [mblue, ->-=0.6] (4) -- (3) node [below, pos=0.5] {$b$};
+ \end{tikzpicture}
+ \end{center}
+ If we just wanted a torus, we are done (after closing the loop), but now we want a surface with genus $2$, so we add another torus:
+ \begin{center}
+ \begin{tikzpicture}[rotate=22.5]
+ \coordinate (0) at (1, 0);
+ \coordinate (1) at (0, 0);
+ \coordinate (2) at (-0.707, -0.707);
+ \coordinate (3) at (-0.707, -1.707);
+ \coordinate (4) at (0, -2.414);
+ \coordinate (5) at (1, -2.414);
+ \coordinate (6) at (1.707, -1.707);
+ \coordinate (7) at (1.707, -0.707);
+
+ \draw [mred, ->-=0.6] (0) -- (1) node [above, pos=0.5] {$a_1$};
+ \draw [mblue, ->-=0.6] (1) -- (2) node [left, pos=0.5] {$b_1$};
+ \draw [mred, ->-=0.6] (3) -- (2) node [left, pos=0.5] {$a_1$};
+ \draw [mblue, ->-=0.6] (4) -- (3) node [below, pos=0.5] {$b_1$};
+
+ \draw [mgreen, ->-=0.6] (4) -- (5) node [below, pos=0.5] {$a_2$};
+ \draw [morange, ->-=0.6] (5) -- (6) node [right, pos=0.5] {$b_2$};
+ \draw [mgreen, ->-=0.6] (7) -- (6) node [right, pos=0.5] {$a_2$};
+ \draw [morange, ->-=0.6] (0) -- (7) node [above, pos=0.5] {$b_2$};
+
+ \draw [dashed] (0) -- (4);
+ \end{tikzpicture}
+ \end{center}
+ To visualize how this works, imagine cutting this apart along the dashed line. This would give two tori with a hole, where the boundary of the holes are just the dashed line. Then gluing back the dashed lines would give back our orientable surface with genus 2.
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x/\r in {2/1, -2/-1} {
+ \begin{scope}[shift={(\x, 0)}, xscale=\r]
+ \begin{scope}[xscale=0.75]
+ \draw plot [smooth, tension=0.8] coordinates {(-1.2, 0.3) (-1, 0.3) (0, 0.56) (1, 0.3) (1, -0.3) (0, -0.56) (-1, -0.3) (-1.2, -0.3)};
+ \draw [mblue, fill=mblue, fill opacity=0.5] (-1.2, 0) ellipse (0.1 and 0.3);
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0);
+ \end{scope}
+ \end{scope}
+ }
+ \draw [dashed] (-1.1, 0.3) -- (1.1, 0.3);
+ \draw [dashed] (-1.1, -0.3) -- (1.1, -0.3);
+ \end{tikzpicture}
+ \end{center}
+ In general, to produce $\Sigma_g$, we take a polygon with $4g$ sides and identify the sides in pairs according to the pattern above. Then, by our results on attaching $2$-cells, we get
+ \[
+ \pi_1 \Sigma_g = \bra a_1, b_1, \cdots, a_g, b_g\mid a_1 b_1 a_1^{-1} b_1^{-1} \cdots a_g b_g a_g^{-1}b_g^{-1}\ket.
+ \]
+ Why do we care? The classification theorem tells us that each surface is homeomorphic to \emph{some} of these orientable and non-orientable surfaces, but it doesn't tell us there is no overlap. It might be that $\Sigma_6\cong \Sigma_{241}$, via some weird homeomorphism that destroys some holes.
+
+ However, this result lets us know that all these orientable surfaces are genuinely different. While it is difficult to stare at this fundamental group and say that $\pi_1 \Sigma_g \not\cong \pi_1 \Sigma_{g'}$ for $g\not= g'$, we can perform a little trick. We can take the abelianization of the group $\pi_1 \Sigma_g$, where we further quotient by all commutators. Then the abelianized fundamental group of $\Sigma_g$ will simply be $\Z^{2g}$. These are clearly distinct for different values of $g$. So all these surfaces are distinct. Moreover, they are not even homotopy equivalent.
+\end{eg}
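The abelianization argument can be checked mechanically: under abelianization, a word in the free group is determined by its exponent sums, and every generator has net exponent zero in the surface relator, so the relator dies and we are left with $\Z^{2g}$. A small Python sketch (the helpers are our own, for illustration only):

```python
# Sketch: abelianizing a word in F(a_1, b_1, ..., a_g, b_g) records only
# the net exponent of each generator.

def surface_relator(g):
    """The word a_1 b_1 a_1^-1 b_1^-1 ... a_g b_g a_g^-1 b_g^-1,
    encoded as a list of (generator, exponent) pairs."""
    word = []
    for i in range(1, g + 1):
        a, b = f"a{i}", f"b{i}"
        word += [(a, 1), (b, 1), (a, -1), (b, -1)]
    return word

def exponent_sums(word):
    """Abelianize: sum the exponents of each generator."""
    sums = {}
    for gen, e in word:
        sums[gen] = sums.get(gen, 0) + e
    return sums

# For genus 6, every one of the 12 generators has net exponent 0,
# so the abelianization of pi_1(Sigma_6) is Z^12.
sums = exponent_sums(surface_relator(6))
```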
+
+The fundamental groups of the non-orientable surfaces are left as an exercise for the reader. % complete ?
+
+\section{Simplicial complexes}
+So far, we have taken a space $X$, and assigned some things to it. The first was easy --- $\pi_0(X)$. It was easy to calculate and understand. We then spent a lot of time talking about $\pi_1(X)$. What are they good for? A question we motivated ourselves with was to prove $\R^m \cong \R^n$ implies $n = m$. $\pi_0$ was pretty good for the case when $n = 1$. If $\R^m \cong \R$, then we would have $\R^m \setminus \{0\} \cong \R\setminus \{0\} \simeq S^0$. We know that $|\pi_0(S^0)| = 2$, while $|\pi_0(\R^m \setminus \{0\})| = 1$ for $m \not= 1$. This is just a fancy way of saying that $\R \setminus \{0\}$ is disconnected while $\R^m \setminus \{0\}$ is not for $m \not= 1$.
+
+We can just add $1$ to $n$, and add $1$ to our subscript. If $\R^m \cong \R^2$, then we have $\R^m \setminus \{0\} \cong \R^2 \setminus \{0\} \simeq S^1$. We know that $\pi_1(S^1) \cong \Z$, while $\pi_1(\R^m \setminus \{0\}) \cong \pi_1(S^{m - 1}) \cong 1$ unless $m = 2$.
+
+The obvious thing to do is to create some $\pi_n(X)$. But as you noticed, $\pi_1$ took us quite a long time to define, and was \emph{really} hard to compute. As we would expect, this only gets harder as we get to higher dimensions. This is indeed possible, but will only be done in Part III courses.
+
+The problem is that $\pi_n$ works with groups, and groups are hard. There are too many groups out there. We want to do some easier algebra, and a good choice is linear algebra. Linear algebra is easy. In the rest of the course, we will have things like $H_0(X)$ and $H_1(X)$, instead of $\pi_n$, which are more closely related to linear algebra.
+
+Another way to motivate the abandoning of groups is as follows: recall last time we saw that if $X$ is a finite cell complex, reasonably explicitly defined, then we can write down a presentation $\bra S \mid R\ket$ for $\pi_1 (X)$. This sounds easy, except that we don't understand presentations. In fact, there is a theorem that says there is no algorithm that decides if a particular group presentation presents the trivial group.
+
+So even though we can compute the fundamental group in terms of presentations, this is not necessarily helpful. On the other hand, linear algebra is easy. Computers can do it. So we will move on to working with linear algebra instead.
+
+This is where \emph{homology theory} comes in. It takes a while for us to define it, but after we finish developing the machinery, things are easy.
+\subsection{Simplicial complexes}
+There are many ways of formulating homology theory, and these are all equivalent at least for sufficiently sane spaces (such as cell complexes). In this course, we will be using \emph{simplicial homology}, which is relatively more intuitive and can be computed directly. The drawback is that we will have to restrict to a particular kind of space, known as \emph{simplicial complexes}. This is not a very serious restriction \emph{per se}, since many spaces like spheres are indeed simplicial complexes. However, the definition of simplicial homology is based on exactly how we view our space as a simplicial complex, and it will take us quite a lot of work to show that the simplicial homology is indeed a property of the space itself, and not how we represent it as a simplicial complex.
+
+We now start by defining simplicial complexes, and developing some general theory of simplicial complexes that will become useful later on.
+
+\begin{defi}[Affine independence]
+ A finite set of points $\{a_1, \cdots, a_n\} \subseteq \R^m$ is \emph{affinely independent} iff
+ \[
+ \sum_{i = 1}^n t_i a_i = 0 \text{ with } \sum_{i = 1}^n t_i = 0 \Leftrightarrow t_i = 0\text{ for all }i.
+ \]
+\end{defi}
+
+\begin{eg}
+ When $n = 3$, the following points are affinely independent:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{tikzpicture}
+ \end{center}
+ The following are not:
+ \begin{center}
+ \begin{tikzpicture}[rotate=30]
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+ \draw (0, 0) -- (2, 0) {};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+The proper way of understanding this definition is via the following lemma:
+\begin{lemma}
+ $a_0, \cdots, a_n \in \R^m$ are affinely independent if and only if $a_1 - a_0, \cdots, a_n - a_0$ are linearly independent.
+\end{lemma}
+Alternatively, $n + 1$ affinely independent points span an $n$-dimensional thing.
+
+\begin{proof}
+ Suppose $a_0, \cdots, a_n$ are affinely independent. Suppose
+ \[
+ \sum_{i = 1}^n \lambda_i (a_i - a_0) = 0.
+ \]
+ Then we can rewrite this as
+ \[
+ \left(-\sum_{i = 1}^n \lambda_i\right)a_0 + \lambda_1 a_1 + \cdots + \lambda_n a_n = 0.
+ \]
+ Now the sum of the coefficients is $0$. So affine independence implies that all coefficients are $0$. So $a_1 - a_0, \cdots, a_n - a_0$ are linearly independent.
+
+ On the other hand, suppose $a_1 - a_0, \cdots, a_n - a_0$ are linearly independent. Now suppose
+ \[
+ \sum_{i = 0}^n t_i a_i = 0,\quad \sum_{i = 0}^n t_i = 0.
+ \]
+ Then we can write
+ \[
+ t_0 = -\sum_{i = 1}^n t_i.
+ \]
+ Then the first equation reads
+ \[
+ 0 = \left(-\sum_{i = 1}^n t_i \right)a_0 + t_1 a_1 + \cdots + t_n a_n = \sum_{i = 1}^n t_i (a_i - a_0).
+ \]
+ So linear independence implies all $t_i = 0$.
+\end{proof}
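The lemma also turns affine independence into a mechanical rank computation: translate by $a_0$ and check that the differences are linearly independent. A quick sketch of this test (NumPy assumed; the function name is ours, not standard):

```python
import numpy as np

def affinely_independent(points):
    # By the lemma: a_1 - a_0, ..., a_n - a_0 must be linearly
    # independent, i.e. the matrix of differences has full rank.
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[0]
    return np.linalg.matrix_rank(diffs) == len(pts) - 1

# The two examples above: a genuine triangle, and three collinear points.
print(affinely_independent([(0, 0), (2, 0), (1, 1.732)]))  # True
print(affinely_independent([(0, 0), (1, 0), (2, 0)]))      # False
```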
+
+The relevance is that these can be used to define simplices (which are simple, as opposed to complexes).
+\begin{defi}[$n$-simplex]
+ An \emph{$n$-simplex} is the convex hull of $(n + 1)$ affinely independent points $a_0, \cdots, a_n \in \R^m$, i.e.\ the set
+ \[
+ \sigma = \bra a_0, \cdots, a_n\ket = \left\{\sum_{i = 0}^n t_i a_i : \sum_{i = 0}^n t_i = 1,t_i \geq 0\right\}.
+ \]
+ The points $a_0, \cdots, a_n$ are the \emph{vertices}, and are said to \emph{span} $\sigma$. The $(n + 1)$-tuple $(t_0, \cdots, t_n)$ is called the \emph{barycentric coordinates} for the point $\sum t_i a_i$.
+\end{defi}
+
+\begin{eg}
+ When $n = 0$, then our $0$-simplex is just a point:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ When $n = 1$, then we get a line:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \draw (0, 0) -- (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ When $n = 2$, we get a triangle:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{tikzpicture}
+ \end{center}
+ When $n = 3$, we get a tetrahedron:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, -0.5) {};
+ \node [circ] at (1, 1.3) {};
+ \draw [dashed] (0, 0) -- (2, 0);
+ \fill [mblue, opacity=0.5] (2, 0) -- (1, -0.5) -- (1, 1.3) -- cycle;
+ \fill [mblue!60!black, opacity=0.5] (0, 0) -- (1, -0.5) -- (1, 1.3) -- cycle;
+ \draw (2, 0) -- (1, -0.5) -- (0, 0);
+ \draw (0, 0) -- (1, 1.3) -- (2, 0);
+ \draw (1, 1.3) -- (1, -0.5);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+The key motivation of this is that simplices are determined by their vertices. Unlike arbitrary subspaces of $\R^n$, they can be specified by a finite amount of data. We can also easily extract the faces of the simplices.
+
+\begin{defi}[Face, boundary and interior]
+ A \emph{face} of a simplex is a subset (or subsimplex) spanned by a subset of the vertices. The \emph{boundary} is the union of the proper faces, and the \emph{interior} is the complement of the boundary.
+
+ The boundary of $\sigma$ is usually denoted by $\partial \sigma$, while the interior is denoted by $\mathring{\sigma}$, and we write $\tau \leq \sigma$ when $\tau$ is a face of $\sigma$.
+\end{defi}
+In particular, the interior of a vertex is the vertex itself. Note that these notions of interior and boundary are distinct from the topological notions of interior and boundary.
+
+\begin{eg}
+ The \emph{standard $n$-simplex} is spanned by the basis vectors $\{\mathbf{e}_0, \cdots, \mathbf{e}_n\}$ in $\R^{n + 1}$. For example, when $n = 2$, we get the following:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (3, 0);
+ \draw [->] (0, 0) -- (0, 3);
+ \draw [->] (0, 0) -- (-1.5, -1.5);
+
+ \draw [fill=mblue, fill opacity=0.5] (0, 2) -- (2, 0) -- (-.866, -0.866) -- cycle;
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+We will now glue simplices together to build \emph{complexes}, or \emph{simplicial complexes}.
+
+\begin{defi}
+ A \emph{(geometric) simplicial complex} is a finite set $K$ of simplices in $\R^n$ such that
+ \begin{enumerate}
+ \item If $\sigma \in K$ and $\tau$ is a face of $\sigma$, then $\tau \in K$.
+ \item If $\sigma, \tau \in K$, then $\sigma \cap \tau$ is either empty or a face of both $\sigma$ and $\tau$.
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[Vertices]
+ The \emph{vertices} of $K$ are the zero simplices of $K$, denoted $V_K$.
+\end{defi}
+
+\begin{eg}
+ This is a simplicial complex:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (3, -1) {};
+ \node [circ] at (1, -0.5) {};
+ \node [circ] at (1, 1.3) {};
+ \node [circ] at (-1, -0.5) {};
+ \draw [dashed] (0, 0) -- (2, 0);
+ \fill [mblue, opacity=0.5] (2, 0) -- (1, -0.5) -- (1, 1.3) -- cycle;
+ \fill [mblue!60!black, opacity=0.5] (0, 0) -- (1, -0.5) -- (1, 1.3) -- cycle;
+ \draw (2, 0) -- (1, -0.5) -- (0, 0);
+ \draw (0, 0) -- (1, 1.3) -- (2, 0);
+ \draw (1, 1.3) -- (1, -0.5);
+ \draw [fill=mblue!70!white, fill opacity=0.5] (2, 0) -- (3, -1) -- (1, -0.5);
+
+ \draw (0, 0) -- (-1, -0.5);
+ \draw [fill=mblue, fill opacity=0.5] (-3, -0.5) -- (-1, -0.5) -- (-2, -1.2) -- cycle;
+ \node [circ] at (-3, -0.5) {};
+ \node [circ] at (-1, -0.5) {};
+ \node [circ] at (-2, -1.2) {};
+
+ \draw (-1.5, -0.2) node [circ] {} -- (-1.2, 1) node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+ These are not:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{scope}
+ \begin{scope}[shift={(1, 1)}, rotate=45]
+ \draw (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{scope}
+
+ \begin{scope}[shift={(6, 0)}, rotate=90]
+ \begin{scope}
+ \draw (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{scope}
+ \begin{scope}[shift={(1, 0)}, rotate=180]
+ \draw (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{scope}
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+Technically, a simplicial complex is defined to be a set of simplices, which are just collections of points. It is not a subspace of $\R^n$. Hence we have the following definition:
+
+\begin{defi}[Polyhedron]
+ The \emph{polyhedron} defined by $K$ is the union of the simplices in $K$, and denoted by $|K|$.
+\end{defi}
+We make this distinction because distinct simplicial complexes may have the same polyhedron, such as the following:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 0);
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+ \draw (4, 0) -- (6, 0);
+ \node [circ] at (4, 0) {};
+ \node [circ] at (6, 0) {};
+ \end{tikzpicture}
+\end{center}
+Just as in cell complexes, we can define things like dimensions.
+\begin{defi}[Dimension and skeleton]
+ The \emph{dimension} of $K$ is the highest dimension of a simplex of $K$. The \emph{$d$-skeleton} $K^{(d)}$ of $K$ is the union of the $n$-simplices in $K$ for $n \leq d$.
+\end{defi}
+Note that since these are finite and live inside $\R^n$, we know that $|K|$ is always compact and Hausdorff.
+
+Usually, when we are given a space, say $S^n$, it is not defined to be a simplicial complex. We can ``make'' it a simplicial complex by a \emph{triangulation}.
+\begin{defi}[Triangulation]
+ A \emph{triangulation} of a space $X$ is a homeomorphism $h: |K| \to X$, where $K$ is some simplicial complex.
+\end{defi}
+
+\begin{eg}
+ Let $\sigma$ be the standard $n$-simplex. The boundary $\partial \sigma$ is homeomorphic to $S^{n - 1}$ (e.g.\ the boundary of a (solid) triangle is its perimeter, which is homeomorphic to a circle). This is called the \emph{simplicial $(n - 1)$-sphere}.
+\end{eg}
+
+We can also triangulate our $S^n$ in a different way:
+\begin{eg}
+ In $\R^{n + 1}$, consider the simplices $ \bra \pm \mathbf{e}_0, \cdots, \pm \mathbf{e}_n\ket$ for each possible combination of signs. So we have $2^{n + 1}$ simplices in total. Then these simplices, together with all their faces, define a simplicial complex $K$, and
+ \[
+ |K| \cong S^n.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [gray, ->] (-2, 0) -- (2, 0);
+ \draw [gray, ->] (0, -2) -- (0, 2);
+ \draw [gray, ->] (0.894, 1.788) -- (-0.894, -1.788);
+
+ \node [circ] (x1) at (1, 0) {};
+ \node [circ] (x2) at (-1, 0) {};
+ \node [circ] (z1) at (0, 1) {};
+ \node [circ] (z2) at (0, -1) {};
+ \node [circ] (y1) at (-0.3, -0.6) {};
+ \node [circ] (y2) at (0.3, 0.6) {};
+
+ \draw (x1) -- (y1) -- (z1) -- (x1) -- (z2) -- (y1) -- (x2) -- (z1);
+ \draw (z2) -- (x2);
+ \draw [dashed] (x2) -- (y2) -- (x1);
+ \draw [dashed] (z2) -- (y2) -- (z1);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+The nice thing about this triangulation is that the simplicial complex is invariant under the antipodal map. So not only can we think of this as a triangulation of the sphere, but, after quotienting by the antipodal map, as a triangulation of $\RP^n$ as well.
+
+As always, we don't just look at objects themselves, but also maps between them.
+
+\begin{defi}[Simplicial map]
+ A \emph{simplicial map} $f: K \to L$ is a function $f: V_K \to V_L$ such that if $\bra a_0, \cdots, a_n\ket$ is a simplex in $K$, then $\{f(a_0), \cdots, f(a_n)\}$ spans a simplex of $L$.
+\end{defi}
+The nice thing about simplicial maps is that we only have to specify where the vertices go, and there are only finitely many vertices. So we can completely specify a simplicial map by writing down a finite amount of information.
+
+It is important to note that it is the \emph{set} $\{f(a_0), \cdots, f(a_n)\}$ that has to span a simplex of $L$. In particular, the $f(a_i)$ are allowed to have repeats.
+
+\begin{eg}
+ Suppose we have the standard $2$-simplex $K$ as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) node [opacity=1, left] {$a_0$} -- (2, 0) node [opacity=1, right] {$a_1$} -- (1, 1.732) node [opacity=1, above] {$a_2$} -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{tikzpicture}
+ \end{center}
+ The following does \emph{not} define a simplicial map because $\bra a_1, a_2\ket$ is a simplex in $K$, but $\{f(a_1), f(a_2)\}$ does not span a simplex:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [above] at (0, 0) {$f(a_0), f(a_1)$};
+ \node [above] at (2, 0) {$f(a_2)$};
+ \end{tikzpicture}
+ \end{center}
+ On the other hand, the following \emph{is} a simplicial map, because now $\{f(a_1), f(a_2)\}$ spans a simplex. Note that $\{f(a_0), f(a_1), f(a_2)\}$ also spans a simplex, namely a $1$-simplex, because as a \emph{set} it has only two elements.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 0);
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [above] at (0, 0) {$f(a_0), f(a_1)$};
+ \node [above] at (2, 0) {$f(a_2)$};
+ \end{tikzpicture}
+ \end{center}
+ Finally, we can also do the following map:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, -1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, -1.732) {};
+ \node [above] at (0, 0) {$f(a_0), f(a_1)$};
+ \node [above] at (2, 0) {$f(a_2)$};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
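Since a simplicial map is determined by finitely many vertex assignments, the defining condition can be checked by a finite computation. Below is a minimal sketch (the representation and names are ours): a complex is stored as the set of its simplices, each a frozenset of vertex labels, and the first two maps above differ only in whether the target contains the edge $\{p, q\}$.

```python
from itertools import combinations

def is_simplicial_map(K, L, f):
    # f (a dict on vertex labels) is simplicial iff the image of every
    # simplex of K spans a simplex of L -- repeats collapse in the set.
    return all(frozenset(f[v] for v in s) in L for s in K)

# K: the solid 2-simplex on a0, a1, a2 (all non-empty subsets of vertices).
verts = ["a0", "a1", "a2"]
K = {frozenset(c) for n in (1, 2, 3) for c in combinations(verts, n)}

L1 = {frozenset({"p"}), frozenset({"q"})}   # two isolated points
L2 = L1 | {frozenset({"p", "q"})}           # ... joined by an edge

f = {"a0": "p", "a1": "p", "a2": "q"}
print(is_simplicial_map(K, L1, f))  # False: {f(a1), f(a2)} spans no simplex
print(is_simplicial_map(K, L2, f))  # True
```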
+
+The following lemma is obvious, but we will need it later on.
+\begin{lemma}
+ If $K$ is a simplicial complex, then every point $x \in |K|$ lies in the interior of a \emph{unique} simplex.
+\end{lemma}
+
+As we said, simplicial maps are nice, but they are not exactly what we want. We want to have maps between \emph{spaces}.
+\begin{lemma}
+ A simplicial map $f: K \to L$ induces a continuous map $|f|: |K|\to |L|$, and furthermore, we have
+ \[
+ |f\circ g| = |f|\circ |g|.
+ \]
+\end{lemma}
+There is an obvious way to define this map. We know how to map vertices, and then just extend everything linearly.
+\begin{proof}
+ For any point in a simplex $\sigma = \bra a_0,\cdots, a_n\ket$, we define
+ \[
+ |f|\left(\sum_{i = 0}^n t_i a_i\right) = \sum_{i = 0}^n t_i f(a_i).
+ \]
+ The result is in $L$ because $\{f(a_i)\}$ spans a simplex. It is not difficult to see this is well-defined when the point lies on the boundary of a simplex. This is clearly continuous on $\sigma$, and is hence continuous on $|K|$ by the gluing lemma.
+
+ The final property is obviously true by definition.
+\end{proof}
+
+\subsection{Simplicial approximation}
+This is all very well, but we are really interested in \emph{continuous maps}. So given a continuous map $f$, we would like to find a related simplicial map $g$. In particular, we want to find a simplicial map $g$ that ``approximates'' $f$ in some sense. The definition we will write down is slightly awkward, but it turns out this is the most useful definition.
+
+\begin{defi}[Open star and link]
+ Let $x \in |K|$. The \emph{open star} of $x$ is the union of all the interiors of the simplices that contain $x$, i.e.
+ \[
+ \St_K(x) = \bigcup_{x \in \sigma \in K} \mathring{\sigma}.
+ \]
+ The \emph{link} of $x$, written $\Lk_K(x)$, is the union of all those simplices that do not contain $x$, but are faces of a simplex that does contain $x$.
+\end{defi}
+
+\begin{defi}[Simplicial approximation]
+ Let $f: |K| \to |L|$ be a continuous map between the polyhedra. A function $g: V_K \to V_L$ is a \emph{simplicial approximation} to $f$ if for all $v \in V_K$,
+ \[
+ f(\St_K(v)) \subseteq \St_L(g(v)).\tag{$*$}
+ \]
+\end{defi}
+
+The following lemma tells us why this is a good definition:
+\begin{lemma}
+ If $f: |K| \to |L|$ is a map between polyhedra, and $g: V_K \to V_L$ is a simplicial approximation to $f$, then $g$ is a simplicial map, and $|g| \simeq f$. Furthermore, if $f$ is already simplicial on some subcomplex $M\subseteq K$, then we get $g|_M = f|_M$, and the homotopy can be made $\rel M$.
+\end{lemma}
+
+\begin{proof}
+ First we want to check $g$ is really a simplicial map if it satisfies $(*)$. Let $\sigma = \bra a_0,\cdots, a_n\ket$ be a simplex in $K$. We want to show that $\{g(a_0), \cdots, g(a_n)\}$ spans a simplex in $L$.
+
+ Pick an arbitrary $x \in \mathring{\sigma}$. Since $\sigma$ contains each $a_i$, we know that $x \in \St_K(a_i)$ for all $i$. Hence we know that
+ \[
+ f(x) \in \bigcap_{i = 0}^n f(\St_K(a_i)) \subseteq \bigcap_{i = 0}^n \St_L(g(a_i)).
+ \]
+ By the earlier lemma, $f(x)$ lies in the interior of a unique simplex, say $\tau$. Since $f(x) \in \St_L(g(a_i))$ for each $i$, this $\tau$ must contain all the $g(a_i)$. Since each $g(a_i)$ is a vertex in $L$, each $g(a_i)$ must be a vertex of $\tau$. So they span a face of $\tau$, as required.
+
+ We now want to prove that $|g| \simeq f$. We let $H: |K| \times I \to |L| \subseteq \R^m$ be defined by
+ \[
+ (x, t) \mapsto t |g|(x) + (1 - t)f(x).
+ \]
+ This is clearly continuous. So we need to check that $\im H \subseteq |L|$. But we know that both $|g|(x)$ and $f(x)$ live in $\tau$ and $\tau$ is convex. It thus follows that $H(\{x\} \times I) \subseteq \tau \subseteq |L|$.
+
+ To prove the last part, it suffices to show that every simplicial approximation to a simplicial map must be the map itself. Then the homotopy is $\rel M$ by the construction above. This is easily seen to be true --- if $g$ is a simplicial approximation to $f$, then $f(v) \in f(\St_K(v)) \subseteq \St_L(g(v))$. Since $f(v)$ is a vertex and $g(v)$ is the only vertex in $\St_L(g(v))$, we must have $f(v) = g(v)$. So done.
+\end{proof}
+What's the best thing we might hope for at this point? It would be great if every map were homotopic to a simplicial map. Is this possible? Let's take a nice example. Let's consider the following $K$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{tikzpicture}
+\end{center}
+How many homotopy classes of continuous maps $|K| \to |K|$ are there? Countably many, one for each winding number. However, there can only be at most $3^3 = 27$ simplicial maps $K \to K$. The problem is that we don't have enough vertices to realize all those interesting maps. The idea is to refine our simplicial complexes. Suppose we have the following simplex:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \end{tikzpicture}
+\end{center}
+What we do is to add a point in the center of each simplex, and join them up:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0.5, 0.866) {};
+ \node [circ] at (1.5, 0.866) {};
+ \node [circ] at (1, 0.5773) {};
+ \draw (0, 0) -- (1, 0.5773) -- (2, 0);
+ \draw (1, 0) -- (1, 1.732);
+ \draw (0.5, 0.866) -- (1, 0.5773) -- (1.5, 0.866);
+ \end{tikzpicture}
+\end{center}
+This is known as the barycentric subdivision. After we subdivide it once, we can realize more homotopy classes of maps. We will show that for any map, as long as we are willing to barycentrically subdivide the simplex many times, we can find a simplicial approximation to it.
+
+\begin{defi}[Barycenter]
+ The \emph{barycenter} of $\sigma = \bra a_0, \cdots, a_n\ket$ is
+ \[
+ \hat{\sigma} = \sum_{i = 0}^n \frac{1}{n + 1} a_i.
+ \]
+\end{defi}
+
+\begin{defi}[Barycentric subdivision]
+ The \emph{(first) barycentric subdivision} $K'$ of $K$ is the simplicial complex:
+ \[
+ K' = \{\bra \hat{\sigma}_0, \cdots, \hat{\sigma}_n\ket: \sigma_i \in K\text{ and }\sigma_0 < \sigma_1 < \cdots < \sigma_n\}.
+ \]
+ If you stare at this long enough, you will realize this is exactly what we have drawn above.
+
+ The $r$th barycentric subdivision $K^{(r)}$ is defined inductively as the barycentric subdivision of the $(r - 1)$th barycentric subdivision, i.e.
+ \[
+ K^{(r)} = (K^{(r - 1)})'.
+ \]
+\end{defi}
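The set of chains $\sigma_0 < \sigma_1 < \cdots < \sigma_n$ can be enumerated directly. Here is a sketch of that enumeration (the representation is ours): a simplex is a frozenset of vertex labels, the barycenter $\hat{\sigma}$ is labelled by $\sigma$ itself, and `<` is proper inclusion of frozensets.

```python
from itertools import combinations

def barycentric_subdivision(K):
    # Simplices of K' = chains of simplices of K under proper inclusion;
    # the chain (s_0 < ... < s_n) stands for the simplex on their barycenters.
    simplices = sorted(K, key=len)  # shorter simplices first
    return {chain for r in range(1, max(map(len, K)) + 1)
                  for chain in combinations(simplices, r)
                  if all(a < b for a, b in zip(chain, chain[1:]))}

# K: the solid triangle, i.e. all non-empty subsets of {0, 1, 2}.
K = {frozenset(c) for n in (1, 2, 3) for c in combinations(range(3), n)}
Kp = barycentric_subdivision(K)
sizes = [sum(1 for s in Kp if len(s) == d) for d in (1, 2, 3)]
print(sizes)  # [7, 12, 6]: 7 vertices, 12 edges, 6 triangles, as drawn above
```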
+
+\begin{prop}
+ $|K| = |K'|$ and $K'$ really is a simplicial complex.
+\end{prop}
+
+\begin{proof}
+ Too boring to be included in lectures.
+\end{proof}
+
+We now have a slight problem. Even though $|K'|$ and $|K|$ are equal, the identity map from $|K'|$ to $|K|$ is not a simplicial map.
+
+To solve this problem, we can choose any function $K \to V_K$ by $\sigma \mapsto v_\sigma$ with $v_\sigma \in \sigma$, i.e.\ a function that sends any simplex to any of its vertices. Then we can define $g: K' \to K$ by sending $\hat{\sigma} \mapsto v_\sigma$. Then this is a simplicial map, and indeed a simplicial approximation to the identity map $|K'| \to |K|$. We will revisit this idea later when we discuss homotopy invariance.
+
+The key theorem is that as long as we are willing to perform barycentric subdivisions, then we can always find a simplicial approximation.
+
+\begin{thm}[Simplicial approximation theorem]
+ Let $K$ and $L$ be simplicial complexes, and $f: |K| \to |L|$ a continuous map. Then there exists an $r$ and a simplicial map $g: K^{(r)} \to L$ such that $g$ is a simplicial approximation of $f$. Furthermore, if $f$ is already simplicial on $M\subseteq K$, then we can choose $g$ such that $|g||_M = f|_M$.
+\end{thm}
+
+The first thing we have to figure out is how far we are going to subdivide. To do this, we want to quantify how ``fine'' our subdivisions are.
+\begin{defi}[Mesh]
+ Let $K$ be a simplicial complex. The \emph{mesh} of $K$ is
+ \[
+ \mu(K) = \max\{\|v_0 - v_1\| : \bra v_0, v_1\ket \in K\}.
+ \]
+\end{defi}
+
+We have the following lemma that tells us how large our mesh is:
+\begin{lemma}
+ Let $\dim K = n$, then
+ \[
+ \mu(K^{(r)}) \leq \left(\frac{n}{n + 1}\right)^r \mu(K).
+ \]
+\end{lemma} % insert proof
+The key point is that as $r \to \infty$, the mesh goes to zero. So indeed we can make our barycentric subdivisions finer and finer. The proof is purely technical and omitted.
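We can at least check the bound numerically for one subdivision of a triangle (the coordinates and helper names here are ours): for $n = 2$ and an equilateral triangle of side $1$, the lemma promises $\mu(K') \leq 2/3$.

```python
from itertools import combinations
from math import dist

# The solid 2-simplex on an equilateral triangle of side 1.
coords = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 3 ** 0.5 / 2)}
K = [frozenset(c) for n in (1, 2, 3) for c in combinations(range(3), n)]

def barycenter(s):
    pts = [coords[v] for v in s]
    return tuple(sum(x) / len(pts) for x in zip(*pts))

# Edges of K' join barycenters of nested simplices; the mesh is the longest.
mesh = max(dist(barycenter(s), barycenter(t))
           for s, t in combinations(K, 2) if s < t or t < s)
print(mesh)  # 0.577... = 1/sqrt(3), comfortably below the bound 2/3
```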
+
+Now we can prove the simplicial approximation theorem.
+
+\begin{proof}[Proof of simplicial approximation theorem]
+ Suppose we are given the map $f: |K| \to |L|$. We have a natural cover of $|L|$, namely the open stars of all vertices. We can use $f$ to pull these back to $|K|$ to obtain a cover of $|K|$:
+ \[
+ \{f^{-1}(\St_L(w)) : w \in V_L\}.
+ \]
+ The idea is to barycentrically subdivide our $K$ such that each open star of $K$ is contained in one of these things.
+
+ By the Lebesgue number lemma, there exists some $\delta$, the \emph{Lebesgue number} of the cover, such that for each $x \in |K|$, $B_\delta(x)$ is contained in \emph{some} element of the cover. By the previous lemma, there is an $r$ such that $\mu(K^{(r)}) < \delta$.
+
+ Now the mesh $\mu(K^{(r)})$ is the \emph{largest} distance between two adjacent vertices, so every simplex of $K^{(r)}$ has diameter at most $\mu(K^{(r)}) < \delta$. Hence it follows that $\St_{K^{(r)}}(x) \subseteq B_\delta(x)$ for all vertices $x \in V_{K^{(r)}}$. Therefore, for all $x \in V_{K^{(r)}}$, there is some $w \in V_L$ such that
+ \[
+ \St_{K^{(r)}}(x) \subseteq B_\delta(x) \subseteq f^{-1}(\St_L(w)).
+ \]
+ Therefore defining $g(x) = w$, we get
+ \[
+ f(\St_{K^{(r)}}(x)) \subseteq \St_L(g(x)).
+ \]
+ So $g$ is a simplicial approximation of $f$.
+
+ The last part follows from the observation that if $f$ is already simplicial on $M$, then it maps vertices of $M$ to vertices of $L$. So we can pick $g(v) = f(v)$ for all $v \in V_M$.
+\end{proof}
+
+\section{Simplicial homology}
+\subsection{Simplicial homology}
+For now, we will forget about simplicial approximations and related fluff, but just note that it is fine to assume everything can be considered to be simplicial. Instead, we are going to use this framework to move on and define some new invariants of simplicial complexes $K$, known as $H_n(K)$. These are analogous to $\pi_0, \pi_1, \cdots$, but only use linear algebra in the definitions, and are thus much simpler. The drawback, however, is that the definitions are slightly less intuitive at first sight.
+
+Despite saying ``linear algebra'', we won't be working with vector spaces most of the time. Instead, we will be using abelian groups, which really should be thought of as $\Z$-modules. At the end, we will come up with an analogous theory using $\Q$-modules, i.e.\ $\Q$-vector spaces, and most of the theory will carry over. Using $\Q$ makes some of our work easier, but at the cost of losing some information. In the most general case, we can replace $\Z$ with any abelian group, but we will not be considering any of these in this course.
+
+Recall that an $n$-simplex is spanned by $n + 1$ vertices, and as a simplex, permuting the vertices still gives the same simplex. What we want to do is to remember the \emph{orientation} of the simplex. In particular, we want to think of the simplices $(a, b)$ and $(b, a)$ as different:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.58] (0, 0) node [left] {$a$} node [circ] {} -- (2, 0) node [right] {$b$} node [circ] {};
+ \draw [->-=0.58] (6, 0) node [right] {$b$} node [circ] {} -- (4, 0) node [left] {$a$} node [circ] {};
+ \end{tikzpicture}
+\end{center}
+Hence we define \emph{oriented simplices}.
+
+\begin{defi}[Oriented $n$-simplex]
+ An \emph{oriented $n$-simplex} in a simplicial complex $K$ is an $(n + 1)$-tuple $(a_0, \cdots, a_n)$ of vertices $a_i \in V_K$ such that $\bra a_0, \cdots, a_n\ket \in K$, where we think of two $(n + 1)$-tuples $(a_0, \cdots, a_n)$ and $(a_{\pi(0)}, \cdots, a_{\pi(n)})$ as the same \emph{oriented} simplex if $\pi \in S_{n + 1}$ is an \emph{even} permutation.
+
+ We often denote an oriented simplex as $\sigma$, and then $\bar{\sigma}$ denotes the same simplex with the opposite orientation.
+\end{defi}
+
+\begin{eg}
+ As oriented $2$-simplices, $(v_0, v_1, v_2)$ and $(v_1, v_2, v_0)$ are equal, but they are different from $(v_2, v_1, v_0)$. We can imagine the two different orientations of the simplices as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$v_0$};
+ \node [circ] at (2, 0) {};
+ \node [right] at (2, 0) {$v_2$};
+ \node [circ] at (1, 1.732) {};
+ \node [above] at (1, 1.732) {$v_1$};
+ \draw [semithick, -latex'] (1, 0.3773) arc (270:0:0.2);
+
+ \begin{scope}[shift={(4, 0)}]
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$v_0$};
+ \node [circ] at (2, 0) {};
+ \node [right] at (2, 0) {$v_1$};
+ \node [circ] at (1, 1.732) {};
+ \node [above] at (1, 1.732) {$v_2$};
+ \draw [semithick, latex'-] (1, 0.3773) arc (270:0:0.2);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+One and two dimensions are the dimensions where we can easily visualize the orientation. This is substantially harder in higher dimensions, and often we just work with the definition instead.
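Indeed, working with the definition is just a parity computation: two tuples name the same oriented simplex exactly when they differ by an even permutation. A small sketch (the function names are ours; vertex labels assumed distinct and comparable):

```python
def parity(t):
    # Sign of the permutation sorting t, via an inversion count.
    inversions = sum(1 for i in range(len(t))
                       for j in range(i + 1, len(t)) if t[i] > t[j])
    return (-1) ** inversions

def canonical(t):
    # An oriented simplex, represented as (sorted vertices, orientation).
    return (tuple(sorted(t)), parity(t))

print(canonical((0, 1, 2)) == canonical((1, 2, 0)))  # True:  even permutation
print(canonical((0, 1, 2)) == canonical((2, 1, 0)))  # False: odd permutation
```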
+
+\begin{defi}[Chain group $C_n(K)$]
+ Let $K$ be a simplicial complex. For each $n \geq 0$, we define $C_n(K)$ as follows:
+
+ Let $\{\sigma_1, \cdots, \sigma_\ell\}$ be the set of $n$-simplices of $K$. For each $i$, choose an orientation on $\sigma_i$. That is, choose an order for the vertices (up to an even permutation). This choice is not important, but we need to make it. Now when we say $\sigma_i$, we mean the oriented simplex with this particular orientation.
+
+ Now let $C_n(K)$ be the free abelian group with basis $\{\sigma_1, \cdots, \sigma_\ell\}$, i.e.\ $C_n(K) \cong \Z^\ell$. So an element in $C_n(K)$ might look like
+ \[
+ \sigma_3 - 7 \sigma_1 + 52 \sigma_{64} - 28 \sigma_{1000000}.
+ \]
+ In other words, an element of $C_n(K)$ is just a formal sum of $n$-simplices.
+
+ For convenience, we define $C_{\ell}(K) = 0$ for $\ell < 0$. This will save us from making exceptions for $n = 0$ cases later.
+\end{defi}
+For each oriented simplex, we identify $-\sigma_i$ with $\bar{\sigma}_i$, at least when $n \geq 1$.
+
+In this definition, we have to choose a particular orientation for each of our simplices. If you don't like making arbitrary choices, we could instead define $C_n(K)$ as some quotient, but it is slightly more complicated.
+
+Note that if there are no $n$-simplices (e.g.\ when $n = -1$), we can still meaningfully talk about $C_n(K)$, but it's just $0$.
+
+\begin{eg}
+ We can think of elements in the chain group $C_1(K)$ as ``paths'' in $|K|$. For example, we might have the following simplicial complex:
+ \begin{center}
+ \begin{tikzpicture}
+ \node [mred, circ] at (0, 0) {};
+ \node [mred, left] at (0, 0) {$v_0$};
+ \node [mred, above] at (1, 1.732) {$v_1$};
+ \node [mred, below] at (2, 0) {$v_2$};
+ \node [mred, circ] at (2, 0) {};
+ \node [mred, circ] at (1, 1.732) {};
+ \node [mred, circ] at (3, 1.732) {};
+ \node [mred, circ] at (4, 0.6) {};
+ \node [mred, circ] at (5, 1.6) {};
+ \node [mred, circ] at (6, 0.6) {};
+ \node [mred, circ] at (5, -0.4) {};
+
+ \draw [->-=0.58] (0, 0) -- (1, 1.732) node [pos=0.5, left] {$\sigma_1$};
+ \draw [->-=0.58] (2, 0) -- (1, 1.732) node [pos=0.5, right] {$\sigma_2$};
+ \draw [->-=0.58] (2, 0) -- (0, 0) node [pos=0.5, below] {$\sigma_3$};
+
+ \draw [->-=0.58] (1, 1.732) -- (3, 1.732);
+ \draw [->-=0.58] (2, 0) -- (3, 1.732);
+ \draw [->-=0.58] (4, 0.6) -- (3, 1.732) node [pos=0.5, right] {$\sigma_6$};
+ \draw [->-=0.58] (4, 0.6) -- (5, 1.6);
+ \draw [->-=0.58] (5, 1.6) -- (6, 0.6);
+ \draw [->-=0.58] (6, 0.6) -- (5, -0.4);
+ \draw [->-=0.58] (5, -0.4) -- (4, 0.6);
+ \end{tikzpicture}
+ \end{center}
+ Then the path $v_0 \leadsto v_1 \leadsto v_2 \leadsto v_0 \leadsto v_1$ around the left triangle is represented by the element
+ \[
+ \sigma_1 - \sigma_2 + \sigma_3 + \sigma_1 = 2\sigma_1 - \sigma_2 + \sigma_3.
+ \]
+ Of course, with this setup, we can do more random things, like adding $57$ copies of $\sigma_6$ to it, and this is also allowed. So we could think of these as disjoint unions of paths instead.
+\end{eg}
+
+When defining fundamental groups, we had homotopies that allowed us to ``move around''. Somehow, we would like to say that two of these paths are ``equivalent'' under certain conditions. To do so, we need to define the \emph{boundary homomorphisms}.
+
+\begin{defi}[Boundary homomorphisms]
+ We define \emph{boundary homomorphisms}
+ \[
+ d_n: C_n(K) \to C_{n - 1}(K)
+ \]
+ by
+ \[
+ (a_0, \cdots, a_n) \mapsto \sum_{i = 0}^n (-1)^i (a_0, \cdots, \hat{a}_i, \cdots, a_n),
+ \]
+ where $(a_0, \cdots, \hat{a}_i, \cdots, a_n) = (a_0, \cdots, a_{i - 1}, a_{i + 1}, \cdots, a_n)$ is the simplex with $a_i$ removed.
+\end{defi}
+This means we remove each vertex in turn and add them up. This is clear if we draw some pictures in low dimensions:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.58] (0, 0) node [left] {$v_0$} -- (2, 0) node [right] {$v_1$};
+
+ \draw [->] (3, 0) -- (4.5, 0);
+ \draw (3, 0.1) -- +(0, -0.2);
+
+ \node [circ] at (5.5, 0) {};
+ \node [above] at (5.5, 0) {$-v_0$};
+ \node [circ] at (7.5, 0) {};
+ \node [above] at (7.5, 0) {$v_1$};
+ \end{tikzpicture}
+\end{center}
+If we take a triangle, we get
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$v_0$};
+ \node [circ] at (2, 0) {};
+ \node [right] at (2, 0) {$v_2$};
+ \node [circ] at (1, 1.732) {};
+ \node [above] at (1, 1.732) {$v_1$};
+ \draw [semithick, -latex'] (1, 0.3773) arc (270:0:0.2);
+
+ \draw [->] (3, 0.866) -- (4.5, 0.866);
+ \draw (3, 0.966) -- +(0, -0.2);
+
+ \begin{scope}[shift={(5.5, 0)}]
+ \draw [->-=0.58] (0, 0) -- (1, 1.732);
+ \draw [->-=0.58] (1, 1.732) -- (2, 0);
+ \draw [->-=0.58] (2, 0) -- (0, 0);
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$v_0$};
+ \node [circ] at (2, 0) {};
+ \node [right] at (2, 0) {$v_2$};
+ \node [circ] at (1, 1.732) {};
+ \node [above] at (1, 1.732) {$v_1$};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+An important property of the boundary map is that the boundary of a boundary is zero:
+\begin{lemma}
+ $d_{n - 1}\circ d_n = 0$.
+\end{lemma}
+In other words, $\im d_{n + 1} \subseteq \ker d_n$.
+
+\begin{proof}
+ This just involves expanding the definition and working through the mess. % exercise
+\end{proof}
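The mess does expand to zero, though: the face obtained by deleting both $a_i$ and $a_j$ appears twice, once in each deletion order, with opposite signs. This is easy to witness mechanically; the sketch below (the representation is ours) stores a chain as a dictionary from ordered vertex tuples to integer coefficients.

```python
from collections import defaultdict

def boundary(chain):
    # d_n: delete each vertex in turn, with alternating signs.
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            out[simplex[:i] + simplex[i + 1:]] += (-1) ** i * coeff
    return {s: c for s, c in out.items() if c != 0}

sigma = {(0, 1, 2, 3): 1}         # an oriented 3-simplex
print(boundary(sigma))            # its four faces, with alternating signs
print(boundary(boundary(sigma)))  # {}: every face cancels, so d o d = 0
```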
+
+With this in mind, we will define the homology groups as follows:
+\begin{defi}[Simplicial homology group $H_n(K)$]
+ The \emph{$n$th simplicial homology group} $H_n(K)$ is defined as
+ \[
+ H_n(K) = \frac{\ker d_n}{\im d_{n + 1}}.
+ \]
+\end{defi}
+This is a nice, clean definition, but what does this mean geometrically?
+
+Somehow, $H_k(K)$ describes all the ``$k$-dimensional holes'' in $|K|$. Since we are going to draw pictures, we are going to start with the easy case of $k = 1$. Our $H_1(K)$ is made from the kernel of $d_1$ and the image of $d_2$. First, we give these things names.
+
+\begin{defi}[Chains, cycles and boundaries]
+ The elements of $C_k(K)$ are called \emph{$k$-chains} of $K$, those of $\ker d_k$ are called \emph{$k$-cycles} of $K$, and those of $\im d_{k + 1}$ are called \emph{$k$-boundaries} of $K$.
+\end{defi}
+
+Suppose we have some $c \in \ker d_k$. In other words, $dc = 0$. If we interpret $c$ as a ``path'', if it has no boundary, then it represents some sort of loop, i.e.\ a cycle. For example, if we have the following cycle:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.58] (0, 0) -- (1, 1.732);
+ \draw [->-=0.58] (1, 1.732) -- (2, 0);
+ \draw [->-=0.58] (2, 0) -- (0, 0);
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$e_0$};
+ \node [circ] at (2, 0) {};
+ \node [right] at (2, 0) {$e_2$};
+ \node [circ] at (1, 1.732) {};
+ \node [above] at (1, 1.732) {$e_1$};
+ \end{tikzpicture}
+\end{center}
+We have
+\[
+ c = (e_0, e_1) + (e_1, e_2) + (e_2, e_0).
+\]
+We can then compute the boundary as
+\[
+ dc = (e_1 - e_0) + (e_2 - e_1) + (e_0 - e_2) = 0.
+\]
+So this $c$ is indeed a cycle.
+
+Now if $c \in \im d_2$, then $c = db$ for some 2-chain $b$, i.e.\ $c$ is the boundary of some two-dimensional thing. This is why we call this a 1-boundary. For example, suppose we have our cycle as above, but now it is itself the boundary of a 2-chain.
+\begin{center}
+ \begin{tikzpicture}
+ \fill [mblue, opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \draw [->-=0.58] (0, 0) -- (1, 1.732);
+ \draw [->-=0.58] (1, 1.732) -- (2, 0);
+ \draw [->-=0.58] (2, 0) -- (0, 0);
+
+ \node [circ] at (0, 0) {};
+ \node [left] at (0, 0) {$v_0$};
+ \node [circ] at (2, 0) {};
+ \node [right] at (2, 0) {$v_2$};
+ \node [circ] at (1, 1.732) {};
+ \node [above] at (1, 1.732) {$v_1$};
+ \draw [semithick, -latex'] (1, 0.3773) arc (270:0:0.2);
+ \end{tikzpicture}
+\end{center}
+We see that a cycle that has a boundary has been ``filled in''. Hence the ``holes'' are, roughly, the cycles that haven't been filled in. Hence we define the homology group as the cycles quotiented by the boundaries, and we interpret its elements as $k$-dimensional ``holes''.
+
+\begin{eg}
+ Let $K$ be the standard simplicial $1$-sphere, i.e.\ we have the following in $\R^3$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (3, 0);
+ \draw [->] (0, 0) -- (0, 3);
+ \draw [->] (0, 0) -- (-1, -2);
+
+ \draw [->-=0.58] (2, 0) -- (0, 2);
+ \draw [->-=0.58] (0, 2) -- (-0.5, -1);
+ \draw [->-=0.58] (-0.5, -1) -- (2, 0);
+
+ \node [left] at (-0.5, -1) {$e_0$};
+ \node [below] at (2, 0) {$e_1$};
+ \node [right] at (0, 2) {$e_2$};
+
+ \node [circ] at (-0.5, -1) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (0, 2) {};
+ \end{tikzpicture}
+ \end{center}
+ Our simplices are thus
+ \[
+ K = \{\bra e_0\ket, \bra e_1\ket, \bra e_2\ket, \bra e_0, e_1\ket, \bra e_1, e_2\ket, \bra e_2, e_0\ket\}.
+ \]
+ Our chain groups are
+ \begin{align*}
+ C_0(K) &= \bra (e_0), (e_1), (e_2)\ket \cong \Z^3\\
+ C_1(K) &= \bra (e_0, e_1), (e_1, e_2), (e_2, e_0)\ket \cong \Z^3.
+ \end{align*}
+ All other chain groups are zero. Note that our notation is slightly confusing here, since the brackets $\bra \ph \ket$ can mean the simplex spanned by the vertices, or the group generated by certain elements. However, you are probably clueful enough to distinguish the two uses.
+
+ Hence, the only non-zero boundary map is
+ \[
+ d_1: C_1(K) \to C_0(K).
+ \]
+ We can write down its matrix with respect to the given basis.
+ \[
+ \begin{pmatrix}
+ -1 & 0 & 1\\
+ 1 & -1 & 0\\
+ 0 & 1 & -1
+ \end{pmatrix}
+ \]
+ We now have everything we need to compute the homology groups; it remains to do some linear algebra to figure out the image and kernel of this map. We have
+ \[
+ H_0(K) = \frac{\ker (d_0: C_0(K) \to C_{-1}(K))}{\im (d_1: C_1(K) \to C_0(K))} \cong \frac{C_0(K)}{\im d_1} \cong \frac{\Z^3}{\im d_1}.
+ \]
+ After doing some row operations on our matrix, we see that the image of $d_1$ is a rank-two subgroup, generated by the images of two of the edges. Hence we have
+ \[
+ H_0(K) = \Z.
+ \]
+ What does this $H_0(K)$ represent? We initially said that $H_k(K)$ should represent the $k$-dimensional holes, but when $k = 0$, this is simpler. As for $\pi_0$, $H_0$ just represents the path components of $K$. We interpret this to mean $K$ has one path component. In general, if $K$ has $r$ path components, then we expect $H_0(K)$ to be $\Z^r$.
+
+ Similarly, we have
+ \[
+ H_1(K) = \frac{\ker d_1}{\im d_2} \cong \ker d_1.
+ \]
+ It is easy to see that in fact we have
+ \[
+ \ker d_1 = \bra (e_0, e_1) + (e_1, e_2) + (e_2, e_0)\ket \cong \Z.
+ \]
+ So we also have
+ \[
+ H_1(K) \cong \Z.
+ \]
+ We see that this $H_1(K)$ is generated by precisely the single loop in the triangle. The fact that $H_1(K)$ is non-trivial means that we do indeed have a hole in the middle of the circle.
+\end{eg}
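The linear algebra in this example can be double-checked mechanically. The following sketch (not part of the notes) works with the matrix of $d_1$ written down above, verifying that the loop spans the kernel's generator and that the image has rank two:

```python
# matrix of d_1 with respect to the bases above: columns are the images
# of (e0, e1), (e1, e2), (e2, e0)
d1 = [[-1, 0, 1],
      [ 1, -1, 0],
      [ 0, 1, -1]]

def apply(mat, vec):
    """Multiply an integer matrix by an integer vector."""
    return [sum(a * x for a, x in zip(row, vec)) for row in mat]

# the 1-chain (e0, e1) + (e1, e2) + (e2, e0) is a cycle: d_1 kills it
loop = [1, 1, 1]
assert apply(d1, loop) == [0, 0, 0]

# the third column is minus the sum of the first two, so im d_1 has rank 2,
# giving H_0 = Z^3 / im d_1 = Z, while ker d_1 is generated by the loop
assert [row[2] for row in d1] == [-(row[0] + row[1]) for row in d1]
```
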
+
+\begin{eg}
+ Let $L$ be the standard $2$-simplex (and all its faces) in $\R^3$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (3, 0);
+ \draw [->] (0, 0) -- (0, 3);
+ \draw [->] (0, 0) -- (-1, -2);
+
+ \fill [mblue, opacity=0.5] (0, 2) -- (2, 0) -- (-0.5, -1) -- cycle;
+ \draw [semithick, -latex'] (0.7, 0.33333) arc (0:270:0.2);
+
+ \draw [->-=0.58] (2, 0) -- (0, 2);
+ \draw [->-=0.58] (0, 2) -- (-0.5, -1);
+ \draw [->-=0.58] (-0.5, -1) -- (2, 0);
+
+ \node [left] at (-0.5, -1) {$e_0$};
+ \node [below] at (2, 0) {$e_1$};
+ \node [right] at (0, 2) {$e_2$};
+
+ \node [circ] at (-0.5, -1) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (0, 2) {};
+ \end{tikzpicture}
+ \end{center}
+ Now our chain groups are
+ \begin{align*}
+ C_0(L) &= C_0(K) \cong \Z^3 \cong \bra (e_0), (e_1), (e_2)\ket\\
+ C_1(L) &= C_1(K) \cong \Z^3 \cong \bra (e_0, e_1), (e_1, e_2), (e_2, e_0)\ket\\
+ C_2(L) &\cong \Z = \bra (e_0, e_1, e_2)\ket.
+ \end{align*}
+ Since $d_1$ is the same as before, the only new interesting boundary map is $d_2$. We compute
+ \[
+ d_2 ((e_0, e_1, e_2)) = (e_0, e_1) + (e_1, e_2) + (e_2, e_0).
+ \]
+ We know that $H_0(L)$ depends only on $d_0$ and $d_1$, which are the same as for $K$. So
+ \[
+ H_0(L) \cong \Z.
+ \]
+ Again, the interpretation of this is that $L$ is path-connected. The first homology group is
+ \[
+ H_1(L) = \frac{\ker d_1}{\im d_2} = \frac{\bra (e_0, e_1) + (e_1, e_2) + (e_2, e_0) \ket}{\bra (e_0, e_1) + (e_1, e_2) + (e_2, e_0) \ket} \cong 0.
+ \]
+ This precisely illustrates the fact that the ``hole'' is now filled in $L$.
+
+ Finally, we have
+ \[
+ H_2(L) = \frac{\ker d_2}{\im d_3} = \ker d_2 \cong 0.
+ \]
+ This is zero since there aren't any two-dimensional holes in $L$.
+\end{eg}
+We have hopefully gained some intuition on what these homology groups mean. We are now going to spend a lot of time developing formalism.
+
+\subsection{Some homological algebra}
+We will develop some formalism to help us compute homology groups $H_k(K)$ in lots of examples. Firstly, we axiomatize the setup we had above.
+\begin{defi}[Chain complex and differentials]
+ A \emph{chain complex} $C_{\Cdot}$ is a sequence of abelian groups $C_0, C_1, C_2, \cdots$ equipped with maps $d_n: C_n \to C_{n - 1}$ such that $d_{n - 1} \circ d_n = 0$ for all $n$. We call these maps the \emph{differentials} of $C_{\Cdot}$.
+\end{defi}
+\[
+ \begin{tikzcd}
+ 0 & C_0 \ar[l, "d_0"'] & C_1 \ar [l, "d_1"'] & C_2 \ar [l, "d_2"'] & \cdots \ar [l, "d_3"']
+ \end{tikzcd}
+\]
+Whenever we have some of these things, we can define homology groups in exactly the same way as we defined them for simplicial complexes.
+
+\begin{defi}[Cycles and boundaries]
+ The space of \emph{$n$-cycles} is
+ \[
+ Z_n(C) = \ker d_n.
+ \]
+ The space of \emph{$n$-boundaries} is
+ \[
+ B_n(C) = \im d_{n+1}.
+ \]
+\end{defi}
+
+\begin{defi}[Homology group]
+ The \emph{$n$-th homology group} of $C_{\Cdot}$ is defined to be
+ \[
+ H_n(C) = \frac{\ker d_n}{\im d_{n + 1}} = \frac{Z_n(C)}{B_n(C)}.
+ \]
+\end{defi}
+
+In mathematics, when we have objects, we don't just talk about the objects themselves, but also functions between them. Suppose we have two chain complexes. For the sake of simplicity, we write the differentials of both as $d_n$.
+
+In general, we want to have maps between these two sequences. This would correspond to having a map $f_i: C_i \to D_i$ for each $i$, satisfying some commutativity relations.
+
+\begin{defi}[Chain map]
+ A \emph{chain map} $f_{\Cdot}: C_{\Cdot}\to D_{\Cdot}$ is a sequence of homomorphisms $f_n: C_n \to D_n$ such that
+ \[
+ f_{n - 1} \circ d_n = d_n \circ f_n
+ \]
+ for all $n$. In other words, the following diagram commutes:
+ \[
+ \begin{tikzcd}[row sep=large]
+ 0 \ar[d, equals] & C_0 \ar[l, "d_0"'] \ar[d, "f_0"'] & C_1 \ar [l, "d_1"'] \ar[d, "f_1"'] & C_2 \ar [l, "d_2"'] \ar [d, "f_2"'] & \cdots \ar [l, "d_3"']\\
+ 0 & D_0 \ar[l, "d_0"] & D_1 \ar [l, "d_1"] & D_2 \ar [l, "d_2"] & \cdots \ar [l, "d_3"]
+ \end{tikzcd}
+ \]
+\end{defi}
+
+We want to have homotopies. So far, chain complexes seem to be rather rigid, but homotopies themselves are rather floppy. How can we define homotopies for chain complexes? It turns out we can have a completely algebraic definition for \emph{chain homotopies}.
+
+\begin{defi}[Chain homotopy]
+ A \emph{chain homotopy} between chain maps $f_{\Cdot}, g_{\Cdot}: C_{\Cdot} \to D_{\Cdot}$ is a sequence of homomorphisms $h_n: C_n \to D_{n + 1}$ such that
+ \[
+ g_n - f_n = d_{n + 1} \circ h_n + h_{n - 1} \circ d_n.
+ \]
+ We write $f_{\Cdot} \simeq g_{\Cdot}$ if there is a chain homotopy between $f_{\Cdot}$ and $g_{\Cdot}$.
+\end{defi}
+The arrows can be put in the following diagram, which is \emph{not} commutative:
+\[
+ \begin{tikzcd}[row sep=large]
+ & C_{n - 1} \ar[rd, "h_{n - 1}"'] & C_n \ar [l, "d_n"'] \ar[d, shift left, "f_n"] \ar[d, shift right, "g_n"'] \ar [rd, "h_n"]\\
+ & & D_n & D_{n + 1} \ar [l, "d_{n + 1}"]
+ \end{tikzcd}
+\]
+The intuition behind this definition is as follows: suppose $C_{\Cdot} = C_{\Cdot} (K)$ and $D_{\Cdot} = C_{\Cdot}(L)$ for $K, L$ simplicial complexes, and $f_{\Cdot}$ and $g_{\Cdot}$ are ``induced'' by simplicial maps $f, g: K \to L$ (if $f$ maps an $n$-simplex $\sigma$ to a lower-dimensional simplex, then $f_n(\sigma) = 0$). How can we detect whether $f$ and $g$ are homotopic at the level of the chain complexes?
+
+Suppose $H: |K| \times I \to |L|$ is a homotopy from $f$ to $g$. We further suppose that $H$ actually comes from a simplicial map $K \times I \to L$ (we'll skim over the technical issue of how we can make $K \times I$ a simplicial complex. Instead, you are supposed to figure this out yourself in example sheet 3).
+
+Let $\sigma$ be an $n$-simplex of $K$, and here is a picture of $H(\sigma \times I)$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-1, 0) -- (1, 0) -- (0, 0.5) -- cycle;
+
+ \draw (-1, -2) -- (1, -2);
+ \draw [dashed] (-1, -2) -- (0, -1.5) -- (1, -2);
+ \draw [dashed] (0, -1.5) -- (0, 0.5);
+ \draw (-1, -2) -- (-1, 0);
+ \draw (1, -2) -- (1, 0);
+ \end{tikzpicture}
+\end{center}
+Let
+\[
+ h_n(\sigma) = H(\sigma \times I).
+\]
+We think of this as an $(n + 1)$-chain. What is its boundary? We've got the vertical sides plus the top and bottom. The bottom should just be $f(\sigma)$, and the top is just $g(\sigma)$, since $H$ is a homotopy from $f$ to $g$. How about the sides? They are what we get when we pull the boundary $\partial \sigma$ up with the homotopy, i.e.\ $H(\partial \sigma \times I) = h_{n - 1} \circ d_n(\sigma)$. Now note that $f(\sigma)$ and $g(\sigma)$ have opposite orientations, so we get the result
+\[
+ d_{n + 1} \circ h_n(\sigma) = h_{n - 1}\circ d_n(\sigma) + g_n(\sigma) - f_n(\sigma).
+\]
+Rearranging and dropping the $\sigma$s, we get
+\[
+ d_{n + 1} \circ h_n - h_{n - 1}\circ d_n = g_n - f_n.
+\]
+This looks remarkably like our definition for chain homotopies of maps, with the signs a bit wrong. So in reality, we will have to fiddle with the sign of $h_n$ a bit to get it right, but you get the idea.
+
+\begin{lemma}
+ A chain map $f_{\Cdot}: C_{\Cdot} \to D_{\Cdot}$ induces a homomorphism:
+ \begin{align*}
+ f_*: H_n(C) &\to H_n(D)\\
+ [c] &\mapsto [f(c)]
+ \end{align*}
+ Furthermore, if $f_{\Cdot}$ and $g_{\Cdot}$ are chain homotopic, then $f_* = g_*$.
+\end{lemma}
+
+\begin{proof}
+ Since the homology groups are defined as the cycles quotiented by the boundaries, to show that $f_*$ defines a homomorphism, we need to show $f$ sends cycles to cycles and boundaries to boundaries. This is an easy check. If $d_n (\sigma) = 0$, then
+ \[
+ d_n (f_n(\sigma)) = f_{n - 1}(d_n(\sigma)) = f_{n - 1}(0) = 0.
+ \]
+ So $f_n(\sigma) \in Z_n(D)$.
+
+ Similarly, if $\sigma$ is a boundary, say $\sigma = d_{n + 1} (\tau)$, then
+ \[
+ f_n(\sigma) = f_n(d_{n + 1}(\tau)) = d_{n + 1}(f_{n + 1}(\tau)).
+ \]
+ So $f_n(\sigma)$ is a boundary. It thus follows that $f_*$ is well-defined.
+
+ Now suppose $h_n$ is a chain homotopy between $f$ and $g$. For any $c \in Z_n(C)$, we have
+ \[
+ g_n(c) - f_n(c) = d_{n + 1} \circ h_n(c) + h_{n - 1} \circ d_n(c).
+ \]
+ Since $c \in Z_n(C)$, we know that $d_n(c) = 0$. So
+ \[
+ g_n(c) - f_n(c) = d_{n + 1} \circ h_n(c) \in B_n(D).
+ \]
+ Hence $g_n(c)$ and $f_n(c)$ differ by a boundary. So $[g_n(c)] - [f_n(c)] = 0$ in $H_n(D)$, i.e.\ $f_*(c) = g_*(c)$.
+\end{proof}
+
+The following statements are easy to check:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item Being chain-homotopic is an equivalence relation on chain maps.
+ \item If $a_{\Cdot}: A_{\Cdot} \to C_{\Cdot}$ is a chain map and $f_{\Cdot} \simeq g_{\Cdot}$, then $f_{\Cdot} \circ a_{\Cdot} \simeq g_{\Cdot} \circ a_{\Cdot}$.
+ \item If $f: C_{\Cdot} \to D_{\Cdot}$ and $g: D_{\Cdot} \to A_{\Cdot}$ are chain maps, then
+ \[
+ g_* \circ f_* = (g_{\Cdot} \circ f_{\Cdot})_*.
+ \]
+ \item $(\id_{C_{\Cdot}})_* = \id_{H_*(C)}$.
+ \end{enumerate}
+\end{prop}
+The last two statements can be summarized in fancy language by saying that $H_n$ is a functor.
+
+\begin{defi}[Chain homotopy equivalence]
+ Chain complexes $C_{\Cdot}$ and $D_{\Cdot}$ are \emph{chain homotopy equivalent} if there exist $f_{\Cdot}: C_{\Cdot} \to D_{\Cdot}$ and $g_{\Cdot}: D_{\Cdot}\to C_{\Cdot}$ such that
+ \[
+ f_{\Cdot} \circ g_{\Cdot} \simeq \id_{D_{\Cdot}},\quad g_{\Cdot} \circ f_{\Cdot}\simeq \id_{C_{\Cdot}}.
+ \]
+\end{defi}
+We should think of this in exactly the same way as we think of homotopy equivalences of spaces. The chain complexes themselves are not necessarily the same, but the induced homology groups will be.
+
+\begin{lemma}
+ Let $f_{\Cdot}: C_{\Cdot} \to D_{\Cdot}$ be a \emph{chain homotopy equivalence}, then $f_*: H_n(C) \to H_n(D)$ is an isomorphism for all $n$.
+\end{lemma}
+
+\begin{proof}
+ Let $g_{\Cdot}$ be the homotopy inverse. Since $f_{\Cdot} \circ g_{\Cdot} \simeq \id_{D_{\Cdot}}$, we know $f_* \circ g_* = \id_{H_*(D)}$. Similarly, $g_* \circ f_* = \id_{H_*(C)}$. So we get isomorphisms between $H_n(C)$ and $H_n(D)$.
+\end{proof}
+
+\subsection{Homology calculations}
+We'll get back to topology and compute some homologies.
+
+Here, $K$ is always a simplicial complex, and $C_{\Cdot} = C_{\Cdot}(K)$.
+\begin{lemma}
+ Let $f: K \to L$ be a simplicial map. Then $f$ induces a chain map $f_{\Cdot}: C_{\Cdot}(K) \to C_{\Cdot}(L)$. Hence it also induces $f_*: H_n (K) \to H_n(L)$.
+\end{lemma}
+
+\begin{proof}
+ This is fairly obvious, except that simplicial maps are allowed to ``squash'' simplices, so $f$ might send an $n$-simplex to an $(n - 1)$-simplex, which does not give a basis element of $C_n(L)$. We solve this problem by just killing these troublesome simplices.
+
+ Let $\sigma$ be an oriented $n$-simplex in $K$, corresponding to a basis element of $C_n(K)$. Then we define
+ \[
+ f_n(\sigma) =
+ \begin{cases}
+ f(\sigma) & f(\sigma)\text{ is an $n$-simplex}\\
+ 0 & f(\sigma)\text{ is a $k$-simplex for }k < n
+ \end{cases}.
+ \]
+ More precisely, if $\sigma = (a_0, \cdots, a_n)$, then
+ \[
+ f_n(\sigma) =
+ \begin{cases}
+ (f(a_0), \cdots, f(a_n)) & f(a_0), \cdots, f(a_n)\text{ span an $n$-simplex}\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ We then extend $f_n$ linearly to obtain $f_n: C_n(K) \to C_n(L)$.
+
+ It is immediate from this that this satisfies the chain map condition, i.e.\ $f_{\Cdot}$ commutes with the boundary operators.
+\end{proof}
+
+\begin{defi}[Cone]
+ A simplicial complex $K$ is a \emph{cone} if, for some $v_0 \in V_K$,
+ \[
+ |K| = \St_K(v_0) \cup \Lk_K(v_0).
+ \]
+\end{defi}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [dashed] (0, 0) -- (2, 0.5) -- (3, -0.5);
+ \draw (3, -0.5) -- (2, -1.5) -- (1, -1) -- (0, 0);
+
+ \node [circ] at (1.5, 2) {};
+ \node [above] at (1.5, 2) {$v_0$};
+ \draw (1.5, 2) -- (0, 0);
+ \draw [dashed] (1.5, 2) -- (2, 0.5);
+ \draw (1.5, 2) -- (3, -0.5);
+ \draw (1.5, 2) -- (2, -1.5);
+ \draw (1.5, 2) -- (1, -1);
+ \end{tikzpicture}
+\end{center}
+We see that a cone ought to be contractible --- we can just squash it to the point $v_0$. This is what the next lemma tells us.
+\begin{lemma}
+ If $K$ is a cone with cone point $v_0$, then the inclusion $i: \{v_0\} \to |K|$ induces a chain homotopy equivalence $i_{\Cdot}: C_{\Cdot} (\{v_0\}) \to C_{\Cdot}(K)$. Therefore
+ \[
+ H_n(K) =
+ \begin{cases}
+ \Z & n = 0\\
+ 0 & n > 0
+ \end{cases}
+ \]
+\end{lemma}
+The homotopy inverse to $i_{\Cdot}$ is given by $r_{\Cdot}$, where $r: K \to \{v_0\}$ is the only map. It is clear that $r_{\Cdot} \circ i_{\Cdot} = \id$, and we need to show that $i_{\Cdot} \circ r_{\Cdot} \simeq \id$. This chain homotopy can be defined by $h_n: C_n(K) \to C_{n + 1}(K)$, where $h_n$ associates to any simplex $\sigma$ in $\Lk_K(v_0)$ the simplex spanned by $\sigma$ and $v_0$. Details are left to the reader.
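The details left to the reader can be checked mechanically. The following sketch (not part of the notes) verifies the chain homotopy identity $d_{n + 1} \circ h_n + h_{n - 1} \circ d_n = \id - i_{\Cdot} \circ r_{\Cdot}$ on the full triangle, viewed as a cone on the vertex $0$, assuming the alternating-sign boundary formula and one choice of orientation convention for the cone operator (prepending $v_0$) that happens to make the signs work out:

```python
from collections import defaultdict

V0 = 0  # the cone point

def boundary(chain):
    """Alternating-sign boundary of a formal sum {simplex tuple: coefficient}."""
    out = defaultdict(int)
    for s, c in chain.items():
        if len(s) > 1:
            for i in range(len(s)):
                out[s[:i] + s[i + 1:]] += (-1) ** i * c
    return {s: c for s, c in out.items() if c != 0}

def h(chain):
    """The claimed chain homotopy: cone a simplex of Lk(v0) off to v0."""
    out = defaultdict(int)
    for s, c in chain.items():
        if V0 not in s:
            out[(V0,) + s] += c
    return {s: c for s, c in out.items() if c != 0}

def i_r(chain):
    """The composite i . r: everything is squashed to the cone point, so
    only vertices survive; higher simplices go to 0."""
    out = defaultdict(int)
    for s, c in chain.items():
        if len(s) == 1:
            out[(V0,)] += c
    return {s: c for s, c in out.items() if c != 0}

def combine(a, b, sign=1):
    """Formal sum a + sign * b of two chains."""
    out = defaultdict(int, a)
    for s, c in b.items():
        out[s] += sign * c
    return {s: c for s, c in out.items() if c != 0}

# K = the full triangle on {0, 1, 2}, a cone on v0 = 0
K = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
for simplex in K:
    c = {simplex: 1}
    lhs = combine(boundary(h(c)), h(boundary(c)))  # d h + h d
    rhs = combine(c, i_r(c), sign=-1)              # id - i.r
    assert lhs == rhs
```
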
+
+\begin{cor}
+ If $\Delta^n$ is the standard $n$-simplex, and $L$ consists of $\Delta^n$ and all its faces, then
+ \[
+ H_k(L) =
+ \begin{cases}
+ \Z & k = 0\\
+ 0 & k > 0
+ \end{cases}
+ \]
+\end{cor}
+
+\begin{proof}
+ $L$ is a cone (on any vertex).
+\end{proof}
+
+What we would really like is an example of non-trivial homology groups. Moreover, we want them in higher dimensions, and not just the examples we got for fundamental groups. An obvious candidate is the standard $n$-sphere.
+\begin{cor}
+ Let $K$ be the standard $(n - 1)$-sphere (i.e.\ the proper faces of $L$ from above). Then for $n \geq 2$, we have
+ \[
+ H_k(K) =
+ \begin{cases}
+ \Z & k = 0\\
+ 0 & 0 < k < n - 1\\
+ \Z & k = n - 1
+ \end{cases}.
+ \]
+\end{cor}
+
+\begin{proof}
+ We write down the chain groups for $K$ and $L$.
+ \[
+ \begin{tikzcd}
+ 0 & C_0(L) \ar[l] \ar[d, phantom, "\rotatebox{90}{=}"] & C_1(L) \ar [l, "d_1^L"'] \ar[d, phantom, "\rotatebox{90}{=}"] & \cdots \ar[l] & C_{n - 1}(L) \ar[l, "d_{n - 1}^L"'] \ar[d, phantom, "\rotatebox{90}{=}"] & C_n(L) \ar[l, "d_n^L"']\\
+ 0 & C_0(K) \ar[l] & C_1(K) \ar [l, "d_1^K"] & \cdots \ar[l] & C_{n - 1}(K) \ar[l, "d_{n - 1}^K"] & C_n(K) = 0\ar[l]
+ \end{tikzcd}
+ \]
+ For $k < n - 1$, we have $C_k(K) = C_k(L)$ and $C_{k + 1}(K) = C_{k + 1}(L)$. Also, the boundary maps are equal. So
+ \[
+ H_k(K) = H_k(L) = 0.
+ \]
+ We now need to compute
+ \[
+ H_{n - 1}(K) = \ker d_{n - 1}^K = \ker d_{n - 1}^L = \im d_n^L.
+ \]
+ We get the last equality since
+ \[
+ \frac{\ker d_{n - 1}^L}{\im d_n^L} = H_{n - 1}(L) = 0.
+ \]
+ We also know that $C_n (L)$ is generated by just one simplex $(e_0, \cdots, e_n)$. So $C_n(L) \cong \Z$. Also $d_n^L$ is injective since it does not kill the generator $(e_0, \cdots, e_n)$. So
+ \[
+ H_{n - 1}(K) \cong \im d_n^L \cong \Z.\qedhere
+ \]
+\end{proof}
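The chain-level argument in this proof can be carried out numerically. Below is a sketch (not part of the notes) that assembles the boundary matrices of the proper faces of the $n$-simplex and computes the Betti numbers over $\Q$ by rank-nullity; since all the groups here turn out to be free, these ranks do determine the integral homology:

```python
from fractions import Fraction
from itertools import combinations

def rank(rows):
    """Rank of an integer matrix, by Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def betti(vertices, n):
    """Betti numbers over Q of the proper faces of the n-simplex on the
    given vertex set, i.e. the standard (n-1)-sphere."""
    simplices = {k: list(combinations(sorted(vertices), k + 1))
                 for k in range(n)}

    def d_matrix(k):  # matrix of d_k : C_k -> C_{k-1}
        index = {s: i for i, s in enumerate(simplices[k - 1])}
        mat = [[0] * len(simplices[k]) for _ in simplices[k - 1]]
        for j, s in enumerate(simplices[k]):
            for i in range(len(s)):
                mat[index[s[:i] + s[i + 1:]]][j] = (-1) ** i
        return mat

    ranks = {k: rank(d_matrix(k)) for k in range(1, n)}
    ranks[0] = ranks[n] = 0  # d_0 = 0, and C_n = 0 for the sphere
    # rank H_k = dim C_k - rank d_k - rank d_{k+1}, by rank-nullity
    return [len(simplices[k]) - ranks[k] - ranks[k + 1] for k in range(n)]

# the boundary of the 3-simplex is a simplicial 2-sphere: H_0 = H_2 = Z
assert betti([0, 1, 2, 3], 3) == [1, 0, 1]
# the simplicial circle from the earlier example
assert betti([0, 1, 2], 2) == [1, 1]
```
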
+This is very exciting, because at last, we have a suggestion for a non-trivial invariant of $S^{n - 1}$ for $n > 2$. We say this is just a ``suggestion'', since the simplicial homology is defined for simplicial complexes, and we are not guaranteed that if we put a different simplicial complex on $S^{n - 1}$, we will get the same homology groups. So the major technical obstacle we still need to overcome is to see that $H_k$ are invariants of the polyhedron $|K|$, not just $K$, and similarly for maps. But this will take some time.
+
+We will quickly say something we've alluded to already:
+\begin{lemma}[Interpretation of $H_0$]
+ $H_0(K) \cong \Z^d$, where $d$ is the number of path components of $K$.
+\end{lemma}
+
+\begin{proof}
+ Let $K$ be our simplicial complex and $v, w \in V_K$. We note that by definition, $v, w$ represent the same homology class in $H_0(K)$ if and only if there is some $c \in C_1(K)$ such that $d_1 c = w - v$. The requirement that $d_1 c = w - v$ is equivalent to saying $c$ is a path from $v$ to $w$. So $[v] = [w]$ if and only if $v$ and $w$ are in the same path component of $K$.
+\end{proof}
+Most of the time, we only care about path-connected spaces, so $H_0(K) \cong \Z$.
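As a sanity check on this lemma, the number of path components can be computed directly from the $1$-skeleton. The sketch below (a hypothetical helper, not from the notes) uses union-find, which gives the same number that $\rank H_0(K) = |V_K| - \rank d_1$ predicts:

```python
def path_components(num_vertices, edges):
    """Number of path components of a simplicial complex, read off from
    its 1-skeleton by union-find; by the lemma this is the rank of H_0."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in range(num_vertices)})

# two disjoint edges on four vertices: H_0 should be Z^2
assert path_components(4, [(0, 1), (2, 3)]) == 2
# the simplicial circle: one component, so H_0 = Z
assert path_components(3, [(0, 1), (1, 2), (2, 0)]) == 1
```
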
+
+\subsection{Mayer-Vietoris sequence}
+The Mayer-Vietoris theorem is exactly like the Seifert-van Kampen theorem for fundamental groups, which tells us what happens when we glue two spaces together.
+
+Suppose we have a space $K = M\cup N$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mred, fill opacity=0.3] (-1, 0) ellipse (1.5 and 0.6);
+ \draw [fill=mblue, fill opacity=0.3] (1, 0) ellipse (1.5 and 0.6);
+ \node at (-2.5, 0.6) {$M$};
+ \node at (2.5, 0.6) {$N$};
+ \end{tikzpicture}
+\end{center}
+We will learn how to compute the homology of the union $M\cup N$ in terms of those of $M, N$ and $M\cap N$.
+
+Recall that to state the Seifert-van Kampen theorem, we needed to learn some new group-theoretic notions, such as free products with amalgamation. The situation is somewhat similar here. We will need to learn some algebra in order to state the Mayer-Vietoris theorem. The objects we need are known as \emph{exact sequences}.
+
+\begin{defi}[Exact sequence]
+ A pair of homomorphisms of abelian groups
+ \[
+ \begin{tikzcd}
+ A \ar [r, "f"] & B \ar [r, "g"] & C
+ \end{tikzcd}
+ \]
+ is \emph{exact (at $B$)} if
+ \[
+ \im f = \ker g.
+ \]
+ A collection of homomorphisms
+ \[
+ \begin{tikzcd}
+ \cdots \ar [r, "f_{i - 1}"] & A_{i} \ar [r, "f_i"] & A_{i + 1} \ar [r, "f_{i + 1}"] & A_{i + 2} \ar[r, "f_{i + 2}"] & \cdots
+ \end{tikzcd}
+ \]
+ is \emph{exact at $A_i$} if
+ \[
+ \ker f_i= \im f_{i - 1}.
+ \]
+ We say it is \emph{exact} if it is exact at every $A_i$.
+\end{defi}
+Recall that we have seen something similar before. When we defined the chain complexes, we had $d^2 = 0$, i.e.\ $\im d \subseteq \ker d$. Here we are requiring actual equality, $\im f = \ker g$, which is something even better.
+
+Algebraically, we can think of an exact sequence as a chain complex with trivial homology groups. Alternatively, we can see the homology groups as measuring the failure of a sequence to be exact.
+
+There is a particular type of exact sequences that is important.
+
+\begin{defi}[Short exact sequence]
+ A \emph{short exact sequence} is an exact sequence of the form
+ \[
+ \begin{tikzcd}
+ 0 \ar [r] & A \ar [r, "f"] & B \ar [r, "g"] & C \ar[r] & 0
+ \end{tikzcd}
+ \]
+\end{defi}
+What does this mean?
+\begin{itemize}
+ \item The kernel of $f$ is equal to the image of the zero map, i.e.\ $\{0\}$. So $f$ is injective.
+ \item The image of $g$ is the kernel of the zero map, which is everything. So $g$ is surjective.
+ \item $\im f = \ker g$.
+\end{itemize}
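As a toy finite illustration (not from the notes), we can check all three conditions for the short exact sequence $0 \to \Z/2 \to \Z/4 \to \Z/2 \to 0$, where the first map is multiplication by $2$ and the second is reduction mod $2$:

```python
# toy short exact sequence 0 -> Z/2 -f-> Z/4 -g-> Z/2 -> 0,
# where f(x) = 2x mod 4 and g(y) = y mod 2
A, B, C = range(2), range(4), range(2)

f = lambda x: (2 * x) % 4
g = lambda y: y % 2

assert len({f(x) for x in A}) == len(A)                  # f is injective
assert {g(y) for y in B} == set(C)                       # g is surjective
assert {f(x) for x in A} == {y for y in B if g(y) == 0}  # im f = ker g
```

Note that the sequence does not split here: $\Z/4 \not\cong \Z/2 \oplus \Z/2$, so knowing $A$ and $C$ alone does not determine $B$.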
+
+Since we like chain complexes, we can produce short exact sequences of chain complexes.
+\begin{defi}[Short exact sequence of chain complexes]
+ A \emph{short exact sequence of chain complexes} is a pair of \emph{chain} maps $i_{\Cdot}$ and $j_{\Cdot}$
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_{\Cdot} \ar [r, "i_{\Cdot}"] & B_{\Cdot} \ar [r, "j_{\Cdot}"] & C_{\Cdot} \ar[r] & 0
+ \end{tikzcd}
+ \]
+ such that for each $k$,
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_k \ar [r, "i_k"] & B_k \ar [r, "j_k"] & C_k \ar[r] & 0
+ \end{tikzcd}
+ \]
+ is exact.
+\end{defi}
+Note that by requiring the maps to be chain maps, we imply that $i_k$ and $j_k$ commute with the boundary maps of the chain complexes.
+
+The reason why we care about short exact sequences of chain complexes (despite them having such a long name) is the following result:
+\begin{thm}[Snake lemma]
+ If we have a short exact sequence of complexes
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_{\Cdot} \ar [r, "i_{\Cdot}"] & B_{\Cdot} \ar [r, "j_{\Cdot}"] & C_{\Cdot} \ar[r] & 0
+ \end{tikzcd}
+ \]
+ then a miracle happens to their homology groups. In particular, there is a long exact sequence (i.e.\ an exact sequence that is not short)
+ \[
+ \begin{tikzcd}
+ \cdots \ar [r] & H_n(A) \ar[r, "i_*"] & H_n(B) \ar[r, "j_*"] & H_n(C) \ar[out=0, in=180, looseness=2, overlay, dll, "\partial_*"']\\
+ & H_{n - 1}(A) \ar[r, "i_*"] & H_{n - 1}(B) \ar[r, "j_*"] & H_{n - 1}(C) \ar [r] & \cdots
+ \end{tikzcd}
+ \]
+ where $i_*$ and $j_*$ are induced by $i_{\Cdot}$ and $j_{\Cdot}$, and $\partial_*$ is a map we will define in the proof.
+\end{thm}
+Having an exact sequence is good, since if we know most of the terms in an exact sequence, we can figure out the remaining ones. To do this, we also need to understand the maps $i_*$, $j_*$ and $\partial_*$, but we don't need to understand \emph{all} of them, since some can be deduced from exactness. Yet, we still need to know some of them, and since $\partial_*$ is defined in the proof, we need to remember the proof.
+
+Note, however, if we replace $\Z$ in the definition of chain groups by a field (e.g.\ $\Q$), then all the groups become vector spaces. Then everything boils down to the rank-nullity theorem. Of course, this does not get us the right answer in exams, since we want to have homology groups over $\Z$, and not $\Q$, but this helps us to understand the exact sequences somewhat. If, at any point, homology groups confuse you, then you can try to work with homology groups over $\Q$ and get a feel for what homology groups are like, since this is easier.
+
+We are first going to apply this result to obtain the Mayer-Vietoris theorem.
+\begin{thm}[Mayer-Vietoris theorem]
+ Let $K, L, M, N$ be simplicial complexes with $K = M\cup N$ and $L = M\cap N$. We have the following inclusion maps:
+ \[
+ \begin{tikzcd}
+ L \ar[r, hook, "i"] \ar[d, hook, "j"] & M\ar[d, hook, "k"]\\
+ N \ar[r, hook, "\ell"] & K.
+ \end{tikzcd}
+ \]
+ Then there exists some natural homomorphism $\partial_*: H_n(K) \to H_{n - 1}(L)$ that gives the following long exact sequence:
+ \[
+ \begin{tikzcd}
+ \cdots \ar[r, "\partial_*"] & H_n(L) \ar[r, "i_* + j_*"] & H_n(M) \oplus H_n(N) \ar[r, "k_* - \ell_*"] & H_n(K)\ar[out=0, in=180, looseness=2, overlay, lld, "\partial_*"']\\
+ & H_{n - 1}(L) \ar[r, "i_* + j_*"] & H_{n - 1}(M) \oplus H_{n - 1}(N) \ar[r, "k_* - \ell_*"] & H_{n - 1}(K) \ar [r] & \cdots\\
+ &\cdots \ar[r] & H_0(M) \oplus H_0(N) \ar[r, "k_* - \ell_*"] & H_0(K) \ar [r] & 0
+ \end{tikzcd}
+ \]
+\end{thm}
+Here $A \oplus B$ is the direct sum of the two (abelian) groups, which may also be known as the Cartesian product.
+
+Note that unlike the Seifert-van Kampen theorem, this does not require the intersection $L = M \cap N$ to be (path) connected. This restriction was needed for Seifert-van Kampen since the fundamental group is unable to see things outside the path component of the basepoint, and hence it does not handle non-path-connected spaces well. However, homology groups do not have this problem.
+
+\begin{proof}
+ All we have to do is to produce a short exact sequence of complexes. We have
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & C_n(L) \ar[r, "i_n + j_n"] & C_n(M) \oplus C_n(N) \ar[r, "k_n - \ell_n"] & C_n(K) \ar[r] & 0
+ \end{tikzcd}
+ \]
+ Here $i_n + j_n: C_n(L) \to C_n(M) \oplus C_n(N)$ is the map $x \mapsto (x, x)$, while $k_n - \ell_n: C_n(M) \oplus C_n(N) \to C_n(K)$ is the map $(a, b) \mapsto a - b$ (after applying the appropriate inclusion maps).
+
+ It is easy to see that this is a short exact sequence of chain complexes. The image of $i_n + j_n$ is the set of all elements of the form $(x, x)$ with $x \in C_n(L)$, and the kernel of $k_n - \ell_n$ consists of the pairs $(a, b)$ that agree as chains in $K$; such a chain is supported on $M \cap N = L$, so these are exactly the elements $(x, x)$. It is also easy to see that $i_n + j_n$ is injective and $k_n - \ell_n$ is surjective.
+\end{proof}
+
+At first sight, the Mayer-Vietoris theorem might look a bit scary to use, since it involves all homology groups of all orders at once. However, this is actually often a good thing, since we can often use this to deduce the higher homology groups from the lower homology groups.
+
+Yet, to properly apply the Mayer-Vietoris sequence, we need to understand the map $\partial_*$. To do so, we need to prove the snake lemma.
+\begin{thm}[Snake lemma]
+ If we have a short exact sequence of complexes
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_{\Cdot} \ar [r, "i_{\Cdot}"] & B_{\Cdot} \ar [r, "j_{\Cdot}"] & C_{\Cdot} \ar[r] & 0
+ \end{tikzcd}
+ \]
+ then there is a long exact sequence
+ \[
+ \begin{tikzcd}
+ \cdots \ar [r] & H_n(A) \ar[r, "i_*"] & H_n(B) \ar[r, "j_*"] & H_n(C) \ar[out=0, in=180, looseness=2, overlay, dll, "\partial_*"']\\
+ & H_{n - 1}(A) \ar[r, "i_*"] & H_{n - 1}(B) \ar[r, "j_*"] & H_{n - 1}(C) \ar [r] & \cdots
+ \end{tikzcd}
+ \]
+ where $i_*$ and $j_*$ are induced by $i_{\Cdot}$ and $j_{\Cdot}$, and $\partial_*$ is a map we will define in the proof.
+\end{thm}
+The method of proving this is sometimes known as ``diagram chasing'', where we just ``chase'' around commutative diagrams to find the elements we need. The idea of the proof is as follows --- in the short exact sequence, we can think of $A$ as a subgroup of $B$, and $C$ as the quotient $B/A$, by the first isomorphism theorem. So any element of $C$ can be represented by an element of $B$. We apply the boundary map to this representative, and then exactness shows that this must come from some element of $A$. We then check carefully that this is well-defined, i.e.\ does not depend on the representatives chosen.
+
+\begin{proof}
+ The proof of this is in general not hard. It just involves a lot of checking of the details, such as making sure the homomorphisms are well-defined, are actually homomorphisms, are exact at all the places etc. The only important and non-trivial part is just the construction of the map $\partial_*$.
+
+ First we look at the following commutative diagram:
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_n \ar[r, "i_n"] \ar[d, "d_n"] & B_n \ar[r, "j_n"] \ar[d, "d_n"] & C_n \ar[r] \ar[d, "d_n"] & 0\\
+ 0 \ar[r] & A_{n - 1} \ar[r, "i_{n - 1}"] & B_{n - 1} \ar[r, "j_{n - 1}"] & C_{n - 1} \ar[r] & 0
+ \end{tikzcd}
+ \]
+ To construct $\partial_*: H_n(C) \to H_{n - 1}(A)$, let $[x] \in H_n(C)$ be a class represented by $x \in Z_n(C)$. We need to find a cycle $z \in A_{n - 1}$. By exactness, we know the map $j_n: B_n \to C_n$ is surjective. So there is a $y \in B_n$ such that $j_n(y) = x$. Since our target is $A_{n - 1}$, we want to move down to the next level. So consider $d_n(y) \in B_{n - 1}$. We would be done if $d_n(y)$ is in the image of $i_{n - 1}$. By exactness, this is equivalent to saying $d_n(y)$ is in the kernel of $j_{n -1 }$. Since the diagram is commutative, we know
+ \[
+ j_{n - 1}\circ d_n(y) = d_n \circ j_n (y) = d_n(x) = 0,
+ \]
+ using the fact that $x$ is a cycle. So $d_n (y) \in \ker j_{n - 1} = \im i_{n - 1}$. Moreover, by exactness again, $i_{n - 1}$ is injective. So there is a unique $z \in A_{n - 1}$ such that $i_{n - 1}(z) = d_n(y)$. We have now produced our $z$.
+
+ We are not done. We have $\partial_* [x] = [z]$ as our candidate definition, but we need to check many things:
+ \begin{enumerate}
+ \item We need to make sure $\partial_*$ is indeed a homomorphism.
+ \item We need $d_{n - 1}(z) = 0$ so that $[z] \in H_{n - 1}(A)$;
+ \item We need to check $[z]$ is well-defined, i.e.\ it does not depend on our choice of $y$ and $x$ for the homology class $[x]$.
+ \item We need to check the exactness of the resulting sequence.
+ \end{enumerate}
+ We now check them one by one:
+ \begin{enumerate}
+ \item Since every map involved in defining $\partial_*$ is a homomorphism, it follows that $\partial_*$ is also a homomorphism.
+ \item We check $d_{n - 1}(z) = 0$. To do so, we need to add an additional layer.
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_n \ar[r, "i_n"] \ar[d, "d_n"] & B_n \ar[r, "j_n"] \ar[d, "d_n"] & C_n \ar[r] \ar[d, "d_n"] & 0\\
+ 0 \ar[r] & A_{n - 1} \ar[r, "i_{n - 1}"] \ar[d, "d_{n - 1}"] & B_{n - 1} \ar[r, "j_{n - 1}"] \ar[d, "d_{n - 1}"] & C_{n - 1} \ar[r] \ar[d, "d_{n - 1}"] & 0\\
+ 0 \ar[r] & A_{n - 2} \ar[r, "i_{n - 2}"] & B_{n - 2} \ar[r, "j_{n - 2}"] & C_{n - 2} \ar[r] & 0
+ \end{tikzcd}
+ \]
+ We want to check that $d_{n - 1}(z) = 0$. We will use the commutativity of the diagram. In particular, we know
+ \[
+ i_{n - 2} \circ d_{n - 1}(z) = d_{n - 1} \circ i_{n - 1} (z) = d_{n - 1} \circ d_n(y) = 0.
+ \]
+ By exactness at $A_{n - 2}$, we know $i_{n - 2}$ is injective. So we must have $d_{n - 1}(z) = 0$.
+ \item
+ \begin{enumerate}
+ \item First, in the proof, suppose we picked a different $y'$ such that $j_n(y') = j_n(y) = x$. Then $j_n(y' - y) = 0$. So $y' - y \in \ker j_n = \im i_n$. Let $a \in A_n$ be such that $i_n(a) = y' - y$. Then
+ \begin{align*}
+ d_n(y') &= d_n(y' - y) + d_n(y) \\
+ &= d_n \circ i_n (a) + d_n(y) \\
+ &= i_{n - 1} \circ d_n (a) + d_n(y).
+ \end{align*}
+ Hence when we pull back $d_n(y')$ and $d_n(y)$ to $A_{n - 1}$, the results differ by the boundary $d_n(a)$, and hence produce the same homology class.
+ \item Suppose $[x'] = [x]$. We want to show that $\partial_* [x] = \partial_*[x']$. This time, we add a layer above.
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A_{n + 1} \ar[r, "i_{n + 1}"] \ar[d, "d_{n + 1}"] & B_{n + 1} \ar[r, "j_{n + 1}"] \ar[d, "d_{n + 1}"] & C_{n + 1} \ar[r] \ar[d, "d_{n + 1}"] & 0\\
+ 0 \ar[r] & A_n \ar[r, "i_n"] \ar[d, "d_n"] & B_n \ar[r, "j_n"] \ar[d, "d_n"] & C_n \ar[r] \ar[d, "d_n"] & 0\\
+ 0 \ar[r] & A_{n - 1} \ar[r, "i_{n - 1}"] & B_{n - 1} \ar[r, "j_{n - 1}"] & C_{n - 1} \ar[r] & 0
+ \end{tikzcd}
+ \]
+ By definition, since $[x'] = [x]$, there is some $c \in C_{n + 1}$ such that
+ \[
+ x' = x + d_{n + 1} (c).
+ \]
+ By surjectivity of $j_{n + 1}$, we can write $c = j_{n + 1}(b)$ for some $b \in B_{n + 1}$. By commutativity of the squares, we know
+ \[
+ x' = x + j_n \circ d_{n + 1} (b).
+ \]
+ The next step of the proof is to find some $y$ such that $j_n (y) = x$. Then
+ \[
+ j_n(y + d_{n + 1} (b)) = x'.
+ \]
+ So the corresponding $y'$ is $y' = y + d_{n + 1}(b)$. So $d_n (y) = d_n(y')$, and hence $\partial_*[x] = \partial_* [x']$.
+ \end{enumerate}
+ \item This is yet another standard diagram chasing argument. When reading this, it is helpful to look at a diagram and see how the elements are chased along. It is even more beneficial to attempt to prove this yourself.
+ \begin{enumerate}
+ \item $\im i_* \subseteq \ker j_*$: This follows from $j_n \circ i_n = 0$, which holds by exactness at $B_n$.
+ \item $\ker j_* \subseteq \im i_*$: Let $[b] \in H_n(B)$. Suppose $j_*([b]) = 0$. Then there is some $c \in C_{n + 1}$ such that $j_n(b) = d_{n + 1}(c)$. By surjectivity of $j_{n + 1}$, there is some $b' \in B_{n + 1}$ such that $j_{n + 1}(b') = c$. By commutativity, we know $j_n(b) = j_n \circ d_{n + 1}(b')$, i.e.
+ \[
+ j_n (b - d_{n + 1}(b')) = 0.
+ \]
+ By exactness of the sequence, we know there is some $a \in A_n$ such that
+ \[
+ i_n(a) = b - d_{n + 1}(b').
+ \]
+ Moreover,
+ \[
+ i_{n - 1} \circ d_n(a) = d_n \circ i_n (a) = d_n(b - d_{n + 1}(b')) = 0,
+ \]
+ using the fact that $b$ is a cycle. Since $i_{n - 1}$ is injective, it follows that $d_n(a) = 0$. So $[a] \in H_n(A)$. Then
+ \[
+ i_*([a]) = [b] - [d_{n + 1}(b')] = [b].
+ \]
+ So $[b] \in \im i_*$.
+ \item $\im j_* \subseteq \ker \partial_*$: Let $[b] \in H_n(B)$. To compute $\partial_*(j_*([b]))$, we first pull back $j_n(b)$ to $b \in B_n$. Then we compute $d_n(b)$ and then pull it back to $A_{n - 1}$. However, we know $d_n(b) = 0$ since $b$ is a cycle. So $\partial_*(j_*([b])) = 0$, i.e.\ $\partial_* \circ j_* = 0$.
+ \item $\ker \partial_* \subseteq \im j_*$: Let $[c] \in H_n(C)$ and suppose $\partial_*([c]) = 0$. Let $b \in B_n$ be such that $j_n(b) = c$, and $a \in A_{n - 1}$ such that $i_{n - 1}(a) = d_n(b)$. By assumption, $\partial_*([c]) = [a] = 0$. So we know $a$ is a boundary, say $a = d_n (a')$ for some $a' \in A_n$. Then by commutativity we know $d_n(b) = d_n \circ i_n (a')$. In other words,
+ \[
+ d_n(b - i_n(a')) = 0.
+ \]
+ So $[b - i_n(a')] \in H_n(B)$. Moreover,
+ \[
+ j_*([b - i_n(a')]) = [j_n(b) - j_n \circ i_n(a')] = [c].
+ \]
+ So $[c] \in \im j_*$.
+ \item $\im \partial_* \subseteq \ker i_*$: Let $[c] \in H_n(C)$. Let $b \in B_n$ be such that $j_n(b) = c$, and $a \in A_{n - 1}$ be such that $i_{n - 1}(a) = d_n(b)$. Then $\partial_*([c]) = [a]$. Then
+ \[
+ i_*([a]) = [i_{n - 1}(a)] = [d_n(b)] = 0.
+ \]
+ So $i_* \circ \partial_* = 0$.
+ \item $\ker i_* \subseteq \im \partial_*$: Let $[a] \in H_n(A)$ and suppose $i_*([a]) = 0$. So we can find some $b \in B_{n + 1}$ such that $i_n(a) = d_{n + 1}(b)$. Let $c = j_{n + 1}(b)$. Then
+ \[
+ d_{n + 1}(c) = d_{n + 1}\circ j_{n + 1} (b) = j_n \circ d_{n + 1}(b) = j_n \circ i_n (a) = 0.
+ \]
+ So $[c] \in H_{n + 1}(C)$. Then $[a] = \partial_*([c])$ by definition of $\partial_*$. So $[a] \in \im \partial_*$.\qedhere
+ \end{enumerate}%\qedhere
+ \end{enumerate}
+\end{proof}
+
+\subsection{Continuous maps and homotopy invariance}
+This is the most technical part of the homology section of the course. The goal is to see that the homology groups $H_*(K)$ depend only on the polyhedron, and not the simplicial structure on it. Moreover, we will show that they are \emph{homotopy invariants} of the space, and that homotopic maps $f \simeq g: |K| \to |L|$ induce equal maps $H_*(K) \to H_*(L)$. Note that this is a lot to prove. At this point, we don't even know arbitrary continuous maps induce \emph{any} map on the homology. We only know simplicial maps do.
+
+We start with a funny definition.
+\begin{defi}[Contiguous maps]
+ Simplicial maps $f, g: K \to L$ are \emph{contiguous} if for each $\sigma \in K$, the simplices $f(\sigma)$ and $g(\sigma)$ (i.e.\ the simplices spanned by the image of the vertices of $\sigma$) are faces of some simplex $\tau \in L$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [circ] {} -- (2, 0) node [pos=0.5, above] {$\sigma$} node [circ] {};
+
+ \fill [mblue, opacity=0.5] (4, 0) -- (6, 0) -- (5, 1.732) -- cycle;
+ \node at (5, 0.57733) {$\tau$};
+ \draw (4, 0) -- (6, 0) -- (5, 1.732) -- cycle;
+
+ \draw [mred, semithick] (4, 0) -- (6, 0);
+ \draw (1, -0.5) edge [bend right, mred, -latex'] (5, -0.3);
+ \node [mred] at (3, -1.3) {$g$};
+
+ \draw [morange, semithick] (4, 0) -- (5, 1.732);
+ \draw (1, 0.5) edge [bend left, morange, -latex'] (4.3, 0.866); % fix arrows
+ \node [morange] at (3, 1.7) {$f$};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+
+The significance of this definition comes in two parts: simplicial approximations of the same map are contiguous, and contiguous maps induce the same maps on homology.
+
+\begin{lemma}
+ If $f, g: K \to L$ are simplicial approximations to the same map $F$, then $f$ and $g$ are contiguous.
+\end{lemma}
+
+\begin{proof}
+ Let $\sigma \in K$, and pick some $s \in \mathring{\sigma}$. Then $F(s) \in \mathring{\tau}$ for some $\tau \in L$. Then the definition of simplicial approximation implies that for any simplicial approximation $f$ to $F$, $f(\sigma)$ spans a face of $\tau$. The same applies to $g$, so $f(\sigma)$ and $g(\sigma)$ are both faces of $\tau$, i.e.\ $f$ and $g$ are contiguous.
+\end{proof}
+
+\begin{lemma}
+ If $f, g: K \to L$ are contiguous simplicial maps, then
+ \[
+ f_* = g_* : H_n(K) \to H_n(L)
+ \]
+ for all $n$.
+\end{lemma}
+
+Geometrically, the homotopy is obvious. If $f(\sigma)$ and $g(\sigma)$ are both faces of $\tau$, then we just pick the homotopy as $\tau$. The algebraic definition is less inspiring, and it takes some staring to see it is indeed what we think it should be.
+\begin{proof}
+ We will write down a chain homotopy between $f$ and $g$:
+ \[
+ h_n((a_0, \cdots, a_n)) = \sum_{i = 0}^n (-1)^i [f(a_0), \cdots, f(a_i), g(a_i), \cdots, g(a_n)],
+ \]
+ where the square brackets denote the corresponding oriented simplex if there are no repeated vertices, and $0$ otherwise.
+
+ We can now check by direct computation that this is indeed a chain homotopy.
+\end{proof}
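+
+ To get a feel for the formula, here is the check in dimension $0$, with the sign convention $d \circ h + h \circ d = g - f$ (and $h_{-1} = 0$): for a vertex $(a_0)$, we have
+ \[
+ d_1 h_0((a_0)) = d_1 [f(a_0), g(a_0)] = (g(a_0)) - (f(a_0)),
+ \]
+ provided $f(a_0) \not= g(a_0)$; and if $f(a_0) = g(a_0)$, then $h_0((a_0))$ and $(g - f)((a_0))$ both vanish.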
+
+Now we know that if $f, g:K \to L$ are both simplicial approximations to the same $F$, then they induce equal maps on homology. However, we do not yet know that continuous maps induce well-defined maps on homology, since to produce simplicial approximations, we needed to perform barycentric subdivision, and we need to make sure subdivision does not change the homology.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \node [below] at (1, 0) {$K$};
+
+ \draw [->] (2.5, 0.866) -- (4.5, 0.866);
+ \begin{scope}[shift={(5, 0)}]
+ \draw [fill=mblue, fill opacity=0.5] (0, 0) -- (2, 0) -- (1, 1.732) -- cycle;
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0.5, 0.866) {};
+ \node [circ] at (1.5, 0.866) {};
+ \node [circ] at (1, 0.5773) {};
+ \draw (0, 0) -- (1, 0.5773) -- (2, 0);
+ \draw (1, 0) -- (1, 1.732);
+ \draw (0.5, 0.866) -- (1, 0.5773) -- (1.5, 0.866);
+ \node [below] at (1, 0) {$K'$};
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Given a barycentric subdivision, we need to choose a simplicial approximation to the identity map $a: K' \to K$. It turns out this is easy to do, and we can do it almost arbitrarily.
+
+\begin{lemma}
+ Each vertex $\hat{\sigma} \in K'$ is the barycenter of some $\sigma \in K$. Choose $a(\hat{\sigma})$ to be an arbitrary vertex of $\sigma$. This defines a function $a: V_{K'} \to V_K$, and $a$ is a simplicial approximation to the identity. Moreover, \emph{every} simplicial approximation to the identity is of this form.
+\end{lemma}
+
+\begin{proof}
+ Omitted. % see notes.
+\end{proof}
+
+Next, we need to show this gives an isomorphism on the homologies.
+
+\begin{prop}
+ Let $K'$ be the barycentric subdivision of $K$, and $a: K' \to K$ a simplicial approximation to the identity map. Then the induced map $a_*: H_n(K') \to H_n(K)$ is an isomorphism for all $n$.
+\end{prop}
+
+\begin{proof}
+ We first deal with the case where $K$ is a simplex $\sigma$ together with its faces. Note that $K$ is then a cone (with any vertex as the cone vertex), and $K'$ is also a cone (with the barycenter as the cone vertex). Therefore
+ \[
+ H_n(K) \cong H_{n}(K') \cong
+ \begin{cases}
+ \Z & n = 0\\
+ 0 & n > 0
+ \end{cases}
+ \]
+ So only $n = 0$ is (a little) interesting, but it is easy to check that $a_*$ is an isomorphism in this case, since it just maps a vertex to a vertex, and all vertices in each simplex are in the same homology class.
+
+ To finish the proof, note that $K$ is built up by gluing up simplices, and $K'$ is built by gluing up subdivided simplices. So to understand their homology groups, we use the Mayer-Vietoris sequence.
+
+ Given a complicated simplicial complex $K$, let $\sigma$ be a simplex of $K$ of maximal dimension. We let $L = K \setminus \{\sigma\}$ (note that $L$ includes the boundary of $\sigma$). We let $S = \{\sigma\text{ and all its faces}\} \subseteq K$ and $T = L \cap S$.
+
+ We can do similarly for $K'$, and let $L', S', T'$ be the corresponding barycentric subdivisions. We have $K = L \cup S$ and $K' = L' \cup S'$ (and $L' \cap S' = T'$). By the previous lemma, we see our construction of $a$ gives $a(L') \subseteq L$, $a(S') \subseteq S$ and $a(T') \subseteq T$. So these induce maps of the corresponding homology groups
+ \[
+ \begin{tikzcd}[column sep=tiny]
+ H_n(T') \ar[r] \ar[d, "a_*"] & H_n(S') \oplus H_n(L') \ar[r] \ar[d, "a_* \oplus a_*"] & H_n(K') \ar[r] \ar[d, "a_*"] & H_{n - 1}(T') \ar[r] \ar[d, "a_*"] & H_{n - 1}(S') \oplus H_{n - 1} (L') \ar[d, "a_* \oplus a_*"]\\
+ H_n(T) \ar[r] & H_n(S) \oplus H_n(L) \ar[r] & H_n(K) \ar[r] & H_{n - 1}(T) \ar[r] & H_{n - 1}(S) \oplus H_{n - 1} (L)
+ \end{tikzcd}
+ \]
+ By induction, we may assume all the maps except the middle one are isomorphisms. The five lemma then implies that the middle map is also an isomorphism. The five lemma is as follows:
+\end{proof}
+
+\begin{lemma}[Five lemma]
+ Consider the following commutative diagram:
+ \[
+ \begin{tikzcd}
+ A_1 \ar[r] \ar[d, "a"] & B_1 \ar[r] \ar[d, "b"] & C_1 \ar[r] \ar[d, "c"] & D_1 \ar[r] \ar[d, "d"] & E_1 \ar[d, "e"]\\
+ A_2 \ar[r] & B_2 \ar[r] & C_2 \ar[r] & D_2 \ar[r] & E_2
+ \end{tikzcd}
+ \]
+ If the top and bottom rows are exact, and $a, b, d, e$ are isomorphisms, then $c$ is also an isomorphism.
+\end{lemma}
+
+\begin{proof}
+ Exercise in example sheet.
+\end{proof}
+
+We now have everything we need to move from simplicial maps to continuous maps. Putting everything we have proved so far together, we obtain the following result:
+
+\begin{prop}
+ To each continuous map $f: |K| \to |L|$, there is an associated map $f_*: H_n(K) \to H_n(L)$ (for all $n$) given by
+ \[
+ f_* = s_* \circ \nu_{K, r}^{-1},
+ \]
+ where $s: K^{(r)} \to L$ is a simplicial approximation to $f$, and $\nu_{K, r}: H_n (K^{(r)}) \to H_n(K)$ is the isomorphism given by composing the maps $H_n(K^{(i)}) \to H_n(K^{(i - 1)})$ induced by simplicial approximations to the identity.
+
+ Furthermore:
+ \begin{enumerate}
+ \item $f_*$ does not depend on the choice of $r$ or $s$.
+ \item If $g: |M| \to |K|$ is another continuous map, then
+ \[
+ (f \circ g)_* = f_* \circ g_*.
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ Omitted.
+\end{proof}
+
+\begin{cor}
+ If $f: |K| \to |L|$ is a homeomorphism, then $f_*: H_n(K) \to H_n(L)$ is an isomorphism for all $n$.
+\end{cor}
+
+\begin{proof}
+ Immediate from (ii) of the previous proposition, applied to $f$ and $f^{-1}$.
+\end{proof}
+
+This is good. We now know that homology groups are properties of the space itself, not of the chosen simplicial complex. For example, we computed the homology groups of one particular triangulation of the $n$-sphere, and now we know the same groups arise from \emph{all} triangulations of the $n$-sphere.
+
+We're not exactly there yet. We have just seen that homology groups are invariant under homeomorphisms. What we would like to know is that they are invariant under \emph{homotopy equivalence}. In other words, we want to know that homotopic maps induce equal maps on the homology groups.
+
+The strategy is:
+\begin{enumerate}
+ \item Show that ``small'' homotopies don't change the maps on $H_n$.
+ \item Note that all homotopies can be decomposed into ``small'' homotopies.
+\end{enumerate}
+
+\begin{lemma}
+ Let $L$ be a simplicial complex (with $|L| \subseteq \R^n$). Then there is an $\varepsilon = \varepsilon(L) > 0$ such that if $f, g: |K| \to |L|$ satisfy $\|f(x) - g(x)\| < \varepsilon$ for all $x \in |K|$, then $f_* = g_*: H_n(K) \to H_n(L)$ for all $n$.
+\end{lemma}
+The idea of the proof is that if $\|f(x) - g(x)\|$ is small enough, we can barycentrically subdivide $K$ such that we get a simplicial approximation to both $f$ and $g$.
+\begin{proof}
+ By the Lebesgue number lemma, there is an $\varepsilon > 0$ such that each ball of radius $2\varepsilon$ in $|L|$ lies in some star $\St_L(w)$.
+
+ Now apply the Lebesgue number lemma again to $\{f^{-1}(B_\varepsilon(y))\}_{y \in |L|}$, an open cover of $|K|$, and get $\delta > 0$ such that for all $x \in |K|$, we have
+ \[
+ f(B_\delta(x)) \subseteq B_\varepsilon(y) \subseteq B_{2\varepsilon}(y) \subseteq \St_L(w)
+ \]
+ for some $y \in |L|$ and $\St_L(w)$. Now since $g$ and $f$ differ by at most $\varepsilon$, we know
+ \[
+ g(B_\delta(x)) \subseteq B_{2\varepsilon}(y) \subseteq \St_L(w).
+ \]
+ Now subdivide $r$ times so that $\mu(K^{(r)}) < \frac{1}{2} \delta$. So for all $v \in V_{K^{(r)}}$, we know
+ \[
+ \St_{K^{(r)}} (v) \subseteq B_\delta(v).
+ \]
+ This gets mapped by \emph{both} $f$ and $g$ to $\St_L(w)$ for the same $w \in V_L$. We define $s: V_{K^{(r)}} \to V_L$ sending $v \mapsto w$. Then $s$ is a simplicial approximation to both $f$ and $g$, and so $f_* = s_* = g_*$.
+\end{proof}
+
+\begin{thm}
+ Let $f\simeq g: |K| \to |L|$. Then $f_* = g_*$.
+\end{thm}
+
+\begin{proof}
+ Let $H: |K| \times I \to |L|$ be a homotopy from $f$ to $g$. Since $|K|\times I$ is compact, we know $H$ is uniformly continuous. Pick $\varepsilon = \varepsilon(L)$ as in the previous lemma. Then there is some $\delta > 0$ such that $|s - t| < \delta$ implies $\|H(x, s) - H(x, t)\| < \varepsilon$ for all $x \in |K|$.
+
+ Now choose $0 = t_0 < t_1 < \cdots < t_n = 1$ such that $t_i - t_{i - 1} < \delta$ for all $i$. Define $f_i: |K| \to |L|$ by $f_i(x) = H(x, t_i)$. Then we know $\|f_i - f_{i - 1}\| < \varepsilon$ for all $i$. Hence $(f_i)_* = (f_{i - 1})_*$. Therefore $(f_0)_* = (f_n)_*$, i.e.\ $f_* = g_*$.
+\end{proof}
+
+This is good, since now we can deal not only with spaces that are homeomorphic to complexes, but also with spaces that are homotopy equivalent to complexes. This is important, since all complexes are compact, but we would like to talk about non-compact spaces such as open balls as well.
+
+\begin{defi}[$h$-triangulation and homology groups]
+ An \emph{$h$-triangulation} of a space $X$ is a simplicial complex $K$ and a homotopy equivalence $h: |K| \to X$. We define $H_n(X) = H_n(K)$ for all $n$.
+\end{defi}
+
+\begin{lemma}
+ $H_n(X)$ is well-defined, i.e.\ it does not depend on the choice of $K$.
+\end{lemma}
+
+\begin{proof}
+ Clear from the previous theorem, since composing $h: |K| \to X$ with a homotopy inverse of $h': |L| \to X$ gives a homotopy equivalence $|K| \to |L|$.
+\end{proof}
+
+We have spent a lot of time and effort developing all the technical results and machinery of homology groups. We will now use them to \emph{do stuff}.
+
+\subsubsection*{Digression --- historical motivation}
+At first, we motivated the study of algebraic topology by saying we wanted to show that $\R^n$ and $\R^m$ are not homeomorphic. However, historically, this is not why people studied algebraic topology, since there are other ways to prove this is true.
+
+Initially, people were interested in studying knots. These can be thought of as embeddings $S^1 \hookrightarrow S^3$. We really should just think of $S^3$ as $\R^3 \cup \{\infty\}$, where the point at infinity is just there for convenience.
+
+The most interesting knot we know is the \emph{unknot} $U$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [thick] circle [radius=1cm];
+ \end{tikzpicture}
+\end{center}
+A less interesting but still interesting knot is the \emph{trefoil} $T$.
+\begin{center}
+ \begin{tikzpicture}
+ \foreach \brk in {0,1,2} {
+ \begin{scope}[rotate=\brk * 120]
+ \node[knot crossing, transform shape, inner sep=1.5pt] (k\brk) at (0,-1) {};
+ \end{scope}
+ }
+ \foreach \brk/\brl in {0/2,1/0,2/1} {
+% \pgfmathparse{int(Mod(\brk - 1,3))}
+% \edef\brl{\pgfmathresult}
+ \draw[thick] (k\brk) .. controls (k\brk.4 north west) and (k\brl.4 north east) .. (k\brl.center);
+ \draw[thick] (k\brk.center) .. controls (k\brk.16 south west) and (k\brl.16 south east) .. (k\brl);
+ }
+ \end{tikzpicture}
+\end{center}
+One question people at the time asked was whether the trefoil knot is just the unknot in disguise. It obviously isn't, but can we prove it? In general, given two knots, is there any way we can distinguish if they are the same?
+
+The idea is to study the fundamental groups of the knots. It would obviously be silly to compute the fundamental groups of $U$ and $T$ themselves, since they are both homeomorphic to $S^1$, and the groups are just $\Z$. Instead, we look at the fundamental groups of the complements. It is not difficult to see that
+\[
+ \pi_1(S^3 \setminus U) \cong \Z,
+\]
+and with some suitable machinery, we find
+\[
+ \pi_1(S^3 \setminus T) \cong \bra a, b\mid a^3 b^{-2}\ket.
+\]
+Staring at it hard enough, we can construct a surjection $\pi_1(S^3 \setminus T) \to S_3$. This tells us $\pi_1(S^3 \setminus T)$ is non-abelian, and is certainly not $\Z$. So we know $U$ and $T$ are genuinely different knots.
+
+\subsection{Homology of spheres and applications}
+
+\begin{lemma}
+ The sphere $S^{n - 1}$ is triangulable, and
+ \[
+ H_k(S^{n - 1}) \cong
+ \begin{cases}
+ \Z & k = 0, n - 1\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+\end{lemma}
+\begin{proof}
+ We already did this computation for the standard $(n - 1)$-sphere $\partial \Delta^n$, where $\Delta^n$ is the standard $n$-simplex.
+\end{proof}
+We immediately have a few applications.
+\begin{prop}
+ $\R^n \not\cong \R^m$ for $m \not= n$.
+\end{prop}
+
+\begin{proof}
+ See example sheet 4.
+\end{proof}
+
+\begin{thm}[Brouwer's fixed point theorem (in all dimensions)]
+ There is no retraction of $D^n$ onto $\partial D^n \cong S^{n - 1}$. Hence every continuous map $f: D^n \to D^n$ has a fixed point.
+\end{thm}
+
+\begin{proof}
+ The proof is exactly the same as the two-dimensional case, with homology groups instead of the fundamental group.
+
+ We first deduce the second part from the first. Suppose $f: D^n \to D^n$ has no fixed point. Then the map $g: D^n \to \partial D^n$ shown below, sending $x$ to the point where the ray from $f(x)$ through $x$ hits $\partial D^n$, is a continuous retraction.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw circle [radius=2cm];
+ \node [circ] at (-0.4, -0.3) {};
+ \node at (-0.4, -0.3) [below] {$x$};
+
+ \node [circ] at (0.4, 0.3) {};
+ \node at (0.4, 0.3) [right] {$f(x)$};
+ \draw (0.4, 0.3) -- (-1.6, -1.2);
+ \node at (-1.6, -1.2) [circ] {};
+ \node at (-1.6, -1.2) [anchor = north east] {$g(x)$};
+ \end{tikzpicture}
+ \end{center}
+ So we now show that no such continuous retraction can exist. Suppose $r: D^n \to \partial D^n$ is a retraction, i.e.\ $r \circ i = \id: \partial D^n \to \partial D^n$, where $i$ is the inclusion $\partial D^n \hookrightarrow D^n$.
+ \[
+ \begin{tikzcd}
+ S^{n - 1} \ar[r, "i"] & D^n \ar[r, "r"] & S^{n - 1}
+ \end{tikzcd}
+ \]
+ We now take the $(n - 1)$th homology groups to obtain
+ \[
+ \begin{tikzcd}
+ H_{n - 1}(S^{n - 1}) \ar[r, "i_*"] & H_{n - 1}(D^{n}) \ar[r, "r_*"] & H_{n - 1}(S^{n - 1}).
+ \end{tikzcd}
+ \]
+ Since $r \circ i$ is the identity, the composition $r_* \circ i_*$ must also be the identity, but this is clearly nonsense, since $H_{n - 1}(S^{n - 1}) \cong \Z$ while $H_{n - 1}(D^n) \cong 0$. So such a continuous retraction cannot exist.
+\end{proof}
+Note it is important that we can work with continuous maps directly, and not just their simplicial approximations. Otherwise, here we can only show that every simplicial approximation of $f$ has a fixed point, but this does not automatically entail that $f$ itself has a fixed point.
+
+For the next application, recall from the first example sheet that if $n$ is odd, then the antipodal map $a: S^n \to S^n$ is homotopic to the identity. What if $n$ is even?
+
+The idea is to consider the effect on the homology group:
+\[
+ a_*: H_n(S^n) \to H_n(S^n).
+\]
+By our previous calculation, we know $a_*$ is a map $a_*: \Z \to \Z$. If $a$ were homotopic to the identity, then $a_*$ would have to be the identity map. We will now compute $a_*$ and show that it is multiplication by $-1$ when $n$ is even.
+
+To do this, we want to use a triangulation that is compatible with the antipodal map. The standard triangulation clearly doesn't work. Instead, we use the following triangulation $h: |K| \to S^n$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [gray, ->] (-2, 0) -- (2, 0);
+ \draw [gray, ->] (0, -2) -- (0, 2);
+ \draw [gray, ->] (0.894, 1.788) -- (-0.894, -1.788);
+
+ \node [circ] (x1) at (1, 0) {};
+ \node [circ] (x2) at (-1, 0) {};
+ \node [circ] (z1) at (0, 1) {};
+ \node [circ] (z2) at (0, -1) {};
+ \node [circ] (y1) at (-0.3, -0.6) {};
+ \node [circ] (y2) at (0.3, 0.6) {};
+
+ \draw (x1) -- (y1) -- (z1) -- (x1) -- (z2) -- (y1) -- (x2) -- (z1);
+ \draw (z2) -- (x2);
+ \draw [dashed] (x2) -- (y2) -- (x1);
+ \draw [dashed] (z2) -- (y2) -- (z1);
+ \end{tikzpicture}
+\end{center}
+The vertices of $K$ are given by
+\[
+ V_K = \{\pm\mathbf{e}_0, \pm \mathbf{e}_1, \cdots, \pm\mathbf{e}_n\}.
+\]
+This triangulation works nicely with the antipodal map, since this maps a vertex to a vertex. To understand the homology group better, we need the following lemma:
+
+\begin{lemma}
+ In the triangulation of $S^n$ given by vertices $V_K = \{\pm\mathbf{e}_0, \pm \mathbf{e}_1, \cdots, \pm\mathbf{e}_n\}$, the element
+ \[
+ x = \sum_{\boldsymbol\varepsilon \in \{\pm 1\}^{n + 1}} \varepsilon_0 \cdots \varepsilon_n (\varepsilon_0 \mathbf{e}_0, \cdots, \varepsilon_n \mathbf{e}_n)
+ \]
+ is a cycle and generates $H_n(S^n)$.
+\end{lemma}
+
+\begin{proof}
+ By direct computation, we see that $d x = 0$. So $x$ is a cycle. To show it generates $H_n(S^n)$, we note that everything in $H_n(S^n) \cong \Z$ is a multiple of the generator, and since $x$ has coefficients $\pm 1$, it cannot be a multiple of anything else (apart from $-x$). So $x$ is indeed a generator.
+\end{proof}
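+
+For instance, when $n = 1$, the triangulation is the square with vertices $\pm\mathbf{e}_0, \pm\mathbf{e}_1$, and
+\[
+ x = (\mathbf{e}_0, \mathbf{e}_1) - (-\mathbf{e}_0, \mathbf{e}_1) - (\mathbf{e}_0, -\mathbf{e}_1) + (-\mathbf{e}_0, -\mathbf{e}_1).
+\]
+Applying $d$ to each term,
+\[
+ dx = \big((\mathbf{e}_1) - (\mathbf{e}_0)\big) - \big((\mathbf{e}_1) - (-\mathbf{e}_0)\big) - \big((-\mathbf{e}_1) - (\mathbf{e}_0)\big) + \big((-\mathbf{e}_1) - (-\mathbf{e}_0)\big) = 0,
+\]
+since each vertex appears once with each sign.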
+
+Now we can prove our original proposition.
+\begin{prop}
+ If $n$ is even, then the antipodal map $a \not\simeq \id$.
+\end{prop}
+
+\begin{proof}
+ We can directly compute that $a_* x = (- 1)^{n + 1}x$. If $n$ is even, then $a_* = -1$, but $\id_* = 1$. So $a \not\simeq \id$.
+\end{proof}
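+
+To spell out the computation: $a$ sends the vertex $\varepsilon_i \mathbf{e}_i$ to $-\varepsilon_i \mathbf{e}_i$, so substituting $\boldsymbol\delta = -\boldsymbol\varepsilon$ gives
+\begin{align*}
+ a_* x &= \sum_{\boldsymbol\varepsilon \in \{\pm 1\}^{n + 1}} \varepsilon_0 \cdots \varepsilon_n (-\varepsilon_0 \mathbf{e}_0, \cdots, -\varepsilon_n \mathbf{e}_n)\\
+ &= \sum_{\boldsymbol\delta \in \{\pm 1\}^{n + 1}} (-\delta_0) \cdots (-\delta_n) (\delta_0 \mathbf{e}_0, \cdots, \delta_n \mathbf{e}_n) = (-1)^{n + 1} x.
+\end{align*}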
+
+\subsection{Homology of surfaces}
+We want to study compact surfaces and their homology groups. To work with the simplicial homology, we need to assume they are triangulable. We will not prove this fact, and just assume it to be true (it really is).
+
+Recall we have classified compact surfaces, and have found the following orientable surfaces $\Sigma_g$.
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw circle [radius=1];
+ \draw [dashed] (1, 0) arc (0:180:1 and 0.25);
+ \draw (-1, 0) arc (180:360:1 and 0.25);
+ \end{scope}
+
+ \begin{scope}[shift={(3, 0)}]
+ \draw (0,0) ellipse (1 and 0.56);
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0);
+ \end{scope}
+
+ \begin{scope}[shift={(6.5, 0)}, yscale=1.5]
+ \draw plot [smooth cycle, tension=0.8] coordinates {(-1.7, 0) (-1.4, -0.27) (-0.7, -0.35) (0, -0.23) (0.7, -0.35) (1.4, -0.27) (1.7, 0) (1.4, 0.27) (0.7, 0.35) (0, 0.23) (-0.7, 0.35) (-1.4, 0.27)};
+
+ \foreach \x in {0.8, -0.8}{
+ \begin{scope}[shift={(\x, -0.03)}]
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0);
+ \end{scope}
+ }
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+We also have non-compact versions of these, known as $F_g$, where we take the above ones and cut out a hole:
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}
+ \draw circle [radius=1];
+ \draw [dashed] (1, 0) arc (0:180:1 and 0.25);
+ \draw (-1, 0) arc (180:360:1 and 0.25);
+ \fill [white] (0.954, 0) ellipse (0.1 and 0.3);
+ \draw (0.954, 0.3) arc(90:270:0.1 and 0.3);
+ \draw (0.954, 0.3) arc(90:270:0.05 and 0.3);
+ \end{scope}
+
+ \begin{scope}[shift={(3, 0)}]
+ \draw plot [smooth, tension=0.8] coordinates {(1.2, 0.3) (1, 0.3) (0, 0.56) (-1, 0.3) (-1, -0.3) (0, -0.56) (1, -0.3) (1.2, -0.3)};
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.3)--(0.45, 0);
+ \draw (1.2, 0) ellipse (0.1 and 0.3);
+ \end{scope}
+
+ \begin{scope}[shift={(6.5, 0)}, yscale=1.5]
+ \draw plot [smooth, tension=0.8] coordinates {(1.6, 0.2) (1.4, 0.27) (0.7, 0.35) (0, 0.23) (-0.7, 0.35) (-1.4, 0.27) (-1.7, 0) (-1.4, -0.27) (-0.7, -0.35) (0, -0.23) (0.7, -0.35) (1.4, -0.27) (1.6, -0.2)};
+ \draw (1.6, 0) ellipse (0.1 and 0.2);
+
+ \foreach \x in {0.8, -0.8}{
+ \begin{scope}[shift={(\x, -0.03)}]
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0);
+ \end{scope}
+ }
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+We want to compute the homology groups of these $\Sigma_g$. One way to do so is to come up with specific triangulations of the space, and then compute the homology groups directly. However, there is a better way to do it. Given a $\Sigma_g$, we can slice it apart along a circle and write it as
+\[
+ \Sigma_g = F_{g - 1} \cup F_1.
+\]
+\begin{center}
+ \begin{tikzpicture}[yscale=1.5]
+ \draw plot [smooth cycle, tension=0.8] coordinates {(-1.7, 0) (-1.4, -0.27) (-0.7, -0.35) (0, -0.23) (0.8, -0.35) (1.6, -0.23) (2.3, -0.35) (3.0, -0.27) (3.3, 0) (3.0, 0.27) (2.3, 0.35) (1.6, 0.23) (0.8, 0.35) (0, 0.23) (-0.7, 0.35) (-1.4, 0.27)};
+
+ \foreach \x in {2.4, 0.8, -0.8}{
+ \begin{scope}[shift={(\x, -0.03)}]
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0);
+ \end{scope}
+ }
+
+ \draw (1.6, 0.23) arc(90:270:0.1 and 0.23);
+ \draw [dashed] (1.6, -0.23) arc(-90:90:0.1 and 0.23);
+ \end{tikzpicture}
+\end{center}
+Then, we need to compute $H_*(F_g)$. Fortunately, this is not too hard, since it turns out $F_g$ is homotopy equivalent to a relatively simple space. Recall that we produced $\Sigma_g$ by starting with a $4g$-gon and gluing edges:
+\begin{center}
+ \begin{tikzpicture}[rotate=22.5]
+ \coordinate (0) at (1, 0);
+ \coordinate (1) at (0, 0);
+ \coordinate (2) at (-0.707, -0.707);
+ \coordinate (3) at (-0.707, -1.707);
+ \coordinate (4) at (0, -2.414);
+ \coordinate (5) at (1, -2.414);
+ \coordinate (6) at (1.707, -1.707);
+ \coordinate (7) at (1.707, -0.707);
+
+ \fill [mblue, opacity=0.5] (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);
+ \draw [mred, ->-=0.6] (0) -- (1) node [above, pos=0.5] {$a_1$};
+ \draw [mblue, ->-=0.6] (1) -- (2) node [left, pos=0.5] {$b_1$};
+ \draw [mred, ->-=0.6] (3) -- (2) node [left, pos=0.5] {$a_1$};
+ \draw [mblue, ->-=0.6] (4) -- (3) node [below, pos=0.5] {$b_1$};
+
+ \draw [mgreen, ->-=0.6] (4) -- (5) node [below, pos=0.5] {$a_2$};
+ \draw [morange, ->-=0.6] (5) -- (6) node [right, pos=0.5] {$b_2$};
+ \draw [mgreen, ->-=0.6] (7) -- (6) node [right, pos=0.5] {$a_2$};
+ \draw [morange, ->-=0.6] (0) -- (7) node [above, pos=0.5] {$b_2$};
+ \end{tikzpicture}
+\end{center}
+Now what is $F_g$? $F_g$ is $\Sigma_g$ with a hole cut out of it. We'll cut the hole out at the center, and be left with
+\begin{center}
+ \begin{tikzpicture}[rotate=22.5]
+ \coordinate (0) at (1, 0);
+ \coordinate (1) at (0, 0);
+ \coordinate (2) at (-0.707, -0.707);
+ \coordinate (3) at (-0.707, -1.707);
+ \coordinate (4) at (0, -2.414);
+ \coordinate (5) at (1, -2.414);
+ \coordinate (6) at (1.707, -1.707);
+ \coordinate (7) at (1.707, -0.707);
+
+ \fill [mblue, opacity=0.5] (0) -- (1) -- (2) -- (3) -- (4) -- (5) -- (6) -- (7) -- (0);
+ \draw [mred, ->-=0.6] (0) -- (1) node [above, pos=0.5] {$a_1$};
+ \draw [mblue, ->-=0.6] (1) -- (2) node [left, pos=0.5] {$b_1$};
+ \draw [mred, ->-=0.6] (3) -- (2) node [left, pos=0.5] {$a_1$};
+ \draw [mblue, ->-=0.6] (4) -- (3) node [below, pos=0.5] {$b_1$};
+
+ \draw [mgreen, ->-=0.6] (4) -- (5) node [below, pos=0.5] {$a_2$};
+ \draw [morange, ->-=0.6] (5) -- (6) node [right, pos=0.5] {$b_2$};
+ \draw [mgreen, ->-=0.6] (7) -- (6) node [right, pos=0.5] {$a_2$};
+ \draw [morange, ->-=0.6] (0) -- (7) node [above, pos=0.5] {$b_2$};
+
+ \draw [fill=white] (0.5, -1.207) circle [radius=0.5];
+ \end{tikzpicture}
+\end{center}
+We can now expand the hole to the boundary, and get a deformation retraction from $F_g$ to its boundary:
+\begin{center}
+ \begin{tikzpicture}[rotate=22.5]
+ \coordinate (0) at (1, 0);
+ \coordinate (1) at (0, 0);
+ \coordinate (2) at (-0.707, -0.707);
+ \coordinate (3) at (-0.707, -1.707);
+ \coordinate (4) at (0, -2.414);
+ \coordinate (5) at (1, -2.414);
+ \coordinate (6) at (1.707, -1.707);
+ \coordinate (7) at (1.707, -0.707);
+
+ \draw [mred, ->-=0.6] (0) -- (1) node [above, pos=0.5] {$a_1$};
+ \draw [mblue, ->-=0.6] (1) -- (2) node [left, pos=0.5] {$b_1$};
+ \draw [mred, ->-=0.6] (3) -- (2) node [left, pos=0.5] {$a_1$};
+ \draw [mblue, ->-=0.6] (4) -- (3) node [below, pos=0.5] {$b_1$};
+
+ \draw [mgreen, ->-=0.6] (4) -- (5) node [below, pos=0.5] {$a_2$};
+ \draw [morange, ->-=0.6] (5) -- (6) node [right, pos=0.5] {$b_2$};
+ \draw [mgreen, ->-=0.6] (7) -- (6) node [right, pos=0.5] {$a_2$};
+ \draw [morange, ->-=0.6] (0) -- (7) node [above, pos=0.5] {$b_2$};
+ \end{tikzpicture}
+\end{center}
+Gluing the edges together, we obtain the rose with $2g$ petals, which we shall call $X_{2g}$:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] (0) {};
+ \foreach \d in {0,90,180,270} {
+ \draw (0) edge [loop, looseness=60, out=60+\d,in=120+\d] (0);
+ }
+ \end{tikzpicture}
+\end{center}
+Using the Mayer-Vietoris sequence (exercise), we see that
+\[
+ H_n(F_g) \cong H_n(X_{2g}) \cong
+ \begin{cases}
+ \Z & n = 0\\
+ \Z^{2g} & n = 1\\
+ 0 & n > 1
+ \end{cases}
+\]
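+
+One way to carry out the exercise is by induction on the number of petals: write $X_k = X_{k - 1} \cup S^1$ (with a suitable triangulation), where the two subcomplexes meet in the single central vertex. Since $H_1(\mathrm{pt}) = 0$, the Mayer-Vietoris sequence contains
+\[
+ \begin{tikzcd}[column sep=small]
+ 0 \ar[r] & H_1(X_{k - 1}) \oplus H_1(S^1) \ar[r] & H_1(X_k) \ar[r] & H_0(\mathrm{pt}) \ar[r] & H_0(X_{k - 1}) \oplus H_0(S^1)
+ \end{tikzcd}
+\]
+The last map is injective, since it sends the class of the point to (plus or minus) its class in each of the two pieces. So $H_1(X_k) \cong H_1(X_{k - 1}) \oplus \Z$, and inductively $H_1(X_{2g}) \cong \Z^{2g}$, while $H_0(X_{2g}) \cong \Z$ by path-connectedness, and the higher groups vanish.
+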
+We now compute the homology groups of $\Sigma_g$. The Mayer-Vietoris sequence gives the following
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & H_2(S^1) \ar[r] & H_2(F_{g - 1}) \oplus H_2(F_1) \ar[r] & H_2(\Sigma_g) \ar[out=0, in=180, looseness=2, overlay, dll]\\
+ & H_1(S^1) \ar[r] & H_1(F_{g - 1}) \oplus H_1(F_1) \ar[r] & H_1(\Sigma_g)\ar[out=0, in=180, looseness=2, overlay, dll]\\
+ & H_0(S^1) \ar[r] & H_0(F_{g - 1}) \oplus H_0(F_1) \ar[r] & H_0(\Sigma_g) \ar[r] & 0
+ \end{tikzcd}
+\]
+We can put in the terms we already know, and get
+\[
+ \begin{tikzcd}[column sep=small]
+ 0 \ar[r] & H_2(\Sigma_g) \ar[r] & \Z \ar[r] & \Z^{2g} \ar[r] & H_1(\Sigma_g) \ar[r] & \Z \ar[r] & \Z^2 \ar[r] & \Z \ar[r] & 0
+ \end{tikzcd}
+\]
+By exactness, we know $H_2(\Sigma_g) = \ker\{\Z \to \Z^{2g}\}$. We now note that this map is in fact the zero map, which we can check by direct computation --- it sends $S^1$ to the boundary of the hole of $F_{g - 1}$ and $F_1$. If we look at the picture, after the deformation retraction, this loop passes through each one-cell twice, once with each orientation. So the contributions cancel, giving $0$. Hence $H_2(\Sigma_g) = \Z$.
+
+To compute $H_1(\Sigma_g)$, for convenience, we break it up into a short exact sequence, noting that the function $\Z \to \Z^{2g}$ is zero:
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z^{2g} \ar[r] & H_1(\Sigma_g) \ar[r] & \ker (\Z \to \Z^2) \ar[r] & 0.
+ \end{tikzcd}
+\]
+We now claim that $\Z \to \Z^2$ is injective --- this is obvious, since it sends $1 \mapsto (1, 1)$. So the kernel is zero. So
+\[
+ H_1(\Sigma_g) \cong \Z^{2g}.
+\]
+This is a typical application of the Mayer-Vietoris sequence. We write down the long exact sequence, and put in the terms we already know. This does not immediately give us the answer we want --- we will need to understand one or two of the maps, and then we can figure out all the groups we want.
+
+\subsection{Rational homology, Euler and Lefschetz numbers}
+So far, we have been working with chains with integral coefficients. It turns out we can use rational coefficients instead. In the past, $C_n(K)$ was an abelian group, or a $\Z$-module. If we use rational coefficients, since $\Q$ is a field, this becomes a vector space, and we can use a lot of nice theorems about vector spaces, such as the rank-nullity theorem. Moreover, we can reasonably talk about things like the dimensions of these homology groups.
+
+\begin{defi}[Rational homology group]
+ For a simplicial complex $K$, we can define the \emph{rational $n$-chain group} $C_n(K, \Q)$ in the same way as $C_n(K) = C_n(K, \Z)$. That is, $C_n(K, \Q)$ is the vector space over $\Q$ with basis the $n$-simplices of $K$ (with a choice of orientation).
+
+ We can define $d_n, Z_n, B_n$ as before, and the \emph{rational $n$th homology group} is
+ \[
+ H_n(K; \Q) \cong \frac{Z_n(K; \Q)}{B_n(K; \Q)}.
+ \]
+\end{defi}
+Now our homology group is a vector space, and it is much easier to work with. However, the consequence is that we will lose some information. Fortunately, the way in which we lose information is very well-understood and rather simple.
+
+\begin{lemma}
+ If $H_n(K) \cong \Z^k \oplus F$ for $F$ a finite group, then $H_n(K; \Q) \cong \Q^k$.
+\end{lemma}
+
+\begin{proof}
+ Exercise. % exercise
+\end{proof}
+Hence, when passing to rational homology groups, we lose all information about the torsion part. In some cases this loss is insignificant, but in others it can be substantial. For example, in $\RP^2$, we have
+\[
+ H_n(\RP^2) \cong
+ \begin{cases}
+ \Z & n = 0\\
+ \Z/2 & n = 1\\
+ 0 & n > 1
+ \end{cases}
+\]
+If we pass on to rational coefficients, then we have lost everything in $H_1(\RP^2)$, and $\RP^2$ looks just like a point:
+\[
+ H_n(\RP^2; \Q) \cong H_n(*; \Q).
+\]
+\begin{eg}
+ We have
+ \[
+    H_k(S^n; \Q) \cong
+ \begin{cases}
+ \Q & k = 0, n\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ We also have
+ \[
+    H_k(\Sigma_g; \Q) \cong
+ \begin{cases}
+ \Q & k = 0, 2\\
+ \Q^{2g} & k = 1\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+ In this case, we have not lost any information because there was no torsion part of the homology groups.
+
+ However, for the non-orientable surfaces, one can show that
+ \[
+ H_k(E_n) =
+ \begin{cases}
+ \Z & k = 0\\
+ \Z^{n - 1} \times \Z / 2 & k = 1\\
+ 0 & \text{otherwise}
+ \end{cases},
+ \]
+ so
+ \[
+ H_k(E_n; \Q) =
+ \begin{cases}
+ \Q & k = 0\\
+ \Q^{n - 1} & k = 1\\
+ 0 & \text{otherwise}
+ \end{cases}.
+ \]
+
+  This time, the result differs from the integral coefficient case, where we have an extra $\Z/2$ term in $H_1$.
+\end{eg}
+
+The advantage of this is that the homology ``groups'' are in fact vector spaces, and we can talk about things like dimensions. Also, maps on homology groups are simply linear maps, i.e.\ matrices, and we can study their properties with proper linear algebra.
+
+Recall that the Euler characteristic of a polyhedron is defined as ``faces $-$ edges $+$ vertices''. This works for two-dimensional surfaces, and we would like to extend this to higher dimensions. There is an obvious way to do so, by counting the simplices in each dimension and attaching alternating signs. However, if we define it this way, it is not clear that this is a property of the space itself, and not just a property of the triangulation. Hence, we define it as follows:
+
+\begin{defi}[Euler characteristic]
+ The \emph{Euler characteristic} of a triangulable space $X$ is
+ \[
+ \chi(X) = \sum_{i \geq 0} (-1)^i \dim_\Q H_i(X; \Q).
+ \]
+\end{defi}
+This clearly depends only on the homotopy type of $X$, and not the triangulation. We will later show this is equivalent to what we used to have.
+
+More generally, we can define the \emph{Lefschetz number}.
+\begin{defi}[Lefschetz number]
+ Given any map $f: X \to X$, we define the \emph{Lefschetz number} of $f$ as
+ \[
+ L(f) = \sum_{i \geq 0} (-1)^i \tr(f_*: H_i(X; \Q) \to H_i(X; \Q)).
+ \]
+\end{defi}
+Why is this a generalization of the Euler characteristic? Just note that the trace of the identity map on a vector space is the dimension of that space. So we have
+\[
+ \chi(X) = L (\id).
+\]
+\begin{eg}
+ We have
+ \[
+ \chi(S^n) =
+ \begin{cases}
+ 2 & n\text{ even}\\
+ 0 & n\text{ odd}
+ \end{cases}
+ \]
+ We also have
+ \[
+ \chi(\Sigma_g) = 2 - 2g,\quad \chi(E_n) = 2 - n.
+ \]
+\end{eg}
+
+\begin{eg}
+ If $\alpha: S^n \to S^n$ is the antipodal map, we saw that $\alpha_*: H_n(S^n) \to H_n(S^n)$ is multiplication by $(-1)^{n + 1}$. So
+ \[
+ L(\alpha) = 1 + (-1)^n (-1)^{n + 1} = 1 - 1 = 0.
+ \]
+\end{eg}
+We see that even though the antipodal map has different behaviour for different dimensions, the Lefschetz number ends up being zero all the time. We will soon see why this is the case.
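The cancellation can be checked by pure arithmetic, using only the traces computed above ($1$ on $H_0$, and $(-1)^{n + 1}$ on $H_n$, weighted by the sign $(-1)^n$):

```python
# Lefschetz number of the antipodal map on S^n: the identity on H_0
# contributes trace 1, and multiplication by (-1)^(n+1) on H_n
# contributes, with its sign, (-1)^n * (-1)^(n+1) = -1.
lefschetz = [1 + (-1) ** n * (-1) ** (n + 1) for n in range(1, 8)]
assert lefschetz == [0] * 7
```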
+
+Why do we want to care about the Lefschetz number? The important thing is that we will use this to prove a really powerful generalization of Brouwer's fixed point theorem that allows us to talk about things that are not balls.
+
+Before that, we want to understand the Lefschetz number first. To define the Lefschetz number, we take the trace of $f_*$, and this is a map of the homology groups. However, we would like to understand the Lefschetz number in terms of the chain groups, since these are easier to comprehend. Recall that the homology groups are defined as quotients of the chain groups, so we would like to know what happens to the trace when we take quotients.
+\begin{lemma}
+ Let $V$ be a finite-dimensional vector space and $W \leq V$ a subspace. Let $A: V\to V$ be a linear map such that $A(W) \subseteq W$. Let $B = A|_W: W \to W$ and $C: V/W \to V/W$ the induced map on the quotient. Then
+ \[
+ \tr (A) = \tr(B) + \tr(C).
+ \]
+\end{lemma}
+
+\begin{proof}
+ In the right basis,
+ \[
+ A =
+ \begin{pmatrix}
+ B & A'\\
+ 0 & C
+ \end{pmatrix}.\qedhere
+ \]
+\end{proof}
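We can sanity-check this numerically. The following minimal sketch (the dimensions and random integer entries are arbitrary choices for illustration) assembles a block upper triangular matrix as in the proof:

```python
import numpy as np

# If A preserves a subspace W, then in a basis adapted to W the matrix of A
# is block upper triangular, with B = A|_W and C the induced map on V/W.
rng = np.random.default_rng(0)

B = rng.integers(-5, 5, (2, 2))        # A restricted to W (dim W = 2)
C = rng.integers(-5, 5, (3, 3))        # induced map on V/W (dim V/W = 3)
A_prime = rng.integers(-5, 5, (2, 3))  # the off-diagonal block A'

# Assemble A in block form [[B, A'], [0, C]]
A = np.block([[B, A_prime], [np.zeros((3, 2), dtype=int), C]])

# The diagonal of A is the diagonal of B followed by that of C
assert np.trace(A) == np.trace(B) + np.trace(C)
```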
+
+What this allows us to do is to not look at the induced maps on homology, but just the maps on chain complexes. This makes our life much easier when it comes to computation.
+
+\begin{cor}
+ Let $f_{\Cdot}: C_{\Cdot}(K; \Q) \to C_{\Cdot}(K; \Q)$ be a chain map. Then
+ \[
+ \sum_{i \geq 0} (-1)^i \tr(f_i: C_i(K) \to C_i(K)) = \sum_{i \geq 0} (-1)^i \tr(f_*: H_i(K) \to H_i(K)),
+ \]
+ with homology groups understood to be over $\Q$.
+\end{cor}
+This is a great corollary. The thing on the right is the conceptually right thing to have --- homology groups are nice and are properties of the space itself, not the triangulation. However, to do actual computations, we want to work with the chain groups.
+
+\begin{proof}
+ There is an exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & B_i(K; \Q) \ar[r] & Z_i(K; \Q) \ar[r] & H_i(K; \Q) \ar[r] & 0
+ \end{tikzcd}
+ \]
+  This is since $H_i(K; \Q)$ is defined as the quotient of $Z_i$ by $B_i$. We also have the exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & Z_i(K; \Q) \ar[r] & C_i(K; \Q) \ar[r, "d_i"] & B_{i - 1}(K; \Q) \ar[r] & 0
+ \end{tikzcd}
+ \]
+ This is true by definition of $B_{i - 1}$ and $Z_i$. Let $f_i^H, f_i^B, f_i^Z, f_i^C$ be the various maps induced by $f$ on the corresponding groups. Then we have
+ \begin{align*}
+ L(|f|) &= \sum_{i \geq 0} (-1)^i \tr(f_i^H)\\
+ &= \sum_{i \geq 0} (-1)^i (\tr(f_i^Z) - \tr(f_i^B))\\
+ &= \sum_{i \geq 0} (-1)^i (\tr(f_i^C) - \tr(f_{i - 1}^B) - \tr(f_i^B)).
+ \end{align*}
+ Because of the alternating signs in dimension, each $f_i^B$ appears twice in the sum with opposite signs. So all $f_i^B$ cancel out, and we are left with
+ \[
+ L(|f|) = \sum_{i \geq 0} (-1)^i \tr (f_i^C).\qedhere
+ \]
+\end{proof}
+
+Since $\tr (\id|_{C_i(K; \Q)}) = \dim_\Q C_i(K; \Q)$, which is just the number of $i$-simplices, this tells us the Euler characteristic we just defined is the usual Euler characteristic, i.e.
+\[
+ \chi(X) = \sum_{i \geq 0} (-1)^i \text{ number of $i$-simplices}.
+\]
+Finally, we get to the important theorem of the section.
+\begin{thm}[Lefschetz fixed point theorem]
+ Let $f: X \to X$ be a continuous map from a triangulable space to itself. If $L(f) \not= 0$, then $f$ has a fixed point.
+\end{thm}
+
+\begin{proof}
+ We prove the contrapositive. Suppose $f$ has no fixed point. We will show that $L(f) = 0$. Let
+ \[
+ \delta = \inf \{ |x - f(x)|: x \in X\}
+ \]
+  thinking of $X$ as a subset of $\R^n$. We know this is non-zero, since $f$ has no fixed point, and $X$ is compact (and hence the infimum is attained by some $x$).
+
+ Choose a triangulation $L: |K| \to X$ such that $\mu(K) < \frac{\delta}{2}$. We now let
+ \[
+ g: K^{(r)} \to K
+ \]
+ be a simplicial approximation to $f$. Since we picked our triangulation to be so fine, for $x \in \sigma \in K$, we have
+ \[
+ |f(x) - g(x)| < \frac{\delta}{2}
+ \]
+ since the mesh is already less than $\frac{\delta}{2}$. Also, we know
+ \[
+ |f(x) - x| \geq \delta.
+ \]
+ So we have
+ \[
+ |g(x) - x| > \frac{\delta}{2}.
+ \]
+  So we must have $g(x) \not\in \sigma$, since any two points of $\sigma$ are at distance less than $\mu(K) < \frac{\delta}{2}$ apart. The conclusion is that for any $\sigma \in K$, we must have
+ \[
+ g(\sigma) \cap \sigma = \emptyset.
+ \]
+ Now we compute $L(f) = L(|g|)$. The only complication here is that $g$ is a map from $K^{(r)}$ to $K$, and the domains and codomains are different. So we need to compose it with $s_i: C_i(K; \Q) \to C_i(K^{(r)}; \Q)$ induced by inverses of simplicial approximations to the identity map. Then we have
+ \begin{align*}
+ L(|g|) &= \sum_{i \geq 0} (-1)^i \tr(g_*: H_i(X; \Q) \to H_i(X; \Q))\\
+ &= \sum_{i \geq 0} (-1)^i \tr(g_i \circ s_i: C_i(K; \Q) \to C_i(K; \Q))\\
+    \intertext{Now note that $s_i$ takes a simplex $\sigma$ to a sum of subsimplices of $\sigma$. So $g_i \circ s_i$ takes every simplex off itself, and each diagonal term of the matrix of $g_i \circ s_i$ is $0$! Hence the trace is}
+ L(|g|) &= 0.\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{eg}
+ If $X$ is \emph{any} contractible polyhedron (e.g.\ a disk), then $L(f) = 1$ for any $f: X \to X$, which is obvious once we contract the space to a point. So $f$ has a fixed point.
+\end{eg}
+
+\begin{eg}
+  Suppose $G$ is a path-connected topological group, i.e.\ $G$ is a group and a topological space, and inversion and multiplication are continuous maps.
+
+ If $g \not= 1$, then the map
+ \begin{align*}
+ r_g: G &\to G\\
+ \gamma &\mapsto \gamma g
+ \end{align*}
+  has no fixed point. This implies $L(r_g) = 0$. However, since the space is path-connected, $r_g \simeq \id$, where the homotopy is obtained by multiplying elements along a path from $e$ to $g$. So
+ \[
+ \chi(G) = L(\id_G) = L(r_g) = 0.
+ \]
+ So if $G \not= 1$, then in fact $\chi(G) = 0$.
+\end{eg}
+This is quite fun. We have worked with many surfaces. Which of them can be topological groups? The torus is one, since it is just $S^1 \times S^1$, and $S^1$ is a topological group. The Klein bottle? Maybe. However, the other surfaces cannot be, since they do not have Euler characteristic $0$.
+\end{document}
diff --git a/books/cam/II_M/galois_theory.tex b/books/cam/II_M/galois_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ab731b2a137338a78328912341bde5fb8cb29130
--- /dev/null
+++ b/books/cam/II_M/galois_theory.tex
@@ -0,0 +1,3274 @@
+\documentclass[a4paper]{article}
+
+\def\npart {II}
+\def\nterm {Michaelmas}
+\def\nyear {2015}
+\def\nlecturer {C.\ Birkar}
+\def\ncourse {Galois Theory}
+
+\input{header}
+
+\newcommand\Fr{\mathrm{Fr}}
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Groups, Rings and Modules is essential}
+\vspace{10pt}
+
+\noindent Field extensions, tower law, algebraic extensions; irreducible polynomials and relation with simple algebraic extensions. Finite multiplicative subgroups of a field are cyclic. Existence and uniqueness of splitting fields.\hspace*{\fill} [6]
+
+\vspace{5pt}
+\noindent Existence and uniqueness of algebraic closure.\hspace*{\fill} [1]
+
+\vspace{5pt}
+\noindent Separability. Theorem of primitive element. Trace and norm.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Normal and Galois extensions, automorphic groups. Fundamental theorem of Galois theory.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Galois theory of finite fields. Reduction mod $p$.\hspace*{\fill} [2]
+
+\vspace{5pt}
+\noindent Cyclotomic polynomials, Kummer theory, cyclic extensions. Symmetric functions. Galois theory of cubics and quartics.\hspace*{\fill} [4]
+
+\vspace{5pt}
+\noindent Solubility by radicals. Insolubility of general quintic equations and other classical problems.\hspace*{\fill} [3]
+
+\vspace{5pt}
+\noindent Artin's theorem on the subfield fixed by a finite group of automorphisms. Polynomial invariants of a finite group; examples.\hspace*{\fill} [2]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+The most famous result of Galois theory is that there is no general solution to polynomial equations of degree $5$ or above in terms of radicals. However, this result was, in fact, proven before Galois theory existed, and goes under the name of the \emph{Abel--Ruffini theorem}. What Galois theory does provide is a way to decide whether a given polynomial has a solution in terms of radicals, as well as a nice way to prove this result.
+
+However, Galois theory is more than equation solving. In fact, the \emph{fundamental theorem of Galois theory}, which is obviously an important theorem in Galois theory, has nothing at all to do with equation solving. Instead, it is about group theory.
+
+In modern days, Galois theory is often said to be the study of field extensions. The idea is that we have a field $K$, and then add more elements to get a field $L$. When we want to study solutions to polynomial equations, what we add is the roots of the polynomials. We then study the properties of this field extension, and in some cases, show that this field extension cannot be obtained by just adding radicals.
+
+For certain ``nice'' field extensions $K \subseteq L$, we can assign to it the \emph{Galois group} $\Gal(L/K)$. In general, given any group $G$, we can find subgroups of $G$. On the other hand, given a field extension $K\subseteq L$, we can try to find some intermediate field $F$ that can be fitted into $K \subseteq F \subseteq L$. The key idea of Galois theory is that these two processes are closely related --- we can establish a one-to-one correspondence between the subgroups of $G$ and the intermediate fields $F$.
+
+Moreover, many properties of (intermediate) field extensions correspond to analogous ideas in group theory. For example, we have the notion of normal subgroups, and hence there is an analogous notion of normal extensions. Similarly, we have soluble extensions (i.e.\ extensions that can be obtained by adding radicals), and these correspond to ``soluble groups''. In Galois theory, we will study how group-theoretic notions and field-theoretic notions interact.
+
+Nowadays, Galois theory is an important field in mathematics, and finds its applications in number theory, algebraic geometry and even cryptography.
+
+\section{Solving equations}
+Galois theory grew out of the desire to \emph{solve equations}, in particular polynomial equations. To begin with, we will come up with general solutions to polynomial equations of degree up to $4$. However, this is the best we can do, as we will later show in the course --- there is no general solution to polynomial equations of degree $5$ or above.
+
+Before we start, we will define some notation that we will frequently use.
+
+If $R$ is a ring, then $R[t]$ is the polynomial ring over $R$ in the variable $t$. Usually, we take $R = \Q$ and consider polynomials $f(t) \in \Q[t]$. The objective is then to find roots to the equation $f(t) = 0$. Often, we want to restrict our search domain. For example, we might ask if there is a root in $\Q$. We will thus use $\Root_f(X)$ to denote the set of all roots of $f$ in $X$.
+
+\subsubsection*{Linear equations}
+Suppose that $f = t + a \in \Q[t]$ (with $a\in \Q$). This is easy to solve --- we have $\Root_f(\Q) = \{-a\}$.
+
+\subsubsection*{Quadratic equations}
+Consider a simple quadratic $f = t^2 + 1 \in \Q[t]$. Then $\Root_f(\Q) = \emptyset$ since the square of any rational is non-negative. However, in the complex plane, we have $\Root_f(\C) = \{\sqrt{-1}, -\sqrt{-1}\}$.
+
+In general, let $f = t^2 + at + b\in \Q[t]$. Then as we all know, the roots are given by
+\[
+ \Root_f(\C) = \left\{\frac{-a \pm \sqrt{a^2 - 4b}}{2}\right\}
+\]
+\subsubsection*{Cubic equations}
+Let $f = t^3 + c\in \Q[t]$. The roots are then
+\[
+ \Root_f(\C) = \{\sqrt[3]{-c}, \mu\sqrt[3]{-c}, \mu^2 \sqrt[3]{-c}\},
+\]
+where $\mu = \frac{-1 + \sqrt{-3}}{2}$ is a primitive cube root of unity. Note that $\mu$ satisfies $\mu^3 = 1$ and $\mu^2 + \mu + 1 = 0$.
+
+In general, let $f = t^3 + at^2 + bt + c \in \Q[t]$, and let $\Root_f(\C) = \{\alpha_1, \alpha_2, \alpha_3\}$, not necessarily distinct.
+
+Our objective is to solve $f = 0$. Before doing so, we have to make it explicit what we mean by ``solving'' the equation. As in solving the quadratic, we want to express the roots $\alpha_1, \alpha_2$ and $\alpha_3$ in terms of ``radicals'' involving $a, b$ and $c$.
+
+Unlike the quadratic case, there is no straightforward means of coming up with a general formula. The formula we now have is the result of many years of hard work, and the substitutions we make will seemingly come out of nowhere. However, after a lot of magic, we will indeed come up with a general formula.
+
+We first simplify our polynomial by assuming $a = 0$. Given any polynomial $f = t^3 + at^2 + bt + c$, we know $a$ is the negative of the sum of the roots. So we can increase each root by $\frac{a}{3}$ so that the coefficient of $t^2$ vanishes. So we perform the change of variables $t\mapsto t - \frac{a}{3}$, and get rid of the coefficient of $t^2$. So we can assume $a = 0$.
+
+Let $\mu$ be as above. Define
+\begin{align*}
+ \beta &= \alpha_1 + \mu \alpha_2 + \mu^2 \alpha_3\\
+ \gamma &= \alpha_1 + \mu^2 \alpha_2 + \mu \alpha_3
+\end{align*}
+These are the \emph{Lagrange resolvents}. We obtain
+\begin{align*}
+ \beta\gamma &= \alpha_1^2 + \alpha_2^2 +\alpha_3^2 + (\mu + \mu^2)(\alpha_1\alpha_2 + \alpha_2\alpha_3 + \alpha_1\alpha_3)\\
+ \intertext{Since $\mu^2 + \mu + 1 = 0$, we have $\mu^2 + \mu = -1$. So we can simplify to obtain}
+ &= (\alpha_1 + \alpha_2 + \alpha_3)^2 - 3(\alpha_1\alpha_2 + \alpha_2\alpha_3 + \alpha_1\alpha_3)\\
+ \intertext{We have $\alpha_1 + \alpha_2 + \alpha_3 = -a = 0$, while $b = \alpha_1\alpha_2 + \alpha_2\alpha_3 + \alpha_1\alpha_3$. So}
+ &= -3b\\
+ \intertext{Cubing, we obtain}
+ \beta^3\gamma^3 &= -27b^3.
+\end{align*}
+On the other hand, recalling again that $\alpha_1 + \alpha_2 + \alpha_3 = 0$, we have
+\begin{align*}
+ \beta^3 + \gamma^3 &= (\alpha_1 + \mu \alpha_2 + \mu^2 \alpha_3)^3 + (\alpha_1 + \mu^2\alpha_2 + \mu \alpha_3)^3 + (\alpha_1 + \alpha_2 + \alpha_3)^3\\
+ &= 3(\alpha_1^3 + \alpha_2^3 + \alpha_3^3) + 18\alpha_1\alpha_2\alpha_3\\
+ \intertext{We have $\alpha_1\alpha_2\alpha_3 = -c$, and since $\alpha_i^3 + b\alpha_i + c = 0$ for all $i$, summing gives $\alpha_1^3 + \alpha_2^3 + \alpha_3^3 + 3c = 0$. So}
+ &= -27c
+\end{align*}
+Hence, we obtain
+\[
+ (t - \beta^3)(t - \gamma^3) = t^2 + 27ct - 27b^3.
+\]
+We already know how to solve this equation using the quadratic formula. We obtain
+\[
+ \{\beta^3, \gamma^3\} = \left\{\frac{-27 c \pm \sqrt{(27c)^2 + 4\times 27b^3}}{2}\right\}
+\]
+We now have $\beta^3$ and $\gamma^3$ in terms of radicals. So we can find $\beta$ and $\gamma$ in terms of radicals. Finally, we can solve for $\alpha_i$ using
+\begin{align*}
+ 0 &= \alpha_1 + \alpha_2 + \alpha_3\\
+ \beta &= \alpha_1 + \mu \alpha_2 + \mu^2 \alpha_3\\
+ \gamma &= \alpha_1 + \mu^2 \alpha_2 + \mu \alpha_3
+\end{align*}
+In particular, we obtain
+\begin{align*}
+ \alpha_1 &= \frac{1}{3}(\beta + \gamma)\\
+ \alpha_2 &= \frac{1}{3}(\mu^2 \beta + \mu \gamma)\\
+ \alpha_3 &= \frac{1}{3}(\mu \beta + \mu^2 \gamma)
+\end{align*}
+So we can solve a cubic in terms of radicals.
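We can verify the recipe numerically on a sample depressed cubic. The polynomial $t^3 - 7t + 6$, with roots $1, 2, -3$, is an arbitrary choice for illustration:

```python
import numpy as np

# Depressed cubic t^3 + b t + c; here t^3 - 7t + 6, with roots 1, 2, -3.
b, c = -7.0, 6.0
mu = (-1 + np.sqrt(-3 + 0j)) / 2            # primitive cube root of unity

# beta^3, gamma^3 are the roots of t^2 + 27c t - 27 b^3
disc = np.sqrt((27 * c) ** 2 + 4 * 27 * b ** 3 + 0j)
beta3 = (-27 * c + disc) / 2

beta = beta3 ** (1 / 3)   # any cube root works (it permutes the alphas)
gamma = -3 * b / beta     # choose gamma's cube root so that beta*gamma = -3b

roots = [(beta + gamma) / 3,
         (mu ** 2 * beta + mu * gamma) / 3,
         (mu * beta + mu ** 2 * gamma) / 3]

for r in roots:
    assert abs(r ** 3 + b * r + c) < 1e-9
```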
+
+There was a lot of magic involved, and indeed the formula was discovered through many years of hard work. The derivation is also not very illuminating, since we have no idea where these substitutions came from and why they work.
+
+\subsubsection*{Quartic equations}
+Assume $f = t^4 + at^3 + bt^2 + ct + d\in \Q[t]$. Let $\Root_f(\C) = \{\alpha_1, \alpha_2, \alpha_3, \alpha_4\}$. Can we express all these in terms of radicals? Again the answer is yes, but the procedure is much more complicated.
+
+We can perform a similar change of variable to assume $a = 0$. So $\alpha_1 + \alpha_2 + \alpha_3 + \alpha_4 = 0$.
+
+This time, define
+\begin{align*}
+ \beta &= \alpha_1 + \alpha_2\\
+ \gamma &= \alpha_1 + \alpha_3\\
+ \lambda &= \alpha_1 + \alpha_4
+\end{align*}
+Doing some calculations, we see that
+\begin{align*}
+ \beta^2 &= -(\alpha_1 + \alpha_2)(\alpha_3 + \alpha_4)\\
+ \gamma^2 &= -(\alpha_1 + \alpha_3)(\alpha_2 + \alpha_4)\\
+ \lambda^2 &= -(\alpha_1 + \alpha_4)(\alpha_2 + \alpha_3)
+\end{align*}
+Now consider
+\begin{align*}
+ g &= (t - \beta^2)(t - \gamma^2)(t - \lambda^2)\\
+ &= t^3 + 2bt^2 + (b^2 - 4d)t - c^2
+\end{align*}
+This we know how to solve. Once we have $\beta^2, \gamma^2$ and $\lambda^2$, we can take square roots (choosing signs so that $\beta\gamma\lambda = -c$) and recover the roots, e.g.\ $\alpha_1 = \frac{1}{2}(\beta + \gamma + \lambda)$. So we are done.
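A quick numerical check that $g$ really equals $(t - \beta^2)(t - \gamma^2)(t - \lambda^2)$ for a sample depressed quartic; the coefficients below, chosen for illustration, give the roots $1, 2, 3, -6$:

```python
import numpy as np

# Depressed quartic t^4 + b t^2 + c t + d with roots 1, 2, 3, -6
b, c, d = -25.0, 60.0, -36.0
alpha = np.roots([1, 0, b, c, d])      # alpha_1, ..., alpha_4

# Note {beta^2, gamma^2, lambda^2} only depends on the pairings of roots,
# so any labelling of np.roots' output gives the same three values.
beta = alpha[0] + alpha[1]
gamma = alpha[0] + alpha[2]
lam = alpha[0] + alpha[3]

# coefficients of the monic cubic (t - beta^2)(t - gamma^2)(t - lambda^2)
g = np.poly([beta ** 2, gamma ** 2, lam ** 2])

assert np.allclose(g, [1, 2 * b, b ** 2 - 4 * d, -c ** 2])
```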
+
+\subsubsection*{Quintics and above}
+So far so good. But how about polynomials of higher degrees? In general, let $f \in \Q[t]$. Can we write down all the roots of $f$ in terms of radicals? We know that the answer is yes if $\deg f \leq 4$.
+
+Unfortunately, for $\deg f \geq 5$, the answer is no. Of course, this ``no'' means no \emph{in general}. For example, $f = (t - 1)(t - 2) \cdots (t - 5)\in \Q[t]$ has the obvious roots in terms of radicals.
+
+There isn't an easy proof of this result. The general idea is to first associate a \emph{field extension} $F \supseteq \Q$ for our polynomial $f$. This field $F$ will be obtained by adding all roots of $f$. Then we associate a \emph{Galois group} $G$ to this field extension. We will then prove a theorem that says $f$ has a solution in terms of radicals if and only if the Galois group is ``soluble'', where ``soluble'' has a specific algebraic definition in group theory we will explore later. Finally, we find specific polynomials whose Galois group is not soluble.
+
+\section{Field extensions}
+After all that (hopefully) fun introduction and motivation, we will now start Galois theory in a more abstract way. The modern approach is to describe these in terms of field extensions.
+
+\subsection{Field extensions}
+\begin{defi}[Field extension]\index{field extension}\index{subfield}
+  A \emph{field extension} is an inclusion of a field $K\subseteq L$, where $K$ inherits the algebraic operations from $L$. We also write this as \term{$L/K$}. Alternatively, we can define this by an injective homomorphism $K\to L$. We say $L$ is an \emph{extension} of $K$, and $K$ is a \emph{subfield} of $L$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\R/\Q$ is a field extension.
+ \item $\C/\Q$ is a field extension.
+ \item $\Q(\sqrt{2}) = \{a + b\sqrt{2}: a, b\in \Q\} \subseteq \R$ is a field extension over $\Q$.
+ \end{enumerate}
+\end{eg}
+
+Given a field extension $L/K$, we want to quantify how much ``bigger'' $L$ is compared to $K$. For example, to get from $\Q$ to $\R$, we need to add a lot of elements (since $\Q$ is countable and $\R$ is uncountable). On the other hand, to get from $\R$ to $\C$, we just need to add a single element $\sqrt{-1}$.
+
+To do so, we can consider $L$ as a vector space over $K$. We know that $L$ already comes with an additive abelian group structure, and we can define scalar multiplication by simply multiplying: if $a\in K, \alpha\in L$, then $a\cdot \alpha$ is defined as multiplication in $L$.
+
+\begin{defi}[Degree of field extension]\index{degree}
+  The \emph{degree} of $L$ over $K$, written $[L:K]$, is the dimension of $L$ as a vector space over $K$. The extension is \emph{finite} if the degree is finite.
+\end{defi}
+In this course, we are mostly concerned with finite extensions.
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Consider $\C/\R$. This is a finite extension with degree $[\C:\R] = 2$ since we have a basis of $\{1, i\}$.
+ \item The extension $\Q(\sqrt{2})/\Q$ has degree $2$ since we have a basis of $\{1, \sqrt{2}\}$.
+ \item The extension $\R/\Q$ is not finite.
+ \end{enumerate}
+\end{eg}
+
+We are going to use the following result a lot:
+\begin{thm}[Tower Law]
+ Let $F/L/K$ be field extensions. Then
+ \[
+ [F:K] = [F:L][L:K]
+ \]
+\end{thm}
+
+\begin{proof}
+ Assume $[F:L]$ and $[L:K]$ are finite. Let $\{\alpha_1, \cdots, \alpha_m\}$ be a basis for $L$ over $K$, and $\{\beta_1, \cdots, \beta_n\}$ be a basis for $F$ over $L$. Pick $\gamma \in F$. Then we can write
+ \[
+ \gamma = \sum_i b_i \beta_i,\quad b_i\in L.
+ \]
+ For each $b_i$, we can write as
+ \[
+ b_i = \sum_j a_{ij}\alpha_{j},\quad a_{ij}\in K.
+ \]
+ So we can write
+ \[
+ \gamma = \sum_i \left(\sum_j a_{ij}\alpha_j\right)\beta_i = \sum_{i, j} a_{ij}\alpha_j \beta_i.
+ \]
+  So $T = \{\alpha_j\beta_i\}_{i, j}$ spans $F$ over $K$. To show that $T$ is a basis, we have to show that it is linearly independent. So suppose $\sum_{i, j} a_{ij} \alpha_j \beta_i = 0$ with $a_{ij} \in K$, i.e.\ $\gamma = 0$ above. Then each $b_i = \sum_j a_{ij}\alpha_j$ vanishes, since $\{\beta_i\}$ is linearly independent over $L$. Hence each $a_{ij} = 0$, since $\{\alpha_j\}$ is linearly independent over $K$.
+
+ This implies that $T$ is a basis of $F$ over $K$. So
+ \[
+ [F:K] = |T| = nm = [F:L][L:K].
+ \]
+  Finally, if $[F:L] = \infty$ or $[L:K] = \infty$, then clearly $[F:K] = \infty$ as well, so the equality still holds.
+\end{proof}
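The Tower Law can be illustrated with a small \texttt{sympy} computation. Here we assume the standard facts that $\Q(\sqrt{2}, \sqrt{3}) = \Q(\sqrt{2} + \sqrt{3})$ and that the degree of a simple algebraic extension equals the degree of the corresponding minimal polynomial (proved later in this chapter):

```python
from sympy import sqrt, Symbol, minimal_polynomial, degree

t = Symbol('t')

# [Q(sqrt2) : Q] = 2: the minimal polynomial of sqrt(2) over Q is t^2 - 2
p = minimal_polynomial(sqrt(2), t)
assert degree(p, t) == 2

# Q(sqrt2, sqrt3) = Q(sqrt2 + sqrt3), whose minimal polynomial over Q
# is t^4 - 10 t^2 + 1.  The Tower Law predicts degree
# [Q(sqrt2, sqrt3) : Q(sqrt2)] * [Q(sqrt2) : Q] = 2 * 2 = 4.
q = minimal_polynomial(sqrt(2) + sqrt(3), t)
assert degree(q, t) == 4
```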
+
+Recall that in IA Numbers and Sets, we defined a real number $x$ to be algebraic if it is a root of some polynomial with integer (or rational) coefficients. We can do this for general field extensions as well.
+\begin{defi}[Algebraic number]
+ Let $L/K$ be a field extension, $\alpha\in L$. We define
+ \[
+ I_\alpha = \{f\in K[t] : f(\alpha) = 0\}\subseteq K[t]
+ \]
+ This is the set of polynomials for which $\alpha$ is a root. It is easy to show that $I_\alpha$ is an ideal, since it is the kernel of the ring homomorphism $K[t] \to L$ by $g \mapsto g(\alpha)$.
+
+ We say $\alpha$ is \emph{algebraic} over $K$ if $I_\alpha \not= 0$. Otherwise, $\alpha$ is \emph{transcendental} over $K$.
+
+ We say $L$ is \emph{algebraic} over $K$ if every element of $L$ is algebraic.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item $\sqrt[9]{7}$ is algebraic over $\Q$ because $f(\sqrt[9]{7}) = 0$, where $f = t^9 - 7$. In general, any number written with radicals is algebraic over $\Q$.
+ \item $\pi$ is not algebraic over $\Q$.
+ \end{enumerate}
+\end{eg}
+These are rather simple examples, and the following lemma will provide us with a way of generating many more examples.
+
+\begin{lemma}
+ Let $L/K$ be a finite extension. Then $L$ is algebraic over $K$.
+\end{lemma}
+
+\begin{proof}
+ Let $n = [L:K]$, and let $\alpha\in L$. Then $1, \alpha, \alpha^2, \cdots, \alpha^n$ are linearly dependent over $K$ (since there are $n + 1$ elements). So there exists some $a_i \in K$ (not all zero) such that
+ \[
+ a_n \alpha^n + a_{n - 1}\alpha^{n - 1} + \cdots + a_1 \alpha + a_0 = 0.
+ \]
+ So we have a non-trivial polynomial that vanishes at $\alpha$. So $\alpha$ is algebraic over $K$.
+
+ Since $\alpha$ was arbitrary, $L$ itself is algebraic.
+\end{proof}
+
+If $L/K$ is a field extension and $\alpha \in L$ is algebraic, then by definition, there is some polynomial $f$ such that $f(\alpha) = 0$. It is a natural question to ask if there is a ``smallest'' polynomial that does this job. Obviously we can find a polynomial of smallest \emph{degree} (by the well-ordering principle of the natural numbers), but we can get something even stronger.
+
+Since $K$ is a field, $K[t]$ is a PID (principal ideal domain). This, by definition, implies we can find some (monic) $P_\alpha \in K[t]$ such that $I_\alpha = \bra P_\alpha\ket$. In other words, every element of $I_\alpha$ is just a multiple of $P_\alpha$.
+\begin{defi}[Minimal polynomial]
+ Let $L/K$ be a field extension, $\alpha \in L$. The \emph{minimal polynomial} of $\alpha$ over $K$ is a monic polynomial $P_\alpha$ such that $I_\alpha = \bra P_\alpha\ket$.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Consider $\R/\Q$, $\alpha = \sqrt[3]{2}$. Then the minimal polynomial is $P_\alpha = t^3 - 2$.
+ \item Consider $\C/\R$, $\alpha = \sqrt[3]{2}$. Then the minimal polynomial is $P_\alpha = t - \sqrt[3]{2}$.
+ \end{enumerate}
+\end{eg}
+
+It should be intuitively obvious that by virtue of being ``minimal'', the minimal polynomial is irreducible.
+\begin{prop}
+ Let $L/K$ be a field extension, $\alpha\in L$ algebraic over $K$, and $P_\alpha$ the minimal polynomial. Then $P_\alpha$ is irreducible in $K[t]$.
+\end{prop}
+
+\begin{proof}
+  Assume that $P_\alpha = QR$ in $K[t]$. So $0 = P_\alpha(\alpha) = Q(\alpha) R(\alpha)$. So $Q(\alpha) = 0$ or $R(\alpha) = 0$. Say $Q(\alpha) = 0$. So $Q\in I_\alpha$. So $Q$ is a multiple of $P_\alpha$. However, since $P_\alpha = QR$, we also know that $P_\alpha$ is a multiple of $Q$. This is possible only if $R$ is a unit in $K[t]$, i.e.\ $R\in K\setminus \{0\}$. So $P_\alpha$ is irreducible.
+\end{proof}
+It should also be clear that if $f$ is irreducible and $f(\alpha) = 0$, then $f$ is the minimal polynomial. Often, it is the irreducibility of $P_\alpha$ that is important.
+
+Apart from the minimal polynomial, we can also ask for the minimal field containing $\alpha$.
+\begin{defi}[Field generated by $\alpha$]
+ Let $L/K$ be a field extension, $\alpha\in L$. We define $K(\alpha)$ to be the smallest subfield of $L$ containing $K$ and $\alpha$. We call $K(\alpha)$ the \emph{field generated by $\alpha$ over $K$}.
+\end{defi}
+This definition by itself is rather abstract and not very helpful. Intuitively, $K(\alpha)$ is what we get when we add $\alpha$ to $K$, plus all the extra elements needed to make $K(\alpha)$ a field (i.e.\ closed under addition, multiplication and inverse). We can express this idea more formally by the following result:
+
+\begin{thm}
+  Let $L/K$ be a field extension, $\alpha\in L$ algebraic. Then
+ \begin{enumerate}
+ \item $K(\alpha)$ is the image of the (ring) homomorphism $\phi: K[t] \to L$ defined by $f \mapsto f(\alpha)$.
+ \item $[K(\alpha): K] = \deg P_\alpha$, where $P_\alpha$ is the minimal polynomial of $\alpha$ over $K$.
+ \end{enumerate}
+\end{thm}
+Note that the kernel of the homomorphism $\phi$ is (almost) by definition the ideal $\bra P_\alpha\ket$. So this theorem tells us
+\[
+ \frac{K[t]}{\bra P_\alpha\ket}\cong K(\alpha).
+\]
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $F$ be the image of $\phi$. The first step is to show that $F$ is indeed a field. Since $F$ is the image of a ring homomorphism, we know $F$ is a subring of $L$. Given $\beta \in F$ non-zero, we have to find an inverse.
+
 By definition, $\beta = f(\alpha)$ for some $f\in K[t]$. The idea is to use B\'ezout's identity. Since $\beta \not= 0$, we have $f(\alpha) \not= 0$. So $f \not \in I_\alpha = \bra P_\alpha\ket$, i.e.\ $P_\alpha \nmid f$ in $K[t]$. Since $P_\alpha$ is irreducible, $P_\alpha$ and $f$ are coprime. So there exist $g, h \in K[t]$ such that $fg + hP_\alpha = 1$. Then $f(\alpha)g(\alpha) = f(\alpha) g(\alpha) + h(\alpha)P_\alpha(\alpha) = 1$. So $\beta g(\alpha) = 1$, and $\beta$ has an inverse. So $F$ is a field.
+
+ From the definition of $F$, we have $K \subseteq F$ and $\alpha \in F$, using the constant polynomials $f = c \in K$ and the identity $f = t$.
+
+ Now, if $K\subseteq G\subseteq L$ and $\alpha \in G$, then $G$ contains all the polynomial expressions of $\alpha$. Hence $F\subseteq G$. So $K(\alpha) = F$.
+ \item Let $n = \deg P_\alpha$. We show that $\{1, \alpha, \alpha^2, \cdots, \alpha^{n - 1}\}$ is a basis for $K(\alpha)$ over $K$.
+
+ First note that since $\deg P_\alpha = n$, we can write
+ \[
+ \alpha^n = \sum_{i = 0}^{n - 1} a_i \alpha^i.
+ \]
 So all higher powers of $\alpha$ are also linear combinations of $1, \alpha, \cdots, \alpha^{n - 1}$ (by induction). This means that $K(\alpha)$ is spanned by $1, \cdots, \alpha^{n - 1}$ as a $K$-vector space.
+
+ It remains to show that $\{1, \cdots, \alpha^{n - 1}\}$ is linearly independent. Assume not. Then for some $b_i$, we have
+ \[
+ \sum_{i = 0}^{n - 1} b_i \alpha^i = 0.
+ \]
+ Let $f = \sum b_i t^i$. Then $f(\alpha) = 0$. So $f \in I_\alpha = \bra P_\alpha\ket$. However, $\deg f < \deg P_\alpha$. So we must have $f = 0$. So all $b_i = 0$. So $\{1, \cdots, \alpha^{n - 1}\}$ is a basis for $K(\alpha)$ over $K$. So $[K(\alpha): K] = n$.\qedhere
+ \end{enumerate}
+\end{proof}
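The inversion step in the proof is easy to carry out in concrete cases.
\begin{eg}
 Take $K = \Q$ and $\alpha = \sqrt{2}$, so that $P_\alpha = t^2 - 2$. To invert $\beta = 1 + \sqrt{2} = f(\alpha)$, where $f = 1 + t$, we find a B\'ezout relation between $f$ and $P_\alpha$: since
 \[
 (t - 1)(t + 1) - (t^2 - 2) = 1,
 \]
 we can take $g = t - 1$ and $h = -1$. Evaluating at $\alpha$ gives $\beta^{-1} = g(\sqrt{2}) = \sqrt{2} - 1$, and indeed $(1 + \sqrt{2})(\sqrt{2} - 1) = 2 - 1 = 1$.
\end{eg}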
+
+\begin{cor}
+ Let $L/K$ be a field extension, $\alpha \in L$. Then $\alpha$ is algebraic over $K$ if and only if $K(\alpha)/K$ is a finite extension.
+\end{cor}
+
+\begin{proof}
+ If $\alpha$ is algebraic, then $[K(\alpha): K] = \deg P_\alpha < \infty$ by above. So the extension is finite.
+
 If $K(\alpha)/K$ is a finite extension, then by the previous lemma, every element of $K(\alpha)$ is algebraic over $K$. In particular, $\alpha$ is algebraic over $K$.
+\end{proof}
+
+We can extend this definition to allow more elements in the generating set.
+\begin{defi}[Field generated by elements]
 Let $L/K$ be a field extension, $\alpha_1, \cdots, \alpha_n \in L$. We define $K(\alpha_1, \cdots, \alpha_n)$ to be the smallest subfield of $L$ containing $K$ and $\alpha_1, \cdots, \alpha_n$.
+
+ We call $K(\alpha_1, \cdots, \alpha_n)$ the \emph{field generated by} $\alpha_1, \cdots, \alpha_n$ over $K$.
+\end{defi}
+
+And we can prove some similar results.
+
+\begin{thm}
+ Suppose that $L/K$ is a field extension.
+ \begin{enumerate}
+ \item If $\alpha_1, \cdots, \alpha_n \in L$ are algebraic over $K$, then $K(\alpha_1, \cdots, \alpha_n)/K$ is a finite extension.
 \item If we have field extensions $L/F/K$ and $F/K$ is a finite extension, then $F = K(\alpha_1, \cdots, \alpha_n)$ for some $\alpha_1,\cdots, \alpha_n \in F$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We prove this by induction. Since $\alpha_1$ is algebraic over $K$, $K\subseteq K(\alpha_1)$ is a finite extension.
+
 For $1 \leq i < n$, $\alpha_{i + 1}$ is algebraic over $K$, so $\alpha_{i + 1}$ is also algebraic over $K(\alpha_1, \cdots, \alpha_i)$. So $K(\alpha_1, \cdots, \alpha_i)\subseteq K(\alpha_1, \cdots, \alpha_i)(\alpha_{i + 1})$ is a finite extension. But $K(\alpha_1, \cdots, \alpha_i)(\alpha_{i + 1}) = K(\alpha_1, \cdots, \alpha_{i + 1})$. By the tower law, $K \subseteq K(\alpha_1, \cdots, \alpha_{i + 1})$ is a finite extension.
+
+ \item Since $F$ is a finite dimensional vector space over $K$, we can take a basis $\{\alpha_1, \cdots, \alpha_n\}$ of $F$ over $K$. Then it should be clear that $F = K(\alpha_1, \cdots, \alpha_n)$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+When studying polynomials, the following result from IB Groups, Rings and Modules is often helpful:
+\begin{prop}[Eisenstein's criterion]
+ Let $f = a_nt^n + \cdots + a_1 t + a_0\in \Z[t]$. Assume that there is some prime number $p$ such that
+ \begin{enumerate}
+ \item $ p \mid a_i$ for all $i < n$.
 \item $p \nmid a_n$.
+ \item $p^2 \nmid a_0$.
+ \end{enumerate}
+ Then $f$ is irreducible in $\Q[t]$.
+\end{prop}
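As a quick application, which we will use repeatedly:
\begin{eg}
 Let $f = t^3 - 2$ and take $p = 2$. Then $2 \mid a_0 = -2$ and $2 \mid a_1 = a_2 = 0$; also $2 \nmid a_3 = 1$ and $2^2 \nmid a_0 = -2$. So $t^3 - 2$ is irreducible in $\Q[t]$.
\end{eg}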
+
+\begin{eg}
+ Consider the field extensions
+ \[
+ \Q \subseteq \Q(\sqrt{2})\subseteq \Q(\sqrt{2}, \sqrt[3]{2})\subseteq \R,
+ \]
+ \[
+ \Q\subseteq \Q(\sqrt[3]{2}) \subseteq \Q(\sqrt{2}, \sqrt[3]{2})\subseteq \R.
+ \]
+ We have $[\Q(\sqrt{2}): \Q] = 2$ since $\{1, \sqrt{2}\}$ is a basis of $\Q(\sqrt{2})$ over $\Q$.
+
+ How about $[\Q(\sqrt[3]{2}):\Q]$? By the Eisenstein criterion, we know that $t^3 - 2$ is irreducible in $\Q[t]$. So the minimal polynomial of $\sqrt[3]{2}$ over $\Q$ is $t^3 - 2$ which has degree 3. So $[\Q(\sqrt[3]{2}):\Q] = 3$.
+
 These results immediately tell us that $\sqrt[3]{2}\not\in \Q(\sqrt{2})$. Otherwise, we would have $\Q(\sqrt[3]{2}) \subseteq \Q(\sqrt{2})$, and the tower law would give
 \[
 [\Q(\sqrt{2}):\Q] = [\Q(\sqrt{2}):\Q(\sqrt[3]{2})][\Q(\sqrt[3]{2}):\Q].
 \]
 Plugging the numbers in entails that $3$ is a factor of $2$, which is clearly nonsense. Similarly, $\sqrt{2}\not \in \Q(\sqrt[3]{2})$.
+
+ How about the inclusion $\Q(\sqrt{2})\subseteq \Q(\sqrt{2}, \sqrt[3]{2})$? We now show that the minimal polynomial $P_{\sqrt[3]{2}}$ of $\sqrt[3]{2}$ over $\Q(\sqrt{2})$ is $t^3 - 2$.
+
 Suppose not. Then $t^3 - 2$ is reducible in $\Q(\sqrt{2})[t]$, with the true minimal polynomial $P_{\sqrt[3]{2}}$ as one of its factors. Write $t^3 - 2 = P_{\sqrt[3]{2}} \cdot R$ for some non-unit polynomial $R \in \Q(\sqrt{2})[t]$.
+
 We know that $P_{\sqrt[3]{2}}$ does not have degree 3 (or else it would be $t^3 - 2$), and not degree 1, since that would mean $\sqrt[3]{2}\in \Q(\sqrt{2})$, which we have just ruled out. So it has degree $2$, and $R$ has degree $1$. Then $R$ has a root in $\Q(\sqrt{2})$, i.e.\ $R(\beta) = 0$ for some $\beta \in \Q(\sqrt{2})$. So $\beta^3 - 2 = 0$, and hence $[\Q(\beta): \Q] = 3$, since $t^3 - 2$ is the minimal polynomial of $\beta$. Again, by the tower law, we have
+ \[
+ [\Q(\sqrt{2}):\Q] = [\Q(\sqrt{2}):\Q(\beta)][\Q(\beta):\Q].
+ \]
+ Again, this is nonsense since it entails that 3 is a factor of 2. So the minimal polynomial is indeed $t^3 -2$. So $[\Q(\sqrt{2}, \sqrt[3]{2}):\Q] = 6$ by the tower law.
+
 Alternatively, we can obtain this result by noting that the tower law applied to $\Q \subseteq \Q(\sqrt{2})\subseteq \Q(\sqrt{2}, \sqrt[3]{2})$ and $\Q\subseteq \Q(\sqrt[3]{2}) \subseteq \Q(\sqrt{2}, \sqrt[3]{2})$ shows that 2 and 3 are both factors of $[\Q(\sqrt{2}, \sqrt[3]{2}):\Q]$. So it is at least $6$. On the other hand, since $t^3 - 2\in \Q(\sqrt{2})[t]$ has $\sqrt[3]{2}$ as a root, we get $[\Q(\sqrt{2}, \sqrt[3]{2}):\Q(\sqrt{2})]\leq 3$, so the degree is at most 6. So it is indeed 6.
+\end{eg}
+\subsection{Ruler and compass constructions}
+Before we develop our theory further, we first look into a rather unexpected application of field extensions. We are going to look at some classic problems in geometry, and solve them using what we've learnt so far. In particular, we want to show that certain things cannot be constructed using a compass and a ruler (as usual, we assume the ruler does not have markings on it).
+
+It is often easy to prove that certain things are constructible --- just exhibit an explicit construction of it. However, it is much more difficult to show that things are \emph{not} constructible. Two classical examples are
+\begin{enumerate}
+ \item Doubling the cube: Given a cube, can we construct the side of another cube whose volume is double the volume of the original cube?
+ \item Trisecting an angle: Given an angle, can we divide the angle into three equal angles?
+\end{enumerate}
The idea here is to associate with each possible construction a field extension, and then prove certain results about how these field extensions should behave. We then show that if we could, say, double the cube, then this construction would inevitably break some of the properties it should have.
+
+Firstly, we want to formulate our problem in a more convenient way. In particular, we will view the plane as $\R^2$, and describe lines and circles by equations. We also want to describe ``compass and ruler'' constructions in a more solid way.
+\begin{defi}[Constructible points]
 Let $S\subseteq \R^2$ be a (usually finite) set of points in the plane.
+
+ A ``ruler'' allows us to do the following: if $P, Q\in S$, then we can draw the line passing through $P$ and $Q$.
+
+ A ``compass'' allows us to do the following: if $P, Q, Q'\in S$, then we can draw the circle with center at $P$ and radius of length $QQ'$.
+
+ Any point $R\in \R^2$ is \emph{1-step constructible} from $S$ if $R$ belongs to the intersection of two distinct lines or circles constructed from $S$ using rulers and compasses.
+
 A point $R\in \R^2$ is \emph{constructible} from $S$ if there are $R_1, \cdots, R_n = R \in \R^2$ such that $R_{i + 1}$ is 1-step constructible from $S\cup \{R_1, \cdots, R_i\}$ for each $0 \leq i < n$.
+\end{defi}
+
+\begin{eg}
 Let $S = \{(0, 0), (1, 0)\}$. What can we construct? It should be easy to see that $(n, 0)$ for all $n \in \Z$ are all constructible from $S$. In fact, we can show that all points $(m, n)\in \Z^2$ are constructible from $S$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [anchor = south east] at (0, 0) {\tiny $(0, 0)$};
+ \node [circ] at (1, 0) {};
+ \node [anchor = south west] at (1, 0) {\tiny $(1, 0)$};
+ \node [mblue] [circ] at (-1, 0) {};
+ \node [mred] [circ] at (2, 0) {};
+
+ \draw (-2, 0) -- (3, 0);
+ \draw [mblue] circle [radius=1];
+ \draw [mred] (1, 0) circle [radius = 1];
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{defi}[Field of $S$]
+ Let $S\subseteq \R^2$ be finite. Define the \emph{field of} $S$ by
+ \[
+ \Q(S) = \Q(\{\text{coordinates of points in }S\}) \subseteq \R,
+ \]
+ where we put in the $x$ coordinate and $y$ coordinate separately into the generating set.
+\end{defi}
+For example, if $S = \{(\sqrt{2}, \sqrt{3})\}$, then $\Q(S) = \Q(\sqrt{2}, \sqrt{3})$.
+
+The key theorem we will use to prove our results is
+\begin{thm}
+ Let $S\subseteq \R^2$ be finite. Then
+ \begin{enumerate}
+ \item If $R$ is 1-step constructible from $S$, then $[\Q(S\cup \{R\}):\Q(S)] = 1$ or $2$.
 \item If $T\subseteq \R^2$ is finite, $S\subseteq T$, and the points in $T$ are constructible from $S$, then $[\Q(S\cup T): \Q(S)] = 2^k$ for some $k$ (where $k$ can be $0$).
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
 We first prove (i). By assumption, there are distinct lines or circles $C, C'$ constructed from $S$ using ruler and compass, such that $R\in C\cap C'$. By elementary geometry, $C$ and $C'$ can be given by the equations
 \begin{align*}
 C&: a(x^2 + y^2) + bx + cy + d = 0,\\
 C'&: a'(x^2 + y^2) + b'x + c'y + d' = 0,
 \end{align*}
 where $a, b, c, d, a', b', c', d' \in \Q(S)$. In particular, if we have a line, then we can take $a = 0$.
+
+ Let $R = (r_1, r_2)$. If $a = a' = 0$ (i.e.\ $C$ and $C'$ are lines), then solving the two linear equations gives $r_1, r_2 \in \Q(S)$. So $[\Q(S\cup \{R\}):\Q(S)] = 1$.
+
+ So we can now assume wlog that $a\not = 0$. We let
+ \[
+ p = a'b - ab',\quad q = a'c - ac',\quad \ell = a'd - ad',
+ \]
 which are the coefficients when we compute $a'\times C - a \times C'$. Then we must have $p \not= 0$ or $q \not= 0$: if $p = q = 0$, then since $C$ and $C'$ have the common point $R$, we would also get $\ell = 0$, and then $C$ and $C'$ would be the same curve. wlog assume $p \not= 0$. Then since $(r_1, r_2)$ satisfies both the equation of $C$ and that of $C'$, it satisfies
+ \[
+ px + qy + \ell = 0.
+ \]
+ In other words, $pr_1 + qr_2 + \ell = 0$. This tells us that
+ \[
+ r_1 = -\frac{qr_2 + \ell}{p}.\tag{$*$}
+ \]
+ If we put $r_1, r_2$ into the equations of $C$ and $C'$ and use $(*)$, we get an equation of the form
+ \[
+ \alpha r_2^2 + \beta r_2 + \gamma = 0,
+ \]
+ where $\alpha, \beta, \gamma \in \Q(S)$. So we can find $r_2$ (and hence $r_1$ using linear relations) using only a single radical of degree 2. So
+ \[
+ [\Q(S\cup \{R\}):\Q(S)] = [\Q(S)(r_2): \Q(S)] = 1\text{ or }2,
+ \]
+ since the minimal polynomial of $r_2$ over $\Q(S)$ has degree 1 or 2.
+
+ Then (ii) follows directly from induction, using the tower law.
+\end{proof}
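To see the quadratic step of the proof concretely, we can intersect the two circles drawn in the earlier picture.
\begin{eg}
 Take $S = \{(0, 0), (1, 0)\}$, so $\Q(S) = \Q$, and intersect the circles $x^2 + y^2 = 1$ and $(x - 1)^2 + y^2 = 1$. Subtracting the two equations gives the line $2x - 1 = 0$, i.e.\ $x = \frac{1}{2}$. Substituting back gives $y^2 = \frac{3}{4}$. So the intersection points are $R = \left(\frac{1}{2}, \pm\frac{\sqrt{3}}{2}\right)$, and
 \[
 [\Q(S\cup \{R\}):\Q(S)] = [\Q(\sqrt{3}):\Q] = 2.
 \]
\end{eg}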
+
+\begin{cor}
+ It is impossible to ``double the cube''.
+\end{cor}
+
+\begin{proof}
+ Consider the cube with unit side length, i.e.\ we are given the set $S = \{(0, 0), (1, 0)\}$. Then doubling the cube would correspond to constructing a side of length $\ell$ such that $\ell^3 = 2$, i.e.\ $\ell = \sqrt[3]{2}$. Thus we need to construct a point $R = (\sqrt[3]{2}, 0)$ from $S$.
+
+ If we can indeed construct this $R$, then we need
+ \[
+ [\Q(S\cup \{R\}):\Q(S)] = 2^k
+ \]
+ for some $k$. But we know that $\Q(S) = \Q$ and $\Q(S\cup \{R\}) = \Q(\sqrt[3]{2})$, and that
+ \[
+ [\Q(\sqrt[3]{2}):\Q] = 3.
+ \]
+ This is a contradiction since $3$ is not a power of $2$.
+\end{proof}
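The same method disposes of the other classical problem. We sketch the standard argument, which relies on the triple-angle identity $\cos 3\theta = 4\cos^3\theta - 3\cos\theta$.
\begin{eg}
 It is impossible to trisect a general angle. Consider the angle $60^\circ$, given by the set $S = \{(0, 0), (1, 0), (\cos 60^\circ, \sin 60^\circ)\}$, so that $\Q(S) = \Q(\sqrt{3})$ and $[\Q(S):\Q] = 2$. Trisecting this angle amounts to constructing the point $R = (\cos 20^\circ, \sin 20^\circ)$. Putting $\theta = 20^\circ$ in the identity gives
 \[
 \tfrac{1}{2} = 4\cos^3 20^\circ - 3\cos 20^\circ,
 \]
 so $\cos 20^\circ$ is a root of $8t^3 - 6t - 1$. This cubic has no rational root (it suffices to check $\pm 1, \pm\frac{1}{2}, \pm\frac{1}{4}, \pm\frac{1}{8}$), so it is irreducible in $\Q[t]$, and hence $[\Q(\cos 20^\circ):\Q] = 3$. But if $R$ were constructible from $S$, then $[\Q(S\cup\{R\}):\Q]$ would be a power of $2$, while the tower law says it is divisible by $[\Q(\cos 20^\circ):\Q] = 3$. This is absurd.
\end{eg}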
+
+\subsection{\texorpdfstring{$K$}{K}-homomorphisms and the Galois Group}
Usually in mathematics, we not only want to study objects, but also maps between objects. Suppose we have two field extensions $K \subseteq L$ and $K \subseteq L'$. What should a map between these two objects look like? Obviously, we would like this map to be a field homomorphism between $L$ and $L'$. Moreover, since this is a map between the two field \emph{extensions}, and not just the fields themselves, we would like this map to fix everything in $K$, and just map the ``extended parts'' of $L$ to those of $L'$.
+
+\begin{defi}[$K$-homomorphism]
+ Let $L/K$ and $L'/K$ be field extensions. A $K$-\emph{homomorphism} $\phi: L \to L'$ is a ring homomorphism such that $\phi|_K = \id$, i.e.\ it fixes everything in $K$. We write $\Hom_K(L, L')$ for the set of all $K$-homomorphisms $L\to L'$.
+
 A $K$-\emph{isomorphism} is a $K$-homomorphism which is an isomorphism of rings. A $K$-\emph{automorphism} is a $K$-isomorphism $L\to L$. We write $\Aut_K(L)$ for the set of all $K$-automorphisms $L\to L$.
+\end{defi}
There are a couple of things to take note of:
+\begin{enumerate}
+ \item Given any $\phi \in \Hom_K(L, L')$, we know that
+ \begin{enumerate}
 \item Since $\phi|_K = \id$, we know that $\ker \phi \not= L$. Since we know that $\ker \phi$ is an ideal, and a field only has two ideals, we must have $\ker \phi = 0$. So $\phi$ is injective. It is, in fact, true that any homomorphism of fields is injective.
+ \item $\phi$ gives an isomorphism $L \to \phi(L)$. So $\phi(L)$ is a field and we get the field extensions $K\subseteq \phi(L) \subseteq L'$.
+ \end{enumerate}
+ \item If $[L:K] = [L':K] < \infty$, then any homomorphism in $\Hom_K(L, L')$ is in fact an isomorphism. So
+ \[
 \{K\text{-homomorphisms}: L \to L'\} = \{K\text{-isomorphisms}: L\to L'\}.
 \]
 This is because any $K$-homomorphism $\phi: L\to L'$ is an injection, so $[L:K] = [\phi(L):K]$, and hence $[L':K] = [\phi(L):K]$. But $\phi(L)$ is a subfield of $L'$, and this is possible only if $L' = \phi(L)$. So $\phi$ is a surjection, and hence an isomorphism.
+
+ In particular, $\Aut_K(L) = \Hom_K(L, L)$.
+\end{enumerate}
+
+\begin{eg}
+ We want to determine $\Aut_\R(\C)$. If we pick any $\psi\in \Aut_\R(\C)$, then
+ \[
 (\psi(\sqrt{-1}))^2 + 1 = \psi((\sqrt{-1})^2 + 1) = \psi(0) = 0.
+ \]
+ So under any automorphism $\psi$, the image of $\sqrt{-1}$ is a root of $t^2 + 1$. Therefore $\psi(\sqrt{-1}) = \sqrt{-1}$ or $-\sqrt{-1}$. In the first case, $\psi$ is the identity. In the second case, the automorphism is $\phi: a + b\sqrt{-1} \mapsto a - b\sqrt{-1}$, i.e.\ the complex conjugate. So $\Aut_\R(\C) = \{\id, \phi\}$.
+
+ Similarly, we can show that $\Aut_\Q(\Q(\sqrt{2})) = \{\id, \phi\}$, where $\phi$ swaps $\sqrt{2}$ with $-\sqrt{2}$.
+\end{eg}
+
+\begin{eg}
+ Let $\mu^3 = 1$ but $\mu \not= 1$ (i.e.\ $\mu$ is a third root of unity). We want to determine $A = \Hom_\Q(\Q(\sqrt[3]{2}), \C)$.
+
 First define $\phi, \psi$ by
 \begin{align*}
 \phi(\sqrt[3]{2}) &= \sqrt[3]{2}\mu,\\
 \psi(\sqrt[3]{2}) &= \sqrt[3]{2}\mu^2.
 \end{align*}
 These are well defined since $\sqrt[3]{2}\mu$ and $\sqrt[3]{2}\mu^2$ are also roots of $t^3 - 2$. Together with the inclusion map, we have $\id, \phi, \psi \in A$. Are there more?
+
+ Let $\lambda \in A$. Then we must have
+ \[
+ (\lambda(\sqrt[3]{2}))^3 - 2 = 0.
+ \]
+ So $\lambda(\sqrt[3]{2})$ is a root of $t^3 - 2$. So it is either $\sqrt[3]{2}, \sqrt[3]{2}\mu$ or $\sqrt[3]{2}\mu^2$. So $\lambda$ is either $\id$, $\phi$ or $\psi$. So $A = \{\id, \phi, \psi\}$.
+\end{eg}
Note that in general, if $\alpha$ is algebraic over $\Q$, then $\Q(\alpha) \cong \Q[t]/\bra P_\alpha\ket $. Hence to specify a $\Q$-homomorphism from $\Q(\alpha)$, it suffices to specify the image of $\bar{t}$, i.e.\ the image of $\alpha$.
+
+We will later see that the number of automorphisms $|\Aut_K(L)|$ is bounded above by the degree of the extension $[L:K]$. However, we need not always have $[L:K]$ many automorphisms. When we \emph{do} have enough automorphisms, we call it a \emph{Galois extension}.
+
+\begin{defi}[Galois extension]
+ Let $L/K$ be a finite field extension. This is a \emph{Galois extension} if $|\Aut_K(L)| = [L:K]$.
+\end{defi}
+
+\begin{defi}[Galois group]
+ The \emph{Galois group} of a Galois extension $L/K$ is defined as $\Gal(L/K) = \Aut_K(L)$. The group operation is defined by function composition. It is easy to see that this is indeed a group.
+\end{defi}
+
+
+\begin{eg}
+ The extension $\Q(\sqrt{7})/\Q$ is Galois. The degree $[\Q(\sqrt{7}):\Q] = 2$, and the automorphism group is $\Aut_\Q(\Q(\sqrt{7})) = \{\id, \phi\}$, where $\phi$ swaps $\sqrt{7}$ with $-\sqrt{7}$.
+\end{eg}
+
+\begin{eg}
+ The extension $\Q(\sqrt[3]{2})/\Q$ is not Galois. The degree is $[\Q(\sqrt[3]{2}):\Q] = 3$, but the automorphism group is $\Aut_\Q(\Q(\sqrt[3]{2})) = \{\id\}$.
+
+ To show that there is no other automorphism, note that the automorphism group can be viewed as a subset of $\Hom_\Q(\Q(\sqrt[3]{2}), \C)$. We have just seen that $\Hom_\Q(\Q(\sqrt[3]{2}), \C)$ has three elements, but only the identity maps $\Q(\sqrt[3]{2})$ to itself, while the others map $\sqrt[3]{2}$ to $\sqrt[3]{2}\mu^i \not\in \Q(\sqrt[3]{2})$. So this is the only automorphism.
+
+ The way we should think about this is that there is something missing in $\Q(\sqrt[3]{2})$, namely $\mu$. Without the $\mu$, we cannot get the other automorphisms we need. In fact, in the next example, we will show that $\Q\subseteq \Q(\sqrt[3]{2}, \mu)$ is Galois.
+\end{eg}
+
+\begin{eg}
 $\Q(\sqrt[3]{2},\mu)/\Q$ is a Galois extension. Firstly, since $\mu^3 = 1$ and $\mu \not= 1$, we know that $\mu$ is a root of $t^2 + t + 1$. This quadratic has no root in $\Q(\sqrt[3]{2})\subseteq \R$, since its roots are non-real. So it is the minimal polynomial of $\mu$ over $\Q(\sqrt[3]{2})$, and $[\Q(\sqrt[3]{2}, \mu): \Q(\sqrt[3]{2})] = 2$. In particular, $\mu\not\in \Q(\sqrt[3]{2})$. We also know that $[\Q(\sqrt[3]{2}): \Q] = 3$. So we have
+ \[
+ [\Q(\sqrt[3]{2}, \mu): \Q] = 6
+ \]
+ by the Tower law.
+
 Now denote $\alpha = \sqrt[3]{2}$, $\beta = \sqrt[3]{2}\mu$ and $\gamma = \sqrt[3]{2}\mu^2$. Then $\Q(\sqrt[3]{2}, \mu) = \Q(\alpha, \beta, \gamma)$, since $\mu = \beta/\alpha$. Now let $\phi \in \Aut_\Q(\Q(\sqrt[3]{2}, \mu))$. Then $\phi(\alpha)$, $\phi(\beta)$ and $\phi(\gamma)$ are roots of $t^3 - 2$, and these roots are exactly $\alpha, \beta, \gamma$. So
 \[
 \{\phi(\alpha), \phi(\beta), \phi(\gamma)\} = \{\alpha, \beta, \gamma\}.
 \]
 Hence $\phi$ is completely determined by a permutation of the roots of $t^3 - 2$, and one can check that every permutation does arise. So $\Aut_\Q(\Q(\sqrt[3]{2}, \mu)) \cong S_3$, and $|\Aut_\Q(\Q(\sqrt[3]{2}, \mu))| = 6 = [\Q(\sqrt[3]{2}, \mu):\Q]$. So the extension is Galois.
+\end{eg}
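Explicitly, one can check that the maps $\sigma, \tau$ defined by
\[
 \sigma: \sqrt[3]{2}\mapsto \sqrt[3]{2}\mu,\; \mu \mapsto \mu,\qquad \tau: \sqrt[3]{2}\mapsto \sqrt[3]{2},\; \mu\mapsto \mu^2
\]
are automorphisms of $\Q(\sqrt[3]{2}, \mu)$ (indeed $\tau$ is just complex conjugation restricted to this field), and that they satisfy $\sigma^3 = \tau^2 = \id$ and $\tau\sigma\tau = \sigma^{-1}$. So $\sigma$ and $\tau$ generate a copy of $S_3$.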
+
Most of the time, we will only be interested in Galois extensions. The main reason is that Galois extensions satisfy the \emph{fundamental theorem of Galois theory}, which roughly says: if $L/K$ is a finite Galois extension, then there is a one-to-one correspondence between the set of subgroups $H\leq \Gal(L/K)$ and the intermediate fields $K\subseteq F \subseteq L$. In particular, the normal subgroups correspond to the ``normal extensions'', which is something we will define later.
+
However, just as we have seen, it is not straightforward to check if an extension is Galois, even in specific cases like the examples above. Fortunately, by the time we reach the proper statement of the fundamental theorem, we will have developed enough machinery to decide easily whether certain extensions are Galois.
+
+\subsection{Splitting fields}
+As mentioned in the introduction, one major motivation for Galois theory is to study the roots of polynomials. So far, we have just been talking about field extensions. The idea here is given a field $K$ and a polynomial $f \in K[t]$, we would like to study the field extension obtained by adding all roots of $f$. This is known as the \emph{splitting field} of $f$ (over $K$).
+
+\begin{notation}
 Let $L/K$ be a field extension, $f\in K[t]$. We write $\Root_f(L)$ for the set of roots of $f$ in $L$.
+\end{notation}
+
+First, we establish a correspondence between the roots of a polynomial and $K$-homomorphisms.
+
+\begin{lemma}
+ Let $L/K$ be a field extension, $f\in K[t]$ irreducible, $\deg f > 0$. Then there is a 1-to-1 correspondence
+ \[
+ \Root_f(L)\longleftrightarrow \Hom_K(K[t]/\bra f\ket, L).
+ \]
+\end{lemma}
+
+\begin{proof}
+ Since $f$ is irreducible, $\bra f\ket$ is a maximal ideal. So $K[t] / \bra f\ket$ is a field. Also, there is a natural inclusion $K\hookrightarrow K[t] / \bra f\ket$. So it makes sense to talk about $\Hom_K(K[t]/\bra f\ket, L)$.
+
 To any $\beta\in \Root_f(L)$, we assign the map $\phi: K[t]/\bra f\ket \to L$ given by $\bar g \mapsto g(\beta)$ (where $\bar g$ is the equivalence class of $g$). This is well defined: if $\bar{g} = \bar{g}'$, then $g' = g + hf$ for some $h \in K[t]$, and so $g'(\beta) = g(\beta) + h(\beta)f(\beta) = g(\beta)$. Note that $\phi(\bar t) = \beta$.
+
+ Conversely, given any $K$-homomorphism $\phi: K[t]/\bra f\ket \to L$, we assign $\beta = \phi(\bar t)$. This is a root since $f(\beta) = f(\phi(\bar t)) = \phi(f(\bar t)) = \phi(0) = 0$.
+
 These assignments are inverse to each other. So we get a one-to-one correspondence.
+\end{proof}
+Recall that if $K\subseteq F$ is a field extension, then for any $\alpha\in F$ with minimal polynomial $P_\alpha$, we have $K[t]/\bra P_\alpha\ket \cong K(\alpha)$. Since an irreducible $f$ is the minimal polynomial of its roots, we can view the above lemma as telling us something about $\Hom_K(K(\alpha), L)$.
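The correspondence is easy to see in a small example.
\begin{eg}
 Take $K = \Q$, $f = t^2 - 2$ and $L = \R$, so that $K[t]/\bra f\ket \cong \Q(\sqrt{2})$. Then $\Root_f(\R) = \{\sqrt{2}, -\sqrt{2}\}$, and correspondingly $\Hom_\Q(\Q(\sqrt{2}), \R)$ has exactly two elements: the inclusion $\sqrt{2}\mapsto \sqrt{2}$, and the map $a + b\sqrt{2}\mapsto a - b\sqrt{2}$.
\end{eg}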
+
+\begin{cor}
+ Let $L/K$ be a field extension, $f \in K[t]$ irreducible, $\deg f > 0$. Then
+ \[
+ \left| \Hom_K(K[t]/\bra f\ket, L) \right| \leq \deg f.
+ \]
+ In particular, if $E = K[t]/\bra f\ket$, then
+ \[
+ |\Aut_K(E)| = |\Root_f(E)| \leq \deg f = [E:K].
+ \]
+ So $E/K$ is a Galois extension iff $|\Root_f(E)| = \deg f$.
+\end{cor}
+
+\begin{proof}
+ This follows directly from the following three facts:
+ \begin{itemize}
+ \item $|\Root_f(L)| \leq \deg f$
+ \item $\Aut_K(E) = \Hom_K(E, E)$
+ \item $\deg f = [K(\alpha): K] = [E:K]$.\qedhere
+ \end{itemize}
+\end{proof}
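The last criterion is easy to apply.
\begin{eg}
 Take $K = \Q$ and $f = t^3 - 2$, so that $E = \Q[t]/\bra t^3 - 2\ket \cong \Q(\sqrt[3]{2})$. The other two roots of $f$ are non-real, while $\Q(\sqrt[3]{2})\subseteq \R$. So $\Root_f(E) = \{\sqrt[3]{2}\}$, and $|\Root_f(E)| = 1 < 3 = \deg f$. Hence $E/\Q$ is not a Galois extension, recovering what we found before.
\end{eg}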
+
+\begin{defi}[Splitting field]
 Let $L/K$ be a field extension, $f\in K[t]$. We say $f$ \emph{splits} over $L$ if we can factor $f$ as
+ \[
+ f = a(t - \alpha_1)\cdots (t - \alpha_n)
+ \]
+ for some $a \in K$ and $\alpha_j \in L$. Alternatively, this says that $L$ contains all roots of $f$.
+
 We say $L$ is a \emph{splitting field} of $f$ if $f$ splits over $L$ and $L = K(\alpha_1, \cdots, \alpha_n)$. This is the smallest field extension of $K$ in which $f$ has all its roots.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item $\C$ is the splitting field of $t^2 + 1 \in \R[t]$.
+ \item $\Q(\sqrt[3]{2}, \mu)$ is a splitting field of $t^3 - 2 \in \Q[t]$, where $\mu$ is a third root of unity.
+ \item By the fundamental theorem of algebra, for any $K\subseteq \C$ and $f\in K[t]$, there is a splitting field $L\subseteq \C$ of $f$.
+ \end{itemize}
+\end{eg}
+Note that the degree of the splitting field need not be (bounded by) the degree of the polynomial. In the second example, we have $[\Q(\sqrt[3]{2}, \mu):\Q] = 6$, but $t^3 - 2$ only has degree 3.
+
+More generally, we can show that every polynomial has a splitting field, and this is unique up to isomorphism. This is important, since we would like to talk about \emph{the} splitting field of a polynomial all the time.
+
+\begin{thm}
+ Let $K$ be a field, $f\in K[t]$. Then
+ \begin{enumerate}
+ \item There is a splitting field of $f$.
+ \item The splitting field is unique (up to $K$-isomorphism).
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $\deg f = 0$, then $K$ is a splitting field of $f$. Otherwise, we add the roots of $f$ one by one.
+
 Pick $g \mid f$ in $K[t]$, where $g$ is irreducible and $\deg g > 0$. We have the field extension $K\subseteq K[t]/\bra g\ket$. Let $\alpha_1 = \bar t$, so that $K[t]/\bra g\ket = K(\alpha_1)$. Then $g(\alpha_1) = 0$, which implies that $f(\alpha_1) = 0$. Hence we can write $f = (t - \alpha_1) h$ in $K(\alpha_1)[t]$. Note that $\deg h < \deg f$. So we can repeat the process on $h$ iteratively to get a field extension $K\subseteq K(\alpha_1, \cdots, \alpha_n)$ in which $f$ splits. This $K(\alpha_1, \cdots, \alpha_n)$ is a splitting field of $f$.
+
+ \item Assume $L$ and $L'$ are both splitting fields of $f$ over $K$. We want to find a $K$-isomorphism from $L$ to $L'$.
+
 Pick the largest $F, F'$ such that $K \subseteq F\subseteq L$ and $K\subseteq F' \subseteq L'$ are field extensions and there is a $K$-isomorphism $\psi: F \to F'$. By ``largest'', we mean we want to maximize $[F:K]$.
+
+ We want to show that we must have $F = L$. Then we are done because this means that $F'$ is a splitting field, and hence $F' = L'$.
+
+ So suppose $F\not= L$. We will try to produce a larger $\tilde{F}$ with $K$-isomorphism $\tilde{F} \to \tilde{F}' \subseteq L'$.
+
 Since $F\not= L$, we know that there is some $\alpha \in \Root_f(L)$ such that $\alpha\not\in F$. Let $g \in F[t]$ be the minimal polynomial of $\alpha$ over $F$. Then $g$ is irreducible with $\deg g > 0$, and $g \mid f$ in $F[t]$, since $f(\alpha) = 0$. Say $f = gh$.
+
 Now we know there is an isomorphism $F[t]/\bra g\ket \to F(\alpha)$ given by $\bar t \mapsto \alpha$. The isomorphism $\psi: F \to F'$ extends to an isomorphism $\mu: F[t] \to F'[t]$. Then since the coefficients of $f$ are in $K$, we have $f = \mu(f) = \mu(g)\mu(h)$. So $\mu(g) \mid f$ in $F'[t]$. Since $g$ is irreducible in $F[t]$, $\mu(g)$ is irreducible in $F'[t]$. As $f$ splits over $L'$, there is some $\alpha' \in \Root_{\mu(g)}(L') \subseteq \Root_f(L')$, and an isomorphism $F'[t]/\bra \mu(g)\ket \to F'(\alpha')$.
+
+ Now $\mu$ induces a $K$-isomorphism $F[t]/\bra g\ket \to F'[t]/\bra \mu(g)\ket$, which in turn induces a $K$-isomorphism $F(\alpha) \to F'(\alpha')$. This contradicts the maximality of $F$. So we must have had $F = L$.\qedhere
+ \end{enumerate}
+\end{proof}
Note that the splitting field is unique only up to isomorphism. We could be quotienting by different polynomials and still get the same splitting field.
+\begin{eg}
 $\Q(\sqrt{7})$ is a splitting field of $t^2 - 7\in \Q[t]$. At the same time, $\Q(\sqrt{7})$ is also a splitting field of $t^2 + 3t + \frac{1}{2} \in \Q[t]$, whose roots are $\frac{-3 \pm \sqrt{7}}{2}$.
+\end{eg}
+\subsection{Algebraic closures}
+The splitting field gives us the field with the root of one particular polynomial. We could be greedy and ask for the roots for \emph{all} polynomials, and get the \emph{algebraic closure}. The algebraic closure will not be of much use in this course, but is a nice thing to know about. The major theorems would be the existence and uniqueness of algebraic closures.
+
+\begin{defi}[Algebraically closed field]
+ A field $L$ is \emph{algebraically closed} if for all $f\in L[t]$, we have
+ \[
+ f = a(t - \alpha_1)(t - \alpha_2) \cdots (t - \alpha_n)
+ \]
+ for some $a, \alpha_i \in L$. In other words, $L$ contains all roots of its polynomials.
+
+ Let $L/K$ be a field extension. We say $L$ is an algebraic closure of $K$ if
+ \begin{itemize}
+ \item $L$ is algebraic over $K$
+ \item $L$ is algebraically closed.
+ \end{itemize}
+\end{defi}
+
\begin{eg}
 $L$ is an algebraically closed field iff every finite extension $L\subseteq E$ satisfies $E = L$.

 Indeed, if $L$ is algebraically closed and $L \subseteq E$ is finite, then $E$ is algebraic over $L$, and hence must be $L$. Conversely, if every finite extension of $L$ is trivial, then for any irreducible $f\in L[t]$, the field $L[t]/\bra f\ket$ is a finite extension of $L$, hence equal to $L$. So $\deg f = 1$, and every polynomial in $L[t]$ splits.
\end{eg}
+
+\begin{eg}
 $\C$ is algebraically closed by the fundamental theorem of algebra, and is the algebraic closure of $\R$ (but not of $\Q$, since $\C$ is not algebraic over $\Q$).
+\end{eg}
+
+Before we prove our next theorem, we need the following technical lemma:
+\begin{lemma}
 If $R$ is a non-zero commutative ring, then it has a maximal ideal. In particular, if $I\not= R$ is an ideal of $R$, then there is a maximal ideal of $R$ that contains $I$.
+\end{lemma}
+
\begin{proof}
 Let
 \[
 \mathcal{P} = \{J: J\text{ is an ideal of }R, J \not= R\}.
 \]
 If $J_1 \subseteq J_2 \subseteq \cdots$ is any chain in $\mathcal{P}$, then $J = \bigcup J_i$ is an ideal with $J \in \mathcal{P}$, since $1 \not\in J_i$ for each $i$ implies $1\not\in J$. By Zorn's lemma, $\mathcal{P}$ has a maximal element, which is a maximal ideal of $R$. To find one containing $I$, run the same argument with the ideals $J\supseteq I$ instead.
\end{proof}
+
+\begin{thm}[Existence of algebraic closure]
+ Any field $K$ has an algebraic closure.
+\end{thm}
+
+\begin{proof}
+ Let
+ \[
+ \mathcal{A} = \{\lambda = (f, j): f\in K[t] \text{ irreducible monic}, 1 \leq j \leq \deg f\}.
+ \]
+ We can think of $j$ as labelling which root of $f$ we want. For each $\lambda \in \mathcal{A}$, we assign a variable $t_\lambda$. We take
+ \[
+ R = K[t_\lambda: \lambda \in \mathcal{A}]
+ \]
 to be the polynomial ring over $K$ in the variables $t_\lambda$. This $R$ contains ``roots'' for all the polynomials over $K$. However, we've got a bit too much. For example (if $K = \Q$), in $R$, the variables corresponding to $\sqrt{3}$ and $\sqrt{3} + 1$ would be put down as separate, unrelated variables. So we want to quotient $R$ by something.
+
+ For every monic and irreducible $f \in K[t]$, we define
+ \[
+ \tilde{f} = f - \prod_{j = 1}^{\deg f} (t - t_{(f, j)}) \in R[t].
+ \]
+ If we want the $t_{(f, j)}$ to be roots of $f$, then $\tilde{f}$ should vanish for all $f$. Denote the coefficient of $t^\ell$ in $\tilde{f}$ by $b_{(f, \ell)}$. Then we want $b_{(f, \ell)} = 0$ for all $f, \ell$.
+
+ To do so, let $I\subseteq R$ be the ideal generated by all such coefficients. We now want to quotient $R$ by $I$. We first have to check that $I\not= R$.
+
+ Suppose not. So there are $b_{(f_1, \ell_1)}, \cdots, b_{(f_r, \ell_r)}$ with $g_1, \cdots, g_r \in R$ such that
+ \[
+ g_1 b_{(f_1, \ell_1)} + \cdots + g_r b_{(f_r, \ell_r)} = 1.\tag{$*$}
+ \]
+ We will attempt to reach a contradiction by constructing a homomorphism $\phi$ that sends each $b_{(f_i, \ell_i)}$ to $0$.
+
+ Let $E$ be a splitting field of $f_1f_2\cdots f_r$. So in $E[t]$, for each $i$, we can write
+ \[
+ f_i = \prod_{j = 1}^{\deg f_i} (t - \alpha_{i, j}).
+ \]
 Then we define a homomorphism $\phi: R \to E$ by
 \[
 \phi(t_\lambda) =
 \begin{cases}
 \alpha_{i, j} & \lambda = (f_i, j)\text{ for some }i, j,\\
 0 & \text{otherwise.}
 \end{cases}
 \]
+ This induces a homomorphism $\tilde{\phi}: R[t] \to E[t]$.
+
 Now we compute, noting that $\tilde{\phi}(f_i) = f_i$ since the coefficients of $f_i$ lie in $K$:
+ \begin{align*}
+ \tilde{\phi}(\tilde{f}_i) &= \tilde{\phi}(f_i) - \prod_{j = 1}^{\deg f_i} \tilde{\phi}(t - t_{(f_i, j)})\\
+ &= f_i - \prod_{j = 1}^{\deg f_i} (t - \alpha_{i, j})\\
+ &= 0
+ \end{align*}
 So $\phi(b_{(f_i, \ell_i)}) = 0$, as $b_{(f_i, \ell_i)}$ is a coefficient of $\tilde{f}_i$.
+
+ Now we apply $\phi$ to $(*)$ to obtain
+ \[
+ \phi(g_1 b_{(f_1, \ell_1)} + \cdots + g_r b_{(f_r, \ell_r)}) = \phi(1).
+ \]
 But this is a contradiction, since the left-hand side is $0$ while the right is $1$. Hence we must have $I \not= R$.
+
 We would like to quotient by $I$, but we have to be a bit more careful, since the quotient need not be a field. Instead, pick a maximal ideal $M$ containing $I$, and consider $L = R/M$. Then $L$ is a field. Moreover, since we couldn't have quotiented out anything in $K$ (any ideal containing a non-zero element of $K$ would automatically be all of $R$), this gives a field extension $L/K$. We want to show that $L$ is an algebraic closure of $K$.
+
+ Now we show that $L$ is algebraic over $K$. This should all work out smoothly, since that's how we constructed $L$. First we pick $\alpha\in L$. Since $L = R/M$ and $R$ is generated by the terms $t_{\lambda}$, there is some $(f_1, j_1) ,\cdots, (f_r, j_r)$ such that
+ \[
+ \alpha \in K(\bar{t}_{(f_1, j_1)}, \cdots, \bar{t}_{(f_r, j_r)}).
+ \]
+ So $\alpha$ is algebraic over $K$ if each $\bar{t}_{(f_i, j_i)}$ is algebraic over $K$. To show this, note that $\tilde{f}_i = 0$, since we've quotiented out each of its coefficients. So by definition,
+ \[
+ 0 = f_i(t) - \prod_{j = 1}^{\deg f_i} (t - \bar{t}_{(f_i, j)}).
+ \]
+ So $f_i(\bar{t}_{(f_i, j_i)}) = 0$. So done.
+
+ Finally, we have to show that $L$ is algebraically closed. Suppose $L\subseteq E$ is a finite (and hence algebraic) extension. We want to show that $L = E$.
+
+ Consider arbitrary $\beta \in E$. Then $\beta$ is algebraic over $L$, say a root of $f\in L[t]$. Since every coefficient of $f$ can be found in some finite extension $K(\bar{t}_{(f_1, j_1)}, \cdots, \bar{t}_{(f_r, j_r)})$, there is a finite extension $F$ of $K$ that contains all coefficients of $f$. Since $F(\beta)$ is a finite extension of $F$, we know $F(\beta)$ is a finite and hence algebraic extension of $K$. In particular, $\beta$ is algebraic over $K$.
+
+ Let $P_\beta$ be the minimal polynomial of $\beta$ over $K$. Since every polynomial in $K[t]$ splits over $L$ by construction ($f(t) = \prod (t - \bar{t}_{(f, j)})$), the roots of $P_\beta$ must lie in $L$. In particular, $\beta \in L$. So $L = E$.
+\end{proof}
+
+\begin{thm}[Uniqueness of algebraic closure]
+ Any field $K$ has a unique algebraic closure up to $K$-isomorphism.
+\end{thm}
+
+This is the same proof as the proof that the splitting field is unique --- given two algebraic closures, we find the largest subfield of one that embeds into the other. However, since there could be infinitely many subfields, we have to apply Zorn's lemma to obtain a maximal such subfield.
+
+\begin{proof}(sketch)
+ Suppose $L, L'$ are both algebraic closures of $K$. Let
+ \[
+ \mathcal{H} = \{(F, \psi): K\subseteq F\subseteq L, \psi \in \Hom_K(F, L')\}.
+ \]
+ We define a partial order on $\mathcal{H}$ by $(F_1, \psi_1) \leq (F_2, \psi_2)$ if $F_1 \subseteq F_2$ and $\psi_1= \psi_2|_{F_1}$.
+
+ We have to show that chains have upper bounds. Given a chain $\{(F_\alpha, \psi_\alpha)\}$, we define
+ \[
+ F = \bigcup F_\alpha,\quad \psi(x) = \psi_\alpha(x)\text{ for }x \in F_\alpha.
+ \]
+ Then $(F, \psi) \in \mathcal{H}$. Then applying Zorn's lemma, there is a maximal element of $\mathcal{H}$, say $(F, \psi)$.
+
+ Finally, we have to prove that $F = L$, and that $\psi(L) = L'$. Suppose $F \not= L$. Then we attempt to produce a larger field $\tilde{F}$ and a $K$-homomorphism $\tilde{F} \to L'$ extending $\psi$. Since $F \not= L$, there is some $\alpha \in L\setminus F$. Since $L$ is an algebraic extension of $K$, $\alpha$ is algebraic over $F$. So there is some irreducible $g \in F[t]$ such that $\deg g > 0$ and $g(\alpha) = 0$, namely the minimal polynomial of $\alpha$ over $F$.
+
+ Now there is an isomorphism $F[t]/\bra g\ket \to F(\alpha)$ defined by $\bar{t} \mapsto \alpha$. Write $F' = \psi(F) \subseteq L'$. The isomorphism $\psi: F \to F'$ extends to an isomorphism $\mu: F[t] \to F'[t]$, and thus to an isomorphism $F[t]/\bra g\ket \to F'[t]/\bra \mu(g) \ket$. Now if $\alpha' \in L'$ is a root of $\mu(g)$, which exists since $L'$ is algebraically closed, then $F'[t]/\bra \mu(g)\ket \cong F'(\alpha')$. Composing these gives an isomorphism $F(\alpha) \to F'(\alpha')$ that restricts to $\psi$ on $F$. This contradicts the maximality of $(F, \psi)$.
+
+ Hence $F = L$. Finally, $\psi(L)$ is an algebraic closure of $K$ contained in $L'$, and $L'$ is algebraic over $\psi(L)$, so we must have $\psi(L) = L'$. So done.
+\end{proof}
+
+\subsection{Separable extensions}
+Here we will define what it means for an extension to be separable. This is done via defining separable polynomials, and then an extension is separable if all minimal polynomials are separable.
+
+At first, the definition of separability might seem absurd --- surely every polynomial should be separable. Indeed, polynomials that are not separable tend to be weird, and our theories often break without separability. Hence it is important to figure out when polynomials are separable, and when they are not. Fortunately, we will end up with a result that tells us exactly when a polynomial is not separable, and this is just a very small, specific class. In particular, in fields of characteristic zero, all polynomials are separable.
+
+\begin{defi}[Separable polynomial]
+ Let $K$ be a field, $f\in K[t]$ non-zero, and $L$ a splitting field of $f$. For an irreducible $f$, we say it is \emph{separable} if $f$ has no repeated roots, i.e.\ $|\Root_f(L)| = \deg f$. For a general polynomial $f$, we say it is \emph{separable} if all its irreducible factors in $K[t]$ are separable.
+\end{defi}
+It should be obvious from definition that if $P$ is separable and $Q \mid P$, then $Q$ is also separable.
+
+Note that some people instead define a separable polynomial to be one with no repeated roots, so $(t - 2)^2$ over $\Q$ would not be separable under this definition.
+
+\begin{eg}
+ Any linear polynomial $t - a$ (with $a \in K$) is separable.
+\end{eg}
+This is, however, not a very interesting example. To get to more interesting examples, we need even more preparation.
+
+\begin{defi}[Formal derivative]
+ Let $K$ be a field, $f \in K[t]$. \emph{(Formal) differentiation} is the $K$-linear map $K[t] \to K[t]$ defined by $t^n \mapsto n t^{n - 1}$.
+
+ The image of a polynomial $f$ is the \emph{derivative} of $f$, written $f'$.
+\end{defi}
+This is similar to how we differentiate real or complex polynomials (in case that isn't obvious).
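+
+One difference from the real and complex case is worth noting: in characteristic $p$, a non-constant polynomial can have zero derivative, since for instance
+\[
+  (t^p)' = p t^{p - 1} = 0.
+\]
+This is precisely the phenomenon behind inseparability, as we will see shortly.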
+
+The following lemma summarizes the properties of the derivative we need.
+\begin{lemma}
+ Let $K$ be a field, $f, g\in K[t]$. Then
+ \begin{enumerate}
+ \item $(f + g)' = f' + g'$, $(fg)' = fg' + f'g$.
+ \item Assume $f \not= 0$ and $L$ is a splitting field of $f$. Then $f$ has a repeated root in $L$ if and only if $f$ and $f'$ have a common (non-constant) irreducible factor in $K[t]$ (if and only if $f$ and $f'$ have a common root in $L$).
+ \end{enumerate}
+\end{lemma}
+This will allow us to show when irreducible polynomials are separable.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $(f + g)' = f' + g'$ is true by linearity.
+
+ To show that $(fg)' = fg' + f'g$, we use linearity to reduce to the case where $f = t^n, g = t^m$. Then both sides are $(n + m) t^{n + m - 1}$. So this holds.
+ \item First assume that $f$ has a repeated root. So let $f = (t - \alpha)^2 h \in L[t]$ where $\alpha \in L$. Then $f' = 2(t - \alpha)h + (t - \alpha)^2 h' = (t - \alpha)(2h + (t - \alpha)h')$. So $f(\alpha) = f'(\alpha) = 0$. So $f$ and $f'$ have common roots. However, we want a common irreducible factor in $K[t]$, not $L[t]$. So we let $P_\alpha$ be the minimal polynomial of $\alpha$ over $K$. Then $P_\alpha \mid f$ and $P_\alpha \mid f'$. So done.
+
+ Conversely, suppose $e$ is a common irreducible factor of $f$ and $f'$ in $K[t]$, with $\deg e > 0$. Pick $\alpha \in \Root_e(L)$. Then $\alpha \in \Root_f(L) \cap \Root_{f'}(L)$.
+
+ Since $\alpha$ is a root of $f$, we can write $f = (t - \alpha)q \in L[t]$ for some $q$. Then
+ \[
+ f' = (t - \alpha) q' + q.
+ \]
+ Since $(t - \alpha) \mid f'$, we must have $(t - \alpha) \mid q$. So $(t - \alpha)^2 \mid f$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+Recall that the characteristic $\Char K$ of a field $K$ is the minimal $p > 0$ such that $p \cdot 1_K = 0$. If no such $p$ exists, we say $\Char K = 0$. For example, $\Q$ has characteristic $0$ while $\Z_p$ has characteristic $p$.
+\begin{cor}
+ Let $K$ be a field, $f \in K[t]$ non-zero irreducible. Then
+ \begin{enumerate}
+ \item If $\Char K = 0$, then $f$ is separable.
+ \item If $\Char K = p > 0$, then $f$ is not separable iff $\deg f > 0$ and $f \in K[t^p]$. For example, $t^{2p} + 3t^p + 1$ is not separable.
+ \end{enumerate}
+\end{cor}
+\begin{proof}
+ By definition, for irreducible $f$, $f$ is not separable iff $f$ has a repeated root. So by our previous lemma, $f$ is not separable if and only if $f$ and $f'$ have a common irreducible factor of positive degree in $K[t]$. However, since $f$ is irreducible, such a common factor must be an associate of $f$ itself, i.e.\ $f \mid f'$. Since $\deg f' < \deg f$, this can happen if and only if $f' = 0$.
+
+ To make it more explicit, we can write
+ \[
+ f = a_n t^n + \cdots + a_1 t + a_0.
+ \]
+ Then we can write
+ \[
+ f' = n a_n t^{n - 1} + \cdots + a_1.
+ \]
+ Now $f' = 0$ if and only if $i a_i = 0$ for all $i \geq 1$.
+ \begin{enumerate}
+ \item Suppose $\Char K = 0$. If $\deg f = 0$, then $f$ is trivially separable. If $\deg f > 0$, then $f$ is not separable iff $f' = 0$ iff $i a_i = 0$ for all $i$ iff $a_i = 0$ for all $i \geq 1$ (since we are in characteristic zero). But then $\deg f = 0$, a contradiction. So $f$ must be separable.
+ \item If $\deg f = 0$, then $f$ is trivially separable. So assume $\deg f > 0$.
+
+ Then $f$ is not separable $\Leftrightarrow$ $f' = 0$ $\Leftrightarrow$ $i a_i = 0$ for all $i \geq 1$ $\Leftrightarrow$ $a_i = 0$ for all $i \geq 1$ not divisible by $p$ $\Leftrightarrow$ $f \in K[t^p]$.\qedhere
+ \end{enumerate}
+\end{proof}
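+
+As a quick sanity check, the polynomial $t^{2p} + 3t^p + 1$ from the corollary lies in $K[t^p]$, and indeed its derivative vanishes in characteristic $p$:
+\[
+  (t^{2p} + 3t^p + 1)' = 2p t^{2p - 1} + 3p t^{p - 1} = 0.
+\]
+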
+Using this, it should be easy to find lots of examples of separable polynomials.
+
+\begin{defi}[Separable elements and extensions]
+ Let $K \subseteq L$ be an algebraic field extension. We say $\alpha \in L$ is \emph{separable} over $K$ if $P_\alpha$ is separable, where $P_\alpha$ is the minimal polynomial of $\alpha$ over $K$.
+
+ We say $L$ is \emph{separable} over $K$ (or $K\subseteq L$ is \emph{separable}) if all $\alpha \in L$ are separable.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item The extensions $\Q \subseteq \Q(\sqrt{2})$ and $\R \subseteq \C$ are separable because $\Char \Q = \Char \R = 0$. So we can apply our previous corollary.
+ \item Let $L = \F_p(s)$ be the field of rational functions in $s$ over $\F_p$ (which is the fraction field of $\F_p[s]$), and $K = \F_p(s^p)$. We have $K \subseteq L$, and $L = K(s)$. Since $s^p \in K$, $s$ is a root of $t^p - s^p \in K[t]$. So $s$ is algebraic over $K$ and hence $L$ is algebraic over $K$. In fact $P_s = t^p - s^p$ is the minimal polynomial of $s$ over $K$.
+
+ Now $t^p - s^p = (t - s)^p$ since the field has characteristic $p$. So $\Root_{t^p - s^p}(L) = \{s\}$. So $P_s$ is not separable.
+ \end{itemize}
+\end{eg}
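+
+The identity $t^p - s^p = (t - s)^p$ used above is an instance of the ``freshman's dream'': in characteristic $p$, the binomial coefficients $\binom{p}{k}$ for $0 < k < p$ are all divisible by $p$, so
+\[
+  (t - s)^p = \sum_{k = 0}^{p} \binom{p}{k} t^k (-s)^{p - k} = t^p + (-s)^p = t^p - s^p,
+\]
+where the last equality also holds for $p = 2$ because $-1 = 1$ in characteristic $2$.
+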
+As mentioned in the beginning, separable extensions are nice, or at least non-weird. One particularly nice result about separable extensions is that all finite separable extensions are simple, i.e.\ if $K \subseteq L$ is finite separable, then $L = K(\alpha)$ for some $\alpha \in L$. This is what we will be working towards for the remainder of this section.
+
+\begin{eg}
+ Consider $\Q \subseteq \Q(\sqrt{2}, \sqrt{3})$. This is a finite separable extension. So we should be able to generate $\Q(\sqrt{2}, \sqrt{3})$ by just one element instead of two. In fact, we can use $\alpha = \sqrt{2} + \sqrt{3}$, since we have
+ \[
+ \alpha^3 = 11\sqrt{2} + 9\sqrt{3} = 2\sqrt{2} + 9 \alpha.
+ \]
+ Since $\alpha^3, \alpha \in \Q(\alpha)$, we know that $\sqrt{2} = \frac{1}{2}(\alpha^3 - 9\alpha) \in \Q(\alpha)$. So we also have $\sqrt{3} = \alpha - \sqrt{2} \in \Q(\alpha)$.
+\end{eg}
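+
+Alternatively, we can find the minimal polynomial of $\alpha = \sqrt{2} + \sqrt{3}$ directly: since $\alpha^2 = 5 + 2\sqrt{6}$, we have $(\alpha^2 - 5)^2 = 24$, i.e.
+\[
+  \alpha^4 - 10\alpha^2 + 1 = 0.
+\]
+One can check that $t^4 - 10t^2 + 1$ is irreducible over $\Q$, so $[\Q(\alpha):\Q] = 4 = [\Q(\sqrt{2}, \sqrt{3}):\Q]$, which again forces $\Q(\alpha) = \Q(\sqrt{2}, \sqrt{3})$.
+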
+In general, it is not easy to find an $\alpha$ that works, but our later result will show that such an $\alpha$ exists.
+
+Before that, we will prove some results about the $K$-homomorphisms.
+\begin{lemma}
+ Let $L/F/K$ be finite extensions, and $E/K$ be a field extension. Then for all $\alpha \in L$, we have
+ \[
+ |\Hom_K(F(\alpha), E)| \leq [F(\alpha): F] |\Hom_K(F, E)|.
+ \]
+\end{lemma}
+Note that if $P_\alpha$ is the minimal polynomial of $\alpha$ over $F$, then $[F(\alpha): F] = \deg P_\alpha$. So we can interpret this intuitively as follows: for each $\psi \in \Hom_K(F, E)$, we can obtain a $K$-homomorphism in $\Hom_K(F(\alpha), E)$ by sending things in $F$ according to $\psi$, and then send $\alpha$ to any root of $P_\alpha$. Then there are at most $[F(\alpha): F]$ $K$-homomorphisms generated this way. Moreover, each $K$-homomorphism in $\Hom_K(F(\alpha), E)$ can be created this way. So we get this result.
+
+\begin{proof}
+ We show that for each $\psi \in \Hom_K(F, E)$, there are at most $[F(\alpha):F]$ $K$-homomorphisms in $\Hom_K(F(\alpha), E)$ that restrict to $\psi$ on $F$. Since each $K$-homomorphism in $\Hom_K(F(\alpha), E)$ has to restrict to something, it follows that there are at most $[F(\alpha): F] |\Hom_K(F, E)|$ $K$-homomorphisms from $F(\alpha)$ to $E$.
+
+ Now let $P_\alpha$ be the minimal polynomial for $\alpha$ in $F$, and let $\psi \in \Hom_K(F, E)$. To extend $\psi$ to a morphism $F(\alpha) \to E$, we need to decide where to send $\alpha$. So there should be some sort of correspondence
+ \[
+ \Root_{P_\alpha}(E)\longleftrightarrow\{\phi \in \Hom_K(F(\alpha), E): \phi|_F = \psi\}.
+ \]
+ Except that the previous sentence makes no sense, since $P_\alpha \in F[t]$ but we are not told that $F$ is a subfield of $E$. So we use our $\psi$ to ``move'' our things to $E$.
+
+ We let $M = \psi(F) \subseteq E$, and $q \in M[t]$ be the image of $P_\alpha$ under the homomorphism $F[t] \to M[t]$ induced by $\psi$. As we have previously shown, there is a one-to-one correspondence
+ \[
+ \Root_q(E) \longleftrightarrow \Hom_M(M[t]/\bra q \ket, E).
+ \]
+ What we really want to show is the correspondence between $\Root_q(E)$ and the $K$-homomorphisms $F[t]/\bra P_\alpha\ket \to E$ that restrict to $\psi$ on $F$. Let's ignore the quotient for the moment and think: what does it mean for $\phi \in \Hom_K(F[t], E)$ to restrict to $\psi$ on $F$? We know that any $\phi \in \Hom_K(F[t], E)$ is uniquely determined by the values it takes on $F$ and $t$. Hence if $\phi|_F = \psi$, then our $\phi$ must send $F$ to $\psi(F) = M$, and can send $t$ to anything in $E$. This corresponds exactly to the $M$-homomorphisms $M[t] \to E$, which fix $M$ and send $t$ to that ``anything'' in $E$.
+
+ The situation does not change when we put back the quotient. Changing from $M[t] \to E$ to $M[t]/\bra q\ket \to E$ just requires that the image of $t$ must be a root of $q$. On the other hand, using $F[t]/\bra P_\alpha\ket$ instead of $F[t]$ requires that $\phi(P_\alpha(t)) = 0$. But $\phi(P_\alpha(t)) = q(\phi(t))$, since $\phi$ acts on the coefficients of $P_\alpha$ via $\psi$. So this just requires that the image of $t$ is a root of $q$ as well. So we get the one-to-one correspondence
+ \[
+ \Hom_M(M[t]/\bra q \ket, E) \longleftrightarrow \{\phi \in \Hom_K(F[t]/\bra P_\alpha \ket, E): \phi|_F = \psi\}.
+ \]
+ Since $F[t]/\bra P_\alpha\ket = F(\alpha)$, there is a one-to-one correspondence
+ \[
+ \Root_q(E) \longleftrightarrow \{\phi \in \Hom_K(F(\alpha), E): \phi|_F = \psi\}.
+ \]
+ So done.
+\end{proof}
+
+\begin{thm}
+ Let $L/K$ and $E/K$ be field extensions. Then
+ \begin{enumerate}
+ \item $|\Hom_K(L, E)| \leq [L:K]$. In particular, $|\Aut_K(L)| \leq [L:K]$.
+ \item If equality holds in (i), then for any intermediate field $K \subseteq F \subseteq L$:
+ \begin{enumerate}
+ \item We also have $|\Hom_K(F, E)| = [F:K]$.
+ \item The map $\Hom_K(L, E) \to \Hom_K(F, E)$ by restriction is surjective.
+ \end{enumerate}
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We have previously shown we can find a sequence of field extensions
+ \[
+ K = F_0 \subseteq F_1 \subseteq\cdots \subseteq F_n = L
+ \]
+ such that for each $i$, there is some $\alpha_i$ such that $F_i = F_{i - 1}(\alpha_i)$. Then by our previous lemma, we have
+ \begin{align*}
+ |\Hom_K(L, E)| &\leq [F_n:F_{n - 1}] |\Hom_K(F_{n - 1}, E)|\\
+ &\leq [F_n: F_{n - 1}][F_{n - 1}: F_{n - 2}] |\Hom_K(F_{n - 2}, E)|\\
+ &\quad \vdots\\
+ &\leq [F_n:F_{n - 1}][F_{n - 1}:F_{n - 2}] \cdots [F_1:F_0] |\Hom_K(F_0, E)|\\
+ &= [F_n:F_0]\\
+ &= [L:K]
+ \end{align*}
+ \item
+ \begin{enumerate}
+ \item If equality holds in (i), then every inequality in the proof above has to be an equality. Instead of directly decomposing $K \subseteq L$ as a chain above, we can first decompose $K \subseteq F$, then $F \subseteq L$, then join them together. Then we can assume that $F = F_i$ for some $i$. Then we get
+ \[
+ |\Hom_K(L, E)| = [L:F] |\Hom_K(F, E)| = [L:K].
+ \]
+ Then the tower law says
+ \[
+ |\Hom_K(F, E)| = [F:K].
+ \]
+ \item By the proof of the lemma, for each $\psi \in \Hom_K(F, E)$, we know that
+ \[
+ |\{\phi \in \Hom_K(L, E): \phi|_F = \psi\}| \leq [L:F].\tag{$*$}
+ \]
+ As we know that
+ \[
+ |\Hom_K(F, E)| = [F:K],\quad |\Hom_K(L, E)| = [L:K]
+ \]
+ we must have had equality in $(*)$, or else we won't have enough elements. So in particular $|\{\phi \in \Hom_K(L, E): \phi|_F = \psi\}| = [L:F] \geq 1$. So the map is surjective.\qedhere
+ \end{enumerate}%\qedhere
+ \end{enumerate}
+\end{proof}
+
+With this result, we can prove the following result characterizing separable \emph{extensions}.
+\begin{thm}
+ Let $L/K$ be a finite field extension. Then the following are equivalent:
+ \begin{enumerate}
+ \item There is some extension $E$ of $K$ such that $|\Hom_K(L, E)| = [L:K]$.
+ \item $L/K$ is separable.
+ \item $L = K(\alpha_1, \cdots, \alpha_n)$ such that $P_{\alpha_i}$, the minimal polynomial of $\alpha_i$ over $K$, is separable for all $i$.
+ \item $L = K(\alpha_1, \cdots, \alpha_n)$ such that $R_{\alpha_i}$, the minimal polynomial of $\alpha_i$ over $K(\alpha_1, \cdots, \alpha_{i - 1})$, is separable for all $i$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): For all $\alpha \in L$, if $P_\alpha$ is the minimal polynomial of $\alpha$ over $K$, then since $K(\alpha)$ is a subfield of $L$, by our previous theorem, we have
+ \[
+ |\Hom_K(K(\alpha), E)| = [K(\alpha): K].
+ \]
+ We also know that $|\Root_{P_\alpha}(E)| = |\Hom_K(K(\alpha), E)|$, and that $[K(\alpha): K] = \deg P_\alpha$. So we know that $P_\alpha$ has no repeated roots in any splitting field. So $P_\alpha$ is separable. So $L/K$ is a separable extension.
+ \item (ii) $\Rightarrow$ (iii): Obvious from the definition.
+ \item (iii) $\Rightarrow$ (iv): Since $R_{\alpha_i}$ is the minimal polynomial of $\alpha_i$ over $K(\alpha_1, \cdots, \alpha_{i - 1})$, we know that $R_{\alpha_i} \mid P_{\alpha_i}$. So $R_{\alpha_i}$ is separable as $P_{\alpha_i}$ is separable.
+ \item (iv) $\Rightarrow$ (i): Let $E$ be the splitting field of $P_{\alpha_1}\cdots P_{\alpha_n}$. We do induction on $n$ to show that this satisfies the properties we want. If $n = 1$, then $L = K(\alpha_1)$. Then we have
+ \[
+ |\Hom_K(L, E)| = |\Root_{P_{\alpha_1}}(E)| = \deg P_{\alpha_1} = [K(\alpha_1): K] = [L:K].
+ \]
+ We now induct on $n$. So we can assume that (iv) $\Rightarrow$ (i) holds for a smaller number of generators. For convenience, we write $K_i = K(\alpha_1, \cdots, \alpha_i)$. Then we have
+ \[
+ |\Hom_K(K_{n - 1}, E)| = [K_{n - 1}: K].
+ \]
+ We also know that
+ \[
+ |\Hom_K(K_n, E)| \leq [K_n: K_{n - 1}] |\Hom_K(K_{n - 1}, E)|.
+ \]
+ What we actually want is equality. We now re-do (parts of) the proof of this result, and see that separability guarantees that equality holds. If we pick $\psi \in \Hom_K(K_{n - 1}, E)$, then there is a one-to-one correspondence between $\{\phi \in \Hom_K(K_n, E): \phi|_{K_{n - 1}} = \psi\}$ and $\Root_q(E)$, where $q \in M[t]$ is defined as the image of $R_{\alpha_n}$ under $K_{n - 1}[t] \to M[t]$, and $M$ is the image of $\psi$.
+
+ Since $P_{\alpha_n} \in K[t]$ and $R_{\alpha_n} \mid P_{\alpha_n}$, we have $q \mid P_{\alpha_n}$. So $q$ splits over $E$. Moreover, $q$ is separable since $R_{\alpha_n}$ is. So we get that
+ \[
+ |\Root_q(E)| = \deg q = \deg R_{\alpha_n} = [K_n: K_{n - 1}].
+ \]
+ Hence we know that
+ \begin{align*}
+ |\Hom_K(L, E)| &= [K_n: K_{n - 1}] | \Hom_K(K_{n - 1}, E)|\\
+ &= [K_n: K_{n - 1}][K_{n - 1}: K]\\
+ &= [K_n: K].
+ \end{align*}
+ So done.\qedhere
+ \end{itemize}
+\end{proof}
+
+Before we finally get to the primitive element theorem, we prove the following lemma. This will enable us to prove the trivial case of the primitive element theorem, and will also be very useful later on.
+\begin{lemma}
+ Let $L$ be a field, and let $L^* = L\setminus \{0\}$ be the multiplicative group of $L$. If $G$ is a finite subgroup of $L^*$, then $G$ is cyclic.
+\end{lemma}
+
+\begin{proof}
+ Since $L^*$ is abelian, $G$ is also abelian. Then by the structure theorem on finite abelian groups,
+ \[
+ G \cong \frac{\Z}{\bra n_1\ket} \times \cdots \times \frac{\Z}{\bra n_r\ket},
+ \]
+ for some $n_i \in \N$. Let $m$ be the least common multiple of $n_1, \cdots, n_r$, and let $f = t^m - 1$.
+
+ If $\alpha \in G$, then $\alpha^m = 1$. So $f(\alpha) = 0$ for all $\alpha \in G$. Therefore
+ \[
+ |G| = n_1 \cdots n_r \leq |\Root_f(L)| \leq \deg f = m.
+ \]
+ Since $m$ is the least common multiple of $n_1, \cdots, n_r$, we must have $m = n_1 \cdots n_r$ and thus $(n_i, n_j) = 1$ for all $i \not= j$. Then by the Chinese remainder theorem, we have
+ \[
+ G \cong \frac{\Z}{\bra n_1\ket} \times \cdots \times \frac{\Z}{\bra n_r\ket} \cong \frac{\Z}{\bra n_1 \cdots n_r\ket}.
+ \]
+ So $G$ is cyclic.
+\end{proof}
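+
+For example, $\F_7^*$ is a cyclic group of order $6$, generated by $3$:
+\[
+  3^1 = 3,\quad 3^2 = 2,\quad 3^3 = 6,\quad 3^4 = 4,\quad 3^5 = 5,\quad 3^6 = 1 \pmod 7.
+\]
+Note that the proof is not constructive: it gives no quicker way of finding a generator than trying elements one by one.
+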
+We now come to the main theorem of the lecture:
+\begin{thm}[Primitive element theorem]
+ Assume $L/K$ is a finite and separable extension. Then $L/K$ is simple, i.e.\ there is some $\alpha \in L$ such that $L = K(\alpha)$.
+\end{thm}
+
+\begin{proof}
+ At some point in our proof, we will require that $K$ is infinite. So we first deal with the case where $K$ is finite. If $K$ is finite, then $L$ is also finite (as $[L:K]$ is finite), which in turn implies $L^*$ is finite too. So by the lemma, $L^*$ is a cyclic group (since it is a finite subgroup of itself). So there is some $\alpha \in L^*$ such that every element in $L^*$ is a power of $\alpha$. So $L = K(\alpha)$.
+
+ So we focus on the case where $K$ is infinite. Also, assume $K \not= L$ (otherwise the result is trivial). Then since $L/K$ is a finite extension, there is some intermediate field $K\subseteq F\subsetneq L$ such that $L = F(\beta)$ for some $\beta$. Now $L/K$ is separable. So $F/K$ is also separable, and $[F:K] < [L:K]$. Then by induction on the degree of the extension, we can assume $F/K$ is simple. In other words, there is some $\lambda \in F$ such that $F = K(\lambda)$. Now $L = K(\lambda, \beta)$. In the rest of the proof, we will try to replace the two generators $\lambda, \beta$ with a single generator.
+
+ Unsurprisingly, the generator of $L$ will be chosen to be a linear combination of $\beta$ and $\lambda$. We set
+ \[
+ \alpha = \beta + a \lambda
+ \]
+ for some $a \in K$ to be chosen later. We will show that $K(\alpha) = L$. Actually, almost any choice of $a$ will do, but at the end of the proof, we will see which ones are the bad ones.
+
+ Let $P_\beta$ and $P_\lambda$ be the minimal polynomial of $\beta$ and $\lambda$ over $K$ respectively. Consider the polynomial $f = P_\beta(\alpha - at) \in K(\alpha)[t]$. Then we have
+ \[
+ f(\lambda) = P_\beta(\alpha - a\lambda) = P_\beta(\beta) = 0.
+ \]
+ On the other hand, $P_\lambda(\lambda) = 0$. So $\lambda$ is a common root of $P_\lambda$ and $f$.
+
+ We now want to pick an $a$ such that $\lambda$ is the \emph{only} common root of $f$ and $P_\lambda$ (in a splitting field $E$ of $P_\beta P_\lambda$). If so, then the gcd of $f$ and $P_\lambda$ in $K(\alpha)[t]$ can only have $\lambda$ as a root. But since $P_\lambda$ is separable, it has no repeated roots. So the gcd must be $t - \lambda$. In particular, we must have $\lambda \in K(\alpha)$. Since $\alpha = \beta + a \lambda$, it follows that $\beta = \alpha - a\lambda \in K(\alpha)$ as well, and so $K(\alpha) = L$.
+
+ Thus, it remains to choose an $a$ such that there are no other common roots. We work in a splitting field of $P_\beta P_\lambda$, and write
+ \begin{align*}
+ P_\beta &= (t - \beta_1)\cdots (t - \beta_m)\\
+ P_\lambda &= (t - \lambda_1) \cdots (t - \lambda_n).
+ \end{align*}
+ We wlog assume $\beta_1 = \beta$ and $\lambda_1 = \lambda$.
+
+ Now suppose $\theta$ is a common root of $f$ and $P_\lambda$. Then
+ \[
+ \begin{cases}
+ f(\theta) = 0\\
+ P_\lambda(\theta) = 0
+ \end{cases}
+ \Rightarrow\quad
+ \begin{cases}
+ P_\beta(\alpha - a\theta) = 0\\
+ P_\lambda(\theta) = 0
+ \end{cases}
+ \Rightarrow\quad
+ \begin{cases}
+ \alpha - a\theta = \beta_i\\
+ \theta = \lambda_j
+ \end{cases}
+ \]
+ for some $i, j$. Then we know that
+ \[
+ \alpha = \beta_i + a\lambda_j.
+ \]
+ However, by definition, we also know that
+ \[
+ \alpha = \beta + a\lambda
+ \]
+ Now we see how we need to choose $a$: we need
+ \[
+ \beta + a \lambda \not= \beta_i + a \lambda_j
+ \]
+ for all $i$ and all $j \not= 1$ (recall that the $\lambda_j$ are distinct, since $P_\lambda$ is separable). If they were equal, we would have
+ \[
+ a = \frac{\beta_i - \beta}{\lambda - \lambda_j},
+ \]
+ and there are only finitely many elements of this form. So we just have to pick an $a$ \emph{not} in this list, which is possible since $K$ is infinite.
+\end{proof}
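+
+To see the proof in action, take our earlier example $\Q(\sqrt{2}, \sqrt{3})/\Q$, with $\beta = \sqrt{2}$ and $\lambda = \sqrt{3}$, so that $\beta_i = \pm\sqrt{2}$ and $\lambda_j = \pm\sqrt{3}$. A choice of $a$ is bad exactly when
+\[
+  \sqrt{2} + a\sqrt{3} = \pm\sqrt{2} \pm a\sqrt{3}
+\]
+for some choice of signs other than $(+, +)$, which happens only when $a = 0$ or $a = -\sqrt{2}/\sqrt{3} \not\in \Q$. So any non-zero $a \in \Q$ works, and $a = 1$ recovers $\alpha = \sqrt{2} + \sqrt{3}$.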
+
+\begin{cor}
+ Any finite extension $L/K$ of fields of characteristic $0$ is simple, i.e.\ $L = K(\alpha)$ for some $\alpha \in L$.
+\end{cor}
+
+\begin{proof}
+ This follows from the fact that all extensions of fields of characteristic zero are separable.
+\end{proof}
+
+We have previously seen that $\Q(\sqrt{2}, \sqrt{3})/\Q$ is a simple extension, which of course also follows from this theorem. A more interesting example would be one in which this fails. We will need a field of non-zero characteristic.
+
+\begin{eg}
+ Let $L = \F_p(s, u)$ be the fraction field of $\F_p[s, u]$, and let $K = \F_p(s^p, u^p)$. Then $K \subseteq L$ is a field extension. We want to show it is not simple.
+
+ If $\alpha \in L$, then $\alpha^p \in K$. So $\alpha$ is a root of $t^p - \alpha^p \in K[t]$. Thus the minimal polynomial $P_\alpha$ has degree at most $p$. So $[K(\alpha): K] = \deg P_\alpha \leq p$. On the other hand, we have $[L:K] = p^2$, since $\{s^iu^j: 0 \leq i, j < p\}$ is a basis. So for any $\alpha$, we have $K(\alpha) \not= L$. So $L/K$ is not a simple extension. This then implies $L/K$ is not separable.
+\end{eg}
+At this point, one might suspect that no extension of fields of positive characteristic is separable. This is not true, as the following rather silly example shows.
+\begin{eg}
+ Consider $K = \F_2$ and $L = \F_2[s]/\bra s^2 + s + 1\ket$. We can check manually that $s^2 + s + 1$ has no roots in $\F_2$ and hence is irreducible. So $L$ is a field, and $L/\F_2$ is a finite extension. Note that $L$ only has 4 elements.
+
+ Now if $\alpha \in L \setminus \F_2$, and $P_\alpha$ is the minimal polynomial of $\alpha$ over $\F_2$, then $P_\alpha\mid t^2 + t + 1$. So $P_\alpha$ is separable as a polynomial. So $L/\F_2$ is separable.
+\end{eg}
+
+In fact, we have
+\begin{prop}
+ Let $L/K$ be an extension of finite fields. Then the extension is separable.
+\end{prop}
+
+\begin{proof}
+ Let the characteristic of the fields be $p$. Suppose the extension were not separable. Then there is some non-separable element $\alpha \in L$. Then its minimal polynomial must be of the form $P_\alpha = \sum a_i t^{pi}$.
+
+ Now note that the map $K \to K$ given by $x \mapsto x^p$ is injective (if $x^p = y^p$, then $(x - y)^p = x^p - y^p = 0$, so $x = y$), and hence surjective, since $K$ is finite. So we can write $a_i = b_i^p$ for all $i$. Then we have
+ \[
+ P_\alpha = \sum a_i t^{pi} = \left(\sum b_i t^i\right)^p,
+ \]
+ and so $P_\alpha$ is not irreducible, which is a contradiction.
+\end{proof}
+
+\subsection{Normal extensions}
+We are almost there. We will now move on to study normal extensions. Normal extensions are \emph{very} closely related to Galois extensions. In fact, we will show that if an extension is normal and separable, then it is Galois. The advantage of introducing the idea of normality is that normality is a much more concrete definition to work with. It is much easier to check if an extension is normal than to check if $|\Aut_K(L)| = [L:K]$. In particular, we will shortly prove that the splitting field of any polynomial is normal.
+
+This is an important result, since we are going to use the splitting field to study the roots of a polynomial, and since we mostly care about polynomials over $\Q$, this means all these splitting fields are automatically Galois extensions of $\Q$.
+
+It is not immediately obvious why these extensions are called ``normal'' (just like most other names in Galois theory). We will later see that normal extensions are extensions that correspond to normal subgroups, in some precise sense given by the fundamental theorem of Galois theory.
+
+\begin{defi}[Normal extension]\index{normal extension}
+ Let $K \subseteq L$ be an algebraic extension. We say $L/K$ is \emph{normal} if for all $\alpha \in L$, the minimal polynomial of $\alpha$ over $K$ splits over $L$.
+\end{defi}
+In other words, given any minimal polynomial, $L$ should have all its roots.
+
+\begin{eg}
+ The extension $\Q(\sqrt[3]{2})/\Q$ is not normal, since the minimal polynomial $t^3 - 2$ of $\sqrt[3]{2}$ does not split over $\Q(\sqrt[3]{2})$: its other two roots $\omega\sqrt[3]{2}$ and $\omega^2\sqrt[3]{2}$, where $\omega = e^{2\pi i/3}$, are non-real, while $\Q(\sqrt[3]{2}) \subseteq \R$.
+\end{eg}
+In some sense, extensions that are not ``normal'' are missing something. This is somewhat similar to how Galois extensions work. Before we go deeper into this, we need a lemma.
+
+\begin{lemma}
+ Let $L/F/K$ be finite extensions, and let $\bar K$ be an algebraic closure of $K$. Then any $\psi \in \Hom_K(F, \bar K)$ extends to some $\phi \in \Hom_K(L, \bar K)$.
+\end{lemma}
+
+\begin{proof}
+ Let $\psi \in \Hom_K(F, \bar K)$. If $F = L$, then the statement is trivial. So assume $L \not= F$.
+
+ Pick $\alpha \in L\setminus F$. Let $q_\alpha \in F[t]$ be the minimal polynomial of $\alpha$ over $F$. Consider $\psi(q_\alpha) \in \bar{K}[t]$. Let $\beta$ be any root of $\psi(q_\alpha)$, which exists since $\bar{K}$ is algebraically closed. Then as before, we can extend $\psi$ to $F(\alpha)$ by sending $\alpha$ to $\beta$. More explicitly, we send
+ \[
+ \sum_{i = 0}^N a_i \alpha^i \mapsto \sum \psi(a_i) \beta^i,
+ \]
+ which is well-defined since any polynomial relation satisfied by $\alpha$ over $F$ is, after applying $\psi$ to its coefficients, also satisfied by $\beta$.
+
+ Repeat this process finitely many times to get some element in $\Hom_K(L, \bar{K})$.
+\end{proof}
+We will use this lemma to characterize normal extensions.
+
+\begin{thm}
+ Let $L/K$ be a finite extension. Then $L/K$ is a normal extension if and only if $L$ is the splitting field of some $f \in K[t]$.
+\end{thm}
+
+\begin{proof}
+ Suppose $L/K$ is normal. Since $L/K$ is finite, we can write $L = K(\alpha_1, \cdots, \alpha_n)$ for some $\alpha_i \in L$. Let $P_{\alpha_i}$ be the minimal polynomial of $\alpha_i$ over $K$. Take $f = P_{\alpha_1}\cdots P_{\alpha_n}$. Since $L/K$ is normal, each $P_{\alpha_i}$ splits over $L$. So $f$ splits over $L$, and $L$ is a splitting field of $f$.
+
+ For the other direction, suppose that $L$ is the splitting field of some $f \in K[t]$. First we wlog assume $L \subseteq \bar K$. This is possible since the natural injection $K\hookrightarrow \bar{K}$ extends to some $\phi:L \to \bar{K}$ by our previous lemma, and we can replace $L$ with $\phi(L)$.
+
+ Now suppose $\beta \in L$, and let $P_\beta$ be its minimal polynomial over $K$. Let $\beta' \in \bar{K}$ be another root of $P_\beta$. We want to show it lives in $L$.
+
+ Now consider $K(\beta)$. By the proof of the lemma, we can produce an embedding $\iota: K(\beta) \to \bar{K}$ that sends $\beta$ to $\beta'$. By the lemma again, this extends to an embedding of $L$ into $\bar{K}$. But any such embedding must send a root of $f$ to a root of $f$. So it must send $L$ to $L$. In particular, $\iota(\beta) = \beta'\in L$. So $P_\beta$ splits over $L$.
+\end{proof}
+
+This allows us to identify normal extensions easily. The following theorem then allows us to identify \emph{Galois} extensions using this convenient tool.
+
+\begin{thm}
+ Let $L/K$ be a finite extension. Then the following are equivalent:
+ \begin{enumerate}
+ \item $L/K$ is a Galois extension.
+ \item $L/K$ is separable and normal.
+ \item $L = K(\alpha_1, \cdots, \alpha_n)$ and $P_{\alpha_i}$, the minimal polynomial of $\alpha_i$ over $K$, is separable and splits over $L$ for all $i$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): Suppose $L/K$ is a Galois extension. Then by definition, this means
+ \[
+ |\Hom_K(L, L)| = |\Aut_K(L)| = [L:K].
+ \]
+ To show that $L/K$ is separable, recall that we proved that an extension is separable if and only if there is some $E$ such that $|\Hom_K(L, E)| = [L:K]$. In this case, just pick $E = L$. Then we know that the extension is separable.
+
+ To check normality, let $\alpha \in L$, and let $P_\alpha$ be its minimal polynomial over $K$. We know that
+ \[
+ |\Root_{P_\alpha}(L)| = |\Hom_K(K[t]/\bra P_\alpha\ket, L)| = |\Hom_K(K(\alpha), L)|.
+ \]
+ But since $|\Hom_K(L, L)| = [L:K]$ and $K(\alpha)$ is a subfield of $L$, this implies
+ \[
+ |\Hom_K(K(\alpha), L)| = [K(\alpha): K] = \deg P_\alpha.
+ \]
+ Hence we know that
+ \[
+ |\Root_{P_\alpha}(L)| = \deg P_\alpha.
+ \]
+ So $P_\alpha$ splits over $L$.
+ \item (ii) $\Rightarrow$ (iii): Just pick $\alpha_1, \cdots, \alpha_n$ such that $L = K(\alpha_1, \cdots, \alpha_n)$. Then these polynomials are separable since the extension is separable, and they split since $L/K$ is normal. In fact, by the primitive element theorem, we can pick these such that $n = 1$.
+
+ \item (iii) $\Rightarrow$ (i): Since $L = K(\alpha_1, \cdots, \alpha_n)$ and the minimal polynomials $P_{\alpha_i}$ over $K$ are separable, by a previous theorem, there are some extension $E$ of $K$ such that
+ \[
+ |\Hom_K(L, E)| = [L:K].
+ \]
+ To simplify notation, we first replace $L$ with its image inside $E$ under some $K$-homomorphism $L \to E$, which exists since $|\Hom_K(L, E)| = [L:K] > 0$. So we can assume $L\subseteq E$.
+
+ We now claim that the inclusion
+ \[
+ \Hom_K(L, L) \to \Hom_K(L, E)
+ \]
+ is a surjection, hence a bijection. Indeed, if $\phi: L \to E$, then $\phi$ takes $\alpha_i$ to $\phi(\alpha_i)$, which is a root of $P_{\alpha_i}$. Since $P_{\alpha_i}$ splits over $L$, we know $\phi(\alpha_i) \in L$ for all $i$. Since $L$ is generated by these $\alpha_i$, it follows that $\phi(L) \subseteq L$.
+
+ Thus, we have
+ \[
+ [L:K] = |\Hom_K(L, E)| = |\Hom_K(L, L)|,
+ \]
+ and the extension is Galois.\qedhere
+ \end{itemize}
+\end{proof}
+From this, it follows that if $L/K$ is Galois, and we have an intermediate field $K\subseteq F \subseteq L$, then $L/F$ is also Galois.
+
+\begin{cor}
+ Let $K$ be a field and $f \in K[t]$ be a separable polynomial. Then the splitting field of $f$ is Galois.
+\end{cor}
+This is one of the most crucial examples.
+
+\subsection{The fundamental theorem of Galois theory}
+Finally, we can get to the fundamental theorem of Galois theory. Roughly, given a Galois extension $K \subseteq L$, the fundamental theorem tells us there is a one-to-one correspondence between intermediate field extensions $K \subseteq F \subseteq L$ and subgroups of the automorphism group $\Gal(L/K)$.
+
+Given an intermediate field $F$, we can obtain a subgroup of $\Gal(L/K)$ by looking at the automorphisms that fix $F$. To go the other way round, given a subgroup $H \leq \Gal(L/K)$, we can obtain a corresponding field by looking at the field of elements that are fixed by everything in $H$. This is known as the \emph{fixed field}, and can in general be defined even for non-Galois extensions.
+
+\begin{defi}[Fixed field]\index{fixed field}
+ Let $L/K$ be a field extension, $H\leq \Aut_K(L)$ a subgroup. We define the \emph{fixed field} of $H$ as
+ \[
+ L^H = \{\alpha \in L: \phi(\alpha) = \alpha\text{ for all }\phi \in H\}.
+ \]
+ It is easy to see that $L^H$ is an intermediate field $K\subseteq L^H \subseteq L$.
+\end{defi}
+
+Before we get to the fundamental theorem, we first prove \emph{Artin's lemma}. This in fact proves part of the results in the fundamental theorem, but is also useful in its own right.
+
+\begin{lemma}[Artin's lemma]\index{Artin's lemma}
+ Let $L/K$ be a field extension and $H\leq \Aut_K(L)$ a finite subgroup. Then $L/L^H$ is a Galois extension with $\Aut_{L^H}(L) = H$.
+\end{lemma}
+Note that we are not assuming that $L/K$ is Galois, or even finite!
+\begin{proof}
+ Pick any $\alpha \in L$. We set
+ \[
+ \{\alpha_1,\cdots, \alpha_n\} = \{\phi(\alpha): \phi \in H\},
+ \]
+ where $\alpha_i$ are distinct. Here we are allowing for the possibility that $\phi(\alpha) = \psi(\alpha)$ for some distinct $\phi, \psi \in H$.
+
+ By definition, we clearly have $n \leq |H|$. Let
+ \[
+ f = \prod_1^n (t - \alpha_i) \in L[t].
+ \]
+ We know that any $\phi \in H$ gives an homomorphism $L[t] \to L[t]$, and any such map fixes $f$ because $\phi$ just permutes the $\alpha_i$. Thus, the coefficients of $f$ are in $L^H$, and thus $f \in L^H[t]$.
+
+ Since $\id \in H$, we know that $f(\alpha) = 0$. So $\alpha$ is algebraic over $L^H$. Moreover, if $q_\alpha$ is the minimal polynomial of $\alpha$ over $L^H$, then $q_\alpha \mid f$ in $L^H[t]$. Hence
+ \[
+ [L^H(\alpha): L^H] = \deg q_\alpha \leq \deg f \leq |H|.
+ \]
+ Further, we know that $f$ has distinct roots. So $q_\alpha$ is separable, and so $\alpha$ is separable. So it follows that $L/L^H$ is a separable extension.
+
+ We next show that $L/L^H$ is simple. This doesn't immediately follow from the primitive element theorem, because we don't know it is a finite extension yet, but we can still apply the theorem cleverly.
+
+ Pick $\alpha \in L$ such that $[L^H(\alpha): L^H]$ is maximal. This is possible since $[L^H(\alpha):L^H]$ is bounded by $|H|$. The claim is that $L = L^H(\alpha)$.
+
+ We pick an arbitrary $\beta \in L$, and will show that this is in $L^H(\alpha)$. By the above arguments, $L^H\subseteq L^H(\alpha, \beta)$ is a finite separable extension. So by the primitive element theorem, there is some $\lambda \in L$ such that $L^H(\alpha, \beta) = L^H(\lambda)$. Note that we must have
+ \[
+ [L^H(\lambda): L^H] \geq [L^H(\alpha): L^H].
+ \]
+ By maximality of $[L^H(\alpha): L^H]$, we must have equality. So $L^H(\lambda) = L^H(\alpha)$. So $\beta \in L^H(\alpha)$. So $L = L^H(\alpha)$.
+
+ Finally, we show it is a Galois extension. Let $L = L^H(\alpha)$. Then
+ \[
+ [L:L^H] = [L^H(\alpha): L^H] \leq |H| \leq |\Aut_{L^H}(L)|.
+ \]
+ Recall that we have previously shown that for any extension $L/L^H$, we have $|\Aut_{L^H}(L)| \leq [L:L^H]$. Hence we must have equality above. So
+ \[
+ [L:L^H] = |\Aut_{L^H}(L)|.
+ \]
+ So the extension is Galois. Also, since we know that $H\subseteq \Aut_{L^H}(L)$, we must have $H = \Aut_{L^H}(L)$.
+\end{proof}
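Artin's lemma can be watched in action in the smallest non-trivial example (an illustration of mine, not from the notes): take $L = \F_4$, realised as $\F_2[\alpha]$ with $\alpha^2 = \alpha + 1$, and $H = \{\id, x \mapsto x^2\}$. The fixed field is $L^H = \F_2$, and indeed $[L:L^H] = 2 = |H|$. A sketch in code, with all names my own:

```python
# Sketch: Artin's lemma for L = F_4 = F_2[a]/(a^2 + a + 1), H = {id, squaring}.
# Elements are pairs (c0, c1) representing c0 + c1*a, coefficients mod 2.

def mul(x, y):
    # (c0 + c1*a)(d0 + d1*a) with a^2 = a + 1
    c0, c1 = x
    d0, d1 = y
    return ((c0 * d0 + c1 * d1) % 2, (c0 * d1 + c1 * d0 + c1 * d1) % 2)

def phi(x):
    # the non-identity element of H: x -> x^2
    return mul(x, x)

L = [(0, 0), (1, 0), (0, 1), (1, 1)]
fixed = [x for x in L if phi(x) == x]
# the fixed field L^H is F_2 = {0, 1}
assert sorted(fixed) == [(0, 0), (1, 0)]
# |L| = |L^H|^[L:L^H], i.e. 4 = 2^2, so [L:L^H] = 2 = |H|
assert len(L) == len(fixed) ** 2
```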
+
+\begin{thm}
+ Let $L/K$ be a finite field extension. Then $L/K$ is Galois if and only if $L^H = K$, where $H = \Aut_K(L)$.
+\end{thm}
+
+\begin{proof}
+ $(\Rightarrow)$ Suppose $L/K$ is a Galois extension. We want to show $L^H = K$. Using Artin's lemma (and the definition of $H$), we have
+ \[
+ [L:K] = |\Aut_K(L)| = |H| = |\Aut_{L^H}(L)| = [L:L^H]
+ \]
+ So $[L:K] = [L:L^H]$. So we must have $L^H = K$.
+
+ ($\Leftarrow$) Conversely, if $L^H = K$, then Artin's lemma tells us that $L/L^H = L/K$ is Galois.
+\end{proof}
+
+This is an important theorem. Given a Galois extension $L/K$, this gives us a very useful test of when an element $\alpha \in L$ is in fact in $K$. We will use this a lot.
+
+Finally, we get to the fundamental theorem.
+\begin{thm}[Fundamental theorem of Galois theory]\index{fundamental theorem of Galois theory}
+ Assume $L/K$ is a (finite) Galois extension. Then
+ \begin{enumerate}
+ \item There is a one-to-one correspondence
+ \[
+ H\leq \Aut_K(L) \longleftrightarrow \text{intermediate fields }K\subseteq F \subseteq L.
+ \]
+ This is given by the maps $H \mapsto L^H$ and $F\mapsto \Aut_F(L)$ respectively. Moreover, $|\Aut_K(L):H| = [L^H:K]$.
+ \item $H\leq \Aut_K(L)$ is normal (as a subgroup) if and only if $L^H/K$ is a normal extension if and only if $L^H/K$ is a Galois extension.
+ \item If $H\lhd \Aut_K(L)$, then the restriction map $\Aut_K(L) \to \Aut_K(L^H)$ is well-defined and surjective with kernel $H$, i.e.
+ \[
+ \frac{\Aut_K(L)}{H} \cong \Aut_K(L^H).
+ \]
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ Note that since $L/K$ is a Galois extension, we know
+ \[
+ |\Aut_K(L)| = |\Hom_K(L, L)| = [L:K].
+ \]
+ By a previous theorem, for any intermediate field $K\subseteq F \subseteq L$, we know $|\Hom_K(F, L)| = [F:K]$ and the restriction map $\Hom_K(L, L) \to \Hom_K(F, L)$ is surjective.
+
+ \begin{enumerate}
+ \item The maps are already well-defined, so we just have to show that the maps are inverses to each other. By Artin's lemma, we know that $H = \Aut_{L^H}(L)$, and since $L/F$ is a Galois extension, the previous theorem tells us that $L^{\Aut_F (L)} = F$. So they are indeed inverses. The formula relating the index and the degree follows from Artin's lemma.
+
+ \item Note that for every $\phi \in \Aut_K(L)$, we have that $L^{\phi H \phi^{-1}} = \phi L^H$, since $\alpha \in L^{\phi H \phi^{-1}}$ iff $\phi (\psi (\phi^{-1} (\alpha))) = \alpha$ for all $\psi \in H$ iff $\psi (\phi^{-1}(\alpha)) = \phi^{-1}(\alpha)$ for all $\psi \in H$ iff $\alpha \in \phi L^H$. Hence $H$ is a normal subgroup if and only if
+ \[
+ \phi(L^H) = L^H\text{ for all }\phi \in \Aut_K(L).\tag{$*$}
+ \]
+ Assume $(*)$. We want to first show that $\Hom_K(L^H, L^H) = \Hom_K(L^H, L)$. Let $\psi \in \Hom_K(L^H, L)$. Then by the surjectivity of the restriction map $\Hom_K(L, L) \to \Hom_K(L^H, L)$, $\psi$ must be the restriction of some $\tilde{\psi} \in \Hom_K(L, L)$. So $\tilde{\psi}$ fixes $L^H$ by $(*)$. So $\psi$ sends $L^H$ to $L^H$. So $\psi \in \Hom_K(L^H, L^H)$. So we have
+ \[
+ |\Aut_K(L^H)| = |\Hom_K(L^H, L^H)| = |\Hom_K(L^H, L)| = [L^H:K].
+ \]
+ So $L^H/K$ is Galois, and hence normal.
+
+ Now suppose $L^H/K$ is a normal extension. We want to show this implies $(*)$. Pick any $\alpha \in L^H$ and $\phi\in \Aut_K(L)$. Let $P_\alpha$ be the minimal polynomial of $\alpha$ over $K$. So $\phi(\alpha)$ is a root of $P_\alpha$ (since $\phi$ fixes $P_\alpha \in K$, and hence maps roots to roots). Since $L^H/K$ is normal, $P_\alpha$ splits over $L^H$. This implies that $\phi(\alpha) \in L^H$. So $\phi(L^H) = L^H$.
+
+ Hence, $H$ is a normal subgroup if and only if $\phi(L^H) = L^H$ if and only if $L^H/K$ is a Galois extension.
+ \item Suppose $H$ is normal. We know that $\Aut_K(L) = \Hom_K(L, L)$ restricts to $\Hom_K(L^H, L)$ surjectively. To show that we in fact have restriction to $\Aut_K(L^H)$, note that by the proof above, $\phi(L^H) = L^H$ for all $\phi \in \Aut_K(L)$. So each $\phi$ does restrict to an automorphism of $L^H$. In other words, the map $\Aut_K(L) \to \Aut_K(L^H)$ is well-defined. It is easy to see this is a group homomorphism.
+
+ Finally, we have to calculate the kernel of this homomorphism. Let $E$ be the kernel. Then by definition, $E\supseteq H$. So it suffices to show that $|E| = |H|$. By surjectivity of the map and the first isomorphism theorem of groups, we have
+ \[
+ \frac{|\Aut_K(L)|}{|E|} = |\Aut_K(L^H)| = [L^H:K] = \frac{[L:K]}{[L:L^H]} = \frac{|\Aut_K(L)|}{|H|},
+ \]
+ noting that $L^H/K$ and $L/K$ are both Galois extensions, and $|H| = [L^H:K]$ by Artin's lemma. So $|E| = |H|$. So we must have $E = H$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ Let $p$ be an odd prime, and $\zeta_p$ be a primitive $p$th root of unity. Given a (square-free) integer $n$, when is $\sqrt{n}$ in $\Q(\zeta_p)$? We know that $\sqrt{n} \in \Q(\zeta_p)$ if and only if $\Q(\sqrt{n}) \subseteq \Q(\zeta_p)$. Moreover, $[\Q(\sqrt{n}):\Q] = 2$, i.e.\ $\Q(\sqrt{n})$ is a quadratic extension.
+
+ We will later show that $\Gal(\Q(\zeta_p)/\Q) \cong (\Z/p\Z)^* \cong C_{p - 1}$. Then by the fundamental theorem of Galois theory, quadratic extensions contained in $\Q(\zeta_p)$ correspond to index-2 subgroups of $\Gal(\Q(\zeta_p)/\Q)$. Since a cyclic group has exactly one subgroup of each order dividing its order, there is exactly one such subgroup. So there is exactly one square-free $n$ such that $\Q(\sqrt{n}) \subseteq \Q(\zeta_p)$ (since all quadratic extensions are of the form $\Q(\sqrt{n})$), given by the fixed field of the index-2 subgroup of $(\Z/p\Z)^*$.
+
+ Now we shall try to find some square root lying in $\Q(\zeta_p)$. We will not fully justify the derivation, since we can just square the resulting number to see that it is correct. We know the general element of $\Q(\zeta_p)$ looks like
+ \[
+ \sum_{k = 0}^{p - 1} c_k \zeta_p^k.
+ \]
+ We know $\Gal(\Q(\zeta_p)/\Q)\cong (\Z/p\Z)^*$ acts by sending $\zeta_p \mapsto \zeta_p^n$ for each $n \in (\Z/p\Z)^*$, and the index 2 subgroup consists of the quadratic residues. Thus, if an element is fixed under the action of the quadratic residues, the quadratic residue powers all have the same coefficient, and similarly for the non-residue powers.
+
+ If we wanted this to be a square root, then the action of the remaining elements of $\Gal(\Q(\zeta_p)/\Q)$ should negate this object. Since these elements swap the residues and the non-residues, we would want to have something like $c_k = 1$ if $k$ is a quadratic residue, and $-1$ if it is a non-residue, which is just the Legendre symbol! So we are led to try to square
+ \[
+ \tau = \sum_{k = 1}^{p - 1} \left(\frac{k}{p}\right) \zeta_p^k.
+ \]
+ It is an exercise in the Number Theory example sheet to show that the square of this is in fact
+ \[
+ \tau^2 = \left(\frac{-1}{p}\right) p.
+ \]
+ So we have $\sqrt{p} \in \Q(\zeta_p)$ if $p \equiv 1 \pmod 4$, and $\sqrt{-p} \in \Q(\zeta_p)$ if $p \equiv 3 \pmod 4$.
+\end{eg}
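The identity $\tau^2 = \left(\frac{-1}{p}\right)p$ is easy to check computationally. The following sketch (my own illustration; the function names are made up) represents an element of $\Q(\zeta_p)$ as an integer coefficient vector modulo $t^p - 1$, using that two vectors represent the same element exactly when they differ by a multiple of $(1, 1, \ldots, 1)$, since $1 + \zeta_p + \cdots + \zeta_p^{p-1} = 0$:

```python
# Verify tau^2 = (-1/p) p for small odd primes, where
# tau = sum_k (k/p) zeta_p^k is the quadratic Gauss sum.

def legendre(a, p):
    # Legendre symbol via Euler's criterion: a^((p-1)/2) mod p
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def tau_squared(p):
    # tau as a coefficient vector: coefficient of zeta^k is (k/p)
    tau = [0] + [legendre(k, p) for k in range(1, p)]
    # square it modulo t^p - 1
    v = [0] * p
    for i in range(p):
        for j in range(p):
            v[(i + j) % p] += tau[i] * tau[j]
    # v represents c*(1 + zeta + ... + zeta^{p-1}) + (v[0] - c), i.e. the
    # rational number v[0] - c, provided v[1] = ... = v[p-1] = c
    c = v[1]
    assert all(v[k] == c for k in range(1, p))
    return v[0] - c

for p in [3, 5, 7, 11, 13]:
    sign = 1 if p % 4 == 1 else -1   # this is (-1/p)
    assert tau_squared(p) == sign * p
```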
+
+\subsection{Finite fields}
+We'll have a slight digression and look at finite fields. We adopt the notation where $p$ is always a prime number, and $\Z_p =\Z/\bra p\ket$. It turns out finite fields are rather simple, as described in the lemma below:
+
+\begin{lemma}
+ Let $K$ be a finite field with $q = |K|$ elements. Then
+ \begin{enumerate}
+ \item $q = p^d$ for some $d \in \N$, where $p = \Char K > 0$.
+ \item Let $f = t^q - t$. Then $f(\alpha) = 0$ for all $\alpha \in K$. Moreover, $K$ is the splitting field of $f$ over $\F_p$.
+ \end{enumerate}
+\end{lemma}
+This means that a finite field is completely determined by the number of elements.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Consider the set $\{m \cdot 1_K\}_{m \in \Z}$, where $1_K$ is the unit in $K$ and $m\cdot$ represents repeated addition. We can identify this with $\F_p$. So we have the extension $\F_p \subseteq K$. Let $d = [K: \F_p]$. Then $q = |K| = p^d$.
+
+ \item Note that $K^* = K\setminus \{0\}$ is a finite multiplicative group with order $q - 1$. Then by Lagrange's theorem, $\alpha^{q - 1} = 1$ for all $\alpha\in K^*$. So $\alpha^q - \alpha = 0$ for all $\alpha \not= 0$. The $\alpha = 0$ case is trivial.
+
+ Now every element in $K$ is a root of $f$. So we need to check that all roots of $f$ are in $K$. Note that the derivative $f' = qt^{q - 1} - 1 = -1$ (since $q$ is a power of the characteristic). So $f'(\alpha) = -1 \not= 0$ for all $\alpha \in K$. So $f$ and $f'$ have no common roots. So $f$ has no repeated roots. So $K$ contains $q$ distinct roots of $f$. So $K$ is a splitting field.\qedhere
+ \end{enumerate}
+\end{proof}
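For instance (an illustration of mine, not from the notes), taking $K = \F_p$ itself, part (ii) says that every $\alpha \in \F_p$ satisfies $\alpha^p = \alpha$, which is exactly Fermat's little theorem, and that these $p$ roots of $t^p - t$ are distinct:

```python
# Every element of F_p is a root of t^p - t (Fermat's little theorem),
# and the p roots are distinct, so t^p - t splits over F_p.
def splits_with_distinct_roots(p):
    roots = [a for a in range(p) if pow(a, p, p) == a]
    return roots == list(range(p))

assert all(splits_with_distinct_roots(p) for p in [2, 3, 5, 7, 11, 13])
```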
+
+\begin{lemma}
+ Let $q = p^d$, $q' = p^{d'}$, where $d, d' \in \N$. Then
+ \begin{enumerate}
+ \item There is a finite field $K$ with exactly $q$ elements, which is unique up to isomorphism. We write this as $\F_q$.
+ \item We can embed $\F_q \subseteq \F_{q'}$ iff $d \mid d'$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $f = t^q - t$, and let $K$ be a splitting field of $f$ over $\F_p$. Let $L = \Root_f(K)$. The objective is to show that $L = K$. Then we will have $|K| = |L| = |\Root_f(K)| = \deg f = q$, because the proof of the previous lemma shows that $f$ has no repeated roots.
+
+ To show that $L = K$, by definition, we have $L \subseteq K$. So we need to show every element in $K$ is in $L$. We do so by showing that $L$ itself is a field. Then since $L$ contains all the roots of $f$ and is a subfield of the splitting field $K$, we must have $K = L$.
+
+ It is straightforward to show that $L$ is a field: if $\alpha, \beta \in L$, then
+ \[
+ (\alpha + \beta)^q = \alpha^q + \beta^q = \alpha + \beta.
+ \]
+ So $\alpha + \beta \in L$. Similarly, we have
+ \[
+ (\alpha\beta)^q = \alpha^q \beta^q = \alpha\beta.
+ \]
+ So $\alpha\beta \in L$. Also, we have
+ \[
+ (\alpha^{-1})^q = (\alpha^q)^{-1} = \alpha^{-1}.
+ \]
+ So $\alpha^{-1} \in L$. So $L$ is in fact a field.
+
+ Since any field of size $q$ is a splitting field of $f$, and splitting fields are unique up to isomorphism, we know that $K$ is unique.
+ \item Suppose $\F_q \subseteq \F_{q'}$. Then let $n = [\F_{q'}: \F_q]$. So $q' = q^n$. So $d'= nd$. So $d \mid d'$.
+
+ On the other hand, suppose $d \mid d'$. Let $d' = dn$. We let $f = t^{q'} - t$. Then for any $\alpha \in \F_q$, we have
+ \[
+ f(\alpha) = \alpha^{q'} - \alpha = \alpha^{q^n} - \alpha = (\cdots ((\alpha^q)^q)^q\cdots )^q - \alpha = \alpha - \alpha = 0.
+ \]
+ Since $\F_{q'}$ is the splitting field of $f$, all roots of $f$ are in $\F_{q'}$. So we know that $\F_q \subseteq \F_{q'}$.\qedhere
+ \end{enumerate}
+\end{proof}
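A quick sanity check on part (ii) (my own illustration): if $\F_q \subseteq \F_{q'}$, then the multiplicative group $\F_q^*$ of order $q - 1$ sits inside $\F_{q'}^*$ of order $q' - 1$, so $(q - 1) \mid (q' - 1)$ is a necessary condition. And indeed $(p^d - 1) \mid (p^{d'} - 1)$ holds exactly when $d \mid d'$:

```python
# Check that (p^d - 1) | (p^d' - 1) iff d | d', matching the embedding
# criterion F_{p^d} <= F_{p^d'} iff d | d'.
def divides(a, b):
    return b % a == 0

for p in [2, 3, 5, 7]:
    for d in range(1, 7):
        for dd in range(1, 7):
            assert divides(p**d - 1, p**dd - 1) == divides(d, dd)
```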
+Note that if $\bar{\F}_p$ is the algebraic closure of $\F_p$, then $\F_q \subseteq \bar{\F}_p$ for every $q = p^d$. We then have
+\[
+ \bigcup_{k \in \N} \F_{p^k} = \bar{\F}_p,
+\]
+because any $\alpha \in \bar{\F}_p$ is algebraic over $\F_p$, and so belongs to some $\F_q$.
+
+\begin{defi}
+ Consider the extension $\F_{q^n}/\F_q$, where $q$ is a power of $p$. The \emph{Frobenius} $\Fr_q: \F_{q^n} \to \F_{q^n}$ is defined by $\alpha\mapsto \alpha^q$.
+\end{defi}
+This is a homomorphism precisely because the field has characteristic $p$, so that $(\alpha + \beta)^q = \alpha^q + \beta^q$. In fact, $\Fr_q \in \Aut_{\F_q} (\F_{q^n})$, since $\alpha^q = \alpha$ for all $\alpha \in \F_q$.
+
+The following two theorems tell us why we care about the Frobenius.
+\begin{thm}
+ Consider $\F_{q^n}/\F_q$. Then $\Fr_q$ is an element of order $n$ as an element of $\Aut_{\F_q}(\F_{q^n})$.
+\end{thm}
+
+\begin{proof}
+ For all $\alpha \in \F_{q^n}$, we have $\Fr_q^n (\alpha) = \alpha^{q^n} = \alpha$. So the order of $\Fr_q$ divides $n$.
+
+ If $m \mid n$, then the set
+ \[
+ \{\alpha \in \F_{q^n} : \Fr_q^m (\alpha) = \alpha\} = \{\alpha \in \F_{q^n}: \alpha^{q^m} = \alpha\} = \F_{q^m}.
+ \]
+ So if $m$ is the order of $\Fr_q$, then $\F_{q^m} = \F_{q^n}$. So $m = n$.
+\end{proof}
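We can watch this happen in $\F_8/\F_2$ (a sketch of mine, realising $\F_8$ as $\F_2[x]/(x^3 + x + 1)$, with elements encoded as $3$-bit integers whose bits are polynomial coefficients):

```python
# Fr_2 on F_8 = F_2[x]/(x^3 + x + 1) has order exactly 3 = [F_8 : F_2].
def mul8(a, b):
    # carry-less multiply in F_2[x], then reduce modulo x^3 + x + 1 (0b1011)
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(4, 2, -1):
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def frob(a):
    return mul8(a, a)   # Fr_2 : a -> a^2

assert all(frob(frob(frob(a))) == a for a in range(8))   # Fr_2^3 = id
assert any(frob(a) != a for a in range(8))               # Fr_2 != id
assert any(frob(frob(a)) != a for a in range(8))         # Fr_2^2 != id
```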
+
+\begin{thm}
+ The extension $\F_{q^n}/\F_q$ is Galois with Galois group $\Gal(\F_{q^n}/\F_q) = \Aut_{\F_q}(\F_{q^n}) \cong \Z/n\Z$, generated by $\Fr_q$.
+\end{thm}
+
+\begin{proof}
+ The multiplicative group $\F_{q^n}^* = \F_{q^n} \setminus \{0\}$ is finite. We have previously seen that multiplicative groups of finite fields are cyclic. So let $\alpha$ be a generator of this group. Then $\F_{q^n} = \F_q(\alpha)$. Let $P_\alpha$ be the minimal polynomial of $\alpha$ over $\F_q$. Then since $\Aut_{\F_q}(\F_{q^n})$ has an element of order $n$, we get
+ \[
+ n \leq |\Aut_{\F_q}(\F_{q^n})| = |\Hom_{\F_q}(\F_q(\alpha), \F_{q^n})|.
+ \]
+ Since $\F_q(\alpha)$ is generated by one element, we know
+ \[
+ |\Hom_{\F_q}(\F_q(\alpha), \F_{q^n})| = |\Root_{P_\alpha}(\F_{q^n})|.
+ \]
+ So we have
+ \[
+ n \leq |\Root_{P_\alpha}(\F_{q^n})| \leq \deg P_\alpha = [\F_{q^n}: \F_q] = n.
+ \]
+ So we know that
+ \[
+ |\Aut_{\F_q}(\F_{q^n})| = [\F_{q^n} : \F_q] = n.
+ \]
+ So $\F_{q^n}/\F_q$ is a Galois extension.
+
+ Since $|\Aut_{\F_q}(\F_{q^n})| = n$ and $\Fr_q$ has order $n$, the group has to be generated by $\Fr_q$. In particular, this group is cyclic.
+\end{proof}
+We see that finite fields are rather nice --- there is exactly one field of order $p^d$ for each $d$ and prime $p$, and these are all of the finite fields. All extensions are Galois, and the Galois group is just a cyclic group generated by the Frobenius.
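This also lets us see the Galois correspondence concretely (an illustration of mine): in $\F_{64}/\F_2$ the Galois group is $\langle \Fr_2 \rangle \cong \Z/6\Z$, its subgroups are $\langle \Fr_2^d \rangle$ for $d \mid 6$, and the fixed field of $\langle \Fr_2^d \rangle$ is $\F_{2^d}$, of size $2^d$. Realising $\F_{64}$ as $\F_2[x]/(x^6 + x + 1)$ (this sextic is irreducible over $\F_2$, as it has no factor of degree $\leq 3$):

```python
# Fixed field of <Fr_2^d> in F_64 = F_2[x]/(x^6 + x + 1) has 2^d elements
# for each divisor d of 6, matching the Galois correspondence.
def mul64(a, b):
    # carry-less multiply in F_2[x], then reduce modulo x^6 + x + 1 (0b1000011)
    r = 0
    for i in range(6):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(10, 5, -1):
        if (r >> i) & 1:
            r ^= 0b1000011 << (i - 6)
    return r

def frob_power(a, d):
    # apply Fr_2 (squaring) d times, i.e. a -> a^(2^d)
    for _ in range(d):
        a = mul64(a, a)
    return a

for d in [1, 2, 3, 6]:
    fixed = [a for a in range(64) if frob_power(a, d) == a]
    assert len(fixed) == 2 ** d
```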
+
+\begin{eg}
+ Consider $\F_4/\F_2$. We can write
+ \[
+ \F_2 = \{0, 1\} \subseteq \F_4 = \{0, 1, \alpha, \alpha^2\},
+ \]
+ where $\alpha$ is a generator of $\F_4^*$. Define $\phi \in \Aut_{\F_2}(\F_4)$ by $\phi(\alpha) = \alpha^2$. Then
+ \[
+ \Aut_{\F_2}(\F_4) = \{\id, \phi\}
+ \]
+ since it has order $2$.
+\end{eg}
+Note that we can also define the Frobenius $\Fr_p: \bar{\F}_p \to \bar{\F}_p$, where $\alpha \mapsto \alpha^p$. Then $\F_{p^d}$ is the elements of $\bar{\F}_p$ fixed by $\Fr_p^d$. So we can recover this subfield by just looking at the Frobenius.
+
+\section{Solutions to polynomial equations}
+We have now proved the fundamental theorem of Galois theory, and this gives a one-to-one correspondence between (intermediate) field extensions and subgroups of the Galois group. That is our first goal achieved. Our next big goal is to use this Galois correspondence to show that, in general, polynomials of degree 5 or more cannot be solved by radicals.
+
+First of all, we want to make this notion of ``solving by radicals'' precise. We all know what this means if we are working over $\Q$, but we need to be very precise when working with arbitrary fields.
+
+For example, we know that the polynomial $f = t^3 - 5 \in \Q[t]$ can be ``solved by radicals''. In this case, we have
+\[
+ \Root_f(\C) = \{\sqrt[3]{5}, \mu \sqrt[3]{5}, \mu^2\sqrt[3]{5}\},
+\]
+where $\mu^3 = 1, \mu \not= 1$. In general fields, we want to properly define the analogues of $\mu$ and $\sqrt[3]{5}$.
+
+These will correspond to two different concepts. The first is \emph{cyclotomic extensions}, where the extension adds the analogues of $\mu$, and the second is \emph{Kummer extensions}, where we add things like $\sqrt[3]{5}$.
+
+Then, we would say a polynomial is soluble by radicals if the splitting field of the polynomial can be obtained by repeatedly taking cyclotomic and Kummer extensions.
+
+\subsection{Cyclotomic extensions}
+\begin{defi}[Cyclotomic extension]\index{cyclotomic extension}
+ For a field $K$, we define the $n$th \emph{cyclotomic extension} to be the splitting field of $t^n - 1$.
+\end{defi}
+Note that if $K$ is a field and $L$ is the $n$th cyclotomic extension, then $\Root_{t^n - 1}(L)$ is a subgroup of the multiplicative group $L^* = L\setminus \{0\}$. Since this is a finite subgroup of $L^*$, it is a cyclic group.
+
+Moreover, if $\Char K = 0$ or $0 < \Char K \nmid n$, then $(t^n - 1)' = nt^{n - 1}$ and this has no common roots with $t^n - 1$. So $t^n - 1$ has no repeated roots. In other words, $t^n - 1$ has $n$ distinct roots. So as a group,
+\[
+ \Root_{t^n - 1}(L) \cong \Z/n\Z.
+\]
+In particular, this group has at least one element $\mu$ of order $n$.
+
+\begin{defi}[Primitive root of unity]
+ An \emph{$n$th primitive root of unity} is an element of order $n$ in $\Root_{t^n - 1}(L)$.
+\end{defi}
+These elements correspond to the elements of the multiplicative group of units in $\Z/n\Z$, written $(\Z/n\Z)^\times$.
+
+The next theorem tells us some interesting information about these roots and some related polynomials.
+
+\begin{thm}
+ For each $d \in \N$, there exists a monic polynomial $\phi_d \in \Z[t]$, the \emph{$d$th cyclotomic polynomial}, satisfying:
+ \begin{enumerate}
+ \item For each $n \in \N$, we have
+ \[
+ t^n - 1 = \prod_{d \mid n} \phi_d.
+ \]
+ \item Assume $\Char K = 0$ or $0 < \Char K \nmid n$. Then
+ \[
+ \Root_{\phi_n}(L) = \{n\text{th primitive roots of unity}\}.
+ \]
+ Note that here we have an abuse of notation, since $\phi_n$ is a polynomial in $\Z[t]$, not $K[t]$, but we can just use the canonical map $\Z[t] \to K[t]$ mapping $1$ to $1$ and $t$ to $t$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ We do induction on $n$ to construct $\phi_n$. When $n = 1$, let $\phi_1 = t - 1$. Then (i) and (ii) hold in this case, trivially.
+
+ Assume now that (i) and (ii) hold for smaller values of $n$. Let
+ \[
+ f = \prod_{d \mid n, d < n} \phi_d.
+ \]
+ By induction, $f \in \Z[t]$. Moreover, if $d \mid n$ and $d < n$, then $\phi_d \mid (t^n - 1)$ because $(t^d - 1) \mid (t^n - 1)$. We would like to say that $f$ also divides $t^n - 1$. However, we have to be careful, since to make this conclusion, we need to show that $\phi_d$ and $\phi_{d'}$ have no common roots for distinct $d, d' \mid n$ (and $d, d' < n$).
+
+ Indeed, by induction, $\phi_d$ and $\phi_{d'}$ have no common roots because
+ \begin{align*}
+ \Root_{\phi_d}(L) &= \{d\text{th primitive roots of unity}\},\\
+ \Root_{\phi_{d'}}(L) &= \{d'\text{th primitive roots of unity}\},
+ \end{align*}
+ and these two sets are disjoint (or else the roots would not be \emph{primitive}). Therefore $\phi_d$ and $\phi_{d'}$ have no common irreducible factors. Hence $f \mid t^n - 1$. So we can write
+ \[
+ t^n - 1 = f \phi_n,
+ \]
+ where $\phi_n \in \Q[t]$. Since $f$ is monic, $\phi_n$ has integer coefficients. So indeed $\phi_n \in \Z[t]$. So the first part is proven.
+
+ To prove the second part, note that by induction,
+ \[
+ \Root_f(L) = \{\text{non-primitive }n\text{th roots of unity}\},
+ \]
+ since every non-primitive $n$th root of unity is a $d$th primitive root of unity for some $d \mid n$ with $d < n$.
+
+ Since $f \phi_n = t^n - 1$, the roots of $\phi_n$ are the remaining, primitive $n$th roots of unity. Since $t^n - 1$ has no repeated roots, we know that $\phi_n$ does not contain any extra roots. So
+ \[
+ \Root_{\phi_n}(L) = \{n\text{th primitive roots of unity}\}.\qedhere
+ \]
+\end{proof}
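The recursion $t^n - 1 = \prod_{d \mid n} \phi_d$ pins down $\phi_n$ completely, and we can compute it by repeated exact division (a sketch of mine, not from the notes; coefficient lists run from the constant term upwards):

```python
# Compute phi_n over Z from the recursion t^n - 1 = prod_{d | n} phi_d,
# stripping off phi_d for each proper divisor d by exact division.

def poly_div_exact(f, g):
    # exact division of f by a monic polynomial g
    f = f[:]
    q = [0] * (len(f) - len(g) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = f[i + len(g) - 1]
        for j, b in enumerate(g):
            f[i + j] -= q[i] * b
    assert all(c == 0 for c in f)   # the division really was exact
    return q

def cyclotomic(n):
    f = [-1] + [0] * (n - 1) + [1]   # t^n - 1
    for d in range(1, n):
        if n % d == 0:
            f = poly_div_exact(f, cyclotomic(d))
    return f

assert cyclotomic(1) == [-1, 1]    # t - 1
assert cyclotomic(2) == [1, 1]     # t + 1
assert cyclotomic(3) == [1, 1, 1]  # t^2 + t + 1
assert cyclotomic(4) == [1, 0, 1]  # t^2 + 1
```

Note that $\deg \phi_n$ comes out as $\varphi(n)$, the number of primitive $n$th roots of unity.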
+These $\phi_n$ are what we use to ``build up'' the polynomials $t^n - 1$. These will later serve as a technical tool to characterize the Galois group of the $n$th cyclotomic extension of $\Q$.
+
+Before we can reach that, we first take a tiny step, and prove something that works for arbitrary fields first.
+
+\begin{thm}
+ Let $K$ be a field with $\Char K = 0$ or $0 < \Char K \nmid n$. Let $L$ be the $n$th cyclotomic extension of $K$. Then $L/K$ is a Galois extension, and there is an injective homomorphism $\theta: \Gal(L/K) \to (\Z/n\Z)^\times$.
+
+ In addition, every irreducible factor of $\phi_n$ (in $K[t]$) has degree $[L:K]$.
+\end{thm}
+The important thing about our theorem is the homomorphism
+\[
+ \theta: \Gal(L/K) \to (\Z/n\Z)^\times.
+\]
+In general, we don't necessarily know much about $\Gal(L/K)$, but the group $(\Z/n\Z)^\times$ is well-understood. In particular, we now know that $\Gal(L/K)$ is abelian.
+
+\begin{proof}
+ Let $\mu$ be an $n$th primitive root of unity. Then
+ \[
+ \Root_{t^n - 1}(L) = \{1, \mu, \mu^2, \cdots, \mu^{n - 1}\}
+ \]
+ is a cyclic group of order $n$ generated by $\mu$. We first construct the homomorphism $\theta: \Aut_K(L) \to (\Z/n\Z)^\times$ as follows: for each $\phi \in \Aut_K(L)$, $\phi$ is completely determined by the value of $\phi(\mu)$ since $L = K(\mu)$. Since $\phi$ is an automorphism, it must take an $n$th primitive root of unity to another $n$th primitive root of unity. So $\phi(\mu) = \mu^i$ for some $i$ such that $(i, n) = 1$. Now let $\theta(\phi) = \bar{i} \in (\Z/n\Z)^\times$. Note that this is well-defined since if $\mu^i = \mu^j$, then $i - j$ has to be a multiple of $n$.
+
+ Now it is easy to see that if $\phi, \psi \in \Aut_K(L)$ are given by $\phi(\mu) = \mu^i$, and $\psi(\mu) = \mu^j$, then $\phi\circ \psi(\mu) = \phi(\mu^j) = \mu^{ij}$. So $\theta(\phi \psi) = \bar{ij} = \theta(\phi)\theta(\psi)$. So $\theta$ is a group homomorphism.
+
+ Now we check that $\theta$ is injective. If $\theta(\phi) = \bar{1}$ (note that $(\Z/n\Z)^\times$ is a multiplicative group with unit $1$), then $\phi(\mu) = \mu$. So $\phi = \id$.
+
+ Now we show that $L/K$ is Galois. Recall that $L = K(\mu)$, and let $P_\mu$ be the minimal polynomial of $\mu$ over $K$. Since $\mu$ is a root of $t^n - 1$, we know that $P_\mu \mid t^n - 1$. Since $t^n - 1$ has no repeated roots, $P_\mu$ has no repeated roots. So $P_\mu$ is separable. Moreover, $P_\mu$ splits over $L$ as $t^n - 1$ splits over $L$. So the extension is separable and normal, and hence Galois.
+
+ Applying the previous theorem, each irreducible factor $g$ of $\phi_n$ is a minimal polynomial of some $n$th primitive root of unity, say $\lambda$. Then $L = K(\lambda)$. So
+ \[
+ \deg g = \deg P_\lambda = [K(\lambda): K] = [L:K].\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ We can calculate the following in $\Q[t]$.
+ \begin{enumerate}
+ \item $\phi_1 = t - 1$.
+ \item $\phi_2 = t + 1$ since $t^2 - 1 = \phi_1 \phi_2$.
+ \item $\phi_3 = t^2 + t + 1$.
+ \item $\phi_4 = t^2 + 1$.
+ \end{enumerate}
+ These are rather expected. Now take $K = \F_2$. Then $1 = -1$. So we might be able to further decompose these polynomials. For example, $t + 1 = t - 1$ in $\F_2$. So we have
+ \[
+ \phi_4 = t^2 + 1 = t^2 - 1 = \phi_1 \phi_2.
+ \]
+ So in $\F_2$, $\phi_4$ is not irreducible. Similarly, if we have too much time, we can show that
+ \[
+ \phi_{15} = (t^4 + t + 1)(t^4 + t^3 + 1).
+ \]
+ So $\phi_{15}$ is not irreducible. However, they \emph{are} irreducible over the rationals, as we will soon see.
+\end{eg}
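The claimed factorisation over $\F_2$ is quick to verify by machine (a sketch of mine, not from the notes): over $\Z$ we have $\phi_{15} = t^8 - t^7 + t^5 - t^4 + t^3 - t + 1$, which reduces mod $2$ to the product of the two quartics.

```python
# Verify phi_15 = (t^4 + t + 1)(t^4 + t^3 + 1) in F_2[t].
def poly_mul_mod2(f, g):
    # coefficients are bits, so accumulate with XOR
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] ^= a & b
    return h

# coefficient lists from the constant term up
f1 = [1, 1, 0, 0, 1]                       # t^4 + t + 1
f2 = [1, 0, 0, 1, 1]                       # t^4 + t^3 + 1
phi15_mod2 = [1, 1, 0, 1, 1, 1, 0, 1, 1]   # t^8+t^7+t^5+t^4+t^3+t+1
assert poly_mul_mod2(f1, f2) == phi15_mod2
```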
+
+So far, we know $\Gal(L/K)$ is an abelian group, isomorphic to a subgroup of $(\Z/n\Z)^\times$. However, we are greedy and we want to know more. The following lemma tells us when this $\theta$ is an isomorphism.
+
+\begin{lemma}
+ Under the notation and assumptions of the previous theorem, $\phi_n$ is irreducible in $K[t]$ if and only if $\theta$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+ $(\Rightarrow)$ Suppose $\phi_n$ is irreducible. Recall that $\Root_{\phi_n}(L)$ is exactly the set of $n$th primitive roots of unity. So if $\mu$ is an $n$th primitive root of unity, then $P_\mu$, the minimal polynomial of $\mu$ over $K$, is $\phi_n$. In particular, if $\lambda$ is also an $n$th primitive root of unity, then $P_\mu = P_\lambda$. This implies that there is some $\phi_\lambda \in \Aut_{K}(L)$ such that $\phi_\lambda(\mu) = \lambda$.
+
+ Now if $\bar{i} \in (\Z/n\Z)^\times$, then taking $\lambda = \mu^i$, this shows that we have $\phi_\lambda \in \Aut_K(L)$ such that $\theta(\phi_\lambda) = \bar{i}$. So $\theta$ is surjective, and hence an isomorphism.
+
+ $(\Leftarrow)$ Suppose that $\theta$ is an isomorphism. We will reverse the above argument and show that all roots have the same minimal polynomial. Let $\mu$ be an $n$th primitive root of unity, and pick $\bar{i} \in (\Z/n\Z)^\times$, and let $\lambda = \mu^i$. Since $\theta$ is an isomorphism, there is some $\phi_\lambda \in \Aut_K(L)$ such that $\theta(\phi_\lambda) = \bar{i}$, i.e.\ $\phi_\lambda(\mu) = \mu^i = \lambda$. Then we must have $P_\mu = P_\lambda$.
+
+ Since every $n$th primitive root of unity is of the form $\mu^i$ (with $(i, n) = 1$), this implies that all $n$th primitive roots have the same minimal polynomial. Since the roots of $\phi_n$ are all the $n$th primitive roots of unity, its irreducible factors are exactly the minimal polynomials of the primitive roots. Moreover, $\phi_n$ does not have repeated roots. So $\phi_n = P_\mu$. In particular, $\phi_n$ is irreducible.
+\end{proof}
+
+We want to apply this lemma to the case of rational numbers. We want to show that $\theta$ is an isomorphism. So we have to show that $\phi_n$ is irreducible in $\Q[t]$.
+\begin{thm}
+ $\phi_n$ is irreducible in $\Q[t]$. In particular, it is also irreducible in $\Z[t]$.
+\end{thm}
+
+\begin{proof}
+ As before, this can be achieved by showing that all $n$th primitive roots have the same minimal polynomial. Moreover, let $\mu$ be our favorite $n$th primitive root. Then all other primitive roots $\lambda$ are of the form $\lambda = \mu^i$, where $(i, n) = 1$. By the fundamental theorem of arithmetic, we can write $i$ as a product of primes $i = q_1 \cdots q_r$, where each $q_j \nmid n$ since $(i, n) = 1$. Hence it suffices to show that for all primes $q \nmid n$, we have $P_{\mu} = P_{\mu^q}$. Noting that $\mu^q$ is also an $n$th primitive root, this gives
+ \[
+ P_{\mu} = P_{\mu^{q_1}} = P_{(\mu^{q_1})^{q_2}} = P_{\mu^{q_1q_2}} = \cdots = P_{\mu^{q_1\cdots q_r}} = P_{\mu^i}.
+ \]
+ So we now let $\mu$ be an $n$th primitive root, $P_\mu$ be its minimal polynomial. Since $\mu$ is a root of $\phi_n$, we can write $P_\mu \mid \phi_n$ inside $\Q[t]$. So we can write
+ \[
+ \phi_n = P_\mu R.
+ \]
+ Since $\phi_n$ and $P_\mu$ are monic, $R$ is also monic. By Gauss' lemma, we must have $P_\mu, R \in \Z[t]$.
+
+ Note that showing $P_\mu = P_{\mu^q}$ is the same as showing $\mu^q$ is a root of $P_\mu$, since $\deg P_\mu = \deg P_{\mu^q}$. So suppose it's not. Since $\mu^q$ is an $n$th primitive root of unity, it is a root of $\phi_n$. So $\mu^q$ must be a root of $R$. Now let $S = R(t^q)$. Then $\mu$ is a root of $S$, and so $P_\mu \mid S$.
+
+ We now reduce mod $q$. For any polynomial $f \in \Z[t]$, we write the result of reducing the coefficients mod $q$ as $\bar{f}$. Then we have $\bar{S} = \overline{R(t^q)} = \overline{R(t)}^q$. Since $\bar{P}_\mu$ divides $\bar{S}$ (by Gauss' lemma), we know $\bar{P}_\mu$ and $\overline{R(t)}$ have common roots. But $\bar{\phi}_n = \bar{P}_\mu \bar{R}$, and so this implies $\bar{\phi}_n$ has repeated roots. This is impossible since $\bar{\phi}_n$ divides $t^n - 1$, and since $q \nmid n$, we know the derivative of $t^n - 1$ does not vanish at the roots. So we are done.
+\end{proof}
+
+\begin{cor}
+ Let $K = \Q$ and $L$ be the $n$th cyclotomic extension of $\Q$. Then the injection $\theta: \Gal(L/\Q) \to (\Z/n\Z)^\times$ is an isomorphism.
+\end{cor}
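The cyclotomic polynomials themselves are easy to compute from the identity $t^n - 1 = \prod_{d \mid n} \phi_d$. As a computational sanity check (a Python sketch outside the formal development; the function names are our own), we can compute $\phi_n$ over $\Z$ by exact polynomial division and confirm that $\deg \phi_n$ equals the number of $n$th primitive roots of unity:

```python
from math import gcd

def poly_divexact(num, den):
    # exact division of integer polynomials, coefficients lowest degree first
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    for k in range(len(q) - 1, -1, -1):
        c = num[k + len(den) - 1] // den[-1]
        q[k] = c
        for j, d in enumerate(den):
            num[k + j] -= c * d
    assert all(x == 0 for x in num), "division was not exact"
    return q

def cyclotomic(n):
    # phi_n = (t^n - 1) / prod_{d | n, d < n} phi_d
    num = [-1] + [0] * (n - 1) + [1]
    for d in range(1, n):
        if n % d == 0:
            num = poly_divexact(num, cyclotomic(d))
    return num

def totient(n):
    # number of primitive n-th roots of unity
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(cyclotomic(8))   # [1, 0, 0, 0, 1], i.e. phi_8 = t^4 + 1
print(cyclotomic(12))  # [1, 0, -1, 0, 1], i.e. phi_12 = t^4 - t^2 + 1
print(all(len(cyclotomic(n)) - 1 == totient(n) for n in range(1, 30)))  # True
```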
+
+\begin{eg}
+ Let $p$ be a prime number, and $q = p^d$, $d \in \N$. Consider $\F_q$, a field with $q$ elements, and let $L$ be the $n$th cyclotomic extension of $\F_q$ (where $p\nmid n$). Then we have a homomorphism $\theta: \Gal(L/\F_q) \rightarrow (\Z/n\Z)^\times$.
+
+ We have previously shown that $\Gal(L/\F_q)$ must be a cyclic group. So if $(\Z/n\Z)^\times$ is non-cyclic, then $\theta$ is not an isomorphism, and $\phi_n$ is not irreducible in $\F_q[t]$.
+
+ For example, take $p = q = 7$ and $n = 8$. Then
+ \[
+ (\Z/8\Z)^\times = \{\bar{1}, \bar{3}, \bar{5}, \bar{7}\}
+ \]
+ is not cyclic, because manual checking shows that there is no element of order $4$. Hence $\theta: \Gal(L/\F_7) \to (\Z/8\Z)^\times$ is not an isomorphism, and $\phi_8$ is not irreducible in $\F_7[t]$.
+\end{eg}
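We can verify both claims in the example by machine. The following Python sketch (helper names our own) checks that every unit mod $8$ squares to $1$, so $(\Z/8\Z)^\times$ has no element of order $4$, and then finds an explicit factorization of $\phi_8 = t^4 + 1$ into monic quadratics over $\F_7$ by brute force:

```python
# every element of (Z/8Z)^x squares to 1, so there is no element of order 4
units = [a for a in range(1, 8) if a % 2 == 1]
print([pow(a, 2, 8) for a in units])  # [1, 1, 1, 1]

p = 7
def mul_mod(a, b):
    # multiply coefficient lists (lowest degree first) mod p
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

# brute-force search for monic quadratic factors of phi_8 = t^4 + 1 over F_7
target = [1, 0, 0, 0, 1]  # t^4 + 1, lowest degree first
factors = [((b1, c1), (b2, c2))
           for b1 in range(p) for c1 in range(p)
           for b2 in range(p) for c2 in range(p)
           if mul_mod([c1, b1, 1], [c2, b2, 1]) == target]
print(factors[0])  # ((3, 1), (4, 1))
```

The search finds $t^4 + 1 = (t^2 + 3t + 1)(t^2 + 4t + 1)$ in $\F_7[t]$, confirming that $\phi_8$ is reducible over $\F_7$.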
+
+\subsection{Kummer extensions}
+We shall now consider a more general case, and study the splitting field of $t^n - \lambda \in K[t]$. As we have previously seen, we will need to make use of the $n$th primitive roots of unity.
+
The definition of a Kummer extension will involve a bit more than just being the splitting field of $t^n - \lambda$. So before we reach the definition, we first study some properties of an arbitrary splitting field of $t^n - \lambda$, and use these to motivate the definition of a Kummer extension.
+
+\begin{defi}[Cyclic extension]\index{cyclic extension}
 We say a Galois extension $L/K$ is \emph{cyclic} if $\Gal(L/K)$ is a cyclic group.
+\end{defi}
+
+\begin{thm}
+ Let $K$ be a field, $\lambda \in K$ non-zero, $n \in \N$, $\Char K = 0$ or $0 < \Char K \nmid n$. Let $L$ be the splitting field of $t^n - \lambda$. Then
+ \begin{enumerate}
+ \item $L$ contains an $n$th primitive root of unity, say $\mu$.
+ \item $L/K(\mu)$ is a cyclic (and in particular Galois) extension with degree $[L:K(\mu)] \mid n$.
+ \item $[L:K(\mu)] = n$ if and only if $t^n - \lambda$ is irreducible in $K(\mu)[t]$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Under our assumptions, $t^n - \lambda$ and $(t^n - \lambda)' = nt^{n - 1}$ have no common roots in $L$. So $t^n - \lambda$ has distinct roots in $L$, say $\alpha_1, \cdots, \alpha_n \in L$.
+
 It then follows by direct computation that $\alpha_1 \alpha_1^{-1}, \alpha_2 \alpha_1^{-1}, \cdots, \alpha_n \alpha_1^{-1}$ are $n$ distinct roots of unity, i.e.\ roots of $t^n - 1$. So these are all the $n$th roots of unity, and one of them, say $\mu$, must be an $n$th primitive root of unity.
+ \item We know $L/K(\mu)$ is a Galois extension because it is the splitting field of the separable polynomial $t^n - \lambda$.
+
 To understand the Galois group, we need to know what this field looks like. We let $\alpha$ be any root of $t^n - \lambda$. Then the set of all roots can be written as
+ \[
 \{\alpha, \mu\alpha, \mu^2 \alpha, \cdots, \mu^{n - 1} \alpha\}.
+ \]
+ Then
+ \[
+ L = K(\alpha_1, \cdots, \alpha_n) = K(\mu, \alpha) = K(\mu)(\alpha).
+ \]
+ Thus, any element of $\Gal(L/K(\mu))$ is uniquely determined by what it sends $\alpha$ to, and any homomorphism must send $\alpha$ to one of the other roots of $t^n - \lambda$, namely $\mu^i \alpha$ for some $i$.
+
+ Define a homomorphism $\sigma: \Gal(L/K(\mu)) \to \Z/n\Z$ that sends $\phi$ to the corresponding $i$ (as an element of $\Z/n\Z$, so that it is well-defined).
+
 It is easy to see that $\sigma$ is an injective group homomorphism. So we know $\Gal(L/K(\mu))$ is isomorphic to a subgroup of $\Z/n\Z$. Since any subgroup of a cyclic group is cyclic, we know that $\Gal(L/K(\mu))$ is cyclic, and its order is a factor of $n$ by Lagrange's theorem. Since $|\Gal(L/K(\mu))| = [L:K(\mu)]$ by definition of a Galois extension, it follows that $[L:K(\mu)]$ divides $n$.
+
 \item We know that $[L:K(\mu)] = [K(\mu, \alpha): K(\mu)] = \deg q_\alpha$, where $q_\alpha$ is the minimal polynomial of $\alpha$ over $K(\mu)$. So $[L:K(\mu)] = n$ if and only if $\deg q_\alpha = n$. Since $q_\alpha$ is a factor of $t^n - \lambda$, $\deg q_\alpha = n$ if and only if $q_\alpha = t^n - \lambda$. This is true if and only if $t^n - \lambda$ is irreducible in $K(\mu)[t]$. So done.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
+ Consider $t^4 + 2 \in \Q[t]$. Let $\mu = \sqrt{-1}$, which is a $4$th primitive root of unity. Now
+ \[
+ t^4 + 2 = (t - \alpha)(t + \alpha)(t - \mu \alpha)(t + \mu \alpha),
+ \]
+ where $\alpha = \sqrt[4]{-2}$ is one of the roots of $t^4 + 2$. Then we have the field extension $\Q \subseteq \Q(\mu) \subseteq \Q(\mu, \alpha)$, where $\Q(\mu, \alpha)$ is a splitting field of $t^4 + 2$.
+
+ Since $\sqrt{-2} \not\in \Q(\mu)$, we know that $t^4 + 2$ is irreducible in $\Q(\mu)[t]$ by looking at the factorization above. So by our theorem, $\Q(\mu) \subseteq \Q(\mu, \alpha)$ is a cyclic extension of degree exactly $4$.
+\end{eg}
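As a quick numerical sanity check of the factorization in the example (a Python sketch using floating-point complex arithmetic, so only approximate):

```python
# the four roots of t^4 + 2 are a, -a, mu*a, -mu*a, with a a fourth root
# of -2 and mu = sqrt(-1)
a = (-2) ** 0.25   # Python returns the principal complex fourth root of -2
mu = 1j
roots = [a, -a, mu * a, -mu * a]
print(all(abs(r ** 4 + 2) < 1e-9 for r in roots))  # True: all are roots
# the roots are distinct, so t^4 + 2 = (t - a)(t + a)(t - mu a)(t + mu a)
print(len({(round(r.real, 6), round(r.imag, 6)) for r in roots}))  # 4
```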
+
+\begin{defi}[Kummer extension]\index{Kummer extension}
 Let $K$ be a field, $\lambda \in K$ non-zero, $n \in \N$, $\Char K = 0$ or $0 < \Char K \nmid n$. Suppose $K$ contains an $n$th primitive root of unity, and $L$ is a splitting field of $t^n - \lambda$. If $[L:K] = n$, we say $L/K$ is a \emph{Kummer extension}.
+\end{defi}
+Note that we used to have extensions $K\subseteq K(\mu) \subseteq L$. But if $K$ already contains a primitive root of unity, then $K = K(\mu)$. So we are left with the cyclic extension $K \subseteq L$.
+
The following technical lemma will be useful:
+\begin{lemma}
 Assume $L/K$ is a field extension. Then distinct elements of $\Hom_K(L, L)$ are linearly independent over $L$. More concretely, let $\lambda_1, \cdots, \lambda_n \in L$ and let $\phi_1, \cdots, \phi_n \in \Hom_K(L, L)$ be distinct. Suppose for all $\alpha \in L$, we have
+ \[
+ \lambda_1 \phi_1(\alpha) + \cdots + \lambda_n \phi_n(\alpha) = 0.
+ \]
+ Then $\lambda_i = 0$ for all $i$.
+\end{lemma}
+
+\begin{proof}
+ We perform induction on $n$.
+
+ Suppose we have some $\lambda_i \in L$ and $\phi_i \in \Hom_K(L, L)$ such that
+ \[
+ \lambda_1 \phi_1(\alpha) + \cdots + \lambda_n \phi_n(\alpha) = 0.
+ \]
 The $n = 1$ case is trivial: taking $\alpha = 1$ gives $\lambda_1 = \lambda_1 \phi_1(1) = 0$, since $\phi_1(1) = 1$.
+
+ Otherwise, since the homomorphisms are distinct, pick $\beta \in L$ such that $\phi_1(\beta) \not= \phi_n(\beta)$. Then we know that
+ \[
+ \lambda_1 \phi_1(\alpha\beta) + \cdots + \lambda_n \phi_n(\alpha\beta) = 0
+ \]
+ for all $\alpha \in L$. Since $\phi_i$ are homomorphisms, we can write this as
+ \[
+ \lambda_1 \phi_1(\alpha)\phi_1(\beta) + \cdots + \lambda_n \phi_n(\alpha)\phi_n(\beta) = 0.
+ \]
+ On the other hand, by just multiplying the original equation by $\phi_n(\beta)$, we get
+ \[
+ \lambda_1 \phi_1(\alpha)\phi_n(\beta) + \cdots + \lambda_n \phi_n(\alpha)\phi_n(\beta) = 0.
+ \]
+ Subtracting the equations gives
+ \[
+ \lambda_1 \phi_1(\alpha) (\phi_1(\beta) - \phi_n(\beta)) + \cdots + \lambda_{n - 1}\phi_{n - 1}(\alpha) (\phi_{n - 1}(\beta) - \phi_n(\beta)) = 0
+ \]
+ for all $\alpha \in L$. By induction, $\lambda_i(\phi_i(\beta) - \phi_n(\beta)) = 0$ for all $1 \leq i \leq n - 1$. In particular, since $\phi_1(\beta) - \phi_n(\beta) \not= 0$, we have $\lambda_1 = 0$. Then we are left with
+ \[
+ \lambda_2 \phi_2(\alpha) + \cdots + \lambda_n \phi_n(\alpha) = 0.
+ \]
+ Then by induction again, we know that all coefficients are zero.
+\end{proof}
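We can illustrate the lemma in the smallest interesting case $L = \Q(\sqrt{2})$, $K = \Q$, with $\phi_1 = \id$ and $\phi_2$ the conjugation $\sqrt{2} \mapsto -\sqrt{2}$. Evaluating a supposed dependence $\lambda_1 \phi_1(\alpha) + \lambda_2 \phi_2(\alpha) = 0$ at $\alpha = 1$ and $\alpha = \sqrt{2}$ gives a $2 \times 2$ linear system over $L$, whose determinant is non-zero, forcing $\lambda_1 = \lambda_2 = 0$. A Python sketch, with our own encoding of $\Q(\sqrt 2)$ as pairs:

```python
# elements of Q(sqrt 2) written as pairs (a, b) meaning a + b*sqrt(2)
def mul(x, y):
    # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
    return (x[0] * y[0] + 2 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

def phi_id(x):   return x              # the identity automorphism
def phi_conj(x): return (x[0], -x[1])  # sqrt(2) |-> -sqrt(2)

one, rt2 = (1, 0), (0, 1)
# if l1*phi_id(a) + l2*phi_conj(a) = 0 for all a, evaluating at a = 1 and
# a = sqrt(2) gives a 2x2 system whose determinant in Q(sqrt 2) is
# phi_id(1)*phi_conj(rt2) - phi_conj(1)*phi_id(rt2)
u = mul(phi_id(one), phi_conj(rt2))
v = mul(phi_conj(one), phi_id(rt2))
det = (u[0] - v[0], u[1] - v[1])
print(det)  # (0, -2), i.e. -2*sqrt(2) != 0, so l1 = l2 = 0
```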
+
+\begin{thm}
+ Let $K$ be a field, $n \in \N$, $\Char K = 0$ or $0 < \Char K \nmid n$. Suppose $K$ contains an $n$th primitive root of unity, and $L/K$ is a cyclic extension of degree $[L:K] = n$. Then $L/K$ is a Kummer extension.
+\end{thm}
+
+This is a rather useful result. If we look at the splitting field of a polynomial $t^n - \lambda$, even if the ground field includes the right roots of unity, \emph{a priori}, this doesn't have to be a Kummer extension if it doesn't have degree $n$. But we previously showed that the extension must be cyclic. And so this theorem shows that it is still a Kummer extension of some sort.
+
+This is perhaps not too surprising. For example, if, say, $n = 4$ and $\lambda$ is secretly a square, then the splitting field of $t^4 - \lambda$ is just the splitting field of $t^2 - \sqrt{\lambda}$.
+
+\begin{proof}
+ Our objective here is to find a clever $\lambda \in K$ such that $L$ is the splitting field of $t^n - \lambda$. To do so, we will have to hunt for a root $\beta$ of $t^n - \lambda$ in $L$.
+
 Pick $\phi$ a generator of $\Gal(L/K)$. We know that if $\beta$ were a root of $t^n - \lambda$, then $\phi(\beta)$ would be another root, i.e.\ $\phi(\beta) = \mu^i \beta$ for some $i$. Thus, we want to find a non-zero element $\beta$ satisfying, say, $\phi(\beta) = \mu^{-1}\beta$.
+
+ By the previous lemma, we can find some $\alpha \in L$ such that
+ \[
+ \beta = \alpha + \mu\phi(\alpha) + \mu^2\phi^2(\alpha) + \cdots + \mu^{n - 1}\phi^{n - 1}(\alpha) \not= 0.
+ \]
 Then, noting that $\phi^n$ is the identity and $\phi$ fixes $\mu \in K$, we see that $\beta$ satisfies
 \[
 \phi(\beta) = \phi(\alpha) + \mu \phi^2 (\alpha) + \cdots + \mu^{n - 1} \phi^n (\alpha) = \mu^{-1}\beta.
 \]
+ In particular, we know that $\phi(\beta) \in K(\beta)$.
+
+ Now pick $\lambda = \beta^n$. Then $\phi(\beta^n) = \mu^{-n} \beta^n = \beta^n$. So $\phi$ fixes $\beta^n$. Since $\phi$ generates $\Gal(L/K)$, we know all automorphisms of $L/K$ fixes $\beta^n$. So $\beta^n \in K$.
+
 Now the roots of $t^n - \lambda$ are $\beta, \mu \beta, \cdots, \mu^{n - 1}\beta$. Since these are all in $K(\beta)$, we know $K(\beta)$ is the splitting field of $t^n - \lambda$.
+
 Finally, to show that $K(\beta) = L$, we observe that $\id, \phi|_{K(\beta)}$, \ldots, $\phi^{n - 1}|_{K(\beta)}$ are distinct elements of $\Aut_K(K(\beta))$, since they do different things to $\beta$. Recall our previous theorem that
+ \[
+ [K(\beta):K] \geq |\Aut_K (K(\beta))|.
+ \]
 So we know that $[K(\beta):K] \geq n = [L:K]$. Since $K(\beta) \subseteq L$, we must have $[K(\beta):K] = [L:K]$, i.e.\ $L = K(\beta)$. So done.
+\end{proof}
+
+\begin{eg}
+ Consider $t^3 - 2 \in \Q[t]$, and $\mu$ a third primitive root of unity. Then we have the extension $\Q\subseteq \Q(\mu) \subseteq \Q(\mu, \sqrt[3]{2})$. Then $\Q\subseteq \Q(\mu)$ is a cyclotomic extension of degree $2$, and $\Q(\mu) \subseteq \Q(\mu, \sqrt[3]{2})$ is a Kummer extension of degree 3.
+\end{eg}
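We can see the resolvent element $\beta$ from the proof concretely in this example. Take $K = \Q(\omega)$, $L = K(\sqrt[3]{2})$, $\alpha = \sqrt[3]{2}$, with $\phi$ the generator of $\Gal(L/K)$ sending $\alpha \mapsto \omega\alpha$, and choose $\mu = \omega^2$. The following numerical Python sketch (floating-point, our own setup) checks that $\beta = \alpha + \mu\phi(\alpha) + \mu^2\phi^2(\alpha) = 3\alpha \neq 0$, that $\beta^3 = 54 \in K$, and that $\phi(\beta) = \mu^{-1}\beta$:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
alpha = 2 ** (1 / 3)               # real cube root of 2
mu = w ** 2                        # our chosen primitive root mu

# phi generates Gal(L/K): phi(alpha) = w*alpha, so phi^k(alpha) = w^k * alpha
beta = alpha + mu * (w * alpha) + mu ** 2 * (w ** 2 * alpha)
print(abs(beta - 3 * alpha) < 1e-9)      # True: beta = 3*alpha != 0
print(abs(beta ** 3 - 54) < 1e-9)        # True: beta^3 = 54 lies in K
# applying phi termwise: phi(beta) = phi(alpha) + mu*phi^2(alpha) + mu^2*phi^3(alpha)
phi_beta = w * alpha + mu * (w ** 2 * alpha) + mu ** 2 * alpha
print(abs(phi_beta - beta / mu) < 1e-9)  # True: phi(beta) = mu^{-1} * beta
```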
+
+\subsection{Radical extensions}
+We are going to put these together and look at radical extensions, which allows us to characterize what it means to ``solve a polynomial with radicals''.
+
+\begin{defi}[Radical extension]
 A field extension $L/K$ is \emph{radical} if there is some further extension $E/L$ with a sequence
+ \[
+ K = E_0 \subseteq E_1 \subseteq \cdots \subseteq E_r = E,
+ \]
+ such that each $E_i \subseteq E_{i + 1}$ is a cyclotomic or Kummer extension, i.e.\ $E_{i + 1}$ is a splitting field of $t^n - \lambda_{i + 1}$ over $E_i$ for some $\lambda_{i + 1} \in E_i$.
+\end{defi}
+Informally, we say $E_{i + 1}$ is obtained by adding the roots ``$\sqrt[n]{\lambda_{i + 1}}$'' to $E_i$. Hence we interpret a radical extension as an extension that only adds radicals.
+
+\begin{defi}[Solubility by radicals]
 Let $K$ be a field, and $f \in K[t]$. We say $f$ is \emph{soluble by radicals} if the splitting field of $f$ is a radical extension of $K$.
+\end{defi}
+This means that $f$ can be solved by radicals of the form $\sqrt[n]{\lambda_i}$.
+
+Let's go back to our first lecture and describe what we've done in the language we've developed in the course.
+\begin{eg}
+ As we have shown in lecture 1, any polynomial $f \in \Q[t]$ of degree at most $4$ can be solved by radicals.
+
+ For example, assume $\deg f = 3$. So $f = t^3 + a t^2 + bt + c$. Let $L$ be the splitting field of $f$. Recall we reduced the problem of ``solving'' $f$ to the case $a = 0$ by the substitution $x \mapsto x - \frac{a}{3}$. Then we found our $\beta, \gamma \in \C$ such that each root $\alpha_i$ can be written as a linear combination of $\beta$ and $\gamma$ (and $\mu$), i.e.\ $L \subseteq \Q(\beta, \gamma, \mu)$.
+
+ Then we showed that
+ \[
+ \{\beta^3, \gamma^3\} = \left\{\frac{-27 c \pm \sqrt{(27c)^2 + 4\times 27b^3}}{2}\right\}.
+ \]
+ We now let
+ \[
+ \lambda = \sqrt{(27 c)^2 + 4\times 27 b^3}.
+ \]
+ Then we have the extensions
+ \[
+ \Q \subseteq \Q(\lambda) \subseteq \Q(\lambda, \mu) \subseteq \Q(\lambda, \mu, \beta),
+ \]
+ and also
+ \[
+ \Q \subseteq L \subseteq \Q(\lambda, \mu, \beta).
+ \]
 Note that the first extension $\Q \subseteq \Q(\lambda)$ is a Kummer extension since it is a splitting field of $t^2 - \lambda^2$. Then $\Q(\lambda)\subseteq \Q(\lambda, \mu)$ is the third cyclotomic extension. Finally, $\Q(\lambda, \mu) \subseteq \Q(\lambda, \mu, \beta)$ is a Kummer extension, the splitting field of $t^3 - \beta^3$. So $\Q \subseteq L$ is a radical extension.
+\end{eg}
+
Let's go back to the definition of a radical extension. We said $L/K$ is radical if there is a further extension $E/L$ that satisfies certain nice properties. It would be great if $E/K$ were actually a Galois extension. To show this, we first need a technical lemma.
+\begin{lemma}
+ Let $L/K$ be a Galois extension, $\Char K = 0$, $\gamma \in L$ and $F$ the splitting field of $t^n - \gamma$ over $L$. Then there exists a further extension $E/F$ such that $E/L$ is radical and $E/K$ is Galois.
+\end{lemma}
+Here we have the inclusions
+\[
+ K\subseteq L \subseteq F \subseteq E,
+\]
+where $K, L$ and $F$ are given and $E$ is what we need to find. The idea of the proof is that we just add in the ``missing roots'' to obtain $E$ so that $E/K$ is Galois, and doing so only requires performing cyclotomic and Kummer extensions.
+
+\begin{proof}
+ Since we know that $L/K$ is Galois, we would rather work in $K$ than in $L$. However, our $\gamma$ is in $L$, not $K$. Hence we will employ a trick we've used before, where we introduce a new polynomial $f$, and show that its coefficients are fixed by $\Gal(L/K)$, and hence in $K$. Then we can look at the splitting field of $f$ or its close relatives.
+
+ Let
+ \[
+ f = \prod_{\phi \in \Gal(L/K)} (t^n - \phi(\gamma)).
+ \]
 Each $\phi \in \Gal(L/K)$ induces a homomorphism $L[t] \to L[t]$. Since each $\phi \in \Gal(L/K)$ just permutes the factors $t^n - \psi(\gamma)$ of $f$, we know that this induced homomorphism fixes $f$. Since all automorphisms in $\Gal(L/K)$ fix the coefficients of $f$, the coefficients must all be in $K$. So $f \in K[t]$.
+
+ Now since $L/K$ is Galois, we know that $L/K$ is normal. So $L$ is the splitting field of some $g \in K[t]$. Let $E$ be the splitting field of $fg$ over $K$. Then $K\subseteq E$ is normal. Since the characteristic is zero, this is automatically separable. So the extension $K\subseteq E$ is Galois.
+
+ We have to show that $L \subseteq E$ is a radical extension. We pick our fields as follows:
+ \begin{itemize}
+ \item $E_0 = L$
+ \item $E_1 = $ splitting field of $t^n - 1$ over $E_0$
+ \item $E_2 = $ splitting field of $t^n - \gamma$ over $E_1$
+ \item $E_3 = $ splitting field of $t^n - \phi_1(\gamma)$ over $E_2$
+ \item \ldots
+ \item $E_r = E$,
+ \end{itemize}
+ where we enumerate $\Gal(L/K)$ as $\{\id , \phi_1, \phi_2, \cdots\}$.
+
+ We then have the sequence of extensions
+ \[
+ L = E_0 \subseteq E_1 \subseteq E_2 \subseteq \cdots \subseteq E_r
+ \]
+ Here $E_0 \subseteq E_1$ is a cyclotomic extension, and $E_1 \subseteq E_2$, $E_2 \subseteq E_3$ etc. are Kummer extensions since they contain enough roots of unity and are cyclic. By construction, $F \subseteq E_2$. So $F\subseteq E$.
+\end{proof}
+
+\begin{thm}
+ Suppose $L/K$ is a radical extension and $\Char K = 0$. Then there is an extension $E/L$ such that $E/K$ is Galois and there is a sequence
+ \[
+ K = E_0 \subseteq E_1 \subseteq \cdots \subseteq E,
+ \]
+ where $E_i \subseteq E_{i + 1}$ is cyclotomic or Kummer.
+\end{thm}
+
+\begin{proof}
+ Note that this is equivalent to proving the following statement: Let
+ \[
 K = L_0 \subseteq L_1 \subseteq \cdots \subseteq L_s
+ \]
+ be a sequence of cyclotomic or Kummer extensions. Then there exists an extension $L_s \subseteq E$ such that $K \subseteq E$ is Galois and can be written as a sequence of cyclotomic or Kummer extensions.
+
+ We perform induction on $s$. The $s = 0$ case is trivial.
+
+ If $s > 0$, then by induction, there is an extension $M/L_{s - 1}$ such that $M/K$ is Galois and is a sequence of cyclotomic and Kummer extensions. Now $L_s$ is a splitting field of $t^n - \gamma$ over $L_{s - 1}$ for some $\gamma \in L_{s - 1}$. Let $F$ be the splitting field of $t^n - \gamma$ over $M$. Then by the lemma and its proof, there exists an extension $E/M$ that is a sequence of cyclotomic or Kummer extensions, and $E/K$ is Galois.
+ \begin{center}
+ \begin{tikzpicture}[scale=2]
+ \node at (0.5, 0) (K) {$K$};
+ \node at (2, 0) (Ls1) {$L_{s - 1}$};
+ \node at (3, 0.5) (Ls) {$L_s = L_{s - 1}(\sqrt[n]{\gamma})$};
+ \node at (3, -0.5) (M) {$M$};
+ \node at (4, 0) (F) {$F = M (\sqrt[n]{\gamma})$};
+ \node at (5, 0) (E) {$E$};
+
+ \draw (K) -- (Ls1) -- (Ls) -- (F) -- (E);
+ \draw (Ls1) -- (M) -- (F);
+ \end{tikzpicture}
+ \end{center}
 However, we already know that $M/K$ is a sequence of cyclotomic and Kummer extensions. So $E/K$ is a sequence of cyclotomic and Kummer extensions. So done.
+\end{proof}
+
+\subsection{Solubility of groups, extensions and polynomials}
Let $f \in K[t]$. We defined the notion of solubility of $f$ in terms of radical extensions. However, can we decide whether $f$ is soluble or not without resorting to the definition? In particular, is it possible to decide whether it is soluble by just looking at $\Gal(L/K)$, where $L$ is the splitting field of $f$ over $K$? It would be great if we could do so, since groups are easier to understand than fields.
+
+The answer is yes. It turns out the solubility of $f$ corresponds to the solubility of $\Gal(L/K)$. Of course, we will have to first define what it means for a group to be soluble. After that, we will find examples of polynomials $f$ of degree at least $5$ such that $\Gal(L/K)$ is not soluble. In other words, there are polynomials that cannot be solved by radicals.
+\begin{defi}[Soluble group]
+ A finite group $G$ is \emph{soluble} if there exists a sequence of subgroups
+ \[
+ G_r = \{1\} \lhd \cdots \lhd G_1 \lhd G_0 = G,
+ \]
+ where $G_{i + 1}$ is normal in $G_i$ and $G_i/G_{i + 1}$ is cyclic.
+\end{defi}
+
+\begin{eg}
 Any finite abelian group is soluble by the structure theorem of finite abelian groups:
+ \[
+ G \cong \frac{\Z}{\bra n_1\ket} \times \cdots \times \frac{\Z}{\bra n_r\ket}.
+ \]
+\end{eg}
+
+\begin{eg}
 Let $S_n$ be the symmetric group of permutations of $n$ letters. We know that $S_3$ is soluble since
+ \[
+ \{1\} \lhd A_3 \lhd S_3,
+ \]
 where $S_3/A_3 \cong \Z/\bra 2\ket$ and $A_3/\{1\} \cong \Z/\bra 3\ket$.
+
+ $S_4$ is also soluble by
+ \[
+ \{1\} \lhd \{e, (1\; 2)(3\; 4)\} \lhd \{e, (1\; 2)(3\; 4), (1\; 3)(2\; 4), (1\; 4)(2\; 3)\} \lhd A_4 \lhd S_4.
+ \]
 Reading from the right, we can show that the successive quotients are $\Z/\bra 2\ket$, $\Z/\bra 3\ket$, $\Z/\bra 2\ket$ and $\Z/\bra 2\ket$ respectively.
+\end{eg}
+How about $S_n$ for higher $n$? It turns out they are no longer soluble for $n \geq 5$. To prove this, we first need a lemma.
+
+\begin{lemma}
+ Let $G$ be a finite group. Then
+ \begin{enumerate}
+ \item If $G$ is soluble, then any subgroup of $G$ is soluble.
+ \item If $A \lhd G$ is a normal subgroup, then $G$ is soluble if and only if $A$ and $G/A$ are both soluble.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $G$ is soluble, then by definition, there is a sequence
+ \[
+ G_r = \{1\} \lhd \cdots \lhd G_1 \lhd G_0 = G,
+ \]
+ such that $G_{i + 1}$ is normal in $G_i$ and $G_i/G_{i + 1}$ is cyclic.
+
 Let $H \leq G$ be any subgroup, and let $H_i = H \cap G_i$. Note that $H_{i + 1}$ is just the kernel of the obvious homomorphism $H_i \to G_i/G_{i + 1}$. So $H_{i + 1} \lhd H_i$. Also, by the first isomorphism theorem, this gives an injective homomorphism $H_i/H_{i + 1} \to G_i/G_{i + 1}$. So $H_i/H_{i + 1}$ is isomorphic to a subgroup of a cyclic group, hence cyclic. So $H$ is soluble.
+
+ \item $(\Rightarrow)$ By (i), we know that $A$ is solvable. To show the quotient is soluble, by assumption, we have the sequence
+ \[
+ G_r = \{1\} \lhd \cdots \lhd G_1 \lhd G_0 = G,
+ \]
 such that $G_{i + 1}$ is normal in $G_i$ and $G_i/G_{i + 1}$ is cyclic. We construct the sequence for the quotient in the obvious way. We want to define $E_i$ as the quotient $G_i/A$, but since $A$ is not necessarily a subgroup of $G_i$, we instead define $E_i$ to be the image of $G_i$ under the quotient map $G \to G/A$. Then we have a sequence
+ \[
+ E_r = \{1\} \lhd \cdots \lhd E_0 = G/A.
+ \]
 The quotient map induces a surjective homomorphism $G_i/G_{i + 1} \to E_i/E_{i + 1}$, showing that each $E_i/E_{i + 1}$ is cyclic, being a quotient of a cyclic group.
+
+ $(\Leftarrow)$ From the assumptions, we get the sequences
+ \[
+ A_m = \{1\} \lhd \cdots \lhd A_0 = A
+ \]
+ \[
+ F_n = A \lhd \cdots \lhd F_0 = G
+ \]
+ where each quotient is cyclic. So we get a sequence
+ \[
 A_m = \{1\} \lhd A_{m - 1} \lhd \cdots \lhd A_0 = F_n \lhd F_{n - 1} \lhd \cdots \lhd F_0 = G,
+ \]
+ and each quotient is cyclic. So done.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{eg}
 We want to show that $S_n$ is not soluble if $n \geq 5$. It is a well-known fact that $A_n$ is a simple non-abelian group for $n \geq 5$, i.e.\ it has no non-trivial proper normal subgroup. A non-abelian simple group cannot be soluble, so $A_n$ is not soluble. Hence by the lemma, $S_n$ is not soluble.
+\end{eg}
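We can check these solubility claims by machine, using the standard equivalent criterion (not proved in these notes) that a finite group is soluble if and only if its derived series, formed by repeatedly taking the subgroup generated by all commutators, reaches the trivial group. A brute-force Python sketch (names our own), with permutations as tuples:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations of {0, ..., n-1} as tuples
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def generated(gens, n):
    # closure of a generating set under composition, by BFS
    idt = tuple(range(n))
    group, frontier = {idt}, [idt]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = compose(g, h)
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

def derived(group, n):
    # subgroup generated by all commutators g h g^-1 h^-1
    comms = {compose(compose(g, h), compose(inverse(g), inverse(h)))
             for g in group for h in group}
    return generated(comms, n)

def is_soluble(group, n):
    # a finite group is soluble iff its derived series reaches the trivial group
    while len(group) > 1:
        d = derived(group, n)
        if len(d) == len(group):
            return False
        group = d
    return True

S4 = set(permutations(range(4)))
S5 = set(permutations(range(5)))
print(is_soluble(S4, 4))  # True
print(is_soluble(S5, 5))  # False
```

For $S_5$ the derived series stabilises at $A_5$ (of order $60$), so $S_5$ is indeed not soluble.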
+
+The key observation in Galois theory is that solubility of polynomials is related to solubility of the Galois group.
+
+\begin{defi}[Soluble extension]
+ A finite field extension $L/K$ is soluble if there is some extension $L\subseteq E$ such that $K\subseteq E$ is Galois and $\Gal(E/K)$ is soluble.
+\end{defi}
+Note that this definition is rather like the definition of a radical extension, since we do not require the extension itself to be ``nice'', but just for there to be a further extension that is nice. In fact, we will soon see they are the same.
+
+\begin{lemma}
+ Let $L/K$ be a Galois extension. Then $L/K$ is soluble if and only if $\Gal(L/K)$ is soluble.
+\end{lemma}
This means that the whole purpose of extending to $E$ is just to make the extension Galois, and it isn't used to introduce extra solubility.
+
+\begin{proof}
+ $(\Leftarrow)$ is clear from definition.
+
 $(\Rightarrow)$ By definition, there is some extension $E$ of $L$ such that $E/K$ is Galois and $\Gal(E/K)$ is soluble. By the fundamental theorem of Galois theory, $\Gal(L/K)$ is a quotient of $\Gal(E/K)$. So by our previous lemma, $\Gal(L/K)$ is also soluble.
+\end{proof}
+
+We now come to the main result of the lecture:
+\begin{thm}
+ Let $K$ be a field with $\Char K = 0$, and $L/K$ is a radical extension. Then $L/K$ is a soluble extension.
+\end{thm}
+
+\begin{proof}
 We have already shown that if we have a radical extension $L/K$, then there is a finite extension $E/L$ such that $E/K$ is a Galois extension, and there is a sequence of cyclotomic or Kummer extensions
+ \[
+ E_0 = K \subseteq E_1 \subseteq \cdots \subseteq E_r = E.
+ \]
+ Let $G_i = \Gal(E/E_i)$. By the fundamental theorem of Galois theory, inclusion of subfields induces an inclusion of subgroups
+ \[
+ G_0 = \Gal(E/K) \geq G_1 \geq \cdots \geq G_r = \{1\}.
+ \]
+ In fact, $G_i \rhd G_{i + 1}$ because $E_i \subseteq E_{i + 1}$ are Galois (since cyclotomic and Kummer extensions are). So in fact we have
+ \[
+ G_0 = \Gal(E/K) \rhd G_1 \rhd \cdots \rhd G_r = \{1\}.
+ \]
+ Finally, note that by the fundamental theorem of Galois theory,
+ \[
+ G_i/G_{i + 1} = \Gal(E_{i + 1}/E_i).
+ \]
 We also know that the Galois groups of cyclotomic and Kummer extensions are abelian, hence soluble. Applying our previous lemma repeatedly, $\Gal(E/K)$ is soluble. So $L/K$ is a soluble extension.
+\end{proof}
+In fact, we will later show that the converse is also true. So an extension is soluble if and only if it is radical.
+
+\begin{cor}
+ Let $K$ be a field with $\Char K = 0$, and $f \in K[t]$. If $f$ can be solved by radicals, then $\Gal(L/K)$ is soluble, where $L$ is the splitting field of $f$ over $K$.
+\end{cor}
+Again, we will later show that the converse is also true. However, to construct polynomials that cannot be solved by radicals, this suffices. In fact, this corollary is all we really need.
+
+\begin{proof}
+ We have seen that $L/K$ is a Galois extension. By assumption, $L/K$ is thus a radical extension. By the theorem, $L/K$ is also a soluble extension. So $\Gal(L/K)$ is soluble.
+\end{proof}
+
+To find an $f \in \Q[t]$ which cannot be solved by radicals, it suffices to find an $f$ such that the Galois group is not soluble. We don't know many non-soluble groups so far. So in fact, we will find an $f$ such that $\Gal(L/\Q) = S_5$.
+
+To do so, we want to relate Galois groups to permutation groups.
+\begin{lemma}
 Let $K$ be a field, $f \in K[t]$ of degree $n$ with no repeated roots. Let $L$ be the splitting field of $f$ over $K$. Then $L/K$ is Galois and there exists an injective group homomorphism
+ \[
+ \Gal(L/K) \to S_n.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Let $\Root_f(L) = \{\alpha_1, \cdots, \alpha_n\}$. Let $P_{\alpha_i}$ be the minimal polynomial of $\alpha_i$ over $K$. Then $P_{\alpha_i} \mid f$ implies that $P_{\alpha_i}$ is separable and splits over $L$. So $L/K$ is Galois.
+
+ Now each $\phi \in \Gal(L/K)$ permutes the $\alpha_i$, which gives a map $\Gal(L/K) \to S_n$. It is easy to show this is an injective group homomorphism.
+\end{proof}
+Note that there is not a unique or naturally-defined injective group homomorphism to $S_n$. This homomorphism, obviously, depends on how we decide to number our roots.
+
+\begin{eg}
+ Let $f = (t^2 - 2)(t^2 - 3) \in \Q[t]$. Let $L$ be the splitting field of $f$ over $\Q$. Then the roots are
+ \[
+ \Root_f(L) = \{\sqrt{2}, - \sqrt{2}, \sqrt{3}, -\sqrt{3}\}.
+ \]
+ We label these roots as $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ in order. Now note that $L = \Q(\sqrt{2}, \sqrt{3})$, and thus $[L:\Q] = 4$. Hence $|\Gal(L/\Q)| = 4$ as well. We can let $\Gal(L/\Q) = \{\id, \varphi, \psi, \lambda\}$, where
+ \begin{align*}
+ \id(\sqrt{2}) &= \sqrt{2} & \id(\sqrt{3}) &= \sqrt{3}\\
+ \varphi(\sqrt{2}) &= -\sqrt{2} & \varphi(\sqrt{3}) &= \sqrt{3}\\
+ \psi(\sqrt{2}) &= \sqrt{2} & \psi(\sqrt{3}) &= -\sqrt{3}\\
+ \lambda(\sqrt{2}) &= -\sqrt{2} & \lambda(\sqrt{3}) &= -\sqrt{3}
+ \end{align*}
+ Then the image of $\Gal(L/\Q) \to S_4$ is given by
+ \[
+ \{e, (1\; 2), (3\; 4), (1\; 2)(3\; 4)\}.
+ \]
+\end{eg}
+
What we really want to know is whether there are polynomials for which this map is in fact an isomorphism, i.e.\ the Galois group is the full symmetric group. If so, then we can use this to produce a polynomial that is not soluble by radicals.
+
+To find this, we first note a group-theoretic fact.
+\begin{lemma}
+ Let $p$ be a prime, and $\sigma \in S_p$ have order $p$. Then $\sigma$ is a $p$-cycle.
+\end{lemma}
+
+\begin{proof}
+ By IA Groups, we can decompose $\sigma$ into a product of disjoint cycles:
+ \[
+ \sigma = \sigma_1 \cdots \sigma_r.
+ \]
+ Let $\sigma_i$ have order $m_i > 1$. Again by IA Groups, we know that
+ \[
+ p = \text{order of }\sigma = \lcm(m_1, \cdots, m_r).
+ \]
 Since $p$ is a prime number, we know that $p = m_i$ for all $i$. Hence we must have $r = 1$, since the cycles are disjoint and there are only $p$ elements. So $\sigma = \sigma_1$. Hence $\sigma$ is indeed a $p$-cycle.
+\end{proof}
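For a concrete check of the lemma at $p = 5$, we can enumerate all of $S_5$ and confirm that every element of order $5$ is a single $5$-cycle (a Python sketch; helper names our own):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations of {0, ..., n-1} as tuples
    return tuple(p[i] for i in q)

def order(p):
    # smallest k >= 1 with p^k = identity
    idt = tuple(range(len(p)))
    k, q = 1, p
    while q != idt:
        q, k = compose(q, p), k + 1
    return k

def cycle_lengths(p):
    # sorted lengths of the disjoint cycles of p
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j)
                j, c = p[j], c + 1
            lengths.append(c)
    return sorted(lengths)

# every element of order 5 in S_5 consists of a single 5-cycle
print(all(cycle_lengths(s) == [5]
          for s in permutations(range(5)) if order(s) == 5))  # True
```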
+
+We will use these to find an example where the Galois group is the symmetric group. The conditions for this to happen are slightly awkward, but the necessity of these will become apparent in the proof.
+\begin{thm}
+ Let $f \in \Q[t]$ be irreducible and $\deg f = p$ prime. Let $L\subseteq \C$ be the splitting field of $f$ over $\Q$. Let
+ \[
+ \Root_f(L) = \{\alpha_1, \alpha_2, \cdots, \alpha_{p - 2}, \alpha_{p - 1}, \alpha_p\}.
+ \]
 Suppose that $\alpha_1, \alpha_2, \cdots, \alpha_{p - 2}$ are all real numbers, but $\alpha_{p - 1}$ and $\alpha_p$ are not. In particular, $\alpha_{p - 1} = \bar{\alpha}_p$. Then the homomorphism $\beta: \Gal(L/\Q) \to S_p$ is an isomorphism.
+\end{thm}
+
+\begin{proof}
 From IA Groups, we know that the cycles $(1\; 2\; \cdots\; p)$ and $(p - 1\; p)$ generate the whole of $S_p$. So we show that these two are both in the image of $\beta$.
+
+ As $f$ is irreducible, we know that $f = P_{\alpha_1}$, the minimal polynomial of $\alpha_1$ over $\Q$. Then
+ \[
 p = \deg P_{\alpha_1} = [\Q(\alpha_1): \Q].
+ \]
 By the tower law, this divides $[L:\Q]$, which is equal to $|\Gal(L/\Q)|$ since the extension is Galois. Since $p$ divides the order of $\Gal(L/\Q)$, by Cauchy's theorem, there must be an element of $\Gal(L/\Q)$ of order $p$. This maps to an element $\sigma \in \im \beta$ of order exactly $p$. So $\sigma$ is a $p$-cycle.
+
+ On the other hand, the isomorphism $\C \to \C$ given by $z \mapsto \bar{z}$ restricted to $L$ gives an automorphism in $\Gal(L/\Q)$. This simply permutes $\alpha_{p - 1}$ and $\alpha_p$, since it fixes the real numbers and $\alpha_{p - 1}$ and $\alpha_p$ must be complex conjugate pairs. So $\tau = (p - 1\; p) \in \im \beta$.
+
 Now for every $1 \leq i < p$, we know that $\sigma^i$ again has order $p$, and hence is a $p$-cycle. In particular, we can choose $i$ such that $\sigma^i$ maps $p - 1$ to $p$. Relabelling the remaining roots $\alpha_1, \cdots, \alpha_{p - 2}$ and replacing $\sigma$ with $\sigma^i$, we can assume $\sigma = (1\; 2\; \cdots \;p - 1\; p)$. So done.
+\end{proof}
+
+\begin{eg}
 Let $f = t^5 - 4t + 2 \in \Q[t]$. Let $L$ be the splitting field of $f$ over $\Q$.
+
 First note that $\deg f = 5$ is a prime. Also, by Eisenstein's criterion (with $p = 2$), $f$ is irreducible. By finding the local maximum and minimum points, we find that $f$ has exactly three real roots. So by the theorem, there is an isomorphism $\Gal(L/\Q) \to S_5$. In other words, $\Gal(L/\Q) \cong S_5$.
+
+ We know $S_5$ is not a soluble group. So $f$ cannot be solved by radicals.
+\end{eg}
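Both facts used in the example are easy to verify by machine. The Python sketch below (our own helper names) checks the Eisenstein conditions at $p = 2$ and counts the real roots of $f = t^5 - 4t + 2$ by counting sign changes on a grid; all real roots lie in $(-3, 3)$, since $|t|^5 > 4|t| + 2$ whenever $|t| \geq 2$:

```python
f = lambda t: t ** 5 - 4 * t + 2

# Eisenstein at p = 2: 2 divides every non-leading coefficient, 2 does not
# divide the leading 1, and 4 does not divide the constant term 2
coeffs = [1, 0, 0, 0, -4, 2]
p = 2
print(all(c % p == 0 for c in coeffs[1:])
      and coeffs[0] % p != 0
      and coeffs[-1] % (p * p) != 0)  # True: f is irreducible over Q

# count sign changes of f on a fine grid to count the real roots
xs = [-3 + k / 1000 for k in range(6001)]
signs = [f(x) > 0 for x in xs]
changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print(changes)  # 3: exactly three real roots
```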
+After spending 19 lectures, we have found an example of a polynomial that cannot be solved by radicals. Finally.
+
+Note that there are, of course, many examples of $f \in \Q[t]$ irreducible of degree at least 5 that \emph{can} be solved by radicals, such as $f = t^5 - 2$.
+
+\subsection{Insolubility of general equations of degree 5 or more}
+We now want to do something more interesting. Instead of looking at a particular example, we want to say there is no general formula for solving polynomial equations of degree $5$ or above. First we want to define certain helpful notions.
+
+\begin{defi}[Field of symmetric rational functions]
+ Let $K$ be a field, $L = K(x_1, \cdots, x_n)$, the field of rational functions over $K$. Then there is an injective homomorphism $S_n \to \Aut_K(L)$ given by permutations of $x_i$.
+
+ We define the \emph{field of symmetric rational functions} $F = L^{S_n}$ to be the fixed field of $S_n$.
+\end{defi}
+
+There are a few important symmetric rational functions that we care about more.
+\begin{defi}[Elementary symmetric polynomials]
+ The \emph{elementary symmetric polynomials} are $e_1, e_2, \cdots, e_n$ defined by
+ \[
 e_i = \sum_{1 \leq \ell_1 < \ell_2 < \cdots < \ell_i \leq n} x_{\ell_1} \cdots x_{\ell_i}.
+ \]
+\end{defi}
+It is easy to see that
+\begin{align*}
+ e_1 &= x_1 + x_2 + \cdots + x_n\\
+ e_2 &= x_1x_2 + x_1x_3 + \cdots + x_{n - 1}x_n\\
+ e_n &= x_1\cdots x_n.
+\end{align*}
+Obviously, $e_1, \cdots, e_n \in F$.
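The relationship between the $e_i$ and the coefficients of $\prod (t - x_i)$, which the theorem below exploits, can be checked computationally by expanding the product at sample values (a Python sketch; names our own):

```python
from itertools import combinations
from math import prod

def e(xs, i):
    # i-th elementary symmetric polynomial evaluated at the values xs
    return sum(prod(c) for c in combinations(xs, i))

def expand(xs):
    # coefficients of prod (t - x) for x in xs, highest degree first
    coeffs = [1]
    for x in xs:
        # multiply the current polynomial by (t - x)
        coeffs = [a - x * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

xs = [2, -1, 3, 5]
print(expand(xs))                                # [1, -9, 21, 1, -30]
print([(-1) ** i * e(xs, i) for i in range(5)])  # the same list
```

The coefficient of $t^{n - i}$ is $(-1)^i e_i$, matching the polynomial $f = t^n - e_1 t^{n-1} + \cdots + (-1)^n e_n$ in the theorem.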
+
+\begin{thm}[Symmetric rational function theorem]
 Let $K$ be a field, $L = K(x_1, \cdots, x_n)$. Let $F$ be the field fixed by the automorphisms that permute the $x_i$. Then
+ \begin{enumerate}
+ \item $L$ is the splitting field of
+ \[
+ f = t^n - e_1 t^{n - 1} + \cdots + (-1)^n e_n
+ \]
+ over $F$.
 \item $F = L^{S_n} \subseteq L$ is a Galois extension with $\Gal(L/F)$ isomorphic to $S_n$.
+ \item $F = K(e_1, \cdots, e_n)$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item In $L[t]$, we have
+ \[
+ f = (t - x_1)\cdots (t - x_n).
+ \]
+ So $L$ is the splitting field of $f$ over $F$.
 \item By Artin's lemma, $L/F$ is Galois and $\Gal(L/F) \cong S_n$.
+ \item Let $E = K(e_1, \cdots, e_n)$. Clearly, $E \subseteq F$. Now $E\subseteq L$ is a Galois extension, since $L$ is the splitting field of $f$ over $E$ and $f$ has no repeated roots.
+
 By the fundamental theorem of Galois theory, since we have the Galois extensions $E\subseteq F \subseteq L$, we have $\Gal(L/F) \leq \Gal(L/E)$. So $S_n \leq \Gal(L/E)$. However, we also know that $\Gal(L/E)$ is a subgroup of $S_n$. So we must have $\Gal(L/E) = \Gal(L/F) = S_n$, and hence $E = F$.\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[General polynomial]
+ Let $K$ be a field, $u_1, \cdots, u_n$ variables. The \emph{general polynomial over $K$} of degree $n$ is
+ \[
+ f = t^n + u_1t^{n - 1} + \cdots + u_n.
+ \]
 Technically, this is a polynomial in the polynomial ring $K(u_1, \cdots, u_n)[t]$. However, we say this is the general polynomial over $K$ because we tend to think of these $u_i$ as representing actual elements of $K$.
+\end{defi}
+
+We say the general polynomial over $K$ of degree $n$ can be solved by radicals if $f$ can be solved by radicals over $K(u_1, \cdots, u_n)$.
+
+\begin{eg}
+ The general polynomial of degree $2$ over $\Q$ is
+ \[
+ t^2 + u_1 t + u_2.
+ \]
+ This can be solved by radicals because its roots are
+ \[
+ \frac{-u_1 \pm \sqrt{u_1^2 - 4u_2}}{2}.
+ \]
+\end{eg}
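Numerically, one can check that these radical expressions really are roots (a Python sketch with sample, hypothetical coefficient values; not part of the notes):

```python
import cmath

u1, u2 = 3, 7  # sample (hypothetical) rational coefficients
disc = cmath.sqrt(u1**2 - 4*u2)  # here u1^2 - 4 u2 = -19, so the roots are complex
for root in ((-u1 + disc) / 2, (-u1 - disc) / 2):
    # each root satisfies t^2 + u1 t + u2 = 0, up to floating-point error
    assert abs(root**2 + u1*root + u2) < 1e-9
```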
+
+\begin{thm}
 Let $K$ be a field with $\Char K = 0$. Then the general polynomial over $K$ of degree $n$ cannot be solved by radicals if $n \geq 5$.
+\end{thm}
+
+\begin{proof}
+ Let
+ \[
 f = t^n + u_1t^{n - 1} + \cdots + u_n
+ \]
+ be our general polynomial of degree $n \geq 5$. Let $N$ be a splitting field of $f$ over $K(u_1, \cdots, u_n)$. Let
+ \[
+ \Root_f(N) = \{\alpha_1, \cdots, \alpha_n\}.
+ \]
+ We know the roots are distinct because $f$ is irreducible and the field has characteristic $0$. So we can write
+ \[
+ f = (t - \alpha_1)\cdots(t - \alpha_n) \in N[t].
+ \]
+ We can expand this to get
+ \begin{align*}
+ u_1 &= -(\alpha_1 + \cdots + \alpha_n)\\
+ u_2 &= \alpha_1\alpha_2 + \alpha_1\alpha_3 + \cdots + \alpha_{n - 1}\alpha_n\\
+ &\quad \vdots\\
+ u_i &= (-1)^i (i\text{th elementary symmetric polynomial in }\alpha_1, \cdots, \alpha_n).
+ \end{align*}
+ Now let $x_1, \cdots, x_n$ be new variables, and $e_i$ the $i$th elementary symmetric polynomial in $x_1, \cdots, x_n$. Let $L = K(x_1, \cdots, x_n)$, and $F = K(e_1, \cdots, e_n)$. We know that $F\subseteq L$ is a Galois extension with Galois group isomorphic to $S_n$.
+
+ We define a ring homomorphism
+ \begin{align*}
+ \theta: K[u_1, \cdots, u_n] &\to K[e_1, \cdots, e_n] \subseteq K[x_1, \cdots, x_n]\\
+ u_i &\mapsto (-1)^i e_i.
+ \end{align*}
 These are just our expressions for the $u_i$ in terms of the $\alpha_i$, but with $x_i$ in place of $\alpha_i$.
+
+ We want to show that $\theta$ is an isomorphism. Note that since the homomorphism just renames $u_i$ into $e_i$, the fact that $\theta$ is an isomorphism means there are no ``hidden relations'' between the $e_i$. It is clear that $\theta$ is a surjection. So it suffices to show $\theta$ is injective. Suppose $\theta(h) = 0$. Then
+ \[
+ h(-e_1, \cdots, (-1)^n e_n) = 0.
+ \]
+ Since the $x_i$ are just arbitrary variables, we now replace $x_i$ with $\alpha_i$. So we get
+ \[
 h(-e_1(\alpha_1, \cdots, \alpha_n), \cdots, (-1)^n e_n(\alpha_1, \cdots, \alpha_n)) = 0.
+ \]
 Using our expressions for the $u_i$ in terms of the $\alpha_i$, we have
+ \[
+ h(u_1, \cdots, u_n) = 0,
+ \]
 But $h(u_1, \cdots, u_n)$ is just $h$ itself. So $h = 0$. Hence $\theta$ is injective, and so an isomorphism. This in turn gives an isomorphism of fields
+ \[
+ K(u_1, \cdots, u_n) \to K(e_1, \cdots, e_n) = F.
+ \]
 We can extend this to the polynomial rings to get an isomorphism
+ \[
+ K(u_1, \cdots, u_n)[t] \to F[t].
+ \]
+ In particular, this map sends our original $f$ to
+ \[
+ f \mapsto t^n - e_1 t^{n - 1} + \cdots + (-1)^n e_n = g.
+ \]
+ Thus, we get an isomorphism between the splitting field of $f$ over $K(u_1, \cdots, u_n)$ and the splitting field $g$ over $F$.
+
 The splitting field of $f$ over $K(u_1, \cdots, u_n)$ is just $N$ by definition. From the symmetric rational function theorem, we know that the splitting field of $g$ over $F$ is just $L$. So $N \cong L$, and we have an isomorphism
+ \[
+ \Gal(N/K(u_1, \cdots, u_n)) \to \Gal(L/F) \cong S_n.
+ \]
 Since $S_n$ is not soluble for $n \geq 5$, the extension is not soluble, and hence $f$ cannot be solved by radicals.
+\end{proof}
+
This achieves the second main goal of the course: to prove that general polynomials of degree $5$ or more are not soluble by radicals.
+
+Recall that we proved that all radical extensions are soluble. We now prove the converse.
+
+\begin{thm}
+ Let $K$ be a field with $\Char K = 0$. If $L/K$ is a soluble extension, then it is a radical extension.
+\end{thm}
+
+\begin{proof}
+ Let $L \subseteq E$ be such that $K \subseteq E$ is Galois and $\Gal(E/K)$ is soluble. We can replace $L$ with $E$, and assume that in fact $L/K$ is a soluble Galois extension. So there is a sequence of groups
+ \[
 \{1\} = G_r \lhd \cdots \lhd G_1 \lhd G_0 = \Gal(L/K)
+ \]
+ such that $G_i/G_{i + 1}$ is cyclic.
+
 By the fundamental theorem of Galois theory, we get a sequence of field extensions given by $L_i = L^{G_i}$:
+ \[
+ K = L_0 \subseteq \cdots \subseteq L_r = L.
+ \]
+ Moreover, we know that $L_i \subseteq L_{i + 1}$ is a Galois extension with Galois group $\Gal(L_{i + 1}/L_i) \cong G_i/G_{i + 1}$. So $\Gal(L_{i + 1}/L_i)$ is cyclic.
+
 Let $n = [L:K]$. Recall from a previous theorem that if $\Gal(L_{i + 1}/L_i)$ is cyclic, and $L_i$ contains a primitive $k$th root of unity (with $k = [L_{i + 1}:L_i]$), then $L_i \subseteq L_{i + 1}$ is a Kummer extension. However, we do not know if $L_i$ contains the right root of unity. Hence, the trick is to adjoin a primitive $n$th root of unity to each field in the sequence.
+
 Let $\mu$ be a primitive $n$th root of unity. Adjoining $\mu$ to each field in the sequence, we have
+ \[
+ \begin{tikzcd}
+ L_0(\mu) \ar[r, phantom, "{\subseteq}"] \ar [d, phantom, "\rotatebox{90}{$\subseteq$}"] & \cdots \ar [r, phantom, "{\subseteq}"] & L_i(\mu) \ar[r, phantom, "{\subseteq}"] \ar [d, phantom, "\rotatebox{90}{$\subseteq$}"]& L_{i + 1}(\mu) \ar[r, phantom, "{\subseteq}"] \ar [d, phantom, "\rotatebox{90}{$\subseteq$}"]& \cdots \ar [r, phantom, "{\subseteq}"] & L_r(\mu)\ar [d, phantom, "\rotatebox{90}{$\subseteq$}"]\\
+ K = L_0 \ar[r, phantom, "{\subseteq}"] & \cdots \ar [r, phantom, "{\subseteq}"] & L_i \ar[r, phantom, "{\subseteq}"] & L_{i + 1} \ar[r, phantom, "{\subseteq}"] & \cdots \ar [r, phantom, "{\subseteq}"] & L_r = L
+ \end{tikzcd}
+ \]
+ We know that $L_0\subseteq L_0(\mu)$ is a cyclotomic extension by definition. We will now show that $L_i(\mu) \subseteq L_{i + 1}(\mu)$ is a Kummer extension for all $i$. Then $L/K$ is radical since $L\subseteq L_r(\mu)$.
+
+ Before we do anything, we have to show $L_i(\mu) \subseteq L_{i + 1}(\mu)$ is a Galois extension. To show this, it suffices to show $L_i \subseteq L_{i + 1}(\mu)$ is a Galois extension.
+
 Since $L_i \subseteq L_{i + 1}$ is Galois, $L_i \subseteq L_{i + 1}$ is normal. So $L_{i + 1}$ is the splitting field of some $h$ over $L_i$. Then $L_{i + 1}(\mu)$ is just the splitting field of $(t^n - 1) h$ over $L_i$. So $L_i \subseteq L_{i + 1}(\mu)$ is normal. Also, $L_i \subseteq L_{i + 1}(\mu)$ is separable since $\Char K = \Char L_i = 0$. Hence $L_i \subseteq L_{i + 1}(\mu)$ is Galois, which implies that $L_i(\mu) \subseteq L_{i + 1}(\mu)$ is Galois.
+
+ We define a homomorphism of groups
+ \[
+ \Gal(L_{i + 1}(\mu) / L_i (\mu)) \to \Gal(L_{i + 1}/L_i)
+ \]
+ by restriction. This is well-defined because $L_{i + 1}$ is the splitting field of some $h$ over $L_i$, and hence any automorphism of $L_{i + 1}(\mu)$ must send roots of $h$ to roots of $h$, i.e.\ $L_{i + 1}$ to $L_{i + 1}$.
+
 Moreover, this homomorphism is injective. If $\phi$ restricts to $\phi|_{L_{i + 1}} = \id$, then it fixes everything in $L_{i + 1}$. Also, since it is in $\Gal(L_{i + 1}(\mu)/L_i(\mu))$, it fixes $L_i(\mu)$. In particular, it fixes $\mu$. So $\phi$ must fix the whole of $L_{i + 1}(\mu)$. So $\phi = \id$.
+
+ By injectivity, we know that $\Gal(L_{i + 1}(\mu)/L_i(\mu))$ is isomorphic to a subgroup of $\Gal(L_{i + 1}/L_i)$. Hence it is cyclic. By our previous theorem, it follows that $L_i(\mu) \subseteq L_{i + 1}(\mu)$ is a Kummer extension. So $L/K$ is radical.
+\end{proof}
+
+\begin{cor}
 Let $K$ be a field with $\Char K = 0$ and $h \in K[t]$. Let $L$ be the splitting field of $h$ over $K$. Then $h$ can be solved by radicals if and only if $\Gal(L/K)$ is soluble.
+\end{cor}
+
+\begin{proof}
+ $(\Rightarrow)$ Proved before.
+
 $(\Leftarrow)$ Since $L/K$ is a Galois extension, $L/K$ is a soluble extension. So it is a radical extension. So $h$ can be solved by radicals.
+\end{proof}
+
+\begin{cor}
+ Let $K$ be a field with $\Char K = 0$. Let $f \in K[t]$ have $\deg f \leq 4$. Then $f$ can be solved by radicals.
+\end{cor}
+
+\begin{proof}
+ Exercise.
+\end{proof}
+
Note that in the case where $K = \Q$, we have already proven this by giving explicit solutions in terms of radicals in the first lecture.
+
+\section{Computational techniques}
+In the last three lectures, we will look at some techniques that allow us to actually compute the Galois group of polynomials (i.e.\ Galois groups of their splitting fields).
+
+\subsection{Reduction mod \texorpdfstring{$p$}{p}}
+The goal of this chapter is to see what happens when we reduce a polynomial $f \in \Z[t]$ to the corresponding polynomial $\bar{f} \in \F_p[t]$.
+
+More precisely, suppose we have a polynomial $f \in \Z[t]$, and $E$ is its splitting field over $\Q$. We then reduce $f$ to $\bar{f} \in \F_p[t]$ by reducing the coefficients mod $p$, and let $\bar{E}$ be the splitting field of $\bar{f}$ over $\F_p$.
+
+The ultimate goal is to show that under mild assumptions, there is an injection
+\[
+ \Gal(\overline{E}/\F_p) \hookrightarrow \Gal(E/\Q).
+\]
+To do this, we will go through a lot of algebraic fluff to obtain an alternative characterization of the Galois group, and obtain the result as an easy corollary.
+
This section will be notationally heavy. First, in the background, we have a polynomial $f$ of degree $n$ (whose field we shall specify later). Then we will have three distinct sets of variables, namely $(x_1, \cdots, x_n)$, $(u_1, \cdots, u_n)$, plus a $t$. They will play different roles.
+\begin{itemize}
+ \item The $x_i$ will be placeholders. After establishing our definitions, we will then map each $x_i$ to $\alpha_i$, a root of our $f$.
+ \item The $u_i$ will stay as ``general coefficients'' all the time.
 \item $t$ will be the actual variable we think our polynomials are in, i.e.\ all polynomials will be polynomials in $t$, and the $u_i$ and $x_i$ will form part of the coefficients.
+\end{itemize}
+
+To begin with, let
+\begin{align*}
+ L &= \Q(x_1, \cdots, x_n)\\
+ F &= \Q(e_1, \cdots, e_n).
+\end{align*}
where the $x_i$ are variables and the $e_i$ are the elementary symmetric polynomials in $x_1, \cdots, x_n$. We have seen that $\Gal(L/F) \cong S_n$.
+
+Now let
+\begin{align*}
+ B &= \Z[x_1, \cdots, x_n]\\
+ A &= \Z[e_1, \cdots, e_n].
+\end{align*}
+It is an exercise on example sheet 4 to show that
+\[
+ B\cap F = A.\tag{$*$}
+\]
+We will for now take this for granted.
+
We now add new variables $u_1, \cdots, u_n, t$. We previously mentioned that $S_n$ can act on, say, $L[u_1, \cdots, u_n, t]$ by permuting the variables. Here there are two ways in which this can happen --- a permutation can either permute the $x_i$, or permute the $u_i$. We will have to keep this in mind.
+
+Now for each $\sigma \in S_n$, we define the linear polynomial
+\[
+ R_\sigma = t - x_{\sigma(1)} u_1 - \cdots - x_{\sigma(n)} u_n.
+\]
+For example, we have
+\[
+ R_{(1)} = t - x_1 u_1 - \cdots - x_n u_n.
+\]
As mentioned, an element $\rho \in S_n$ can act on $R_\sigma$ in two ways: it either sends $R_\sigma \mapsto R_{\rho \sigma}$ or $R_\sigma \mapsto R_{\sigma\rho^{-1}}$.
+
+It should be clear that the first action permutes the $x_i$. What the second action does is permute the $u_i$. To see this, we can consider a simple case where $n = 2$. Then the action $\rho$ acting on $R_{(1)}$ sends
+\[
 t - x_1 u_1 - x_2 u_2 \mapsto t - x_{\rho^{-1}(1)} u_1 - x_{\rho^{-1}(2)} u_2 = t - x_1 u_{\rho(1)} - x_2 u_{\rho(2)}.
+\]
+Finally, we define the following big scary polynomial:
+\[
+ R = \prod_{\sigma \in S_n} R_\sigma \in B[u_1, \cdots, u_n, t].
+\]
We see that this is fixed by any $\sigma \in S_n$ under both actions. Considering the first action and using $(*)$, we see that in fact
+\[
+ R \in A[u_1, \cdots, u_n, t].
+\]
This is since if we view $R$ as a polynomial over $B$ in the variables $u_1, \cdots, u_n, t$, then its coefficients are invariant under permuting the $x_i$. So the coefficients lie in $B \cap F = A$.
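Purely as a numerical sanity check, we can verify this invariance for a small $n$ by evaluating $R$ at sample integer values (a Python sketch, not part of the notes):

```python
from itertools import permutations

def R(xs, us, t):
    # R = prod over sigma in S_n of (t - x_{sigma(1)} u_1 - ... - x_{sigma(n)} u_n)
    total = 1
    for sigma in permutations(range(len(xs))):
        total *= t - sum(xs[sigma[i]] * us[i] for i in range(len(us)))
    return total

xs, us, t = [2, 5, 7], [1, 3, 4], 11
# the first action (permuting the x_i) fixes R ...
assert all(R(list(p), us, t) == R(xs, us, t) for p in permutations(xs))
# ... and so does the second action (permuting the u_i)
assert all(R(xs, list(p), t) == R(xs, us, t) for p in permutations(us))
```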
+
+With these definitions in place, we can focus on a concrete polynomial.
+
+Let $K$ be a field, and let
+\[
+ f = t^n + a_1 t^{n - 1} + \cdots + a_n \in K[t]
+\]
+have no repeated roots. We let $E$ be the splitting field of $f$ over $K$. Write
+\[
+ \Root_f(E) = \{\alpha_1, \cdots, \alpha_n\}.
+\]
+Note that this is the setting we had at the beginning of the chapter, but with an arbitrary field $K$ instead of $\Q$ and $\F_p$.
+
+We define a ring homomorphism $\theta: B \to E$ by $x_i \mapsto \alpha_i$. This extends to a ring homomorphism
+\[
+ \theta: B[u_1, \cdots, u_n, t] \to E[u_1, \cdots, u_n, t].
+\]
Note that the ring homomorphism $\theta$ sends $e_i \mapsto (-1)^i a_i$. So if we restrict $\theta$ to $A$, the image is the subring generated by the $a_i$. Since $a_i \in K$, we have $\theta(A) \subseteq K$. In particular, since $R \in A[u_1, \cdots, u_n, t]$, we have
+\[
+ \theta(R) \in K[u_1, \cdots, u_n, t].
+\]
+Now let $P$ be an irreducible factor of $\theta(R)$ in $K[u_1, \cdots, u_n, t]$. We want to say each such irreducible polynomial is related to the Galois group $G = \Gal(E/K)$. Since $f$ has no repeated roots, we can consider $G$ as a subgroup of $S_n$, where the elements of $G$ are just the permutations of the roots $\alpha_i$. We will then show that each irreducible polynomial corresponds to a coset of $G$.
+
+Recall that at the beginning, we said $S_n$ can act on our polynomial rings by permuting the $x_i$ or $u_i$. However, once we have mapped the $x_i$ to the $\alpha_i$ and focus on a specific field, $S_n$ as a whole can no longer act on the $\alpha_i$, since there might be non-trivial relations between the $\alpha_i$. Instead, only the subgroup $G \leq S_n$ can act on $\alpha_i$. On the other hand, $S_n$ can still act on $u_i$.
+
+Recall that $R$ is defined as a product of linear factors $R_\sigma$'s. So we can find a subset $\Lambda \subseteq S_n$ such that
+\[
+ P = \prod_{\sigma \in \Lambda} R_\sigma.
+\]
We will later see that this $\Lambda$ is just a coset of the Galois group $G$.
+
+Pick $\sigma \in \Lambda$. Then by definition of $P$,
+\[
+ R_\sigma \mid P
+\]
+in $E[u_1, \cdots, u_n, t]$. Now if $\rho \in G$, then we can let $\rho$ act on both sides by permuting the $x_i$ (i.e.\ the $\alpha_i$). This does not change $P$ because $P$ has coefficients in $K$ and the action of $G$ has to fix $K$. Hence we have
+\[
+ R_{\rho\sigma} \mid P.
+\]
More generally, if we let
\[
 H = \prod_{\rho \in G} R_{\rho \sigma} \in E[u_1, \cdots, u_n, t],
\]
then
\[
 H \mid P,
\]
since the $R_{\rho\sigma}$ are distinct, hence coprime, linear factors, each of which divides $P$.
+
+Since $H$ is also invariant under the action of $G$, we know $H \in K[u_1, \cdots, u_n, t]$. By the irreducibility of $P$, we know $H = P$. Hence, we know
+\[
+ \Lambda = G\sigma.
+\]
+We have thus proved that the irreducible factors of $\theta(R)$ in $K[u_1, \cdots, u_n, t] $ are in one-to-one correspondence with the cosets of $G$ in $S_n$. In particular, if $P$ corresponds to $G$ itself, then
+\[
+ P = \prod_{\tau \in G} R_\tau.
+\]
+In general, if $P$ corresponds to a coset $G \sigma$, we can let $\lambda \in S_n$ act on $P$ by permuting the $u_i$'s. Then this sends
+\[
+ P = \prod_{\rho \in G} R_{\rho \sigma} \mapsto Q = \prod_{\rho \in G} R_{\rho \sigma \lambda^{-1}}.
+\]
+So this corresponds to the coset $G\sigma\lambda^{-1}$. In particular, $P = Q$ if and only if $G\sigma = G \sigma \lambda^{-1}$. So we can use this to figure out what permutations preserve an irreducible factor. In particular, taking $\sigma = (1)$, we have
+\begin{thm}
+ \[
+ G = \{\lambda \in S_n : \lambda \text{ preserves the irreducible factor corresponding to }G\}.\tag{$\dagger$}
+ \]
+\end{thm}
+This is the key result of this chapter, and we will apply this as follows:
+
+\begin{thm}
 Let $f \in \Z[t]$ be monic with no repeated roots. Let $E$ be the splitting field of $f$ over $\Q$, and let $\bar{f} \in \F_p[t]$ be the polynomial obtained by reducing the coefficients of $f$ mod $p$. We assume $\bar{f}$ also has no repeated roots, and let $\bar{E}$ be the splitting field of $\bar{f}$ over $\F_p$.
+
+ Then there is an injective homomorphism
+ \[
+ \bar{G} = \Gal(\bar{E}/\F_p) \hookrightarrow G = \Gal(E/\Q).
+ \]
 Moreover, if $\bar{f}$ factors as a product of irreducibles of degrees $n_1, n_2, \cdots, n_r$, then $G$ contains an element of cycle type $(n_1, \cdots, n_r)$.
+\end{thm}
+
+\begin{proof}
+ We apply the previous theorem twice. First, we take $K = \Q$. Then
+ \[
+ \theta(R) \in \Z[u_1, \cdots, u_n, t].
+ \]
+ Let $P$ be the irreducible factor of $\theta(R)$ corresponding to the Galois group $G$. Applying Gauss' lemma, we know $P$ has integer coefficients.
+
 We apply the theorem again, this time taking $K = \F_p$, and denote the corresponding ring homomorphism by $\bar{\theta}$. Then $\bar{\theta}(R) \in \F_p[u_1, \cdots, u_n, t]$. Now let $Q$ be the irreducible factor of $\bar{\theta}(R)$ corresponding to $\bar{G}$.
+
+ Now note that $\theta(R_{(1)}) \mid P$ and $\bar{\theta}(R_{(1)}) \mid Q$, since the identity is in $G$ and $\bar{G}$. Also, note that $\bar{\theta}(R) = \overline{\theta(R)}$, where the bar again denotes reduction mod $p$. So $Q \mid \bar{P}$.
+
+ Considering the second action of $S_n$ (i.e.\ permuting the $u_i$), we can show $\bar{G} \subseteq G$, using the characterization $(\dagger)$. Details are left as an exercise.
+\end{proof}
+
This is incredibly useful for computing Galois groups, as it allows us to explicitly write down some cycles in $\Gal(E/\Q)$.
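For example, it is classical that $f = t^5 - t - 1$ has Galois group $S_5$ over $\Q$: modulo $2$ it factors as a product of irreducibles of degrees $2$ and $3$ (neither factor has a root in $\F_2$, so both are irreducible), giving an element of cycle type $(2, 3)$, while modulo $5$ it is an irreducible Artin--Schreier polynomial, giving a $5$-cycle. The cube of the first element is a transposition, and a transposition together with a $5$-cycle generate $S_5$. A small Python sketch (not part of the notes) verifying the factorization mod $2$:

```python
def polymul(a, b, p):
    # multiply polynomials given as coefficient lists (lowest degree first), mod p
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

# over F_2, t^5 - t - 1 = t^5 + t + 1 = (t^2 + t + 1)(t^3 + t^2 + 1)
lhs = polymul([1, 1, 1], [1, 0, 1, 1], 2)  # (1 + t + t^2)(1 + t^2 + t^3)
assert lhs == [1, 1, 0, 0, 0, 1]           # coefficients of 1 + t + t^5
```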
+
+\subsection{Trace, norm and discriminant}
We are going to change direction a bit and look at traces and norms. These will help us understand fields better, and let us prove some useful facts about them. They will also lead to the notion of the discriminant, which is yet another tool that can be used to compute Galois groups, amongst many other things.
+
+\begin{defi}[Trace]
+ Let $K$ be a field. If $A = [a_{ij}]$ is an $n \times n$ matrix over $K$, we define the \emph{trace} of $A$ to be
+ \[
+ \tr (A) = \sum_{i = 1}^n a_{ii},
+ \]
+ i.e.\ we take the sum of the diagonal terms.
+\end{defi}
+
+It is a well-known fact that if $B$ is an invertible $n \times n$ matrix, then
+\[
+ \tr (B^{-1} AB) = \tr (A).
+\]
Hence, given a finite-dimensional vector space $V$ over $K$ and a $K$-linear map $\sigma: V \to V$, we can define the trace of the linear map as well.
+\begin{defi}[Trace of linear map]
+ Let $V$ be a finite-dimensional vector space over $K$, and $\sigma: V \to V$ a $K$-linear map. Then we can define
+ \[
+ \tr (\sigma) = \tr(\text{any matrix representing }\sigma).
+ \]
+\end{defi}
+
+\begin{defi}[Trace of element]
 Let $K \subseteq L$ be a finite field extension, and $\alpha \in L$. Consider the $K$-linear map $\sigma: L \to L$ given by multiplication by $\alpha$, i.e.\ $\beta \mapsto \alpha\beta$. Then we define the \emph{trace} of $\alpha$ to be
+ \[
+ \tr_{L/K}(\alpha) = \tr(\sigma).
+ \]
+\end{defi}
+Similarly, we can consider the determinant, and obtain the norm.
+
+\begin{defi}[Norm of element]
+ We define the \emph{norm} of $\alpha$ to be
+ \[
+ N_{L/K}(\alpha) = \det(\sigma),
+ \]
+ where $\sigma$ is, again, the multiplication-by-$\alpha$ map.
+\end{defi}
This construction gives us two functions $\tr_{L/K}, N_{L/K}: L \to K$. It is easy to see from the definition that $\tr_{L/K}$ is additive while $N_{L/K}$ is multiplicative.
+
+\begin{eg}
 Let $L/K$ be a finite field extension, and $x \in K$. Then multiplication by $x$ is represented by the matrix $xI$, where $I$ is the identity matrix. So
+ \[
+ N_{L/K}(x) = x^{[L:K]},\quad \tr_{L/K}(x) = [L:K]x.
+ \]
+\end{eg}
+
+\begin{eg}
+ Let $K =\Q$, $L = \Q(i)$. Consider an element $a + bi \in \Q(i)$, and pick the basis $\{1, i\}$ for $\Q(i)$. Then the matrix of $a + bi$ is
+ \[
+ \begin{pmatrix}
+ a & -b\\
+ b & a
+ \end{pmatrix}.
+ \]
+ So we find that $\tr_{L/K}(a + bi) = 2a$ and $N(a + bi) = a^2 + b^2 = |a + bi|^2$.
+
+ In general, if $K = \Q$ and $L = \Q(\sqrt{-d})$ where $d > 0$ is square-free, then $N(a + b\sqrt{-d}) = a^2 + b^2 d = |a + b\sqrt{-d}|^2$. However, for other fields, the norm is not at all related to the absolute value.
+\end{eg}
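As a concrete check, the $2 \times 2$ matrix above can be written down and its trace and determinant computed directly (a Python sketch with sample values, not part of the notes):

```python
def mult_matrix(a, b):
    # matrix of multiplication by a + bi on Q(i), in the basis {1, i}:
    # (a + bi) * 1 = a + bi, and (a + bi) * i = -b + ai
    return [[a, -b],
            [b,  a]]

M = mult_matrix(3, 4)
trace = M[0][0] + M[1][1]                     # tr = 2a
norm = M[0][0] * M[1][1] - M[0][1] * M[1][0]  # det = a^2 + b^2
assert (trace, norm) == (6, 25)               # |3 + 4i|^2 = 25
```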
In general, computing norms and traces directly from the definition is not fun. It turns out we can easily find the trace and norm of $\alpha$ from the minimal polynomial of $\alpha$, just as we can find ordinary traces and determinants from the characteristic polynomial.
+
+To do so, we first prove the transitivity of trace and norm.
+\begin{lemma}
+ Let $L/F/K$ be finite field extensions. Then
+ \[
+ \tr_{L/K} = \tr_{F/K} \circ \tr_{L/F},\quad N_{L/K} = N_{F/K} \circ N_{L/F}.
+ \]
+\end{lemma}
+
+To prove this directly is not difficult, but involves some confusing notation. Purely for the sake of notational convenience, we shall prove the following more general fact:
+\begin{lemma}
 Let $F/K$ be a finite field extension, and $V$ a finite-dimensional $F$-vector space. Let $T: V \to V$ be an $F$-linear map, which is in particular a $K$-linear map. Then
+ \[
+ \det\nolimits_K T = N_{F/K}(\det\nolimits_F T),\quad \tr_K T = \tr_{F/K}(\tr_F T).
+ \]
+\end{lemma}
Taking $V$ to be $L$ and $T$ to be multiplication by $\alpha \in L$ clearly gives the original intended result.
+
+\begin{proof}
 For $\alpha \in F$, we will write $m_\alpha: F \to F$ for the multiplication-by-$\alpha$ map, viewed as a $K$-linear map.
+
 By IB Groups, Rings and Modules, there exists a basis $\{e_i\}$ such that the matrix of $T$ is in rational canonical form, i.e.\ $T$ is block diagonal with each diagonal block looking like
+ \[
+ \begin{pmatrix}
+ 0 & 0 & \cdots & 0 & a_0\\
+ 1 & 0 & \cdots & 0 & a_1\\
+ 0 & 1 & \cdots & 0 & a_2\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ 0 & 0 & \cdots & 1 & a_{r - 1}
+ \end{pmatrix}.
+ \]
+ Since the norm is multiplicative and trace is additive, and
+ \[
+ \det
+ \begin{pmatrix}
+ A & 0\\
+ 0 & B
+ \end{pmatrix} = \det A \det B,\quad
+ \tr
+ \begin{pmatrix}
+ A & 0\\
+ 0 & B
+ \end{pmatrix} = \tr A + \tr B,
+ \]
 we may wlog assume that $T$ is represented by a single block as above.
+
+ From the rational canonical form, we can read off
+ \[
+ \det\nolimits_F T = (-1)^{r - 1} a_0,\quad \tr_F T = a_{r - 1}.
+ \]
+
+ We now pick a basis $\{f_j\}$ of $F$ over $K$, and then $\{e_i f_j\}$ is a basis for $V$ over $K$. Then in this basis, the matrix of $T$ over $K$ is given by
+ \[
+ \begin{pmatrix}
+ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & m_{a_0}\\
+ \mathbf{1} & \mathbf{0} & \cdots & \mathbf{0} & m_{a_1}\\
+ \mathbf{0} & \mathbf{1} & \cdots & \mathbf{0} & m_{a_2}\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{1} & m_{a_{r - 1}}
+ \end{pmatrix}.
+ \]
+ It is clear that this has trace
+ \[
+ \tr_K (m_{a_{r - 1}}) = \tr_{F/K} (a_{r - 1}) = \tr_{F/K} (\tr_F T).
+ \]
 Moreover, writing $n = [F:K]$, we have
+ \begin{align*}
+ \det\nolimits_K
+ \begin{pmatrix}
+ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & m_{a_0}\\
+ \mathbf{1} & \mathbf{0} & \cdots & \mathbf{0} & m_{a_1}\\
+ \mathbf{0} & \mathbf{1} & \cdots & \mathbf{0} & m_{a_2}\\
+ \vdots & \vdots & \ddots & \vdots & \vdots\\
+ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{1} & m_{a_{r - 1}}
+ \end{pmatrix}
+ &= (-1)^{n(r - 1)}
+ \det\nolimits_K
+ \begin{pmatrix}
+ m_{a_0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0}\\
+ m_{a_1} &\mathbf{1} & \mathbf{0} & \cdots & \mathbf{0}\\
+ m_{a_2} &\mathbf{0} & \mathbf{1} & \cdots & \mathbf{0}\\
+ \vdots & \vdots & \vdots & \ddots & \vdots\\
+ m_{a_{r - 1}} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{1}
+ \end{pmatrix}\\
+ &= (-1)^{n(r - 1)} \det\nolimits_K(m_{a_0})\\
+ &= \det\nolimits_K ((-1)^{r - 1} m_{a_0})\\
+ &= N_{F/K}((-1)^{r - 1} a_0)\\
+ &= N_{F/K} (\det\nolimits_F T).
+ \end{align*}
+ So the result follows.
+\end{proof}
+
+As a corollary, we have the following very powerful tool for computing norms and traces.
+\begin{cor}
+ Let $L/K$ be a finite field extension, and $\alpha \in L$. Let $r = [L:K(\alpha)]$ and let $P_\alpha$ be the minimal polynomial of $\alpha$ over $K$, say
+ \[
 P_\alpha = t^n + a_{n - 1} t^{n - 1} + \cdots + a_0,
+ \]
+ with $a_i \in K$. Then
+ \[
+ \tr_{L/K}(\alpha) = - r a_{n - 1}
+ \]
+ and
+ \[
+ N_{L/K}(\alpha) = (-1)^{nr} a_0^r.
+ \]
+\end{cor}
+Note how this resembles the relation between the characteristic polynomial and trace/determinants in linear algebra.
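A quick numerical illustration (a Python sketch, not part of the notes): take $K = \Q$ and $\alpha = \sqrt{2}$, with minimal polynomial $P_\alpha = t^2 - 2$. For $L = \Q(\sqrt 2)$ we have $r = 1$, while for $L = \Q(\sqrt 2, \sqrt 3)$ we have $r = 2$ and, in a suitable basis, multiplication by $\alpha$ is block diagonal with two copies of the companion matrix:

```python
# multiplication by sqrt(2) on Q(sqrt 2), in the basis {1, sqrt(2)}:
# 1 -> sqrt(2) and sqrt(2) -> 2, giving the companion matrix of t^2 - 2
A = [[0, 2],
     [1, 0]]
n, a0, a1 = 2, -2, 0  # P_alpha = t^2 + a1 t + a0

tr_A = A[0][0] + A[1][1]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# r = 1: tr = -r a_{n-1} and N = (-1)^{nr} a_0^r
assert tr_A == -1 * a1
assert det_A == (-1)**(n * 1) * a0**1

# r = 2: the matrix is diag(A, A), so the trace doubles and the determinant squares
assert 2 * tr_A == -2 * a1
assert det_A**2 == (-1)**(n * 2) * a0**2
```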
+
+\begin{proof}
+ We first consider the case $r = 1$. Write $m_\alpha$ for the matrix representing multiplication by $\alpha$. Then $P_\alpha$ is the minimal polynomial of $m_\alpha$. But since $\deg P_\alpha = n = \dim_K K(\alpha)$, it follows that this is also the characteristic polynomial. So the result follows.
+
+ Now if $r \not= 1$, we can consider the tower of extensions $L/K(\alpha)/K$. Then we have
+ \begin{multline*}
+ N_{L/K}(\alpha) = N_{K(\alpha)/K} (N_{L/K(\alpha)}(\alpha)) = N_{K(\alpha)/K}(\alpha^r) \\
+ = (N_{K(\alpha)/K} (\alpha))^r = (-1)^{nr} a_0^r.
+ \end{multline*}
+ The computation for trace is similar.
+\end{proof}
+
+It is also instructive to prove this directly. In the case $r = 1$, we can pick the basis $\{1, \alpha, \alpha^2, \cdots, \alpha^{n - 1}\}$ of $L$ over $K$. Then the multiplication map sends
+\begin{align*}
+ 1 &\mapsto \alpha\\
+ \alpha &\mapsto \alpha^2\\
+ &\;\vdots\\
+ \alpha^{n-1} &\mapsto \alpha^n = -a_{n - 1} \alpha^{n - 1} - \cdots - a_0
+\end{align*}
+So the matrix is just
+\[
+ A = \begin{pmatrix}
+ 0 & 0 & \cdots & -a_0\\
+ 1 & 0 & \cdots & -a_1\\
+ 0 & 1 & \cdots & -a_2\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & -a_{n - 1}
+ \end{pmatrix}
+\]
+The characteristic polynomial of this matrix is
+\[
+ \det(tI - A) = \det
+ \begin{pmatrix}
+ t & 0 & \cdots & a_0\\
+ -1 & t & \cdots & a_1\\
+ 0 & -1 & \cdots & a_2\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & t + a_{n - 1}
+ \end{pmatrix}
+\]
By adding $t^{i - 1}$ times the $i$th row to the first row for each $i \geq 2$, this gives
+\[
+ \det (tI - A) = \det
+ \begin{pmatrix}
+ 0 & 0 & \cdots & P_\alpha\\
+ -1 & t & \cdots & a_1\\
+ 0 & -1 & \cdots & a_2\\
+ \vdots & \vdots & \ddots & \vdots\\
+ 0 & 0 & \cdots & t + a_{n - 1}
+ \end{pmatrix} = P_\alpha.
+\]
+Then we notice that for $r \not= 1$, in an appropriate choice of basis, the matrix looks like
+\[
 C = \begin{pmatrix}
 A & 0 & \cdots & 0\\
 0 & A & \cdots & 0\\
 \vdots & \vdots & \ddots & \vdots\\
 0 & 0 & \cdots & A
 \end{pmatrix},
\]
so that $\tr C = r \tr A = -r a_{n - 1}$ and $\det C = (\det A)^r = (-1)^{nr} a_0^r$, recovering the corollary.
+
+
+\begin{thm}
+ Let $L/K$ be a finite but not separable extension. Then $\tr_{L/K}(\alpha) = 0$ for all $\alpha \in L$.
+\end{thm}
+
+\begin{proof}
+ Pick $\beta \in L$ such that $P_\beta$, the minimal polynomial of $\beta$ over $K$, is not separable. Then by the previous characterization of separable polynomials, we know $p = \Char K > 0$ with $P_\beta = q(t^p)$ for some $q \in K[t]$.
+
+ Now consider
+ \[
+ K \subseteq K(\beta^p) \subseteq K(\beta) \subseteq L.
+ \]
+ To show $\tr_{L/K} = 0$, by the previous proposition, it suffices to show $\tr_{K(\beta)/K(\beta^p)} = 0$.
+
+ Note that the minimal polynomial of $\beta^p$ over $K$ is $q$ because $q(\beta^p) = 0$ and $q$ is irreducible. Then $[K(\beta):K] = \deg P_\beta = p \deg q$ and $[K(\beta^p):K] = \deg q$. So $[K(\beta):K(\beta^p)] = p$.
+
+ Now $\{1, \beta, \beta^2, \cdots, \beta^{p - 1}\}$ is a basis of $K(\beta)$ over $K(\beta^p)$. Let $R_{\beta^i}$ be the minimal polynomial of $\beta^i$ over $K(\beta^p)$. Then
+ \[
+ R_{\beta^i} =
+ \begin{cases}
+ t - 1 & i = 0\\
+ t^p - \beta^{ip} & i \not=0
+ \end{cases}.
+ \]
+ We get the second case using the fact that $p$ is a prime number, and hence $K(\beta^p)(\beta^i) = K(\beta)$ if $1 \leq i < p$. So $[K(\beta^p)(\beta^i):K(\beta^p)] = p$, and hence the minimal polynomial has degree $p$. Since $t^p - \beta^{ip}$ has no $t^{p - 1}$ term, we get $\tr_{K(\beta)/K(\beta^p)}(\beta^i) = 0$ for $i \not= 0$, and also $\tr_{K(\beta)/K(\beta^p)}(1) = [K(\beta):K(\beta^p)] \cdot 1 = p = 0$ in characteristic $p$. Hence $\tr_{K(\beta)/K(\beta^p)} (\beta^i) = 0$ for all $i$.
+
+ Thus, $\tr_{K(\beta)/K(\beta^p)} = 0$. Hence
+ \[
+ \tr_{L/K} = \tr_{K(\beta^p)/K} \circ \tr_{K(\beta)/K(\beta^p)} \circ \tr_{L/K(\beta)} = 0.\qedhere
+ \]
+\end{proof}
+Note that if $L/K$ is a finite extension, and $\Char K = 0$, then
+\[
+ \tr_{L/K}(1) = [L:K] \not= 0.
+\]
+So $\tr_{L/K} \not= 0$. It is in fact true that $\tr_{L/K} \not= 0$ for every finite separable extension, not just when the field has characteristic $0$.
+
+\begin{eg}
+ We want to show $\sqrt[3]{3} \not\in \Q(\sqrt[3]{2})$. Suppose not. Then we have $L = \Q(\sqrt[3]{3}) = \Q(\sqrt[3]{2})$, since both extensions of $\Q$ have degree $3$. Then there exist some $a, b, c \in \Q$ such that
+ \[
+ \sqrt[3]{3} = a + b\sqrt[3]{2} + c \sqrt[3]{2^2}.
+ \]
+ We now compute the traces over $\Q$. The minimal polynomials over $\Q$ are
+ \[
+ P_{\sqrt[3]{3}} = t^3 - 3,\quad P_{\sqrt[3]{2}} = t^3 - 2,\quad P_{\sqrt[3]{4}} = t^3 - 4.
+ \]
+ So we have
+ \[
+ \tr_{L/\Q}(\sqrt[3]{3}) = a \tr_{L/\Q}(1) + b \tr_{L/\Q}(\sqrt[3]{2}) + c\tr_{L/\Q}(\sqrt[3]{4}).
+ \]
+ Since the minimal polynomials above have no $t^2$ term, the traces of the cube roots are all zero, while $\tr_{L/\Q}(1) = 3$. So we need $a = 0$. Then we are left with
+ \[
+ \sqrt[3]{3} = b\sqrt[3]{2} + c\sqrt[3]{4}.
+ \]
+ We apply the same trick again. We multiply by $\sqrt[3]{2}$ to obtain
+ \[
+ \sqrt[3]{6} = b\sqrt[3]{4} + 2c.
+ \]
+ We note that the minimal polynomial of $\sqrt[3]{6}$ is $t^3 - 6$. Taking the trace gives
+ \[
+ \tr_{L/\Q}(\sqrt[3]{6}) = b \tr_{L/\Q}(\sqrt[3]{4}) + 6c.
+ \]
+ Again, the traces are zero. So $c = 0$. So we have
+ \[
+ \sqrt[3]{3} = b \sqrt[3]{2}.
+ \]
+ In other words,
+ \[
+ b^3 = \frac{3}{2},
+ \]
+ which is clearly nonsense. This is a contradiction. So $\sqrt[3]{3} \not\in \Q(\sqrt[3]{2})$.
+\end{eg}
+
+We can obtain another formula for the trace and norm as follows:
+\begin{thm}
+ Let $L/K$ be a finite separable extension. Pick a further extension $E/L$ such that $E/K$ is normal and
+ \[
+ |\Hom_K(L, E)| = [L:K].
+ \]
+ Write $\Hom_K(L, E) = \{\varphi_1, \cdots, \varphi_n\}$. Then
+ \[
+ \tr_{L/K} (\alpha) = \sum_{i = 1}^n \varphi_i(\alpha),\quad N_{L/K}(\alpha) = \prod_{i = 1}^n \varphi_i(\alpha)
+ \]
+ for all $\alpha \in L$.
+\end{thm}
+
+\begin{proof}
+ Let $\alpha \in L$. Let $P_\alpha$ be the minimal polynomial of $\alpha$ over $K$. Then there is a one-to-one correspondence between
+ \[
+ \Hom_K(K(\alpha), E)\longleftrightarrow \Root_{P_\alpha}(E) = \{\alpha_1, \cdots, \alpha_d\}.
+ \]
+ We wlog let $\alpha = \alpha_1$.
+
+ Also, since
+ \[
+ |\Hom_K(L, E)| = [L:K],
+ \]
+ we get
+ \[
+ |\Hom_K(K(\alpha), E)| = [K(\alpha): K] = \deg P_\alpha.
+ \]
+ Moreover, the restriction map $\Hom_K(L, E) \to \Hom_K(K(\alpha), E)$ (defined by $\varphi \mapsto \varphi|_{K(\alpha)}$) is surjective and sends exactly $[L:K(\alpha)]$ elements to any particular element in $\Hom_K(K(\alpha), E)$.
+
+ Therefore
+ \[
+ \sum \varphi_i(\alpha) = [L:K(\alpha)] \sum_{\psi \in \Hom_K(K(\alpha), E)} \psi(\alpha) = [L:K(\alpha)] \sum_{i = 1}^d \alpha_i.
+ \]
+ Moreover, we can read off that the sum of the roots of a monic polynomial is the negative of the coefficient of $t^{d - 1}$, where
+ \[
+ P_\alpha = t^d + a_{d - 1} t^{d - 1} + \cdots + a_0.
+ \]
+ So
+ \[
+ \sum \varphi_i(\alpha) = [L:K(\alpha)] (- a_{d - 1}) = \tr_{L/K}(\alpha).
+ \]
+ Similarly, we have
+ \begin{align*}
+ \prod \varphi_i(\alpha) &= \left(\prod_{\psi \in \Hom_K(K(\alpha), E)} \psi(\alpha)\right)^{[L:K(\alpha)]} \\
+ &= \left(\prod_{i = 1}^d \alpha_i\right)^{[L:K(\alpha)]}\\
+ &= ((- 1)^d a_0)^{[L:K(\alpha)]}\\
+ &= N_{L/K}(\alpha).\qedhere
+ \end{align*}
+\end{proof}
+
+\begin{cor}
+ Let $L/K$ be a finite separable extension. Then there is some $\alpha \in L$ such that $\tr_{L/K}(\alpha) \not= 0$.
+\end{cor}
+
+\begin{proof}
+ Using the notation of the previous theorem, we have
+ \[
+ \tr_{L/K}(\alpha) = \sum \varphi_i (\alpha).
+ \]
+ Similar to a previous lemma, we can show that $\varphi_1, \cdots, \varphi_n$ are ``linearly independent'' over $E$, and hence $\sum \varphi_i$ cannot be identically zero. Hence there is some $\alpha$ such that
+ \[
+ \tr_{L/K}(\alpha) = \sum \varphi_i(\alpha) \not= 0.\qedhere
+ \]
+\end{proof}
+
+\begin{eg}
+ Let $K = \F_q \subseteq L = \F_{q^n}$, where $q$ is a power of some prime number $p$. By a previous theorem on finite fields, we know $L/K$ is Galois and
+ \[
+ \Gal(L/K) \cong \Z/n\Z
+ \]
+ and is generated by the Frobenius $\varphi = \Fr_q$.
+
+ To apply the theorem, we had to pick an $E$ such that $E/K$ is normal and $|\Hom_K(L, E)| = [L:K]$. However, since $L/K$ is Galois, we can simply pick $E = L$.
+
+ Then we know
+ \begin{align*}
+ \tr_{L/K}(\alpha) &= \sum_{\psi \in \Gal(L/K)} \psi(\alpha) \\
+ &= \sum_{i = 0}^{n - 1} \varphi^i (\alpha)\\
+ &= \alpha + \alpha^q + \alpha^{q^2} + \cdots + \alpha^{q^{n - 1}}.
+ \end{align*}
+ Similarly, the norm is
+ \[
+ N_{L/K}(\alpha) = \prod_{i = 0}^{n - 1} \varphi^i (\alpha) = \alpha\cdot \alpha^q \cdot \cdots \cdot \alpha^{q^{n - 1}}.
+ \]
+\end{eg}
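These trace and norm formulas for finite fields are easy to check by direct computation. The sketch below (illustrative only, not part of the course) models $\F_9 = \F_3(i)$ with $i^2 = -1$, which works since $t^2 + 1$ is irreducible over $\F_3$, and computes $\tr_{\F_9/\F_3}(\alpha) = \alpha + \alpha^3$ and $N_{\F_9/\F_3}(\alpha) = \alpha \cdot \alpha^3$:

```python
P = 3  # F_9 = F_3[i] with i^2 = -1; elements are pairs (a, b) meaning a + b*i

def add(x, y):
    return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

def mul(x, y):
    # (a + b i)(c + d i) = (ac - bd) + (ad + bc) i, using i^2 = -1
    a, b = x
    c, d = y
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def power(x, k):
    r = (1, 0)
    for _ in range(k):
        r = mul(r, x)
    return r

def trace(x):
    # tr_{F_9/F_3}(x) = x + Fr_3(x) = x + x^3
    return add(x, power(x, 3))

def norm(x):
    # N_{F_9/F_3}(x) = x * Fr_3(x) = x^{1+3} = x^4
    return mul(x, power(x, 3))
```

For example, $\tr(1 + i) = 2$ and $N(1 + i) = 2$; and for every element of $\F_9$, both values land in the base field $\F_3$ (the $i$-coordinate vanishes), as they must.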
+
+Recall that when solving quadratic equations $f = t^2 + bt + c$, we defined the \emph{discriminant} as $b^2 - 4c$. This discriminant then determined the types of roots of $f$. In general, we can define the discriminant of a polynomial of any degree, in a scary way.
+
+\begin{defi}[Discriminant]
+ Let $K$ be a field and $f \in K[t]$, $L$ the splitting field of $f$ over $K$. So we have
+ \[
+ f = a(t - \alpha_1)\cdots(t - \alpha_n)
+ \]
+ for some $a, \alpha_1, \cdots, \alpha_n \in L$. We define
+ \[
+ \Delta_f = \prod_{i < j}(\alpha_i - \alpha_j),\quad D_f = \Delta_f^2 = (-1)^{n(n - 1)/2} \prod_{i \not= j} (\alpha_i - \alpha_j).
+ \]
+ We call $D_f$ the \emph{discriminant} of $f$.
+\end{defi}
+Clearly, $D_f \not= 0$ if and only if $f$ has no repeated roots.
+
+\begin{thm}
+ Let $K$ be a field and $f \in K[t]$, $L$ is the splitting field of $f$ over $K$. Suppose $D_f \not= 0$ and $\Char K \not= 2$. Then
+ \begin{enumerate}
+ \item $D_f \in K$.
+ \item Let $G = \Gal(L/K)$, and $\theta: G \to S_n$ be the embedding given by the permutation of the roots. Then $\im \theta \subseteq A_n$ if and only if $\Delta_f \in K$ (if and only if $D_f$ is a square in $K$).
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Any element of $\Gal(L/K)$ only permutes the roots, and hence fixes $D_f$. Since $D_f \not= 0$, the polynomial $f$ has no repeated roots, so $L/K$ is Galois, and the fixed field of $\Gal(L/K)$ is $K$. So $D_f \in K$.
+ \item Consider a permutation $\sigma \in S_n$ of the form $\sigma = (\ell \;m)$, and let it act on the roots. Then we claim that
+ \[
+ \sigma(\Delta_f) = - \Delta_f.\tag{$\dagger$}
+ \]
+ So in general, odd elements in $S_n$ negate $\Delta_f$ while even elements fix it. Thus, $\Delta_f \in K$ iff $\Delta_f$ is fixed by $\Gal(L/K)$ iff every element of $\Gal(L/K)$ is even.
+
+ To prove $(\dagger)$, we have to painstakingly check all factors in the product. We wlog assume $\ell < m$. If $k < \ell$, then $\sigma$ swaps the factors $(\alpha_k - \alpha_\ell)$ and $(\alpha_k - \alpha_m)$, which has no effect. The case $k > m$ is similar. If $\ell < k < m$, then $\sigma$ sends $(\alpha_\ell - \alpha_k) \mapsto (\alpha_m - \alpha_k) = -(\alpha_k - \alpha_m)$ and $(\alpha_k - \alpha_m) \mapsto (\alpha_k - \alpha_\ell) = -(\alpha_\ell - \alpha_k)$. This introduces two negative signs, which have no net effect. Finally, $\sigma$ sends $(\alpha_\ell - \alpha_m)$ to its negation, and so introduces a single negative sign.\qedhere
+ \end{enumerate}
+\end{proof}
+We will later use this result to compute certain Galois groups. Before that, we see how this discriminant is related to the norm.
+
+\begin{thm}
+ Let $K$ be a field, and $f \in K[t]$ be a monic irreducible polynomial of degree $n$ with no repeated roots. Let $L$ be the splitting field of $f$ over $K$, and let $\alpha \in \Root_f(L)$. Then
+ \[
+ D_f = (-1)^{n(n - 1)/2} N_{K(\alpha)/K}(f'(\alpha)).
+ \]
+\end{thm}
+
+\begin{proof}
+ Let $\Hom_K(K(\alpha), L) = \{\varphi_1, \cdots, \varphi_n\}$. Recall these are in one-to-one correspondence with $\Root_f(L) = \{\alpha_1, \cdots, \alpha_n\}$. Then we can compute
+ \[
+ \prod_{i \not= j} (\alpha_i - \alpha_j) = \prod_i \prod_{j \not= i} (\alpha_i - \alpha_j).
+ \]
+ Note that since $f$ is monic, we have
+ \[
+ f = (t - \alpha_1) \cdots (t - \alpha_n).
+ \]
+ Computing the derivative directly, we find
+ \[
+ \prod_{j \not= i} (\alpha_i - \alpha_j) = f'(\alpha_i).
+ \]
+ So we have
+ \[
+ \prod_{i \not= j} (\alpha_i - \alpha_j) = \prod_i f'(\alpha_i).
+ \]
+ Now since each $\varphi_i$ maps $\alpha$ to $\alpha_i$, we have
+ \[
+ \prod_{i \not= j} (\alpha_i - \alpha_j) = \prod_i \varphi_i (f'(\alpha)) = N_{K(\alpha)/K}(f'(\alpha)).
+ \]
+ Finally, multiplying by the factor of $(-1)^{n(n - 1)/2}$ gives the desired result.
+\end{proof}
+
+\begin{eg}
+ Let $K$ be a field with $\Char K \not = 2, 3$. Let $f \in K[t]$ have degree $3$, say
+ \[
+ f = t^3 + bt + c
+ \]
+ where we have gotten rid of the $t^2$ term as in the first lecture. We further assume $f$ is irreducible with no repeated roots, and let $L$ be the splitting field of $f$.
+
+ We want to compute the discriminant of this polynomial. Let $\alpha \in \Root_f(L)$. Then
+ \[
+ \beta = f'(\alpha) = 3 \alpha^2 + b.
+ \]
+ Then we can see
+ \[
+ \beta = -2b - \frac{3c}{\alpha}.
+ \]
+ Alternatively, we have
+ \[
+ \alpha = \frac{-3c}{\beta + 2b}.\tag{$*$}
+ \]
+ Putting $(*)$ into $\alpha^3 + b\alpha + c = 0$, we find the minimal polynomial of $\beta$ has constant term $-4 b^3 - 27 c^2$. This then gives us the norm, and we get
+ \[
+ D_f = -N_{K(\alpha)/K}(\beta) = - 4b^3 - 27c^2.
+ \]
+ This is the discriminant of a cubic.
+
+ We can take a specific example, where
+ \[
+ f = t^3 - 31 t + 62.
+ \]
+ Then $f$ is irreducible over $\Q$ (e.g.\ by Eisenstein's criterion at $31$). We can compute $D_f = -4(-31)^3 - 27(62)^2 = 15376 = 124^2$, which is a square in $\Q$. So the previous theorem says the image of the Galois group $\Gal(L/\Q)$ in $S_3$ is a subgroup of $A_3$. Since $f$ is irreducible of degree $3$, we know $3$ divides $|\Gal(L/\Q)|$. As $|A_3| = 3$, we conclude $\Gal(L/\Q) \cong A_3$.
+\end{eg}
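We can verify the cubic discriminant formula numerically: compute the three roots of $t^3 + bt + c$ (here via Cardano's formula in complex arithmetic; this sketch is not part of the course and assumes the roots are distinct) and compare $\prod_{i < j}(\alpha_i - \alpha_j)^2$ with $-4b^3 - 27c^2$:

```python
import cmath

def cubic_roots(b, c):
    # the three complex roots of t^3 + b t + c, via Cardano's formula:
    # write t = u + v with u^3 + v^3 = -c and 3uv = -b
    d = cmath.sqrt((c / 2) ** 2 + (b / 3) ** 3)
    u3 = -c / 2 + d
    if abs(u3) < 1e-12:        # degenerate branch: take the other sign
        u3 = -c / 2 - d
    u = u3 ** (1 / 3)          # principal complex cube root
    w = cmath.exp(2j * cmath.pi / 3)
    roots = []
    for k in range(3):
        uk = u * w ** k
        vk = -b / (3 * uk) if abs(uk) > 1e-12 else 0j
        roots.append(uk + vk)
    return roots

def disc_cubic(b, c):
    # D_f = prod_{i<j} (alpha_i - alpha_j)^2 for f = t^3 + b t + c
    r = cubic_roots(b, c)
    D = 1.0 + 0j
    for i in range(3):
        for j in range(i + 1, 3):
            D *= (r[i] - r[j]) ** 2
    return D
```

For $f = t^3 - 31t + 62$ this gives $D_f \approx 15376 = 124^2$, a rational square, in agreement with $-4(-31)^3 - 27 \cdot 62^2 = 15376$; for $t^3 + t + 1$ it gives $-31$, matching $-4 - 27$.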
+
+\end{document}
diff --git a/books/cam/II_M/integrable_systems.tex b/books/cam/II_M/integrable_systems.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ca7c7c9892b08f857fd2aaa39c2ed86666d72f73
--- /dev/null
+++ b/books/cam/II_M/integrable_systems.tex
@@ -0,0 +1,3355 @@
+\documentclass[a4paper]{article}
+
+\def\npart {II}
+\def\nterm {Michaelmas}
+\def\nyear {2016}
+\def\nlecturer {A.\ Ashton}
+\def\ncourse {Integrable Systems}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Part IB Methods, and Complex Methods or Complex Analysis are essential; Part II Classical Dynamics is desirable.}
+
+\vspace{10pt}
+\noindent Integrability of ordinary differential equations: Hamiltonian systems and the Arnol'd--Liouville Theorem (sketch of proof). Examples.\hspace*{\fill}[3]
+
+\vspace{5pt}
+\noindent Integrability of partial differential equations: The rich mathematical structure and the universality of the integrable nonlinear partial differential equations (Korteweg-de Vries, sine--Gordon). Backlund transformations and soliton solutions.\hspace*{\fill}[2]
+
+\vspace{5pt}
+\noindent The inverse scattering method: Lax pairs. The inverse scattering method for the KdV equation, and other integrable PDEs. Multi soliton solutions. Zero curvature representation. \hspace*{\fill}[6]
+
+\vspace{5pt}
+\noindent Hamiltonian formulation of soliton equations.\hspace*{\fill}[2]
+
+\vspace{5pt}
+\noindent Painleve equations and Lie symmetries: Symmetries of differential equations, the ODE reductions of certain integrable nonlinear PDEs, Painleve equations.\hspace*{\fill}[3]%
+}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+What is an integrable system? Unfortunately, an integrable system is something mathematicians have not yet managed to define properly. Intuitively, an integrable system is a differential equation we can ``integrate up'' directly. While in theory integrable systems should be very rare, it happens that many systems arising in nature are integrable. By exploiting the fact that they are integrable, we can solve them much more easily.
+
+\section{Integrability of ODEs}
+\subsection{Vector fields and flow maps}
+In the first section, we are going to look at the integrability of ODEs. Here we are going to consider a general $m$-dimensional first-order non-linear ODE. As always, restricting to first-order ODEs is not an actual restriction, since any higher-order ODE can be written as a system of first-order ODEs. At the end, we will be concerned with a special kind of ODE given by a \emph{Hamiltonian system}. However, in this section, we first give a quick overview of the general theory of ODEs.
+
+An $m$-dimensional ODE is specified by a \term{vector field} $\mathbf{V}: \R^m \to \R^m$ and an \term{initial condition} $\mathbf{x}_0 \in \R^m$. The objective is to find some $\mathbf{x}(t) \in \R^m$, which is a function of $t \in (a, b)$ for some interval $(a, b)$ containing $0$, satisfying
+\[
+ \dot{\mathbf{x}} = \mathbf{V}(\mathbf{x}),\quad \mathbf{x}(0) = \mathbf{x}_0.
+\]
+In this course, we will assume the vector field $\mathbf{V}$ is sufficiently ``nice'', so that the following result holds:
+\begin{fact}
+ For a ``nice'' vector field $\mathbf{V}$ and any initial condition $\mathbf{x}_0$, there is always a unique solution to $\dot{\mathbf{x}} = \mathbf{V}(\mathbf{x})$, $\mathbf{x}(0) = \mathbf{x}_0$. Moreover, this solution depends smoothly (i.e.\ infinitely differentiably) on $t$ and $\mathbf{x}_0$.
+\end{fact}
+
+It is convenient to write the solution as
+\[
+ \mathbf{x}(t) = g^t \mathbf{x}_0,
+\]
+where $g^t: \R^m \to \R^m$ is called the \emph{flow map}. Since $\mathbf{V}$ is nice, we know this is a smooth map. This flow map has some nice properties:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $g^0 = \id$
+ \item $g^{t + s} = g^t g^s$
+ \item $(g^{t})^{-1} = g^{-t}$
+ \end{enumerate}
+\end{prop}
+If one knows group theory, then this says that $g$ is a group homomorphism from $\R$ to the group of diffeomorphisms of $\R^m$, i.e.\ the group of smooth invertible maps $\R^m \to \R^m$.
+
+\begin{proof}
+ The equality $g^0 = \id$ is by definition of $g$, and the last equality follows from the first two since $t + (-t) = 0$. To see the second, we need to show that
+ \[
+ g^{t + s}\mathbf{x}_0 = g^t (g^s \mathbf{x}_0)
+ \]
+ for any $\mathbf{x}_0$. To do so, we see that both of them, as a function of $t$, are solutions to
+ \[
+ \dot{\mathbf{x}} = \mathbf{V}(\mathbf{x}),\quad \mathbf{x}(0) = g^s \mathbf{x}_0.
+ \]
+ So the result follows since solutions are unique.
+\end{proof}
+
+We say that $\mathbf{V}$ is the \term{infinitesimal generator} of the flow $g^t$. This is because we can Taylor expand.
+\[
+ \mathbf{x}(\varepsilon) = g^\varepsilon \mathbf{x}_0 = \mathbf{x}(0) + \varepsilon \dot{\mathbf{x}}(0) + o(\varepsilon) = \mathbf{x}_0 + \varepsilon \mathbf{V}(\mathbf{x}_0) + o(\varepsilon).
+\]
+Given vector fields $\mathbf{V}_1, \mathbf{V}_2$, one natural question to ask is whether their flows commute, i.e.\ if they generate $g_1^t$ and $g_2^s$, then must we have
+\[
+ g_1^t g_2^s \mathbf{x}_0 = g_2^s g_1^t \mathbf{x}_0
+\]
+for all $\mathbf{x}_0$? In general, this need not be true, so we might want to find out whether it happens to be true for particular $\mathbf{V}_1, \mathbf{V}_2$. However, it is often difficult to check this directly, because differential equations are generally hard to solve, and we would probably have huge trouble trying to find explicit expressions for $g_1$ and $g_2$.
+
+Thus, we would want to be able to consider this problem at an infinitesimal level, i.e.\ just by looking at $\mathbf{V}_1, \mathbf{V}_2$ themselves. It turns out the answer is given by the commutator:
+\begin{defi}[Commutator]\index{commutator}
+ For two vector fields $\mathbf{V}_1, \mathbf{V}_2: \R^m \to \R^m$, we define a third vector field called the \emph{commutator} by
+ \[
+ [\mathbf{V}_1, \mathbf{V}_2] = \left(\mathbf{V}_1 \cdot \frac{\partial}{\partial \mathbf{x}}\right) \mathbf{V}_2 - \left(\mathbf{V}_2 \cdot \frac{\partial}{\partial \mathbf{x}}\right) \mathbf{V}_1,
+ \]
+ where we write
+ \[
+ \frac{\partial}{\partial \mathbf{x}} = \left(\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_m}\right)^T.
+ \]
+ More explicitly, the $i$th component is given by
+ \[
+ [\mathbf{V}_1, \mathbf{V}_2]_i = \sum_{j = 1}^m (\mathbf{V}_1)_j \frac{\partial}{\partial x_j} (\mathbf{V}_2)_i - (\mathbf{V}_2)_j \frac{\partial}{\partial x_j} (\mathbf{V}_1)_i.
+ \]
+\end{defi}
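The commutator can be approximated numerically by replacing the Jacobians in this formula with central differences. The following Python sketch (illustrative only; the example fields are our own) checks it on plane vector fields: the generator of rotations commutes with the generator of dilations, but not with a translation field:

```python
def commutator(V1, V2, x, h=1e-5):
    # [V1, V2]_i = sum_j (V1_j dV2_i/dx_j - V2_j dV1_i/dx_j),
    # with the Jacobians approximated by central differences at the point x
    m = len(x)

    def jac(V):
        J = [[0.0] * m for _ in range(m)]
        for j in range(m):
            xp, xm = list(x), list(x)
            xp[j] += h
            xm[j] -= h
            Vp, Vm = V(xp), V(xm)
            for i in range(m):
                J[i][j] = (Vp[i] - Vm[i]) / (2 * h)
        return J

    J1, J2 = jac(V1), jac(V2)
    v1, v2 = V1(x), V2(x)
    return [sum(J2[i][j] * v1[j] - J1[i][j] * v2[j] for j in range(m))
            for i in range(m)]

rotate = lambda x: [x[1], -x[0]]     # generates rotations about the origin
scale = lambda x: [x[0], x[1]]       # generates dilations
translate = lambda x: [1.0, 0.0]     # generates translations in x_1
```

As expected geometrically, `commutator(rotate, scale, x)` vanishes (rotating then dilating equals dilating then rotating), while `commutator(rotate, translate, x)` is the constant field $(0, 1)$, so those two flows do not commute.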
+
+The result we have is
+\begin{prop}
+ Let $\mathbf{V}_1, \mathbf{V}_2$ be vector fields with flows $g_1^t$ and $g_2^s$. Then we have
+ \[
+ [\mathbf{V}_1, \mathbf{V}_2] = 0 \quad\Longleftrightarrow\quad g_1^t g_2^s = g_2^s g_1^t.
+ \]
+\end{prop}
+
+\begin{proof}
+ See example sheet 1.
+\end{proof}
+
+\subsection{Hamiltonian dynamics}
+From now on, we are going to restrict to a very special kind of ODE, known as a \emph{Hamiltonian system}. To write down a general ODE, the background setting is just the space $\R^n$. We then pick a vector field, and then we get an ODE. To write down a Hamiltonian system, we need more things in the background, but conversely we need to supply less information to get the system. These Hamiltonian systems are very useful in classical dynamics, but we will not go into the physical applications here.
+
+The background setting of a Hamiltonian system is a \term{phase space} $M = \R^{2n}$. Points on $M$ are described by coordinates
+\[
+ (\mathbf{q}, \mathbf{p}) = (q_1, \cdots, q_n, p_1, \cdots, p_n).
+\]
+We tend to think of the $q_i$ as the ``generalized position'' coordinates of particles, and the $p_i$ as the ``generalized momentum'' coordinates. We will often write
+\[
+ \mathbf{x} = (\mathbf{q}, \mathbf{p})^T.
+\]
+It is very important to note that here we have ``paired up'' each $q_i$ with the corresponding $p_i$. In ordinary $\R^n$, all the coordinates have the same status, but this is no longer the case here. To encode this information, we define the $2n \times 2n$ anti-symmetric matrix
+\[
+ J =
+ \begin{pmatrix}
+ 0 & I_n\\
+ -I_n & 0
+ \end{pmatrix}.
+\]
+We call this the \term{symplectic form}, and this is the extra structure we have for a phase space. We will later see that all the things we care about can be written in terms of $J$, but for practical purposes, we will often express them in terms of $\mathbf{p}$ and $\mathbf{q}$ instead.
+
+The first example is the \emph{Poisson bracket}:
+
+\begin{defi}[Poisson bracket]\index{Poisson bracket}
+ For any two functions $f, g: M \to \R$, we define the \emph{Poisson bracket} by
+ \[
+ \{f, g\} = \frac{\partial f}{\partial \mathbf{x}} J \frac{\partial g}{\partial \mathbf{x}} = \frac{\partial f}{\partial \mathbf{q}} \cdot \frac{\partial g}{\partial \mathbf{p}} - \frac{\partial f}{\partial \mathbf{p}} \cdot \frac{\partial g}{\partial \mathbf{q}}.
+ \]
+\end{defi}
+
+This has some obvious and not-so-obvious properties:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item This is linear in each argument.
+ \item This is antisymmetric, i.e.\ $\{f, g\} = - \{g, f\}$.
+ \item This satisfies the Leibniz property:
+ \[
+ \{f, gh\} = \{f, g\}h + \{f, h\} g.
+ \]
+ \item This satisfies the Jacobi identity:
+ \[
+ \{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0.
+ \]
+ \item We have
+ \[
+ \{q_i, q_j\} = \{p_i, p_j\} = 0,\quad \{q_i, p_j\} = \delta_{ij}.
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ Just write out the definitions. In particular, you will be made to write out the 24 terms of the Jacobi identity in the first example sheet.
+\end{proof}
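A quick numerical check of these properties is also possible by approximating the partial derivatives with central differences. This Python sketch (not part of the course) represents functions on phase space as callables on lists $[q_1, \cdots, q_n, p_1, \cdots, p_n]$:

```python
def grad(f, x, h=1e-5):
    # central-difference gradient of f: R^{2n} -> R at the point x
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def poisson(f, g, n):
    # {f, g} = df/dq . dg/dp - df/dp . dg/dq, returned as a new function on R^{2n}
    def bracket(x):
        gf, gg = grad(f, x), grad(g, x)
        return sum(gf[i] * gg[n + i] - gf[n + i] * gg[i] for i in range(n))
    return bracket
```

With $n = 1$ one can check, for instance, that $\{q, p\} = 1$, that the bracket is antisymmetric, and that the Jacobi identity sum vanishes (up to finite-difference error) for sample polynomial functions such as $q^2 p$, $qp$ and $p^2$.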
+
+We will be interested in problems on $M$ of the following form:
+\begin{defi}[Hamilton's equation]\index{Hamilton's equation}
+ \emph{Hamilton's equation} is an equation of the form
+ \[
+ \dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}},\quad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}\tag{$*$}
+ \]
+ for some function $H: M \to \R$ called the \term{Hamiltonian}.
+\end{defi}
+Just as we think of $\mathbf{q}$ and $\mathbf{p}$ as generalized position and momentum, we tend to think of $H$ as generalized energy.
+
+Note that given the phase space $M$, all we need to specify a Hamiltonian system is just a Hamiltonian function $H: M \to \R$, which is much less information than that needed to specify a vector field.
+
+In terms of $J$, we can write Hamilton's equation as
+\[
+ \dot{\mathbf{x}} = J \frac{\partial H}{\partial \mathbf{x}}.
+\]
+We can imagine Hamilton's equation as specifying the trajectory of a particle. In this case, we might want to ask how, say, the speed of the particle changes as it evolves. In general, suppose we have a smooth function $f: M \to \R$. We want to find the value of $\frac{\d f}{\d t}$. We simply have to apply the chain rule to obtain
+\[
+ \frac{\d f}{\d t} = \frac{\d}{\d t} f(\mathbf{x}(t)) = \frac{\partial f}{\partial \mathbf{x}} \cdot \dot{\mathbf{x}} = \frac{\partial f}{\partial \mathbf{x}} J \frac{\partial H}{\partial \mathbf{x}} = \{f, H\}.
+\]
+We record this result:
+\begin{prop}
+ Let $f: M \to \R$ be a smooth function. If $\mathbf{x}(t)$ evolves according to Hamilton's equation, then
+ \[
+ \frac{\d f}{\d t} = \{f, H\}.
+ \]
+\end{prop}
+In particular, a function $f$ is constant if and only if $\{f, H\} = 0$. This is very convenient. Without a result like this, if we want to see if $f$ is a conserved quantity of the particle (i.e.\ $\frac{\d f}{\d t} = 0$), we might have to integrate the equations of motion, and then try to find explicitly what is conserved, or perhaps mess around with the equations of motion to somehow find that $\frac{\d f}{\d t}$ vanishes. However, we now have a very systematic way of figuring out if $f$ is a conserved quantity --- we just compute $\{f, H\}$.
+
+In particular, we automatically find that the Hamiltonian is conserved:
+\[
+ \frac{\d H}{\d t} = \{H, H\} = 0.
+\]
+\begin{eg}
+ Consider a particle (of unit mass) with position $\mathbf{q} = (q_1, q_2, q_3)$ (in Cartesian coordinates) moving under the influence of a potential $U(\mathbf{q})$. By Newton's second law, we have
+ \[
+ \ddot{\mathbf{q}} = -\frac{\partial U}{\partial \mathbf{q}}.
+ \]
+ This is actually a Hamiltonian system. We define the momentum variables by
+ \[
+ p_i = \dot{q}_i,
+ \]
+ then we have
+ \[
+ \dot{\mathbf{x}} =
+ \begin{pmatrix}
+ \dot{\mathbf{q}}\\
+ \dot{\mathbf{p}}
+ \end{pmatrix}
+ =
+ \begin{pmatrix}
+ \mathbf{p}\\
+ -\frac{\partial U}{\partial \mathbf{q}}
+ \end{pmatrix}
+ = J \frac{\partial H}{\partial \mathbf{x}},
+ \]
+ with
+ \[
+ H = \frac{1}{2} \abs{\mathbf{p}}^2 + U(\mathbf{q}).
+ \]
+ This is just the usual energy! Indeed, we can compute
+ \[
+ \frac{\partial H}{\partial \mathbf{p}} = \mathbf{p},\quad \frac{\partial H}{\partial \mathbf{q}} = \frac{\partial U}{\partial \mathbf{q}}.
+ \]
+\end{eg}
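We can watch this energy conservation numerically. The sketch below (illustrative; the scheme and step size are our own choices, not from the lectures) integrates Hamilton's equations for the harmonic potential $U(q) = \frac{1}{2}q^2$ with the velocity-Verlet scheme, and checks that $H = \frac{1}{2}p^2 + U(q)$ stays essentially constant along the flow:

```python
def verlet(q, p, force, h, steps):
    # velocity-Verlet integration of q' = p, p' = force(q) (unit mass)
    for _ in range(steps):
        f0 = force(q)
        q = q + h * p + 0.5 * h * h * f0
        p = p + 0.5 * h * (f0 + force(q))
    return q, p

def energy(q, p):
    # H = p^2/2 + U(q) with the harmonic potential U(q) = q^2/2
    return 0.5 * p * p + 0.5 * q * q
```

Starting from $(q, p) = (1, 0)$, after $10^4$ steps of size $h = 0.01$ (about 16 periods), the energy is still $\frac{1}{2}$ up to an $O(h^2)$ error, reflecting $\dot{H} = \{H, H\} = 0$ for the exact flow.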
+
+\begin{defi}[Hamiltonian vector field]\index{Hamiltonian vector field}
+ Given a Hamiltonian function $H$, the \emph{Hamiltonian vector field} is given by
+ \[
+ \mathbf{V}_H = J \frac{\partial H}{\partial \mathbf{x}}.
+ \]
+\end{defi}
+
+We then see that by definition, the Hamiltonian vector field generates the Hamiltonian flow. More generally, for any $f: M \to \R$, we define
+\[
+ \mathbf{V}_f = J \frac{\partial f}{\partial \mathbf{x}},
+\]
+the Hamiltonian vector field with respect to $f$.
+
+We now have two bracket-like things we can form. Given two $f, g$, we can take the Poisson bracket to get $\{f, g\}$, and consider its Hamiltonian vector field $\mathbf{V}_{\{f, g\}}$. On the other hand, we can first get $\mathbf{V}_f$ and $\mathbf{V}_g$, and then take the commutator of the vector fields. It turns out these are not equal, but differ by a sign.
+\begin{prop}
+ We have
+ \[
+ [\mathbf{V}_f, \mathbf{V}_g] = - \mathbf{V}_{\{f, g\}}.
+ \]
+\end{prop}
+
+\begin{proof}
+ See first example sheet.
+\end{proof}
+
+\begin{defi}[First integral]\index{first integral}
+ Given a phase space $M$ with a Hamiltonian $H$, we call $f: M \to \R$ a \emph{first integral} of the Hamiltonian system if
+ \[
+ \{f, H\} = 0.
+ \]
+\end{defi}
+The reason for the term ``first integral'' is historical --- when we solve a differential equation, we integrate the equation. Every time we integrate it, we obtain a new constant. And the first constant we obtain when we integrate is known as the first integral. However, for our purposes, we can just as well think of it as a constant of motion.
+
+\begin{eg}
+ Consider the two-body problem --- the Sun is fixed at the origin, and a planet has Cartesian coordinates $\mathbf{q} = (q_1, q_2, q_3)$. The equation of motion will be
+ \[
+ \ddot{\mathbf{q}} = - \frac{\mathbf{q}}{|\mathbf{q}|^3}.
+ \]
+ This is equivalent to the Hamiltonian system $\mathbf{p} = \dot{\mathbf{q}}$, with
+ \[
+ H = \frac{1}{2} |\mathbf{p}|^2 - \frac{1}{|\mathbf{q}|}.
+ \]
+ We have an angular momentum given by
+ \[
+ \mathbf{L} = \mathbf{q} \wedge \mathbf{p}.
+ \]
+ Working with coordinates, we have
+ \[
+ L_i = \varepsilon_{ijk} q_j p_k.
+ \]
+ We then have (with implicit summation)
+ \begin{align*}
+ \{L_i, H\} &= \frac{\partial L_i}{\partial q_\ell}\frac{\partial H}{\partial p_\ell} - \frac{\partial L_i}{\partial p_\ell} \frac{\partial H}{\partial q_\ell}\\
+ &= \varepsilon_{ijk} \left(p_k \delta_{\ell j}p_\ell - \frac{q_j q_k}{|\mathbf{q}|^3}\right)\\
+ &= \varepsilon_{ijk} \left(p_k p_j - \frac{q_j q_k}{|\mathbf{q}|^3}\right)\\
+ &= 0,
+ \end{align*}
+ where we know the thing vanishes because we contracted a symmetric tensor with an antisymmetric one. So this is a first integral.
+
+ Less interestingly, we know $H$ is also a first integral. In general, some Hamiltonians have many many first integrals.
+\end{eg}
+
+Our objective of the remainder of the chapter is to show that if our Hamiltonian system has enough first integrals, then we can find a change of coordinates so that the equations of motion are ``trivial''. However, we need to impose some constraints on the integrals for this to be true. We will need to know about the following words:
+\begin{defi}[Involution]\index{involution}
+ We say that two first integrals $F, G$ are in \emph{involution} if $\{F, G\} = 0$ (so $F$ and $G$ ``\emph{Poisson commute}'').
+\end{defi}
+
+\begin{defi}[Independent first integrals]\index{independent first integrals}
+ A collection of functions $f_i: M \to \R$ is \emph{independent} if at each $\mathbf{x} \in M$, the vectors $\frac{\partial f_i}{\partial \mathbf{x}}$ for $i = 1, \cdots, n$ are linearly independent.
+\end{defi}
+
+In general we will say a system is ``integrable'' if we can find a change of coordinates so that the equations of motion become ``trivial'' and we can just integrate them up. This is a bit vague, so we will define integrability in terms of the existence of first integrals, and then we will later see that if these conditions are satisfied, then we can indeed integrate the system up:
+\begin{defi}[Integrable system]\index{integrable system}
+ A $2n$-dimensional Hamiltonian system $(M, H)$ is \emph{integrable} if there exist $n$ first integrals $\{f_i\}_{i = 1}^n$ that are independent and in involution (i.e.\ $\{f_i, f_j\} = 0$ for all $i, j$).
+\end{defi}
+The word independent is very important, or else people will cheat, e.g.\ take $H, 2H, e^H, H^2, \cdots$.
+
+\begin{eg}
+ Two-dimensional Hamiltonian systems are always integrable.
+\end{eg}
+
+\subsection{Canonical transformations}
+We now come to the main result of the chapter. We will show that we can indeed integrate up integrable systems. We are going to show that there is a clever choice of coordinates such that Hamilton's equations become ``trivial''. However, recall that the coordinates in a Hamiltonian system are not arbitrary. We have somehow ``paired up'' $q_i$ and $p_i$. So we want to only consider coordinate changes that somehow respect this pairing.
+
+There are many ways we can define what it means to ``respect'' the pairing. We will pick a simple definition --- we require that it preserves the form of Hamilton's equation.
+
+Suppose we had a general coordinate change $(\mathbf{q}, \mathbf{p}) \mapsto (\mathbf{Q}(\mathbf{q}, \mathbf{p}), \mathbf{P}(\mathbf{q}, \mathbf{p}))$.
+
+\begin{defi}[Canonical transformation]\index{canonical transformation}
+ A coordinate change $(\mathbf{q}, \mathbf{p}) \mapsto (\mathbf{Q}, \mathbf{P})$ is called \emph{canonical} if it leaves Hamilton's equations invariant, i.e.\ the equations in the original coordinates
+ \[
+ \dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}},\quad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}.
+ \]
+ are equivalent to
+ \[
+ \dot{\mathbf{Q}} = \frac{\partial \tilde{H}}{\partial \mathbf{P}},\quad \dot{\mathbf{P}} = -\frac{\partial \tilde{H}}{\partial \mathbf{Q}},
+ \]
+ where $\tilde{H}(\mathbf{Q}, \mathbf{P}) = H(\mathbf{q}, \mathbf{p})$.
+\end{defi}
+
+If we write $\mathbf{x} = (\mathbf{q}, \mathbf{p})$ and $\mathbf{y} = (\mathbf{Q}, \mathbf{P})$, then this is equivalent to asking for
+\[
+ \dot{\mathbf{x}} = J \frac{\partial H}{\partial \mathbf{x}} \quad\Longleftrightarrow\quad \dot{\mathbf{y}} = J \frac{\partial \tilde{H}}{ \partial \mathbf{y}}.
+\]
+\begin{eg}
+ If we just swap the $\mathbf{q}$ and $\mathbf{p}$ around, then the equations change by a sign. So this is not a canonical transformation.
+\end{eg}
+
+\begin{eg}
+ The simplest possible case of a canonical transformation is a linear transformation. Consider a linear change of coordinates given by
+ \[
+ \mathbf{x} \mapsto \mathbf{y}(\mathbf{x}) = A\mathbf{x}.
+ \]
+ We claim that this is canonical iff $AJA^T = J$, i.e.\ that $A$ is \emph{symplectic}\index{symplectic transformation}.
+
+ Indeed, by linearity, we have
+ \[
+ \dot{\mathbf{y}} = A\dot{\mathbf{x}} = AJ\frac{\partial H}{\partial \mathbf{x}}.
+ \]
+ Setting $\tilde{H}(\mathbf{y}) = H(\mathbf{x})$, we have
+ \[
+ \frac{\partial H}{\partial x_i} = \frac{\partial y_j}{\partial x_i} \frac{\partial \tilde{H}(\mathbf{y})}{\partial y_j} = A_{ji} \frac{\partial \tilde{H}(\mathbf{y})}{\partial y_j} = \left[A^T \frac{\partial \tilde{H}}{\partial \mathbf{y}}\right]_i.
+ \]
+ Putting this back in, we have
+ \[
+ \dot{\mathbf{y}} = AJA^T \frac{\partial\tilde{H}}{\partial \mathbf{y}}.
+ \]
+ So $\mathbf{y} \mapsto \mathbf{y}(\mathbf{x})$ is canonical iff $J = AJA^T$.
+\end{eg}
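To make the symplectic condition concrete, here is a small numerical sketch (the matrices are illustrative choices, not from the notes) checking $AJA^T = J$ for the $q$-$p$ swap and for a shear:

```python
import numpy as np

# J for a 2-dimensional phase space x = (q, p), matching the convention
# x' = J dH/dx used above.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def is_symplectic(A, tol=1e-12):
    """Check the symplectic condition A J A^T = J."""
    return np.allclose(A @ J @ A.T, J, atol=tol)

# Swapping q and p changes Hamilton's equations by a sign: not canonical.
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
# A shear; in 2D, A J A^T = det(A) J, so any determinant-1 matrix works.
shear = np.array([[1.0, 0.0], [-2.0, 1.0]])

print(is_symplectic(swap), is_symplectic(shear))  # False True
```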
+
+What about more general cases? Recall from IB Analysis II that a differentiable map is ``locally linear''. Now Hamilton's equations are purely local equations, so we might expect the following:
+\begin{prop}
+ A map $\mathbf{x} \mapsto \mathbf{y}(\mathbf{x})$ is canonical iff $D\mathbf{y}$ is \emph{symplectic}\index{symplectic map}\index{symplectomorphism}, i.e.
+ \[
+ D\mathbf{y} J (D\mathbf{y})^T = J.
+ \]
+\end{prop}
+Indeed, this follows from a simple application of the chain rule.
+
+\subsubsection*{Generating functions}
+We now discuss a useful way of producing canonical transformations, known as \term{generating functions}. In general, we can do generating functions in four different ways, but they are all very similar, so we will just do one that will be useful later on.
+
+Suppose we have a function $S: \R^{2n} \to \R$. We suggestively write its arguments as $S(\mathbf{q}, \mathbf{P})$. We now set
+\[
+ \mathbf{p} = \frac{\partial S}{\partial \mathbf{q}},\quad \mathbf{Q} = \frac{\partial S}{\partial \mathbf{P}}.
+\]
+By this equation, we mean we write down the first equation, which allows us to solve for $\mathbf{P}$ in terms of $\mathbf{q}, \mathbf{p}$. Then the second equation tells us the value of $\mathbf{Q}$ in terms of $\mathbf{q}, \mathbf{P}$, hence in terms of $\mathbf{p}, \mathbf{q}$.
+
+Usually, the way we use this is that we already have a candidate for what $\mathbf{P}$ should be. We then try to find a function $S(\mathbf{q}, \mathbf{P})$ such that the first equation holds. Then the second equation will tell us what the right choice of $\mathbf{Q}$ is.
+
+Checking that this indeed gives rise to a canonical transformation is just a very careful application of the chain rule, which we shall not go into. Instead, we look at a few examples to see it in action.
+\begin{eg}
+ Consider the generating function
+ \[
+ S(\mathbf{q}, \mathbf{P}) = \mathbf{q} \cdot \mathbf{P}.
+ \]
+ Then we have
+ \[
+ \mathbf{p} = \frac{\partial S}{\partial \mathbf{q}} = \mathbf{P},\quad \mathbf{Q} = \frac{\partial S}{\partial \mathbf{P}} = \mathbf{q}.
+ \]
+ So this generates the identity transformation $(\mathbf{Q}, \mathbf{P}) = (\mathbf{q}, \mathbf{p})$.
+\end{eg}
+
+\begin{eg}
+ In a 2-dimensional phase space, we consider the generating function
+ \[
+ S(q, P) = qP + q^2.
+ \]
+ Then we have
+ \[
+ p = \frac{\partial S}{\partial q} = P + 2q,\quad Q = \frac{\partial S}{\partial P} = q.
+ \]
+ So we have the transformation
+ \[
+ (Q, P) = (q, p - 2q).
+ \]
+ In matrix form, this is
+ \[
+ \begin{pmatrix}
+ Q\\P
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1 & 0\\
+ -2 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ q\\p
+ \end{pmatrix}.
+ \]
+ To see that this is canonical, we compute
+ \[
+ \begin{pmatrix}
+ 1 & 0\\
+ -2 & 1
+ \end{pmatrix}
+ J
+ \begin{pmatrix}
+ 1 & 0\\
+ -2 & 1
+ \end{pmatrix}^T =
+ \begin{pmatrix}
+ 1 & 0\\
+ -2 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 0 & 1\\
+ -1 & 0
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & -2\\
+ 0 & 1
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 0 & 1\\
+ -1 & 0
+ \end{pmatrix}
+ \]
+ So this is indeed a canonical transformation.
+\end{eg}
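The two-step recipe $\mathbf{p} = \partial S/\partial \mathbf{q}$, then $\mathbf{Q} = \partial S/\partial \mathbf{P}$ can also be carried out symbolically. A minimal sketch (assuming the sympy library) for the generating function of this example:

```python
import sympy as sp

# The generating function S(q, P) = qP + q^2 from the example above.
q, p, P = sp.symbols('q p P')
S = q*P + q**2

# First equation: p = dS/dq = P + 2q; solve it for P in terms of (q, p).
P_of_qp = sp.solve(sp.Eq(p, sp.diff(S, q)), P)[0]
# Second equation: Q = dS/dP, a function of (q, P); here simply q.
Q_of_qP = sp.diff(S, P)

print(P_of_qp, Q_of_qP)  # P = p - 2*q, Q = q
```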
+
+\subsection{The Arnold-Liouville theorem}
+We now get to the Arnold-Liouville theorem. This theorem says that if a Hamiltonian system is integrable, then we can find a canonical transformation $(\mathbf{q}, \mathbf{p}) \mapsto (\mathbf{Q}, \mathbf{P})$ such that $\tilde{H}$ depends only on $\mathbf{P}$. If this happened, then Hamilton's equations reduce to
+\[
+ \dot{\mathbf{Q}} = \frac{\partial \tilde{H}}{\partial \mathbf{P}},\quad \dot{\mathbf{P}} = -\frac{\partial \tilde{H}}{\partial \mathbf{Q}} = 0,
+\]
+which is pretty easy to solve. We find that $\mathbf{P}(t) = \mathbf{P}_0$ is a constant, and since the right hand side of the first equation depends only on $\mathbf{P}$, we find that $\dot{\mathbf{Q}}$ is also constant! So $\mathbf{Q} = \mathbf{Q}_0 + \Omega t$, where
+\[
+ \Omega = \frac{\partial \tilde{H}}{\partial \mathbf{P}} (\mathbf{P}_0).
+\]
+So the solution just falls out very easily.
+
+Before we prove the Arnold-Liouville theorem in full generality, we first see what the canonical transformation looks like in a very particular case. Here we will just have to write down the canonical transformation and see that it works, but we will later find that the Arnold-Liouville theorem gives us a general method to find the transformation.
+
+\begin{eg}
+ Consider the harmonic oscillator with Hamiltonian
+ \[
+ H(q, p) = \frac{1}{2}p^2 + \frac{1}{2}\omega^2 q^2.
+ \]
+  Since this is a 2-dimensional system, we only need a single first integral. Since $H$ is a first integral for trivial reasons, this is an integrable Hamiltonian system.
+
+ We can actually draw the lines on which $H$ is constant --- they are just ellipses:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0) node [right] {$q$};
+      \draw [->] (0, -2) -- (0, 2) node [above] {$p$};
+
+ \foreach \x in {0.4, 0.8, 1.2} {
+ \begin{scope}[scale=\x]
+ \draw ellipse (1.5 and 1);
+ \end{scope}
+ }
+ \end{tikzpicture}
+ \end{center}
+ We note that the ellipses are each homeomorphic to $S^1$. Now we introduce the coordinate transformation $(q, p) \mapsto (\phi, I)$, defined by
+ \[
+    q = \sqrt{\frac{2I}{\omega}} \sin \phi,\quad p = \sqrt{2I\omega} \cos \phi.
+ \]
+ For the purpose of this example, we can suppose we obtained this formula through divine inspiration. However, in the Arnold-Liouville theorem, we will provide a general way of coming up with these formulas.
+
+  We can manually show that this transformation is canonical, but it is merely a computation and we will not waste time doing that. In these new coordinates, the Hamiltonian looks like
+ \[
+ \tilde{H}(\phi, I) = H(q(\phi, I), p(\phi, I)) = \omega I.
+ \]
+ This is really nice. There is no $\phi$! Now Hamilton's equations become
+ \[
+ \dot\phi = \frac{\partial \tilde{H}}{ \partial I} = \omega,\quad \dot{I} = -\frac{\partial \tilde{H}}{\partial \phi} = 0.
+ \]
+ We can integrate up to obtain
+ \[
+ \phi(t) = \phi_0 + \omega t,\quad I(t) = I_0.
+ \]
+ For some unexplainable reason, we decide it is fun to consider the integral along paths of constant $H$:
+ \begin{align*}
+ \frac{1}{2\pi}\oint p \;\d q &= \frac{1}{2\pi} \int_0^{2\pi}p(\phi, I) \left(\frac{\partial q}{\partial \phi} \;\d \phi + \frac{\partial q}{\partial I} \;\d I\right)\\
+ &= \frac{1}{2\pi} \int_0^{2\pi}p(\phi, I) \left(\frac{\partial q}{\partial \phi} \;\d \phi\right)\\
+ &= \frac{1}{2\pi} \int_0^{2\pi} \sqrt{\frac{2I}{\omega}}\sqrt{2I\omega} \cos^2 \phi \;\d \phi\\
+    &= I.
+ \end{align*}
+  This is interesting. We could always have performed the integral $\frac{1}{2\pi} \oint p \;\d q$ along paths of constant $H$ without knowing anything about $I$ and $\phi$, and this would have magically given us the new coordinate $I$.
+\end{eg}
+
+There are two things to take away from this.
+\begin{enumerate}
+  \item The motion takes place in $S^1$.
+ \item We got $I$ by performing $\frac{1}{2\pi}\oint p \;\d q$.
+\end{enumerate}
+These two ideas are essentially what we are going to prove for a general Hamiltonian system.
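Both ingredients are easy to test numerically for the harmonic oscillator example. A small sketch (the values of $\omega$ and $I$ are arbitrary illustrative choices):

```python
import math

# Assumed example values (not from the notes): omega = 2, I = 0.7.
omega, I = 2.0, 0.7

def q_of(phi): return math.sqrt(2*I/omega) * math.sin(phi)
def p_of(phi): return math.sqrt(2*I*omega) * math.cos(phi)
def H(q, p):   return 0.5*p**2 + 0.5*omega**2*q**2

# H(q(phi, I), p(phi, I)) = omega * I at every phi.
assert all(abs(H(q_of(s), p_of(s)) - omega*I) < 1e-12
           for s in [0.1*k for k in range(63)])

# (1/2pi) \oint p dq = (1/2pi) \int_0^{2pi} p (dq/dphi) dphi, by a Riemann
# sum over one full period (dq/dphi = sqrt(2I/omega) cos(phi)).
N = 100_000
h = 2*math.pi/N
action = sum(p_of(k*h) * math.sqrt(2*I/omega)*math.cos(k*h) * h
             for k in range(N)) / (2*math.pi)
print(abs(action - I) < 1e-6)  # True: the loop integral recovers I
```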
+
+\begin{thm}[Arnold-Liouville theorem]\index{Arnold-Liouville theorem}
+ We let $(M, H)$ be an integrable $2n$-dimensional Hamiltonian system with independent, involutive first integrals $f_1, \cdots, f_n$, where $f_1 = H$. For any fixed $\mathbf{c} \in \R^n$, we set
+ \[
+ M_\mathbf{c} = \{(\mathbf{q}, \mathbf{p}) \in M: f_i(\mathbf{q}, \mathbf{p}) = c_i, i =1 , \cdots, n\}.
+ \]
+ Then
+ \begin{enumerate}
+ \item $M_\mathbf{c}$ is a smooth $n$-dimensional surface in $M$. If $M_\mathbf{c}$ is compact and connected, then it is diffeomorphic to
+ \[
+ T^n = S^1 \times \cdots \times S^1.
+ \]
+    \item If $M_\mathbf{c}$ is compact and connected, then locally, there exist canonical coordinate transformations $(\mathbf{q}, \mathbf{p}) \mapsto (\boldsymbol\phi, \mathbf{I})$ called the \term{action-angle coordinates} such that the angles $\{\phi_k\}_{k = 1}^n$ are coordinates on $M_\mathbf{c}$, the actions $\{I_k\}_{k = 1}^n$ are first integrals, and $\tilde{H}(\boldsymbol\phi, \mathbf{I})$ does not depend on $\boldsymbol\phi$. In particular, Hamilton's equations become
+ \[
+ \dot{\mathbf{I}} = 0,\quad \dot{\boldsymbol\phi} = \frac{\partial \tilde{H}}{\partial \mathbf{I}} = \text{constant}.
+ \]
+ \end{enumerate}
+\end{thm}
+
+Some parts of the proof will refer to certain results from rather pure courses, which the applied people may be willing to just take on faith.
+\begin{proof}[Proof sketch]
+ The first part is pure differential geometry. To show that $M_\mathbf{c}$ is smooth and $n$-dimensional, we apply the preimage theorem you may or may not have learnt from IID Differential Geometry (which is in turn an easy consequence of the inverse function theorem from IB Analysis II). The key that makes this work is that the constraints are independent, which is the condition that allows the preimage theorem to apply.
+
+ We next show that $M_\mathbf{c}$ is diffeomorphic to the torus if it is compact and connected. Consider the Hamiltonian vector fields defined by
+ \[
+ \mathbf{V}_{f_i} = J \frac{\partial f_i}{\partial \mathbf{x}}.
+ \]
+ We claim that these are \emph{tangent} to the surface $M_\mathbf{c}$. By differential geometry, it suffices to show that the derivative of the $\{f_j\}$ in the direction of $\mathbf{V}_{f_i}$ vanishes. We can compute
+ \[
+ \left(\mathbf{V}_{f_i} \cdot \frac{\partial}{\partial \mathbf{x}}\right)f_j = \frac{\partial f_j}{\partial \mathbf{x}} J \frac{\partial f_i}{\partial \mathbf{x}} = \{f_j, f_i\} = 0.
+ \]
+ Since this vanishes, we know that $\mathbf{V}_{f_i}$ is a tangent to the surface. Again by differential geometry, the flow maps $\{g_i\}$ must map $M_\mathbf{c}$ to itself. Also, we know that the flow maps commute. Indeed, this follows from the fact that
+ \[
+ [\mathbf{V}_{f_i}, \mathbf{V}_{f_j}] = -\mathbf{V}_{\{f_i, f_j\}} = -\mathbf{V}_{0} = 0.
+ \]
+ So we have a whole bunch of commuting flow maps from $M_\mathbf{c}$ to itself. We set
+ \[
+ g^\mathbf{t} = g_1^{t_1} g_2^{t_2} \cdots g_n^{t_n},
+ \]
+ where $\mathbf{t} \in \R^n$. Then because of commutativity, we have
+ \[
+ g^{\mathbf{t}_1 + \mathbf{t}_2} = g^{\mathbf{t}_1}g^{\mathbf{t}_2}.
+ \]
+  So this gives a group action of $\R^n$ on the surface $M_\mathbf{c}$. We fix $\mathbf{x} \in M_\mathbf{c}$. We define
+ \[
+ \stab(\mathbf{x}) = \{\mathbf{t} \in \R^n: g^\mathbf{t}\mathbf{x} = \mathbf{x}\}.
+ \]
+ We introduce the map
+ \[
+ \phi: \frac{\R^n}{\stab(\mathbf{x})} \to M_\mathbf{c}
+ \]
+  given by $\phi(\mathbf{t}) = g^{\mathbf{t}}\mathbf{x}$. By the orbit-stabilizer theorem, this gives a bijection between $\R^n/\stab(\mathbf{x})$ and the orbit of $\mathbf{x}$. It can be shown that the orbit of $\mathbf{x}$ is exactly the connected component of $\mathbf{x}$. Now if $M_\mathbf{c}$ is connected, then the orbit must be the whole of $M_\mathbf{c}$! By general differential geometry theory, we get that this map is indeed a diffeomorphism.
+
+ We know that $\stab(\mathbf{x})$ is a subgroup of $\R^n$, and if the $g_i$ are non-trivial, it can be seen (at least intuitively) that this is discrete. Thus, it must be isomorphic to something of the form $\Z^k$ with $1 \leq k \leq n$.
+
+ So we have
+ \[
+ M_\mathbf{c} \cong \R^n / \stab(\mathbf{x}) \cong \R^n/\Z^k \cong \R^k/\Z^k \times \R^{n - k} \cong T^k \times \R^{n - k}.
+ \]
+ Now if $M_\mathbf{c}$ is compact, we must have $n - k = 0$, i.e.\ $n = k$, so that we have no factors of $\R$. So $M_\mathbf{c} \cong T^n$.
+
+ \separator
+
+ With all the differential geometry out of the way, we can now construct the action-angle coordinates.
+
+ For simplicity of presentation, we only do it in the case when $n = 2$. The proof for higher dimensions is entirely analogous, except that we need to use a higher-dimensional analogue of Green's theorem, which we do not currently have.
+
+  We note that it is easy to re-parameterize the phase space with coordinates $(\mathbf{Q}, \mathbf{P})$ such that $\mathbf{P}$ is constant within the Hamiltonian flow, and each coordinate of $\mathbf{Q}$ takes values in $S^1$. Indeed, we just put $\mathbf{P} = \mathbf{c}$ and use the diffeomorphism $T^n \cong M_\mathbf{c}$ to parameterize each $M_\mathbf{c}$ as a product of $n$ copies of $S^1$. However, this is not good enough, because such an arbitrary transformation will almost certainly not be canonical. So we shall try to find a more natural and in fact canonical way of parametrizing our phase space.
+
+  We first work on the generalized momentum part. We want to replace $\mathbf{c}$ with something nicer. We will do something analogous to what we did for the simple harmonic oscillator.
+
+  So we fix a $\mathbf{c}$, and try to come up with some numbers $\mathbf{I}$ that label this $M_\mathbf{c}$. Recall that our surface $M_\mathbf{c}$ looks like a torus:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0,0) ellipse (2 and 1.12);
+ \path[rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0) (-.9,0)--(0,-.56)--(.9,0);
+ \draw[rounded corners=28pt] (-1.1,.1)--(0,-.6)--(1.1,.1);
+ \draw[rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0);
+ \end{tikzpicture}
+ \end{center}
+ Up to continuous deformation of loops, we see that there are two non-trivial ``single'' loops in the torus, given by the red and blue loops:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0,0) ellipse (2 and 1.12);
+ \draw [mred] (0,0) ellipse (1.5 and 0.6);
+
+ \draw [mblue] (0, -0.71) ellipse (0.1 and 0.41);
+ \draw [rounded corners=28pt] (-1.1,.1)--(0,-.6)--(1.1,.1);
+ \draw [rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0);
+ \end{tikzpicture}
+ \end{center}
+  More generally, for an $n$-torus, we have $n$ such distinct loops $\Gamma_1, \cdots, \Gamma_n$. More concretely, after identifying $M_\mathbf{c}$ with $T^n$, these are the loops given by
+  \[
+    \{0\} \times \cdots \times \{0\} \times S^1 \times \{0\} \times \cdots \times \{0\} \subseteq T^n.
+ \]
+ We now attempt to define:
+ \[
+    I_j = \frac{1}{2\pi} \oint_{\Gamma_j} \mathbf{p}\cdot \d \mathbf{q}.
+ \]
+ This is just like the formula we had for the simple harmonic oscillator.
+
+ We want to make sure this is well-defined --- recall that $\Gamma_i$ actually represents a \emph{class} of loops identified under continuous deformation. What if we picked a different loop?
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0,0) ellipse (2 and 1.12);
+ \draw [mblue] (0.5, -1.09) arc (270:450:0.1 and 0.43);
+ \draw [mblue, dashed] (0.5, -0.23) arc (90:270:0.1 and 0.43);
+ \draw [mblue, dashed] (-0.5, -1.09) arc (270:450:0.1 and 0.43);
+ \draw [mblue] (-0.5, -0.23) arc (90:270:0.1 and 0.43);
+
+ \node [mblue, right] at (0.6, -0.66) {$\Gamma_2'$};
+ \node [mblue, left] at (-0.6, -0.66) {$\Gamma_2$};
+ \draw [rounded corners=28pt] (-1.1,.1)--(0,-.6)--(1.1,.1);
+ \draw [rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0);
+ \end{tikzpicture}
+ \end{center}
+ On $M_\mathbf{c}$, we have the equation
+ \[
+    f_i(\mathbf{q}, \mathbf{p}) = c_i.
+ \]
+ We will have to assume that we can invert this equation for $\mathbf{p}$ locally, i.e.\ we can write
+ \[
+ \mathbf{p} = \mathbf{p}(\mathbf{q}, \mathbf{c}).
+ \]
+ The condition for being able to do so is just
+ \[
+ \det\left(\frac{\partial f_i}{\partial p_j}\right) \not= 0,
+ \]
+ which is not hard.
+
+ Then by definition, the following holds identically:
+ \[
+ f_i(\mathbf{q}, \mathbf{p}(\mathbf{q}, \mathbf{c})) = c_i.
+ \]
+  We can then differentiate this with respect to $q_k$ to obtain
+ \[
+ \frac{\partial f_i}{\partial q_k} + \frac{\partial f_i}{\partial p_\ell} \frac{\partial p_\ell}{\partial q_k} = 0
+ \]
+ on $M_\mathbf{c}$. Now recall that the $\{f_i\}$'s are in involution. So on $M_\mathbf{c}$, we have
+ \begin{align*}
+ 0 &= \{f_i, f_j\} \\
+ &= \frac{\partial f_i}{\partial q_k} \frac{\partial f_j}{\partial p_k} - \frac{\partial f_i}{\partial p_k} \frac{\partial f_j}{\partial q_k}\\
+ &= \left(-\frac{\partial f_i}{\partial p_\ell} \frac{\partial p_\ell}{\partial q_k}\right)\frac{\partial f_j}{\partial p_k} - \frac{\partial f_i}{\partial p_k}\left(-\frac{\partial f_j}{\partial p_\ell} \frac{\partial p_\ell}{\partial q_k}\right)\\
+ &= \left(-\frac{\partial f_i}{\partial p_k} \frac{\partial p_k}{\partial q_\ell}\right)\frac{\partial f_j}{\partial p_\ell} - \frac{\partial f_i}{\partial p_k}\left(-\frac{\partial f_j}{\partial p_\ell} \frac{\partial p_\ell}{\partial q_k}\right)\\
+ &= \frac{\partial f_i}{\partial p_k} \left(\frac{\partial p_\ell}{\partial q_k} - \frac{\partial p_k}{\partial q_\ell}\right) \frac{\partial f_j}{\partial p_\ell}.
+ \end{align*}
+ Recall that the determinants of the matrices $\frac{\partial f_i}{\partial p_k}$ and $\frac{\partial f_j}{\partial p_\ell}$ are non-zero, i.e.\ the matrices are invertible. So for this to hold, the middle matrix must vanish! So we have
+ \[
+ \frac{\partial p_\ell}{\partial q_k} - \frac{\partial p_k}{\partial q_\ell} = 0.
+ \]
+ In our particular case of $n = 2$, since $\ell, k$ can only be $1, 2$, the only non-trivial thing this says is
+ \[
+ \frac{\partial p_1}{\partial q_2} - \frac{\partial p_2}{\partial q_1} = 0.
+ \]
+ Now suppose we have two ``simple'' loops $\Gamma_2$ and $\Gamma_2'$. Then they bound an area $A$:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0,0) ellipse (2 and 1.12);
+ \draw [mblue] (0.5, -1.09) arc (270:450:0.1 and 0.43);
+ \draw [mblue, dashed] (0.5, -0.23) arc (90:270:0.1 and 0.43);
+ \draw [mblue, dashed] (-0.5, -1.09) arc (270:450:0.1 and 0.43);
+ \draw [mblue] (-0.5, -0.23) arc (90:270:0.1 and 0.43);
+
+ \node [mblue, right] at (0.6, -0.66) {$\Gamma_2'$};
+ \node [mblue, left] at (-0.6, -0.66) {$\Gamma_2$};
+ \draw [rounded corners=28pt] (-1.1,.1)--(0,-.6)--(1.1,.1);
+ \draw [rounded corners=24pt] (-.9,0)--(0,.6)--(.9,0);
+
+ \fill [morange, opacity=0.3] (0.5, -1.09) arc (270:450:0.1 and 0.43) to [out=190, in=-10] (-0.5, -0.23) arc (90:270:0.1 and 0.43) to [out=-7, in=187] (0.5, -1.09);
+
+ \node [morange!50!black] at (0, -0.71) {$A$};
+ \end{tikzpicture}
+ \end{center}
+ Then we have
+ \begin{align*}
+ \left(\oint_{\Gamma_2} - \oint_{\Gamma_2'}\right) \mathbf{p}\cdot \d \mathbf{q} &= \oint_{\partial A}\mathbf{p}\cdot \d \mathbf{q}\\
+ &= \iint_A \left(\frac{\partial p_2}{\partial q_1} - \frac{\partial p_1}{\partial q_2}\right) \;\d q_1\;\d q_2\\
+ &= 0
+ \end{align*}
+ by Green's theorem.
+
+ So $I_j$ is well-defined, and
+ \[
+ \mathbf{I} = \mathbf{I}(\mathbf{c})
+ \]
+  is just a function of $\mathbf{c}$. These will be our new ``momentum'' coordinates. To figure out what the angles $\boldsymbol\phi$ should be, we use generating functions. For now, we assume that we can invert $\mathbf{I}(\mathbf{c})$, so that we can write
+ \[
+ \mathbf{c} = \mathbf{c}(\mathbf{I}).
+ \]
+ We arbitrarily pick a point $\mathbf{x}_0$, and define the generating function
+ \[
+ S(\mathbf{q}, \mathbf{I}) = \int_{\mathbf{x}_0}^\mathbf{x} \mathbf{p}(\mathbf{q}', \mathbf{c}(\mathbf{I})) \cdot \d \mathbf{q}',
+ \]
+ where $\mathbf{x} = (\mathbf{q}, \mathbf{p}) = (\mathbf{q}, \mathbf{p}(\mathbf{q}, \mathbf{c}(\mathbf{I})))$. However, this is not \emph{a priori} well-defined, because we haven't said how we are going to integrate from $\mathbf{x}_0$ to $\mathbf{x}$. We are going to pick paths arbitrarily, but we want to make sure it is well-defined. Suppose we change from a path $\gamma_1$ to $\gamma_2$ by a little bit, and they enclose a surface $B$.
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+ \node [left] {$\mathbf{x}_0$};
+ \node at (2, 2) [circ] {};
+ \node at (2, 2) [right] {$\mathbf{x}$};
+ \node at (2, 1) {$\gamma_2$};
+ \draw [->-=0.6] plot [smooth, tension=1] coordinates {(0, 0) (0.8, 1.7) (2, 2)};
+ \node [above] at (0.8, 1.7) {$\gamma_1$};
+ \draw [->-=0.6] plot [smooth, tension=1] coordinates {(0, 0) (1.2, -0.1) (2, 2)};
+ \node at (1, 0.9) {$B$};
+ \end{tikzpicture}
+ \end{center}
+ Then we have
+ \[
+ S(\mathbf{q}, \mathbf{I}) \mapsto S(\mathbf{q}, \mathbf{I}) + \oint_{\partial B} \mathbf{p} \cdot \d \mathbf{q}.
+ \]
+ Again, we are integrating $\mathbf{p} \cdot \d\mathbf{q}$ around a boundary, so there is no change.
+
+ However, we don't live in flat space. We live in a torus, and we can have a crazy loop that does something like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-3, 1) to [out=5, in=175] (3, 1);
+ \draw (-3, -1) to [out=5, in=175] (3, -1);
+
+ \node (i) [circ] at (-1, 0) {};
+ \node (f) [circ] at (1, 0.2) {};
+
+ \draw [mblue, ->-=0.5] (i) to [out=10, in=150] (f);
+
+ \draw [mred] (i) to [out=30, in=180] (0, 1.15);
+
+ \draw [mred, dashed, ->-=0.7] (0, 1.15) arc(90:-90:0.1 and 1);
+
+ \draw [mred] (0, -0.85) to [out=180, in=180] (f);
+ \end{tikzpicture}
+ \end{center}
+  Then what we have effectively done is add a loop (say) $\Gamma_2$ to our path, and this adds a contribution of $2\pi I_2$. In general, these transformations give changes of the form
+ \[
+ S(\mathbf{q}, \mathbf{I}) \mapsto S(\mathbf{q}, \mathbf{I}) + 2\pi I_j.
+ \]
+  This is the only thing that can happen. So differentiating with respect to $\mathbf{I}$, we know that
+ \[
+ \boldsymbol\phi = \frac{\partial S}{\partial \mathbf{I}}
+ \]
+  is well-defined modulo $2\pi$. These are the \emph{angle coordinates}. Note that just like angles, we can pick $\boldsymbol\phi$ consistently locally without this ambiguity, as long as we stay near some fixed point, but when we want to talk about the whole surface, this ambiguity necessarily arises. Now also note that
+ \[
+ \frac{\partial S}{\partial \mathbf{q}} = \mathbf{p}.
+ \]
+ Indeed, we can write
+ \[
+ S = \int_{\mathbf{x}_0}^\mathbf{x} \mathbf{F} \cdot \d \mathbf{x}',
+ \]
+ where
+ \[
+ \mathbf{F} = (\mathbf{p}, 0).
+ \]
+ So by the fundamental theorem of calculus, we have
+ \[
+ \frac{\partial S}{\partial \mathbf{x}} = \mathbf{F}.
+ \]
+ So we get that
+ \[
+ \frac{\partial S}{\partial \mathbf{q}} = \mathbf{p}.
+ \]
+  In summary, we have constructed on $M_\mathbf{c}$ the following: $\mathbf{I}= \mathbf{I}(\mathbf{c})$, $S(\mathbf{q}, \mathbf{I})$, and
+ \[
+ \boldsymbol\phi = \frac{\partial S}{\partial \mathbf{I}},\quad \mathbf{p} = \frac{\partial S}{\partial \mathbf{q}}.
+ \]
+ So $S$ is a generator for the canonical transformation, and $(\mathbf{q}, \mathbf{p}) \mapsto (\boldsymbol\phi, \mathbf{I})$ is a canonical transformation.
+
+  Note that at any point $\mathbf{x}$, we know $\mathbf{c} = \mathbf{f}(\mathbf{x})$. So $\mathbf{I}(\mathbf{c}) = \mathbf{I}(\mathbf{f})$ depends on the first integrals only. So we have
+ \[
+ \dot{\mathbf{I}} = 0.
+ \]
+ So Hamilton's equations become
+ \[
+ \dot{\boldsymbol\phi} = \frac{\partial \tilde{H}}{\partial \mathbf{I}},\quad \dot{\mathbf{I}} = 0 = \frac{\partial \tilde{H}}{\partial \boldsymbol\phi}.
+ \]
+ So the new Hamiltonian depends only on $\mathbf{I}$. So we can integrate up and get
+ \[
+ \boldsymbol\phi(t) = \boldsymbol\phi_0 + \Omega t,\quad \mathbf{I}(t) = \mathbf{I}_0,
+ \]
+ where
+ \[
+ \Omega = \frac{\partial\tilde{H}}{\partial \mathbf{I}}(\mathbf{I}_0).\qedhere
+ \]
+\end{proof}
+To summarize, to integrate up an integrable Hamiltonian system, we identify the different cycles $\Gamma_1, \cdots, \Gamma_n$ on $M_\mathbf{c}$. We then construct
+\[
+ I_j = \frac{1}{2\pi} \oint_{\Gamma_j} \mathbf{p}\cdot \d \mathbf{q},
+\]
+where $\mathbf{p} = \mathbf{p}(\mathbf{q}, \mathbf{c})$. We then invert this to say
+\[
+ \mathbf{c} = \mathbf{c}(\mathbf{I}).
+\]
+We then compute
+\[
+ \boldsymbol\phi = \frac{\partial S}{\partial \mathbf{I}},
+\]
+where
+\[
+ S = \int_{\mathbf{x}_0}^{\mathbf{x}} \mathbf{p}(\mathbf{q}', \mathbf{c}(\mathbf{I})) \cdot \d \mathbf{q}'.
+\]
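This recipe is also easy to run numerically. A sketch for the harmonic oscillator (with illustrative values of $\omega$ and $c$; the notes' closed form is $I = c/\omega$):

```python
import math

# Assumed example values (not from the notes): omega = 1.5, c = 2.0.
omega, c = 1.5, 2.0

def p_of(q):
    # Upper branch of p(q, c) on M_c, where p^2/2 + omega^2 q^2/2 = c.
    return math.sqrt(max(2*c - omega**2 * q**2, 0.0))

# I(c) = (1/2pi) \oint p dq = (1/pi) \int_{-q_max}^{q_max} p(q, c) dq,
# since the loop traverses the upper and lower branches of the ellipse.
q_max = math.sqrt(2*c)/omega
N = 200_000
h = 2*q_max/N
I = sum(p_of(-q_max + (k + 0.5)*h) for k in range(N)) * h / math.pi

print(abs(I - c/omega) < 1e-4)  # True: matches the closed form I = c/omega
```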
+Now we do this again with the harmonic oscillator.
+\begin{eg}
+ In the harmonic oscillator, we have
+ \[
+ H(q, p) = \frac{1}{2}p^2 + \frac{1}{2}\omega^2 q^2.
+ \]
+ We then have
+ \[
+ M_\mathbf{c} = \left\{(q, p): \frac{1}{2} p^2 + \frac{1}{2}\omega^2 q^2 = c\right\}.
+ \]
+ The first part of the Arnold-Liouville theorem says this is diffeomorphic to $T^1 = S^1$, which it is! The next step is to pick a loop, and there is an obvious one --- the circle itself. We write
+ \[
+ p = p(q, c) = \pm \sqrt{2c - \omega^2 q^2}
+ \]
+ on $M_\mathbf{c}$. Then we have
+ \[
+    I = \frac{1}{2\pi} \oint p \;\d q = \frac{c}{\omega}.
+ \]
+ We can then write $c$ as a function of $I$ by
+ \[
+ c = c(I) = \omega I.
+ \]
+ Now construct
+ \[
+ S(q, I) = \int_{x_0}^{x} p(q', c(I))\;\d q'.
+ \]
+  We can pick $x_0$ to be the point corresponding to $\phi = 0$. Then this is equal to
+ \[
+ \int_0^q \sqrt{2\omega I - \omega^2 q'^2} \;\d q'.
+ \]
+ To find $\phi$, we need to differentiate this thing to get
+ \[
+    \phi = \frac{\partial S}{\partial I} = \omega\int_0^q \frac{\d q'}{\sqrt{2 \omega I - \omega^2 q'^2}} = \sin^{-1}\left(\sqrt{\frac{\omega}{2I}}q\right).
+ \]
+  As expected, this is only well-defined up to multiples of $2\pi$! Using the fact that $c = H$, we have
+ \[
+    q = \sqrt{\frac{2I}{\omega}} \sin \phi,\quad p = \sqrt{2I\omega} \cos \phi.
+ \]
+ These are exactly the coordinates we obtained through divine inspiration last time.
+\end{eg}
+\section{Partial Differential Equations}
+For the remainder of the course, we are going to look at PDE's. We can view these as infinite-dimensional analogues of ODE's. So what do we expect for integrable PDE's? Recall that if a $2n$-dimensional ODE is integrable, then we have $n$ first integrals. Since PDE's are infinite-dimensional, and half of infinity is still infinity, we would expect to have infinitely many first integrals. Similar to the case of integrable ODE's, we would also expect that there will be some magic transformation that allows us to write down the solution with ease, even if the initial problem looks very complicated.
+
+These are all true, but our journey will be less straightforward. To begin with, we will not define what integrability means, because it is a rather complicated issue. We will go through one method of ``integrating up'' a PDE in detail, known as the inverse scattering transform, and we will apply it to a particular equation. Unfortunately, the way we apply the inverse scattering transform to a PDE is not obvious, and here we will have to do it through ``divine inspiration''.
+
+Before we get to the inverse scattering transform, we first look at a few examples of PDEs.
+
+\subsection{KdV equation}
+The \term{KdV equation} is given by
+\[
+ u_t + u_{xxx} - 6 u u_x = 0.
+\]
+Before we study the KdV equation, we will look at some variations of this where we drop some terms, and then see how they compare.
+
+\begin{eg}
+ Consider the linear PDE
+ \[
+ u_t + u_{xxx} = 0,
+ \]
+  where $u = u(x, t)$ is a function of two variables. This admits solutions of the form
+ \[
+ e^{ikx - i\omega t},
+ \]
+ known as \term{plane wave modes}. For this to be a solution, $\omega$ must obey the \term{dispersion relation}
+ \[
+ \omega = \omega(k) = -k^3.
+ \]
+ For \emph{any} $k$, as long as we pick $\omega$ this way, we obtain a solution. By writing the solution as
+ \[
+ u(x, t) = \exp\left(ik\left(x - \frac{\omega(k)}{k}t\right)\right),
+ \]
+ we see that plane wave modes travel at speed
+ \[
+ \frac{\omega}{k} = -k^2.
+ \]
+ It is very important that the speed depends on $k$. Different plane wave modes travel at different speeds. This is going to give rise to what we call \term{dispersion}.
+
+ A general solution is a superposition of plane wave modes
+ \[
+ \sum_k a(k) e^{ikx - i \omega(k) t},
+ \]
+ or even an uncountable superposition
+ \[
+ \int_k A(k) e^{ikx - i\omega(k)t}\;\d k.
+ \]
+ It is a theorem that for linear PDE's on convex domains, all solutions are indeed superpositions of plane wave modes. So this is indeed completely general.
+
+ So suppose we have an initial solution that looks like this:
+ \begin{center}
+ \begin{tikzpicture}[yscale=1.5]
+ \draw [domain=-2:2,samples=50, mblue, thick] plot (\x, {exp(-3 * \x * \x)});
+ \end{tikzpicture}
+ \end{center}
+ We write this as a superposition of plane wave modes. As we let time pass, different plane wave modes travel at different speeds, so this becomes a huge mess! So after some time, it might look like
+ \begin{center}
+ \begin{tikzpicture}[yscale=1.5]
+ \draw [domain=0:2,samples=50, mblue, thick] plot (\x, {exp(-3 * \x * \x)});
+ \draw [domain=-3:0,samples=50, mblue, thick] plot [smooth] (\x, {exp(-\x * \x) *(1 - 0.5 * sin(400*\x*\x) * sin(400*\x*\x))});
+ \end{tikzpicture}
+ \end{center}
+  Intuitively, what gives us the dispersion is the third order derivative $\partial^3_x$. If we had $\partial_x$ instead, then there would be no dispersion.
+\end{eg}
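The dispersion relation can be verified mechanically: substituting the plane wave mode into $u_t + u_{xxx} = 0$ leaves a residual proportional to $\omega + k^3$. A short symbolic sketch (assuming sympy):

```python
import sympy as sp

# Plane wave mode u = exp(i(kx - omega t)).
x, t, k, w = sp.symbols('x t k omega')
u = sp.exp(sp.I*(k*x - w*t))

# u_t + u_xxx = -i(omega + k^3) u, which vanishes iff omega = -k^3.
residual = sp.diff(u, t) + sp.diff(u, x, 3)
print(sp.simplify(residual.subs(w, -k**3)))  # 0
```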
+
+\begin{eg}
+ Consider the non-linear PDE
+ \[
+ u_t - 6 uu_x = 0.
+ \]
+ This looks almost intractable, as non-linear PDE's are scary, and we don't know what to do. However, it turns out that we can solve this for any initial data $u(x, 0) = f(x)$ via the method of characteristics. Details are left on the second example sheet, but the solution we get is
+ \[
+ u(x, t) = f(\xi),
+ \]
+ where $\xi$ is given implicitly by
+ \[
+    \xi = x - 6t f(\xi).
+ \]
+ We can show that $u_x$ becomes, in general, infinite in finite time. Indeed, we have
+ \[
+ u_x = f'(\xi) \frac{\partial \xi}{\partial x}.
+ \]
+ We differentiate the formula for $\xi$ to obtain
+ \[
+ \frac{\partial \xi}{\partial x} = 1 - 6tf'(\xi) \frac{\partial \xi}{\partial x}
+ \]
+  So we know $\frac{\partial \xi}{\partial x}$ becomes infinite when $1 + 6t f'(\xi) = 0$. In general, this happens in finite time, and at that time, the solution develops a vertical slope. \emph{After} that, it becomes a multi-valued function! So the solution might evolve like this:
+ \begin{center}
+ \begin{tikzpicture}[xscale=0.9]
+ \draw [domain=-1.5:1.5,samples=50, thick, mblue] plot (\x, {1.5 * exp(-3 * \x * \x)});
+
+ \draw [->] (2, 0.75) -- (3, 0.75);
+
+ \begin{scope}[shift={(5, 0)}]
+ \draw [mblue, thick] (-1.5, 0) -- (-1.3, 0) to [out=0, in=90] (0.5, 1.3) -- (0.5, 0.3) to [out=270, in=180] (1, 0) -- (1.5, 0);
+ \draw [->] (2, 0.75) -- (3, 0.75);
+ \end{scope}
+
+ \begin{scope}[shift={(10, 0)}]
+ \draw [mblue, thick] (-1.5, 0) -- (-1.3, 0) to [out=0, in=90, looseness=0.7] (0.7, 1.3) to [out=270, in=90] (0.3, 0.5) to [out=270, in=180] (1, 0) -- (1.5, 0);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ This is known as \term{wave-breaking}.
+
+ We can imagine that $-6uu_x$ gives us wave breaking.
+\end{eg}
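Before wave-breaking occurs, the implicit solution above is easy to evaluate numerically. A sketch (with an assumed Gaussian initial profile $f(x) = e^{-x^2}$, chosen only for illustration), using Newton's method to solve $\xi = x - 6tf(\xi)$:

```python
import math

# Assumed initial profile (not from the notes): f(x) = exp(-x^2).
def f(x):  return math.exp(-x**2)
def fp(x): return -2*x*math.exp(-x**2)

def u(x, t, tol=1e-12):
    """Evaluate u(x, t) = f(xi), with xi solving xi = x - 6 t f(xi)."""
    xi = x                            # at t = 0 we have xi = x exactly
    for _ in range(100):
        g = xi - x + 6*t*f(xi)        # we seek the root g(xi) = 0
        step = g / (1 + 6*t*fp(xi))   # denominator = 0 is wave-breaking
        xi -= step
        if abs(step) < tol:
            break
    return f(xi)

# Before breaking, u satisfies the implicit relation u = f(x - 6 t u).
v = u(0.5, 0.1)
print(abs(v - f(0.5 - 6*0.1*v)) < 1e-9)  # True
```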
+
+What happens if we combine both of these effects?
+\begin{defi}[KdV equation]\index{KdV equation}
+ The \emph{KdV equation} is given by
+ \[
+ u_t + u_{xxx} - 6 u u_x = 0.
+ \]
+\end{defi}
+It turns out that this has a perfect balance between dispersion and non-linearity. This admits very special solutions known as \term{solitons}. For example, a $1$-soliton solution is
+\[
+ u(x, t) = -2 \chi_1^2 \sech^2\left(\chi_1 (x - 4 \chi_1^2 t)\right).
+\]
+\begin{center}
+ \begin{tikzpicture}[xscale=0.8]
+ \draw [domain=-5.5:5.5,samples=200, thick, mblue] plot (\x, {1.5* (cosh (\x))^(-2)});
+ \end{tikzpicture}
+\end{center}
+The solution tries to both topple over and disperse, and it turns out the two effects balance, so that it moves like a normal wave at a constant speed. If we look at the solution, we see that it has the peculiar property that the speed of the wave depends on the amplitude --- the taller you are, the faster you move.
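One can check directly that this profile solves the KdV equation. A short sketch (assuming sympy; we spot-check the residual numerically at a few points rather than relying on symbolic simplification):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
chi = sp.Symbol('chi', positive=True)

# The 1-soliton profile and the KdV residual u_t + u_xxx - 6 u u_x.
u = -2*chi**2 * sp.sech(chi*(x - 4*chi**2*t))**2
residual = sp.diff(u, t) + sp.diff(u, x, 3) - 6*u*sp.diff(u, x)

# Spot-check the residual numerically at a few (chi, x, t) values.
pts = [(1, 0.3, 0.2), (2, -1.1, 0.5), (0.7, 2.0, -0.4)]
ok = all(abs(residual.subs({chi: c, x: a, t: b}).evalf()) < 1e-10
         for c, a, b in pts)
print(ok)  # True
```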
+
+Now what if we started with \emph{two} of these solitons? If we placed them far apart, then they should not interact, and they would just individually move to the right. But note that the speed depends on the amplitude. So if we put a taller one before a shorter one, they might catch up with each other and then collide! Indeed, suppose they started off looking like this:
+\pgfplotsset{compat=1.12}
+\pgfplotsset{width=\textwidth, height=0.4\textwidth, axis lines=none}
+\begin{center}
+ \centering
+ \begin{tikzpicture}
+ \begin{axis}[restrict x to domain=-20:0, ymax=0.8]
+ \addplot [thick, mblue] table [x={x}, y={0}] {solitons.csv};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+After a while, the tall one starts to catch up:
+\begin{center}
+ \centering
+ \begin{tikzpicture}
+ \begin{axis}[restrict x to domain=-15:5, ymax=0.8]
+ \addplot [thick, mblue] table [x={x}, y={1}] {solitons.csv};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+Note that both of the humps are moving to the right. It's just that we had to move the frame so that everything stays on the page. Soon, they collide into each other:
+\begin{center}
+ \centering
+ \begin{tikzpicture}
+ \begin{axis}[restrict x to domain=-11:9, ymax=0.8]
+ \addplot [thick, mblue] table [x={x}, y={2}] {solitons.csv};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+and then they start to merge:
+\begin{center}
+ \centering
+ \begin{tikzpicture}
+ \begin{axis}[restrict x to domain=-10:10, ymax=0.8]
+ \addplot [thick, mblue] table [x={x}, y={3}] {solitons.csv};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+What do we expect to happen? The KdV equation is a very complicated non-linear equation, so we might expect a lot of interactions, and the result to be a huge mess. But nope. They pass through each other as if nothing has happened:
+\begin{center}
+ \centering
+ \begin{tikzpicture}
+ \begin{axis}[restrict x to domain=-9:11, ymax=0.8]
+ \addplot [thick, mblue] table [x={x}, y={4}] {solitons.csv};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+and then they just walk away
+\begin{center}
+ \centering
+ \begin{tikzpicture}
+ \begin{axis}[restrict x to domain=-5:15, ymax=0.8]
+ \addplot [thick, mblue] table [x={x}, y={5}] {solitons.csv};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+and then they depart.
+\begin{center}
+ \centering
+ \begin{tikzpicture}
+ \begin{axis}[restrict x to domain=0:20, ymax=0.8]
+ \addplot [thick, mblue] table [x={x}, y={6}] {solitons.csv};
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+This is like magic! If we just looked at the equation, there is no way we could have guessed that these two solitons would interact in such an uneventful manner. Non-linear PDEs in general are messy. But these are very stable structures in the system, and they behave more like particles than waves.
+
+This phenomenon was first discovered through numerical simulation. However, later we will see that the KdV equation is integrable, and we can in fact find explicit expressions for a general $N$-soliton solution.
+
+\subsection{Sine--Gordon equation}
+We next look at another equation that again has soliton solutions, known as the \emph{sine--Gordon equation}.
+\begin{defi}[Sine--Gordon equation]\index{sine--Gordon equation}
+ The \emph{sine--Gordon equation} is given by
+ \[
+ u_{tt} - u_{xx} + \sin u = 0.
+ \]
+\end{defi}
+This is known as the sine--Gordon equation, because there is a famous equation in physics known as the \emph{Klein--Gordon equation}, given by
+\[
+ u_{tt} - u_{xx} + u = 0.
+\]
+Since we have a sine instead of a $u$, we call it the sine-Gordon equation!
+
+There are a few ways we can motivate the sine-Gordon equation. We will use one from physics. Suppose we have a chain of pendulums of length $\ell$ with masses $m$:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [thick] (0, 0) -- (5, 0);
+ \foreach \x in {0,1,2,3,4,5} {
+ \draw (\x, 0) -- (\x, -1) node [circ] {} node [below] {$m$};
+ }
+ \draw [latex'-latex'] (0, 0.3) -- (1, 0.3) node [pos=0.5, fill=white] {\scriptsize$\Delta x$};
+
+ \draw [latex'-latex'] (-0.5, 0) -- (-0.5, -1) node [pos=0.5, fill=white] {$\ell$};
+ \end{tikzpicture}
+\end{center}
+The pendulums are allowed to rotate in the vertical plane, i.e.\ the plane with normal along the horizontal line, and we specify the angle of the $i$th pendulum by $\theta_i(t)$. Since we want to eventually take the limit as $\Delta x \to 0$, we imagine $\theta$ is a function of both space and time, and write $\theta_i(t) = \theta(i \Delta x, t)$.
+
+Since gravity exists, each pendulum has a torque of
+\[
+ -m\ell g \sin \theta_i.
+\]
+We now introduce an interaction between the different pendulums. We imagine the masses are connected by some springs, so that the $i$th pendulum receives torques of
+\[
+ \frac{K(\theta_{i + 1} - \theta_i)}{\Delta x},\quad \frac{K(\theta_{i - 1} - \theta_i)}{\Delta x}.
+\]
+By Newton's laws, the equation of motion is
+\[
+ m\ell^2 \frac{\d^2 \theta_i}{\d t^2} = -mg \ell \sin \theta_i + \frac{K(\theta_{i + 1} - 2 \theta_i + \theta_{i - 1})}{\Delta x}.
+\]
+We divide everything by $\Delta x$, and take the limit as $\Delta x \to 0$, with $M = \frac{m}{\Delta x}$ held constant. We then end up with
+\[
+ M \ell^2 \frac{\partial^2 \theta}{\partial t^2} = - Mg\ell \sin \theta + K \frac{\partial^2 \theta}{\partial x^2}.
+\]
+Making some simple coordinate scaling, this becomes
+\[
+ u_{tt} - u_{xx} + \sin u = 0.
+\]
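+Explicitly, one such scaling is to set $u(x', t') = \theta(x, t)$ with
+\[
+ t' = \sqrt{\frac{g}{\ell}}\, t,\quad x' = \sqrt{\frac{Mg\ell}{K}}\, x.
+\]
+Dividing the equation of motion through by $Mg\ell$, each of the three terms then has coefficient $1$, and dropping the primes gives the equation above.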
+There is also another motivation for this from differential geometry. It turns out solutions to the sine-Gordon equation correspond to pseudospherical surfaces in $\R^3$, namely the surfaces that have constant negative curvature.
+
+If we pick so-called ``\term{light cone coordinates}'' $\xi = \frac{1}{2}(x - t)$ and $\tau = \frac{1}{2}(x + t)$, then the sine-Gordon equations become
+\[
+ \frac{\partial^2 u}{\partial \xi \partial\tau} = \sin u,
+\]
+and often this is the form of the sine-Gordon equations we will encounter.
+
+This also admits soliton solutions
+\[
+ u(x, t) = 4 \tan^{-1} \left(\exp\left(\frac{x - vt}{\sqrt{1 - v^2}}\right)\right).
+\]
+We can check that this is indeed a solution for this non-linear PDE.
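+A quick way to do this check is with a computer algebra system; here is a \texttt{sympy} sketch (illustrative only; the speed $v = 1/2$ and the sample points are arbitrary choices):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
v = sp.Rational(1, 2)  # an arbitrary fixed speed with |v| < 1

# the claimed soliton (kink) solution
u = 4 * sp.atan(sp.exp((x - v * t) / sp.sqrt(1 - v**2)))

# residual of u_tt - u_xx + sin(u) = 0
residual = sp.diff(u, t, 2) - sp.diff(u, x, 2) + sp.sin(u)

# spot-check numerically at an arbitrary point (should be ~0)
val = residual.subs({x: 0.7, t: -0.3}).evalf()
```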
+
+This solution looks like
+\begin{center}
+ \begin{tikzpicture}[xscale=0.5]
+ \draw [->] (-8, 0) -- (8, 0);
+ \draw [->] (-4, -0.5) -- (-4, 3);
+ \draw [domain=-8:8, thick, blue] plot [smooth] (\x, {atan (exp (\x)) / 45});
+
+ \draw [dashed] (-8, 2) -- +(16, 0);
+ \node [anchor = north east] at (-4, 2) {$2\pi$};
+ \end{tikzpicture}
+\end{center}
+Now remember that $\theta$ was an angle. So $2\pi$ is just the same as $0$! If we think of the value of $u$ as living in the circle $S^1$, then this satisfies the boundary condition $u \to 0$ as $x \to \pm \infty$:
+\begin{center}
+ \begin{tikzpicture}[xscale=0.5]
+ \draw (-8, 1) -- (8, 1);
+ \draw (-8, 0) -- (8, 0);
+ \draw [domain=-8:0, thick, blue] plot [smooth] (\x, {exp(-(\x)^2) + (1 - exp(-((\x)^2))) * atan (exp (\x)) / 45});
+ \draw [dashed, domain=8:0, thick, blue] plot [smooth] (\x, {exp(-(\x)^2) + (1 - exp(-((\x)^2))) * atan (exp (-\x)) / 45});
+
+ \draw (-8, 0.5) ellipse (0.1 and 0.5);
+
+ \draw (8, 1) arc (90:-90:0.1 and 0.5);
+ \draw [dashed] (8, 1) arc (90:270:0.1 and 0.5);
+ \end{tikzpicture}
+\end{center}
+If we view it this way, it is absolutely obvious that no matter how this solution evolves in time, it will never become, or even approach the ``trivial'' solution $u = 0$, even though both satisfy the boundary condition $u \to 0$ as $x \to \pm \infty$.
+
+\subsection{\texorpdfstring{B\"acklund}{Backlund} transformations}
+For a linear partial differential equation, we have the principle of superposition --- if we have two solutions, then we can add them to get a third solution. This is no longer true in non-linear PDE's.
+
+One way we can find ourselves a new solution is through a \emph{B\"acklund transformation}. This originally came from geometry, where we wanted to transform a surface to another, but we will only consider the applications to PDE's.
+
+The actual definition of the B\"acklund transformation is complicated. So we start with an example.
+\begin{eg}
+ Consider the Cauchy-Riemann equation
+ \[
+ u_x = v_y,\quad u_y = -v_x.
+ \]
+ We know that if the pair $u, v$ satisfies the Cauchy-Riemann equations, then both $u$ and $v$ are harmonic, i.e.\ $u_{xx} + u_{yy} = 0$ and $v_{xx} + v_{yy} = 0$.
+
+ Now suppose we have managed to find a harmonic function $v = v(x, y)$. Then we can try to solve the Cauchy-Riemann equations, and we would get \emph{another} harmonic function $u = u(x, y)$.
+
+ For example, if $v = 2xy$, then we get the partial differential equations
+ \[
+ u_x = 2x,\quad u_y = -2y.
+ \]
+ So we obtain
+ \[
+ u(x, y) = x^2 - y^2 + C
+ \]
+ for some constant $C$, and this function $u$ is guaranteed to be a solution to Laplace's equation.
+
+ So the Cauchy-Riemann equations generate new solutions to Laplace's equation from old ones. This is an example of an (auto-)B\"acklund transformation for Laplace's equation.
+\end{eg}
+
+In general, we have the following definition:
+\begin{defi}[B\"acklund transformation]\index{B\"acklund transformation}
+ A \emph{B\"acklund transformation} is a system of equations that relate the solutions of some PDE's to
+ \begin{enumerate}
+ \item A solution to some other PDE; or
+ \item Another solution to the same PDE.
+ \end{enumerate}
+ In the second case, we call it an \term{auto-B\"acklund transformation}.
+\end{defi}
+
+\begin{eg}
+ The equation $u_{xt} = e^u$ is related to the equation $v_{xt} = 0$ via the B\"acklund transformation
+ \[
+ u_x + v_x = \sqrt{2} \exp \left(\frac{u - v}{2}\right),\quad u_t - v_t = \sqrt{2} \exp\left(\frac{u + v}{2}\right).
+ \]
+ The verification is left as an exercise on the first example sheet. Since $v_{xt} = 0$ is an easier equation to solve, this gives us a method to solve $u_{xt} = e^u$.
+\end{eg}
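+
+In fact, the key computation is short: differentiating the first relation with respect to $t$ and the second with respect to $x$, and substituting the relations back in, we get
+\begin{align*}
+ u_{xt} + v_{xt} &= \sqrt{2} \exp \left(\frac{u - v}{2}\right) \cdot \frac{u_t - v_t}{2} = e^u,\\
+ u_{xt} - v_{xt} &= \sqrt{2} \exp \left(\frac{u + v}{2}\right) \cdot \frac{u_x + v_x}{2} = e^u.
+\end{align*}
+Adding and subtracting these gives $u_{xt} = e^u$ and $v_{xt} = 0$ simultaneously.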
+
+We also have examples of auto-B\"acklund transformations:
+\begin{eg}
+ For any non-zero constant $\varepsilon$, consider
+ \begin{align*}
+ \frac{\partial}{\partial \xi}(\varphi_1 - \varphi_2) &= 2 \varepsilon \sin \left(\frac{\varphi_1 + \varphi_2}{2}\right)\\
+ \frac{\partial}{\partial \tau}(\varphi_1 + \varphi_2) &= \frac{2}{\varepsilon} \sin\left(\frac{\varphi_1 - \varphi_2}{2}\right).
+ \end{align*}
+ These equations come from geometry, and we will not go into the details motivating them. We can compute
+ \begin{align*}
+ \frac{\partial^2}{\partial \xi\partial \tau} (\varphi_1 - \varphi_2) &= \frac{\partial}{\partial \tau}\left(2\varepsilon \sin \left(\frac{\varphi_1 + \varphi_2}{2}\right)\right)\\
+ &= 2\varepsilon \cos \left(\frac{\varphi_1 + \varphi_2}{2}\right)\frac{\partial}{\partial \tau}\left(\frac{\varphi_1 + \varphi_2}{2}\right)\\
+ &= 2 \varepsilon \cos \left(\frac{\varphi_1 + \varphi_2}{2}\right) \frac{1}{2} \cdot \frac{2}{\varepsilon} \sin \left(\frac{\varphi_1 - \varphi_2}{2}\right)\\
+ &= 2 \cos \left(\frac{\varphi_1 + \varphi_2}{2}\right)\sin \left(\frac{\varphi_1 - \varphi_2}{2}\right)\\
+ &= \sin \varphi_1 - \sin \varphi_2.
+ \end{align*}
+ It then follows that
+ \[
+ \frac{\partial^2 \varphi_2}{\partial \xi \partial \tau} = \sin \varphi_2\quad\Longleftrightarrow\quad \frac{\partial^2 \varphi_1}{\partial \xi \partial \tau} = \sin \varphi_1.
+ \]
+ In other words, $\varphi_1$ solves the sine-Gordon equations in light cone coordinates, if and only if $\varphi_2$ does. So this gives an auto-B\"acklund transformation for the sine-Gordon equation. Moreover, since we had a free parameter $\varepsilon$, we actually have a \emph{family} of auto-B\"acklund transforms.
+
+ For example, we already know a solution to the sine-Gordon equation, namely $\varphi_1 = 0$. Using this, the equations say we need to solve
+ \begin{align*}
+ \frac{\partial \varphi}{\partial \xi} &= 2\varepsilon \sin \frac{\varphi}{2}\\
+ \frac{\partial \varphi}{\partial \tau} &= \frac{2}{\varepsilon} \sin \frac{\varphi}{2}.
+ \end{align*}
+ We see this equation has some sort of symmetry between $\xi$ and $\tau$. So we use an ansatz
+ \[
+ \varphi(\xi, \tau) = 2\chi (\varepsilon \xi + \varepsilon^{-1} \tau).
+ \]
+ Then both equations tell us
+ \[
+ \frac{\d \chi}{\d x} = \sin \chi.
+ \]
+ We can separate this into
+ \[
+ \csc \chi \;\d \chi = \d x.
+ \]
+ Integrating this gives us
+ \[
+ \log \tan \frac{\chi}{2} = x + C.
+ \]
+ So we find
+ \[
+ \chi(x) = 2\tan^{-1} (Ae^x).
+ \]
+ So it follows that
+ \[
+ \varphi(\xi, \tau) = 4 \tan^{-1} \left(A \exp(\varepsilon \xi + \varepsilon^{-1}\tau)\right),
+ \]
+ where $A$ and $\varepsilon$ are free parameters. After a bit more rewriting, this is the $1$-soliton solution we previously found.
+
+ Applying the B\"acklund transform again to this new solution produces multi-soliton solutions.
+\end{eg}
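+As a sanity check on the integration step in the example above, we can confirm that $\chi(x) = 2\tan^{-1}(Ae^x)$ really does solve $\frac{\d \chi}{\d x} = \sin \chi$; a small \texttt{sympy} sketch (illustrative only, with arbitrary spot-check values):

```python
import sympy as sp

x = sp.symbols('x', real=True)
A = sp.symbols('A', positive=True)

chi = 2 * sp.atan(A * sp.exp(x))

# chi should satisfy d(chi)/dx = sin(chi)
residual = sp.diff(chi, x) - sp.sin(chi)

# spot-check numerically at an arbitrary point (should be ~0)
val = residual.subs({x: 0.4, A: 0.7}).evalf()
```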
+
+\section{Inverse scattering transform}
+Recall that in IB Methods, we decided we can use Fourier transforms to solve PDE's. For example, if we wanted to solve the Klein--Gordon equation
+\[
+ u_{tt} - u_{xx} = u,
+\]
+then we simply had to take the Fourier transform with respect to $x$ to get
+\[
+ \hat{u}_{tt} + k^2 \hat{u} = \hat{u}.
+\]
+This then becomes a very easy ODE in $t$:
+\[
+ \hat{u}_{tt} = (1 - k^2) \hat{u},
+\]
+which we can solve. After solving for this, we can take the inverse Fourier transform to get $u$.
+
+The inverse scattering transform will follow a similar procedure, except it is much more involved and magical. Again, given a differential equation in $u(x, t)$, for each fixed time $t$, we can transform the solution $u(x, t)$ to something known as the \emph{scattering data} of $u$. Then the differential equation will tell us how the scattering data should evolve. After we solved for the scattering data at all times, we invert the transformation and recover the solution $u$.
+
+We will find that each step of that process will be linear, i.e.\ easy, and this will magically allow us to solve non-linear equations.
+\subsection{Forward scattering problem}
+\index{forward scattering problem}
+Before we talk about the inverse scattering transform, it is helpful to know what the \emph{forward} problem is. This is, as you would have obviously guessed, related to the Schr\"odinger operator we know and love from quantum mechanics. Throughout this section, $L$ will be the Schr\"odinger operator
+\[
+ L = -\frac{\partial^2}{\partial x^2} + u(x),
+\]
+where the ``potential'' $u$ has compact support, i.e.\ $u = 0$ for $|x|$ sufficiently large. What we actually need is just that $u$ decays quickly enough as $|x| \to \infty$, but to make our life easy, we do not figure out the precise conditions to make things work, and just assume that $u$ actually vanishes for large $|x|$. For a fixed $u$, we are interested in an eigenvalue (or ``spectral'') problem, i.e.\ we want to find solutions to
+\[
+ L \psi = \lambda \psi.
+\]
+This is the ``forward'' problem, i.e.\ given a $u$, we want to find the eigenvalues and eigenfunctions. The \emph{inverse} problem is the opposite: given the collection of all such eigenvalues and eigenfunctions (in the form of the scattering data described below), we want to find out what $u$ is.
+
+We will divide this into the continuous and discrete cases.
+\subsubsection{Continuous spectrum}\label{sssec:continuous-spectrum}
+Here we consider solutions to $L\psi = k^2 \psi$ for real $k$. Since $u = 0$ for $|x|$ large, we must have
+\[
+ \psi_{xx} + k^2 \psi = 0
+\]
+for large $|x|$.
+
+So solutions as $|x| \to \infty$ are linear combinations of $e^{\pm i k x}$. We look for a specific solution $\psi = \varphi(x, k)$ defined by the condition
+\[
+ \varphi = e^{-ikx}\text{ as } x \to -\infty.
+\]
+Then there must be coefficients $a = a(k)$ and $b = b(k)$ such that
+\[
+ \varphi(x, k) = a(k) e^{-ikx} + b(k) e^{ikx}\text{ as }x \to +\infty.
+\]
+We define the quantities
+\[
+ \Phi(x, k) = \frac{\varphi(x, k)}{a(k)},\quad R(k) = \frac{b(k)}{a(k)},\quad T(k) = \frac{1}{a(k)}.
+\]
+Here $R(k)$ is called the \term{reflection coefficient}, and $T(k)$ is the \term{transmission coefficient}. You may have seen these terms from IB Quantum Mechanics. Then we can write
+\[
+ \Phi(x, k) =
+ \begin{cases}
+ T(k) e^{-ikx} & x \to -\infty\\
+ e^{-ikx} + R(k) e^{ikx} & x \to +\infty
+ \end{cases}.
+\]
+We can view the $e^{-ikx}$ term as waves travelling to the left, and $e^{ikx}$ as waves travelling to the right. Thus in this scenario, we have an incident $e^{-ikx}$ wave coming from the right, the potential reflects some portion of the wave, namely $R(k) e^{ikx}$, and transmits the remaining $T(k) e^{-ikx}$. It will be shown on the first example sheet that in fact $|T(k)|^2 + |R(k)|^2 = 1$.
+
+What happens when we change $k$? Since $k$ is the ``frequency'' of the wave, which is related to the energy, we would expect that the larger $k$ is, the more of the wave is transmitted. Thus we might expect that $T(k) \to 1$ and $R(k) \to 0$ as $k \to \infty$. This is indeed true, but we will not prove it. We can think of these as ``boundary conditions'' for $T$ and $R$.
+
+So far, we've only been arguing hypothetically about what the solution has to look like if it existed. However, we do not know if there is a solution at all!
+
+In general, differential equations are bad. They are hard to talk about, because if we differentiate a function, it generally gets worse. It might cease to be differentiable, or even continuous. This means differential operators could take our function out of the relevant function space we are talking about. On the other hand, integration makes functions look \emph{better}. The more times we integrate, the smoother it becomes. So if we want to talk about the existence of solutions, it is wise to rewrite the differential equation as an integral solution instead.
+
+We consider the integral equation for $f = f(x, k)$ given by
+\[
+ f(x, k) = f_0(x, k) + \int_{-\infty}^\infty G(x - y, k) u(y) f(y, k) \;\d y,
+\]
+where $f_0$ is any solution to $(\partial^2_x + k^2) f_0 = 0$, and $G$ is the Green's function for the differential operator $\partial^2_x + k^2$, i.e.\ we have
+\[
+ (\partial_x^2 + k^2) G = \delta(x).
+\]
+What we want to show is that if we can find an $f$ that satisfies this integral equation, then it also satisfies the eigenvalue equation. We simply compute
+\begin{align*}
+ (\partial_x^2 + k^2) f &= (\partial_x^2 + k^2)f_0 + \int_{-\infty}^\infty (\partial_x^2 + k^2) G(x - y, k) u(y) f(y, k) \;\d y\\
+ &= 0 + \int_{-\infty}^\infty \delta(x - y) u(y) f(y, k)\;\d y\\
+ &= u(x) f(x, k).
+\end{align*}
+In other words, we have
+\[
+ Lf = k^2 f.
+\]
+So it remains to prove that solutions to the integral equation exist.
+
+We pick $f_0 = e^{-ikx}$ and
+\[
+ G(x, k) =
+ \begin{cases}
+ 0 & x < 0\\
+ \frac{1}{k} \sin (kx) & x \geq 0
+ \end{cases}.
+\]
+Then our integral equation automatically implies
+\[
+ f(x, k) = e^{-ikx}
+\]
+as $x \to -\infty$: for $x$ sufficiently negative, every $y$ in the (compact) support of $u$ satisfies $x - y < 0$, so $G(x - y, k) = 0$ and the integral vanishes.
+
+To solve the integral equation, we write this in abstract form
+\[
+ (I - K) f = f_0,
+\]
+where $I$ is the identity, and
+\[
+ (Kf)(x) = \int_{-\infty}^\infty G(x - y, k) u(y) f(y, k) \;\d y.
+\]
+So we can ``invert''
+\[
+ f = (I - K)^{-1} f_0.
+\]
+We can ``guess'' a formula for the inverse. If we don't care about rigour and just expand this, we get
+\[
+ f = (I + K + K^2 + \cdots)f_0.
+\]
+It doesn't matter how unrigorous our derivation was. To see it is a valid solution, we just have to check that it works! The first question to ask is whether this expression converges; on the second example sheet, we will show that it does. Granting this, we have
+\[
+ (I - K)f = f_0 + K f_0 + K^2 f_0 + \cdots - (K f_0 + K^2 f_0 + \cdots) = f_0.
+\]
+So this is a solution!
+
+Of course, this result is purely formal. Usually, there are better \emph{ad hoc} ways to solve the equation, as we know from IB Quantum Mechanics.
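+Still, it is reassuring to watch the Neumann series $(I + K + K^2 + \cdots)f_0$ converge in practice. The following numerical sketch (the grid, the wavenumber $k = 1$, and the small Gaussian bump playing the role of $u$ are all illustrative choices, not from the notes) iterates $f \mapsto f_0 + Kf$ on a grid and checks the residual of the integral equation:

```python
import numpy as np

k = 1.0
x = np.linspace(-15.0, 15.0, 600)
dx = x[1] - x[0]

# a small, smooth, effectively compactly supported potential;
# smallness keeps the operator norm of K below 1, so the series converges
u = 0.1 * np.exp(-x**2)

# Green's function: G(x) = 0 for x < 0, sin(kx)/k for x >= 0
sep = x[:, None] - x[None, :]
G = np.where(sep >= 0.0, np.sin(k * sep) / k, 0.0)

f0 = np.exp(-1j * k * x)

def K(f):
    # (Kf)(x) = int G(x - y) u(y) f(y) dy, discretised on the grid
    return G @ (u * f) * dx

# sum the Neumann series by iterating f -> f0 + K f
f = f0.copy()
for _ in range(60):
    f = f0 + K(f)

# residual of the integral equation f = f0 + K f
residual = np.max(np.abs(f - (f0 + K(f))))
```

+Note also that $f$ agrees with $e^{-ikx}$ at the far left of the grid, matching the boundary condition built into our choice of $G$.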
+
+\subsubsection{Discrete spectrum and bound states}
+We now consider the case $\lambda = - \kappa^2 < 0$, where we wlog $\kappa > 0$. We are going to seek solutions to
+\[
+ L \psi_\kappa = - \kappa^2 \psi_\kappa.
+\]
+This time, we are going to ask that
+\[
+ \|\psi_\kappa\|^2 = \int_{-\infty}^\infty \psi_\kappa(x)^2 \;\d x = 1.
+\]
+We will wlog $\psi_\kappa \in \R$. We will call these things \term{bound states}.
+
+Since $u$ has compact support, any solution
+\[
+ L \varphi = - \kappa^2 \varphi
+\]
+must obey
+\[
+ \varphi_{xx} - \kappa^2 \varphi = 0
+\]
+for $|x| \to \infty$. Then the solutions are linear combinations of $e^{\pm \kappa x}$ as $|x| \to \infty$. We now fix $\varphi_\kappa$ by the boundary condition
+\[
+ \varphi_\kappa(x) = e^{-\kappa x} \text{ as } x \to +\infty.
+\]
+Then as $x \to -\infty$, there must exist coefficients $\alpha = \alpha(\kappa)$, $\beta = \beta(\kappa)$ such that
+\[
+ \varphi_\kappa(x) = \alpha(\kappa) e^{\kappa x} + \beta(\kappa) e^{-\kappa x}\text{ as }x \to -\infty.
+\]
+Note that for \emph{any} $\kappa$, we can solve the equation $L \varphi = - \kappa^2 \varphi$ and find a solution of this form. However, we have the additional condition that $\|\psi_\kappa\|^2 = 1$, and in particular is finite. So we must have $\beta(\kappa) = 0$. It can be shown that the function $\beta = \beta(\kappa)$ has only finitely many zeroes
+\[
+ \chi_1 > \chi_2 > \cdots > \chi_N > 0.
+\]
+%To show this, we will have to find out an explicit expression for the solution of $\varphi_\kappa$, just as we did for the continuous spectrum, and then explicitly find a formula of $\beta$. We can then find that it has finitely many zeros.
+So we have a finite list of bound states $\{\psi_n\}_{n = 1}^N$, written
+\[
+ \psi_n(x) = c_n \varphi_{\chi_n}(x),
+\]
+where $c_n$ are normalization constants chosen so that $\|\psi_n\| = 1$.
+
+\subsubsection{Summary of forward scattering problem}
+In summary, we had a spectral problem
+\[
+ L\psi = \lambda \psi,
+\]
+where
+\[
+ L = -\frac{\partial^2}{\partial x^2} + u,
+\]
+where $u$ has compact support. The goal is to find $\psi$ and $\lambda$.
+
+In the continuous spectrum, we have $\lambda = k^2 > 0$. Then we can find some $T(k)$ and $R(k)$ such that
+\[
+ \Phi(x, k) =
+ \begin{cases}
+ T(k) e^{-ikx} & x \to -\infty\\
+ e^{-ikx} + R(k) e^{ikx} & x \to +\infty
+ \end{cases},
+\]
+and solutions exist for all $k$.
+
+In the discrete spectrum, we have $\lambda = - \kappa^2 < 0$. We can construct \emph{bound states} $\{\psi_n\}_{n = 1}^N$ such that $L\psi_n = - \chi_n^2 \psi_n$ with
+\[
+ \chi_1 > \chi_2 > \cdots > \chi_N > 0,
+\]
+and $\|\psi_n\| = 1$.
+
+Bound states are characterized by their behaviour for large positive $x$,
+\[
+ \psi_n(x) = c_n e^{-\chi_n x}\text{ as }x \to +\infty,
+\]
+where $\{c_n\}_{n = 1}^N$ are normalization constants.
+
+Putting all these together, the \term{scattering data} for $L$ is
+\[
+ S = \left\{\{\chi_n, c_n\}_{n = 1}^N, R(k), T(k)\right\}.
+\]
+\begin{eg}
+ Consider the Dirac potential $u(x) = - 2 \alpha \delta(x)$, where $\alpha > 0$. Let's try to compute the scattering data.
+
+ We do the continuous spectrum first. Since $u(x) = 0$ for $x \not= 0$, we must have
+ \[
+ \Phi(x, k) =
+ \begin{cases}
+ T(k) e^{-ikx} & x < 0\\
+ e^{-ikx} + R(k) e^{ikx} & x > 0
+ \end{cases}
+ \]
+ Also, we want $\Phi(x, k)$ to be continuous at $x = 0$. So we must have
+ \[
+ T(k) = 1 + R(k).
+ \]
+ By integrating $L\Phi = k^2 \Phi$ over $(-\varepsilon, \varepsilon)$, taking $\varepsilon \to 0$, we find that $\frac{\partial \Phi}{\partial x}$ has a jump discontinuity at $x = 0$ given by
+ \[
+ ik(R - 1) + ikT = -2 \alpha T.
+ \]
+ We now have two equations and two unknowns, and we can solve to obtain
+ \[
+ R(k) = \frac{i\alpha}{k - i\alpha},\quad T(k) = \frac{k}{k - i \alpha}.
+ \]
+ We can see that we indeed have
+ \[
+ |R|^2 + |T|^2 = 1.
+ \]
+ Note that as $k$ increases, we find that $R(k) \to 0$ and $T(k) \to 1$. This makes sense, since we can think of $k$ as the energy of the wave, and the larger the energy, the more likely we are to pass through.
+
+ Now let's do the discrete part of the spectrum, and we jump through the same hoops. Since $\delta(x) = 0$ for $x \not= 0$, we must have
+ \[
+ -\frac{\partial^2 \psi_n}{\partial x^2} + \chi_n^2 \psi_n = 0
+ \]
+ for $x \not= 0$. So we have
+ \[
+ \psi_n(x) = c_n e^{- \chi_n |x|}.
+ \]
+ Integrating $L\psi_n = - \chi_n^2 \psi_n$ over $(-\varepsilon, \varepsilon)$, we similarly find that
+ \[
+ c_n \chi_n = c_n \alpha.
+ \]
+ So there is just one bound state, with $\chi_1 = \alpha$. We finally find $c_1$ by requiring $\|\psi_1\| = 1$. We have
+ \[
+ 1 = \int_{-\infty}^\infty \psi_1(x)^2 \;\d x = c_1^2 \int_{-\infty}^\infty e^{-2 \chi_1|x|}\;\d x = \frac{c_1^2}{\alpha}.
+ \]
+ So we have
+ \[
+ c_1 = \sqrt{\alpha}.
+ \]
+ In total, we have the following scattering data:
+ \[
+ S = \left\{\{\alpha, \sqrt{\alpha}\},\quad \frac{i\alpha}{k - i \alpha},\quad \frac{k}{k - i\alpha}\right\}.
+ \]
+\end{eg}
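+As a quick check on this scattering data, we can confirm the relation $|R|^2 + |T|^2 = 1$ and the limits $R(k) \to 0$, $T(k) \to 1$ symbolically; a \texttt{sympy} sketch (illustrative only):

```python
import sympy as sp

k, alpha = sp.symbols('k alpha', positive=True)

R = sp.I * alpha / (k - sp.I * alpha)
T = k / (k - sp.I * alpha)

# |R|^2 + |T|^2 for real k and alpha > 0
total = sp.simplify(R * sp.conjugate(R) + T * sp.conjugate(T))

# R -> 0 and T -> 1 as k -> infinity
R_limit = sp.limit(R, k, sp.oo)
T_limit = sp.limit(T, k, sp.oo)
```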
+
+\subsection{Inverse scattering problem}
+We might be interested in the inverse problem. Given scattering data
+\[
+ S = \left\{\{\chi_n, c_n\}_{n = 1}^N, R(k), T(k)\right\},
+\]
+can we reconstruct the potential $u = u(x)$ such that
+\[
+ L = -\frac{\partial^2}{\partial x^2} + u(x)
+\]
+has scattering data $S$? The answer is yes! Moreover, it turns out that $T(k)$ is not needed.
+
+We shall write down a rather explicit formula for the inverse scattering problem, but we will not justify it.
+\begin{thm}[GLM inverse scattering theorem]\index{GLM inverse scattering theorem}
+ A potential $u = u(x)$ that decays rapidly to $0$ as $|x| \to \infty$ is completely determined by its scattering data
+ \[
+ S = \left\{\{\chi_n, c_n\}_{n = 1}^N, R(k)\right\}.
+ \]
+ Given such a scattering data, if we set
+ \[
+ F(x) = \sum_{n = 1}^N c_n^2 e^{-\chi_n x} + \frac{1}{2\pi} \int_{-\infty}^\infty e^{ikx} R(k) \;\d k,
+ \]
+ and define $k(x, y)$ to be the \emph{unique} solution to
+ \[
+ k(x, y) + F(x + y) + \int_{x}^\infty k(x, z) F(z + y) \;\d z = 0,
+ \]
+ then
+ \[
+ u(x) = -2 \frac{\d}{\d x} k(x, x).
+ \]
+\end{thm}
+
+\begin{proof}
+ Too hard.
+\end{proof}
+Note that this equation
+\[
+ k(x, y) + F(x + y) + \int_{x}^\infty k(x, z) F(z + y) \;\d z = 0
+\]
+is not too hard to solve. We can view it as a linear equation of the form
+\[
+ \mathbf{x} + \mathbf{b} + A \mathbf{x} = 0
+\]
+for some linear operator $A$, then use our familiar linear algebra techniques to guess a solution. Afterwards, we can then verify that it works. We will see an explicit example later on when we actually use this to solve problems.
+
+Now that we've got this result, we understand how scattering problems work. We know how to go forwards \emph{and} backwards.
+
+This is all old theory, and not too exciting. The real exciting thing is how we are going to use this to solve PDE's. Given the KdV equation
+\[
+ u_t + u_{xxx} - 6uu_x = 0,
+\]
+we can think of this as a potential evolving over time, with a starting potential $u(x, 0) = u_0(x)$. We then compute the initial scattering data $T$, $R$, $\chi$ and $c$. Afterwards, we obtain the corresponding equations of evolution of the scattering data from the KdV equation. It turns out this is really simple --- the $\chi_n$ are always fixed, and the others evolve as
+\begin{align*}
+ R(k, t) &= e^{8ik^3 t}R(k, 0)\\
+ T(k, t) &= T(k, 0)\\
+ c_n(t) &=e^{4 \chi_n^3 t} c_n(0).
+\end{align*}
+Then we use this GLM formula to reconstruct the potential $u$ at all times!
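+To see the GLM formula in action, take reflectionless data with a single bound state: $R(k) = 0$, $N = 1$, and normalization $c_1(t)^2 = 2\chi e^{8\chi^3 t}$ (this particular normalization is an assumption made here so that the answer lands exactly on the $1$-soliton; it is not forced by anything above). Then $F$ is a single exponential, and the kernel equation is solved by a separable ansatz $k(x, y) = L(x) e^{-\chi y}$. A \texttt{sympy} sketch of the computation:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
chi = sp.symbols('chi', positive=True)

# scattering data: R = 0, one bound state, c_1(t)^2 = 2 chi exp(8 chi^3 t)
c2 = 2 * chi * sp.exp(8 * chi**3 * t)
F = lambda s: c2 * sp.exp(-chi * s)  # no integral term since R = 0

# separable ansatz k(x, y) = L(x) exp(-chi y); the GLM equation is then
# linear in L(x), so we can solve for it
Lx = sp.symbols('Lx')
glm = (Lx * sp.exp(-chi * y) + F(x + y)
       + sp.integrate(Lx * sp.exp(-chi * z) * F(z + y), (z, x, sp.oo)))
Lsol = sp.solve(glm, Lx)[0]

# reconstruct the potential: u = -2 d/dx k(x, x)
kxx = Lsol * sp.exp(-chi * x)
u = -2 * sp.diff(kxx, x)

# compare with the 1-soliton at an arbitrary point (should be ~0)
soliton = -2 * chi**2 * sp.sech(chi * (x - 4 * chi**2 * t))**2
val = (u - soliton).subs({x: 0.3, y: 0.0, t: 0.1, chi: 1.2}).evalf()
```

+Doing the same algebra by hand gives $u(x, t) = -2\chi^2 \sech^2\left(\chi(x - 4\chi^2 t)\right)$, i.e.\ the $1$-soliton drops out of the GLM machinery.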
+
+\subsection{Lax pairs}
+The final ingredient to using the inverse scattering transform is how to relate the evolution of the potential to the evolution of the scattering data. This is given by a \emph{Lax pair}.
+
+Recall that when we studied Hamiltonian systems at the beginning of the course, under a Hamiltonian flow, functions evolve by
+\[
+ \frac{\d f}{\d t} = \{f, H\}.
+\]
+In quantum mechanics, when we ``quantize'' this, in the Heisenberg picture, the operators evolve by
+\[
+ i\hbar \frac{\d L}{\d t} = [L, H].
+\]
+In some sense, these equations tell us $H$ ``generates'' time evolution. What we need here is something similar --- an operator that generates the time evolution of our operator.
+\begin{defi}[Lax pair]\index{Lax pair}
+ Consider a time-dependent self-adjoint linear operator
+ \[
+ L = a_m (x, t) \frac{\partial^m}{\partial x^m} + \cdots + a_1 (x, t) \frac{\partial}{\partial x} + a_0(x, t),
+ \]
+ where the $\{a_i\}$ are (possibly matrix-valued) functions of $(x, t)$. If there is a second operator $A$ such that
+ \[
+ L_t = LA - AL = [L, A],
+ \]
+ where
+ \[
+ L_t = \dot{a}_m \frac{\partial^m}{\partial x^m} + \cdots + \dot{a}_0,
+ \]
+ denotes the derivative of $L$ with respect to $t$, then we call $(L, A)$ a \emph{Lax pair}.
+\end{defi}
+
+The main theorem about Lax pairs is the following isospectral flow theorem:
+\begin{thm}[Isospectral flow theorem]\index{isospectral flow theorem}
+ Let $(L, A)$ be a Lax pair. Then the discrete eigenvalues of $L$ are time-independent. Also, if $L\psi = \lambda \psi$, where $\lambda$ is a discrete eigenvalue, then
+ \[
+ L \tilde{\psi} = \lambda \tilde{\psi},
+ \]
+ where
+ \[
+ \tilde{\psi} = \psi_t + A \psi.
+ \]
+\end{thm}
+The word ``isospectral'' means that we have an evolving system, but the eigenvalues are time-independent.
+
+\begin{proof}
+ We will assume that the eigenvalues at least vary smoothly with $t$, so that for each eigenvalue $\lambda_0$ at $t = 0$ with eigenfunction $\psi_0(x)$, we can find some $\lambda(t)$ and $\psi(x, t)$ with $\lambda(0) = \lambda_0$, $\psi(x, 0) = \psi_0(x)$ such that
+ \[
+ L(t) \psi(x, t) = \lambda(t) \psi (x, t).
+ \]
+ We will show that in fact $\lambda(t)$ is constant in time. Differentiating with respect to $t$ and rearranging, we get
+ \begin{align*}
+ \lambda_t \psi &= L_t \psi + L \psi_t - \lambda \psi_t\\
+ &= LA \psi - AL \psi + L \psi_t - \lambda \psi_t\\
+ &= LA \psi - \lambda A \psi + L \psi_t - \lambda \psi_t\\
+ &= (L - \lambda)(\psi_t + A \psi).
+ \end{align*}
+ We now take the inner product with $\psi$, and use that $\|\psi\| = 1$. We then have
+ \begin{align*}
+ \lambda_t &= \bra \psi, \lambda_t \psi\ket\\
+ &= \bra \psi, (L - \lambda)(\psi_t + A \psi)\ket\\
+ &= \bra (L - \lambda)\psi, \psi_t + A \psi\ket\\
+ &= 0,
+ \end{align*}
+ using the fact that $L$, and hence $L - \lambda$, is self-adjoint.
+
+ So we know that $\lambda_t = 0$, i.e.\ that $\lambda$ is time-independent. Then our above equation gives
+ \[
+ L \tilde{\psi} = \lambda \tilde{\psi},
+ \]
+ where
+ \[
+ \tilde{\psi} = \psi_t + A\psi.\qedhere
+ \]
+\end{proof}
+
+In the case where $L$ is the Schr\"odinger operator, the isospectral theorem tells us how we can relate the evolution of some of the scattering data (namely the $\chi_n$), to some differential equation in $L$ (namely the Laxness of $L$). For a cleverly chosen $A$, we will be able to relate the Laxness of $L$ to some differential equation in $u$, and this establishes our first correspondence between evolution of $u$ and the evolution of scattering data.
+\begin{eg}
+ Consider
+ \begin{align*}
+ L &= -\partial_x^2 + u(x, t)\\
+ A &= 4 \partial_x^3 - 3 (u \partial_x + \partial_x u).
+ \end{align*}
+ Then $(L, A)$ is a Lax pair iff $u = u(x, t)$ satisfies KdV. In other words, we have
+ \[
+ L_t - [L, A] = 0 \quad\Longleftrightarrow\quad u_t + u_{xxx} - 6uu_x =0 .
+ \]
+\end{eg}
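+This equivalence is a finite, if tedious, computation: one checks that $L_t - [L, A]$ acts as multiplication by $u_t + u_{xxx} - 6uu_x$. A \texttt{sympy} sketch (illustrative only), applying the operators to an arbitrary test function:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
f = sp.Function('f')(x)  # arbitrary test function

def L(g):
    return -sp.diff(g, x, 2) + u * g

def A(g):
    return 4 * sp.diff(g, x, 3) - 3 * (u * sp.diff(g, x) + sp.diff(u * g, x))

# L_t acts as multiplication by u_t, so (L_t - [L, A]) f is:
lhs = sp.diff(u, t) * f - (L(A(f)) - A(L(f)))

# ... and it should equal multiplication by the KdV left-hand side
kdv = sp.diff(u, t) + sp.diff(u, x, 3) - 6 * u * sp.diff(u, x)
difference = sp.expand(lhs - kdv * f)
```

+Since the difference vanishes identically, $L_t = [L, A]$ holds precisely when the KdV equation does.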
+
+\subsection{Evolution of scattering data}
+Now we do the clever bit: we allow the potential $u = u(x, t)$ to evolve via KdV
+\[
+ u_t + u_{xxx} - 6 u u_x = 0.
+\]
+We see how the scattering data for $L = -\partial_x^2 + u(x, t)$ evolves. Again, we will assume that $u$ has compact support. Note that this implies that we have
+\[
+ A = 4 \partial_x^3\text{ as }|x| \to \infty.
+\]
+\subsubsection{Continuous spectrum (\tph{$\lambda = k^2 > 0$}{lambda = k2 > 0}{λ = k2 > 0})}
+As in Section \ref{sssec:continuous-spectrum}, for each $t$, we can construct a solution $\varphi$ to $L \varphi = k^2 \varphi$ such that
+\[
+ \varphi(x, t) =
+ \begin{cases}
+ e^{-ikx} & x \to -\infty\\
+ a(k, t) e^{-ikx} + b(k, t) e^{ikx} & x \to \infty
+ \end{cases}.
+\]
+This time, we know that for any $u$, we can find a solution for any $k$. So we can assume that $k$ is fixed in the equation
+\[
+ L\varphi = k^2 \varphi.
+\]
+We assume that $u$ is a solution to the KdV equation, so that $(L, A)$ is a Lax pair. As in the proof of the isospectral flow theorem, we differentiate this to get
+\[
+ 0 = (L - k^2)(\varphi_t + A \varphi).
+\]
+This tells us that
+\[
+ \tilde{\varphi} = \varphi_t + A \varphi
+\]
+solves
+\[
+ L \tilde{\varphi} = k^2 \tilde{\varphi}.
+\]
+We can try to figure out what $\tilde{\varphi}$ is for large $|x|$. We recall that for large $|x|$, we simply have $A = 4 \partial_x^3$. Then we can write
+\[
+ \tilde{\varphi}(x, t) =
+ \begin{cases}
+ 4 ik^3 e^{-ikx} & x \to -\infty\\
+ (a_t + 4 ik^3 a)e^{-ikx} + (b_t - 4ik^3 b) e^{ikx} & x \to \infty
+ \end{cases}.
+\]
+We now consider the function
+\[
+ \theta = 4ik^3 \varphi - \tilde{\varphi}.
+\]
+By linearity of $L$, we have
+\[
+ L \theta = k^2 \theta.
+\]
+Note that by construction, we have $\theta(x, t) \to 0$ as $x \to -\infty$. We recall that the solution to $Lf = k^2 f$ for $f = f_0$ as $x \to -\infty$ is just
+\begin{align*}
+ f &= (I - K)^{-1} f_0\\
+ &= (I + K + K^2 + \cdots)f_0
+\end{align*}
+So we obtain
+\[
+ \theta = (I + K + K^2 + \cdots) 0 = 0.
+\]
+So we must have
+\[
+ \tilde{\varphi} = 4ik^3 \varphi.
+\]
+Looking at the $x \to +\infty$ behaviour, we figure that
+\begin{align*}
+ a_t + 4ik^3 a &= 4 ik^3 a\\
+ b_t - 4ik^3 b &= 4ik^3 b
+\end{align*}
+Of course, these are equations we can solve. We have
+\begin{align*}
+ a(k, t) &= a(k, 0)\\
+ b(k, t) &= b(k, 0) e^{8ik^3 t}.
+\end{align*}
+In terms of the reflection and transmission coefficients, we have
+\begin{align*}
+ R(k, t) &= R(k, 0) e^{8ik^3 t}\\
+ T(k, t) &= T(k, 0).
+\end{align*}
+Thus, we have shown that if we assume $u$ evolves according to the \emph{really} complicated KdV equation, then the scattering data must evolve in this simple way! This is \emph{AMAZING}.
+
+\subsubsection{Discrete spectrum (\tph{$\lambda = -\kappa^2 < 0$}{lambda = -kappa^2 < 0}{λ = -κ² < 0})}
+The discrete part is similar. By the isospectral flow theorem, we know the $\chi_n$ are constant in time. For each $t$, we can construct bound states $\{\psi_n(x, t)\}_{n = 1}^N$ such that
+\[
+ L\psi_n = - \chi_n^2 \psi_n,\quad \|\psi_n\| = 1.
+\]
+Moreover, we have
+\[
+ \psi_n(x, t) = c_n(t) e^{-\chi_n x} \text{ as }x \to +\infty.
+\]
+From the isospectral theorem, we know the function
+\[
+ \tilde{\psi}_n = \partial_t \psi_n + A\psi_n
+\]
+also satisfies
+\[
+ L\tilde{\psi}_n = - \chi_n^2 \tilde{\psi}_n.
+\]
+It is an exercise (by looking at Wronskians) to show that these solutions must actually be proportional to one another, i.e.\ $\tilde{\psi}_n \propto \psi_n$. Also, we have
+\begin{align*}
+ \bra \psi_n, \tilde{\psi}_n\ket &= \bra \psi_n, \partial_t \psi_n\ket + \bra\psi_n, A \psi_n\ket\\
+ &= \frac{1}{2} \frac{\partial}{\partial t} \bra \psi_n, \psi_n\ket + \bra \psi_n, A \psi_n\ket\\
+ &= 0,
+\end{align*}
+using the fact that $A$ is antisymmetric and $\|\psi_n\|$ is constant. We thus deduce that $\tilde{\psi}_n = 0$.
+
+Looking at large $x$-behaviour, we have
+\[
+ \tilde{\psi}_n(x, t) = (\dot{c}_n - 4 \chi_n^3 c_n) e^{- \chi_n x}
+\]
+as $x \to +\infty$. Since $\tilde{\psi}_n \equiv 0$, we must have
+\[
+ \dot{c}_n - 4 \chi_n^3 c_n = 0.
+\]
+So we have
+\[
+ c_n(t) = c_n(0) e^{4 \chi_n^3 t}.
+\]
+This is again \emph{AMAZING}.
+
+\subsubsection{Summary of inverse scattering transform}
+So in summary, suppose we are given that $u = u(x, t)$ evolves according to KdV, namely
+\[
+ u_t + u_{xxx} - 6uu_x = 0.
+\]
+If we have an initial condition $u_0(x) = u(x, 0)$, then we can compute its scattering data
+\[
+ S(0) = \left\{\{\chi_n, c_n(0)\}_{n = 1}^N, R(k, 0)\right\}.
+\]
+Then for arbitrary time, the scattering data for $L = -\partial_x^2 + u$ is
+\[
+ S(t) = \left\{\{\chi_n, c_n(0)e^{4\chi_n^3 t}\}_{n = 1}^N, R(k, 0)e^{8ik^3 t}\right\}.
+\]
+We then apply GLM to obtain $u(x, t)$ for all time $t$.
+\[
+ \begin{tikzcd}[row sep=5em, column sep=8em]
+ u_0(x) \ar[r, "\text{Construct scattering data}", "L = -\partial_x^2 + u_0(x)"'] \ar[d, "\substack{\text{KdV}\\\text{equation}}"] & S(0) = \left\{\{\chi_n, c_n(0)\}_{n = 1}^N, R(k, 0)\right\} \ar[d, "\substack{\text{Evolve}\\\text{scattering}\\\text{data}}"', "{L_t = [L, A]}"]\\
+ u(x, t) & S(t) = \left\{\{\chi_n, c_n(0)e^{4\chi_n^3 t}\}_{n = 1}^N, R(k, 0)e^{8ik^3 t}\right\} \ar[l, "\text{Solve GLM equation}"]
+ \end{tikzcd}
+\]
+The key thing that makes this work is that $u_t + u_{xxx} - 6 uu_x = 0$ holds if and only if $L_t = [L, A]$.
+
+For comparison, this is what we would do if we had to solve $u_t + u_{xxx} = 0$ by a Fourier transform:
+\[
+ \begin{tikzcd}[row sep=5em, column sep=8em]
+ u_0(x) \ar[r, "\text{Fourier transform}"] \ar[d, "u_t + u_{xxx} = 0"] & \hat{u}_0(k) \ar[d, "\hat{u}_t - ik^3 \hat{u} = 0"]\\
+ u(x, t) & \hat{u}(k, t) = \hat{u}_0(k) e^{ik^3 t} \ar[l, "\text{Inverse Fourier}"', "\text{transform}"]
+ \end{tikzcd}
+\]
+It is just the same steps, but with a simpler transform!
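To make the analogy concrete, here is a minimal numerical sketch of the right-hand route of the diagram (assuming \texttt{numpy} is available; the grid size and initial data are arbitrary choices), solving $u_t + u_{xxx} = 0$ on a periodic domain by FFT:

```python
import numpy as np

# Spectral solution of u_t + u_xxx = 0 on a periodic box, following
# the diagram: evolve each Fourier mode by u_hat(k, t) = u_hat_0(k) e^{i k^3 t}.
N, Lbox = 256, 40.0
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=Lbox/N)

def evolve(u0, t):
    return np.real(np.fft.ifft(np.fft.fft(u0)*np.exp(1j*k**3*t)))

u0 = np.exp(-x**2)      # arbitrary (hypothetical) initial profile
u1 = evolve(u0, 0.5)
```

As expected, the map is a flow (evolving by $t_1$ then $t_2$ equals evolving by $t_1 + t_2$), and the zero mode, i.e.\ $\int u \;\d x$, is conserved.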
+
+\subsection{Reflectionless potentials}
+We are now going to actually solve the KdV equation for a special kind of potential --- reflectionless potentials.
+
+\begin{defi}[Reflectionless potential]\index{reflectionless potential}\index{potential!reflectionless}
+ A \emph{reflectionless potential} is a potential $u(x, 0)$ satisfying $R(k, 0) = 0$.
+\end{defi}
+
+Now if $u$ evolves according to the KdV equation, then
+\[
+ R(k, t) = R(k, 0) e^{8ik^3 t} = 0.
+\]
+So if a potential starts off reflectionless, then it remains reflectionless.
+
+We now want to solve the GLM equation in this case. Using the notation when we wrote down the GLM equation, we simply have
+\[
+ F(x) = \sum_{n = 1}^N c_n^2 e^{-\chi_n x}.
+\]
+We will mostly not write out the $t$ when we do this, and only put it back in at the very end. We now guess that the GLM equation has a solution of the form
+\[
+ K(x, y) = \sum_{m = 1}^N K_m(x) e^{-\chi_m y}
+\]
+for some unknown functions $\{K_m\}$ (in the second example sheet, we show that it \emph{must} have this form). We substitute this into the GLM equation and find that
+\[
+ \sum_{n = 1}^N \left[c_n^2 e^{-\chi_n x}+ K_n(x) + \sum_{m = 1}^N c_n^2 K_m(x) \int_x^\infty e^{-(\chi_n + \chi_m)z}\;\d z\right] e^{-\chi_n y} = 0.
+\]
+Now notice that the $e^{-\chi_n y}$ for $n = 1, \cdots, N$ are linearly independent. So we actually have $N$ equations, one for each $n$. So we know that
+\[
+ c_n^2 e^{-\chi_n x} + K_n(x) + \sum_{m = 1}^N \frac{c_n^2 K_m(x)}{\chi_n + \chi_m} e^{-(\chi_n + \chi_m)x} = 0\tag{$*$}
+\]
+for all $n = 1, \cdots, N$. Now if our goal is to solve for the $K_n(x)$, then this is just a linear system for each $x$! We set
+\begin{align*}
+ \mathbf{c} &= (c_1^2 e^{-\chi_1 x}, \cdots, c_N^2 e^{-\chi_N x})^T\\
+ \mathbf{K} &= (K_1(x), \cdots, K_N(x))^T\\
+ A_{nm} &= \delta_{nm} + \frac{c_n^2 e^{-(\chi_n + \chi_m)x}}{\chi_n + \chi_m}.
+\end{align*}
+Then $(*)$ becomes
+\[
+ A\mathbf{K} = -\mathbf{c}.
+\]
+This really is a linear algebra problem. But we don't really have to solve this. The thing we really want to know is
+\begin{align*}
+ K(x, x) &= \sum_{m = 1}^N K_m(x) e^{-\chi_m x}\\
+ &= \sum_{m = 1}^N \sum_{n = 1}^N (A^{-1})_{mn} (-\mathbf{c})_n e^{-\chi_m x}.
+\end{align*}
+Now note that
+\[
+ \frac{\d}{\d x}A_{nm}(x) = A_{nm}'(x) = - c_n^2 e^{-\chi_n x} e^{-\chi_m x} = (-\mathbf{c})_n e^{-\chi_m x}.
+\]
+So we can replace the above thing by
+\[
+ K(x, x) = \sum_{m = 1}^N \sum_{n = 1}^N (A^{-1})_{mn} A'_{nm} = \tr(A^{-1}A').
+\]
+It is an exercise on the second example sheet to show that this is equal to
+\[
+ K(x, x) = \frac{1}{\det A} \frac{\d}{\d x} (\det A) = \frac{\d}{\d x} \log (\det A).
+\]
+So we have
+\[
+ u(x) = - 2 \frac{\d^2}{\d x^2} \log(\det A).
+\]
+We now put back the $t$-dependence we didn't bother to write all along. Then we have
+\[
+ u(x, t) = -2 \frac{\partial^2}{\partial x^2} \log(\det A(x, t)),
+\]
+where
+\[
+ A_{nm}(x, t) = \delta_{nm} + \frac{c_n(0)^2 e^{8\chi_n^3 t}e^{-(\chi_n + \chi_m)x}}{\chi_n + \chi_m}.
+\]
+It turns out these are soliton solutions, and the number of discrete eigenstates $N$ is just the number of solitons!
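We can sanity-check the formula numerically in the one-soliton case $N = 1$ (a sketch assuming \texttt{numpy}; the values of $\chi_1$, $c_1(0)$ and $t$ are arbitrary choices). Differentiating $\log \det A$ reproduces the travelling wave $u(x, t) = -2\chi_1^2 \sech^2(\chi_1(x - 4\chi_1^2 t) - \phi)$ with phase $\phi = \frac{1}{2}\log\frac{c_1(0)^2}{2\chi_1}$:

```python
import numpy as np

chi, c0, t = 1.2, 1.0, 0.3                     # hypothetical parameters
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# N = 1: det A is a scalar
detA = 1 + c0**2 * np.exp(8*chi**3*t) * np.exp(-2*chi*x) / (2*chi)
u = -2*np.gradient(np.gradient(np.log(detA), dx), dx)

# closed-form one-soliton profile for comparison
phi = 0.5*np.log(c0**2/(2*chi))
exact = -2*chi**2/np.cosh(chi*(x - 4*chi**2*t) - phi)**2
```

The numerical second derivative agrees with the sech$^2$ profile up to discretisation error, and the speed $4\chi_1^2$ is exactly the KdV soliton speed.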
+\subsection{Infinitely many first integrals}
+As we've previously mentioned, we are expecting our integrable PDEs to have infinitely many first integrals. Recall we can construct $\varphi = \varphi(x, k, t)$ such that
+\[
+ L\varphi = k^2 \varphi,
+\]
+and we had
+\[
+ \varphi(x, k, t) =
+ \begin{cases}
+ e^{-ikx} & x \to -\infty\\
+ a(k, t) e^{-ikx} + b(k, t) e^{ikx} & x \to \infty
+ \end{cases}.
+\]
+But when we looked at the evolution of the scattering data, we can actually write down what $a$ and $b$ are. In particular, $a(k, t) = a(k)$ is independent of $t$. So we might be able to extract some first integrals from it. We have
+\[
+ e^{ikx} \varphi(x, k, t) = a(k) + b(k, t) e^{2ikx} \text{ as }x \to \infty.
+\]
+We now take the average over $[R, 2R]$ for $R \to \infty$. We do the terms one by one. We have the boring integral
+\[
+ \frac{1}{R} \int_R^{2R} a(k) \;\d x = a(k).
+\]
+For the $b(k, t)$ term, we have
+\[
+ \frac{1}{R} \int_R^{2R} b(k, t) e^{2ikx} \;\d x = O\left(\frac{1}{R}\right).
+\]
+So we have
+\begin{align*}
+ a(k) &= \lim_{R \to \infty} \frac{1}{R} \int_R^{2R} e^{ikx} \varphi(x, k, t) \;\d x\\
+ &= \lim_{R \to \infty} \int_1^2 e^{ikRx} \varphi(Rx, k, t) \;\d x.
+\end{align*}
+So can we figure out what this thing is? Since $\varphi = e^{-ikx}$ as $x \to -\infty$, it is ``reasonable'' to write
+\[
+ \varphi(x, k, t) = \exp\left(-ikx + \int_{-\infty}^x S(y, k, t)\;\d y\right)
+\]
+for some function $S$. Then after some dubious manipulations, we would get
+\begin{align*}
+ a(k) &= \lim_{R \to \infty} \int_1^2 \exp\left(\int_{-\infty}^{Rx} S(y, k, t)\;\d y\right)\;\d x\\
+ &= \exp\left(\int_{-\infty}^\infty S(y, k, t)\;\d y\right).\tag{$\dagger$}
+\end{align*}
+Now this is interesting, since the left hand side $a(k)$ has no $t$-dependence, but the right-hand formula does. So this is where we get our first integrals from.
+
+Now we need to figure out what $S$ is. To find $S$, recall that $\varphi$ satisfies
+\[
+ L\varphi = k^2 \varphi.
+\]
+So we just try to shove our formula of $\varphi$ into this equation. Notice that
+\[
+ \varphi_x = (S - ik) \varphi,\quad \varphi_{xx} = S_x \varphi + (S - ik)^2 \varphi.
+\]
+We then put these into the Schr\"odinger equation to find
+\[
+ S_x - (2ik)S + S^2 = u.
+\]
+We have got no $\varphi$'s left. This is a famous type of equation --- a \term{Riccati-type equation}. We can make a guess
+\[
+ S(x, k, t) = \sum_{n = 1}^\infty \frac{S_n(x, t)}{(2ik)^n}.
+\]
+This seems like a strange thing to guess, but there are indeed some good reasons for this we will not get into. Putting this into the equation and comparing coefficients of $k^{-n}$, we obtain a recurrence relation
+\begin{align*}
+ S_1 &= -u\\
+ S_{n + 1} &= \frac{\d S_n}{\d x} + \sum_{m = 1}^{n - 1} S_m S_{n - m}.
+\end{align*}
+This is a straightforward recurrence relation to compute. We can make a computer do this, and get
+\[
+ S_2 = -u_x,\quad S_3 = -u_{xx} + u^2,\quad S_4 = \cdots
+\]
+Using the expression for $S$ in $(\dagger)$, we find that
+\begin{align*}
+ \log a(k) &= \int_{-\infty}^\infty S(x, k, t) \;\d x\\
+ &= \sum_{n = 1}^\infty \frac{1}{(2ik)^n} \int_{-\infty}^\infty S_n(x, t)\;\d x.
+\end{align*}
+Since the LHS is time-independent, so is the RHS. Moreover, this is true for all $k$. So we know that
+\[
+ \int_{-\infty}^\infty S_n (x, t)\;\d x
+\]
+must be constant with time!
+
+We can explicitly compute the first few terms:
+\begin{enumerate}
+ \item For $n = 1$, we find a first integral
+ \[
+ \int_{-\infty}^\infty u(x, t)\;\d x.
+ \]
+ We can view this as a conservation of mass.
+
+ \item For $n = 2$, we obtain a first integral
+ \[
+ \int_{-\infty}^\infty u_x(x, t)\;\d x.
+ \]
+ This is actually boring: since we assumed that $u$ vanishes at infinity, this integral is always zero anyway.
+
+ \item For $n = 3$, we have
+ \[
+ \int_{-\infty}^\infty (-u_{xx}(x, t) + u(x, t)^2) \;\d x = \int_{-\infty}^\infty u(x, t)^2 \;\d x.
+ \]
+ This is in some sense a conservation of momentum.
+\end{enumerate}
+
+It is an exercise to show that $S_n$ is a total derivative for all even $n$, so we don't get any interesting conserved quantity. But still, half of infinity is infinity, and we do get infinitely many first integrals!
+
+\section{Structure of integrable PDEs}
+\subsection{Infinite dimensional Hamiltonian system}
+When we did ODEs, our integrable ODEs were not just random ODEs. They came from some (finite-dimensional) Hamiltonian systems. If we view PDEs as infinite-dimensional ODEs, then it is natural to ask if we can generalize the notion of Hamiltonian systems to infinite-dimensional ones, and then see if we can put our integrable systems in the form of a Hamiltonian system. It turns out we can, and nice properties of the PDE fall out of this formalism.
+
+We recall that a (finite-dimensional) phase space is given by $M = \R^{2n}$ and a non-degenerate anti-symmetric matrix $J$. Given a Hamiltonian function $H: M \to \R$, the equation of motion for $\mathbf{x}(t) \in M$ becomes
+\[
+ \frac{\d \mathbf{x}}{\d t} = J \frac{\partial H}{\partial \mathbf{x}},
+\]
+where $\mathbf{x}(t)$ is a vector of length $2n$.
+
+In the infinite-dimensional case, instead of having $2n$ coordinates $x_i(t)$, we have a function $u(x, t)$ that depends continuously on the parameter $x$. When promoting finite-dimensional things to infinite-dimensional versions, we think of $x$ as a continuous version of $i$. We now proceed to generalize the notions we used to have for finite-dimensional to infinite dimensional ones.
+
+The first is the inner product. In the finite-dimensional case, we could take the inner product of two vectors by
+\[
+ \mathbf{x}\cdot \mathbf{y} = \sum x_i y_i.
+\]
+Here we have an analogous inner product, but we replace the sum with an integral.
+\begin{notation}
+ For functions $u(x)$ and $v(x)$, we write
+ \[
+ \bra u, v\ket = \int_\R u(x) v(x)\;\d x.
+ \]
+ If $u, v$ are functions of time as well, then so is the inner product.
+\end{notation}
+
+For finite-dimensional phase spaces, we talked about functions of $\mathbf{x}$. In particular, we had the Hamiltonian $H(\mathbf{x})$. In the case of infinite-dimensional phase spaces, we will not consider arbitrary functions of $u$, but only \emph{functionals}:
+
+\begin{defi}[Functional]\index{functional}
+ A \emph{functional} $F$ is a real-valued function (on some function space) of the form
+ \[
+ F[u] = \int_\R f(x, u, u_x, u_{xx}, \cdots)\;\d x.
+ \]
+ Again, if $u$ is a function of time as well, then $F[u]$ is a function of time.
+\end{defi}
+
+We used to be able to talk about the derivatives of functions. Time derivatives of $F$ would work just as well, but differentiating with respect to $u$ will involve the \emph{functional derivative}, which you may have met in IB Variational Principles.
+
+\begin{defi}[Functional derivative/Euler-Lagrange derivative]\index{functional derivative}\index{Euler-Lagrange derivative}
+ The \emph{functional derivative} of $F = F[u]$ at $u$ is the unique function $\delta F$ satisfying
+ \[
+ \bra \delta F, \eta\ket = \lim_{\varepsilon \to 0} \frac{F[u + \varepsilon \eta] - F[u]}{\varepsilon}
+ \]
+ for all smooth $\eta$ with compact support.
+
+ Alternatively, we have
+ \[
+ F[u + \varepsilon \eta] = F[u] + \varepsilon \bra \delta F, \eta\ket + o (\varepsilon).
+ \]
+ Note that $\delta F$ is another function, depending on $u$.
+\end{defi}
+
+\begin{eg}
+ Set
+ \[
+ F[u] = \frac{1}{2} \int u_x^2 \;\d x.
+ \]
+ We then have
+ \begin{align*}
+ F[u + \varepsilon \eta] &= \frac{1}{2}\int (u_x + \varepsilon \eta_x)^2 \;\d x\\
+ &= \frac{1}{2} \int u_x^2 \;\d x + \varepsilon \int u_x \eta_x \;\d x + o(\varepsilon)\\
+ &= F[u] + \varepsilon \bra u_x, \eta_x\ket + o(\varepsilon)\\
+ \intertext{This is no good, because we want something of the form $\bra \delta F, \eta\ket$, not an inner product with $\eta_x$. When in doubt, integrate by parts! This is just equal to}
+ &= F[u] + \varepsilon \bra -u_{xx}, \eta\ket + o(\varepsilon).
+ \end{align*}
+ Note that when integrating by parts, we don't have to mess with the boundary terms, because $\eta$ is assumed to have compact support. So we have
+ \[
+ \delta F = - u_{xx}.
+ \]
+\end{eg}
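The defining property $\bra \delta F, \eta\ket = \lim_{\varepsilon \to 0} (F[u + \varepsilon \eta] - F[u])/\varepsilon$ can also be checked numerically. This sketch (assuming \texttt{numpy}; the choices of $u$, $\eta$ and the grid are arbitrary) compares both sides for $F[u] = \frac{1}{2}\int u_x^2 \;\d x$, where we computed $\delta F = -u_{xx}$:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def d(f):
    return np.gradient(f, dx)

def F(u):                       # F[u] = (1/2) * integral of u_x^2
    return 0.5*np.sum(d(u)**2)*dx

u = np.exp(-x**2)
eta = np.exp(-(x - 0.5)**2)     # smooth, effectively compactly supported
eps = 1e-6

deltaF = -d(d(u))               # the functional derivative -u_xx from above
lhs = np.sum(deltaF*eta)*dx             # <deltaF, eta>
rhs = (F(u + eps*eta) - F(u))/eps       # difference quotient
```

The two sides agree up to discretisation error and the $O(\varepsilon)$ truncation.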
+In general, from IB Variational Principles, we know that if
+\[
+ F[u] = \int f(x, u, u_x, u_{xx}, \cdots)\;\d x,
+\]
+then we have
+\[
+ \delta F = \frac{\partial f}{\partial u} - \D_x\left(\frac{\partial f}{\partial u_x}\right) + \D_x^2 \left(\frac{\partial f}{\partial u_{xx}}\right) - \cdots.
+\]
+Here $\D_x$ is the total derivative, which is different from the partial derivative.
+\begin{defi}[Total derivative]
+ Consider a function $f(x, u, u_x, \cdots)$. For any given function $u(x)$, the total derivative with respect to $x$ is
+ \[
+ \frac{\d}{\d x} f(x, u(x), u_x(x), \cdots) = \frac{\partial f}{\partial x} + u_x \frac{\partial f}{\partial u} + u_{xx} \frac{\partial f}{\partial u_x} + \cdots
+ \]
+\end{defi}
+
+\begin{eg}
+ \[
+ \frac{\partial}{\partial x}(xu) = u,\quad \D_x (xu) = u + x u_x.
+ \]
+\end{eg}
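In symbolic terms (a sketch assuming \texttt{sympy} is available), the distinction is exactly the difference between differentiating with $u$ treated as an independent symbol and with $u = u(x)$:

```python
import sympy as sp

x, usym = sp.symbols('x u')
u = sp.Function('u')(x)

partial = sp.diff(x*usym, x)    # partial derivative: u held fixed, gives u
total = sp.diff(x*u, x)         # total derivative D_x: u = u(x), gives u + x u_x
```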
+
+Finally, we need to figure out an alternative for $J$. In the case of a finite-dimensional Hamiltonian system, it is healthy to think of it as an anti-symmetric bilinear form, so that $\mathbf{v}J\mathbf{w}$ is $J$ applied to $\mathbf{v}$ and $\mathbf{w}$. However, since we also have an inner product given by the dot product, we can alternatively think of $J$ as a linear map $\R^{2n} \to \R^{2n}$ so that we apply it as
+\[
+ \mathbf{v} \cdot J \mathbf{w} = \mathbf{v}^T J \mathbf{w}.
+\]
+Using this $J$, we can define the Poisson bracket of $f = f(\mathbf{x}), g = g(\mathbf{x})$ by
+\[
+ \{f, g\} = \frac{\partial f}{\partial \mathbf{x}} \cdot J \frac{\partial g}{\partial \mathbf{x}}.
+\]
+We know this is bilinear, antisymmetric and satisfies the Jacobi identity.
+
+How do we promote this to infinite-dimensional Hamiltonian systems? We can just replace $\frac{\partial f}{\partial \mathbf{x}}$ with the functional derivative and the dot product with the inner product. What we need is a replacement for $J$, which we will write as $\mathcal{J}$. There is no obvious candidate for $\mathcal{J}$, but assuming we have found a reasonable linear and antisymmetric candidate, we can make the following definition:
+
+\begin{defi}[Poisson bracket for infinite-dimensional Hamiltonian systems]\index{Poisson bracket!infinite-dimensional}
+ We define the \emph{Poisson bracket} for two functionals to be
+ \[
+ \{F, G\} = \bra \delta F, \mathcal{J} \delta G\ket = \int \delta F(x) \mathcal{J} \delta G(x)\;\d x.
+ \]
+\end{defi}
+Since $\mathcal{J}$ is linear and antisymmetric, we know that this Poisson bracket is bilinear and antisymmetric. The annoying part is the Jacobi identity
+\[
+ \{F, \{G, H\}\} + \{G, \{H, F\}\} + \{H, \{F, G\}\} = 0.
+\]
+This is \emph{not} automatically satisfied. We need conditions on $\mathcal{J}$. The simplest antisymmetric linear map we can think of would be $\mathcal{J} = \partial_x$, and this works, i.e.\ the Jacobi identity is satisfied. Proving that is easy, but painful.
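Antisymmetry of the bracket for $\mathcal{J} = \partial_x$ comes from integration by parts, and can be spot-checked numerically. In this sketch (assuming \texttt{numpy}; the functionals and test function are arbitrary choices) we take $F[u] = \int u^3 \;\d x$, so $\delta F = 3u^2$, and $G[u] = \frac{1}{2}\int u_x^2 \;\d x$, so $\delta G = -u_{xx}$:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
u = np.exp(-x**2)*(1 + 0.5*x)   # arbitrary decaying test function

def d(f):
    return np.gradient(f, dx)

def bracket(dF, dG):            # {F, G} = <dF, d/dx dG> with J = d/dx
    return np.sum(dF*d(dG))*dx

dF = 3*u**2                     # delta F for F[u] = integral of u^3
dG = -d(d(u))                   # delta G for G[u] = (1/2) integral of u_x^2

lhs = bracket(dF, dG)
rhs = bracket(dG, dF)           # should be -lhs
```

Discretely, summation by parts holds up to negligible boundary terms since $u$ decays rapidly, so the bracket is antisymmetric to machine precision.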
+
+Finally, we get to the equations of motion. Recall that for finite-dimensional systems, our equation of evolution is given by
+\[
+ \frac{\d \mathbf{x}}{\d t} = J \frac{\partial H}{\partial \mathbf{x}}.
+\]
+We make the obvious analogues here:
+\begin{defi}[Hamiltonian form]\index{Hamiltonian form}
+ An evolution equation for $u = u(x, t)$ is in \emph{Hamiltonian form} if it can be written as
+ \[
+ u_t = \mathcal{J} \frac{\delta H}{\delta u}
+ \]
+ for some functional $H = H[u]$ and some linear, antisymmetric $\mathcal{J}$ such that the Poisson bracket
+ \[
+ \{F, G\} = \bra \delta F, \mathcal{J} \delta G\ket
+ \]
+ obeys the Jacobi identity.
+
+\end{defi}
+
+Such a $\mathcal{J}$ is known as a \emph{Hamiltonian operator}.
+\begin{defi}[Hamiltonian operator]\index{Hamiltonian operator}
+ A \emph{Hamiltonian operator} is a linear, antisymmetric operator $\mathcal{J}$ on the space of functions such that the induced Poisson bracket obeys the Jacobi identity.
+\end{defi}
+
+Recall that for a finite-dimensional Hamiltonian system, if $f = f(\mathbf{x})$ is any function, then we had
+\[
+ \frac{\d f}{\d t} = \{f, H\}.
+\]
+This generalizes to the infinite-dimensional case.
+\begin{prop}
+ If $u_t = \mathcal{J} \delta H$ and $I = I[u]$, then
+ \[
+ \frac{\d I}{\d t} = \{I, H\}.
+ \]
+ In particular $I[u]$ is a first integral of $u_t = \mathcal{J} \delta H$ iff $\{I, H\} = 0$.
+\end{prop}
+
+The proof is the same.
+\begin{proof}
+ \[
+ \frac{\d I}{\d t} = \lim_{\varepsilon \to 0} \frac{I[u + \varepsilon u_t] - I[u]}{\varepsilon} = \bra \delta I, u_t\ket = \bra \delta I, \mathcal{J}\delta H\ket = \{I, H\}.\qedhere
+ \]
+\end{proof}
+
+In summary, we have the following correspondence:
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ $2n$-dimensional phase space & infinite dimensional phase space\\
+ \midrule
+ $x_i(t): i = 1, \cdots, 2n$ & $u(x, t): x \in \Omega$\\
+ $\mathbf{x}\cdot\mathbf{y} = \sum_i x_i y_i$ & $\bra u, v \ket = \int_\Omega u(x, t) v(x, t) \;\d x$\\
+ $\frac{\d}{\d t}$ & $\frac{\partial}{\partial t}$\\
+ $\frac{\partial}{\partial \mathbf{x}}$ & $\frac{\delta}{\delta u}$\\
+ anti-symmetric matrix $J$ & anti-symmetric linear operator $\mathcal{J}$ \\
+ functions $f = f(\mathbf{x})$ & functionals $F = F[u]$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+
+
+\subsection{Bihamiltonian systems}
+So far, this is not too interesting, as we just generalized the finite-dimensional cases in sort-of the obvious way. However, it is possible for the same PDE to be put into Hamiltonian form for \emph{different} $\mathcal{J}$'s. These are known as \emph{bihamiltonian systems}.
+\begin{defi}[Bihamiltonian system]\index{bihamiltonian system}
+ A PDE is \emph{bihamiltonian} if it can be written in Hamiltonian form for two different choices of $\mathcal{J}$.
+\end{defi}
+It turns out that when this happens, then the system has infinitely many first integrals in involution! We will prove this later on. This is rather miraculous!
+
+\begin{eg}
+ We can write the KdV equation in Hamiltonian form by
+ \[
+ u_t = \mathcal{J}_1 \delta H_1,\quad \mathcal{J}_1 = \frac{\partial}{\partial x},\quad H_1[u] =\int \frac{1}{2}u_x^2 + u^3 \;\d x.
+ \]
+ We can check that this says
+ \begin{align*}
+ u_t &= \frac{\partial}{\partial x}\left(\frac{\partial}{\partial u} - \D_x \left(\frac{\partial}{\partial u_x}\right)\right)\left(\frac{1}{2}u_x^2 + u^3\right)\\
+ &= 6uu_x- u_{xxx},
+ \end{align*}
+ and this is the KdV equation.
+
+ We can also write it as
+ \[
+ u_t = \mathcal{J}_0 \delta H_0,\quad \mathcal{J}_0 = - \frac{\partial^3}{\partial x^3} + 4u \partial_x + 2 u_x,\quad H_0[u] = \int \frac{1}{2}u^2 \;\d x.
+ \]
+\end{eg}
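Both Hamiltonian forms are easy to verify symbolically; a sketch assuming \texttt{sympy} is available:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
ux = sp.diff(u, x)

# First form: J1 = d/dx, and delta H1 = df/du - Dx(df/du_x)
# for the integrand f = (1/2) u_x^2 + u^3
dH1 = 3*u**2 - sp.diff(u, x, 2)
rhs1 = sp.diff(dH1, x)

# Second form: J0 = -d^3/dx^3 + 4u d/dx + 2u_x, and delta H0 = u
dH0 = u
rhs0 = -sp.diff(dH0, x, 3) + 4*u*sp.diff(dH0, x) + 2*ux*dH0
```

Both right-hand sides equal $6uu_x - u_{xxx}$, so each form gives the KdV equation $u_t = 6uu_x - u_{xxx}$.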
+So KdV is bi-Hamiltonian. We then know that
+\[
+ \mathcal{J}_1 \delta H_1 = \mathcal{J}_0 \delta H_0.
+\]
+We define a sequence of Hamiltonians $\{H_n\}_{n \geq 0}$ via
+\[
+ \mathcal{J}_1 \delta H_{n + 1} = \mathcal{J}_0 \delta H_n.
+\]
+We will assume that we can always solve for $H_{n + 1}$ given $H_n$. This can be proven, but we shall not. We then have the miraculous result.
+
+\begin{thm}
+ Suppose a system is bi-Hamiltonian via $(\mathcal{J}_0, H_0)$ and $(\mathcal{J}_1, H_1)$. It is a fact that we can find a sequence $\{H_n\}_{n \geq 0}$ such that
+ \[
+ \mathcal{J}_1 \delta H_{n + 1} = \mathcal{J}_0 \delta H_n.
+ \]
+ Under these definitions, $\{H_n\}$ are all first integrals of the system and are in involution, i.e.
+ \[
+ \{H_n, H_m\} = 0
+ \]
+ for all $n, m \geq 0$, where the Poisson bracket is taken with respect to $\mathcal{J}_1$.
+\end{thm}
+
+\begin{proof}
+ We notice the following interesting fact: for $m \geq 1$, we have
+ \begin{align*}
+ \{H_n, H_m\} &= \bra \delta H_n, \mathcal{J}_1 \delta H_m\ket\\
+ &= \bra \delta H_n, \mathcal{J}_0 \delta H_{m - 1}\ket\\
+ &= - \bra \mathcal{J}_0 \delta H_n, \delta H_{m - 1}\ket\\
+ &= - \bra \mathcal{J}_1 \delta H_{n + 1}, \delta H_{m - 1}\ket\\
+ &= \bra \delta H_{n + 1}, \mathcal{J}_1 \delta H_{m - 1}\ket\\
+ &= \{H_{n + 1}, H_{m - 1}\}.
+ \end{align*}
+ Iterating this many times, we find that for any $n, m$, we have
+ \[
+ \{H_n, H_m\} = \{H_m, H_n\}.
+ \]
+ Then by antisymmetry, they must both vanish. So done.
+\end{proof}
+
+\subsection{Zero curvature representation}
+There is a more geometric way to talk about integrable systems, which is via zero-curvature representations.
+
+Suppose we have a function $u(x, t)$, which we currently think of as being fixed. From this, we construct $N \times N$ matrices $U = U(\lambda)$ and $V = V(\lambda)$ that depend on $\lambda$, $u$ and its derivatives. The $\lambda$ will be thought of as a ``\term{spectral parameter}'', like the $\lambda$ in the eigenvalue problem $L \varphi = \lambda \varphi$.
+
+Now consider the system of PDEs
+\[
+ \frac{\partial}{\partial x}\mathbf{v} = U(\lambda) \mathbf{v},\quad \frac{\partial}{\partial t} \mathbf{v} = V(\lambda) \mathbf{v},\tag{$\dagger$}
+\]
+where $\mathbf{v} = \mathbf{v}(x, t; \lambda)$ is an $N$-dimensional vector.
+
+Now notice that here we have twice as many equations as there are unknowns. So we need some compatibility conditions. We use the fact that $\mathbf{v}_{xt} = \mathbf{v}_{tx}$. So we need
+\begin{align*}
+ 0 &= \frac{\partial}{\partial t} U(\lambda) \mathbf{v} - \frac{\partial}{\partial x}V(\lambda) \mathbf{v}\\
+ &= \frac{\partial U}{\partial t} \mathbf{v} + U \frac{\partial \mathbf{v}}{\partial t} - \frac{\partial V}{\partial x}\mathbf{v} - V \frac{\partial \mathbf{v}}{\partial x}\\
+ &= \frac{\partial U}{\partial t}\mathbf{v} + UV \mathbf{v} - \frac{\partial V}{\partial x}\mathbf{v} - VU \mathbf{v}\\
+ &= \left(\frac{\partial U}{\partial t} - \frac{\partial V}{\partial x} + [U, V]\right) \mathbf{v}.
+\end{align*}
+So we know that if a (non-trivial) solution to the PDE's exists for any initial $\mathbf{v}_0$, then we must have
+\[
+ \frac{\partial U}{\partial t} - \frac{\partial V}{\partial x} + [U, V] = 0.
+\]
+These are known as the \term{zero curvature equations}.
+
+There is a beautiful theorem by Frobenius that if this equation holds, then solutions always exist. So we have found a correspondence between the existence of solutions to the PDE, and some equation in $U$ and $V$.
+
+Why are these called the zero curvature equations? In differential geometry, a connection $A$ on a tangent bundle has a curvature given by the \emph{Riemann curvature tensor}
+\[
+ R = \partial \Gamma - \partial \Gamma + \Gamma \Gamma - \Gamma \Gamma,
+\]
+where $\Gamma$ is the Christoffel symbols associated to the connection. This equation is less silly than it seems, because each of the objects there has a bunch of indices, and the indices on consecutive terms are not equal. So they do not just outright cancel. In terms of the connection $A$, the curvature vanishes iff
+\[
+ \frac{\partial A_j}{\partial x_i} - \frac{\partial A_i}{\partial x_j} + [A_i, A_j] = 0,
+\]
+which has the same form as the zero-curvature equation.
+
+\begin{eg}
+ Consider
+ \[
+ U(\lambda) = \frac{i}{2}
+ \begin{pmatrix}
+ 2\lambda & u_x\\
+ u_x & -2\lambda
+ \end{pmatrix},\quad
+ V(\lambda) =
+ \frac{1}{4i\lambda}
+ \begin{pmatrix}
+ \cos u & -i \sin u\\
+ i \sin u & -\cos u
+ \end{pmatrix}.
+ \]
+ Then the zero curvature equation is equivalent to the sine--Gordon equation
+ \[
+ u_{xt} = \sin u.
+ \]
+ In other words, the sine--Gordon equation holds iff the PDEs $(\dagger)$ have a solution.
+\end{eg}
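This claim can be verified symbolically: a sketch (assuming \texttt{sympy} is available) shows that the zero curvature expression reduces entrywise to a multiple of $u_{xt} - \sin u$:

```python
import sympy as sp

x, t = sp.symbols('x t')
lam = sp.symbols('lambda')
u = sp.Function('u')(x, t)
ux = sp.diff(u, x)

U = (sp.I/2)*sp.Matrix([[2*lam, ux],
                        [ux, -2*lam]])
V = (1/(4*sp.I*lam))*sp.Matrix([[sp.cos(u), -sp.I*sp.sin(u)],
                                [sp.I*sp.sin(u), -sp.cos(u)]])

# zero curvature expression U_t - V_x + [U, V]
Z = sp.simplify(sp.diff(U, t) - sp.diff(V, x) + U*V - V*U)
```

The diagonal entries of $Z$ vanish identically, and the off-diagonal entries are both $\frac{i}{2}(u_{xt} - \sin u)$, so $Z = 0$ exactly when sine--Gordon holds.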
+In geometry, curvature is an intrinsic property of our geometric object, say a surface. If we want to compute the curvature, we usually pick a coordinate system, interpret the above expression in that coordinate system, and evaluate it. If we pick a different coordinate system, we get different expressions for each of, say, $\frac{\partial A_j}{\partial x_i}$. However, if the curvature vanishes in one coordinate system, then it vanishes in any coordinate system. So by picking a new coordinate system, we have found \emph{new} things that satisfy the zero curvature equation.
+
+Back to the real world, in general, we can give a gauge transformation that takes some solution $(U, V)$ to a new $(\tilde{U}, \tilde{V})$ that preserves the zero curvature equation. So we can use gauge transformations to obtain a lot of new solutions! This will be explored in the last example sheet.
+
+What are these zero-curvature representations good for? We don't have time to go deep into the matter, but they can be used to do some inverse-scattering type things. In the above formulation of the sine--Gordon equation, if $u_x \to 0$ as $|x| \to \infty$, we write
+\[
+ \mathbf{v} =
+ \begin{pmatrix}
+ \psi_1\\
+ \psi_2
+ \end{pmatrix}.
+\]
+Then, as $|x| \to \infty$, we have
+\[
+ \frac{\partial}{\partial x}
+ \begin{pmatrix}
+ \psi_1\\
+ \psi_2
+ \end{pmatrix}
+ = \frac{i}{2}
+ \begin{pmatrix}
+ 2\lambda & u_x\\
+ u_x & -2\lambda
+ \end{pmatrix}
+ \begin{pmatrix}
+ \psi_1\\
+ \psi_2
+ \end{pmatrix} = i\lambda
+ \begin{pmatrix}
+ \psi_1\\
+ -\psi_2
+ \end{pmatrix}.
+\]
+So we know
+\[
+ \begin{pmatrix}
+ \psi_1\\
+ \psi_2
+ \end{pmatrix} =
+ A
+ \begin{pmatrix}
+ 1\\0
+ \end{pmatrix}
+ e^{i\lambda x} + B
+ \begin{pmatrix}
+ 0 \\1
+ \end{pmatrix}
+ e^{-i\lambda x}
+\]
+as $|x| \to \infty$. So with any $\mathbf{v}$ satisfying the first equation in $(\dagger)$, we can associate to it some ``scattering data'' $A, B$. Then the second equation in $(\dagger)$ tells us how $\mathbf{v}$, and thus $A, B$, evolve in time, and using this we can develop some inverse scattering-type way of solving the equation.
+
+\subsection{From Lax pairs to zero curvature}
+Lax pairs are very closely related to zero curvature representations. Recall that we had this isospectral flow theorem --- if Lax's equation
+\[
+ L_t = [L, A],
+\]
+is satisfied, then the eigenvalues of $L$ are time-independent. Also, we found that our eigensolutions satisfied
+\[
+ \tilde{\psi} = \psi_t + A \psi = 0.
+\]
+So we have two equations:
+\begin{align*}
+ L \psi &= \lambda \psi\\
+ \psi_t + A \psi &= 0.
+\end{align*}
+Now suppose we reverse this --- we \emph{enforce} that $\lambda_t = 0$. Then differentiating the first equation and substituting in the second gives
+\[
+ L_t = [L, A].
+\]
+So we can see Lax's equation as a compatibility condition for the two equations above. We will see that given any equations of this form, we can transform them into zero curvature form.
+
+Note that if we have
+\begin{align*}
+ L &= \left(\frac{\partial}{\partial x}\right)^n + \sum_{j = 0}^{n - 1}u_j(x, t) \left(\frac{\partial}{\partial x}\right)^j\\
+ A &= \left(\frac{\partial}{\partial x}\right)^m + \sum_{j = 0}^{m - 1}v_j(x, t) \left(\frac{\partial}{\partial x}\right)^j
+\end{align*}
+then
+\[
+ L \psi = \lambda \psi
+\]
+means that derivatives of order $\geq n$ can be expressed as linear combinations of derivatives $< n$. Indeed, we just have
+\[
+ \partial^n_x \psi = \lambda\psi - \sum_{j = 0}^{n - 1} u_j(x, t) \partial_x^j \psi.
+\]
+Then differentiating this equation will give us an expression for the higher derivatives in terms of the lower ones.
+
+Now by introducing the vector
+\[
+ \boldsymbol\Psi = (\psi, \partial_x \psi, \cdots, \partial^{n - 1}_x \psi),
+\]
+The equation $L \psi = \lambda \psi$ can be written as
+\[
+ \frac{\partial}{\partial x} \boldsymbol\Psi = U(\lambda) \boldsymbol\Psi,
+\]
+where
+\[
+ U(\lambda) =
+ \begin{pmatrix}
+ 0 & 1 & 0 & \cdots & 0\\
+ 0 & 0 & 1 & \cdots & 0\\
+ \vdots & \vdots & \vdots &\ddots & \vdots\\
+ 0 & 0 & 0 & \cdots & 1\\
+ \lambda - u_0 & - u_1 & -u_2 & \cdots & - u_{n - 1}
+ \end{pmatrix}
+\]
+Now differentiate ``$\psi_t + A\psi = 0$'' $i - 1$ times with respect to $x$, for $i = 1, \cdots, n$, to obtain
+\[
+ (\partial_x^{i - 1} \psi)_t + \underbrace{\partial_x^{i - 1} \left(\sum_{j = 0}^{m - 1} v_j(x, t) \left(\frac{\partial}{\partial x}\right)^j \psi\right)}_{\sum_{j = 1}^n V_{ij}(x, t) \partial^{j - 1}_x \psi} = 0
+\]
+for some $V_{ij}(x, t)$ depending on $v_j, u_i$ and their derivatives. We see that this equation then just says
+\[
+ \frac{\partial}{\partial t}\boldsymbol\Psi = V \boldsymbol\Psi.
+\]
+So we have shown that
+\[
+ L_t = [L, A] \Leftrightarrow
+ \begin{cases}
+ L\psi = \lambda \psi\\
+ \psi_t + A\psi = 0
+ \end{cases} \Leftrightarrow
+ \begin{cases}
+ \boldsymbol\Psi_x = U(\lambda) \boldsymbol\Psi\\
+ \boldsymbol\Psi_t = V(\lambda) \boldsymbol\Psi
+ \end{cases} \Leftrightarrow
+ \frac{\partial U}{\partial t} - \frac{\partial V}{\partial x} + [U, V] = 0.
+\]
+So we know that if something can be written in the form of Lax's equation, then we can come up with an equivalent equation in zero curvature form.
+
+\section{Symmetry methods in PDEs}
+Finally, we are now going to learn how we can exploit symmetries to solve differential equations. A lot of the things we do will be done for ordinary differential equations, but they all work equally well for partial differential equations.
+
+To talk about symmetries, we will have to use the language of groups. But this time, since differential equations are continuous objects, we will not be content with just groups. We will talk about \emph{smooth groups}, or \emph{Lie groups}. With Lie groups, we can talk about continuous families of symmetries, as opposed to the more ``discrete'' symmetries like the symmetries of a triangle.
+
+At this point, the more applied students might be scared and want to run away from the word ``group''. However, understanding ``pure'' mathematics is often very useful when doing applied things, as a lot of the structures we see in the physical world can be explained by concepts coming from pure mathematics. To demonstrate this, we offer the following cautionary tale, which may or may not be entirely made up.
+
+Back in the 60's, Gell-Mann was trying to understand the many different seemingly-fundamental particles occurring in nature. He decided one day that he should plot out the particles according to certain quantum numbers known as isospin and hypercharge. The resulting diagram looked like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -2.5) -- (0, 2.5);
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 1.732) {};
+ \node [circ] at (1, -1.732) {};
+ \node [circ] at (-2, 0) {};
+ \node [circ] at (-1, 1.732) {};
+ \node [circ] at (-1, -1.732) {};
+ \node [circ] at (-0.2, 0) {};
+ \node [circ] at (0.2, 0) {};
+ \end{tikzpicture}
+\end{center}
+So this is a nice picture, as the particles obviously form some sort of lattice. However, it is not clear how one can generalize this for more particles, or where this pattern came from.
+
+Now a pure mathematician happened to get lost, somehow walked into the physics department, and saw that picture. He asked ``so you are also interested in the eight-dimensional adjoint representations of $\su(3)$?'', and the physicist was like, ``no\ldots?''.
+
+It turns out the weight diagram (whatever that might be) of the eight-dimensional adjoint representation of $\su(3)$ (whatever that might be), looked exactly like that. Indeed, it turns out there is a good correspondence between representations of $\su(3)$ and quantum numbers of particles, and then the way to understand and generalize this phenomenon became obvious.
+
+\subsection{Lie groups and Lie algebras}
+So to begin with, we remind ourselves what a group is!
+
+\begin{defi}[Group]\index{group}
+ A \emph{group} is a set $G$ with a binary operation
+ \[
+ (g_1, g_2) \mapsto g_1 g_2
+ \]
+ called ``group multiplication'', satisfying the axioms
+ \begin{enumerate}
+ \item Associativity\index{Associativity}: $(g_1 g_2)g_3 = g_1 (g_2 g_3)$ for all $g_1, g_2, g_3$
+ \item Existence of identity: there is a (unique) identity element $e \in G$ such that
+ \[
+ ge = eg = g
+ \]
+ for all $g \in G$
+ \item Inverses exist: for each $g \in G$, there is $g^{-1} \in G$ such that
+ \[
+ gg^{-1} = g^{-1}g = e.
+ \]
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}
+ $(\Z, +)$ is a group.
+\end{eg}
+
+What we are really interested in is how groups act on certain sets.
+
+\begin{defi}[Group action]\index{group action}
+ A group $G$ acts on a set $X$ if there is a map $G \times X \to X$ sending $(g, x) \mapsto g(x)$ such that
+ \[
+ g(h(x)) = (gh)(x),\quad e(x) = x
+ \]
+ for all $g, h \in G$ and $x \in X$.
+\end{defi}
+
+\begin{eg}
+ The group $\SO(2)$ of rotation matrices acts on $\R^2$ via matrix multiplication.
+\end{eg}
+
+We are not going to consider groups in general, but we will only talk about Lie groups, and coordinate changes born of them. For the sake of simplicity, we are not going to use the ``real'' definition of Lie group, but use an easier version that really looks more like the definition of a local Lie group than a Lie group. The definition will probably be slightly confusing, but it will become clearer with examples.
+
+\begin{defi}[Lie group]\index{Lie group}
+ An \emph{$m$-dimensional Lie group} is a group such that all the elements depend continuously on $m$ parameters, in such a way that the maps $(g_1, g_2) \mapsto g_1 g_2$ and $g \mapsto g^{-1}$ correspond to a smooth function of those parameters.
+\end{defi}
+In practice, it suffices to check that the map $(g_1, g_2) \mapsto g_1 g_2^{-1}$ is smooth.
+
+So elements of an ($m$-dimensional) Lie group can be written as $g(\mathbf{t})$, where $\mathbf{t} \in \R^m$. We make the convention that $g(0) = e$. For those who are doing differential geometry, this is a manifold with a group structure such that the group operations are smooth maps. For those who are doing category theory, this is a group object in the category of smooth manifolds.
+
+\begin{eg}
+ Any element of $G = \SO(2)$ can be written as
+ \[
+ g(t) =
+ \begin{pmatrix}
+ \cos t & -\sin t\\
+ \sin t & \cos t
+ \end{pmatrix}
+ \]
+ for $t \in \R$. So this is a candidate for a 1-dimensional Lie group that depends on a single parameter $t$. We now have to check that the map $(g_1, g_2) \mapsto g_1 g_2^{-1}$ is smooth. We note that
+ \[
+ g(t_1)^{-1} = g(-t_1).
+ \]
+ So we have
+ \[
+ g(t_1) g(t_2)^{-1} = g(t_1) g(-t_2) = g(t_1 - t_2).
+ \]
+ So the map
+ \[
+ (g_1, g_2) \mapsto g_1 g_2^{-1}
+ \]
+ corresponds to
+ \[
+ (t_1, t_2) \mapsto t_1 - t_2.
+ \]
+ Since this map is smooth, we conclude that $\SO(2)$ is a $1$-dimensional Lie group.
+\end{eg}
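We can confirm this small computation symbolically. The following sympy sketch just double-checks the group law $g(t_1)g(t_2)^{-1} = g(t_1 - t_2)$:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2', real=True)

def g(t):
    # a rotation matrix in SO(2)
    return sp.Matrix([[sp.cos(t), -sp.sin(t)],
                      [sp.sin(t),  sp.cos(t)]])

# the map (g1, g2) -> g1 g2^{-1} corresponds to (t1, t2) -> t1 - t2
diff = (g(t1) * g(t2).inv() - g(t1 - t2)).applyfunc(sp.simplify)
assert diff == sp.zeros(2)
```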
+
+\begin{eg}
+ Consider matrices of the form
+ \[
+ g(\mathbf{t}) =
+ \begin{pmatrix}
+ 1 & t_1 & t_3\\
+ 0 & 1 & t_2\\
+ 0 & 0 & 1
+ \end{pmatrix},\quad \mathbf{t} \in \R^3
+ \]
+ It is easy to see that this is a group under matrix multiplication. This is known as the \term{Heisenberg group}. We now check that it is in fact a Lie group. It has three obvious parameters $t_1, t_2, t_3$, and we have to check the smoothness criterion. We have
+ \[
+ g(\mathbf{a}) g(\mathbf{b}) =
+ \begin{pmatrix}
+ 1 & a_1 & a_3\\
+ 0 & 1 & a_2\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & b_1 & b_3\\
+ 0 & 1 & b_2\\
+ 0 & 0 & 1
+ \end{pmatrix} =
+ \begin{pmatrix}
+ 1 & a_1 + b_1 & a_3 + b_3 + a_1 b_2\\
+ 0 & 1 & a_2 + b_2\\
+ 0 & 0 & 1
+ \end{pmatrix}.
+ \]
+ We can then write down the inverse
+ \[
+ g(\mathbf{b})^{-1}=
+ \begin{pmatrix}
+ 1 & - b_1 & b_1 b_2 - b_3\\
+ 0 & 1 & -b_2\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \]
+ So we have
+ \begin{align*}
+ g(\mathbf{a}) g(\mathbf{b})^{-1} &=
+ \begin{pmatrix}
+ 1 & a_1 & a_3\\
+ 0 & 1 & a_2\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \begin{pmatrix}
+ 1 & - b_1 & b_1 b_2 - b_3\\
+ 0 & 1 & -b_2\\
+ 0 & 0 & 1
+ \end{pmatrix} \\
+ &=
+ \begin{pmatrix}
+ 1 & a_1 - b_1 & b_1 b_2 - b_3 - a_1 b_2 + a_3\\
+ 0 & 1 & a_2 - b_2\\
+ 0 & 0 & 1
+ \end{pmatrix}
+ \end{align*}
+ This then corresponds to
+ \[
+ (\mathbf{a}, \mathbf{b}) \mapsto
+ \begin{pmatrix}
+ a_1 - b_1\\
+ a_2 -b_2\\
+ b_1 b_2 - b_3 - a_1 b_2 + a_3
+ \end{pmatrix},
+ \]
+ which is a smooth map! So we conclude that the Heisenberg group is a three-dimensional Lie group.
+\end{eg}
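The matrix algebra above is easy to get wrong by hand, so here is a quick symbolic check of the product, inverse, and $g(\mathbf{a})g(\mathbf{b})^{-1}$ formulas (an illustrative sympy sketch):

```python
import sympy as sp

a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')

def g(t1, t2, t3):
    # an element of the Heisenberg group
    return sp.Matrix([[1, t1, t3],
                      [0, 1, t2],
                      [0, 0, 1]])

A, B = g(a1, a2, a3), g(b1, b2, b3)

# product and inverse formulas from the text
assert A * B == g(a1 + b1, a2 + b2, a3 + b3 + a1 * b2)
assert (B.inv() - g(-b1, -b2, b1 * b2 - b3)).applyfunc(sp.simplify) == sp.zeros(3)

# the map (a, b) -> a b^{-1} in coordinates
assert (A * B.inv() - g(a1 - b1, a2 - b2,
                        b1 * b2 - b3 - a1 * b2 + a3)).expand() == sp.zeros(3)
```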
+Recall that at the beginning of the course, we had vector fields and flow maps. Flow maps are hard and complicated, while vector fields are nice and easy. Thus, we often want to reduce the study of flow maps to the study of vector fields, which can be thought of as the ``infinitesimal flow''. For example, checking that two flows commute is very hard, but checking that the commutator of two vector fields vanishes is easy.
+
+Here we are going to do the same. Lie groups are hard. To make life easier, we look at ``infinitesimal'' elements of Lie groups, and this is known as the Lie algebra.
+
+We will only study Lie algebras informally, and we'll consider only the case of matrix Lie groups, so that it makes sense to add, subtract, and differentiate the elements of the Lie group (in addition to the group multiplication), and the presentation becomes much easier.
+
+Suppose we have a curve $\mathbf{x}_1(\varepsilon)$ in our parameter space passing through $0$ at time $0$. Then we can obtain a curve
+\[
+ A(\varepsilon) = g(\mathbf{x}_1(\varepsilon))
+\]
+in our Lie group $G$. We set $a = A'(0)$, so that
+\[
+ A(\varepsilon) = I + \varepsilon a + o(\varepsilon).
+\]
+We now define the \emph{Lie algebra} $\mathfrak{g}$ to be the set of all ``leading order terms'' $a$ arising from such curves. We now proceed to show that $\mathfrak{g}$ is in fact a vector space.
+
+Suppose we have a second curve $B(\varepsilon)$, which we expand similarly as
+\[
+ B(\varepsilon) = I + \varepsilon b + o (\varepsilon).
+\]
+We will show that $a + b \in \mathfrak{g}$. Consider the curve
+\[
+ t \mapsto A(t) B(t),
+\]
+using the multiplication in the Lie group. Then we have
+\[
+ A(\varepsilon)B(\varepsilon) = (I + \varepsilon a + o(\varepsilon)) (I + \varepsilon b + o(\varepsilon)) = I + \varepsilon(a + b) + o (\varepsilon).
+\]
+So we know $a, b \in \mathfrak{g}$ implies $a + b \in \mathfrak{g}$.
+
+For scalar multiplication, given $\lambda \in \R$, we can construct a new curve
+\[
+ t \mapsto A(\lambda t).
+\]
+Then we have
+\[
+ A(\lambda \varepsilon) = I + \varepsilon (\lambda a) + o(\varepsilon).
+\]
+So if $a \in \mathfrak{g}$, then so is $\lambda a \in \mathfrak{g}$ for any $\lambda \in \R$.
+
+So we get that $\mathfrak{g}$ has the structure of a vector space! This is already a little interesting. Groups are complicated. They have this weird structure and they are not necessarily commutative. However, we get a nice, easy vector space structure from the group structure.
+
+It turns out we can do something more fun. The \emph{commutator} of any two elements of $\mathfrak{g}$ is also in $\mathfrak{g}$. To see this, we define a curve $C(t)$ for $t > 0$ by
+\[
+ t \mapsto A(\sqrt{t}) B(\sqrt{t}) A(\sqrt{t})^{-1} B(\sqrt{t})^{-1}.
+\]
+We now notice that $A(\varepsilon)^{-1} = I - \varepsilon a + o(\varepsilon)$, since if $A(\varepsilon)^{-1} = I + \varepsilon \tilde{a} + o(\varepsilon)$, then
+\begin{align*}
+ I &= A(\varepsilon) A(\varepsilon)^{-1}\\
+ &= (I + \varepsilon a + o(\varepsilon))(I + \varepsilon \tilde{a} + o(\varepsilon))\\
+ &= I + \varepsilon(a + \tilde{a}) + o(\varepsilon)
+\end{align*}
+So we must have $\tilde{a} = -a$.
+
+Then we have
+\begin{align*}
+ C(\varepsilon) &= (I + \sqrt{\varepsilon} a + \cdots)(I + \sqrt{\varepsilon} b + \cdots) (I - \sqrt{\varepsilon} a + \cdots )(I - \sqrt{\varepsilon} b + \cdots)\\
+ &= I + \varepsilon(ab - ba) + o(\varepsilon).
+\end{align*}
+It is an exercise to show that this is actually true, because we have to keep track of the second order terms we didn't write out to make sure they cancel properly.
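We can at least test the expansion numerically. The sketch below (with an arbitrary pair of random $3 \times 3$ matrices and a hand-rolled truncated series for the matrix exponential) checks that $(C(\varepsilon) - I)/\varepsilon$ approaches $ab - ba$:

```python
import numpy as np

def expm(M, terms=30):
    """Truncated power series for the matrix exponential (fine for small matrices)."""
    out, P = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3))
b = rng.standard_normal((3, 3))

def C(eps):
    # the curve A(sqrt(t)) B(sqrt(t)) A(sqrt(t))^{-1} B(sqrt(t))^{-1} with A, B matrix exponentials
    s = np.sqrt(eps)
    return expm(s * a) @ expm(s * b) @ expm(-s * a) @ expm(-s * b)

eps = 1e-8
# C(eps) = I + eps*(ab - ba) + o(eps)
approx = (C(eps) - np.eye(3)) / eps
assert np.allclose(approx, a @ b - b @ a, atol=1e-2)
```

The remaining error is of order $\sqrt{\varepsilon}$, coming from the $\varepsilon^{3/2}$ terms in the expansion, hence the loose tolerance.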
+
+So if $a, b \in \mathfrak{g}$, then
+\[
+ [a, b]_L = ab - ba \in \mathfrak{g}.
+\]
+A vector space with this extra structure is called a \emph{Lie algebra}. The idea is that the Lie algebra consists of elements of the group infinitesimally close to the identity. While the product of two elements $a, b$ infinitesimally close to the identity need not remain infinitesimally close to the identity, the commutator $ab - ba$ does.
+
+\begin{defi}[Lie algebra]\index{Lie algebra}
+ A \emph{Lie algebra} is a vector space $\mathfrak{g}$ equipped with a bilinear, anti-symmetric map $[\ph, \ph]_L: \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ that satisfies the \term{Jacobi identity}
+ \[
+ [a, [b, c]_L]_L + [b, [c, a]_L]_L + [c, [a, b]_L]_L = 0.
+ \]
+ This antisymmetric map is called the \term{Lie bracket}.
+
+ If $\dim \mathfrak{g} = m$, we say the Lie algebra has \emph{dimension}\index{dimension!Lie algebra}\index{Lie algebra!dimension} $m$.
+\end{defi}
+
+The main source of Lie algebras will come from Lie groups, but there are many other examples.
+
+\begin{eg}
+ We can set $\mathfrak{g} = \R^3$, and
+ \[
+ [\mathbf{a}, \mathbf{b}]_L = \mathbf{a} \times \mathbf{b}.
+ \]
+ It is a straightforward (and messy) check to see that this is a Lie algebra.
+\end{eg}
+
+\begin{eg}
+ Let $M$ be our phase space, and let
+ \[
+ \mathfrak{g} = \{f: M \to \R \text{ smooth}\}.
+ \]
+ Then the Poisson bracket
+ \[
+ [f, g]_L = \{f, g\}
+ \]
+ makes $\mathfrak{g}$ into a Lie algebra.
+\end{eg}
+
+\begin{eg}
+ We now find the Lie algebra of the matrix group $\SO(n)$. We let
+ \[
+ G = \SO(n) = \{A \in \Mat_n(\R): AA^T = I, \det A = 1\}.
+ \]
+ We let $A(\varepsilon)$ be a curve in $G$ with $A(0) = I$. Then we have
+ \begin{align*}
+ I &= A(\varepsilon) A(\varepsilon)^T \\
+ &= (I + \varepsilon a + o(\varepsilon))(I + \varepsilon a^T + o(\varepsilon))\\
+ &= I + \varepsilon(a + a^T) + o(\varepsilon).
+ \end{align*}
+ So we must have $a + a^T = 0$, i.e.\ $a$ is anti-symmetric. The other condition says
+ \[
+ 1 = \det A(\varepsilon) = \det (I + \varepsilon a + o(\varepsilon)) = 1 + \varepsilon \tr(a) + o(\varepsilon).
+ \]
+ So we need $\tr (a) = 0$, but this is already satisfied since $a$ is antisymmetric.
+
+ So it looks like the Lie algebra $\mathfrak{g} = \so(n)$ corresponding to $\SO(n)$ is the vector space of anti-symmetric matrices:
+ \[
+ \so(n) = \{a \in \Mat_n(\R): a + a^T = 0\}.
+ \]
+ To see this really is the answer, we have to check that every antisymmetric matrix comes from some curve. It is an exercise to check that the curve
+ \[
+ A(t) = \exp(at)
+ \]
+ works.
+
+ We can manually check that $\mathfrak{g}$ is closed under the commutator:
+ \[
+ [a, b]_L = ab - ba.
+ \]
+ Indeed, we have
+ \[
+ [a, b]_L^T = (ab - ba)^T = b^T a^T - a^T b^T = ba - ab = - [a, b]_L.
+ \]
+\end{eg}
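We can also let a computer do the manual check for us. The following sympy sketch verifies, for a general pair of $3 \times 3$ antisymmetric matrices, that the commutator is again antisymmetric (and traceless):

```python
import sympy as sp

p, q, r, s, u, v = sp.symbols('p q r s u v')

def antisym(x, y, z):
    # a general element of so(3)
    return sp.Matrix([[0, x, y],
                      [-x, 0, z],
                      [-y, -z, 0]])

a = antisym(p, q, r)
b = antisym(s, u, v)
c = a * b - b * a

# the commutator of antisymmetric matrices is again antisymmetric, and traceless
assert (c + c.T).expand() == sp.zeros(3)
assert sp.expand(c.trace()) == 0
```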
+Note that it is standard that if we have a group whose name is in capital letters (e.g.\ $\SO(n)$), then the corresponding Lie algebra is the same thing in lower case, fraktur letters (e.g.\ $\so(n)$).
+
+Note that above all else, $\mathfrak{g}$ is a vector space. So (at least if $\mathfrak{g}$ is finite-dimensional) we can give $\mathfrak{g}$ a basis $\{a_i\}_{i = 1}^m$. Since the Lie bracket maps $\mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$, it must be the case that
+\[
+ [a_i, a_j]_L = \sum_{k = 1}^m c_{ij}^k a_k
+\]
+for some constants $c_{ij}^k$. These are known as the \term{structure constants}\index{Lie algebra!structure constant}.
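For example, for $\so(3)$ with the standard basis $(a_i)_{jk} = -\varepsilon_{ijk}$ (a conventional choice, not fixed by the text), the structure constants are $c_{ij}^k = \varepsilon_{ijk}$. A quick sympy check:

```python
import sympy as sp

# standard basis of so(3): (a_i)_{jk} = -epsilon_{ijk}
a = [sp.Matrix(3, 3, lambda j, k, i=i: -sp.LeviCivita(i, j, k))
     for i in range(3)]

# structure constants: [a_i, a_j] = sum_k epsilon_{ijk} a_k
for i in range(3):
    for j in range(3):
        lhs = a[i] * a[j] - a[j] * a[i]
        rhs = sum((sp.LeviCivita(i, j, k) * a[k] for k in range(3)),
                  sp.zeros(3))
        assert lhs == rhs
```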
+\subsection{Vector fields and one-parameter groups of transformations}
+Ultimately, we will be interested in coordinate transformations born of the action of some Lie group. In other words, we let the Lie group act on our coordinate space (smoothly), and then use new coordinates
+\[
+ \tilde{\mathbf{x}} = g(\mathbf{x}),
+\]
+where $g \in G$ for some Lie group $G$. For example, if $G$ is the group of rotations, then this gives new coordinates by rotating.
+
+Recall that a vector field $\mathbf{V}: \R^n \to \R^n$ defines an integral curve through the point $\mathbf{x}$ via the solution of differential equations
+\[
+ \frac{\d}{\d \varepsilon} \tilde{\mathbf{x}} = \mathbf{V}(\tilde{\mathbf{x}}),\quad \tilde{\mathbf{x}}(0) = \mathbf{x}.
+\]
+To represent solutions to this problem, we use the flow map $g^\varepsilon$ defined by
+\[
+ \tilde{\mathbf{x}}(\varepsilon) = g^\varepsilon \mathbf{x} = \mathbf{x} + \varepsilon \mathbf{V}(\mathbf{x}) + o(\varepsilon).
+\]
+We call $\mathbf{V}$ the generator of the flow. This flow map is an example of a \emph{one-parameter group of transformations}.
+\begin{defi}[One-parameter group of transformations]\index{one-parameter group of transformations}\index{1.p.g.t.}
+ A smooth map $g^\varepsilon: \R^n \to \R^n$ is called a \emph{one-parameter group of transformations} (1.p.g.t) if
+ \[
+ g^0 = \id,\quad g^{\varepsilon_1}g^{\varepsilon_2} = g^{\varepsilon_1 + \varepsilon_2}.
+ \]
+ We say such a one-parameter group of transformations is generated by the vector field
+ \[
+ \mathbf{V}(\mathbf{x}) = \left.\frac{\d}{\d \varepsilon} (g^\varepsilon \mathbf{x}) \right|_{\varepsilon = 0}.
+ \]
+ Conversely, every vector field $\mathbf{V}: \R^n \to \R^n$ generates a one-parameter group of transformations via solutions of
+ \[
+ \frac{\d}{\d \varepsilon} \tilde{\mathbf{x}} = \mathbf{V}(\tilde{\mathbf{x}}),\quad \tilde{\mathbf{x}}(0) = \mathbf{x}.
+ \]
+\end{defi}
+For some absurd reason, differential geometers decided that we should represent vector fields in a different way. This notation is standard but odd-looking, and is in many settings more convenient.
+\begin{notation}
+ Consider a vector field $\mathbf{V} = (V_1, \cdots, V_n)^T : \R^n \to \R^n$. This vector field uniquely defines a differential operator
+ \[
+ V = V_1 \frac{\partial}{\partial x_1} + V_2 \frac{\partial}{\partial x_2} + \cdots + V_n \frac{\partial}{\partial x_n}.
+ \]
+ Conversely, any linear differential operator gives us a vector field like that. We will confuse a vector field with the associated differential operator, and we think of the $\frac{\partial}{\partial x_i}$ as a basis for our vector field.
+\end{notation}
+
+\begin{eg}
+ We will write the vector field $\mathbf{V} = (x^2 + y, yx)$ as
+ \[
+ V = (x^2 + y) \frac{\partial}{\partial x} + yx \frac{\partial}{\partial y}.
+ \]
+\end{eg}
+One good reason for using this definition is that we have a simple description of the commutator of two vector fields. Recall that the commutator of two vector fields $\mathbf{V}, \mathbf{W}$ was previously defined by
+\[
+ [\mathbf{V}, \mathbf{W}]_i = \left[\left(\mathbf{V}\cdot \frac{\partial}{\partial \mathbf{x}}\right) \mathbf{W} - \left(\mathbf{W}\cdot \frac{\partial}{\partial \mathbf{x}}\right) \mathbf{V}\right]_i = V_j \frac{\partial W_i}{\partial x_j} - W_j \frac{\partial V_i}{\partial x_j}.
+\]
+Now if we think of the vector field as a differential operator, then we have
+\[
+ V = \mathbf{V}\cdot \frac{\partial}{\partial \mathbf{x}},\quad W = \mathbf{W}\cdot \frac{\partial}{\partial \mathbf{x}}.
+\]
+The usual definition of commutator would then be
+\begin{align*}
+ (VW - WV)(f) &= V_j \frac{\partial}{\partial x_j}\left(W_i \frac{\partial f}{\partial x_i}\right) - W_j \frac{\partial}{\partial x_j} \left(V_i \frac{\partial f}{\partial x_i}\right)\\
+ &= \left(V_j \frac{\partial W_i}{\partial x_j} - W_j \frac{\partial V_i}{\partial x_j}\right) \frac{\partial f}{\partial x_i} + V_j W_i \frac{\partial^2 f}{\partial x_i \partial x_j} - W_j V_i \frac{\partial^2 f}{\partial x_i \partial x_j}\\
+ &= \left(V_j \frac{\partial W_i}{\partial x_j} - W_j \frac{\partial V_i}{\partial x_j}\right) \frac{\partial f}{\partial x_i}\\
+ &= [\mathbf{V}, \mathbf{W}] \cdot \frac{\partial}{\partial \mathbf{x}} f.
+\end{align*}
+So with the new notation, we literally have
+\[
+ [V, W] = VW - WV.
+\]
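We can check this agreement on a concrete pair of vector fields (chosen arbitrarily for illustration), comparing the operator commutator against the component formula:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

# two concrete vector fields, acting as differential operators
V = lambda h: (x**2 + y) * h.diff(x) + y * x * h.diff(y)
W = lambda h: h.diff(x) + x * h.diff(y)

# operator commutator VW - WV applied to a generic function f
comm = sp.expand(V(W(f)) - W(V(f)))

# component formula: [V, W]_i = V_j dW_i/dx_j - W_j dV_i/dx_j
Vc, Wc = sp.Matrix([x**2 + y, y * x]), sp.Matrix([1, x])
coords = sp.Matrix([x, y])
bracket = Wc.jacobian(coords) * Vc - Vc.jacobian(coords) * Wc
expected = bracket[0] * f.diff(x) + bracket[1] * f.diff(y)

# second derivatives of f cancel, and the two definitions agree
assert sp.expand(comm - expected) == 0
```

For this pair, the bracket works out to $[V, W] = -3x\, \partial/\partial x$, a genuinely first-order operator.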
+We shall now look at some examples of vector fields and the one-parameter groups of transformations they generate. In simple cases, it is not hard to find the correspondence.
+
+\begin{eg}
+ Consider a vector field
+ \[
+ V = x \frac{\partial}{\partial x} + \frac{\partial}{\partial y}.
+ \]
+ This generates a $1$-parameter group of transformations via solutions to
+ \[
+ \frac{\d \tilde{x}}{\d \varepsilon} = \tilde{x}, \quad \frac{\d \tilde{y}}{\d \varepsilon} = 1
+ \]
+ where
+ \[
+ (\tilde{x}(0), \tilde{y}(0)) = (x, y).
+ \]
+ As we are well-trained with differential equations, we can just write down the solution
+ \[
+ (\tilde{x}(\varepsilon), \tilde{y}(\varepsilon)) = g^\varepsilon (x, y) = (xe^{\varepsilon}, y + \varepsilon).
+ \]
+\end{eg}
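It is easy to double-check this solution symbolically: it satisfies the flow equations, the initial condition, and the one-parameter group property (a sympy sketch):

```python
import sympy as sp

x, y, e1, e2 = sp.symbols('x y epsilon1 epsilon2')

def g(e, p):
    # claimed flow of V = x d/dx + d/dy
    return (p[0] * sp.exp(e), p[1] + e)

xt, yt = g(e1, (x, y))

# solves the flow equations, with the right initial condition
assert sp.simplify(xt.diff(e1) - xt) == 0 and yt.diff(e1) == 1
assert g(0, (x, y)) == (x, y)

# one-parameter group property: g^{e1} g^{e2} = g^{e1 + e2}
assert (sp.Matrix(g(e1, g(e2, (x, y)))) -
        sp.Matrix(g(e1 + e2, (x, y)))).applyfunc(sp.simplify) == sp.zeros(2, 1)
```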
+
+\begin{eg}
+ Consider the natural action of $\SO(2) \cong S^1$ on $\R^2$ via
+ \[
+ g^\varepsilon(x, y) = (x \cos \varepsilon - y \sin \varepsilon, y \cos \varepsilon + x \sin \varepsilon).
+ \]
+ We can show that $g^0 = \id$ and $g^{\varepsilon_1} g^{\varepsilon_2} = g^{\varepsilon_1 + \varepsilon_2}$. The vector field generating this transformation is
+ \begin{align*}
+ V &= \left(\left.\frac{\d \tilde{x}}{\d \varepsilon}\right|_{\varepsilon = 0}\right)\frac{\partial}{\partial x} + \left(\left.\frac{\d \tilde{y}}{\d \varepsilon}\right|_{\varepsilon = 0}\right) \frac{\partial}{\partial y}\\
+ &= -y \frac{\partial}{\partial x} + x \frac{\partial}{\partial y}.
+ \end{align*}
+ We can plot this as:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-3, 0) -- (3, 0);
+ \draw [->] (0, -3) -- (0, 3);
+
+ \foreach \t in {0,30,60,90,120,150,180,210,240,270,300,330,360} {
+ \begin{scope}[rotate=\t]
+ \draw [-latex'] (2.5, 0) -- +(0, 0.5);
+ \draw [-latex'] (2, 0) -- +(0, 0.4);
+ \draw [-latex'] (1.5, 0) -- +(0, 0.3);
+ \draw [-latex'] (1, 0) -- +(0, 0.2);
+ \draw [-latex'] (0.5, 0) -- +(0, 0.1);
+ \end{scope}
+ }
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
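A quick symbolic check that differentiating the rotation action at $\varepsilon = 0$ really gives these components:

```python
import sympy as sp

x, y, e = sp.symbols('x y epsilon')

# the rotation action of SO(2) on R^2
xt = x * sp.cos(e) - y * sp.sin(e)
yt = y * sp.cos(e) + x * sp.sin(e)

# generator components: derivative at epsilon = 0 gives (-y, x)
assert xt.diff(e).subs(e, 0) == -y
assert yt.diff(e).subs(e, 0) == x
```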
+
+\begin{eg}
+ If
+ \[
+ V = \alpha \frac{\partial}{\partial x},
+ \]
+ then we have
+ \[
+ g^\varepsilon x = x + \alpha \varepsilon.
+ \]
+ This is a translation with constant speed.
+
+ If we instead have
+ \[
+ V = \beta x \frac{\partial}{\partial x},
+ \]
+ then we have
+ \[
+ g^\varepsilon x = e^{\beta\varepsilon} x,
+ \]
+ which is scaling $x$ up at an exponentially growing rate.
+\end{eg}
+
+How does this study of one-parameter groups of transformations relate to our study of Lie groups? It turns out the action of Lie groups on $\R^n$ can be reduced to the study of one-parameter groups of transformations. If a Lie group $G$ acts on $\R^n$, then it might contain many one-parameter groups of transformations. More precisely, we could find some elements $g^{\varepsilon} \in G$ depending smoothly on $\varepsilon$ such that the action of $g^\varepsilon$ on $\R^n$ is a one-parameter group of transformations.
+
+It turns out that Lie groups contain a lot of one-parameter groups of transformations. In general, given any $g(\mathbf{t}) \in G$ (in a neighbourhood of $e \in G$), we can reach it via a sequence of one-parameter groups of transformations:
+\[
+ g(\mathbf{t}) = g_{i_1}^{\varepsilon_1} g_{i_2}^{\varepsilon_2} \cdots g_{i_N}^{\varepsilon_N}.
+\]
+So to understand a Lie group, we just have to understand the one-parameter groups of transformations. And to understand these one-parameter groups, we just have to understand the vector fields that generate them, i.e.\ the Lie algebra, and this is much easier to deal with than a group!
+
+\subsection{Symmetries of differential equations}
+So far we've just been talking about Lie groups in general. We now try to apply this to differential equations. We will want to know when a one-parameter group of transformations is a symmetry of a differential equation.
+
+We denote a general (ordinary) differential equation by
+\[
+ \Delta[\mathbf{x}, u, u_\mathbf{x}, u_{\mathbf{x}\mathbf{x}}, \cdots] = 0.
+\]
+Note that in general, $\Delta$ can be a vector, so that we can have a system of equations. We say $u = u(\mathbf{x})$ is a solution to the differential equation if it satisfies the above equation.
+
+Let $g^\varepsilon$ be a $1$-parameter group of transformations generated by a vector field $V$, and consider the new coordinates
+\[
+ (\tilde{\mathbf{x}}, \tilde{u}) = g^\varepsilon(\mathbf{x}, u).
+\]
+Note that we transform \emph{both} the domain $\mathbf{x}$ and the codomain $u$ of the function $u(\mathbf{x})$, and we are allowed to mix them together.
+
+We call $g^\varepsilon$ a \term{Lie point symmetry} of $\Delta$ if
+\[
+ \Delta [\mathbf{x}, u, u_{\mathbf{x}}, \cdots] = 0 \quad\Longrightarrow\quad \Delta[\tilde{\mathbf{x}}, \tilde{u}, \tilde{u}_{\tilde{\mathbf{x}}}, \cdots] = 0.
+\]
+In other words, it takes solutions to solutions.
+
+We say this Lie point symmetry is generated by $V$.
+
+\begin{eg}
+ Consider the KdV equation
+ \[
+ \Delta = u_t + u_{xxx} - 6 uu_x = 0.
+ \]
+ Then translation in the $t$ direction given by
+ \[
+ g^\varepsilon(x, t, u) = (x, t + \varepsilon, u)
+ \]
+ is a Lie point symmetry. This is generated by
+ \[
+ V = \frac{\partial}{\partial t}.
+ \]
+ Indeed, by the chain rule, we have
+ \[
+ \frac{\partial \tilde{u}}{\partial \tilde{t}} = \frac{\partial u}{\partial \tilde{t}} = \frac{\partial t}{\partial \tilde{t}} \frac{\partial u}{\partial t} + \frac{\partial x}{\partial \tilde{t}} \frac{\partial u}{\partial x} = \frac{\partial u}{\partial t}.
+ \]
+ Similarly, we have
+ \[
+ \tilde{u}_{\tilde{x}} = u_x,\quad \tilde{u}_{\tilde{x}\tilde{x}\tilde{x}} = u_{xxx}.
+ \]
+ So if
+ \[
+ \Delta[x, t, u] = 0,
+ \]
+ then we also have
+ \[
+ \Delta[\tilde{x}, \tilde{t}, \tilde{u}] = \Delta[x, t, u] = 0.
+ \]
+ In other words, the vector field $V = \frac{\partial}{\partial t}$ generates a Lie point symmetry of the KdV equation.
+\end{eg}
+Obviously Lie point symmetries give us new solutions from old ones. More importantly, we can use it to solve equations!
+\begin{eg}
+ Consider the ODE
+ \[
+ \frac{\d u}{\d x} = F\left(\frac{u}{x}\right).
+ \]
+ We see that there are things that look like $u/x$ on both sides. So it is not too hard to see that this admits a Lie-point symmetry
+ \[
+ g^\varepsilon (x, u) = (e^\varepsilon x, e^\varepsilon u).
+ \]
+ This Lie point symmetry is generated by
+ \[
+ V = x \frac{\partial}{\partial x} + u \frac{\partial}{\partial u}.
+ \]
+ The trick is to find coordinates $(s, t)$ such that $V(s) = 0$ and $V(t) = 1$. We call these ``invariant coordinates''. Then since $V$ is still a symmetry of the equation, this suggests that $t$ should not appear explicitly in the differential equation, and this will in general make our lives easier. Of course, terms like $t_s$ can still appear because translating $t$ by a constant does not change $t_s$.
+
+ We pick
+ \[
+ s = \frac{u}{x}, \quad t = \log |x|,
+ \]
+ which does indeed satisfy $V(s) = 0, V(t) = 1$. We can invert these to get
+ \[
+ x = e^t, \quad u = se^t.
+ \]
+ With respect to the $(s, t)$ coordinates, the ODE becomes
+ \[
+ \frac{\d t}{\d s} = \frac{1}{F(s) - s},
+ \]
+ at least for $F(s) \not= s$. As promised, this does not have an explicit $t$ dependence. So we can actually integrate this thing up. We can write the solution as
+ \[
+ t = C + \int^s \frac{\d s'}{F(s') - s'}.
+ \]
+ Going back to the original coordinates, we know
+ \[
+ \log |x| = C + \int^{u/x} \frac{\d s}{F(s) - s}.
+ \]
+ If we actually had an expression for $F$ and did the integral, we could in principle rearrange this to get an expression for $u$ in terms of $x$. So the knowledge of the Lie point symmetry allowed us to integrate up our ODE.
+\end{eg}
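As a concrete illustration, take $F(s) = 2s$ (an arbitrary choice, not from the text). Then the ODE is $u' = 2u/x$, whose solution is $u = Cx^2$, and the quadrature formula gives $\log|x| = C + \log(u/x)$, which rearranges to the same thing. A sympy sketch checking both:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
s = sp.symbols('s', positive=True)
u = sp.Function('u')

# direct solution of u' = F(u/x) with F(s) = 2s, i.e. u' = 2u/x
sol = sp.dsolve(sp.Eq(u(x).diff(x), 2 * u(x) / x), u(x))
assert sol.rhs == sp.Symbol('C1') * x**2

# the quadrature: integral of ds/(F(s) - s) = integral of ds/s = log(s)
assert sp.integrate(1 / (2 * s - s), s) == sp.log(s)
```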
+In general, for an $n$th order ODE
+\[
+ \Delta[x, u, u', \cdots, u^{(n)}] = 0
+\]
+admitting a Lie point symmetry generated by
+\[
+ V= \xi(x, u) \frac{\partial}{\partial x} + \eta(x, u) \frac{\partial}{\partial u},
+\]
+we introduce coordinates
+\[
+ s = s(u, x),\quad t = t(u, x)
+\]
+such that in the new coordinates, we have
+\[
+ V = \frac{\partial}{\partial t}.
+\]
+This means that in the new coordinates, the ODE has the form
+\[
+ \Delta[s, t', \cdots, t^{(n)} ] = 0.
+\]
+Note that there is no explicit $t$! We can now set $r = t'$, so we get an $(n - 1)$th order ODE
+\[
+ \Delta[s, r, r', \cdots, r^{(n - 1)}] = 0,
+\]
+i.e.\ we have reduced the order of the ODE by $1$. Now rinse and repeat.
+
+\subsection{Jets and prolongations}
+This is all nice, but we still need a way to find Lie point symmetries. So far, we have just found them by divine inspiration, which is not particularly helpful in general. Is there a more systematic way of finding Lie symmetries?
+
+We can start by looking at the trivial case --- a $0$th order ODE
+\[
+ \Delta[x, u] = 0.
+\]
+Then we know $g^\varepsilon: (x, u) \mapsto (\tilde{x}, \tilde{u})$ is a Lie point symmetry if
+\[
+ \Delta[x, u] = 0\quad\Longrightarrow\quad \Delta[\tilde{x}, \tilde{u}] = \Delta[g^\varepsilon(x, u)] = 0.
+\]
+Can we reduce this to a statement about the generator of $g^\varepsilon$? Here we need to assume that $\Delta$ is of \term{maximal rank}, i.e.\ the matrix of derivatives
+\[
+ \frac{\partial \Delta_j}{\partial y_i}
+\]
+is of maximal rank, where the $y_i$ run over $x, u$, and in general all coordinates. So, for example, the following theory will not work if, say, $\Delta[x, u] = x^2$. Assuming $\Delta$ is indeed of maximal rank, it is an exercise on the example sheet to see that if $V$ is the generator of $g^\varepsilon$, then $g^\varepsilon$ is a Lie point symmetry iff
+\[
+ \Delta = 0 \quad\Longrightarrow\quad V(\Delta) = 0.
+\]
+This essentially says that the flow doesn't change $\Delta$ iff the derivative of $\Delta$ along $V$ is constantly zero, which makes sense. Here we are thinking of $V$ as a differential operator. We call this constraint an \term{on-shell} condition, because we only impose it whenever $\Delta = 0$ is satisfied, instead of at all points.
+
+This equivalent statement is very easy! This is just an algebraic equation for the coefficients of $V$, and it is in general very easy to solve!
+
+However, as you may have noticed, these aren't really ODEs. They are just equations. So how do we generalize this to ODEs of order $n \geq 1$? Consider a general vector field
+\[
+ V(x, u) = \xi(x, u) \frac{\partial}{\partial x} + \eta(x, u) \frac{\partial}{\partial u}.
+\]
+This only knows what to do to $x$ and $u$. But if we know how $x$ and $u$ change, we should also know how $u_x, u_{xx}$ etc. change. Indeed this is true, and extending the action of $V$ to the derivatives is known as the \emph{prolongation} of the vector field.
+
+We start with a concrete example.
+
+\begin{eg}
+ Consider a one-parameter group of transformations
+ \[
+ g^\varepsilon: (x, u) \mapsto (e^\varepsilon x, e^{-\varepsilon}u) = (\tilde{x}, \tilde{u})
+ \]
+ with generator
+ \[
+ V = x\frac{\partial}{\partial x} - u\frac{\partial}{\partial u}.
+ \]
+ This induces a transformation
+ \[
+ (x, u, u_x) \mapsto (\tilde{x}, \tilde{u}, \tilde{u}_{\tilde{x}})
+ \]
+ By the chain rule, we know
+ \[
+ \frac{\d \tilde{u}}{\d \tilde{x}} = \frac{\d\tilde{u}/\d x}{\d \tilde{x}/\d x} = e^{-2 \varepsilon} u_x.
+ \]
+ So in fact
+ \[
+ (\tilde{x}, \tilde{u}, \tilde{u}_{\tilde{x}}) \equiv (e^\varepsilon x, e^{-\varepsilon} u, e^{-2\varepsilon} u_x).
+ \]
+\end{eg}
+
+If we call $(x, u)$ coordinates for the \term{base space}, then we call the extended system $(x, u, u_x)$ coordinates for the \term{first jet space}. Given any function $u = u(x)$, we will get a point $(x, u, u_x)$ in the jet space for each $x$.
+
+What we've just seen is that a one-parameter group of transformation of the base space induces a one-parameter group of transformation of the first jet space. This is known as the \term{prolongation}, written
+\[
+ \pr^{(1)}g^\varepsilon: (x, u, u_x) \mapsto (\tilde{x}, \tilde{u}, \tilde{u}_{\tilde{x}}) = (e^\varepsilon x, e^{-\varepsilon} u, e^{-2\varepsilon} u_x).
+\]
+One might find it a bit strange to call $u_x$ a coordinate. If we don't like doing that, we can just replace $u_x$ with a different symbol $p_1$. If we have the $n$th derivative, we replace the $n$th derivative with $p_n$.
+
+Since we have a one-parameter group of transformations, we can write down the generator. We see that $\pr^{(1)}g^\varepsilon$ is generated by
+\[
+ \pr^{(1)} V = x\frac{\partial}{\partial x} - u \frac{\partial}{\partial u} - 2 u_x \frac{\partial}{\partial u_x}.
+\]
+This is called the \term{first prolongation} of $V$.
+
+Of course, we can keep on going. Similarly, $\pr^{(2)} g^\varepsilon$ acts on the second jet space which has coordinates $(x, u, u_x, u_{xx})$. In this case, we have
+\[
+ \pr^{(2)} g^\varepsilon: (x, u, u_x, u_{xx}) \mapsto (\tilde{x}, \tilde{u}, \tilde{u}_{\tilde{x}}, \tilde{u}_{\tilde{x}\tilde{x}}) \equiv (e^\varepsilon x, e^{-\varepsilon}u, e^{-2\varepsilon} u_x, e^{-3\varepsilon}u_{xx}).
+\]
+This is then generated by
+\[
+ \pr^{(2)}V = x\frac{\partial}{\partial x} - u \frac{\partial}{\partial u} - 2 u_x \frac{\partial}{\partial u_x} - 3 u_{xx} \frac{\partial}{\partial u_{xx}}.
+\]
+Note that we don't have to recompute all terms. The $x, u, u_x$ terms did not change, so we only need to check what happens to $\tilde{u}_{\tilde{x}\tilde{x}}$.
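The exponents $-2\varepsilon$ and $-3\varepsilon$ appearing in these prolongations can be re-derived mechanically via the chain rule (a sympy sketch):

```python
import sympy as sp

x, e = sp.symbols('x epsilon')
u = sp.Function('u')(x)

# the transformed variables x~ = e^eps x, u~ = e^{-eps} u
xt = sp.exp(e) * x
ut = sp.exp(-e) * u

# total derivative with respect to x~, via the chain rule
Dxt = lambda h: h.diff(x) / xt.diff(x)

ut_x = Dxt(ut)
ut_xx = Dxt(ut_x)
assert sp.simplify(ut_x - sp.exp(-2 * e) * u.diff(x)) == 0
assert sp.simplify(ut_xx - sp.exp(-3 * e) * u.diff(x, 2)) == 0
```

Each extra derivative with respect to $\tilde{x}$ costs a factor of $e^{-\varepsilon}$, which is exactly the pattern of weights in the prolonged vector fields.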
+
+We can now think of an $n$th order ODE
+\[
+ \Delta[x, u, u_x, \cdots, u^{(n)}] = 0
+\]
+as an algebraic equation on the $n$th jet space. Of course, this is not just an arbitrary algebraic equation. We will only consider solutions in the $n$th jet space that come from some function $u = u(x)$. Similarly, we only consider symmetries on the $n$th jet space that come from the prolongation of some transformation on the base space.
+
+With that restriction in mind, we have effectively dressed up our problem into an algebraic problem, just like the case of $\Delta[x, u] = 0$ we discussed at the beginning. Then $g^\varepsilon: (x, u) \mapsto (\tilde{x}, \tilde{u})$ is a Lie point symmetry if
+\[
+ \Delta[\tilde{x}, \tilde{u}, \tilde{u}_{\tilde{x}}, \ldots, \tilde{u}^{(n)}] = 0
+\]
+when $\Delta = 0$. Or equivalently, we need
+\[
+ \Delta[\pr^{(n)} g^\varepsilon(x, u, \ldots, u^{(n)})] = 0
+\]
+when $\Delta = 0$. This is just a one-parameter group of transformations on a huge coordinate system on the jet space. Thinking of all $x, u, \cdots, u^{(n)}$ as just independent coordinates, we can rewrite it in terms of vector fields. (Assuming maximal rank) this is equivalent to asking for
+\[
+ \pr^{(n)}V (\Delta) = 0.
+\]
+This results in an overdetermined system of differential equations for $(\xi, \eta)$, where
+\[
+ V(x, u) = \xi(x, u) \frac{\partial}{\partial x} + \eta(x, u) \frac{\partial}{\partial u}.
+\]
+Now in order to actually use this, we need to be able to compute the $n$th prolongation of an arbitrary vector field. This is what we are going to do next.
+
+Note that if we tried to compute the prolongation of the action of the Lie \emph{group}, then it would be horrendous. However, what we actually need to compute is the prolongation of the vector field, which is about the Lie \emph{algebra}. This makes it much nicer.
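+To see the criterion $\pr^{(n)}V(\Delta) = 0$ in action, here is a toy example of my own (not from the notes): take $\Delta = x u_x + u$, whose solutions $u = c/x$ are permuted by the scaling symmetry, and apply the first prolongation computed above.
+
+```python
+# Toy check of the symmetry criterion pr^(1) V (Delta) = 0 on Delta = 0,
+# for V = x d/dx - u d/du and Delta = x*u_x + u (solutions u = c/x).
+import sympy as sp
+
+x, u, ux = sp.symbols('x u u_x')  # coordinates on the first jet space
+Delta = x * ux + u
+
+# pr^(1) V = x d/dx - u d/du - 2 u_x d/du_x, as computed above
+prV_Delta = x * sp.diff(Delta, x) - u * sp.diff(Delta, u) - 2 * ux * sp.diff(Delta, ux)
+
+# pr V(Delta) = -Delta, which indeed vanishes whenever Delta = 0
+assert sp.simplify(prV_Delta + Delta) == 0
+```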
+
+We can write
+\[
+ g^\varepsilon(x, u) = (\tilde{x}, \tilde{u}) = (x + \varepsilon \xi(x, u), u + \varepsilon \eta(x, u)) + o(\varepsilon).
+\]
+We know the $n$th prolongation of $V$ must be of the form
+\[
+ \pr^{(n)}V = V + \sum_{k = 1}^n \eta_k\frac{\partial}{\partial u^{(k)}},
+\]
+where we have to find out what $\eta_k$ is. Then we know $\eta_k$ will satisfy
+\begin{align*}
+ \pr^{(n)}g^{\varepsilon}(x, u, \cdots, u^{(n)}) &= (\tilde{x}, \tilde{u}, \cdots, \tilde{u}^{(n)})\\
+ &= (x + \varepsilon \xi, u + \varepsilon \eta, u_x + \varepsilon\eta_1, \cdots, u^{(n)} + \varepsilon\eta_n) + o(\varepsilon).
+\end{align*}
+To find $\eta_1$, we use the \emph{contact condition}
+\[
+ \d \tilde{u} = \frac{\d \tilde{u}}{\d \tilde{x}} \d \tilde{x} = \tilde{u}_{\tilde{x}} \d \tilde{x}.
+\]
+We now use the fact that
+\begin{align*}
+ \tilde{x} &= x + \varepsilon \xi(x, u) + o(\varepsilon)\\
+ \tilde{u} &= u + \varepsilon \eta(x, u) + o(\varepsilon).
+\end{align*}
+Substituting in, we have
+\[
+ \d u + \varepsilon \d \eta = \tilde{u}_{\tilde{x}}(\d x + \varepsilon \d \xi) + o(\varepsilon).
+\]
+We want to write everything in terms of $\d x$. We have
+\begin{align*}
+ \d u &= u_x \d x\\
+ \d \eta &= \frac{\partial \eta}{\partial x} \d x + \frac{\partial \eta}{\partial u} \d u\\
+ &= \left(\frac{\partial \eta}{\partial x} + u_x \frac{\partial \eta}{\partial u}\right) \;\d x\\
+ &= D_x \eta \;\d x,
+\end{align*}
+where $D_x$ is the total derivative
+\[
+ D_x = \frac{\partial}{\partial x} + u_x \frac{\partial}{\partial u} + u_{xx} \frac{\partial}{\partial u_x} + \cdots.
+\]
+We similarly have
+\[
+ \d \xi = D_x \xi \;\d x.
+\]
+So substituting in, we have
+\[
+ (u_x + \varepsilon D_x\eta) \d x = \tilde{u}_{\tilde{x}}(1 + \varepsilon D_x \xi) \d x + o(\varepsilon).
+\]
+This implies that
+\begin{align*}
+ \tilde{u}_{\tilde{x}} &= \frac{u_x + \varepsilon D_x \eta}{1 + \varepsilon D_x \xi} + o(\varepsilon)\\
+ &= (u_x + \varepsilon D_x \eta)(1 - \varepsilon D_x \xi) + o(\varepsilon)\\
+ &= u_x + \varepsilon(D_x \eta - u_x D_x \xi) + o(\varepsilon).
+\end{align*}
+So we have
+\[
+ \eta_1 = D_x \eta - u_x D_x \xi.
+\]
+Now building up $\eta_k$ recursively, we use the contact condition
+\[
+ \d \tilde{u}^{(k)} = \frac{\d \tilde{u}^{(k)}}{\d \tilde{x}}\d \tilde{x} = \tilde{u}^{(k + 1)} \d \tilde{x}.
+\]
+We use
+\begin{align*}
+ \tilde{u}^{(k)} &= u^{(k)} + \varepsilon \eta_k + o(\varepsilon)\\
+ \tilde{x} &= x + \varepsilon \xi + o(\varepsilon).
+\end{align*}
+Substituting that back in, we get
+\[
+ (u^{(k + 1)} + \varepsilon D_x \eta_k) \;\d x = \tilde{u}^{(k + 1)} (1 + \varepsilon D_x \xi) \;\d x + o(\varepsilon).
+\]
+So we get
+\begin{align*}
+ \tilde{u}^{(k + 1)} &= (u^{(k + 1)} + \varepsilon D_x \eta_k) (1 - \varepsilon D_x \xi) + o(\varepsilon)\\
+ &= u^{(k + 1)} + \varepsilon(D_x \eta_k - u^{(k + 1)} D_x \xi) + o(\varepsilon).
+\end{align*}
+So we know
+\[
+ \eta_{k + 1} = D_x \eta_k - u^{(k + 1)} D_x \xi.
+\]
+In summary, we have the following proposition:
+\begin{prop}[Prolongation formula]\index{Prolongation formula}
+ Let
+ \[
+ V(x, u) = \xi(x, u) \frac{\partial}{\partial x} + \eta(x, u) \frac{\partial}{\partial u}.
+ \]
+ Then we have
+ \[
+ \pr^{(n)}V = V + \sum_{k = 1}^n \eta_k\frac{\partial}{\partial u^{(k)}},
+ \]
+ where
+ \begin{align*}
+ \eta_0 &= \eta(x, u)\\
+ \eta_{k + 1} &= D_x \eta_k - u^{(k + 1)} D_x \xi.
+ \end{align*}
+\end{prop}
+
+\begin{eg}
+ For
+ \[
+ g^\varepsilon: (x, u) \mapsto (e^\varepsilon x, e^{-\varepsilon}u),
+ \]
+ we have
+ \[
+ V = x \frac{\partial}{\partial x} + (-u) \frac{\partial}{\partial u}.
+ \]
+ So we have
+ \begin{align*}
+ \xi(x, u) &= x\\
+ \eta(x, u) &= -u.
+ \end{align*}
+ So by the prolongation formula, we have
+ \[
+ \pr^{(1)}V = V + \eta_1 \frac{\partial}{\partial u_x},
+ \]
+ where
+ \[
+ \eta_1 = D_x (-u) - u_x D_x(x) = -u_x - u_x = -2 u_x,
+ \]
+ in agreement with what we had earlier!
+\end{eg}
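+The recursion is also easy to implement symbolically. Here is a short sketch (my own, assuming sympy; the symbols $u_k$ stand for $u^{(k)}$) that builds the $\eta_k$ via $\eta_{k + 1} = D_x \eta_k - u^{(k + 1)} D_x \xi$ and recovers the prolongations of the scaling generator:
+
+```python
+# Implement the prolongation formula for xi = x, eta = -u and check that
+# eta_1 = -2 u_x and eta_2 = -3 u_xx, matching the earlier computations.
+import sympy as sp
+
+n = 3
+x = sp.Symbol('x')
+us = [sp.Symbol('u')] + [sp.Symbol(f'u_{k}') for k in range(1, n + 2)]  # u, u_x, u_xx, ...
+
+def D_x(f):
+    """Total derivative on the jet space: d/dx + u_x d/du + u_xx d/du_x + ..."""
+    return sp.diff(f, x) + sum(us[k + 1] * sp.diff(f, us[k]) for k in range(n + 1))
+
+xi, eta = x, -us[0]
+etas = [eta]
+for k in range(n):
+    etas.append(sp.expand(D_x(etas[k]) - us[k + 1] * D_x(xi)))
+
+assert etas[1] == -2 * us[1]   # eta_1 = -2 u_x
+assert etas[2] == -3 * us[2]   # eta_2 = -3 u_xx
+```
+
+The pattern $\eta_k = -(k + 1) u^{(k)}$ continues for all higher prolongations of this particular generator.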
+
+In the last example sheet, we will derive an analogous prolongation formula for PDEs.
+
+\subsection{\texorpdfstring{Painlev\'e}{Painleve} test and integrability}
+We end with a section on the Painlev\'e test. If someone just gave us a PDE, how can we figure out if it is integrable? It turns out there are some necessary conditions for integrability we can check.
+
+Recall the following definition.
+\begin{defi}[Singularity]\index{singularity}
+ A \emph{singularity} of a complex-valued function $w = w(z)$ is a place at which it loses analyticity.
+\end{defi}
+These can be poles, branch points, essential singularities etc.
+
+Suppose we had an ODE of the form
+\[
+ \frac{\d^2 w}{\d z^2} + p(z) \frac{\d w}{ \d z} + q(z) w = 0,
+\]
+and we want to know if the solutions have singularities. It turns out that \emph{any} singularity of a solution $w = w(z)$ must be inherited from the functions $p(z), q(z)$. In particular, the locations of the singularities will \emph{not} depend on initial conditions $w(z_0), w'(z_0)$.
+
+This is \emph{not} the case for non-linear ODE's. For example, the equation
+\[
+ \frac{\d w}{\d z} + w^2 = 0
+\]
+gives us
+\[
+ w(z) = \frac{1}{z - z_0}.
+\]
+The location of this singularity changes, and it depends on the initial condition. We say it is \term{movable}.
+
+This leads to the following definition:
+\begin{defi}[Painlev\'e property]\index{Painlev\'e property}
+ We will say that an ODE of the form
+ \[
+ \frac{\d^n w}{\d z^n} = F\left(\frac{\d^{n - 1} w}{\d z^{n - 1}}, \cdots, w, z\right)
+ \]
+ has the \emph{Painlev\'e property} if the movable singularities of its solutions are at worst poles.
+\end{defi}
+
+\begin{eg}
+ We have
+ \[
+ \frac{\d w}{\d z} + w^2 = 0.
+ \]
+ has a solution
+ \[
+ w(z) = \frac{1}{z - z_0}.
+ \]
+ Since this movable singularity is a pole, this equation has the \emph{Painlev\'e property}.
+\end{eg}
+
+\begin{eg}
+ Consider the equation
+ \[
+ \frac{\d w}{\d z} + w^3 = 0.
+ \]
+ Then the solution is
+ \[
+ w(z) = \frac{1}{\sqrt{2(z - z_0)}},
+ \]
+ whose movable singularity is a branch point, not a pole. So this equation does not have the Painlev\'e property.
+\end{eg}
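+Both claims are a one-line verification, e.g.\ with sympy (my own check):
+
+```python
+# w = 1/(z - z0) solves w' + w^2 = 0 (a movable pole), while
+# w = 1/sqrt(2(z - z0)) solves w' + w^3 = 0 (a movable branch point).
+import sympy as sp
+
+z, z0 = sp.symbols('z z_0')
+
+w1 = 1 / (z - z0)
+assert sp.simplify(sp.diff(w1, z) + w1**2) == 0
+
+w2 = 1 / sp.sqrt(2 * (z - z0))
+assert sp.simplify(sp.diff(w2, z) + w2**3) == 0
+```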
+In the olden days, Painlev\'e wanted to classify all ODE's of the form
+\[
+ \frac{\d^2 w}{\d z^2} = F\left(\frac{\d w}{\d z}, w, z\right),
+\]
+where $F$ is a rational function, that had the Painlev\'e property.
+
+He managed to show that there are fifty such equations (up to simple coordinate transformations). The interesting thing is that 44 of these can be solved in terms of well-known functions, e.g.\ Jacobi elliptic functions, Weierstrass $\wp$ functions, Bessel functions etc.
+
+The remaining six gave way to solutions that were genuinely new functions, called the six \term{Painlev\'e transcendents}. The six differential equations are
+\begin{align*}
+ \text{(PI)}\quad\frac{\d^2 w}{\d z^2} &= 6 w^2 + z\\
+ \text{(PII)}\quad\frac{\d^2 w}{\d z^2} &= 2w^3 + zw + \alpha\\
+ \text{(PIII)}\quad\frac{\d^2 w}{\d z^2} &= \frac{1}{w}\left(\frac{\d w}{\d z}\right)^2 + \frac{1}{z}\left(-\frac{\d w}{\d z} + \alpha w^2 + \beta\right) + \gamma w^3 + \frac{\delta}{w}\\
+ \text{(PIV)}\quad\frac{\d^2 w}{\d z^2} &= \frac{1}{2w}\left(\frac{\d w}{\d z}\right)^2 + \frac{3 w^3}{2} + 4zw^2 + 2(z^2 - \alpha) w + \frac{\beta}{w}\\
+ \text{(PV)}\quad\frac{\d^2 w}{\d z^2} &= \left(\frac{1}{2w} + \frac{1}{w - 1}\right) \left(\frac{\d w}{\d z}\right)^2 - \frac{1}{z} \frac{\d w}{\d z} + \frac{(w - 1)^2}{z^2}\left(\alpha w + \frac{\beta}{w}\right) \\
+ &\hphantom{\left(\frac{1}{2w} + \frac{1}{w - 1}\right) \left(\frac{\d w}{\d z}\right)^2 - \frac{1}{z} \frac{\d w}{\d z} + \frac{(w - 1)^2}{z^2}}+ \frac{\gamma w}{z} + \frac{\delta w(w + 1)}{w - 1}\\
+ \text{(PVI)}\quad\frac{\d^2 w}{\d z^2} &= \frac{1}{2}\left(\frac{1}{w} + \frac{1}{w - 1} + \frac{1}{w - z}\right)\left(\frac{\d w}{\d z}\right)^{\!2} - \left(\frac{1}{z} + \frac{1}{z - 1} + \frac{1}{w - z}\right)\!\frac{\d w}{\d z}\\
+ &\quad\quad+ \frac{w(w - 1)(w - z)}{z^2 (z - 1)^2} \left(\alpha + \frac{\beta z}{w^2} + \frac{\gamma (z - 1)}{(w - 1)^2} + \frac{\delta z(z - 1)}{(w - z)^2}\right).
+\end{align*}
+Fun fact: Painlev\'e served as the prime minister of France twice, for 9 weeks and 7 months respectively.
+
+This is all good, but what has this got to do with integrability of PDE's?
+
+\begin{conjecture}[Ablowitz-Ramani-Segur conjecture (1980)]\index{Ablowitz-Ramani-Segur conjecture}
+ Every ODE reduction (explained later) of an integrable PDE has the Painlev\'e property.
+\end{conjecture}
+This is still a conjecture since, as we've previously mentioned, we don't really have a definition of integrability. However, we have proved this conjecture for certain special cases, where we have managed to pin down some specific definitions.
+
+What do we mean by ODE reduction? Vaguely speaking, if we have a Lie point symmetry of a PDE, then we can use it to introduce invariant coordinates, and the PDE then reduces to an ODE in these coordinates. We can look at some concrete examples:
+
+\begin{eg}
+ In the wave equation, we can try a solution of the form $u(x, t) = f(x - ct)$, and then the wave equation gives us an ODE (in this case a trivial one) for $f$.
+\end{eg}
+
+\begin{eg}
+ Consider the sine--Gordon equation in light cone coordinates
+ \[
+ u_{xt} = \sin u.
+ \]
+ This equation admits a Lie-point symmetry
+ \[
+ g^\varepsilon: (x, t, u) \mapsto (e^\varepsilon x, e^{-\varepsilon}t, u),
+ \]
+ which is generated by
+ \[
+ V = x \frac{\partial}{\partial x} - t \frac{\partial}{\partial t}.
+ \]
+ We should now introduce a variable invariant under this Lie-point symmetry. Clearly $z = xt$ is invariant, since
+ \[
+ V(z) = xt - tx = 0.
+ \]
+ What we should do, then is to look for a solution that depends on $z$, say
+ \[
+ u(x, t) = F(z).
+ \]
+ Setting
+ \[
+ w = e^{iF},
+ \]
+ the sine--Gordon equation becomes
+ \[
+ \frac{\d^2 w}{\d z^2} = \frac{1}{w} \left(\frac{\d w}{\d z}\right)^2 - \frac{1}{z} \frac{\d w}{\d z} + \frac{w^2}{2z} - \frac{1}{2z}.
+ \]
+ This is equivalent to PIII, i.e.\ this ODE reduction has the Painlev\'e property.
+\end{eg}
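+The first step of this reduction can be checked mechanically. The snippet below (my own check) substitutes the ansatz $u(x, t) = F(xt)$ into $u_{xt} = \sin u$ and confirms that it reduces to the intermediate ODE $z F'' + F' = \sin F$, which the substitution $w = e^{iF}$ then turns into the equation above:
+
+```python
+# Substitute u(x, t) = F(xt) into the sine-Gordon equation u_xt = sin(u)
+# and check that it reduces to the ODE z F'' + F' = sin(F).
+import sympy as sp
+
+x, t, z = sp.symbols('x t z')
+F = sp.Function('F')
+
+u = F(x * t)
+pde = sp.diff(u, x, t) - sp.sin(u)   # the PDE evaluated on the ansatz
+
+# the claimed ODE, with z substituted back to xt
+ode = (z * F(z).diff(z, 2) + F(z).diff(z) - sp.sin(F(z))).subs(z, x * t)
+
+assert sp.simplify(pde - ode) == 0
+```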
+
+\begin{eg}
+ Consider the KdV equation
+ \[
+ u_t + u_{xxx} - 6 uu_x = 0.
+ \]
+ This admits a not-so-obvious Lie-point symmetry
+ \[
+ g^\varepsilon (x, t, u) = \left(x + \varepsilon t + \frac{1}{2} \varepsilon^2, t + \varepsilon, u - \frac{1}{6} \varepsilon\right).
+ \]
+ This is generated by
+ \[
+ V = t \frac{\partial}{\partial x} + \frac{\partial}{\partial t} - \frac{1}{6} \frac{\partial}{\partial u}.
+ \]
+ We then have invariant coordinates
+ \[
+ z = x - \frac{1}{2}t^2,\quad w = \frac{1}{6} t + u.
+ \]
+ To get an ODE for $w$, we write the second equation as
+ \[
+ u(x, t) = - \frac{1}{6} t + w(z).
+ \]
+ Then we have
+ \[
+ u_t = - \frac{1}{6} - t w'(z),\quad u_x = w'(z),\quad u_{xx} = w''(z),\quad u_{xxx} = w'''(z).
+ \]
+ So KdV becomes
+ \[
+ 0 = u_t + u_{xxx} - 6 uu_x = - \frac{1}{6} + w'''(z) - 6w w'(z).
+ \]
+ We would have had some problems if the $t$'s didn't cancel out, because we wouldn't have an ODE in $w$. But since we constructed these coordinates such that $w$ and $z$ are invariant under the Lie point symmetry but $t$ is not, we are guaranteed that there will be no $t$ left in the equation.
+
+ Integrating this equation once, we get an equation
+ \[
+ w''(z) - 3 w(z)^2 - \frac{1}{6}z + z_0 = 0,
+ \]
+ which is PI up to a rescaling of $w$ and $z$. So this ODE reduction of KdV has the Painlev\'e property.
+\end{eg}
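+Again we can let sympy do the chain rule for us (my own check, not part of the notes):
+
+```python
+# Substitute u = -t/6 + w(x - t^2/2) into KdV and check the reduction.
+import sympy as sp
+
+x, t, z = sp.symbols('x t z')
+w = sp.Function('w')
+
+u = -t / 6 + w(x - t**2 / 2)
+kdv = sp.diff(u, t) + sp.diff(u, x, 3) - 6 * u * sp.diff(u, x)
+
+# the claimed ODE -1/6 + w''' - 6 w w', with z substituted back
+target = (sp.Rational(-1, 6) + w(z).diff(z, 3)
+          - 6 * w(z) * w(z).diff(z)).subs(z, x - t**2 / 2)
+
+assert sp.simplify(kdv - target) == 0
+```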
+
+In summary, the Painlev\'e test of integrability is as follows:
+\begin{enumerate}
+ \item Find all Lie point symmetries of the PDE.
+ \item Find all corresponding ODE reductions.
+ \item Test each ODE for Painlev\'e property.
+\end{enumerate}
+We can then use this to show that a PDE is \emph{not} integrable. Unfortunately, there is no real test for the converse.
+
+\printindex
+\end{document}
diff --git a/books/cam/II_M/linear_analysis.tex b/books/cam/II_M/linear_analysis.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0a6f6d4f4fc9ab7462116fbf01ec64436bdc5caf
--- /dev/null
+++ b/books/cam/II_M/linear_analysis.tex
@@ -0,0 +1,3619 @@
+\documentclass[a4paper]{article}
+
+\def\npart {II}
+\def\nterm {Michaelmas}
+\def\nyear {2015}
+\def\nlecturer {J.\ W.\ Luk}
+\def\ncourse {Linear Analysis}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Part IB Linear Algebra, Analysis II and Metric and Topological Spaces are essential}
+\vspace{10pt}
+
+\noindent Normed and Banach spaces. Linear mappings, continuity, boundedness, and norms. Finite-dimensional normed spaces.\hspace*{\fill} [4]
+
+\vspace{5pt}
+\noindent The Baire category theorem. The principle of uniform boundedness, the closed graph theorem and the inversion theorem; other applications.\hspace*{\fill} [5]
+
+\vspace{5pt}
+\noindent The normality of compact Hausdorff spaces. Urysohn's lemma and Tietze's extension theorem. Spaces of continuous functions. The Stone-Weierstrass theorem and applications. Equicontinuity: the Ascoli-Arzel\`a theorem.\hspace*{\fill} [5]
+
+\vspace{5pt}
+\noindent Inner product spaces and Hilbert spaces; examples and elementary properties. Orthonormal systems, and the orthogonalization process. Bessel's inequality, the Parseval equation, and the Riesz-Fischer theorem. Duality; the self duality of Hilbert space.\hspace*{\fill} [5]
+
+\vspace{5pt}
+\noindent Bounded linear operators, invariant subspaces, eigenvectors; the spectrum and resolvent set. Compact operators on Hilbert space; discreteness of spectrum. Spectral theorem for compact Hermitian operators.\hspace*{\fill} [5]}
+
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In IB Linear Algebra, we studied vector spaces in general. Most of the time, we concentrated on finite-dimensional vector spaces, since these are easy to reason about. For example, we know that every finite-dimensional vector space (by definition) has a basis. Using the basis, we can represent vectors and linear maps concretely as column vectors (in $\F^n$) and matrices.
+
+However, in real life, often we have to work with infinite-dimensional vector spaces instead. For example, we might want to consider the vector space of all continuous (real) functions, or the vector space of infinite sequences. It is difficult to analyse these spaces using the tools from IB Linear Algebra, since many of those assume the vector space is finite-dimensional. Moreover, in these cases, we often are not interested in the vector space structure itself. It's just that the objects we are interested in happen to have a vector space structure. Instead, we want to look at notions like continuity and convergence. We want to do \emph{analysis} on vector spaces. These are not something the vector space structure itself provides.
+
+In this course, we are going to give our vector spaces some additional structure. For the first half of the course, we will grant our vector space a \emph{norm}. This allows us to assign a ``length'' to each vector. With this, we can easily define convergence and continuity. It turns out this allows us to understand a lot about, say, function spaces and sequence spaces.
+
+In the second half, we will grant a stronger notion, called the inner product. Among many things, this allows us to define orthogonality of the elements of a vector space, which is something we are familiar with from, say, IB Methods.
+
+Most of the time, we will be focusing on infinite-dimensional vector spaces, since finite-dimensional spaces are boring. In fact, we have a section dedicated to proving that finite-dimensional vector spaces are boring. In particular, they are all isomorphic to $\R^n$, and most of our theorems can be proved trivially for finite-dimensional spaces using what we already know from IB Linear Algebra. So we will not care much about them.
+
+\section{Normed vector spaces}
+In IB Linear Algebra, we have studied vector spaces in quite a lot of detail. However, just knowing something is a vector space usually isn't too helpful. Often, we would want the vector space to have some additional structure. The first structure we will study is a \emph{norm}.
+
+\begin{defi}[Normed vector space]
+ A \emph{normed vector space} is a pair $(V, \|\ph \|)$, where $V$ is a vector space over a field $\F$ and $\|\ph \|$ is a function $\|\ph \|: V \to \R$, known as the \emph{norm}, satisfying
+ \begin{enumerate}
+ \item $\|\mathbf{v}\| \geq 0$ for all $\mathbf{v}\in V$, with equality iff $\mathbf{v} = \mathbf{0}$.
+ \item $\| \lambda \mathbf{v}\| = |\lambda| \|\mathbf{v}\|$ for all $\lambda \in \F, \mathbf{v}\in V$.
+ \item $\|\mathbf{v} + \mathbf{w}\| \leq \|\mathbf{v}\| + \|\mathbf{w}\|$ for all $\mathbf{v}, \mathbf{w} \in V$.
+ \end{enumerate}
+\end{defi}
+Intuitively, we think of $\|\mathbf{v}\|$ as the ``length'' or ``magnitude'' of the vector.
+
+\begin{eg}
+ Let $V$ be a finite dimensional vector space, and $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$ a basis. Then, for any $\mathbf{v} = \sum_{i = 1}^n v_i \mathbf{e}_i$, we can define a norm as
+ \[
+ \|\mathbf{v}\| = \sqrt{\sum_{i = 1}^n v_i^2}.
+ \]
+\end{eg}
+
+If we are given a norm of a vector space $V$, we immediately obtain two more structures on $V$ for free, namely a metric and a topology.
+
+Recall from IB Metric and Topological Spaces that $(V, d)$ is a metric space if the metric $d: V\times V \to \R$ satisfies
+\begin{enumerate}
+ \item $d(x, x) = 0$ for all $x\in V$.
+ \item $d(x, y) = d(y, x)$ for all $x, y\in V$.
+ \item $d(x, y) \leq d(x, z) + d(z, y)$ for all $x, y, z\in V$.
+\end{enumerate}
+Also, a topological space is a set $V$ together with a topology (a collection of open subsets) such that
+\begin{enumerate}
+ \item $\emptyset$ and $V$ are open subsets.
+ \item The union of open subsets is open.
+ \item The finite intersection of open subsets is open.
+\end{enumerate}
+As we have seen in IB Metric and Topological Spaces, a norm on a vector space induces a metric by $d(\mathbf{v}, \mathbf{w}) = \|\mathbf{v} - \mathbf{w}\|$. This metric in turn defines a topology on $V$, where the open sets are given by ``$U\subseteq V$ is open iff for any $x\in U$, there exists some $\varepsilon > 0$ such that $B(x, \varepsilon) = \{y\in V: d(x, y) < \varepsilon\}\subseteq U$''.
+
+This induced topology is not just a random topology on the vector space. It has the nice property that the vector space operations behave well under it.
+\begin{prop}
+ Addition $+: V\times V \to V$, and scalar multiplication $\cdot: \F \times V \to V$ are continuous with respect to the topology induced by the norm (and the usual product topology).
+\end{prop}
+
+\begin{proof}
+ Let $U$ be open in $V$. We want to show that $(+)^{-1} (U)$ is open. Let $(\mathbf{v}_1, \mathbf{v}_2) \in (+)^{-1}(U)$, i.e.\ $\mathbf{v}_1 + \mathbf{v}_2 \in U$. Since $\mathbf{v}_1 + \mathbf{v}_2 \in U$, there exists $\varepsilon$ such that $B(\mathbf{v}_1 + \mathbf{v}_2, \varepsilon) \subseteq U$. By the triangle inequality, we know that $B(\mathbf{v}_1, \frac{\varepsilon}{2}) + B(\mathbf{v}_2, \frac{\varepsilon}{2}) \subseteq U$. Hence we have $(\mathbf{v}_1, \mathbf{v}_2) \in B\left((\mathbf{v}_1, \mathbf{v}_2), \frac{\varepsilon}{2}\right) \subseteq (+)^{-1}(U)$. So $(+)^{-1}(U)$ is open.
+
+ Scalar multiplication can be done in a very similar way.
+\end{proof}
+This motivates the following definition --- we can do without the norm, and just require a topology in which addition and scalar multiplication are continuous.
+\begin{defi}[Topological vector space]
+ A \emph{topological vector space} $(V, \mathcal{U})$ is a vector space $V$ together with a topology $\mathcal{U}$ such that addition and scalar multiplication are continuous maps, and moreover singleton points $\{\mathbf{v}\}$ are closed sets.
+\end{defi}
+The requirement that points are closed is just a rather technical requirement needed in certain proofs. We should, however, not pay too much attention to this when trying to understand it intuitively.
+
+A natural question to ask is: when is a topological vector space \emph{normable}? That is, given a topological vector space, can we find a norm that induces its topology?
+
+To answer this question, we will first need a few definitions.
+
+\begin{defi}[Absolute convexity]
+ Let $V$ be a vector space. Then $C\subseteq V$ is \emph{absolutely convex} (or \emph{balanced convex}) if for any $\lambda, \mu \in \F$ such that $|\lambda| + |\mu| \leq 1$, we have $\lambda C + \mu C \subseteq C$. In other words, if $\mathbf{c}_1, \mathbf{c}_2 \in C$, we have $\lambda \mathbf{c}_1 + \mu \mathbf{c}_2 \in C$.
+\end{defi}
+
+\begin{prop}
+ If $(V, \|\ph \|)$ is a normed vector space, then $B(t) = B(\mathbf{0}, t) = \{\mathbf{v}: \|\mathbf{v}\| < t\}$ is absolutely convex.
+\end{prop}
+
+\begin{proof}
+ If $\|\mathbf{v}_1\|, \|\mathbf{v}_2\| < t$ and $|\lambda| + |\mu| \leq 1$, then by the triangle inequality, $\|\lambda \mathbf{v}_1 + \mu \mathbf{v}_2\| \leq |\lambda| t + |\mu| t \leq t$.
+\end{proof}
+
+\begin{defi}[Bounded subset]
+ Let $V$ be a topological vector space. Then $B\subseteq V$ is \emph{bounded} if for every open neighbourhood $U\subseteq V$ of $\mathbf{0}$, there is some $s > 0$ such that $B\subseteq t U$ for all $t > s$.
+\end{defi}
+At first sight, this might seem like a rather weird definition. Intuitively, it just means that $B$ is bounded if, whenever we take any open neighbourhood $U$ of $\mathbf{0}$, by enlarging it by a scalar multiple, we can make it fully contain $B$.
+
+\begin{eg}
+ $B(t)$ in a normed vector space is bounded.
+\end{eg}
+
+\begin{prop}
+ A topological vector space $(V, \mathcal{U})$ is normable if and only if there exists an absolutely convex, bounded open neighbourhood of $\mathbf{0}$.
+\end{prop}
+
+\begin{proof}
+ One direction is obvious --- if $V$ is normable, then $B(t)$ is an absolutely convex, bounded open neighbourhood of $\mathbf{0}$.
+
+ The other direction is not too difficult either. We define the \emph{Minkowski functional} $\mu_C: V \to \R$ by
+ \[
+ \mu_C(\mathbf{v}) = \inf \{t > 0: \mathbf{v}\in tC\},
+ \]
+ where $C$ is our absolutely convex, bounded open neighbourhood.
+
+ Note that by definition, for any $t < \mu_C(\mathbf{v})$, $\mathbf{v}\not\in tC$. On the other hand, by absolute convexity, for any $t > \mu_C(\mathbf{v})$, we have $\mathbf{v}\in tC$.
+
+ We now show that this is a norm on $V$:
+ \begin{enumerate}
+ \item If $\mathbf{v} = \mathbf{0}$, then $\mathbf{v}\in tC$ for all $t > 0$. So $\mu_C(\mathbf{0}) = 0$. On the other hand, suppose $\mathbf{v} \not= \mathbf{0}$. Since a singleton point is closed, $U = V\setminus \{\mathbf{v}\}$ is an open neighbourhood of $\mathbf{0}$. Hence, by boundedness of $C$, there is some $t$ such that $C\subseteq tU$. Alternatively, $\frac{1}{t}C \subseteq U$. Hence, $\mathbf{v}\not\in \frac{1}{t}C$. So $\mu_C (\mathbf{v}) \geq \frac{1}{t} > 0$. So $\mu_C (\mathbf{v}) = 0$ iff $\mathbf{v} = \mathbf{0}$.
+
+ \item By absolute convexity, $\lambda \mathbf{v} \in tC$ if and only if $|\lambda| \mathbf{v}\in tC$. So we have
+ \[
+ \mu_C(\lambda \mathbf{v}) = \inf \{t> 0: \lambda \mathbf{v}\in tC\} = |\lambda| \inf \{t > 0: \mathbf{v}\in tC\} = |\lambda| \mu_C(\mathbf{v}).
+ \]
+ \item We want to show that
+ \[
+ \mu_C (\mathbf{v} + \mathbf{w}) \leq \mu_C(\mathbf{v}) + \mu_C(\mathbf{w}).
+ \]
+ This is equivalent to showing that
+ \[
+ \inf\{t > 0: \mathbf{v} + \mathbf{w} \in tC\} \leq \inf\{t > 0: \mathbf{v}\in tC\} + \inf\{r > 0: \mathbf{w}\in rC\}.
+ \]
+ This, in turn, is equivalent to proving that if $\mathbf{v}\in tC$ and $\mathbf{w}\in rC$, then $(\mathbf{v} + \mathbf{w})\in (t + r)C$.
+
+ Let $\mathbf{v}' = \mathbf{v}/t, \mathbf{w}' = \mathbf{w}/r$. Then we want to show that if $\mathbf{v}' \in C$ and $\mathbf{w}' \in C$, then $\frac{1}{(t + r)}(t \mathbf{v}' + r \mathbf{w}') \in C$. This is exactly what is required by convexity. So done.\qedhere
+ \end{enumerate}
+\end{proof}
+In fact, the condition of absolute convexity can be replaced by ``convexity'', where $C$ is \emph{convex} if for every $t\in [0, 1]$, we have $tC + (1 - t)C \subseteq C$. This is because for every convex bounded $C$, we can always find an absolutely convex bounded $\tilde{C} \subseteq C$, which is not hard to prove.
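+To get a feel for the Minkowski functional, here is a small numerical sketch (my own illustration; the helper \texttt{mu\_C} is hypothetical): for $C$ the open Euclidean unit ball in $\R^2$, a bisection on the membership test recovers $\mu_C(\mathbf{v}) = \|\mathbf{v}\|_2$.
+
+```python
+# Approximate mu_C(v) = inf{t > 0 : v in tC} by bisection on a membership test.
+def mu_C(v, in_C, lo=1e-9, hi=1e9, iters=200):
+    for _ in range(iters):
+        mid = (lo + hi) / 2
+        if in_C((v[0] / mid, v[1] / mid)):  # v in mid*C iff v/mid in C
+            hi = mid
+        else:
+            lo = mid
+    return hi
+
+# C = open Euclidean unit ball in R^2
+unit_ball = lambda v: v[0]**2 + v[1]**2 < 1
+
+assert abs(mu_C((3.0, 4.0), unit_ball) - 5.0) < 1e-6  # mu_C(v) = ||v||_2
+assert abs(mu_C((0.5, 0.0), unit_ball) - 0.5) < 1e-6
+```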
+
+Among all normed spaces, some are particularly nice, known as Banach spaces.
+\begin{defi}[Banach spaces]
+ A normed vector space is a \emph{Banach space} if it is complete as a metric space, i.e.\ every Cauchy sequence converges.
+\end{defi}
+
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item A finite dimensional vector space (which is isomorphic to $\F^n$ for some $n$) is Banach.
+ \item Let $X$ be a compact Hausdorff space. Then let
+ \[
+ B(X) = \{f: X\to \R\text{ such that }f\text{ is bounded}\}.
+ \]
+ This is obviously a vector space, and we can define the norm to be $\|f\| = \sup_{x\in X}|f(x)|$. It is easy to show that this is a norm. It is less trivial to show that this is a Banach space.
+
+ Let $\{f_n\}\subseteq B(X)$ be a Cauchy sequence. Then for any $x$, $\{f_n(x)\}\subseteq \R$ is also Cauchy. So we can define $f(x) = \lim\limits_{n \to \infty}f_n(x)$.
+
+ To show that $f_n \to f$, let $\varepsilon > 0$. Since $\{f_n\}$ is Cauchy, there is some $N$ such that for any $n, m > N$ and any fixed $x$, we have $|f_n(x) - f_m(x)| < \varepsilon$. Take the limit as $m \to \infty$. Then $f_m(x) \to f(x)$. So $|f_n(x) - f(x)| \leq \varepsilon$. Since this is true for all $x$, for any $n > N$, we must have $\|f_n - f\| \leq \varepsilon$. So $f_n \to f$.
+
+ \item Define $X$ as before, and let
+ \[
+ C(X) = \{f: X\to \R\text{ such that }f\text{ is continuous}\}.
+ \]
+ Since any continuous function on a compact space is bounded, we have $C(X) \subseteq B(X)$. We define the norm as before.
+
+ Since we know that $C(X)\subseteq B(X)$, to show that $C(X)$ is Banach, it suffices to show that $C(X) \subseteq B(X)$ is closed, i.e.\ if $f_n \to f$ with $f_n \in C(X)$, then $f\in C(X)$, i.e.\ the uniform limit of continuous functions is continuous. The proof can be found in IB Analysis II.
+ \item For $1 \leq p < \infty$, define
+ \[
+ \hat{L}_p ([0, 1]) = \{f: [0, 1] \to \R \text{ such that }f\text{ is continuous}\}.
+ \]
+ We define the norm $\|\ph \|_{\hat{L}_p}$ by
+ \[
+ \|f\|_{\hat{L}_p} = \left(\int_0^1 |f|^p \;\d x\right)^{1/p}.
+ \]
+ It is easy to show that $\hat{L}_p$ is indeed a vector space, and we now check that this is a norm.
+ \begin{enumerate}
+ \item $\|f\|_{\hat{L}_p} \geq 0$ is obvious. Also, suppose that $\|f\|_{\hat{L}_p} = 0$. Then we must have $f = 0$. Otherwise, if $f\not = 0$, say $|f(x)| = \varepsilon > 0$ for some $x$. Then by continuity, there is some $\delta$ such that for any $y\in (x - \delta, x + \delta)$, we have $|f(y)| \geq \frac{\varepsilon}{2}$. Hence
+ \[
+ \|f\|_{\hat{L}_p} = \left(\int_0^1 |f|^p \;\d x\right)^{1/p} \geq \left[2\delta\left(\frac{\varepsilon}{2}\right)^p\right]^{1/p} > 0.
+ \]
+ \item $\|\lambda f\|_{\hat{L}_p} = |\lambda|\|f\|_{\hat{L}_p}$ is obvious.
+ \item The triangle inequality is exactly the Minkowski inequality, which is on the example sheet.
+ \end{enumerate}
+ It turns out that $\hat{L}_p$ is \emph{not} a Banach space. We can brute-force a hard proof here, but we will later develop some tools that allow us to prove this much more easily.
+
+ Hence, we define $L_p([0, 1])$ to be the completion of $\hat{L}_p ([0, 1])$. In IID Probability and Measure, we will show that $L_p([0, 1])$ is in fact the space
+ \[
+ L_p([0, 1]) = \left\{f: [0, 1] \to \R\text{ such that } \int_0^1 |f|^p \;\d x < \infty\right\}/{\sim},
+ \]
+ where the integral is the Lebesgue integral, and we are quotienting by the relation $f\sim g$ if $f = g$ Lebesgue almost everywhere. You will understand what these terms mean in the IID Probability and Measure course.
+
+ \item $\ell_p$ spaces: for $p\in [1, \infty)$, define
+ \[
+ \ell_p (\F) = \left\{(x_1, x_2, \cdots): x_i \in \F, \sum_{i = 1}^\infty |x_i|^p < \infty\right\},
+ \]
+ with the norm
+ \[
+ \|\mathbf{x}\|_{\ell_p} = \left(\sum_{i = 1}^\infty |x_i|^p\right)^{1/p}.
+ \]
+ It should be easy to check that this is a normed vector space. Moreover, this is a Banach space. The proof is on the example sheet.
+ \item $\ell_\infty$ space: we define
+ \[
+ \ell_\infty = \left\{(x_1, x_2, \cdots): x_i\in \F, \sup_{i\in \N} |x_i| < \infty\right\}
+ \]
+ with norm
+ \[
+ \|\mathbf{x}\|_{\ell_\infty} = \sup_{i\in \N}|x_i|.
+ \]
+ Again, this is a Banach space.
+ \item Let $B = B(1)$ be the unit open ball in $\R^n$. Define $C(B)$ to be the set of continuous functions $f: B\to \R$. Note that unlike in our previous example, these functions need not be bounded. So our previous norm cannot be applied. However, we can still define a topology as follows:
+
+ Let $\{K_i\}_{i = 1}^\infty$ be a sequence of compact subsets of $B$ such that $K_i \subseteq K_{i + 1}$ and $\bigcup_{i = 1}^\infty K_i = B$. We define the basis to include
+ \[
+ \left\{f \in C(B): \sup_{x \in K_i} |f(x)| < \frac{1}{m}\right\}
+ \]
+ for each $m, i = 1, 2, \cdots$, as well as the translations of these sets.
+
+ This weird basis is chosen such that $f_n \to f$ in this topology iff $f_n \to f$ uniformly on every compact set. It can be shown that this is not normable.
+ \end{enumerate}
+\end{eg}
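+To make the non-completeness claim for $\hat{L}_p$ plausible, here is a numerical sketch (my own illustration, not the promised proof): continuous ramps $f_n$ around $x = \frac{1}{2}$ form an $L^1$-Cauchy sequence whose pointwise limit is a discontinuous step function, hence not in $\hat{L}_1$.
+
+```python
+# f_n ramps continuously from 0 to 1 on [1/2, 1/2 + 1/n]; the sequence is
+# Cauchy in the L^1 norm, but its pointwise limit is a discontinuous step.
+import numpy as np
+
+xs = np.linspace(0, 1, 200001)
+dx = xs[1] - xs[0]
+
+def f(n):
+    return np.clip(n * (xs - 0.5), 0.0, 1.0)
+
+def L1_dist(g, h):
+    return np.sum(np.abs(g - h)) * dx  # Riemann sum for the integral
+
+# ||f_n - f_m||_1 = (1/2)|1/n - 1/m|, which tends to 0
+assert L1_dist(f(10), f(20)) < 1 / 20
+assert L1_dist(f(100), f(200)) < 1 / 200
+```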
+
+\subsection{Bounded linear maps}
+With vector spaces, we studied linear maps. These are maps that respect the linear structure of a vector space. With \emph{normed} vector spaces, the right kind of maps to study is the \emph{bounded} linear maps.
+
+\begin{defi}[Bounded linear map]
+ $T: X\to Y$ is a \emph{bounded linear map} if there is a constant $C > 0$ such that $\|T\mathbf{x}\|_Y \leq C\|\mathbf{x}\|_X$ for all $\mathbf{x}\in X$. We write $\mathcal{B}(X, Y)$ for the set of bounded linear maps from $X$ to $Y$.
+\end{defi}
+This is equivalent to saying $T(B_X(1)) \subseteq B_Y(C)$ for some $C > 0$. It is also equivalent to saying that $T(B)$ is bounded for every bounded subset $B$ of $X$. Note that this final characterization is also valid when we just have a topological vector space.
+
+How does boundedness relate to the topological structure of the vector spaces? It turns out that boundedness is the same as continuity, which is another reason why we like bounded linear maps.
+
+\begin{prop}
+ Let $X$, $Y$ be normed vector spaces, $T: X\to Y$ a linear map. Then the following are equivalent:
+ \begin{enumerate}
+ \item $T$ is continuous.
+ \item $T$ is continuous at $\mathbf{0}$.
+ \item $T$ is bounded.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ (i) $\Rightarrow$ (ii) is obvious.
+
+ (ii) $\Rightarrow $ (iii): Consider $B_Y(1) \subseteq Y$, the unit open ball. Since $T$ is continuous at $\mathbf{0}$, $T^{-1}(B_Y(1))\subseteq X$ is open. Hence there exists $\varepsilon > 0$ such that $B_X(\varepsilon) \subseteq T^{-1}(B_Y(1))$. So $T(B_X(\varepsilon)) \subseteq B_Y(1)$. So $T(B_X(1)) \subseteq B_Y\left(\frac{1}{\varepsilon}\right)$. So $T$ is bounded.
+
+ (iii) $\Rightarrow$ (i): Let $\varepsilon > 0$. Then $\|T \mathbf{x}_1 - T \mathbf{x}_2\|_Y = \|T(\mathbf{x}_1 - \mathbf{x}_2)\|_Y \leq C\|\mathbf{x}_1 - \mathbf{x}_2\|_X$. This is less than $\varepsilon$ if $\|\mathbf{x}_1 - \mathbf{x}_2\| < C^{-1}\varepsilon$. So done.
+\end{proof}
+
+Using the obvious operations, $\mathcal{B}(X, Y)$ can be made a vector space. What about a norm?
+
+\begin{defi}[Norm on $\mathcal{B}(X, Y)$]
+ Let $T: X\to Y$ be a bounded linear map. Define $\|T\|_{\mathcal{B}(X, Y)}$ by
+ \[
+ \|T\|_{\mathcal{B}(X, Y)} = \sup_{\|\mathbf{x}\|_X \leq 1} \|T \mathbf{x}\|_Y.
+ \]
+\end{defi}
+Equivalently, $\|T\|_{\mathcal{B}(X, Y)}$ is the least constant $C$ such that $\|T\mathbf{x}\|_Y \leq C\|\mathbf{x}\|_X$ for all $\mathbf{x}$. In particular, we have
+\[
+ \|T\mathbf{x}\|_Y \leq \|T\|_{\mathcal{B}(X, Y)}\|\mathbf{x}\|_X.
+\]
+\subsection{Dual spaces}
+We will frequently be interested in one particular case of $\mathcal{B}(X, Y)$.
+\begin{defi}[Dual space]
+ Let $V$ be a normed vector space. The \emph{dual space} is
+ \[
+ V^* = \mathcal{B}(V, \F).
+ \]
+ We call the elements of $V^*$ \emph{functionals}. The \emph{algebraic dual} of $V$ is
+ \[
+ V' = \mathcal{L}(V, \F),
+ \]
+ where we do not require boundedness.
+\end{defi}
+
+One particularly nice property of the dual is that $V^*$ is always a Banach space.
+
+\begin{prop}
+ Let $V$ be a normed vector space. Then $V^*$ is a Banach space.
+\end{prop}
+
+\begin{proof}
+ Suppose $\{T_i\} \subseteq V^*$ is a Cauchy sequence. We define $T$ as follows: for any $\mathbf{v}\in V$, $\{T_i(\mathbf{v})\}\subseteq \F$ is a Cauchy sequence. Since $\F$ is complete (it is either $\R$ or $\C$), we can define $T: V\to \F$ by
+ \[
+ T(\mathbf{v}) = \lim_{n \to \infty}T_n (\mathbf{v}).
+ \]
+ Our objective is to show that $T_i \to T$. The first step is to show that we indeed have $T \in V^*$, i.e.\ $T$ is a bounded map.
+
+ Let $\|\mathbf{v}\| \leq 1$. Pick $\varepsilon = 1$. Then there is some $N$ such that for all $i > N$, we have
+ \[
+ |T_i(\mathbf{v}) - T(\mathbf{v})| < 1.
+ \]
+ Then we have
+ \begin{align*}
+ |T(\mathbf{v})| &\leq |T_i(\mathbf{v}) - T(\mathbf{v})| + |T_i(\mathbf{v})| \\
+ &< 1 + \|T_i\|_{V^*}\|\mathbf{v}\|_V\\
+ & \leq 1 + \|T_i\|_{V^*}\\
+ &\leq 1 + \sup_i \|T_i\|_{V^*}
+ \end{align*}
+ Since $\{T_i\}$ is Cauchy, $\sup_i \|T_i\|_{V^*}$ is bounded. Since this bound does not depend on $\mathbf{v}$, we get that $T$ is bounded.
+
+ Now we want to show that $\|T_i - T\|_{V^*} \to 0$ as $i\to \infty$.
+
+ For arbitrary $\varepsilon > 0$, there is some $N$ such that for all $i, j > N$, we have
+ \[
+ \|T_i - T_j\|_{V^*} < \varepsilon.
+ \]
+ In particular, for any $\mathbf{v}$ such that $\|\mathbf{v}\| \leq 1$, we have
+ \[
+ |T_i(\mathbf{v}) - T_j(\mathbf{v})| < \varepsilon.
+ \]
+ Taking the limit as $j\to \infty$, we obtain
+ \[
+ |T_i(\mathbf{v}) - T(\mathbf{v})| \leq \varepsilon.
+ \]
+ Since this is true for any $\mathbf{v}$, we have
+ \[
+ \|T_i - T\|_{V^*} \leq \varepsilon
+ \]
+ for all $i > N$. So $T_i \to T$.
+\end{proof}
+Exercise: in general, for $X, Y$ normed vector spaces, what condition on $X$ and $Y$ guarantees that $\mathcal{B}(X, Y)$ is a Banach space?
+
+\subsection{Adjoint}
+The idea of the adjoint is given a $T\in \mathcal{B}(X, Y)$, produce a ``dual map'', or an \emph{adjoint} $T^*\in \mathcal{B}(Y^*, X^*)$.
+
+There is really only one (non-trivial) natural way of doing this. First we can think about what $T^*$ should do. It takes in something from $Y^*$ and produces something in $X^*$. By the definition of the dual space, this is equivalent to taking in a function $g: Y \to \F$ and returning a function $T^*(g): X\to \F$.
+
+To produce this $T^*(g)$, the only things we have on our hands to use are $T: X\to Y$ and $g: Y\to \F$. Thus the only option we have is to define $T^*(g)$ as the composition $g\circ T$, or $T^*(g)(\mathbf{x}) = g(T(\mathbf{x}))$ (we also have a silly option of producing the zero map regardless of input, but this is silly). Indeed, this is the definition of the adjoint.
+
+\begin{defi}[Adjoint]
+ Let $X, Y$ be normed vector spaces. Given $T\in \mathcal{B}(X, Y)$, we define the \emph{adjoint} of $T$, denoted $T^*$, as the map $T^*\in \mathcal{B}(Y^*, X^*)$ given by
+ \[
+ T^*(g)(\mathbf{x}) = g(T(\mathbf{x}))
+ \]
+ for $\mathbf{x} \in X$, $g\in Y^*$. Alternatively, we can write
+ \[
+ T^*(g) = g\circ T.
+ \]
+\end{defi}
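+As a concrete example (ours, under the identification of $\ell_2^*$ with $\ell_2$ via $g_{\mathbf{y}}(\mathbf{x}) = \sum x_i y_i$, which is bounded by the Cauchy-Schwarz inequality): let $S: \ell_2 \to \ell_2$ be the left shift $S(x_1, x_2, x_3, \cdots) = (x_2, x_3, \cdots)$. Then
+\[
+ S^*(g_{\mathbf{y}})(\mathbf{x}) = g_{\mathbf{y}}(S\mathbf{x}) = \sum_{i = 1}^\infty x_{i + 1}y_i = g_{(0, y_1, y_2, \cdots)}(\mathbf{x}),
+\]
+so the adjoint of the left shift is the right shift.
+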
+It is easy to show that our $T^*$ is indeed linear. We now show it is bounded.
+
+\begin{prop}
+ $T^*$ is bounded.
+\end{prop}
+
+\begin{proof}
+ We want to show that $\|T^*\|_{\mathcal{B}(Y^*, X^*)}$ is finite. For simplicity of notation, the supremum is assumed to be taken over non-zero elements of the space. We have
+ \begin{align*}
+ \|T^*\|_{\mathcal{B}(Y^*, X^*)} &= \sup_{g\in Y^*}\frac{\|T^*(g)\|_{X^*}}{\|g\|_{Y^*}}\\
+ &= \sup_{g\in Y^*}\sup_{\mathbf{x}\in X}\frac{|T^*(g)(\mathbf{x})|/\|\mathbf{x}\|_X}{\|g\|_{Y^*}}\\
+ &= \sup_{g\in Y^*}\sup_{\mathbf{x}\in X} \frac{|g(T\mathbf{x})|}{\|g\|_{Y^*}\|\mathbf{x}\|_X}\\
+ &\leq \sup_{g\in Y^*}\sup_{\mathbf{x}\in X} \frac{\|g\|_{Y^*}\|T\mathbf{x}\|_Y}{\|g\|_{Y^*}\|\mathbf{x}\|_X}\\
+ &\leq \sup_{\mathbf{x}\in X} \frac{\|T\|_{\mathcal{B}(X, Y)}\|\mathbf{x}\|_X}{\|\mathbf{x}\|_X}\\
+ &= \|T\|_{\mathcal{B}(X, Y)}
+ \end{align*}
+ So it is finite.
+\end{proof}
+
+\subsection{The double dual}
+\begin{defi}[Double dual]
+ Let $V$ be a normed vector space. Define $V^{**} = (V^*)^*$.
+\end{defi}
+
+We want to define a map $\phi: V\to V^{**}$. Again, we can reason about what we expect this function to do. It takes in a $\mathbf{v}\in V$, and produces a $\phi(\mathbf{v}) \in V^{**}$. Expanding the definition, this gives a $\phi(\mathbf{v}): V^* \to \F$. Hence this $\phi(\mathbf{v})$ takes in a $g\in V^*$, and returns a $\phi(\mathbf{v})(g)\in \F$.
+
+This is easy. Since $g \in V^*$, we know that $g$ is a function $g: V\to \F$. Given this function $g$ and a $\mathbf{v}\in V$, it is easy to produce a $\phi(\mathbf{v})(g)\in \F$. Just apply $g$ on $\mathbf{v}$:
+\[
+ \phi(\mathbf{v})(g) = g(\mathbf{v}).
+\]
+\begin{prop}
+ Let $\phi: V\to V^{**}$ be defined by $\phi(\mathbf{v})(g) = g(\mathbf{v})$. Then $\phi$ is a bounded linear map and $\|\phi\|_{\mathcal{B}(V, V^{**})} \leq 1$.
+\end{prop}
+
+\begin{proof}
+ Again, we are taking supremum over non-zero elements. We have
+ \begin{align*}
+ \|\phi\|_{\mathcal{B}(V, V^{**})} &= \sup_{\mathbf{v}\in V} \frac{\|\phi(\mathbf{v})\|_{V^{**}}}{\|\mathbf{v}\|_V}\\
+ &= \sup_{\mathbf{v}\in V} \sup_{g\in V^*}\frac{|\phi(\mathbf{v})(g)|}{\|\mathbf{v}\|_V\|g\|_{V^*}}\\
+ &= \sup_{\mathbf{v}\in V}\sup_{g\in V^*}\frac{|g(\mathbf{v})|}{\|\mathbf{v}\|_V\|g\|_{V^*}}\\
+ &\leq 1.\qedhere
+ \end{align*}
+\end{proof}
+In fact, we will later show that $\|\phi\|_{\mathcal{B}(V, V^{**})} = 1$.
+
+\subsection{Isomorphism}
+So far, we have discussed a lot about bounded linear maps, which are ``morphisms'' between normed vector spaces. It is thus natural to come up with the notion of isomorphism.
+
+\begin{defi}[Isomorphism]
+ Let $X, Y$ be normed vector spaces. Then $T: X\to Y$ is an \emph{isomorphism} if it is a bounded linear map with a bounded linear inverse (i.e.\ it is a homeomorphism).
+
+ We say $X$ and $Y$ are \emph{isomorphic} if there is an isomorphism $T: X\to Y$.
+
+ We say that $T: X\to Y$ is an \emph{isometric} isomorphism if $T$ is an isomorphism and $\|T\mathbf{x}\|_Y = \|\mathbf{x}\|_X$ for all $\mathbf{x}\in X$.
+
+ $X$ and $Y$ are \emph{isometrically isomorphic} if there is an isometric isomorphism between them.
+\end{defi}
+
+\begin{eg}
+ Consider a finite-dimensional space $\F^n$ with the standard basis $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$. For any $\mathbf{v} = \sum v_i \mathbf{e}_i$, the norm is defined by
+ \[
+ \|\mathbf{v}\| = \left(\sum |v_i|^2\right)^{1/2}.
+ \]
+ Then any $g\in V^*$ is determined by $g(\mathbf{e}_i)$ for $i = 1, \cdots, n$. We want to show that there are no restrictions on what $g(\mathbf{e}_i)$ can be, i.e.\ whatever values I assign to them, $g$ will still be bounded. We have
+ \begin{align*}
+ \|g\|_{V^*} &= \sup_{\mathbf{v}\in V}\frac{|g(\mathbf{v})|}{\|\mathbf{v}\|}\\
+ &\leq \sup_{\mathbf{v}\in V}\frac{\sum |v_i||g(\mathbf{e}_i)|}{(\sum |v_i|^2)^{1/2}}\\
+ &\leq C\sup_{\mathbf{v}\in V}\frac{(\sum |v_i|^2)^{\frac{1}{2}}}{(\sum|v_i|^2)^{\frac{1}{2}}}\left(\sup_i |g(\mathbf{e}_i)|\right)\\
+ &= C\sup_i |g(\mathbf{e}_i)|
+ \end{align*}
+ for some $C$, where the second-to-last line is due to the Cauchy-Schwarz inequality.
+
+ The supremum is finite since $\F^n$ is finite dimensional.
+
+ Since $g$ is uniquely determined by the list of values $(g(\mathbf{e}_1), g(\mathbf{e}_2), \cdots, g(\mathbf{e}_n))$, the space $V^*$ has dimension $n$. Therefore, $V^*$ is isomorphic to $\F^n$. By the same line of argument, $V^{**}$ is isomorphic to $\F^n$.
+
+ In fact, we can show that $\phi: V\to V^{**}$ given by $\phi(\mathbf{v})(g) = g(\mathbf{v})$ is an isometric isomorphism (this is not true for general normed vector spaces: just pick $V$ to be incomplete; then $V$ and $V^{**}$ cannot be isomorphic, since $V^{**}$ is complete).
+\end{eg}
+
+\begin{eg}
+ Consider $\ell_p$ for $p\in [1, \infty)$. What is $\ell_p^*$?
+
+ Suppose $q$ is the \emph{conjugate exponent} of $p$, i.e.
+ \[
+ \frac{1}{q} + \frac{1}{p} = 1.
+ \]
+ (if $p = 1$, we define $q = \infty$). It is easy to see that $\ell_q \subseteq \ell_p^*$ by the following:
+
+ Suppose $(x_1, x_2, \cdots) \in \ell_p$, and $(y_1, y_2, \cdots)\in \ell_q$. Define $y(\mathbf{x}) = \sum_{i = 1}^\infty x_i y_i$. We will show that $y$ defined this way is a bounded linear map. Linearity is easy to see, and boundedness comes from the fact that
+ \[
+ \|y\|_{\ell_p^*} = \sup_{\mathbf{x}}\frac{|y(\mathbf{x})|}{\|\mathbf{x}\|_{\ell_p}} = \sup_{\mathbf{x}} \frac{\left|\sum x_i y_i\right|}{\|\mathbf{x}\|_{\ell_p}} \leq \sup_{\mathbf{x}} \frac{\|\mathbf{x}\|_{\ell_p}\|\mathbf{y}\|_{\ell_q}}{\|\mathbf{x}\|_{\ell_p}} = \|\mathbf{y}\|_{\ell_q},
+ \]
+ by H\"older's inequality. So every $(y_i) \in \ell_q$ determines a bounded linear map. In fact, one can show that $\ell_p^*$ is isomorphic to $\ell_q$.
+\end{eg}
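+
+For instance (a remark of ours), when $p = q = 2$, this says that $\ell_2$ embeds in its own dual, and the boundedness of the pairing is exactly the Cauchy-Schwarz inequality
+\[
+ \left|\sum_{i = 1}^\infty x_i y_i\right| \leq \|\mathbf{x}\|_{\ell_2}\|\mathbf{y}\|_{\ell_2},
+\]
+which is H\"older's inequality with $p = q = 2$.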
+
+\subsection{Finite-dimensional normed vector spaces}
+We are now going to look at a special case of normed vector spaces, where the vector space is finite dimensional.
+
+It turns out that finite-dimensional vector spaces are rather boring. In particular, we have
+\begin{enumerate}
+ \item All norms are equivalent.
+ \item The closed unit ball is compact.
+ \item They are Banach spaces.
+ \item All linear maps whose domain is finite dimensional are bounded.
+\end{enumerate}
+These are what we are going to show in this section.
+
+First of all, we need to say what we mean when we say all norms are ``equivalent''.
+\begin{defi}[Equivalent norms]
+ Let $V$ be a vector space, and $\|\ph \|_1$, $\|\ph \|_2$ be norms on $V$. We say that these are \emph{equivalent} if there exists a constant $C > 0$ such that for any $\mathbf{v}\in V$, we have
+ \[
+ C^{-1}\|\mathbf{v}\|_2 \leq \|\mathbf{v}\|_1 \leq C\|\mathbf{v}\|_2.
+ \]
+\end{defi}
+It is an exercise to show that equivalent norms induce the same topology, and hence agree on continuity and convergence. Also, equivalence of norms is an equivalence relation (as the name suggests).
+
+Now let $V$ be an $n$-dimensional vector space with basis $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$. We can define the $\ell_p^n$ norm by
+\[
+ \|\mathbf{v}\|_{\ell_p^n} = \left(\sum_{i = 1}^n |v_i|^p \right)^{1/p},
+\]
+where
+\[
+ \mathbf{v} = \sum_{i = 1}^n v_i \mathbf{e}_i.
+\]
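+As a quick numerical sanity check (our example, writing $\|\mathbf{v}\|_{\ell_\infty^n} = \max_i |v_i|$): for $\mathbf{v} = (3, 4) \in \R^2$,
+\[
+ \|\mathbf{v}\|_{\ell_1^2} = 7,\quad \|\mathbf{v}\|_{\ell_2^2} = 5,\quad \|\mathbf{v}\|_{\ell_\infty^2} = 4,
+\]
+and in general
+\[
+ \|\mathbf{v}\|_{\ell_\infty^n} \leq \|\mathbf{v}\|_{\ell_2^n} \leq \|\mathbf{v}\|_{\ell_1^n} \leq n\|\mathbf{v}\|_{\ell_\infty^n},
+\]
+an explicit instance of the equivalence asserted in the following proposition.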
+\begin{prop}
+ Let $V$ be an $n$-dimensional vector space. Then all norms on $V$ are equivalent to the norm $\|\ph\|_{\ell_1^n}$.
+\end{prop}
+
+\begin{cor}
+ All norms on a finite-dimensional vector space are equivalent.
+\end{cor}
+
+\begin{proof}
+ Let $\|\ph \|$ be a norm on $V$.
+
+ Let $\mathbf{v} = (v_1, \cdots, v_n) = \sum v_i \mathbf{e}_i \in V$. Then we have
+ \begin{align*}
+ \|\mathbf{v}\| &= \left\|\sum v_i \mathbf{e}_i\right\|\\
+ &\leq \sum_{i = 1}^n |v_i|\|\mathbf{e}_i\|\\
+ &\leq \left(\sup_i \|\mathbf{e}_i\|\right) \sum_{i = 1}^n |v_i|\\
+ &\leq C\|\mathbf{v}\|_{\ell_1^n},
+ \end{align*}
+ where $C = \sup \|\mathbf{e}_i\| < \infty$ since we are taking a finite supremum.
+
+ For the other way round, let $S_1 = \{\mathbf{v}\in V: \|\mathbf{v}\|_{\ell_1^n} = 1\}$. We will show the two following results:
+ \begin{enumerate}
+ \item $\|\ph \|: (S_1, \|\ph \|_{\ell_1^n}) \to \R$ is continuous.
+ \item $S_1$ is a compact set.
+ \end{enumerate}
+ We first see why this gives what we want. We know that for any continuous map from a compact set to $\R$, the image is bounded and the infimum is achieved. So there is some $\mathbf{v}_* \in S_1$ such that
+ \[
+ \|\mathbf{v}_*\| = \inf_{\mathbf{v}\in S_1} \|\mathbf{v}\|.
+ \]
+ Since $\mathbf{v}_*\not= \mathbf{0}$, we have $c' = \|\mathbf{v}_*\| > 0$, and so $\|\mathbf{v}\| \geq c'$ for all $\mathbf{v} \in S_1$.
+
+ Now take an arbitrary non-zero $\mathbf{v} \in V$. Since $\frac{\mathbf{v}}{\|\mathbf{v}\|_{\ell_1^n}} \in S_1$, we know that
+ \[
+ \left\|\frac{\mathbf{v}}{\|\mathbf{v}\|_{\ell_1^n}}\right\| \geq c',
+ \]
+ which is to say that
+ \[
+ \|\mathbf{v}\| \geq c' \|\mathbf{v}\|_{\ell_1^n}.
+ \]
+ Since we have found $c, c' > 0$ such that
+ \[
+ c' \|\mathbf{v}\|_{\ell_1^n} \leq \|\mathbf{v}\|\leq c \| \mathbf{v}\|_{\ell_1^n},
+ \]
+ now let $C = \max\left\{c, \frac{1}{c'}\right\} > 0$. Then
+ \[
+ C^{-1}\|\mathbf{v}\|_{\ell_1^n} \leq \|\mathbf{v}\| \leq C\|\mathbf{v}\|_{\ell_1^n}.
+ \]
+ So the norms are equivalent. Now we can start to prove (i) and (ii).
+
+ First, let $\mathbf{v}, \mathbf{w}\in V$. We have
+ \[
+ \big|\|\mathbf{v}\| - \|\mathbf{w}\|\big| \leq \|\mathbf{v} - \mathbf{w}\| \leq C\|\mathbf{v} - \mathbf{w}\|_{\ell_1^n}.
+ \]
+ Hence if $\mathbf{v}$ is close to $\mathbf{w}$ in the $\ell_1^n$ norm, then $\|\mathbf{v}\|$ is close to $\|\mathbf{w}\|$. So $\|\ph\|$ is continuous.
+
+ To show (ii), it suffices to show that the unit ball $B = \{\mathbf{v} \in V: \|\mathbf{v}\|_{\ell_1^n}\leq 1\}$ is compact, since $S_1$ is a closed subset of $B$. We will do so by showing it is sequentially compact.
+
+ Let $\{\mathbf{v}^{(k)}\}_{k = 1}^\infty$ be a sequence in $B$. Write
+ \[
+ \mathbf{v}^{(k)} = \sum_{i = 1}^n \lambda_i^{(k)} \mathbf{e}_i.
+ \]
+ Since $\mathbf{v}^{(k)} \in B$, we have
+ \[
+ \sum_{i = 1}^n |\lambda_i^{(k)}| \leq 1.
+ \]
+ Consider the sequence $\lambda_1^{(k)}$, which is a sequence in $\F$.
+
+ We know that $|\lambda_1^{(k)}| \leq 1$. So by Bolzano-Weierstrass, there is a convergent subsequence $\lambda_1^{(k_{j_1})}$.
+
+ Now look at $\lambda_2^{(k_{j_1})}$. Since this is bounded, there is a convergent subsequence $\lambda_2^{(k_{j_2})}$.
+
+ Iterate this $n$ times to obtain a subsequence indexed by $k_{j_n}$ such that $\lambda_i^{(k_{j_n})}$ is convergent for all $i$. Since there are finitely many coordinates, coordinatewise convergence implies convergence in the $\ell_1^n$ norm. So $\mathbf{v}^{(k_{j_n})}$ is a convergent subsequence.
+\end{proof}
+
+\begin{prop}
+ Let $V$ be a finite-dimensional normed vector space. Then the closed unit ball
+ \[
+ \bar{B}(1) = \{\mathbf{v} \in V: \|\mathbf{v}\| \leq 1\}
+ \]
+ is compact.
+\end{prop}
+
+\begin{proof}
+ This follows from the proof above.
+\end{proof}
+
+\begin{prop}
+ Let $V$ be a finite-dimensional normed vector space. Then $V$ is a Banach space.
+\end{prop}
+
+\begin{proof}
+ Let $\{\mathbf{v}_i\} \in V$ be a Cauchy sequence. Since $\{\mathbf{v}_i\}$ is Cauchy, it is bounded, i.e.\ $\{\mathbf{v}_i\} \subseteq \bar{B}(R)$ for some $R > 0$. By above, $\bar{B}(R)$ is compact. So $\{\mathbf{v}_i\}$ has a convergent subsequence $\mathbf{v}_{i_k} \to \mathbf{v}$. Since $\{\mathbf{v}_i\}$ is Cauchy, we must have $\mathbf{v}_i \to \mathbf{v}$. So $\mathbf{v}_i$ converges.
+\end{proof}
+
+\begin{prop}
+ Let $V, W$ be normed vector spaces, $V$ be finite-dimensional. Also, let $T: V\to W$ be a linear map. Then $T$ is bounded.
+\end{prop}
+
+\begin{proof}
+ Recall discussions last time about $V^*$ for finite-dimensional $V$. We will do a similar proof.
+
+ Note that since $V$ is finite-dimensional, $\im T$ is finite-dimensional. So wlog $W$ is finite-dimensional. Since all norms are equivalent, it suffices to consider the case where the vector spaces carry the $\ell_1^n$ and $\ell_1^m$ norms. Then $T$ can be represented by a matrix $T_{ij}$ such that
+ \[
+ T(x_1, \cdots, x_n) = \left(\sum T_{1i}x_i, \cdots, \sum T_{mi}x_i\right).
+ \]
+ We can bound this by
+ \[
+ \|T(x_1, \cdots, x_n)\|_{\ell_1^m} \leq \sum_{j = 1}^m \sum_{i = 1}^n |T_{ji}||x_i| \leq m \left(\sup_{i, j}|T_{ij}|\right) \sum_{i = 1}^n |x_i| \leq C \|\mathbf{x}\|_{\ell_1^n}
+ \]
+ for some $C > 0$, since we are taking the supremum over a finite set. This implies that $\|T\|_{\mathcal{B}(\ell_1^n, \ell_1^m)} \leq C$.
+\end{proof}
+
+There is another way to prove this statement.
+\begin{proof}(alternative)
+ Let $T: V\to W$ be a linear map. We define a norm on $V$ by $\|\mathbf{v}\|' = \|\mathbf{v}\|_V + \| T \mathbf{v}\|_W$. It is easy to show that this is a norm.
+
+ Since $V$ is finite dimensional, all norms are equivalent. So there is a constant $C>0$ such that for all $\mathbf{v}$, we have
+ \[
+ \|\mathbf{v}\|' \leq C\|\mathbf{v}\|_V.
+ \]
+ In particular, we have
+ \[
+ \|T\mathbf{v}\|_W \leq C\|\mathbf{v}\|_V.
+ \]
+ So done.
+\end{proof}
+
+Among all these properties, compactness of $\bar{B}(1)$ characterizes finite dimensionality.
+\begin{prop}
+ Let $V$ be a normed vector space. Suppose that the closed unit ball $\bar{B}(1)$ is compact. Then $V$ is finite dimensional.
+\end{prop}
+
+\begin{proof}
+ Consider the following open cover of $\bar{B}(1)$:
+ \[
+ \bar{B}(1) \subseteq \bigcup_{\mathbf{y}\in \bar{B}(1)} B\left(\mathbf{y}, \frac{1}{2}\right).
+ \]
+ Since $\bar{B}(1)$ is compact, this has a finite subcover. So there is some $\mathbf{y}_1, \cdots, \mathbf{y}_n$ such that
+ \[
+ \bar{B}(1) \subseteq \bigcup_{i = 1}^n B\left(\mathbf{y}_i, \frac{1}{2}\right).
+ \]
+ Now let $Y = \spn\{\mathbf{y}_1, \cdots, \mathbf{y}_n\}$, which is a finite-dimensional subspace of $V$. We want to show that in fact we have $Y = V$.
+
+ Clearly, by definition of $Y$, the unit ball
+ \[
+ B(1) \subseteq Y + B\left(\frac{1}{2}\right),
+ \]
+ i.e.\ for every $\mathbf{v}\in B(1)$, there is some $\mathbf{y}\in Y, \mathbf{w} \in B(\frac{1}{2})$ such that $\mathbf{v} = \mathbf{y} + \mathbf{w}$. Multiplying everything by $\frac{1}{2}$, we get
+ \[
+ B\left(\frac{1}{2}\right) \subseteq Y + B\left(\frac{1}{4}\right).
+ \]
+ Hence we also have
+ \[
+ B(1) \subseteq Y + B\left(\frac{1}{4}\right).
+ \]
+ By induction, for every $n$, we have
+ \[
+ B(1) \subseteq Y + B\left(\frac{1}{2^n}\right).
+ \]
+ As a consequence,
+ \[
+ B(1) \subseteq \bar{Y}.
+ \]
+ Since $Y$ is finite-dimensional, we know that $Y$ is complete. So $Y$ is a closed subspace of $V$. So $\bar{Y} = Y$. So in fact
+ \[
+ B(1) \subseteq Y.
+ \]
+ Since every element in $V$ can be rescaled to an element of $B(1)$, we know that $V = Y$. Hence $V$ is finite dimensional.
+\end{proof}
+This concludes our discussion on finite-dimensional vector spaces. We'll end with an example showing that these properties fail for infinite-dimensional spaces.
+\begin{eg}
+ Consider $\ell_1$, and $\mathbf{e}_i = (0, 0, \cdots, 0, 1, 0, \cdots)$, where $\mathbf{e}_i$ is $1$ on the $i$th entry, $0$ elsewhere.
+
+ Note that if $i \not= j$, then
+ \[
+ \|\mathbf{e}_i - \mathbf{e}_j\| = 2.
+ \]
+ Since $\mathbf{e}_i \in \bar{B}(1)$, we see that $\bar{B}(1)$ cannot be covered by finitely many open balls of radius $\frac{1}{2}$, since each open ball can contain at most one of $\{\mathbf{e}_i\}$.
+\end{eg}
+
+\subsection{Hahn--Banach Theorem}
+Let $V$ be a real normed vector space. What can we say about $V^* = \mathcal{B}(V, \R)$? For instance, if $V$ is non-trivial, must $V^*$ be non-trivial?
+
+The main goal of this section is to prove the \emph{Hahn--Banach theorem} (surprise), which allows us to produce a lot of elements in $V^*$. Moreover, it doesn't just tell us that $V^*$ is non-empty (this is rather dull), but provides a tool to craft (or at least prove existence of) elements of $V^*$ that satisfy some property we want.
+
+\begin{prop}
+ Let $V$ be a real normed vector space, and let $W\subseteq V$ be a subspace of co-dimension 1. Assume we have the following two items:
+ \begin{itemize}
+ \item $p: V \to \R$ (not necessarily linear), which is positive homogeneous, i.e.
+ \[
+ p (\lambda \mathbf{v}) = \lambda p(\mathbf{v})
+ \]
+ for all $\mathbf{v}\in V, \lambda > 0$, and subadditive, i.e.
+ \[
+ p(\mathbf{v}_1 + \mathbf{v}_2) \leq p (\mathbf{v}_1) + p(\mathbf{v}_2)
+ \]
+ for all $\mathbf{v}_1, \mathbf{v}_2 \in V$. We can think of something like a norm, but more general.
+ \item $f: W \to \R$ a linear map such that $f(\mathbf{w}) \leq p (\mathbf{w})$ for all $\mathbf{w}\in W$.
+ \end{itemize}
+ Then there exists an extension $\tilde{f}: V\to \R$ which is linear such that $\tilde{f}|_{W} = f$ and $\tilde{f}(\mathbf{v}) \leq p(\mathbf{v})$ for all $\mathbf{v}\in V$.
+\end{prop}
+Why do we want this weird theorem? Our objective is to find something in $V^*$. This theorem tells us that to find a bounded linear map in $V$, we just need something in $W$ bounded by a norm-like object, and then we can extend it to $V$.
+
+\begin{proof}
+ Let $\mathbf{v}_0 \in V\setminus W$. Since $W$ has co-dimension $1$, every element $\mathbf{v}\in V$ can be written uniquely as $\mathbf{v} = \mathbf{w} + a \mathbf{v}_0$, for some $\mathbf{w}\in W, a\in \R$. Therefore it suffices to define $\tilde{f}(\mathbf{v}_0)$ and then extend linearly to $V$.
+
+ The condition we want to meet is
+ \[
+ \tilde{f}(\mathbf{w} + a \mathbf{v}_0) \leq p(\mathbf{w} + a \mathbf{v}_0)\tag{$*$}
+ \]
+ for all $\mathbf{w} \in W, a \in \R$. If $a = 0$, then this is satisfied since $\tilde{f}$ restricts to $f$ on $W$.
+
+ If $a > 0$ then $(*)$ is equivalent to
+ \[
+ \tilde{f}(\mathbf{w}) + a\tilde{f}(\mathbf{v}_0) \leq p(\mathbf{w} + a \mathbf{v}_0).
+ \]
+ We can divide by $a$ to obtain
+ \[
+ \tilde{f}(a^{-1}\mathbf{w}) + \tilde{f}(\mathbf{v}_0) \leq p(a^{-1}\mathbf{w} + \mathbf{v}_0).
+ \]
+ We let $\mathbf{w}' = a^{-1} \mathbf{w}$. So we can write this as
+ \[
+ \tilde{f}(\mathbf{v}_0) \leq p(\mathbf{w}' + \mathbf{v}_0) - f(\mathbf{w}'),
+ \]
+ for all $\mathbf{w}'\in W$.
+
+ If $a < 0$, then $(*)$ is equivalent to
+ \[
+ \tilde{f}(\mathbf{w}) + a\tilde{f}(\mathbf{v}_0) \leq p(\mathbf{w} + a \mathbf{v}_0).
+ \]
+ We now divide by $a$ and flip the direction of the inequality. So we have
+ \[
+ \tilde{f}(a^{-1}\mathbf{w}) + \tilde{f}(\mathbf{v}_0) \geq -(-a^{-1})p(\mathbf{w} + a\mathbf{v}_0).
+ \]
+ In other words, we want
+ \[
+ \tilde{f}(\mathbf{v}_0) \geq -p(-a^{-1} \mathbf{w} - \mathbf{v}_0) - f(a^{-1}\mathbf{w}).
+ \]
+ We let $\mathbf{w}' = -a^{-1}\mathbf{w}$. Then we are left with
+ \[
+ \tilde{f}(\mathbf{v}_0) \geq -p(\mathbf{w}' - \mathbf{v}_0) + f(\mathbf{w}')
+ \]
+ for all $\mathbf{w}' \in W$.
+
+ Hence we are done if we can define a $\tilde{f}(\mathbf{v}_0)$ that satisfies these two conditions. This is possible if and only if
+ \[
+ -p(\mathbf{w}_1 - \mathbf{v}_0) + f(\mathbf{w}_1) \leq p(\mathbf{w}_2 + \mathbf{v}_0) - f(\mathbf{w}_2)
+ \]
+ for all $\mathbf{w}_1, \mathbf{w}_2$. This holds since
+ \begin{align*}
+ f(\mathbf{w}_1) + f(\mathbf{w}_2) &= f(\mathbf{w}_1 + \mathbf{w}_2) \\
+ &\leq p(\mathbf{w}_1 + \mathbf{w}_2) \\
+ &= p(\mathbf{w}_1 - \mathbf{v}_0 + \mathbf{w}_2 + \mathbf{v}_0) \\
+ &\leq p(\mathbf{w}_1 - \mathbf{v}_0) + p(\mathbf{w}_2 + \mathbf{v}_0).
+ \end{align*}
+ So the result follows.
+\end{proof}
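+
+A concrete instance of this one-step extension (our example): take $V = \R^2$ with $p(\mathbf{v}) = \|\mathbf{v}\|_{\ell_1^2}$, let $W = \{(x, 0): x\in \R\}$, $f(x, 0) = x$ and $\mathbf{v}_0 = (0, 1)$. With $\mathbf{w}' = (x, 0)$, the two conditions derived above read
+\[
+ -(|x| + 1) + x \leq \tilde{f}(\mathbf{v}_0) \leq (|x| + 1) - x\quad\text{for all }x \in \R,
+\]
+which permits any value $\tilde{f}(\mathbf{v}_0) \in [-1, 1]$. In particular, the extension need not be unique.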
+
+The goal is to ``iterate'' this to get a similar result without the co-dimension 1 assumption. While we can do this directly for finitely many times, this isn't helpful (since we already know a lot about finite dimensional normed spaces). To perform an ``infinite iteration'', we need the mysterious result known as Zorn's lemma.
+\subsubsection*{Digression on Zorn's lemma}
+We first need a few definitions before we can come to Zorn's lemma.
+\begin{defi}[Partial order]
+ A relation $\leq$ on a set $X$ is a \emph{partial order} if it satisfies
+ \begin{enumerate}
+ \item $x \leq x$\hfill (reflexivity)
+ \item $x \leq y$ and $y \leq x$ implies $x = y$ \hfill (antisymmetry)
+ \item $x \leq y$ and $y \leq z$ implies $x \leq z$ \hfill (transitivity)
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[Total order]
+ Let $(S, \leq)$ be a partial order. $T\subseteq S$ is \emph{totally ordered} if for all $x, y\in T$, either $x \leq y$ or $y\leq x$, i.e.\ every two things are related.
+\end{defi}
+
+\begin{defi}[Upper bound]
+ Let $(S, \leq)$ be a partial order. $S'\subseteq S$ subset. We say $b\in S$ is an \emph{upper bound} of this subset if $x \leq b$ for all $x \in S'$.
+\end{defi}
+
+\begin{defi}[Maximal element]
+ Let $(S, \leq)$ be a partial order. Then $m\in S$ is a \emph{maximal element} if $x \geq m$ implies $x = m$.
+\end{defi}
+
+The glorious Zorn's lemma tells us that:
+\begin{lemma}[Zorn's lemma]
+ Let $(S, \leq)$ be a non-empty partially ordered set such that every totally-ordered subset $S'$ has an upper bound in $S$. Then $S$ has a maximal element.
+\end{lemma}
+We will not give a proof of this lemma here, but can explain why it should be true.
+
+We start by picking one element $x_0$ in $S$. If it is maximal, then done. Otherwise, there is some $x_1 > x_0$. If this is not maximal, then pick $x_2 > x_1$. We do this to infinity ``and beyond'' --- after picking infinitely many $x_i$, if we have not yet reached a maximal element, we take an upper bound of this set, and call it $x_\omega$. If this is not maximal, we can continue picking a larger element.
+
+We can do this forever, but if this process never stops, even after infinite time, we would have picked out more elements than there are in $S$, which is clearly nonsense. Of course, this is hardly a formal proof. The proper proof can be found in the IID Logic and Set Theory course.
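+
+A standard application of Zorn's lemma, included here as an illustration: every vector space $V$ has a basis. Let $S$ be the set of linearly independent subsets of $V$, partially ordered by inclusion. Any totally ordered subset $\{A_\alpha\}$ has the upper bound $\bigcup_\alpha A_\alpha$, which is still linearly independent, since any finite subset of the union already lies in a single $A_\alpha$. By Zorn's lemma, $S$ has a maximal element $B$, and maximality forces $\spn B = V$. The proof of the Hahn--Banach theorem below follows exactly this pattern.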
+
+\subsubsection*{Back to vector spaces}
+The Hahn--Banach theorem is just our previous proposition without the constraint that $W$ has co-dimension 1.
+\begin{thm}[Hahn--Banach theorem*]
+ Let $V$ be a real normed vector space, and $W\subseteq V$ a subspace. Assume we have the following two items:
+ \begin{itemize}
+ \item $p: V \to \R$ (not necessarily linear), which is positive homogeneous and subadditive;
+ \item $f: W \to \R$ a linear map such that $f(\mathbf{w}) \leq p (\mathbf{w})$ for all $\mathbf{w}\in W$.
+ \end{itemize}
+ Then there exists an extension $\tilde{f}: V\to \R$ which is linear such that $\tilde{f}|_{W} = f$ and $\tilde{f}(\mathbf{v}) \leq p(\mathbf{v})$ for all $\mathbf{v}\in V$.
+\end{thm}
+
+\begin{proof}
+ Let $S$ be the set of all pairs $(\tilde{V}, \tilde{f})$ such that
+ \begin{enumerate}
+ \item $W\subseteq \tilde{V}\subseteq V$
+ \item $\tilde{f}: \tilde{V} \to \R$ is linear
+ \item $\tilde{f}|_W = f$
+ \item $\tilde{f}(\tilde{\mathbf{v}}) \leq p(\tilde{\mathbf{v}})$ for all $\tilde{\mathbf{v}} \in \tilde{V}$
+ \end{enumerate}
+
+ We introduce a partial order $\leq$ on $S$ by $(\tilde{V}_1, \tilde{f}_1) \leq (\tilde{V}_2, \tilde{f}_2)$ if $\tilde{V}_1 \subseteq \tilde{V}_2$ and $\tilde{f}_2 |_{\tilde{V}_1} = \tilde{f}_1$. It is easy to see that this is indeed a partial order.
+
+ We now check that this satisfies the assumptions of Zorn's lemma. Let $\{(\tilde{V}_\alpha, \tilde{f}_\alpha)\}_{\alpha\in A} \subseteq S$ be a totally ordered set. Define $(\tilde{V}, \tilde{f})$ by
+ \[
+ \tilde{V} = \bigcup_{\alpha\in A} \tilde{V}_\alpha,\quad \tilde{f}(\mathbf{x}) = \tilde{f}_\alpha(\mathbf{x})\text{ for }\mathbf{x}\in \tilde{V_\alpha}.
+ \]
+ This is well-defined because $\{(\tilde{V}_\alpha, \tilde{f}_\alpha)\}_{\alpha\in A}$ is totally ordered. Indeed, if $\mathbf{x}\in \tilde{V}_{\alpha_1}$ and $\mathbf{x} \in \tilde{V}_{\alpha_2}$, wlog assume $(\tilde{V}_{\alpha_1}, \tilde{f}_{\alpha_1}) \leq (\tilde{V}_{\alpha_2}, \tilde{f}_{\alpha_2})$. Then $\tilde{f}_{\alpha_2}|_{\tilde{V}_{\alpha_1}} = \tilde{f}_{\alpha_1}$, so $\tilde{f}_{\alpha_1}(\mathbf{x}) = \tilde{f}_{\alpha_2}(\mathbf{x})$.
+
+ It should be clear that $(\tilde{V}, \tilde{f})\in S$ and $(\tilde{V}, \tilde{f})$ is indeed an upper bound of $\{(\tilde{V}_\alpha, \tilde{f}_\alpha)\}_{\alpha\in A}$. So the conditions of Zorn's lemma are satisfied.
+
+ Hence by Zorn's lemma, there is a maximal element $(\tilde{W}, \tilde{f}) \in S$. Then by definition, $\tilde{f}$ is linear, restricts to $f$ on $W$, and is bounded above by $p$. We now show that $\tilde{W} = V$.
+
+ Suppose not. Then there is some $\mathbf{v}_0 \in V\setminus \tilde{W}$. Define $\tilde{V} = \spn\{\tilde{W}, \mathbf{v}_0\}$. Now $\tilde{W}$ is a co-dimension 1 subspace of $\tilde{V}$. By our previous result, we know that there is some linear $\tilde{\tilde{f}}: \tilde{V} \to \R$ such that $\tilde{\tilde{f}}|_{\tilde{W}} = \tilde{f}$ and $\tilde{\tilde{f}}(\mathbf{v}) \leq p(\mathbf{v})$ for all $\mathbf{v}\in \tilde{V}$.
+
+ Hence we have $(\tilde{V}, \tilde{\tilde{f}}) \in S$ and $(\tilde{W}, \tilde{f}) < (\tilde{V}, \tilde{\tilde{f}})$. This contradicts the maximality of $(\tilde{W}, \tilde{f})$.
+\end{proof}
+
+There is a particularly important special case of this, which is also known as Hahn-Banach theorem sometimes.
+\begin{cor}[Hahn-Banach theorem 2.0]
+ Let $W \subseteq V$ be real normed vector spaces. Given $f \in W^*$, there exists a $\tilde{f} \in V^*$ such that $\tilde{f}|_W = f$ and $\|\tilde{f}\|_{V^*} = \|f\|_{W^*}$.
+\end{cor}
+
+\begin{proof}
+ Use the Hahn--Banach theorem with $p(\mathbf{x}) = \|f\|_{W^*}\|\mathbf{x}\|_V$ for all $\mathbf{x}\in V$. Positive homogeneity and subadditivity follow directly from the axioms of the norm. Then by definition $f(\mathbf{w}) \leq p(\mathbf{w})$ for all $\mathbf{w}\in W$. So the Hahn--Banach theorem says that there is a linear $\tilde{f}: V\to \R$ such that $\tilde{f}|_W = f$ and $\tilde{f}(\mathbf{v}) \leq p(\mathbf{v}) = \|f\|_{W^*} \|\mathbf{v}\|_V$.
+
+ Now notice that
+ \[
+ \tilde{f}(\mathbf{v}) \leq \|f\|_{W^*}\|\mathbf{v}\|_V,\quad -\tilde{f}(\mathbf{v}) = \tilde{f}(-\mathbf{v}) \leq \|f\|_{W^*}\|\mathbf{v}\|_V
+ \]
+ implies that $|\tilde{f}(\mathbf{v})| \leq \|f\|_{W^*}\|\mathbf{v}\|_V$ for all $\mathbf{v}\in V$.
+
+ On the other hand, we have (again taking supremum over non-zero $\mathbf{v}$)
+ \[
+ \|\tilde{f}\|_{V^*} = \sup_{\mathbf{v}\in V} \frac{|\tilde{f}(\mathbf{v})|}{\|\mathbf{v}\|_V} \geq \sup_{\mathbf{w}\in W} \frac{|f(\mathbf{w})|}{\|\mathbf{w}\|_W} = \|f\|_{W^*}.
+ \]
+ So indeed we have $\|\tilde{f}\|_{V^*} = \|f\|_{W^*}$.
+\end{proof}
+
+We'll have some quick corollaries of these theorems.
+\begin{prop}
+ Let $V$ be a real normed vector space. For every $\mathbf{v}\in V\setminus \{0\}$, there is some $f_{\mathbf{v}} \in V^*$ such that $f_{\mathbf{v}}(\mathbf{v}) = \|\mathbf{v}\|_V$ and $\|f_{\mathbf{v}}\|_{V^*} = 1$.
+\end{prop}
+
+\begin{proof}
+ Apply the Hahn--Banach theorem (2.0) with $W = \spn\{\mathbf{v}\}$ and $f_{\mathbf{v}}$ defined on $W$ by $f_{\mathbf{v}}(\lambda\mathbf{v}) = \lambda\|\mathbf{v}\|_V$, which satisfies $\|f_{\mathbf{v}}\|_{W^*} = 1$.
+\end{proof}
+
+\begin{cor}
+ Let $V$ be a real normed vector space. Then $\mathbf{v} = \mathbf{0}$ if and only if $f(\mathbf{v}) = 0$ for all $f\in V^*$.
+\end{cor}
+
+\begin{cor}
+ Let $V$ be a non-trivial real normed vector space, $\mathbf{v}, \mathbf{w}\in V$ with $\mathbf{v}\not= \mathbf{w}$. Then there is some $f\in V^*$ such that $f(\mathbf{v}) \not= f(\mathbf{w})$.
+\end{cor}
+
+\begin{cor}
+ If $V$ is a non-trivial real normed vector space, then $V^*$ is non-trivial.
+\end{cor}
+
+We now want to restrict the discussion to double duals. We define $\phi: V\to V^{**}$ as before by $\phi(\mathbf{v})(f) = f(\mathbf{v})$ for $\mathbf{v}\in V, f\in V^*$.
+
+\begin{prop}
+ The map $\phi: V\to V^{**}$ is an isometry, i.e.\ $\|\phi(\mathbf{v})\|_{V^{**}} = \|\mathbf{v}\|_V$.
+\end{prop}
+
+\begin{proof}
+ We have previously shown that
+ \[
+ \|\phi\|_{\mathcal{B}(V, V^{**})} \leq 1.
+ \]
+ It thus suffices to show the reverse inequality, i.e.\ that
+ \[
+ \|\phi(\mathbf{v})\|_{V^{**}} \geq \|\mathbf{v}\|_V.
+ \]
+ We may assume $\mathbf{v}\not= \mathbf{0}$, since the inequality is trivial for $\mathbf{v} = \mathbf{0}$. We have
+ \[
+ \|\phi(\mathbf{v})\|_{V^{**}} = \sup_{f\in V^*} \frac{|\phi(\mathbf{v})(f)|}{\|f\|_{V^*}} \geq \frac{|\phi(\mathbf{v})(f_\mathbf{v})|}{\|f_\mathbf{v}\|_{V^*}} = |f_\mathbf{v}(\mathbf{v})| = \|\mathbf{v}\|_V,
+ \]
+ where $f_\mathbf{v}$ is the function such that $f_\mathbf{v}(\mathbf{v}) = \|\mathbf{v}\|_V, \|f_\mathbf{v}\|_{V^*} = 1$ as we have previously defined.
+
+ So done.
+\end{proof}
+
+In particular, $\phi$ is injective and one can view $\phi$ as an isometric embedding of $V$ into $V^{**}$.
+
+\begin{defi}[Reflexive]
+ We say $V$ is \emph{reflexive} if $\phi(V) = V^{**}$.
+\end{defi}
+Note that any reflexive space is Banach, since $V^{**}$, being the dual of $V^*$, is complete.
+
+You might have heard that for any infinite dimensional vector space $V$, the dual of $V$ is always strictly larger than $V$. This does not prevent an infinite dimensional vector space from being reflexive. When we said the dual of $V$ is always strictly larger than $V$, we are referring to the \emph{algebraic dual}, i.e.\ the set of all \emph{linear maps} from $V$ to $\F$. In the definition of reflexive (and everywhere else where we mention ``dual'' in this course), we mean the \emph{continuous dual}, where we look at the set of all \emph{bounded} linear maps from $V$ to $\F$. It is indeed possible for the continuous dual to be isomorphic to the original space, even in infinite dimensional spaces, as we will see later.
+
+\begin{eg}
+ Finite-dimensional normed vector spaces are reflexive. Also $\ell^p$ is reflexive for $p \in (1, \infty)$.
+\end{eg}
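The reflexivity of $\ell^p$ rests on the duality between $\ell^p$ and $\ell^q$, where $\frac{1}{p} + \frac{1}{q} = 1$. Here is a small numerical sketch in a finite-dimensional stand-in (this illustration, and everything named in it, is an assumption beyond the notes): the dual norm of the functional $x \mapsto \sum_i y_i x_i$ on $(\R^n, \|\cdot\|_p)$ is $\|y\|_q$, attained at the H\"older extremal vector $x_i = \operatorname{sgn}(y_i)|y_i|^{q - 1}$.

```python
import numpy as np

# Illustrative finite-dimensional stand-in (not from the notes): for
# 1 < p < infinity, the functional f(x) = sum_i y_i x_i on (R^n, ||.||_p)
# has dual norm ||y||_q with 1/p + 1/q = 1, attained at the Holder
# extremal vector x_i = sign(y_i) |y_i|^(q - 1).
p = 3.0
q = p / (p - 1)
rng = np.random.default_rng(0)
y = rng.normal(size=6)

x_star = np.sign(y) * np.abs(y) ** (q - 1)
attained = np.dot(y, x_star) / np.linalg.norm(x_star, p)
dual_norm = np.linalg.norm(y, q)

# Random trial vectors never beat ||y||_q (Holder), while x_star attains it.
trials = rng.normal(size=(1000, 6))
ratios = np.abs(trials @ y) / np.linalg.norm(trials, p, axis=1)
print(np.isclose(attained, dual_norm), ratios.max() <= dual_norm + 1e-12)
```

The same pairing, carried out with infinite sums, identifies $(\ell^p)^*$ with $\ell^q$, and applying it twice gives reflexivity.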
+
+Recall that given $T\in \mathcal{B}(V, W)$, we defined $T^*\in \mathcal{B}(W^*, V^*)$ by
+\[
+ T^*(f)(\mathbf{v}) = f(T\mathbf{v})
+\]
+for $\mathbf{v}\in V, f\in W^*$.
+
+We have previously shown that
+\[
+ \|T^*\|_{\mathcal{B}(W^*, V^*)} \leq \|T\|_{\mathcal{B}(V, W)}.
+\]
+We will now show that in fact equality holds.
+\begin{prop}
+ \[
+ \|T^*\|_{\mathcal{B}(W^*, V^*)} = \|T\|_{\mathcal{B}(V, W)}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We have already shown that
+ \[
+ \|T^*\|_{\mathcal{B}(W^*, V^*)} \leq \|T\|_{\mathcal{B}(V, W)}.
+ \]
+ For the other inequality, first let $\varepsilon > 0$. Since
+ \[
+ \|T\|_{\mathcal{B}(V, W)} = \sup_{\mathbf{v}\in V} \frac{\|T\mathbf{v}\|_W}{\|\mathbf{v}\|_V}
+ \]
+ by definition, there is some $\mathbf{v}\in V$ such that $\|T\mathbf{v}\|_W \geq \|T\|_{\mathcal{B}(V, W)}\|\mathbf{v}\|_V - \varepsilon$. We may wlog assume that $\|\mathbf{v}\|_V = 1$. So
+ \[
+ \|T\mathbf{v}\|_W \geq \|T\|_{\mathcal{B}(V, W)} - \varepsilon.
+ \]
+ Therefore, we get that
+ \begin{align*}
+ \|T^*\|_{\mathcal{B}(W^*, V^*)} &= \sup_{f\in W^*} \frac{\|T^*(f)\|_{V^*}}{\|f\|_{W^*}} \\
+ &\geq \|T^*(f_{T\mathbf{v}})\|_{V^*} \\
+ &\geq |T^*(f_{T\mathbf{v}})(\mathbf{v})| \\
+ &= |f_{T\mathbf{v}}(T\mathbf{v})| \\
+ &= \|T\mathbf{v}\|_W \\
+ &\geq \|T\|_{\mathcal{B}(V, W)} - \varepsilon,
+ \end{align*}
+ where we used the fact that $\|f_{T\mathbf{v}}\|_{W^*}$ and $\|\mathbf{v}\|_V$ are both $1$. Since $\varepsilon$ is arbitrary, we are done.
+\end{proof}
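As a finite-dimensional sanity check (an illustration, not part of the proof): with Euclidean norms, a bounded operator $T$ is just a matrix, $T^*$ acts as its transpose, and both operator norms equal the largest singular value, so the two norms agree.

```python
import numpy as np

# Illustrative finite-dimensional check (an assumption beyond the notes):
# with Euclidean norms on V = R^7 and W = R^4, a bounded operator T is a
# matrix A, its adjoint T* acts as the transpose A^T, and both operator
# norms equal the largest singular value of A, so ||T*|| = ||T||.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 7))

norm_T = np.linalg.norm(A, 2)        # spectral norm: sup ||Av|| / ||v||
norm_T_star = np.linalg.norm(A.T, 2)
print(np.isclose(norm_T, norm_T_star))
```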
+
+\section{Baire category theorem}
+\subsection{The Baire category theorem}
+When we first write the Baire category theorem down, it might seem a bit pointless. However, it turns out to be a really useful result, and we will be able to prove surprisingly many results from it.
+
+In fact, the Baire category theorem itself does not involve normed vector spaces. It works on any metric space. However, most of the applications we have here are about normed vector spaces.
+
+To specify the theorem, we will need some terminology.
+
+\begin{defi}[Nowhere dense set]
+ Let $X$ be a topological space. A subset $E\subseteq X$ is \emph{nowhere dense} if $\bar{E}$ has empty interior.
+\end{defi}
+Usually, we will pick $E$ to be closed so that the definition just says that $E$ has an empty interior.
+
+\begin{defi}[First/second category, meagre and residual]
+ Let $X$ be a topological space. We say that $Z\subseteq X$ is of \emph{first category} or \emph{meagre} if it is a countable union of nowhere dense sets.
+
+ A subset is of \emph{second category} or \emph{non-meagre} if it is not of first category.
+
+ A subset is \emph{residual} if $X\setminus Z$ is meagre.
+\end{defi}
+
+\begin{thm}[Baire category theorem]
+ Let $X$ be a complete metric space. Then $X$ is of second category.
+\end{thm}
+This, by definition, means that it is not a countable union of nowhere dense sets. This is equivalent to saying that if we can write
+\[
+ X = \bigcup_{n = 1}^\infty C_n,
+\]
+where each $C_n$ is closed, then $C_n$ has a non-empty interior for some $n$.
+
+Alternatively, we can say that if $U_n$ is a countable collection of open dense sets, then $\bigcap_{n = 1}^\infty U_n \not= \emptyset$ (for if $U_n$ is open dense, then $X\setminus U_n$ is closed with empty interior).
+
+\begin{proof}
+ We will prove that the intersection of a countable collection of open dense sets is non-empty. Let $U_n$ be a countable collection of open dense sets.
+
+ The key to proving this is completeness, since that is the only information we have. The idea is to construct a sequence, show that it is Cauchy, and prove that the limit is in the intersection.
+
+ Construct a sequence $x_n \in X$ and $\varepsilon_n > 0$ as follows: let $x_1, \varepsilon_1$ be defined such that $\overline{B(x_1, \varepsilon_1)} \subseteq U_1$. This exists since $U_1$ is open and dense: by density, there is some $x_1 \in U_1$, and then openness provides the $\varepsilon_1$.
+
+ We define the $x_n$ iteratively. Suppose we already have $x_n$ and $\varepsilon_n$. Define $x_{n + 1}, \varepsilon_{n + 1}$ such that $\overline{B(x_{n + 1}, \varepsilon_{n + 1})} \subseteq \overline{B(x_n, \varepsilon_n)}\cap U_{n + 1}$. Again, this is possible because $U_{n + 1}$ is open and dense. Moreover, we choose our $\varepsilon_{n + 1}$ such that $\varepsilon_{n + 1} < \frac{1}{n}$ so that $\varepsilon_n \to 0$.
+
+ Since the balls are nested and $\varepsilon_n \to 0$, we know that $x_n$ is a Cauchy sequence. By completeness of $X$, we can find an $x\in X$ such that $x_n \to x$. Since $x$ is the limit of $x_n$, we know that $x\in \overline{B(x_n, \varepsilon_n)}$ for all $n$. In particular, $x\in U_n$ for all $n$. So done.
+\end{proof}
+
+\subsection{Some applications}
+We are going to have a few applications of the Baire category theorem.
+
+\begin{prop}
+ $\R \setminus \Q \not= \emptyset$, i.e.\ there is an irrational number.
+\end{prop}
+Of course, we can also prove this directly by, say, showing that $\sqrt{2}$ is irrational, or by noting that $\R$ is uncountable but $\Q$ is not. However, we can also use the Baire category theorem.
+
+\begin{proof}
+ Recall that we defined $\R$ to be the completion of $\Q$. So we just have to show that $\Q$ is not complete.
+
+ First, note that $\Q$ is countable. Also, for all $q \in \Q$, $\{q\}$ is closed and has empty interior. Hence
+ \[
+ \Q = \bigcup_{q \in \Q}\{q\}
+ \]
+ is the countable union of nowhere dense sets. So it is not complete by the Baire category theorem.
+\end{proof}
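The incompleteness of $\Q$ can also be seen very concretely (this numeric sketch is illustrative, not the Baire argument): Newton's iteration for $\sqrt{2}$ stays inside $\Q$ and forms a Cauchy sequence, yet no rational squares to exactly $2$, so the limit lies outside $\Q$.

```python
from fractions import Fraction

# Illustrative sketch (not the Baire argument): the Newton iterates for
# sqrt(2) starting at 2 are all rational, form a Cauchy sequence in Q,
# yet no rational has square exactly 2, so the limit lies outside Q.
x = Fraction(2)
iterates = []
for _ in range(6):
    x = (x + 2 / x) / 2          # Newton step for x^2 - 2 = 0, stays in Q
    iterates.append(x)

gaps = [abs(b - a) for a, b in zip(iterates, iterates[1:])]
print(all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:])))  # gaps shrink
print(all(t * t != 2 for t in iterates))               # never exactly 2
```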
+
+We will show that there are normed vector spaces which are not Banach spaces.
+\begin{prop}
+ Let $\hat{\ell}_1$ be a normed vector space defined by the vector space
+ \[
+ V = \{(x_1, x_2, \cdots): x_i \in \R, \exists I \in \N \text{ such that } i > I \Rightarrow x_i = 0\},
+ \]
+ with componentwise addition and scalar multiplication. This is the space of all sequences that are eventually zero.
+
+ We define the norm by
+ \[
+ \|x\|_{\hat{\ell}_1} = \sum_{i = 1}^\infty |x_i|.
+ \]
+ Then $\hat{\ell}_1$ is not a Banach space.
+\end{prop}
+Note that $\hat{\ell}_1$ is not standard notation.
+
+\begin{proof}
+ Let
+ \[
+ E_n = \{x\in \hat{\ell}_1: x_i = 0, \forall i \geq n\}.
+ \]
+ By definition,
+ \[
+ \hat{\ell}_1 = \bigcup_{n = 1}^\infty E_n.
+ \]
+ We now show that $E_n$ is nowhere dense. We first show that $E_n$ is closed. If $x_j \to x$ in $\hat{\ell}_1$ with $x_j \in E_n$, then since $x_j$ is $0$ from the $n$th component onwards, $x$ is also $0$ from the $n$th component onwards. So we must have $x \in E_n$. So $E_n$ is closed.
+
+ We now show that $E_n$ has empty interior. We need to show that for all $x\in E_n$ and $\varepsilon > 0$, there is some $y \in \hat{\ell}_1$ such that $\|y - x\| < \varepsilon$ but $y\not\in E_n$. This is also easy. Given $x = (x_1, \cdots, x_{n - 1}, 0, 0, \cdots)$, we consider
+ \[
+ y = (x_1, \cdots, x_{n - 1}, \varepsilon/2, 0, 0,\cdots).
+ \]
+ Then $\|y - x\|_{\hat{\ell}_1} < \varepsilon$ but $y \not\in E_n$. Hence by the Baire category theorem, $\hat{\ell}_1$ is not complete.
+\end{proof}
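To see the failure of completeness concretely (an illustration, not part of the proof): truncating the geometric sequence $(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \cdots)$ gives a Cauchy sequence in $\hat{\ell}_1$ whose only candidate limit is not eventually zero.

```python
# Illustrative sketch (not part of the proof): the truncations
# x(n) = (1/2, 1/4, ..., 1/2^n, 0, 0, ...) all lie in l-hat-1 and form a
# Cauchy sequence, but the only candidate limit (1/2, 1/4, 1/8, ...) has
# infinitely many nonzero entries, so it is not in the space.
def x(n):
    return [2.0 ** -(i + 1) for i in range(n)]

def dist(a, b):                  # l1 norm of the difference
    m = max(len(a), len(b))
    a = a + [0.0] * (m - len(a))
    b = b + [0.0] * (m - len(b))
    return sum(abs(s - t) for s, t in zip(a, b))

# ||x(n) - x(m)||_1 = 2^{-m} - 2^{-n} <= 2^{-m} for n > m: Cauchy.
print(dist(x(20), x(10)) <= 2.0 ** -10, dist(x(40), x(30)) <= 2.0 ** -30)
```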
+
+\begin{prop}
+ There exists an $f\in C([0, 1])$ which is nowhere differentiable.
+\end{prop}
+
+\begin{proof}(sketch)
+ We want to show that the set of all continuous functions which are differentiable at at least one point is contained in a meagre subset of $C([0, 1])$. Then this set cannot be all of $C([0, 1])$ since $C([0, 1])$ is complete.
+
+ Let $E_{m, n}$ be the set of all $f \in C([0, 1])$ such that
+ \[
+ (\exists x)(\forall y)\; 0 < |y - x| < \frac{1}{m} \Rightarrow |f(y) - f(x)| < n|y - x|.
+ \]
+ (where the quantifiers range over $[0, 1]$).
+
+ We now show that
+ \[
+ \{f \in C([0, 1]): f\text{ is differentiable somewhere}\} \subseteq \bigcup_{n, m = 1}^\infty E_{m, n}.
+ \]
+ This is easy from definition. Suppose $f$ is differentiable at $x_0$. Then by definition,
+ \[
+ \lim_{y \to x_0} \frac{f(y) - f(x_0)}{y - x_0} = f'(x_0).
+ \]
+ Let $n \in \N$ be such that $|f'(x_0)| < n$. Then by definition of the limit, there is some $m$ such that whenever $0 < |y - x_0| < \frac{1}{m}$, we have $\frac{|f(y) - f(x_0)|}{|y - x_0|} < n$. So $f \in E_{m, n}$ (taking $x = x_0$).
+
+ Finally, we need to show that each $E_{m, n}$ is closed and has empty interior. This is left as an exercise for the reader. % exercise
+\end{proof}
+
+\begin{thm}[Banach-Steinhaus theorem/uniform boundedness principle]
+ Let $V$ be a Banach space and $W$ be a normed vector space. Suppose $T_\alpha$ is a collection of bounded linear maps $T_\alpha: V\to W$ such that for each fixed $\mathbf{v} \in V$,
+ \[
+ \sup_\alpha \|T_\alpha (\mathbf{v})\|_W < \infty.
+ \]
+ Then
+ \[
+ \sup_\alpha \|T_\alpha\|_{\mathcal{B}(V, W)} < \infty.
+ \]
+\end{thm}
+This says that if the set of linear maps is pointwise bounded, then they are uniformly bounded.
+\begin{proof}
+ Let
+ \[
+ E_n = \{\mathbf{v} \in V: \sup_\alpha \|T_\alpha(\mathbf{v})\|_W \leq n\}.
+ \]
+ Then by our conditions,
+ \[
+ V = \bigcup_{n = 1}^\infty E_n.
+ \]
+ We can write each $E_n$ as
+ \[
+ E_n = \bigcap_\alpha \{\mathbf{v} \in V: \|T_\alpha (\mathbf{v})\|_W \leq n\}.
+ \]
+ Since $T_\alpha$ is bounded and hence continuous, $\{\mathbf{v} \in V: \|T_\alpha (\mathbf{v})\|_W\leq n\}$ is the continuous preimage of a closed set, and is hence closed. So $E_n$, being the intersection of closed sets, is closed.
+
+ By the Baire category theorem, there is some $n$ such that $E_n$ has non-empty interior. In particular, $(\exists n) (\exists \varepsilon > 0)(\exists \mathbf{v}_0 \in V)$ such that for all $\mathbf{v}\in B(\mathbf{v}_0, \varepsilon)$, we have
+ \[
+ \sup_{\alpha} \|T_\alpha(\mathbf{v})\|_W \leq n.
+ \]
+ Now consider an arbitrary $\mathbf{v}'\in V$ with $\|\mathbf{v}'\|_V \leq 1$. Then
+ \[
+ \mathbf{v}_0 + \frac{\varepsilon}{2} \mathbf{v}' \in B(\mathbf{v}_0, \varepsilon).
+ \]
+ So
+ \[
+ \sup_\alpha \left\|T_\alpha\left(\mathbf{v}_0 + \frac{\varepsilon \mathbf{v}'}{2}\right)\right\|_W \leq n.
+ \]
+ Therefore
+ \[
+ \sup_\alpha \|T_\alpha \mathbf{v}'\|_W \leq \frac{2}{\varepsilon} \left(n + \sup_\alpha \|T_\alpha \mathbf{v}_0\|\right).
+ \]
+ Note that the right hand side is independent of $\mathbf{v}'$. So
+ \[
+ \sup_{\|\mathbf{v}'\| \leq 1}\sup_\alpha \|T_\alpha \mathbf{v}'\|_W < \infty.\qedhere
+ \]
+\end{proof}
+Note that this result is not true for general functions. For example, consider the sequence of functions $f_n: [0, 1] \to \R$ sketched below:
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0) node [right] {$x$};
+ \draw [->] (0, 0) -- (0, 4) node [above] {$y$};
+ \node [circ] at (4, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (0.5, 0) {};
+ \node [below] at (4, 0) {$\frac{1}{2^{n - 1}}$};
+ \node [below] at (2, 0) {$\frac{1}{2^n}$};
+ \node [below] at (1, 0) {\quad$\frac{1}{2^{n + 1}}$};
+ \node [below] at (0.5, 0) {$\frac{1}{2^{n + 2}}$\quad};
+
+ \draw [mred, semithick] (0, 0) -- (0.5, 0) -- (1, 3) -- (2, 3) -- (4, 0) -- (5.5, 0);
+
+ \draw [dashed] (0, 3) node [left] {$n$} node [circ] {} -- (1, 3);
+ \end{tikzpicture}
+\end{center}
+Then for all $x\in [0, 1]$, we have
+\[
+ \sup_n |f_n(x)| < \infty,
+\]
+but
+\[
+ \sup_n \sup_x |f_n(x)| = \infty.
+\]
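A piecewise-linear realisation of the figure (the breakpoints are read off the picture; the code itself is illustrative) confirms both claims: the uniform sup over $n \leq N$ equals $N$, while at any fixed point only the few bumps whose support contains it contribute.

```python
import numpy as np

# A piecewise-linear realisation of the figure (breakpoints read off the
# picture; the code itself is illustrative): f_n vanishes outside
# [2^-(n+2), 2^-(n-1)] and has a plateau of height n on [2^-(n+1), 2^-n].
def f(n, x):
    xs = [0.0, 2.0 ** -(n + 2), 2.0 ** -(n + 1), 2.0 ** -n, 2.0 ** -(n - 1), 1.0]
    ys = [0.0, 0.0, float(n), float(n), 0.0, 0.0]
    return np.interp(x, xs, ys)

N = 60
# Uniform sup over 2 <= n <= N is N, but at a fixed point x = 0.3 only the
# bump with 0.3 inside its support is nonzero, so the pointwise sup is finite.
peak = max(f(n, 2.0 ** -n) for n in range(2, N + 1))
at_point = max(f(n, 0.3) for n in range(2, N + 1))
print(peak == N, at_point < 2)
```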
+However, by a proof very similar to what we had above, we have
+\begin{thm}[Osgood]
+ Let $f_n: [0, 1] \to \R$ be a sequence of continuous functions such that for all $x\in [0, 1]$
+ \[
+ \sup_n |f_n(x)| < \infty.
+ \]
+ Then there are some $a, b$ with $0 \leq a < b \leq 1$ such that
+ \[
+ \sup_n \sup_{x\in [a, b]} |f_n(x)| < \infty.
+ \]
+\end{thm}
+
+\begin{proof}
+ See example sheet.
+\end{proof}
+
+\begin{thm}[Open mapping theorem]
+ Let $V$ and $W$ be Banach spaces and $T: V\to W$ be a bounded surjective linear map. Then $T$ is an open map, i.e.\ $T(U)$ is an open subset of $W$ whenever $U$ is an open subset of $V$.
+\end{thm}
+
+Note that surjectivity is necessary. If $\mathbf{q}\not\in T(V)$, then since $T(V)$ is a subspace, we can scale $\mathbf{q}$ down arbitrarily and still not be in the image of $T$. So $T(V)$ does not contain an open neighbourhood of $\mathbf{0}$, and hence cannot be open.
+
+\begin{proof}
+ We can break our proof into three parts:
+ \begin{enumerate}
+ \item We first want an easy way to check if a map is an open map. We want to show that $T$ is open if and only if $T(B_V(1)) \supseteq B_W(\varepsilon)$ for some $\varepsilon > 0$. Note that one direction is easy --- if $T$ is open, then by definition $T(B_V(1))$ is open, and hence we can find the epsilon required. So we are going to prove the other direction.
+ \item We show that $\overline{T(B_V(1))} \supseteq B_W(\varepsilon)$ for some $\varepsilon > 0$.
+ \item By rescaling the norm in $W$, we may wlog assume that the $\varepsilon$ obtained above is in fact $1$. We then show that if $\overline{T(B_V(1))} \supseteq B_W(1)$, then $T(B_V(1)) \supseteq B_W(\frac{1}{2})$.
+ \end{enumerate}
+ We now prove them one by one.
+ \begin{enumerate}
+ \item Suppose $T(B_V(1))\supseteq B_W(\varepsilon)$ for some $\varepsilon > 0$. Let $U \subseteq V$ be an open set. We want to show that $T(U)$ is open. So let $\mathbf{p}\in U, \mathbf{q} = T\mathbf{p}$.
+
+ Since $U$ is open, there is some $\delta > 0$ such that $B_V(\mathbf{p}, \delta) \subseteq U$. We can also write the ball as $B_V(\mathbf{p}, \delta) = \mathbf{p} + B_V(\delta)$. Then we have
+ \begin{align*}
+ T(U) &\supseteq T(\mathbf{p} + B_V(\delta)) \\
+ &= T\mathbf{p} + T(B_V(\delta)) \\
+ &= T\mathbf{p} + \delta T(B_V(1)) \\
+ &\supseteq \mathbf{q} + \delta B_W(\varepsilon) \\
+ &= \mathbf{q} + B_W(\delta\varepsilon)\\
+ &= B_W(\mathbf{q}, \delta \varepsilon).
+ \end{align*}
+ So done.
+ \item This is the step where we use the Baire category theorem.
+
+ Since $T$ is surjective, we can write $W$ as
+ \[
+ W = \bigcup_{n = 1}^\infty T(B_V(n)) = \bigcup_{n = 1}^{\infty} T(nB_V(1)) = \bigcup_{n = 1}^\infty \overline{T(nB_V(1))}.
+ \]
+ We have written $W$ as a countable union of closed sets. Since $W$ is a Banach space, by the Baire category theorem, there is some $n \geq 1$ such that $\overline{T(n B_V(1))}$ has non-empty interior. But since $\overline{T(n B_V(1))} = n \overline{T(B_V(1))}$, and multiplication by $n$ is a homeomorphism, it follows that $\overline{T(B_V(1))}$ has non-empty interior. So there is some $\varepsilon > 0$ and $\mathbf{w}_0 \in W$ such that
+ \[
+ \overline{T(B_V(1))} \supseteq B_W(\mathbf{w}_0, \varepsilon).
+ \]
+ We have now found an open ball in the neighbourhood, but we want a ball centered at the origin. We will use linearity in two ways. Firstly, if $\mathbf{v}\in B_V(1)$, then $-\mathbf{v}\in B_V(1)$ as well. So by linearity of $T$, we know that
+ \[
+ \overline{T(B_V(1))} \supseteq B_W(-\mathbf{w}_0, \varepsilon).
+ \]
+ Then by linearity, intuitively, since the image contains the balls $B_W(\mathbf{w}_0, \varepsilon)$ and $B_W(-\mathbf{w}_0, \varepsilon)$, it must contain everything in between. In particular, it must contain $B_W(\varepsilon)$.
+
+ To prove this properly, we need some additional work. This is easy if we had $T(B_V(1)) \supseteq B_W(\mathbf{w}_0, \varepsilon)$ instead of the closure of it --- for any $\mathbf{w}\in B_W(\varepsilon)$, we let $\mathbf{v}_1, \mathbf{v}_2\in B_V(1)$ be such that $T(\mathbf{v}_1) = \mathbf{w}_0 + \mathbf{w}$, $T(\mathbf{v}_2) = -\mathbf{w}_0 + \mathbf{w}$. Then $\mathbf{v} = \frac{\mathbf{v}_1 + \mathbf{v}_2}{2}$ satisfies $\|\mathbf{v}\|_V < 1$ and $T(\mathbf{v}) = \mathbf{w}$.
+
+ Since we now have the closure instead, we need to mess with sequences. Since $\overline{T(B_V(1))} \supseteq \pm \mathbf{w}_0 + B_W(\varepsilon)$, for any $\mathbf{w}\in B_W(\varepsilon)$, we can find sequences $(\mathbf{v}_i)$ and $(\mathbf{u}_i)$ such that $\|\mathbf{v}_i\|_V, \|\mathbf{u}_i\|_V < 1$ for all $i$ and $T(\mathbf{v}_i) \to \mathbf{w}_0 + \mathbf{w}$, $T(\mathbf{u}_i) \to -\mathbf{w}_0 + \mathbf{w}$.
+
+ Now by the triangle inequality, we get
+ \[
+ \left\|\frac{\mathbf{v}_i + \mathbf{u}_i}{2}\right\| < 1,
+ \]
+ and we also have
+ \[
+ \frac{\mathbf{v}_i + \mathbf{u}_i}{2} \to \frac{\mathbf{w}_0 + \mathbf{w}}{2} + \frac{-\mathbf{w}_0 + \mathbf{w}}{2} = \mathbf{w}.
+ \]
+ So $\mathbf{w}\in \overline{T(B_V(1))}$. So $\overline{T(B_V(1))} \supseteq B_W(\varepsilon)$.
+ \item Let $\mathbf{w} \in B_W(\frac{1}{2})$. For any $\delta$, we know
+ \[
+ \overline{T(B_V(\delta))} \supseteq B_W(\delta).
+ \]
+ Thus, picking $\delta = \frac{1}{2}$, we can find some $\mathbf{v}_1 \in V$ such that
+ \[
+ \|\mathbf{v}_1\|_V < \frac{1}{2},\quad \|T \mathbf{v}_1 - \mathbf{w}\| < \frac{1}{4}.
+ \]
+ Suppose we have recursively found $\mathbf{v}_n$ such that
+ \[
+ \|\mathbf{v}_n\|_V < \frac{1}{2^n},\quad \|T (\mathbf{v}_1 + \cdots + \mathbf{v}_n) - \mathbf{w}\| < \frac{1}{2^{n + 1}}.
+ \]
+ Then picking $\delta = \frac{1}{2^{n + 1}}$, we can find $\mathbf{v}_{n + 1}$ satisfying the properties listed above. Then the partial sums of $\sum_{n = 1}^\infty \mathbf{v}_n$ form a Cauchy sequence, which converges by completeness. Let $\mathbf{v}$ be the limit. Then
+ \[
+ \|\mathbf{v}\|_V \leq \sum_{i = 1}^\infty \|\mathbf{v}_i\|_V < 1.
+ \]
+ Moreover, by continuity of $T$, we know $T\mathbf{v} = \mathbf{w}$. So we are done.\qedhere
+%
+% \item Let $\mathbf{w}\in B_W(1) \subseteq W$. We want to show that there exists $\mathbf{v}\in V$ with $\|\mathbf{v}\|_V \leq 1$ such that $T\mathbf{v} = \mathbf{w}$. Note that what we really want is to have $\|\mathbf{v}\|_V < 1$. However, just $\leq$ is enough --- for $\mathbf{w} \in B_W(1)$, we must have $\mathbf{w} \in B_W(1 - \delta)$ for some small $\delta > 0$ as well. Then we can find a $\mathbf{v}$ with $T\mathbf{v} = \mathbf{w}$ and $\|\mathbf{v}\|_V \leq 1 - \delta < 1$.
+%
+% To find the $\mathbf{v}$, we want to construct a sequence $\mathbf{u}_i$ such that $T\mathbf{u}_i \to \mathbf{w}$ and $\|\lim \mathbf{u}_i\| \leq 1$. Then $\mathbf{v} = \lim \mathbf{u}_i$ would be what we want.
+%
+% However, it turns out that instead of constructing $\mathbf{u}_i$ directly, it is easier to construct a sequence $\mathbf{v}_i$ and let $\mathbf{u}_n = \sum_{i = 1}^n \mathbf{v}_i$. So each $\mathbf{v}_i$ moves us one-step closer to our target location $\mathbf{v}$. The advantage of this approach is that we will be able to bound the length of $\mathbf{v}$ by the sum of lengths of $\mathbf{v}_i$, which we can show to be strictly less than $1$.
+%
+% Firstly, by assumption, we know that $\overline{T(B_V(r))} \supseteq B_W(r)$ for all $r$. This means that for any $\mathbf{u}\in B_W(r)$, we can find some sequence $(\mathbf{v}_i)\in B_V(r)$ such $T\mathbf{v}_i \to \mathbf{u}$. In particular, for any $\delta$ we can find some $\mathbf{v}\in B_V(r)$ such that $\|T \mathbf{v} - \mathbf{u}\| < \delta$.
+%
+% So what would this $\mathbf{u}_n$ look like? We know that $T\mathbf{u}_n \to \mathbf{w}$. So the idea is to pick $T\mathbf{u}_n$ to be a multiple of $\mathbf{w}$, say $\lambda_n \mathbf{w}$, where $\lambda_n \to 1$. But we are not guaranteed to be able to do so. We only know that we can make $T\mathbf{u}_n$ \emph{close to} $\lambda_n \mathbf{w}$. This is what we are going to do. We will make $T\mathbf{u}_n$ closer and closer to $\lambda_n \mathbf{w}$ as $n \to \infty$, and $\lambda_n \mathbf{w}$ closer and closer to $\mathbf{w}$, and then we are done.
+%
+% So this is how we pick our sequence. Since $\mathbf{w} \in B_W(1)$, there exists some $\delta > 0$ such that $\mathbf{w} \in B_W(1 - \delta)$. We claim that we can find a $\mathbf{v}_1$ such that
+% \[
+% \|\mathbf{v}_1\|_V < \frac{1}{2}, \quad \|T\mathbf{v}_1 - \mathbf{w}\|_W < \frac{1}{2}
+% \]
+% Indeed, we notice that $\frac{\mathbf{w}}{2} \in B_W(\frac{1}{2})$. Since $\overline{T(B_V(\frac{1}{2}))} \supseteq B_W(\frac{1}{2})$, we know there exists $\mathbf{v}_1 \in B_V\left(\frac{1 - \delta}{2}\right)$ such that $\|T\mathbf{v}_1 - \frac{\mathbf{w}}{2}\| < \frac{\delta}{4}$. Then
+% \[
+% \|T\mathbf{v}_1 - \mathbf{w}\|_W \leq \left\|T\mathbf{v}_1 - \frac{\mathbf{w}}{2}\right\|_W + \left\|\frac{\mathbf{w}}{2}\right\|_W < \frac{\delta}{4} + \frac{1 - \delta}{2} = \frac{1}{2}\left(1 - \frac{\delta}{2}\right).
+% \]
+% So done.
+%
+% Similarly, we can find $\mathbf{v}_2$ such that
+% \[
+% \|\mathbf{v}_2\|_V < \frac{1}{4}\left(1 - \frac{\delta}{2}\right), \quad \|T\mathbf{v}_1 + T\mathbf{v}_2 - \mathbf{w}\|_W < \frac{1}{4}\left(1 - \frac{\delta}{4}\right).
+% \]
+% Again, since $T \mathbf{v}_1 - \mathbf{w} \in B_W\left(\frac{1}{2}\left(1 - \frac{\delta}{2}\right)\right)$, we have
+% \[
+% \frac{T\mathbf{v}_1 - \mathbf{w}}{2}\in B_W\left(\frac{1}{4}\left(1 - \frac{\delta}{2}\right)\right)
+% \]
+% So there is some $\mathbf{v}_2$ such that $\mathbf{v}_2 \in B_V\left(\frac{1}{4}\left(1 - \frac{\delta}{2}\right)\right)$ and
+% \[
+% \left\|T\mathbf{v}_2 + \frac{T\mathbf{v}_1 - \mathbf{w}}{2}\right\|_W < \frac{\delta}{16}.
+% \]
+% Then
+% \begin{align*}
+% \left\|T\mathbf{v}_2 + T\mathbf{v}_1 - \mathbf{w}\right\|_W &\leq \left\|T\mathbf{v}_2 + \frac{T\mathbf{v}_1 - \mathbf{w}}{2}\right\|_W + \left\|\frac{T\mathbf{v}_1 - \mathbf{w}}{2}\right\|_W \\
+% &< \frac{\delta}{16} + \frac{1}{4}\left(1 - \frac{\delta}{2}\right) \\
+% &= \frac{1}{4}\left(1 - \frac{\delta}{4}\right).
+% \end{align*}
+% In general, we iterate this process --- given $\mathbf{v}_n$ such that
+% \[
+% \|\mathbf{v}_n\|_V < \frac{1}{2^n}\left(1 - \frac{\delta}{2^{n - 1}}\right),\quad \left\|\left(\sum_{i = 1}^n T\mathbf{v}_i\right) - \mathbf{w}\right\|_W < \frac{1}{2^n} \left(1 - \frac{\delta}{2^n}\right),
+% \]
+% define $\mathbf{v}_{n + 1}$ such that
+% \[
+% \|\mathbf{v}_{n + 1}\|_V < \frac{1}{2^{n + 1}}\left(1 - \frac{\delta}{2^n}\right),\quad \left\|\left(\sum_{i = 1}^{n + 1} T\mathbf{v}_i\right) - \mathbf{w}\right\|_W < \frac{1}{2^{n + 1}} \left(1 - \frac{\delta}{2^{n + 1}}\right).
+% \]
+% This exists for the same reasons as described above.
+%
+% By the bounds on $\|\mathbf{v}_i\|_V$, we know that $\sum_{i = 1}^n \mathbf{v}_i$ is a Cauchy sequence. By the completeness of $V$, there is some $\mathbf{v}$ such that
+% \[
+% \mathbf{v} = \sum_{i = 1}^\infty \mathbf{v}_i.
+% \]
+% Moreover, $\|\mathbf{v}\|_V \leq \sum_{i = 1}^\infty \|\mathbf{v}_i\|_V < 1$. Using the continuity of $T$, we know that $T\mathbf{v} = \mathbf{w}$. So done.\qedhere
+ \end{enumerate}
+\end{proof}
+Note that in this proof, we required both $V$ and $W$ to be Banach spaces. However, we used the completeness in different ways. We used the completeness of $V$ to extract a limit, but we just used the completeness of $W$ to say it is of second category. In particular, it suffices to assume the image of $T$ is of second category, instead of assuming surjectivity. Hence if we know that $T: V\to W$ is a bounded linear map such that $V$ is Banach and $\im T$ is of second category, then $T$ is open. As a consequence, $T$ has to be surjective (its image contains a small open ball which we can scale up arbitrarily).
+
+We are now going to look at certain applications of the open mapping theorem.
+\begin{thm}[Inverse mapping theorem]
+ Let $V, W$ be Banach spaces, and $T: V\to W$ be a bounded linear map which is both injective and surjective. Then $T^{-1}$ exists and is a bounded linear map.
+\end{thm}
+
+\begin{proof}
+ We know that $T^{-1}$ exists as a function between the underlying sets. It is also easy to show that it is linear since $T$ is linear. By the open mapping theorem, $T(U)$ is open for all open $U \subseteq V$. So $(T^{-1})^{-1}(U) = T(U)$ is open for all open $U\subseteq V$. By definition, $T^{-1}$ is continuous. Hence $T^{-1}$ is bounded since boundedness and continuity are equivalent.
+\end{proof}
+
+\begin{thm}[Closed graph theorem]
+ Let $V, W$ be Banach spaces, and $T: V\to W$ a linear map. If the graph of $T$ is closed, i.e.
+ \[
+ \Gamma(T) = \{(\mathbf{v}, T(\mathbf{v})): \mathbf{v}\in V\} \subseteq V\times W
+ \]
+ is a closed subset of the product space (using the norm $\|(\mathbf{v}, \mathbf{w})\|_{V\times W} = \max \{\|\mathbf{v}\|_V, \|\mathbf{w}\|_W\}$), then $T$ is bounded.
+\end{thm}
+What does this mean? Closedness of the graph means that if $\mathbf{v}_n \to \mathbf{v}$ in $V$ and $T (\mathbf{v}_n) \to \mathbf{w}$, then $\mathbf{w} = T(\mathbf{v})$. What we want to show is continuity, which is the stronger statement that if $\mathbf{v}_n \to \mathbf{v}$, then $T(\mathbf{v}_n)$ converges to $T(\mathbf{v})$.
+
+\begin{proof}
+ Consider $\phi: \Gamma(T) \to V$ defined by $\phi(\mathbf{v}, T(\mathbf{v})) = \mathbf{v}$. We want to apply the inverse mapping theorem to this. To do so, we need to show a few things. First we need to show that the spaces are Banach spaces. This is easy --- $\Gamma(T)$ is a Banach space since it is a closed subset of a complete space, and we are already given that $V$ is Banach.
+
+ Now we need to show surjectivity and injectivity. This is surjective since for any $\mathbf{v}\in V$, we have $\phi(\mathbf{v}, T(\mathbf{v})) = \mathbf{v}$. It is also injective since the function $T$ is single-valued.
+
+ Finally, we want to show $\phi$ is bounded. This is since
+ \[
+ \|\mathbf{v}\|_V \leq \max\{\|\mathbf{v}\|, \|T(\mathbf{v})\|\} = \|(\mathbf{v}, T(\mathbf{v}))\|_{\Gamma(T)}.
+ \]
+ By the inverse mapping theorem, $\phi^{-1}$ is bounded, i.e.\ there is some $C > 0$ such that
+ \[
+ \max\{\|\mathbf{v}\|_V, \|T(\mathbf{v})\|\} \leq C\|\mathbf{v}\|_V.
+ \]
+ In particular, $\|T(\mathbf{v})\| \leq C\|\mathbf{v}\|_V$. So $T$ is bounded.
+\end{proof}
+
+\begin{eg}
+ We define $D(T)$ to be equal to $C^1([0, 1])$ as a vector space, but equipped with the $C([0, 1])$ norm instead. We seek to show that $D(T)$ is not complete.
+
+ To do so, consider the differentiation map
+ \[
+ T: D(T) \to C([0, 1]),\quad Tf = f'.
+ \]
+ First of all, this is unbounded. Indeed, consider the sequence of functions $f_n(x) = x^n$. Then $\|f_n\|_{C([0, 1])} = 1$ for all $n$. However,
+ \[
+ \|T f_n\|_{C([0, 1])} = \sup_{x \in [0, 1]} n x^{n - 1} = n.
+ \]
+ So $T$ is unbounded.
+
+ We claim the graph of $T$ is closed. If so, then $D(T)$ cannot be complete: otherwise $D(T)$ and $C([0, 1])$ would both be Banach spaces, and the closed graph theorem would imply that $T$ is bounded, a contradiction.
+
+ To check this, suppose we have $f_n \to f$ in the $C([0, 1])$ norm, and $f_n' \to g$, again in the $C([0, 1])$ norm. We want to show that $f' = g$.
+
+ By the fundamental theorem of calculus, we have
+ \[
+ f_n(t) = f_n(0) + \int_0^t f'_n(x) \;\d x.
+ \]
+ Hence by uniform convergence of $f_n' \to g$ and $f_n \to f$, we have
+ \[
+ f(t) = f(0) + \int_0^t g(x)\;\d x.
+ \]
+ So by the fundamental theorem of calculus, we know that $f'(t) = g(t)$. So the graph of $T$ is closed.
+
+ Since we know that $C([0, 1])$ is complete, this shows that $D(T)$ is not complete.
+\end{eg}
+
+\begin{eg}
+ We can also use the Baire category theorem to understand Fourier series. Let $f: S^1 \to \R$ be continuous, i.e.\ $f: [-\pi, \pi] \to \R$ which is continuous with periodic boundary condition $f(-\pi) = f(\pi)$. We define the Fourier coefficients $\hat{f}: \Z \to \C$ by
+ \[
+ \hat{f}(k) = \frac{1}{2\pi} \int_{-\pi}^\pi e^{-ikx}f(x)\;\d x.
+ \]
+ We define the Fourier series by
+ \[
+ \sum_{k \in \Z} e^{ikz}\hat{f}(k).
+ \]
+ In particular, we define the partial sums as
+ \[
+ S_n(f)(x) = \sum_{k = -n}^n e^{ikx}\hat{f}(k).
+ \]
+ The question we want to ask is: does the Fourier series converge? We are not even asking if it converges back to $f$. Just if it converges at all. More concretely, we want to know if $S_n(f)(x)$ has a limit as $n \to \infty$ for $f$ continuous.
+
+ Unfortunately, no. We can show that there exists a continuous function $f$ such that $S_n(f)(x)$ diverges. To show this, we can consider $\phi_n: C(S^1) \to \R$ defined by $\phi_n (f) = S_n(f)(0)$. Assume that
+ \[
+ \sup_n |\phi_n (f)|
+ \]
+ is finite for all continuous $f$. By the Banach-Steinhaus theorem, we have
+ \[
+ \sup_n \|\phi_n\|_{\mathcal{B}(C(S^1), \R)} < \infty.
+ \]
+ On the other hand, we can show that
+ \[
+ \phi_n(f) = \frac{1}{2\pi}\int_{-\pi}^\pi f(x)\frac{\sin\left(n + \frac{1}{2}\right)x}{\sin \frac{x}{2}} \;\d x.
+ \]
+ It thus suffices to find a sequence $f_n$ such that $\|f_n\|_{C(S^1)} \leq 1$ but
+ \[
+ \left|\int_{-\pi}^\pi f_n(x)\frac{\sin\left(n + \frac{1}{2}\right)x}{\sin \frac{x}{2}}\;\d x\right|\to \infty,
+ \]
+ which is a contradiction. Details are left for the example sheet.
+
+ What's the role of the Banach-Steinhaus theorem here? If we wanted to prove the result directly, then we need to find a \emph{single} function $f\in C(S^1)$ such that $\phi_n(f)$ is unbounded. However, now we just have to find a sequence $f_n \in C(S^1)$ such that $\phi_n(f_n) \to \infty$. This is much easier.
+
+ There is another thing we can ask. Note that if $f$ is continuous, then $|\hat{f}(k)| \to 0$ as $k \to \pm\infty$. In fact, this is even true if $f \in L^1(S^1)$, i.e.\ $f$ is Lebesgue integrable and
+ \[
+ \int_{-\pi}^\pi |f(x)| \;\d x < \infty.
+ \]
+A classical question is: do all sequences $\{a_n\}\subseteq \C$ with $|a_n| \to 0$ as $n \to \pm \infty$ arise as the Fourier coefficients of some $f\in L^1$? The answer is no. We let $\tilde{c}_0$ be the set of all such sequences. By the inverse mapping theorem, if the map $\phi: L^1(S^1) \to \tilde{c}_0$ sending $f$ to $\hat{f}$ is surjective, then the inverse is bounded. But this is not true, since we can find a sequence $f_n$ such that $\|f_n\|_{L^1(S^1)} \to \infty$ but $\sup_\ell |\hat{f}_n(\ell)| \leq 1$ for all $n$. Details are again left for the reader.
+\end{eg}
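The growth of $\|\phi_n\|$ can be checked numerically (this sketch is illustrative, not the example-sheet argument): the norms are the Lebesgue constants $\frac{1}{2\pi}\int_{-\pi}^{\pi}|D_n(x)|\;\d x$, where $D_n(x) = \frac{\sin(n + \frac{1}{2})x}{\sin \frac{x}{2}}$ is the Dirichlet kernel, and these grow like $\log n$.

```python
import numpy as np

# Illustrative numerics (not the example-sheet argument): the norm of
# phi_n on C(S^1) is the Lebesgue constant (1/2pi) int |D_n|, where
# D_n(x) = sin((n + 1/2)x) / sin(x/2) is the Dirichlet kernel. These grow
# like log n, which is what makes the Banach-Steinhaus argument bite.
def lebesgue_constant(n, m=200000):
    x = np.linspace(-np.pi, np.pi, m)   # even m avoids the point x = 0
    D = np.sin((n + 0.5) * x) / np.sin(x / 2)
    return np.abs(D).mean()             # Riemann sum for (1/2pi) int |D_n|

L = [lebesgue_constant(n) for n in (1, 5, 25, 125)]
print(all(a < b for a, b in zip(L, L[1:])), L[-1] > 2.5)
```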
+
+\section{The topology of \texorpdfstring{$C(K)$}{C(K)}}
+Before we start the chapter, it helps to understand the title. In particular, what is $C(K)$? In the chapter, $K$ will denote a compact Hausdorff topological space. We will first define what it means for a space to be Hausdorff.
+
+\begin{defi}[Hausdorff space]
+ A topological space $X$ is \emph{Hausdorff} if for all distinct $p, q \in X$, there are $U_p, U_q \subseteq X$ that are open subsets of $X$ such that $p \in U_p$, $q\in U_q$ and $U_p\cap U_q = \emptyset$.
+\end{defi}
+
+\begin{eg}
+ Every metric space is Hausdorff.
+\end{eg}
+
+What we want to look at here is compact Hausdorff spaces.
+\begin{eg}
+ $[0, 1]$ is a compact Hausdorff space.
+\end{eg}
+
+\begin{notation}
+ $C(K)$ is the set of continuous functions $f: K \to \R$ with the norm
+ \[
+ \|f\|_{C(K)} = \sup_{x \in K} |f(x)|.
+ \]
+\end{notation}
+There are three themes we will discuss:
+\begin{enumerate}
+ \item There are many functions in $C(K)$. For example, we will show that given a finite set of points $\{p_i\}_{i = 1}^n \subseteq K$ and $\{y_i\}_{i = 1}^n \subseteq \R$, there is some $f \in C(K)$ such that $f(p_i) = y_i$. We will prove this later. Note that this is trivial for $C([0, 1])$, since we can use piecewise linear functions. However, this is not easy to prove if $K$ is a general compact Hausdorff space. In fact, we can prove a much stronger statement, known as the Tietze-Urysohn theorem.
+
+ \item Elements of $C(K)$ can be approximated by nice functions. This should be thought of as a generalization of the Weierstrass approximation theorem, which states that polynomials are dense in $C([0, 1])$, i.e.\ every continuous function can be approximated uniformly to arbitrary accuracy by polynomials.
+
+ \item Compact subsets of $C(K)$. One question we would like to understand is: given a sequence of functions $\{f_n\}_{n = 1}^\infty \subseteq C(K)$, when can we extract a convergent subsequence?
+\end{enumerate}
+
+\subsection{Normality of compact Hausdorff spaces}
+At this point, it is helpful to introduce a new class of topological spaces.
+\begin{defi}[Normal space]
+ A topological space $X$ is \emph{normal} if for every disjoint pair of closed subsets $C_1, C_2$ of $X$, there exist disjoint open sets $U_1, U_2 \subseteq X$ such that $C_1 \subseteq U_1, C_2 \subseteq U_2$.
+\end{defi}
+This is similar to being Hausdorff, except that instead of requiring the ability to separate points, we want the ability to separate closed subsets.
+
+In general, one makes the following definition:
+\begin{defi}[$T_i$ space]
+ A topological space $X$ has the $T_1$ \emph{property} if for all $x, y\in X$, where $x \not= y$, there exists $U\subseteq X$ open such that $x \in U$ and $y\not\in U$.
+
+ A topological space $X$ has the $T_2$ \emph{property} if $X$ is Hausdorff.
+
+ A topological space $X$ has the $T_3$ \emph{property} if for any $x \in X$, $C\subseteq X$ closed with $x \not\in C$, then there are $U_x, U_C$ disjoint open such that $x \in U_x, C\subseteq U_C$. These spaces are called \emph{regular}.
+
+ A topological space $X$ has the $T_4$ \emph{property} if $X$ is normal.
+\end{defi}
+Note here that $T_4$ and $T_1$ together imply $T_2$. It suffices to notice that $T_1$ implies that $\{x\}$ is a closed set for all $x$ --- for each $y \not= x$, let $U_y$ be such that $y \in U_y$ and $x \not\in U_y$. Then we can write
+\[
+ X\setminus \{x\} = \bigcup_{y \not= x}U_y,
+\]
+which is open since it is a union of open sets.
+
+More importantly, we have the following theorem:
+\begin{thm}
+ Let $X$ be a Hausdorff space. If $C_1, C_2 \subseteq X$ are \emph{compact} disjoint subsets, then there are some $U_1, U_2 \subseteq X$ disjoint open such that $C_1 \subseteq U_1, C_2 \subseteq U_2$.
+
+ In particular, if $X$ is a compact Hausdorff space, then $X$ is normal (since closed subsets of compact spaces are compact).
+\end{thm}
+
+\begin{proof}
+ Since $C_1$ and $C_2$ are disjoint, by the Hausdorff property, for every $p \in C_1$ and $q \in C_2$, there is some $U_{p, q}, V_{p, q}\subseteq X$ disjoint open with $p \in U_{p, q}, q \in V_{p, q}$.
+
+ Now fix a $p$. Then $\bigcup_{q \in C_2}V_{p, q}\supseteq C_2$ is an open cover. Since $C_2$ is compact, there is a finite subcover, say
+ \[
+ C_2 \subseteq \bigcup_{i = 1}^n V_{p, q_i}\text{ for some }\{q_1,\cdots, q_n\}\subseteq C_2.
+ \]
+ Note that $n$ and the $q_i$ depend on which $p$ we picked at the beginning.
+
+ Define
+ \[
+ U_p = \bigcap_{i=1}^n U_{p, q_i},\quad V_p = \bigcup_{i = 1}^n V_{p, q_i}.
+ \]
+ Since these are finite intersections and unions, $U_p$ and $V_p$ are open. Also, $U_p$ and $V_p$ are disjoint. We also know that $C_2 \subseteq V_p$.
+
+ Now note that $\bigcup_{p \in C_1} U_p \supseteq C_1$ is an open cover. By compactness of $C_1$, there is a finite subcover, say
+ \[
+ C_1 \subseteq \bigcup_{j = 1}^m U_{p_j}\text{ for some }\{p_1, \cdots, p_m\}\subseteq C_1.
+ \]
+ Now define
+ \[
+ U = \bigcup_{j = 1}^m U_{p_j},\quad V = \bigcap_{j = 1}^m V_{p_j}.
+ \]
+ Then $U$ and $V$ are disjoint open with $C_1 \subseteq U$, $C_2 \subseteq V$. So done.
+\end{proof}
+Why do we care about this? It turns out it is easier to discuss continuous functions on normal spaces rather than Hausdorff spaces. Hence, if we are given that a space is compact Hausdorff (e.g.\ $[0, 1]$), then we know it is normal.
+
+\subsection{Tietze-Urysohn extension theorem}
+The objective of this chapter is to show that if we have a continuous function defined on a closed subset of a normal space, then we can extend this to the whole of the space.
+
+We start with a special case of this theorem.
+\begin{lemma}[Urysohn's lemma]
+ Let $X$ be normal and $C_0, C_1$ be disjoint closed subsets of $X$. Then there is an $f \in C(X)$ such that $f|_{C_0} = 0$ and $f|_{C_1} = 1$, and $0 \leq f(x) \leq 1$ for all $x \in X$.
+\end{lemma}
+Before we prove this, let's look at a ``stupid'' example. Let $X = [-1, 2]$. This is compact Hausdorff. We let $C_0 = [-1, 0]$ and $C_1 = [1, 2]$. To construct the function $f$, we do the obvious thing:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-1.5, 0) -- (2.5, 0);
+ \foreach \x in {-1,0,1,2} {
+ \node [circ] at (\x, 0) {};
+ \node [below] at (\x, 0) {$\x$};
+ }
+ \draw [mred, semithick] (-1, 0) -- (0, 0) -- (1, 1) -- (2, 1);
+ \end{tikzpicture}
+\end{center}
+We can define this function $f$ (in $[0, 1]$) by
+\[
+ f(x) = \inf\left\{\frac{a}{2^n}: a, n \in \N, 0 \leq a \leq 2^n, x \leq \frac{a}{2^n}\right\}.
+\]
+This is obviously a rather silly way to write our function out. However, this is what we will end up doing in the proof below. So keep this in mind for now.
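As a quick sanity check on this formula, the following sketch (an illustration with an assumed truncation depth, not part of the notes) evaluates the infimum over dyadic rationals of a fixed depth and compares it with the piecewise-linear function $\min(1, \max(0, x))$ drawn above, taking $\inf\emptyset = 1$:

```python
# Hedged numerical sketch: truncate the dyadic infimum at a fixed depth and
# compare against the piecewise-linear function min(1, max(0, x)) on [-1, 2].
def f_dyadic(x, depth=12):
    # dyadic rationals a/2^depth in [0, 1] lying at or above x
    candidates = [a / 2**depth for a in range(2**depth + 1) if x <= a / 2**depth]
    return min(candidates) if candidates else 1.0  # inf of the empty set is 1

for x in [-1.0, -0.3, 0.0, 0.25, 0.7, 1.0, 1.5, 2.0]:
    expected = min(1.0, max(0.0, x))
    assert abs(f_dyadic(x) - expected) <= 2**-12
```

The truncation depth only controls the resolution: as the depth grows, the computed value converges to the true infimum.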
+
+\begin{proof}
+ In this proof, all subsets labeled $C$ are closed, and all subsets labeled $U$ are open.
+
+ First note that normality is equivalent to the following: suppose $C \subseteq U \subseteq X$, where $U$ is open and $C$ is closed. Then there is some $\tilde{C}$ closed, $\tilde{U}$ open such that $C\subseteq \tilde{U} \subseteq \tilde{C} \subseteq U$.
+
+ We start by defining $U_1 = X \setminus C_1$. Since $C_0$ and $C_1$ are disjoint, we know that $C_0 \subseteq U_1$. By normality, there exists $C_{\frac{1}{2}}$ and $U_{\frac{1}{2}}$ such that
+ \[
+ C_0 \subseteq U_{\frac{1}{2}} \subseteq C_{\frac{1}{2}} \subseteq U_1.
+ \]
+ Then we can find $C_{\frac{1}{4}}, C_{\frac{3}{4}}, U_{\frac{1}{4}}, U_{\frac{3}{4}}$ such that
+ \[
+ C_0 \subseteq U_{\frac{1}{4}}\subseteq C_{\frac{1}{4}} \subseteq U_{\frac{1}{2}} \subseteq C_{\frac{1}{2}} \subseteq U_{\frac{3}{4}} \subseteq C_{\frac{3}{4}} \subseteq U_1.
+ \]
+ Iterating this, we get that for all dyadic rationals $q = \frac{a}{2^n}$, $a, n\in \N, 0 < a < 2^n$, there are some $U_q$ open, $C_q$ closed such that $U_q \subseteq C_q$, with $C_q \subseteq U_{q'}$ if $q < q'$.
+
+ We now define $f$ by
+ \[
+ f(x) = \inf\left\{q \in (0, 1) \text{ dyadic rational}: x \in U_q\right\},
+\]
+with the understanding that $\inf \emptyset = 1$. We now check the properties desired.
+\begin{itemize}
+ \item By definition, we have $0 \leq f(x) \leq 1$.
+ \item If $x \in C_0$, then $x \in U_q$ for all $q$. So $f(x) = 0$.
+ \item If $x \in C_1$, then $x \not\in U_q$ for all $q$. So $f(x) = 1$.
+ \item To show $f$ is continuous, it suffices to check that $\{x: f(x) > \alpha\}$ and $\{x: f(x) < \alpha\}$ are open for all $\alpha \in \R$, as this shows that the pre-images of all open intervals in $\R$ are open. We know that
+ \begin{align*}
+ f(x) < \alpha &\Leftrightarrow \inf\{q \in (0, 1)\text{ dyadic rational}: x \in U_q\} < \alpha \\
+ &\Leftrightarrow (\exists q)\; q < \alpha \text{ and }x \in U_q\\
+ &\Leftrightarrow x \in \bigcup_{q < \alpha} U_q.
+ \end{align*}
+ Hence we have
+ \[
+ \{x: f(x) < \alpha\} = \bigcup_{q < \alpha} U_q,
+ \]
+ which is open, since each $U_q$ is open. Similarly we know that
+ \begin{align*}
+ f(x) > \alpha &\Leftrightarrow \inf\{q: x \in U_q\} > \alpha\\
+ &\Leftrightarrow (\exists q > \alpha)\; x \not\in C_q\\
+ &\Leftrightarrow x \in \bigcup_{q > \alpha} X \setminus C_q.
+ \end{align*}
+ Since this is a union of complements of closed sets, it is open.\qedhere
+\end{itemize}
+\end{proof}
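In the special case of a metric space, one does not need this machinery: the explicit formula $f(x) = \frac{d(x, C_0)}{d(x, C_0) + d(x, C_1)}$ already does the job. Here is a small numerical sketch (an illustration on the earlier toy example $X = [-1, 2]$, $C_0 = [-1, 0]$, $C_1 = [1, 2]$, not part of the proof):

```python
# Hedged sketch of the metric-space Urysohn function
# f(x) = d(x, C0) / (d(x, C0) + d(x, C1)) on X = [-1, 2].
def dist(x, a, b):
    # distance from the point x to the closed interval [a, b]
    return max(a - x, 0.0, x - b)

def urysohn(x):
    d0 = dist(x, -1.0, 0.0)   # distance to C0 = [-1, 0]
    d1 = dist(x, 1.0, 2.0)    # distance to C1 = [1, 2]
    return d0 / (d0 + d1)     # denominator is never 0: C0, C1 are disjoint closed

assert urysohn(-0.5) == 0.0       # f vanishes on C0
assert urysohn(1.7) == 1.0        # f is 1 on C1
assert 0.0 < urysohn(0.5) < 1.0   # and interpolates in between
```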
+With this, we can already say that there are many continuous functions. We can just pick some values on some $C_0, C_1$, and then get a continuous function out of it. However, we can make a stronger statement.
+
+\begin{thm}[Tietze-Urysohn extension theorem]
+ Let $X$ be a normal topological space, and $C\subseteq X$ be a closed subset. Suppose $f: C \to \R$ is a continuous function. Then there exists an extension $\tilde{f}: X \to \R$ which is continuous and satisfies $\tilde{f}|_C = f$ and $\|\tilde{f}\|_{C(X)} = \|f\|_{C(C)}$.
+\end{thm}
+This is in some sense similar to the Hahn-Banach theorem, which states that we can extend linear maps to larger spaces.
+
+Note that this implies Urysohn's lemma: if $C_0$ and $C_1$ are disjoint closed sets, then $C_0\cup C_1$ is closed, and the function which is $0$ on $C_0$ and $1$ on $C_1$ is continuous on $C_0 \cup C_1$, so it admits a continuous extension to $X$. However, we cannot be lazy and not prove Urysohn's lemma, because the proof of this theorem relies on Urysohn's lemma.
+
+\begin{proof}
+ The idea is to repeatedly use Urysohn's lemma to get better and better approximations. We can assume wlog that $0\leq f(x) \leq 1$ for all $x \in C$. Otherwise, we just translate and rescale our function. Moreover, we can assume that $\sup\limits_{x \in C} f(x) = 1$. It suffices to find $\tilde{f}: X \to \R$ continuous with $\tilde{f}|_C = f$ and $0 \leq \tilde{f}(x) \leq 1$ for all $x \in X$.
+
+ We define sequences of continuous functions $f_i: C \to \R$ and $g_i: X \to \R$ for $i \in \N$. We want to think of the sum $\sum_{i = 0}^n g_i$ as the $n$th approximation, and $f_{n + 1}$ as the error on $C$.
+
+ Let $f_0 = f$. This is the error we have when we approximate with the zero function.
+
+ We first define $g_0$ on a subset of $X$ by
+ \[
+ g_0(x) =
+ \begin{cases}
+ 0 & x \in f_0^{-1}\left(\left[0, \frac{1}{3}\right]\right)\\
+ \frac{1}{3} & x \in f_0^{-1}\left(\left[\frac{2}{3}, 1\right]\right)
+ \end{cases}.
+ \]
+ We can then extend this to the whole of $X$ with $0 \leq g_0(x) \leq \frac{1}{3}$ for all $x$ by Urysohn's lemma.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (4.5, 0);
+ \draw (0, 0) -- (0, 3.5);
+ \node at (0, 1) [left] {$\frac{1}{3}$};
+ \node at (0, 2) [left] {$\frac{2}{3}$};
+ \node at (0, 3) [left] {$1$};
+ \foreach \y in {1, 2, 3} {
+ \draw [dashed] (0, \y) -- (4.5, \y);
+ }
+
+ \draw plot [smooth] coordinates {(1, 1.4) (1.3, 2.3) (1.8, 0.5) (2.3, 1.3)};
+ \draw (3, 0.6) parabola (4, 1.6);
+
+ \draw [very thick, morange] (1.14, 1) -- (1.4, 1);
+ \draw [very thick, morange] (1.63, 0) -- (2.17,0);
+ \draw [very thick, morange] (3, 0) -- (3.6325, 0);
+
+ \draw (1.05, 0.1) -- +(-0.05, 0) -- +(-0.05, -0.2) -- +(0, -0.2);
+ \draw (2.25, 0.1) -- +(0.05, 0) -- +(0.05, -0.2) -- +(0, -0.2);
+
+ \draw (3.05, 0.1) -- +(-0.05, 0) -- +(-0.05, -0.2) -- +(0, -0.2);
+ \draw (3.95, 0.1) -- +(0.05, 0) -- +(0.05, -0.2) -- +(0, -0.2);
+
+ \draw [mblue] plot [smooth] coordinates {(0, 0.5) (0.5, 0.3) (0.8, 0.7) (1, 0.6) (1.14, 1)};
+ \draw [mblue] plot [smooth] coordinates {(1.4, 1) (1.48, 0.5) (1.53, 0.53) (1.63, 0)};
+
+ \draw [mblue] plot [smooth] coordinates {(2.16, 0) (2.4, 0.5) (2.6, 0.3) (3, 0)};
+ \draw [mblue] plot [smooth] coordinates {(3.6325, 0) (3.8, 0.3) (4, 0.8)};
+ \node [mblue] at (4, 0.8) [right] {$g_0(x)$};
+ \node at (4, 1.6) [right] {$f(x)$};
+ \end{tikzpicture}
+ \end{center}
+ We define
+ \[
+ f_1 = f_0 - g_0|_C.
+ \]
+ By construction, we know that $0 \leq f_1 \leq \frac{2}{3}$. This is our first approximation. Note that we have now lowered our maximum error from $1$ to $\frac{2}{3}$. We now repeat this.
+
+ Given $f_i: C \to \R$ with $0 \leq f_i \leq \left(\frac{2}{3}\right)^i$, we define $g_i$ by requiring
+ \[
+ g_i(x) =
+ \begin{cases}
+ 0 & x \in f_i^{-1}\left(\left[0, \frac{1}{3}\left(\frac{2}{3}\right)^i\right]\right)\\
+ \frac{1}{3}\left(\frac{2}{3}\right)^i & x \in f_i^{-1}\left(\left[\left(\frac{2}{3}\right)^{i + 1}, \left(\frac{2}{3}\right)^i\right]\right)\\
+ \end{cases},
+ \]
+ and then extending to the whole of $X$ with $0 \leq g_i \leq \frac{1}{3}\left(\frac{2}{3}\right)^i$ and $g_i$ continuous. Again, this exists by Urysohn's lemma. We then define $f_{i + 1} = f_i - g_i|_C$.
+
+ We then have
+ \[
+ \sum_{i = 0}^n g_i|_C = (f_0 - f_1) + (f_1 - f_2) + \cdots + (f_n - f_{n + 1}) = f - f_{n + 1}.
+ \]
+ We also know that
+ \[
+ 0 \leq f_{i + 1} \leq \left(\frac{2}{3}\right)^{i + 1}.
+ \]
+ We conclude by letting
+ \[
+ \tilde{f} = \sum_{i = 0}^\infty g_i.
+ \]
+ This exists because we have the bounds
+ \[
+ 0 \leq g_i \leq \frac{1}{3}\left(\frac{2}{3}\right)^i,
+ \]
+ and hence the sequence of partial sums $\sum_{i = 0}^n g_i$ is Cauchy in $C(X)$. So the limit exists and is continuous by the completeness of $C(X)$.
+
+ Now on $C$, we have
+ \[
+ \left.\sum_{i = 0}^n g_i\right|_C - f = -f_{n + 1},
+ \]
+ and we know that $\|f_{n + 1}\|_{C(C)} \to 0$. Therefore, we know that
+ \[
+ \left.\sum_{i = 0}^\infty g_i\right|_C = \tilde{f}|_C = f.
+ \]
+ Finally, we check the bounds. We need to show that $0 \leq \tilde{f}(x) \leq 1$. This is true since $g_i \geq 0$ for all $i$, and also
+ \[
+ |\tilde{f}(x)| \leq \sum_{i = 0}^\infty g_i(x) \leq \sum_{i = 0}^\infty \frac{1}{3}\left(\frac{2}{3}\right)^i = 1.
+ \]
+ So done.
+\end{proof}
+We can already show what was stated last time --- if $K$ is compact Hausdorff, $\{p_1, \cdots, p_n\}\subseteq K$ a finite set of points, and $\{y_1, \cdots, y_n\} \subseteq \R$, then there exists $f: K \to \R$ continuous such that $f(p_i) = y_i$. This is since compact Hausdorff spaces are normal, and singleton points are closed sets in Hausdorff spaces. In fact, we can prove this directly with Urysohn's lemma, by, say, finding functions $f_i$ such that $f_i(p_i) = y_i$ and $f_i(p_j) = 0$ for $i \not= j$. Then we just sum all the $f_i$.
+
+Note that normality is necessary for Urysohn's lemma. Since Urysohn's lemma is a special case of the Tietze-Urysohn extension theorem, normality is also necessary for the Tietze-Urysohn extension theorem. In fact, the lemma is \emph{equivalent} to normality. Let $C_0, C_1$ be disjoint closed sets of $X$. If there is some $f: X \to \R$ such that $f|_{C_0} = 0$, $f|_{C_1} = 1$, then $U_0 = f^{-1}\left(\left(-\infty, \frac{1}{3}\right)\right)$ and $U_1 = f^{-1}\left(\left(\frac{2}{3}, \infty\right)\right)$ are open disjoint sets such that $C_0 \subseteq U_0, C_1 \subseteq U_1$.
+
+Closedness of $C_0$ and $C_1$ is also necessary in Urysohn's lemma. For example, we cannot extend $f: [0, \frac{1}{2}) \cup (\frac{1}{2}, 1] \to \R$ to $[0, 1]$ continuously, where $f$ is defined as
+\[
+ f(x) =
+ \begin{cases}
+ 0 & x < \frac{1}{2}\\
+ 1 & x > \frac{1}{2}
+ \end{cases}.
+\]
+\subsection{Arzel\texorpdfstring{\`a}{a}-Ascoli theorem}
+Let $K$ be compact Hausdorff, and $\{f_n\}_{n = 1}^\infty$ be a sequence of continuous functions $f_n: K \to \R$ (or $\C$). When does $(f_n)$ have a convergent subsequence in $C(K)$? In other words, when is there a subsequence which converges uniformly?
+
+This will be answered by the Arzel\`a-Ascoli theorem. Before we get to that, we look at some examples.
+\begin{eg}
+ Let $K = [0, 1]$, and $f_n(x) = n$. This does not have a convergent subsequence in $C(K)$, since it does not even have a subsequence that converges pointwise. This is since the sequence $(f_n)$ is unbounded.
+\end{eg}
+We see that unboundedness is one ``enemy'' that prevents us from having a convergent subsequence.
+
+\begin{eg}
+ We again let $K = [0, 1]$, and let $f_n$ be defined as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-0.5, 0) -- (3, 0);
+ \draw [->] (0, -0.5) -- (0, 2);
+ \draw [mred, semithick] (0, 1.5) node [black, left] {$1$} -- (0.6, 0) node [black, below] {$\frac{1}{n}$} -- (2.5, 0) node [black, below] {$1$};
+ \end{tikzpicture}
+ \end{center}
+ We know that $f_n$ does not have a convergent subsequence in $C(K)$, since any subsequence must converge pointwise to
+ \[
+ f(x) =
+ \begin{cases}
+ 0 & x \not= 0\\
+ 1 & x = 0
+ \end{cases},
+ \]
+ which is not continuous.
+\end{eg}
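This failure can be made quantitative. A hedged sketch (taking $f_n(x) = \max(0, 1 - nx)$ as an assumed concrete realization of the picture above): for $m \geq 2n$ the uniform distance between $f_n$ and $f_m$ is at least $\frac{1}{2}$, so no subsequence is uniformly Cauchy.

```python
# Hedged sketch: the spikes f_n(x) = max(0, 1 - n*x) stay uniformly far apart.
def f(n, x):
    return max(0.0, 1.0 - n * x)

xs = [k / 10000 for k in range(10001)]  # grid on [0, 1]
for n, m in [(1, 2), (2, 4), (4, 8), (8, 16)]:
    gap = max(abs(f(n, x) - f(m, x)) for x in xs)
    assert gap >= 0.5  # witnessed near x = 1/(2n), where f_n = 1/2 and f_m = 0
```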
+What is happening here? For every $n$, fixed $x$ and every $\varepsilon$, by continuity of $f_n$, there is some $\delta$ such that $|x - y| < \delta$ implies $|f_n(x) - f_n(y)| < \varepsilon$, but this choice of $\delta$ depends on $n$, and there is no universal choice that works for all $n$ simultaneously. This is another problem that leads to the lack of a limit.
+
+The Arzel\`a-Ascoli theorem tells us these are the only two ``enemies'' that prevent us from extracting a convergent subsequence.
+
+To state this theorem, we first need a definition.
+\begin{defi}[Equicontinuous]
+ Let $K$ be a topological space, and $F\subseteq C(K)$. We say $F$ is \emph{equicontinuous at $x \in K$} if for every $\varepsilon$, there is some $U$ which is an open neighbourhood of $x$ such that
+ \[
+ (\forall f \in F)(\forall y \in U)\; |f(y) - f(x)| < \varepsilon.
+ \]
+ We say $F$ is \emph{equicontinuous} if it is equicontinuous at $x$ for all $x \in K$.
+\end{defi}
+
+\begin{thm}[Arzel\`a-Ascoli theorem]
+ Let $K$ be a compact topological space. Then $F\subseteq C(K)$ is pre-compact, i.e.\ $\bar{F}$ is compact, if and only if $F$ is bounded and equicontinuous.
+\end{thm}
+This indeed applies to the problem of extracting a uniformly convergent subsequence, since $C(K)$ is a metric space, and compactness is equivalent to sequential compactness. Indeed, let $(f_n)$ be a bounded and equicontinuous sequence in $C(K)$. Then $F = \{f_n: n \in \N\} \subseteq C(K)$ is bounded and equicontinuous. So it is pre-compact, and hence $(f_n)$, being a sequence in $\bar{F}$, has a convergent subsequence.
+
+To prove this, it helps to introduce some more terminology and a few lemmas first.
+
+\begin{defi}[$\varepsilon$-net]
+ Let $X$ be a metric space, and let $E\subseteq X$. For $\varepsilon > 0$, we say that $N \subseteq X$ is an \emph{$\varepsilon$-net} for $E$ if and only if $\bigcup_{x \in N}B(x, \varepsilon) \supseteq E$.
+\end{defi}
+
+\begin{defi}[Totally bounded subset]
+ Let $X$ be a metric space, and $E\subseteq X$. We say that $E$ is \emph{totally bounded} if for every $\varepsilon > 0$, there is a finite $\varepsilon$-net $N_\varepsilon$ for $E$.
+\end{defi}
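As a hedged illustration of the definition (the greedy construction below is a standard trick, not taken from the notes), any bounded subset of $\R$ is totally bounded, and a finite $\varepsilon$-net can be built by repeatedly keeping any point that is not yet covered:

```python
# Hedged illustration: build a finite eps-net for a bounded subset of R by
# greedily keeping each point at distance >= eps from everything kept so far.
def greedy_net(points, eps):
    net = []
    for p in points:
        if all(abs(p - q) >= eps for q in net):
            net.append(p)        # p was not covered, so it joins the net
    return net

E = [k / 1000 for k in range(1001)]     # dense sample of [0, 1]
net = greedy_net(E, 0.1)
assert all(any(abs(p - q) < 0.1 for q in net) for p in E)  # the net covers E
assert len(net) <= 11                   # finitely many balls suffice
```

By construction, any point not kept is within $\varepsilon$ of a kept point, so the kept points always form an $\varepsilon$-net; boundedness is what keeps this set finite.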
+
+An important result about totally bounded subsets is the following:
+\begin{prop}
+ Let $X$ be a complete metric space. Then $E\subseteq X$ is totally bounded if and only if for every sequence $\{y_i\}_{i = 1}^\infty \subseteq E$, there is a subsequence which is Cauchy.
+\end{prop}
+
+By completeness, we can rewrite this as
+\begin{cor}
+ Let $X$ be a complete metric space. Then $E\subseteq X$ is totally bounded if and only if $\bar{E}$ is compact.
+\end{cor}
+
+We'll prove these later. For now, we assume this corollary and use it to prove the Arzel\`a-Ascoli theorem.
+\begin{thm}[Arzel\`a-Ascoli theorem]
+ Let $K$ be a compact topological space. Then $F\subseteq C(K)$ is pre-compact, i.e.\ $\bar{F}$ is compact, if and only if $F$ is bounded and equicontinuous.
+\end{thm}
+
+\begin{proof}
+ By the previous corollary, it suffices to prove that $F$ is totally bounded if and only if $F$ is bounded and equicontinuous. We first do the boring direction.
+
+ $(\Rightarrow)$ Suppose $F$ is totally bounded. First notice that $F$ is obviously bounded, since $F$ is contained in a finite union of $\varepsilon$-balls, which is bounded.
+
+ Now we show $F$ is equicontinuous. Let $\varepsilon > 0$. Since $F$ is totally bounded, there exists a finite $\varepsilon$-net for $F$, i.e.\ there is some $\{f_1, \cdots, f_n\} \subseteq F$ such that for every $f \in F$, there exists an $i\in \{1, \cdots, n\}$ such that $\|f - f_i\|_{C(K)} < \varepsilon$.
+
+ Consider a point $x \in K$. Since $\{f_1, \cdots, f_n\}$ are continuous, for each $i$, there exists a neighbourhood $U_i$ of $x$ such that $|f_i (y) - f_i(x)| < \varepsilon$ for all $y \in U_i$.
+
+ Let
+ \[
+ U = \bigcap_{i = 1}^n U_i.
+ \]
+ Since this is a finite intersection, $U$ is open. Then for any $f \in F$, $y \in U$, we can find some $i$ such that $\|f - f_i\|_{C(K)} < \varepsilon$. So
+ \[
+ |f(y) - f(x)| \leq |f(y) - f_i(y)| + |f_i(y) - f_i(x)| + |f_i(x) - f(x)| < 3\varepsilon.
+ \]
+ So $F$ is equicontinuous at $x$. Since $x$ was arbitrary, $F$ is equicontinuous.
+
+ $(\Leftarrow)$ Suppose $F$ is bounded and equicontinuous. Let $\varepsilon > 0$. By equicontinuity, for every $x \in K$, there is some neighbourhood $U_x$ of $x$ such that $|f(y) - f(x)| < \varepsilon$ for all $y \in U_x, f \in F$. Obviously, we have
+ \[
+ \bigcup_{x \in K}U_x = K.
+ \]
+ By the compactness of $K$, there are some $\{x_1, \cdots, x_n\}$ such that
+ \[
+ \bigcup_{i = 1}^n U_{x_i}\supseteq K.
+ \]
+ Consider the restriction of functions in $F$ to these points. This can be viewed as a bounded subset of $\ell^n_{\infty}$, the $n$-dimensional normed vector space with the supremum norm. Since this is finite-dimensional, boundedness implies total boundedness (due to, say, the compactness of the closed unit ball). In other words, there is a finite $\varepsilon$-net $\{f_1, \cdots, f_m\}$ such that for every $f \in F$, there is a $j \in \{1, \cdots, m\}$ such that
+ \[
+ \max_i |f(x_i) - f_j(x_i)| < \varepsilon.
+ \]
+ Then for every $f \in F$, pick an $f_j$ such that the above holds. Then
+ \begin{align*}
+ \|f - f_j\|_{C(K)} &= \sup_y |f(y) - f_j(y)|\\
+ \intertext{Since $\{U_{x_i}\}$ covers $K$, we can write this as}
+ &= \max_i \sup_{y \in U_{x_i}}|f(y) - f_j(y)|\\
+ &\leq \max_i \sup_{y \in U_{x_i}} \big(|f(y) - f(x_i)| + |f(x_i) - f_j(x_i)| + |f_j(x_i) - f_j(y)|\big)\\
+ &< \varepsilon + \varepsilon +\varepsilon = 3\varepsilon.
+ \end{align*}
+ So done.
+\end{proof}
+
+Now we return to prove the proposition we just used to prove Arzel\`a-Ascoli.
+\begin{prop}
+ Let $X$ be a (complete) metric space. Then $E\subseteq X$ is totally bounded if and only if for every sequence $\{y_i\}_{i = 1}^\infty \subseteq E$, there is a subsequence which is Cauchy.
+\end{prop}
+
+\begin{proof}
+ $(\Rightarrow)$ Let $E \subseteq X$ be totally bounded, and let $\{y_i\} \subseteq E$ be a sequence. For every $j \in \N$, there exists a finite $\frac{1}{j}$-net, call it $N_j$.
+
+ Now since $N_1$ is finite, there is some $x_1 \in N_1$ such that there are infinitely many $y_i$'s in $B(x_1, 1)$. Pick the first $y_i$ in $B(x_1, 1)$ and call it $y_{i_1}$.
+
+ Now there is some $x_2 \in N_2$ such that there are infinitely many $y_i$'s in $B(x_1, 1) \cap B(x_2, \frac{1}{2})$. Pick the one with smallest value of $i > i_1$, and call this $y_{i_2}$. Continue till infinity.
+
+ This procedure gives a sequence $x_i \in N_i$ and a subsequence $\{y_{i_k}\}$, and also
+ \[
+ y_{i_n} \in \bigcap_{j = 1}^n B\left(x_j, \frac{1}{j}\right).
+ \]
+ It is easy to see that $\{y_{i_n}\}$ is Cauchy since if $m > n$, then $d(y_{i_m}, y_{i_n}) < \frac{2}{n}$.
+
+ $(\Leftarrow)$ Suppose $E$ is not totally bounded. Then there is some $\varepsilon > 0$ for which there is no finite $\varepsilon$-net. Pick any $y_1$. Pick $y_2$ such that $d(y_1, y_2) \geq \varepsilon$. This exists because there is no finite $\varepsilon$-net.
+
+ Now given $y_1, \cdots, y_n$ such that $d(y_i, y_j) \geq \varepsilon$ for all $i, j = 1, \cdots, n$, $i \not= j$, we pick $y_{n + 1}$ such that $d(y_{n + 1}, y_j) \geq \varepsilon$ for all $j = 1, \cdots, n$. Again, this exists because there is no finite $\varepsilon$-net. Then clearly any subsequence of $\{y_n\}$ is not Cauchy.
+\end{proof}
+Note that the first part is similar to the proof of Bolzano-Weierstrass in $\R^n$ by repeated bisection.
+
+Recall that at the beginning of the chapter, we saw that the boundedness and equicontinuity assumptions are necessary. The compactness of $K$ is also important. Let $X = \R$, which is not compact, and let $\phi \in C_c^{\infty}$, an infinitely differentiable function with compact support, say, the bump function.
+\begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw [->] (-2, 0) -- (2, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 1);
+
+ \draw [semithick, domain=-0.999:0.999, mblue] plot (\x, {exp (-1 / (1 - \x * \x))});
+ \draw [semithick, mblue] (-1.5, 0) -- (-1, 0);
+ \draw [semithick, mblue] (1, 0) -- (1.5, 0);
+ \end{tikzpicture}
+\end{center}
+We now let $f_n(x) = \phi(x - n)$, i.e.\ we shift our bump function to the right by $n$ units. This sequence is clearly bounded and equicontinuous, but this has no convergent subsequence --- $f_n$ converges pointwise to the zero function, but the convergence is not uniform, and this is true for arbitrary subsequences as well.
+
+We are going to look at some applications of the theorem:
+
+\begin{eg}
+ Let $K\subseteq \R$ be a compact interval, and $\{f_n\}_{n = 1}^\infty$ be a sequence of continuously differentiable functions in $C(K)$, such that
+ \[
+ \sup_x \sup_n (|f_n(x)| + |f_n'(x)|) < C
+ \]
+ for some $C$. Then there is a convergent subsequence. We clearly have uniform boundedness. To obtain equicontinuity, since the derivative is bounded, by the mean value theorem, we have
+ \[
+ \frac{|f_n(x) - f_n(y)|}{|x - y|} = |f_n'(z)| \leq C
+ \]
+ for some $z$ between $x$ and $y$. So
+ \[
+ |f_n(x) - f_n(y)| \leq C |x - y|.
+ \]
+\end{eg}
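As a hedged numerical illustration of this example (the family $f_n(x) = \sin(nx)/n$ is a made-up instance satisfying the hypothesis with $C = 2$), the mean value estimate gives the uniform Lipschitz bound $|f_n(x) - f_n(y)| \leq |x - y|$, which is exactly equicontinuity here:

```python
import math

# Hedged illustration: f_n(x) = sin(n x)/n on [0, 1] has |f_n| <= 1 and
# |f_n'| = |cos(n x)| <= 1, so sup_n sup_x (|f_n| + |f_n'|) <= 2, and the
# mean value theorem forces |f_n(x) - f_n(y)| <= |x - y| uniformly in n.
xs = [k / 1000 for k in range(1001)]
for n in range(1, 50):
    for x, y in zip(xs, xs[1:]):
        fx = math.sin(n * x) / n
        fy = math.sin(n * y) / n
        assert abs(fx - fy) <= abs(x - y) + 1e-12
```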
+
+Consider the ordinary differential equation $x' = f(x)$ with the initial condition $x(0) = x_0 \in \R^n$. Recall from IB Analysis II that the Picard-Lindel\"of theorem says that if $f$ is a Lipschitz function, then there exists some $\varepsilon > 0$ such that the ODE has a \emph{unique} solution in $(-\varepsilon, \varepsilon)$.
+
+What if $f$ is not Lipschitz? In that case, we can still get the following:
+\begin{thm}[Peano*]
+ Given $f$ continuous, there is some $\varepsilon > 0$ such that $x' = f(x)$ with initial condition $x(0) = x_0 \in \R$ has a solution in $(-\varepsilon, \varepsilon)$.
+\end{thm}
+Note that uniqueness is false. For example, if $x' = \sqrt{|x|}$, $x(0) = 0$, then $x(t) = 0$ and $x(t) = \frac{t^2}{4}$ (for $t \geq 0$) are both solutions.
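This non-uniqueness is easy to check numerically. A minimal sketch (for $t \geq 0$, comparing a central-difference derivative against $\sqrt{|x(t)|}$; the step size is an arbitrary choice):

```python
import math

# Hedged numerical check: both x(t) = 0 and x(t) = t^2/4 (for t >= 0)
# satisfy x' = sqrt(|x|) with x(0) = 0.
def residual(x, dx, t):
    # |x'(t) - sqrt(|x(t)|)|, with dx an approximation of x'(t)
    return abs(dx - math.sqrt(abs(x(t))))

h = 1e-6
zero = lambda s: 0.0
quad = lambda s: s * s / 4.0
for t in [0.5, 1.0, 2.0]:
    dz = (zero(t + h) - zero(t - h)) / (2 * h)  # central difference
    dq = (quad(t + h) - quad(t - h)) / (2 * h)
    assert residual(zero, dz, t) < 1e-6
    assert residual(quad, dq, t) < 1e-6
```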
+
+\begin{proof}(sketch)
+ We approximate $f$ by a sequence of continuously differentiable functions $f_n$ such that $\|f - f_n\|_{C(K)} \to 0$ for some $K\subseteq \R$. We use Picard-Lindel\"of to get a solution for all $n$. Then we use the ODE to get estimates for the solution. Finally, we can use Arzel\`a-Ascoli to extract a limit as $n \to \infty$. We can then show it is indeed a solution.
+\end{proof}
+
+\subsection{Stone--Weierstrass theorem}
+Here we will prove the Stone--Weierstrass theorem, which is a generalization of the classical Weierstrass approximation theorem.
+\begin{thm}[Weierstrass approximation theorem]
+ The set of polynomials is dense in $C([0, 1])$.
+\end{thm}
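One classical constructive proof of this theorem uses the Bernstein polynomials $B_n(f)(x) = \sum_{k = 0}^n f(k/n)\binom{n}{k}x^k(1 - x)^{n - k}$. The sketch below (a numerical illustration, not the argument given in these notes) checks the uniform approximation for the continuous but non-smooth function $|x - \frac{1}{2}|$:

```python
from math import comb

# Hedged sketch: Bernstein polynomials approximate a continuous function
# uniformly on [0, 1]; the error shrinks as the degree n grows.
def bernstein(f, n, x):
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)                   # continuous but not smooth
xs = [i / 100 for i in range(101)]
err = max(abs(bernstein(f, 100, x) - f(x)) for x in xs)
assert err < 0.05                            # uniform error at degree 100
```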
+This tells us that $C([0, 1])$ is not too ``big'', because it has a dense subset of ``nice'' functions.
+
+We will try to generalize this to more general domains. Note that in this section, real-valued and complex-valued continuous functions behave somewhat differently. So we will distinguish them, writing $C_{\R}(K)$ and $C_{\C}(K)$.
+
+To state this theorem, we need some definitions.
+
+\begin{defi}[Algebra]
+ A vector space $(V, +)$ is called an \emph{algebra} if there is an operation (called multiplication) $\ph: V\times V \to V$ such that $(V, +, \ph)$ is a \emph{rng} (i.e.\ a ring not necessarily with multiplicative identity). Also, $\lambda(\mathbf{v}\cdot \mathbf{w}) = (\lambda \mathbf{v})\cdot \mathbf{w} = \mathbf{v}\cdot (\lambda \mathbf{w})$ for all $\lambda \in \F$, $\mathbf{v}, \mathbf{w} \in V$.
+
+ If $V$ is in addition a normed vector space and
+ \[
+ \|\mathbf{v}\cdot \mathbf{w}\|_V \leq \|\mathbf{v}\|_V \cdot \|\mathbf{w}\|_V
+ \]
+ for all $\mathbf{v}, \mathbf{w} \in V$, then we say $V$ is a \emph{normed algebra}.
+
+ If $V$ is a complete normed algebra, we say $V$ is a \emph{Banach algebra}.
+
+ If $V$ is an algebra that is commutative as a rng, then we say $V$ is a \emph{commutative algebra}.
+
+ If $V$ is an algebra with multiplicative identity, then $V$ is a \emph{unital algebra}.
+\end{defi}
+
+\begin{eg}
+ $C(K)$ is a commutative, unital, Banach algebra.
+\end{eg}
+
+\begin{eg}
+ Recall from the example sheets that if $T_1, T_2: V\to V$, then
+ \[
+ \|T_1 \circ T_2\|_{\mathcal{B}(V, V)} \leq \|T_1\|_{\mathcal{B}(V, V)}\|T_2\|_{\mathcal{B}(V, V)}.
+ \]
+ So $\mathcal{B}(V, V)$ is a unital normed algebra.
+\end{eg}
+
+We will need this language to generalize the Weierstrass approximation theorem. The main problem in doing so is that we have to figure out what we can generalize polynomials to. This is why we need these funny definitions.
+
+\begin{thm}[Stone-Weierstrass theorem]
+ Let $K$ be compact, and $\mathcal{A} \subseteq C_{\R}(K)$ be a subalgebra (i.e.\ it is a subset that is closed under the operations) with the property that it separates points, i.e.\ for every $x, y \in K$ distinct, there exists some $f \in \mathcal{A}$ such that $f(x) \not= f(y)$. Then either $\bar{\mathcal{A}} = C_\R(K)$ or there is some $x_0 \in K$ such that
+ \[
+ \bar{\mathcal{A}} = \{f \in C_{\R}(K): f(x_0) = 0\}.
+ \]
+\end{thm}
+Note that it is not possible that every function in $\bar{\mathcal{A}}$ vanishes at $2$ or more points, since $\mathcal{A}$ separates points.
+
+This indeed implies the Weierstrass approximation theorem, since the polynomials separate points (consider the polynomial $x$), and the constant polynomial $1$ never vanishes, which rules out the second case. In fact, this also works for polynomials on compact subsets of $\R^n$.
+
+Note, however, that the second case of the Stone-Weierstrass theorem can actually happen. For example, consider $K = [0, 1]$, which is compact, and let $\mathcal{A}$ be the algebra generated by $x$. Then $\bar{\mathcal{A}} = \{f \in C_\R(K): f(0) = 0\}$.
+
+We will prove this using two lemmas:
+\begin{lemma}
+ Let $K$ be compact, and $\mathcal{L} \subseteq C_\R(K)$ be a subset which is closed under taking maximum and minimum, i.e.\ if $f, g \in \mathcal{L}$, then $\max\{f, g\} \in \mathcal{L}$ and $\min \{f, g\} \in \mathcal{L}$ (with $\max\{f, g\}$ defined by $\max\{f, g\}(x) = \max\{f(x), g(x)\}$, and similarly for the minimum).
+
+ Given $g \in C_\R(K)$, assume further that for any $\varepsilon > 0$ and $x, y \in K$, there exists $f_{x, y} \in \mathcal{L}$ such that
+ \[
+ |f_{x, y}(x) - g(x)| < \varepsilon,\quad |f_{x, y}(y) - g(y)| < \varepsilon.
+ \]
+ Then there exists some $f \in \mathcal{L}$ such that
+ \[
+ \|f - g\|_{C_\R(K)} < \varepsilon,
+ \]
+ i.e.\ $g \in \bar{\mathcal{L}}$.
+\end{lemma}
+This means that if we are allowed to take maxima and minima, then to approximate a function, we only need to be able to approximate it at any two points at a time.
+
+The idea is to next show that if $\mathcal{A}$ is a subalgebra, then $\bar{\mathcal{A}}$ is closed under taking maximum and minimum. Then use the ability to separate points to find $f_{x, y}$, and prove that we can approximate arbitrary functions.
+
+\begin{proof}
+ Let $g \in C_\R(K)$ and $\varepsilon > 0$ be given. So for every $x, y \in K$, there is some $f_{x, y} \in \mathcal{L}$ such that
+ \[
+ |f_{x, y}(x) - g(x)| < \varepsilon,\quad |f_{x, y}(y) - g(y)| < \varepsilon.
+ \]
+ \begin{claim}
+ For each $x \in K$, there exists $f_x \in \mathcal{L}$ such that $|f_x(x) - g(x)| < \varepsilon$ and $f_x(z) < g(z) + \varepsilon$ for all $z \in K$.
+ \end{claim}
+ Since $f_{x, y}$ is continuous, there is some $U_{x, y}$ containing $y$ open such that
+ \[
+ |f_{x, y}(z) - g(z)| < \varepsilon
+ \]
+ for all $z \in U_{x, y}$. Since
+ \[
+ \bigcup_{y \in K} U_{x, y} \supseteq K,
+ \]
+ by compactness of $K$, there exist some $y_1, \cdots, y_n$ such that
+ \[
+ \bigcup_{i = 1}^n U_{x, y_i}\supseteq K.
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (6, 0);
+ \draw [->] (0, 0) -- (0, 4);
+
+ \draw [thick, mblue](0.5, 1.2) .. controls (1.5, 2.4) .. (3, 1.6) .. controls (4, 1.2) .. (5.5, 3.6) node [right] {$g$};
+ \draw [dashed, mgreen, yshift=0.5cm] (0.5, 1.2) .. controls (1.5, 2.4) .. (3, 1.6) .. controls (4, 1.2) .. (5.5, 3.6) node [right] {$g + \varepsilon$};
+
+ \node [circ] at (3, 1.6) {};
+ \node [below] at (3, 1.6) {\small$x$};
+
+ \node [circ, gray] at (1, 1.78) {};
+ \node [below, gray] at (1, 1.78) {\small\;\;$y_1$};
+ \node [circ, gray] at (2.1, 2.06) {};
+ \node [below, gray] at (2.1, 2.06) {\small$y_2$};
+
+ \node [circ, gray] at (4.7, 2.34) {};
+ \node [below, gray] at (4.7, 2.34) {\small$\;\;\;\;y_3$};
+
+ \draw [morange] plot [smooth, tension=0.8] coordinates {(0.5, 1.15) (1.7, 2.9) (3, 1.5) (4.2, 1.2) (5.5, 1.3)};
+ \node [right, morange] at (5.5, 1.3) {$f_{y_1}$};
+
+ \draw [mred!50!morange] plot [smooth, tension=0.8] coordinates {(0.5, 2.5) (2.1, 1.8) (3.5, 1.8) (5.5, 2.2)};
+ \node [right, mred!50!morange] at (5.5, 2.2) {$f_{y_2}$};
+
+ \draw [mred] plot [smooth, tension=0.8] coordinates {(0.5, 3.5) (3.5, 1.4) (5.5, 3.8)};
+ \node [above, mred] at (0.5, 3.5) {$f_{y_3}$};
+ \end{tikzpicture}
+ \end{center}
+ We then let
+ \[
+ f_x(z) = \min\{f_{x, y_1}(z), \cdots, f_{x, y_n}(z)\}
+ \]
+ for every $z \in K$. We then see that this works. Indeed, by assumption, $f_x \in \mathcal{L}$. If $z \in K$ is some arbitrary point, then $z \in U_{x, y_i}$ for some $i$. Then
+ \[
+ f_{x, y_i}(z) < g(z) + \varepsilon.
+ \]
+ Hence, since $f_x$ is the minimum of all such $f_{x, y_i}$, for any $z$, we have
+ \[
+ f_x(z) < g(z) + \varepsilon.
+ \]
+ The property at $x$ is also clear.
+ \begin{claim}
+ There exists $f \in \mathcal{L}$ such that $|f(z) - g(z)| < \varepsilon$ for all $z \in K$.
+ \end{claim}
+ We are going to play the same game with this. By continuity of $f_x$, there is some open $V_x$ containing $x$ such that
+ \[
+ |f_x(w) - g(w)| < \varepsilon
+ \]
+ for all $w \in V_x$. Since
+ \[
+ \bigcup_{x\in K} V_x \supseteq K,
+ \]
+ by compactness of $K$, there is some $\{x_1, \cdots, x_m\}$ such that
+ \[
+ \bigcup_{j = 1}^m V_{x_j} \supseteq K.
+ \]
+ Define
+ \[
+ f(z) = \max\{f_{x_1}(z), \cdots, f_{x_m}(z)\}.
+ \]
+ Again, by assumption, $f \in \mathcal{L}$. Moreover, if $z \in K$, then $z \in V_{x_j}$ for some $j$, so $f_{x_j}(z) > g(z) - \varepsilon$. Since $f$ is the maximum of the $f_{x_j}$, we know that
+ \[
+ f(z) > g(z) - \varepsilon.
+ \]
+ We still have our first bound
+ \[
+ f(z) < g(z) + \varepsilon.
+ \]
+ Therefore we have
+ \[
+ \|f - g\|_{C_\R (K)} < \varepsilon.\qedhere
+ \]
+\end{proof}
+
+\begin{lemma}
+ Let $\mathcal{A}\subseteq C_\R(K)$ be a subalgebra that is a closed subset in the topology of $C_\R(K)$. Then $\mathcal{A}$ is closed under taking maximum and minimum.
+\end{lemma}
+
+\begin{proof}
+ First note that
+ \begin{align*}
+ \max\{f(x), g(x)\} &= \frac{1}{2}(f(x) + g(x)) + \frac{1}{2}|f(x) - g(x)|,\\
+ \min\{f(x), g(x)\} &= \frac{1}{2}(f(x) + g(x)) - \frac{1}{2}|f(x) - g(x)|.
+ \end{align*}
+ Since $\mathcal{A}$ is an algebra, it suffices to show that $f \in \mathcal{A}$ implies $|f| \in \mathcal{A}$ for every $f$ such that $\|f\|_{C_\R(K)} \leq 1$.
+
+ The key observation is the following: consider the function $h(x) = \sqrt{x + \varepsilon^2}$. Then $h(x^2)$ approximates $|x|$. This has the property that the Taylor expansion of $h(x)$ centered at $x = \frac{1}{2}$ is uniformly convergent for $x \in [0, 1]$. Therefore there exists a polynomial $S(x)$ such that
+ \[
+ |S(x) - \sqrt{x + \varepsilon^2}| < \varepsilon
+ \]
+ for all $x \in [0, 1]$.
+ Now note that $S(x) - S(0)$ is a polynomial with no constant term. Therefore, since $\mathcal{A}$ is an algebra, if $f \in \mathcal{A}$, then $S(f^2) - S(0) \in \mathcal{A}$ by closure.
+
+ Now look at
+ \[
+ \||f| - (S(f^2) - S(0))\|_{C_\R(K)} \leq \||f| - \sqrt{f^2 + \varepsilon^2}\| + \|\sqrt{f^2 + \varepsilon^2} - S(f^2)\| + \|S(0)\|.
+ \]
+ We will make each individual term small. For the first term, note that
+ \[
+ \sup_{x \in [0, 1]} |x - \sqrt{x^2 + \varepsilon^2}| = \sup_{x \in [0, 1]} \frac{\varepsilon^2}{|x + \sqrt{x^2 + \varepsilon^2}|} = \varepsilon.
+ \]
+ So the first term is at most $\varepsilon$. The second term is also easy: since $\|f\|_{C_\R(K)} \leq 1$, we have $f(z)^2 \in [0, 1]$ for all $z \in K$, and $S$ is chosen such that $|S(x) - \sqrt{x + \varepsilon^2}| < \varepsilon$ for $x \in [0, 1]$. So it is again bounded by $\varepsilon$.
+
+ By the same formula, $|S(0) - \sqrt{0 + \varepsilon^2}| < \varepsilon$. So $|S(0)| < 2\varepsilon$. So
+ \[
+ \||f| - (S(f^2) - S(0))\|_{C_\R(K)} < 4\varepsilon.
+ \]
+ Since $\varepsilon > 0$ was arbitrary and $\mathcal{A}$ is closed in the topology of $C_{\R}(K)$, it follows that $f\in \mathcal{A}$ and $\|f\|_{C_\R(K)} \leq 1$ together imply $|f| \in \mathcal{A}$.
+\end{proof}
+Note that if we have already proven the classical Weierstrass approximation theorem, we can just use it to get a polynomial approximation for $|f|$ directly.
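+The key step of the proof is easy to experiment with numerically. The sketch below (Python with NumPy; purely illustrative, not part of the notes) fits a polynomial $S$ to $\sqrt{t + \varepsilon^2}$ on $[0, 1]$, using a Chebyshev least-squares fit in place of the Taylor series about $\frac{1}{2}$, and checks that $S(f^2) - S(0)$ approximates $|f|$ uniformly for $|f| \leq 1$, within the $4\varepsilon$ bound of the lemma.

```python
import numpy as np

eps = 0.05
t = np.linspace(0.0, 1.0, 2001)

# Fit a polynomial S(t) to sqrt(t + eps^2) on [0, 1].  The proof uses the
# Taylor series about t = 1/2; any uniform polynomial approximation works.
S = np.polynomial.Chebyshev.fit(t, np.sqrt(t + eps**2), deg=30)

# S(f^2) - S(0) is a polynomial in f with no constant term, and it
# approximates |f| uniformly on [-1, 1].
f = np.linspace(-1.0, 1.0, 4001)
sup_err = np.max(np.abs(np.abs(f) - (S(f**2) - S(0.0))))
```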
+
+We will now combine both lemmas and prove the Stone-Weierstrass theorem.
+\begin{thm}[Stone-Weierstrass theorem]
+ Let $K$ be compact, and $\mathcal{A} \subseteq C_{\R}(K)$ be a subalgebra (i.e.\ it is a subset that is closed under the operations) with the property that it separates points, i.e.\ for every $x, y \in K$ distinct, there exists some $f \in \mathcal{A}$ such that $f(x) \not= f(y)$. Then either $\bar{\mathcal{A}} = C_\R(K)$ or there is some $x_0 \in K$ such that
+ \[
+ \bar{\mathcal{A}} = \{f \in C_{\R}(K): f(x_0) = 0\}.
+ \]
+\end{thm}
+
+\begin{proof}
+ Note that there are two possible outcomes. We will first look at the first possibility.
+
+ Consider the case where for all $x \in K$, there is some $f \in \mathcal{A}$ such that $f(x) \not= 0$. Let $g \in C_{\R}(K)$ be given. By our previous lemmas, to approximate $g$ in $\bar{\mathcal{A}}$, we just need to show that we can approximate $g$ at two points. So given any $\varepsilon > 0$, $x, y \in K$, we want to find $f_{x, y} \in \mathcal{A}$ such that
+ \[
+ |f_{x, y}(x) - g(x)| < \varepsilon,\quad |f_{x, y}(y) - g(y)| < \varepsilon. \tag{$*$}
+ \]
+ For every $x, y \in K$, $x \not= y$, we first show that there exists $h_{x, y} \in \mathcal{A}$ such that $h_{x, y}(x) \not= 0$, $h_{x, y}(y) \not= 0$, and $h_{x, y}(x) \not= h_{x, y}(y)$. This is easy to see. By our assumptions, we can find the following functions:
+ \begin{enumerate}
+ \item There exists $h_{x, y}^{(1)}$ such that $h_{x, y}^{(1)}(x) \not= h_{x, y}^{(1)}(y)$.
+ \item There exists $h_{x, y}^{(2)}$ such that $h_{x, y}^{(2)}(x) \not= 0$.
+ \item There exists $h_{x, y}^{(3)}$ such that $h_{x, y}^{(3)}(y) \not= 0$.
+ \end{enumerate}
+ Then it is an easy exercise to show that some linear combination $h_{x, y}$ of $h_{x, y}^{(1)}$, $h_{x, y}^{(2)}$ and $h_{x, y}^{(3)}$ works.
+
+ We will want to find our $f_{x, y}$ that satisfies $(*)$. But we will do better. We will make it equal $g$ on $x$ and $y$. The idea is to take linear combinations of $h_{x, y}$ and $h_{x, y}^2$. Instead of doing the messy algebra to show that we can find a working linear combination, just notice that $(h_{x, y}(x), h_{x, y}(y))$ and $(h_{x, y}(x)^2, h_{x, y}(y)^2)$ are linearly independent vectors in $\R^2$. Therefore there exist $\alpha, \beta \in \R$ such that
+ \[
+ \alpha(h_{x, y}(x), h_{x, y}(y)) + \beta(h_{x, y}(x)^2, h_{x, y}(y)^2) = (g(x), g(y)).
+ \]
+ So done.
+
+ In the other case, given $\mathcal{A}$, suppose there is $x_0 \in K$ such that $f(x_0) = 0$ for all $f \in \mathcal{A}$. Consider the algebra
+ \[
+ \mathcal{A}' = \mathcal{A} + \R 1 = \{f + \lambda 1: f \in \mathcal{A}, \lambda \in \R\}.
+ \]
+ Since $\mathcal{A}$ separates points, and for any $x \in K$, there is some $f \in \mathcal{A}'$ such that $f(x) \not= 0$ (e.g.\ $f = 1$), by the previous part, we know that $\bar{\mathcal{A}}' = C_{\R}(K)$.
+
+ Now note that
+ \[
+ \bar{\mathcal{A}} \subseteq \{f \in C_\R(K): f(x_0) = 0\} = B.
+ \]
+ So it suffices to show that we have equality, i.e.\ for any $g \in B$ and $\varepsilon > 0$, there is some $f \in \mathcal{A}$ such that
+ \[
+ \|f - g\|_{C_\R(K)} < \varepsilon.
+ \]
+ Since $\bar{\mathcal{A}}' = C_\R(K)$, given such $g$ and $\varepsilon$, there is some $f \in \mathcal{A}$ and $\lambda \in \R$ such that
+ \[
+ \|g - (f + \lambda 1)\|_{C_{\R}(K)} < \varepsilon.
+ \]
+ But $g(x_0) = f(x_0) = 0$, which implies that $|\lambda| < \varepsilon$. Therefore $\|g - f\|_{C_\R(K)} < 2 \varepsilon$. So done.
+\end{proof}
+
+What happens for $C_\C(K)$?
+\begin{eg}
+ Let $K = \overline{B(0, 1)} \subseteq \C$, and let $\mathcal{A}$ be the set of polynomials on $\overline{B(0, 1)}$. We will show that $\bar{\mathcal{A}} \not= C_{\C}(\overline{B(0, 1)})$.
+
+ Consider $f(z) = \bar{z}$. This is not in the closure of $\mathcal{A}$, since this is not holomorphic, but the uniform limit of any sequence of holomorphic functions is holomorphic (by Morera's theorem, i.e.\ $f$ is holomorphic iff $\oint_\gamma f(z) \;\d z = 0$ for all closed piecewise smooth curves $\gamma$).
+\end{eg}
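+We can also see the obstruction quantitatively in the $L^2$ sense: on the unit circle, $\bar{z} = z^{-1}$ is orthogonal to every power $z^n$ with $n \geq 0$, so the mean-square distance from $\bar{z}$ to any polynomial $p(z) = \sum a_n z^n$ is $1 + \sum |a_n|^2 \geq 1$, and hence the uniform distance is at least $1$ as well. A quick numerical check (Python with NumPy; an illustrative sketch, not part of the notes):

```python
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
z = np.exp(1j * theta)

rng = np.random.default_rng(0)
msds = []
for _ in range(5):
    a = rng.normal(size=4) + 1j * rng.normal(size=4)  # random p(z) of degree 3
    p = np.polyval(a, z)
    # mean-square distance; orthogonality of {z^n} gives 1 + sum |a_n|^2
    msds.append(np.mean(np.abs(np.conj(z) - p) ** 2))
min_msd = min(msds)  # always at least 1
```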
+Hence we need an additional condition on this. It turns out that this rather simple example is the only way in which things can break. We have the following:
+
+\begin{thm}[Complex version of Stone-Weierstrass theorem]
+ Let $K$ be compact and $\mathcal{A} \subseteq C_\C(K)$ be a subalgebra over $\C$ which separates points and is closed under complex conjugation (i.e.\ if $f \in \mathcal{A}$, then $\bar{f} \in \mathcal{A}$). Then either $\bar{\mathcal{A}} = C_\C(K)$ or there is an $x_0$ such that $\bar{\mathcal{A}} = \{f \in C_\C(K): f(x_0) = 0\}$.
+\end{thm}
+
+\begin{proof}
+ It suffices to show that either $\bar{\mathcal{A}} \supseteq C_{\R}(K)$ or there exists a point $x_0$ such that $\bar{\mathcal{A}} \supseteq \{f \in C_{\R}(K): f(x_0) = 0\}$, since we can always break a complex function up into its real and imaginary parts.
+
+ Now consider
+ \[
+ \mathcal{A}' = \left\{\frac{f + \bar{f}}{2}: f \in \mathcal{A}\right\} \cup \left\{\frac{f - \bar{f}}{2i}: f \in \mathcal{A}\right\}.
+ \]
+ Now note that by closure of $\mathcal{A}$ under complex conjugation, we know that $\mathcal{A}'$ is a subset of $\mathcal{A}$ and is a subalgebra of $C_\R(K)$ over $\R$, which separates points. Hence by the real version of Stone-Weierstrass, either $\bar{\mathcal{A}'} = C_\R(K)$ or there is some $x_0$ such that $\bar{\mathcal{A}'} = \{f \in C_\R(K): f(x_0) = 0\}$. So done.
+\end{proof}
+
+\section{Hilbert spaces}
+\subsection{Inner product spaces}
+We have just looked at continuous functions on some compact space $K$. Another important space is the space of square-integrable functions. Consider the space
+\[
+ L^2(\R) = \left\{f: f\text{ is Lebesgue measurable}, \int |f|^2 < \infty\right\} / {\sim},
+\]
+where $f \sim g$ if $f = g$ Lebesgue almost everywhere.
+
+One thing we like to think about is the Fourier series. Recall that for $f \in C(S^1)$, we have defined, for each $k \in \Z$,
+\[
+ \hat{f}(k) = \frac{1}{2\pi} \int_{-\pi}^\pi e^{-ikx}f(x)\;\d x,
+\]
+and we have defined the partial sum
+\[
+ S_N(f)(x) = \sum_{k = -N}^N e^{ikx} \hat{f}(k).
+\]
+We have previously seen that even if $f$ is continuous, it is possible that the partial sums $S_N$ do not converge, even pointwise. However, we can ask for something weaker:
+
+\begin{prop}
+ Let $f \in C(S^1)$. Then
+ \[
+ \lim_{N \to \infty} \frac{1}{2\pi}\int_{-\pi}^\pi |f(x) - S_N(f)(x)|^2 \;\d x = 0.
+ \]
+\end{prop}
+We will prove this later. However, the key point of the proof is the ``orthogonality'' of $\{e^{inx}\}$. More precisely, we have
+\[
+ \frac{1}{2\pi} \int_{-\pi}^\pi e^{inx}e^{-imx} \;\d x = 0\text{ if } m\not= n.
+\]
+The introduction of Hilbert spaces is in particular a way to put this in a general framework. We want to introduce an extra structure that gives rise to ``orthogonality''.
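+Both the orthogonality relation and the $L^2$ convergence of $S_N(f)$ are easy to check numerically. The sketch below (Python with NumPy; the test function $f(x) = |x|$ is an illustrative choice, not from the notes) computes $\hat{f}(k)$ by quadrature and observes the $L^2$ error of the partial sums decreasing:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 8192, endpoint=False)
f = np.abs(x)  # a continuous 2*pi-periodic function

# orthogonality: (1/2pi) * integral of e^{inx} e^{-imx} vanishes for n != m
ortho = abs(np.mean(np.exp(1j * 3 * x) * np.exp(-1j * 5 * x)))

def l2_error(N):
    # partial Fourier sum S_N(f), coefficients computed by quadrature
    S = np.zeros_like(x, dtype=complex)
    for k in range(-N, N + 1):
        fk = np.mean(f * np.exp(-1j * k * x))  # approximates \hat{f}(k)
        S += fk * np.exp(1j * k * x)
    return np.sqrt(np.mean(np.abs(f - S) ** 2))

errs = [l2_error(N) for N in (1, 4, 16, 64)]  # decreasing towards 0
```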
+
+\begin{defi}[Inner product]
+ Let $V$ be a vector space over $\R$ or $\C$. We say $p: V \times V \to \R$ or $\C$ is an \emph{inner product} on $V$ if it satisfies
+ \begin{enumerate}
+ \item $p(\mathbf{v}, \mathbf{w}) = \overline{p(\mathbf{w}, \mathbf{v})}$ for all $\mathbf{v}, \mathbf{w} \in V$.\hfill(conjugate symmetry)
+ \item $p(\lambda_1 \mathbf{v}_1 + \lambda_2 \mathbf{v}_2, \mathbf{w}) = \lambda_1 p(\mathbf{v}_1, \mathbf{w}) + \lambda_2 p(\mathbf{v}_2, \mathbf{w})$. \hfill(linearity in first argument)
+ \item $p(\mathbf{v}, \mathbf{v}) \geq 0$ for all $\mathbf{v} \in V$ and equality holds iff $\mathbf{v} = \mathbf{0}$.\hfill(non-negativity)
+ \end{enumerate}
+ We will often denote an inner product by $p(\mathbf{v}, \mathbf{w}) = \bra \mathbf{v}, \mathbf{w}\ket$. We call $(V, \bra\ph ,\ph \ket)$ an \emph{inner product space}.
+\end{defi}
+
+\begin{defi}[Orthogonality]
+ In an inner product space, $\mathbf{v}$ and $\mathbf{w}$ are \emph{orthogonal} if $\bra \mathbf{v}, \mathbf{w} \ket = 0$.
+\end{defi}
+
+Orthogonality and the inner product are important when dealing with vector spaces. For example, recall that when working with finite-dimensional spaces, we had things like Hermitian matrices, orthogonal matrices and normal matrices. All these are in some sense defined in terms of the inner product and orthogonality. More fundamentally, when we have a finite-dimensional vector space, we often write the vectors as a set of $n$ coordinates. To define this coordinate system, we start by picking $n$ orthonormal vectors, and then the coordinates are just the projections onto these vectors.
+
+Hopefully, you are convinced that inner products are important. So let's see what we can get if we put inner products on arbitrary vector spaces.
+
+We will look at some easy properties of the inner product.
+\begin{prop}[Cauchy-Schwarz inequality]
+ Let $(V, \bra \ph,\ph\ket)$ be an inner product space. Then for all $\mathbf{v}, \mathbf{w} \in V$,
+ \[
+ |\bra \mathbf{v}, \mathbf{w}\ket| \leq \sqrt{\bra \mathbf{v}, \mathbf{v}\ket \bra \mathbf{w}, \mathbf{w}\ket},
+ \]
+ with equality iff there is some $\lambda \in \R$ or $\C$ such that $\mathbf{v} = \lambda \mathbf{w}$ or $\mathbf{w} = \lambda \mathbf{v}$.
+\end{prop}
+
+\begin{proof}
+ Wlog, we can assume $\mathbf{w} \not= \mathbf{0}$; otherwise, the inequality is trivial. Moreover, we can assume $\bra \mathbf{v}, \mathbf{w}\ket \in \R$; otherwise, we can just multiply $\mathbf{w}$ by some $e^{i\alpha}$.
+
+ By non-negativity, we know that for all $t$, we have
+ \begin{align*}
+ 0 &\leq \bra \mathbf{v} + t\mathbf{w}, \mathbf{v} + t\mathbf{w}\ket\\
+ &= \bra \mathbf{v}, \mathbf{v}\ket + 2t\bra \mathbf{v}, \mathbf{w}\ket + t^2 \bra \mathbf{w}, \mathbf{w}\ket.
+ \end{align*}
+ Therefore, the discriminant of this quadratic polynomial in $t$ is non-positive, i.e.
+ \[
+ 4(\bra \mathbf{v}, \mathbf{w}\ket)^2 - 4\bra \mathbf{v}, \mathbf{v}\ket \bra \mathbf{w}, \mathbf{w}\ket \leq 0,
+ \]
+ from which the result follows.
+
+ Finally, note that if equality holds, then the discriminant is $0$. So the quadratic has exactly one root. So there exists $t$ such that $\bra \mathbf{v} + t\mathbf{w}, \mathbf{v} + t\mathbf{w}\ket = 0$, which forces $\mathbf{v} + t\mathbf{w} = \mathbf{0}$, i.e.\ $\mathbf{v} = -t\mathbf{w}$.
+\end{proof}
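+As a quick sanity check, we can verify the inequality and its equality case in $\C^n$ (Python with NumPy; an illustrative sketch, not part of the notes; note that \texttt{np.vdot(w, v)} computes $\sum_i v_i \overline{w_i}$, matching linearity in the first argument):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=8) + 1j * rng.normal(size=8)
w = rng.normal(size=8) + 1j * rng.normal(size=8)

# <v, w> = sum_i v_i * conj(w_i): np.vdot conjugates its first argument
slack = np.linalg.norm(v) * np.linalg.norm(w) - abs(np.vdot(w, v))

# equality holds iff the vectors are proportional
w2 = (2 - 3j) * v
eq_gap = abs(np.linalg.norm(v) * np.linalg.norm(w2) - abs(np.vdot(w2, v)))
```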
+
+\begin{prop}
+ Let $(V, \bra \ph, \ph\ket)$ be an inner product space. Then
+ \[
+ \|\mathbf{v}\| = \sqrt{\bra \mathbf{v}, \mathbf{v}\ket}
+ \]
+ defines a norm.
+\end{prop}
+
+\begin{proof}
+ The first two axioms of the norm are easy to check, since it follows directly from definition of the inner product that $\|\mathbf{v}\| \geq 0$ with equality iff $\mathbf{v} = \mathbf{0}$, and $\|\lambda \mathbf{v}\| = |\lambda| \|\mathbf{v}\|$.
+
+ The only non-trivial thing to check is the triangle inequality. We have
+ \begin{align*}
+ \|\mathbf{v} + \mathbf{w}\|^2 &= \bra \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w}\ket\\
+ &= \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2 + 2\Re \bra \mathbf{v}, \mathbf{w}\ket\\
+ &\leq \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2 + 2|\bra \mathbf{v}, \mathbf{w}\ket|\\
+ &\leq \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2 + 2\|\mathbf{v}\|\|\mathbf{w}\|\\
+ &= (\|\mathbf{v}\| + \|\mathbf{w}\|)^2,
+ \end{align*}
+ where the final inequality is Cauchy-Schwarz. Hence we know that $\|\mathbf{v} + \mathbf{w}\| \leq \|\mathbf{v} \| + \|\mathbf{w}\|$.
+\end{proof}
+
+This motivates the following definition:
+\begin{defi}[Euclidean space]
+ A normed vector space $(V, \|\ph\|)$ is a \emph{Euclidean space} if there exists an inner product $\bra \ph,\ph\ket$ such that
+ \[
+ \|\mathbf{v}\| = \sqrt{\bra \mathbf{v}, \mathbf{v}\ket}.
+ \]
+\end{defi}
+
+\begin{prop}
+ Let $(E, \|\ph\|)$ be a Euclidean space. Then there is a \emph{unique} inner product $\bra \ph,\ph\ket$ such that $\|\mathbf{v}\| = \sqrt{\bra \mathbf{v}, \mathbf{v}\ket}$.
+\end{prop}
+
+\begin{proof}
+ The real and complex cases are slightly different.
+
+ First suppose $E$ is a vector space over $\R$, and suppose also that we have an inner product $\bra \ph,\ph\ket$ such that $\|\mathbf{v}\| = \sqrt{\bra \mathbf{v}, \mathbf{v}\ket }$. Then
+ \[
+ \bra \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w}\ket = \|\mathbf{v}\|^2 + 2\bra \mathbf{v}, \mathbf{w}\ket + \|\mathbf{w}\|^2.
+ \]
+ So we get
+ \[
+ \bra \mathbf{v}, \mathbf{w}\ket = \frac{1}{2} (\|\mathbf{v} + \mathbf{w}\|^2 - \|\mathbf{v}\|^2 - \|\mathbf{w}\|^2).\tag{$*$}
+ \]
+ In particular, the inner product is completely determined by the norm. So this must be unique.
+
+ Now suppose $E$ is a vector space over $\C$. We have
+ \begin{align*}
+ \bra \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w}\ket &= \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2 + \bra \mathbf{v}, \mathbf{w}\ket + \bra \mathbf{w}, \mathbf{v}\ket\tag{1}\\
+ \bra \mathbf{v} - \mathbf{w}, \mathbf{v} - \mathbf{w}\ket &= \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2 - \bra \mathbf{v}, \mathbf{w}\ket - \bra \mathbf{w}, \mathbf{v}\ket\tag{2}\\
+ \bra \mathbf{v} +i\mathbf{w}, \mathbf{v} +i\mathbf{w}\ket &= \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2 -i\bra \mathbf{v}, \mathbf{w}\ket +i\bra \mathbf{w}, \mathbf{v}\ket\tag{3}\\
+ \bra \mathbf{v} -i\mathbf{w}, \mathbf{v} -i\mathbf{w}\ket &= \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2 +i\bra \mathbf{v}, \mathbf{w}\ket -i\bra \mathbf{w}, \mathbf{v}\ket\tag{4}
+ \end{align*}
+ Now consider $(1) - (2) + i(3) - i(4)$. Then we obtain
+ \[
+ \|\mathbf{v} + \mathbf{w}\|^2 - \|\mathbf{v} - \mathbf{w}\|^2 + i\|\mathbf{v} + i\mathbf{w}\|^2 - i\|\mathbf{v} - i\mathbf{w}\|^2 = 4\bra \mathbf{v}, \mathbf{w}\ket.\tag{$\dagger$}
+ \]
+ So $\bra \mathbf{v}, \mathbf{w}\ket$ is again completely determined by the norm.
+\end{proof}
+The identities $(*)$ and $(\dagger)$ are sometimes known as the \emph{polarization identities}.
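+The identity $(\dagger)$ can be sanity-checked directly in $\C^n$ (Python with NumPy; an illustrative sketch, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.normal(size=5) + 1j * rng.normal(size=5)
w = rng.normal(size=5) + 1j * rng.normal(size=5)

norm = np.linalg.norm
# right-hand side of the polarization identity, built from norms only
recovered = (norm(v + w) ** 2 - norm(v - w) ** 2
             + 1j * norm(v + 1j * w) ** 2 - 1j * norm(v - 1j * w) ** 2) / 4
direct = np.vdot(w, v)  # <v, w> = sum_i v_i * conj(w_i)
polar_err = abs(recovered - direct)
```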
+
+\begin{defi}[Hilbert space]
+ A Euclidean space $(E, \|\ph\|)$ is a \emph{Hilbert space} if it is complete.
+\end{defi}
+
+We will prove certain properties of the inner product.
+\begin{prop}[Parallelogram law]
+ Let $(E, \|\ph\|)$ be a Euclidean space. Then for $\mathbf{v}, \mathbf{w} \in E$, we have
+ \[
+ \|\mathbf{v} - \mathbf{w}\|^2 + \|\mathbf{v} + \mathbf{w}\|^2 = 2\|\mathbf{v}\|^2 + 2\|\mathbf{w}\|^2.
+ \]
+\end{prop}
+This is called the parallelogram law because it says that for any parallelogram, the sum of the squares of the lengths of the two diagonals equals the sum of the squares of the lengths of all four sides.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (2, 0) -- (2.5, 1) -- (0.5, 1);
+ \draw [->] (0, 0) -- (2, 0) node [right] {$\mathbf{v}$};
+ \draw [->] (0, 0) -- (0.5, 1) node [above] {$\mathbf{w}$};
+ \draw [mred, ->] (0, 0) -- (2.5, 1) node [right] {$\mathbf{v} + \mathbf{w}$};
+ \draw [mred, ->] (0.5, 1) -- (2, 0) node [below] {$\mathbf{v} - \mathbf{w}$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{proof}
+ This is just simple algebraic manipulation. We have
+ \begin{align*}
+ \|\mathbf{v} - \mathbf{w}\|^2 + \|\mathbf{v} + \mathbf{w}\|^2 &= \bra \mathbf{v} - \mathbf{w}, \mathbf{v} - \mathbf{w}\ket + \bra \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w}\ket\\
+ &= \bra \mathbf{v}, \mathbf{v}\ket - \bra \mathbf{v}, \mathbf{w}\ket - \bra \mathbf{w}, \mathbf{v}\ket + \bra \mathbf{w}, \mathbf{w}\ket\\
+ &+ \bra \mathbf{v}, \mathbf{v}\ket + \bra \mathbf{v}, \mathbf{w}\ket + \bra \mathbf{w}, \mathbf{v}\ket + \bra \mathbf{w}, \mathbf{w}\ket\\
+ &= 2\bra \mathbf{v}, \mathbf{v}\ket + 2 \bra \mathbf{w}, \mathbf{w}\ket.\qedhere
+ \end{align*}
+\end{proof}
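+The parallelogram law is special to norms induced by an inner product: it holds exactly for the Euclidean norm, but generically fails for, say, the sup norm, which gives a quick way to see that the sup norm is not Euclidean. A numerical illustration (Python with NumPy; an illustrative sketch, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(3)
v = rng.normal(size=6)
w = rng.normal(size=6)

def defect(norm):
    # left-hand side minus right-hand side of the parallelogram law
    return norm(v - w) ** 2 + norm(v + w) ** 2 - 2 * norm(v) ** 2 - 2 * norm(w) ** 2

l2_defect = abs(defect(np.linalg.norm))                        # ~ 0
sup_defect = abs(defect(lambda u: np.linalg.norm(u, np.inf)))  # generically nonzero
```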
+
+\begin{prop}[Pythagoras theorem]
+ Let $(E, \|\ph\|)$ be a Euclidean space, and let $\mathbf{v}, \mathbf{w}\in E$ be orthogonal. Then
+ \[
+ \|\mathbf{v} + \mathbf{w}\|^2 = \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2.
+ \]
+\end{prop}
+
+\begin{proof}
+ \begin{align*}
+ \|\mathbf{v} + \mathbf{w}\|^2 &= \bra \mathbf{v} + \mathbf{w}, \mathbf{v} + \mathbf{w}\ket\\
+ &= \bra \mathbf{v}, \mathbf{v}\ket + \bra \mathbf{v}, \mathbf{w}\ket + \bra \mathbf{w}, \mathbf{v} \ket + \bra \mathbf{w}, \mathbf{w}\ket\\
+ &= \bra \mathbf{v}, \mathbf{v}\ket + 0 + 0 + \bra \mathbf{w}, \mathbf{w}\ket\\
+ &= \|\mathbf{v}\|^2 + \|\mathbf{w}\|^2.\qedhere
+ \end{align*}
+\end{proof}
+By induction, if $\mathbf{v}_i \in E$ for $i = 1,\cdots, n$ such that $\bra \mathbf{v}_i, \mathbf{v}_j\ket = 0$ for $i \not= j$, i.e.\ they are mutually orthogonal, then
+\[
+ \left\|\sum_{i = 1}^n \mathbf{v}_i\right\|^2 = \sum_{i = 1}^n \|\mathbf{v}_i\|^2.
+\]
+\begin{prop}
+ Let $(E, \|\ph\|)$ be a Euclidean space. Then $\bra \ph, \ph\ket: E \times E \to \C$ is continuous.
+\end{prop}
+
+\begin{proof}
+ Let $(\mathbf{v}, \mathbf{w}) \in E\times E$, and $(\tilde{\mathbf{v}}, \tilde{\mathbf{w}}) \in E\times E$. We have
+ \begin{align*}
+ |\bra \mathbf{v}, \mathbf{w}\ket - \bra \tilde{\mathbf{v}}, \tilde{\mathbf{w}}\ket| &= |\bra \mathbf{v}, \mathbf{w}\ket - \bra \mathbf{v}, \tilde{\mathbf{w}}\ket + \bra \mathbf{v}, \tilde{\mathbf{w}}\ket - \bra \tilde{\mathbf{v}}, \tilde{\mathbf{w}}\ket|\\
+ &\leq |\bra \mathbf{v}, \mathbf{w}\ket - \bra \mathbf{v}, \tilde{\mathbf{w}}\ket| + |\bra \mathbf{v}, \tilde{\mathbf{w}}\ket - \bra \tilde{\mathbf{v}}, \tilde{\mathbf{w}}\ket|\\
+ &= |\bra \mathbf{v}, \mathbf{w} - \tilde{\mathbf{w}}\ket | + |\bra \mathbf{v} - \tilde{\mathbf{v}}, \tilde{\mathbf{w}}\ket |\\
+ &\leq \|\mathbf{v}\| \|\mathbf{w} - \tilde{\mathbf{w}}\| + \|\mathbf{v} - \tilde{\mathbf{v}}\|\|\tilde{\mathbf{w}}\|
+ \end{align*}
+ Hence for $(\mathbf{v}, \mathbf{w})$ sufficiently close to $(\tilde{\mathbf{v}}, \tilde{\mathbf{w}})$, we can make $|\bra \mathbf{v}, \mathbf{w}\ket - \bra \tilde{\mathbf{v}}, \tilde{\mathbf{w}}\ket|$ arbitrarily small. So it is continuous.
+\end{proof}
+
+When we have an incomplete Euclidean space, we can of course take the completion of it to form a complete extension of the original normed vector space. However, it is not immediately obvious that the inner product can also be extended to the completion to give a Hilbert space. The following proposition tells us we can do so.
+\begin{prop}
+ Let $(E, \|\ph\|)$ denote a Euclidean space, and $\bar{E}$ its completion. Then the inner product extends to an inner product on $\bar{E}$, turning $\bar{E}$ into a Hilbert space.
+\end{prop}
+
+\begin{proof}
+ Recall we constructed the completion of a space as the equivalence classes of Cauchy sequences (where two Cauchy sequences $(x_n)$ and $(x_n')$ are equivalent if $\|x_n - x_n'\| \to 0$). Let $(\mathbf{x}_n), (\mathbf{y}_n)$ be two Cauchy sequences in $E$, and let $\tilde{\mathbf{x}}, \tilde{\mathbf{y}} \in \bar{E}$ denote their equivalence classes. We define the inner product as
+ \[
+ \bra \tilde{\mathbf{x}}, \tilde{\mathbf{y}}\ket = \lim_{n \to \infty} \bra \mathbf{x}_n,\mathbf{y}_n\ket.\tag{$*$}
+ \]
+ We want to show this is well-defined. Firstly, we need to make sure the limit exists. We do this by showing that $(\bra \mathbf{x}_n, \mathbf{y}_n\ket)$ is a Cauchy sequence of scalars. We have
+ \begin{align*}
+ |\bra \mathbf{x}_n, \mathbf{y}_n\ket - \bra \mathbf{x}_m, \mathbf{y}_m \ket| &= |\bra \mathbf{x}_n, \mathbf{y}_n\ket - \bra \mathbf{x}_m, \mathbf{y}_n\ket + \bra \mathbf{x}_m, \mathbf{y}_n\ket - \bra \mathbf{x}_m, \mathbf{y}_m\ket|\\
+ &\leq |\bra \mathbf{x}_n, \mathbf{y}_n\ket - \bra \mathbf{x}_m, \mathbf{y}_n\ket | + |\bra \mathbf{x}_m, \mathbf{y}_n\ket - \bra \mathbf{x}_m, \mathbf{y}_m\ket|\\
+ &= |\bra \mathbf{x}_n - \mathbf{x}_m, \mathbf{y}_n\ket | + |\bra \mathbf{x}_m, \mathbf{y}_n - \mathbf{y}_m\ket|\\
+ &\leq \|\mathbf{x}_n - \mathbf{x}_m\| \|\mathbf{y}_n\| + \|\mathbf{x}_m\|\|\mathbf{y}_n - \mathbf{y}_m\|
+ \end{align*}
+ So $\bra \mathbf{x}_n, \mathbf{y}_n\ket$ is a Cauchy sequence, since $(\mathbf{x}_n)$ and $(\mathbf{y}_n)$ are Cauchy and hence bounded.
+
+ We also need to show that $(*)$ does not depend on the representatives for $\tilde{\mathbf{x}}$ and $\tilde{\mathbf{y}}$. This is left as an exercise for the reader. % exercise
+
+ We also need to show that $\bra \ph, \ph\ket_{\bar{E}}$ induces the norm $\|\ph \|_{\bar{E}}$, which is yet another exercise. % exercise
+\end{proof}
+
+\begin{eg}
+ Consider the space
+ \[
+ \ell^2 = \left\{(x_1, x_2, \cdots): x_i \in \C, \sum_{i = 1}^\infty |x_i|^2 < \infty \right\}.
+ \]
+ We already know that this is a Banach space. We can also define an inner product on this space by
+ \[
+ \bra \mathbf{a}, \mathbf{b}\ket_{\ell^2} = \sum_{i = 1}^\infty a_i \bar{b}_i.
+ \]
+ We need to check that this actually converges. We prove this by showing absolute convergence. For each $n$, we can use Cauchy-Schwarz to obtain
+ \[
+ \sum_{i = 1}^n |a_i \bar{b}_i| \leq \left(\sum_{i = 1}^n|a_i|^2\right)^{\frac{1}{2}}\left(\sum_{i = 1}^n|b_i|^2\right)^{\frac{1}{2}} \leq \|\mathbf{a}\|_{\ell^2} \|\mathbf{b}\|_{\ell^2}.
+ \]
+ So it converges. Now notice that the $\ell^2$ norm is indeed induced by this inner product.
+\end{eg}
+This is a significant example since we will later show that every separable (i.e.\ having a countable dense subset) infinite-dimensional Hilbert space is isometrically isomorphic to $\ell^2$.
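+As a concrete check of this example (Python with NumPy; the sequences $a_i = 1/i$ and $b_i = 1/i^2$ are illustrative choices, not from the notes), the partial sums of $\bra \mathbf{a}, \mathbf{b}\ket_{\ell^2}$ converge, here to $\sum_i 1/i^3 = \zeta(3) \approx 1.202$, and respect the Cauchy-Schwarz bound:

```python
import numpy as np

i = np.arange(1, 100001, dtype=float)
a = 1.0 / i       # square-summable, so a is in l^2
b = 1.0 / i ** 2  # square-summable as well

inner = np.sum(a * np.conj(b))                    # partial sum of sum_i 1/i^3
cs_bound = np.linalg.norm(a) * np.linalg.norm(b)  # Cauchy-Schwarz bound
```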
+
+\begin{defi}[Orthogonal space]
+ Let $E$ be a Euclidean space and $S\subseteq E$ an arbitrary subset. Then the \emph{orthogonal space} of $S$, denoted by $S^{\perp}$, is given by
+ \[
+ S^{\perp} = \{\mathbf{v} \in E: \forall \mathbf{w} \in S, \bra \mathbf{v}, \mathbf{w} \ket = 0\}.
+ \]
+\end{defi}
+
+\begin{prop}
+ Let $E$ be a Euclidean space and $S\subseteq E$. Then $S^\perp$ is a closed subspace of $E$, and moreover
+ \[
+ S^{\perp} = (\overline{\spn S})^{\perp}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We first show it is a subspace. Let $\mathbf{u}, \mathbf{v} \in S^\perp$ and $\lambda, \mu \in \C$. We want to show $\lambda \mathbf{u} + \mu \mathbf{v} \in S^\perp$. Let $\mathbf{w} \in S$. Then
+ \[
+ \bra \lambda \mathbf{u} + \mu \mathbf{v}, \mathbf{w}\ket = \lambda \bra \mathbf{u}, \mathbf{w}\ket + \mu \bra \mathbf{v}, \mathbf{w}\ket = 0.
+ \]
+ To show it is closed, let $\mathbf{u}_n \in S^\perp$ be a sequence such that $\mathbf{u}_n \to \mathbf{u} \in E$. Let $\mathbf{w} \in S$. Then we know that
+ \[
+ \bra \mathbf{u}_n, \mathbf{w}\ket = 0.
+ \]
+ Hence, by the continuity of the inner product, we have
+ \[
+ 0 = \lim_{n \to \infty} \bra \mathbf{u}_n, \mathbf{w}\ket = \bra \lim \mathbf{u}_n, \mathbf{w}\ket = \bra \mathbf{u}, \mathbf{w}\ket.
+ \]
+ The remaining part is left as an exercise. % exercise
+\end{proof}
+Note that if $V$ is a linear subspace, then $V \cap V^\perp = \{0\}$, since any $\mathbf{v} \in V \cap V^\perp$ has to satisfy $\bra \mathbf{v}, \mathbf{v}\ket = 0$. So $V + V^\perp$ is a direct sum.
+
+\begin{thm}
+ Let $(E, \|\ph\|)$ be a Euclidean space, and $F\subseteq E$ a \emph{complete} subspace. Then $F \oplus F^\perp = E$.
+
+ Hence, by definition of the direct sum, for $\mathbf{x} \in E$, we can write $\mathbf{x} = \mathbf{x}_1 + \mathbf{x}_2$, where $\mathbf{x}_1 \in F$ and $\mathbf{x}_2 \in F^\perp$. Moreover, $\mathbf{x}_1$ is uniquely characterized by
+ \[
+ \|\mathbf{x}_1 - \mathbf{x}\| = \inf_{\mathbf{y} \in F} \|\mathbf{y} - \mathbf{x}\|.
+ \]
+\end{thm}
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, 0) -- (2, 0) node [right] {$F$};
+ \draw [dashed] (0, 1) node [above] {$\mathbf{x}$} node [circ] {} -- (0, 0) node [below] {$\mathbf{x}_1$} node [circ] {};
+ \end{tikzpicture}
+\end{center}
+Note that this is not necessarily true if $F$ is not complete.
+
+\begin{proof}
+ We already know that $F \oplus F^\perp$ is a direct sum. It thus suffices to show that the sum is the whole of $E$.
+
+ Let $\mathbf{y}_i \in F$ be a sequence with
+ \[
+ \lim_{i \to \infty} \|\mathbf{y}_i - \mathbf{x}\| = \inf_{\mathbf{y} \in F} \|\mathbf{y} - \mathbf{x}\| = d.
+ \]
+ We want to show that $(\mathbf{y}_i)$ is a Cauchy sequence. Let $\varepsilon > 0$ be given. Let $n_0 \in \N$ be such that for all $i \geq n_0$, we have
+ \[
+ \|\mathbf{y}_i - \mathbf{x}\|^2 \leq d^2 + \varepsilon.
+ \]
+ We now use the parallelogram law for $\mathbf{v} = \mathbf{x} - \mathbf{y}_i$, $\mathbf{w} = \mathbf{x} - \mathbf{y}_j$ with $i, j \geq n_0$. Then the parallelogram law says:
+ \[
+ \|\mathbf{v} + \mathbf{w}\|^2 + \|\mathbf{v} - \mathbf{w}\|^2 = 2\|\mathbf{v}\|^2 + 2\|\mathbf{w}\|^2,
+ \]
+ or
+ \[
+ \|\mathbf{y}_j - \mathbf{y}_i\|^2 + \|2\mathbf{x} - \mathbf{y}_i - \mathbf{y}_j\|^2 = 2\|\mathbf{y}_i - \mathbf{x}\|^2 + 2\|\mathbf{y}_j - \mathbf{x}\|^2.
+ \]
+ Hence we know that
+ \begin{align*}
+ \|\mathbf{y}_i - \mathbf{y}_j\|^2 &= 2\|\mathbf{y}_i - \mathbf{x}\|^2 + 2\|\mathbf{y}_j - \mathbf{x}\|^2 - 4\left\|\mathbf{x} - \frac{\mathbf{y}_i + \mathbf{y}_j}{2}\right\|^2\\
+ &\leq 2(d^2 + \varepsilon) + 2(d^2 + \varepsilon) - 4d^2\\
+ &= 4\varepsilon,
+ \end{align*}
+ where the inequality uses that $\frac{\mathbf{y}_i + \mathbf{y}_j}{2} \in F$, so that $\left\|\mathbf{x} - \frac{\mathbf{y}_i + \mathbf{y}_j}{2}\right\| \geq d$.
+ So $(\mathbf{y}_i)$ is a Cauchy sequence. Since $F$ is complete, $\mathbf{y}_i \to \mathbf{y}$ for some $\mathbf{y} \in F$. Moreover, by continuity of $\|\ph\|$, we know that
+ \[
+ d = \lim_{i \to \infty}\|\mathbf{y}_i - \mathbf{x}\| = \|\mathbf{y} - \mathbf{x}\|.
+ \]
+ Now let $\mathbf{x}_1 = \mathbf{y}$ and $\mathbf{x}_2 = \mathbf{x} - \mathbf{y}$. The only thing left over is to show $\mathbf{x}_2 \in F^\perp$. Suppose not. Then there is some $\tilde{\mathbf{y}} \in F$ such that
+ \[
+ \bra \tilde{\mathbf{y}}, \mathbf{x}_2 \ket \not= 0.
+ \]
+ The idea is that we can perturb $\mathbf{y}$ by a little bit to get a point even closer to $\mathbf{x}$. By multiplying $\tilde{\mathbf{y}}$ by a scalar, we can assume
+ \[
+ \bra \tilde{\mathbf{y}}, \mathbf{x}_2\ket > 0.
+ \]
+ Then for $t > 0$, we have
+ \begin{align*}
+ \|(\mathbf{y} + t\tilde{\mathbf{y}}) - \mathbf{x}\|^2 &= \bra \mathbf{y} + t\tilde{\mathbf{y}} - \mathbf{x}, \mathbf{y} + t\tilde{\mathbf{y}} - \mathbf{x}\ket \\
+ &= \bra \mathbf{y} - \mathbf{x}, \mathbf{y} - \mathbf{x}\ket + \bra t\tilde{\mathbf{y}}, \mathbf{y} - \mathbf{x}\ket + \bra \mathbf{y} - \mathbf{x}, t\tilde{\mathbf{y}}\ket + t^2 \bra \tilde{\mathbf{y}}, \tilde{\mathbf{y}}\ket\\
+ &= d^2 - 2t \bra \tilde{\mathbf{y}}, \mathbf{x}_2\ket + t^2 \|\tilde{\mathbf{y}}\|^2.
+ \end{align*}
+ Hence for sufficiently small $t$, the $t^2$ term is negligible, and we can make this less than $d^2$. This is a contradiction since $\mathbf{y} + t\tilde{\mathbf{y}} \in F$.
+\end{proof}
+
+As a corollary, we can define the \emph{projection map} as follows:
+\begin{cor}
+ Let $E$ be a Euclidean space and $F\subseteq E$ a complete subspace. Then there exists a projection map $P: E \to E$ defined by $P(\mathbf{x}) = \mathbf{x}_1$, where $\mathbf{x}_1 \in F$ is as defined in the theorem above. Moreover, $P$ satisfies the following properties:
+ \begin{enumerate}
+ \item $P(E) = F$ and $P(F^\perp) = \{0\}$, and $P^2 = P$. In other words, $F^\perp \leq \ker P$.
+ \item $(I - P)(E) = F^{\perp}$, $(I - P)(F) = \{0\}$, $(I - P)^2 = (I - P)$.
+ \item $\|P\|_{\mathcal{B}(E, E)} \leq 1$ and $\|I - P\|_{\mathcal{B}(E, E)} \leq 1$, with equality if and only if $F \not= \{0\}$ and $F^{\perp} \not= \{0\}$ respectively.
+ \end{enumerate}
+\end{cor}
+Here $P$ projects our space to $F$, while $I - P$ projects our space to $F^\perp$.
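+In finite dimensions, the projection in the theorem above is exactly least squares, which makes the corollary easy to verify numerically (Python with NumPy; an illustrative sketch with a randomly chosen subspace, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 2))  # columns span a 2-dimensional subspace F of R^6
x = rng.normal(size=6)

# P x = A c, where c minimises ||A c - x||: the closest point of F to x
c, *_ = np.linalg.lstsq(A, x, rcond=None)
x1 = A @ c      # component in F
x2 = x - x1     # component in F-perp

ortho_defect = np.max(np.abs(A.T @ x2))                # x2 is orthogonal to F
contraction = np.linalg.norm(x1) <= np.linalg.norm(x)  # ||P|| <= 1
```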
+
+\subsection{Riesz representation theorem}
+Our next theorem allows us to completely understand the duals of Hilbert spaces.
+
+Consider the following map. Given a Hilbert space $H$ and $\mathbf{v} \in H$, consider $\phi_{\mathbf{v}} \in H^*$ defined by
+\[
+ \phi_{\mathbf{v}}(\mathbf{w}) = \bra \mathbf{w}, \mathbf{v}\ket.
+\]
+Note that this construction requires the existence of an inner product.
+
+Notice that this is indeed a bounded linear map, where boundedness comes from the Cauchy-Schwarz inequality
+\[
+ |\bra \mathbf{w}, \mathbf{v}\ket| \leq \|\mathbf{v}\|_H \|\mathbf{w}\|_H.
+\]
+Therefore, $\phi$ taking $\mathbf{v} \mapsto \phi_{\mathbf{v}}$ is a map $\phi: H \to H^*$.
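+In $\C^n$, this construction is easy to check concretely (Python with NumPy; an illustrative sketch, not part of the notes): by Cauchy-Schwarz, $|\phi_{\mathbf{v}}(\mathbf{w})| \leq \|\mathbf{v}\|$ on the unit sphere, and the bound is attained at $\mathbf{w} = \mathbf{v}/\|\mathbf{v}\|$, so $\|\phi_{\mathbf{v}}\|_{H^*} = \|\mathbf{v}\|_H$:

```python
import numpy as np

rng = np.random.default_rng(5)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
nv = np.linalg.norm(v)

def phi_v(w):
    # phi_v(w) = <w, v> = sum_i w_i * conj(v_i); bounded by Cauchy-Schwarz
    return np.vdot(v, w)

# sample |phi_v| on the unit sphere: never exceeds ||v|| ...
sup_sample = max(
    abs(phi_v(u / np.linalg.norm(u)))
    for u in (rng.normal(size=4) + 1j * rng.normal(size=4) for _ in range(200))
)
# ... and the bound is attained at w = v/||v||
attained = abs(phi_v(v / nv))
```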
+
+Using this simple construction, we have managed to produce a lot of members of the dual. Are there any more things in the dual? The answer is no, and this is given by the Riesz representation theorem.
+\begin{prop}[Riesz representation theorem]
+ Let $H$ be a Hilbert space. Then $\phi: H\to H^*$ defined by $\mathbf{v} \mapsto \bra \ph, \mathbf{v}\ket$ is an isometric anti-isomorphism, i.e.\ it is isometric, bijective and
+ \[
+ \phi(\lambda \mathbf{v} + \mu \mathbf{w}) = \bar{\lambda} \phi(\mathbf{v}) + \bar{\mu} \phi(\mathbf{w}).
+ \]
+\end{prop}
+
+\begin{proof}
+ We first prove all the easy bits, namely everything but surjectivity.
+ \begin{itemize}
+ \item To show injectivity, if $\phi_{\mathbf{v}} = \phi_{\mathbf{u}}$, then $\bra \mathbf{w}, \mathbf{v}\ket = \bra \mathbf{w}, \mathbf{u}\ket$ for all $\mathbf{w}$ by definition. So $\bra \mathbf{w}, \mathbf{v} - \mathbf{u}\ket = 0$ for all $\mathbf{w}$. In particular, $\bra \mathbf{v} - \mathbf{u}, \mathbf{v} - \mathbf{u}\ket = 0$. So $\mathbf{v} - \mathbf{u} = \mathbf{0}$, i.e.\ $\mathbf{v} = \mathbf{u}$.
+
+ \item To show that it is an anti-homomorphism, let $\mathbf{v}, \mathbf{w}, \mathbf{y} \in H$ and $\lambda, \mu \in \F$. Then
+ \[
+ \phi_{\lambda\mathbf{v} + \mu \mathbf{w}}(\mathbf{y}) = \bra \mathbf{y}, \lambda \mathbf{v} + \mu \mathbf{w}\ket = \bar{\lambda} \bra \mathbf{y}, \mathbf{v}\ket + \bar{\mu}\bra \mathbf{y}, \mathbf{w}\ket = \bar{\lambda} \phi_{\mathbf{v}}(\mathbf{y}) + \bar{\mu} \phi_{\mathbf{w}} (\mathbf{y}).
+ \]
+ \item To show it is isometric, let $\mathbf{v}, \mathbf{w} \in H$ and $\|\mathbf{w}\|_H = 1$. Then
+ \[
+ |\phi_\mathbf{v}(\mathbf{w})| = |\bra \mathbf{w}, \mathbf{v}\ket| \leq \|\mathbf{w}\|_H \|\mathbf{v}\|_H = \|\mathbf{v}\|_H.
+ \]
 Hence $\|\phi_\mathbf{v}\|_{H^*} \leq \|\mathbf{v}\|_H$ for all $\mathbf{v} \in H$. To show $\|\phi_\mathbf{v}\|_{H^*}$ is exactly $\|\mathbf{v}\|_H$, it suffices to note that
+ \[
+ |\phi_\mathbf{v}(\mathbf{v})| = \bra \mathbf{v}, \mathbf{v}\ket = \|\mathbf{v}\|_H^2.
+ \]
+ So $\|\phi_\mathbf{v}\|_{H^*} \geq \|\mathbf{v}\|_H^2 /\|\mathbf{v}\|_H = \|\mathbf{v}\|_H$.
+ \end{itemize}
+
+ Finally, we show surjectivity. Let $\xi \in H^*$. If $\xi = 0$, then $\xi = \phi_{\mathbf{0}}$.
+
+ Otherwise, suppose $\xi \not= 0$. The idea is that $(\ker \xi)^\perp$ is one-dimensional, and then the $\mathbf{v}$ we are looking for will be an element in this complement. So we arbitrarily pick one, and then scale it appropriately.
+
+ We now write out the argument carefully. First, we note that since $\xi$ is continuous, $\ker \xi$ is closed, since it is the inverse image of the closed set $\{0\}$. So $\ker \xi$ is complete, and thus we have
+ \[
+ H = \ker \xi \oplus (\ker \xi)^\perp.
+ \]
 The next claim is that $\dim (\ker \xi)^\perp = 1$. This is an immediate consequence of the first isomorphism theorem, whose proof is the usual one, but since we didn't prove that, we will run the argument manually.
+
+ We pick any two elements $\mathbf{v}_1, \mathbf{v}_2 \in (\ker \xi)^\perp$. Then we can always find some $\lambda, \mu$ not both zero such that
+ \[
+ \lambda \xi(\mathbf{v}_1) + \mu \xi(\mathbf{v}_2) = 0.
+ \]
 So $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2 \in \ker \xi$. But it is also in $(\ker \xi)^\perp$, since this is a subspace. Since $\ker \xi$ and $(\ker \xi)^\perp$ have trivial intersection, we deduce that $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2 = 0$. Thus, any two vectors in $(\ker \xi)^\perp$ are dependent. Since $\xi \not= 0$, we know that $(\ker \xi)^\perp$ has dimension $1$.
+
+ Now pick any $\mathbf{v} \in (\ker \xi)^\perp$ such that $\xi(\mathbf{v}) \not= 0$. By scaling it appropriately, we can obtain a $\mathbf{v}$ such that
+ \[
+ \xi(\mathbf{v}) = \bra \mathbf{v}, \mathbf{v}\ket.
+ \]
+ Finally, we show that $\xi = \phi_\mathbf{v}$. To prove this, let $\mathbf{w} \in H$. We decompose $\mathbf{w}$ using the previous theorem to get
+ \[
+ \mathbf{w} = \alpha \mathbf{v} + \mathbf{w}_0
+ \]
+ for some $\mathbf{w}_0 \in \ker \xi$ and $\alpha \in \F$. Note that by definition of $(\ker \xi)^\perp$, we know that $\bra \mathbf{w}_0, \mathbf{v}\ket = 0$. Hence we know that
+ \begin{multline*}
+ \xi(\mathbf{w}) = \xi(\alpha \mathbf{v} + \mathbf{w}_0) = \xi(\alpha \mathbf{v}) = \alpha \xi (\mathbf{v})\\
+ = \alpha \bra \mathbf{v}, \mathbf{v}\ket = \bra \alpha \mathbf{v}, \mathbf{v}\ket = \bra \alpha \mathbf{v} + \mathbf{w}_0, \mathbf{v}\ket = \bra \mathbf{w}, \mathbf{v}\ket.
+ \end{multline*}
+ Since $\mathbf{w}$ was arbitrary, we are done.
+\end{proof}
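In finite dimensions the theorem can be checked by hand. The following numerical sketch (with hypothetical random data on $\C^5$, and the convention that the inner product is linear in the first argument, as in these notes) recovers the Riesz representative of a functional $\xi(\mathbf{w}) = \sum_i c_i w_i$ and verifies the isometry step from the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A bounded linear functional xi on C^n, xi(w) = sum_i c_i w_i,
# given by a hypothetical coefficient vector c.
c = rng.normal(size=n) + 1j * rng.normal(size=n)
xi = lambda w: c @ w

# Inner product <w, v> = sum_i w_i conj(v_i): linear in the first
# argument, conjugate-linear in the second, as in the notes.
inner = lambda w, v: w @ np.conj(v)

# The Riesz representative: xi(w) = <w, v> forces v = conj(c).
v = np.conj(c)

w = rng.normal(size=n) + 1j * rng.normal(size=n)
assert np.isclose(xi(w), inner(w, v))

# Isometry, as in the proof: |xi(v)| = <v, v> = ||v||^2, so the
# operator norm of phi_v is attained and equals ||v||.
assert np.isclose(abs(xi(v)), np.linalg.norm(v) ** 2)
```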
+
+Using this proposition twice, we know that all Hilbert spaces are reflexive, i.e.\ $H \cong H^{**}$.
+
+We now return to the proof of the proposition we claimed at the beginning.
+\begin{prop}
 For $f \in C(S^1)$, define, for each $k \in \Z$,
 \[
 \hat{f}(k) = \frac{1}{2\pi} \int_{-\pi}^\pi e^{-ikx}f(x)\;\d x.
+ \]
+ The partial sums are then defined as
+ \[
 S_N(f)(x) = \sum_{n = -N}^N \hat{f}(n) e^{inx}.
+ \]
+ Then we have
+ \[
+ \lim_{N \to \infty} \frac{1}{2\pi}\int_{-\pi}^\pi |f(x) - S_N(f)(x)|^2 \;\d x = 0.
+ \]
+\end{prop}
+
+\begin{proof}
+ Consider the following Hilbert space $L^2(S^1)$ defined as the completion of $C_{\C}(S^1)$ under the inner product
+ \[
 \bra f, g\ket = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) \bar{g}(x) \;\d x.
+ \]
+ Consider the closed subspace
+ \[
+ U_N = \spn \{e^{inx}: |n| \leq N\}.
+ \]
+ Then in fact $S_N$ defined above by
+ \[
 S_N(f)(x) = \sum_{n = -N}^N \hat{f}(n) e^{inx}
+ \]
+ is the projection operator onto $U_N$. This is since we have the orthonormal condition
+ \[
 \bra e^{inx}, e^{imx}\ket = \frac{1}{2\pi} \int_{-\pi}^\pi e^{inx}e^{-imx}\;\d x =
+ \begin{cases}
+ 1 & n = m\\
+ 0 & n \not= m
+ \end{cases}
+ \]
+ Hence it is easy to check that if $f \in U_N$, say $f = \sum_{n = -N}^N a_n e^{inx}$, then $S_N f = f$ since
+ \[
 S_N (f) = \sum_{n = -N}^N \hat{f}(n) e^{inx} = \sum_{n = -N}^N \bra f, e^{inx}\ket e^{inx} = \sum_{n = -N}^N a_n e^{inx} = f
+ \]
+ using the orthogonality relation. But if $f \in U_N^\perp$, then
+ \[
+ \frac{1}{2\pi} \int_{-\pi}^\pi e^{-inx} f(x)\;\d x = 0
+ \]
 for all $|n| \leq N$. So $S_N(f) = 0$. So this is indeed a projection map.
+
 In particular, we will use the fact that orthogonal projection maps have norm $\leq 1$. Hence for any $P \in L^2(S^1)$, we have
+ \[
+ \frac{1}{2\pi}\int_{-\pi}^\pi |S_N(f)(x) - S_N(P)(x)|^2 \;\d x \leq \frac{1}{2\pi} \int_{-\pi}^\pi |f(x) - P(x)|^2 \;\d x
+ \]
 Now consider the \emph{algebra} $\mathcal{A}$ generated by $\{e^{inx}: n \in \Z\}$. Notice that $\mathcal{A}$ separates points and is closed under complex conjugation. Also, for every $x \in S^1$, there exists $f \in \mathcal{A}$ such that $f(x) \not= 0$ (using, say, $f(x) = e^{ix}$). Hence, by the Stone-Weierstrass theorem, $\bar{\mathcal{A}} = C_\C(S^1)$, i.e.\ for every $f \in C_\C (S^1)$ and $\varepsilon > 0$, there exists a polynomial $P$ in $e^{ix}$ and $e^{-ix}$ such that
+ \[
+ \|P - f\| < \varepsilon.
+ \]
+ We are almost done. We now let $N > \deg P$ be a large number. Then in particular, we have $S_N(P) = P$. Then
+ \begin{align*}
+ \left(\frac{1}{2\pi} \int_{-\pi}^\pi |S_N(f) - f|^2 \;\d x\right)^{\frac{1}{2}} &\leq \left(\frac{1}{2\pi}\int_{-\pi}^\pi |S_N(f) - S_N(P)|^2 \;\d x\right)^{\frac{1}{2}} \\
+ &\quad \quad+ \left(\frac{1}{2\pi} \int_{-\pi}^\pi |S_N(P) - P|^2 \;\d x\right)^{\frac{1}{2}}\\
+ &\quad \quad+ \left(\frac{1}{2\pi} \int_{-\pi}^\pi |P - f|^2 \;\d x\right)^{\frac{1}{2}}\\
+ &\leq \varepsilon + 0 + \varepsilon\\
+ &= 2\varepsilon.
+ \end{align*}
+ So done.
+\end{proof}
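The convergence just proved can also be observed numerically. Below is a small sketch (not part of the proof) that approximates the Fourier coefficients of the hypothetical test function $f(x) = |x|$, which is continuous on $S^1$, by Riemann sums, and checks that the $L^2$ error of the partial sums $S_N(f)$ decreases as $N$ grows.

```python
import numpy as np

# Uniform grid on [-pi, pi); the mean over it approximates
# (1/2pi) * integral over the circle.
x = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
f = np.abs(x)  # continuous on S^1 since |-pi| = |pi|

def fourier_coeff(f, x, k):
    # \hat{f}(k) = (1/2pi) int e^{-ikx} f(x) dx, via a Riemann sum
    return np.mean(np.exp(-1j * k * x) * f)

def partial_sum(f, x, N):
    # S_N(f)(x) = sum_{n=-N}^{N} \hat{f}(n) e^{inx}
    return sum(fourier_coeff(f, x, n) * np.exp(1j * n * x)
               for n in range(-N, N + 1))

def l2_error(f, x, N):
    return np.sqrt(np.mean(np.abs(f - partial_sum(f, x, N)) ** 2))

errors = [l2_error(f, x, N) for N in (1, 4, 16, 64)]
assert all(a > b for a, b in zip(errors, errors[1:]))  # error decreases
```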
+
+\subsection{Orthonormal systems and basis}
+\begin{defi}[Orthonormal system]
+ Let $E$ be a Euclidean space. A set of unit vectors $\{\mathbf{e}_\alpha\}_{\alpha \in A}$ is called an \emph{orthonormal system} if $\bra \mathbf{e}_\alpha, \mathbf{e}_\beta\ket = 0$ if $\alpha \not= \beta$.
+\end{defi}
+
+We want to define a ``basis'' in an infinite dimensional vector space. The idea is that these should be orthonormal systems ``big enough'' to span everything. In finite-dimensions, this was easy, since there is the notion of dimension --- if we have $n$ dimensions, then we just take an orthonormal system of $n$ vectors, and we are done.
+
+If we have infinite dimensions, this is trickier. If we have many many dimensions and vectors, we can keep adding things to our orthonormal system, but we might never get to such a ``basis'', if our ``basis'' has to be uncountable. Hence we have the idea of ``maximality''.
+
+\begin{defi}[Maximal orthonormal system]
 Let $E$ be a Euclidean space. An orthonormal system is called \emph{maximal} if it cannot be extended to a strictly larger orthonormal system.
+\end{defi}
+By Zorn's lemma, a maximal orthonormal system always exists. We will later see that in certain nice cases, we can construct a maximal orthonormal system directly, without appealing to Zorn's lemma. The advantage of an explicit construction is that we will understand our system much more.
+
+One important thing we would like to do is given an orthonormal system, decide whether it is maximal. In general, this is difficult, and Zorn's lemma is completely useless.
+
+Now suppose we are nicer and have a Hilbert space. What we would like to say is that if we have a maximal orthonormal system, then its span is the whole space $H$. However, this doesn't really work. The span of a set $S$ only allows us to take finite linear combinations, but by completeness of $H$, we want to have the infinite sums, i.e.\ the limits as well. So what we really have is the following.
+
+\begin{prop}
+ Let $H$ be a Hilbert space. Let $S$ be a maximal orthonormal system. Then $\overline{\spn S} = H$.
+\end{prop}
+While this might seem difficult to prove at first, it turns out the proof is pretty short and simple.
+
+\begin{proof}
+ Recall that $S^\perp = (\overline{\spn S})^\perp$. Since $H$ is a Hilbert space, we have
+ \[
+ H = \overline{\spn S} \oplus (\overline{\spn S})^\perp = \overline{\spn S} \oplus S^\perp.
+ \]
+ Since $S$ is maximal, $S^\perp = \{0\}$. So done.
+\end{proof}
+
+How about the converse? It is also true. In fact, it is true even for Euclidean spaces, and the proof is easy.
+
+\begin{prop}
+ Let $E$ be Euclidean, and let $S$ be an orthonormal system. If $\overline{\spn S} = E$, then $S$ is maximal.
+\end{prop}
+
+\begin{proof}
+ \[
+ S^\perp = (\overline{\spn S})^\perp = E^\perp = \{0\}.\qedhere
+ \]
+\end{proof}
+So in a Hilbert space, we have an if and only if condition --- a system is maximal if and only if the closure of the span is everything. In other words, given any vector $\mathbf{v} \in H$, we can find a sequence $\mathbf{v}_i$ in the span of the maximal system that converges to $\mathbf{v}$. This sequence is clearly not unique, since we can just add a random term to the first item.
+
+However, we can do something better. Consider our space $\ell_2$, and the element $(1, \frac{1}{2}, \frac{1}{4}, \cdots)$. There is a very natural way to write this as the limit of the sequence:
+\[
+ (1, 0, 0, \cdots), \left(1, \frac{1}{2}, 0, \cdots\right), \left(1, \frac{1}{2}, \frac{1}{4}, 0, \cdots\right), \cdots.
+\]
+What we are doing is that we are truncating the element at the $n$th component for each $n$. Alternatively, the $n$th term is what we get when we project our $\mathbf{v}$ onto the space spanned by the first $n$ ``basis'' vectors. This is a nice and natural way to produce the sequence.
+
+\begin{defi}[Hilbert space basis]
+ Let $H$ be a Hilbert space. A maximal orthonormal system is called a \emph{Hilbert space basis}.
+\end{defi}
+
+Recall that at the beginning, we said we needed Zorn's lemma to get an orthonormal system. In many cases, we can find a basis \emph{without} using Zorn's lemma. This relies on the Gram-Schmidt procedure.
+
+\begin{prop}
+ Let $\{\mathbf{x}_i\}_{i = 1}^n$, $n \in \N$ be linearly independent. Then there exists $\{\mathbf{e}_i\}_{i = 1}^n$ such that $\{\mathbf{e}_i\}_{i = 1}^n$ is an orthonormal system and
+ \[
+ \spn\{\mathbf{x}_1, \cdots, \mathbf{x}_j\} = \spn\{\mathbf{e}_1, \cdots, \mathbf{e}_j\}
+ \]
+ for all $j \leq n$.
+\end{prop}
+
+\begin{proof}
+ Define $\mathbf{e}_1$ by
+ \[
+ \mathbf{e}_1 = \frac{\mathbf{x}_1}{\|\mathbf{x}_1\|}.
+ \]
+ Assume we have defined $\{\mathbf{e}_i\}_{i = 1}^j$ orthonormal such that
+ \[
+ \spn\{\mathbf{x}_1, \cdots, \mathbf{x}_j\} = \spn\{\mathbf{e}_1, \cdots, \mathbf{e}_j\}.
+ \]
+ Then by linear independence, we know that
+ \[
+ \mathbf{x}_{j + 1} \not\in \spn\{\mathbf{x}_1, \cdots, \mathbf{x}_j\} = \spn\{\mathbf{e}_1, \cdots, \mathbf{e}_j\} = F_j.
+ \]
+ We now define
+ \[
+ \tilde{\mathbf{x}}_{j + 1} = \mathbf{x}_{j + 1} - P_{F_j}(\mathbf{x}_{j + 1}),
+ \]
+ where $P_{F_j}$ is the projection onto $F_j$ given by
+ \[
 P_{F_j}(\mathbf{x}) = \sum_{i = 1}^j \bra \mathbf{x}, \mathbf{e}_i\ket \mathbf{e}_i.
+ \]
 Since $F_j$ is a closed, finite-dimensional subspace, we know that
+ \[
+ \mathbf{x}_{j + 1} - P_{F_j} \mathbf{x}_{j + 1} \perp F_j.
+ \]
+ Thus
+ \[
+ \mathbf{e}_{j + 1} = \frac{\tilde{\mathbf{x}}_{j + 1}}{\|\tilde{\mathbf{x}}_{j + 1}\|}
+ \]
+ is the right choice. We can also write this in full as
+ \[
 \mathbf{e}_{j + 1} = \frac{\mathbf{x}_{j + 1} - \sum_{i = 1}^j \bra \mathbf{x}_{j + 1}, \mathbf{e}_i\ket \mathbf{e}_i}{\|\mathbf{x}_{j + 1} - \sum_{i = 1}^j \bra \mathbf{x}_{j + 1}, \mathbf{e}_i\ket \mathbf{e}_i\|}.
+ \]
+ So done.
+\end{proof}
Note that projection onto the span of the first $n$ basis vectors is exactly what we did when we wrote an element in $\ell_2$ as the limit of the sequence.
+
+This is a helpful result, since it is a constructive way of producing orthonormal systems. So if we are handed a set of vectors, we can just apply this result, plug our vectors into this ugly formula, and get a result. Of course, we want to apply this to infinite spaces.
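The inductive construction in the proof translates directly into code. Here is a minimal sketch for vectors in $\R^d$ or $\C^d$, assuming the inputs are linearly independent:

```python
import numpy as np

def gram_schmidt(xs):
    """Orthonormalise linearly independent vectors, as in the proof:
    e_{j+1} is proportional to x_{j+1} - sum_i <x_{j+1}, e_i> e_i."""
    es = []
    for xj in xs:
        # P_{F_j} x_{j+1}: projection onto the span of e_1, ..., e_j.
        # np.vdot(e, xj) = sum conj(e_i) xj_i = <xj, e> in the notes'
        # convention (linear in the first argument).
        proj = sum(np.vdot(e, xj) * e for e in es)
        tilde = xj - proj
        es.append(tilde / np.linalg.norm(tilde))
    return es

rng = np.random.default_rng(1)
xs = [rng.normal(size=4) for _ in range(3)]
es = gram_schmidt(xs)

# Orthonormality: the Gram matrix is the identity.
G = np.array([[np.vdot(a, b) for b in es] for a in es])
assert np.allclose(G, np.eye(3))

# span{x_1, ..., x_j} = span{e_1, ..., e_j}: each x_j is recovered
# from its components along e_1, ..., e_j alone.
for j, xj in enumerate(xs):
    recon = sum(np.vdot(e, xj) * e for e in es[:j + 1])
    assert np.allclose(recon, xj)
```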
+
+\begin{prop}
 Let $H$ be separable, i.e.\ there is a countable set $\{\mathbf{y}_i\}_{i \in \N}$ such that
+ \[
+ \overline{\spn \{\mathbf{y}_i\}} = H.
+ \]
 Then there exists a countable basis for $H$.
+\end{prop}
+
+\begin{proof}
 We find a subset $\{\mathbf{y}_{i_j}\}$ such that $\spn \{\mathbf{y}_i\} = \spn\{\mathbf{y}_{i_j}\}$ and the $\{\mathbf{y}_{i_j}\}$ are linearly independent. This is easy to do since we can just throw away the redundant dependent vectors. Applying Gram-Schmidt inductively to this subset gives a countable orthonormal system with the same span, hence with dense span in $H$, and so a basis.
+\end{proof}
+
+\begin{eg}
 Consider $H = \ell_2$ and let the sequence $\{\mathbf{e}_i\}_{i \in \N}$ be defined by
+ \[
+ \mathbf{e}_i = (0, 0, \cdots, 0, 1, 0, \cdots),
+ \]
 with the $1$ in the $i$th position.
+
+ Note that $\mathbf{x} \perp \{\mathbf{e}_i\}_{i \in \N}$ if and only if each component is zero, i.e.\ $\mathbf{x} = \mathbf{0}$. So $\{\mathbf{e}_i\}$ is maximal, and hence a basis.
+\end{eg}
+
+\begin{eg}
+ Consider $H = L^2$, the completion of $C(S^1)$ under the $L^2$ norm, i.e.
+ \[
+ \bra f, g\ket = \int_{-\pi}^\pi f\bar{g}\;\d x.
+ \]
 Trigonometric polynomials are dense in $C(S^1)$ with respect to the supremum norm by the Stone-Weierstrass theorem. So in fact $\spn \left\{\frac{1}{\sqrt{2\pi}}e^{inx}: n \in \Z\right\}$ is dense in $C(S^1)$. Hence it is dense in $C(S^1)$ under the $L^2$ norm, since convergence under the supremum norm implies convergence under $L^2$. In particular, it is dense in the $L^2$ space, since $L^2$ is the completion of $C(S^1)$. Moreover, this set is orthonormal in $C(S^1)$ under the $L^2$ inner product. So $\left\{\frac{1}{\sqrt{2\pi}} e^{inx}\right\}_{n \in \Z}$ is a basis for $L^2$.
+\end{eg}
+Note that in these two examples, we have exhibited two different ways of constructing a basis. In the first case, we showed that it is maximal directly. In the second case, we show that its span is a dense subset of the space. By our proposition, these are equivalent and valid ways of proving that it is a basis.
+
+\subsection{The isomorphism with \tph{$\ell_2$}{l2}{&\#x21132}}
We ended the previous section with two examples. Both of them are Hilbert spaces, and both have countable bases. Is there any way we can identify the two? This is a reasonable thing to ask. If we are given a Hilbert space $H$ of finite dimension $\dim H = n$, then we know that $H$ is indeed isomorphic to $\R^n$ (or $\C^n$) with the Euclidean norm. In some sense $\ell_2$ is just an ``infinite version'' of $\R^n$. So we might expect all other Hilbert spaces with countable dimension to be isomorphic to $\ell_2$.
+
+Recall that if we have a finite-dimensional Hilbert space $H$ with $\dim H = n$, and an orthonormal basis $\{\mathbf{e}_1, \cdots, \mathbf{e}_n\}$, then each $\mathbf{x} \in H$ can be written as
+\[
+ \mathbf{x} = \sum_{i = 1}^n \bra \mathbf{x}, \mathbf{e}_i\ket \mathbf{e}_i,
+\]
+and
+\[
+ \|\mathbf{x}\|^2 = \sum_{i = 1}^n |\bra \mathbf{x}, \mathbf{e}_i\ket|^2.
+\]
+Thus $H$ is isomorphic to $\ell_2^n$, the space $\R^n$ with the Euclidean norm, via the map
+\[
+ \mathbf{x} \mapsto (\bra \mathbf{x}, \mathbf{e}_1\ket, \cdots, \bra \mathbf{x}, \mathbf{e}_n\ket).
+\]
+Can we push this to the infinite dimensional case? Yes. We will have to replace our finite sum $\sum_{i = 1}^n$ with an infinite sum. Of course, with an infinite sum, we need to make sure things converge. This is guaranteed by Bessel's inequality.
+
+\begin{lemma}[Bessel's inequality]
+ Let $E$ be Euclidean and $\{\mathbf{e}_i\}_{i = 1}^N$ with $N \in \N \cup \{\infty\}$ an orthonormal system. For any $\mathbf{x} \in E$, define $x_i = \bra \mathbf{x}, \mathbf{e}_i\ket$. Then for any $j \leq N$, we have
+ \[
+ \sum_{i = 1}^j |x_i|^2 \leq \|\mathbf{x}\|^2.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Consider the case where $j$ is finite first. Define
+ \[
+ F_j = \spn\{\mathbf{e}_1, \cdots, \mathbf{e}_j\}.
+ \]
+ This is a finite dimensional subspace of $E$. Hence an orthogonal projection $P_{F_j}$ exists. Moreover, we have an explicit formula for this:
+ \[
 P_{F_j}\mathbf{x} = \sum_{i = 1}^j \bra \mathbf{x}, \mathbf{e}_i\ket \mathbf{e}_i.
+ \]
+ Thus
+ \[
+ \sum_{i = 1}^j |x_i|^2 = \|P_{F_j} \mathbf{x}\|^2 \leq \|\mathbf{x}\|^2
+ \]
+ since we know that $\|P_{F_j}\| \leq 1$. Taking the limit as $j \to \infty$ proves the case for infinite $j$.
+\end{proof}
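A quick numerical sanity check of Bessel's inequality (not a proof): we take the columns of a QR factor of a random matrix as an orthonormal system of $5$ vectors in $\R^8$ (hypothetical data) and compare $\sum |x_i|^2$ with $\|\mathbf{x}\|^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
# An orthonormal system of 5 < d vectors: the columns of Q.
Q, _ = np.linalg.qr(rng.normal(size=(d, 5)))
x = rng.normal(size=d)
coeffs = Q.T @ x          # x_i = <x, e_i> in the real case

# Bessel: sum |x_i|^2 = ||P_F x||^2 <= ||x||^2
assert np.sum(coeffs ** 2) <= np.dot(x, x) + 1e-12
```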
+
+The only thing we required in the proof is for the space to be Euclidean. This is since we are talking about the sum
+\[
+ \sum_{i = 1}^\infty |x_i|^2,
+\]
+and this is a sum of numbers. However, if we want to investigate the sum
+\[
+ \mathbf{x} = \sum_{i = 1}^\infty \bra \mathbf{x}, \mathbf{e}_i\ket \mathbf{e}_i,
+\]
+then we'd better require the space to be Hilbert, so that the sum has something to converge to.
+
+\begin{prop}
+ Let $H$ be a separable Hilbert space, with a countable basis $\{\mathbf{e}_i\}_{i = 1}^N$, where $N \in \N \cup \{\infty\}$. Let $\mathbf{x}, \mathbf{y} \in H$ and
+ \[
+ x_i = \bra \mathbf{x}, \mathbf{e}_i\ket ,\quad y_i = \bra \mathbf{y}, \mathbf{e}_i\ket.
+ \]
+ Then
+ \[
+ \mathbf{x} = \sum_{i = 1}^N x_i \mathbf{e}_i,\quad \mathbf{y} = \sum_{i = 1}^N y_i \mathbf{e}_i,
+ \]
+ and
+ \[
+ \bra \mathbf{x}, \mathbf{y}\ket = \sum_{i = 1}^N x_i \bar{y}_i.
+ \]
+ Moreover, the sum converges absolutely.
+\end{prop}
+
+\begin{proof}
+ We only need to consider the case $N = \infty$. Otherwise, it is just finite-dimensional linear algebra.
+
+ First, note that our expression is written as an infinite sum. So we need to make sure it converges. We define the partial sums to be
+ \[
+ \mathbf{s}_n = \sum_{i = 1}^n x_i \mathbf{e}_i.
+ \]
+ We want to show $\mathbf{s}_n \to \mathbf{x}$. By Bessel's inequality, we know that
+ \[
+ \sum_{i = 1}^\infty |x_i|^2 \leq \|\mathbf{x}\|^2.
+ \]
+ In particular, the sum is bounded, and hence converges.
+
+ For any $m < n$, we have
+ \[
 \|\mathbf{s}_n - \mathbf{s}_m\|^2 = \sum_{i = m + 1}^n |x_i|^2 \leq \sum_{i = m + 1}^\infty |x_i|^2.
+ \]
+ As $m \to \infty$, the series must go to $0$. Thus $\{\mathbf{s}_n\}$ is Cauchy. Since $H$ is Hilbert, $\mathbf{s}_n$ converges, say
+ \[
+ \mathbf{s}_n \to \mathbf{s} = \sum_{i = 1}^\infty x_i \mathbf{e}_i.
+ \]
 Now we want to prove that this sum is indeed $\mathbf{x}$ itself. Note that so far in the proof, we have \emph{not} used the fact that $\{\mathbf{e}_i\}$ is a basis. We just used the fact that it is an orthonormal system. Hence we should use maximality now. We notice that
+ \[
+ \bra \mathbf{s}, \mathbf{e}_i\ket = \lim_{n \to \infty} \bra \mathbf{s}_n, \mathbf{e}_i\ket = \lim_{n \to \infty} \sum_{j = 1}^n x_j \bra \mathbf{e}_j, \mathbf{e}_i\ket = x_i.
+ \]
+ Hence we know that
+ \[
+ \bra \mathbf{x} - \mathbf{s}, \mathbf{e}_i\ket = 0.
+ \]
+ for all $i$. So $\mathbf{x} - \mathbf{s}$ is perpendicular to all $\mathbf{e}_i$. Since $\{\mathbf{e}_i\}$ is a basis, we must have $\mathbf{x} - \mathbf{s} = 0$, i.e.\ $\mathbf{x} = \mathbf{s}$.
+
+ To show our formula for the inner product, we can compute
+ \begin{align*}
+ \bra \mathbf{x}, \mathbf{y}\ket &= \lim_{n \to \infty}\left\bra \sum_{i = 1}^n x_i \mathbf{e}_i, \sum_{j = 1}^n y_j \mathbf{e}_j\right\ket\\
+ &= \lim_{n \to \infty} \sum_{i, j = 1}^n x_i \bar{y}_j \bra \mathbf{e}_i, \mathbf{e}_j\ket\\
+ &= \lim_{n \to \infty} \sum_{i, j = 1}^n x_i \bar{y}_j \delta_{ij}\\
+ &= \lim_{n \to \infty} \sum_{i = 1}^n x_i \bar{y}_i\\
+ &= \sum_{i = 1}^\infty x_i \bar{y}_i.
+ \end{align*}
+ Note that we \emph{know} the limit exists, since the continuity of the inner product ensures the first line is always valid.
+
+ Finally, to show absolute convergence, note that for all finite $j$, we have
+ \[
 \sum_{i = 1}^j |x_i \bar{y}_i| \leq \sqrt{\sum_{i = 1}^j |x_i|^2} \sqrt{\sum_{i = 1}^j |y_i|^2} \leq \|\mathbf{x}\|\|\mathbf{y}\|.
+ \]
+ Since this is a uniform bound for any $j$, the sum converges absolutely.
+\end{proof}
+Note that in the case of $\mathbf{x} = \mathbf{y}$, our formula for the inner product gives
+\[
+ \|\mathbf{x}\|^2 = \sum_{i = 1}^N |x_i|^2.
+\]
This is known as \emph{Parseval's equality}.
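These formulas are easy to verify numerically in finite dimensions. The sketch below uses the columns of a random unitary matrix as an orthonormal basis of $\C^6$ (hypothetical data) and checks the inner product formula and Parseval's equality.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
# Columns of Q form an orthonormal basis of C^d (Q is unitary).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
x = rng.normal(size=d) + 1j * rng.normal(size=d)
y = rng.normal(size=d) + 1j * rng.normal(size=d)

# Coordinates x_i = <x, e_i> = conj(e_i) . x, i.e. Q^H x.
xi = np.conj(Q).T @ x
yi = np.conj(Q).T @ y

# <x, y> = sum_i x_i conj(y_i); note np.vdot(y, x) = sum x_k conj(y_k).
assert np.isclose(np.vdot(y, x), xi @ np.conj(yi))

# Parseval: ||x||^2 = sum_i |x_i|^2
assert np.isclose(np.vdot(x, x).real, np.sum(np.abs(xi) ** 2))
```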
+
+What this proposition gives us is that given any separable Hilbert space, we can find ``coordinates'' for it, and in terms of these coordinates, our inner product and hence norm all act like $\ell_2$. In particular, we have the map
+\[
+ \mathbf{x} \mapsto \{\bra \mathbf{x}, \mathbf{e}_i\ket \}_{i = 1}^N
+\]
+that takes $H$ into $\ell_2$. This is injective since by Parseval's equality, if the image of $\mathbf{x}$ is $0$, then $\|\mathbf{x}\|^2 = \sum 0 = 0$. So $\mathbf{x} = \mathbf{0}$.
+
This is good, but not good enough. We want the map to be an isomorphism. Hence, we need to show it is surjective. In other words, \emph{every} element in $\ell_2$ is obtained. This is a theorem by Riesz and Fischer, and is in fact easy to prove, since there is an obvious candidate for the preimage of any $\{a_i\}_{i \in \N}$.
+
+\begin{prop}
+ Let $H$ be a separable Hilbert space with orthonormal basis $\{\mathbf{e}_i\}_{i \in \N}$. Let $\{a_i\}_{i \in \N} \in \ell_2(\C)$. Then there exists an $\mathbf{x} \in H$ with $\bra \mathbf{x}, \mathbf{e}_i \ket = a_i$. Moreover, this $\mathbf{x}$ is exactly
+ \[
 \mathbf{x} = \sum_{i = 1}^\infty a_i \mathbf{e}_i.
+ \]
+\end{prop}
+
+\begin{proof}
+ The only thing we need to show is that this sum converges. For any $n \in \N$, define
+ \[
+ \mathbf{s}_n = \sum_{i = 1}^n a_i \mathbf{e}_i \in H.
+ \]
+ For $m < n$, we have
+ \[
 \|\mathbf{s}_n - \mathbf{s}_m\|^2 = \sum_{i = m + 1}^n |a_i|^2 \to 0
+ \]
 as $m \to \infty$ because $\{a_i\} \in \ell_2$. Hence $\mathbf{s}_n$ is Cauchy and as such converges to some $\mathbf{x} \in H$. Obviously, we have
+ \[
+ \bra \mathbf{x}, \mathbf{e}_i\ket = \lim_{n \to \infty} \sum_{j = 1}^n a_j \bra \mathbf{e}_j, \mathbf{e}_i \ket = a_i.
+ \]
+ So done.
+\end{proof}
This means we have an isomorphism between $\ell_2$ and $H$. Moreover, it is continuous and in fact isometric. So this is a very strong result. This says all infinite-dimensional separable Hilbert spaces \emph{are} $\ell_2$.
+\subsection{Operators}
+We are going to look at operators on Hilbert spaces. For example, we would like to see how differential operators behave on spaces of differentiable functions.
+
+In this section, we will at least require the space to be Banach. So let $X$ be a Banach space over $\C$. We will consider $\mathcal{B}(X) = \mathcal{B}(X, X)$, the vector space of bounded linear maps from $X$ to itself. We have seen in the example sheets that $\mathcal{B}(X)$ is a unital Banach algebra, i.e.\ it forms a complete algebra with composition as multiplication. Our goal is to generalize some considerations in finite dimensions such as eigenvectors and eigenvalues.
+
+\begin{defi}[Spectrum and resolvent set]
+ Let $X$ be a Banach space and $T \in \mathcal{B}(X)$, we define the \emph{spectrum} of $T$, denoted by $\sigma(T)$ by
+ \[
 \sigma(T) = \{\lambda \in \C: T - \lambda I\text{ is not invertible}\}.
+ \]
+ The \emph{resolvent set}, denoted by $\rho(T)$, is
+ \[
 \rho(T) = \C \setminus \sigma(T).
+ \]
+\end{defi}
Note that if $T - \lambda I$ is bijective, then by the inverse mapping theorem, we know it has a bounded inverse. So if $\lambda \in \sigma(T)$, then either $T - \lambda I$ is not injective, or it is not surjective. In other words, $\ker (T - \lambda I) \not= \{0\}$ or $\im (T - \lambda I) \not= X$. In finite dimensions, these are equivalent by, say, the rank-nullity theorem, but in general, they are not.
+
+\begin{eg}
+ Consider the shift operator $s: \ell_\infty \to \ell_\infty$ defined by
+ \[
+ (a_1, a_2, a_3, \cdots) \mapsto (0, a_1, a_2, \cdots).
+ \]
+ Then this is injective but not surjective.
+\end{eg}
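A finite-window sketch of this example (hypothetical data): on finitely supported sequences, the shift preserves the sup norm, so it is injective, but its image only contains sequences starting with $0$, so it is not surjective.

```python
# Right shift s: (a_1, a_2, ...) -> (0, a_1, a_2, ...),
# modelled on tuples (finitely supported sequences in l_infinity).
def shift(a):
    return (0,) + a

def sup_norm(a):
    return max(abs(t) for t in a)

a = (3.0, -1.0, 4.0, -1.5)
assert sup_norm(shift(a)) == sup_norm(a)   # isometric, hence injective
assert shift(a)[0] == 0                    # image always starts with 0,
                                           # so e.g. (1, 0, 0, ...) is
                                           # never attained
```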
+Now if $\lambda \in \rho(T)$, i.e.\ $T - \lambda I$ is invertible, then $(T - \lambda I)^{-1}$ is automatically bounded by the inverse mapping theorem. This is why we want to work with Banach spaces.
+
+\begin{defi}[Resolvent]
 Let $X$ be a Banach space and $T \in \mathcal{B}(X)$. The \emph{resolvent} is the map $R: \rho(T) \to \mathcal{B}(X)$ given by
+ \[
+ \lambda \mapsto (T - \lambda I)^{-1}.
+ \]
+\end{defi}
+
+\begin{defi}[Eigenvalue]
+ We say $\lambda$ is an \emph{eigenvalue} of $T$ if $\ker (T - \lambda I) \not= \{0\}$.
+\end{defi}
+
+\begin{defi}[Point spectrum]
 Let $X$ be a Banach space and $T \in \mathcal{B}(X)$. The \emph{point spectrum} is
+ \[
+ \sigma_p(T) = \{\lambda \in \C: \lambda\text{ is an eigenvalue of }T\}.
+ \]
+\end{defi}
+Obviously, $\sigma_p(T) \subseteq \sigma(T)$, but they are in general not equal.
+
+\begin{defi}[Approximate point spectrum]
 Let $X$ be a Banach space and $T \in \mathcal{B}(X)$. The \emph{approximate point spectrum} is defined as
+ \[
 \sigma_{ap}(T) = \{ \lambda \in \C: \exists \{\mathbf{x}_n\} \subseteq X: \|\mathbf{x}_n\|_X = 1\text{ and }\|(T - \lambda I)\mathbf{x}_n\|_X \to 0\}.
+ \]
+\end{defi}
+Again, we have
+\[
+ \sigma_p(T) \subseteq \sigma_{ap}(T) \subseteq \sigma(T).
+\]
+The last inclusion follows from the fact that if an inverse exists, then the inverse is bounded.
+
+An important characterization of the spectrum is the following theorem:
+\begin{thm}
 Let $X$ be a Banach space, $T \in \mathcal{B}(X)$. Then $\sigma(T)$ is a non-empty, closed subset of
+ \[
+ \{\lambda \in \C: |\lambda| \leq \|T\|_{\mathcal{B}(X)}\}.
+ \]
+\end{thm}
+In finite dimensions, this in particular implies the existence of eigenvalues, since the spectrum is equal to the point spectrum. Notice this is only true for vector spaces over $\C$, as we know from linear algebra.
+
+To prove this theorem, we will first prove two lemmas.
+\begin{lemma}
+ Let $X$ be a Banach space, $T \in \mathcal{B}(X)$ and $\|T\|_{\mathcal{B}(X)} < 1$. Then $I - T$ is invertible.
+\end{lemma}
+
+\begin{proof}
+ To prove it is invertible, we construct an explicit inverse. We want to show
+ \[
+ (I - T)^{-1} = \sum_{i = 0}^\infty T^i.
+ \]
+ First, we check the right hand side is absolutely convergent. This is since
+ \[
+ \sum_{i = 0}^\infty \|T^i\|_{\mathcal{B}(X)} \leq \sum_{i = 0}^\infty \|T\|_{\mathcal{B}(X)}^i \leq \frac{1}{1 - \|T\|_{\mathcal{B}(X)}} < \infty.
+ \]
+ Since $X$ is Banach, and hence $\mathcal{B}(X)$ is Banach, the limit is well-defined. Now it is easy to check that
+ \begin{align*}
 (I - T) \sum_{i = 0}^\infty T^i &= (I - T)(I + T + T^2 + \cdots) \\
+ &= I + (T - T) + (T^2 - T^2) + \cdots \\
+ &= I.
+ \end{align*}
+ Similarly, we have
+ \[
 \left(\sum_{i = 0}^\infty T^i\right)(I - T) = I.\qedhere
+ \]
+\end{proof}
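The geometric series in this proof is also a practical way to invert $I - T$ numerically. A sketch with a random matrix (hypothetical data) scaled so that $\|T\| < 1$, truncating the series after enough terms:

```python
import numpy as np

rng = np.random.default_rng(4)
# A bounded operator on R^5 with operator norm < 1: scale a random
# matrix so that its spectral (operator) norm is 0.9.
T = rng.normal(size=(5, 5))
T *= 0.9 / np.linalg.norm(T, 2)

# Neumann series: (I - T)^{-1} = sum_{i>=0} T^i, truncated.
# The tail is bounded by ||T||^{n+1} / (1 - ||T||), so 200 terms
# give error on the order of 0.9^201 / 0.1.
S = np.eye(5)
term = np.eye(5)
for _ in range(200):
    term = term @ T
    S += term

assert np.allclose(S, np.linalg.inv(np.eye(5) - T))
```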
+
+\begin{lemma}
+ Let $X$ be a Banach space, $S_1 \in \mathcal{B}(X)$ be invertible. Then for all $S_2 \in \mathcal{B}(X)$ such that
+ \[
+ \|S_1^{-1}\|_{\mathcal{B}(X)} \|S_1 - S_2\|_{\mathcal{B}(X)} < 1,
+ \]
+ $S_2$ is invertible.
+\end{lemma}
+This is some sort of an ``openness'' statement for the invertible bounded linear maps, since if $S_1$ is invertible, then any ``nearby'' bounded linear map is also invertible.
+\begin{proof}
+ We can write
+ \[
+ S_2 = S_1(I - S_1^{-1}(S_1 - S_2)).
+ \]
+ Since
+ \[
+ \|S_1^{-1}(S_1 - S_2)\|_{\mathcal{B}(X)} \leq \|S_1^{-1} \|_{\mathcal{B}(X)} \|S_1 - S_2\|_{\mathcal{B}(X)} < 1
+ \]
+ by assumption, by the previous lemma, $(I - S_1^{-1}(S_1 - S_2))^{-1}$ exists. Therefore the inverse of $S_2$ is
+ \[
+ S_2^{-1} = (I - S_1^{-1}(S_1 - S_2))^{-1} S_1^{-1}.\qedhere
+ \]
+\end{proof}
+
+We can now return to prove our original theorem.
+\begin{thm}
+ Let $X$ be a Banach space, $T \in \mathcal{B}(X)$. Then $\sigma(T)$ is a non-empty, closed subset of
+ \[
+ \{\lambda \in \C: |\lambda| \leq \|T\|_{\mathcal{B}(X)}\}.
+ \]
+\end{thm}
+Note that it is not hard to prove that it is closed and a subset of $\{\lambda \in \C: |\lambda| \leq \|T\|_{\mathcal{B}(X)}\}$. The hard part is to prove it is non-empty.
+
+\begin{proof}
+ We first prove the closedness of the spectrum. It suffices to prove that the resolvent set $\rho(T) = \C \setminus \sigma(T)$ is open, by the definition of closedness.
+
+ Let $\lambda \in \rho(T)$. By definition, $S_1 = T - \lambda I$ is invertible. Define $S_2 = T - \mu I$. Then
+ \[
+ \|S_1 - S_2\|_{\mathcal{B}(X)} = \|(T - \lambda I) - (T - \mu I)\|_{\mathcal{B}(X)} = |\lambda - \mu|.
+ \]
+ Hence if $|\lambda - \mu|$ is sufficiently small, then $T - \mu I$ is invertible by the above lemma. Hence $\mu \in \rho(T)$. So $\rho(T)$ is open.
+
 Showing $\sigma(T) \subseteq \{\lambda \in \C: |\lambda| \leq \|T\|_{\mathcal{B}(X)}\}$ is equivalent to showing
+ \[
+ \{\lambda \in \C: |\lambda| > \|T\|_{\mathcal{B}(X)}\} \subseteq \C \setminus \sigma(T) = \rho(T).
+ \]
+ Suppose $|\lambda| > \|T\|$. Then $I - \lambda^{-1} T$ is invertible since
+ \[
 \|\lambda^{-1} T\|_{\mathcal{B}(X)} = |\lambda|^{-1} \|T\|_{\mathcal{B}(X)} < 1.
+ \]
+ Therefore, $(I - \lambda^{-1} T)^{-1}$ exists, and hence
+ \[
 (\lambda I - T)^{-1} = \lambda^{-1} (I - \lambda^{-1} T)^{-1}
+ \]
+ is well-defined. Therefore $\lambda I - T$, and hence $T - \lambda I$ is invertible. So $\lambda \in \rho(T)$.
+
+ Finally, we need to show it is non-empty. How did we prove it in the case of finite-dimensional vector spaces? In that case, it ultimately boiled down to the fundamental theorem of algebra. And how did we prove the fundamental theorem of algebra? We said that if $p(x)$ is a polynomial with no roots, then $\frac{1}{p(x)}$ is bounded and entire, hence constant.
+
+ We are going to do the same proof. We look at $\frac{1}{T - \lambda I}$ as a function of $\lambda$. If $\sigma(T) = \emptyset$, then this is an everywhere well-defined function. We show that this is entire and bounded, and hence by ``Liouville's theorem'', it must be constant, which is impossible (in the finite-dimensional case, we would have inserted a $\det$ there).
+
+ So suppose $\sigma(T) = \emptyset$, and consider the function $R: \C \to \mathcal{B}(X)$, given by $R(\lambda) = (T - \lambda I)^{-1}$.
+
+ We first show this is entire. This, by definition, means $R$ is given by a power series near any point $\lambda_0 \in \C$. Fix such a point. Then as before, we can expand
+ \begin{align*}
+ T - \lambda I &= (T - \lambda_0 I)\Big[I - (T - \lambda_0 I)^{-1} \Big((T - \lambda_0 I) - (T - \lambda I)\Big)\Big]\\
+ &= (T - \lambda_0 I)\Big[I - (\lambda - \lambda_0)(T - \lambda_0 I)^{-1}\Big].
+ \end{align*}
+ Then for $(\lambda - \lambda_0)$ small, we have
+ \begin{align*}
+ (T - \lambda I)^{-1} &= \left(\sum_{i = 0}^\infty (\lambda - \lambda_0)^i (T - \lambda_0 I)^{-i}\right) (T - \lambda_0 I)^{-1}\\
+ &= \sum_{i = 0}^\infty (\lambda - \lambda_0)^i (T - \lambda_0I)^{-i - 1}.
+ \end{align*}
+ So this is indeed given by an absolutely convergent power series near $\lambda_0$.
+% This is just the formula for $S_1$ and $S_2$ we had above, with $S_1 = T - \lambda_0 I$ and $S_2 = T - \lambda I$. We can expand this as
+% \[
+% T - \lambda I = (T - \lambda_0 I)(I - (T - \lambda_0 I)^{-1}((\lambda - \lambda_0)I)).
+% \]
+% For $|\lambda - \lambda_0|$ sufficiently small, we have $\|(T - \lambda_0 I)^{-1}((\lambda - \lambda_0)I)\| < 1$. Then by the proof the previous lemma, the inverse of $(I - (T - \lambda_0 I)^{-1}((\lambda - \lambda_0)I))$ is given by
+% \begin{align*}
+% (T - \lambda I)^{-1} &= \sum_{i = 0}^\infty ((T - \lambda_0 I)^{-1} (\lambda - \lambda_0) I)^i (T - \lambda_0 I)^{-1}\\
+% &= \sum_{i = 0}^\infty (T - \lambda_0 I)^{-i - 1}(\lambda - \lambda_0)^i I.
+% \end{align*}
+% This is a power series in $(\lambda - \lambda_0)$, and for $|\lambda - \lambda_0|$ small, this series converges absolutely. Hence $R$ is analytic at $\lambda_0$. Since $\lambda_0$ is arbitrary, $R$ is entire.
+
+ Next, we show $R$ is bounded, i.e.
+ \[
+ \sup_{\lambda \in \C} \|R (\lambda)\|_{\mathcal{B}(X)} < \infty.
+ \]
+ It suffices to prove this for $\lambda$ large. Note that we have
+ \[
+ (T - \lambda I)^{-1} = \lambda^{-1}(\lambda^{-1} T - I)^{-1} = -\lambda^{-1} \sum_{i = 0}^\infty \lambda^{-i} T^i.
+ \]
+ Hence we get
+ \begin{align*}
+ \|(\lambda I - T)^{-1}\|_{\mathcal{B}(X)} &\leq |\lambda|^{-1}\sum_{i = 0}^\infty |\lambda|^{-i} \|T^i\|_{\mathcal{B}(X)}\\
+ &\leq |\lambda|^{-1} \sum_{i = 0}^\infty \big(|\lambda|^{-1} \|T\|_{\mathcal{B}(X)}\big)^i\\
+ &\leq \frac{1}{|\lambda| - \|T\|_{\mathcal{B}(X)}},
+ \end{align*}
+ which tends to $0$ as $|\lambda| \to \infty$. So it is bounded.
+
+ By ``Liouville's theorem'', $R(\lambda)$ is constant, which is clearly a contradiction since $R(\lambda) \not= R(\mu)$ for $\lambda \not= \mu$.
+\end{proof}
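As an aside, the Neumann series expansion of the resolvent used in the proof above is easy to check numerically in finite dimensions. The following Python sketch (the $3 \times 3$ matrix and the points $\lambda_0, \lambda$ are invented for illustration, chosen so the series converges) compares the partial sums of $\sum_i (\lambda - \lambda_0)^i (T - \lambda_0 I)^{-i - 1}$ against a directly computed inverse:

```python
import numpy as np

# Hypothetical small matrix; lam0 and lam lie well outside its spectrum,
# so |lam - lam0| * ||(T - lam0 I)^{-1}|| < 1 and the series converges.
T = np.array([[0.5, 0.2, 0.0],
              [0.1, -0.3, 0.4],
              [0.0, 0.2, 0.1]])
lam0, lam = 2.0, 2.1

R0 = np.linalg.inv(T - lam0 * np.eye(3))
# Partial sum of (T - lam I)^{-1} = sum_i (lam - lam0)^i R0^{i+1}
series = sum((lam - lam0) ** i * np.linalg.matrix_power(R0, i + 1)
             for i in range(50))
exact = np.linalg.inv(T - lam * np.eye(3))
err = np.linalg.norm(series - exact, 2)
print(err)
```

The convergence condition here is exactly the smallness condition on $|\lambda - \lambda_0|$ used in the proof.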
+Of course, to do this properly, we need a version of Liouville's theorem for Banach-space valued functions as opposed to complex-valued functions. So let's prove this.
+
+\begin{prop}[Liouville's theorem for Banach space-valued analytic function]
+ Let $X$ be a Banach space, and $F: \C \to X$ be entire (in the sense that $F$ is given by an absolutely convergent power series in some neighbourhood of any point) and norm bounded, i.e.
+ \[
+ \sup_{z \in \C} \|F(z)\|_X < \infty.
+ \]
+ Then $F$ is constant.
+\end{prop}
+This is a generalization of Liouville's theorem to the case where the target of the map is a Banach space. To prove this, we reduce this to the case of complex-valued functions. To do so, we compose this $F$ with a map $X \to \C$.
+
+\begin{proof}
+ Let $f \in X^*$. Then we show $f \circ F: \C \to \C$ is bounded and entire. To see it is bounded, just note that $f$ is a bounded linear map. So
+ \[
+ \sup_{z \in \C} |f\circ F(z)| \leq \sup_{ z \in \C} \|f\|_{X^*} \|F(z)\|_X < \infty.
+ \]
+ Analyticity can be shown in a similar fashion, exploiting the fact that $f$ is linear and continuous.
+
+ Hence Liouville's theorem implies $f \circ F$ is constant, i.e.\ $(f \circ F)(z) = (f\circ F)(0)$. In particular, this implies $f(F(z) - F(0)) = 0$. Moreover, this is true for all $f \in X^*$. Hence by (corollary of) Hahn-Banach theorem, we know $F(z) - F(0) = 0$ for all $z \in \C$. Therefore $F$ is constant.
+\end{proof}
+We have thus completed our proof that $\sigma(T)$ is non-empty, closed and a subset of $\{\lambda \in \C: |\lambda| \leq \|T\|_{\mathcal{B}(X)}\}$.
+
+However, we want to know more. Apart from the spectrum itself, we also had the point spectrum $\sigma_p(T)$ and the approximate point spectrum $\sigma_{ap}(T)$, and we had the inclusions
+\[
+ \sigma_p(T) \subseteq \sigma_{ap}(T) \subseteq \sigma(T).
+\]
+We know that the largest set $\sigma(T)$ is non-empty, but we want the smaller ones to be non-empty as well. We have the following theorem:
+\begin{thm}
+ We have
+ \[
+ \sigma_{ap}(T) \supseteq \partial \sigma(T),
+ \]
+ where $\partial \sigma(T)$ is the boundary of $\sigma(T)$ in the topology of $\C$. In particular, $\sigma_{ap}(T) \not= \emptyset$.
+\end{thm}
+On the other hand, it is possible for $\sigma_p(T)$ to be empty (in infinite dimensional cases).
+
+\begin{proof}
+ Let $\lambda \in \partial \sigma(T)$. Pick a sequence $\{\lambda_n\}_{n = 1}^\infty \subseteq \rho(T) = \C\setminus \sigma(T)$ such that $\lambda_n \to \lambda$. We claim that $R(\lambda_n) = (T - \lambda_n I)^{-1}$ satisfies
+ \[
+ \|R(\lambda_n)\|_{\mathcal{B}(X)} \to \infty.
+ \]
+ If this were the case, then we can pick $\mathbf{y}_n \in X$ such that $\|\mathbf{y}_n\| \to 0$ and $\|R(\lambda_n)(\mathbf{y}_n)\| = 1$. Setting $\mathbf{x}_n = R(\lambda_n)(\mathbf{y}_n)$, we have
+ \begin{align*}
+ \|(T - \lambda I)\mathbf{x}_n \| &\leq \|(T - \lambda_n I) \mathbf{x}_n\|_X + \|(\lambda - \lambda_n) \mathbf{x}_n\|_X\\
+ &= \|(T - \lambda_n I) (T - \lambda_n I)^{-1} \mathbf{y}_n\|_X + \|(\lambda - \lambda_n) \mathbf{x}_n\|\\
+ &= \|\mathbf{y}_n\|_X + |\lambda - \lambda_n|\\
+ &\to 0.
+ \end{align*}
+ So $\lambda \in \sigma_{ap}(T)$.
+
+ Thus, it remains to prove that $\|R(\lambda_n)\|_{\mathcal{B}(X)} \to \infty$. Recall from last time that if $S_1$ is invertible and
+ \[
+ \|S_1^{-1}\|_{\mathcal{B}(X)} \|S_1 - S_2\|_{\mathcal{B}(X)} < 1, \tag{$*$}
+ \]
+ then $S_2$ is invertible. Now for $\mu \in \sigma(T)$, the operator $T - \mu I$ is not invertible, while $T - \lambda_n I$ is. So $(*)$ must fail for $S_1 = T - \lambda_n I$ and $S_2 = T - \mu I$, i.e.
+ \[
+ \|R(\lambda_n)\|_{\mathcal{B}(X)} |\mu - \lambda_n| = \|R(\lambda_n)\|_{\mathcal{B}(X)} \|(T - \lambda_n I) - (T - \mu I)\|_{\mathcal{B}(X)} \geq 1.
+ \]
+ Since $\lambda_n \to \lambda \in \partial \sigma(T) \subseteq \sigma(T)$, it follows that
+ \[
+ \|R(\lambda_n)\|_{\mathcal{B}(X)} \geq \frac{1}{\inf\{ |\mu - \lambda_n|: \mu \in \sigma(T)\}} \to \infty.
+ \]
+ So we are done.
+\end{proof}
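The key inequality in this proof, $\|R(\lambda)\| \cdot \operatorname{dist}(\lambda, \sigma(T)) \geq 1$, is easy to see numerically in finite dimensions, where it follows from the operator norm dominating the spectral radius. A Python sketch with an invented upper-triangular matrix (eigenvalues $1, 2, 3$) and a few sample points of the resolvent set:

```python
import numpy as np

# Upper triangular, so the eigenvalues are the diagonal entries 1, 2, 3.
T = np.diag([1.0, 2.0, 3.0]) + np.triu(np.ones((3, 3)), 1)
eigs = np.linalg.eigvals(T)

products = []
for lam in [0.5, 1.5 + 1j, 2.9]:       # sample points in the resolvent set
    dist = min(abs(lam - mu) for mu in eigs)
    R_norm = np.linalg.norm(np.linalg.inv(T - lam * np.eye(3)), 2)
    products.append(R_norm * dist)     # the proof shows this is >= 1

print(products)
```

In particular, $\|R(\lambda)\|$ blows up as $\lambda$ approaches the spectrum, which is what drives the argument above.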
+
+Having proven so many theorems, we now look at a specific example.
+
+\begin{eg}
+ Consider the shift operator $S: \ell_\infty \to \ell_\infty$ defined by
+ \[
+ (a_1, a_2, a_3, \cdots) \mapsto (0, a_1, a_2, \cdots).
+ \]
+ Then $S$ is a bounded linear operator with norm $\|S\|_{\mathcal{B}(\ell_\infty)} = 1$. The theorem then tells us $\sigma(S)$ is a non-empty closed subset of $\{\lambda \in \C: |\lambda| \leq 1\}$.
+
+ First, we want to understand what the point spectrum is. In fact, it is empty. To show this, suppose
+ \[
+ S(a_1, a_2, a_3, \cdots) = \lambda (a_1, a_2, a_3, \cdots)
+ \]
+ for some $\lambda \in \C$. In other words,
+ \[
+ (0, a_1, a_2, \cdots) = \lambda (a_1, a_2, a_3, \cdots).
+ \]
+ First consider the possibility that $\lambda = 0$. Then the right-hand side is zero, so $(0, a_1, a_2, \cdots) = \mathbf{0}$, and hence $a_i = 0$ for all $i$.
+
+ If $\lambda \not= 0$, then for the first coordinate to match, we must have $a_1 = 0$. Then for the second coordinate to match, we also need $a_2 = 0$. By induction, we need all $a_i = 0$. So $\ker (S - \lambda I) = \{0\}$ for all $\lambda \in \C$.
+
+ To find the spectrum, we will in fact show that
+ \[
+ \sigma(S) = D = \{\lambda \in \C: |\lambda| \leq 1\}.
+ \]
+ To prove this, we need to show that for any $\lambda \in D$, $S - \lambda I$ is not surjective. The $\lambda = 0$ case is obvious. For the other cases, we first have a look at what the image of $S - \lambda I$ looks like. We take
+ \[
+ (b_1, b_2, b_3, \cdots) \in \ell_\infty.
+ \]
+ Suppose for some $\lambda \in D$, there exists $(a_1, a_2, \cdots)$ such that we have
+ \[
+ (S - \lambda I) (a_1, a_2, \cdots) = (b_1, b_2, \cdots)
+ \]
+ In other words, we have
+ \[
+ (0, a_1, a_2, \cdots) - (\lambda a_1, \lambda a_2, \lambda a_3, \cdots) = (b_1, b_2, b_3, \cdots).
+ \]
+ So $-\lambda a_1 = b_1$. Hence we have
+ \[
+ a_1 = -\lambda^{-1} b_1.
+ \]
+ The next line then gives
+ \[
+ a_1 - \lambda a_2 = b_2.
+ \]
+ Hence
+ \[
+ a_2 = -\lambda^{-1}(b_2 - a_1) = -\lambda^{-1} (b_2 + \lambda^{-1} b_1).
+ \]
+ Inductively, we can show that
+ \[
+ a_n = -\lambda^{-1}(b_n + \lambda^{-1} b_{n - 1} + \lambda^{-2}b_{n - 2} + \cdots + \lambda^{-n + 1}b_1).
+ \]
+ Now suppose $0 < |\lambda| \leq 1$. Write $\lambda = |\lambda| e^{i\theta}$, and pick $b_n = e^{-in\theta}$, so that $|b_n| = 1$ and $\lambda^{-i}b_{n - i} = |\lambda|^{-i} e^{-in\theta}$ for each $i$. Then all the terms in the bracket have the same argument, so $|a_n| \geq n \to \infty$. Such a sequence $(a_n) \not\in \ell_\infty$, while $(b_n) \in \ell_\infty$. So $(b_n) \not\in \im (S - \lambda I)$. Therefore for $|\lambda| \leq 1$, $S - \lambda I$ is not surjective.
+
+ Hence we have $\sigma(S) \supseteq D$. By the theorem, we also know $\sigma(S) \subseteq D$. So in fact $\sigma(S) = D$.
+
+ Finally, we show that
+ \[
+ \sigma_{ap}(S) = \partial D = \{\lambda \in \C: |\lambda| = 1\}.
+ \]
+ Our theorem tells us that $\partial D \subseteq \sigma_{ap}(S) \subseteq D$. To show that indeed $\partial D = \sigma_{ap}(S)$, note that if $|\lambda| < 1$, then for all $\mathbf{x} \in \ell_\infty$,
+ \[
+ \|(S - \lambda I)\mathbf{x}\|_{\ell_\infty} \geq \|S\mathbf{x}\|_{\ell_\infty} - |\lambda|\|\mathbf{x}\|_{\ell_\infty} = \|\mathbf{x}\|_{\ell_\infty} - |\lambda| \|\mathbf{x}\|_{\ell_\infty} = (1 - |\lambda|) \|\mathbf{x}\|_{\ell_\infty}.
+ \]
+ So if $|\lambda| < 1$, then there exists no sequence $\mathbf{x}_n$ with $\|\mathbf{x}_n\|_{\ell_\infty} = 1$ and $\|(S - \lambda I) \mathbf{x}_n \|_{\ell_\infty} \to 0$. So $\lambda$ is not in the approximate point spectrum.
+\end{eg}
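The recurrence in the example can also be run numerically. The Python sketch below (with an arbitrary choice of $\lambda$ of modulus $\frac{1}{2}$, purely for illustration) solves $(S - \lambda I)\mathbf{a} = \mathbf{b}$ coordinate by coordinate for the phase-aligned unit-modulus sequence $b_n = e^{-in\theta}$, and watches $|a_n|$ blow up:

```python
import numpy as np

lam = 0.5 * np.exp(0.7j)            # arbitrary lambda with |lambda| = 1/2
theta = np.angle(lam)
N = 30
# b_n = e^{-i n theta}: unit modulus, phases chosen so that all terms
# in the formula for a_n point in the same direction.
b = np.exp(-1j * theta * np.arange(1, N + 1))

a = np.zeros(N, dtype=complex)
a[0] = -b[0] / lam                  # from -lam * a_1 = b_1
for n in range(1, N):
    a[n] = (a[n - 1] - b[n]) / lam  # from a_{n-1} - lam * a_n = b_n

print(abs(a[-1]))                   # grows like |lam|^{-n}, hence unbounded
```

Here $|a_{30}| = 2(2^{30} - 1) \approx 2.1 \times 10^9$, so no bounded sequence $\mathbf{a}$ solves the equation for this $\mathbf{b}$.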
+We see that these results are rather unpleasant: the spectrum behaves quite unlike the finite-dimensional case. We saw last time that the spectrum $\sigma(T)$ can in general be complicated. In fact, any non-empty compact subset of $\C$ can be realized as the spectrum of some operator $T$ on some Hilbert space $H$. This is an exercise on the Example sheet.
+
+We are now going to introduce a class of ``nice'' operators whose spectrum behaves in a way more similar to the finite-dimensional case. In particular, most of $\sigma(T)$ consists of $\sigma_p(T)$ and is discrete. This includes, at least, all finite rank operators (i.e.\ operators $T$ such that $\dim (\im T) < \infty$) and their limits.
+
+\begin{defi}[Compact operator]
+ Let $X, Y$ be Banach spaces. We say $T \in \mathcal{L}(X, Y)$ is \emph{compact} if for every bounded subset $E$ of $X$, $T(E)$ is totally bounded.
+
+ We write $\mathcal{B}_0(X)$ for the set of all compact operators $T \in \mathcal{B}(X)$.
+\end{defi}
+Note that in this definition, we only required $T$ to be a linear map, not a bounded linear map. However, boundedness comes for free, because a totally bounded set is bounded.
+
+There is a nice alternative characterization of compact operators:
+\begin{prop}
+ Let $X, Y$ be Banach spaces. Then $T \in \mathcal{L}(X, Y)$ is compact if and only if $T(B(1))$ is totally bounded if and only if $\overline{T(B(1))}$ is compact.
+\end{prop}
+The first equivalence is obvious, since $B(1)$ is a bounded set, and given any bounded set $E$, we can rescale it to be contained in $B(1)$. The second equivalence comes from the fact that a space is compact if and only if it is totally bounded and complete.
+
+The last characterization is what we will use most of the time, and this is where the name ``compact operator'' came from.
+
+\begin{prop}
+ Let $X$ be a Banach space. Then $\mathcal{B}_0(X)$ is a closed subspace of $\mathcal{B}(X)$. Moreover, if $T \in \mathcal{B}_0(X)$ and $S \in \mathcal{B}(X)$, then $TS, ST \in \mathcal{B}_0(X)$.
+
+ In a more algebraic language, this means $\mathcal{B}_0(X)$ is a closed ideal of the algebra $\mathcal{B}(X)$.
+\end{prop}
+
+\begin{proof}
+ There are three things to prove. First, it is obvious that $\mathcal{B}_0(X)$ is a subspace. To check it is closed, suppose $\{T_n\}_{n = 1}^\infty \subseteq \mathcal{B}_0(X)$ and $\|T_n - T\|_{\mathcal{B}(X)} \to 0$. We need to show $T \in \mathcal{B}_0(X)$, i.e.\ $T(B(1))$ is totally bounded.
+
+ Let $\varepsilon > 0$. Then there exists $N$ such that
+ \[
+ \|T - T_n\|_{\mathcal{B}(X)} < \varepsilon
+ \]
+ whenever $n \geq N$. Take such an $n$. Then $T_n(B(1))$ is totally bounded. So there exist $\mathbf{x}_1, \cdots, \mathbf{x}_k \in B(1)$ such that $\{T_n \mathbf{x}_i\}_{i = 1}^k$ is an $\varepsilon$-net for $T_n(B(1))$. We now claim that $\{T \mathbf{x}_i\}_{i = 1}^k$ is a $3\varepsilon$-net for $T(B(1))$.
+
+ This is easy to show. Let $\mathbf{x} \in X$ be such that $\|\mathbf{x}\| \leq 1$. Then by the triangle inequality,
+ \begin{align*}
+ \|T \mathbf{x} - T \mathbf{x}_i\|_X &\leq \| T \mathbf{x} - T_n \mathbf{x}\| + \|T_n \mathbf{x} - T_n \mathbf{x}_i\| + \|T_n \mathbf{x}_i - T \mathbf{x}_i\|\\
+ &\leq \varepsilon + \|T_n \mathbf{x} - T_n \mathbf{x}_i\|_X + \varepsilon\\
+ &= 2 \varepsilon + \|T_n \mathbf{x} - T_n \mathbf{x}_i\|_X
+ \end{align*}
+ Now since $\{T_n \mathbf{x}_i\}$ is an $\varepsilon$-net for $T_n (B(1))$, there is some $i$ such that $\|T_n \mathbf{x} - T_n \mathbf{x}_i\| < \varepsilon$. So this gives
+ \[
+ \|T \mathbf{x} - T \mathbf{x}_i\|_X \leq 3 \varepsilon.
+ \]
+ Finally, let $T \in \mathcal{B}_0(X)$ and $S \in \mathcal{B}(X)$. Let $\{\mathbf{x}_n \} \subseteq X$ be such that $\|\mathbf{x}_n\|_X \leq 1$. Since $T$ is compact, i.e.\ $\overline{T(B(1))}$ is compact, there exists a convergent subsequence of $\{T \mathbf{x}_n\}$.
+
+ Since $S$ is bounded, it maps a convergent sequence to a convergent sequence. So $\{S T \mathbf{x}_n\}$ also has a convergent subsequence. So $\overline{ST(B(1))}$ is compact. So $ST$ is compact.
+
+ We also have to show that $TS(B(1))$ is totally bounded. Since $S$ is bounded, $S(B(1))$ is bounded. Since $T$ sends a bounded set to a totally bounded set, it follows that $TS(B(1))$ is totally bounded. So $TS$ is compact.
+\end{proof}
+
+At this point, it helps to look at some examples of actual compact operators.
+\begin{eg}\leavevmode
+ \begin{enumerate}
+ \item Let $X, Y$ be Banach spaces. Then any finite rank operator in $\mathcal{B}(X, Y)$ is compact. This is since $T(B(1))$ is a bounded subset of a finite-dimensional space (since $T$ is bounded), and any bounded subset of a finite-dimensional space is totally bounded.
+
+ In particular, any $f \in X^*$ is compact. Moreover, by the previous proposition, limits of finite-rank operators are also compact (since $\mathcal{B}_0(X)$ is closed).
+ \item Let $X$ be a Banach space. Then $I: X \to X$ is compact if and only if $X$ is finite dimensional. This is since $\overline{B(1)}$ is compact if and only if the space is finite-dimensional.
+ \item Let $K: \R \to \R$ be a smooth function. Define $T: C([0, 1]) \to C([0, 1])$ by the convolution
+ \[
+ (T f)(x) = \int_0^1 K(x - y) f(y)\;\d y.
+ \]
+ We first show $T$ is bounded. This is since
+ \[
+ \sup_x |T f(x)| \leq \sup_{z \in [-1, 1]} |K(z)| \sup_y |f(y)|.
+ \]
+ Since $K$ is smooth, it is bounded on $[-1, 1]$. So $T$ is indeed bounded. In fact, $T$ is compact. To see this, notice
+ \[
+ \sup_x \left|\frac{\d (Tf)}{\d x}(x)\right| \leq \sup_{z \in [-1, 1]} |K'(z)| \sup_y |f(y)|.
+ \]
+ Therefore if we have a sequence $\{f_n\} \subseteq C([0, 1])$ with $\|f_n\|_{C([0, 1])} \leq 1$, then $\{T f_n\}_{n = 1}^\infty$ is uniformly bounded with uniformly bounded derivative, and hence equicontinuous. By Arzel\`a-Ascoli theorem, $\{T f_n\}_{n = 1}^\infty$ has a convergent subsequence. Therefore $\overline{T(B(1))}$ is compact.
+ \end{enumerate}
+\end{eg}
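For the last example, compactness can be made visible numerically: discretising the integral operator gives a matrix whose singular values decay extremely fast, so the operator is close to finite-rank. A Python sketch (the smooth kernel $K(z) = e^{-z^2}$ and the grid size are invented for illustration):

```python
import numpy as np

n = 200
x = (np.arange(n) + 0.5) / n                  # midpoint grid on [0, 1]
K = np.exp(-(x[:, None] - x[None, :]) ** 2)   # smooth kernel K(x - y)
T = K / n                                     # quadrature weight 1/n

s = np.linalg.svd(T, compute_uv=False)
print(s[0], s[20])                            # s[20] is negligible next to s[0]
```

The fast decay reflects the fact that a smooth kernel is well approximated by short sums of separable terms, i.e.\ the operator is a limit of finite-rank operators, which we have seen are compact.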
+There is much more we can say about the last example. For example, requiring $K$ to be smooth is clearly overkill. Requiring, say, differentiable with bounded derivative is already sufficient. Alternatively, we can ask what we can get if we work on, say, $L^2$ instead of $C([0, 1])$.
+
+However, we don't have enough time for that, and instead we should return to developing some general theory. An important result is the following theorem, characterizing the point spectrum and spectrum of a compact operator.
+\begin{thm}
+ Let $X$ be an infinite-dimensional Banach space, and $T \in \mathcal{B}(X)$ be a compact operator. Then $\sigma_p(T) = \{\lambda_i\}$ is at most countable. If $\sigma_p(T)$ is infinite, then $\lambda_i \to 0$.
+
+ The spectrum is given by $\sigma(T) = \sigma_p(T) \cup \{0\}$. Moreover, for every non-zero $\lambda_i \in \sigma_p (T)$, the eigenspace is finite-dimensional.
+\end{thm}
+Note that it is still possible for a compact operator to have empty point spectrum. In that case, the spectrum is just $\{0\}$. An example of this is found on the example sheet.
+
+We will only prove this in the case where $X = H$ is a Hilbert space. In a lot of the proofs, we will have a closed subspace $V \leq X$, and we often want to pick an element in $X \setminus V$ that is ``far away'' from $V$ in some sense. If we have a Hilbert space, then we can pick this element to be an element in the orthogonal complement of $V$. If we are not in a Hilbert space, then we need to invoke \term{Riesz's lemma}, which we shall not go into.
+
+We will break the full proof of the theorem into many pieces.
+
+\begin{prop}
+ Let $H$ be a Hilbert space, and $T \in \mathcal{B}_0(H)$ a compact operator. Let $a > 0$. Then there are only finitely many linearly independent eigenvectors whose eigenvalue have magnitude $\geq a$.
+\end{prop}
+This already gives most of the theorem, since this mandates that $\sigma_p(T)$ is at most countable, and if $\sigma_p(T)$ is infinite, we must have $\lambda_i \to 0$. Since there are only finitely many linearly independent \emph{eigenvectors} with eigenvalue of magnitude $\geq a$, the eigenspaces for non-zero eigenvalues are also finite-dimensional.
+
+\begin{proof}
+ Suppose not. Then there are infinitely many linearly independent $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3, \cdots$ such that $T \mathbf{x}_i = \lambda_i \mathbf{x}_i$ with $|\lambda_i| \geq a$.
+
+ Define $X_n = \spn\{\mathbf{x}_1, \cdots, \mathbf{x}_n\}$. Since the $\mathbf{x}_i$'s are linearly independent, there exists $\mathbf{y}_n \in X_n \cap X_{n - 1}^\perp$ with $\|\mathbf{y}_n\|_H = 1$.
+
+ Now let
+ \[
+ \mathbf{z}_n = \frac{\mathbf{y}_n}{\lambda_n}.
+ \]
+ Note that
+ \[
+ \|\mathbf{z}_n\|_H \leq \frac{1}{a}.
+ \]
+ Since $X_n$ is spanned by the eigenvectors, we know that $T$ maps $X_n$ into itself. So we have
+ \[
+ T \mathbf{z}_n \in X_n.
+ \]
+ Moreover, we claim that $T \mathbf{z}_n - \mathbf{y}_n \in X_{n - 1}$. We can check this directly. Let
+ \[
+ \mathbf{y}_n = \sum_{k = 1}^n c_k \mathbf{x}_k.
+ \]
+ Then we have
+ \begin{align*}
+ T\mathbf{z}_n - \mathbf{y}_n &= \frac{1}{\lambda_n} T\left(\sum_{k = 1}^n c_k \mathbf{x}_k\right) - \sum_{k = 1}^n c_k \mathbf{x}_k\\
+ &= \sum_{k = 1}^n c_k \left(\frac{\lambda_k}{\lambda_n} - 1\right) \mathbf{x}_k\\
+ &= \sum_{k = 1}^{n - 1} c_k \left(\frac{\lambda_k}{\lambda_n} - 1\right) \mathbf{x}_k \in X_{n - 1}.
+ \end{align*}
+ We next claim that $\|T \mathbf{z}_n - T \mathbf{z}_m\|_H \geq 1$ whenever $n \not= m$. If this holds, then $T$ is not compact, since $\{T \mathbf{z}_n\}$ is a bounded sequence with no convergent subsequence.
+
+ To show this, wlog, assume $n > m$. We have
+ \begin{align*}
+ \|T \mathbf{z}_n - T \mathbf{z}_m\|_H^2 &= \|(T \mathbf{z}_n - \mathbf{y}_n) - (T \mathbf{z}_m - \mathbf{y}_n)\|_H^2\\
+ \intertext{Note that $T \mathbf{z}_n - \mathbf{y}_n \in X_{n - 1}$, and since $m < n$, we also have $T \mathbf{z}_m \in X_{n - 1}$. By construction, $\mathbf{y}_n \perp X_{n - 1}$. So by Pythagorean theorem, we have}
+ &= \|T \mathbf{z}_n - \mathbf{y}_n - T \mathbf{z}_m\|_H^2 + \|\mathbf{y}_n\|_H^2\\
+ &\geq \|\mathbf{y}_n\|^2\\
+ &= 1.
+ \end{align*}
+ So done.
+\end{proof}
+
+To prove the previous theorem, the only remaining thing to prove is that $\sigma(T) = \sigma_p(T) \cup \{0\}$. In order to prove this, we need a lemma, which might seem a bit unmotivated at first, but will soon prove itself useful.
+
+\begin{lemma}
+ Let $H$ be a Hilbert space, and $T \in \mathcal{B}(H)$ compact. Then $\im (I - T)$ is closed.
+\end{lemma}
+
+\begin{proof}
+ We let $S$ be the orthogonal complement of $\ker (I - T)$, which is a closed subspace, hence a Hilbert space. We shall consider the restriction $(I - T)|_S$, which has the same image as $I - T$.
+
+
+ To show that $\im (I - T)$ is closed, it suffices to show that $(I - T)|_S$ is bounded \emph{below}, i.e.\ there is some $C > 0$ such that
+ \[
+ \|\mathbf{x}\|_H \leq C \|(I - T) \mathbf{x}\|_H
+ \]
+ for all $\mathbf{x} \in S$. Indeed, if this holds and $(I - T) \mathbf{x}_n \to \mathbf{y}$ for some sequence $\{\mathbf{x}_n\} \subseteq S$, then
+ \[
+ \|\mathbf{x}_n - \mathbf{x}_m\| \leq C \|(I - T)(\mathbf{x}_n - \mathbf{x}_m)\| \to 0,
+ \]
+ and so $\{\mathbf{x}_n\}$ is a Cauchy sequence. Write $\mathbf{x}_n \to \mathbf{x}$. Then by continuity, $(I - T)\mathbf{x} = \mathbf{y}$, and so $\mathbf{y} \in \im (I - T)$.
+
+ Thus, suppose $(I - T)|_S$ is not bounded below. Then we can pick $\mathbf{x}_n \in S$ such that $\|\mathbf{x}_n\|_H = 1$, but $(I - T)\mathbf{x}_n \to 0$. Since $T$ is compact, we know $\{T \mathbf{x}_n\}$ has a convergent subsequence. We may wlog assume $T \mathbf{x}_n \to \mathbf{y}$. Then since $\|T\mathbf{x}_n - \mathbf{x}_n\|_H \to 0$, it follows that we also have $\mathbf{x}_n \to \mathbf{y}$. In particular, $\|\mathbf{y}\| = 1 \not= 0$, and $\mathbf{y} \in S$ since $S$ is closed.
+
+ But $\mathbf{x}_n \to \mathbf{y}$ also implies $T \mathbf{x}_n \to T \mathbf{y}$. So this means we must have $T\mathbf{y} = \mathbf{y}$. But this is a contradiction, since $\mathbf{y}$ does not lie in $\ker (I - T)$.
+\end{proof}
+
+\begin{prop}
+ Let $H$ be a Hilbert space, $T \in \mathcal{B}(H)$ compact. If $\lambda \not= 0$ and $\lambda \in \sigma(T)$, then $\lambda \in \sigma_p(T)$.
+\end{prop}
+
+\begin{proof}
+ We will prove that if $\lambda \not= 0$ and $\lambda \not\in \sigma_p(T)$, then $\lambda\not\in \sigma(T)$. In other words, let $\lambda \not= 0$ and $\ker (T - \lambda I) = \{0\}$. We will show that $T - \lambda I$ is surjective, i.e.\ $\im (T - \lambda I) = H$.
+
+ Suppose this is not the case. Denote $H_0 = H$ and $H_1 = \im (T - \lambda I)$. Note that $T - \lambda I = -\lambda(I - \lambda^{-1} T)$ and $\lambda^{-1}T$ is compact. So the previous lemma tells us $H_1$ is closed, and it is hence a Hilbert space. Moreover, $H_1 \subsetneq H_0$ by assumption.
+
+ We now define the sequence $\{H_n\}$ recursively by
+ \[
+ H_n = (T - \lambda I) H_{n - 1}.
+ \]
+ We claim that $H_n \subsetneq H_{n - 1}$. This must be the case, because the map $(T - \lambda I)^n: H_0 \to H_n$ is an isomorphism (it is injective and surjective). So the inclusion $H_n \subseteq H_{n - 1}$ is isomorphic to the inclusion $H_1 \subseteq H_0$, which is strict.
+
+ Thus we have a strictly decreasing sequence
+ \[
+ H_0 \supsetneq H_1 \supsetneq H_2 \supsetneq \cdots
+ \]
+ Let $\mathbf{y}_n$ be such that $\mathbf{y}_n \in H_n$, $\mathbf{y}_n \perp H_{n + 1}$ and $\|\mathbf{y}_n\|_H = 1$. We now claim $\|T \mathbf{y}_n - T \mathbf{y}_m\| \geq |\lambda|$ if $n \not= m$. This then contradicts the compactness of $T$. To show this, again wlog we can assume that $n > m$. Then we have
+ \begin{align*}
+ \|T \mathbf{y}_n - T \mathbf{y}_m\|_H^2 &= \|(T \mathbf{y}_n - \lambda\mathbf{y}_n) - (T \mathbf{y}_m - \lambda\mathbf{y}_m) - \lambda\mathbf{y}_m + \lambda\mathbf{y}_n\|^2\\
+ &= \|(T - \lambda I) \mathbf{y}_n - (T - \lambda I) \mathbf{y}_m - \lambda \mathbf{y}_m + \lambda \mathbf{y}_n\|^2_H\\
+ \intertext{Now note that $(T - \lambda I)\mathbf{y}_n \in H_{n + 1}\subseteq H_{m + 1}$, while $(T - \lambda I)\mathbf{y}_m$ and $\lambda \mathbf{y}_n$ are both in $H_{m + 1}$. So $\lambda \mathbf{y}_m$ is perpendicular to all of them, and the Pythagorean theorem tells us}
+ &= |\lambda|^2 \|\mathbf{y}_m\|^2 + \|(T - \lambda I)\mathbf{y}_n - (T - \lambda I)\mathbf{y}_m + \lambda \mathbf{y}_n\|^2\\
+ &\geq |\lambda|^2 \|\mathbf{y}_m\|^2\\
+ &= |\lambda|^2.
+ \end{align*}
+ This contradicts the compactness of $T$. Therefore $\im (T - \lambda I) = H$.
+\end{proof}
+
+Finally, we can prove the initial theorem.
+\begin{thm}
+ Let $H$ be an infinite-dimensional Hilbert space, and $T \in \mathcal{B}(H)$ be a compact operator. Then $\sigma_p(T) = \{\lambda_i\}$ is at most countable. If $\sigma_p(T)$ is infinite, then $\lambda_i \to 0$.
+
+ The spectrum is given by $\sigma(T) = \sigma_p(T) \cup \{0\}$. Moreover, for every non-zero $\lambda_i \in \sigma_p (T)$, the eigenspace is finite-dimensional.
+\end{thm}
+
+\begin{proof}
+ As mentioned, it remains to show that $\sigma(T) = \sigma_p(T) \cup \{0\}$. The previous proposition tells us $\sigma(T) \setminus \{0\} \subseteq \sigma_p(T)$. So it only remains to show that $0 \in \sigma(T)$.
+
+ There are two possible cases. The first is if $\{\lambda_i\}$ is infinite. We have already shown that $\lambda_i \to 0$. So $0 \in \sigma(T)$ by the closedness of the spectrum.
+
+ Otherwise, if $\{\lambda_i\}$ is finite, let $E_{\lambda_1}, \cdots, E_{\lambda_n}$ be the eigenspaces. Define
+ \[
+ H' = \spn\{E_{\lambda_1}, \cdots, E_{\lambda_n}\}^\perp.
+ \]
+ This is non-trivial, since each $E_{\lambda_i}$ is finite-dimensional, but $H$ is infinite-dimensional. Then $T$ restricts to $T|_{H'}: H' \to H'$.
+
+ Now $T|_{H'}$ has no non-zero eigenvalues. By the previous discussion, we know $\sigma(T|_{H'}) \subseteq \{0\}$. By non-emptiness of $\sigma(T|_{H'})$, we know $0 \in \sigma(T|_{H'}) \subseteq \sigma(T)$.
+
+ So done.
+\end{proof}
+
+\subsection{Self-adjoint operators}
+We have just looked at compact operators. This time, we are going to add a condition of self-adjointness.
+
+\begin{defi}[Self-adjoint operator]
+ Let $H$ be a Hilbert space, $T \in \mathcal{B}(H)$. Then $T$ is \emph{self-adjoint} or \emph{Hermitian} if for all $\mathbf{x}, \mathbf{y} \in H$, we have
+ \[
+ \bra T \mathbf{x}, \mathbf{y}\ket = \bra \mathbf{x}, T \mathbf{y}\ket.
+ \]
+\end{defi}
+It is important to note that we defined the term for bounded linear operators $T$. If we have unbounded operators instead, Hermitian means something different from self-adjoint, and we have to be careful.
+
+Recall that we defined the adjoint of a linear map to be a map of the dual spaces. However, we will often abuse notation and call $T^*: H \to H$ the \emph{adjoint}, which is the (unique) operator such that for all $\mathbf{x}, \mathbf{y} \in H$,
+\[
+ \bra T \mathbf{x}, \mathbf{y}\ket = \bra \mathbf{x}, T^* \mathbf{y}\ket.
+\]
+It is an exercise to show that this is well-defined.
+
+How is this related to the usual adjoint? Let $\tilde{T}^*: H^* \to H^*$ be the usual adjoint. Then we have
+\[
+ T^* = \phi^{-1} \circ \tilde{T}^* \circ \phi,
+\]
+where $\phi: H \to H^*$ is defined by
+\[
+ \phi(\mathbf{v})(\mathbf{w}) = \bra \mathbf{w}, \mathbf{v}\ket
+\]
+as in the Riesz representation theorem.
+
+The main result regarding self-adjoint operators is the spectral theorem:
+\begin{thm}[Spectral theorem]
+ Let $H$ be an infinite dimensional Hilbert space and $T: H \to H$ a compact self-adjoint operator.
+ \begin{enumerate}
+ \item $\sigma_p(T) = \{\lambda_i\}_{i = 1}^N$ is at most countable.
+ \item $\sigma_p(T) \subseteq \R$.
+ \item $\sigma(T) = \{0\} \cup \sigma_p(T)$.
+ \item If $E_{\lambda_i}$ are the eigenspaces, then $\dim E_{\lambda_i}$ is finite if $\lambda_i \not= 0$.
+ \item $E_{\lambda_i} \perp E_{\lambda_j}$ if $\lambda_i \not= \lambda_j$.
+ \item If $\{\lambda_i\}$ is infinite, then $\lambda_i \to 0$.
+ \item
+ \[
+ T = \sum_{i = 1}^N \lambda_i P_{E_{\lambda_i}}.
+ \]
+ \end{enumerate}
+\end{thm}
+We have already shown (i), (iii), (iv) and (vi). The parts (ii) and (v) we already did in IA Vectors and Matrices, but for completeness, we will do the proof again. They do not require compactness. The only non-trivial bit left is the last part (vii).
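Part (vii) is exactly the familiar statement from finite dimensions, where a real symmetric matrix is the sum of its eigenvalues times the orthogonal projections onto the eigenspaces. A quick numerical reminder in Python (the symmetric matrix is invented for illustration):

```python
import numpy as np

T = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])       # self-adjoint (real symmetric)

lam, V = np.linalg.eigh(T)            # orthonormal eigenvectors in columns
# sum_i lambda_i * P_i, where P_i projects onto the i-th eigenvector
recon = sum(l * np.outer(v, v) for l, v in zip(lam, V.T))
err = np.linalg.norm(recon - T, 2)
print(err)
```

The content of the spectral theorem is that the same formula survives for compact self-adjoint operators on an infinite-dimensional Hilbert space.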
+
+We first do the two easy bits.
+\begin{prop}
+ Let $H$ be a Hilbert space and $T \in \mathcal{B}(H)$ self-adjoint. Then $\sigma_p(T) \subseteq \R$.
+\end{prop}
+
+\begin{proof}
+ Let $\lambda \in \sigma_p(T)$ and $\mathbf{v} \in \ker(T - \lambda I) \setminus \{0\}$. Then by definition of $\mathbf{v}$, we have
+ \[
+ \lambda = \frac{\bra T \mathbf{v}, \mathbf{v}\ket}{\|\mathbf{v}\|^2_H} = \frac{\bra \mathbf{v}, T \mathbf{v}\ket}{\|\mathbf{v}\|^2_H} = \bar{\lambda}.
+ \]
+ So $\lambda \in \R$.
+\end{proof}
+
+\begin{prop}
+ Let $H$ be a Hilbert space and $T \in \mathcal{B}(H)$ self-adjoint. If $\lambda, \mu \in \sigma_p(T)$ and $\lambda \not= \mu$, then $E_\lambda \perp E_\mu$.
+\end{prop}
+
+\begin{proof}
+ Let $\mathbf{v} \in \ker (T - \lambda I)\setminus \{0\}$ and $\mathbf{w} \in \ker(T - \mu I)\setminus \{0\}$. Then
+ \[
+ \lambda \bra \mathbf{v}, \mathbf{w}\ket = \bra T\mathbf{v}, \mathbf{w}\ket = \bra \mathbf{v}, T \mathbf{w}\ket = \bar{\mu} \bra \mathbf{v}, \mathbf{w}\ket = \mu \bra \mathbf{v}, \mathbf{w}\ket,
+ \]
+ using the fact that eigenvalues are real. Since $\lambda \not= \mu$ by assumption, we must have $\bra \mathbf{v}, \mathbf{w}\ket = 0$.
+\end{proof}
+
+To prove the final part, we need the following proposition:
+\begin{prop}
+ Let $H$ be a Hilbert space and $T \in \mathcal{B}(H)$ a \emph{compact} self-adjoint operator. If $T \not= 0$, then $T$ has a non-zero eigenvalue.
+\end{prop}
+This is consistent with our spectral theorem, since if $T$ is non-zero, then something in the sum $\lambda_i P_{E_{\lambda_i}}$ has to be non-zero. It turns out this is most of the work we need.
+
+However, to prove this, we need the following lemma:
+\begin{lemma}
+ Let $H$ be a Hilbert space, and $T \in \mathcal{B}(H)$ a compact self-adjoint operator. Then
+ \[
+ \|T\|_{\mathcal{B}(H)} = \sup_{\|\mathbf{x}\|_H = 1} |\bra \mathbf{x}, T \mathbf{x}\ket|.
+ \]
+\end{lemma}
+
+\begin{proof}
+ Write
+ \[
+ \lambda = \sup_{\|\mathbf{x}\|_H = 1} |\bra \mathbf{x}, T \mathbf{x}\ket|.
+ \]
+ Note that one direction is easy, since for all $\mathbf{x}$, Cauchy-Schwarz gives
+ \[
+ |\bra \mathbf{x}, T \mathbf{x}\ket| \leq \|T\mathbf{x}\|_H \|\mathbf{x}\|_H \leq \|T\|_{\mathcal{B}(H)} \|\mathbf{x}\|_H^2.
+ \]
+ So it suffices to show the inequality in the other direction. We now claim that
+ \[
+ \|T\|_{\mathcal{B}(H)} = \sup_{\|\mathbf{x}\|_H = 1, \|\mathbf{y}\|_H = 1} |\bra T\mathbf{x}, \mathbf{y}\ket|.
+ \]
+ To show this, recall that $\phi: H \to H^*$ defined by $\mathbf{v} \mapsto \bra \ph, \mathbf{v}\ket$ is an isometry. By definition, we have
+ \[
+ \|T\|_{\mathcal{B}(H)} = \sup_{\|\mathbf{x}\|_H = 1} \|T \mathbf{x}\|_H = \sup_{\|\mathbf{x}\|_H = 1} \|\phi(T \mathbf{x})\|_{H^*} = \sup_{\|\mathbf{x}\|_H = 1}\sup_{\|\mathbf{y}\|_H = 1} |\bra \mathbf{y}, T \mathbf{x}\ket|.
+ \]
+ Hence, it suffices to show that
+ \[
+ \sup_{\|\mathbf{x}\|_H = 1, \|\mathbf{y}\|_H = 1} |\bra T\mathbf{x}, \mathbf{y}\ket| \leq \lambda.
+ \]
+ Take $\mathbf{x}, \mathbf{y} \in H$ such that $\|\mathbf{x}\|_H = \|\mathbf{y}\|_H = 1$. We first perform a trick similar to the polarization identity. First, by multiplying $\mathbf{y}$ by an appropriate scalar, we can wlog assume $\bra T\mathbf{x}, \mathbf{y}\ket$ is real. Then we have
+ \begin{align*}
+ |\bra T(\mathbf{x} + \mathbf{y}), \mathbf{x} + \mathbf{y} \ket - \bra T(\mathbf{x} - \mathbf{y}), \mathbf{x} - \mathbf{y}\ket| &= 2 |\bra T\mathbf{x}, \mathbf{y}\ket + \bra T \mathbf{y}, \mathbf{x}\ket|\\
+ &= 4 |\bra T \mathbf{x}, \mathbf{y}\ket|.
+ \end{align*}
+ Hence we have
+ \begin{align*}
+ |\bra T \mathbf{x}, \mathbf{y}\ket| &= \frac{1}{4}|\bra T(\mathbf{x} + \mathbf{y}), \mathbf{x} + \mathbf{y} \ket - \bra T(\mathbf{x} - \mathbf{y}), \mathbf{x} - \mathbf{y}\ket|\\
+ &\leq \frac{1}{4} (\lambda \|\mathbf{x} + \mathbf{y}\|_H^2 + \lambda \|\mathbf{x} - \mathbf{y}\|_H^2)\\
+ &= \frac{\lambda}{4} (2\|\mathbf{x}\|_H^2 + 2 \|\mathbf{y}\|_H^2)\\
+ &= \lambda,
+ \end{align*}
+ where we used the parallelogram law. So we have $\|T\|_{\mathcal{B}(H)} \leq \lambda$.
+\end{proof}
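In finite dimensions, the lemma says that the operator norm of a symmetric matrix equals its largest Rayleigh quotient in absolute value, attained at an eigenvector of the eigenvalue of largest modulus. A Python check with an invented symmetric matrix:

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, -4.0]])      # self-adjoint (real symmetric)

norm_T = np.linalg.norm(T, 2)         # operator norm = max |eigenvalue|
lam, V = np.linalg.eigh(T)
v = V[:, np.argmax(np.abs(lam))]      # unit eigenvector of largest |lambda|
rayleigh = abs(v @ T @ v)             # |<v, Tv>| with ||v|| = 1

print(norm_T, rayleigh)
```

That the supremum is attained at such an eigenvector is exactly what the next proposition establishes in the infinite-dimensional compact case.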
+
+Finally, we can prove our proposition.
+\begin{prop}
+ Let $H$ be a Hilbert space and $T \in \mathcal{B}(H)$ a \emph{compact} self-adjoint operator. If $T \not= 0$, then $T$ has a non-zero eigenvalue.
+\end{prop}
+
+\begin{proof}
+ Since $T \not= 0$, we have $\|T\|_{\mathcal{B}(H)} \not= 0$. Let $\|T\|_{\mathcal{B}(H)} = \lambda$. We now claim that either $\lambda$ or $-\lambda$ is an eigenvalue of $T$.
+
+ By the previous lemma, there exists a sequence $\{\mathbf{x}_n\}_{n = 1}^\infty \subseteq H$ such that $\|\mathbf{x}_n\|_H = 1$ and $\bra \mathbf{x}_n, T \mathbf{x}_n\ket \to \pm \lambda$.
+
+ We consider the two cases separately. Suppose $\bra \mathbf{x}_n, T \mathbf{x}_n \ket \to \lambda$. Consider $T \mathbf{x}_n - \lambda \mathbf{x}_n$. Since $T$ is compact, there exists a subsequence such that $T \mathbf{x}_{n_k} \to \mathbf{y}$ for some $\mathbf{y} \in H$. For simplicity of notation, we assume $T \mathbf{x}_n \to \mathbf{y}$ itself. We have
+ \begin{align*}
+ 0 &\leq \|T \mathbf{x}_n - \lambda \mathbf{x}_n \|_H^2\\
+ &= \bra T\mathbf{x}_n - \lambda \mathbf{x}_n, T \mathbf{x}_n - \lambda \mathbf{x}_n\ket\\
+ &= \|T \mathbf{x}_n\|_H^2 - 2\lambda \bra T \mathbf{x}_n, \mathbf{x}_n\ket + \lambda^2 \|\mathbf{x}_n\|^2\\
+ &\to \lambda^2 - 2 \lambda^2 + \lambda^2\\
+ &= 0
+ \end{align*}
+ as $n \to \infty$. Note that we implicitly used the fact that $\bra T \mathbf{x}_n, \mathbf{x}_n\ket = \bra \mathbf{x}_n, T \mathbf{x}_n\ket$ since $\bra T \mathbf{x}_n, \mathbf{x}_n\ket$ is real. So we must have
+ \[
+ \|T \mathbf{x}_n - \lambda \mathbf{x}_n\|_H^2 \to 0.
+ \]
+ In other words,
+ \[
+ \mathbf{x}_n \to \frac{1}{\lambda} \mathbf{y}.
+ \]
+ Finally, we show $\mathbf{y}$ is an eigenvector with eigenvalue $\lambda$. Note that $\mathbf{y} \not= \mathbf{0}$, since $\|\mathbf{y}\|_H = \lim_{n \to \infty} \|\lambda \mathbf{x}_n\|_H = \lambda \not= 0$. Moreover,
+ \[
+ T \mathbf{y} = \lim_{n \to \infty} T(\lambda \mathbf{x}_n) = \lambda \mathbf{y}.
+ \]
+ The case where $\bra \mathbf{x}_n, T \mathbf{x}_n\ket \to -\lambda$ is entirely analogous. In this case, $-\lambda$ is an eigenvalue. The proof is exactly the same, apart from some switching of signs.
+\end{proof}
+
+Finally, we can prove the last part of the spectral theorem.
+\begin{prop}
+ Let $H$ be an infinite dimensional Hilbert space and $T: H \to H$ a compact self-adjoint operator. Then
+ \[
+ T = \sum_{i = 1}^N \lambda_i P_{E_{\lambda_i}}.
+ \]
+\end{prop}
+
+\begin{proof}
+ Let
+ \[
+ U = \spn\{E_{\lambda_1}, E_{\lambda_2}, \cdots\}.
+ \]
+ Firstly, we clearly have
+ \[
+ T|_U = \sum_{i = 1}^N \lambda_i P_{E_{\lambda_i}}.
+ \]
+ This is since any $\mathbf{x} \in U$ can be written as
+ \[
+ \mathbf{x} = \sum_{i = 1}^N P_{E_{\lambda_i}} \mathbf{x}.
+ \]
+ Less trivially, this is also true for $\bar{U}$, i.e.
+ \[
+ T|_{\bar{U}} = \sum_{i = 1}^N \lambda_i P_{E_{\lambda_i}},
+ \]
+ but this is also clear from definition once we stare at it hard enough.
+
+ We also know that
+ \[
+ H = \bar{U} \oplus U^\perp.
+ \]
+ It thus suffices to show that
+ \[
+ T|_{U^\perp} = 0.
+ \]
+ But since $T|_{U^\perp}$ has no non-zero eigenvalues, this follows from our previous proposition. So done.
+\end{proof}
+\end{document}
diff --git a/books/cam/II_M/probability_and_measure.tex b/books/cam/II_M/probability_and_measure.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3d004cd01e0ad4bd73cdac9a2ce8f0d0fb4ca4b8
--- /dev/null
+++ b/books/cam/II_M/probability_and_measure.tex
@@ -0,0 +1,4410 @@
+\documentclass[a4paper]{article}
+\def\npart {II}
+\def\nterm {Michaelmas}
+\def\nyear {2016}
+\def\nlecturer {J.\ Miller}
+\def\ncourse {Probability and Measure}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\noindent\emph{Analysis II is essential}
+\vspace{10pt}
+
+\noindent Measure spaces, $\sigma$-algebras, $\pi$-systems and uniqueness of extension, statement *and proof* of Carath\'eodory's extension theorem. Construction of Lebesgue measure on $\R$. The Borel $\sigma$-algebra of $\R$. Existence of non-measurable subsets of $\R$. Lebesgue-Stieltjes measures and probability distribution functions. Independence of events, independence of $\sigma$-algebras. The Borel--Cantelli lemmas. Kolmogorov's zero-one law.\hspace*{\fill}[6]
+
+\vspace{5pt}
+\noindent Measurable functions, random variables, independence of random variables. Construction of the integral, expectation. Convergence in measure and convergence almost everywhere. Fatou's lemma, monotone and dominated convergence, differentiation under the integral sign. Discussion of product measure and statement of Fubini's theorem.\hspace*{\fill}[6]
+
+\vspace{5pt}
+\noindent Chebyshev's inequality, tail estimates. Jensen's inequality. Completeness of $L^p$ for $1 \leq p \leq \infty$. The H\"older and Minkowski inequalities, uniform integrability.\hspace*{\fill}[4]
+
+\vspace{5pt}
+\noindent $L^2$ as a Hilbert space. Orthogonal projection, relation with elementary conditional probability. Variance and covariance. Gaussian random variables, the multivariate normal distribution.\hspace*{\fill}[2]
+
+\vspace{5pt}
+\noindent The strong law of large numbers, proof for independent random variables with bounded fourth moments. Measure preserving transformations, Bernoulli shifts. Statements *and proofs* of maximal ergodic theorem and Birkhoff's almost everywhere ergodic theorem, proof of the strong law.\hspace*{\fill}[4]
+
+\vspace{5pt}
+\noindent The Fourier transform of a finite measure, characteristic functions, uniqueness and inversion. Weak convergence, statement of L\'evy's convergence theorem for characteristic functions. The central limit theorem.\hspace*{\fill}[2]%
+}
+
+\tableofcontents
+\setcounter{section}{-1}
+\section{Introduction}
+In measure theory, the main idea is that we want to assign ``sizes'' to different sets. For example, we might think $[0, 2] \subseteq \R$ has size $2$, while perhaps $\Q \subseteq \R$ has size $0$. This is known as a \emph{measure}. One of the main applications of a measure is that we can use it to come up with a new definition of an integral. The idea is very simple, but it is going to be very powerful mathematically.
+
+Recall that if $f: [0, 1] \to \R$ is continuous, then the Riemann integral of $f$ is defined as follows:
+\begin{enumerate}
+ \item Take a partition $0 = t_0 < t_1 < \cdots < t_n = 1$ of $[0, 1]$.
+ \item Consider the Riemann sum
+ \[
+ \sum_{j = 1}^n f(t_j) (t_j - t_{j - 1})
+ \]
+ \item The Riemann integral is
+ \[
+ \int f = \text{Limit of Riemann sums as the mesh size of the partition }\to 0.
+ \]
+\end{enumerate}
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 5) node [above] {$y$};
+
+ \draw [domain=-1:5] plot (\x, {(\x + 1)*(\x + 1)/10 + 1});
+
+ \draw (0.5, 0) node [below] {$0$} -- (0.5, 1.225) -- (1, 1.225);
+ \draw (1, 0) node [below] {$t_1$} -- (1, 1.4) -- (1.5, 1.4);
+ \draw (1.5, 0) node [below] {$t_2$} -- (1.5, 1.625) -- (2, 1.625) -- (2, 0) node [below] {$t_3$};
+ \node at (2.4, 0.8) {$\cdots$};
+ \draw (2.75, 0) node [below] {$t_k$} -- (2.75, 2.40625) -- (3.25, 2.40625) -- (3.25, 0) node [anchor = north west] {$\!\!\!\!\!t_{k + 1}\cdots$};
+ \node at (3.65, 1.2) {$\cdots$};
+ \draw (4, 0) -- (4, 3.5) -- (4.5, 3.5) -- (4.5, 0) node [below] {$1$};
+ \end{tikzpicture}
+\end{center}
+The idea of measure theory is to use a different approximation scheme. Instead of partitioning the domain, we partition the range of the function. We fix some numbers $r_0 < r_1 < r_2 < \cdots < r_n$.
+
+We then approximate the integral of $f$ by
+\[
+ \sum_{j = 1}^n r_j \cdot (\text{``size of }f^{-1}([r_{j - 1}, r_j])\text{''}).
+\]
+We then define the integral as the limit of approximations of this type as the mesh size of the partition $\to 0$.
+\begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1, 0) -- (5, 0) node [right] {$x$};
+ \draw [->] (0, -0.5) -- (0, 5) node [above] {$y$};
+
+ \draw [domain=-1:5] plot (\x, {(\x + 1)*(\x + 1)/10 + 1});
+
+ \draw (0.5, 0) rectangle (4.5, 0.7);
+ \draw (0.5, 0.7) rectangle (4.5, 1.225);
+ \draw (1.5, 1.225) rectangle (4.5, 1.625);
+
+ \draw (2.75, 1.625) rectangle (4.5, 2.40625);
+ \draw (3.65, 2.40625) rectangle (4.5, 3.16225);
+ \end{tikzpicture}
+\end{center}
+We can make an analogy with bankers --- if a Riemann banker is given a stack of money, they would just add the values of the money in order. A measure-theoretic banker will sort the bank notes according to type, and then find the total value by multiplying the number of each type by its value, and adding up.
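The two approximation schemes can be compared numerically. The following Python sketch is illustrative only (not part of the notes): it approximates $\int_0^1 x^2 \;\d x = 1/3$ once by partitioning the domain, and once by partitioning the range, with the ``size'' of each preimage crudely estimated by counting equally spaced sample points. It assumes $f$ maps $[0, 1]$ into $[0, 1]$.

```python
def riemann_sum(f, n):
    # partition the DOMAIN [0, 1] into n pieces; sum f(t_j) * (t_j - t_{j-1})
    return sum(f(j / n) * (1 / n) for j in range(1, n + 1))

def range_sum(f, n, samples=100_000):
    # partition the RANGE [0, 1] into n levels r_0 < r_1 < ... < r_n, and
    # weight the lower endpoint r_j of each level by the "size" of the
    # preimage f^{-1}([r_j, r_{j+1})), estimated here by sampling
    counts = [0] * (n + 1)
    for k in range(samples):
        x = k / samples
        j = min(int(f(x) * n), n)  # which level set f(x) falls into
        counts[j] += 1
    return sum((j / n) * (c / samples) for j, c in enumerate(counts))

f = lambda x: x * x
assert abs(riemann_sum(f, 1000) - 1 / 3) < 1e-2
assert abs(range_sum(f, 1000) - 1 / 3) < 1e-2
```

Both schemes converge to the same value here; the point of the measure-theoretic scheme is that it still makes sense when the domain is not an interval, as long as the preimages have a well-defined ``size''.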
+
+Why would we want to do so? It turns out this leads to a much more general theory of integration on much more general spaces. Instead of integrating functions $[a, b] \to \R$ only, we can replace the domain with any measure space. Even in the context of $\R$, this theory of integration is much much more powerful than the Riemann sum, and can integrate a much wider class of functions. While you probably don't care about those pathological functions anyway, being able to integrate more things means that we can state more general theorems about integration without having to put in funny conditions.
+
+That was all about measures. What about probability? It turns out the concepts we develop for measures correspond exactly to many familiar notions from probability if we restrict it to the particular case where the total measure of the space is $1$. Thus, when studying measure theory, we are also secretly studying probability!
+
+\section{Measures}
+In the course, we will write $f_n \nearrow f$ for ``$f_n$ converges to $f$ monotonically increasingly'', and $f_n \searrow f$ similarly. Unless otherwise specified, convergence is taken to be pointwise.
+
+\subsection{Measures}
+The starting point of all these is to come up with a function that determines the ``size'' of a given set, known as a \emph{measure}. It turns out we cannot sensibly define a size for \emph{all} subsets of $[0, 1]$. Thus, we need to restrict our attention to a collection of ``nice'' subsets. Specifying which subsets are ``nice'' would involve specifying a $\sigma$-algebra.
+
+This section is mostly technical.
+
+\begin{defi}[$\sigma$-algebra]\index{$\sigma$-algebra}\index{sigma-algebra}
+ Let $E$ be a set. A \emph{$\sigma$-algebra} $\mathcal{E}$ on $E$ is a collection of subsets of $E$ such that
+ \begin{enumerate}
+ \item $\emptyset \in \mathcal{E}$.
+ \item $A \in \mathcal{E}$ implies that $A^C = E \setminus A \in \mathcal{E}$.
+ \item For any sequence $(A_n)$ in $\mathcal{E}$, we have that
+ \[
+ \bigcup_n A_n \in \mathcal{E}.
+ \]
+ \end{enumerate}
+ The pair $(E, \mathcal{E})$ is called a \emph{measurable space}.\index{measurable space}
+\end{defi}
+Note that the axioms imply that $\sigma$-algebras are also closed under countable intersections, since $\bigcap_n A_n = \left(\bigcup_n A_n^C\right)^C$.
+
+\begin{defi}[Measure]\index{measure}
+ A \emph{measure} on a measurable space $(E, \mathcal{E})$ is a function $\mu: \mathcal{E} \to [0, \infty]$ such that
+ \begin{enumerate}
+ \item $\mu(\emptyset) = 0$
+ \item Countable additivity:\index{countable additivity} if $(A_n)$ is a disjoint sequence in $\mathcal{E}$, then
+ \[
+ \mu\left(\bigcup_n A_n\right) = \sum_{n = 1}^\infty \mu(A_n).
+ \]
+ \end{enumerate}
+\end{defi}
+
+\begin{eg}
+ Let $E$ be any countable set, and $\mathcal{E} = P(E)$ be the set of all subsets of $E$. A \emph{mass function}\index{mass function} is any function $m: E \to [0, \infty]$. We can then define a measure by setting
+ \[
+ \mu(A) = \sum_{x \in A} m(x).
+ \]
+ In particular, if we put $m(x) = 1$ for all $x \in E$, then we obtain the \emph{counting measure}\index{counting measure}.
+\end{eg}
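As a quick illustration (a Python sketch with a hypothetical mass function, not taken from the notes), a mass function determines a measure by summation, and the constant mass function $1$ gives the counting measure:

```python
def measure_from_mass(m):
    # mu(A) = sum of m(x) over x in A, for finite sets A
    return lambda A: sum(m(x) for x in A)

mu = measure_from_mass(lambda x: 2.0 ** (-x))  # an example mass function
counting = measure_from_mass(lambda x: 1)      # the counting measure

A, B = {1, 2}, {3, 4}                          # disjoint sets
assert mu(A | B) == mu(A) + mu(B)              # additivity on disjoint sets
assert counting({1, 5, 9}) == 3
```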
+
+Countable spaces are nice, because we can always take $\mathcal{E} = P(E)$, and the measure can be defined on all possible subsets. However, for ``bigger'' spaces, we have to be more careful. The set of all subsets is often ``too large''. We will see a concrete and also important example of this later.
+
+In general, $\sigma$-algebras are often described on large spaces in terms of a smaller set, known as the \emph{generating sets}\index{generating set}.
+\begin{defi}[Generator of $\sigma$-algebra]\index{generator of $\sigma$-algebra}
+ Let $E$ be a set, and let $\mathcal{A} \subseteq P(E)$ be a collection of subsets of $E$. We define
+ \[
+ \sigma(\mathcal{A}) = \{A \subseteq E: A \in \mathcal{E}\text{ for all $\sigma$-algebras $\mathcal{E}$ that contain $\mathcal{A}$}\}.
+ \]
+ In other words, $\sigma(\mathcal{A})$ is the smallest $\sigma$-algebra that contains $\mathcal{A}$. This is known as the $\sigma$-algebra \emph{generated by} $\mathcal{A}$.
+\end{defi}
+
+\begin{eg}
+ Take $E = \Z$, and $\mathcal{A} = \{\{x\}: x \in \Z\}$. Then $\sigma(\mathcal{A})$ is just $P(E)$, since every subset of $E$ can be written as a countable union of singletons.
+\end{eg}
+
+\begin{eg}
+ Take $E = \Z$, and let $\mathcal{A} = \{ \{x, x + 1, x + 2, x + 3, \cdots\}: x \in E\}$. Then again $\sigma(\mathcal{A})$ is the set of all subsets of $E$, since each singleton can be written as $\{x\} = \{x, x + 1, \cdots\} \setminus \{x + 1, x + 2, \cdots\}$.
+\end{eg}
+
+The following is the most important $\sigma$-algebra in the course:
+\begin{defi}[Borel $\sigma$-algebra]\index{Borel $\sigma$-algebra}
+ Let $E = \R$, and $\mathcal{A} = \{U \subseteq \R: U \text{ is open}\}$. Then $\sigma(\mathcal{A})$ is known as the \emph{Borel $\sigma$-algebra}, which is \emph{not} the set of all subsets of $\R$.
+
+ We can equivalently define this by $\tilde{\mathcal{A}} = \{(a, b): a < b, a, b \in \Q\}$. Then $\sigma(\tilde{\mathcal{A}})$ is also the Borel $\sigma$-algebra.
+\end{defi}
+
+Often, we would like to prove results that allow us to deduce properties about the $\sigma$-algebra just by checking them on a generating set. However, usually, we cannot just check them on an arbitrary generating set. Instead, the generating set has to satisfy some nice closure properties. We are now going to introduce a number of different definitions that you need not aim to remember (except when exams are near).
+
+\begin{defi}[$\pi$-system]\index{pi-system}\index{$\pi$-system}
+ Let $\mathcal{A}$ be a collection of subsets of $E$. Then $\mathcal{A}$ is called a \emph{$\pi$-system} if
+ \begin{enumerate}
+ \item $\emptyset \in \mathcal{A}$
+ \item If $A, B \in \mathcal{A}$, then $A \cap B \in \mathcal{A}$.
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[d-system]\index{d-system}
+ Let $\mathcal{A}$ be a collection of subsets of $E$. Then $\mathcal{A}$ is called a \emph{d-system} if
+ \begin{enumerate}
+ \item $E \in \mathcal{A}$
+ \item If $A, B \in \mathcal{A}$ and $A \subseteq B$, then $B \setminus A \in \mathcal{A}$
+ \item For all increasing sequences $(A_n)$ in $\mathcal{A}$, we have that $\bigcup_n A_n \in \mathcal{A}$.
+ \end{enumerate}
+\end{defi}
+The point of d-systems and $\pi$-systems is that they separate the axioms of a $\sigma$-algebra into two parts. More precisely, we have
+\begin{prop}
+ A collection $\mathcal{A}$ is a $\sigma$-algebra if and only if it is both a $\pi$-system and a $d$-system.
+\end{prop}
+This follows rather straightforwardly from the definitions.
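For a finite base set, the proposition can even be checked exhaustively, since an increasing sequence in a finite collection is eventually constant, so closure under increasing unions is automatic. A minimal Python sketch (the helper names are ours, not the course's), checking all $2^8$ families of subsets of a $3$-element set:

```python
from itertools import combinations

E = frozenset({0, 1, 2})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

SUBSETS = powerset(E)

def is_pi(F):
    # pi-system: contains the empty set, closed under intersection
    return frozenset() in F and all(a & b in F for a in F for b in F)

def is_d(F):
    # d-system: contains E, closed under differences of nested pairs
    # (increasing unions are automatic over a finite base set)
    return E in F and all(b - a in F for a in F for b in F if a <= b)

def is_sigma(F):
    # on a finite set, countable unions reduce to pairwise unions
    return (frozenset() in F
            and all(E - a in F for a in F)
            and all(a | b in F for a in F for b in F))

# verify the proposition on every family of subsets of E
assert all(is_sigma(F) == (is_pi(F) and is_d(F)) for F in powerset(SUBSETS))
```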
+
+The following definitions are also useful:
+\begin{defi}[Ring]\index{ring}
+ A collection of subsets $\mathcal{A}$ is a \emph{ring} on $E$ if $\emptyset \in \mathcal{A}$ and for all $A, B \in \mathcal{A}$, we have $B \setminus A \in \mathcal{A}$ and $A \cup B \in \mathcal{A}$.
+\end{defi}
+
+\begin{defi}[Algebra]\index{algebra}
+ A collection of subsets $\mathcal{A}$ is an \emph{algebra} on $E$ if $\emptyset \in \mathcal{A}$, and for all $A, B \in \mathcal{A}$, we have $A^C \in \mathcal{A}$ and $A \cup B \in \mathcal{A}$.
+\end{defi}
+So an algebra is like a $\sigma$-algebra, but it is just closed under finite unions only, rather than countable unions.
+
+While the names $\pi$-system and $d$-system are rather arbitrary, we can make some sense of the names ``ring'' and ``algebra''. Indeed, a ring forms a ring (without unity) in the algebraic sense with symmetric difference as ``addition'' and intersection as ``multiplication''. Then the empty set acts as the additive identity, and $E$, if present, acts as the multiplicative identity. Similarly, an algebra is a boolean subalgebra under the boolean algebra $P(E)$.
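The remark about symmetric difference and intersection can be verified mechanically on a small example. Here is a Python sketch (purely illustrative) checking the relevant ring identities over all subsets of a $4$-element set:

```python
from itertools import combinations

E = frozenset(range(4))
subsets = [frozenset(c) for r in range(len(E) + 1) for c in combinations(E, r)]

empty = frozenset()
for A in subsets:
    assert A ^ empty == A            # empty set is the additive identity
    assert A ^ A == empty            # every element is its own additive inverse
    assert A & E == A                # E acts as the multiplicative identity
    for B in subsets:
        assert A ^ B == B ^ A        # addition (symmetric difference) commutes
        for C in subsets:
            assert (A ^ B) ^ C == A ^ (B ^ C)        # addition associates
            assert A & (B ^ C) == (A & B) ^ (A & C)  # multiplication distributes
```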
+
+A very important lemma about these things is Dynkin's lemma:
+\begin{lemma}[Dynkin's $\pi$-system lemma]\index{Dynkin's $\pi$-system lemma}
+ Let $\mathcal{A}$ be a $\pi$-system. Then any d-system which contains $\mathcal{A}$ contains $\sigma(\mathcal{A})$.
+\end{lemma}
+This will be very useful in the future. If we want to show that all elements of $\sigma(\mathcal{A})$ satisfy a particular property for some generating $\pi$-system $\mathcal{A}$, we just have to show that the elements of $\mathcal{A}$ satisfy that property, and that the collection of things that satisfy the property form a $d$-system.
+
+While this use case might seem rather contrived, it is surprisingly common when we have to prove things.
+
+\begin{proof}
+ Let $\mathcal{D}$ be the intersection of all d-systems containing $\mathcal{A}$, i.e.\ the smallest d-system containing $\mathcal{A}$. We show that $\mathcal{D}$ contains $\sigma(\mathcal{A})$. To do so, we will show that $\mathcal{D}$ is a $\pi$-system, hence a $\sigma$-algebra.
+
+ There are two steps to the proof, both of which are straightforward verifications:
+ \begin{enumerate}
+ \item We first show that if $B \in \mathcal{D}$ and $A \in \mathcal{A}$, then $B \cap A \in \mathcal{D}$.
+ \item We then show that if $A, B \in \mathcal{D}$, then $A \cap B \in \mathcal{D}$.
+ \end{enumerate}
+ Then the result immediately follows from the second part.
+
+ We let
+ \[
+ \mathcal{D}' = \{ B \in \mathcal{D}: B \cap A \in \mathcal{D}\text{ for all }A \in \mathcal{A}\}.
+ \]
+ We note that $\mathcal{D}' \supseteq \mathcal{A}$ because $\mathcal{A}$ is a $\pi$-system, and is hence closed under intersections. We check that $\mathcal{D}'$ is a d-system. It is clear that $E \in \mathcal{D}'$. If we have $B_1, B_2 \in \mathcal{D}'$, where $B_1 \subseteq B_2$, then for any $A \in \mathcal{A}$, we have
+ \[
+ (B_2 \setminus B_1) \cap A = (B_2 \cap A) \setminus (B_1 \cap A).
+ \]
+ By definition of $\mathcal{D}'$, we know $B_2 \cap A$ and $B_1 \cap A$ are elements of $\mathcal{D}$. Since $\mathcal{D}$ is a d-system, we know this difference is in $\mathcal{D}$. So $B_2 \setminus B_1 \in \mathcal{D}'$.
+
+ Finally, suppose that $(B_n)$ is an increasing sequence in $\mathcal{D}'$, with $B = \bigcup B_n$. Then for every $A \in \mathcal{A}$, we have that
+ \[
+ \left(\bigcup B_n\right) \cap A = \bigcup (B_n \cap A) = B \cap A \in \mathcal{D}.
+ \]
+ Therefore $B \in \mathcal{D}'$.
+
+ Therefore $\mathcal{D}'$ is a d-system contained in $\mathcal{D}$, which also contains $\mathcal{A}$. By our choice of $\mathcal{D}$, we know $\mathcal{D}' = \mathcal{D}$.
+
+ We now let
+ \[
+ \mathcal{D}'' = \{B \in \mathcal{D}: B \cap A \in \mathcal{D}\text{ for all }A \in \mathcal{D}\}.
+ \]
+ Since $\mathcal{D}' = \mathcal{D}$, we again have $\mathcal{A} \subseteq \mathcal{D}''$, and the same argument as above implies that $\mathcal{D}''$ is a d-system which is between $\mathcal{A}$ and $\mathcal{D}$. But the only way that can happen is if $\mathcal{D}'' = \mathcal{D}$, and this implies that $\mathcal{D}$ is a $\pi$-system.
+\end{proof}
+
+After defining all sorts of things that are ``weaker versions'' of $\sigma$-algebras, we now define a bunch of measure-like objects that satisfy fewer properties. Again, no one really remembers these definitions:
+
+\begin{defi}[Set function]
+ Let $\mathcal{A}$ be a collection of subsets of $E$ with $\emptyset \in \mathcal{A}$. A \term{set function} is a function $\mu: \mathcal{A} \to [0, \infty]$ such that $\mu(\emptyset) = 0$.
+\end{defi}
+
+\begin{defi}[Increasing set function]\index{Increasing set function}\index{set function!increasing}
+ A set function is \emph{increasing} if it has the property that for all $A, B \in \mathcal{A}$ with $A \subseteq B$, we have $\mu(A) \leq \mu(B)$.
+\end{defi}
+\begin{defi}[Additive set function]\index{Additive set function}\index{set function!additive}
+ A set function is \emph{additive} if whenever $A, B \in \mathcal{A}$ and $A \cup B \in \mathcal{A}$, $A \cap B = \emptyset$, then $\mu(A \cup B) = \mu(A) + \mu(B)$.
+\end{defi}
+
+\begin{defi}[Countably additive set function]\index{countably additive set function}\index{set function!countably additive}
+ A set function is \emph{countably additive} if whenever $(A_n)$ is a sequence of disjoint sets in $\mathcal{A}$ with $\bigcup_n A_n \in \mathcal{A}$, then
+ \[
+ \mu\left(\bigcup_n A_n \right) = \sum_n \mu(A_n).
+ \]
+\end{defi}
+
+Under these definitions, a measure is just a countably additive set function defined on a $\sigma$-algebra.
+
+\begin{defi}[Countably subadditive set function]\index{countably subadditive set function}\index{set function!countably subadditive}
+ A set function is \emph{countably subadditive} if whenever $(A_n)$ is a sequence of sets in $\mathcal{A}$ with $\bigcup_n A_n \in \mathcal{A}$, then
+ \[
+ \mu\left(\bigcup_n A_n\right) \leq \sum_n \mu(A_n).
+ \]
+\end{defi}
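As a tiny sanity check (a Python sketch using the counting measure on finite sets), subadditivity can be strict precisely when the sets overlap:

```python
mu = len                       # counting measure on finite sets
A, B = {1, 2, 3}, {3, 4}       # overlapping sets: strict inequality
assert mu(A | B) == 4
assert mu(A) + mu(B) == 5
assert mu(A | B) < mu(A) + mu(B)

C, D = {1, 2}, {3, 4}          # disjoint sets: equality (additivity)
assert mu(C | D) == mu(C) + mu(D)
```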
+
+The big theorem that allows us to construct measures is the Caratheodory extension theorem. In particular, this will help us construct the \emph{Lebesgue measure} on $\R$.
+\begin{thm}[Caratheodory extension theorem]\index{Caratheodory extension theorem}
+ Let $\mathcal{A}$ be a ring on $E$, and $\mu$ a countably additive set function on $\mathcal{A}$. Then $\mu$ extends to a measure on the $\sigma$-algebra generated by $\mathcal{A}$.
+\end{thm}
+
+\begin{proof}(non-examinable)
+ We start by defining what we want our measure to be. For $B \subseteq E$, we set
+ \[
+ \mu^*(B) = \inf\left\{\sum_n\mu(A_n): (A_n) \in \mathcal{A}\text{ and } B\subseteq \bigcup A_n\right\}.
+ \]
+ If it happens that there is no such sequence, we set this to be $\infty$. This set function is known as the \term{outer measure}. It is clear that $\mu^*(\emptyset) = 0$, and that $\mu^*$ is increasing.
+
+ We say a set $A \subseteq E$ is $\mu^*$-measurable if
+ \[
+ \mu^*(B) = \mu^*(B \cap A) + \mu^*(B \cap A^C)
+ \]
+ for all $B \subseteq E$. We let
+ \[
+ \mathcal{M} = \{\text{$\mu^*$-measurable sets}\}.
+ \]
+ We will show the following:
+ \begin{enumerate}
+ \item $\mathcal{M}$ is a $\sigma$-algebra containing $\mathcal{A}$.
+ \item $\mu^*$ is a measure on $\mathcal{M}$ with $\mu^*|_{\mathcal{A}} = \mu$.
+ \end{enumerate}
+ Note that it is not true in general that $\mathcal{M} = \sigma(\mathcal{A})$. However, we will always have $\mathcal{M} \supseteq \sigma(\mathcal{A})$.
+
+ We are going to break this up into five nice bite-size chunks.
+
+ \begin{claim}
+ $\mu^*$ is countably subadditive.
+ \end{claim}
+ Suppose $B\subseteq \bigcup_n B_n$. We need to show that $\mu^*(B) \leq \sum_n \mu^*(B_n)$. We can wlog assume that $\mu^*(B_n)$ is finite for all $n$, or else the inequality is trivial. Let $\varepsilon > 0$. Then by definition of the outer measure, for each $n$, we can find a sequence $(B_{n, m})_{m = 1}^\infty$ in $\mathcal{A}$ with the property that
+ \[
+ B_n \subseteq \bigcup_m B_{n, m}
+ \]
+ and
+ \[
+ \mu^*(B_n) + \frac{\varepsilon}{2^n} \geq \sum_m \mu(B_{n, m}).
+ \]
+ Then we have
+ \[
+ B \subseteq \bigcup_n B_n \subseteq \bigcup_{n, m}B_{n, m}.
+ \]
+ Thus, by definition, we have
+ \[
+ \mu^*(B) \leq \sum_{n, m}\mu(B_{n, m}) \leq \sum_n \left(\mu^*(B_n) + \frac{\varepsilon}{2^n}\right) = \varepsilon + \sum_n \mu^*(B_n).
+ \]
+ Since $\varepsilon$ was arbitrary, we are done.
+ \begin{claim}
+ $\mu^*$ agrees with $\mu$ on $\mathcal{A}$.
+ \end{claim}
+ In the first example sheet, we will show that if $\mathcal{A}$ is a ring and $\mu$ is a countably additive set function on $\mathcal{A}$, then $\mu$ is in fact countably subadditive and increasing.
+
+ Assuming this, suppose that $A, (A_n)$ are in $\mathcal{A}$ and $A \subseteq \bigcup_n A_n$. Then by subadditivity, we have
+ \[
+ \mu(A) \leq \sum_n \mu(A \cap A_n) \leq \sum_n \mu(A_n),
+ \]
+ using that $\mu$ is countably subadditive and increasing. Note that we have to do this in two steps, rather than just applying countable subadditivity, since we did not assume that $\bigcup_n A_n \in \mathcal{A}$. Taking the infimum over all sequences, we have
+ \[
+ \mu(A) \leq \mu^*(A).
+ \]
+ Also, we see by definition that $\mu(A) \geq \mu^*(A)$, since $A$ covers $A$. So we get that $\mu(A) = \mu^*(A)$ for all $A \in \mathcal{A}$.
+
+ \begin{claim}
+ $\mathcal{M}$ contains $\mathcal{A}$.
+ \end{claim}
+ Suppose that $A \in \mathcal{A}$ and $B \subseteq E$. We need to show that
+ \[
+ \mu^*(B) = \mu^*(B \cap A) + \mu^*(B \cap A^C).
+ \]
+ Since $\mu^*$ is countably subadditive, we immediately have $\mu^*(B) \leq \mu^*(B \cap A) + \mu^*(B \cap A^C)$. For the other inequality, we first observe that it is trivial if $\mu^*(B)$ is infinite. If it is finite, then by definition, given $\varepsilon > 0$, we can find some $(B_n)$ in $\mathcal{A}$ such that $B \subseteq \bigcup_n B_n$ and
+ \[
+ \mu^*(B) + \varepsilon \geq \sum_n \mu(B_n).
+ \]
+ Then we have
+ \begin{align*}
+ B \cap A &\subseteq \bigcup_n (B_n \cap A)\\
+ B \cap A^C &\subseteq \bigcup_n (B_n \cap A^C)
+ \end{align*}
+ We notice that $B_n \cap A^C = B_n \setminus A \in \mathcal{A}$. Thus, by definition of $\mu^*$, we have
+ \begin{align*}
+ \mu^* (B \cap A) + \mu^*(B \cap A^c) &\leq \sum_n \mu(B_n \cap A) + \sum_n \mu(B_n \cap A^C)\\
+ &= \sum_n (\mu(B_n \cap A) + \mu(B_n \cap A^C))\\
+ &= \sum_n \mu(B_n)\\
+ &\leq \mu^*(B) + \varepsilon.
+ \end{align*}
+ Since $\varepsilon$ was arbitrary, the result follows.
+
+ \begin{claim}
+ We show that $\mathcal{M}$ is an algebra.
+ \end{claim}
+ We first show that $E \in \mathcal{M}$. This is true since we obviously have
+ \[
+ \mu^*(B) = \mu^*(B \cap E) + \mu^*(B \cap E^C)
+ \]
+ for all $B \subseteq E$.
+
+ Next, note that if $A \in \mathcal{M}$, then by definition we have, for all $B$,
+ \[
+ \mu^*(B) = \mu^*(B \cap A) + \mu^*(B \cap A^C).
+ \]
+ Now note that this definition is symmetric in $A$ and $A^C$. So we also have $A^C \in \mathcal{M}$.
+
+ Finally, we have to show that $\mathcal{M}$ is closed under intersection (which is equivalent to being closed under union when we have complements). Suppose $A_1, A_2 \in \mathcal{M}$ and $B \subseteq E$. Then we have
+ \begin{align*}
+ \mu^*(B) ={}& \mu^*(B \cap A_1) + \mu^*(B \cap A_1^C)\\
+ ={}& \mu^*(B \cap A_1 \cap A_2) + \mu^*(B \cap A_1 \cap A_2^C) + \mu^*(B \cap A_1^C)\\
+ ={}& \mu^*(B \cap (A_1 \cap A_2)) + \mu^*(B \cap (A_1\cap A_2)^C \cap A_1) \\
+ &+ \mu^*(B \cap (A_1 \cap A_2)^C \cap A_1^C)\\
+ ={}& \mu^*(B \cap (A_1 \cap A_2)) + \mu^*(B \cap (A_1 \cap A_2)^C).
+ \end{align*}
+ So we have $A_1 \cap A_2 \in \mathcal{M}$. So $\mathcal{M}$ is an algebra.
+ \begin{claim}
+ $\mathcal{M}$ is a $\sigma$-algebra, and $\mu^*$ is a measure on $\mathcal{M}$.
+ \end{claim}
+ To show that $\mathcal{M}$ is a $\sigma$-algebra, we need to show that it is closed under countable unions. We let $(A_n)$ be a disjoint collection of sets in $\mathcal{M}$, then we want to show that $A = \bigcup_n A_n \in \mathcal{M}$ and $\mu^*(A) = \sum_n \mu^*(A_n)$.
+
+ Suppose that $B \subseteq E$. Then we have
+ \begin{align*}
+ \mu^*(B) &= \mu^*(B \cap A_1) + \mu^*(B \cap A_1^C)\\
+ \intertext{Using the fact that $A_2 \in \mathcal{M}$ and $A_1 \cap A_2 =\emptyset$, we have}
+ &= \mu^*(B \cap A_1) + \mu^*(B \cap A_2) + \mu^*(B \cap A_1^C \cap A_2^C)\\
+ &= \cdots\\
+ &= \sum_{i = 1}^n \mu^*(B\cap A_i) + \mu^*(B \cap A_1^C \cap \cdots \cap A_n^C)\\
+ &\geq \sum_{i = 1}^n \mu^*(B \cap A_i) + \mu^*(B \cap A^C).
+ \end{align*}
+ Taking the limit as $n \to \infty$, we have
+ \[
+ \mu^*(B) \geq \sum_{i = 1}^\infty \mu^*(B \cap A_i) + \mu^*(B \cap A^C).
+ \]
+ By the countable-subadditivity of $\mu^*$, we have
+ \[
+ \mu^*(B \cap A) \leq \sum_{i = 1}^\infty \mu^*(B \cap A_i).
+ \]
+ Thus we obtain
+ \[
+ \mu^*(B) \geq \mu^*(B \cap A) + \mu^*(B \cap A^C).
+ \]
+ By countable subadditivity, we also have inequality in the other direction. So equality holds. So $A \in \mathcal{M}$. So $\mathcal{M}$ is a $\sigma$-algebra.
+
+ To see that $\mu^*$ is a measure on $\mathcal{M}$, note that the above implies that
+ \[
+ \mu^*(B) = \sum_{i = 1}^\infty \mu^*(B \cap A_i) + \mu^*(B \cap A^C).
+ \]
+ Taking $B = A$, this gives
+ \[
+ \mu^*(A) = \sum_{i = 1}^\infty \mu^*(A \cap A_i) + \mu^*(A \cap A^C) = \sum_{i = 1}^\infty \mu^*(A_i).\qedhere
+ \]
+\end{proof}
+Note that when $\mathcal{A}$ itself is actually a $\sigma$-algebra, the outer measure can be simply written as
+\[
+ \mu^*(B) = \inf \{\mu(A): A \in \mathcal{A}, B \subseteq A\}.
+\]
+Caratheodory gives us the existence of some measure extending the set function on $\mathcal{A}$. Could there be many? In general, there could. However, in the special case where the measure is finite, we do get uniqueness.
+\begin{thm}
+ Suppose that $\mu_1, \mu_2$ are measures on $(E, \mathcal{E})$ with $\mu_1(E) = \mu_2(E) < \infty$. If $\mathcal{A}$ is a $\pi$-system with $\sigma(\mathcal{A}) = \mathcal{E}$, and $\mu_1$ agrees with $\mu_2$ on $\mathcal{A}$, then $\mu_1 = \mu_2$.
+\end{thm}
+
+\begin{proof}
+ Let
+ \[
+ \mathcal{D} = \{A \in \mathcal{E}: \mu_1(A) = \mu_2(A)\}.
+ \]
+ We know that $\mathcal{D} \supseteq \mathcal{A}$. By Dynkin's lemma, it suffices to show that $\mathcal{D}$ is a d-system. The things to check are:
+ \begin{enumerate}
+ \item $E \in \mathcal{D}$ --- this follows by assumption.
+ \item If $A, B \in \mathcal{D}$ with $A \subseteq B$, then $B \setminus A \in \mathcal{D}$. Indeed, we have the equations
+ \begin{align*}
+ \mu_1(B) &= \mu_1(A) + \mu_1(B \setminus A) < \infty\\
+ \mu_2(B) &= \mu_2(A) + \mu_2(B \setminus A) < \infty.
+ \end{align*}
+ Since $\mu_1(B) = \mu_2(B)$ and $\mu_1(A) = \mu_2(A)$, we must have $\mu_1(B \setminus A) = \mu_2(B \setminus A)$.
+ \item Let $(A_n) \in \mathcal{D}$ be an increasing sequence with $\bigcup A_n = A$. Then
+ \[
+ \mu_1(A) = \lim_{n \to \infty}\mu_1(A_n) = \lim_{n \to \infty} \mu_2(A_n) = \mu_2(A).
+ \]
+ So $A \in\mathcal{D}$.\qedhere
+ \end{enumerate}
+\end{proof}
+The assumption that $\mu_1(E) = \mu_2(E) < \infty$ is necessary. The theorem does not necessarily hold without it. We can see this from a simple counterexample:
+
+\begin{eg}
+ Let $E = \Z$, and let $\mathcal{E} = P(E)$. We let
+ \[
+ \mathcal{A} = \{\{x, x+1, x+2, \cdots\}: x \in E\} \cup \{\emptyset\}.
+ \]
+ This is a $\pi$-system with $\sigma(\mathcal{A}) = \mathcal{E}$. We let $\mu_1(A)$ be the number of elements in $A$, and $\mu_2(A) = 2\mu_1(A)$. Then obviously $\mu_1 \not= \mu_2$, but they agree on $\mathcal{A}$, since $\mu_1(A) = \infty = \mu_2(A)$ for every non-empty $A \in \mathcal{A}$.
+\end{eg}
+
+\begin{defi}[Borel $\sigma$-algebra]\index{Borel $\sigma$-algebra}\index{$\mathcal{B}(E)$}
+ Let $E$ be a topological space. We define the \emph{Borel $\sigma$-algebra} as
+ \[
+ \mathcal{B}(E) = \sigma(\{U \subseteq E: U \text{ is open}\}).
+ \]
+ We write \term{$\mathcal{B}$} for $\mathcal{B}(\R)$.
+\end{defi}
+
+\begin{defi}[Borel measure and Radon measure]\index{Borel measure}
+ A measure $\mu$ on $(E, \mathcal{B}(E))$ is called a \emph{Borel measure}. If $\mu(K) < \infty$ for all $K \subseteq E$ compact, then $\mu$ is a \term{Radon measure}.
+\end{defi}
+The most important example of a Borel measure we will consider is the \emph{Lebesgue measure}.
+
+\begin{thm}
+ There exists a unique Borel measure $\mu$ on $\R$ with $\mu([a, b]) = b - a$.
+\end{thm}
+
+\begin{proof}
+ We first show uniqueness. Suppose $\tilde{\mu}$ is another measure on $\mathcal{B}$ satisfying the above property. We want to apply the previous uniqueness theorem, but our measure is not finite. So we need to carefully get around that problem.
+
+ For each $n \in \Z$, we set
+ \begin{align*}
+ \mu_n(A) &= \mu(A \cap (n, n + 1])\\
+ \tilde{\mu}_n(A) &= \tilde{\mu}(A \cap (n, n + 1])
+ \end{align*}
+ Then $\mu_n$ and $\tilde{\mu}_n$ are finite measures on $\mathcal{B}$ which agree on the $\pi$-system of intervals of the form $(a, b]$ with $a, b \in \R$, $a < b$. Therefore we have $\mu_n = \tilde{\mu}_n$ for all $n \in \Z$. Now we have
+ \[
+ \mu(A) = \sum_{n \in \Z} \mu(A \cap (n, n + 1]) = \sum_{n\in \Z}\mu_n(A) = \sum_{n \in \Z}\tilde{\mu}_n(A) = \tilde{\mu}(A)
+ \]
+ for all Borel sets $A$.
+
+ To show existence, we want to use the Caratheodory extension theorem. We let $\mathcal{A}$ be the collection of finite, disjoint unions of the form
+ \[
+ A = (a_1, b_1] \cup (a_2, b_2] \cup \cdots \cup (a_n, b_n].
+ \]
+ Then $\mathcal{A}$ is a ring of subsets of $\R$, and $\sigma(\mathcal{A}) = \mathcal{B}$ (details are to be checked on the first example sheet).
+
+ We set
+ \[
+ \mu(A) = \sum_{i = 1}^n (b_i - a_i).
+ \]
+ We note that $\mu$ is well-defined, since if
+ \[
+ A = (a_1, b_1] \cup \cdots \cup (a_n, b_n] = (\tilde{a}_1, \tilde{b}_1] \cup \cdots \cup (\tilde{a}_n, \tilde{b}_n],
+ \]
+ then
+ \[
+ \sum_{i = 1}^n (b_i - a_i) = \sum_{i = 1}^n (\tilde{b}_i - \tilde{a}_i).
+ \]
+ Also, if $A, B \in \mathcal{A}$ with $A \cap B = \emptyset$ and $A \cup B \in \mathcal{A}$, we obviously have $\mu(A \cup B) = \mu(A) + \mu(B)$. So $\mu$ is additive.
+
+ Finally, we have to show that $\mu$ is in fact countably additive. Let $(A_n)$ be a disjoint sequence in $\mathcal{A}$, and let $A = \bigcup_{n = 1}^\infty A_n \in \mathcal{A}$. Then we need to show that $\mu(A) = \sum_{n = 1}^\infty \mu(A_n)$.
+
+ Since $\mu$ is additive, we have
+ \begin{align*}
+ \mu(A) &= \mu(A_1) + \mu(A \setminus A_1) \\
+ &= \mu(A_1) + \mu(A_2) + \mu(A \setminus (A_1 \cup A_2))\\
+ &= \sum_{i = 1}^n \mu(A_i) + \mu\left(A \setminus \bigcup_{i = 1}^n A_i\right)
+ \end{align*}
+ To finish the proof, we show that
+ \[
+ \mu\left(A \setminus \bigcup_{i = 1}^n A_i\right) \to 0\text{ as }n \to \infty.
+ \]
+ We are going to reduce this to the \term{finite intersection property} of compact sets in $\R$: if $(K_n)$ is a sequence of compact sets in $\R$ with the property that $\bigcap_{m = 1}^n K_m \not= \emptyset$ for all $n$, then $\bigcap_{m = 1}^\infty K_m \not= \emptyset$.
+
+ We first introduce some new notation. We let
+ \[
+ B_n = A \setminus \bigcup_{m = 1}^n A_m.
+ \]
+ We now suppose, for contradiction, that $\mu(B_n) \not\to 0$ as $n \to \infty$. Since the $B_n$'s are decreasing, there must exist $\varepsilon > 0$ such that $\mu(B_n) \geq 2 \varepsilon$ for every $n$.
+
+ For each $n$, we take $C_n \in \mathcal{A}$ with the property that $\overline{C_n} \subseteq B_n$ and $\mu(B_n \setminus C_n) \leq \frac{\varepsilon}{2^n}$. This is possible since each $B_n$ is just a finite union of intervals. Thus we have
+ \begin{align*}
+ \mu(B_n) - \mu\left(\bigcap_{m = 1}^n C_m\right) &= \mu\left(B_n \setminus \bigcap_{m = 1}^n C_m\right) \\
+ &\leq \mu\left(\bigcup_{m = 1}^n (B_m \setminus C_m)\right) \\
+ &\leq \sum_{m = 1}^n \mu(B_m \setminus C_m) \\
+ &\leq \sum_{m = 1}^n \frac{\varepsilon}{2^m} \\
+ &\leq \varepsilon.
+ \end{align*}
+ On the other hand, we also know that $\mu(B_n) \geq 2\varepsilon$. So we must have
+ \[
+ \mu\left(\bigcap_{m = 1}^n C_m\right) \geq \varepsilon
+ \]
+ for all $n$. We now let $K_n = \bigcap_{m = 1}^n \overline{C_m}$. Then $\mu(K_n) \geq \varepsilon$, and in particular $K_n \not= \emptyset$ for all $n$.
+
+ Thus, the finite intersection property says
+ \[
+ \emptyset \not= \bigcap_{n = 1}^\infty K_n \subseteq \bigcap_{n = 1}^\infty B_n = \emptyset.
+ \]
+ This is a contradiction. So we have $\mu(B_n) \to 0$ as $n \to \infty$. So done.
+\end{proof}
+
+\begin{defi}[Lebesgue measure]\index{Lebesgue measure}
+ The \emph{Lebesgue measure} is the unique Borel measure $\mu$ on $\R$ with $\mu([a, b]) = b - a$.
+\end{defi}
+
+Note that the Lebesgue measure is not a finite measure, since $\mu(\R) = \infty$. However, it is a $\sigma$-finite measure.
+\begin{defi}[$\sigma$-finite measure]\index{$\sigma$-finite measure}
+ Let $(E, \mathcal{E})$ be a measurable space, and $\mu$ a measure. We say $\mu$ is \emph{$\sigma$-finite} if there exists a sequence $(E_n)$ in $\mathcal{E}$ such that $\bigcup_n E_n = E$ and $\mu(E_n) < \infty$ for all $n$.
+\end{defi}
+This is the next best thing we can hope after finiteness, and often proofs that involve finiteness carry over to $\sigma$-finite measures.
+
+\begin{prop}
+ The Lebesgue measure is \term{translation invariant}, i.e.
+ \[
+ \mu(A + x) = \mu(A)
+ \]
+ for all $A \in \mathcal{B}$ and $x \in \R$, where
+ \[
+ A + x = \{y + x, y \in A\}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We use the uniqueness of the Lebesgue measure. We let
+ \[
+ \mu_x (A) = \mu(A + x)
+ \]
+ for $A \in \mathcal{B}$. Then this is a measure on $\mathcal{B}$ satisfying $\mu_x([a, b]) = b - a$. So the uniqueness of the Lebesgue measure shows that $\mu_x = \mu$.
+\end{proof}
+
+It turns out that translation invariance actually characterizes the Lebesgue measure.
+
+\begin{prop}
+ Let $\tilde{\mu}$ be a Borel measure on $\R$ that is translation invariant with $\tilde{\mu}([0, 1]) = 1$. Then $\tilde{\mu}$ is the Lebesgue measure.
+\end{prop}
+
+\begin{proof}
+ We show that any such measure must satisfy
+ \[
+ \tilde{\mu}([a, b]) = b - a.
+ \]
+ By additivity and translation invariance, we can show that $\tilde{\mu}([p, q]) = q - p$ for all rational $p < q$. By considering $\tilde{\mu}([p, p + 1/n])$ for all $n$ and using the increasing property, we know $\tilde{\mu}(\{p\}) = 0$. So $\tilde{\mu}([p, q)) = \tilde{\mu}((p, q]) = \tilde{\mu}((p, q)) = q - p$ for all rational $p, q$.
+
+ Finally, by countable additivity, we can extend this to all real intervals. Then the result follows from the uniqueness of the Lebesgue measure.
+\end{proof}
+
+In the proof of the Caratheodory extension theorem, we constructed a measure $\mu^*$ on the $\sigma$-algebra $\mathcal{M}$ of $\mu^*$-measurable sets which contains $\mathcal{A}$. This contains $\mathcal{B} = \sigma(\mathcal{A})$, but could in fact be bigger than it. We call $\mathcal{M}$ the \term{Lebesgue $\sigma$-algebra}.
+
+Indeed, it can be given by
+\[
+ \mathcal{M} = \{A \cup N: A \in \mathcal{B}, N \subseteq B \in \mathcal{B}\text{ with }\mu(B) = 0\}.
+\]
+If $A \cup N \in \mathcal{M}$, then $\mu(A \cup N) = \mu(A)$. The proof is left for the example sheet.
+
+It is also true that $\mathcal{M}$ is strictly larger than $\mathcal{B}$, so there exists $A \in \mathcal{M}$ with $A \not\in \mathcal{B}$. Construction of such a set was on last year's exam (2016).
+
+On the other hand, it is also true that not all sets are Lebesgue measurable. This is a rather funny construction.
+
+\begin{eg}
+ For $x, y \in [0, 1)$, we say $x \sim y$ if $x - y$ is rational. This defines an equivalence relation on $[0, 1)$. By the axiom of choice, we pick a representative of each equivalence class, and put them into a set $S \subseteq [0, 1)$. We will show that $S$ is not Lebesgue measurable.
+
+ Suppose that $S$ were Lebesgue measurable. We are going to get a contradiction to the countable additivity of the Lebesgue measure. For each rational $r \in [0, 1) \cap \Q$, we define
+ \[
+ S_r = \{s + r \bmod 1: s \in S\}.
+ \]
+ By translation invariance, we know $S_r$ is also Lebesgue measurable, and $\mu(S_r) = \mu(S)$.
+
+ Also, by construction of $S$, we know $(S_r)_{r \in \Q}$ is disjoint, and $\bigcup_{r \in \Q} S_r = [0, 1)$. Now by countable additivity, we have
+ \[
+ 1 = \mu([0, 1)) = \mu\left(\bigcup_{r \in \Q}S_r\right) = \sum_{r \in \Q}\mu(S_r) = \sum_{r \in \Q}\mu(S),
+ \]
+ which is clearly not possible. Indeed, if $\mu(S) = 0$, then this says $1 = 0$; If $\mu(S) > 0$, then this says $1 = \infty$. Both are absurd.
+\end{eg}
+
+\subsection{Probability measures}
+Since the course is called ``probability and measure'', we'd better start talking about probability! It turns out the notions we care about in probability theory are very naturally just special cases of the concepts we have previously considered.
+
+\begin{defi}[Probability measure and probability space]
+ Let $(E, \mathcal{E})$ be a measure space with the property that $\mu(E) = 1$. Then we often call $\mu$ a \term{probability measure}, and $(E, \mathcal{E}, \mu)$ a \term{probability space}.
+\end{defi}
+
+Probability spaces are usually written as $(\Omega, \mathcal{F}, \P)$ instead.
+
+\begin{defi}[Sample space]
+ In a probability space $(\Omega, \mathcal{F}, \P)$, we often call $\Omega$ the \term{sample space}.
+\end{defi}
+
+\begin{defi}[Events]
+ In a probability space $(\Omega, \mathcal{F}, \P)$, we often call the elements of $\mathcal{F}$ the \term{events}.
+\end{defi}
+
+\begin{defi}[Probability]\index{probability}
+ In a probability space $(\Omega, \mathcal{F}, \P)$, if $A \in \mathcal{F}$, we often call $\P[A]$ the \term{probability} of the event $A$.
+\end{defi}
+
+These are exactly the same things as measures, but with different names! However, thinking of them as probabilities could make us ask different questions about these measure spaces. For example, in probability, one is often interested in \emph{independence}.
+
+\begin{defi}[Independence of events]\index{independent!events}\index{event!independent}
+ A sequence of events $(A_n)$ is said to be \emph{independent} if
+ \[
+ \P\left[\bigcap_{n \in J} A_n\right] = \prod_{n \in J} \P[A_n]
+ \]
+ for all finite subsets $J \subseteq \N$.
+\end{defi}
+
+However, it turns out that talking about independence of events is usually too restrictive. Instead, we want to talk about the independence of $\sigma$-algebras:
+
+\begin{defi}[Independence of $\sigma$-algebras]\index{independent!$\sigma$-algebras}\index{$\sigma$-algebra!independent}
+ A sequence of $\sigma$-algebras $(\mathcal{A}_n)$ with $\mathcal{A}_n \subseteq \mathcal{F}$ for all $n$ is said to be independent if the following is true: if $(A_n)$ is a sequence where $A_n \in \mathcal{A}_n$ for all $n$, then $(A_n)$ is independent.
+\end{defi}
+
+\begin{prop}
+ Events $(A_n)$ are independent iff the $\sigma$-algebras $(\sigma(A_n))$ are independent.
+\end{prop}
+While proving this directly would be rather tedious (but not too hard), it is an immediate consequence of the following theorem:
+
+\begin{thm}
+ Suppose $\mathcal{A}_1$ and $\mathcal{A}_2$ are $\pi$-systems in $\mathcal{F}$. If
+ \[
+ \P[A_1 \cap A_2] = \P[A_1] \P[A_2]
+ \]
+ for all $A_1 \in \mathcal{A}_1$ and $A_2 \in \mathcal{A}_2$, then $\sigma(\mathcal{A}_1)$ and $\sigma(\mathcal{A}_2)$ are independent.
+\end{thm}
+
+\begin{proof}
+ This will follow from two applications of the fact that a finite measure is determined by its values on a $\pi$-system which generates the entire $\sigma$-algebra.
+
+ We first fix $A_1 \in \mathcal{A}_1$. We define the measures
+ \[
+ \mu(A) = \P[A \cap A_1]
+ \]
+ and
+ \[
+ \nu(A) = \P[A] \P[A_1]
+ \]
+ for all $A \in \mathcal{F}$. By assumption, we know $\mu$ and $\nu$ agree on $\mathcal{A}_2$, and we have that $\mu(\Omega) = \P[A_1] = \nu(\Omega) \leq 1 < \infty$. So $\mu$ and $\nu$ agree on $\sigma(\mathcal{A}_2)$. So we have
+ \[
+ \P[A_1 \cap A_2] = \mu(A_2) = \nu(A_2) = \P[A_1] \P[A_2]
+ \]
+ for all $A_2 \in \sigma(\mathcal{A}_2)$.
+
+ So we have now shown that if $\mathcal{A}_1$ and $\mathcal{A}_2$ are independent, then $\mathcal{A}_1$ and $\sigma(\mathcal{A}_2)$ are independent. By symmetry, the same argument shows that $\sigma(\mathcal{A}_1)$ and $\sigma(\mathcal{A}_2)$ are independent.
+\end{proof}
+
+Say we are rolling a die. Instead of asking for the probability of getting a $6$, we might be interested instead in the probability of getting a $6$ \emph{infinitely often}. Intuitively, the answer is ``it happens with probability $1$'', because each roll gives a $6$ with probability $\frac{1}{6}$, and the rolls are all independent.
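As a quick numerical sanity check (a simulation, not a proof), we can roll a virtual die many times and watch the $6$s keep occurring: their count grows like $n/6$, and the waiting times between successive $6$s stay small.

```python
import random

random.seed(0)  # fix the seed so the simulation is reproducible

# Roll a fair die n times and record the positions where a 6 occurs.
n = 100_000
sixes = [i for i in range(n) if random.randint(1, 6) == 6]

# The count of 6s grows like n/6, and the gaps between successive 6s
# stay small -- consistent with "a 6 occurs infinitely often".
print(len(sixes))                                    # roughly n/6
print(max(b - a for a, b in zip(sixes, sixes[1:])))  # longest waiting time
```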
+
+We would like to make this precise and actually prove it. It turns out that the notions of ``occurs infinitely often'' and also ``occurs eventually'' correspond to more analytic notions of $\limsup$ and $\liminf$.
+
+\begin{defi}[limsup and liminf]\index{$\limsup$}\index{$\liminf$}
+ Let $(A_n)$ be a sequence of events. We define
+ \begin{align*}
+ \limsup A_n &= \bigcap_n \bigcup_{m \geq n} A_m\\
+ \liminf A_n &= \bigcup_n \bigcap_{m \geq n} A_m.
+ \end{align*}
+\end{defi}
+To parse these definitions more easily, we can read $\cap$ as ``for all'', and $\cup$ as ``there exists''. For example, we can write
+\begin{align*}
+ \limsup A_n &= \forall n,\exists m \geq n\text{ such that }A_m\text{ occurs}\\
+ &= \{x: \forall n, \exists m \geq n, x \in A_m\}\\
+ &= \{A_m\text{ occurs infinitely often}\}\\
+ &= \{A_m \text{ i.o.}\}
+\end{align*}
+Similarly, we have
+\begin{align*}
+ \liminf A_n &= \exists n, \forall m \geq n\text{ such that }A_m\text{ occurs}\\
+ &= \{x: \exists n, \forall m \geq n, x \in A_m\}\\
+ &= \{A_m\text{ occurs eventually}\}\\
+ &= \{A_m\text{ e.v.}\}
+\end{align*}
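These definitions are easy to experiment with concretely. Here is a small Python sketch with a made-up sequence of sets; because the sequence is eventually periodic, the truncated unions and intersections already agree with the genuine $\limsup$ and $\liminf$.

```python
# A hypothetical sequence: A_n = {0} for even n, A_n = {1} for odd n,
# and 2 is additionally a member of A_n only for n < 5.
def A(n):
    s = {0} if n % 2 == 0 else {1}
    if n < 5:
        s.add(2)
    return s

N = 50  # truncation level; exact here since the sequence is eventually periodic

# limsup A_n = "points lying in infinitely many A_n"
limsup = set.intersection(*(set.union(*(A(m) for m in range(n, N))) for n in range(N - 1)))
# liminf A_n = "points lying in all but finitely many A_n"
liminf = set.union(*(set.intersection(*(A(m) for m in range(n, N))) for n in range(N - 1)))

print(limsup)  # {0, 1}: 0 and 1 occur infinitely often, but 2 only finitely often
print(liminf)  # set(): no point occurs eventually, as 0 and 1 keep alternating
```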
+We are now going to prove two ``obvious'' results, known as the \emph{Borel--Cantelli lemmas}. These give us necessary conditions for an event to happen infinitely often, and in the case where the events are independent, the condition is also sufficient.
+
+\begin{lemma}[Borel--Cantelli lemma]\index{Borel--Cantelli lemma}
+ If
+ \[
+ \sum_n \P[A_n] < \infty,
+ \]
+ then
+ \[
+ \P[A_n\text{ i.o.}] = 0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ For each $k$, we have
+ \begin{align*}
+ \P[A_n\text{ i.o.}] &= \P\left[\bigcap_n \bigcup_{m \geq n} A_m\right]\\
+ &\leq \P\left[\bigcup_{m \geq k} A_m\right]\\
+ &\leq \sum_{m = k}^\infty \P[A_m]\\
+ &\to 0
+ \end{align*}
+ as $k \to \infty$. So we have $\P[A_n\text{ i.o.}] = 0$.
+\end{proof}
+Note that we did not need to use the fact that we are working with a probability measure. So in fact this holds for any measure space.
+
+\begin{lemma}[Borel--Cantelli lemma II]\index{Borel--Cantelli lemma II}
+ Let $(A_n)$ be independent events. If
+ \[
+ \sum_n \P[A_n] = \infty,
+ \]
+ then
+ \[
+ \P[A_n\text{ i.o.}] = 1.
+ \]
+\end{lemma}
+Note that independence is crucial. If we flip a fair coin, and we set all the $A_n$ to be equal to ``getting a heads'', then $\sum_n \P[A_n] = \sum_n \frac{1}{2} = \infty$, but we certainly do not have $\P[A_n\text{ i.o.}] = 1$. Instead it is just $\frac{1}{2}$.
+
+\begin{proof}
+ By the example sheet, if $(A_n)$ is independent, then so is $(A_n^C)$. Then we have
+ \begin{align*}
+ \P\left[\bigcap_{m = n}^N A_m^C\right] &= \prod_{m = n}^N \P[A_m^C]\\
+ &= \prod_{m = n}^N (1 - \P[A_m])\\
+ &\leq \prod_{m = n}^N \exp(-\P[A_m])\\
+ &= \exp\left(- \sum_{m = n}^N \P[A_m]\right)\\
+ &\to 0
+ \end{align*}
+ as $N \to \infty$, as we assumed that $\sum_n \P[A_n] = \infty$. So we have
+ \[
+ \P\left[\bigcap_{m = n}^\infty A_m^C\right] = 0.
+ \]
+ By countable subadditivity, we have
+ \[
+ \P\left[\bigcup_n \bigcap_{m = n}^\infty A_m^C\right] = 0.
+ \]
+ This in turn implies that
+ \[
+ \P\left[\bigcap_n \bigcup_{m = n}^\infty A_m\right] = 1 - \P\left[\bigcup_n \bigcap_{m = n}^\infty A_m^C\right] = 1.
+ \]
+ So we are done.
+\end{proof}
+
+\section{Measurable functions and random variables}
+We've had enough of measurable sets. As in most of mathematics, not only should we talk about objects, but also maps between objects. Here we want to talk about maps between measure spaces, known as \emph{measurable functions}. In the case of a probability space, a measurable function is a random variable!
+
+In this chapter, we are going to start by defining a measurable function and investigate some of its basic properties. In particular, we are going to prove the \emph{monotone class theorem}, which is the analogue of Dynkin's lemma for measurable functions. Afterwards, we turn to the probabilistic aspects, and see how we can make sense of the independence of random variables. Finally, we are going to consider different notions of ``convergence'' of functions.
+
+\subsection{Measurable functions}
+The definition of a measurable function is somewhat like the definition of a continuous function, except that we replace ``open'' with ``in the $\sigma$-algebra''.
+
+\begin{defi}[Measurable functions]\index{measurable function}
+ Let $(E, \mathcal{E})$ and $(G, \mathcal{G})$ be measurable spaces. A map $f: E \to G$ is \emph{measurable} if for every $A \in \mathcal{G}$, we have
+ \[
+ f^{-1}(A) = \{x \in E: f(x) \in A\} \in \mathcal{E}.
+ \]
+ If $(G, \mathcal{G}) = (\R, \mathcal{B})$, then we will just say that $f$ is measurable on $E$.
+
+ If $(G, \mathcal{G}) = ([0, \infty], \mathcal{B})$, then we will just say that $f$ is \index{non-negative measurable function}\emph{non-negative measurable}.
+
+ If $E$ is a topological space and $\mathcal{E} = \mathcal{B}(E)$, then we call $f$ a \term{Borel function}.
+\end{defi}
+
+How do we actually check in practice that a function is measurable? It turns out we are lucky. We can simply check that $f^{-1}(A) \in \mathcal{E}$ for $A$ in \emph{any} generating set $\mathcal{Q}$ of $\mathcal{G}$.
+\begin{lemma}
+ Let $(E, \mathcal{E})$ and $(G, \mathcal{G})$ be measurable spaces, and $\mathcal{G} = \sigma(\mathcal{Q})$ for some $\mathcal{Q}$. If $f^{-1}(A) \in \mathcal{E}$ for all $A \in \mathcal{Q}$, then $f$ is measurable.
+\end{lemma}
+
+\begin{proof}
+ We claim that
+ \[
+ \{A \subseteq G: f^{-1}(A) \in \mathcal{E}\}
+ \]
+ is a $\sigma$-algebra on $G$. Then the result follows immediately by definition of $\sigma(\mathcal{Q})$.
+
+ Indeed, this follows from the fact that $f^{-1}$ preserves everything. More precisely, we have
+ \[
+ f^{-1}\left(\bigcup_n A_n\right) = \bigcup_n f^{-1}(A_n),\quad f^{-1}(A^C) = (f^{-1}(A))^C,\quad f^{-1}(\emptyset) = \emptyset.
+ \]
+ So if, say, all the $A_n$ lie in this collection, then so does $\bigcup_n A_n$.
+\end{proof}
+
+\begin{eg}
+ In the particular case where we have a function $f: E \to \R$, we know that $\mathcal{B} = \mathcal{B}(\R)$ is generated by $(-\infty, y]$ for $y \in \R$. So we just have to check that
+ \[
+ \{x \in E: f(x) \leq y\} = f^{-1}((-\infty, y]) \in \mathcal{E}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Let $E, F$ be topological spaces, and $f: E \to F$ be continuous. We will see that $f$ is a measurable function (under the Borel $\sigma$-algebras). Indeed, by definition, whenever $U \subseteq F$ is open, we have $f^{-1}(U)$ open as well. So $f^{-1}(U) \in \mathcal{B}(E)$ for all $U \subseteq F$ open. But since $\mathcal{B}(F)$ is the $\sigma$-algebra generated by the open sets, this implies that $f$ is measurable.
+\end{eg}
+
+This is one very important example. We can do another very important example.
+
+\begin{eg}
+ Suppose that $A \subseteq E$. The indicator function of $A$ is $\boldsymbol1_A(x): E \to \{0, 1\}$ given by
+ \[
+ \boldsymbol1_A(x) =
+ \begin{cases}
+ 1 & x \in A\\
+ 0 & x \not\in A
+ \end{cases}.
+ \]
+ Suppose we give $\{0, 1\}$ its non-trivial $\sigma$-algebra, i.e.\ the power set. Then $\boldsymbol1_A$ is a measurable function iff $A \in \mathcal{E}$.
+\end{eg}
+
+\begin{eg}
+ The identity function is always measurable.
+\end{eg}
+
+\begin{eg}
+ Compositions of measurable functions are measurable. More precisely, if $(E, \mathcal{E})$, $(F, \mathcal{F})$ and $(G, \mathcal{G})$ are measurable spaces, and the functions $f: E \to F$ and $g: F \to G$ are measurable, then the composition $g \circ f: E \to G$ is measurable.
+
+ Indeed, if $A \in \mathcal{G}$, then $g^{-1}(A) \in \mathcal{F}$, so $f^{-1}(g^{-1}(A)) \in \mathcal{E}$. But $f^{-1}(g^{-1}(A)) = (g \circ f)^{-1}(A)$. So done.
+\end{eg}
+
+\begin{defi}[$\sigma$-algebra generated by functions]\index{$\sigma$-algebra generated by functions}
+ Now suppose we have a set $E$, and a family of real-valued functions $\{f_i: i \in I\}$ on $E$. We then define
+ \[
+ \sigma(f_i: i \in I) = \sigma(f^{-1}_i(A): A \in \mathcal{B}, i \in I).
+ \]
+\end{defi}
+This is the smallest $\sigma$-algebra on $E$ which makes all the $f_i$'s measurable. This is analogous to the notion of initial topologies for topological spaces.
+
+If we want to construct more measurable functions, the following definition will be rather useful:
+\begin{defi}[Product measurable space]\index{product measurable space}\index{product $\sigma$-algebra}\index{$\sigma$-algebra!product}\index{measurable space!product}
+ Let $(E, \mathcal{E})$ and $(G, \mathcal{G})$ be measurable spaces. We define the \emph{product measurable space} as $E \times G$ whose $\sigma$-algebra is generated by the projections
+ \[
+ \begin{tikzcd}
+ & E \times G \ar[ld, "\pi_1"'] \ar[rd, "\pi_2"]\\
+ E & & G
+ \end{tikzcd}.
+ \]
+ More explicitly, the $\sigma$-algebra is given by
+ \[
+ \mathcal{E} \otimes \mathcal{G} = \sigma(\{A \times B: A \in \mathcal{E}, B \in \mathcal{G}\}).
+ \]
+ More generally, if $(E_i, \mathcal{E}_i)$ is a collection of measurable spaces, the \emph{product measurable space} has underlying set $\prod_i E_i$, and the $\sigma$-algebra generated by the projection maps $\pi_i: \prod_j E_j \to E_i$.
+\end{defi}
+
+This satisfies the following property:
+\begin{prop}
+ Let $f_i: E \to F_i$ be functions. Then $\{f_i\}$ are all measurable iff $(f_i): E \to \prod F_i$ is measurable, where the function $(f_i)$ is defined by setting the $i$th component of $(f_i)(x)$ to be $f_i(x)$.
+\end{prop}
+
+\begin{proof}
+ If the map $(f_i)$ is measurable, then by composition with the projections $\pi_i$, we know that each $f_i$ is measurable.
+
+ Conversely, if all $f_i$ are measurable, then since the $\sigma$-algebra of $\prod F_i$ is generated by sets of the form $\pi^{-1}_j(A): A \in \mathcal{F}_j$, and the pullback of such sets along $(f_i)$ is exactly $f_j^{-1}(A)$, we know the function $(f_i)$ is measurable.
+\end{proof}
+
+Using this, we can prove that a whole lot more functions are measurable.
+\begin{prop}
+ Let $(E, \mathcal{E})$ be a measurable space. Let $(f_n: n \in \N)$ be a sequence of non-negative measurable functions on $E$. Then the following are measurable:
+ \begin{gather*}
+ f_1 + f_2,\quad f_1 f_2,\quad \max \{f_1, f_2\},\quad \min \{f_1, f_2\},\\
+ \inf_n f_n,\quad \sup_n f_n,\quad \liminf_n f_n,\quad \limsup_n f_n.
+ \end{gather*}
+ The same is true with ``non-negative'' replaced with ``real'', provided the new functions are real-valued (i.e.\ not infinity).
+\end{prop}
+
+\begin{proof}
+ This is an (easy) exercise on the example sheet. For example, the sum $f_1 + f_2$ can be written as the following composition.
+ \[
+ \begin{tikzcd}
+ E \ar[r, "{(f_1, f_2)}"] & \lbrack 0, \infty\rbrack^2 \ar[r, "+"] & \lbrack0, \infty\rbrack.
+ \end{tikzcd}
+ \]
+ We know the second map is continuous, hence measurable. The first function is also measurable since the $f_i$ are. So the composition is also measurable.
+
+ The product follows similarly, but for the infimum and supremum, we need to check explicitly that the corresponding maps $[0, \infty]^\N \to [0, \infty]$ are measurable.
+\end{proof}
+
+\begin{notation}
+ We will write
+ \[
+ f \wedge g = \min\{f, g\},\quad f \vee g = \max\{f, g\}.
+ \]
+\end{notation}
+
+We are now going to prove the monotone class theorem, which is a ``Dynkin's lemma'' for measurable functions. As in the case of Dynkin's lemma, it will sound rather awkward but will prove itself to be very useful.
+\begin{thm}[Monotone class theorem]\index{monotone class theorem}
+ Let $(E, \mathcal{E})$ be a measurable space, and $\mathcal{A} \subseteq \mathcal{E}$ be a $\pi$-system with $\sigma(\mathcal{A}) = \mathcal{E}$. Let $\mathcal{V}$ be a vector space of functions such that
+ \begin{enumerate}
+ \item The constant function $1 = \mathbf{1}_E$ is in $\mathcal{V}$.
+ \item The indicator functions $\mathbf{1}_A \in \mathcal{V}$ for all $A \in \mathcal{A}$.
+ \item $\mathcal{V}$ is closed under bounded, monotone limits.
+
+ More explicitly, if $(f_n)$ is a bounded non-negative sequence in $\mathcal{V}$, $f_n \nearrow f$ (pointwise) and $f$ is also bounded, then $f \in \mathcal{V}$.
+ \end{enumerate}
+ Then $\mathcal{V}$ contains all bounded measurable functions.
+\end{thm}
+
+Note that the conditions for $\mathcal{V}$ are rather like the conditions for a d-system, where taking a bounded, monotone limit is something like taking increasing unions.
+
+\begin{proof}
+ We first deduce that $\mathbf{1}_A \in \mathcal{V}$ for all $A \in \mathcal{E}$. We let
+ \[
+ \mathcal{D} = \{A \in \mathcal{E}: \mathbf{1}_A \in \mathcal{V}\}.
+ \]
+ We want to show that $\mathcal{D} = \mathcal{E}$. To do this, we have to show that $\mathcal{D}$ is a $d$-system.
+ \begin{enumerate}
+ \item Since $\mathbf{1}_E \in \mathcal{V}$, we know $E \in \mathcal{D}$.
+ \item If $\mathbf{1}_A \in \mathcal{V}$, then $1 - \mathbf{1}_A = \mathbf{1}_{E \setminus A} \in \mathcal{V}$. So $E \setminus A \in \mathcal{D}$.
+ \item If $(A_n)$ is an increasing sequence in $\mathcal{D}$, then $\mathbf{1}_{A_n} \to \mathbf{1}_{\bigcup A_n}$ monotonically increasingly. So $\mathbf{1}_{\bigcup A_n}$ is in $\mathcal{D}$.
+ \end{enumerate}
+ So, by Dynkin's lemma, we know $\mathcal{D} = \mathcal{E}$. So $\mathcal{V}$ contains indicators of all measurable sets. We will now try to obtain any measurable function by approximating.
+
+ Suppose that $f$ is bounded and non-negative measurable. We want to show that $f \in \mathcal{V}$. To do this, we approximate it by letting
+ \[
+ f_n = 2^{-n} \lfloor 2^n f\rfloor = \sum_{k = 0}^\infty k 2^{-n} \mathbf{1}_{\{k 2^{-n} \leq f < (k + 1) 2^{-n}\}}.
+ \]
+ Note that since $f$ is bounded, this is a finite sum. So it is a finite linear combination of indicators of elements in $\mathcal{E}$. So $f_n \in \mathcal{V}$, and $0 \leq f_n \to f$ monotonically. So $f \in \mathcal{V}$.
+
+ More generally, if $f$ is bounded and measurable, then we can write
+ \[
+ f = (f \vee 0) + (f \wedge 0) \equiv f^+ - f^-.
+ \]
+ Then $f^+$ and $f^-$ are bounded and non-negative measurable. So $f \in \mathcal{V}$.
+\end{proof}
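The dyadic approximation $f_n = 2^{-n}\lfloor 2^n f\rfloor$ in the proof is concrete enough to check numerically. A minimal sketch, taking $f(x) = x^2$ on $[0, 1]$ as an arbitrary bounded non-negative example:

```python
import math

def f(x):
    return x * x  # an arbitrary bounded non-negative function on [0, 1]

def f_n(x, n):
    # the dyadic approximation 2^{-n} * floor(2^n * f(x)) from the proof
    return math.floor(2**n * f(x)) / 2**n

xs = [i / 1000 for i in range(1001)]
for n in range(1, 10):
    # f_n takes finitely many values, increases with n, and sits within 2^{-n} of f
    assert all(f_n(x, n) <= f_n(x, n + 1) <= f(x) < f_n(x, n) + 2**-n for x in xs)
print("monotone convergence f_n -> f verified on a grid")
```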
+
+Unfortunately, we will not have a chance to use this result until the next chapter where we discuss integration. There we will use this \emph{a lot}.
+
+\subsection{Constructing new measures}
+We are going to look at two ways to construct new measures on spaces based on some measurable function we have.
+
+\begin{defi}[Image measure]\index{image measure}
+ Let $(E, \mathcal{E})$ and $(G, \mathcal{G})$ be measurable spaces. Suppose $\mu$ is a measure on $\mathcal{E}$ and $f: E \to G$ is a measurable function. We define the \emph{image measure} $\nu = \mu \circ f^{-1}$ on $G$ by
+ \[
+ \nu(A) = \mu(f^{-1}(A)).
+ \]
+\end{defi}
+It is a routine check that this is indeed a measure.
+
+If we have a strictly increasing continuous function, then we know it is invertible (if we restrict the codomain appropriately), and the inverse is also strictly increasing. It is also clear that these conditions are necessary for an inverse to exist. However, if we relax the conditions a bit, we can get some sort of ``pseudoinverse'' (some categorists may call them ``left adjoints'' (and will tell you that it is a trivial consequence of the adjoint functor theorem)).
+
+Recall that a function $g$ is \term{right continuous} if $x_n \searrow x$ implies $g(x_n) \to g(x)$, and similarly $f$ is \term{left continuous} if $x_n \nearrow x$ implies $f(x_n) \to f(x)$.
+\begin{lemma}
+ Let $g: \R \to \R$ be non-constant, non-decreasing and right continuous. We set
+ \[
+ g(\pm \infty) = \lim_{x \to \pm\infty} g(x).
+ \]
+ We set $I = (g(-\infty), g(\infty))$. Since $g$ is non-constant, this is non-empty.
+
+ Then there is a non-decreasing, left continuous function $f: I \to \R$ such that for all $x \in I$ and $y \in \R$, we have
+ \[
+ x \leq g(y) \Leftrightarrow f(x) \leq y.
+ \]
+ Thus, taking the negation of this, we have
+ \[
+ x > g(y) \Leftrightarrow f(x) > y.
+ \]
+ Explicitly, for $x \in I$, we define
+ \[
+ f(x) = \inf\{y \in \R: x \leq g(y)\}.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We just have to verify that it works. For $x \in I$, consider
+ \[
+ J_x = \{y \in \R: x \leq g(y)\}.
+ \]
+ Since $g$ is non-decreasing, if $y \in J_x$ and $y' \geq y$, then $y' \in J_x$. Since $g$ is right-continuous, if $y_n \in J_x$ is such that $y_n \searrow y$, then $y \in J_x$. So we have
+ \[
+ J_x = [f(x), \infty).
+ \]
+ Thus, for $y \in \R$, we have
+ \[
+ x \leq g(y) \Leftrightarrow f(x) \leq y.
+ \]
+ So we just have to prove the remaining properties of $f$. Now for $x \leq x'$, we have $J_x \subseteq J_{x'}$. So $f(x) \leq f(x')$. So $f$ is non-decreasing.
+
+ Similarly, if $x_n \nearrow x$, then we have $J_x = \bigcap_n J_{x_n}$. So $f(x_n) \to f(x)$. So this is left continuous.
+\end{proof}
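To see the lemma in action, here is a Python sketch with a made-up $g$ that jumps at $y = 2$ (so that $f$ acquires a flat part), verifying the defining equivalence $x \leq g(y) \Leftrightarrow f(x) \leq y$ in exact rational arithmetic:

```python
from fractions import Fraction as F

# A hypothetical right-continuous, non-decreasing g with a jump at y = 2:
# g(y) = y for y < 2, and g(y) = y + 1 for y >= 2.
def g(y):
    return y if y < 2 else y + 1

# Its generalised inverse f(x) = inf{y : x <= g(y)}, worked out by hand:
# the jump of g at 2 becomes a flat part of f on (2, 3].
def f(x):
    if x <= 2:
        return x
    if x <= 3:
        return F(2)
    return x - 1

# Verify the defining equivalence  x <= g(y)  <=>  f(x) <= y  exactly,
# using rationals to avoid floating-point ties at the boundaries.
grid = [F(k, 10) for k in range(-20, 60)]
assert all((x <= g(y)) == (f(x) <= y) for x in grid for y in grid)
print("x <= g(y)  <=>  f(x) <= y  holds on the grid")
```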
+
+\begin{eg}
+ If $g$ is given by the function
+ \begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw [->] (0, 0) -- (4, 0);
+ \draw [->] (0, 0) -- (0, 4);
+
+ \draw [thick, mblue] (0, 0) -- (2, 1.5) node [draw, fill=white, circle, inner sep = 0, minimum size = 3] {};
+
+ \draw [thick, mblue] (2, 2.5) node [circ] {} -- (3, 2.5) -- (4, 4);
+ \end{tikzpicture}
+ \end{center}
+ then $f$ is given by
+ \begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw [->] (0, 0) -- (4, 0);
+ \draw [->] (0, 0) -- (0, 4);
+
+ \draw [thick, mblue] (0, 0) -- (1.5, 2) -- (2.5, 2) node [draw, fill=white, circle, inner sep = 0, minimum size = 3] {};
+
+ \draw [thick, mblue] (2.5, 3) node [circ] {} -- (4, 4);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+This allows us to construct new measures on $\R$ with ease.
+\begin{thm}
+ Let $g: \R \to \R$ be non-constant, non-decreasing and right continuous. Then there exists a unique Radon measure $\d g$ on $\mathcal{B}$ such that
+ \[
+ \d g((a, b]) = g(b) - g(a).
+ \]
+ Moreover, we obtain all non-zero Radon measures on $\R$ in this way.
+\end{thm}
+We have already seen an instance of this when $g$ was the identity function.
+
+Given the lemma, this is very easy.
+\begin{proof}
+ Take $I$ and $f$ as in the previous lemma, and let $\mu$ be the restriction of the Lebesgue measure to Borel subsets of $I$. Now $f$ is measurable since it is left continuous. We define $\d g = \mu \circ f^{-1}$. Then we have
+ \begin{align*}
+ \d g((a, b]) &= \mu(\{x \in I: a < f(x) \leq b\}) \\
+ &= \mu(\{x \in I: g(a) < x \leq g(b)\}) \\
+ &= \mu((g(a), g(b)]) = g(b) - g(a).
+ \end{align*}
+ So $\d g$ is a Radon measure with the required property.
+
+ There are no other such measures by the argument used for uniqueness of the Lebesgue measure.
+
+ To show we get all non-zero Radon measures this way, suppose we have a Radon measure $\nu$ on $\R$, we want to produce a $g$ such that $\nu = \d g$. We set
+ \[
+ g(y) =
+ \begin{cases}
+ -\nu ((y, 0]) & y \leq 0\\
+ \nu((0, y]) & y > 0
+ \end{cases}.
+ \]
+ Then $\nu((a, b]) = g(b) - g(a)$. We see that $\nu$ is non-zero, so $g$ is non-constant. It is also easy to see it is non-decreasing and right continuous. So $\nu = \d g$ by uniqueness.
+\end{proof}
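For instance, taking $g = \lfloor\,\cdot\,\rfloor$ (non-constant, non-decreasing and right continuous) gives $\d g((a, b]) = \lfloor b\rfloor - \lfloor a\rfloor$, which counts the integers in $(a, b]$; so the counting measure on $\Z$ is one of the Radon measures produced by the theorem. A quick numerical check of this identity:

```python
import math

# g = floor is non-decreasing and right continuous, and
# dg((a, b]) = floor(b) - floor(a) counts the integers in (a, b].
def dg(a, b):
    return math.floor(b) - math.floor(a)

def count_integers(a, b):
    # direct count of the integers n with a < n <= b
    return sum(1 for n in range(math.floor(a) - 1, math.floor(b) + 2) if a < n <= b)

print(dg(0.5, 3.5))  # 3: the integers 1, 2, 3 lie in (0.5, 3.5]
assert all(dg(a / 4, b / 4) == count_integers(a / 4, b / 4)
           for a in range(-20, 20) for b in range(a, 20))
```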
+
+\subsection{Random variables}
+We are now going to look at these ideas in the context of probability. It turns out they are concepts we already know and love!
+
+\begin{defi}[Random variable]\index{random variable}
+ Let $(\Omega, \mathcal{F}, \P)$ be a probability space, and $(E, \mathcal{E})$ a measurable space. Then an \emph{$E$-valued random variable} is a measurable function $X: \Omega \to E$.
+
+ By default, we will assume the random variables are real.
+\end{defi}
+Usually, when we have a random variable $X$, we might ask questions like ``what is the probability that $X \in A$?''. In other words, we are asking for the ``size'' of the set of things that get sent to $A$. This is just the image measure!
+\begin{defi}[Distribution/law]\index{distribution}
+ Given a random variable $X: \Omega \to E$, the \emph{distribution} or \term{law} of $X$ is the image measure $\mu_X = \P \circ X^{-1}$. We usually write
+ \[
+ \P(X \in A) = \mu_X(A) = \P(X^{-1}(A)).
+ \]
+\end{defi}
+If $E = \R$, then $\mu_X$ is determined by its values on the $\pi$-system of intervals $(-\infty, y]$. We set
+\[
+ F_X(x) = \mu_X((-\infty, x]) = \P(X \leq x)
+\]
+This is known as the \term{distribution function}\index{$F_X$} of $X$.
+
+\begin{prop}
+ We have
+ \[
+ F_X(x) \to
+ \begin{cases}
+ 0 & x \to -\infty\\
+ 1 & x \to +\infty
+ \end{cases}.
+ \]
+ Also, $F_X(x)$ is non-decreasing and right-continuous.
+\end{prop}
+
+We call any function $F$ with these properties a distribution function.
+\begin{defi}[Distribution function]\index{distribution function}
+ A \emph{distribution function} is a non-decreasing, right continuous function $F: \R \to [0, 1]$ satisfying
+ \[
+ F(x) \to
+ \begin{cases}
+ 0 & x \to -\infty\\
+ 1 & x \to +\infty
+ \end{cases}.
+ \]
+\end{defi}
+We now want to show that every distribution function is indeed the distribution function of some random variable.
+
+\begin{prop}
+ Let $F$ be any distribution function. Then there exists a probability space $(\Omega, \mathcal{F}, \P)$ and a random variable $X$ such that $F_X = F$.
+\end{prop}
+
+\begin{proof}
+ Take $(\Omega, \mathcal{F}, \P) = ((0, 1), \mathcal{B}(0, 1), \text{Lebesgue})$. We take $X: \Omega \to \R$ to be
+ \[
+ X(\omega) = \inf\{x: \omega \leq F(x)\}.
+ \]
+ Then we have
+ \[
+ X(\omega) \leq x \Longleftrightarrow \omega \leq F(x).
+ \]
+ So we have
+ \[
+ F_X(x) = \P[X \leq x] = \P[(0, F(x)]] = F(x).
+ \]
+ Therefore $F_X = F$.
+\end{proof}
+
+This construction is actually very useful in practice. If we are writing a computer program and want to sample a random variable, we will use this procedure. The computer usually comes with a uniform (pseudo)-random number generator. Then using this procedure allows us to produce random variables of any distribution from a uniform sample.
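The sampling procedure just described can be sketched in a few lines of code. This is a minimal illustration, not part of the course: the function names are ours, and the two-point distribution is just an example.

```python
import bisect
import random

def quantile(omega, xs, F_vals):
    """Generalised inverse X(omega) = inf{x : omega <= F(x)} for a
    distribution supported on the sorted points xs, where
    F_vals[i] = F(xs[i]) is the (non-decreasing) distribution function."""
    # bisect_left finds the first index i with omega <= F_vals[i]
    return xs[bisect.bisect_left(F_vals, omega)]

# Illustrative distribution: P[X = 0] = 2/3, P[X = 1] = 1/3.
xs, F_vals = [0, 1], [2 / 3, 1.0]

# The key equivalence X(omega) <= x  <=>  omega <= F(x):
assert quantile(0.5, xs, F_vals) == 0   # 0.5 <= F(0) = 2/3
assert quantile(0.9, xs, F_vals) == 1   # 0.9 >  F(0)

# Sampling: feed uniform samples through the generalised inverse.
random.seed(0)
samples = [quantile(random.random(), xs, F_vals) for _ in range(10_000)]
assert abs(sum(samples) / len(samples) - 1 / 3) < 0.05
```

The empirical mean of the samples is close to $1/3$, as the law $\P[X = 1] = 1/3$ predicts.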
+
+The next thing we want to consider is the notion of independence of random variables. Recall that for random variables $X, Y$, we used to say that they are independent if for any $A, B$, we have
+\[
+ \P[X \in A, Y \in B] = \P[X \in A]\P[Y \in B].
+\]
+But this is exactly the statement that the $\sigma$-algebras generated by $X$ and $Y$ are independent!
+
+\begin{defi}[Independence of random variables]\index{random variable!independent}\index{independent!random variable}
+ A family $(X_n)$ of random variables is said to be \emph{independent} if the family of $\sigma$-algebras $(\sigma(X_n))$ is independent.
+\end{defi}
+
+\begin{prop}
+ Two real-valued random variables $X, Y$ are independent iff
+ \[
+ \P[X \leq x, Y \leq y] = \P[X \leq x] \P[Y \leq y].
+ \]
+ More generally, if $(X_n)$ is a sequence of real-valued random variables, then they are independent iff
+ \[
+ \P[X_1 \leq x_1, \cdots, X_n \leq x_n] = \prod_{j = 1}^n \P[X_j \leq x_j]
+ \]
+ for all $n$ and $x_j$.
+\end{prop}
+
+\begin{proof}
+ The $\Rightarrow$ direction is obvious. For the other direction, we simply note that $\{(-\infty, x]: x \in \R\}$ is a generating $\pi$-system for the Borel $\sigma$-algebra of $\R$.
+\end{proof}
+
+In probability, we often say things like ``let $X_1, X_2, \cdots$ be iid random variables''. However, how can we guarantee that iid random variables do indeed exist? We start with the less ambitious goal of finding iid $\Bernoulli(1/2)$ random variables:
+
+\begin{prop}
+ Let
+ \[
+ (\Omega, \mathcal{F}, \P) = ((0, 1), \mathcal{B}(0, 1), \text{Lebesgue})
+ \]
+ be our probability space. Then there exists a sequence $(R_n)$ of independent $\Bernoulli(1/2)$ random variables.
+\end{prop}
+
+\begin{proof}
+ Suppose we have $\omega \in \Omega = (0, 1)$. Then we write $\omega$ as a binary expansion
+ \[
+ \omega = \sum_{n = 1}^\infty \omega_n 2^{-n},
+ \]
+ where $\omega_n \in \{0, 1\}$. We make the binary expansion unique by disallowing infinite sequences of zeroes.
+
+ We define $R_n(\omega) = \omega_n$. We will show that $R_n$ is measurable. Indeed, we can write
+ \[
+ R_1(\omega) = \omega_1 = \mathbf{1}_{(1/2, 1]}(\omega),
+ \]
+ where $\mathbf{1}_{(1/2, 1]}$ is the indicator function. Since indicator functions of measurable sets are measurable, we know $R_1$ is measurable. Similarly, we have
+ \[
+ R_2(\omega) = \mathbf{1}_{(1/4, 1/2]}(\omega) + \mathbf{1}_{(3/4, 1]}(\omega).
+ \]
+ So this is also a measurable function. More generally, we can do this for any $R_n(\omega)$: we have
+ \[
+ R_n(\omega) = \sum_{j = 1}^{2^{n - 1}} \mathbf{1}_{(2^{-n}(2j - 1), 2^{-n}(2j)]} (\omega).
+ \]
+ So each $R_n$ is a random variable, as each can be expressed as a sum of indicators of measurable sets.
+
+ Now let's calculate
+ \[
+ \P[R_n = 1] = \sum_{j = 1}^{2^{n - 1}} 2^{-n}((2j) - (2j - 1)) = \sum_{j = 1}^{2^{n - 1}} 2^{-n} = \frac{1}{2}.
+ \]
+ Then we have
+ \[
+ \P[R_n = 0] = 1 - \P[R_n = 1] = \frac{1}{2}
+ \]
+ as well. So $R_n \sim \Bernoulli(1/2)$.
+
+ We can straightforwardly check that $(R_n)$ is an independent sequence: for any distinct indices $n_1 < \cdots < n_k$ and any digits $d_1, \cdots, d_k \in \{0, 1\}$, the event $\{R_{n_1} = d_1, \cdots, R_{n_k} = d_k\}$ is a disjoint union of dyadic intervals of total measure $2^{-k}$. So
+ \[
+ \P[R_{n_1} = d_1, \cdots, R_{n_k} = d_k] = 2^{-k} = \prod_{j = 1}^k \P[R_{n_j} = d_j].\qedhere
+ \]
+\end{proof}
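The indicator-sum formula for $R_n$ can be checked mechanically against direct digit extraction, using exact rational arithmetic to sidestep floating-point issues. A sanity check, not part of the proof (we test at the non-dyadic point $\omega = 3/10$, where the binary expansion is unambiguous):

```python
from fractions import Fraction

def R(n, omega):
    """R_n(omega) via the indicator sum over the dyadic intervals
    (2^(-n)(2j - 1), 2^(-n)(2j)]."""
    return sum(1 for j in range(1, 2**(n - 1) + 1)
               if Fraction(2 * j - 1, 2**n) < omega <= Fraction(2 * j, 2**n))

def digits(omega, N):
    """First N binary digits of a non-dyadic omega in (0, 1)."""
    ds = []
    for _ in range(N):
        omega *= 2
        d = int(omega)        # integer part: the next binary digit
        ds.append(d)
        omega -= d
    return ds

omega = Fraction(3, 10)       # binary 0.01001100110011...
assert digits(omega, 8) == [0, 1, 0, 0, 1, 1, 0, 0]
assert [R(n, omega) for n in range(1, 9)] == digits(omega, 8)
```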
+
+We will now use the $(R_n)$ to construct any independent sequence for any distribution.
+
+\begin{prop}
+ Let
+ \[
+ (\Omega, \mathcal{F}, \P) = ((0, 1), \mathcal{B}(0, 1), \text{Lebesgue}).
+ \]
+ Given any sequence $(F_n)$ of distribution functions, there is a sequence $(X_n)$ of independent random variables with $F_{X_n} = F_n$ for all $n$.
+\end{prop}
+
+\begin{proof}
+ Let $m: \N^2 \to \N$ be any bijection, and relabel
+ \[
+ Y_{k, n} = R_{m(k, n)},
+ \]
+ where the $R_j$ are as in the previous proposition. We let
+ \[
+ Y_n = \sum_{k = 1}^\infty 2^{-k} Y_{k, n}.
+ \]
+ Then we know that $(Y_n)$ is an independent sequence of random variables, and each is uniform on $(0, 1)$. As before, we define
+ \[
+ G_n(y) = \inf\{x: y \leq F_n(x)\}.
+ \]
+ We set $X_n = G_n(Y_n)$. Then $(X_n)$ is an independent sequence of random variables with $F_{X_n} = F_n$, since each $X_n$ is a measurable function of $Y_n$ alone.
+\end{proof}
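A sketch of the relabelling step in code. Here we pick the explicit bijection $m(k, n) = 2^{n - 1}(2k - 1)$ for concreteness (any bijection $\N^2 \to \N$ works), and truncate $Y_n$ to finitely many bits; this is an illustration of the bookkeeping, not a faithful infinite construction.

```python
from fractions import Fraction

def m(k, n):
    """A bijection N x N -> N: every positive integer factors
    uniquely as 2^(n-1) * (2k - 1)."""
    return 2**(n - 1) * (2 * k - 1)

# Sanity check: the first 64 positive integers are each hit exactly once.
hits = [m(k, n) for k in range(1, 33) for n in range(1, 8) if m(k, n) <= 64]
assert sorted(hits) == list(range(1, 65))

def digit(omega, j):
    """j-th binary digit of a non-dyadic rational omega in (0, 1)."""
    return int(omega * 2**j) % 2

def Y(n, omega, K=20):
    """Truncation of Y_n = sum_k 2^(-k) R_{m(k,n)}(omega) to K bits."""
    return sum(Fraction(digit(omega, m(k, n)), 2**k) for k in range(1, K + 1))

omega = Fraction(3, 10)          # any non-dyadic point of (0, 1)
u1, u2 = Y(1, omega), Y(2, omega)
assert 0 <= u1 < 1 and 0 <= u2 < 1 and u1 != u2
```

Each $Y_n$ reads off a disjoint subsequence of the bits of $\omega$, which is why the resulting uniforms are independent.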
+
+We end the section with a random fact: let $(\Omega, \mathcal{F}, \P)$ and $R_j$ be as above. Then $\frac{1}{n} \sum_{j = 1}^n R_j$ is the average of $n$ independent $\Bernoulli(1/2)$ random variables. The weak law of large numbers says for any $\varepsilon > 0$, we have
+\[
+ \P\left[\left\lvert\frac{1}{n} \sum_{j = 1}^n R_j - \frac{1}{2}\right\rvert \geq \varepsilon\right] \to 0\text{ as }n\to \infty.
+\]
+The strong law of large numbers, which we will prove later, says that
+\[
+ \P\left[\left\{\omega: \frac{1}{n} \sum_{j = 1}^n R_j \to \frac{1}{2}\right\}\right] = 1.
+\]
+So ``almost every number'' in $(0, 1)$ has an equal proportion of $0$'s and $1$'s in its binary expansion. This is known as the normal number theorem.
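The theorem is about almost every $\omega$, so a single point proves nothing; still, we can watch the digit average at a concrete non-dyadic rational such as $\omega = 1/3$, whose binary expansion $0.010101\ldots$ has digit average exactly $1/2$ (this is merely an illustration of the statement, computed exactly):

```python
from fractions import Fraction

def digit(omega, j):
    """j-th binary digit of a non-dyadic rational omega in (0, 1)."""
    return int(omega * 2**j) % 2

omega = Fraction(1, 3)          # binary 0.010101...
avg = Fraction(sum(digit(omega, j) for j in range(1, 1001)), 1000)
assert avg == Fraction(1, 2)    # exactly half of the first 1000 digits are 1
```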
+
+\subsection{Convergence of measurable functions}
+The next thing to look at is the convergence of measurable functions. In measure theory, wonderful things happen when we talk about convergence. In analysis, most of the time we had to require uniform convergence, or even stronger notions, if we want limits to behave well. However, in measure theory, the kinds of convergence we talk about are somewhat pointwise in nature. In fact, it will be \emph{weaker} than pointwise convergence. Yet, we are still going to get good properties out of them.
+
+\begin{defi}[Convergence almost everywhere]\index{convergence!almost everywhere}\index{almost everywhere}\index{almost sure convergence}\index{convergence!almost sure}
+ Suppose that $(E, \mathcal{E}, \mu)$ is a measure space. Suppose that $(f_n), f$ are measurable functions. We say $f_n \to f$ \emph{almost everywhere (a.e.)} if
+ \[
+ \mu(\{x \in E: f_n(x) \not\to f(x)\}) = 0.
+ \]
+ If $(E, \mathcal{E}, \mu)$ is a probability space, this is called \emph{almost sure convergence}.
+\end{defi}
+To see this makes sense, i.e.\ the set in there is actually measurable, note that
+\[
+ \{x \in E: f_n(x) \not\to f(x)\} = \{x \in E: \limsup |f_n(x) - f(x)| > 0\}.
+\]
+We have previously seen that $\limsup |f_n - f|$ is non-negative measurable. So the set $\{x \in E: \limsup |f_n(x) - f(x)| > 0\}$ is measurable.
+
+Another useful notion of convergence is convergence in measure.
+\begin{defi}[Convergence in measure]\index{convergence!in measure}\index{convergence!in probability}
+ Suppose that $(E, \mathcal{E}, \mu)$ is a measure space. Suppose that $(f_n), f$ are measurable functions. We say $f_n \to f$ \emph{in measure} if for each $\varepsilon > 0$, we have
+ \[
+ \mu(\{x \in E: |f_n(x) - f(x)| \geq \varepsilon\}) \to 0\text{ as } n \to \infty.
+ \]
+
+ If $(E, \mathcal{E}, \mu)$ is a probability space, then this is called \emph{convergence in probability}.
+\end{defi}
+In the case of a probability space, this says
+\[
+ \P(|X_n - X| \geq \varepsilon) \to 0\text{ as }n \to \infty
+\]
+for all $\varepsilon$, which is how we stated the weak law of large numbers in the past.
+
+After we define integration, we can consider the norms of a function $f$ by
+\[
+ \|f\|_p = \left(\int |f(x)|^p \;\d x\right)^{1/p}.
+\]
+Then in particular, if $\|f_n - f\|_p \to 0$, then $f_n \to f$ in measure, and this provides an easy way to see that functions converge in measure.
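This implication comes from Chebyshev's inequality $\mu(\{|f| \geq \varepsilon\}) \leq \varepsilon^{-p} \mu(|f|^p)$, applied to $f_n - f$. For a simple function both sides can be computed exactly; below is a sketch, where the encoding of a simple function as value/measure pairs is our own choice:

```python
from fractions import Fraction

# A simple function f = sum a_k 1_{A_k} on disjoint sets A_k,
# represented as (a_k, mu(A_k)) pairs.
f = [(Fraction(3), Fraction(1, 4)),    # value 3 on a set of measure 1/4
     (Fraction(1, 2), Fraction(2)),    # value 1/2 on a set of measure 2
     (Fraction(0), Fraction(5))]       # value 0 elsewhere (measure 5)

def lp_norm_p(f, p):
    """mu(|f|^p) for a simple function, computed exactly."""
    return sum(abs(a)**p * m for a, m in f)

def measure_above(f, eps):
    """mu({|f| >= eps}) for a simple function."""
    return sum(m for a, m in f if abs(a) >= eps)

p, eps = 2, Fraction(1, 2)
# Chebyshev: mu(|f| >= eps) <= eps^(-p) * mu(|f|^p)
assert measure_above(f, eps) <= eps**(-p) * lp_norm_p(f, p)
```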
+
+In general, neither of these notions imply each other. However, the following theorem provides us with a convenient dictionary to translate between the two notions.
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item If $\mu(E) < \infty$, then $f_n \to f$ a.e. implies $f_n \to f$ in measure.
+ \item For any $E$, if $f_n \to f$ in measure, then there exists a subsequence $(f_{n_k})$ such that $f_{n_k} \to f$ a.e.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item First suppose $\mu(E) < \infty$, and fix $\varepsilon > 0$. Consider
+ \[
+ \mu(\{x \in E: |f_n(x) - f(x)|\leq \varepsilon\}).
+ \]
+ We use the result from the first example sheet that for any sequence of events $(A_n)$, we have
+ \[
+ \liminf \mu(A_n) \geq \mu(\liminf A_n).
+ \]
+ Applying this to the above sequence gives
+ \begin{align*}
+ \liminf \mu(\{x: |f_n(x) - f(x)|\leq \varepsilon\}) &\geq \mu(\{x: |f_n(x) - f(x)| \leq \varepsilon\text{ eventually}\})\\
+ &\geq \mu(\{x \in E: |f_n(x) - f(x)| \to 0\})\\
+ &= \mu(E).
+ \end{align*}
+ As $\mu(E) < \infty$, it follows that $\mu(\{x \in E: |f_n(x) - f(x)| > \varepsilon\}) = \mu(E) - \mu(\{x \in E: |f_n(x) - f(x)| \leq \varepsilon\}) \to 0$ as $n \to \infty$.
+
+ \item Suppose that $f_n \to f$ in measure. We pick a subsequence $(n_k)$ such that
+ \[
+ \mu\left(\left\{x \in E: |f_{n_k}(x) - f(x)| > \frac{1}{k}\right\}\right) \leq 2^{-k}.
+ \]
+ Then we have
+ \[
+ \sum_{k = 1}^\infty \mu\left(\left\{x \in E: |f_{n_k}(x) - f(x)|> \frac{1}{k}\right\}\right) \leq \sum_{k = 1}^\infty 2^{-k} = 1 < \infty.
+ \]
+ By the first Borel--Cantelli lemma, we know
+ \[
+ \mu\left(\left\{x \in E: |f_{n_k}(x) - f(x)| > \frac{1}{k} \text{ i.o.}\right\}\right) = 0.
+ \]
+ So $f_{n_k} \to f$ a.e.\qedhere
+ \end{enumerate}
+\end{proof}
+It is important that we assume that $\mu(E) < \infty$ for the first part.
+\begin{eg}
+ Consider $(E, \mathcal{E}, \mu) = (\R, \mathcal{B}, \text{Lebesgue})$. Take $f_n(x) = \mathbf{1}_{[n, \infty)}(x)$. Then $f_n(x) \to 0$ for all $x$, and in particular almost everywhere. However, we have
+ \[
+ \mu\left(\left\{x \in \R: |f_n(x)| > \frac{1}{2}\right\}\right) = \mu([n, \infty)) = \infty
+ \]
+ for all $n$.
+\end{eg}
+
+There is one last type of convergence we are interested in. We will only first formulate it in the probability setting, but there is an analogous notion in measure theory known as \emph{weak convergence}, which we will discuss much later on in the course.
+\begin{defi}[Convergence in distribution]\index{convergence!in distribution}
+ Let $(X_n), X$ be random variables with distribution functions $F_{X_n}$ and $F_X$, then we say $X_n \to X$ \emph{in distribution} if $F_{X_n}(x) \to F_X(x)$ for all $x \in \R$ at which $F_X$ is continuous.
+\end{defi}
+Note that here we do not need that $(X_n)$ and $X$ live on the same probability space, since we only talk about the distribution functions.
+
+But why do we have the condition with continuity points? The idea is that if the resulting distribution has a ``jump'' at $x$, it doesn't matter which side of the jump $F_X(x)$ is at. Here is a simple example that tells us why this is very important:
+
+\begin{eg}
+ Let $X_n$ be uniform on $[0, 1/n]$. Intuitively, this should converge to the random variable that is always zero.
+
+ We can compute
+ \[
+ F_{X_n} (x) =
+ \begin{cases}
+ 0 & x \leq 0\\
+ nx & 0 < x< 1/n\\
+ 1 & x \geq 1/n
+ \end{cases}.
+ \]
+ We can also compute the distribution of the zero random variable as
+ \[
+ F_0(x) =
+ \begin{cases}
+ 0 & x < 0\\
+ 1 & x \geq 0
+ \end{cases}.
+ \]
+ But $F_{X_n}(0) = 0$ for all $n$, while $F_0(0) = 1$.
+\end{eg}
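The computation in this example can be checked numerically: $F_{X_n}(x) \to F_0(x)$ at every $x \neq 0$, but not at the discontinuity $x = 0$. A sketch (the cut-off $n = 10^6$ is an arbitrary stand-in for "large $n$"):

```python
def F_n(n, x):
    """Distribution function of Uniform[0, 1/n]."""
    if x <= 0:
        return 0.0
    return n * x if x < 1.0 / n else 1.0

def F_0(x):
    """Distribution function of the zero random variable."""
    return 1.0 if x >= 0 else 0.0

# Convergence at every continuity point of F_0 (i.e. every x != 0):
for x in (-0.5, -1e-3, 1e-3, 0.3, 2.0):
    assert abs(F_n(10**6, x) - F_0(x)) < 1e-9
# ...but failure at the discontinuity x = 0:
assert F_n(10**6, 0) == 0.0 and F_0(0) == 1.0
```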
+One might now think of cheating by cooking up some random variable such that $F$ is discontinuous at so many points that random, unrelated things converge to $F$. However, this cannot be done, because $F$ is a non-decreasing function, and thus can only have countably many points of discontinuities.
+
+The big theorem we are going to prove about convergence in distribution is that actually it is very boring and doesn't give us anything new.
+
+\begin{thm}[Skorokhod representation theorem of weak convergence]\index{Skorokhod representation theorem of weak convergence}\leavevmode
+ \begin{enumerate}
+ \item If $(X_n), X$ are defined on the same probability space, and $X_n \to X$ in probability, then $X_n \to X$ in distribution.
+ \item If $X_n \to X$ in distribution, then there exist random variables $(\tilde{X}_n)$ and $\tilde{X}$ defined on a common probability space with $F_{\tilde{X}_n} = F_{X_n}$ and $F_{\tilde{X}} = F_X$ such that $\tilde{X}_n \to \tilde{X}$ a.s.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ Let $S = \{x \in \R: F_X\text{ is continuous at }x\}$.
+ \begin{enumerate}
+ \item Assume that $X_n \to X$ in probability. Fix $x \in S$. We need to show that $F_{X_n}(x) \to F_X(x)$ as $n \to \infty$.
+
+ We fix $\varepsilon > 0$. Since $x \in S$, this implies that there is some $\delta > 0$ such that
+ \begin{align*}
+ F_X(x - \delta) &\geq F_X(x) - \frac{\varepsilon}{2}\\
+ F_X(x + \delta) & \leq F_X(x) + \frac{\varepsilon}{2}.
+ \end{align*}
+ We fix $N$ large such that $n \geq N$ implies $\P[|X_n - X| \geq \delta] \leq \frac{\varepsilon}{2}$. Then
+ \begin{align*}
+ F_{X_n}(x) &= \P[X_n \leq x] \\
+ &= \P[(X_n - X) + X \leq x] \\
+ \intertext{We now notice that $\{(X_n - X) + X \leq x\} \subseteq \{X \leq x + \delta\} \cup \{|X_n - X| > \delta\}$. So we have}
+ &\leq \P[X \leq x + \delta] + \P[|X_n - X| > \delta]\\
+ &\leq F_X(x + \delta) + \frac{\varepsilon}{2}\\
+ &\leq F_X(x) + \varepsilon.
+ \end{align*}
+ We similarly have
+ \begin{align*}
+ F_{X_n}(x) &= \P[X_n \leq x] \\
+ &\geq \P[X \leq x - \delta] - \P[|X_n - X| > \delta]\\
+ &\geq F_X(x - \delta) - \frac{\varepsilon}{2}\\
+ &\geq F_X(x) - \varepsilon.
+ \end{align*}
+ Combining, we have that $n \geq N$ implies $|F_{X_n}(x) - F_X(x)| \leq \varepsilon$. Since $\varepsilon$ was arbitrary, we are done.
+ \item Suppose $X_n \to X$ in distribution. We again let
+ \[
+ (\Omega, \mathcal{F}, \P) = ((0, 1), \mathcal{B}((0, 1)), \text{Lebesgue}).
+ \]
+ We let
+ \begin{align*}
+ \tilde{X}_n(\omega) &= \inf\{x : \omega \leq F_{X_n}(x)\},\\
+ \tilde{X}(\omega) &= \inf \{x : \omega \leq F_X(x)\}.
+ \end{align*}
+ Recall from before that $\tilde{X}_n$ has the same distribution function as $X_n$ for all $n$, and $\tilde{X}$ has the same distribution as $X$. Moreover, we have
+ \begin{align*}
+ \tilde{X}_n(\omega) \leq x &\Leftrightarrow \omega \leq F_{X_n}(x)\\
+ x < \tilde{X}_n(\omega) &\Leftrightarrow F_{X_n}(x) < \omega,
+ \end{align*}
+ and similarly if we replace $X_n$ with $X$.
+
+ We are now going to show that with this particular choice, we have $\tilde{X}_n \to \tilde{X}$ a.s.
+
+ Note that $\tilde{X}$ is a non-decreasing function $(0, 1) \to \R$. Then by general analysis, $\tilde{X}$ has at most countably many discontinuities. We write
+ \[
+ \Omega_0 = \{\omega \in (0, 1): \tilde{X}\text{ is continuous at }\omega\}.
+ \]
+ Then $(0, 1) \setminus \Omega_0$ is countable, and hence has Lebesgue measure $0$. So
+ \[
+ \P[\Omega_0] = 1.
+ \]
+ We are now going to show that $\tilde{X}_n (\omega) \to \tilde{X}(\omega)$ for all $\omega \in \Omega_0$.
+
+ Note that $F_X$ is a non-decreasing function, and hence its set of points of discontinuity $\R \setminus S$ is countable. So $S$ is dense in $\R$. Fix $\omega \in \Omega_0$ and $\varepsilon > 0$. We want to show that $|\tilde{X}_n(\omega) - \tilde{X}(\omega)| \leq \varepsilon$ for all $n$ large enough.
+
+ Since $S$ is dense in $\R$, we can find $x^-, x^+$ in $S$ such that
+ \[
+ x^- < \tilde{X}(\omega) < x^+
+ \]
+ and $x^+ - x^- < \varepsilon$. What we \emph{want} to do is to use the characteristic property of $\tilde{X}$ and $F_X$ to say that this implies
+ \[
+ F_X(x^-) < \omega < F_X(x^+).
+ \]
+ Then since $F_{X_n} \to F_X$ at the points $x^-, x^+$, for sufficiently large $n$, we have
+ \[
+ F_{X_n}(x^-) < \omega < F_{X_n}(x^+).
+ \]
+ Hence we have
+ \[
+ x^- < \tilde{X}_n(\omega) < x^+.
+ \]
+ Then it follows that $|\tilde{X}_n(\omega) - \tilde{X}(\omega)| < \varepsilon$.
+
+ However, this doesn't work, since $\tilde{X}(\omega) < x^+$ only implies $\omega \leq F_X(x^+)$, and our argument will break down. So we do a funny thing where we introduce a new variable $\omega^+$.
+
+ Since $\tilde{X}$ is continuous at $\omega$, we can find $\omega^+\in (\omega, 1)$ such that $\tilde{X}(\omega^+) \leq x^+$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (0, 0) -- (4, 0);
+ \draw [->] (0, 0) -- (0, 3);
+
+ \draw [thick, mblue] (0, 0) .. controls (1, 2) and (3, 1) .. (4, 3) node [right] {$\tilde{X}(\omega)$};
+
+ \node [circ] at (2, 0) {};
+ \node [below] at (2, 0) {$\omega$};
+ \node [circ] at (2, 1.5) {};
+
+ \draw [dashed] (0, 1.24) node [left] {$x^-$} -- (1.3, 1.24) node [circ] {};
+ \draw [dashed] (0, 1.76) node [left] {$x^+$} -- (2.7, 1.76) node [circ] {};
+
+ \node [circ] at (2.5, 0) {};
+ \node [below] at (2.5, 0) {$\omega^+$};
+ \draw [dashed] (2.5, 0) -- (2.5, 1.63);
+ \draw [dashed] (2, 0) -- (2, 1.5);
+
+ \draw [latex'-latex'] (0.5, 1.24) -- (0.5, 1.76) node [pos=0.5, right] {$\varepsilon$};
+ \end{tikzpicture}
+ \end{center}
+ Then we have
+ \[
+ x^- < \tilde{X}(\omega) \leq \tilde{X}(\omega^+) \leq x^+.
+ \]
+ By the equivalences above, this gives
+ \[
+ F_X(x^-) < \omega < \omega^+ \leq F_X(x^+).
+ \]
+ So for sufficiently large $n$, we have
+ \[
+ F_{X_n}(x^-) < \omega < F_{X_n}(x^+).
+ \]
+ So we have
+ \[
+ x^- < \tilde{X}_n(\omega) \leq x^+,
+ \]
+ and we are done.\qedhere
+ \end{enumerate}
+\end{proof}
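For the earlier uniform-on-$[0, 1/n]$ example, the coupling in part (ii) is explicit: $\tilde{X}_n(\omega) = \inf\{x: \omega \leq F_{X_n}(x)\} = \omega/n$, which converges to $\tilde{X}(\omega) = 0$ for \emph{every} $\omega$. A numerical sketch, computing the generalised inverses by bisection (an approximation of the exact infimum, with an arbitrary search window):

```python
def quantile(F, omega, lo=-10.0, hi=10.0, iters=60):
    """inf{x : omega <= F(x)} by bisection, for a non-decreasing
    right-continuous F that crosses omega inside [lo, hi]."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if omega <= F(mid):
            hi = mid          # mid is an upper bound for the infimum
        else:
            lo = mid
    return hi

def F_n(n):
    """Distribution function of Uniform[0, 1/n]."""
    return lambda x: 0.0 if x <= 0 else (n * x if x < 1.0 / n else 1.0)

F_0 = lambda x: 1.0 if x >= 0 else 0.0   # the constant random variable 0

omega = 0.7
assert abs(quantile(F_0, omega)) < 1e-9              # X~(omega) = 0
for n in (1, 10, 100):
    # X~_n(omega) = omega / n -> 0: the coupling converges pointwise
    assert abs(quantile(F_n(n), omega) - omega / n) < 1e-9
```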
+
+\subsection{Tail events}
+Finally, we are going to quickly look at tail events. These are events that depend only on the asymptotic behaviour of a sequence of random variables.
+\begin{defi}[Tail $\sigma$-algebra]\index{tail $\sigma$-algebra}\index{$\sigma$-algebra!tail}
+ Let $(X_n)$ be a sequence of random variables. We let
+ \[
+ \mathcal{T}_n = \sigma(X_{n + 1}, X_{n + 2}, \cdots),
+ \]
+ and
+ \[
+ \mathcal{T} = \bigcap_n \mathcal{T}_n.
+ \]
+ Then $\mathcal{T}$ is the \emph{tail $\sigma$-algebra}.
+\end{defi}
+Then $\mathcal{T}$-measurable \index{$\mathcal{T}$-measurable} events and random variables only depend on the asymptotic behaviour of the $X_n$'s.
+
+\begin{eg}
+ Let $(X_n)$ be a sequence of real-valued random variables. Then
+ \[
+ \limsup_{n \to \infty} \frac{1}{n} \sum_{j = 1}^n X_j,\quad \liminf_{n \to \infty} \frac{1}{n} \sum_{j = 1}^n X_j
+ \]
+ are $\mathcal{T}$-measurable random variables. Finally,
+ \[
+ \left\{\lim_{n \to \infty} \frac{1}{n} \sum_{j = 1}^n X_j\text{ exists }\right\} \in \mathcal{T},
+ \]
+ since this is just the set of all points where the previous two things agree.
+\end{eg}
+
+\begin{thm}[Kolmogorov 0-1 law]\index{Kolmogorov 0-1 law}
+ Let $(X_n)$ be a sequence of independent (real-valued) random variables. If $A \in \mathcal{T}$, then $\P[A] = 0$ or $1$.
+
+ Moreover, if $X$ is a $\mathcal{T}$-measurable random variable, then there exists a constant $c$ such that
+ \[
+ \P[X = c] = 1.
+ \]
+\end{thm}
+
+\begin{proof}
+ The proof is very funny the first time we see it. We are going to prove the theorem by checking something that seems very strange. We are going to show that if $A \in \mathcal{T}$, then $A$ is independent of $A$. It then follows that
+ \[
+ \P[A] = \P[A \cap A] = \P[A] \P[A],
+ \]
+ so $\P[A] = 0$ or $1$. In fact, we are going to prove that $\mathcal{T}$ is independent of $\mathcal{T}$.
+
+ Let
+ \[
+ \mathcal{F}_n = \sigma(X_1, \cdots, X_n).
+ \]
+ This $\sigma$-algebra is generated by the $\pi$-system of events of the form
+ \[
+ A = \{X_1 \leq x_1, \cdots, X_n \leq x_n\}.
+ \]
+ Similarly, $\mathcal{T}_n = \sigma(X_{n + 1}, X_{n + 2}, \cdots)$ is generated by the $\pi$-system of events of the form
+ \[
+ B = \{X_{n + 1} \leq x_{n + 1}, \cdots, X_{n + k} \leq x_{n + k}\},
+ \]
+ where $k$ is any natural number.
+
+ Since the $X_n$ are independent, we know for any such $A$ and $B$, we have
+ \[
+ \P[A \cap B] = \P[A]\P[B].
+ \]
+ Since this is true for all $A$ and $B$, it follows that $\mathcal{F}_n$ is independent of $\mathcal{T}_n$.
+
+ Since $\mathcal{T} = \bigcap_k \mathcal{T}_k \subseteq \mathcal{T}_n$ for each $n$, we know $\mathcal{F}_n$ is independent of $\mathcal{T}$.
+
+ Now $\bigcup_k \mathcal{F}_k$ is a $\pi$-system, which generates the $\sigma$-algebra $\mathcal{F}_\infty = \sigma(X_1, X_2, \cdots)$. We know that if $A \in \bigcup_n \mathcal{F}_n$, then there has to exist an index $n$ such that $A \in \mathcal{F}_n$. So $A$ is independent of $\mathcal{T}$. So $\mathcal{F}_\infty$ is independent of $\mathcal{T}$.
+
+ Finally, note that $\mathcal{T} \subseteq \mathcal{F}_\infty$. So $\mathcal{T}$ is independent of $\mathcal{T}$.
+
+ To find the constant, suppose that $X$ is $\mathcal{T}$-measurable. Then
+ \[
+ \P[X \leq x] \in \{0, 1\}
+ \]
+ for all $x \in \R$ since $\{X \leq x\} \in \mathcal{T}$.
+
+ Now take
+ \[
+ c = \inf\{x \in \R: \P[X \leq x] = 1\}.
+ \]
+ Then with this particular choice of $c$, it is easy to see that $\P[X = c] = 1$. This completes the proof of the theorem.
+\end{proof}
+
+\section{Integration}
+\subsection{Definition and basic properties}
+We are now going to work towards defining the integral of a measurable function on a measure space $(E, \mathcal{E}, \mu)$. Different sources use different notations for the integral. The following notations are all commonly used:
+\[
+ \mu(f) = \int_E f \;\d \mu = \int_E f(x) \;\d \mu(x) = \int_E f(x) \mu(\d x).
+\]
+In the case where $(E, \mathcal{E}, \mu) = (\R, \mathcal{B}, \mathrm{Lebesgue})$, people often just write this as
+\[
+ \mu(f) = \int_\R f(x)\;\d x.
+\]
+On the other hand, if $(E, \mathcal{E}, \mu) = (\Omega, \F, \P)$ is a probability space, and $X$ is a random variable, then people write the integral as $\E[X]$, the \term{expectation} of $X$.\index{$\E[X]$}\index{$\mu(f)$}
+
+So how are we going to define the integral? There are two steps to defining the integral. The idea is that we first define the integral on \emph{simple functions}, and then extend the definition to more general measurable functions by taking the limit. When we do the definition for simple functions, it will be obvious that the definition satisfies the nice properties, and we will have to check that they are preserved when we take the limit.
+
+\begin{defi}[Simple function]\index{simple function}
+ A \emph{simple function} is a measurable function that can be written as a finite non-negative linear combination of indicator functions of measurable sets, i.e.
+ \[
+ f = \sum_{k = 1}^n a_k \mathbf{1}_{A_k}
+ \]
+ for some $A_k \in \mathcal{E}$ and $a_k \geq 0$.
+\end{defi}
+Note that some sources do not assume that $a_k \geq 0$, but assuming this makes our life easier.
+
+It is obvious that
+\begin{prop}
+ A function is simple iff it is measurable, non-negative, and takes only finitely many values.
+\end{prop}
+
+\begin{defi}[Integral of simple function]\index{integral!simple function}\index{simple function!integral}
+ The integral of a simple function
+ \[
+ f = \sum_{k = 1}^n a_k \mathbf{1}_{A_k}
+ \]
+ is given by
+ \[
+ \mu(f) = \sum_{k = 1}^n a_k \mu(A_k).
+ \]
+\end{defi}
+Note that it can be that $\mu(A_k) = \infty$, but $a_k = 0$. When this happens, we are just going to declare that $0 \cdot \infty = 0$ (this makes sense because this means we are ignoring all $0 \cdot\mathbf{1}_A$ terms for any $A$). After we do this, we can check the integral is well-defined.
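A sketch of this definition in code, with the convention $0 \cdot \infty = 0$ built in. The encoding of a simple function as value/measure pairs is our own choice of representation:

```python
import math

def integral_simple(terms):
    """mu(f) = sum a_k mu(A_k) for a simple function given as
    (a_k, mu(A_k)) pairs, with the convention 0 * infinity = 0."""
    total = 0.0
    for a, mu_A in terms:
        if a == 0:
            continue          # declare 0 * mu(A) = 0, even if mu(A) = inf
        total += a * mu_A
    return total

# f = 2 * 1_{[0,3]} + 0 * 1_R: the infinite-measure set contributes nothing.
assert integral_simple([(2, 3), (0, math.inf)]) == 6
# A genuinely infinite integral is still infinite:
assert integral_simple([(1, math.inf)]) == math.inf
```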
+
+We are now going to extend this definition to non-negative measurable functions by a limiting procedure. Once we've done this, we are going to extend the definition to measurable functions by linearity of the integral. Then we would have a definition of the integral, and we are going to deduce properties of the integral using approximation.
+
+\begin{defi}[Integral]\index{integral}
+ Let $f$ be a non-negative measurable function. We set
+ \[
+ \mu(f) = \sup\{\mu(g): g \leq f, g\text{ is simple}\}.
+ \]
+ For arbitrary $f$, we write
+ \[
+ f = f^+ - f^-, \quad\text{where } f^+ = f \vee 0 \text{ and } f^- = -(f \wedge 0).
+ \]
+ We put $|f| = f^+ + f^-$. We say $f$ is \emph{integrable}\index{integrable function} if $\mu(|f|) < \infty$. In this case, set
+ \[
+ \mu(f) = \mu(f^+) - \mu(f^-).
+ \]
+ If only one of $\mu(f^+), \mu(f^-) < \infty$, then we can still make the above definition, and the result will be infinite.
+\end{defi}
+
+In the case where we are integrating over (a subset of) the reals, we call it the \term{Lebesgue integral}.
+
+\begin{prop}
+ Let $f: [0, 1] \to \R$ be Riemann integrable. Then it is also Lebesgue integrable, and the two integrals agree.
+\end{prop}
+We will not prove this, but this immediately gives us results like the fundamental theorem of calculus, and also helps us to actually compute the integral. However, note that this does not hold for infinite domains, as you will see in the second example sheet.
+
+But the Lebesgue integral is more general: a lot of functions are Lebesgue integrable but not Riemann integrable.
+\begin{eg}
+ Take the standard non-Riemann integrable function
+ \[
+ f = \mathbf{1}_{[0, 1]\setminus \Q}.
+ \]
+ Then $f$ is not Riemann integrable, but it is Lebesgue integrable, since
+ \[
+ \mu(f) = \mu([0, 1] \setminus \Q) = 1.
+ \]
+\end{eg}
+
+We are now going to study some basic properties of the integral. We will first look at the properties of integrals of simple functions, and then extend them to general integrable functions.
+
+For $f, g$ simple, and $\alpha, \beta \geq 0$, we have that
+\[
+ \mu(\alpha f + \beta g) = \alpha \mu(f) + \beta\mu(g).
+\]
+So the integral is linear.
+
+Another important property is monotonicity --- if $f \leq g$, then $\mu(f) \leq \mu(g)$.
+
+Finally, we have $f = 0$ a.e. iff $\mu(f) = 0$. It is absolutely crucial here that we are talking about non-negative functions.
+
+Our goal is to show that these three properties are also satisfied for arbitrary non-negative measurable functions, and the first two hold for integrable functions.
+
+In order to achieve this, we prove a very important tool --- the monotone convergence theorem. Later, we will also learn about the dominated convergence theorem and Fatou's lemma. These are the main and very important results about exchanging limits and integration.
+
+\begin{thm}[Monotone convergence theorem]\index{monotone convergence theorem}
+ Suppose that $(f_n), f$ are non-negative measurable with $f_n \nearrow f$. Then $\mu(f_n) \nearrow \mu(f)$.
+\end{thm}
+
+In the proof we will use the fact that the integral is monotonic, which we shall prove later.
+\begin{proof}
+ We will split the proof into four steps. We will prove each of the following in turn:
+ \begin{enumerate}
+ \item If $f_n$ and $f$ are indicator functions, then the theorem holds.
+ \item If $f$ is an indicator function, then the theorem holds.
+ \item If $f$ is simple, then the theorem holds.
+ \item If $f$ is non-negative measurable, then the theorem holds.
+ \end{enumerate}
+ Each part follows rather straightforwardly from the previous one, and the reader is encouraged to try to prove it themselves.
+
+ \separator
+
+ We first consider the case where $f_n = \mathbf{1}_{A_n}$ and $f = \mathbf{1}_A$. Then $f_n \nearrow f$ is true iff $A_n \nearrow A$. On the other hand, $\mu(f_n) \nearrow \mu(f)$ iff $\mu(A_n) \nearrow \mu(A)$.
+
+ For convenience, we let $A_0 = \emptyset$. We can write
+ \begin{align*}
+ \mu(A) &= \mu\left(\bigcup_n A_n \setminus A_{n - 1}\right) \\
+ &= \sum_{n = 1}^\infty \mu(A_n \setminus A_{n - 1}) \\
+ &= \lim_{N \to \infty} \sum_{n = 1}^N \mu(A_n \setminus A_{n - 1}) \\
+ &= \lim_{N \to \infty}\mu (A_N).
+ \end{align*}
+ So done.
+
+ \separator
+
+ We next consider the case where $f = \mathbf{1}_A$ for some $A$. Fix $\varepsilon > 0$, and set
+ \[
+ A_n = \{f_n > 1 - \varepsilon\} \in \mathcal{E}.
+ \]
+ Then we know that $A_n \nearrow A$, as $f_n \nearrow f$. Moreover, by definition, we have
+ \[
+ (1 - \varepsilon) \mathbf{1}_{A_n} \leq f_n \leq f = \mathbf{1}_A.
+ \]
+ As $A_n \nearrow A$, we have that
+ \[
+ (1 - \varepsilon) \mu(f) = (1 - \varepsilon) \lim_{n \to \infty} \mu(A_n) \leq \lim_{n \to \infty} \mu(f_n) \leq \mu(f)
+ \]
+ since $f_n \leq f$. Since $\varepsilon$ is arbitrary, we know that
+ \[
+ \lim_{n \to \infty} \mu(f_n) = \mu(f).
+ \]
+
+ \separator
+
+ Next, we consider the case where $f$ is simple. We write
+ \[
+ f = \sum_{k = 1}^m a_k \mathbf{1}_{A_k},
+ \]
+ where $a_k > 0$ and $A_k$ are pairwise disjoint. Since $f_n \nearrow f$, we know
+ \[
+ a_k^{-1} f_n \mathbf{1}_{A_k} \nearrow \mathbf{1}_{A_k}.
+ \]
+ So we have
+ \[
+ \mu(f_n) = \sum_{k = 1}^m \mu(f_n \mathbf{1}_{A_k}) = \sum_{k = 1}^m a_k \mu(a_k^{-1} f_n \mathbf{1}_{A_k}) \to \sum_{k = 1}^m a_k \mu(A_k) = \mu(f).
+ \]
+
+ \separator
+
+ Suppose $f$ is non-negative measurable. Suppose $g \leq f$ is a simple function. As $f_n \nearrow f$, we know $f_n \wedge g \nearrow f \wedge g = g$. So by the previous case, we know that
+ \[
+ \mu(f_n \wedge g) \to \mu(g).
+ \]
+ We also know that
+ \[
+ \mu(f_n) \geq \mu(f_n\wedge g).
+ \]
+ So we have
+ \[
+ \lim_{n \to \infty} \mu(f_n) \geq \mu(g)
+ \]
+ for all $g \leq f$. This is possible only if
+ \[
+ \lim_{n \to \infty} \mu(f_n) \geq \mu(f)
+ \]
+ by definition of the integral. However, we also know that $\mu(f_n) \leq \mu(f)$ for all $n$, again by definition of the integral. So we must have equality. So we have
+ \[
+ \mu(f) = \lim_{n \to \infty} \mu(f_n).\qedhere
+ \]
+\end{proof}
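We can watch the monotone convergence theorem in action for $f(x) = x$ on $[0, 1]$ with the dyadic approximations $f_n = 2^{-n}\lfloor 2^n f\rfloor$ (the cap $\wedge\, n$ used later is inactive here since $f \leq 1$): the integrals $\mu(f_n) = (1 - 2^{-n})/2$ increase to $\mu(f) = 1/2$. A sketch, computed exactly with rationals:

```python
from fractions import Fraction

def mu_fn(n):
    """Exact integral of f_n = 2^(-n) * floor(2^n x) over [0, 1]:
    f_n takes the value k / 2^n on [k/2^n, (k+1)/2^n)."""
    return sum(Fraction(k, 2**n) * Fraction(1, 2**n) for k in range(2**n))

vals = [mu_fn(n) for n in range(1, 11)]
# The integrals increase monotonically (monotone convergence in action)...
assert all(a < b for a, b in zip(vals, vals[1:]))
# ...matching the closed form (1 - 2^(-n)) / 2, which tends to mu(f) = 1/2.
assert all(v == (1 - Fraction(1, 2**n)) / 2
           for n, v in enumerate(vals, start=1))
```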
+
+\begin{thm}
+ Let $f, g$ be non-negative measurable, and $\alpha, \beta \geq 0$. We have that
+ \begin{enumerate}
+ \item $\mu(\alpha f + \beta g) = \alpha \mu(f) + \beta \mu(g)$.
+ \item $f \leq g$ implies $\mu(f) \leq \mu(g)$.
+ \item $f = 0$ a.e. iff $\mu(f) = 0$.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let
+ \begin{align*}
+ f_n &= 2^{-n}\lfloor 2^n f\rfloor \wedge n,\\
+ g_n &= 2^{-n}\lfloor 2^n g\rfloor \wedge n.
+ \end{align*}
+ Then $f_n, g_n$ are simple with $f_n \nearrow f$ and $g_n \nearrow g$. Hence $\mu(f_n) \nearrow \mu(f)$ and $\mu(g_n) \nearrow \mu(g)$ and $\mu(\alpha f_n + \beta g_n) \nearrow \mu(\alpha f + \beta g)$, by the monotone convergence theorem. As $f_n, g_n$ are simple, we have that
+ \[
+ \mu(\alpha f_n + \beta g_n) = \alpha \mu(f_n) + \beta \mu(g_n).
+ \]
+ Taking the limit as $n \to \infty$, we get
+ \[
+ \mu(\alpha f + \beta g) = \alpha \mu(f) + \beta \mu(g).
+ \]
+ \item We shall be careful not to use the monotone convergence theorem. We have
+ \begin{align*}
+ \mu(g) &= \sup\{\mu(h): h \leq g\text{ simple}\}\\
+ &\geq \sup \{\mu(h): h \leq f\text{ simple}\}\\
+ &= \mu(f).
+ \end{align*}
+ \item Suppose $f \not= 0$ a.e. Let
+ \[
+ A_n = \left\{x: f(x) > \frac{1}{n}\right\}.
+ \]
+ Then
+ \[
+ \{x: f(x) \not= 0\} = \bigcup_n A_n.
+ \]
+ Since the set on the left has positive measure, countable subadditivity implies there is some $A_n$ with positive measure. For that $n$, we define
+ \[
+ h = \frac{1}{n} \mathbf{1}_{A_n}.
+ \]
+ Then $\mu(f) \geq \mu(h) > 0$. So $\mu(f) \not= 0$.
+
+ Conversely, suppose $f = 0$ a.e. We let
+ \[
+ f_n = 2^{-n} \lfloor 2^n f\rfloor \wedge n
+ \]
+ be a simple function. Then $f_n \nearrow f$ and $f_n = 0$ a.e. So
+ \[
+ \mu(f) = \lim_{n \to \infty}\mu(f_n) = 0.\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+
+We now prove the analogous statement for general integrable functions.
+\begin{thm}
+ Let $f, g$ be integrable, and $\alpha, \beta \in \R$. We have that
+ \begin{enumerate}
+ \item $\mu(\alpha f + \beta g) = \alpha \mu(f) + \beta \mu(g)$.
+ \item $f \leq g$ implies $\mu(f) \leq \mu(g)$.
+ \item $f = 0$ a.e. implies $\mu(f) = 0$.
+ \end{enumerate}
+\end{thm}
+Note that in the last case, the converse is no longer true, as one can easily see from the sign function $\sgn: [-1, 1] \to \R$.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item We are going to prove these by applying the previous theorem.
+
+ By definition of the integral, we have $\mu(-f) = - \mu(f)$. Also, if $\alpha \geq 0$, then
+ \[
+ \mu(\alpha f) = \mu(\alpha f^+) - \mu(\alpha f^-) = \alpha \mu(f^+) - \alpha \mu(f^-) = \alpha \mu(f).
+ \]
+ Combining these two properties, it then follows that if $\alpha$ is a real number, then
+ \[
+ \mu(\alpha f) = \alpha \mu(f).
+ \]
+ To finish the proof of (i), we have to show that $\mu(f + g) = \mu(f) + \mu(g)$. We know that this is true for non-negative functions, so we need to employ a little trick to make this a statement about the non-negative version. If we let $h = f + g$, then we can write this as
+ \[
+ h^+ - h^- = (f^+ - f^-) + (g^+ - g^-).
+ \]
+ We now rearrange this as
+ \[
+ h^+ + f^- + g^- = f^+ + g^+ + h^-.
+ \]
+ Now everything is non-negative measurable. So applying $\mu$ gives
+ \[
+ \mu(h^+) + \mu(f^-) + \mu(g^-) = \mu(f^+) + \mu(g^+) + \mu(h^-).
+ \]
+ Rearranging, we obtain
+ \[
+ \mu(h^+) - \mu (h^-) = \mu(f^+) - \mu(f^-) + \mu(g^+) - \mu(g^-).
+ \]
+ This is exactly the same thing as saying
+ \[
+ \mu(f + g) = \mu(h) = \mu(f) + \mu(g).
+ \]
+ \item If $f \leq g$, then $g - f \geq 0$. So $\mu(g - f) \geq 0$. By (i), we know $\mu(g) - \mu(f) \geq 0$. So $\mu(g) \geq \mu(f)$.
+
+ \item If $f = 0$ a.e., then $f^+, f^- = 0$ a.e. So $\mu(f^+) = \mu(f^-) = 0$. So $\mu(f) = \mu(f^+) - \mu(f^-) = 0$.\qedhere
+ \end{enumerate}
+\end{proof}
+As mentioned, the converse to (iii) is no longer true. However, we do have the following partial converse:
+\begin{prop}
+ If $\mathcal{A}$ is a $\pi$-system with $E \in \mathcal{A}$ and $\sigma(\mathcal{A}) = \mathcal{E}$, and $f$ is an integrable function such that
+ \[
+ \mu(f\mathbf{1}_A) = 0
+ \]
+ for all $A \in \mathcal{A}$, then $f = 0$ a.e.
+\end{prop}
+
+\begin{proof}
+ Let
+ \[
+ \mathcal{D} = \{A \in \mathcal{E}: \mu (f\mathbf{1}_A) = 0\}.
+ \]
+ It follows immediately from the properties of the integral that $\mathcal{D}$ is a d-system. So $\mathcal{D} = \mathcal{E}$ by Dynkin's lemma. Let
+ \begin{align*}
+ A^+ &= \{x \in E: f(x) > 0\},\\
+ A^- &= \{x \in E: f(x) < 0\}.
+ \end{align*}
+ Then $A^{\pm} \in \mathcal{E}$, and
+ \[
+ \mu(f \mathbf{1}_{A^+}) = \mu(f \mathbf{1}_{A^-}) = 0.
+ \]
+ So $f\mathbf{1}_{A^+}$ and $f \mathbf{1}_{A^-}$ vanish a.e. So $f$ vanishes a.e.
+\end{proof}
+
+\begin{prop}
+ Suppose that $(g_n)$ is a sequence of non-negative measurable functions. Then we have
+ \[
+ \mu\left(\sum_{n = 1}^\infty g_n\right) = \sum_{n = 1}^\infty \mu(g_n).
+ \]
+\end{prop}
+
+\begin{proof}
+ We know
+ \[
+ \left(\sum_{n = 1}^N g_n\right) \nearrow \left(\sum_{n = 1}^\infty g_n\right)
+ \]
+ as $N \to \infty$. So by the monotone convergence theorem, we have
+ \[
+ \sum_{n = 1}^N \mu(g_n) = \mu \left(\sum_{n = 1}^N g_n\right) \nearrow \mu\left(\sum_{n = 1}^\infty g_n\right).
+ \]
+ But we also know that
+ \[
+ \sum_{n = 1}^N \mu(g_n) \nearrow \sum_{n = 1}^\infty \mu(g_n)
+ \]
+ by definition. So we are done.
+\end{proof}
+
+So for non-negative measurable functions, we can always switch the order of integration and summation.
+
+Note that we can consider summation as integration. We let $E = \N$ and $\mathcal{E} = \{\text{all subsets of $\N$}\}$, and let $\mu$ be the counting measure, so that $\mu(A)$ is the size of $A$. Then a function $f: \N \to \R$ is integrable (with a finite integral) iff the series $\sum f(n)$ converges absolutely, and in that case
+\[
+ \int f \;\d \mu = \sum_{n = 1}^\infty f(n).
+\]
+So we can just view our proposition as proving that we can swap the order of two integrals. The general statement is known as Fubini's theorem.
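Since summation is integration against the counting measure, the proposition above says that iterated sums of non-negative terms can always be swapped. A finite Python sketch (the grid `a[m][n]` is a hypothetical choice of non-negative values; exact arithmetic via `Fraction` makes the equality exact):

```python
from fractions import Fraction

# Integrating against the counting measure on N is summation: for a
# non-negative grid a[m][n], the two iterated sums must agree.
N = 30
a = [[Fraction(1, (m + 1) * (n + 1) ** 2) for n in range(N)] for m in range(N)]

rows_first = sum(sum(row) for row in a)
cols_first = sum(sum(a[m][n] for m in range(N)) for n in range(N))
assert rows_first == cols_first  # exact equality, thanks to Fraction
```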
+
+\subsection{Integrals and limits}
+We are now going to prove more things about exchanging limits and integrals. These are going to be extremely useful in the future, as we want to exchange limits and integrals a lot.
+
+\begin{thm}[Fatou's lemma]\index{Fatou's lemma}
+ Let $(f_n)$ be a sequence of non-negative measurable functions. Then
+ \[
+ \mu\left(\liminf_n f_n\right) \leq \liminf_n \mu(f_n).
+ \]
+\end{thm}
+Note that a special case was proven in the first example sheet, where we did it for the case where $f_n$ are indicator functions.
+
+\begin{proof}
+ We start with the trivial observation that if $k\geq n$, then we always have that
+ \[
+ \inf_{m \geq n} f_m \leq f_k.
+ \]
+ By the monotonicity of the integral, we know that
+ \[
+ \mu\left(\inf_{m \geq n} f_m\right) \leq \mu(f_k).
+ \]
+ for all $k \geq n$.
+
+ So we have
+ \[
+ \mu\left(\inf_{m \geq n} f_m\right) \leq \inf_{k \geq n} \mu(f_k) \leq \liminf_m \mu(f_m).
+ \]
+ It remains to show that the left hand side converges to $\mu(\liminf f_m)$. Indeed, we know that
+ \[
+ \inf_{m \geq n} f_m \nearrow \liminf_m f_m.
+ \]
+ Then by monotone convergence, we have
+ \[
+ \mu\left(\inf_{m \geq n} f_m\right) \nearrow \mu\left(\liminf_m f_m\right).
+ \]
+ So we have
+ \[
+ \mu\left(\liminf_m f_m\right) \leq \liminf_m \mu(f_m).\qedhere
+ \]
+\end{proof}
+No one ever remembers which direction Fatou's lemma goes, and this leads to many incorrect proofs and results, so it is helpful to keep the following example in mind:
+
+\begin{eg}
+ We let $(E, \mathcal{E}, \mu) = (\R, \mathcal{B}, \text{Lebesgue})$. We let
+ \[
+ f_n = \mathbf{1}_{[n, n + 1]}.
+ \]
+ Then we have
+ \[
+ \liminf_n f_n = 0,
+ \]
+ while
+ \[
+ \mu(f_n) = 1\text{ for all }n.
+ \]
+ So we have
+ \[
+ \liminf \mu(f_n) = 1,\quad \mu(\liminf f_n) = 0,
+ \]
+ and the inequality in Fatou's lemma
+ \[
+ \mu\left(\liminf_m f_m\right) \leq \liminf_m \mu(f_m)
+ \]
+ is strict in this case.
+\end{eg}
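The escaping-mass example can also be checked numerically; this Python sketch approximates the Lebesgue integral by a midpoint Riemann sum (a stand-in for the actual construction, not the construction itself):

```python
def f(n, x):
    # f_n = indicator of [n, n + 1]
    return 1.0 if n <= x <= n + 1 else 0.0

def riemann(g, a, b, steps=4000):
    # midpoint Riemann sum as a stand-in for the Lebesgue integral
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# mu(f_n) = 1 for every n: the unit of mass escapes to infinity ...
integrals = [riemann(lambda x, n=n: f(n, x), 0.0, 50.0) for n in range(10)]
assert all(abs(v - 1.0) < 1e-9 for v in integrals)
# ... while at any fixed x, f_n(x) = 0 once n > x, so liminf_n f_n = 0.
assert all(f(n, 3.7) == 0.0 for n in range(5, 60))
```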
+The next result we want to prove is the dominated convergence theorem. This is like the monotone convergence theorem, but we are going to remove the increasing and non-negative measurable condition, and add in something else.
+\begin{thm}[Dominated convergence theorem]\index{dominated convergence theorem}
+ Let $(f_n), f$ be measurable with $f_n(x) \to f(x)$ for all $x \in E$. Suppose that there is an integrable function $g$ such that
+ \[
+ |f_n| \leq g
+ \]
+ for all $n$, then we have
+ \[
+ \mu(f_n) \to \mu(f)
+ \]
+ as $n \to \infty$.
+\end{thm}
+
+\begin{proof}
+ Note that
+ \[
+ |f| = \lim_n |f_n| \leq g.
+ \]
+ So we know that
+ \[
+ \mu(|f|) \leq \mu(g) < \infty.
+ \]
+ So we know that $f$, $f_n$ are integrable.
+
+ Now note also that
+ \[
+ 0 \leq g + f_n,\quad 0 \leq g - f_n
+ \]
+ for all $n$. We are now going to apply Fatou's lemma twice with these sequences. We have that
+ \begin{align*}
+ \mu(g) + \mu(f) &= \mu(g + f) \\
+ &= \mu\left(\liminf_n (g + f_n)\right) \\
+ &\leq \liminf_n \mu(g + f_n)\\
+ &= \liminf_n (\mu(g) + \mu(f_n))\\
+ &= \mu(g) + \liminf_n \mu(f_n).
+ \end{align*}
+ Since $\mu(g)$ is finite, we know that
+ \[
+ \mu(f) \leq \liminf_n \mu(f_n).
+ \]
+ We now do the same thing with $g - f_n$. We have
+ \begin{align*}
+ \mu(g) - \mu(f) &= \mu(g - f) \\
+ &= \mu\left(\liminf_n (g - f_n)\right) \\
+ &\leq \liminf_n \mu(g - f_n)\\
+ &= \liminf_n (\mu(g) - \mu(f_n))\\
+ &= \mu(g) - \limsup_n \mu(f_n).
+ \end{align*}
+ Again, since $\mu(g)$ is finite, we know that
+ \[
+ \mu(f) \geq \limsup_n \mu(f_n).
+ \]
+ These combine to tell us that
+ \[
+ \mu(f) \leq \liminf_n \mu(f_n) \leq \limsup_n \mu(f_n) \leq \mu(f).
+ \]
+ So they must be all equal, and thus $\mu(f_n) \to \mu(f)$.
+\end{proof}
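As a sanity check, here is a numerical Python sketch of the theorem for the hypothetical sequence $f_n(x) = \sin(x/n)$ on $[0, 1]$, which is dominated by the integrable constant function $1$ and tends to $0$ pointwise:

```python
import math

def riemann(g, a, b, steps=10000):
    # midpoint Riemann sum as a stand-in for the integral
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# |f_n| <= 1 on [0, 1] and f_n -> 0 pointwise, so dominated convergence
# forces mu(f_n) -> mu(0) = 0.
vals = [riemann(lambda x, n=n: math.sin(x / n), 0.0, 1.0) for n in (1, 10, 100, 1000)]
assert all(abs(v) <= 1.0 for v in vals)
assert vals[-1] < 1e-3
```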
+
+\subsection{New measures from old}
+We have previously considered several ways of constructing measures from old ones, such as the image measure. We are now going to study a few more ways of constructing new measures, and see how integrals behave when we do these.
+
+\begin{defi}[Restriction of measure space]\index{restriction of measure space}\index{measure space!restriction}
+ Let $(E, \mathcal{E}, \mu)$ be a measure space, and let $A \in \mathcal{E}$. The \emph{restriction} of the measure space to $A$ is $(A, \mathcal{E}_A, \mu_A)$, where
+ \[
+ \mathcal{E}_A = \{B \in \mathcal{E}: B \subseteq A\},
+ \]
+ and $\mu_A$ is the restriction of $\mu$ to $\mathcal{E}_A$, i.e.
+ \[
+ \mu_A(B) = \mu(B)
+ \]
+ for all $B \in \mathcal{E}_A$.
+\end{defi}
+
+It is easy to check the following:
+\begin{lemma}
+ For $(E, \mathcal{E}, \mu)$ a measure space and $A \in \mathcal{E}$, the restriction to $A$ is a measure space.\qed
+\end{lemma}
+
+%\begin{proof}
+% We check that $\mathcal{E}_A$ is a $\sigma$-algebra.
+%
+% Since $\emptyset \in \mathcal{E}$ and $\emptyset\subseteq A$, we know $\phi \in \mathcal{E}_A$.
+%
+% Also, if $B \in \mathcal{E}$ and $B \subseteq A$, then $A \cap B^C \in \mathcal{E}$. So $A \setminus B \in \mathcal{E}_A$.
+%
+% Finally, if $B_n$ is a sequence in $\mathcal{E}$ and $B_n \subseteq A$ for all $n$, then $\bigcup_n B_n \in \mathcal{E}$ and $\bigcup_n B_n \subseteq A$. So $\bigcup_n B_n \in \mathcal{E}_A$.
+%
+% It is clear that $\mu_A$ is also a measure.
+%\end{proof}
+
+\begin{prop}
+ Let $(E, \mathcal{E}, \mu)$ and $(F, \mathcal{F}, \mu')$ be measure spaces and $A \in \mathcal{E}$. Let $f: E \to F$ be a measurable function. Then $f|_A$ is $\mathcal{E}_A$-measurable.
+\end{prop}
+
+\begin{proof}
+ Let $B \in \mathcal{F}$. Then
+ \[
+ (f|_A)^{-1}(B) = f^{-1}(B) \cap A \in \mathcal{E}_A.\qedhere
+ \]
+\end{proof}
+
+Similarly, we have
+\begin{prop}
+ If $f$ is integrable, then $f|_A$ is $\mu_A$-integrable and $\mu_A(f|_A) = \mu (f\mathbf{1}_A)$. \qed
+\end{prop}
+
+Note that this means we have
+\[
+ \mu(f\mathbf{1}_A) = \int_E f\mathbf{1}_A \;\d \mu = \int_A f \;\d \mu_A.
+\]
+Usually, we are lazy and just write
+\[
+ \mu(f\mathbf{1}_A) = \int_A f\;\d \mu.
+\]
+In the particular case of Lebesgue integration, if $A$ is an interval with left and right end points $a, b$ (i.e.\ it can be open, closed, half open or half closed), then we write
+\[
+ \int_A f\;\d \mu = \int_a^b f(x) \;\d x.
+\]
+There is another construction we would be interested in.
+\begin{defi}[Pushforward/image of measure]\index{pushforward of measure}\index{image measure}\index{measure!pushforward}\index{measure!image}
+ Let $(E, \mathcal{E})$ and $(G, \mathcal{G})$ be measure spaces, and $f: E \to G$ a measurable function. If $\mu$ is a measure on $(E, \mathcal{E})$, then
+ \[
+ \nu = \mu \circ f^{-1}
+ \]
+ is a measure on $(G, \mathcal{G})$, known as the \emph{pushforward} or \emph{image} measure.
+\end{defi}
+We have already seen this before, but we can apply this to integration as follows:
+
+\begin{prop}
+ If $g$ is a non-negative measurable function on $G$, then
+ \[
+ \nu(g) = \mu(g \circ f).
+ \]
+\end{prop}
+
+\begin{proof}
+ Exercise using the monotone class theorem (see example sheet).
+\end{proof}
+
+Finally, we can specify a measure by \emph{specifying a density}.
+\begin{defi}[Density]\index{density}
+ Let $(E, \mathcal{E}, \mu)$ be a measure space, and $f$ be a non-negative measurable function. We define
+ \[
+ \nu(A) = \mu(f \mathbf{1}_A)
+ .\]
+ Then $\nu$ is a measure on $(E, \mathcal{E})$.
+\end{defi}
+
+\begin{prop}
+ The $\nu$ defined above is indeed a measure.
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item $\nu(\emptyset) = \mu(f\mathbf{1}_\emptyset) = \mu(0) = 0$.
+ \item If $(A_n)$ is a disjoint sequence in $\mathcal{E}$, then
+ \[
+ \nu\left(\bigcup A_n\right) = \mu(f\mathbf{1}_{\bigcup A_n}) = \mu\left(f \sum\mathbf{1}_{A_n}\right) = \sum \mu\left(f \mathbf{1}_{A_n}\right) = \sum \nu(A_n).\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{defi}[Density]\index{density!random variable}\index{random variable!density}
+ Let $X$ be a random variable. We say $X$ has a density if its law $\mu_X$ has a density with respect to the Lebesgue measure. In other words, there exists $f_X$ non-negative measurable so that
+ \[
+ \mu_X(A) = \P[X \in A] = \int_A f_X(x)\;\d x.
+ \]
+ In this case, for any non-negative measurable $g$, we have that
+ \[
+ \E[g(X)] = \int_\R g(x) f_X(x) \;\d x.
+ \]
+\end{defi}
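As an illustration, a Python sketch of the density formula for a hypothetical exponential random variable, with density $f_X(x) = e^{-x}$ on $[0, \infty)$, approximating the Lebesgue integral by a midpoint Riemann sum:

```python
import math

def riemann(g, a, b, steps=50000):
    # midpoint Riemann sum as a stand-in for the Lebesgue integral
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

# E[g(X)] = int g(x) f_X(x) dx; for g(x) = x and f_X(x) = e^{-x} this is 1.
# The integral is truncated at 50, where the tail is negligible.
expectation = riemann(lambda x: x * math.exp(-x), 0.0, 50.0)
assert abs(expectation - 1.0) < 1e-4
```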
+
+\subsection{Integration and differentiation}
+In ``normal'' calculus, we had three results involving both integration and differentiation. One was the fundamental theorem of calculus, which we already stated. The others are the change of variables formula, and differentiating under the integral sign.
+
+We start by proving the change of variables formula.
+\begin{prop}[Change of variables formula]\index{change of variables formula}
+ Let $\phi: [a, b] \to \R$ be continuously differentiable and increasing. Then for any bounded Borel function $g$, we have
+ \[
+ \int_{\phi(a)}^{\phi(b)} g(y) \;\d y = \int_a^b g(\phi(x))\phi'(x)\;\d x.\tag{$*$}
+ \]
+\end{prop}
+
+We will use the monotone class theorem.
+\begin{proof}
+ We let
+ \[
+ V = \{\text{Borel functions $g$ such that $(*)$ holds}\}.
+ \]
+ We will want to use the monotone class theorem to show that this includes all bounded functions.
+
+ We already know that
+ \begin{enumerate}
+ \item $V$ contains $\mathbf{1}_A$ for all $A$ in the $\pi$-system of intervals of the form $[u, v] \subseteq [a, b]$. This is just the fundamental theorem of calculus.
+ \item By linearity of the integral, $V$ is indeed a vector space.
+ \item Finally, let $(g_n)$ be a sequence in $V$, and $g_n \geq 0$, $g_n \nearrow g$. Then we know that
+ \[
+ \int_{\phi(a)}^{\phi(b)} g_n(y) \;\d y = \int_a^b g_n(\phi(x))\phi'(x)\;\d x.
+ \]
+ By the monotone convergence theorem, these converge to
+ \[
+ \int_{\phi(a)}^{\phi(b)} g(y) \;\d y = \int_a^b g(\phi(x))\phi'(x) \;\d x.
+ \]
+ \end{enumerate}
+ Then by the monotone class theorem, $V$ contains all bounded Borel functions.
+\end{proof}
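A numerical Python sketch of the formula, with the hypothetical choices $\phi(x) = x^2$ (increasing and continuously differentiable on $[1, 2]$) and $g(y) = 1/y$, where both sides should equal $\log 4$:

```python
import math

def riemann(g, a, b, steps=20000):
    # midpoint Riemann sum as a stand-in for the integral
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

lhs = riemann(lambda y: 1.0 / y, 1.0, 4.0)                   # int_{phi(1)}^{phi(2)} g(y) dy
rhs = riemann(lambda x: (1.0 / x ** 2) * 2.0 * x, 1.0, 2.0)  # int_1^2 g(phi(x)) phi'(x) dx
assert abs(lhs - math.log(4.0)) < 1e-6
assert abs(lhs - rhs) < 1e-6
```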
+
+The next problem is differentiation under the integral sign. We want to know when we can say
+\[
+ \frac{\d}{\d t} \int f(x, t) \;\d x = \int \frac{\partial f}{\partial t}(x, t) \;\d x.
+\]
+
+\begin{thm}[Differentiation under the integral sign]\index{differentiation under the integral sign}
+ Let $(E, \mathcal{E}, \mu)$ be a measure space, $U \subseteq \R$ an open set, and $f: U \times E \to \R$. We assume that
+ \begin{enumerate}
+ \item For any $t \in U$ fixed, the map $x \mapsto f(t, x)$ is integrable;
+ \item For any $x \in E$ fixed, the map $t \mapsto f(t, x)$ is differentiable;
+ \item There exists an integrable function $g$ such that
+ \[
+ \left|\frac{\partial f}{\partial t}(t, x)\right| \leq g(x)
+ \]
+ for all $x \in E$ and $t \in U$.
+ \end{enumerate}
+ Then the map
+ \[
+ x \mapsto \frac{\partial f}{\partial t}(t, x)
+ \]
+ is integrable for all $t$, and also the function
+ \[
+ F(t) = \int_E f(t, x) \d \mu
+ \]
+ is differentiable, and
+ \[
+ F'(t) = \int_E \frac{\partial f}{\partial t}(t, x)\;\d \mu.
+ \]
+\end{thm}
+The reason why we want the derivative to be bounded is that we want to apply the dominated convergence theorem.
+
+\begin{proof}
+ Measurability of the derivative follows from the fact that it is a limit of measurable functions, and then integrability follows since it is bounded by $g$.
+
+ Suppose $(h_n)$ is a positive sequence with $h_n \to 0$. Then let
+ \[
+ g_n(x) = \frac{f(t + h_n, x) - f(t, x)}{h_n} - \frac{\partial f}{\partial t}(t, x).
+ \]
+ Since $f$ is differentiable, we know that $g_n(x) \to 0$ as $n \to \infty$. Moreover, by the mean value theorem, we know that
+ \[
+ |g_n(x)| \leq 2g(x).
+ \]
+ On the other hand, by definition of $F(t)$, we have
+ \[
+ \frac{F(t + h_n) - F(t)}{h_n} - \int_E \frac{\partial f}{\partial t}(t, x) \;\d \mu = \int_E g_n(x)\;\d \mu.
+ \]
+ By dominated convergence, we know the RHS tends to $0$. So we know
+ \[
+ \lim_{n \to \infty} \frac{F(t + h_n) - F(t)}{h_n} = \int_E \frac{\partial f}{\partial t}(t, x)\;\d \mu.
+ \]
+ Since $h_n$ was arbitrary, it follows that $F'(t)$ exists and is equal to the integral.
+\end{proof}
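As a numerical sketch, take the hypothetical $f(t, x) = e^{tx}$ on $U \times E = \R \times [0, 1]$; the dominating function can be taken constant ($x e^{tx} \leq e^2$ for $t$ near $1$), so the theorem applies, and the difference quotient of $F$ should match the integral of the $t$-derivative:

```python
import math

def riemann(g, a, b, steps=20000):
    # midpoint Riemann sum as a stand-in for the integral
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

def F(t):
    # F(t) = int_0^1 e^{t x} dx
    return riemann(lambda x: math.exp(t * x), 0.0, 1.0)

t, eps = 1.0, 1e-5
numeric_derivative = (F(t + eps) - F(t - eps)) / (2 * eps)
# differentiating under the integral sign: int_0^1 x e^{t x} dx
swapped = riemann(lambda x: x * math.exp(t * x), 0.0, 1.0)
assert abs(numeric_derivative - swapped) < 1e-4
```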
+
+\subsection{Product measures and Fubini's theorem}
+Recall the following definition of the product $\sigma$-algebra.
+\begin{defi}[Product $\sigma$-algebra]\index{product $\sigma$-algebra}\index{$\sigma$-algebra!product}
+ Let $(E_1, \mathcal{E}_1, \mu_1)$ and $(E_2, \mathcal{E}_2, \mu_2)$ be finite measure spaces. We let
+ \[
+ \mathcal{A} = \{A_1 \times A_2: A_1 \in \mathcal{E}_1, A_2 \in \mathcal{E}_2\}.
+ \]
+ Then $\mathcal{A}$ is a $\pi$-system on $E_1 \times E_2$. The \emph{product $\sigma$-algebra} is
+ \[
+ \mathcal{E} = \mathcal{E}_1 \otimes \mathcal{E}_2 = \sigma(\mathcal{A}).
+ \]
+\end{defi}
+We now want to construct a measure on the product $\sigma$-algebra. We can, of course, just apply the Caratheodory extension theorem, but we would want a more explicit description of the integral. The idea is to define, for $A \in \mathcal{E}_1 \otimes \mathcal{E}_2$,
+\[
+ \mu(A) = \int_{E_1} \left(\int_{E_2} \mathbf{1}_A(x_1, x_2)\;\mu_2(\d x_2)\right)\mu_1(\d x_1).
+\]
+Doing this has the advantage that it would help us in a step of proving Fubini's theorem.
+
+However, before we can make this definition, we need to do some preparation to make sure the above statement actually makes sense:
+\begin{lemma}
+ Let $(E, \mathcal{E}) = (E_1 \times E_2, \mathcal{E}_1 \otimes \mathcal{E}_2)$ be a product of measurable spaces. Suppose $f: E \to \R$ is an $\mathcal{E}$-measurable function. Then
+ \begin{enumerate}
+ \item For each $x_2 \in E_2$, the function $x_1 \mapsto f(x_1, x_2)$ is $\mathcal{E}_1$-measurable.
+ \item If $f$ is bounded or non-negative measurable, then
+ \[
+ f_2(x_2) = \int_{E_1} f(x_1, x_2)\; \mu_1(\d x_1)
+ \]
+ is $\mathcal{E}_2$-measurable.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}
+% We are again going to use the monotone class theorem. Let $V$ be the set of all $\mathcal{E}$-measurable functions such that (i) holds. We have
+% \begin{enumerate}
+% \item It is clear that $\mathbf{1}_E$ and $\mathbf{1}_A \in V$ for all $A \in \mathcal{A}$, where $\mathcal{A}$ is as in the definition of the product $\sigma$-algebra.
+% \item It is also clear that $V$ is a vector space.
+% \item If $(f_n)$ is a sequence in $V$ with $f_n \geq 0$ and $f_n \nearrow f$, then we know that $(x_2 \mapsto f_n(x_1, x_2)) \nearrow x_2 \mapsto f(x_2, x_2)$. Hence $x_2 \mapsto f(x_1, x_2)$ is measurable as a limit of measurable functions.
+% \end{enumerate}
+% By the monotone class theorem, $V$ contains all bounded measurable functions.
+%
+% If $f$ is $\mathcal{E}$-measurable, then $(f \vee (-n))\wedge n$ is a bounded, measurable function. So $f \vee (-n) \vee n \in V$ for all $n$. So
+% \[
+% f = \lim_{n \to \infty} (f \vee ((-n))\wedge n \in V.
+% \]
+% So we are done.
+%
+ The first part follows immediately from the fact that for a fixed $x_2$, the map $\iota_1: E_1 \to E$ given by $\iota_1(x_1) = (x_1, x_2)$ is measurable, and that the composition of measurable functions is measurable.
+
+ For the second part, we use the monotone class theorem. We let $V$ be the set of all measurable functions $f$ such that $x_2 \mapsto \int_{E_1} f(x_1, x_2) \mu_1(\d x_1)$ is $\mathcal{E}_2$-measurable.
+ \begin{enumerate}
+ \item It is clear that $\mathbf{1}_E, \mathbf{1}_A \in V$ for all $A \in \mathcal{A}$ (where $\mathcal{A}$ is as in the definition of the product $\sigma$-algebra).
+ \item $V$ is a vector space by linearity of the integral.
+ \item Suppose $(f_n)$ is a non-negative sequence in $V$ and $f_n \nearrow f$, then
+ \[
+ \left(x_2 \mapsto \int_{E_1} f_n(x_1, x_2)\; \mu_1(\d x_1)\right) \nearrow \left(x_2 \mapsto \int_{E_1} f(x_1, x_2)\;\mu_1(\d x_1)\right)
+ \]
+ by the monotone convergence theorem. So $f \in V$.
+ \end{enumerate}
+ So the monotone class theorem tells us $V$ contains all bounded measurable functions.
+
+ Now if $f$ is a general non-negative measurable function, then $f \wedge n$ is bounded and measurable, hence $f\wedge n \in V$. Therefore $f \in V$ by the monotone convergence theorem.
+\end{proof}
+
+\begin{thm}
+ There exists a unique measure $\mu = \mu_1 \otimes \mu_2$ on $\mathcal{E}$ such that
+ \[
+ \mu(A_1 \times A_2) = \mu_1(A_1) \mu_2(A_2)
+ \]
+ for all $A_1 \times A_2 \in \mathcal{A}$.
+\end{thm}
+
+Here it is crucial that the measure space is finite. Actually, everything still works for $\sigma$-finite measure spaces, as we can just reduce to the finite case. However, things start to go wrong if we don't have $\sigma$-finite measure spaces.
+
+\begin{proof}
+ One might be tempted to just apply the Caratheodory extension theorem, but we have a more direct way of doing it here, by using integrals. We define
+ \[
+ \mu(A) = \int_{E_1} \left(\int_{E_2} \mathbf{1}_A(x_1, x_2)\;\mu_2(\d x_2)\right)\mu_1(\d x_1).
+ \]
+ Here the previous lemma is very important. It tells us that these integrals actually make sense!
+
+ We first check that this is a measure:
+ \begin{enumerate}
+ \item $\mu(\emptyset) = 0$ is immediate since $1_{\emptyset} = 0$.
+ \item Suppose $(A_n)$ is a disjoint sequence and $A = \bigcup A_n$. Then we have
+ \begin{align*}
+ \mu(A) &= \int_{E_1} \left(\int_{E_2} \mathbf{1}_A(x_1, x_2)\;\mu_2(\d x_2)\right)\mu_1(\d x_1)\\
+ &= \int_{E_1} \left(\int_{E_2} \sum_n \mathbf{1}_{A_n}(x_1, x_2) \;\mu_2(\d x_2)\right)\mu_1(\d x_1)\\
+ \intertext{We now use the fact that integration commutes with the sum of non-negative measurable functions to get}
+ &= \int_{E_1} \left(\sum_n \left(\int_{E_2} \mathbf{1}_{A_n}(x_1, x_2)\;\mu_2(\d x_2)\right)\right)\mu_1(\d x_1)\\
+ &= \sum_n \int_{E_1} \left(\int_{E_2} \mathbf{1}_{A_n}(x_1, x_2)\;\mu_2(\d x_2)\right)\mu_1(\d x_1)\\
+ &= \sum_n \mu(A_n).
+ \end{align*}
+ \end{enumerate}
+ So we have a working measure, and it clearly satisfies
+ \[
+ \mu(A_1 \times A_2) = \mu_1(A_1)\mu_2(A_2).
+ \]
+ Uniqueness follows because $\mu$ is finite, and is thus characterized by its values on the $\pi$-system $\mathcal{A}$ that generates $\mathcal{E}$.
+\end{proof}
+
+\begin{ex}
+ Show that the product measure of the Lebesgue measure on $[0, 1]$ and the counting measure on $[0, 1]$ is not unique.
+\end{ex}
+
+Note that we could as well have defined the measure as
+\[
+ \mu(A) = \int_{E_2} \left(\int_{E_1} \mathbf{1}_A(x_1, x_2)\;\mu_1(\d x_1)\right)\mu_2(\d x_2).
+\]
+The same proof would go through, so we have another measure on the space. However, by uniqueness, we know they must be the same! Fubini's theorem generalizes this to arbitrary functions.
+\begin{thm}[Fubini's theorem]\index{Fubini's theorem}\leavevmode
+ \begin{enumerate}
+ \item If $f$ is non-negative measurable, then
+ \[
+ \mu(f) = \int_{E_1} \left(\int_{E_2} f(x_1, x_2)\; \mu_2(\d x_2)\right)\mu_1(\d x_1).\tag{$*$}
+ \]
+ In particular, we have
+ \[
+ \int_{E_1}\!\!\!\left(\int_{E_2}\!\!\! f(x_1, x_2)\; \mu_2(\d x_2)\right)\mu_1(\d x_1) = \int_{E_2}\!\!\!\left(\int_{E_1}\!\!\!f(x_1, x_2)\; \mu_1(\d x_1)\right)\mu_2(\d x_2).
+ \]
+ This is sometimes known as \term{Tonelli's theorem}.
+ \item If $f$ is integrable, and
+ \[
+ A = \left\{x_1 \in E_1: \int_{E_2}|f(x_1, x_2)| \;\mu_2(\d x_2) < \infty\right\},
+ \]
+ then
+ \[
+ \mu_1(E_1 \setminus A) = 0.
+ \]
+ If we set
+ \[
+ f_1(x_1) =
+ \begin{cases}
+ \int_{E_2}f(x_1, x_2) \;\mu_2(\d x_2) & x_1 \in A\\
+ 0 & x_1 \not\in A
+ \end{cases},
+ \]
+ then $f_1$ is a $\mu_1$ integrable function and
+ \[
+ \mu_1(f_1) = \mu(f).
+ \]
+ \end{enumerate}
+\end{thm}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Let $V$ be the set of all measurable functions such that $(*)$ holds. Then $V$ is a vector space since integration is linear.
+ \begin{enumerate}
+ \item By definition of $\mu$, we know $\mathbf{1}_E$ and $\mathbf{1}_A$ are in $V$ for all $A \in \mathcal{A}$.
+ \item The monotone convergence theorem on both sides tell us that $V$ is closed under monotone limits of the form $f_n \nearrow f$, $f_n \geq 0$.
+ \end{enumerate}
+ By the monotone class theorem, we know $V$ contains all bounded measurable functions. If $f$ is non-negative measurable, then $(f \wedge n) \in V$, and monotone convergence for $f \wedge n \nearrow f$ gives that $f \in V$.
+ \item Assume that $f$ is $\mu$-integrable. Then
+ \[
+ x_1 \mapsto \int_{E_2} |f(x_1, x_2)|\;\mu_2(\d x_2)
+ \]
+ is $\mathcal{E}_1$-measurable, and, by (i), is $\mu_1$-integrable. Since $E_1 \setminus A$ is the inverse image of $\{\infty\}$ under that map, we have $A \in \mathcal{E}_1$. Moreover, $\mu_1(E_1 \setminus A) = 0$ because integrable functions can only be infinite on sets of measure $0$.
+
+ We set
+ \begin{align*}
+ f_1^+(x_1) &= \int_{E_2} f^+(x_1, x_2)\;\mu_2(\d x_2)\\
+ f_1^-(x_1) &= \int_{E_2} f^-(x_1, x_2)\;\mu_2(\d x_2).
+ \end{align*}
+ Then we have
+ \[
+ f_1 = (f_1^+ - f_1^-) \mathbf{1}_{A}.
+ \]
+ So the result follows since
+ \[
+ \mu(f) = \mu(f^+) - \mu(f^-) = \mu_1(f_1^+) - \mu_1(f_1^-) = \mu_1(f_1)
+ \]
+ by (i).\qedhere
+ \end{enumerate}
+\end{proof}
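A numerical sketch of (i) on $[0, 1]^2$ with Lebesgue measure, for the hypothetical non-negative integrand $f(x_1, x_2) = x_1 x_2^2$ (both iterated integrals should be $\frac{1}{2} \cdot \frac{1}{3} = \frac{1}{6}$):

```python
def iterated(f, x2_inner, steps=200):
    # two nested midpoint Riemann sums over [0, 1]^2, in either order
    h = 1.0 / steps
    pts = [(i + 0.5) * h for i in range(steps)]
    if x2_inner:  # integrate over x2 first, then x1
        return sum(sum(f(x1, x2) for x2 in pts) * h for x1 in pts) * h
    return sum(sum(f(x1, x2) for x1 in pts) * h for x2 in pts) * h

f = lambda x1, x2: x1 * x2 ** 2  # non-negative on [0, 1]^2
a = iterated(f, True)
b = iterated(f, False)
assert abs(a - b) < 1e-9
assert abs(a - 1.0 / 6.0) < 1e-4
```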
+
+Since $\R$ is $\sigma$-finite, we know that we can sensibly talk about the $d$-fold product of the Lebesgue measure on $\R$ to obtain the Lebesgue measure on $\R^d$.
+
+What $\sigma$-algebra is the Lebesgue measure on $\R^d$ defined on? We know the Lebesgue measure on $\R$ is defined on $\mathcal{B}$. So the Lebesgue measure is defined on
+\[
+ \mathcal{B} \otimes \cdots \otimes \mathcal{B} = \sigma(B_1 \times \cdots \times B_d: B_i \in \mathcal{B}).
+\]
+By looking at the definition of the product topology, we see that this is just the Borel $\sigma$-algebra on $\R^d$!
+
+Recall that when we constructed the Lebesgue measure, the Caratheodory extension theorem yields a measure on the ``Lebesgue $\sigma$-algebra'' $\mathcal{M}$, which was strictly bigger than the Borel $\sigma$-algebra. It was shown in the first example sheet that $\mathcal{M}$ is complete, i.e.\ if we have $A \subseteq B \subseteq \R$ with $B \in \mathcal{M}$, $\mu(B) = 0$, then $A \in \mathcal{M}$. We can also take the Lebesgue measure on $\R^d$ to be defined on $\mathcal{M} \otimes \cdots \otimes \mathcal{M}$. However, it happens that $\mathcal{M} \otimes \mathcal{M}$ together with the Lebesgue measure on $\R^2$ is no longer complete (proof is left as an exercise for the reader).
+
+We now turn to probability. Recall that random variables $X_1, \cdots, X_n$ are independent iff the $\sigma$-algebras $\sigma(X_1), \cdots, \sigma(X_n)$ are independent. We will show that random variables are independent iff their laws are given by the product measure.
+
+\begin{prop}
+ Let $X_1, \cdots, X_n$ be random variables on $(\Omega, \mathcal{F}, \P)$ with values in $(E_1, \mathcal{E}_1), \cdots, (E_n, \mathcal{E}_n)$ respectively. We define
+ \[
+ E = E_1 \times \cdots \times E_n, \quad \mathcal{E} = \mathcal{E}_1 \otimes \cdots \otimes \mathcal{E}_n.
+ \]
+ Then $X = (X_1, \cdots, X_n)$ is $\mathcal{E}$-measurable and the following are equivalent:
+ \begin{enumerate}
+ \item $X_1, \cdots, X_n$ are independent.
+ \item $\mu_X = \mu_{X_1}\otimes \cdots \otimes \mu_{X_n}$.
+ \item For any $f_1, \cdots, f_n$ bounded and measurable, we have
+ \[
+ \E\left[\prod_{k = 1}^n f_k (X_k)\right] = \prod_{k = 1}^n \E[f_k(X_k)].
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (i) $\Rightarrow$ (ii): Let $\nu = \mu_{X_1} \otimes \cdots \otimes \mu_{X_n}$. We want to show that $\nu = \mu_X$. To do so, we just have to check that they agree on a $\pi$-system generating the entire $\sigma$-algebra. We let
+ \[
+ \mathcal{A} = \{A_1 \times \cdots \times A_n: A_k \in \mathcal{E}_k\text{ for all }k\}.
+ \]
+ Then $\mathcal{A}$ is a generating $\pi$-system of $\mathcal{E}$. Moreover, if $A = A_1 \times \cdots \times A_n \in \mathcal{A}$, then we have
+ \begin{align*}
+ \mu_X(A) &= \P[X \in A] \\
+ &= \P[X_1 \in A_1, \cdots, X_n \in A_n]\\
+ \intertext{By independence, we have}
+ &= \prod_{k = 1}^n \P[X_k \in A_k]\\
+ &= \nu(A).
+ \end{align*}
+ So we know that $\mu_X = \nu = \mu_{X_1} \otimes \cdots \otimes \mu_{X_n}$ on $\mathcal{E}$.
+ \item (ii) $\Rightarrow$ (iii): By assumption, we can evaluate the expectation
+ \begin{align*}
+ \E\left[\prod_{k = 1}^n f_k(X_k)\right] &= \int_E \prod_{k = 1}^n f_k(x_k) \;\mu_X(\d x)\\
+ &= \prod_{k = 1}^n \int_{E_k} f_k(x_k) \;\mu_{X_k}(\d x_k)\\
+ &= \prod_{k = 1}^n \E[f_k(X_k)].
+ \end{align*}
+ Here in the middle we have used Fubini's theorem.
+ \item (iii) $\Rightarrow$ (i): Take $f_k = \mathbf{1}_{A_k}$ for $A_k \in \mathcal{E}_k$. Then we have
+ \begin{align*}
+ \P[X_1 \in A_1, \cdots, X_n \in A_n] &= \E\left[\prod_{k = 1}^n \mathbf{1}_{A_k}(X_k)\right] \\
+ &= \prod_{k = 1}^n \E[\mathbf{1}_{A_k}(X_k)]\\
+ &= \prod_{k = 1}^n \P[X_k \in A_k]
+ \end{align*}
+ So $X_1, \cdots, X_n$ are independent.\qedhere
+ \end{itemize}
+\end{proof}
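A Monte Carlo sketch of the factorisation in (iii), with hypothetical independent uniform random variables and the choices $f = \mathrm{id}$, $g(y) = y^2$, so that both sides are near $\frac{1}{2} \cdot \frac{1}{3} = \frac{1}{6}$:

```python
import random

random.seed(2)
# Independent X, Y ~ Uniform[0, 1]: the law of (X, Y) is the product
# measure, so E[f(X) g(Y)] should factorise into E[f(X)] E[g(Y)].
n = 200000
pairs = [(random.random(), random.random()) for _ in range(n)]
lhs = sum(x * y * y for x, y in pairs) / n
rhs = (sum(x for x, _ in pairs) / n) * (sum(y * y for _, y in pairs) / n)
assert abs(lhs - rhs) < 0.01
assert abs(lhs - 1.0 / 6.0) < 0.01
```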
+
+\section{Inequalities and \tph{$L^p$}{Lp}{Lp} spaces}
+Eventually, we will want to define the $L^p$ spaces as follows:
+\begin{defi}[$L^p$ spaces]\index{$L^p$ space}\index{$\|f\|_p$}
+ Let $(E, \mathcal{E}, \mu)$ be a measure space. For $1 \leq p < \infty$, we define $L^p = L^p(E, \mathcal{E}, \mu)$ to be the set of all measurable functions $f$ such that
+ \[
+ \|f\|_p = \left(\int |f|^p \d \mu\right)^{1/p} < \infty.
+ \]
+ For $p = \infty$, we define $L^\infty = L^\infty(E, \mathcal{E}, \mu)$ to be the space of functions with
+ \[
+ \|f\|_{\infty} = \inf \{\lambda \geq 0: |f| \leq \lambda \text{ a.e.}\} < \infty.
+ \]
+\end{defi}
+However, it is not clear that this is a norm. First of all, $\|f\|_p = 0$ does not imply that $f = 0$. It only means that $f = 0$ a.e. But this is easy to solve. We simply quotient out the vector space by functions that differ on a set of measure zero. The more serious problem is that we don't know how to prove the triangle inequality.
+
+To do so, we are going to prove some inequalities. Apart from enabling us to show that $\| \ph \|_p$ is indeed a norm, they will also be very helpful in the future when we want to bound integrals.
+
+\subsection{Four inequalities}
+The four inequalities we are going to prove are the following:
+\begin{enumerate}
+ \item Chebyshev/Markov inequality
+ \item Jensen's inequality
+ \item H\"older's inequality
+ \item Minkowski's inequality.
+\end{enumerate}
+
+So let's start proving the inequalities.
+\begin{prop}[Chebyshev's/Markov's inequality]\index{Chebyshev's inequality}\index{Markov's inequality}
+ Let $f$ be non-negative measurable and $\lambda > 0$. Then
+ \[
+ \mu(\{f \geq \lambda\}) \leq \frac{1}{\lambda} \mu(f).
+ \]
+\end{prop}
+This is often used when this is a probability measure, so that we are bounding the probability that a random variable is big.
+
+The proof is essentially one line.
+\begin{proof}
+ We write
+ \[
+ f \geq f \mathbf{1}_{f \geq \lambda} \geq \lambda \mathbf{1}_{f \geq \lambda}.
+ \]
+ Taking $\mu$ gives the desired answer.
+\end{proof}
+This is incredibly simple, but also incredibly useful!
+
+The next inequality is Jensen's inequality. To state it, we need to know what a convex function is.
+\begin{defi}[Convex function]\index{convex function}
+ Let $I \subseteq \R$ be an interval. Then $c: I \to \R$ is convex if for any $t \in [0, 1]$ and $x, y \in I$, we have
+ \[
+ c(tx + (1 - t)y) \leq t c(x) + (1 - t) c(y).
+ \]
+ \begin{center}
+ \begin{tikzpicture}
+ \draw(-2, 4) -- (-2, 0) -- (2, 0) -- (2, 4);
+ \draw (-1.3, 0.1) -- (-1.3, -0.1) node [below] {$x$};
+ \draw (1.3, 0.1) -- (1.3, -0.1) node [below] {$y$};
+ \draw (-1.7, 2) parabola bend (-.2, 1) (1.7, 3.3);
+ \draw [dashed] (-1.3, 0) -- (-1.3, 1.53) node [circ] {};
+ \draw [dashed] (1.3, 0) -- (1.3, 2.42) node [circ] {};
+ \draw (-1.3, 1.53) -- (1.3, 2.42);
+ \draw [dashed] (0, 0) node [below] {\tiny $(1 - t)x + ty$} -- (0, 1.975) node [above] {\tiny$(1 - t)c(x) + t c(y)\quad\quad\quad\quad\quad\quad$} node [circ] {};
+ \end{tikzpicture}
+ \end{center}
+\end{defi}
+Note that if $c$ is twice differentiable, then this is equivalent to $c'' \geq 0$.
+
+\begin{prop}[Jensen's inequality]\index{Jensen's inequality}
+ Let $X$ be an integrable random variable with values in $I$. If $c: I \to \R$ is convex, then we have
+ \[
+ \E[c(X)] \geq c(\E[X]).
+ \]
+\end{prop}
+It is crucial that this only applies to a probability space. We need the total mass of the measure space to be $1$ for it to work. Just being finite is not enough. Jensen's inequality will be an easy consequence of the following lemma:
+\begin{lemma}
+ If $c: I \to \R$ is a convex function and $m$ is in the interior of $I$, then there exists real numbers $a, b$ such that
+ \[
+ c(x) \geq ax + b
+ \]
+ for all $x \in I$, with equality at $x = m$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->] (-1.5, 0) -- (1.5, 0);
+ \draw (-1, 2) parabola bend (0, 0.5) (1, 2) node [above] {$c$};
+ \node [circ] at (0.4, 0.75) {};
+ \node [right] at (0.4, 0.75) {$m$};
+ \draw (-0.7, -0.5) -- (1.5, 2) node [right] {$ax + b$};
+ \end{tikzpicture}
+ \end{center}
+\end{lemma}
+If the function is differentiable, then we can easily extract this from the derivative. However, if it is not, then we need to be more careful.
+
+\begin{proof}
+ If $c$ is smooth, then we know $c'' \geq 0$, and thus $c'$ is non-decreasing. We are going to show an analogous statement that does not mention the word ``derivative''. Consider $x < m < y$ with $x, y, m \in I$. We want to show that
+ \[
+ \frac{c(m) - c(x)}{m - x} \leq \frac{c(y) - c(m)}{y - m}.
+ \]
+ To show this, we turn off our brains and do the only thing we can do. We can write
+ \[
+ m = tx + (1 - t) y
+ \]
+ for some $t$. Then convexity tells us
+ \[
+ c(m) \leq t c(x) + (1 - t) c(y).
+ \]
+ Writing $c(m) = t c(m) + (1 - t) c(m)$, this tells us
+ \[
+ t (c(m) - c(x)) \leq (1 - t)(c(y) - c(m)).
+ \]
+ To conclude, we simply have to compute the actual value of $t$ and plug it in. We have
+ \[
+ t = \frac{y - m}{y - x},\quad 1 - t = \frac{m - x}{y - x}.
+ \]
+ So we obtain
+ \[
+ \frac{y - m}{y - x} (c(m) - c(x)) \leq \frac{m - x}{y - x} (c(y) - c(m)).
+ \]
+ Cancelling the $y - x$ and dividing by the factors gives the desired result.
+
+ Now since $x$ and $y$ are arbitrary, we know there is some $a \in \R$ such that
+ \[
+ \frac{c(m) - c(x)}{m - x} \leq a \leq \frac{c(y) - c(m)}{y - m}
+ \]
+ for all $x < m < y$. If we rearrange, then we obtain
+ \[
+ c(t) \geq a(t - m) + c(m)
+ \]
+ for all $t \in I$.
+\end{proof}
+
+\begin{proof}[Proof of Jensen's inequality]
+ To apply the previous result, we need to pick the right $m$. We take
+ \[
+ m = \E [X].
+ \]
+ To apply this, we need to know that $m$ is in the interior of $I$. So we assume that $X$ is \emph{not} a.s. constant (that case is boring). By the lemma, we can find some $a, b \in \R$ such that
+ \[
+ c(X) \geq aX + b.
+ \]
+ We want to take the expectation of the LHS, but we have to make sure $\E[c(X)]$ is a sensible thing to talk about. To make sure it makes sense, we show that $\E[c(X)^-] = \E[(-c(X)) \vee 0]$ is finite.
+
+ We simply bound
+ \[
+ [c(X)]^- = [-c(X)] \vee 0 \leq |a| |X| + |b|.
+ \]
+ So we have
+ \[
+ \E[c(X)^-] \leq |a| \E |X| + |b| < \infty
+ \]
+ since $X$ is integrable. So $\E[c(X)]$ makes sense.
+
+ We then just take
+ \[
+ \E[c(X)] \geq \E[aX + b] = a\E[X] + b = am + b = c(m) = c(\E[X]).
+ \]
+ So done.
+\end{proof}
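+As a quick sanity check, applying Jensen's inequality to the simplest convex function already recovers a familiar fact:
+
+\begin{eg}
+ Take $c(x) = x^2$, which is convex on $\R$. Then Jensen's inequality gives
+ \[
+ \E[X^2] \geq (\E[X])^2,
+ \]
+ i.e.\ the variance $\E[X^2] - (\E[X])^2$ is non-negative.
+\end{eg}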
+
+We are now going to use Jensen's inequality to prove H\"older's inequality. Before that, we take note of the following definition:
+\begin{defi}[Conjugate]\index{conjugate}
+ Let $p, q \in [1, \infty]$. We say that they are \emph{conjugate} if
+ \[
+ \frac{1}{p} + \frac{1}{q} = 1,
+ \]
+ where we take $1/\infty = 0$.
+\end{defi}
+
+\begin{prop}[H\"older's inequality]\index{H\"older's inequality}
+ Let $p, q \in (1, \infty)$ be conjugate. Then for $f, g$ measurable, we have
+ \[
+ \mu(|fg|) = \|fg\|_1 \leq \|f\|_p \|g\|_q.
+ \]
+ When $p = q = 2$, this is the Cauchy-Schwarz inequality.
+\end{prop}
+
+We will provide two different proofs.
+\begin{proof}
+ We assume that $\|f\|_p > 0$ and $\|f\|_p < \infty$. Otherwise, there is nothing to prove. By scaling, we may assume that $\|f\|_p = 1$. We make up a probability measure by
+ \[
+ \P[A] = \int |f|^p \mathbf{1}_A \;\d \mu.
+ \]
+ Since we know
+ \[
+ \|f\|_p = \left(\int |f|^p\;\d \mu\right)^{1/p} = 1,
+ \]
+ we know $\P[\ph]$ is a probability measure. Then we have
+ \begin{align*}
+ \mu(|fg|) &= \mu(|fg| \mathbf{1}_{\{|f| > 0\}})\\
+ &= \mu\left(\frac{|g|}{|f|^{p - 1}} \mathbf{1}_{\{|f| > 0\}} |f|^p\right)\\
+ &= \E\left[\frac{|g|}{|f|^{p - 1}}\mathbf{1}_{\{|f| > 0\}}\right]\\
+ \intertext{Now use the fact that $(\E|X|)^q \leq \E[|X|^q]$ since $x \mapsto x^q$ is convex for $q > 1$. Then we obtain}
+ &\leq \left(\E\left[\frac{|g|^q}{|f|^{(p - 1)q}}\mathbf{1}_{\{|f| > 0\}}\right]\right)^{1/q}.
+ \end{align*}
+ The key realization now is that $\frac{1}{q} + \frac{1}{p} = 1$ means that $q(p - 1) = p$. So this becomes
+ \[
+ \E\left[\frac{|g|^q}{|f|^p} \mathbf{1}_{\{|f| > 0\}}\right]^{1/q} \leq \mu(|g|^q)^{1/q} = \|g\|_q.
+ \]
+ Using the fact that $\|f\|_p = 1$, we obtain the desired result.
+\end{proof}
+
+\begin{proof}[Alternative proof]
+ We wlog $0 < \|f\|_p, \|g\|_q < \infty$, or else there is nothing to prove. By scaling, we wlog $\|f\|_p = \|g\|_q = 1$. Then we have to show that
+ \[
+ \int |f||g| \;\d \mu \leq 1.
+ \]
+ To do so, we notice if $\frac{1}{p} + \frac{1}{q} = 1$, then the concavity of $\log$ tells us for any $a, b > 0$, we have
+ \[
+ \frac{1}{p} \log a + \frac{1}{q} \log b \leq \log \left(\frac{a}{p} + \frac{b}{q}\right).
+ \]
+ Replacing $a$ with $a^p$ and $b$ with $b^q$ and then taking exponentials tells us
+ \[
+ a b \leq \frac{a^p}{p} + \frac{b^q}{q}.
+ \]
+ While we assumed $a, b > 0$ when deriving, we observe that it is also valid when some of them are zero. So we have
+ \[
+ \int |f||g| \;\d \mu \leq \int \left(\frac{|f|^p}{p} + \frac{|g|^q}{q}\right) \;\d \mu = \frac{1}{p} + \frac{1}{q} = 1.\qedhere
+ \]
+\end{proof}
+Just like Jensen's inequality, this is very useful when bounding integrals, and it is also theoretically very important, because we are going to use it to prove the Minkowski inequality. This tells us that the $L^p$ norm is actually a norm.
+
+Before we prove the Minkowski inequality, we prove the following tiny lemma that we will use repeatedly:
+\begin{lemma}
+ Let $a, b \geq 0$ and $p \geq 1$. Then
+ \[
+ (a + b)^p \leq 2^p(a^p + b^p).
+ \]
+\end{lemma}
+This is a terrible bound, but is useful when we want to prove that things are finite.
+
+\begin{proof}
+ We wlog $a \leq b$. Then
+ \[
+ (a + b)^p \leq (2b)^p = 2^p b^p \leq 2^p(a^p + b^p).\qedhere
+ \]
+\end{proof}
+\begin{thm}[Minkowski inequality]\index{Minkowski inequality}
+ Let $p \in [1, \infty]$ and $f, g$ measurable. Then
+ \[
+ \|f + g\|_p \leq \|f\|_p + \|g\|_p.
+ \]
+\end{thm}
+
+Again the proof is magic.
+\begin{proof}
+ We do the boring cases first. If $p = 1$, then
+ \[
+ \|f + g\|_1 = \int |f + g| \leq \int (|f| + |g|) = \int |f| + \int |g| = \|f\|_1 + \|g\|_1.
+ \]
+ The proof of the case of $p = \infty$ is similar.
+
+ Now note that if $\|f + g\|_p = 0$, then the result is trivial. On the other hand, if $\|f + g\|_p = \infty$, then since we have
+ \[
+ |f + g|^p \leq (|f| + |g|)^p \leq 2^p(|f|^p + |g|^p),
+ \]
+ we know the right hand side is infinite as well. So this case is also done.
+
+ Let's now do the interesting case. We compute
+ \begin{align*}
+ \mu(|f + g|^p) &= \mu(|f + g| |f + g|^{p - 1}) \\
+ &\leq \mu(|f||f + g|^{p - 1}) + \mu(|g||f + g|^{p - 1})\\
+ &\leq \|f\|_p \||f + g|^{p - 1}\|_q + \|g\|_p \||f + g|^{p - 1}\|_q\\
+ &= (\|f\|_p + \|g\|_p) \||f + g|^{p - 1}\|_q\\
+ &= (\|f\|_p + \|g\|_p) \mu(|f + g|^{(p - 1)q})^{1 - 1/p}\\
+ &= (\|f\|_p + \|g\|_p) \mu(|f + g|^p)^{1 - 1/p}.
+ \end{align*}
+ So we know
+ \[
+ \mu(|f + g|^p) \leq (\|f\|_p + \|g\|_p) \mu(|f + g|^p)^{1 - 1/p}.
+ \]
+ Then dividing both sides by $\mu(|f + g|^p)^{1 - 1/p}$ tells us
+ \[
+ \mu(|f + g|^p)^{1/p} = \|f + g\|_p \leq \|f\|_p + \|g\|_p.\qedhere
+ \]
+\end{proof}
+
+Given these inequalities, we can go and prove some properties of $L^p$ spaces.
+
+\subsection{\tph{$L^p$}{Lp}{Lp} spaces}
+Recall the following definition:
+
+\begin{defi}[Norm of vector space]\index{norm!vector space}\index{vector space!norm}
+ Let $V$ be a vector space. A \emph{norm} on $V$ is a function $\|\ph\|: V \to \R_{\geq 0}$ such that
+ \begin{enumerate}
+ \item $\|u + v\| \leq \norm{u} + \norm{v}$ for all $u, v \in V$.
+ \item $\|\alpha v\| = |\alpha|\|v\|$ for all $v \in V$ and $\alpha \in \R$.
+ \item $\|v\| = 0$ implies $v = 0$.
+ \end{enumerate}
+\end{defi}
+
+\begin{defi}[$L^p$ spaces]\index{$L^p$ space}\index{$\|f\|_p$}
+ Let $(E, \mathcal{E}, \mu)$ be a measure space. For $1 \leq p < \infty$, we define $L^p = L^p(E, \mathcal{E}, \mu)$ to be the set of all measurable functions $f$ such that
+ \[
+ \|f\|_p = \left(\int |f|^p \d \mu\right)^{1/p} < \infty.
+ \]
+ For $p = \infty$, we let $L^\infty = L^\infty(E, \mathcal{E}, \mu)$ be the space of functions with
+ \[
+ \|f\|_{\infty} = \inf \{\lambda \geq 0: |f| \leq \lambda \text{ a.e.}\} < \infty.
+ \]
+\end{defi}
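+These spaces genuinely depend on $p$, as a quick example shows:
+
+\begin{eg}
+ Take $(E, \mathcal{E}, \mu) = ((0, 1), \mathcal{B}((0, 1)), \text{Lebesgue})$ and $f(x) = x^{-1/2}$. Then
+ \[
+ \int_0^1 |f|^p \;\d x = \int_0^1 x^{-p/2}\;\d x
+ \]
+ is finite iff $p < 2$. So $f \in L^p$ for $1 \leq p < 2$, but $f \not\in L^2$.
+\end{eg}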
+
+By Minkowski's inequality, we know $L^p$ is a vector space, and also (i) holds. By definition, (ii) holds obviously. However, (iii) does \emph{not} hold for $\|\ph\|_p$, because $\|f\|_p = 0$ does not imply that $f = 0$. It merely implies that $f = 0$ a.e.
+
+To fix this, we define an equivalence relation as follows: for $f, g \in L^p$, we say that $f \sim g$ iff $f - g = 0$ a.e. For any $f \in L^p$, we let $[f]$ denote its equivalence class under this relation. In other words,
+\[
+ [f] = \{g \in L^p: f - g = 0\text{ a.e.}\}.
+\]
+\begin{defi}[$\mathcal{L}^p$ space]\index{$\mathcal{L}^p$ space}
+ We define
+ \[
+ \mathcal{L}^p = \{[f]: f \in L^p\},
+ \]
+ where
+ \[
+ [f] = \{g \in L^p: f - g = 0\text{ a.e.}\}.
+ \]
+ This is a normed vector space under the $\|\ph\|_p$ norm.
+\end{defi}
+
+One important property of $\mathcal{L}^p$ is that it is complete, i.e.\ every Cauchy sequence converges.
+\begin{defi}[Complete vector space/Banach spaces]\index{complete vector space}\index{Banach space}
+ A normed vector space $(V, \|\ph\|)$ is \emph{complete} if every Cauchy sequence converges. In other words, if $(v_n)$ is a sequence in $V$ such that $\|v_n - v_m\| \to 0$ as $n, m \to \infty$, then there is some $v \in V$ such that $\|v_n - v\| \to 0$ as $n \to \infty$. A complete vector space is known as a \emph{Banach space}.
+\end{defi}
+
+\begin{thm}
+ Let $1 \leq p \leq \infty$. Then $\mathcal{L}^p$ is a Banach space. In other words, if $(f_n)$ is a sequence in $L^p$, with the property that $\|f_n - f_m\|_p \to 0$ as $n, m \to \infty$, then there is some $f \in L^p$ such that $\|f_n - f\|_p \to 0$ as $n \to \infty$.
+\end{thm}
+
+\begin{proof}
+ We will only give the proof for $p < \infty$. The $p = \infty$ case is left as an exercise for the reader.
+
+ Suppose that $(f_n)$ is a sequence in $L^p$ with $\|f_n - f_m\|_p \to 0$ as $n, m \to \infty$. Take a subsequence $(f_{n_k})$ of $(f_n)$ with
+ \[
+ \|f_{n_{k + 1}} - f_{n_k}\|_p \leq 2^{-k}
+ \]
+ for all $k \in \N$. We then find that
+ \[
+ \norm{\sum_{k = 1}^M |f_{n_{k + 1}} - f_{n_k}|}_p \leq \sum_{k = 1}^M \|f_{n_{k + 1}} - f_{n_k}\|_p \leq 1.
+ \]
+ We know that
+ \[
+ \sum_{k = 1}^M |f_{n_{k + 1}} - f_{n_k}| \nearrow \sum_{k = 1}^\infty |f_{n_{k + 1}} - f_{n_k}|\text{ as }M \to \infty.
+ \]
+ So applying the monotone convergence theorem, we know that
+ \[
+ \norm{\sum_{k = 1}^\infty |f_{n_{k + 1}} - f_{n_k}|}_p \leq \sum_{k = 1}^\infty \|f_{n_{k + 1}} - f_{n_k}\|_p \leq 1.
+ \]
+ In particular,
+ \[
+ \sum_{k = 1}^\infty |f_{n_{k + 1}} - f_{n_k}| < \infty \text{ a.e.}
+ \]
+ So $f_{n_k}(x)$ converges a.e., since the real line is complete. So we set
+ \[
+ f(x) =
+ \begin{cases}
+ \lim_{k \to \infty} f_{n_k}(x) & \text{if the limit exists}\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ By an exercise on the first example sheet, this function is indeed measurable. Then we have
+ \begin{align*}
+ \|f_n - f\|_p^p &= \mu(|f_n - f|^p) \\
+ &= \mu\left(\liminf_{k \to \infty} |f_n - f_{n_k}|^p\right)\\
+ &\leq \liminf_{k \to \infty} \mu(|f_n - f_{n_k}|^p),
+ \end{align*}
+ which tends to $0$ as $n \to \infty$ since the sequence is Cauchy. So $f$ is indeed the limit.
+
+ Finally, we have to check that $f \in L^p$. We have
+ \begin{align*}
+ \mu(|f|^p) &= \mu(|f - f_n + f_n|^p)\\
+ &\leq \mu((|f - f_n| + |f_n|)^p)\\
+ &\leq \mu(2^p (|f - f_n|^p + |f_n|^p))\\
+ &= 2^p (\mu(|f - f_n|^p) + \mu(|f_n|^p)).
+ \end{align*}
+ We know the first term tends to $0$, and in particular is finite for $n$ large enough, and the second term is also finite. So done.
+\end{proof}
+
+\subsection{Orthogonal projection in \tph{$\mathcal{L}^2$}{L2}{L2}}
+In the particular case $p = 2$, we have an extra structure on $\mathcal{L}^2$, namely an inner product structure, given by
+\[
+ \bra f, g\ket = \int fg\;\d \mu.
+\]
+This inner product induces the $L^2$ norm by
+\[
+ \|f\|_2^2 = \bra f, f\ket.
+\]
+Recall the following definition:
+\begin{defi}[Hilbert space]\index{Hilbert space}
+ A \emph{Hilbert space} is an inner product space that is complete under the induced norm.
+\end{defi}
+So $\mathcal{L}^2$ is not only a Banach space, but a Hilbert space as well.
+
+Somehow Hilbert spaces are much nicer than Banach spaces, because you have an inner product structure as well. One particular thing we can do is orthogonal complements.
+
+\begin{defi}[Orthogonal functions]\index{orthogonal functions}
+ Two functions $f, g \in \mathcal{L}^2$ are \emph{orthogonal} if
+ \[
+ \bra f, g\ket = 0.
+ \]
+\end{defi}
+
+\begin{defi}[Orthogonal complement]\index{orthogonal complement}\index{$V^\perp$}
+ Let $V \subseteq L^2$. We then set
+ \[
+ V^\perp = \{f \in L^2: \bra f, v \ket = 0\text{ for all }v \in V\}.
+ \]
+\end{defi}
+
+Note that we can always make these definitions for any inner product space. However, the completeness of the space guarantees nice properties of the orthogonal complement.
+
+Before we proceed further, we need to make a definition of what it means for a subspace of $L^2$ to be closed. This isn't the usual definition, since $L^2$ isn't really a normed vector space, so we need to accommodate for that fact.
+
+\begin{defi}[Closed subspace]\index{closed subspace}
+ Let $V \subseteq L^2$. Then $V$ is closed if whenever $(f_n)$ is a sequence in $V$ with $f_n \to f$, then there exists $v \in V$ with $v \sim f$.
+\end{defi}
+
+The main thing that makes $L^2$ nice is that we can use closed subspaces to decompose functions orthogonally.
+
+\begin{thm}
+ Let $V$ be a closed subspace of $L^2$. Then each $f \in L^2$ has an \term{orthogonal decomposition}
+ \[
+ f = u + v,
+ \]
+ where $v \in V$ and $u \in V^\perp$. Moreover,
+ \[
+ \|f - v\|_2 \leq \|f - g\|_2
+ \]
+ for all $g \in V$ with equality iff $g \sim v$.
+\end{thm}
+
+To prove this result, we need two simple identities, which can be easily proven by writing out the expression.
+\begin{lemma}[Pythagoras identity]\index{Pythagoras identity}
+ \[
+ \|f + g\|^2 = \|f\|^2 + \|g\|^2 + 2 \bra f, g\ket.
+ \]
+\end{lemma}
+
+\begin{lemma}[Parallelogram law]\index{parallelogram law}
+ \[
+ \|f + g\|^2 + \|f - g\|^2 = 2(\|f\|^2 + \|g\|^2).
+ \]
+\end{lemma}
+
+To prove the existence of orthogonal decomposition, we need to use a slight trick involving the parallelogram law.
+
+\begin{proof}[Proof of orthogonal decomposition]
+ Given $f \in L^2$, we take a sequence $(g_n)$ in $V$ such that
+ \[
+ \|f - g_n\|_2 \to d(f, V) = \inf_{g \in V} \|f - g\|_2.
+ \]
+ We now want to show that the infimum is attained. To do so, we show that $g_n$ is a Cauchy sequence, and by the completeness of $L^2$, it will have a limit.
+
+ If we apply the parallelogram law with $u = f - g_n$ and $v = f - g_m$, then we know
+ \[
+ \|u + v\|^2_2 + \|u - v\|_2^2 = 2(\|u\|^2_2 + \|v\|_2^2).
+ \]
+ Using our particular choice of $u$ and $v$, we obtain
+ \[
+ \norm{2 \left(f - \frac{g_n + g_m}{2}\right)}_2^2 + \|g_n - g_m\|_2^2 = 2(\|f - g_n\|_2^2 + \|f - g_m\|^2_2).
+ \]
+ So we have
+ \[
+ \|g_n - g_m\|_2^2 = 2(\|f - g_n\|_2^2 + \|f - g_m\|^2_2) - 4\norm{f - \frac{g_n + g_m}{2}}_2^2.
+ \]
+ The first two terms on the right hand side tend to $d(f, V)^2$, and since $\frac{g_n + g_m}{2} \in V$, the last term is bounded below by $4d(f, V)^2$. So as $n, m \to \infty$, we must have $\|g_n - g_m\|_2 \to 0$. By completeness of $\mathcal{L}^2$, there exists a $g \in L^2$ such that $g_n \to g$.
+
+ Now since $V$ is assumed to be closed, we can find a $v \in V$ such that $g = v$ a.e. Then we know
+ \[
+ \|f - v\|_2 = \lim_{n \to \infty}\|f - g_n\|_2 = d(f, V).
+ \]
+ So $v$ attains the infimum. To show that this gives us an orthogonal decomposition, we want to show that
+ \[
+ u = f - v \in V^\perp.
+ \]
+ Suppose $h \in V$. We need to show that $\bra u, h\ket = 0$. We need to do another funny trick. Suppose $t \in \R$. Then we have
+ \begin{align*}
+ d(f, V)^2 &\leq \|f - (v + th)\|_2^2 \\
+ &= \|f - v\|_2^2 + t^2 \|h\|_2^2 - 2t \bra f - v, h\ket.
+ \end{align*}
+ We think of this as a quadratic in $t$, which is minimized when
+ \[
+ t = \frac{\bra f - v, h\ket}{\|h\|^2_2}.
+ \]
+ But we know this quadratic is minimized when $t = 0$. So $\bra f - v, h\ket = 0$.
+\end{proof}
+
+We are now going to look at the relationship between conditional expectation and orthogonal projection.
+\begin{defi}[Conditional expectation]\index{conditional expectation}
+ Suppose we have a probability space $(\Omega, \mathcal{F}, \P)$, and $(G_n)$ is a collection of pairwise disjoint events with $\bigcup_n G_n = \Omega$. We let
+ \[
+ \mathcal{G} = \sigma(G_n: n \in \N).
+ \]
+ The \emph{conditional expectation} of $X$ given $\mathcal{G}$ is the random variable
+ \[
+ Y = \sum_{n = 1}^\infty \E[X \mid G_n] \mathbf{1}_{G_n},
+ \]
+ where
+ \[
+ \E[X \mid G_n] = \frac{\E [X \mathbf{1}_{G_n}]}{ \P[G_n]}\text{ for }\P[G_n] > 0.
+ \]
+\end{defi}
+In other words, if $x \in \Omega$ lies in $G_n$, then $Y(x) = \E[X \mid G_n]$.
+
+If $X \in L^2(\P)$, then $Y \in L^2(\P)$, and it is clear that $Y$ is $\mathcal{G}$-measurable. We claim that this is in fact the projection of $X$ onto the subspace $L^2(\mathcal{G}, \P)$ of $\mathcal{G}$-measurable $L^2$ random variables in the ambient space $L^2(\P)$.
+
+\begin{prop}
+ The conditional expectation of $X$ given $\mathcal{G}$ is the projection of $X$ onto the subspace $L^2(\mathcal{G}, \P)$ of $\mathcal{G}$-measurable $L^2$ random variables in the ambient space $L^2(\P)$.
+\end{prop}
+In some sense, this tells us $Y$ is our best prediction of $X$ given only the information encoded in $\mathcal{G}$.
+
+\begin{proof}
+ Let $Y$ be the conditional expectation. It suffices to show that $\E[(X - W)^2]$ is minimized for $W = Y$ among $\mathcal{G}$-measurable random variables. Suppose that $W$ is a $\mathcal{G}$-measurable random variable. Since
+ \[
+ \mathcal{G} = \sigma(G_n: n \in \N),
+ \]
+ it follows that
+ \[
+ W = \sum_{n = 1}^\infty a_n \mathbf{1}_{G_n},
+ \]
+ where $a_n \in \R$. Then
+ \begin{align*}
+ \E[(X - W)^2] &= \E\left[\left(\sum_{n = 1}^\infty (X - a_n) \mathbf{1}_{G_n}\right)^2\right]\\
+ &= \E\left[\sum_n (X^2 + a_n^2 - 2a_n X)\mathbf{1}_{G_n}\right]\\
+ &= \E\left[\sum_n (X^2 + a_n^2 - 2a_n \E[X\mid G_n])\mathbf{1}_{G_n}\right]
+ \end{align*}
+ We now optimize the quadratic
+ \[
+ X^2 + a_n^2 - 2 a_n \E[X \mid G_n]
+ \]
+ over $a_n$. We see that this is minimized for
+ \[
+ a_n = \E[X \mid G_n].
+ \]
+ Note that the optimal $a_n$ does not depend on the $X^2$ term, since that term is constant in $a_n$.
+
+ Therefore $\E[(X - W)^2]$ is minimized when $W = Y$.
+\end{proof}
+
+We can also rephrase variance and covariance in terms of the $L^2$ spaces.
+
+Suppose $X, Y \in L^2(\P)$ with
+\[
+ m_X = \E[X],\quad m_Y = \E[Y].
+\]
+Then \term{variance} and \term{covariance} just correspond to $L^2$ inner product and norm. In fact, we have
+\begin{align*}
+ \var(X) &= \E[(X - m_X)^2] = \|X - m_X\|_2^2,\\
+ \cov(X, Y) &= \E[(X - m_X)(Y - m_Y)] = \bra X - m_X, Y - m_Y\ket.
+\end{align*}
+More generally, the \term{covariance matrix} of a random vector $X = (X_1, \cdots, X_n)$ is given by
+\[
+ \var(X) =(\cov(X_i, X_j))_{ij}.
+\]
+On the example sheet, we will see that the covariance matrix is a positive semi-definite matrix.
+
+\subsection{Convergence in \tph{$L^1(\P)$}{L1(P)}{L1(&\#x2119;)} and uniform integrability}
+What we are looking at here is the following question --- suppose $(X_n), X$ are random variables and $X_n \to X$ in probability. Under what extra assumptions is it true that $X_n$ also converges to $X$ in $L^1$, i.e.\ $\E[|X_n - X|] \to 0$ as $n \to \infty$?
+
+This is not always true.
+
+\begin{eg}
+ If we take $(\Omega, \mathcal{F}, \P) = ((0, 1), \mathcal{B}((0, 1)), \text{Lebesgue})$, and
+ \[
+ X_n = n\mathbf{1}_{(0, 1/n)}.
+ \]
+ Then $X_n \to 0$ in probability, and in fact $X_n \to 0$ almost surely. However,
+ \[
+ \E[|X_n - 0|] = \E[X_n] = n \cdot \frac{1}{n} = 1,
+ \]
+ which does not converge to $0$.
+\end{eg}
+
+We see that the problem with this sequence is that there is a lot of ``stuff'' concentrated near $0$, and indeed the functions can get unbounded near $0$. We can easily curb this problem by requiring our functions to be bounded:
+\begin{thm}[Bounded convergence theorem]\index{bounded convergence theorem}
+ Suppose $X, (X_n)$ are random variables. Assume that there exists a (non-random) constant $C > 0$ such that $|X_n| \leq C$. If $X_n \to X$ in probability, then $X_n \to X$ in $L^1$.
+\end{thm}
+The proof is a rather standard manipulation.
+
+\begin{proof}
+ We first show that $|X| \leq C$ a.e. Let $\varepsilon > 0$. We then have
+ \begin{align*}
+ \P[|X| > C+ \varepsilon] &\leq \P[|X - X_n| + |X_n| > C + \varepsilon]\\
+ &\leq \P[|X - X_n|> \varepsilon] + \P[|X_n| > C]
+ \end{align*}
+ We know the second term vanishes, while the first term $\to 0$ as $n \to \infty$. So we know
+ \[
+ \P[|X| > C + \varepsilon] = 0
+ \]
+ for all $\varepsilon$. Since $\varepsilon$ was arbitrary, we know $|X| \leq C$ a.s.
+
+ Now fix an $\varepsilon > 0$. Then
+ \begin{align*}
+ \E[|X_n - X|] &= \E\left[ |X_n - X| (\mathbf{1}_{|X_n - X| \leq \varepsilon} + \mathbf{1}_{|X_n - X| > \varepsilon})\right] \\
+ &\leq \varepsilon + 2C\; \P\left[|X_n - X| > \varepsilon\right].
+ \end{align*}
+ Since $X_n \to X$ in probability, for $n$ sufficiently large, the second term is $\leq \varepsilon$. So $\E[|X_n - X|] \leq 2 \varepsilon$, and we have convergence in $L^1$.
+\end{proof}
+
+But we can do better than that. We don't need the functions to be actually bounded. We just need that the functions aren't concentrated in arbitrarily small subsets of $\Omega$. Thus, we make the following definition:
+
+\begin{defi}[Uniformly integrable]\index{uniformly integrable}\index{UI}
+ Let $\mathcal{X}$ be a family of random variables. Define
+ \[
+ I_\mathcal{X}(\delta) = \sup\{\E[|X|\mathbf{1}_A]: X \in \mathcal{X}, A \in \mathcal{F} \text{ with }\P[A] < \delta\}.
+ \]
+ Then we say $\mathcal{X}$ is \emph{uniformly integrable} if $\mathcal{X}$ is $L^1$-bounded (see below), and $I_\mathcal{X}(\delta) \to 0$ as $\delta \to 0$.
+\end{defi}
+
+\begin{defi}[$L^p$-bounded]\index{$L^p$-bounded}
+ Let $\mathcal{X}$ be a family of random variables. Then we say $\mathcal{X}$ is $L^p$-bounded if
+ \[
+ \sup\{\|X\|_p: X \in \mathcal{X}\} < \infty.
+ \]
+\end{defi}
+
+In some sense, this is ``uniform continuity for integration''. It is immediate that
+
+\begin{prop}
+ Finite unions of uniformly integrable sets are uniformly integrable.
+\end{prop}
+
+How can we find uniformly integrable families? The following proposition gives us a large class of such families.
+
+\begin{prop}
+ Let $\mathcal{X}$ be an $L^p$-bounded family for some $p > 1$. Then $\mathcal{X}$ is uniformly integrable.
+\end{prop}
+
+\begin{proof}
+ We let
+ \[
+ C = \sup\{\|X\|_p : X \in \mathcal{X}\} < \infty.
+ \]
+ Suppose that $X \in \mathcal{X}$ and $A \in \mathcal{F}$. We then have
+ \[
+ \E[|X|\mathbf{1}_A] \leq \E[|X|^p]^{1/p} \P[A]^{1/q} \leq C \P[A]^{1/q}
+ \]
+ by H\"older's inequality, where $p, q$ are conjugates. This is now a uniform bound depending only on $\P[A]$. So done.
+\end{proof}
+
+This is the best we can get. $L^1$-boundedness alone is not enough. Indeed, our earlier example
+\[
+ X_n = n \mathbf{1}_{(0, 1/n)},
+\]
+is $L^1$-bounded but not uniformly integrable.
+
+For many practical purposes, it is convenient to rephrase the definition of uniform integrability as follows:
+
+\begin{lemma}
+ Let $\mathcal{X}$ be a family of random variables. Then $\mathcal{X}$ is uniformly integrable if and only if
+ \[
+ \sup\{\E[|X|\mathbf{1}_{|X| > k}]: X \in \mathcal{X}\} \to 0
+ \]
+ as $k \to \infty$.
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item[$(\Rightarrow)$] Suppose that $\mathcal{X}$ is uniformly integrable. For any $k$ and $X \in \mathcal{X}$, the Chebyshev inequality gives
+ \[
+ \P[|X| \geq k] \leq \frac{\E[|X|]}{k}.
+ \]
+ Given $\varepsilon > 0$, we pick $\delta$ such that $\E[|X| \mathbf{1}_A] < \varepsilon$ for all $X \in \mathcal{X}$ and all $A$ with $\P[A] < \delta$. Then pick $k$ sufficiently large such that $k \delta > \sup\{\E[|X|]: X \in \mathcal{X}\}$. Then $\P[|X| \geq k] < \delta$, and hence $\E[|X|\mathbf{1}_{|X| > k}] < \varepsilon$ for all $X \in \mathcal{X}$.
+ \item[$(\Leftarrow)$] Suppose that the condition in the lemma holds. We first show that $\mathcal{X}$ is $L^1$-bounded. We have
+ \[
+ \E[|X|] = \E[|X|(\mathbf{1}_{|X| \leq k} + \mathbf{1}_{|X| > k})] \leq k + \E[|X| \mathbf{1}_{|X| > k}] < \infty
+ \]
+ by picking a large enough $k$.
+
+ Next note that for any measurable $A$ and $X \in \mathcal{X}$, we have
+ \[
+ \E[|X| \mathbf{1}_A] = \E[|X| \mathbf{1}_A(\mathbf{1}_{|X| > k} + \mathbf{1}_{|X| \leq k})] \leq \E[|X| \mathbf{1}_{|X| > k}] + k \P[A].
+ \]
+ Thus, for any $\varepsilon > 0$, we can pick $k$ sufficiently large such that the first term is $< \frac{\varepsilon}{2}$ for all $X \in \mathcal{X}$ by assumption. Then when $\P[A] < \frac{\varepsilon}{2k}$, we have $\E[|X| \mathbf{1}_A] \leq \varepsilon$.\qedhere
+ \end{itemize}
+\end{proof}
+
+As a corollary, we find that
+\begin{cor}
+ Let $\mathcal{X} = \{X\}$, where $X \in L^1(\P)$. Then $\mathcal{X}$ is uniformly integrable.
+
+ Hence, a finite collection of $L^1$ functions is uniformly integrable.
+\end{cor}
+
+\begin{proof}
+ Note that
+ \[
+ \E[|X|] = \sum_{k = 0}^\infty \E[|X| \mathbf{1}_{|X| \in [k, k + 1)}].
+ \]
+ Since the sum is finite, we must have
+ \[
+ \E[|X|\mathbf{1}_{|X| \geq K}] = \sum_{k = K}^\infty \E[|X| \mathbf{1}_{|X| \in [k, k + 1)}] \to 0\text{ as }K \to \infty.\qedhere
+ \]
+\end{proof}
+
+With all that preparation, we now come to the main theorem on uniform integrability.
+\begin{thm}
+ Let $X, (X_n)$ be random variables. Then the following are equivalent:
+ \begin{enumerate}
+ \item $X_n, X \in L^1$ for all $n$ and $X_n \to X$ in $L^1$.
+ \item $\{X_n\}$ is uniformly integrable and $X_n \to X$ in probability.
+ \end{enumerate}
+\end{thm}
+
+The (i) $\Rightarrow$ (ii) direction is just a standard manipulation. The idea of the (ii) $\Rightarrow$ (i) direction is that we use uniformly integrability to cut off $X_n$ and $X$ at some large value $K$, which gives us a small error, then apply bounded convergence.
+\begin{proof}
+ We first assume that $X_n, X$ are $L^1$ and $X_n \to X$ in $L^1$. We want to show that $\{X_n\}$ is uniformly integrable and $X_n \to X$ in probability.
+
+ We first show that $X_n \to X$ in probability. This is just going to come from the Chebyshev inequality. Let $\varepsilon > 0$. Then we have
+ \[
+ \P[|X - X_n| > \varepsilon] \leq \frac{\E[|X - X_n|]}{\varepsilon} \to 0
+ \]
+ as $n \to \infty$.
+
+ Next we show that $\{X_n\}$ is uniformly integrable. Fix $\varepsilon > 0$. Take $N$ such that $n \geq N$ implies $\E[|X - X_n|] \leq \frac{\varepsilon}{2}$. Since \emph{finite} families of $L^1$ random variables are uniformly integrable, we can pick $\delta > 0$ such that $A \in \mathcal{F}$ and $\P[A] < \delta$ implies
+ \[
+ \E[|X| \mathbf{1}_A], \E[|X_n|\mathbf{1}_A] \leq \frac{\varepsilon}{2}
+ \]
+ for $n = 1, \cdots, N$.
+
+ Now when $n > N$ and $A \in \mathcal{F}$ with $\P[A] \leq \delta$, then we have
+ \begin{align*}
+ \E[|X_n| \mathbf{1}_A] &\leq \E[|X - X_n| \mathbf{1}_A] + \E[|X|\mathbf{1}_A]\\
+ &\leq \E[|X - X_n|] + \frac{\varepsilon}{2}\\
+ &\leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2}\\
+ &= \varepsilon.
+ \end{align*}
+ So $\{X_n\}$ is uniformly integrable.
+
+ \separator
+
+ Assume that $\{X_n\}$ is uniformly integrable and $X_n \to X$ in probability.
+
+ The first step is to show that $X \in L^1$. We want to use Fatou's lemma, but to do so, we want almost sure convergence, not just convergence in probability.
+
+ Recall that we have previously shown that there is a subsequence $(X_{n_k})$ of $(X_n)$ such that $X_{n_k} \to X$ a.s. Then we have
+ \[
+ \E[|X|] = \E\left[\liminf_{k \to \infty} |X_{n_k}|\right] \leq \liminf_{k \to \infty} \E[|X_{n_k}|] < \infty
+ \]
+ since uniformly integrable families are $L^1$ bounded. So $\E[|X|] < \infty$, hence $X \in L^1$.
+
+ Next we want to show that $X_n \to X$ in $L^1$. Take $\varepsilon > 0$. Then there exists $K \in (0, \infty)$ such that
+ \[
+ \E\left[|X| \mathbf{1}_{\{|X| > K\}}\right], \E\left[|X_n| \mathbf{1}_{\{|X_n| > K\}}\right] \leq \frac{\varepsilon}{3}.
+ \]
+ To set things up so that we can use the bounded convergence theorem, we have to invent new random variables
+ \[
+ X_n^K = (X_n \vee -K) \wedge K,\quad X^K = (X \vee -K)\wedge K.
+ \]
+ Since $X_n \to X$ in probability, it follows that $X_n^K \to X^K$ in probability.
+
+ Now bounded convergence tells us that there is some $N$ such that $n \geq N$ implies
+ \[
+ \E[|X_n^K - X^K|] \leq \frac{\varepsilon}{3}.
+ \]
+ Combining, we have for $n \geq N$ that
+ \[
+ \E[|X_n - X|] \leq \E[|X_n^K - X^K|] + \E[|X|\mathbf{1}_{\{|X| \geq K\}}] + \E[|X_n|\mathbf{1}_{\{|X_n| \geq K\}}] \leq \varepsilon.
+ \]
+ So we know that $X_n \to X$ in $L^1$.
+\end{proof}
+
+The main application is that when $\{X_n\}$ is a type of stochastic process known as a \term{martingale}. This will be done in III Advanced Probability and III Stochastic Calculus.
+
+\section{Fourier transform}
+We now turn to the exciting topic of the Fourier transform. There are two main questions we want to ask --- when the Fourier transform exists, and when we can recover a function from its Fourier transform.
+
+Of course, not only do we want to know if the Fourier transform exists. We also want to know if it lies in some nice space, e.g.\ $L^2$.
+
+It turns out that when we want to prove things about Fourier transforms, it is often helpful to ``smoothen'' the function by doing what is known as a \emph{Gaussian convolution}. So after defining the Fourier transform and proving some really basic properties, we are going to investigate convolutions and Gaussians for a bit (convolutions are also useful on their own, since they correspond to sums of independent random variables). After that, we can go and prove the actual important properties of the Fourier transform.
+
+\subsection{The Fourier transform}
+When talking about Fourier transforms, we will mostly want to talk about functions $\R^d \to \C$. So from now on, we will write $L^p$ for \emph{complex valued} Borel functions on $\R^d$ with
+\[
+ \|f\|_p = \left(\int_{\R^d} |f|^p\right)^{1/p} < \infty.
+\]
+The integrals of complex-valued functions are defined on the real and imaginary parts separately, and satisfy the properties we would expect them to. The details are on the first example sheet.
+
+\begin{defi}[Fourier transform]\index{Fourier transform}
+ The \emph{Fourier transform} $\hat{f}: \R^d \to \C$ of $f \in L^1(\R^d)$ is given by
+ \[
+ \hat{f}(u) = \int_{\R^d}f(x) e^{i (u, x)}\;\d x,
+ \]
+ where $u \in \R^d$ and $(u, x)$ denotes the inner product, i.e.
+ \[
+ (u, x) = u_1 x_1 + \cdots + u_d x_d.
+ \]
+\end{defi}
+
+Why do we care about Fourier transforms? Many computations are easier with $\hat{f}$ in place of $f$, especially computations that involve differentiation and convolutions (which are relevant to sums of independent random variables). In particular, we will use it to prove the central limit theorem.
+
+More generally, we can define the Fourier transform of a measure:
+
+\begin{defi}[Fourier transform of measure]\index{Fourier transform!of measure}
+ The Fourier transform of a \emph{finite measure} $\mu$ on $\R^d$ is the function $\hat{\mu}: \R^d \to \C$ given by
+ \[
+ \hat{\mu}(u) = \int_{\R^d} e^{i(u, x)} \mu(\d x).
+ \]
+\end{defi}
+
+In the context of probability, we give these things a different name:
+\begin{defi}[Characteristic function]\index{characteristic function}\index{random variable!characteristic function}
+ Let $X$ be a random variable. Then the \emph{characteristic function} of $X$ is the Fourier transform of its law, i.e.
+ \[
+ \phi_X(u) = \E[e^{i(u, X)}] = \hat{\mu}_X(u),
+ \]
+ where $\mu_X$ is the law of $X$.
+\end{defi}
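+For example, we can compute the characteristic function of a simple discrete random variable directly from the definition:
+
+\begin{eg}
+ Let $X$ take the values $\pm 1$ with probability $\frac{1}{2}$ each. Then
+ \[
+ \phi_X(u) = \frac{1}{2} e^{iu} + \frac{1}{2} e^{-iu} = \cos u.
+ \]
+\end{eg}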
+
+We now make the following (trivial) observations:
+\begin{prop}
+ \[
+ \|\hat{f}\|_\infty \leq \|f\|_1, \quad \|\hat{\mu}\|_\infty \leq \mu(\R^d).
+ \]
+\end{prop}
+
+Less trivially, we have the following result:
+\begin{prop}
+ The functions $\hat{f}, \hat{\mu}$ are continuous.
+\end{prop}
+
+\begin{proof}
+ If $u_n \to u$, then
+ \[
+ f(x) e^{i(u_n, x)} \to f(x) e^{i(u, x)}.
+ \]
+ Also, we know that
+ \[
+ |f(x) e^{i(u_n, x)}| = |f(x)|.
+ \]
+ So we can apply the dominated convergence theorem with $|f|$ as the dominating function. The same argument works for $\hat{\mu}$, using the constant function $1$ as the bound, which is integrable since $\mu$ is finite.
+\end{proof}
+
+\subsection{Convolutions}
+To actually do something useful with Fourier transforms, we need to talk about convolutions.
+
+\begin{defi}[Convolution of random variables]\index{convolution!random variable}\index{random variable!convolution}
+ Let $\mu, \nu$ be probability measures. Their \emph{convolution} $\mu * \nu$ is the law of $X + Y$, where $X$ has law $\mu$ and $Y$ has law $\nu$, and $X, Y$ are independent. Explicitly, we have
+ \begin{align*}
+ \mu * \nu (A) &= \P[X + Y \in A] \\
+ &= \iint \mathbf{1}_{A}(x + y)\;\mu(\d x)\;\nu(\d y).
+ \end{align*}
+\end{defi}
+Let's suppose that $\mu$ has a density function $f$ with respect to the Lebesgue measure. Then we have
+\begin{align*}
+ \mu * \nu(A) &= \iint \mathbf{1}_A(x + y) f(x) \;\d x\;\nu(\d y)\\
+ &= \iint \mathbf{1}_A(x) f(x - y) \;\d x\;\nu(\d y)\\
+ &= \int \mathbf{1}_A(x) \left(\int f(x - y)\;\nu (\d y)\right)\;\d x.
+\end{align*}
+So we know that $\mu * \nu$ has density
+\[
+ \int f(x - y)\;\nu (\d y).
+\]
+This thing has a name.
+\begin{defi}[Convolution of function with measure]\index{convolution!function with measure}
+ Let $f \in L^p$ and $\nu$ a probability measure. Then the \emph{convolution} of $f$ with $\nu$ is
+ \[
+ f * \nu(x) = \int f(x - y)\;\nu(\d y) \in L^p.
+ \]
+\end{defi}
+Note that we do have to treat the two cases of convolutions separately, since a measure need not have a density, and a function need not specify a probability measure (it may not integrate to $1$).
+
+We check that it is indeed in $L^p$. Since $\nu$ is a probability measure, Jensen's inequality says we have
+\begin{align*}
+ \|f * \nu\|_p^p &= \int \left(\int |f(x - y)| \nu(\d y)\right)^p \;\d x\\
+ &\leq \iint |f(x - y)|^p \nu(\d y)\;\d x\\
+ &= \iint |f(x - y)|^p \;\d x \;\nu (\d y)\\
+ &= \|f\|_p^p \\
+ &< \infty.
+\end{align*}
+In fact, from this computation, we see that
+\begin{prop}
+ For any $f \in L^p$ and $\nu$ a probability measure, we have
+ \[
+ \|f * \nu\|_p \leq \|f\|_p.
+ \]
+\end{prop}
+
+The interesting thing happens when we try to take the Fourier transform of a convolution.
+\begin{prop}
+ \[
+ \widehat{f * \nu}(u) = \hat{f}(u) \hat{\nu}(u).
+ \]
+\end{prop}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \widehat{f * \nu}(u) &= \int \left(\int f(x - y) \nu(\d y)\right) e^{i(u, x)}\;\d x\\
+ &= \iint f(x - y) e^{i(u, x)} \;\d x\;\nu (\d y)\\
+ &= \int \left(\int f(x - y) e^{i(u, x - y)} \;\d x\right) e^{i(u, y)}\;\nu(\d y)\\
+ &= \int \left(\int f(x) e^{i(u, x)} \;\d x\right) e^{i(u, y)}\;\nu(\d y)\\
+ &= \int \hat{f}(u) e^{i(u, y)}\;\nu(\d y)\\
+ &= \hat{f}(u) \int e^{i(u, y)}\;\nu(\d y)\\
+ &= \hat{f}(u) \hat{\nu}(u).\qedhere
+ \end{align*}
+\end{proof}
+
+In the context of random variables, we have a similar result:
+\begin{prop}
+ Let $\mu, \nu$ be probability measures, and $X, Y$ be independent random variables with laws $\mu, \nu$ respectively. Then
+ \[
+ \widehat{\mu * \nu}(u) = \hat{\mu}(u) \hat{\nu}(u).
+ \]
+\end{prop}
+
+\begin{proof}
+ We have
+ \[
+ \widehat{\mu * \nu}(u) = \E[e^{i(u, X + Y)}] = \E[e^{i(u, X)}] \E[e^{i(u, Y)}] = \hat{\mu}(u) \hat{\nu}(u).\qedhere
+ \]
+\end{proof}
+
+\subsection{Fourier inversion formula}
+We now want to work towards proving the Fourier inversion formula:
+\begin{thm}[Fourier inversion formula]
+ Let $f,\hat{f} \in L^1$. Then
+ \[
+ f(x) = \frac{1}{(2\pi)^d} \int \hat{f}(u) e^{-i(u, x)} \;\d u\text{ a.e.}
+ \]
+\end{thm}
+
+Our strategy is as follows:
+\begin{enumerate}
+ \item Show that the Fourier inversion formula holds for a Gaussian distribution by direct computations.
+ \item Show that the formula holds for Gaussian convolutions, i.e.\ the convolution of an arbitrary function with a Gaussian.
+ \item We show that any function can be approximated by a Gaussian convolution.
+\end{enumerate}
+Note that the last part makes a lot of sense. If $X$ is a random variable with density $f$, then convolving $f$ with $g_t$ gives the density of $X + \sqrt{t}Z$ for $Z$ an independent standard Gaussian, and if we take $t \to 0$, we recover the original variable. What we have to do is to show that this behaves sufficiently well with the Fourier transform and the Fourier inversion formula that we will actually get the result we want.
+
+\subsubsection*{Gaussian densities}
+Before we start, we had better define the Gaussian distribution.
+\begin{defi}[Gaussian density]\index{Gaussian density}
+ The \emph{Gaussian density} with variance $t$ is
+ \[
+ g_t(x) = \left(\frac{1}{2\pi t}\right)^{d/2} e^{-|x|^2/(2t)}.
+ \]
+ This is equivalently the density of $\sqrt{t} Z$, where $Z = (Z_1, \cdots, Z_d)$ with $Z_i \sim N(0, 1)$ independent.
+\end{defi}
+
+We now want to compute the Fourier transform directly and show that the Fourier inversion formula works for this.
+
+We start off by working in the case $d = 1$ and $Z \sim N(0, 1)$. We want to compute the Fourier transform of the law of this guy, i.e.\ its characteristic function. We will use a nice trick.
+
+\begin{prop}
+ Let $Z \sim N(0, 1)$. Then
+ \[
+ \phi_Z(u) = e^{-u^2/2}.
+ \]
+\end{prop}
+We see that this is in fact a Gaussian up to a factor of $\sqrt{2\pi}$.
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \phi_Z(u) &= \E[e^{iuZ}]\\
+ &= \frac{1}{\sqrt{2\pi}} \int e^{iux} e^{-x^2/2}\;\d x.
+ \end{align*}
+ We now notice that the integrand and its $u$-derivative are dominated by the integrable functions $e^{-x^2/2}$ and $|x| e^{-x^2/2}$ respectively, so we can differentiate under the integral sign, and obtain
+ \begin{align*}
+ \phi_Z'(u) &= \E[iZ e^{iuZ}]\\
+ &= \frac{1}{\sqrt{2\pi}} \int ix e^{iux} e^{-x^2/2}\;\d x\\
+ &= - u\phi_Z(u),
+ \end{align*}
+ where the last equality is obtained by integrating by parts. So we know that $\phi_Z(u)$ solves
+ \[
+ \phi_Z'(u) = -u \phi_Z(u).
+ \]
+ This is easy to solve, since we can just integrate this. We find that
+ \[
+ \log \phi_Z(u) = - \frac{1}{2}u^2 + C.
+ \]
+ So we have
+ \[
+ \phi_Z(u) = A e^{-u^2/2}.
+ \]
+ We know that $A = 1$, since $\phi_Z(0) = 1$. So we have
+ \[
+ \phi_Z(u) = e^{-u^2/2}.\qedhere
+ \]
+\end{proof}
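For the record, the integration by parts in the proof can be spelled out, noting that $x e^{-x^2/2} = -\frac{\d}{\d x} e^{-x^2/2}$ and that the boundary terms vanish:

```latex
\begin{align*}
  \frac{1}{\sqrt{2\pi}} \int ix\, e^{iux} e^{-x^2/2}\;\d x
  &= -\frac{i}{\sqrt{2\pi}} \int e^{iux}\, \frac{\d}{\d x}\left(e^{-x^2/2}\right)\;\d x\\
  &= \frac{i}{\sqrt{2\pi}} \int (iu)\, e^{iux}\, e^{-x^2/2}\;\d x
   = -u\, \phi_Z(u).
\end{align*}
```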
+
+We now do this problem in general.
+\begin{prop}
+ Let $Z = (Z_1, \cdots, Z_d)$ with $Z_j \sim N(0, 1)$ independent. Then $\sqrt{t} Z$ has density
+ \[
+ g_t(x) = \frac{1}{(2\pi t)^{d/2}} e^{-|x|^2/(2t)},
+ \]
+ with
+ \[
+ \hat{g}_t(u) = e^{-|u|^2 t/2}.
+ \]
+\end{prop}
+
+\begin{proof}
+ We have
+ \begin{align*}
+ \hat{g}_t(u) &= \E[e^{i(u, \sqrt{t}Z)}]\\
+ &= \prod_{j = 1}^d \E[e^{i u_j \sqrt{t} Z_j}]\\
+ &= \prod_{j = 1}^d \phi_{Z_j}(\sqrt{t} u_j)\\
+ &= \prod_{j = 1}^d e^{-t u_j^2/2}\\
+ &= e^{-|u|^2 t/2}.\qedhere
+ \end{align*}
+\end{proof}
+Again, $g_t$ and $\hat{g}_t$ are almost the same, apart from the factor of $(2\pi t)^{-d/2}$ and $t$ being replaced by $1/t$. We can thus write this as
+\[
+ \hat{g}_t(u) = (2\pi)^{d/2} t^{-d/2} g_{1/t} (u).
+\]
+So this tells us that
+\[
+ \hat{\hat{g}}_t(u) = (2\pi)^d g_t(u).
+\]
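To see this, apply the identity $\hat{g}_t = (2\pi)^{d/2} t^{-d/2} g_{1/t}$ twice:

```latex
\[
  \hat{\hat{g}}_t(u) = (2\pi)^{d/2} t^{-d/2}\, \hat{g}_{1/t}(u)
  = (2\pi)^{d/2} t^{-d/2} \cdot (2\pi)^{d/2} (1/t)^{-d/2}\, g_t(u)
  = (2\pi)^d g_t(u).
\]
```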
+This is not exactly the same as saying the Fourier inversion formula works, because in the Fourier inversion formula, we integrated against $e^{-i(u, x)}$, not $e^{i(u, x)}$. However, we know that by the symmetry of the Gaussian distribution, we have
+\[
+ g_t(x) = g_t(-x) = (2\pi)^{-d} \hat{\hat{g}}_t (-x) = \left(\frac{1}{2\pi}\right)^d \int \hat{g}_t(u) e^{-i(u, x)}\;\d u.
+\]
+So we conclude that
+\begin{lemma}
+ The Fourier inversion formula holds for the Gaussian density function.
+\end{lemma}
+
+\subsubsection*{Gaussian convolutions}
+\begin{defi}[Gaussian convolution]\index{Gaussian convolution}
+ Let $f \in L^1$. Then a \emph{Gaussian convolution} of $f$ is a function of the form $f * g_t$.
+\end{defi}
+
+We are now going to do a little computation that shows that functions of this type also satisfy the Fourier inversion formula.
+
+Before we start, we make some observations about the Gaussian convolution. By general theory of convolutions, we know that we have
+\begin{prop}
+ \[
+ \|f * g_t\|_1 \leq \|f\|_1.
+ \]
+\end{prop}
+We also have a pointwise bound
+\begin{align*}
+ |f*g_t(x)| &= \left|\int f(x - y) e^{-|y|^2/(2t)} \left(\frac{1}{2\pi t}\right)^{d/2} \;\d y\right| \\
+ &\leq (2\pi t)^{-d/2} \int |f(x - y)| \;\d y\\
+ &\leq (2\pi t)^{-d/2} \|f\|_1.
+\end{align*}
+This tells us that in fact
+\begin{prop}
+ \[
+ \|f * g_t\|_\infty \leq (2\pi t)^{-d/2} \|f\|_1.
+ \]
+\end{prop}
+So in fact the convolution is pointwise bounded. We see that the bound gets worse as $t \to 0$, and we will see that this is because as $t \to 0$, the convolution $f * g_t$ becomes a better and better approximation of $f$, and we did not assume that $f$ is bounded.
+
+Similarly, we can compute that
+\begin{prop}
+ \[
+ \|\widehat{f * g_t}\|_1 = \|\hat{f} \hat{g}_t\|_1 \leq (2\pi)^{d/2} t^{-d/2}\|f\|_1,
+ \]
+ and
+ \[
+ \|\widehat{f * g_t}\|_\infty \leq \|\hat{f}\|_\infty.
+ \]
+\end{prop}
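Both bounds come from $\|\hat{f}\|_\infty \leq \|f\|_1$ and $|\hat{g}_t(u)| \leq 1$, together with the Gaussian integral $\int e^{-|u|^2 t/2}\;\d u = (2\pi/t)^{d/2}$:

```latex
\[
  \|\hat{f} \hat{g}_t\|_1 \leq \|\hat{f}\|_\infty \int e^{-|u|^2 t/2}\;\d u
  \leq (2\pi)^{d/2} t^{-d/2} \|f\|_1, \qquad
  \|\hat{f} \hat{g}_t\|_\infty \leq \|\hat{f}\|_\infty \leq \|f\|_1.
\]
```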
+Now given these bounds, it makes sense to write down the Fourier inversion formula for a Gaussian convolution.
+\begin{lemma}
+ The Fourier inversion formula holds for Gaussian convolutions.
+\end{lemma}
+
+We are going to reduce this to the fact that the Gaussian distribution itself satisfies Fourier inversion.
+
+\begin{proof}
+ We have
+ \begin{align*}
+ f * g_t(x) &= \int f(x - y) g_t(y) \;\d y\\
+ &= \int f(x - y) \left(\frac{1}{(2\pi)^d} \int \hat{g}_t(u) e^{-i(u, y)}\;\d u\right)\;\d y\\
+ &= \left(\frac{1}{2\pi}\right)^d \iint f(x - y) \hat{g}_t(u) e^{-i(u, y)}\;\d u\;\d y\\
+ &= \left(\frac{1}{2\pi}\right)^d \int \left(\int f(x - y) e^{i(u, x - y)} \;\d y\right) \hat{g}_t(u) e^{-i(u, x)}\;\d u\\
+ &= \left(\frac{1}{2\pi}\right)^d \int \hat{f}(u)\hat{g}_t(u) e^{-i(u, x)}\;\d u\\
+ &= \left(\frac{1}{2\pi}\right)^d \int \widehat{f * g_t}(u) e^{-i(u, x)}\;\d u.
+ \end{align*}
+ So done.
+\end{proof}
+
+\subsubsection*{The proof}
+Finally, we can prove the Fourier inversion formula in full generality, assuming only that $f, \hat{f} \in L^1$.
+\begin{thm}[Fourier inversion formula]
+ Let $f \in L^1$ and
+ \[
+ f_t(x) = (2\pi)^{-d} \int \hat{f}(u) e^{-|u|^2 t/2} e^{-i(u, x)}\;\d u = (2\pi)^{-d} \int \widehat{f * g_t}(u) e^{-i(u, x)}\;\d u.
+ \]
+ Then $\|f_t - f\|_1 \to 0$ as $t \to 0$, and the Fourier inversion formula holds whenever $f, \hat{f} \in L^1$.
+\end{thm}
+
+To prove this, we first need to show that the Gaussian convolution is indeed a good approximation of $f$:
+\begin{lemma}
+ Suppose that $f \in L^p$ with $p \in [1, \infty)$. Then $\|f * g_t - f\|_p \to 0$ as $t \to 0$.
+\end{lemma}
+Note that this cannot hold for $p = \infty$. Indeed, if $p = \infty$, then the $\infty$-norm is the uniform norm. But $f * g_t$ is always continuous, and the uniform limit of continuous functions is continuous. So the convergence cannot hold if $f$ is not (a.e.\ equal to) a continuous function.
+
+\begin{proof}
+ We fix $\varepsilon > 0$. By a question on the example sheet, we can find $h$ which is continuous and with compact support such that $\|f - h\|_p \leq \frac{\varepsilon}{3}$. So we have
+ \[
+ \|f * g_t - h * g_t\|_p = \|(f - h) * g_t\|_p \leq \|f - h\|_p \leq \frac{\varepsilon}{3}.
+ \]
+ So it suffices for us to work with a continuous function $h$ with compact support. We let
+ \[
+ e(y) = \int |h(x - y) - h(x)|^p \;\d x.
+ \]
+ We first show that $e$ is a bounded function:
+ \begin{align*}
+ e(y) &\leq \int 2^p(|h(x - y)|^p + |h(x)|^p)\;\d x\\
+ &= 2^{p + 1} \|h\|_p^p.
+ \end{align*}
+ Also, since $h$ is continuous and bounded, the dominated convergence theorem tells us that $e(y) \to 0$ as $y \to 0$.
+
+ Moreover, using the fact that $\int g_t(y) \;\d y = 1$, we have
+ \begin{align*}
+ \|h*g_t - h\|_p^p &= \int \left|\int (h(x - y) - h(x))g_t(y) \;\d y\right|^p \;\d x\\
+ \intertext{Since $g_t(y) \;\d y$ is a probability measure, by Jensen's inequality, we can bound this by}
+ &\leq \iint |h(x - y) - h(x)|^p g_t(y) \;\d y \;\d x\\
+ &= \int \left(\int |h(x - y) - h(x)|^p \;\d x\right) g_t(y) \;\d y\\
+ &= \int e(y) g_t(y) \;\d y\\
+ &= \int e(\sqrt{t} y) g_1(y)\;\d y,
+ \end{align*}
+ where we used the definition of $g_t$ and the substitution $y \mapsto \sqrt{t} y$. Since $e$ is bounded and $e(\sqrt{t} y) \to 0$ for each fixed $y$ as $t \to 0$, this tends to $0$ by the bounded convergence theorem.
+
+ Finally, we have
+ \begin{align*}
+ \|f * g_t - f\|_p &\leq \|f * g_t - h * g_t\|_p + \|h*g_t - h\|_p + \|h - f\|_p\\
+ &\leq \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \|h*g_t - h\|_p\\
+ &= \frac{2\varepsilon}{3} + \|h*g_t - h\|_p.
+ \end{align*}
+ Since we know that $\|h*g_t - h\|_p \to 0$ as $t \to 0$, for all sufficiently small $t$ the whole expression is bounded above by $\varepsilon$. So we are done.
+\end{proof}
+
+With this lemma, we can now prove the Fourier inversion theorem.
+\begin{proof}[Proof of Fourier inversion theorem]
+ The first part is just a special case of the previous lemma. Indeed, recall that
+ \[
+ \widehat{f * g_t} (u) = \hat{f}(u) e^{-|u|^2 t/2}.
+ \]
+ Since Gaussian convolutions satisfy Fourier inversion formula, we know that
+ \[
+ f_t = f * g_t.
+ \]
+ So the previous lemma says exactly that $\|f_t - f\|_1 \to 0$.
+
+ Suppose now that $\hat{f} \in L^1$ as well. Then looking at the integrand of
+ \[
+ f_t(x) = (2\pi)^{-d} \int \hat{f}(u) e^{-|u|^2t/2} e^{-i(u, x)}\;\d u,
+ \]
+ we know that
+ \[
+ \left| \hat{f}(u) e^{-|u|^2t/2} e^{-i(u, x)}\right| \leq |\hat{f}(u)|.
+ \]
+ So by the dominated convergence theorem with dominating function $|\hat{f}|$, we have
+ \[
+ f_t(x) \to (2\pi)^{-d} \int \hat{f}(u) e^{-i(u, x)}\;\d u\text{ as }t \to 0.
+ \]
+ By the first part, we know that $\|f_t - f\|_1 \to 0$ as $t \to 0$. So we can find a sequence $(t_n)$ with $t_n > 0$, $t_n \to 0$ so that $f_{t_n} \to f$ a.e. Combining these, we know that
+ \[
+ f(x) = \frac{1}{(2\pi)^d} \int \hat{f}(u) e^{-i(u, x)}\;\d u \text{ a.e.}
+ \]
+ So done.
+\end{proof}
+
+\subsection{Fourier transform in \tph{$\mathcal{L}^2$}{L2}{L2}}
+It turns out wonderful things happen when we take the Fourier transform of an $L^2$ function.
+\begin{thm}[Plancherel identity]\index{Plancherel identity}
+ For any function $f \in L^1 \cap L^2$, the \emph{Plancherel identity} holds:
+ \[
+ \|\hat{f}\|_2 = (2\pi)^{d/2} \|f\|_2.
+ \]
+\end{thm}
+As we are going to see in a moment, this is just going to follow from the Fourier inversion formula plus a clever trick.
+
+\begin{proof}
+ We first work with the special case where $f, \hat{f} \in L^1$, so that the Fourier inversion formula holds for $f$. We then have
+ \begin{align*}
+ \|f\|_2^2 &= \int f(x) \overline{f(x)}\;\d x\\
+ &= \frac{1}{(2\pi)^d} \int \left(\int \hat{f}(u) e^{-i(u, x)}\;\d u\right)\overline{f(x)}\;\d x\\
+ &= \frac{1}{(2\pi)^d} \int \hat{f}(u) \left(\int \overline{f(x)} e^{-i(u, x)}\;\d x\right) \;\d u\\
+ &= \frac{1}{(2\pi)^d} \int \hat{f}(u) \overline{\left(\int f(x) e^{i(u, x)}\;\d x\right)} \;\d u\\
+ &= \frac{1}{(2\pi)^d} \int \hat{f}(u) \overline{\hat{f}(u)} \;\d u\\
+ &= \frac{1}{(2\pi)^d} \|\hat{f}\|_2^2.
+ \end{align*}
+ So the Plancherel identity holds for $f$.
+
+ To prove it for the general case, we use this result and an approximation argument. Suppose that $f \in L^1 \cap L^2$, and let $f_t = f * g_t$. Then by our earlier lemma, we know that
+ \[
+ \|f_t\|_2 \to \|f\|_2\text{ as }t \to 0.
+ \]
+ Now note that
+ \[
+ \hat{f}_t(u) = \hat{f}(u)\hat{g}_t(u) = \hat{f}(u) e^{-|u|^2 t/2}.
+ \]
+ The important thing is that $e^{-|u|^2 t/2} \nearrow 1$ as $t \to 0$. Therefore, we know
+ \[
+ \|\hat{f}_t \|_2^2 = \int |\hat{f}(u)|^2 e^{-|u|^2 t}\; \d u \to \int |\hat{f}(u)|^2 \;\d u = \|\hat{f}\|_2^2
+ \]
+ as $t \to 0$, by monotone convergence.
+
+ Since $f_t, \hat{f}_t \in L^1$, we know that the Plancherel identity holds, i.e.
+ \[
+ \|\hat{f}_t\|_2 = (2\pi)^{d/2} \|f_t\|_2.
+ \]
+ Taking the limit as $t \to 0$, the result follows.
+\end{proof}
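By the polarization identity and linearity of the Fourier transform ($\widehat{f + i^k g} = \hat{f} + i^k \hat{g}$), the Plancherel identity for norms automatically upgrades to inner products: for $f, g \in L^1 \cap L^2$,

```latex
\[
  (\hat{f}, \hat{g}) = \frac{1}{4}\sum_{k = 0}^{3} i^k \big\|\hat{f} + i^k \hat{g}\big\|_2^2
  = \frac{(2\pi)^d}{4}\sum_{k = 0}^{3} i^k \big\|f + i^k g\big\|_2^2
  = (2\pi)^d (f, g).
\]
```

This is the fact used below when we say the map $F$ preserves the inner product.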
+
+What is this good for? It turns out that the Fourier transform gives us a bijection from $\mathcal{L}^2$ to itself. While it is not true that the Fourier inversion formula holds for everything in $\mathcal{L}^2$, it holds for enough functions that we can approximate everything else by nice ones. Then the above tells us that this bijection is in fact a norm-preserving automorphism.
+
+\begin{thm}
+ There exists a unique Hilbert space automorphism $F: \mathcal{L}^2 \to \mathcal{L}^2$ such that
+ \[
+ F([f]) = [(2\pi)^{-d/2}\hat{f}]
+ \]
+ whenever $f \in L^1 \cap L^2$.
+
+ Here $[f]$ denotes the equivalence class of $f$ in $\mathcal{L}^2$, and we say $F: \mathcal{L}^2 \to \mathcal{L}^2$ is a Hilbert space automorphism if it is a linear bijection that preserves the inner product.
+\end{thm}
+Note that in general, there is no guarantee that $F$ sends a function to its Fourier transform. We only know that when the function is well-behaved (i.e.\ in $L^1 \cap L^2$). However, the formal property of $F$ being a bijection from $\mathcal{L}^2$ to itself will be convenient for many things.
+
+\begin{proof}
+ We define $F_0: \mathcal{L}^1 \cap \mathcal{L}^2 \to \mathcal{L}^2$ by
+ \[
+ F_0([f]) = [(2\pi)^{-d/2} \hat{f}].
+ \]
+ By the Plancherel identity, we know $F_0$ preserves the $L^2$ norm, i.e.
+ \[
+ \|F_0([f])\|_2 = \|[f]\|_2.
+ \]
+ Also, we know that $\mathcal{L}^1 \cap \mathcal{L}^2$ is dense in $\mathcal{L}^2$, since even the continuous functions with compact support are dense. So we know $F_0$ extends uniquely to an isometry $F: \mathcal{L}^2 \to \mathcal{L}^2$.
+
+ Since it preserves distance, it is in particular injective. So it remains to show that the map is surjective. By Fourier inversion, the subspace
+ \[
+ V = \{[f] \in \mathcal{L}^2: f, \hat{f} \in L^1\}
+ \]
+ is sent to itself by the map $F$. Also if $f \in V$, then $F^4[f] = [f]$ (note that applying it twice does not suffice, because we actually have $F^2[f](x) = [f](-x)$). So $V$ is contained in the image of $F$. Moreover, $V$ is dense in $\mathcal{L}^2$, because it contains all Gaussian convolutions: we have $\widehat{f * g_t} = \hat{f} \hat{g}_t$, where $\hat{f}$ is bounded and $\hat{g}_t$ decays exponentially, so $\widehat{f * g_t} \in L^1$. Since the image of the isometry $F$ is closed and contains the dense subspace $V$, we know that $F$ is surjective.
+\end{proof}
+
+\subsection{Properties of characteristic functions}
+We are now going to state a bunch of theorems about characteristic functions. Since the proofs are not examinable (but the statements are!), we are only going to provide a rough proof sketch.
+
+\begin{thm}
+ The characteristic function $\phi_X$ of the distribution $\mu_X$ of a random variable $X$ determines $\mu_X$. In other words, if $X$ and $\tilde{X}$ are random variables and $\phi_X = \phi_{\tilde{X}}$, then $\mu_X = \mu_{\tilde{X}}$.
+\end{thm}
+
+\begin{proof}[Proof sketch]
+ Use Fourier inversion to show that $\phi_X$ determines $\mu_X(g) = \E[g(X)]$ for any bounded, continuous $g$.
+\end{proof}
+
+\begin{thm}
+ If $\phi_X$ is integrable, then $\mu_X$ has a bounded, continuous density function
+ \[
+ f_X(x) = (2\pi)^{-d} \int \phi_X(u) e^{-i(u, x)}\;\d u.
+ \]
+\end{thm}
+
+\begin{proof}[Proof sketch]
+ Let $Z \sim N(0, 1)$ be independent of $X$. Then $X + \sqrt{t} Z$ has a bounded continuous density function which, by Fourier inversion, is
+ \[
+ f_t(x) = (2\pi)^{-d} \int \phi_X(u) e^{-|u|^2 t / 2} e^{-i(u, x)}\;\d u.
+ \]
+ Then send $t \to 0$, using the dominated convergence theorem with dominating function $|\phi_X|$.
+\end{proof}
+
+The next theorem relates to the notion of weak convergence.
+\begin{defi}[Weak convergence of measures]\index{weak convergence!of measures}
+ Let $\mu, (\mu_n)$ be Borel probability measures. We say that $\mu_n \to \mu$ \emph{weakly} if and only if $\mu_n(g) \to \mu(g)$ for all bounded continuous $g$.
+\end{defi}
+
+Similarly, we can define weak convergence of random variables.
+
+\begin{defi}[Weak convergence of random variables]\index{weak convergence!of random variables}
+ Let $X, (X_n)$ be random variables. We say $X_n \to X$ weakly iff $\mu_{X_n} \to \mu_X$ weakly, iff $\E[g(X_n)] \to \E[g(X)]$ for all bounded continuous $g$.
+\end{defi}
+
+This is related to the notion of convergence in distribution, which we defined a long time ago without much discussion. It is an exercise on the example sheet that weak convergence of random variables in $\R$ is equivalent to convergence in distribution.
+
+It turns out that weak convergence is very useful theoretically. One reason is that it is related to convergence of characteristic functions.
+\begin{thm}
+ Let $X, (X_n)$ be random variables with values in $\R^d$. If $\phi_{X_n}(u) \to \phi_X(u)$ for each $u \in \R^d$, then $\mu_{X_n} \to \mu_X$ weakly.
+\end{thm}
+
+The main application of this result for us is that it is what allows us to prove the central limit theorem.
+
+\begin{proof}[Proof sketch]
+ By the example sheet, it suffices to show that $\E[g(X_n)] \to \E[g(X)]$ for all compactly supported $g \in C^\infty$. We then use Fourier inversion and convergence of characteristic functions to check that
+ \[
+ \E[g(X_n + \sqrt{t}Z)] \to \E[g(X + \sqrt{t}Z)]
+ \]
+ for all $t > 0$ for $Z\sim N(0, 1)$ independent of $X, (X_n)$. Then we check that $\E[g(X_n + \sqrt{t}Z)]$ is close to $\E[g(X_n)]$ for $t > 0$ small, and similarly for $X$.
+\end{proof}
+
+\subsection{Gaussian random variables}
+Recall that in the proof of the Fourier inversion theorem, we used these things called Gaussians, but didn't really say much about them. These will be useful later on when we want to prove the central limit theorem, because the central limit theorem says that in the long run, things look like Gaussians. So here we lay out some of the basic definitions and properties of Gaussians.
+
+\begin{defi}[Gaussian random variable]\index{Gaussian random variable}\index{random variable!Gaussian}
+ Let $X$ be a random variable on $\R$. This is said to be \emph{Gaussian} if there exists $\mu \in \R$ and $\sigma \in (0, \infty)$ such that the density of $X$ is
+ \[
+ f_X(x) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left(-\frac{(x - \mu)^2}{2 \sigma^2}\right).
+ \]
+ By convention, we also call a constant random variable $X = \mu$ Gaussian, corresponding to $\sigma = 0$. We say this has \term{mean} $\mu$ and \term{variance} $\sigma^2$.\index{Gaussian random variable!mean}\index{Gaussian random variable!variance}
+
+ When this happens, we write $X \sim N(\mu, \sigma^2)$.\index{$N(\mu, \sigma^2)$}
+\end{defi}
+
+For completeness, we record some properties of Gaussian random variables.
+
+\begin{prop}
+ Let $X \sim N(\mu, \sigma^2)$. Then
+ \[
+ \E[X] = \mu,\quad \var(X) = \sigma^2.
+ \]
+ Also, for any $a, b \in \R$, we have
+ \[
+ aX + b \sim N(a\mu + b, a^2 \sigma^2).
+ \]
+ Lastly, we have
+ \[
+ \phi_X(u) = e^{i\mu u - u^2 \sigma^2/2}.
+ \]
+\end{prop}
+
+\begin{proof}
+ All but the last of them follow from direct calculation, and can be found in IA Probability.
+
+ For the last part, if $X \sim N(\mu, \sigma^2)$, then we can write
+ \[
+ X = \sigma Z + \mu,
+ \]
+ where $Z \sim N(0, 1)$. Recall that we have previously found that the characteristic function of an $N(0, 1)$ random variable is
+ \[
+ \phi_Z(u) = e^{-u^2/2}.
+ \]
+ So we have
+ \begin{align*}
+ \phi_X(u) &= \E[e^{iu(\sigma Z + \mu)}]\\
+ &= e^{iu\mu} \E[e^{iu\sigma Z}]\\
+ &= e^{iu\mu} \phi_Z(u\sigma)\\
+ &= e^{iu \mu - u^2 \sigma^2/2}.\qedhere
+ \end{align*}
+\end{proof}
+What we are next going to do is to talk about the corresponding facts for the Gaussian in higher dimensions. Before that, we need to come up with the definition of a higher-dimensional Gaussian distribution. This might be different from the one you've seen previously, because we want to allow some degeneracy in our random variable, e.g.\ some of the dimensions can be constant.
+
+\begin{defi}[Gaussian random variable]\index{Gaussian random variable}\index{random variable!Gaussian}
+ Let $X$ be a random variable. We say that $X$ is a \emph{Gaussian on $\R^n$} if $(u, X)$ is Gaussian on $\R$ for all $u \in \R^n$.
+\end{defi}
+
+We are now going to prove a version of our previous theorem for higher-dimensional Gaussians.
+\begin{thm}
+ Let $X$ be Gaussian on $\R^n$, and let $A$ be an $m \times n$ matrix and $b \in \R^m$. Then
+ \begin{enumerate}
+ \item $AX + b$ is Gaussian on $\R^m$.
+ \item $X \in L^2$ and its law $\mu_X$ is determined by $\mu = \E[X]$ and $V = \var(X)$, the covariance matrix.
+ \item We have
+ \[
+ \phi_X(u) = e^{i(u, \mu) - (u, Vu)/2}.
+ \]
+ \item If $V$ is invertible, then $X$ has a density of
+ \[
+ f_X (x) = (2\pi)^{-n/2} (\det V)^{-1/2} \exp\left(-\frac{1}{2} (x - \mu, V^{-1}(x - \mu))\right).
+ \]
+ \item If $X = (X_1, X_2)$ where $X_i \in \R^{n_i}$, then $\cov(X_1, X_2) = 0$ iff $X_1$ and $X_2$ are independent.
+ \end{enumerate}
+\end{thm}
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item If $u \in \R^m$, then we have
+ \[
+ (AX + b, u) = (AX, u) + (b, u) = (X, A^T u) + (b, u).
+ \]
+ Since $(X, A^T u)$ is Gaussian and $(b, u)$ is constant, it follows that $(AX + b, u)$ is Gaussian.
+ \item We know in particular that each component of $X$ is a Gaussian random variable, so it is in $L^2$. Hence $X \in L^2$. We will prove the second part of (ii) together with (iii).
+ \item If $\mu = \E[X]$ and $V = \var(X)$, then if $u \in \R^n$, then we have
+ \[
+ \E[(u, X)] = (u, \mu),\quad \var((u, X)) = (u, Vu).
+ \]
+ So we know
+ \[
+ (u, X) \sim N((u, \mu), (u, Vu)).
+ \]
+ So it follows that
+ \[
+ \phi_X(u) = \E[e^{i(u, X)}] = e^{i(u, \mu) - (u, Vu)/2}.
+ \]
+ So $\mu$ and $V$ determine the characteristic function of $X$, which in turn determines the law of $X$.
+ \item We start off with a boring Gaussian vector $Y = (Y_1, \cdots, Y_n)$, where the $Y_i \sim N(0, 1)$ are independent. Then the density of $Y$ is
+ \[
+ f_Y(y) = (2\pi)^{-n/2} e^{-|y|^2/2}.
+ \]
+ We are now going to construct $X$ from $Y$. We define
+ \[
+ \tilde{X} = V^{1/2} Y + \mu.
+ \]
+ This makes sense because $V$ is always non-negative definite. Then $\tilde{X}$ is Gaussian with $\E[\tilde{X}] = \mu$ and $\var(\tilde{X}) = V$. Therefore, by (ii), $X$ has the same distribution as $\tilde{X}$. Since $V$ is assumed to be invertible, we can compute the density of $\tilde{X}$ using the change of variables formula.
+ \item It is clear that if $X_1$ and $X_2$ are independent, then $\cov(X_1, X_2) = 0$.
+
+ Conversely, let $X = (X_1, X_2)$, where $\cov(X_1, X_2) = 0$. Then we have
+ \[
+ V = \var(X) =
+ \begin{pmatrix}
+ V_{11} & 0\\
+ 0 & V_{22}
+ \end{pmatrix}.
+ \]
+ Then for $u = (u_1, u_2)$, we have
+ \[
+ (u, Vu) = (u_1, V_{11} u_1) + (u_2, V_{22}u_2),
+ \]
+ where $V_{11} = \var(X_1)$ and $V_{22} = \var(X_2)$. Writing $\mu = (\mu_1, \mu_2)$ similarly, we have
+ \begin{align*}
+ \phi_X(u) &= e^{i(u, \mu) - (u, Vu)/2}\\
+ &= e^{i(u_1, \mu_1) - (u_1, V_{11}u_1)/2} e^{i(u_2, \mu_2) - (u_2, V_{22}u_2)/2}\\
+ &= \phi_{X_1}(u_1)\phi_{X_2} (u_2).
+ \end{align*}
+ So it follows that $X_1$ and $X_2$ are independent.\qedhere
+ \end{enumerate}
+\end{proof}
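The identities $\E[(u, X)] = (u, \mu)$ and $\var((u, X)) = (u, Vu)$ used in (iii) follow from linearity of expectation and bilinearity of covariance:

```latex
\[
  \E[(u, X)] = \sum_{j = 1}^n u_j \E[X_j] = (u, \mu), \qquad
  \var((u, X)) = \sum_{j, k = 1}^n u_j u_k \cov(X_j, X_k) = (u, Vu).
\]
```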
+
+\section{Ergodic theory}
+We are now going to study a new topic --- ergodic theory. This is the study of the ``long run behaviour'' of a system under the evolution of some map $\Theta$. Due to time constraints, we will not do much with it. We are going to prove two ergodic theorems that tell us what happens in the long run, and this will be useful when we prove our strong law of large numbers at the end of the course.
+
+The general setting is that we have a measure space $(E, \mathcal{E}, \mu)$ and a measurable map $\Theta: E \to E$ that is measure preserving, i.e.\ $\mu(A) = \mu(\Theta^{-1}(A))$ for all $A \in \mathcal{E}$.
+
+\begin{eg}
+ Take $(E, \mathcal{E}, \mu) = ([0, 1), \mathcal{B}([0, 1)), \mathrm{Lebesgue})$. For each $a \in [0, 1)$, we can define
+ \[
+ \Theta_a (x) = x + a \mod 1.
+ \]
+ By what we've done earlier in the course, we know this translation map preserves the Lebesgue measure on $[0, 1)$.
+\end{eg}
+
+Our goal is to try to understand the ``long run averages'' of the system when we apply $\Theta$ many times. One particular quantity we are going to look at is the following:
+
+Let $f$ be measurable. We define\index{$S_n(f)$}
+\[
+ S_n(f) = f + f \circ \Theta + \cdots + f \circ \Theta^{n - 1}.
+\]
+We want to know the long run behaviour of $\frac{S_n(f)}{n}$ as $n \to \infty$.
+
+The \emph{ergodic theorems} are going to give us the answer in a certain special case. Finally, we will apply this in a particular case to get the strong law of large numbers.
+
+\begin{defi}[Invariant subset]\index{invariant subset}
+ We say $A \in \mathcal{E}$ is invariant for $\Theta$ if $A = \Theta^{-1}(A)$.
+\end{defi}
+
+\begin{defi}[Invariant function]\index{invariant function}
+ A measurable function $f$ is invariant if $f = f \circ \Theta$.
+\end{defi}
+
+\begin{defi}[$\mathcal{E}_\Theta$]\index{$\mathcal{E}_\Theta$}
+ We write
+ \[
+ \mathcal{E}_\Theta = \{A \in \mathcal{E}: A\text{ is invariant}\}.
+ \]
+\end{defi}
+It is easy to show that $\mathcal{E}_\Theta$ is a $\sigma$-algebra, and that $f: E \to \R$ is invariant iff it is $\mathcal{E}_\Theta$-measurable.
+
+\begin{defi}[Ergodic]\index{ergodic}
+ We say $\Theta$ is \emph{ergodic} if $A \in \mathcal{E}_\Theta$ implies $\mu(A) = 0$ or $\mu(A^C) = 0$.
+\end{defi}
+
+\begin{eg}
+ For the translation map on $[0, 1)$, we have $\Theta_a$ is ergodic iff $a$ is irrational. Proof is left on example sheet 4.
+\end{eg}
+
+\begin{prop}
+ If $f$ is integrable and $\Theta$ is measure preserving, then $f \circ \Theta$ is integrable and
+ \[
+ \int_E f \circ \Theta \;\d \mu = \int_E f \;\d \mu.
+ \]
+\end{prop}
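As a hint, this is an instance of the standard machine: for indicators the claim is exactly the measure-preserving property, since $\mathbf{1}_A \circ \Theta = \mathbf{1}_{\Theta^{-1}(A)}$, and one extends to simple functions by linearity and to integrable functions by monotone convergence:

```latex
\[
  \int_E \mathbf{1}_A \circ \Theta \;\d \mu = \mu(\Theta^{-1}(A)) = \mu(A)
  = \int_E \mathbf{1}_A \;\d \mu.
\]
```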
+
+It turns out that if $\Theta$ is ergodic, then there aren't that many invariant functions.
+\begin{prop}
+ If $\Theta$ is ergodic and $f$ is invariant, then there exists a constant $c$ such that $f = c$ a.e.
+\end{prop}
+The proofs of these are left as an exercise on example sheet $4$.
+
+We are now going to spend a little bit of time studying a particular example, because this will be needed to prove the strong law of large numbers.
+
+\begin{eg}[Bernoulli shifts]\index{Bernoulli shift}
+ Let $m$ be a probability distribution on $\R$. Then there exists an iid sequence $Y_1, Y_2, \cdots$ with law $m$. Recall we constructed this in a really funny way. Now we are going to build it in a more natural way.
+
+ We let $E = \R^\N$ be the set of all real sequences $(x_n)$. We define the $\sigma$-algebra $\mathcal{E}$ to be the $\sigma$-algebra generated by the projections $X_n(x) = x_n$. In other words, this is the smallest $\sigma$-algebra such that all these functions are measurable. Alternatively, this is the $\sigma$-algebra generated by the $\pi$-system
+ \[
+ \mathcal{A} = \left\{\prod_{n \in \N} A_n, A_n \in \mathcal{B}\text{ for all $n$ and $A_n = \R$ eventually}\right\}.
+ \]
+ Finally, to define the measure $\mu$, we let
+ \[
+ Y = (Y_1, Y_2, \cdots) : \Omega \to E
+ \]
+ where $Y_i$ are iid random variables defined earlier, and $\Omega$ is the sample space of the $Y_i$.
+
+ Then $Y$ is a measurable map because each of the $Y_i$'s is a random variable. We let $\mu = \P \circ Y^{-1}$.
+
+ By the independence of $Y_i$'s, we have that
+ \[
+ \mu(A) = \prod_{n \in \N} m(A_n)
+ \]
+ for any
+ \[
+ A = A_1 \times A_2 \times \cdots \times A_n \times \R \times \cdots \times \R.
+ \]
+ Note that the product is eventually $1$, so it is really a finite product.
+
+ This $(E, \mathcal{E}, \mu)$ is known as the \term{canonical space} associated with the sequence of iid random variables with law $m$.
+
+ Finally, we need to define $\Theta$. We define $\Theta: E \to E$ by
+ \[
+ \Theta(x) = \Theta(x_1, x_2, x_3, \cdots) = (x_2, x_3, x_4, \cdots).
+ \]
+ This is known as the \term{shift map}.
+\end{eg}
+Why do we care about this? Later, we are going to look at the function
+\[
+ f(x) = f(x_1, x_2, \cdots) = x_1.
+\]
+Then we have
+\[
+ S_n(f) = f + f \circ \Theta + \cdots + f \circ \Theta^{n - 1} = x_1 + \cdots + x_n.
+\]
+So $\frac{S_n(f)}{n}$ will be the average of the first $n$ terms. So ergodic theory will tell us about the long-run behaviour of this average.
+
+\begin{thm}
+ The shift map $\Theta$ is an ergodic, measure preserving transformation.
+\end{thm}
+
+\begin{proof}
+ It is an exercise to show that $\Theta$ is measurable and measure preserving.
+
+ To show that $\Theta$ is ergodic, recall the definition of the tail $\sigma$-algebra
+ \[
+ \mathcal{T}_n = \sigma(X_m: m \geq n + 1),\quad \mathcal{T} = \bigcap_n \mathcal{T}_n.
+ \]
+ Suppose that $A = \prod_{n \in \N} A_n \in \mathcal{A}$. Then
+ \[
+ \Theta^{-n}(A) = \{X_{n + k} \in A_k \text{ for all }k\} \in \mathcal{T}_n.
+ \]
+ Since $\mathcal{T}_n$ is a $\sigma$-algebra, $\Theta^{-n}(A) \in \mathcal{T}_n$ for all $A \in \mathcal{A}$, and $\sigma(\mathcal{A}) = \mathcal{E}$, we know that $\Theta^{-n}(A) \in \mathcal{T}_n$ for all $A \in \mathcal{E}$.
+
+ So if $A \in \mathcal{E}_\Theta$, i.e.\ $A = \Theta^{-1}(A)$, then $A = \Theta^{-n}(A) \in \mathcal{T}_n$ for all $n$. So $A \in \mathcal{T}$.
+
+ From the Kolmogorov 0-1 law, we know either $\mu[A] = 1$ or $\mu[A] = 0$. So done.
+\end{proof}
+
+\subsection{Ergodic theorems}
+The proofs in this section are non-examinable.
+
+Instead of proving the ergodic theorems directly, we first start by proving the following magical lemma:
+\begin{lemma}[Maximal ergodic lemma]\index{maximal ergodic lemma}
+ Let $f$ be integrable, and
+ \[
+ S^* = \sup_{n \geq 0} S_n(f) \geq 0,
+ \]
+ where $S_0(f) = 0$ by convention. Then
+ \[
+ \int_{\{S^* > 0\}} f \;\d \mu \geq 0.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We let
+ \[
+ S_n^* = \max_{0 \leq m \leq n} S_m
+ \]
+ and
+ \[
+ A_n = \{S_n^* > 0\}.
+ \]
+ Now if $1 \leq m \leq n$, then we know
+ \[
+ S_m = f + S_{m - 1} \circ \Theta \leq f + S_n^* \circ \Theta.
+ \]
+ Now on $A_n$, we have
+ \[
+ S_n^* = \max_{1 \leq m \leq n} S_m,
+ \]
+ since $S_0 = 0$. So we have
+ \[
+ S_n^* \leq f + S_n^* \circ \Theta.
+ \]
+ On $A_n^C$, we have
+ \[
+ S_n^* = 0 \leq S_n^* \circ \Theta.
+ \]
+ So we know
+ \begin{align*}
+ \int_E S_n^*\;\d \mu&= \int_{A_n} S_n^*\; \d \mu + \int_{A_n^C} S_n^* \;\d \mu\\
+ &\leq \int_{A_n} f \;\d \mu + \int_{A_n} S_n^* \circ \Theta \;\d \mu + \int_{A_n^C} S_n^* \circ \Theta \;\d \mu\\
+ &= \int_{A_n} f\;\d \mu + \int_E S_n^* \circ \Theta \;\d \mu\\
+ &= \int_{A_n} f\;\d \mu + \int_E S_n^* \;\d \mu
+ \end{align*}
+ So we know
+ \[
+ \int_{A_n} f\;\d \mu \geq 0.
+ \]
+ Taking the limit as $n \to \infty$ gives the result by dominated convergence, with dominating function $|f|$.
+\end{proof}
+
+We are now going to prove the two ergodic theorems, which tell us the limiting behaviour of $S_n(f)$.
+
+\begin{thm}[Birkhoff's ergodic theorem]\index{Birkhoff's ergodic theorem}
+ Let $(E, \mathcal{E}, \mu)$ be $\sigma$-finite and $f$ be integrable. There exists an invariant function $\bar{f}$ such that
+ \[
+ \mu(|\bar{f}|) \leq \mu(|f|),
+ \]
+ and
+ \[
+ \frac{S_n(f)}{n} \to \bar{f}\text{ a.e.}
+ \]
+ If $\Theta$ is ergodic, then $\bar{f}$ is a constant.
+\end{thm}
+Note that the theorem only gives $\mu(|\bar{f}|) \leq \mu(|f|)$. However, in many cases, we can use some integration theorems such as dominated convergence to argue that they must in fact be equal. In particular, in the ergodic case, this will allow us to find the value of $\bar{f}$.
+
+\begin{thm}[von Neumann's ergodic theorem]\index{von Neumann's ergodic theorem}
+ Let $(E, \mathcal{E}, \mu)$ be a \emph{finite} measure space. Let $p \in [1, \infty)$ and assume that $f \in L^p$. Then there is some function $\bar{f} \in L^p$ such that
+ \[
+ \frac{S_n(f)}{n} \to \bar{f}\text{ in $L^p$}.
+ \]
+\end{thm}
+
+\begin{proof}[Proof of Birkhoff's ergodic theorem]
+ We first note that
+ \[
+ \limsup_n \frac{S_n}{n},\quad \liminf_n \frac{S_n}{n}
+ \]
+ are invariant functions. Indeed, we know
+ \begin{align*}
+ S_n \circ \Theta &= f \circ \Theta + f \circ \Theta^2 + \cdots + f \circ \Theta^n\\
+ &= S_{n + 1} - f
+ \end{align*}
+ So we have
+ \[
+ \limsup_{n \to \infty} \frac{S_n \circ \Theta}{n} = \limsup_{n \to \infty} \frac{S_{n + 1} - f}{n} = \limsup_{n \to \infty} \frac{S_n}{n}.
+ \]
+ Exactly the same reasoning tells us the $\liminf$ is also invariant.
+
+ What we now need to show is that the set of points on which the $\limsup$ and $\liminf$ do not agree has measure zero. Fix $a < b$, and let
+ \[
+ D = D(a, b) = \left\{x \in E: \liminf_{n \to \infty} \frac{S_n(x)}{n} < a < b < \limsup_{n \to \infty} \frac{S_n(x)}{n}\right\}.
+ \]
+ Now if $\limsup \frac{S_n(x)}{n} \not= \liminf \frac{S_n(x)}{n}$, then there are some $a, b \in \Q$ with $a < b$ such that $x \in D(a, b)$. So by countable subadditivity, it suffices to show that $\mu(D(a, b)) = 0$ for all $a, b$.
+
+ We now fix $a, b$, and just write $D$. Since $\limsup \frac{S_n}{n}$ and $\liminf \frac{S_n}{n}$ are both invariant, we have that $D$ is invariant. By restricting to $D$, we can assume that $D = E$.
+
+ Suppose that $B \in \mathcal{E}$ and $\mu(B) < \infty$. We let
+ \[
+ g = f - b \mathbf{1}_B.
+ \]
+ Then $g$ is integrable because $f$ is integrable and $\mu(B) < \infty$. Moreover, we have
+ \[
+ S_n(g) = S_n(f - b \mathbf{1}_B) \geq S_n(f) - nb.
+ \]
+ Since $\limsup_n \frac{S_n(f)}{n} > b$ on $D$ by definition, for each $x \in D$ we can find an $n$ such that $S_n(g)(x) > 0$. So we know that
+ \[
+ S^*(g)(x) = \sup_n S_n(g)(x) > 0
+ \]
+ for all $x \in D$. By the maximal ergodic lemma, we know
+ \[
+ 0 \leq \int_D g \;\d \mu = \int_D f - b \mathbf{1}_B \;\d \mu = \int_D f \;\d \mu - b \mu(B).
+ \]
+ If we rearrange this, we know
+ \[
+ b \mu(B) \leq \int_D f \;\d \mu.
+ \]
+ for all measurable sets $B \in \mathcal{E}$ with finite measure. Since our space is $\sigma$-finite, we can find $B_n \nearrow D$ such that $\mu(B_n) < \infty$ for all $n$. So taking the limit above tells us
+ \[
+ b \mu(D) \leq \int_D f \;\d \mu.\tag{$\dagger$}
+ \]
+ Now we can apply the same argument with $(-a)$ in place of $b$ and $(-f)$ in place of $f$ to get
+ \[
+ (-a) \mu(D) \leq -\int_D f \;\d \mu.\tag{$\ddagger$}
+ \]
+ Now note that since $b > a$, we know that at least one of $b > 0$ and $a < 0$ has to be true. In the first case, $(\dagger)$ tells us that $\mu(D)$ is finite, since $f$ is integrable. Then combining with $(\ddagger)$, we see that
+ \[
+ b \mu(D) \leq \int_D f \;\d\mu \leq a \mu(D).
+ \]
+ But $a < b$. So we must have $\mu(D) = 0$. The second case follows similarly (or follows immediately by flipping the sign of $f$).
+
+ We are almost done. We can now define
+ \[
+ \bar{f}(x) =
+ \begin{cases}
+ \lim S_n(f)/n & \text{if the limit exists}\\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ Then by the above calculations, we have
+ \[
+ \frac{S_n(f)}{n} \to \bar{f}\text{ a.e.}
+ \]
+ Also, we know $\bar{f}$ is invariant, because $\lim S_n(f)/n$ is invariant, and so is the set where the limit exists.
+
+ Finally, we need to show that
+ \[
+ \mu(|\bar{f}|) \leq \mu(|f|).
+ \]
+ This is since
+ \[
+ \mu(|f \circ \Theta^n|) = \mu(|f|)
+ \]
+ as $\Theta^n$ is measure preserving. So we have that
+ \[
+ \mu(|S_n|) \leq n \mu(|f|) < \infty.
+ \]
+ So by Fatou's lemma, we have
+ \begin{align*}
+ \mu(|\bar{f}|) &\leq \mu\left(\liminf_n \left|\frac{S_n}{n}\right|\right)\\
+ &\leq \liminf_n \mu\left(\left|\frac{S_n}{n}\right|\right)\\
+ &\leq \mu(|f|)
+ \end{align*}
+\end{proof}
+
+The proof of the von Neumann ergodic theorem follows easily from Birkhoff's ergodic theorem.
+\begin{proof}[Proof of von Neumann ergodic theorem]
+ It is an exercise on the example sheet to show that
+ \[
+ \|f \circ \Theta\|_p^p = \int |f \circ \Theta|^p \;\d \mu = \int |f|^p \;\d \mu = \|f\|_p^p.
+ \]
+ So we have
+ \[
+ \norm{\frac{S_n}{n}}_p = \frac{1}{n} \|f + f \circ \Theta + \cdots + f \circ \Theta^{n - 1}\|_p \leq \|f\|_p
+ \]
+ by Minkowski's inequality.
+
+ So let $\varepsilon > 0$, and take $M \in (0, \infty)$ so that if
+ \[
+ g = (f \vee (-M)) \wedge M,
+ \]
+ then
+ \[
+ \|f - g\|_p < \frac{\varepsilon}{3}.
+ \]
+ By Birkhoff's theorem, we know
+ \[
+ \frac{S_n(g)}{n} \to \bar{g}
+ \]
+ a.e.
+
+ Also, we know
+ \[
+ \abs{\frac{S_n(g)}{n}} \leq M
+ \]
+ for all $n$. So by bounded convergence theorem, we know
+ \[
+ \norm{\frac{S_n(g)}{n} - \bar{g}}_p \to 0
+ \]
+ as $n \to \infty$. So we can find $N$ such that $n \geq N$ implies
+ \[
+ \norm{\frac{S_n(g)}{n} - \bar{g}}_p < \frac{\varepsilon}{3}.
+ \]
+ Then we have
+ \begin{align*}
+ \norm{\bar{f} - \bar{g}}_p^p &= \int \liminf_n \abs{\frac{S_n(f - g)}{n}}^p \;\d \mu\\
+ &\leq \liminf \int \abs{\frac{S_n(f - g)}{n}}^p \;\d\mu\\
+ &\leq \|f - g\|_p^p.
+ \end{align*}
+ So if $n \geq N$, then we know
+ \[
+ \norm{\frac{S_n(f)}{n} - \bar{f}}_p \leq \norm{\frac{S_n(f - g)}{n}}_p + \norm{\frac{S_n(g)}{n} - \bar{g}}_p + \norm{\bar{g} - \bar{f}}_p \leq \varepsilon.
+ \]
+ So done.
+\end{proof}
+
+\section{Big theorems}
+We are now going to use all the tools we have previously developed to prove two of the most important theorems about the sums of independent random variables, namely the strong law of large numbers and the central limit theorem.
+
+\subsection{The strong law of large numbers}
+Before we start proving the strong law of large numbers, we first spend some time discussing the difference between the strong law and the weak law. In both cases, we have a sequence $(X_n)$ of iid random variables with $\E[X_i] = \mu$. We let
+\[
+ S_n = X_1 + \cdots + X_n.
+\]
+\begin{itemize}
+ \item The weak law of large numbers says $S_n/n \to \mu$ in probability as $n \to \infty$, provided $\E[X_1^2] < \infty$.
+ \item The strong law of large numbers says $S_n/n \to \mu$ a.s.\ provided $\E|X_1| < \infty$.
+\end{itemize}
+So we see that the strong law is indeed stronger, because almost sure convergence implies convergence in probability.
+
+We are actually going to do two versions of the strong law with different hypotheses.
+\begin{thm}[Strong law of large numbers assuming finite fourth moments]\index{strong law of large numbers!assuming finite fourth moments}
+ Let $(X_n)$ be a sequence of independent random variables such that there exists $\mu \in \R$ and $M > 0$ such that
+ \[
+ \E[X_n] = \mu,\quad \E[X_n^4] \leq M
+ \]
+ for all $n$. With $S_n = X_1 + \cdots + X_n$, we have that
+ \[
+ \frac{S_n}{n} \to \mu\text{ a.s. as }n \to \infty.
+ \]
+\end{thm}
+Note that in this version, we do not require that the $X_n$ be identically distributed. We simply need that they are independent, have the same mean, and have uniformly bounded fourth moments.
+
+The proof is completely elementary.
+
+\begin{proof}
+ We reduce to the case that $\mu = 0$ by setting
+ \[
+ Y_n = X_n - \mu.
+ \]
+ We then have
+ \[
+ \E[Y_n] = 0,\quad \E[Y_n^4] \leq 2^4(\mu^4 + \E[X_n^4]) \leq 2^4(\mu^4 + M).
+ \]
+ So it suffices to show that the theorem holds with $Y_n$ in place of $X_n$. So we can assume that $\mu = 0$.
+
+ By independence, we know that for $i \not= j$, we have
+ \[
+ \E[X_i X_j^3] = \E[X_i] \E[X_j^3] = 0.
+ \]
+ Similarly, for all $i, j, k, \ell$ distinct, we have
+ \[
+ \E[X_i X_j X_k^2] = \E[X_i X_j X_k X_\ell] = 0.
+ \]
+ Hence we know that
+ \[
+ \E[S_n^4] = \E\left[\sum_{k = 1}^n X_k^4 + 6 \sum_{1 \leq i < j \leq n} X_i^2 X_j^2\right].
+ \]
+ We know the first term is bounded by $nM$, and we also know that for $i \not= j$, we have
+ \[
+ \E[X_i^2 X_j^2] = \E[X_i^2] \E[X_j^2] \leq \sqrt{\E[X_i^4] \E[X_j^4]} \leq M
+ \]
+ by Jensen's inequality. So we know
+ \[
+ \E\left[6 \sum_{1 \leq i < j\leq n} X_i^2 X_j^2\right] \leq 3n(n - 1)M.
+ \]
+ Putting everything together, we have
+ \[
+ \E[S_n^4] \leq nM + 3n(n - 1)M \leq 3n^2 M.
+ \]
+ So we know
+ \[
+ \E\left[(S_n/n)^4\right] \leq \frac{3M}{n^2}.
+ \]
+ So we know
+ \[
+ \E \left[\sum_{n = 1}^\infty \left(\frac{S_n}{n}\right)^4\right] \leq \sum_{n = 1}^\infty \frac{3M}{n^2} < \infty.
+ \]
+ So we know that
+ \[
+ \sum_{n = 1}^\infty \left(\frac{S_n}{n}\right)^4 < \infty \text{ a.s.}
+ \]
+ So we know that $(S_n/n)^4 \to 0$ a.s., i.e.\ $S_n/n \to 0$ a.s.
+\end{proof}
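The statement can be illustrated empirically. The following Python sketch (our own, with a fixed seed; it is an illustration, not a proof) computes the sample mean along one path of iid Uniform$[0, 1]$ variables, for which $\mu = 1/2$, and checks that it is close to $\mu$ for large $n$.

```python
import random

# One fixed sample path of iid Uniform[0, 1] random variables (mu = 1/2).
random.seed(0)

def sample_mean(n):
    """S_n / n for n fresh draws along the current sample path."""
    return sum(random.random() for _ in range(n)) / n

m = sample_mean(100_000)
# The standard error is about 0.29 / sqrt(100000) ~ 0.0009, so 0.01 is
# a very generous tolerance.
assert abs(m - 0.5) < 0.01
```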
+
+We are now going to get rid of the assumption that we have finite fourth moments, but we'll need to work with iid random variables.
+
+\begin{thm}[Strong law of large numbers]\index{strong law of large numbers}
+ Let $(Y_n)$ be an iid sequence of integrable random variables with mean $\nu$. With $S_n = Y_1 + \cdots + Y_n$, we have
+ \[
+ \frac{S_n}{n} \to \nu \text{ a.s.}
+ \]
+\end{thm}
+
+We will use the ergodic theorem to prove this. This is not the ``usual'' proof of the strong law, but since we've done all that work on ergodic theory, we might as well use it to get a clean proof. Most of the work left is setting up the right setting for the proof.
+
+\begin{proof}
+ Let $m$ be the law of $Y_1$ and let $Y = (Y_1, Y_2, Y_3, \cdots)$. We can view $Y$ as a function
+ \[
+ Y: \Omega \to \R^\N = E.
+ \]
+ We let $(E, \mathcal{E}, \mu)$ be the canonical space associated with the distribution $m$ so that
+ \[
+ \mu = \P \circ Y^{-1}.
+ \]
+ We let $f: E \to \R$ be given by
+ \[
+ f(x_1, x_2, \cdots) = X_1(x_1, x_2, \cdots) = x_1.
+ \]
+ Then $X_1$ has law given by $m$, and in particular is integrable. Also, the shift map $\Theta: E \to E$ given by
+ \[
+ \Theta(x_1, x_2, \cdots) = (x_2, x_3, \cdots)
+ \]
+ is measure-preserving and ergodic. Thus, with
+ \[
+ S_n(f) = f + f \circ \Theta + \cdots + f \circ \Theta^{n - 1} = X_1 + \cdots + X_n,
+ \]
+ we have that
+ \[
+ \frac{S_n(f)}{n} \to \bar{f}\text{ a.e.}
+ \]
+ by Birkhoff's ergodic theorem. We also have convergence in $L^1$ by von Neumann ergodic theorem.
+
+ Here $\bar{f}$ is $\mathcal{E}_\Theta$-measurable, and $\Theta$ is ergodic, so we know that $\bar{f} = c$ a.e. for some constant $c$. Moreover, we have
+ \[
+ c = \mu(\bar{f}) = \lim_{n \to \infty} \mu(S_n(f)/n) = \nu.
+ \]
+ So done.
+\end{proof}
+
+\subsection{Central limit theorem}
+\begin{thm}[Central limit theorem]\index{central limit theorem}
+ Let $(X_n)$ be a sequence of iid random variables with $\E[X_i] = 0$ and $\E[X_1^2] = 1$. Then if we set
+ \[
+ S_n = X_1 + \cdots + X_n,
+ \]
+ then for all $x \in \R$, we have
+ \[
+ \P\left[ \frac{S_n}{\sqrt{n}} \leq x\right] \to \int_{-\infty}^x \frac{e^{-y^2/2}}{\sqrt{2\pi}}\;\d y = \P[N(0, 1) \leq x]
+ \]
+ as $n \to \infty$.
+\end{thm}
+
+\begin{proof}
+ Let $\phi(u) = \E[e^{iuX_1}]$. Since $\E[X_1^2] = 1 < \infty$, we can differentiate under the expectation twice to obtain
+ \[
+ \phi(u) = \E[e^{iuX_1}],\quad \phi'(u) = \E[iX_1 e^{iuX_1}],\quad \phi''(u) = \E[-X_1^2 e^{iuX_1}].
+ \]
+ Evaluating at $0$, we have
+ \[
+ \phi(0) = 1,\quad \phi'(0) = 0,\quad \phi''(0) = -1.
+ \]
+ So if we Taylor expand $\phi$ at $0$, we have
+ \[
+ \phi(u) = 1 - \frac{u^2}{2} + o(u^2).
+ \]
+ We consider the characteristic function of $S_n/\sqrt{n}$
+ \begin{align*}
+ \phi_n(u) &= \E[e^{iuS_n/\sqrt{n}}] \\
+ &= \prod_{j = 1}^n \E[e^{iuX_j/\sqrt{n}}] \\
+ &= \phi(u/\sqrt{n})^n\\
+ &= \left(1 - \frac{u^2}{2n} + o\left(\frac{u^2}{n}\right)\right)^n.
+ \end{align*}
+ We now take the logarithm to obtain
+ \begin{align*}
+ \log \phi_n(u) &= n \log\left(1 - \frac{u^2}{2n} + o\left(\frac{u^2}{n}\right)\right)\\
+ &= - \frac{u^2}{2} + o(1)\\
+ &\to -\frac{u^2}{2}
+ \end{align*}
+ So we know that
+ \[
+ \phi_n(u) \to e^{-u^2/2},
+ \]
+ which is the characteristic function of a $N(0, 1)$ random variable.
+
+ So we have convergence in characteristic function, hence weak convergence, hence convergence in distribution.
+\end{proof}
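The convergence of characteristic functions in the proof can be checked numerically in a fully deterministic way. For $X_1 = \pm 1$ with equal probability (so $\E[X_1] = 0$, $\E[X_1^2] = 1$), we have $\phi(u) = \E[e^{iuX_1}] = \cos u$, and hence $\phi_n(u) = \cos(u/\sqrt{n})^n$. The following Python sketch (our own) checks that $\phi_n(u) \to e^{-u^2/2}$.

```python
import math

# For X_1 = +-1 with equal probability, phi(u) = E[e^{iuX_1}] = cos(u),
# so the characteristic function of S_n/sqrt(n) is cos(u/sqrt(n))^n.
def phi_n(u, n):
    return math.cos(u / math.sqrt(n)) ** n

u = 1.3
limit = math.exp(-u * u / 2)  # characteristic function of N(0, 1) at u
errors = [abs(phi_n(u, n) - limit) for n in (10, 100, 1000, 10000)]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))  # error shrinks with n
assert errors[-1] < 1e-4
```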
+
+
+\printindex
+\end{document}
diff --git a/books/cam/IV_E/bounded_cohomology.tex b/books/cam/IV_E/bounded_cohomology.tex
new file mode 100644
index 0000000000000000000000000000000000000000..aca994be2be07d07e93424b9e4a21fd0193f1847
--- /dev/null
+++ b/books/cam/IV_E/bounded_cohomology.tex
@@ -0,0 +1,1943 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IV}
+\def\nterm {Easter}
+\def\nyear {2017}
+\def\nlecturer {M.\ Burger}
+\def\ncourse {Bounded Cohomology}
+
+\input{header}
+
+\newcommand\QH{\mathcal{QH}}
+\newcommand\scl{\mathrm{scl}}
+\newcommand\Homeo{\mathrm{Homeo}}
+\newcommand\Rot{\mathrm{Rot}}
+\newcommand\Free{F}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+The cohomology of a group or a topological space in degree $k$ is a real vector space which describes the ``holes'' bounded by $k$-dimensional cycles and encodes their relations. Bounded cohomology is a refinement which provides these vector spaces with a (semi-)norm, and hence topological objects acquire mysterious numerical invariants. This theory, introduced at the beginning of the 1980s by M.\ Gromov, has deep connections with the geometry of hyperbolic groups and negatively curved manifolds. For instance, hyperbolic groups can be completely characterized by the ``size'' of their bounded cohomology.
+
+The aim of this course is to give an introduction to the bounded cohomology of groups, and treat more in detail one of its important applications to the study of groups acting by homeomorphisms on the circle. More precisely we will treat the following topics:
+\begin{enumerate}
+ \item Ordinary and bounded cohomology of groups: meaning of these objects in low degrees, that is, zero, one and two; relations with quasimorphisms. Proof that the bounded cohomology in degree two of a non abelian free group contains an isometric copy of the Banach space of bounded sequences of reals. Examples and meaning of bounded cohomology classes of geometric origin with non trivial coefficients.
+
+ \item Actions on the circle, the bounded Euler class: for a group acting by orientation-preserving homeomorphisms of the circle, Ghys has introduced an invariant, the bounded Euler class of the action, and shown that it characterizes (minimal) actions up to conjugation. We will treat this work in some detail, as it leads to important applications of bounded cohomology to the question of which groups can act non-trivially on the circle: for instance $\SL(2,\Z)$ can, while lattices in ``higher rank Lie groups'', like $\SL(n,\Z)$ for $n$ at least $3$, can't.
+
+ \item Amenability and resolutions: we will set up the abstract machinery of resolutions and the notions of injective modules in ordinary as well as bounded cohomology; this will provide a powerful way to compute these objects in important cases. A fundamental role in this theory is played by various notions of amenability; the classical notion of amenability for a group, and amenability of a group action on a measure space, due to R.\ Zimmer. The goal is then to describe applications of this machinery to various rigidity questions, and in particular to the theorem, due independently to Ghys and to Burger--Monod, that lattices in higher rank groups don't act on the circle.
+\end{enumerate}
+
+\subsubsection*{Pre-requisites}
+Prerequisites for this course are minimal: no prior knowledge of group cohomology of any form is needed; we'll develop everything we need from scratch. It is however an advantage to have a ``zoo'' of examples of infinite groups at one's disposal: for example free groups and surface groups. In the third part, we'll need basic measure theory; amenability and ergodic actions will play a role, but there again everything will be built up on elementary measure theory.
+
+The basic reference for this course is R.\ Frigerio, ``Bounded cohomology of discrete groups'', \href{https://arxiv.org/abs/1611.08339}{arXiv:1611.08339}, and for part 3, M.\ Burger \& A.\ Iozzi, ``A useful formula from bounded cohomology'', available at: \url{https://people.math.ethz.ch/~iozzi/publications.html}.%
+}
+\tableofcontents
+
+\section{Quasi-homomorphisms}
+\subsection{Quasi-homomorphisms}
+In this chapter, $A$ will denote $\Z$ or $\R$. Let $G$ be a group. The usual definition of a group homomorphism $f\colon G \to A$ requires that for all $x, y \in G$, we have
+\[
+ f(xy) = f(x) + f(y).
+\]
+In a quasi-homomorphism, we replace the equality with a weaker notion, and allow for some ``errors''.
+
+\begin{defi}[Quasi-homomorphism]\index{quasi-homomorphism}
+ Let $G$ be a group. A function $f\colon G \to A$ is a \emph{quasi-homomorphism} if the function
+ \begin{align*}
+ \d f\colon G \times G &\to A\\
+ (x, y) &\mapsto f(x) + f(y) - f(xy)
+ \end{align*}
+ is \emph{bounded}. We define the \term{defect}\index{quasi-homomorphism!defect} of $f$ to be
+ \[
+ D(f) = \sup_{x, y \in G} |\d f(x, y)|.
+ \]
+ We write \term{$\QH(G, A)$} for the $A$-module of quasi-homomorphisms.
+\end{defi}
+
+\begin{eg}
+ Every homomorphism is a quasi-homomorphism with $D(f) = 0$. Conversely, a quasi-homomorphism with $D(f) = 0$ is a homomorphism.
+\end{eg}
+
+We can obtain some ``trivial'' quasi-homomorphisms as follows --- we take any homomorphism, and then edit finitely many values of the homomorphism. Then this is a quasi-homomorphism. More generally, we can add any bounded function to a quasi-homomorphism and still get a quasi-homomorphism.
+
+\begin{notation}
+ We write\index{$\ell^\infty(G, A)$}
+ \[
+ \ell^\infty(G, A) = \{f\colon G \to A: \text{$f$ is bounded}\}.
+ \]
+\end{notation}
+
+Thus, we are largely interested in the quasi-homomorphisms modulo $\ell^\infty(G, A)$. Often, we also want to quotient out by the genuine homomorphisms, and obtain
+\[
+ \frac{\QH(G, A)}{\ell^\infty(G, A) + \Hom(G, A)}.
+\]
+This contains subtle algebraic and geometric information about $G$, and we will later see this is related to the second bounded cohomology $H_b^2(G, A)$.
+
+We first prove a few elementary facts about quasi-homomorphisms. The first task is to find canonical representatives of the classes in the quotient $\QH(G, \R)/\ell^\infty (G, \R)$.
+
+\begin{defi}[Homogeneous function]\index{homogeneous function}\index{function!homogeneous}
+ A function $f\colon G \to \R$ is \emph{homogeneous} if for all $n \in \Z$ and $g \in G$, we have $f(g^n) = n f(g)$.
+\end{defi}
+
+\begin{lemma}
+ Let $f \in \QH(G, A)$. Then for every $g \in G$, the limit
+ \[
+ Hf(g) = \lim_{n \to \infty} \frac{f(g^n)}{n}
+ \]
+ exists in $\R$. Moreover,
+ \begin{enumerate}
+ \item $Hf\colon G \to \R$ is a homogeneous quasi-homomorphism.
+ \item $f - Hf \in \ell^\infty(G, \R)$.
+ \end{enumerate}
+\end{lemma}
+% So this $Hf$ gives us a ``preferred'' representative for each class in$\QH(G, \R) / \ell^\infty(G, \R)$.
+
+\begin{proof}
+ We iterate the quasi-homomorphism property
+ \[
+ |f(ab) - f(a) - f(b)| \leq D(f).
+ \]
+ Then, viewing $g^{mn} = g^m \cdots g^m$, we obtain
+ \[
+ |f(g^{mn}) - n f(g^m)| \leq (n - 1) D(f).
+ \]
+ Similarly, we also have
+ \[
+ |f(g^{mn}) -m f(g^n)| \leq (m - 1) D(f).
+ \]
+ Thus, dividing by $nm$, we find
+ \begin{align*}
+ \left|\frac{f(g^{mn})}{nm} - \frac{f(g^m)}{m}\right| &\leq \frac{1}{m} D(f)\\
+ \left|\frac{f(g^{mn})}{nm} - \frac{f(g^n)}{n}\right| &\leq \frac{1}{n} D(f).
+ \end{align*}
+ So we find that
+ \[
+ \left|\frac{f(g^n)}{n} - \frac{f(g^m)}{m}\right| \leq \left(\frac{1}{m} + \frac{1}{n} \right) D(f).\tag{$*$}
+ \]
+ Hence the sequence $\frac{f(g^n)}{n}$ is Cauchy, and the limit exists.
+
+ The fact that $Hf$ is a quasi-homomorphism follows from the second assertion. To prove the second assertion, we can just take $n = 1$ in $(*)$ and take $m \to \infty$. Then we find
+ \[
+ |f(g) - Hf(g)| \leq D(f).
+ \]
+ So this shows that $f - Hf$ is bounded, hence $Hf$ is a quasi-homomorphism.
+
+ The homogeneity is left as an easy exercise.
+\end{proof}
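The homogenization can be watched converge numerically. The following Python sketch (our own) works on $G = \Z$ written additively, so the group power $g^n$ becomes $ng$: we take a quasi-homomorphism that is a genuine homomorphism plus a bounded perturbation, and check that $Hf$ recovers the homomorphism and that $|f - Hf| \leq D(f)$ as in the lemma.

```python
import math

# A quasi-homomorphism on Z (written additively, so g^n becomes n*g):
# the homomorphism n -> 3n plus the bounded perturbation sin(n).
def f(n):
    return 3 * n + math.sin(n)

def Hf(g, N=10**6):
    """Approximate the homogenization Hf(g) = lim f(N*g)/N."""
    return f(N * g) / N

assert abs(Hf(1) - 3) < 1e-5    # the bounded part is averaged away
assert abs(Hf(7) - 21) < 1e-5   # Hf is the homomorphism n -> 3n
assert abs(f(5) - Hf(5)) <= 1   # |f - Hf| <= D(f); here the gap is the sin term
```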
+
+\begin{notation}
+ We write \term{$\QH_h(G, \R)$} for the vector space of homogeneous quasi-homomorphisms $G \to \R$.
+\end{notation}
+
+Then the above theorem gives
+\begin{cor}
+ We have
+ \[
+ \QH(G, \R) = \QH_h(G, \R) \oplus \ell^\infty(G, \R)
+ \]
+\end{cor}
+
+\begin{proof}
+ Indeed, observe that a bounded homogeneous quasi-homomorphism must be identically zero.
+\end{proof}
+
+Thus, if we want to study $\QH(G, \R)$, it suffices to just look at the homogeneous quasi-homomorphisms. It turns out these have some very nice and perhaps unexpected properties.
+\begin{lemma}
+ Let $f\colon G \to \R$ be a homogeneous quasi-homomorphism.
+ \begin{enumerate}
+ \item We have $f(xyx^{-1}) = f(y)$ for all $x, y \in G$.
+ \item If $G$ is abelian, then $f$ is in fact a homomorphism. Thus
+ \[
+ \QH_h(G, \R) = \Hom(G, \R).
+ \]
+ \end{enumerate}
+\end{lemma}
+Thus, quasi-homomorphisms are only interesting for non-abelian groups.
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Note that for any $x$, the function
+ \[
+ y \mapsto f(xyx^{-1})
+ \]
+ is a homogeneous quasi-homomorphism. It suffices to show that the function
+ \[
+ y \mapsto f(xyx^{-1}) - f(y)
+ \]
+ is a bounded homogeneous quasi-homomorphism, since all such functions must be zero. Homogeneity is clear, and the quasi-homomorphism property follows from the computation
+ \[
+ |f(xyx^{-1}) - f(y)| \leq |f(x) + f(y) + f(x^{-1}) - f(y)| + 2D(f) = 2D(f),
+ \]
+ using the fact that $f(x^{-1}) = -f(x)$ by homogeneity.
+ \item If $x$ and $y$ commute, then $(xy)^n = x^n y^n$. So we can use homogeneity to write
+ \begin{align*}
+ |f(xy) - f(x) - f(y)| &= \frac{1}{n} |f((xy)^n) - f(x^n) - f(y^n)|\\
+ &= \frac{1}{n} | f(x^n y^n) - f(x^n) - f(y^n)|\\
+ &\leq \frac{1}{n} D(f).
+ \end{align*}
+ Since $n$ is arbitrary, the difference must vanish.\qedhere
+ \end{enumerate}
+\end{proof}
+
+The case of $\QH(G, \Z)/\ell^\infty(G, \Z)$ is more complicated. For example, we have the following nice result:
+\begin{eg}
+ Given $\alpha \in \R$, define the map $g_\alpha\colon \Z \to \Z$ by
+ \[
+ g_\alpha(m) = [m\alpha],
+ \]
+ where $[\,\cdot\,]$ denotes the integer part. Then this is a quasi-homomorphism, and one can check that the map
+ \begin{align*}
+ \R &\longrightarrow \frac{\QH(\Z, \Z)}{\ell^\infty(\Z, \Z)}\\
+ \alpha &\longmapsto g_\alpha
+ \end{align*}
+ is an isomorphism. This gives a further isomorphism
+ \[
+ \R/\Z \cong \frac{\QH(\Z, \Z)}{\ell^\infty(\Z, \Z) + \Hom(\Z, \Z)}.
+ \]
+\end{eg}
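Both claims about $g_\alpha$ can be checked numerically. The following Python sketch (our own, reading $[\,\cdot\,]$ as the floor) verifies that the defect of $g_\alpha$ is at most $1$, and that the class of $g_\alpha$ remembers $\alpha$ via $g_\alpha(m)/m \to \alpha$.

```python
import math

def g(alpha, m):
    """g_alpha(m) = [m * alpha], reading [.] as the integer part (floor)."""
    return math.floor(m * alpha)

alpha = math.sqrt(2)
defect = max(abs(g(alpha, m + n) - g(alpha, m) - g(alpha, n))
             for m in range(-50, 51) for n in range(-50, 51))
# floor(x) + floor(y) <= floor(x + y) <= floor(x) + floor(y) + 1,
# so g_alpha is a quasi-homomorphism with defect at most 1.
assert defect <= 1

# The class of g_alpha remembers alpha: g_alpha(m)/m -> alpha.
assert abs(g(alpha, 10**6) / 10**6 - alpha) < 1e-5
```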
+
+We next turn to the case $G = \Free_2$, the free group on two generators $a, b$. We will try to work out explicitly a lot of non-trivial elements of $\QH(\Free_2, \R)$. In general, when we try to construct quasi-homomorphisms, what we manage to get are \emph{not} homogeneous. So when we construct several quasi-homomorphisms, it takes some non-trivial work to show that they are distinct. Our construction will be one such that this is relatively easy to see.
+
+Consider the vector space:
+\[
+ \ell_{\mathrm{odd}}^\infty (\Z) = \{\alpha\colon \Z \to \R : \alpha \text{ bounded and } \alpha(-n) = -\alpha (n)\}.
+\]
+Note that in particular, we have $\alpha(0) = 0$.
+
+Given $\alpha, \beta \in \ell_{\mathrm{odd}}^\infty(\Z)$, we define a quasi-homomorphism $f_{\alpha, \beta} \colon \Free_2 \to \R$ as follows --- given a reduced word $w = a^{n_1} b^{m_1} \cdots a^{n_k}b^{m_k}$, we let
+\[
+ f_{\alpha, \beta}(w) = \sum_{i= 1}^k \alpha(n_i) + \sum_{j = 1}^k \beta(m_j).
+\]
+Allowing for $n_1 = 0$ or $m_k = 0$, this gives a well-defined function $f_{\alpha,\beta}$ defined on all of $\Free_2$.
+
+Let's see what this does on some special sequences.
+\begin{eg}
+ We have
+ \[
+ f_{\alpha, \beta}(a^n) = \alpha(n),\quad f_{\alpha, \beta}(b^m) = \beta(m),
+ \]
+ and these are bounded functions of $n, m$.
+\end{eg}
+So we see that $f_{\alpha, \beta}$ is never homogeneous unless $\alpha = \beta = 0$.
+
+\begin{eg}
+ Pick $k_1, k_2, n \not= 0$, and set
+ \[
+ w = a^{nk_1} b^{nk_2} (b^{k_2}a^{k_1})^{-n} = a^{nk_1} b^{nk_2} \underbrace{a^{-k_1} b^{-k_2} \cdots a^{-k_1} b^{-k_2}}_{n\text{ times}}.
+ \]
+ This is now in reduced form. So we have
+ \[
+ f_{\alpha, \beta}(w) = \alpha(n k_1) + \beta (n k_2) - n \alpha(k_1) - n \beta(k_2).
+ \]
+\end{eg}
+This example is important. If $\alpha(k_1) + \beta(k_2) \not= 0$, then this is an unbounded function of $n$ as $n \to \infty$. However, any genuine homomorphism $f\colon \Free_2 \to \R$ must factor through the abelianization, and $w$ vanishes in the abelianization. So this suggests our $f_{\alpha, \beta}$ is in some sense very far away from being a homomorphism.
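To make this concrete, here is a small Python sketch (our own, with one hypothetical choice of odd bounded $\alpha$ and $\beta$) that evaluates $f_{\alpha, \beta}$ on reduced words represented as lists of syllables, and checks the value computed in the example.

```python
import math

def alpha(n):
    """An odd, bounded function on Z: the sign function."""
    return (n > 0) - (n < 0)

def beta(n):
    """Another odd, bounded choice: sin is odd and bounded by 1."""
    return math.sin(n)

def f(word):
    """f_{alpha,beta} on a reduced word given as [(generator, exponent), ...]."""
    return sum(alpha(e) if gen == 'a' else beta(e) for gen, e in word)

# The word w = a^{n k1} b^{n k2} (a^{-k1} b^{-k2})^n from the example,
# already in reduced form as alternating syllables.
n, k1, k2 = 5, 2, 3
w = [('a', n * k1), ('b', n * k2)] + [('a', -k1), ('b', -k2)] * n
value = alpha(n * k1) + beta(n * k2) - n * alpha(k1) - n * beta(k2)
assert abs(f(w) - value) < 1e-12
assert abs(value) > 1  # grows linearly in n unless alpha(k1) + beta(k2) = 0
```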
+
+\begin{thm}[P.\ Rolli, 2009]
+ The function $f_{\alpha, \beta}$ is a quasi-homomorphism, and the map
+ \[
+ \ell^\infty_{\mathrm{odd}} (\Z) \oplus \ell^\infty_{\mathrm{odd}}(\Z) \to \frac{\QH(\Free_2, \R)}{\ell^\infty(\Free_2, \R) + \Hom(\Free_2, \R)}
+ \]
+ is injective.
+\end{thm}
+This tells us there are a lot of non-trivial elements in $\QH(\Free_2, \R)$.
+
+The advantage of this construction is that the map above is a \emph{linear} map. So to see it is injective, it suffices to see that it has trivial kernel.
+\begin{proof}
+ Let $\alpha, \beta \in \ell_{\mathrm{odd}}^\infty (\Z, \R)$, and define $f_{\alpha, \beta}$ as before. By staring at it long enough, we find that
+ \[
+ |f(xy) - f(x) - f(y)| \leq 3 \max (\|\alpha\|_\infty, \|\beta\|_\infty),
+ \]
+ and so it is a quasi-homomorphism. The main idea is that
+ \[
+ f(b^n) + f(b^{-n}) = f(a^n) + f(a^{-n}) = 0
+ \]
+ by oddness of $\alpha$ and $\beta$. So when we do the word reduction in the product, the amount of error we can introduce is at most $3 \max (\|\alpha\|_\infty, \|\beta\|_\infty)$.
+
+ To show that the map is injective, suppose
+ \[
+ f_{\alpha, \beta} = \varphi + h,
+ \]
+ where $\varphi\colon \Free_2 \to \R$ is bounded and $h\colon \Free_2 \to \R$ is a homomorphism. Then we must have
+ \[
+ h(a^\ell) = f(a^\ell) - \varphi(a^\ell) = \alpha(\ell) - \varphi(a^\ell),
+ \]
+ which is bounded. So the map $\ell \mapsto h(a^\ell) = \ell h(a)$ is bounded, and so $h(a) = 0$. Similarly, $h(b) = 0$. So $h \equiv 0$. In other words, $f_{\alpha, \beta}$ is bounded.
+
+ Finally, we have
+ \[
+ f((a^{\ell_1} b^{\ell_2})^k) = k (\alpha(\ell_1) + \beta(\ell_2)).
+ \]
+ Since this is bounded, we must have $\alpha(\ell_1) + \beta(\ell_2) = 0$ for all $\ell_1, \ell_2 \not= 0$. Using the fact that $\alpha$ and $\beta$ are odd, this easily implies that $\alpha(\ell_1) = \beta(\ell_2) = 0$ for all $\ell_1$ and $\ell_2$.
+
+% We say $x \in \F_2$ is a power if $x = a^k$ or $b^k$ for some $k \in \Z \setminus \{0\}$.
+%
+% Every $x \in \F_2$ has a unique shortest factorization into powers. If $x = x_1 \cdots x_n$ is the shortest factorization into powers, then
+% \[
+% f(x) = \sum_{i = 1}^n f(x_i).
+% \]
+% Let $x, y \in \F_2$ with shortest factorzation
+% \begin{align*}
+% x &= x_0 \cdots x_n\\
+% y &= y_0 \cdots y_m
+% \end{align*}
+% Then we have
+% \[
+% xy = x_1 \cdots x_n y_1 \cdots y_n.
+% \]
+% Now if $x_n y_0 = e$, then $f(x_n) + f(y_0) = 0$ by oddness. Suppose that
+% \[
+% x_n y_0 = x_{n - 1} y_1 = .. = x_{n - (r - 2)} y_{r - 2} = e,
+% \]
+% and
+% \[
+% x_{n - (r - 1)} y_{r - 1} = \zeta,
+% \]
+% and
+% \[
+% xy = x_0 \cdots x_{n - r} \zeta y_r \cdots y_m
+% \]
+% is a factorization into powers. Then we have
+% \[
+% f(y) = \sum_{i = 0}^n f(x_i) = \sum_{i = 0}^{n - r} f(x_i) + f(x_{n - (r - 1)}) + \sum_{j = 0}^r f(x_{n - j})
+% \]
+\end{proof}
+
+More generally, we have the following theorem, which we shall not prove, or even explain what the words mean:
+\begin{thm}[Hull--Osin 2013]
+ The space
+ \[
+ \frac{\QH(G, \R)}{\ell^\infty(G, \R) + \Hom(G, \R)}
+ \]
+ is infinite-dimensional if $G$ is acylindrically hyperbolic.
+\end{thm}
+
+\subsection{Relation to commutators}
+A lot of interesting information about quasi-homomorphisms can be captured by considering commutators. Recall that we write
+\[
+ [x, y] = xyx^{-1}y^{-1}.
+\]
+If $f$ is a genuine homomorphism, then it vanishes on all commutators, since the codomain is abelian. For \emph{homogeneous} quasi-homomorphisms, the value of $f$ on a commutator is bounded by the defect:
+\begin{lemma}
+ If $f$ is a homogeneous quasi-homomorphism and $x, y \in G$, then
+ \[
+ |f([x, y])| \leq D(f).
+ \]
+\end{lemma}
+For non-homogeneous ones, the value of $f$ on a commutator is still bounded, but requires a bigger bound.
+
+\begin{proof}
+ By definition of $D(f)$, we have
+ \[
+ |f([x, y]) - f(xyx^{-1}) - f(y^{-1})| \leq D(f).
+ \]
+ But since $f$ is homogeneous, we have $f(xyx^{-1}) = f(y) = - f(y^{-1})$. So we are done.
+\end{proof}
+
+This bound is in fact the best we can obtain:
+\begin{lemma}[Bavard, 1992]
+ If $f$ is a homogeneous quasi-homomorphism, then
+ \[
+ \sup_{x, y} |f([x, y])| = D(f).
+ \]
+\end{lemma}
+We will neither prove this nor use this --- it is merely for amusement.
+
+A general element $a \in [G, G]$ need not be of the form $[x, y]$. We can define:
+\begin{defi}[Commutator length]\index{commutator length}
+ Let $a \in [G, G]$. Then the \emph{commutator length} \term{$\cl(a)$} is the word length of $a$ with respect to the generating set
+ \[
+ \{[x, y] : x, y \in G\}.
+ \]
+ In other words, it is the smallest $n$ such that
+ \[
+ a = [x_1, y_1][x_2, y_2] \cdots [x_n, y_n]
+ \]
+ for some $x_i, y_i \in G$.
+\end{defi}
+
+It is an easy inductive proof to show that
+\begin{lemma}
+ For $a \in [G, G]$, we have
+ \[
+ |f(a)| \leq 2D(f) \cl(a).
+ \]
+\end{lemma}
+
+By homogeneity, it follows that
+\[
+ |f(a)| = \frac{1}{n} |f(a^n)| \leq 2 D(f) \frac{\cl(a^n)}{n}.
+\]
+
+\begin{defi}[Stable commutator length]\index{stable commutator length}
+ The \emph{stable commutator length} is defined by
+ \[
+ \scl(a) = \lim_{n \to \infty} \frac{\cl(a^n)}{n}.
+ \]
+\end{defi}
+
+Then we have
+\begin{prop}
+ \[
+ |f(a)| \leq 2 D(f) \scl(a).
+ \]
+\end{prop}
+
+\begin{eg}
+ Consider $\Free_2$ with generators $a, b$. Then clearly we have
+ \[
+ \cl([a, b]) = 1.
+ \]
+ It is not hard to verify that we also have
+ \[
+ \cl([a, b]^2) = 2.
+ \]
+ But interestingly, this ``pattern'' doesn't extend to higher powers. By writing it out explicitly, we find that
+ \[
+ [a, b]^3 = [aba^{-1}, b^{-1} aba^{-2}] [b^{-1}ab, b^2].
+ \]
+\end{eg}
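+The identity can be checked mechanically by free reduction. In the illustrative sketch below (helper names are our own), words in $\Free_2$ are strings over \texttt{a}, \texttt{b}, \texttt{A}, \texttt{B}, with capital letters denoting inverses:
+
+```python
+def reduce_word(w):
+    # Freely reduce a word by cancelling adjacent inverse pairs (stack-based).
+    out = []
+    for c in w:
+        if out and out[-1] == c.swapcase():
+            out.pop()
+        else:
+            out.append(c)
+    return "".join(out)
+
+def inv(w):
+    # Inverse of a word: reverse it and invert each letter.
+    return w[::-1].swapcase()
+
+def comm(x, y):
+    # The commutator [x, y] = x y x^{-1} y^{-1}, freely reduced.
+    return reduce_word(x + y + inv(x) + inv(y))
+
+lhs = reduce_word(comm("a", "b") * 3)                        # [a, b]^3
+rhs = reduce_word(comm("abA", "BabAA") + comm("Bab", "bb"))  # product of two commutators
+print(lhs == rhs)  # True
+```
+
+In particular, this certifies $\cl([a, b]^3) \leq 2$.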
+In general, something completely mysterious can happen as we raise the power, especially in more complicated groups.
+
+Similar to the previous result by Bavard, the bound of $|f(a)|$ by $\scl(a)$ is sharp.
+\begin{thm}[Bavard, 1992]
+ For all $a \in [G, G]$, we have
+ \[
+ \scl(a) = \frac{1}{2} \sup_{\phi \in \QH_h(G, \R)} \frac{|\phi(a)|}{D(\phi)},
+ \]
+ where, of course, we skip over those $\phi \in \Hom(G, \R)$ in the supremum to avoid division by zero.
+\end{thm}
+%We've seen some ``duality'' result of this sort before. Recall that if we have a Banach space $X$, then for $x \in X$, we have
+%\[
+% \|x\|_X = \sup_{\phi \in X^*} \frac{|\phi(x)|}{\|\phi\|_{X^*}}.
+%\]
+%That wasn't a very exciting result --- one direction is by definition of $\|\phi\|_{X^*}$ and the other follows from Hahn--Banach. It is possible to view Bavard's theorem as a version of this duality, but that involves some complicated twisting.
+
+\begin{eg}
+ It is true that
+ \[
+ \scl([a, b]) = \frac{1}{2}.
+ \]
+ However, showing this is not very straightforward.
+\end{eg}
+
+\begin{cor}
+ The stable commutator length vanishes identically iff every homogeneous quasi-homomorphism is a homomorphism.
+\end{cor}
+
+Note that if $\cl$ is bounded, then we have $\scl \equiv 0$. There exist interesting groups with bounded $\cl$, such as finitely-generated nilpotent groups, and these have $\QH_h(G, \R) = \Hom(G, \R)$. We might think that the groups with bounded $\cl$ are ``almost abelian'', but it turns out this is not the case.
+
+\begin{thm}[Carter--Keller 1983]
+ For $n \geq 3$, we have
+ \[
+ \SL(n, \Z) = [\SL(n, \Z), \SL(n, \Z)],
+ \]
+ and the commutator length is bounded.
+\end{thm}
+
+More generally, we have
+\begin{thm}[D.\ Witte Morris, 2007]
+ Let $\mathcal{O}$ be the ring of integers of some number field. Then $\cl\colon [\SL(n, \mathcal{O}), \SL(n, \mathcal{O})] \to \R$ is bounded iff $n \geq 3$ or $n = 2$ and $\mathcal{O}^\times$ is infinite.
+\end{thm}
+The groups $\SL(n, \mathcal{O})$ have a common property --- they are lattices in real semisimple Lie groups. In fact, we have
+
+\begin{thm}[Burger--Monod, 2002]
+ Let $\Gamma < G$ be an irreducible lattice in a connected semisimple group $G$ with finite center and rank $G \geq 2$. Then every homogeneous quasimorphism $\Gamma \to \R$ is $\equiv 0$.
+\end{thm}
+
+\begin{eg}
+ If $\Gamma < \SL(n, \R)$ is a discrete subgroup such that $\Gamma \backslash \SL(n, \R)$ is compact, then it falls into the above class, and the rank condition is $n \geq 3$.
+\end{eg}
+It is in fact conjectured that
+\begin{itemize}
+ \item The commutator length is bounded.
+ \item $\Gamma$ is boundedly generated, i.e.\ we can find generators $\{s_1, \cdots, s_k\}$ such that
+ \[
+ \Gamma = \bra s_1 \ket \bra s_2\ket \cdots \bra s_k\ket.
+ \]
+\end{itemize}
+
+There is another theorem that seems completely unrelated to this, but actually uses the same technology.
+\begin{thm}[Burger--Monod, 2009]
+ Let $\Gamma$ be a finitely-generated group and let $\mu$ be a symmetric probability measure on $\Gamma$ whose support generates $\Gamma$. Then every class in $\QH(\Gamma, \R)/\ell^\infty(\Gamma, \R)$ has a unique $\mu$-harmonic representative. In addition, this harmonic representative $f$ satisfies the following:
+ \[
+ \|\d f\|_\infty \leq \|\d g\|_\infty
+ \]
+ for any $g \in f + \ell^\infty(\Gamma, \R)$.
+\end{thm}
+This is somewhat like the Hodge decomposition theorem.
+
+\subsection{Poincare translation quasimorphism}
+We will later spend quite a lot of time studying actions on the circle. Thus, we are naturally interested in the homeomorphism group of the circle. We are mostly interested in orientation-preserving actions. Thus, we need to define what it means for a homeomorphism $\varphi\colon S^1 \to S^1$ to be orientation-preserving.
+
+The topologist will tell us that $\varphi$ induces a map
+\[
+ \varphi_*\colon H_1(S^1, \Z) \to H_1(S^1, \Z).
+\]
+Since the homology group is generated by the fundamental class $[S^1]$, invertibility of $\varphi_*$ implies $\varphi_*([S^1]) = \pm [S^1]$. Then we say $\varphi$ is orientation-preserving if $\varphi_*([S^1]) = [S^1]$.
+
+However, this definition is practically useless if we want to do anything with it. Instead, we can make use of the following definition:
+
+\begin{defi}[Positively-oriented triple]\index{positively-oriented triple}
+ We say a triple of points $x_1, x_2, x_3 \in S^1$ is positively-oriented if they are distinct and ordered as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, thick] circle [radius=1];
+ \node [circ] at (1, 0) {};
+ \node [right] at (1, 0) {$x_1$};
+ \node [circ] at (0.5, 0.866) {};
+ \node [anchor = south west] at (0.5, 0.866) {$x_2$};
+ \node [circ] at (-0.866, 0.5) {};
+ \node [left] at (-0.866, 0.5) {$x_3$};
+ \end{tikzpicture}
+ \end{center}
+ More formally, recall that there is a natural covering map $\pi\colon \R \to S^1$ given by quotienting by $\Z$. Formally, we let $\tilde{x}_1 \in \R$ be any lift of $x_1$. Then let $\tilde{x}_2, \tilde{x}_3$ be the unique lifts of $x_2$ and $x_3$ respectively to $[\tilde{x}_1, \tilde{x}_1 + 1)$. Then we say $x_1, x_2, x_3$ are positively-oriented if $\tilde{x}_2 < \tilde{x}_3$.
+\end{defi}
+
+\begin{defi}[Orientation-preserving map]\index{orientation-preserving map}
+ A map $S^1 \to S^1$ is orientation-preserving if it sends positively-oriented triples to positively-oriented triples. We write \term{$\Homeo^+(S^1)$} for the group of orientation-preserving homeomorphisms of $S^1$.
+\end{defi}
+
+We can generate a large collection of homeomorphisms of $S^1$ as follows --- for any $x \in \R$, we define the translation map
+\begin{align*}
+ T_x\colon \R &\to \R\\
+ y &\mapsto y + x.
+\end{align*}
+Identifying $S^1$ with $\R/\Z$, we see that this gives a map $T_x \in \Homeo^+(S^1)$. Of course, if $n$ is an integer, then $T_x = T_{n + x}$.
+
+One can easily see that
+\begin{prop}
+ Every lift $\tilde{\varphi}\colon \R \to \R$ of an orientation-preserving homeomorphism $\varphi\colon S^1 \to S^1$ is a monotone increasing homeomorphism of $\R$, commuting with translation by $\Z$, i.e.
+ \[
+ \tilde{\varphi} \circ T_m = T_m \circ \tilde{\varphi}
+ \]
+ for all $m \in \Z$.
+
+ Conversely, any such map is a lift of an orientation-preserving homeomorphism.
+\end{prop}
+We write \term{$\Homeo^+_\Z(\R)$} for the set of all monotone increasing homeomorphisms $\R \to \R$ that commute with $T_m$ for all $m \in \Z$. Then the above proposition says there is a natural surjection $\Homeo^+_\Z(\R) \to \Homeo^+(S^1)$. The kernel consists of the translation-by-$m$ maps for $m \in \Z$. Thus, $\Homeo^+_\Z(\R)$ is a \term{central extension} of $\Homeo^+(S^1)$. In other words, we have a short exact sequence
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z \ar[r, "i"] & \Homeo_\Z^+ (\R) \ar[r, "p"] & \Homeo^+(S^1) \ar[r] & 0
+ \end{tikzcd}.
+\]
+The ``central'' part in the ``central extension'' refers to the fact that the image of $\Z$ is in the center of $\Homeo_\Z^+(\R)$.
+
+\begin{notation}
+ We write \term{$\Rot$} for the group of rotations in $\Homeo^+(S^1)$. This corresponds to the subgroup $T_\R \subseteq \Homeo_\Z^+(\R)$.
+\end{notation}
+
+From a topological point of view, we can see that $\Homeo^+(S^1)$ retracts to $\Rot$. More precisely, if we fix a basepoint $x_0 \in S^1$, and write $\Homeo^+(S^1, x_0)$ for the basepoint-preserving maps, then every element in $\Homeo^+(S^1)$ is a product of an element in $\Rot$ and one in $\Homeo^+(S^1, x_0)$. Since $\Homeo^+(S^1, x_0) \cong \Homeo^+([0, 1])$ is contractible, it follows that $\Homeo^+(S^1)$ retracts to $\Rot$.
+
+A bit more fiddling around with the exact sequence above shows that $\Homeo^+_{\Z}(\R) \to \Homeo^+(S^1)$ is in fact a universal covering space, and that $\pi_1(\Homeo^+(S^1)) = \Z$.
+
+\begin{lemma}
+ The function $F\colon \Homeo_\Z^+(\R) \to \R$ given by $\varphi \mapsto \varphi(0)$ is a quasi-homomorphism.
+\end{lemma}
+
+\begin{proof}
+ The commutation property of $\varphi$ reads as follows:
+ \[
+ \varphi(x + m) = \varphi(x) + m.
+ \]
+ For a real number $x \in \R$, we write
+ \[
+ x = \{x \} + [x],
+ \]
+ where $0 \leq \{x\} < 1$ and $[x] \in \Z$. Then we have
+ \begin{align*}
+ F(\varphi_1 \varphi_2) &= \varphi_1(\varphi_2(0))\\
+ &= \varphi_1(\{\varphi_2(0)\} + [\varphi_2(0)])\\
+ &= \varphi_1(\{\varphi_2(0)\}) + [\varphi_2(0)]\\
+ &= \varphi_1(\{\varphi_2(0)\}) + \varphi_2(0) - \{\varphi_2(0)\}.
+ \end{align*}
+ Since $0 \leq \{\varphi_2(0)\} < 1$, we know that
+ \[
+ \varphi_1(0) \leq \varphi_1(\{\varphi_2(0)\}) < \varphi_1(1) = \varphi_1(0) + 1.
+ \]
+ Then we have
+ \[
+ \varphi_1(0) + \varphi_2(0) - \{\varphi_2(0)\} \leq F(\varphi_1\varphi_2) < \varphi_1(0) + 1 + \varphi_2(0) - \{\varphi_2(0)\}.
+ \]
+ So subtracting, we find that
+ \[
+ -1 \leq - \{\varphi_2(0)\} \leq F(\varphi_1 \varphi_2) - F(\varphi_1) - F(\varphi_2) < 1 - \{\varphi_2(0) \} \leq 1.
+ \]
+ So we find that
+ \[
+ D(F) \leq 1.\qedhere
+ \]
+\end{proof}
+
+\begin{defi}[Poincare translation quasimorphism]\index{Poincare translation quasimorphism}
+ The \emph{Poincare translation quasimorphism} $T\colon \Homeo_\Z^+ (\R) \to \R$ is the homogenization of $F$.
+\end{defi}
+
+It is easily seen that $T(T_x) = x$. This allows us to define
+
+\begin{defi}[Rotation number]\index{rotation number}
+ The \emph{rotation number} \index{$R(\varphi)$} of $\varphi \in \Homeo^+(S^1)$ is $T(\tilde{\varphi}) \bmod \Z \in \R/\Z$, where $\tilde{\varphi}$ is a lift of $\varphi$ to $\Homeo_\Z^+(\R)$.
+\end{defi}
+This rotation number contains a lot of interesting information about the dynamics of the homeomorphism. For instance, minimal homeomorphisms of $S^1$ are conjugate iff they have the same rotation number.
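+Numerically, the homogenization can be approximated by iterating a lift, since $T(\tilde{\varphi}) = \lim_{n \to \infty} \tilde{\varphi}^n(0)/n$. The sketch below is purely illustrative (the perturbed lift and all names are our own choices); it approximates rotation numbers and spot-checks the bound $D(F) \leq 1$:
+
+```python
+import math
+
+def lift(alpha, eps=0.0):
+    # A lift x -> x + alpha + eps*sin(2*pi*x) of a circle map.  It commutes
+    # with integer translations (sin is 1-periodic) and is increasing
+    # whenever 2*pi*|eps| < 1, so it lies in Homeo^+_Z(R).
+    return lambda x: x + alpha + eps * math.sin(2 * math.pi * x)
+
+def F(phi):
+    # The quasi-homomorphism F(phi) = phi(0).
+    return phi(0.0)
+
+def rot(phi, n=2000):
+    # Approximate the homogenization T(phi) = lim phi^n(0)/n.
+    x = 0.0
+    for _ in range(n):
+        x = phi(x)
+    return x / n
+
+print(rot(lift(0.25)))  # a pure translation: approximately 0.25
+
+p, q = lift(0.3, 0.1), lift(0.7, 0.05)
+print(abs(F(lambda x: p(q(x))) - F(p) - F(q)) <= 1)  # True: defect at most 1
+```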
+
+We will see that bounded cohomology allows us to generalize the rotation number of a homeomorphism into an invariant for any group action.
+
+\section{Group cohomology and bounded cohomology}
+\subsection{Group cohomology}
+We can start talking about cohomology. Before doing bounded cohomology, we first try to understand usual group cohomology. In this section, $A$ will be any abelian group. Ultimately, we are interested in the case $A = \Z$ or $\R$, but we can develop the theory in this generality.
+
+The general idea is that to a group $\Gamma$, we are going to associate a sequence of abelian groups $H^k(\Gamma, A)$ that is
+\begin{itemize}
+ \item covariant in $A$; and
+ \item contravariant in $\Gamma$.
+\end{itemize}
+It is true, but we will not prove or use, that if $X = K(\Gamma, 1)$, i.e.\ $X$ is a CW-complex whose fundamental group is $\Gamma$ and has a contractible universal cover, then there is a natural isomorphism
+\[
+ H^k(\Gamma, A) \cong H^k_{\mathrm{sing}}(X, A).
+\]
+
+There are several ways we can define group cohomology. A rather powerful way of doing so is via the theory of derived functors. However, developing the machinery requires considerable effort, and to avoid scaring people off, we will use a more down-to-earth construction. We begin with the following definition:
+
+\begin{defi}[Homogeneous $k$-cochain]\index{$k$-cochain}\index{homogeneous $k$-cochain}\index{cochain}
+ A \emph{homogeneous $k$-cochain} with values in $A$ is a function $f\colon \Gamma^{k + 1} \to A$. The set \term{$C(\Gamma^{k + 1}, A)$} is an abelian group and $\Gamma$ acts on it by automorphisms in the following way:
+ \[
+ (\gamma_* f) (\gamma_0, \cdots, \gamma_k) = f(\gamma^{-1} \gamma_0, \cdots, \gamma^{-1} \gamma_k).
+ \]
+ By convention, we set $C(\Gamma^0, A) \cong A$.
+\end{defi}
+
+\begin{defi}[Differential $d^{(k)}$]\index{$d^{(k)}$}\index{differential}
+ We define the differential $d^{(k)}\colon C(\Gamma^k, A) \to C(\Gamma^{k + 1}, A)$ by
+ \[
+ (d^{(k)}f) (\gamma_0, \cdots, \gamma_k) = \sum_{j = 0}^k (-1)^j f(\gamma_0, \cdots, \hat{\gamma}_j, \cdots, \gamma_k).
+ \]
+ In particular, we set $d^{(0)}(a)$ to be the function that is constantly $a$.
+\end{defi}
+
+\begin{eg}
+ We have
+ \begin{align*}
+ d^{(1)} f(\gamma_0, \gamma_1) &= f(\gamma_1) - f(\gamma_0)\\
+ d^{(2)} f(\gamma_0, \gamma_1, \gamma_2) &= f(\gamma_1, \gamma_2) - f(\gamma_0, \gamma_2) + f(\gamma_0, \gamma_1).
+ \end{align*}
+\end{eg}
+
+Thus, we obtain a \emph{complex} of abelian groups\index{chain complex}\index{complex}
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r, "d^{(0)}"] & C(\Gamma, A) \ar[r, "d^{(1)}"] & C(\Gamma^2, A) \ar[r, "d^{(2)}"] & \cdots
+ \end{tikzcd}.
+\]
+
+The following are crucial properties of this complex.
+\begin{lemma}\leavevmode
+ \begin{enumerate}
+ \item $d^{(k)}$ is a $\Gamma$-equivariant group homomorphism.
+ \item $d^{(k + 1)} \circ d^{(k)} = 0 $. So $\im d^{(k)} \subseteq \ker d^{(k + 1)}$.
+ \item In fact, we have $\im d^{(k)} = \ker d^{(k + 1)}$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item This is clear.
+ \item You just expand it out and see it is zero.
+ \item If $f \in \ker d^{(k)}$, then setting $\gamma_k = e$, we have
+ \begin{multline*}
+ 0 = d^{(k)} f(\gamma_0, \cdots, \gamma_{k - 1}, e) = (-1)^k f(\gamma_0, \cdots, \gamma_{k - 1}) \\
+ + \sum_{j = 0}^{k - 1} (-1)^j f(\gamma_0, \cdots, \hat{\gamma}_j, \cdots, \gamma_{k - 1}, e).
+ \end{multline*}
+ Now define the following $(k - 1)$-cochain
+ \[
+ h(\gamma_0, \cdots, \gamma_{k - 2}) = (-1)^{k + 1} f(\gamma_0, \cdots, \gamma_{k - 2}, e).
+ \]
+ Then the above reads
+ \[
+ f = d^{(k - 1)} h.\qedhere
+ \]%\qedhere
+ \end{enumerate}
+\end{proof}
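+Property (ii) is a finite computation over any finite group, so it can be sanity-checked by machine. The following illustrative sketch (our own code, not part of the lectures) implements the homogeneous differential for $\Gamma = \Z/5$ and $A = \Z$, and verifies $d^{(3)} \circ d^{(2)} = 0$ on an arbitrary cochain:
+
+```python
+import itertools, random
+
+n = 5  # Gamma = Z/5; the check is purely combinatorial, any finite group works
+
+def d(f, k):
+    # Homogeneous differential d^{(k)}: C(Gamma^k, A) -> C(Gamma^{k+1}, A),
+    # (d^{(k)} f)(g_0, ..., g_k) = sum_j (-1)^j f(g_0, ..., omit g_j, ..., g_k).
+    def df(*gs):
+        return sum((-1) ** j * f(*(gs[:j] + gs[j + 1:])) for j in range(k + 1))
+    return df
+
+random.seed(0)
+table = {p: random.randint(-9, 9) for p in itertools.product(range(n), repeat=2)}
+f = lambda *gs: table[gs]  # an arbitrary element of C(Gamma^2, Z)
+
+ddf = d(d(f, 2), 3)
+print(all(ddf(*gs) == 0 for gs in itertools.product(range(n), repeat=4)))  # True
+```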
+
+We make the following definitions:
+\begin{defi}[$k$-cocycle and $k$-coboundaries]\index{$k$-cocycle}\index{$k$-coboundary}\index{cocycle}\index{coboundary}\leavevmode
+ \begin{itemize}
+ \item The $k$-cocycles are $\ker d^{(k + 1)}$.
+ \item The $k$-coboundaries are $\im d^{(k)}$.
+ \end{itemize}
+\end{defi}
+So far, every cocycle is a coboundary, so nothing interesting is happening. To obtain interesting things, we use the action of $\Gamma$ on $C(\Gamma^k, A)$. We denote\index{$C(\Gamma^k, A)^\Gamma$}
+\[
+ C(\Gamma^k, A)^\Gamma = \{f\colon \Gamma^k \to A \mid f \text{ is $\Gamma$-invariant}\}.
+\]
+Since the differentials $d^{(k)}$ commute with the $\Gamma$-action, it restricts to a map $C(\Gamma^k, A)^\Gamma \to C(\Gamma^{k + 1}, A)^\Gamma$. We can arrange these into a new complex
+\[
+ \begin{tikzcd}
+ 0 \ar[r] \ar[d, hook] & A \ar[r, "d^{(0)}"] \ar[d, hook] & C(\Gamma, A)^\Gamma \ar[r, "d^{(1)}"] \ar[d, hook] & C(\Gamma^2, A)^\Gamma \ar[r, "d^{(2)}"]\ar[d, hook] & \cdots\\
+ 0 \ar[r] & A \ar[r, "d^{(0)}"] & C(\Gamma, A) \ar[r, "d^{(1)}"] & C(\Gamma^2, A) \ar[r, "d^{(2)}"] & \cdots
+ \end{tikzcd}.
+\]
+
+We are now in a position to define group cohomology.
+\begin{defi}[Group cohomology $H^k(\Gamma, A)$]\index{$H^k(\Gamma, A)$}\index{group cohomology}
+ We define the \emph{$k$th cohomology group} to be
+ \[
+ H^k(\Gamma, A) = \frac{(\ker d^{(k + 1)})^\Gamma}{d^{(k)} (C(\Gamma^k, A)^\Gamma)} = \frac{(d^{(k)} (C(\Gamma^k, A)))^\Gamma}{d^{(k)} (C(\Gamma^k, A)^\Gamma)}.
+ \]
+\end{defi}
+
+Before we do anything with group cohomology, we provide a slightly different description of group cohomology, using \term{inhomogeneous cochains}. The idea is to find a concrete description of \emph{all} invariant cochains.
+
+Observe that if we have a function $f\colon \Gamma^{k + 1} \to A$ that is invariant under the action of $\Gamma$, then it is uniquely determined by the value on $\{(e, \gamma_1, \cdots, \gamma_k): \gamma_i \in \Gamma\}$. So we can identify invariant functions $f\colon \Gamma^{k + 1} \to A$ with arbitrary functions $\Gamma^k \to A$. So we have one variable less to worry about, but on the other hand, the coboundary maps are much more complicated.
+
+More explicitly, we construct an isomorphism
+\[
+ \begin{tikzcd}
+ C(\Gamma^k, A)^\Gamma \ar[r, yshift=2, "\rho^{(k - 1)}"] & C(\Gamma^{k - 1}, A)\ar[l, yshift=-2, "\tau^{(k)}"]
+ \end{tikzcd},
+\]
+by setting
+\begin{align*}
+ (\rho^{(k - 1)} f)(g_1, \cdots, g_{k - 1}) &= f(e, g_1, g_1 g_2, \cdots, g_1 \cdots g_{k - 1})\\
+ (\tau^{(k)} h)(g_1, \cdots, g_k) &= h (g_1^{-1} g_2, g_2^{-1} g_3, \cdots, g_{k - 1}^{-1} g_k).
+\end{align*}
+
+These homomorphisms are inverses of each other. Then under this identification, we obtain a new complex
+\[
+ \begin{tikzcd}
+ C(\Gamma^k, A)^\Gamma \ar[r, "d^{(k)}"] & C(\Gamma^{k + 1}, A)^\Gamma \ar[d,"\rho^{(k)}"] \\
+ C(\Gamma^{k - 1}, A) \ar[u, "\tau^{(k)}"] \ar[r, "d^{k}"] & C(\Gamma^k, A)
+ \end{tikzcd}
+\]
+where
+\[
+ d^k = \rho^k \circ d^{(k)} \circ \tau^k.
+\]
+A computation shows that
+\begin{align*}
+ (d^k f) (g_1, \cdots, g_k) = f(g_2, \cdots, g_k) + \sum_{j = 1}^{k - 1} (-1)^j f(g_1, \cdots, g_j g_{j + 1}, \cdots, g_k) \\
+ + (-1)^k f(g_1, \cdots, g_{k - 1}).
+\end{align*}
+It is customary to denote
+\begin{align*}
+ \mathcal{Z}^k(\Gamma, A) &= \ker d^{k + 1} \subseteq C(\Gamma^k, A)\\
+ \mathcal{B}^k (\Gamma, A) &= \im d^k \subseteq C(\Gamma^k, A),
+\end{align*}
+the \term{inhomogeneous $k$-cocycles}\index{$k$-cocycle!inhomogeneous}\index{cocycle!inhomogeneous} and \term{inhomogeneous $k$-coboundaries}\index{$k$-coboundary!inhomogeneous}\index{coboundary!inhomogeneous}. Then we simply have
+\[
+ H^k(\Gamma, A) = \frac{\mathcal{Z}^k(\Gamma, A)}{\mathcal{B}^k(\Gamma, A)}.
+\]
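+These inhomogeneous formulas are easy to get wrong, so here is an illustrative machine check (our own code) that they still compose to zero, using $\Gamma = \Z/6$ and $A = \Z$:
+
+```python
+import itertools, random
+
+n = 6  # Gamma = Z/6, written additively, so the group law is addition mod n
+
+def d2(f):
+    # (d^2 f)(g1, g2) = f(g2) - f(g1 g2) + f(g1) on inhomogeneous 1-cochains.
+    return lambda g1, g2: f(g2) - f((g1 + g2) % n) + f(g1)
+
+def d3(a):
+    # (d^3 a)(g1, g2, g3) = a(g2, g3) - a(g1 g2, g3) + a(g1, g2 g3) - a(g1, g2).
+    return lambda g1, g2, g3: (a(g2, g3) - a((g1 + g2) % n, g3)
+                               + a(g1, (g2 + g3) % n) - a(g1, g2))
+
+random.seed(1)
+vals = [random.randint(-5, 5) for _ in range(n)]
+f = lambda g: vals[g]  # an arbitrary inhomogeneous 1-cochain
+
+dd = d3(d2(f))
+print(all(dd(*g) == 0 for g in itertools.product(range(n), repeat=3)))  # True
+```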
+It is an exercise to prove the following:
+\begin{lemma}
+ A homomorphism $f\colon \Gamma \to \Gamma'$ of groups induces a natural map $f^*\colon H^k(\Gamma', A) \to H^k(\Gamma, A)$ for all $k$. Moreover, if $g\colon \Gamma' \to \Gamma''$ is another group homomorphism, then $f^* \circ g^* = (gf)^*$.
+\end{lemma}
+
+\subsubsection*{Computation in degrees $k = 0, 1, 2$}
+It is instructive to compute explicitly what these groups mean in low degrees. We begin with the boring one:
+\begin{prop}
+ $H^0(\Gamma, A) \cong A$.
+\end{prop}
+
+\begin{proof}
+ The relevant part of the cochain is
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r, "d^1 = 0"] & C(\Gamma, A)
+ \end{tikzcd}.\qedhere
+ \]
+\end{proof}
+
+The $k = 1$ case is not too much more interesting.
+\begin{prop}
+ $H^1(\Gamma, A) = \Hom(\Gamma, A)$.
+\end{prop}
+
+\begin{proof}
+ The relevant part of the complex is
+ \[
+ \begin{tikzcd}
+ A \ar[r, "d^1 = 0"] & C(\Gamma, A) \ar[r, "d^2"] & C(\Gamma^2, A)
+ \end{tikzcd},
+ \]
+ and we have
+ \[
+ (d^2 f) (\gamma_1, \gamma_2) = f(\gamma_1) - f(\gamma_1 \gamma_2) + f(\gamma_2).\qedhere
+ \]
+\end{proof}
+
+The $k = 2$ part is more interesting. The relevant part of the complex is
+\[
+ \begin{tikzcd}
+ C(\Gamma, A) \ar[r, "d^2"] & C(\Gamma^2, A) \ar[r, "d^3"] & C(\Gamma^3, A)
+ \end{tikzcd}.
+\]
+Here $d^3$ is given by
+\[
+ d^3 \alpha (g_1, g_2, g_3) = \alpha(g_2, g_3) - \alpha(g_1 g_2, g_3) + \alpha(g_1, g_2 g_3) - \alpha(g_1, g_2).
+\]
+Suppose that $d^3 \alpha (g_1, g_2, g_3) = 0$, and, in addition, by some magic, we managed to pick $\alpha$ such that $\alpha(g_1, e) = \alpha(e, g_2) = 0$. This is known as a \term{normalized cocycle}. We can now define the following operation on $\Gamma \times A$:
+\[
+ (\gamma_1, a_1)(\gamma_2, a_2) = (\gamma_1 \gamma_2, a_1 + a_2 + \alpha(\gamma_1, \gamma_2)).
+\]
+Then the property that $\alpha$ is a normalized cocycle is equivalent to the assertion that this is an associative group law with identity element $(e, 0)$. We will write this group as $\Gamma \times_\alpha A$.
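+For a concrete example, take $\Gamma = \Z/n$, $A = \Z$ and the ``carrying'' cocycle $\alpha(g_1, g_2) = \lfloor (g_1 + g_2)/n\rfloor$, which is normalized. The illustrative check below (our own code) verifies that the recipe yields an associative group law, and that the resulting central extension is $0 \to \Z \to \Z \to \Z/n \to 0$:
+
+```python
+from itertools import product
+
+n = 4  # Gamma = Z/4, A = Z
+
+def alpha(g1, g2):
+    # The "carrying" cocycle: 1 exactly when adding g1 and g2 overflows past n.
+    return (g1 + g2) // n
+
+def mul(x, y):
+    # The group law on Gamma x_alpha A from the text:
+    # (g1, a1)(g2, a2) = (g1 g2, a1 + a2 + alpha(g1, g2)).
+    (g1, a1), (g2, a2) = x, y
+    return ((g1 + g2) % n, a1 + a2 + alpha(g1, g2))
+
+# alpha is normalized, so (0, 0) is the identity.
+print(all(alpha(g, 0) == 0 == alpha(0, g) for g in range(n)))  # True
+
+elems = [(g, a) for g in range(n) for a in (-1, 0, 2)]
+print(all(mul(mul(x, y), z) == mul(x, mul(y, z))               # associativity
+          for x, y, z in product(elems, repeat=3)))            # True
+
+# (g, a) -> g + n*a is multiplicative, exhibiting the central extension
+# 0 -> Z --(times n)--> Z -> Z/n -> 0.
+print(all(mul(x, y)[0] + n * mul(x, y)[1]
+          == (x[0] + n * x[1]) + (y[0] + n * y[1])
+          for x, y in product(elems, repeat=2)))               # True
+```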
+
+We can think of this as a generalized version of the semi-direct product. This group here has a special property. We can organize it into an exact sequence
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r] & \Gamma \times_\alpha A \ar[r] & \Gamma \ar[r] & 0
+ \end{tikzcd}.
+\]
+Moreover, the image of $A$ is in the center of $\Gamma \times_\alpha A$. This is known as a \term{central extension}.
+\begin{defi}[Central extension]\index{central extension}
+ Let $A$ be an abelian group, and $\Gamma$ a group. Then a central extension of $\Gamma$ by $A$ is an exact sequence
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r] & \tilde{\Gamma} \ar[r] & \Gamma \ar[r] & 0
+ \end{tikzcd}
+ \]
+ such that the image of $A$ is contained in the center of $\tilde{\Gamma}$.
+\end{defi}
+The claim is now that
+
+\begin{prop}
+ $H^2(\Gamma, A)$ parametrizes the set of isomorphism classes of central extensions of $\Gamma$ by $A$.
+\end{prop}
+
+\begin{proof}[Proof sketch]
+ Consider a central extension
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r, "i"] & G \ar[r, "p"] & \Gamma \ar[r] & 0
+ \end{tikzcd}.
+ \]
+ Arbitrarily choose a section $s\colon \Gamma \to G$ of $p$, as a function of sets. Then we know there is a unique $\alpha(\gamma_1, \gamma_2)$ such that
+ \[
+ s(\gamma_1 \gamma_2) \alpha(\gamma_1, \gamma_2) = s(\gamma_1) s(\gamma_2).
+ \]
+ We then check that $\alpha$ is a $2$-cocycle, and that it is normalized, i.e.\ $\alpha(\gamma_1, e) = \alpha(e, \gamma_2) = 0$, provided we choose $s$ with $s(e) = e$.
+
+ One then verifies that different choices of $s$ give cohomologous choices of $\alpha$, i.e.\ they represent the same class in $H^2(\Gamma, A)$.
+
+ Conversely, given a $2$-cocycle $\beta$, we can show that it is cohomologous to a normalized $2$-cocycle $\alpha$. This gives rise to a central extension $G = \Gamma \times_\alpha A$ as constructed before (and also a canonical section $s(\gamma) = (\gamma, 0)$).
+
+ One then checks this is a bijection.
+\end{proof}
+
+\begin{ex}
+ $H^2(\Gamma, A)$ has a natural structure as an abelian group. Then by the proposition, we should be able to ``add'' two central extensions. Figure out what this means.
+\end{ex}
+
+\begin{eg}
+ As usual, write $\Free_r$ for the free group on $r$ generators. Then
+ \begin{align*}
+ H^k(\Free_r, A) =
+ \begin{cases}
+ A & k = 0\\
+ A^r & k = 1\\
+ 0 & k = 2
+ \end{cases}.
+ \end{align*}
+ The fact that $H^2(\Free_r, A)$ vanishes is due to the fact that $\Free_r$ is free, so every short exact sequence splits.
+\end{eg}
+
+\begin{eg}
+ Consider $\Gamma_g = \pi_1(S_g)$ for $g > 0$. Explicitly, we can write
+ \[
+ \Gamma_g = \left\langle a_1, b_1, \cdots, a_g, b_g \;\middle|\; \prod_{i = 1}^g [a_i, b_i] = e\right\rangle.
+ \]
+ Then we have $H^1(\Gamma_g, \Z) = \Z^{2g}$ and $H^2(\Gamma_g, \Z) \cong \Z$.
+
+ We can provide a very explicit isomorphism for $H^2(\Gamma_g, \Z)$. We let
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z \ar[r, "i"] & G \ar[r, "p"] & \Gamma_g \ar[r] & 0
+ \end{tikzcd}
+ \]
+ be a central extension. Observe that whenever $\gamma, \eta \in \Gamma_g$, and $\tilde{\gamma}, \tilde{\eta} \in G$ are lifts, then $[\tilde{\gamma}, \tilde{\eta}]$ is a lift of $[\gamma, \eta]$, and it doesn't depend on the choice of $\tilde{\gamma}$ and $\tilde{\eta}$. Thus, we can pick arbitrary lifts $\tilde{a}_1, \tilde{b}_1, \cdots, \tilde{a}_g, \tilde{b}_g$. Then notice that
+ \[
+ \prod_{i = 1}^g [\tilde{a}_i, \tilde{b}_i]
+ \]
+ is in the kernel of $p$, and is hence in $\Z$.
+
+ Alternatively, we can compute the group cohomology using topology. We notice that $\R^2$ is the universal cover of $S_g$, and it is contractible. So we know $S_g = K(\Gamma_g, 1)$. Hence, by the remark at the beginning of the section (which we did not prove), it follows that $H^k(\Gamma_g, \Z) \cong H^k_{\mathrm{sing}}(S_g; \Z)$, and the latter is a standard computation in algebraic topology.
+\end{eg}
+
+Finally, we look at actions on a circle. Recall that we previously had the central extension
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z \ar[r, "i"] & \Homeo^+_\Z(\R) \ar[r, "p"] & \Homeo^+(S^1) \ar[r] & 0
+ \end{tikzcd}.
+\]
+This corresponds to the \term{Euler class} $e \in H^2(\Homeo^+(S^1), \Z)$.
+
+We can in fact construct a representative cocycle of $e$. To do so, we pick a section $s\colon \Homeo^+(S^1) \to \Homeo_\Z^+(\R)$ by sending $f \in \Homeo^+(S^1)$ to the unique lift $\bar{f}\colon \R \to \R$ such that $\bar{f}(0) \in [0, 1)$.
+
+Then we find that
+\[
+ s(f_1 f_2) T_{c(f_1, f_2)} = s(f_1) s(f_2)
+\]
+for some $c(f_1, f_2) \in \Z$.
+
+\begin{lemma}
+ We have $c(f_1, f_2) \in \{0, 1\}$.
+\end{lemma}
+
+\begin{proof}
+ We have $\overline{f_1 f_2}(0) \in [0, 1)$, while $\bar{f}_2(0) \in [0, 1)$. So we find that
+ \[
+ \bar{f}_1(\bar{f}_2(0)) \in [\bar{f}_1(0), \bar{f}_1(1)) = [\bar{f}_1(0), \bar{f}_1(0) + 1) \subseteq [0, 2).
+ \]
+ But we also know that $c(f_1, f_2)$ is an integer. So $c(f_1, f_2) \in \{0, 1\}$.
+\end{proof}
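+For rotations, this cocycle is completely explicit: if $R_x$ denotes rotation by $x$, then $s(R_x) = T_{\{x\}}$, and $c(R_x, R_y) = \lfloor \{x\} + \{y\}\rfloor$. The snippet below (illustrative, our own names) spot-checks the lemma and the cocycle identity $d^3 c = 0$:
+
+```python
+import math, random
+
+def c(x, y):
+    # Euler cocycle on rotations: s(R_x) s(R_y) = s(R_{x+y}) T_{c(x, y)},
+    # which forces c(x, y) = {x} + {y} - {x + y} = floor({x} + {y}).
+    return math.floor((x % 1.0) + (y % 1.0))
+
+random.seed(0)
+samples = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(1000)]
+print(all(c(x, y) in (0, 1) for x, y in samples))  # True, as the lemma says
+
+def d3c(x, y, z):
+    # The inhomogeneous differential of c; it vanishes since c is a cocycle.
+    return c(y, z) - c(x + y, z) + c(x, y + z) - c(x, y)
+
+print(all(d3c(x, y, z) == 0 for x, y, z in
+          [(0.3, 0.8, 0.9), (0.5, 0.5, 0.5), (0.1, 0.2, 0.3)]))  # True
+```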
+
+\begin{defi}[Euler class]\index{Euler class}
+ The \emph{Euler class} of the $\Gamma$-action by orientation-preserving homeomorphisms of $S^1$ is
+ \[
+ h^*(e) \in H^2(\Gamma, \Z),
+ \]
+ where $h\colon \Gamma \to \Homeo^+(S^1)$ is the map defining the action.
+\end{defi}
+
+For example, if $\Gamma_g$ is a surface group, then we obtain an invariant of actions valued in $\Z$.
+
+There are some interesting theorems about this Euler class that we will not prove.
+\begin{thm}[Milnor--Wood]
+ If $h\colon \Gamma_g \to \Homeo^+(S^1)$, then $|h^*(e)| \leq 2g - 2$.
+\end{thm}
+
+\begin{thm}[Gauss--Bonnet]
+ If $h\colon \Gamma_g \to \PSL(2, \R) \subseteq \Homeo^+(S^1)$ is the holonomy representation of a hyperbolic structure, then
+ \[
+ h^*(e) = \pm (2g - 2).
+ \]
+\end{thm}
+
+\begin{thm}[Matsumoto, 1986]
+ If $h$ defines a minimal action of $\Gamma_g$ on $S^1$ and $|h^*(e)| = 2g - 2$, then $h$ is conjugate to a hyperbolization.
+\end{thm}
+
+\subsection{Bounded cohomology of groups}
+We now move on to bounded cohomology. We will take $A = \Z$ or $\R$ now. The idea is to put the word ``bounded'' everywhere. For example, we previously had $C(\Gamma^{k + 1}, A)$ denoting the functions $\Gamma^{k + 1} \to A$. Likewise, we denote\index{$C_b(\Gamma^{k + 1}, A)$}
+\[
+ C_b(\Gamma^{k + 1}, A) = \{f \in C(\Gamma^{k + 1}, A) : f\text{ is bounded}\} \subseteq C(\Gamma^{k + 1}, A).
+\]
+We have $d^{(k)}(C_b(\Gamma^k, A)) \subseteq C_b(\Gamma^{k + 1}, A)$, and so as before, we obtain chain complexes
+\[
+ \begin{tikzcd}
+ 0 \ar[r] \ar[d, hook] & A \ar[r, "d^{(0)}"] \ar[d, hook] & C_b(\Gamma, A)^\Gamma \ar[r, "d^{(1)}"] \ar[d, hook] & C_b(\Gamma^2, A)^\Gamma \ar[r, "d^{(2)}"]\ar[d, hook] & \cdots\\
+ 0 \ar[r] & A \ar[r, "d^{(0)}"] & C_b(\Gamma, A) \ar[r, "d^{(1)}"] & C_b(\Gamma^2, A) \ar[r, "d^{(2)}"] & \cdots
+ \end{tikzcd}.
+\]
+This allows us to define
+
+\begin{defi}[Bounded cohomology]\index{bounded cohomology}\index{$k$-th bounded cohomology}
+ The \emph{$k$-th bounded cohomology group} of $\Gamma$ with coefficients in $A$ is
+ \[
+ H_b^k(\Gamma, A) = \frac{\ker (d^{(k + 1)} \colon C_b(\Gamma^{k + 1}, A)^\Gamma \to C_b(\Gamma^{k + 2}, A)^\Gamma)}{d^{(k)}(C_b(\Gamma^k, A)^\Gamma)}.
+ \]
+\end{defi}
+
+This comes with two additional features.
+\begin{enumerate}
+ \item As one would expect, a bounded cochain is bounded. So given an element $f \in C_b(\Gamma^{k + 1}, A)$, we can define
+ \[
+ \|f\|_\infty = \sup_{x \in \Gamma^{k + 1}} |f(x)|.
+ \]
+ Then $\|\ph\|_\infty$ makes $C_b(\Gamma^{k + 1}, A)$ into a normed abelian group, and in the case $A = \R$, a Banach space.
+
+ Then for $[f] \in H^k_b(\Gamma, A)$, we define
+ \[
+ \|[f]\|_\infty = \inf \{ \|f + d g\|_\infty : g \in C_b(\Gamma^k, A)^\Gamma\}.
+ \]
+ This induces a semi-norm on $H^k_b(\Gamma, A)$. This is called the \term{canonical semi-norm}.
+ \item We have a map of chain complexes
+ \[
+ \begin{tikzcd}
+ C_b(\Gamma, A)^\Gamma \ar[r] \ar[d, hook] & C_b(\Gamma^2, A)^\Gamma \ar[r] \ar[d, hook] & C_b(\Gamma^3, A)^\Gamma \ar[r] \ar[d, hook] & \cdots\\
+ C(\Gamma, A)^\Gamma \ar[r] & C(\Gamma^2, A)^\Gamma \ar[r] & C(\Gamma^3, A)^\Gamma \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ Thus, this induces a natural map $c_k\colon H^k_b (\Gamma, A) \to H^k(\Gamma, A)$, known as the \term{comparison map}. In general, $c_k$ need not be injective or surjective.
+\end{enumerate}
+
+As before, we can instead use the complex of inhomogeneous cochains. Then we have a complex that looks like
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & A \ar[r, "d^1 = 0"] & C_b(\Gamma, A) \ar[r, "d^2"] & C_b(\Gamma^2, A) \ar[r, "d^3"] & \cdots
+ \end{tikzcd}
+\]
+In degree $0$, the boundedness condition is useless, and we have
+\[
+ H_b^0(\Gamma, A) = H^0(\Gamma, A) = A.
+\]
+For $k = 1$, we have $\im d^1 = 0$. So we just have to compute the cocycles. For $f \in C_b(\Gamma, A)$, we have $d^2 f = 0$ iff $f(g_1) - f(g_1 g_2) + f(g_2) = 0$, iff $f \in \Hom(\Gamma, A)$. But we have the additional information that $f$ is bounded, and there are no non-zero bounded homomorphisms from $\Gamma$ to $\Z$ or $\R$! So we have
+\[
+ H_b^1(\Gamma, A) = 0.
+\]
+If we allow non-trivial coefficients, then $H^1_b(\Gamma, A)$ need not always vanish. But that's another story.
+
+The interesting part starts at $H_b^2(\Gamma, A)$. To understand this, we are going to determine the kernel of the comparison map
+\[
+ c_2\colon H_b^2(\Gamma, A) \to H^2(\Gamma, A).
+\]
+We consider the relevant part of the defining complexes, where we take inhomogeneous cochains:
+\[
+ \begin{tikzcd}
+ C(\Gamma, A) \ar[r, "d^2"] & C(\Gamma^2, A) \ar[r, "d^3"] & C(\Gamma^3, A)\\
+ C_b(\Gamma, A) \ar[r, "d^2"] \ar[u, hook] & C_b(\Gamma^2, A) \ar[r, "d^3"] \ar[u, hook] & C_b(\Gamma^3, A) \ar[u, hook]
+ \end{tikzcd}
+\]
+By definition, the kernel of $c_2$ consists of the $[\alpha] \in H^2_b(\Gamma, A)$ such that $\alpha = d^2 f$ for some $f \in C(\Gamma, A)$. But $d^2 f = \alpha$ being bounded tells us $f$ is a quasi-homomorphism! Thus, we have a map
+\[
+ \begin{tikzcd}[cdmap]
+ \bar{d}^2\colon \QH(\Gamma, A) \ar[r] & \ker c_2\\
+ f \ar[r, maps to] & \lbrack d^2 f\rbrack.
+ \end{tikzcd}
+\]
+\begin{prop}
+ The map $\bar{d}^2$ induces an isomorphism
+ \[
+ \frac{\QH(\Gamma, A)}{\ell^\infty(\Gamma, A) + \Hom(\Gamma, A)} \cong \ker c_2.
+ \]
+\end{prop}
+
+\begin{proof}
+ We know that $\bar{d}^2$ is surjective. So it suffices to show that the kernel is $\ell^\infty(\Gamma, A) + \Hom(\Gamma, A)$.
+
+ Suppose $f \in \QH(\Gamma, A)$ is such that $\bar{d}^2 f = 0$ in $H_b^2(\Gamma, A)$. Then there exists some $g \in C_b(\Gamma, A)$ such that
+ \[
+ d^2 f = d^2g.
+ \]
+ So it follows that $d^2 (f - g) = 0$. That is, $f - g \in \Hom(\Gamma, A)$. Hence it follows that
+ \[
+ \ker \bar{d}^2 \subseteq \ell^\infty(\Gamma, A) + \Hom(\Gamma, A).
+ \]
+ The other inclusion is clear.
+\end{proof}
+
+Since we already know about group cohomology, the determination of the kernel can help us compute the bounded cohomology. In certain degenerate cases, it can help us determine it completely.
+
+\begin{eg} For $\Gamma$ abelian and $A = \R$, we saw that $\QH(\Gamma, \R) = \ell^\infty(\Gamma, \R) + \Hom(\Gamma, \R)$. So it follows that $c_2$ is injective.
+\end{eg}
+
+\begin{eg}
+ For $H^2_b(\Z, \Z)$, we know $H^2(\Z, \Z) = 0$ since $\Z$ is a free group (hence, e.g.\ every extension splits, and in particular all central extensions do). Then we know
+ \[
+ H_b^2 (\Z, \Z) \cong \frac{\QH(\Z, \Z)}{\ell^\infty(\Z, \Z) + \Hom(\Z, \Z)} \cong \R / \Z.
+ \]
+\end{eg}
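+Concretely, under this isomorphism the class of $\alpha \in \R/\Z$ is represented by $d^2 f_\alpha$, where $f_\alpha(m) = \lfloor \alpha m\rfloor$ is a quasi-homomorphism $\Z \to \Z$ of defect at most $1$. An illustrative check (our own code):
+
+```python
+import math
+
+def f(alpha, m):
+    # The quasi-homomorphism f_alpha(m) = floor(alpha * m) on Z.
+    return math.floor(alpha * m)
+
+def d2f(alpha, m, k):
+    # Inhomogeneous coboundary: (d^2 f)(m, k) = f(m) - f(m + k) + f(k).
+    return f(alpha, m) - f(alpha, m + k) + f(alpha, k)
+
+alpha = (5 ** 0.5 - 1) / 2  # an irrational slope
+vals = {d2f(alpha, m, k) for m in range(-50, 51) for k in range(-50, 51)}
+print(vals <= {-1, 0})  # True: the defect of f_alpha is at most 1
+```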
+
+\begin{eg}
+ Consider $H_b^2(\Free_r, \R)$. We know that $H^2 (\Free_r, \R) = 0$. So again $H^2_b(\Free_r, \R)$ is given by the quasi-homomorphisms. We previously found many such quasi-homomorphisms --- by Rolli's theorem, we have an inclusion
+ \[
+ \begin{tikzcd}[cdmap]
+ \ell^\infty_{\mathrm{odd}}(\Z, \R) \oplus \ell^\infty_{\mathrm{odd}}(\Z, \R) \ar[r] & H^2_b(\Free_r, \R)\\
+ (\alpha, \beta) \ar[r, maps to] & \lbrack d^2 f_{\alpha, \beta}\rbrack
+ \end{tikzcd}
+ \]
+ Recall that $H_b^2(\Free_r, \R)$ has the structure of a semi-normed space, which we called the canonical norm. One can show that
+ \[
+ \|[d^2 f_{\alpha, \beta}]\| = \max (\|d \alpha\|_\infty, \|d \beta\|_\infty).
+ \]
+\end{eg}
+
+Returning to general theory, a natural question to ask ourselves is how the groups $H^\Cdot_b(\Gamma, \Z)$ and $H^\Cdot_b(\Gamma, \R)$ are related. For ordinary group cohomology, if $A \leq B$ is a subgroup (we are interested in $\Z \leq \R$), then we have a long exact sequence of the form
+\[
+ \begin{tikzcd}[column sep=small]
+ \cdots \ar[r] & H^{k - 1}(\Gamma, B/A) \ar[r, "\beta"] & H^k(\Gamma, A) \ar[r] & H^k(\Gamma, B) \ar[r] & H^k(\Gamma, B/A) \ar[r] & \cdots
+ \end{tikzcd},
+\]
+where $\beta$ is known as the \term{Bockstein homomorphism}. This long exact sequence comes from looking at the short exact sequence of chain complexes (of inhomogeneous cochains)
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & C (\Gamma^\Cdot, A) \ar[r] & C (\Gamma^\Cdot, B) \ar[r] & C(\Gamma^\Cdot, B/A) \ar[r] & 0
+ \end{tikzcd},
+\]
+and then applying the snake lemma.
+
+If we want to perform the analogous construction for bounded cohomology, we might worry that we don't know what $C_b(\Gamma^\Cdot, \R/\Z)$ means. However, if we stare at it long enough, we realize that we don't have to worry about that. It turns out the sequence
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & C_b (\Gamma^\Cdot, \Z) \ar[r] & C_b (\Gamma^\Cdot, \R) \ar[r] & C(\Gamma^\Cdot, \R/\Z) \ar[r] & 0
+ \end{tikzcd}
+\]
+is short exact. Thus, the snake lemma tells us we have a long exact sequence
+\[
+ \begin{tikzcd}[column sep=small]
+ \cdots \ar[r] & H^{k - 1}(\Gamma, \R/\Z) \ar[r, "\delta"] & H^k_b(\Gamma, \Z) \ar[r] & H_b^k(\Gamma, \R) \ar[r] & H^k(\Gamma, \R/\Z) \ar[r] & \cdots
+ \end{tikzcd}.
+\]
+This is known as the \term{Gersten long exact sequence}\index{long exact sequence!Gersten}.
+
+\begin{eg}
+ We can look at the beginning of the sequence, with
+ \[
+ \begin{tikzcd}
+ 0 = H_b^1(\Gamma, \R) \ar[r] & \Hom(\Gamma, \R/\Z) \ar[r, "\delta"] & H_b^2(\Gamma, \Z) \ar[r] & H_b^2(\Gamma, \R)
+ \end{tikzcd}.
+ \]
+ In the case $\Gamma = \Z$, from our first example, we know $c_2\colon H_b^2(\Z, \R) \to H^2(\Z, \R) = 0$ is injective, so $H_b^2(\Z, \R) = 0$. Hence we recover the isomorphism
+ \[
+ \R/\Z = \Hom(\Z, \R/\Z) \cong H_b^2(\Z, \Z)
+ \]
+ we found previously by direct computation.
+\end{eg}
+
+We've been talking about the kernel of $c_2$ so far. In Gersten's \emph{Bounded cocycles and combing of groups} (1992) paper, it was shown that the image of the comparison map $c_2\colon H_b^2(\Gamma, \Z) \to H^2(\Gamma, \Z)$ describes central extensions with special metric features. We shall not pursue this too far, but the theorem is as follows:
+
+\begin{thm}
+ Assume $\Gamma$ is finitely-generated. Let $G_\alpha$ be the central extension of $\Gamma$ by $\Z$ defined by a class $\alpha \in H^2(\Gamma, \Z)$ which admits a bounded representative. Then, with any word metrics, $G_\alpha$ is quasi-isometric to $\Gamma \times \Z$ via the ``identity map''.
+\end{thm}
+
+%A typical application is as follows --- for $n \geq 2$, the preimage $\tilde{\Gamma}$ of $\Sp(2n, \Z)$ in the universal covering of $\Sp(2n, \R)$ is a central $\Z$-extension of the above type. In addition, $\tilde{\Gamma}$ has property (T). But $\Gamma \times \Z$ doesn't have property (T).
+
+Before we end the chapter, we produce a large class of groups for which bounded cohomology (with real coefficients) vanishes, namely the \emph{amenable groups}.
+\begin{defi}[Amenable group]\index{amenable group}
+ A discrete group $\Gamma$ is \emph{amenable} if there is a linear form $m\colon \ell^\infty(\Gamma,\R) \to \R$ such that
+ \begin{itemize}
+ \item $m(f) \geq 0$ if $f \geq 0$;
+ \item $m(1) = 1$; and
+ \item $m$ is left-invariant, i.e.\ $m(\gamma_* f) = m(f)$, where $(\gamma_* f)(x) = f(\gamma^{-1}x)$.
+ \end{itemize}
+\end{defi}
+A linear form that satisfies the first two properties is known as a \term{mean}, and we can think of this as a way of integrating functions. Then an amenable group is a group with a left invariant mean. Note that the first two properties imply
+\[
+ |m(f)| \leq \|f\|_\infty.
+\]
+\begin{eg}\leavevmode
+ \begin{itemize}
+ \item Abelian groups and finite groups are amenable.
+ \item Subgroups of amenable groups are amenable.
+ \item If
+ \[
+ \begin{tikzcd}
+ 0 \ar[r] & \Gamma_1 \ar[r] & \Gamma_2 \ar[r] & \Gamma_3 \ar[r] & 0
+ \end{tikzcd}
+ \]
+ is a short exact sequence, then $\Gamma_2$ is amenable iff $\Gamma_1$ and $\Gamma_3$ are amenable.
+ \item Let $\Gamma = \bra S \ket$ for $S$ a finite set. Given a finite set $A \subseteq \Gamma$, we define $\partial A$ to be the set of all edges of the Cayley graph of $(\Gamma, S)$ with exactly one vertex in $A$.
+
+ For example, $\Z^2$ with the canonical generators has Cayley graph
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2, -1, 0, 1, 2} {
+ \draw (\x, -3) -- (\x, 3);
+ \draw (-3, \x) -- (3, \x);
+ }
+ \foreach \x in {-1, 0, 1} {
+ \draw [morange, thick] (1, \x) -- (2, \x);
+ \draw [morange, thick] (-1, \x) -- (-2, \x);
+ \draw [morange, thick] (\x, 1) -- (\x, 2);
+ \draw [morange, thick] (\x, -1) -- (\x, -2);
+ }
+ \foreach \x in {-1, 0, 1} {
+ \foreach \y in {-1, 0, 1} {
+ \node [circ, mred] at (\x, \y) {};
+ }
+ }
+ \end{tikzpicture}
+ \end{center}
+ Then if $A$ consists of the red points, then the boundary consists of the orange edges.
+
+ It is a theorem that a group $\Gamma$ is non-amenable iff there exists a constant $c = c(S, \Gamma) > 0$ such that for all finite $A \subseteq \Gamma$, we have $|\partial A| \geq c|A|$.
+ \item There exist infinite, finitely generated, simple, amenable groups.
+ \item If $\Gamma \subseteq \GL(n, \C)$, then $\Gamma$ is amenable iff it contains a finite-index subgroup which is solvable.
+ \item $\Free_2$ is non-amenable.
+ \item Any non-elementary word-hyperbolic group is non-amenable.
+ \end{itemize}
+\end{eg}
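The isoperimetric criterion above can be checked by hand for $\Z^2$. The following Python sketch (the function name \texttt{boundary\_ratio} is ours) computes $|\partial A|/|A|$ for $n \times n$ boxes in the Cayley graph of $\Z^2$ with the standard generators; the ratio is exactly $4/n$, which tends to $0$, so no constant $c > 0$ as in the criterion can exist, consistent with $\Z^2$ being amenable.

```python
# Cayley graph of Z^2 with the standard generators: for the n x n box A,
# count the edges with exactly one endpoint in A.  The ratio |dA|/|A| = 4/n
# tends to 0, so the isoperimetric criterion for non-amenability fails,
# as it must: Z^2 is abelian, hence amenable.
def boundary_ratio(n):
    A = {(x, y) for x in range(n) for y in range(n)}
    boundary_edges = 0
    for (x, y) in A:
        for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (x + dx, y + dy) not in A:
                boundary_edges += 1
    return boundary_edges / len(A)

for n in (8, 32, 128):
    assert abs(boundary_ratio(n) - 4 / n) < 1e-12
```

By contrast, for balls in the Cayley graph of $\Free_2$ the analogous ratio stays bounded away from $0$, which is one way to see the non-amenability of $\Free_2$ listed above.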
+
+\begin{prop}
+ Let $\Gamma$ be an amenable group. Then $H^k_b(\Gamma, \R) = 0$ for $k \geq 1$.
+\end{prop}
+
+The proof is a direct averaging argument, and requires no clever ideas.
+\begin{proof}
+ Let $k \geq 1$ and $f\colon \Gamma^{k + 1} \to \R$ a $\Gamma$-invariant bounded cocycle. In other words,
+ \begin{align*}
+ d^{(k + 1)} f &= 0\\
+ f(\gamma\gamma_0, \cdots, \gamma\gamma_k) &= f(\gamma_0, \cdots, \gamma_k).
+ \end{align*}
+ We have to find $\varphi\colon \Gamma^k \to \R$ bounded such that
+ \begin{align*}
+ d^{(k)} \varphi &= f\\
+ \varphi(\gamma\gamma_0, \cdots, \gamma\gamma_{k - 1}) &= \varphi(\gamma_0, \cdots, \gamma_{k - 1}).
+ \end{align*}
+ Recall that for $\eta \in \Gamma$, we can define
+ \[
+ h_\eta (\gamma_0, \cdots, \gamma_{k - 1}) = (-1)^{k} f(\gamma_0, \cdots, \gamma_{k - 1}, \eta),
+ \]
+ and then
+ \[
+ d^{(k + 1)}f = 0 \Longleftrightarrow f = d^{(k)}(h_\eta).
+ \]
+ However, $h_\eta$ need not be invariant. Instead, we have
+ \[
+ h_\eta(\gamma\gamma_0, \cdots, \gamma\gamma_{k - 1}) = h_{\gamma^{-1} \eta} (\gamma_0, \cdots, \gamma_{k - 1}).
+ \]
+ To fix this, let $m \colon \ell^\infty(\Gamma) \to \R$ be a left-invariant mean. We notice that the map
+ \[
+ \eta \mapsto h_\eta (\gamma_0, \cdots, \gamma_{k - 1})
+ \]
+ is bounded by $\|f\|_\infty$. So we can define
+ \[
+ \varphi(\gamma_0, \cdots, \gamma_{k - 1}) = m \Big\{ \eta \mapsto h_\eta (\gamma_0, \cdots, \gamma_{k - 1})\Big\}.
+ \]
+ Then this is the $\varphi$ we want. Indeed, we have
+ \[
+ \varphi(\gamma \gamma_0, \cdots, \gamma \gamma_{k - 1}) = m \Big\{ \eta \mapsto h_{\gamma^{-1}\eta} (\gamma_0, \cdots, \gamma_{k - 1})\Big\}.
+ \]
+ But this is just the mean of a left translation of the original function. So this is just $\varphi(\gamma_0, \cdots, \gamma_{k - 1})$. Also, by properties of the mean, we know $\|\varphi\|_\infty \leq \|f\|_\infty$.
+
+ Finally, by linearity, we have
+ \begin{align*}
+ d^{(k)} \varphi(\gamma_0, \cdots, \gamma_k) &= m \Big\{ \eta \mapsto d^{(k)} h_\eta (\gamma_0, \cdots, \gamma_k) \Big\}\\
+ &= m \Big\{ f(\gamma_0, \cdots, \gamma_k) \cdot \mathbf{1}_\Gamma\Big\}\\
+ &= f(\gamma_0, \cdots, \gamma_k) m (\mathbf{1}_\Gamma)\\
+ &= f(\gamma_0, \cdots, \gamma_k).\qedhere
+ \end{align*}
+\end{proof}
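For a finite group, the mean is just the ordinary average over the group, and the whole argument can be carried out explicitly. The following Python sketch (all helper names are ours) does this for $\Gamma = \Z/n$ in degree $k = 2$, where one can check directly that $h_\eta(\gamma_0, \gamma_1) = f(\gamma_0, \gamma_1, \eta)$ satisfies $d^{(2)} h_\eta = f$; averaging $h_\eta$ over $\eta$ produces an invariant bounded primitive, exactly as in the proof.

```python
import itertools
import random

n = 6                      # Gamma = Z/n, a finite (hence amenable) group
G = list(range(n))
mul = lambda a, b: (a + b) % n
inv = lambda a: (-a) % n

# Homogeneous coboundary: (d f)(g_0, ..., g_{k+1}) = sum_i (-1)^i f(..., omit g_i, ...)
def d(f, k):
    def df(*g):
        return sum((-1) ** i * f(*(g[:i] + g[i + 1:])) for i in range(k + 2))
    return df

# Manufacture an invariant 2-cocycle as the coboundary of an invariant 1-cochain
random.seed(0)
u = {a: random.random() for a in G}
g1 = lambda a, b: u[mul(inv(a), b)]   # invariant: g1(ta, tb) = g1(a, b)
f = d(g1, 1)                          # invariant cocycle on G^3 (d f = 0)

# h_eta(a, b) = f(a, b, eta); for a finite group the mean is the plain average
phi = lambda a, b: sum(f(a, b, eta) for eta in G) / n

# phi is an invariant primitive: d(phi) = f and phi(ta, tb) = phi(a, b)
dphi = d(phi, 1)
for a, b, c in itertools.product(G, repeat=3):
    assert abs(dphi(a, b, c) - f(a, b, c)) < 1e-9
for t, a, b in itertools.product(G, repeat=3):
    assert abs(phi(mul(t, a), mul(t, b)) - phi(a, b)) < 1e-9
```

The two assertion loops are precisely the two conditions $d^{(k)}\varphi = f$ and $\Gamma$-invariance of $\varphi$ demanded at the start of the proof.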
+
+\section{Actions on \tph{$S^1$}{S1}{S1}}
+\subsection{The bounded Euler class}
+We are now going to apply the machinery of bounded cohomology to understand actions on $S^1$. Recall that the central extension
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & \Z \ar[r] & \Homeo^+_\Z(\R) \ar[r] & \Homeo^+(S^1) \ar[r] & 0
+ \end{tikzcd}
+\]
+defines the Euler class $e \in H^2(\Homeo^+(S^1), \Z)$. We have also shown that there is a representative cocycle $c(f, g)$ taking the values in $\{0, 1\}$, defined by
+\[
+ \overline{f \circ g} \circ T_{c(f, g)} = \bar{f} \circ \bar{g},
+\]
+where for any $f$, the map $\bar{f}$ is the unique lift to $\R$ such that $\bar{f}(0) \in [0, 1)$.
+
+Since $c$ takes values in $\{0, 1\}$, in particular, it is a bounded cocycle. So we can use it to define
+\begin{defi}[Bounded Euler class]\index{bounded Euler class}\index{Euler class!bounded}\index{$e^b$}
+ The \emph{bounded Euler class}
+ \[
+ e^b \in H_b^2(\Homeo^+(S^1), \Z)
+ \]
+ is the bounded cohomology class represented by the cocycle $c$.
+\end{defi}
+
+By construction, $e^b$ is sent to $e$ via the comparison map
+\[
+ \begin{tikzcd}
+ c_2\colon H_b^2(\Homeo^+(S^1), \Z) \ar[r] & H^2(\Homeo^+(S^1), \Z)
+ \end{tikzcd}.
+\]
+In fact, the comparison map is injective. So this $e^b$ is the unique element that is sent to $e$, and doesn't depend on us arbitrarily choosing $c$ as the representative.
+
+\begin{defi}[Bounded Euler class of action]\index{bounded Euler class}\index{Euler class!bounded}
+ The bounded Euler class of an action $h\colon \Gamma \to \Homeo^+(S^1)$ is $h^*(e^b) \in H_b^2(\Gamma, \Z)$.
+\end{defi}
+
+By naturality (proof as exercise), $h^*(e^b)$ maps to $h^*(e)$ under the comparison map. The bounded Euler class is actually a rather concrete and computable object. Note that if we have an element $\varphi \in \Homeo^+(S^1)$, then we obtain a group homomorphism $\Z \to \Homeo^+(S^1)$ that sends $1$ to $\varphi$, and vice versa. In other words, we can identify elements of $\Homeo^+(S^1)$ with homomorphisms $h\colon \Z \to \Homeo^+(S^1)$. Any such homomorphism will give a bounded Euler class $h^*(e^b) \in H_b^2(\Z, \Z) \cong \R/\Z$.
+
+\begin{ex}
+ If $h\colon \Z \to \Homeo^+(S^1)$ and $\varphi = h(1)$, then under the isomorphism $H^2_b(\Z, \Z) \cong \R/\Z$, we have $h^*(e^b) = \Rot(\varphi)$, the Poincar\'e rotation number of $\varphi$.
+\end{ex}
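The rotation number appearing in this exercise can be estimated numerically from any lift $F\colon \R \to \R$ with $F(x + 1) = F(x) + 1$, via $\Rot(\varphi) = \lim_n F^n(0)/n \bmod 1$. A minimal Python sketch (our own illustration; the perturbed map below is a hypothetical Arnold-type example, not from the text):

```python
import math

def rotation_number(F, iterations=10**5):
    # Estimate Rot via the lift: lim F^n(0)/n, reduced mod 1
    x = 0.0
    for _ in range(iterations):
        x = F(x)
    return (x / iterations) % 1.0

alpha = 0.3

# Rigid rotation: the lift x -> x + alpha has rotation number exactly alpha
assert abs(rotation_number(lambda x: x + alpha) - alpha) < 1e-9

# A perturbed rotation (still the lift of a circle homeomorphism, since the
# perturbation keeps the derivative positive): a well-defined rotation number
F = lambda x: x + alpha + 0.05 * math.sin(2 * math.pi * x)
rho = rotation_number(F)
assert 0.0 < rho < 1.0
```

The convergence of $F^n(0)/n$ is exactly the quasi-homomorphism phenomenon from the previous section: $n \mapsto F^n(0)$ is a quasi-homomorphism on $\Z$, and its asymptotic slope modulo $\Z$ is the class $h^*(e^b) \in \R/\Z$.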
+%
+%\begin{proof}
+% We first work out in more detail what the isomorphism $H^2_b(\Z, \Z) \cong \R/\Z$ is. We saw that every element $[\tilde{c}] \in H^2(\Z, \Z)$ can be written as $[d^2 f]$ for some $f \in \QH(\Z, \Z)$. We have the formula
+% \[
+% \tilde{c}(n, m) = d^2 f (n, m) = f(m) - f(n + m) + f(n).
+% \]
+% In particular, putting $n = 1$, we have
+% \[
+% f(n + 1) = f(n) + f(1) - \tilde{c} (1, m) = (n + 1) f(1) - \sum_{j = 1}^n \tilde{c}(1, j).
+% \]
+% The element in $\R/\Z$ associated to $f$ is
+% \[
+% \lim_{n \to \infty} \frac{f(n)}{n} = f(1) - 1 + \lim_{n \to \infty} \frac{1}{n} \sum_{j = 1}^n (1 - \tilde{c}(1, j))
+% \]
+% Since $f(1) - 1 \in \Z$, we can drop it.
+%
+% In the case where $[\tilde{c}]$ is the bounded Euler class, using our explicit construction of $c$, we know $\tilde{c}(1, j) \in \{0, 1\}$. So viewed as an element of $\R/\Z$, the bounded Euler class is the (asymptotic) proportion of $j$ such that $\tilde{c}(1, j) = 0$. Upon some thought, we see that $\tilde{c}(1, j) = 0$ whenever $\varphi^j(0), 0, \varphi^{j + 1}(0)$ are positively-oriented:
+% \begin{center}
+% \begin{tikzpicture}
+% \draw [mblue, thick] circle [radius=1];
+% \node [circ] at (0.5, -0.866) {};
+% \node [anchor = north west, inner sep=0] at (0.5, -0.866) {$\varphi^{j + 1}(0)$};
+% \node [circ] at (-0.866, -0.5) {};
+% \node [anchor = north east, shift={(0.05, 0.05)}] at (-0.866, -0.5) {$\varphi^j(0)$};
+%
+% \node [circ] at (0, -1) {};
+% \node [below] at (0, -1) {$0$};
+% \end{tikzpicture}
+% \end{center}
+% At least intuitively, the proportion of $j$ for which this happens should be the Poincar\'e rotation number of $\varphi$. A detailed proof is left as an exercise for the reader.
+%\end{proof}
+Thus, one way to think about the bounded Euler class is as a generalization of the Poincar\'e rotation number.
+
+\begin{ex}
+ Assume $h\colon \Gamma \to \Homeo^+(S^1)$ takes values in the rotations $\Rot$. Let $\chi\colon \Gamma \to \R/\Z$ be the corresponding homomorphism. Then under the connecting homomorphism
+ \[
+ \begin{tikzcd}
+ \Hom(\Gamma, \R/\Z) \ar[r, "\delta"] & H_b^2(\Gamma, \Z)
+ \end{tikzcd},
+ \]
+ we have $\delta(\chi) = h^*(e^b)$.
+\end{ex}
+
+\begin{ex}
+ If $h_1$ and $h_2$ are conjugate in $\Homeo^+(S^1)$, i.e.\ there exists a $\varphi \in \Homeo^+(S^1)$ such that $h_1(\gamma) = \varphi h_2(\gamma) \varphi^{-1}$, then
+ \[
+ h_1^*(e) = h_2^*(e),\quad h_1^*(e^b) = h_2^*(e^b).
+ \]
+ The proof involves writing out a lot of terms explicitly.
+\end{ex}
+
+How powerful is this bounded Euler class in distinguishing actions? We just saw that conjugate actions have the same bounded Euler class. The converse does not hold. For example, one can show that any action with a global fixed point has trivial bounded Euler class, and there are certainly non-conjugate actions that both have global fixed points (e.g.\ take one of them to be the trivial action).
+
+It turns out there is a way to extend the notion of conjugacy so that the bounded Euler class becomes a complete invariant.
+
+\begin{defi}[Increasing map of degree $1$]\index{increasing map}\index{increasing map!degree 1}
+ A map $\varphi\colon S^1 \to S^1$ is \emph{increasing of degree $1$} if there is some $\tilde{\varphi}\colon \R \to \R$ lifting $\varphi$ such that $\tilde{\varphi}$ is monotonically increasing and
+ \[
+ \tilde{\varphi}(x + 1) = \tilde{\varphi}(x) + 1
+ \]
+ for all $x \in \R$.
+\end{defi}
+Note that there is no continuity assumption made on $\varphi$. On the other hand, it is an easy exercise to see that any monotonic map $\R \to \R$ has only countably many discontinuities. Note also that such a map need not be injective.
+
+\begin{eg}
+ The constant map $S^1 \to S^1$ sending $x \mapsto 0$ is increasing of degree $1$, as it has a lift $\tilde{\varphi}(x) = \lfloor x \rfloor$.
+\end{eg}
+
+Equivalently, such a map is one that sends a positive $4$-tuple to a weakly positive $4$-tuple (exercise!). % picture and explain
+
+\begin{defi}[Semiconjugate action]\index{semiconjugate action}\index{action!semiconjugate}\index{conjugate!semi-}
+ Two actions $h_1, h_2\colon \Gamma \to \Homeo^+(S^1)$ are \emph{semi-conjugate} if there are increasing maps $\varphi_1, \varphi_2\colon S^1 \to S^1$ of degree $1$ such that
+ \begin{enumerate}
+ \item $h_1(\gamma) \varphi_1 = \varphi_1 h_2(\gamma)$ for all $\gamma \in \Gamma$;
+ \item $h_2(\gamma) \varphi_2 = \varphi_2 h_1(\gamma)$ for all $\gamma \in \Gamma$.
+ \end{enumerate}
+\end{defi}
+One can check that the identity action is semiconjugate to any action with a global fixed point.
+
+Recall the following definition:
+\begin{defi}[Minimal action]\index{minimal action}\index{action!minimal}
+ An action on $S^1$ is \emph{minimal} if every orbit is dense.
+\end{defi}
+
+\begin{lemma}
+ If $h_1$ and $h_2$ are minimal actions that are semiconjugate via $\varphi_1$ and $\varphi_2$, then $\varphi_1$ and $\varphi_2$ are homeomorphisms and are inverses of each other.
+\end{lemma}
+
+\begin{proof}
+ The condition (i) tells us that
+ \[
+ h_1(\gamma) (\varphi_1(x)) = \varphi_1(h_2(\gamma)(x)).
+ \]
+ for all $x \in S^1$ and $\gamma \in \Gamma$. This means $\im \varphi_1$ is $h_1(\Gamma)$-invariant, hence dense in $S^1$. Thus, we know that $\im \tilde{\varphi}_1$ is dense in $\R$. But $\tilde{\varphi}_1$ is increasing. So $\tilde{\varphi}_1$ must be continuous. Indeed, we can look at the two limits
+ \[
+ \lim_{x \nearrow y} \tilde{\varphi}_1(x) \leq \lim_{x \searrow y} \tilde{\varphi}_1(x).
+ \]
+ But since $\tilde{\varphi}_1$ is increasing, if $\tilde{\varphi}_1$ were discontinuous at $y \in \R$, then the inequality would be strict, and hence the image would miss a non-trivial interval, contradicting density. So $\tilde{\varphi}_1$ is continuous.
+
+ We next claim that $\varphi_1$ is injective. Suppose not, say $\varphi_1(x_1) = \varphi_1(x_2)$ for some $x_1 \not= x_2$. Then by looking at the lift, we deduce that $\varphi_1((x_1, x_2)) = \{x\}$ for some $x$. Then by minimality of $h_2$, it follows that $\varphi_1$ is locally constant, hence constant, which is absurd.
+
+ Continuing in this way, one checks that $\varphi_1$ and $\varphi_2$ are homeomorphisms and are inverses of each other.
+\end{proof}
+
+\begin{thm}[F. Ghys, 1984]
+ Two actions $h_1$ and $h_2$ are semiconjugate iff $h_1^*(e^b) = h_2^*(e^b)$.
+\end{thm}
+Thus, in the case of minimal actions, the bounded Euler class is a complete invariant of actions up to conjugacy.
+
+\begin{proof}
+ We shall only prove one direction, that if the bounded Euler classes agree, then the actions are semi-conjugate.
+
+ Let $h_1, h_2\colon \Gamma \to \Homeo^+(S^1)$. Recall that $c(f, g) \in \{0, 1\}$ refers to the (normalized) cocycle defining the bounded Euler class. Then
+ \begin{align*}
+ c_1(\gamma, \eta) &= c(h_1(\gamma), h_1(\eta))\\
+ c_2(\gamma, \eta) &= c(h_2(\gamma), h_2(\eta))
+ \end{align*}
+ are representative cocycles of $h_1^*(e^b), h_2^*(e^b) \in H_b^2(\Gamma, \Z)$.
+
+ By the hypothesis, there exists $u\colon \Gamma \to \Z$ bounded such that
+ \[
+ c_2(\gamma, \eta) = c_1(\gamma, \eta) + u(\gamma) - u(\gamma\eta) + u(\eta)
+ \]
+ for all $\gamma, \eta \in \Gamma$.
+
+ Let $\bar{\Gamma} = \Gamma \times_{c_1} \Z$ be constructed with $c_1$, with group law
+ \[
+ (\gamma, n)(\eta, m) = (\gamma \eta, c_1(\gamma, \eta) + n + m).
+ \]
+ We have a section
+ \begin{align*}
+ s_1\colon \Gamma &\to \bar{\Gamma} \\
+ \gamma &\mapsto (\gamma, 0).
+ \end{align*}
+ We also write $\delta = (e, 1) \in \bar{\Gamma}$, which generates the copy of $\Z$ in $\bar{\Gamma}$. Then we have
+ \[
+ s_1(\gamma \eta) \delta^{c_1(\gamma, \eta)} = s_1(\gamma) s_1(\eta).
+ \]
+ Likewise, we can define a section by
+ \[
+ s_2(\gamma) = s_1(\gamma) \delta^{u(\gamma)}.
+ \]
+ Then we have
+ \begin{align*}
+ s_2(\gamma \eta) &= s_1 (\gamma \eta) \delta^{u(\gamma \eta)} \\
+ &= \delta^{-c_1(\gamma, \eta)} s_1(\gamma) s_1 (\eta) \delta^{u(\gamma \eta)}\\
+ &= \delta^{-c_1(\gamma, \eta)} \delta^{-u(\gamma)} s_2(\gamma) \delta^{-u(\eta)} s_2 (\eta) \delta^{u(\gamma \eta)}\\
+ &= \delta^{-c_1(\gamma, \eta) - u(\gamma) + u(\gamma \eta) - u(\eta)} s_2 (\gamma) s_2(\eta)\\
+ &= \delta^{-c_2(\gamma, \eta)} s_2(\gamma) s_2(\eta).
+ \end{align*}
+ Now every element in $\bar{\Gamma}$ can be uniquely written as a product $s_1(\gamma) \delta^n$, and the same holds for $s_2(\gamma) \delta^m$.
+
+ Recall that for $f \in \Homeo^+(S^1)$, we write $\bar{f}$ for the unique lift with $\bar{f}(0) \in [0, 1)$. We define
+ \[
+ \Phi_i (s_i(\gamma) \delta^n) = \overline{h_i(\gamma)} \cdot T_n.
+ \]
+ We claim that this is a homomorphism! We simply compute
+ \begin{align*}
+ \Phi_i (s_i(\gamma) \delta^n s_i(\eta) \delta^m) &= \Phi_i(s_i(\gamma) s_i(\eta) \delta^{n + m})\\
+ &= \Phi_i(s_i(\gamma \eta) \delta^{c_i(\gamma, \eta) + n + m})\\
+ &= \overline{h_i(\gamma \eta)} T_{c_i(\gamma, \eta)} T_{n + m}\\
+ &= \overline{h}_i(\gamma) \overline{h}_i(\eta) T_{n + m}\\
+ &= \overline{h}_i(\gamma) T_n \overline{h}_i(\eta) T_m\\
+ &= \Phi_i(s_i(\gamma) \delta^n) \Phi_i(s_i(\eta) \delta^m).
+ \end{align*}
+ So we get group homomorphisms $\Phi_i\colon \bar{\Gamma} \to \Homeo^+_\Z(\R)$.
+
+ \begin{claim}
+ For any $x \in \R$, the map
+ \begin{align*}
+ \bar{\Gamma} &\to \R\\
+ g &\mapsto \Phi_1(g)^{-1} \Phi_2(g) (x)
+ \end{align*}
+ is bounded.
+ \end{claim}
+
+ \begin{proof}
+ We define
+ \[
+ v(g, x) = \Phi_1(g)^{-1} \Phi_2(g)(x).
+ \]
+ We notice that
+ \begin{align*}
+ v(g \delta^m, x) &= \Phi_1(g \delta^m)^{-1} \Phi_2(g \delta^m)(x)\\
+ &= T_{-m} \Phi_1(g)^{-1} \Phi_2(g) T_m(x)\\
+ &= v(g, x),
+ \end{align*}
+ since $\Phi_1(g)^{-1} \Phi_2(g)$ lies in $\Homeo^+_\Z(\R)$ and hence commutes with the integer translations.
+ Also, for all $g$, the map $x \mapsto v(g, x)$ is in $\Homeo^+_\Z(\R)$.
+
+ Hence it is sufficient to show that
+ \[
+ \gamma \mapsto v(s_2(\gamma), 0)
+ \]
+ is bounded. Indeed, we just have
+ \begin{align*}
+ v(s_2(\gamma), 0) &= \Phi_1 (s_2(\gamma))^{-1} \Phi_2(s_2(\gamma))(0)\\
+ &= \Phi_1(s_1(\gamma) \delta^{u(\gamma)})^{-1} \Phi_2(s_2(\gamma))(0)\\
+ &= T_{-u(\gamma)} \overline{h_1(\gamma)}^{-1} \overline{h_2(\gamma)} (0)\\
+ &= - u(\gamma) + \overline{h_1(\gamma)}^{-1} (\overline{h_2(\gamma)}(0)).
+ \end{align*}
+ But $u$ is bounded, and also
+ \[
+ \overline{h_1(\gamma)}^{-1} (\overline{h_2(\gamma)}(0)) \in (-1, 1).
+ \]
+ So we are done.
+ \end{proof}
+ Finally, we can write down our two semi-conjugations. We define
+ \[
+ \tilde{\varphi}(x) = \sup_{g \in \bar{\Gamma}} v(g, x).
+ \]
+ Then we verify that
+ \[
+ \tilde{\varphi}(\Phi_2(h)(x)) = \Phi_1(h)(\tilde{\varphi}(x)).
+ \]
+ Reducing everything modulo $\Z$, we find that
+ \[
+ \varphi h_2(\gamma) = h_1(\gamma) \varphi.
+ \]
+ The other direction is symmetric.
+\end{proof}
+
+\subsection{The real bounded Euler class}
+The next thing we do might be a bit unexpected. We are going to forget that the cocycle $c$ takes values in $\Z$, and view it as an element in the \emph{real} bounded cohomology group.
+\begin{defi}[Real bounded Euler class]\index{real bounded Euler class}\index{bounded Euler class!real}\index{Euler class!real bounded}
+ The \emph{real bounded Euler class} is the class $e_\R^b \in H_b^2(\Homeo^+(S^1), \R)$ obtained by change of coefficients from $\Z \to \R$.
+
+ The real bounded Euler class of an action $h\colon \Gamma \to \Homeo^+(S^1)$ is the pullback
+ \[
+ h^*(e_\R^b) \in H_b^2(\Gamma, \R).
+ \]
+\end{defi}
+
+A priori, this class contains less information than the original Euler class. However, it turns out the real bounded Euler class can distinguish between very different dynamical properties. Recall that we had the Gersten long exact sequence
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & \Hom(\Gamma, \R/\Z) \ar[r, "\delta"] & H_b^2(\Gamma, \Z) \ar[r] & H_b^2(\Gamma, \R)
+ \end{tikzcd}.
+\]
+By exactness, the real bounded Euler class vanishes if and only if $e^b$ is in the image of $\delta$. But we can characterize the image of $\delta$ rather easily. Each homomorphism $\chi\colon \Gamma \to \R/\Z$ gives an action by rotation, and in a previous exercise, we saw the bounded Euler class of this action is $\delta(\chi)$. So the image of $\delta$ is exactly the bounded Euler classes of actions by rotations. On the other hand, we know that the bounded Euler class classifies the action up to semi-conjugacy. So we know that
+
+\begin{cor}
+ An action $h$ is semi-conjugate to an action by rotations iff $h^*(e_\R^b) = 0$.
+\end{cor}
+%
+%\begin{proof}\leavevmode
+% \begin{itemize}
+% \item[($\Rightarrow$)] Let $h_1\colon \Gamma \to \Rot \subseteq \Homeo^+(S^1)$ be semi-conjugate to $h$. Let $\chi\colon \Gamma \to \R/\Z$ be the associated group homomorphism under the isomorphism $\Rot \cong \R/\Z$. Recall from a previous exercise that
+% \[
+% h_1^*(e^b) = \delta(\chi),
+% \]
+% where $\delta$ is the connecting homomorphism in
+% \[
+% \begin{tikzcd}
+% 0 \ar[r] & \Hom(\Gamma, \R/\Z) \ar[r, "\delta"] & H_b^2(\Gamma, \Z) \to H_b^2(\Gamma, \R)
+% \end{tikzcd}.
+% \]
+% So $h^*(e^b) = h_1^*(e^b) = \delta(\chi)$. But by exactness, the image in $H_b^2(\Gamma, \R)$ vanishes.
+%
+% \item[($\Leftarrow$)] Suppose $h^*(e_\R^b) = 1$. Then $h^*(e^b) \in H_b^2(\Gamma, \Z)$ is in the kernel of the map $H^2_b(\Gamma, \Z) \to H^2_b(\Gamma, \R)$. Hence, by exactness, there exists $\chi \in \Hom(\Gamma, \R/\Z)$ such that $\delta(\chi) = h^*(e^b_\R)$. Then we can define $h_1\colon \Gamma \to \Rot$ to be given by $\chi$. Then $h_1^*(e^b) = \delta(\chi) = h(e^b)$. So $h$ is semi-conjugate to $h_1$.
+% \end{itemize}
+%\end{proof}
+
+We want to use the real bounded Euler class to classify different kinds of actions. Before we do that, we first classify actions \emph{without} using the real bounded Euler class, and then later see how this classification is related to the real bounded Euler class.
+
+\begin{thm}
+ Let $h\colon \Gamma \to \Homeo^+(S^1)$ be an action. Then one of the following holds:
+ \begin{enumerate}
+ \item There is a finite orbit, and all finite orbits have the same cardinality.
+ \item The action is minimal.
+ \item There is a closed, minimal, invariant, infinite, proper subset $K \subsetneq S^1$ such that for any $x \in S^1$, the closure of the orbit $\overline{h(\Gamma) x}$ contains $K$.
+ \end{enumerate}
+\end{thm}
+We will provide a proof sketch. More details can be found in Hector--Hirsch's \emph{Introduction to the geometry of foliations}.
+
+\begin{proof}[Proof sketch]
+ By compactness and Zorn's lemma, we can find a minimal, non-empty, closed, invariant subset $K \subseteq S^1$. Let $\partial K = K \setminus \mathring{K}$, and let $K'$ be the set of all accumulation points of $K$ (i.e.\ the set of all points $x$ such that every neighbourhood of $x$ contains infinitely many points of $K$). Clearly $K'$ and $\partial K$ are closed and invariant as well, and are contained in $K$. By minimality, they must be $K$ or empty.
+
+ \begin{enumerate}
+ \item If $K' = \emptyset$, then $K$ is finite. It is an exercise to show that all orbits have the same size.
+ \item If $K' = K$, and $\partial K = \emptyset$, then $K = \mathring{K}$, and hence is open. Since $S^1$ is connected, $K = S^1$, and the action is minimal.
+ \item If $K' = K = \partial K$, then $K$ is \emph{perfect}, i.e.\ every point is an accumulation point, and $K$ is totally disconnected. We definitely have $K \not= S^1$ and $K$ is infinite. It is also minimal and invariant.
+
+ Let $x \in S^1$. We want to show that the closure of its orbit contains $K$. Since $K$ is minimal, it suffices to show that $\overline{h(\Gamma)x}$ contains a point in $K$. If $x \in K$, then we are done. Otherwise, the complement of $K$ is open, hence a disjoint union of open intervals.
+
+ For the sake of explicitness, we define an interval of a circle as follows --- if $a, b \in S^1$ and $a \not= b$, then \index{open interval!of circle}
+ \[
+ (a, b) = \{z \in S^1: (a, z, b)\text{ is positively oriented}\}.
+ \]
+ Now let $(a, b)$ be the connected component of $S^1 \setminus K$ containing $x$. Then we know $a \in K$.
+
+ We observe that $S^1 \setminus K$ has to be the union of \emph{countably} many intervals, and moreover $h(\Gamma) a$ consists of end points of these open intervals. So $h(\Gamma) a$ is a countable set. On the other hand, since $K$ is perfect, we know $K$ is uncountable. The point is that this allows us to pick some element $y \in K \setminus h(\Gamma) a$.
+
+ Since $a \in K$, minimality tells us there exists a sequence $(\gamma_n)_{n \geq 1}$ such that $h(\gamma_n) a \to y$. But since $y \not \in h(\Gamma) a$, we may wlog assume that all the points $\{h(\gamma_n)a : n \geq 1\}$ are distinct. Hence $\{h(\gamma_n)(a, b) \}_{n \geq 1}$ is a collection of disjoint intervals in $S^1$. This forces their lengths to tend to $0$. We are now done, because then $h(\gamma_n) x$ gets arbitrarily close to $h(\gamma_n) a$ as well.\qedhere
+ \end{enumerate}
+\end{proof}
+
+We shall try to rephrase this result in terms of the real bounded Euler class. It will take some work, but we shall state the result as follows:
+
+\begin{cor}
+ Let $h\colon \Gamma \to \Homeo^+(S^1)$ be an action. Then one of the following is true:
+ \begin{enumerate}
+ \item $h^*(e^b_\R) = 0$ and $h$ is semi-conjugate to an action by rotations.
+ \item $h^*(e^b_\R) \not= 0$, and then $h$ is semi-conjugate to a minimal \emph{unbounded}\index{unbounded action}\index{action!unbounded} action, i.e.\ $\{h(\gamma): \gamma \in \Gamma\}$ is not equicontinuous.
+ \end{enumerate}
+\end{cor}
+Observe that if $\Lambda \subseteq \Homeo^+(S^1)$ is equicontinuous, then by Arzela--Ascoli, its closure $\bar{\Lambda}$ is compact.
+
+To prove this, we first need the following lemma:
+
+\begin{lemma}
+ A compact subgroup $U \subseteq \Homeo^+(S^1)$ acting minimally on $S^1$ is conjugate to a subgroup of $\Rot$.
+\end{lemma}
+
+\begin{proof}
+ By the Kakutani fixed point theorem, we can pick a $U$-invariant measure $\mu$ on $S^1$, normalized so that $\mu(S^1) = 2\pi$.
+
+ We parametrize the circle by $p\colon [0, 2\pi) \to S^1$. We define $\varphi \in \Homeo^+(S^1)$ by
+ \[
+ \varphi(p(t)) = p(s),
+ \]
+ where $s \in [0, 2\pi)$ is unique with the property that
+ \[
+ \mu(p([0, s))) = t.
+ \]
+ One then verifies that $\varphi$ is a homeomorphism, and $\varphi U \varphi^{-1} \subseteq \Rot$.
+\end{proof}
+
+\begin{proof}[Proof of corollary] % understand this
+ Suppose $h^*(e^b_\R) \not= 0$. Thus we are in case (ii) or (iii) of the previous trichotomy.
+
+ We first show how to reduce (iii) to (ii). Let $K \subsetneq S^1$ be the minimal $h(\Gamma)$-invariant closed set given by the trichotomy theorem. The idea is that this $K$ misses a lot of open intervals, and we want to collapse those intervals.
+
+ We define the equivalence relation on $S^1$ by $x \sim y$ if $\{x, y\} \subseteq \bar{I}$ for some connected component $I$ of $S^1 \setminus K$. Then $\sim$ is an $h(\Gamma)$-invariant equivalence relation, and the quotient space $S^1/\sim$ is homeomorphic to $S^1$ (exercise!). Write $i\colon S^1/\sim \to S^1$ for the homeomorphism.
+
+ In this way, we obtain an action $\rho\colon \Gamma \to \Homeo^+(S^1)$ which is minimal, and the map
+ \[
+ \varphi\colon
+ \begin{tikzcd}
+ S^1 \ar[r, "\mathrm{pr}"] & S^1/\sim \ar[r, "i"] & S^1
+ \end{tikzcd}
+ \]
+ intertwines the two actions, i.e.
+ \[
+ \varphi h(\gamma) = \rho(\gamma) \varphi.
+ \]
+ Then one shows that $\varphi$ is increasing of degree $1$. Then we would need to find $\psi\colon S^1 \to S^1$ which is increasing of degree $1$ with
+ \[
+ \psi \rho(\gamma) = h(\gamma) \psi.
+ \]
+ But $\varphi$ is surjective, and picking an appropriate section of this would give the $\psi$ desired.
+
+ So $h$ is semi-conjugate to $\rho$, and $0 \not= h^*(e^b_\R) = \rho^*(e^b_\R)$.
+
+ Thus we are left with $\rho$ minimal, with $\rho^*(e^b_\R) \not= 0$. We have to show that $\rho$ is not equicontinuous. But if it were, then $\rho(\Gamma)$ would be contained in a compact subgroup of $\Homeo^+(S^1)$, and hence by the previous lemma, would be conjugate to an action by rotation.
+\end{proof}
+
+The following theorem gives us a glimpse of what unbounded actions look like:
+\begin{thm}[Ghys, Margulis]
+ If $\rho\colon \Gamma \to \Homeo^+(S^1)$ is an action which is minimal and unbounded, then the centralizer $C_{\Homeo^+(S^1)}(\rho(\Gamma))$ is finite cyclic, say $\bra \varphi\ket$, and the factor action $\rho_0$ on $S^1/\bra \varphi\ket \cong S^1$ is minimal and strongly proximal. We call this action the \term{strongly proximal quotient} of $\rho$.
+\end{thm}
+
+\begin{defi}[Strongly proximal action]\index{strongly proximal action}\index{action!strongly proximal}
+ A $\Gamma$-action by homeomorphisms on a compact metrizable space $X$ is \emph{strongly proximal} if for all probability measures $\mu$ on $X$, the weak-$*$ closure $\overline{\Gamma}_* \mu$ contains a Dirac mass.
+\end{defi}
+
+For a minimal action on $X = S^1$, the property is equivalent to the following:
+\begin{itemize}
+ \item Every proper closed interval can be contracted. In other words, for every proper closed interval $J \subsetneq S^1$, there exists a sequence $(\gamma_n)_{n \geq 1}$ such that $\diam(\rho(\gamma_n)J) \to 0$ as $n \to \infty$.
+\end{itemize}
+
+\begin{proof}[Proof of theorem]
+ Let $\psi$ commute with $\rho(\gamma)$ for all $\gamma \in \Gamma$, and assume $\psi \not= \id$.
+ \begin{claim}
+ $\psi$ has no fixed points.
+ \end{claim}
+
+ \begin{proof}
+ Otherwise, if $\psi(p) = p$, then
+ \[
+ \psi(\rho(\gamma) p) = \rho(\gamma) \psi(p) = \rho(\gamma)(p).
+ \]
+ Then by density of $\{\rho(\gamma) p: \gamma \in \Gamma\}$, we have $\psi = \id$.
+ \end{proof}
+
+ Hence, by compactness, we can find $\varepsilon > 0$ such that $\length([x, \psi(x)]) \geq \varepsilon$ for all $x$. Observe that
+ \[
+ \rho(\gamma) [x, \psi(x)] = [\rho(\gamma) x, \rho(\gamma) \psi(x)] = [\rho(\gamma) x, \psi(\rho(\gamma)x)].
+ \]
+ This is just an element of the above kind. So $\length(\rho(\gamma)[x, \psi(x)]) \geq \varepsilon$.
+
+ Now assume $\rho(\Gamma)$ is minimal and not equicontinuous.
+ \begin{claim}
+ Every point $x \in S^1$ has a neighbourhood that can be contracted.
+ \end{claim}
+
+ \begin{proof}
+ Indeed, since $\rho(\Gamma)$ is not equicontinuous, there exist $\varepsilon > 0$, a sequence $(\gamma_n)_{n \geq 1}$ and intervals $I_n$ such that $\length(I_n) \searrow 0$ and $\length(\rho(\gamma_n)I_n) \geq \varepsilon$.
+
+ Since $S^1$ is compact, after passing to a subsequence, we may assume that there is a fixed interval $J$ with $\length(J) \geq \frac{\varepsilon}{2}$ and $J \subseteq \rho(\gamma_n) I_n$ for all large $n$.
+
+ But this means
+ \[
+ \rho(\gamma_n)^{-1} J \subseteq I_n.
+ \]
+ So $J$ can be contracted. Since the action is minimal,
+ \[
+ \bigcup_{\gamma \in \Gamma} \rho(\gamma) J = S^1.
+ \]
+ So every point in $S^1$ is contained in some interval that can be contracted.
+ \end{proof}
+
+ We shall now write down the homeomorphism that generates the centralizer. Fix $x \in S^1$. Then the set
+ \[
+ \mathcal{C}_x = \{[x, y) \subseteq S^1: [x, y)\text{ can be contracted}\}
+ \]
+ is totally ordered by inclusion. Define
+ \[
+ \varphi(x) = \sup \mathcal{C}_x.
+ \]
+ Then
+ \[
+ [x, \varphi(x)) = \bigcup \mathcal{C}_x.
+ \]
+ This gives a well-defined map $\varphi$ that commutes with the action of $\Gamma$. It is then an interesting exercise to verify all the desired properties.
+ \begin{itemize}
+ \item To show $\varphi$ is a homeomorphism, we show $\varphi$ is increasing of degree $1$, and since it commutes with a minimal action, it is a homeomorphism.
+
+ \item If $\varphi$ does not have finite order, then there is some $n$ such that $\varphi^n(x)$ lies strictly between $x$ and $\varphi(x)$. But since $\varphi$ commutes with the action of $\Gamma$, this implies $[x, \varphi^n(x)]$ cannot be contracted, which is a contradiction.\qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{ex}
+ We have
+ \[
+ \rho^*(e^b) = k \rho^*_0(e^b),
+ \]
+ where $k$ is the cardinality of the centralizer.
+\end{ex}
+
+\begin{eg}
+ We can decompose $\PSL(2, \R) = \PSO(2) AN$, where
+ \[
+ A = \left\{
+ \begin{pmatrix}
+ \lambda & 0\\
+ 0 & \lambda^{-1}
+ \end{pmatrix}: \lambda > 0
+ \right\},\quad N = \left\{
+ \begin{pmatrix}
+ 1 & x\\
+ 0 & 1
+ \end{pmatrix}
+ \right\}.
+ \]
+ More precisely, $\SO(2) \times A \times N \to \SL(2, \R)$ is a diffeomorphism and induces a diffeomorphism $\PSO(2)\times A \times N \to \PSL(2, \R)$. In particular, the inclusion $i\colon \PSO(2) \hookrightarrow \PSL(2, \R)$ induces an isomorphism on the level of $\pi_1 \cong \Z$.
+
+ We can consider the subgroup $k\Z \subseteq \Z$, which gives us coverings $\PSO(2)_k$ and $\PSL(2, \R)_k$ of $\PSO(2)$ and $\PSL(2, \R)$ that fit in the diagram
+ \[
+ \begin{tikzcd}
+ \PSO(2)_k \ar[r, "i_k"] \ar[d, "p"] & \PSL(2, \R)_k \ar[d, "p"]\\
+ \PSO(2) \ar[r, "i"] & \PSL(2, \R)
+ \end{tikzcd}.
+ \]
+ On the other hand, if we put $B = A \cdot N$, which is a contractible subgroup, the inclusion $B \hookrightarrow \PSL(2, \R)$ lifts to a homomorphism $s\colon B \to \PSL(2, \R)_k$, and we find that
+ \[
+ \PSL(2, \R)_k \cong \PSO(2)_k \cdot s(B).
+ \]
+ So we have
+ \[
+ \frac{\PSL(2, \R)_k}{ s(B)} \cong \PSO(2)_k.
+ \]
+ So $\PSL(2, \R)_k/s(B)$ is homeomorphic to a circle. So we obtain an action of $\PSL(2, \R)_k$ on the circle. % homogeneous space
+
+ Now we can think of $\Gamma \cong \Free_r$ as a lattice in $\PSL(2, \R)$. Take any section $\sigma\colon \Gamma \to \PSL(2, \R)_k$ of the covering map $p$. This way, we obtain an unbounded minimal action with centralizer isomorphic to $\Z/k\Z$.
+\end{eg}
+
+\begin{defi}[Lattice]\index{lattice}
+ A lattice in a locally compact group $G$ is a discrete subgroup $\Gamma$ such that on $\Gamma \backslash G$, there is a $G$-invariant probability measure.
+\end{defi}
+
+\begin{eg}
+ Let $\mathcal{O}$ be the ring of integers of a finite extension $k/\Q$. Then $\SL(n, \mathcal{O})$ is a lattice in an appropriate Lie group. To construct this, we write $[k:\Q] = r + 2s$, where $r$ and $2s$ are the number of real and complex field embeddings of $k$. Using these field embeddings, we obtain an injection
+ \[
+ \SL(n, \mathcal{O}) \to \SL(n, \R)^r \times \SL(n, \C)^s,
+ \]
+ and the image is a lattice.
+\end{eg}
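+
+As a concrete instance (our illustration, not from the lectures): for $k = \Q(\sqrt[3]{2})$ we have $[k : \Q] = 3 = 1 + 2 \cdot 1$, so $r = 1$ and $s = 1$, and the embedding reads
+\[
+ \SL(n, \mathcal{O}) \hookrightarrow \SL(n, \R) \times \SL(n, \C).
+\]
+Taking $k = \Q$ instead gives $r = 1$ and $s = 0$, recovering the classical lattice $\SL(n, \Z) < \SL(n, \R)$.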
+
+\begin{eg}
+ If $X$ is a complete proper CAT(0) space, then $\Isom(X)$ is locally compact, and in many cases contains lattices.
+\end{eg}
+
+\begin{thm}[Burger, 2007]
+ Let $G$ be a second-countable locally compact group, and $\Gamma < G$ be a lattice, and $\rho\colon \Gamma \to \Homeo^+(S^1)$ a minimal unbounded action. Then the following are equivalent:
+ \begin{itemize}
+ \item $\rho^*(e^b_\R)$ is in the image of the restriction map $H^2_{bc}(G, \R) \to H_b^2(\Gamma, \R)$; % bc is continuous
+ \item The strongly proximal quotient $\rho_{ss}\colon \Gamma \to \Homeo^+(S^1)$ extends continuously to $G$. % introduce _ss notation for strongly proximal quotient.
+ \end{itemize}
+\end{thm}
+% ``an extension criterion for lattice actions on the circle'' in ``Geometry, rigidity, and group actions'', Chicago
+
+\begin{thm}[Burger--Monod, 2002]
+ The restriction map $H^2_{bc}(G, \R) \to H^2_b(\Gamma, \R)$ is an isomorphism in the following cases:
+ \begin{enumerate}
+ \item $G = G_1 \times \cdots \times G_n$ is a Cartesian product of locally compact groups and $\Gamma$ has dense projections on each individual factor.
+ \item $G$ is a connected semisimple Lie group with finite center and rank $G \geq 2$, and $\Gamma$ is irreducible.
+ \end{enumerate}
+\end{thm}
+
+\begin{eg}
+ Let $k/\Q$ be a non-trivial finite extension that is not an imaginary quadratic extension. Then we have an inclusion
+ \[
+ \SL(2, \mathcal{O}) \hookrightarrow \SL(2, \R)^r \times \SL(2, \C)^s
+ \]
+ and the right-hand side is a product of more than one factor. One can explicitly compute the continuous bounded cohomology groups of the right-hand side.
+\end{eg}
+
+\begin{ex}
+ Let $\Gamma < \SL(3, \R)$ be any lattice. Are there any actions by orientation-preserving homeomorphisms on $S^1$?
+
+ We consider cases according to the value of $\rho^*(e^b_\R)$.
+ \begin{itemize}
+ \item If $\rho^*(e^b_\R) = 0$, then there is a finite orbit. Then we are stuck, and don't know what to say.
+ \item If $\rho^*(e^b_\R) \not= 0$, then we have an unbounded minimal action. This leads to a strongly proximal action $\rho_{ss}\colon \Gamma \to \Homeo^+(S^1)$. But by the above results, this implies the action extends continuously to an action of $\SL(3, \R)$ on $S^1$. But $\SL(3, \R)$ contains $\SO(3)$, which is a compact group. But we know what compact subgroups of $\Homeo^+(S^1)$ look like, and it eventually follows that the action is trivial. So this case is not possible.
+ \end{itemize}
+\end{ex}
+
+%We say a topological group $T$ has \emph{small subgroups} if every neighbourhood of the identity contains a non-trivial subgroup. Typical examples include $(\Z/2\Z)^\N$, under the product topology.
+
+\section{The relative homological approach}
+\subsection{Injective modules}
+When we defined ordinary group cohomology, we essentially defined it as the right-derived functor of taking invariants. While we do not need the machinery of homological algebra and derived functors to define group cohomology, having that available means we can pick different injective resolutions to compute group cohomology depending on the scenario, and often this can be helpful. It also allows us to extend group cohomology to allow non-trivial coefficients. Thus, we would like to develop a similar theory for bounded cohomology.
+
+We begin by specifying the category we are working over.
+\begin{defi}[Banach $\Gamma$ module]\index{Banach $\Gamma$-module}\index{$\Gamma$-module!Banach}
+ A Banach $\Gamma$-module is a Banach space $V$ together with an action $\Gamma \times V \to V$ by linear isometries.
+\end{defi}
+Given a Banach $\Gamma$-module $V$, we can take the submodule of $\Gamma$-invariants $V^\Gamma$. The relative homological approach tells us we can compute the bounded cohomology $H_b^\Cdot(\Gamma, \R)$ by first taking an appropriate exact sequence of Banach $\Gamma$-modules
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & \R \ar[r, "d^{(0)}"] & E_0 \ar[r, "d^{(1)}"] & E_1 \ar[r, "d^{(2)}"] & \cdots,
+ \end{tikzcd}
+\]
+and then take the cohomology of the complex of $\Gamma$-invariants
+\[
+ \begin{tikzcd}
+ 0 \ar[r] & E_0^\Gamma \ar[r, "d^{(1)}"] & E_1^\Gamma \ar[r, "d^{(2)}"] & E_2^\Gamma \ar[r] & \cdots
+ \end{tikzcd}.
+\]
+Of course, this works if we take $E_k = \ell^\infty(\Gamma^{k + 1})$ and $d^{(k)}$ to be the differentials we have previously constructed, since this is how we defined bounded cohomology. The point is that there exists a large class of ``appropriate'' exact sequences such that this procedure gives us the bounded cohomology.
+
+We first need the following definition:
+\begin{defi}[Admissible morphism]\index{admissible morphism}
+ An injective morphism $i\colon A \to B$ of Banach spaces is \emph{admissible} if there exists $\sigma\colon B \to A$ with
+ \begin{itemize}
+ \item $\sigma i = \id_A$; and
+ \item $\|\sigma\|\leq 1$.
+ \end{itemize}
+\end{defi}
+This is a somewhat mysterious definition, but when we have such a situation, this in particular implies $i(A)$ is closed and $B = i(A) \oplus \ker \sigma$. In usual homological algebra, we don't meet these kinds of conditions, because subspaces of our vector spaces always have complements. However, here we need them.
+
+\begin{defi}[Injective Banach $\Gamma$-module]\index{injective Banach $\Gamma$-module}\index{Banach $\Gamma$-module!injective}\index{$\Gamma$-module!injective Banach}
+ A Banach $\Gamma$-module is injective if for any diagram
+ \[
+ \begin{tikzcd}
+ A \ar[r, "i"] \ar[d, "\alpha"] & B\\
+ E
+ \end{tikzcd}
+ \]
+ where $i$ and $\alpha$ are morphisms of $\Gamma$-modules and $i$ is an admissible injection, there exists a morphism of $\Gamma$-modules $\beta \colon B \to E$ such that
+ \[
+ \begin{tikzcd}
+ A \ar[r, "i"] \ar[d, "\alpha"] & B \ar[ld, dashed, "\beta"]\\
+ E
+ \end{tikzcd}
+ \]
+ commutes and $\|\beta\| \leq \|\alpha\|$.
+\end{defi}
+In other words, we can extend any map from a closed complemented subspace of $B$ to $E$.
+
+\begin{defi}[Injective resolution]\index{injective resolution}
+ Let $V$ be a Banach $\Gamma$-module. An \emph{injective resolution} of $V$ is an exact sequence
+ \[
+ \begin{tikzcd}
+ V \ar[r] & E_0 \ar[r] & E_1 \ar[r] & E_2 \ar[r] & \cdots
+ \end{tikzcd}
+ \]
+ where each $E_k$ is injective.
+\end{defi}
+Then standard techniques from homological algebra imply the following theorem:
+\begin{thm}
+ Let $E^{\Cdot}$ be an injective resolution of $\R$. Then
+ \[
+ H^\Cdot((E^{\Cdot})^\Gamma) \cong H_b^\Cdot(\Gamma, \R)
+ \]
+ as topological vector spaces.
+
+ In case $E^\Cdot$ admits contracting homotopies, this isomorphism is semi-norm decreasing.
+\end{thm}
+
+Unsurprisingly, the defining complex for bounded cohomology is composed of injective $\Gamma$-modules.
+\begin{lemma}\leavevmode
+ \begin{itemize}
+ \item $\ell^\infty(\Gamma^n)$ for $n \geq 1$ are all injective Banach $\Gamma$-modules.
+ \item $\ell_{\mathrm{alt}}^\infty(\Gamma^n)$ for $n \geq 1$ are injective Banach $\Gamma$-modules as well.
+ \end{itemize}
+\end{lemma}
+This is a verification. More interestingly, we have the following
+\begin{prop}
+ The trivial $\Gamma$-module $\R$ is injective iff $\Gamma$ is amenable.
+\end{prop}
+As an immediate corollary, we know that if $\Gamma$ is amenable, then all the higher bounded cohomology groups vanish, as $0 \to \R \to 0 \to 0 \to \cdots$ is an injective resolution.
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item[$(\Rightarrow)$] Suppose $\R$ is injective. Consider the diagram
+ \[
+ \begin{tikzcd}
+ \R \ar[d, equals] \ar[r, "i"] & \ell^\infty(\Gamma)\\
+ \R
+ \end{tikzcd},
+ \]
+ where $i(t)$ is the constant function $t$. We first verify that $i$ is an admissible injection: $\sigma(f) = f(e)$ is a left inverse to $i$ with $\|\sigma\| \leq 1$. By injectivity, there exists a morphism $\beta\colon \ell^\infty(\Gamma) \to \R$ filling in the diagram with $\|\beta\| \leq \|\id_\R\| = 1$, and in particular
+ \[
+ \beta(\mathbf{1}_\Gamma) = 1.
+ \]
+ Since the action of $\Gamma$ on $\R$ is trivial, $\beta$ is an invariant linear form on $\ell^\infty(\Gamma)$, and we see that this is an invariant mean.
+ \item[$(\Leftarrow)$] Assume $\Gamma$ is amenable, and let $m\colon \ell^\infty(\Gamma) \to \R$ be an invariant mean. Consider a diagram
+ \[
+ \begin{tikzcd}
+ A \ar[r, "i"] \ar[d, "\alpha"] & B\\
+ \R
+ \end{tikzcd}
+ \]
+ as in the definition of injectivity. Since $i$ is admissible, it has a left inverse $\sigma\colon B \to A$. Then we can define
+ \[
+ \beta(v) = m \{\gamma \mapsto \alpha(\sigma(\gamma_* v))\}.
+ \]
+ This defines a morphism of $\Gamma$-modules $\beta\colon B \to \R$ with $\|\beta\| \leq \|\alpha\|$, and one can verify this works.\qedhere
+ \end{itemize}
+\end{proof}
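+
+To spell out the corollary in the simplest case (a standard fact, stated here for concreteness): $\Z$ is abelian, hence amenable, so the trivial module $\R$ is an injective Banach $\Z$-module and
+\[
+ 0 \to \R \to 0 \to 0 \to \cdots
+\]
+is an injective resolution. Taking invariants changes nothing, so $H_b^n(\Z, \R) = 0$ for all $n \geq 1$.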
+
+This theory allows us to study bounded cohomology with more general coefficients. This can also be extended to $G$ a locally compact second-countable group with coefficients in a $G$-Banach module $E$ which is the dual of a continuous separable Banach module $E^b$. This is more technical and subtle, but it works.
+
+\subsection{Amenable actions}
+In Riemannian geometry, we have the Hodge decomposition theorem. It allows us to understand the de Rham cohomology of a Riemannian manifold in terms of harmonic forms: every cohomology class has a canonical harmonic representative. In bounded cohomology, we don't have something like this, but we can produce a complex whose cohomology in the second degree is simply a space of cocycles.
+
+The setting is that we have a locally compact second-countable group $G$ with a non-singular action on a standard measure space $(S, \mathcal{M}, \mu)$. We require the action map $G \times S \to S$ to be measurable. Moreover, for any $g \in G$, the measure $g_* \mu$ is equivalent to $\mu$. In other words, the $G$-action preserves the null sets.
+
+\begin{eg}
+ Let $M$ be a smooth manifold. Then the action of $\Diff(M)$ on $M$ is non-singular.
+\end{eg}
+
+We want to come up with a notion of amenability for actions. The key ingredient is a conditional expectation.
+\begin{defi}[Conditional expectation]\index{conditional expectation}
+ A \emph{conditional expectation} on $G \times S$ is a linear map $M\colon L^\infty(G \times S) \to L^\infty(S)$ such that
+ \begin{enumerate}
+ \item $M(1) = 1$;
+ \item If $f \geq 0$, then $M(f) \geq 0$; and
+ \item $M$ is $L^\infty(S)$-linear.
+ \end{enumerate}
+\end{defi}
+We have a left $G$-action on $L^\infty(G \times S)$ given by the diagonal action, and also a natural $G$-action on $L^\infty(S)$. We say $M$ is \term{$G$-equivariant} if it intertwines these two actions.
+
+\begin{defi}[Amenable action]\index{amenable action}\index{action!amenable}
+ A $G$-action on $S$ is amenable if there exists a $G$-equivariant conditional expectation.
+\end{defi}
+Note that a point (with the trivial action) is an amenable $G$-space iff $G$ is amenable itself.
+
+\begin{eg}
+ Let $H$ be a closed subgroup of $G$. Then the $G$-action on $G/H$ is amenable iff $H$ is amenable.
+\end{eg}
+
+\begin{thm}[Burger--Monod, 2002]
+ Let $G \times S \to S$ be a non-singular action. Then the following are equivalent:
+ \begin{enumerate}
+ \item The $G$ action is amenable.
+ \item $L^\infty(S)$ is an injective $G$-module.
+ \item $L^\infty(S^n)$ is injective for all $n \geq 1$.
+ \end{enumerate}
+\end{thm}
+So any amenable $G$-space can be used to compute the bounded cohomology of $G$.
+
+\begin{cor}
+ If $(S, \mu)$ is an amenable $G$-space, then we have an isometric isomorphism $H^\Cdot(L^\infty(S^n, \mu)^G, d_n) \cong H^\Cdot(L^\infty_{\mathrm{alt}}(S^n, \mu)^G, d_n) \cong H_b^\Cdot(G, \R)$.
+\end{cor}
+
+\begin{eg}
+ Let $\Gamma$ be a lattice in $G = \SL(n, \R)$, say. Let $P < G$ be a parabolic subgroup, e.g.\ the subgroup of upper-triangular matrices. We use $L^\infty_{\mathrm{alt}} ((G/P)^n)^\Gamma$ to compute the bounded cohomology of $\Gamma$, since the restriction of amenable actions to closed subgroups is amenable. We have
+ \[
+ \begin{tikzcd}[row sep=small]
+ 0 \ar[r] & L^\infty(G/P)^\Gamma \ar[r] \ar[d, equals] & L^\infty_{\mathrm{alt}} ((G/P)^2)^\Gamma \ar[r] \ar[d, equals] & L^\infty_{\mathrm{alt}} ((G/P)^3)^\Gamma \ar[r] & \cdots\\
+ & \R \ar[r, "0"] & 0
+ \end{tikzcd}
+ \]
+ So we know that $H^2_b(\Gamma, \R)$ is isometrically isomorphic to the space of cocycles $\mathcal{Z}(L_{\mathrm{alt}}^\infty((G/P)^3)^\Gamma)$. In particular, it is a Banach space.
+\end{eg}
+\printindex
+\end{document}
diff --git a/books/cam/IV_L/topics_in_number_theory.tex b/books/cam/IV_L/topics_in_number_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..52f90f3e3c164e8dbd4bf7a3693935e7a246925f
--- /dev/null
+++ b/books/cam/IV_L/topics_in_number_theory.tex
@@ -0,0 +1,2739 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IV}
+\def\nterm {Lent}
+\def\nyear {2018}
+\def\nlecturer {A.\ J.\ Scholl}
+\def\ncourse {Topics in Number Theory}
+\def\nnotready {}
+
+\input{header}
+\DeclareMathOperator\Br{Br}
+\renewcommand\G{\mathbb{G}}
+\newcommand\A{\mathbb{A}}
+\DeclareMathOperator\Cl{\mathrm{Cl}}
+\newcommand\ab{\mathrm{ab}}
+\newcommand\ur{\mathrm{ur}}
+\newcommand\CM{\mathrm{CM}}
+\newcommand\sprime{\!{\vphantom{\prod}}'}
+\renewcommand\sp{\mathrm{sp}}
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+The ``Langlands programme'' is a far-ranging series of conjectures describing the connections between automorphic forms on the one hand, and algebraic number theory and arithmetic algebraic geometry on the other. In these lectures we will give an introduction to some aspects of this programme.
+
+\subsubsection*{Pre-requisites}
+The course will follow on naturally from the Michaelmas term courses \emph{Algebraic Number Theory} and \emph{Modular Forms and L-Functions}, and knowledge of them will be assumed. Some knowledge of algebraic geometry will be required in places.
+}
+\tableofcontents
+
+\setcounter{section}{-1}
+\section{Introduction}
+In this course, we shall first give an outline of class field theory. We then look at abelian $L$-functions (Hecke, Tate). We then talk about \emph{non-}abelian $L$-functions, and in particular the Weil--Deligne group and local $L$- and $\varepsilon$-factors.
+
+We then talk about local Langlands for $\GL_n$ a bit, and do a bit of global theory and automorphic forms at the end.
+
+The aim is not to prove everything, because that will take 3 courses instead of one, but we are going to make precise definitions and statements of everything.
+
+\section{Class field theory}
+\subsection{Preliminaries}
+Class field theory is the study of abelian extensions of local or global fields. Before we can do class field theory, we must first know \emph{Galois} theory.
+\begin{notation}
+ Let $K$ be a field. We will write $\bar{K}$\index{$\bar{K}$} for a separable closure of $K$, and $\Gamma_K = \Gal(\bar{K}/K)$\index{$\Gamma_K$}. We have
+ \[
+ \Gamma_K = \lim_{L/K\text{ finite separable}} \Gal(L/K),
+ \]
+ which is a \term{profinite group}. The associated topology is the \term{Krull topology}.
+\end{notation}
+
+Galois theory tells us
+\begin{thm}[Galois theory]
+ There are bijections
+ \begin{align*}
+ \left\{\parbox{3cm}{\centering closed subgroups of $\Gamma_K$}\right\} &\longleftrightarrow \left\{\parbox{3cm}{\centering subfields $K \subseteq L \subseteq \bar{K}$}\right\}\\
+ \left\{\parbox{3cm}{\centering open subgroups of $\Gamma_K$}\right\} &\longleftrightarrow \left\{\parbox{3cm}{\centering finite subfields $K \subseteq L \subseteq \bar{K}$}\right\}
+ \end{align*}
+\end{thm}
+
+\begin{notation}
+ We write \term{$K^{\ab}$} for the maximal abelian subextension of $\bar{K}$, and then
+ \[
+ \Gal(K^{\ab}/K) = \Gamma_K^{\ab} = \frac{\Gamma_K}{\overline{[\Gamma_K, \Gamma_K]}}.
+ \]
+\end{notation}
+It is crucial to note that while $\bar{K}$ is unique, it is only unique up to non-canonical isomorphism. Indeed, it has many automorphisms, given by elements of $\Gal(\bar{K}/K)$. Thus, $\Gamma_K$ is well-defined up to conjugation only. On the other hand, the abelianization $\Gamma_K^{\ab}$ \emph{is} well-defined. This will be important in later naturality statements.
+
+\begin{defi}[Non-Archimedean local field]\index{non-Archimedean local field}\index{local field}
+ A \emph{non-Archimedean local field} is a finite extension of $\Q_p$ or $\F_p((t))$.
+\end{defi}
+We can also define Archimedean local fields, but they are slightly less interesting.
+\begin{defi}[Archimedean local field]\index{Archimedean local field}
+ An Archimedean local field is a field that is $\R$ or $\C$.
+\end{defi}
+
+If $F$ is a non-Archimedean local field, then it has a canonical normalized valuation
+\[
+ v = v_F: F^\times \twoheadrightarrow \Z.
+\]
+\begin{defi}[Valuation ring]\index{valuation ring}
+ The \emph{valuation ring} of a non-Archimedean local field $F$ is
+ \[
+ \mathcal{O} = \mathcal{O}_F = \{x \in F : v(x) \geq 0\}.
+ \]
+ Any element $\pi = \pi_F \in \mathcal{O}_F$ with $v(\pi) = 1$ is called a \term{uniformizer}. This generates the maximal ideal
+ \[
+ \mathfrak{m} = \mathfrak{m}_F = \{x \in \mathcal{O}_F: v(x) \geq 1\}.
+ \]
+\end{defi}
+
+\begin{defi}[Residue field]\index{residue field}
+ The \emph{residue field} of a non-Archimedean local field $F$ is\index{$k_F$}
+ \[
+ k = k_F = \mathcal{O}_F/\mathfrak{m}_F.
+ \]
+ This is a finite field of order $q = p^r$.
+\end{defi}
+
+A particularly well-understood subfield of $F^{\ab}$ is the \term{maximal unramified extension} \term{$F^{\ur}$}. We have
+\[
+ \Gal(F^{\ur}/F) = \Gal(\bar{k}/k) = \hat{\Z} = \lim_{n \geq 1} \Z/n\Z,
+\]
+and this is completely determined by the behaviour of the residue field. The rest of $\Gamma_F$ is called the \emph{inertia group}.
+\begin{defi}[Inertia group]\index{inertia group}\index{$I_F$}
+ The \emph{inertia group} $I_F$ is defined to be
+ \[
+ I_F = \Gal(\bar{F}/F^{\ur}) \subseteq \Gamma_F.
+ \]
+\end{defi}
+We also define
+\begin{defi}[Wild inertia group]\index{wild inertia group}\index{$P_F$}
+ The \emph{wild inertia group} $P_F$ is the maximal pro-$p$-subgroup of $I_F$.
+\end{defi}
+
+Returning to the maximal unramified extension, note that saying $\Gal(\bar{k}/k) \cong \hat{\Z}$ requires picking an isomorphism, and this is equivalent to picking an element of $\hat{\Z}$ to be the ``$1$''. Naively, we might pick the following:
+\begin{defi}[Arithmetic Frobenius]\index{arithmetic Frobenius}
+ The \emph{arithmetic Frobenius} $\varphi_q \in \Gal(\bar{k}/k)$ (where $|k| = q$) is defined to be
+ \[
+ \varphi_q(x) = x^q.
+ \]
+\end{defi}
+
+Identifying this with $1 \in \hat{\Z}$ leads to infinite confusion, and we shall not do so. Instead, we define
+\begin{defi}[Geometric Frobenius]\index{geometric Frobenius}
+ The \emph{geometric Frobenius} is\index{$\Frob_q$}
+ \[
+ \Frob_q = \varphi_q^{-1} \in \Gal(\bar{k}/k).
+ \]
+\end{defi}
+We shall identify $\Gal(\bar{k}/k) \cong \hat{\Z}$ by setting the \emph{geometric} Frobenius to be $1$.
+
+The reason this is called the geometric Frobenius is that if we have a scheme over a finite field $k$, then there are two ways the Frobenius can act on it --- either as a Galois action, or as a pullback along the morphism $(-)^q: k \to k$. The latter corresponds to the geometric Frobenius.
+
+We now turn to understand the inertia groups. The point of introducing the wild inertia group is to single out the ``$p$-phenomena'', which we would like to avoid. To understand $I_F$ better, let $n$ be a natural number prime to $p$. As usual, we write\index{$\mu_n(\bar{k})$}
+\[
+ \mu_n(\bar{k}) = \{\zeta \in \bar{k}: \zeta^n = 1\}.
+\]
+We also pick an $n$th root of $\pi$ in $\bar{F}$, say $\pi_n$. By definition, this has $\pi_n^n = \pi$.
+\begin{defi}[Tame mod $n$ character]\index{tame character}
+ The \emph{tame mod $n$ character} is the map $t(n): I_F = \Gal(\bar{F}/F^{ur}) \to \mu_n(\bar{k})$ given by
+ \[
+ \gamma \mapsto \gamma(\pi_n)/\pi_n \pmod{\mathfrak{m}}.
+ \]
+\end{defi}
+Note that since $\gamma$ fixes $\pi = \pi_n^n$, we indeed have
+\[
+ \left(\frac{\gamma(\pi_n)}{\pi_n}\right)^{n} = \frac{\gamma(\pi_n^n)}{\pi_n^n} = 1.
+\]
+Moreover, this doesn't depend on the choice of $\pi_n$. Any other choice differs by an $n$th root of unity, but the $n$th root of unity lies in $F^{ur}$ since $n$ is prime to $p$. So $\gamma$ fixes it and so it cancels out in the fraction. For the same reason, if $\gamma$ moves $\pi_n$ at all, then this is visible down in $\bar{k}$, since $\gamma$ would have multiplied $\pi_n$ by an $n$th root of unity, and these $n$th roots are present in $\bar{k}$.
+
+Now that everything is canonically well-defined, we can take the limit over all $n$ to obtain a map
+\[
+ \hat{t}: I_F \to \lim_{(n, p) = 1} \mu_n(\bar{k}) = \prod_{\ell \not= p} \lim_{m \geq 1} \mu_{\ell^m}(\bar{k}) =: \prod_{\ell \not= p} \Z_{\ell}(1)(\bar{k}).
+\]
+This \term{$\Z_{\ell}(1)(\bar{k})$} is the \term{Tate module} of $\bar{k}^{\times}$. This is isomorphic to $\Z_{\ell}$, but not canonically.
+
+\begin{thm}
+ $\ker \hat{t} = P_F$.
+\end{thm}
+Thus, it follows that the \term{maximal tamely ramified extension} of $F$, i.e.\ the fixed field of $P_F$, is
+\[
+ \bigcup_{(n, p) = 1} F^{ur} (\sqrt[n]{\pi}).
+\]
+
+Note that $t(n)$ extends to a map $\Gamma_F \to \mu_n$ given by the same formula, but this now depends on the choice of $\pi_n$, and further, it is not a homomorphism, because
+\[
+ t(n)(\gamma \delta) = \frac{\gamma\delta(\pi_n)}{\pi_n} = \frac{\gamma(\pi_n)}{\pi_n} \gamma \left(\frac{\delta(\pi_n)}{\pi_n}\right) = t(n)(\gamma) \cdot \gamma(t(n)(\delta)).
+\]
+So this formula just says that $t(n)$ is a $1$-cocycle. Of course, picking another $\pi_n$ will modify $t(n)$ by a coboundary.
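+
+To make the last sentence explicit: if $\pi_n' = \zeta \pi_n$ with $\zeta \in \mu_n \subseteq F^{ur}$ is another choice of $n$th root, then
+\[
+ t'(n)(\gamma) = \frac{\gamma(\zeta \pi_n)}{\zeta \pi_n} = \frac{\gamma(\zeta)}{\zeta} \cdot \frac{\gamma(\pi_n)}{\pi_n} = \frac{\gamma(\zeta)}{\zeta}\, t(n)(\gamma),
+\]
+and $\gamma \mapsto \gamma(\zeta)/\zeta$ is precisely the coboundary of $\zeta$.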
+
+\subsection{Local class field theory}
+Local class field theory is a (collection of) theorems that describe abelian extensions of a local field. The key takeaway is that finite abelian extensions of $F$ correspond to open finite index subgroups of $F^\times$, but the theorem says a bit more than that:
+\begin{thm}[Local class field theory]\index{local class field theory}\leavevmode
+ \begin{enumerate}
+ \item Let $F$ be a local field. Then there is a continuous homomorphism, the \term{local Artin map}\index{$\Art_F$}\index{Artin map!local}
+ \[
+ \Art_F: F^\times \to \Gamma_F^{\ab}
+ \]
+ with dense image characterized by the properties
+ \begin{enumerate}
+ \item The following diagram commutes:
+ \[
+ \begin{tikzcd}
+ F^{\times} \ar[d, two heads, "v_F"] \ar[r, "\Art_F"] & \Gamma_F^{\ab} \ar[r, two heads] & \Gamma_F/I_F \ar[d, "\sim"]\\
+ \Z \ar[rr, hook] & & \hat{\Z}
+ \end{tikzcd}
+ \]
+ \item If $F'/F$ is finite, then the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ (F')^\times \ar[r, "\Art_{F'}"] \ar[d, "N_{F'/F}"] & \Gamma_{F'}^{\ab} = \Gal(F'^{\ab}/F') \ar[d, "\text{restriction}"]\\
+ F^\times \ar[r, "\Art_F"] & \Gamma_F^{\ab} = \Gal(F^{\ab}/F)
+ \end{tikzcd}
+ \]
+ \end{enumerate}
+ \item Moreover, the \term{existence theorem} says $\Art_F^{-1}$ induces a bijection
+ \[
+ \left\{\parbox{4cm}{\centering open finite index\\subgroups of $F^\times$}\right\} \longleftrightarrow \left\{\parbox{4cm}{\centering open subgroups of $\Gamma_F^{\ab}$}\vphantom{\parbox{4cm}{open finite index\\subgroups of $F^\times$}}\right\}
+ \]
+ Of course, open subgroups of $\Gamma_F^{\ab}$ further correspond to finite abelian extensions of $F$.
+
+ \item Further, $\Art_F$ induces an isomorphism
+ \[
+ \mathcal{O}_F^\times \overset{\sim}{\to} \im(I_F \to \Gamma_F^{\ab})
+ \]
+ and this maps $(1 + \pi \mathcal{O}_F)^\times$ to the image of $P_F$. Of course, the quotient $\mathcal{O}_F^\times/(1 + \pi \mathcal{O}_F)^\times \cong k^\times = \mu_\infty(k)$.
+
+ \item Finally, this is functorial, namely if we have an isomorphism $\alpha: F \overset{\sim}{\to} F'$ and extend it to $\bar{\alpha} :\bar{F} \overset{\sim}{\to} \bar{F}'$, then this induces isomorphisms between the Galois groups $\alpha_*: \Gamma_F \overset{\sim}{\to} \Gamma_{F'}$ (up to conjugacy), and $\alpha_*^{\ab} \circ \Art_F = \Art_{F'} \circ \alpha_*^{\ab}$.\fakeqed
+ \end{enumerate}
+\end{thm}
+
+On the level of finite Galois extensions $E/F$, we can rephrase the first part of the theorem as giving a map
+\[
+ \Art_{E/F}: \frac{F^\times}{N_{E/F}(E^\times)} \to \Gal(E/F)^{\ab}
+\]
+which is now an isomorphism (since a dense subgroup of a discrete group is the whole thing!).
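+
+For example (a standard computation, not proved in the lectures): if $E/F$ is unramified of degree $n$, then the norm map is surjective on units, so $N_{E/F}(E^\times) = \pi^{n\Z} \cdot \mathcal{O}_F^\times$ and
+\[
+ \Art_{E/F}\colon \frac{F^\times}{N_{E/F}(E^\times)} \cong \Z/n\Z \overset{\sim}{\to} \Gal(E/F),
+\]
+sending the class of $\pi$ to the generator of $\Gal(E/F)$ picked out by the normalization in part (a) of the theorem.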
+
+We can write down these maps explicitly in certain special cases. We will not justify the following example:
+\begin{eg}
+ If $F = \Q_p$, then
+ \[
+ F^{\ab} = \Q_p(\mu_\infty) = \bigcup \Q_p(\mu_n) = \Q_p^{ur} (\mu_{p^\infty}).
+ \]
+ Moreover, if we write $x \in \Q_p^\times$ as $p^n y$ with $y \in \Z_p^\times$, then
+ \[
+ \Art_{\Q_p}(x)|_{\Q_p^{ur}} = \Frob_p^n,\quad \Art_{\Q_p}(x) |_{\Q_p(\mu_{p^\infty})} = (\zeta_{p^n} \mapsto \zeta_{p^n}^{y \text{ mod }p^n}).
+ \]
+ If we had the arithmetic Frobenius instead, then we would have a $-y$ in the power there, which is less pleasant.
+\end{eg}
+
+The cases of the Archimedean local fields are easy to write down and prove directly!
+\begin{eg}
+ If $F = \C$, then $\Gamma_F = \Gamma_F^{ab} = 1$ is trivial, and the Artin map is similarly trivial. There are no non-trivial open finite index subgroups of $\C^\times$, just as there are no non-trivial open subgroups of the trivial group.
+\end{eg}
+
+\begin{eg}
+ If $F = \R$, then $\bar{\R} = \C$ and $\Gamma_F = \Gamma_F^{ab} = \Z/2\Z = \{\pm 1\}$. The Artin map is given by the sign map. The unique open finite index subgroup of $\R^\times$ is $\R_{>0}^\times$, and this corresponds to the finite Galois extension $\C/\R$.
+\end{eg}
+
+As stated in the theorem, the Artin map has dense image, but is not surjective (in general). To fix this problem, it is convenient to introduce the \emph{Weil group}.
+\begin{defi}[Weil group]\index{Weil group}
+ Let $F$ be a non-Archimedean local field. Then the \emph{Weil group} of $F$ is the topological group $W_F$ defined as follows:
+ \begin{itemize}
+ \item As a group, it is
+ \[
+ W_F = \{\gamma \in \Gamma_F \mid \gamma|_{F^{ur}} = \Frob_q^n\text{ for some }n \in \Z\}.
+ \]
+ Recall that $\Gal(F^{ur}/F) = \hat{\Z}$, and we are requiring $\gamma|_{F^{ur}}$ to be in $\Z$. In particular, $I_F \subseteq W_F$.
+ \item The topology is defined by the property that $I_F$ is an \emph{open} subgroup with the profinite topology. Equivalently, $W_F$ is a fiber product of topological groups
+ \[
+ \begin{tikzcd}
+ W_F \ar[r, hook] \ar[d, two heads] & \Gamma_F \ar[d, two heads]\\
+ \Z \ar[r, hook] & \hat{\Z}
+ \end{tikzcd}
+ \]
+ where $\Z$ has the discrete topology.
+ \end{itemize}
+\end{defi}
+Note that $W_F$ is not profinite. It is totally disconnected but not compact.
+
+This seems like a slightly artificial definition, but this is cooked up precisely so that
+\begin{prop}
+ $\Art_F$ induces an \emph{isomorphism} of topological groups
+ \[
+ \Art_F^W: F^\times \to W_F^{\ab}.
+ \]
+ This maps $\mathcal{O}_F^\times$ isomorphically onto the inertia subgroup of $\Gamma_F^{\ab}$.
+\end{prop}
+
+In the case of Archimedean local fields, we make the following definitions. They will seem rather ad hoc, but we will provide some justification later.
+\begin{itemize}
+ \item The Weil group of $\C$ is defined to be $W_\C = \C^\times$, and the Artin map $\Art^W_\C$ is defined to be the identity.
+ \item The Weil group of $\R$ is defined to be the non-abelian group
+ \[
+ W_\R = \bra \C^\times , \sigma \mid \sigma^2 = -1 \in \C^\times, \sigma z \sigma^{-1} = \bar{z}\text{ for all }z \in \C^\times\ket.
+ \]
+ This is a non-split extension of $\Gamma_\R$ by $\C^\times$,
+ \[
+ 1 \to \C^\times \to W_\R \to \Gamma_\R \to 1,
+ \]
+ where the last map sends $z \mapsto 1$ and $\sigma \mapsto -1$. This is in fact the unique non-split extension of $\Gamma_\R$ by $\C^\times$ where $\Gamma_\R$ acts on $\C^\times$ in the natural way, i.e.\ by complex conjugation.
+
+ The map $\Art_\R^W$ is better described by its inverse, which maps
+ \begin{align*}
+ (\Art_\R^W)^{-1}: W_\R^{\ab} &\overset{\sim}{\longrightarrow} \R^\times\\
+ z &\longmapsto z \bar{z}\\
+ \sigma &\longmapsto -1
+ \end{align*}
+\end{itemize}
+
+To understand these definitions, we need the notion of the relative Weil group.
+\begin{defi}[Relative Weil group]\index{relative Weil group}
+ Let $F$ be a non-Archimedean local field, and $E/F$ Galois but not necessarily finite. We define
+ \[
+ W_{E/F} = \{\gamma \in \Gal(E^{\ab}/F) : \gamma|_{F^{ur}} = \Frob_q^n, n \in \Z\} = \frac{W_F}{\overline{[W_E, W_E]}},
+ \]
+ with the quotient topology.
+\end{defi}
+
+Thus $W_{\bar{F}/F} = W_F$, while $W_{F/F} = W_F^{\ab} = F^\times$ by local class field theory.
+
+Now if $E/F$ is a finite extension, then we have an exact sequence of Galois groups
+\[
+ \begin{tikzcd}
+ 1 \ar[r]& \Gal(E^{\ab}/E) \ar[r] & \Gal(E^{\ab}/F) \ar[r] & \Gal(E/F) \ar[r] \ar[d, equals]& 1\\
+ 1 \ar[r] & W_E^{\ab} \ar[r] \ar[u, hook] & W_{E/F} \ar[r] \ar[u, hook] & \Gal(E/F) \ar[r] & 1.
+ \end{tikzcd}
+\]
+By the Artin map, $W_E^{\ab} = E^\times$. So the relative Weil group is an extension of $\Gal(E/F)$ by $E^\times$. In the case of non-Archimedean fields, we have
+\[
+ \lim_E E^\times = \{1\},
+\]
+where the field extensions are joined by the norm map. So $\bar{F}^\times$ is invisible in $W_F = \lim W_{E/F}$. The weirdness above comes from the fact that the separable closures of $\R$ and $\C$ are finite extensions.% Observe that the extension above is characterized by an element of $H^2(\Gal(E/F), E^\times) \simeq \frac{1}{n}\Z/\Z$, and it is in fact $\frac{1}{n}$. We see that this is true also in the case of $\R$.
+
+We are, of course, not going to prove local class field theory in this course. However, we can say something about the proofs. There are a few ways of proving it:
+\begin{itemize}
+ \item The cohomological method (see Artin--Tate, Cassels--Fr\"ohlich), which only treats the first part, namely the existence of $\Art_K$. We start off with a finite Galois extension $E/F$, and we want to construct an isomorphism
+ \[
+ \Art_{E/F}: F^\times/N_{E/F}(E^\times) \to \Gal(E/F)^{\ab}.
+ \]
+ Writing $G = \Gal(E/F)$, this uses the cohomological interpretation
+ \[
+ F^\times/N_{E/F}(E^\times) = \hat{H}^0(G, E^\times),
+ \]
+ where $\hat{H}$ is the Tate cohomology of finite groups. On the other hand, we have
+ \[
+ G^{\ab} = H_1(G, \Z) = \hat{H}^{-2}(G, \Z).
+ \]
+ The main step is to compute
+ \[
+ H^2(G, E^\times) = \hat{H}^2(G, E^\times) \cong \tfrac{1}{n}\Z/\Z \subseteq \Q/\Z = H^2(\Gamma_F, \bar{F}^\times),
+ \]
+ where $n = [E:F]$. The final group $H^2(\Gamma_F, \bar{F}^\times)$ is the \term{Brauer group} $\Br(F)$, and the subgroup is just the kernel of $\Br(F) \to \Br(E)$.
+
+ Once we have done this, we then define $\Art_{E/F}$ to be the cup product with the generator of $\hat{H}^2(G, E^\times)$, and this maps $\hat{H}^{-2}(G, \Z) \to \hat{H}^0(G, E^\times)$. The fact that this map is an isomorphism is rather formal.
+
+ The advantage of this method is that it generalizes to duality theorems about $H^*(G, M)$ for arbitrary $M$, but this map is not at all explicit, and is very much tied to abelian extensions.
+ \item Formal group methods: We know that the maximal abelian extension of $\Q_p$ is obtained in two steps --- we can write
+ \[
+ \Q_p^{\ab} = \Q_p^{ur}(\mu_{p^\infty}) = \Q_p^{ur} (\text{torsion points in }\hat{\G}_m),
+ \]
+ where $\hat{\G}_m$ is the formal multiplicative group, which we can think of as $(1 + \mathfrak{m}_{\bar{\Q}_p})^\times$. This generalizes to any $F/\Q_p$ --- we have
+ \[
+ F^{\ab} = F^{ur}(\text{torsion points in }\hat{\G}_\pi),
+ \]
+ where $\hat{\G}_\pi$ is the ``Lubin--Tate formal group''. This is described in Iwasawa's book, and also in a paper of Yoshida's. The original paper by Lubin and Tate is also very readable.
+
+ The advantage of this is that it is very explicit, and when done correctly, gives both the existence of the Artin map and the existence theorem. This also has a natural generalization to non-abelian extensions. However, it does not give duality theorems.
+ \item Neukirch's method: Suppose $E/F$ is abelian and finite. If $g \in \Gal(E/F)$, we want to construct $\Art_{E/F}^{-1}(g) \in F^\times/N_{E/F}(E^\times)$. The point is that there is only one possibility, because $\bra g\ket$ is a cyclic subgroup of $\Gal(E/F)$, and corresponds to a cyclic extension $E/F'$ with $\Gal(E/F') = \bra g\ket$. We have the following lemma:
+ \begin{lemma}
+ There is a finite $K/F'$ such that $K \cap E = F'$, so $\Gal(KE/K) \cong \Gal(E/F') = \bra g\ket$. Moreover, $KE/K$ is unramified.
+ \end{lemma}
+ Let $g' \in \Gal(KE/K)$ be such that $g'|_E = g$, and suppose $g' = \Frob_{KE/K}^a$. If local class field theory is true, then we have to have
+ \[
+ \Art_{KE/K}^{-1}(g') = \pi_K^a\pmod{N_{KE/K}(KE^\times)}.
+ \]
+ Then by our compatibility conditions, this implies
+ \[
+ \Art_{E/F}^{-1}(g) = N_{K/F}(\pi_K^a) \pmod {N_{E/F}(E^\times)}.
+ \]
+ The problem is then to show that this does not depend on the choices, and then show that it is a homomorphism. These are in fact extremely complicated. Note that everything so far is just Galois theory. Solving these two problems is where all the number theory comes in.
+ \item When class field theory was first done, \emph{global} class field theory was done first, and the local case was deduced from it. No one does that nowadays, since we now have purely local proofs. However, in the generalization to the Langlands programme, the proofs we have so far all start with global theorems and then deduce local results.
+\end{itemize}
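To illustrate the formal group method in the simplest case: for $F = \Q_p$ with uniformizer $\pi = p$, the Lubin--Tate formal group can be taken to be $\hat{\G}_m$ itself, with multiplication by $p$ given by the usual formula. A sketch:

```latex
\[
  [p](X) = (1 + X)^p - 1.
\]
% The p^n-torsion points are the solutions of [p^n](X) = (1+X)^{p^n} - 1 = 0,
% i.e. X = \zeta - 1 with \zeta a p^n-th root of unity. Hence
\[
  \Q_p^{ur}(\text{torsion points in } \hat{\G}_\pi)
    = \Q_p^{ur}(\mu_{p^\infty}) = \Q_p^{\ab},
\]
% recovering the description of Q_p^ab above.
```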
+
+\subsection{Global class field theory}
+We now proceed to discuss global class field theory.
+\begin{defi}[Global field]\index{global field}
+ A \emph{global field} is a number field or $k(C)$ for a smooth projective absolutely irreducible curve $C/\F_q$, i.e.\ a finite extension of $\F_q(t)$.
+\end{defi}
+A lot of what we can do can be simultaneously done for both types of global fields, but we are mostly only interested in the case of number fields, and our discussions will mostly focus on those.
+
+\begin{defi}[Place]\index{place}
+ Let $K$ be a global field. Then a \emph{place} is a valuation on $K$. If $K$ is a number field, we say a valuation $v$ is a \term{finite place} if it is the valuation at a prime $\mathfrak{p} \lhd \mathcal{O}_K$. A valuation $v$ is an \term{infinite place} if it comes from a complex or real embedding of $K$. We write $\Sigma_K$ for the set of places of $K$, and \term{$\Sigma_K^\infty$} and \term{$\Sigma_{K, \infty}$} for the sets of finite and infinite places respectively. We also write \term{$v \nmid \infty$} if $v$ is a finite place, and \term{$v \mid \infty$} otherwise.
+
+ If $K$ is a function field, then all places are finite, and these correspond to closed points of the curve.
+\end{defi}
+
+If $v \in \Sigma_K$ is a place, then there is a completion $K \hookrightarrow K_v$. If $v$ is infinite, then $K_v$ is $\R$ or $\C$, i.e.\ an Archimedean local field. Otherwise, $K_v$ is a non-Archimedean local field.
+
+\begin{eg}
+ If $K = \Q$, then there is one infinite place, which we write as $\infty$, given by the embedding $\Q \hookrightarrow \R$. If $v = p$, then we get the embedding $\Q \hookrightarrow \Q_p$ into the $p$-adic completion.
+\end{eg}
+
+\begin{notation}
+ If $v$ is a finite place, we write $\mathcal{O}_v \subseteq K_v$ for the valuation ring of the completion.
+\end{notation}
+
+Any local field has a canonically normalized valuation, but there is no canonical absolute value. It is useful to fix the absolute value of our local fields. For doing class field theory, the right way to put an absolute value on $K_v$ (and hence $K$) is by
+\[
+ |x|_v = q_v^{-v(x)},
+\]
+where
+\[
+ q_v = \left|\frac{\mathcal{O}_v}{\pi_v \mathcal{O}_v}\right|
+\]
+is the cardinality of the residue field at $v$. For example, if $K = \Q$ and $v = p$, then $q_v = p$, and $|p|_v = \frac{1}{p}$.
+
+In the Archimedean case, if $K_v$ is real, then we set $|x|_v$ to be the usual absolute value; if $K_v$ is complex, then we take $|x|_v = x\bar{x} = |x|^2$.
+
+The reason for choosing these normalizations is that we have the following product formula:
+\begin{prop}[Product formula]\index{product formula}
+ If $x \in K^\times$, then
+ \[
+ \prod_{v \in \Sigma_K} |x|_v = 1.
+ \]
+\end{prop}
+The proof is not difficult. First observe that it is true for $\Q$, and then show that this formula is ``stable under finite extensions'', which extends the result to all finite extensions of $\Q$.
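For $K = \Q$ the product formula is a finite computation, since only $|\ph|_\infty$ and the finitely many primes dividing the numerator or denominator contribute. A quick sketch in code (the helper names are ours):

```python
from fractions import Fraction

def v_p(x: Fraction, p: int) -> int:
    """The p-adic valuation of a nonzero rational x."""
    def val(n: int) -> int:
        n, v = abs(n), 0
        while n % p == 0:
            n //= p
            v += 1
        return v
    return val(x.numerator) - val(x.denominator)

def prime_factors(n: int) -> set:
    """Distinct prime factors of a nonzero integer."""
    n, p, out = abs(n), 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def product_of_absolute_values(x: Fraction) -> Fraction:
    """prod_v |x|_v over all places v of Q; all but finitely many factors are 1."""
    result = abs(x)  # the Archimedean absolute value |x|_infty
    for p in prime_factors(x.numerator * x.denominator):
        result *= Fraction(p) ** (-v_p(x, p))  # |x|_p = p^{-v_p(x)}
    return result
```

For example, $x = -\tfrac{12}{5}$ has $|x|_\infty = \tfrac{12}{5}$, $|x|_2 = \tfrac14$, $|x|_3 = \tfrac13$, $|x|_5 = 5$, whose product is $1$.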
+
+Global class field theory is best stated in terms of adeles and ideles. We make the following definition:
+\begin{defi}[Adele]\index{adele}\index{$\mathbb{A}_K$}
+ The \emph{adeles} is defined to be the restricted product
+ \[
+ \A_K = \prod_v\sprime K_v = \Big\{ (x_v)_{v \in \Sigma_K} \in \prod_v K_v : x_v \in \mathcal{O}_v \text{ for all but finitely many $v \in \Sigma_K^\infty$}\Big\}.
+ \]
+ We can write this as $K_\infty \times \hat{K}$ or $\A_{K, \infty} \times \A_K^\infty$, where
+ \[
+ K_\infty = \A_{K, \infty} = K\otimes_\Q \R = \R^{r_1} \times \C^{r_2}
+ \]
+ consists of the product over the infinite places, and
+ \[
+ \hat{K} = \A_K^\infty = \prod_{v \nmid \infty}\sprime K_v = \bigcup_{S \subseteq \Sigma_K^\infty\text{ finite}} \prod_{v \in S} K_v \times \prod_{v \in \Sigma_K^\infty \setminus S} \mathcal{O}_v.
+ \]
+ This contains $\hat{\mathcal{O}}_K = \prod_{v \nmid \infty} \mathcal{O}_v$. In the case of a number field, $\hat{\mathcal{O}}_K$ is the profinite completion of $\mathcal{O}_K$. More precisely, if $K$ is a number field then
+ \[
+ \hat{\mathcal{O}}_K = \lim_{\mathfrak{a}} \mathcal{O}_K/\mathfrak{a} = \lim_N \mathcal{O}_K/N\mathcal{O}_K = \mathcal{O}_K \otimes_\Z \hat{\Z},
+ \]
+ where the last equality follows from the fact that $\mathcal{O}_K$ is a finite $\Z$-module.
+\end{defi}
+%When adeles was first invented, they used the normal product instead of the restricted product, and it turned out to be a terrible idea, since taking the product of infinitely many copies of $\Z$'s lying inside each $K_V$ gives you something ghastly.
+
+\begin{defi}[Idele]\index{idele}\index{$J_K$}\index{$\mathbb{A}_K^\times$}
+ The \emph{ideles} is the restricted product
+ \[
+ J_K = \A_K^\times = \prod_v\sprime K_v^\times = \Big\{(x_v)_v \in \prod K_v^\times: x_v \in \mathcal{O}_v^\times\text{ for almost all } v\Big\}.
+ \]
+\end{defi}
+These objects come with natural topologies. On $\A_K$, we take $K_\infty \times \hat{\mathcal{O}}_K$ to be an open subgroup with the product topology. Once we have done this, there is a unique structure of a topological ring for which this holds. On $J_K$, we take $K_\infty^\times \times \hat{\mathcal{O}}_K^\times$ to be open with the product topology. Note that this is not the subspace topology under the inclusion $J_K \hookrightarrow \A_K$. Instead, it is induced by the inclusion
+\begin{align*}
+ J_K &\hookrightarrow \A_K \times \A_K\\
+ x &\mapsto (x, x^{-1}).
+\end{align*}
+
+It is a basic fact that $K^\times \subseteq J_K$ is a \emph{discrete} subgroup.
+\begin{defi}[Idele class group]
+ The \term{idele class group} is then
+ \[
+ C_K = J_K/K^\times.
+ \]
+\end{defi}
+The idele class group plays an important role in global class field theory. Note that $J_K$ comes with a natural absolute value
+\begin{align*}
+ |\ph|_\A: J_K &\to \R_{> 0}^\times\\
+ (x_v) &\mapsto \prod_{v \in \Sigma_K} |x_v|_v.
+\end{align*}
+The product formula implies that $|K^\times|_{\A} = \{1\}$. So this is in fact a map $C_K \to \R_{>0}^\times$. Moreover, we have
+
+\begin{thm}
+ The map $|\ph|_\A: C_K \to \R_{>0}^\times$ has compact kernel.
+\end{thm}
+This seemingly innocent theorem is actually quite powerful. For example, we will later construct a continuous surjection $C_K \to \Cl(K)$ to the ideal class group of $K$. In particular, this implies $\Cl(K)$ is finite! In fact, the theorem is equivalent to the finiteness of the class group and Dirichlet's unit theorem.
+
+\begin{eg}
+ In the case $K = \Q$, we have, by definition,
+ \[
+ J_{\Q} = \R^\times \times \prod_p\sprime \Q_p^\times.
+ \]
+ Suppose we have an idele $x = (x_v)$. It is easy to see that there exists a unique rational number $y \in \Q^\times$ such that $\sgn(y) = \sgn(x_\infty)$ and $v_p(y) = v_p(x_p)$ for all primes $p$. So we have
+ \[
+ J_\Q = \Q^\times \times \left(\R^\times_{>0} \times \prod_p \Z_p^\times\right).
+ \]
+ Here we think of $\Q^\times$ as being embedded diagonally into $J_\Q$, and as we have previously mentioned, $\Q^\times$ is discrete. From this description, we can read out a description of the idele class group
+ \[
+ C_\Q = \R_{>0}^\times \times \hat{\Z}^\times,
+ \]
+ and $\hat{\Z}^\times$ is the kernel of $|\ph|_{\A}$.
+\end{eg}
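The rational number $y$ in this decomposition can be written down directly from the data of the idele: its sign at $\infty$ and its valuations at the finitely many $p$ with $x_p \notin \Z_p^\times$. A sketch (the representation of an idele by this finite data, and the function name, are ours):

```python
from fractions import Fraction

def rational_part(sign: int, valuations: dict) -> Fraction:
    """Return the unique y in Q^x with sgn(y) = sign, v_p(y) = valuations[p]
    for each listed prime p, and v_p(y) = 0 for all other primes."""
    y = Fraction(sign)
    for p, v in valuations.items():
        y *= Fraction(p) ** v
    return y
```

For an idele with $x_\infty < 0$, $v_2(x_2) = 3$ and $v_5(x_5) = -1$, this gives $y = -8/5$; dividing the idele by $y$ then lands in $\R^\times_{>0} \times \prod_p \Z_p^\times$.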
+
+From the decomposition above, we see that $C_\Q$ has a maximal connected subgroup $\R^{\times}_{>0}$. In fact, this is the intersection of all open subgroups containing $1$. For a general $K$, we write \term{$C_K^0$} for the maximal connected subgroup, and then
+\[
+ \pi_0(C_K) = C_K/C_K^0.
+\]
+In the case of $K = \Q$, we can naturally identify
+\[
+ \pi_0(C_\Q) = \hat{\Z}^\times = \lim_n (\Z/n\Z)^\times = \lim_n \Gal(\Q(\zeta_n)/\Q) = \Gal(\Q(\zeta_\infty)/\Q).
+\]
+The field $\Q(\zeta_\infty)$ is not just any other field. The Kronecker--Weber theorem says this is in fact the maximal abelian extension of $\Q$. Global class field theory is a generalization of this isomorphism to all fields.
+
+Just like local class field theory, global class field theory involves a certain Artin map. In the local case, we just pulled the maps out of a hat. To construct the global Artin map, we simply put the local Artin maps together.
+
+Let $L/K$ be a finite Galois extension, $v$ a place of $K$, and $w \mid v$ a place of $L$ extending $v$. For finite places, this means the prime corresponding to $w$ divides the prime corresponding to $v$; for infinite places, this means the embedding $w$ is an extension of $v$. We then have the \term{decomposition group}
+\[
+ \Gal(L_w/K_v) \subseteq \Gal(L/K).
+\]
+If $L/K$ is abelian, since any two places lying above $v$ are conjugate, this depends only on $v$. In this case, we can now define the \term{global Artin map}\index{Artin map!global}
+\begin{align*}
+ \Art_{L/K}: J_K &\longrightarrow \Gal(L/K)\\
+ (x_v)_v &\longmapsto \prod_v \Art_{L_w/K_v} (x_v),
+\end{align*}
+ where we pick one $w$ for each $v$. To see this is well-defined, note that if $x_v \in \mathcal{O}_v^\times$ and $L/K$ is unramified at $v \nmid \infty$, then $\Art_{L_w/K_v}(x_v) = 1$. So the product is in fact finite.
+
+By the compatibility of the Artin maps, we can pass to the limit over all extensions $L/K$, and get a continuous map
+\[
+ \Art_K : J_K \to \Gamma_K^{\ab}.
+\]
+\begin{thm}[Artin reciprocity law]\index{Artin reciprocity law}\leavevmode
+ $\Art_K(K^\times) = \{1\}$, so $\Art_K$ induces a map $C_K \to \Gamma_K^{\ab}$. Moreover,
+ \begin{enumerate}
+ \item If $\Char(K) = p > 0$, then $\Art_K$ is injective, and induces an isomorphism $\Art_K: C_K \overset{\sim}{\to} W_K^{\ab}$, where $W_K$ is defined as follows: $K$ is a finite extension of $\F_q(T)$, and we may wlog assume $\bar{\F}_q \cap K = \F_q = k$. Then $W_K$ is defined as the pullback
+ \[
+ \begin{tikzcd}
+ W_K \ar[r, hook] \ar[d] & \Gamma_K = \Gal(\bar{K}/K) \ar[d, "\mathrm{restr}."]\\
+ \Z \ar[r, hook] & \hat{\Z} \cong \Gal(\bar{k}/k)
+ \end{tikzcd}
+ \]
+ \item If $\Char(K) = 0$, we have an isomorphism
+ \[
+ \Art_K: \pi_0(C_K) = \frac{C_K}{C_K^0} \overset{\sim}{\to} \Gamma_K^{\ab}.
+ \]
+ \end{enumerate}
+ Moreover, if $L/K$ is finite, then we have a commutative diagram
+ \[
+ \begin{tikzcd}
+ C_L \ar[d, "N_{L/K}"] \ar[r, "\Art_L"] & \Gamma_L^{\ab} \ar[d, "\mathrm{restr.}"]\\
+ C_K \ar[r, "\Art_K"] & \Gamma_K^{\ab}
+ \end{tikzcd}
+ \]
+ If $L/K$ is in fact Galois, then this induces an isomorphism
+ \[
+ \Art_{L/K}: \frac{J_K}{K^\times N_{L/K}(J_L)} \overset{\sim}{\to} \Gal(L/K)^{\ab}.
+ \]
+ Finally, this is functorial, namely if $\sigma: K \overset{\sim}{\to} K'$ is an isomorphism, then we have a commutative square
+ \[
+ \begin{tikzcd}
+ C_K \ar[r, "\Art_K"] \ar[d, "\sigma"] & \Gamma_K^{\ab} \ar[d]\\
+ C_{K'} \ar[r, "\Art_{K'}"] & \Gamma_{K'}^{\ab}
+ \end{tikzcd}
+ \]
+\end{thm}
+Observe that naturality and functoriality are immediate consequences of the corresponding results for local class field theory.
+
+As a consequence of the isomorphism, we have a correspondence
+\begin{align*}
+ \left\{\parbox{4.2cm}{\centering finite abelian extensions $L/K$}\right\} &\longleftrightarrow \left\{\parbox{4.2cm}{\centering finite index open subgroups of $J_K$ containing $K^\times$}\right\}\\
+ L &\longmapsto \ker(\Art_{L/K}: J_K \to \Gal(L/K))
+\end{align*}
+Note that there exist finite index subgroups that are not open!
+
+Recall that in local class field theory, if we decompose $K_v^\times = \bra \pi\ket \times \mathcal{O}_v^\times$, then the local Artin map sends $\mathcal{O}_v^\times$ to (the image of) the inertia group. Thus an extension $L_w/K_v$ is unramified iff the local Artin map kills $\mathcal{O}_v^\times$. Globally, this tells us
+\begin{prop}
+ If $L/K$ is an abelian extension of global fields, which corresponds to the open subgroup $U \subseteq J_K$ under the Artin map, then $L/K$ is unramified at a finite place $v$ iff $\mathcal{O}_v^\times \subseteq U$.
+\end{prop}
+
+We can extend this to the infinite places if we make the appropriate definitions. Since $\C$ cannot be further extended, there is nothing to say for complex places.
+\begin{defi}[Ramification]\index{ramification!real place}
+ If $v \mid \infty$ is a real place of $K$, and $L/K$ is a finite abelian extension, then we say $v$ is ramified if for some (hence all) places $w$ of $L$ above $v$, $w$ is complex.
+\end{defi}
+The terminology is not completely standard. In this case, Neukirch would say $v$ is inert instead.
+
+With this definition, $L/K$ is unramified at a real place $v$ iff $K_v^\times = \R^\times \subseteq U$. Note that since $U$ is open, it automatically contains $\R^\times_{> 0}$.
+
+We can similarly read off splitting information.
+\begin{prop}
+ If $v$ is finite and unramified, then $v$ splits completely iff $K_v^\times \subseteq U$.
+\end{prop}
+\begin{proof}
+ $v$ splits completely iff $L_w = K_v$ for all $w \mid v$, iff $\Art_{L_w/K_v}(K_v^\times) = \{1\}$.
+\end{proof}
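For $K = \Q$ and $L = \Q(\zeta_N)$, this recovers the classical splitting law: for $p \nmid N$, the residue degree of $p$ is the multiplicative order of $p$ modulo $N$, so $p$ splits completely iff $p \equiv 1 \pmod N$. A quick check of the order computation (the function names are ours):

```python
from math import gcd

def residue_degree(p: int, N: int) -> int:
    """The multiplicative order of p mod N, i.e. the residue degree of the
    unramified prime p in Q(zeta_N)."""
    assert N > 1 and gcd(p, N) == 1
    f, x = 1, p % N
    while x != 1:
        x = (x * p) % N
        f += 1
    return f

def splits_completely(p: int, N: int) -> bool:
    """p splits completely in Q(zeta_N) iff p = 1 mod N."""
    return residue_degree(p, N) == 1
```

For instance, $2$ has order $3$ mod $7$, so $2$ has residue degree $3$ in $\Q(\zeta_7)$, while $29 \equiv 1 \pmod 7$ splits completely.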
+
+\begin{eg}
+ We will use global class field theory to compute all $S_3$ extensions $L/\Q$ which are unramified outside $5$ and $7$.
+
+ If we didn't have global class field theory, then to solve this problem we would have to find all cubics whose discriminants are divisible only by $5$ and $7$, which is a difficult Diophantine problem.
+
+ While $S_3$ is not an abelian group, it is solvable. So we can break our potential extension as a chain
+ \begin{center}
+ \begin{tikzpicture}
+ \node (Q) {$\Q$};
+ \node (K) at (0, 1) {$K$};
+ \node (L) at (0, 2) {$L$};
+ \draw (Q) -- (K) node [pos=0.5, right] {$2 = \bra \sigma\ket$};
+ \draw (K) -- (L) node [pos=0.5, right] {$3$};
+ \end{tikzpicture}
+ \end{center}
+ Since $L/\Q$ is unramified outside $5$ and $7$, we know that $K$ must be one of $\Q(\sqrt{5})$, $\Q(\sqrt{-7})$ and $\Q(\sqrt{-35})$. We then consider each case in turn, and then see what are the possibilities for $L$. We shall only do the case $K = \Q(\sqrt{-7})$ here. If we perform similar computations for the other cases, we find that the other choices of $K$ do not work.
+
+ So fix $K = \Q(\sqrt{-7})$. We want $L/K$ to be cyclic of degree $3$, and $\sigma$ must act non-trivially on $L$ (otherwise we get an abelian extension).
+
+ Thus, by global class field theory, we want to find a subgroup $U \leq J_K$ of index $3$ such that $\mathcal{O}_v^\times \subseteq U$ for all $v \nmid 35$. We also need $\sigma(U) = U$, or else the composite extension would not even be Galois, and $\sigma$ has to act as $-1$ on $J_K/U \cong \Z/3\Z$ to get a non-abelian extension.
+
+ We know $K = \Q(\sqrt{-7})$ has class number $1$, and the units are $\pm 1$. So we know
+ \[
+ \frac{C_K}{C_K^0} = \frac{\hat{\mathcal{O}}_K^\times}{\{\pm 1\}}.
+ \]
+ By assumption, we know $U$ contains $\prod_{v \nmid 35} \mathcal{O}_v^\times$. So we have to look at the places that divide $35$. In $\mathcal{O}_{\Q(\sqrt{-7})}$, the prime $5$ is inert and $7$ is ramified.
+
+ Since $5$ is inert, we know $K_5/\Q_5$ is an unramified quadratic extension. So we can write
+ \[
+ \mathcal{O}_{(5)}^\times = \F_{25}^\times \times (1 + 5\mathcal{O}_{(5)})^\times.
+ \]
+ The second factor is a pro-$5$ group, and so it must be contained in $U$ for the quotient to have order $3$. On $\F_{25}^\times$, $\sigma$ acts as the Frobenius $\sigma(x) = x^5$. Since $\F_{25}^\times$ is cyclic of order $24$, there is a unique index $3$ subgroup, cyclic of order $8$. This gives an index $3$ subgroup $U_5 \subseteq \mathcal{O}_{(5)}^\times$. Moreover, on the order $3$ quotient, $\sigma$ acts by $x \mapsto x^5 = x^{-1}$. Thus, we can take
+ \[
+ U = \prod_{v \not= (5)} \mathcal{O}_v^\times \times U_5,
+ \]
+ and this gives an $S_3$ extension of $\Q$ that is unramified outside $5$ and $7$. It is an exercise to explicitly identify this extension. % do this
+
+ We turn to the prime $7 = -(\sqrt{-7})^2$. Since this is ramified, we have
+ \[
+ \mathcal{O}_{(\sqrt{-7})}^\times = \F_7^\times \times \left(1 + (\sqrt{-7}) \mathcal{O}_{(\sqrt{-7})}\right)^{\times},
+ \]
+ and again the second factor is a pro-$7$ group. Moreover, $\sigma$ acts trivially on $\F_7^\times$, while it would have to act as $-1$ on any order $3$ quotient. So $U$ must contain $\mathcal{O}_{(\sqrt{-7})}^\times$, and what we found above is the unique such extension.
+\end{eg}
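The group theory in this example is easy to verify by hand or by machine: identifying $\F_{25}^\times$ with $\Z/24\Z$ (written additively, via a choice of generator), the Frobenius $x \mapsto x^5$ becomes multiplication by $5$. A sketch of the verification:

```python
# Model F_25^x as Z/24Z (additively); the Frobenius x -> x^5 becomes a -> 5a.
frob = lambda a: (5 * a) % 24

# Subgroups of a cyclic group of order 24 are generated by divisors of 24;
# index 3 means order 8, and the computation confirms there is only one such.
subgroups_order_8 = {frozenset((d * k) % 24 for k in range(24))
                     for d in range(1, 25)
                     if len({(d * k) % 24 for k in range(24)}) == 8}
U5 = next(iter(subgroups_order_8))

frob_preserves_U5 = all(frob(a) in U5 for a in U5)
# On the order-3 quotient, frob acts as inversion: frob(a) + a lies in U5.
frob_is_inversion = all((frob(a) + a) % 24 in U5 for a in range(24))
```

There is exactly one subgroup of order $8$ (the multiples of $3$), Frobenius preserves it, and it acts as $-1$ on the quotient since $5 \equiv -1 \pmod 3$.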
+
+We previously explicitly described $C_\Q$ as $\R_{>0}^\times \times \hat{\Z}^\times$. It would be nice to have a similar description of $C_K$ for an arbitrary $K$. The connected component comes from the infinite places: writing $K_\infty^\times = \prod_{v \mid \infty} K_v^\times$, its identity component is
+\[
+ K_\infty^{\times, 0} = (\R^\times_{> 0})^{r_1} \times (\C^\times)^{r_2},
+\]
+where there are $r_1$ real places and $r_2$ complex ones. Thus, we find that
+\begin{prop}
+ \[
+ C_K/C_K^0 = \frac{\{\pm 1\}^{r_1} \times \hat{K}^\times}{\overline{K^\times}}.
+ \]
+\end{prop}
+
+There is a natural map from the ideles to a more familiar group, called the content homomorphism.
+\begin{defi}[Content homomorphism]\index{content homomorphism}
+ The \emph{content homomorphism} is the map
+ \begin{align*}
+ c: J_K&\to \text{ fractional ideals of }K\\
+ (x_v)_v &\mapsto \prod_{v \nmid \infty}\mathfrak{p}_v^{v(x_v)},
+ \end{align*}
+ where $\mathfrak{p}_v$ is the prime ideal corresponding to $v$. We ignore the infinite places completely.
+\end{defi}
+Observe that $c(K^\times)$ is the set of all principal ideals by definition. Moreover, the kernel of the content map is $K_\infty^\times \times \hat{\mathcal{O}}_K^\times$, by definition. So we have a short exact sequence
+\[
+ 1 \to \frac{\{\pm 1\}^{r_1} \times \hat{\mathcal{O}}_K^\times}{\overline{\mathcal{O}_K^\times}} \to C_K /C_K^0 \to \Cl(K) \to 1.
+\]
+If $K = \Q $ or $\Q(\sqrt{-D})$, then $\overline{\mathcal{O}_K^\times} = \mathcal{O}_K^\times$ is finite, and in particular is closed. But in general, it will not be closed, and taking the closure is indeed needed.
+
+Returning to the case $K = \Q$, our favorite abelian extensions are those of the form $L = \Q(\zeta_N)$ with $N > 1$. This comes with an Artin map
+\[
+ \hat{\Z}^\times \cong C_\Q/C_\Q^0 \to \Gal(L/\Q) \cong (\Z/N\Z)^\times.
+\]
+By local class field theory for $\Q_p$, we see that with our normalizations, this is just the quotient map, whose kernel is
+\[
+ (1 + N \hat{\Z})^\times = \prod_{p \nmid N}\Z_p^\times \times \prod_{p \mid N} (1 + N \Z_p)^\times \subseteq \prod \Z_p^\times = \hat{\Z}^\times.
+\]
+Note that if we used the arithmetic Frobenius, then we would get the \emph{inverse} of the quotient map.
+
+These subgroups of $\hat{\Z}^\times$ are rather special ones. First of all, the subgroups $(1 + N \hat{\Z})^\times$ form a basis of open neighbourhoods of the identity in $\hat{\Z}^\times$. Thus, any open subgroup contains a subgroup of this form. Equivalently, every abelian extension of $\Q$ is contained in $\Q(\zeta_N)$ for some $N$. This is the \term{Kronecker--Weber theorem}.
+
+For a general number field $K$, we want to write down an explicit basis of open neighbourhoods of $1$ in $\pi_0(C_K)$.
+\begin{defi}[Modulus]\index{modulus}
+ A \emph{modulus} is a finite formal sum
+ \[
+ \mathfrak{m} = \sum_{v \in \Sigma_K} m_v \cdot (v)
+ \]
+ of places of $K$, where $m_v \geq 0$ are integers.
+\end{defi}
+
+Given a modulus $\mathfrak{m}$, we define the subgroup
+\[
+ U_\mathfrak{m} = \prod_{v \mid \infty, m_v > 0}\!\! K_v^{\times, 0} \times \prod_{v \mid \infty, m_v = 0} \!\!K_v^\times \times \prod_{v \nmid \infty, m_v > 0} (1 + \mathfrak{p}_v^{m_v} \mathcal{O}_v)^{\times} \times \prod_{v \nmid \infty, m_v = 0} \mathcal{O}_v^\times \subseteq J_K.
+\]
+Then essentially by definition of the topology of $J_K$, any open subgroup of $J_K$ containing $K_\infty^{\times, 0}$ contains some $U_\mathfrak{m}$.
+
+In our previous example, our moduli are all of the form
+\begin{defi}[$\mathfrak{a}(\infty)$]\index{$\mathfrak{a}(\infty)$}
+ If $\mathfrak{a} \lhd \mathcal{O}_K$ is an ideal, we write $\mathfrak{a}(\infty)$ for the modulus with $m_v = v(\mathfrak{a})$ for all $v \nmid \infty$, and $m_v = 1$ for all $v \mid \infty$.
+\end{defi}
+
+If $K = \Q$ and $\mathfrak{m} = (N)(\infty)$, then we simply get $U_\mathfrak{m} = \R_{>0}^\times \times (1 + N \hat{\Z})^\times$, and so
+\[
+ \frac{J_\Q}{\Q^\times U_\mathfrak{m}} = (\Z/N\Z)^\times,
+\]
+corresponding to the abelian extension $\Q(\zeta_N)$.
+
+In general, we define
+\begin{defi}[Ray class field]
+ If $L/K$ is abelian with $\Gal(L/K) \cong J_K/K^\times U_\mathfrak{m}$ under the Artin map, we call $L$ the \term{ray class field} of $K$ modulo $\mathfrak{m}$.
+\end{defi}
+
+\begin{defi}[Conductor]
+ If $L$ corresponds to $U \subseteq J_K$, then $U \supseteq K^\times U_\mathfrak{m}$ for some $\mathfrak{m}$. The minimal such $\mathfrak{m}$ is the \term{conductor} of $L/K$.
+\end{defi}
+
+\subsection{Ideal-theoretic description of global class field theory}
+Originally, class field theory was discovered using ideals, and the ideal-theoretic formulation is at times more convenient.
+
+Let $\mathfrak{m}$ be a modulus, and let $S$ be the set of finite $v$ such that $m_v > 0$. Let $I_S$\index{$I_S$} be the group of fractional ideals prime to $S$. Consider
+\[
+ P_\mathfrak{m} = \{(x) \in I_S: x \equiv 1 \bmod {\mathfrak{m}}\}.
+\]
+To be precise, we require that for all $v \in S$, we have $v(x - 1) \geq m_v$, and for all real infinite places $v$ with $m_v > 0$, we have $\tau(x) > 0$, where $\tau: K \to \R$ is the embedding corresponding to $v$. In other words, $x \in K^\times \cap U_\mathfrak{m}$.
+
+Note that if $\mathfrak{m}$ is trivial, then $I_S/P_\mathfrak{m}$ is the ideal class group. Thus, it makes sense to define
+\begin{defi}[Ray class group]
+ Let $\mathfrak{m}$ be a modulus. The \term{generalized ideal class group}, or \term{ray class group} modulo $\mathfrak{m}$ is
+ \[
+ \Cl_\mathfrak{m}(K) = I_S/P_\mathfrak{m}.
+ \]
+\end{defi}
+One can show that this is always a finite group.
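For $K = \Q$ the ray class groups are classical: for $\mathfrak{m} = (N)(\infty)$ one gets $\Cl_\mathfrak{m}(\Q) \cong (\Z/N\Z)^\times$, while for $\mathfrak{m} = (N)$ (no infinite place) the classes of $x$ and $-x$ coincide, giving $(\Z/N\Z)^\times/\{\pm 1\}$. A small computation of the orders (the function name is ours):

```python
from math import gcd

def ray_class_order(N: int, with_infinity: bool) -> int:
    """Order of Cl_m(Q) for m = (N)(infty) if with_infinity, else m = (N)."""
    units = [a for a in range(1, N + 1) if gcd(a, N) == 1]  # (Z/NZ)^x
    if with_infinity:
        return len(units)  # Cl_{(N)(infty)}(Q) = (Z/NZ)^x
    # without the real place, a and -a represent the same ray class
    return len({frozenset({a % N, (-a) % N}) for a in units})
```

For $N = 8$ this gives orders $4$ and $2$ respectively, matching $\Q(\zeta_8)$ and its maximal real subfield $\Q(\sqrt{2})$.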
+
+\begin{prop}
+ There is a canonical isomorphism
+ \[
+ \frac{J_K}{K^\times U_\mathfrak{m}} \overset{\sim}{\to} \Cl_\mathfrak{m}(K)
+ \]
+ such that for $v \not \in S\cup \Sigma_{K, \infty}$, the composition
+ \[
+ K_v^\times \hookrightarrow J_K \to \Cl_\mathfrak{m}(K)
+ \]
+ sends $x \mapsto \mathfrak{p}_v^{-v(x)}$.
+
+ Thus, in particular, the Galois group $\Gal(L/K)$ of the ray class field modulo $\mathfrak{m}$ is $\Cl_\mathfrak{m}(K)$. Concretely, if $\mathfrak{p} \not \in S$ is a prime ideal, then $[\mathfrak{p}] \in \Cl_{\mathfrak{m}}(K)$ corresponds to $\sigma_\mathfrak{p} \in \Gal (L/K)$, the arithmetic Frobenius. This was Artin's original reciprocity law.
+\end{prop}
+
+When $\mathfrak{m} = 0$, this map is the inverse of the map given by content. However, in general, it is not simply (the inverse of) the prime-to-$S$ content map, even for ideles whose content is prime to $S$. According to Fr\"ohlich, this is the ``\term{fundamental mistake of class field theory}''.
+
+
+\begin{proof}[Proof sketch]
+ Let $J_K(S) \subseteq J_K$ be given by
+ \[
+ J_K(S) = \prod_{v \not\in S \cup \Sigma_{K, \infty}}\sprime K_v^\times.
+ \]
+ Here we do have the inverse of the content map
+ \begin{align*}
+ c^{-1}: J_K(S) &\twoheadrightarrow I_S\\
+ (x_v) &\mapsto \prod \mathfrak{p}_v^{-v(x_v)}
+ \end{align*}
+ We want to extend it to an isomorphism. Observe that
+ \[
+ J_K(S) \cap U_\mathfrak{m} = \prod_{v \not \in S \cup \Sigma_{K, \infty}} \mathcal{O}_v^\times,
+ \]
+ which is precisely the kernel of the map $c^{-1}$. So $c^{-1}$ extends uniquely to a homomorphism
+ \[
+ \frac{J_K(S) U_\mathfrak{m}}{U_\mathfrak{m}} \cong \frac{J_K(S)}{J_K(S) \cap U_\mathfrak{m}} \to I_S.
+ \]
+ We then use that $K^\times J_K(S) U_\mathfrak{m} = J_K$ (weak approximation), and
+ \[
+ K^\times \cap V_\mathfrak{m} = \{x \in K^\times : x \equiv 1 \bmod{\mathfrak{m}}\},
+ \]
+ where
+ \[
+ V_\mathfrak{m} = J_K(S) U_\mathfrak{m} = \{(x_v)\in J_K \mid x_v\text{ lies in the $v$-component of }U_\mathfrak{m}\text{ for all $v$ with } m_v > 0\}.\qedhere
+ \]
+\end{proof}
+
+\section{\texorpdfstring{$L$}{L}-functions}
+Recall that Dirichlet characters $\chi: (\Z/N\Z)^\times \to \C^\times$ give rise to Dirichlet $L$-functions. Explicitly, we extend $\chi$ to a function on $\Z$ by setting $\chi(a) = 0$ if $[a] \not \in (\Z/N\Z)^\times$, and then the Dirichlet $L$-function is defined by
+\[
+ L(\chi, s) = \sum_{n = 1}^\infty \frac{\chi(n)}{n^s} = \prod_p (1 - \chi(p) p^{-s})^{-1}.
+\]
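As a numerical sanity check of the Euler product, one can compare a truncated Dirichlet series with a truncated product for the non-trivial character mod $4$ at $s = 2$ (the truncation bounds are arbitrary choices of ours):

```python
def chi4(n: int) -> int:
    """The non-trivial Dirichlet character mod 4."""
    return (0, 1, 0, -1)[n % 4]

def primes_up_to(bound: int):
    """Sieve of Eratosthenes."""
    sieve = [True] * (bound + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s = 2.0
series = sum(chi4(n) / n ** s for n in range(1, 200001))
euler = 1.0
for p in primes_up_to(200000):
    euler /= 1 - chi4(p) * p ** (-s)
# both approximate L(chi4, 2), which is Catalan's constant 0.9159655941...
```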
+
+As one can see from the $n$ and $p$ appearing, Dirichlet $L$-functions are ``about $\Q$'', and we would like to have a version of $L$-functions that are for arbitrary number fields $K$. If $\chi = 1$, we already know what this should be, namely the $\zeta$-function
+\[
+ \zeta_K(s) = \sum_{\mathfrak{a} \lhd \mathcal{O}_K} \frac{1}{N(\mathfrak{a})^s} = \prod_{\mathfrak{p} \lhd \mathcal{O}_K} \frac{1}{1 - N(\mathfrak{p})^{-s}},
+\]
+where the first sum is over all ideals of $\mathcal{O}_K$, and the second product is over all \emph{prime} ideals of $\mathcal{O}_K$.
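For example, for $K = \Q(i)$ the Euler product can be computed from the splitting of rational primes ($p \equiv 1 \bmod 4$ splits into two primes of norm $p$, $p \equiv 3 \bmod 4$ is inert with norm $p^2$, and $2$ is ramified):

```latex
\[
  \zeta_{\Q(i)}(s)
    = \frac{1}{1 - 2^{-s}}
      \prod_{p \equiv 1 \bmod 4} \frac{1}{(1 - p^{-s})^2}
      \prod_{p \equiv 3 \bmod 4} \frac{1}{1 - p^{-2s}}
    = \zeta(s) L(\chi, s),
\]
% where \chi is the non-trivial Dirichlet character mod 4: each factor equals
% (1 - p^{-s})^{-1} (1 - \chi(p) p^{-s})^{-1}, with \chi(p) = 1, -1, 0 in the
% split, inert and ramified cases respectively.
```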
+
+In general, the replacement of Dirichlet characters is a \emph{Hecke character}.
+
+\subsection{Hecke characters}
+\begin{defi}[Hecke character]\index{Hecke character}
+ A \emph{Hecke character} is a continuous (not necessarily unitary) homomorphism
+ \[
+ \chi: J_K/K^\times \to \C^\times.
+ \]
+\end{defi}
+These are also known as \term{quasi-characters} in some places, where character means unitary. However, we shall adopt the convention that characters need not be unitary. The German term \term{Gr\"o{\ss}encharakter} (or suitable variations) is also used.
+
+In this section, we will seek to understand Hecke characters better, and see how Dirichlet characters arise as a special case where $K = \Q$. Doing so is useful if we want to write down actual Hecke characters. The theory of $L$-functions will be deferred to the next chapter.
+
+We begin with the following result:
+\begin{prop}
+ Let $G$ be a profinite group, and $\rho: G \to \GL_n(\C)$ continuous. Then $\ker \rho$ is open.
+\end{prop}
+Of course, the kernel is always closed.
+
+\begin{proof}
+ It suffices to show that $\ker \rho$ contains an open subgroup. We use the fact that $\GL_n(\C)$ has ``no small subgroups'', i.e.\ there is an open neighbourhood $U$ of $1 \in \GL_n(\C)$ such that $U$ contains no non-trivial subgroup of $\GL_n(\C)$ (exercise!). For example, if $n = 1$, then we can take $U = \{z \in \C^\times : |z - 1| < 1\}$; note that the right half plane does not work, since it contains the subgroup $\R_{>0}^\times$.
+
+ Then for such $U$, we know $\rho^{-1}(U)$ is open. So it contains an open subgroup $V$. Then $\rho(V)$ is a subgroup of $\GL_n(\C)$ contained in $U$, hence is trivial. So $V \subseteq \ker (\rho)$.
+\end{proof}
+
+While the multiplicative group of a local field is not profinite, it is close enough, and we similarly have
+\begin{ex}
+ Let $F$ be a local field. Then any continuous homomorphism $F^\times \to \C^\times$ has an open kernel, i.e.\ $\chi(1 + \mathfrak{p}_F^N) = 1$ for some $N \gg 0$.
+\end{ex}
+
+\begin{defi}[Unramified character]\index{unramified character}
+ If $F$ is a local field, a character $\chi: F^\times \to \C^\times$ is \emph{unramified} if
+ \[
+ \chi|_{\mathcal{O}_F^\times} = 1.
+ \]
+
+ If $F \cong \R$, we say $\chi: F^\times \to \C^\times$ is unramified if $\chi(-1) = 1$.
+\end{defi}
+
+Using the decomposition $F^\times \cong \mathcal{O}_F^\times \times \bra \pi_F\ket$ (for the non-Archimedean case), we see that
+\begin{prop}
+ $\chi$ is unramified iff $\chi(x) = |x|_F^s$ for some $s \in \C$.\fakeqed
+\end{prop}
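+The computation behind this is short: writing $x = u \pi_F^{v(x)}$ with $u \in \mathcal{O}_F^\times$, an unramified $\chi$ satisfies
+\[
+ \chi(x) = \chi(u) \chi(\pi_F)^{v(x)} = \chi(\pi_F)^{v(x)} = q^{-s v(x)} = |x|_F^s,
+\]
+where $q$ is the size of the residue field and $s$ is any solution of $q^{-s} = \chi(\pi_F)$, well-defined modulo $\frac{2\pi i}{\log q}\Z$. Conversely, $|\ph|_F^s$ is trivial on $\mathcal{O}_F^\times$.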
+
+We now return to global fields. We will think of Hecke characters as continuous maps $J_K \to \C^\times$ that factor through $J_K/K^\times$, since it is easier to reason about $J_K$ than the quotient. We can begin by discussing arbitrary characters $\chi: J_K \to \C^\times$.
+
+\begin{prop}
+ The set of continuous homomorphisms $\chi: J_K = \prod_v' K_v^\times \to \C^\times$ is in bijection with the set of all families $(\chi_v)_{v \in \Sigma_K}$, $\chi_v: K_v^\times \to \C^\times$, such that $\chi_v$ is unramified for almost all (i.e.\ all but finitely many) $v$, with the bijection given by $\chi \mapsto (\chi_v)$, $\chi_v = \chi|_{K_v^\times}$.
+\end{prop}
+
+\begin{proof}
+ Let $\chi: J_K \to \C^\times$ be a character. Since $\hat{\mathcal{O}}_K^\times \subseteq J_K$ is profinite, we know $\ker \chi|_{\hat{\mathcal{O}}_K^\times}$ is an open subgroup. Thus, it contains $\mathcal{O}_v^\times$ for all but finitely many $v$. So we have a map from the LHS to the RHS.
+
+ In the other direction, suppose we are given a family $(\chi_v)_v$. We attempt to define a character $\chi: J_K \to \C^\times$ by
+ \[
+ \chi((x_v)) = \prod_v \chi_v(x_v).
+ \]
+ Since $\chi_v$ is unramified for almost all $v$ and $x_v \in \mathcal{O}_v^\times$ for almost all $v$, we have $\chi_v(x_v) = 1$ for all but finitely many $v$, so the product is well-defined. These two operations are clearly inverse to each other.
+\end{proof}
+In general, we can write $\chi$ as
+\[
+ \chi = \chi_\infty \chi^\infty,\quad \chi^\infty = \prod_{v \nmid \infty} \chi_v: \hat{K}^\times \to \C^\times, \quad \chi_\infty = \prod_{v \mid \infty} \chi_v: K_\infty^\times \to \C^\times.
+\]
+
+\begin{lemma}
+ Let $\chi$ be a Hecke character. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\chi$ has finite image.
+ \item $\chi_\infty(K_\infty^{\times, 0}) = 1$.
+ \item $\chi_\infty^2 = 1$.
+ \item $\chi(C_K^0) = 1$.
+ \item $\chi$ factors through $\Cl_{\mathfrak{m}}(K)$ for some modulus $\mathfrak{m}$.
+ \end{enumerate}
+ In this case, we say $\chi$ is a \term{ray class character}.
+\end{lemma}
+
+\begin{proof}
+ Since $\chi_\infty(K_\infty^{\times, 0})$ is either $1$ or infinite, we know (i) $\Rightarrow$ (ii). It is clear that (ii) $\Leftrightarrow$ (iii), and these easily imply (iv). Since $C_K/C_K^0$ is profinite, if (iii) holds, then $\chi$ factors through $C_K/C_K^0$ and has open kernel, hence the kernel contains $U_\mathfrak{m}$ for some modulus $\mathfrak{m}$. So $\chi$ factors through $\Cl_{\mathfrak{m}}(K)$. Finally, since $\Cl_{\mathfrak{m}}(K)$ is finite, (v) $\Rightarrow$ (i) is clear.
+\end{proof}
+%\begin{eg}
+% $\chi$ has finite order iff $\chi(C_K^0) = 1$ iff $\chi_\infty(K_\infty^{\times, 0}) = 1$, iff $\chi_\infty^2 = 1$, iff $\chi$ factors through $\Cl_\mathfrak{m}(K)$ for some modulus $\mathfrak{m}$. We say $\chi$ is a \term{ray class character}.
+%\end{eg}
+
+Using this, we are able to classify all Hecke characters when $K = \Q$.
+\begin{eg}
+ The idele norm $|\ph|_{\A}: C_K \to \R^\times_{>0}$ is a character not of finite order. In the case $K = \Q$, we have $C_\Q = \R_{>0}^\times \times \hat{\Z}^\times$. The idele norm is then the projection onto $\R_{>0}^\times$.
+
+ Thus, if $\chi: C_\Q \to \C^\times$ is a Hecke character, then its restriction to $\R_{>0}^\times$ is of the form $x \mapsto x^{s}$ for some $s \in \C$. If we write
+ \[
+ \chi(x) = |x|_{\A}^s \cdot \chi'(x)
+ \]
+ for some $\chi'$, then $\chi'$ vanishes on $\R_{>0}^\times$, hence factors through $\hat{\Z}^\times$ and has finite order. Thus, it factors as
+ \[
+ \chi': C_\Q \to \hat{\Z}^\times \to (\Z/N\Z)^\times \to \C^\times
+ \]
+ for some $N$. In other words, it is a Dirichlet character.
+\end{eg}
+For other fields, there can be more interesting Hecke characters.
+
+For a general field $K$, we have finite order characters as we just saw. They correspond to characters on $I_S$ which are trivial on $P_\mathfrak{m}$. In fact, we can describe all Hecke characters in terms of ideals.
+
+There is an alternative way to think about Hecke characters. We can think of Dirichlet characters as (partial) functions $\Z \to \C^\times$ that satisfy certain multiplicativity properties. In general, a Hecke character can be thought of as a function on a set of ideals of $\mathcal{O}_K$.
+
+Pick a modulus $\mathfrak{m}$ such that $\chi^\infty$ is trivial on $\hat{K}^\times \cap U_\mathfrak{m}$. Let $S$ be the set of finite $v$ such that $\mathfrak{m}_v$ is positive, and let $I_S$ be the set of fractional ideals prime to $S$. We then define a homomorphism
+\begin{align*}
+ \Theta: I_S &\to \C^\times\\
+ \mathfrak{p}_v &\mapsto \chi_v(\pi_v)^{-1}
+\end{align*}
+
+One would not expect $\Theta$ to remember much information about the infinite part $\chi_\infty$. However, once we know $\Theta$ and $\chi_\infty$ (and $\mathfrak{m}$), it is not difficult to see that we can recover all of $\chi$.
+
+On the other hand, an arbitrary pair $(\Theta, \chi_\infty)$ doesn't necessarily come from a Hecke character. Indeed, the fact that $\chi$ vanishes on $K^\times$ implies there is some compatibility condition between $\Theta$ and $\chi_\infty$.
+
+Suppose $x \in K^\times$ is such that $x \equiv 1 \bmod \mathfrak{m}$. Then $(x) \in I_S$, and we have
+\[
+ 1 = \chi(x) = \chi_\infty(x) \prod_{v \not \in S\text{ finite}} \chi_v(x) = \chi_\infty(x) \prod_{\text{finite }v \not \in S} \chi_v(\pi_v)^{v(x)}.
+\]
+
+Writing $P_\mathfrak{m}$ for the set of principal ideals generated by these $x$, as we previously did, we see that for all such $x$,
+\[
+ \chi_\infty(x) = \Theta((x)).
+\]
+One can check that given $(\Theta, \chi_\infty)$ (and $\mathfrak{m}$) satisfying this compatibility condition, there is a unique Hecke character that gives rise to this pair. This was Hecke's original definition of a Hecke character, which is more similar to the definition of a Dirichlet character.
+
+\begin{eg}
+ Take $K = \Q(i)$, and fix an embedding into $\C$. Since $\Cl(K) = 1$, we have
+ \[
+ C_K = \frac{\C^\times \times \hat{\mathcal{O}}_K^\times}{\mu_4 = \{\pm 1, \pm i\}}.
+ \]
+ Let $v_2$ be the place over $2$, corresponding to the prime $(1 + i) \mathcal{O}_K$. Then $K_{v_2}$ is a ramified extension over $\Q_2$ of degree $2$. Moreover,
+ \[
+ \left(\frac{\mathcal{O}_K}{(1 + i)^3 \mathcal{O}_K}\right)^\times = \mu_4 = \{\pm 1, \pm i\}.
+ \]
+ So we have a decomposition
+ \[
+ \mathcal{O}_{v_2}^\times = (1 + (1 + i)^3 \mathcal{O}_{v_2}) \times \mu_4.
+ \]
+ Thus, there is a natural projection
+ \[
+ C_K \twoheadrightarrow \frac{\C^\times \times \mathcal{O}_{v_2}^\times}{\mu_4} \cong \C^\times \times (1 + (1 + i)^3 \mathcal{O}_{v_2}) \twoheadrightarrow \C^\times.
+ \]
+ This gives a Hecke character $\chi$ with $\chi_\infty(z) = z$, which is trivial on
+ \[
+ \prod_{v \not \in \{v_2, \infty\}} \mathcal{O}_v^\times \times (1 + (1 + i)^3 \mathcal{O}_{v_2}).
+ \]
+ This has modulus
+ \[
+ \mathfrak{m} = 3 (v_2).
+ \]
+
+ In ideal-theoretic terms, if $\mathfrak{p} \not= (1 + i)$ is a prime ideal of $K$, then $\mathfrak{p} = (\pi_\mathfrak{p})$ for a unique $\pi_\mathfrak{p} \in \mathcal{O}_K$ with $\pi_\mathfrak{p} \equiv 1 \bmod (1 + i)^3$. Then $\Theta$ sends $\mathfrak{p}$ to $\pi_\mathfrak{p}$.
+%
+% We can say what this is in ideal-theoretic terms. If $\mathfrak{p} \not= (1 + i)$ is a prime ideal of $K$, then $\mathfrak{p} = (\pi)$ for $\pi \equiv 1 \bmod (2 + 2i)$ for a unique $\pi = \pi_\mathfrak{p}$. The associated $\Theta: I_{\{v_2\}} \to \C^\times$ is just given by $\Theta(\mathfrak{p}) = \pi_\mathfrak{p}$.
+%
+\end{eg}
+
+This is an example of an algebraic Hecke character.
+\begin{defi}[Algebraic homomorphism]\index{algebraic homomorphism}
+ A homomorphism $\varphi: K^\times \to \C^\times$ is \emph{algebraic} if there exist integers $n(\sigma)$ (one for each embedding $\sigma: K \hookrightarrow \C$) such that
+ \[
+ \varphi(x) = \prod \sigma(x)^{n(\sigma)}.
+ \]
+\end{defi}
+The first thing to note is that if $\varphi$ is algebraic, then $\varphi(K^\times)$ is contained in the Galois closure of $K$ in $\C$. In particular, it takes values in a number field. An equivalent definition is that $\varphi$ is algebraic in the sense of algebraic geometry: writing $K = \bigoplus_{i = 1}^n \Q e_i$ as a $\Q$-vector space, we can view $K$ as the $\Q$-points of affine $n$-space, and define $T_K = \Res_{K/\Q} \G_m$ to be the open subvariety on which multiplication is invertible, so that $T_K(\Q) = K^\times$. Then $\varphi$ is algebraic iff it is the restriction to $K^\times$ of a homomorphism of algebraic groups $(T_K)_{/\C} \to (\G_m)_{/\C}$.
+
+If we have a real place $v$ of $K$, then this corresponds to a real embedding $\sigma_v: K \to K_v \cong \R$, and if $v$ is a complex place, we have a pair of embeddings $\sigma_v, \bar{\sigma}_v: K \hookrightarrow K_v \simeq \C$, and we pick one of the pair to be $\sigma_v$. So $\varphi$ extends to a homomorphism
+\[
+ \varphi: K_\infty^\times \to \C^\times
+\]
+given by
+\[
+ \varphi((x_v)) = \prod_{v\text{ real}} x_v^{n(\sigma_v)} \prod_{v\text{ complex}} x_v^{n(\sigma_v)} \bar{x}_v^{n(\bar{\sigma}_v)}.
+\]
+\begin{defi}[Algebraic Hecke character]\index{algebraic Hecke character}
+ A Hecke character $\chi = \chi_\infty \chi^\infty: J_K/K^\times \to \C^\times$ is \emph{algebraic} if there exists an algebraic homomorphism $\varphi: K^\times \to \C^\times$ such that $\varphi(x) = \chi_\infty(x)$ for all $x \in K_\infty^{\times, 0}$, i.e.\ $\chi_\infty = \varphi \prod_{v\text{ real}} \sgn_v^{e_v}$ for $e_v \in \{0, 1\}$.
+
+ We say $\varphi$ (or the tuple $(n(\sigma))_\sigma$) is the \term{infinity type} of $\chi$.
+\end{defi}
+\begin{eg}
+ The adelic norm $|\ph|_{\A}: J_K \to \C^\times$ has
+ \[
+ \chi_\infty = \prod |\ph|_v,
+ \]
+ and so $\chi$ is algebraic, and the associated $\varphi$ is just $N_{K/\Q}: K^\times \to \Q^\times \subseteq \C^\times$, with $(n(\sigma)) = (1, \ldots, 1)$.
+\end{eg}
+
+\begin{ex}
+ Let $K = \Q(i)$, and $\chi$ from the previous example, whose associated character of ideals was $\Theta: \mathfrak{p} \mapsto \pi_\mathfrak{p}$, where $\pi_\mathfrak{p} \equiv 1 \bmod (2 + 2i)$. The infinity type is the inclusion $K^\times \hookrightarrow \C^\times$, i.e.\ it has type $(1, 0)$.
+\end{ex}
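+In computational terms, the congruence mod $(2 + 2i)$ is the same as mod $(1 + i)^3$, since $(1 + i)^3 = -2 + 2i$ generates the same ideal. The following sketch (helper names are our own, with Python's built-in complex numbers standing in for $\Z[i]$) finds such a ``primary'' generator for a split prime $p \equiv 1 \pmod 4$:

```python
def gaussian_divides(d, z):
    """Check whether the Gaussian integer d divides z in Z[i]."""
    # z/d = z * conj(d) / |d|^2 lies in Z[i] iff both coordinates are integers
    n = d.real ** 2 + d.imag ** 2
    w = z * d.conjugate()
    return w.real % n == 0 and w.imag % n == 0

def primary_generator(p):
    """For a prime p = 1 mod 4, return a generator pi of a prime of Z[i]
    above p with pi = 1 mod (1 + i)^3, i.e. a "primary" Gaussian prime."""
    m = (1 + 1j) ** 3  # = -2 + 2i, generating the same ideal as 2 + 2i
    for a in range(1, p):
        b2 = p - a * a
        b = round(b2 ** 0.5)
        if b > 0 and b * b == b2:  # found a^2 + b^2 = p
            for u in (1, -1, 1j, -1j):  # the four units of Z[i]
                pi = u * complex(a, b)
                if gaussian_divides(m, pi - 1):
                    return pi
    raise ValueError("p is not split in Z[i]")

print(primary_generator(5), primary_generator(13))  # -> (-1-2j) (3-2j)
```

+For instance, the prime above $13$ generated by $3 - 2i$ has $\Theta(\mathfrak{p}) = 3 - 2i$, and indeed $(3 - 2i) - 1 = 2 - 2i = -(1 + i)^3$.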
+
+Observe that the image of an algebraic homomorphism $\varphi: K^\times \to \C^\times$ lies in the normal closure of $K$. More generally,
+\begin{prop}
+ If $\chi$ is an algebraic Hecke character, then $\chi^\infty$ takes values in some number field. We write $E = E(\chi)$ for the smallest such field.
+\end{prop}
+Of course, we cannot expect $\chi$ to take algebraic values, since $J_K$ contains copies of $\R$ and $\C$.
+
+\begin{proof}
+ Observe that $\chi^\infty(\hat{\mathcal{O}}_K^\times)$ is a finite subgroup of $\C^\times$, so is $\mu_n$ for some $n$. Let $x \in K^\times$ be totally positive. Then
+ \[
+ \chi^\infty(x) = \chi_\infty(x)^{-1} = \varphi(x)^{-1} \in K^{cl},
+ \]
+ where $K^{cl}$ is the Galois closure of $K$ in $\C$. Since $K_{>0}^\times \times \hat{\mathcal{O}}_K^\times \to \hat{K}^\times$ has finite cokernel (by the finiteness of the class group), we can write
+ \[
+ \chi^\infty(\hat{K}^\times) = \coprod_{i = 1}^d z_i \chi^\infty(K_{>0}^\times \hat{\mathcal{O}}_K^\times),
+ \]
+ where $z_i^d \in \chi^\infty(K_{>0}^\times \hat{\mathcal{O}}_K^\times)$. So $\chi^\infty(\hat{K}^\times)$ is contained in a finite extension of the number field generated by $\mu_n$ and $\varphi(K_{>0}^\times)$.
+\end{proof}
+
+Hecke characters of finite order (i.e.\ algebraic Hecke characters with infinity type $(0, \ldots, 0)$) are in bijection with continuous homomorphisms $\Gamma_K \to \C^\times$, necessarily of finite order. What we show now is how to associate to a general algebraic Hecke character $\chi$ a continuous homomorphism $\psi_\lambda: \Gamma_K \to E(\chi)_\lambda^\times \supseteq \Q_\ell^\times$, where $\lambda$ is a place of $E(\chi)$ above a rational prime $\ell$. This is continuous for the $\ell$-adic topology on $E_\lambda$. In general, this will not be of finite order. Thus, algebraic Hecke characters give rise to $\ell$-adic Galois representations.
+
+The construction works as follows: since $\chi^\infty(x) = \varphi(x)^{-1}$ for totally positive $x \in K^\times$, the infinity type $\varphi$ in fact defines a homomorphism $\varphi: K^\times \to E^\times$. We define $\tilde{\chi}: J_K \to E^\times$ as follows: if $x = x_\infty x^\infty \in J_K$ with $x_\infty \in K_\infty^\times$ and $x^\infty \in \hat{K}^\times$, then we set
+\[
+ \tilde{\chi}(x) = \chi(x) \varphi(x_\infty)^{-1}.
+\]
+Notice that this is not trivial on $K^\times$ in general. Since $\chi_\infty$ agrees with $\varphi$ on $K_\infty^{\times, 0}$, the infinite part $\tilde{\chi}_\infty$ takes values in $\{\pm 1\}$, and so $\tilde{\chi}$ takes values in $E^\times$. Thus, we know that $\tilde{\chi}$ has open kernel, i.e.\ it is continuous for the discrete topology on $E^\times$, and $\tilde{\chi}|_{K^\times} = \varphi^{-1}$.
+
+Conversely, if $\tilde{\chi}: J_K \to E^\times$ is a homomorphism, continuous for the discrete topology on $E^\times$, whose restriction to $K^\times$ is an algebraic homomorphism, then it comes from an algebraic Hecke character in this way.
+
+Let $\lambda$ be a finite place of $E$ over $\ell$, a rational prime. Recall that $\varphi: K^\times \to E^\times$ is an algebraic homomorphism, i.e.
+\[
+ \varphi\left(\sum x_i e_i\right) = f(\mathbf{x}),\quad f \in E(X_1, \ldots, X_n).
+\]
+We can extend this to $K_\ell^\times = (K \otimes_{\Q} \Q_\ell)^\times = \prod_{v \mid \ell} K_v^\times$ to get a homomorphism
+\[
+ \varphi_\lambda: K_\ell^\times \to E_\lambda^\times
+\]
+This is still algebraic, so it is certainly continuous for the $\ell$-adic topology.
+
+Now consider the character $\psi_\lambda: J_K \to E_\lambda^\times$, where now
+\[
+ \psi_\lambda((x_v)) = \tilde{\chi}(x) \varphi_\lambda((x_v)_{v \mid \ell}).
+\]
+This is then continuous for the $\ell$-adic topology on $E_\lambda^\times$, and moreover, $\psi_\lambda(K^\times) = \{1\}$, since $\tilde{\chi}|_{K^\times} = \varphi^{-1}$ while $\varphi_\lambda|_{K^\times} = \varphi$. Since $\tilde{\chi}(K_\infty^{\times, 0}) = \{1\}$, we know that $\psi_\lambda$ is in fact defined on $C_K/C_K^0 \cong \Gamma_K^{ab}$.
+
+Obviously, $\psi_\lambda$ determines $\tilde{\chi}$ and hence $\chi$.
+\begin{fact}
+ An $\ell$-adic character $\psi: C_K/C_K^0 \to E_\lambda^\times$ comes from an algebraic Hecke character in this way if and only if the associated Galois representation is \emph{Hodge--Tate}, which is a condition on the restriction to the decomposition groups $\Gal(\bar{K}_v/K_v)$ for the primes $v \mid \ell$.
+\end{fact}
+
+\begin{eg}
+ Let $K = \Q$ and $\chi = |\ph |_{\A}$, then
+ \[
+ \tilde{\chi} = \sgn(x_\infty) \prod_p |x_p|_p.
+ \]
+ So
+ \[
+ \psi_\ell((x_v)) = \sgn(x_\infty) \prod_{p \not= \ell} |x_p|_p \cdot |x_\ell|_{\ell} \cdot x_\ell.
+ \]
+ Note that $|x_\ell|_{\ell} x_\ell \in \Z_\ell^\times$. We have
+ \[
+ C_\Q/C_\Q^0 \cong \hat{\Z}^\times.
+ \]
+ Under this isomorphism, the map $\hat{\Z}^\times \to \Q_\ell^\times$ is just the projection onto $\Z_\ell^\times$ followed by the inclusion, and by class field theory, $\psi_\ell: \Gal(\bar{\Q}/\Q) \to \Z_\ell^\times$ is just the cyclotomic character of the extension $\Q(\{\zeta_{\ell^n}\})/\Q$, given by
+ \[
+ \sigma(\zeta_{\ell^n}) = \zeta_{\ell^n} ^{\psi_\ell(\sigma) \bmod \ell^n}.
+ \]
+\end{eg}
+
+\begin{eg}
+ Consider the elliptic curve $y^2 = x^3 - x$ with complex multiplication over $\Q(i)$. In other words, $\End(E/\Q(i)) = \Z[i]$, where we let $i$ act by
+ \[
+ i \cdot (x, y) \mapsto (-x, iy).
+ \]
+ Its Tate module
+ \[
+ T_\ell E = \varprojlim_n E[\ell^n]
+ \]
+ is a $\Z_\ell [i]$-module. If $\lambda \mid \ell$ is a place of $K = \Q(i)$, then we define
+ \[
+ V_\lambda E = T_\ell E \otimes_{\Z_{\ell}[i]} K_\lambda.
+ \]
+ This is a $1$-dimensional $K_\lambda$-vector space, and $\Gamma_K$ acts on it through a continuous character $\Gamma_K \to \Aut_{K_\lambda} (V_\lambda E) = K_\lambda^\times$. One can show that this is the character $\psi_\lambda$ attached to the algebraic Hecke character of $\Q(i)$ constructed earlier.
+\end{eg}
+
+We now want to study the infinity types of an algebraic Hecke character.
+\begin{lemma}
+ Let $K$ be a number field, $\varphi: K^\times \to E^\times \subseteq \C^\times$ be an algebraic homomorphism, and suppose $E/\Q$ is Galois. Then $\varphi$ factors as
+ \[
+ K^\times \overset{\mathrm{norm}}{\longrightarrow} (K \cap E)^\times \overset{\varphi'}{\longrightarrow} E^\times.
+ \]
+ Note that since $E$ is Galois, the intersection $K \cap E$ makes perfect sense.
+\end{lemma}
+
+\begin{proof}
+ By definition, we can write
+ \[
+ \varphi(x) = \prod_{\sigma : K \hookrightarrow \C} \sigma(x)^{n(\sigma)}.
+ \]
+ Then since $\varphi(x) \in E$, for all $x \in K^\times$ and $\tau \in \Gamma_E$, we have
+ \[
+ \prod \tau \sigma(x) ^{n(\sigma)} = \prod \sigma(x)^{n(\sigma)}.
+ \]
+ In other words, we have
+ \[
+ \prod_\sigma \sigma(x)^{n(\tau^{-1} \sigma)} = \prod_\sigma \sigma(x)^{n(\sigma)}.
+ \]
+ Since the homomorphisms $\sigma$ are linearly independent, we must have $n(\tau \sigma) = n (\sigma)$ for all embeddings $\sigma: K \hookrightarrow \C$ and all $\tau \in \Gamma_E$. This implies the lemma.
+\end{proof}
+
+Recall that if $\mathfrak{m}$ is a modulus, then we defined an open subgroup $U_\mathfrak{m} \subseteq J_K$, consisting of the elements $(x_v)$ such that $x_v > 0$ for every real $v \mid \mathfrak{m}$, and $v(x_v - 1) \geq \mathfrak{m}_v$ for every finite $v \mid \mathfrak{m}$. We can write this as
+\[
+ U_\mathfrak{m} = U_{\mathfrak{m}, \infty} \times U_{\mathfrak{m}}^\infty.
+\]
+\begin{prop}
+ Let $\varphi: K^\times \to \C^\times$ be an algebraic homomorphism. Then $\varphi$ is the infinity type of an algebraic Hecke character $\chi$ iff $\varphi(\mathcal{O}_K^\times)$ is finite.
+\end{prop}
+
+\begin{proof}
+ To prove the $(\Rightarrow)$ direction, suppose $\chi = \chi_\infty \chi^\infty$ is an algebraic Hecke character with infinity type $\varphi$. Then $\chi^\infty(U_\mathfrak{m}^\infty) = 1$ for some $\mathfrak{m}$. Let $E_\mathfrak{m} = K^\times \cap U_\mathfrak{m} \subseteq \mathcal{O}_K^\times$, a subgroup of finite index. As $\chi^\infty(E_\mathfrak{m}) = 1 = \chi(E_\mathfrak{m})$, we know $\chi_\infty(E_\mathfrak{m}) = 1$. Since $\chi_\infty$ and $\varphi$ differ only by sign characters, $\varphi(E_\mathfrak{m})$ is finite, and as $E_\mathfrak{m}$ has finite index in $\mathcal{O}_K^\times$, it follows that $\varphi(\mathcal{O}_K^\times)$ is finite.
+
+ To prove $(\Leftarrow)$, given $\varphi$ with $\varphi(\mathcal{O}_K^\times)$ finite, we can find some $\mathfrak{m}$ such that $\varphi(E_\mathfrak{m}) = 1$. Then $(\varphi, 1): K_\infty^\times \times U_\mathfrak{m}^\infty \to \C^\times$ is trivial on $E_\mathfrak{m}$. So we can extend this to a homomorphism
+ \[
+ \frac{K_\infty^\times U_\mathfrak{m} K^\times}{K^\times} \cong \frac{K_\infty^\times U_\mathfrak{m}}{E_\mathfrak{m}} \to \C^\times,
+ \]
+ since $E_\mathfrak{m} = K^\times \cap U_\mathfrak{m}$. But the LHS is a finite index subgroup of $C_K$. So the map extends to some $\chi$.
+\end{proof}
+
+Here is some non-standard terminology:
+\begin{defi}[Serre type]\index{Serre type}
+ A homomorphism $\varphi: K^\times \to \C^\times$ is of \emph{Serre type} if it is algebraic and $\varphi(\mathcal{O}_K^\times)$ is finite.
+\end{defi}
+These are precisely homomorphisms that occur as infinity types of algebraic Hecke characters.
+
+Note that the unit theorem implies that
+\[
+ \mathcal{O}_K^\times \hookrightarrow K_\infty^{\times, 1} = \{x \in K_\infty^{\times} : |x|_{\A} = 1\}
+\]
+has compact cokernel. If $\varphi(\mathcal{O}_K^\times)$ is finite, then $\varphi(K_\infty^{\times, 1})$ is compact. So it maps into $\U(1)$.
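+To spell this out: if $\varphi(\mathcal{O}_K^\times)$ is finite, then $|\varphi|$ is trivial on $\mathcal{O}_K^\times$, so the continuous homomorphism $|\varphi|: K_\infty^{\times, 1} \to \R_{>0}$ factors through the compact group $K_\infty^{\times, 1}/\overline{\mathcal{O}_K^\times}$. Its image is then a compact subgroup of $\R_{>0}$, which must be trivial. So $|\varphi(x)| = 1$ for all $x \in K_\infty^{\times, 1}$.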
+
+\begin{eg}
+ Suppose $K$ is totally real. Then
+ \[
+ K_\infty^\times = (\R^\times)^{\{\sigma: K \hookrightarrow \R\}}.
+ \]
+ Then we have
+ \[
+ K_\infty^{\times, 1} = \{(x_\sigma): \prod x_\sigma = \pm 1\}.
+ \]
+ Then $\varphi((x_\sigma)) = \prod x_\sigma^{n(\sigma)}$, and if $\varphi$ is of Serre type, then $|\varphi(K_\infty^{\times, 1})| = 1$, which forces all the $n(\sigma)$ to be equal. Thus, $\varphi$ is just a power of the norm map.
+
+ Thus, algebraic Hecke characters are all of the form
+ \[
+ |\ph |_\A^m \cdot (\text{finite order character}).
+ \]
+\end{eg}
+
+Another class of examples comes from CM fields.
+\begin{defi}[CM field]\index{CM field}
+ $K$ is a CM field if $K$ is a totally complex quadratic extension of a totally real number field $K^+$.
+\end{defi}
+This CM refers to \emph{complex multiplication}.
+
+This is a rather restrictive condition, since this implies $\Gal(K/K^+) = \{1, c\} = \Gal(K_w /K_v^+)$ for every $w \mid v \mid \infty$. So $c$ is equal to complex conjugation for \emph{every} embedding $K \hookrightarrow \C$.
+
+From this, it is easy to see that CM fields are all contained in $\Q^{\CM} \subseteq \bar{\Q} \subseteq \C$, given by the fixed field of the subgroup
+\[
+ \bra c \sigma c \sigma^{-1}: \sigma \in \Gamma_\Q\ket \subseteq \Gamma_\Q.
+\]
+For example, we see that the compositum of two CM fields is another CM field.
+
+\begin{ex}
+ Let $K$ be a totally complex $S_3$-extension over $\Q$. Then $K$ is not CM, but its quadratic subfield is complex and is equal to $K \cap \Q^{\CM}$.
+\end{ex}
+
+\begin{eg}
+ Let $K$ be a CM field of degree $2r$. Then Dirichlet's unit theorem tells us
+ \[
+ \rk \mathcal{O}_K^\times = r - 1 = \rk \mathcal{O}_{K^+}^\times.
+ \]
+ So $\mathcal{O}_{K^+}^\times$ is a finite index subgroup of $\mathcal{O}_K^\times$. So $\varphi: K^\times \to \C^\times$ is of Serre type iff it is algebraic and its restriction to $K^{+, \times}$ is of Serre type. In other words, we need $n(\sigma) + n(\bar{\sigma})$ to be independent of $\sigma$.
+\end{eg}
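+For instance, if $K = \Q(i)$ (so $r = 1$), then $\mathcal{O}_K^\times = \mu_4$ is already finite, so \emph{every} algebraic homomorphism $\varphi(x) = x^a \bar{x}^b$ is of Serre type; correspondingly, the condition that $n(\sigma) + n(\bar{\sigma}) = a + b$ be independent of $\sigma$ is vacuous, as there is only one pair of embeddings. This is consistent with the type $(1, 0)$ character of $\Q(i)$ constructed earlier.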
+
+\begin{thm}
+ Suppose $K$ is arbitrary, and $\varphi: K^\times \to E^\times \subseteq \C^\times$ is algebraic, and we assume $E/\Q$ is Galois, containing the normal closure of $K$. Thus, we can write
+ \[
+ \varphi(x) = \prod_{\sigma: K \hookrightarrow E} \sigma(x)^{n(\sigma)}.
+ \]
+ Then the following are equivalent:
+ \begin{enumerate}
+ \item $\varphi$ is of Serre type.
+ \item $\varphi = \psi \circ N_{K/F}$, where $F = K \cap \Q^{\CM}$ is the maximal CM (or totally real) subfield of $K$, and $\psi$ is of Serre type.
+ \item For all $c' \in \Gal(E/\Q)$ conjugate to complex conjugation $c$, the map $\sigma \mapsto n(\sigma) + n (c' \sigma)$ is constant.
+ \item (in the case $K \subseteq \C$ and $K/\Q$ is Galois with Galois group $G$) Let $\lambda = \sum n(\sigma) \sigma \in \Z[G]$. Then for all $\tau \in G$, we have
+ \[
+ (\tau - 1)(c + 1) \lambda = 0 = (c + 1)(\tau - 1) \lambda.
+ \]
+ \end{enumerate}
+\end{thm}
+Note that in (iii), the constant is necessarily
+\[
+ \frac{2}{[K:\Q]} \sum_\sigma n(\sigma).
+\]
+So in particular, it is independent of $c'$.
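+Indeed, summing $n(\sigma) + n(c'\sigma) = m$ over all $[K : \Q]$ embeddings $\sigma$, and using that $\sigma \mapsto c'\sigma$ permutes the embeddings, gives
+\[
+ [K : \Q] \, m = \sum_\sigma n(\sigma) + \sum_\sigma n(c'\sigma) = 2 \sum_\sigma n(\sigma).
+\]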
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item (iii) $\Leftrightarrow$ (iv): This is just some formal symbol manipulation.
+ \item (ii) $\Rightarrow$ (i): The norm takes units to units.
+ \item (i) $\Rightarrow$ (iii): By the previous lecture, we know that if $\varphi$ is of Serre type, then
+ \[
+ |\varphi(K_\infty^{\times, 1})| = 1.
+ \]
+ Now if $(x_v) \in K_\infty^\times$, we have
+ \[
+ |\varphi((x_v))| = \prod_{\text{real }v} |x_v|^{n(\sigma_v)} \prod_{\text{complex }v} |x_v|^{n(\sigma_v) + n(\bar{\sigma}_v)} = \prod_v |x_v|_v^{\frac{1}{2} (n(\sigma_v) + n(\bar{\sigma}_v))}.
+ \]
+ Here the modulus without the subscript is the usual modulus. Then $|\varphi(K_\infty^{\times, 1})| = 1$ implies $n(\sigma_v) + n(\bar{\sigma}_v)$ is constant. In other words, $n(\sigma) + n(c\sigma) = m$ is constant.
+
+ But if $\tau \in \Gal(E/\Q)$ and $\varphi' = \tau \circ \varphi$, so that $n'(\sigma) = n(\tau^{-1}\sigma)$, then $\varphi'$ is also of Serre type. So
+ \[
+ m = n'(\sigma) + n'(c\sigma) = n(\tau^{-1} \sigma) + n(\tau^{-1} c \sigma) = n(\tau^{-1} \sigma) + n((\tau^{-1} c \tau) \tau^{-1} \sigma).
+ \]
+ \item (iii) $\Rightarrow$ (ii): Suppose $n(\sigma) + n(c'\sigma) = m$ for all $\sigma$ and all $c' = \tau c \tau^{-1}$. Then we must have
+ \[
+ n(c'\sigma) = n(c\sigma)
+ \]
+ for all $\sigma$. So
+ \[
+ n(\sigma) = n(c\tau c \tau^{-1} \sigma)
+ \]
+ So $n$ is invariant under the subgroup $H = \bra c\tau c \tau^{-1} : \tau \in \Gal(E/\Q)\ket \leq \Gal(E/\Q)$, noting that $c$ has order $2$. So $\varphi$ takes values in the fixed field $E^H = E \cap \Q^{\CM}$. By the earlier lemma on algebraic homomorphisms, this implies $\varphi$ factors through $N_{K/F}$, where $F = E^H \cap K = K \cap \Q^{\CM}$.\qedhere
+ \end{itemize}
+\end{proof}
+Recall that a homomorphism $\varphi: K^\times \to \C^\times$ is algebraic iff it is a character of the commutative algebraic group $T_K = R_{K/\Q} \G_m$, so that $T_K(\Q) = K^\times$, i.e.\ there is an algebraic character $\varphi': T_K/\C \to \G_m/\C$ such that $\varphi'$ restricted to $T_K(\Q)$ is $\varphi$.
+
+Then $\varphi$ is of Serre type iff $\varphi$ is a character of $^KS^0 = T_K/\mathcal{E}_K^0$, where $\mathcal{E}_K$ is the Zariski closure of $\mathcal{O}_K^\times$ in $T_K$ and $\mathcal{E}_K^0$ is the identity component, which is the same as the Zariski closure of $\Delta \subseteq \mathcal{O}_K^\times$, where $\Delta$ is a sufficiently small finite-index subgroup.
+
+The group $^KS^0$ is called the \term{connected Serre group}. We have a commutative diagram (with exact rows)
+\[
+ \begin{tikzcd}
+ 1 \ar[r] \ar[d] & K^\times \ar[r] \ar[d] & J_K \ar[r] \ar[d] & J_K/K^\times \ar[d, "\pi_0"] \ar[r] & 1 \ar[d]\\
+ 1 \ar[r] & ^KS^0 \ar[r] & ^KS \ar[r] & \Gamma^{ab}_K \ar[r] & 1
+ \end{tikzcd}
+\]
+This $^KS$ is a projective limit of algebraic groups over $\Q$. We have
+\[
+ \Hom(^KS, \C^\times) = \Hom(^KS, \G_m/\C) = \{\text{algebraic Hecke characters of $K$}\}
+\]
+The infinity type is just the restriction to $^KS^0$.
+
+Langlands constructed a larger group, the \term{Taniyama group}, an extension of $\Gal(\bar{\Q}/\Q)$ by $^K S^0$, which is useful in the study of abelian varieties with complex multiplication and their conjugates, and of Shimura varieties.
+
+\subsection{Abelian \texorpdfstring{$L$}{L}-functions}
+We are now going to define $L$-functions for Hecke characters. Recall that, amongst other things, an $L$-function is a function of a complex variable. Here we are going to do things slightly differently. For any Hecke character $\chi$, we will define $L(\chi)$, which will be a \emph{number}. We then define
+\[
+ L(\chi, s) = L(|\ph|_\A^s \chi).
+\]
+We shall define $L(\chi)$ as an Euler product, and then later show it can be written as a sum.
+
+\begin{defi}[Hecke $L$-function]\index{Hecke $L$-function}
+ Let $\chi: C_K \to \C^\times$ be a Hecke character. For $v \in \Sigma_K$, we define local $L$-factors $L(\chi_v)$ as follows:
+ \begin{itemize}
+ \item If $v$ is non-Archimedean and $\chi_v$ unramified, i.e.\ $\chi_v|_{\mathcal{O}_{K_v}^\times} = 1$, we set
+ \[
+ L(\chi_v) = \frac{1}{1 - \chi_v(\pi_v)}.
+ \]
+ \item If $v$ is non-Archimedean and $\chi_v$ is ramified, then we set
+ \[
+ L(\chi_v) = 1.
+ \]
+ \item If $v$ is a real place, then $\chi_v$ is of the form
+ \[
+ \chi_v(x) = x^{-N} |x|_v^s,
+ \]
+ where $N = 0, 1$. We write
+ \[
+ L(\chi_v) = \Gamma_\R(s) = \pi^{-s/2} \Gamma(s/2).
+ \]
+ \item If $v$ is a complex place, then $\chi_v$ is of the form
+ \[
+ \chi_v(x) = \sigma(x)^{-N} |x|_v^s,
+ \]
+ where $\sigma$ is an embedding of $K_v$ into $\C$ and $N \geq 0$. Then
+ \[
+ L(\chi_v) = \Gamma_\C(s) = 2 (2\pi)^{-s} \Gamma(s)
+ \]
+ \end{itemize}
+ We then define
+ \[
+ L(\chi_v, s) = L(\chi_v \cdot |\ph|_v^s).
+ \]
+ So for finite unramified $v$, we have
+ \[
+ L(\chi_v, s) = \frac{1}{1 - \chi_v(\pi_v) q_v^{-s}},
+ \]
+ where $q_v = |\mathcal{O}_{K_v}/(\pi_v)|$.
+
+ Finally, we define\index{$\Lambda(\chi, s)$}\index{$L(\chi, s)$}
+ \begin{align*}
+ L(\chi, s) &= \prod_{v \nmid \infty} L(\chi_v, s)\\
+ \Lambda(\chi, s) &= \prod_v L(\chi_v, s).
+ \end{align*}
+\end{defi}
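+As a sanity check, take $K = \Q$ and $\chi = 1$ the trivial character. Then every finite place is unramified with $\chi_p(\pi_p) = 1$ and $q_p = p$, and at the real place $N = 0$ and $s = 0$, so
+\[
+ \Lambda(1, s) = \Gamma_\R(s) \prod_p \frac{1}{1 - p^{-s}} = \pi^{-s/2} \Gamma(s/2)\, \zeta(s),
+\]
+the completed Riemann $\zeta$-function, whose simple poles at $s = 0, 1$ agree with the theorem below (taking $t = 0$).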
+
+Recall that the kernel of the idelic norm $|\ph|_\A: C_K \to \R^\times_{>0}$ is compact. It is then not hard to see that for every $\chi$, there is some $t \in \R$ such that $\chi \cdot |\ph|_\A^t$ is unitary. Thus, $L(\chi, s)$ converges absolutely on some right half-plane. Observe that
+\[
+ \Lambda(\chi |\ph|_\A^t, s) = \Lambda(\chi, t + s).
+\]
+\begin{thm}[Hecke--Tate]\leavevmode
+ \begin{enumerate}
+ \item $\Lambda(\chi, s)$ has a meromorphic continuation to $\C$, entire unless $\chi = |\ph|_\A^t$ for some $t \in \C$, in which case there are simple poles at $s = 1 - t, -t$.
+ \item There is some function, the \term{global $\varepsilon$-factor},
+ \[
+ \varepsilon(\chi, s) = A B^s
+ \]
+ for some $A \in \C^\times$ and $B \in \R_{>0}$ such that
+ \[
+ \Lambda(\chi, s) = \varepsilon(\chi, s) \Lambda(\chi^{-1}, 1 - s).
+ \]
+ \item There is a factorization
+ \[
+ \varepsilon(\chi, s) = \prod_v \varepsilon_v(\chi_v, \mu_v, \psi_v, s),
+ \]
+ where $\varepsilon_v = 1$ for almost all $v$, and $\varepsilon_v$ depends only on $\chi_v$ and certain auxiliary data $\psi_v, \mu_v$. These are the \term{local $\varepsilon$-factors}.
+ \end{enumerate}
+\end{thm}
+Traditionally, we write
+\[
+ L(\chi, s) = \prod_{\text{finite } v} L(\chi_v, s),
+\]
+and then
+\[
+ \Lambda(\chi, s) = L(\chi, s) L_\infty(\chi, s).
+\]
+However, Tate (and others, especially the automorphic people) use $L(\chi, s)$ for the product over all $v$.
+
+At first, Hecke proved (i) and (ii) using global methods, using certain $\Theta$ functions. Later, Tate proved (i) to (iii) using local-global methods and especially Fourier analysis on $K_v$ and $\A_K$. This generalizes considerably, e.g.\ to automorphic representations.
+
+We can explain some ideas of Hecke's method. We have a decomposition
+\[
+ K_\infty = K \otimes_\Q \R \cong \R^{r_1} \times \C^{r_2} \cong \R^n,
+\]
+and this has a norm $\|\ph\|$ induced by the Euclidean metric on $\R^n$. Let $\Delta \subseteq \mathcal{O}_{K, +}^\times$ be a subgroup of the totally positive units of finite index, which is $\cong \Z^{r_1 + r_2 - 1}$. There is an embedding $\Delta \hookrightarrow K_\infty^{\times, 1}$, which extends to a continuous homomorphism $\Delta \otimes \R \to K_\infty^{\times, 1}$. The key fact is
+\begin{prop}
+ Let $x \in K^\times$. Pick some invariant measure $\d u$ on $\Delta \otimes \R$. Then
+ \[
+ \int_{\Delta \otimes \R} \frac{1}{\|ux\|^{2s}} \;\d u = \frac{\text{stuff}}{|N_{K/\Q}(x)|^{2s/n}},
+ \]
+ where the stuff is some ratio of $\Gamma$ factors and powers of $\pi$ (and depends on $s$).
+\end{prop}
+\begin{ex}
+ Prove this when $K = \Q(\sqrt{d})$ for $d > 0$. Then $\Delta = \bra \varepsilon\ket$ for a totally positive unit $\varepsilon$, and writing $x'$ for the conjugate of $x$,
+ \begin{align*}
+ \text{LHS} &= \int_{-\infty}^\infty \frac{\d t}{((\varepsilon^t x)^2 + (\varepsilon^{-t} x')^2)^{s}},\\
+ \text{RHS} &= \frac{\text{stuff}}{|xx'|^s}.
+ \end{align*}
+\end{ex}
+The consequence of this is that if $\mathfrak{a} \subseteq K$ is a fractional ideal, then
+\begin{align*}
+ \sum_{0 \not= x \in \mathfrak{a}\bmod \Delta} \frac{1}{|N_{K/\Q}(x)|^s} &= \text{stuff} \cdot \sum_{0 \not= x \in \mathfrak{a}\bmod \Delta} \int_{\Delta \otimes \R} \frac{1}{\|u x\|^{ns}}\;\d u\\
+ &= \text{stuff} \cdot \int_{\Delta \otimes \R/\Delta} \left(\sum_{0 \not= x \in \mathfrak{a}} \frac{1}{\|u x\|^{ns}}\right)\;\d u.
+\end{align*}
+The integrand has a name: it is the \emph{Epstein $\zeta$-function} of the lattice $(\mathfrak{a}, \|u\ph\|^2)$. By the Poisson summation formula, we get an analytic continuation and functional equation for the Epstein $\zeta$-function. On the other hand, taking linear combinations of the left-hand sides gives $L(\chi, s)$ for $\chi: \Cl(K) \to \C^\times$. For more general $\chi$, we modify this with some extra factors. When the infinity type is non-trivial, this is actually quite subtle.
+
+Note that if $\chi$ is unramified outside $S$ and ramified at $S$, recall we had a homomorphism $\Theta: I_S \to \C^\times$ sending $\mathfrak{p}_v \mapsto \chi_v(\pi_v)^{-1}$. So
+\[
+ L(\chi, s) = \prod_{\text{finite }v \not \in S} \left(\frac{1}{1 - \Theta(\mathfrak{p}_v)^{-1} (N\mathfrak{p}_v)^{-s}}\right) = \sum_{\substack{\mathfrak{a} \subseteq \mathcal{O}_K\\\mathfrak{a}\text{ prime to }S}} \frac{\Theta(\mathfrak{a})^{-1}}{(N\mathfrak{a})^s}.
+\]
+This was Hecke's original definition of the $L$-series of a Hecke character.
+
+If $K = \Q$ and $\chi: C_\Q \to \C^\times$ is of finite order, then it factors through $C_\Q \to C_\Q/C_\Q^0 \cong \hat{\Z}^\times \to (\Z/N\Z)^\times$, and so $\chi$ is just some Dirichlet character $\varphi: (\Z/N\Z)^\times \to \C^\times$. The associated $L$-functions are just Dirichlet $L$-functions. Indeed, if $p \nmid N$, then
+\[
+ \chi_p(p) = \chi(1, \ldots, 1, p, 1, \ldots) = \chi(p^{-1}, \ldots, p^{-1}, 1, p^{-1}, \ldots) = \varphi(p\bmod N)^{-1}.
+\]
+In other words, $L(\chi, s)$ is the Dirichlet $L$-series of $\varphi^{-1}$ (assuming $N$ is chosen so that $\chi$ ramifies exactly at $v \mid N$).
+
+Tate's method uses local $\varepsilon$-factors $\varepsilon(\chi_v, \mu_v, \psi_v, s)$, where $\psi_v: K_v \to \U(1)$ is a non-trivial additive character, e.g.\ for $v$ finite,
+\[
+ \begin{tikzcd}
+ K_v \ar[r, "\tr"] & \Q_p \ar[r] & \Q_p/\Z_p \cong \Z[1/p] / \Z \ar[r, "e^{2\pi ix}", hook] & \C^\times,
+ \end{tikzcd}
+\]
+which we needed because Fourier transforms take in additive measures, and $\mu_v$ is a Haar measure on $K_v$. The condition for (iii) to hold is that
+\[
+ \prod \psi_v: \A_K \to \U(1)
+\]
+is well-defined and trivial on $K \subseteq \A_K$, and $\mu_\A = \prod \mu_v$ is a well-defined measure on $\A_K$, i.e.\ $\mu_v(\mathcal{O}_v) = 1$ for almost all $v$ and
+\[
+ \int_{\A_K/K} \mu_\A = 1.
+\]
+There exist explicit formulae for these $\varepsilon_v$'s. If $\chi_v$ is unramified, then it is of the form $A_v B_v^s$, and is usually $1$; for ramified finite $v$, they are given by Gauss sums.
+
+\subsection{Non-abelian \texorpdfstring{$L$}{L}-functions}
+Let $K$ be a number field. Then we have a reciprocity isomorphism
+\[
+ \Art_K: C_K/C_K^0 \overset{\sim}{\to} \Gamma_K^{\ab}.
+\]
+If $\chi: C_K/C_K^0 \to \C^\times$ is a Hecke character of finite order, then we can view it as a map $\psi = \chi \circ \Art_K^{-1}: \Gamma_K \to \C^\times$. Then
+\[
+ L(\chi, s) = \prod_{\text{finite }v\text{ unramified}} \frac{1}{1 - \chi_v(\pi_v) q_v^{-s}} = \prod \frac{1}{1 - \psi(\Frob_v) q_v^{-s}},
+\]
+where $\Frob_v \in \Gamma_{K_v}/I_{K_v}$ is the geometric Frobenius, using that $\psi(I_{K_v}) = 1$. Artin generalized this to arbitrary complex representations of $\Gamma_K$.
+
+Let $\rho: \Gamma_K \to \GL_n(\C)$ be a representation. Define
+\[
+ L(\rho, s) = \prod_{\text{finite }v} L(\rho_v, s),
+\]
+where $\rho_v$ is the restriction to the decomposition group at $v$, and depends only on the isomorphism class of $\rho$. We first define these local factors for non-Archimedean fields:
+
+\begin{defi}
+ Let $F$ be local and non-Archimedean. Let $\rho: W_F \to \GL_\C(V)$ be a representation. Then we define
+ \[
+ L(\rho, s) = \det (1 - q^{-s} \rho(\Frob_F)|_{V^{I_F}})^{-1},
+ \]
+ where $V^{I_F}$ is the invariants under $I_F$.
+\end{defi}
+Note that in this section, all representations will be finite-dimensional and continuous for the complex topology (so in the case of $W_F$, we require $\ker \rho$ to be open).
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item If
+ \[
+ 0 \to (\rho', V') \to (\rho, V) \to (\rho'', V'') \to 0
+ \]
+ is exact, then
+ \[
+ L(\rho, s) = L(\rho', s) \cdot L(\rho'', s).
+ \]
+ \item If $E/F$ is finite separable, $\rho: W_E \to \GL_\C(V)$ and $\sigma = \Ind_{W_E}^{W_F} \rho : W_F \to \GL_\C(U)$, then
+ \[
+ L(\rho, s) = L(\sigma, s).
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{enumerate}
+ \item Since $\rho$ has open kernel, we know $\rho(I_F)$ is \emph{finite}, so taking $I_F$-invariants is exact. Thus
+ \[
+ 0 \to (V')^{I_F} \to V^{I_F} \to (V'')^{I_F} \to 0
+ \]
+ is exact, and the result follows from the multiplicativity of $\det$.
+ \item We can write
+ \[
+ U = \{\varphi: W_F \to V : \varphi(gx) = \rho(g) \varphi(x)\text{ for all }g \in W_E, x \in W_F\}.
+ \]
+ where $W_F$ acts by
+ \[
+ \sigma(g)\varphi(x) = \varphi(xg).
+ \]
+ Then we have
+ \[
+ U^{I_F} = \{\varphi: W_F/I_F \to V : \cdots\}.
+ \]
+ Then whenever $\varphi \in U^{I_F}$, $x \in W_F$ and $g \in I_E$, we have
+ \[
+ \rho(g) \varphi(x) = \varphi(gx) = \varphi(x (x^{-1} g x)) = \varphi(x),
+ \]
+ since $x^{-1} g x \in I_F$.
+ So in fact $\varphi$ takes values in $V^{I_E}$. Therefore
+ \[
+ U^{I_F} = \Ind_{W_E/I_E}^{W_F/I_F} V^{I_E}.
+ \]
+ Of course, $W_F /I_F \cong \Z$, which contains $W_E/I_E$ as a subgroup. Moreover,
+ \[
+ \Frob_F^d = \Frob_E,
+ \]
+ where $d = [k_E:k_F]$. We note the following lemma:
+ \begin{lemma}
+ Let $G = \bra g\ket \supseteq H = \bra h = g^d\ket$, $\rho: H \to \GL_\C(V)$ and $\sigma = \Ind_H^G \rho$. Then
+ \[
+ \det (1 - t^d \rho(h)) = \det (1 - t \sigma(g)).
+ \]
+ \end{lemma}
+ \begin{proof}
+ Both sides are multiplicative for exact sequences of representations of $H$. So we can reduce to the case of $\dim V = 1$, where $\rho(h) = \lambda \in \C^\times$. We then check it explicitly.
+ \end{proof}
+ To complete the proof of (ii), take $g = \Frob_F$ and $t = q_F^{-s}$ so that $t^d = q_E^{-s}$.\qedhere
+ \end{enumerate}
+\end{proof}
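+
+The explicit check in the rank-one case can be spelled out: take $f_0$ spanning the copy of $V$ inside the induced representation, and let $f_i = \sigma(g)^i f_0$ for $0 \leq i < d$, so that $\sigma(g) f_i = f_{i + 1}$ for $i < d - 1$, while $\sigma(g) f_{d - 1} = \sigma(h) f_0 = \lambda f_0$. For instance, when $d = 2$,
+\[
+ \sigma(g) =
+ \begin{pmatrix}
+ 0 & \lambda\\
+ 1 & 0
+ \end{pmatrix},\quad
+ \det(1 - t \sigma(g)) = \det
+ \begin{pmatrix}
+ 1 & -\lambda t\\
+ -t & 1
+ \end{pmatrix}
+ = 1 - \lambda t^2 = \det(1 - t^2 \rho(h)),
+\]
+and in general $\det(1 - t \sigma(g)) = 1 - \lambda t^d = \det(1 - t^d \rho(h))$.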
+
+For Archimedean $F$, we define $L(\rho, s)$ in such a way to ensure that (i) and (ii) hold, and if $\dim V = 1$, then
+\[
+ L(\rho, s) = L(\chi, s),
+\]
+where if $\rho: W_{F}^\ab \to \C^\times$, then $\chi$ is the corresponding character of $F^\times$ under the Artin map.
+
+If $F \simeq \C$, then this is rather easy, since every irreducible representation of $W_F \cong \C^\times$ is one-dimensional. We then define $L(\rho, s)$ for $1$-dimensional $\rho$ using $W_F^\ab \cong F^{\times}$, and extend to all $\rho$ by (i). The Jordan--H\"older theorem tells us this is well-defined.
+
+If $F \simeq \R$, then recall that
+\[
+ W_\R = \bra \C^\times, s : s^2 = -1 \in \C^\times, szs^{-1} = \bar{z}\ket.
+\]
+Contained in here is $W_\R^{(1)} = \bra \U(1), s\ket$. Then
+\[
+ W_\R = W_\R^{(1)} \times \R_{>0}^\times.
+\]
+It is then easy to see that the irreducible representations of $W_\R$ are
+\begin{enumerate}
+ \item $1$-dimensional representations of $W_\R$; or
+ \item $2$-dimensional, $\sigma = \Ind_{\C^\times}^{W_\R} \rho$, where $\rho \not= \rho^s: \C^\times \to \C^\times$.
+\end{enumerate}
+In the first case, we define
+\[
+ L(\rho, s) = L(\chi, s)
+\]
+using the Artin map, and in the second case, we define
+\[
+ L(\sigma, s) = L(\rho, s)
+\]
+using (ii).
+
+To see that the properties are satisfied, note that (i) is true by construction, and there is only one case to check for (ii), which is if $\rho = \rho^s$, i.e.
+\[
+ \rho(z) = (z\bar{z})^t.
+\]
+Then $\Ind_{\C^\times}^{W_\R} \rho$ is reducible, and is a sum of characters of $W_\R^{\ab} \cong \R^\times$, namely $x \mapsto |x|^t$ and $x \mapsto \sgn(x) |x|^t = x^{-1}|x|^{t + 1}$. Then (ii) follows from the identity
+\[
+ \Gamma_\R(s) \Gamma_\R(s + 1) = \Gamma_\C(s) = 2 (2\pi)^{-s} \Gamma(s).
+\]
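+Here recall that $\Gamma_\R(s) = \pi^{-s/2} \Gamma(s/2)$, so this is just Legendre's duplication formula $\Gamma(z) \Gamma(z + \frac{1}{2}) = 2^{1 - 2z} \sqrt{\pi}\, \Gamma(2z)$ with $z = \frac{s}{2}$:
+\[
+ \Gamma_\R(s) \Gamma_\R(s + 1) = \pi^{-s - 1/2}\, \Gamma\left(\tfrac{s}{2}\right) \Gamma\left(\tfrac{s + 1}{2}\right) = \pi^{-s - 1/2} \cdot 2^{1 - s} \sqrt{\pi}\, \Gamma(s) = 2 (2\pi)^{-s} \Gamma(s).
+\]
+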
+Now let $K$ be global, and let $\rho: \Gamma_K \to \GL_\C(V)$. For each $v \in \Sigma_K$, choose a place $\bar{v}$ of $\bar{K}$ over $v$. Let $\Gamma_v \cong \Gamma_{K_v}$ be the decomposition group at $\bar{v}$. This contains the inertia group $I_v$, and we have the geometric Frobenius $\Frob_v \in \Gamma_v/I_v$. We define $\rho_v = \rho|_{\Gamma_v}$, and then set
+\begin{align*}
+ L(\rho, s) &= \prod_{v \nmid \infty} L(\rho_v, s) =\prod_{v \nmid \infty} \det (1 - q_v^{-s} \rho(\Frob_v)|_{V^{I_v}})^{-1}\\
+ \Lambda(\rho, s) &= L(\rho, s) L_\infty(\rho, s)\\
+ L_\infty(\rho, s) &= \prod_{v \mid \infty} L(\rho_v, s).
+\end{align*}
+This is well-defined, as the decomposition groups at the various $\bar{v} \mid v$ are conjugate. If $\dim V = 1$, then $\rho = \chi \circ \Art_K^{-1}$ for a finite-order Hecke character $\chi$, and then
+\[
+ L(\rho, s) = L(\chi, s).
+\]
+The facts we had for local factors extend to global statements:
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $L(\rho \oplus \rho', s) = L(\rho, s) L(\rho', s)$.
+ \item If $L/K$ is finite separable and $\rho: \Gamma_L \to \GL_\C(V)$ and $\sigma = \Ind_{\Gamma_L}^{\Gamma_K}(\rho)$, then
+ \[
+ L(\rho, s) = L(\sigma, s).
+ \]
+ \end{enumerate}
+ The same are true for $\Lambda(\rho, s)$.
+\end{prop}
+
+\begin{proof}
+ (i) is clear. For (ii), we saw that if $w \in \Sigma_L$ lies over $v \in \Sigma_K$ and we consider the local extension $L_w/K_v$, then
+ \[
+ L(\rho_w, s) = L(\Ind_{\Gamma_{L_w}}^{\Gamma_{K_v}} \rho_w, s).
+ \]
+ In the global world, we have to take care of the splitting of primes. This boils down to the fact that
+ \[
+ \left.\left(\Ind_{\Gamma_L}^{\Gamma_K} \rho\right)\right|_{\Gamma_{K_v}} = \bigoplus_{w \mid v} \Ind_{\Gamma_{L_w}}^{\Gamma_{K_v}} (\rho|_{\Gamma_{L_w}}).\tag{$*$}
+ \]
+ We fix a valuation $\bar{v}$ of $\bar{K}$ over $v$. Write $\Gamma_{\bar{v}/v}$ for the decomposition group in $\Gamma_K$. Write $\bar{S}$ for the places of $\bar{K}$ over $v$, and $S$ the places of $L$ over $v$.
+
+ The Galois group acts transitively on $\bar{S}$, and we have
+ \[
+ \bar{S} \cong \Gamma_K/\Gamma_{\bar{v}/v}.
+ \]
+ We then have
+ \[
+ S \cong \Gamma_L \backslash \Gamma_K/\Gamma_{\bar{v}/v},
+ \]
+ which is compatible with the obvious map $\bar{S} \to S$.
+
+ For $\bar{w} = g \bar{v}$, we have
+ \[
+ \Gamma_{\bar{w}/v} = g \Gamma_{\bar{v}/v} g^{-1}.
+ \]
+ Conjugating by $g^{-1}$, we can identify this with $\Gamma_{\bar{v}/v}$. Similarly, if $w = \bar{w}|_L$, then this contains
+ \[
+ \Gamma_{\bar{w}/w} = g \Gamma_{\bar{v}/v} g^{-1} \cap \Gamma_L,
+ \]
+ and we can identify this with $\Gamma_{\bar{v}/v} \cap g^{-1} \Gamma_L g$.
+
+ There is a theorem, usually called Mackey's formula, which says that if $H, K \subseteq G$ are two subgroups of finite index, and $\rho: H \to \GL_\C(V)$ is a representation of $H$, then
+ \[
+ (\Ind_H^G V)|_K \cong \bigoplus_{g \in H \backslash G /K} \Ind^K_{K \cap g^{-1}Hg} (^{g^{-1}} V),
+ \]
+ where $^{g^{-1}} V$ is the $K \cap g^{-1}Hg$-representation where $g^{-1}xg$ acts by $\rho(x)$. We then apply this to $G = \Gamma_K, H = \Gamma_L, K = \Gamma_{\bar{v}/v}$.
+\end{proof}
+
+\begin{eg}
+ If $\rho$ is trivial, then
+ \[
+ L(\rho, s) = \prod_v (1 - q_v^{-s})^{-1} = \sum_{\mathfrak{a} \lhd \mathcal{O}_K} \frac{1}{N\mathfrak{a}^s} = \zeta_K(s).
+ \]
+ This is just the Dedekind $\zeta$-function of $K$.
+\end{eg}
+
+\begin{eg}
+ Let $L/K$ be a finite Galois extension with Galois group $G$. Consider the \emph{regular representation} $r_{L/K}$ on $\C[G]$. This decomposes as $\bigoplus \rho_i^{d_i}$, where $\{\rho_i\}$ run over the irreducible representations of $G$ of dimension $d_i$. We also have
+ \[
+ r_{L/K} = \Ind_{\Gamma_L}^{\Gamma_K}(1).
+ \]
+ So by the induction formula, we have
+ \[
+ \zeta_L(s) = L(r_{L/K}, s) = \prod_i L(\rho_i, s)^{d_i}.
+ \]
+\end{eg}
+
+\begin{eg}
+ For example, if $L/K = \Q(\zeta_N)/\Q$, then
+ \[
+ \zeta_{\Q(\zeta_N)}(s) = \prod_{\chi} L(\chi, s),
+ \]
+ where the product runs over all primitive Dirichlet characters mod $M \mid N$. Since $\zeta_{\Q(\zeta_N)}, \zeta_\Q$ have simple poles at $s = 1$, we know that $L(\chi, 1) \not= 0$ if $\chi \not= \chi_0$.
+\end{eg}
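+
+For instance, with $N = 4$ we have $\Q(\zeta_4) = \Q(i)$, and the primitive Dirichlet characters mod $M \mid 4$ are the trivial character and the non-trivial character $\chi_{-4}$ mod $4$. So the formula reads
+\[
+ \zeta_{\Q(i)}(s) = \zeta(s) L(\chi_{-4}, s),\quad \chi_{-4}(n) =
+ \begin{cases}
+ 1 & n \equiv 1 \bmod 4\\
+ -1 & n \equiv 3 \bmod 4
+ \end{cases}.
+\]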
+
+\begin{thm}[Brauer induction theorem]
+ Suppose $\rho: G \to \GL_N(\C)$ is a representation of a finite group. Then there exist subgroups $H_j \subseteq G$, homomorphisms $\chi_j: H_j \to \C^\times$ and integers $m_j \in \Z$ such that
+ \[
+ \tr \rho = \sum_j m_j \tr \Ind_{H_j}^G \chi_j.
+ \]
+ Note that the $m_j$ need not be non-negative. So we cannot quite state this as a statement about representations.
+\end{thm}
+
+\begin{cor}
+ Let $\rho: \Gamma_K \to \GL_N(\C)$. Then there exists finite separable $L_j/K$ and $\chi_j: \Gamma_{L_j} \to \C^\times$ of finite order and $m_j \in \Z$ such that
+ \[
+ L(\rho, s) = \prod_j L(\chi_j, s)^{m_j}.
+ \]
+ In particular, $L(\rho, s)$ has meromorphic continuation to $\C$ and has a functional equation
+ \[
+ \Lambda(\rho, s) = \varepsilon(\rho, s) \Lambda(\tilde{\rho}, 1 - s),
+ \]
+ where
+ \[
+ \varepsilon(\rho, s) = A B^s = \prod \varepsilon(\chi_j, s)^{m_j},
+ \]
+ and $\tilde{\rho}(g) = {}^t \rho(g^{-1})$.
+\end{cor}
+
+\begin{conjecture}[Artin conjecture]
+ If $\rho$ does not contain the trivial representation, then $\Lambda(\rho, s)$ is entire.
+\end{conjecture}
+This is closely related to the global Langlands conjecture.
+
+In general, there is more than one way to write $\rho$ as a sum of virtual induced characters. But when we take the product of the $\varepsilon$ factors, it is always well-defined. We also know that
+\[
+ \varepsilon(\chi_j, s) = \prod \varepsilon_v(\chi_{j, v}, s)
+\]
+is a product of local factors. It turns out these local factors do depend on the choice of decomposition, so if we want to write
+\[
+ \varepsilon(\rho, s) = \prod_v \varepsilon_v(\rho_v, s),
+\]
+we cannot just take $\varepsilon_v(\rho_v, s) = \prod \varepsilon_v(\chi_{j, v}, s)$, as this is not well-defined. However, Langlands proved that there exists a unique factorization of $\varepsilon(\rho, s)$ satisfying certain conditions.
+
+We fix $F$ a non-Archimedean local field, $\chi: F^\times \to \C^\times$ and local $\varepsilon$ factors
+\[
+ \varepsilon(\chi, \psi, \mu),
+\]
+where $\mu$ is a Haar measure on $F$ and $\psi: F \to \U(1)$ is a non-trivial character. Let $n(\psi)$ be the least integer such that $\psi(\pi^n_F \mathcal{O}_F) = 1$. Then
+\[
+ \varepsilon(\chi, \psi, \mu) =
+ \begin{cases}
+ \mu(\mathcal{O}_F) & \chi\text{ unramified}, n(\psi) = 0\\
+ \int_{F^\times} \chi^{-1} \cdot \psi \;\d \mu & \chi\text{ ramified}
+ \end{cases}
+\]
+Since $\chi$ and $\psi$ are locally constant, the integral is actually a sum, which turns out to be finite (this uses the fact that $\chi$ is ramified).
+
+For $a \in F^\times$ and $b > 0$, we have
+\[
+ \varepsilon(\chi, \psi(ax), b \mu) = \chi(a) |a|^{-1} b \varepsilon(\chi, \psi, \mu).
+\]
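+In the ramified case, for example, this follows from the substitution $y = ax$, under which $\d \mu(x) = |a|^{-1} \,\d \mu(y)$:
+\[
+ \int_{F^\times} \chi(x)^{-1} \psi(ax) \; b \,\d \mu(x) = \chi(a) |a|^{-1} b \int_{F^\times} \chi(y)^{-1} \psi(y) \;\d \mu(y) = \chi(a) |a|^{-1} b\, \varepsilon(\chi, \psi, \mu).
+\]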
+\begin{thm}[Langlands--Deligne]
+ There exists a unique system of local constants $\varepsilon(\rho, \psi, \mu)$ for $\rho: W_F \to \GL_\C(V)$ such that
+ \begin{enumerate}
+ \item $\varepsilon$ is multiplicative in exact sequences, so it is well-defined for virtual representations.
+ \item $\varepsilon(\rho, \psi, b\mu) = b^{\dim V} \varepsilon(\rho, \psi, \mu)$.
+ \item If $E/F$ is finite separable, and $\rho$ is a virtual representation of $W_F$ of degree $0$ and $\sigma = \Ind_{W_E}^{W_F} \rho$, then
+ \[
+ \varepsilon(\sigma, \psi, \mu) = \varepsilon(\rho, \psi \circ \tr_{E/F}, \mu').
+ \]
+ Note that this is independent of the choice of $\mu$ and $\mu'$, since ``$\dim V = 0$''.
+ \item If $\dim \rho = 1$, then $\varepsilon(\rho)$ is the usual abelian $\varepsilon(\chi)$.
+ \end{enumerate}
+\end{thm}
+
+\section{\tph{$\ell$}{l}{ℓ}-adic representations}
+In this section, we shall discuss $\ell$-adic representations of the Galois group, which often naturally arise from geometric situations. At the end of the section, we will relate these to \emph{complex} representations of the \emph{Weil--Langlands group}, which will be what enters the Langlands correspondence.
+
+\begin{defi}[$\ell$-adic representation]\index{$\ell$-adic representation}
+ Let $G$ be a topological group. An \emph{$\ell$-adic representation} consists of the following data:
+ \begin{itemize}
+ \item A finite extension $E/\Q_{\ell}$;
+ \item An $E$-vector space $V$; and
+ \item A continuous homomorphism $\rho: G \to \GL_E(V) \cong \GL_n(E)$.
+ \end{itemize}
+\end{defi}
+
+In this section, we will always take $G = \Gamma_F$ or $W_F$, where $F/\Q_p$ is a finite extension with $p \not= \ell$.
+\begin{eg}
+ The \term{cyclotomic character} $\chi_{\mathrm{cycl}}: \Gamma_K \to \Z_\ell^\times \subseteq \Q_\ell^\times$ is defined by the relation
+ \[
+ \zeta^{\chi_{\mathrm{cycl}}(\gamma)} = \gamma(\zeta)
+ \]
+ for all $\zeta \in \bar{K}$ with $\zeta^{\ell^n} = 1$ and $\gamma \in \Gamma_K$. This is a one-dimensional $\ell$-adic representation.
+\end{eg}
+
+\begin{eg}
+ Let $E/K$ be an elliptic curve. We define the \term{Tate module} by
+ \[
+ T_\ell E = \varprojlim_n E[\ell^n](\bar{K}),\quad V_\ell E = T_\ell E \otimes_{\Z_\ell} \Q_{\ell}.
+ \]
+ Then $V_\ell E$ is a 2-dimensional $\ell$-adic representation of $\Gamma_K$ over $\Q_\ell$.
+\end{eg}
+
+\begin{eg}
+ More generally, if $X/K$ is any algebraic variety, then
+ \[
+ V = H^i_{\text{\'et}}(X \otimes_K \bar{K}, \Q_\ell)
+ \]
+ is an $\ell$-adic representation of $\Gamma_K$.
+\end{eg}
+
+We will actually focus on the representations of the Weil group $W_F$ instead of the full Galois group $G_F$. The reason is that every representation of the Galois group restricts to one of the Weil group, and since the Weil group is dense, no information is lost when doing so. On the other hand, the Weil group can have more representations, and we seek to be slightly more general.
+
+Another reason to talk about the Weil group is that local class field theory says there is an isomorphism
+\[
+ \Art_F: W_F^{\ab} \cong F^\times.
+\]
+So one-dimensional representations of $W_F$ are the same as one-dimensional representations of $F^\times$.% $I_F$ maps to $\mathcal{O}_F^\times \subseteq F^\times$.
+
+For example, there is an absolute value map $F^\times \to \Q^\times$, inducing a representation $\omega: W_F \to \Q^\times$. Under the Artin map, this sends the geometric Frobenius to $\frac{1}{q}$. In fact, $\omega$ is the restriction of the cyclotomic character to $W_F$.
+%
+%There is also a valuation map $F^\times \to \Q^\times$, which gives rise to $\omega: W_F \to \Q^\times$. So if $\gamma \in W_F$ corresponds to the geometric Frobenius, then $\omega(\gamma) = \frac{1}{q}$. So $\omega$ is the restriction of the cyclotomic character to $W_F$.
+
+Recall that we previously defined the tame character. Pick a sequence $\pi_n \in \bar{F}$ by $\pi_0 = \pi$ and $\pi_{n + 1}^\ell = \pi_n$. We defined, for any $\gamma \in \Gamma_F$,
+\[
+ t_\ell(\gamma) = \left(\frac{\gamma(\pi_n)}{\pi_n}\right)_n \in \varprojlim \mu_{\ell^n} (\bar{F}) = \Z_\ell(1).
+\]
+When we restrict to the inertia group, this is a homomorphism, independent of the choice of $(\pi_n)$, which we call the \term{tame character}. In fact, this map is $\Gamma_F$-equivariant, where $\Gamma_F$ acts on $I_F$ by conjugation. In general, this still defines a function $\Gamma_F \to \Z_\ell(1)$, which depends on the choice of $\pi_n$.
+
+\begin{eg}
+ Continuing the previous notation, where $\pi$ is a uniformizer of $F$, we let $T_n$ be the $\ell^n$-torsion subgroup of $\bar{F}^\times/\bra \pi\ket$. Then
+ \[
+ T_n = \bra \zeta_{\ell^n}, \pi_n\ket / \bra \pi_n^{\ell^n}\ket \cong (\Z/\ell^n \Z)^2.
+ \]
+ The $\ell$th power map $T_n \to T_{n - 1}$ is surjective, and we can form the inverse limit $T$, which is then isomorphic to $\Z_{\ell}^2$. This then gives a $2$-dimensional $\ell$-adic representation of $\Gamma_F$.
+
+ In terms of the basis $(\zeta_{\ell^n}), (\pi_n)$, the representation is given by
+ \[
+ \gamma \mapsto
+ \begin{pmatrix}
+ \chi_{\mathrm{cycl}}(\gamma) & t_\ell(\gamma)\\
+ 0 & 1
+ \end{pmatrix}.
+ \]
+ Notice that the image of $I_F$ is $\begin{pmatrix} 1 & \Z_\ell \\0 & 1\end{pmatrix}$. In particular, it is infinite. This cannot happen for one-dimensional representations.
+% In fact, this is the Galois representation on the Tate module of an elliptic curve over $F$ with multiplicative reduction. % Tate module A(\bar{F})= \bar{F}^\times /\bra \pi\ket
+\end{eg}
+Perhaps surprisingly, the category of $\ell$-adic representations of $W_F$ over $E$ does not depend on the topology of $E$, but only on $E$ as an abstract field. In particular, if we take $E = \bar{\Q}_\ell$, then after taking care of the slight annoyance that it is infinite over $\Q_{\ell}$, the category of representations over $\bar{\Q}_{\ell}$ does not depend on $\ell$!
+
+To prove this, we make use of the following understanding of $\ell$-adic representations.
+\begin{thm}[Grothendieck's monodromy theorem]\index{Grothendieck monodromy theorem}
+ Fix an isomorphism $\Z_{\ell}(1) \cong \Z_\ell$. In other words, fix a system $(\zeta_{\ell^n})$ such that $\zeta_{\ell^n}^\ell = \zeta_{\ell^{n - 1}}$. We then view $t_\ell$ as a homomorphism $I_F \to \Z_\ell$ via this identification.
+
+ Let $\rho: W_F \to \GL(V)$ be an $\ell$-adic representation over $E$. Then there exists an open subgroup $I' \subseteq I_F$ and a nilpotent $N \in \End_E V$ such that for all $\gamma \in I'$,
+ \[
+ \rho(\gamma) = \exp (t_\ell(\gamma) N) = \sum_{j = 0}^\infty \frac{(t_\ell(\gamma) N)^j}{j!}.
+ \]
+ In particular, $\rho(I')$ is unipotent and abelian.
+\end{thm}
+In our previous example, $N = \begin{pmatrix}0 & 1\\0 & 0\end{pmatrix}$.
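+Indeed, since $\ell \not= p$, the $\ell$-power roots of unity generate unramified extensions of $F$, so $\chi_{\mathrm{cycl}}$ is trivial on $I_F$, and for $\gamma \in I_F$ the matrix of that example becomes
+\[
+ \rho_\ell(\gamma) =
+ \begin{pmatrix}
+ 1 & t_\ell(\gamma)\\
+ 0 & 1
+ \end{pmatrix} = 1 + t_\ell(\gamma) N = \exp(t_\ell(\gamma) N),
+\]
+since $N^2 = 0$; here we can take $I' = I_F$.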
+
+\begin{proof}
+ If $\rho(I_F)$ is finite, let $I' = \ker \rho \cap I_F$ and $N = 0$, and we are done.
+
+ Otherwise, first observe that if $G$ is any compact group and $\rho: G \to \GL(V)$ is an $\ell$-adic representation, then $V$ contains a $G$-invariant lattice, i.e.\ a finitely-generated $\mathcal{O}_E$-submodule of maximal rank. To see this, pick any lattice $L_0 \subseteq V$. Then $\rho(G) L_0$ is compact, so generates a lattice which is $G$-invariant.
+
+ Thus, pick a basis of an $I_F$-invariant lattice. Then $\rho: W_F \to \GL_n(E)$ restricts to a map $I_F \to \GL_n(\mathcal{O}_E)$.
+
+ We seek to understand this group $\GL_n(\mathcal{O}_E)$ better. We define a filtration on $\GL_n(\mathcal{O}_E)$ by
+ \[
+ G_k = \{g \in \GL_n(\mathcal{O}_E) : g \equiv I \bmod \ell^k \},
+ \]
+ which is an open subgroup of $\GL_n(\mathcal{O}_E)$. Note that for $k \geq 1$, there is an isomorphism
+ \[
+ G_k/G_{k + 1} \to M_n(\mathcal{O}_E/\ell \mathcal{O}_E),
+ \]
+ sending $1 + \ell^k g$ to $g$. Since the latter is an $\ell$-group, we know $G_1$ is a pro-$\ell$ group. Also, by definition, $(G_k)^\ell \subseteq G_{k + 1}$.
+
+ Since $\rho^{-1}(G_2)$ is open, we can pick an open subgroup $I' \subseteq I_F$ such that $\rho(I') \subseteq G_2$. Recall that $t_\ell(I_F)$ is the maximal pro-$\ell$ quotient of $I_F$, because the tame characters give an isomorphism
+ \[
+ I_F/P_F \cong \prod_{\ell' \not= p} \Z_{\ell'}(1).
+ \]
+ So $\rho|_{I'}: I' \to G_2$ factors as
+ \[
+ \begin{tikzcd}
+ I' \ar[r, "t_\ell", two heads] & t_\ell(I') = \ell^s \Z_\ell \ar[r, "\nu"] & G_2
+ \end{tikzcd},
+ \]
+ using the assumption that $\rho(I_F)$ is infinite.
+
+ Now for $r \geq s$, let $T_r = \nu(\ell^r) = T_s^{\ell^{r - s}} \in G_{r + 2 - s}$. For $r$ sufficiently large,
+ \[
+ N_r = \log (T_r) = \sum_{m \geq 1} (-1)^{m - 1} \frac{(T_r - 1)^m}{m}
+ \]
+ converges $\ell$-adically, and then $T_r = \exp N_r$.
+
+ We claim that $N_r$ is nilpotent. To see this, if we enlarge $E$, we may assume that all the eigenvalues of $N_r$ are in $E$. For $\delta \in W_F$ and $\gamma \in I_F$, we know
+ \[
+ t_\ell(\delta \gamma \delta^{-1}) = \omega(\delta) t_\ell(\gamma).
+ \]
+ So
+ \[
+ \rho(\delta \gamma \delta^{-1}) = \rho(\gamma)^{\omega(\delta)}
+ \]
+ for all $\gamma \in I'$. So
+ \[
+ \rho(\delta) N_r \rho(\delta)^{-1} = \omega(\delta) N_r.
+ \]
+ Choose $\delta$ lifting $\varphi_q$, so that $\omega(\delta) = q$. Then if $v$ is an eigenvector for $N_r$ with eigenvalue $\lambda$, then $\rho(\delta)v$ is an eigenvector of eigenvalue $q^{-1}\lambda$. Since $N_r$ has finitely many eigenvalues, but we can repeat this as many times as we like, it must be the case that $\lambda = 0$.
+
+ Then take
+ \[
+ N = \frac{1}{\ell^r} N_r
+ \]
+ for $r$ sufficiently large, and this works.
+\end{proof}
+There is a slight unpleasantness in this theorem that we fixed a choice of $\ell^n$-th roots of unity. To avoid this, we can say there exists a nilpotent $N: V(1) = V \otimes_{\Z_\ell} \Z_{\ell}(1) \to V$ such that for all $\gamma \in I'$, we have
+\[
+ \rho(\gamma) = \exp (t_\ell(\gamma) N).
+\]
+
+Grothendieck's monodromy theorem motivates the definition of the Weil--Deligne group, whose category of representations is equivalent to the category of $\ell$-adic representations. It is actually easier to state what the representations of the Weil--Deligne group are. One can then write down a definition of the Weil--Deligne group as a semi-direct product if one wishes.
+
+\begin{defi}[Weil--Deligne representation]\index{Weil--Deligne representation}
+ A \emph{Weil--Deligne representation} of $W_F$ over a field $E$ of characteristic $0$ is a pair $(\rho, N)$, where
+ \begin{itemize}
+ \item $\rho: W_F \to \GL_E(V)$ is a finite-dimensional representation of $W_F$ over $E$ with open kernel; and
+ \item $N \in \End_E(V)$ is nilpotent such that for all $\gamma \in W_F$, we have
+ \[
+ \rho(\gamma) N \rho(\gamma)^{-1} = \omega(\gamma) N.
+ \]
+ \end{itemize}
+\end{defi}
+Note that giving $N$ is the same as giving a unipotent $T = \exp N$, which is the same as giving an algebraic representation of $\G_a$. So a Weil--Deligne representation is a representation of a suitable semi-direct product $W_F \ltimes \G_a$.
+
+Weil--Deligne representations form a symmetric monoidal category in the obvious way, with
+\[
+ (\rho, N ) \otimes (\rho', N') = (\rho \otimes \rho', N \otimes 1 + 1 \otimes N).
+\]
+Duals are defined similarly.
+
+\begin{thm}
+ Let $E/\Q_\ell$ be finite (and $\ell \not= p$). Then there exists an equivalence of (symmetric monoidal) categories
+ \[
+ \left\{\parbox{4.6cm}{\centering $\ell$-adic representations\\ of $W_F$ over $E$}\right\} \longleftrightarrow \left\{\parbox{4.6cm}{\centering Weil--Deligne representations of $W_F$ over $E$}\right\}
+ \]
+\end{thm}
+Note that the left-hand side is pretty topological, while the right-hand side is almost purely algebraic, apart from the requirement that $\rho$ has open kernel. In particular, the topology of $E$ is not used.
+
+\begin{proof}
+ We have already fixed an isomorphism $\Z_\ell(1) \cong \Z_\ell$. We also pick a lift $\Phi \in W_F$ of the geometric Frobenius. In other words, we are picking a splitting
+ \[
+ W_F = \bra \Phi\ket \ltimes I_F.
+ \]
+ The equivalence will take an $\ell$-adic representation $\rho_\ell$ to the Weil--Deligne representation $(\rho, N)$ on the same vector space such that
+ \[
+ \rho_\ell(\Phi^m \gamma) = \rho (\Phi^m \gamma) \exp t_\ell(\gamma) N\tag{$*$}
+ \]
+ for all $m \in \Z$ and $\gamma \in I_F$.
+
+ To check that this ``works'', we first look at the right-to-left direction. Suppose we have a Weil--Deligne representation $(\rho, N)$ on $V$. We then define $\rho_\ell: W_F \to \Aut_E(V)$ by $(*)$. Since $\rho$ has open kernel, it is continuous. Since $t_\ell$ is also continuous, we know $\rho_\ell$ is continuous. To see that $\rho_\ell$ is a homomorphism, suppose
+ \[
+ \Phi^m \gamma \cdot \Phi^n \delta = \Phi^{m + n} \gamma' \delta
+ \]
+ where $\gamma, \delta \in I_F$ and
+ \[
+ \gamma' = \Phi^{-n} \gamma \Phi^n.
+ \]
+ Then
+ \begin{align*}
+ \exp t_\ell(\gamma) N \cdot \rho(\Phi^n \delta) &= \sum_{j \geq 0} \frac{1}{j!} t_\ell(\gamma)^j N^j \rho(\Phi^n \delta)\\
+ &= \sum_{j \geq 0} \frac{1}{j!} t_\ell(\gamma)^j q^{nj} \rho(\Phi^n \delta) N^j\\
+ &= \rho (\Phi^n \delta) \exp (q^n t_\ell(\gamma) N).
+ \end{align*}
+ But
+ \[
+ t_\ell(\gamma') = t_\ell(\Phi^{-n} \gamma \Phi^n) = \omega(\Phi^{-n}) t_\ell(\gamma) = q^n t_\ell(\gamma).
+ \]
+ So we know that
+ \[
+ \rho_\ell(\Phi^m \gamma) \rho_\ell(\Phi^n \delta) = \rho_\ell(\Phi^{m + n} \gamma' \delta).
+ \]
+ Notice that if $\gamma \in I_F \cap \ker \rho$, then $\rho_\ell(\gamma) = \exp t_\ell(\gamma) N$. So $N$ is the nilpotent endomorphism occurring in the Grothendieck theorem. % this shows that N is unique.
+
+ Conversely, given an $\ell$-adic representation $\rho_\ell$, let $N \in \End_EV$ be given by the monodromy theorem. We then define $\rho$ by $(*)$. Then the same calculation shows that $(\rho, N)$ is a Weil--Deligne representation, and if $I' \subseteq I_F$ is the open subgroup occurring in the theorem, then $\rho_\ell(\gamma) = \exp t_\ell(\gamma) N$ for all $\gamma \in I'$. So by $(*)$, we know $\rho(I') = \{1\}$, and so $\rho$ has open kernel.
+\end{proof}
+This equivalence depends on two choices --- the isomorphism $\Z_\ell(1) \cong \Z_\ell$ and also on the choice of $\Phi$. It is not hard to check that up to natural isomorphisms, the equivalence does not depend on the choices.
+
+We can similarly talk about representations over $\bar{\Q}_\ell$, instead of some finite extension $E$. Note that if we have a continuous homomorphism $\rho: W_F \to \GL_n(\bar{\Q}_\ell)$, then there exists a finite $E/\Q_\ell$ such that $\rho$ factors through $\GL_n(E)$.
+
+Indeed, $\rho(I_F) \subseteq \GL_n(\bar{\Q}_\ell)$ is compact, since it is a continuous image of a compact group. So it is a complete metric space. Moreover, the set of finite extensions of $\Q_\ell$ inside $\bar{\Q}_\ell$ is countable (by Krasner's lemma). So by the Baire category theorem, $\rho(I_F)$ is contained in some $\GL_n(E)$, and of course, $\rho(\Phi)$ is contained in some $\GL_n(E')$; enlarging $E$, we may assume $E = E'$.
+
+Recalling that a Weil--Deligne representation over $E$ only depends on $E$ as a field, and $\bar{\Q}_\ell \cong \bar{\Q}_{\ell'}$ for any $\ell, \ell'$, we know that
+\begin{thm}
+ Let $\ell, \ell' \not= p$. Then the category of $\bar{\Q}_\ell$ representations of $W_F$ is equivalent to the category of $\bar{\Q}_{\ell'}$ representations of $W_F$.
+\end{thm}
+
+Conjecturally, $\ell$-adic representations coming from algebraic geometry have semi-simple Frobenius. This notion is captured by the following proposition/definition.
+\begin{prop}
+ Suppose $\rho_\ell$ is an $\ell$-adic representation corresponding to a Weil--Deligne representation $(\rho, N)$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $\rho_\ell(\Phi)$ is semi-simple (where $\Phi$ is a lift of $\Frob_q$).
+ \item $\rho_\ell(\gamma)$ is semi-simple for all $\gamma \in W_F \setminus I_F$.
+ \item $\rho$ is semi-simple.
+ \item $\rho(\Phi)$ is semi-simple.
+ \end{enumerate}
+ In this case, we say $\rho_\ell$ and $(\rho, N)$ are \term{$F$-semisimple} (where $F$ refers to \emph{Frobenius}).
+\end{prop}
+\begin{proof}
+ Recall that $W_F \cong \bra \Phi\ket \ltimes I_F$, and $\rho(I_F)$ is finite. So $\rho|_{I_F}$ is always semi-simple, and thus (iii) and (iv) are equivalent.
+
+ Moreover, since $\rho_\ell(\Phi) = \rho(\Phi)$, we know (i) and (iv) are equivalent. Finally, $\rho_\ell(\Phi)$ is semi-simple iff $\rho_\ell(\Phi^n)$ is semi-simple for all $n \not= 0$. Then this is equivalent to (ii), since the equivalence before does not depend on the choice of $\Phi$.
+\end{proof}
+
+\begin{eg}
+ The Tate module of an elliptic curve over $F$ with multiplicative reduction (as in the earlier example) is not semi-simple, since it has matrix
+ \[
+ \rho_\ell(\gamma) =
+ \begin{pmatrix}
+ \omega(\gamma) & t_\ell(\gamma)\\
+ 0 & 1
+ \end{pmatrix}.
+ \]
+ However, it is $F$-semisimple, since
+ \[
+ \rho(\gamma) =
+ \begin{pmatrix}
+ \omega(\gamma) & 0\\
+ 0 & 1
+ \end{pmatrix},\quad N =
+ \begin{pmatrix}
+ 0 & 1\\
+ 0 & 0
+ \end{pmatrix}.
+ \]
+\end{eg}
+
+It turns out we can classify all the indecomposable and $F$-semisimple Weil--Deligne representations. In the case of vector spaces, if a matrix $N$ acts nilpotently on a vector space $V$, then the Jordan normal form theorem says there is a basis of $V$ in which $N$ takes a particularly nice form, namely the only non-zero entries lie in positions $(i, i + 1)$, and every entry is either $0$ or $1$. In general, we have the following result:
+
+%Let $V$ be an object in an abelian category, and $N \in \End(V)$ nilpotent with $N^{m + 1} = 0$, say. Then there are two filtrations we can put on $V$, namely
+%\[
+% K_\Cdot = \ker N^{\Cdot + 1},\quad I^{\Cdot} = \im N^{\Cdot}.
+%\]
+%
+%and is captured by the following theorem:
+%\begin{thm}
+% Let $V$ be an object in an abelian category, $N \in \End(V)$ with $N^{m + 1} = 0$ for some $m \geq 0$. Then there exists a unique filtration
+% \[
+% 0 = M_{-m - 1} \subseteq M_{-m} \subseteq \cdots \subseteq M_0 \subseteq \cdots \subseteq M_m = V
+% \]
+% by subobjects such that
+% \begin{enumerate}
+% \item $N(M_j) = M_{j - 2}$ (where $M_j = V$ if $j \geq m$, and $M_j = 0$ if $j < -m$). Thus, $N$ induces a map
+% \[
+% \bar{N}: \gr_j^M V = \frac{M_j}{M_{j - 1}} \to \gr_{j - 2}^M V.
+% \]
+% \item For all $r \geq 0$, $\bar{N}^r: \gr_r^M V \to \gr_{-r}^M V$ is an isomorphism.
+% \end{enumerate}
+%\end{thm}
+%The proof of the theorem is straightforward, but the statement of the theorem deserves a bit of explanation.
+
+\begin{thm}[Jordan normal form]
+ If $V$ is semi-simple and $N \in \End(V)$ is nilpotent with $N^{m + 1} = 0$, then there exist subobjects $P_0, \ldots, P_m \subseteq V$ (not unique as subobjects, but unique up to isomorphism), such that $N^r: P_r \to N^r P_r$ is an isomorphism, $N^{r + 1} P_r = 0$, and
+ \[
+ V = \bigoplus_{r = 0}^m P_r \oplus N P_r \oplus \cdots \oplus N^r P_r = \bigoplus_{r = 0}^m P_r \otimes_\Z \frac{\Z[N]}{(N^{r + 1})}.
+ \]
+\end{thm}
+For vector spaces, this is just the Jordan normal form for nilpotent matrices.
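+For example, if $V = \C^3$ with $N e_1 = e_2$ and $N e_2 = N e_3 = 0$ (one Jordan block of size $2$ and one of size $1$, so $N^2 = 0$ and $m = 1$), then we can take $P_1 = \bra e_1 \ket$ and $P_0 = \bra e_3 \ket$, giving
+\[
+ V = P_0 \oplus (P_1 \oplus N P_1).
+\]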
+%
+%\begin{proof}[Proof of theorem]
+% We induct on $m$. Then $m = 0$ case is trivial. If $M$ exists, satisfying (i) and (ii), then $N^m$ induces an isomorphism
+% \[
+% N^m: \gr_m^M = V/M_{m - 1} \overset{\sim}{\longrightarrow} \gr_{-m}^M = M_{-m}. \] So we must have
+% \[
+% M_{m - 1} = \ker N^m,\quad M_{-m} = \im N^m.
+% \]
+% So we define $M_{m - 1}, M_{-m}$ to be these. Let $V' = M_{m - 1}/M_{-m}$. Then there is an induced endomorphism $n' \in \End V'$ and $(N')^m = 0$.
+%
+% Thus, by induction, there is a monodromy filtration
+% \[
+% M'_{-m} = 0 \subseteq M'_{-m + 1} \subseteq \cdots \subseteq M'_{m - 1} = V'
+% \]
+% on $V'$. The only choice for $M_i$ on $V$ is to define
+% \[
+% M_j = \pi^{-1}(M_j')
+% \]
+% for $\pi: M_{m - 1} \to V'$ the projection.
+%
+% Having done this, we will have (i) except for perhaps $j = m$ or $j = -m + 1$. To do this, we need to know what $M_{m - 2}$ is, which is
+% \[
+% M_{m - 2} = \pi^{-1}(\ker (N')^{m - 1}) = \ker N^m \cap (N^{m - 1})^{-1} (\im N^m),
+% \]
+% using that $\im N^m = \ker \pi$. So $M_{m - 2} \supseteq \im N$. To check the other one, we use
+% \begin{multline*}
+% M_{-m + 1} = \pi^{-1}(\im(N^r)^{m- 1}) = \ker \pi + N^{m - 1}(M_{m- 1}) \\
+% = \im N^m + N^{m - 1}(M_{n - 1}) \subseteq \ker N.
+% \end{multline*}
+% So indeed $N(M_{-m + 1}) = 0$. % check (ii) ?
+%\end{proof}
+%
+%The picture is:
+%
+%% insert picture
+%
+%We have an increasing kernel filtration $K_{\Cdot} = \ker N^{\Cdot + 1}$, and a decreasing image filtration $I^\Cdot = \im N^\Cdot$. In fact, we have
+%\[
+% M_j = \sum_{a - b = j} K_a \cap I^b,
+%\]
+%called the \emph{convolution filtration}.
+\begin{proof}
+ If we had the desired decomposition, then heuristically, we want to set $P_0$ to be the things killed by $N$ but not in the image of $N$. Thus, using semisimplicity, we pick $P_0$ to be a splitting
+ \[
+ \ker N = (\ker N \cap \im N) \oplus P_0.
+ \]
+ Similarly, we can pick $P_1$ by
+ \[
+ \ker N^2 = (\ker N + (\im N \cap \ker N^2)) \oplus P_1.
+ \]
+ One then checks that this works.
+% If $V$ is semi-simple, then we can split
+% \begin{align*}
+%
+% \ker N^2 &= (\ker N + \im N \cap \ker N^2) \oplus P_1
+% \end{align*}
+% etc. % fill this in.
+\end{proof}
+
+We will apply this when $V$ is a representation $\rho: W_F \to \GL(V)$ and $N$ is the nilpotent endomorphism of a Weil--Deligne representation. Recall that we had
+\[
+ \rho(\gamma) N \rho(\gamma)^{-1} = \omega(\gamma) N,
+\]
+so $N$ is a map $V \to V \otimes \omega^{-1}$, rather than an endomorphism of $V$. Thankfully, the above result still holds (note that $V \otimes \omega^{-1}$ is still the same vector space, but with a different action of the Weil--Deligne group).
+
+%We still get the monodromy filtration $M_{\Cdot}$, but now
+%\[
+% \bar{N}^r: \gr_r^M \to \gr_{-r}^M \otimes \omega^{-r}.
+%\]
+\begin{prop}
+ Let $(\rho, N)$ be a Weil--Deligne representation.
+ \begin{enumerate}
+ \item $(\rho, N)$ is irreducible iff $\rho$ is irreducible and $N = 0$.
+ \item $(\rho, N)$ is indecomposable and $F$-semisimple iff
+ \[
+ (\rho, N) = (\sigma, 0) \otimes \sp(n),
+ \]
+ where $\sigma$ is an irreducible representation of $W_F$ and $\sp(n) \cong E^n$ is the representation
+ \[
+ \rho = \diag(\omega^{n - 1}, \ldots, \omega, 1),\quad N =
+ \begin{pmatrix}
+ 0 & 1\\
+ & \ddots & \ddots\\
+ & & 0 & 1\\
+ & & & 0
+ \end{pmatrix}
+ \]
+ \end{enumerate}
+\end{prop}
+
+\begin{eg}
+ If
+ \[
+ \rho =
+ \begin{pmatrix}
+ \omega \\
+ \omega & \omega\\
+ & & 1
+ \end{pmatrix}, N=
+ \begin{pmatrix}
+ 0 & 0 & 0\\
+ 0 & 0 & 1\\
+ 0 & 0 & 0
+ \end{pmatrix},
+ \]
+ then this is an indecomposable Weil--Deligne representation not of the above form.
+\end{eg}
+
+\begin{proof}
+ (i) is obvious.
+
+ For (ii), we first prove $(\Leftarrow)$. If $(\rho, N) = (\sigma, 0) \otimes \sp(n)$, then $F$-semisimplicity is clear, and we have to check that it is indecomposable. Observe that the kernel of $N$ is still a representation of $W_F$. Writing $V^{N = 0}$ for the kernel of $N$ in $V$, we note that $V^{N = 0} = \sigma \otimes \omega^{n - 1}$, which is irreducible. Suppose that
+ \[
+ (\rho, N) = U_1 \oplus U_2.
+ \]
+ Then for each $i$, we must have $U_i^{N = 0} = 0 $ or $V^{N = 0}$. We may wlog assume $U_1^{N = 0} = 0$. Since $N$ is nilpotent, any non-zero subrepresentation contains a non-zero element killed by $N$, so this forces $U_1 = 0$. So we are done.
+
+ Conversely, if $(\rho, N, V)$ is $F$-semisimple and indecomposable, then $V$ is a representation of $W_F$ which is semi-simple and $N: V \to V \otimes \omega^{-1}$. By Jordan normal form, we must have
+ \[
+ V = U \oplus NU \oplus \cdots \oplus N^r U
+ \]
+ with $N^{r + 1} = 0$, and $U$ is irreducible. So $V = (\sigma, 0) \otimes \sp(r + 1)$.
+\end{proof}
+
+Given this classification result, when working over complex representations, the representation theory of $\SU(2)$ lets us capture the $N$ part of an $F$-semisimple Weil--Deligne representation via the following group:
+\begin{defi}[Weil--Langlands group]\index{Langlands group}\index{Weil--Langlands group}
+ We define the \emph{(Weil--)Langlands group} to be
+ \[
+ \mathcal{L}_F = W_F \times \SU(2).
+ \]
+\end{defi}
+A representation of $\mathcal{L}_F$ is a continuous action on a finite-dimensional vector space (thus, the restriction to $W_F$ has open kernel).
+
+\begin{thm}
+ There exists a bijection between $F$-semisimple Weil--Deligne representations over $\C$ and semi-simple representations of $\mathcal{L}_F$, compatible with tensor products, duals, dimension, etc. In this correspondence:
+ \begin{itemize}
+ \item The representations $\rho$ of $\mathcal{L}_F$ that factor through $W_F$ correspond to the Weil--Deligne representations $(\rho, 0)$.
+
+ \item More generally, simple $\mathcal{L}_F$ representations $\sigma \otimes (\Sym^{n - 1} \C^2)$ correspond to the Weil--Deligne representation $(\sigma \otimes \omega^{(n - 1)/2}, 0) \otimes \sp(n)$.\fakeqed
+ \end{itemize}
+\end{thm}
+If one sits down and checks the theorem, then one sees that the twist in the second part is required to ensure compatibility with tensor products.
+
+Of course, the ($F$-semisimple) Weil--Deligne representations over $\C$ are in bijection with those over $\bar{\Q}_\ell$, using an isomorphism $\bar{\Q}_\ell \cong \C$.
+
+\section{The Langlands correspondence}
+Local class field theory says we have an isomorphism
+\[
+ W_F^{ab} \cong F^\times.
+\]
+If we want to state this in terms of the full Weil group, we can talk about the one-dimensional representations of $W_F$, and write local class field theory as a correspondence
+\[
+ \left\{\parbox{4.6cm}{\centering characters of $\GL_1(F)$}\vphantom{\parbox{4.6cm}{\centering $1$-dimensional representations\\ of $W_F$}}\right\} \longleftrightarrow \left\{\parbox{4.6cm}{\centering $1$-dimensional representations\\ of $W_F$}\right\}
+\]
+
+The Langlands correspondence aims to understand the representations of $\GL_n(F)$, and it turns out this corresponds to $n$-dimensional representations of $\mathcal{L}_F$. That is, if we put enough adjectives in front of these words.
+
+%The natural question is then, what are the $n$-dimensional representations of $W_F$? A good guess would be the representations of $\GL_n(F)$.
+%
+%References:
+%\begin{enumerate}
+% \item Kudla: Motives, volume 2
+% \item Prasad--Raghuram
+% \item Cartier: article in Corvallis volume 1
+%\end{enumerate}
+
+\subsection{Representations of groups}
+The adjectives we need are fairly general. The group $\GL_n(F)$ contains a profinite open subgroup $\GL_n(\mathcal{O}_F)$. The general theory applies to any topological group with a profinite open subgroup $K$, with $G/K$ countable.
+
+\begin{defi}[Smooth representation]\index{smooth representation}
+ A \emph{smooth representation} of $G$ is a continuous representation of $G$ over $\C$, where $\C$ is given the discrete topology. That is, it is a pair $(\pi, V)$ where $V$ is a complex vector space and $\pi: G \to \GL_\C(V)$ a homomorphism such that for every $v \in V$, the stabilizer of $v$ in $G$ is open.
+\end{defi}
+Note that we can replace $\C$ with any field, but we like $\C$. Typically, $V$ is an \emph{infinite-dimensional} vector space. To retain some sanity, we often desire the following property:
+
+\begin{defi}[Admissible representation]\index{admissible representation}
+ We say $(\pi, V)$ is \emph{admissible} if for every open compact subgroup $K \subseteq G$ the fixed set $V^K = \{v \in V: \pi(g)v = v\;\forall g \in K\}$ is finite-dimensional.
+\end{defi}
+
+\begin{eg}
+ Take $G = \GL_2(F)$. Then $\P^1(F)$ has a right action of $G$ by linear transformations. In fact, we can write $\P^1(F)$ as
+ \[
+ \P^1(F) =
+ \begin{pmatrix}
+ * & *\\
+ 0 & *
+ \end{pmatrix} \backslash \,G.
+ \]
+ Let $V$ be the space of all locally constant functions $f: \P^1(F) \to \C$. There are lots of such functions, because $\P^1(F)$ is totally disconnected. However, since $\P^1(F)$ is compact, each such function can only take finitely many values.
+
+ We let
+ \[
+ \pi(g) f = (x \mapsto f(xg)).
+ \]
+ It is not difficult to see that this is an infinite-dimensional admissible representation.
+\end{eg}
+Of course, any finite-dimensional representation is an example, but $\GL_n(F)$ does not have very interesting finite-dimensional representations.
+
+\begin{prop}
+ Let $G = \GL_n(F)$. If $(\pi, V)$ is a smooth representation with $\dim V < \infty$, then
+ \[
+ \pi = \sigma \circ \det
+ \]
+ for some $\sigma: F^\times \to \GL_\C(V)$.
+\end{prop}
+So these are pretty boring.
+
+\begin{proof}
+ If $V = \bigoplus_{i = 1}^d \C e_i$, then
+ \[
+ \ker \pi = \bigcap_{i = 1}^d (\text{stabilizers of }e_i)
+ \]
+ is open. It is also a normal subgroup, so
+ \[
+ \ker \pi \supseteq K_m = \{g \in \GL_n(\mathcal{O}) : g \equiv I \bmod {\varpi^m}\}
+ \]
+ for some $m$, where $\varpi$ is a uniformizer of $F$. In particular, $\ker \pi$ contains $E_{ij}(x)$ for some $i \not= j$ and $x$, which is the matrix that is the identity except at entry $(i, j)$, where it is $x$.
+
+ But since $\ker \pi$ is normal, conjugation by diagonal matrices shows that it contains all $E_{ij}(x)$ for all $x \in F$ and $i \not= j$. For any field, these matrices generate $\SL_n(F)$. So we are done.
+\end{proof}
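The conjugation step in the proof is a one-line computation: conjugating $E_{12}(x)$ by the diagonal matrix $\diag(t, 1)$ rescales the off-diagonal entry by $t$. A symbolic check with sympy (illustrative, for $n = 2$):

```python
from sympy import Matrix, symbols, simplify

t, x = symbols('t x', nonzero=True)

D = Matrix([[t, 0], [0, 1]])   # diagonal matrix diag(t, 1)
E = Matrix([[1, x], [0, 1]])   # elementary matrix E_12(x)

# Conjugation rescales the (1, 2) entry: D E_12(x) D^{-1} = E_12(t x).
conj = (D * E * D.inv()).applyfunc(simplify)
assert conj == Matrix([[1, t * x], [0, 1]])
```

So a normal subgroup containing a single $E_{12}(x)$ with $x \neq 0$ contains $E_{12}(tx)$ for every $t \in F^\times$, which is the claim used above.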
+So the interesting representations are all infinite-dimensional. Fortunately, a lot of things true for finite-dimensional representations also hold for these. For example,
+
+\begin{lemma}[Schur's lemma]\index{Schur's lemma}
+ Let $(\pi, V)$ be an irreducible representation. Then every endomorphism of $V$ commuting with $\pi$ is a scalar.
+
+ In particular, there exists $\omega_\pi: Z(G) \to \C^\times$ such that
+ \[
+ \pi(zg) = \omega_\pi(z) \pi(g)
+ \]
+ for all $z \in Z(G)$ and $g \in G$. This is called the \term{central character}.\fakeqed
+\end{lemma}
+
+At this point, we are already well-equipped to state a high-level description of the local Langlands correspondence.
+\begin{thm}[Harris--Taylor, Henniart]\index{Langlands correspondence}
+ There is a bijection
+ \[
+ \left\{\parbox{4.6cm}{\centering irreducible, admissible representations of $\GL_n(F)$}\right\} \longleftrightarrow \left\{\parbox{4.6cm}{\centering semi-simple $n$-dimensional representations of $\mathcal{L}_F$}\right\}.
+ \]
+\end{thm}
+
+In the next section, we will introduce the Hecke algebra, which allows us to capture these scary infinite-dimensional representations of $\GL_n(F)$ in terms of something finite-dimensional.
+
+Afterwards, we are going to state the Langlands classification of irreducible admissible representations of $\GL_n(F)$. We can then state a detailed version of the local Langlands correspondence in terms of this classification.
+
+\subsection{Hecke algebras}
+Let $G, K$ be as before.
+\begin{notation}\index{$C^c_\infty(G)$}
+ We write $C_c^\infty(G)$ for the vector space of locally constant functions $f: G \to \C$ of compact support.
+\end{notation}
+
+\begin{defi}[Hecke algebra]\index{Hecke algebra}
+ The \emph{Hecke algebra} is defined to be\index{$\mathcal{H}(G, K)$}
+ \[
+ \mathcal{H}(G, K) = \{\varphi \in C_c^\infty(G) : \varphi(kgk') = \varphi(g)\text{ for all }k, k' \in K\}.
+ \]
+ This is spanned by the characteristic functions of double cosets $KgK$.
+\end{defi}
+This algebra comes with a product called the convolution product. To define this, we need the Haar measure on $G$. This is a functional $C_c^\infty(G) \to \C$, written
+\[
+ f \mapsto \int_G f(g) \;\d \mu (g),
+\]
+that is invariant under left translation, i.e.\ for all $h \in G$, we have
+\[
+ \int f(hg) \;\d \mu(g) = \int f(g)\;\d \mu(g).
+\]
+To construct the Haar measure, we take $\mu(1_K) = 1$. Then if $K' \subseteq K$ is an open subgroup, then it is of finite index, and since we want $\mu(1_{xK'}) = \mu(1_{K'})$, we must have
+\[
+ \mu(1_{K'}) = \frac{1}{(K:K')}.
+\]
+We then set $\mu(1_{xK'}) = \mu(1_{K'})$ for any $x \in G$, and since these form a basis of the topology, this defines $\mu$.
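As a toy numerical example (not from the notes): take $G = \GL_2(\Q_p)$, $K = \GL_2(\Z_p)$, and $K' = K_1 = \{g \equiv I \bmod \varpi\}$. Then $(K : K_1) = |\GL_2(\Z/p)| = (p^2 - 1)(p^2 - p)$, so the recipe above forces $\mu(1_{K_1}) = 1/(K : K_1)$:

```python
from fractions import Fraction

def gl_order(n, p):
    """|GL_n(Z/p)| = prod_{i=0}^{n-1} (p^n - p^i)."""
    order = 1
    for i in range(n):
        order *= p**n - p**i
    return order

def mu_K1(n, p):
    # mu(1_K) = 1 for K = GL_n(Z_p); the congruence subgroup K_1 has
    # index |GL_n(Z/p)| in K, so mu(1_{K_1}) = 1/(K : K_1).
    return Fraction(1, gl_order(n, p))

print(mu_K1(2, 3))  # 1/48, since |GL_2(Z/3)| = (9 - 1)(9 - 3) = 48
```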
+
+\begin{defi}[Convolution product]\index{convolution product}
+ The \emph{convolution product} on $\mathcal{H}(G, K)$ is
+ \[
+ (\varphi * \varphi')(g) = \int_G \varphi(x) \varphi'(x^{-1}g)\;\d \mu(x).
+ \]
+ Observe that this integral is actually a finite sum.
+\end{defi}
+
+It is an exercise to check that this is a $\C$-algebra with unit
+\[
+ e_K = \frac{1}{\mu(K)} 1_K.
+\]
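In the toy model of a finite group with counting measure, the convolution product and its unit can be checked by hand. Here $G = \Z/n$ written additively (so $x^{-1}g$ becomes $g - x$), and $K$ is the trivial subgroup, making $e_K$ the delta function at the identity; this is only an illustration, since the groups in the text are non-compact:

```python
n = 6  # toy group G = Z/6 with counting measure

def conv(phi, psi):
    # (phi * psi)(g) = sum_x phi(x) psi(x^{-1} g)
    return [sum(phi[x] * psi[(g - x) % n] for x in range(n)) for g in range(n)]

e = [1 if g == 0 else 0 for g in range(n)]   # unit: delta at the identity
phi = [g * g % 5 for g in range(n)]          # arbitrary test function
psi = [g + 1 for g in range(n)]              # another arbitrary function

assert conv(e, phi) == phi == conv(phi, e)   # e is a two-sided unit
# Convolution is associative, as needed for an algebra:
assert conv(conv(phi, psi), psi) == conv(phi, conv(psi, psi))
```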
+Now if $(\pi, V)$ is a smooth representation, then for all $v \in V$ and $\varphi \in \mathcal{H}(G, K)$, consider the expression
+\[
+ \pi(\varphi)v = \int_G \varphi(g) \pi(g) v\;\d \mu(g).
+\]
+Note that since the stabilizer of $v$ is open, the integral is actually a finite sum, so we can make sense of it. One then sees that
+\[
+ \pi(\varphi) \pi(\varphi') = \pi(\varphi * \varphi').
+\]
+This would imply $V$ is a $\mathcal{H}(G, K)$-module, if the unit acted appropriately. It doesn't, however, since in fact $\pi(\varphi)$ always maps into $V^K$. Indeed, if $k \in K$, then
+\[
+ \pi(k) \pi(\varphi)v= \int_G \varphi(g) \pi(kg) v\;\d \mu(g) = \int \varphi(g) \pi(g)v\;\d \mu(g) = \pi(\varphi) v,
+\]
+using that $\varphi(g) = \varphi(k^{-1}g)$ and $\d \mu(g) = \d \mu(k^{-1}g)$.
+
+So our best hope is that $V^K$ is an $\mathcal{H}(G, K)$-module, and one easily checks that $\pi(e_K)$ indeed acts as the identity. We also have a canonical projection $\pi(e_K): V \twoheadrightarrow V^K$.
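The statement that $\pi(e_K)$ is a projection onto $V^K$ is the familiar averaging idempotent. A finite toy model, with a cyclic group of permutation matrices standing in for the compact open subgroup $K$ (illustrative only):

```python
import numpy as np

# K = Z/3 acting on C^3 by cyclically permuting coordinates.
C = np.roll(np.eye(3), 1, axis=0)             # generator: a 3-cycle
group = [np.linalg.matrix_power(C, k) for k in range(3)]

P = sum(group) / len(group)                   # pi(e_K): average over K

assert np.allclose(P @ P, P)                  # idempotent
v = np.array([1.0, 2.0, 3.0])
assert np.allclose(P @ v, [2.0, 2.0, 2.0])    # projects onto K-fixed vectors
assert np.allclose(C @ (P @ v), P @ v)        # the image really is K-fixed
```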
+
+In good situations, this Hecke module determines $V$.
+\begin{prop}
+ There is a bijection between isomorphism classes of irreducible admissible $(\pi, V)$ with $V^K \not= 0$ and isomorphism classes of simple finite-dimensional $\mathcal{H}(G, K)$-modules, which sends $(\pi, V)$ to $V^K$ with the action we described.
+\end{prop}
+
+If we replace $K$ by a smaller subgroup $K' \subseteq K$, then we have an inclusion
+\[
+ \mathcal{H}(G, K) \hookrightarrow \mathcal{H}(G, K'),
+\]
+which does \emph{not} take $e_K$ to $e_{K'}$. We can then form the union of all of these, and let\index{$\mathcal{H}(G)$}
+\[
+ \mathcal{H}(G) = \varinjlim_K \mathcal{H}(G, K)
+\]
+which is an algebra without unit. Heuristically, the unit should be the delta function concentrated at the identity, but that is not a function.
+
+This $\mathcal{H}(G)$ acts on any smooth representation, and we get an equivalence of categories
+\[
+ \left\{\parbox{3.5cm}{\centering smooth $G$-representations}\right\} \longleftrightarrow \left\{\parbox{3.5 cm}{\centering non-degenerate $\mathcal{H}(G)$-modules}\right\}.
+\]
+The non-degeneracy condition is $V = \mathcal{H}(G) V$.
+
+Note that if $\varphi \in \mathcal{H}(G)$ and $(\pi, V)$ is admissible, then $\rank \pi(\varphi) < \infty$, using that $V^K$ is finite-dimensional. So the trace is well-defined. The \emph{character} of $(\pi, V)$ is then the map
+\[
+ \varphi \mapsto \tr \pi(\varphi).
+\]
+In this sense, admissible representations have traces.
+\subsection{The Langlands classification}
+Recall that the group algebra $\C[G]$ is an important tool in the representation theory of finite groups. This decomposes as a direct sum over all irreducible representations
+\[
+ \C[G] = \bigoplus_{\pi} \pi^{\dim (\pi)}.
+\]
+The same result is true for compact groups, if we replace $\C[G]$ by $L^2(G)$. We get a decomposition
+\[
+ L^2(G) = \hat{\bigoplus_{\pi}} \pi^{\dim (\pi)},
+\]
+where $L^2$ is defined with respect to the Haar measure, and the sum is over all (finite dimensional) irreducible representations of $G$. The hat on the direct sum says it is a Hilbert space direct sum, which is the completion of the vector space direct sum. This result is known as the \term{Peter--Weyl theorem}. For example,
+\[
+ L^2(\R/\Z) = \hat{\bigoplus}_{n \in \Z} \C \cdot e^{2\pi i n y}.
+\]
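This example can be illustrated numerically: sampling a function on $\R/\Z$ and applying the discrete Fourier transform expresses it in the characters $e^{2\pi i n y}$, and expanding back recovers the function. (A numerical sketch only; the function and sample size are arbitrary choices.)

```python
import numpy as np

N = 64
y = np.arange(N) / N                     # sample points on R/Z
f = np.cos(2 * np.pi * y) + 0.5 * np.sin(4 * np.pi * y)

# Coefficients of f against the characters e^{2 pi i n y}.
c = np.fft.fft(f) / N

# Expanding back in the characters recovers f.
recon = sum(c[k] * np.exp(2j * np.pi * k * y) for k in range(N))
assert np.allclose(recon.real, f)

# Only the characters with n = 1, 2 (and their conjugates) contribute here.
support = sorted(k for k in range(N) if abs(c[k]) > 1e-9)
print(support)  # [1, 2, 62, 63]
```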
+However, if $G$ is non-compact, then this is no longer true.
+
+Sometimes, we can salvage this a bit by replacing the discrete direct sum with a continuous version. For example, the characters of $\R$ are those of the form
+\[
+ x \mapsto e^{2\pi i xy},
+\]
+which are not $L^2$ functions. But we can write any function in $L^2(\R)$ as
+\[
+ x \mapsto \int_y \varphi(y) e^{2\pi i xy} \;\d y.
+\]
+So in a sense, $L^2(\R)$ is the ``continuous direct sum'' of irreducible representations.
+
+In general, $L^2(G)$ decomposes as a sum of irreducible representations, and contains both a discrete sum and a continuous part. However, there are irreducible representations that \emph{don't} appear in $L^2(G)$, discretely or continuously. These are known as the \term{complementary series} representations. This happens, for example, for $G = \SL_2(\R)$ (Bargmann 1947).
+
+%For non-compact groups, another thing that can happen is that there are unitary representations of $G$ which do not appear (discretely or continuously) in $L^2$. For example (Borgmann 1947), if $G = \SL_2(\R)$, then there are unitary irreducible representations called unity principal series representations, and also discrete series representations, and there are the ``complementary series''. It turns out % discrete depends on integer parameter, and unitary depends on real parameters
+%\[
+% L^2(G) = \bigoplus \text{discrete series} \oplus \int \text{unitary principal series},
+%\]
+%but the complementary series (and trivial representation) don't occur.
+
+We now focus on the case $G = \GL_n(F)$, or any reductive $F$-group (it doesn't hurt to go for generality if we are not proving anything anyway). It turns out in this case, we can describe the representations that appear in $L^2(G)$ pretty explicitly. These are characterized by the matrix coefficients.
+
+If $\pi: G \to \GL_n(\C)$ is a finite-dimensional representation, then the matrix coefficients $\pi(g)_{ij}$ can be written as
+\[
+ \pi(g)_{ij} = \delta^i(\pi(g) e_j),
+\]
+where $e_j \in \C^n$ is the $j$\textsuperscript{th} basis vector and $\delta^i \in (\C^n)^*$ is the $i$\textsuperscript{th} dual basis vector.
+
+More generally, if $\pi: G \to \GL(V)$ is a finite-dimensional representation, and $v \in V$, $\ell \in V^*$, then we can think of
+\[
+ \pi_{v, \ell}(g) = \ell(\pi(g)v)
+\]
+as a matrix element of $\pi(g)$, and this defines a function $\pi_{v, \ell}: G \to \C$.
+
+In the case of the $G$ we care about, our representations are fancy infinite-dimensional representations, and we need a fancy version of the dual known as the \emph{contragredient}.
+\begin{defi}[Contragredient]\index{contragredient}
+ Let $(\pi, V)$ be a smooth representation. We define $V^* = \Hom_\C(V, \C)$, and the representation $(\pi^*, V^*)$ of $G$ is defined by
+ \[
+ \pi^*(g)\ell = (v \mapsto \ell(\pi(g^{-1})v)).
+ \]
+ We then define the \emph{contragredient} $(\tilde{\pi}, \tilde{V})$ to be the subrepresentation
+ \[
+ \tilde{V} = \{\ell \in V^* \text{ with open stabilizer}\}.
+ \]
+\end{defi}
+This contragredient is quite pleasant. Recall that
+\[
+ V = \bigcup_K V^K.
+\]
+We then have
+\[
+ \tilde{V} = \bigcup (V^*)^K.
+\]
+Using the projection $\pi(e_K): V \to V^K$, we can identify
+\[
+ (V^*)^K = (V^K)^*.
+\]
+So in particular, if $\pi$ is admissible, then so is $\tilde{\pi}$, and we have a canonical isomorphism
+\[
+ V \to \tilde{\tilde{V}}.
+\]
+
+\begin{defi}[Matrix coefficient]\index{matrix coefficient}\index{$\pi_{v, \ell}$}
+ Let $(\pi, V)$ be a smooth representation, and $v \in V$, $\ell \in \tilde{V}$. The \emph{matrix coefficient} $\pi_{v, \ell}$ is defined by
+ \[
+ \pi_{v, \ell}(g) = \ell(\pi(g) v).
+ \]
+ This is a locally constant function $G \to \C$.
+\end{defi}
+
+We can now make the following definition:
+\begin{defi}[Square integrable representation] \index{square integrable representation}
+ Let $(\pi, V)$ be an irreducible smooth representation of $G$. We say it is \emph{square integrable} if $\omega_\pi$ is unitary and
+ \[
+ |\pi_{v, \ell}| \in L^2(G/Z)
+ \]
+ for all $(v, \ell)$.
+\end{defi}
+Note that the fact that $\omega_{\pi}$ is unitary implies $|\pi_{v, \ell}|$ is indeed a well-defined function on $G/Z$. In general, it is unlikely that $\pi_{v, \ell}$ itself is in $L^2(G)$.
+
+If $Z$ is finite, then $\omega_\pi$ is automatically unitary and we don't have to worry about quotienting by the center. Moreover, $\pi$ is square integrable iff $\pi_{v, \ell} \in L^2(G)$. In this case, if we pick $\ell \in \tilde{V}$ non-zero, then $v \mapsto \pi_{v, \ell}$ gives an embedding of $V$ into $L^2(G)$. In general, we have
+\[
+ V \subseteq L^2(G, \omega_\pi) = \{f: G \to \C: f(zg) = \omega_\pi(z) f(g),\quad |f| \in L^2(G/Z)\}.
+\]
+
+
+A slight weakening of square integrability is the following weird definition:
+\begin{defi}[Tempered representation]\index{tempered representation}
+ Let $(\pi, V)$ be irreducible, $\omega_\pi$ unitary. We say it is \emph{tempered} if for all $(v, \ell)$ and $\varepsilon > 0$, we have
+ \[
+ |\pi_{v, \ell}| \in L^{2 + \varepsilon}(G/Z).
+ \]
+\end{defi}
+The reason for this definition is that $\pi$ is tempered iff it occurs in $L^2(G)$, not necessarily discretely. % cite?
+
+Weakening in another direction gives the following definition:
+\begin{defi}[Essentially square integrable]
+ Let $(\pi, V)$ be irreducible. Then $(\pi, V)$ is \term{essentially square integrable} (or \term{essentially tempered}) if
+ \[
+ \pi \otimes (\chi \circ \det)
+ \]
+ is square integrable (or tempered) for some character $\chi: F^\times \to \C^\times$.
+\end{defi}
+%For $\GL_n$, if we take $\chi = |\omega_\pi|^{-1/n}$, then $\omega_{\pi \otimes \chi}$ is unitary, and if we twist $\chi$ by a unitary character, then it is not going to change our conditions.
+
+Note that while these definitions seem very analytic, there are in fact purely algebraic interpretations of these definitions, using Jacquet modules.
+
+A final category of representations is the following:
+\begin{defi}[Supercuspidal representation]\index{supercuspidal representation}
+ We say $\pi$ is \emph{supercuspidal} if for all $(v, \ell)$, the support of $\pi_{v, \ell}$ is compact mod $Z$.
+\end{defi}
+
+These are important because they are building blocks of \emph{all} irreducible representations of $\GL_n(F)$, in a sense we will make precise.
+
+The key notion is that of parabolic induction, which takes a list of representations $\sigma_i$ of $\GL_{n_i}(F)$ to a representation of $\GL_N(F)$, where $N = \sum n_i$.
+
+We first consider a simpler case, where we have an $n$-tuple $\chi = (\chi_1, \ldots, \chi_n): (F^\times)^n \to \C^\times$ of characters. The group $G = \GL_n(F)$ contains the Borel subgroup $B$ of upper-triangular matrices. Then $B$ is the semi-direct product $TN$, where $T \cong (F^\times)^n$ consists of the diagonal matrices and $N$ of the unipotent ones. We can then view $\chi$ as a character $\chi: B \to B/N = T \to \C^\times$.
+
+We then induce this up to a representation of $G$. Here the definition of an induced representation is not the usual one, but has a twist.
+\begin{defi}[Induced representation]\index{induced representation}
+ Let $\chi: B \to \C^\times$ be a character. We define the induced representation $\Ind^G_B(\chi)$ to be the space of locally constant functions $f: G \to \C$ such that
+ \[
+ f(bg) = \chi(b) \delta_B(b)^{1/2} f(g)
+ \]
+ for all $b \in B$ and $g \in G$, where $G$ acts by
+ \[
+ \pi(g) f: x \mapsto f(xg).
+ \]
+ The function $\delta_B(b)^{1/2}$ is defined by
+ \[
+ \delta_B(b) = |\det \ad_B(b)|.
+ \]
+ More explicitly, if the diagonal entries of $b \in B$ are $x_1, \ldots, x_n$, then
+ \[
+ \delta_B(b) = \prod_{i = 1}^n |x_i|^{n + 1 - 2i} = |x_1|^{n - 1} |x_2|^{n - 3} \cdots |x_n|^{-n + 1}.
+ \]
+
+ This is a smooth representation since $B\backslash G$ is compact. In fact, it is admissible and of finite length.
+
+ When this is irreducible, it is said to be a \term{principal series representation} of $G$.
+\end{defi}
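For a diagonal matrix, $\delta_B$ is easy to evaluate from the explicit product formula. A small helper for $\GL_n(\Q_p)$, where $|x| = p^{-v(x)}$ with $v$ the $p$-adic valuation (the helper and the example values are illustrative, not from the notes):

```python
from fractions import Fraction

def delta_B(valuations, p):
    """delta_B(b) for b in GL_n(Q_p) with diagonal entries x_i of the
    given p-adic valuations; uses delta_B(b) = prod |x_i|^{n + 1 - 2i}."""
    n = len(valuations)
    exponent = sum(-v * (n + 1 - 2 * i)
                   for i, v in enumerate(valuations, start=1))
    return Fraction(p) ** exponent

# b = diag(p, 1, p^{-1}) in GL_3(Q_p):
# delta_B(b) = |p|^2 |1|^0 |p^{-1}|^{-2} = p^{-4}
print(delta_B([1, 0, -1], 5))  # 1/625
```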
+\begin{eg}
+ Recall that $\P^1(F) = B \backslash \GL_2(F)$. In this terminology,
+ \[
+ C^\infty(\P^1(F)) = \Ind_B^G(\delta_B^{-1/2}).
+ \]
+ This is not irreducible, since it contains the constant functions, but quotienting by these does give an irreducible representation. This is called the \term{Steinberg representation}.
+\end{eg}
+
+In general, we start with a \term{parabolic subgroup} $P \subseteq \GL_n(F) = G$, i.e.\ one conjugate to the group of block upper-triangular matrices for a partition $n = n_1 + \cdots + n_r$. This then decomposes into $MN$, where
+\[
+ M \cong \prod_i \GL_{n_i}(F),\quad N = \left\{
+ \begin{pmatrix}
+ I_{n_1} & \cdots & *\\
+ & \ddots & \vdots\\
+ & & I_{n_r}
+ \end{pmatrix}
+ \right\}.
+\]
+This is an example of a \term{Levi decomposition}, and $M$ is a \term{Levi subgroup}.
+
+To perform \term{parabolic induction}, we let $(\sigma, U)$ be a smooth representation of $M$, e.g.\ $\sigma_1 \otimes \cdots \otimes \sigma_r$, where each $\sigma_i$ is a representation of $\GL_{n_i}(F)$. This then defines a representation of $P$ via $P \to P/N = M$, and we define $\Ind_P^G(\sigma)$ to be the space of all locally constant functions $f: G \to U$ such that
+\[
+ f(pg) = \delta_P(p)^{1/2} \sigma(p) f(g)
+\]
+for all $p \in P$ and $g \in G$, where $\delta_P$ is again defined by
+\[
+ \delta_P(p) = |\det \ad_P(p)|.
+\]
+This is again a smooth representation.
+
+\begin{prop}\leavevmode
+ \begin{enumerate}
+ \item $\sigma$ is admissible implies $\pi = \Ind_P^G \sigma$ is admissible.
+ \item $\sigma$ is unitary implies $\pi$ is unitary.
+ \item $\Ind_P^G(\tilde{\sigma}) = \tilde{\pi}$.\fakeqed
+ \end{enumerate}
+\end{prop}
+(ii) and (iii) are the reasons for the factor $\delta_P^{1/2}$.
+
+\begin{eg}
+ Observe
+ \[
+ \widetilde{C^\infty(\P^1(F))} = \Ind_B^G(\delta_B^{1/2}) = \{f: G \to \C: f(bg) = \delta_B(b) f(g)\}.
+ \]
+ There is a linear form to $\C$ given by integrating $f$ over $\GL_2(\mathcal{O})$ (we cannot simply integrate over $G$, since $\GL_2(F)$ is not compact). In fact, this map is $G$-invariant, and not just $\GL_2(\mathcal{O})$-invariant. This is dual to the constant subspace of $C^\infty(\P^1(F))$.
+\end{eg}
+
+The rough statement of the classification theorem is that every irreducible admissible representation of $G = \GL_n(F)$ is a subquotient of an induced representation $\Ind_P^G \sigma$ for some supercuspidal representation $\sigma$ of a Levi subgroup $M = \GL_{n_1}(F) \times \cdots \times \GL_{n_r}(F)$. This holds for \emph{any} reductive $G/F$.
+
+For $\GL_n(F)$, we can be more precise. This is called the \emph{Langlands classification}. We first classify all the essentially square integrable representations:
+
+\begin{thm}
+ Let $n = mr$ with $m, r \geq 1$. Let $\sigma$ be any supercuspidal representation of $\GL_m(F)$. Let
+ \[
+ \sigma(x) = \sigma \otimes |{\det}_m|^x.
+ \]
+ Write $\Delta = (\sigma, \sigma(1), \ldots, \sigma(r - 1))$, a representation of $\GL_m(F) \times \cdots \times \GL_m(F)$. Then $\Ind_P^G(\Delta)$ has a unique irreducible subquotient $Q(\Delta)$, which is essentially square integrable.
+
+ Moreover, $Q(\Delta)$ is square integrable iff the central character is unitary, iff $\sigma(\frac{r - 1}{2})$ is square-integrable, and every essentially square integrable $\pi$ is a $Q(\Delta)$ for a unique $\Delta$.
+\end{thm}
+
+\begin{eg}
+ Take $n = 2 = r$, $\sigma = |\ph|^{-1/2}$. Take
+ \[
+ P = B =
+ \begin{pmatrix}
+ * & *\\
+ 0 & *
+ \end{pmatrix}
+ \]
+ Then
+ \[
+ \Ind_B^G(|\ph|^{-1/2}, |\ph|^{1/2}) = C^\infty(B \backslash G) \supseteq \C,
+ \]
+ where $\C$ is the constants. Then $C^\infty(B\backslash G)$ is not irreducible, but the quotient by the constants is the Steinberg representation, which is square integrable. Thus, every two-dimensional essentially square integrable representation which is not supercuspidal is a twist of the Steinberg representation by $\chi \circ \det$.
+\end{eg}
+
+We can next classify tempered representations.
+\begin{thm}
+ The tempered irreducible admissible representations of $\GL_n(F)$ are precisely the representations $\Ind_P^G \sigma$, where $\sigma$ is irreducible square integrable. In particular, $\Ind_P^G \sigma$ is always irreducible when $\sigma$ is square integrable.
+\end{thm}
+
+\begin{eg}
+ For $\GL_2$, we seek a $\pi$ which is tempered but not square integrable. This must be of the form
+ \[
+ \pi = \Ind_B^G (\chi_1, \chi_2),
+ \]
+ where $|\chi_1| = |\chi_2| = 1$. If we want it to be essentially tempered, then we only need $|\chi_1| = |\chi_2|$.
+\end{eg}
+
+Finally, we classify all irreducible (admissible) representations.
+\begin{thm}
+ Let $n = n_1 + \cdots + n_r$ be a partition, and $\sigma_i$ a tempered representation of $\GL_{n_i}(F)$ for each $i$. Let $t_i \in \R$ with $t_1 > \cdots > t_r$. Then $\Ind_P^G(\sigma_1(t_1), \ldots, \sigma_r(t_r))$ has a unique irreducible quotient, the \term{Langlands quotient}, and every $\pi$ is (uniquely) of this form.
+\end{thm}
+\begin{eg}
+ For $\GL_2$, the remaining (i.e.\ not essentially tempered) representations are the irreducible subquotients of
+ \[
+ \Ind_B^G (\chi_1, \chi_2),
+ \]
+ where
+ \[
+ |\chi_i| = |\ph |_F^{t_i},\quad t_1 > t_2.
+ \]
+ Note that the one-dimensional representations must occur in this set, because we haven't encountered any yet.
+
+ For example, if we take $\chi_1 = |\ph|^{1/2}$ and $\chi_2 = |\ph|^{-1/2}$, then
+ \[
+ \Ind_B^G(\chi_1, \chi_2) = \widetilde{C^\infty(B \backslash G)},
+ \]
+ which has the trivial representation as its irreducible quotient.
+\end{eg}
+
+\subsection{Local Langlands correspondence}
+\begin{thm}[Harris--Taylor, Henniart]\index{local Langlands correspondence}
+ There is a bijection
+ \[
+ \left\{\parbox{4.6cm}{\centering irreducible, admissible representations of $\GL_n(F)$}\right\} \longleftrightarrow \left\{\parbox{4.6cm}{\centering semi-simple $n$-dimensional representations of $\mathcal{L}_F$}\right\}.
+ \]
+ Moreover,
+ \begin{itemize}
+ \item For $n = 1$, this is the same as local class field theory.
+ \item Under local class field theory, the central character $\omega_\pi$ corresponds to $\det \sigma$.
+ \item The supercuspidals correspond to the irreducible representations of $W_F$ itself.
+ \item If a supercuspidal $\pi_0$ corresponds to the representation $\sigma_0$ of $W_F$, then the essentially square integrable representation $\pi = Q(\pi_0(-\frac{r-1}{2}), \ldots, \pi_0(\frac{r - 1}{2}))$ corresponds to $\sigma = \sigma_0 \otimes \Sym^{r - 1} \C^2$.
+
+ \item If $\pi_i$ correspond to $\sigma_i$, where $\sigma_i$ are irreducible and unitary, then the tempered representation $\Ind_P^G(\pi_1 \otimes \cdots \otimes \pi_r)$ corresponds to $\sigma_1 \oplus \cdots \oplus \sigma_r$.
+%
+% Tempered representations correspond to unitary representations of the form$\sigma = \sigma_1 \oplus \cdots \oplus \sigma_r$, where each $\sigma_i$ is irreducible and unitary, where if $\sigma_i$ corresponds to $\pi_i$, then the tempered representation is given by $\Ind_P^G(\pi_1 \otimes \cdots \otimes \pi_r)$.
+ \item For general representations, if $\pi$ is the Langlands quotient of
+ \[
+ \Ind(\pi_1(t_1), \ldots, \pi_r(t_r))
+ \]
+ with each $\pi_i$ tempered, and $\pi_i$ corresponds to unitary representations $\sigma_i$ of $\mathcal{L}_F$, then $\pi$ corresponds to $\bigoplus \sigma_i \otimes |\Art_F^{-1}|_F^{t_i}$.
+ \end{itemize}
+\end{thm}
+The hard part of the theorem is the correspondence between the supercuspidal representations and irreducible representations of $W_F$. This correspondence is characterized by $\varepsilon$-factors of pairs.
+
+Recall that for an irreducible representation of $W_F$, we had an $\varepsilon$-factor $\varepsilon(\sigma, \mu_F, \psi)$. If we have two representations, then we can just take the tensor product $\varepsilon(\sigma_1 \otimes \sigma_2, \mu_F, \psi)$. It turns out for supercuspidals, we can also introduce $\varepsilon$-factors $\varepsilon(\pi, \mu_F, \psi)$. There are also $\varepsilon$-factors for pairs, $\varepsilon(\pi_1, \pi_2, \mu_F, \psi)$. Then the correspondence is such that if $\pi_i$ correspond to $\sigma_i$, then
+\[
+ \varepsilon(\sigma_1 \otimes \sigma_2, \mu_F, \psi) = \varepsilon(\pi_1, \pi_2, \mu_F, \psi).
+\]
+When $n = 1$, we get local class field theory. Recall that we actually have a \emph{homomorphic} correspondence between characters of $F^\times$ and characters of $W_F$, and the correspondence is uniquely determined by
+\begin{enumerate}
+ \item The behaviour on unramified characters, which says that the Artin map sends uniformizers to geometric Frobenii.
+ \item The base change property: the restriction map $W_{F'}^{\ab} \to W_F^{\ab}$ corresponds to the norm map of fields.
+\end{enumerate}
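+As a sanity check on convention (i) in the $n = 1$ case, here is the unramified picture spelt out (nothing new here; it is just the Artin map unwound).
+\begin{eg}
+ An unramified character $\chi: F^\times \to \C^\times$, i.e.\ one trivial on $\mathcal{O}_F^\times$, is determined by the single number $\alpha = \chi(\varpi)$, where $\varpi$ is any uniformizer. Under the correspondence, $\chi$ matches the character of $W_F$ that is trivial on $I_F$ and sends a geometric Frobenius to $\alpha$.
+\end{eg}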
+If we want to extend this to the local Langlands correspondence, the corresponding picture will include
+\begin{enumerate}
+ \item Multiplication: taking pairs of representations of $\GL_n$ and $\GL_m$ to representations of $\GL_{mn}$
+ \item Base change: sending representations of $\GL_n(F)$ to representations of $\GL_n(F')$ for a finite extension $F'/F$
+\end{enumerate}
+These things exist by virtue of the local Langlands correspondence (much earlier, base change for \emph{cyclic extensions} was constructed by Arthur--Clozel).
+
+\begin{prop}
+ Let $\sigma: W_F \to \GL_n(\C)$ be an irreducible representation. Then the following are equivalent:
+ \begin{enumerate}
+ \item For some $g \in W_F \setminus I_F$, $\sigma(g)$ has an eigenvalue of absolute value $1$.
+ \item $\im \sigma$ is relatively compact, i.e.\ has compact closure, i.e.\ is bounded.
+ \item $\sigma$ is unitary.
+ \end{enumerate}
+\end{prop}
+
+\begin{proof}
+ The only non-trivial part is (i) $\Rightarrow$ (ii). We know
+ \[
+ \im \sigma = \bra \sigma(\Phi), \sigma(I_F) = H\ket,
+ \]
+ where $\Phi$ is some lift of the Frobenius and $H$ is a finite group. Moreover, $I_F$ is normal in $W_F$. So for some $n \geq 1$, $\sigma(\Phi^n)$ commutes with $H$. Thus, replacing $g$ and $\Phi^n$ with suitable non-zero powers, we can assume $\sigma(g) = \sigma(\Phi^n) h$ for some $h \in H$. Since $H$ is finite, and $\sigma(\Phi^n)$ commutes with $h$, we may in fact assume $\sigma(g) = \sigma(\Phi)^n$. So we know $\sigma(\Phi)$ has an eigenvalue of absolute value $1$.
+
+ Let $V_1 \subseteq V = \C^n$ be the sum of the eigenspaces of $\sigma(\Phi)^n$ whose eigenvalues have absolute value $1$; by the above, $V_1 \not= 0$. Since $\sigma(\Phi^n)$ is central, we know $V_1$ is a subrepresentation. Since $V$ is irreducible, this forces $V_1 = V$. So all eigenvalues of $\sigma(\Phi)$ have absolute value $1$. Since $V$ is irreducible, we know it is Frobenius-semisimple. So $\sigma(\Phi)$ is semisimple. So $\bra \sigma(\Phi)\ket$ is bounded. So $\im \sigma$ is bounded.
+\end{proof}
+
+\section{Modular forms and representation theory}
+Recall that a modular form is a holomorphic function $f: \H \to \C$ such that
+\[
+ f(z) = j(\gamma, z)^{-k} f(\gamma (z))
+\]
+for all $\gamma$ in some congruence subgroup of $\SL_2(\Z)$, where
+\[
+ \gamma =
+ \begin{pmatrix}
+ a & b\\
+ c & d
+ \end{pmatrix},\quad \gamma(z) = \frac{az + b}{cz + d},\quad j(\gamma, z) = cz + d.
+\]
+Let $M_k$ be the set of all such $f$.
+
+Consider the group
+\[
+ \GL_2(\Q)^+ = \{g \in \GL_2(\Q) : \det g > 0\}.
+\]
+This acts on $M_k$ on the left by
+\[
+ g: f \mapsto j_1(g^{-1}, z)^{-k} f(g^{-1}(z)),\quad j_1(g, z) = |\det g|^{-1/2} j(g, z).
+\]
+The factor of $|\det g|^{-1/2}$ makes the diagonal $\diag(\Q^\times_{> 0})$ act trivially. Note that later we will consider some $g$ with negative determinant, so we put the absolute value sign in.
+
+For any $f \in M_k$, the stabilizer contains some $\Gamma(N)$, which we can think of as some continuity condition. To make this precise, we can form the completion of $\GL_2(\Q)^+$ with respect to $\{\Gamma(N) : N \geq 1\}$, and the action extends to a representation $\pi'$ of this completion. In fact, this completion is
+\[
+ G' = \{g \in \GL_2(\A^\infty_{\Q}) : \det (g) \in \Q^{\times}_{> 0}\}.
+\]
+This is a closed subgroup of $G = \GL_2(\A^\infty_\Q)$, and in fact
+\[
+ G = G' \cdot
+ \begin{pmatrix}
+ \hat{\Z} & 0\\
+ 0 & 1
+ \end{pmatrix}.
+\]
+In fact, $G$ is a semidirect product of these two subgroups.
+
+The group $G'$ seems quite nasty, since the determinant condition is rather unnatural. It would be nice to get a representation of $G$ itself, and the easy way to do so is by induction. What is this? By definition, it is
+\[
+ \Ind_{G'}^G(M_k) = \{\varphi: G \to M_k: \forall h \in G', \varphi(hg) = \pi'(h) \varphi(g)\}.
+\]
+Equivalently, this consists of functions $F: \H \times G \to \C$ such that for all $\gamma \in \GL_2(\Q)^+$, we have
+\[
+ j_1(\gamma, z)^{-k} F(\gamma(z), \gamma g) = F(z, g),
+\]
+and for every $g \in G$, there is some open compact $K \subseteq G$ such that
+\[
+ F(z, g) = F(z, gh) \text{ for all }h \in K,
+\]
+and that $F$ is holomorphic in $z$ (and at the cusps).
+
+To get rid of the plus, we can just replace $\GL_2(\Q)^+, \H$ with $\GL_2(\Q), \C \setminus \R = \H^{\pm}$. These objects are called \emph{adelic modular forms}\index{adelic modular form}.
+
+If $F$ is an adelic modular form, and we fix $g$, then the function $f(z) = F(z, g)$ is a usual modular form. Conversely, if $F$ is invariant under $\ker (\GL_2(\hat{\Z}) \to \GL_2(\Z/N\Z))$, then $F$ corresponds to a tuple of $\Gamma(N)$-modular forms indexed by $(\Z/N\Z)^\times$. This has an action of $G = \prod_p' \GL_2(\Q_p)$ (the restricted product with respect to $\GL_2(\Z_p)$).
+
+The adelic modular forms contain the cusp forms, and any $f \in M_k$ generates a subrepresentation of the space of adelic forms.
+
+\begin{thm}\leavevmode
+ \begin{enumerate}
+ \item The space $V_f$ of adelic cusp forms generated by $f \in S_k(\Gamma_1(N))$ is irreducible iff $f$ is a $T_p$ eigenvector for all $p \nmid N$.
+ \item This gives a bijection between irreducible $G$-invariant spaces of adelic cusp forms and Atkin--Lehner newforms.
+ \end{enumerate}
+\end{thm}
+Note that it is important to stick to cusp forms, where there is an inner product, because if we look at the space of adelic modular forms, it is not completely decomposable.
+
+Now suppose $(\pi, V)$ is an irreducible admissible representation of $\GL_2(\A_\Q^\infty) = \prod_p' G_p$, where $G_p = \GL_2(\Q_p)$, and take the maximal compact subgroups $K_p^0 = \GL_2(\Z_p) \subseteq \GL_2(\Q_p)$. Then general facts about irreducible representations of products imply that irreducibility (and admissibility) is equivalent to the existence of irreducible admissible representations $(\pi_p, V_p)$ of $G_p$ for all $p$ such that
+\begin{enumerate}
+ \item For almost all $p$, $\dim V_p^{K_p^0} \geq 1$ (for $G_p = \GL_n(\Q_p)$, this implies the dimension is $1$). Fix some non-zero $f_p^0 \in V_p^{K_p^0}$.
+ \item We have
+ \[
+ \pi = \otimes_p' \pi_p,
+ \]
+ the restricted tensor product, which is generated by $\bigotimes_p v_p$ with $v_p = f_p^0$ for almost all $p$. To be precise,
+ \[
+ \otimes_p' \pi_p = \varinjlim_{\text{finite }S} \bigotimes_{p \in S} \pi_p.
+ \]
+ The use of $v_p$ is to identify smaller tensor products with larger tensor products.
+\end{enumerate}
+
+Note that (i) is equivalent to the assertion that $(\pi_p, V_p)$ is an irreducible principal series representation $\Ind_{B_p}^{G_p} (\chi_1, \chi_2)$, where the $\chi_i$ are \emph{unramified} characters. These unramified characters are determined by the values $\chi_i(p) = \alpha_{p, i}$.
+
+If $f = \sum a_n q^n \in S_k(\Gamma_1(N))$ is a normalized eigenform with character $\omega: (\Z/N\Z)^\times \to \C^\times$, and if $f$ corresponds to $\pi = \bigotimes' \pi_p$, then for every $p \nmid N$, we have $\pi_p = \Ind_{B_p}^{G_p}(\chi_1, \chi_2)$ is an unramified principal series, and
+\begin{align*}
+ a_p &= p^{(k - 1)/2} (\alpha_{p, 1} + \alpha_{p, 2})\\
+ \omega(p)^{-1} &= \alpha_{p, 1} \alpha_{p, 2}.
+\end{align*}
+We can now translate difficult theorems about modular forms into representation theory.
+\begin{eg}
+ The Ramanujan conjecture (proved by Deligne) says
+ \[
+ |a_p| \leq 2 p^{(k - 1)/2}.
+ \]
+ If we look at the formulae above, since $\omega(p)^{-1}$ is a root of unity, this is equivalent to the statement that $|\alpha_{p, i}| = 1$. This is true iff $\pi_p$ is tempered.
+\end{eg}
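+To put numbers to this, consider the simplest case (the computation below uses only the standard value $\tau(2) = -24$; everything else follows from the formulae above).
+\begin{eg}
+ Let $f = \Delta = \sum \tau(n) q^n \in S_{12}(\SL_2(\Z))$, which has trivial character, and take $p = 2$. Then
+ \[
+ \alpha_{2, 1} + \alpha_{2, 2} = \frac{\tau(2)}{2^{11/2}} = \frac{-24}{2^{11/2}} \approx -0.53,\quad \alpha_{2, 1} \alpha_{2, 2} = 1.
+ \]
+ So $\alpha_{2, 1}, \alpha_{2, 2}$ are roots of a real quadratic with negative discriminant and constant term $1$, hence a complex conjugate pair on the unit circle. This is the $p = 2$ instance of $|a_p| \leq 2p^{(k - 1)/2}$, here $|\tau(2)| = 24 \leq 2 \cdot 2^{11/2} \approx 90.5$.
+\end{eg}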
+
+We said above that if $\dim V_p^{K_p^0} \geq 1$, then in fact it is equal to $1$. There is a generalization of that.
+
+\begin{thm}[Local Atkin--Lehner theorem]
+ If $(\pi, V)$ is an irreducible representation of $\GL_2(F)$, where $F/\Q_p$ is a finite extension and $\dim V = \infty$, then there exists a unique $n_\pi \geq 0$ such that
+ \[
+ V^{K_n} =
+ \begin{cases}
+ 0 & n < n_\pi\\
+ \text{one-dimensional} & n = n_\pi
+ \end{cases},\quad K_n = \left\{
+ g \equiv
+ \begin{pmatrix}
+ * & *\\
+ 0 & 1
+ \end{pmatrix}\bmod \varpi^n
+ \right\}.
+ \]
+ Taking the product of these invariant vectors for $n = n_\pi$ over all $p$ gives the Atkin--Lehner newform.
+\end{thm}
+
+What about the remaining prime, i.e.\ the prime at infinity?
+
+We have a map $f: \H^{\pm} \times \GL_2(\A_\Q^\infty) \to \C$. Writing $\H = \SL_2(\R)/\SO(2)$, which has an action of $\Gamma$, we can convert this to an action of $\SO(2)$ on $\Gamma \backslash \SL_2(\R)$. Consider the function
+\[
+ \Phi_f: \GL_2(\R) \times \GL_2(\A_\Q^\infty) = \GL_2(\A_\Q) \to \C
+\]
+given by
+\[
+ \Phi_f(h_\infty, h^\infty) = j_1(h_\infty, i)^{-k} f(h_\infty(i), h^\infty).
+\]
+Then this is invariant under $\GL_2(\Q)^+$, i.e.
+\[
+ \Phi_f(\gamma h_\infty, \gamma h^\infty) = \Phi(h_\infty, h^\infty).
+\]
+Now if we take
+\[
+ k_\theta =
+ \begin{pmatrix}
+ \cos \theta & \sin \theta\\
+ -\sin \theta & \cos \theta
+ \end{pmatrix} \in \SO(2),
+\]
+then
+\[
+ \Phi_f(h_\infty k_\theta, h^\infty) = e^{ik\theta} \Phi_f(h_\infty, h^\infty).
+\]
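+This transformation law is a direct computation using the cocycle property $j(gh, z) = j(g, h(z))\, j(h, z)$: since $k_\theta(i) = i$, $\det k_\theta = 1$ and $j(k_\theta, i) = -i \sin \theta + \cos \theta = e^{-i\theta}$, we get
+\[
+ j_1(h_\infty k_\theta, i)^{-k} = j_1(h_\infty, i)^{-k} j_1(k_\theta, i)^{-k} = e^{ik\theta} j_1(h_\infty, i)^{-k},
+\]
+while $(h_\infty k_\theta)(i) = h_\infty(i)$.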
+So we get invariance under $\gamma$, but we need to change what $\SO(2)$ does. In other words, $\Phi_f$ is now a function
+\[
+ \Phi_f: \GL_2(\Q) \backslash \GL_2(\A_\Q) \to \C
+\]
+satisfying various properties:
+\begin{itemize}
+ \item It generates a finite-dimensional representation of $\SO(2)$
+ \item It is invariant under an open compact subgroup of $\GL_2(\A^\infty_\Q)$
+ \item It satisfies growth conditions, cuspidality, etc.
+\end{itemize}
+By the Cauchy--Riemann equations, the holomorphicity condition of $f$ says $\Phi_f$ satisfies some differential equation. In particular, that says $\Phi_f$ is an eigenfunction for the Casimir in the universal enveloping algebra of $\sl_2$. These conditions together define the space of \term{automorphic forms}.
+
+\begin{eg}
+ Take the non-holomorphic Eisenstein series
+ \[
+ E(z, s) = \sum_{(c, d) \not= (0, 0)} \frac{(\Im z)^s}{|cz + d|^{2s}}.
+ \]
+ This is a real analytic function $\Gamma \backslash \H \to \C$. Using the above process, we get an automorphic form on $\GL_2(\Q) \backslash \GL_2(\A_\Q)$ with $k = 0$. So it is actually invariant under $\SO(2)$. It satisfies
+ \[
+ \Delta E = s(1 - s) E.
+ \]
+\end{eg}
+There exist automorphic cusp forms which are invariant under $\SO(2)$, known as Maass forms. They also satisfy a differential equation with a Laplacian eigenvalue $\lambda$. A famous conjecture of Selberg says $\lambda \geq \frac{1}{4}$. This is equivalent to saying that the representation of $\GL_2(\R)$ they generate is tempered.
+
+In terms of representation theory, this looks quite similar to Ramanujan's conjecture!
+\printindex
+\end{document}
diff --git a/books/cam/IV_M/topics_in_geometric_group_theory.tex b/books/cam/IV_M/topics_in_geometric_group_theory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f0a288e5185b776a3f7b719b05a2cf77fcc0aa0e
--- /dev/null
+++ b/books/cam/IV_M/topics_in_geometric_group_theory.tex
@@ -0,0 +1,3247 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IV}
+\def\nterm {Michaelmas}
+\def\nyear {2017}
+\def\nlecturer {H.\ Wilton}
+\def\ncourse {Topics in Geometric Group Theory}
+\def\ncoursehead {Geometric Group Theory}
+
+\input{header}
+
+\DeclareFontFamily{U}{mathb}{\hyphenchar\font45}
+\DeclareFontShape{U}{mathb}{m}{n}{
+ <5> <6> <7> <8> <9> <10> gen * mathb
+ <10.95> mathb10 <12> <14.4> <17.28> <20.74> <24.88> mathb12
+ }{}
+\DeclareSymbolFont{mathb}{U}{mathb}{m}{n}
+
+% Define a subset character from that font (from mathabx.dcl)
+% to completely replace the \subset character, you can replace
+% \varsubset with \subset
+
+\DeclareMathSymbol{\precneq}{3}{mathb}{"AC}
+
+\newcommand\Cay{\mathrm{Cay}}
+\newcommand{\qi}{\underset{qi}{\simeq}}
+\DeclareMathOperator\Area{Area}
+\DeclareMathOperator\FArea{FArea}
+\DeclareMathOperator\Diam{Diam}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+
+The subject of geometric group theory is founded on the observation that the algebraic and algorithmic properties of a discrete group are closely related to the geometric features of the spaces on which the group acts. This graduate course will provide an introduction to the basic ideas of the subject.
+
+Suppose $\Gamma$ is a discrete group of isometries of a metric space $X$. We focus on the theorems we can prove about $\Gamma$ by imposing geometric conditions on $X$. These conditions are motivated by curvature conditions in differential geometry, but apply to general metric spaces and are much easier to state. First we study the case when $X$ is \emph{Gromov-hyperbolic}, which corresponds to negative curvature. Then we study the case when $X$ is \emph{CAT(0)}, which corresponds to non-positive curvature. In order for this theory to be useful, we need a rich supply of negatively and non-positively curved spaces. We develop the theory of \emph{non-positively curved cube complexes}, which provide many examples of CAT(0) spaces and have been the source of some dramatic developments in low-dimensional topology over the last twenty years.
+
+\begin{itemize}
+ \item[Part 1.] We will introduce the basic notions of geometric group theory: Cayley graphs, quasiisometries, the Schwarz--Milnor Lemma, and the connection with algebraic topology via presentation complexes. We will discuss the word problem, which is quantified using the Dehn functions of a group.
+ \item[Part 2.] We will cover the basic theory of word-hyperbolic groups, including the Morse lemma, the local characterization of quasigeodesics, the linear isoperimetric inequality, finite presentability, quasiconvex subgroups, etc.
+ \item[Part 3.] We will cover the basic theory of CAT(0) spaces, working up to the Cartan--Hadamard theorem and Gromov's Link Condition. These two results together enable us to check whether the universal cover of a complex admits a CAT(1) metric.
+ \item[Part 4.] We will introduce cube complexes, in which Gromov's link condition becomes purely combinatorial. If there is time, we will discuss Haglund--Wise's \emph{special} cube complexes, which combine the good geometric properties of CAT(0) spaces with some strong algebraic and topological properties.
+\end{itemize}
+
+\subsubsection*{Pre-requisites}
+Part IB Geometry and Part II Algebraic topology are required.
+}
+
+\tableofcontents
+
+\section{Cayley graphs and the word metric}
+\subsection{The word metric}
+There is an unfortunate tendency for people to think of groups as being part of algebra. Which is, perhaps, reasonable. There are elements and there are operations on them. These operations satisfy some algebraic laws. But there is also geometry involved. For example, one of the simplest non-abelian groups we know is $D_6$, the group of symmetries of a triangle. This is fundamentally a geometric idea, and often this geometric interpretation can let us prove a lot of things about $D_6$.
+
+In general, the way we find groups in nature is that we have some object, and we look at its symmetry. In geometric group theory, what we do is that we want to remember the object the group acts on, and often these objects have some geometric structure. In fact, we shall see that all groups are symmetries of some geometric object.
+
+Let $\Gamma$ be a group, and $S$ a generating set for $\Gamma$. For the purposes of this course, $S$ will be finite. So in this course, we are mostly going to think about finitely-generated groups. Of course, there are non-finitely-generated groups out there in the wild, but we will have to put in some more effort to make our ideas work.
+
+The simplest geometric idea we can think of is that of distance, i.e.\ a metric. So we want to use our generating set $S$ to make $\Gamma$ into a metric space.
+
+\begin{defi}[Word length]\index{word length}
+ Let $\Gamma$ be a group and $S$ a finite generating set. If $\gamma \in \Gamma$, the \emph{word length} of $\gamma$ is
+ \[
+ \ell_S(\gamma) = \min \{n : \gamma = s_1^{\pm 1} \cdots s_n^{\pm 1}\text{ for some } s_i \in S\}.
+ \]
+\end{defi}
+
+\begin{defi}[Word metric]\index{word metric}
+ Let $\Gamma$ be a group and $S$ a finite generating set. The \emph{word metric} on $\Gamma$ is given by
+ \[
+ d_S (\gamma_1, \gamma_2) = \ell_S(\gamma_1^{-1} \gamma_2).
+ \]
+\end{defi}
+
+If we stare at the definition, we see that the word metric is left-invariant, i.e.\ for any $g, \gamma_1, \gamma_2 \in \Gamma$, we have
+\[
+ d_S(g \gamma_1, g \gamma_2) = d_S(\gamma_1, \gamma_2).
+ \]
+However, it is usually not the case that the metric is right invariant, i.e.\ $d_S(\gamma_1 g, \gamma_2 g)$ is not equal to $d_S(\gamma_1, \gamma_2)$. This is one of the prices we have to pay for wanting to do geometry.
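+Here is a concrete instance of the failure of right-invariance, using nothing beyond the definitions above.
+\begin{eg}
+ Take $\Gamma = S_3$ with $S = \{(1\; 2), (1\; 2\; 3)\}$, and let $g = (1\; 2\; 3)$. Then
+ \[
+ d_S(e, (1\; 2)) = \ell_S((1\; 2)) = 1,
+ \]
+ but
+ \[
+ d_S(g, (1\; 2) g) = \ell_S(g^{-1} (1\; 2) g) = \ell_S((1\; 3)) = 2,
+ \]
+ since the conjugate $(1\; 3)$ is neither a generator nor the inverse of one.
+\end{eg}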
+
+If we tried to draw our metric space out, it looks pretty unhelpful --- it's just a bunch of dots. It is not path connected. We can't really think about it well. To fix this, we talk about Cayley graphs.
+
+\begin{defi}[Cayley graph]\index{Cayley graph}
+ The \emph{Cayley graph} $\Cay_S(\Gamma)$ is defined as follows:
+ \begin{itemize}
+ \item $V(\Cay_S(\Gamma)) = \Gamma$
+ \item For each $\gamma \in \Gamma$ and $s \in S$, we draw an edge from $\gamma$ to $\gamma s$.
+ \end{itemize}
+ If we want it to be a directed and labelled graph, we can label each edge by the generator $s$.
+\end{defi}
+
+There is, of course, an embedding of $\Gamma$ into $\Cay_S(\Gamma)$. Moreover, if we metrize $\Cay_S(\Gamma)$ such that each edge is isometric to $[0, 1]$, then this embedding is isometric.
+
+It will be useful for future purposes to note that there is a left action of $\Gamma$ on $\Cay_S(\Gamma)$ which extends the left action of $\Gamma$ on itself. This works precisely because the edges are defined by multiplication on the \emph{right}, and left and right multiplication commute.
+
+\begin{eg}
+ Take $\Gamma = C_2 = \Z/2\Z$, and take $S = \{1\}$. Then the Cayley graph looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (2, 0) {};
+ \node [below] at (0, 0) {$0$};
+ \node [below] at (2, 0) {$1$};
+
+ \draw (0, 0) edge [bend left, ->-=0.57] (2, 0);
+ \draw (2, 0) edge [bend left, ->-=0.57] (0, 0);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{eg}
+ Take $\Gamma = S_3$ and $S = \{(1\; 2), (1\; 2\; 3)\}$. The Cayley graph is
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] (L) at (0, 0) {};
+ \node [circ] (R) at (2, 0) {};
+ \node [circ] (T) at (1, 1.732) {};
+ \node [circ] (LL) at (-0.866, -0.5) {};
+ \node [circ] (RR) at (2.866, -0.5) {};
+ \node [circ] (TT) at (1, 2.732) {};
+
+% \node [below] at (L) {$1$};
+% \node at (R) {$(3\; 2\; 1)$};
+% \node at (T) {$(1\; 2\; 3)$};
+% \node at (LL) {$(1\; 2)$};
+
+ \draw (L) edge [bend left, ->-=0.55] (T);
+ \draw (T) edge [bend left, ->-=0.55] (R);
+ \draw (R) edge [bend left, ->-=0.55] (L);
+
+ \draw (TT) edge [bend right, ->-=0.57] (LL);
+ \draw (LL) edge [bend right, ->-=0.57] (RR);
+ \draw (RR) edge [bend right, ->-=0.57] (TT);
+
+ \draw (T) edge [bend left, ->-=0.66] (TT);
+ \draw (TT) edge [bend left, ->-=0.66] (T);
+
+ \draw (R) edge [bend left, ->-=0.66] (RR);
+ \draw (RR) edge [bend left, ->-=0.66] (R);
+
+ \draw (L) edge [bend left, ->-=0.66] (LL);
+ \draw (LL) edge [bend left, ->-=0.66] (L);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{eg}
+ Take $\Gamma = \Z$, and $S = \{1\}$. Then the Cayley graph looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-4,-3,-2,-1,0,1,2,3,4} {
+ \node [circ] at (\x, 0) {};
+ \node [below] at (\x, 0) {$\x$};
+ }
+ \draw (-4.5, 0) -- (4.5, 0);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+\begin{eg}
+ Take $\Gamma = \Z$ and $S = \{2, 3\}$. Then the Cayley graph looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \clip (-5, -0.3) rectangle (5, 0.5);
+ \foreach \x in {-4,-3,-2,-1,0,1,2,3,4} {
+ \node [circ] at (\x, 0) {};
+ }
+ \foreach \x in {-6,-5,-4,-3,-2,-1,0,1,2,3,4} {
+ \draw [red] (\x, 0) edge [bend left] +(3, 0);
+ \draw [blue] (\x, 0) edge [bend right] +(2, 0);
+ }
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+Now this seems quite complicated. It's still the same group $\Gamma = \Z$, but with a different choice of generating set, it looks very different. It seems like what we do depends heavily on the choice of the generating set.
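+We can make the ``looks the same from far away'' idea quantitative in this example, by a direct check from the definition of word length.
+\begin{eg}
+ For $\Gamma = \Z$ and $S = \{2, 3\}$, we have $\ell_S(1) = 2$ (since $1 = 3 - 2$ but $1 \not\in \{\pm 2, \pm 3\}$), while $\ell_S(n) = \lceil |n|/3 \rceil$ for all $|n| \geq 2$. So for all $n$,
+ \[
+ \frac{|n|}{3} \leq \ell_S(n) \leq |n| + 1,
+ \]
+ i.e.\ the word metrics for $\{1\}$ and $\{2, 3\}$ agree up to bounded multiplicative and additive errors.
+\end{eg}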
+
+\begin{eg}
+ If $\Gamma = \Z^2$ and $S = \{(1, 0), (0, 1)\}$, then the Cayley graph is a grid
+ \begin{center}
+ \begin{tikzpicture}
+ \foreach \x in {-2,-1,0,1,2} {
+ \foreach \y in {-2,-1,0,1,2} {
+ \node [circ] at (\x, \y) {};
+ }
+ }
+ \draw [step=1] (-2, -2) grid (2, 2);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{eg}
+ If $\Gamma = F_2 = \bra a, b\ket$, $S = \{a, b\}$, then the Cayley graph looks like
+ \begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, -3/0/180, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+If one has done algebraic topology, then one might recognize this as the universal cover of a space. This is not a coincidence, and we will talk about this later.
+
+Let's now think a bit more about the choice of generating set. As one would expect, the Cayley graph depends a lot on the generating set chosen, but often we only care about the group itself, and not a group-with-a-choice-of-generating-set.
+
+The key observation is that while the two Cayley graphs of $\Z$ we drew seem quite different, if we looked at them from $100$ meters away, they look quite similar --- they both look like a long, thin line.
+
+\begin{defi}[Quasi-isometry]\index{quasi-isometry}\index{quasi-isometric embedding}
+ Let $\lambda \geq 1$ and $c \geq 0$. A function between metric spaces $f: X \to Y$ is a \emph{$(\lambda, c)$-quasi-isometric embedding} if for all $x_1, x_2 \in X$
+ \[
+ \frac{1}{\lambda} d_X(x_1, x_2) - c \leq d_Y(f(x_1), f(x_2)) \leq \lambda d_X(x_1, x_2) + c.
+ \]
+ If, in addition, there is a $C$ such that for all $y \in Y$, there exists $x \in X$ such that $d_Y(y, f(x)) \leq C$, we say $f$ is a \emph{quasi-isometry}, and $X$ is \term{quasi-isometric} to $Y$. We write $X\qi Y$.
+\end{defi}
+We can think of the first condition as saying we are \term{quasi-injective}, and the second as saying we are \term{quasi-surjective}.
+
+The right way to think about the definition is that the $c$ says we don't care about what happens at scales less than $c$, and the $\lambda$ is saying we allow ourselves to stretch distances. Note that if we take $c = 0$, then this is just the notion of a bi-Lipschitz map. But in general, we don't even require $f$ to be continuous, since continuity is a rather fine-grained property.
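+A standard example of a quasi-isometry that is wildly discontinuous (this is just the floor function, checked against the definition):
+\begin{eg}
+ The map $f: \R \to \Z$, $x \mapsto \lfloor x \rfloor$ satisfies
+ \[
+ |x - y| - 1 \leq |f(x) - f(y)| \leq |x - y| + 1,
+ \]
+ so it is a $(1, 1)$-quasi-isometric embedding; it is surjective, so certainly quasi-surjective. Hence $\R \qi \Z$, even though $f$ is not continuous.
+\end{eg}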
+
+\begin{ex}
+ Check that $X\qi Y$ is an equivalence relation.
+\end{ex}
+This is not immediately obvious, because there is no inverse to $f$. In fact, we need the axiom of choice to prove this result.
+
+\begin{eg}
+ Any bounded metric space is quasi-isometric to a point. In particular, if $\Gamma$ is finite, then $(\Gamma, d_S) \qi 1$.
+\end{eg}
+For this reason, this is not a great point of view for studying finite groups. On the other hand, for infinite groups, this is really useful.
+
+\begin{eg}
+ For any $\Gamma$ and $S$, the inclusion $(\Gamma, d_S)\hookrightarrow (\Cay_S(\Gamma), d_S)$ is a quasi-isometry (take $\lambda = 1, c = 0, C = \frac{1}{2}$).
+\end{eg}
+
+Of course, the important result is that any two word metrics on the same group are quasi-isometric.
+\begin{thm}
+ For any two finite generating sets $S, S'$ of a group $\Gamma$, the identity map $(\Gamma, d_S) \to (\Gamma, d_{S'})$ is a quasi-isometry.
+\end{thm}
+
+\begin{proof}
+ Pick
+ \[
+ \lambda = \max_{s \in S} \ell_{S'}(s),\quad \lambda' = \max_{s \in S'} \ell_{S}(s).
+ \]
+ We then see
+ \[
+ \ell_S(\gamma) \leq \lambda' \ell_{S'}(\gamma), \quad \ell_{S'}(\gamma) \leq \lambda \ell_S(\gamma)
+ \]
+ for all $\gamma \in \Gamma$. Then the claim follows.
+\end{proof}
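+To see the constants in action, we can run the proof on the two generating sets of $\Z$ from before.
+\begin{eg}
+ For $\Gamma = \Z$, $S = \{1\}$ and $S' = \{2, 3\}$, we get
+ \[
+ \lambda = \ell_{S'}(1) = 2,\quad \lambda' = \max(\ell_S(2), \ell_S(3)) = 3.
+ \]
+ So $\ell_S(\gamma) \leq 3 \ell_{S'}(\gamma)$ and $\ell_{S'}(\gamma) \leq 2 \ell_S(\gamma)$ for all $\gamma$, and the identity map is a $(3, 0)$-quasi-isometry.
+\end{eg}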
+So as long as we are willing to work up to quasi-isometry, we have a canonically defined geometric object associated to each finitely-generated group.
+
+Our next objective is to be able to state and prove the Schwarz--Milnor lemma. This is an important theorem in geometric group theory, saying that if a group $\Gamma$ acts ``nicely'' on a metric space $X$, then $\Gamma$ must be finitely generated, and in fact $(\Gamma, d_S)$ is quasi-isometric to $X$. This allows us to produce some rather concrete geometric realizations of $(\Gamma, d_S)$, as we will see in examples.
+
+In order to do that, we must write down a bunch of definitions.
+
+\begin{defi}[Geodesic]\index{geodesic}
+ Let $X$ be a metric space. A \emph{geodesic} in $X$ is an isometric embedding of a closed interval $\gamma: [a, b] \to X$.
+\end{defi}
+This is not exactly the same as the notion in differential geometry. For example, if we have a sphere and two non-antipodal points on the sphere, then there are many geodesics connecting them in the differential geometry sense, but only the shortest one is a geodesic in our sense. To recover the differential geometry notion, we need to insert the words ``locally'' somewhere, but we shall not need that.
+
+\begin{defi}[Geodesic metric space]\index{geodesic metric space}\index{metric space!geodesic}
+ A metric space $X$ is called \emph{geodesic} if every pair of points $x, y \in X$ is joined by a geodesic denoted by $[x, y]$.
+\end{defi}
+Note that the notation $[x, y]$ is not entirely honest, since there may be many geodesics joining two points. However, this notation turns out to work surprisingly well.
+
+\begin{defi}[Proper metric space]\index{proper metric space}
+ A metric space is \emph{proper} if closed balls in $X$ are compact.
+\end{defi}
+For example, infinite-dimensional Hilbert spaces are not proper, but finite-dimensional ones are.
+
+\begin{eg}
+ If $\Gamma = \langle S\rangle$, then $\Cay_S(\Gamma)$ is geodesic. If $S$ is finite, then $\Cay_S(\Gamma)$ is proper.
+\end{eg}
+
+\begin{eg}
+ Let $M$ be a connected Riemannian manifold. Then there is a metric on $M$ defined by
+ \[
+ d(x, y) = \inf_{\alpha: x \to y} \length(\alpha),
+ \]
+ where we take the infimum over all smooth paths from $x$ to $y$.
+
+ The \term{Hopf--Rinow theorem} says if $M$ is complete (as a metric space), then the metric is proper and geodesic. This is great, because completeness is in some sense a local property, but ``proper'' and ``geodesic'' are global properties.
+\end{eg}
+
+We need two more definitions, before we can state the Schwarz--Milnor lemma.
+\begin{defi}[Proper discontinuous action]\index{proper discontinuous action}
+ An action of $\Gamma$ on a topological space $X$ is \emph{properly discontinuous} if for every compact set $K$, the set
+ \[
+ \{g \in \Gamma: gK \cap K \not= \emptyset\}
+ \]
+ is finite.
+\end{defi}
+
+\begin{defi}[Cocompact action]\index{cocompact action}
+ An action of $\Gamma$ on a topological space $X$ is \emph{cocompact} if the quotient $\Gamma \backslash X$ is compact.
+\end{defi}
+
+\begin{lemma}[Schwarz--Milnor lemma]
+ Let $X$ be a proper geodesic metric space, and let $\Gamma$ act properly discontinuously, cocompactly on $X$ by isometries. Then
+ \begin{enumerate}
+ \item $\Gamma$ is finitely-generated.
+ \item For any $x_0 \in X$, the orbit map
+ \begin{align*}
+ \Gamma &\to X\\
+ \gamma &\mapsto \gamma x_0
+ \end{align*}
+ is a quasi-isometry $(\Gamma, d_S) \qi (X, d)$.
+ \end{enumerate}
+\end{lemma}
+An easy application is that $\Gamma$ acting on its Cayley graph satisfies these conditions. So this reproduces our previous observation that a group is quasi-isometric to its Cayley graph. More interesting examples involve manifolds.
+
+\begin{eg}
+ Let $M$ be a closed (i.e.\ compact without boundary), connected Riemannian manifold. Then the universal cover $\tilde{M}$ is also a complete, connected, Riemannian manifold. By the Hopf--Rinow theorem, this is proper and geodesic.
+
+ Since the metric of $\tilde{M}$ is pulled back from $M$, we know $\pi_1(M)$ acts on $\tilde{M}$ by isometries. Therefore by the Schwarz--Milnor lemma, we know
+ \[
+ \pi_1(M) \qi \tilde{M}.
+ \]
+\end{eg}
+\begin{eg}
+ The universal cover of the torus $S^1 \times S^1$ is $\R^2$, and the fundamental group is $\Z^2$. So we know $\Z^2 \qi \R^2$, which is not surprising.
+\end{eg}
+
+\begin{eg}
+ Let $M = \Sigma_2$, the surface of genus $2$.
+ \begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw plot [smooth, tension=0.8] coordinates {(1.6, 0.2) (1.4, 0.27) (0.7, 0.35) (0, 0.23) (-0.7, 0.35) (-1.4, 0.27) (-1.7, 0) (-1.4, -0.27) (-0.7, -0.35) (0, -0.23) (0.7, -0.35) (1.4, -0.27) (1.6, -0.2)};
+ \draw (1.6, 0) ellipse (0.1 and 0.2);
+
+ \foreach \x in {0.8, -0.8}{
+ \begin{scope}[shift={(\x, -0.03)}]
+ \path[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0) (-0.45, 0)--(0, -0.28)--(0.45, 0);
+ \draw[rounded corners=14pt] (-0.55, 0.05)--(0, -.15)--(0.55, 0.05);
+ \draw[rounded corners=12pt] (-0.45, 0)--(0, 0.2)--(0.45, 0);
+ \end{scope}
+ }
+ \end{tikzpicture}
+ \end{center}
+ We then have
+ \[
+ \pi_1 \Sigma_2 \cong \langle a_1, b_1, a_2, b_2 \mid [a_1, b_1][a_2, b_2]\rangle.
+ \]
+ On the other hand, the universal cover can be thought of as the hyperbolic plane $\H^2$, as we saw in IB Geometry. % insert picture!!
+
+ So it follows that
+ \[
+ \pi_1 \Sigma_2 \qi \H^2.
+ \]
+\end{eg}
+This gives us some concrete hold on what the group $\pi_1 \Sigma_2$ is like, since people have been thinking about the hyperbolic plane for centuries.
+
+\begin{proof}[Proof of Schwarz--Milnor lemma]
+ Let $\bar{B} = \bar{B}(x, R)$ be such that $\Gamma \bar{B} = X$. This is possible since the quotient is compact.
+
+ Let $S = \{\gamma \in \Gamma: \gamma \bar{B} \cap \bar{B} \not=\emptyset\}$. By proper discontinuity, this set is finite.
+
+ We let
+ \[
+ r = \inf_{\gamma' \not \in S} d(\bar{B}, \gamma' \bar{B}).
+ \]
+ If we think about it, we see that in fact $r$ is the \emph{minimum} of this set, and in particular $r > 0$.
+
+ Finally, let
+ \[
+ \lambda = \max_{s \in S} d(x_0, s x_0).
+ \]
+ We will show that $S$ generates $\Gamma$, and use the word metric given by $S$ to show that $\Gamma$ is quasi-isometric to $X$.
+
+ We first show that $\Gamma = \langle S\rangle$. We let $\gamma \in \Gamma$ be arbitrary. Let $[x_0, \gamma x_0]$ be a geodesic from $x_0$ to $\gamma x_0$. Let $\ell$ be such that
+ \[
+ (\ell - 1)r \leq d(x_0, \gamma x_0) < \ell r.
+ \]
+ Then we can divide the geodesic into $\ell$ pieces of length about $r$: we can choose points $x_1, \ldots, x_{\ell - 1}, x_\ell = \gamma x_0$ along it such that $d(x_{i - 1}, x_i) < r$.
+
+ By assumption, we can pick $\gamma_i \in \Gamma$ such that $x_i \in \gamma_i \bar{B}$, and further we pick $\gamma_\ell = \gamma$, $\gamma_0 = e$. Now for each $i$, we know
+ \[
+ d(\bar{B}, \gamma_{i - 1}^{-1} \gamma_i \bar{B}) = d(\gamma_{i - 1} \bar{B}, \gamma_i \bar{B}) \leq d(x_{i - 1}, x_i) < r.
+ \]
+ So it follows that $\gamma_{i - 1}^{-1} \gamma_i \in S$. So we have
+ \[
+ \gamma = \gamma_\ell = (\gamma_0^{-1} \gamma_1) (\gamma_1^{-1}\gamma_2) \cdots (\gamma_{\ell - 1}^{-1}\gamma_\ell) \in \langle S\rangle.
+ \]
+ This proves $\Gamma = \langle S \rangle$.
+
+ To prove the second part, we simply note that
+ \[
+ r\ell - r \leq d(x_0, \gamma x_0).
+ \]
+ We also saw that $\ell$ is an upper bound for the word length of $\gamma$ with respect to $S$. So we have
+ \[
+ r \ell_S(\gamma) - r \leq d(x_0, \gamma x_0).
+ \]
+ On the other hand, by definition of $\lambda$, we have
+ \[
+ d(x_0, \gamma x_0) \leq \lambda \ell_S(\gamma).
+ \]
+ So the orbit map $\gamma \mapsto \gamma x_0$ is a quasi-isometric embedding, and quasi-surjectivity follows from cocompactness.
+\end{proof}
+
+
+\subsection{Free groups}
+If we want to understand a group $\Gamma$, then a reasonable strategy now would be to come up with a well-understood (proper geodesic) space for $\Gamma$ to act on (properly discontinuously, cocompactly), and see what we can tell from this quasi-isometry. There is no general algorithm for doing so, but it turns out that for certain groups, there are some ``obvious'' good choices to make.
+
+Let's begin by understanding free groups. In some sense, they are the ``simplest'' groups, but they also contain some rich structure. Roughly speaking, the free group on a set $S$ is a group that is ``freely'' generated by $S$. This freeness is made precise by the following universal property.
+
+\begin{defi}[Free group]\index{free group}\index{$F(S)$}
+ Let $S$ be a (usually finite) set. A group $F(S)$ with a map $S \to F(S)$ is called the \emph{free group on $S$} if it satisfies the following universal property: for any group $G$ and any set map $S \to G$, there is a unique group homomorphism $F(S) \to G$ such that the following diagram commutes:
+ \[
+ \begin{tikzcd}
+ S \ar[r] \ar[rd] & F(S) \ar[d, dashed]\\
+ & G
+ \end{tikzcd}.
+ \]
+ Usually, if $|S| = r$, we just write $F(S) = F_r$.\index{$F_r$}
+\end{defi}
+
+In fancy category-theoretic language, this says the functor $F: \mathbf{Sets} \to \mathbf{Grps}$ is left adjoint to the forgetful functor $U: \mathbf{Grps} \to \mathbf{Sets}$.
+
+This definition is good for some things, but not good for others. One thing it does well is to make clear that $F(S)$ is unique up to isomorphism if it exists. However, it is not immediately clear that $F(S)$ exists at all (unless one applies the adjoint functor theorem)!
+
+Thus, we must concretely construct a group satisfying this property. Apart from telling us that $F(S)$ actually exists, a concrete construction lets us actually do things with the group.
+
+For concreteness, suppose $S = \{a_1, \ldots, a_r\}$ is a finite set. We consider a graph
+\[
+ X_r =
+ \begin{tikzpicture}[eqpic, scale=3]
+ \node [circ] {};
+ \foreach \r in {-130,-65,0,65,130} {
+ \draw [rotate=\r] (0, 0) edge [out=-30, in=30, looseness=2, loop] (0, 0);
+ }
+
+ \node at (-0.2, 0.03) {$\vdots$};
+ \end{tikzpicture}
+\]
+where there are $r$ ``petals''. This is known as the \term{rose with $r$ petals}.
+
+We claim that its fundamental group is $F(S)$. To see this, we have to understand the universal cover $\tilde{X}_r$ of $X_r$. We know $\tilde{X}_r$ is a simply connected graph, so it is a tree. Moreover, since it is a covering space of $X_r$, it is regular: every vertex has degree $2r$.
+
+If $r = 2$, then this is our good old picture % label edges as a_1 (horizontal) and a_2 (vertical), and label basepoint as *
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, -3/0/180, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \node [circ] {};
+ \node [anchor = north east] {$*$};
+ \draw [->] (1.7, 0) -- (1.71, 0);
+ \node [above] at (1.5, 0) {$a_1$};
+ \draw [->] (-1.3, 0) -- (-1.29, 0);
+ \node [above] at (-1.5, 0) {$a_1$};
+
+ \draw [->] (0, 1.7) -- (0, 1.71);
+ \node [left] at (0, 1.5) {$a_2$};
+ \draw [->] (0, -1.3) -- (0, -1.29);
+ \node [left] at (0, -1.5) {$a_2$};
+
+ \end{tikzpicture}
+\end{center}
+
+We can use this to understand the fundamental group $\pi_1(X_r)$. The elements of $\pi_1(X_r)$ are homotopy classes $[\gamma]$ of based loops in $X_r$. Using the homotopy lifting lemma, each such loop $\gamma$ can be lifted to a path $\tilde{\gamma}$ in the universal cover starting at $*$.
+\begin{center}
+ \begin{tikzpicture}[scale=0.5]
+ \draw (-3, 0) -- (3, 0);
+ \draw (0, -3) -- (0, 3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, -3/0/180, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \foreach \x/\y/\rot in {3/0/0, 0/3/90, 0/-3/270} {
+ \begin{scope}[shift={(\x, \y)}, scale=0.5, rotate=\rot]
+ \draw (0, 0) -- (3, 0);
+ \draw (0, 3) -- (0, -3);
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \end{scope}
+ }
+ \node [circ] {};
+ \node [anchor = north east] {$*$};
+ \draw [mred, ->-=0.1, ->-=0.45, ->-=0.7, ->-=0.9] (0, 0.1) -- (2.9, 0.1) -- (2.9, 1.5) -- (3.1, 1.5) -- (3.1, -1.5) -- (2.9, -1.5) -- (2.9, -0.1) -- (0.1, -0.1) -- (0.1, -3);
+ \end{tikzpicture}
+\end{center}
+In general, there are many paths that represent the same element. However, there is a ``canonical'' such path. Indeed, we can eliminate all backtracks in the path to obtain an embedded path from $*$ to the end point of $\tilde{\gamma}$. We may reparametrize so that it spends equal time on each edge, but that is not important.
+
+Algebraically, how do we understand this? Each petal of $X_r$ represents a generator $a_i$ of $\pi_1(X_r)$. If we look at the ``canonical'' form of the path above, then we see that we are writing $\gamma$ in the form
+\[
+ \gamma = a_{i_1}^{\pm 1} a_{i_2}^{\pm 1} a_{i_3}^{\pm 1}\cdots a_{i_n}^{\pm 1}.
+\]
+The fact that we have eliminated all backtracks says this expression is \emph{reduced}\index{reduced word}, i.e.\ we don't see anything looking like $a_i a_i^{-1}$, or $a_i^{-1} a_i$. Importantly, there is a unique way to write $\gamma$ as a reduced word, corresponding to the unique embedded path from $*$ to the end point of $\tilde{\gamma}$ in the universal cover. Thus, if we somehow expressed $\gamma$ as a \emph{non-reduced} word in the $a_1, \ldots, a_r$, then we can reduce it by removing instances of $a_i a_i^{-1}$ or $a_i^{-1} a_i$, and keep doing so until we don't see any. This terminates since the length of the word is finite, and the result does not depend on the order we performed the reduction in, by uniqueness.
+
+\begin{eg}
+ Take the case $\pi_1 X_2 = \langle a, b\rangle$. We might be given a word
+ \[
+ a^3 a^{-2} b^{-2} b^3 a^5 a^{-3} a^{-3} b^{-1}.
+ \]
+ We can reduce this to
+ \[
+ aba^{-1}b^{-1}.
+ \]
+ In particular, since this is not the identity, we know the original path was not null-homotopic.
+\end{eg}
+
+This gives us a very concrete understanding of $\pi_1(X_r)$ --- it is generated by $a_1, \ldots, a_r$, and each element is represented uniquely by a reduced word. To multiply words together, write them one after another, and then reduce the result.
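This concrete description translates directly into code. The following Python sketch (our own illustration, not anything from the course; words are represented as lists of (generator, exponent) pairs with exponent $\pm 1$) implements free reduction, inversion, and multiplication in $F_r$:

```python
# Elements of F(S): a word is a list of (generator, exponent) pairs,
# with exponent +1 or -1.  A word is reduced if no adjacent pair
# cancels, i.e. no s s^{-1} or s^{-1} s occurs.

def free_reduce(word):
    """Return the unique reduced word representing the same element.
    A stack suffices: each new letter either cancels the top of the
    stack or is pushed onto it."""
    out = []
    for s, e in word:
        if out and out[-1] == (s, -e):
            out.pop()            # cancel s s^{-1} or s^{-1} s
        else:
            out.append((s, e))
    return out

def multiply(u, v):
    """Product in F(S): concatenate, then reduce."""
    return free_reduce(u + v)

def inverse(word):
    """Reverse the word and flip every exponent."""
    return [(s, -e) for s, e in reversed(word)]

# The example word a^3 a^{-2} b^{-2} b^3 a^5 a^{-3} a^{-3} b^{-1}
# from above reduces to a b a^{-1} b^{-1}:
w = ([('a', 1)] * 3 + [('a', -1)] * 2 + [('b', -1)] * 2 + [('b', 1)] * 3
     + [('a', 1)] * 5 + [('a', -1)] * 3 + [('a', -1)] * 3 + [('b', -1)])
assert free_reduce(w) == [('a', 1), ('b', 1), ('a', -1), ('b', -1)]
```

The single left-to-right pass with a stack is enough: a new cancellation can only appear at the top of the stack, and by the uniqueness statement above the order of cancellations does not affect the result anyway.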
+
+From this description, it is clear that
+\begin{cor}
+ $\pi_1(X_r)$ has the universal property of $F(S)$, with $S = \{a_1, \ldots, a_r\}$. So $\pi_1(X_r) \cong F_r$.
+\end{cor}
+
+\begin{proof}
+ Given any map $f: S \to G$, define $\tilde{f}: F(S) \to G$ by
+ \[
+ \tilde{f}(a_{i_1}^{\pm 1}\cdots a_{i_n}^{\pm 1}) = f(a_{i_1})^{\pm 1} \cdots f(a_{i_n})^{\pm 1}
+ \]
+ for any reduced word $a_{i_1}^{\pm 1} \cdots a_{i_n}^{\pm 1}$. This is easily seen to be well-defined and is the unique map making the diagram commute.
+\end{proof}
+
+\subsection{Finitely-presented groups}
+Let's try to consider groups that are slightly more complex than free groups. If a group $\Gamma$ is generated by $S$, then we have a surjective homomorphism $\eta: F(S) \to \Gamma$. Let $K = \ker \eta$. Then the first isomorphism theorem tells us
+\[
+ \Gamma \cong \frac{F(S)}{K}.
+\]
+Since we understand $F(S)$ quite explicitly, it would be nice if we had a solid grasp on $K$ as well. If $R$ normally generates $K$, so that $K = \bra\bra R\ket\ket$, then we say that $\bra S \mid R \ket$ is a \term{presentation} for $\Gamma$. We will often write
+\[
+ \Gamma \cong \langle S \mid R \rangle.
+\]
+\begin{eg}
+ \[
+ \Z^2 \cong \langle a, b\mid aba^{-1}b^{-1}\rangle.
+ \]
+\end{eg}
+
+\begin{defi}[Finitely-presented group]\index{finitely-presented group}\index{finitely-presentable group}
+ A \emph{finitely-presentable group} is a group $\Gamma$ for which there exist finite sets $S$ and $R$ such that $\Gamma \cong \bra S \mid R\ket$.
+
+ A \emph{finitely-presented group} is a group $\Gamma$ equipped with finite $S$ and $R$ such that $\Gamma \cong \bra S \mid R \ket$.
+\end{defi}
+
+Presentations give us a way to understand a group geometrically. Given a presentation $\mathcal{P} = \langle S \mid R \rangle$, we can construct a space \term{$X_\mathcal{P}$} such that $\pi_1 X_\mathcal{P} \cong \bra S \mid R\ket$.
+
+To do so, we first construct a rose with $|S|$ petals, each labelled by an element of $S$. For each $r \in R$, we glue a disc onto the rose along the path specified by $r$. The \term{Seifert--van Kampen theorem} then tells us $\pi_1 X_{\mathcal{P}} \cong \Gamma$.
+
+\begin{eg}
+ We take the presentation $\Z^2 \cong \bra a, b \mid aba^{-1}b^{-1}\ket$. If we think about it for a bit, we see this construction gives the torus.
+\end{eg}
+
+Conversely, if we are given a connected cell complex $X$, we can obtain a presentation of the fundamental group. This is easy if the cell complex has a single $0$-cell. If it has multiple $0$-cells, then we choose a maximal tree in $X^{(1)}$, and the edges $S = \{a_i\}$ not in the maximal tree define a generating set for $\pi_1 X^{(1)}$. The attaching maps of the $2$-cells in $X$ define elements $R = \{r_j\}$ of $\pi_1 X^{(1)}$, and these data define a presentation $\mathcal{P}_X = \bra S \mid R\ket$ for $\pi_1 X$.
+
+This is not canonical, since we have to pick a maximal tree, but let's not worry too much about that. The point of maximal trees is to get around the problem that we might have more than one vertex.
+
+\begin{ex}
+ If $X$ has one vertex, then $\Cay_S \pi_1 X = \tilde{X}^{(1)}$.
+\end{ex}
+
+\subsection{The word problem}
+A long time ago, Poincar\'e was interested in characterizing the $3$-sphere. His first conjecture was that if a closed $3$-manifold $M$ had $H_*(M) \cong H_*(S^3)$, then $M \cong S^3$. Poincar\'e thought very hard about the problem, and came up with a counterexample himself, now known as the \term{Poincar\'e homology sphere}\index{homology sphere}. This is a manifold $P^3$ with
+\[
+ H_*(P) \cong H_*(S^3),
+\]
+but it turns out there is a surjection $\pi_1(P^3) \to A_5$. In particular, $P^3$ is not homeomorphic to a sphere.
+
+So he made a second conjecture: if $M$ is a compact $3$-manifold with $\pi_1M \cong 1$, then $M \cong S^3$. Note that the hypothesis already implies $H_*(M) \cong H_*(S^3)$ by the Hurewicz theorem and Poincar\'e duality. This is known as the Poincar\'e conjecture, and was proved in 2002 by Perelman.
+
+But there is more to say about Poincar\'e's initial conjecture. Some time later, Max Dehn constructed an \emph{infinite family} of $3$-dimensional homology spheres. To prove that he genuinely had infinitely many distinct homology spheres, he had to show that their fundamental groups were pairwise non-isomorphic. He did manage to write down presentations of these fundamental groups, so what was left was to decide whether two given presentations actually present the same group.
+
+For our purposes, perhaps we should be slightly less ambitious. Suppose we are given a presentation $\bra S \mid R\ket$ of a finitely-presented group $\Gamma$. Define the alphabet
+\[
+ S^{\pm} = S \amalg S^{-1} = \{ s, s^{-1}: s \in S\},
+\]
+and let $S^*$ be the set of all finite words in $S^{\pm}$. Then the fundamental question is, given a word $w \in S^*$, when does it represent the identity element $1 \in \Gamma$? Ideally, we would want an \emph{algorithm} that determines whether this is the case.
+
+\begin{eg}
+ Recall that in the free group $F(S) = \bra S \mid \emptyset\ket$, we had a normal form theorem, namely that every element of $F(S)$ can be written \emph{uniquely} as a product of generators and their inverses in which there are no occurrences of $ss^{-1}$ or $s^{-1}s$. This gives a way of determining whether a given word $w \in S^*$ represents the identity. We perform \term{elementary reductions}, removing every occurrence of $ss^{-1}$ or $s^{-1}s$. Since each such step reduces the length of the word, this eventually stops, and we end up with a \term{reduced word}. If this reduced word is empty, then $w$ represents the identity; if it is non-empty, $w$ does not.
+\end{eg}
+Thus, we have a complete solution to the word problem for free groups, and moreover the algorithm is something we can practically implement.
+
+For a general finitely-presented group, this is more difficult. We can first reformulate our problem. The presentation $\Gamma = \bra S \mid R \ket$ gives us a map $F(S) \to \Gamma$, sending $s$ to $s$. Each word gives us an element in $F(S)$, and thus we can reformulate our problem as identifying the elements in the kernel of this map.
+
+\begin{lemma}
+ Let $\Gamma = \bra S \mid R\ket$. Then the elements of $\ker (F(S) \to \Gamma)$ are precisely those of the form
+ \[
+ \prod_{i = 1}^d g_i r_i^{\pm 1} g_i^{-1},
+ \]
+ where $g_i \in F(S)$ and $r_i \in R$.
+\end{lemma}
+
+\begin{proof}
+ We know that $\ker (F(S) \to \Gamma) = \bra \bra R\ket \ket$, and the set described above is exactly $\bra \bra R\ket \ket$, noting that $gxyg^{-1} = (gxg^{-1})(gyg^{-1})$.
+\end{proof}
+
+This tells us the set of elements of $S^*$ that represent the identity is \term{recursively enumerable}, i.e.\ there is an algorithm that lists out all words representing the identity. However, this is not very helpful when we want to decide whether a given word represents the identity. If it does, then our algorithm will eventually tell us so (maybe after 3 trillion years), but if it doesn't, the program will just run forever, and we will never learn that it doesn't represent the identity.
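The semi-decision procedure can be sketched in Python. Everything here (the function names, the representation of words, and the truncation bounds) is our own illustrative choice: the point is only that \emph{finding} $w$ among products of conjugates certifies $w = 1$ in $\Gamma$, while \emph{not} finding it within the bounds certifies nothing.

```python
from itertools import product

# Toy sketch of the recursively-enumerable search: enumerate products
# of conjugates g r^{±1} g^{-1} (with bounded d and bounded |g|) and
# check whether any freely reduces to the input word.

def free_reduce(word):
    out = []
    for s, e in word:
        if out and out[-1] == (s, -e):
            out.pop()
        else:
            out.append((s, e))
    return out

def inv(word):
    return [(s, -e) for s, e in reversed(word)]

def all_words(S, max_len):
    """All words over S^± of length at most max_len."""
    letters = [(s, e) for s in S for e in (1, -1)]
    for n in range(max_len + 1):
        for w in product(letters, repeat=n):
            yield list(w)

def found_as_product_of_conjugates(w, S, R, max_d, max_g):
    """True if w is a product of at most max_d conjugates g r^{±1} g^{-1}
    with |g| <= max_g.  False only means 'not found within the bounds'."""
    target = tuple(free_reduce(w))
    conjs = {tuple(free_reduce(g + c + inv(g)))
             for g in all_words(S, max_g)
             for r in R for c in (r, inv(r))}
    frontier = {()}                       # the empty product
    for _ in range(max_d):
        frontier |= {tuple(free_reduce(list(p) + list(c)))
                     for p in frontier for c in conjs}
        if target in frontier:
            return True
    return target in frontier

# In Z^2 = <a, b | [a, b]>, the commutator itself is a single conjugate:
comm = [('a', 1), ('b', 1), ('a', -1), ('b', -1)]
assert found_as_product_of_conjugates(comm, ['a', 'b'], [comm], 1, 0)
# The word "a" is not null-homotopic, so the bounded search finds nothing:
assert not found_as_product_of_conjugates([('a', 1)], ['a', 'b'], [comm], 2, 1)
```

The bounds `max_d` and `max_g` make each call terminate; removing them (and enumerating ever larger bounds) recovers the semi-decision procedure that halts exactly on the null-homotopic words.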
+
+\begin{eg}
+ Let $\Gamma = \Z^2 \cong \bra a, b \mid [a, b]\ket$. Consider the word
+ \[
+ w = a^n b^n a^{-n} b^{-n}.
+ \]
+ We see that this represents our identity element. But how long would it take for our algorithm to figure this fact out? Equivalently, what $d$ do we need to pick so that $w$ is of the form
+ \[
+ \prod_{i = 1}^d g_i r_i^{\pm 1} g_i^{-1}?
+ \]
+ We can draw a picture. Our element $[a, b]$ looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (1, 1);
+ \node [right] at (1, 0.5) {$b$};
+ \node [left] at (0, 0.5) {$b$};
+ \node [below] at (0.5, 0) {$a$};
+ \node [above] at (0.5, 1) {$a$};
+ \end{tikzpicture}
+ \end{center}
+ If, say, $n = 2$, then $w$ is given by
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [step=1, gray] (0, 0) grid (2, 2);
+ \foreach \x in {0.5,1.5} {
+ \node [below] at (\x, 0) {$a$};
+ \node [above] at (\x, 2) {$a$};
+ \node [right] at (2, \x) {$b$};
+ \node [left] at (0, \x) {$b$};
+ }
+ \end{tikzpicture}
+ \end{center}
+ If we think about it, we see that the task is to fill in the big square with copies of the small square, and we need $4$ of them. Indeed, we can read off the sequence
+ \[
+ w = [a, b] \cdot (b [a, b] b^{-1}) \cdot (b^2 a b^{-1} [a, b] ba^{-1} b^{-2}) \cdot (b^2 a^2 b^{-1}a^{-1}b^{-1}[a, b]baba^{-2}b^{-2}),
+ \]
+ which corresponds to filling in the square one by one as follows:
+ \begin{center}
+ \begin{tikzpicture}
+ \fill [opacity=0.3, morange] (0, 0) rectangle +(1, 1);
+ \draw [step=1, gray!50!white] (0, 0) grid (2, 2);
+ \draw (0, 0) rectangle (1, 1);
+ \draw [-latex'] (0.58, 0) -- +(0.001, 0);
+
+ \begin{scope}[shift={(3, 0)}]
+ \fill [opacity=0.3, morange] (0, 1) rectangle +(1, 1);
+ \draw [step=1, gray!50!white] (0, 0) grid (2, 2);
+ \draw (0, 0) -- (0, 2) -- (1, 2) -- (1, 1) -- (0.05, 1) -- (0.05, 0);
+
+ \draw [-latex'] (0.58, 1) -- +(0.001, 0);
+ \end{scope}
+ \begin{scope}[shift={(6, 0)}]
+ \fill [opacity=0.3, morange] (1, 1) rectangle +(1, 1);
+ \draw [step=1, gray!50!white] (0, 0) grid (2, 2);
+ \draw (0, 0) -- (0, 2) -- (1, 2) -- (1, 1) -- (1.05, 1) -- (1.05, 2) -- (2, 2) -- (2, 0.95) -- (0.95, 0.95) -- (0.95, 1.95) -- (0.05, 1.95) -- (0.05, 0);
+ \draw [-latex'] (1.58, 0.95) -- +(0.001, 0);
+ \end{scope}
+ \begin{scope}[shift={(9, 0)}]
+ \fill [opacity=0.3, morange] (1, 0) rectangle +(1, 1);
+ \draw [step=1, gray!50!white] (0, 0) grid (2, 2);
+
+ \draw (0.05, 0) -- (0.05, 1.95) -- (1.95, 1.95) -- (1.95, 1.05) -- (0.95, 1.05) -- (0.95, 0) -- (2, 0) -- (2, 0.95) -- (1.05, 0.95) -- (1.05, 0.05) -- (1, 0.05) -- (1, 1) -- (2, 1) -- (2, 2) -- (0, 2) -- (0, 0);
+ \draw [-latex'] (1.58, 0) -- +(0.001, 0);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
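As a sanity check, this decomposition can be verified mechanically: concatenating the four conjugates and freely reducing should give back $a^2 b^2 a^{-2} b^{-2}$. A small Python sketch (our own, reusing the obvious free-reduction routine):

```python
# Machine-checking the n = 2 decomposition above: the product of the
# four conjugates of [a, b] freely reduces to a^2 b^2 a^{-2} b^{-2}.

def free_reduce(word):
    out = []
    for s, e in word:
        if out and out[-1] == (s, -e):
            out.pop()
        else:
            out.append((s, e))
    return out

def inv(word):
    return [(s, -e) for s, e in reversed(word)]

def conj(g, w):
    """g w g^{-1}, freely reduced."""
    return free_reduce(g + w + inv(g))

A, B = ('a', 1), ('b', 1)
Ai, Bi = ('a', -1), ('b', -1)
comm = [A, B, Ai, Bi]                          # [a, b]

terms = [
    conj([], comm),                            # [a, b]
    conj([B], comm),                           # b [a,b] b^{-1}
    conj([B, B, A, Bi], comm),                 # b^2 a b^{-1} [a,b] b a^{-1} b^{-2}
    conj([B, B, A, A, Bi, Ai, Bi], comm),      # b^2 a^2 b^{-1} a^{-1} b^{-1} [a,b] ...
]

w = free_reduce([x for t in terms for x in t])
assert w == [A, A, B, B, Ai, Ai, Bi, Bi]       # a^2 b^2 a^{-2} b^{-2}
```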
+
+Of course, this is fine if we know very well what the Cayley graph of the group looks like, but in general it is quite hard. Indeed, solving the word problem is part of what you do if you want to draw the Cayley graph, since you need to know when two words give the same element!
+
+So how do we solve the word problem? Our previous partial algorithm would be a genuine solution if we knew how far we had to search. If we knew that we would only ever need $d \leq 10^{10^{100}}$, then we could search through all expressions of the form $\prod_{i = 1}^d g_i r_i^{\pm 1} g_i^{-1}$ with $d \leq 10^{10^{100}}$, and if we didn't find $w$, we would know that $w$ does not represent the identity element (we will later argue that we don't have to worry about there being infinitely many $g_i$'s to look through).
+
+\begin{defi}[Null-homotopic]\index{null-homotopic}
+ We say $w \in S^*$ is \emph{null-homotopic} if $w = 1$ in $\Gamma$.
+\end{defi}
+
+\begin{defi}[Algebraic area]\index{algebraic area}
+ Let $w \in S^*$ be null-homotopic. Its \emph{algebraic area} is
+ \[
+ \Area_{a, \mathcal{P}} (w) = \min \left\{d : w = \prod_{i = 1}^d g_i r_i^{\pm 1} g_i^{-1}\right\}.
+ \]
+\end{defi}
+We write the subscript $a$ to emphasize this is the algebraic area. We will later define the geometric area, and we will write it as $\Area_g$. Afterwards, we will show that they are the same, and we will stop writing the subscript.
+
+Let us use $|\ph|_S$ to denote word length in $F(S)$, while $\ell_S$ continues to denote the word length in $\Gamma$.
+
+\begin{defi}[Dehn function]\index{Dehn function}
+ The \emph{Dehn function} is the function $\delta_{\mathcal{P}}: \N \to \N$ mapping
+ \[
+ n \mapsto \max \left\{ \Area_{a, \mathcal{P}} (w) \mid |w|_S \leq n, w \text{ is null-homotopic}\right\}.
+ \]
+\end{defi}
+This Dehn function measures the difficulty of the word problem in $\mathcal{P}$.
+
+\begin{prop}
+ The word problem for $\mathcal{P}$ is solvable iff $\delta_{\mathcal{P}}$ is a computable function.
+\end{prop}
+
+We will postpone the proof. The hard part of the proof is that we don't have to worry about the infinitely many possible $g_i$ that may be used to conjugate.
+
+It would be really nice if the Dehn function can be viewed as a property of the group, and not the presentation. This requires us to come up with a notion of equivalence relation on functions $\N \to [0, \infty)$ (or $[0, \infty) \to [0, \infty)$).
+
+\begin{defi}[$\preccurlyeq$]\index{$\preccurlyeq$}
+ We write $f \preccurlyeq g$ iff for some $C > 0$, we have
+ \[
+ f(x) \leq C g(Cx + C) + Cx + C
+ \]
+ for all $x$.
+
+ We write $f \approx g$ if $f \preccurlyeq g$ and $g \preccurlyeq f$.
+\end{defi}
+This captures the (super-linear) asymptotic behaviour of $f$.
+
+\begin{eg}
+ For $\alpha, \beta \geq 1$, $n^\alpha \preccurlyeq n^\beta$ iff $\alpha \leq \beta$.
+\end{eg}
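As a (non-proof) numerical illustration of this example, one can check the defining inequality on an initial segment of $\N$; the constants $C$ below are our own choices, and checking finitely many values of course proves nothing in general:

```python
# Numerically illustrating n^2 ≼ n^3, and the failure of n^3 ≼ n^2.

# n^2 ≼ n^3: C = 1 works, i.e. n^2 <= (n + 1)^3 + n + 1.
C = 1
for n in range(10_000):
    assert n**2 <= C * (C * n + C)**3 + C * n + C

# n^3 ⋠ n^2: any fixed C fails for large enough n; e.g. C = 10 fails
# already at n = 10^5, where n^3 = 10^15 dwarfs the right-hand side.
C = 10
n = 10**5
assert n**3 > C * (C * n + C)**2 + C * n + C
```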
+
+\begin{prop}
+ If $\mathcal{P}$ and $\mathcal{Q}$ are two finite presentations for $\Gamma$, then $\delta_{\mathcal{P}} \approx \delta_{\mathcal{Q}}$.
+\end{prop}
+
+We start with two special cases, and then show that we can reduce everything else to these two special cases.
+\begin{lemma}
+ If $R' \subseteq \bra \bra R \ket \ket$ is a finite set, and
+ \[
+ \mathcal{P} = \bra S \mid R \ket,\quad \mathcal{Q} = \bra S \mid R \cup R'\ket,
+ \]
+ then $\delta_{\mathcal{P}} \approx \delta_{\mathcal{Q}}$.
+\end{lemma}
+
+\begin{proof}
+ Clearly, $\delta_{\mathcal{Q}} \leq \delta_{\mathcal{P}}$. Let
+ \[
+ m = \max_{r' \in R'} \Area_{\mathcal{P}}(r').
+ \]
+ It is then easy to see that
+ \[
+ \Area_{\mathcal{P}}(w) \leq m \Area_{\mathcal{Q}}(w).\qedhere
+ \]
+\end{proof}
+
+\begin{lemma}
+ Let $\mathcal{P} = \bra S \mid R\ket$, and let
+ \[
+ \mathcal{Q} = \bra S \amalg T \mid R \amalg R'\ket,
+ \]
+ where
+ \[
+ R' = \{t w_t^{-1}: t \in T, w_t \in F(S)\}.
+ \]
+ Then $\delta_{\mathcal{P}} \approx \delta_{\mathcal{Q}}$.
+\end{lemma}
+
+\begin{proof}
+ We first show $\delta_{\mathcal{P}} \preccurlyeq \delta_{\mathcal{Q}}$. Define
+ \begin{align*}
+ \rho: F(S \amalg T) & \to F(S)\\
+ s &\mapsto s\\
+ t &\mapsto w_t.
+ \end{align*}
+ In particular, $\rho(r) = r$ for all $r \in R$ and $\rho(r') = 1$ for all $r' \in R'$.
+
+ Given $w \in F(S)$, we need to show that
+ \[
+ \Area_{\mathcal{P}}(w) \leq \Area_{\mathcal{Q}}(w).
+ \]
+ Let $d = \Area_\mathcal{Q} (w)$. Then
+ \[
+ w = \prod_{i = 1}^d g_i r_i^{\pm 1} g_i^{-1},
+ \]
+ where $g_i \in F(S \amalg T)$ and $r_i \in R \cup R'$. We now apply $\rho$. Since $\rho$ is a retraction onto $F(S)$ and $w \in F(S)$, we have $\rho(w) = w$. Thus,
+ \[
+ w = \prod_{i = 1}^d \rho(g_i) \rho(r_i)^{\pm 1} \rho(g_i)^{-1}.
+ \]
+ Now $\rho(r_i)$ is either $r_i$ or $1$. In the first case, nothing happens, and in the second case, we can just forget the $i$th term. So we get
+ \[
+ w = \prod_{i = 1, r_i \in R}^d\rho(g_i) r_i^{\pm 1} \rho(g_i)^{-1}.
+ \]
+ Since this is a valid proof in $\mathcal{P}$ that $w = 1$, we know that
+ \[
+ \Area_{\mathcal{P}}(w) \leq d = \Area_{\mathcal{Q}}(w).
+ \]
+ We next prove that $\delta_{\mathcal{Q}} \preccurlyeq \delta_{\mathcal{P}}$. It is unsurprising that some constants will appear this time, instead of just inequality on the nose. We let
+ \[
+ C = \max_{t \in T} |w_t|_S.
+ \]
+ Consider a null-homotopic word $w \in F(S \amalg T)$. This word looks like
+ \[
+ w = s_{i_1}^{\pm} s_{i_2}^{\pm } \cdots t_{j_1} \cdots s \cdots t \cdots \in F(S \amalg T).
+ \]
+ We want to turn these $t$'s into $s$'s, and we need to use the relators to do so.
+
+ We apply relations from $R'$ to write this as
+ \[
+ w' = s_{i_1}^{\pm} s_{i_2}^{\pm } \cdots w_{t_{j_1}} \cdots s \cdots w_t \cdots \in F(S).
+ \]
+ We certainly have $|w'|_S \leq C |w|_{S \amalg T}$. With a bit of thought, we see that
+ \begin{align*}
+ \Area_{\mathcal{Q}}(w) &\leq \Area_{\mathcal{P}}(w') + |w|_{S \amalg T}\\
+ &\leq \delta_{\mathcal{P}}(C|w|_{S \amalg T} ) + |w|_{S \amalg T}.
+ \end{align*}
+ So it follows that
+ \[
+ \delta_{\mathcal{Q}}(n) \leq \delta_{\mathcal{P}}(Cn) + n.\qedhere
+ \]
+\end{proof}
+
+\begin{proof}[Proof of proposition]
+ Let $\mathcal{P} = \bra S \mid R \ket$ and $\mathcal{Q} = \bra S' \mid R'\ket$. Since $\mathcal{P}$ and $\mathcal{Q}$ present isomorphic groups, we can write
+ \[
+ s = u_{s} \in F(S') \text{ for all } s \in S.
+ \]
+ Similarly, we have
+ \[
+ s' = v_{s'} \in F(S) \text{ for all } s' \in S'.
+ \]
+ We let
+ \begin{align*}
+ T &= \{s u_s^{-1} \mid s \in S\}\\
+ T' &= \{s' v_{s'}^{-1} \mid s' \in S'\}
+ \end{align*}
+ We then let
+ \[
+ \mathcal{M} = \bra S \amalg S' \mid R \cup R' \cup T \cup T'\ket.
+ \]
+ We can then apply our lemmas several times to show that $\delta_{\mathcal{P}} \approx \delta_{\mathcal{M}} \approx \delta_{\mathcal{Q}}$.
+\end{proof}
+
+Now we can talk about $\delta_{\Gamma}$, the Dehn function of $\Gamma$, as long as we know we are only talking up to $\approx$. In fact, it is true that
+
+\begin{fact}
+ If $\Gamma_1 \qi \Gamma_2$, then $\delta_{\Gamma_1} \approx \delta_{\Gamma_2}$.
+\end{fact}
+The proof is rather delicate, and we shall not attempt to reproduce it.
+
+\section{Van Kampen diagrams}
+Recall we previously tried to understand the Dehn function of $\Z^2$ in terms of filling in squares on a grid. In this chapter, we pursue this idea further, and come up with an actual theorem.
+
+To make this more precise, recall that in the case of $\Z^2$, we could embed a null-homotopic word into the plane of the form
+\begin{center}
+ \begin{tikzpicture}
+ \draw [step=1, opacity=0.5] (0, 0) grid (3, 3);
+
+ \draw [thick, mblue, fill=morange, fill opacity=0.3] (0, 0) -- (1, 0) -- (1, 1) -- (2, 1) -- (2, 2) -- (3, 2) -- (3, 3) -- (1, 3) -- (1, 2) -- (0, 2) -- (0, 1) -- (1, 1) -- (1, 0) -- (0, 0);
+ \end{tikzpicture}
+\end{center}
+We saw, at least heuristically, the algebraic area of $w$ is just the geometric area of this figure. Of course, sometimes it is slightly more complicated. For example, if we have the word $w = [a, b]^2$, then we have to use the cell $[a, b]$ twice.
+%
+%\begin{eg}
+% Take $\Gamma = \Z^2 \cong \bra a, b \mid [a, b]\ket$. Drawing this out:
+% \begin{center}
+% \begin{tikzpicture}
+% \draw [step=1, opacity=0.5] (-2, -2) grid (4, 4);
+%
+% \draw [->] (-2, 0) -- (4, 0);
+% \draw [->] (0, -2) -- (0, 4);
+%
+% \draw [->-=0.5] (0, 0) -- (1, 0) node [pos=0.5, below] {$a$};
+% \draw [->-=0.5] (0, 0) -- (0, 1) node [pos=0.5, left] {$b$};
+% \end{tikzpicture}
+% \end{center}
+% A null-homotopic word $w$ is then a loop in this diagram:
+% \begin{center}
+% \begin{tikzpicture}
+% \draw [step=1, opacity=0.5] (-2, -2) grid (4, 4);
+%
+% \draw [->] (-2, 0) -- (4, 0);
+% \draw [->] (0, -2) -- (0, 4);
+%
+% \draw [->-=0.5] (0, 0) -- (1, 0) node [pos=0.5, below] {$a$};
+% \draw [->-=0.5] (0, 0) -- (0, 1) node [pos=0.5, left] {$b$};
+%
+% \draw [thick, mblue] (0, 0) -- (1, 0) -- (1, 1) -- (2, 1) -- (2, 2) -- (3, 2) -- (3, 3) -- (1, 3) -- (1, 2) -- (0, 2) -- (0, 1) -- (1, 1) -- (1, 0) -- (0, 0);
+% \end{tikzpicture}
+% \end{center}
+% The area of $W$ is just the area we have to fill in, $\Area(w) \leq 4$.
+%
+% So to compute the Dehn function, we need to know what is the largest region we can fill with a path of length $n$. But this is just some classical geometry. We know that the most optimal way of doing so is by using a circle. In this case a loop of length $\ell$ can be filled in with area $\preccurlyeq \ell^2$. This is the \term{isoperimetric inequality}.
+%
+% Of course, we cannot really draw a circle in $\Z^2$, but if we view the Dehn function only up to $\approx$, then it doesn't really matter, and $\delta_{\Z^2} \approx n^2$.
+%
+%\end{eg}
+%
+%\begin{eg}
+% In the same $\Z^2 \cong \bra a, b \mid [a, b] \ket$, and $w = [a, b]^2$, then $w$ is not embedded in $\R^2$, and we have to use the cell of $\R^2$ twice.
+%\end{eg}
+%
+%How can we make this more rigorous? The key concept here is that of a \emph{singular disk diagram}.
+
+\begin{defi}[Singular disc diagram]\index{singular disc diagram}\index{disc diagram}
+ A \emph{(singular) disc diagram} is a compact, contractible subcomplex of $\R^2$, e.g.
+ \begin{center}
+ \begin{tikzpicture}
+ \fill (0, 0) rectangle (1, 1);
+ \draw (1, 1) -- (2.5, 1);
+ \fill (3.5, 1) ellipse (1 and 0.5);
+ \draw (1, 1) -- (1.5, 2);
+ \fill (1.5, 2) -- (2, 2) -- (2.5, 2.5) -- (2.5, 3) -- (2, 3) -- (2, 2.5) -- (1.5, 2.5) -- (1.5, 2);
+
+ \draw (0, 1) -- (-0.5, 1.5);
+ \fill (-0.5, 1.5) circle [radius=0.414];
+
+ \fill (-0.5, -0.5) circle [radius=0.707];
+ \end{tikzpicture}
+ \end{center}
+ We usually endow the disc diagram $D$ with a base point $p$, and define
+ \[
+ \Area_g(D) = \text{number of $2$-cells of $D$}
+ \]
+ and
+ \[
+ \Diam_p(D) = \text{length of the longest embedded path in $D^{(1)}$, starting at $p$}.
+ \]
+ If we orient $\R^2$, then $D$ has a well-defined boundary cycle. A disc diagram is \emph{labelled} if each (oriented) edge is labelled by an element $s \in S^{\pm}$. The set of \term{face labels} is the set of boundary cycles of the $2$-cells.
+
+ If all the face labels are elements of $R^{\pm}$, then we say $D$ is a \emph{diagram} over $\bra S \mid R\ket$.
+\end{defi}
+
+
+\begin{defi}[van Kampen diagram]\index{van Kampen diagram}
+ If $w \in \bra \bra R \ket \ket$, and $w$ is the boundary cycle of a singular disc diagram $D$ over the presentation $\bra S \mid R\ket$, we say that $D$ is a \emph{van Kampen diagram} for $w$.
+\end{defi}
+\begin{eg}
+ The diagram
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [step=1, opacity=0.5] (0, 0) grid (3, 3);
+
+ \draw [thick, mblue, fill=morange, fill opacity=0.3] (0, 0) -- (1, 0) -- (1, 1) -- (2, 1) -- (2, 2) -- (3, 2) -- (3, 3) -- (1, 3) -- (1, 2) -- (0, 2) -- (0, 1) -- (1, 1) -- (1, 0) -- (0, 0);
+
+ \draw [-latex'] (0.58, 0) -- +(0.001, 0) node [below] {$a$};
+ \draw [-latex'] (0, 0.58) -- +(0, 0.001) node [left] {$b$};
+ \end{tikzpicture}
+ \end{center}
+ is a van Kampen diagram for $abababa^{-2}b^{-1} a^{-1}b^{-1}ab^{-1}a^{-1}$.
+\end{eg}
+
+Note that in this case, the map $S^1 \to X_{\mathcal{P}}$ that represents $w$ factors through a map $D \to X_\mathcal{P}$.
+
+\begin{lemma}[van Kampen's lemma]\index{van Kampen's lemma}
+ Let $\mathcal{P} = \bra S \mid R\ket$ be a presentation and $w \in S^*$. Then the following are equivalent:
+ \begin{enumerate}
+ \item $w = 1$ in $\Gamma$ presented by $\mathcal{P}$ (i.e.\ $w$ is null-homotopic)
+ \item There is a van Kampen diagram for $w$ over $\mathcal{P}$.
+ \end{enumerate}
+ If so, then
+ \[
+ \Area_a(w) = \min \{\Area_g(D): D\text{ is a van Kampen diagram for $w$ over $\mathcal{P}$}\}.
+ \]
+\end{lemma}
+
+\begin{proof}
+% First observe that we may assume that $w$ is reduced, since if we have a non-reduced word, then in the van Kampen diagram, there must be a path segment that immediately backtracks itself, and we can just cancel them. This does not affect the area of the diagram.
+ In one direction, given
+ \[
+ w = \prod_{i = 1}^n g_i r_i^{\pm} g_i^{-1} \in F(S)
+ \]
+ such that $w = 1 \in \Gamma$, we start by writing down a ``lollipop diagram''
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] {};
+
+ \foreach \x/\lab in {-60/d,0/3,30/2,60/1} {
+ \begin{scope}[rotate=\x]
+ \draw (0, 0) -- (0, 2);
+
+ \node at (0.3, 1.4) {$g_\lab$};
+ \draw (0, 2.3) circle [radius=0.3];
+ \node [above] at (0, 2.6) {$r_\lab^{\pm}$};
+ \end{scope}
+ }
+
+ \node at (0.9, 1.3) {$\ddots$};
+ \end{tikzpicture}
+ \end{center}
+ This defines a diagram for the word $\prod_{i = 1}^n g_i r_i^{\pm} g_i^{-1}$, which is equal in $F(S)$ to $w$, but is not exactly $w$. However, note that performing elementary reductions (or their inverses) corresponds to operations on the van Kampen diagram that do not increase the area. We are being careful not to say that the area doesn't change, since we may have to collapse paths that look like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) edge [bend left, fill=morange, fill opacity=0.5] (2, 0);
+ \draw (2, 0) edge [bend left, fill=morange, fill opacity=0.5] (0, 0);
+ \draw (0, 0) edge [bend left, ->-=0.6] (2, 0);
+ \draw (0, 0) edge [bend right, ->-=0.6] (2, 0);
+ \end{tikzpicture}
+ \end{center}
+%
+% $w' \underset{F(S)}{=} w$. Note that $\Area(D') = d$. We next need to define moves on diagrams to transform $w'$ to $w$. These are
+% \begin{itemize}
+% \item Pruning: We replace
+% \begin{center}
+% \begin{tikzpicture}
+% \fill [gray] circle [radius=0.5];
+% \draw [->-=0.5] (0.5, 0) -- (1.5, 0) node [pos=0.5, above] {$s$};
+% \end{tikzpicture}
+% \end{center}
+% with
+% \begin{center}
+% \begin{tikzpicture}
+% \fill [gray] circle [radius=0.5];
+% \end{tikzpicture}
+% \end{center}
+% \item Folding: If we see
+% \begin{center}
+% \begin{tikzpicture}
+% \draw [->-=0.5] (0, 0) -- (1, 0.5) node [pos=0.5, above] {$s$};
+% \draw [->-=0.5] (0,0 0) -- (1, -0.5) node [pos=0.5, below] {$s$};
+%
+% \draw [red] (1, 0.5) -- (2, 2);
+% \draw [blue] (1, -0.5) -- (2, -2);
+% \end{tikzpicture}
+% \end{center}
+% We replace it with
+% \begin{center}
+% \begin{tikzpicture}
+% \draw [->-=0.5] (0, 0) -- (1, 0) node [pos=0.5, above] {$s$};
+%
+% \draw [red] (1, 0) -- (2, 2);
+% \draw [blue] (1, 0) -- (2, -2);
+% \end{tikzpicture}
+% \end{center}
+% \end{itemize}
+% Note that if we see something that looks like
+% \begin{center}
+% \begin{tikzpicture}
+% \fill [gray] (0, 0) edge [bend left] (2, 0) edge [bend left] (0, 0); % this is going to be wrong
+% \draw (0, 0) edge [bend left, ->-=0.5] (2, 0) node [pos=0.5, above] {$s$};
+% \draw (0, 0) edge [bend right, ->-=0.5] (2, 0) node [pos=0.5, above] {$s$};
+% \end{tikzpicture}
+% \end{center}
+% folding replaces it with simply
+% \begin{center}
+% \begin{tikzpicture}
+% \draw [->-=0.5] (0, 0) -- (2, 0) node [pos=0.5, above] {$s$};
+% \end{tikzpicture}
+% \end{center}
+% Note that in this step, we lose area.
+%
+% We see that we can perform one of moves whenever the boundary of our diagram forms a non-reduced word, and each of these moves corresponds to reducing the words. Therefore the procedure terminates and in the end, we obtain a diagram $D$ with boundary cycle $w$ and
+% \[
+% \Area(D) \leq \Area(D').
+% \]
+% % in each step, either area decreases or length decreases
+
+ In the other direction, given a diagram $D$ for $w$, we need to produce an expression
+ \[
+ w = \prod_{i = 1}^d g_i r_i^{\pm} g_i^{-1}
+ \]
+ such that $d \leq \Area(D)$.
+
+ We let $e$ be the first $2$-cell that the boundary curve arrives at, reading anticlockwise around $\partial D$ from the base point $p$. Let $g$ be the path along $\partial D$ from $p$ to $e$, and let $r$ be the relator read anti-clockwise around $e$. Let
+ \[
+ D' = D - e,
+ \]
+ and let $w' = \partial D'$. Note that
+ \[
+ w = gr^{\pm 1} g^{-1} w'.
+ \]
+ Since $\Area(D') = \Area(D) - 1$, we see by induction that
+ \[
+ w = \prod_{i = 1}^d g_i r_i^{\pm 1} g_i^{-1}
+ \]
+ with $d = \Area(D)$.
+\end{proof}
+
+We observe that in the algorithm above, the $g_i$'s produced have length $\leq \Diam(D)$. So we see that
+\begin{cor}
+ If $w$ is null-homotopic, then we can write $w$ in the form
+ \[
+ w = \prod_{i = 1}^d g_i r_i^{\pm} g_i^{-1},
+ \]
+ where
+ \[
+ |g_i|_S \leq \Diam D
+ \]
+ with $D$ a minimal van Kampen diagram for $w$. We can further bound this by
+ \[
+ (\max |r_i|_S) \Area(D) + |w|_S \leq \text{constant} \cdot \delta_{\mathcal{P}}(|w|_S) + |w|_S.
+ \]
+\end{cor}
+So in particular, if we know $\delta_{\mathcal{P}}(|w|_S)$, then we can bound the maximum length of $g_i$ needed.
+
+It is now easy to prove that
+\begin{prop}
+ The word problem for a presentation $\mathcal{P}$ is solvable iff $\delta_{\mathcal{P}}$ is computable.
+\end{prop}
+
+\begin{proof}\leavevmode
+ \begin{itemize}
+ \item[$(\Leftarrow)$] By the corollary, the maximum length of a conjugator $g_i$ that we need to consider is computable. Therefore we know how long the partial algorithm needs to run for.
+ \item[$(\Rightarrow)$] To compute $\delta_{\mathcal{P}}(n)$, we use the word problem solving ability to find all null-homotopic words in $F(S)$ of length $\leq n$. Then for each $d$, go out and look for expressions
+ \[
+ w = \prod_{i = 1}^d g_i r_i^{\pm 1} g_i^{-1}.
+ \]
+ A naive search would find the smallest area expression, and this gives us the Dehn function.\qedhere
+ \end{itemize}
+\end{proof}
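+The naive search in the proof can be made concrete. The following sketch (our illustration, not from the lectures; all function names are made up) works with the presentation $\bra a, b \mid [a, b]\ket$ of $\Z^2$, writing formal inverses as upper-case letters, and brute-forces the least $d$ over conjugators of bounded length:

```python
import itertools

ALPHABET = "abAB"  # generators a, b; upper case = formal inverse

def inv(w):
    """Formal inverse of a word: reverse it and swap generator/inverse."""
    return w[::-1].swapcase()

def reduce_word(w):
    """Free reduction: repeatedly cancel adjacent x x^{-1} pairs."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def word_area(w, relator="abAB", max_d=2, max_g_len=2):
    """Least d with w = prod_{i=1}^d g_i r^{+-1} g_i^{-1} in F(a, b),
    searching conjugators g_i of length <= max_g_len; None if not found."""
    target = reduce_word(w)
    conjugators = ["".join(t) for n in range(max_g_len + 1)
                   for t in itertools.product(ALPHABET, repeat=n)]
    factors = [reduce_word(g + r + inv(g))
               for g in conjugators for r in (relator, inv(relator))]
    for d in range(max_d + 1):
        for combo in itertools.product(factors, repeat=d):
            if reduce_word("".join(combo)) == target:
                return d
    return None
```

For instance, `word_area` returns $1$ on the relator itself, $2$ on the relator traversed twice, and `None` on a word with non-trivial image in the abelianization (which lies outside the normal closure of the relator).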
+
+It is a hard theorem that
+\begin{thm}[Novikov--Boone theorem]\index{Novikov--Boone theorem}
+ There exists a finitely-presented group with an unsolvable word problem.
+\end{thm}
+
+\begin{cor}
+ $\delta_{\mathcal{P}}$ is sometimes non-computable.
+\end{cor}
+
+Let's look at some applications to geometry. In geometry, people wanted to classify manifolds. Classifying (orientable) $2$-dimensional manifolds is easy. They are completely classified by their genus. This classification can in fact be performed by a computer. If we manage to triangulate our manifold, then we can feed the information of the triangulation into a computer, and the computer can compute the Euler characteristic, and hence the genus.
+
+Is it possible to do this in higher dimensions? It turns out the word problem gives a severe hindrance to doing so.
+\begin{thm}
+ Let $n \geq 4$ and $\Gamma = \bra S \mid R\ket$ be a finitely-presented group. Then we can construct a closed, smooth, orientable manifold $M^n$ such that $\pi_1 M \cong \Gamma$.
+\end{thm}
+This is a nice, little surgery argument.
+
+\begin{proof}
+ Let $S = \{a_1, \ldots, a_m\}$ and $R = \{r_1, \ldots, r_q\}$ (note that $q$ is not the dimension $n$). We start with
+ \[
+ M_0 = \#_{i = 1}^m (S^1 \times S^{n - 1}).
+ \]
+ Note that when we perform this construction, as $n \geq 3$, we have
+ \[
+ \pi_1M_0 \cong F_m
+ \]
+ by the Seifert--van Kampen theorem. We now construct $M_k$ from $M_{k - 1}$ such that
+ \[
+ \pi_1 M_k \cong \bra a_1, \ldots, a_m \mid r_1, \ldots, r_k\ket.
+ \]
+ We realize $r_k$ as a loop in $M_{k - 1}$. Because $n \geq 3$, we may assume (after a small homotopy) that this is represented by a smooth embedded map $r_k: S^1 \to M_{k - 1}$.
+
+ We take $N_k$ to be a smooth tubular neighbourhood of $r_k$. Then $N_k \cong S^1 \times D^{n - 1} \subseteq M_{k - 1}$. Note that $\partial N_k \cong S^1 \times S^{n - 2}$.
+
+ Let $U_k = D^2 \times S^{n - 2}$. Notice that $\partial U_k \cong \partial N_k$. Since $n \geq 4$, we know $U_k$ is simply connected. So we let
+ \[
+ M'_{k - 1} = M_{k - 1} \setminus \mathring{N}_k,
+ \]
+ a manifold with boundary $S^1 \times S^{n - 2}$. Choose an orientation-reversing diffeomorphism $\varphi_k: \partial U_k \to \partial M_{k - 1}'$. Let
+ \[
+ M_k = M_{k - 1}' \cup_{\varphi_k} U_k.
+ \]
+ Then by applying the Seifert--van Kampen theorem repeatedly, we see that
+ \[
+ \pi_1 M_k = \pi_1 M_{k - 1} / \bra \bra r_k \ket \ket,
+ \]
+ as desired.
+\end{proof}
+Thus, if we had an algorithm that could distinguish between the fundamental groups of manifolds, then it would solve (some variant of) the word problem for us, which is impossible.
+
+Finally, we provide some (even more) geometric interpretation of the Dehn function.
+
+\begin{defi}[Filling disc]\index{filling disc}
+ Let $(M, g)$ be a closed Riemannian manifold. Let $\gamma: S^1 \to M$ be a smooth null-homotopic loop. A filling disk for $\gamma$ in $M$ is a smooth map $f: D^2 \to M$ such that the diagram
+ \[
+ \begin{tikzcd}
+ S^1 \ar[d, hook] \ar[rd, "\gamma"] \\
+ D^2 \ar[r, "f"] & M
+ \end{tikzcd}
+ \]
+ commutes.
+\end{defi}
+
+Since there is a metric $g$ on $M$, we can pull it back to a (possibly degenerate) metric $f^* g$ on $D^2$, and hence measure quantities like the length $\ell(\gamma)$ of $\gamma$ and the area $\Area(f)$ of $D^2$.
+
+A classic result is
+\begin{thm}[Douglas, Rad\'o, Morrey]
+ If $\gamma$ is embedded, then there is a least-area filling disc.
+\end{thm}
+So we can define
+\begin{defi}[$\FArea$]\index{$\FArea$}
+ \[
+ \FArea(\gamma) = \inf \{\Area(f) \mid f: D^2 \to M\text{ is a filling disc for }\gamma\}.
+ \]
+\end{defi}
+
+\begin{defi}[Isoperimetric function]\index{isoperimetric function}
+ The isoperimetric function of $(M, g)$ is
+ \begin{align*}
+ &\mathrm{Fill}^M: [0, \infty) \to [0, \infty)\\
+ &\ell \mapsto \sup\{\FArea(\gamma): \gamma: S^1 \to M\text{ is a smooth null-homotopic loop, }\ell(\gamma) \leq \ell\}
+ \end{align*}
+\end{defi}
+
+\begin{thm}[Filling theorem]
+ Let $M$ be a closed Riemannian manifold. Then $\mathrm{Fill}^M \simeq \delta_{\pi_1 M}$.
+\end{thm}
+
+By our previous construction, we know this applies to every finitely-presented group.
+
+The point is that whenever we come up with a bad group, we can construct a bad manifold with similar bad behaviour. So manifolds are at least as difficult to understand as groups. In particular, there is no algorithm that can distinguish between arbitrary smooth manifolds, since that would allow us to decide whether two group presentations define the same group, which we know is impossible.
+
+\section{Bass--Serre theory}
+\subsection{Graphs of spaces}
+Bass--Serre theory is a way of building spaces by gluing old spaces together, in a way that allows us to understand the fundamental group of the resulting space. In this section, we will give a brief (and sketchy) introduction to Bass--Serre theory. It generalizes some of the ideas we have previously seen, but it will not be used in the rest of the course.
+
+Suppose we have two spaces $X, Y$, and we want to glue them along some subspace. For concreteness, suppose we have another space $Z$, and maps $\partial^-: Z \to X$ and $\partial^+: Z \to Y$. We want to glue $X$ and $Y$ by identifying $\partial^-(z) \sim \partial^+(z)$ for all $z \in Z$.
+
+If we simply take the disjoint union of $X$ and $Y$ and then take the quotient, then this is a pretty poorly-behaved construction. Crucially, if we want to understand the fundamental group of the resulting space via Seifert--van Kampen, then the maps $\partial^{\pm}$ must be very well-behaved for the theorem to be applicable. The \emph{homotopy pushout} corrects this problem by gluing like this:
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (2, 2);
+ \draw (6, 0) rectangle (8, 2);
+
+ \draw (5, 0.5);
+
+ \fill [mblue, opacity=0.1] (3, 0.5) -- (5, 0.5) arc(-90:-270:0.25 and 0.5) -- (3, 1.5) arc(90:270:0.25 and 0.5);
+ \draw [dashed] (5, 0.5) arc(270:90:0.25 and 0.5);
+ \draw (3, 0.5) arc(270:90:0.25 and 0.5);
+
+ \draw [fill=mblue, fill opacity=0.3] (3, 0.5) -- (5, 0.5) arc(-90:90:0.25 and 0.5) -- (3, 1.5) arc(90:-90:0.25 and 0.5);
+
+ \draw [dashed, thick, mblue] (4, 0.5) arc(270:90:0.25 and 0.5);
+
+ \draw [->] (3, 1) -- (1, 1) node [pos=0.3, above] {$\partial^-$};
+ \draw [->] (5, 1) -- (7, 1) node [pos=0.35, above] {$\partial^+$};
+
+ \draw [thick, mblue] (4, 0.5) arc(-90:90:0.25 and 0.5);
+
+ \node [above] at (1, 2) {$X$};
+ \node [above] at (7, 2) {$Y$};
+ \node [above] at (4, 1.5) {$Z$};
+ \end{tikzpicture}
+\end{center}
+
+\begin{defi}[Homotopy pushout]\index{homotopy pushout}
+ Let $X, Y, Z$ be spaces, and $\partial^-: Z \to X$ and $\partial^+: Z \to Y$ be maps. We define\index{$X \underset{Z}{\amalg} Y$}
+ \[
+ X \underset{Z}{\amalg} Y = (X \amalg Y \amalg Z \times [-1, 1])/\sim,
+ \]
+ where we identify $\partial^{\pm}(z)\sim (z, \pm 1)$ for all $z \in Z$.
+\end{defi}
+By Seifert--van Kampen, we know $\pi_1(X\underset{Z}{\amalg} Y)$ is the pushout
+\[
+ \begin{tikzcd}
+ \pi_1Z \ar[r, "\partial^+_*"] \ar[d, "\partial^-_*"] & \pi_1(X) \ar[d]\\
+ \pi_1 Y \ar[r] & \pi_1(X \underset{Z}{\amalg} Y)
+ \end{tikzcd}
+\]
+In other words, we have
+\[
+ \pi_1(X \underset{Z}{\amalg} Y) \cong \pi_1 X \underset{\pi_1 Z}{*} \pi_1 Y.
+\]
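+As a concrete instance of an amalgamated free product (this specific example is our addition, not from the lectures), we can obtain the trefoil knot group this way:

```latex
\begin{eg}
  Take $X = Y = Z = S^1$, and let $\partial^-: Z \to X$ and $\partial^+: Z \to Y$
  be maps of degree $2$ and $3$ respectively. Both induced maps on $\pi_1$ are
  injective, and
  \[
    \pi_1(X \underset{Z}{\amalg} Y) \cong \Z \underset{\Z}{*} \Z
      \cong \bra x, y \mid x^2 = y^3\ket,
  \]
  which is the fundamental group of the complement of the trefoil knot.
\end{eg}
```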
+In general, this construction is better behaved if the maps $\partial_*^{\pm}$ are in fact injective, and we shall focus on this case.
+
+With this construction in mind, we can try to glue together something more complicated:
+\begin{defi}[Graph of spaces]\index{graph!of spaces}
+ A \emph{graph of spaces} $\mathcal{X}$ consists of the following data
+ \begin{itemize}
+ \item A connected graph $\Xi$.
+ \item For each vertex $v \in V(\Xi)$, a path-connected space $X_v$.
+ \item For each edge $e \in E(\Xi)$, a path-connected space $X_e$.
+ \item For each edge $e \in E(\Xi)$ attached to $v^{\pm} \in V(\Xi)$, we have $\pi_1$-injective maps $\partial^{\pm}_e : X_e \to X_{v^{\pm}}$.
+ \end{itemize}
+ The \term{realization} of $\mathcal{X}$ is
+ \[
+ |\mathcal{X}| = X = \frac{\coprod_{v \in V(\Xi)} X_v \amalg \coprod_{e \in E(\Xi)} (X_e \times [-1, 1])}{(\forall e \in E(\Xi), \forall x \in X_e, (x, \pm 1) \sim \partial_e^{\pm}(x))}.
+ \]
+\end{defi}
+These conditions are not too restrictive. If our vertex or edge spaces are not path-connected, then we can just treat each path component as a separate vertex/edge space. If our maps are not $\pi_1$-injective, then as long as we are careful enough, we can attach $2$-cells to kill the relevant loops.
+
+%\begin{eg}
+% Let $X$ be an orientable surface and $Y$ be a multi-curve, i.e.\ a disjoint union of essentially simple closed curves on $X$. Take a tubular neighbourhood of each curve in $Y$, and let $N(Y)$ be the union of all these tubular neighbourhoods, which is a union of closed annuli. Let
+% \[
+% \coprod X_i = X - \mathring{N(Y)}
+% \]
+% be the decomposition of $X - \mathring{N(Y)}$ into connected components.
+%
+% % insert picture
+%
+% We can then take $\coprod X_e = Y$. Then $X$ is just the realization of this graph, when we pick the appropriate attaching maps.
+%\end{eg}
+
+\begin{eg}
+ A homotopy pushout is a special case of a realization of a graph of spaces.
+% In the case of
+% \begin{center}
+% \begin{tikzpicture}
+% \draw (0, 0) rectangle (2, 2);
+% \draw (6, 0) rectangle (8, 2);
+%
+% \draw (3, 1) ellipse (0.25 and 0.5);
+%
+% \draw [thick, mblue] (4, 0.5) arc(-90:90:0.25 and 0.5);
+% \draw [dashed, thick, mblue] (4, 0.5) arc(270:90:0.25 and 0.5);
+%
+% \draw (5, 0.5) arc(-90:90:0.25 and 0.5);
+% \draw [dashed] (5, 0.5) arc(270:90:0.25 and 0.5);
+%
+% \draw (3, 0.5) -- (5, 0.5);
+% \draw (3, 1.5) -- (5, 1.5);
+%
+% \draw [->] (3, 1) -- (1, 1) node [pos=0.5, above] {$\partial^-$};
+% \draw [->] (5, 1) -- (7, 1) node [pos=0.5, above] {$\partial^+$};
+%
+% \node [above] at (1, 2) {$X_{v_1}$};
+% \node [above] at (7, 2) {$X_{v_2}$};
+% \node [above] at (4, 1.5) {$X_e$};
+% \end{tikzpicture}
+% \end{center}
+% Recall that Seifert--van Kampen tells us
+% \[
+% \pi_1 X \cong \pi_1 X_{v_1} \underset{\pi_1 X_e}{*} \pi_1 X_{v_2}.
+% \]
+\end{eg}
+
+\begin{eg}
+ Suppose that the underlying graph looks like this:
+ \begin{center}
+ \begin{tikzpicture}[scale=4]
+ \node [circ] at (0, 0) {};
+ \node [below] {$v$};
+ \draw (0, 0) edge [out=45, in =135, loop] (0, 0);
+ \node [above] at (0, 0.26) {$e$};
+ \end{tikzpicture}
+ \end{center}
+ The corresponding gluing diagram looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[cm={1,0, 0.5, 1, (0, 0)}]
+ \draw (0, 0) rectangle (4, 1.5);
+ \draw [fill=mblue, fill opacity=0.3] (2.6, 0.75) arc (180:360:0.4 and 0.2);
+ \draw [dashed] (2.6, 0.75) arc (180:0:0.4 and 0.2);
+
+ \draw [fill=mblue, fill opacity=0.3] (0.6, 0.75) arc (180:360:0.4 and 0.2);
+ \draw [dashed] (0.6, 0.75) arc (180:0:0.4 and 0.2);
+ \end{scope}
+ \begin{scope}[shift={(0.375, 0)}]
+ \draw (1.4, 0.75) -- (1.4, 1.1) arc (180:0:0.6 and 0.6) -- (2.6, 0.75);
+ \draw (0.6, 0.75) -- (0.6, 1.1) arc (180:0:1.4 and 1.4) -- (3.4, 0.75);
+ \fill [mblue, opacity=0.3] (1.4, 0.75) -- (1.4, 1.1) arc (180:0:0.6 and 0.6) -- (2.6, 0.75) -- (3.4, 0.75) -- (3.4, 1.1) arc(0:180:1.4 and 1.4) -- (0.6, 0.75);
+ \end{scope}
+ \end{tikzpicture}
+ \end{center}
+ Fix a basepoint $* \in X_e$. Pick a path from $\partial_e^-(*)$ to $\partial_e^+(*)$. This then gives a loop $t$ that ``goes around'' $X_e \times [-1, 1]$ by first starting at $\partial_e^-(*)$, moving along $\{*\} \times [-1, 1]$, and then returning using the path we chose.
+
+ Since every loop inside $X_e$ can be pushed down along the ``tube'' to a loop in $X_v$, it should not be surprising that the group $\pi_1(X)$ is in fact generated by $\pi_1(X_v)$ and $t$.
+
+ In fact, we can explicitly write
+ \[
+ \pi_1 X = \frac{\pi_1 X_v * \bra t\ket}{\bra \bra(\partial_e^+)_*(g) = t (\partial_e^-)_*(g) t^{-1} \forall g \in \pi_1 X_e\ket \ket}.
+ \]
+
+ This is known as an \term{HNN extension}. The way to think about this is as follows --- we have a group $\pi_1 X_v$, and we have two subgroups that are isomorphic to each other. Then the HNN extension is the ``free-est'' way to modify the group so that these two subgroups are conjugate.
+\end{eg}
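+For a concrete instance of an HNN extension (again our addition), take the vertex and edge spaces to be circles:

```latex
\begin{eg}
  Take $X_v = X_e = S^1$, with $\partial_e^-$ of degree $1$ and $\partial_e^+$
  of degree $2$, so that $\pi_1 X_v = \bra a \ket \cong \Z$ and the two
  subgroups being identified are $\bra a \ket$ and $\bra a^2 \ket$. The
  resulting HNN extension is (up to replacing $t$ by $t^{-1}$) the
  Baumslag--Solitar group
  \[
    BS(1, 2) = \bra a, t \mid t a t^{-1} = a^2 \ket.
  \]
\end{eg}
```

Here the two isomorphic subgroups $\bra a \ket$ and $\bra a^2 \ket$ of $\Z$ become conjugate only after adding the stable letter $t$.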
+
+How about for a general graph of spaces? If $\mathcal{X}$ is a graph of spaces, then the fundamental groups of its vertex and edge spaces have the structure of a \emph{graph of groups} $\mathcal{G}$.
+
+\begin{defi}[Graph of groups]\index{graph!of groups}
+ A \emph{graph of groups} $\mathcal{G}$ consists of
+ \begin{itemize}
+ \item A graph $\Gamma$
+ \item Groups $G_v$ for all $v \in V(\Gamma)$
+ \item Groups $G_e$ for all $e \in E(\Gamma)$
+ \item For each edge $e$ with vertices $v^{\pm}(e)$, injective group homomorphisms
+ \[
+ \partial_e^{\pm}: G_e \to G_{v^{\pm}(e)}.
+ \]
+ \end{itemize}
+\end{defi}
+In the case of a graph of spaces, it was easy to define a realization. For a graph of groups, one option would be to build the general case up inductively from the two simple cases we have already seen, but that is not so canonical, and involves a whole lot of choices. Instead, what we are going to do is to associate to a graph of groups $\mathcal{G}$ a graph of spaces $\mathcal{X}$ which ``inverts'' the natural map from graphs of spaces to graphs of groups, given by taking $\pi_1$ of everything.
+
+This can be done by taking Eilenberg--MacLane spaces.
+\begin{defi}[Aspherical space]\index{aspherical space}
+ A space $X$ is \emph{aspherical} if $\tilde{X}$ is contractible. By Whitehead's theorem and the lifting criterion, this is true iff $\pi_n(X) = 0$ for all $n \geq 2$.
+\end{defi}
+
+\begin{prop}
+ For all groups $G$ there exists an aspherical space $BG = K(G, 1)$\index{$BG$}\index{$K(G, 1)$} such that $\pi_1(K(G, 1)) \cong G$. Moreover, for any two choices of $K(G, 1)$ and $K(H, 1)$, and for every homomorphism $f: G \to H$, there is a unique map (up to homotopy) $\bar{f}: K(G, 1) \to K(H, 1)$ that induces this homomorphism on $\pi_1$. In particular, $K(G, 1)$ is well-defined up to homotopy equivalence.
+
+ Moreover, we can choose $K(G, 1)$ functorially, namely there are choices of $K(G, 1)$ for each $G$ and choices of $\bar{f}$ such that $\overline{f_1 \circ f_2} = \bar{f}_1 \circ \bar{f}_2$ and $\overline{\id_{G}} = \id_{K(G, 1)}$ for all $f_1, f_2, G$.
+\end{prop}
+These $K(G, 1)$ are known as \term{Eilenberg--MacLane spaces}.
+
+When we talked about presentations, we saw that this is true if we don't have the word ``aspherical''. But the aspherical requirement makes the space unique (up to homotopy).
+
+Using Eilenberg--MacLane spaces, given any graph of groups $\mathcal{G}$, we can construct a graph of spaces $\mathcal{X}$ such that when we apply $\pi_1$ to all the spaces in $\mathcal{X}$, we recover $\mathcal{G}$.
+
+We can now set
+\[
+ \pi_1 \mathcal{G} = \pi_1|\mathcal{X}|.
+\]
+Note that if $\Gamma$ is finite, and all the $G_v$'s are finitely-generated, then $\pi_1\mathcal{G}$ is also finitely-generated, which one can see by looking into the construction of $K(G, 1)$. If $\Gamma$ is finite, all $G_v$'s are finitely-presented and all $G_e$'s are finitely-generated, then $\pi_1 \mathcal{G}$ is finitely-presented.
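+In fact, unravelling the construction gives a standard explicit presentation of $\pi_1 \mathcal{G}$ (stated here up to orientation conventions; this is not spelled out in the lectures). Choose a spanning tree $T \subseteq \Gamma$. Then

```latex
\[
  \pi_1 \mathcal{G} \cong \bra \{G_v\}_{v \in V(\Gamma)},\, \{t_e\}_{e \in E(\Gamma)}
    \mid t_e \partial_e^-(g) t_e^{-1} = \partial_e^+(g)
    \text{ for all } e \in E(\Gamma),\ g \in G_e,\quad
    t_e = 1 \text{ for all } e \in E(T)\ket,
\]
```

from which the finite generation and finite presentation claims can be read off directly.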
+
+For more details, read \emph{Trees} by Serre, or \emph{Topological methods in group theory} by Scott and Wall.
+
+\subsection{The Bass--Serre tree}
+Given a graph of groups $\mathcal{G}$, we want to analyze $\pi_1 \mathcal{G} = \pi_1 X$, and we do this via the natural action of $G = \pi_1 X$ on $\tilde{X}$ by deck transformations.
+
+Recall that to understand the free group, we looked at the universal cover of the rose with $r$ petals. Since the rose is a graph, the universal cover is also a graph, and because it is simply connected, it must be a tree, and this gives us a normal form theorem. The key result was that a covering space of a graph is again a graph.
+
+\begin{lemma}
+ If $\mathcal{X}$ is a graph of spaces and $p: \hat{X} \to X$ is a covering map, then $\hat{X}$ naturally has the structure of a graph of spaces $\hat{\mathcal{X}}$, and $p$ respects that structure.
+\end{lemma}
+Note that the underlying graph of $\hat{\mathcal{X}}$ is not necessarily a covering space of the underlying graph of $\mathcal{X}$.
+
+\begin{proof}[Proof sketch]
+ Consider
+ \[
+ \bigcup_{v \in V(\Xi)} X_v \subseteq X.
+ \]
+ Let
+ \[
+ p^{-1}\left(\bigcup_{v \in V(\Xi)} X_v\right) = \coprod_{\hat{v} \in V(\hat{\Xi})} \hat{X}_{\hat{v}}
+ \]
+ be the decomposition into path components. This defines the vertices of $\hat{\Xi}$, the underlying graph of $\hat{\mathcal{X}}$, and the path components $\hat{X}_{\hat{v}}$ are going to be the vertex spaces of $\hat{\mathcal{X}}$. Note that for each $\hat{v}$, there exists a unique $v \in V(\Xi)$ such that $p: \hat{X}_{\hat{v}} \to X_v$ is a covering map.
+
+ Likewise, the path components of
+ \[
+ p^{-1}\left(\bigcup_{e \in E(\Xi)} X_e \times \{0\}\right)
+ \]
+ form the edge spaces $\coprod_{\hat{e} \in E(\hat{\Xi})} \hat{X}_{\hat{e}}$ of $\hat{\mathcal{X}}$, which again are covering spaces of the edge spaces of $X$.
+
+ Now let's define the edge maps $\partial_{\hat{e}}^{\pm}$ for each $\hat{e} \in E(\hat{\Xi})$ lying over $e \in E(\Xi)$. To do so, we consider the diagram
+ \[
+ \begin{tikzcd}
+ \hat{X}_{\hat{e}} \ar[r, hook, "\sim"] \ar[rr, bend left] \ar[d] & \hat{X}_{\hat{e}} \times [-1, 1] \ar[d] \ar[r, dashed] & \hat{X} \ar[d, "p"]\\
+ X_e \ar[r, hook, "\sim"] & X_e \times [-1, 1] \ar[r] & X
+ \end{tikzcd}
+ \]
+ By the lifting criterion, for the dashed map to exist, there is a necessary and sufficient condition on $(\hat{X}_{\hat{e}} \times [-1, 1] \to X_e \times [-1, 1] \to X)_*$. But since this condition is homotopy invariant, we can check it on the composition $(\hat{X}_{\hat{e}} \to X_e \to X)_*$ instead, and we know it must be satisfied because a lift exists in this case.
+
+ The attaching maps $\partial^{\pm}_{\hat{e}}: \hat{X}_{\hat{e}} \to \hat{X}$ are precisely the restrictions of this lift to $\hat{X}_{\hat{e}} \times \{\pm 1\}$.
+
+ Finally, check using covering space theory that the maps $\hat{X}_{\hat{e}} \times [-1, 1] \to \hat{X}$ are injective on the interiors of the cylinders, and verify that the appropriate maps are $\pi_1$-injective.
+\end{proof}
+
+Now let's apply this to the universal cover $\tilde{X} \to X$. We see that $\tilde{X}$ has a natural action of $G = \pi_1X$, which preserves the graph of spaces structure.
+
+Note that for any graph of spaces $X$, there are maps
+\begin{align*}
+ \iota: \Xi &\to X\\
+ \rho: X &\to \Xi
+\end{align*}
+such that $\rho \circ \iota \simeq \id_{\Xi}$. In particular, this implies $\rho_*$ is surjective (and of course $\rho$ itself is also surjective).
+
+In the case of the universal cover $\tilde{X}$, we see that the underlying graph $\tilde{\Xi} = T$ is connected and simply connected, because $\pi_1 \tilde{\Xi} = \rho_*(\pi_1 \tilde{X}) = 1$. So it is a tree!
+
+The action of $G$ on $\tilde{X}$ descends to an action of $G$ on $\tilde{\Xi}$. So whenever we have a graph of spaces, or a graph of groups $\mathcal{G}$, we have an action of the fundamental group on a tree. This tree is called the \term{Bass--Serre tree} of $\mathcal{G}$.
+
+Just like the case of the free group, careful analysis of the Bass--Serre tree leads to theorems relating $\pi_1(\mathcal{G})$ to the vertex groups $G_v$ and edge groups $G_e$.
+
+\begin{eg}
+ Let
+ \[
+ G = F_2 = \bra a\ket * \bra b\ket = \Z * \Z.
+ \]
+ In this case, we take $X$ to be
+
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mred, thick] circle [radius=0.5];
+ \draw [mblue, thick] (2, 0) circle [radius=0.5];
+ \draw [morange, thick] (0.5, 0) -- (1.5, 0) node [pos=0.5, below] {$X_e$};
+
+ \node [below] at (0, -0.5) {$X_u$};
+ \node [below] at (2, -0.5) {$X_v$};
+ \end{tikzpicture}
+ \end{center}
+ Here we view this as a graph where the two vertex spaces are circles, and there is a single edge connecting them. The covering space $\tilde{X}$ then looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mred, thick] (0, -3.3) -- (0, 3.3);
+ \foreach \y in {-3,...,3} {
+ \node [circ] at (0, \y) {};
+ \draw [morange, thick] (0, \y) -- (2, \y);
+
+ \draw [mblue, thick] (2, \y-0.3) -- (2, \y + 0.3);
+ \foreach \z in {-0.3, -0.2, -0.1, 0.1, 0.2, 0.3} {
+ \draw [morange] (2, \y + \z) -- +(-0.2, 0);
+ \draw [mred] (1.8, \y + \z + 0.04) -- (1.8, \y + \z - 0.04);
+ }
+ }
+ \end{tikzpicture}
+ \end{center}
+ This is \emph{not} the Bass--Serre tree. The Bass--Serre tree is an infinite-valent bipartite tree, which looks roughly like
+ \begin{center}
+ \begin{tikzpicture}
+
+ \foreach \t in {0,20,...,340}{
+ \begin{scope}[rotate=\t]
+ \begin{scope}[shift={(1.5, 0)}]
+ \foreach \s in {-18,-12,-6,0,6,12,18}{
+ \draw [rotate=\s, morange] (0, 0) -- (0.5, 0) node [scirc, mred] {};
+ }
+ \end{scope}
+ \draw [morange](0, 0) -- (1.5, 0) node [circ, mblue] {};
+ \end{scope}
+ }
+ \node [circ, mred] {};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+This point of view gives two important results that relate elements of $G$ to the vertex groups $G_{v_i}$ and the edge maps $\partial_{e_j}^{\pm}: G_{e_j} \to G_{v_j^{\pm}}$.
+
+\begin{lemma}[Britton's lemma]\index{Britton's lemma}
+ For any vertex $v \in V(\Xi)$, the natural map $G_v \to G$ is injective.
+\end{lemma}
+Unsurprisingly, this really requires that the edge maps are injective. It is an exercise to find examples to show that this fails if the boundary maps are not injective.
+
+\begin{proof}[Proof sketch]
+ Observe that the universal cover $\tilde{X}$ can be produced by first building the universal covers of the vertex spaces, which are then glued together in a way that doesn't kill the fundamental groups.
+\end{proof}
+
+More importantly, Bass--Serre trees give us normal form theorems! Pick a base vertex $v \in V(\Xi)$. We can then represent elements of $G$ in the form
+\[
+ \gamma = g_0 e_1^{\pm 1} g_1 e_2^{\pm 1} \cdots e_n^{\pm 1}g_n
+\]
+where each $e_i$ is an edge such that $e_1^{\pm1} e_2^{\pm 1} \cdots e_n^{\pm1}$ forms a closed loop based at $v$ in the underlying graph of $\Xi$, and each $g_i$ is an element of the group at the vertex connecting $e_i^{\pm 1}$ and $e_{i + 1}^{\pm 1}$.
+
+We say a \emph{pinch} is a sub-word of the form
+\[
+ e^{\pm 1} \partial_e^{\pm} (g) e^{\mp 1},
+\]
+which can be replaced by $\partial_e^{\mp}(g)$.
+
+We say a loop is \emph{reduced} if it contains no pinches.
+
+\begin{thm}[Normal form theorem]\index{normal form theorem}
+ Every element can be represented by a reduced loop, and the only reduced loop representing the identity is the trivial loop.
+\end{thm}
+This is good enough, since if we can recognize whether something is the identity, then to compare two elements we can check whether the product of one with the inverse of the other is the identity. An exact normal form for all words would be a bit too ambitious.
+
+\begin{proof}[Proof idea]
+ It all boils down to the fact that the Bass--Serre tree is a tree. Connectedness gives the existence, and the simply-connectedness gives the ``uniqueness''.
+\end{proof}
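+As a sanity check (our worked example, not from the lectures), let us see how this recovers the usual normal form in a free product:

```latex
\begin{eg}
  Let $\mathcal{G}$ have two vertices $u, v$ joined by a single edge $e$, with
  $G_u = \bra a \ket$, $G_v = \bra b \ket$ and $G_e = 1$, so that
  $\pi_1 \mathcal{G} \cong \Z * \Z$. A loop based at $u$ has the form
  \[
    a^{k_0}\, e\, b^{l_1}\, e^{-1}\, a^{k_1}\, e\, b^{l_2}\, e^{-1}
      \cdots e^{-1}\, a^{k_n}.
  \]
  Since $G_e$ is trivial, a pinch is precisely a sub-word $e e^{-1}$ or
  $e^{-1} e$ with trivial label in between, i.e.\ a back-track across the edge.
  So the loop is reduced exactly when the intermediate $b^{l_i}$ and $a^{k_i}$
  are non-trivial, and the normal form theorem recovers the usual normal form
  for elements of the free product $\bra a \ket * \bra b \ket$.
\end{eg}
```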
+
+\section{Hyperbolic groups}
+So far, we obtained a lot of information about groups by seeing how they act on trees. Philosophically, the reason for this is that trees are very ``negatively curved'' spaces. We shall see in this chapter that in general, whenever a group acts on a negatively curved space, we can learn a lot about the group.
+
+\subsection{Definitions and examples}
+We now want to define a negatively-curved space in great generality. Let $X$ be a geodesic metric space. Given $x, y \in X$, we will write $[x, y]$ for a choice of geodesic between $x$ and $y$.
+
+\begin{defi}[Geodesic triangle]\index{geodesic triangle}\index{triangle!geodesic}
+ A \emph{geodesic triangle} $\Delta$ is a choice of three points $x, y, z$ and geodesics $[x, y], [y, z], [z, x]$.
+\end{defi}
+Geodesic triangles look like this:
+\begin{center}
+ \begin{tikzpicture}
+ \node [circ] (a) at (0, 0) {};
+ \node [circ] (b) at (2, 0) {};
+ \node [circ] (c) at (1, 1.732) {};
+
+ \draw (a) edge [bend left] (b);
+ \draw (b) edge [bend left] (c);
+ \draw (c) edge [bend left] (a);
+ \end{tikzpicture}
+\end{center}
+Note that in general, the geodesics may intersect.
+
+\begin{defi}[$\delta$-slim triangle]\index{$\delta$-slim triangle}
+ We say $\Delta$ is $\delta$-slim if every side of $\Delta$ is contained in the union of the $\delta$-neighbourhoods of the other two sides.
+\end{defi}
+
+\begin{defi}[Hyperbolic space]\index{hyperbolic space}\index{Gromov hyperbolic space}\index{$\delta$-hyperbolic space}
+ A metric space is \emph{(Gromov) hyperbolic} if there exists $\delta \geq 0$ such that every geodesic triangle in $X$ is $\delta$-slim. In this case, we say it \emph{is $\delta$-hyperbolic}.
+\end{defi}
+
+\begin{eg}
+ $\R^2$ is not Gromov hyperbolic: dilating any fixed triangle scales its slimness constant linearly, so no single $\delta$ can work.
+\end{eg}
+
+\begin{eg}
+ If $X$ is a tree, then $X$ is $0$-hyperbolic! Indeed, each triangle looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] (a) at (0, 0) {};
+ \node [circ] (b) at (2, 0) {};
+ \node [circ] (c) at (1, 1.732) {};
+
+ \draw (a) -- (1, 0.866);
+ \draw (b) -- (1, 0.866);
+ \draw (c) -- (1, 0.866);
+ \end{tikzpicture}
+ \end{center}
+ We call this a \term{tripod}.
+\end{eg}
+Unfortunately, none of these examples really justify why we call these things hyperbolic. Let's look at the actual motivating example.
+
+\begin{eg}
+ Let $X = \H^2$, the \term{hyperbolic plane}. Let $\Delta \subseteq \H^2$ be a triangle. Then $\Delta$ is $\delta$-slim, where $\delta$ is the maximum radius of an inscribed semi-circle $D$ in $\Delta$ with the center on one of the edges.
+
+ But we know that the radius of $D$ is bounded above by an increasing function of the area of $D$, and the area of $D$ is bounded above by the area of $\Delta$. On the other hand, by the Gauss--Bonnet theorem, the area of \emph{any} hyperbolic triangle is $\pi$ minus the sum of its internal angles, and is in particular bounded by $\pi$. So $\H^2$ is $\delta$-hyperbolic for some $\delta$.
+\end{eg}
+If we worked a bit harder, then we can figure out the best value of $\delta$. However, for the arguments we are going to do, we don't really care about what $\delta$ is.
+
+\begin{eg}
+ Let $X$ be any bounded metric space, e.g.\ $S^2$. Then $X$ is Gromov hyperbolic, since we can just take $\delta$ to be the diameter of the metric space.
+\end{eg}
+This is rather silly, but it makes sense if we take the ``coarse point of view'', and we have to ignore bounded things.
+
+What we would like to say is that a group $\Gamma$ is hyperbolic if for every finite generating set $S$, the Cayley graph $\Cay_S(\Gamma)$ equipped with the word metric is $\delta$-hyperbolic for some $\delta$.
+
+However, this is not very helpful, since we have to check it for all finite generating sets. So we want to say that being hyperbolic is quasi-isometry invariant, in some sense.
+
+This is slightly difficult, because we lose control of how the geodesic behaves if we only look at things up to isometry. To do so, we have to talk about quasi-geodesics.
+
+\subsection{Quasi-geodesics and hyperbolicity}
+\begin{defi}[Quasi-geodesic]\index{quasi-geodesic}
+ A $(\lambda, \varepsilon)$-quasi-geodesic for $\lambda \geq 1, \varepsilon \geq 0$ is a $(\lambda, \varepsilon)$-quasi-isometric embedding $I \to X$, where $I \subseteq \R$ is a closed interval.
+\end{defi}
+Note that our definition allows $I$ to be unbounded, i.e.\ $I$ may be of the form $[a, b]$, $[0, \infty)$ or $\R$. We call these \emph{quasi-geodesic intervals}\index{quasi-geodesic interval}, \emph{quasi-geodesic rays}\index{quasi-geodesic ray} and \emph{quasi-geodesic lines}\index{quasi-geodesic line} respectively.
+
+\begin{eg}
+ The map $[0, \infty) \to \R^2$ given in polar coordinates by
+ \[
+ t \mapsto (t, \log(1 + t))
+ \]
+ is a quasigeodesic ray.
+\end{eg}
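+Let us sketch why this is a quasi-geodesic (this verification is not in the lectures, and the constants are illustrative rather than optimal). Since the radial coordinate of $c(t)$ is $t$, we immediately get $d(c(t), c(t')) \geq |t - t'|$. Conversely, for $t \leq t'$, travel from $c(t)$ along the circle of radius $t$ to the angle $\log(1 + t')$, then radially out to $c(t')$. The arc has length
+\[
+ t \log\left(\frac{1 + t'}{1 + t}\right) \leq \frac{t(t' - t)}{1 + t} \leq t' - t,
+\]
+so $d(c(t), c(t')) \leq 2|t - t'|$, and $c$ is a $(2, 0)$-quasi-geodesic. On the other hand, since the angle $\log(1 + t)$ tends to infinity, the image spirals around the origin, and so stays within bounded distance of no genuine geodesic ray.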
+This should be alarming. The quasi-geodesics in the Euclidean plane can look unlike any genuine geodesic. However, it turns out things are well-behaved in hyperbolic spaces.
+
+\begin{thm}[Morse lemma]\index{Morse lemma}
+ For all $\delta \geq 0$, $\lambda \geq 1$ and $\varepsilon \geq 0$, there is $R = R(\delta, \lambda, \varepsilon)$ such that the following holds:
+
+ If $X$ is a $\delta$-hyperbolic metric space, and $c: [a, b] \to X$ is a $(\lambda, \varepsilon)$-quasigeodesic from $p$ to $q$, and $[p, q]$ is a choice of geodesic from $p$ to $q$, then
+ \[
+ d_{\mathrm{Haus}}([p, q], \im(c)) \leq R(\delta, \lambda, \varepsilon),
+ \]
+ where
+ \[
+ d_{\mathrm{Haus}}(A, B) = \inf\{\varepsilon > 0 \mid A \subseteq N_\varepsilon(B) \text{ and }B \subseteq N_\varepsilon(A)\}
+ \]
+ is the \term{Hausdorff distance}\index{$d_{\mathrm{Haus}}$}.
+\end{thm}
+This has nothing to do with the Morse lemma in differential topology.
+
+\begin{cor}
+ There is an $M(\delta, \lambda, \varepsilon)$ such that a geodesic metric space $X$ is $\delta$-hyperbolic iff any $(\lambda, \varepsilon)$-quasigeodesic triangle is $M$-slim.
+\end{cor}
+
+\begin{cor}
+ Suppose $X, X'$ are geodesic metric spaces, and $f: X \to X'$ is a quasi-isometric embedding. If $X'$ is hyperbolic, then so is $X$.
+
+ In particular, hyperbolicity is a quasi-isometrically invariant property, when restricted to geodesic metric spaces.
+\end{cor}
+
+Thus, we can make the following definition:
+\begin{defi}[Hyperbolic group]\index{hyperbolic group}
+ A group $\Gamma$ is \emph{hyperbolic} if it acts properly discontinuously and cocompactly by isometries on a proper, geodesic hyperbolic metric space. Equivalently, if it is finitely-generated and has a hyperbolic Cayley graph.
+\end{defi}
+
+Let's prove the Morse lemma. To do so, we need the following definition:
+\begin{defi}[Length of path]\index{length of path}
+ Let $c: [a, b] \to X$ be a continuous path. Let $a = t_0 < t_1 < \cdots < t_n = b$ be a \term{dissection} $\mathcal{D}$ of $[a, b]$. Define
+ \[
+ \ell (c) = \sup_{\mathcal{D}} \sum_{i = 1}^n d(c(t_{i - 1}), c(t_i)).
+ \]
+\end{defi}
+In general, this number is extremely hard to compute, and may even be infinite.
+
+\begin{defi}[Rectifiable path]\index{rectifiable path}
+ We say a path $c$ is \emph{rectifiable} if $\ell(c) < \infty$.
+\end{defi}
+
+\begin{eg}
+ Piecewise geodesic paths are rectifiable.
+\end{eg}
+
+\begin{lemma}
+ Let $X$ be a geodesic space. For any $(\lambda, \varepsilon)$-quasigeodesic $c: [a, b] \to X$, there exists a continuous, rectifiable $(\lambda, \varepsilon')$-quasigeodesic $c': [a, b] \to X$ with $\varepsilon' = 2(\lambda + \varepsilon)$ such that
+ \begin{enumerate}
+ \item $c'(a) = c(a)$, $c'(b) = c(b)$.
+ \item For all $a \leq t < t' \leq b$, we have
+ \[
+ \ell(c'|_{[t, t']}) \leq k_1 d(c'(t), c'(t')) + k_2
+ \]
+ where $k_1 = \lambda (\lambda + \varepsilon)$ and $k_2 = (\lambda \varepsilon' + 3)(\lambda + 3)$.
+ \item $d_{\mathrm{Haus}}(\im c, \im c') \leq \lambda + \varepsilon$.
+ \end{enumerate}
+\end{lemma}
+
+\begin{proof}[Proof sketch]
+ Let $\Sigma = \{a, b\} \cup ((a, b) \cap \Z)$. For $t \in \Sigma$, we let $c'(t) = c(t)$, and define $c'$ to be geodesic between the points of $\Sigma$. Then claims (i) and (iii) are clear, and to prove quasigeodesicity and (ii), let $\sigma: [a, b] \to \Sigma$ be a choice of closest point in $\Sigma$, and then estimate $d(c'(t), c'(t'))$ in terms of $d(c(\sigma(t)), c(\sigma(t')))$.
+\end{proof}
+
+\begin{lemma}
+ Let $X$ be $\delta$-hyperbolic, and $c: [a, b] \to X$ a continuous, rectifiable path in $X$ joining $p$ to $q$. Then for any geodesic $[p, q]$ and any $x \in [p, q]$, we have
+ \[
+ d(x, \im c) \leq \delta |\log_2\ell(c)| + 1.
+ \]
+\end{lemma}
+
+\begin{proof}
+ We may assume $c: [0, 1] \to X$ is parametrized proportional to arc length. Suppose
+ \[
+ \frac{\ell(c)}{2^N} < 1 \leq \frac{\ell(c)}{2^{N - 1}}.
+ \]
+ Let $x_0 = x$. Pick a geodesic triangle between $p, q, c(\frac{1}{2})$. By $\delta$-hyperbolicity, there exists a point $x_1$ lying on the other two edges such that $d(x_0, x_1) \leq \delta$. We wlog assume $x_1 \in [p, c(\frac{1}{2})]$. We can repeat the argument with $c|_{[0, \frac{1}{2}]}$.
+
+ \begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \draw (-2, 0) -- (2, 0) arc(0:180:2);
+ \node [below] at (-2, 0) {$p$};
+ \node [below] at (2, 0) {$q$};
+ \node [above] at (0, 2) {$c(\frac{1}{2})$};
+
+ \draw (-2, 0) edge [bend right] (0, 2);
+ \draw (0, 2) edge [bend right] (2, 0);
+
+ \draw (-2, 0) edge [bend right] (-1.414, 1.414);
+ \draw (-1.414, 1.414) edge [bend right] (0, 2);
+
+ \draw (-1.414, 1.414) edge [bend right] (-0.7654, 1.8476);
+ \draw (-0.7654, 1.8476) edge [bend right] (0, 2);
+
+ \node [circ] (a) at (-0.4, 0) {};
+ \node [below] at (-0.4, 0) {\small$x_0$};
+
+ \node [circ] (b) at (-0.32, 1.2) {};
+ \node [right] at (b) {\small$x_1$};
+
+ \node [circ] (c) at (-1, 1.41) {};
+
+ \node [circ] (d) at (-0.9, 1.65) {};
+
+ \draw (a) edge [bend right, -latex'] (b);
+ \draw (b) edge [bend left, -latex'] (c);
+ \draw (c) edge [bend right, -latex'] (d);
+ \end{tikzpicture}
+ \end{center}
+
+ Formally, we proceed by induction on $N$. If $N = 0$, so that $\ell(c) < 1$, then we are done by taking the desired point of $\im c$ to be $p$ (or $q$), since $d(x, p) \leq d(p, q) \leq \ell(c) < 1$. Otherwise, there is some $x_1 \in [p, c(\frac{1}{2})]$ such that $d(x_0, x_1) \leq \delta$. Then
+ \[
+ \frac{\ell(c|_{[0, \frac{1}{2}]})}{2^{N - 1}} < 1 \leq \frac{\ell(c|_{[0, \frac{1}{2}]})}{2^{N - 2}}.
+ \]
+ So by the induction hypothesis,
+ \begin{align*}
+ d(x_1, \im c) &\leq \delta \abs{\log_2 \ell(c|_{[0, \frac{1}{2}]})} + 1\\
+ &= \delta \abs{\log_2 \ell(c) - 1} + 1\\
+ &= \delta (\abs{\log_2 \ell(c)} - 1) + 1.
+ \end{align*}
+ Note that we used the fact that $\ell(c) \geq 2$, so that $\log_2 \ell(c) \geq 1$.
+
+ Then we are done since
+ \[
+ d(x, \im c) \leq d(x, x_1) + d(x_1, \im c).\qedhere
+ \]
+\end{proof}
+
+\begin{proof}[Proof of Morse lemma]
+ By the first lemma, we may assume that $c$ is continuous and rectifiable, and satisfies the properties as in the lemma.
+
+ Let $p, q$ be the end points of $c$, and $[p, q]$ a geodesic. First we show that $[p, q]$ is contained in a bounded neighbourhood of $\im c$. Let
+ \[
+ D = \sup_{x \in [p, q]} d(x, \im c).
+ \]
+ By compactness of the interval, let $x_0 \in [p, q]$ be a point where the supremum is attained. Then by definition of $D$, $\im c$ lies outside $\mathring{B}(x_0, D)$. Choose $y, z \in [p, q]$ such that $d(x_0, y) = d(x_0, z) = 2D$ and $y, x_0, z$ appear on the geodesic in this order (taking $y = p$ or $z = q$ if this is not possible).
+
+ Let $y' = c(s) \in \im c$ be such that $d(y', y) \leq D$, and similarly let $z' = c(t) \in \im c$ be such that $d(z, z') \leq D$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [mblue, thick] (-4, 0) arc(180:0:4 and 2);
+ \draw (-4, 0) -- (4, 0);
+ \draw (-2, 0) arc (180:0:2);
+
+ \node [circ] at (-4, 0) {};
+ \node [left] at (-4, 0) {$p$};
+ \node [circ] at (4, 0) {};
+ \node [right] at (4, 0) {$q$};
+
+ \node [mblue, above] at (2, 1.75) {$c$};
+
+ \draw [<->] (0, 0)node [circ] {} node [below] {$x_0$} -- (0, 2) node [pos=0.5, left] {$D$};
+
+ \node [circ] (y) at (-3, 0) {};
+ \node [below] at (y) {$y$};
+ \node [circ] (z) at (3, 0) {};
+ \node [below] at (z) {$z$};
+
+ \draw [decorate, decoration={brace, amplitude=5pt}](0, -0.5) -- (-3, -0.5) node [pos=0.5, below=0.2cm] {$2D$};
+ \draw [decorate, decoration={brace, amplitude=5pt}](3, -0.5) -- (0, -0.5) node [pos=0.5, below=0.2cm] {$2D$};
+
+ \node [circ] (y') at (-3.35, 1.1) {};
+ \node [anchor = south east] at (y') {$y'$};
+ \node [circ] (z') at (3.35, 1.1) {};
+ \node [anchor = south west] at (z') {$z'$};
+
+ \draw [dashed] (y) -- (y') node [pos=0.5, right] {$\leq D$};
+ \draw [dashed] (z) -- (z') node [pos=0.5, left] {$\leq D$};
+ \end{tikzpicture}
+ \end{center}
+ Let $\gamma = [y, y'] \cdot c|_{[s, t]} \cdot [z', z]$. Then
+ \[
+ \ell(\gamma) = d(y, y') + d(z, z') + \ell(c|_{[s, t]}) \leq D + D + k_1 d(y', z') + k_2,
+ \]
+ by assumption. Also, we know that $d(y', z') \leq 6D$. So we have
+ \[
+ \ell(\gamma) \leq 6k_1D + 2D + k_2.
+ \]
+ But we know that
+ \[
+ d(x_0, \im \gamma) \geq D.
+ \]
+ So the second lemma tells us
+ \[
+ D \leq \delta |\log_2 (6 k_1 D + 2D + k_2)| + 1.
+ \]
+ The left-hand side is linear in $D$, while the right-hand side is logarithmic in $D$. So it must be the case that $D$ is bounded. Hence $[p, q] \subseteq N_{D_0}(\im c)$, where $D_0$ is some constant.
+
+ It remains to find a bound $M$ such that $\im c \subseteq N_M([p, q])$. Let $[a', b']$ be a maximal subinterval of $[a, b]$ such that $c[a', b']$ lies entirely outside $\mathring{N}_{D_0}([p, q])$. Since $\bar{N}_{D_0}(c[a, a'])$ and $\bar{N}_{D_0}(c[b', b])$ are both closed, and they collectively cover the connected set $[p, q]$, there exists
+ \[
+ w \in [p, q] \cap \bar{N}_{D_0}(c[a, a']) \cap \bar{N}_{D_0}(c[b', b]).
+ \]
+ Therefore there exist $t \in [a, a']$ and $t' \in [b', b]$ such that $d(w, c(t)) \leq D_0$ and $d(w, c(t')) \leq D_0$. In particular, $d(c(t), c(t')) \leq 2 D_0$.
+
+ By the first lemma, we know
+ \[
+ \ell(c|_{[t, t']}) \leq 2k_1 D_0 + k_2.
+ \]
+ So we know that for $s \in [a', b']$, we have
+ \begin{align*}
+ d(c(s), [p, q]) &\leq d(c(s), w) \\
+ &\leq d(c(s), c(t)) + d(c(t), w) \\
+ &\leq \ell(c|_{[t, t']}) + D_0 \\
+ &\leq D_0 + 2k_1 D_0 + k_2.\qedhere
+ \end{align*}
+\end{proof}
+
+With the Morse lemma proven, we can sensibly talk about hyperbolic groups. We have already seen many examples of hyperbolic spaces, such as trees and the hyperbolic plane.
+
+\begin{eg}
+ Any finite group is hyperbolic, since its Cayley graphs are bounded.
+\end{eg}
+
+\begin{eg}
+ Finitely-generated free groups are hyperbolic, since their Cayley graphs are trees.
+\end{eg}
+
+\begin{eg}
+ The fundamental group $\pi_1 \Sigma_g$ of a closed surface of genus $g \geq 2$ acts on the hyperbolic plane properly discontinuously and cocompactly by isometries, since the universal cover of the genus $g$ surface is the hyperbolic plane, obtained by tessellating it with $4g$-gons (cf.\ IB Geometry). So surface groups are hyperbolic.
+\end{eg}
+
+The following notion of ``virtually'' will be helpful:
+\begin{defi}[virtually-P]\index{virtually-P}
+ Let P be a property of groups. Then we say $G$ is virtually-P if there is a finite index subgroup $G_0 \leq G$ such that $G_0$ is P.
+\end{defi}
+
+\begin{eg}
+ Finite groups are virtually trivial!
+\end{eg}
+
+Note that if $G_0 \leq G$ is finite-index, then $G_0$ acts on $\Cay_S(G)$ in a way that satisfies the hypotheses of the Schwarz--Milnor lemma (the only property we might worry about is cocompactness, which is exactly what finite-index gives us), and hence $G_0 \qi G$. In particular,
+
+\begin{eg}
+ A virtually hyperbolic group is hyperbolic. For example, virtually free groups such as $(\Z/2) * (\Z/3) = \PSL_2\Z$ are hyperbolic.
+\end{eg}
+
+These are some nice classes of examples, but they will be dwarfed by our next class of examples.
+
+A ``random group'' is hyperbolic. More precisely, fix $m \geq 2$ and $n \geq 1$, and temporarily fix $\ell \geq 0$. Consider groups of the form
+\[
+ \Gamma = \bra a_1, \ldots, a_m \mid r_1, \ldots, r_n\ket,
+\]
+where each $r_i$ is a \emph{cyclically reduced}\index{cyclically reduced word} word of length $\ell$, i.e.\ a reduced word whose first letter is not the inverse of its last. Put the uniform probability distribution on the set of all such groups. This defines a group-valued random variable $\Gamma_\ell$. For a property $P$, we say that ``a random group is $P$''\index{random group} if
+\[
+ \lim_{\ell \to \infty}\P(\text{$\Gamma_\ell$ is $P$}) = 1
+\]
+for all $n, m$.
+\begin{thm}[Gromov]
+ A random group is infinite and hyperbolic.
+\end{thm}
+
+There are, of course, other ways to define a ``random group''. As long as we control the number of relations well so that we don't end up with finite groups all the time, the ``random group'' should still be hyperbolic.
+
+Recall that in differential geometry, geodesics are defined locally. However, we defined our geodesics to be isometrically embedded intervals, which is necessarily a global notion. We want an analogous local notion. However, if we want to work up to quasi-isometry, then we cannot go completely local, because at small scales a quasi-isometry gives no control at all.
+
+\begin{defi}[$k$-local geodesic]\index{$k$-local geodesic}\index{local geodesic}
+ Let $X$ be a geodesic metric space, and $k > 0$. A path $c: [a, b] \to X$ is a \emph{$k$-local geodesic} if
+ \[
+ d(c(t), c(t')) = |t - t'|
+ \]
+ for all $t, t' \in [a, b]$ with $|t - t'| \leq k$.
+\end{defi}
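+As a simple example to bear in mind (not from the lectures): let $X$ be a circle of circumference $\ell$ with the arc-length metric. Then the loop $c: [0, \ell] \to X$ going around the circle at unit speed is an $\frac{\ell}{2}$-local geodesic, since any two points at parameter distance at most $\frac{\ell}{2}$ are joined by a shortest arc, but it is not a geodesic, since $c(0) = c(\ell)$.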
+
+We know very well that on a general Riemannian manifold, local geodesics need not be actual geodesics. In fact, they can be far from being an actual geodesic, since, for example, we can wrap around the sphere many times. Again, things are much better in the hyperbolic world.
+\begin{thm}
+ Let $X$ be $\delta$-hyperbolic and $c: [a, b] \to X$ be a $k$-local geodesic where $k > 8 \delta$. Then $c$ is a $(\lambda, \varepsilon)$-quasigeodesic for some $\lambda = \lambda(\delta, k)$ and $\varepsilon = \varepsilon(\delta, k)$.
+\end{thm}
+
+First, we prove
+\begin{lemma}
+ Let $X$ be $\delta$-hyperbolic and $k > 8\delta$. If $c: [a, b] \to X$ is a $k$-local geodesic, then $\im c$ is contained in the $2\delta$-neighbourhood of $[c(a), c(b)]$.
+\end{lemma}
+Observe that by iterating the definition of hyperbolicity, on a $\delta$-hyperbolic space, any point on an $n$-gon is at most $(n - 2)\delta$ away from a point on another side.
+
+\begin{proof}
+ Let $x = c(t)$ maximize $d(x, [c(a), c(b)])$. Let
+ \[
+ y = c\left(t - \frac{k}{2}\right),\quad z = c\left(t + \frac{k}{2}\right).
+ \]
+ If $t - \frac{k}{2} < a$, we set $y = c(a)$ instead, and similarly for $z$.
+
+ Let $y' \in [c(a), c(b)]$ minimize $d(y, y')$, and likewise let $z' \in [c(a), c(b)]$ minimize $d(z, z')$.
+ \begin{center}
+ \begin{tikzpicture}
+ \clip (-3, -1) rectangle (3, 3);
+ \draw [mblue, thick] (-4, 0) arc(180:0:4 and 2);
+ \draw (-4, 0) -- (4, 0);
+
+ \node [circ] at (0, 2) {};
+ \node [above] at (0, 2) {$x$};
+
+ \node [circ] (y) at (-2, 0) {};
+ \node [below] at (y) {$y'$};
+ \node [circ] (z) at (2, 0) {};
+ \node [below] at (z) {$z'$};
+
+ \node [circ] (y') at (-2.3, 1.64) {};
+ \node [anchor = south east] at (y') {$y$};
+ \node [circ] (z') at (2.3, 1.64) {};
+ \node [anchor = south west] at (z') {$z$};
+
+ \draw (y) edge [bend right] (y');
+ \draw (z) edge [bend left] (z');
+
+ \node [circ] (w) at (-1.9, 0.9) {};
+ \node [right] at (w) {$w$};
+ \end{tikzpicture}
+ \end{center}
+
+ Fix geodesics $[y, y']$ and $[z, z']$. Then we have a geodesic rectangle with vertices $y, y', z, z'$, whose fourth side is the portion of $\im c$ from $y$ to $z$ (a genuine geodesic, since it has length at most $k$ and $c$ is a $k$-local geodesic). Since rectangles in a $\delta$-hyperbolic space are $2\delta$-slim, there exists $w$ on the rectangle, \emph{not} on $\im c$, such that $d(x, w) \leq 2 \delta$.
+
+ If $w \in [y', z']$, then we win. Otherwise, we may wlog assume $w \in [y, y']$. Note that in the case $y = c(a)$, we must have $y = y'$, and so this would imply $w = y \in [c(a), c(b)]$. So we are only worried about the case $y = c\left(t - \frac{k}{2}\right)$, where $d(y, x) = \frac{k}{2} > 4\delta$. Then by the triangle inequality, we must have $d(y, w) \geq d(y, x) - d(x, w) > 4\delta - 2\delta = 2\delta \geq d(x, w)$.
+
+ However,
+ \[
+ d(x, y') \leq d(x, w) + d(w, y') < d(y, w) + d(w, y') = d(y, y').
+ \]
+ So it follows that
+ \[
+ d(x, [c(a), c(b)]) < d(y, y') = d(y, [c(a), c(b)]).
+ \]
+ This contradicts our choice of $x$.
+\end{proof}
+
+\begin{proof}[Proof of theorem]
+ Let $c: [a, b] \to X$ be a $k$-local geodesic, and $t \leq t' \in [a, b]$. Choose $t_0 = t < t_1 < \cdots < t_n < t'$ such that $t_i = t_{i - 1} + k$ for all $i$ and $t' - t_n < k$.
+
+ Then by definition, we have
+ \[
+ d(c(t_{i - 1}), c(t_i)) = k,\quad d(c(t_n), c(t')) = |t_n - t'|.
+ \]
+ for all $i$. So by the triangle inequality, we have
+ \[
+ d(c(t), c(t')) \leq \sum_{i = 1}^n d(c(t_{i - 1}), c(t_i)) + d(c(t_n), c(t')) = |t - t'|.
+ \]
+ We now have to establish a coarse \emph{lower} bound on $d(c(t), c(t'))$.
+
+ We may wlog assume $t = a$ and $t' = b$. We need to show that
+ \[
+ d(c(a), c(b)) \geq \frac{1}{\lambda} |b - a| - \varepsilon.
+ \]
+ We divide $c$ up into regular subintervals $[x_i, x_{i + 1}]$, and choose $x_i'$ close to $x_i$. The goal is then to prove that the $x_i'$ appear in order along $[c(a), c(b)]$.
+
+ Let
+ \[
+ k' = \frac{k}{2} + 2 \delta > 6\delta.
+ \]
+ Let $b - a = M k' + \eta$ for $0 \leq \eta < k'$ and $M \in \N$. Put $x_i = c(a + i k')$ for $i = 1, \ldots, M$, and let $x_i'$ be a closest point on $[c(a), c(b)]$ to $x_i$. By the lemma, we know $d(x_i, x_i') \leq 2\delta$.
+
+ \begin{claim}
+ $x_1', \ldots, x_M'$ appear in the correct order along $[c(a), c(b)]$.
+ \end{claim}
+ Let's finish the proof assuming the claim. If this holds, then note that
+ \[
+ d(x_i', x_{i + 1}') \geq k' - 4\delta > 2\delta
+ \]
+ because $d(x_i, x_{i + 1}) = k' > 6\delta$ (as $c$ restricted to an interval of length $k' \leq k$ is geodesic), and also $d(x_M', c(b)) \geq \eta - 2 \delta$. Therefore, writing $x_0' = c(a)$, we have
+ \[
+ d(c(a), c(b)) = \sum_{i = 1}^M d(x_i', x_{i - 1}') + d(x_M', c(b)) \geq 2 \delta M + \eta - 2 \delta \geq 2\delta(M - 1).
+ \]
+ On the other hand, we have
+ \[
+ M = \frac{|b - a| - \eta}{k'} \geq \frac{|b - a|}{k'} - 1.
+ \]
+ Thus, we find that
+ \[
+ d(c(a), c(b)) \geq \frac{2\delta}{k'} |b - a| - 4 \delta.
+ \]
+
+ To prove the claim, write $t_i = a + i k'$, so that $x_i = c(t_i)$. We let
+ \begin{align*}
+ y &= c(t_{i - 1} + 2 \delta)\\
+ z &= c(t_{i + 1} - 2 \delta).
+ \end{align*}
+ Define
+ \begin{align*}
+ \Delta_- &= \Delta(x_{i - 1}, x_{i - 1}', y)\\
+ \Delta_+ &= \Delta(x_{i + 1}, x_{i + 1}', z).
+ \end{align*}
+ Both $\Delta_-$ and $\Delta_+$ are disjoint from $B(x_i, 3 \delta)$. Indeed, if $w \in \Delta_-$ with $d(x_i, w) \leq 3\delta$, then by $\delta$-slimness of $\Delta_-$, we know $d(w, x_{i - 1}) \leq 3 \delta$, and so $d(x_i, x_{i - 1}) \leq 6 \delta$, which is not possible since $d(x_i, x_{i - 1}) = k' > 6\delta$.
+
+ Therefore, since the rectangle with vertices $y, z, x_{i + 1}', x_{i - 1}'$ is $2\delta$-slim, and $x_i$ is more than $2\delta$ away from the sides $[y, x_{i - 1}']$ and $[z, x_{i + 1}']$ (which lie in $\Delta_-$ and $\Delta_+$ respectively), there must be some $x_i'' \in [x_{i - 1}', x_{i + 1}']$ with $d(x_i, x_i'') \leq 2 \delta$.
+
+ Now consider $\Delta = \Delta(x_i, x_i', x_i'')$. We know $x_i x_i'$ and $x_i x_i''$ are both of length $\leq 2 \delta$. Note that every point in this triangle is within $3\delta$ of $x_i$ by $\delta$-slimness. So $\Delta \subseteq B(x_i, 3 \delta)$, and this implies $\Delta$ is disjoint from $B(x_{i - 1}, 3 \delta)$ and $B(x_{i + 1}, 3 \delta)$ as before.
+
+ But $x_{i - 1}' \in B(x_{i - 1}, 3 \delta)$ and $x_{i + 1}' \in B(x_{i + 1}, 3\delta)$. Moreover, $\Delta$ contains the segment of $[c(a), c(b)]$ joining $x_i'$ and $x_i''$. Therefore, it must be the case that $x_i' \in [x_{i - 1}', x_{i + 1}']$.
+\end{proof}
+
+\subsection{Dehn functions of hyperbolic groups}
+We now use our new understanding of quasi-geodesics in hyperbolic spaces to try to understand the word problem in hyperbolic groups. Note that by the Schwarz--Milnor lemma, hyperbolic groups are finitely-generated and their Cayley graphs are hyperbolic.
+
+\begin{cor}
+ Let $X$ be $\delta$-hyperbolic. Then there exists a constant $C = C(\delta)$ such that any non-constant loop in $X$ is \emph{not} $C$-locally geodesic.
+\end{cor}
+
+\begin{proof}
+ Take $k = 8 \delta + 1$, and let
+ \[
+ C = \max \{\lambda \varepsilon, k\}
+ \]
+ where $\lambda, \varepsilon$ are as in the theorem.
+
+ Let $\gamma: [a, b] \to X$ be a closed loop. If $\gamma$ were $C$-locally geodesic, then it would be $(\lambda, \varepsilon)$-quasigeodesic. So
+ \[
+ 0 = d(\gamma(a), \gamma(b)) \geq \frac{|b - a|}{\lambda} - \varepsilon.
+ \]
+ So
+ \[
+ |b - a| \leq \lambda \varepsilon \leq C.
+ \]
+ But $\gamma$ is a $C$-local geodesic defined on an interval of length at most $C$, so $\gamma$ is in fact a geodesic. Hence $|b - a| = d(\gamma(a), \gamma(b)) = 0$, and $\gamma$ is a constant loop.
+\end{proof}
+
+\begin{defi}[Dehn presentation]\index{Dehn presentation}
+ A finite presentation $\bra S \mid R \ket$ for a group $\Gamma$ is called \emph{Dehn} if for every null-homotopic reduced word $w \in S^*$, there is (a cyclic conjugate of) a relator $r \in R$ such that $r = u^{-1}v$ with $\ell_S(u) < \ell_S(v)$, and $w = w_1 v w_2$ (without cancellation).
+\end{defi}
+The point about this is that if we have a null-homotopic word, then there is some part in the word that can be replaced with a \emph{shorter} word using a single relator.
+
+If a presentation is Dehn, then the naive way of solving the word problem just works. In fact,
+
+\begin{lemma}
+ If $\Gamma$ has a Dehn presentation, then $\delta_\Gamma$ is linear.
+\end{lemma}
+
+\begin{proof}
+ Exercise.
+\end{proof}
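+Here is a sketch of the exercise (the details are left to the reader): if $w$ is a non-empty null-homotopic reduced word, the Dehn property produces a factorization $w = w_1 v w_2$ and a relator $u^{-1} v$ with $\ell_S(u) < \ell_S(v)$. Applying this relator once (and freely reducing) replaces $w$ by a strictly shorter null-homotopic word. Iterating, we reach the empty word after at most $\ell_S(w)$ applications of relators, so the area of $w$ is at most $\ell_S(w)$, and hence $\delta_\Gamma(n) \leq n$.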
+
+\begin{thm}
+ Every hyperbolic group $\Gamma$ is finitely-presented and admits a Dehn presentation.
+
+ In particular, the Dehn function is linear, and the word problem is solvable.
+\end{thm}
+So while an arbitrary group can be very difficult, the generic group is easy.
+
+\begin{proof}
+ Let $S$ be a finite generating set for $\Gamma$, and $\delta$ a constant of hyperbolicity for $\Cay_S(\Gamma)$.
+
+ Let $C = C(\delta)$ be such that every non-trivial loop is \emph{not} $C$-locally geodesic.
+
+ Take $\{u_i\}$ to be the set of all words in $F(S)$ representing geodesics $[1, u_i]$ in $\Cay_S(\Gamma)$ with $|u_i| < C$. Let $\{v_j\} \subseteq F(S)$ be the set of all \emph{non}-geodesic words of length $\leq C$ in $\Cay_S(\Gamma)$. Let $R = \{u_i^{-1} v_j \in F(S) : u_i =_\Gamma v_j\}$.
+
+ We now just observe that this gives the desired Dehn presentation! Indeed, any non-trivial loop must contain one of the $v_j$'s, since no non-constant loop in $\Cay_S(\Gamma)$ is $C$-locally geodesic, and we can replace it with the corresponding $u_i$!
+\end{proof}
+This argument was developed by Dehn to prove results about fundamental groups of surfaces in 1912. In the 1980s, Gromov noticed that Dehn's argument works for an arbitrary hyperbolic group!
+
+One can keep on proving new things about hyperbolic groups if we wished to, but there are better uses of our time. So for the remainder of the chapter, we shall just write down random facts about hyperbolic groups without proof.
+
+So hyperbolic groups have linear Dehn functions. In fact,
+\begin{thm}[Gromov, Bowditch, etc] % Papadopoulos (?)
+ If $\Gamma$ is a finitely-presented group and $\delta_\Gamma \precneq n^2$, then $\Gamma$ is hyperbolic.
+\end{thm}
+Thus, there is a ``gap'' in the isoperimetric spectrum. We can collect our results as
+\begin{thm}
+ If $\Gamma$ is finitely-generated, then the following are equivalent:
+ \begin{enumerate}
+ \item $\Gamma$ is hyperbolic.
+ \item $\Gamma$ has a Dehn presentation.
+ \item $\Gamma$ satisfies a linear isoperimetric inequality.
+ \item $\Gamma$ has a subquadratic isoperimetric inequality.
+ \end{enumerate}
+\end{thm}
+
+In general, we can ask the question --- for which $\alpha \in \R$ is $n^\alpha$ (up to $\simeq$) the Dehn function of a finitely-presented group? As we saw, $\alpha$ cannot lie in $(1, 2)$, and it is a theorem that the set of such $\alpha$ is dense in $[2, \infty)$. In fact, it includes all rationals in the interval.
+
+ \subsubsection*{Subgroup structure}
+When considering subgroups of a hyperbolic group $\Gamma$, it is natural to consider ``geometrically nice'' subgroups, i.e.\ finitely-generated subgroups $H \subseteq \Gamma$ such that the inclusion is a quasi-isometric embedding. Such subgroups are called \term{quasi-convex}, and they are always hyperbolic.
+
+What sort of such subgroups can we find? There are zillions of free quasi-convex subgroups!
+\begin{lemma}[Ping-pong lemma]
+ Let $\Gamma$ be hyperbolic and torsion-free (for convenience of statement). If $\gamma_1, \gamma_2 \in \Gamma$ do not commute, then for large enough $n$, the subgroup $\bra \gamma_1^n, \gamma_2^n\ket$ is isomorphic to $F_2$ and is quasi-convex.
+\end{lemma}
+How about non-free subgroups? Can we find surface groups? Of course, we cannot always guarantee the existence of such surface groups, since all subgroups of free groups are free.
+
+\begin{question}
+ Let $\Gamma$ be hyperbolic and torsion-free, and not itself free. Must $\Gamma$ contain a quasi-convex subgroup isomorphic to $\pi_1 \Sigma$ for some closed hyperbolic surface $\Sigma$?
+\end{question}
+We have no idea if it is true or false.
+
+Another open problem we can ask is the following:
+\begin{question}
+ If $\Gamma$ is hyperbolic and not the trivial group, must $\Gamma$ have a proper subgroup of finite index?
+\end{question}
+
+\begin{prop}
+ Let $\Gamma$ be hyperbolic, and $\gamma \in \Gamma$. Then the centralizer $C(\gamma)$ is quasiconvex. In particular, it is hyperbolic.
+\end{prop}
+
+\begin{cor}
+ $\Gamma$ does not contain a copy of $\Z^2$.
+\end{cor}
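+Here is a sketch of the corollary, using the proposition together with a standard fact not proven in these notes: in a hyperbolic group, the centralizer $C(\gamma)$ of an infinite-order element $\gamma$ contains $\bra \gamma \ket$ as a finite-index subgroup, so $C(\gamma)$ is virtually $\Z$. Now if $\Z^2 \cong \bra \gamma_1, \gamma_2 \ket \leq \Gamma$, then $\Z^2 \leq C(\gamma_1)$. But every finite-index subgroup of $\Z^2$ is isomorphic to $\Z^2$, so $\Z^2$ is not virtually cyclic, while every subgroup of a virtually cyclic group is virtually cyclic. Contradiction.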
+
+\subsubsection*{The boundary}
+Recall that if $\Sigma$ is a compact surface of genus $g \geq 2$, then $\pi_1 \Sigma \qi \H^2$. If we try to draw the hyperbolic plane in the disc model, then we would probably draw a circle and fill it in. One might think the drawing of the circle is just an artifact of the choice of the model, but it's not! It's genuinely there.
+
+\begin{defi}[Geodesic ray]\index{geodesic ray}
+ Let $X$ be a $\delta$-hyperbolic geodesic metric space. A \emph{geodesic ray} is an isometric embedding $r: [0, \infty) \to X$.
+\end{defi}
+
+We say $r_1 \sim r_2$ if there exists $M$ such that $d(r_1(t), r_2(t)) \leq M$ for all $t$. In the disc model of $\H^2$, this is the scenario where two geodesic rays get very close together as $t \to \infty$. For example, in the upper half plane model of $\H^2$, all vertical lines are equivalent in this sense.
+
+We define $\partial_\infty X = \{\text{geodesic rays}\}/\sim$\index{$\partial_\infty X$}. This can be topologized in a sensible way, and in this case $X \cup \partial_\infty X$ is compact. By the Morse lemma, for hyperbolic spaces, this is quasi-isometry invariant.
+
+\begin{eg}
+ If $\Gamma = \pi_1 \Sigma$, with $\Sigma$ closed hyperbolic surface, then $\partial_\infty \Gamma = S^1$ and the union $X \cup \partial_\infty X$ gives us the closed unit disc.
+\end{eg}
+
+\begin{thm}[Casson--Jungreis, Gabai]
+ If $\Gamma$ is hyperbolic and $\partial_\infty \Gamma \cong S^1$, then $\Gamma$ is virtually $\pi_1 \Sigma$ for some closed hyperbolic $\Sigma$.
+\end{thm}
+
+\begin{eg}
+ If $\Gamma$ is free, then $\partial_\infty \Gamma$ is the Cantor set.
+\end{eg}
+
+\begin{conjecture}[Cannon]
+ If $\Gamma$ is hyperbolic and $\partial_\infty \Gamma \cong S^2$, then $\Gamma$ is virtually $\pi_1 M$ for $M$ a closed hyperbolic $3$-manifold.
+\end{conjecture}
+
+\section{CAT(0) spaces and groups}
+From now on, instead of thinking of geodesics as being isometric embeddings, we reparametrize them linearly so that the domain is always $[0, 1]$.
+
+\subsection{Some basic motivations}
+Given a discrete group $\Gamma$, there are two basic problems you might want to solve.
+\begin{question}
+ Can we solve the word problem in $\Gamma$?
+\end{question}
+
+\begin{question}
+ Can we compute the (co)homology of $\Gamma$?
+\end{question}
+
+\begin{defi}[Group (co)homology]\index{group homology}\index{group cohomology}
+ The \emph{(co)homology} of a group $\Gamma$ is the (co)homology of $K(\Gamma, 1)$.
+\end{defi}
+We can define this in terms of the group itself, but this would require some extra homological algebra. A very closely related question is
+\begin{question}
+ Can we find an explicit $X$ such that $\Gamma = \pi_1 X$ and $\tilde{X}$ is contractible?
+\end{question}
+We know that these problems are not solvable in general:
+\begin{thm}[Novikov--Boone theorem]\index{Novikov--Boone theorem}
+ There exists a finitely-presented group with an unsolvable word problem.
+\end{thm}
+
+\begin{thm}[Gordon] % check
+ There exists a sequence of finitely generated groups $\Gamma_n$ such that $H_2(\Gamma_n)$ is not computable.
+\end{thm}
+
+As before, we might expect that we can solve these problems if our groups come with some nice geometry. In the previous chapter, we talked about hyperbolic groups, which are \emph{negatively} curved. In this section, we shall work with slightly more general spaces, namely those that are \emph{non-positively} curved.
+
+Let $M$ be a compact manifold of non-positive sectional curvature. It is a classical fact that such a manifold satisfies a quadratic isoperimetric inequality. This is not too surprising, since the ``worst case'' is constant zero curvature, in which case $\tilde{M} \cong \R^n$.
+
+If we know this, then by the Filling theorem, we know the Dehn function of the fundamental group is at worst quadratic, and in particular it is computable. This solves the first question.
+
+What about the second question?
+\begin{thm}[Cartan--Hadamard theorem]\index{Cartan--Hadamard theorem}
+ Let $M$ be a non-positively curved compact manifold. Then $\tilde{M}$ is diffeomorphic to $\R^n$. In particular, it is contractible. Thus, $M = K(\pi_1 M, 1)$.
+\end{thm}
+For example, this applies to the torus, which is not hyperbolic.
+
+So non-positively curved manifolds are good. However, there aren't enough of them. Why? In general, the homology of a group can be very complicated, and in particular can be infinite dimensional. However, manifolds always have finite-dimensional homology groups. Moreover, they satisfy Poincar\'e duality.
+
+\begin{thm}[Poincar\'e duality]\index{Poincar\'e duality}
+ Let $M$ be an orientable compact $n$-manifold. Then
+ \[
+ H_k(M; \R) \cong H_{n - k}(M; \R).
+ \]
+\end{thm}
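+As a quick sanity check (a standard example, not from the lectures), take $M = T^2$, the $2$-torus. Then
+\[
+ H_0(T^2; \R) \cong \R,\quad H_1(T^2; \R) \cong \R^2,\quad H_2(T^2; \R) \cong \R,
+\]
+and the Betti numbers $1, 2, 1$ are indeed symmetric under $k \mapsto n - k$, as Poincar\'e duality requires.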
+This is a very big constraint, and comes very close to characterizing manifolds. In general, it is difficult to write down a group whose homology satisfies Poincar\'e duality, unless we started off with a manifold whose universal cover is contractible, and then took its fundamental group.
+
+Thus, we cannot hope to realize lots of groups as the $\pi_1$ of a non-positively curved manifold. The idea of CAT(0) spaces is to mimic the properties of non-positively curved manifolds in a much more general setting.
+
+\subsection{CAT(\tph{$\kappa$}{kappa}{κ}) spaces}
+Let $\kappa = -1, 0$ or $1$, and let $M_\kappa$ be the unique, simply connected, complete $2$-dimensional Riemannian manifold of curvature $\kappa$. Thus,
+\[
+ M_1 = S^2,\quad M_0 = \R^2,\quad M_{-1} = \H^2.
+\]
+We can also talk about $M_\kappa$ for other $\kappa$, but we can just obtain those by scaling $M_{\pm 1}$.
+
+Instead of working with Riemannian manifolds, we shall just think of these as \emph{complete geodesic metric spaces}. We shall now try to write down a ``CAT($\kappa$)'' condition, that says the curvature is bounded by $\kappa$ in some space.
+
+\begin{defi}[Triangle]\index{triangle}
+ A \emph{triangle} with vertices $\{p, q, r\} \subseteq X$ is a choice
+ \[
+ \Delta(p, q, r) = [p, q] \cup [q, r] \cup [r, p].
+ \]
+\end{defi}
+If we want to talk about triangles on a sphere, then we have to be a bit more careful since the sides cannot be too long. Let $D_\kappa = \diam M_\kappa$, i.e.\ $D_\kappa = \infty$ for $\kappa = 0, -1$ and $D_\kappa = \pi$ for $\kappa = +1$.
+
+Suppose $\Delta = \Delta(x_1, x_2, x_3)$ is a triangle of perimeter $\leq 2 D_\kappa$ in some complete geodesic metric space $(X, d)$. Then there is, up to isometry, a unique \term{comparison triangle} $\bar{\Delta} = \Delta(\bar{x}_1, \bar{x}_2, \bar{x}_3) \subseteq M_\kappa$ such that
+\[
+ d_{M_\kappa}(\bar{x}_i, \bar{x}_j) = d(x_i, x_j).
+\]
+This is just the fact we know from high school that a triangle is determined up to congruence by the lengths of its sides. The natural map $\bar{\Delta} \to \Delta$ is called the \term{comparison map}.
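The construction of $\bar{\Delta}$ is completely explicit: place $\bar{x}_1$ at the origin, $\bar{x}_2$ on the positive $x$-axis, and recover $\bar{x}_3$ from the law of cosines. A minimal numerical sketch in Python (the function names are ours, purely for illustration):

```python
import math

def comparison_triangle(d12, d13, d23):
    """Place a Euclidean comparison triangle for side lengths d12, d13, d23
    (assumed to satisfy the triangle inequality): x1 at the origin,
    x2 on the positive x-axis, x3 in the upper half-plane."""
    x1 = (0.0, 0.0)
    x2 = (d12, 0.0)
    # angle at x1, by the law of cosines
    cos_a = (d12**2 + d13**2 - d23**2) / (2 * d12 * d13)
    x3 = (d13 * cos_a, d13 * math.sqrt(1 - cos_a**2))
    return x1, x2, x3

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

x1, x2, x3 = comparison_triangle(3, 4, 5)
# the placed triangle realizes the prescribed side lengths
assert abs(dist(x1, x2) - 3) < 1e-9
assert abs(dist(x1, x3) - 4) < 1e-9
assert abs(dist(x2, x3) - 5) < 1e-9
```

Uniqueness up to isometry corresponds to the fact that the placement above is forced once we normalize the position of $\bar{x}_1, \bar{x}_2$ and the side of $\bar{x}_3$.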
+
+Similarly, given a point $p \in [x_i, x_j]$, there is a \term{comparison point} $\bar{p} \in [\bar{x}_i, \bar{x}_j]$. Note that $p$ might be on multiple edges, so it could have multiple comparison points. However, the comparison point is well-defined as long as we specify the edge as well.
+
+\begin{defi}[CAT($\kappa$) space]\index{CAT($\kappa$) space} % named after three people
+ We say a space $(X, d)$ is CAT($\kappa$) if for any geodesic triangle $\Delta \subseteq X$ of perimeter $\leq 2 D_\kappa$, any $p, q \in \Delta$ and any comparison points $\bar{p}, \bar{q} \in \bar{\Delta}$,
+ \[
+ d(p, q) \leq d_{M_\kappa} (\bar{p}, \bar{q}).
+ \]
+ If $X$ is locally CAT($\kappa$), then $X$ is said to be of \term{curvature} at most $\kappa$.
+
+ In particular, a locally CAT(0) space is called a \term{non-positively curved space}.
+\end{defi}
+
+We are mostly interested in CAT(0) spaces. At some point, we will talk about CAT(1) spaces.
+
+\begin{eg}
+ The following are CAT(0):
+ \begin{enumerate}
+ \item Any Hilbert space.
+ \item Any simply-connected manifold of non-positive sectional curvature.
+ \item Symmetric spaces.
+ \item Any tree.
+ \item If $X, Y$ are CAT(0), then $X \times Y$ with the $\ell_2$ metric is CAT(0).
+ \item In particular, a product of trees is CAT(0).
+ \end{enumerate}
+\end{eg}
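For the tree example, the CAT(0) inequality can be verified by hand in the simplest interesting case: the tripod obtained by gluing three unit segments at a centre $o$, with endpoints $x_1, x_2, x_3$. The sketch below (with our own naming and parameterization) compares tripod distances against distances between comparison points in the Euclidean comparison triangle, which here is equilateral with side $2$:

```python
import math

def tripod_dist(s, t):
    """Distance in the tripod (three unit edges glued at a centre o) between
    p on the side [x1, x2] at arc length s from x1 and q on the side
    [x1, x3] at arc length t from x1 (both in [0, 2])."""
    if s <= 1 and t <= 1:            # both on the leg towards x1
        return abs(s - t)
    return abs(1 - s) + abs(1 - t)   # otherwise the path passes through o

def comparison_dist(s, t):
    """The same pair of comparison points in the equilateral Euclidean
    triangle of side 2, where the angle at x1-bar is 60 degrees."""
    return math.sqrt(s * s + t * t - 2 * s * t * math.cos(math.pi / 3))

# the CAT(0) inequality d(p, q) <= d(p-bar, q-bar) on a grid of points
for i in range(21):
    for j in range(21):
        s, t = 0.1 * i, 0.1 * j
        assert tripod_dist(s, t) <= comparison_dist(s, t) + 1e-12
```

The inequality is strict whenever the two geodesics branch at the centre, reflecting the fact that trees are ``thinner'' than Euclidean space.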
+
+\begin{lemma}[Convexity of the metric]
+ Let $X$ be a CAT(0) space, and $\gamma, \delta: [0, 1] \to X$ be geodesics (reparameterized). Then for all $t \in [0, 1]$, we have
+ \[
+ d(\gamma(t), \delta(t)) \leq (1 - t) d(\gamma(0), \delta(0)) + t d(\gamma(1), \delta(1)).
+ \]
+\end{lemma}
+Note that this convexity inequality already holds in the Euclidean plane. So it makes sense that CAT(0) spaces, whose triangles are at most as fat as Euclidean ones, satisfy it as well.
+\begin{proof}
+ Consider the rectangle
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) rectangle (3, 2);
+ \draw (0, 0) -- (3, 2) node [pos=0.5, anchor=north west] {$\alpha$};
+ \node [below] at (0, 0) {$\gamma(0)$};
+ \node [below] at (3, 0) {$\gamma(1)$};
+ \node [above] at (0, 2) {$\delta(0)$};
+ \node [above] at (3, 2) {$\delta(1)$};
+
+ \node [below] at (1.5, 0) {$\gamma$};
+ \node [above] at (1.5, 2) {$\delta$};
+ \end{tikzpicture}
+ \end{center}
+ Let $\alpha: [0, 1] \to X$ be a geodesic from $\gamma(0)$ to $\delta(1)$. Applying the CAT(0) estimate to $\Delta(\gamma(0), \gamma(1), \delta(1))$, we get
+ \[
+ d(\gamma(t), \alpha(t)) \leq d(\overline{\gamma(t)}, \overline{\alpha(t)}) = t d(\overline{\gamma(1)}, \overline{\alpha(1)}) = t d(\gamma(1), \alpha(1)) = t d(\gamma(1), \delta(1)),
+ \]
+ using what we know in plane Euclidean geometry. The same argument shows that
+ \[
+ d(\delta(t), \alpha(t)) \leq (1 - t) d(\delta(0), \gamma(0)).
+ \]
+ So we know that
+ \[
+ d(\gamma(t), \delta(t)) \leq d(\gamma(t), \alpha(t)) + d(\alpha(t), \delta(t)) \leq (1 - t) d(\gamma(0), \delta(0)) + t d(\gamma(1), \delta(1)).\qedhere
+ \]
+\end{proof}
+
+\begin{lemma}
+ If $X$ is CAT(0), then $X$ is \emph{uniquely geodesic}, i.e.\ each pair of points is joined by a unique geodesic.
+\end{lemma}
+
+\begin{proof}
+ Suppose $\gamma, \delta: [0, 1] \to X$ are geodesics with $\gamma(0) = \delta(0) = x_0$ and $\gamma(1) = \delta(1) = x_1$. Then by the convexity of the metric, we have $d(\gamma(t), \delta(t)) \leq 0$. So $\gamma(t) = \delta(t)$ for all $t$.
+\end{proof}
+This is not surprising, because this is true in the Euclidean plane and the hyperbolic plane, but not on the sphere. Of course, one can also prove this result directly.
+
+\begin{lemma}
+ Let $X$ be a proper, uniquely geodesic metric space. Then geodesics in $X$ vary continuously with their end points in the compact-open topology (which is the same as the uniform convergence topology).
+\end{lemma}
+This is actually true without the word ``proper'', but the proof is harder.
+
+\begin{proof}
+ This is an easy application of the Arzel\`a--Ascoli theorem.
+\end{proof}
+
+\begin{prop}
+ Any proper CAT(0) space $X$ is contractible.
+\end{prop}
+
+\begin{proof}
+ Pick a point $x_0 \in X$. Then the map $X \to \mathrm{Maps}([0, 1], X)$ sending $x$ to the unique geodesic from $x_0$ to $x$ is continuous. The adjoint map $X \times [0, 1] \to X$ is then a homotopy from the constant map at $x_0$ to the identity map.
+\end{proof}
+
+\begin{defi}[CAT(0) group]\index{CAT(0) group}
+ A group is CAT(0) if it acts properly discontinuously and cocompactly by isometries on a proper CAT(0) space.
+\end{defi}
+Usually, for us, the action will also be free. This is the case, for example, when a fundamental group acts on the universal cover.
+
+Note that a space being CAT(0) is a very fine-grained property, so this is not the same as saying the Cayley graph of the group is CAT(0).
+
+\begin{eg}
+ $\Z^n$ for any $n$ is CAT(0), since it acts on $\R^n$.
+\end{eg}
+
+\begin{eg}
+ More generally, $\pi_1 M$ for any closed manifold $M$ of non-positive curvature is CAT(0).
+\end{eg}
+
+\begin{eg}
+ Uniform lattices in semi-simple Lie groups. Examples include $\SL_n \mathcal{O}_K$ for certain number fields $K$.
+\end{eg}
+\begin{eg}
+ Any free group, or direct product of free groups is CAT(0).
+\end{eg}
+
+We remark, but will not prove
+\begin{prop}
+ Any CAT(0) group $\Gamma$ satisfies a quadratic isoperimetric inequality, that is, $\delta_\Gamma(n) \simeq n$ or $\delta_\Gamma(n) \simeq n^2$.
+\end{prop}
+
+Note that if $\Gamma$ is in fact CAT(-1), then $\Gamma$ is hyperbolic, which is not terribly difficult to see, since $\H^2$ is hyperbolic. But if a group is hyperbolic, is it necessarily CAT(-1)? Or even just CAT(0)? This is an open question. The difficulty in answering this question is that hyperbolicity is a ``coarse condition'', but being CAT(0) is a fine condition. For example, Cayley graphs are not CAT(0) spaces unless they are trees.
+
+\subsection{Length metrics}
+In differential geometry, if we have a covering space $\tilde{X} \to X$, and we have a Riemannian metric on $X$, then we can lift it to a Riemannian metric on $\tilde{X}$. This is possible since a Riemannian metric is a purely local notion, and hence we can lift it locally. Can we do the same for metric spaces?
+
+Recall that if $\gamma: [a, b] \to X$ is a path, then
+\[
+ \ell(\gamma) = \sup_{a = t_0 < t_1 < \cdots < t_n = b} \sum_{i = 1}^n d(\gamma(t_{i - 1}), \gamma(t_i)).
+\]
+We say $\gamma$ is \emph{rectifiable} if $\ell(\gamma) < \infty$.
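As a sanity check, the polygonal sums in this definition can be computed directly; for a smooth curve they increase under refinement and converge to the usual arc length. A small Python sketch (the function names are ours), using a quarter of the unit circle, whose length is $\pi/2$:

```python
import math

def polygonal_length(gamma, a, b, n):
    """Lower bound for the length of gamma: the sum of chord lengths over
    the uniform partition a = t_0 < t_1 < ... < t_n = b."""
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    pts = [gamma(t) for t in ts]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# quarter of the unit circle, parameterized by angle
quarter = lambda t: (math.cos(t), math.sin(t))
approx = polygonal_length(quarter, 0.0, math.pi / 2, 1000)
assert approx <= math.pi / 2          # chord sums never exceed the length
assert abs(approx - math.pi / 2) < 1e-5
```

Taking the supremum over all partitions, rather than a limit over uniform ones, is what makes the definition work for arbitrary paths.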
+
+\begin{defi}[Length space]\index{length space}
+ A metric space $X$ is called a \emph{length space} if for all $x, y \in X$, we have
+ \[
+ d(x, y) = \inf_{\gamma: x \to y} \ell(\gamma).
+ \]
+\end{defi}
+Given any metric space $(X, d)$, we can construct a \term{length pseudometric} $\hat{d}: X \times X \to [0, \infty]$ given by
+\[
+ \hat{d}(x, y) = \inf_{\gamma: x \to y} \ell(\gamma).
+\]
+Now given a covering space $p: \tilde{X} \to X$ and $(X, d)$ a metric space, we can define a length pseudometric on $\tilde{X}$ by, for any path $\tilde{\gamma}: [a, b] \to \tilde{X}$,
+\[
+ \ell(\tilde{\gamma}) = \ell(p \circ \tilde{\gamma}).
+\]
+This induces a length pseudometric on $\tilde{X}$.
+
+\begin{ex}
+ If $X$ is a length space, then so is $\tilde{X}$.
+\end{ex}
+
+Note that if $A, B$ are length spaces, and $X = A \cup B$ (such that the metrics agree on the intersection), then $X$ has a natural induced length metric. Recall we previously stated the Hopf--Rinow theorem in the context of differential geometry. In fact, it is actually just a statement about length spaces.
+
+\begin{thm}[Hopf--Rinow theorem]\index{Hopf--Rinow theorem}
+ If a length space $X$ is complete and locally compact, then $X$ is proper and geodesic.
+\end{thm}
+This is another application of the Arzel\`a--Ascoli theorem.
+
+\subsection{Alexandrov's lemma}
+Alexandrov's lemma is a lemma that enables us to glue CAT(0) spaces together to obtain new examples.
+
+\begin{lemma}[Alexandrov's lemma]\index{Alexandrov's lemma}
+ Suppose the triangles $\Delta_1 = \Delta(x, y, z_1)$ and $\Delta_2 = \Delta(x, y, z_2)$ in a metric space satisfy the CAT(0) condition, and $y \in [z_1, z_2]$.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 1) node [right] {$z_1$} -- (2, -1) node [right] {$z_2$} -- (0, 0) node [left] {$x$};
+ \draw (0, 0) -- (2, 0) node [right] {$y$};
+ \end{tikzpicture}
+ \end{center}
+ Then $\Delta = \Delta(x, z_1, z_2)$ also satisfies the CAT(0) condition.
+\end{lemma}
+This is the basic result we need if we want to prove ``gluing theorems'' for CAT(0) spaces.
+
+\begin{proof}
+ Consider $\bar{\Delta}_1$ and $\bar{\Delta}_2$, which together form a Euclidean quadrilateral $\bar{Q}$ with vertices $\bar{x}, \bar{z}_1, \bar{z}_2, \bar{y}$. We claim that the interior angle at $\bar{y}$ is $\geq 180^\circ$. Suppose not, so that the picture looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [left] {$\bar{x}$} -- (3, 0) node [right] {$\bar{y}$};
+ \draw (0, 0) -- (1.5, 1) node [above] {$\bar{z}_1$} -- (3, 0);
+ \draw (0, 0) -- (1.5, -1) node [below] {$\bar{z}_2$} -- (3, 0);
+ \end{tikzpicture}
+ \end{center}
+ Then there exist $\bar{p}_i \in [\bar{y}, \bar{z}_i]$ such that $[\bar{p}_1, \bar{p}_2] \cap [\bar{x}, \bar{y}] = \{\bar{q}\}$ and $\bar{q} \not= \bar{y}$. Now since $y \in [z_1, z_2]$, the points $p_1, y, p_2$ lie in order along a geodesic, so
+ \begin{align*}
+ d(p_1, p_2) &= d(p_1, y) + d(y, p_2) \\
+ &= d(\bar{p}_1, \bar{y}) + d(\bar{y}, \bar{p}_2)\\
+ &> d(\bar{p}_1, \bar{q}) + d(\bar{q}, \bar{p}_2)\\
+ &\geq d(p_1, q) + d(q, p_2)\\
+ &\geq d(p_1, p_2),
+ \end{align*}
+ which is a contradiction.
+
+ Thus, we know the right picture looks like this:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) node [left] {$\bar{x}$} -- (1.5, 0) node [right] {$\bar{y}$};
+ \draw (0, 0) -- (2, 1.5) node [above] {$\bar{z}_1$} -- (1.5, 0);
+ \draw (0, 0) -- (2, -1.5) node [below] {$\bar{z}_2$} -- (1.5, 0);
+ \end{tikzpicture}
+ \end{center}
+ To obtain $\bar{\Delta}$, we have to ``push'' $\bar{y}$ out so that the edge $\bar{z}_1 \bar{z}_2$ is straight, while keeping the lengths fixed. There is a natural map $\pi: \bar{\Delta} \to \bar{Q}$, and the lemma follows by checking that for any $a, b \in \bar{\Delta}$, we have
+ \[
+ d(\pi(a), \pi(b)) \leq d(a, b).
+ \]
+ This is an easy case analysis (or is obvious).
+\end{proof}
+
+A sample application is the following:
+\begin{prop}
+ Let $X_1, X_2$ be locally compact, complete CAT(0) spaces, and suppose $Y$ is isometric to a closed subspace of each of $X_1$ and $X_2$. Then $X_1 \cup_Y X_2$, equipped with the induced length metric, is CAT(0).
+\end{prop}
+\subsection{Cartan--Hadamard theorem}
+
+\begin{thm}[Cartan--Hadamard theorem]\index{Cartan--Hadamard theorem}
+ If $X$ is a complete, connected length space of non-positive curvature, then the universal cover $\tilde{X}$, equipped with the induced length metric, is CAT(0).
+\end{thm}
+This was proved by Cartan and Hadamard in the differential geometric setting.
+
+\begin{cor}
+ A (torsion free) group $\Gamma$ is CAT(0) iff it is the $\pi_1$ of a complete, connected space $X$ of non-positive curvature.
+\end{cor}
+
+We'll indicate some steps in the proof of the theorem.
+\begin{lemma}
+ If $X$ is proper, non-positively curved and uniquely geodesic, then $X$ is CAT(0).
+\end{lemma}
+
+\begin{proof}[Proof idea]
+ The idea is that given a triangle, we cut it up into a lot of small triangles, and since $X$ is locally CAT(0), we can use Alexandrov's lemma to conclude that the large triangle is CAT(0).
+
+ Recall that geodesics vary continuously with their endpoints. Consider a triangle $\Delta = \Delta (x, y, z) \subseteq \bar{B} \subseteq X$, where $\bar{B}$ is a compact ball. By compactness, there is an $\varepsilon$ such that for every $x \in \bar{B}$, the ball $B_x(\varepsilon)$ is CAT(0).
+
+ We let $\alpha: [0, 1] \to X$ be the side $[y, z]$, and let $\beta_t$ be the geodesic from $x$ to $\alpha(t)$. Using continuity, we can choose $0 = t_0 < t_1 < \cdots < t_N = 1$ such that
+ \[
+ d(\beta_{t_i}(s), \beta_{t_{i + 1}}(s)) < \varepsilon
+ \]
+ for all $s \in [0, 1]$.
+
+ Now divide $\Delta$ up into a ``patchwork'' of triangles, each contained in an $\varepsilon$ ball, so each satisfies the CAT(0) condition, and apply induction and Alexandrov's lemma to conclude.
+\end{proof}
+
+Now to prove the Cartan--Hadamard theorem, we only have to show that the universal cover is uniquely geodesic. Here we must use the simply-connectedness condition.
+\begin{thm}
+ Let $X$ be a proper length space of non-positive curvature, and $p, q \in X$. Then each homotopy class of paths from $p$ to $q$ contains a \emph{unique} (local) geodesic representative.
+\end{thm}
+
+\subsection{Gromov's link condition}
+Gromov's link condition is a criterion that makes it very easy to write down interesting examples of non-positively-curved metric spaces.
+
+\begin{defi}[Euclidean cell complex]\index{Euclidean cell complex}
+ A locally finite cell complex $X$ is \emph{Euclidean} if every cell is isometric to a convex polyhedron in Euclidean space and the attaching maps are isometries from the lower-dimensional cell to a face of the new cell.
+\end{defi}
+Such a complex $X$ has a natural length metric which is proper and geodesic by Hopf--Rinow. What we'd like to do is to come up with a condition that ensures $X$ is CAT(0).
+
+\begin{eg}
+ The usual diagram for a torus gives an example of a Euclidean complex.
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.55, mred] (0, 0) -- (3, 0);
+ \draw [->-=0.55, mred] (0, 3) -- (3, 3);
+
+ \draw [->-=0.55, mblue] (0, 0) -- (0, 3);
+ \draw [->-=0.55, mblue] (3, 0) -- (3, 3);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+\begin{eg}
+ We can construct a sphere this way, just by taking a cube!
+\end{eg}
+We know that $T^2$ is non-positively curved (since it is flat), but the sphere built from the cube is not: if it were, then by Cartan--Hadamard it would be CAT(0), hence contractible, which it is not.
+
+\begin{defi}[Link]\index{link}
+ Let $X$ be a Euclidean complex, and let $v$ be a vertex of $X$, and let $0 < \varepsilon \ll \text{shortest $1$-cell}$. Then the \emph{link} of $v$ is
+ \[
+ Lk(v) = S_v(\varepsilon) = \{x \in X: d(x, v) = \varepsilon\}.
+ \]
+\end{defi}
+Note that $Lk(v)$ is a cell complex: the intersection of $Lk(v)$ with an $n$-cell of $X$ is a cell of dimension $n - 1$.
+
+\begin{eg}
+ In the torus, there is only one vertex. The link looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw [->-=0.55, mred] (0, 0) -- (3, 0);
+ \draw [->-=0.55, mred] (0, 3) -- (3, 3);
+
+ \draw [->-=0.55, mblue] (0, 0) -- (0, 3);
+ \draw [->-=0.55, mblue] (3, 0) -- (3, 3);
+
+ \draw (0, 0.3) arc(90:0:0.3);
+ \draw (3, 0.3) arc(90:180:0.3);
+ \draw (0, 2.7) arc(-90:0:0.3);
+ \draw (3, 2.7) arc(-90:-180:0.3);
+ \end{tikzpicture}
+ \end{center}
+ So the link is $S^1$.
+\end{eg}
+
+\begin{eg}
+ If we take the corner of a cube, then $Lk(v)$ is homeomorphic to $S^1$, but it is a weird one, because it is made up of three right angles, not four.
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[cm={1,0, 0.2, 0.6, (0, 0)}]
+ \draw (0, 0) rectangle (2, 2);
+ \draw [mred] (0.5, 0) arc (0:90:0.5);
+ \end{scope}
+ \begin{scope}[cm={0.2,0.6, 0, 1, (0, 0)}]
+ \draw (0, 0) rectangle (2, 2);
+ \draw [mred] (0.5, 0) arc (0:90:0.5);
+ \end{scope}
+ \draw [fill=white, fill opacity=0.7](0, 0) rectangle (2, 2);
+ \draw [mred] (0.5, 0) arc (0:90:0.5);
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+
+How can we distinguish between these two $S^1$'s? \emph{Angle} puts a metric on $Lk(v)$. We can do this for general metric spaces, but we only need it for Euclidean complexes, in which case there is not much to do.
+
+Restricted to each cell, the link is just a part of a sphere, and it has a natural spherical metric, which is a length metric. These then glue together to a length metric on $Lk(v)$. Note that this is \emph{not} the same as the induced metric.
+
+\begin{eg}
+ In the torus, the total length of the link is $2\pi$, while that of the cube is $\frac{3\pi}{2}$.
+\end{eg}
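Both computations just add up the vertex angles of the cells meeting at $v$, and each angle can be read off from the edge directions of the cell. A quick numerical check (the helper function is ours):

```python
import math

def corner_angle(u, v):
    """Angle at a vertex of a Euclidean cell between edge directions u, v."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(dot / (nu * nv))

# torus: the single vertex sees all four corners of the square, each pi/2
torus_link_length = 4 * corner_angle((1, 0), (0, 1))
assert abs(torus_link_length - 2 * math.pi) < 1e-12

# cube corner: three squares meet there, contributing three right angles
cube_link_length = 3 * corner_angle((1, 0, 0), (0, 1, 0))
assert abs(cube_link_length - 3 * math.pi / 2) < 1e-12
```

The cube corner's link is a loop of length $\frac{3\pi}{2} < 2\pi$, which is exactly what will fail Gromov's link criterion below.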
+
+The important theorem, which we are not going to prove, is this:
+\begin{thm}[Gromov's link criterion]\index{Gromov's link criterion}
+ A Euclidean complex $X$ is non-positively-curved iff for every vertex $v$ of $X$, $Lk(v)$ is CAT(1).
+\end{thm}
+Note that by definition, we only have to check the CAT(1) inequality on triangles of perimeter $< 2\pi$.
+
+\begin{ex}
+ Check these in the case of the torus and the cube.
+\end{ex}
+
+Thus, given a group, we can try to construct a space whose fundamental group is that group, put a Euclidean structure on it, and then check Gromov's link criterion.
+
+In general, Gromov's link condition might not be too useful, because we still don't know how to check if something is CAT(1)! However, it is very simple in dimension $2$.
+
+\begin{cor}
+ If $X$ is a $2$-dimensional Euclidean complex, then for all vertices $v$, $Lk(v)$ is a metric graph, and $X$ is non-positively curved iff $Lk(v)$ has no loop of length $< 2\pi$ for all $v$.
+\end{cor}
+
+\begin{eg}[Wise's example]\index{Wise's example}
+ Consider the group
+ \[
+ W = \bra a, b, s, t \mid [a, b] = 1, s^{-1}as = (ab)^2, t^{-1}bt = (ab)^2\ket.
+ \]
+ Letting $c = ab$, it is easy to see that this is the $\pi_1$ of the following Euclidean complex:
+ \begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[ xscale=2]
+ \draw [->-=0.6, morange] (0, 0.5) -- (1, 0.1) node [pos=0.5, below] {$b$};
+ \draw [-<-=0.3] (1, 0.1) -- (1, 0.9);
+ \draw [-<-=0.4, mgreen] (1, 0.9) -- (0, 0.5) node [pos=0.5, above] {$a$};
+
+ \draw [->-=0.7] (0, -0.1) -- (0, -0.9);
+ \draw [-<-=0.4, mgreen] (0, -0.9) -- (1, -0.5);
+ \draw [-<-=0.4, morange] (1, -0.5) -- (0, -0.1);% two purple vertices, one red vertex
+ \end{scope}
+
+ \draw (3, 1.5) rectangle (6, -1.5); % blue right angles
+ \draw (7, 1.5) rectangle (10, -1.5);
+
+ \node [circ] at (4.5, 1.5) {};
+ \node [circ] at (8.5, 1.5) {};
+
+ \draw [->] (3.95, 1.5) -- +(0.0001, 0) node [above] {$c$};
+ \draw [->] (5.35, 1.5) -- +(0.0001, 0);
+
+ \draw [->] (7.95, 1.5) -- +(0.0001, 0);
+ \draw [->] (9.35, 1.5) -- +(0.0001, 0);
+
+ \draw [->, mgreen] (4.7, -1.5) -- +(0.0001, 0);
+ \draw [->, morange] (8.7, -1.5) -- +(0.0001, 0);
+
+ \draw [->, red] (3, 0.2) -- +(0, 0.0001) node [left] {$s$};
+ \draw [->, red] (6, 0.2) -- +(0, 0.0001);
+ \draw [->, mblue] (7, 0.2) -- +(0, 0.0001) node [left] {$t$};
+ \draw [->, mblue] (10, 0.2) -- +(0, 0.0001);
+ \end{tikzpicture}
+ \end{center}
+ This is metrized in the obvious way, where all edges have length $2$ except the black ones of length $1$. To understand the links, we set
+ \[
+ \alpha = \sin^{-1}\frac{1}{4},\quad \beta = \cos^{-1} \frac{1}{4}.
+ \]
+ Then each triangle has angles $2\alpha, \beta, \beta$.
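These angles can be recovered from the side lengths by the law of cosines: each triangle has two sides of length $2$ and one side of length $1$, and $\cos 2\alpha = 1 - 2\sin^2 \alpha = \frac{7}{8}$. A quick numerical sanity check in Python (helper names are ours):

```python
import math

alpha = math.asin(1 / 4)
beta = math.acos(1 / 4)

def angle(opposite, b, c):
    """Angle opposite the side `opposite` in a triangle with other sides
    b and c, by the law of cosines."""
    return math.acos((b * b + c * c - opposite * opposite) / (2 * b * c))

# each triangle has two sides of length 2 and one (short) side of length 1
apex = angle(1, 2, 2)   # angle between the two long sides
base = angle(2, 2, 1)   # angle at either end of the short side

assert abs(apex - 2 * alpha) < 1e-12   # cos 2a = 1 - 2 sin^2 a = 7/8
assert abs(base - beta) < 1e-12
assert abs(alpha + beta - math.pi / 2) < 1e-12  # asin x + acos x = pi/2
```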
+
+ We show that $W$ is non-positively curved, and then use this to show that there is a homomorphism $W \to W$ that is surjective but not injective. We say $W$ is \term{non-Hopfian}. In particular, this will show that $W$ is not linear, i.e.\ it is not a matrix group.
+
+ To show that $X$ is non-positively curved, we check the link condition:
+ \begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \node [circ] at (-2.5, 0) (left) {};
+ \node [circ] at (-2, 1) (topleft) {};
+ \node [circ] at (-2, -1) (bottomleft) {};
+ \node [circ] at (2.5, 0) (right) {};
+ \node [circ] at (2, 1) (topright) {};
+ \node [circ] at (2, -1) (bottomright) {};
+
+ \node [left] at (topleft) {$a$};
+ \node [right] at (bottomright) {$a$};
+ \node [left] at (left) {$c$};
+ \node [right] at (right) {$c$};
+ \node [left] at (bottomleft) {$b$};
+ \node [right] at (topright) {$b$};
+
+ \draw [red] (topleft) -- (topright);
+ \draw [red] (bottomleft) -- (bottomright);
+ \draw [purple] (bottomleft) -- (left) -- (topleft);
+ \draw [purple] (bottomright) -- (right) -- (topright);
+
+ \draw [blue] (bottomleft) -- (topright);
+ \draw [blue] (topleft) -- (bottomright);
+ \draw [blue] (left) edge [bend left=10] (right);
+ \draw [blue] (left) edge [bend right=10] (right);
+ \draw [blue] (left) edge [bend left=20] (right);
+ \draw [blue] (left) edge [bend right=20] (right);
+
+ \node [circ] at (0.2, 0.1) {};
+ \node [circ] at (0.2, -0.1) {};
+
+ \node [circ] at (0, -0.26) {};
+ \node [circ] at (0, -0.5) {};
+ \node [below] at (0, -0.5) {$t$};
+ \node [above] at (0.2, 0.1) {$t$};
+ \node [right] at (0.2, -0.1) {$s$};
+ \node [above] at (0, -0.26) {$s$};
+
+ % label intermediate nodes in blue thing as $s$ and $t$, each occurring twice
+ \end{tikzpicture}
+ \end{center}
+ To check the link condition, we have to check that there are no loops of length $< 2\pi$ in $Lk(v)$. Note that by trigonometric identities, we know $\alpha + \beta = \frac{\pi}{2}$. So we can just observe that everything is fine.
+
+ To see that $W$ is non-Hopfian, we define
+ \begin{align*}
+ f: W &\to W\\
+ a &\mapsto a^2\\
+ b &\mapsto b^2\\
+ s &\mapsto s\\
+ t &\mapsto t
+ \end{align*}
+ We check that this is a well-defined homomorphism, and is surjective, since we observe
+ \[
+ a = s a^2 b^2 s^{-1},\quad b = ta^2 b^2 t^{-1},
+ \]
+ and so $a, b, s, t \in \im f$. The non-trivial claim is that $\ker f \not= 1$. Let
+ \[
+ g = [scs^{-1}, tct^{-1}].
+ \]
+ Note that
+ \[
+ f(g) = [f(scs^{-1}), f(tct^{-1})] = [sc^2 s^{-1}, tc^2 t^{-1}] = [a, b] = 1.
+ \]
+ So the crucial claim is that $g \not= 1$. This is where geometry comes to the rescue. For convenience, write $p = scs^{-1}, q = tct^{-1}$. Consider the following local geodesics in the two squares:
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (3, 1.5) rectangle (6, -1.5);
+ \draw (7, 1.5) rectangle (10, -1.5);
+
+ \node [circ] at (4.5, 1.5) {};
+ \node [circ] at (8.5, 1.5) {};
+
+ \draw [->] (3.95, 1.5) -- +(0.0001, 0) node [above] {$c$};
+ \draw [->] (5.35, 1.5) -- +(0.0001, 0);
+
+ \draw [->] (7.95, 1.5) -- +(0.0001, 0);
+ \draw [->] (9.35, 1.5) -- +(0.0001, 0);
+
+ \draw [->, mgreen] (4.7, -1.5) -- +(0.0001, 0);
+ \draw [->, morange] (8.7, -1.5) -- +(0.0001, 0);
+
+ \draw [->, red] (3, 0.2) -- +(0, 0.0001) node [left] {$s$};
+ \draw [->, red] (6, 0.2) -- +(0, 0.0001);
+ \draw [->, mblue] (7, 0.2) -- +(0, 0.0001) node [left] {$t$};
+ \draw [->, mblue] (10, 0.2) -- +(0, 0.0001);
+
+ \draw [thick, brown] (3, -1.5) -- (3.75, 1.5);
+ \draw [thick, brown] (5.25, 1.5) -- (6, -1.5);
+
+ \draw [thick, brown] (7, -1.5) -- (7.75, 1.5);
+ \draw [thick, brown] (9.25, 1.5) -- (10, -1.5);
+ \end{tikzpicture}
+ \end{center}
+ The left- and right-hand local geodesics represent $p$ and $q$ respectively. Then
+ \[
+ g = p \cdot q \cdot \bar{p} \cdot \bar{q}.
+ \]
+ We claim that this represents $g$ by a local geodesic. The only place that can go wrong is the points where they are joined. By the proof of the Gromov link condition, to check this, we check that the three ``turns'' at the vertex are all of angle $\geq \pi$.
+ \begin{center}
+ \begin{tikzpicture}[scale=1.5]
+ \node [circ] at (-2.5, 0) (left) {};
+ \node [circ] at (-2, 1) (topleft) {};
+ \node [circ] at (-2, -1) (bottomleft) {};
+ \node [circ] at (2.5, 0) (right) {};
+ \node [circ] at (2, 1) (topright) {};
+ \node [circ] at (2, -1) (bottomright) {};
+
+ \node [left] at (topleft) {$a$};
+ \node [right] at (bottomright) {$a$};
+ \node [left] at (left) {$c$};
+ \node [right] at (right) {$c$};
+ \node [left] at (bottomleft) {$b$};
+ \node [right] at (topright) {$b$};
+
+ \draw [red] (topleft) -- (topright);
+ \draw [red] (bottomleft) -- (bottomright);
+ \draw [purple] (bottomleft) -- (left) -- (topleft);
+ \draw [purple] (bottomright) -- (right) -- (topright);
+
+ \draw [blue] (bottomleft) -- (topright);
+ \draw [blue] (topleft) -- (bottomright);
+ \draw [blue] (left) edge [bend left=10] (right);
+ \draw [blue] (left) edge [bend right=10] (right);
+ \draw [blue] (left) edge [bend left=20] (right);
+ \draw [blue] (left) edge [bend right=20] (right);
+
+ \node [circ] at (0.2, 0.1) {};
+ \node [circ] at (0.2, -0.1) {};
+
+ \node [circ] at (0, -0.26) {};
+ \node [circ] at (0, -0.5) {};
+ \node [below] at (0, -0.5) {$t$};
+ \node [above] at (0.2, 0.1) {$t$};
+ \node [right] at (0.2, -0.1) {$s$};
+ \node [above] at (0, -0.26) {$s$};
+
+ \node [circ] at (-1, -0.5) {};
+ \node [circ] at (1, -0.5) {};
+ \node [circ] at (-1, 0.5) {};
+ \node [circ] at (1, 0.5) {};
+
+ \node [right] at (1, 0.5) {$q_+$};
+ \node [left] at (-1, -0.5) {$q_-$};
+ \node [left] at (-1, 0.5) {$p_-$};
+ \node [right] at (1, -0.5) {$p_+$};
+ \end{tikzpicture}
+ \end{center}
+ Here $p_+$ is the point where $p$ ends and $p_-$ is the point where $p$ starts, and similarly for $q$. Moreover, the distance from $p_{\pm}, q_{\pm}$ to the top/bottom left/right vertices is $\beta$. So we can just check and see that everything works.
+
+ Recall that every homotopy class of paths in $X$ contains a \emph{unique} locally geodesic representative. Since the constant path is locally geodesic, we know $g \not= 1$.
+\end{eg}
+
+\begin{defi}[Residually finite group]\index{residually finite group}
+ A group $G$ is \emph{residually finite} if for every $g \in G \setminus \{1\}$, there is a homomorphism $\varphi: G \to \text{finite group}$ such that $\varphi(g) \not= 1$.
+\end{defi}
+
+\begin{thm}[Mal'cev]
+ Every finitely generated linear group (i.e.\ a subgroup of $\GL_n(\C)$) is residually finite.
+\end{thm}
+
+\begin{proof}[Proof sketch]
+ If the group is in fact a subgroup of $\GL_n(\Z)$, then we just reduce mod $p$ for $p \gg 0$. To make it work over $\GL_n(\C)$, we need a suitable version of the Nullstellensatz.
+\end{proof}
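For subgroups of $\GL_n(\Z)$, the trick is concrete: a matrix $g \neq 1$ has some entry witnessing this, and that entry survives reduction modulo any sufficiently large prime. A toy illustration (our own helper, not part of the proof):

```python
def reduce_mod(matrix, p):
    """Reduce an integer matrix entrywise mod p, giving the image of the
    matrix under the reduction homomorphism GL_n(Z) -> GL_n(Z/p)."""
    return tuple(tuple(x % p for x in row) for row in matrix)

g = ((1, 4), (0, 1))        # a nontrivial element of SL_2(Z)
identity = ((1, 0), (0, 1))

# mod 2 the image of g happens to be trivial, but mod a prime larger
# than every entry (here 5) it is not -- so g is detected by a finite group
assert reduce_mod(g, 2) == identity
assert reduce_mod(g, 5) != identity
```

This is why the statement needs $p \gg 0$: small primes may kill the witnessing entry.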
+
+\begin{thm}[Mal'cev]
+ Every finitely generated residually finite group is Hopfian.
+\end{thm}
+
+\begin{proof}
+ Finding a proof is a fun exercise!
+\end{proof}
+We know that Wise's example is not Hopfian, hence not residually finite, hence not a linear group.
+
+Contrast this with the amazing theorem of Sela that all hyperbolic groups are Hopfian. However, a major open question is whether all hyperbolic groups are residually finite. On the other hand, it is known that not all hyperbolic groups are linear.
+
+How are we supposed to think about residually finite groups?
+
+\begin{lemma}[Scott's criterion]\index{Scott's criterion}
+ Let $X$ be a cell complex, and $G = \pi_1 X$. Then $G$ is residually finite if and only if the following holds:
+
+ Let $p: \tilde{X} \to X$ be the universal cover. For all compact subcomplexes $K \subseteq \tilde{X}$, there is a finite-sheeted cover $X' \to X$ such that the natural covering map $p': \tilde{X} \to X'$ is injective on $K$.
+\end{lemma}
+A good (though not technically correct) way to think about this is as follows: if we have a map $f: K \to X$ that may be complicated, and in particular is not injective, then we might hope that there is some ``finite resolution'' $X' \to X$ such that $f$ lifts to $X'$, and the lift is injective. Of course, this is not always possible, and a necessary condition for this to work is that the lift to the universal cover is injective. If this is not true, then obviously no such resolution can exist. And residual finiteness says that if there is no obvious reason to fail, then it in fact does not fail.
+
+\subsection{Cube complexes}
+\begin{defi}[Cube complex]\index{cube complex}
+ A Euclidean complex is a \emph{cube complex} if every cell is isometric to a cube (of any dimension).
+\end{defi}
+This is less general than Euclidean complexes, but we can still make high-dimensional things out of this. Of course, general Euclidean complexes also work in high dimensions, but except in two dimensions, the link condition is rather tricky to check. It turns out the link condition is easy to check for cube complexes.
+
+Purely topologically, the link of each vertex is made out of simplices, and, subdividing if necessary, they are simplicial complexes. The metric is such that every edge has length $\frac{\pi}{2}$, and this is called the ``all-right simplicial complex''.
+
+Recall that
+\begin{center}
+ \begin{tikzpicture}
+ \draw (-2, -2) rectangle (2, 2);
+ \draw (0, -2) -- (0, 2);
+ \draw (-2, 0) -- (2, 0);
+ \draw [mred] (0.5, 0) -- (0, 0.5) -- (-0.5, 0) -- (0, -0.5) -- cycle;
+ \end{tikzpicture}
+\end{center}
+is non-positively curved, while
+\begin{center}
+ \begin{tikzpicture}
+ \begin{scope}[cm={1,0, 0.2, 0.6, (0, 0)}]
+ \draw (0, 0) rectangle (2, 2);
+ \draw [mred] (0.5, 0) -- (0, 0.5);
+ \end{scope}
+ \begin{scope}[cm={0.2,0.6, 0, 1, (0, 0)}]
+ \draw (0, 0) rectangle (2, 2);
+ \draw [mred] (0.5, 0) -- (0, 0.5);
+ \end{scope}
+ \draw [fill=white, fill opacity=0.7](0, 0) rectangle (2, 2);
+ \draw [mred] (0.5, 0) -- (0, 0.5);
+ \end{tikzpicture}
+\end{center}
+is not.
+
+\begin{defi}[Flag simplicial complex]\index{flag simplicial complex}
+ A simplicial complex $X$ is \emph{flag} if for all $n \geq 2$, each copy of $\partial \Delta^n$ in $X$ is in fact the boundary of a copy of $\Delta^n$ in $X$.
+\end{defi}
+Note that topologically, flag complexes are not special in any sense. For any simplicial complex $K$, the first barycentric subdivision is flag.
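Equivalently, a simplicial complex is flag iff every finite set of pairwise-adjacent vertices spans a simplex; this is the form in which the condition is easiest to check mechanically. A small sketch of that check (function and data names are ours), using the hollow triangle as the basic non-example:

```python
from itertools import combinations

def is_flag(simplices):
    """Check the flag condition: every set of vertices that is pairwise
    joined by edges must span a simplex of the complex."""
    simplices = {frozenset(s) for s in simplices}
    vertices = sorted({v for s in simplices for v in s})
    edges = {s for s in simplices if len(s) == 2}
    for k in range(3, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if all(frozenset(e) in edges for e in combinations(subset, 2)):
                if frozenset(subset) not in simplices:
                    return False
    return True

# an empty 3-cycle is not flag; filling in the 2-simplex makes it flag
hollow = [("a",), ("b",), ("c",), ("a", "b"), ("b", "c"), ("a", "c")]
assert not is_flag(hollow)
assert is_flag(hollow + [("a", "b", "c")])
```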
+
+\begin{thm}[Gromov]
+ A cube complex is non-positively curved iff every link is flag.
+\end{thm}
+Now the property of being flag is purely combinatorial, and easy to check. So this lets us work with cube complexes.
+
+\subsubsection*{Right-angled Artin groups}
+\begin{defi}[right-angled Artin group]\index{right-angled Artin group}
+ Let $N$ be a simplicial graph, i.e.\ a graph with no loops or multiple edges, so that each edge is determined by its endpoints (a graph as a graph theorist would consider). Then
+ \[
+ A_N = \bra V(N) \mid [u, v] = 1 \text{ for all } (u, v) \in E(N)\ket
+ \]
+ is the \emph{right-angled Artin group}, or \term{graph group} of $N$.
+\end{defi}
+
+\begin{eg}
+ If $N$ is the discrete graph on $n$ vertices, then $A_N = F_n$.
+\end{eg}
+
+\begin{eg}
+ If $N$ is the complete graph on $n$ vertices, then $A_N = \Z^n$.
+\end{eg}
+
+\begin{eg}
+ If $N$ is a square, i.e.\ the complete bipartite graph $K_{2, 2}$, then $A_N = F_2 \times F_2$.
+\end{eg}
+
+\begin{eg}
+ When $N$ is the path with $4$ vertices, then this is a complicated group that doesn't have a good, alternative description. This is quite an interesting group.
+\end{eg}
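The presentation of $A_N$ is determined combinatorially by $N$, which makes the examples above easy to generate mechanically. A minimal sketch (the function name and string encoding of relators are ours):

```python
from itertools import combinations

def raag_presentation(vertices, edges):
    """Presentation of the right-angled Artin group A_N: one generator per
    vertex of N, one commutator relator [u, v] per edge of N."""
    relators = [f"[{u},{v}]" for u, v in edges]
    return list(vertices), relators

# discrete graph: no relators, so A_N is the free group F_3
gens, rels = raag_presentation("abc", [])
assert rels == []

# complete graph K_3: all generators commute, so A_N = Z^3
gens, rels = raag_presentation("abc", list(combinations("abc", 2)))
assert len(rels) == 3

# the square K_{2,2} on {a, b, x, y}: A_N = F_2 x F_2
gens, rels = raag_presentation(
    "abxy", [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")])
assert len(rels) == 4
```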
+
+\begin{defi}[Salvetti complex]\index{Salvetti complex}
+ Given a simplicial graph $N$, the \emph{Salvetti complex} $\mathcal{S}_N$ is the cube complex defined as follows:
+ \begin{itemize}
+ \item The $2$-skeleton $\mathcal{S}_N^{(2)}$ is the presentation complex for $A_N$.
+ \item For any immersion of the $2$-skeleton of a $d$-dimensional cube, we glue in a $d$-dimensional cube to $\mathcal{S}_N^{(2)}$.
+ \end{itemize}
+ Alternatively, we have a natural inclusion $\mathcal{S}_N^{(2)} \subseteq (S^1)^{|V(N)|}$, and $\mathcal{S}_N$ is the largest subcomplex whose $2$-skeleton coincides with $\mathcal{S}_N^{(2)}$.
+\end{defi}
+
+\begin{eg}
+ If $N$ is
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+ \draw (1, 0) -- (2, 0);
+ \end{tikzpicture}
+ \end{center}
+ then $A_N = \Z * \Z^2$, and $\mathcal{S}_N$ is a circle glued to a torus.
+\end{eg}
+
+\begin{eg}
+ If $N$ is
+ \begin{center}
+ \begin{tikzpicture}
+ \node [circ] at (0, 0) {};
+ \node [circ] at (1, 0) {};
+ \node [circ] at (2, 0) {};
+ \draw (0, 0) -- (1, 0);
+ \end{tikzpicture},
+ \end{center}
+ then $\mathcal{S}_N$ is $T^2 \vee S^1$. There is a unique vertex $v$, and its link looks like
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (-0.5, 0) -- (0, 0.5) -- (0.5, 0) -- (0, -0.5) -- (-0.5, 0);
+
+ \node [circ] at (1, 0.5) {};
+ \node [circ] at (1, -0.5) {};
+ \end{tikzpicture}
+ \end{center}
+\end{eg}
+In fact, there is a recipe for getting the link out of $N$.
+
+\begin{defi}[Double]\index{double}\index{$D(K)$}
+ The \emph{double} $D(K)$ of a simplicial complex $K$ is defined as follows:
+ \begin{itemize}
+ \item The vertices are $\{v_1^+, \ldots, v_n^+, v_1^-, \ldots, v_n^-\}$, where $\{v_1, \ldots, v_n\}$ are the vertices of $K$.
+ \item The simplices are those of the form $\bra v_{i_0}^{\pm}, \ldots, v_{i_k}^{\pm}\ket$, where $\bra v_{i_0}, \ldots, v_{i_k}\ket \in K$.
+ \end{itemize}
+\end{defi}
+
+\begin{eg}
+ In the example above, the double of $N$ is just $Lk(v)$!
+\end{eg}
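+
+To see this explicitly: there $N$ consists of an edge $\bra v_1, v_2\ket$ and an isolated vertex $v_3$, so the double has edges
+\[
+ \bra v_1^+, v_2^+\ket, \quad \bra v_1^+, v_2^-\ket, \quad \bra v_1^-, v_2^+\ket, \quad \bra v_1^-, v_2^-\ket,
+\]
+which form a $4$-cycle, together with the two isolated vertices $v_3^{\pm}$ --- exactly the link pictured above.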
+
+%Note that $D(K)$ contains many copies of $K$, especially $K^+$, which is spanned by the $v_i^+$, and $K^-$, which is spanned by the $v_i^-$. In fact, $K^+$ (and also $K^-$) is a retract of $D(K)$, using the map that sends $v^{\pm}_i$ to $v_i$. Note also that $D(K)$ is flag iff $K$ is flag.
+\begin{defi}[Flag complex]\index{flag complex}
+ The \emph{flag complex} of $N$, written $\bar{N}$, is the unique flag simplicial complex with $1$-skeleton $N$.
+\end{defi}
+
+\begin{lemma}
+ For any (simplicial) graph $N$, the link of the unique vertex of $\mathcal{S}_N$ is $D(\bar{N})$. In particular, $\mathcal{S}_N$ is non-positively curved.
+\end{lemma}
+
+Thus, right-angled Artin groups and their Salvetti complexes give examples of non-positively curved spaces with very general links. It turns out that these groups are also linear:
+
+\begin{thm}
+ Right-angled Artin groups embed into $\GL_n\Z$ (where $n$ depends on $N$).
+\end{thm}
+
+\subsection{Special cube complexes}
+Let $X$ be a non-positively curved cube complex. We will write down explicit geometric/combinatorial conditions on $X$ so that $\pi_1 X$ embeds into $A_N$ for some $N$.
+
+\subsubsection*{Hyperplanes and their pathologies}
+If $C \cong [-1, 1]^n$, then a \term{midcube} $M \subseteq C$ is the intersection of $C$ with $\{x_i = 0\}$ for some $i$.
+
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
+ \draw (0, 1) -- (1, 1) -- (1, 0);
+ \draw (1, 1) -- (1.4, 1.2);
+ \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
+ \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
+
+ \draw [mblue, thick, fill, fill opacity=0.3] (0.5, 0) -- (0.9, 0.2) -- (0.9, 1.2) -- (0.5, 1) -- cycle;
+
+ \end{tikzpicture}
+\end{center}
+
+Now if $X$ is a non-positively curved cube complex, and $M_1, M_2$ are midcubes of cubes in $X$, we say $M_1 \sim M_2$ if they have a common face, and extend this to an equivalence relation. The equivalence classes are \emph{immersed hyperplanes}. We usually visualize these as the union of all the midcubes in the equivalence class.
+\begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
+ \draw (0, 1) -- (1, 1) -- (1, 0);
+ \draw (1, 1) -- (1.4, 1.2);
+ \draw [dashed] (0, 0) -- (0.4, 0.2) -- (1.4, 0.2);
+ \draw [dashed] (0.4, 0.2) -- (0.4, 1.2);
+
+ \draw [mblue, thick, fill, fill opacity=0.3] (0.5, 0) -- (0.9, 0.2) -- (0.9, 1.2) -- (0.5, 1) -- cycle;
+
+ \draw (0, 0) -- (-0.4, -0.2) -- (0.6, -0.2) -- (1, 0);
+ \draw [mblue, thick] (0.5, 0) -- (0.1, -0.2);
+ \begin{scope}[shift={(-0.4, -0.2)}]
+ \draw (0, 0) -- (-0.4, -0.2);
+ \draw (0.6, -0.2) -- (1, 0);
+ \draw [dashed] (-0.4, -0.2) -- (0.6, -0.2);
+ \draw [mblue, thick] (0.5, 0) -- (0.1, -0.2);
+ \end{scope}
+
+ \begin{scope}[shift={(-1.2, -0.6)}]
+ \fill [white, opacity=0.5] (0.4, 0.2) rectangle +(1, 1);
+ \draw (0, 0) -- (1, 0) -- (1.4, 0.2) -- (1.4, 1.2) -- (0.4, 1.2) -- (0, 1) -- cycle;
+ \draw (0, 1) -- (1, 1) -- (1, 0);
+ \draw (1, 1) -- (1.4, 1.2);
+ \draw [dashed] (0, 0) -- (0.4, 0.2) -- (0.4, 1.2);
+
+ \draw [mblue, thick, fill, fill opacity=0.3] (0.5, 0) -- (0.9, 0.2) -- (0.9, 1.2) -- (0.5, 1) -- cycle;
+ \end{scope}
+ \end{tikzpicture}
+\end{center}
+Note that in general, these immersed hyperplanes can have self-intersections, hence the word ``immersed''. Thus, an immersed hyperplane can be thought of as a locally isometric map $H \looparrowright X$, where $H$ is a cube complex.
+
+In general, these immersed hyperplanes can have several ``pathologies''. The first is self-intersections, as we have already met. The next problem is that of \emph{orientation}, or \emph{sidedness}. For example, we can have the following cube complex with an immersed hyperplane:
+
+\pgfplotsset{width=7cm,compat=1.8}
+\begin{center}
+ \begin{tikzpicture}
+ \begin{axis}[hide axis, view = {40}{40}]
+ \addplot3 [
+ patch,
+ faceted color=black,
+ color=white,
+ point meta = x,
+ samples = 15,
+ samples y = 2,
+ domain = 0:360,
+ y domain =-0.5:0.5
+ ] ({(1+0.3*y*cos(x/2))*cos(x)}, {(1+0.3*y*cos(x/2))*sin(x)}, {0.5*y*sin(x/2)});
+
+ \addplot3 [samples=50, domain=-135:185, samples y=0, thick, mblue] ({cos(x)}, {sin(x)}, {0});
+ \end{axis}
+ \end{tikzpicture}
+\end{center}
+
+This is bad, for the reason that if we think of this as a $(-1, 1)$-bundle over $H$, then it is non-orientable, and in particular, non-trivial.
+
+In general, there could be self intersections. So we let $N_H$ be the pullback interval bundle over $H$. That is, $N_H$ is obtained by gluing together $\{M \times (-1, 1) \mid M\text{ is a cube in }H\}$. Then we say $H$ is \term{two-sided} if this bundle is trivial, and \term{one-sided} otherwise.
+
+Sometimes, we might not have self-intersections, but something like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw (0, 0) rectangle (1, 1);
+ \draw (0, 0) rectangle (1, -1);
+ \draw (0, 0) -- (-1, 0.2) -- (-1, 1.2) -- (0, 1);
+ \draw (0, 0) -- (-1, -0.2) -- (-1, -1.2) -- (0, -1);
+ \draw (1, 0) -- (2, 0.2) -- (2, 1.2) -- (1, 1);
+ \draw (1, 0) -- (2, -0.2) -- (2, -1.2) -- (1, -1);
+
+ \draw (-1, -0.2) arc(270:90:0.2);
+ \draw (-1, -1.2) arc(270:90:1.2);
+ \draw (-2.2, 0) -- (-1.2, 0);
+
+ \draw [mblue, thick] (2, 0.7) -- (1, 0.5) -- (0, 0.5) -- (-1, 0.7) arc (90:270:0.7) -- (0, -0.5) -- (1, -0.5) -- (2, -0.7);
+ \end{tikzpicture}
+\end{center}
+This is a \term{direct self-osculation}. We can also have \term{indirect osculations} that look like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw (0, 0) rectangle (1, 1);
+ \draw (0, 0) rectangle (1, -1);
+ \draw (0, 0) -- (-1, 0.2) -- (-1, 1.2) -- (0, 1);
+ \draw (1, 0) -- (2, 0.2) -- (2, 1.2) -- (1, 1);
+
+ \draw (0, 0) -- (-1, -0.2) -- (-1, -1.2) -- (0, -1);
+ \draw (1, -1) arc(90:-90:0.2);
+ \draw (1, 0) arc(90:-90:1.2);
+
+ \draw (1, -1.4) rectangle (-1, -2.4);
+ \draw (0, -1.4) -- (0, -2.4);
+
+ \draw (-1, 0.2) arc(90:270:0.8);
+ \draw (-1, 1.2) arc(90:270:1.8);
+ \begin{scope}[shift={(-1, -0.6)}]
+ \draw (-0.8, 0) -- (-1.8, 0);
+ \draw [rotate=45] (-0.8, 0) -- (-1.8, 0);
+ \draw [rotate=-45] (-0.8, 0) -- (-1.8, 0);
+ \end{scope}
+
+ \draw (1.2, -1.2) -- +(1, 0);
+ \draw [mblue, thick] (2, 0.7) -- (1, 0.5) -- (0, 0.5) -- (-1, 0.7) arc (90:270:1.3) -- (1, -1.9) arc(-90:90:0.7) -- (0, -0.5) -- (-1, -0.7);
+ \end{tikzpicture}
+\end{center}
+
+Note that it makes sense to distinguish between direct and indirect osculations only if our hyperplane is two-sided.
+
+Finally, we have \term{inter-osculations}, which look roughly like this:
+\begin{center}
+ \begin{tikzpicture}[scale=0.7]
+ \draw (0, 0) rectangle (1, 1);
+ \draw (0, 0) rectangle (1, -1);
+ \draw (0, 0) -- (-1, 0.2) -- (-1, 1.2) -- (0, 1);
+ \draw (0, 0) -- (-1, -0.2) -- (-1, -1.2) -- (0, -1);
+ \draw (1, 0) -- (2, 0.2) -- (2, 1.2) -- (1, 1);
+ \draw (1, 0) -- (2, -0.2) -- (2, -1.2) -- (1, -1);
+
+
+ \draw [mblue, thick] (-1, 0.7) -- (0, 0.5) -- (1, 0.5) -- (2, 0.7) edge [bend left] (4, -1);
+ \draw [mblue, thick] (-1, -0.5) -- (0, -0.5) -- (1, -0.5) -- (2, -0.7) edge [bend right] (4, 1);
+ \end{tikzpicture}
+\end{center}
+
+Haglund and Wise defined \emph{special cube complexes} to be cube complexes that are free of pathologies.
+\begin{defi}[Special cube complex]\index{special cube complex}\index{cube complex!special}
+ A cube complex is \emph{special} if its hyperplanes do not exhibit any of the following four pathologies:
+ \begin{itemize}
+ \item One-sidedness
+ \item Self-intersection
+ \item Direct self-osculation
+ \item Inter-osculation
+ \end{itemize}
+\end{defi}
+
+\begin{eg}
+ A cube is a special cube complex.
+\end{eg}
+
+\begin{eg}
+ Traditionally, the way to exhibit a surface as a cube complex is to first tile it by right-angled polygons, so that every vertex has degree $4$, and then the dual exhibits the surface as a cube complex. The advantage of this approach is that the hyperplanes are exactly the edges in the original tiling!
+
+ % insert picture
+
+ From this, one checks that we in fact have a special cube complex.
+\end{eg}
+This is one example, but it is quite nice to have infinitely many. It is true that covers of special things are special. So this already gives us infinitely many special cube complexes. But we want others.
+
+\begin{eg}
+ If $X = \mathcal{S}_N$ is a Salvetti complex, then it is a special cube complex, and it is not terribly difficult to check.
+\end{eg}
+
+The key theorem is the following:
+\begin{thm}[Haglund--Wise]
+ If $X$ is a compact special cube complex, then there exists a graph $N$ and a local isometry of cube complexes
+ \[
+ \varphi_X: X \looparrowright \mathcal{S}_N.
+ \]
+\end{thm}
+
+\begin{cor}
+ $\pi_1 X \hookrightarrow A_N$.
+\end{cor}
+
+\begin{proof}[Proof of corollary]
+ If $g \in \pi_1 X$ is non-trivial, then it is represented by a unique local geodesic $\gamma: I \to X$. Then $\varphi_X \circ \gamma$ is a non-constant local geodesic in $\mathcal{S}_N$. Since homotopy classes of loops are represented by unique local geodesics, this implies $\varphi_X \circ \gamma$ is not null-homotopic. So the map $(\varphi_X)_*$ is injective.
+\end{proof}
+
+So if we know some nice group-theoretic facts about right-angled Artin groups, then we can use them to understand $\pi_1X$. For example,
+\begin{cor}
+ If $X$ is a special cube complex, then $\pi_1 X$ is linear, residually finite, Hopfian, etc.
+\end{cor}
+We shall try to give an indication of how we can prove the Haglund--Wise theorem. We first make the following definition.
+\begin{defi}[Virtually special group]\index{virtually special group}
+ A group $\Gamma$ is \emph{virtually special} if there exists a finite index subgroup $\Gamma_0 \leq \Gamma$ such that $\Gamma_0 \cong \pi_1 X$, where $X$ is a compact special cube complex.
+\end{defi}
+
+\begin{proof}[Sketch proof of Haglund--Wise]
+ We have to first come up with an $N$. We set the vertices of $N$ to be the hyperplanes of $X$, and we join two vertices iff the hyperplanes cross in $X$. This gives $\mathcal{S}_N$. We choose a transverse orientation on each hyperplane of $X$.
+
+ Now we define $\varphi_X: X \looparrowright \mathcal{S}_N$ cell by cell.
+ \begin{itemize}
+ \item Vertices: There is only one vertex in $\mathcal{S}_N$.
+ \item Edges: Let $e$ be an edge of $X$. Then $e$ crosses a unique hyperplane $H$, which is a vertex of $N$. This corresponds to a generator in $A_N$, hence a corresponding edge $e(H)$ of $\mathcal{S}_N$. Send $e$ to $e(H)$; the choice of transverse orientation tells us which way round to do it.
+ \item Squares: given hyperplanes
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0, 0) -- (2, 0) node [below, pos=0.5] {$f_2$} -- (2, 2) node [right, pos=0.5] {$e_2$} -- (0, 2) node [pos=0.5, above] {$f_1$} -- (0, 0) node [pos=0.5, left] {$e_1$};
+
+ \draw [mred, thick] (1, 0) -- (1, 2) node [pos=0.7, right] {$H'$};
+ \draw [mblue, thick] (0, 1) -- (2, 1) node [pos=0.7, below] {$H$};
+ \end{tikzpicture}
+ \end{center}
+ Note that we already mapped $e_1, e_2$ to $e(H)$, and $f_1, f_2$ to $e(H')$. Since $H$ and $H'$ cross in $X$, we know $e(H)$ and $e(H')$ bound a square in $\mathcal{S}_N$. Send this square in $X$ to that square in $\mathcal{S}_N$.
+ \item There is nothing to do for the higher-dimensional cubes, since by definition of $\mathcal{S}_N$, they have all the higher-dimensional cubes we can hope for.
+ \end{itemize}
+ We haven't used a lot of the nice properties of special cube complexes. They are needed to show that the map is a local isometric embedding. What we do is to use the hypothesis to show that the induced map on links is an isometric embedding, which implies $\varphi_X$ is a local isometry.
+\end{proof}
+
+This applies to a remarkably wide selection of groups.
+\begin{eg}
+ The following groups are virtually special groups:
+ \begin{itemize}
+ \item $\pi_1 M$ for $M$ almost any $3$-manifold.
+ \item Random groups
+ \end{itemize}
+\end{eg}
+This is pretty amazing. A ``random group'' is linear!
+\printindex
+\end{document}
diff --git a/books/cam/archive/3-manifolds.tex b/books/cam/archive/3-manifolds.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6a474c3b3ae12bef36e6eded2b92dbf7696932b8
--- /dev/null
+++ b/books/cam/archive/3-manifolds.tex
@@ -0,0 +1,439 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Lent}
+\def\nyear {2018}
+\def\nlecturer {S.\ Rasmussen}
+\def\ncourse {3-Manifolds}
+
+\input{header}
+\DeclareMathOperator\Tor{Tor}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+This course aims to provide a survey of topics relevant to research in 3-manifold topology and geometry.
+\begin{itemize}
+ \item \textit{Knots and links.} Invariants of knots and links, including the Jones and Alexander polynomials. Categorification of invariants. Dehn filling and Dehn surgery.
+ \item \textit{Geometrization and Hyperbolic geometry.} A survey, mostly without proof, of primary notions from geometrization --- which characterizes 3-manifold geometry and topology in terms of the fundamental group --- and hyperbolic geometry.
+ \item \textit{3-manifold constructions.} Mapping tori, handle decompositions, Heegaard splittings, triangulations.
+ \item \textit{Foliations.} Singular, Reeb, and taut foliations, with connections to fundamental group actions, topology, and geometry. Transverse foliations on Seifert fibred spaces.
+\end{itemize}
+\subsubsection*{Pre-requisites}
+Part III Algebraic Topology and Part III Differential Geometry.
+}
+\tableofcontents
+
+\section{Introduction}
+In the early 1900s, Poincar\'e asked the following question:
+\begin{center}
+ Does homology distinguish $S^3$ from other closed $3$-manifolds?
+\end{center}
+In general, for a closed orientable manifold, by the universal coefficients theorem for cohomology, we know
+\[
+ H^i(M) \cong \Hom(H_i(M), \Z) \oplus \Ext(H_{i - 1}(M), \Z),
+\]
+and $\Ext(H_{i - 1}(M), \Z)$ is just the torsion subgroup of $H_{i - 1}(M)$. In particular, we have
+\[
+ H^3(M) = H_3(M) \oplus \Tor(H_2(M)).
+\]
+But we know both $H^3(M)$ and $H_3(M)$ are $\Z$. So we know that $\Tor(H_2(M)) = 0$.
+
+On the other hand, we have Poincar\'e duality
+\[
+ H_1(M) \cong H^2(M) \cong H_2(M) \oplus \Tor(H_1(M)).
+\]
+Thus, $H_2(M)$ is just the free bit of $H_1(M)$. The conclusion is that, for a $3$-manifold, the homology is completely determined by $H_1$.
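+
+For instance, if $M$ is a closed orientable $3$-manifold with $H_1(M) \cong \Z/p$, then the free part of $H_1$ vanishes, so the above gives
+\[
+ H_0(M) \cong \Z,\quad H_1(M) \cong \Z/p,\quad H_2(M) \cong 0,\quad H_3(M) \cong \Z.
+\]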
+
+The answer to Poincar\'e's original question is no! Poincar\'e constructed a ``\term{Poincar\'e sphere}'' $P_S$ with $H_1(P_S) = 0$ but $P_S \not\cong S^3$.
+
+Thus, we need a stronger invariant, and so Poincar\'e came up with the notion of the fundamental group $\pi_1(M)$. The next question is:
+\begin{center}
+ Does $\pi_1$ distinguish $S^3$?
+\end{center}
+We can test, and check that $|\pi_1(P_S)| = 120$, but $\pi_1(S^3)$ is trivial.
+
+The answer to this question didn't come until 2002, when Perelman proved
+\begin{thm}[Poincar\'e conjecture]\index{Poincar\'e conjecture}
+ If $M$ is a closed $3$-manifold, $\pi_1(M)$ is finite, and $M$ is \emph{prime}, then the universal cover of $M$ is $S^3$.
+\end{thm}
+
+\begin{cor}
+ If $\pi_1(M)$ is trivial, then $M \cong S^3$.
+\end{cor}
+
+%Instead of the Poincar\'e conjecture, we can consider the following simplification: We say the Poincar\'e conjecture is \emph{topologically true} if whenever $M$ is homotopy equivalent to $S^n$, then $M$ is homeomorphic to $S^n$.
+
+It turns out in high dimensions, the Poincar\'e conjecture is easy. Via the $h$-cobordism theorem, Smale and Stallings proved that as long as $n \geq 5$, any closed, simply-connected $n$-manifold with the homology of $S^n$ is in fact homeomorphic to $S^n$.
+%For $n \geq 5$, Smale and Stallings proved that the Poincar\'e conjecture is topologically true, using the $h$-cobordism theorem. % check this
+
+\begin{thm}[Freedman]
+ Any closed $4$-manifold homotopy equivalent to $S^4$ is homeomorphic to $S^4$.
+\end{thm}
+
+This answers the topological question, namely whether manifolds are \emph{homeomorphic} to $S^n$. How about the differential structure?
+\begin{thm}[Milnor]
+ There exist exotic smooth structures on $S^7$, i.e.\ there exist smooth structures on $S^7$ that are not diffeomorphic to the standard one.
+\end{thm}
+
+On the other hand, there are no exotic smooth structures on $S^5$ and $S^6$, but there are for ``most'' $n \geq 7$.
+
+It is also not hard to see that there are no exotic smooth structures on $S^1$ and $S^2$. In dimension 3, we have
+\begin{thm}[Moise]
+ Two $3$-manifolds are homeomorphic iff they are diffeomorphic.
+\end{thm}
+
+For $S^4$, it is not known!
+
+So how do we distinguish smooth structures on $4$-manifolds? The idea of Donaldson was to use an invariant. The idea was to pick some differential equation and count how many solutions there are on each manifold. If two manifolds are diffeomorphic, then they should have the same number of solutions! In particular, he took the equations of gauge theory from classical field theory in physics. Donaldson used $\SU(2)$ gauge theory, while Seiberg and Witten used $\U(1)$ gauge theory with spinor structures.
+
+But why are we talking about $4$-manifolds? The point is that $4 = 3 + 1$. In particular, if we have a cobordism between $3$-manifolds, then we get a $4$-manifold! Often we can reduce gauge invariants on the total $4$-manifold into ``dimensionally reduced'' gauge invariants on the bounding $3$-manifolds.
+
+In research today, there is a huge industry of studying gauge-theoretic invariants on $3$-manifolds. It is still unclear what these invariants can tell us about the $3$-manifolds themselves.
+
+Now let's think about the following question --- what $3$-manifolds are there? For example, which groups are fundamental groups of $3$-manifolds? It turns out this question is uninteresting in higher dimensions, since every finitely presented group is the fundamental group of some closed $4$-manifold (or a manifold of any higher dimension).
+
+Let's look at some examples.
+\begin{defi}[Lens space]\index{lens space}
+ A \emph{lens space} $L(p, q)$ is the quotient of $S^3 \subseteq \C^2$ by the relation
+ \[
+ (z_1, z_2) \sim (e^{2\pi i/p} z_1, e^{2\pi i q/p} z_2).
+ \]
+\end{defi}
+
+\begin{prop}
+ We have $\pi_1(L(p, q)) \cong \Z/p\Z$.
+\end{prop}
+
+On the other hand,
+\begin{thm}[Reidemeister]
+ $L(p, q) \cong L(p, q')$ iff $q \equiv \pm q'^{\pm 1} \pmod p$.
+\end{thm}
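+
+For example, taking $p = 7$, we have $\pm 2^{\pm 1} \equiv 2, 5, 4, 3 \pmod 7$, and none of these is $1$. So
+\[
+ L(7, 1) \not\cong L(7, 2),
+\]
+even though both have fundamental group $\Z/7$.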
+
+So the fundamental group does not distinguish between these. But it turns out this is all that can go wrong.
+\begin{thm}[Perelman]
+ If no ``pieces'' of $M$ are lens spaces, then $\pi_1(M)$ completely determines $M$ up to homeomorphism.
+\end{thm}
+
+\subsection{Structures on 3-manifolds}
+One can put knots and links on $3$-manifolds.
+\begin{defi}[Knot]\index{knot}
+ Let $M$ be a $3$-manifold. A \emph{knot} on $M$ is an embedding of $S^1$ into $M$.
+\end{defi}
+If $M$ is not specified, it is taken to be $S^3$.
+\begin{defi}[Link]\index{link}
+ Let $M$ be a $3$-manifold. A \emph{link} on $M$ is an embedding of some disjoint union of $S^1$'s into $M$.
+\end{defi}
+It turns out knots are useful for building $3$-manifolds:
+\begin{thm}[Lickorish--Wallace]
+ Every closed orientable $3$-manifold can be presented as surgery on a link in $S^3$.
+\end{thm}
+
+We can also put contact structures on $3$-manifolds, which we can think of as boundary conditions for symplectic structures, and there are also taut foliations.
+
+\subsection{Prime decomposition}
+Suppose we have a 3-manifold, and we want to try to simplify it. One way to do so is to try to write it as a \emph{connected sum} of simpler manifolds.
+
+% insert definition of connected sum.
+
+\begin{defi}[Prime manifold]\index{prime manifold}
+ An $n$-dimensional manifold $M$ is \emph{prime} if whenever $M = A_1 \# A_2$, then one of the $A_i$ is $S^n$.
+\end{defi}
+
+One useful technique is the following:
+\begin{thm}[Alexander's trick]
+ If $\phi: S^2 \to S^2$ is a homeomorphism, then it extends to a homeomorphism $\phi: B^3 \to B^3$.
+\end{thm}
+This says there is a unique way to glue in a $B^3$.
+\begin{proof}
+ % fill this in
+\end{proof}
+This is a motivation to show that the connected sum gluing map along $S^2$ is unique up to homeomorphism.
+
+\begin{thm}[Papakyriakopoulos $S^2$ theorem]\index{Papakyriakopoulos $S^2$ theorem}
+ If $M$ is an orientable $3$-manifold with $\pi_2(M) \neq 0$, then there is an embedded $S^2$ in $M$ that is not null-homotopic.
+\end{thm}
+
+\begin{thm}[Kneser--Milnor theorem]\index{Kneser--Milnor theorem}
+ Every $3$-manifold admits a decomposition into a connected sum of prime manifolds, none of which is $S^3$, and the decomposition is unique up to permutation.
+\end{thm}
+To prove existence, the idea is that every time we break our manifold up into connected sums, $\pi_1$ gets ``simpler'', and group theory helps us ensure the process will eventually stop.
+
+\begin{thm}
+ Any orientation-preserving diffeomorphism $\phi: S^2 \to S^2$ is \term{isotopic} to the identity $S^2 \to S^2$ via homeomorphisms.
+\end{thm}
+
+\begin{proof}
+ We want to think of $S^2$ as the one-point compactification of $\R^2$. By composing with a rotation (which is isotopic to the identity), we may assume $\phi$ fixes the point $\infty$. Thus, we may prove the same result with $S^2$ replaced by $\R^2$.
+
+% Pick some point $p \in S^2$. By composing $\phi$ with a rotation (which is isotopic to the identity), we may assume $\phi(p) = p$. By some rotation and stretching, we may further assume that $\phi$ is differentiable at $p$ and $\d_p \phi = \id$, using that $\phi$ is orientation-preserving. Then we use the fact that any homeomorphism $\psi: \R^n \to \R^n$ with $\psi(0) = 0$ and $\d_0 \psi = \id$ is isotopic to the identity on $\R^n$.
+\end{proof}
+
+% unique decomposition theorem
+
+% more things
+
+\begin{defi}[Splitting]\index{splitting}
+ Given a closed surface $\Sigma$ and an embedding $\Sigma \hookrightarrow M$, the \emph{splitting} of $M$ along $\Sigma$ is $M | \Sigma = M \setminus \nu(\Sigma)$, where $\nu(\Sigma)$ is a tubular neighbourhood of $\Sigma$.
+\end{defi}
+The idea is that we just want to remove $\Sigma$, but it is better behaved if we remove a $3$-manifold from a $3$-manifold, as opposed to removing a $2$-manifold from a $3$-manifold.
+
+\begin{defi}[Separating submanifold]\index{separating submanifold}
+ A closed subsurface $\Sigma \hookrightarrow M$ of a connected manifold $M$ is \emph{separating} if $M|\Sigma$ has two components.
+\end{defi}
+
+Thus, a manifold $M$ is prime iff every separating embedded $S^2$ bounds a ball.
+
+\begin{eg}
+ $\{1\} \times S^2$ is not separating in $S^1 \times S^2$, but the equator is separating in $S^3$.
+\end{eg}
+
+\begin{defi}[Irreducible manifold]\index{irreducible manifold}
+ A manifold $M$ is \emph{irreducible} if every embedded $S^2$ bounds a $B^3$.
+\end{defi}
+In fact, there is only one $3$-manifold that is prime but not irreducible.
+
+\begin{prop}
+ $S^2 \times S^1$ is the unique $3$-manifold that is prime but not irreducible.
+\end{prop}
+
+A hard theorem of Alexander says:
+\begin{thm}[Alexander's theorem]
+ Any embedded $S^2$ in $\R^3$ bounds a ball.
+\end{thm}
+
+\begin{cor}
+ $S^3$ is prime and irreducible.
+\end{cor}
+
+What does decomposition along $S^2$'s do to the fundamental group? We will need the following basic properties of the fundamental group:
+\begin{thm}[Product theorem]\index{product theorem}
+ Let $X, Y$ be spaces. Then
+ \[
+ \pi_1(X \times Y) \cong \pi_1(X) \times \pi_1(Y).
+ \]
+\end{thm}
+
+\begin{thm}[Seifert--van Kampen theorem]\index{Seifert--van Kampen theorem}
+ Let $X_1, X_2$ be $3$-manifolds. Then
+ \[
+ \pi_1(X_1 \# X_2) \cong \pi_1(X_1) * \pi_1(X_2).
+ \]
+\end{thm}
+The main point here is that $S^2$ is simply connected.
+
+\begin{prop}[Homotopy lifting property of covering spaces]\index{homotopy lifting property}
+ If $X, Y$ are manifolds and $p: \tilde{Y} \to Y$ is a covering map, then for any map $\tilde{f}: X \to \tilde{Y}$ and any homotopy $f_t: X \to Y$ with $f_0 = p \circ \tilde{f}$, there is a unique lift $\tilde{f}_t: X \to \tilde{Y}$ of $f_t$ with $\tilde{f}_0 = \tilde{f}$ such that $p \circ \tilde{f}_t = f_t$.
+\end{prop}
+
+% maybe say something about the lifting criterion.
+
+\begin{thm}[Grushko's theorem]\index{Grushko's theorem}
+ If $G_1$ and $G_2$ are finitely-generated groups, then $\rank(G_1 * G_2) = \rank(G_1) + \rank(G_2)$, where the \term{rank} of $G$ is the minimal number of generators in a presentation of $G$.
+\end{thm}
+
+\begin{thm}[Poincar\'e conjecture]
+ If $X$ is a closed, oriented $3$-manifold with $\pi_1(X) = 1$, then $X \cong S^3$.
+\end{thm}
+
+Together, these tell us if we keep decomposing a $3$-manifolds into direct sums, then it eventually terminates.
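+
+Indeed, if $M \cong M_1 \# \cdots \# M_k$ with no summand homeomorphic to $S^3$, then combining Seifert--van Kampen with Grushko's theorem gives
+\[
+ \rank \pi_1(M) = \sum_{i = 1}^k \rank \pi_1(M_i).
+\]
+By the Poincar\'e conjecture, each $\pi_1(M_i)$ is non-trivial, hence contributes at least $1$ to the rank, so $k \leq \rank \pi_1(M)$.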
+
+\begin{thm}[Kneser's conjecture/Kneser--Stallings theorem]\index{Kneser's conjecture}\index{Kneser--Stallings theorem}
+ If $M$ is a compact, oriented, connected $3$-manifold, and $\pi_1(M)$ decomposes as $G_1 * G_2$, then there exists $M_i$ (compact, oriented, connected) such that $\pi_1(M_i) = G_i$ and $M \cong M_1 \# M_2$.
+\end{thm}
+
+\section{Surface decompositions}
+\begin{thm}[Loop theorem]\index{Loop theorem}
+ Let $M$ be a $3$-manifold, and $B$ a compact surface with an inclusion $i: B \hookrightarrow \partial M$. Suppose $N \lhd \pi_1(B)$ is a normal subgroup, and $N \not\supseteq \ker (i_*: \pi_1(B) \to \pi_1(M))$. Then there is an embedding $f: (D^2, S^1) \to (M, B)$ such that $f: S^1 \to B$ represents a conjugacy class in $\pi_1(B)$ which is not in $N$.
+\end{thm}
+
+\begin{cor}[Dehn's lemma]\index{Dehn's lemma}
+ Let $M$ be a $3$-manifold, and $f: (D^2, S^1) \to (M, \partial M)$ be a map that is an embedding on some collar neighbourhood $A$ of $\partial D$. Then there is an embedding $f': D^2 \hookrightarrow M$ such that $f|_A = f'|_A$.
+\end{cor}
+Dehn's lemma was stated before the loop theorem was proved. The loop theorem was proven by Papakyriakopoulos, and Dehn's lemma was later proved by Papakyriakopoulos and Stallings.
+
+\begin{defi}[Compressing disk]
+ Given a compact oriented surface $\Sigma \hookrightarrow M$ properly embedded, a \emph{compressing disk} is a proper embedding $\iota: (D^2, S^1) \hookrightarrow (M, \Sigma)$ such that $\iota(S^1)$ is essential in $\Sigma$.
+\end{defi}
+
+\begin{defi}[Incompressible $3$-manifold]\index{incompressible $3$-manifold}
+ A $3$-manifold $M$ with boundary $\Sigma = \partial M$ is \emph{incompressible} if there are no compressing disks $D^2$ in $M$ with $\partial D \hookrightarrow \partial M$.
+\end{defi}
+
+\begin{defi}[Essential submanifold]\index{essential submanifold}
+ An embedded surface $\Sigma \hookrightarrow M$ of positive genus is \emph{essential} if it is $\pi_1$-injective. An embedded $S^2$ is essential if it is $\pi_2$-injective. A curve is essential if it does not bound a disk.
+\end{defi}
+
+A corollary of the loop theorem and Dehn's lemma is
+\begin{thm}
+ If an embedded surface is essential, then it is incompressible.
+\end{thm}
+Under the appropriate hypothesis, this is an if-and-only-if.
+
+\begin{thm}[Jaco--Shalen, Johannson] % check this
+ If $M$ is a closed, oriented, irreducible, toroidal $3$-manifold, then there is a collection of disjoint embedded essential tori $T_1, \ldots, T_k \hookrightarrow M$ such that the splitting of $M$ along the tori consists of pieces that are Seifert fibered or ``simple'' (which Thurston showed is the same as hyperbolic).
+\end{thm}
+This is called a \term{JSJ decomposition}.
+
+\begin{thm}[Morton Brown, Kirby]
+ A codimension $1$ or $2$ topological embedding has a collar iff it is locally flat.
+\end{thm}
+
+\begin{defi}[Locally flat embedding]\index{locally flat embedding}
+ An embedding $\iota: X \hookrightarrow Y$ is \emph{locally flat} if for every $x \in X$, there is a chart of $Y$ about $x$ in which the embedding looks like the standard inclusion $\R^k \hookrightarrow \R^n$.
+\end{defi}
+
+In this course, an embedding is always smooth or locally flat.
+
+\begin{thm}
+ If $X$ and $Y$ are compact, then $f: X \to Y$ is a proper embedding iff $f|_{\operatorname{int}(X)}: \operatorname{int}(X) \to \operatorname{int}(Y)$ is a proper map and $f$ is an embedding.
+\end{thm}
+
+\section{Decomposition of 3-manifolds}
+Suppose $X_1, X_2$ are compact, connected, oriented $n$-manifolds. There is a natural way of joining them together to get a new manifold.
+\begin{defi}[Connected sum]\index{connected sum}
+ Let $X_1, X_2$ be compact, connected, oriented $n$-manifolds. Pick two embeddings $\iota_i: B^n \hookrightarrow X_i$. We define the connected sum $X_1 \# X_2$ to be the union
+ \[
+ X_1 \# X_2 = (X_1 \setminus \iota_1(\mathring{B}^n)) \cup_\varphi(X_2 \setminus \iota_2(\mathring{B}^n)),
+ \]
+ where $\varphi$ is an orientation \emph{reversing} map identifying the two resulting boundary spheres.
+\end{defi}
+
+It is a hard theorem that this is well-defined, and we can perform this in both the topological category and the smooth category.
+
+We now want to try to decompose our manifolds under this operation.
+\begin{defi}[Prime manifold]\index{prime manifold}
+ A manifold $M$ is prime if whenever $M \cong M_1 \# M_2$, one of $M_1$ and $M_2$ is $S^n$.
+\end{defi}
+
+Our objective is to decompose every manifold into a sum of prime manifolds. To do so, a reasonable strategy is to embed spheres $S^{n - 1}$ into $M$, and try to cut $M$ out along this $S^{n - 1}$. If this $S^{n - 1}$ bounds a ball, then we would have cut out a decomposition where one piece is $S^n$. This leads to the definition:
+\begin{defi}[Irreducible manifold]\index{irreducible manifold}
+ A manifold $M$ is irreducible if every embedding of $S^{n - 1}$ bounds a ball.
+\end{defi}
+
+Note that being irreducible is not the same as being prime! For example, $S^2 \times S^1$ is prime, but is not irreducible, as the obvious embedding of $S^2$ does not bound a ball. The problem is that removing the copy of $S^2$ does not necessarily break your manifold into two pieces.
+
+We now focus on the case $n = 3$. We then have
+\begin{thm}[Kneser--Milnor theorem]\index{Kneser--Milnor theorem}
+ Every $3$-manifold admits a decomposition into a connected sum of prime manifolds, none of which is $S^3$, and the decomposition is unique up to permutation.
+\end{thm}
+
+This is not easy to prove, and falls out of the scope of the course, but we shall indicate how this follows from the Poincar\'e conjecture as well as some group-theoretic results.
+
+First of all, by Seifert--van Kampen, we have
+\begin{prop}
+ Let $M = M_1 \# M_2$. Then
+ \[
+ \pi_1(M) \cong \pi_1(M_1) * \pi_1(M_2).
+ \]
+\end{prop}
+The idea is that when this happens, $\pi_1(M_i)$ are simpler than $\pi_1(M)$. So if we keep decomposing, it must stop. To make this precise, recall that the \term{rank} of a group is the minimal number of generators. We then have
+
+\begin{thm}[Grushko's theorem]\index{Grushko's theorem}
+ If $G_1$ and $G_2$ are finitely-generated groups, then $\rank(G_1 * G_2) = \rank(G_1) + \rank(G_2)$.
+\end{thm}
+
+Again, we are not going to prove this. To conclude the Kneser--Milnor theorem, we use
+\begin{thm}[Poincar\'e conjecture]\index{Poincar\'e conjecture}
+ Every $3$-manifold with trivial $\pi_1$ is homeomorphic to $S^3$.
+\end{thm}
+
+From these, the Kneser--Milnor theorem follows.
+
+\section{The mapping class group}
+If we want to glue $3$-manifolds along a fixed surface $\Sigma$, then we need to pick a homeomorphism that identifies the boundaries of the manifolds. This corresponds to understanding the homeomorphisms from $\Sigma$ to itself. This suggests studying
+\begin{defi}[Mapping class group]\index{mapping class group}
+ Let $\Sigma$ be a surface. The mapping class group $MCG(\Sigma)$ of $\Sigma$ is the group of homeomorphisms $\Sigma \to \Sigma$ quotiented by isotopy.
+\end{defi}
+
+In general, isotopy is not very amenable to the techniques of algebraic topology. Instead, we want to work with homotopy. A perhaps surprising theorem of Baer tells us this doesn't matter.
+\begin{thm}[Baer's theorem]\index{Baer's theorem}
+ Two homeomorphisms $f_1, f_2: \Sigma \to \Sigma$ of surfaces are isotopic iff they are homotopic.
+\end{thm}
+Thus, we can define $MCG(\Sigma)$ to be the $\pi_0$ of the topological group of self-homeomorphisms of $\Sigma$.
+
+We can further strengthen the theorem, to say
+\begin{thm}
+ Homeomorphisms $f: \Sigma \to \Sigma$ are classified by their induced maps on $\pi_1$, which are well-defined only up to conjugation. Thus, we have an embedding of $MCG(\Sigma)$ into $\Aut(\pi_1(\Sigma))$ modulo conjugation, i.e.\ into the outer automorphism group.
+\end{thm}
+
+\begin{thm}[Dehn] % check this
+ This map is also surjective if $\Sigma$ is closed.
+\end{thm}
+
+This has a very explicit description in the case where $\Sigma$ is a torus, which is probably an exercise in Part II Algebraic Topology. In this case, the orientation-preserving mapping class group is $\SL(2, \Z) \leq \Aut(\Z \oplus \Z)$, which is generated by the \term{Dehn twists} $\begin{pmatrix}1 & -1\\ 0 & 1\end{pmatrix}$ and $\begin{pmatrix}1 & 0\\1 & 1\end{pmatrix}$.
+
+
+In general, if we have any surface, it has many holes, and we can do a Dehn twist on any embedded $S^1$.
+\begin{thm}[Dehn, Lickorish, Humphries, Johnson]
+ The mapping class groups are generated by $2g + 1$ Dehn twists. % insert picture
+\end{thm}
+
+We can focus on the case of a torus, where the mapping class group is $\SL(2, \Z)$. Let $M$ be an element in $\SL(2, \Z)$. Then it has eigenvalues $\lambda, \lambda^{-1}$. There are a few possible cases:
+\begin{itemize}
+ \item Elliptic: If $\lambda, \lambda^{-1}$ are not real, then $\lambda^{-1} = \lambda^*$. One example is $\begin{pmatrix}0 & 1\\ -1 & 1\end{pmatrix}$.
+ \item Parabolic: If $\lambda = \lambda^{-1} = 1$, then $M$ is parabolic. These include $\begin{pmatrix}1 & n\\0 & 1\end{pmatrix}$.
+ \item Hyperbolic: These are the other cases, e.g.\ $\begin{pmatrix}2 & 1\\ 1 & 1\end{pmatrix}$.
+\end{itemize}
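Since the characteristic polynomial of $M$ is $\lambda^2 - (\operatorname{tr} M)\lambda + 1$, the trichotomy is detected by $|\operatorname{tr} M|$ alone. A small sanity check (my own illustration, not part of the notes; the function name `classify` is mine):

```python
def classify(M):
    """Classify M in SL(2, Z) by the absolute value of its trace:
    < 2 elliptic, == 2 parabolic, > 2 hyperbolic.
    (The central elements +-I, with |trace| = 2, are a degenerate boundary case.)"""
    t = abs(M[0][0] + M[1][1])
    if t < 2:
        return 'elliptic'
    return 'parabolic' if t == 2 else 'hyperbolic'

assert classify([[0, 1], [-1, 1]]) == 'elliptic'      # complex eigenvalues
assert classify([[1, 5], [0, 1]]) == 'parabolic'      # repeated eigenvalue 1
assert classify([[2, 1], [1, 1]]) == 'hyperbolic'     # real eigenvalues l, 1/l
```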
+
+Given a homeomorphism $M: T^2 \to T^2$, we can form the \term{mapping torus} by taking $T^2 \times [0, 1]$ and quotienting by $(M(x), 0) \sim (x, 1)$. % Seifert fibration
+
+\subsection{JSJ decomposition}
+\begin{defi}[Boundary parallel]\index{boundary parallel}
+ An embedded surface $\Sigma \subseteq M$ is \emph{boundary parallel} if it is isotopic to a boundary component of $M$.
+\end{defi}
+
+\begin{defi}[Peripheral subgroup]\index{peripheral subgroup}
+ A \emph{peripheral subgroup} of $\pi_1(X)$ is a subgroup of $\pi_1(X)$ that is conjugate to the image of the fundamental group of a boundary component of $X$.
+\end{defi}
+
+\begin{defi}[Geometrically toroidal]\index{geometrically toroidal}
+ A $3$-manifold $M$ is geometrically toroidal if $M$ has an embedded incompressible torus that is not boundary parallel.
+\end{defi}
+
+\begin{defi}[Group-theoretically atoroidal]\index{group-theoretically atoroidal}
+ A manifold $M$ is \emph{group-theoretically atoroidal} if $\pi_1(M)$ has no non-peripheral $\Z \oplus \Z$ subgroup.
+\end{defi}
+
+\begin{thm}[Apanasov]
+ If $M$ is compact, irreducible, oriented and boundary incompressible (i.e.\ the boundary does not have any compressing disks), then it is geometrically toroidal iff it is group-theoretically toroidal.
+\end{thm}
+In these cases, we just say $M$ is \term{toroidal}.
+
+\begin{thm}[JSJ decomposition]\index{JSJ decomposition}
+ If $M$ is compact, irreducible, oriented with \term{toroidal boundary} (i.e.\ the boundary is a disjoint union of tori), then there is a (possibly empty) disjoint union $\{T_1^2, \ldots, T_n^2\}$ of embedded incompressible tori such that every component of $M \setminus (T_1^2 \cup \cdots \cup T_n^2)$ is either Seifert fibered or atoroidal (possibly both).
+\end{thm}
+
+\begin{defi}[Haken manifold]\index{Haken manifold}
+ A connected, compact, oriented $3$-manifold $M$ is \emph{Haken} if it has a properly embedded, incompressible surface that is not boundary parallel.
+\end{defi}
+
+\begin{thm}[Seifert]
+ Any compact oriented $3$-manifold with toroidal boundary has a Seifert surface.
+\end{thm}
+
+\begin{cor}
+ Such a manifold is Haken.
+\end{cor}
+
+Why do we care about Haken things?
+\begin{thm}[Thurston's hyperbolization theorem]
+ Every compact, oriented, irreducible, atoroidal Haken $3$-manifold whose boundary is a (possibly empty) union of tori admits a complete hyperbolic metric on its interior.
+\end{thm}
+
+\begin{cor}
+ In the case of non-empty splitting, the non-Seifert-fibered things are hyperbolic.
+\end{cor}
+
+How about the closed case? Such manifolds are either Seifert fibered or closed atoroidal. The closed atoroidal case is hard.
+\begin{thm}[Perelman, 2002--2003]
+ An irreducible, oriented, atoroidal closed $3$-manifold can be deformed, by Ricci flow with surgery, to one carrying a constant curvature metric. In the positive curvature case, the fundamental group is finite, and is a rigid subgroup of $\SO(4)$ acting on $S^3 \subseteq \R^4$; the universal cover is $S^3$.
+
+ In the case of negative curvature, the manifold is hyperbolic.
+\end{thm}
+
+
+\printindex
+\end{document}
diff --git a/books/cam/archive/spinor_techniques_in_general_relativity.tex b/books/cam/archive/spinor_techniques_in_general_relativity.tex
new file mode 100644
index 0000000000000000000000000000000000000000..49d953b440960b894735e4004b4b8dde74a5e651
--- /dev/null
+++ b/books/cam/archive/spinor_techniques_in_general_relativity.tex
@@ -0,0 +1,636 @@
+\documentclass[a4paper]{article}
+
+\def\npart {IV}
+\def\nterm {Lent}
+\def\nyear {2018}
+\def\nlecturer {I.\ M.\ M.\ Borzym and P.\ O'Donnell}
+\def\ncourse {Spinor Techniques in General Relativity}
+\def\ncoursehead {Spinor Techniques in GR}
+
+\input{header}
+
+\newcommand\Cl{\mathrm{Cl}}
+\newcommand\omicron{o}
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+Spinor structures and techniques are an essential part of modern mathematical physics. This course provides a gentle introduction to spinor methods which are illustrated with reference to a simple 2-spinor formalism in four dimensions. Apart from their role in the description of fermions, spinors also often provide useful geometric insights and consequent algebraic simplifications of some calculations which are cumbersome in terms of spacetime tensors.
+
+The first half of the course will include an introduction to spinors illustrated by 2-spinors. Topics covered will include the conformal group on Minkowski space and a discussion of conformal compactifications, geometry of scri, other simple geometric applications of spinor techniques, zero rest mass field equations, Petrov classification, the Pl\"ucker embedding and a comparison with Euclidean spacetime. More specific references will be provided during the course and there will be worked examples and handouts provided during the lectures.
+
+The second half of the course will include: Newman--Penrose (NP) spin coefficient formalism, NP field equations, NP quantities under Lorentz transformations, Geroch--Held--Penrose (GHP) formalism, modified GHP formalism, Goldberg-Sachs theorem, Lanczos potential theory, Introduction to twistors. There will be no problem sets.
+\subsubsection*{Pre-requisites}
+The Part 3 general relativity course is a prerequisite.
+
+No prior knowledge of spinors will be assumed.
+}
+\tableofcontents
+
+\section{Spin structures}
+%Spinors are important in physics and mathematics:
+%\begin{enumerate}
+% \item In quantum field theory, spinors let us work with fermions.
+% \item Dirac equation and dirac spinors let us describe quantum states of the relativistic electrons.
+% \item They are useful in applications to symplectic geometry, gauge theory, complex algebraic geometry, K-theory, orientation theory, etc. For example, they provide a proof of the Atiyah--Singer index theorem. This is useful in conformal field theories, for example.
+%\end{enumerate}
+%It is useful because they give the universal covers of $\SO(p, q)$.
+
+Consider the special orthogonal group $\SO(n)$. We can naturally think of this as a subspace of \emph{all} endomorphisms $\R^n$, $\End(\R^n)$. Under this embedding, we can read off the Lie algebra of $\SO(n)$ as a certain linear subspace of $\End(\R^n)$. For $n > 2$, it is often useful to consider the universal cover $\Spin(n)$ of $\SO(n)$. It turns out this also naturally sits as a multiplicative subgroup of a certain algebra, called a \emph{Clifford algebra}. This lets us understand the Lie algebra of $\Spin(n)$ concretely as a certain linear subspace of the Clifford algebra.
+
+\begin{defi}[Clifford algebra]\index{Clifford algebra}
+ Let $K$ be a field, $V$ be a vector space over $K$, and $Q$ a non-degenerate quadratic form on $V$. We define the Clifford algebra $\Cl(V, Q)$ to be the quotient of the tensor algebra $\bigoplus_{i = 0}^\infty V^{\otimes i}$ by the ideal generated by the relations $x \otimes x = Q(x)$ for all $x \in V$.
+\end{defi}
+Since $V$ embeds into $\Cl(V, Q)$, we often identify $V$ with its image in $\Cl(V, Q)$ (as we already did), but we have to be more careful when we use suffix notation later on.
+
+This enjoys the following universal property:
+\begin{prop}
+ Let $A$ be an associative algebra over $K$ and suppose $j: V \to A$ is a linear map such that $j(x)^2 = Q(x)$. Then there is a unique extension of the map to $\Cl(V, Q)$.
+ \[
+ \begin{tikzcd}
+ V \ar[r, "j"] \ar[rd] & A\\
+ & \Cl(V, Q) \ar[u, dashed]
+ \end{tikzcd}
+ \]
+\end{prop}
+
+In general, the Clifford algebra is not a commutative algebra. Instead, if $Q$ comes from a bilinear form $\bra \ph, \ph \ket$ (which is always the case in characteristic not $2$), then we have the formula
+\[
+ uv + vu = Q(u + v) - Q(u) - Q(v) = 2 \bra u, v\ket.
+\]
+It is often convenient to pick a basis for our Clifford algebra. To do so, we begin with an orthonormal basis $e_1, \ldots, e_n$ of $V$. Then
+\[
+ \{e_{j(1)} \cdots e_{j(k)} : 1 \leq j(1) < \cdots < j(k) \leq n\}
+\]
+is a basis for $\Cl(V, Q)$, and satisfies $e_i e_j = - e_j e_i$ for $i \neq j$. Thus,
+\[
+ \dim \Cl(V, Q) = \sum_{k = 0}^n \binom{n}{k} = 2^n.
+\]
+Note that we can extend the quadratic form on $V$ to a quadratic form on all of $\Cl(V, Q)$, by requiring the distinct basis elements $e_{j(1)} \cdots e_{j(k)}$ to be orthogonal and setting $Q(e_{j(1)} \cdots e_{j(k)}) = Q(e_{j(1)}) \cdots Q(e_{j(k)})$. In particular, $Q(\lambda) = \lambda^2$ for scalars $\lambda$. This is in fact independent of the basis chosen.
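These relations can be checked combinatorially: to multiply two basis monomials, move each generator leftwards past larger-indexed ones (each swap contributing a sign $-1$), and contract repeated generators using $e_i^2 = Q(e_i)$. A minimal sketch of this (my own illustration, not from the notes; it assumes a diagonal form with $Q(e_i) = \pm 1$, and recovers the quaternion relations in $\Cl_{0,2}(\R)$):

```python
from itertools import combinations

def cl_mul(a, b, sig):
    """Multiply basis monomials a, b (strictly increasing index tuples)
    in Cl(V, Q), where sig[i] = Q(e_i) = +-1.  Returns (sign, monomial)."""
    coeff, out = 1, list(a)
    for j in b:
        k = len(out)
        while k > 0 and out[k - 1] > j:
            k -= 1
            coeff = -coeff              # e_i e_j = -e_j e_i for i != j
        if k > 0 and out[k - 1] == j:
            out.pop(k - 1)              # e_j e_j = Q(e_j)
            coeff *= sig[j]
        else:
            out.insert(k, j)
    return coeff, tuple(out)

sig = {1: -1, 2: -1}                    # Cl_{0,2}(R), i.e. the quaternions
basis = [c for r in range(3) for c in combinations((1, 2), r)]
assert len(basis) == 2 ** 2             # dim Cl(V, Q) = 2^n
i, j = (1,), (2,)
assert cl_mul(i, i, sig) == (-1, ())            # i^2 = -1
assert cl_mul(i, j, sig) == (1, (1, 2))         # ij = k
assert cl_mul(j, i, sig) == (-1, (1, 2))        # ji = -k
assert cl_mul((1, 2), (1, 2), sig) == (-1, ())  # k^2 = -1
```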
+
+\subsection{Real Clifford Algebras}
+Let's first try to understand real Clifford algebras, which are a bit more complicated than their complex cousins. By Sylvester's law of inertia, we may assume our quadratic form on $\R^n$ is given by
+\[
+ Q(v) = v_1^2 + \cdots + v_p^2 - v_{p + 1}^2 - \cdots - v_{p + q}^2,
+\]
+where $p + q = n$. We write the corresponding Clifford algebra as $\Cl_{p, q}(\R)$. It follows easily from definition that we have isomorphisms
+\[
+ \Cl_{0, 0}(\R) \cong \R, \quad \Cl_{0, 1}(\R) \cong \C,\quad \Cl_{0, 2}(\R) \cong \H.
+\]
+With a bit more work, one sees that
+\[
+ \Cl_{0, 3}(\R) \cong \H \oplus \H,\quad \Cl_{0, 4}(\R) \cong M_2(\H).
+\]
+In general, the following can be said about these Clifford algebras:
+\begin{itemize}
+ \item Every $\Cl_{p, q}(\R)$ is of the form $M_k(D)$ or $M_k(D) \oplus M_k(D)$, where $k$ is a positive integer (possibly $1$) and $D = \R, \C$ or $\H$.
+ \item The type of $\Cl_{p, q}(\R)$ (i.e.\ the choice of $D$ and whether it is of the first or second form) depends only on $p - q \pmod 8$, and once we know the type, we can determine $k$ using the fact that $\dim \Cl_{p, q}(\R) = 2^{p + q}$. In particular, all these Clifford algebras are semi-simple.
+ \item In fact, we have
+ \begin{center}
+ \begin{tabular}{ccccc}
+ \toprule
+ $p - q$ & $\Cl_{p, q}(\R)$ &\hphantom{asd} & $p - q$ & $\Cl_{p, q}(\R)$\\
+ \midrule
+ 0 & $M_k(\R)$ & & 4 & $M_k(\H)$ \\
+ 1 & $M_k(\R) \oplus M_k(\R)$ & & $5$ & $M_k(\H) \oplus M_k(\H)$\\
+ 2 & $M_k(\R)$ & & 6 & $M_k(\H)$\\
+ 3 & $M_k(\C)$ & & 7 & $M_k(\C)$\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+\end{itemize}
+The fact that the structure of $\Cl_{p, q}(\R)$ depends only on the value of $p - q \pmod 8$ is known as \term{Bott periodicity}.
+
+\subsection{Complex Clifford Algebras}
+In the complex case, there is no longer a distinction between signatures, and $\Cl(V, Q)$ depends only on $\dim V$. We write these as $\Cl_n(\C)$. The structure is considerably simpler in the complex case, and the ``type'' of $\Cl_n(\C)$ depends only on $n \pmod 2$. Observe that for any real vector space $V$ and quadratic form $Q$, we always have
+\[
+ \Cl(V, Q) \otimes \C = \Cl(V \otimes \C, Q \otimes \C).
+\]
+Thus, we have
+\[
+ \Cl_{p + q}(\C) = \Cl_{p, q}(\R) \otimes \C.
+\]
+From the above, we can read off
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ $n \bmod 2$ & $\Cl_{n}(\C)$ \\
+ \midrule
+ 0 & $M_k(\C)$\\
+ 1 & $M_k(\C) \oplus M_k(\C)$\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+
+%\begin{center}
+% \begin{tabular}{cccccccccc}
+% \toprule
+% $p\backslash q$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
+% \midrule
+% 0 & $\R$ & $\C$ & $\H$ & $\H^2$ & $M_2(\H)$ & $M_4(\C)$ & $M_8(\R)$ & $M_8(\R)^2$ & $M_{16}(\R)$ \\
+% 1 & $\R^2$ & $M_2(\R)$ & $M_2(\C)$ & $M_2(\H)$ & $M_2(\H)^2$ & $M_4(\H)$ & $M_8(\C)$ & $M_{16}(\R)$\\
+% 2 & $M_2(\R)$\\
+% 3 \\
+% 4 \\
+% 5 \\
+% 6 \\
+% 7 \\
+% 8 \\
+% \bottomrule
+% \end{tabular}
+%\end{center}
+%\begin{itemize}
+% \item $C_{0, 0}(\R) \cong \R$.
+% \item $C_{0, 1}(\R) \cong \C$.
+% \item $C_{0, 2}(\R) \cong \H$.
+% \item $C_{0, 3}(\R) \cong \H \oplus \H$.
+%\end{itemize}
+
+%\begin{defi}[Conformal structure]\index{conformal structure}
+% A \emph{conformal structure} on a real manifold is a smoothly varying family of light cones in the tangent spaces, such that at each point the null cone is defined as the zero set of a non-degenerate real quadratic form.
+%\end{defi}
+
+\section{The Lorentz group}
+\subsection{The universal cover of the Lorentz group}
+The group $\SO(1, 3)$ is not simply connected, which is not good. In particular, representations of the Lie algebra $\so(1, 3)$ need not lift to representations of $\SO(1, 3)$, which is undesirable. Thus, we would like to find a universal cover of $\SO(1, 3)$. Of course, it exists by general theory, but we would like an explicit identification of this universal cover.
+
+First consider the case of $\SO(3)$, which is a natural subgroup of $\SO(1, 3)$. It is ``well-known'' that its universal cover is $\SU(2)$, and this has a Clifford algebra interpretation.
+
+Recall that $C_{3, 0} \cong M_2(\C)$. This identification can be obtained explicitly by the inclusion of $\R^3$ into $M_2(\C)$ by the map
+\begin{align*}
+ x &\mapsto \tilde{x}\\
+ \begin{pmatrix}
+ x^1\\x^2\\x^3
+ \end{pmatrix}&\mapsto
+ \frac{1}{\sqrt{2}}(x^1 \sigma_1 + x^2 \sigma_2 + x^3 \sigma_3) \\
+ &= \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ x^3 & x^1 - ix^2\\
+ x^1 + i x^2 & -x^3
+ \end{pmatrix}.
+\end{align*}
+The presence of the $\frac{1}{\sqrt{2}}$ is purely by convention. This gives an isomorphism (of vector spaces) between $\R^3$ and the traceless self-adjoint matrices, and this ``remembers'' the norm in the sense that $\|x\|^2 = - 2 \det \tilde{x}$.
+
+The matrices in $\GL_2(\C)$ that preserve this subspace under conjugation are, up to scalar multiples, exactly those in $\U(2)$, and the spin group $\Spin(3)$ is $\SU(2)$, which acts on $M_2(\C)$ by conjugation. Moreover, the only elements that act trivially are $\pm I$, giving a short exact sequence
+\[
+ 0 \to \Z_2 \to \SU(2) \to \SO(3) \to 0.
+\]
+The surjectivity follows from the fact that $\SO(3)$ is connected, and the derivative of the map at the identity is non-singular.
+
+Moreover, since $\SU(2) \cong S^3$ as spaces, we know $\SU(2)$ is simply connected, and this exhibits $\SU(2)$ as a double cover.
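These facts are easy to verify numerically. The following sketch (my own illustration, not part of the notes; the helper name `tilde` is mine) checks that $\tilde{x}$ is traceless and self-adjoint, that $\|x\|^2 = -2\det \tilde{x}$, and that conjugation by an element of $\SU(2)$ preserves this norm:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def tilde(x):
    """Embed x in R^3 as a traceless self-adjoint 2x2 matrix."""
    return (x[0] * s1 + x[1] * s2 + x[2] * s3) / np.sqrt(2)

x = np.array([1.0, 2.0, 3.0])
xt = tilde(x)
assert np.allclose(xt, xt.conj().T)                    # self-adjoint
assert np.isclose(np.trace(xt), 0)                     # traceless
assert np.isclose(x @ x, -2 * np.linalg.det(xt).real)  # |x|^2 = -2 det(x~)

# conjugation by an element of SU(2) preserves the norm, so acts by a rotation
theta = 0.7
U = np.diag([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)])
yt = U @ xt @ U.conj().T
assert np.isclose(-2 * np.linalg.det(yt).real, x @ x)
```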
+
+Instead of trying to explicitly understand the Clifford algebra interpretation for $\SO(1, 3)$, it is easier to directly extend the above map to work with $\SO(1, 3)$.
+
+We define a map $\R^4 \to M_2(\C)$ by sending
+\begin{align*}
+ x &\mapsto \tilde{x}\\
+ \begin{pmatrix}
+ x^0\\x^1\\x^2\\x^3
+ \end{pmatrix}&\mapsto
+ \frac{1}{\sqrt{2}}(x^0 I + x^1 \sigma_1 + x^2 \sigma_2 + x^3 \sigma_3) \\
+ &= \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ x^0 + x^3 & x^1 - ix^2\\
+ x^1 + i x^2 & x^0-x^3
+ \end{pmatrix}.
+\end{align*}
+Thus, we now map $\R^4$ into the space of all self-adjoint matrices. As before, we have
+\[
+ \|x\|^2 = (x^0)^2 - (x^1)^2 - (x^2)^2 - (x^3)^2 = 2\det \tilde{x}.
+\]
+Now in fact conjugating by $\SL(2, \C)$\index{$\SL(2, \C)$} preserves the space of Hermitian matrices, and as before $\pm I$ are the only matrices that act trivially. So we have a short exact sequence
+\[
+ 0 \to \Z_2 \to \SL(2, \C) \to \SO(1, 3) \to 0
+\]
+that exhibits $\SL(2, \C)$ as a double cover of $\SO(1, 3)$.
+
+Since $\SL(2, \C)$ deformation retracts to $\SU(2)$ by the Gram--Schmidt process, we know that
+\begin{prop}
+ $\SL(2, \C)$ is simply connected.
+\end{prop}
+
+\begin{cor}
+ $\SL(2, \C)$ is the universal cover of $\SO(1, 3)$.
+\end{cor}
+
+It turns out the identification of $\R^4$ with the self-adjoint matrices brings us some unexpected benefits --- rotations and boosts now take a particularly simple form (whose proof is a simple verification):
+\begin{prop}\leavevmode
+ \begin{itemize}
+ \item Boosts in the 0-3 plane (with the 1-2 plane fixed) are given by conjugation by
+ \[
+ \begin{pmatrix}
+ e^{\psi/2} & 0\\
+ 0 & e^{-\psi/2}
+ \end{pmatrix} \in \SL(2, \C)
+ \]
+ \item Rotation in the 1-2 plane (with the 0-3 plane fixed) by $\varphi$ is given by conjugation by
+ \[
+ \begin{pmatrix}
+ e^{i\varphi/2} & 0\\
+ 0 & e^{-i\varphi/2}
+ \end{pmatrix} \in \SL(2, \C)
+ \]
+ \end{itemize}
+\end{prop}
+Note that in the case of rotation, while rotating by $\varphi$ and rotating by $\varphi + 2\pi$ are the same rotations, they are represented by different elements of $\SL(2, \C)$. This is, of course, reflecting the fact that $\SL(2, \C)$ is a non-trivial double cover of $\SO(1, 3)$.
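The boost formula can be checked directly: conjugating $\tilde{x}$ by $\mathrm{diag}(e^{\psi/2}, e^{-\psi/2})$ scales the diagonal entries of $\tilde x$ by $e^{\pm\psi}$, which is exactly a Lorentz boost mixing $x^0$ and $x^3$. A numerical sketch (my own illustration; the helpers `tilde`/`untilde` are mine):

```python
import numpy as np

def tilde(x):
    """Embed x in R^{1+3} as a self-adjoint 2x2 matrix."""
    return np.array([[x[0] + x[3], x[1] - 1j * x[2]],
                     [x[1] + 1j * x[2], x[0] - x[3]]]) / np.sqrt(2)

def untilde(m):
    """Recover the 4-vector from a self-adjoint 2x2 matrix."""
    r2 = np.sqrt(2)
    return np.array([(m[0, 0] + m[1, 1]).real / r2,
                     (m[0, 1] + m[1, 0]).real / r2,
                     (m[1, 0] - m[0, 1]).imag / r2,
                     (m[0, 0] - m[1, 1]).real / r2])

psi = 0.5
A = np.diag([np.exp(psi / 2), np.exp(-psi / 2)]).astype(complex)
x = np.array([1.0, 0.2, 0.3, 0.4])
y = untilde(A @ tilde(x) @ A.conj().T)
expected = np.array([np.cosh(psi) * x[0] + np.sinh(psi) * x[3],
                     x[1], x[2],
                     np.sinh(psi) * x[0] + np.cosh(psi) * x[3]])
assert np.allclose(y, expected)       # a boost mixing x^0 and x^3
```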
+
+We might ask --- where did this construction come from? It doesn't, \emph{a priori}, seem like something of Clifford algebra origin, but it turns out we can provide some Clifford algebra understanding of this.
+
+First observe $\dim_\R C_{1, 3} = 2^4$, while $\dim_\R M_{2^k}(\C) = 2^{2k + 1}$, so there is no hope of identifying $C_{1, 3}$ with a complex matrix algebra (of course, we can also look up our table of identifications above, and see that $C_{1, 3}$ is something else). However, we can look one level higher, and see that $C_{2, 3} \cong M_4(\C)$. Thus, we are led to consider the following composition
+\[
+ \R^4 \hookrightarrow \R^5 \hookrightarrow C_{2, 3} \cong M_4(\C).
+\]
+By convention, we label the basis elements of $\R^5$ as $\gamma^0, \gamma^1, \gamma^2, \gamma^3, \gamma^5$, where $\gamma^0$ and $\gamma^5$ are the elements that square to $1$. These are represented as elements in $M_4(\C)$ by
+\[
+ \gamma^0 =
+ \begin{pmatrix}
+ \mathbf{0} & \mathbf{1}\\
+ \mathbf{1} & \mathbf{0}
+ \end{pmatrix},\quad
+ \gamma^i =
+ \begin{pmatrix}
+ \mathbf{0} & \sigma^i\\
+ -\sigma^i & \mathbf{0}
+ \end{pmatrix},\quad
+ \gamma^5 =
+ \begin{pmatrix}
+ \mathbf{1} & \mathbf{0}\\
+ \mathbf{0} & -\mathbf{1}
+ \end{pmatrix}
+\]
+for $i = 1, 2, 3$. We now immediately see that our inclusion of $\R^4$ to $M_2(\C)$ is given by picking out the top right-hand corner of their image in $M_4(\C)$.
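With these conventions, $\gamma^0$ and $\gamma^5$ square to $+1$, the $\gamma^i$ square to $-1$, and distinct generators anticommute, as required for $C_{2, 3}$. A quick numerical verification (my own addition, not from the notes):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

g0 = np.block([[Z2, I2], [I2, Z2]])
gi = [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
g5 = np.block([[I2, Z2], [Z2, -I2]])

gammas = [g0] + gi + [g5]
squares = [1, -1, -1, -1, 1]          # signature (2, 3)
for a, ga in enumerate(gammas):
    assert np.allclose(ga @ ga, squares[a] * np.eye(4))
    for gb in gammas[a + 1:]:
        assert np.allclose(ga @ gb + gb @ ga, 0)  # distinct generators anticommute
```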
+
+\subsection{More about \tph{$\SL(2, \C)$}{SL(2, C)}{SL(2, &\#x2102;)}}
+Let us think a bit more about $\SL(2, \C)$. This acts naturally on the space $\C^2$, and the elements of $\SL(2, \C)$ are picked out as those that have determinant $1$. Recall that given a linear map $T: V \to V$ with $\dim V = n$, the induced map $T^{\wedge n}: \bigwedge^n V \to \bigwedge^n V$ is multiplication by a scalar, and this scalar is the determinant of $T$. In particular, $\SL(2, \C)$ consists of the matrices $T$ such that $Tx \wedge Ty = x \wedge y$ for all $x$ and $y$.
+
+It is convenient to think of $\wedge$ as a bilinear map. To do so, we \emph{pick} an identification $\bigwedge^2 \C^2 \cong \C$. Since $\wedge$ is naturally anti-symmetric and non-degenerate, this defines a \term{symplectic form} on $\C^2$, and we can thus identify $\SL(2, \C) = \Sp(2, \C)$.
+
+The bilinear form is denoted $\varepsilon$, with
+\[
+ \varepsilon_{ab} =
+ \begin{pmatrix}
+ 0 & 1\\
+ -1 & 0
+ \end{pmatrix}
+\]
+in an appropriate basis. Any basis for which $\varepsilon_{ab}$ looks like this is known as a \term{spinor basis}. Thus, $\SL(2, \C)$ is equivalently the matrices that send a spinor basis to another spinor basis.
+
+Since $\varepsilon$ is not symmetric, we have to be careful about how we order things. The matrix $\varepsilon_{ab}$ is \emph{defined} to be so that
+\[
+ \xi^a \chi^b \varepsilon_{ab} = \xi \wedge \chi.
+\]
+The bilinear form $\varepsilon$ induces an isomorphism between $\C^2$ and its dual, and we can use this to raise and lower indices. Again, we have to fix an order, and we set the dual of $\xi$ to be the map $\xi \wedge -$. Thus, we must define
+\[
+ \xi_a = \xi^b \varepsilon_{ba}.
+\]
+One checks that the condition $\varepsilon^{ab} \varepsilon_{ac} \varepsilon_{bd} = \varepsilon_{cd}$ forces
+\[
+ \varepsilon^{ab} = \varepsilon_{ab}.
+\]
+Thus, it follows that
+\[
+ \varepsilon^{ab} \varepsilon_{cb} \equiv \varepsilon_c\!^a = \delta_c\!^a.
+\]
+Therefore, we have
+\[
+ \xi^b = \varepsilon^{ba} \xi_a.
+\]
+Observe that for consistency, we must set
+\[
+ \varepsilon^a\!_c = \varepsilon_b\!^d \varepsilon^{ab} \varepsilon_{dc} = - \varepsilon_c\!^a.
+\]
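As a worked consequence of these conventions (my own addition, not in the original): lowering an index with $\varepsilon$ introduces a sign when a contraction ``see-saws'', and in particular every spinor is null against itself:

```latex
\xi^a \chi_a = \xi^a \chi^b \varepsilon_{ba} = -\xi^a \chi^b \varepsilon_{ab} = -\xi_b \chi^b,
\qquad\text{so in particular}\qquad
\xi^a \xi_a = 0.
```

This sign convention is why the order of indices on $\varepsilon$ must be tracked so carefully.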
+\subsection{Conjugate spinors}
+As a complex Lie group, $\SL(2, \C)$ acts naturally on $\C^2$, as we have previously discussed. We call this the \term{fundamental representation}. However, for us, $\SL(2, \C)$ really acts as a \emph{real} Lie group $\Spin(1, 3)$. Given this, one can consider the conjugate representation of $\SL(2, \C)$ on $\C^2$, given by
+\[
+ N \cdot \bar{\xi} = N^* \bar{\xi}.
+\]
+Here $N^*$ denotes the result of taking the complex conjugate of every entry in $N$, \emph{without} taking a transpose. Elements in the conjugate representation are not compatible with elements in the fundamental representation, and in particular we cannot contract them. Thus, it is common to use primes to label their indices, e.g.\ $\bar{\xi}^{a'}$. It is also common to write some bars on top of their names, as we have done so far.
+
+As before, we can fix an antisymmetric form $\bar{\varepsilon}^{a'b'}$, which has the same entries as $\varepsilon^{ab}$. By fixing appropriate bases, complex conjugation defines an anti-linear map between the fundamental and conjugate representations.
+
+ % obtain by complexifying C^2
+\subsection{Spinors in Minkowski spacetime}
+Let $M$ be real Minkowski spacetime. The trivial spin structure gives rise to a spinor bundle $M \times T(S)$. We can complexify Minkowski space, as well as the metric, so that
+\[
+ \d s^2 = g_{ab} \;\d z^a \;\d z^b.
+\]
+Note that this is not the usual Hermitian complexification.
+
+\section{\tph{$\SL(2, \C)$}{SL(2, C)}{SL(2, &\#x2102;)}}
+\subsection{\tph{$\SL(2, \C)$}{SL(2, C)}{SL(2, &\#x2102;)} and \texorpdfstring{$\SO(1, 3)$}{SO(1, 3)}}
+Fast-forwarding to the end of the chapter, the conclusion of the section is
+\begin{thm}\index{$\SL(2, \C)$}
+ $\SL(2, \C)$ is simply connected, and there is a non-trivial double cover $\SL(2, \C) \to \SO(1, 3)$. Hence $\Spin(1, 3) \cong \SL(2, \C)$.
+\end{thm}
+
+Establishing this theorem will involve thinking about the representations of $\SL(2, \C)$, and it will take a while before $\R^{1 + 3}$ and $\SO(1, 3)$ resurface from the discussion. The formalism developed in this identification will be very useful for later purposes.
+
+First of all, we prove that $\SL(2, \C)$ is simply connected.
+\begin{prop}
+ $\SL(2, \C)$ is simply connected.
+\end{prop}
+
+\begin{proof}
+ By the Gram--Schmidt process, $\SL(2, \C)$ deformation retracts onto $\SU(2)$, and $\SU(2) \cong S^3$.
+\end{proof}
+
+We begin with the defining representation of $\SL(2, \C)$, which we shall call $S$. Thus, $S \cong \C^2$, and $\SL(2, \C)$ acts on $S$ in the usual way, as we learnt in IA Vectors and Matrices. It is important to make the point that although everything here looks complex, we always think of $\SL(2, \C)$ as a \emph{real} Lie group. In particular, we shall happily take complex conjugates without worrying about holomorphicity. $S$ is both a complex and a real representation of $\SL(2, \C)$ (which, I shall reiterate, is thought of as a real Lie group, like $\U(1)$), and these two viewpoints will take different roles at different times.
+
+There are many representations we can produce out of $S$. The first obvious operation is taking the dual, and this gives $S^*$. The next operation is formed by taking the complex conjugate. This is given by the action of $\SL(2, \C)$ on $\C^2$ by
+\begin{align*}
+ \SL(2, \C) \times \C^2 &\to \C^2\\
+ (A, \kappa) &\mapsto A^* \kappa,
+\end{align*}
+where $A^*$ is the matrix obtained by taking the complex conjugate of every entry in $A$, \emph{without} taking a transpose. We call this representation $S'$. Finally, we can perform both of these operations, and produce $S'^*$. We shall write $T(S)$ for the (commutative) algebra freely generated by tensor products of the different versions of $S$; this is called the \term{spinor algebra}.
+
+Observe that as \emph{real} representations, $S$ is isomorphic to $S'$ --- after fixing a basis, there is a natural conjugation map $S \to S'$ given by complex conjugating each component. We usually write this as $\kappa \mapsto \bar{\kappa}$. This is not a $\C$-linear map, so does not give an isomorphism of \emph{complex} representations. We shall, from now on, fix such an isomorphism, without necessarily picking a basis. However, once we have fixed this isomorphism, picking a basis of one of $S$ and $S'$ gives a basis of the other.
+
+In index notation, we shall write elements of $S$ with upper, capital indices, e.g.\ $\kappa^A$, and elements of $S'$ with upper, capital indices followed by a prime, e.g.\ $\bar{\kappa}^{A'}$. The elements in the duals are written with lower indices.
+
+A fun fact: If we take $S$ and treat it as a real representation, then complexifying it makes it split as $S\otimes_\R \C \cong S \oplus S'$. Similarly, if we take the real Lie algebra $\sl(2, \C)$ and complexify, it splits as $\sl(2, \C) \oplus \sl(2, \C)$, and $S$ and $S'$ are the fundamental representations of each copy of $\sl(2, \C)$.
+
+To understand $\SL(2, \C)$ better, recall that $\SL(2, \C)$ is defined to be the two-by-two matrices of determinant $1$. Recall also the definition of a determinant: given a linear map $T: V \to V$, where $\dim V = n$, the induced map $T^{\wedge n}: V^{\wedge n} \to V^{\wedge n}$ is multiplication by a single complex number, and this number is the determinant of $T$. In the particular case $V = S = \C^2$, we can fix an isomorphism $S^{\wedge 2} \cong \C$. Then $\wedge: S \otimes S \to S^{\wedge 2} \cong \C$ defines a non-degenerate anti-symmetric form, i.e.\ a symplectic form. Thus, $\SL(2, \C)$ is the space of all matrices that preserve the symplectic form, i.e.\ $\SL(2, \C) \cong \Sp(2, \C)$.
+
+Since this symplectic form is quite important, we shall give it a name, $\varepsilon$, and establish the index conventions for it. Since $\varepsilon$ is not symmetric, we have to be a bit careful. We define $\varepsilon_{AB}$ to be such that
+\[
+ \kappa^A \tau^B \varepsilon_{AB} = \kappa \wedge \tau = \varepsilon(\kappa, \tau).
+\]
+The non-degeneracy of $\varepsilon$ induces an isomorphism between $S$ and $S^*$, sending $\kappa$ to $\kappa \wedge -$. In index notation, this means
+\[
+ \kappa_A = \kappa^B \varepsilon_{BA}.
+\]
+The dual form $\varepsilon^{AB}$ is required to satisfy $\varepsilon^{AB} \varepsilon_{AC} \varepsilon_{BD} = \varepsilon_{CD}$, and this forces
+\[
+ \varepsilon^{AB} = \varepsilon_{AB}
+\]
+in a symplectic/spinor basis\index{symplectic basis}\index{spinor basis}, i.e.\ a basis where
+\[
+ \varepsilon_{AB} =
+ \begin{pmatrix}
+ 0 & 1\\
+ -1 & 0
+ \end{pmatrix}.
+\]
+From this, it follows that
+\[
+ \varepsilon^{AB} \varepsilon_{CB} \equiv \varepsilon_C\!^A = \delta_C\!^A,
+\]
+and therefore we must set
+\[
+ \kappa^B = \varepsilon^{BA} \kappa_A.
+\]
+Finally, observe that for consistency, we have
+\[
+ \varepsilon^A\!_C = \varepsilon_B\!^D \varepsilon^{AB} \varepsilon_{DC} = - \varepsilon_C\!^A.
+\]
+We can similarly form $\bar{\varepsilon}_{A'B'} = \varepsilon_{AB}$, and will usually write it without the bar.
+
+Recall that we previously had a conjugation map $S \to S'$, that somehow remembers our complex structure. Usually, we can recover ``real'' things by looking at the fixed point of a conjugation map, but here it doesn't map to itself. The next best thing we can do is to consider the tensor product $S \otimes S'$. By performing complex conjugation on both entries, and then composing with a swap, we obtain a conjugation map
+\[
+ \begin{tikzcd}[row sep=tiny]
+ S \otimes S' \ar[r] & S' \otimes S \ar[r] & S \otimes S'\\
+ \kappa \otimes \bar{\tau} \ar[r, maps to] & \bar{\kappa} \otimes \tau \ar[r, maps to] & \tau \otimes \bar{\kappa}
+ \end{tikzcd}
+\]
+We can take the tensor product of $\varepsilon$ and $\bar{\varepsilon}$, which now becomes a \emph{symmetric} bilinear form $\varepsilon \otimes \bar{\varepsilon}$ from $S \otimes S'$ to $\C$, that is equivariant under conjugation. Restricting to the real parts, i.e.\ the parts fixed by conjugation, this descends to a symmetric bilinear form $g: \R^4 \otimes \R^4 \to \R$.
+
+The claim is, of course, that the signature of this bilinear form is $(1, 3)$, and thus it gives an embedding of $\R^{1 + 3}$ into $S \otimes S'$. Since the conjugation is $\SL(2, \C)$-equivariant, the $\SL(2, \C)$-action restricts to one on $\R^{1 + 3}$, and since the bilinear form is also $\SL(2, \C)$-equivariant by construction, this induces the desired map $\SL(2, \C) \to \SO(1, 3)$.
+
+To verify this claim, it suffices to write everything out and look at it. We can write an element $V^{AA'} \in S \otimes S'$ as a matrix
+\[
+ V^{AA'} =
+ \begin{pmatrix}
+ V^{00'} & V^{01'}\\
+ V^{10'} & V^{11'}
+ \end{pmatrix},
+\]
+and then our conjugation action corresponds to taking the Hermitian conjugate. Moreover, one checks that $(\varepsilon \otimes \bar{\varepsilon})(V, V) = \det V$. The elements of the ``real'' part then correspond to the Hermitian matrices, which are in general of the form
+\[
+ V^{AA'} =
+ \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ V^0 + V^3 & V^1 - i V^2\\
+ V^1 + i V^2 & V^0 - V^3
+ \end{pmatrix}.
+\]
+Identifying this with $V = (V^0, V^1, V^2, V^3) \in \R^4$, we easily read off the norm squared of $V$ as
+\[
+ \|V\|^2 = \frac{1}{2} \Big((V^0)^2 - (V^1)^2 - (V^2)^2 - (V^3)^2\Big),
+\]
+which, up to a scaling, is just the usual $(1, 3)$-norm on $\R^{1 + 3}$. Thus, we have a canonical copy of $\R^{1 + 3}$ in our spinor algebra.
+
+Under this presentation, $\SL(2, \C)$ acts by conjugation on the Hermitian matrices, and $\pm I$ are the only matrices that act trivially. Thus, we have a short exact sequence
+\[
+ 0 \to \Z_2 \to \SL(2, \C) \to \SO(1, 3) \to 0
+\]
+that exhibits $\SL(2, \C)$ as a double cover of $\SO(1, 3)$.
+
+It turns out the identification of $\R^4$ with the Hermitian matrices brings us some unexpected benefits --- rotations and boosts now take a particularly simple form (whose proof is a simple verification):
+\begin{prop}\leavevmode
+ \begin{itemize}
+ \item Boosts in the 0-3 plane (with the 1-2 plane fixed) are given by conjugation by
+ \[
+ \begin{pmatrix}
+ e^{\psi/2} & 0\\
+ 0 & e^{-\psi/2}
+ \end{pmatrix} \in \SL(2, \C)
+ \]
+ \item Rotation in the 1-2 plane (with the 0-3 plane fixed) by $\varphi$ is given by conjugation by
+ \[
+ \begin{pmatrix}
+ e^{i\varphi/2} & 0\\
+ 0 & e^{-i\varphi/2}
+ \end{pmatrix} \in \SL(2, \C)
+ \]
+ \end{itemize}
+\end{prop}
+Note that in the case of rotation, while rotating by $\varphi$ and rotating by $\varphi + 2\pi$ are the same rotations, they are represented by different elements of $\SL(2, \C)$. This is, of course, reflecting the fact that $\SL(2, \C)$ is a non-trivial double cover of $\SO(1, 3)$.
+
+\subsection{Vectors from spinors}
+So we have now identified our ``vectors'', i.e.\ elements in $\R^{1 + 3}$ as certain special spinors, namely the Hermitian elements in $S \otimes S'$. Thus, given a spinor, there is an easy way to produce a vector.
+
+To be concrete, if $\kappa^A \in S$, then $\bar{\kappa}^{A'} \in S'$. Thus, we can form
+\[
+ k^a = k^{AA'} = \kappa^A \bar{\kappa}^{A'} = \kappa \otimes \bar{\kappa}.
+\]
+We write it as $k^a$ to signify the fact that we are thinking of it as living in $\R^{1 + 3}$. Can we obtain all vectors this way? No! We can easily compute
+\[
+ k^a k_a = \kappa^A \bar{\kappa}^{A'} \kappa_A \bar{\kappa}_{A'} = 0,
+\]
+using that the symplectic form is anti-symmetric. Thus, this procedure only produces null vectors.
+
An obvious next question is: does this produce all null vectors? Let us count dimensions. The space of null vectors has real dimension $3$, while the space of all $\kappa^A$ has real dimension $4$. However, we observe that replacing $\kappa^A$ with $e^{i\theta} \kappa^A$ does not change $k^a$. So we have (at least) one dimension of degeneracy, and the answer might be yes. It turns out the answer is close to yes, but not quite.
+
+\begin{lemma}
 A vector is of the form $\kappa^A \bar{\kappa}^{A'}$ iff it is a future-pointing null vector. Moreover, such a $\kappa^A$ is unique up to a phase.
+\end{lemma}
+
+\begin{proof}
+ We write $k^a = k^{AA'}$ as a matrix
+ \[
+ k^{AA'} =
+ \begin{pmatrix}
+ \xi & \bar{\varsigma}\\
+ \varsigma & \eta
+ \end{pmatrix}.
+ \]
 Then the null condition dictates that this has determinant $0$, and the future-pointing condition requires $\xi$ and $\eta$ to be non-negative (they must have the same sign, since $\det k = 0$ forces $\xi \eta = \varsigma \bar{\varsigma} \geq 0$).
+
 From this, we see that $\kappa^A \bar{\kappa}^{A'}$ is always null and future-pointing. Conversely, if $k^{AA'}$ is null and future-pointing, then the vanishing of the determinant implies the rows are proportional to each other. We may wlog assume $\xi \neq 0$ and write $\varsigma = z \xi$. Then
+ \[
+ \eta = z \bar{\varsigma} = |z|^2 \bar{\xi} = |z|^2 \xi,
+ \]
+ using that $\xi$ is real. Thus, we can write
+ \[
+ k^{AA'} = \xi
+ \begin{pmatrix}
+ 1 & \bar{z}\\
+ z & |z|^2
+ \end{pmatrix} =
+ \begin{pmatrix}
+ \sqrt{\xi}\\ \sqrt{\xi} z
+ \end{pmatrix}
+ \begin{pmatrix}
+ \sqrt{\xi} & \sqrt{\xi}\bar{z}
+ \end{pmatrix}.
+ \]
 Finally, to show the uniqueness of $\kappa$, note that except in degenerate cases, we can always fix the first component of $\kappa$ to be positive and real, and we see that picking different options for the second component gives rise to different $k^a$'s.
+\end{proof}
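The forward direction of the lemma can also be checked symbolically: the outer product of any spinor with its conjugate has vanishing determinant (null), and its trace $|\kappa^0|^2 + |\kappa^1|^2 \geq 0$ (future-pointing). A small sympy sketch:

```python
import sympy as sp

k0, k1 = sp.symbols("kappa0 kappa1")   # components of kappa^A
kappa = sp.Matrix([k0, k1])

# k^{AA'} = kappa^A conj(kappa)^{A'} as a 2x2 Hermitian matrix
k = kappa * kappa.H

print(sp.expand(k.det()))  # 0, so the corresponding vector is null
# the trace is |kappa0|^2 + |kappa1|^2 >= 0, so it is future-pointing
```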
+
Using this, we now have a way to represent spinors as elements in ``real space'' $\R^{1 + 3}$, by mapping $\kappa^A$ to $k^a = \kappa^A \bar{\kappa}^{A'}$. By doing so, we have lost information about the phase of $\kappa^A$, and we might hope to find something that knows about this phase as well. It turns out we can't quite do that, but we can recover the phase up to a sign. This is perhaps expected, reflecting the fact that $\SL(2, \C) \to \SO(1, 3)$ is a double cover, and not an isomorphism.
+
+To describe this construction, suppose $\kappa^A$ is a non-zero spinor. Suppose $\tau^A$ is another spinor such that $(\kappa^A, \tau^A)$ is a spinor basis. This $\tau^A$ always exists, and is well-defined up to a multiple of $\kappa^A$. Consider
+\[
+ L^a = \kappa^A \bar{\tau}^{A'} + \bar{\kappa}^{A'} \tau^A.
+\]
+This is then well-defined up to a multiple of $k^a$. We call $k^a$ the \term{flag pole}, and the space of all possible $L^a$ the \term{flag plane}.
+
Observe that if we multiply $\kappa^A$ by a phase $e^{i\theta}$ (and correspondingly $\tau^A$ by $e^{-i\theta}$, to preserve the basis condition), then $\kappa^A \bar{\tau}^{A'}$ gets multiplied by $e^{2i\theta}$, while $\bar{\kappa}^{A'} \tau^A$ gets multiplied by $e^{-2i\theta}$. So $L^a$ is unchanged exactly when $e^{i\theta} = \pm 1$, i.e.\ only the sign of $\kappa^A$ is lost.
+
+\begin{lemma}
+ The pair $(k^a, L^a)$ determines $\kappa^A$ up to a sign. % prove this
+\end{lemma}
+
What do we get out of this? Observe that by picking out $\R^{1 + 3}$ as a subspace of $S \otimes S'$, it comes with a \emph{canonical} time orientation, namely that given by the requirement that future null vectors are those of the form $\kappa^A \bar{\kappa}^{A'}$.
+
+\section{Newman--Penrose formalism}
+\subsection{Spin space}
Spin space is the space spinors live in. We write $\boldsymbol\zeta, \boldsymbol\eta$ for ``spin vectors''. These are elements that belong to \emph{spin space} $S$. As a vector space, $S$ is a two-dimensional complex vector space. Further, there exists a non-degenerate anti-symmetric bilinear form $[\ph, \ph]: S \otimes S \to \C$.
+
+A \term{spin basis} is a basis $\{\boldsymbol\omicron, \boldsymbol\iota\}$ such that
+\[
+ [\boldsymbol\omicron, \boldsymbol\iota] = 1 = - [\boldsymbol\iota, \boldsymbol\omicron].
+\]
+Then we define the components of some spin vector $\boldsymbol\zeta \in S$ by
+\[
+ \boldsymbol\zeta = \zeta^0 \boldsymbol\omicron + \zeta^1 \boldsymbol\iota.
+\]
+We can compute these coefficients by
+\[
 \zeta^0 = [\boldsymbol\zeta, \boldsymbol\iota],\quad \zeta^1 = -[\boldsymbol\zeta, \boldsymbol\omicron].
+\]
+So
+\[
+ \boldsymbol\omicron = (1, 0),\quad \boldsymbol\iota = (0, 1).
+\]
+By direct calculation,
+\[
+ [\boldsymbol\zeta, \boldsymbol\eta] = \zeta^0 \eta^1 - \zeta^1 \eta^0.
+\]
We introduce the \term{Levi--Civita spinor} $\varepsilon_{AB} \in S_{AB} = (S^*)^{\otimes 2}$ defined by
\[
 [\zeta, \eta] = \varepsilon_{AB} \zeta^A \eta^B.
\]
+Then antisymmetry entails $\varepsilon_{AB} = - \varepsilon_{BA}$.
+\begin{prop}
+ We have
+ \begin{enumerate}
 \item $\varepsilon^{AB} = -\varepsilon^{BA}$.
 \item $\zeta_B = \varepsilon_{AB} \zeta^A = - \varepsilon_{BA} \zeta^A$. This is defined so that $[\zeta, \eta] = \zeta_B \eta^B$.
+ \item $\varepsilon^{AB} \varepsilon_{AC} = \delta^B_C = \varepsilon_C\!^B = - \varepsilon^B\!_C$.
+ \end{enumerate}
+\end{prop}
+In general, if we have multi-spinors $\chi^{M\ldots NA}$, then
+\[
 \chi^{M\ldots NA} = \varepsilon^{AB} \chi^{M\ldots N}\!_B = - \chi^{M\ldots N}\!_B \varepsilon^{BA} = \chi^{M\ldots NB} \varepsilon_B\!^A.
\]
+We also have
+\[
+ \varepsilon_{A[B} \varepsilon_{CD]} = 0.
+\]
+Observe that this is the same as saying
+\[
+ \varepsilon_{AB} \varepsilon_{CD} + \varepsilon_{AC} \varepsilon_{DB} + \varepsilon_{AD} \varepsilon_{BC} = 0.
+\]
+Raising $C$ and $D$ and rearranging, we get
+\[
+ \varepsilon_A\!^C \varepsilon_B\!^D - \varepsilon_B\!^C \varepsilon_A\!^D = \varepsilon_{AB} \varepsilon^{CD}.
+\]
+Transvecting (i.e.\ contracting) with $\chi_{\cdots CD \cdots}$, we get
+\[
 2\chi_{\cdots [AB] \cdots} = \chi_{\cdots AB \cdots} - \chi_{\cdots BA \cdots } = \chi_{\cdots C}\!^C\!_{\cdots}\, \varepsilon_{AB}.
+\]
In particular, if $\chi_{\cdots [AB]\cdots} = \chi_{\cdots AB \cdots}$, then we get
+\[
+ \chi_{\cdots AB \cdots} = \frac{1}{2} \chi_{\cdots C}\!^C_{\cdots} \varepsilon_{AB}.
+\]
+This is just the statement that every anti-symmetric matrix is a multiple of $\varepsilon$.
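The $\varepsilon$-identity used above involves only finitely many index values, so it admits a brute-force check. A quick Python verification (with $\varepsilon_{01} = 1$):

```python
import itertools

eps = [[0, 1], [-1, 0]]   # Levi-Civita spinor, eps_{01} = 1

# Check eps_{AB} eps_{CD} + eps_{AC} eps_{DB} + eps_{AD} eps_{BC} = 0
for A, B, C, D in itertools.product(range(2), repeat=4):
    assert eps[A][B]*eps[C][D] + eps[A][C]*eps[D][B] + eps[A][D]*eps[B][C] == 0
print("identity verified")
```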
+
Since a general $\chi$ can always be written as the sum of its symmetric and antisymmetric parts,
\[
 \chi_{\cdots AB \cdots} = \chi_{\cdots (AB) \cdots} + \chi_{\cdots [AB] \cdots},
\]
it follows that
+\begin{prop}
+ \[
+ \chi_{\cdots AB\cdots} = \chi_{\cdots (AB) \cdots} + \frac{1}{2} \chi_{\cdots C}\!^C_{\cdots} \varepsilon_{AB}.
+ \]
+\end{prop}
+Thus, in some sense, only symmetric tensors matter.
+
+\subsection{Infeld--van der Waerden symbols}
The \term{Infeld--van der Waerden symbols} are connecting quantities which allow transference between world tensors and $2$-spinors. They are written $\sigma^{\mathbf{a}}_{AB'}$, and are given by
+\[
+ \sigma^{\mathbf{0}}_{AB'} = \frac{1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 1
+ \end{pmatrix},\quad \sigma^{\mathbf{1}}_{AB'} = \frac{-1}{\sqrt{2}}
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix},\quad \sigma^{\mathbf{2}}_{AB'} = \frac{-1}{\sqrt{2}}
+ \begin{pmatrix}
+ 0 & -i\\
+ i & 0
+ \end{pmatrix},\quad \sigma^{\mathbf{3}}_{AB'} = \frac{-1}{\sqrt{2}}
+ \begin{pmatrix}
+ 1 & 0 \\
+ 0 & -1
+ \end{pmatrix}.
+\]
+These are Hermitian, so that $\overline{\sigma^{\mathbf{a}}_{AB'}} = \sigma^\mathbf{a}_{AB'}$, where by definition,
+\[
+ \overline{\sigma^{\mathbf{a}}_{AB'}} = \bar{\sigma}^{\mathbf{a}}_{A'B}.
+\]
For a general spinor to be real, we need
\[
 \bar{\chi}_{AA' \cdots}\!^{BB'\cdots} = \chi_{AA'\cdots }\!^{BB'\cdots}.
\]
+The relationship between $\sigma^{\mathbf{a}}_{AB'}$ and metric tensors is
+\[
 g_{\mathbf{a}\mathbf{b}} = \varepsilon_{AB} \varepsilon_{A'B'} \sigma_{\mathbf{a}}\!^{AA'} \sigma_{\mathbf{b}}\!^{BB'}.
+\]
Introducing inverses satisfying
\[
 \sigma_{\mathbf{a}}\!^{AA'} \sigma^{\mathbf{b}}\!_{AA'} = \delta_{\mathbf{a}}^{\mathbf{b}},\quad \sigma^{\mathbf{a}}_{AA'} \sigma_{\mathbf{a}}\!^{BB'} = \varepsilon_A\!^B \varepsilon_{A'}\!^{B'},
\]
one can show that this is equivalent to
+\[
+ g_{\mathbf{a}\mathbf{b}} \sigma^{\mathbf{a}}\!_{AA'} \sigma^\mathbf{b}\!_{BB'} = \varepsilon_{AB} \varepsilon_{A'B'}.
+\]
+\subsection{Null vectors}
+
+\printindex
+\end{document}
diff --git a/books/cam/archive/supersymmetry.tex b/books/cam/archive/supersymmetry.tex
new file mode 100644
index 0000000000000000000000000000000000000000..312c63cadce57ed31aa6d6908161441e3ed3f90b
--- /dev/null
+++ b/books/cam/archive/supersymmetry.tex
@@ -0,0 +1,1549 @@
+\documentclass[a4paper]{article}
+
+\def\npart {III}
+\def\nterm {Easter}
+\def\nyear {2018}
+\def\nlecturer {F.\ Quevedo}
+\def\ncourse {Supersymmetry}
+
+\input{header}
+
+\begin{document}
+\maketitle
+{\small
+\setlength{\parindent}{0em}
+\setlength{\parskip}{1em}
+This course provides an introduction to the use of supersymmetry in quantum field theory. Supersymmetry combines commuting and anti-commuting dynamical variables and relates fermions and bosons.
+
Firstly, a physical motivation for supersymmetry is provided. The supersymmetry algebra and representations are then introduced, followed by superfields and superspace. 4-dimensional supersymmetric Lagrangians are then discussed, along with the basics of supersymmetry breaking. The minimal supersymmetric standard model will be introduced. If time allows, supersymmetry in higher dimensions will be briefly discussed.
+
+Three examples sheets and examples classes will complement the course.
+\subsubsection*{Pre-requisites}
+It is necessary to have attended the Quantum Field Theory and the Symmetries in Particle Physics courses, or be familiar with the material covered in them.
+}
+\tableofcontents
+
+\section{Introduction}
+What do we know so far? There are two basic theories in high energy physics --- special relativity and quantum mechanics. We know that quantum field theory gives us a consistent way to encapsulate both of these basic theories. In quantum field theory, particles are represented as excitations of a field $\phi(x)$. Particles come in two types, namely bosons and fermions. These have integer and half-integer spins respectively.
+
+The basic and central example of a quantum field theory is the standard model. This has the following particles:
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ Particle & Spin\\
+ \midrule
+ Higgs & 0\\
+ Quarks and leptons & $1/2$\\
 Gauge (photons, gluons, $W^{\pm}$, $Z$) & 1\\
+ Nothing & $3/2$\\
+ Gravitons & 2\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+There are good reasons to believe that there aren't particles of spin greater than $2$. But we are still missing particles of spin $3/2$ in the standard model, which is curious.
+
+In physics in general, a basic tool to understand our theory is symmetry. There are several types of symmetries:
+\begin{itemize}
+ \item Spacetime symmetries: $X^\mu \mapsto X'^\mu = \Lambda^\mu\!_\nu X^\nu + a^\mu$.
+ \item Internal symmetries: $\phi \mapsto \phi' = \Omega \cdot \phi$ for some matrix $\Omega$. If $\Omega$ is constant, then this is said to be global. If they are spacetime dependent, they are said to be local.
+\end{itemize}
+Symmetries are useful for labeling and classifying particles, by mass, spin, charge, etc. Supersymmetry is a symmetry that relates fermions to bosons, and vice versa.
+
+Symmetries are not here for fun. Gauge symmetries determine interactions, which are important in the Standard Model.
+
Unfortunately, symmetries are not always visible, but can hide from us via spontaneous symmetry breaking. Thus, it is possible that there are a lot of symmetries around that we do not see. In fact, we have not seen any supersymmetry so far, but it is still fun to think about.
+\subsection{History of SUSY}
In the 1960s, people discovered many hadrons. They could be organized into multiplets in the eightfold way.
+
In 1967, Coleman--Mandula proved that the symmetries of the $S$-matrix must be the direct product of the Poincar\'e group and internal symmetries, and they cannot be mixed. So you can't have symmetries that mix bosons and fermions, because they act differently under the Poincar\'e group.
+
In 1971, Gelfand--Likhtman extended the Poincar\'e algebra, generated by $M_{\mu \nu}$ and $P_\mu$, to include spinor generators, and this gives supersymmetry. This is something Coleman--Mandula missed, since they assumed that none of the generators are spinors.
+
At the same time, Ramond and Neveu--Schwarz produced supersymmetry on the worldsheet of string theory, incorporating fermions in the theory.
+
+In 1973, Volkov--Akulov thought it could be that neutrinos are Goldstone particles of some broken symmetry, which is supersymmetry.
+
+In 1974, Wess--Zumino wrote down a supersymmetric field theory in four dimensions. This gives a well-defined subject for people to study. Then Salam--Strathdee came up with the notions of superfields and superspace.
+
+In 1975, Haag--Lopuszanski--Sohnius generalized Coleman--Mandula to supersymmetry.
+
+So far, this has been done without gravity.
+
In 1976, Freedman--van Nieuwenhuizen--Ferrara and Deser--Zumino came up with supergravity, with $s = \frac{3}{2}$ partners of gravitons.
+
From 1977 through the 1980s, people studied supersymmetry phenomenology. For example, this gives a solution to the hierarchy problem, since supersymmetry protects certain quantities from quantum corrections.
+
+From 1981 to 1984, Green--Schwarz developed superstring theory.
+
+% 1991, data showed with SUSY, there is unification at high energies.
+
+In 1994, Seiberg--Witten showed we can do non-perturbative $N = 2$ supersymmetry.
+
+% 1996, black hole counting
+
+In 1998, the AdS/CFT correspondence was discovered, and the conformal field theory in CFT is a supersymmetric field theory.
+\section{SUSY Algebra and Representations}
+\subsection{Poincar\'e symmetry and spinors} % fix this for html
+In special relativity, the symmetries are given by the Poincar\'e group,
+\[
+ X^\mu \mapsto X'^\mu = \Lambda^\mu\!_\nu X^\nu + a^\mu,
+\]
where $\Lambda^\mu\!_\nu$ is a Lorentz transformation and $a^\mu$ is a translation. There is an invariant metric
+\[
+ \eta_{\mu\nu} = \eta^{\mu\nu} = \diag(+1, -1, -1, -1),
+\]
with invariant interval
+\[
+ \d s^2 = \eta_{\mu\nu}\;\d x^\mu\;\d x^\nu.
+\]
Invariance of the interval means that if $\Lambda$ is a Lorentz transformation, then
+\[
+ \Lambda^T \eta \Lambda = \eta.
+\]
+This implies that $\det \Lambda = \pm 1$, and we also see that % how
+\[
+ (\Lambda^0\!_0)^2 - (\Lambda^1\!_0)^2 - (\Lambda^2\!_0)^2 - (\Lambda^3\!_0)^2 = 1.
+\]
+So we see that either $\Lambda^0\!_0 \geq 1$ or $\Lambda^0\!_0 \leq -1$.
+
+In fact, the Lorentz group has four disconnected components, corresponding to the signs of $\det \Lambda$ and $\Lambda^0\!_0$. % Lorentz group is $\Or(3, 1)$.
+
\begin{defi}[Proper orthochronous group]\index{proper orthochronous group}\index{$\SO(3, 1)^\uparrow$}
 The \emph{proper orthochronous group} $\SO(3, 1)^\uparrow$ is the subgroup of the Lorentz group consisting of matrices $\Lambda$ with $\det \Lambda = 1$ and $\Lambda^0\!_0 \geq 1$.
\end{defi}
+We have $\Or(3, 1) / \SO(3, 1)^{\uparrow} \cong \{1, [\Lambda_P], [\Lambda_T], [\Lambda_{PT}] \}$, where
+\[
+ \Lambda_P=
+ \begin{pmatrix}
+ +1 \\
+ & -1\\
+ & & -1\\
+ & & & -1
+ \end{pmatrix},\quad \Lambda_T =
+ \begin{pmatrix}
+ -1 \\
+ & +1\\
+ & & +1\\
+ & & & +1
+ \end{pmatrix},\quad
+ \Lambda_{PT} = \Lambda_P \Lambda_T = -I.
+\]
+Infinitesimally, we can write
+\[
+ \Lambda^\mu\!_\nu = \delta^\mu\!_\nu + \omega^\mu\!_\nu, \quad a^\mu = \varepsilon^\mu.
+\]
The condition that $\Lambda^T \eta \Lambda = \eta$ implies $\omega_{\mu\nu} = -\omega_{\nu\mu}$. So in total, we have $6 + 4$ parameters. So the Poincar\'e group has ten dimensions.
+
\subsubsection*{Poincar\'e algebra}
+What does the Poincar\'e algebra look like? We can describe it explicitly with generators and relations. If the Poincar\'e group acts on a Hilbert space via a representation $U = U(\Lambda, a)$, then we want to Taylor expand
+\[
+ U(1 + \varepsilon, \varepsilon) = 1 - \frac{i}{2} \omega_{\mu\nu} M^{\mu\nu} + i \varepsilon_\mu P^\mu.
+\]
+The point of the $i$'s is that if $U$ is unitary, then $M^{\mu\nu}$ and $P^\mu$ are Hermitian. These generate the Poincar\'e algebra.
+
+Since translations commute, we have
+\[
+ [P_\mu, P_\nu] = 0.
+\]
+We next want to understand $[P^\sigma, M^{\mu\nu}]$. Since $P$ is a vector, under a Lorentz transformation, it transforms as
+\[
 P^\sigma \mapsto \Lambda^\sigma\!_\rho P^\rho = (\delta^\sigma\!_\rho + \omega^\sigma\!_\rho) P^\rho = P^\sigma + \frac{1}{2} \omega_{\alpha \rho} (\eta^{\sigma \alpha} P^\rho - \eta^{\sigma \rho} P^\alpha).
+\]
+We can also think of $P^\sigma$ as an operator. Then
+\begin{align*}
+ P^\sigma \mapsto U^\dagger P^\sigma U &= \left(1 + \frac{i}{2} \omega_{\mu\nu} M^{\mu\nu}\right) P^\sigma \left(1 - \frac{i}{2} \omega_{\mu\nu} M^{\mu\nu}\right)\\
+ &= P^\sigma - \frac{i}{2} \omega_{\mu\nu} [P^\sigma M^{\mu\nu} - M^{\mu\nu} P^\sigma].
+\end{align*}
+So we deduce that
+\[
+ [P^\sigma, M^{\mu\nu}] = i (P^\mu \eta^{\nu \sigma} - P^\nu \eta^{\mu \sigma}).
+\]
+Similarly, one calculates that
+\[
+ [M^{\mu\nu}, M^{\rho \sigma}] = i (M^{\mu\sigma} \eta^{\nu \rho} + M^{\nu \rho} \eta^{\mu\sigma} - M^{\mu \rho} \eta^{\nu \sigma} - M^{\nu \sigma} \eta^{\mu \rho}).
+\]
+\subsubsection*{Properties of the Lorentz group}
+There is an isomorphism
+\[
+ \so(3, 1) \cong \su(2) \oplus \su(2).
+\]
+To see this, we define
+\[
+ J_i = \frac{1}{2} \varepsilon_{ijk} M_{jk},\quad K_i = M_{0i}.
+\]
+Observe that $J_i$ commute with $P^0$, the Hamiltonian, but $K_i$ do not. So $J_i$ are conserved quantities.
+
+These then satisfy the commutation relations
+\[
+ [J_i, J_j] = i\varepsilon_{ijk} J_k,\quad [J_i, K_j] = i \varepsilon_{ijk} K_k,\quad [K_i, K_j] = -i \varepsilon_{ijk} J_k.
+\]
+The first commutation relation is familiar, coming from the rotation group, but the latter two are less familiar. If we further define
+\[
+ A_i = \frac{1}{2} (J_i + i K_i),\quad B_i = \frac{1}{2} (J_i - i K_i),
+\]
+then we get
+\[
 [A_i, A_j] = i \varepsilon_{ijk} A_k,\quad [B_i, B_j] = i \varepsilon_{ijk} B_k,\quad [A_i, B_j] = 0.
+\]
+So in fact this Lie algebra is that of $\su(2) \oplus \su(2)$. Note that even though $J_i, K_i$ are Hermitian, $A_i$ and $B_i$ are not.
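These relations can be checked in the two-dimensional representation $J_i = \frac{1}{2} \sigma_i$, $K_i = -\frac{i}{2} \sigma_i$ (this representation appears later in the notes; the numerical check itself is ours):

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

J = [p/2 for p in pauli]        # J_i = sigma_i/2
K = [-1j*p/2 for p in pauli]    # K_i = -i sigma_i/2
A = [(J[i] + 1j*K[i])/2 for i in range(3)]
B = [(J[i] - 1j*K[i])/2 for i in range(3)]   # B_i = 0 in this representation

def comm(X, Y):
    return X @ Y - Y @ X

def lc(i, j, k):                # Levi-Civita symbol epsilon_{ijk}
    return (i - j)*(j - k)*(k - i)/2

for i in range(3):
    for j in range(3):
        assert np.allclose(comm(J[i], J[j]), sum(1j*lc(i, j, k)*J[k] for k in range(3)))
        assert np.allclose(comm(J[i], K[j]), sum(1j*lc(i, j, k)*K[k] for k in range(3)))
        assert np.allclose(comm(K[i], K[j]), sum(-1j*lc(i, j, k)*J[k] for k in range(3)))
        assert np.allclose(comm(A[i], B[j]), np.zeros((2, 2)))
print("algebra relations verified")
```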
+
Using this isomorphism, we can classify all representations of $\so(3, 1)$. We write the labels as $(j_A, j_B)$, e.g.\ $(\frac{1}{2}, 0)$ means you tensor the smallest irrep of $A$ with the trivial representation of $B$, and similarly for $(0, \frac{1}{2})$. Both of these have $j = j_A + j_B = \frac{1}{2}$.
+
+$\SO(3, 1)$ is not simply connected. Instead, $\SL(2, \C)$ is a double (universal) cover of $\SO(3, 1)$. To see this, consider a vector $X = x_\mu e^\mu = (x_0, x_1, x_2, x_3)$. We can then define a corresponding element
+\[
 \tilde{X} = x_\mu \sigma^\mu =
 \begin{pmatrix}
 x_0 + x_3 & x_1 - i x_2\\
 x_1 + i x_2 & x_0 - x_3
 \end{pmatrix},
+\]
+where $\sigma^0 = I$ and $\sigma^i$ are the Pauli matrices. Explicitly,
+\[
+ \sigma^\mu = \left\{
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 1
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0 & 1\\
+ 1 & 0
+ \end{pmatrix},
+ \begin{pmatrix}
+ 0 & -i\\
+ i & 0
+ \end{pmatrix},
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & -1
+ \end{pmatrix}
+ \right\}.
+\]
+Since the $\sigma^\mu$ are linearly independent, $\tilde{X}$ and $X$ determine each other.
+
Recall the Lorentz group is defined by the property that the matrices preserve the metric $|X|^2 = x_0^2 - x_1^2 - x_2^2 - x_3^2$. This quantity is also the determinant of $\tilde{X}$. Since the action of $\SL(2, \C)$ on $\tilde{X}$ by conjugation $\tilde{X} \mapsto N \tilde{X} N^\dagger$ preserves the determinant (and Hermiticity), by definition, there is a natural map $\SL(2, \C) \to \SO(3, 1)$ (since $\SL(2, \C)$ is connected), which we can explicitly check has kernel $\pm I$. So this is a double cover.
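The determinant-preservation argument can be illustrated numerically for a random element of $\SL(2, \C)$ (an illustrative sketch, with our own variable names):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random element of SL(2, C): rescale a random complex matrix to det = 1
A = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
N = A / np.sqrt(np.linalg.det(A))

x = rng.normal(size=4)
X = np.array([[x[0] + x[3], x[1] - 1j*x[2]],
              [x[1] + 1j*x[2], x[0] - x[3]]])

Xp = N @ X @ N.conj().T     # the action X -> N X N^dagger

# Hermiticity and the Minkowski norm (= det) are preserved
print(np.allclose(Xp, Xp.conj().T))                      # True
print(np.isclose(np.linalg.det(Xp), np.linalg.det(X)))   # True
```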
+
+We can also explicitly check that
+\[
+ \begin{pmatrix}
+ e^{i\theta/2} & 0\\
+ 0 & e^{-i\theta/2}
+ \end{pmatrix}
+\]
is a rotation by $\theta$ about the $x_3$-axis.
+
+If $N \in \SL(2, \C)$, then we can perform a \emph{polar decomposition} $N = e^H U$, where $H$ is Hermitian and $U$ is unitary. Since $H$ has three arbitrary parameters and $U$ takes values in $\SU(2) \cong S^3$, we know $\SL(2, \C) \cong \R^3 \times S^3$, and it is simply connected. % polar decomposition is unique?
+
+\subsection{Representations and invariant tensors of \tph{$\SL(2, \C)$}{SL(2, C)}{SL(2, C)}} % proper tph
+We now consider some representations of $\SL(2, \C)$.
+\begin{defi}[Fundamental representation]\index{fundamental representation}
+ The \emph{fundamental representation} of $\SL(2, \C)$ is the standard action of $\SL(2, \C)$ on $\C^2$. We write the elements as $\psi_\alpha$, which transform as
+ \[
+ \psi_\alpha \mapsto N_\alpha\!^\beta \psi_\beta
+ \]
+ for $\alpha, \beta = 1, 2$ and $N_\alpha\!^\beta \in \SL(2, \C)$. We say $\psi_\alpha$ is a \term{left-handed Weyl spinor}.
+\end{defi}
+
+\begin{defi}[Conjugate representation]\index{conjugate representation}
+ The \emph{conjugate representation} is given by the action of $\SL(2, \C)$ on $\C^2$ given by
+ \[
+ \psi \mapsto \bar{N} \psi,
+ \]
+ where $\bar{N}$ is the element-wise complex conjugate of $N$. We write this in indices as
+ \[
+ \bar{\chi}_{\dot{\alpha}} \mapsto N^*_{\dot{\alpha}}\!^{\dot{\beta}} \bar{\chi}_{\dot{\beta}}.
+ \]
+ This is a \term{right-handed Weyl spinor}.
+\end{defi}
+
+\begin{defi}[Contravariant representations]\index{contravariant representations}
+ We have
+ \[
 \psi^\alpha \mapsto \psi'^\alpha = \psi^\beta (N^{-1})_\beta\!^\alpha,\quad \bar{\chi}^{\dot{\alpha}} \mapsto \bar{\chi}'^{\dot{\alpha}} = \bar{\chi}^{\dot{\beta}} ((N^*)^{-1})_{\dot{\beta}}\!^{\dot{\alpha}}.
+ \]
+\end{defi}
+
+For $\SO(3, 1)$, we used $\eta^{\mu\nu} = (\eta_{\mu\nu})^{-1}$ to raise and lower indices. For $\SL(2, \C)$, we use
+\[
+ \varepsilon^{\alpha\beta} = \varepsilon^{\dot{\alpha}\dot{\beta}} =
+ \begin{pmatrix}
+ 0 & 1\\
+ -1 & 0
+ \end{pmatrix} = - \varepsilon_{\alpha\beta} = - \varepsilon_{\dot{\alpha}\dot{\beta}}.
+\]
+Essentially by definition of the determinant, $\varepsilon$ is invariant under $\SL(2, \C)$. This gives isomorphisms between the contravariant representations and the fundamental representations.
+
+The map $\SL(2, \C) \to \SO(3, 1)$ tells us how we can treat $\SO(3, 1)$ representations as $\SL(2, \C)$ representations. Explicitly, given some $x_\mu$, we obtain $(x_\mu \sigma^\mu)_{\alpha \dot{\alpha}}$, where $\alpha$ and $\dot{\alpha}$ are unrelated indices. These transform as
+\[
+ (x_\mu \sigma^\mu)_{\alpha \dot{\alpha}} \mapsto N_\alpha\!^\beta (x_\nu \sigma^\nu)_{\beta \dot{\gamma}} (N^*)_{\dot{\alpha}}\!^{\dot{\gamma}}.
+\]
+We can also define
+\[
 (\bar{\sigma}^\mu)^{\dot{\alpha} \alpha} \equiv \varepsilon^{\alpha \beta} \varepsilon^{\dot{\alpha} \dot{\beta}} (\sigma^\mu)_{\beta \dot{\beta}} = (\mathbf{1}, -\boldsymbol\sigma).
+\]
+Note that the bar on $\bar{\sigma}^\mu$ has got nothing to do with complex conjugation!
+\begin{ex}\leavevmode
+ \begin{align*}
+ \Tr(\sigma^\mu \bar{\sigma}^\nu) &= 2\eta^{\mu\nu}\\
+ \sigma^\mu\!_{\alpha \dot{\alpha}} (\bar{\sigma}_\mu)^{\dot{\beta} \beta} &= 2 \delta_{\alpha}^\beta \delta_{\dot{\alpha}}^{\dot{\beta}}\\
+ \sigma^\mu \bar{\sigma}^\nu + \sigma^\nu \bar{\sigma}^\mu &= 2\mathbf{1} \eta^{\mu\nu}.
+ \end{align*}
+\end{ex}
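All three identities in the exercise are finite matrix computations, so they can be verified directly. A numpy check (our own sketch):

```python
import numpy as np

I2 = np.eye(2)
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
sigma = [I2] + pauli               # sigma^mu = (1, sigma_i)
sbar = [I2] + [-p for p in pauli]  # bar sigma^mu = (1, -sigma_i)
eta = np.diag([1., -1., -1., -1.])

for mu in range(4):
    for nu in range(4):
        # Tr(sigma^mu bar sigma^nu) = 2 eta^{mu nu}
        assert np.isclose(np.trace(sigma[mu] @ sbar[nu]), 2*eta[mu, nu])
        # sigma^mu bar sigma^nu + sigma^nu bar sigma^mu = 2 eta^{mu nu} 1
        assert np.allclose(sigma[mu] @ sbar[nu] + sigma[nu] @ sbar[mu],
                           2*eta[mu, nu]*I2)

# sigma^mu_{a adot} (bar sigma_mu)^{bdot b} = 2 delta_a^b delta_adot^bdot
contr = np.einsum('mn,mab,ncd->abcd', eta, np.stack(sigma), np.stack(sbar))
target = 2*np.einsum('ad,bc->abcd', np.eye(2), np.eye(2))
print(np.allclose(contr, target))  # True
```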
+
+\subsection{Generators of \tph{$\SL(2, \C)$}{SL(2, C)}{SL(2, C)}} % fix tph
+Infinitesimally, the spinors transform as
+\begin{align*}
+ \psi_\alpha &\mapsto (e^{-\frac{i}{2} (\omega_{\mu\nu} \sigma^{\mu\nu})})_\alpha\!^\beta \psi_\beta.\\
+ \bar{\chi}^{\dot{\alpha}}&\mapsto (e^{-\frac{i}{2} \omega_{\mu\nu} \bar{\sigma}^{\mu\nu}})^{\dot{\alpha}}\!_{\dot{\beta}} \bar{\chi}^{\dot{\beta}},
+\end{align*}
+where
+\begin{align*}
(\sigma^{\mu\nu})_\alpha\!^\beta &= \frac{i}{4} (\sigma^\mu \bar{\sigma}^\nu - \sigma^\nu \bar{\sigma}^\mu)_\alpha\!^\beta,\\
 (\bar{\sigma}^{\mu\nu})^{\dot{\alpha}}\!_{\dot{\beta}} &= \frac{i}{4} (\bar{\sigma}^\mu \sigma^\nu - \bar{\sigma}^\nu \sigma^\mu)^{\dot{\alpha}}\!_{\dot{\beta}}.
+\end{align*}
+Thus, we have commutation relations
+\[
+ [\sigma^{\mu\nu}, \sigma^{\lambda\rho}] = i (\eta^{\mu\rho} \sigma^{\nu\lambda} + \eta^{\nu\lambda} \sigma^{\mu\rho} - \eta^{\mu\lambda} \sigma^{\nu\rho} - \eta^{\nu\rho} \sigma^{\mu\lambda}).
+\]
+Recall that we defined
+\[
+ J_i = \frac{1}{2} \varepsilon_{ijk} M_{jk}, \quad K_i = M_{0i}.
+\]
+Under the spinor representation, with $M^{\mu\nu} = \sigma^{\mu\nu}$, we get
+\[
+ J_i = \frac{1}{2} \sigma_i,\quad K_i = - \frac{i}{2} \sigma_i.
+\]
So $A_i = \frac{1}{2}\sigma_i$, $B_i = 0$. So the fundamental representation is a $(\frac{1}{2}, 0)$ representation. Similarly, we see that under the conjugate representation, $A_i = 0$, $B_i = \frac{1}{2}\sigma_i$. So the conjugate representation is $(0, \frac{1}{2})$.
+
+It will be convenient to note that
+\begin{align*}
+ \sigma^{\mu\nu} &= \frac{1}{2i} \varepsilon^{\mu\nu\rho\sigma} \sigma_{\rho\sigma}\\
+ \bar{\sigma}^{\mu\nu} &= -\frac{1}{2i} \varepsilon^{\mu\nu\rho\sigma} \bar{\sigma}_{\rho\sigma}.
+\end{align*}
+We say $\sigma^{\mu\nu}$ is self-dual and $\bar{\sigma}^{\mu\nu}$ is anti-self dual. These imply that $\{\sigma^{\mu\nu}\}$ and $\{\bar{\sigma}^{\mu\nu}\}$ each only span a $3$-dimensional algebra.
+
+\begin{notation}
+ We write
+ \[
+ \chi\psi \equiv \chi^\alpha \psi_\alpha = - \chi_\alpha \psi^\alpha.
+ \]
+ Similarly,
+ \[
 \bar{\chi} \bar{\psi} = \bar{\chi}_{\dot{\alpha}} \bar\psi^{\dot{\alpha}} = - \bar{\chi}^{\dot{\alpha}} \bar{\psi}_{\dot{\alpha}}.
+ \]
+\end{notation}
+In particular,
+\[
 \psi\psi = \psi^\alpha \psi_\alpha = \varepsilon^{\alpha\beta} \psi_\beta \psi_\alpha = \psi_2 \psi_1 - \psi_1 \psi_2.
+\]
+We choose $\psi_\alpha$ to be anti-commuting numbers. Then $\psi_1 \psi_2 = - \psi_2 \psi_1$, and so $\psi \psi = 2 \psi_2 \psi_1$. Then we have
+\[
+ \chi \psi = \chi^\alpha \psi_\alpha = - \chi_\alpha \psi^\alpha = + \psi^\alpha \chi_\alpha = \psi \chi.
+\]
+\begin{prop}[Fierz representations]\index{Fierz representation}
+ \begin{align*}
+ (\theta \psi) (\theta \psi) &= - \frac{1}{2} (\psi \psi) (\theta \theta) = - \frac{1}{2} (\theta \theta)(\psi \psi)\\
+ \psi \sigma^{\mu\nu} \chi &= - \chi \sigma^{\mu\nu} \psi\\
 \psi_\alpha \bar{\chi}_{\dot{\alpha}} &= \frac{1}{2} (\psi \sigma_\mu \bar{\chi}) \sigma^\mu\!_{\alpha \dot{\alpha}}\\
 \psi_\alpha \chi_\beta &= \frac{1}{2} \varepsilon_{\alpha\beta} (\psi \chi) + \frac{1}{2} (\sigma^{\mu\nu} \varepsilon^T)_{\alpha\beta} (\psi \sigma_{\mu\nu} \chi).
+ \end{align*}
+\end{prop}
+
+What is the connection to Dirac spinors? We can define
+\[
+ \gamma^\mu =
+ \begin{pmatrix}
+ 0 & \sigma^\mu\\
+ \bar{\sigma}^\mu & 0
+ \end{pmatrix},
+\]
+where each entry is a $2 \times 2$ matrix. So each $\gamma^\mu$ is a $4 \times 4$ matrix. Then $\sigma^\mu \bar{\sigma}^\nu + \sigma^\nu \bar{\sigma}^\mu = 2\mathbf{1}\eta^{\mu\nu}$ tells us
+\[
 \{\gamma^\mu, \gamma^\nu\} = 2 \eta^{\mu\nu} \mathbf{1}.
+\]
+So $\{\gamma^\mu\}$ form a Clifford algebra representation.
+
+We can also define
+\[
 \gamma^5 = i \gamma^0 \gamma^1 \gamma^2 \gamma^3 =
+ \begin{pmatrix}
+ -\mathbf{1} & 0\\
+ 0 & \mathbf{1}
+ \end{pmatrix}.
+\]
The eigenvalues $\pm 1$ of $\gamma^5$ give the chirality.
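Both the Clifford relation and the block-diagonal form of $\gamma^5$ can be verified numerically from the definitions above. Note that the overall sign of $\gamma^5$ differs between texts; with the block matrices used here, the displayed diagonal form corresponds to the prefactor $+i$. A numpy sketch:

```python
import numpy as np

I2 = np.eye(2)
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
sigma = [I2] + pauli               # sigma^mu
sbar = [I2] + [-p for p in pauli]  # bar sigma^mu

def gamma(mu):
    # 4x4 gamma matrix in the Weyl (chiral) basis
    g = np.zeros((4, 4), dtype=complex)
    g[:2, 2:] = sigma[mu]
    g[2:, :2] = sbar[mu]
    return g

eta = np.diag([1., -1., -1., -1.])
for mu in range(4):
    for nu in range(4):
        anti = gamma(mu) @ gamma(nu) + gamma(nu) @ gamma(mu)
        assert np.allclose(anti, 2*eta[mu, nu]*np.eye(4))   # Clifford relation

g5 = 1j * gamma(0) @ gamma(1) @ gamma(2) @ gamma(3)
print(np.allclose(g5, np.diag([-1, -1, 1, 1])))  # True
```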
+
\emph{Dirac spinors}\index{Dirac spinor} are things of the form
+\[
+ \Psi_D = \begin{pmatrix}\psi_\alpha\\\bar{\chi}^{\dot{\alpha}}\end{pmatrix}.
+\]
+Then we have
+\[
 \gamma^5 \Psi_D = \begin{pmatrix}-\psi_\alpha\\\bar{\chi}^{\dot{\alpha}}\end{pmatrix}.
+\]
+There are projections $P_L = \frac{1}{2} (1 - \gamma_5)$ and $P_R = \frac{1}{2}(1 + \gamma_5)$, which give
+\[
+ P_L \Psi_D = \begin{pmatrix}\psi_\alpha\\ 0\end{pmatrix},\quad
+ P_R \Psi_D = \begin{pmatrix}0\\\bar{\chi}^{\dot{\alpha}}\end{pmatrix}.
+\]
+Spinors of this form are also said to be \emph{Weyl}\index{Weyl spinor}.
+
+Also, given a Dirac spinor, we can define the conjugate
+\[
+ \bar{\Psi}_D =
+ \begin{pmatrix}
+ \chi^\alpha & \bar{\psi}_{\dot{\alpha}}
 \end{pmatrix} = \Psi_D^\dagger \gamma^0,\quad \Psi_D^c =
+ \begin{pmatrix}
+ \chi_\alpha \\ \bar{\psi}^{\dot{\alpha}}
 \end{pmatrix} = C \bar{\Psi}_D^T,
+\]
+where
+\[
+ C =
+ \begin{pmatrix}
+ \varepsilon_{\alpha\beta} & 0\\
+ 0 & \varepsilon^{\dot{\alpha} \dot{\beta}}
+ \end{pmatrix}
+\]
+is the \term{charge conjugate} matrix.
+
+A \term{Majorana spinor} is a Dirac spinor with $\psi_\alpha = \chi_\alpha$. So
+\[
 \Psi_M =
+ \begin{pmatrix}
+ \psi_\alpha\\ \bar{\psi}^{\dot{\alpha}}
+ \end{pmatrix}.
+\]
+\subsection{The Supersymmetry Algebra}
+A graded algebra is an algebra $A$ with a decomposition $A = A^0 \oplus A^1$. If $\mathcal{O}_a \in A^{\eta_a}$ etc., then we define % degree 0 = bosonic, degree 1 = fermionic.
+\[
+ [\mathcal{O}_a, \mathcal{O}_b] = \mathcal{O}_a \mathcal{O}_b - (-1)^{\eta_a \eta_b} \mathcal{O}_b \mathcal{O}_a. % we consider \{ \}
+\]
+If it has generators $\{\mathcal{O}_a\}$ each of a definite degree, then we write
+\[
+ [\mathcal{O}_a, \mathcal{O}_b] = i C^e_{ab} \mathcal{O}_e.
+\]
+In the supersymmetric extension of the Poincar\'e group, we have bosonic generators $P^\mu, M^{\mu\nu}$. We also introduce fermionic operators $Q_\alpha^A$ and $\bar{Q}_{\dot{\alpha}}^B$, where $\alpha, \dot{\alpha}$ are spinor indices, and $A, B = 1, 2, \ldots, \mathcal{N}$ are generic labels.
+
When $\mathcal{N} = 1$, we say we have \emph{simple supersymmetry}, and for $\mathcal{N} > 1$, we have \emph{extended supersymmetry}.
+
We focus on the case $\mathcal{N} = 1$. To determine the actual algebra structure, we need to know the commutators
+\[
+ [Q_\alpha, M^{\mu\nu}], [Q_\alpha, P^\mu], \{Q_\alpha, Q_\beta\}, [Q_\alpha, \bar{Q}_{\dot{\beta}}].
+\]
If we have internal symmetries $T_i$, then we also want to know $[Q_\alpha, T_i]$. Note that if we impose that $Q_\alpha$ and $\bar{Q}_{\dot{\alpha}}$ are ``conjugate'', then we get the remaining supersymmetry relations.
+
+To determine the first, we use that $Q_\alpha$ is a spinor. So we want
+\[
 Q_\alpha' = (e^{-\frac{i}{2} \omega_{\mu\nu} \sigma^{\mu\nu}})_\alpha\!^\beta Q_\beta = \left(1 - \frac{i}{2} \omega_{\mu\nu} \sigma^{\mu\nu}\right)_\alpha\!^\beta Q_\beta.
+\]
+As an operator, it transforms as
+\[
 Q_\alpha' = U^\dagger Q_\alpha U = \left(1 + \frac{i}{2} \omega_{\mu\nu} M^{\mu\nu}\right) Q_\alpha \left(1 - \frac{i}{2} \omega_{\mu\nu} M^{\mu\nu}\right).
+\]
+Comparing both expressions, we get
+\[
+ [Q_\alpha, M^{\mu\nu}] = (\sigma^{\mu\nu})_\alpha\!^\beta Q_\beta.
+\]
+For the second, we see that the only possible combination is
+\[
+ [Q_\alpha, P^\mu] = c \sigma^\mu\!_{\alpha \dot{\alpha}} \bar{Q}^{\dot{\alpha}}.
+\]
+We have to determine $c$. Note that using $(\sigma \bar{Q})^\dagger = (Q\sigma)$, this implies
+\[
+ [\bar{Q}^{\dot{\alpha}}, P^\mu] = c^* (\bar{\sigma}^\mu)^{\dot{\alpha} \beta} Q_\beta.
+\]
+This does not follow immediately by ``complex conjugation'', because $\bar{\sigma}$ is not the complex conjugate of $\sigma$.
+
+We now pull out the Jacobi identity
+\[
+ [A, [B, C]] + [C, [A, B]] + [B, [C, A]] = 0.
+\]
with $A = P^\mu$, $B = P^\nu$ and $C = Q_\alpha$. We then expand to get
+\[
+ |c|^2 (\sigma^\nu \bar{\sigma}^\mu - \sigma^\mu \bar{\sigma}^\nu) Q_\beta = 0.
+\]
+So we must have $c = 0$. So $[Q_\alpha, P^\mu] = 0$.
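In slightly more detail (a sketch of the expansion, which is not spelt out in the lectures): iterating the assumed commutator and its conjugate gives, schematically,
\[
 [[Q_\alpha, P^\nu], P^\mu] = c\, \sigma^\nu_{\alpha \dot{\alpha}} [\bar{Q}^{\dot{\alpha}}, P^\mu] = |c|^2 (\sigma^\nu \bar{\sigma}^\mu)_\alpha\!^\beta Q_\beta,
\]
and the Jacobi identity with $[P^\mu, P^\nu] = 0$ antisymmetrizes this in $\mu \leftrightarrow \nu$. Since $\sigma^\nu \bar{\sigma}^\mu - \sigma^\mu \bar{\sigma}^\nu$ is proportional to $\sigma^{\nu\mu} \not= 0$, we are forced to take $|c|^2 = 0$.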
+
+For the third, we again argue that we must have
+\[
+ \{Q_\alpha, Q^\beta\} = k (\sigma^{\mu\nu})_\alpha\!^\beta M_{\mu\nu}. % (1/2, 0) + (1/2, 0) = (0, 0) + (1, 0). No (0, 0) possibility.
+\]
+We have to find $k$. Observe that the left-hand side commutes with $P^\mu$. So the right-hand side must commute as well. So $k = 0$.
+
+This is very particular to $\mathcal{N} = 1$. For larger $\mathcal{N}$, we can have scalar quantities.
+
+For the fourth, we have
+\[
+ \{Q_\alpha, \bar{Q}_{\dot{\alpha}}\} = t \sigma^\mu_{\alpha \dot{\alpha}} P_\mu.
+\]
+It turns out there is no way to fix $t$, and the choice does not affect the physics (as long as it is non-zero, since we can always scale $Q$). The convention is that $t = 2$. So we get
+\[
+ \{Q_\alpha, \bar{Q}_{\dot{\alpha}}\} = 2 \sigma^\mu\!_{\alpha \dot{\alpha}} P_\mu.
+\]
This says supersymmetry is a spacetime symmetry, because the action of two supersymmetry operators gives us a translation.
+
+Finally, internal symmetry generators, in general, commute with all spacetime symmetries, i.e.
+\[
+ [T_i, M^{\mu\nu}] = [T_i, P^\mu] = [T_i, Q_\alpha] = 0.
+\]
There is one exception, known as \term{$R$-symmetry}. This is a $\U(1)$ automorphism of the supersymmetry algebra
+\begin{align*}
+ Q_\alpha &\mapsto e^{i\gamma} Q_\alpha\\
+ \bar{Q}_{\dot\alpha} &\mapsto e^{-i\gamma} \bar{Q}_{\dot\alpha}.
+\end{align*}
+This has a $\U(1)$ generator $R$ with
+\begin{align*}
+ [Q_\alpha, R] &= Q_\alpha\\
 [\bar{Q}_{\dot\alpha}, R] &= -\bar{Q}_{\dot\alpha}.
+\end{align*}
+This says $Q_\alpha$ and $\bar{Q}_{\dot{\alpha}}$ have charge $+1$ and $-1$ respectively.
+
\subsection{Representations of the \texorpdfstring{Poincar\'e}{Poincare} group}
+We first recall the representations of $\so(3)$. Recall that this has generators $J_i$ for $i = 1, 2, 3$, with commutation relation
+\[
+ [J_i, J_j] = i \varepsilon_{ijk} J_k.
+\]
+To understand the representations, it is convenient to consider the \term{Casimir operator}
+\[
+ J^2 = J_1^2 + J_2^2 + J_3^2.
+\]
+This operator satisfies
+\[
+ [J^2, J_i] = 0.
+\]
We can pick our favorite value of $i$, say $i = 3$, and then label our states by the eigenvalues of $J^2$ and $J_3$. We write such a state as $\bket{j, j_3}$, where $J^2$ has eigenvalue $j(j + 1)$ and $j_3 = -j, \ldots, j$.
+
+In the Poincar\'e algebra, we have generators $M^{\mu\nu}$ and $P^\mu$. To proceed, we first find some Casimir operators, $C_1$ and $C_2$. We define them as
+\[
 C_1 = P^\mu P_\mu,\quad C_2 = W^\mu W_\mu,\quad W_\mu = \frac{1}{2} \varepsilon_{\mu\nu\rho\sigma} P^\nu M^{\rho\sigma}.
+\]
+This mysterious object $W_\mu$ is called the \term{Pauli--Lubanski vector}. Note that this is not an element of the algebra itself. It is a straightforward (but tedious) exercise to check that these indeed commute with all the generators.
+
Each representation (multiplet) can thus be labelled by two numbers $m, w$, the eigenvalues of $C_1, C_2$. Just as we had $j_3$ above, we will later find further labels distinguishing states within a multiplet.
+
+One can check that
+\[
+ [W_\mu, P_\nu] = 0,\quad [W_\mu, W_\nu] = i \varepsilon_{\mu\nu\rho\sigma} W^\rho P^\sigma.
+\]
+As before, this is not a statement in the Lie algebra. Formally, we are working in the universal enveloping algebra.
+
To find more labels, we use that $[P^\mu, P^\nu] = 0$. So the $P^\mu$'s can be simultaneously diagonalized. So we have a new label $p^\mu$, and we can label our states as $\bket{m, w; p^\mu}$.
+
We can find more labels. To make these easier to see, we fix a value of $p^\mu$ and find the generators that leave it invariant. To be explicit, we change frame so that
+\[
+ p^\mu = (m, 0, 0, 0)
+\]
if $C_1 = P^\mu P_\mu = m^2 \not= 0$. Once we are in this frame, we see very clearly that $\so(3)$ leaves $p^\mu$ invariant. This $\so(3)$ is called the \term{little group}. We can then label our states as
+\[
 \bket{m, j; p^\mu, j_3}. % j = w?
+\]
+If $C_1 = 0$, then we pick a frame where
+\[
+ p^\mu = (E, 0, 0, E).
+\]
This corresponds to a massless particle. The little group is now more complicated. Naively, we at least have the $\so(2)$ of rotations in the plane transverse to the momentum. Observe that we have
+\[
+ (W_0, W_1, W_2, W_3) = E(J_3, -J_1 + K_2, -J_2 - K_1, -J_3).
+\]
+Then we find that
+\[
+ [W_1, W_2] = 0,\quad [W_3, W_1] = - i E W_2,\quad [W_3, W_2] = i E W_1.
+\]
This is the algebra of the \term{Euclidean group} in two dimensions, with $W_1, W_2$ playing the role of translations and $W_3$ of the rotation. % this has infinite-dimensional representations (irreducible?)
+
+When Wigner discovered this, he wasn't happy about the infinite-dimensional representations. If we require $W_1 = W_2 = 0$, then we are left with just the rotation part, and this has finite-dimensional representations. In this case, we find that $C_2 = 0$ too.
+
In this case, the only labels we are left with are $p_\mu$ and $\lambda$, the eigenvalue of $J_3$. The labels are $\bket{0, 0; p_\mu, \lambda} = \bket{p_\mu, \lambda}$.
+
Since a rotation by $2\pi$ can change a state by at most a sign,
\[
 e^{2\pi i \lambda} \bket{p_\mu, \lambda} = \pm \bket{p_\mu, \lambda},
\]
we know that $\lambda \in \frac{1}{2} \Z$. This $\lambda$ is the helicity.
+
+\subsection{Representations of \texorpdfstring{$\mathcal{N} = 1$}{N = 1} SUSY}
+In the $\mathcal{N} = 1$ SUSY algebra, we still have a Casimir
+\[
+ C_1 = P^\mu P_\mu.
+\]
+However, the old $C_2$ does not commute with the $Q_\alpha$. So it is no longer a Casimir. However, we can define
+\begin{align*}
 B_\mu &= W_\mu - \frac{1}{4} \bar{Q}_{\dot{\alpha}} (\bar{\sigma}_\mu)^{\dot{\alpha} \beta} Q_\beta\\
 C_{\mu\nu} &= B_\mu P_\nu - B_\nu P_\mu\\
 \tilde{C}_2 &= C_{\mu\nu} C^{\mu\nu}.
+\end{align*}
+Then $\tilde{C}_2$ is a Casimir operator.
+
+\begin{prop}
+ In any supersymmetric multiplet, the number of fermions is equal to the number of bosons. In short,
+ \[
+ n_F = n_B.
+ \]
+\end{prop}
+
+\begin{proof}
+ Consider the operator $(-1)^F$ defined by
+ \[
+ (-1)^F \bket{B} = \bket{B},\quad (-1)^F \bket{F} = -\bket{F}.
+ \]
+ Then we have
+ \[
+ \Tr (-1)^F = n_B - n_F.
+ \]
+ To compute the trace, we first observe that
+ \[
+ (-1)^F Q_\alpha \bket{F} = Q_\alpha \bket{F} = - Q_\alpha(-1)^F \bket{F},
+ \]
+ and similarly for bosons. So we find that
+ \[
+ \{(-1)^F, Q_\alpha\} = 0.
+ \]
 Then in an irreducible representation, using cyclicity of the trace on the second term, we have
+ \begin{align*}
  \Tr \left[(-1)^F \{Q_\alpha, \bar{Q}_{\dot{\beta}}\}\right] &= \Tr \left[(-1)^F Q_\alpha \bar{Q}_{\dot{\beta}} + (-1)^F \bar{Q}_{\dot{\beta}} Q_\alpha\right]\\
  &= \Tr\left[-Q_\alpha(-1)^F \bar{Q}_{\dot{\beta}} + Q_\alpha (-1)^F \bar{Q}_{\dot{\beta}}\right]\\
  &= 0.
+ \end{align*}
+ On the other hand, we know
+ \[
+ \{Q_\alpha,\bar{Q}_{\dot{\beta}}\} = 2 \sigma^\mu_{\alpha \dot{\beta}} P_\mu.
+ \]
+ So we compute that
+ \[
+ \Tr \left[(-1)^F \{Q_\alpha, \bar{Q}_{\dot{\beta}}\}\right] = 2 \sigma^\mu\!_{\alpha\dot{\beta}} \Tr (-1)^F.
+ \]
+ So we must have $\Tr (-1)^F = 0$.
+\end{proof}
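For instance, in a multiplet built from a Clifford vacuum $\bket{\Omega}$ with a single pair of operators $a, a^\dagger$ (as we will construct below), the states are $\bket{\Omega}$ and $a^\dagger \bket{\Omega}$, which have opposite statistics, so indeed
\[
 \Tr (-1)^F = 1 - 1 = 0.
\]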
+
+\subsection{Massless supermultiplet}
Consider the case $p^\mu = (E, 0, 0, E)$. We then have $C_1 = \tilde{C}_2 = 0$. Restricting to such states, we have
+\[
+ \{Q_\alpha, \bar{Q}_{\dot{\beta}}\} = 2 \sigma^\mu_{\alpha \dot{\beta}} P_\mu = 2 E (\sigma^0 + \sigma^3) = 4E
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 0
+ \end{pmatrix}_{\alpha \dot{\beta}}.
+\]
+This implies $Q_2 = 0$, and
+\[
+ \{Q_1, \bar{Q}_1\} = 4E.
+\]
+We now define
+\[
+ a = \frac{Q_1}{\sqrt{4E}},\quad a^\dagger = \frac{\bar{Q}_1}{\sqrt{4E}}.
+\]
We then have
+\[
+ \{a, a^\dagger\} = 1,\quad \{a, a\} = \{a^\dagger, a^\dagger\} = 0.
+\]
+Also, we have
+\[
 [a, J^3] = \frac{1}{2} (\sigma^3)_{11}\, a = \frac{1}{2}a.
+\]
+So we find that
+\[
 J^3a \bket{p^\mu, \lambda} = (aJ^3 - [a, J^3])\bket{p^\mu, \lambda} = \left(aJ^3 - \frac{1}{2} a\right) \bket{p^\mu, \lambda} = \left(\lambda - \frac{1}{2}\right) a \bket{p^\mu, \lambda}.
+\]
So $a \bket{p^\mu, \lambda}$ is a state with helicity $\lambda - \frac{1}{2}$. Similarly, $a^\dagger \bket{p^\mu, \lambda}$ has helicity $\lambda + \frac{1}{2}$.
+
Thus, to build a representation, we start with a state $\bket{\Omega} = \bket{p^\mu, \lambda}$ such that $a \bket{\Omega} = 0$. The next state is then $a^\dagger \bket{\Omega} = \bket{p^\mu, \lambda + \frac{1}{2}}$. Since $(a^\dagger)^2 = 0$, we stop. % we try to build a finite-dimensional representation, or helicity is bounded below, this is dodgy
+
+Thus, $\mathcal{N} = 1$ SUSY multiplets are two-dimensional, of the form % how about other momenta
+\[
 \left\{\bket{p^\mu, \lambda}, \bket{p^\mu, \lambda + \frac{1}{2}}\right\}.
+\]
+For $\lambda = 0$, we have the following pairings:
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ $\bket{p^{\mu}, 0}$ & $\bket{p^\mu, \frac{1}{2}}$\\
+ \midrule
+ Higgs & Higgsino\\
+ squark & quark\\
+ slepton & lepton\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+These are known as chiral multiplets.
+
+For $\lambda = \frac{1}{2}$, we have
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ $\bket{p^{\mu}, \pm \frac{1}{2}}$ & $\bket{p^\mu, \pm 1}$\\
+ \midrule
+ photino & photon\\
+ wino & W\\
+ zino & Z\\
+ gluino & gluon\\ % capitalization
+ \bottomrule
+ \end{tabular}
+\end{center}
+These are known as gauge multiplets. Finally, for $\lambda = \frac{3}{2}$, we have
+\begin{center}
+ \begin{tabular}{cc}
+ \toprule
+ $\bket{p^{\mu}, \pm \frac{3}{2}}$ & $\bket{p^\mu, \pm 2}$\\
+ \midrule
+ gravitino & graviton\\
+ \bottomrule
+ \end{tabular}
+\end{center}
+This is the graviton multiplet.
+
+\subsubsection*{Massive $\mathcal{N}=1$ multiplets}
+Here we have
+\[
+ p^\mu = (m, 0, 0, 0),\quad C_1 = m^2,\quad \tilde{C}_2 = 2m^4 Y^i Y_i,
+\]
+where
+\[
 Y_i = J_i - \frac{1}{4m} (\bar{Q} \bar{\sigma}_i Q) = \frac{B_i}{m}.
+\]
+The eigenvalue of $Y$ is known as the \term{superspin}.
+
+Observe that
+\[
+ [Y_i, Y_j] = i \varepsilon_{ijk} Y_k.
+\]
+The multiplet labels are $\bket{m, y}$, where $m$ comes from the eigenvalue of $C_1$ and $y$ is the eigenvalue of $Y^2$.
+
+In this case, we have
+\[
+ \{Q_\alpha, \bar{Q}_{\dot{\beta}}\} = 2m \sigma^0_{\alpha \dot{\beta}} = 2m
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 1
+ \end{pmatrix}_{\alpha \dot{\beta}}.
+\]
+This time $Q_1, Q_2$ are non-zero, and so we have two sets of creation and annihilation operators. We set
+\[
+ a_{1, 2} = \frac{Q_{1, 2}}{\sqrt{2m}}.
+\]
+Then we have
+\[
+ \{a_p, a_q^\dagger\} = \delta_{pq},\quad \{a_p, a_q\} = \{a_p^\dagger, a_q^\dagger\} = 0.
+\]
+If we start with a vacuum $\bket{\Omega}$, then
+\[
+ a_1 \bket{\Omega} = a_2 \bket{\Omega} = 0.
+\]
+In this case, we have
+\[
+ Y_i \bket{\Omega} = J_i \bket{\Omega}.
+\]
+So we have $y = j$. Then we have
+\[
+ \bket{\Omega} = \bket{m, j = y, p^\mu ,j_3}.
+\]
+For other states in the multiplet, $j$ will not be equal to $y$.
+
+For example, for $y = 0$, we have
\begin{align*}
 \bket{\Omega} &= \bket{m, j = 0, p^\mu, j_3 = 0}\\
 a^\dagger_{1, 2} \bket{\Omega} &= \bket{m, j = \frac{1}{2}, p^\mu, j_3 = \pm \frac{1}{2}}\\
 a_1^\dagger a_2^\dagger \bket{\Omega} &= \bket{m, j = 0, p^\mu, j_3 = 0}.
\end{align*}
This forms an $\mathcal{N} = 1$, $y = 0$ multiplet. Also, observe that $\bket{\Omega'} = a_1^\dagger a_2^\dagger \bket{\Omega}$ has the same labels as $\bket{\Omega}$, but they are not the same element. Indeed, the latter is annihilated by $a_1$, but the former is not.
+
We also see that $\bket{\Omega}$ transforms in the $(1/2, 0)$ representation, while $\bket{\Omega'}$ transforms in the $(0, 1/2)$ representation. So they are related by parity. % why?
+
+If $y \not= 0$, then we have a vacuum $\bket{\Omega}$ of spin $j$. Then $a_p^\dagger \bket{\Omega}$ transforms as $\frac{1}{2} \otimes j$, which decomposes as two irreducible representations.
+\begin{align*}
+ a_1^\dagger \bket{\Omega} &= k_1 \bket{m, j = y + 1/2, p^\mu, j_3 + 1/2} + k_2 \bket{m, j= y - 1/2, p^\mu, j_3 + 1/2}\\
+ a_2^\dagger \bket{\Omega} &= k_3 \bket{m, j = y + 1/2, p^\mu, j_3 - 1/2} + k_4 \bket{m, j = y-1/2, p^\mu, j_3 - 1/2}\\
 a_2^\dagger a_1^\dagger \bket{\Omega} &= - a_1^\dagger a_2^\dagger \bket{\Omega} = \bket{\Omega'}.
+\end{align*}
At the end, we have two $\bket{m, j = y, p^\mu, j_3}$ states and one each of $\bket{m, j = y \pm 1/2, p^\mu, j_3}$. There are
+\[
+ 2(2y + 1) + (2y + 2) + (2y)
+\]
+many states. So we see that
+\[
+ n_B = n_F = 4y + 2.
+\]
+\subsection{Extended supersymmetry}
+We now consider supersymmetry with $\mathcal{N} > 1$. The supersymmetry operators satisfy (anti-)commutation relations
+\begin{align*}
 \{Q_\alpha^A, \bar{Q}_{\dot{\beta}, B}\} &= 2 \sigma^\mu_{\alpha\dot{\beta}} P_\mu \delta^A\!_B\\
 \{Q_\alpha^A, Q_\beta\!^B\} &= \varepsilon_{\alpha\beta} Z^{AB},
+\end{align*}
where the $Z^{AB}$ are scalar operators called \emph{central charges}, which commute with $Q^A_\alpha, M^{\mu\nu}, P^\mu, Z^{CD}$, etc. Here $Z^{AB}$ has to be anti-symmetric, which is why it had to vanish for $\mathcal{N} = 1$. One can check that this gives a genuine super Lie algebra.
+
+Recall that we had $R$-symmetries. Here we have to be careful to make it preserve our second anti-commutation relations. If all $Z^{AB}$ are zero, then the inner automorphisms are of the form
+\begin{align*}
+ Q_\alpha^A &\mapsto U^A\!_B Q_\alpha^B\\
 \bar{Q}_{\dot{\alpha}}^A &\mapsto (U^\dagger)^A\!_B \bar{Q}_{\dot{\alpha}}^B.
+\end{align*}
So this has a $\U(\mathcal{N})$ symmetry. However, if $Z^{AB} \not= 0$, we have to inspect the symmetries manually.
+
+If we have a massless representation, we get
+\[
 \{Q_\alpha^A, \bar{Q}_{\dot{\beta}, B}\} = 4E \delta^A\!_B
 \begin{pmatrix}
  1 & 0\\
  0 & 0
 \end{pmatrix}_{\alpha\dot{\beta}}.
+\]
+So we must have $Q_2^A = 0$ and hence $Z^{AB} = 0$.
+
+As before, we can define
+\[
 a^A = \frac{Q_1^A}{2\sqrt{E}},\quad a_A^\dagger = \frac{\bar{Q}_{\dot{1} A}}{2 \sqrt{E}},\quad\{a^A, a_B^\dagger\} = \delta^A\!_B.
+\]
+We can tabulate the types and number of states:
\begin{center}
 \begin{tabular}{ccc}
  \toprule
  States & Helicity & Number of states\\
  \midrule
  $\bket{\Omega}$ & $\lambda_0$ & $1 = \binom{\mathcal{N}}{0}$\\
  $a^{A\dagger} \bket{\Omega}$ & $\lambda_0 + \frac{1}{2}$ & $\mathcal{N} = \binom{\mathcal{N}}{1}$\\
  $a^{A\dagger} a^{B\dagger} \bket{\Omega}$ & $\lambda_0 + 1$ & $\binom{\mathcal{N}}{2}$\\
  $\vdots$ & $\vdots$ & $\vdots$ \\
  $a^{\mathcal{N}\dagger} a^{(\mathcal{N} - 1)\dagger} \cdots a^{1\dagger} \bket{\Omega}$ & $\lambda_0 + \frac{\mathcal{N}}{2}$ & $1 = \binom{\mathcal{N}}{\mathcal{N}}$\\
  \bottomrule
 \end{tabular}
\end{center}
We see that there are $2^{\mathcal{N}}$ states in total.
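The counting is just the binomial theorem: each of the $\mathcal{N}$ distinct $a^{A\dagger}$'s is either applied or not, and
\[
 \sum_{k = 0}^{\mathcal{N}} \binom{\mathcal{N}}{k} = 2^{\mathcal{N}},
\]
with the even-$k$ and odd-$k$ terms (bosons and fermions, or vice versa) each summing to $2^{\mathcal{N} - 1}$, consistent with $n_B = n_F$.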
+
+\begin{eg}
+ If $\mathcal{N} = 2$, we have a \term{vector multiplet} with $\lambda_0 = 0$. This contains two $\mathcal{N} = 1$ chiral multiplets and two $\mathcal{N} = 1$ vector multiplets.
+\end{eg}
+
+\begin{eg}
+ If $\mathcal{N} = 2$ again, we have a \term{hypermultiplet} with $\lambda_0 = -\frac{1}{2}$.
+\end{eg}
+
+In general, in every multiplet,
+\[
+ \lambda_{\mathrm{max}} - \lambda_{\mathrm{min}} = \frac{\mathcal{N}}{2}.
+\]
+However, in order for the theory to be renormalizable, we want $|\lambda| \leq 1$. So we often restrict to $\mathcal{N} \leq 4$.
+
Even if we don't care about renormalizability, the maximum number of supersymmetries is $\mathcal{N} = 8$, since a multiplet with $\mathcal{N} > 8$ would have more than one spin $2$ state (graviton), and there are also strong arguments against the existence of massless particles of helicities $> 2$. % soft theorems?
+
In general, for $\mathcal{N} > 1$ supersymmetry, we cannot avoid spin $1$ particles. So the theory cannot be chiral. The only exception is the hypermultiplet, which doesn't have spin $1$ particles, but one can check by hand that it is not chiral. Since real life is chiral, we stick to $\mathcal{N} = 1$.
+
+%In real life, we see particles. Fields are introduced to describe local interactions among particles, say
+%\[
+% \varphi(x), \psi_\alpha(x), A_\mu(x), \psi_\alpha^\mu (x) h_{\mu\nu}(x) % spin \lambda = 0, 1/2, 1, 3/2, 2.
+%\]
+%The relation between fields and particles is not one-to-one. For example, if we start with $A_\mu$, and write it as
+%\[
+% A_\mu(x) = \int \frac{\d3 p}{\sqrt{E}} \varepsilon_\mu(p, \lambda) e^{ip\cdot x} a(p, \lambda) + \cdots,
+%\]
+%where $a(p, \lambda)$ is an annihilation operator, and $\varepsilon_\mu(p, \lambda)$ is a polarization.
+%
+%Observe that the particle $\bket{p^\mu, \pm 1}$ has two degrees of freedom, while the field $A_\mu$ has four degrees of freedom. What people do is that they impose constraints on the fields. We require
+%\[
+% p^\mu \varepsilon_\mu = 0,
+%\]
+%which says the polarization is orthogonal to the momentum. We also have a redundancy that we can replace $\varepsilon_\mu$ with $\varepsilon_\mu + \alpha(p, \lambda) p_\mu$ for any $\alpha(p, \lambda)$.
+
+\subsubsection{Massive representations}
+For massive representations, if $Z^{AB} = 0$, then we get
+\[
 \{Q_\alpha^A, \bar{Q}_{\dot{\beta}, B}\} = 2m \delta^A\!_B
+ \begin{pmatrix}
+ 1 & 0\\
+ 0 & 1
 \end{pmatrix}_{\alpha\dot{\beta}}
+\]
+with $Q_1, Q_2 \not= 0$. Then there are $2^{2\mathcal{N}}$ states in a multiplet.
+
+If $Z^{AB} \not= 0$, we can consider
+\[
 \mathcal{H} \equiv (\bar{\sigma}^0)^{\dot{\beta} \alpha} \{Q_\alpha^A - \Gamma_\alpha\!^A, \bar{Q}_{\dot{\beta} A} - \bar{\Gamma}_{\dot{\beta} A}\} \geq 0,
+\]
+with
+\[
+ \Gamma_\alpha\!^A = \varepsilon_{\alpha\beta} U^{AB} \bar{Q}_{\dot{\gamma} B} (\bar{\sigma}^0)^{\dot{\gamma} \beta},
+\]
+where $U$ is a unitary matrix. From the supersymmetric algebra, we find that
+\[
+ \mathcal{H} = 8m\mathcal{N} - 2 \Tr(ZU^\dagger + U Z^\dagger).
+\]
+We now perform a polar decomposition of $Z$,
+\[
+ Z = HV,
+\]
+where $H$ is Hermitian and positive, and $V$ is unitary. We choose $U = V$. Then we get
+\[
 \Tr(ZU^\dagger + UZ^\dagger) = \Tr (HVV^\dagger + VV^\dagger H) = 2 \Tr H.
+\]
+This implies
+\[
+ \mathcal{H} = 8m\mathcal{N} - 4 \Tr H \geq 0.
+\]
Equivalently, since $H = \sqrt{ZZ^\dagger}$ and hence $\Tr H = \Tr \sqrt{Z^\dagger Z}$, we know that
\[
 m \geq \frac{\Tr \sqrt{Z^\dagger Z}}{2\mathcal{N}}.
\]
This gives a lower bound on the mass of particles. This is known as the \term{BPS bound}, after Bogomolny, Prasad and Sommerfield.
+
+If the bound is saturated, i.e.\ we have equality in the bound, then these are called \term{BPS states}. This is the case iff
+\[
+ Q_\alpha^A - \Gamma_\alpha^A = 0.
+\]
This is important: the reason massless multiplets are smaller is that we had $Q_2 = 0$. Here, similarly, a combination of the supercharges vanishes on such states, and we again get smaller multiplets. Moreover, since BPS states are the lightest states of a given charge, they are stable.
+
+For example, for $\mathcal{N} = 2$, we can take
+\[
+ Z^{AB} =
+ \begin{pmatrix}
+ 0 & q_1\\
+ -q_1 & 0
+ \end{pmatrix}.
+\]
+We then get
+\[
 m \geq \frac{|q_1|}{2}.
+\]
\section{Superfields and \texorpdfstring{$\mathcal{N} = 1$}{N = 1} superspace}
+Our goal is to write down a supersymmetric field theory. For $\mathcal{N} = 0$ we have fields of the form $\phi(x^\mu)$, where $x^\mu$ is a point in Minkowski space, and $\phi$ transforms under the Poincar\'e group.
+
+For $\mathcal{N} = 1$ supersymmetry, we want a ``superfield'' $\Phi(X)$, where $X$ is a coordinate in superspace and $\Phi$ transforms under the super Poincar\'e algebra.
+
+So what is superspace? Recall that a Lie group $G$ is in particular a manifold. To emphasize that we are thinking about it as a manifold, we write it as $M_G$.
+
+\begin{eg}
+ $\SL(2, \C) \cong \R^3 \times S^3$.
+\end{eg}
+
If $H \leq G$ is a closed subgroup, then $G/H$ naturally has a manifold structure as well.
+\begin{eg}
+ $\Or(n)/\Or(n - 1) \cong S^{n - 1}$ by the orbit stabilizer theorem.
+\end{eg}
+
+\begin{eg}
+ $\U(1)$ embeds diagonally into $\SU(2)$, and $\SU(2)/\U(1) \cong S^2$, giving rise to the \term{Hopf fibration}.
+\end{eg}
+
+\begin{eg}
+ Let $G$ be the Poincar\'e group, and $H$ be the Lorentz group. Then $G/H$ is Minkowski space, since $H$ is the stabilizer of the origin.
+\end{eg}
+We now have an $\mathcal{N} = 1$ super Poincar\'e ``group''. We can write an element as
+\[
+ g = 1 - i( \omega_{\mu\nu} M^{\mu\nu} - x_\mu P^\mu + \theta^\alpha Q_\alpha + \bar{\theta}_{\dot{\alpha}} \bar{Q}^{\dot{\alpha}}) + \cdots
+\]
+Quotienting this group by the Lorentz group then gives a manifold that is locally given by the parameters $\{x_\mu, \theta^\alpha, \bar{\theta}_{\dot{\alpha}}\}$. This is called $\mathcal{N} = 1$ superspace.
+
+These $\theta^\alpha$ are anti-commuting numbers, and are known as \term{Grassmann variables}.
+
+\subsection{Properties of Grassmann variables}
+For simplicity, assume we have a single dimension $\theta$. Then an analytic function would have a Taylor expansion
+\[
+ f(\theta) = f_0 + f_1 \theta + f_2 \theta^2 + \cdots.
+\]
+However, since $\theta$ is anti-commuting, we know $\theta^2 = 0$. So the most general analytic function is just
+\[
+ f(\theta) = f_0 + f_1 \theta.
+\]
+This has ``derivative''\index{derivative}
+\[
+ \frac{\d f}{\d \theta} \equiv f_1.
+\]
+We require integrals to satisfy
+\[
 \int \d \theta \left(\frac{\d f}{\d \theta}\right) = 0,
+\]
+which we can think of as Stokes' theorem. So we know that $\int \d \theta = 0$. To completely define the integral, we need to define what $\int \theta\;\d \theta$ is. We \emph{choose} this to be equal to one.
+
+Thus, we see that we can define a ``Dirac delta function''
+\[
+ \delta(\theta) = \theta.
+\]
+Thus, we have
+\[
+ \int \d \theta\; f(\theta) = \int \d \theta\; (f_0 + f_1 \theta) = f_1 = \frac{\d f}{\d \theta}.
+\]
+If we have multiple Grassmann variables $\theta^\alpha$ and $\bar{\theta}_{\dot{\alpha}}$, we write
+\[
+ \theta \theta = \theta^\alpha \theta_\alpha,\quad \bar{\theta} \bar{\theta} = \bar{\theta}_{\dot{\alpha}} \bar{\theta}^{\dot{\alpha}}.
+\]
+We then have
+\[
 \theta^\alpha \theta^\beta = -\frac{1}{2} \varepsilon^{\alpha\beta} (\theta \theta),\quad \bar{\theta}^{\dot{\alpha}} \bar{\theta}^{\dot{\beta}} = \frac{1}{2} \varepsilon^{\dot{\alpha} \dot{\beta}} \bar{\theta} \bar{\theta}.
+\]
+We then set
+\[
 \frac{\partial \theta^\beta}{\partial \theta^\alpha} = \delta_\alpha\!^\beta,\quad \frac{\partial\bar{\theta}^{\dot{\beta}}}{\partial \bar{\theta}^{\dot{\alpha}}} = \delta_{\dot{\alpha}}\!^{\dot{\beta}}.
+\]
+Then
+\[
+ \int \d \theta^1 \int \d \theta^2 \; \theta^2 \theta^1 = \frac{1}{2} \int \d \theta^1 \int \d \theta^2\; \theta\theta = 1.
+\]
As a shorthand, we have
+\[
+ \int \d^2 \theta = \frac{1}{2} \int \d \theta^1 \int \d \theta^2.
+\]
+We can think of this as saying
+\[
 \d^2 \theta = - \frac{1}{4} \d \theta^\alpha \d \theta^\beta \varepsilon_{\alpha\beta},\quad \d^2 \bar{\theta} = \frac{1}{4} \d \bar{\theta}^{\dot{\alpha}} \d \bar{\theta}^{\dot{\beta}} \varepsilon_{\dot{\alpha} \dot{\beta}}.
+\]
+Then we have
+\[
 \int \d^2 \theta = \frac{1}{4} \varepsilon^{\alpha\beta} \frac{\partial}{\partial \theta^\alpha} \frac{\partial}{\partial \theta^\beta},\quad \int \d^2 \bar{\theta} = - \frac{1}{4} \varepsilon^{\dot{\alpha}\dot{\beta}} \frac{\partial}{\partial \bar{\theta}^{\dot{\alpha}}} \frac{\partial}{\partial \bar{\theta}^{\dot{\beta}}}.
+\]
+\subsection{General scalar superfield}
+
Recall that for $\mathcal{N} = 0$, our scalar fields $\varphi(x^\mu)$ transform under translations as
+\[
 \varphi \mapsto e^{-i a_\mu P^\mu} \varphi e^{ia_\mu P^\mu} = (1 - i a_\mu P^\mu) \varphi (1 + i a_\mu P^\mu) + \cdots = \varphi + i a_\mu [\varphi, P^\mu] + \cdots.
+\]
On the other hand, we have
+\[
 \varphi \mapsto \varphi(x^\mu + a^\mu) = e^{ia_\mu \mathcal{P}^\mu} \varphi = \varphi(x) + a_\mu \partial^\mu \varphi(x) + \cdots.
+\]
+So we know that
+\[
+ \mathcal{P}^\mu = -i \frac{\partial}{\partial x^\mu}.
+\]
+Then we see that
+\[
 \delta \varphi = i a_\mu [\varphi, P^\mu] = i a_\mu \mathcal{P}^\mu \varphi = a_\mu \partial^\mu \varphi.
+\]
+
+For $\mathcal{N} = 1$, we consider a scalar field $S(x^\mu, \theta_\alpha, \bar{\theta}_{\dot{\alpha}})$. The most general expression for $S$ is
+\begin{multline*}
 S(x^\mu, \theta_{\alpha}, \bar{\theta}_{\dot{\alpha}}) = \varphi(x) + \theta \psi(x) + \bar{\theta} \bar{\chi}(x) + \theta \theta M(x) + \bar{\theta} \bar{\theta} N(x) + (\theta \sigma^\mu \bar{\theta}) V_\mu(x)\\
+ + (\theta \theta) \bar{\theta} \bar{\lambda}(x) + (\bar{\theta} \bar{\theta}) \theta \rho(x) + (\theta \theta)(\bar{\theta} \bar{\theta}) D(x).
+\end{multline*}
+We see that this ``scalar field'' actually consists of many different fields $\varphi, \psi, \bar{\chi}, M, N, V_\mu, \bar{\lambda}, \rho, D$, some of which are bosonic and some of which are fermionic.
+
+Under a supersymmetric transformation, we have
+\[
 S \mapsto e^{-i(\varepsilon Q + \bar{\varepsilon} \bar{Q})} S(x, \theta, \bar{\theta}) e^{i (\varepsilon Q + \bar{\varepsilon} \bar{Q})} = e^{i (\varepsilon \mathcal{Q} + \bar{\varepsilon} \bar{\mathcal{Q}})} S(x, \theta, \bar{\theta}).
+\]
+We then find that
+\[
+ \delta S = i [S, \varepsilon Q + \bar{\varepsilon} \bar{Q}] = i (\varepsilon \mathcal{Q} + \bar{\varepsilon} \bar{\mathcal{Q}}) S(x, \theta, \bar{\theta}).
+\]
Our goal is then to find the differential operators $\mathcal{Q}_\alpha$ and $\bar{\mathcal{Q}}_{\dot{\alpha}}$.
+
+We assert that we have
+\[
 S(x, \theta, \bar{\theta}) = e^{-i(x^\mu P_\mu + \theta Q + \bar{\theta} \bar{Q})} S(0, 0, 0) e^{i (x^\mu P_\mu + \theta Q + \bar{\theta} \bar{Q})}.
+\]
+Then the BCH formula
+\[
+ e^A e^B = e^{A + B + [A, B] + \cdots}
+\]
+tells us we get
+\[
 S \mapsto S(x^\mu - i \varepsilon \sigma^\mu \bar{\theta} + i \theta \sigma^\mu \bar{\varepsilon}, \theta + \varepsilon, \bar{\theta} + \bar{\varepsilon}).
+\]
+Taylor expanding, we get
+\begin{align*}
+ \mathcal{Q}_\alpha &= -i \frac{\partial}{\partial \theta^\alpha} + (\sigma^\mu)_{\alpha \dot{\beta}} \bar{\theta}^{\dot{\beta}} \frac{\partial}{\partial x^\mu}\\
 \bar{\mathcal{Q}}_{\dot{\alpha}} &= i \frac{\partial}{\partial \bar{\theta}^{\dot{\alpha}}} + \theta^\beta (\sigma^\mu)_{\beta \dot{\alpha}} \frac{\partial}{\partial x^\mu}\\
 \mathcal{P}_\mu &= -i \partial_\mu.
+\end{align*}
+One can then verify that we have
+\begin{align*}
 \{\mathcal{Q}_\alpha, \bar{\mathcal{Q}}_{\dot{\alpha}}\} &= 2 (\sigma^\mu)_{\alpha \dot{\alpha}} \mathcal{P}_\mu\\ % sign because \varepsilon Q = \varepsilon^\alpha Q_\alpha and \bar{\varepsilon}\bar{Q} = \bar{\varepsilon}_{\dot{\alpha}} \bar{Q}^{\dot\alpha}
 \{\mathcal{Q}_\alpha, \mathcal{Q}_\beta\} &= 0\\
 [\mathcal{Q}_\alpha, \mathcal{P}_\mu] &= 0.
+\end{align*}
+We can now read off how each of the individual components transform:
+\begin{align*}
+ \delta \varphi &= \varepsilon \psi + \bar{\varepsilon} \bar{\chi}\\
 \delta \psi &= 2 \varepsilon M + \sigma^\mu \bar{\varepsilon} (i \partial_\mu \varphi + V_\mu)\\
+ \delta \bar{\chi} &= 2 \bar{\varepsilon} N - \varepsilon \sigma^\mu (i \partial_\mu \varphi - V_\mu)\\
+ \delta M &= \bar{\varepsilon} \bar{\lambda} - \frac{i}{2} \partial_\mu \psi \sigma^\mu \bar{\varepsilon}\\
+ \delta N &= \varepsilon \rho + \frac{i}{2} \varepsilon \sigma^\mu \partial_\mu \bar{\chi}\\
+ \delta V_\mu &= \varepsilon \sigma_\mu \bar{\lambda} + \rho \sigma_\mu \bar{\varepsilon} + \frac{i}{2} (\partial^\nu \psi \sigma_\mu \bar{\sigma}_\nu \varepsilon - \bar{\varepsilon} \bar{\sigma}_\nu \sigma_\mu \partial^\nu \bar{\chi})\\
+ \delta \bar{\lambda} &= 2 \bar{\varepsilon} D + \frac{i}{2} (\bar{\sigma}^\nu \sigma^\mu \bar{\varepsilon})\partial_\mu V_\nu + i \bar{\sigma}^\mu \varepsilon \partial_\mu M\\
 \delta \rho &= 2 \varepsilon D - \frac{i}{2} (\sigma^\nu \bar{\sigma}^\mu \varepsilon) \partial_\mu V_\nu + i \sigma^\mu \bar{\varepsilon} \partial_\mu N\\
+ \delta D &= \frac{i}{2} \partial_\mu (\varepsilon \sigma^\mu \bar{\lambda} - \rho \sigma^\mu \bar{\varepsilon}).
+\end{align*}
+Above all, note that the change in the last component $D$ is a total derivative!
+
+Note that the product of any two superfields is a superfield, transforming infinitesimally under
+\[
+ \delta (S_1 S_2) = i [S_1 S_2, \varepsilon Q + \bar{\varepsilon} \bar{Q}] = i S_1 [S_2, \varepsilon Q + \bar{\varepsilon} \bar{Q}] + i [S_1, \varepsilon Q + \bar{\varepsilon} \bar{Q}] S_2.
+\]
Note that $\partial_\mu S$ is a superfield, but $\partial_\alpha S$ is \emph{not} a superfield. This is because $\partial_\mu$ commutes with $\mathcal{Q}_\alpha$ and $\bar{\mathcal{Q}}_{\dot{\alpha}}$, but $\partial_\alpha$ does not. Thankfully, we can define a covariant derivative
+\[
 \mathcal{D}_\alpha = \partial_\alpha + i (\sigma^\mu)_{\alpha \dot{\beta}} \bar{\theta}^{\dot{\beta}} \partial_\mu,\quad \bar{\mathcal{D}}_{\dot{\alpha}} = \bar{\partial}_{\dot{\alpha}} + i \theta^\beta (\sigma^\mu)_{\beta \dot{\alpha}} \partial_\mu.
+\]
+One then checks that
+\[
 \{\mathcal{D}_\alpha, \mathcal{Q}_\beta\} = \{\mathcal{D}_\alpha, \bar{\mathcal{Q}}_{\dot{\beta}}\} = \{\bar{\mathcal{D}}_{\dot{\alpha}}, \mathcal{Q}_\beta\} = \{\bar{\mathcal{D}}_{\dot{\alpha}}, \bar{\mathcal{Q}}_{\dot{\beta}}\} = 0.
+\]
+Therefore
+\[
+ [\mathcal{D}_\alpha, \varepsilon \mathcal{Q} + \bar{\varepsilon} \bar{\mathcal{Q}}] = 0.
+\]
+So $\mathcal{D}_\alpha S$ and $\bar{\mathcal{D}}_{\dot{\alpha}} S$ are superfields.
+
+Observe that if $S = f(x)$ is a superfield, then $f$ is constant.
+
Note that the number of degrees of freedom of $S$ is much greater than what we saw when we studied representation theory. This suggests $S$ does \emph{not} give an irreducible representation of the supersymmetry algebra. By imposing some conditions on $S$, we can get smaller representations.
+\begin{itemize}
 \item \term{chiral superfields} $\Phi(x, \theta, \bar{\theta})$ are superfields that satisfy $\bar{\mathcal{D}}_{\dot{\alpha}} \Phi = 0$;
 \item \term{anti-chiral superfields} $\Phi(x, \theta, \bar{\theta})$ satisfy $\mathcal{D}_\alpha \Phi = 0$;
 \item \term{vector superfields} $V$ satisfy $V = V^\dagger$;
 \item \term{linear superfields} $L$ satisfy $L = L^\dagger$ and $\mathcal{D} \mathcal{D} L = 0$.
+\end{itemize}
+We will see that chiral and anti-chiral superfields are ``matter'' fields; vector superfields are the gauge fields, and linear superfields are in some sense the same as chiral superfields under duality transformations.
+
+\subsection{Chiral superfields}
To simplify the covariant derivative, we absorb some of it by defining
+\[
+ y^\mu = x^\mu + i \theta \sigma^\mu \bar{\theta}.
+\]
+Then we can express the chirality condition as
+\[
 0 = \bar{\mathcal{D}}_{\dot{\alpha}} \Phi = \bar{\partial}_{\dot{\alpha}} \Phi + \frac{\partial \Phi}{\partial y^\mu} \frac{\partial y^\mu}{\partial \bar{\theta}^{\dot{\alpha}}} + i \theta^\beta \sigma^\mu_{\beta \dot{\alpha}} \partial_\mu \Phi = \bar{\partial}_{\dot{\alpha}} \Phi.
+\]
+So $\Phi$ is chiral iff $\Phi = \Phi(y, \theta)$. Then we can write
+\[
+ \Phi(y, \theta) = \varphi(y^\mu) + \sqrt{2} \theta \psi(y^\mu) + \theta \theta F(y^\mu).
+\]
+This is far simpler than what we had before!
+
Observe that $\varphi$ and $F$ together have four real bosonic degrees of freedom, and $\psi$ has four real fermionic degrees of freedom. In terms of $x$, this can be written as
+\[
+ \Phi(x, \theta, \bar{\theta}) = \varphi(x) + \sqrt{2} \theta \psi(x) + \theta \theta F(x) + i \theta \sigma^\mu \bar{\theta} \partial_\mu \varphi(x) - \frac{i}{\sqrt{2}} (\theta\theta)\partial_\mu \psi(x) \sigma^\mu \bar{\theta} - \frac{1}{4} (\theta\theta)(\bar{\theta} \bar{\theta}) \partial_\mu \partial^\mu \varphi(x).
+\]
+Then under a supersymmetric transformation, we have
+\begin{align*}
+ \delta \varphi &= \sqrt{2} \varepsilon \psi\\
+ \delta \psi &= i \sqrt{2} \sigma^\mu \bar{\varepsilon} \partial_\mu \varphi + \sqrt{2} \varepsilon F\\
+ \delta F &= i \sqrt{2} \bar{\varepsilon} \bar{\sigma}^\mu \partial_\mu \psi.
+\end{align*}
+Observe that similar to $D$, the quantity $F$ changes by a total derivative. We will see that $(\varphi, \psi)$ is a chiral $\mathcal{N} = 1$ multiplet, and $F$ is an auxiliary field.
+
+Observe that the Leibniz rule says the product of chiral superfields is chiral. Hence any holomorphic function of chiral superfields is chiral. Also, if $\Phi$ is chiral, then $\Phi^\dagger$ is anti-chiral.
+
+Also if $\Phi$ is chiral, then $\Phi^\dagger \Phi$ and $\Phi^\dagger + \Phi$ are real superfields.
+
+Also, if $X = (x, \psi_x, F_x)$ is chiral and nilpotent, i.e.\ $X^2 = 0$, then
+\[
+ x^2 + 2 \sqrt{2} x \theta \psi_x + (2x F_x - \psi^2_x)\theta \theta = 0.
+\]
This requires $x = \psi_x^2/(2F_x)$.
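Explicitly, matching the coefficients of $1$, $\theta$ and $\theta\theta$ in the expansion above gives
\[
 x^2 = 0,\quad x \psi_x = 0,\quad 2 x F_x = \psi_x^2,
\]
and the last equation gives $x = \psi_x^2/(2 F_x)$ when $F_x \neq 0$. One can check this is consistent with the first two conditions, since any product of three or more components of $\psi_x$ vanishes.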
+
+\subsection{Vector superfields}
+We now consider vector superfields, satisfying
+\[
+ V(x, \theta, \bar{\theta}) = V^\dagger (x, \theta, \bar{\theta}).
+\]
+This can be expressed as
+\begin{multline*}
 V = C(x) + i \theta \chi(x) - i \bar{\theta} \bar{\chi}(x) + \frac{i}{2} \theta \theta (M(x) + i N(x)) - \frac{i}{2} \bar{\theta} \bar{\theta} (M(x) - iN(x)) + \theta \sigma^\mu \bar{\theta} V_\mu(x) +\\
 i \theta \theta \bar{\theta} \left(-i \bar{\lambda}(x) + \frac{i}{2} \bar{\sigma}^\mu \partial_\mu \chi(x)\right) - i \bar{\theta} \bar{\theta} \theta \left(i \lambda(x) - \frac{i}{2} \sigma^\mu \partial_\mu \bar{\chi}(x)\right) + \frac{1}{2} (\theta \theta)(\bar{\theta} \bar{\theta}) \left(D(x) - \frac{1}{2} \partial_\mu \partial^\mu C(x)\right).
+\end{multline*}
+One can check that there are eight bosonic components $C, M, N, D, V_\mu$ and eight fermionic components $\chi_\alpha, \lambda_\alpha$.
+
+Notice that if $\Lambda$ is chiral, then the imaginary part $i(\Lambda - \Lambda^\dagger)$ is a vector superfield. We see that the components of $i(\Lambda - \Lambda^\dagger)$ are
+\begin{align*}
+ C &= i (\varphi - \varphi^\dagger)\\
+ \chi &= \sqrt{2} \psi\\
+ \frac{1}{2} (M + i N) &= F\\
+ V_\mu &= - \partial_\mu (\varphi + \varphi^\dagger)\\
+ \lambda &= D = 0.
+\end{align*}
+Note that $V_\mu$ is a total derivative of a real function. Define a generalized gauge transformation of a superfield $V$ by
+\[
+ V \mapsto V - \frac{i}{2} (\Lambda - \Lambda^\dagger).
+\]
+The \term{Wess--Zumino gauge} chooses the components of $\Lambda$ such that $C = \chi = M = N = 0$. This is clearly possible. Of course, we will have to use Lagrangians that are gauge invariant. Then
+\[
+ V_{WZ} (x, \theta, \bar{\theta}) = (\theta \sigma^\mu \bar{\theta}) V_\mu(x) + (\theta\theta)(\bar{\theta} \bar{\lambda}(x)) + \bar{\theta} \bar{\theta} (\theta\lambda(x)) + \frac{1}{2} (\theta\theta)(\bar{\theta}\bar{\theta}) D(x).
+\]
+So the components are $(V_\mu(x), \lambda(x), D(x))$. Here $V_\mu$ is the gauge field, $\lambda(x)$ is the gaugino and $D(x)$ is an auxiliary field.
+
+Note that we have
+\[
+ V_{WZ}^2 = \frac{1}{2} \theta \theta \bar{\theta} \bar{\theta} V^\mu V_\mu,
+\]
+and higher powers of $V_{WZ}$ vanish.
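+The quadratic computation uses the standard spinor identity (quoted here in the conventions of these notes)
+\[
+ (\theta \sigma^\mu \bar{\theta})(\theta \sigma^\nu \bar{\theta}) = \frac{1}{2} (\theta\theta)(\bar{\theta}\bar{\theta}) \eta^{\mu\nu},
+\]
+and higher powers vanish because any product of three $\theta$'s or three $\bar{\theta}$'s is zero.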
+
+Recall that for gauge fields, we had field strengths. We can do that here as well. To avoid complications involving the commutator, we shall only do the abelian case. We define
+\[
+ \mathcal{W}_\alpha = - \frac{1}{4} (\bar{\mathcal{D}} \bar{\mathcal{D}}) \mathcal{D}_\alpha V.
+\]
+Note that this is chiral and gauge invariant. In terms of $y$, we have
+\[
+ \mathcal{W}_\alpha(y, \theta) = \lambda_\alpha(y) + \theta_\alpha D(y) + (\sigma^{\mu\nu} \theta)_\alpha F_{\mu\nu}(y) - i (\theta \theta) \sigma^\mu_{\alpha \dot{\beta}} \partial_\mu \bar{\lambda}^{\dot{\beta}}(y),
+\]
+where
+\[
+ F_{\mu\nu} = \partial_\mu V_\nu - \partial_\nu V_\mu.
+\]
+\section{Supersymmetric Lagrangian}
+\subsection{\texorpdfstring{$\mathcal{N} = 1$}{N = 1} global SUSY}
+Our goal is to write down a Lagrangian for interactions of chiral and vector superfields, $\mathcal{L}[\Phi, V, \mathcal{W}_\alpha]$. We first consider Lagrangians for chiral superfields.
+
+Recall that for a general scalar superfield, the component $D$ transforms as a total derivative
+\[
+ \delta D = \frac{i}{2} \partial_\mu (\varepsilon \sigma^\mu \bar{\lambda} - \lambda \sigma^\mu \bar{\varepsilon}).
+\]
+For a chiral superfield, $F$ transforms as a total derivative
+\[
+ \delta F = i \sqrt{2} \bar{\varepsilon} \bar{\sigma}^\mu \partial_\mu \psi.
+\]
+The most general Lagrangian for $\Phi$ is
+\[
+ \mathcal{L} = K(\Phi, \Phi^\dagger)|_D + (W(\Phi)|_F + \text{h.c.})
+\]
+where $|_D$ means we take the $D$-term, and similarly $|_F$ takes the $F$-term. Then under a supersymmetric transformation, $\mathcal{L}$ transforms by a total derivative. This $K$ is called the \term{K\"ahler potential}, and $W$ is the \term{superpotential}.
+
+We look for renormalizable Lagrangians. For this, we need to understand the units of the superfields. We have
+\[
+ [\mathcal{L}] = 4,\quad [\varphi] = 1,\quad [\Phi] = 1,\quad [\psi] = \frac{3}{2},\quad [\theta] = -\frac{1}{2},\quad [F] = 2.
+\]
+For $[\mathcal{L}] = 4$, we want
+\[
+ K = \cdots + K_D \theta \theta (\bar{\theta} \bar{\theta}),
+\]
+with $[K_D] \leq 4$. So we must have
+\[
+ [K] \leq 2.
+\]
+Similarly, we need that
+\[
+ [W] \leq 3.
+\]
+\begin{eg}
+ We can take
+ \[
+ K = \Phi^\dagger \Phi,\quad W = \alpha + \lambda \Phi + \frac{m}{2} \Phi^2 + \frac{g}{3} \Phi^3.
+ \]
+ These choices define the \term{Wess--Zumino model}. Explicitly, the kinetic terms expand (up to total derivatives) to
+ \[
+ \Phi^\dagger \Phi|_D = \partial^\mu \varphi^* \partial_\mu \varphi + i \bar{\psi} \bar{\sigma}^\mu \partial_\mu \psi + FF^*.
+ \]
+\end{eg}
+For $W(\Phi)$, it is convenient to expand around $\Phi = \varphi$.
+\[
+ W(\Phi) = W(\varphi) + (\Phi - \varphi) \frac{\partial W}{\partial \varphi} + \frac{1}{2} (\Phi - \varphi)^2 \frac{\partial^2 W}{\partial \varphi^2} + \cdots.
+\]
+One finds that the Lagrangian for $F$ is
+\[
+ \mathcal{L}[F] = FF^* + F \frac{\partial W}{\partial \varphi} + F^* \frac{\partial W^*}{\partial \varphi^*}.
+\]
+If we solve for $F$ by setting $\frac{\delta \mathcal{L}}{\delta F} = 0$, then we get
+\begin{align*}
+ F^* + \frac{\partial W}{\partial \varphi} &= 0\\
+ F + \frac{\partial W^*}{\partial \varphi^*} &= 0.
+\end{align*}
+Plugging this back in, we get
+\[
+ \mathcal{L}[F] = - \left|\frac{\partial W}{\partial \varphi}\right|^2.
+\]
+This gives a scalar potential for $\varphi$,
+\[
+ V = \left|\frac{\partial W}{\partial \varphi}\right|^2 \geq 0.
+\]
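As a quick sanity check, this elimination of the auxiliary field can be done symbolically. A minimal sketch with sympy, where `Wp` and `Wpc` are hypothetical stand-in symbols for $\partial W/\partial \varphi$ and its conjugate:

```python
import sympy as sp

# Treat F, F* and W' = dW/dphi, (W')* as independent complex symbols.
F, Fc, Wp, Wpc = sp.symbols('F Fc Wp Wpc')

# Auxiliary-field part of the Lagrangian: L[F] = F F* + F W' + F* (W')*
L = F*Fc + F*Wp + Fc*Wpc

# The equations of motion dL/dF = 0, dL/dF* = 0 are purely algebraic.
sol = sp.solve([sp.diff(L, F), sp.diff(L, Fc)], [F, Fc])

# Substituting back gives minus the scalar potential.
L_onshell = sp.simplify(L.subs(sol))
print(L_onshell)  # -> -Wp*Wpc, i.e. -|dW/dphi|^2
```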
+Note that
+\begin{itemize}
+ \item The $\mathcal{N} = 1$ Lagrangian is a particular case of an $\mathcal{N} = 0$ Lagrangian.
+ \item The scalar potential for $\varphi$ is automatically always non-negative.
+ \item The boson and fermion have the same mass.
+ \item There is a quartic term in the potential $g^2 |\varphi|^4$.
+ \item There is also a Yukawa coupling with the same coupling $g$, which allows for ``miraculous cancellation'' between $g^2 |\varphi|^4$ and two $g \psi^2 \varphi$ couplings. % insert diagram
+\end{itemize}
+
+In general, if we have several fields $\Phi_i$, the kinetic term would be
+\[
+ K_{i\bar{j}} \partial^\mu\varphi^i \partial_\mu \bar{\varphi}^{\bar{j}},
+\]
+where
+\[
+ K_{i\bar{j}} = \frac{\partial^2 K}{\partial \varphi^i \partial \bar{\varphi}^{\bar{j}}}.
+\]
+In this case, the potential is
+\[
+ V_{(F)} = K_{i\bar{j}}^{-1} \frac{\partial W}{\partial \varphi^i} \frac{\partial \bar{W}}{\partial \bar{\varphi}^{\bar{j}}} \geq 0.
+\]
+\subsection{\texorpdfstring{$\mathcal{N} = 1$}{N = 1} super QED}
+Recall that to do $\U(1)$ gauge theory, we start with a Lagrangian $\mathcal{L} = i\bar{\psi} \slashed\partial \psi$, and promote the global $\U(1)$ symmetry $\psi \mapsto e^{i \lambda} \psi$ to a local symmetry, where $\lambda = \lambda(x)$. Thus, our new Lagrangian is
+\[
+ \mathcal{L} = i \bar{\psi} \slashed{D} \psi,\quad D_\mu = \partial_\mu - i q A_\mu.
+\]
+We then add a kinetic term $-\frac{1}{4} F_{\mu\nu} F^{\mu\nu}$ for the gauge field, where $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$.
+
+For $\mathcal{N} = 1$ supersymmetry, we start with
+\[
+ \mathcal{L} = \Phi^\dagger \Phi|_D.
+\]
+We want symmetry under $\Phi \to e^{i q \Lambda} \Phi$, where $\Lambda$ is a chiral superfield. Then the Lagrangian transforms as
+\[
+ \mathcal{L} \mapsto \Phi^\dagger e^{-i q (\Lambda^\dagger - \Lambda)} \Phi|_D.
+\]
+To make it invariant, we introduce a real vector superfield $V$ that transforms as
+\[
+ V \mapsto V - \frac{i}{2} (\Lambda - \Lambda^\dagger).
+\]
+Then the new Lagrangian
+\[
+ \mathcal{L} = \Phi^\dagger e^{2qV} \Phi|_D
+\]
+is invariant. Note that in the Wess--Zumino gauge, we simply have
+\[
+ e^{2qV_{WZ}} = 1 + 2qV_{WZ} + 2q^2 V_{WZ}^2.
+\]
+In this gauge, $\Phi= \{ \varphi, \psi, F\}$ and $V = \{\lambda, V_\mu, D\}$, and
+\[
+ \mathcal{L} = F^* F + \D_\mu \varphi \D^\mu \varphi^* + i \bar{\psi} \bar{\sigma}^\mu \D_\mu \psi + \sqrt{2} q(\varphi \bar{\lambda} \bar{\psi} + \varphi^* \lambda \psi) + q D |\varphi|^2,
+\]
+where
+\[
+ \D_\mu = \partial_\mu - i q V_\mu.
+\]
+Note that the Wess--Zumino gauge is not preserved by supersymmetric transformations.
+
+To introduce the kinetic part of the field strength, we use
+\[
+ \mathcal{W}^\alpha \mathcal{W}_\alpha|_F = D^2 - \frac{1}{2} F_{\mu\nu} F^{\mu\nu} - 2i \lambda \sigma^\mu \partial_\mu \bar{\lambda} - \frac{i}{2} F_{\mu\nu} \tilde{F}^{\mu\nu},
+\]
+where $\tilde{F}^{\mu\nu} = \frac{1}{2} \varepsilon^{\mu\nu\rho\sigma} F_{\rho\sigma}$. Note that $F_{\mu\nu} \tilde{F}^{\mu\nu}$ is a total derivative term, so sometimes people don't talk about it, but it is important for, say, instantons. % this term vanishes when we take hermitian conjugate.
+
+Then the total Lagrangian is
+\[
+ \mathcal{L}_{\mathrm{total}} = \Phi^\dagger e^{2qV} \Phi|_D + (\mathcal{W}^\alpha \mathcal{W}_\alpha|_F + \text{h.c.}).
+\]
+Are there more terms we can add to the Lagrangian? Fayet and Iliopoulos introduced the \term{Fayet--Iliopoulos term}
+\[
+ \mathcal{L}_{FI} = \xi V|_D,
+\]
+where $\xi$ is a number. Note that this is gauge invariant only for an abelian theory. The dependence of $\mathcal{L}_{\mathrm{total}}$ on $D$ is now
+\[
+ \mathcal{L}[D] = \frac{1}{2} D^2 + q |\varphi|^2 D + \frac{1}{2} \xi D.
+\]
+This has no kinetic term, and hence doesn't propagate. We can solve for $D$ to get
+\[
+ D = - \frac{\xi}{2} - q |\varphi|^2.
+\]
+Then we get
+\[
+ \mathcal{L}[D] = - \frac{1}{8} (\xi + 2q|\varphi|^2)^2 \equiv -V_{(D)}.
+\]
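The same kind of check works for $D$. A sketch assuming the $\frac{1}{2}D^2$ normalization of the quadratic term (the normalization that reproduces the quoted result), with `phi2` a hypothetical symbol standing for $|\varphi|^2$:

```python
import sympy as sp

D, q, xi, phi2 = sp.symbols('D q xi phi2', real=True)  # phi2 stands for |phi|^2

# D-dependent part of the Lagrangian: (1/2) D^2 + q |phi|^2 D + (1/2) xi D
L = sp.Rational(1, 2)*D**2 + q*phi2*D + sp.Rational(1, 2)*xi*D

# The equation of motion for D is algebraic.
Dsol = sp.solve(sp.diff(L, D), D)[0]
print(Dsol)  # -> -q*phi2 - xi/2

# Substituting back gives minus the D-term potential.
V_D = -L.subs(D, Dsol)
print(sp.simplify(V_D - sp.Rational(1, 8)*(xi + 2*q*phi2)**2))  # -> 0
```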
+So the total potential is
+\[
+ V = V_{(F)} + V_{(D)} = \left|\frac{\partial W}{\partial \varphi}\right|^2 + \frac{1}{8} (\xi + 2q|\varphi|^2)^2 \geq 0.
+\]
+
+More generally, we can take
+\[
+ \mathcal{L} = K(\Phi_i^\dagger e^{2qV}, \Phi_i)|_D + (f(\Phi_i) \mathcal{W}^\alpha \mathcal{W}_\alpha|_F + \text{h.c.}) + (W(\Phi_i)|_F + \text{h.c.}) + \xi V|_D.
+\]
+Here $f(\Phi)$ is a holomorphic function of the $\Phi_i$, called the \term{gauge kinetic function}, and the term containing it is the gauge kinetic term.
+
+Given a Lagrangian $\mathcal{L}$, we can define the action as a superspace integral. For $\mathcal{N} = 0$, we have
+\[
+ S = \int \d^4 x\; \mathcal{L}.
+\]
+For $\mathcal{N} = 1$, recall that we had
+\[
+ \int \d^2 \theta\; \theta \theta = 1,\quad \int\d^4 \theta (\theta\theta)(\bar{\theta} \bar{\theta}) = 1.
+\]
+Then
+\[
+ S = \int \d^4 x\; \left[\int \d^4 \theta\; (K + \xi V) + \int \d^2 \theta (W(\Phi) + f(\Phi) \mathcal{W}^\alpha \mathcal{W}_\alpha) + \text{h.c.}\right].
+\]
+
+\section{Perturbative non-renormalization theorems}
+We study the Wilsonian effective action
+\[
+ e^{i S_{\mathrm{eff}}} = \int_{P > \lambda} \mathcal{D} A\; e^{i S(A)},
+\]
+where we integrate out all momenta above $\lambda$. The quantum corrections can be expanded perturbatively as
+\[
+ S_{eff} = S_{\text{tree level}} + \sum_n g^n S_{n\text{-loop}} + S_{\text{non-perturbative}}.
+\]
+Usually, people do not talk about the non-perturbative part, but it arises naturally if one wants to talk about instantons.
+
+\begin{thm}
+ Perturbatively,
+ \begin{enumerate}
+ \item $K$ gets corrections at each order in the loop expansion.
+ \item $f(\Phi)$ gets corrections only at one loop, i.e.\ $f = f_{\mathrm{tree}} + f_{\text{1-loop}}$.
+ \item $W(\Phi)$ and $\xi$ do not get any corrections.
+ \end{enumerate}
+\end{thm}
+The main ideas of the proof come from string theory.
+
+\begin{proof}
+ There are a few ideas:
+ \begin{itemize}
+ \item We introduce spurious superfields as a bookkeeping device.
+ \item Use all symmetries.
+ \item Use holomorphicity.
+ \item Take weak coupling limits.
+ \end{itemize}
+ We begin with spurious chiral superfields
+ \[
+ X = (x, \psi_x, F_x),\quad Y = (y, \psi_y, F_y).
+ \]
+ Consider the action
+ \[
+ S = \int \d^4 x \left[ \int \d^4 \theta\; (K + \xi V) + \int \d^2 \theta\; (Y W(\Phi) + X \mathcal{W}^\alpha \mathcal{W}_\alpha) + \text{h.c.}\right].
+ \]
+ We write our effective Lagrangian as
+ \[
+ S_{\mathrm{eff}} = \int \d^4 x \left[\int \d^4 \theta \left[J(\Phi, \Phi^\dagger, e^V, X, Y) + \hat{\xi}(X, X^\dagger, Y, Y^\dagger) V\right] + \left(\int \d^2 \theta\; H(\Phi, X, Y, \mathcal{W}^\alpha) + \text{h.c.}\right)\right].
+ \]
+ Notice that $H$ is holomorphic.
+
+ Recall that we had $R$-symmetry. This acts on our fields with charges
+ \begin{center}
+ \begin{tabular}{cccccccc}
+ \toprule
+ & $\Phi$ & $V$ & $X$ & $Y$ & $\theta$ & $\bar{\theta}$ & $\mathcal{W}^\alpha$\\
+ \midrule
+ charge & 0 & 0 & 0 & 2 & 1 & -1 & 1\\
+ \bottomrule
+ \end{tabular}
+ \end{center}
+ This does not commute with supersymmetry, because the $\theta$ transform.
+
+ Since this symmetry has to be preserved in $S_{eff}$, we see that $H$ must be of the form
+ \[
+ H = Y h(X, \Phi) + g(X, \Phi) \mathcal{W}^\alpha \mathcal{W}_\alpha.
+ \]
+ The next symmetry is the Peccei--Quinn symmetry, given by
+ \[
+ X \mapsto X + i r,
+ \]
+ where $r$ is a real constant. Recall that
+ \[
+ X \mathcal{W}^\alpha \mathcal{W}_\alpha = \Re(X)\, F^{\mu\nu} F_{\mu\nu} + \Im(X)\, F^{\mu\nu} \tilde{F}_{\mu\nu},
+ \]
+ and so $r F^{\mu\nu} \tilde{F}_{\mu\nu}$ is a total derivative. Since the system is invariant under a shift in $X$, and $h$ is holomorphic, we know that $h$ cannot be a function of $X$. We have to be a bit more careful for the second term, where we can allow a linear term in $X$ by the argument above. So we get
+ \[
+ H = Y h(\Phi) + (\alpha X + g(\Phi)) \mathcal{W}^\alpha \mathcal{W}_\alpha.
+ \]
+ This is key. Holomorphicity is what is forcing non-renormalization.
+
+ Now in the limit $Y \to 0$, we must have $S \to S_{\mathrm{tree}}$. So we must have $h(\Phi) = W(\Phi)$. Similarly, taking $X \to 0$, we must have $\alpha = 1$. Now the number of powers of $X$ in each graph is
+ \[
+ N_X = V_X - I_X.
+ \]
+ Moreover, the number of loops is
+ \[
+ L = I_X - V_X + 1 = 1 - N_X.
+ \]
+ So we can only have $L = 0$ or $1$. So we are done.
+
+ Finally, we have to talk about $\hat{\xi}$. We know this has to be constant to preserve supersymmetry and gauge invariance. But this can potentially be a constant other than the original $\xi$. To argue that this doesn't change, one argues that the correction is proportional to the sum of charges in the theory, which must vanish or else another diagram diverges. % fix this
+\end{proof}
+
+\section{Supersymmetry breaking}
+We say supersymmetry is broken if the vacuum state is such that $Q_\alpha \bket{\Omega} \not= 0$. This is the case if and only if the vacuum energy $E$ is strictly positive.
+
+\subsection{\texorpdfstring{$F$}{F}-term supersymmetry breaking}
+If we have a chiral superfield $\Phi = \{\varphi, \psi, F\}$, then under supersymmetry, the transformations are given by
+\begin{align*}
+ \delta \varphi &= \sqrt{2} \varepsilon \psi\\
+ \delta \psi &= \sqrt{2} \varepsilon F + i \sqrt{2} \sigma^\mu \bar{\varepsilon} \partial_\mu \varphi\\
+ \delta F &= i \sqrt{2} \bar{\varepsilon} \bar{\sigma}^\mu \partial_\mu \psi.
+\end{align*}
+Note that to preserve Lorentz invariance, the vacuum expectation values of $\psi$ and $\partial_\mu \varphi$ must vanish. So the only chance for non-zero vacuum energy is $\bra F\ket \not= 0$, in which case $\delta \psi \not= 0$. This $\psi$ is called the \term{Goldstone fermion}, or a Goldstino. This is consistent with our previous observation that the scalar potential is
+\[
+ V = FF^* = \left| \frac{\partial W}{\partial \varphi} \right|^2.
+\]
+The simplest case where this happens is the \term{O'Raifeartaigh model}. This has three chiral superfields $\Phi_1, \Phi_2, \Phi_3$, with
+\[
+ K = \Phi_i^\dagger \Phi_i,\quad W = g \Phi_1( \Phi_3^2 - m^2) + M \Phi_2 \Phi_3,\quad M \gg m \not= 0.
+\]
+Then we have
+\begin{align*}
+ - F_1 &= \frac{\partial W}{\partial \varphi_1} = g (\varphi_3^2 - m^2)\\
+ - F_2 &= \frac{\partial W}{\partial \varphi_2} = M \varphi_3\\
+ - F_3 &= \frac{\partial W}{\partial \varphi_3} = 2g \varphi_1 \varphi_3 + M \varphi_2.
+\end{align*}
+We immediately see that it is impossible for $F_1 = F_2 = 0$, for we would simultaneously need $\varphi_3 = \pm m$ and $\varphi_3 = 0$. We can calculate
+\[
+ V = g^2 |\varphi_3^2 - m^2|^2 + M^2 |\varphi_3|^2 + |2g \varphi_1 \varphi_3 + M \varphi_2|^2.
+\]
+This has a minimum at
+\[
+ \bra \varphi_2\ket = \bra \varphi_3\ket = 0,
+\]
+and $\bra \varphi_1\ket$ can be arbitrary. So the minimum has a ``flat'' direction along $\varphi_1$. At the minimum, we have
+\[
+ \bra V\ket = g^2 m^4 > 0.
+\]
+We now want to compute the masses of the scalars, given by the eigenvalues of $\frac{\partial^2 V}{\partial \varphi_i \partial \varphi_j^*}$ at the minimum. Writing $\varphi_3 = a + ib$ with $a, b$ real, we find
+\[
+ m_{\varphi_1} = 0,\quad m_{\varphi_2} = M,\quad m_a^2 = M^2 - 2g^2 m^2,\quad m_b^2 = M^2 + 2 g^2 m^2.
+\]
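These mass splittings can be checked by expanding $V$ to quadratic order about the minimum. A sketch with sympy, restricting to $\varphi_1 = \varphi_2 = 0$ and writing $\varphi_3 = a + ib$:

```python
import sympy as sp

a, b, g, m, M = sp.symbols('a b g m M', real=True)
phi3 = a + sp.I*b

# O'Raifeartaigh potential restricted to phi1 = phi2 = 0
f = phi3**2 - m**2
V = sp.expand(g**2 * f * sp.conjugate(f) + M**2 * phi3 * sp.conjugate(phi3))

# Quadratic (mass) terms around the minimum a = b = 0:
# V = g^2 m^4 + m_a^2 a^2 + m_b^2 b^2 + quartic terms
ma2 = sp.diff(V, a, 2).subs({a: 0, b: 0}) / 2
mb2 = sp.diff(V, b, 2).subs({a: 0, b: 0}) / 2
print(sp.expand(ma2))  # -> M**2 - 2*g**2*m**2
print(sp.expand(mb2))  # -> M**2 + 2*g**2*m**2
```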
+We see that there is a splitting of the masses. The masses of the fermions are read off from the mass term $\frac{\partial^2 W}{\partial \varphi_i \partial \varphi_j} \psi_i \psi_j$, where at the minimum
+\[
+ \frac{\partial^2 W}{\partial \varphi_i \partial \varphi_j} =
+ \begin{pmatrix}
+ 0 & 0 & 0\\
+ 0 & 0 & M\\
+ 0 & M & 0
+ \end{pmatrix}.
+\]
+So we see that
+\[
+ m_{\psi_1} = 0,\quad m_{\psi_2} = m_{\psi_3} = M.
+\]
+So $\psi_1$ is the goldstino of the broken supersymmetry --- the mass of the fermion $\psi_3$ is no longer the same as the mass of the corresponding boson $\varphi_3$. However, this cannot possibly model real life, because in this scenario, if $\psi_3$ were a quark, then the corresponding squark would be lighter than the actual quark, so we would have seen it already!
+
+There are general results that this problem cannot be solved by renormalization group flow, i.e.\ the lighter boson cannot become heavier than the corresponding fermion particle. So we need to find other ways of breaking supersymmetry. % supersymmetry must be broken at tree level, because non-renormalization theorem says the potential doesn't change.
+
+In general, the supertrace
+\[
+ \mathrm{Str}\, M^2 \equiv \sum_j (-1)^{2j} (2j + 1) m_j^2 = 0,
+\]
+and this is not good if we want to break supersymmetry at tree level. This can be solved if we introduce supergravity.
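For the O'Raifeartaigh model above one can verify the supertrace rule numerically: each real scalar contributes with weight $+1$ and each Weyl fermion with weight $-2$. A sketch with hypothetical numerical values for the couplings:

```python
# Hypothetical numerical values for the couplings (any values work)
g, m, M = 0.5, 1.0, 2.0

# Real-scalar masses-squared: phi1 (flat direction, complex), phi2 (complex),
# and phi3 = a + i b with the split masses found above
boson_m2 = [0.0, 0.0, M**2, M**2, M**2 - 2*g**2*m**2, M**2 + 2*g**2*m**2]

# Weyl fermion masses-squared; each contributes with multiplicity 2j + 1 = 2
fermion_m2 = [0.0, M**2, M**2]

supertrace = sum(boson_m2) - 2*sum(fermion_m2)
print(supertrace)  # -> 0.0
```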
+
+There is also a $D$-term supersymmetry breaking for vector superfields $V = \{\lambda, A_\mu, D\}$ with $\delta \lambda = \varepsilon D$. Then $\bra D \ket \not= 0$ implies supersymmetry is broken, and $\lambda$ is the goldstino.
+
+In supergravity, $V$ is not positive definite, and $\bra F \ket \not= 0$ with $\bra V \ket = 0$ is possible. The gravitino ``eats'' the goldstino. This is a super-Higgs effect (not to be confused with a supersymmetric Higgs effect, where the normal Higgs effect happens in a supersymmetric setting).
+
+\section{The minimal supersymmetric standard model}
+The minimal supersymmetric standard model is the simplest way to turn the standard model into a supersymmetric one. This has a super-gauge potential $V$ associated to $\SU(3) \times \SU(2) \times \U(1)$, and chiral superfields
+\begin{align*}
+ Q_i &= (3, 2, -1/6) & \tilde{u}_i^c &= (\bar{3}, 1, 2/3) & \tilde{d}_i^c &= (\bar{3}, 1, -1/3)\\
+ L_i &= (1, 2, 1/2) & \tilde{e}_i^c &= (1, 1, -1)\\
+ H_1 &= (1, 2, 1/2) & H_2 &= (1, 2, -1/2)
+\end{align*}
+with $i = 1, 2, 3$ labelling the families, the hypercharges being fixed by invariance of the superpotential below.
+We need two Higgs doublets to cancel the triangle anomalies. This has K\"ahler potential
+\[
+ K = \Phi^\dagger e^{2qV} \Phi,
+\]
+and potential
+\begin{align*}
+ W &= y_1 Q H_2 \tilde{u}^c + y_2 Q H_1 \tilde{d}^c + y_3 L H_1 \tilde{e}^c + \mu H_1 H_2 + W_{\slashed{BL}}\\
+ W_{\slashed{BL}} &= \lambda_1 LL \tilde{e}^c + \lambda_2 L Q \tilde{d}^c + \lambda_3 \tilde{u}^c \tilde{d}^c \tilde{d}^c + \mu' LH_2.
+\end{align*}
+The $W_{\slashed{BL}}$ term breaks baryon and lepton number conservation. This is problematic, because it can lead to proton decay $p \to e^+ + \pi^0$. % inset diagram
+
+To avoid this, we impose $R$-parity.
+
+%\section{Physics in higher dimensions}
+%\subsection{Bosons in higher dimensions}
+%In higher dimensions, we have Poincar\'e generators $P^M, M^{NQ}$ where $M, N, Q = 0, \ldots, D - 1$. The algebra is formally the same, with
+%\begin{align*}
+% [P^M, M^{NQ}] &= -i (\eta^{MN} P^Q - \eta^{MQ} P^N)\\
+% [M^{MN}, M^{PQ}] &= i(\eta M + \eta M - \eta M - \eta M)\\ % check
+%\end{align*}
+%Note in particular that $M^{2j, 2j + 1}$ commute with each other. They can then be individually diagonalized, and we have a generalized notion of spin, given by eigenvalues of $M^{23}, M^{45}, \ldots$. If $\mathcal{O}$ is an operator with
+%\[
+% [\mathcal{O}, M^{01}] = i \omega \mathcal{O},
+%\]
+%we say the \term{weight} of $\mathcal{O}$ is $\omega$.
+%
+%The massless particles can be chosen to have momentum
+%\[
+% P^M = (E, E, 0, \ldots, 0).
+%\]
+%The little subgroup is then an extension of $\SO(D - 2)$. We can have the following types of bosonic fields:
+%\begin{enumerate}
+% \item Real scalar fields $\varphi(x^n)$ with helicity $0$ and $1$ degree of freedom.
+% \item Vector fields $A_M (x^M)$. This has helicities $\lambda = 1, 0$ (depending on the direction we are looking at). For example, for $J_{23}$, the eigenvalue of $A_i$ is $\lambda = 1 $ whenever $i = 2, 3$, and $0$ for $m = 4, \ldots, D - 1$.
+%
+% The number of degrees of freedom is $D - 2$.
+% \item The graviton $g_{MN}(x^M)$ has helicities $2, 1, 0$. The number of degrees of freedom is $ \frac{(D - 2)(D - 1)}{2} - 1 = \frac{D(D - 3)}{2}$.
+% \item In general, we can have $p$-forms for all $p$. For example, in $D$ dimensions, for a $2$-form, $A_{ij}$ and $A_{mn}$ have $\lambda = 0$, while $A_{im}$ has $\lambda = 1$. % i = 2, 3; m = 4, \ldots, D - 1
+% The number of degrees of freedom is $\binom{D - 2}{p}$.
+%
+%\end{enumerate}
+\printindex
+\end{document}
diff --git a/books/cring/categories.tex b/books/cring/categories.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ba6f6246d96747581c911058377ce6dfe1e881b8
--- /dev/null
+++ b/books/cring/categories.tex
@@ -0,0 +1,2126 @@
+\setcounter{chapter}{-1}
+\chapter{Categories}
+\label{categorychapter}
+
+
+The language of categories is not strictly necessary to understand the basics
+of commutative
+algebra. Nonetheless, it is extremely convenient and powerful. It will clarify
+many of the constructions made in the future when we can freely use terms like
+``universal property'' or ``adjoint functor.'' As a result, we begin this book
+with a brief introduction to category theory. We only scratch the surface; the
+interested reader can pursue further study in \cite{Ma98} or \cite{KS05}.
+
+
+Nonetheless, the reader is advised not to take the present chapter too
+seriously; it is quite reasonable to skip it for the moment, proceed to
+Chapter 1, and return here as a reference.
+
+\section{Introduction}
+
+\subsection{Definitions}
+
+Categories are supposed to be places where mathematical objects live.
+Intuitively, to any given type of structure (e.g. groups, rings, etc.),
+there should be a
+category of objects with that structure. These are not, of course, the only
+type of categories, but they will be the primary ones of concern to us in this
+book.
+
+
+The basic idea of a category is that there should be objects, and that one
+should be able to map between objects. These mappings could be functions, and
+they often are, but they don't have to be. Next, one has to be able to compose
+mappings, and associativity and unit conditions are required. Nothing else is required.
+
+\begin{definition}
+A \textbf{category} $\mathcal{C}$ consists of:
+\begin{enumerate}
+\item A collection of \textbf{objects},
+$\ob \mathcal{C}$.
+\item For each pair of objects $X, Y \in
+\ob \mathcal{C}$, a set
+of \textbf{morphisms} $\hom_{\mathcal{C}}(X, Y)$ (abbreviated $\hom(X,Y)$).
+\item For each object $X \in \ob\mathcal{C}$, there is an \textbf{identity
+morphism}
+$1_X \in \hom_{\mathcal{C}}(X, X)$ (often just abbreviated to $1$).
+\item There is a \textbf{composition law}
+$\circ: \hom_{\mathcal{C}}(Y, Z) \times \hom_{\mathcal{C}}(X, Y) \to
+\hom_{\mathcal{C}}(X, Z), (g, f) \mapsto g
+\circ f$ for every
+triple $X, Y, Z$ of objects.
+\item The composition law is unital and associative.
+In other words, if $f \in \hom_{\mathcal{C}}(X, Y)$, then $1_Y \circ f = f
+\circ 1_X = f$. Moreover, if $g \in \hom_{\mathcal{C}}(Y, Z)$ and $h \in
+\hom_{\mathcal{C}}(Z, W)$ for objects $Z, Y, W$, then
+\[ h \circ (g \circ f) = (h \circ g) \circ f \in \hom_{\mathcal{C}}(X, W). \]
+\end{enumerate}
+\end{definition}
+
+We shall write $f: X \to Y$ to denote an element of $\hom_{\mathcal{C}}(X, Y)$.
+In practice, $\mathcal{C}$ will often be the storehouse for mathematical objects: groups, Lie algebras,
+rings, etc., in which case these ``morphisms'' will just be ordinary functions.
+
+Here is a simple list of examples.
+\begin{example}[Categories of structured sets]
+\begin{enumerate}
+\item $\mathcal{C} = \mathbf{Sets}$; the objects are sets, and the morphisms
+are functions.
+\item $\mathcal{C} = \mathbf{Grps}$; the objects are groups, and the morphisms
+are maps of groups (i.e. homomorphisms).
+\item $\mathcal{C} = \mathbf{LieAlg}$; the objects are Lie algebras, and the
+morphisms are maps of Lie algebras (i.e. homomorphisms).\footnote{Feel free to
+omit if the notion of Lie algebra is unfamiliar.}
+\item $\mathcal{C} = \mathbf{Vect}_k$; the objects are vector spaces over a
+field $k$, and the morphisms are linear maps.
+\item $\mathcal{C} = \mathbf{Top}$; the objects are topological spaces, and the
+morphisms are continuous maps.
+\item This example is slightly more subtle. Here the category $\mathcal{C}$
+has objects consisting of topological spaces, but the morphisms between two
+topological spaces $X,Y$ are the \emph{homotopy classes} of maps $X \to Y$.
+Since composition respects homotopy classes, this is well-defined.
+\end{enumerate}
+\end{example}
+
+
+
+In general, the objects of a category do not have to form a set; they can
+be too large for
+that.
+For instance, the collection of objects in $\mathbf{Sets}$ does not form a set.
+
+
+\begin{definition}
+A category is \textbf{small} if the collection of objects is a set.
+\end{definition}
+
+The standard examples of categories are the ones above: structured sets
+together with structure-preserving maps. Nonetheless, one can easily give
+other examples that are not of this form.
+
+\begin{example}[Groups as categories] \label{BG}
+Let $G$ be a finite group. Then we can make a category $B_G$ where the objects
+just consist of one element $\ast$ and the maps $\ast \to \ast$ are the elements
+ $g \in G$. The identity is the identity of $G$ and composition is multiplication
+in the group.
+
+In this case, the category does not represent much of a class of objects, but
+instead we think of the composition law as the key thing. So a group is a
+special kind of (small) category.
+\end{example}
+
+\begin{example}[Monoids as categories]
+A monoid is precisely a category with one object. Recall that a \textbf{monoid}
+is a set together with an associative and unital multiplication (but which
+need not have inverses).
+\end{example}
+
+
+\begin{example}[Posets as categories] \label{posetcategory} Let $(P, \leq)$ be a partially ordered
+set (i.e.\ a poset), or more generally a preordered set. Then $P$ can be regarded as a (small) category, where the objects are the elements
+$p \in P$, and $$\hom_P(p, q) = \begin{cases}
+\ast & \text{if } p \leq q \\
+\emptyset & \text{otherwise}
+ \end{cases} $$
+\end{example}
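To make this concrete, here is a minimal sketch (in Python, with hypothetical names) of a preordered set regarded as a category: $\hom(p, q)$ has exactly one element when $p \leq q$ and is empty otherwise, and composition is forced by transitivity.

```python
# A preordered set as a category: objects are elements, and hom(p, q) has
# exactly one morphism (here the pair (p, q)) when p <= q, else none.
def make_poset_category(elements, leq):
    def hom(p, q):
        return [(p, q)] if leq(p, q) else []

    def compose(g, f):
        # f: X -> Y, g: Y -> Z; the composite exists by transitivity of <=
        (x, y1), (y2, z) = f, g
        assert y1 == y2, "morphisms not composable"
        return (x, z)

    identity = lambda p: (p, p)  # reflexivity gives the identity morphism
    return hom, compose, identity

# Divisibility preorder on {1, 2, 3, 6}
hom, compose, identity = make_poset_category([1, 2, 3, 6], lambda p, q: q % p == 0)
print(hom(2, 6))                             # -> [(2, 6)]
print(hom(2, 3))                             # -> []
print(compose(hom(2, 6)[0], hom(1, 2)[0]))   # -> (1, 6)
```

Associativity and the unit laws hold automatically here, because each hom-set has at most one element.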
+
+There is, however, a major difference between category theory and set theory.
+There is \textbf{nothing} in the language of categories that lets one look
+\emph{inside} an object. We think of vector spaces having elements, spaces
+having points, etc.
+By contrast, categories treat these kinds of things as invisible. There
+is nothing ``inside'' of an object $X \in \mathcal{C}$; the only way to
+understand $X$ is
+to understand the ways one can map into and out of $X$.
+Even if one is working with a category of ``structured sets,'' the underlying
+set of an object in this category is not part of the categorical data.
+However, there are instances in which the ``underlying set'' can be recovered
+as a $\hom$-set.
+
+\begin{example}
+In the category $\mathbf{Top}$ of topological spaces, one can in fact recover the
+``underlying set'' of a topological space via the hom-sets. Namely, for each
+topological space, the points of $X$ are the same thing as the mappings from a
+one-point space into $X$.
+That is, we have
+\[ |X| = \hom_{\mathbf{Top}}(\ast, X), \]
+where $\ast$ is the one-point space.
+
+Later we will say that the functor assigning to each
+space its underlying set is \emph{corepresentable.}
+\end{example}
+
+\begin{example}
+Let $\mathbf{Ab}$ be the category of abelian groups and group-homomorphisms. Again, the claim is that
+using only this category, one can recover the underlying set of a given abelian
+group $A$. This is because the elements of $A$ can be canonically identified
+with \emph{morphisms} $\mathbb{Z} \to A$ (based on where $1 \in \mathbb{Z}$
+maps).
+\end{example}
+
+
+\begin{definition}
+We say that $\mathcal{C}$ is a \textbf{subcategory} of the category
+$\mathcal{D}$ if the collection of objects of $\mathcal{C}$ is a subclass of
+the collection of objects of $\mathcal{D}$, and if whenever
+$X, Y \in \mathcal{C}$, we have
+\[ \hom_{\mathcal{C}}(X, Y) \subset \hom_{\mathcal{D}}(X, Y) \]
+with the laws of composition in $\mathcal{C}$ induced by those in $\mathcal{D}$.
+
+$\mathcal{C}$ is called a \textbf{full subcategory} if $\hom_{\mathcal{C}}(X,
+Y) = \hom_{\mathcal{D}}(X, Y)$ whenever $X, Y \in \mathcal{C}$.
+\end{definition}
+
+
+\begin{example}
+The category of abelian groups is a full subcategory of the category of groups.
+\end{example}
+
+
+\subsection{The language of commutative diagrams}
+
+While the language of categories is, of course, purely algebraic, it will be
+convenient for psychological reasons to visualize categorical arguments
+through diagrams.
+We shall introduce this notation here.
+
+Let $\mathcal{C}$ be a category, and let $X, Y$ be objects in $\mathcal{C}$.
+If $f \in \hom(X, Y)$, we shall sometimes write $f$ as an arrow
+\[ f: X \to Y \]
+or
+\[ X \stackrel{f}{\to} Y \]
+as if $f$ were an actual function.
+If $X \stackrel{f}{\to} Y$ and $Y \stackrel{g}{\to} Z$ are morphisms,
+composition $g \circ f: X \to Z$ can be visualized by the picture
+\[ X \stackrel{f}{\to} Y \stackrel{g}{\to} Z.\]
+
+Finally, when we work with several objects, we shall often draw collections of
+morphisms into diagrams, where arrows indicate morphisms between two objects.
+\begin{definition}
+A diagram will be said to \textbf{commute} if whenever one goes from one
+object in the diagram to another by following the arrows in the right order,
+one obtains the same morphism.
+For instance, the commutativity of the diagram
+\[ \xymatrix{
+X \ar[d]^f \ar[r]^{f'} & W \ar[d]^g \\
+Y \ar[r]^{g'} & Z
+}\]
+is equivalent to the assertion that
+\[ g \circ f' = g' \circ f \in \hom(X, Z). \]
+\end{definition}
+
+
+As an example, the assertion that the associative law holds in a category
+$\mathcal{C}$ can be stated as follows. For every quadruple $X, Y, Z, W \in
+\mathcal{C}$, the following diagram (of \emph{sets}) commutes:
+\[ \xymatrix{
+\hom(X, Y) \times \hom(Y, Z) \times \hom(Z, W) \ar[r] \ar[d] & \hom(X, Z)
+\times \hom(Z, W) \ar[d] \\
+\hom(X,Y) \times \hom(Y, W) \ar[r] & \hom(X, W).
+}\]
+Here the maps are all given by the composition laws in $\mathcal{C}$.
+For instance, the downward map to the left is the product of the identity on
+$\hom(X, Y)$ with the composition law $\hom(Y, Z) \times \hom(Z, W) \to \hom(Y,
+W)$.
+\subsection{Isomorphisms}
+
+Classically, one can define an isomorphism of groups as a bijection that
+preserves the group structure. This does not generalize well to categories, as
+we do not have a notion of ``bijection,'' as there is no way (in general) to
+talk about the ``underlying set'' of an object.
+Moreover, this definition does not generalize well to topological spaces:
+there, an isomorphism should not just be a bijection, but something which
+preserves the topology (in a strong sense), i.e. a homeomorphism.
+
+
+Thus we make:
+
+\begin{definition}
+An \textbf{isomorphism} between objects $X, Y$ in a category $\mathcal{C}$ is a
+map $f: X \to Y$ such that there exists $g: Y \to X$ with
+\[ g \circ f = 1_X, \quad f \circ g = 1_Y. \]
+
+Such a $g$ is called an \textbf{inverse} to $f$. \end{definition}
+
+\begin{remark}
+It is easy to check that the inverse $g$ is
+unique. Indeed, suppose $g, g'$ both were inverses to $f$. Then
+\[ g' = g' \circ 1_Y = g' \circ (f \circ g) = (g' \circ f) \circ g = 1_X
+\circ g = g. \]
+\end{remark}
+
+This notion of isomorphism is more appropriate than the idea of being one-to-one and onto: a bijection of
+topological spaces is not necessarily a homeomorphism.
+
+
+\begin{example}
+It is easy to check that an isomorphism in the category $\mathbf{Grp}$ is an
+isomorphism of groups, that an isomorphism in the category $\mathbf{Set}$ is a
+bijection, and so on.
+\end{example}
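The set-level case can be checked mechanically. Here is a small Python sketch (an illustration only, not part of the development): a map of finite sets is an isomorphism in $\mathbf{Set}$ exactly when it admits a two-sided inverse.

```python
# Maps of finite sets encoded as dicts; an isomorphism is a map f with a
# two-sided inverse g, i.e. g∘f = 1_X and f∘g = 1_Y.
def is_inverse_pair(f, g, X, Y):
    return all(g[f[x]] == x for x in X) and all(f[g[y]] == y for y in Y)

X = {0, 1, 2}
Y = {'a', 'b', 'c'}
f = {0: 'a', 1: 'b', 2: 'c'}        # a bijection X -> Y
g = {'a': 0, 'b': 1, 'c': 2}        # its inverse
print(is_inverse_pair(f, g, X, Y))  # True
```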
+
+We are supposed to be able to identify isomorphic objects. In the categorical
+sense, this means that if $X, Y$ are isomorphic via an isomorphism $f: X \to
+Y$, then mapping into $X$ should be the same as mapping into $Y$.
+Indeed, let
+$Z$ be another object of $\mathcal{C}$.
+Then we can define a map
+\[ \hom_{\mathcal{C}}(Z, X) \to \hom_{\mathcal{C}}(Z, Y) \]
+given by post-composition with $f$. This is a \emph{bijection} if $f$ is an
+isomorphism (the inverse is given by postcomposition with the inverse to $f$).
+Similarly, one can easily see that mapping \emph{out of} $X$ is essentially the
+same as mapping out of $Y$.
+Anything in general category theory that is true for $X$ should be true for $Y$
+(as general category theory can only try to understand $X$ in terms of maps
+into or out of it!).
+
+\begin{exercise}
+The relation ``$X, Y$ are isomorphic'' is an equivalence relation on the class
+of objects of a category $\mathcal{C}$.
+\end{exercise}
+
+\begin{exercise}
+Let $P$ be a preordered set, and make $P$ into a category as in
+\cref{posetcategory}. Then $P$ is a poset if and only if two isomorphic objects
+are equal.
+\end{exercise}
+
+For the next exercise, we need:
+\begin{definition}
+A \textbf{groupoid} is a category where every morphism is an isomorphism.
+\end{definition}
+\begin{exercise}
+ The
+sets $\hom_{\mathcal{C}}(A, A)$ are \emph{groups} if $\mathcal{C}$ is a
+groupoid and $A \in \mathcal{C}$. A group is essentially the same as a groupoid
+with one object.
+\end{exercise}
+
+\begin{exercise}
+
+Show that the following is a groupoid. Let $X$ be a topological space, and let
+$\Pi_1(X)$ be the category defined as follows: the objects are elements of $X$,
+and morphisms $x \to y$ (for $x,y \in X$) are homotopy classes of maps $[0,1]
+\to X$ (i.e. paths) that send $0 \mapsto x, 1 \mapsto y$. Composition of maps
+is given by concatenation of paths.
+(Check that, because one is working with \emph{homotopy classes} of paths,
+composition is associative.)
+
+$\Pi_1(X)$ is called the \textbf{fundamental groupoid} of $X$. Note that
+$\hom_{\Pi_1(X)}(x, x)$ is the \textbf{fundamental group} $\pi_1(X, x)$.
+\end{exercise}
+
+\section{Functors}
+
+A functor is a way of mapping from one category to another: each object is sent
+to another object, and each morphism is sent to another morphism. We shall
+study many functors in the sequel: localization, the tensor product, $\hom$,
+and fancier ones like $\tor, \ext$, and local cohomology functors.
+The main benefit of a functor is that it doesn't simply send objects to
+other objects, but also morphisms to morphisms: this allows one to get new commutative
+diagrams from old ones.
+This will turn out to be a powerful tool.
+
+
+\subsection{Covariant functors}
+Let $\mathcal{C}, \mathcal{D}$ be categories. If $\mathcal{C}, \mathcal{D}$
+are categories of structured sets (of possibly different types), there may be a
+way to associate objects in $\mathcal{D}$ to objects in $\mathcal{C}$. For
+instance, to every group $G$ we can associate its \emph{group ring}
+$\mathbb{Z}[G]$
+ (which we do not define here); to each topological space we can associate its
+ \emph{singular chain complex}, and so on.
+In many cases, given a map between objects in $\mathcal{C}$ preserving the
+relevant structure, there will be an induced map on the corresponding objects
+in $\mathcal{D}$. It is from here that we define a \emph{functor.}
+
+\begin{definition} \label{covfunc}
+A \textbf{functor} $F: \mathcal{C} \to \mathcal{D}$ consists of a function $F:
+\mathcal{C} \to \mathcal{D}$ (that is, a rule that assigns to each object
+in $\mathcal{C}$ an object of $\mathcal{D}$) and, for each pair $X, Y \in
+\mathcal{C}$,
+a map
+$F: \hom_{\mathcal{C}}(X, Y) \to \hom_{\mathcal{D}}(FX, FY)$, which preserves
+the identity
+maps and composition.
+
+In detail, the last two conditions state the following.
+\begin{enumerate}
+\item If $X \in
+\mathcal{C}$, then $F(1_X)$ is the identity morphism $1_{F(X)}: F(X) \to
+F(X)$.
+\item If $A \stackrel{f}{\to} B \stackrel{g}{\to} C$ are
+morphisms in $\mathcal{C}$,
+then $F(g \circ f) = F(g) \circ F(f)$ as morphisms $F(A) \to F(C)$.
+Alternatively, we can say that $F$ \emph{preserves commutative diagrams.}
+\end{enumerate}
+\end{definition}
+
+In the last statement of the definition, note that if
+\[ \xymatrix{
+X \ar[rd]^h \ar[r]^f & Y \ar[d]^g \\
+ & Z
+}\]
+is a commutative diagram in $\mathcal{C}$, then the diagram obtained by
+applying the functor $F$, namely
+\[ \xymatrix{
+F(X) \ar[rd]^{F(h)} \ar[r]^{F(f)} & F(Y) \ar[d]^{F(g)} \\
+ & F(Z)
+}\]
+also commutes. It follows that applying $F$ to more complicated commutative
+diagrams also yields new commutative diagrams.
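To make the two conditions of the definition concrete, here is a small Python sketch (an illustration with ad hoc encodings, not part of the text): the covariant power-set functor on finite sets, which sends $X$ to its set of subsets and $f$ to the direct-image map, preserves identities and composition.

```python
from itertools import chain, combinations

def F_obj(X):
    """All subsets of the finite set X, as frozensets."""
    X = list(X)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(X, r) for r in range(len(X) + 1))}

def F_mor(f):
    """The direct-image map F(f): F(X) -> F(Y) induced by f: X -> Y."""
    return lambda S: frozenset(f[x] for x in S)

X, Y = {0, 1}, {'a', 'b'}
f = {0: 'a', 1: 'b'}
g = {'a': 10, 'b': 10}
gf = {x: g[f[x]] for x in X}  # the composite g∘f in Set

# F preserves composition: F(g∘f) = F(g)∘F(f) on every subset of X ...
assert all(F_mor(gf)(S) == F_mor(g)(F_mor(f)(S)) for S in F_obj(X))
# ... and identities: F(1_X) = 1_{F(X)}.
assert all(F_mor({x: x for x in X})(S) == S for S in F_obj(X))
```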
+
+
+Let us give a few examples of functors.
+
+\begin{example}
+There is a functor $\mathbf{Sets} \to \mathbf{AbGrp}$ sending a set
+$S$ to the free abelian group on the set. (For the definition of a free abelian
+group, or more generally a free $R$-module over a ring $R$, see
+\cref{freemoduledef}.)
+\end{example}
+
+\begin{example} \label{pi0}
+Let $X$ be a topological space. Then to it we can associate the set $\pi_0(X)$
+of \emph{connected components} of $X$.
+
+Recall that the continuous image of a
+connected set is connected, so if $f: X \to Y$ is a continuous map and $X'
+\subset X$ connected, $f(X')$ is contained in a connected component of $Y$. It
+follows that $\pi_0$ is a functor $\mathbf{Top} \to \mathbf{Sets}$.
+In fact, it is a functor on the \emph{homotopy category} as well, because
+homotopic maps induce the same maps on $\pi_0$.
+\end{example}
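For finite combinatorial stand-ins this is computable. The Python sketch below (an illustration only, with a graph playing the role of the space) computes connected components and checks that a map of graphs sends each component of the source into a single component of the target, which is exactly what makes $\pi_0$ functorial.

```python
def components(vertices, edges):
    """Label each vertex of a finite graph by a component representative."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return {v: find(v) for v in vertices}

VX, EX = {1, 2, 3, 4}, [(1, 2)]        # components: {1, 2}, {3}, {4}
VY, EY = {'a', 'b'}, []
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}   # a map of graphs (edge (1,2) collapses)

cX, cY = components(VX, EX), components(VY, EY)
assert len(set(cX.values())) == 3
# Functoriality of pi_0: vertices in the same component of X land in the
# same component of Y, so f induces a well-defined map on components.
for u in VX:
    for w in VX:
        if cX[u] == cX[w]:
            assert cY[f[u]] == cY[f[w]]
```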
+
+
+\begin{example}
+Fix $n$.
+There is a functor from $\mathbf{Top} \to \mathbf{AbGrp}$
+(categories of topological spaces and abelian groups) sending a
+space $X$ to its $n$th homology group $H_n(X)$. We know that given a map of spaces
+$f: X \to Y$,
+we get a map of abelian groups $f_*: H_n(X) \to H_n(Y)$. See \cite{Ha02}, for
+instance.
+\end{example}
+
+We shall often need to compose functors. For instance, we will want to see
+that the \emph{tensor product} (to be defined later, see \cref{sec:tensorprod})
+is associative, which is really a statement about composing functors. The
+following (mostly self-explanatory) definition elucidates this.
+
+\begin{definition}\label{composefunctors}
+If $\mathcal{C}, \mathcal{D}, \mathcal{E}$ are categories, $F: \mathcal{C} \to
+\mathcal{D}, G: \mathcal{D} \to \mathcal{E}$ are covariant functors, then one
+can define a \textbf{composite functor}
+\[ G \circ F: \mathcal{C} \to \mathcal{E}. \]
+This sends an object $X \in \mathcal{C}$ to $G(F(X))$.
+Similarly, a morphism $f :X \to Y$ is sent to $G(F(f)): G(F(X)) \to G(F(Y))$.
+We leave the reader to check that this is well-defined.
+\end{definition}
+
+
+
+
+\begin{example}\label{categoryofcats}
+In fact, because we can compose functors, there is a \emph{category of
+categories.} Let $\mathbf{Cat}$ have objects as the small categories, and
+morphisms as functors. Composition is defined as in \cref{composefunctors}.
+\end{example}
+
+
+\begin{example}[Group actions] \label{groupact} Fix a group $G$.
+Let us understand what a functor $B_G \to \mathbf{Sets}$ is. Here $B_G$ is the
+category of \cref{BG}.
+
+The unique object $\ast$ of $B_G$ goes to some set $X$. For each element $g \in G$, we
+get a map $g: \ast \to \ast$ and thus a map $X \to X$. This is supposed to
+preserve the composition law (which in $G$ is just multiplication), as well as
+identities.
+
+In particular, we get maps $i_g: X \to X$ corresponding to each $g \in G$, such
+that the following diagram commutes for each $g_1, g_2 \in G$:
+\[ \xymatrix{
+X \ar[r]^{i_{g_1}} \ar[rd]_{i_{g_2g_1}} & X \ar[d]^{i_{g_2}} \\ & X.
+}\]
+Moreover, if $e \in G$ is the identity, then $i_e = 1_X$.
+So a functor $B_G \to \mathbf{Sets}$ is just a left $G$-action on a set $X$.
+\end{example}
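For a concrete instance, here is a Python sketch (an illustration only): the group $G = \mathbb{Z}/3$ acting on the set $X = \{0, 1, 2\}$ by addition, packaged as the maps $i_g$ above, with the functoriality conditions checked by brute force.

```python
G = range(3)  # Z/3, with addition mod 3 as the group law
X = range(3)

def i(g):
    """The map i_g: X -> X that the functor assigns to the morphism g."""
    return lambda x: (g + x) % 3

# Functoriality: i_{g2} ∘ i_{g1} = i_{g2 g1}, and i_e = 1_X.
assert all(i(g2)(i(g1)(x)) == i((g2 + g1) % 3)(x)
           for g1 in G for g2 in G for x in X)
assert all(i(0)(x) == x for x in X)
```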
+
+
+
+An important example of functors is given by the following. Let $\mathcal{C}$
+be a category of ``structured sets.'' Then, there is a functor $F: \mathcal{C}
+\to \textbf{Sets}$ that sends a structured set to the underlying set. For
+instance, there is a functor from groups to sets that forgets the group
+structure.
+More generally, suppose given two categories $\mathcal{C}, \mathcal{D}$, such
+that $\mathcal{C}$ can be regarded as ``structured objects in $\mathcal{D}$.''
+Then there is a functor $\mathcal{C} \to \mathcal{D}$ that forgets the
+structure.
+Such examples are called \emph{forgetful functors.}
+
+\subsection{Contravariant functors}
+Sometimes what we have described above are called \textit{covariant functors}.
+Indeed, we shall also be interested in similar objects that reverse the
+arrows, such as duality functors:
+
+\begin{definition}
+A \textbf{contravariant functor} $\mathcal{C}
+\stackrel{F}{\to}\mathcal{D}$ (between categories $\mathcal{C}, \mathcal{D}$)
+consists of data as in \cref{covfunc}, except that a map $X
+\stackrel{f}{\to} Y$ now goes to a map $F(Y)
+\stackrel{F(f)}{\to} F(X)$. Composites
+are required to be preserved, albeit in the other direction.
+In other words, if $X \stackrel{f}{\to} Y, Y \stackrel{g}{\to} Z$ are
+morphisms, then we require
+\[ F ( g \circ f) = F(f) \circ F(g): F(Z) \to F(X). \]
+\end{definition}
+
+We shall sometimes say just ``functor'' for \emph{covariant functor}. When we are
+dealing with a contravariant functor, we will always say the word
+``contravariant.''
+
+
+A contravariant functor also preserves commutative diagrams, except that the
+arrows have to be reversed. For instance, if $F: \mathcal{C} \to \mathcal{D}$
+is contravariant and the diagram
+\[ \xymatrix{
+A \ar[d] \ar[r] & C\\
+B \ar[ru]
+}\]
+is commutative in $\mathcal{C}$, then the diagram
+\[ \xymatrix{
+F(A) & \ar[l] \ar[ld] F(C)\\
+F(B) \ar[u]
+}\]
+commutes in $\mathcal{D}$.
+
+One can, of course, compose contravariant functors as in \cref{composefunctors}. But the composition of two
+contravariant functors will be \emph{covariant.} So there is no ``category of
+categories'' where the morphisms between categories are contravariant functors.
+
+Similarly as in \cref{groupact}, we have:
+
+\begin{example}
+A \textbf{contravariant} functor from $B_G$ (defined as in \cref{BG}) to $\mathbf{Sets}$ corresponds to a
+set with a \emph{right} $G$-action.
+\end{example}
+
+\begin{example}[Singular cohomology]
+In algebraic topology, one encounters contravariant functors on the homotopy
+category of topological spaces via the \emph{singular cohomology} functors $X
+\mapsto H^n(X; \mathbb{Z})$. Given a continuous map $f: X \to Y$, there is a
+homomorphism of groups
+\[ f^* : H^n(Y; \mathbb{Z}) \to H^n(X; \mathbb{Z}). \]
+\end{example}
+
+\begin{example}[Duality for vector spaces] \label{dualspace}
+On the category $\mathbf{Vect}$ of vector spaces over a field $k$, we
+have
+the contravariant functor
+\[ V \mapsto V^{\vee} \]
+sending a vector space to its dual $V^{\vee} = \hom(V,k)$.
+Given a map $V \to W$ of vector spaces, there is an induced map
+\[ W^{\vee} \to V^{\vee} \]
+given by the transpose.
+\end{example}
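In coordinates this functor is just the matrix transpose, and contravariance amounts to the identity $(g \circ f)^t = f^t \circ g^t$. A Python sketch (an illustration only, with ad hoc matrix helpers):

```python
def matmul(A, B):
    """Matrix product: (A·B)[i][j] = sum_k A[i][k] * B[k][j]."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

f = [[1, 2], [3, 4]]   # a map V -> W
g = [[0, 1], [1, 0]]   # a map W -> U
# The dual functor reverses composition: (g∘f)^t = f^t ∘ g^t.
assert transpose(matmul(g, f)) == matmul(transpose(f), transpose(g))
```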
+
+\begin{example}
+If we map $B_G \to B_G$ sending $\ast \mapsto \ast$ and $g \mapsto g^{-1}$, we
+get a
+contravariant functor.
+\end{example}
+
+We now give a useful (linguistic) device for translating between covariance and
+contravariance.
+
+\begin{definition}[The opposite category] \label{oppositecategory}
+Let $\mathcal{C}$ be a category. Define the \textbf{opposite category}
+$\mathcal{C}^{op}$ of $\mathcal{C}$ to have the same objects as
+$\mathcal{C}$ but such that the morphisms between $X,Y$ in
+$\mathcal{C}^{op}$
+are those between $Y$ and $X$ in $\mathcal{C}$.
+\end{definition}
+
+There is a contravariant functor $\mathcal{C} \to
+\mathcal{C}^{op}$.
+In fact, contravariant functors out of $\mathcal{C}$ are the \emph{same} as
+covariant functors out of $\mathcal{C}^{op}$.
+
+As a result, when results are stated for both covariant and contravariant
+functors, we can often reduce to the covariant case by using the
+opposite category.
+
+\begin{exercise}
+A map that is an isomorphism in $\mathcal{C}$ corresponds to an isomorphism in
+$\mathcal{C}^{op}$.
+\end{exercise}
+\subsection{Functors and isomorphisms}
+Now we want to prove a simple and intuitive fact: if isomorphisms allow one to
+say that one object in a category is ``essentially the same'' as another,
+functors should be expected to preserve this.
+\begin{proposition}
+If $f: X \to Y$ is an isomorphism in $\mathcal{C}$, and $F: \mathcal{C} \to
+\mathcal{D}$ is a functor, then $F(f): FX \to FY$ is an isomorphism.
+\end{proposition}
+
+The proof is quite straightforward, though there is an important point here.
+Note that the analogous result holds for \emph{contravariant} functors too.
+
+\begin{proof}
+If we have maps $f: X \to Y$ and $g : Y \to X$ such that the composites both
+ways are identities, then we can apply the functor $F$ to this, and we find
+that since
+\[ f \circ g = 1_Y, \quad g \circ f = 1_X, \]
+it must hold that
+\[ F(f) \circ F(g) = 1_{F(Y)}, \quad F(g) \circ F(f) = 1_{F(X)}. \]
+We have used the fact that functors preserve composition and identities. This
+implies that $F(f)$ is an isomorphism, with inverse $F(g)$.
+\end{proof}
+
+Categories have a way of making things so general that they become trivial.
+Hence, this material is called general abstract nonsense.
+Moreover, there is another philosophical point about category theory to
+be made here: often, it is the definitions, and not the proofs, that matter.
+For instance, what matters here is not the theorem, but the \emph{definition of
+an
+isomorphism.} It is a categorical one, and much more general than the usual
+notion via injectivity and surjectivity.
+
+
+\begin{example}
+As a simple example, $\left\{0,1\right\}$ and $[0,1]$ are not isomorphic in the
+homotopy category of topological spaces (i.e. are not homotopy equivalent)
+because $\pi_0([0,1]) = \ast$ while $\pi_0(\left\{0,1\right\}) $ has two
+elements.
+\end{example}
+
+\begin{example}
+More generally, the higher homotopy group functors $\pi_n$ (see \cite{Ha02}) can be used to show
+that the $n$-sphere $S^n$ is not homotopy equivalent to a point. For then
+$\pi_n(S^n, \ast)$ would be trivial, and it is not.
+\end{example}
+
+
+There is room, nevertheless, for something else. Instead of having
+something that sends objects to other objects, one could have something that
+sends an object to a map.
+
+
+
+\subsection{Natural transformations}
+
+
+
+Suppose $F, G: \mathcal{C} \to \mathcal{D}$ are functors.
+
+\begin{definition}
+A \textbf{natural transformation} $T: F \to G$ consists of the following data.
+For each $X \in \mathcal{C}$, there is a morphism $TX: FX \to GX$ satisfying the
+following
+condition. Whenever $f: X \to Y$ is a morphism, the following diagram must
+commute:
+\[ \xymatrix{
+FX \ar[d]^{TX }\ar[r]^{F(f)} & FY \ar[d]^{TY} \\
+GX \ar[r]^{G(f)} & GY
+}.\]
+
+If $TX$ is an isomorphism for each $X$, then we shall say that $T$ is a
+\textbf{natural isomorphism.}
+\end{definition}
+
+It is similarly possible to define the notion of a natural transformation
+between \emph{contravariant} functors.
+
+When we say that things are ``natural'' in the future, we will mean that the
+transformation between functors is natural in this sense.
+We shall use this language to state theorems conveniently.
+
+\begin{example}[The double dual]
+Here is the canonical example of ``naturality.''
+Let $\mathcal{C}$ be the category of finite-dimensional vector spaces over a
+given field $k$. Let us further restrict the category such that the only
+morphisms are the \emph{isomorphisms} of vector spaces.
+For each $V \in \mathcal{C}$, we know that there is an isomorphism
+\[ V \simeq V^{\vee} = \hom_k(V, k), \]
+because both have the same dimension.
+
+Moreover, the maps $V \mapsto V, V \mapsto V^{\vee}$ are both covariant functors on
+$\mathcal{C}$.\footnote{Note that the dual $\vee$ was defined as a
+\emph{contravariant} functor in \cref{dualspace}.} The first is the identity functor; for the second, if $f: V \to
+W$ is an isomorphism, then there is induced a transpose map $f^t: W^{\vee} \to V^{\vee}$
+(defined by sending a map $W \to k$ to the precomposition $V \stackrel{f}{\to}
+W \to k$), which is an isomorphism; we can take its inverse.
+So we have two functors from $\mathcal{C}$ to itself, the identity and the
+dual, and we know that $V \simeq V^{\vee}$ for each $V$ (though we have not
+chosen any particular set of isomorphisms).
+
+
+However, the isomorphism $V \simeq
+V^{\vee}$ \emph{cannot} be made natural. That is, there is no way of choosing
+isomorphisms
+\[ T_V: V \simeq V^{\vee} \]
+such that,
+whenever $f: V \to W$ is an isomorphism of vector spaces, the following diagram
+commutes:
+\[ \xymatrix{
+V \ar[r]^{f} \ar[d]^{T_V} & W \ar[d]^{T_W} \\
+V^{\vee} \ar[r]^{(f^t)^{-1}} & W^{\vee}.
+}\]
+Indeed, fix $d>1$, and choose $V = k^d$.
+Identify $V^{\vee}$ with $k^d$, and so the map $T_V$ is a $d$-by-$d$ matrix $M$
+with coefficients in $k$. The requirement is that for each \emph{invertible}
+$d$-by-$d$ matrix $N$, we have
+\[ (N^t)^{-1}M = MN, \]
+by considering the above diagram with $V = W = k^d$, and $f$ corresponding to
+the matrix $N$.
+This is impossible unless $M = 0$, by elementary linear algebra.
+
+Nonetheless, it \emph{is} possible to choose a natural isomorphism
+\[ V \simeq V^{\vee \vee}. \]
+To do this, given $V$, recall that $V^{\vee \vee}$ is the collection of maps
+$V^{\vee} \to k$. To give a map $V \to V^{\vee \vee}$ is thus the same as
+giving linear functions $l_v, v \in V$ such that $l_v: V \to k$ is linear in
+$v$. We can do this by letting $l_v$ be ``evaluation at $v$.''
+That is, $l_v$ sends a linear functional $\ell: V \to k$ to $\ell(v) \in k$. We
+leave it to the reader to check (easily) that this defines a homomorphism $V
+\to V^{\vee \vee}$, and that everything is natural.
+\end{example}
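In coordinates, the naturality of the double dual reflects the fact that transposing twice returns the original matrix: under the evaluation identifications, the double dual of a map is the map itself. A short Python check (an illustration only):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

f = [[1, 2, 3], [4, 5, 6]]  # a map V -> W in chosen bases
# f^{tt} = f: applying the dual functor twice gives back f itself.
assert transpose(transpose(f)) == f
```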
+
+
+
+
+
+\begin{exercise}
+ Suppose there are two functors $B_G \to
+\mathbf{Sets}$, i.e. $G$-sets. What is a natural transformation between them?
+\end{exercise}
+
+
+Natural transformations can be \emph{composed}. Suppose given functors $F, G,
+H: \mathcal{C} \to \mathcal{D}$, a natural
+transformation $T: F \to G$, and a natural transformation $U: G \to H$.
+Then, for each $X \in \mathcal{C}$, we have maps $TX: FX \to GX$ and $UX: GX
+\to HX$. Composing these gives a natural transformation $U \circ T: F
+\to H$, with $(U \circ T)X = UX \circ TX$.
+
+In fact, we can thus define a \emph{category} of functors
+$\mathrm{Fun}(\mathcal{C}, \mathcal{D})$ (at least if $\mathcal{C},
+\mathcal{D}$ are small). The objects of this category are the functors $F:
+\mathcal{C} \to \mathcal{D}$. The morphisms are natural transformations between
+functors. Composition of morphisms is as above.
+
+\subsection{Equivalences of categories}
+
+Often we want to say that two categories $\mathcal{C}, \mathcal{D}$ are ``essentially the same.'' One way
+of formulating this precisely is to say that $\mathcal{C}, \mathcal{D}$ are
+\emph{isomorphic} in the category of categories. Unwinding the definitions,
+this means that there exist functors
+\[ F: \mathcal{C} \to \mathcal{D}, \quad G: \mathcal{D} \to \mathcal{C} \]
+such that $F \circ G = 1_{\mathcal{D}}, G \circ F = 1_{\mathcal{C}}$.
+This notion, of \emph{isomorphism} of categories, is generally far too
+restrictive.
+
+For instance, we could consider the category of all finite-dimensional vector
+spaces over a given field $k$, and we could consider the full subcategory
+of vector spaces of the form $k^n$. Clearly both categories encode essentially
+the same mathematics, in some sense, but they are not isomorphic: one has
+only countably many objects, while the objects of the other form a proper
+class.
+Thus, we need a more refined way of saying that two categories are
+``essentially the same.''
+
+\begin{definition}
+Two categories $\mathcal{C}, \mathcal{D}$ are called \textbf{equivalent} if
+there are functors
+\[ F: \mathcal{C} \to \mathcal{D}, \quad G: \mathcal{D} \to \mathcal{C} \]
+and natural isomorphisms
+\[ F G \simeq 1_{\mathcal{D}}, \quad GF \simeq 1_{\mathcal{C}}. \]
+\end{definition}
+
+For instance, the category of all vector spaces of the form $k^n$ is equivalent
+to the category of all finite-dimensional vector spaces.
+One functor is the inclusion from vector spaces of the form $k^n$; the other
+functor maps a finite-dimensional vector space $V$ to $k^{\dim V}$. Defining
+the second functor properly is, however, a little more subtle.
+The next criterion will be useful.
+
+\begin{definition}
+A functor $F: \mathcal{C} \to \mathcal{D}$ is \textbf{fully faithful} if $F:
+\hom_{\mathcal{C}}(X, Y) \to \hom_{\mathcal{D}}(FX, FY)$ is a bijection for each pair of objects $X, Y \in
+\mathcal{C}$.
+$F$ is called \textbf{essentially surjective} if every object of $\mathcal{D}$
+is isomorphic to an object in the image of $F$.
+\end{definition}
+
+So, for instance, the inclusion of a full subcategory is fully faithful (by
+definition). The forgetful functor from groups to sets is not fully faithful,
+because not all functions between groups are automatically homomorphisms.
+
+
+\begin{proposition}
+A functor $F: \mathcal{C} \to \mathcal{D}$ induces an equivalence of categories
+if and only if it is fully faithful and essentially surjective.
+\end{proposition}
+\begin{proof}
+\add{this proof, and the definitions in the statement.}
+\end{proof}
+
+\section{Various universal constructions}
+
+Now that we have introduced the idea of a category and showed that a functor
+takes isomorphisms to isomorphisms, we shall take various steps to characterize objects in terms of
+maps (the most complete of which is the Yoneda lemma, \cref{yonedalemma}). In
+general category
+theory, this is generally all we \emph{can} do, since this is all the data we
+are given.
+We shall describe objects satisfying certain ``universal properties'' here.
+
+
+As motivation, we first discuss the concept of the ``product'' in terms of a
+universal property.
+
+\subsection{Products}
+Recall that if we have two sets $X$ and $Y$, the product $X\times Y$ is the set
+of all elements of the form $(x,y)$ where $x\in X$ and $y\in Y$. The product is
+also equipped with natural projections $p_1: X \times Y \to X$ and $p_2: X
+\times Y \to Y$ that take $(x,y)$ to $x$
+and $y$ respectively. Thus any element of $X\times Y$ is uniquely determined
+by where it projects to on $X$ and $Y$. In fact, this is the case more
+generally: if we have an index set $I$ and a product $X=\prod_{i\in I} X_i$,
+then an element $x\in X$ is determined uniquely by where the projections
+$p_i(x)$ land in $X_i$.
+
+To get into the categorical spirit, we should speak not of elements but of maps
+to $X$. Here is the general observation: if we have any other set $S$ with maps
+$f_i:S\rightarrow X_i$ then there is a unique map $S\rightarrow X=\prod_{i\in
+I}X_i$ given by sending $s\in S$ to the element $\{ f_i(s)\}_{i\in I}$. This
+leads to the following characterization of a product using only ``mapping
+properties.''
+
+\begin{definition} Let $\{X_i\}_{i\in I}$ be a collection of objects in some
+category $\mathcal{C}$. Then an object $P \in \mathcal{C}$ with projections $p_i: P\rightarrow X_i$
+is said to be the \textbf{product} $\prod_{i\in I} X_i$ if the following ``universal
+property'' holds:
+let $S$ be any other object in $\mathcal{C}$ with maps $f_i:S\rightarrow X_i$.
+Then there is a unique morphism $f:S\rightarrow P$ such that $p_i f = f_i$.
+\end{definition}
+
+In other words, to map into the product is the same as mapping into all the
+$\left\{X_i\right\}$ at once. We have thus given a precise description of how
+to map into the product.
+Note, however, that the product need not exist!
+If it does, however, we can express the above formalism by the following
+natural isomorphism
+of contravariant functors
+\[ \hom(\cdot, \prod_I X_i) \simeq \prod_I \hom(\cdot, X_i). \]
+This is precisely the meaning of the last part of the definition. Note that
+this observation shows that products in the category of \emph{sets} are really
+fundamental to the idea of products in any category.
+
+\begin{example} One of the benefits of this construction is that an actual
+category is not specified; thus when we take $\mathcal{C}$ to be
+$\mathbf{Sets}$, we
+recover the cartesian product notion of sets, but if we take $\mathcal{C}$ to
+be $\mathbf{Grp}$, we achieve the regular notion of the product of groups (the reader is
+invited to check these statements). \end{example}
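In $\mathbf{Sets}$ the universal property can be verified directly. A Python sketch (an illustration only): given maps $f_1: S \to X_1$ and $f_2: S \to X_2$, the map $s \mapsto (f_1(s), f_2(s))$ commutes with the projections, and any map doing so must agree with it.

```python
S = [0, 1, 2]
f1 = {0: 'a', 1: 'b', 2: 'a'}   # a map S -> X1
f2 = {0: 10, 1: 10, 2: 20}      # a map S -> X2

# The induced map f: S -> X1 x X2 with p_i ∘ f = f_i.
f = {s: (f1[s], f2[s]) for s in S}
p1, p2 = (lambda t: t[0]), (lambda t: t[1])

assert all(p1(f[s]) == f1[s] and p2(f[s]) == f2[s] for s in S)
# Uniqueness: a pair is determined by its two coordinates, so any map with
# the same projections agrees with f everywhere.
```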
+
+The categorical product is not unique, but it is as close to being so as
+possible.
+
+\begin{proposition}[Uniqueness of products]\label{produnique}
+Any two products of the collection $\left\{X_i\right\}$ in $\mathcal{C}$ are
+isomorphic by a unique isomorphism commuting with the projections.
+\end{proposition}
+
+This is a special case of a general ``abstract nonsense'' type result that we
+shall see many more of in the sequel.
+The precise statement is the following: let $X$ be a product of the
+$\left\{X_i\right\}$ with projections $p_i : X \to X_i$, and let $Y$ be a
+product of them too, with projections $q_i: Y \to X_i$.
+Then the claim is that there is a \emph{unique} isomorphism
+\[ f: X \to Y \]
+such that the diagrams below commute for each $i \in I$:
+\begin{equation} \label{prodcommutative} \xymatrix{
+X \ar[rd]^{p_i} \ar[rr]^f & & Y \ar[ld]_{q_i} \\
+& X_i.
+}\end{equation}
+\begin{proof}
+This is a ``trivial'' result, and is part of a general fact that objects
+with the same universal property are always canonically isomorphic. Indeed, note that the projections $p_i: X \to
+X_i$ and the fact that mapping into $Y$ is the same as mapping into all the
+$X_i$ gives a unique map $f: X \to Y$ making the diagrams
+\eqref{prodcommutative} commute. The same reasoning (applied to the $q_i: Y \to
+X_i$) gives a map $g: Y \to X$ making the diagrams
+\begin{equation} \label{prodcommutative2} \xymatrix{
+Y \ar[rd]^{q_i} \ar[rr]^g & & X \ar[ld]_{p_i} \\
+& X_i
+}\end{equation}
+commute. By piecing the two diagrams together, it follows that the composite $g \circ f$ makes the diagram
+\begin{equation} \label{prodcommutative3} \xymatrix{
+X \ar[rd]^{p_i} \ar[rr]^{g \circ f} & & X \ar[ld]_{p_i} \\
+& X_i
+}\end{equation}
+commute.
+But the identity $1_X: X \to X$ also would make \eqref{prodcommutative3}
+commute, and the \emph{uniqueness} assertion in the definition of the product
+shows that $g \circ f = 1_X$. Similarly, $f \circ g = 1_Y$. We are done.
+\end{proof}
+\begin{remark}
+ If we reverse the arrows in the above construction,
+the universal property obtained (known as the ``coproduct'') characterizes
+disjoint unions in the category of sets and free products in the category of
+groups.
+That is, to map \emph{out} of a coproduct of objects $\left\{X_i\right\}$ is the same as
+mapping out of each of these. We shall later study this construction more
+generally.
+\end{remark}
+
+
+\begin{exercise}
+Let $P$ be a poset, and make $P$ into a category as in \cref{posetcategory}.
+Fix $x, y \in P$. Show that the \emph{product} of $x,y$ is the greatest lower
+bound of $\left\{x,y\right\}$ (if it exists). This claim holds more generally
+for arbitrary subsets of $P$.
+
+In particular, consider the poset of subsets of a given set $S$. Then the
+``product'' in this category corresponds to the intersection of subsets.
+\end{exercise}
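The subset case is easy to verify by enumeration. A Python sketch (an illustration only): in the poset of subsets of $S = \{1,2,3\}$ ordered by inclusion, $x \cap y$ is a lower bound of $\{x, y\}$ and every lower bound lies below it, so it is the greatest lower bound, i.e. the categorical product.

```python
from itertools import chain, combinations

S = {1, 2, 3}
subsets = [set(c) for c in chain.from_iterable(
    combinations(sorted(S), r) for r in range(len(S) + 1))]

x, y = {1, 2}, {2, 3}
lower_bounds = [z for z in subsets if z <= x and z <= y]
assert (x & y) in lower_bounds                  # x ∩ y is a lower bound,
assert all(z <= (x & y) for z in lower_bounds)  # and the greatest one
```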
+
+We shall, in this section, investigate this notion of ``universality''
+more thoroughly.
+
+
+\subsection{Initial and terminal objects}
+
+We now introduce another example of universality, which is simpler but more
+abstract than the products introduced in the previous section.
+
+\begin{definition}
+Let $\mathcal{C}$ be a category. An \textbf{initial object} in $\mathcal{C}$ is an
+object $X \in \mathcal{C}$ with the property that $\hom_{\mathcal{C}}(X, Y)$ has one
+element for all $Y \in \mathcal{C}$.
+
+\end{definition}
+
+So there is a unique map out of $X$ into each $Y \in \mathcal{C}$.
+Note that this idea is faithful to the categorical spirit of describing objects
+in terms of their mapping properties. Initial objects are very easy to map
+\emph{out} of.
+
+
+\begin{example}
+If $\mathcal{C}$ is $\mathbf{Sets}$, then the empty set $\emptyset$ is an
+initial object. There is a unique map from the empty set into any other set;
+one has to make no decisions about where elements are to map when
+constructing a map $\emptyset \to X$.
+\end{example}
+
+\begin{example}
+In the category $\mathbf{Grp}$ of groups, the group consisting of one element
+is an initial object.
+\end{example}
+
+Note that the initial object in $\mathbf{Grp}$ is \emph{not} that in
+$\mathbf{Sets}$. This should not be too surprising, because $\emptyset$ cannot
+be a group.
+
+\begin{example}
+Let $P$ be a poset, and make it into a category as in \cref{posetcategory}.
+Then it is easy to see that an initial object of $P$ is the smallest object in
+$P$ (if it exists). Note that this is equivalently the product of all the
+objects in $P$. In general, the initial object of a category is not the product
+of all objects in $\mathcal{C}$ (this does not even make sense for a large
+category).
+\end{example}
+
+There is a dual notion, called a \textit{terminal object}, where every object
+can map into it in precisely one way.
+\begin{definition}
+A \textbf{terminal object} in a category $\mathcal{C}$ is an object $Y \in
+\mathcal{C}$ such that $\hom_{\mathcal{C}}(X, Y) = \ast$ for each $X \in \mathcal{C}$.
+\end{definition}
+
+Note that an initial object in $\mathcal{C}$ is the same as a terminal object
+in $\mathcal{C}^{op}$, and vice versa. As a result, it suffices to prove
+results about initial objects, and the corresponding results for terminal
+objects will follow formally.
+But there is a fundamental difference between initial and terminal objects.
+Initial objects are characterized by how one maps \emph{out of} them, while
+terminal objects are characterized by how one maps \emph{into} them.
+\begin{example}
+The one point set is a terminal object in $\mathbf{Sets}$.
+\end{example}
+
+
+
+The important thing about the next ``theorems'' is the conceptual framework.
+\begin{proposition}[Uniqueness of the initial (or terminal) object]
+\label{initialunique}
+Any two initial (resp. terminal) objects in $\mathcal{C}$ are isomorphic by a
+unique isomorphism.
+\end{proposition}
+\begin{proof}
+The proof is easy. We do it for terminal objects. Say $Y, Y'$ are
+terminal objects. Then $\hom(Y, Y')$ and $\hom(Y', Y)$ are one
+point sets. So there are unique maps $f: Y \to Y', g: Y' \to Y$, whose composites
+must be the identities: we know that $\hom(Y, Y), \hom(Y', Y')$
+are one-point sets, so the composites have no choice but to be the
+identities. This means that the maps $f: Y \to Y', g: Y' \to Y$ are
+isomorphisms.
+\end{proof}
+
+There is a philosophical point to be made here. We have characterized an object
+uniquely in terms of mapping properties. We have characterized it
+\emph{uniquely up to unique isomorphism,} which is really the best one can do
+in mathematics. Two one-point sets, for instance, are not literally the
+``same'' set, but they are isomorphic up to unique isomorphism.
+Note also that the argument was essentially similar to that of \cref{produnique}.
+
+In fact, we could interpret \cref{produnique} as a special case of
+\cref{initialunique}.
+If $\mathcal{C}$ is a category and $\left\{X_i\right\}_{i \in I}$ is a family
+of objects in $\mathcal{C}$, then we can define a category $\mathcal{D}$ as
+follows. An object of $\mathcal{D}$ is the data of an object $Y \in
+\mathcal{C}$ and morphisms $f_i: Y \to X_i$ for all $i \in I$.
+A morphism between objects $(Y, \left\{f_i: Y \to X_i\right\})$ and $(Z,
+\left\{g_i: Z \to X_i\right\})$ is
+a map $Y \to Z$ making the obvious diagrams commute. Then a product $\prod X_i$
+in $\mathcal{C}$ is the same thing as a terminal object in $\mathcal{D}$, as
+one easily checks from the definitions.
+
+\subsection{Push-outs and pull-backs}
+
+Let $\mathcal{C}$ be a category.
+
+Now we are going to talk about more examples of universal constructions, which can all be
+phrased via initial or terminal objects in some category. This,
+therefore, is the proof for the uniqueness up to unique
+isomorphism of \emph{everything} we will do in this
+section. Later we will present these in more generality.
+
+
+Suppose we have objects $A, B, C, X \in \mathcal{C}$.
+
+\begin{definition}
+A commutative square
+\[
+\xymatrix{
+A \ar[d] \ar[r] & B \ar[d] \\
+C \ar[r] & X}.
+\]
+is a \textbf{pushout square} (and $X$ is called the \textbf{push-out}) if,
+given a commutative diagram
+\[ \xymatrix{
+A \ar[r] \ar[d] & B \ar[d] \\
+C \ar[r] & Y \\
+}\]
+there is a unique map $X \to Y$ making the following diagram commute:
+\[
+\xymatrix{
+A \ar[d] \ar[r] & B \ar[d] \ar[rdd] \\
+C \ar[r] \ar[rrd] & X \ar[rd] \\
& & Y}
+\]
+Sometimes push-outs are also called \textbf{fibered coproducts}.
+We shall also write $X = C \sqcup_A B$.
+\end{definition}
+
In other words, to map out of $X = C \sqcup_A B$ into some object $Y$ is to
give maps $B \to Y$ and $C \to Y$ whose composites with the given maps $A \to
B$ and $A \to C$ agree.
+
+
+The next few examples will rely on notions to be introduced later.
+\begin{example}
+The following is a pushout square in the category of abelian groups:
+\[ \xymatrix{
+\mathbb{Z}/2 \ar[r] \ar[d] & \mathbb{Z}/4 \ar[d] \\
+\mathbb{Z}/6 \ar[r] & \mathbb{Z}/12
+}\]
+In the category of groups, the push-out is actually
+$\mathrm{SL}_2(\mathbb{Z})$, though we do not prove it. The point is that
+the property of a square's being a
+push-out is actually dependent on the category.
+
In general, to construct a push-out of abelian groups $C \sqcup_A B$, one
forms the direct sum $C \oplus B$ and quotients by the subgroup generated by
the elements $(a, -a)$ (where $a \in A$ is identified with its images in $C$
and in $B$).
We shall discuss this later, more thoroughly, for modules over a ring.
+\end{example}
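As a sanity check, the first example above can be verified by brute force.
The sketch below (illustrative only, not part of the text) assumes the
direct-sum description of push-outs of abelian groups: it computes
$(\mathbb{Z}/4 \oplus \mathbb{Z}/6)/\langle(2,3)\rangle$, where $(2,3)$ is the
image of the generator of $\mathbb{Z}/2$ in the first factor minus its image
in the second, and checks that the quotient is cyclic of order $12$.

```python
# Illustrative sketch: the push-out of Z/2 -> Z/4 (1 |-> 2) and
# Z/2 -> Z/6 (1 |-> 3) in abelian groups, computed as
# (Z/4 + Z/6) / <(2, -3)>.  Note (2, -3) = (2, 3) in Z/4 x Z/6.
from itertools import product

def pushout_Z4_Z6():
    G = list(product(range(4), range(6)))   # elements of Z/4 x Z/6
    H = set()                               # subgroup generated by (2, 3)
    x = (0, 0)
    while x not in H:
        H.add(x)
        x = ((x[0] + 2) % 4, (x[1] + 3) % 6)
    # cosets of H in Z/4 x Z/6
    cosets = {frozenset(((g[0] + h[0]) % 4, (g[1] + h[1]) % 6) for h in H)
              for g in G}
    return G, H, cosets

def order(elem, H):
    """Order of the coset of `elem` in the quotient (Z/4 x Z/6)/H."""
    n, x = 1, elem
    while x not in H:
        x = ((x[0] + elem[0]) % 4, (x[1] + elem[1]) % 6)
        n += 1
    return n

G, H, cosets = pushout_Z4_Z6()
assert len(cosets) == 12       # the quotient has 12 elements,
assert order((1, 1), H) == 12  # and an element of order 12, so it is Z/12
```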
+
+\begin{example}
+Let $R$ be a commutative ring and let $S$ and $Q$ be two commutative
+$R$-algebras. In other words, suppose
+we have two maps of rings $s:R\rightarrow S$ and $q:R\rightarrow Q$. Then we
+can fit this information together
+into a pushout square:
+
+\[ \xymatrix{
+R \ar[r] \ar[d] & S \ar[d] \\
+Q \ar[r] &X
+}\]
+It turns out that the pushout in this case is the tensor product of algebras
+$S\otimes_R Q$ (see \cref{tensprodalg} for the construction). This is particularly important
+in algebraic geometry as the dual construction will give the correct notion of
+``products'' in the category of ``schemes'' over
+a field.\end{example}
+
+\begin{proposition}
+Let $\mathcal{C}$ be any category.
+If the push-out of
+the diagram
+\[ \xymatrix{
+A \ar[d] \ar[r] & B \\
+C
+}\]
+exists, it is unique up to unique isomorphism.
+\end{proposition}
+\begin{proof}
+We can prove this in two ways. One is to suppose that there were two pushout
+squares:
+\[
+\xymatrix{
+A \ar[d] \ar[r] & B \ar[d] \\
+C \ar[r] & X \\
+}
+\quad \quad
+\xymatrix{
+A \ar[d] \ar[r] & B \ar[d] \\
+C \ar[r] & X' \\
+}
+\]
+Then there are unique maps $f:X \to X', g: X' \to X$ from the universal property.
+In detail, these maps fit into commutative diagrams
+\[
+\xymatrix{
+A \ar[d] \ar[r] & B \ar[d] \ar[rdd] \\
+C \ar[r] \ar[rrd] & X \ar[rd]^f\\
+ & & X'
+}
+\quad \quad
+\xymatrix{
+A \ar[d] \ar[r] & B \ar[d] \ar[rdd] \\
+C \ar[r] \ar[rrd] & X' \ar[rd]^g\\
+ & & X
+}
+\]
+Then $g \circ f$ and $f \circ g$ are the identities of $X, X'$ again by
+\emph{uniqueness} of the map in the definition of the push-out.
+
+Alternatively, we can phrase push-outs in terms of initial objects. We could
+consider the category of all diagrams as above,
+\[ \xymatrix{
+A \ar[d] \ar[r] & B \ar[d] \\
+C \ar[r] & D
+},\]
+where $A \to B, A \to C$ are fixed and $D$ varies.
+The morphisms in this category of diagrams consist of commutative
+diagrams. Then the initial
+object in this category is the push-out, as one easily checks.
+\end{proof}
+
Often when studying categorical constructions, one can create a kind of
``dual'' construction by reversing the direction of the arrows. This is
exactly the relationship between the push-out construction and the pull-back
construction, to be described below.
+So suppose we have two morphisms $A \to C$ and $B\to C$, forming a diagram
+\[ \xymatrix{
+& B \ar[d] \\
+A \ar[r] & C.
+}\]
+\begin{definition}
+The \textbf{pull-back} or \textbf{fibered product} of the above
diagram is an object $P$ with two morphisms $P\to A$ and $P\to
B$ such that the following diagram commutes:
+\[ \xymatrix {
+P \ar[d] \ar[r] & B \ar[d]\\
+A\ar[r] & C }\]
+Moreover, the object $P$ is required to be universal in the following sense: given any $P'$
+and maps $P'\to A$ and $P'\to B$ making the square commute, there is a
+unique map
+$P'\to P$ making the following diagram commute:
+\[
+\xymatrix{
+ P' \ar[rd] \ar[rrd] \ar[ddr] \\
+& P \ar[d] \ar[r] & B \ar[d] \\
+& A \ar[r] & C }\]
+We shall also write $P = B \times_C A$.
+\end{definition}
+
+
+\begin{example}
+In the category $\mathbf{Set}$ of sets, if we have sets $A, B, C$ with maps $f:
+A \to C, g: B \to C$, then the fibered product $A \times_C B$ consists of
+pairs $(a,b) \in A \times B$ such that $f(a) = g(b)$.
+\end{example}
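This description is directly computable; the following sketch (with made-up
example data, purely for illustration) builds the fibered product of two
finite sets over a common target and checks that the square commutes.

```python
# Illustrative sketch: the fibered product A x_C B in Sets, following
# the example above: pairs (a, b) with f(a) = g(b).
def fibered_product(A, B, f, g):
    """Return A x_C B together with its two projections."""
    P = [(a, b) for a in A for b in B if f(a) == g(b)]
    proj_A = lambda p: p[0]
    proj_B = lambda p: p[1]
    return P, proj_A, proj_B

# Example: both maps are reduction mod 3.
A, B = range(6), range(9)
f = lambda a: a % 3
g = lambda b: b % 3
P, pA, pB = fibered_product(A, B, f, g)
assert all(f(pA(p)) == g(pB(p)) for p in P)  # the square commutes
assert len(P) == 18  # each a pairs with the 3 elements of B in its fiber
```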
+
\begin{example}[Requires prerequisites not developed yet] This example may
be omitted without loss of continuity.
+
+As said above, the fact that the tensor product of algebras is
+a push-out in the category of
+commutative $R$-algebras allows for the correct notion of the ``product'' of
+schemes. We now elaborate on this example: naively one would think that we
+could pick the underlying space of the product scheme to just be the topological
+product of two Zariski topologies. However, it is an easy exercise to check
+that the product of two Zariski topologies in general is not Zariski! This
+motivates
+the need for a different concept.
+
Suppose we have a field $k$ and two $k$-algebras $A$ and $B$, and let
$X=\spec(A)$ and $Y=\spec(B)$ be the affine $k$-schemes corresponding to $A$
and $B$. Consider the following pull-back diagram:
+\[
+\xymatrix{
+X\times_{\spec(k)} Y \ar[d] \ar[r] &X \ar[d]\\
+Y \ar[r] &\spec(k) }\]
+
+Now, since $\spec$ is a contravariant functor, the arrows in this pull-back
+diagram have been flipped; so in fact, $X\times_{\spec(k)} Y$ is actually
+$\spec(A\otimes _k B)$. This construction is motivated by the following example:
+let $A=k[x]$ and $B=k[y]$. Then $\spec(A)$ and $\spec(B)$ are both affine lines
+$\mathbb{A}^1_k$ so we want a suitable notion of product that makes the product
+of $\spec(A)$ and $\spec(B)$ the affine plane. The pull-back construction is the
+correct one since $\spec(A)\times_{\spec(k)} \spec(B)=\spec(A\otimes_k
+B)=\spec(k[x,y])=\mathbb{A}^2_k$.
+\end{example}
+
+
+\subsection{Colimits}
+
+
+We now want to generalize the push-out.
+Instead of a shape with $A,B,C$, we do something more general.
+Start with a small category $I$: recall that \emph{smallness} means that the objects of $I$
form a set. $I$ is to be called the \textbf{indexing
category}. One is supposed to picture
that $I$ is something like the category
+\[
+\xymatrix{
+\ast \ar[d] \ar[r] & \ast \\
+\ast
+}
+\]
+or the category
+\[ \ast \rightrightarrows \ast. \]
We will formulate the notion of a \textbf{colimit}, which will specialize to
the push-out when $I$ is the first category above.
+
+
+So we will look at functors
+\[ F: I \to \mathcal{C}, \]
which, in the case of the three-element category above, will just
correspond to
+diagrams
+\[ \xymatrix{A \ar[d] \ar[r] & B \\ C}. \]
+
+We will call a \textbf{cone} on $F$ (this is an ambiguous term) an object $X
+\in \mathcal{C}$ equipped with maps $F_i \to X, \forall i \in I$ such that for
+all maps $i \to
+i' \in I$, the diagram below commutes:
+\[ \xymatrix{
+F_i \ar[d] \ar[r] & X \\
+F_{i'} \ar[ru]
+}.\]
+
+An example would be a cone on the three-element category above: then
+this is just a commutative diagram
+\[ \xymatrix{
+A \ar[r]\ar[d] & B \ar[d] \\
+C \ar[r] & D
+}.\]
+
+\newcommand{\colim}{\mathrm{colim}}
+
+\begin{definition}
+The \textbf{colimit} of the diagram $F: I \to \mathcal{C}$, written as $\colim
+F$ or $\colim_I F $ or $\varinjlim_I F$, if it exists, is a cone $F \to X$ with
+the property that if $F \to Y$ is any other cone, then there is a unique map $X
+\to Y$ making the diagram
+\[ \xymatrix{
+F \ar[rd] \ar[r] & X \ar[d] \\
+& Y
+}\]
+commute. (This means that the corresponding diagram with $F_i$ replacing $F$
+commutes for each $i \in I$.)
+\end{definition}
+
+We could form a category $\mathcal{D}$ where the objects are the cones $F \to
+X$, and the morphisms from $F \to X$ and $F \to Y$ are the maps $X \to Y$ that
+make all the obvious diagrams commute. In this case, it is easy to see that a
+\emph{colimit} of the diagram is just an initial object in $\mathcal{D}$.
+
+ In any case, we see:
+
+\begin{proposition}
+$\colim F$, if it exists, is unique up to unique isomorphism.
+\end{proposition}
+
+Let us go through some examples. We already looked at push-outs.
+
+\begin{example}
+Consider the category $I$ visualized as
+\[ \ast, \ast, \ast, \ast. \]
+So $I$ consists of four objects with no non-identity morphisms.
+A functor $F: I \to \mathbf{Sets}$ is just a list of four sets $A, B, C, D$.
+The colimit is just the disjoint union $A \sqcup B \sqcup C \sqcup D$. This is
+the universal property of the disjoint union. To map out of the disjoint union
+is the same thing as mapping out of each piece.
+\end{example}
+
+
+\begin{example}
+Suppose we had the same category $I$ but the functor $F$ took values in the
+category of abelian groups. Then $F$
+corresponds, again, to a list of four abelian groups. The colimit is the direct
+sum. Again, the direct sum is characterized by the same universal property.
+\end{example}
+
+\begin{example}
Suppose we had the same $I$ ($\ast, \ast, \ast, \ast$), but the functor took
its values in the category of groups. Then the colimit is the
+free product of the four groups.
+\end{example}
+
+\begin{example}
Suppose we had the same $I$ and the category $\mathcal{C}$ was that of
commutative rings with unit. Then the colimit is the tensor product (over
$\mathbb{Z}$).
+\end{example}
+
+So the idea of a colimit unifies a whole bunch of constructions.
+Now let us take a different example.
+
+\begin{example}
+Take
+\[ I = \ast \rightrightarrows \ast. \]
+So a functor $I \to \mathbf{Sets}$ is a diagram
+\[ A \rightrightarrows B. \]
+Call the two maps $f,g: A \to B$. To get the colimit, we take $B$ and mod out
+by the equivalence relation generated by $f(a) \sim g(a)$.
To hom out of this quotient is the same thing as homming out of $B$ in such a
way that the two composites with $f, g: A \to B$ agree.
+
Note that we must use the equivalence relation \textbf{generated} by $f(a)
\sim g(a)$, not just that relation itself; forming this closure can get
tricky.
+\end{example}
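Since it is the \emph{generated} relation that matters, a concrete computation
must close up under symmetry and transitivity; union-find does exactly this.
The sketch below (illustrative, with made-up data) computes a coequalizer in
$\mathbf{Sets}$ and exhibits an identification forced by transitivity alone.

```python
# Illustrative sketch: the coequalizer of f, g: A => B in Sets, i.e.
# B modulo the equivalence relation *generated* by f(a) ~ g(a),
# computed with union-find (which enforces transitivity).
def coequalizer(A, B, f, g):
    parent = {b: b for b in B}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a in A:
        parent[find(f(a))] = find(g(a))    # impose f(a) ~ g(a)
    classes = {}
    for b in B:
        classes.setdefault(find(b), set()).add(b)
    return list(classes.values())

# Example showing why "generated" matters: 0 ~ 1 and 1 ~ 2 force
# 0 ~ 2, even though no single a in A identifies 0 and 2 directly.
A = [0, 1]
B = [0, 1, 2, 3]
f = lambda a: a          # f: 0 |-> 0, 1 |-> 1
g = lambda a: a + 1      # g: 0 |-> 1, 1 |-> 2
Q = coequalizer(A, B, f, g)
assert sorted(map(sorted, Q)) == [[0, 1, 2], [3]]
```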
+
+\begin{definition}
+When $I$ is just a bunch of points $\ast, \ast, \ast, \dots$ with no
+non-identity morphisms, then the
+colimit over $I$ is called the \textbf{coproduct}.
+\end{definition}
+
The coproduct thus encompasses constructions such as direct sums, disjoint
unions, and tensor products.
If $\left\{A_i, i \in I\right\}$ is a collection of objects in some category,
then the universal property of the coproduct can be stated succinctly:
+\[ \hom_{\mathcal{C}}(\bigsqcup_I A_i, B) = \prod \hom_{\mathcal{C}}(A_i, B). \]
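In $\mathbf{Sets}$, this formula can be checked by brute force for small
finite sets; the sketch below (hypothetical example data, for illustration
only) counts the maps on each side.

```python
# Illustrative sketch: in Sets, maps out of a disjoint union A | B
# correspond to pairs (map out of A, map out of B), so the hom-sets
# have equal cardinality.
from itertools import product

def all_maps(X, Y):
    """All functions X -> Y, encoded as dicts."""
    return [dict(zip(X, vals)) for vals in product(Y, repeat=len(X))]

A, B, C = ['a1', 'a2'], ['b1'], [0, 1, 2]
disjoint_union = A + B   # genuinely disjoint, since the labels differ

lhs = all_maps(disjoint_union, C)                    # hom(A | B, C)
rhs = list(product(all_maps(A, C), all_maps(B, C)))  # hom(A,C) x hom(B,C)
assert len(lhs) == len(rhs) == 27   # 3^(2+1) = 3^2 * 3^1
```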
+
+\begin{definition}
+When $I$ is $\ast \rightrightarrows \ast$, the colimit is called the
+\textbf{coequalizer}.
+\end{definition}
+
+\begin{theorem} \label{coprodcoequalsufficeforcocomplete}
+If $\mathcal{C}$ has all coproducts and coequalizers, then it has all colimits.
+\end{theorem}
+
+\begin{proof}
+Let $F: I \to \mathcal{C}$ be a functor, where $I$ is a small category. We
+need to obtain an object $X$ with morphisms
+\[ Fi \to X, \quad i \in I \]
+such that for each $f: i \to i'$, the diagram below commutes:
+\[
+\xymatrix{
+Fi \ar[d] \ar[r] & Fi' \ar[ld] \\
+X
+}
+\]
+and such that $X$ is universal among such diagrams.
+
+To give such a diagram, however, is equivalent to giving a collection of maps
+\[ Fi \to X \]
+that satisfy some conditions. So $X$ should be thought of as a quotient of the
+coproduct $\sqcup_i Fi$.
Let us consider the coproduct $\sqcup_{f: i \to i'} Fi$, indexed by all the
morphisms $f$ of the category $I$, where $i$ denotes the source of $f$.
We construct two maps
\[ \sqcup_{f: i \to i'} Fi \rightrightarrows \sqcup_i Fi, \]
whose coequalizer will be the colimit of $F$. The first map sends the factor
$Fi$ indexed by $f: i \to i'$ identically to the factor $Fi$ of the target.
The second map sends the factor $Fi$ indexed by $f: i \to i'$ to the factor
$Fi'$ via $Ff$. Coequalizing these two maps imposes precisely the
commutativity conditions above, and one checks directly that the coequalizer
has the universal property of $\colim F$.
+\end{proof}
+
+\subsection{Limits}
+As in the example with pull-backs and push-outs and products and coproducts,
+one can define a limit by using the exact same universal property above
+just with
+all the arrows reversed.
+
+\begin{example} The product is an example of a limit where the indexing
+category is a small category $I$ with no morphisms other than the identity. This
+example
+shows the power of universal constructions; by looking at colimits and limits,
+a whole variety of seemingly unrelated mathematical constructions are shown
+to be
+in the same spirit.
+\end{example}
+
+\subsection{Filtered colimits}
+
+
\emph{Filtered colimits} are colimits
over special indexing categories $I$, which behave like directed partially
ordered sets.
These have several convenient properties as compared to general colimits.
+For instance, in the category of \emph{modules} over a ring (to be studied in
+\rref{foundations}), we shall see that filtered colimits actually
+preserve injections and surjections. In fact, they are \emph{exact.} This is
+not true in more general categories which are similarly structured.
+
+
+
+\begin{definition}
+An indexing category is \textbf{filtered} if the following hold:
+\begin{enumerate}
+\item Given $i_0, i_1 \in I$, there is a third object $i \in I$ such that both
+$i_0, i_1$ map into $i$.
+So there is a diagram
+\[ \xymatrix{
+i_0 \ar[rd] \\
+& i \\
+i_1 \ar[ru]
+}.\]
\item Given any two maps $i_0 \rightrightarrows i_1$, there exist $i$ and a
map $i_1 \to i$ such that the two composites $i_0 \rightrightarrows i_1 \to i$
are equal: intuitively, any two ways
of pushing an object into another can eventually be made the same.
+\end{enumerate}
+\end{definition}
+
+\begin{example}
+If $I$ is the category
+\[ \ast \to \ast \to \ast \to \dots, \]
+i.e. the category generated by the poset $\mathbb{Z}_{\geq 0}$, then that is
+filtered.
+\end{example}
+
+
+\begin{example}
+If $G$ is a torsion-free abelian group, the category $I$ of finitely generated
+subgroups of $G$ and inclusion maps is filtered. We don't actually need the
+lack of torsion.
+\end{example}
+
+\begin{definition}
+Colimits over a filtered category are called \textbf{filtered colimits}.
+\end{definition}
+
+\begin{example}
+Any torsion-free abelian group is the filtered colimit of its finitely
+generated subgroups, which are free abelian groups.
+\end{example}
+This gives a simple approach for showing that a torsion-free abelian group is
+flat.
+
+\begin{proposition}
+If $I$ is filtered\footnote{Some people say filtering.} and $\mathcal{C} =
+\mathbf{Sets}, \mathbf{Abgrp}, \mathbf{Grps}$, etc., and $F: I \to \mathcal{C}$
is a functor, then $\colim_I F$ exists and is given by the disjoint union of
the $F_i$, $i \in I$, modulo the relation that identifies $x \in F_i$ with
$x' \in F_{i'}$ whenever $x$ and $x'$ have the same image in some $F_{i''}$.
By the filtering hypothesis, this is already an equivalence relation.
+\end{proposition}
+
+The fact that the relation given above is transitive uses the filtering of the
+indexing set. Otherwise, we would need to use the relation generated by it.
+
+\begin{example}
+Take $\mathbb{Q}$. This is the filtered colimit of the free submodules
+$\mathbb{Z}(1/n)$.
+
Alternatively, choose a sequence of numbers $m_1, m_2, \dots$ such that for
every prime $p$ and every $n$, we have $p^n \mid m_i$ for $i \gg 0$. Then we
have a sequence of maps
+\[ \mathbb{Z} \stackrel{m_1}{\to} \mathbb{Z} \stackrel{m_2}{\to}\mathbb{Z}
+\to \dots. \]
+The colimit of this is $\mathbb{Q}$. There is a quick way of seeing this, which
+is left to the reader.
+\end{example}
+
When we have a functor $F: I \to \mathbf{Sets}, \mathbf{Grps},
\mathbf{Modules}$ taking values in a ``nice'' category (e.g. the category of
sets, groups, modules, etc.), one can construct the colimit by taking the
disjoint union of the $F_i$, $i \in I$, and quotienting by the equivalence
relation generated by $x \in F_i \sim x' \in F_{i'}$ whenever some $f: i \to
i'$ sends $x$ to $x'$. In the filtered case, this generated relation admits a
simple description.
+
+Another way of saying this is that we have the disjoint union of the $F_i$
+modulo the relation that $a \in F_i$ and $b \in F_{i'}$ are equivalent if and
+only if there is a later $i''$ with maps $i \to i'', i' \to i''$ such that
+$a,b$ both map to the same thing in $F_{i''}$.
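For a finite chain of finite sets, this recipe is directly computable. The
sketch below (illustrative only) pushes every element forward to the last
stage and groups elements by their eventual image. For a finite chain the
colimit is of course just the last set, since the last index is terminal; the
point of the computation is to see which earlier elements become identified.

```python
# Illustrative sketch: the colimit of a finite chain S_0 -> S_1 -> ...
# of finite sets, following the recipe above: x in S_i and y in S_j
# are identified iff they have the same image in some later S_k.
def chain_colimit(sets, maps):
    """sets: finite sets S_0..S_n; maps[i]: S_i -> S_{i+1}."""
    def push_to_end(i, x):
        for f in maps[i:]:
            x = f(x)
        return x
    # Classes of the colimit, keyed by the common image at the last stage.
    classes = {}
    for i, S in enumerate(sets):
        for x in S:
            classes.setdefault(push_to_end(i, x), []).append((i, x))
    return classes

# Example: a chain that successively collapses {0..5} -> {0,1,2} -> {0,1}.
S0, S1, S2 = range(6), range(3), range(2)
f0 = lambda x: x % 3
f1 = lambda x: x % 2
classes = chain_colimit([S0, S1, S2], [f0, f1])
assert len(classes) == 2   # the colimit has two elements
```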
+
+
One of the key properties of filtered colimits is that, in ``nice''
categories, they commute with finite limits.
+
+\begin{proposition}
+In the category of sets, filtered colimits and finite limits commute with each
+other.
+\end{proposition}
+
+The reason this result is so important is that, as we shall see, it will imply
+that in categories such as the category of $R$-modules, filtered colimits
+preserve \emph{exactness}.
+\begin{proof}
+Let us show that filtered colimits commute with (finite) products in the
+category of sets. The case of an equalizer is similar, and finite limits can be
+generated from products and equalizers.
+
+So let $I$ be a filtered category, and $\left\{A_i\right\}_{i \in I},
+\left\{B_i\right\}_{i \in I}$
+be functors from $I \to \mathbf{Sets}$.
+We want to show that
+\[ \varinjlim_I (A_i \times B_i) = \varinjlim_I A_i \times \varinjlim_I B_i . \]
+To do this, note first that there is a map in the direction $\to$ because of
+the natural maps $\varinjlim_I (A_i \times B_i) \to \varinjlim_I A_i$ and
+$\varinjlim_I (A_i \times B_i) \to \varinjlim_I B_i$.
+We want to show that this is an isomorphism.
+
+Now we can write the left side as the disjoint union $\bigsqcup_I (A_i \times
+B_i)$ modulo the equivalence relation that $(a_i, b_i)$ is related to $(a_j,
+b_j)$ if there exist morphisms $i \to k, j \to k$ sending $(a_i, b_i), (a_j,
+b_j)$ to the same object in $A_k \times B_k$.
For the right side, we have to work with pairs: an element of
 $\varinjlim_I A_i \times \varinjlim_I B_i$ is represented by a pair
 $(a_{i_1}, b_{i_2})$,
 with two pairs $(a_{i_1}, b_{i_2}), (a_{j_1}, b_{j_2})$ equivalent if there
 exist morphisms $i_1, j_1 \to k_1$ and $i_2, j_2 \to k_2$ such that both
 pairs have the same image in $A_{k_1} \times B_{k_2}$. It is easy to see
 that these amount to the same thing, because of the filtering condition: we
 can always replace an element of $A_{i} \times B_{j}$ by its image in some
 $A_{k} \times B_k$ for $k$ receiving maps from both $i$ and $j$.
+\end{proof}
+
+\begin{exercise}
+Let $A$ be an abelian group, $e: A \to A$ an \emph{idempotent} operator, i.e.
+one such that $e^2 = e$. Show that $eA$ can be obtained as the filtered colimit
+of
+\[ A \stackrel{e}{\to} A \stackrel{e}{\to} A \dots. \]
+\end{exercise}
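The following is a numerical illustration of the exercise for one concrete
choice of $A$ and $e$ (not a proof). Take $A = \mathbb{Z}/6$ and $e(x) = 3x$,
which is idempotent since $9x \equiv 3x \pmod 6$. In the chain $A \to A \to
\dots$, elements $(i, x)$ and $(j, y)$ are identified iff $e^{N-i}(x) =
e^{N-j}(y)$ for large $N$; by idempotence this is just $e(x) = e(y)$, so the
colimit classes correspond to the image $eA$.

```python
# Illustrative check (one example, not a proof): with A = Z/6 and the
# idempotent e(x) = 3x, the filtered colimit of A -e-> A -e-> ... is eA.
A = range(6)
e = lambda x: (3 * x) % 6

# e is idempotent:
assert all(e(e(x)) == e(x) for x in A)

# A few stages of the chain; (i, x) is identified with (j, y) iff
# e(x) == e(y), so each class is labeled by the stable value e(x).
pairs = [(i, x) for i in range(3) for x in A]
classes = {e(x) for (i, x) in pairs}
assert classes == {0, 3}              # two classes,
assert classes == {e(x) for x in A}   # namely the elements of eA
```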
+
+\subsection{The initial object theorem}
+
+We now prove a fairly nontrivial result, due to Freyd. This gives a sufficient
+condition for the existence of initial objects.
+We shall use it in proving the adjoint functor theorem below.
+
Let $\mathcal{C}$ be a category. Then we recall that $A \in \mathcal{C}$ is
\emph{initial} if for each $X \in \mathcal{C}$, there is a \emph{unique} map
$A \to X$.
Let us consider the weaker condition that for each $ X \in \mathcal{C}$, there
exists \emph{a} map $A \to X$.
+
\begin{definition} Suppose $\mathcal{C}$ has equalizers.
If $A \in \mathcal{C}$ is such that $\hom_{\mathcal{C}}(A, X) \neq \emptyset$
for each $X \in \mathcal{C}$, then $A$ is called \textbf{weakly initial.}
\end{definition}
+
+We now want to get an initial object from a weakly initial object.
+To do this, note first that if $A$ is weakly initial and $B$ is any object
+with a morphism $B \to A$, then $B$ is weakly initial too. So we are going to
+take
+our initial object to be a very small subobject of $A$.
+It is going to be so small as to guarantee the uniqueness condition of an
+initial object. To make it small, we equalize all endomorphisms.
+
+\begin{proposition} \label{weakinitial}
+If $A$ is a weakly initial object in $\mathcal{C}$,
+then the equalizer of all endomorphisms $A \to A$ is initial for $\mathcal{C}$.
+\end{proposition}
+\begin{proof}
Let $A'$ be this equalizer; it is endowed with a morphism $A'\to A$. Let us
recall what this means: for any two endomorphisms $A \rightrightarrows A$,
the two composites $A' \to A \rightrightarrows A$ are equal. Moreover, if $B
\to A$ is a morphism with this property, then $B$ factors uniquely through
$A'$.
+
+Now $A' \to A$ is a morphism, so by the remarks above, $A'$ is weakly initial:
+to each $X \in \mathcal{C}$, there exists a morphism $A' \to X$.
+However, we need to show that it is unique.
+
+So suppose given two maps $f,g: A' \rightrightarrows X$. We are going to show
+that they are equal. If not, consider their equalizer $O$.
+Then we have a morphism $O \to A'$ such that the post-compositions with $f,g$
+are equal. But by weak initialness, there is a map $A \to O$; thus we get a
+composite
+\[ A \to O \to A'. \]
We claim that this composite, call it $s: A \to A'$, is a left inverse of the
embedding $A' \to A$; that is, $s \circ i = 1_{A'}$, where $i: A' \to A$ is
the canonical map.
This will prove the result. Indeed, since $s$ factors through $O$, the two maps
\[ A \to O \to A' \rightrightarrows X \]
(namely $f \circ s$ and $g \circ s$) are equal. Precomposing these equal maps
with $i: A' \to A$ gives $f \circ (s \circ i) = g \circ (s \circ i)$, i.e.
$f = g$, so $f, g$ were equal in the first place.
+
+Thus we are reduced to proving:
\begin{lemma}
Let $A$ be an object of a category $\mathcal{C}$, and let $A'$ be the
equalizer of all endomorphisms of $A$. Then any morphism $A \to A'$ is a left
inverse of the inclusion $A' \to A$.
\end{lemma}
+\begin{proof}
+Consider the canonical inclusion $i: A' \to A$. We are given some map $s: A
+\to A'$; we must show that $si = 1_{A'}$.
+Indeed, consider the composition
+\[ A' \stackrel{i}{\to} A \stackrel{s}{\to} A' \stackrel{i}{\to} A .\]
+Now $i$ equalizes endomorphisms of $A$; in particular, this composition is the
+same as
+\[ A' \stackrel{i}{\to} A \stackrel{\mathrm{id}}{\to} A; \]
+that is, it equals $i$. So the map $si: A' \to A$ has the property that $isi =
+i$ as maps $A' \to A$. But $i$ being a monomorphism, it follows that $si =
+1_{A'}$.
+\end{proof}
+\end{proof}
+
+\begin{theorem}[Freyd] \label{initialobjectthm}
+Let $\mathcal{C}$ be a category admitting all small limits.\footnote{We shall
+later call such a category \textbf{complete}.} Then $\mathcal{C}$ has an initial
object if and only if the following \textbf{solution set condition} holds:
there is a set $\left\{X_i, i \in I\right\}$ of objects in $\mathcal{C}$ such
that every $X \in \mathcal{C}$ admits a morphism from one of the $X_i$.
+\end{theorem}
+
The idea is that the family $\left\{X_i\right\}$ is somehow weakly initial
\emph{together.}
\begin{proof}
If $\mathcal{C}$ has an initial object, we may take it alone as the family
$\left\{X_i\right\}$: we can map out (uniquely!) from an initial object into
anything, so in particular an initial object is weakly initial.

Conversely, suppose we have such a family $\left\{X_i\right\}$. Then the
product $\prod X_i$ is weakly initial. Indeed, if $X \in \mathcal{C}$,
choose some $i'$ and a morphism $X_{i'} \to X$ by the hypothesis. Then this
map composed with the projection from the product gives a map $\prod X_i \to
X_{i'} \to X$.
\cref{weakinitial} now implies that $\mathcal{C}$ has an initial object.
\end{proof}
+
+\subsection{Completeness and cocompleteness}
+\begin{definition}\label{completecat} A category $\mathcal{C}$ is said to be \textbf{complete} if for every
+functor $F:I\rightarrow \mathcal{C}$ where $I$ is a small category, the limit
+$\lim F$ exists (i.e. $\mathcal{C}$ has all small limits). If all colimits exist, then $\mathcal{C}$ is said to be
+\textbf{cocomplete}.
+\end{definition}
+
+If a category is complete, various nice properties hold.
+\begin{proposition} If $\mathcal{C}$ is a complete category, the following
+conditions are true:
+\begin{enumerate}
+\item{all (finite) products exist}
+\item{all pull-backs exist}
+\item{there is a terminal object}
+\end{enumerate}
+\end{proposition}
+\begin{proof} The proof of the first two properties is trivial since they can
+all be expressed as limits; for the proof of the existence of a terminal
+object, consider the empty diagram $F:\emptyset \rightarrow \mathcal{C}$. Then
+the
+terminal object is just $\lim F$.
+\end{proof}
+
Of course, if one dualizes everything, we get a theorem about cocomplete
categories, proved in essentially the same manner. More is true,
however: finite (co)completeness is equivalent to the
properties above if one requires the finiteness condition for the existence of
(co)products.
+
+\subsection{Continuous and cocontinuous functors}
+\subsection{Monomorphisms and epimorphisms}
We now wish to characterize monomorphisms and epimorphisms in a purely
categorical setting. In categories whose objects have underlying sets, the
notions of injectivity and surjectivity make sense, but in category theory one
does not, in a sense, have ``access'' to the internal structure of objects.
In this light, we make the following definition.
+
+\begin{definition}
+A morphism $f:X \to Y$ is a \textbf{monomorphism} if for any two morphisms
+$g_1:X'\rightarrow X$ and $g_2:X'\rightarrow X$, we have that $f g_1 = f g_2$
+implies $g_1=g_2$. A morphism $f:X\rightarrow Y$ is an \textbf{epimorphism} if for any two
+maps $g_1:Y\rightarrow Y'$ and $g_2:Y\rightarrow Y'$, we have that $g_1 f = g_2
+f$ implies $g_1 = g_2$.
+\end{definition}
+
+So $f: X \to Y$ is a monomorphism if whenever $X'$ is another object in
+$\mathcal{C}$, the map
+\[ \hom_{\mathcal{C}}(X', X) \to \hom_{\mathcal{C}}(X', Y) \]
is an injection (of sets). Dually, $f$ is an epimorphism if each map
$\hom_{\mathcal{C}}(Y, Y') \to \hom_{\mathcal{C}}(X, Y')$ is injective;
note that neither characterization makes any reference to \emph{surjections}
of sets.
+
+
+The reader can easily check:
+
+\begin{proposition} \label{compositeofmono}
+The composite of two monomorphisms is a monomorphism, as is the composite of
+two epimorphisms.
+\end{proposition}
+
+\begin{exercise}
+Prove \cref{compositeofmono}.
+\end{exercise}
+
+
+\begin{exercise}
+The notion of ``monomorphism'' can be detected using only the notions of
+fibered product and isomorphism. To see this, suppose $i: X \to Y$ is a
+monomorphism. Show that the diagonal
+\[ X \to X \times_Y X \]
+is an isomorphism. (The diagonal map is such that the two
+projections to $X$ both give the identity.) Conversely, show that if $i: X \to Y$ is any morphism such
+that the above diagonal map is an isomorphism, then $i$ is a monomorphism.
+
+Deduce the following consequence: if $F: \mathcal{C} \to \mathcal{D}$ is a
+functor that commutes with fibered products, then $F $ takes monomorphisms to
+monomorphisms.
+\end{exercise}
+
+
+\section{Yoneda's lemma}
+
+\add{this section is barely fleshed out}
+
+Let $\mathcal{C}$ be a category.
+In general, we have said that there is no way to study an object in a
+category other than by considering maps into and out of it.
+We will see that essentially everything about $X \in \mathcal{C}$ can be
+recovered from these hom-sets.
+We will thus get an embedding of $\mathcal{C}$ into a category of functors.
+
+\subsection{The functors $h_X$}
+
+We now use the structure of a category to construct hom functors.
+\begin{definition}
+Let $X \in \mathcal{C}$. We define the contravariant functor $h_X: \mathcal{C}
+\to \mathbf{Sets}$ via
+\[ h_X(Y) = \hom_{\mathcal{C}}(Y, X). \]
+\end{definition}
+
+This is, indeed, a functor. If $g: Y \to Y'$, then precomposition gives a map
+of sets
+\[ h_X(Y') \to h_X(Y), \quad f \mapsto f \circ g \]
+which satisfies all the usual identities.
+
As a functor, $h_X$ encodes \emph{all} the information about
how one can map into $X$.
It turns out that one can, in fact, essentially recover $X$ from $h_X$.
+
+\subsection{The Yoneda lemma}
+
+Let $X \stackrel{f}{\to} X'$ be a morphism in $\mathcal{C}$.
+Then for each $Y \in \mathcal{C}$, composition gives a map
+\[ \hom_{\mathcal{C}}(Y, X) \to \hom_{\mathcal{C}}(Y, X'). \]
+It is easy to see that this induces a \emph{natural} transformation
+\[ h_{X} \to h_{X'}. \]
+Thus we get a map of sets
+\[ \hom_{\mathcal{C}}(X, X') \to \hom(h_X, h_{X'}), \]
+where $h_X, h_{X'}$ lie in the category of contravariant functors $\mathcal{C}
+\to \mathbf{Sets}$.
+In other words, we have defined a \emph{covariant functor}
+\[ \mathcal{C} \to \mathbf{Fun}(\mathcal{C}^{op}, \mathbf{Sets}). \]
+This is called the \emph{Yoneda embedding.} The next result states that the
+embedding is fully faithful.
+
+\begin{theorem}[Yoneda's lemma]
+\label{yonedalemma}
+If $X, X' \in \mathcal{C}$, then the map
+$\hom_{\mathcal{C}}(X, X') \to \hom(h_X, h_{X'})$ is a bijection. That is,
+every natural transformation $h_X \to h_{X'}$ arises in one and only one way
+from a morphism $X \to X'$.
+\end{theorem}
+
+
\begin{theorem}[Strong Yoneda lemma]
Let $F: \mathcal{C} \to \mathbf{Sets}$ be any contravariant functor and let
$X \in \mathcal{C}$. Then the map
\[ \hom(h_X, F) \to F(X), \quad \eta \mapsto \eta_X(1_X) \]
is a bijection. Taking $F = h_{X'}$ recovers \cref{yonedalemma}.
\end{theorem}
+
+\subsection{Representable functors}
+
+We use the same notation of the preceding section: for a category
+$\mathcal{C}$ and $X \in \mathcal{C}$, we let $h_X$ be the contravariant
+functor $\mathcal{C} \to \mathbf{Sets}$ given by $Y \mapsto
+\hom_{\mathcal{C}}(Y, X)$.
+\begin{definition}
+A contravariant functor $F: \mathcal{C} \to \mathbf{Sets}$ is
+\textbf{representable} if it is naturally isomorphic to some $h_X$.
+\end{definition}
+
+The point of a representable functor is that it can be realized as maps into a
+specific object.
+In fact, let us look at a specific feature of the functor $h_X$.
+Consider the object $\alpha \in h_X(X)$ that corresponds to the identity.
+Then any morphism
+\[ Y \to X \]
+factors \emph{uniquely}
+as \[ Y \to X \stackrel{\alpha}{\to } X \]
(this is completely trivial!), so that
any element of $h_X(Y)$ is of the form $f^*(\alpha)$ for precisely one $f: Y \to X$.
+
+\begin{definition}
Let $F: \mathcal{C} \to \mathbf{Sets}$ be a contravariant functor. A
\textbf{universal object} for $F$ is a pair $(X, \alpha)$ where $X
\in \mathcal{C}$ and $\alpha \in F(X)$, such that the following condition holds:
+if $Y$ is any object and $\beta \in F(Y)$, then there is a unique $f: Y \to X$
+such that $\alpha$ pulls back to $\beta$ under $f$.
+
+In other words, $\beta = f^*(\alpha)$.
+\end{definition}
+
+So a functor has a universal object if and only if it is representable.
+Indeed, we just say that the identity $X \to X$ is universal for $h_X$, and
+conversely if $F$ has a universal object $(X, \alpha)$, then $F$ is naturally
+isomorphic to $h_X$ (the isomorphism $h_X \simeq F$ being given by pulling
+back $\alpha$ appropriately).
+
+
+The article \cite{Vi08} by Vistoli contains a good introduction to and several
+examples of this theory.
+Here is one of them:
+
+\begin{example}
+Consider the contravariant functor $F: \mathbf{Sets} \to \mathbf{Sets}$ that
+sends any set $S$ to its power set $2^S$ (i.e. the collection of subsets).
+This is a contravariant functor: if $f: S \to T$, there is a morphism
+\[ 2^T \to 2^S, \quad T' \mapsto f^{-1}(T'). \]
+
+This is a representable functor. Indeed, the universal object can be taken as
+the pair
+\[ ( \left\{0,1\right\}, \left\{1\right\}). \]
+
To understand this, note that a subset $S'$ of $S$ determines its
\emph{characteristic function} $\chi_{S'}: S \to \left\{0,1\right\}$, which
takes the value $1$ on $S'$ and $0$ elsewhere.
+If we consider $\chi_{S'}$ as a morphism $ S \to \left\{0,1\right\}$, we see
+that
+\[ S' = \chi_{S'}^{-1}(\{1\}). \]
+Moreover, the set of subsets is in natural bijection with the set of
+characteristic functions, which in turn are precisely \emph{all} the maps $S
+\to \left\{0,1\right\}$. From this the assertion is clear.
+\end{example}
+
+We shall meet some elementary criteria for the representability of
+contravariant functors in the next subsection. For now, we note\footnote{The
+reader unfamiliar with algebraic topology may omit these remarks.} that in
+algebraic topology, where one often works with the \emph{homotopy category} of
+pointed CW complexes (morphisms being pointed continuous maps modulo
+homotopy), any contravariant functor that satisfies two relatively mild
+conditions (a Mayer-Vietoris condition and a condition on coproducts) is
+automatically representable by a theorem of Brown. In particular, this implies that the
+singular cohomology functors $H^n(-, G)$ (with coefficients in some group $G$)
+are representable; the representing objects are the so-called
+Eilenberg-MacLane spaces $K(G,n)$. See \cite{Ha02}.
+
+
+\subsection{Limits as representable functors}
+
+\add{}
+
+\subsection{Criteria for representability}
+
+Let $\mathcal{C}$ be a category.
+We saw in the previous subsection that a representable functor must send
+colimits to limits.
+We shall now see that there is a converse under certain set-theoretic
+conditions.
+For simplicity, we start by stating the result for corepresentable functors.
+
+\begin{theorem}[(Co)representability theorem]
+Let $\mathcal{C}$ be a complete category, and let $F: \mathcal{C} \to
+\mathbf{Sets}$ be a covariant functor. Suppose $F$ preserves limits and satisfies the solution set condition:
+there is a set of objects $\left\{Y_\alpha\right\}$ such that, for any $X \in
+\mathcal{C}$ and $x \in F(X)$, there is a morphism
+\[ Y_\alpha \to X \]
+carrying some element of $F(Y_\alpha)$ onto $x$.
+
+Then $F$ is corepresentable.
+\end{theorem}
+\begin{proof}
+To $F$, we associate the following \emph{category} $\mathcal{D}$. An object of
+$\mathcal{D}$ is a pair $(x, X)$ where $x \in F(X)$ and $X \in \mathcal{C}$.
+A morphism between $(x, X)$ and $(y, Y)$ is a map
+\[ f:X \to Y \]
+that sends $x$ into $y$ (via $F(f): F(X) \to F(Y)$).
+It is easy to see that $F$ is corepresentable if and only if there is an initial
+object in this category; this initial object is the ``universal object.''
+
+We shall apply the initial object theorem, \cref{initialobjectthm}. Let us first verify that
+$\mathcal{D}$ is complete; this follows because $\mathcal{C}$ is and $F$
+preserves limits. So, for instance, the product of $(x, X)$ and $(y, Y)$ is
+$((x,y), X \times Y)$; here $(x,y)$ is the element of $F(X) \times F(Y) = F(X
+\times Y)$.
+The solution set condition states that there is a weakly
+initial family of objects, and the initial object theorem now implies that
+there is an initial object.
+\end{proof}
+\section{Adjoint functors}
+
+According to MacLane, ``Adjoint functors arise everywhere.'' We shall see
+several examples of adjoint functors in this book (such as $\hom$ and the
+tensor product). The fact that a functor has an adjoint often immediately
+implies useful properties about it (for instance, that it commutes with either
+limits or colimits); this will lead, for instance, to conceptual arguments
+behind the right-exactness of the tensor product later on.
+
+
+\subsection{Definition}
+
+Suppose $\mathcal{C}, \mathcal{D}$ are categories, and let $F: \mathcal{C} \to
+\mathcal{D}, G: \mathcal{D} \to \mathcal{C}$ be (covariant) functors.
+
+\begin{definition}
+$F, G$ are \textbf{adjoint functors} if there is a natural isomorphism
+\[ \hom_{\mathcal{D}}(Fc, d) \simeq \hom_{\mathcal{C}}(c, Gd) \]
+whenever $c \in \mathcal{C}, d \in \mathcal{D}$. $F$ is said to be the
+\textbf{left adjoint,} and $G$ is the \textbf{right adjoint.}
+\end{definition}
+
+Here ``natural'' means that the two sides are to be regarded
+as functors $\mathcal{C}^{op} \times \mathcal{D} \to \mathbf{Sets}$.
+
+\begin{example}
+There is a simple pair of adjoint functors between $\mathbf{Sets}$ and $\mathbf{AbGrp}$. Here
+$F$ sends a set $A$ to the free abelian group (see \cref{} for a discussion
+of free modules over arbitrary rings) $\mathbb{Z}[A]$, while $G$ is
+the ``forgetful'' functor that sends an abelian group to its underlying set.
+Then $F$ and $G$ are adjoint. That is, to give a group homomorphism
+\[ \mathbb{Z}[A] \to B \]
+for an abelian group $B$
+is the same as giving a map of \emph{sets}
+\[ A \to B. \]
+This is precisely the defining property of the free abelian group.
+\end{example}
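+
+To make the adjunction concrete, here is a small worked instance (a routine
+check, included for illustration). Take $A = \left\{a_1, a_2\right\}$, so that
+$\mathbb{Z}[A] \simeq \mathbb{Z}^2$. Then
+\[ \hom_{\mathbf{AbGrp}}(\mathbb{Z}^2, B) \simeq B \times B \simeq
+\hom_{\mathbf{Sets}}(\left\{a_1, a_2\right\}, B), \]
+since a homomorphism out of $\mathbb{Z}^2$ is determined by the (arbitrary)
+images of the two basis vectors. This is exactly the adjoint isomorphism in
+this case.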
+
+\begin{example}
+In fact, most ``free'' constructions are just left adjoints.
+For instance, recall the universal property of the free group $F(S)$ on a set $S$ (see
+\cite{La02}): to give a group-homomorphism $F(S) \to G$ for $G$ any group is
+the same as choosing an image in $G$ of each $s \in S$.
+That is,
+\[ \hom_{\mathbf{Grp}}(F(S), G) = \hom_{\mathbf{Sets}}(S, G). \]
+This states that the free functor $S \mapsto F(S)$ is left adjoint to the
+forgetful functor from $\mathbf{Grp}$ to $\mathbf{Sets}$.
+\end{example}
+
+\begin{example}
+The abelianization functor $G \mapsto G^{ab} = G/[G, G]$ from $\mathbf{Grp}
+\to \mathbf{AbGrp}$ is left adjoint to the
+inclusion $\mathbf{AbGrp} \to \mathbf{Grp}$.
+That is, if $G$ is a group and $A$ an abelian group, there is a natural
+correspondence between homomorphisms $G \to A$ and $G^{ab} \to A$.
+Note that $\mathbf{AbGrp}$ is a subcategory of $\mathbf{Grp}$ such that the
+inclusion admits a left adjoint; in this situation, the subcategory is called
+\textbf{reflective.}
+\end{example}
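+
+To spell out why this correspondence holds (a standard verification, included
+here for completeness): if $\phi: G \to A$ is a homomorphism into an abelian
+group, then
+\[ \phi(ghg^{-1}h^{-1}) = \phi(g)\phi(h)\phi(g)^{-1}\phi(h)^{-1} = 1, \]
+so $\phi$ kills the commutator subgroup $[G, G]$ and factors uniquely through
+$G^{ab} = G/[G, G]$. Conversely, any homomorphism $G^{ab} \to A$ may be
+precomposed with the projection $G \to G^{ab}$; the two constructions are
+mutually inverse, giving the bijection
+$\hom_{\mathbf{Grp}}(G, A) \simeq \hom_{\mathbf{AbGrp}}(G^{ab}, A)$.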
+
+
+
+\subsection{Adjunctions}
+
+The fact that two functors are adjoint is encoded by a simple set of algebraic
+data between them.
+To see this, suppose $F: \mathcal{C} \to \mathcal{D}, G: \mathcal{D} \to \mathcal{C}$ are
+adjoint functors.
+For any object $c \in \mathcal{C}$, we know that
+\[ \hom_{\mathcal{D}}(Fc, Fc) \simeq \hom_{\mathcal{C}}(c, GF c), \]
+so that the identity morphism $Fc \to Fc$ (which is natural in $c$!) corresponds to a map $c \to GFc$
+that is natural in $c$, or equivalently a natural
+transformation
+\[ \eta: 1_{\mathcal{C}} \to GF. \]
+Similarly, we get a natural transformation
+\[ \epsilon: FG \to 1_{\mathcal{D}} \]
+where the map $FGd \to d$ corresponds to the identity $Gd \to Gd$ under the
+adjoint correspondence.
+Here $\eta$ is called the \textbf{unit}, and $\epsilon$ the \textbf{counit.}
+
+These natural transformations $\eta, \epsilon$ are not simply arbitrary.
+We are, in fact, going to show that they determine the isomorphism
+$\hom_{\mathcal{D}}(Fc, d) \simeq
+\hom_{\mathcal{C}}(c, Gd)$. This will be a little bit of diagram-chasing.
+
+We know that the isomorphism $\hom_{\mathcal{D}}(Fc, d) \simeq
+\hom_{\mathcal{C}}(c, Gd)$ is \emph{natural}. In fact, this is the key point.
+Let $\phi: Fc \to d$ be any map.
+Then there is a morphism $(c, Fc) \to (c, d) $ in the product category
+$\mathcal{C}^{op} \times \mathcal{D}$; by naturality of the adjoint
+isomorphism, we get a commutative square of sets
+\[ \xymatrix{
+\hom_{\mathcal{D}}(Fc, Fc) \ar[r]^{\mathrm{adj}} \ar[d]^{\phi_*} & \hom_{\mathcal{C}}(c, GF c)
+\ar[d]^{G(\phi)_*} \\
+\hom_{\mathcal{D}}(Fc, d) \ar[r]^{\mathrm{adj}} & \hom_{\mathcal{C}}(c, Gd)
+}\]
+Here the mark $\mathrm{adj}$ indicates that the adjoint isomorphism is used.
+If we start with the identity $1_{Fc}$ and go down and right, we get the map
+\( c \to Gd \)
+that corresponds under the adjoint correspondence to $Fc \to d$. However, if we
+go right and down, we get the natural unit map $\eta(c): c \to GF c$ followed by $G(\phi)$.
+
+Thus, we have a \emph{recipe} for constructing a map $c \to Gd$ given $\phi: Fc \to
+d$:
+\begin{proposition}[The unit and counit determine everything]
+Let $(F, G)$ be a pair of adjoint functors with unit and counit transformations
+$\eta, \epsilon$.
+
+Then given $\phi: Fc \to d$, the adjoint map $\psi:c \to Gd$ can be constructed simply as
+follows.
+Namely, we start with the unit $\eta(c): c \to GF c$ and take
+\begin{equation} \label{adj1} \psi = G(\phi) \circ \eta(c): c \to Gd
+\end{equation} (here $G(\phi): GFc \to Gd$).
+\end{proposition}
+
+In the same way, if we are given $\psi: c \to Gd$ and want to construct a map
+$\phi: Fc \to d$, we construct
+\begin{equation} \label{adj2} \epsilon(d) \circ F(\psi): Fc \to FGd \to d.
+\end{equation}
+In particular, we have seen that the \emph{unit and counit morphisms determine
+the adjoint isomorphisms.}
+
+
+Since the adjoint isomorphisms $\hom_{\mathcal{D}}(Fc, d) \to
+\hom_{\mathcal{C}}(c, Gd)$ and
+$\hom_{\mathcal{C}}(c, Gd) \to \hom_{\mathcal{D}}(Fc, d)
+$
+are (by definition) inverse to each other, we can determine
+conditions on the units and counits.
+
+For
+instance, $\eta$ gives a natural
+transformation $F \circ \eta: F \to FGF$, while
+$\epsilon$ gives a natural transformation $\epsilon \circ F: FGF \to F$.
+(These are slightly different forms of composition!)
+
+\begin{lemma} The composite natural transformation $F \to F$ given by
+$(\epsilon \circ F) \circ (F \circ \eta)$ is the identity.
+Similarly, the composite natural transformation
+$G \to GFG \to G$ given by $(G \circ \epsilon) \circ (\eta \circ G)$ is the
+identity.
+\end{lemma}
+
+
+\begin{proof} We prove the first assertion; the second is similar.
+Given $\phi: Fc \to d$, we know that we must get back to $\phi$ applying the
+two constructions above. The first step (going to a map $\psi: c \to Gd$) is by
+\eqref{adj1}
+\( \psi = G(\phi) \circ \eta(c); \) the second step sends $\psi$ to
+$\epsilon(d) \circ F(\psi)$, by \eqref{adj2}.
+It follows that
+\[ \phi = \epsilon(d) \circ F( G(\phi) \circ \eta(c)) = \epsilon(d) \circ
+F(G(\phi)) \circ F(\eta(c)). \]
+Now suppose we take $d = Fc$ and $\phi: Fc \to Fc $ to be the identity.
+We find that $F(G(\phi))$ is the identity $FGFc \to FGFc$, and consequently we
+find
+\[ \id_{F(c)} = \epsilon(Fc) \circ F(\eta(c)). \]
+This proves the claim.
+\end{proof}
+
+
+
+\begin{definition}
+Let $F: \mathcal{C} \to \mathcal{D}, G: \mathcal{D} \to \mathcal{C}$ be
+covariant functors. An \textbf{adjunction} is the data of two natural
+transformations
+\[ \eta: 1 \to GF, \quad \epsilon: FG \to 1, \]
+called the \textbf{unit} and \textbf{counit}, respectively, such that the
+composites $(\epsilon \circ F) \circ (F \circ \eta): F \to F$
+and $(G \circ \epsilon) \circ (\eta \circ G)$ are the identity (that is, the
+identity natural transformations of $F, G$).
+\end{definition}
+
+We have seen that a pair of adjoint functors gives rise to an adjunction.
+Conversely, an adjunction between $F, G$ ensures that $F, G$ are adjoint, as
+one may check: one uses the same formulas \eqref{adj1} and \eqref{adj2} to
+define the natural isomorphism.
+
+
+For any set $S$, let $F(S)$ be the free group on $S$.
+Then the fact that there is a natural map of sets
+$S \to F(S)$ for any set $S$ (the unit), and a natural map of
+groups $F(G) \to G$ for any group $G$ (the counit; here $G$ is regarded as a
+set in forming $F(G)$), determines the adjunction between the
+free group functor from $\mathbf{Sets}$ to $\mathbf{Grp}$ and the forgetful
+functor $\mathbf{Grp} \to \mathbf{Sets}$.
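+
+In terms of the recipe \eqref{adj2} (a direct translation, included for
+illustration): given a map of sets $\psi: S \to G$ into a group $G$, the
+adjoint homomorphism is
+\[ \epsilon(G) \circ F(\psi): F(S) \to F(G) \to G, \]
+which is precisely the unique homomorphism extending $\psi$; a word
+$s_1^{\pm 1} \cdots s_k^{\pm 1}$ in $F(S)$ is sent to
+$\psi(s_1)^{\pm 1} \cdots \psi(s_k)^{\pm 1}$.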
+
+
+
+As another example, we give a criterion for a functor in an adjunction to be
+fully faithful.
+
+\begin{proposition} \label{adjfullfaithful}
+Let $F, G$ be a pair of adjoint functors between categories $\mathcal{C}, \mathcal{D}$.
+Then the right adjoint $G$ is fully faithful if and only if the counit maps
+$\epsilon(d): FGd \to d$ are isomorphisms for all $d \in \mathcal{D}$.
+\end{proposition}
+\begin{proof}
+We use the recipe \eqref{adj2}.
+Namely, for $d, d' \in \mathcal{D}$, the composite of $\psi \mapsto G(\psi)$
+with the adjoint isomorphism $\hom_{\mathcal{C}}(Gd, Gd') \simeq
+\hom_{\mathcal{D}}(FGd, d')$ sends $\psi: d \to d'$ to $\epsilon(d') \circ
+FG(\psi) = \psi \circ \epsilon(d)$, the last equality by naturality of
+$\epsilon$.
+Since the adjoint isomorphism is a bijection, $G$ is fully faithful if and
+only if precomposition with $\epsilon(d)$ gives a bijection
+$\hom_{\mathcal{D}}(d, d') \to \hom_{\mathcal{D}}(FGd, d')$ for every $d'$; by
+the Yoneda lemma, this holds if and only if each $\epsilon(d)$ is an
+isomorphism.
+\end{proof}
+
+\begin{example}
+For instance, recall that the inclusion functor from $\mathbf{AbGrp}$ to
+$\mathbf{Grp}$ is fully faithful (clear).
+This is a right adjoint to the abelianization functor $G \mapsto G^{ab}$.
+As a result, we would expect the counit maps of the adjunction to be
+isomorphisms, by \cref{adjfullfaithful}.
+
+The counit map $A^{ab} \to A$ for an abelian group $A$ is obviously an
+isomorphism, as abelianizing an abelian group does nothing.
+\end{example}
+
+\subsection{Adjoints and (co)limits}
+One very pleasant property of functors that are left (resp. right) adjoints is
+that they preserve all colimits (resp. limits).
+
+\begin{proposition} \label{adjlimits}
+A left adjoint $F: \mathcal{C} \to \mathcal{D}$ preserves colimits. A right
+adjoint $G: \mathcal{D} \to \mathcal{C}$ preserves limits.
+\end{proposition}
+
+As an example, the free functor from $\mathbf{Sets}$ to $\mathbf{AbGrp}$ is a
+left adjoint, so it preserves colimits. For instance, it preserves coproducts.
+This corresponds to the fact that if $A_1, A_2$ are sets, then $\mathbb{Z}[A_1
+\sqcup A_2]$ is naturally isomorphic to $\mathbb{Z}[A_1] \oplus
+\mathbb{Z}[A_2]$.
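+
+This isomorphism can be checked directly by mapping out (a routine
+verification, included for illustration): for any abelian group $B$,
+\[ \hom(\mathbb{Z}[A_1 \sqcup A_2], B) \simeq \mathrm{Map}(A_1 \sqcup A_2, B)
+\simeq \mathrm{Map}(A_1, B) \times \mathrm{Map}(A_2, B) \simeq
+\hom(\mathbb{Z}[A_1] \oplus \mathbb{Z}[A_2], B), \]
+naturally in $B$, so the two abelian groups are isomorphic by the Yoneda
+lemma.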
+
+\begin{proof}
+Indeed, this is mostly formal.
+Let $F: \mathcal{C}\to \mathcal{D}$ be a left adjoint functor, with right
+adjoint $G$.
+Let $f: I \to \mathcal{C}$ be a ``diagram'' where $I$ is a small category.
+Suppose $\colim_I f$ exists as an object of $\mathcal{C}$. The result states
+that $\colim_I F \circ f$ exists as an object of $\mathcal{D}$ and can be
+computed as
+$F(\colim_I f)$.
+To see this, we need to show that mapping out of $F(\colim_I f)$ is what we
+want---that is, mapping out of $F(\colim_I f)$ into some $d \in \mathcal{D}$---amounts to
+giving compatible $F(f(i)) \to d$ for each $i \in I$.
+In other words, we need to show that $\hom_{\mathcal{D}}( F(\colim_I f), d) =
+\lim_I \hom_{\mathcal{D}}(
+F(f(i)), d)$; this is precisely the defining property of the colimit.
+
+But we have
+\[ \hom_{\mathcal{D}}( F(\colim_I f ), d) = \hom_{\mathcal{C}}(\colim_I f, Gd)
+= \lim_I \hom_{\mathcal{C}}(fi, Gd) = \lim_I \hom_{\mathcal{D}}(F(fi), d),
+\]
+by using adjointness twice.
+This verifies the claim we wanted.
+\end{proof}
+
+The idea is that one can easily map \emph{out} of the value of a left adjoint
+functor, just as one can map out of a colimit.
+
+
+
diff --git a/books/cring/completion.tex b/books/cring/completion.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5419d23b9ca7ad7716a18f162f7bb788c25ed4f7
--- /dev/null
+++ b/books/cring/completion.tex
@@ -0,0 +1,1079 @@
+\chapter{Completions}
+\label{completions}
+
+The algebraic version of completion is essentially analogous to the familiar
+process of completing a metric space as in analysis, i.e. the process whereby $\mathbb{R}$ is constructed from
+$\mathbb{Q}$. Here, however, the emphasis will be on how the algebraic properties and
+structure pass to the completion. For instance, we will see that the
+dimension is invariant under completion for noetherian local rings.
+
+
+Completions are used in geometry and number theory in order to give a finer picture of local structure; for example, taking completions of rings allows for the recovery of a topology that looks more like the Euclidean topology as it has more open sets than the Zariski topology. Completions are also used in algebraic number theory to allow for the study of fields around a prime number (or prime ideal).
+
+\section{Introduction}
+
+\subsection{Motivation}
+
+Let $R$ be a commutative ring. Consider a maximal ideal $\mathfrak{m} \in \spec
+R$. If one thinks of $\spec R$ as a space, and $R$ as a collection of functions
+on that space, then $R_{\mathfrak{m}}$ is to be interpreted as the collection of ``germs'' of
+functions defined near the point $\mathfrak{m}$. (In the language of schemes,
+$R_{\mathfrak{m}}$ is the \emph{stalk} of the structure sheaf.)
+
+However, the Zariski topology is coarse, making it difficult to speak of small neighborhoods of $\mathfrak{m}$.
+Thus the word ``near'' is to be taken with a grain of salt.
+
+\begin{example}
+Let $X$ be a compact Riemann surface, and let $x \in X$. Let $R$ be the ring of
+holomorphic functions on $X - \left\{x\right\}$ which are meromorphic at $x$.
+In this case, $\spec R$ has the ideal $(0)$ and maximal ideals corresponding
+to functions vanishing at some point in $X - \left\{x\right\}$. So $\spec R$ is $X -
+\left\{x\right\}$ together with a ``generic'' point.
+
+Let us just look at the closed points. If we pick $y \in X - \left\{x\right\}$,
+then we can consider the local ring $R_y = \left\{s^{-1}r : r, s \in R,\ s(y) \neq
+0\right\}$. This ring is a direct limit of the rings $\mathcal{O}(U)$ of holomorphic functions
+on open sets $U$ that extend meromorphically to $X$. Here, however, $U$ ranges
+only over open
+subsets of $X$ containing $y$ that are the nonzero loci of elements of $R$. Thus $U$ really ranges over complements of
+finite subsets. It does not range over open sets in the \emph{complex} topology.
+
+Near $y$, $X$ looks like $\mathbb{C}$ in the \emph{complex} topology. In the Zariski topology, this
+is not the case. Each localization $R_y$ actually remembers the whole Riemann
+surface. Indeed, the quotient field of $R_y$ is the rational function field of
+$X$, which determines $X$. Thus $R_y$ remembers too much, and it fails to
+give a truly local picture near $y$.
+\end{example}
+
+We would like a variant of localization that would remember much less about the
+global topology.
+
+\subsection{Definition}
+
+\begin{definition} \label{defcompletion}
+Let $R$ be a commutative ring and $I \subset R$ an ideal. Then we define the
+\textbf{completion of $R$ at $I$} as
+\[ \hat{R}_I = \varprojlim R/I^n. \]
+By definition, this is the inverse limit of the quotients $R/I^n$, via the tower of
+commutative rings
+\[ \dots \to R/I^3 \to R/I^2 \to R/I \]
+where each map is the natural reduction map. Note that $\hat{R}_I$ is
+naturally an $R$-algebra. If the map $R \to \hat{R}_I$ is an isomorphism, then
+$R$ is said to be \textbf{$I$-adically complete.}
+\end{definition}
+
+More generally, suppose $R$ is a commutative ring
+with a linear topology. Consider a neighborhood basis at the origin consisting
+of ideals
+$\left\{I_\alpha\right\}$.
+
+\begin{definition}
+The \textbf{completion} $\hat{R}$ of the topological ring $R$ is the inverse limit
+$R$-algebra
+\[ \varprojlim R/I_\alpha, \]
+where the maps $R/I_\alpha \to R/I_\beta$ for $I_\alpha \subset I_\beta$ are
+the obvious ones. $\hat{R}$ is given a structure of a topological ring via the
+inverse limit topology.
+
+If the map $R \to \hat{R}$ is an isomorphism, then $R$ is said to be
+\textbf{complete.}
+\end{definition}
+
+The collection of ideals $\left\{I_\alpha\right\}$ is a directed set, so we
+can talk about inverse limits over it.
+When we endow $R$ with the $I$-adic topology, we see that the above definition
+is a generalization of \rref{defcompletion}.
+
+\begin{exercise}
+Let $R$ be a linearly topologized ring. Then the map $R \to \hat{R}$ is injective if
+and only if $\bigcap I_\alpha = 0$ for the $I_\alpha$ open ideals; that is, if
+and only if $R$ is \emph{Hausdorff.}
+\end{exercise}
+
+\begin{exercise}
+If $R/I_\alpha$ is finite for each open ideal $I_\alpha \subset R$, then
+$\hat{R}$ is compact as a topological ring. (Hint: Tychonoff's theorem.)
+\end{exercise}
+
+\add{Notation needs to be worked out for the completion}
+
+The case of a local ring is particularly
+important. Let $R$ be a local ring and $\mathfrak{m}$ its maximal ideal. Then
+the completion of $R$ with respect to $\mathfrak{m}$, denoted $\hat{R}$, is the inverse limit
+$
+\hat{R}=\varprojlim R/\mathfrak{m}^n$. We then topologize $\hat{R}$ by declaring the kernels of the projections $\hat{R} \to R/\mathfrak{m}^n$ to be basic open sets around $0$. The topology formed by these basic open sets is called the ``Krull'' or ``$\mathfrak{m}$-adic'' topology.
+
+In fact, the case of local rings is the most important one.
+Usually, we will complete $R$ at \emph{maximal} ideals.
+If we wanted to study $R$ near a prime $\mathfrak{p} \in \spec R$, we might
+first replace $R$ by $R_{\mathfrak{p}}$, which is a local ring; we might
+make another approximation to $R$ by completing $R_{\mathfrak{p}}$. Then we
+get a \emph{complete} local ring.
+
+\begin{definition}
+Let $R$ be a ring, $M$ an $R$-module, $I \subset R$ an ideal. We define the
+\textbf{completion of $M$ at $I$} as
+\[ \hat{M}_I = \varprojlim M/I^n M. \]
+
+This is an inverse limit of $R$-modules, so it is an $R$-module. Furthermore,
+it is even an $\hat{R}_I$-module, as one easily checks. It is also functorial.
+\end{definition}
+
+In fact, we get a functor
+\[ R-\mathrm{modules} \to \hat{R}_I - \mathrm{modules}. \]
+
+
+\subsection{Classical examples}
+Let us give some examples.
+\begin{example}
+Recall that in algebraic number theory, a number field is a
+finite extension of $\mathbb{Q}$.
+Sitting inside $\mathbb{Q}$ is the ring of integers, $\mathbb{Z}$. For any prime number $p \in \mathbb{Z}$, we can localize $\mathbb{Z}$ at the
+ prime ideal $(p)$, giving us a local ring $\mathbb{Z}_{(p)}$.
+ If we take the completion of this local ring, we get the ring of $p$-adic integers $\mathbb{Z}_p$, whose fraction field is the field of $p$-adic numbers $\mathbb{Q}_p$. Notice that since $\mathbb{Z}_{(p)}/p^n \cong \mathbb{Z}/p^n$, this is really the same as taking the inverse limit $\varprojlim \mathbb{Z}/p^n$.
+\end{example}
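+
+To get a feel for elements of the completion (a standard illustration): an
+element of $\mathbb{Z}_p = \varprojlim \mathbb{Z}/p^n$ is a compatible
+sequence $(a_n)$ with $a_n \in \mathbb{Z}/p^n$ and $a_{n+1} \equiv a_n
+\pmod{p^n}$. For instance, the element $-1 \in \mathbb{Z}_p$ corresponds to
+the sequence
+\[ a_n = p^n - 1, \]
+i.e. $(p-1,\ p^2-1,\ p^3-1, \dots)$; in base $p$, this is the infinite
+expansion $(p-1) + (p-1)p + (p-1)p^2 + \cdots$.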
+
+\begin{example}
+Let $X$ be a Riemann surface. Let $ x \in X$ be as before, and let $R$ be as
+before: the ring of meromorphic functions on $X$ with poles only at $x$. We can
+complete $R$ at the ideal $\mathfrak{m}_y \subset R$ corresponding to $y \in X - \left\{x\right\}$. This
+is always isomorphic to a power series ring
+\[ \mathbb{C}[[t]] \]
+where $t$ is a holomorphic coordinate at $y$.
+
+The reason is that if one considers $R/\mathfrak{m}_y^n$, one always gets
+$\mathbb{C}[t]/(t^n)$, where $t$ corresponds to a local coordinate at $y$. Thus
+\emph{these} rings don't remember much about the Riemann surface. They're all
+isomorphic, for instance.
+\end{example}
+
+\begin{remark}
+There is always a map $R \to \hat{R}_I$, obtained by taking the limit of the quotient maps $R \to R/I^i$.
+\end{remark}
+
+\subsection{Noetherianness and completions}
+
+A priori, one might think this operation of completion gives a big mess. The amazing thing is that for
+noetherian rings, completion is surprisingly well-behaved.
+
+
+\begin{proposition}
+Let $R$ be noetherian, $I \subset R$ an ideal. Then $\hat{R}_I$ is noetherian.
+\end{proposition}
+\begin{proof}
+Choose generators $x_1, \dots, x_n \in I$. This can be done as $I$ is finitely
+generated, $R$ being noetherian. Consider a power series ring
+\[ R[[t_1, \dots, t_n]] ; \]
+the claim is that there is a map $R[[t_1, \dots, t_n]] \to \hat{R}_I$ sending each
+$t_i$ to $x_i \in \hat{R}_I$. This is not trivial, since we aren't talking
+about a polynomial ring, but a power series ring.
+
+To build this map, we want a compatible family of maps
+\[ R[[t_1, \dots, t_n]] \to R[t_1, \dots, t_n]/(t_1, \dots, t_n)^k \to R/I^k, \]
+where the second ring is the polynomial ring in which homogeneous
+polynomials of degree $\geq k$ are killed. There is a map from $R[[t_1, \dots, t_n]]$ to
+the second ring that kills monomials of degree $ \geq k$. The second map
+$R[t_1, \dots, t_n]/(t_1, \dots, t_n)^k \to R/I^k$ sends $t_i \to x_i$ and is
+obviously well-defined.
+
+So we get the map
+\[ \phi: R[[t_1, \dots, t_n]] \to \hat{R}_I , \]
+which I claim is surjective. Let us prove this. Suppose $a \in \hat{R}_I$. Then $a$ can be thought
+of as a collection of elements $(a_k) \in R/I^k$ which are compatible with one
+another. We can lift each $a_k$ to some $\overline{a_k} \in R$ in a
+compatible manner, such that
+\[ \overline{a_{k+1}} = \overline{a_k} + b_k, \quad b_k \in I^k. \]
+Since $b_k \in I^k$, we can write it as
+\[ b_k = f_k(x_1, \dots, x_n) \]
+for $f_k$ a homogeneous polynomial of degree $k$ with coefficients in $R$,
+since $I^k$ is generated by the degree-$k$ monomials in the $x_i$.
+
+I claim now that
+\[ a = \phi\left( \sum f_k(t_1, \dots, t_n) \right). \]
+The proof is just to check modulo $I^k$ for each $k$. This we do by induction.
+When one reduces modulo $I^k$, one gets $a_k$ (as one easily checks).
+
+As we have seen, $\hat{R}_I$ is a quotient of a power series ring. Now
+$R[[t_1, \dots, t_n]]$ is noetherian whenever $R$ is; this is a
+variant of the Hilbert basis theorem. So $\hat{R}_I$ is
+noetherian.
+\end{proof}
+
+
+In fact, following \cite{Se65}, we shall sometimes find it convenient to note a generalization of the
+above argument.
+
+\begin{lemma} \label{grsurjective}
+Suppose $A$ is a filtered ring, $M, N$ filtered $A$-modules and $\phi: M \to N$ a
+morphism of filtered modules. Suppose $\gr(\phi)$ is surjective and $M, N$ are
+complete; then $\phi$ is surjective.
+\end{lemma}
+\begin{proof} This will be a straightforward ``successive approximation''
+argument.
+Indeed, let $\left\{M_n\right\}, \left\{N_n\right\}$ be the filtrations on $M,
+N$.
+Suppose $n \in N$.
+We know that there is $m_0 \in M$ such that
+\[ n - \phi(m_0) \in N_1 \]
+since $M/M_1 \to N/N_1 $ is surjective.
+Similarly, we can choose $m_1 \in M_1$ such that
+\[ n - \phi(m_0) - \phi(m_1) \in N_2 \]
+because $n - \phi(m_0) \in N_1$ and $M_1/M_2 \to N_1/N_2$ is surjective.
+Continuing inductively, we choose $m_k \in M_k$ with $n - \phi(m_0 + \dots +
+m_k) \in N_{k+1}$ for each $k$. The sum $\sum m_i$ converges in $M$ as $M$ is
+complete, and $n - \phi\left( \sum m_i \right) \in \bigcap N_i = 0$ since $N$
+is complete (hence separated); so $n = \phi\left(\sum m_i\right)$.
+\end{proof}
+
+
+
+\begin{theorem} \label{grnoetherian}
+Suppose $A$ is a filtered ring. Let $M$ be a filtered $A$-module, separated
+with respect to its topology.
+If $\gr(M)$ is noetherian over $\gr(A)$, then $M$ is a noetherian $A$-module.
+\end{theorem}
+\begin{proof}
+If $N \subset M$, then we can obtain an induced filtration on $N$ such that
+$\gr(N)$ is a submodule of $\gr(M)$. Since noetherianness equates to the
+finite generation of each submodule, it suffices to show that if $\gr(M)$ is
+finitely generated, so is $M$.
+
+Suppose $\gr(M)$ is generated by homogeneous elements $\overline{e}_1, \dots,
+\overline{e}_n$ of
+degrees $d_1, \dots, d_n$, represented by elements $e_1, \dots, e_n \in M$. From this we can define a map
+\[ A^n \to M \]
+sending the $i$th basis vector to $e_i$. We would like this to induce a surjection
+$\gr(A^n) \to \gr(M)$. We will have to be careful, though, about exactly how we
+define the filtration on $A^n$, because the $d_i$ may have large degrees, and
+if we are not careful, the map on $\gr$'s will be zero.
+
+We choose the filtration such that at the $m$th level, we get the subgroup of
+$A^n$ consisting of elements whose $i$th coordinate lies in $I_{m-d_i}$ (for
+$\left\{I_k\right\}$ the filtration of $A$). It is then clear that the
+associated map
+\[ \gr(A^n ) \to \gr(M) \]
+has image containing each $\overline{e}_i$. Since $A^n$ is complete with
+respect to this topology, we find that $A^n \to M$ is surjective by
+\cref{grsurjective}.
+This shows that $M$ is finitely generated and completes the proof.
+\end{proof}
+
+
+\begin{corollary} \label{completenoetherian}
+Suppose $A$ is a ring, complete with respect to the $I$-adic topology. If
+$A/I$ is noetherian and $I/I^2$ a finitely generated $A$-module, then $A$ is
+noetherian.
+\end{corollary}
+\begin{proof}
+Indeed, we need to show that $\gr(A)$ is a noetherian ring (by
+\cref{grnoetherian}). But this is the ring
+\[ A/I \oplus I/I^2 \oplus I^2/I^3 \oplus \dots. \]
+It is easy to see that this is generated by $I/I^2$ as an $A/I$-algebra. By
+Hilbert's basis theorem, this is noetherian under the conditions of the result.
+\end{proof}
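+
+As an illustration (a standard application of \cref{completenoetherian}): the
+ring $\mathbb{Z}_p$ of $p$-adic integers is complete with respect to the
+$(p)$-adic topology, $\mathbb{Z}_p/(p) \simeq \mathbb{Z}/p$ is a field (hence
+noetherian), and $(p)/(p^2)$ is generated by the class of $p$; so
+$\mathbb{Z}_p$ is noetherian. The same reasoning applies to a power series
+ring $k[[t_1, \dots, t_n]]$ over a field $k$, complete with respect to the
+ideal $(t_1, \dots, t_n)$.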
+
+\cref{completenoetherian} gives another means of showing that if a ring $A$ is
+noetherian, then its completion $\hat{A}$ with respect to an ideal $I \subset A$ is
+noetherian. For the algebra $\gr(A)$ (where $A$ is given the $I$-adic
+topology) is noetherian because it is finitely generated over $A/I$. Moreover,
+$\gr(\hat{A}) = \gr(A)$, so $\hat{A}$ is noetherian.
+
+\section{Exactness properties}
+
+The principal result of this section is:
+\begin{theorem} \label{completionisexact}
+If $R$ is noetherian and $I \subset R$ an ideal, then the construction $M \mapsto
+\hat{M}_I$ is exact when restricted to finitely generated modules.
+\end{theorem}
+
+
+Let's be more precise. If $M$ is finitely generated, and
+\( 0 \to M' \to M \to M'' \to 0 \)
+is an exact sequence,\footnote{The ends are finitely generated by noetherianness.} then
+\[ 0 \to \hat{M'}_I \to \hat{M}_I \to \hat{M''}_I \to 0 \]
+is also exact.
+
+We shall prove this theorem in several pieces.
+
+\subsection{Generalities on inverse limits}
+For a moment, let us step back and think about exact sequences of inverse
+limits of abelian groups. Say we have a tower of exact sequences of abelian
+groups
+\[
+\xymatrix{
+0 \ar[r] & \vdots \ar[d] \ar[r] & \vdots \ar[d] \ar[r] & \vdots \ar[d]
+\ar[r] & 0 \\
+0 \ar[r] & A_2 \ar[d] \ar[r] & B_2 \ar[d] \ar[r] & C_2 \ar[d] \ar[r] & 0
+\\
+0 \ar[r] & A_1 \ar[d] \ar[r] & B_1 \ar[d] \ar[r] & C_1 \ar[d] \ar[r] & 0
+\\
+0 \ar[r] & A_0 \ar[r] & B_0 \ar[r] & C_0 \ar[r] & 0
+}.
+\]
+Then we get a sequence
+\[ 0 \to \varprojlim A_n \to \varprojlim B_n \to \varprojlim C_n \to 0. \]
+In general, it is \emph{not} exact. But it is left-exact.
+
+\begin{proposition}
+Hypotheses as above, $ 0 \to \varprojlim A_n \to \varprojlim B_n \to
+\varprojlim C_n$ is exact.
+\end{proposition}
+\begin{proof}
+Write $\phi: \varprojlim A_n \to \varprojlim B_n$ and $\psi: \varprojlim B_n \to \varprojlim C_n$ for the induced maps. It is obvious that $\psi \circ \phi = 0$.
+
+Let us first show that $\phi: \varprojlim A_n \to \varprojlim B_n$ is
+injective. So suppose $a$ is in the projective limit, represented by a
+compatible sequence of elements $(a_k)$ with $a_k \in A_k$. If $\phi(a) = 0$, all
+the $a_k$ go to zero in $B_k$. Injectivity of $A_k \to B_k$ implies that each
+$a_k$ is zero. This implies $\phi$ is injective.
+
+Now let us show exactness at the next step. Let $b = (b_k)$ be in the kernel of
+$\psi: \varprojlim B_n \to \varprojlim C_n$. This means that each
+$b_k$ gets killed when it maps to $C_k$, and hence each $b_k$ comes from
+some $a_k \in A_k$. These $a_k$ are unique by injectivity of $A_k \to B_k$. It
+follows that the $a_k$ have no choice but to be compatible. Thus $(a_k)$ maps
+to $(b_k)$. So $b$ is in the image of $\phi$.
+\end{proof}
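+
+The failure of surjectivity on the right can already be seen in a simple
+example (included for illustration; it is not needed in what follows).
+
+\begin{example}
+Fix a prime $p$, and consider the tower of exact sequences
+\[ 0 \to p^n \mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}/p^n \to 0, \]
+where the transition maps are the inclusions $p^{n+1}\mathbb{Z} \subset p^n
+\mathbb{Z}$, the identity on $\mathbb{Z}$, and the reductions
+$\mathbb{Z}/p^{n+1} \to \mathbb{Z}/p^n$. Then $\varprojlim p^n \mathbb{Z} =
+\bigcap p^n \mathbb{Z} = 0$ and $\varprojlim \mathbb{Z} = \mathbb{Z}$, while
+$\varprojlim \mathbb{Z}/p^n = \mathbb{Z}_p$. The sequence of inverse limits
+\[ 0 \to 0 \to \mathbb{Z} \to \mathbb{Z}_p \]
+is exact, but the last map is not surjective. Note that the transition maps
+$p^{n+1}\mathbb{Z} \to p^n \mathbb{Z}$ are not surjective, consistent with the
+surjectivity criterion below.
+\end{example}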
+
+So far, so good. We get some level of exactness. But the map on the end is not
+necessarily surjective. Nonetheless:
+
+\begin{proposition}
+$\psi: \varprojlim B_n \to \varprojlim C_n$ is surjective if each $A_{n+1} \to
+A_n$ is surjective.
+\end{proposition}
+\begin{proof}
+Say $c \in \varprojlim C_n$, represented by a compatible family $(c_k)$. We
+have to show that there is a compatible family $(b_k) \in \varprojlim B_n$
+which maps into $c$. It is easy to choose the $b_k$ \emph{individually} since
+$B_k \to C_k$ is surjective. The problem is that a priori we may not get
+something compatible.
+
+We therefore construct the $b_k$ by induction on $k$. Assume that $b_k$,
+which lifts $c_k$, has been constructed.
+We know that $c_k$ receives a map from $c_{k+1}$.
+\[ \xymatrix{
+& & c_{k+1 } \ar[d] \\
+& b_k \ar[r] & c_k
+}.\]
+Choose any $x \in B_{k+1}$ which maps to $c_{k+1}$. However, $x$ might not map
+down to $b_k$, which would screw up the compatibility conditions. Next, we try to adjust $x$.
+Let $x' \in B_k$ be the image of $x$ under $B_{k+1} \to B_k$. We know
+that $x' - b_k$ maps to zero in $C_k$, because $c_{k+1}$ maps to $c_k$.
+So $x' - b_k$ comes from something in $A_k$, call it $a$.
+\[ \xymatrix{
+& x \ar[r] & c_{k+1 } \ar[d] \\
+& b_k \ar[r] & c_k
+}.\]
+But $a$ comes from some $\overline{a} \in A_{k+1}$, as $A_{k+1} \to A_k$ is
+surjective. Then we define
+\[ b_{k+1} = x - \overline{a} \]
+(viewing $\overline{a}$ as an element of $B_{k+1}$); this adjustment does not
+change the fact that $b_{k+1}$ maps to $c_{k+1}$, since $\overline{a}$ maps to
+zero in $C_{k+1}$. However, it does make $b_{k+1}$ compatible with $b_k$.
+Continuing by induction, we obtain the desired compatible family, proving
+surjectivity.
+\end{proof}
+
+Now, let us study the exactness of completions.
+\begin{proof}[Proof of \rref{completionisexact}]
+
+Let us try to apply the general remarks above to studying the sequence
+\[ 0 \to \hat{M'}_I \to \hat{M}_I \to \hat{M''}_I \to 0. \]
+Now $\hat{M}_I = \varprojlim M/I^n$. We can construct surjective maps
+\[ M/I^n \twoheadrightarrow M''/I^n \]
+whose inverse limit is the map $\hat{M}_I \to \hat{M''}_I$; here we identify
+$M''/I^n M''$ with $M/(M' + I^n M)$. What is the kernel of $M/I^n M \to
+M''/I^n M''$? It is $(M' + I^n M)/I^n M$, which is canonically
+\[ M'/(M' \cap I^n M). \]
+So we get an exact sequence
+\[ 0 \to M'/(M' \cap I^n M) \to M/I^n M \to M''/I^n M'' \to 0. \]
+By the above analysis of exactness of inverse limits, we get an exact sequence
+\[ 0 \to \varprojlim M'/(I^n M \cap M') \to \hat{M}_I \to \hat{M''}_I \to 0. \]
+We of course have surjective maps $M'/I^n M' \to M'/(I^n M \cap M')$, though
+these are generally not isomorphisms: an element of $M'$ that is divisible by
+$I^n$ in $M$ need not be divisible by $I^n$ in $M'$.
+Anyway, we get a map
+\[ \varprojlim M'/I^n M' \to \varprojlim M'/(I^n M \cap M') \]
+where the individual maps are not necessarily isomorphisms. Nonetheless, I
+claim that the map on inverse limits is an isomorphism. This will imply that
+completion is indeed an exact functor.
+
+But this follows because the filtrations $\left\{I^n M'\right\},
+\left\{I^n M \cap M'\right\}$ are equivalent in view of the Artin-Rees lemma,
+\cref{artinrees}.
+\end{proof}
+
+Last time, we were talking about completions. We showed that if $R$ is
+noetherian and $I \subset R$ an ideal, an exact sequence
+\[ 0 \to M' \to M \to M'' \to 0 \]
+of finitely generated $R$-modules leads to a sequence
+\[ 0 \to \hat{M'}_I \to \hat{M}_I \to \hat{M''}_I \to 0 \]
+which is also exact. We showed this using the Artin-Rees lemma.
+
+\begin{remark}
+In particular, for finitely generated modules over a noetherian ring, completion is an \textbf{exact functor}: if $A \to B \to C$ is
+exact, so is the sequence of completions. This can be seen by drawing in
+kernels and cokernels, and using the fact that completions preserve short
+exact sequences.
+\end{remark}
+
+\subsection{Completions and flatness}
+
+Suppose that $M$ is a finitely generated $R$-module. Then there is a surjection $R^n
+\twoheadrightarrow M$, whose kernel is also finitely generated as $R$ is
+noetherian. It follows that
+$M$ is finitely presented. In particular, there is a sequence
+\[ R^m \to R^n \to M \to 0. \]
+We get an exact sequence
+\[ \hat{R}^m \to \hat{R}^n \to \hat{M} \to 0 \]
+where the map $\hat{R}^m \to \hat{R}^n$ is given by the same matrix as the
+map $R^m \to R^n$.
+
+\begin{corollary}
+If $M$ is finitely generated and $R$ noetherian, there is a canonical isomorphism
+\[ \hat{M}_I \simeq M \otimes_R \hat{R}_I. \]
+\end{corollary}
+
+\begin{proof}
+We know that there is a canonical map $M \to \hat{M}_I$, and $\hat{M}_I$ is a
+module over $\hat{R}_I$, so there is an induced morphism
+$\phi_M: M \otimes_R \hat{R}_{I} \to \hat{M}_I$. We
+need to check that it is an isomorphism.
+
+If there is an exact sequence $M' \to M \to M'' \to 0$, there is a commutative
+diagram
+\[ \xymatrix{
+M' \otimes_R \hat{R}_I \ar[d]^{\phi_{M'}} \ar[r] & M \otimes_R \hat{R}_I
+\ar[d]^{\phi_M} \ar[r] &
+M'' \otimes_R \hat{R}_I \ar[d] \ar[r] & 0 \\
+\hat{M'}_I \ar[r] & \hat{M}_I \ar[r] & \hat{M''}_I \ar[r] & 0
+}.\]
+Exactness of completion and right-exactness of $\otimes$ implies that this
+diagram is exact. It follows that if $\phi_M, \phi_{M'}$ are isomorphisms, so
+is $\phi_{M''}$.
+
+But any $M''$ appears at the end of such a sequence with $M', M$ free, by
+the finite presentation argument above. So it suffices to prove $\phi$ an
+isomorphism for finitely generated free modules, which reduces to the case
+of $\phi_R$ being an isomorphism. That is obvious.
+\end{proof}
+
+\begin{corollary}
+If $R$ is noetherian, then $\hat{R}_I$ is a flat $R$-module.
+\end{corollary}
+\begin{proof}
+Indeed, tensoring with $\hat{R}_I$ is exact (because it is completion, and
+completion is exact) on the category of finitely generated $R$-modules.
+Exactness on the category of all $R$-modules follows by taking direct limits,
+since every module is a direct limit of finitely generated modules, and
+direct limits preserve exactness.
+\end{proof}
+
+
+\begin{remark}
+Warning: $\hat{M}_I$ is, in general, not $M \otimes_R \hat{R}_I$ when $M$ is
+not finitely generated. One example to think about is $M = \mathbb{Z}[t]$,
+$R = \mathbb{Z}$. The
+completion of $M$ at $I = (p)$ is the completion of $\mathbb{Z}[t]$ at $p
+\mathbb{Z}[t]$, which contains elements like
+\[ 1 + pt + p^2 t^2 + \dots, \]
+which belong to the completion but not to $\hat{R}_I \otimes M = \mathbb{Z}_p
+[t]$.
+\end{remark}
+
+\begin{remark}
+By the Krull intersection theorem, if $R$ is a local noetherian ring, then the
+map from $R \to \hat{R}$ is an injection.
+\end{remark}
+
+
+\section{Hensel's lemma} One thing that you might be interested in doing is solving
+Diophantine equations. Say $R = \mathbb{Z}$; you want to find solutions to a
+polynomial $f(X) \in \mathbb{Z}[X]$. Generally, it is very hard to find
+solutions. However, there are easy tests you can do that will tell you if there
+are no solutions. For instance, reduce mod a prime. One way you can prove that
+there are no solutions is to show that there are no solutions mod 2.
+
+But there might be solutions mod 2 and yet you might not be sure about
+solutions in $\mathbb{Z}$. So you might try mod 4, mod 8, and so on---you get a
+whole tower of problems to consider. If you manage to solve all these
+congruences compatibly, you obtain a solution in the 2-adic integers $\mathbb{Z}_2 =
+\hat{\mathbb{Z}}_{(2)}$.
+But the Krull intersection theorem implies that $\mathbb{Z} \to \mathbb{Z}_2$
+is injective. So any solution in $\mathbb{Z}$ yields a solution in
+$\mathbb{Z}_2$, and one can hope to study the integer solutions via the
+$2$-adic ones.
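As a quick illustration of this tower of congruences (my own sketch, with a hypothetical helper name), the polynomial $x^2 + 7$ is solvable modulo every power of $2$, and hence has a root in $\mathbb{Z}_2$, even though it has no integer root:

```python
# Sketch: check solvability of x^2 + 7 = 0 modulo 2^n by brute force.
# (Hypothetical helper, not from the text.)
def solvable_mod(n):
    m = 2 ** n
    return any((x * x + 7) % m == 0 for x in range(m))

all(solvable_mod(n) for n in range(1, 12))  # solvable at every level checked
```

Since $-7 \equiv 1 \pmod 8$, $-7$ is a square in $\mathbb{Z}_2$, so the tower is solvable at every level; yet $x^2 + 7 = 0$ clearly has no solution in $\mathbb{Z}$.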
+
+
+
+The moral is that solving an equation over $\mathbb{Z}_2$ is intermediate in
+difficulty between $\mathbb{Z}/2$ and $\mathbb{Z}$. Nonetheless, it turns out
+that solving an equation in $\mathbb{Z}/2$ is very close to solving it over
+$\mathbb{Z}_2$, thanks to Hensel's lemma.
+
+\subsection{The result}
+
+\begin{theorem}[Hensel's Lemma]
+Let $R$ be a noetherian ring, $I \subset R$ an ideal. Let $f(X) \in R[X]$ be a
+polynomial such that the equation $f(X)=0$ has a solution $ a \in R/I$.
+Suppose, moreover, that $f'(a)$ is invertible in $R/I$.
+
+Then $a$ lifts uniquely to a solution of the equation $f(X) = 0$ in $\hat{R}_I$.
+\end{theorem}
+
+\begin{example}
+Let $R = \mathbb{Z}, I = (5)$. Consider the equation $f(x) = x^2 + 1 = 0$ in $R$. This
+has a solution modulo five, namely $2$. Then $f'(2) = 4$ is invertible in
+$\mathbb{Z}/5$. So the equation $x^2 + 1 = 0$ has a solution in $\mathbb{Z}_5$.
+In other words, $\sqrt{-1} \in \mathbb{Z}_5$.
+\end{example}
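The successive-approximation argument below can be run numerically. Here is a sketch (an illustration with hypothetical names, not part of the text) that lifts the root $2$ of $x^2 + 1$ modulo $5$ up the tower $R/I^n$:

```python
# Hensel lifting sketch: at each stage, correct by epsilon = -f(a)/f'(a),
# exactly as in the proof of Hensel's lemma. (Hypothetical helper names.)
def hensel_lift(f, df, a, p, k):
    """Lift a root of f mod p to a root mod p^k; assumes df(a) is a unit mod p."""
    for n in range(1, k):
        m = p ** (n + 1)
        a = (a - f(a) * pow(df(a), -1, m)) % m  # a += epsilon
    return a

# sqrt(-1) in Z_5: a root of x^2 + 1 modulo 5^10 lifting 2 mod 5
a10 = hensel_lift(lambda x: x * x + 1, lambda x: 2 * x, 2, 5, 10)
```

Each pass computes the unique correction $\epsilon = -f(a)/f'(a)$ modulo the next power of the ideal, mirroring the uniqueness claim in the proof.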
+
+Let's prove Hensel's lemma.
+\begin{proof}
+Now we have $a \in R/I$ such that $f(a) = 0 \in R/I$ and $f'(a)$ is invertible.
+The claim is going to be that for each $n \geq 1$, there is a \emph{unique}
+element $a_n \in R/I^n$ such that
+\[ a_n \equiv a \pmod{I}, \quad f(a_n) = 0 \in R/I^n. \]
+Uniqueness implies that this sequence $(a_n)$ is compatible, and thus gives the
+required element of the completion.
+It will be a solution of $f(X) = 0$ since it is a solution at each element of
+the tower.
+
+Let us now prove the claim.
+For $n=1$, $a_1 = a$ necessarily.
+The proof is by induction on $n$. Assume that $a_n$ exists and is unique. We
+would like to show that $a_{n+1}$ exists and is unique. If $a_{n+1}$ exists,
+then its reduction modulo $I^n$ is a root of $f$ lifting $a$, so it must
+reduce to $a_n$, or uniqueness at the $n$-th step would fail.
+
+So let $\overline{a}$ be any lifting of $a_n$ to $R/I^{n+1}$. Then $a_{n+1}$
+is going to be that lifting plus some $\epsilon \in I^n/I^{n+1}$. We want
+\[ f(\overline{a} + \epsilon) = 0 \in R/I^{n+1}. \]
+But this is
+\[ f(\overline{a}) + \epsilon f'(\overline{a}) \]
+because $\epsilon^2 = 0 \in R/I^{n+1}$. However, this lets us solve for
+$\epsilon$, because then necessarily $\epsilon =
+\frac{-f(\overline{a})}{f'(\overline{a})} \in I^n$.
+Note that $f'(\overline{a}) \in R/I^{n+1}$ is invertible. If you believe this
+for a moment, then we have seen that $\epsilon$ exists and is unique; note
+that $\epsilon \in I^n$ because $f(\overline{a}) \in I^n$.
+
+
+\begin{lemma}
+$f'(\overline{a}) \in R/I^{n+1}$ is invertible.
+\end{lemma}
+\begin{proof}
+If we reduce this modulo $I$, we get the invertible element $f'(a) \in R/I$.
+Note also that $I/I^{n+1}$ is a nilpotent ideal in $R/I^{n+1}$. So we are
+reduced to showing, more generally:
+
+\begin{lemma}
+Let $A$ be a ring,\footnote{E.g. $R/I^{n+1}$.} $J$ a nilpotent
+ideal.\footnote{E.g. $J = I/I^{n+1}$.} Then an element $x \in A$ is invertible
+if and only if its reduction in $A/J$ is invertible.
+\end{lemma}
+\begin{proof}
+One direction is obvious. For the converse, say $x \in A$ has an invertible
+image. This implies that there is $y \in A$ such that $xy \equiv 1 \mod J$. Say
+$$xy = 1+m,$$ where $m \in J$. But $1+m$ is invertible because
+\[ \frac{1}{1+m} = 1 - m + m^2 \pm \dots. \]
+The expression makes sense as the high powers of $m$ are zero.
+So this means that $y(1+m)^{-1}$ is the inverse to $x$.
+\end{proof}
+\end{proof}
+\end{proof}
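The finite geometric series in the innermost lemma is easy to test in a toy case; the following sketch (my own illustration) inverts $1 + m$ in $A = \mathbb{Z}/16$ with $m = 2$, where $m^4 = 0$:

```python
# Invert 1 + m via 1 - m + m^2 - m^3, which terminates since m is nilpotent.
def inv_one_plus(m, mod, k):
    """Inverse of 1 + m in Z/mod, assuming m^k = 0 there."""
    inv, term = 0, 1
    for _ in range(k):
        inv = (inv + term) % mod
        term = (-term * m) % mod
    return inv

y = inv_one_plus(2, 16, 4)  # then 3 * y = 1 in Z/16
```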
+
+This was one of many versions of Hensel's lemma. There are many ways you can
+improve on a statement. The above version says something about
+``nondegenerate'' cases, where the derivative is invertible. There are better
+versions which handle degenerate cases.
+
+\begin{example}
+Consider $x^2 - 1$; let's try to solve this in $\mathbb{Z}_2$. Well,
+$\mathbb{Z}_2$ is a domain, so the only solutions can be $\pm 1$. But these
+have the same reduction in $\mathbb{Z}/2$. The lifting of the solution is
+non-unique.
+
+The reason why Hensel's lemma fails is that $f'(\pm 1) = \pm 2$ is not
+invertible in $\mathbb{Z}/2$. But it is not far off. If you go to
+$\mathbb{Z}/4$, we do get two solutions, and the derivative is at least nonzero
+at those places.
+\end{example}
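The failure of unique lifting here can be seen by brute force (a check written for this illustration, not from the text):

```python
# Roots of x^2 - 1 modulo powers of 2: the count grows before the
# solutions sort themselves into the two 2-adic roots +1 and -1.
def roots(mod):
    return [x for x in range(mod) if (x * x - 1) % mod == 0]

roots(2)  # [1]: a single residue, below both integer roots
roots(8)  # [1, 3, 5, 7]: four roots modulo 8
```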
+
+One possible extension of Hensel's lemma is to allow the derivative to be
+noninvertible, but at least to bound the degree to which it is noninvertible.
+From this you can get interesting information.
+But then you may have to look at equations $R/I^n$ instead of just $R/I$, where
+$n$ depends on the level of noninvertibility.
+
+Let us describe the multivariable Hensel lemma.
+
+\begin{theorem}
+Let $f_1, \dots, f_n$ be polynomials in $n$ variables over the ring $R$. Let
+$J$ be the Jacobian matrix $( \frac{\partial f_i}{\partial x_j})$, and let
+$\Delta = \det J \in R[x_1, \dots, x_n]$ be its determinant.
+
+If the system $\left\{f_i(x) = 0\right\}$ has a solution $a \in (R/I)^n$ for some
+ideal $I$ such that $\Delta(a)$ is invertible in $R/I$, then there
+is a unique solution of $\left\{f_i(x) =0\right\}$ in $\hat{R}_I^n$ which lifts $a$.
+\end{theorem}
+The proof is the same idea: successive approximation, using the invertibility
+of $\Delta$.
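As a concrete sketch of the multivariable statement (the system and helper below are my own illustration, not from the text), one can iterate the Newton step $a \mapsto a - J(a)^{-1} f(a)$ modulo increasing powers of $5$ for the system $f_1 = x^2 - y$, $f_2 = y^2 + x - 3$, starting from the solution $(2,4)$ modulo $5$, where $\Delta = 4xy + 1 \equiv 3$ is invertible:

```python
# Multivariable Hensel sketch: lift a solution of {x^2 - y = 0, y^2 + x - 3 = 0}
# from Z/5 to Z/5^k using the inverse of the Jacobian J = [[2x, -1], [1, 2y]].
def lift_system(x, y, p, k):
    for n in range(1, k):
        m = p ** (n + 1)
        f1, f2 = (x * x - y) % m, (y * y + x - 3) % m
        det_inv = pow((4 * x * y + 1) % m, -1, m)      # det J = 4xy + 1
        dx = det_inv * (2 * y * f1 + f2) % m           # (dx, dy) = J^{-1} f,
        dy = det_inv * (-f1 + 2 * x * f2) % m          # via the adjugate of J
        x, y = (x - dx) % m, (y - dy) % m
    return x, y

x6, y6 = lift_system(2, 4, 5, 6)  # a solution of the system modulo 5^6
```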
+
+\subsection{The classification of complete DVRs (characteristic zero)}
+Let $R$ be a complete DVR with maximal ideal $\mathfrak{m}$ and quotient field
+$F$. We let $k:=R/\mathfrak{m}$; this is the \textbf{residue field} and is,
+e.g., the integers mod $p$ for the $p$-adic integers.
+
+The main result that we shall prove is the following:
+\begin{theorem} Suppose $k$ is of characteristic zero. Then $R \simeq k[[X]]$, the power series ring in one variable, with respect to the usual discrete valuation on $k[[X]]$.
+\end{theorem}
+
+The ``usual discrete valuation'' on the power series ring is the order at zero. Incidentally, this applies to the (non-complete) subring of $\mathbb{C}[[X]]$ consisting of power series that converge in some neighborhood of zero, which is the ring of germs of holomorphic functions at zero; the valuation again measures the zero at $z=0$.
+
+
+To prove it (following \cite{Se79}), we need to introduce another concept. A \textbf{system of representatives} is a set $S \subset R$ such that the reduction map $S \to k$ is bijective. A \textbf{uniformizer} is a generator of the maximal ideal $\mathfrak{m}$. Then:
+
+\begin{proposition} If $S$ is a system of representatives and $\pi$ a uniformizer, we can write each $x \in R$ uniquely as
+\[ x= \sum_{i=0}^\infty s_i \pi^i, \quad \mathrm{where} \ s_i \in S.\]
+\end{proposition}
+\begin{proof}
+Given $x$, we can find by the definitions $s_0 \in S$ with $x-s_0 \in \pi R$. Repeating, we can write $\frac{x-s_0}{\pi} \in R$ as $\frac{x-s_0}{\pi} - s_1 \in \pi R$, or $x - s_0 - s_1 \pi \in \pi^2 R$. Repeat the process inductively and note that the differences $x - \sum_{i=0}^{n} s_i \pi^i \in \pi^{n+1}R$ tend to zero.
+\end{proof}
+
+In the $p$-adic numbers, we can take $\{0, \dots, p-1\}$ as a system of representatives, so we find each $p$-adic integer has a unique $p$-adic expansion $x = \sum_{i=0}^\infty x_i p^i$ for $x_i \in \{0, \dots, p-1\}$.
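For the $p$-adics the expansion of the proposition is ordinary base-$p$ expansion, computed greedily digit by digit; the sketch below (hypothetical helper name) recovers the familiar expansion $-1 = 4 + 4 \cdot 5 + 4 \cdot 5^2 + \dots$ in $\mathbb{Z}_5$:

```python
# Compute the first k p-adic digits s_i of an integer x, so that
# x = s_0 + s_1 p + ... + s_{k-1} p^{k-1} (mod p^k), with s_i in {0, ..., p-1}.
def padic_digits(x, p, k):
    digits = []
    for _ in range(k):
        digits.append(x % p)
        x //= p  # floor division also handles negative x correctly
    return digits

padic_digits(-1, 5, 4)  # [4, 4, 4, 4]
```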
+
+We now prove the first theorem.
+
+\begin{proof}
+Note that $\mathbb{Z}-0 \subset R$ gets sent to nonzero elements in the residue field $k$, which is of characteristic zero. This means that $\mathbb{Z}-0 \subset R$ consists of units, so $\mathbb{Q} \subset R$.
+
+Let $L \subset R$ be a subfield. Then $L \simeq \overline{L} \subset k$; if $t \in k - \overline{ L}$, I claim that there is a subfield $L' \subset R$ containing $L$ with $t \in \overline{L'}$.
+
+If $t$ is transcendental, lift it to $T \in R$; then $T$ is transcendental over $L$ and is invertible in $R$, so we can take $L' := L(T)$.
+
+If the minimal polynomial of $t$ over $\overline{L}$ is $\overline{f}(X) \in k[X]$, we have $\overline{f}(t) = 0$. Moreover, $\overline{f}'(t) \neq 0$ because these fields are of characteristic zero and all extensions are separable.
+So lift $\overline{f}(X)$ to $f(X) \in R[X]$; by Hensel's lemma, lift $t$ to $u \in R$ with $f(u) = 0$. Then $f$ is irreducible in $L[X]$ (otherwise a factorization would reduce to one of $\overline{f} \in \overline{L}[X]$), so $L[u] = L[X]/(f(X))$, which is a field $L'$.
+
+So if $K \subset R$ is a maximal subfield (which exists by Zorn's lemma), the above argument shows that $\overline{K} = k$; that is, $K$ is a system of representatives. The previous proposition then gives an isomorphism $R \simeq K[[X]] \simeq k[[X]]$, sending a uniformizer of $R$ to $X$.
+\end{proof}
+
+
+\section{Henselian rings}
+
+
+
+There is a substitute for completeness that captures the essential
+properties: Henselianness. A ring is Henselian if it satisfies
+Hensel's lemma, more or less. We mostly follow \cite{Ra70} in the treatment.
+
+\subsection{Semilocal rings}
+
+To start with, we shall need a few preliminaries on semi-local rings.
+
+Fix a local ring $A$ with maximal ideal $\mathfrak{m} \subset A$.
+Fix a finite $A$-algebra $B$; by definition, $B $ is a finitely
+generated $A$-module.
+
+\begin{proposition}
+Hypotheses as above, the maximal ideals of $B$ are in bijection with
+the prime ideals of $B $ containing $\mathfrak{m} B$, or equivalently
+the prime ideals of $\overline{B} = B \otimes_A A/\mathfrak{m}$.
+\end{proposition}
+
+\begin{proof}
+We have to show that every maximal ideal of $B$ contains $\mathfrak{m}
+B$. Suppose $\mathfrak{n} \subset B$ were maximal and did not contain
+$\mathfrak{m}B$. Then $\mathfrak{n} + \mathfrak{m} B$ strictly contains
+$\mathfrak{n}$, while Nakayama's lemma shows that $\mathfrak{n} +
+\mathfrak{m} B \neq B$ (as $B$ is a finitely generated $A$-module); this
+contradicts maximality.
+
+It is now clear that the maximal ideals of $B$ are in bijection
+naturally with those of $\overline{B}$.
+However, $\overline{B}$ is an artinian ring, as it is finite over the
+field $A/\mathfrak{m}$, so every prime ideal in it is maximal.
+\end{proof}
+
+
+
+The next thing to observe is that $\overline{B}$, as an artinian ring,
+decomposes as a product of local artinian rings.
+In fact, this decomposition is unique.
+However, this does not mean that $B$ itself
+is a product of local rings ($B$ is not necessarily artinian).
+Nonetheless, if such a splitting exists, it is necessarily unique.
+
+\begin{proposition}
+Suppose $R = \prod R_i$ is a finite product of local rings $R_i$. Then
+the $R_i$ are unique.
+\end{proposition}
+\begin{proof}
+To give a decomposition $R = \prod R_i$ is equivalent to giving
+idempotents $e_i$. If we had another decomposition $R = \prod S_j$,
+then we would have new idempotents $f_j$. The image of each $f_j$ in
+each $R_i$ is either zero or one as a local ring has no nontrivial
+idempotents. From this, one can easily deduce that the $f_j$'s are
+sums of the $e_i$'s, and if the $S_j$ are local, one sees that the
+$S_j$'s are just the $R_i$'s permuted.
+\end{proof}
+
+In fact, there is a canonical way of determining the factors $R_i$.
+A finite product of local rings as above is \textit{semi-local}:
+the maximal ideals $\mathfrak{m}_i$ are finite in number, and, furthermore, the
+canonical map
+\[ R \to \prod R_{\mathfrak{m}_i} \]
+is an isomorphism.
+
+
+In general, this splitting \textbf{fails} for semi-local rings, and in
+particular for rings finite over a local ring.
+We have seen that this splitting nonetheless works for rings finite
+over a field.
+
+
+To recapitulate, we can give a criterion for when a semi-local ring splits as
+above.
+\begin{proposition} \label{whatissplitting}
+Let $R$ be a semilocal ring with maximal ideals $\mathfrak{m}_1, \dots,
+\mathfrak{m}_k$. Then $R$ splits into local factors if and only if, for each
+$i$, there is an idempotent $e_i \in \bigcap_{j \neq i} \mathfrak{m}_j -
+\mathfrak{m}_i$. Then the rings $Re_i$ are local and $R = \prod Re_i$.
+\end{proposition}
+\begin{proof}
+If $R$ splits into local factors, then clearly we can find such idempotents.
+Conversely, suppose given the $e_i$.
+Then for each $i \neq j$, $e_i e_j$ is an idempotent $e_{ij}$ that belongs to all the
+maximal ideals $\mathfrak{m}_k$. So it is in the Jacobson radical. But then $1 -
+e_{ij}$ is invertible, so $e_{ij}(1-e_{ij})=0$ implies that $e_{ij} = 0$.
+
+It follows that the $\left\{e_i\right\}$ are \emph{orthogonal} idempotents. To
+see that $R = \prod Re_i$ as rings, we now need only to see that the
+$\left\{e_i\right\}$ form a \emph{complete} set; that is, $\sum e_i = 1$. But
+the sum $\sum e_i$ is an idempotent itself since the $e_i$ are mutually
+orthogonal. Moreover, the sum $\sum e_i$ belongs to no $\mathfrak{m}_i$, so it
+is invertible, thus equal to $1$. The claim is now clear, since each $Re_i$ is
+local by assumption.
+\end{proof}
+
+Note that if we can decompose a semilocal ring into a product of local rings,
+then we can go no further in a sense---it is easy to check that a local ring has
+no nontrivial idempotents.
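A toy case of the splitting (a check written for this illustration): in the semilocal ring $\mathbb{Z}/6 \simeq \mathbb{Z}/2 \times \mathbb{Z}/3$, the idempotents $3$ and $4$ are orthogonal, sum to $1$, and generate the two local factors.

```python
# Enumerate idempotents of Z/6; the nontrivial ones realize the splitting
# Z/6 = (Z/6)e1 x (Z/6)e2 with e1 = 3, e2 = 4.
idempotents = [x for x in range(6) if (x * x - x) % 6 == 0]
# 3 * 4 = 0 and 3 + 4 = 1 in Z/6, as the proposition requires
```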
+
+\subsection{Henselian rings}
+
+\begin{definition}
+A local ring $(R, \mathfrak{m})$ is \textbf{henselian} if every finite
+$R$-algebra is a product of local $R$-algebras.
+\end{definition}
+
+
+It is clear from the remarks of the previous section that the
+decomposition as a product of local algebras is unique.
+Furthermore, we have already seen:
+
+\begin{proposition}
+A field is henselian.
+\end{proposition}
+\begin{proof}
+Indeed, then any finite algebra over a field is artinian (as a
+finite-dimensional vector space).
+\end{proof}
+
+This result was essentially a corollary of basic facts about artinian
+rings. In general, though, henselian rings are very far from artinian. For
+instance, we will see that every \emph{complete} local ring is henselian.
+
+We continue with a couple of further easy claims.
+\begin{proposition} \label{finitehenselian}
+A local ring that is finite over a henselian ring is henselian.
+\end{proposition}
+\begin{proof}
+Indeed, if $R$ is a henselian local ring and $S$ a finite $R$-algebra, then
+every finite $S$-algebra is a finite $R$-algebra, and thus splits into a product
+of local rings.
+\end{proof}
+
+We have seen that henselianness of a local ring $(R, \mathfrak{m})$ with residue
+field $k$ is equivalent to the condition that every finite $R$-algebra $S$ splits
+into a product of local rings. Since $S \otimes_R k$ always splits into a
+product of local rings, and this splitting is unique, we see that if a splitting
+of $S$ exists, it necessarily lifts the splitting of $S \otimes_R k$.
+
+Since a ``splitting'' is the same thing (by \cref{whatissplitting}) as a
+complete collection of idempotents, one for each maximal ideal, we are going to
+characterize henselian rings by the property that one can lift idempotents from
+the residue ring.
+
+\begin{definition}
+A local ring $(R, \mathfrak{m})$ \textbf{satisfies lifting
+ idempotents} if for every finite $R$-algebra $S$, the canonical
+(reduction) map
+between idempotents of $S$ and those of $S/\mathfrak{m}S$ is surjective.
+\end{definition}
+
+Recall that there is a functor $\idem$ from rings to sets that sends each ring
+to its collection of idempotents. So the claim is that the natural map $\idem(S)
+\to \idem(S/\mathfrak{m}S)$ is a surjection.
+
+In fact, in this case, we shall see that the map $\idem(S) \to
+\idem(S/\mathfrak{m}S)$ is even injective.
+\begin{proposition}
+The map from idempotents of $S$ to those of $S/\mathfrak{m} S$ is
+always injective.
+\end{proposition}
+We shall not even use the fact that $S$ is a finite $R$-algebra here.
+
+\begin{proof}
+Suppose $e, e' \in S$ are idempotents whose images in $S/\mathfrak{m}S$ are the
+same. Then
+\[ (e-e')^3 = e^3 -3e^2 e' + 3e'^2 e- e'^3 = e^3 - e'^3 = e - e'. \]
+Thus if we let $x = e - e'$, we have $x^3 - x =0$, and $x $ belongs to
+$\mathfrak{m}S$. Thus
+\[ x(1-x^2) = 0, \]
+and $1-x^2$ is invertible in $S$ (because $x^2$ belongs to the Jacobson radical
+of $S$). Thus $x =0 $ and $e = e'$.
+\end{proof}
+
+
+With this, we now want a characterization of henselian rings in terms of the
+lifting idempotents property.
+
+\begin{proposition} \label{orthogonallifts}
+Suppose $(R, \mathfrak{m})$ satisfies lifting idempotents, and let $S$ be a
+finite $R$-algebra. Then given
+orthogonal idempotents $\overline{e}_1, \dots, \overline{e}_n$ of $S/\mathfrak{m}S$, there are
+mutually orthogonal lifts $\left\{e_i\right\} \in S$.
+\end{proposition}
+
+The point is that we can make the lifts mutually orthogonal. (Recall that
+idempotents are \emph{orthogonal} if their product is zero.)
+\begin{proof}
+Indeed, by assumption we can get lifts $\left\{e_i\right\}$ which are
+idempotent; we need to show that they are mutually orthogonal. But in any case
+$e_i e_j$ for $i \neq j$ is an idempotent, which lies in
+$\mathfrak{m}S \subset S$ and thus in the Jacobson radical. It follows that
+$e_i e_j = 0$, proving the orthogonality.
+\end{proof}
+
+\begin{proposition}
+A local ring is henselian if and only if it satisfies lifting idempotents.
+\end{proposition}
+\begin{proof}
+Suppose first $(R, \mathfrak{m} )$ satisfies lifting idempotents.
+Let $S$ be any finite $R$-algebra. Then $S/\mathfrak{m}S$ is artinian,
+so factors as a product of local artinian rings $\prod \overline{S}_i$. This
+factorization corresponds to idempotents $\overline{e}_i \in
+S/\mathfrak{m}S$.
+We can lift these to orthogonal idempotents $e_i \in S$ by
+\cref{orthogonallifts}.
+These idempotents correspond to a decomposition
+\[ S = \prod S_i \]
+which lifts the decomposition $\overline{S} = \prod \overline{S}_i$. Since the
+$\overline{S_i}$ are local, so are the $S_i$.
+Thus $R$ is henselian.
+
+Conversely, suppose $R$ henselian.
+Let $S$ be a finite $R$-algebra and let $\overline{e} \in \overline{S} =
+S/\mathfrak{m}S$ be idempotent. Since $\overline{S}$ is a product of local
+rings, $\overline{e}$ is a finite sum of the primitive idempotents in
+$\overline{S}$. By henselianness, each of these primitive idempotents lifts to
+$S$, so $\overline{e}$ does too.
+\end{proof}
+
+
+\begin{proposition}
+Let $R$ be a local ring and $I \subset R$ an ideal consisting of nilpotent
+elements. Then $R$ is henselian if and only if
+$R/I$ is.
+\end{proposition}
+\begin{proof} One direction is clear by \cref{finitehenselian}. For the other,
+suppose $R/I$ is henselian. Let $\mathfrak{m} \subset R$ be the maximal ideal.
+Let $S$ be any finite $R$-algebra; we have to show surjectivity of
+\[ \idem(S) \to \idem(S/\mathfrak{m}S). \]
+However, we are given that $R/I$ is henselian and $S/IS$ is a finite
+$R/I$-algebra, so
+\[ \idem(S/IS ) \to \idem(S/\mathfrak{m}S) \]
+is a surjection. Now we need only observe that $\idem(S) \to \idem(S/IS)$ is a
+bijection. This follows because idempotents in $S$ (resp. $S/IS$) correspond
+to disconnections of $\spec S$ (resp. $\spec S/IS$) by \cref{}. However, as
+$I$ consists of nilpotents, $\spec S$ and $\spec S/IS$ are homeomorphic
+naturally.
+\end{proof}
+
+\subsection{Hensel's lemma}
+
+We now want to show that Hensel's lemma is essentially what characterizes
+henselian rings, which explains the name.
+Throughout, we use the $\overline{}$ symbol to denote reduction mod an ideal
+(usually $\mathfrak{m}$ or $\mathfrak{m}$ times another ring).
+
+\begin{proposition} \label{factorcriterion}
+Let $(R, \mathfrak{m})$ be a local ring with residue field $k$. Then $R$ is henselian if and only if,
+whenever a monic polynomial $P \in R[X]$ satisfies
+\[ \overline{P} = \overline{Q}\overline{R} \in k[X], \]
+for some relatively prime polynomials $\overline{Q}, \overline{R} \in k[X]$,
+then the factorization lifts to a factorization
+\[ P = QR \in R[X]. \]
+\end{proposition}
+\textbf{This notation should be improved.}
+\begin{proof}
+Suppose $R$ henselian and suppose $P$ is a polynomial whose reduction admits
+such a factorization.
+Consider the finite $R$-algebra
+\( S = R[X]/(P); \)
+since $\overline{S } = S/\mathfrak{m}S $ can be represented as
+$k[X]/(\overline{P})$, it admits a splitting into components
+\[ \overline{S} = k[X]/(\overline{Q}) \times k[X]/(\overline{R}). \]
+Since $R$ is henselian, this splitting lifts to $S$, and we get a splitting
+\[ S = S_1 \times S_2. \]
+Here $S_1 \otimes k \simeq k[X]/(\overline{Q})$ and $S_2 \otimes k \simeq
+k[X]/(\overline{R})$.
+The image of $X$ in $S_1 \otimes k$ is annihilated by $\overline{Q}$, and the
+image of $X$ in $S_2 \otimes k$ is annihilated by $\overline{R}$.
+
+\begin{lemma}
+Suppose $R$ is a local ring, $S$ a finite $R$-algebra generated
+by an element $x \in S$. Suppose the image $\overline{x} \in \overline{S}=S
+\otimes_R
+k$ satisfies a monic polynomial equation $u(\overline{x}) = 0$. Then
+there is a monic polynomial $U$ lifting $u$ such that $U(x) = 0$ (in $S$). \end{lemma}
+\begin{proof}
+Let $\overline{x} \in \overline{S}$ be the generating element that satisfies
+$u(\overline{x})=0$, and let $x \in S$ be a lift of it. Suppose $u$ has
+degree $n$. Then $1, x, \dots,
+x^{n-1}$ span $S$ by Nakayama's lemma. Thus there is a monic polynomial $U$ of
+degree $n$ that annihilates $x$; its reduction is a monic degree-$n$ multiple
+of $u$, hence equal to $u$.\end{proof}
+
+
+Returning to the proposition, we see that the images of the generator $X$ in $S_1, S_2$
+satisfy monic polynomial equations $Q, R$ that lift $\overline{Q},
+\overline{R}$. Thus $X$ satisfies $QR$ in $S = R[X]/(P)$; in other words, $QR$ is a
+multiple of $P$. Since $QR$ and $P$ are monic of the same degree, $QR = P$.
+Thus we have lifted the factorization
+$\overline{P} = \overline{Q} \overline{R}$.
+This proves that factorizations can be lifted.
+
+Now, let us suppose that factorizations can always be lifted for finite
+$R$-algebras. We are now going
+to show that $R$ satisfies lifting idempotents.
+Suppose $S$ is a finite $R$-algebra, $\overline{e}$ a primitive idempotent in
+$\overline{S}$.
+We can lift $\overline{e}$ to some element $e' \in S$. Since $e'$ is contained
+in a finite $R$-algebra that contains $R$, we know that $e'$ is \emph{integral} over $R$,
+so that we can find a map $R[X]/(P) \to S$ sending the generator $X \mapsto
+e'$, for some polynomial $P$.
+We are going to use the fact that $R[X]/(P)$ splits to lift the idempotent
+$\overline{e}$.
+
+Let $\mathfrak{m}_1, \dots, \mathfrak{m}_k$ be the maximal ideals of $S$.
+These equivalently correspond to the points of $\spec \overline{S}$. We know
+that $e'$ lies outside precisely one of the $\mathfrak{m}_i$ (because a
+primitive idempotent in $\overline{S}$ is one in one local factor and zero
+in the others). Call this exceptional ideal $\mathfrak{m}_1$, say.
+
+We have a map $\spec S \to \spec R[X]/(P)$ coming from the map $\phi:
+R[X]/(P) \to S$. We claim that the image of
+$\mathfrak{m}_1$ is different from the images of the $\mathfrak{m}_j, j > 1$.
+Indeed, $e' = \phi(X) \in \mathfrak{m}_j$ precisely for $j > 1$, so
+$\phi^{-1}(\mathfrak{m}_1)$ does not contain $X$, while each
+$\phi^{-1}(\mathfrak{m}_j), j> 1$, does contain $X$.
+
+Consider a primitive idempotent $f$ of $R[X]/(P)$ corresponding to
+$\phi^{-1}(\mathfrak{m}_1)$. Then $f$ belongs to every other maximal
+ideal of $R[X]/(P)$ but not to $\phi^{-1}(\mathfrak{m}_1)$. Thus $\phi(f)$,
+which is idempotent,
+belongs to every $\mathfrak{m}_j, j > 1$, but not to $\mathfrak{m}_1$.
+It follows that $\phi(f)$ must lift $\overline{e} $, and we have completed
+the proof.
+\end{proof}
+
+
+\begin{corollary}
+If every monogenic,\footnote{That is, generated by one element.} finitely
+presented and finite $R$-algebra is a product of local rings, then $R$ is
+henselian.
+\end{corollary}
+\begin{proof}
+Indeed, the proof of the above result shows that if $R[X]/(P)$ splits for
+every monic $P$, then $R$ is henselian.
+\end{proof}
+
+
+From the above result, we can get a quick example of a non-complete henselian
+ring:
+
+\begin{example}
+The integral closure of the localization $\mathbb{Z}_{(p)}$ in the
+ring $\mathbb{Z}_p$ of $p$-adic integers is a henselian ring. Indeed, it is
+first of all a discrete valuation ring (as we can restrict the valuation on
+$\mathbb{Z}_p$; note that an element of $\mathbb{Q}_p$ which is algebraic over
+$\mathbb{Q}$ and has norm at most one is \emph{integral} over
+$\mathbb{Z}_{(p)}$). This follows from the criterion of
+\cref{factorcriterion}. If a monic polynomial $P$ factors in the residue field, then
+it factors in $\mathbb{Z}_p$, and if $P$ has coefficients integral over
+$\mathbb{Z}_{(p)}$, so does any factor.
+\end{example}
+
+
+\begin{example}
+If $k$ is a complete field with a nontrivial absolute value and $X$ is any
+topological space, we can consider for each open subset $U \subset X$ the
+ring $\mathcal{A}(U)$ of continuous maps $U \to k$. As $U$ ranges over the open subsets
+containing an element $x$, the colimit $\varinjlim \mathcal{A}(U)$ (the
+``local ring'' at $x$) is a local henselian ring. See \cite{Ra70}.
+\end{example}
+
+
+\begin{proposition}
+Let $(R_i, \mathfrak{m}_i)$ be an inductive system of local rings and local
+homomorphisms. If each $R_i$ is henselian, then the colimit $\varinjlim R_i$
+is henselian too.
+\end{proposition}
+\begin{proof}
+We already know (\cref{}) that the colimit is a local ring, and that
+the maximal ideal of $\varinjlim R_i$ is the colimit $\varinjlim
+\mathfrak{m}_i$.
+Finally, given any monic polynomial over $\varprojlim R_i$ together with a
+factoring of its reduction in the residue field, the polynomial and the
+factoring come from some finite stage $R_i$; the henselianness of $R_i$
+allows us to lift the factoring.
+\end{proof}
+
+
+\subsection{Example: Puiseux's theorem}
+
+Using the machinery developed here, we are going to prove:
+
+\begin{theorem}
+Let $K$ be an algebraically closed field of characteristic zero. Then any
+finite extension of the field of meromorphic power
+series\footnote{That is, the quotient field of $K[[T]]$.} $K((T))$ is of the
+form $K((T^{1/n}))$ for some $n$.
+\end{theorem}
+In particular, we see that any finite extension of $K((T))$ is abelian, even
+cyclic. The idea is going to be to look at the integral closure of $K[[T]]$ in
+the finite extension, argue that it itself is a DVR, and then refine an
+``approximate'' root in this DVR of the equation $\alpha^n = T$ to an exact one.
+
+\begin{proof}
+Let $R = K[[T]]$ be the power series ring; it is a complete, and thus
+henselian, DVR. Let $L$ be a finite extension of $K((T))$ of degree $n$ and $S$ the integral
+closure of $R$ in $L$, which we know to be a DVR. This is a finite $R$-algebra (cf. \cref{}), so $S$ is a
+product of local domains. Since $S$ is a domain, it is itself local. It is
+easy to see that if $\mathfrak{n} \subset S$ is the maximal ideal, then $S$ is
+$\mathfrak{n}$-adically complete (for instance because the extension of the
+maximal ideal of $R$ to $S$ is a power of $\mathfrak{n}$, and $S$ is a finite free $R$-module).
+
+Let $\mathfrak{m} \subset R$ be the maximal ideal.
+We have the formula $ef = n$, because there is only one prime of $S$ lying
+above $\mathfrak{m}$. But $f = 1$ as the residue field of $R$ is algebraically
+closed. Hence $e = n$, and the extension is \emph{totally} ramified.
+
+Let $\alpha \in S$ be a uniformizer. Since the extension is totally ramified,
+$\alpha^n$ and $T$ have the same valuation in $S$, so $\alpha^n/T$ is a unit
+of $S$; as the residue extension is trivial, its image in the residue field
+lifts to a unit of $K \subset R$. Rescaling $\alpha$ by an $n$-th root of that
+unit (which exists in $K$, as $K$ is algebraically closed), we may assume
+\[ \alpha^n/T \equiv 1 \mod \mathfrak{n}. \]
+Since the polynomial $X^n - 1$ is separable over the residue field ($K$ having
+characteristic zero), Hensel's lemma lifts the simple root $1$ to a unit
+$\beta \in S$ with $\beta \equiv 1 \mod \mathfrak{n}$ and $\beta^n =
+T/\alpha^n$. Setting $\alpha' = \alpha\beta$, we get
+\[ \alpha'^n = T. \]
+Then $\alpha'$ is also a uniformizer at $\mathfrak{n}$ (as $\alpha' \equiv
+\alpha \mod \mathfrak{n}^2$).
+It follows that $R[\alpha']$ must in fact be equal to $S$,\footnote{\cref{};
+a citation here is needed.} and thus $L$ is
+equal to $K((T))(\alpha') = K((T^{1/n}))$.
+\end{proof}
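+
+To see the henselian refinement in the simplest possible instance, here is a
+quick sanity check (an illustration, not part of the theorem):
+\begin{example}
+Take $K = \mathbb{C}$ and consider $X^2 - (1+T)$ over $\mathbb{C}((T))$.
+Modulo the maximal ideal of $\mathbb{C}[[T]]$, it becomes $X^2 - 1 =
+(X-1)(X+1)$, a factorization into coprime factors; since $\mathbb{C}[[T]]$ is
+henselian, the factorization lifts, and $1+T$ has a square root in
+$\mathbb{C}[[T]]$ (explicitly, the binomial series $1 + \frac{1}{2}T -
+\frac{1}{8}T^2 + \dots$). So adjoining this root gives no extension at all.
+By contrast, $X^2 - T$ reduces to $X^2$, Hensel's lemma does not apply, and
+adjoining a root produces the genuinely ramified extension
+$\mathbb{C}((T^{1/2}))$, as in the theorem.
+\end{example}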
diff --git a/books/cring/dedekind.tex b/books/cring/dedekind.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d07982307b085ddcd344dcbbc9fc7961757f03fe
--- /dev/null
+++ b/books/cring/dedekind.tex
@@ -0,0 +1,766 @@
+\chapter{Dedekind domains}
+
+The notion of a Dedekind domain allows one to generalize the usual unique
+factorization in principal ideal domains as in $\mathbb{Z}$ to settings such
+as the ring of integers in an algebraic number field. In general, a Dedekind
+domain does not have unique factorization, but the \emph{ideals} in a Dedekind
+domain do factor uniquely into a product of prime ideals.
+We shall see that Dedekind domains have a short characterization in terms of
+the characteristics we have developed.
+
+After this, we shall study the case of an \emph{extension} of Dedekind domains $A \subset B$. It will be of
+interest to determine how a prime ideal of $A$ factors in $B$. This should
+provide background for the study of basic algebraic number theory, e.g. a
+rough equivalent of the first chapter of
+\cite{La94} or \cite{Se79}.
+
+
+
+
+\section{Discrete valuation rings}
+
+
+\subsection{Definition}
+
+We start with the simplest case of a \emph{discrete valuation ring,} which is
+the local version of a Dedekind domain.
+Among the one-dimensional local noetherian rings, these will be the nicest.
+
+\begin{theorem} \label{DVRthm}
+Let $R$ be a noetherian local domain whose only prime ideals are $(0)$ and the maximal
+ideal
+$\mathfrak{m} \neq 0$.
+Then, the following are equivalent:
+\begin{enumerate}
+\item $R$ is factorial.
+\item $\mathfrak{m}$ is principal.
+\item $R$ is integrally closed.
+\item $R$ is a valuation ring with value group $\mathbb{Z}$.
+\end{enumerate}
+\end{theorem}
+\begin{definition}
+A ring satisfying these conditions is called a \textbf{discrete valuation
+ring} (\textbf{DVR}).
+A discrete valuation ring necessarily has only two prime ideals, namely
+$\mathfrak{m}$ and $(0)$.
+
+Alternatively, we can say that a noetherian local domain is a DVR if and only
+if it is of dimension one and integrally closed.
+\end{definition}
+
+
+
+\begin{proof}
+Assume 1: that is, suppose $R$ is factorial. Then every prime ideal of height one is principal
+by \cref{heightonefactoriality}.
+But $\mathfrak{m}$ has height one: it is minimal over
+any nonzero nonunit of $R$, the only primes being $(0)$ and $\mathfrak{m}$.
+So $\mathfrak{m}$ is principal. Thus 1 implies 2, and similarly 2 implies 1 by
+\cref{heightonefactoriality}.
+
+1 implies 3 is true for any $R$: a factorial ring is always integrally
+closed, by \cref{factorialimpliesnormal}.
+
+4 implies 2 is easy as well. Indeed, suppose $R$ is a valuation ring with
+value group $\mathbb{Z}$. Then one chooses an element $x \in R$ whose valuation
+is one. It is easy to see that $x$ generates $\mathfrak{m}$: if $y
+\in \mathfrak{m}$, then the valuation of $y$ is at least one,
+so $y/x \in R$ and $y \in (x)$.
+
+The proof that 2 implies 4 is also straightforward. Suppose
+$\mathfrak{m}$ is principal, generated by $t$.
+In this case, we claim that any nonzero $x \in R$ is associate to (i.e.\ differs by a
+unit from) a power of $t$.
+Indeed, since $\bigcap \mathfrak{m}^n = 0$ by the Krull intersection theorem
+(\cref{krullintersection}), it follows that there exists $n$ such that $x$ is
+divisible by $t^n$ but not by $t^{n+1}$.
+In particular, if we write $x = u t^n$, then $u \notin (t)$ is a unit. This
+proves the claim.
+
+With this in mind, we need to show that $R$ is a valuation ring with value
+group $\mathbb{Z}$.
+If $x \in R$ is nonzero, we define the valuation of $x$ to be the nonnegative integer $n$ such
+that $(x) = (t^n)$. One can
+easily check that this is a valuation on $R$, which extends to the quotient
+field by additivity.
+
+The interesting part of the argument is the claim that 3
+implies 2. Suppose $R$ is integrally closed, noetherian, and of dimension one; we claim that $\mathfrak{m}$ is
+principal. Choose $x \in \mathfrak{m} - \left\{0\right\}$. If $(x) =
+\mathfrak{m}$, we are done.
+
+Otherwise, we can look at $\mathfrak{m}/(x) \neq
+0$. The module $\mathfrak{m}/(x)$ is a nonzero finitely generated module over a
+noetherian ring, so it has an associated prime. That associated prime is either zero or
+$\mathfrak{m}$ because $R$ has dimension one. But $0$ is not an associated prime because every element in the
+module is killed by $x$. So $\mathfrak{m}$ is an associated prime of
+$\mathfrak{m}/(x)$.
+
+There is $\overline{y} \in \mathfrak{m}/(x)$ whose annihilator is
+$\mathfrak{m}$.
+Thus, there is $y \in \mathfrak{m}$ such that $y \notin (x)$ and $\mathfrak{m}y
+\subset (x)$. In particular, $y/x \in K(R) - R$, but
+\[ (y/x) \mathfrak{m} \subset R. \]
+There are two cases:
+\begin{enumerate}
+\item Suppose $(y/x) \mathfrak{m} = R$. Then we can write $\mathfrak{m} =
+R(x/y)$. So $\mathfrak{m}$ is principal. (This argument shows that $x/y \in R$.)
+\item The other possibility is that $(y/x) \mathfrak{m} \subsetneq R$. In this
+case, $(y/x)\mathfrak{m}$ is an ideal, so
+\[ (y/x) \mathfrak{m} \subset \mathfrak{m}. \]
+In particular, multiplication by $y/x$ carries $\mathfrak{m}$ to itself, and
+stabilizes the finitely generated \emph{faithful} module
+$\mathfrak{m}$. By \cref{thirdintegralitycriterion}, we see that
+$y/x$ is integral over $R$. In particular, we find that $y/x \in R$, as $R$
+was integrally closed, a contradiction as $y \notin (x)$.
+\end{enumerate}
+\end{proof}
+
+Let us give several examples of DVRs.
+\begin{example}
+The localization $\mathbb{Z}_{(p)}$ at any prime ideal $(p) \neq 0$ is a DVR.
+The associated valuation is the $p$-adic valuation.
+\end{example}
+
+\begin{example}
+Although we shall not prove (or define) this, the local ring of an
+algebraic curve at a smooth point is a DVR. The associated valuation measures the
+extent to which a function (or germ thereof) has a zero (or pole) at that
+point.
+\end{example}
+
+\begin{example}
+The formal power series ring $\mathbb{C}[[T]]$ is a discrete valuation ring,
+with maximal ideal $(T)$.
+\end{example}
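+
+In each of these cases the valuation can be written down explicitly; for
+instance:
+\begin{example}
+In $\mathbb{C}[[T]]$, the valuation of a nonzero power series $f = \sum_{n
+\geq 0} a_n T^n$ is the least $n$ with $a_n \neq 0$, the order of vanishing
+of $f$ at $0$; e.g.\ $v(T^2 - T^3) = 2$. It extends to the quotient field
+$\mathbb{C}((T))$ of formal Laurent series by $v(f/g) = v(f) - v(g)$.
+\end{example}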
+
+\subsection{Another approach}
+
+In the proof of \cref{DVRthm}, we freely used the notion of associated primes,
+and thus some of the results of \cref{noetherian}.
+However, we can avoid all that and give a more ``elementary approach,'' as in
+\cite{CaFr67}.
+
+
+Let us suppose that $R$ is an integrally closed, local noetherian domain of
+dimension one. We shall prove that the maximal ideal $\mathfrak{m} \subset R$
+is principal. This was the hard part of \cref{DVRthm}, and the only part
+where we used associated primes earlier.
+\begin{proof}
+We will show that $\mathfrak{m}$ is principal, by showing it is \emph{invertible} (as will be seen below). We divide the proof into steps:
+
+\paragraph{Step one}
+For a nonzero ideal $I \subset R$, let $I^{-1} := \{ x \in K(R): xI \subset R \}$,
+where $K(R)$ is the quotient field of $R$. Then clearly $I^{-1} \supset R$
+and $I^{-1}$ is an $R$-module, but in general we cannot say that $I^{-1} \neq
+R$ even if $I$ is proper.
+Nevertheless, we claim that in the present situation, we have \[
+{\mathfrak{m}^{-1} \neq R.}\] This is the conclusion of Step one.
+
+The proof runs along familiar lines: we show that any maximal element in the
+set of ideals $I \subset R$ with $I^{-1} \neq R$ is prime.
+The set of such ideals is nonempty: it contains any $(a)$ for $a \in \mathfrak{m} - \{0\}$ (in which case $(a)^{-1} = Ra^{-1} \neq R$, as $a$ is not a unit).
+There must be a maximal element in this set of ideals by noetherianness, which,
+as we will see, is prime; thus that maximal element must be $\mathfrak{m}$, which proves our claim.
+
+So to fill in the missing link, we must prove:
+\begin{lemma} If $S$ is a noetherian domain, any maximal element in the set of ideals $I \subset S$ with $I^{-1} \neq S$ is prime.
+\end{lemma}
+
+\begin{proof}
+Let $J$ be a maximal element, and suppose we have $ab \in J$, with $a,b \notin J$. I claim that if $z \in J^{-1} - S$, then $za, zb \in J^{-1} - S$. The $J^{-1}$ part follows since $J^{-1}$ is an $S$-module and $a, b \in S$.
+
+By symmetry it is enough to prove the other half for $a$, namely that $za \notin
+S$. If $za \in S$, we would have $z( (a) + J ) \subset S$, so $( (a) + J)^{-1} \neq S$ (as $z \notin S$), contradicting the maximality of $J$.
+
+Then it follows that $z(ab) = (za) b \in J^{-1} - S$, by applying the claim just made twice. But $ab \in J$, so $z(ab) \in S$, a contradiction.
+\end{proof}
+
+
+
+\paragraph{Step two} In the previous step, we have established that
+$\mathfrak{m}^{-1} \neq R$.
+
+We now claim that $\mathfrak{m}\mathfrak{m}^{-1} = R$. First, we know of course
+that $\mathfrak{m}\mathfrak{m}^{-1} \subset R$ by definition of inverses,
+and equally $\mathfrak{m} \subset \mathfrak{m}\mathfrak{m}^{-1}$ too. So $\mathfrak{m}\mathfrak{m}^{-1}$ is an ideal sandwiched between $\mathfrak{m}$ and $R$.
+Thus we only need to prove that $\mathfrak{m} \mathfrak{m}^{-1} = \mathfrak{m}$
+is impossible. If this were the case, we could choose some $a \in
+\mathfrak{m}^{-1} - R$ which must satisfy $a \mathfrak{m} \subset \mathfrak{m}$.
+Then $a$ would be integral over $R$, as it stabilizes the finitely generated faithful module $\mathfrak{m}$.
+As $R$ is integrally closed, this is impossible.
+
+\paragraph{Step three}
+
+Finally, we claim that $\mathfrak{m}$ is principal, which is the final step of
+the proof.
+In fact, let us prove a more general claim.
+
+\begin{proposition}
+Let $(R, \mathfrak{m})$ be a local noetherian domain such that $\mathfrak{m}
+\mathfrak{m}^{-1} = R$. Then $\mathfrak{m}$ is principal.
+\end{proposition}
+\begin{proof}
+Indeed, since $\mathfrak{m} \mathfrak{m}^{-1} = R$, write
+\[ 1 = \sum m_i n_i, \quad m_i \in \mathfrak{m}, \ n_i \in \mathfrak{m}^{-1}.\]
+At least one $m_j n_j$ is invertible, since $R$ is local.
+It follows that there are $x \in \mathfrak{m}$ and $y \in \mathfrak{m}^{-1}$
+whose product $xy$ is a unit in $R$.
+We may even assume $xy = 1$.
+
+Then we claim $\mathfrak{m} = (x)$.
+Indeed, we need only prove $\mathfrak{m} \subset (x)$. For this, if $q \in
+\mathfrak{m}$, then $qy \in R$ by definition of $\mathfrak{m}^{-1}$, so \[ q =
+x(qy) \in ( x).\]
+\end{proof}
+
+\end{proof}
+
+So we are done in this case too. Taking stock, we have an effective way to tell whether a noetherian local domain is a DVR: check that it is integrally closed and of dimension one. These conditions are much easier to verify in practice (noetherianness is usually easy, integral closure is often automatic, and the dimension condition is not too hard either, for reasons that will follow) than the existence of a valuation.
+
+
+\section{Dedekind rings}
+
+\subsection{Definition}
+We now introduce a closely related notion.
+\begin{definition}
+A \textbf{Dedekind ring} is a noetherian domain $R$ such that
+\begin{enumerate}
+\item $R$ is integrally closed.
+\item Every nonzero prime ideal of $R$ is maximal.
+\end{enumerate}
+\end{definition}
+
+
+\begin{remark}
+If $R$ is Dedekind, then any nonzero prime ideal has height one: any prime
+properly contained in it must be zero, since every nonzero prime is maximal.
+
+If $R$ is Dedekind, then $R$ is locally factorial. In fact, the localization of
+$R$ at a nonzero prime $\mathfrak{p}$ is a DVR.
+\begin{proof}
+$R_{\mathfrak{p}}$ has precisely two prime ideals: $(0)$ and
+$\mathfrak{p}R_{\mathfrak{p}}$. As a localization of an integrally closed
+domain, it is integrally closed. So $R_{\mathfrak{p}}$ is a DVR by the above
+result (hence
+factorial).
+\end{proof}
+\end{remark}
+
+
+Assume $R$ is Dedekind now.
+We have an exact sequence
+\[ 0 \to R^* \to K(R)^* \to \cart(R) \to \pic(R) \to 0. \]
+Here $\cart(R) \simeq \weil(R)$. But $\weil(R)$ is free on the nonzero
+primes, or equivalently maximal ideals, $R$ being Dedekind.
+In fact, however, $\cart(R)$ has a simpler description.
+
+\begin{proposition}
+Suppose $R$ is Dedekind. Then $\cart(R)$ consists of all nonzero finitely generated
+submodules of $K(R)$ (i.e. \textbf{fractional ideals}).
+\end{proposition}
+
+This is the same thing as saying that every nonzero finitely generated submodule of $K(R)$ is
+invertible.
+\begin{proof}
+Suppose $M \subset K(R)$ is nonzero and finitely generated. It suffices to check that $M$ is
+invertible after localizing at every prime, i.e. that $M_{\mathfrak{p}}$ is
+an invertible (equivalently, trivial) $R_{\mathfrak{p}}$-module. At the
+zero prime, there is nothing to check. We might as well assume that
+$\mathfrak{p}$ is maximal. Then $R_{\mathfrak{p}}$ is a DVR and
+$M_{\mathfrak{p}}$ is a finitely generated submodule of $K(R_{\mathfrak{p}}) = K(R)$.
+
+Let $S$ be the set of integers $n$ such that there exists $ x \in
+M_{\mathfrak{p}}$ with $v(x) = n$, for $v$ the valuation of $R_{\mathfrak{p}}$.
+By finite generation of $M$, $S$ is bounded below. Thus $S$ has a least element
+$k$. There is an element of $M_{\mathfrak{p}}$, call it $x$, with valuation $k$.
+
+It is easy to check that $M_{\mathfrak{p}}$ is generated by $x$, and is in fact free with
+generator $x$. The reason is simply that $x$ has the smallest valuation of
+anything in $M_{\mathfrak{p}}$.
+\end{proof}
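+
+The ``least valuation'' argument is easy to see in a concrete case:
+\begin{example}
+Take $R = \mathbb{Z}_{(p)}$ and let $M \subset \mathbb{Q}$ be the
+$R$-submodule generated by $1/p^2$ and $3/p$. The generators have valuations
+$-2$ and $-1$, so the least valuation occurring in $M$ is $-2$, and $M$ is
+free on $1/p^2$: indeed $3/p = (3p)\cdot(1/p^2) \in (1/p^2)R$.
+\end{example}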
+
+What's the upshot of this?
+
+\begin{theorem}
+If $R$ is a Dedekind ring, then any nonzero ideal $I \subset R$ is invertible,
+and therefore uniquely described as a product of powers of (nonzero) prime ideals, $I =
+\prod \mathfrak{p}_i^{n_i}$.
+\end{theorem}
+\begin{proof}
+This is simply because $I$ is in $\cart(R) = \weil(R)$ by the above result.
+\end{proof}
+
+This is Dedekind's generalization of unique factorization.
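+
+The classical example of how unique factorization of ideals repairs the
+failure of unique factorization of elements is the following standard
+computation (not carried out elsewhere in this chapter):
+\begin{example}
+Let $R = \mathbb{Z}[\sqrt{-5}]$, the ring of integers in
+$\mathbb{Q}(\sqrt{-5})$. Here
+\[ 6 = 2 \cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5}) \]
+are two genuinely different factorizations into irreducibles, so $R$ is not
+factorial. However, setting $\mathfrak{p} = (2, 1+\sqrt{-5})$, $\mathfrak{q}
+= (3, 1+\sqrt{-5})$, and $\overline{\mathfrak{q}} = (3, 1-\sqrt{-5})$, one
+checks directly that
+\[ (2) = \mathfrak{p}^2, \quad (3) = \mathfrak{q}\overline{\mathfrak{q}},
+\quad (1+\sqrt{-5}) = \mathfrak{p}\mathfrak{q}, \quad (1-\sqrt{-5}) =
+\mathfrak{p}\overline{\mathfrak{q}}, \]
+so both factorizations of $6$ refine to the single prime factorization $(6) =
+\mathfrak{p}^2 \mathfrak{q} \overline{\mathfrak{q}}$.
+\end{example}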
+
+We now give the standard examples:
+\begin{example}
+\begin{enumerate}
+\item Any PID (in particular, any DVR) is Dedekind.
+\item If $K$ is a finite extension of $\mathbb{Q}$ and $R$ is the
+integral closure of $\mathbb{Z}$ in $K$, then $R$ is a Dedekind ring. That is, the ring
+of integers in any number field is a Dedekind ring.
+\item If $R$ is the coordinate ring of an algebraic variety which is smooth and
+irreducible of dimension one, then $R$ is Dedekind.
+\item Let $X$ be a compact Riemann surface, and let $S \subset X$ be a
+nonempty finite subset. Then the ring of meromorphic functions on $X$ with
+poles only in $S$ is
+Dedekind. The maximal ideals in this ring are precisely those corresponding to
+points of $X-S$.
+\end{enumerate}
+\end{example}
+
+
+
+\subsection{A more elementary approach}
+
+We would now like to give a more elementary approach to the unique
+factorization of ideals in Dedekind domains, one which does not use the heavy
+machinery of Weil and Cartier divisors.
+
+In particular, we can encapsulate what has already been proved as:
+\begin{theorem} Let $A$ be a Dedekind domain with quotient field $K$. Then there is a bijection between the discrete valuations of $K$ that assign nonnegative orders to elements of $A$ and the nonzero prime ideals of $A$.
+\end{theorem}
+\begin{proof} Indeed, every valuation gives a prime ideal of elements of positive order; every prime ideal $\mathfrak{p}$ gives a discrete valuation on $A_{\mathfrak{p}}$, hence on $K$. \end{proof}
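+
+To illustrate the bijection:
+\begin{example}
+For $A = \mathbb{Z}$, $K = \mathbb{Q}$, the bijection matches the prime
+$(p)$ with the $p$-adic valuation $v_p$; up to equivalence, these are all the
+discrete valuations of $\mathbb{Q}$ that are nonnegative on $\mathbb{Z}$.
+\end{example}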
+
+
+This result, however trivial to prove, is the main reason we can work essentially interchangeably with prime ideals in Dedekind domains and discrete valuations.
+
+Now assume $A$ is Dedekind. A finitely generated $A$-submodule of the quotient field $F$ is called a \textbf{fractional ideal}; by multiplying by some element of $A$, we can always pull a fractional ideal into $A$, when it becomes an ordinary ideal. The sum and product of two fractional ideals are fractional ideals.
+
+\begin{theorem}[Invertibility] If $I$ is a nonzero fractional ideal and $ I^{-1} := \{ x \in F: xI \subset A \}$, then $I^{-1}$ is a fractional ideal and $I I^{-1} = A$.
+\end{theorem}
+
+Thus, the nonzero fractional ideals form an \emph{abelian group} under multiplication.
+
+\begin{proof}
+To see this, note that invertibility is preserved under localization: for a multiplicative set $S$, we have $S^{-1} ( I^{-1} ) = (S^{-1} I)^{-1}$, where the second ideal inverse is with respect to $S^{-1}A$; this follows from the fact that $I$ is finitely generated. Note also that invertibility is true for discrete valuation rings: this is because the only ideals are principal, and principal ideals (in any integral domain) are obviously invertible.
+
+So for all primes $\mathfrak{p}$, we have $(I I^{-1})_{\mathfrak{p}} = A_{\mathfrak{p}}$, which means the inclusion of $A$-modules $I I^{-1} \to A$ is an isomorphism at each localization. Therefore it is an isomorphism, by general algebra.
+\end{proof}
+
+The next result says we have unique factorization of \textbf{ideals}:
+\begin{theorem}[Factorization] Each nonzero ideal $I \subset A$ can be written uniquely as a product of powers of nonzero prime ideals.
+\end{theorem}
+\begin{proof}
+Let us use a noetherian induction to obtain existence of a prime factorization. If some nonzero ideal admits no such factorization, then, since $A$ is noetherian, there is an ideal $I$ maximal among these. Clearly $I$ is proper and not prime (a prime is its own factorization), so $I$ is contained in some nonzero prime $\mathfrak{p}$. Now $I = (I\mathfrak{p}^{-1})\mathfrak{p}$, and since $I \subset \mathfrak{p}$, the fractional ideal $I\mathfrak{p}^{-1} \subset \mathfrak{p}\mathfrak{p}^{-1} = A$ is an ideal; it strictly contains $I$, for if $I\mathfrak{p}^{-1} = I$, cancelling the invertible ideal $I$ would give $\mathfrak{p}^{-1} = A$ and hence $\mathfrak{p} = A$. By maximality, $I\mathfrak{p}^{-1}$ can be written as a product of primes; hence so can $I$, a contradiction.
+
+Uniqueness of factorization follows by localizing at each prime.
+\end{proof}
+
+\begin{definition} Let $P$ be the subgroup of nonzero principal fractional ideals in the group $I$ of nonzero fractional ideals. The quotient $I/P$ is called the \textbf{ideal class group}.
+\end{definition}
+
+The ideal class group of the integers (or, more generally, of any principal ideal domain) is clearly trivial. In general, this is not the case, because Dedekind domains do not generally admit unique factorization.
+\begin{proposition} Let $A$ be a Dedekind domain. Then $A$ is a UFD if and only if its ideal class group is trivial.
+\end{proposition}
+\begin{proof} If the ideal class group is trivial, then $A$ is a principal ideal domain, hence a UFD by elementary algebra. Conversely, suppose $A$ admits unique factorization.
+Then, by the following lemma, every prime ideal is principal. Hence every ideal is principal, in view of the unique factorization of ideals.
+\end{proof}
+\begin{lemma} Let $R$ be a UFD, and let $\mathfrak{p}$ be a prime ideal which contains no proper prime sub-ideal except for $0$. Then $\mathfrak{p}$ is principal.
+\end{lemma}
+The converse holds as well; a domain is a UFD if and only if every prime ideal
+of height one is principal, by \rref{heightonefactoriality}.
+\begin{proof}
+First, $\mathfrak{p}$ contains an element $x \neq 0$, which we factor into irreducibles $\pi_1 \dots \pi_k$. One of these, say $\pi_j$, belongs to $\mathfrak{p}$, so $\mathfrak{p} \supset (\pi_j)$. Since $\mathfrak{p}$ is minimal among nonzero prime ideals, we have $\mathfrak{p} = (\pi_j)$. (Note that $(\pi_j)$ is prime by unique factorization.)
+\end{proof}
+
+\begin{exercise}
+This exercise is from \cite{Li02}. If $A$ is the integral closure of
+$\mathbb{Z}$ in a number field (so that $A$ is a Dedekind domain), then it is
+known (cf. \cite{La94} for a proof) that the ideal class group of $A$ is
+\emph{finite}. From this, show that every open subset of $\spec A$ is a
+principal open set $D(f)$. Scheme-theoretically, this means that every open
+subscheme of $\spec A$ is affine (which is not true for general rings).
+\end{exercise}
+
+\subsection{Modules over Dedekind domains}
+
+Let us now consider some properties of Dedekind domains.
+
+\begin{proposition} Let $A$ be a Dedekind domain, and let $M$ be a finitely generated $A$-module. Then
+$M$ is projective (or equivalently flat, or locally free) if and only if it is torsion-free.
+\label{Dedekind means projective=tors free}
+\end{proposition}
+\begin{proof}
+If $M$ is projective, then it is a direct summand of a free module, so it is torsion-free. So we need to show that if $M$ is torsion-free, then it is projective. Recall that to show $M$ is projective, it suffices to show that $M_\mathfrak{p}$ is projective for every prime $\mathfrak{p} \subset A$. But note that $A_\mathfrak{p}$ is a PID, so a module over it is torsion-free if and only if it is flat, by Lemma \ref{PID means flat=tors free}. Moreover, $A_\mathfrak{p}$ is a local noetherian ring, so a finitely generated module over it is flat if and only if it is projective. Thus $M_\mathfrak{p}$ is projective if and only if it is torsion-free, and it now suffices to show that $M_{\mathfrak{p}}$ is torsion-free.
+
+However, for any multiplicative set $S \subset A$, if $M$ is torsion-free then $M_S$ is also torsion-free. This is because if
+\[\frac{a}{s'} \cdot \frac{m}{s}=0\]
+with $a/s' \neq 0$, then there is $t \in S$ such that $tam=0$; as $ta \neq 0$ and $M$ is torsion-free, $m = 0$ and hence $m/s = 0$,
+as desired.
+\end{proof}
+
+\begin{proposition}
+Let $A$ be a Dedekind domain. Then any finitely generated module $M$ over it has (not canonically) a decomposition $M=M^{tors} \oplus M^{tors-free}$.
+\end{proposition}
+\begin{proof}
+Note that by Lemma \ref{tors tors-free ses}, we have a short exact sequence
+\[ 0 \to M^{tors} \to M \to M^{tors-free} \to 0,\]
+and by Proposition \ref{Dedekind means projective=tors free} the torsion-free part is projective, so the sequence splits and $M \simeq M^{tors} \oplus M^{tors-free}$, not necessarily canonically, as desired.
+\end{proof}
+
+Note that we may give further information about the torsion part of the module:
+\[M^{tors}=\bigoplus_{\mathfrak{p}} M_{\mathfrak{p}}^{tors}.\]
+First note that there is a map
+\[M^{tors} \to \bigoplus_{\mathfrak{p}} M_{\mathfrak{p}}^{tors}:\]
+because $M^{tors}$ is torsion, every element is supported at finitely many primes, so its image in $M^{tors}_\mathfrak{p}$ is nonzero for only finitely many $\mathfrak{p}$.
+It is an isomorphism, because it is an isomorphism after every localization.
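+
+Both decompositions are visible in the simplest examples:
+\begin{example}
+For $A = \mathbb{Z}$ and $M = \mathbb{Z} \oplus \mathbb{Z}/6$, the torsion
+part is $M^{tors} = \mathbb{Z}/6$ and the torsion-free quotient is
+$\mathbb{Z}$. Localizing the torsion part at the primes $(2)$ and $(3)$
+gives $\mathbb{Z}/6 \simeq \mathbb{Z}/2 \oplus \mathbb{Z}/3$, an instance of
+the direct sum decomposition above; at all other primes the localization of
+$M^{tors}$ vanishes.
+\end{example}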
+
+So we have pretty much specified what the torsion part is. We can in fact also describe the torsion-free part; in particular, we have
+\[M^{tors-free} \simeq \bigoplus \mathcal{L}\]
+where the $\mathcal{L}$ are locally free modules of rank 1.
+This is because the torsion-free part is projective, as shown above; one may then split off locally free summands of rank one, one at a time, by induction on the rank (compare the Steinitz isomorphism theorem in the list below).
+
+
+
+\begin{lemma} Let $A$ be a Dedekind domain and $I \subset A$ a nonzero ideal. Then $I$ is a locally free module of rank 1.
+\end{lemma}
+\begin{proof}
+First note that $I$ is torsion-free and therefore projective by \ref{Dedekind means projective=tors free}, and it is also finitely generated, because $A$ is Noetherian. But for a finitely generated module over a Noetherian ring, we know that it is projective if and only if it is locally free, so we have shown that it is locally free.
+
+Also recall that for a locally free module the rank is well defined,
+i.e., any localization which makes it free makes it free of the same rank. So
+to compute the rank, it suffices to show that tensoring with the field of
+fractions $K$ gives a free module of rank 1. But $K$, being a localization of $A$, is flat over $A$, so we have a short exact sequence
+\[0 \to I \otimes_A K \to A \otimes_A K \to (A/I) \otimes_A K \to 0\]
+
+However, note that $\supp(A/I)=V(\Ann(A/I))=V(I)$, and the prime $(0)$ is not
+in $V(I)$ (as $I \neq 0$), so $(A/I) \otimes_{A} K$, which is the localization of $A/I$ at $(0)$, vanishes. Thus we have
+\[I\otimes_A K \simeq A\otimes_A K,\]
+which is one-dimensional as a $K$-vector space, so the rank is 1, as desired.
+\end{proof}
+
+We close by listing a collection of useful facts about Dedekind domains.
+ \underline{A dozen things every Good Algebraist should know about Dedekind domains}. $R$
+ is a Dedekind domain.
+ \begin{enumerate}
+ \item $R$ is local $\Longleftrightarrow$ $R$ is a field or a DVR.
+ \item $R$ semi-local $\Longrightarrow$ it is a PID.
+ \item $R$ is a PID $\Longleftrightarrow$ it is a UFD $\Longleftrightarrow$ $C(R)=\{1\}$
+ \item $R$ is the full ring of integers of a number field $K$ $\Longrightarrow$ $|C(R)|<
+ \infty$, and this number is the \emph{class number} of $K$.
+ \item $C(R)$ can be any abelian group. This is Claborn's theorem.
+ \item For any non-zero prime $\mathfrak{p}\in \spec R$, $\mathfrak{p}^n/\mathfrak{p}^{n+1}\cong R/\mathfrak{p}$ as an
+ $R$-module.
+ \item ``To contain is to divide'', i.e.~if $A,B\subset R$, then $A\subset B$
+ $\Longleftrightarrow$ $A=BC$ for some $C\subset R$.
+ \item (Generation of ideals) Every non-zero ideal $B\subset R$ is generated by two elements.
+ Moreover, one of the generators can be taken to be any non-zero element of $B$.
+ \item (Factor rings) If $A\subset R$ is non-zero, then $R/A$ is a PIR
+ (\textbf{principal ideal ring}).
+ \item (Steinitz Isomorphism Theorem) If $A,B\subset R$ are non-zero ideals, then $A\oplus
+ B\cong {}_RR\oplus AB$ as $R$-modules.
+ \item If $M$ is a finitely generated torsion-free $R$-module of rank $n$,\footnote{The
+ rank is defined as $rk(M)=\dim_{K(R)} M\otimes_R K(R)$ where $K(R)$ is the
+ quotient field.} then it is of the
+ form $M\cong R^{n-1}\oplus A$, where $A$ is a non-zero ideal, determined up to
+ isomorphism.
+ \item If $M$ is a finitely generated torsion $R$-module, then $M$ is uniquely of the
+ form $M\cong R/A_1\oplus
+ \cdots \oplus R/A_n$ with $A_1\subsetneq A_2\subsetneq \cdots \subsetneq
+ A_n\subsetneq R$.
+ \end{enumerate}
+
+\add{eventually, proofs of these should be added}
+
+\section{Extensions}
+
+
+In this section, we will essentially consider the following question: if $A$
+is a Dedekind domain, and $L$ a finite extension of the quotient field of $A$,
+is the integral closure of $A$ in $L$ a Dedekind domain? The general answer
+turns out to be yes, but the result is somewhat simpler for the case of a
+separable extension, with which we begin.
+
+\subsection{Integral closure in a finite separable extension}
+
+One of the reasons Dedekind domains are so important is
+\begin{theorem} \label{intclosdedekind} Let $A$ be a Dedekind domain with quotient field $K$, $L$ a finite separable extension of $K$, and $B$ the integral closure of $A$ in $L$. Then $B$ is Dedekind.
+\end{theorem}
+
+This can be generalized to the Krull-Akizuki theorem below (\cref{}).
+
+First let us give an alternate definition of ``separable''.
+For a finite field extension $k'$ of $k$, we may consider the bilinear pairing
+$k' \times k' \to k$ given by $(x,y) \mapsto \Tr_{k'/k}(xy)$. That is, multiplication by $xy \in k'$ is a $k$-linear map of finite-dimensional vector spaces $k' \to k'$, and we are considering the trace of this map. Then we claim that $k'$ is separable over $k$ if and only if this bilinear pairing $k' \times k' \to k$ is non-degenerate.
+
+To show the above claim, first note that the pairing is non-degenerate if and
+only if it is non-degenerate after tensoring with the algebraic closure. This
+is because if $\Tr(xy)=0$ for all $y \in k'$, then $\Tr((x\otimes
+1_{\overline{k}})y)=0$ for all $y \in k' \otimes_k \overline{k}$, which we may see to be true by decomposing into pure tensors. The other direction is obtained by selecting a basis of $\overline{k}$ over $k$, and then noting that for $y_i$ basis elements, if $\Tr(\sum x y_i)=0$ then $\Tr(xy_i)=0$ for each $i$.
+
+So now we just need to show that $X=k' \otimes_k \overline{k}$ is reduced if
+and only if the map $X \otimes_{\overline{k}} X \to \overline{k}$ given by $a \otimes b \mapsto \Tr(ab)$ is non-degenerate. To do this, we show that the elements of the kernel of the bilinear map are exactly the nilpotents. Note that $X$ is a finite-dimensional algebra over $\overline{k}$, and we may regard its elements as matrices. Now $\Tr(AB)=0$ for all $B$ if and only if $\Tr((PAP^{-1})(PBP^{-1}))=0$ for all $B$, so we may assume $A$ is in upper triangular form. From this, the claim becomes clear.
+
+
+\begin{proof} We need to check that $B$ is noetherian, integrally closed, and of dimension 1.
+
+\begin{itemize}
+\item Noetherian. Indeed, $B$ is a finitely generated $A$-module, which
+obviously implies noetherianness. To see this, note that the $K$-bilinear map
+$(\cdot,\cdot): L \times L \to K$, $(a,b) \mapsto \mathrm{Tr}(ab)$, is nondegenerate since
+$L$ is separable over $K$ (\rref{}). Let $F \subset B$ be a free $A$-module spanned by a $K$-basis for $L$ contained in $B$. Then, since traces preserve integrality and $A$ is integrally closed, we have $B \subset F^*$, where $F^* := \{ x \in L: (x,F) \subset A \}$. Now $F^*$ is $A$-free on the dual basis of $F$, so $B$ is a submodule of a finitely generated $A$-module, hence itself a finitely generated $A$-module.
+\item Integrally closed. $B$ is the integral closure of $A$ in $L$, so it
+is integrally closed (integrality being transitive).
+\item Dimension 1. Indeed, if $A \subset B$ is an integral extension of
+domains, then $\dim A = \dim B$. This follows essentially from the theorems of
+``lying over'' and ``going up.'' Cf. \cite{Ei95}.
+\end{itemize}
+
+Consequently, the ring of algebraic integers (the elements integral over $\mathbb{Z}$) in a number field (a finite extension of $\mathbb{Q}$) is Dedekind.
+\end{proof}
+
+
+
+Note that the argument about traces in the above proof actually established the following useful fact:
+\begin{proposition}\label{intclosurefgen} Let $A$ be a noetherian integrally closed domain with quotient field $K$. Let $L$ be a finite separable extension and $B$ the ring of integers. Then $B$ is a finitely generated $A$-module.
+\end{proposition}
+
+We shall give another, more explicit proof of \rref{intclosurefgen} whose technique will be useful in the sequel.
+Let $\alpha \in B$ be a generator of $L/K$. Let $n=[L:K]$ and $\sigma_1, \dots, \sigma_n$ the distinct embeddings of $L$ into the algebraic closure of $K$.
+Define the \textbf{discriminant} of $\alpha$ to be
+\[ D(\alpha) = \left(\det \begin{bmatrix}
+1 & \sigma_1\alpha & (\sigma_1 \alpha)^2 & \dots \\
+1 & \sigma_2\alpha & (\sigma_2 \alpha)^2 & \dots \\
+\vdots & \vdots & \vdots & \ddots \end{bmatrix}\right)^2 .\]
+This maps to the same element under each $\sigma_i$, so it lies in $K$ (and even
+in $A$, by integrality); it is nonzero by basic facts about Vandermonde
+determinants, since the $\sigma_i$ map $\alpha$ to distinct elements. The next lemma clearly implies that $B$ is contained in a finitely generated $A$-module, hence is finitely generated (since $A$ is noetherian).
+\begin{lemma} We have $B \subset D(\alpha)^{-1} A[\alpha]$.
+\end{lemma}
+\begin{proof}
+Indeed, suppose $x \in B$. We can write $x = c_0 + c_1 \alpha + \dots + c_{n-1}\alpha^{n-1}$ where each $c_i \in K$. We will show that in fact each $c_i \in D(\alpha)^{-1}A$, which will prove the lemma. Applying the embeddings, we have for each $i$, $\sigma_i x = c_0 + c_1 (\sigma_i \alpha) + \dots + c_{n-1} (\sigma_i \alpha)^{n-1}$.
+Now by Cramer's rule, each $c_i$ is a quotient of determinants of matrices involving the $\sigma_j x$ and the $(\sigma_j \alpha)^{k}$; the denominator is the Vandermonde determinant, whose square is $D(\alpha)$. Multiplying numerator and denominator by that determinant, we may write $c_i = d_i/D(\alpha)$, where $d_i$ is integral over $A$ and lies in $K$, hence lies in $A$. This proves the claim and the lemma.
+\end{proof}
+
+The above technique may be illustrated with an example.
+\begin{example} Let $p^i$ be a power of a prime $p$ and consider the extension
+$\mathbb{Q}(\zeta_{p^i})/\mathbb{Q}$ for $\zeta_{p^i}$ a primitive $p^i$-th
+root of unity. This is a special case of a cyclotomic extension, an important
+example in the subject. We claim that the ring of integers (that is, the
+integral closure of $\mathbb{Z}$)
+in $\mathbb{Q}(\zeta_{p^i})$ is precisely $\mathbb{Z}[\zeta_{p^i}]$. This is true in fact for all cyclotomic extensions, but we will not be able to prove it here.
+
First of all, $\zeta_{p^i}$ satisfies the equation $X^{p^{i-1}(p-1)} +
X^{p^{i-1}(p-2)} + \dots + 1 = 0$: this polynomial is $\Phi_p(X^{p^{i-1}})$
for $\Phi_p(X) = X^{p-1} + \dots + 1$, and $\zeta_{p^i}^{p^{i-1}}$ is a
primitive $p$-th root of unity $\zeta_p$, which satisfies $\Phi_p(\zeta_p) = 0$
because $(\zeta_p-1)\Phi_p(\zeta_p) = \zeta_p^p - 1 = 0$ and $\zeta_p \neq 1$.
In particular, $X - \zeta_{p^i} \mid X^{p^{i-1}(p-1)} +
X^{p^{i-1}(p-2)} + \dots + 1 $, and consequently (taking $X=1$, where this
polynomial evaluates to $p$) we find that
$1 - \zeta_{p^i}$ divides $p$ in the ring of integers of
$\mathbb{Q}(\zeta_{p^i})$. This is true for \emph{any} primitive
$p^i$-th root of unity for \emph{any} $p^i$. Thus the norm to $\mathbb{Q}$ of $1 - \zeta_{p^i}^j$, for any $j$ with $\zeta_{p^i}^j \neq 1$, is up to sign a power of $p$.
+
I claim that this implies that the discriminant $D(\zeta_{p^i})$ is a power of
$p$, up to sign. Indeed, by the Vandermonde formula, this discriminant is a
product of terms of the form $1 - \zeta_{p^i}^{j}$, up to roots of
unity. The norm to $\mathbb{Q}$ of each factor is thus a power of $p$ up to sign, and the discriminant itself is plus or minus a power of $p$.
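To make this concrete in the simplest case $i = 1$, $p$ an odd prime: here $X^{p-1} + \dots + 1 = \prod_{j=1}^{p-1}(X - \zeta_p^j)$, so setting $X = 1$ gives
\[ \prod_{j=1}^{p-1} (1 - \zeta_p^j) = p, \]
i.e.\ the norm of $1 - \zeta_p$ is exactly $p$. A direct computation with the Vandermonde product then yields the classical formula $D(\zeta_p) = (-1)^{(p-1)/2} p^{p-2}$; for example, $D(\zeta_5) = 5^3 = 125$.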
+
+By the lemma, it follows that the ring of integers is contained in
+$\mathbb{Z}[p^{-1}, \zeta_{p^i}]$. To get down further to
+$\mathbb{Z}[\zeta_{p^i}]$ requires a bit more work. \add{this proof}
+\end{example}
+
+
+\subsection{The Krull-Akizuki theorem}
+
+We are now going to prove a general theorem that will allow us to remove the
+separability hypothesis in \cref{}. Let us say that a noetherian domain has
+\textbf{dimension at most one} if every nonzero prime ideal is maximal; we shall later
+generalize this notion of ``dimension.''
+
+\begin{theorem}[Krull-Akizuki] Suppose $A$ is a noetherian domain of dimension
+at most one. Let $L$ be a finite extension of the quotient field $K(A)$, and suppose
+$B \subset L$ is a domain containing $A$. Then $B$ is noetherian of dimension
+at most one.
+\end{theorem}
+
+
+From this, it is clear:
+
+\begin{theorem}
+The integral closure of a Dedekind domain in any finite extension of the
+quotient field is a Dedekind domain.
+\end{theorem}
+\begin{proof}
+Indeed, by Krull-Akizuki, this integral closure is noetherian and of dimension
+$\leq 1$; it is obviously integrally closed as well, hence a Dedekind domain.
+\end{proof}
+
+Now let us prove Krull-Akizuki.
+\add{we need to introduce material about length}
+\begin{proof}
+We are going to show that for any $a \in A - \left\{0\right\}$, the $A$-module
+$B/a B$ has finite length. (This is quite nontrivial, since $B$ need not even
+be finitely \emph{generated} as an $A$-module.)
+From this it will be relatively easy to deduce the result.
+
Indeed, if $I \subset B$ is any nonzero ideal, then $I$ contains a nonzero element of $A$; to
see this, we need only choose a nonzero element $b \in I$ and consider an irreducible polynomial
\[ a_0 X^n + \dots + a_n \in K[X] \]
that it satisfies. We can assume that all the $a_i \in A$ by clearing
denominators. Then $a_n = -(a_0 b^n + \dots + a_{n-1}b) \in I$, and $a_n \neq 0$ by irreducibility; thus $a_n \in A \cap I$.
So choose some $a \in (A \cap I) - \left\{0\right\}$.
+We then know by the previous paragraph (though we have not proved it yet) that
+$B/aB$ has finite length as an $A$-module (and a fortiori as a $B$-module); in
+particular, the submodule $I/aB$ is finitely generated as a
+$B$-module. The
+exact sequence
+\[ 0 \to a B \to I \to I/aB \to 0 \]
+shows that $I$ must be finitely generated as a $B$-module, since the two outer
+terms are.
+Thus any ideal of $B$ is finitely generated, so $B$ is noetherian.
+
+\add{$B$ has dimension at most one}
+
+To prove the Krull-Akizuki theorem, we are going to prove:
+\begin{lemma}[Finite length lemma]
+If $A$ is a noetherian domain of dimension at most one, then for any
+torsion-free
+$A$-module $M$ such that $K(A) \otimes_A M$ is finite-dimensional
+(alternatively: $M$ has finite rank) and $a \neq 0$, $M/aM$ has finite length.
+\end{lemma}
+\begin{proof}
We are going to prove something stronger: if $M$ is torsion-free of rank $n$, then we will show that
+\begin{equation} \label{boundka} \ell(M/aM) \leq n \ell(A/aA). \end{equation}
+Note that $A/aA$ has finite length. This follows because there is a filtration of
+$A/aA$ whose quotients are of the form $A/\mathfrak{p}$ for $\mathfrak{p}$
+prime; but these $\mathfrak{p}$ cannot be zero as $A/aA$ is torsion. So these
+primes are maximal, and $A/aA$ has a filtration whose quotients are
\emph{simple}. Thus $\ell(A/aA) < \infty$. In fact, we thus see that \emph{any
finitely generated torsion module has finite length}; this will be used in
the sequel.
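As a sanity check with the simplest data: take $A = \mathbb{Z}$ (a noetherian domain of dimension one) and $a = 12$. The filtration
\[ \mathbb{Z}/12 \supset 2\mathbb{Z}/12 \supset 4\mathbb{Z}/12 \supset 0 \]
has successive quotients $\mathbb{Z}/2, \mathbb{Z}/2, \mathbb{Z}/3$, all simple, so $\ell(\mathbb{Z}/12) = 3$: the length of $A/aA$ is the number of prime factors of $a$ counted with multiplicity.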
+
+
+There are two cases:
+\begin{enumerate}
+\item $M$ is finitely generated.
We can choose elements $m_1, \dots, m_n \in M$ whose images form a basis of $K(A) \otimes_A M$; from these we
get a map
\[ A^n \to M \]
which becomes an isomorphism after localizing at $A - \left\{0\right\} $. In
particular, the kernel and cokernel are torsion modules. The kernel, being a
torsion submodule of the torsion-free module $A^n$, must be trivial, and $A^n \to M$ is thus injective.
+Thus we have found a finite free submodule $F \subset M$ such that $M/F$ is a
+torsion module $T$, which is also finitely generated.
+
+
+
+We have an exact sequence
+\[ 0 \to F/(aM \cap F) \to M/aM \to T/aT \to 0. \]
+Here the former has length at most $\ell(F/aF) = n \ell(A/aA)$, and we get
+the bound $\ell(M/aM) \leq n \ell(A/aA) + \ell(T/aT)$. However, we
+have the annoying last term to contend with, which makes things somewhat
+inconvenient. Thus, we use a trick: for each $t > 0$, we consider the exact
+sequence
+\[ 0 \to F/(a^tM \cap F) \to M/a^tM \to T/a^tT \to 0. \]
+This gives
+\[ \ell(M/a^t M) \leq t n \ell(A/aA) + \ell(T/a^t T) \leq t n \ell(A/aA) +
+\ell(T). \]
+However, $\ell(T) < \infty$ as $T$ is torsion (cf. the first paragraph).
+If we divide by $t$, we get the inequality
+\begin{equation} \label{almostboundka} \frac{1}{t} \ell(M/a^t M) \leq n
+\ell(A/aA) + \frac{\ell(T)}{t}. \end{equation}
However, the filtration $a^t M \subset a^{t-1}M \subset \dots \subset a M
\subset M$, whose successive quotients are all isomorphic to $M/aM$ ($M$ being
torsion-free, multiplication by $a^k$ induces an isomorphism $M/aM \simeq a^k M/a^{k+1}M$), shows that $\ell(M/a^t M ) = t \ell(M/aM)$.
In particular, letting $t \to \infty$ in \eqref{almostboundka} gives
\eqref{boundka} in the case where $M$ is finitely generated.
+\item $M$ is not finitely generated. Now we can use a rather cheeky argument.
+$M$ is the inductive limit of its finitely generated submodules $M_F \subset
+M$, each of which
is itself torsion-free and of rank at most $n$. Thus $M/aM $ is the inductive limit
of its submodules $M_F/(aM \cap M_F)$ as $M_F$ ranges over the finitely
generated submodules of $M$.
We know that $\ell(M_F/(aM \cap M_F)) \leq n \ell(A/aA)$ for each finitely
generated $M_F \subset M$ by the first case above (and the fact that
$M_F/(aM \cap M_F)$ is a quotient of $M_F/aM_F$).
+
+But if $M/aM$ is the inductive limit of \emph{submodules} of length at most $n
+\ell(A/aA)$, then it itself can have length at most $n \ell(A/aA)$. For $M/aM$
+must be in fact equal to the submodule $M_F/(aM \cap M_F)$ that has the
+largest length (no other submodule $M_{F'}/(aM \cap M_{F'})$ can properly
+contain this).
+\end{enumerate}
+
+
+\end{proof}
+
+With this lemma proved, it is now clear that Krull-Akizuki is proved as well.
+
+\end{proof}
+
+\subsection{Extensions of discrete valuations}
+
+As a result, we find:
+\begin{theorem}
+Let $K$ be a field, $L$ a finite separable extension. Then a discrete valuation on $K$ can be extended to one on $L$.
+\end{theorem}
+\add{This should be clarified --- what is a discrete valuation?}
+\begin{proof}
+Indeed, let $R \subset K$ be the ring of integers of the valuation, that is
+the subset of elements of nonnegative valuation. Then $R$ is a DVR, hence Dedekind, so the integral closure $S \subset L$ is Dedekind too (though in general it is not a DVR---it may have several non-zero prime ideals) by \rref{intclosdedekind}. Now as above, $S$ is a finitely generated $R$-module, so if $\mathfrak{m} \subset R$ is the maximal ideal, then
+\[ \mathfrak{m} S \neq S \]
+by Nakayama's lemma (cf. for instance \cite{Ei95}). So $\mathfrak{m} S$ is contained in a maximal ideal $\mathfrak{M}$ of $S$ with, therefore, $\mathfrak{M} \cap R = \mathfrak{m}$. (This is indeed the basic argument behind lying over, which I could have just invoked.) Now $S_{\mathfrak{M}} \supset R_{\mathfrak{m}}$ is a DVR as it is the localization of a Dedekind domain at a prime ideal, and one can appeal to \rref{niupmeansdvr}. So there is a discrete valuation on $S_{\mathfrak{M}}$. Restricted to $R$, it will be a power of the given $R$-valuation, because its value on a uniformizer $\pi$ is $<1$. However, a power of a discrete valuation is a discrete valuation too. So we can adjust the discrete valuation on $S_{\mathfrak{M}}$ if necessary to make it an extension.
+
+This completes the proof.
+\end{proof}
+
+Note that there is a one-to-one correspondence between extensions of the valuation on $K$ and primes of $S$ lying above $\mathfrak{m}$. Indeed, the above proof indicated a way of getting valuations on $L$ from primes of $S$. For an extension of the valuation on $K$ to $L$, let $\mathfrak{M} := \{ x \in S: \left| x \right| < 1\}$.
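For instance (standard facts about the Gaussian integers): take the $5$-adic valuation on $\mathbb{Q}$ and $L = \mathbb{Q}(i)$. Since
\[ 5 = (2+i)(2-i) \quad \text{in } \mathbb{Z}[i], \]
there are two primes above $(5)$ and correspondingly two extensions of the $5$-adic valuation to $\mathbb{Q}(i)$, distinguished by whether $\left|2+i\right| < 1$ or $\left|2-i\right| < 1$. By contrast, for a prime $p \equiv 3 \pmod 4$, $p$ remains prime in $\mathbb{Z}[i]$ and the $p$-adic valuation extends uniquely.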
+\section{Action of the Galois group}
+
+ Suppose we have an integral domain (we don't even have to assume it Dedekind) $A$ with quotient field $K$, a finite Galois extension $L/K$, with $B$ the integral closure in $L$. Then the Galois group $G = G(L/K)$ acts on $B$; it preserves $B$ because it preserves equations in $A[X]$.
+In particular, if $\mathfrak{P} \subset B$ is a prime ideal, so is $\sigma \mathfrak{P}$, and the set $\spec B$ of prime ideals in $B$ becomes a $G$-set.
+
+\subsection{The orbits of the Galois group} It is of interest to determine the orbits; this question has a very clean answer.
+
\begin{proposition} The orbits of $G$ on the prime ideals of $B$ are in bijection with the primes of $A$, where a prime ideal $\mathfrak{p} \subset A$ corresponds to the set of primes of $B$ lying over $\mathfrak{p}$.\footnote{It is useful to note here that the lying over theorem works for arbitrary integral extensions.} Alternatively, any two primes $\mathfrak{P}, \mathfrak{Q} \subset B$ lying over the same prime of $A$ are conjugate by some element of $G$.
+\end{proposition}
+
+In other words, under the natural map $\spec B \to \spec A =\spec B^G$, the latter space is the quotient under the action of $G$, while $A=B^G$ is the ring of invariants in $B$.\footnote{The reader who does not know about the $\spec$ of a ring can disregard these remarks.}
+
+\begin{proof}
+We need only prove the second statement.
+Let $S$ be the multiplicative set $A - \mathfrak{p}$. Then $S^{-1}B $ is the integral closure of $S^{-1}A$, and in $S^{-1}A = A_{\mathfrak{p}}$, the ideal $\mathfrak{p}$ is maximal.
+Let $\mathfrak{Q}, \mathfrak{P}$ lie over $\mathfrak{p}$; then $S^{-1}\mathfrak{Q},S^{-1} \mathfrak{P}$ lie over $S^{-1}\mathfrak{p}$ and are maximal (to be added). If we prove that $S^{-1} \mathfrak{Q}, S^{-1} \mathfrak{P}$ are conjugate under the Galois group, then $\mathfrak{Q}, \mathfrak{P}$ must also be conjugate by the properties of localization. \emph{In particular, we can reduce to the case of $\mathfrak{p}, \mathfrak{Q}, \mathfrak{P}$ all maximal.}
+
+The rest of the proof is now an application of the Chinese remainder theorem. Suppose that, for all $\sigma \in G$, we have $\sigma \mathfrak{P} \neq \mathfrak{Q}$. Then the ideals $\sigma \mathfrak{P}, \mathfrak{Q}$ are distinct maximal ideals, so by the remainder theorem, we can find $x \equiv 1 \mod \sigma \mathfrak{P}$ for all $\sigma \in G$ and $x \equiv 0 \mod \mathfrak{Q}$.
Now, consider the norm $N^L_K(x) = \prod_{\sigma \in G} \sigma(x)$; the first condition implies that each $\sigma(x) \equiv 1 \mod \mathfrak{P}$, so the norm is congruent to $1$ modulo $\mathfrak{P}$, hence modulo $\mathfrak{p} = \mathfrak{P} \cap A$. But the second implies that the norm is in $\mathfrak{Q} \cap A = \mathfrak{p}$, contradiction.
+\end{proof}
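As an example: let $A = \mathbb{Z}$, $L = \mathbb{Q}(i)$, $B = \mathbb{Z}[i]$, so that $G$ is generated by complex conjugation. The two primes $(2+i)$ and $(2-i)$ of $\mathbb{Z}[i]$ lying over $(5)$ are indeed conjugate: complex conjugation swaps them, as the proposition predicts.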
+
+\subsection{The decomposition and inertia groups}
+Now, let's zoom in on a given prime $\mathfrak{p} \subset A$. We know that $G$ acts transitively on the set $\mathfrak{P}_1, \dots, \mathfrak{P}_g$ of primes lying above $\mathfrak{p}$; in particular, there are at most $[L:K]$ of them.
+
+\begin{definition} If $\mathfrak{P}$ is any one of the $\mathfrak{P}_i$, then the stabilizer in $G$ of this prime ideal is called the \textbf{decomposition group} $G_{\mathfrak{P}}$. \end{definition}
+We have, clearly, $(G: G_{\mathfrak{P}}) = g$.
+
Now if $ \sigma \in G_{\mathfrak{P}}$, then $\sigma$ acts on the residue field $B/\mathfrak{P}$ while fixing the subfield $A/\mathfrak{p}$. In this way, we get a homomorphism $\sigma \to \overline{\sigma}$ from $G_{\mathfrak{P}}$ into the automorphism group of $B/\mathfrak{P}$ over $A/\mathfrak{p}$ (we don't call it a Galois group because we don't yet know whether the extension is Galois).
+
The following result will be crucial in constructing the so-called ``Frobenius elements'' used throughout class field theory.
+
+\begin{proposition} Suppose $A/\mathfrak{p}$ is perfect. Then $B/\mathfrak{P}$ is Galois over $A/\mathfrak{p}$, and the homomorphism $\sigma \to \overline{\sigma}$ is surjective from $G_{\mathfrak{P}} \to G(B/\mathfrak{P}/A/\mathfrak{p})$.
+\end{proposition}
+\begin{proof}
+In this case, the extension $B/\mathfrak{P}/A/\mathfrak{p}$ is separable, and we can choose $\overline{x} \in B/\mathfrak{P}$ generating it by the primitive element theorem. We will show that $\overline{x}$ satisfies a polynomial equation $\overline{P}(X) \in A/\mathfrak{p}[X]$ all of whose roots lie in $B/\mathfrak{P}$, which will prove that the residue field extension is Galois. Moreover, we will show that all the nonzero roots of $\overline{P}$ in $B/\mathfrak{P}$ are conjugates of $\overline{x}$ under elements of $G_{\mathfrak{P}}$. This latter will imply surjectivity of the homomorphism $\sigma \to \overline{\sigma}$, because it shows that any conjugate of $\overline{x}$ under $G(B/\mathfrak{P}/A/\mathfrak{p})$ is a conjugate under $G_{\mathfrak{P}}$.
+
We now construct the aforementioned polynomial. Let $x \in B$ lift $\overline{x}$. Choose $y \in B$ such that $y \equiv x \mod \mathfrak{P}$ but $y \equiv 0 \mod \mathfrak{Q}$ for the other primes $\mathfrak{Q}$ lying over $\mathfrak{p}$. We take $P(X) = \prod_{\sigma \in G} (X - \sigma(y)) \in A[X]$. Then the reduction $\overline{P}$ satisfies $\overline{P}(\overline{x})= \overline{P}(\overline{y}) = 0$, and $\overline{P}$ factors completely (as $\prod_{\sigma} (X - \overline{\sigma(y)})$) in $B/\mathfrak{P}[X]$. This implies that the residue field extension is Galois, as already stated.
But it is also clear that the roots of $\overline{P}(X)$ in $B/\mathfrak{P}$ are zero (coming from the factors with $\sigma \notin G_{\mathfrak{P}}$, for which $\sigma(y) \in \mathfrak{P}$) and the elements $\overline{\sigma(y)} = \overline{\sigma}(\overline{x})$ for $\sigma \in G_{\mathfrak{P}}$. This completes the proof of the other assertion, and hence the proposition.
+\end{proof}
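As an illustration: let $A = \mathbb{Z}$, $B = \mathbb{Z}[i]$, and let $p \equiv 3 \pmod 4$, so that $\mathfrak{P} = (p)$ is the unique prime above $(p)$ and $G_{\mathfrak{P}} = G$ has order two. The residue extension is $\mathbb{F}_{p^2}/\mathbb{F}_p$, and complex conjugation maps to the Frobenius $x \mapsto x^p$: indeed, for $a, b \in \mathbb{F}_p$ we have
\[ (a + bi)^p = a^p + b^p i^p = a - bi, \]
since $i^p = -i$ when $p \equiv 3 \pmod 4$. So here $\sigma \to \overline{\sigma}$ is an isomorphism and the inertia group is trivial.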
+
+\begin{definition}
+The kernel of the map $\sigma \to \overline{\sigma}$ is called the \textbf{inertia
+group} $T_{\mathfrak{P}}$. Its fixed field is called the \textbf{inertia
+field}.
+\end{definition}
+
+These groups will resurface significantly in the future.
+
+\begin{remark} Although we shall never need this in the future, it is of
+interest to see what happens when the extension $L/K$ is \emph{purely
+inseparable}.\footnote{Cf. \cite{La02}, for instance.} Suppose $A$ is integrally closed in $K$, and $B$ is the integral closure in $L$. Let the characteristic be $p$, and the degree $[L:K] = p^i$.
+In this case, $x \in B$ if and only if $x^{p^i} \in A$. Indeed, it is clear that the condition mentioned implies integrality. Conversely, if $x$ is integral, then so is $x^{p^i}$, which belongs to $K$ (by basic facts about purely inseparable extensions). Since $A$ is integrally closed, it follows that $x^{p^i} \in A$.
+
Let now $\mathfrak{p} \subset A$ be a prime ideal. I claim that there is precisely one prime ideal $\mathfrak{P}$ of $B$ lying above $\mathfrak{p}$, namely $\mathfrak{P} = \{x \in B : x^{p^i} \in \mathfrak{p}\}$ (which is the radical of $\mathfrak{p}B$). The proof is straightforward; if $\mathfrak{P}$ is \emph{any} prime ideal lying over $\mathfrak{p}$, then $x \in \mathfrak{P}$ iff $x^{p^i} \in A \cap \mathfrak{P} = \mathfrak{p}$. In a terminology to be explained later, $\mathfrak{p}$ is \emph{totally ramified.}
+\end{remark}
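The simplest instance: let $K = \mathbb{F}_p(u)$ and $L = \mathbb{F}_p(t)$ with $t^p = u$, and take $A = \mathbb{F}_p[u]$, whose integral closure in $L$ is $B = \mathbb{F}_p[t]$ (a PID, hence integrally closed, and $t$ is integral over $A$). Above the prime $\mathfrak{p} = (u)$ lies the single prime
\[ \mathfrak{P} = (t) = \{x \in B : x^p \in \mathfrak{p}\}, \]
in accordance with the remark.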
+
+
diff --git a/books/cring/dimension.tex b/books/cring/dimension.tex
new file mode 100644
index 0000000000000000000000000000000000000000..249cfaa8755a4c3149dae535fda30193aef19040
--- /dev/null
+++ b/books/cring/dimension.tex
@@ -0,0 +1,1760 @@
+\chapter{Dimension theory}
+
+\label{chdimension}
+\textbf{Dimension theory} assigns to each commutative ring---say,
+noetherian---an invariant called the dimension. The most standard definition,
+that of Krull dimension (which we shall not adopt at first), defines the
+dimension in terms of the maximal lengths of ascending chains of prime ideals.
+In general, however, the geometric intuition behind dimension is that it
should assign to an affine ring---say, one of the form $\mathbb{C}[x_1, \dots,
x_n]/I$---something like the ``topological dimension'' of the affine variety
in $\mathbb{C}^n$ cut out by the ideal $I$.
+
+In this chapter, we shall obtain three different expressions for the dimension
+of a noetherian local ring $(R, \mathfrak{m})$, each of which will be useful
+at different times in proving results.
+
+\section{The Hilbert function and the dimension of a local ring}
+\subsection{Integer-valued polynomials}
+
+It is now necessary to do a small amount of general algebra.
+
+Let $P \in \mathbb{Q}[t]$. We consider the question of when $P$ maps the
+integers $\mathbb{Z}$, or more generally the sufficiently large integers, into
+$\mathbb{Z}$. Of course, any polynomial in $\mathbb{Z}[t]$ will do this, but
+there are others: consider $\frac{1}{2}(t^2 -t)$, for instance.
+
+\begin{proposition}\label{integervalued}
+Let $P \in \mathbb{Q}[t]$. Then $P(m)$ is an integer for $m \gg 0$ integral if and only if
+$P$ can be written in the form
+\[ P(t) = \sum_n c_n \binom{t}{n}, \quad c_n \in \mathbb{Z}. \]
+In particular, $P(\mathbb{Z}) \subset \mathbb{Z}$.
+\end{proposition}
+So $P$ is a $\mathbb{Z}$-linear function of binomial coefficients.
+\begin{proof}
+Note that the set $\left\{\binom{t}{n}\right\}_{n \in \mathbb{Z}_{\geq 0}}$ forms a basis for the set of
+polynomials $\mathbb{Q}[t]$. It is thus clear that $P(t)$ can be written as
+a rational combination $\sum c_n \binom{t}{n}$ for the $c_n \in \mathbb{Q}$.
+We need to argue that the $c_n \in \mathbb{Z}$ in fact.
+
Consider the operator $\Delta$ defined on functions $\mathbb{Z} \to
\mathbb{C}$ as follows:
\[( \Delta f)(m) = f(m+1) - f(m). \]
It is obvious that if $f$ takes integer values for $m \gg 0$, then so does
$\Delta f$. It is also easy to check, using Pascal's rule, that $\Delta \binom{t}{n} =
\binom{t}{n-1}$.
+
By looking at the
function $\Delta P = \sum c_n \binom{t}{n-1}$ (which takes integer values for $m \gg 0$), it is easy to see that the $c_n \in \mathbb{Z}$ by induction
on the degree.
+It is also easy to see directly that the binomial coefficients take values in
+$\mathbb{Z}$ at \emph{all} arguments.
+\end{proof}
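For instance, $P(t) = \frac{1}{2}(t^2 + t)$ is integer-valued even though its coefficients are not integers: in the binomial basis,
\[ P(t) = \binom{t}{2} + \binom{t}{1}, \]
and correspondingly $(\Delta P)(t) = P(t+1) - P(t) = t + 1 = \binom{t}{1} + \binom{t}{0}$, exhibiting the shift of coefficients used in the induction above.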
+
+
+
+\subsection{Definition and examples}
+Let $R$ be a ring.
+\begin{question}
+What is a good definition for $\dim(R)$? Actually, more generally,
+what is the dimension of $R$ at a given ``point'' (i.e. prime ideal)?
+\end{question}
+
+Geometrically, think of $\spec R$, for any ring; pick some point corresponding to a maximal
+ideal $\mathfrak{m} \subset R$. We want to define the \textbf{dimension of $R$}
+at $\mathfrak{m}$. This is to be thought of kind of like ``dimension over the
+complex numbers,'' for algebraic varieties defined over $\mathbb{C}$. But it
+should be purely algebraic.
+What might you do?
+
+
Here is an idea. For a topological space $X$ to be $n$-dimensional at $x \in
X$, there should be $n$ coordinates at the point $x$. In other words, the
point $x$ should be cut out locally by the vanishing of $n$ functions on the space.
+Motivated by this, we could try defining $\dim_{\mathfrak{m}} R$
+to be the number of generators of $\mathfrak{m}$.
+However, this is a bad definition, as $\mathfrak{m}$ may not have the same number of
+generators as $\mathfrak{m}R_{\mathfrak{m}}$. In other words, it is not a
+truly \emph{local} definition.
+\begin{example}
+Let $R$ be a noetherian integrally closed domain which is not a UFD. Let $\mathfrak{p}
+\subset R$ be a prime ideal which is minimal over a principal ideal but which
+is not itself principal. Then $\mathfrak{p}R_{\mathfrak{p}}$ is generated by
+one element, as we will eventually see, but $\mathfrak{p}$ is not.
+\end{example}
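A concrete instance of this phenomenon: $R = \mathbb{Z}[\sqrt{-5}]$ is a Dedekind domain (hence noetherian and integrally closed) which is not a UFD, and
\[ \mathfrak{p} = (2, 1 + \sqrt{-5}) \]
is a nonprincipal prime minimal over the principal ideal $(2)$. Nonetheless $R_{\mathfrak{p}}$ is a discrete valuation ring, so $\mathfrak{p}R_{\mathfrak{p}}$ is generated by a single element.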
+
+We want our definition of dimension to be
+local.
+So this leads us to:
+\begin{definition}
+If $R$ is a (noetherian) \emph{local} ring with maximal ideal $\mathfrak{m}$,
+then the \textbf{embedding dimension} of $R$, denoted $\emdim R$ is
+the minimal number of generators for $\mathfrak{m}$. If $R$ is a noetherian
+ring and $\mathfrak{p} \subset R$ a prime ideal, then the \textbf{embedding
+dimension at $\mathfrak{p}$} is that of the local ring $R_{\mathfrak{p}}$.
+\end{definition}
+
+In the above definition, it is clearly sufficient to study what happens for
+local rings, and we impose that restriction for now. By Nakayama's lemma, the
+embedding dimension is the minimal number of generators of
+$\mathfrak{m}/\mathfrak{m}^2$, or the $R/\mathfrak{m}$-dimension of that vector
+space:
+\[ \emdim R = \dim_{R/\mathfrak{m}} \mathfrak{m}/ \mathfrak{m}^2. \]
+
+
+In general, however, the embedding dimension is not going to coincide with the
+intuitive ``geometric'' dimension of an algebraic
+variety.
+
+\begin{example}
Let $R = \mathbb{C}[t^2, t^3] \subset \mathbb{C}[t]$, which is the coordinate
ring of the cuspidal cubic curve $y^2 = x^3$: we have $R \simeq \mathbb{C}[x,y]/(y^2 - x^3)$
via $x = t^2, y = t^3$. Let us localize at the prime ideal $\mathfrak{p} = (t^2,
t^3)$: we get $R_{\mathfrak{p}}$.
+
+Now $\spec R$ is singular at the origin. In fact, as a result, $\mathfrak{p}
+R_{\mathfrak{p}} \subset R_{\mathfrak{p}}$ needs two generators, but the
+variety it corresponds to is one-dimensional.
+\end{example}
+
So the embedding dimension is the smallest dimension of a smooth space into
which $\spec R$ can be embedded locally.
But for singular varieties this is not the dimension we want.
+
+So instead of considering simply $\mathfrak{m}/\mathfrak{m}^2$, let us
+consider the \emph{sequence} of finite-dimensional vector spaces
+\[ \mathfrak{m}^k/\mathfrak{m}^{k+1}. \]
+Computing these dimensions as a function of $k$ gives some invariant that describes the local
+geometry of $\spec R$.
+
+We shall eventually prove:
+\begin{theorem} \label{hilbfnispolynomial}
+Let $(R, \mathfrak{m})$ be a local noetherian ring. Then there exists a
+polynomial
+$f \in \mathbb{Q}[t]$ such that
+\[ f(n) = \ell(R/\mathfrak{m}^n) = \sum_{i=0}^{n-1} \dim
+\mathfrak{m}^i/\mathfrak{m}^{i+1} \quad \forall n \gg 0. \]
+
+Moreover, $\deg f \leq \dim \mathfrak{m}/\mathfrak{m}^2$.
+\end{theorem}
+
+
+Note that this polynomial is well-defined, as any two polynomials agreeing for large $n$
+coincide. Note also that $R/\mathfrak{m}^n$ is artinian so of finite length,
+and that we have used the fact that the length is additive for short exact
+sequences. We would have liked to write $\dim R/\mathfrak{m}^n$, but we can't,
+in general, so we use the substitute of the length.
+
+Based on this, we define:
+\begin{definition}
+The \textbf{dimension} of the local ring $R$ is the degree of the polynomial
+$f$ above. For an arbitrary noetherian ring $R$, we define $\dim R =
+\sup_{\mathfrak{p} \in \spec R} \dim (R_{\mathfrak{p}})$.
+\end{definition}
+
+
+Let us now do a few example computations.
+
+\begin{example}[The affine line] \label{easydimcomputation}
+\label{dimaffineline}
+Consider the local ring $(R, \mathfrak{m}) = \mathbb{C}[t]_{(t)}$. Then $\mathfrak{m} = (t)$ and
+$\mathfrak{m}^k/\mathfrak{m}^{k+1}$ is one-dimensional, generated by $t^k$. In
+particular, the ring has dimension one.
+\end{example}
+
+\begin{example}[A singular curve] Consider $R = \mathbb{C}[t^2, t^3]_{(t^2, t^3)}$, the local ring of $y^2 = x^3$
+at zero. Then $\mathfrak{m}^n$ is generated by $t^{2n}, t^{2n+1}, \dots$.
+$\mathfrak{m}^{n+1}$ is generated by $t^{2n+2}, t^{2n+3}, \dots$. So the
+quotients all have dimension two. The dimension of these quotients is a little
+larger than in \rref{easydimcomputation}, but they do not grow. The ring still has dimension one.
+\end{example}
+
+\begin{example}[The affine plane] \label{dimaffineplane}
+Consider $R = \mathbb{C}[x,y]_{(x,y)}$. Then $\mathfrak{m}^k$ is generated by
+polynomials in $x,y$ that are homogeneous in degree $k$. So $\mathfrak{m}^k/\mathfrak{m}^{k+1}$
+has dimensions that \emph{grow} linearly in $k$. This is a genuinely two-dimensional
+example.
+\end{example}
+
It is this difference in the growth rate of $\ell(R/\mathfrak{m}^n)$ as $n \to
\infty$ (linear in the curve examples, quadratic for the plane), and not the
size of the initial terms, that we want for our definition of dimension.
+
+Let us now generalize \rref{dimaffineline} and \rref{dimaffineplane}
+above to affine spaces of arbitrary dimension.
+\begin{example}[Affine space]
+Consider $R = \mathbb{C}[x_1, \dots, x_n]_{(x_1, \dots, x_n)}$.
+This represents the variety $\mathbb{C}^n = \mathbb{A}^n_{\mathbb{C}}$ near the origin geometrically, so
+it should intuitively have dimension $n$. Let us check that it does.
+
Namely, we need to compute the polynomial $f$ above. Here $R/\mathfrak{m}^k$ looks like the set of
polynomials of degree $< k$ in $n$ variables, which has dimension
$\binom{n+k-1}{n}$, a polynomial in $k$ of degree $n$. So this ring indeed has
dimension $n$.
\end{example}

Let us now prove \rref{hilbfnispolynomial}.
\begin{proof}
The associated graded ring $\bigoplus \mathfrak{m}^k/\mathfrak{m}^{k+1}$ is
generated over the field $k = R/\mathfrak{m}$ by the images of a minimal set
of generators of $\mathfrak{m}$, hence is a graded quotient of a polynomial
ring $k[x_1, \dots, x_n]$ with $n = \emdim R$. Since $\ell(R/\mathfrak{m}^t) =
\sum_{i < t} \dim_k \mathfrak{m}^i/\mathfrak{m}^{i+1}$, the theorem follows
from:
\begin{lemma} \label{hilbfngeneral}
Let $M$ be a finitely generated graded module over the polynomial ring $k[x_1,
\dots, x_n]$, $k$ a field. Then there are polynomials $f_M, f_M^+ \in
\mathbb{Q}[t]$, of degrees at most $n-1$ and $n$ respectively, such that
\[ f_M(t) = \dim_k M_t, \quad f_M^+(t) = \sum_{t' \leq t} \dim_k M_{t'}, \quad
t \gg 0. \]
\end{lemma}
\begin{proof}
We induct on $n$; for $n = 0$, $M$ is a finite-dimensional vector space and
$M_t = 0$ for $t \gg 0$. So suppose $n > 0$. Then
consider the filtration of $M$
+\[ 0 \subset \ker( x_1: M \to M) \subset \ker (x_1^2: M \to M) \subset \dots
+\subset M. \]
This must stabilize by noetherianness at some $M' \subset M$. Each of the
quotients $\ker (x_1^{i+1})/\ker(x_1^i)$ is killed by $x_1$, hence is a finitely generated module over
$k[x_1, \dots,
x_n]/(x_1)$, which is a smaller polynomial ring. So each of these quotients
$\ker (x_1^{i+1})/\ker (x_1^{i})$ has a Hilbert function of degree $\leq n-1$ by
the inductive hypothesis.
+
+Climbing up the filtration, we see that $M'$ has a Hilbert function which is the sum of the Hilbert functions of
+these quotients $\ker(x_1^{i+1})/\ker(x_1^{i})$. In particular, $f_{M'}$ exists. If we show that $f_{M/M'}$
+exists, then $f_M$ necessarily exists. So we might as well show that the
+Hilbert function $f_M$ exists when $x_1$ is a non-zerodivisor on $M$.
+
+So, we have reduced to the case where $M \stackrel{x_1}{\to} M$
+is injective.
+Now $M$ has a filtration
+\[ M \supset x_1 M \supset x_1^2 M \supset \dots \]
whose intersection is zero: a nonzero homogeneous element of $M$ cannot be
divisible by arbitrarily high powers of $x_1$, as the grading of $M$ is
bounded below. So it
follows that $\bigcap x_1^m M = 0$.
+
+Let $N = M/x_1 M $, which is isomorphic to $ x_1^m M/x_1^{m+1} M$ since $M \stackrel{x_1}{\to} M$ is
+injective. Here $N$ is a finitely generated graded module over $k[x_2, \dots, x_n]$, and by the
+inductive hypothesis on $n$, we see that there is a polynomial $f_N^+$ of degree $\leq n-1$ such that
+\[ f_N^+(t) = \sum_{t' \leq t} \dim N_{t'}, \quad t \gg 0. \]
+
+Fix $t \gg 0$ and consider the $k$-vector space $M_t$, which has a finite filtration
+\[ M_t \supset (x_1 M)_t \supset (x_1^2 M)_t \supset \dots \]
which has successive quotients that are the graded pieces of $N \simeq
M/x_1 M \simeq x_1 M/x_1^2 M \simeq \dots$ in degrees $t, t-1, \dots$. We
find that
+\[ (x_1^2 M)_t/(x_1^3 M)_t \simeq N_{t-2}, \]
+for instance. Summing this, we find that
+\[ \dim M_t = \dim N_t + \dim N_{t-1} + \dots . \]
+The sum above is actually finite. In fact, by finite generation, there is $K
+\gg 0 $ such that $\dim N_q = 0$ for $q< -K$. From this, we find that
\[ \dim M_t = \sum_{t' = -K}^{t} \dim N_{t'} = f_N^+(t), \quad t \gg 0, \]
so $\dim M_t$ agrees for $t \gg 0$ with the polynomial $f_N^+$ of degree $\leq
n-1$; summing once more shows that the partial sums are eventually polynomial
of degree $\leq n$. This completes the proof.
+\end{proof}
+\end{proof}
+
+
Let $(R, \mathfrak{m})$ be a noetherian local ring and $M$ a finitely generated
$R$-module.
+\begin{proposition} \label{hilbertlocalring}
+$\ell(M/\mathfrak{m}^m M)$ is a polynomial for $m \gg 0$.
+\end{proposition}
+\begin{proof}
+This follows from \rref{hilbfngeneral}, and in fact we have
+essentially seen the argument above. Indeed, we consider the
+associated graded module
+\[ N = \bigoplus \mathfrak{m}^k M/\mathfrak{m}^{k+1}M , \]
+which is finitely generated over the associated graded ring
+\[ \bigoplus \mathfrak{m}^k/\mathfrak{m}^{k+1}. \]
+Consequently, the graded pieces of $N$ have dimensions growing polynomially
+for large degrees. This implies the result.
+
+\end{proof}
+\begin{definition}
+We define the \textbf{Hilbert function} $H_M(m)$ to be the unique polynomial
+such that
+\[ H_M(m) = \ell(M/ \mathfrak{m}^m M), \quad m \gg 0. \]
+\end{definition}
+
+It is clear, incidentally, that $H_M$ is integer-valued, so we see by
+\rref{integervalued} that $H_M$ is a $\mathbb{Z}$-linear
+combination of binomial coefficients.
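For instance, for $R = \mathbb{C}[x,y]_{(x,y)}$ as in \rref{dimaffineplane}, the length $\ell(R/\mathfrak{m}^m)$ is the number of monomials in $x, y$ of degree $< m$, namely $\frac{1}{2}m(m+1)$, so
\[ H_R(m) = \binom{m}{2} + \binom{m}{1}, \]
a $\mathbb{Z}$-linear combination of binomial coefficients as promised.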
+
+\subsection{The dimension of a module}
Let $R $ be a local noetherian ring with maximal ideal $\mathfrak{m}$. We
have seen (\rref{hilbertlocalring}) that there is a polynomial $H(t)$ with
\[ H(t) = \ell(R/\mathfrak{m}^t), \quad t \gg 0. \]
Earlier, we defined the \textbf{dimension} of $R$ as the degree of this
polynomial. Since the degree of the Hilbert function is at most the number of
variables of a polynomial ring of which the associated graded ring is a
quotient, we saw that
\[ \dim R \leq \emdim R. \]
+
+Armed with the machinery of the Hilbert function, we can extend this
+definition to modules.
+\begin{definition}
+If $R$ is local noetherian, and $N$ a finite $R$-module, then $N$ has a
+Hilbert polynomial $H_N(t)$ which when evaluated at $t \gg 0$ gives
+the length $\ell(N/\mathfrak{m}^t N)$.
+We say that the \textbf{dimension of
+$N$} is the degree of this Hilbert polynomial.
+\end{definition}
+
+Clearly, the dimension of the \emph{ring} $R$ is the same thing as that of the
+\emph{module} $R$.
+
+We next show that the dimension behaves well with respect to short exact
+sequences. This is actually slightly subtle since, in general, tensoring with
+$R/\mathfrak{m}^t$ is not exact; it turns out to be \emph{close} to being
+exact by the Artin-Rees lemma.
+On the other hand, the corresponding fact for modules over a \emph{polynomial
+ring} is very easy, as
+no tensoring was involved in the definition.
+
+\begin{proposition}
+Suppose we have an exact sequence
+\[ 0 \to M' \to M \to M'' \to 0 \]
+of graded modules over a polynomial ring $k[x_1, \dots, x_n]$. Then
+\[ f_M(t) = f_{M'}(t) + f_{M''}(t), \quad f_M^+(t) = f_{M'}^+(t) +
+f_{M''}^+(t). \]
As a result, $\deg f_M = \max(\deg f_{M'}, \deg f_{M''})$.
+\end{proposition}
+\begin{proof} The first part is obvious as the dimension is additive on vector
+spaces. The second part follows because Hilbert functions have nonnegative
+leading coefficients.
+\end{proof}
+
+\begin{proposition} \label{dimexactseq}
+Fix an exact sequence
+\[ 0 \to N' \to N \to N'' \to 0 \]
+of finite $R$-modules. Then $\dim N = \max (\dim N', \dim N'')$.
+\end{proposition}
+\begin{proof}
+We have an exact sequence
+\[ 0 \to K \to N/\mathfrak{m}^t N \to N''/\mathfrak{m}^t N'' \to 0 \]
+where $K$ is the kernel. Here $K = (N' + \mathfrak{m}^t N)/ \mathfrak{m}^t N
+= N'/( N' \cap \mathfrak{m}^t N)$. This is not quite $N'/\mathfrak{m}^t N'$,
+but it's pretty close.
We have a surjection
\[ N'/\mathfrak{m}^t N' \twoheadrightarrow N'/(N' \cap \mathfrak{m}^t N) = K. \]
+In particular,
+\[ \ell(K) \leq \ell(N'/\mathfrak{m}^t N'). \]
+On the other hand, we have the Artin-Rees lemma, which gives an inequality in
+the opposite direction. We have a containment
+\[ \mathfrak{m}^t N' \subset N' \cap \mathfrak{m}^t N \subset
+\mathfrak{m}^{t-c} N' \]
+for some $c$. This implies that $\ell(K) \geq \ell( N'/\mathfrak{m}^{t-c} N')$.
+
+Define $M = \bigoplus \mathfrak{m}^t N/\mathfrak{m}^{t+1} N$, and define $M',
+M''$ similarly in terms of $N', N''$. Then we have seen that
\[ \boxed{f_{M'}^+(t-c) \leq \ell(K) \leq f_{M'}^+(t).} \]
We also know that the length of $K$ plus the length of $N''/\mathfrak{m}^t N''$
is $f_M^+(t)$, i.e.
\[ \ell(K) + f_{M''}^+(t) = f_M^+(t). \]
Now, for $t \gg 0$, the length of $K$ is squeezed between $f_{M'}^+(t-c)$ and
$f_{M'}^+(t)$, two polynomials with the same degree and the same leading
coefficient. So we have an approximate equality $f_{M'}^+(t) + f_{M''}^+(t)
\simeq f_M^+(t)$, up to terms of lower degree. This implies the
result since the degree of $f_M^+$ is $\dim N$ (and similarly for the others).
+\end{proof}
+
+
+
+\begin{proposition}
+$\dim R$ is the same as $\dim R/\rad R$.
+\end{proposition}
+I.e., the dimension doesn't change when you kill off nilpotent elements, which
+is what you would expect, as nilpotents don't affect $\spec (R)$.
+\begin{proof}
Let $R$ be a local noetherian ring and $I = \rad (R)$ its nilradical. Since $R$
is noetherian, $I$ is generated by finitely many nilpotent elements, and hence
is itself nilpotent: $I^n = 0$ for $n \gg 0$. We will show that
+\[ \dim R/I = \dim R/I^2 = \dots \]
+which will imply the result, as eventually the powers become zero.
+
+In particular, we have to show for each $k$,
+\[ \dim R/I^k = \dim R/I^{k+1}. \]
+There is an exact sequence
+\[ 0 \to I^k/I^{k+1} \to R/I^{k+1} \to R/I^k \to 0. \]
+The dimension of these rings is the same thing as the dimensions as
+$R$-modules. So we can use this short exact sequence of modules. By the
+previous result, we are reduced to showing that
+\[ \dim I^k/I^{k+1} \leq \dim R/I^k. \]
+Well, note that $I$ kills $I^k/I^{k+1}$. In particular, $I^k/I^{k+1}$ is a finitely generated
+$R/I^k$-module. There is an exact sequence
\[ \bigoplus_{i=1}^{N} R/I^k \to I^k/I^{k+1} \to 0 \]
for some $N$, which implies that $\dim I^k/I^{k+1} \leq \dim \bigoplus_{i=1}^N R/I^k = \dim R/I^k$.
+\end{proof}
+
+\begin{example}
Let $\mathfrak{p} \subset \mathbb{C}[x_1, \dots, x_n]$ be a prime ideal, and let $R =
(\mathbb{C}[x_1,\dots, x_n]/\mathfrak{p})_{\mathfrak{m}}$ for some maximal
ideal $\mathfrak{m}$. What is $\dim R$?
+What does dimension mean for coordinate rings over $\mathbb{C}$?
+
Recall by the Noether normalization theorem that there exists a polynomial ring
$\mathbb{C}[y_1, \dots, y_m]$ contained in $S=\mathbb{C}[x_1,\dots,
x_n]/\mathfrak{p}$ such that $S$ is a finite module over this polynomial ring.
+We claim that
+\[ \dim R = m. \]
We will return to this claim below.
+\end{example}
+
+\subsection{Dimension depends only on the support}
+
+Let $(R, \mathfrak{m})$ be a local noetherian ring. Let $M$ be a finitely generated
+$R$-module. We defined the \textbf{Hilbert polynomial} of $M$ to be the
+polynomial which evaluates at $t \gg 0$ to $\ell(M/\mathfrak{m}^tM)$. We proved
+last time that such a polynomial always exists, and called its degree the
+\textbf{dimension of $M$}. However,
+we shall now see that $\dim M$ really depends only on the support\footnote{
+Recall that $\supp M = \left\{\mathfrak{p}: M_{\mathfrak{p}}\neq 0\right\}$.} $\supp M$.
+In this sense, the dimension is really a statement about the \emph{topological
+space} $\supp M \subset \spec R$, not about $M$ itself.
+
+
+In other words, we will prove:
+\begin{proposition}
+$\dim M$ depends only on $\supp M$.
+\end{proposition}
+
+In fact, we shall show:
+
+\begin{proposition}
+$\dim M = \max_{\mathfrak{p} \in \supp M} \dim R/\mathfrak{p}$.
+\end{proposition}
+\begin{proof}
+By \rref{filtrationlemma} in \rref{noetherian}, there is a finite filtration
+\[ 0 = M_0 \subset M_1 \subset \dots \subset M_m = M, \]
+such that each of the successive quotients is isomorphic to $R/\mathfrak{p}_i
+\subset R$
+for some prime ideal $\mathfrak{p}_i$. Given a short exact sequence
+of modules, we know that the dimension in the middle is the maximum of the dimensions at the
+two ends (\rref{dimexactseq}). Iterating this, we see that the
+dimension of $M$ is the maximum of the
+dimension of the successive quotients $M_i/M_{i-1}$.
+
+But the $\mathfrak{p}_i$'s that occur
+are all in $\supp M$, so we find
\[ \dim M = \max_{\mathfrak{p}_i} \dim R/\mathfrak{p}_i \leq \max_{\mathfrak{p} \in \supp M} \dim R/\mathfrak{p}. \]
+We must show the reverse inequality. But fix any prime $\mathfrak{p} \in \supp
+M$. Then $M_{\mathfrak{p}} \neq 0$, so one of the $R/\mathfrak{p}_i$ localized
+at $\mathfrak{p}$ must be nonzero, as localization is an exact functor. Thus
+$\mathfrak{p}$ must contain some $\mathfrak{p}_i$. So $R/\mathfrak{p}$ is a
+quotient of $R/\mathfrak{p}_i$. In particular,
+\[ \dim R/\mathfrak{p} \leq \dim R/\mathfrak{p}_i. \]
+\end{proof}
+
+Having proved this, we throw out the notation $\dim M$, and henceforth write
+instead $\dim \supp M$.
+
+
+
+%% N. B. This following material on affine rings should be added in
+\begin{comment}
+\subsection{The dimension of an affine ring} Last time, we made a claim. If $R$
+is a domain and a finite module over a polynomial ring $k[x_1, \dots, x_n]$,
+then $R_{\mathfrak{m}}$ for any maximal $\mathfrak{m} \subset R$ has dimension
+$n$. This connects the dimension with the transcendence degree.
+
+First, let us talk about finite extensions of rings. Let $R$ be a commutative
+ring and let $R \to R'$ be a morphism that makes $R'$ a finitely generated $R$-module (in
+particular, integral over $R$). Let $\mathfrak{m}' \subset R'$ be maximal. Let
+$\mathfrak{m}$ be the pull-back to $R$, which is also maximal (as $R \to R'$ is
+integral).
+Let $M$ be a finitely generated $R'$-module, hence also a finitely generated $R$-module.
+
+We can look at $M_{\mathfrak{m}}$ as an $R_{\mathfrak{m}}$-module or
+$M_{\mathfrak{m}'}$ as an $R'_{\mathfrak{m}'}$-module. Either of these will be
+finitely generated.
+
+\begin{proposition}
+$\dim \supp M_{\mathfrak{m}} \geq \dim \supp M_{\mathfrak{m}'}$.
+\end{proposition}
+Here $M_{\mathfrak{m}}$ is an $R_{\mathfrak{m}}$-module, $M_{\mathfrak{m}'}$ is
+an $R'_{\mathfrak{m}'}$-module.
+
+\begin{proof}
+Consider $R/\mathfrak{m} \to R'/\mathfrak{m} R' \to R'/\mathfrak{m}'$. Then we
+see that $R'/\mathfrak{m} R'$ is a finite $R/\mathfrak{m}$-module, so a
+finite-dimensional $R/\mathfrak{m}$-vector space. In particular,
+$R'/\mathfrak{m} R'$ is of finite length as an $R/\mathfrak{m}$-module, in
+particular an artinian ring. It is thus a product of local artinian rings.
+These artinian rings are the localizations of $R'/\mathfrak{m}R'$ at ideals of
+$R'$ lying over $\mathfrak{m}$. One of these ideals is $\mathfrak{m}'$.
+So in particular
\[ R'/\mathfrak{m}R' \simeq (R'/\mathfrak{m}R')_{\mathfrak{m}'}\times \mathrm{other \ factors}. \]
+The nilradical of an artinian ring being nilpotent, we see that
+$\mathfrak{m}'^c R'_{\mathfrak{m}'} \subset \mathfrak{m} R'_{\mathfrak{m}}$ for
+some $c$.
+
+\end{proof}
+
+
+\begin{proposition}
+$\dim \supp M_{\mathfrak{m}} = \max_{\mathfrak{m}' \mid \mathfrak{m}} \dim
+\supp M_{\mathfrak{m}'}$.
+\end{proposition}
+
+This means $\mathfrak{m}'$ lies over $\mathfrak{m}$.
+\begin{proof}
The proof is similar, using the same artinian techniques.
+\end{proof}
+\end{comment}
+\begin{example}
+Let $R' = \mathbb{C}[x_1, \dots, x_n]/\mathfrak{p}$. Noether normalization says
+that there exists a finite injective map $\mathbb{C}[y_1, \dots, y_a] \to R'$.
+The claim is that
+\[ \dim R'_{\mathfrak{m}} =a \]
for any maximal ideal $\mathfrak{m} \subset R'$. We are set up to prove a
slightly weaker statement. In particular (see below for the definition of the
dimension of a non-local ring), by the proposition, we
find the weaker claim
+\[ \dim R' = a, \]
+as the dimension of a polynomial ring $\mathbb{C}[y_1, \dots, y_a]$ is $a$.
(\textbf{We have not yet proved this.})
+\end{example}
+
+
+
+\section{Other definitions and characterizations of dimension}
+
+\subsection{The topological characterization of dimension} We now want a topological
+characterization of dimension. So, first, we want to study how dimension
+changes as we do things to a module. Let $M$ be a finitely generated $R$-module over a local
+noetherian ring $R$. Let $x \in \mathfrak{m}$ for $\mathfrak{m}$ as the maximal
+ideal.
+You might ask
+\begin{quote}
+What is the relation between $\dim \supp M$ and $\dim \supp M/xM$?
+\end{quote}
+Well, $M$ surjects onto $M/xM$, so we have the inequality $\geq$. But we think
+of dimension as describing the number of parameters you need to describe
+something. The number of parameters shouldn't change too much with going from
+$M$ to $M/xM$. Indeed, as one can check,
+\[ \supp M/xM = \supp M \cap V(x) \]
+and intersecting $\supp M$ with the ``hypersurface'' $V(x)$ should shrink the
+dimension by one.
+
+
+We thus make:
+\begin{prediction}
+\[ \dim \supp M/xM = \dim \supp M - 1. \]
+\end{prediction}
+Obviously this is not always true, e.g. if $x$ acts by zero on $M$. But we want
+to rule that out.
+Under reasonable cases, in fact, the prediction is correct:
+
+\begin{proposition} \label{dimdropsbyone}
+Suppose $x \in \mathfrak{m}$ is a nonzerodivisor on $M$. Then
+\[ \dim \supp M/xM = \dim \supp M - 1. \]
+\end{proposition}
+\begin{proof}
+To see this, we look at Hilbert polynomials. Let us consider the exact sequence
+\[ 0 \to xM \to M \to M/xM \to 0 \]
+which leads to an exact sequence for each $t$,
+\[ 0 \to xM/(xM \cap \mathfrak{m}^t M) \to M/\mathfrak{m}^t M \to M/(xM +
+\mathfrak{m}^t M) \to 0 . \]
+For $t$ large, the lengths of these things are given by Hilbert polynomials,
+as the thing on the right is $M/xM \otimes_R R/\mathfrak{m}^t$.
+We have
\[ f_M^+(t) = f_{M/xM}^+(t) + \ell(xM/ (x M \cap \mathfrak{m}^t M)), \quad t
\gg 0. \]
+In particular, $\ell( xM/ (xM \cap \mathfrak{m}^t M))$ is a polynomial in $t$.
+What can we say about it? Well, $xM \simeq M$ as $x$ is a nonzerodivisor. In
+particular
+\[ xM / (xM \cap \mathfrak{m}^t M) \simeq M/N_t \]
+where
+\[ N_t = \left\{a \in M: xa \in \mathfrak{m}^t M\right\} . \]
In particular, $N_t \supset \mathfrak{m}^{t-1} M$, since $x \in \mathfrak{m}$. This tells us that
+$\ell(M/N_t) \leq \ell(M/\mathfrak{m}^{t-1} M) = f_M^+(t-1)$ for $t \gg 0$.
+Combining this with the above information, we learn that
+\[ f_M^+(t) \leq f_{M/xM}^+(t) + f_M^+(t-1), \]
+which implies that $f_{M/xM}^+(t)$ is at least the successive difference
+$f_M^+(t) - f_M^+(t-1)$. This last polynomial has degree $\dim \supp M -1$. In
+particular, $f_{M/xM}^+(t)$ has degree at least $\dim \supp M -1 $. This gives
+us one direction, actually the hard one. We showed that intersecting something with codimension one
+doesn't drive the dimension down too much.
+
+Let us now do the other direction. We essentially did this last time via the
Artin-Rees lemma. We know that $N_t = \left\{a \in M: xa \in
\mathfrak{m}^t M\right\}$. The Artin-Rees lemma tells us that there is a constant
+$c$ such that $N_{t+c} \subset \mathfrak{m}^t M$ for all $t$. Therefore,
+$\ell(M/N_{t+c}) \geq \ell(M/\mathfrak{m}^t M) = f_M^+(t), t \gg 0$. Now
+remember the exact sequence $0 \to M/N_t \to M/\mathfrak{m}^t M \to M/(xM +
+\mathfrak{m}^t M) \to 0$. We see from this that
+\[ \ell(M/ \mathfrak{m}^t M) = \ell(M/N_t) + f_{M/xM}^+(t) \geq f_M^+(t-c) +
+f_{M/xM}^+(t), \quad t \gg 0, \]
+which implies that
+\[ f_{M/xM}^+(t) \leq f_M^+(t) - f_M^+(t-c), \]
+so the degree must go down. And we find that $\deg f_{M/xM}^+ < \deg f_{M}^+$.
+\end{proof}
+
+This gives us an algorithm of computing the dimension of an $R$-module $M$.
+First, it reduces to computing $\dim R/\mathfrak{p}$ for $\mathfrak{p} \subset
+R$ a prime ideal. We may assume that $R$ is a domain and that we are looking
+for $\dim R$. Geometrically, this
+corresponds to taking an irreducible component of $\spec R$.
+
Now choose any $x
\in R$ such that $x$ is nonzero but noninvertible. If there is no such element,
then $R$ is a field and has dimension zero. Otherwise, compute $\dim R/(x)$
(recursively) and add one.
+
+Notice that this algorithm said nothing about Hilbert polynomials, and only
+talked about the structure of prime ideals.
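Here is a quick illustration of the recursion:
\begin{example}
Take $R = \mathbb{C}[x,y]_{(x,y)}$, a local domain. The element $x$ is nonzero
and noninvertible, and $R/(x) \simeq \mathbb{C}[y]_{(y)}$ is again a local
domain. There $y$ is nonzero and noninvertible, and $\mathbb{C}[y]_{(y)}/(y)
\simeq \mathbb{C}$ is a field, of dimension zero. Unwinding the recursion,
$\dim R = 0 + 1 + 1 = 2$.
\end{example}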
+
+\subsection{Recap}
+Last time, we were talking about dimension theory.
+Recall that $R$ is a local noetherian ring with maximal ideal $\mathfrak{m}$,
+$M$ a finitely generated $R$-module. We can look at the lengths $\ell(M/\mathfrak{m}^t M)$
+for varying $t$; for $t \gg 0$ this is a polynomial function. The degree of
+this polynomial is called the \textbf{dimension} of $\supp M$.
+
+\begin{remark}
+If $M = 0$, then we define $\dim \supp M = -1$ by convention.
+\end{remark}
+
+Last time, we showed that if $M \neq 0$ and $x \in \mathfrak{m}$ such that $x$
+is a nonzerodivisor on $M$ (i.e. $M \stackrel{x}{\to} M$ injective), then
+\[ \boxed{ \dim \supp M/xM = \dim \supp M - 1.}\]
+Using this, we could give a recursion for calculating the dimension.
+To compute $\dim R = \dim \spec R$, we note three properties:
+\begin{enumerate}
\item $\dim R = \sup_{\mathfrak{p} \ \mathrm{a \ minimal \ prime}}
\dim R/\mathfrak{p}$. Intuitively, this says that a variety which is the union of
its irreducible components has dimension equal to the maximum of the dimensions
of those components.
+\item $\dim R = 0$ for $R$ a field. This is obvious from the definitions.
+\item If $R$ is a domain, and $x \in \mathfrak{m} - \left\{0\right\}$, then
+$\dim R/(x) +1 = \dim R $. This is obvious from the boxed formula as $x$ is a nonzerodivisor.
+\end{enumerate}
+
+These three properties \emph{uniquely characterize} the dimension invariant.
+
+\textbf{More precisely, if
+$d: \left\{\mathrm{local \ noetherian \ rings}\right\} \to \mathbb{Z}_{\geq 0}$
+satisfies the above three properties, then $d = \dim $. }
+\begin{proof}
+Induction on $\dim R$. It is clearly sufficient to prove this for $R$ a domain.
+If $R$ is a field, then it's clear; if $\dim R>0$, the third condition lets us
+reduce to a case covered by the inductive hypothesis (i.e. go down).
+\end{proof}
+
+Let us rephrase 3 above:
+\begin{quote}
+3': If $R$ is a domain and not a field, then
+\[ \dim R = \sup_{x \in \mathfrak{m} - 0} \dim R/(x) + 1. \]
+\end{quote}
Obviously 3 implies 3', and it is clear by the same argument that 1, 2, 3'
characterize the notion of dimension.
+
+\subsection{Krull dimension} We shall now define another notion of
+dimension, and show that it is equivalent to the older one by showing that it
+satisfies these axioms.
+
+\begin{definition}
+Let $R$ be a commutative ring. A \textbf{chain of prime ideals} in $R$ is a finite
+sequence
+\[ \mathfrak{p}_0 \subsetneq \mathfrak{p}_1 \subsetneq \dots \subsetneq
+\mathfrak{p}_n. \]
+This chain is said to have \textbf{length $n$.}
+\end{definition}
+
+\begin{definition}
+The \textbf{Krull dimension} of $R$ is equal to the maximum length of any chain
+of prime ideals. This might be $\infty$, but we will soon see this cannot
+happen for $R$ local and noetherian.
+\end{definition}
+
+\begin{remark}
For any maximal chain $\left\{\mathfrak{p}_i, 0 \leq i \leq n\right\}$ of primes (i.e. one which cannot be extended), we must have
that $\mathfrak{p}_0$ is a minimal prime and $\mathfrak{p}_n$ a maximal ideal.
+\end{remark}
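Here is the motivating example:
\begin{example}
In $\mathbb{C}[x_1, \dots, x_n]$, the chain
\[ (0) \subsetneq (x_1) \subsetneq (x_1, x_2) \subsetneq \dots \subsetneq (x_1, \dots, x_n) \]
is a chain of prime ideals of length $n$: each quotient
$\mathbb{C}[x_1,\dots,x_n]/(x_1,\dots,x_i) \simeq \mathbb{C}[x_{i+1}, \dots,
x_n]$ is a domain, so each $(x_1, \dots, x_i)$ is prime. Thus the Krull
dimension of the polynomial ring is at least $n$.
\end{example}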
+
+\begin{theorem}
+For a noetherian local ring $R$, the Krull dimension of $R$ exists and is equal
+to the usual $\dim R$.
+\end{theorem}
+\begin{proof}
+We will show that the Krull dimension satisfies the above axioms. For now,
+write $\krdim$ for Krull dimension.
+
+\begin{enumerate}
\item First, note that $\krdim(R) = \max_{\mathfrak{p} \subset R \
\mathrm{minimal}} \krdim(R/\mathfrak{p})$. This is because any chain of prime
+ideals in $R$ contains a minimal prime. So any chain of prime ideals in $R$ can
+be viewed as a chain in \emph{some} $R/\mathfrak{p}$, and conversely.
+\item Second, we need to check that $\krdim(R) = 0$ for $R$ a field. This is
+obvious, as there is precisely one prime ideal.
+\item The third condition is interesting. We must check that for $(R,
+\mathfrak{m})$ a local
+domain,
+\[ \krdim(R) = \max_{x \in \mathfrak{m} - \left\{0\right\}} \krdim(R/(x)) + 1. \]
+If we prove this, we will have shown that condition 3' is satisfied by the
+Krull dimension. It will follow by the inductive argument above that $\krdim(R)
+= \dim (R)$ for any $R$.
+There are two inequalities to prove. First, we must show
\[ \krdim(R) \geq \krdim(R/(x)) +1, \quad \forall x \in \mathfrak{m} - 0. \]
So suppose $k = \krdim(R/(x))$. We want to show that there is a chain of prime
+ideals of length $k+1$ in $R$. So say $\mathfrak{p}_0 \subsetneq \dots
+\subsetneq \mathfrak{p}_k$ is a chain of length $k$ in $R/(x)$. The inverse
+images in $R$ give a proper chain of primes in $R$ of length $k$, all of which
+contain $(x)$ and thus properly contain $0$. Thus adding zero will give a chain
+of primes in $R$ of length $k+1$.
+
+Conversely, we want to show that if there is a chain of primes in $R$ of
+length $k+1$, then there is a chain of length $k$ in $R/(x)$ for some $x \in
+\mathfrak{m} - \left\{0\right\}$. Let us write the chain of length $k+1$:
\[ \mathfrak{q}_{-1} \subsetneq \mathfrak{q}_0 \subsetneq \dots \subsetneq
\mathfrak{q}_k \subset R . \]
+Now evidently $\mathfrak{q}_0$ contains some $x \in \mathfrak{m} - 0$. Then the
+chain $\mathfrak{q}_0 \subsetneq \dots \subsetneq \mathfrak{q}_k$ can be
+identified with a chain in $R/(x)$ for this $x$. So for this $x$, we have that
+$\krdim R \leq \sup \krdim R/(x) + 1$.
+\end{enumerate}
+\end{proof}
+
There is thus a purely combinatorial definition of dimension.
+
+Geometrically, let $X = \spec R$ for $R$ an affine ring over $\mathbb{C}$ (a
+polynomial ring mod some ideal). Then $R$ has Krull dimension $\geq k$ iff there is a
+chain of irreducible subvarieties of $X$,
\[ X_0 \supsetneq X_1 \supsetneq \dots \supsetneq X_k . \]
+You will meet justification for this in \rref{subsectiondimension} below.
+
+\begin{remark}[\textbf{Warning!}] Let $R$ be a local noetherian ring of dimension $k$. This
+means that there is a chain of prime ideals of length $k$, and no longer
+chains. Thus there is a maximal chain whose length is $k$. However, not all
+maximal chains in $\spec R$ have length $k$.
+\end{remark}
+
+\begin{example}
Let $R =( \mathbb{C}[X,Y,Z]/(XY,XZ))_{(X,Y,Z)}$. It is left as an
exercise to the reader to see that there are maximal chains of
length one as well as maximal chains of length two.
+
+There are more complicated local noetherian \emph{domains} which have maximal
+chains of prime ideals not of the same length. These examples are not what you
+would encounter in daily experience, and are necessarily complicated. This
+cannot happen for finitely generated domains over a field.
+\end{example}
+
+\begin{example}
An easier way for all maximal chains to fail to be of the same length is when
$\spec R$ has two components (in which case $R = R_0 \times R_1$ for rings
$R_0, R_1$).
+\end{example}
+
+
+\subsection{Yet another definition}
Let's start by thinking about the dimension of a module. Recall that if $(R,
+\mathfrak{m})$ is
+a local noetherian ring and $M$ a finitely generated $R$-module, and $x \in \mathfrak{m}$ is
+a nonzerodivisor on $M$, then
+\[ \dim \supp M/xM = \dim \supp M -1. \]
+
+\begin{question}
+What if $x$ is a zerodivisor?
+\end{question}
+
+This is not necessarily true (e.g. if $x \in \ann(M)$). Nonetheless, we claim
+that even in this case:
+\begin{proposition}
+For any $x \in \mathfrak{m}$,
+\[ \boxed{ \dim \supp M \geq \dim \supp M/xM \geq \dim \supp M -1 .}\]
+\end{proposition}
+The upper bound on $\dim M/xM$ is obvious as $M/xM$ is a quotient of $M$. The
+lower bound is trickier.
+
+\begin{proof}
+Let $N = \left\{a \in M: x^n a = 0 \ \mathrm{for \ some \ } n \right\}$. We can
+construct an exact sequence
+\[ 0 \to N \to M \to M/N \to 0. \]
+Let $M'' = M/N$.
+Now $x$ is a nonzerodivisor on $M/N$ by construction. We claim that
+\[ 0 \to N/xN \to M/xM \to M''/xM'' \to 0 \]
+is exact as well. For this we only need to see exactness at the beginning,
+i.e. injectivity of $N/xN \to M/xM$. So
+we need to show that if $a \in N$ and $a \in xM$, then $a \in x N$.
+
To see this, suppose $a = xb$ where $b \in M$. Then if $\phi: M \to M''$ is the
projection, $\phi(b) \in M''$ is killed by $x$, as $x \phi(b) = \phi(xb) = \phi(a) = 0$ (since $a \in N$).
This means that $\phi(b)=0$ as $M'' \stackrel{x}{\to} M''$ is injective. Thus
+$b \in N$ in fact. So $a \in xN$ in fact.
+
From the exactness, we see that
\begin{align*} \dim M/xM & = \max (\dim M''/xM'', \dim N/xN) \geq \max(\dim M'' -1, \dim
N)\\ & \geq \max( \dim M'', \dim N)-1 . \end{align*}
Here the first inequality uses that $x$ is a nonzerodivisor on $M''$ and that
$\dim N/xN = \dim N$; the latter holds because $\supp N/xN = \supp N$ (as $N$ is
$x$-power torsion) and the dimension depends only on the support. But the thing on the right is just $\dim M -1$.
+\end{proof}
+
+As a result, we find:
+
+\begin{proposition}
$\dim \supp M$ is the minimal integer $n$ such that there exist elements $x_1,
\dots, x_n \in \mathfrak{m}$ with $M/(x_1 , \dots, x_n) M$ of finite length.
+\end{proposition}
+Note that $n$ always exists, since we can look at a bunch of generators of the
+maximal ideal, and $M/\mathfrak{m}M $ is a finite-dimensional vector space and
+is thus of finite length.
+\begin{proof}
Induction on $\dim \supp M$. Note that $\dim \supp(M)=0$ if and only if the
Hilbert polynomial has degree zero, i.e. $M$ has finite length; this is
precisely the case $n=0$ ($n$ being defined as in the statement).
+
+Suppose $\dim \supp M > 0$. \begin{enumerate}
\item We first show that there are $x_1, \dots, x_{\dim M}$
with $M/(x_1, \dots, x_{\dim M})M$ of finite length.
+Let $M' \subset M$ be the maximal submodule having finite length. There
+is an exact sequence
+\[ 0 \to M' \to M \to M'' \to 0 \]
+where $M'' = M/M'$ has no finite length submodules. In this case, we can
+basically ignore $M'$, and replace $M$ by $M''$. The reason is that modding out
+by $M'$ doesn't affect either $n$ or the dimension.
+
+So let us replace $M$ with
+$M''$ and thereby assume that $M$ has no finite length submodules. In
+particular, $M$ does not contain a copy of $R/\mathfrak{m}$, i.e. $\mathfrak{m}
+\notin \ass(M)$.
+By prime avoidance, this means that there is $x_1 \in \mathfrak{m}$ that acts as
+a nonzerodivisor on $M$. Thus
+\[ \dim M/x_1M = \dim M -1. \]
+The inductive hypothesis says that there are $x_2, \dots, x_{\dim M}$ with
$$(M/x_1 M)/(x_2, \dots, x_{\dim M}) (M/x_1M) \simeq M/(x_1, \dots, x_{\dim M})M $$
+of finite length. This shows the claim.
\item Conversely, suppose that $M/(x_1, \dots, x_n)M$ has finite length.
+Then we claim that $n \geq \dim M$. This follows because we had the previous
+result that modding out by a single element can chop off the dimension by at
+most $1$. Recursively applying this, and using the fact that $\dim$ of a
+finite length module is zero, we find
+\[ 0 = \dim M/(x_1 , \dots, x_n )M \geq \dim M -n. \]
+\end{enumerate}
+\end{proof}
+
+
+\begin{corollary}
Let $(R, \mathfrak{m})$ be a local noetherian ring. Then $\dim R$ is equal to the minimal $n$
such that there exist $x_1, \dots, x_n \in \mathfrak{m}$ with $R/(x_1, \dots, x_n)$
artinian. Or, equivalently, such that $(x_1, \dots, x_n)$ contains a power of
$\mathfrak{m}$.
+\end{corollary}
+
+
+\begin{remark}
+We manifestly have here that the dimension of $R$ is at most the embedding
+dimension. Here, we're not worried about generating the maximal ideal, but
+simply something containing a power of it.
+\end{remark}
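The gap between the dimension and the embedding dimension is already visible for singular curves:
\begin{example}
Let $R = (\mathbb{C}[x,y]/(y^2 - x^3))_{(x,y)}$, the local ring of the cuspidal
cubic at the origin. The embedding dimension is two: the images of $x$ and $y$
form a basis of $\mathfrak{m}/\mathfrak{m}^2$, since the relation $y^2 = x^3$
only affects degrees $\geq 2$. However, the single element $x$ generates an
$\mathfrak{m}$-primary ideal, because $R/(x) \simeq (\mathbb{C}[y]/(y^2))_{(y)}$
is artinian. Thus $\dim R = 1 < 2$.
\end{example}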
+\lecture{11/5}
+
+We have been talking about dimension. Let $R$ be a local noetherian ring with
+maximal ideal $\mathfrak{m}$. Then, as we have said in previous lectures, $\dim R$ can be characterized by:
+\begin{enumerate}
\item The minimal $n$ such that there is an $\mathfrak{m}$-primary ideal generated by $n$
elements $x_1, \dots, x_n \in \mathfrak{m}$. That is, the closed point
+$\mathfrak{m}$ of
+$\spec R$ is cut out \emph{set-theoretically} by the intersection $\bigcap
+V(x_i)$. This is one way of saying that the closed point can be defined by $n$
+parameters.
+\item The \emph{maximal} $n$ such that there exists a chain of prime ideals
\[ \mathfrak{p}_0 \subsetneq \mathfrak{p}_1 \subsetneq \dots \subsetneq \mathfrak{p}_n. \]
+\item The degree of the Hilbert polynomial $f^+(t)$, which equals
+$\ell(R/\mathfrak{m}^t)$ for $t \gg 0$.
+\end{enumerate}
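It is instructive to check all three characterizations on a simple example:
\begin{example}
For $R = \mathbb{C}[x,y]_{(x,y)}$ all three descriptions give two. The ideal
$(x,y)$ is $\mathfrak{m}$-primary and is generated by two elements, while no
principal ideal is $\mathfrak{m}$-primary, since $R/(f)$ has dimension one for
any nonzero nonunit $f$; this gives $n = 2$ in the first characterization. The
chain $(0) \subsetneq (x) \subsetneq (x,y)$ is a chain of primes of length two.
Finally, $\ell(R/\mathfrak{m}^t) = \binom{t+1}{2}$, the number of monomials in
$x, y$ of degree less than $t$, which is a polynomial of degree two in $t$.
\end{example}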
+
+
+\subsection{Krull's Hauptidealsatz}
+
+
+Let $R$ be a local noetherian ring.
+The following is now clear from what we have shown:
+
+\begin{theorem} \label{hauptv1}
If there is a nonzerodivisor $x \in \mathfrak{m}$ such that
$R/(x)$ is artinian, then $R$ has dimension $1$. Conversely, if $R$ is a
\emph{domain} of dimension $1$, then any $x \in \mathfrak{m} - \left\{0\right\}$
is such an element.
+\end{theorem}
+
+
+
+\begin{remark}
+Let $R$ be a domain. We said that a nonzero prime $\mathfrak{p} \subset R$ is
+\textbf{height one} if $\mathfrak{p}$ is minimal among the prime ideals
+containing some nonzero $x \in R$.
+
+According to Krull's Hauptidealsatz, $\mathfrak{p}$ has height one \textbf{if
+and only if $\dim R_{\mathfrak{p}} = 1$.}
+\end{remark}
+
+
+We can generalize the notion of $\mathfrak{p}$ as follows.
+\begin{definition}
+Let $R$ be a noetherian ring (not necessarily local), and $\mathfrak{p} \in
+\spec R$. Then we define the \textbf{height} of $\mathfrak{p}$, denoted
+$\het(\mathfrak{p})$, as $\dim R_{\mathfrak{p}}$.
We know that this is the maximal length of a chain of primes in
$R_{\mathfrak{p}}$. This is thus the maximal length of a chain of prime ideals of $R$,
+\[ \mathfrak{p}_0 \subset \dots \subset \mathfrak{p}_n = \mathfrak{p} \]
+that ends in $\mathfrak{p}$. This is the origin of the term ``height.''
+\end{definition}
+
+\begin{remark}
+Sometimes, the height is called the \textbf{codimension}. This corresponds to
+the codimension in $\spec R$ of the corresponding irreducible closed subset of
+$\spec R$.
+\end{remark}
+
+\begin{theorem}[Krull's Hauptidealsatz] Let $R$ be a noetherian ring, and $x
+\in R$ a nonzerodivisor. If $\mathfrak{p} \in \spec R$ is minimal over $x$,
+then $\mathfrak{p}$ has height one.
+\end{theorem}
+\begin{proof}
+Immediate from \cref{hauptv1}.
+\end{proof}
+
+\begin{theorem}[Artin-Tate]
+Let $A$ be a noetherian domain. Then the following are equivalent:
+\begin{enumerate}
+\item There is $f \in A-\left\{0\right\} $ such that $A_f$ is a field.
+\item $A$ has finitely many maximal ideals and has dimension at most 1.
+\end{enumerate}
+\end{theorem}
+\begin{proof} We follow \cite{EGA}.
+
+Suppose first that there is $f$ with $A_f$ a field.
+Then all nonzero prime ideals of $A$ contain $f$.
+We need to deduce that $A$ has dimension $\leq 1$. Without loss of generality,
+we may assume that $A$ is not a field.
+
There are finitely many primes $\mathfrak{p}_1,\dots, \mathfrak{p}_k$ which
are minimal over $f$; by the Hauptidealsatz, these all have height one. The
claim is that any maximal ideal of $A$ is of this
form. Suppose $\mathfrak{m}$ were maximal and not one of the $\mathfrak{p}_i$.
Then by prime avoidance, there is $g \in \mathfrak{m}$ which
lies in no $\mathfrak{p}_i$. A prime $\mathfrak{P}$ minimal over $g$ has height
one, so by our assumptions contains $f$, and hence contains some
$\mathfrak{p}_i$; as both have height one, $\mathfrak{P} = \mathfrak{p}_i$.
This is a contradiction, as $g \in \mathfrak{P}$.

Conversely, suppose $A$ has finitely many maximal ideals $\mathfrak{m}_1,
\dots, \mathfrak{m}_k$ and dimension at most one. If $A$ is a field, take $f =
1$. Otherwise, choose a nonzero $f$ lying in every $\mathfrak{m}_i$ (e.g. any
nonzero element of the product $\mathfrak{m}_1 \cdots \mathfrak{m}_k$, which is
nonzero as $A$ is a domain). Any nonzero prime of $A$ is maximal, as $A$ is a
domain of dimension at most one, and hence contains $f$. Thus the only prime of
$A_f$ is $(0)$, and $A_f$ is a field.
\end{proof}
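A basic example of the situation in the theorem:
\begin{example}
Let $A = \mathbb{Z}_{(p)}$ for a prime number $p$. Inverting $f = p$ gives $A_f
= \mathbb{Q}$, a field; correspondingly, $A$ has exactly one maximal ideal
$(p)$ and dimension one. By contrast, $\mathbb{Z}_f$ is never a field for $f
\neq 0$: there are infinitely many primes, so some prime does not divide $f$,
in accordance with the fact that $\mathbb{Z}$ has infinitely many maximal
ideals.
\end{example}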
+
+
+\subsection{Further remarks}
+
+We can recast earlier notions in terms of dimension.
+\begin{remark}
A noetherian ring $R$ has dimension zero if and only if $R$ is artinian. Indeed,
+$R$ has dimension zero iff all primes are maximal.
+\end{remark}
+
+
+\begin{remark}
+A noetherian domain has dimension zero iff it is a field. Indeed, in this case
+$(0)$ is maximal.
+\end{remark}
+
+\begin{remark}
+$R$ has dimension $\leq 1$ if and only if every non-minimal prime of $R$ is
+maximal. That is, there are no chains of length $\geq 2$.
+\end{remark}
+
+\begin{remark}
+A (noetherian) domain $R$ has dimension $\leq 1$ iff every nonzero prime ideal
+is maximal.
+\end{remark}
+
+In particular,
+\begin{proposition}
+$R$ is Dedekind iff it is a noetherian, integrally closed domain of dimension
+$1$.
+\end{proposition}
+
+
+\section{Further topics}
+
+\subsection{Change of rings}
+Let $f: R \to R'$ be a map of noetherian rings.
+
+\begin{question}
+What is the relationship between $\dim R$ and $\dim R'$?
+\end{question}
+
+A map $f$ gives a map $\spec R' \to \spec R$, where $\spec R'$ is the union
+of various fibers over the points of $\spec R$. You might imagine that the
+dimension is the dimension of $R$ plus the fiber dimension. This is sometimes
+true.
+
+Now assume that $R, R'$ are \emph{local} with maximal ideals $\mathfrak{m},
+\mathfrak{m}'$. Assume furthermore that $f$ is local, i.e. $f(\mathfrak{m})
+\subset \mathfrak{m}'$.
+
+\begin{theorem}
+$\dim R' \leq \dim R + \dim R'/\mathfrak{m}R'$. Equality holds if $f: R \to
+R'$ is flat.
+\end{theorem}
+
+Here $R'/\mathfrak{m}R'$ is to be interpreted as the ``fiber'' of $\spec R'$
+above $\mathfrak{m} \in \spec R$. The fibers can behave weirdly as the
+basepoint varies in $\spec R$, so we can't
+expect equality in general.
+
+\begin{remark}
+Let us review flatness as it has been a while. An $R$-module $M$ is \emph{flat} iff
+the operation of tensoring with $M$ is an exact functor. The map $f: R \to R'$
+is \emph{flat} iff $R'$ is a flat $R$-module. Since the construction of taking
+fibers is a tensor product (i.e. $R'/\mathfrak{m}R' = R' \otimes_R
+R/\mathfrak{m}$), perhaps the condition of flatness here is not as surprising as
+it might be.
+\end{remark}
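Before the proof, here is the basic example to keep in mind:
\begin{example}
Let $R = \mathbb{C}[x]_{(x)}$ and $R' = \mathbb{C}[x,y]_{(x,y)}$, with $f$ the
natural inclusion; this is flat, as $\mathbb{C}[x,y]$ is free as a
$\mathbb{C}[x]$-module. The fiber $R'/xR' \simeq \mathbb{C}[y]_{(y)}$ has
dimension one, and indeed $\dim R' = 2 = \dim R + 1$. For strict inequality,
take instead the (non-flat) surjection $\mathbb{C}[x,y]_{(x,y)} \to
\mathbb{C}[x]_{(x)}$, $y \mapsto 0$: the target has dimension one, while the
source has dimension two and the fiber over the closed point has dimension
zero.
\end{example}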
+
+\begin{proof}
+Let us first prove the inequality. Say $$\dim R = a, \ \dim R'/\mathfrak{m}R'
+= b.$$ We'd like to see that
+\[ \dim R' \leq a+b. \]
To do this, we need to find $a+b$ elements in the maximal ideal $\mathfrak{m}'$
that generate an $\mathfrak{m}'$-primary ideal of $R'$.
+
+There are elements $x_1, \dots, x_a \in \mathfrak{m}$ that generate an
+$\mathfrak{m}$-primary ideal $I = (x_1, \dots, x_a)$ in $R$. There is a surjection $R'/I R'
+\twoheadrightarrow R'/\mathfrak{m}R'$.
+The kernel $\mathfrak{m}R'/IR'$ is nilpotent since $I$ contains a power of
+$\mathfrak{m}$. We've seen that nilpotents \emph{don't} affect the dimension.
+In particular,
+\[ \dim R'/IR' = \dim R'/\mathfrak{m}R' = b. \]
+There are thus elements $y_1, \dots, y_b \in \mathfrak{m}'/IR'$ such that the
+ideal $J = (y_1, \dots, y_b) \subset R'/I R'$ is $\mathfrak{m}'/IR'$-primary.
+The inverse image of $J$ in $R'$, call it $\overline{J} \subset R'$, is
+$\mathfrak{m}'$-primary. However, $\overline{J}$ is generated by the $a+b$
+elements
\[ f(x_1), \dots, f(x_a), \overline{y_1}, \dots, \overline{y_b} \]
where the $\overline{y_i}$ are lifts of the $y_i$. This proves the inequality.
+
+But we don't always have equality. Nonetheless, if all the fibers are similar,
+then we should expect that the dimension of the ``total space'' $\spec R'$ is
+the dimension of the ``base'' $\spec R$ plus the ``fiber'' dimension $\spec
+R'/\mathfrak{m}R'$.
+\emph{The precise condition of $f$ flat articulates the condition that the fibers
+ ``behave well.'' }
+Why this is so is something of a mystery, for now.
+But for some evidence, take the present result about fiber dimension.
+
+Anyway, let us now prove equality for flat $R$-algebras. As before, write $a =
+\dim R, b = \dim R'/\mathfrak{m}R'$. We'd like to show that
+\[ \dim R' \geq a+b. \]
+By what has been shown, this will be enough.
This is going to be tricky since we now need to give \emph{lower bounds} on the
dimension; finding a sequence $x_{1}, \dots, x_{a+b}$ such that the quotient
$R'/(x_1, \dots, x_{a+b})$ is artinian would only bound the dimension \emph{above}.
+
+So our strategy will be to find a chain of primes of length $a+b$. Well, first
+we know that there are primes
+\[ \mathfrak{q}_0 \subset \mathfrak{q}_1 \subset \dots \subset \mathfrak{q}_b
+\subset R'/\mathfrak{m}R'. \]
+Let $\overline{\mathfrak{q}_i}$ be the inverse images in $R'$. Then the
+$\overline{\mathfrak{q}_i}$ are a strictly ascending chain of primes in $R'$ where
+$\overline{\mathfrak{q}_0}$ contains $\mathfrak{m}R'$. So we have a chain of
+length $b$; we need to extend this by additional terms.
+
+Now $f^{-1}(\overline{\mathfrak{q}_0})$ contains $\mathfrak{m}$, hence is
+$\mathfrak{m}$. Since $\dim R = a$, there is a chain
+$\left\{\mathfrak{p}_i\right\}$ of prime ideals of length
+$a$ going down from $f^{-1}(\overline{\mathfrak{q}_0}) = \mathfrak{m}$. We are
+now going to find primes $\mathfrak{p}_i' \subset R'$ forming a chain such that
+$f^{-1}(\mathfrak{p}_i') = \mathfrak{p}_i$. In other words, we are going to
+\emph{lift} the chain $\mathfrak{p}_i$ to $\spec R'$. We can do this at the
+first stage for $i=a$, where $\mathfrak{p}_a = \mathfrak{m}$ and we can set
+$\mathfrak{p}'_a = \overline{\mathfrak{q}_0}$. If we can indeed do this
+lifting, and catenate the chains $\overline{\mathfrak{q}_j}, \mathfrak{p}'_i$,
+then we will have a chain of the appropriate length.
+
+We will proceed by descending induction. Assume that we have
+$\mathfrak{p}_{i+1}' \subset R'$ and $f^{-1}(\mathfrak{p}_{i+1}') =
+\mathfrak{p}_{i+1} \subset R$. We want to find $\mathfrak{p}_i' \subset
+\mathfrak{p}'_{i+1}$ such that $f^{-1}(\mathfrak{p}_i') = \mathfrak{p}_i$. The
+existence of that prime is a consequence of the following general fact.
+
+\begin{theorem}[Going down] Let $f: R \to R'$ be a flat map of
+noetherian commutative
+rings. Suppose $\mathfrak{q} \in \spec R'$, and let $\mathfrak{p}
+=f^{-1}(\mathfrak{q})$. Suppose $\mathfrak{p}_0 \subset \mathfrak{p}$ is a
+prime of $R$. Then there is a prime $\mathfrak{q}_0 \subset \mathfrak{q}$ with
+\[ f^{-1}(\mathfrak{q}_0) = \mathfrak{p}_0. \]
+\end{theorem}
+\begin{proof}
+We may replace $R'$ with $R'_{\mathfrak{q}}$. There is still a map
+\[ R \to R_{\mathfrak{q}}' \]
+which is flat, as localization is flat. The maximal ideal in $R'_{\mathfrak{q}}$
+has inverse image $\mathfrak{p}$. So the problem now reduces to finding
+\emph{some} prime in the localization that pulls back to $\mathfrak{p}_0$.
+
+Anyhow, throwing out the old $R'$ and replacing it with the localization, we may
+assume that $R'$ is local and $\mathfrak{q}$ the maximal ideal. (The condition
+$\mathfrak{q}_0 \subset \mathfrak{q}$ is now automatic.)
+
+The claim now is that we can replace $R$ with $R/\mathfrak{p}_0$ and $R'$ with
+$R'/\mathfrak{p}_0 R' = R' \otimes_R R/\mathfrak{p}_0$. We can do this because
+base change preserves flatness (see the lemma below), and in this way we reduce to the case
+$\mathfrak{p}_0 = (0)$---in particular, $R$ is a domain.
+Taking these quotients just replaces $\spec R, \spec R'$ with closed subsets
+where all the action happens anyhow.
+
+Under these replacements, we now have:
+\begin{enumerate}
+\item $R'$ is local with maximal ideal $\mathfrak{q}$
+\item $R$ is a domain and $\mathfrak{p}_0 = (0)$.
+\end{enumerate}
+We want a prime of $R'$ that pulls back to $(0)$ in $R$. I claim that any
+minimal prime of $R'$ will work.
+Suppose otherwise. Let $\mathfrak{q}_0 \subset R'$ be a minimal prime, and
+suppose $x \in f^{-1}(\mathfrak{q}_0) - \left\{0\right\}$. But
+$\mathfrak{q}_0 \in \ass(R')$, as minimal primes are associated primes. So $f(x)$ is
+a zerodivisor on $R'$. Thus multiplication by $x$ on $R'$ is not injective.
+
+But $R$ is a domain, so $R \stackrel{x}{\to} R$ is injective. Since $R'$ is
+flat, tensoring with $R'$ preserves this injectivity, so $R' \stackrel{x}{\to}
+R'$ is injective. This is a contradiction.
+\end{proof}
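+
+To make the going down theorem concrete, here is a minimal instance (the
+choice of rings here is ours, purely for illustration) in which the lifted
+prime can be written down directly.
+\begin{example}
+Take the free, hence flat, extension $f: R = k[t] \to R' = k[t][x]$, and let
+\[ \mathfrak{q} = (t,x), \qquad \mathfrak{p} = f^{-1}(\mathfrak{q}) = (t),
+\qquad \mathfrak{p}_0 = (0) \subset \mathfrak{p}. \]
+Then $\mathfrak{q}_0 = (x) \subset \mathfrak{q}$ satisfies
+$f^{-1}(\mathfrak{q}_0) = (0) = \mathfrak{p}_0$, as the theorem predicts.
+\end{example}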
+
+We used:
+\begin{lemma}
+Let $R \to R'$ be a flat map, and $S$ an $R$-algebra. Then $S \to S \otimes_R
+R'$ is a flat map.
+\end{lemma}
+\begin{proof}
+The functor taking an $S$-module $M$ to $M \otimes_S (S \otimes_R R') \cong M
+\otimes_R R'$ is exact, because it is the same as restricting $M$ to $R$ and
+tensoring with the flat $R$-module $R'$.
+\end{proof}
+The proof of the fiber dimension theorem is now complete.
+
+\end{proof}
+
+
+
+\subsection{The dimension of a polynomial ring}
+
+Adding an indeterminate variable corresponds geometrically to taking the
+product with the affine line, and so should increase the dimension by one. We
+show that this is indeed the case.
+\label{dimpoly}
+\begin{theorem}
+Let $R$ be a noetherian ring. Then $\dim R[X] = \dim R+1$.
+\end{theorem}
+
+Interestingly, this can \emph{fail} if $R$ is not assumed noetherian.
+Let $R$ be a ring of dimension $n$.
+
+\begin{lemma}
+$\dim R[x] \geq \dim R+1$.
+\end{lemma}
+\begin{proof}
+Let $\mathfrak{p}_0 \subset \dots \subset \mathfrak{p}_n$ be a chain of primes of
+length $n = \dim R$. Then $\mathfrak{p}_0 R[x] \subset \dots \subset
+\mathfrak{p}_n R[x] \subset (x, \mathfrak{p}_n)R[x]$ is a chain of primes in
+$R[x]$ of length $n+1$ because of the following fact: if $\mathfrak{q} \subset
+R$ is prime, then so is $\mathfrak{q}R[x] \subset R[x]$.\footnote{This is
+because $R[x]/\mathfrak{q}R[x] = (R/\mathfrak{q})[x]$ is a domain.} Note also
+that as $\mathfrak{p}_n \subsetneq R$, we have that $\mathfrak{p}_n R[x]
+\subsetneq (x, \mathfrak{p}_n)$. So this is indeed a legitimate chain.
+\end{proof}
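+
+As a sketch of this construction in the simplest nontrivial case (taking $R =
+k[y]$ for concreteness):
+\begin{example}
+For $R = k[y]$, so that $\dim R = 1$, the chain $(0) \subsetneq (y)$ lifts to
+the chain of primes
+\[ (0) \subsetneq (y)k[y][x] \subsetneq (y, x) \subset k[y][x], \]
+exhibiting $\dim k[y][x] \geq 2$.
+\end{example}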
+
+Now we need only show:
+\begin{lemma}
+Let $R$ be noetherian of dimension $n$. Then $\dim R[x] \leq \dim R+1$.
+\end{lemma}
+\begin{proof}
+Let $\mathfrak{q}_0 \subset \dots \subset \mathfrak{q}_m \subset R[x]$ be a chain of primes
+in $R[x]$. Let $\mathfrak{m} = \mathfrak{q}_m \cap R$. Then if we localize and
+replace $R$ with $R_{\mathfrak{m}}$, we get a chain of primes of length $m$ in
+$R_{\mathfrak{m}}[x]$.
+In fact, we get more. We get a chain of primes of length $m$ in
+$(R[x])_{\mathfrak{q}_m}$, and a \emph{local } inclusion of noetherian local rings
+\[ R_{\mathfrak{m}} \hookrightarrow (R[x])_{\mathfrak{q}_m} . \]
+To this we can apply the fiber dimension theorem. In particular, this implies
+that
+\[ m \leq \dim (R[x])_{\mathfrak{q}_m} \leq \dim R_{\mathfrak{m}} + \dim
+(R[x])_{\mathfrak{q}_m} /\mathfrak{m} (R[x])_{\mathfrak{q}_m}. \]
+Here $\dim R_{\mathfrak{m}} \leq \dim R = n$. So if we show that $\dim
+(R[x])_{\mathfrak{q}_m} /\mathfrak{m} (R[x])_{\mathfrak{q}_m} \leq 1$, we will
+have seen that $m \leq n+1$, and will be done. But this last ring is a
+localization of $(R_{\mathfrak{m}}/\mathfrak{m}R_{\mathfrak{m}})[x]$, which is
+a PID by the euclidean algorithm for polynomial rings over a field, and thus of
+dimension $\leq 1$.
+\end{proof}
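+
+By induction on the number of variables, the theorem recovers a familiar
+computation (starting from $\dim k = 0$ for a field $k$):
+\[ \dim k[x_1, \dots, x_n] = n. \]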
+
+\subsection{A refined fiber dimension theorem}
+
+Let $R$ be a local noetherian domain, and let $R \to S$ be an injection of
+rings making $S$ into an $R$-algebra. Suppose $S$ is also a local domain, such
+that the morphism $R \to S$ is local. This is essentially the setup of the
+fiber dimension theorem above, but in this section, we make the refining assumption that $S$
+is \emph{essentially of finite type} over $R$; in other words, $S$ is the
+localization of a finitely generated $R$-algebra.
+
+Let $k$ be the residue field of $R$, and $k'$ that of $S$; because $R \to S$ is
+local, there is induced a morphism of fields $k \to k'$.
+We shall prove, following \cite{EGA}:
+\newcommand{\trdeg}{\mathrm{tr.deg.}}
+\begin{theorem}[Dimension formula]
+\begin{equation}\label{strongfiberdim} \dim S + \trdeg k'/k \leq \dim R + \trdeg
+S/R. \end{equation}
+\end{theorem}
+Here $\trdeg B/A$ is more properly the transcendence degree of the quotient
+field of $B$ over that of $A$.
+Geometrically, it corresponds to the dimension of the ``generic'' fiber.
+
+\begin{proof} Let $\mathfrak{m} \subset R$ be the maximal ideal.
+We know that $S$ is the localization, at some prime $\mathfrak{q}$, of an algebra of the form
+$R[x_1, \dots, x_k]/\mathfrak{p}$, where $\mathfrak{p} \subset R[x_1, \dots,
+x_k]$ is a prime ideal.
+We induct on $k$.
+
+Since we can ``d\'evissage'' the extension $R \to S$ as the
+composite
+\[ R \to \left(R[x_1, \dots, x_{k-1}]/(\mathfrak{p} \cap R[x_1, \dots,
+x_{k-1}])\right)_{\mathfrak{q}'} \to S, \]
+(where $\mathfrak{q}' \in \spec R[x_1, \dots, x_{k-1}]/(\mathfrak{p} \cap R[x_1, \dots,
+x_{k-1}])$ is the pull-back of $\mathfrak{q}$),
+we see that it suffices to prove \eqref{strongfiberdim} when $k=1$, that is, when $S$
+is the localization of a quotient of $R[x]$.
+
+So suppose $k=1$. Then we have $S = (R[x]/\mathfrak{p})_{\mathfrak{q}}$, where
+$\mathfrak{q} \subset R[x]/\mathfrak{p}$ is a prime ideal lying over $\mathfrak{m}$.
+Let us start by considering the case where $\mathfrak{p} = 0$.
+
+\begin{lemma} Let $(R, \mathfrak{m})$ be a local noetherian domain as above.
+Let $S = R[x]_{\mathfrak{q}}$ where $\mathfrak{q} \in \spec R[x]$ is a prime
+lying over $\mathfrak{m}$. Then \eqref{strongfiberdim} holds with equality.
+\end{lemma}
+\begin{proof}
+In this case, $\trdeg S/R = 1$. Now $\mathfrak{q}$
+could be $\mathfrak{m} R[x]$ or a prime ideal containing that, which is then
+automatically
+maximal, as we know from the proof of \cref{dimpoly}. Indeed, primes
+containing $\mathfrak{m}R[x]$ are
+in bijection with primes of $(R/\mathfrak{m})[x] = k[x]$, and these come in two forms:
+zero, and those generated by one irreducible polynomial. (Note that in the former case, the
+residue field is the field of rational functions $k(x)$, and in the second, the residue field is finite over
+$k$.)
+
+\begin{enumerate}
+\item
+In the first case, $\dim S = \dim R[x]_{\mathfrak{m}R[x]} = \dim R$, but the
+residue field extension is $(R[x]_{\mathfrak{m}R[x]})/\mathfrak{m}
+R[x]_{\mathfrak{m}R[x]} = k(x)$, so $\trdeg k'/k = 1$ and the formula is
+satisfied with equality.
+\item In the second case, $\mathfrak{q}$ properly contains $\mathfrak{m}
+R[x]$.
+Then $\dim R[x]_{\mathfrak{q}} = \dim R + 1$, but the residue field extension
+is finite, so $\trdeg k'/k = 0$. In this case too, the formula is satisfied
+with equality.
+\end{enumerate}
+\end{proof}
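+
+As a sanity check, one can run both cases over a discrete valuation ring, say
+$R = k[t]_{(t)}$ (our choice, for illustration), so that $\dim R = 1$ and
+$\trdeg S/R = 1$: for $\mathfrak{q} = \mathfrak{m}R[x]$ we get $\dim S = 1$
+and $k' = k(x)$, while for $\mathfrak{q} = (t, x)$ we get $\dim S = 2$ and
+$k' = k$. In both cases,
+\[ \dim S + \trdeg k'/k = 2 = \dim R + \trdeg S/R. \]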
+
+
+Now, finally, we have to consider the case where $\mathfrak{p} \subset R[x]$ is
+not zero, and we have $S = (R[x]/\mathfrak{p})_{\mathfrak{q}}$ for
+$\mathfrak{q} \in \spec R[x]/\mathfrak{p}$ lying over $\mathfrak{m}$.
+In this case, $\trdeg S/R = 0$. So we need to prove
+\[ \dim S + \trdeg k'/k \leq \dim R. \]
+Let us, by abuse of notation, identify $\mathfrak{q}$ with its preimage in
+$R[x]$.
+(Recall that $\spec R[x]/\mathfrak{p}$ is canonically identified with a closed
+subset of $\spec R[x]$.)
+Then we know that
+\( \dim ( R[x]/\mathfrak{p})_{\mathfrak{q}} \)
+is the length of the longest chain of primes in $R[x]$ between $\mathfrak{p}$
+and $\mathfrak{q}$.
+In particular, it is at most $\dim R[x]_{\mathfrak{q}} - \mathrm{height}\,
+\mathfrak{p}$. By the previous lemma, $\dim R[x]_{\mathfrak{q}} = \dim R + 1 -
+\trdeg k'/k$, and $\mathrm{height}\, \mathfrak{p} \geq 1$ since $\mathfrak{p}
+\neq 0$. So $\dim S \leq \dim R - \trdeg k'/k$, and the result is clear.
+\end{proof}
+
+In \cite{EGA}, this is used to prove the geometric result that if $\phi:X \to
+Y$ is a morphism of varieties over an algebraically closed field (or a morphism
+of finite type between nice schemes), then the local dimension (that is, the
+dimension at $x$) of
+the fiber $\phi^{-1}(\phi(x))$ is an upper semi-continuous function of $x \in X$.
+\subsection{An infinite-dimensional noetherian ring}
+
+We shall now present an example, due to Nagata, of an infinite-dimensional
+noetherian ring. Note that such a ring cannot be \emph{local}.
+
+Consider the ring $R=\mathbb{C}[\{x_{i,j}\}_{1 \leq i \leq j}]$ of polynomials in
+infinitely many variables $x_{i,j}$.
+This is clearly an infinite-dimensional ring, but it is also not noetherian.
+We will localize it suitably to make it noetherian.
+
+Let $\mathfrak{p}_n \subset R$ be the
+ideal $(x_{1,n}, x_{2,n}, \dots, x_{n,n})$ generated by the variables whose second index is $n$.
+Let $S = R - \bigcup \mathfrak{p}_n$; this is a multiplicatively closed set.
+
+\begin{theorem}[Nagata] The ring $S^{-1}R$ is noetherian and has infinite
+dimension.
+\end{theorem}
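+
+Concretely, the first few of these primes are
+\[ \mathfrak{p}_1 = (x_{1,1}), \qquad \mathfrak{p}_2 = (x_{1,2}, x_{2,2}),
+\qquad \mathfrak{p}_3 = (x_{1,3}, x_{2,3}, x_{3,3}), \qquad \dots \]
+and the point of the construction is that the heights of these ideals are
+unbounded, while the localization at each of them is noetherian.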
+
+We start with
+\begin{proposition}
+The ring $S^{-1}R$ in the statement of the theorem is noetherian.
+\end{proposition}
+
+The proof is slightly messy, so we first prove a few lemmas.
+
+Let $R' = S^{-1}R$ as above. We start by proving that every proper ideal in $R'$ is contained
+in one of the $\mathfrak{p}_n$ (which, by abuse of notation, we identify with
+their localizations in $R' = S^{-1}R$).
+In particular, the $\mathfrak{p}_n$ are the maximal ideals in $R'$.
+
+\begin{lemma}
+The $\mathfrak{p}_n$ are the maximal ideals in $R'$.
+\end{lemma}
+\begin{proof}
+We start with an observation:
+\begin{quote}
+If $f \neq 0 $, then $f$ belongs to only finitely many $\mathfrak{p}_n$.
+\end{quote}
+To see this, let us use the following notation. If $M$ is a monomial, we let
+$S(M)$ denote the set of pairs of subscripts $(a,b)$ such that $x_{a,b}$
+occurs in $M$, and $S_2(M)$ the set of second subscripts (i.e.\ the $b$'s).
+For $f \in R$, we define $S(f)$ to be the intersection of the $S(M)$ for $M$ a
+monomial occurring nontrivially in $f$. Similarly we define $S_2(f)$.
+
+Let us prove:
+\begin{lemma}
+$f \in \mathfrak{p}_n$ iff $n \in S_2(f)$. Moreover, $S(f)$ and $S_2(f)$ are
+finite for any $f \neq 0$.
+\end{lemma}
+\begin{proof}
+Indeed, $f \in \mathfrak{p}_n$ iff every monomial in $f$ is divisible by some
+$x_{i,n}, i \leq n$, as $\mathfrak{p}_n = (x_{i,n})_{i \leq n}$. From this the first assertion is clear. The second too,
+because $f$ will contain a nonzero monomial, and that can be divisible by only
+finitely many $x_{a,b}$.
+\end{proof}
+From this, it is clear how to define $ S_2(f)$ for any element in $R'$,
+not necessarily a polynomial in $R$. Namely, it is the set of $n$ such that $f
+\in \mathfrak{p}_n$.
+It is now clear, from the second statement of the lemma, that any $f \neq 0$ lies in \emph{only finitely many
+$\mathfrak{p}_n$}. In particular, the observation has been proved.
+
+Now let $I \subset R'$ be an ideal, and let $\mathcal{T} = \left\{ S_2(f) : f \in I -
+\left\{0\right\}\right\}$. \emph{I claim that
+$\emptyset \in \mathcal{T}$ iff $I = (1)$.} For $\emptyset \in \mathcal{T}$ iff
+there is an element of $I$ lying in no $\mathfrak{p}_n$. Since the union $\bigcup
+\mathfrak{p}_n$ is precisely the set of non-units (by construction), the
+assertion is clear.
+
+
+\begin{lemma}
+$\mathcal{T}$ is closed under finite intersections.
+\end{lemma}
+\begin{proof}
+Suppose $T_1, T_2 \in \mathcal{T}$. Without loss of generality (clearing
+denominators does not change $S_2$), there are
+\emph{polynomials} $F_1, F_2 \in R \cap I$ such that $S_2(F_1) = T_1, S_2(F_2) = T_2$.
+A generic linear combination $a F_1 + bF_2$ will involve no cancellation for
+$a, b \in \mathbb{C}$, and
+the monomials in this linear combination will be the union of those in $F_1$
+and those in $F_2$ (scaled appropriately). So $S_2(aF_1 + bF_2) = S_2(F_1) \cap S_2(F_2)$.
+\end{proof}
+
+Finally, we can prove the result that the $\mathfrak{p}_n$ are the only maximal
+ideals. Suppose $I$ was contained in no $\mathfrak{p}_n$, and form the set
+$\mathcal{T}$ as above. This is a collection of finite sets. Since $I
+\not\subset \mathfrak{p}_n$ for each $n$, we find that $n \notin \bigcap_{T \in
+\mathcal{T}} T$. This intersection is thus empty. It follows that there is a
+\emph{finite} intersection of sets in $\mathcal{T}$
+which is empty as $\mathcal{T}$ consists of finite sets. But $\mathcal{T}$ is closed under intersections. There is thus
+an element in $I$ whose $S_2$ is empty, and which is thus a unit. Thus $I = (1)$.
+\end{proof}
+
+We have proved that the $\mathfrak{p}_n$ are the only maximal ideals. This is
+not enough, though. We need:
+\begin{lemma}
+$R'_{\mathfrak{p}_n}$ is noetherian for each $n$.
+\end{lemma}
+\begin{proof}
+Indeed, any nonzero polynomial involving only the variables $x_{a,b}$ with $b
+\neq n$ lies outside $\mathfrak{p}_n$, and is hence invertible in this ring. We
+see that this ring contains the field
+\[ \mathbb{C}(\{x_{a,b} : b \neq n\}), \]
+and is contained in the field $\mathbb{C}(\left\{x_{a,b}\right\})$. In fact, it
+is a localization of the algebra $\mathbb{C}(\{x_{a,b} :
+b \neq n\})[x_{1,n} , \dots, x_{n,n}]$ and is consequently noetherian by
+Hilbert's basis theorem.
+\end{proof}
+
+The proof will be completed with:
+\begin{lemma}
+Let $R$ be a ring. Suppose every element $x \neq 0$ in the ring belongs to only
+finitely many maximal ideals, and suppose that $R_{\mathfrak{m}}$ is noetherian
+for each $\mathfrak{m} \subset R$ maximal. Then $R$ is noetherian.
+\end{lemma}
+\begin{proof}
+Let $I \subset R$ be a nonzero ideal. We must show that it is finitely generated. We
+know that $I$ is contained in only finitely many maximal ideals $\mathfrak{m}_1
+, \dots , \mathfrak{m}_k$.
+At each of these maximal ideals, we know that $I_{\mathfrak{m}_i}$ is finitely
+generated. Clearing denominators, we can choose a finite set of generators
+lying in $I$. Collecting these together over all $i$, we get a finite set
+$\left\{a_1, \dots, a_N\right\} \subset I$
+which generates each $I_{\mathfrak{m}_i} \subset R_{\mathfrak{m}_i}$. It
+is not necessarily true that $J = (a_1, \dots, a_N) = I$, though we do have
+$\subset$. However, $I_{\mathfrak{m}} = J_{\mathfrak{m}}$ except at finitely
+many maximal ideals $\mathfrak{n}_1, \dots, \mathfrak{n}_M$, because a nonzero
+element of $J$ is a unit in $R_{\mathfrak{m}}$ for all but finitely many
+maximal $\mathfrak{m}$. Moreover, these $\mathfrak{n}_j$ are not among the
+$\mathfrak{m}_i$. In particular, for each $j$, there is $b_j \in I -
+\mathfrak{n}_j$ as $I \not\subset \mathfrak{n}_j$. Then we find that the ideal
+\[ (a_1, \dots, a_N, b_1, \dots, b_M) \subset I \]
+becomes equal to $I$ in all the localizations. So it is $I$, and $I$ is
+finitely generated.
+\end{proof}
+
+We need only see that the ring $R'$ has infinite dimension. But for each $n$, there
+is a chain of primes $(x_{1,n}) \subset (x_{1,n}, x_{2,n}) \subset
+\dots \subset (x_{1,n}, \dots, x_{n,n})$ of length $n-1$. The supremum of the
+lengths is thus infinite.
+
+\subsection{Catenary rings}
+ \begin{definition}
+ A ring $R$ is \emph{catenary} if given any two primes $\mathfrak{p}\subsetneq \mathfrak{p}'$, any two
+ maximal prime chains from $\mathfrak{p}$ to $\mathfrak{p}'$ have the same length.
+ \end{definition}
+ Nagata showed that there are noetherian domains which are not catenary. We
+ shall see that \emph{affine rings}, or rings finitely generated over a field,
+ are always catenary.
+
+ \begin{definition}
+ If $\mathfrak{p}\in \spec R$, then $\dim \mathfrak{p}:= \dim R/\mathfrak{p}$.
+ \end{definition}
+
+
+\begin{lemma}
+ Let $S$ be a $k$-affine domain with $tr.d._k S=n$, and let $\mathfrak{p}\in
+ \spec S$ be of height one. Then
+ $tr.d._k (S/\mathfrak{p})=n-1$.
+ \end{lemma}
+ \begin{proof}
+ \underline{Case 1}: assume $S=k[x_1,\dots, x_n]$ is a polynomial algebra. In this
+ case, height 1 primes are principal, so $\mathfrak{p}=(f)$ for some $f$. Say $f$ has positive
+ degree with respect to $x_1$, so $f = g_r(x_2,\dots, x_n)x_1^r + \cdots$. We have
+ that $k[x_2,\dots, x_n]\cap (f)=(0)$ (just look at degree with respect to $x_1$). It
+ follows that $k[x_2,\dots, x_n]\hookrightarrow S/(f)$, so $\bar x_2,\dots, \bar x_n$
+ are algebraically independent in $S/\mathfrak{p}$. Moreover, $\bar x_1$ is algebraic over $Q(k[\bar
+ x_2,\dots, \bar x_n])$, as witnessed by $f$. Thus, $tr.d._k S/\mathfrak{p}=n-1$.
+
+ \underline{Case 2}: reduction to case 1. Let $R=k[x_1,\dots, x_n]$ be a Noether
+ normalization for $S$, and let $\mathfrak{p}_0=\mathfrak{p}\cap R$. Observe that Going Down applies
+ (because $S$ is a domain and $R$ is normal). It follows that $ht_R(\mathfrak{p}_0)=ht_S(\mathfrak{p})=1$.
+ By case 1, we get that $tr.d. (R/\mathfrak{p}_0)=n-1$. Since $R/\mathfrak{p}_0
+ \hookrightarrow S/\mathfrak{p}$ is an integral extension of domains, $tr.d.
+ R/\mathfrak{p}_0=tr.d. (S/\mathfrak{p})$.
+ \end{proof}
+
+
+ \begin{theorem}
+ Any $k$-affine algebra $S$ is catenary (even if $S$ is not a domain). In fact, any
+ saturated prime chain from $\mathfrak{p}$ to $\mathfrak{p}'$ has length $\dim \mathfrak{p} - \dim \mathfrak{p}'$. If $S$ is a
+ domain, then all maximal ideals have the same height.
+ \end{theorem}
+ \begin{proof}
+ Consider any saturated chain $\mathfrak{p}\subsetneq \mathfrak{p}_1\subsetneq \cdots \subsetneq \mathfrak{p}_r = \mathfrak{p}'$. Then we
+ get the chain
+ \[
+ S/\mathfrak{p} \twoheadrightarrow S/\mathfrak{p}_1 \twoheadrightarrow \cdots \twoheadrightarrow S/\mathfrak{p}_r
+ = S/\mathfrak{p}'
+ \]
+ Here $\mathfrak{p}_i/\mathfrak{p}_{i-1}$ is height 1 in $S/\mathfrak{p}_{i-1}$, so each arrow decreases the
+ transcendence degree by exactly 1. Therefore, $tr.d._k S/\mathfrak{p}' = tr.d._k S/\mathfrak{p} -r$.
+ \[
+ r = tr.d._k S/\mathfrak{p} - tr.d._k S/\mathfrak{p}' = \dim S/\mathfrak{p} - \dim S/\mathfrak{p}' = \dim \mathfrak{p}-\dim \mathfrak{p}'.
+ \]
+ To get the last statement, take $\mathfrak{p}=0$ and $\mathfrak{p}'=\mathfrak{m}$. Then we get that $ht(\mathfrak{m})=\dim S$.
+ \end{proof}
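+
+ For instance, in $S = k[x,y]$ every saturated prime chain from $(0)$ to the
+ maximal ideal $(x,y)$ has length $\dim (0) - \dim (x,y) = 2 - 0 = 2$, e.g.
+ \[ (0) \subsetneq (x) \subsetneq (x,y). \]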
+ Note that the last statement fails in general.
+ \begin{example}
+ Take $S=k\times k[x_1,\dots, x_n]$. Then $ht(0\times k[x_1,\dots, x_n])=0$, but
+ $ht\bigl(k\times (x_1,\dots, x_n)\bigr) = n$.
+ \end{example}
+ But that example is not connected.
+ \begin{example}
+ $S = k[x,y,z]/(xy,xz)$.
+ \end{example}
+ But this example is not a domain. In general, for any prime $\mathfrak{p}$ in any ring $S$, we
+ have
+ \[
+ ht(\mathfrak{p}) + \dim \mathfrak{p} \le \dim S.
+ \]
+ \begin{theorem}
+ Let $S$ be an affine algebra, with minimal primes $ \{\mathfrak{p}_1,\dots, \mathfrak{p}_r\}$. Then the following
+ are equivalent.
+ \begin{enumerate}
+ \item The $\dim \mathfrak{p}_i$ are all equal. (In particular, this holds
+ when $S$ is a domain.)
+ \item $ht(\mathfrak{p})+\dim \mathfrak{p} =\dim S$ for all primes $\mathfrak{p}\in \spec S$.
+ \end{enumerate}
+ \end{theorem}
+ \begin{proof}
+ $(1\Rightarrow 2)$ $ht(\mathfrak{p})$ is the length of some saturated prime chain from $\mathfrak{p}$ to
+ some minimal prime $\mathfrak{p}_i$. This length is $\dim \mathfrak{p}_i - \dim \mathfrak{p} = \dim S - \dim \mathfrak{p}$ (by
+ condition 1). Thus, we get $(2)$.
+
+ $(2\Rightarrow 1)$ Apply (2) to the minimal prime $\mathfrak{p}_i$ to get $\dim \mathfrak{p}_i=\dim S$ for
+ all $i$.
+ \end{proof}
+ We finish with a (non-affine) noetherian domain $S$ with maximal ideals of different
+ heights. We need the following fact.\\
+ \underline{Fact}: If $R$ is a ring with $a\in R$, then there is a canonical $R$-algebra
+ isomorphism $R[x]/(ax-1) \cong R[a^{-1}]$, $x\leftrightarrow a^{-1}$.
+ \begin{example}
+ Let $\bigl(R,(\pi)\bigr)$ be a DVR with quotient field $K$. Let $S=R[x]$, and assume
+ for now that we know that $\dim S=2$. Look at $\mathfrak{m}_2=(\pi,x)$ and $\mathfrak{m}_1=(\pi x-1)$.
+ Note that $\mathfrak{m}_1$ is maximal because $S/\mathfrak{m}_1 = K$. It is easy to show that
+ $ht(\mathfrak{m}_1)=1$. However, $\mathfrak{m}_2\supsetneq (x)\supsetneq (0)$, so $ht(\mathfrak{m}_2)=2$.
+ \end{example}
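+
+ The height computation for $\mathfrak{m}_1$ can be justified as follows:
+ $ht(\mathfrak{m}_1)\le 1$ by Krull's principal ideal theorem, since
+ $\mathfrak{m}_1 = (\pi x - 1)$ is principal, and $ht(\mathfrak{m}_1)\ge 1$
+ since $S$ is a domain and $\mathfrak{m}_1\neq (0)$. Thus
+ \[ ht(\mathfrak{m}_1) = 1 < 2 = ht(\mathfrak{m}_2). \]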
+
+\subsection{Dimension theory for topological spaces}
+\label{subsectiondimension}
+The present subsection (which consists mostly of exercises) is a digression that may illuminate the notion of
+Krull dimension.
+
+\begin{definition}
+Let $X$ be a topological space.\footnote{We do not include the empty space.} Recall that $ X$ is
+\textbf{irreducible} if it cannot be written as the union of
+two proper closed subsets $F_1, F_2 \subsetneq X$.
+
+We say that a subset of $X$ is irreducible if it is irreducible with respect
+to the induced topology.
+\end{definition}
+
+In general, this notion is not very useful for the topological spaces familiar
+from analysis. For instance:
+
+\begin{exercise}
+Points are the only irreducible subsets of $\mathbb{R}$.
+\end{exercise}
+
+Nonetheless, irreducible sets behave very nicely with respect to certain
+operations. As you will now prove, if $U \subset X$ is an open subset, then
+the irreducible closed subsets of $U$ are in bijection with the irreducible
+closed subsets of $X$ that intersect $U$.
+\begin{exercise} \label{irredifeveryopenisdense}
+A space is irreducible if and only if every nonempty open set is dense, or
+alternatively if and only if every open subset is connected.
+\end{exercise}
+
+\begin{exercise}
+Let $X$ be a space, $Y \subset X$ an irreducible subset. Then
+$\overline{Y} \subset X$ is irreducible.
+\end{exercise}
+
+\begin{exercise}
+Let $X$ be a space, $U \subset X$ an open subset.
+Then the map $Z \to Z \cap U$ gives a bijection between the irreducible
+closed subsets of $X$ meeting $U$ and the irreducible closed subsets of $U$.
+The inverse is given by $Z' \to \overline{Z'}$.
+\end{exercise}
+
+As stated above, the notion of irreducibility is useless for spaces
+like manifolds. In fact, by \rref{irredifeveryopenisdense}, a
+Hausdorff space cannot be irreducible unless it consists of one point.
+However, for the highly non-Hausdorff spaces encountered in algebraic geometry, this notion is very
+useful.
+
+Let $R$ be a commutative ring, and $X = \spec R$.
+
+\begin{exercise}
+A closed subset $F \subset \spec R$ is irreducible if and only if it can be
+written in the form $F = V(\mathfrak{p})$ for $\mathfrak{p} \subset R$ prime.
+In particular, $\spec R$ is irreducible if and only if $R$ has one minimal
+prime.
+\end{exercise}
+
+In fact, spectra of rings are particularly nice: they are \textbf{sober
+spaces.}
+\begin{definition}
+A space $X$ is called \textbf{sober} if to every irreducible closed $F \subset
+X$, there is a unique point $\xi \in F$ such that $F = \overline{ \left\{\xi\right\}}$.
+This point is called the \textbf{generic point.}
+\end{definition}
+
+\begin{exercise}
+Check that if $X$ is any topological space and $\xi \in X$, then the closure
+$\overline{\left\{\xi\right\}}$ of the point $\xi$ is irreducible.
+\end{exercise}
+
+\begin{exercise}
+Show that $\spec R$ for $R$ a ring is sober.
+\end{exercise}
+
+\begin{exercise}
+Let $X$ be a space with a cover $\left\{X_\alpha\right\}$ by open subsets,
+each of which is a sober space. Then $X$ is a sober space. (Hint: any
+irreducible closed subset must intersect one of the $X_\alpha$, so is the
+closure of its intersection with that one.)
+\end{exercise}
+
+We now come to the main motivation of this subsection, and the reason for its
+inclusion here.
+
+\begin{definition}
+Let $X$ be a topological space. Then the \textbf{dimension} (or
+\textbf{combinatorial dimension}) of $X$ is the maximal $k$ such that a chain
+\[ F_0 \subsetneq F_1 \subsetneq \dots \subsetneq F_k \subset X \]
+with the $F_i$ irreducible closed subsets exists. This number is denoted $\dim X$ and may be
+infinite.
+\end{definition}
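+
+For example, if $R$ is a discrete valuation ring with maximal ideal
+$\mathfrak{m}$, then
+\[ \left\{\mathfrak{m}\right\} \subsetneq \spec R \]
+is a chain of irreducible closed subsets (the second because $R$ is a
+domain), so $\dim \spec R = 1$.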
+
+\begin{exercise}
+What is the combinatorial dimension of the topological space $\mathbb{R}$?
+\end{exercise}
+
+\begin{exercise}
+Let $X = \bigcup X_i$ be a finite union of closed subsets $X_i \subset X$.
+Show that $\dim X = \max_i \dim X_i$.
+\end{exercise}
+
+\begin{exercise}
+Let $R$ be a ring. Then $\dim \spec R$ is equal to the Krull dimension of $R$.
+\end{exercise}
+
+Most of the spaces one wishes to work with in standard algebraic geometry have a
+strong form of compactness. Actually, compactness is the wrong word, since the
+spaces of algebraic geometry are not Hausdorff.
+
+\begin{definition}
+A space is \textbf{noetherian} if every descending sequence of closed subsets
+$F_0 \supset F_1 \supset \dots$ stabilizes.
+\end{definition}
+
+\begin{exercise}
+If $R$ is noetherian, $\spec R$ is noetherian as a topological space.
+\end{exercise}
+
+
+\subsection{The dimension of a tensor product of fields}
+
+The following very clear result gives us the dimension of the tensor product
+of fields.
+\begin{theorem}[Grothendieck-Sharp]
+Let $K, L$ be field extensions of a field $k$. Then
+\[ \dim K \otimes_k L = \mathrm{min}(\mathrm{tr.deg.} K, \mathrm{tr.deg.} L). \]
+\end{theorem}
+This result is stated in the errata of \cite{EGA}, vol IV (4.2.1.5), but that
+did not make it
+ well-known; apparently it was independently discovered and published again by R. Y. Sharp, ten years
+ later.\footnote{Thanks to Georges Elencwajg for a helpful discussion at
+\url{http://math.stackexchange.com/questions/56669/a-tensor-product-of-a-power-series/56794}.}
+Note that in general, this tensor product is \emph{not} noetherian.
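+
+For instance, for purely transcendental and for algebraic extensions
+respectively:
+\[ \dim \left( k(x) \otimes_k k(y) \right) = \min(1,1) = 1, \qquad
+\dim \left( \overline{k} \otimes_k \overline{k} \right) = \min(0,0) = 0. \]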
+
+\begin{proof}
+We start by assuming $K$ is a finitely generated, purely transcendental extension of $k$.
+Then $K $ is the quotient field of a polynomial ring $k[x_1, \dots, x_n]$.
+It follows that $K \otimes_k L$ is a localization of $L[x_1, \dots, x_n]$, and
+consequently of dimension at most $n = \mathrm{tr.deg.} K$.
+
+Now the claim is that if $\mathrm{tr.deg.} L \geq n$, then we have equality
+\[ \dim K \otimes_k L = n. \]
+To see this, we have to show that $K \otimes_k L$ admits an $L$-homomorphism to
+$L$. For then there will be a maximal ideal $\mathfrak{m}$ of $K \otimes_k L$ which comes from
+a maximal ideal $\mathfrak{M}$ of $L[x_1, \dots, x_n]$ (corresponding to this
+homomorphism). Consequently, we will have $(K \otimes_k L)_{\mathfrak{m}} =
+(L[x_1, \dots, x_n])_{\mathfrak{M}},$ which has dimension $n$.
+
+So we need to produce this homomorphism $K \otimes_k L \to L$. Since $K =
+k(x_1, \dots, x_n)$ and $L$ has transcendence degree at least $n$, we just choose $n$
+algebraically independent elements of $L$, and use them to define a map of
+$k$-algebras $K \to L$. By the universal property of the tensor product, we get
+a section $K \otimes_k L \to L$.
+This proves the result in the case where $K$ is a finitely generated, purely
+transcendental extension.
+
+Now we assume that $K$ has
+finite transcendence degree over $k$, but is not necessarily purely
+transcendental. Then $K$ contains a subfield $E$ which is purely transcendental
+over $k$ and such that $K/E$ is algebraic. Then $K \otimes_k L$ is
+\emph{integral} over its subring $E \otimes_k L$. The previous analysis applies
+to $E \otimes_k L$, and by integrality the dimensions of the two objects are
+the same.
+
+Finally, we need to consider the case when $K$ is allowed to have infinite
+transcendence degree over $k$. Again, we may assume that $K$ is the quotient
+field of the polynomial ring $k[\left\{x_\alpha\right\}]$ (by the integrality
+argument above).
+We need to show that if $L$ has \emph{larger} transcendence degree than $K$,
+then $\dim K \otimes_k L = \infty$. As before, there is a section $K \otimes_k
+L \to L$, and $K \otimes_k L$ is a localization of the
+polynomial ring $L[\left\{x_\alpha\right\}]$.
+If we take the maximal ideal in $L[\left\{x_\alpha\right\}]$ corresponding to
+this section $K \otimes_k L \to L$, it is of the form $(x_\alpha -
+t_\alpha)_\alpha$
+for the $t_\alpha \in L$. It is easy to check that the localization of
+$L[\left\{x_\alpha\right\}]$ at this maximal ideal, which is a localization of
+$K \otimes_k L$, has infinite dimension.
+\end{proof}
+
diff --git a/books/cring/etale.tex b/books/cring/etale.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c722ea1ad5676bde01e1f9d5e2a83306b5de1766
--- /dev/null
+++ b/books/cring/etale.tex
@@ -0,0 +1,2267 @@
+\chapter{\'Etale, unramified, and smooth morphisms}
+
+
+In this chapter, we shall introduce three classes of morphisms of rings
+defined by lifting properties and study their properties.
+Although in the case of morphisms of finite presentation, the three types of
+morphisms (unramified, smooth, and \'etale) can be defined directly (without
+lifting properties), in practice, in algebraic geometry, the functorial
+criteria given by liftings matter: if one wants to understand an $R$-algebra
+$S$, one can just study the \emph{corepresentable functor} $\hom_R(S, -)$,
+which may be more accessible.
+
+\section{Unramified morphisms}
+\label{section-formally-unramified}
+
+
+\subsection{Definition}
+
+Formal \'etaleness, smoothness, and unramifiedness all deal with the existence
+or uniqueness of liftings under nilpotent extensions. We start with formal
+unramifiedness.
+
+\begin{definition}
+\label{definition-formally-unramified}
+Let $R \to S$ be a ring map.
+We say $S$ is {\bf formally unramified over $R$} if for every
+commutative solid diagram
+\begin{equation} \label{inflift}
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[r] \ar[u] & A \ar[u]
+}
+\end{equation}
+where $I \subset A$ is an ideal of square zero, there exists
+at most one dotted arrow making the diagram commute.
+
+We say that $S$ is \textbf{unramified over $R$} if $S$ is formally unramified
+over $R$ and is a finitely generated $R$-algebra.
+\end{definition}
+
+In other words, an $R$-algebra $S$ is formally unramified if and only if
+whenever $A$ is an $R$-algebra and $I \subset A$ an ideal of square zero, the
+map of sets
+\[ \hom_R(S, A) \to \hom_R(S, A/I) \]
+is injective.
+Restated again, for such $A, I$, there is \emph{at most one} lift of a given
+$R$-homomorphism $S \to A/I$ to $S \to A$.
+This is a statement purely about the associated ``functor of points'':
+namely, consider the functor $F:
+R\text{--}\mathbf{alg}
+\to \mathbf{Sets}$ given by $F(X) = \hom_R(S, X)$.
+Then $S$ is formally unramified over $R$ if $F(A) \to F(A/I)$ is injective for each
+$A, I$ as above.
+
+The intuition is that maps from $S$ into a square-zero extension are like
+``tangent vectors,'' so the condition geometrically means that tangent
+vectors can be lifted uniquely: that is, the associated map is an immersion.
+More formally, if $R\to S$ is a morphism of algebras of finite type
+over $\mathbb{C}$, which corresponds to a map $\spec S \to \spec R$ of
+\emph{smooth} varieties (this is a condition on $R, S$!), then $R \to S$ is
+unramified if and only if the associated map of complex manifolds is an
+immersion. (We are not proving this, just stating it for intuition.)
+
+Note also that we can replace ``$I$ of square zero'' with the weaker condition
+``$I$ nilpotent.'' That is, a formally unramified map $R \to S$ has the same
+unique lifting property for any nilpotent ideal $I \subset A$. This follows
+because, if $I^N = 0$, one can factor $A \to A/I$ into the \emph{finite}
+sequence $A = A/I^N \to A/I^{N-1} \to \dots \to A/I$, and each step is a
+square-zero extension.
+
+
+We now show that the module of K\"ahler differentials provides a simple
+criterion for an extension to be formally unramified.
+\begin{proposition} \label{formalunrmeansomegazero}
+An $R$-algebra $S$ is formally unramified if and only if $\Omega_{S/R} = 0$.
+\end{proposition}
+
+Suppose $R, S$ are both algebras over some smaller ring $k$.
+Then there is an exact sequence
+\[ \Omega_{R/k}\otimes_R S \to \Omega_{S/k} \to \Omega_{S/R} \to 0, \]
+and consequently, we see that formal unramifiedness corresponds to surjectivity
+of the map on ``cotangent spaces'' $\Omega_{R/k} \otimes_R S \to \Omega_{S/k}$.
+This is part of the intuition that formally unramified maps are geometrically
+like immersions (since surjectivity on the cotangent spaces corresponds to
+injectivity on the tangent spaces).
+
+\begin{proof}
+Suppose first $\Omega_{S/R}=0$. This is equivalent to the statement that
+\emph{any} $R$-derivation of $S$ into an $S$-module is trivial, because
+$\Omega_{S/R}$ is the recipient of the ``universal'' $R$-derivation.
+Suppose given an $R$-algebra $T$ with an ideal $I \subset T$ of square zero, a
+morphism
+\[ S \to T/I, \]
+and two liftings $f,g: S \to T$. Then $f-g$ maps $S$ into $I$.
+Since $T/I$ is naturally an $S$-algebra, it is easy to see (since $I$ has
+square zero) that $I$ is naturally an $S$-module and $f-g$ is an
+$R$-derivation $S \to I$.
+Thus $f-g \equiv 0$ and $f=g$.
+
+Conversely, suppose $S$ has the property that liftings in \eqref{inflift} are
+unique.
+Consider the $S$-module $T=S \oplus \Omega_{S/R}$ with the multiplicative
+structure $(a,a')(b,b') = (ab, ab' + a'b)$ that makes it into an algebra.
+(This is a general construction one can do with an $S$-module $M$: $S \oplus
+M$ is an algebra where $M$ becomes an ideal of square zero.)
+
+Consider the ideal $\Omega_{S/R} \subset T$, which has
+square zero; the quotient is $S$. We will find two liftings of the identity $S
+\to S$. For the first, define $S \to T$ sending $s \mapsto (s,0)$. For the second,
+define $S \to T$ sending $s \mapsto (s, ds)$; the derivation property of $d$ shows
+that this is a morphism of algebras.
+
+By the lifting property, the two morphisms $S \to T$ are equal. In particular,
+the map $S \to \Omega_{S/R}$ sending $s \mapsto ds$ is trivial. This implies that
+$\Omega_{S/R}=0$.
+
+\end{proof}
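+
+This criterion makes unramifiedness checkable in concrete examples.
+\begin{example}
+Let $k$ be a field of characteristic $\neq 2$, and consider the map $R = k[x]
+\to S = k[y]$ sending $x \mapsto y^2$ (the squaring map of the affine line).
+Writing $S = R[y]/(y^2 - x)$, the module $\Omega_{S/R}$ is generated by $dy$
+subject to the single relation $2y \, dy = 0$, so
+\[ \Omega_{S/R} \simeq k[y]/(2y) \neq 0, \]
+and $R \to S$ is not unramified. Since $(\Omega_{S/R})_y = 0$, however, $R \to
+S_y$ is unramified: the squaring map is ramified only at the origin.
+\end{example}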
+
+Here is the essential point of the above argument. Let $I \subset T$ be an
+ideal of square zero in the $R$-algebra $T$.
+Suppose given a homomorphism $g: S \to T/I$.
+Then the set of lifts $S \to T$ of $g$ (which are $R$-algebra morphisms)
+is either empty or a torsor over
+$\mathrm{Der}_R(S, I)$ (by adding a derivation to
+a homomorphism).
+Note that $I$ is naturally a $T/I$-module (because $I^2 = 0$), and hence an
+$S$-module by $g$.
+
+This means that if $\mathrm{Der}_R(S, I)$ is trivial, then a lift of $g$, if
+one exists, is unique.
+Conversely, if lifts are always unique (i.e. $S$ is formally
+unramified),
+then we must have $\mathrm{Der}_R(S, I) = 0$ for all such $I \subset T$; since
+every $S$-module arises as such an $I$ (via the square-zero extension
+construction above), there is no nontrivial $R$-derivation out of $S$.
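+
+For instance, take $S = R[x]$. A lift of $g: R[x] \to T/I$ amounts to choosing
+a preimage in $T$ of $g(x)$, and any two such preimages differ by an element
+of $I$; correspondingly, $\mathrm{Der}_R(R[x], I) \simeq I$, a derivation
+being determined by its value on $x$.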
+%most of the code below was contributed by the Stacks project authors
+
+
+We next show that formal unramifiedness is a local property.
+\begin{lemma}
+\label{lemma-formally-unramified-local}
+Let $R \to S$ be a ring map.
+The following are equivalent:
+\begin{enumerate}
+\item $R \to S$ is formally unramified,
+\item $R \to S_{\mathfrak q}$ is formally unramified for all
+primes $\mathfrak q$ of $S$, and
+\item $R_{\mathfrak p} \to S_{\mathfrak q}$ is formally unramified
+for all primes $\mathfrak q$ of $S$ with $\mathfrak p = R \cap \mathfrak q$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have seen in
+\cref{formalunrmeansomegazero}
+that (1) is equivalent to
+$\Omega_{S/R} = 0$. Similarly, since K\"ahler differentials localize, we see that (2) and (3)
+are equivalent to $(\Omega_{S/R})_{\mathfrak q} = 0$ for all
+$\mathfrak q$.
+As a result, the statement of this lemma is simply the fact that an $S$-module
+is zero if and only if all its localizations at prime ideals are zero.
+\end{proof}
+
+We shall now give the typical list of properties (``le sorite'') of unramified morphisms.
+
+\begin{proposition} \label{locunramified}
+Any map $R \to R_f$ for $f \in R$ is unramified.
+More generally, a map from a ring to any localization is \emph{formally}
+unramified, but not necessarily unramified.
+\end{proposition}
+\begin{proof}
+Indeed, we know that $\Omega_{R/R} = 0$ and $\Omega_{R_f/R } =
+(\Omega_{R/R})_f=0$, and the map is clearly of finite type.
+\end{proof}
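+
+\begin{example}
+The map $\mathbb{Z} \to \mathbb{Q}$ is a localization, hence formally
+unramified; it is not unramified, since $\mathbb{Q}$ is not a finitely
+generated $\mathbb{Z}$-algebra.
+\end{example}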
+
+\begin{proposition} \label{epiunr}
+A surjection of rings is unramified.
+More generally, a categorical epimorphism of rings is formally unramified.
+\end{proposition}
+\begin{proof}
+Obvious from the lifting property: if $R \to S$ is a categorical epimorphism,
+then given any $R$-algebra $T$, there can be \emph{at most one} map of
+$R$-algebras $S \to T$ (regardless of anything involving square-zero ideals).
+\end{proof}
+
+In the proof of \cref{epiunr}, we could have alternatively argued as follows. If $R \to S$ is an epimorphism
+in the category of rings, then $S \otimes_R S \to S$ is an isomorphism.
+This is a general categorical fact, the dual of which for monomorphisms is
+perhaps simpler: if $X \to Y$ is a monomorphism of objects in any category,
+then $X \to X \times_Y X$ is an isomorphism. By the alternate
+construction of $\Omega_{S/R}$ (\cref{alternateOmega}), it follows that $\Omega_{S/R}$ must vanish.
+
+
+\begin{proposition}
+\label{sorite1unr}
+If $R \to S$ and $S \to T$ are unramified (resp. formally unramified), so is $R \to T$.
+\end{proposition}
+\begin{proof}
+Since morphisms of finite type are preserved under composition, we only need
+to prove the result about formally unramified maps. So let $R \to S, S \to T$
+be formally unramified. We need to check that
+$\Omega_{T/R} = 0$. However, we have an exact sequence (see
+\cref{firstexactseq}):
+\[ \Omega_{S/R}\otimes_S T \to \Omega_{T/R} \to \Omega_{T/S} \to 0, \]
+and since $\Omega_{S/R} = 0, \Omega_{T/S} = 0$, we find that $\Omega_{T/R} =
+0$. This shows that $R \to T$ is formally unramified.
+\end{proof}
+More elegantly, we could have proved this by using the lifting property (and
+this is what we will do for formal \'etaleness and smoothness).
+Then this is simply a formal argument.
+
+\begin{proposition} \label{unrbasechange}
+If $R \to S$ is unramified (resp. formally unramified), so is $R' \to S' = S \otimes_R R'$ for any $R$-algebra
+$R'$.
+\end{proposition}
+\begin{proof}
+This follows from the fact that $\Omega_{S'/R'} = \Omega_{S/R} \otimes_S S'$
+(see \cref{basechangediff}).
+Alternatively, it can be checked easily using the lifting criterion.
+For instance, suppose given an $R'$-algebra $T$ and an ideal $I \subset T$ of
+square zero. We want to show that a morphism of $R'$-algebras
+$S' \to T/I$ lifts in at most one way to a map $S' \to T$. If we had two
+distinct liftings $S' \to T$, restricting to $S$ would give two liftings of
+$S \to S' \to T/I$. Since a morphism of $R'$-algebras out of $S' = S \otimes_R
+R'$ is determined by its restriction to $S$, these restrictions are distinct,
+contradicting the assumption that $R \to S$ is formally unramified.
+\end{proof}
+
+
+In fact, the question of what unramified morphisms look like can be reduced to
+the case where the ground ring is a \emph{field} in view of the previous and
+the following result.
+Given $\mathfrak{p} \in \spec R$, we let $k(\mathfrak{p})$ denote the residue
+field of $R_{\mathfrak{p}}$.
+
+
+\begin{proposition} \label{reduceunrtofield}
+Let $\phi: R \to S$ be a morphism of finite type. Then $\phi$ is unramified if
+and only if for every $\mathfrak{p} \in \spec R$, we have
+\( k(\mathfrak{p}) \to S \otimes_R k(\mathfrak{p}) \)
+unramified.
+\end{proposition}
+The classification of unramified extensions of a field is very simple, so this
+will be useful.
+\begin{proof}
+One direction is clear by \cref{unrbasechange}. For the other, suppose
+$k(\mathfrak{p}) \to S \otimes_R k(\mathfrak{p})$ is unramified for all $\mathfrak{p} \in \spec R$.
+We then know that
+\( \Omega_{S/R} \otimes_R k(\mathfrak{p}) = \Omega_{S \otimes_R
+k(\mathfrak{p})/k(\mathfrak{p})} = 0 \)
+for all $\mathfrak{p}$. By localization, it follows that
+\begin{equation} \label{auxdiff} \mathfrak{p}
+\Omega_{S_{\mathfrak{q}}/R_{\mathfrak{p}}} =
+\Omega_{S_{\mathfrak{q}}/R_{\mathfrak{p}}} = \Omega_{S_{\mathfrak{q}}/R} \end{equation}
+for any $\mathfrak{q} \in \spec S$ lying over $\mathfrak{p}$.
+
+Let $\mathfrak{q} \in \spec S$. We will now show that
+$(\Omega_{S/R})_{\mathfrak{q}} = 0$.
+Given this, we will find that $\Omega_{S/R} =0$, which will prove the
+assertion of the proposition.
+Indeed, let $\mathfrak{p} \in \spec R$ be
+the image of $\mathfrak{q}$, so that there is a \emph{local} homomorphism
+$R_{\mathfrak{p}} \to S_{\mathfrak{q}}$. By \eqref{auxdiff}, and since
+$\mathfrak{p}$ maps into $\mathfrak{q}$, we find that
+\[ \mathfrak{q} \Omega_{S_{\mathfrak{q}}/R} = \Omega_{S_{\mathfrak{q}}/R}, \]
+and since $\Omega_{S_{\mathfrak{q}}/R}$ is a finite $S_{\mathfrak{q}}$-module
+(\cref{finitelygeneratedOmega}),
+Nakayama's lemma now implies that $\Omega_{S_{\mathfrak{q}}/R}=0$, proving
+what we wanted.
+\end{proof}
+
+
+The following is simply a combination of the various results proved:
+\begin{corollary}
+\label{lemma-formally-unramified-localize}
+Let $A \to B$ be a formally unramified ring map.
+\begin{enumerate}
+\item For $S \subset A$ a multiplicative subset,
+$S^{-1}A \to S^{-1}B$ is formally unramified.
+\item For $S \subset B$ a multiplicative subset,
+$A \to S^{-1}B$ is formally unramified.
+\end{enumerate}
+\end{corollary}
+
+\subsection{Unramified extensions of a field}
+Motivated by \cref{reduceunrtofield}, we classify unramified morphisms out of a
+field; we are going to see that these are just finite products of separable
+extensions. Let us first consider the case when the field is \emph{algebraically
+closed.}
+
+\begin{proposition} \label{unrextalgclosedfld}
+Suppose $k$ is algebraically closed. If $A$ is an unramified $k$-algebra, then
+$A$ is a product of copies of $k$.
+\end{proposition}
+\begin{proof}
+Let us
+show first that $A$ is necessarily finite-dimensional over $k$.
+If not, then $\dim A \geq 1$, and we could choose a minimal prime
+$\mathfrak{p} \subset A$ with $\dim A/\mathfrak{p} \geq 1$. The natural
+surjection
+$\Omega_{A/k} \otimes_A A/\mathfrak{p} \to \Omega_{(A/\mathfrak{p})/k}$ shows
+that $\Omega_{(A/\mathfrak{p})/k} = 0$, and hence, localizing, that
+$\Omega_{K/k} = 0$ for $K$ the fraction field of the domain $A/\mathfrak{p}$.
+But $K$ is then a finitely generated field extension of $k$ of transcendence
+degree at least one, so $\dim_K \Omega_{K/k} \geq 1$, a contradiction. Thus
+$\dim A = 0$, so the noetherian ring $A$ is artinian, and an artinian
+$k$-algebra of finite type is finite-dimensional.
+
+So let us now assume that $A$ is finite-dimensional over $k$, hence \emph{artinian}.
+Then $A$ is a direct product of artinian local $k$-algebras.
+Each of these is unramified over $k$. So we need to study what local,
+artinian, unramified extensions of $k$ look like; we shall show that any such
+algebra is isomorphic to $k$:
+
+\begin{lemma}
+A finite-dimensional, local $k$-algebra which is unramified over $k$ (for $k$
+algebraically closed) is isomorphic to $k$.
+\end{lemma}
+\begin{proof}
+First, if $\mathfrak{m} \subset A$ is the maximal ideal, then $\mathfrak{m}$
+is nilpotent, and $A/\mathfrak{m}\simeq k$ by the Hilbert Nullstellensatz. Thus the ideal
+$\mathfrak{M}=\mathfrak{m}
+\otimes A + A \otimes \mathfrak{m} \subset A \otimes_k A$ is nilpotent and
+$(A \otimes_k A)/\mathfrak{M} = k \otimes_k k = k$. In particular, $\mathfrak{M}$ is maximal and
+$A \otimes_k A$ is also local.
+(We could see this as follows: $A$ is associated to a one-point variety, so the
+fibered product $\spec A \times_k \spec A$ is also associated to a one-point
+variety. It really does matter that we are working over an
+algebraically closed field here!)
+
+By assumption, $\Omega_{A/k} = 0$. So if $I = \ker(A \otimes_k A \to A)$, then
+$I = I^2$.
+ But from \cref{idemlemma}, we find that if we had $I \neq 0$, then $\spec A \otimes_k A$
+would be disconnected. This is clearly false (a local ring has no nontrivial
+idempotents), so $I = 0$ and
+$A \otimes_k A \simeq A$. Since $A$ is finite-dimensional over $k$,
+necessarily $A \simeq k$.
+\end{proof}
+\end{proof}
+
+Now let us drop the assumption of algebraic closedness to get:
+
+\begin{theorem} \label{unrfield}
+An unramified $k$-algebra for $k$ any field is isomorphic to a product $\prod
+k_i$ of finite separable extensions $k_i$ of $k$.
+\end{theorem}
+\begin{proof}
+Let $k$ be a field, and $\overline{k}$ its algebraic closure. Let $A$ be an
+unramified $k$-algebra. Then $A \otimes_k \overline{k}$ is an unramified
+$\overline{k}$-algebra by \cref{unrbasechange}, so is a finite product of copies of
+$\overline{k}$.
+It is thus natural that we need to study tensor products of fields to
+understand this problem.
+
+\begin{lemma} \label{productoffields}
+Let $E/k$ be a finite extension, and $L/k$ any extension.
+If $E/k$ is separable, then $L \otimes_k E$ is isomorphic (as an $L$-algebra)
+to a product of finite separable extensions of $L$.
+\end{lemma}
+\begin{proof}
+By the primitive element theorem, we have $E = k(\alpha)$ for some $\alpha \in
+E$ satisfying a separable irreducible polynomial $P \in k[X]$.
+Thus
+\[ E = k[X]/(P), \]
+so
+\[ E \otimes_k L = L[X]/(P). \]
+But $P$ splits into several irreducible factors $\left\{P_i\right\}$ in
+$L[X]$, no two of which are the same by separability.
+Thus by the Chinese remainder theorem,
+\[ E \otimes_k L = L[X]/(\prod P_i) = \prod L[X]/(P_i), \]
+and each $L[X]/(P_i)$ is a finite separable extension of $L$.
+\end{proof}
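+
+For example, with $k = \mathbb{Q}$ and $E = L = \mathbb{Q}(\sqrt{2})$, the
+polynomial $P = X^2 - 2$ factors over $L$ as $(X-\sqrt{2})(X+\sqrt{2})$, and
+\[ E \otimes_k L \simeq L[X]/(X - \sqrt{2}) \times L[X]/(X + \sqrt{2}) \simeq
+L \times L. \]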
+
+As a result of this, we can easily deduce that any $k$-algebra of the form
+$A=\prod k_i$, for the $k_i$ finite separable extensions of $k$, is unramified.
+Indeed, we have
+\[ \Omega_{A/k}\otimes_k \overline{k} = \Omega_{A \otimes_k
+\overline{k}/\overline{k}}, \]
+so it suffices to prove that $A \otimes_k \overline{k}$ is unramified over
+$\overline{k}$. However, from \cref{productoffields}, $A \otimes_k
+\overline{k}$ is isomorphic as a $\overline{k}$-algebra to a product of copies
+of $\overline{k}$. Thus $A \otimes_k \overline{k}$ is obviously unramified
+over $\overline{k}$.
+
+On the other hand, suppose $A/k$ is unramified; we shall show it is of the
+form given in the theorem. Then $A \otimes_k
+\overline{k}$ is unramified over $\overline{k}$, so it follows by
+\cref{unrextalgclosedfld} that $A$ is finite-dimensional over $k$. In
+particular, $A$ is \emph{artinian}, and thus decomposes as a product of
+finite-dimensional unramified $k$-algebras.
+
+We are thus reduced to showing that a local, finite-dimensional $k$-algebra
+that is unramified is a separable extension of $k$. Let $A$ be one such. Then
+$A$ can have no nilpotents because then $A \otimes_k \overline{k}$ would have
+nilpotents, and could not be isomorphic to a product of copies of
+$\overline{k}$.
+Thus the unique maximal ideal of $A$ is zero, and $A$ is a field.
+We need only show that $A$ is separable over $k$. This is accomplished by:
+
+\begin{lemma}
+Let $E/k$ be a finite inseparable extension. Then $E \otimes_k \overline{k}$
+contains nonzero nilpotents.
+\end{lemma}
+
+\begin{proof} There exists an $\alpha \in E$ which is inseparable over $k$,
+i.e. whose minimal polynomial has multiple roots.
+Let $E' = k(\alpha)$. We will show that $E' \otimes_k \overline{k}$ has
+nonzero nilpotents; since the map $E' \otimes_k \overline{k} \to E \otimes_k
+\overline{k}$ is an injection, we will be done.
+Let $P$ be the minimal polynomial of $\alpha$, so that $E' = k[X]/(P)$.
+Let $P = \prod P_i^{e_i}$ be the factorization of $P$ in $\overline{k}[X]$,
+with the $P_i \in \overline{k}[X]$ irreducible (i.e.\ linear) and distinct. By
+assumption, one of the $e_i$ is greater than one.
+It follows that
+\[ E' \otimes_k \overline{k} = \overline{k}[X]/(P) = \prod
+\overline{k}[X]/(P_i^{e_i}) \]
+has nilpotents corresponding to the $e_i$'s that are greater than one.
+\end{proof}
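+
+Concretely, take $k = \mathbb{F}_p(t)$ and $E = k(t^{1/p})$. The minimal
+polynomial of $t^{1/p}$ is $P = X^p - t$, which factors over $\overline{k}$
+as $(X - t^{1/p})^p$, so
+\[ E \otimes_k \overline{k} = \overline{k}[X]/((X - t^{1/p})^p), \]
+in which the class of $X - t^{1/p}$ is a nonzero nilpotent.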
+
+\end{proof}
+\begin{comment}
+We now come to the result that explains why the present theory is connected
+with Zariski's Main Theorem.
+\begin{corollary} \label{unrisqf}
+An unramified morphism $A \to B$ is quasi-finite.
+\end{corollary}
+\begin{proof}
+Recall that a morphism of rings is \emph{quasi-finite} if the associated map
+on spectra is. Equivalently, the morphism must be of finite type and have
+finite fibers. But by assumption $A \to B$ is of finite type. Moreover, if
+$\mathfrak{p} \in \spec A$ and $k(\mathfrak{p})$ is the residue field, then
+$k(\mathfrak{p}) \to B \otimes_A k(\mathfrak{p})$ is \emph{finite} by the
+above results, so the fibers are finite.
+\end{proof}
+
+
+\end{comment}
+
+
+\subsection{Conormal modules and universal thickenings}
+\label{section-conormal}
+
+It turns out that one can define the first infinitesimal neighbourhood
+not just for a closed immersion of schemes, but already for any formally
+unramified morphism. This is based on the following algebraic fact.
+
+\begin{lemma}
+\label{lemma-universal-thickening}
+Let $R \to S$ be a formally unramified ring map. There exists a surjection of
+$R$-algebras $S' \to S$ whose kernel is an ideal of square zero with the
+following universal property: Given any commutative diagram
+$$
+\xymatrix{
+S \ar[r]_{a} & A/I \\
+R \ar[r]^b \ar[u] & A \ar[u]
+}
+$$
+where $I \subset A$ is an ideal of square zero, there is a unique $R$-algebra
+map $a' : S' \to A$ such that $S' \to A \to A/I$ is equal to $S' \to S \to A/I$.
+\end{lemma}
+
+\begin{proof}
+Choose a set of generators $z_i \in S$, $i \in I$ for $S$ as an $R$-algebra.
+Let $P = R[\{x_i\}_{i \in I}]$ denote the polynomial ring on generators
+$x_i$, $i \in I$. Consider the $R$-algebra map $P \to S$ which maps
+$x_i$ to $z_i$. Let $J = \text{Ker}(P \to S)$. Consider the map
+$$
+\text{d} : J/J^2 \longrightarrow \Omega_{P/R} \otimes_P S
+$$
+see
+\rref{lemma-differential-seq}.
+This is surjective since $\Omega_{S/R} = 0$ by assumption, see
+\cref{formalunrmeansomegazero}.
+Note that $\Omega_{P/R}$ is free on $\text{d}x_i$, and hence the module
+$\Omega_{P/R} \otimes_P S$ is free over $S$. Thus we may choose a splitting
+of the surjection above and write
+$$
+J/J^2 = K \oplus \Omega_{P/R} \otimes_P S
+$$
+Let $J^2 \subset J' \subset J$ be the ideal of $P$ such that
+$J'/J^2$ is the second summand in the decomposition above.
+Set $S' = P/J'$. We obtain a short exact sequence
+$$
+0 \to J/J' \to S' \to S \to 0
+$$
+and we see that $J/J' \cong K$ is a square zero ideal in $S'$. Hence
+$$
+\xymatrix{
+S \ar[r]_1 & S \\
+R \ar[r] \ar[u] & S' \ar[u]
+}
+$$
+is a diagram as above. In fact we claim that this is an initial object in
+the category of diagrams. Namely, let $(I \subset A, a, b)$ be an arbitrary
+diagram. We may choose an $R$-algebra map $\beta : P \to A$ such that
+$$
+\xymatrix{
+S \ar[r]_1 & S \ar[r]_a & A/I \\
+R \ar[r] \ar@/_/[rr]_b \ar[u] & P \ar[u] \ar[r]^\beta & A \ar[u]
+}
+$$
+is commutative. Now it may not be the case that $\beta(J') = 0$, in other
+words it may not be true that $\beta$ factors through $S' = P/J'$.
+But what is clear is that $\beta(J') \subset I$ and
+since $\beta(J) \subset I$ and $I^2 = 0$ we have $\beta(J^2) = 0$.
+Thus the ``obstruction'' to finding a morphism from
+$(J/J' \subset S', 1, R \to S')$ to $(I \subset A, a, b)$ is
+the corresponding $S$-linear map $\overline{\beta} : J'/J^2 \to I$.
+The choice in picking $\beta$ lies in the choice of $\beta(x_i)$.
+A different choice of $\beta$, say $\beta'$, is gotten by taking
+$\beta'(x_i) = \beta(x_i) + \delta_i$ with $\delta_i \in I$.
+In this case, for $g \in J'$, we obtain
+$$
+\beta'(g) =
+\beta(g) + \sum\nolimits_i \delta_i \frac{\partial g}{\partial x_i}.
+$$
+Since the map $\text{d}|_{J'/J^2} : J'/J^2 \to \Omega_{P/R} \otimes_P S$
+given by $g \mapsto \sum\nolimits_i \frac{\partial g}{\partial x_i}\text{d}x_i$
+is an isomorphism by construction, we see that there is a unique choice
+of $\delta_i \in I$ such that $\beta'(g) = 0$ for all $g \in J'$.
+(Namely, $\delta_i = -\overline{\beta}(g_i)$, where $g_i \in J'/J^2$
+is the unique element with $\frac{\partial g_i}{\partial x_j} = 1$ if
+$j = i$ and $0$ otherwise.) The uniqueness of the solution implies the
+uniqueness required in the lemma.
+\end{proof}
+
+\noindent
+In the situation of
+\rref{lemma-universal-thickening}
+the $R$-algebra map $S' \to S$ is unique up to unique isomorphism.
+
+\begin{definition}
+\label{definition-universal-thickening}
+Let $R \to S$ be a formally unramified ring map.
+\begin{enumerate}
+\item The {\it universal first order thickening} of $S$ over $R$ is
+the surjection of $R$-algebras $S' \to S$ of
+\rref{lemma-universal-thickening}.
+\item The {\it conormal module} of $R \to S$ is the kernel $I$ of the
+universal first order thickening $S' \to S$, seen as an $S$-module.
+\end{enumerate}
+We often denote the conormal module by $C_{S/R}$ in this situation.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-universal-thickening-quotient}
+Let $I \subset R$ be an ideal of a ring.
+The universal first order thickening of $R/I$ over $R$
+is the surjection $R/I^2 \to R/I$. The conormal module
+of $R/I$ over $R$ is $C_{(R/I)/R} = I/I^2$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universal-thickening-localize}
+Let $A \to B$ be a formally unramified ring map.
+Let $\varphi : B' \to B$ be the universal first order thickening of
+$B$ over $A$.
+\begin{enumerate}
+\item Let $S \subset A$ be a multiplicative subset.
+Then $S^{-1}B' \to S^{-1}B$ is the universal first order thickening of
+$S^{-1}B$ over $S^{-1}A$. In particular $S^{-1}C_{B/A} = C_{S^{-1}B/S^{-1}A}$.
+\item Let $S \subset B$ be a multiplicative subset.
+Then $S' = \varphi^{-1}(S)$ is a multiplicative subset in $B'$
+and $(S')^{-1}B' \to S^{-1}B$ is the universal first order thickening
+of $S^{-1}B$ over $A$. In particular $S^{-1}C_{B/A} = C_{S^{-1}B/A}$.
+\end{enumerate}
+Note that the lemma makes sense by
+\rref{lemma-formally-unramified-localize}.
+\end{lemma}
+
+\begin{proof}
+With notation and assumptions as in (1). Let $(S^{-1}B)' \to S^{-1}B$
+be the universal first order thickening of $S^{-1}B$ over $S^{-1}A$.
+Note that $S^{-1}B' \to S^{-1}B$ is a surjection of $S^{-1}A$-algebras
+whose kernel has square zero. Hence by definition we obtain a map
+$(S^{-1}B)' \to S^{-1}B'$ compatible with the maps towards $S^{-1}B$.
+Consider any commutative diagram
+$$
+\xymatrix{
+B \ar[r] & S^{-1}B \ar[r] & D/I \\
+A \ar[r] \ar[u] & S^{-1}A \ar[r] \ar[u] & D \ar[u]
+}
+$$
+where $I \subset D$ is an ideal of square zero. Since $B'$ is the universal
+first order thickening of $B$ over $A$ we obtain an $A$-algebra map
+$B' \to D$. But it is clear that the image of $S$ in $D$ is mapped to
+invertible elements of $D$, and hence we obtain a compatible map
+$S^{-1}B' \to D$. Applying this to $D = (S^{-1}B)'$ we see that we get
+a map $S^{-1}B' \to (S^{-1}B)'$. We omit the verification that this map
+is inverse to the map described above.
+
+\medskip\noindent
+With notation and assumptions as in (2). Let $(S^{-1}B)' \to S^{-1}B$
+be the universal first order thickening of $S^{-1}B$ over $A$.
+Note that $(S')^{-1}B' \to S^{-1}B$ is a surjection of $A$-algebras
+whose kernel has square zero. Hence by definition we obtain a map
+$(S^{-1}B)' \to (S')^{-1}B'$ compatible with the maps towards $S^{-1}B$.
+Consider any commutative diagram
+$$
+\xymatrix{
+B \ar[r] & S^{-1}B \ar[r] & D/I \\
+A \ar[r] \ar[u] & A \ar[r] \ar[u] & D \ar[u]
+}
+$$
+where $I \subset D$ is an ideal of square zero. Since $B'$ is the universal
+first order thickening of $B$ over $A$ we obtain an $A$-algebra map
+$B' \to D$. But it is clear that the image of $S'$ in $D$ is mapped to
+invertible elements of $D$, and hence we obtain a compatible map
+$(S')^{-1}B' \to D$. Applying this to $D = (S^{-1}B)'$ we see that we get
+a map $(S')^{-1}B' \to (S^{-1}B)'$. We omit the verification that this map
+is inverse to the map described above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differentials-universal-thickening}
+Let $R \to A \to B$ be ring maps. Assume $A \to B$ formally unramified.
+Let $B' \to B$ be the universal first order thickening of $B$ over $A$.
+Then $B'$ is formally unramified over $A$, and the canonical map
+$\Omega_{A/R} \otimes_A B \to \Omega_{B'/R} \otimes_{B'} B$ is an
+isomorphism.
+\end{lemma}
+
+\begin{proof}
+We are going to use the construction of $B'$ from the proof of
+\rref{lemma-universal-thickening}
+although in principle it should be possible to deduce these results
+formally from the definition. Namely, we choose a presentation
+$B = P/J$, where $P = A[x_i]$ is a polynomial ring over $A$.
+Next, we choose elements $f_i \in J$ such that
+$\text{d}f_i = \text{d}x_i \otimes 1$ in $\Omega_{P/A} \otimes_P B$.
+Having made these choices we have
+$B' = P/J'$ with $J' = (f_i) + J^2$, see proof of
+\rref{lemma-universal-thickening}.
+
+\medskip\noindent
+Consider the canonical exact sequence
+$$
+J'/(J')^2 \to \Omega_{P/A} \otimes_P B' \to \Omega_{B'/A} \to 0
+$$
+see
+\rref{lemma-differential-seq}.
+By construction the classes of the $f_i \in J'$ map to elements of
+the module $\Omega_{P/A} \otimes_P B'$ which generate it modulo
+$(J/J')(\Omega_{P/A} \otimes_P B')$. Since $J/J'$ is a nilpotent ideal of
+$B'$, these elements generate the module altogether (by
+Nakayama's lemma, \rref{lemma-NAK}). This proves that $\Omega_{B'/A} = 0$
+and hence that $B'$ is formally unramified over $A$, see
+\cref{formalunrmeansomegazero}.
+
+\medskip\noindent
+Since $P$ is a polynomial ring over $A$ we have
+$\Omega_{P/R} = \Omega_{A/R} \otimes_A P \oplus \bigoplus P\text{d}x_i$.
+We are going to use this decomposition.
+Consider the following exact sequence
+$$
+J'/(J')^2 \to
+\Omega_{P/R} \otimes_P B' \to
+\Omega_{B'/R} \to 0
+$$
+see
+\rref{lemma-differential-seq}.
+We may tensor this with $B$ and obtain the exact sequence
+$$
+J'/(J')^2 \otimes_{B'} B \to
+\Omega_{P/R} \otimes_P B \to
+\Omega_{B'/R} \otimes_{B'} B \to 0
+$$
+If we remember that $J' = (f_i) + J^2$,
+then we see that the first arrow annihilates the submodule $J^2/(J')^2$.
+In terms of the direct sum decomposition
+$\Omega_{P/R} \otimes_P B =
+\Omega_{A/R} \otimes_A B \oplus \bigoplus B\text{d}x_i$ given above,
+we see that the submodule $(f_i)/(J')^2 \otimes_{B'} B$ maps
+isomorphically onto the summand $\bigoplus B\text{d}x_i$. Hence what is
+left of this exact sequence is an isomorphism
+$\Omega_{A/R} \otimes_A B \to \Omega_{B'/R} \otimes_{B'} B$
+as desired.
+\end{proof}
+
+
+\section{Smooth morphisms}
+
+\subsection{Definition}
+The idea of a \emph{smooth} morphism in algebraic geometry is one that is
+surjective on the tangent space, at least if one is working with smooth
+varieties over an algebraically closed field. So this means that one should be
+able to lift tangent vectors, which are given by maps from the ring into
+$k[\epsilon]/\epsilon^2$.
+
+This makes the following definition seem more plausible:
+
+\begin{definition}
+Let $S$ be an $R$-algebra. Then $S$ is \textbf{formally smooth} over $R$ (or
+the map $R \to S$ is formally smooth) if given any
+$R$-algebra $A$ and ideal $I \subset A $ of square zero, the map
+\[ \hom_R(S, A) \to \hom_R(S, A/I)\]
+is a surjection.
+We shall say that $S$ is \textbf{smooth} (over $R$) if it is formally smooth and of finite
+presentation.
+\end{definition}
+
+So this means that in any diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[r] \ar[u] & A, \ar[u]
+}
+$$
+with $I$ an ideal of square zero in $A$, there exists a dotted arrow making the diagram commute.
+As with formal unramifiedness, this is a purely functorial statement: if $F$ is
+the corepresentable functor associated to $S$, then we want $F(A) \to F(A/I)$
+to be a \emph{surjection} for each $I \subset A$ of square zero and each
+$R$-algebra $A$. Also, again we can replace ``$I$ of square zero'' with ``$I$
+nilpotent.''
+
+
+\begin{example}
+The basic example of a formally smooth $R$-algebra is the polynomial ring
+$R[x_1, \dots, x_n]$. For to give a map $R[x_1, \dots, x_n] \to A/I$ is to give
+$n$ elements of $A/I$; each of these elements can clearly be lifted to $A$.
+This is analogous to the statement that a free module is projective.
+
+More generally, if $P$ is a projective $R$-module (not necessarily of finite
+type), then the symmetric algebra $\Sym P$ is a formally smooth $R$-algebra.
+This follows by the same reasoning.
+\end{example}
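+
+\begin{example}
+By contrast, $C = k[x]/(x^2)$ is not formally smooth over a field $k$. Take
+$A = k[t]/(t^3)$ with the square-zero ideal $I = (t^2)$, and map $C \to A/I =
+k[t]/(t^2)$ by $x \mapsto t$. A lift $C \to A$ would have to send $x$ to an
+element $at + bt^2$ with $(at + bt^2)^2 = a^2 t^2 = 0$ in $A$, forcing $a =
+0$; but then the image of $x$ in $A/I$ would be $0$, not $t$. So no lift
+exists.
+\end{example}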
+
+We can state the usual list of properties of formally smooth morphisms:
+
+\begin{proposition}
+\label{smoothsorite}
+Smooth (resp. formally smooth) morphisms are preserved under base extension and
+composition.
+If $R$ is a ring, then any localization is formally smooth over $R$.
+\end{proposition}
+\begin{proof} As usual, only the statements about \emph{formal} smoothness are
+interesting.
+The statements about base extension and composition will be mostly left to the reader:
+they are an exercise in diagram-chasing. (Note that we cannot argue as we did
+for formally unramified morphisms, where we had a simple criterion in terms of
+the module of K\"ahler differentials and various properties of them.)
+For example, let $R \to S, S \to T$ be formally smooth.
+Given a diagram (with $I \subset A$ an ideal of square zero)
+\[ \xymatrix{
+T \ar[r] \ar@{-->}[rdd] & A/I \\
+S \ar[u] \ar@{-->}[rd] & \\
+R \ar[r] \ar[u] & A, \ar[uu]
+}\]
+we start by finding a dotted arrow $S \to A$ by using formal smoothness of $R
+\to S$. Then we find a dotted arrow $T \to A$ making the top quadrilateral
+commute. This proves that the composite is formally smooth.
+\end{proof}
+\subsection{Quotients of formally smooth rings}
+
+Now, ultimately, we want to show that this somewhat abstract definition of
+smoothness gives us something nice and geometric: namely, that a smooth
+morphism $A \to B$ is \emph{flat}, and that its fibers are smooth varieties
+(in the old sense).
+To do this, we will need to do a bit of work, but we can argue in a fairly
+elementary manner. We will first need to give a criterion for when a
+quotient of a formally smooth ring is formally smooth.
+
+
+
+\begin{theorem}
+\label{smoothconormal}
+Let $A$ be a ring, $B$ an $A$-algebra. Suppose $B$ is formally smooth over $A$,
+and let $I \subset B$ be an ideal.
+Then $C = B/I$ is a formally smooth $A$-algebra if and only if the canonical map
+\[ I/I^2 \to \Omega_{B/A} \otimes_B C \]
+has a section.
+In other words, $C$ is formally smooth precisely when the conormal sequence
+\[ I/I^2 \to \Omega_{B/A} \otimes_B C \to \Omega_{C/A} \to 0 \]
+is split exact.
+\end{theorem}
+
+This result is stated in more generality for \emph{topological} rings, and uses
+some functors on ring extensions, in \cite{EGA}, 0-IV, 22.6.1.
+
+\begin{proof}
+Suppose first $C$ is formally smooth over $A$.
+Then we have a map
+\( B/I^2 \to C \)
+given by the quotient. The claim is that there is a section of this map.
+There is a diagram of $A$-algebras
+\[ \xymatrix{
+B/I & \ar[l] B/I^2 \\
+C \ar[u]^{=} \ar@{-->}[ru]
+}\]
+and the lifting $s: C \to B/I^2$ exists by formal smoothness.
+This is a section of the natural projection $B/I^2 \to C = B/I$.
+
+
+In particular, the combination of the natural inclusion $I/I^2 \to B/I^2$ and
+the section $s$ gives an isomorphism of \emph{rings} (even $A$-algebras)
+\( B/I^2 \simeq C \oplus I/I^2 . \)
+Here $I/I^2$ squares to zero.
+
+We are interested in showing that $I/I^2 \to \Omega_{B/A} \otimes_B C$ is a
+split injection of $C$-modules. To see this, we will show that any map out of the former
+extends to a map out of the latter.
+Now suppose given a map
+of $C$-modules
+\[ \phi: I/I^2 \to M \]
+into a $C$-module $M$.
+Then we get an $A$-derivation
+\[ \delta: B/I^2 \to M \]
+by using the splitting $B/I^2 = C \oplus I/I^2$.
+(Namely, we just extend the map by zero on $C$.)
+Since $I/I^2$ is imbedded in $B/I^2$ by the canonical injection, this
+derivation restricts on $I/I^2$ to $\phi$. In other words there is a
+commutative diagram
+\[ \xymatrix{
+I/I^2 \ar[d]^{\phi} \ar[r] & B/I^2 \ar[ld]^{\delta} \\
+M
+}.\]
+It follows thus that we may define, by pulling back, an $A$-derivation $B \to
+M$ that restricts on $I$ to the map $I \to I/I^2 \stackrel{\phi}{\to} M$.
+By the universal property of the differentials, this is the same thing as a
+homomorphism $\Omega_{B/A} \to M$, or equivalently $\Omega_{B/A} \otimes_B C
+\to M$ since $M$ is a $C$-module.
+Pulling back this derivation to $I/I^2$ corresponds to pulling back via $I/I^2
+\to \Omega_{B/A} \otimes_B C$.
+
+It follows that the map
+\[ \hom_C(\Omega_{B/A} \otimes_B C, M) \to \hom_C(I/I^2, M) \]
+is a surjection. This proves one half of the result.
+
+Now for the other.
+Suppose that there is a section of the conormal map.
+This translates, as above, to saying that
+any map $I/I^2 \to M$ (of $C$-modules) for a $C$-module $M$
+ can be extended to an $A$-derivation $B \to M$.
+We must deduce from this formal smoothness.
+
+Let $E$ be any $A$-algebra, and $J \subset E$
+an ideal of square zero.
+We suppose given an $A$-homomorphism $C \to E/J$
+and would like to lift it to $C \to E$; in other words, we must
+find a lift in the diagram
+\[ \xymatrix{
+& C \ar@{-->}[ld] \ar[d] \\
+E \ar[r] & E/J
+}.\]
+Let us pull this map back by the surjection
+$B \twoheadrightarrow C$; we get a diagram
+\[ \xymatrix{
+& B \ar@{-->}[ldd]^{\phi}\ar[d] \\
+& C \ar@{-->}[ld] \ar[d] \\
+E \ar[r] & E/J
+}.\]
+In this diagram, we know that a lifting $\phi: B \to E$ does exist because $B$ is
+formally smooth over $A$.
+So we can find a dotted arrow from $B \to E$ in the diagram.
+The problem is that it might not send
+$I = \ker(B \to C) $ into zero.
+If we can show that there \emph{exists} a lifting that does factor through $C$
+(i.e. sends $I$ to zero), then we are done.
+
+In any event, we have a morphism of $A$-modules
+$ I \to E$ given by restricting $\phi: B \to E$.
+This lands in $J$, so we get a map $I \to J$. Note that $J$ is an $E/J$-module,
+hence a $C$-module, because $J$ has square zero. Moreover $I^2$ gets sent to
+zero because $J^2 = 0$, and we have a morphism of
+$C$-modules $I/I^2 \to J$.
+Now by hypothesis, there is an $A$-derivation
+$\delta: B \to J$ whose restriction to $I$ is the map $I \to I/I^2 \to J$ just
+constructed, i.e. $\delta|_I = \phi|_I$.
+Since $J$ has square zero, it follows that
+\[ \phi - \delta: B \to E \]
+is an $A$-homomorphism of algebras, and it kills $I$.
+Consequently this factors through $C$ and gives the desired lifting $C \to E$.
+
+
+
+\end{proof}
+
+\begin{corollary} \label{fsOmegaprojective}
+If $A \to B$ is formally smooth, then
+$\Omega_{B/A}$ is a projective $B$-module.
+\end{corollary}
+The intuition is that projective modules correspond to vector bundles
+over the $\spec$ (unlike general modules, the rank is locally constant,
+which should happen in a vector bundle). But a smooth algebra is like a
+manifold, and for a manifold the cotangent bundle is very much a vector
+bundle, whose dimension is locally constant.
+\begin{proof}
+Indeed, we can write $B$ as a quotient of a polynomial ring $D$ over $A$; this
+is formally smooth. Suppose $B = D/I$.
+Then we know that there is a split exact sequence
+\[ 0 \to I/I^2 \to \Omega_{D/A} \otimes_D B \to \Omega_{B/A} \to 0. \]
+But the middle term is free, as $D$ is a polynomial ring over $A$; hence the last term
+is projective.
+\end{proof}
+
+In particular, we can rewrite the criterion for formal smoothness of $C= B/I$,
+if $B$ is formally smooth over $A$:
+\begin{enumerate}
+\item $\Omega_{C/A} $ is a projective $C$-module.
+\item $I/I^2 \to \Omega_{B/A} \otimes_B C$ is a monomorphism.
+\end{enumerate}
+Indeed, these two are equivalent to the splitting of the conormal sequence
+(since the middle term is always projective by \cref{fsOmegaprojective}).
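+
+Let us illustrate the two conditions in a pair of simple examples; these
+computations are standard, and we assume $\operatorname{char} k \neq 2, 3$
+where relevant.
+\begin{example}
+Let $A = k$ be a field, $B = k[x, y]$, and $I = (y - x^2)$, so that $C \simeq
+k[x]$. The conormal map sends the class of $y - x^2$ to $dy - 2x\, dx$ in
+$\Omega_{B/A} \otimes_B C = C\, dx \oplus C\, dy$. Since $\left\{dx,\ dy -
+2x\,dx\right\}$ is again a basis of this free module, the conormal map is a
+(split) monomorphism, and $\Omega_{C/A} \simeq C\, dx$ is free: both conditions
+hold, and the parabola is smooth over $k$.
+
+By contrast, take $I = (y^2 - x^3)$, the cuspidal cubic. Then $\Omega_{C/A}$ is
+generated by $dx, dy$ modulo the single relation $3x^2\, dx = 2y\, dy$, so its
+fiber has dimension $2$ at the origin and dimension $1$ at every other closed
+point. As $\spec C$ is connected, a finitely generated projective module would
+have constant rank; hence $\Omega_{C/A}$ is not projective, and $C$ is not
+smooth over $k$.
+\end{example}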
+
+In particular, we can check that smoothness is \emph{local}:
+\begin{corollary} \label{fsislocal}
+Let $A$ be a ring, $B$ a finitely presented $A$-algebra. Then $B$ is smooth
+over $A$ if and only if for each $\mathfrak{q} \in \spec B$, with inverse image
+$\mathfrak{p} \in \spec A$, the map $A_{\mathfrak{p}} \to B_{\mathfrak{q}}$
+is formally smooth.
+\end{corollary}
+\begin{proof}
+Indeed, we see that $B = D/I$ for a polynomial ring $D = A[x_1,\dots, x_n]$ in finitely many
+variables, and $I \subset D$ a finitely generated ideal.
+By the criterion for formal smoothness just given, $B$ is smooth over $A$ if
+and only if the conormal map $I/I^2 \to \Omega_{D/A} \otimes_D B$ is injective
+and $\Omega_{B/A}$ is a projective $B$-module; similarly, each localization
+$A_{\mathfrak{p}} \to B_{\mathfrak{q}}$ is formally smooth if and only if the
+analogous local conditions hold. So it suffices to show that the two conditions
+can be checked locally.
+
+But both can be checked locally. Namely, the conormal map is an injection if
+and only if, for all $\mathfrak{q} \in \spec B$ corresponding to $\mathfrak{Q}
+\in \spec D$, the map $(I/I^2)_{\mathfrak{q}} \to
+\Omega_{D_{\mathfrak{Q}}/A_{\mathfrak{p}}} \otimes_{D_{\mathfrak{Q}}}
+B_{\mathfrak{q}}$ is an injection.
+Moreover, we know that for a finitely presented module over a ring,
+like $\Omega_{B/A}$, projectivity is equivalent to freeness of all the stalks
+(\cref{}). So we can check projectivity on the localizations too.
+\end{proof}
+
+In fact, the method of proof of \cref{fsislocal} yields the following
+observation: \emph{formal} smoothness ``descends'' under faithfully flat base change.
+That is:
+\begin{corollary}
+If $B$ is an $A$-algebra, and $A'$ a faithfully flat algebra, then $B$ is
+formally smooth over $A$ if and only if $B \otimes_A A'$ is formally smooth
+over $A'$.
+\end{corollary}
+We shall not give a complete proof, except in the case when $B$ is finitely
+presented over $A$ (so that the question is of smoothness).
+\begin{proof}
+One direction is just the ``sorite'' (see \cref{}). We want to show that
+formal smoothness ``descends.''
+The claim is that the two conditions for formal smoothness above (that
+$\Omega_{B/A}$ be projective and the conormal map be a monomorphism) descend
+under faithfully flat base-change. Namely, the fact about the conormal maps is
+clear (by faithful flatness).
+
+Now let $B' = B \otimes_A A'$.
+So we need to argue that if $\Omega_{B'/A'} = \Omega_{B/A} \otimes_B
+B'$ is projective as a $B'$-module, then so is $\Omega_{B/A}$. Here we use the
+famous result of Raynaud-Gruson (see \cite{RG71}), which states that
+projectivity descends under faithfully flat extensions, to complete the proof.
+
+If $B$ is finitely presented over $A$, then $\Omega_{B/A}$ is finitely
+presented as a $B$-module.
+We can run most of the same proof as before, but we want to avoid using the
+Raynaud-Gruson theorem: we must give a separate argument that $\Omega_{B/A}$ is
+projective if $\Omega_{B'/A'}$ is. However, for a finitely presented module,
+projectivity is \emph{equivalent} to flatness, by \cref{fpflatmeansprojective}. Moreover, since $\Omega_{B'/A'}$
+is $B'$-flat, faithful flatness enables us to conclude that $\Omega_{B/A}$ is
+$B$-flat, and hence projective.
+\end{proof}
+
+
+
+\subsection{The Jacobian criterion}
+
+
+Now we want a characterization of when a morphism is smooth. Let us
+motivate this with an analogy from standard differential topology.
+Consider real-valued functions $f_1, \dots, f_p \in C^{\infty}(\mathbb{R}^n)$.
+Now, if $f_1, f_2, \dots, f_p$ have a common zero at which the gradients
+$\nabla f_i$ form a matrix of rank $p$, then near that point the common zero set
+of the $f_i$ is a manifold of dimension $n - p$, by the implicit function theorem.
+We are going to give a relative version of this in the algebraic setting.
+
+
+
+Recall that a map of rings $A \to B$ is \emph{essentially of finite
+presentation} if $B$ is the localization of a finitely presented $A$-algebra.
+
+
+\begin{proposition} \label{smoothjac}
+Let $(A, \mathfrak{m}) \to (B, \mathfrak{n})$ be a local homomorphism of local
+rings such that $B$ is essentially of finite presentation.
+Suppose $B = (A[X_1, \dots, X_n])_{\mathfrak{q}}/I$ for some finitely generated
+ideal $I \subset A[X_1, \dots, X_n]_{\mathfrak{q}}$, where $\mathfrak{q}$ is a
+prime ideal in the polynomial ring.
+
+Then $I/I^2$ is generated as a $B$-module by polynomials
+$f_1, \dots, f_k \in I \subset A[X_1, \dots, X_n]$ whose Jacobian matrix has maximal rank
+over the residue field $B/\mathfrak{n}$ if and only if $B$ is formally smooth over $A$.
+In this case, $I/I^2$ is even freely generated by the $f_i$.
+\end{proposition}
+
+The Jacobian matrix $\frac{\partial f_i}{\partial X_j}$ is a matrix of
+elements of $A[X_1, \dots, X_n]$, and we can take the associated images in
+$B/\mathfrak{n}$.
+
+\begin{example}
+Suppose $A$ is an algebraically closed field $k$.
+Then $I$ corresponds to some ideal in the polynomial ring $k[X_1, \dots,
+X_n]$, which cuts out a variety $X$.
+Suppose $\mathfrak{q}$ is a maximal ideal in the polynomial ring.
+
+Then $B$ is the local
+ring of the algebraic variety $X$ at $\mathfrak{q}$.
+Then \cref{smoothjac} states that $\mathfrak{q}$ is a ``smooth point''
+of the variety (i.e., the Jacobian matrix has maximal rank) if and only if
+$B$ is formally smooth over $k$.
+We will expand on this later.
+\end{example}
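+
+Here is the computation in two concrete cases (with $k$ algebraically closed,
+as above).
+\begin{example}
+Let $f = y^2 - x^3 - x \in k[x, y]$ and consider the origin. The Jacobian
+matrix is
+\[ \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right)
+= (-3x^2 - 1, \ 2y), \]
+which at $(0,0)$ becomes $(-1, 0)$: it has the maximal rank $1$, so the local
+ring of the curve $f = 0$ at the origin is formally smooth over $k$. On the
+other hand, for the nodal cubic $g = y^2 - x^3 - x^2$, the Jacobian $(-3x^2 -
+2x, \ 2y)$ vanishes at $(0,0)$; the rank is $0 < 1$, and the origin is not a
+smooth point.
+\end{example}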
+
+
+\begin{proof}
+Indeed, we know that polynomial rings are formally smooth.
+In particular $D = A[X_1, \dots, X_n]_{\mathfrak{q}}$ is formally smooth over
+$A$, because localization preserves formal smoothness. Note also that $\Omega_{D/A}$ is a free $D$-module, because
+this is true for a polynomial ring and K\"ahler differentials commute with
+localization.
+
+So \cref{smoothconormal} implies that
+\[ I/I^2 \to \Omega_{D/A} \otimes_D B \]
+is a split injection precisely when $B$ is formally smooth over $A$. Suppose
+that this holds.
+Now $I/I^2$ is then a summand of the free module $\Omega_{D/A} \otimes_D B$, so it
+is projective, hence free as $B$ is local.
+Let $K = B/\mathfrak{n}$. It follows that the map
+\[ I/I^2 \otimes_D K \to \Omega_{D/A} \otimes_D K = K^n \]
+is an injection. This map sends (the class of) a polynomial to its gradient,
+reduced modulo $\mathfrak{n}$. Hence the assertion is
+clear: choose polynomials $f_1, \dots, f_k \in I$ whose classes freely generate
+$I/I^2$; their gradients in $B/\mathfrak{n}$ are then
+linearly independent.
+
+Conversely, suppose that $I/I^2$ has such generators.
+Then the map
+\[ I/I^2 \otimes K \to K^n, \quad f\mapsto df \]
+is a split injection.
+However, if a map of finitely generated modules over a local ring, with the
+target free, is such that tensoring with
+the residue field makes it an injection, then it is a split injection. (We
+shall prove this below.) Thus $I/I^2 \to \Omega_{D/A} \otimes_D B$ is a split
+injection. In view
+of the criterion for formal smoothness, we find that $B$ is formally smooth.
+\end{proof}
+
+Here is the promised lemma necessary to complete the proof:
+\begin{lemma}
+\label{splitinjreduce}
+If $(A, \mathfrak{m})$ is a local ring with residue field $k$, $M$ a finitely
+generated $A$-module, $N$ a finitely
+generated projective $A$-module, then a map
+\( \phi: M \to N \)
+is a split injection if and only if
+\(M \otimes_A k \to N \otimes_A k \)
+is an injection.
+\end{lemma}
+\begin{proof}
+One direction is clear, so it suffices to show that $M \to N$ is a split
+injection if the map on fibers is an injection.
+
+
+Let $L$ be a ``free approximation'' to $M$, that is, a free module $L$ together
+with a map $L \to M$ which is an isomorphism modulo $k$. By Nakayama's lemma,
+$L \to M$ is surjective.
+Then the map
+$L \to M \to N$ is such that $L \otimes k \to N \otimes k$ is injective, so
+$L \to N$ is a split injection (by an elementary criterion).
+It follows that we can find a splitting $N \to L$, which when composed with $L
+\to M$ is a splitting of $M \to N$.
+\end{proof}
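+
+The hypothesis on the fibers cannot be weakened to injectivity of $\phi$
+itself, as the following simple example shows.
+\begin{example}
+Let $A = \mathbb{Z}_{(p)}$ and let $\phi: A \to A$ be multiplication by $p$.
+Then $\phi$ is injective, but $\phi \otimes_A k$ is the zero map on $k =
+\mathbb{Z}/p$. Correspondingly, $\phi$ is not a split injection: a splitting
+$\psi: A \to A$ would satisfy $p\, \psi(1) = 1$, which is impossible as $p$ is
+not a unit in $\mathbb{Z}_{(p)}$.
+\end{example}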
+
+\subsection{The fiberwise criterion for smoothness}
+
+We shall now prove that a smooth morphism is flat. In fact, we will get
+a general ``fiberwise'' criterion for smoothness (i.e., a morphism is smooth
+if and only if it is flat and the fibers are smooth), which will enable
+us to reduce smoothness questions, in some cases, to the situation where the
+base is a field.
+
+We shall need some lemmas on regular sequences.
+The first will give a useful criterion for checking $M$-regularity of an
+element by checking on the fiber.
+For our purposes, it will also give a criterion for when quotienting by a
+regular element preserves flatness over a smaller ring.
+
+\begin{lemma}
+Let $(A, \mathfrak{m}) \to (B, \mathfrak{n})$ be a local homomorphism
+of local noetherian
+rings.
+Let $M$ be a finitely generated $B$-module, which is flat over $A$.
+
+Let $f \in B$. Then the following are equivalent:
+\begin{enumerate}
+\item $M/fM$ is flat over $A$ and $f: M \to M$ is injective.
+\item $f: M \otimes_A k \to M \otimes_A k$ is injective where $k = A/\mathfrak{m}$.
+\end{enumerate}
+\end{lemma}
+
+For instance, let us consider the case $M = B$. The lemma states that if
+multiplication by $f$ is regular on $B \otimes_A k$, then the hypersurface cut
+out by $f$ (i.e., corresponding to the ring $B/fB$) is flat over $A$.
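+
+Here is a small instance of the dichotomy the lemma describes.
+\begin{example}
+Let $A = k[t]_{(t)}$ and $B = k[t, x]_{(t,x)}$, with $M = B$; note that $B
+\otimes_A k = k[x]_{(x)}$. For $f = x$, multiplication by $x$ on $k[x]_{(x)}$
+is injective, and indeed $B/xB = A$ is $A$-flat. For $f = t$, multiplication by
+$t$ on $B \otimes_A k$ is zero, and correspondingly $B/tB = k[x]_{(x)}$ (with
+$t$ acting by zero) is a torsion $A$-module, hence not $A$-flat.
+\end{example}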
+
+\begin{proof} All $\tor$ functors here will be over $A$.
+If $M/fM$ is $A$-flat and $f: M \to M$ is injective, then the sequence
+\[ 0 \to M \stackrel{f}{\to} M \to M/fM \to 0 \]
+leads to a long exact sequence
+\[ \tor_1(k, M/fM) \to M \otimes_A k \stackrel{f}{\to} M \otimes_A k \to (M/fM)
+\otimes_A k \to 0. \]
+But since $M/fM$ is flat, the first term is zero, and it follows that $M \otimes k \stackrel{f}{\to} M
+\otimes k$ is injective.
+
+The other direction is more subtle. Suppose multiplication by $f$ is a
+monomorphism on $M \otimes_A k$. Now write the exact sequence
+\[ 0 \to P \to M \stackrel{f}{\to} M \to Q \to 0 \]
+where $P, Q$ are the kernel and cokernel. We want to show that $P = 0$
+and $Q$ is flat over $A$.
+
+We can also consider the image $I = fM \subset M$, to split this into two
+exact sequences
+\[ 0 \to P \to M \to I \to 0 \]
+and
+\[ 0 \to I \to M \to Q \to 0. \]
+Here the map $M \otimes_A k \to I \otimes_A k \to M \otimes_A k$ is given by
+multiplication by $f$, so it is injective by hypothesis. This implies
+that $M \otimes_A k \to I
+\otimes_A k$ is injective. So $M \otimes k \to I \otimes k$ is actually an isomorphism because it
+is obviously surjective, and we have just seen it is injective.
+Moreover, $I \otimes_A k \to M \otimes_A k$ is isomorphic to the
+homothety $f: M \otimes_A k \to M \otimes_A k$, and consequently is
+injective.
+To summarize:
+\begin{enumerate}
+\item $M \otimes_A k \to I \otimes_A k$ is an isomorphism.
+\item $I \otimes_A k \to M \otimes_A k$ is an injection.
+\end{enumerate}
+
+Let us tensor these two exact sequences with $k$. We get
+\[ 0 \to \tor_1(k, I) \to P \otimes_A k \to M \otimes_A k \to I \otimes_A k \to 0 \]
+because $M$ is flat. We also get
+\[ 0 \to \tor_1(k, Q) \to I \otimes_A k \to M \otimes_A k \to Q \otimes_A k \to 0
+.\]
+We'll start by using the second sequence. Now $I \otimes_A k \to M
+\otimes_A k$
+was just said to be injective, so that $\tor_1(k, Q) = 0$. By the local
+criterion for flatness, it follows that $Q$ is a flat
+$A$-module as well.
+But $Q = M/fM$, so this gives one part of what we wanted.
+
+Now, we want to show finally that $P = 0$.
+Now, $I$ is flat; indeed, it is the kernel of the surjection $M \to Q$ of flat
+modules, so the long exact sequence shows that it is flat. So we have a short
+exact sequence
+\[ 0 \to P \otimes_A k \to M \otimes_A k \to I \otimes_A k \to 0, \]
+which shows now that $P \otimes_A k = 0$ (as $M \otimes_A k \to I \otimes_A k$ was
+just shown to be an isomorphism earlier). By Nakayama $P = 0$.
+This implies that $f$ is $M$-regular.
+\end{proof}
+
+\begin{corollary} \label{regseqflat} Let $(A, \mathfrak{m}) \to (B, \mathfrak{n})$ be a morphism
+of noetherian local rings.
+Suppose $M$ is a finitely generated $B$-module, which is flat over $A$.
+
+Let $f_1, \dots, f_k \in \mathfrak{n}$. Suppose that $f_1, \dots, f_k$ is a
+regular sequence on $M \otimes_A k$. Then it is a regular sequence on $M$ and,
+in fact, $M/(f_1, \dots, f_k ) M$ is flat over $A$.
+\end{corollary}
+\begin{proof}
+This is now clear by induction on $k$: the lemma shows that $f_1$ is
+$M$-regular and that $M/f_1 M$ is $A$-flat, and since $(M/f_1 M) \otimes_A k =
+(M \otimes_A k)/f_1 (M \otimes_A k)$, the inductive hypothesis applies to
+$M/f_1 M$ and the sequence $f_2, \dots, f_k$.
+\end{proof}
+
+
+\begin{theorem}\label{smoothflat1} Let $(A, \mathfrak{m}) \to (B, \mathfrak{n})$ be
+a morphism of local rings such that $B$ is the localization of
+a finitely presented $A$-algebra at a prime
+ideal, $B = (A[X_1, \dots, X_n])_{\mathfrak{q}}/I$. Then if $A \to B$ is formally smooth, $B$ is a flat $A$-algebra.
+\end{theorem}
+
+The strategy is that $B$ is going to be written as the quotient of a
+localization of a polynomial
+ring by a sequence $\left\{f_i\right\}$
+whose gradients are linearly independent modulo the maximal ideal, i.e. in
+$B/\mathfrak{n}$.
+If we were working over a field, then we could use arguments about regular
+local rings to argue that the $\left\{f_i\right\}$ formed a regular
+sequence. We will use \cref{regseqflat} to bootstrap from this case to the
+general situation.
+
+\begin{proof}
+Let us assume that $A$ is \emph{noetherian}; the general case can be reduced to
+this one by a filtered colimit argument, which we shall not give.
+
+Let $C = (A[X_1, \dots, X_n])_{\mathfrak{q}}$. Then $C$ is a local ring,
+formally smooth over $A$, and we have morphisms of local rings
+\[ (A, \mathfrak{m}) \to (C, \mathfrak{q}) \twoheadrightarrow (B,
+\mathfrak{n}). \]
+Moreover, $C$ is a \emph{flat} $A$-module, and we are going to apply the
+fiberwise criterion for regular sequences (\cref{regseqflat}) to $C$ and a
+suitable sequence.
+
+Now we know that $I/I^2$ is a $B$-module generated by polynomials $f_1,
+\dots, f_m
+\in A[X_1, \dots, X_n]$
+whose Jacobian matrix has maximal rank in $B/\mathfrak{n}$ (by the Jacobian
+criterion, \cref{smoothjac}).
+The claim is that the $f_i$ are linearly independent in
+$\mathfrak{q}/\mathfrak{q}^2$. This will be the first key step in the proof.
+In other words, if $\left\{u_i\right\}$ is a family of elements of $C$, not all
+non-units, we do not have
+\[ \sum u_i f_i \in \mathfrak{q}^2. \]
+For if we did, then we could take derivatives
+and find
+\[ \sum u_i \partial_j f_i \in \mathfrak{q} \]
+for each $j$: indeed, $\partial_j \left( \sum u_i f_i \right) = \sum u_i
+\partial_j f_i + \sum f_i \partial_j u_i$, the second sum lies in
+$\mathfrak{q}$ because each $f_i \in I \subset \mathfrak{q}$, and the
+derivative of an element of $\mathfrak{q}^2$ lies in $\mathfrak{q}$.
+This contradicts the gradients of the $f_i$ being linearly
+independent in $B/\mathfrak{n} = C/\mathfrak{q}$.
+
+Now we want to show that the $\left\{f_i\right\}$ form a regular sequence in
+$C$. To do this, we shall reduce to the case where $A$ is a field. Indeed, let
+us make the base-change to the residue field $k = A/\mathfrak{m}$, writing
+$\overline{B} = B \otimes_A k$ and $\overline{C} = C \otimes_A k$.
+Then $\overline{B},\overline{C}$ are formally smooth local rings over a
+field $k$. We also know that $\overline{C}$ is a \emph{regular} local ring,
+since it is a localization of a polynomial ring over a field.
+
+
+Let us denote the maximal ideal of
+$\overline{C}$ by
+$\overline{\mathfrak{q}}$; this is just the image of $\mathfrak{q}$.
+
+
+Now the $\left\{f_i\right\}$ have images in $\overline{C}$ that are linearly
+independent
+in $\overline{\mathfrak{q}}/\overline{\mathfrak{q}}^2 =
+\mathfrak{q}/\mathfrak{q}^2$. It follows that the $\left\{f_i\right\}$ form a
+regular sequence in $\overline{C}$, by general facts about regular local
+rings (see, e.g. \cref{quotientreg44}); indeed, each of the successive quotients $\overline{C}/(f_1, \dots,
+f_i)$ will then be regular.
+It follows from the fiberwise criterion ($C$ being flat) that the
+$\left\{f_i\right\}$ form a regular sequence in $C$ itself, and that the
+quotient $C/(f_1, \dots, f_m) = B$ is $A$-flat.
+\end{proof}
+
+The proof in fact showed a bit more: we expressed $B$ as the quotient of a
+localized
+polynomial ring by a regular sequence.
+In other words:
+
+\begin{corollary}[Smooth maps are local complete intersections]
+Let $(A, \mathfrak{m}) \to (B, \mathfrak{n})$ be a formally smooth local
+homomorphism that is essentially of finite presentation. Then there exists a
+localization $C$ of a polynomial ring over $A$ such that $B$ can be expressed
+as $C/(f_1, \dots, f_m)$ for a sequence $\left\{f_i\right\}$ that is regular
+and contained in the maximal ideal of $C$.
+\end{corollary}
+
+We also get the promised result:
+\begin{theorem} \label{smoothflat}
+Let $A \to B$ be a smooth morphism of rings. Then $B$ is flat over $A$.
+\end{theorem}
+\begin{proof}
+Indeed, we immediately reduce to \cref{smoothflat1} by checking locally at each
+prime (which gives formally smooth maps).
+\end{proof}
+
+In fact, we can get a general criterion now:
+
+\begin{theorem} \label{fiberwisesmooth}
+Let $(A, \mathfrak{m}) \to (B, \mathfrak{n})$ be a (local) morphism of local
+noetherian rings such that $B$ is the localization of a finitely presented $A$-algebra at a prime
+ideal, $B = (A[X_1, \dots, X_n])_{\mathfrak{q}}/I$. Then $B$ is formally
+smooth over $A$ if and only if $B$ is $A$-flat and $B/\mathfrak{m}B$ is
+formally smooth over $A/\mathfrak{m}$.
+\end{theorem}
+
+\begin{proof}
+One direction is immediate from what we have already shown. Now we need to
+show that if $B$ is $A$-flat, and $B/\mathfrak{m}B$ is formally smooth over
+$A/\mathfrak{m}$, then $B$ is itself formally smooth over $A$.
+This will be comparatively easy, with all the machinery developed.
+
+As before, write the sequence
+\[ (A, \mathfrak{m}) \to (C, \mathfrak{q}) \twoheadrightarrow
+(B,\mathfrak{n}),
+\]
+where $C$ is a localization of a polynomial ring at a prime ideal, and in
+particular is formally smooth over $A$.
+We know that $B = C/I$, where $I \subset \mathfrak{q}$.
+
+To check that $B$ is formally smooth over $A$, we need to show ($C$ being
+formally smooth) that the conormal sequence
+\begin{equation} \label{thisexact1} I/I^2 \to \Omega_{C/A} \otimes_C B \to
+\Omega_{B/A} \to 0 \end{equation}
+is split exact.
+
+Let $\overline{A}, \overline{C}, \overline{B}$ be the base changes of $A, B,
+C$ to $k = A/\mathfrak{m}$; let $\overline{I}$ be the kernel of $\overline{C}
+\twoheadrightarrow \overline{B}$.
+Note that $\overline{I} = I/\mathfrak{m}I$ by flatness of $B$.
+Then we know that the sequence
+\begin{equation} \label{thisexact2} \overline{I}/\overline{I}^2 \to \Omega_{\overline{C}/k} / \overline{I}
+\Omega_{\overline{C}/k} \to \Omega_{\overline{B}/k} \to
+0\end{equation}
+is split exact, because $\overline{C}$ is a formally smooth $k$-algebra and
+$\overline{B} = \overline{C}/\overline{I}$ is formally smooth over $k$ by
+hypothesis (in view of \cref{smoothconormal}).
+
+But \eqref{thisexact2} is the reduction of \eqref{thisexact1}. Since the middle
+term of \eqref{thisexact1} is finitely generated and projective over $B$, we can check
+splitting modulo the maximal ideal (see \cref{splitinjreduce}).
+\end{proof}
+
+In particular, we get the global version of the fiberwise criterion:
+
+\begin{theorem}
+Let $A \to B$ be a finitely presented morphism of rings. Then $B$ is a smooth
+$A$-algebra if and only if $B$ is a flat $A$-algebra and, for each
+$\mathfrak{p} \in \spec A$, the morphism $k(\mathfrak{p}) \to B \otimes_A
+k(\mathfrak{p})$ is smooth.
+\end{theorem}
+Here $k(\mathfrak{p})$ denotes the residue field of $A_{\mathfrak{p}}$, as
+usual.
+\begin{proof}
+One direction is clear. For the other, we recall
+that smoothness is \emph{local}: $A \to B$ is smooth if and only if, for each
+$\mathfrak{q} \in \spec B$ with image $\mathfrak{p} \in \spec A$, we have $A_{\mathfrak{p}} \to B_{\mathfrak{q}}$
+formally smooth (see \cref{fsislocal}).
+But, by \cref{fiberwisesmooth}, this is the case if and only if, for each such
+pair $(\mathfrak{p}, \mathfrak{q})$, the morphism $k(\mathfrak{p}) \to
+B_{\mathfrak{q}} \otimes_{A_{\mathfrak{p}}} k(\mathfrak{p})$ is formally smooth.
+Now if $k(\mathfrak{p}) \to B \otimes_A k(\mathfrak{p})$ is smooth for each
+$\mathfrak{p}$, then this condition is clearly satisfied.
+\end{proof}
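+
+The criterion is convenient in practice, as in the following example (with $k$
+algebraically closed of characteristic $\neq 2$).
+\begin{example}
+Let $A = k[t]$ and $B = k[t, x]/(x^2 - t)$, a double cover of the affine line.
+Then $B$ is free (hence flat) as an $A$-module, with basis $1, x$. The fiber
+over a closed point $t = a \neq 0$ is $k[x]/(x^2 - a) \simeq k \times k$, which
+is smooth (indeed \'etale) over $k$; but the fiber over $t = 0$ is
+$k[x]/(x^2)$, which is not smooth. So $B$ is flat but not smooth over $A$, and
+the failure of smoothness is detected entirely on the fibers.
+\end{example}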
+
+\subsection{Formal smoothness and regularity}
+
+We now want to explore the connection between formal smoothness and regularity.
+In general, the intuition is that a variety over an algebraically closed field
+is \emph{smooth} if and only if the local rings at closed points (and thus at
+all points by \cref{locofregularloc}) are regular local rings.
+Over a non-algebraically closed field, regularity of the local rings is no
+longer sufficient: the right condition is that they be \emph{geometrically
+regular.} So far we will just prove one direction, though.
+
+\begin{theorem}
+Let $(A, \mathfrak{m})$ be a noetherian local ring containing a copy of its
+residue field $A/\mathfrak{m}= k$. Then if $A$ is formally smooth over $k$, $A$
+is regular.
+\end{theorem}
+\begin{proof}
+We are going to compare the quotients $A/\mathfrak{m}^m$ to the quotients of
+$R= k[x_1, \dots, x_n]$ where $n$ is the \emph{embedding dimension} of
+$A$.
+Let $\mathfrak{n} \subset k[x_1, \dots, x_n]$ be the ideal $(x_1, \dots, x_n)$.
+We are going to give surjections
+\[ A/\mathfrak{m}^m \twoheadrightarrow R/\mathfrak{n}^m \]
+for each $m \geq 2$.
+
+Let $t_1, \dots, t_n \in \mathfrak{m}$ be a $k$-basis for
+$\mathfrak{m}/\mathfrak{m}^2$.
+Consider the map $A \twoheadrightarrow R/\mathfrak{n}^2 $ that goes
+$A \twoheadrightarrow A/\mathfrak{m}^2 \simeq k \oplus
+\mathfrak{m}/\mathfrak{m}^2 \simeq R/\mathfrak{n}^2$, where $t_i$ is sent to
+$x_i$. This is well-defined, and gives a surjection $A \twoheadrightarrow
+R/\mathfrak{n}^2$.
+Using the infinitesimal lifting property, we can successively lift this map to
+$k$-algebra maps
+\[ A \to R/\mathfrak{n}^m \]
+for each $m \geq 2$ (the kernel of $R/\mathfrak{n}^{m+1} \to R/\mathfrak{n}^m$
+has square zero), which necessarily factor through $A/\mathfrak{m}^m$ (as they
+send $\mathfrak{m}$ into $\mathfrak{n}$). They are surjective by Nakayama's lemma.
+It follows that
+\[ \dim_k A/\mathfrak{m}^m \geq \dim_k R/\mathfrak{n}^m, \]
+and since $R_{\mathfrak{n}}$ is a regular local ring, the last term grows
+asymptotically like $m^n$. It follows that $\dim A \geq n$, and since $\dim A$
+is always at most the embedding dimension $n$, we conclude that $A$ is regular.
+\end{proof}
+
+
+\subsection{A counterexample}
+
+It is in fact true that a formally smooth morphism between \emph{arbitrary}
+noetherian rings is flat, although we have only proved this in the case of a
+morphism of finite type. Without noetherian hypotheses, however, this fails:
+a formally smooth morphism need not be flat.
+
+\begin{example} \label{fsisn'tflat}
+Consider a field $k$, and consider $R = k[T^{x}]_{x \in \mathbb{Q}_{>0}}$.
+This is the filtered colimit of the polynomial rings $k[T^{1/n}]$ over all $n$. There is a
+natural map $R \to k$ sending each power of $T$ to zero.
+The claim is that $R \to k$ is a formally smooth morphism which is not flat.
+It is a \emph{surjection}, so it is a lot different from the intuitive idea of
+a smooth map.
+
+Yet it turns out to be \emph{formally} smooth. To see this, consider an $R$-algebra $S$ and an ideal $I \subset S$ such that $I^2 =
+0$. The claim is that an $R$-homomorphism $k \to S/I$ lifts to $k \to S$.
+Consider the diagram
+\[ \xymatrix{
+& & S \ar[d] \\
+R \ar[rru] \ar[r] & k \ar@{-->}[ru] \ar[r] & S/I,
+}\]
+in which we have to show that a dotted arrow exists.
+
+However, there can be at most one $R$-homomorphism $k \to S/I$, since $k$ is a
+quotient of $R$. It follows that each $T^{x}, x \in \mathbb{Q}_{>0}$ is mapped
+to zero in $S/I$.
+So each $T^x$, $x \in \mathbb{Q}_{>0}$, maps into $I$ under the map $R \to S$
+(assumed to exist). It follows that $T^x = (T^{x/2})^2$ maps to zero in $S$, as $I^2 =0$.
+Thus the map $R \to S$ annihilates each $T^x$, which means that there is a
+(unique) dotted arrow.
+
+Note that $R \to k$ is not flat. Indeed, multiplication by $T$ is injective on
+$R$, but it acts by zero on $k$.
+\end{example}
+
+This example was described by Anton Geraschenko on MathOverflow; see
+\cite{MO200}.
+The same reasoning shows more generally:
+
+\begin{proposition}
+Let $R$ be a ring, $I \subset R$ an ideal such that $I = I^2$. Then the
+projection $R \to R/I$ is formally \'etale.
+\end{proposition}
+
+For a noetherian ring, if $I = I^2$, then we know that $I$ is generated by an
+idempotent in $R$ (see \cref{idempotentideal}), and the projection $R \to R/I$ is projection on the
+corresponding direct factor (actually, the complementary one).
+In this case, the projection is flat, and this is to be expected: as stated
+earlier, formally \'etale implies flat for noetherian rings.
+But in the non-noetherian case, we can get interesting examples.
+
+\begin{example} We shall now give an example showing that formally \'etale
+morphisms do not necessarily preserve reducedness. We shall later see that this
+is true in the \emph{\'etale} case (see \cref{reducedetale}).
+
+Let $k$ be a field of characteristic $\neq 2$.
+Consider the ring $R = k[T^x]_{x \in \mathbb{Q}_{>0}}$ as before.
+Take $S = R[X]/(X^2 - T)$, and consider the ideal $I$ generated by all the positive
+powers $T^x, x > 0$. As before, clearly $I=I^2$, and thus $S \to S/I$ is
+formally \'etale.
+The claim is that $S$ is reduced; clearly $S/I = k[X]/(X^2)$ is not.
+Indeed, an element of $S$ can be uniquely described by $\alpha = P(T) + Q(T)X$ where $P, Q$ are
+``polynomials'' in $T$---in actuality, they are allowed to have terms $T^x, x
+\in \mathbb{Q}_{>0}$.
+Then $\alpha^2 = P(T)^2 + Q(T)^2 T + 2 P(T) Q(T) X$. If $\alpha^2 = 0$, then
+$P(T)Q(T) = 0$ (as the characteristic is not $2$), so $P = 0$ or $Q = 0$
+because $R$ is a domain; in either case the equation $P(T)^2 + Q(T)^2 T = 0$
+then forces $\alpha = 0$.
+\end{example}
+
+
+\section{\'Etale morphisms}
+\label{section-formally-etale}
+
+\subsection{Definition}
+The definition is just another nilpotent lifting property:
+\begin{definition}
+\label{definition-formally-etale}
+Let $S$ be an $R$-algebra. Then $S$ is \textbf{formally \'etale} over $R$ (or
+the map $R \to S$ is formally \'etale) if given any
+$R$-algebra $A$ and ideal $I \subset A $ of square zero, the map
+\[ \hom_R(S, A) \to \hom_R(S, A/I)\]
+is a bijection.
+A ring homomorphism is \textbf{\'etale} if and only if it is formally \'etale
+and of finite presentation.
+\end{definition}
+
+So $S$ is {\it formally \'etale over $R$} if for every
+commutative solid diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[r] \ar[u] & A \ar[u]
+}
+$$
+where $I \subset A$ is an ideal of square zero, there exists
+a unique dotted arrow making the diagram commute. As before, the functor
+of points can be used to test formal \'etaleness.
+Moreover, clearly a ring map is formally \'etale if and only if
+it is both formally smooth and formally unramified.
+
+We have the usual:
+\begin{proposition}
+\'Etale (resp. formally \'etale) morphisms are closed under composition
+and base change.
+\end{proposition}
+\begin{proof}
+Either a combination of the corresponding results for formal
+smoothness and formal unramifiedness (i.e. \cref{sorite1unr},
+\cref{unrbasechange}, and \cref{smoothsorite}), or easy to verify
+directly.
+\end{proof}
+
+Filtered colimits preserve formal \'etaleness:
+\begin{lemma}
+\label{lemma-colimit-formally-etale}
+Let $R$ be a ring. Let $I$ be a directed partially ordered set.
+Let $(S_i, \varphi_{ii'})$ be a system of $R$-algebras
+over $I$. If each $R \to S_i$ is formally \'etale, then
+$S = \text{colim}_{i \in I}\ S_i$ is formally \'etale over $R$.
+\end{lemma}
+The idea is that we can make the lifts on each piece, and glue them
+automatically.
+\begin{proof}
+Consider a diagram as in \rref{definition-formally-etale}, with square-zero
+ideal $J \subset A$ (we write $J$ here to avoid a clash with the index set $I$).
+By assumption we get unique $R$-algebra maps $S_i \to A$ lifting
+the compositions $S_i \to S \to A/J$. Hence these are compatible
+with the transition maps $\varphi_{ii'}$ and define a lift
+$S \to A$. This proves existence.
+The uniqueness is clear by restricting to each $S_i$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization-formally-etale}
+Let $R$ be a ring. Let $S \subset R$ be any multiplicative subset.
+Then the ring map $R \to S^{-1}R$ is formally \'etale.
+\end{lemma}
+
+\begin{proof}
+Let $I \subset A$ be an ideal of square zero. What we are saying
+here is that given a ring map $\varphi : R \to A$ such that
+$\varphi(f) \mod I$ is invertible for all $f \in S$ we have also that
+$\varphi(f)$ is invertible in $A$ for all $f \in S$. This is true because
+$A^*$ is the inverse image of $(A/I)^*$ under the canonical map
+$A \to A/I$.
+\end{proof}
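Concretely, the last assertion is the observation that units lift through square-zero extensions: if $a \in A$ satisfies $ab \equiv 1 \pmod I$ for some $b \in A$, write $ab = 1 - x$ with $x \in I$, so that $x^2 = 0$ and
\[ a \cdot b(1+x) = (1-x)(1+x) = 1 - x^2 = 1. \]
Thus $a$ is already invertible in $A$.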
+
+
+We now want to give the standard example of an \'etale morphism;
+geometrically, this corresponds to a hypersurface in affine 1-space given by
+a nonsingular equation. We will eventually show that any \'etale
+morphism looks like this, locally.
+
+
+\begin{example}
+Let $R$ be a ring, $P \in R[X]$ a polynomial. Suppose $Q \in R[X]/P$ is such that in the
+localization $(R[X]/P)_Q$, the image of the derivative $P' \in R[X]$ is a unit. Then the map
+\[ R \to (R[X]/P)_Q \]
+is called a \textbf{standard \'etale morphism.}
+\end{example}
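For a concrete instance (not discussed above), suppose $2$ and $a$ are units in $R$ and take $P = X^2 - a$ with $Q = 1$. Then
\[ R \to R[X]/(X^2 - a) \]
is standard \'etale: here $P' = 2X$, and $X$ is already a unit in $R[X]/(X^2 - a)$ since $X \cdot (a^{-1} X) = 1$; as $2$ is also a unit, $P'$ is invertible without any localization. Geometrically, this is the double cover obtained by adjoining a square root of $a$.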
+
+
+
+The name is justified by:
+\begin{proposition}
+A standard \'etale morphism is \'etale.
+\end{proposition}
+\begin{proof}
+It is sufficient to check the condition on the K\"ahler differentials, since a
+standard \'etale morphism is evidently flat and of finite presentation.
+Indeed, we have that
+\[ \Omega_{(R[X]/P)_Q/R} = Q^{-1} \Omega_{(R[X]/P)/R} = Q^{-1}
+\frac{R[X]}{(P'(X), P(X)) R[X]} \]
+by basic properties of K\"ahler differentials. Since $P'$ is a unit after
+localization at $Q$, this last object is clearly zero.
+\end{proof}
+
+\begin{example} \label{etalefield}
+A separable algebraic extension of a field $k$ is formally \'etale.
+Indeed, we just need to check this
+for a finite separable extension $L/k$, in view of \cref{lemma-colimit-formally-etale}, and then we can write $L = k[X]/(P(X))$
+for $P$ a separable polynomial. But it is easy to see that this is a special
+case of a standard \'etale morphism.
+In particular, any unramified extension of a field is \'etale, in view of the
+structure theory for unramified extensions of fields (\cref{unrfield}).
+\end{example}
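To spell out the last step of the example: if $P$ is separable, then $\gcd(P, P') = 1$ in $k[X]$, so there are $a, b \in k[X]$ with
\[ aP + bP' = 1; \]
reducing modulo $P$ shows that $P'$ is already a unit in $L = k[X]/(P)$. Thus no localization is needed, and $k \to k[X]/(P)$ is standard \'etale with $Q = 1$.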
+
+
+\begin{example}
The example of \cref{fsisn'tflat} is a formally \'etale morphism: we showed
there that the map was formally smooth, and it is formally unramified since it
is surjective (so its module of K\"ahler differentials vanishes).
It follows that a formally \'etale morphism is not necessarily flat!
+\end{example}
+
+
+We also want a slightly different characterization of an \'etale morphism. This
+criterion will be of extreme importance for us in the sequel.
+\begin{theorem}
+An $R$-algebra $S$ of finite presentation is \'etale if and only if
+it is flat and unramified.
+\end{theorem}
+This is in fact how \'etale morphisms are defined in \cite{SGA1} and in
+\cite{Ha77}.
+\begin{proof}
+An \'etale morphism is smooth, hence flat (\cref{smoothflat}). Conversely,
+suppose $S$ is flat and unramified over $R$. We just need to show that $S$ is
+smooth over $R$. But this follows by the fiberwise criterion for smoothness,
+\cref{fiberwisesmooth}, and the fact that an unramified extension of a
+field is automatically \'etale, by \cref{etalefield}.
+\end{proof}
+
+
+Finally, we would like a criterion for when a morphism of \emph{smooth}
+algebras is \'etale.
+We state it in the local case first.
+\begin{proposition} \label{etalecotangent}
+Let $B, C$ be local, formally smooth, essentially of finite presentation
+$A$-algebras and let $f: B \to C$ be a local $A$-morphism.
Then $f$ is formally \'etale if and only if the map $\Omega_{B/A}\otimes_B C \to \Omega_{C/A}$ is an isomorphism.
+\end{proposition}
+The intuition is that $f$ induces an isomorphism on the cotangent spaces; this
+is analogous to the definition of an \emph{\'etale} morphism of smooth
+manifolds (i.e. one that induces an isomorphism on each tangent space, so is a
+local isomorphism at each point).
+\begin{proof}
+We prove this for $A$ noetherian.
+
+We just need to check that $f$ is flat if the map on differentials is an
+isomorphism.
Since $B, C$ are flat $A$-algebras, it suffices, by the general criterion
(\cref{fiberwiseflat}), to show that $B
\otimes_A k \to C \otimes_A k$ is flat for $k$ the residue field of $A$.
We will also be done if we show that $B \otimes_A \overline{k} \to C \otimes_A
\overline{k}$ is flat, where $\overline{k}$ is an algebraic closure of $k$.
Note that the same hypotheses (formal smoothness, being essentially of finite
presentation, and the isomorphism on K\"ahler differentials) are preserved
under this base change.

+So we have reduced to a question about rings essentially of finite type over a
+\emph{field}. Namely, we have local rings $\overline{B}, \overline{C}$ which
+are both formally smooth, essentially of finite-type $k$-algebras, and a map $\overline{B} \to \overline{C}$ that
+induces an isomorphism on the K\"ahler differentials as above.
+
+The claim is that $\overline{B} \to \overline{C}$ is flat (even local-\'etale).
+Note that both $\overline{B}, \overline{C}$ are \emph{regular} local rings, and
+the condition about K\"ahler differentials implies that they of the same
+dimension. Consequently, $\overline{B} \to \overline{C}$ is \emph{injective}:
+if it were not injective, then the dimension of $\im(\overline{B} \to
+\overline{C})$ would be \emph{less} than $\dim \overline{B} = \dim \overline{C}$.
+But since $\overline{C}$ is unramified over $\im(\overline{B} \to
+\overline{C})$, the dimension can only drop: $\dim \overline{C} \leq \dim
+\im(\overline{B} \to \overline{C})$.\footnote{This follows by the surjection of
+modules of K\"ahler differentials, in view of \cref{}.}
+This contradicts $\dim \overline{B} = \dim\overline{C}$. It follows that
+$\overline{B} \to \overline{C}$ is injective, and hence flat by \cref{} below
+(one can check that there is no circularity).
+
+
+\end{proof}
+
+\subsection{The local structure theory}
+We know two easy ways of getting an unramified morphism out of a ring $R$.
+First, we can take a standard \'etale morphism, which is necessarily
+unramified; next we can take a quotient of that. The local structure theory
+states that this is all we can have, locally.
+
+\textbf{Warning: this section will use Zariski's Main Theorem, which is not in
+this book yet.}
+
+
+For this we introduce a definition.
+
+\begin{definition}
+Let $R$ be a commutative ring, $S$ an $R$-algebra of finite type. Let $\mathfrak{q} \in \spec
+S$ and $\mathfrak{p} \in \spec R$ be the image. Then $S$ is called
\textbf{unramified at $\mathfrak{q}$} (resp. \textbf{\'etale at
$\mathfrak{q}$}) if $\Omega_{S_{\mathfrak{q}}/R_{\mathfrak{p}}} = 0$ (resp.
+that and $S_{\mathfrak{q}}$ is $R_{\mathfrak{p}}$-flat).
+\end{definition}
+
Now when one works with finitely generated algebras, the module of K\"ahler
+differentials is always finitely generated over the top ring.
+In particular, if
+$\Omega_{S_{\mathfrak{q}}/R_{\mathfrak{p}}} = (\Omega_{S/R} )_{\mathfrak{q}} =
+0$, then there is $f \in S - \mathfrak{q}$ with $\Omega_{S_f/R} = 0$.
+So being unramified at $\mathfrak{q}$ is equivalent to the existence of $f \in
+S-\mathfrak{q}$ such that $S_f$ is unramified over $R$.
+Clearly if $S$ is unramified over $R$, then it is unramified at all primes,
+and conversely.
+
+
+\begin{theorem}
Let $\phi: R \to S$ be a morphism of finite type, and $\mathfrak{q} \subset S$ a prime
+with $\mathfrak{p} = \phi^{-1}(\mathfrak{q})$. Suppose $\phi$ is unramified at
+$\mathfrak{q}$.
+Then there is $f \in R- \mathfrak{p}$ and $g \in S - \mathfrak{q}$ (divisible
+by $\phi(f)$) such that
+the morphism
+\[ R_f \to S_g \]
+factors as a composite
+\[ R_f \to (R_f[x]/P)_{h} \twoheadrightarrow S_g \]
+where the first is a standard \'etale morphism and the second is a
+surjection. Moreover, we can arrange things such that the fibers above
+$\mathfrak{p}$ are isomorphic.
+\end{theorem}
+
+
+\begin{proof}We shall assume that $R$ is \emph{local} with maximal ideal
+$\mathfrak{p}$. Then the question reduces to finding
+$g \in S$ such that $S_g$ is a quotient of an algebra standard \'etale over $R$.
+This reduction is justified by the following argument: if $R$ is
+not necessarily local, then the morphism $R_{\mathfrak{p}} \to
+S_{\mathfrak{p}}$ is still unramified. If we can show that there is $g \in
+S_{\mathfrak{p}} - \mathfrak{q}S_{\mathfrak{p}}$ such
+that $(S_{\mathfrak{p}})_g$ is a quotient of a standard \'etale
+$R_{\mathfrak{p}}$-algebra, it
+will follow that there is $f \notin \mathfrak{p}$ such that the same works
+with $R_f \to S_{gf}$.
+
+\emph{We shall now reduce to the case where $S$ is a finite $R$-algebra.}
+Let $R$ be local, and let $R \to S$ be unramified at $\mathfrak{q}$. By assumption, $S$ is finitely generated over $R$.
+We have seen by \cref{unrisqf} that $S$ is quasi-finite over $R$ at
+$\mathfrak{q}$.
+By Zariski's Main Theorem (\cref{zmtCA}), there is a finite
+$R$-algebra $S'$ and $\mathfrak{q} ' \in \spec S'$ such that $S$ near
+$\mathfrak{q}$ and $S'$ near $\mathfrak{q}'$ are isomorphic (in
+the sense that there are $g \in S-\mathfrak{q}$, $h \in S' -
+\mathfrak{q}'$ with $S_g \simeq S'_h$).
+Since $S'$ must be unramified at $\mathfrak{q}'$, we can assume at
+the outset, by
+replacing $S$ by $S'$, that $R
+\to S$ is finite and unramified at $\mathfrak{q}$.
+
+
+\emph{We shall now reduce to the case where $S$ is generated by one element as
+$R$-algebra}. This will occupy us for a few paragraphs.
+
+We have assumed that $R$ is a local ring with maximal ideal $\mathfrak{p} \subset R$; the
maximal ideals of $S$ are finite in number, say $\mathfrak{q},\mathfrak{q}_1, \dots,
\mathfrak{q}_r$, because $S$ is finite over $R$; these all contain $\mathfrak{p}$ by Nakayama.
There are no inclusion relations among $\mathfrak{q}$ and the $\mathfrak{q}_i$,
as $S/\mathfrak{p}S$ is an artinian ring.
+
+Now $S/\mathfrak{q}$ is a finite separable field extension of
+$R/\mathfrak{p}$ by \cref{unrfield}; indeed, the morphism $R/\mathfrak{p}
+\to S/\mathfrak{p}S \to S/\mathfrak{q}$ is a composite of
+unramified extensions and is thus unramified. In particular, by the primitive element theorem, there is $x \in S$ such that $x$ is a
+generator of the field extension $R/\mathfrak{p} \to S/\mathfrak{q}$.
+We can also choose $x$ to lie in the other $\mathfrak{q}_i$ by the Chinese
+remainder theorem.
+Consider the subring $C=R[x] \subset S$.
+It has a maximal ideal $\mathfrak{s}$ which is the intersection of
+$\mathfrak{q}$ with $C$.
+We are going to show that locally, $C$ and $S$ look the same.
+
+\begin{lemma}[Reduction to the monogenic case]
+Let $(R, \mathfrak{p})$ be a local ring and $S$ a finite $R$-algebra. Let
+$\mathfrak{q}, \mathfrak{q}_1, \dots, \mathfrak{q}_r \in \spec S$ be the prime ideals
+lying above $\mathfrak{p}$. Suppose $S$ is unramified at $\mathfrak{q}$.
+
+Then there is $x \in S$ such that the rings $R[x] \subset S$ and $S$ are
+isomorphic near $\mathfrak{q}$: more precisely, there is $g \in R[x] -
+\mathfrak{q}$ with $R[x]_g = S_g$.
+\end{lemma}
+\begin{proof} Choose $x$ as in the paragraph preceding the statement of
+the lemma.
+Define $\mathfrak{s}$ in the same way.
+We have morphisms
+\[ R \to C_{\mathfrak{s}} \to S_{\mathfrak{s}} \]
+where $S_{\mathfrak{s}}$ denotes $S$ localized at $C-\mathfrak{s}$, as usual.
+The second morphism here is finite.
+However, we claim that $S_{\mathfrak{s}}$ is in fact a local ring with maximal
+ideal $\mathfrak{q} S_{\mathfrak{s}}$; in particular, $S_{\mathfrak{s}} =
+S_{\mathfrak{q}}$.
+Indeed, $S$ can have no maximal ideals other than
+$\mathfrak{q}$ lying above $\mathfrak{s}$; for,
+if $\mathfrak{q}_i$ lay over $\mathfrak{s}$ for some $i$, then $x \in
+\mathfrak{q}_i \cap C = \mathfrak{s}$. But $x \notin\mathfrak{s}$ because $x$
+is not zero in $S/\mathfrak{q}$.
+
+
+It thus follows that $S_{\mathfrak{s}}$ is a local ring with maximal ideal
+$\mathfrak{q}S_{\mathfrak{s}}$. In particular, it is
+equal to $S_{\mathfrak{q}}$, which is a localization of
+$S_{\mathfrak{s}}$ at the maximal ideal.
+In particular, the morphism
+\[ C_{\mathfrak{s}} \to S_{\mathfrak{s}} = S_{\mathfrak{q}} \]
+is finite. Moreover, we have $\mathfrak{s} S_{\mathfrak{q}} =
+\mathfrak{q}S_{\mathfrak{q}}$ by unramifiedness of $R \to S$.
+So since the residue fields are the same by choice of $x$, we have
+$\mathfrak{s}S_{\mathfrak{q}} + C_{\mathfrak{s}} = S_{\mathfrak{q}}$.
Thus by Nakayama's lemma, we find that $S_{\mathfrak{s}} = S_{\mathfrak{q}} = C_{\mathfrak{s}}$.
+
+
There is thus an element $g \in C - \mathfrak{s}$ such that $S_g = C_g$.
+In particular, $S$ and $C$ are isomorphic near $\mathfrak{q}$.
+\end{proof}
+
We can thus replace $S$ by $C$ and assume that $S$ is generated by one element as an $R$-algebra.
+
\emph{With this reduction now made, we proceed.} We are now considering the
case where $S$ is generated by one element, so is a quotient of the
polynomial ring $R[X]$.
+Now $\overline{S} = S/\mathfrak{p}S$ is thus a quotient of $k[X]$, where $k =
+R/\mathfrak{p}$ is the residue field.
+It thus follows that
+\[ \overline{S} = k[X]/(\overline{P}) \]
+for $\overline{P}$ a monic polynomial, as $\overline{S}$ is a finite
+$k$-vector space.
+
+Suppose $\overline{P}$ has degree $n$.
Let $x \in S$ be a generator of $S$ as an $R$-algebra.
We know that $1, x, \dots, x^{n-1}$ have reductions that form a $k$-basis for
$S \otimes_R k$, so by Nakayama they generate $S$ as an $R$-module.
+In particular, we can find a monic polynomial $P$ of degree $n$ such that
+$P(x) = 0$.
+It follows that the reduction of $P$ is necessarily $\overline{P}$.
+So we have a surjection
+\[ R[X]/(P) \twoheadrightarrow S \]
+which induces an isomorphism modulo $\mathfrak{p}$ (i.e. on the fiber).
+
+Finally, we claim that we can modify $R[X]/P$ to make a standard \'etale
+algebra. Now,
+if we let $\mathfrak{q}'$ be the preimage of $\mathfrak{q}$ in
+$R[X]/P$, then we have morphisms of local rings
+\[ R \to (R[X]/P)_{\mathfrak{q}'} \to S_{\mathfrak{q}}. \]
+The claim is that $R[X]/(P)$ is unramified
+over $R$ at $\mathfrak{q}'$.
+
+
+To see this, let $T = (R[X]/P)_{\mathfrak{q}'}$. Then, since the fibers of $T$ and $S_{\mathfrak{q}}$ are the same at
+$\mathfrak{p}$, we have that
+\[ \Omega_{T/R} \otimes_R k(\mathfrak{p}) = \Omega_{T \otimes_R
+k(\mathfrak{p})/k(\mathfrak{p})} =
+\Omega_{(S_{\mathfrak{q}}/\mathfrak{p}S_{\mathfrak{q}})/k(\mathfrak{p})} = 0 \]
+as $S$ is $R$-unramified at $\mathfrak{q}$.
+It follows that $\Omega_{T/R} = \mathfrak{p} \Omega_{T/R}$, so a fortiori
+$\Omega_{T/R} = \mathfrak{q} \Omega_{T/R}$; since this is a finitely generated
$T$-module, Nakayama's lemma implies that it is zero.
We conclude
+that $R[X]/P$ is unramified at $\mathfrak{q}'$; in particular, by the
+K\"ahler differential criterion, the image of the derivative $P'$ is not in
+$\mathfrak{q}'$. If we localize at the image of $P'$, we then get what we
+wanted in the theorem.
+\end{proof}
+
+We now want to deduce a corresponding (stronger) result for \emph{\'etale}
+morphisms. Indeed, we prove:
+
+\begin{theorem}
+If $R \to S$ is \'etale at $\mathfrak{q} \in \spec S$ (lying over
+$\mathfrak{p} \in \spec R$), then there are $f \in R-\mathfrak{p}, g \in S -
+\mathfrak{q}$ such that the morphism $R_f \to S_g$ is a standard \'etale
+morphism.
+\end{theorem}
+\begin{proof}
+By localizing suitably, we can assume that $(R, \mathfrak{p})$ is local,
+and (in view of \cref{}),
+$R \to S$ is a quotient of a standard \'etale morphism
+\[ (R[X]/P)_h \twoheadrightarrow S \]
+with the kernel some ideal $I$. We may assume that the surjection is an
+isomorphism modulo $\mathfrak{p}$, moreover.
By localizing $S$ enough,\footnote{We are not assuming $S$ finite over $R$ here.}
+we may suppose that $S$ is a \emph{flat} $R$-module as well.
+
+Consider the exact sequence of $(R[X]/P)_h$-modules
\[ 0 \to I \to (R[X]/P)_h \to S \to 0. \]
+
+Let $\mathfrak{q}'$ be the image of $\mathfrak{q}$ in $\spec (R[X]/P)_h$.
+We are going to show that the first term vanishes upon localization at
+$\mathfrak{q}'$.
+Since everything here is finitely generated,
+it will follow that after further localization by some element in
+$(R[X]/P)_h - \mathfrak{q}'$, the first term will vanish. In particular, we
+will then be done.
+
+Everything here is a module over $(R[X]/P)_h$, and certainly a module over
+$R$. Let us tensor everything over $R$ with
+$R/\mathfrak{p}$; we find an exact sequence
\[ I \otimes_R R/\mathfrak{p} \to S/\mathfrak{p}S \to S/\mathfrak{p}S \to 0; \]
+we have used the fact that the morphism $(R[X]/P)_h \to S$ was assumed to
+induce an isomorphism modulo $\mathfrak{p}$.
+
+However, by \'etaleness we assumed that $S$ was \emph{$R$-flat}, so we find that exactness holds at the left too.
+It follows that
+\[ I = \mathfrak{p}I, \]
+so a fortiori
+\[ I = \mathfrak{q}'I, \]
+which implies by Nakayama that $I_{\mathfrak{q}'} = 0$. Localizing at a
+further element of $(R[X]/P)_h - \mathfrak{q}'$, we can assume that $I=0$;
after this localization, we find that $S$ looks \emph{precisely} like a
standard \'etale algebra.
+\end{proof}
+
+\subsection{Permanence properties of \'etale morphisms}
+We shall now return to (more elementary) commutative algebra, and discuss the
+properties that an \'etale extension $A \to B$ has. An \'etale extension is
+not supposed to make $B$ differ too much from $A$, so we might expect some of
+the same properties to be satisfied.
+
+We might not necessarily expect global properties to be preserved
+(geometrically, an open imbedding of schemes is \'etale, and that does
+not necessarily preserve global properties), but local ones should be.
+
+Thus the right definition for us will be the following:
+\begin{definition}
A morphism of local rings $(A, \mathfrak{m}_A) \to (B, \mathfrak{m}_B)$ is \textbf{local-unramified}
if $\mathfrak{m}_A B$ is the maximal ideal of $B$ and $B/\mathfrak{m}_B$ is a
+finite separable extension of $A/\mathfrak{m}_A$.
+
+A morphism of local rings $A \to B$ is \textbf{local-\'etale} if it is flat
+and local-unramified.
+\end{definition}
+
+
+
+\begin{proposition} \label{dimpreserved}
+Let $(R, \mathfrak{m}) \to (S, \mathfrak{n})$ be a local-\'etale morphism of noetherian local
+rings. Then $\dim R = \dim S$.
+\end{proposition}
+\begin{proof}
+Indeed, we know that $\mathfrak{m}S = \mathfrak{n}$ because $R \to S$ is
+local-unramified.
+Also $R/\mathfrak{m}\to S/\mathfrak{n}$ is a finite separable extension.
+We have a natural morphism
+\[ \mathfrak{m} \otimes_R S \to \mathfrak{n} \]
+which is injective (as the map $\mathfrak{m} \otimes_R S \to S$ is injective by
+flatness) and consequently is an isomorphism.
+More generally, $\mathfrak{m}^n \otimes_R S \simeq \mathfrak{n}^n$ for each $n$.
+By flatness again, it follows that
+\begin{equation} \label{thisiso} \mathfrak{m}^n/\mathfrak{m}^{n+1} \otimes_{R/\mathfrak{m}}
+(S/\mathfrak{n}) = \mathfrak{m}^n/\mathfrak{m}^{n+1} \otimes_R S \simeq
+\mathfrak{n}^n/\mathfrak{n}^{n+1}. \end{equation}
Now the dimensions of these vector spaces are, for large $n$, polynomials in
$n$: the Hilbert functions of $R$ and $S$. By \eqref{thisiso} these coincide,
and since the dimension of a noetherian local ring is determined by the degree
of its Hilbert polynomial, it follows that $\dim R = \dim S$.
+\end{proof}
+
+
+
+\begin{proposition} \label{depthpreserved}
+Let $(R, \mathfrak{m}) \to (S, \mathfrak{n})$ be a local-\'etale morphism of noetherian local
+rings.
+Then $\depth R = \depth S$.
+\end{proposition}
+\begin{proof}
+We know that a nonzerodivisor in $R$ maps to a nonzerodivisor in $S$. Thus by
+an easy induction we reduce to the case where $\depth R = 0$.
+This means that $\mathfrak{m}$ is an associated prime of $R$; there is thus
+some $x \in R$, nonzero (and necessarily a non-unit) such that the annihilator
+of $x$ is all of $\mathfrak{m}$. Now $x$ is a nonzero element of $S$, too, as
+the map $R \to S$ is an inclusion by flatness.
It is then clear that $\mathfrak{n} = \mathfrak{m}S$ is the annihilator of
+$x$ in $S$, so $\mathfrak{n}$ is an associated prime of $S$ too.
+\end{proof}
+
+\begin{corollary}
+Let $(R, \mathfrak{m}) \to (S, \mathfrak{n})$ be a local-\'etale morphism of noetherian local
+rings.
+Then $R$ is regular (resp. Cohen-Macaulay) if and only if $S$ is.
+\end{corollary}
+\begin{proof}
+The results \cref{depthpreserved} and \cref{dimpreserved} immediately give
+the result about Cohen-Macaulayness.
+For regularity, we use \eqref{thisiso} with $n=1$ to see at once that the
+embedding dimensions of $R$ and $S$ are the same.
+\end{proof}
+
+Recall, however, that regularity of $S$ implies that of $R$ if we just assume
+that $R \to S$ is \emph{flat} (by Serre's characterization of regular
+local rings as those having finite global dimension).
+
+
+We shall next show that reducedness is preserved
+under \'etale extensions.
+We shall need another hypothesis, though, that the map of local rings
+be essentially of finite type.
+This will always be the case in situations of interest, when we are looking at
+the map on local rings induced by a morphism of rings of finite type.
+
+\begin{proposition}
+\label{reducedetale}
+Let $(R, \mathfrak{m}) \to (S, \mathfrak{n})$ be a local-\'etale morphism of noetherian local
+rings.
+Suppose $S$ is essentially of finite type over $R$.
+Then $S$ is reduced if and only if $R$ is reduced.
+\end{proposition}
+\begin{proof}
+As $R \to S$ is injective by (faithful) flatness, it suffices to show that if
+$R$ is reduced, so is $S$.
Now there is an imbedding $R \to \prod_{\mathfrak{p} \ \mathrm{minimal}}
R/\mathfrak{p}$ of $R$ into a finite product of local domains. Tensoring with the
flat $R$-algebra $S$, we get an imbedding of
$S$ into the product of local rings $\prod S/\mathfrak{p}S$.
+Each $S/\mathfrak{p}S$ is essentially of finite type over $R/\mathfrak{p}$,
+and local-\'etale over it too.
+
We are reduced to showing that each $S/\mathfrak{p}S$ is reduced. So we need
only show that a local-\'etale, essentially of finite type local algebra over
a local noetherian domain is reduced.
+
+So suppose $A$ is a local noetherian domain, $B$ a
+local-\'etale, essentially of finite type local $A$-algebra.
+We want to show that $B$ is reduced, and then we will be done. Now $A$ imbeds into its field of
+fractions $K$; thus $B$ imbeds into $B \otimes_A K$.
+Then $B \otimes_A K$ is formally unramified over $K$ and is essentially of
+finite type over $K$. This means that $B \otimes_A K$ is a product of fields
+by the usual classification, and is in particular reduced. Thus $B$ was itself
+reduced.
+\end{proof}
+
+
+To motivate the proof that normality is preserved, though, we indicate another
+proof of this fact, which does not even use the essentially of finite type
+hypothesis.
+Recall that a noetherian ring $A$ is reduced if and only if
+for every prime $\mathfrak{p} \in \spec A$ of height zero,
+$A_{\mathfrak{p}}$ is regular (i.e., a field), and for every
prime $\mathfrak{p}$ of height $>0$, $A_{\mathfrak{p}}$ has depth
+at least one. See \cref{reducedserrecrit}.
+
So suppose $R \to S$ is local-\'etale and suppose $R$ is reduced.
+We are going to apply the above criterion, together with the results already
+proved, to show that $S$ is reduced.
+
+Let $\mathfrak{q} \in \spec S$ be a minimal prime, whose image in
+$\spec R$ is $\mathfrak{p}$.
+Then we have a morphism
+\[ R_{\mathfrak{p}} \to S_{\mathfrak{q}} \]
which is essentially of finite type, flat, and indeed local-\'etale, as it is
formally unramified (since $R \to S$ was).
+We know that $\dim R_{\mathfrak{p}} = \dim S_{\mathfrak{q}}$
+by \cref{dimpreserved}, and consequently
+since $R_{\mathfrak{p}}$ is regular, so is $S_{\mathfrak{q}}$.
+Thus the localization of $S$ at any minimal prime is regular.
+
Next, if $\mathfrak{q} \in \spec S$ is such that $S_{\mathfrak{q}}$ has
positive dimension, then $R_{\mathfrak{p}} \to S_{\mathfrak{q}}$ (where
$\mathfrak{p}$ is as above) is local-\'etale and consequently $\dim
R_{\mathfrak{p}} = \dim S_{\mathfrak{q}} > 0$.
+Thus,
+$\depth R_{\mathfrak{p}} = \depth S_{\mathfrak{q}} >0$ because $R$ was reduced.
+It follows that the above criterion is valid for $S$.
+
+
+Recall that a noetherian ring is a \emph{normal} domain if it is integrally closed
+in its quotient field, and simply \emph{normal} if all its localizations are
+normal domains; this equates to the ring being a product of normal domains.
+We want to show that this is preserved under \'etaleness.
+To do this, we shall use a criterion similar to that used at the end of the
+last section.
+We have the following important criterion for normality.
+
+\begin{theorem*}[Serre] Let $A$ be a noetherian ring. Then $A$ is normal if
and only if for all $\mathfrak{p} \in \spec A$:
+\begin{enumerate}
+\item If $\dim A_{\mathfrak{p}} \leq 1$, then $A_{\mathfrak{p}}$ is regular.
+\item If $\dim A_{\mathfrak{p}} \geq 2$, then $\depth A_{\mathfrak{p}} \geq 2$.
+\end{enumerate}
+\end{theorem*}
+This is discussed in \cref{realserrecrit}.
+
+From this, we will be able to prove without difficulty the next result.
+\begin{proposition} \label{normalitypreserved}
+Let $(R, \mathfrak{m}) \to (S, \mathfrak{n})$ be a local-\'etale morphism of noetherian local
+rings.
+Suppose $S$ is essentially of finite type over $R$.
+Then $S$ is normal if and only if $R$ is normal.
+\end{proposition}
+\begin{proof}
+This is proved in the same manner as the result for reducedness was proved at
+the end of the previous subsection.
+For instance, suppose $R$ normal. Let $\mathfrak{q} \in \spec S$ be arbitrary,
+contracting to $\mathfrak{p} \in \spec R$. If $\dim S_{\mathfrak{q}} \leq 1$,
+then $\dim R_{\mathfrak{p}} \leq 1$ so that $R_{\mathfrak{p}}$, hence
+$S_{\mathfrak{q}}$ is regular. If $\dim S_{\mathfrak{q}} \geq 2$, then $\dim
+R_{\mathfrak{p}} \geq 2$, so
+$\depth S_{\mathfrak{q}} = \depth R_{\mathfrak{p}} \geq 2$.
+\end{proof}
+
+We mention a harder result:
+
+\begin{theorem}
+\label{injunrflat}
+If $f:(R, \mathfrak{m}) \to (S, \mathfrak{n})$ is local-unramified, injective,
+and essentially of finite type, with $R$ normal and noetherian, then $R \to S$ is
+local-\'etale.
+Thus, an injective unramified morphism of finite type between noetherian rings,
+whose source is a normal domain, is \'etale.
+\end{theorem}
+
+A priori, it is not obvious at all that $R \to S$ should be flat. In fact,
+proving flatness directly seems to be difficult, and we will have to use the
+local structure theory for \emph{unramified} morphisms together with nontrivial
+facts about \'etale morphisms to establish this result.
+\begin{proof}
+We essentially follow \cite{Mi67} in the proof.
+Clearly, only the local statement needs to be proved.
+
+We shall use the (non-elementary, relying on ZMT) structure theory of unramified morphisms,
+which implies that there is a factorization of $R \to S$ via
+\[ (R, \mathfrak{m}) \stackrel{g}{\to} (T, \mathfrak{q}) \stackrel{h}{\to} (S, \mathfrak{n}), \]
+where all morphisms are local homomorphisms of local rings, $g: R \to T$ is
+local-\'etale and essentially of finite type, and $h:T \to S$ is surjective.
+This was established in \cref{}.
+
+We are going to show that $h$ is an isomorphism, which will complete the proof.
+Let $K$ be the quotient field of $R$.
+Consider the diagram
+\[ \xymatrix{
+R \ar[d] \ar[r]^g & T \ar[r]^h \ar[d] & S \ar[d] \\
+K \ar[r]^{g \otimes 1} & T \otimes_R K \ar[r]^{h \otimes 1}& S
+\otimes_R K.
+}\]
+Now the strategy is to show that $h$ is injective.
+We will prove this by chasing around the diagram.
+
+Here $R \to S$ is formally unramified and essentially of finite type, so $K \to S
+\otimes_R K$ is too, and $S \otimes_R K$ is in particular a finite product of
+separable extensions of $K$. The claim is that it is nonzero; this follows
+because $f: R \to S$ is injective, and $S \to S \otimes_R K$ is injective
+because localization is exact. Consequently $R \to S \otimes_R K$ is injective,
+and the target must be nonzero.
+
+As a result, the surjective map $h \otimes 1: T \otimes_R K \to S \otimes_R K$
+is nonzero. Now we claim that $T \otimes_R K$ is a field. Indeed, it is an \'etale extension
+of $K$ (by base-change), so it is a product of fields. Moreover, $T$ is a
+normal domain since $R$ is (by \cref{normalitypreserved}) and $R \to T$ is injective by flatness,
+so the localization $T \otimes_R K$ is a domain as well.
+Thus it must be a field. In particular, the map $h \otimes 1: T \otimes_R K \to
+S \otimes_R K$ is a surjection from a field to a product of fields. It is thus
+an \emph{isomorphism.}
+
+Finally, we can show that $h$ is injective. Indeed, it suffices to show that
+the composite $T \to T \otimes_R K \to S \otimes_R K$ is injective. But the
+first map is injective as it is a map from a domain to a localization, and the
+second is an isomorphism (as we have just seen). So $h$ is injective, hence an
+isomorphism. Thus $T \simeq S$, and we are done.
+\end{proof}
+
+Note that this \emph{fails} if the source is not normal.
+\begin{example}
Consider a nodal cubic $C$ given by $y^2 = x^2 (x-1)$ in $\mathbb{A}^2_k$ over an
algebraically closed field $k$ of characteristic $\neq 2$. As is well-known, this curve is smooth except
+at the origin. There is a map $\overline{C} \to C$ where $\overline{C}$ is
+the normalization; this is a finite map, and a local isomorphism outside of
+the origin.
+
+The claim is that $\overline{C} \to C$ is unramified but not \'etale. If it
+were \'etale, then $C$ would be smooth since $\overline{C}$ is. So it is not
+\'etale. We just need to see that it is unramified, and for this we need only
+see that the map is unramified at the origin.
+
+We may compute: the normalization of $C$ is given by $\overline{C} =
+\mathbb{A}^1_k$, with the map
+\[ t \mapsto (t^2+1, t (t^2 + 1)). \]
Now the two points $t = \pm i$, where $i \in k$ is a square root of $-1$, are
both mapped to the origin.
We will show that
\[ \mathcal{O}_{C, 0} \to \mathcal{O}_{\mathbb{A}^1_k, i} \]
is local-unramified; the case of $-i$ is similar.
Indeed, any line through the origin which is not a tangent direction will be
something in $\mathfrak{m}_{C, 0}$ that is mapped to a uniformizer in
$\mathcal{O}_{\mathbb{A}^1_k, i}$.
For instance, the local function $x \in \mathcal{O}_{C,0}$ is mapped to
the function $t \mapsto t^2 + 1$ on $\mathbb{A}^1_k$, which has a simple zero
at $i$ (and at $-i$).
It follows that the maximal ideal $\mathfrak{m}_{C,0}$ generates the maximal
ideal of $\mathcal{O}_{\mathbb{A}^1_k, i}$ (and similarly for $-i$).
+\end{example}
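As a sanity check (an elaboration, not part of the example above), the parametrization does land on $C$: with $x = t^2 + 1$ and $y = t(t^2+1)$,
\[ y^2 = t^2 (t^2+1)^2 = (t^2+1)^2 \bigl( (t^2+1) - 1 \bigr) = x^2(x-1). \]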
+
+\subsection{Application to smooth morphisms}
+
+We now want to show that the class of \'etale morphisms essentially determines
+the class of smooth morphisms. Namely, we are going to show that
+smooth morphisms are those that look \'etale-locally like \'etale morphisms
+followed by projection from affine space. (Here ``projection from affine
+space'' is the geometric picture: in terms of commutative rings, this is the
+embedding $A \hookrightarrow A[x_1, \dots, x_n]$.)
+
+Here is the first goal:
+\begin{theorem}
+Let $f: (A, \mathfrak{m}) \to (B, \mathfrak{n})$ be an essentially of finite
+presentation, local morphism of local rings.
+Then $f$ is formally smooth
+if and only if there exists a factorization
+\[ A \to C \to B \]
+where $(C, \mathfrak{q})$ is a localization of the polynomial ring $A[X_1,
+\dots, X_n]$ at a prime ideal with $A \to C$ the natural embedding, and $C \to
+B$ a formally \'etale morphism.
+\end{theorem}
+
+For convenience, we have stated this result for local rings, but we can get a
+more general criterion as well (see below). This states that smooth
+morphisms, \'etale locally, look like the imbedding of a ring into a
+polynomial ring.
+In \cite{SGA1}, this is in fact how smooth morphisms are \emph{defined.}
+
\begin{proof} First assume $f$ formally smooth.
+We know then that $\Omega_{B/A}$ is a finitely generated projective $B$-module,
+hence free, say of rank $n$.
There are $t_1, \dots, t_n \in B$ such that $\left\{dt_i\right\}$ forms a basis
for $\Omega_{B/A}$: namely, choose elements whose differentials form a basis
for $\Omega_{B/A} \otimes_B B/\mathfrak{n}$; by Nakayama these differentials
generate $\Omega_{B/A}$, and $n$ generators of a free module of rank $n$ form
a basis.
+
+Now these elements $\left\{t_i\right\}$ give a map of rings $A[X_1, \dots, X_n]
+\to B$. We let $\mathfrak{q}$ be the pre-image of $\mathfrak{n}$ (so
+$\mathfrak{q}$ contains the image of $\mathfrak{m} \subset A$), and take $C =
+A[X_1,\dots, X_n]_{\mathfrak{q}}$. This gives local homomorphisms $A \to C,
+C \to B$. We only need to check that $C \to B$ is \'etale.
+But the map
+\[ \Omega_{C/A} \otimes_C B \to \Omega_{B/A} \]
+is an isomorphism, by construction. Since $C, B$ are both formally smooth over
+$A$, we find that $C \to B$ is \'etale by the characterization of \'etaleness
+via cotangent vectors
+(\cref{etalecotangent}).
+
+The other direction, that $f$ is formally smooth if it admits such a
+factorization, is clear because the localization of a polynomial algebra is
+formally smooth, and a formally \'etale map is clearly formally smooth.
+\end{proof}
+
+\begin{corollary}
+Let $(R, \mathfrak{m}) \to (S, \mathfrak{n})$ be a formally smooth, essentially
+of finite type morphism of noetherian rings. Then if $R$ is normal, so is $S$.
+Ditto for reduced.
+\end{corollary}
+\begin{proof}
+
+\end{proof}
+
+\subsection{Lifting under nilpotent extensions}
+
+In this subsection, we consider the following question. Let $A$ be a ring, $I
+\subset A$ an ideal of square zero, and let $A_0 = A/I$. Suppose $B_0$ is a
+flat $A_0$-algebra (possibly satisfying other conditions).
+Then, we ask if there exists a flat $A$-algebra $B$ such that $B_0 \simeq B
+\otimes_A A_0 = B/IB$.
+If there is, we say that $B_0$ can be \emph{lifted} along the nilpotent
+thickening to $B$---we think of $B$ as mostly the same as $B_0$,
+but with some additional ``fuzz'' (given by the additional nilpotents).
+
+We are going to show that this can \emph{always} be done for \'etale
+algebras, and that this always can be done \emph{locally} for smooth
+algebras. As a result, we will get a very simple characterization of what
+finite \'etale algebras over a complete (and later, henselian) local ring look like:
+they are the same as \'etale extensions of the residue field (which we have
+classified completely).
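+
+For instance, granting this, the finite \'etale algebras over $\mathbb{Z}_p$
+are determined by the \'etale algebras over $\mathbb{F}_p$, i.e.\ by finite
+products of finite (automatically separable) extensions of $\mathbb{F}_p$.
+Writing $W(\mathbb{F}_{p^n})$ for the ring of Witt vectors of
+$\mathbb{F}_{p^n}$, i.e.\ the unramified extension of $\mathbb{Z}_p$ of degree
+$n$, the correspondence reads
+\[ \prod_i \mathbb{F}_{p^{n_i}} \longleftrightarrow \prod_i W(\mathbb{F}_{p^{n_i}}). \]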
+
+In algebraic geometry, one spectacular application of these ideas is
+Grothendieck's proof in \cite{SGA1} that a smooth projective curve over a field
+of characteristic $p$ can be ``lifted'' to characteristic zero. The idea is to
+lift it successively along nilpotent thickenings of the base, bit by bit
+(for instance, along the successive thickenings $\mathbb{Z}/p^n \mathbb{Z}$ of
+$\mathbb{Z}/p\mathbb{Z}$),
+by using the techniques of this subsection; then, he uses hard existence
+results in formal geometry to show that this compatible system of nilpotent
+thickenings comes from a curve over a DVR (e.g.\ the ring of $p$-adic integers). The
+application in mind is the (partial) computation of the \'etale fundamental
+group of a smooth projective curve over a field of positive characteristic.
+We will only develop some of the more basic ideas in commutative algebra.
+
+Namely, here is the main result.
+For a ring $A$, let $\et(A)$ denote the category of \'etale $A$-algebras (and
+$A$-morphisms). Given $A \to A'$, there is a natural functor $\et(A) \to
+\et(A')$ given by base-change.
+\begin{theorem}
+Let $A \to A_0$ be a surjective morphism whose kernel is nilpotent. Then
+$\et(A) \to \et(A_0)$ is an equivalence of categories.
+\end{theorem}
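+
+As a simple instance of the theorem:
+\begin{example}
+Take $A = k[\epsilon]/(\epsilon^2)$ for $k$ a field, and $A_0 = k$. The theorem
+says that base change along $A \to k$, $\epsilon \mapsto 0$, gives an
+equivalence $\et(A) \simeq \et(k)$. Since the \'etale $k$-algebras are the
+finite products of finite separable extensions of $k$, every such product lifts
+uniquely to an \'etale algebra over the dual numbers. For $k = \mathbb{R}$,
+say, the \'etale $\mathbb{R}[\epsilon]/(\epsilon^2)$-algebras are, up to
+isomorphism, the base changes of finite products of copies of $\mathbb{R}$ and
+$\mathbb{C}$.
+\end{example}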
+
+$\spec A$ and $\spec A_0$ are identical topologically, so this result is
+sometimes called the topological invariance of the \'etale site.
+Let us sketch the idea before giving the proof. Full faithfulness is the easy
+part, and is essentially a restatement of the nilpotent lifting property.
+The essential surjectivity is the non-elementary part, and relies on the local
+structure theory. Namely, we will show that a standard \'etale morphism can be
+lifted (this is essentially trivial). Since an \'etale morphism is locally
+standard \'etale, we can \emph{locally} lift an \'etale $A_0$-algebra to an
+\'etale $A$-algebra.
+We next ``glue'' the local liftings using the full faithfulness.
+\begin{proof} Without loss of generality, we can assume that the ideal defining
+$A_0$ has square zero.
+Let $B, B'$ be \'etale $A$-algebras. We need to show that
+\[ \hom_A(B, B') = \hom_{A_0}(B_0, B_0'), \]
+where $B_0, B_0'$ denote the reductions to $A_0$ (i.e. the base change).
+But $\hom_{A_0}(B_0, B_0') = \hom_{A}(B, B_0')$, and this is the same
+as $\hom_A(B, B')$ by formal \'etaleness of $B$ over $A$, applied to the
+square-zero extension $B' \twoheadrightarrow B_0'$. So full
+faithfulness is automatic.
+
+The trickier part is to show that any \'etale $A_0$-algebra can be lifted
+to an \'etale $A$-algebra.
+First, note that a standard \'etale $A_0$-algebra of the form
+$(A_0[X]/(P(X)))_{Q}$ can be lifted to $A$---just lift $P$ and $Q$. The condition
+that it be standard \'etale is invertibility of $P'$, which is unaffected by
+nilpotents.
+
+Now the strategy is to glue these appropriately.
+The details should be added at some point, but they are not. \add{details}
+\end{proof}
+
+
+
diff --git a/books/cring/factorization.tex b/books/cring/factorization.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f42f70316e3b978a9a0263467a0f260e029936f1
--- /dev/null
+++ b/books/cring/factorization.tex
@@ -0,0 +1,867 @@
+\chapter{Unique factorization and the class group}
+
+
+Commutative rings in general do not admit unique factorization.
+Nonetheless, for many rings (``integrally closed'' rings), which
+includes the affine coordinate rings one obtains in algebraic geometry when
+one studies smooth varieties, there is an invariant called the ``class
+group'' that measures the failure of unique factorization. This ``class
+group'' is a certain quotient of codimension one primes (geometrically,
+codimension one subvarieties) modulo rational equivalence.
+
+Many even nicer rings have the convenient property that their localizations at prime
+ideals are \emph{factorial}, a key example being the coordinate ring of an affine
+nonsingular variety.
+For these even nicer rings, an alternative method of defining the class group
+can be given: the class group corresponds to the group of isomorphism
+classes of \emph{invertible modules}. Geometrically, such invertible modules
+are line bundles on the associated variety (or scheme).
+
+\section{Unique factorization}
+
+\subsection{Definition}
+We begin with the nicest of all possible cases, when the ring itself admits
+unique factorization.
+
+
+
+Let $R$ be a domain.
+\begin{definition}
+A nonzero element $x \in R$ is \textbf{prime} if $(x)$ is a prime ideal.
+\end{definition}
+
+In other words, $x$ is not a unit, and if $x \mid ab$, then either $x \mid a$
+or $x \mid b$.
+
+We restate the earlier \cref{earlyUFD} slightly.
+\begin{definition}
+A domain $R$ is \textbf{factorial} (or a \textbf{unique factorization domain},
+or a \textbf{UFD}) if every nonzero noninvertible element $x \in R$ factors as a
+product $ x_1 \dots x_n$ where each $x_i$ is prime.
+\end{definition}
+
+Recall that a \emph{principal ideal domain} is a UFD (\cref{PIDUFD}), as is a
+\emph{euclidean} domain (\cref{EDPID}); actually, a euclidean domain is a PID.
+Previously, we imposed something seemingly slightly stronger: that the
+factorization be unique. We next show that we get that for free.
+
+\begin{proposition}[The fundamental theorem of arithmetic]
+This factorization is essentially unique: it is unique up to reordering the
+factors and multiplication by units.
+\end{proposition}
+\begin{proof} Let $x \in R$ be a nonunit.
+Say $x = x_1 \dots x_n = y_1 \dots y_m$ were two different prime
+factorizations. Then $m,n>0$.
+
+We have that $x_1 \mid y_1 \dots y_m$, so $x_1 \mid y_i$ for some $i$. But
+$y_i$ is prime, so, as $x_1$ is not a unit, $x_1$ and $y_i$ differ by a unit.
+Cancelling this common factor from both sides yields a pair of shorter factorizations.
+Namely, we find that
+\[ x_2 \dots x_n = y_1 \dots \hat{y_i} \dots y_m \]
+and then we can induct on the number of factors.
+\end{proof}
+
+The motivating example is of course:
+\begin{example}
+$\mathbb{Z}$ is factorial. This is the fundamental theorem of arithmetic, and
+follows because $\mathbb{Z}$ is a euclidean domain. The same observation
+applies to a polynomial ring over a field by \cref{polyringED}.
+\end{example}
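+
+For contrast, the standard nonexample:
+\begin{example}
+The domain $\mathbb{Z}[\sqrt{-5}]$ is \emph{not} factorial. Indeed,
+\[ 6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}), \]
+and using the norm $a + b\sqrt{-5} \mapsto a^2 + 5b^2$ one checks that $2$,
+$3$, and $1 \pm \sqrt{-5}$ are irreducible and that no two of them are
+associates. In particular, $2$ is irreducible but not prime: it divides
+$(1 + \sqrt{-5})(1 - \sqrt{-5})$ without dividing either factor.
+\end{example}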
+
+\subsection{Gauss's lemma}
+
+We now show that factorial rings are closed under the operation of forming
+polynomial rings.
+
+\begin{theorem}[Gauss's lemma]
+If $R$ is factorial, so is the polynomial ring $R[X]$.
+\end{theorem}
+In general, if $R$ is a PID, $R[X]$ will \emph{not} be a PID. For instance,
+$\mathbb{Z}[X]$ is not a PID: the prime ideal $(2, X)$ is not principal, since
+a generator would have to divide $2$, hence be $\pm 1$ or $\pm 2$, and none of
+these generates $(2, X)$.
+
+\begin{proof}
+In the course of this proof, we shall identify the prime elements in $R[X]$.
+We start with a lemma that allows us to compare factorizations in $K[X]$ (for
+$K$ the quotient field) and $R[X]$; the advantage is that we already know the
+polynomial ring over a \emph{field} to be a UFD.
+\begin{lemma} Suppose $R$ is a unique factorization domain with
+quotient field $K$.
+Suppose $f \in R[X]$ is irreducible in $R[X]$ and there is no nontrivial common divisor of
+the coefficients of $f$. Then $f$ is irreducible in $K[X]$.
+\end{lemma}
+With this in mind, we say that a polynomial in $R[X]$ is \textbf{primitive} if
+the coefficients have no common divisor in $R$.
+
+\begin{proof} Indeed, suppose we had a factorization
+\[ f = gh, \quad g, h \in K[X], \]
+where $g,h$ have degree $\geq 1$.
+Then we can clear denominators to find a factorization
+\[ rf = g' h' \]
+where $r \in R - \left\{0\right\}$ and $g', h' \in R[X]$. By clearing
+denominators as little as possible, we may assume that $g',h'$ are primitive.
+To be precise, we divide $g', h'$ by their \emph{contents.} Let us define:
+
+\begin{definition}
+The \textbf{content} $\cont (f)$ of a polynomial $f \in R[X]$ is the greatest common
+divisor of its coefficients. The content of an element $f$ in $K[X]$ is defined
+by considering $r \in R$ such that $rf \in R[X]$, and taking $\cont(rf)/r$.
+This is well-defined, modulo elements of $R^*$, and we have $\cont(sf) = s \cont f$ if $s \in K$.
+\end{definition}
+
+To say that the content lies in $R$ is to say that the polynomial is in $R[X]$;
+to say that the content is a unit is to say that the polynomial is primitive.
+Note that a monic polynomial in $R[X]$ is primitive.
+
+So we have:
+\begin{lemma}
+Any element of $K[X]$ is a product of $\cont(f)$ and something primitive in
+$R[X]$.
+\end{lemma}
+\begin{proof}
+Indeed, $f/\cont(f)$ has content a unit, so it cannot have anything in the
+denominator: if it had a term $(r/p^i) X^n$ with $r, p \in R$, $p$ prime, $i >
+0$, and $p \nmid r$, then the content would divide $r/p^i$ and hence could not
+lie in $R$, contradicting the fact that it is a unit.
+\end{proof}
+
+\begin{lemma}
+$\cont(fg) = \cont(f) \cont(g)$ if $f,g \in K[X]$.
+\end{lemma}
+\begin{proof}
+By dividing $f,g$ by their contents, it suffices to show that the product of
+two primitive polynomials in $R[X]$ (i.e. those with no common divisor of all
+their coefficients) is itself primitive. Indeed, suppose $f,g$ are primitive
+and $p \in R$ is a prime. Then $\overline{f}, \overline{g} \in R/(p)[X]$ are
+nonzero. Their product $\overline{f}\overline{g}$ is also not zero because
+$R/(p)[X]$ is a domain, $p$ being prime. In particular, $p$ is not a common
+factor of the coefficients of $fg$. Since $p$ was arbitrary, this completes the
+proof.
+\end{proof}
+
+Now return to the main proof. We know that $f = gh$. We divided $g,h$ by their
+contents to get $g', h' \in R[X]$. We had then
+\[ r f = g' h', \quad r \in K^*. \]
+Taking the contents, and using the fact that $f, g', h'$ are primitive, we have then:
+\[ r = \cont(g') \cont(h') = 1 \quad \mathrm{(modulo \ } R^*). \]
+But then $f = r^{-1} g' h'$ shows that $f$ is not irreducible in $R[X]$,
+contradiction.
+\end{proof}
+
+
+Let $R$ be a ring. Recall that an element is \textbf{irreducible} if it is a
+nonzero nonunit admitting no nontrivial factorization. The product of an
+irreducible element and a unit is irreducible.
+Call a ring \textbf{finitely irreducible} if every nonzero nonunit in the ring
+admits a factorization into finitely many irreducible elements.
+
+\begin{lemma}
+A ring $R$ is finitely irreducible if every ascending sequence of
+\emph{principal} ideals in $R$ stabilizes.
+\end{lemma}
+A ring such that every ascending sequence of ideals (not necessarily
+principal) stabilizes is said to be \emph{noetherian;} this is a highly useful
+finiteness condition on a ring.
+\begin{proof}
+Suppose $R$ satisfies the ascending chain condition on principal ideals. Then
+let $x \in R$. We would like to show it can be factored as a product of
+irreducibles.
+So suppose $x$ is not the product of finitely many irreducibles. In particular,
+it is reducible: $x = x_1 x_1'$, where neither factor is a unit. One of these
+factors cannot be written as a finite product of irreducibles; say it is $x_1$.
+Similarly, we can write $x_1 = x_2 x_2'$ where one of the factors, wlog $x_2$,
+is not the product of finitely many irreducibles. Repeating inductively gives
+the ascending sequence
+\[ (x) \subset (x_1) \subset (x_2) \subset \dots, \]
+and since each factorization is nontrivial, the inclusions are each nontrivial.
+This is a contradiction.
+\end{proof}
+
+\begin{lemma}
+Suppose $R$ is a UFD. Then every ascending sequence of principal ideals in
+$R[X]$ stabilizes. In particular, $R[X]$ is finitely irreducible.
+\end{lemma}
+\begin{proof}
+Suppose $(f_1) \subset (f_2) \subset \dots \in R[X]$. Then each $f_{i+1} \mid
+f_{i}$. In particular, the degrees of $f_i$ are nonincreasing, and consequently
+stabilize. Thus for $i \gg 0$, we have $\deg f_{i+1} = \deg f_i$.
+We can thus assume that all the degrees are the same. In this case, if $i \gg
+0$ and $k>0$,
+$f_{i}/f_{i+k} \in R[X]$ must actually lie in $R$ as $R$ is a domain.
+In particular, throwing out the first few elements in the sequence if
+necessary, it follows that our sequence looks like
+\[ f, f/r_1, f/(r_1r_2), \dots \]
+where the $r_i \in R$. However, unless $f = 0$ (in which case the sequence of
+ideals is constant anyway), only finitely many of the $r_i$ can fail to be
+units, since $R$ is a UFD.
+So the sequence of ideals stabilizes.
+\end{proof}
+
+\begin{lemma}
+Every nonzero nonunit in $R[X]$ can be factored into a product of irreducibles.
+\end{lemma}
+\begin{proof} Now evident from the preceding lemmata.
+\end{proof}
+Suppose $P$ is an irreducible element in $R[X]$. I claim that $P$ is prime.
+There are two cases:
+\begin{enumerate}
+\item $P \in R$ is a prime in $R$. Then we know that $P \mid f$ if and only if
+the coefficients of $f$ are divisible by $P$. In particular, $P \mid f$ iff $P
+\mid \cont (f)$. It is now clear that $P \mid fg$ if and only if $P$ divides
+one of $\cont(f), \cont(g)$ (since $\cont(fg) = \cont(f) \cont(g)$).
+\item $P$ does not belong to $R$. Then $P$ must have content a unit or it would
+be divisible by its content.
+So $P$ is irreducible in $K[X]$ by the above reasoning.
+
+Say we have an expression
+\[ P \mid fg, \quad f,g \in R[X]. \]
+Since $P$ is irreducible, hence prime, in the UFD (even PID) $K[X]$, we have
+that $P $ divides one of $f,g$ in $K[X]$. Say we can write
+\[ f = q P , q \in K[X]. \]
+Then taking the content shows that $\cont(q) = \cont(f) \in R$, so $q \in
+R[X]$. It follows that $P \mid f$ in $R[X]$.
+\end{enumerate}
+
+We have shown that every element in $R[X]$ factors into a product of prime
+elements. From this, it is clear that $R[X]$ is a UFD.
+\end{proof}
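+
+A small worked illustration of contents in $\mathbb{Z}[X]$:
+\begin{example}
+The polynomial $f = 6X^2 + 10X + 4 \in \mathbb{Z}[X]$ has $\cont(f) = 2$ and
+primitive part $3X^2 + 5X + 2$. Factoring,
+\[ 6X^2 + 10X + 4 = 2\,(3X + 2)(X + 1), \]
+and indeed $\cont\left((3X+2)(X+1)\right) = \cont(3X+2)\cont(X+1) = 1$ (modulo
+units), as the multiplicativity of contents predicts.
+\end{example}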
+
+
+\begin{corollary}
+The polynomial ring $k[X_1, \dots, X_n]$ for $k$ a field is factorial.
+\end{corollary}
+\begin{proof}
+Induction on $n$.
+\end{proof}
+
+\subsection{Factoriality and height one primes}
+
+We now want to give a fancier criterion for a ring to be a UFD, in terms of the
+lattice
+structure on $\spec R$. This will require a notion from dimension theory (to be
+developed more fully later).
+\begin{definition}
+Let $R$ be a domain. A prime ideal $\mathfrak{p} \subset R$ is said to be of
+\textbf{height one} if $\mathfrak{p}$ is minimal among ideals
+containing $x$ for some nonzero $x \in R$.
+\end{definition}
+So a prime of height one is not the zero prime, but it is as close to zero as
+possible, in some sense. When we later talk about dimension theory, we will
+talk about primes of any height. In a sense, $\mathfrak{p}$ is ``almost''
+generated by one element.
+
+\begin{theorem} \label{heightonefactoriality}
+Let $R$ be a noetherian domain. The following are equivalent:
+\begin{enumerate}
+\item $R$ is factorial.
+\item Every height one prime is principal.
+\end{enumerate}
+\end{theorem}
+\begin{proof}
+Let's first show 1) implies 2). Assume $R$ is factorial and $\mathfrak{p}$ is
+height one, minimal containing $(x)$ for some $x \neq 0 \in R$.
+Then $x$ is a nonunit, and it is nonzero, so it has a prime factorization
+\[ x = x_1 \dots x_n, \quad \mathrm{each \ } x_i \ \mathrm{prime}. \]
+Some $x_i \in \mathfrak{p}$ because $\mathfrak{p}$ is prime. In particular,
+\[ \mathfrak{p} \supset (x_i) \supset (x). \]
+But $(x_i)$ is prime itself, and it contains $(x)$. The minimality of
+$\mathfrak{p}$ says that $\mathfrak{p} = (x_i)$.
+
+Conversely, suppose every height one prime is principal. Let $x \in R$ be
+nonzero and a nonunit. We want
+to factor $x$ as a product of primes.
+Consider the ideal $(x) \subsetneq R$; being a proper ideal, $(x)$ is contained in a
+prime ideal. Since $R$ is noetherian, there is a minimal prime ideal
+$\mathfrak{p}$ containing $(x)$. Then $\mathfrak{p}$, being a height one
+prime, is principal---say $\mathfrak{p}=(x_1)$. It follows that $x_1 \mid x$
+and $x_1$ is prime.
+Say
+\[ x = x_1 x_1'. \]
+If $x_1'$ is a nonunit, repeat this process to get $x_1' = x_2 x_2'$ with $x_2$ a prime element.
+Keep going; inductively we have
+\[ x_k' = x_{k+1}x_{k+1}'. \]
+If this process stops, with one of the $x_k'$ a unit, we get a prime
+factorization of $x$. Suppose the process
+continues forever. Then we would have
+\[ (x) \subsetneq (x_1') \subsetneq (x_2') \subsetneq (x_3') \subsetneq \dots, \]
+which is impossible by noetherianness.
+\end{proof}
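+
+An example where the criterion fails:
+\begin{example}
+Let $R = k[x,y,z]/(z^2 - xy)$, the cone over a quadric. The prime
+$\mathfrak{p} = (x, z)$ has height one: it is minimal over $(x)$, since modulo
+$x$ the defining relation reads $z^2 = 0$, so every prime containing $x$
+contains $z$. But $\mathfrak{p}$ is not principal (it needs two generators even
+modulo the square of the maximal ideal at the origin), so $R$ is not factorial.
+Concretely, $z \cdot z = x \cdot y$ exhibits $z$ as an irreducible element that
+is not prime.
+\end{example}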
+
+We have seen that unique factorization can be formulated in terms of prime
+ideals.
+
+
+\subsection{Factoriality and normality}
+
+We next state a generalization of the ``rational root theorem'' as in high
+school algebra.
+\begin{proposition} \label{factorialimpliesnormal}
+A factorial domain is integrally closed.
+\end{proposition}
+
+\begin{proof}
+\add{proof -- may be in the queue already}
+\end{proof}
+
+\section{Weil divisors}
+
+\label{weildivsec}
+\subsection{Definition}
+We start by discussing Weil divisors.
+\begin{definition}
+A \textbf{Weil divisor} for $R$ is a formal linear combination $\sum n_{i}
+[\mathfrak{p}_i]$ where the $\mathfrak{p}_i$ range over height one primes of
+$R$. So the group of Weil divisors is the free abelian group on the height one
+primes of $R$. We denote this group by $\weil(R)$.
+\end{definition}
+
+
+The geometric picture behind Weil divisors is that a Weil divisor is like a
+hypersurface: a subvariety of codimension one.
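+
+A basic example:
+\begin{example}
+For $R = \mathbb{Z}$, the height one primes are the ideals $(p)$ for $p$ a
+prime number, so a Weil divisor is a formal sum $\sum n_p [(p)]$. A nonzero
+integer such as $12 = 2^2 \cdot 3$ determines the divisor $2[(2)] + [(3)]$, a
+formal record of its prime factorization.
+\end{example}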
+
+\subsection{Valuations}
+
+
+
+\subsection{Nagata's lemma} We finish with a fun application of the exact
+sequence of Weil divisors to a purely algebraic statement about factoriality.
+
+\begin{lemma}
+Let $A$ be a normal noetherian domain and $x \in A - \left\{0\right\}$. Then
+there is an exact sequence
+\[ \bigoplus \mathbb{Z} \to \mathrm{Cl}(A) \to \mathrm{Cl}(A_x) \to 0, \]
+where the direct sum is over the height one primes containing $x$, and the
+first map sends a generator to the class of the corresponding prime.
+\end{lemma}
+
+\begin{theorem}
+Let $A$ be a noetherian domain, $x \in A-\left\{0\right\}$. Suppose $(x)$ is
+prime and $A_x$ is factorial. Then $A$ is factorial.
+\end{theorem}
+\begin{proof}
+We first show that $A$ is normal (hence regular in codimension one).
+Indeed, $A_x$ is normal. So if $t \in K(A)$ is integral over $A$, it lies in
+$A_x$.
+So we need to check that if $a/x^n \in A_x$ is integral over $A$ and $x \nmid
+a$, then $n=0$.
+Suppose we had an equation
+\[ (a/x^n)^N + b_1 (a/x^n)^{N-1} + \dots + b_N = 0. \]
+Multiplying both sides by $x^{nN}$ gives that, if $n > 0$,
+\[ a^N \in xA, \]
+so $x \mid a$ by primality, a contradiction. Thus $n = 0$, and $A$ is normal.
+
+Now we use the exact sequence
+\[ (x) \to \mathrm{Cl}(A) \to \mathrm{Cl}(A_x) \to 0. \]
+The end is zero since $A_x$ is factorial, and the image of the first map is
+zero since $(x)$ is principal. So $\mathrm{Cl}(A)=0$. Thus $A$ is a UFD.
+\end{proof}
+
+
+
+
+
+\section{Locally factorial domains}
+
+\subsection{Definition}
+\begin{definition}
+A noetherian domain $R$ is said to be \textbf{locally factorial} if
+$R_{\mathfrak{p}}$ is factorial for each $\mathfrak{p}$ prime.
+\end{definition}
+
+\begin{example}
+The coordinate ring $\mathbb{C}[x_1, \dots, x_n]/I$ of an algebraic variety is
+locally factorial if the variety is smooth. We may talk about this later.
+\end{example}
+
+\begin{example}[Nonexample]
+Let $R$ be $\mathbb{C}[A,B,C,D]/(AD - BC)$. The maximal ideals in the spectrum
+of $R$ correspond to 2-by-2 complex matrices of determinant zero. This variety
+is very singular at the origin; in fact, $R$ is not even locally factorial at
+the origin.
+
+The failure of unique factorization comes from the fact that
+\[ AD = BC \]
+in this ring $R$. This is a prototypical example of a ring without unique
+factorization. The reason has to do with the fact that the variety has a
+singularity at the origin.
+\end{example}
+
+\subsection{The Picard group}
+
+\begin{definition}
+Let $R$ be a commutative ring. An $R$-module $I$ is \textbf{invertible} if
+there exists $J$ such that
+\[ I \otimes_R J \simeq R. \]
+Invertibility is with respect to the tensor product.
+\end{definition}
+
+\begin{remark} \label{linebundremark}
+In topology, one is often interested in classifying \emph{vector bundles} on
+spaces. In algebraic geometry, a module $M$ over a ring $R$ gives (as in
+\cref{}) a sheaf of abelian groups over the topological space $\spec R$; this
+is supposed to be an analogy with the theory of vector bundles. (It is not so
+implausible since the Serre-Swan theorem (\cref{}) gives an equivalence of
+categories between the vector bundles over a compact space $X$ and the
+projective modules over the ring $C(X)$ of continuous functions.)
+In this analogy, the invertible modules are the \emph{line bundles}.
+The definition has a counterpart in the topological setting: for instance, a
+vector bundle $\mathcal{E} \to X$ over a space $X$ is a line bundle (that is,
+of rank one) if and only if there is a vector bundle $\mathcal{E}' \to X$ such
+that $\mathcal{E} \otimes \mathcal{E}'$ is the trivial bundle $X \times
+\mathbb{R}$.
+\end{remark}
+
+There are many equivalent characterizations.
+
+\begin{proposition}
+Let $R$ be a ring, $I$ an $R$-module. TFAE:
+\begin{enumerate}
+\item $I$ is invertible.
+\item $I$ is finitely generated and $I_{\mathfrak{p}} \simeq R_{\mathfrak{p}}$ for all primes
+$\mathfrak{p} \subset R$.
+\item $I$ is finitely generated and there exist $a_1, \dots, a_n \in R$ which generate $(1)$
+in $R$ such that
+\[ I[a_i^{-1}]\simeq R[a_i^{-1}]. \]
+\end{enumerate}
+\end{proposition}
+\begin{proof}
+First, we show that if $I$ is invertible, then $I$ is finitely generated.
+Suppose $I \otimes_R J \simeq R$. This means that $1 \in R$ corresponds to an
+element
+\[ \sum i_k \otimes j_k \in I \otimes_R J . \]
+Thus, there exists a finitely generated submodule $I_0\subset I$ such that the map $I_0 \otimes J \to I
+\otimes J$ is surjective. Tensor this with $I$, so we get a surjection
+\[ I_0 \simeq I_0 \otimes J \otimes I \to I \otimes J \otimes I \simeq I \]
+which leads to a surjection $I_0 \twoheadrightarrow I$. This implies that $I$
+is finitely generated.
+
+\textbf{Step 1: 1 implies 2.}
+We now show 1 implies 2. Note that if $I$ is invertible, then $I \otimes_R R'$
+is an invertible $R'$ module for any $R$-algebra $R'$; to get an inverse of
+$I \otimes_R R'$,
+tensor the inverse of $I$ with $R'$.
+In particular, $I_{\mathfrak{p}}$ is an invertible $R_{\mathfrak{p}}$-module
+for each $\mathfrak{p}$. As a result,
+\[ I_{\mathfrak{p}}/\mathfrak{p} I_{\mathfrak{p}} \]
+is invertible over the \emph{field} $R_{\mathfrak{p}}/\mathfrak{p}R_{\mathfrak{p}}$. This means
+that
+$ I_{\mathfrak{p}}/\mathfrak{p} I_{\mathfrak{p}}$ is a one-dimensional vector
+space over the residue field. (The invertible modules over a field are
+the one-dimensional vector spaces.)
+Choose an element $x \in I_{\mathfrak{p}}$ which generates
+$I_{\mathfrak{p}}/\mathfrak{p}I_{\mathfrak{p}}$. Since $I_{\mathfrak{p}}$ is
+finitely generated, Nakayama's lemma shows that $x$ generates $I_{\mathfrak{p}}$.
+
+We get a surjection $\alpha: R_{\mathfrak{p}} \twoheadrightarrow I_{\mathfrak{p}}$
+carrying $1 \to x$. We claim that this map is injective.
+This will imply that $I_{\mathfrak{p}}$ is free of rank 1. So, let $J$ be an
+inverse of $I$ among $R$-modules, so that $I \otimes_R J = R$; the same
+argument as above provides a surjection
+$ \beta: {R}_{\mathfrak{p}} \to J_{\mathfrak{p}}$.
+Then $\beta' = \beta \otimes 1_{I_{\mathfrak{p}}}: I_{\mathfrak{p}} \to
+R_{\mathfrak{p}}$ is also a surjection.
+Composing, we get a surjective map
+\[ R_{\mathfrak{p}} \stackrel{\alpha}{\twoheadrightarrow} I_{\mathfrak{p}}
+\stackrel{\beta'}{\twoheadrightarrow} R_{\mathfrak{p}} \]
+whose composite must be multiplication by a unit, since the ring is local. Thus
+the composite is injective and $\alpha$ is injective.
+It follows that $\alpha$ is an isomorphism, so that $I_{\mathfrak{p}}$ is free
+of rank one.
+
+\textbf{Step 2: 2 implies 3.}
+Now we show 2 implies 3. Suppose $I$ is finitely generated with generators $\left\{x_1, \dots, x_n\right\} \subset I$ and $I_{\mathfrak{p}} \simeq
+R_{\mathfrak{p}}$ for all $\mathfrak{p}$. Then for each $\mathfrak{p}
+$, we can choose an element $x$ of $I_{\mathfrak{p}}$ generating
+$I_{\mathfrak{p}}$ as $R_{\mathfrak{p}}$-module.
+By multiplying by the denominator, we can assume that $x \in I$.
+By assumption, we can then find $a_i,s_i \in R$ with
+\[ s_i x_i = a_i x \in I \]
+for some $s_i \notin \mathfrak{p}$, as $x$ generates $I_{\mathfrak{p}}$. This means that $x$ generates $I$ after inverting the $s_i$. It
+follows that $I[1/a] \simeq R[1/a]$ where $a = \prod s_i \notin \mathfrak{p}$.
+In particular, we find that there is an open covering $\{\spec
+R[1/a_{\mathfrak{p}}] \}$ of $\spec R$ (where $a_{\mathfrak{p}} \notin
+\mathfrak{p}$) on which $I$ is isomorphic to $R$.
+To say that these cover $\spec R$ is to say that the $a_{\mathfrak{p}}$
+generate $1$.
+
+Finally, let's do the implication 3 implies 1. Assume that we have the
+situation of $I[1/a_i] \simeq R[1/a_i]$. We want to show that $I$ is invertible.
+We start by showing that $I$ is \textbf{finitely presented}. This means that
+there is an exact sequence
+\[ R^m \to R^n \to I \to 0, \]
+i.e. $I$ is the cokernel of a map between free modules of finite rank.
+To see this, first, we've assumed that $I$ is finitely generated. So there is a
+surjection
+\[ R^n \twoheadrightarrow I \]
+with a kernel $K \rightarrowtail R^n$. We must show that $K$ is finitely
+generated. Localization is an exact functor, so $K[1/a_i]$ is the kernel of
+$R[1/a_i]^n \to I[1/a_i]$. However, we have an exact sequence
+\[ K[1/a_i] \rightarrowtail R[1/a_i]^n \twoheadrightarrow R[1/a_i] \]
+by the assumed isomorphism $I[1/a_i] \simeq R[1/a_i]$. But since a free module
+is projective, this sequence splits and we find that $K[1/a_i]$ is finitely
+generated. If it's finitely generated, it's generated by finitely many elements
+in $K$.
+As a result, we find that there is a map
+\[ R^N \to K \]
+such that the localization to $\spec R[1/a_i]$ is surjective. This implies by
+the homework that $R^N \to K$ is surjective.\footnote{To check that a map is
+surjective, just check at the localizations at any maximal ideal.} Thus $K$ is finitely generated.
+
+In any case, we have shown that the module $I$ is finitely presented.
+\textbf{Define} $J = \hom_R(I, R)$ as the candidate for its dual. This
+construction is compatible with localization.
+We can choose a finite presentation $R^m \to R^n \to I \to 0$, which leads to a
+sequence
+\[ 0 \to J \to \hom(R^n, R) \to \hom(R^m, R). \]
+It follows that the formation of $J$ commutes with localization.
+In particular, this argument shows that
+\[ J[1/a] = \hom_{R[1/a]}(I[1/a], R[1/a]). \]
+One can check this by using the description of $J$. By construction, there is a
+canonical map $I \otimes J \to R$.
+I claim that this map is an isomorphism.
+
+For the proof, we use the fact that one can check for an isomorphism locally.
+It suffices to show that
+\[ I[1/a] \otimes J[1/a] \to R[1/a] \]
+is an isomorphism for some collection of $a$'s that generate the unit ideal.
+However, we have $a_1, \dots, a_n$ that generate the unit ideal such that
+$I[1/a_i]$ is free of rank 1, hence so is $J[1/a_i]$. It thus follows that
+the map $I[1/a_i] \otimes J[1/a_i] \to R[1/a_i]$ is an isomorphism.
+\end{proof}
+
+
+\begin{definition}
+Let $R$ be a commutative ring. We define the \textbf{Picard group} $\pic(R)$ to
+be the set of isomorphism classes of invertible $R$-modules. This is an abelian
+group; the addition law is defined so that the sum of the classes represented
+by $M, N$ is $M \otimes_R N$.
+The identity element is given by $R$.
+\end{definition}
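+
+\begin{example}
+If $R$ is a principal ideal domain, then $\pic(R)$ is trivial: an invertible
+module is finitely generated and torsion-free (being locally free of rank one
+over a domain), hence free by the structure theory of modules over a PID, and
+of rank one by localizing at $(0)$. The same vanishing holds, less trivially,
+for any noetherian factorial domain.
+\end{example}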
+
+The Picard group is thus analogous (cf. \cref{linebundremark}) to the set of
+isomorphism classes of line bundles on a topological space (which is also an
+abelian group). While the latter can often be easily computed (for a nice space
+$X$, the line bundles are classified by elements of $H^2(X, \mathbb{Z})$), the
+interpretation in the algebraic setting is more difficult.
+\subsection{Cartier divisors}
+
+Assume furthermore that $R$ is a domain. We now introduce:
+
+\begin{definition}
+A \textbf{Cartier divisor} for $R$ is a submodule $M \subset K(R)$ such that
+$M$ is invertible.
+\end{definition}
+In other words, a Cartier divisor is an invertible fractional ideal.
+Alternatively, it is an invertible $R$-module $M$ with a nonzero map $M \to
+K(R)$. \textbf{ Once this map is nonzero, it is automatically injective,} since
+injectivity can be checked at the localizations, and any module-homomorphism from a domain into
+its quotient field is either zero or injective (because it is multiplication by
+some element).
+
+
+We now make this into a group.
+\begin{definition}
+Given $(M, a: M \hookrightarrow K(R))$ and $(N, b: N \hookrightarrow K(R))$, we
+define the sum to be
+\[ (M \otimes N, a \otimes b: M \otimes N \hookrightarrow K(R)). \]
+The map $a \otimes b$ is nonzero, so by what was said above, it is an injection.
+Thus the Cartier divisors form an abelian group $\cart(R)$.
+\end{definition}
+
+There is thus a natural homomorphism
+\[ \cart(R) \to\pic(R) \]
+mapping $(M, M \hookrightarrow K(R)) \to M$.
+
+\begin{proposition}
+The map $\cart(R) \to \pic(R)$ is surjective. In other words, any invertible
+$R$-module can be embedded in $K(R)$.
+\end{proposition}
+\begin{proof} Let $M$ be an invertible $R$-module.
+Indeed, we know that $M_{(0)} = M \otimes_R K(R)$ is an invertible
+$K(R)$-module, so a one-dimensional vector space over $K(R)$. In particular,
+$M_{(0)} \simeq K(R)$. There is a nonzero map of $R$-modules
+\[ M \to M_{(0) } \simeq K(R), \]
+which is automatically injective by the discussion above.
+\end{proof}
+
+What is the kernel of $\cart(R) \to \pic(R)$? This is the set of Cartier divisors which are
+isomorphic to $R$ itself. In other words, it is the set of $(R, R
+\hookrightarrow K(R))$. This data is the same thing as the data of a nonzero
+element of $K(R)$.
+So the kernel of
+\[ \cart(R) \to \pic(R) \]
+is isomorphic to $K(R)^*$. We have an exact sequence
+\[ K(R)^* \to \cart(R) \to \pic(R) \to 0. \]
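+
+The classical example of a nontrivial Picard group (the computation of the full
+group is a standard fact of algebraic number theory):
+\begin{example}
+Let $R = \mathbb{Z}[\sqrt{-5}]$, so $K(R) = \mathbb{Q}(\sqrt{-5})$. The ideal
+$\mathfrak{p} = (2, 1 + \sqrt{-5})$ is a Cartier divisor: a direct computation
+gives
+\[ \mathfrak{p}^2 = (4,\ 2 + 2\sqrt{-5},\ -4 + 2\sqrt{-5}) = (2), \]
+so $\mathfrak{p} \cdot \tfrac{1}{2}\mathfrak{p} = R$ and $\mathfrak{p}$ is
+invertible. But $\mathfrak{p}$ is not principal, since $R$ has no element of
+norm $2$. Its class in $\pic(R)$ is therefore nontrivial; in fact
+$\pic(R) \simeq \mathbb{Z}/2$, generated by this class.
+\end{example}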
+
+\subsection{Weil divisors and Cartier divisors}
+
+Now, we want to describe $\cart(R)$ when $R$ is ``good.'' The ``goodness'' in
+question is to assume that $R$ is locally factorial, i.e. that
+$R_{\mathfrak{p}}$ is factorial for each $\mathfrak{p}$. This is true, for
+instance, if $R$ is the coordinate ring of a smooth algebraic variety.
+
+
+
+\begin{proposition}
+If $R$ is locally factorial and noetherian, then the group $\cart(R)$ is a free abelian group.
+The generators are in bijection with the height one primes of $R$.
+\end{proposition}
+Now assume that $R$ is a locally factorial, noetherian domain.
+We shall produce an isomorphism
+\[ \weil(R) \simeq \cart(R) \]
+that sends $[\mathfrak{p}_i]$ to the height one prime $\mathfrak{p}_i$
+together with the imbedding $\mathfrak{p}_i \hookrightarrow R \to K(R)$.
+
+We first check that this is well-defined. Since $\weil(R)$ is free, all we have
+to do is check that each $\mathfrak{p}_i$ is a legitimate Cartier divisor. In
+other words, we need to show that:
+
+\begin{proposition}
+If $\mathfrak{p} \subset R$ is a height one prime and $R$ locally factorial, then $\mathfrak{p}$ is
+invertible.
+\end{proposition}
+\begin{proof}
+In the last lecture, we gave a criterion for invertibility: namely, being
+locally trivial. We have to show that for any prime $\mathfrak{q}$, we have
+that $\mathfrak{p}_{\mathfrak{q}}$ is isomorphic to $R_{\mathfrak{q}}$. If
+$\mathfrak{p} \not\subset \mathfrak{q}$, then $\mathfrak{p}_{\mathfrak{q}}$ is
+the entire ring $R_{\mathfrak{q}}$, so this is obvious. Suppose instead that
+$\mathfrak{p} \subset {\mathfrak{q}}$. Then $\mathfrak{p}_{\mathfrak{q}}$ is
+a height one prime of $R_{\mathfrak{q}}$: it is minimal over some element in
+$R_{\mathfrak{q}}$.
+
+Thus $\mathfrak{p}_{\mathfrak{q}}$ is principal, in particular free of rank
+one, since $R_{\mathfrak{q}}$ is factorial. We saw last time that a noetherian
+domain is factorial if and only if every height one prime is principal.
+\end{proof}
+
+We need to define the inverse map
+\[ \cart(R) \to \weil(R). \]
+In order to do this, start with a Cartier divisor $(M, M \hookrightarrow
+K(R))$. We then have to describe which coefficient to assign to each height one
+prime. To do this, we use a local criterion.
+
+Let's first digress a bit.
+Consider a locally factorial domain $R$ and a prime $\mathfrak{p}$ of height
+one. Then $R_{\mathfrak{p}}$ is factorial. In particular, its maximal ideal
+$\mathfrak{p}R_{\mathfrak{p}}$ is height one, so principal.
+It is the principal ideal generated by some $t \in R_{\mathfrak{p}}$.
+Now we show:
+\begin{proposition}
+Every nonzero ideal in $R_{\mathfrak{p}}$ is of the form $(t^n)$ for some unique $n
+\geq 0$.
+\end{proposition}
+\begin{proof}
+Let $I_0 \subset R_{\mathfrak{p}}$ be nonzero. If $I_0 = R_{\mathfrak{p}}$, then
+we're done---it's generated by $t^0$. Otherwise, $I_0 \subsetneq
+R_{\mathfrak{p}}$, so contained in $\mathfrak{p}R_{\mathfrak{p}} = (t)$. So let
+$I_1 = \left\{x \in R_{\mathfrak{p}}: tx \in I_0\right\}$. Thus
+\[ I_1 = t^{-1} I_0. \]
+I claim now that $I_1 \neq I_0$, i.e. that there exists $x \in R_{\mathfrak{p}}$ such that $x
+\notin I_0$ but $tx \in I_0$. The proof comes from the theory of associated
+primes.
+Look at $R_{\mathfrak{p}}/I_0$; it has at least one associated prime as it is
+nonzero.
+
+Since it
+is a torsion module, this associated prime must be
+$\mathfrak{p}R_{\mathfrak{p}}$ since the only primes in $R_{\mathfrak{p}}$
+are $(0)$ and $(t)$, \textbf{which we have not yet shown}. So there exists an
+element of the quotient $R_{\mathfrak{p}}/I_0$ whose annihilator is precisely
+$(t)$. Lifting this gives an element $x \in R_{\mathfrak{p}}$ with $x \notin
+I_0$ but $tx \in I_0$. So $I_0 \subsetneq I_1$.
+
+Proceed as before now. Define $I_2 = \left\{x \in R_{\mathfrak{p}}: tx \in
+I_1\right\}$. This process must halt since we have assumed noetherianness. We
+must have $I_m = I_{m+1}$ for some $m$, which by the above argument forces $I_m =
+R_{\mathfrak{p}}$. It then follows that $I_0 = (t^m)$,
+since $I_i = t I_{i+1}$ for each $i$, so that $I_0 = t^m I_m$.
+\end{proof}
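+
+Here is the basic example to keep in mind.
+\begin{example}
+Take $R = \mathbb{Z}$ and $\mathfrak{p} = (p)$ for $p$ a prime number. Then
+$R_{\mathfrak{p}} = \mathbb{Z}_{(p)}$ consists of fractions with denominator
+prime to $p$, and we may take $t = p$. Any nonzero element of
+$\mathbb{Z}_{(p)}$ can be written as $p^n u$ with $u$ a unit, and the
+proposition recovers the familiar fact that the nonzero ideals of
+$\mathbb{Z}_{(p)}$ are exactly the $(p^n)$, $n \geq 0$.
+\end{example}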
+
+We thus have a good structure theory for ideals in $R$ localized at a height one prime.
+Let us make a more general claim.
+
+\begin{proposition}
+Every nonzero finitely generated $R_{\mathfrak{p}}$-submodule of the fraction field $K(R)$ is of the
+form $(t^n)$ for some $n \in \mathbb{Z}$.
+\end{proposition}
+\begin{proof}
+Say that $M \subset K(R)$ is such a submodule. Let $I = \left\{x \in
+R_{\mathfrak{p}}: x M \subset R_{\mathfrak{p}}\right\}$. Then $I \neq 0$: since
+$M$ is finitely generated, it is generated over $R_{\mathfrak{p}}$ by a finite
+number of fractions $a_i/b_i$ with $b_i \in R$, and the product $b = \prod b_i$
+brings $M$ into $R_{\mathfrak{p}}$.
+
+We know that $I = (t^m)$ for some $m$. In particular, $t^m M$ is a nonzero
+ideal in $R_{\mathfrak{p}}$, so
+\[ t^m M = t^p R_{\mathfrak{p}} \]
+for some $p$; thus $M = t^{p-m}R_{\mathfrak{p}}$.
+
+\end{proof}
+
+Now let's go back to the main discussion. $R$ is a noetherian locally factorial
+domain; we want to construct a map
+\[ \cart(R) \to \weil(R). \]
+Given $(M, M \hookrightarrow K(R))$ with $M$ invertible, we want to define a
+formal sum $\sum n_i [\mathfrak{p}_i]$. For every height one prime
+$\mathfrak{p}$, let us look at the local ring $R_{\mathfrak{p}}$ with maximal
+ideal generated by some $t_{\mathfrak{p}} \in R_{\mathfrak{p}}$. Now
+$M_{\mathfrak{p}} \subset K(R)$ is a finitely generated
+$R_{\mathfrak{p}}$-submodule, so generated by some
+$t_{\mathfrak{p}}^{n_{\mathfrak{p}}}$. So we map $(M, M \hookrightarrow K(R))$
+to
+\[ \sum_{\mathfrak{p}} n_{\mathfrak{p}}[\mathfrak{p}]. \]
+First, we have to check that this is well-defined. In particular, we have to
+show:
+
+\begin{proposition}
+For almost all height one primes $\mathfrak{p}$, we have $M_{\mathfrak{p}} =
+R_{\mathfrak{p}}$; in other words, the integers $n_{\mathfrak{p}}$ are almost all zero.
+\end{proposition}
+\begin{proof}
+We can always assume that $M$ is actually an ideal. Indeed, choose $a \in R$
+with $aM = I \subset R$. As Cartier divisors, we have $M = I - (a)$. If we
+prove the result for $I$ and $(a)$, then we will have proved it for $M$ (note
+that the $n_{\mathfrak{p}}$'s are additive invariants\footnote{To see this,
+localize at $\mathfrak{p}$---then if $M$ is generated by $t^a$, $N$ generated
+by $t^b$, then $M \otimes N$ is generated by $t^{a+b}$.}). So because of this
+additivity, it is sufficient to prove the proposition for actual (i.e.
+nonfractional) ideals.
+
+Assume thus that $M \subset R$.
+All of these $n_{\mathfrak{p}}$ associated to $M$ are at least zero because $M$
+is actually an ideal. What we want is that $n_{\mathfrak{p}} \leq 0$ for almost
+all $\mathfrak{p}$. In other words, we must show that
+\[ M_{\mathfrak{p}} \supset R_{\mathfrak{p}} \quad \text{for almost all }
+\mathfrak{p}. \]
+To do this, just choose any $x \in M \setminus \{0\}$. There are finitely many minimal
+primes containing $(x)$ (by primary decomposition applied to $R/(x)$). Every
+other height one prime $\mathfrak{q}$ does not contain $x$.\footnote{Again, we're using
+something about height one primes not proved yet.}
+For such a $\mathfrak{q}$, the element $x$ is invertible in $R_{\mathfrak{q}}$, so
+$M_{\mathfrak{q}} \ni x^{-1}x = 1$, and hence $M_{\mathfrak{q}} \supset R_{\mathfrak{q}}$.
+
+The key claim we've used in this proof is the following: if $\mathfrak{q}$ is a
+height one prime in a domain $R$ containing some nonzero element $x$, then
+$\mathfrak{q}$ is minimal among primes containing $(x)$. In other words, we can
+test the height one condition at any nonzero element of that prime.
+Alternatively:
+\begin{lemma}
+There are no nontrivial containments among height one primes.
+\end{lemma}
+\end{proof}
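+
+Before continuing, let us see what this map does to a principal Cartier
+divisor, i.e. one of the form $(R, 1 \mapsto x)$ for $x \in K(R)^*$.
+\begin{example}
+Let $R = \mathbb{Z}$ and consider the Cartier divisor given by the submodule
+$\frac{12}{5}\mathbb{Z} \subset \mathbb{Q}$. Localized at a prime $(p)$, this
+submodule is generated by $p^{n_p}$, where $n_p$ is the $p$-adic valuation of
+$12/5$. Since $12/5 = 2^2 \cdot 3 \cdot 5^{-1}$, the associated Weil divisor
+is $2[(2)] + [(3)] - [(5)]$.
+\end{example}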
+
+Anyway, we have constructed maps between $\cart(R) $ and $\weil(R)$. The map
+$\cart(R) \to \weil(R)$ takes $M \to \sum n_{\mathfrak{p}}[\mathfrak{p}]$. The
+other map $\weil(R) \to \cart(R)$ takes $[\mathfrak{p}] \to \mathfrak{p}
+\subset K(R)$. The composition $\weil(R) \to \weil(R)$ is the identity. Why is that? Start with a
+prime $\mathfrak{p}$; that goes to the Cartier divisor $\mathfrak{p}$. Then we
+need to determine the multiplicities at the other height one primes. But if
+$\mathfrak{p}$ is height one and $\mathfrak{q}$ is a height one prime, then if
+$\mathfrak{p} \neq \mathfrak{q}$ the lack of nontrivial containment relations
+implies that the multiplicity of $\mathfrak{p}$ at $\mathfrak{q}$ is zero. We
+have shown that
+\[ \weil(R) \to \cart(R) \to \weil(R) \]
+is the identity.
+
+Now we have to show that $\cart(R) \to \weil(R)$ is injective. Say we have a
+Cartier divisor $(M, M \hookrightarrow K(R))$ that maps to zero in $\weil(R)$,
+i.e. all its multiplicities
+$n_{\mathfrak{p}}$ are zero at height one primes.
+We show that $M = R$.
+
+First, assume $M \subset R$.
+It is sufficient to show that at any maximal ideal $\mathfrak{m} \subset R$, we
+have
+\[ M_{\mathfrak{m}} = R_{\mathfrak{m}}. \]
+What can we say? Well, $M_{\mathfrak{m}}$ is principal as $M$ is invertible,
+being a Cartier divisor. Let it be generated by $x \in R_{\mathfrak{m}}$;
+suppose $x$ is a nonunit (or we're already done). But
+$R_{\mathfrak{m}}$ is factorial, so $x = x_1 \dots x_n$ with each $x_i$ prime.
+Since $n > 0$, the module $M$ then has nonzero multiplicity at the height one
+prime $(x_1) \subset R_{\mathfrak{m}}$. This is a contradiction.
+
+The general case, in which $M$ need not be contained in $R$, is handled similarly:
+the generating element $x$ now lies in the fraction field. If $x$
+is not a unit in $R_{\mathfrak{m}}$, it is a
+product of some primes in the numerator and some primes in the denominator,
+and any prime that occurs leads to a nonzero multiplicity.
+\lecture{10/13}
+
+\subsection{Recap and a loose end}
+
+Last time, it was claimed that if $R$ is a locally factorial domain, and
+$\mathfrak{p} \subset R$ is of height one, then every prime ideal of
+$R_{\mathfrak{p}}$ is either maximal or zero. This follows from general
+dimension theory. This is equivalent to the following general claim about
+height one primes:
+
+\begin{quote}
+There are no nontrivial inclusions among height one primes for $R$ a locally
+factorial domain.
+\end{quote}
+
+\begin{proof} Suppose $\mathfrak{q} \subsetneq \mathfrak{p}$ is an inclusion
+of height one primes.
+
+Replace $R$ by $R_{\mathfrak{p}}$. Then $R$ is local with some maximal ideal
+$\mathfrak{m}$, which is principal with some generator $x$.
+Then we have an inclusion
+\[ 0 \subset \mathfrak{q} \subset \mathfrak{m}. \]
+This inclusion is proper. However, $\mathfrak{q}$ is principal since
+it is height one in the factorial ring $R_{\mathfrak{p}}$; say $\mathfrak{q} = (y)$.
+But every nonzero element of $R_{\mathfrak{p}}$ is a unit times a power of $x$,
+so $y = u x^n$ with $u$ a unit and $n \geq 1$. Since $\mathfrak{q}$ is prime,
+$x \in \mathfrak{q}$, whence $\mathfrak{q} = \mathfrak{m}$, contradicting properness.
+\end{proof}
+
+Last time, we were talking about $\weil(R)$ and $\cart(R)$ for $R$ a locally
+factorial noetherian domain.
+\begin{enumerate}
+\item $\weil(R)$ is free on the height one primes.
+\item $\cart(R)$ is the group of invertible submodules of $K(R)$.
+\end{enumerate}
+We produced an isomorphism
+\[ \weil(R) \simeq \cart(R). \]
+
+\begin{remark}
+Geometrically, what is this? Suppose $R = \mathbb{C}[X_1, \dots, X_n]/I$ for
+some ideal $I$. Then the maximal ideals, or closed points in $\spec R$, are
+certain points in $\mathbb{C}^n$; they form an irreducible variety if $R$ is
+a domain. The locally factorial condition is satisfied, for instance, if the
+variety is \emph{smooth}. In this case, the Weil divisors correspond to sums of
+irreducible varieties of codimension one---which correspond to the primes of
+height one. The Weil divisor group is free on the set
+of irreducible subvarieties of codimension one.
+
+The Cartier divisors can be thought of as ``linear combinations'' of
+subvarieties which are locally defined by one equation. It is natural to expect
+that being locally defined by one equation corresponds to having
+codimension one. This is indeed the case when $R$ is locally factorial.
+
+In general, we can always construct a map
+\[ \cart(R) \to \weil(R), \]
+but it is not necessarily an isomorphism.
+
+
+\end{remark}
+
+\subsection{Further remarks on $\weil(R)$ and $\cart(R)$} Recall that the Cartier group fits in an exact sequence:
+\[ K(R)^* \to \cart(R) \to \pic(R) \to 0, \]
+because every element of $\cart(R)$ determines an isomorphism class of
+invertible modules, and every element of $K(R)^*$ determines a free submodule
+of rank one. Contrary to what was stated last time, exactness \textbf{fails} on
+the left: the map $K(R)^* \to \cart(R)$ is not injective, and its kernel is the
+group $R^*$ of units of $R$. So the exact
+sequence runs
+\[ 0 \to R^* \to K(R)^* \to \cart(R) \to \pic(R) \to 0. \]
+This is true for \emph{any} domain $R$. For $R$ locally factorial and
+noetherian, we know that $\cart(R) \simeq \weil(R)$, though.
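+
+It may help to write this sequence out in the most classical case.
+\begin{example}
+For $R = \mathbb{Z}$, the sequence reads
+\[ 0 \to \{\pm 1\} \to \mathbb{Q}^* \to \bigoplus_p \mathbb{Z} \to
+\pic(\mathbb{Z}) \to 0, \]
+where the direct sum runs over the prime numbers and the middle map sends a
+rational number to the tuple of its $p$-adic valuations. Exactness here
+encodes unique factorization in $\mathbb{Z}$, and one recovers
+$\pic(\mathbb{Z}) = 0$.
+\end{example}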
+
+We can think of this as a generalization of unique factorization.
+\begin{proposition}
+$R$ is factorial if and only if $R$ is locally factorial and $\pic(R) = 0$.
+\end{proposition}
+\begin{proof}
+Assume $R$ is locally factorial and $\pic(R)=0$. Then every prime ideal of
+height one (an element of $\weil(R)$, hence of $\cart(R)$) maps to zero in
+$\pic(R)$, so is principal; since its height one primes are then principal,
+$R$ is factorial. Conversely, if $R$ is factorial, it is locally factorial,
+and every height one prime is principal, so every element of $\cart(R) \simeq
+\weil(R)$ comes from $K(R)^*$; thus $\pic(R) = 0$.
+\end{proof}
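+
+The classical example of the failure of unique factorization fits into this
+framework.
+\begin{example}
+The ring $R = \mathbb{Z}[\sqrt{-5}]$ is a Dedekind domain, hence locally
+factorial: its localizations at nonzero primes are discrete valuation rings.
+It is not factorial, as the two factorizations
+\[ 6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}) \]
+show. Correspondingly, $\pic(R) \neq 0$: it is a classical fact that
+$\pic(R)$ is cyclic of order two, generated by the class of the nonprincipal
+ideal $(2, 1 + \sqrt{-5})$.
+\end{example}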
+
+In general, we can think of the exact sequence above as a form of unique
+factorization for a locally factorial domain: any invertible fractional ideal is a product of height one prime
+ideals.
+
+Let us now give an example.
+\add{?}
+
diff --git a/books/cring/fields.tex b/books/cring/fields.tex
new file mode 100644
index 0000000000000000000000000000000000000000..444c1f45f0b64143391dfb445552fa02355d91e9
--- /dev/null
+++ b/books/cring/fields.tex
@@ -0,0 +1,1288 @@
+\chapter{Fields and Extensions}
+
+In this chapter, we shall discuss the theory of fields.
+Recall that a \textbf{field} is an integral domain for which all non-zero elements are
+invertible; equivalently, the only two ideals of a field are $(0)$ and $(1)$
+since any nonzero element is a unit. Consequently fields will be the
+simplest cases of much of the theory developed later.
+
+The theory of field extensions has a different feel from standard commutative
+algebra since, for instance, any morphism of fields is injective. Nonetheless,
+it turns out that questions involving rings can often be reduced to questions
+about fields. For instance, any integral domain can be embedded in a field
+(its quotient field), and any \emph{local ring} (that is, a ring with a unique
+maximal ideal; we have not defined this term yet) has associated to it its
+residue field (that is, its quotient by the maximal ideal).
+A knowledge of field extensions will thus be useful.
+
+\section{Introduction}
+
+
+Recall once again that:
+\begin{definition}
+A \textbf{field} is an integral domain where every non-zero element is
+invertible. Alternatively, it is a set $k$, endowed with binary operations of
+addition and multiplication, which satisfy the usual axioms of commutativity,
+associativity, distributivity, $1$ and $0$ (and $1 \neq 0$!), and additive and
+multiplicative inverses.
+\end{definition}
+
+A \textbf{subfield} is a subset closed under these operations: equivalently, it
+is a subring that is itself a field.
+
+For a field $k$, we write $k^*$ for the subset $k \setminus \left\{0\right\}$.
+(This generalizes the usual notation $R^*$ for the group of
+invertible elements in a ring $R$.)
+
+\subsection{Examples}
+To get started, let us begin by providing several examples of fields. The reader should
+recall (\cref{maximalfield}) that if $R$ is a ring and $I \subset R$ an
+ideal, then $R/I$ is a field precisely when $I$ is maximal.
+
+\begin{example}
+One of the most familiar examples of a field is the rational
+numbers $\mathbb{Q}$.
+\end{example}
+
+\begin{example}
+If $p$ is a prime number, then $\mathbb{Z}/(p)$ is a field, denoted
+$\mathbb{F}_p$. Indeed, $(p)$ is a
+maximal ideal in $\mathbb{Z}$. Thus, fields may be finite: $\mathbb{F}_p$
+contains $p$ elements.
+\end{example}
+
+
+\begin{example}[Quotients of the polynomial ring]
+In a principal ideal domain, every nonzero prime ideal is maximal. Now, by
+\rref{polyringED}, if $k$ is a field, then the polynomial ring $k[x]$ is a PID.
+It follows that if $P \in k[x]$ is an irreducible polynomial (that is, a
+nonconstant polynomial
+that does not admit a factorization into terms of smaller degrees), then
+$k[x]/(P)$ is a field. It contains a copy of $k$ in a natural way.
+
+This is a very general way of constructing fields. For instance, the
+complex numbers $\mathbb{C}$
+can be constructed as $\mathbb{R}[x]/(x^2 + 1)$.
+\end{example}
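+
+For instance, this construction produces finite fields other than the
+$\mathbb{F}_p$.
+\begin{example}
+The polynomial $x^2 + x + 1 \in \mathbb{F}_2[x]$ has no root in
+$\mathbb{F}_2$, and hence (having degree two) is irreducible. Thus
+$\mathbb{F}_2[x]/(x^2 + x + 1)$ is a field; it has four elements, namely the
+images of $0, 1, x$, and $x + 1$.
+\end{example}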
+
+
+
+\begin{exercise}
+What is $\mathbb{C}[x]/(x^2 + 1)$?
+\end{exercise}
+
+\begin{example}[Quotient fields]
+Recall from \cref{quotfld1} that, given an integral domain $A$, there is an
+imbedding $A \hookrightarrow K(A)$ into a field $K(A)$ formally constructed as
+quotients $a/b, a, b \in A$ (and $b \neq 0$) modulo an evident equivalence
+relation.
+This is called the \textbf{quotient field.}
+The quotient field has the following universal property: given an injection
+$\phi: A
+\hookrightarrow K$ for a field $K$, there is a unique map $\psi: K(A) \to K$ making
+the diagram commutative (i.e. a map of $A$-algebras).
+Indeed, it is clear how to define such a map: we set
+\[ \psi(a/b) = \phi(a)/\phi(b), \]
+where injectivity of $\phi$ assures that $\phi(b) \neq 0$ if $ b \neq 0$.
+
+If the map is not injective, then such a factorization may not exist. Consider
+the imbedding $\mathbb{Z} \to \mathbb{Q}$ into its quotient field, and consider
+the map $\mathbb{Z} \to \mathbb{F}_p$: this last map goes from $\mathbb{Z} $
+into a field, but it does not factor through $\mathbb{Q}$ (as $p$ is invertible
+in $\mathbb{Q}$ and zero in $\mathbb{F}_p$!).
+\end{example}
+
+
+\begin{example}[Rational function field] \label{monofldext}
+\label{rationalfnfld}
+If $k$ is a field, then we can consider the field $k(x)$ of \textbf{rational functions}
+over $k$. This is the quotient field of the polynomial ring $k[x]$; in other
+words, it is the set of quotients $F/G$ for $F, G \in k[x]$ with the obvious
+equivalence relation.
+\end{example}
+
+
+Here is a fancier example of a field.
+\begin{example}
+\label{meromorphicfn}
+Let $X$ be a Riemann surface.\footnote{Readers not familiar with Riemann
+surfaces may ignore this example.} Let $\mathbb{C}(X)$ denote the
+set of meromorphic functions on $X$; clearly $\mathbb{C}(X)$ is a ring under
+multiplication and addition of functions. It turns out that in fact
+$\mathbb{C}(X)$ is a
+field; this is because if a nonzero function $f(z)$ is meromorphic, so is $1/f(z)$. For example,
+let $S^2$ be the Riemann sphere; then we know from complex
+analysis that the ring of meromorphic functions $\mathbb{C}(S^2)$ is the
+field of rational functions $\mathbb{C}(z)$.
+\end{example}
+
+
+
+One reason fields are so nice from the point of view of most other chapters
+in this book is that the theory of $k$-modules (i.e. vector spaces), for $k$ a field, is very simple.
+Namely:
+
+\begin{proposition} \label{vectorspacefree}
+If $k$ is a field, then every $k$-module is free.
+\end{proposition}
+\begin{proof}
+Indeed, by linear algebra we know that a $k$-module (i.e. vector space) $V$ has a
+\emph{basis} $\mathcal{B} \subset V$, which defines an isomorphism from the
+free vector space on $\mathcal{B}$ to $V$.
+\end{proof}
+
+\begin{corollary} \label{fieldsemisimple}
+Every exact sequence of modules over a field splits.
+\end{corollary}
+\begin{proof}
+By \cref{vectorspacefree}, every vector space is
+free, hence projective, and a surjection onto a projective module splits.
+\end{proof}
+
+This is another reason why much of the theory in future chapters will not say
+very much about fields, since modules behave in such a simple manner.
+Note that \cref{fieldsemisimple} is a statement about the \emph{category} of
+$k$-modules (for $k$ a field), because the notion of exactness is inherently
+arrow-theoretic (i.e. makes use of purely categorical notions, and can in fact
+be phrased within a so-called \emph{abelian category}).
+
+Henceforth, since the study of modules over a field is linear algebra, and
+since the ideal theory of fields is not very interesting, we shall study what
+this chapter is really about: \emph{extensions} of fields.
+
+\subsection{The characteristic of a field}
+\label{more-fields}
+
+In the category of rings, there is an \emph{initial object} $\mathbb{Z}$: any
+ring $R$ has a map from $\mathbb{Z}$ into it in precisely one way. For fields,
+there is no such initial object.
+Nonetheless, there is a family of objects such that every field can be mapped
+into in exactly one way by exactly one of them, and in no way by the others.
+
+Let $F$ be a field. As $\mathbb{Z}$ is the initial object of the category of
+rings, there is a ring map $f : \mathbb{Z} \to F$, see
+\rref{integersinitial}.
+The image of this ring map is an integral domain (as a subring of a field)
+hence the kernel of $f$ is a prime ideal in $\mathbb{Z}$, see
+\rref{primeifdomain}.
+Hence the kernel of $f$ is either $(0)$ or $(p)$ for some prime number $p$, see
+\rref{integerprimes}.
+
+In the first case we see that $f$ is injective, and in this case
+we think of $\mathbb{Z}$ as a subring of $F$. Moreover, since every
+nonzero element of $F$ is invertible we see that it makes sense to
+talk about $p/q \in F$ for $p, q \in \mathbb{Z}$ with $q \not = 0$.
+Hence in this case we may and we do think of $\mathbb{Q}$ as a subring of $F$.
+One can easily see that this is the smallest subfield of $F$ in this case.
+
+In the second case, i.e., when $\text{Ker}(f) = (p)$ we see that
+$\mathbb{Z}/(p) = \mathbb{F}_p$ is a subring of $F$. Clearly it is the smallest subfield of $F$.
+
+Arguing in this way we see that every field contains a smallest subfield
+which is either $\mathbb{Q}$ or finite equal to $\mathbb{F}_p$ for some
+prime number $p$.
+
+\begin{definition}
+The \textbf{characteristic} of a field $F$ is $0$ if
+$\mathbb{Z} \subset F$, or is a prime $p$ if $p = 0$ in $F$.
+The \textbf{prime subfield of $F$} is the smallest subfield of $F$
+which is either $\mathbb{Q} \subset F$ if the characteristic is zero, or
+$\mathbb{F}_p \subset F$ if the characteristic is $p > 0$.
+\end{definition}
+
+
+It is easy to see that if $E$ is a field containing $k$, then the characteristic of
+$E$ is the same as the characteristic of $k$.
+
+\begin{example}
+The characteristic of $\mathbb{Z}/p$ is $p$, and that of $\mathbb{Q}$ is $0$.
+This is obvious from the definitions.
+\end{example}
+
+
+\section{Field extensions}
+
+\subsection{Preliminaries}
+
+In general, though, we are interested not so much in fields by themselves but
+in field \emph{extensions.} This is perhaps analogous to studying not rings
+but \emph{algebras} over a fixed ring.
+The nice thing for fields is that the notion of a ``field over another field''
+just recovers the notion of a field extension, by the next result.
+
+\begin{proposition} \label{fieldinj} If $F$ is a field and $R$ is any ring, then any ring homomorphism $f:F\rightarrow
+R$ is either injective or the zero map (in which case $R=0$).
+\end{proposition}
+
+\begin{proof} Indeed, $\ker(f)$ is an ideal in
+$F$. But there are only two ideals in $F$, namely $(0)$ and $(1)$. If $f$ is identically
+zero, then $1=f(1)=0$ in $R$, so $R=0$ too.
+\end{proof}
+
+\begin{definition} If $F$ is a field contained in a field $G$, then $G$ is said
+to be a \textbf{field extension} of $F$. We shall write $G/F$ to indicate
+that $G$ is an extension of $F$.
+\end{definition}
+
+So if $F, F'$ are fields, and $F \to F'$ is any ring-homomorphism, we see by
+\cref{fieldinj} that it is injective,\footnote{The zero ring is not a field!} and $F'$ can be regarded as an extension
+of $F$, by a slight abuse of notation. Alternatively, a field extension of $F$
+is just an $F$-algebra that happens to be a field.
+This is completely different than the situation for general rings, since a
+ring homomorphism is not necessarily injective.
+
+Let $k$ be a field. There is a \emph{category} of field extensions of $k$.
+An object of this category is an extension $E/k$, that is a
+(necessarily injective) morphism of fields
+\[ k \to E, \]
+while a morphism between extensions $E/k, E'/k$ is a $k$-algebra morphism $E \to E'$;
+alternatively, it is a commutative diagram
+\[ \xymatrix{
+E \ar[rr] & & E' \\
+& k \ar[ru] \ar[lu] &
+}.\]
+
+
+\begin{definition}
+A \textbf{tower} of field extensions $E'/E/k$ consists of an extension $E/k$
+and an extension $E'/E$.
+\end{definition}
+
+It is easy to see that any morphism $E \to E'$ in the category of
+$k$-extensions gives a tower.
+
+
+Let us give a few examples of field extensions.
+
+\begin{example}
+Let $k$ be a field, and $P \in k[x]$ an irreducible polynomial. We have seen
+that $k[x]/(P)$ is a field (\rref{monofldext}). Since it is also a $k$-algebra
+in the obvious way, it is an extension of $k$.
+\end{example}
+
+\begin{example}
+If $X$ is a Riemann surface, then the field of meromorphic functions
+$\mathbb{C}(X)$ (see \cref{meromorphicfn}) is an extension field of
+$\mathbb{C}$, because any element of $\mathbb{C}$ induces a
+meromorphic---indeed, holomorphic---constant function on $X$.
+\end{example}
+
+Let $F/k$ be a field extension. Let $S \subset F$ be any subset.
+Then there is a \emph{smallest} subextension of $F$ (that is, a subfield of $F$ containing $k$)
+that contains $S$.
+To see this, consider the family of subfields of $F $ containing $S$ and
+$k$, and take their intersection; one easily checks that this is a field.
+It is easy to see, in fact, that this is the set of elements of $F$ that can
+be obtained via a finite number of elementary algebraic operations
+(addition, multiplication, subtraction, and division) involving elements of
+$k$ and $S$.
+
+\begin{definition}
+If $F/k$ is an extension and $S \subset F$, we write $k(S)$ for the smallest
+subextension of $F$ containing $S$.
+We will say that $S$ \textbf{generates} the extension $k(S)/k$.
+\end{definition}
+
+For instance, $\mathbb{C}$ is generated by $i$ over $\mathbb{R}$.
+
+\begin{exercise}
+Show that $\mathbb{C}$ does not have a countable set of generators over
+$\mathbb{Q}$.
+\end{exercise}
+
+
+Let us now classify extensions generated by one element.
+\begin{proposition}[Simple extensions of a field] \label{fldmono}
+If an extension $F/k$ is generated by one element, then $F$ is $k$-isomorphic
+either to the rational function field $k(t)/k$ or to one of the extensions
+$k[t]/(P)$ for $P \in k[t]$ irreducible.
+\end{proposition}
+
+We will see that many of the most important cases of field extensions are generated
+by one element, so this is actually useful.
+
+\begin{proof}
+Let $\alpha \in F$ be such that $F = k(\alpha)$; by assumption, such an
+$\alpha$ exists.
+There is a morphism of rings
+\[ k[t] \to F \]
+sending the indeterminate $t$ to $\alpha$. The image is a domain, so the
+kernel is a prime ideal. Thus, it is either $(0)$ or $(P)$ for $P \in k[t]$
+irreducible.
+
+If the kernel is $(P)$ for $P \in k[t]$ irreducible, then the map factors
+through $k[t]/(P)$, and induces a morphism of fields $k[t]/(P) \to F$. Since
+the image contains $\alpha$, we see easily that the map is surjective, hence
+an isomorphism. In this case, $k[t]/(P) \simeq F$.
+
+If the kernel is trivial, then we have an injection
+$k[t] \to F$.
+One may thus define a morphism of the quotient field $k(t)$ into $F$; given a
+quotient $R(t)/Q(t)$ with $R(t), Q(t) \in k[t]$, we map this to
+$R(\alpha)/Q(\alpha)$. The hypothesis that $k[t] \to F$ is injective implies
+that $Q(\alpha) \neq 0$ unless $Q$ is the zero polynomial.
+The quotient field of $k[t]$ is the rational function field $k(t)$, so we get a morphism $k(t) \to F$
+whose image contains $\alpha$. It is thus surjective, hence an isomorphism.
+\end{proof}
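+
+Let us see both cases of the proposition in action.
+\begin{example}
+Inside $\mathbb{R}$, the extension $\mathbb{Q}(\sqrt{2})/\mathbb{Q}$ is
+generated by $\sqrt{2}$, which is a root of the irreducible polynomial
+$t^2 - 2 \in \mathbb{Q}[t]$; thus $\mathbb{Q}(\sqrt{2}) \simeq
+\mathbb{Q}[t]/(t^2 - 2)$. By contrast, $\pi$ is a root of no nonzero
+polynomial with rational coefficients (a theorem of Lindemann, which we shall
+not prove), so $\mathbb{Q}(\pi) \simeq \mathbb{Q}(t)$ is a rational function
+field.
+\end{example}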
+
+
+
+
+\subsection{Finite extensions}
+If
+$F/E$ is a field extension, then evidently $F$ is also a vector space over $E$
+(the scalar action is just multiplication in $F$).
+
+\begin{definition}
+The dimension of $F$
+considered as an $E$-vector space is called the \textbf{degree} of the extension and is
+denoted $[F:E]$. If $[F:E]<\infty$ then $F$ is said to be a
+\textbf{finite} extension.
+\end{definition}
+
+\begin{example}
+$\mathbb{C}$ is obviously a finite extension of $\mathbb{R}$ (of degree 2).
+\end{example}
+
+Let us now consider the degree in the most important special example, that
+given by \cref{fldmono}, in the next two examples.
+
+\begin{example}[Degree of a simple transcendental extension]
+\label{monodeg1}
+If $k$ is any field, then the rational function field $k(t)$ is \emph{not} a
+finite extension. The elements $\left\{t^n, n \in \mathbb{Z}\right\}$
+are linearly independent over $k$.
+
+In fact, if $k$ is uncountable, then $k(t)$ is \emph{uncountably} dimensional
+as a $k$-vector space. To show this, we claim that the family of elements
+$\left\{1/(t- \alpha), \alpha \in k\right\} \subset k(t)$ is linearly independent over $k$. A
+nontrivial relation between them would lead to a contradiction: for instance,
+if one works over $\mathbb{C}$, then this follows because
+$\frac{1}{t-\alpha}$, when considered as a meromorphic function on
+$\mathbb{C}$, has a pole at $\alpha$ and nowhere else.
+Consequently any sum $\sum c_i \frac{1}{t - \alpha_i}$ for the $c_i \in k^*$,
+and $\alpha_i \in k$ distinct, would have poles at each of the $\alpha_i$.
+In particular, it could not be zero. Over a general field $k$, one can instead
+multiply a purported relation $\sum c_i \frac{1}{t - \alpha_i} = 0$ by
+$t - \alpha_j$ and evaluate at $t = \alpha_j$ to conclude that each $c_j$ vanishes.
+
+(Amusingly, this leads
+to a quick if suboptimal proof of the Hilbert Nullstellensatz.)
+\end{example}
+
+\begin{example}[Degree of a simple algebraic extension]
+\label{monodeg2}
+Consider a monogenic field extension $E/k$ of the form in
+\rref{monofldext}, say $E = k[t]/(P)$ for $P \in k[t]$ an irreducible
+polynomial.
+Then the degree $[E:k]$ is just the degree $\deg P$.
+Indeed, without loss of generality, we can assume $P$ monic, say
+\begin{equation} \label{P} P = t^n + a_1 t^{n-1} + \dots + a_0.\end{equation}
+It is then easy to see that the images of $1, t, \dots, t^{n-1}$ in
+$k[t]/(P)$ are linearly
+independent over $k$, because any relation involving them would have
+degree strictly smaller than that of $P$, and $P$ is the element of smallest
+degree in the ideal $(P)$.
+
+Conversely, the set $S=\left\{1, t, \dots, t^{n-1}\right\}$ (or more
+properly their images) spans $k[t]/(P)$ as a vector space.
+Indeed, we have by \eqref{P} that $t^n$ lies in the span of $S$.
+Similarly, the relation $tP(t)=0$ shows that the image of $t^{n+1}$ lies in the span of
+$\left\{1, t, \dots, t^n\right\}$---by what was just shown, thus in the span of
+$S$. Working upward inductively, we find
+that the image of $t^M$ for $M \geq n$ lies in the span of $S$.
+\end{example}
+
+This confirms the observation that $[\mathbb{C}: \mathbb{R}] = 2$, for instance.
+More generally, if $k$ is a field, and $\alpha \in k$ is not a square, then the
+irreducible polynomial $x^2 - \alpha \in k[x]$ allows one to construct an
+extension $k[x]/(x^2 - \alpha)$ of degree two.
+We shall write this as $k(\sqrt{\alpha})$. Such extensions will be called
+\textbf{quadratic,} for obvious reasons.
+
+
+The basic fact about the degree is that it is \emph{multiplicative in
+towers.}
+
+\begin{proposition}[Multiplicativity]
+Suppose given a tower $F/E/k$. Then
+\[ [F:k] = [F:E][E:k]. \]
+\end{proposition}
+\begin{proof}
+Let $\alpha_1, \dots, \alpha_n \in F$ be an $E$-basis for $F$. Let $\beta_1,
+\dots, \beta_m \in E$ be a $k$-basis for $E$. Then the claim is that
+the set of products $\{\alpha_i \beta_j, 1 \leq i \leq n, 1 \leq j \leq m\}$ is a $k$-basis for $F$.
+Indeed, let us check first that they span $F$ over $k$.
+
+By assumption, the $\left\{\alpha_i\right\}$ span $F$ over $E$. So if $f \in
+F$, there are $a_i \in E$ with
+\[ f = \sum a_i\alpha_i, \]
+and, for each $i$, we can write $a_i = \sum b_{ij} \beta_j$ for some $b_{ij} \in k$. Putting
+these together, we find
+\[ f = \sum_{i,j} b_{ij}\alpha_i \beta_j, \]
+proving that the $\left\{\alpha_i \beta_j\right\}$ span $F$ over $k$.
+
+Suppose now that there existed a nontrivial relation
+\[ \sum_{i,j} c_{ij} \alpha_i \beta_j =0 \]
+for the $c_{ij} \in k$. In that case, we would have
+\[ \sum_i \alpha_i \left( \sum_j c_{ij} \beta_j \right) =0, \]
+and the inner terms lie in $E$ as the $\beta_j$ do. Now $E$-linear independence of
+the $\left\{\alpha_i\right\}$ shows that the inner sums are all zero. Then
+$k$-linear independence of the $\left\{\beta_j\right\}$ shows that the
+$c_{ij}$ all vanish.
+\end{proof}
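+
+Here is a typical application of multiplicativity.
+\begin{example}
+Consider the tower $\mathbb{Q}(\sqrt{2}, i)/\mathbb{Q}(\sqrt{2})/\mathbb{Q}$.
+We have $[\mathbb{Q}(\sqrt{2}):\mathbb{Q}] = 2$. Since $\mathbb{Q}(\sqrt{2})
+\subset \mathbb{R}$, the polynomial $x^2 + 1$ has no root there, so
+$[\mathbb{Q}(\sqrt{2}, i):\mathbb{Q}(\sqrt{2})] = 2$ as well. By
+multiplicativity, $[\mathbb{Q}(\sqrt{2}, i):\mathbb{Q}] = 4$, with
+$\mathbb{Q}$-basis $\left\{1, \sqrt{2}, i, \sqrt{2}\,i\right\}$.
+\end{example}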
+
+We sidetrack to a slightly tangential definition:
+\begin{definition}
 A field extension $K$ of $\mathbb{Q}$ is said to be a \textbf{number field}
+if it is a finite extension of $\mathbb{Q}$.
+\end{definition}
+Number fields are the basic objects in algebraic number theory. We shall see
+later that,
+for the analog of the integers $\mathbb{Z}$ in a number field, something kind
+of like unique factorization still holds (though strict unique factorization
+generally does not!).
+
+\subsection{Algebraic extensions}
+
+Consider a field extension $F/E$.
+
+\begin{definition}
An element $\alpha\in F$ is said to be \textbf{algebraic} over $E$ if
$\alpha$ is the root of some nonzero polynomial with coefficients in $E$. If every
element of $F$ is algebraic over $E$, then $F$ is said to be an \textbf{algebraic
extension} of $E$.
+\end{definition}
+
+By \cref{fldmono}, the subextension $E(\alpha)$ is isomorphic either to
+the rational function field $E(t)$ or to a quotient ring $E[t]/(P)$ for $P
+\in E[t]$ an irreducible polynomial.
+In the latter case, $\alpha$ is algebraic over $E$ (in fact, it
+satisfies the polynomial $P$!); in the former case, it is not.
+
+\begin{example}
+$\mathbb{C}$ is algebraic over $\mathbb{R}$.
+\end{example}
+
+\begin{example}
+Let $X$ be a compact Riemann surface, and $f \in \mathbb{C}(X) - \mathbb{C}$ any
+nonconstant meromorphic function on $X$ (see \cref{meromorphicfn}). Then it is known that
+$\mathbb{C}(X)$ is algebraic over the subextension $\mathbb{C}(f)$ generated by
+$f$. We shall not prove this.
+\end{example}
+
+We now show that there is a deep connection between finiteness and being
+algebraic.
+\begin{proposition} \label{finalgebraic}
+A finite extension is algebraic.
+In fact, an extension $E/k$ is algebraic if and only if every subextension
+$k(\alpha)/k$ generated by some $\alpha \in E$ is finite.
+\end{proposition}
In general, an algebraic extension need not be finite.
+\begin{proof}
Let $E/k$ be finite, say of degree $n$. Choose $\alpha \in E$.
Then the $n + 1$ elements
$\left\{1, \alpha, \dots, \alpha^n\right\}$ are linearly
dependent over $k$, or we would necessarily have $[E:k] > n$. A relation of
linear dependence now gives the desired polynomial that $\alpha$ must satisfy.
+
For the last assertion, note that a monogenic extension $k(\alpha)/k$ is
finite if and only if $\alpha$ is algebraic over $k$, by \cref{monodeg1} and
\cref{monodeg2}.
+So if $E/k$ is algebraic, then each $k(\alpha)/k, \alpha \in E$, is a finite
+extension, and conversely.
+\end{proof}
+
+
+
+We can extract a corollary of the last proof (really of \cref{monodeg1} and
+\cref{monodeg2}): a monogenic extension is finite
+if and only if it is algebraic.
+We shall use this observation in the next result.
+
+\begin{corollary} \label{fingenalg}
Let $k$ be a field, and let $\alpha_1, \alpha_2, \dots, \alpha_n$ be elements
of some extension field such that each $\alpha_i$ is algebraic over $k$. Then the
extension $k(\alpha_1, \dots, \alpha_n)/k$ is finite.
+That is, a finitely generated algebraic extension is finite.
+\end{corollary}
+\begin{proof}
Indeed, each $k(\alpha_{1}, \dots, \alpha_{i+1})/k(\alpha_1, \dots,
\alpha_{i})$ is monogenic and algebraic, hence finite. By multiplicativity of
degrees in towers, $k(\alpha_1, \dots, \alpha_n)/k$ is finite as well.
+\end{proof}
+
The complex numbers that are algebraic over $\mathbb{Q}$ are simply
called the \textbf{algebraic numbers.} For instance, $\sqrt{2}$ is algebraic,
$i$ is algebraic, but $\pi$ is not.
It is a basic fact that the algebraic numbers form a field, although it is not
obvious how to prove this directly from the definition that a number is algebraic
precisely when it satisfies a nonzero polynomial equation with rational
coefficients.
+
+
+
+
+\begin{corollary}
+Let $E/k$ be a field extension. Then the elements of $E$ algebraic over $k$
+form a field.
+\end{corollary}
+\begin{proof}
+Let $\alpha, \beta \in E$ be algebraic over
+$k$. Then $k(\alpha, \beta)/k$ is a finite extension by \cref{fingenalg}. It follows that $k(\alpha
++ \beta) \subset k(\alpha, \beta)$ is a finite extension, which implies that
+$\alpha + \beta$ is algebraic by \cref{finalgebraic}.
+\end{proof}
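The proof also suggests a concrete check: for a sum of algebraic numbers, one can often exhibit an annihilating polynomial directly.
\begin{example}
Let $\alpha = \sqrt{2} + \sqrt{3}$. Then $\alpha^2 = 5 + 2\sqrt{6}$, so
$(\alpha^2 - 5)^2 = 24$; expanding, $\alpha$ is a root of
\[ x^4 - 10x^2 + 1 \in \mathbb{Q}[x]. \]
So $\alpha$ is an algebraic number, as the corollary predicts abstractly.
\end{example}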
+
+
Many nice properties of field extensions, like those of rings, are
preserved by towers and composita.
+
+
+\begin{proposition}[Towers]
+Let $E/k$ and $F/E$ be algebraic. Then $F/k$ is algebraic.
+\end{proposition}
+\begin{proof}
+Choose $\alpha \in F$. Then $\alpha$ is algebraic over $E$.
The key observation is that $\alpha$ is algebraic over a \emph{finitely
generated} subextension of $E$.
That is, there is a finite set $S \subset E$ such that $\alpha$ is algebraic
over $k(S)$: this is clear because being algebraic means that there is a
polynomial in $E[x]$ that $\alpha$ satisfies, and as $S$ we can take the set
of its coefficients.
+
In particular, $k(S, \alpha)/ k(S)$ is finite. Since $S$ is a finite set of
elements algebraic over $k$, \cref{fingenalg} shows that $k(S)/k$ is finite.
By multiplicativity, $k(S,\alpha)/k$ is finite, so $\alpha$ is algebraic over $k$.
+\end{proof}
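Transitivity can be seen concretely in a small example.
\begin{example}
Let $\alpha = \sqrt{1 + \sqrt{2}}$. Then $\alpha$ is algebraic over
$\mathbb{Q}(\sqrt{2})$, as it satisfies $x^2 - (1 + \sqrt{2})$, and
$\mathbb{Q}(\sqrt{2})/\mathbb{Q}$ is algebraic, so $\alpha$ is algebraic over
$\mathbb{Q}$. Indeed, squaring the relation $\alpha^2 - 1 = \sqrt{2}$ produces the
rational polynomial relation $\alpha^4 - 2\alpha^2 - 1 = 0$.
\end{example}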
+
The method of proof in the previous argument---that being algebraic over $E$ is a
property that \emph{descends} to a finitely generated subextension---is
an idea that recurs throughout algebra, and will be put to use in more
generality in \cref{}.
+
+\subsection{Minimal polynomials}
+
+Let $E/k$ be a field extension, and let $\alpha \in E$ be algebraic over $k$.
+Then $\alpha$ satisfies a (nontrivial) polynomial equation in $k[x]$.
+Consider the set of polynomials $P(x) \in k[x]$ such that $P(\alpha) = 0$; by
+hypothesis, this set does not just contain the zero polynomial.
+It is easy to see that this set is an \emph{ideal.} Indeed, it is the kernel
+of the map
+\[ k[x] \to E, \quad x \mapsto \alpha. \]
+Since $k[x]$ is a PID,
+there is a \emph{generator} $m(x) \in k[x]$ of this ideal. If we assume $m$
+monic, without loss of generality, then $m$ is uniquely determined.
+
+\begin{definition}
+$m(x)$ as above is called the \textbf{minimal polynomial} of $\alpha$ over $k$.
+\end{definition}
+
+The minimal polynomial has the following characterization: it is the monic
+polynomial, of smallest degree, that annihilates $\alpha$. (Any nonconstant
+multiple of $m(x)$ will have larger degree, and only multiples of $m(x)$ can
+annihilate $\alpha$.)
+This explains the name \emph{minimal.}
+
+Clearly the minimal polynomial is \emph{irreducible.} This is equivalent to the
+assertion that the ideal in $k[x]$ consisting of polynomials annihilating
+$\alpha$ is prime. But this follows from the fact that the map $k[x] \to E, x
+\mapsto \alpha$ is
+a map into a domain (even a field), so the kernel is a prime ideal.
+
+\begin{proposition}
+The degree of the minimal polynomial is $[k(\alpha):k]$.
+\end{proposition}
+\begin{proof}
+This is just a restatement of the argument in \cref{monofld}: the observation is that if $m(x)$
+is the minimal polynomial of $\alpha$, then the map
+\[ k[x]/(m(x)) \to k(\alpha), \quad x \mapsto \alpha \]
+is an isomorphism as in the aforementioned proof, and we have counted the degree
+of such an extension (see \cref{monodeg2}).
+\end{proof}
+
+So the observation of the above proof is that if $\alpha \in E$ is algebraic,
+then $k(\alpha) \subset E$ is isomorphic to $k[x]/(m(x))$.
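As an illustration:
\begin{example}
The minimal polynomial of $\sqrt[3]{2}$ over $\mathbb{Q}$ is $x^3 - 2$: it is
monic, it annihilates $\sqrt[3]{2}$, and it is irreducible over $\mathbb{Q}$
(e.g.\ by the Eisenstein criterion at $2$). The proposition thus gives
$[\mathbb{Q}(\sqrt[3]{2}):\mathbb{Q}] = 3$.
\end{example}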
+
+
+\subsection{Algebraic closure}
+
+Now we want to define a ``universal'' algebraic extension of a field. Actually,
+we should be careful: the algebraic closure is \emph{not} a universal object.
+That is, the algebraic closure is not unique up to \emph{unique} isomorphism:
+it is only unique up to isomorphism. But still, it will be very handy, if not
+functorial.
+
+
+\begin{definition}
+Let $F$ be a field. An \textbf{algebraic closure} of $F$ is a field
+$\overline{F}$ containing $F$ such that:
+\begin{enumerate}[\textbf{AC} 1]
+\item $\overline{F} $ is algebraic over $F$.
+\item $\overline{F}$ is \textbf{algebraically closed} (that is, every
+non-constant polynomial in $\overline{F}[X]$ has a root in $\overline{F}$).
+\end{enumerate}
+\end{definition}
+
+The ``fundamental theorem of algebra'' states that $\mathbb{C}$ is
+algebraically closed. While the easiest proof of this result uses Liouville's
+theorem in complex analysis, we shall give a mostly algebraic proof below
+(\cref{}).
+
+We now prove the basic existence result.
+
+
+\begin{theorem}
+Every field has an algebraic closure.
+\end{theorem}
+
The proof will be mostly irrelevant to the rest of the chapter. However, we
+will want to know that it is \emph{possible} to embed a field inside an
+algebraically closed field, and we will often assume it done.
+\begin{proof}
+Let $ K$ be a field and $ \Sigma$ be the set of all monic irreducibles in $ K[x]$. Let $ A = K[\{x_f : f \in \Sigma\}]$ be the polynomial ring generated by indeterminates $ x_f$, one for each $ f \in \Sigma$. Then let $ \mathfrak{a}$ be the ideal of $ A$ generated by polynomials of the form $ f(x_f)$ for each $ f \in \Sigma$.
+
+\emph{Claim 1}. $ \mathfrak{a}$ is a proper ideal.
+
\emph{Proof of claim 1}. Suppose $ \mathfrak{a} = (1)$, so there exist finitely many polynomials $ f_i \in \Sigma$ and $ g_i \in A$ such that $ 1 = f_1(x_{f_1}) g_1 + \dotsb + f_k(x_{f_k}) g_k$. Each $ g_i$ uses some finite collection of indeterminates $ V_i = \{x_{f_{i_1}}, \dotsc, x_{f_{i_{k_i}}}\}$. This notation is ridiculous, so we simplify it.
+
We can take the union of all the $ V_i$, together with the indeterminates $ x_{f_1}, \dotsc, x_{f_k}$, to get a larger but still finite set of indeterminates $ V = \{x_{f_1}, \dotsc, x_{f_n}\}$ for some $ n \geq k$ (ordered so that the original $ x_{f_1}, \dotsc, x_{f_k}$ agree with the first $ k$ elements of $ V$). Now we can regard each $ g_i$ as a polynomial in this new set of indeterminates $ V$.
+Then, we can write $ 1 = f_1(x_{f_1}) g_1 + \dotsb + f_n(x_{f_n}) g_n$ where for each $ i > k$, we let $ g_i = 0$ (so that we've adjoined a few zeroes to the right hand side of the equality).
+Finally, we define $ x_i = x_{f_i}$, so that we have
+$ 1 = f_1(x_1)g_1(x_1, \dotsc, x_n) + \dotsb + f_n(x_n) g_n(x_1, \dotsc, x_n)$.
+
+Suppose $ n$ is the minimal integer such that there exists an expression of this form, so that
+\[ \mathfrak{b} = (f_1(x_1), \dotsc, f_{n-1}(x_{n-1})) \]
+is a proper ideal of $ B = K[x_1, \dotsc, x_{n-1}]$, but
+\[ (f_1(x_1), \dotsc, f_n(x_n)) \]
+is the unit ideal in $ B[x_n]$. Let $ \hat{B} = B/\mathfrak{b}$ (observe that this ring is nonzero). We have a composition of maps
+\[ B[x_n] \to \hat{B}[x_n] \to \hat{B}[x_n]/(\widehat{f_n(x_n)}) \]
where the first map is reduction of coefficients modulo $ \mathfrak{b}$, and the second map is the quotient by the principal ideal generated by the image $ \widehat{f_n(x_n)}$ of $ f_n(x_n)$ in $ \hat{B}[x_n]$. We know $ \hat{B}$ is a nonzero ring, so since $ f_n$ is monic, the top coefficient of $ \widehat{f_n(x_n)}$ is still $ 1 \in \hat{B}$. In particular, the top coefficient cannot be nilpotent. Furthermore, since $ f_n$ was irreducible, it is not a constant polynomial, so by the characterization of units in polynomial rings, $ \widehat{f_n(x_n)}$ is not a unit, so it does not generate the unit ideal. Thus the quotient $ \hat{B}[x_n]/(\widehat{f_n(x_n)})$ is not the zero ring.
+
On the other hand, observe that each $ f_i(x_i)$ is in the kernel of this composition, so in fact the entire ideal $ (f_1(x_1), \dotsc, f_n(x_n))$ is contained in the kernel. But this ideal is the unit ideal, so all of $ B[x_n]$ is in the kernel of this composition. In particular, $ 1 \in B[x_n]$ is in the kernel, and since ring maps preserve identity, this forces $ 1 = 0$ in $ \hat{B}[x_n]/(\widehat{f_n(x_n)})$, which makes this the zero ring. This contradicts our previous observation, and proves the claim that $ \mathfrak{a}$ is a proper ideal.
+
+Now, given claim 1, there exists a maximal ideal $ \mathfrak{m}$ of $ A$ containing $ \mathfrak{a}$. Let $ K_1 = A/\mathfrak{m}$. This is an extension field of $ K$ via the inclusion given by
+\[ K \to A \to A/\mathfrak{m} \]
+(this map is automatically injective as it is a map between fields). Furthermore every $ f \in \Sigma$ has a root in $ K_1$. Specifically, the coset $ x_f + \mathfrak{m}$ in $ A/\mathfrak{m} = K_1$ is a root of $ f$ since
+\[ f(x_f + \mathfrak{m}) = f(x_f) + \mathfrak{m} = 0. \]
+
+Inductively, given $ K_n$ for some $ n \geq 1$, repeat the construction with $ K_n$ in place of $ K$ to get an extension field $ K_{n+1}$ of $ K_n$ in which every irreducible $ f \in K_n[x]$ has a root. Let $ L = \bigcup_{n = 1}^{\infty} K_n$.
+
+\emph{Claim 2}. Every $ f \in L[x]$ splits completely into linear factors in $ L$.
+
\emph{Proof of claim 2}. We induct on the degree of $ f$. In the base case, when $ f$ itself is linear, there is nothing to prove. Inductively, suppose every polynomial in $ L[x]$ of degree less than $ n$ splits completely into linear factors, and suppose
\[ f = a_0 + a_1x + \dotsb + a_nx^n \in L[x] \]
has degree $ n$. Then each $ a_i \in K_{m_i}$ for some $ m_i$, so let $ N = \max m_i$ and regard $ f$ as a polynomial in $ K_N[x]$. If $ f$ is reducible in $ K_N[x]$, then we have a factorization $ f = gh$ with the degrees of $ g, h$ strictly less than $ n$. Therefore, inductively, they both split into linear factors in $ L[x]$, so $ f$ must also. On the other hand, if $ f$ is irreducible, then by our construction, it has a root $ a\in K_{N+1}$, so we have $ f = (x - a) g$ for some $ g \in K_{N+1}[x]$ of degree $ n - 1$. Again inductively, we can split $ g$ into linear factors in $ L$, so clearly we can do the same with $ f$ also. This completes the proof of claim 2.
+
+Let $ \bar{K}$ be the set of algebraic elements in $ L$. Clearly $ \bar{K}$ is an algebraic extension of $ K$. If $ f \in \bar{K}[x]$, then we have a factorization of $ f$ in $ L[x]$ into linear factors
+\[ f = b(x - a_1)(x - a_2) \dotsb (x - a_n). \]
for $ b \in \bar{K}$ and, a priori, $ a_i \in L$. But each $ a_i$ is a root of $ f$, which means it is algebraic over $ \bar{K}$, which is an algebraic extension of $ K$; so by transitivity of ``being algebraic,'' each $ a_i$ is algebraic over $ K$. So in fact we conclude that $ a_i \in \bar{K}$ already, since $ \bar{K}$ consisted of all elements algebraic over $ K$. Therefore, since $ \bar{K}$ is an algebraic extension of $ K$ such that every $ f \in \bar{K}[x]$ splits into linear factors in $ \bar{K}$, $ \bar{K}$ is an algebraic closure of $ K$.
+
+\end{proof}
+
+\add{two algebraic closures are isomorphic}
+
+Let $K$ be an algebraically closed field. Then the ring $K[x]$ has a very
+simple ideal structure.
Since every nonconstant polynomial $P \in K[x]$ has a root, it follows that there is always
a decomposition (by dividing repeatedly)
\[ P =c (x-\alpha_1)\dots(x-\alpha_n) ,\]
where $c$ is the leading coefficient and the $\left\{\alpha_i\right\} \subset K$ are the roots
of $P$.
+In particular:
+\begin{proposition}
+For $K$ algebraically closed, the only irreducible polynomials in $K[x]$ are
+the linear polynomials $c(x-\alpha), \ c, \alpha \in K$ (and $c \neq 0$).
+\end{proposition}
+
+In particular, two polynomials in $K[x]$ are \textbf{relatively prime}
+(i.e., generate the unit ideal) if and only if they have no common roots. This
+follows because the maximal ideals of $K[x]$ are of the form $(x-\alpha),
+\alpha \in K$.
+So if $F, G \in K[x]$ have no common root, then $(F, G)$ cannot be contained
+in any $(x-\alpha)$ (as then they would have a common root at $\alpha$).
+
+
+If $k$ is \emph{not} algebraically closed, then this still gives
+information about when two polynomials in $k[x]$ generate the unit ideal.
+
+\begin{definition}
+If $k$ is any field, we say that two polynomials in $k[x]$ are
+\textbf{relatively prime} if they generate the unit ideal in $k[x]$.
+\end{definition}
+
+\begin{proposition} \label{primepoly}
+Two polynomials in $k[x]$ are relatively prime precisely when they
+have no common roots in an algebraic closure $\overline{k}$ of $k$.
+\end{proposition}
+\begin{proof}
+The claim is that any two polynomials $P, Q$ generate $(1)$ in $k[x]$ if and
+only if they generate $(1)$ in $\overline{k}[x]$. This is a piece of
+linear algebra: a system of linear equations with coefficients in $k$ has
+a solution if and only if it has a solution in any extension of $k$.
+Consequently, we can reduce to the case of an algebraically closed field, in
+which case the result is clear from what we have already proved.
+\end{proof}
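For instance:
\begin{example}
The polynomials $x^2 - 2$ and $x^2 - 3$ are relatively prime in
$\mathbb{Q}[x]$: their roots in $\overline{\mathbb{Q}}$ are $\pm\sqrt{2}$ and
$\pm\sqrt{3}$ respectively, and these sets are disjoint. By contrast, $x^2 - 2$
and $x^3 - 2x = x(x^2 - 2)$ are not relatively prime, as both vanish at
$\sqrt{2}$.
\end{example}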
+
+
+
+\section{Separability and normality}
+
+
+\subsection{Separable extensions}
+
Throughout, $F \subset K$ is a finite field extension. We fix once and for
all an algebraic closure $\overline{F}$ of $F$ and an embedding of $F$ in $\overline{F}$.
+
+
+\begin{definition}
+For an element $\alpha \in K$ with minimal polynomial $q \in F[x]$, we say
+$q$ and $\alpha$ are \textbf{separable} if $q$ has distinct roots (in some
+algebraic closure $\overline{F}$!), and we say $K$ is
+separable if this holds for all $\alpha \in K$.
+\end{definition}
+
+
+
+By \cref{primepoly}, separability of a polynomial $P \in F[x]$ is equivalent
+to $(P, P') = 1$ in $F[x]$.
+Indeed, this follows from the fact that $P$ has no multiple roots if and only if $P, P'$ have no
+common roots.
+
+\begin{lemma} $q(x) \in F[x]$ is separable if and only if $\gcd(q, q') = 1$,
+where $q'$ is the formal derivative of $q$.
+\label{der_poly}
+\end{lemma}
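The derivative criterion is easy to apply in both directions.
\begin{example}
Over $\mathbb{Q}$, the polynomial $q(x) = x^2 - 2$ has $q'(x) = 2x$, and
$\gcd(q, q') = 1$ (as $q(0) \neq 0$), so $q$ is separable. By contrast, let $k =
\mathbb{F}_p(t)$ and $q(x) = x^p - t \in k[x]$; then $q' = 0$, so $\gcd(q, q')
= q \neq 1$, and indeed $q = (x - t^{1/p})^p$ has only one root in an algebraic
closure. Thus $q$ is irreducible but not separable.
\end{example}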
+
+
+
+
+\subsection{Purely inseparable extensions}
+
+\begin{definition}
For an element $\alpha \in K$ with minimal polynomial $q$, we say $\alpha$ is \textbf{purely
inseparable} if $q$ has only one root. We say $K$ is \textbf{splitting} if the
minimal polynomial of each $\alpha \in K$ splits in $K$.
+\label{def:sepsplit}
+\end{definition}
+
+
+\begin{definition} If $K = F(\alpha)$ for some $\alpha$ with minimal polynomial
+$q(x) \in F[x]$, then by \rref{sep_poly}, $q(x) = r(x^{p^d})$, where $p =
+\Char{F}$ (or $1$ if $\Char{F} = 0$) and $r$ is separable; in this case we
+also denote $\deg_s(K/F) = \deg(r), \deg_i(K/F) = p^d$. \label{def:prim_sep}
+\end{definition}
+
+
+\section{Galois theory}
+\subsection{Definitions}
+
+Throughout, $F \subset K$ is a finite field extension. We fix once and for
+all an algebraic closure $M$ for both and an embedding of $F$ in $M$. When
+necessary, we write $K = F(\alpha_1, \dots, \alpha_n)$, and $K_0 = F, K_i =
F(\alpha_1, \dots, \alpha_i)$, $q_i$ the minimal polynomial of $\alpha_i$ over
$K_{i - 1}$, $Q_i$ that over $F$.
+
+\begin{definition} $\Aut(K/F)$ denotes the group of automorphisms of $K$ which fix
+$F$ (pointwise!). $\Emb(K/F)$ denotes the set of embeddings of $K$ into $M$
+respecting the chosen embedding of $F$.
+\label{def:gal}
+\end{definition}
+
+\begin{definition} By $\deg(K/F)$ we mean the dimension of $K$ as an $F$-vector
space. We denote by $K_s$ the set of elements of $K$ whose minimal polynomials
+over $F$ have distinct roots; by \rref{sep_subfield} this is a subfield, and
+$\deg(K_s/F) = \deg_s(K/F)$ and $\deg(K/K_s) = \deg_i(K/F)$ by definition.
+\label{def:sep}
+\end{definition}
+\subsection{Theorems}
+\begin{lemma} If $\Char{F} = 0$ then $K_s = K$. If $\Char{F} = p > 0$, then for
+any irreducible $q(x) \in K[x]$, there is some $d \geq 0$ and polynomial $r(x)
+\in K[x]$ such that $q(x) = r(x^{p^d})$, and $r$ is separable and irreducible.
+\label{sep_poly}
+\end{lemma}
+
\begin{proof} By formal differentiation, $q'(x)$ is nonzero unless
each exponent of $q$ is a multiple of $p$; in characteristic zero the latter never occurs.
If $q' \neq 0$, then since $q$ is irreducible and $\deg q' < \deg q$, $q$ can have no factor in
common with $q'$ and therefore has distinct roots by \rref{der_poly}.
+
If $p > 0$, let $d$ be the largest integer such that each exponent of $q$ is a
multiple of $p^d$, and define $r$ by the above equation. Then by construction,
$r$ has at least one exponent which is not a multiple of $p$, so $r' \neq 0$;
moreover $r$ is irreducible, since a factorization $r = st$ would give the
factorization $q(x) = s(x^{p^d})t(x^{p^d})$. By the first paragraph, $r$
therefore has distinct roots. \end{proof}
+
+\begin{corollary} In the statement of \rref{sep_poly}, $q$ and $r$ have the same
+number of roots.
+\label{sep_roots}
+\end{corollary}
+
+\begin{proof} $\alpha$ is a root of $q$ if and only if $\alpha^{p^d}$ is a
+root of $r$; i.e. the roots of $q$ are the roots of $x^{p^d} - \beta$, where
+$\beta$ is a root of $r$. But if $\alpha$ is one such root, then $(x -
+\alpha)^{p^d} = x^{p^d} - \alpha^{p^d} = x^{p^d} - \beta$ since $\Char{K} =
+p$, and therefore $\alpha$ is the only root of $x^{p^d} - \beta$. \end{proof}
+
+\begin{lemma} The correspondence which to each $g \in \Emb(K/F)$ assigns the
+$n$-tuple $(g(\alpha_1), \dots, g(\alpha_n))$ of elements of $M$ is a
bijection from $\Emb(K/F)$ to the set of tuples of $\beta_i \in M$, such that
$\beta_i$ is a root of (the image of) $q_i$ over $F(\beta_1, \dots, \beta_{i - 1})$.
+\label{emb_roots}
+\end{lemma}
+
+\begin{proof} First take $K = F(\alpha) = F[x]/(q)$, in which case the maps $g
+\colon K \to M$ over $F$ are identified with the elements $\beta \in M$ such
+that $q(\beta) = 0$ (where $g(\alpha) = \beta$).
+
+Now, considering the tower $K = K_n / K_{n - 1} / \dots / K_0 = F$, each
+extension of which is primitive, and a given embedding $g$, we define
+recursively $g_1 \in \Emb(K_1/F)$ by restriction and subsequent $g_i$ by
+identifying $K_{i - 1}$ with its image and restricting $g$ to $K_i$. By the
+above paragraph each $g_i$ corresponds to the image $\beta_i = g_i(\alpha_i)$,
+each of which is a root of $q_i$. Conversely, given such a set of roots of
+the $q_i$, we define $g$ recursively by this formula. \end{proof}
+
+\begin{corollary} $|\Emb(K/F)| = \prod_{i = 1}^n \deg_s(q_i)$.
+\label{emb_size}
+\end{corollary}
+
+\begin{proof} This follows immediately by induction from \rref{emb_roots} by
+\rref{sep_roots}. \end{proof}
+
+\begin{lemma} For any $f \in \Emb(K/F)$, the map $\Aut(K/F) \to \Emb(K/F)$ given
+by $\sigma \mapsto f \circ \sigma$ is injective.
+\label{aut_inj}
+\end{lemma}
+
+\begin{proof} This is immediate from the injectivity of $f$. \end{proof}
+
+\begin{corollary} $\Aut(K/F)$ is finite.
+\label{aut_fin}
+\end{corollary}
+
+\begin{proof} By \rref{aut_inj}, $\Aut(K/F)$ injects into $\Emb(K/F)$, which by
+\rref{emb_size} is finite. \end{proof}
+
+\begin{proposition} The inequality
+\begin{equation*}
+|\Aut(K/F)| \leq |\Emb(K/F)|
+\end{equation*}
+is an equality if and only if the $q_i$ all split in $K$.
+\label{aut_ineq}
+\end{proposition}
+
+\begin{proof} The inequality follows from \rref{aut_inj} and from \rref{aut_fin}.
+Since both sets are finite, equality holds if and only if the injection of
+\rref{aut_inj} is surjective (for fixed $f \in \Emb(K/F)$).
+
+If surjectivity holds, let $\beta_1, \dots, \beta_n$ be arbitrary roots of
+$q_1, \dots, q_n$ in the sense of \rref{emb_roots}, and extract an embedding $g
+\colon K \to M$ with $g(\alpha_i) = \beta_i$. Since the correspondence $f
+\mapsto f \circ \sigma$ ($\sigma \in \Aut(K/F)$) is a bijection, there is some
$\sigma$ such that $g = f \circ \sigma$, and therefore $f$ and $g$ have the
same image. Therefore the image of $K$ in $M$ is canonical, and contains
$\beta_1, \dots, \beta_n$ for any choice thereof; in other words, all the roots
of the $q_i$ lie in $f(K) \cong K$, so the $q_i$ split in $K$.
+
+If the $q_i$ all split, let $g \in \Emb(K/F)$ be arbitrary, so the
+$g(\alpha_i)$ are roots of $q_i$ in $M$ as in \rref{emb_roots}. But the $q_i$
+have all their roots in $K$, hence in the image $f(K)$, so $f$ and $g$ again
+have the same image, and $f^{-1} \circ g \in \Aut(K/F)$. Thus $g = f \circ
+(f^{-1} \circ g)$ shows that the map of \rref{aut_inj} is surjective.
+\end{proof}
+
+\begin{corollary} Define
+\begin{equation*}
+D(K/F) = \prod_{i = 1}^n \deg_s(K_i/K_{i - 1}).
+\end{equation*}
+Then the chain of equalities and inequalities
+\begin{equation*}
+|\Aut(K/F)| \leq |\Emb(K/F)| = D(K/F) \leq \deg(K/F)
+\end{equation*}
+holds; the first inequality is an equality if and only if each $q_i$ splits in
+$K$, and the second if and only if each $q_i$ is separable.
+\label{large_aut_ineq}
+\end{corollary}
+
+\begin{proof} The statements concerning the first inequality are just
+\rref{aut_ineq}; the interior equality is just \rref{emb_size}; the latter
+inequality is obvious from the multiplicativity of the degrees of field
+extensions; and the deduction for equality follows from the definition of
+$\deg_s$. \end{proof}
+
+\begin{corollary} The $q_i$ respectively split and are separable in $K$ if and only
+if the $Q_i$ do and are.
+\label{absolute_sepsplit}
+\end{corollary}
+
+\begin{proof} The ordering of the $\alpha_i$ is irrelevant, so we may take
+each $i = 1$ in turn. Then $Q_1 = q_1$ and if either of the equalities in
+\rref{large_aut_ineq} holds then so does the corresponding statement here.
+Conversely, clearly each $q_i$ divides $Q_i$, so splitting or separability
+for the latter implies that for the former. \end{proof}
+
+\begin{corollary} Let $\alpha \in K$ have minimal polynomial $q$; if the $Q_i$ are
+respectively split, separable, and purely inseparable over $F$ then $q$ is as
+well.
+\label{global_sepsplit}
+\end{corollary}
+
+\begin{proof} We may take $\alpha$ as the first element of an alternative
+generating set for $K/F$. The numerical statement of \rref{large_aut_ineq}
+does not depend on the particular generating set, hence the conditions given
+hold of the set containing $\alpha$ if and only if they hold of the canonical
set $\{\alpha_1, \dots, \alpha_n\}$.
+
+For purely inseparable, if the $Q_i$ all have only one root then $|\Emb(K/F)|
+= 1$ by \rref{large_aut_ineq}, and taking $\alpha$ as the first element of a
+generating set as above shows that $q$ must have only one root as well for
+this to hold. \end{proof}
+
+\begin{corollary} $K_s$ is a field and $\deg(K_s/F) = D(K/F)$.
+\label{sep_subfield}
+\end{corollary}
+
+\begin{proof} Assume $\Char{F} = p > 0$, for otherwise $K_s = K$. Using
+\rref{sep_poly}, write each $Q_i = R_i(x^{p^{d_i}})$, and let $\beta_i =
+\alpha_i^{p^{d_i}}$. Then the $\beta_i$ have $R_i$ as minimal polynomials and
+the $\alpha_i$ satisfy $s_i = x^{p^{d_i}} - \beta_i$ over $K' = F(\beta_1,
+\dots, \beta_n)$. Therefore the $\alpha_i$ have minimal polynomials over $K'$
+dividing the $s_i$ and hence those polynomials have but one distinct root.
+
By \rref{global_sepsplit}, the elements of $K'$ are separable over $F$, and those of
$K$ are purely inseparable over $K'$. Now minimal polynomials over $K'$ divide
those over $F$; so an element of $K$ that is separable over $F$ is separable
over $K'$ as well, and being also purely inseparable over $K'$, it has a linear
minimal polynomial over $K'$ and lies in $K'$. Thus $K' = K_s$.
+
+The numerical statement follows by computation:
+\begin{equation*}
+\deg(K/K') = \prod_{i = 1}^n p^{d_i}
+ = \prod_{i = 1}^n \frac{\deg(K_i/K_{i - 1})}{\deg_s(K_i/K_{i - 1})}
+ = \frac{\deg(K/F)}{D(K/F)}.
+ \end{equation*}
+\end{proof}
+
+\begin{theorem} The following inequality holds:
+\begin{equation*}
+|\Aut(K/F)| \leq |\Emb(K/F)| = \deg_s(K/F) \leq \deg(K/F).
+\end{equation*}
+Equality holds on the left if and only if $K/F$ is splitting; it holds on the
+right if and only if $K/F$ is separable.
+\label{galois_size}
+\end{theorem}
+
+\begin{proof} The numerical statement combines \rref{large_aut_ineq} and
+\rref{sep_subfield}. The deductions combine \rref{absolute_sepsplit} and
+\rref{global_sepsplit}. \end{proof}
+
+\subsection{Definitions}
+
+Throughout, we will denote as before $K/F$ a finite field extension, and $G =
+\Aut(K/F)$, $H$ a subgroup of $G$. $L/F$ is a subextension of $K/F$.
+
+\begin{definition} When $K/F$ is separable and splitting, we say it is Galois and
+write $G = \Gal(K/F)$, the Galois group of $K$ over $F$.
+\label{defn:galois_extension}
+\end{definition}
+
+\begin{definition} The fixed field of $H$ is the field $K^H$ of elements fixed by
+the action of $H$ on $K$. Conversely, $G_L$ is the fixing subgroup of $L$,
+the subgroup of $G$ whose elements fix $L$.
+\label{defn:fixing}
+\end{definition}
+
+\subsection{Theorems}
+
+\begin{lemma} A polynomial $q(x) \in K[x]$ which splits in $K$ lies in
+$K^H[x]$ if and only if its roots are permuted by the action of $H$. In this
+case, the sets of roots of the irreducible factors of $q$ over $K^H$ are the orbits
+of the action of $H$ on the roots of $q$ (counting multiplicity).
+\label{root_action}
+\end{lemma}
+
+\begin{proof} Since $H$ acts by automorphisms, we have $\sigma q(x) = q(\sigma
+x)$ as a functional equation on $K$, so $\sigma$ permutes the roots of $q$.
Conversely, since the coefficients of $q$ are, up to sign and the leading
coefficient, the elementary symmetric polynomials in its roots, $H$ permuting
the roots implies that it fixes the coefficients.
+
+Clearly $q$ is the product of the polynomials $q_i$ whose roots are the orbits
+of the action of $H$ on the roots of $q$, counting multiplicities, so it
+suffices to show that these polynomials are defined over $K^H$ and are
+irreducible. Since $H$ acts on the roots of the $q_i$ by construction, the
+former is satisfied. If some $q_i$ factored over $K^H$, its factors would
+admit an action of $H$ on their roots by the previous paragraph. The roots of
+$q_i$ are distinct by construction, so its factors do not share roots; hence
+the action on the roots of $q_i$ would not be transitive, a contradiction.
+\end{proof}
+
\begin{corollary} Let $q(x) \in K^H[x]$ split in $K$. If $q$ is irreducible
over $K^H$, then $H$ acts
transitively on its roots; conversely, if $q$ is separable and $H$ acts
transitively on its roots, then $q$ is irreducible over $K^H$.
+\end{corollary}
+
+\begin{proof} Immediate from \rref{root_action}. \end{proof}
+
\begin{lemma} If $K/F$ is Galois, so is $K/L$, and $\Gal(K/L) = G_L$.
+\label{sub_galois}
+\end{lemma}
+
+\begin{proof} $K/F$ Galois means that the minimal polynomial over $F$ of every
+element of $K$ is separable and splits in $K$; the minimal polynomials over $L
+= K^H$ divide those over $F$, and therefore this is true of $K/L$ as well;
+hence $K/L$ is likewise a Galois extension. $\Gal(K/L) = \Aut(K/L)$ consists
+of those automorphisms $\sigma$ of $K$ which fix $L$; since $F \subset L$ we
+have \emph{a fortiori} that $\sigma$ fixes $F$, hence $\Gal(K/L) \subset G$
+and consists of the subgroup which fixes $L$; i.e. $G_L$. \end{proof}
+
+\begin{corollary} If $K/F$ and $L/F$ are Galois, then the action of $G$ on elements of $L$
+defines a surjection of $G$ onto $\Gal(L/F)$. Thus $G_L$ is normal in $G$ and $\Gal(L/F) \cong G/G_L$. Conversely, if $N \subset G$ is normal, then $K^N/F$ is Galois.
+\label{normal}
+\end{corollary}
+
+\begin{proof} $L/F$ is splitting, so by \rref{root_action} the elements of $G$
+act as endomorphisms (hence automorphisms) of $L/F$, and the kernel of this action is $G_L$. By
+\rref{sub_galois}, we have $G_L = \Gal(K/L)$, so $|G_L| = |\Gal(K/L)| = [K : L] = [K : F] / [L : F]$,
+or rearranging and using that $K/F$ is Galois, we get $|G|/|G_L| = [L : F] =
+|\Gal(L/F)|$. Thus the map $G \to \Gal(L/F)$ is surjective and thus the induced map $G/G_L \to
+\Gal(L/F)$ is an isomorphism.
+
+Conversely, let $N$ be normal and take $\alpha \in K^N$. For any conjugate $\beta$ of $\alpha$, we
+have $\beta = g(\alpha)$ for some $g \in G$; let $n \in N$. Then $n(\beta) = (ng)(\alpha) =
+g(g^{-1} n g)(\alpha) = g(\alpha) = \beta$, since $g^{-1} n g \in N$ by normality of $N$. Thus
+$\beta \in K^N$, so $K^N$ is splitting, i.e., Galois. \end{proof}
+
+\begin{proposition} If $K/F$ is Galois and $H = G_L$, then $K^H = L$.
+\label{fixed_field}
+\end{proposition}
+
+\begin{proof} By \rref{sub_galois}, $K/L$ and $K/K^H$ are both Galois. By
+definition, $\Gal(K/L) = G_L = H$; since $H$ fixes $K^H$ we certainly have
+$H < \Gal(K/K^H)$, but since $L \subset K^H$ we have \emph{a fortiori} that
+$\Gal(K/K^H) < \Gal(K/L) = H$, so $\Gal(K/K^H) = H$ as well. It follows
+from \rref{galois_size} that $\deg(K/L) = |H| = \deg(K/K^H)$, so that $K^H =
+L$. \end{proof}
+
+\begin{lemma} If $K$ is a finite field, then $K^\ast$ is cyclic.
+\label{fin_cyclic}
+\end{lemma}
+
+\begin{proof} $K$ is a finite extension of $\mathbb{F}_p$ for $p =
+\Char{K}$, hence has order $p^n$, $n = \deg(K/\mathbb{F}_p)$, so that
+$|K^\ast| = p^n - 1$. For each divisor $d$ of $p^n - 1$, the elements of
+$K^\ast$ of order dividing $d$ are roots of $x^d - 1$, so there are at most
+$d$ of them; consequently, if an element of order $d$ exists, the elements of
+order dividing $d$ form a cyclic group of order $d$, and there are exactly
+$\varphi(d)$ elements of order $d$. Since $\sum_{d \mid p^n - 1} \varphi(d) =
+p^n - 1 = |K^\ast|$, the number of elements of order $d$ cannot be $0$ for
+any divisor $d$; in particular there is an element of order $p^n - 1$, so
+$K^\ast$ is cyclic. \end{proof}
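+
+To make the counting concrete, consider the prime field case.
+\begin{example}
+In $\mathbb{F}_7^\ast$, the elements $1, 2, 3, 4, 5, 6$ have orders $1, 3, 6,
+3, 6, 2$ respectively, so $3$ and $5$ are generators. There are exactly
+$\varphi(6) = 2$ of them, in accordance with the identity $\sum_{d \mid 6}
+\varphi(d) = 1 + 1 + 2 + 2 = 6$.
+\end{example}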
+
+\begin{corollary} If $K$ is a finite field, then $\Gal(K/F)$ is cyclic, generated by
+the Frobenius automorphism.
+\label{fin_gal_cyclic}
+\end{corollary}
+
+\begin{proof} First take $F = \mathbb{F}_p$. Then the map $f_i(\alpha) =
+\alpha^{p^i}$ is an endomorphism, injective since $K$ is a field, and
+surjective since $K$ is finite, hence an automorphism. Since every $\alpha$
+satisfies $\alpha^{p^n} = \alpha$, we have $f_n = 1$. On the other hand, if
+$0 < i < n$ and $\alpha$ generates $K^\ast$ (using \rref{fin_cyclic}), then
+$f_i(\alpha) = \alpha^{p^i} \neq \alpha$, since the order $p^n - 1$ of
+$\alpha$ does not divide $p^i - 1$. Thus $f = f_1$ has order exactly $n =
+\deg(K/F)$, so it generates $\Gal(K/F)$.
+
+If $F$ is now arbitrary, by \rref{fixed_field} we have $\Gal(K/F) =
+\Gal(K/\mathbb{F}_p)_F$, and every subgroup of a cyclic group is cyclic.
+\end{proof}
+
+\begin{corollary} If $K$ is finite, $K/F$ is primitive.
+\label{fin_prim_elt}
+\end{corollary}
+
+\begin{proof} No nontrivial element of $G$ fixes a generator $\alpha$ of
+$K^\ast$ (as in the proof of \rref{fin_gal_cyclic}), so $\alpha$ cannot lie
+in any proper subfield of $K$ containing $F$. Therefore $F(\alpha) = K$. \end{proof}
+
+\begin{proposition} If $F$ is infinite and $K/F$ has only finitely many subextensions, then it is
+primitive.
+\label{gen_prim_elt}
+\end{proposition}
+
+\begin{proof} We proceed by induction on the number of generators of $K/F$.
+
+If $K = F(\alpha)$ we are done. If not, $K = F(\alpha_1, \dots, \alpha_n) =
+F(\alpha_1, \dots, \alpha_{n - 1})(\alpha_n) = F(\beta, \alpha_n)$ by
+induction, so we may assume $n = 2$. Since $F$ is infinite but $K/F$ has only
+finitely many subextensions, among the subfields $F(\alpha_1 + t \alpha_2)$
+with $t \in F$, two must be equal, say for $t_1 \neq t_2$. Thus $\alpha_1 +
+t_2 \alpha_2 \in F(\alpha_1 + t_1 \alpha_2)$. Then
+$(t_2 - t_1)\alpha_2 \in F(\alpha_1 + t_1 \alpha_2)$, hence $\alpha_2$ lies in
+this field, hence $\alpha_1$ does. Therefore $K = F(\alpha_1 + t_1
+\alpha_2)$. \end{proof}
+
+\begin{corollary} If $K/F$ is separable, it is primitive, and the generator may be
+taken to be a linear combination of any finite set of generators of $K/F$.
+\label{prim_elt}
+\end{corollary}
+
+\begin{proof} We may embed $K/F$ in a Galois extension $M/F$ by adjoining all
+the conjugates of its generators. Subextensions of $K/F$ are also subextensions
+of $M/F$, and by \rref{fixed_field} the map $H \mapsto M^H$ is a surjection
+from the subgroups of $\Gal(M/F)$ to the subextensions of $M/F$, which are hence
+finite in number. By \rref{fin_prim_elt} we may assume $F$ is infinite. The
+result now follows from \rref{gen_prim_elt}. \end{proof}
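+
+To see the linear-combination statement at work in a familiar case:
+\begin{example}
+Take $K = \mathbb{Q}(\sqrt{2}, \sqrt{3})$ over $F = \mathbb{Q}$. The linear
+combination $\alpha = \sqrt{2} + \sqrt{3}$ of the generators is primitive:
+from $\alpha^3 = 11\sqrt{2} + 9\sqrt{3}$ we get $\sqrt{2} = \frac{1}{2}(\alpha^3
+- 9\alpha)$ and $\sqrt{3} = \frac{1}{2}(11\alpha - \alpha^3)$, so
+$\mathbb{Q}(\alpha) = K$.
+\end{example}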
+
+\begin{corollary}
+ If $K/F$ is Galois and $H \subset G$, then if $L = K^H$, we have $H = G_L$.
+ \label{fixing_subgroup}
+\end{corollary}
+
+\begin{proof}
+ Let $\alpha$ be a primitive element for $K/L$. The polynomial $\prod_{h \in H} (x - h(\alpha))$ is fixed by $H$, and therefore has coefficients in $L$, so $\alpha$ has $|H|$ conjugate roots over $L$. But since $\alpha$ is primitive, we have $K = L(\alpha)$, so the minimal polynomial of $\alpha$ has degree $\deg(K/L)$, which is the same as the number of its roots. Thus $|H| = \deg(K/L)$. Since $H \subset G_L$ and $|G_L| = \deg(K/L)$, we have equality.
+\end{proof}
+
+
+\begin{theorem} The correspondences $H \mapsto K^H$, $L \mapsto G_L$ define
+inclusion-reversing inverse maps between the set of subgroups of $G$ and the
+set of subextensions of $K/F$, such that normal subgroups and Galois subfields
+correspond.
+\label{fundamental_theorem}
+\end{theorem}
+
+\begin{proof} This combines \rref{fixed_field}, \rref{fixing_subgroup}, and \rref{normal}.
+\end{proof}
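+
+The correspondence can be made explicit in a small example.
+\begin{example}
+Let $K = \mathbb{Q}(\sqrt{2}, \sqrt{3})$ and $F = \mathbb{Q}$, so that $G
+\cong \mathbb{Z}/2 \times \mathbb{Z}/2$ is generated by $\sigma: \sqrt{2}
+\mapsto -\sqrt{2}$ (fixing $\sqrt{3}$) and $\tau: \sqrt{3} \mapsto -\sqrt{3}$
+(fixing $\sqrt{2}$). The three subgroups of order two, namely $\langle \sigma
+\rangle$, $\langle \tau \rangle$, and $\langle \sigma\tau \rangle$, correspond
+to the three intermediate fields $\mathbb{Q}(\sqrt{3})$, $\mathbb{Q}(\sqrt{2})$,
+and $\mathbb{Q}(\sqrt{6})$. Since $G$ is abelian, every subgroup is normal,
+matching the fact that each quadratic subfield is Galois over $\mathbb{Q}$.
+\end{example}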
+
+
+\section{Transcendental Extensions}
+
+
+There is a distinguished type of transcendental extension: those that are
+``purely transcendental.''
+\begin{definition} A field extension $E'/E$ is purely transcendental if it is
+obtained by adjoining a set $B$ of algebraically independent elements. A set
+of elements $B$ is algebraically independent over $E$ if there is no finite
+subset $b_1, \dots, b_n \in B$ and nonzero polynomial $P$ with coefficients
+in $E$ such that $P(b_1, b_2, \dots, b_n) = 0$.
+\end{definition}
+
+\begin{example} The field $\mathbb{Q}(\pi)$ is purely transcendental; in
+particular, $\mathbb{Q}(\pi)\cong\mathbb{Q}(x)$ with the isomorphism fixing
+$\mathbb{Q}$. \end{example}
+Just as the degree measures the size of an algebraic extension, there is a
+way of keeping track of the number of algebraically independent generators
+required to generate a purely transcendental extension.
+\begin{definition} Let $E'/E$ be a purely transcendental extension generated by
+some set of algebraically independent elements $B$. Then the transcendence
+degree $trdeg(E'/E)=\#(B)$ and $B$ is called a transcendence basis for $E'/E$
+(we will see later that $trdeg(E'/E)$ is independent of choice of basis).
+\end{definition}
+In general, given a field extension $F/E$, we can always construct an
+intermediate extension $F/E'/E$ such that $F/E'$ is algebraic and $E'/E$ is
+purely transcendental. If $B$ is a transcendence basis for $E'/E$, it is
+also called a transcendence basis for $F/E$, and $trdeg(F/E)$ is defined to
+be $trdeg(E'/E)$.
+\begin{theorem} Let $F/E$ be a field extension. Then a transcendence basis
+for $F/E$ exists.
+\end{theorem}
+\begin{proof} Let $A$ be an algebraically independent subset of $F$, and pick
+a subset $G \subset F$ that generates $F/E$; we will find a transcendence
+basis $B$ such that $A \subset B \subset G$. Let $\mathcal{B}$ be the
+collection of algebraically independent subsets of $G$ that contain $A$,
+partially ordered by inclusion; it contains at least one element, namely $A$.
+The union of any chain in $\mathcal{B}$ is algebraically independent, since
+an algebraic dependence relation involves only finitely many elements and
+would therefore already occur in some member of the chain; the union also
+contains $A$ and is contained in $G$. So by Zorn's lemma, $\mathcal{B}$ has a
+maximal element $B$. Now we claim $F$ is algebraic over $E(B)$: if it were
+not, then (since $E(G) = F$) there would be an element $f \in G$
+transcendental over $E(B)$, so that $B \cup \{f\}$ would be algebraically
+independent, contradicting the maximality of $B$. Thus $B$ is our
+transcendence basis. \end{proof}
+Now we prove that the transcendence degree of a field extension is independent
+of choice of basis.
+\begin{theorem} Let $F/E$ be a field extension. Any two transcendence bases for
+$F/E$ have the same cardinality. This shows that $trdeg(F/E)$ is well
+defined. \end{theorem}
+\begin{proof}
+Let $B$ and $B'$ be two transcendence bases. Without loss of generality, we can
+assume that $\#(B')\leq \#(B)$. Now we divide the proof into two cases: the
+first case is that $B$ is an infinite set. Then for each $\alpha\in B'$, there
+is a finite set $B_{\alpha}$ such that $\alpha$ is algebraic over
+$E(B_{\alpha})$ since any algebraic dependence relation only uses finitely many
+indeterminates. Then we define $B^*=\bigcup_{\alpha\in B'} B_{\alpha}$. By
+construction, $B^*\subset B$, but we claim that in fact the two sets are
+equal. To see this, suppose that they are not equal, say there is an element
+$\beta\in B\setminus B^*$. We know $\beta$ is algebraic over $E(B')$, which is
+in turn algebraic over $E(B^*)$. Therefore $\beta$ is algebraic over $E(B^*)$, a
+contradiction, since $B^* \cup \{\beta\} \subset B$ is algebraically
+independent. So $B = B^*$, and hence $\#(B)\leq \sum_{\alpha\in B'}
+\#(B_{\alpha})$. Now if $B'$ is
+finite, then so is $B$, so we can assume $B'$ is infinite; this means
+\begin{equation} \#(B)\leq \sum_{\alpha\in B'}\#(B_{\alpha})=\#\Big(\coprod_{\alpha \in B'}
+B_{\alpha}\Big)\leq \#(B'\times\mathbb{Z})=\#(B'),\end{equation} with the inequality $\#(\coprod
+B_{\alpha}) \leq \#(B'\times \mathbb{Z})$ given by the correspondence
+$b_{\alpha,i}\mapsto (\alpha,i)\in B'\times \mathbb{Z}$, where $B_\alpha =
+\{b_{\alpha,1},b_{\alpha,2},\dots, b_{\alpha,n_\alpha}\}$. Therefore in the
+infinite case, $\#(B)=\#(B')$.
+
+Now we need to look at the case where $B$ is finite. In this case, $B'$ is also
+finite, so suppose $B=\{\alpha_1,\dots,\alpha_n\}$ and
+$B'=\{\beta_1,\dots,\beta_m\}$ with $m\leq n$. We perform induction on $m$: if
+$m=0$ then $F/E$ is algebraic, so $B=\emptyset$ and $n=0$. Otherwise there is an
+irreducible polynomial $f\in E[x,y_1,\dots, y_n]$ such that
+$f(\beta_1,\alpha_1,\dots, \alpha_n) = 0$. Since $\beta_1$ is not algebraic over
+$E$, $f$ must involve some $y_i$, so without loss of generality, assume $f$
+involves $y_1$. Let $B^*=\{\beta_1,\alpha_2,\dots,\alpha_n\}$. We claim that
+$B^*$ is a transcendence basis for $F/E$. To prove this claim, note that we
+have a tower of algebraic extensions $F/E(B^*,\alpha_1)/E(B^*)$, since
+$\alpha_1$ is algebraic over $E(B^*)$. Next, $B^*$ is algebraically
+independent over $E$: if it were not, then there would be an
+irreducible $g\in E[x,y_2,\dots, y_n]$ such that
+$g(\beta_1,\alpha_2,\dots,\alpha_n)=0$, which must involve $x$, making $\beta_1$
+algebraic over $E(\alpha_2,\dots, \alpha_n)$; this would make $\alpha_1$
+algebraic over $E(\alpha_2,\dots, \alpha_n)$, which is impossible. So
+$\{\alpha_2,\dots,\alpha_n\}$ and $\{\beta_2,\dots,\beta_m\}$ are both
+transcendence bases for $F$ over $E(\beta_1)$, and by induction, $m - 1 =
+n - 1$, i.e.\ $m=n$. \end{proof}
+
+\begin{example} Consider the field extension $\mathbb{Q}(e,\pi)$ formed by
+adjoining the numbers $e$ and $\pi$. This field extension has transcendence
+degree at least $1$ since both $e$ and $\pi$ are transcendental over the
+rationals. However, this field extension might have transcendence degree $2$ if
+$e$ and $\pi$ are algebraically independent. Whether or not this is true is
+unknown, and determining $trdeg(\mathbb{Q}(e,\pi)/\mathbb{Q})$ is an open
+problem.\end{example}
+
+\begin{example} Let $E$ be a field and $F=E(t)$. Then $\{t\}$ is a
+transcendence basis, since $F=E(t)$. However, $\{t^2\}$ is also a transcendence
+basis since $E(t)/E(t^2)$ is algebraic. This illustrates that while we can
+always decompose an extension $F/E$ into an algebraic extension $F/E'$ and a
+purely transcendental extension $E'/E$, this decomposition is not unique and
+depends on choice of transcendence basis. \end{example}
+
+\begin{exercise} If we have a tower of fields $G/F/E$, then $trdeg(G/E)=trdeg(F/E)+trdeg(G/F)$. \end{exercise}
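+
+As a quick consistency check of the tower formula:
+\begin{example}
+Let $G = E(x, y)$ be the rational function field in two variables over $E$.
+Then $trdeg(E(x)/E) = 1$ and $trdeg(E(x,y)/E(x)) = 1$, so the formula gives
+$trdeg(E(x,y)/E) = 2$, in agreement with the fact that $\{x, y\}$ is a
+transcendence basis for $E(x,y)/E$.
+\end{example}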
+
+\begin{example}
+Let $X$ be a compact Riemann surface. Then the function field $\mathbb{C}(X)$
+(see \cref{meromorphicfn}) has transcendence degree one over $\mathbb{C}$. In
+fact, \emph{any} finitely generated extension of $\mathbb{C}$ of transcendence
+degree one arises from a Riemann surface. There is even an equivalence of
+categories between the category of compact Riemann surfaces with
+(non-constant) holomorphic maps
+and the opposite category of finitely generated extensions of $\mathbb{C}$ of
+transcendence degree one with morphisms of $\mathbb{C}$-algebras. See \cite{Fo81}.
+
+There is an algebraic version of the above statement as well. Given an
+(irreducible) algebraic curve in projective space over an algebraically
+closed field $k$ (e.g. the complex numbers), one can consider its ``field of rational
+functions:'' basically, functions that look like quotients of polynomials,
+where the denominator does not identically vanish on the curve.
+There is a similar anti-equivalence of categories between smooth projective curves and
+non-constant morphisms of curves and finitely generated extensions of $k$ of
+transcendence degree one. See \cite{Ha77}.
+\end{example}
+
+
+\subsection{Linearly Disjoint Field Extensions}
+Let $k$ be a field, $K$ and $L$ field extensions of $k$. Suppose also that $K$ and $L$ are embedded in some larger field $\Omega$.
+
+\begin{definition} The compositum of $K$ and $L$ written $KL$ is $k(K\cup L)=L(K)=K(L)$.
+\end{definition}
+
+
+
+\begin{definition} $K$ and $L$ are said to be linearly disjoint over $k$ if the following map is injective:
+\begin{equation} \theta: K\otimes_k L\rightarrow KL \end{equation} defined by $x\otimes y\mapsto xy$.
+\end{definition}
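+
+A worked example may help make the definition concrete.
+\begin{example}
+The fields $K = \mathbb{Q}(\sqrt{2})$ and $L = \mathbb{Q}(\sqrt{3})$ are
+linearly disjoint over $\mathbb{Q}$: here $KL = \mathbb{Q}(\sqrt{2},
+\sqrt{3})$, and $\theta$ is a surjection between $\mathbb{Q}$-vector spaces
+of the same dimension $4$, hence injective. By contrast, $K$ is not linearly
+disjoint from itself: $K \otimes_{\mathbb{Q}} K$ has dimension $4$ while $KK
+= K$ has dimension $2$, and indeed the nonzero element $\sqrt{2} \otimes 1 -
+1 \otimes \sqrt{2}$ lies in the kernel of $\theta$.
+\end{example}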
+
+
diff --git a/books/cring/flat.tex b/books/cring/flat.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9e5fb440200e84f94b2c06270ee2f82d2b9721b1
--- /dev/null
+++ b/books/cring/flat.tex
@@ -0,0 +1,1596 @@
+\chapter{Flatness revisited}
+
+In the past, we have already encountered the notion of \emph{flatness}. We
+shall now study it in more detail.
+We shall start by introducing the notion of \emph{faithful} flatness and
+introduce the idea of ``descent.'' Later, we shall consider other criteria for
+(normal) flatness that we have not yet explored.
+
+We recall (\cref{flatdefn}) that a module $M$ over a commutative ring $R$ is
+\emph{flat} if the functor $N \mapsto N \otimes_R M$ is an exact functor. An
+$R$-algebra is flat if it is flat as a module. For instance, we have seen that
+any localization of $R$ is a flat algebra, because localization is an exact
+functor.
+
+
+\textbf{All this has not been added yet!}
+
+\section{Faithful flatness}
+
+
+
+\subsection{Faithfully flat modules}
+Let $R$ be a commutative ring.
+
+\begin{definition}
+The $R$-module $M$ is \textbf{faithfully flat} if any complex $N' \to N
+\to N''$ of $R$-modules is exact if and only if the tensored sequence $N'
+\otimes_R M \to N \otimes_R M \to N'' \otimes_R M$ is exact.
+\end{definition}
+
+Clearly, a faithfully flat module is flat.
+
+
+\begin{example}
+The direct sum of faithfully flat modules is faithfully flat.
+\end{example}
+\begin{example}
+A (nonzero) free module is faithfully flat, because $R$ itself is flat
+(tensoring with $R$ is the identity functor).
+\end{example}
+
+We shall now prove several useful criteria about faithfully flat modules.
+
+\begin{proposition} \label{easyffcriterion}
+An $R$-module $M$ is faithfully flat if and only if it is flat and if $M
+\otimes_R N = 0$ implies $N=0$ for any $N$.
+\end{proposition}
+\begin{proof} Suppose $M$ is faithfully flat.
+Then $M$ is flat, clearly. In addition, if $N$ is any $R$-module, consider the
+sequence
+\[ 0 \to N \to 0; \]
+it is exact if and only if
+\[ 0 \to M \otimes_R N \to 0 \]
+is exact. Thus $N=0$ if and only if $M \otimes_R N = 0$.
+
+Conversely, suppose $M$ is flat and satisfies the additional condition. We
+need to show that if $N'
+\otimes_R M \to N \otimes_R M \to N'' \otimes_R M$ is exact, so is $N' \to N
+\to N''$. Since $M$ is flat, taking homology commutes with tensoring with $M$.
+In particular, if $H$ is the homology of $N' \to N \to N''$, then $H \otimes_R
+M$ is the homology of
+$N'
+\otimes_R M \to N \otimes_R M \to N'' \otimes_R M$. It follows that $H
+\otimes_R M = 0$, so $H=0$, and the initial complex is exact.
+\end{proof}
+
+\begin{example}
+Another illustration of the above technique is the following observation: if
+$M$ is faithfully flat and $N \to N'$ is any morphism, then $N \to N'$ is an
+isomorphism if and only if $M \otimes N \to M \otimes N'$ is an isomorphism.
+This follows because the condition that a map be an isomorphism can be phrased
+as the exactness of a certain (uninteresting) complex.
+\end{example}
+\begin{exercise}
+The direct sum of a flat module and a faithfully flat module is faithfully flat.
+\end{exercise}
+
+
+From the above result, we can get an important example of a faithfully flat
+algebra over a ring.
+\begin{example}
+Let $R$ be a commutative ring, and $\left\{f_i\right\}$ a finite set of
+elements that generate the unit ideal in $R$ (or equivalently, the basic open
+sets $D(f_i) = \spec R_{f_i}$ form a covering of $\spec R$).
+Then the algebra $\prod R_{f_i}$ is faithfully flat over $R$ (i.e., is so as a
+module). Indeed, as a
+product of localizations, it is certainly flat.
+
+So by \cref{easyffcriterion}, we are left with showing that if $M$ is any
+$R$-module and $M_{f_i} =0 $ for all $i$, then $M = 0$.
+Fix $m \in M$, and consider the ideal $\ann(m)$ of elements annihilating $m$.
+Since $m$ maps to zero in each localization $M_{f_i}$, there is a power of
+$f_i$ in $\ann(m)$ for each $i$.
+This easily implies that $\ann(m) = R$, so $m=0$. (We used the fact that if the
+$\left\{f_i\right\}$ generate the unit ideal, so do $\left\{f_i^N\right\}$ for
+any $N \in \mathbb{Z}_{\geq 0}$.)
+\end{example}
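+
+A concrete instance of this construction:
+\begin{example}
+Take $R = \mathbb{Z}$ with $f_1 = 2$ and $f_2 = 3$; these generate the unit
+ideal because $3 - 2 = 1$. Each localization alone is flat but not faithfully
+flat (for instance, $\mathbb{Z}[1/2] \otimes_{\mathbb{Z}} \mathbb{Z}/2 = 0$),
+yet the product $\mathbb{Z}[1/2] \times \mathbb{Z}[1/3]$ is faithfully flat
+over $\mathbb{Z}$, since $D(2) \cup D(3) = \spec \mathbb{Z}$.
+\end{example}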
+
+A functor $F$ between two categories is said to be \textbf{faithful} if the
+induced map on the hom-sets $\hom(x,y) \to \hom(Fx, Fy)$ is always injective.
+The following result explains the use of the term ``faithful.''
+
+\begin{proposition}
+A module $M$ is faithfully flat if and only if it is flat and the functor $N \mapsto N
+\otimes_R M$ is faithful.
+\end{proposition}
+\begin{proof} Let $M$ be flat.
+We need to check that $M$ is faithfully flat if and only if the natural map
+\[ \hom_R(N, N') \to \hom_R(N \otimes_R M, N' \otimes_R M) \]
+is injective.
+Suppose first $M$ is faithfully flat and $f: N \to N'$ goes to zero, i.e.\ $f \otimes
+1_M: N \otimes_R M \to N' \otimes_R M$ is the zero map. We know by flatness that
+\[ \im(f) \otimes_R M = \im(f \otimes 1_M) \]
+so that if $f \otimes 1_M = 0$, then $\im(f) \otimes M = 0$. Thus by faithful
+flatness, $\im(f) = 0$ by \rref{easyffcriterion}.
+
+Conversely, let us suppose $M$ flat and the functor $N \mapsto N \otimes_R M$
+faithful. Let $N \neq 0$; then $1_N \neq 0$ as a map $N \to N$.
+By faithfulness, $1_N \otimes 1_M$ and $0 \otimes 1_M = 0$ are different as
+endomorphisms of $N \otimes_R M$. Thus $N \otimes_R M \neq 0$. By
+\rref{easyffcriterion}, we are done again.
+\end{proof}
+
+\begin{example}
+Note, however, that $\mathbb{Z} \oplus \mathbb{Z}/2$ is a $\mathbb{Z}$-module
+such that tensoring by it is a faithful but not exact functor.
+\end{example}
+
+Finally, we prove one last criterion:
+
+\begin{proposition} \label{ffmaximal}
+$M$ is faithfully flat if and only if $M$ is flat and $\mathfrak{m}M \neq M$ for all
+maximal ideals $\mathfrak{m} \subset R$.
+\end{proposition}
+\begin{proof}
+If $M$ is faithfully flat, then $M$ is flat, and $M \otimes_R R/\mathfrak{m} =
+M/\mathfrak{m}M \neq 0$ for all $\mathfrak{m}$ as $R/\mathfrak{m} \neq 0$, by
+\rref{easyffcriterion}. So we get one direction.
+
+Conversely, suppose $M$ is flat and $M \otimes_R R/\mathfrak{m} \neq 0$ for
+all maximal $\mathfrak{m}$. Since every proper ideal is contained in a maximal
+ideal, it follows that $M \otimes_R R/I \neq 0$ for all proper ideals $I$. We
+shall use this and \rref{easyffcriterion} to prove that $M$ is faithfully
+flat.
+
+Let $N$ now be any nonzero module. Then $N$ contains a \emph{cyclic} submodule, i.e.
+one isomorphic to $R/I$ for some proper $I$. The injection
+\[ R/I \hookrightarrow N \]
+becomes an injection
+\[ R/I \otimes_R M \hookrightarrow N \otimes_R M, \]
+and since $R/I \otimes_R M \neq 0$, we find that $N \otimes_R M \neq 0$. By
+\rref{easyffcriterion}, it follows that $M$ is faithfully flat.
+\end{proof}
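+
+The criterion gives a quick way to see why familiar flat modules fail to be
+faithfully flat.
+\begin{example}
+The $\mathbb{Z}$-module $\mathbb{Q}$ is flat (it is a localization), but for
+every maximal ideal $(p) \subset \mathbb{Z}$ we have $p\mathbb{Q} =
+\mathbb{Q}$, since $p$ is invertible in $\mathbb{Q}$. So $\mathbb{Q}$ is not
+faithfully flat, by \rref{ffmaximal}.
+\end{example}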
+
+\begin{corollary}
+A nonzero finitely generated flat module over a \emph{local} ring is faithfully flat.
+\end{corollary}
+\begin{proof}
+Let $R$ be a local ring with maximal ideal $\mathfrak{m}$, and $M$ a
+nonzero finitely generated flat $R$-module. Then by Nakayama's lemma, $M/\mathfrak{m}M
+\neq 0$, so that $M$ must be faithfully flat by \cref{ffmaximal}.
+\end{proof}
+
+A \emph{finitely presented} flat module over a local ring is in fact free, but we do not prove
+this here (except when the ring is noetherian; see \cref{}).
+
+\begin{proposition}
+Faithfully flat modules are closed under direct sums and tensor products.
+\end{proposition}
+
+\begin{proof}
+Exercise.
+\end{proof}
+
+
+
+
+\subsection{Faithfully flat algebras}
+
+Let $\phi: R \to S$ be a morphism of rings, making $S$ into an $R$-algebra.
+
+\begin{definition}
+$S$ is a \textbf{faithfully flat $R$-algebra} if it is faithfully flat as an
+$R$-module.
+\end{definition}
+
+\begin{example}
+The map $R \to R[x]$ from a ring into its polynomial ring is always faithfully
+flat. This is clear, since $R[x]$ is free as an $R$-module.
+\end{example}
+
+Next, we indicate the usual ``sorite'' for faithfully flat morphisms:
+\begin{proposition} \label{ffsorite}
+Faithfully flat morphisms are closed under composition and base change.
+\end{proposition}
+That is, if $R \to S$, $S \to T$ are faithfully flat, so is $R \to T$.
+Similarly, if $R \to S$ is faithfully flat and $R'$ any $R$-algebra, then $R'
+\to S \otimes_R R'$ is faithfully flat.
+
+The reader may wish to try this proof as an exercise.
+\begin{proof}
+The first result follows because the composite of the two faithful and exact
+functors $- \otimes_R S$ and $- \otimes_S T$ is the functor $- \otimes_R T$,
+which is therefore itself faithful and exact.
+
+In the second case, let $M$ be an $R'$-module. Then $M \otimes_{R'} (R'
+\otimes_R S)$ is canonically isomorphic to $M \otimes_R S$. From this it is
+clear that if the functor $M \mapsto M \otimes_R S$ is faithful and
+exact, so is
+$M \mapsto M \otimes_{R'} (R'
+\otimes_R S)$.
+\end{proof}
+
+Flat maps need not be injective. For instance, if $R$ is a
+product $R_1 \times R_2$, then the projection map $R \to R_1$ is flat.
+This never happens for faithfully flat maps.
+In particular, a quotient can never be faithfully flat.
+
+\begin{proposition} \label{ffinjective}
+If $S$ is a faithfully flat $R$-algebra, then the structure map $R \to S$ is injective.
+\end{proposition}
+\begin{proof}
+Indeed, let us tensor the map $R \to S $ with $S$, over $R$. We get a morphism
+of $S$-modules
+\[ S \to S \otimes_R S , \]
+sending $s \mapsto 1 \otimes s$.
+This morphism has an obvious section $S \otimes_R S \to S$ sending $a \otimes b
+\mapsto ab$. Since it has a section, it is injective. But faithful flatness says
+that the original map $R \to S$ must be injective itself.
+\end{proof}
+
+\begin{example}
+The converse of \cref{ffinjective} definitely fails. Consider the localization $\mathbb{Z}_{(2)}$;
+it is a flat $\mathbb{Z}$-algebra, but not faithfully flat (for instance,
+tensoring with $\mathbb{Z}/3$ yields zero).
+\end{example}
+
+\begin{exercise}
+Suppose $\phi: R \to S$ is a flat, injective morphism of rings such that $S/\phi(R)$ is a
+flat $R$-module. Then show that $\phi$ is faithfully flat.
+\end{exercise}
+
+Flat morphisms need not be injective, but they are locally injective. We shall see this using:
+\begin{proposition} \label{flatlocal}
+A flat local homomorphism of local rings is faithfully flat. In particular, it
+is injective.
+\end{proposition}
+\begin{proof}
+Let $\phi: R \to S$ be a local homomorphism of local rings with maximal ideals
+$\mathfrak{m}, \mathfrak{n}$. Then by definition $\phi(\mathfrak{m}) \subset
+\mathfrak{n}$. It follows that $S \neq \phi(\mathfrak{m})S$, so by
+\rref{ffmaximal} we win.
+\end{proof}
+The point of the above proof was, of course, the fact that the
+ring-homomorphism was \emph{local}. If we just had that $\phi( \mathfrak{m})S
+\subsetneq S$ for every maximal ideal $\mathfrak{m} \subset R$, that would be
+sufficient for the argument.
+
+\begin{corollary}
+Let $\phi: R \to S$ be a flat morphism. Let $\mathfrak{q} \in \spec S$,
+$\mathfrak{p} = \phi^{-1}(\mathfrak{q})$ the image in $\spec R$. Then
+$R_{\mathfrak{p}} \to S_{\mathfrak{q}}$ is faithfully flat, hence injective.
+\end{corollary}
+\begin{proof}
+We only need to show that the map is flat by \cref{flatlocal}.
+Let $M' \hookrightarrow M$ be an injection of
+$R_{\mathfrak{p}}$-modules. Note that $M', M$ are then $R$-modules as well.
+Then
+$$M' \otimes_{R_{\mathfrak{p}}} S_{\mathfrak{q}} = (M' \otimes_R
+R_{\mathfrak{p}}) \otimes_{R_{\mathfrak{p}}} S_{\mathfrak{q}} = M' \otimes_R
+S_{\mathfrak{q}}.$$
+Similarly for $M$.
+This shows that tensoring over $R_{\mathfrak{p}}$ with $S_{\mathfrak{q}}$ is
+the same as tensoring over $R$ with $S_{\mathfrak{q}}$. But $S_{\mathfrak{q}}$
+is flat over $S$, and $S$ is flat over $R$, so by \cref{ffsorite},
+$S_{\mathfrak{q}}$ is flat over $R$. Thus the result is clear.
+\end{proof}
+
+\subsection{Descent of properties under faithfully flat base change}
+
+Let $S$ be an $R$-algebra. Often, things that are true about objects over $R$
+(for instance, $R$-modules) will remain true after base-change to $S$.
+For instance, if $M$ is a finitely generated $R$-module, then $M \otimes_R S$
+is a finitely generated $S$-module.
+In this section, we will show that we can conclude the \emph{reverse}
+implication when $S$ is \emph{faithfully flat} over $R$.
+
+\begin{exercise}
+Let $R \to S$ be a faithfully flat morphism of rings. If $S$ is noetherian, so
+is $R$. The converse is false!
+\end{exercise}
+
+
+\begin{exercise} Many properties of morphisms of rings are such that if they hold after
+one makes a faithfully flat base change, then they hold for the original
+morphism.
+Here is a simple example.
+Suppose $S$ is a faithfully flat $R$-algebra. Let $R'$ be any $R$-algebra.
+Suppose $S' =S \otimes_R R'$ is finitely generated over $R'$. Then $S$ is
+finitely generated over $R$.
+
+To see that, note that $R'$ is the colimit of its finitely generated
+$R$-subalgebras $R_\alpha$. Thus $S'$ is the colimit of the $R_\alpha
+\otimes_R S$, which inject into $S'$; finite generation implies that one of
+the $R_\alpha \otimes_R S \to S'$ is an isomorphism. Now use the fact that
+isomorphisms ``descend'' under faithfully flat morphisms.
+
+In algebraic geometry, one can show that many properties of morphisms of
+\emph{schemes} allow for descent under faithfully flat base-change. See
+\cite{EGA}, volume IV-2.
+\end{exercise}
+
+
+\subsection{Topological consequences}
+
+There are many topological consequences of faithful flatness on the $\spec$'s.
+These are
+explored in detail in volume 4-2 of \cite{EGA}. We shall only scratch the
+surface.
+The reader
+should bear in mind the usual intuition that flatness means that the fibers
+``look similar'' to one another.
+
+\begin{proposition}
+Let $R \to S$ be a faithfully flat morphism of rings. Then the map $\spec S
+\to \spec R$ is surjective.
+\end{proposition}
+
+\begin{proof} Since $R \to S$ is injective, we may regard $R$ as a subring of $S$.
+We shall first show that:
+
+\begin{lemma} \label{intideal}
+If $I \subset R$ is any ideal, then $R \cap IS = I$.
+\end{lemma}
+\begin{proof}
+To see this, note that the morphism
+\[ R/I \to S/IS \]
+is faithfully flat, since faithful flatness is preserved by base-change, and
+this is the base-change of $R \to S$ via $R \to R/I$.
+In particular, it is injective. Thus $IS \cap R = I$.
+\end{proof}
+
+
+Now to see surjectivity, we use a general criterion:
+
+\begin{lemma} \label{imagespec}
+Let $\phi: R \to S$ be a morphism of rings and suppose $\mathfrak{p} \in \spec
+R$. Then $\mathfrak{p}$ is in the image of $\spec S \to \spec R$ if and only if
+$\phi^{-1}( \phi(\mathfrak{p}) S) = \mathfrak{p}$.
+\end{lemma}
+
+This lemma will prove the proposition.
+\begin{proof}
+Suppose first that $\mathfrak{p}$ is in the image of $\spec S \to \spec R$. In
+this case, there is $\mathfrak{q} \in \spec S$ such that
+$ \mathfrak{p}$ is the preimage of $\mathfrak{q}$.
+In particular, $\mathfrak{q} \supset \phi(\mathfrak{p})S$, so that, if we take
+pre-images,
+\[ \mathfrak{p} \supset \phi^{-1}(\phi(\mathfrak{p}) S), \]
+while the other inclusion is obviously true.
+
+Conversely, suppose that $\mathfrak{p} \subset \phi^{-1}(\phi(\mathfrak{p})
+S)$. In this case, we know that
+\[ \phi(R - \mathfrak{p}) \cap \phi(\mathfrak{p})S = \emptyset. \]
+Now $T = \phi(R - \mathfrak{p})$ is a multiplicatively closed subset.
+There is a morphism
+\begin{equation} \label{randomequationwhichidonthaveanamefor}
+R_{\mathfrak{p}} \to T^{-1}S
+\end{equation}
+and the ideal $\phi(\mathfrak{p}) T^{-1}S$ is proper: if it contained $1$,
+then after clearing denominators some element of $\phi(\mathfrak{p})S$ would
+lie in $T$, contradicting the displayed disjointness. Choose a maximal ideal
+of $T^{-1}S$ containing $\phi(\mathfrak{p})T^{-1}S$; its preimage in $R$ is a
+prime ideal containing $\mathfrak{p}$ and disjoint from $R - \mathfrak{p}$,
+hence equal to $\mathfrak{p}$. By the usual commutative diagrams, it follows
+that $\mathfrak{p}$ is the preimage of a prime of $\spec S$.
+\end{proof}
+\end{proof}
+
+\begin{remark}
+The converse also holds. If $\phi: R \to S$ is a flat morphism of rings such
+that $\spec S \to \spec R$ is surjective, then $\phi$ is faithfully flat.
+Indeed, \cref{imagespec} shows then that for any prime ideal $\mathfrak{p}
+\subset R$ (in particular, for any maximal ideal), $\phi(\mathfrak{p})$ fails to generate $S$.
+This is sufficient to imply that $S$ is faithfully flat by \cref{ffmaximal}.
+\end{remark}
+
+\begin{remark}
+A ``slicker'' argument that faithful flatness implies surjectiveness on spectra
+can be given as follows. Let $R \to S$ be faithfully flat. Let $\mathfrak{p}
+\in \spec R$; we want to show that $\mathfrak{p}$ is in the image of $\spec S$.
+Now \emph{base change preserves faithful flatness.} So we can replace $R$ by
+$R/\mathfrak{p}$, $S$ by $S/\mathfrak{p}S$, and assume that $R$ is a domain and
+$\mathfrak{p} = 0$.
+Indeed, the commutative diagram
+\[ \xymatrix{
+\spec S/\mathfrak{p}S \ar[d] \ar[r] & \spec R/\mathfrak{p} \ar[d] \\
+\spec S \ar[r] & \spec R
+}\]
+shows that $\mathfrak{p}$ is in the image of $\spec S \to \spec R$ if and only
+if $\left\{0\right\}$ is in the image of $\spec S/\mathfrak{p}S \to \spec
+R/\mathfrak{p}$.
+
+We can make another reduction: by localizing at $\mathfrak{p}$ (that is, at
+$\left\{0\right\}$, which replaces $R$ by its fraction field), we may assume
+that $R$ is a field.
+So we have to show that if $R$ is a field and $S$ a faithfully flat
+$R$-algebra, then $\spec S \to \spec R$ is surjective. But since $S$ is not the
+zero ring (by \emph{faithful} flatness!), it is clear that $S$ has a prime
+ideal and $\spec S \to \spec R$ is thus surjective.
+\end{remark}
+
+In fact, one can show that the morphism $\spec S \to \spec R$ is actually an
+\emph{identification,} that is, a quotient map. This is true more generally
+for faithfully flat and quasi-compact morphisms of schemes; see \cite{EGA},
+volume 4-2.
+
+\begin{theorem}
+Let $\phi: R \to S$ be a faithfully flat morphism of rings. Then $\spec S \to
+\spec R$ is a quotient map of topological spaces.
+\end{theorem}
+
+In other words, a subset of $\spec R$ is closed if and only if its pre-image
+in $\spec S$ is closed.
+
+\begin{proof}
+We need to show that if $F \subset \spec R$ is such that its pre-image in
+$\spec S$ is closed, then $F$ itself is closed. \textbf{ADD THIS PROOF}
+\end{proof}
+
+
+\section{Faithfully flat descent}
+
+Fix a ring $R$, and let $S$ be an $R$-algebra. Then there is a natural functor
+from $R$-modules to $S$-modules sending $N \mapsto S \otimes_R N$.
+In this section, we shall be interested in going in the opposite direction,
+or in characterizing the image of this functor.
+Namely, given an $S$-module, we want to ``descend'' to an $R$-module when
+possible; given a morphism of $S$-modules, we want to know when it comes from a
+morphism of $R$-modules by base change.
+
+\add{this entire section!}
+
+
+\subsection{The Amitsur complex}
+\add{citation needed}
+
+Suppose $B$ is an $A$-algebra.
+Then we can construct a complex of $A$-modules
+\[ 0 \to A \to B \to B \otimes_A B \to B \otimes_A B \otimes_A B \to \dots \]
+as follows.
+For each $n$, we denote by $B^{\otimes n}$ the tensor product of $B$ with
+itself $n$ times (over $A$).
+There are morphisms of $A$-algebras
+\[ d_i: B^{\otimes n} \to B^{\otimes n+1} , \quad 1 \leq i \leq n+1 \]
+where the map sends
+\[ b_1 \otimes \dots \otimes b_n \mapsto b_1 \otimes \dots \otimes b_{i-1}
+\otimes 1 \otimes b_i \otimes \dots \otimes b_n, \]
+so that the $1$ is placed in the $i$th spot.
+Then the coboundary
+$\partial: B^{\otimes n} \to B^{\otimes n+1}$ is defined as $\sum (-1)^i d_i$.
+It is easy to check that this forms a complex of $A$-modules.
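+In the lowest degree, for instance, the two maps $d_1, d_2: B \to B \otimes_A
+B$ send $b$ to $1 \otimes b$ and $b \otimes 1$ respectively, so that
+$\partial(b) = \pm(1 \otimes b - b \otimes 1)$, depending on the sign
+convention. In general the check reduces to the cosimplicial identity $d_j
+d_i = d_{i+1} d_j$ for $j \leq i$ (inserting a $1$ at a later spot and then
+at an earlier one is the same as doing it in the other order, with the later
+index shifted by one), whence
+\[ \partial^2 = \sum_{i, j} (-1)^{i+j} d_j d_i = 0, \]
+because the term indexed by $(i, j)$ with $j \leq i$ cancels against the term
+indexed by $(j, i+1)$.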
+
+\begin{definition}
+The above complex of $A$-modules is called the \textbf{Amitsur complex} of $B$
+over $A$, and we denote it $\mathcal{A}_{B/A}$. It is clearly functorial in
+$B$; a map of $A$-algebras $B \to C$ induces a morphism of complexes
+$\mathcal{A}_{B/A} \to \mathcal{A}_{C/A}$.
+\end{definition}
+
+Note that the Amitsur complex behaves very nicely with respect to base-change.
+If $A'$ is an $A$-algebra and $B' = B \otimes_A A'$ is the base extension, then
+$\mathcal{A}_{B'/A'} = \mathcal{A}_{B/A} \otimes_A A'$, which follows easily
+from the fact that base-change commutes with tensor products.
+
+In general, the Amitsur complex need not be exact.
+For instance, exactness at the first stage says precisely that the map $A \to B$ is injective, which can certainly fail.
+If, however, the morphism is \emph{faithfully flat}, then we do get exactness:
+
+\begin{theorem}
+If $B$ is a faithfully flat $A$-algebra, then the Amitsur complex of $B/A$ is
+exact. In fact, if $M$ is any $A$-module, then $\mathcal{A}_{B/A} \otimes_A
+M$ is exact.
+\end{theorem}
+\begin{proof}
+We prove this first under the assumption that $A \to B$ has a section.
+In this case, we will even have:
+
+\begin{lemma}
+Suppose $A \to B$ is a morphism of rings with a section $B \to A$. Then the
+Amitsur complex $\mathcal{A}_{B/A}$ is homotopically trivial. (In particular,
+$\mathcal{A}_{B/A} \otimes_A M$ is acyclic for all $M$.)
+\end{lemma}
+\begin{proof}
+Let $s: B \to A$ be the section; by assumption, this is a morphism of
+$A$-algebras. We shall define a chain contraction of $\mathcal{A}_{B/A}$.
+To do this, we must define a collection of morphisms of $A$-modules
+\( h_{n+1} : B^{\otimes n+1} \to B^{\otimes n}, \)
+and this we do by sending
+\[ b_1 \otimes \dots \otimes b_{n+1} \mapsto s(b_{1}) \left( b_2 \otimes
+\dots \otimes b_{n+1} \right). \]
+It is still necessary to check that the $\left\{h_{n+1}\right\}$ form a chain
+contraction; in other words, that $\partial h_{n} + h_{n+1} \partial =
+1_{B^{\otimes n}}$.
+By linearity, we need only check this on elements of the form $b_1 \otimes
+\dots \otimes b_n$. Then we find
+\[ \partial h_n (b_1 \otimes \dots \otimes b_n) = s(b_1) \sum (-1)^i b_2 \otimes \dots \otimes 1
+\otimes \dots \otimes b_n \]
+where the $1$ is in the $i$th place,
+while
+\[ h_{n+1} \partial ( b_1 \otimes \dots \otimes b_n) = b_1 \otimes \dots \otimes b_n +
+\sum_{i>0} s(b_1) (-1)^{i-1}b_2 \otimes \dots \otimes 1 \otimes \dots \otimes b_n \]
+where again the $1$ is in the $i$th place. Adding the two expressions, the
+sums cancel termwise, leaving $b_1 \otimes \dots \otimes b_n$; the assertion
+is clear from this.
+Note that if $\mathcal{A}_{B/A}$ is contractible, we can tensor the chain
+homotopy with $M$ to see that $\mathcal{A}_{B/A} \otimes_A M$ is chain contractible
+for any $M$.
+\end{proof}
+
+With this lemma proved, we see that the Amitsur complex $\mathcal{A}_{B/A}$
+(or even $\mathcal{A}_{B/A} \otimes_A M$) is acyclic whenever $B/A$ admits a
+section. Now if we make the base-change by the morphism $A \to B$, we get the
+morphism $B \to B \otimes_A B$. That is,
+\[ B \otimes_A \left( \mathcal{A}_{B/A} \otimes_A M \right)= \mathcal{A}_{B
+\otimes_A B/B} \otimes_B (M \otimes_A B). \]
+The latter is acyclic because $B \to B \otimes_A B$ admits a section (namely,
+$b_1 \otimes b_2 \mapsto b_1 b_2$). So the complex $\mathcal{A}_{B/A}
+\otimes_A M$ becomes acyclic after base-changing to $B$; this, however, is a
+faithfully flat base-extension, so the original complex was itself exact.
+\end{proof}
+
+\begin{remark}
+A powerful use of the Amitsur complex in algebraic geometry is to show that
+the cohomology of a quasi-coherent sheaf on an affine scheme is trivial. In
+this case, the {\v C}ech complex (of a suitable covering) turns out to be precisely
+the Amitsur complex (with the faithfully flat morphism $A \to \prod A_{f_i}$
+for the $\left\{f_i\right\}$ a family generating the unit ideal). This
+argument generalizes to showing that the \emph{{\'e}tale}
+cohomology of a quasi-coherent sheaf on an affine is trivial; cf. \cite{Ta94}.
+\end{remark}
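+\begin{example}
+To sketch the simplest instance of this dictionary: suppose $f, g \in A$
+generate the unit ideal, and set $B = A_f \times A_g$, a faithfully flat
+$A$-algebra. Then
+\[ B \otimes_A B \simeq A_f \times A_{fg} \times A_{gf} \times A_g, \]
+and the kernel of $\partial: B \to B \otimes_A B$, $b \mapsto 1 \otimes b - b
+\otimes 1$, consists of the pairs $(s, t) \in A_f \times A_g$ whose images in
+$A_{fg}$ agree. Exactness of the Amitsur complex in low degrees thus says
+that such a pair glues to a unique element of $A$: this is precisely the
+sheaf condition for the cover of $\spec A$ by $D(f)$ and $D(g)$.
+\end{example}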
+
+\subsection{Descent for modules}
+Let $A \to B$ be a faithfully flat morphism of rings.
+Given an $A$-module $M$, we have a natural way of getting a $B$-module $M_B = M
+\otimes_A B$. We want to describe the image of this functor; alternatively,
+given a $B$-module, we want to decide when it arises from an $A$-module in this way.
+
+Given an $A$-module $M$ and the associated $B$-module $M_B = M \otimes_A B$,
+there are two ways of getting a $B \otimes_A B$-module from $M_B$, namely
+the two tensor products $M_B \otimes_B (B \otimes_A B)$, according as we make
+$B \otimes_A B$ into a $B$-algebra via the first map $b \mapsto b \otimes 1$
+or the second $b \mapsto 1 \otimes b$.
+We shall denote these by $M_B \otimes_A B$ and $B \otimes_A M_B$, the
+action being clear.
+But these are naturally isomorphic because both are obtained from $M$ by
+base-extension $A \rightrightarrows B \otimes_A B$, and the two maps are the
+same. Alternatively, these two tensor products are
+$M \otimes_A B \otimes_A B$ and $B \otimes_A M \otimes_A B$ and these are
+clearly isomorphic by the braiding isomorphism\footnote{It is \emph{not} the
+braiding isomorphism $M_B \otimes_A B \simeq B \otimes_A M_B$, which is not an
+isomorphism of $B \otimes_A B$-modules.
+This is the isomorphism that sends $m \otimes b \otimes b'$ to $b \otimes m
+\otimes b'$.
+} of the first two factors as $B \otimes_A B$-modules (with the $B \otimes_A B$ part
+acting on the $B$'s in the above tensor product!).
+
+\begin{definition}
+The \textbf{category of descent data} for the faithfully flat extension $A \to
+B$ is defined as follows. An object in this category consists of the following
+data:
+\begin{enumerate}
+\item A $B$-module $N$.
+\item An isomorphism of $B \otimes_A B$-modules $\phi: N \otimes_A B \simeq B \otimes_A N$.
+This isomorphism is required to make the following diagram\footnote{This is
+the cocycle condition.} of $B \otimes_A B
+\otimes_A B$-modules commutative:
+\begin{equation} \label{dc} \xymatrix{
+B \otimes_A B \otimes_A N \ar[rr]^{\phi_{23}} \ar[rd]^{\phi_{13}} & & B \otimes_A N \otimes_A B
+\ar[ld]^{\phi_{12}} \\
+& N \otimes_A B \otimes_A B
+}\end{equation}
+Here $\phi_{ij}$ means that the permutation of the $i$th and $j$th factors of
+the tensor product is done using the isomorphism $\phi$.
+\end{enumerate}
+A morphism between objects $(N, \phi), (N', \psi)$ is a morphism of
+$B$-modules $f: N \to N'$ that makes the diagram
+\begin{equation} \label{dc2} \xymatrix{
+N \otimes_A B \ar[d]^{f \otimes 1} \ar[r]^{\phi} & B \otimes_A N
+\ar[d]^{1 \otimes f} \\
+N' \otimes_A B \ar[r]^{\psi} & B \otimes_A N' \\
+}\end{equation}
+commute.
+\end{definition}
+
+As we have seen, there is a functor $F$ from $A$-modules
+to descent data.
+Strictly speaking, we should check the commutativity of \eqref{dc}, but this
+is clear: for $N= M\otimes_A B$, \eqref{dc} looks like
+$$
+\xymatrix{
+B \otimes_A B \otimes_A M \otimes_A B \ar[rr]^{\phi_{23}} \ar[rd]^{\phi_{13}} &
+& B \otimes_A M \otimes_A B \otimes_A B
+\ar[ld]^{\phi_{12}} \\
+& M \otimes_A B \otimes_A B \otimes_A B
+}$$
+Here all the maps are just permutations of the factors (that is, the braiding
+isomorphisms in the structure of symmetric tensor category on the category of
+$A$-modules), so it clearly commutes.
+
+The main theorem is:
+
+\begin{theorem}[Descent for modules]
+The above functor from $A$-modules to descent data for $A \to B$ is an
+equivalence of categories.
+\end{theorem}
+
+We follow \cite{Vi08} in the proof.
+\begin{proof}
+We start by describing the inverse functor from descent data to $A$-modules.
+Recall that if $M$ is an $A$-module, then $M$ can be characterized as the
+submodule of $M_B$ consisting of those $m \in M_B$ such that $1
+\otimes m$ and $m \otimes 1$ correspond to each other under the isomorphism $M_B \otimes_A B
+\simeq B \otimes_A M_B$.
+(The case $M = A$ was particularly transparent: elements of $A$ were elements
+$x \in B$ such that $x \otimes 1 = 1 \otimes x$ in $B \otimes_A B$.)
+In other words, we had the exact sequence
+\[ 0 \to M \to M_B \to M_B \otimes_A B . \]
+
+We want to imitate this for descent data.
+Namely, we want to construct a functor $G$ from descent data to $A$-modules.
+Given descent data $(N, \phi)$ where $\phi: N \otimes_A B \simeq B \otimes_A
+N$ is an isomorphism of $B \otimes_A B$-modules, we define $GN$ to be
+\[ GN = \ker ( N \stackrel{n \mapsto 1 \otimes n - \phi(n \otimes 1) }{\to} B
+\otimes_A N). \]
+It is clear that this is an $A$-module, and that it is functorial in the
+descent data.
+The exact sequence above shows that $GF (M)$ is naturally isomorphic to $M$ for any
+$A$-module $M$.
+
+We need to show the analog for $FG(N, \phi)$; in other words, we need to show
+that any descent data arises via the $F$-construction. Even before that, we
+need to describe a natural transformation from $FG(N, \phi)$ to the identity.
+Fix a descent datum $(N, \phi)$.
+Then $G(N, \phi)$ gives an $A$-submodule $M \subset N$.
+We get a morphism
+\[ f: M_B = M \otimes_A B \to N \]
+by the universal property. This sends $m \otimes b \mapsto bm$.
+The claim is that
+this is a map of descent data.
+In other words, we have to show that \eqref{dc2} commutes.
+The diagram looks like
+\[ \xymatrix{
+M_B \otimes_A B \ar[d]^{f \otimes 1} \ar[r] & B \otimes_A M_B \ar[d]^{1
+\otimes f} \\
+N \otimes_A B \ar[r]^{\phi} & B \otimes_A N
+}.\]
+In other words, if $m\otimes b \in M_B$ and $b' \in B$, we have to show that
+$\phi( bm \otimes b' ) = (1 \otimes f)( b \otimes m \otimes b') = b \otimes
+b' m$.
+
+However,
+\[ \phi(bm \otimes b') = (b \otimes b') \phi(m \otimes 1) = (b \otimes b')(1
+\otimes m) = b \otimes b'm \]
+in view of the definition of $M = GN$ as the set of elements such that $\phi(m
+\otimes 1) = 1 \otimes m$, and the fact that $\phi$ is an isomorphism of $B
+\otimes_A B$-modules. The equality we wanted to prove is thus clear.
+
+So we have the two natural transformations between $FG, GF$ and the respective
+identity functors. We have already shown that one of them is an isomorphism.
+Now we need to show that if $(N, \phi)$ is a descent datum as above, and $M =
+G(N, \phi)$, the map $F(M) \to (N, \phi)$ is an \emph{isomorphism}.
+In other words, we have to show that the map
+\[ M \otimes_A B \to N \]
+is an isomorphism.
+
+
+Here we shall draw a commutative diagram. Namely, we shall essentially use the Amitsur
+complex for the faithfully flat map $B \to B \otimes_A B$. We shall obtain a
+commutative diagram with exact rows:
+\[ \xymatrix{
+0 \ar[r] & M \otimes_A B \ar[d] \ar[r] & N \otimes_A B
+\ar[d]^{\phi} \ar[r] & N \otimes_A B \otimes_A B \ar[d]^{\phi_{13}^{-1}} \\
+0 \ar[r] & N \ar[r] & B \otimes_A N
+ \ar[r] & B \otimes_A B \otimes_A N
+}.\]
+Here the map $$N \otimes_A B \to N \otimes_A B \otimes_A B$$ sends $n \otimes b
+\mapsto n \otimes 1 \otimes b - \phi^{-1}(1 \otimes n) \otimes b$.
+Consequently the first row is exact, $B$ being flat over $A$.
+The bottom map
+$$B \otimes_A N \to B \otimes_A B \otimes_A N$$
+sends $b \otimes n \mapsto b \otimes 1 \otimes n - 1 \otimes b \otimes n$.
+It follows by the Amitsur complex that the bottom row is exact too.
+It remains to check that the diagram commutes. Once this is done, since the
+two vertical maps on the right are isomorphisms and the rows are exact, it
+will follow that $M \otimes_A B \to N$ is an
+isomorphism, and we shall be done.
+
+Fix $n \otimes b \in N \otimes_A B$. We need to figure out where it goes in $B
+\otimes_A B \otimes_A N$ under the two maps. Going right gives $n \otimes 1 \otimes b - \phi_{12}( 1 \otimes n \otimes
+b)$.
+Going down then gives
+$\phi_{13}^{-1}(n \otimes 1 \otimes b) - \phi_{13}^{-1}\phi_{12}( 1 \otimes n \otimes
+b) =
+\phi_{13}^{-1}(n \otimes 1 \otimes b) - \phi_{23}^{-1}(1 \otimes n \otimes
+b)$, where we have used the cocycle condition.
+So this is one of the maps $N \otimes_A B \to B \otimes_A B \otimes_A N$.
+
+Now we consider the other way $n \otimes b$ can map to $B \otimes_A B \otimes_A N$.
+
+Going down gives $\phi(n \otimes
+b)$, and then going right gives the difference of two maps $N \otimes_A B \to B
+\otimes_A B \otimes_A N$, which are the same as above.
+\end{proof}
+
+\subsection{Example: Galois descent}
+\add{this section}
+
+
+\section{The $\tor$ functor}
+
+
+\subsection{Introduction}
+Fix $M$. The functor $N \mapsto N \otimes_R M$ is a right-exact functor on the
+category of $R$-modules. We can thus consider its \emph{left-derived functors}
+as in \cref{homological}.
+Recall:
+
+\begin{definition}
+The derived functors of the tensor product functor $N \mapsto N \otimes_R M$ are denoted by
+$\mathrm{Tor} _R^i( N, M), i \geq 0$. We shall sometimes omit the
+subscript $R$.
+\end{definition}
+
+So in particular, $\mathrm{Tor} _R^0(M,N) = M \otimes N$.
+A priori, $\tor$ is only a functor of the first variable, but in fact, it is
+not hard to see that $\tor$ is a covariant functor of two variables $M, N$.
+In fact, $\tor_R^i(M, N) \simeq \tor_R^i(N, M)$ for any two $R$-modules $M, N$.
+For proofs, we refer to \cref{homological}. \textbf{ADD: THEY ARE NOT IN THAT
+CHAPTER YET.}
+
+Let us recall the basic properties of $\tor$ that follow from general facts
+about derived functors. Given an exact sequence
+\[ 0 \to N' \to N \to N'' \to 0 \]
+we have a long exact sequence
+\[ \dots \to \mathrm{Tor} ^i(N',M) \to \mathrm{Tor} ^i(N,M) \to \mathrm{Tor} ^i(N'',M ) \to \mathrm{Tor} ^{i-1}(N',M) \to \dots \]
+Since $\tor$ is symmetric, we can similarly get a long exact sequence if we
+are given a short exact sequence of $M$'s.
+
+Recall, moreover, that $\tor$ can be computed explicitly (in theory).
+If we have modules $M, N$, and a projective resolution $P_* \to N$, then
+$\tor_R^i(M,N)$ is the $i$th homology of the complex $M \otimes P_*$.
+We can use this to compute $\tor$ in the case of abelian groups.
+
+\begin{example} We compute $\tor_{\mathbb{Z}}^*(A, B)$ whenever $A, B $ are abelian groups
+and $B$ is finitely generated. This immediately reduces to the case of $B$
+either $\mathbb{Z}$ or $\mathbb{Z}/d\mathbb{Z}$ for some $d$ by the
+structure theorem. When $B= \mathbb{Z}$, there is nothing to
+compute (derived functors are not very interesting on projective objects!).
+Let us compute $\tor_{\mathbb{Z}}^*(A, \mathbb{Z}/d\mathbb{Z})$ for an abelian group $A$.
+
+
+Actually, let us be more general and consider the case where the ring is
+replaced by $\mathbb{Z}/m$ for some $m$ such that $d \mid m$. Then we will
+compute $\tor_{\mathbb{Z}/m}^*(A, \mathbb{Z}/d)$ for any
+$\mathbb{Z}/m$-module $A$. The case $m = 0$
+will handle the ring $\mathbb{Z}$; note, though, that when $m = 0$ the maps
+$m/d$ below are zero, so one should truncate the resolution to $0 \to
+\mathbb{Z} \stackrel{d}{\to} \mathbb{Z} \to \mathbb{Z}/d\mathbb{Z} \to 0$,
+and only $\Tor_0$ and $\Tor_1$ can be nonzero.
+Consider the projective resolution
+\[
+\xymatrix{
+\cdots \ar[r]^{m/d} & \mathbb{Z}/m\mathbb{Z} \ar[r]^d & \mathbb{Z}/m\mathbb{Z} \ar[r]^{m/d}
+ %& \mathbb{Z}/m\mathbb{Z} \ar[r]^d & \mathbb{Z}/m\mathbb{Z} \ar[r]^{m/d}
+ & \mathbb{Z}/m\mathbb{Z} \ar[r]^{d} & \mathbb{Z}/m\mathbb{Z} \ar[r] & \mathbb{Z}/d\mathbb{Z} \ar[r] & 0.
+}
+\]
+We apply $A \otimes_{\mathbb{Z}/m\mathbb{Z}} \cdot$. Since tensoring (over
+$\mathbb{Z}/m$!) with $\mathbb{Z}/m\mathbb{Z}$ does nothing, we obtain the complex
+\[
+\xymatrix{
+\cdots \ar[r]^{m/d} & A \ar[r]^d & A \ar[r]^{m/d}
+ & A \ar[r]^{d} & A \ar[r] & 0.
+}
+\]
+The groups $\Tor_n^{\mathbb{Z}/m\mathbb{Z}} (A, \mathbb{Z}/d\mathbb{Z})$ are simply the homology groups
+(ker/im) of
+the complex, which are simply
+\begin{align*}
+\Tor_0^{\mathbb{Z} / m\mathbb{Z}} (A, \mathbb{Z}/d\mathbb{Z}) &\cong A / dA \\
+\Tor_n^{\mathbb{Z} / m\mathbb{Z}} (A, \mathbb{Z}/d\mathbb{Z}) &\cong {}_dA/(m/d)A
+ \quad \text{$n$ odd, $n \ge 1$} \\
+\Tor_n^{\mathbb{Z} / m\mathbb{Z}} (A, \mathbb{Z}/d\mathbb{Z}) &\cong {}_{m/d}A/dA
+ \quad \text{$n$ even, $n \ge 2$},
+\end{align*}
+where ${}_kA = \{ a \in A \mid ka = 0 \}$ denotes the set of elements of $A$
+killed by $k$.
+\end{example}
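+As a sanity check of these formulas, take $m = 4$, $d = 2$, and $A =
+\mathbb{Z}/2\mathbb{Z}$: then $dA = 0$ and ${}_2A = A$, while $(m/d)A = 2A =
+0$, so all the quotients above are $\mathbb{Z}/2\mathbb{Z}$, and
+\[ \Tor_n^{\mathbb{Z}/4\mathbb{Z}}(\mathbb{Z}/2\mathbb{Z},
+\mathbb{Z}/2\mathbb{Z}) \simeq \mathbb{Z}/2\mathbb{Z} \quad \text{for all }
+n \geq 0. \]
+In particular, over a ring such as $\mathbb{Z}/4\mathbb{Z}$ the $\Tor$
+groups of a module need not vanish in any degree.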
+
+
+The symmetry of the tensor product also provides a simple proof that
+$\tor$ commutes with filtered colimits.
+\begin{proposition} \label{torfilteredcolim}
+Let $M$ be an $R$-module, $\left\{N_i\right\}$ a filtered system of
+$R$-modules. Then the natural morphism
+\[ \varinjlim_i \tor_R^n(M, N_i) \to \tor^n_R(M, \varinjlim_i N_i) \]
+is an isomorphism for each $n$.
+\end{proposition}
+\begin{proof}
+We can see this explicitly. Let us compute the $\tor$ functors by choosing a
+projective resolution $P_* \to M$ of $M$ (note that which factor we use is irrelevant, by
+symmetry!). Then the left side is the colimit
+\( \varinjlim H(P_* \otimes N_i) \), while the right side is $H(P_* \otimes
+\varinjlim N_i)$. But tensor products commute with filtered (or arbitrary)
+colimits, since the tensor product admits a right adjoint. Moreover, we know
+that homology commutes with filtered colimits. Thus the natural map is an
+isomorphism.
+\end{proof}
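+As an illustration, we can combine this with the previous example. Writing
+$\mathbb{Q}/\mathbb{Z} \simeq \varinjlim_n \mathbb{Z}/n\mathbb{Z}$ (a
+filtered colimit over $n$ ordered by divisibility), we get for any abelian
+group $A$
+\[ \tor_{\mathbb{Z}}^1(\mathbb{Q}/\mathbb{Z}, A) \simeq \varinjlim_n
+\tor_{\mathbb{Z}}^1(\mathbb{Z}/n\mathbb{Z}, A) \simeq \varinjlim_n {}_nA, \]
+which is the torsion subgroup of $A$, the transition maps being the
+inclusions ${}_nA \subset {}_{nm}A$.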
+
+
+\subsection{$\tor$ and flatness}
+
+$\tor$ provides a simple way of detecting flatness. One basic observation is
+that for a flat module $M$, the modules $\tor^i(M, N)$ vanish for $i \geq 1$,
+whatever $N$ may be.
+Indeed, recall that $\mathrm{Tor} (M,N)$ is computed by taking a projective resolution of $N$,
+\[ \dots \to P_2 \to P_1 \to P_0 \to N \to 0, \]
+tensoring with $M$, and taking the homology. But tensoring with $M$ is exact if $M$ is flat, so the higher $\mathrm{Tor} $ modules vanish.
+
+The converse is also true. In fact, something even stronger holds:
+\begin{proposition} $M$ is flat iff $\mathrm{Tor} ^1(M,R/I)=0$ for all finitely generated ideals $I \subset R$.
+\end{proposition}
+\begin{proof}
+We have just seen one direction.
+Conversely, suppose $\mathrm{Tor} ^i(M,R/I) = 0$ for all finitely generated
+ideals $I$ and $i>0$.
+Then the result holds, first of all, for all ideals $I$, because of
+\cref{torfilteredcolim} and the fact that $R/I$ is always the colimit of $R/J$
+as $J$ ranges over finitely generated ideals $J \subset I$.
+
+We now show that $\tor^i(M, N) = 0$ whenever $N$ is finitely generated. To do
+this, we induct on the number of generators of $N$. When $N$ has one
+generator, it is cyclic and we are done. Suppose we have proved the result
+for modules with at most $n-1$ generators, and suppose $N$ has
+$n$ generators.
+Then we can consider an exact sequence of the form
+\[ 0 \to N' \hookrightarrow N \twoheadrightarrow N'' \to 0 \]
+where $N'$ has $n-1$ generators and $N''$ is cyclic. Then the long exact
+sequence shows that $\tor^i(M, N) = 0$ for all $i \geq 1$.
+
+Thus we see that $\tor^i(M, N) = 0$ whenever $N$ is finitely generated. Since
+any module is a filtered colimit of finitely generated ones, we are done by
+\cref{torfilteredcolim}.
+\end{proof}
+
+
+Note that there is an exact sequence $0 \to I \to R \to R/I \to 0$, and
+so
+\[ 0 = \mathrm{Tor} _1(M,R) \to \mathrm{Tor} _1(M,R/I) \to I \otimes M \to M \]
+is exact; that is, $\mathrm{Tor} _1(M,R/I)$ is the kernel of $I \otimes M \to
+M$. From this we deduce:
+
+\begin{corollary}
+If the map
+\[ I \otimes M \to M \]
+is injective for all ideals $I$, then $M$ is flat.
+\end{corollary}
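+For instance, take $R = \mathbb{Z}$, so that every ideal is of the form
+$(n)$. For $n \neq 0$, the ideal $(n)$ is free of rank one, and the map $(n)
+\otimes M \to M$ is identified with multiplication by $n$ on $M$, whose
+kernel is ${}_nM$. The corollary thus recovers the fact that a torsion-free
+abelian group is flat (and, combining with the previous proposition, that an
+abelian group is flat if and only if it is torsion-free).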
+
+
+\section{Flatness over noetherian rings}
+
+We shall be able to obtain simpler criteria for flatness when the ring in
+question is noetherian local. For instance, we have already seen:
+
+\begin{theorem}
+If $M$ is a finitely generated module over a noetherian local ring $R$ (with
+residue field $k$), then $M$ is free if and only if
+$\tor_1(k, M) = 0$.
+\end{theorem}
+
+In particular, flatness is the same thing as the vanishing of \emph{one}
+$\tor$ module, and it equates to freeness. Now, we want to generalize this
+result to the case where $M$ is not necessarily finitely generated over $R$,
+but finitely generated over an $R$-algebra that is also noetherian local. In
+particular, we shall get useful criteria for when an extension of
+noetherian local \emph{rings}
+(which in general is not finite, or even finitely generated)
+is flat.
+
+We shall prove two main criteria. The \emph{local criterion} is a direct
+generalization of the above result (the vanishing of one $\tor$ group). The
+\emph{infinitesimal criterion} reduces checking flatness of $M$ to checking
+flatness of $M \otimes_R R/\mathfrak{m}^t$ over $R/\mathfrak{m}^t$; in
+particular, it reduces to the case where the base ring is \emph{artinian.}
+Armed with these, we will be able to prove a rather difficult theorem that
+states that we can always find lots of flat extensions of noetherian local
+rings.
+
+\subsection{Flatness over a noetherian local ring}
+
+We shall place ourselves in the following situation. $R, S$ are noetherian
+local rings with maximal ideals $\mathfrak{m} \subset R, \mathfrak{n} \subset
+S$, and $S$ is an $R$-algebra (and the morphism $R \to S$ is \emph{local}, so
+$\mathfrak{m}S \subset \mathfrak{n}$).
+We will want to know when an $S$-module is flat over $R$. In particular, we
+want a criterion for when $S$ is flat over $R$.
+
+\begin{theorem} \label{localcrit} The finitely generated $S$-module $M$ is flat over $R$ iff
+\[ \mathrm{Tor} ^1_R( k, M) = 0, \]
+where $k = R/\mathfrak{m}$ is the residue field of $R$.
+In this case, $M$ is even free.
+\end{theorem}
+
+It is actually striking how little the condition that $M$ is a finitely
+generated $S$-module enters, or how irrelevant it seems in the statement. The
+argument will, however, use the fact that $M$ is \emph{separated} with respect
+to the $\mathfrak{m}$-adic topology, which relies on Krull's intersection
+theorem (note that since $\mathfrak{m} S \subset \mathfrak{n}$, the
+$\mathfrak{m}$-adic topology on $M$ is separated).
+
+\begin{proof}
+Necessity is immediate. What we have to prove is sufficiency.
+
+First, we claim that if $N$ is an $R$-module of finite length, then
+\begin{equation} \label{vanishingtorlocal} \mathrm{Tor} ^1_R( N,
+M)=0.\end{equation}
+This is because, by d{\'e}vissage (\cref{filtrationlemma}), $N$ has a finite filtration
+$N_i$ whose successive quotients are of the form $R/\mathfrak{p}$ for $\mathfrak{p}$
+prime and (by the finite length hypothesis) $\mathfrak{p}= \mathfrak{m}$. So we
+have a filtration on $N$ whose successive quotients are isomorphic to $k$.
+We can then climb up the filtration to argue that $\tor^1(N_i, M) = 0$ for
+each $i$.
+
+Indeed, the claim \eqref{vanishingtorlocal} is true for $N_0=0$ trivially. We climb up the filtration piece by piece inductively; if $\mathrm{Tor} ^1_R(N_i, M)=0$, then the exact sequence
+\[ 0 \to N_i \to N_{i+1} \to k \to 0 \]
+yields an exact sequence
+\[ \mathrm{Tor} ^1_R(N_i, M) \to \mathrm{Tor} ^1_R(N_{i+1}, M) \to 0 \]
+from the long exact sequence of $\mathrm{Tor} $ and the hypothesis on $M$.
+The claim is proved.
+
+
+Now we want to prove that $M$ is flat. The idea is to show that $I \otimes_RM
+\to M$ is injective for any ideal $I \subset R$. We will use some diagram chasing and the Krull intersection theorem on the kernel $K$ of this map, to interpolate between it and various quotients by powers of $\mathfrak{m}$.
+First we write some exact sequences.
+
+We have an exact sequence
+\[ 0 \to \mathfrak{m}^t \cap I \to I \to I/I \cap \mathfrak{m}^t \to 0\]
+which we tensor with $M$:
+\[ \mathfrak{m}^t \cap I \otimes M \to I \otimes M \to I/I \cap \mathfrak{m}^t \otimes M \to 0.\]
+
+The sequence
+\[ 0 \to I/I \cap \mathfrak{m}^t \to R/\mathfrak{m}^t \to R/(I+\mathfrak{m}^t) \to 0\]
+is also exact, and tensoring with $M$ yields an exact sequence:
+\[ 0 \to I/I \cap \mathfrak{m}^t \otimes M \to M/\mathfrak{m}^tM \to M/(\mathfrak{m}^t + I) M \to 0\]
+because $\mathrm{Tor} ^1_R(M, R/(I+\mathfrak{m}^t))=0$ by
+\eqref{vanishingtorlocal}, as $R/(I + \mathfrak{m}^t)$ is of finite length.
+
+Let us draw the following commutative diagram:
+\begin{equation} \label{keyflatnessdiag}
+\xymatrix{
+& & 0 \ar[d] \\
+\mathfrak{m}^t \cap I \otimes M \ar[r] & I \otimes M \ar[r] & I/I \cap \mathfrak{m}^t \otimes M \ar[d] \\
+& & M/\mathfrak{m}^t M
+} \end{equation}
+
+Here the column and the row are exact.
+As a result, if an element in $I \otimes M$ goes to zero in $M$ (a fortiori
+in $M/\mathfrak{m}^tM$) it must come from $\mathfrak{m}^t \cap I \otimes M$
+for all $t$. Thus, by the Artin-Rees lemma, it belongs to $\mathfrak{m}^t(I \otimes M)$ for all $t$, and the Krull intersection theorem (applied to $S$, since $\mathfrak{m}S \subset \mathfrak{n}$) implies it is zero.
+
+\end{proof}
+
+\subsection{The infinitesimal criterion for flatness}
+
+\begin{theorem} \label{infcriterion} Let $R$ be a noetherian local ring, $S$ a noetherian local
+$R$-algebra. Let $M$ be a finitely generated module over $S$. Then $M$ is
+flat over $R$ iff $M/\mathfrak{m}^tM$ is flat over $R/\mathfrak{m}^t$ for all $t>0$.
+\end{theorem}
+\begin{proof}
+One direction is easy, because flatness is preserved under base-change $R \to
+R/\mathfrak{m}^t$.
+For the other direction, suppose $M/\mathfrak{m}^t M$ is flat over
+$R/\mathfrak{m}^t$ for all $t$. Then, we need to show that if $I \subset R$ is any ideal,
+then the map $I \otimes_R M \to M$ is injective. We shall argue that the
+kernel is zero using the Krull intersection theorem.
+
+Fix $t \in \mathbb{N}$. As before, the short exact sequence of
+$R/\mathfrak{m}^t$-modules $0 \to
+I/(\mathfrak{m}^t \cap I) \to R/\mathfrak{m}^t \to R/(\mathfrak{m}^t + I) \to 0$ gives an exact
+sequence (because $M/\mathfrak{m}^t M$ is $R/\mathfrak{m}^t$-flat)
+\[ 0 \to I/I \cap \mathfrak{m}^t \otimes M \to M/\mathfrak{m}^tM \to M/(\mathfrak{m}^t + I) M \to 0\]
+which we can fit into a diagram, as in \eqref{keyflatnessdiag}
+$$\xymatrix{
+& & 0 \ar[d] \\
+\mathfrak{m}^t \cap I \otimes M \ar[r] & I \otimes M \ar[r] & I/I \cap \mathfrak{m}^t \otimes M \ar[d] \\
+& & M/\mathfrak{m}^t M
+}.$$
+
+The horizontal sequence was always exact, as before. The vertical sequence can be argued to be exact by tensoring the exact sequence
+\[ 0 \to I/I \cap \mathfrak{m}^t \to R/\mathfrak{m}^t \to R/(I+\mathfrak{m}^t) \to 0\]
+of $R/\mathfrak{m}^t$-modules with $M/\mathfrak{m}^tM$, and using flatness of
+$M/\mathfrak{m}^t M$ over $R/\mathfrak{m}^t$.
+Thus we get flatness of $M$ as before.
+\end{proof}
+
+Incidentally, if we combine the local and infinitesimal criteria for flatness, we get a little more.
+
+\begin{comment}
+%% THIS IS NOT ADDED YEt
+\subsection{The $\gr$ criterion for flatness}
+
+Suppose $(R, \mathfrak{m})$ is a noetherian local ring and $(S, \mathfrak{n})$
+a local $R$-algebra.
+As usual, we are interested in criteria for when a finitely generated
+$S$-module $M$ is flat over $R$.
+
+We can, of course, endow $M$ with the $\mathfrak{m}$-adic topology.
+Then $M$ is a filtered module over the filtered ring $R$ (with the
+$\mathfrak{m}$-adic topology).
+We have morphisms for each $i$,
+\[ \mathfrak{m}^i/\mathfrak{m}^{i +1} \otimes_{R/\mathfrak{m}}
+M/\mathfrak{m}M \to \mathfrak{m}^i M/\mathfrak{m}^{i+1} M \]
+that induce map
+\[ \gr(R) \otimes_{R/\mathfrak{m}} M/\mathfrak{m}M \to \gr(M). \]
+
+If $M$ is flat over
+\end{comment}
+
+\subsection{Generalizations of the local and infinitesimal criteria}
+In the previous subsections, we obtained results that gave criteria for when,
+given a local homomorphism of noetherian local rings $(R, \mathfrak{m}) \to
+(S, \mathfrak{n})$, a finitely generated $S$-module was $R$-flat.
+These criteria generally were related to the $\tor$ groups of the module with
+respect to $R/\mathfrak{m}$.
+We are now interested in generalizing the above results to the setting where
+$\mathfrak{m}$ is replaced by an ideal that \emph{maps into the Jacobson radical of $S$.}
+In other words,
+\[ \phi: R \to S \]
+will be a homomorphism of noetherian rings, and $J \subset R$ will be an ideal
+such that $\phi(J)$ is contained in every maximal ideal of $S$.
+
+Ideally, we are aiming for results of the following type:
+\begin{theorem}[Generalized local criterion for flatness] \label{localcritg}
+Let $\phi: R \to S$ be a morphism of noetherian rings, $J \subset R$ an ideal
+with $\phi(J) $ contained in the Jacobson radical of $S$.
+Let $M$ be a finitely generated $S$-module. Then $M$ is $R$-flat if and only if
+$M /JM$ is $R/J$-flat and $\tor_1^R(R/J, M) = 0$.
+\end{theorem}
+
+Note that this is a generalization of \cref{localcrit}. In that case, $R/J$ was
+a field and the $R/J$-flatness of $M/JM$ was automatic.
+One key step in the proof of \cref{localcrit} was to go from the hypothesis
+that $\tor_1(M, k) = 0$ to $\tor_1(M, N) =0 $ whenever $N$ was an $R$-module of
+\emph{finite length.}
+We now want to do the same in this generalized case; the analogy would be
+that, under the hypotheses of \cref{localcritg}, we would like to conclude
+that $\tor_1^R(M, N) = 0$ whenever $N$ is a finitely generated $R$-module
+\emph{annihilated by $I$}.
+This is not quite as obvious because we cannot generally find a filtration on
+$N$ whose successive quotients are $R/J$ (unlike in the case where $J$ was
+maximal).
+Therefore we shall need two lemmas.
+
+\begin{remark}
+One situation where the strong form of the local criterion, \cref{localcritg},
+is used is in Grothendieck's proof (cf. EGA IV-11, \cite{EGA}) that the locus of points where a coherent
+sheaf is flat is open (in commutative algebra language, if $A$ is noetherian
+and $M$ finitely generated over a finitely generated $A$-algebra $B$, then the
+set of primes $\mathfrak{q} \in \spec B$ such that $M_{\mathfrak{q}}$ is
+$A$-flat is open in $\spec B$).
+\end{remark}
+
+\begin{lemma}[Serre] \label{serrelemma}
+Suppose $R$ is a ring, $S$ an $R$-algebra, and $M$ an $S$-module.
+Then the following are equivalent:
+\begin{enumerate}
+\item $M \otimes_R S$ is $S$-flat and $\tor_1^R(M, S) = 0$.
+\item $\tor_1^R(M, N) = 0$ whenever $N$ is any $S$-module.
+\end{enumerate}
+\end{lemma}
+We follow \cite{SGA1}.
+\begin{proof}
+Let $P$ be an $S$-module (considered as fixed), and $Q$ any (variable) $R$-module.
+Recall that there is a homology spectral sequence
+\[ \tor_p^S(\tor_q^R(Q, S), P) \implies \tor_{p+q}^R(Q,P). \]
+Recall that this is the Grothendieck spectral sequence of the composite functors
+\[ Q \mapsto Q \otimes_R S, \quad Q' \mapsto Q' \otimes_S P \]
+because
+\[ (Q \otimes_R S) \otimes_S P \simeq Q \otimes_R P. \]
+\add{This, and generalities on spectral sequences, need to be added!}
+From this spectral sequence, it will be relatively easy to deduce
+the result.
+\begin{enumerate}
+\item Suppose $M \otimes_R S$ is $S$-flat and $\tor_1^R(M, S) = 0$.
+We want to show that 2 holds, so let $N$ be any $S$-module.
+Consider the $E_2$ page of the above spectral sequence
+ $\tor_p^S(\tor_q^R(M, S), N) \implies \tor_{p+q}^R(M, N)$.
+ In the terms such that $p+q = 1$, we have the two terms
+$\tor_0^S(\tor_1^R(M, S), N), \tor_1^S(\tor_0^R(M, S),N)$.
+But by hypotheses these are both zero. It follows that $\tor_1^R(M, N) = 0$.
+\item Suppose $\tor_1^R(M, N) = 0$ for each $S$-module $N$.
+Since this is a {homology} spectral sequence, this implies that the
+$E^2_{1,0}$ term vanishes: no differential enters or leaves this term, so
+$E^2_{1,0} = E^\infty_{1,0}$, which is a subquotient of $\tor_1^R(M,N) = 0$.
+In particular $\tor_1^S(M \otimes_R S, N) = 0$ for each $S$-module $N$.
+It follows that $M \otimes_R S$ is $S$-flat.
+Hence the higher terms $\tor_p^S(M \otimes_R S, N) = 0$ as well, so the bottom row of
+the $E_2$ page (except $(0,0)$) is entirely zero. It follows that the
+$E^2_{0,1}$ term vanishes too: since the bottom row vanishes, no nonzero
+differential enters or leaves $E^2_{0,1}$, and $E^\infty_{0,1}$ is trivial.
+This gives that $\tor_1^R(M, S) \otimes_S N = 0$ for every $S$-module $N$,
+which clearly implies $\tor_1^R(M, S) = 0$.
+\end{enumerate}
+\end{proof}
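+Alternatively, one can phrase the argument via the exact sequence of
+low-degree terms of this spectral sequence: for any $S$-module $P$,
+\[ \tor_2^R(M, P) \to \tor_2^S(M \otimes_R S, P) \to \tor_1^R(M, S)
+\otimes_S P \to \tor_1^R(M, P) \to \tor_1^S(M \otimes_R S, P) \to 0 \]
+is exact, and both implications of the lemma can be read off from it.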
+
+As a result, we shall be able to deduce the result alluded to in the motivation
+following the statement of \cref{localcritg}.
+
+\begin{lemma}
+Let $R$ be a noetherian ring, $J \subset R$ an ideal, $M$ an $R$-module. Then TFAE:
+\begin{enumerate}
+\item $\tor_1^R(M, R/J) = 0$ and $M/JM$ is $R/J$-flat.
+\item $\tor_1^R(M, N) = 0$ for any finitely generated $R$-module $N$
+annihilated by a power of $J$.
+\end{enumerate}
+\end{lemma}
+\begin{proof}
+This is immediate from \cref{serrelemma}, once one notes that any $N$ as in the
+statement admits a finite filtration whose successive quotients are annihilated
+by $J$.
+\end{proof}
+\begin{proof}[Proof of \cref{localcritg}]
+Only one direction is nontrivial, so suppose $M$ is a finitely generated
+$S$-module, with $M/JM$ flat over $R/J$ and $\tor_1^R(M, R/J) = 0$.
+We know by the lemma that $\tor_1^R(M, N) = 0$ whenever $N$ is finitely
+generated and annihilated by a power of $J$.
+
+
+So as to avoid repeating the same argument over and over, we encapsulate it in
+the following lemma.
+\begin{lemma} \label{flatlemma} Let the hypotheses be as in \cref{localcritg}.
+Suppose for every ideal $I \subset R$, and every $t \in \mathbb{N}$, the map
+\[ I/I \cap J^t \otimes M \to M/J^t M \]
+is an injection. Then $M$ is $R$-flat.
+\end{lemma}
+\begin{proof}
+Indeed, as before, the kernel of $I \otimes_R M \to M$ lies inside the image of $(I \cap J^t)
+\otimes M \to I \otimes_R M$ for \emph{every} $t$. By the Artin-Rees lemma, this
+image is contained in $J^{t-c}(I \otimes_R M)$ for some constant $c$; since the Krull
+intersection theorem gives $\bigcap_s J^s(I \otimes_R M) = \{0\}$, it follows that this kernel is zero.
+\end{proof}
+
+It is now easy to finish the proof. Indeed, we can verify the hypotheses of the
+lemma by noting that
+\[ I/(I \cap J^t) \otimes M \to M/J^t M \]
+is obtained by tensoring with $M$ the exact sequence
+\[ 0 \to I/(I \cap J^t) \to R/J^t \to R/(I + J^t) \to 0. \]
+Since $R/(I + J^t)$ is finitely generated and annihilated by $J^t$, we have
+$\tor_1^R(M, R/(I + J^t)) = 0$, so the map as in the lemma is
+an injection, and so we are done.
+\end{proof}
+
+The reader can similarly formulate a version of the infinitesimal criterion in
+this more general case using \cref{flatlemma} and the argument in
+\cref{infcriterion}. (In fact, the spectral sequence argument of this section
+is not necessary.) We shall not state it here, as it will appear as a
+component of \cref{bigflatcriterion}. We leave the details of the proof to the reader.
+
+\subsection{The final statement of the flatness criterion}
+
+We shall now bundle the various criteria for flatness into one big result,
+following \cite{SGA1}:
+
+\begin{theorem} \label{bigflatcriterion}
+Let $A, B$ be noetherian rings, $\phi: A \to B$ a morphism making $B$ into an
+$A$-algebra. Let $I$ be an ideal of $A$ such that $\phi(I)$ is contained in the
+Jacobson radical of $B$.
+Let $M$ be a finitely generated $B$-module.
+Then the following are equivalent:
+\begin{enumerate}
+\item $M$ is $A$-flat.
+\item (Local criterion) $M/IM$ is $A/I$-flat and $\tor_1^A(M, A/I) = 0$.
+\item (Infinitesimal criterion) $M/I^n M$ is $A/I^n$-flat for each $n$.
+\item (Associated graded criterion) $M/IM$ is $A/I$-flat and $M/IM \otimes_{A/I} I^n/I^{n+1} \to I^n
+M/I^{n+1}M$ is an isomorphism for each $n$.
+\end{enumerate}
+\end{theorem}
+
+The last criterion can be phrased as saying that the $I$-adic \emph{associated
+graded} of $M$ is determined by $M/IM$.
+\begin{proof}
+We have already proved that the first three are equivalent. It is easy to see
+that flatness of $M$ implies that
+\begin{equation} \label{flatwantiso} M/IM \otimes_{A/I} I^n/I^{n+1} \to I^n
+M/I^{n+1}M \end{equation}
+is an isomorphism for each $n$.
+Indeed, this easily comes out to be the quotient of $M \otimes_A I^n$ by the
+image of $M \otimes_A I^{n+1}$, which is $I^n M/I^{n+1}M$ since the map $M
+\otimes_A I^n \to I^n M$ is an isomorphism.
+Now we need to show that this last condition implies flatness.
+To do this, we may (in view of the infinitesimal criterion) assume that $I$ is
+\emph{nilpotent}, by base-changing to $A/I^n$.
+We are then reduced to showing that $\tor_1^A(M, A/I) = 0$ (by the local
+criterion).
+Then we are, finally, reduced to showing:
+
+\begin{lemma}
+Let $A$ be a ring, $I \subset A$ be a nilpotent ideal, and $M$ any $A$-module.
+If \eqref{flatwantiso} is an isomorphism for each $n$, then
+$\tor_1^A(M, A/I) = 0$.
+\end{lemma}
+\begin{proof}
+This is equivalent to the assertion, by a diagram chase, that
+\[ I \otimes_A M \to M \]
+is an injection.
+We shall show more generally that $I^n \otimes_A M \to M$ is an injection for
+each $n$. When $n \gg 0$, this is immediate, $I$ being nilpotent. So we can use
+descending induction on $n$.
+
+Suppose $I^{n+1} \otimes_A M \to I^{n+1}M$ is an isomorphism.
+Consider the diagram
+\[ \xymatrix{
+& I^{n+1} \otimes_A M \ar[r] \ar[d] & I^n \otimes_A M \ar[r] \ar[d] &
+I^n/I^{n+1} \otimes_A M \ar[d] \ar[r] &
+0 \\
+0 \ar[r] & I^{n+1}M \ar[r] & I^n M \ar[r] & I^nM/I^{n+1}M \ar[r] & 0.
+}\]
+The left vertical arrow is an isomorphism by the inductive assumption, and the
+right one by hypothesis. A diagram chase then shows that the middle
+vertical arrow is an isomorphism as well. This completes the inductive step.
+\end{proof}
+\end{proof}
+
+Here is an example of the above techniques:
+\begin{proposition}
+\label{fiberwiseflat}
+Let $(A, \mathfrak{m}), (B, \mathfrak{n}), (C, \mathfrak{n}')$ be noetherian
+local rings. Suppose given a commutative diagram of local homomorphisms
+\[ \xymatrix{
+B \ar[rr] & & C \\
+& A \ar[ru] \ar[lu].
+}\]
+Suppose $B, C$ are flat $A$-algebras, and $B/\mathfrak{m}B \to C/\mathfrak{m}C$
+is a flat morphism. Then $B \to C$ is flat.
+\end{proposition}
+Geometrically, this means that flatness can be checked fiberwise if both
+objects are flat over the base.
+This will be a useful technical fact.
+\begin{proof}
+We will use the associated graded criterion for flatness with the ideal
+$I = \mathfrak{m}B \subset B$. (Note that we are \emph{not} using the criterion
+with the maximal ideal here!) Namely, we shall show
+that
+\begin{equation} \label{monkey} I^n/I^{n+1} \otimes_{B/I} C/IC \to I^n
+C/I^{n+1}C \end{equation}
+is an isomorphism. By \cref{bigflatcriterion}, this will do it. Now we have:
+\begin{align*}
+ I^n/I^{n+1} \otimes_{B/I} C/IC & \simeq
+\mathfrak{m}^nB/\mathfrak{m}^{n+1}B \otimes_{B/\mathfrak{m}B}
+C/\mathfrak{m}C \\ & \simeq
+(\mathfrak{m}^n/\mathfrak{m}^{n+1})\otimes_{A} B/\mathfrak{m}B \otimes_B
+C/\mathfrak{m}C \\
+& \simeq (\mathfrak{m}^n/\mathfrak{m}^{n+1})\otimes_{A} B \otimes_B
+C/\mathfrak{m}C \\
+& \simeq (\mathfrak{m}^n/\mathfrak{m}^{n+1})\otimes_{A} C/\mathfrak{m}C \\
+& \simeq \mathfrak{m}^nC/\mathfrak{m}^{n+1} C \simeq I^n C/I^{n+1}C.
+\end{align*}
+In this chain of equalities, we have used the fact that $B, C$ were flat over
+$A$, so their associated gradeds with respect to $\mathfrak{m} \subset A$
+behave nicely. It follows that \eqref{monkey} is an isomorphism, completing the
+proof.
+\end{proof}
+
+\subsection{Flatness over regular local rings}
+
+Here we shall prove a result that implies geometrically, for instance, that a
+finite morphism between smooth varieties is always flat.
+
+\begin{theorem}[``Miracle'' flatness theorem]
+Let $(A, \mathfrak{m})$ be a regular local (noetherian) ring. Let $(B,
+\mathfrak{n})$ be a Cohen-Macaulay, local $A$-algebra such that
+\[ \dim B = \dim A + \dim B/\mathfrak{m}B . \]
+Then $B$ is flat over $A$.
+\end{theorem}
+Recall that the \emph{inequality} $\dim B \leq \dim A + \dim B/\mathfrak{m}B$
+always holds for any local homomorphism of noetherian local rings (\cref{}),
+and that equality holds whenever $B$ is flat over $A$.
+The theorem is thus a partial converse.
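+
+To make the theorem concrete, here is a simple pair of examples over a field
+$k$; the rings are chosen purely for illustration.
+\begin{example}
+Let $A = k[x]_{(x)}$ and $B = A[y]/(y^2 - x)$. Then $B$ is local with maximal
+ideal $(y)$ (note that $x = y^2$ in $B$), hence regular and in particular
+Cohen-Macaulay. Here $\dim B = 1$, $\dim A = 1$, and the fiber ring
+$B/\mathfrak{m}B = k[y]/(y^2)$ is zero-dimensional, so the dimension
+hypothesis holds; and indeed $B$ is free over $A$ with basis $\left\{1,
+y\right\}$. By contrast, if $A = k[x,y]_{(x,y)}$ and $B = A/(x)$, then $\dim B
+= 1$ while $\dim A + \dim B/\mathfrak{m}B = 2$; correspondingly, $B$ is not
+flat over $A$, being a torsion $A$-module.
+\end{example}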
+
+\begin{proof}
+We shall work by induction on $\dim A$. If $\dim A = 0$, then $A$ is a field,
+and every module over a field is flat, so there is nothing to prove.
+Otherwise, let $x \in \mathfrak{m} - \mathfrak{m}^2$; since $A$ is regular,
+$x$ is a non-zerodivisor and the first element in a regular system of parameters.
+We are going to show that $(A/(x), B/(x))$ satisfies the same hypotheses.
+Indeed, note that
+\[ \dim B/(x) \leq \dim A/(x) + \dim B/\mathfrak{m}B \]
+by the usual inequality. Since $\dim A/(x) = \dim A - 1$,
+we find that quotienting by $x$ drops the dimension of $B$ by at least one: that is,
+$\dim B/(x) \leq \dim B - 1$. By the principal ideal theorem, we have equality,
+\[ \dim B/(x) = \dim B - 1. \]
+
+The claim is that $x$ is a non-zerodivisor in $B$, so that we can
+argue by induction.
+Indeed, $B$ is \emph{Cohen-Macaulay}, so any zero-divisor in $B$ lies in a
+\emph{minimal} prime $\mathfrak{p}$ (all associated primes of $B$ are
+minimal), and such a prime satisfies $\dim B/\mathfrak{p} = \dim B$; thus
+quotienting by a zero-divisor would not bring down the dimension,
+contradicting $\dim B/(x) = \dim B - 1$. So $x$ is a
+nonzerodivisor in $B$.
+
+In other words, we have found $x \in A$ which is both $A$-regular and
+$B$-regular (i.e. a nonzerodivisor on both), and such that the hypotheses of the theorem apply to the pair
+$(A/(x), B/(x))$. It follows that $B/(x)$ is flat over $A/(x)$ by the
+inductive hypothesis. The next lemma will complete the proof.
+\end{proof}
+
+
+\begin{lemma}
+Suppose $(A, \mathfrak{m})$ is a noetherian local ring, $(B, \mathfrak{n})$ a
+noetherian local $A$-algebra, and $M$ a finite $B$-module. Suppose $x \in A$ is
+a regular element of $A$ which is also regular on $M$.
+Suppose moreover $M/xM$ is $A/(x)$-flat. Then $M$ is flat over $A$.
+\end{lemma}
+\begin{proof}
+This follows from the associated graded criterion for flatness (see the
+omnibus result \cref{bigflatcriterion}).
+Indeed, if we use the notation of that result, we take $I = (x)$.
+We are given that $M/xM$ is $A/(x)$-flat. So we need to show that
+\[ M/xM \otimes_{A/(x)} (x^n)/(x^{n+1}) \to x^n M/x^{n+1}M \]
+is an isomorphism for each $n$. This holds because
+$(x^n)/(x^{n+1})$ is isomorphic to $A/(x)$ by the regularity of $x$, and the
+multiplication maps
+\[ M \stackrel{x^n}{\to} x^n M, \quad xM \stackrel{x^n}{\to} x^{n+1}M \]
+are isomorphisms by $M$-regularity.
+\end{proof}
+\subsection{Example: construction of flat extensions}
+
+As an illustration of several of the techniques in this chapter and previous
+ones, we shall show, following \cite{EGA} (volume III, chapter 0) that, given a
+local ring and an extension of its residue field, one may find a flat
+extension of this local ring with the bigger field as \emph{its} residue
+field. One application of this is in showing (in the context of Zariski's
+Main Theorem) that the fibers of a birational
+projective morphism of noetherian schemes (where the target is normal) are
+\emph{geometrically} connected.
+We shall later give another application in the theory of \'etale morphisms.
+
+\begin{theorem}
+Let $(R, \mathfrak{m})$ be a noetherian local ring with residue field $k$.
+Suppose $K$ is an extension of $k$. Then there is a noetherian local
+$R$-algebra $(S,
+\mathfrak{n})$ with residue field $K$ such that $S$ is flat over $R$ and $\mathfrak{n} =
+\mathfrak{m}S$.
+\end{theorem}
+
+\begin{proof}
+Let us start by motivating the theorem when $K$ is generated over $k$ by
+\emph{one} element.
+This case can be handled directly, but the general case will require a
+somewhat tricky passage to the limit.
+There are two cases.
+
+\begin{enumerate}
+\item
+First, suppose $K = k(t)$ for $t \in K$ \emph{transcendental} over $k$. In
+this case, we will take $S$ to be a suitable localization of $R[t]$. Namely,
+we consider the prime\footnote{It is prime because the quotient is the domain
+$k[t]$.} ideal $\mathfrak{m} R[t] \subset R[t]$, and let
+$S = (R[t])_{\mathfrak{m} R[t]}$.
+Then $S$ is clearly noetherian and local, and moreover $\mathfrak{m}S$ is the
+maximal ideal of $S$. The residue field of $S$ is $S/\mathfrak{m}S $, which is
+easily seen to be the quotient field of $R[t]/\mathfrak{m}R[t] = k[t]$, and is
+thus isomorphic to $K$. Moreover, as a localization of a polynomial ring, $S$
+is flat over $R$.
+Thus we have handled the case of a purely transcendental extension generated
+by one element.
+
+\item
+Let us now suppose $K = k(a)$ for $a \in K$ \emph{algebraic} over $k$. Then
+$a$ satisfies a monic irreducible polynomial $\overline{p}(T)$ with coefficients in $k$.
+We lift $\overline{p}$ to a monic polynomial $p(T) \in R[T]$. The claim is
+that $S = R[T]/(p(T))$ will then suffice.
+
+Indeed, $S$ is clearly flat over $R$ (in fact, it is free of rank $\deg p$).
+As it is finite over $R$, $S$ is noetherian. Moreover, $S/\mathfrak{m}S = k[T]/
+(\overline{p}(T)) \simeq K$. It follows that $\mathfrak{m}S \subset S$ is a maximal ideal
+and that the residue field is $K$. Since any maximal ideal of $S$ contains
+$\mathfrak{m}S$ by Nakayama,\footnote{\add{citation needed}} we see that $S$
+is local as well. Thus we have shown that $S$ satisfies all the conditions we
+want.
+\end{enumerate}
+
+So we have proved the theorem when $K$ is generated by one element over $k$.
+In general, we can iterate this procedure finitely many times, so that the
+assertion is clear when $K$ is a finitely generated extension of $k$.
+Extending to infinitely generated extensions is trickier.
+
+Let us first argue that we can write $K/k$ as a ``transfinite limit'' of
+monogenic extensions. Consider the set of well-ordered collections
+$\mathcal{C}'$ of subfields between $k$ and $K$ (containing $k$) such that if $L \in
+\mathcal{C}'$ has an immediate predecessor $L'$, then $L/L'$ is generated by
+one element. First, such collections $\mathcal{C}'$ clearly exist; we can take
+the one consisting only of $k$. The set of such collections is partially
+ordered by declaring $\mathcal{C}_1' \leq \mathcal{C}_2'$ when $\mathcal{C}_1'$
+is an initial segment of $\mathcal{C}_2'$; with this ordering, the union of a
+chain of collections is still well-ordered, so every chain has an upper bound.
+By Zorn's lemma, there is a \emph{maximal} such collection of subfields, which
+we now call $\mathcal{C}$.
+
+The claim is that $\mathcal{C}$ has a maximal field, which is $K$. Indeed, if
+it had no maximal element, we could adjoin the union $\bigcup_{F \in
+\mathcal{C}} F$ to $\mathcal{C}$ and make $\mathcal{C}$ bigger, contradicting
+maximality. If this maximal field of $\mathcal{C}$ were not $K$, then we could add another
+element to this maximal subfield and get a bigger collection than
+$\mathcal{C}$, contradiction.
+
+Thus we have a set of fields $K_\alpha$ (with the index $\alpha$
+ranging over a well-ordered set) between $k$ and $K$,
+such that if $\alpha$ has a successor $\alpha'$, then
+$K_{\alpha'}$ is generated by one element over $K_\alpha$. Moreover $K$ is the
+largest of the $K_\alpha$, and $k$ is the smallest.
+
+We are now going to define a collection of rings $R_\alpha$ by transfinite
+induction on $\alpha$. We start the induction with $R_0 = R$ (where $0$ is the
+smallest allowed $\alpha$). The inductive hypothesis that we will want to
+maintain is that $R_\alpha$ is a noetherian local ring with maximal ideal
+$\mathfrak{m}_\alpha$, flat over $R$ and
+satisfying $\mathfrak{m} R_\alpha = \mathfrak{m}_\alpha$; we require,
+moreover, that the residue field of $R_\alpha$ be $K_\alpha$. Thus if we can
+do this at each step, we will be able to work up to $K$ and get the ring $S$
+that we want.
+We are, moreover, going to construct the $R_\alpha$ such that whenever $\beta <
+\alpha$, $R_\alpha$ is an $R_\beta$-algebra.
+
+Let us assume that $R_\beta$ has been defined for all $\beta < \alpha$ and
+satisfies the conditions. Then
+we want to define $R_\alpha$ in an appropriate way. If we can do this, then we
+will have proved the result.
+There are two cases:
+\begin{enumerate}
+\item $\alpha$ has an immediate predecessor $\alpha_{pre} $. In this case, we
+can define $R_\alpha$ from $R_{\alpha_{pre}}$ as above (because
+$K_\alpha/K_{\alpha_{pre}}$ is monogenic).
+\item $\alpha$ has no immediate predecessor. Then we define $R_\alpha =
+\varinjlim_{\beta < \alpha} R_\beta$. The following lemma will show that
+$R_\alpha$ satisfies the appropriate hypotheses.
+\end{enumerate}
+This completes the proof, modulo \cref{indlimnoetherianlocal}.
+\end{proof}
+
+We shall need the following lemma to see that we preserve noetherianness when
+we pass to the limit.
+\begin{lemma}\label{indlimnoetherianlocal}
+Suppose given an inductive system $\left\{(A_\alpha,
+\mathfrak{m}_{\alpha})\right\}$ of noetherian
+rings and flat local homomorphisms, starting with $A_0$.
+Suppose moreover that $\mathfrak{m}_{\alpha} A_{\beta} = \mathfrak{m}_{\beta}$
+whenever $\alpha < \beta$.
+
+Then $A = \varinjlim A_\alpha$ is a
+noetherian local ring, flat over each $A_\alpha$. Moreover, if $\mathfrak{m} \subset A$
+is the maximal ideal, then $\mathfrak{m}_\alpha A = \mathfrak{m}$. The residue
+field of $A$ is $\varinjlim A_\alpha/\mathfrak{m}_\alpha$.
+\end{lemma}
+\begin{proof}
+First, it is clear that $A$ is a local ring (\cref{} \add{reference!}) with
+maximal ideal equal to $\mathfrak{m}_\alpha A$ for any $\alpha $ in the
+indexing set, and that $A$ has the appropriate residue field. Since filtered
+colimits preserve flatness, flatness of $A$ over each $A_\alpha$ is also clear.
+We need to show that $A$ is noetherian; this is the crux of the lemma.
+
+To prove that $A$ is noetherian, we are going to show that its
+$\mathfrak{m}$-adic completion $\hat{A}$ is noetherian. Fortunately, we have a
+convenient criterion for this. If $\hat{\mathfrak{m}}=
+\mathfrak{m}\hat{A}$, then $\hat{A}$ is complete with respect to the
+$\hat{\mathfrak{m}}$-adic topology. So if we show that
+$\hat{A}/\hat{\mathfrak{m}}$ is noetherian and
+$\hat{\mathfrak{m}}/\hat{\mathfrak{m}}^2$ is a finitely generated
+$\hat{A}$-module, we will have shown that $\hat{A}$ is noetherian by
+\cref{completenoetherian}.
+
+But $\hat{A}/\hat{\mathfrak{m}}$ is a field, so obviously noetherian.
+Also, $\hat{\mathfrak{m}}/\hat{\mathfrak{m}}^2 = \mathfrak{m}/\mathfrak{m}^2$,
+and by flatness of $A$, this is
+\[ A \otimes_{A_\alpha} \mathfrak{m}_\alpha/\mathfrak{m}_\alpha^2 \]
+for any $\alpha$. Since $A_\alpha$ is noetherian, we see that this is finitely
+generated. The criterion \cref{completenoetherian} now shows that the completion $\hat{A}$ is
+noetherian.
+
+Finally, we need to deduce that $A$ is itself noetherian.
+To do this,
+we shall show that $\hat{A}$ is faithfully flat over $A$. Since noetherianness
+``descends'' under faithfully flat extensions (\add{citation needed}), this
+will be enough. It suffices to show that $\hat{A}$ is \emph{flat} over each
+$A_\alpha$. For this, we use the infinitesimal criterion; we have that
+\[ \hat{A} \otimes_{A_\alpha} A_\alpha/\mathfrak{m}_\alpha^t =
+\hat{A}/\hat{\mathfrak{m}}^t = A/\mathfrak{m}^t = A/A\mathfrak{m}_\alpha^t, \]
+which is flat over $A_\alpha/\mathfrak{m}_\alpha^t$ since $A$ is flat over
+$A_\alpha$.
+
+It follows that $\hat{A}$ is flat over each $A_\alpha$.
+If we want to see that $A \to \hat{A}$ is flat, we let $I \subset A$ be a
+finitely generated
+ideal; we shall prove that $I \otimes_A \hat{A} \to \hat{A}$ is injective
+(which will establish flatness). We know that there is an ideal $I_\alpha \subset A_\alpha$ for some
+$A_\alpha$ such that
+\[ I = I_\alpha A = I_\alpha \otimes_{A_\alpha} A. \]
+Then
+\[ I \otimes_A \hat{A} = I_\alpha \otimes_{A_\alpha} \hat{A} \]
+which injects into $\hat{A}$ as $A_\alpha \to \hat{A}$ is flat. Thus $A \to
+\hat{A}$ is flat; being a local homomorphism of local rings, it is faithfully
+flat, and we conclude as above that $A$ is noetherian.
+
+\begin{comment}
+Let us first show that $A$ is \emph{separated} with respect to the
+$\mathfrak{m}$-adic topology. Fix $x \in A$. Then $x$ lies in the subring
+$A_\alpha$ for some fixed $\alpha$ depending on $\alpha$ (note that $A_\alpha
+\to A$ is injective since a flat morphism of local rings is \emph{faithfully
+flat}). If $x \in \mathfrak{m}^n = A \mathfrak{m}_\alpha^n$, then $x \in
+\mathfrak{m}_\alpha^n$ by faithful flatness and \cref{intideal}.
+So if $x \in \mathfrak{m}^n$ for all $n$, then $x \in \mathfrak{m}_\alpha^n$
+for all $n$; the separatedness of $A_\alpha$ with respect to the
+$\mathfrak{m}_\alpha$-adic topology now shows $x=0$.
+\end{comment}
+
+
+\end{proof}
+
+
+\subsection{Generic flatness}
+
+Suppose given a finitely generated module $M$ over a noetherian \emph{domain} $R$. Then $M
+\otimes_R K(R)$ is a finitely generated free module over the field $K(R)$.
+Since $K(R)$ is the inductive limit $\varinjlim R_f$ as $f$ ranges over $(R -
+\left\{0\right\})/R^*$ and $K(R) \otimes_R M \simeq \varinjlim_{f \in (R -
+\left\{0\right\})/R^*} M_f$, it follows by the general theory of \cref{} that
+there exists $f \in R - \left\{0\right\}$ such that $M_f$ is free over $R_f$.
+
+Here $\spec R_f = D(f) \subset \spec R$ should be thought of as a ``big''
+subset of $\spec R$ (in fact, as one can check, it is \emph{dense} and open).
+So the moral of this argument is that $M$ is ``generically free.'' If we had
+the language of schemes, we could make this more precise.
+But the idea is that localizing at $f$ corresponds to restricting the
+\emph{sheaf} associated to $M$ to $D(f) \subset \spec R$; on this dense open subset, we
+get a free sheaf.
+(The reader not comfortable with such ``finitely presented'' arguments will
+find another one below, that also works more generally.)
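+
+A minimal example of the phenomenon, with module and localizing element chosen
+for illustration:
+\begin{example}
+Let $R = \mathbb{Z}$ and $M = \mathbb{Z}^2 \oplus \mathbb{Z}/15$. Then $M
+\otimes_{\mathbb{Z}} \mathbb{Q} \simeq \mathbb{Q}^2$ is free over $\mathbb{Q}$,
+while $M$ itself is not free. Inverting $f = 15$ kills the torsion summand,
+and $M_f \simeq \mathbb{Z}[1/15]^2$ is free over $R_f = \mathbb{Z}[1/15]$.
+\end{example}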
+
+Now we want to generalize this to the case where $M$ is finitely generated not
+over $R$, but over a finitely generated $R$-algebra. In particular, $M$ could
+itself be a finitely generated $R$-algebra!
+
+\begin{theorem}[Generic freeness]
+Let $S$ be a finitely generated algebra over the noetherian domain $R$, and
+let $M$ be a finitely generated $S$-module. Then there is $f \in R -
+\left\{0\right\}$ such that $M_f$ is a free (in particular, flat) $R$-module.
+\end{theorem}
+\begin{proof} We shall first reduce the result to one about rings instead of
+modules. By Hilbert's basis theorem, we know that $S$ is noetherian.
+By d\'evissage (\cref{devissage}), there is a finite filtration of $M$ by
+$S$-submodules,
+\[ 0 = M_0 \subset M_1 \subset \dots \subset M_k = M \]
+such that the quotients $M_{i+1}/M_i$ are isomorphic to quotients
+$S/\mathfrak{p}_i$ for the $\mathfrak{p}_i \in \spec S$.
+
+Since localization is an exact functor, it will suffice to show that there
+exists an $f$ such that $(S/\mathfrak{p}_i)_f$ is a free $R_f$-module for each
+$i$; if some $f_i$ works for each $i$ separately, we can take $f$ to be their
+product. Indeed, it is clear that if a module admits a finite filtration all of
+whose successive quotients are free, then the module itself is free.
+We may thus even reduce to the case where $M = S/\mathfrak{p}$.
+
+So we are reduced to showing that if we have a finitely generated
+\emph{domain} $T$ over $R$, then there exists $f \in R - \left\{0\right\}$
+such that $T_f$ is a free $R$-module.
+If $R \to T$ is not injective, then the result is obvious (localize at
+something nonzero in the kernel), so we need only handle the case where $R \to
+T$ is a monomorphism.
+
+
+By the Noether normalization theorem, there are $d$ elements of $T \otimes_R K(R)$, which we
+denote by $t_1, \dots, t_d$, which are algebraically independent over $K(R)$
+and such that $T \otimes_R K(R)$ is integral over $K(R)[t_1, \dots, t_d]$.
+(Here $d$ is the transcendence degree of $K(T)/
+K(R)$.)
+If we localize at some highly divisible element, we can assume that $t_1,
+\dots, t_d$ all lie in $T$ itself. \emph{Let us assume that the result for
+domains is true whenever the transcendence degree is $< d$, so that we can
+induct.}
+
+Then we know that $R[t_1, \dots, t_d] \subset T$ is a polynomial ring.
+Moreover, each of the finitely many generators of $T$ over $R$ satisfies a monic polynomial
+equation over $K(R)[t_1, \dots, t_d]$ (by the integrality part of Noether
+normalization). If we localize $R$ at a highly divisible element, we may
+assume that the coefficients of these polynomials belong to $R[t_1, \dots,
+t_d]$.
+We have thus reduced to the following case. $T$ is a finitely generated domain
+over $R$, \emph{integral} over the polynomial ring $R[t_1, \dots, t_d]$. In
+particular, it is a finitely generated module over the polynomial ring $R[t_1,
+\dots, t_d]$.
+Thus we have some $r$ and an exact sequence
+\[ 0 \to R[t_1, \dots, t_d]^r \to T \to Q \to 0, \]
+where $Q$ is a torsion $R[t_1, \dots, t_d]$-module. Since the polynomial
+ring is free, we are reduced to showing that by localizing at a suitable
+element of $R$, we can make $Q$ free.
+
+But now we can do an inductive argument. $Q$ has a finite filtration by
+$T$-modules whose successive
+quotients are isomorphic to $T/\mathfrak{p}$ for nonzero primes
+$\mathfrak{p}$ (nonzero because $Q$ is torsion); these are still domains finitely generated over $R$, but such
+that the associated transcendence degree is \emph{less} than $d$. We have
+already assumed the statement proven for domains where the transcendence
+degree is $< d$. Thus we can
+find a suitable localization that makes all these free, and thus $Q$ free; it
+follows that with this localization, $T$ becomes free too.
+\end{proof}
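+
+The localization in the theorem is genuinely necessary, even in simple cases;
+here is one natural illustration.
+\begin{example}
+Let $R = \mathbb{Z}$ and $S = M = \mathbb{Z}[x]/(2x - 1) \simeq
+\mathbb{Z}[1/2]$, a finitely generated $R$-algebra. Then $M$ is flat but not
+free as a $\mathbb{Z}$-module. However, after inverting $f = 2$ we get $M_f =
+\mathbb{Z}[1/2]$, which is free of rank one over $R_f = \mathbb{Z}[1/2]$.
+\end{example}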
+
+
+
diff --git a/books/cring/foundations.tex b/books/cring/foundations.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bb9c7c45fa4ba8fb0caadc0d031b635dd5bb24a4
--- /dev/null
+++ b/books/cring/foundations.tex
@@ -0,0 +1,2193 @@
+
+\chapter{Foundations}
+\label{foundations}
+
+
+The present foundational chapter will introduce the notion of a ring and,
+next, that of a module over a ring. These notions will be the focus of the
+present book. Most of the chapter will be definitions.
+
+We begin with a few historical remarks. Fermat's last theorem states that the
+equation
+ \[ \label{ft} x^n + y^n = z^n \]
+has no nontrivial solutions in the integers, for $n \ge 3$. We could try to
+prove this by factoring the expression on the left hand side. When $n$ is odd,
+we can write
+ \[ (x+y)(x+ \zeta y) (x+ \zeta^2y) \dots (x+ \zeta^{n-1}y) = z^n, \]
+where $\zeta$ is a primitive $n$th root of unity. Unfortunately, the factors
+lie in $\mathbb{Z}[\zeta]$, not the integers $\mathbb{Z}$. Though
+$\mathbb{Z}[\zeta]$ is still a \emph{ring} where we have notions of primes and
+factorization, just as in $\mathbb{Z}$, we will see that prime factorization
+is not always unique in $\mathbb{Z}[\zeta]$. (If it were always unique, then we
+could prove at least one important case of Fermat's last theorem rather easily; see
+the introductory chapter of \cite{Wa97} for an argument.)
+
+For instance, consider the ring
+$\mathbb{Z}[\sqrt{-5}]$ of complex numbers of the form $a + b\sqrt{-5}$, where
+$a, b \in \mathbb{Z}$. Then we have the two factorizations
+ \[ 6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}). \]
+Both of these are factorizations of 6 into irreducible factors, but they
+are fundamentally different.
+
+In part, commutative algebra grew out of the need to understand this failure
+of unique factorization more generally. We shall have more to say on
+factorization in the future, but here we just focus on the formalism.
+The basic definition for studying this problem is that of a \emph{ring}, which we now
+introduce.
+
+\section{Commutative rings and their ideals}
+
+\subsection{Rings}
+
+We shall mostly just work with commutative rings in this book, and consequently
+will just say ``ring'' for one such.
+\begin{definition}
+A \textbf{commutative ring} is a set $R$ with an addition map
+$+ : R \times R \to R$ and a multiplication map $\times : R \times R \to R$
+that satisfy the following conditions.
+
+\begin{enumerate}
+ \item $R$ is an abelian group under addition.
+ \item The multiplication map is commutative, associative, and distributes
+ over addition. This means that $x \times (y \times z) = (x \times y) \times
+ z$, $x \times (y+z) = x \times y + x\times z$, and $x \times y = y \times x$.
+ \item There is a \textbf{unit} (or \textbf{identity element}), denoted by
+ $1$, such that $1 \times x = x$ for all $x \in R$.
+\end{enumerate}
+We shall typically write $xy$ for $x \times y$.
+
+Given a ring, a \textbf{subring} is a subset that contains the identity
+element and is closed under addition and multiplication.
+\end{definition}
+
+A \emph{noncommutative} (i.e. not necessarily commutative) ring is one
+satisfying the above conditions, except possibly for the commutativity
+requirement $xy = yx$. For instance, the ring of
+$2$-by-$2$ matrices over $\mathbb{C}$ is noncommutative. We shall not work too much with noncommutative rings in
+the sequel, though many of the basic results (e.g. on modules) do generalize.
+
+
+\begin{example}
+$\mathbb{Z}$ is the simplest example of a ring.
+\end{example}
+
+\begin{exercise}\label{polynomial} Let $R$ be a commutative ring.
+Show that the set of polynomials in one variable over $R$ is a commutative
+ring $R[x]$. Give a rigorous definition of this.
+\end{exercise}
+
+\begin{example}
+For any ring $R$, we can consider the polynomial ring $R[x_1, \ldots, x_n]$
+which consists of the polynomials in $n$ variables with coefficients in $R$.
+This can be defined inductively as $(R[x_1, \dots, x_{n-1}])[x_n]$, where the
+procedure of adjoining a single variable comes from the previous
+\cref{polynomial}.
+\end{example}
+
+We shall see a more general form of this procedure in \cref{groupring}.
+
+
+\begin{exercise}
+If $R$ is a commutative ring, recall that an \textbf{invertible element} (or, somewhat
+confusingly, a \textbf{unit}) $u \in R$ is an element such
+that there exists $v \in R$ with $uv = 1$. Prove that $v$ is necessarily
+unique.
+\end{exercise}
+
+\begin{exercise} \label{ringoffns}
+Let $X$ be a set and $R$ a ring. Show that the set $R^X$ of functions $f:X \to R$ is a
+ring under pointwise addition and multiplication. \end{exercise}
+
+\subsection{The category of rings}
+The class of rings forms a category. Its morphisms are called ring homomorphisms.
+
+
+\begin{definition}
+A \textbf{ring homomorphism} between two rings $R$ and $S$ is a map
+$f : R \to S$ that respects addition and multiplication. That is,
+
+\begin{enumerate}
+ \item $f(1_R) = 1_S$, where $1_R$ and $1_S$ are the respective identity
+ elements.
+ \item $f(a + b) = f(a) + f(b)$ for $a, b \in R$.
+ \item $f(ab) = f(a)f(b)$ for $a, b \in R$.
+\end{enumerate}
+There is thus a \emph{category} $\mathbf{Ring}$ whose objects are commutative
+rings and whose morphisms are ring-homomorphisms.
+\end{definition}
+
+The philosophy of Grothendieck, as expounded in his EGA \cite{EGA}, is that one should
+always do things in a relative context. This means that instead of working
+with objects, one should work with \emph{morphisms} of objects. Motivated by
+this, we introduce:
+
+\begin{definition}
+Given a ring $A$, an \textbf{$A$-algebra} is a ring $R$ together with a
+morphism of rings (a \textbf{structure morphism}) $A \to R$. There is a category of $A$-algebras, where a
+morphism between $A$-algebras is a ring-homomorphism that is required to commute with the structure
+morphisms.
+\end{definition}
+
+So if $R$ is an $A$-algebra, then $R$ is not only a ring, but there is a way
+to multiply elements of $R$ by elements of $A$ (namely, to multiply $a \in A$
+with $r \in R$, take the image of $a $ in $R$, and multiply that by $r$).
+For instance, any ring is an algebra over any subring.
+
+We can think of an $A$-algebra as an arrow $A \to R$, and a morphism from $A
+\to R$ to $A \to S$ as a commutative diagram
+\[ \xymatrix{
+R \ar[rr] & & S \\
+& \ar[lu] A \ar[ru]
+}\]
+This is a special case of the \emph{undercategory} construction.
+
+If $B$ is an $A$-algebra and $C$ a $B$-algebra, then $C$ is an $A$-algebra in a
+natural way. Namely, by assumption we are given morphisms of rings $A \to B$
+and $B \to C$, so composing them gives the structure morphism $A \to C$ of $C$
+as an $A$-algebra.
+
+
+\begin{example}
+Every ring is a $\mathbb{Z}$-algebra in a natural and unique way. There is a
+unique map (of rings) $\mathbb{Z} \to R$ for any ring $R$ because a
+ring-homomorphism is required to preserve the identity.
+In fact, $\mathbb{Z}$ is the \emph{initial object} in the category of rings:
+this is a restatement of the preceding discussion.
+\end{example}
+
+\begin{example}
+If $R$ is a ring, the polynomial ring $R[x]$ is an $R$-algebra in a natural
+manner. Each element of $R$ is naturally viewed as a ``constant polynomial.''
+\end{example}
+
+\begin{example}
+$\mathbb{C}$ is an $\mathbb{R}$-algebra.
+\end{example}
+
+Here is an example that generalizes the case of the polynomial ring.
+\begin{example}
+\label{groupring}
+If $R$ is a ring and $G$ a commutative monoid,\footnote{That is, there is a
+commutative multiplication on $G$ with an identity element, but not
+necessarily with inverses.} then the set
+$R[G]$ of formal finite sums $\sum r_i g_i$ with $r_i \in R, g_i \in G$ is a
+commutative ring, called the \textbf{monoid ring} or \textbf{group ring} when
+$G$ is a group.
+Alternatively, we can think of elements of $R[G]$ as infinite sums $\sum_{g \in
+G} r_g g$ with $R$-coefficients, such that almost all the $r_g$ are zero.
+We can define the multiplication law such that
+\[ \left(\sum r_g g\right)\left( \sum s_g g\right) =
+\sum_h \left( \sum_{g g' = h} r_g s_{g'} \right) h.
+\]
+This process is called \emph{convolution.} We can think of the multiplication
+law as extending the monoid multiplication law (because the product of the
+ring-elements corresponding to $g, g'$ is the ring element corresponding to
+$gg' \in G$).
+
+The case $G =
+\mathbb{Z}_{\geq 0}$ recovers the polynomial ring $R[x]$.
+In some cases, we can extend this notion to formal infinite sums, as in the
+case of the formal power series ring; see \cref{powerseriesring} below.
+\end{example}
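+
+As a concrete instance of this construction:
+\begin{example}
+Take $G = \mathbb{Z}$, written multiplicatively with generator $x$. An element
+of $R[G]$ is then a finite sum $\sum_{n \in \mathbb{Z}} r_n x^n$, so
+$R[\mathbb{Z}]$ is the ring $R[x, x^{-1}]$ of \emph{Laurent polynomials} over
+$R$, and convolution becomes the usual multiplication of Laurent polynomials.
+\end{example}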
+
+\begin{exercise}
+\label{integersinitial}
+The ring $\mathbb{Z}$ is an \emph{initial object} in the category of rings.
+That is, for any ring $R$, there is a \emph{unique} morphism of rings
+$\mathbb{Z} \to R$. We discussed this briefly earlier; show more generally that
+$A$ is the initial object in the category of $A$-algebras for any ring $A$.
+\end{exercise}
+
+\begin{exercise}
+The ring where $0=1$ (the \textbf{zero ring}) is a \emph{final object} in the category of rings. That
+is, every ring admits a unique map to the zero ring.
+\end{exercise}
+
+\begin{exercise}
+\label{corepresentable}
+Let $\mathcal{C}$ be a category and $F: \mathcal{C} \to \mathbf{Sets}$ a
+covariant functor. Recall that $F$ is said to be \textbf{corepresentable} if
+$F$ is naturally isomorphic to $X \mapsto \hom_{\mathcal{C}}(U, X)$ for some
+object $U \in \mathcal{C}$. For instance, the functor sending everything to a
+one-point set is corepresentable if and only if $\mathcal{C}$ admits an
+initial object.
+
+Prove that the functor $\mathbf{Rings} \to \mathbf{Sets}$ assigning to each ring its underlying set is
+corepresentable. (Hint: use a suitable polynomial ring.)
+\end{exercise}
+
+
+The category of rings is both complete and cocomplete. Proving this in full
+will take more work, but here we can describe what
+certain cases (including all limits) look like.
+As we saw in \cref{corepresentable}, the forgetful functor $\mathbf{Rings} \to
+\mathbf{Sets}$ is corepresentable. Thus, if we want to look for limits in the
+category of rings, here is the approach we should follow: we should take the
+limit first of the underlying sets, and then place a ring structure on it in
+some natural way.
+
+\begin{example}[Products]
+The \textbf{product} of two rings $R_1, R_2$ is the set-theoretic product $R_1
+\times R_2$ with the multiplication law $(r_1, r_2)(s_1, s_2) = (r_1 s_1, r_2
+s_2)$. It is easy to see that this is a product in the category of rings. More
+generally, we can easily define the product of any collection of rings.
+\end{example}
+
+To describe the coproduct is more difficult: this will be given by the
+\emph{tensor product} to be developed in the sequel.
+
+\begin{example}[Equalizers]
+Let $f, g: R \rightrightarrows S$ be two ring-homomorphisms. Then we can
+construct the \textbf{equalizer} of $f,g$ as the subring of $R$ consisting of
+elements $x \in R$ such that $f(x) = g(x)$. This is clearly a subring, and one
+sees quickly that it is the equalizer in the category of rings.
+\end{example}
+
+As a result, we find:
+\begin{proposition}
+$\mathbf{Rings}$ is complete.
+\end{proposition}
+
+As we said, we will not yet show that $\mathbf{Rings}$ is cocomplete. But we
+can describe filtered colimits. In fact, filtered colimits will be constructed
+just as in the set-theoretic fashion. That is, the forgetful functor
+$\mathbf{Rings} \to \mathbf{Sets}$ commutes with \emph{filtered} colimits
+(though not with general colimits).
+\begin{example}[Filtered colimits]
+Let $I$ be a filtering category, $F: I \to \mathbf{Rings}$ a functor. We can
+construct $\varinjlim_I F$ as follows. An element is represented by a pair $(x,i)$ with $i
+\in I$ and $x \in F(i)$, modulo equivalence; we say that $(x, i)$ and $(y, j)$
+are equivalent if there is a $k \in I$ with maps $i \to k, j \to k$ sending
+$x,y$ to the same element of the ring $F(k)$.
+
+To multiply $(x, i)$ and $(y,j)$, we find
+some $k \in I$ receiving maps from $i, j$, and replace $x,y$ with elements of
+$F(k)$. Then we multiply those two in $F(k)$. One easily sees that this is a
+well-defined multiplication law that induces a ring structure, and that what we
+have described is in fact the filtered colimit.
+
+\end{example}
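+
+As a concrete instance (ours, not the text's), take the filtered system of
+subrings $F(S) = \mathbb{Z}[1/p : p \in S] \subset \mathbb{Q}$, indexed by
+finite sets $S$ of primes ordered by inclusion; here the transition maps are
+inclusions and the colimit is $\mathbb{Q}$. The Python sketch below multiplies
+two elements $(x, S)$, $(y, T)$ by pushing both to the common index
+$S \cup T$, exactly as in the general construction.
+
+```python
+from fractions import Fraction
+
+def multiply(a, b):
+    """Multiply (x, S) and (y, T) in colim_S Z[1/p : p in S]: push both
+    elements to the common index k = S | T (the transition maps are
+    inclusions, hence do nothing to the values) and multiply in F(k)."""
+    (x, S), (y, T) = a, b
+    k = S | T
+    return (x * y, k)
+
+# (1/2, {2}) * (1/3, {3}) is computed at the stage {2, 3}:
+prod = multiply((Fraction(1, 2), frozenset({2})),
+                (Fraction(1, 3), frozenset({3})))
+```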
+\subsection{Ideals}
+
+An \emph{ideal} in a ring is analogous to a normal subgroup of a
+group. As we shall see, one may quotient by ideals just as one quotients by
+normal subgroups.
+The idea is that one wishes to have a suitable \emph{equivalence relation} on a
+ring $R$ such that the relevant maps (addition and multiplication) factor
+through this equivalence relation. It is easy to check that any such relation
+arises via an ideal.
+
+\begin{definition}
+Let $R$ be a ring. An \textbf{ideal} in $R$ is a subset $I \subset R$ that
+satisfies the following.
+
+\begin{enumerate}
+ \item $0 \in I$.
+ \item If $x, y \in I$, then $x + y \in I$.
+ \item If $x \in I$ and $y \in R$, then $xy \in I$.
+\end{enumerate}
+\end{definition}
+
+There is a simple way of obtaining ideals, which we now describe.
+Given elements $x_1, \ldots, x_n \in R$, we denote by $(x_1, \ldots, x_n) \subset
+R$ the subset of linear combinations $\sum r_i x_i$, where $r_i \in R$. This
+is clearly an ideal, and in fact the smallest one containing all $x_i$. It is
+called the ideal \textbf{generated} by $x_1, \ldots, x_n$. A
+\textbf{principal ideal} $(x)$ is one generated by a single $x \in R$.
+
+\begin{example}
+Ideals generalize the notion of divisibility. Note that
+in $\mathbb{Z}$, the set of elements divisible by $n \in \mathbb{Z}$ forms the
+ideal $I = n\mathbb{Z} = (n)$. We shall see that every ideal in $\mathbb{Z}$ is
+of this form: $\mathbb{Z}$ is a \emph{principal ideal domain.}
+\end{example}
+
+Indeed, one can think of an ideal as axiomatizing the notions that
+``divisibility'' ought to satisfy. Clearly, if two elements are divisible by
+something, then their sum and product should also be divisible by it. More
+generally, if an element is divisible by something, then the product of that
+element with anything else should also be divisible. In general, we will extend
+(in the chapter on Dedekind domains) much of the ordinary arithmetic with
+$\mathbb{Z}$ to arithmetic with \emph{ideals} (e.g. unique factorization).
+
+\begin{example}
+We saw in \cref{ringoffns}
+that if $X$ is a set and $R$ a ring, then the set $R^X$ of functions $X \to R$
+is naturally a ring. If $Y \subset X$ is a subset, then the subset of functions
+vanishing on $Y$ is an ideal.
+\end{example}
+\begin{exercise}
+Show that the ideal $(2, 1 + \sqrt{-5}) \subset \mathbb{Z}[\sqrt{-5}]$ is not
+principal.
+\end{exercise}
+
+\subsection{Operations on ideals}
+
+There are a number of simple operations that one may do with ideals, which we
+now describe.
+
+\begin{definition}
+The sum $I + J$ of two ideals $I, J \subset R$ is defined as the set of sums
+ \[ \left\{ x + y : x \in I, y \in J \right\}. \]
+\end{definition}
+
+\begin{definition}
+The product $IJ$ of two ideals $I, J \subset R$ is defined as the smallest
+ideal containing the products $xy$ for all $x \in I, y \in J$. This is just
+the set
+ \[ \left\{ \sum x_i y_i : x_i \in I, y_i \in J \right\}. \]
+\end{definition}
+
+We leave the basic verification of properties as an exercise:
+\begin{exercise}
+Given ideals $I, J \subset R$, verify the following.
+
+\begin{enumerate}
+ \item $I + J$ is the smallest ideal containing $I$ and $J$.
+ \item $IJ$ is contained in $I$ and $J$.
+ \item $I \cap J$ is an ideal.
+\end{enumerate}
+\end{exercise}
+
+\begin{example}
+In $\mathbb{Z}$, we have the following for any $m, n$.
+
+\begin{enumerate}
+ \item $(m) + (n) = (\gcd\{ m, n \})$,
+ \item $(m)(n) = (mn)$,
+ \item $(m) \cap (n) = (\mathrm{lcm}\{ m, n \})$.
+\end{enumerate}
+\end{example}
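+
+These identities can be tested numerically on a window of integers. The sketch
+below (illustrative only; the helper names are ours) checks the first and
+third identities by comparing membership predicates.
+
+```python
+from math import gcd
+
+def in_ideal(k, n):
+    """Membership in the principal ideal (n) of Z, with (0) = {0}."""
+    return k == 0 if n == 0 else k % n == 0
+
+def check_sum_and_intersection(m, n, bound=40):
+    """Finite check that (m) + (n) = (gcd(m, n)) and that
+    (m) ∩ (n) = (lcm(m, n)), on the window [-bound, bound]."""
+    lcm = abs(m * n) // gcd(m, n)
+    for k in range(-bound, bound + 1):
+        # k lies in (m) + (n) iff k = am + bn for some a, b
+        in_sum = any(in_ideal(k - a * m, n) for a in range(-bound, bound + 1))
+        assert in_sum == in_ideal(k, gcd(m, n))
+        assert (in_ideal(k, m) and in_ideal(k, n)) == in_ideal(k, lcm)
+    return True
+```
+
+The second identity $(m)(n) = (mn)$ is immediate: each generator $xy$ with
+$m \mid x$ and $n \mid y$ is divisible by $mn$, and $mn$ itself is such a
+product.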
+
+\begin{proposition}
+For ideals $I, J, K \subset R$, we have the following.
+
+\begin{enumerate}
+ \item Distributivity: $I(J + K) = IJ + IK$.
+ \item $I \cap (J + K) = I \cap J + I \cap K$ if $I \supset J$ or $I \supset K$.
+ \item If $I + J = R$, $I \cap J = IJ$.
+\end{enumerate}
+
+\begin{proof}
+1 and 2 are clear. For 3, note that $(I + J)(I \cap J) = I(I \cap J)
++ J(I \cap J) \subset IJ$. Since $IJ \subset I \cap J$, the result
+follows.
+\end{proof}
+\end{proposition}
+
+\begin{exercise}
+There is a \emph{contravariant} functor $\mathbf{Rings} \to \mathbf{Sets}$ that
+sends each ring to its set of ideals. Given a map $f: R \to S$ and an ideal $I
+\subset S$, we define an ideal $f^{-1}(I) \subset R$; this defines the
+functoriality.
+This functor is not representable, as it does not send the initial object
+in $\mathbf{Rings} $ to the
+one-element set. We will later use a \emph{subfunctor} of this functor, the
+$\spec$ construction, when we replace ideals with ``prime'' ideals.
+\end{exercise}
+
+
+\subsection{Quotient rings}
+
+We next describe a procedure for producing new rings from old ones.
+If $R$ is a ring and $I \subset R$ an ideal, then the quotient group $R/I$
+is a ring in its own right. If $a+I, b+I$ are two cosets, then the
+multiplication is $(a+I)(b+I) = ab + I$. It is easy to check that this does
+not depend on the coset representatives $a,b$. In other words, as mentioned
+earlier, the arithmetic operations on $R$ \emph{factor} through the equivalence
+relation defined by $I$.
+
+As one easily checks, this yields a multiplication
+\[ R/I \times R/I \to R/I \]
+which is commutative and associative, and
+whose identity element is $1+I$.
+In particular, $R/I$ is a ring under the multiplication $(a+I)(b+I) = ab+I$.
+\begin{definition}
+$R/I$ is called the \textbf{quotient ring} by the ideal $I$.
+\end{definition}
+
+The process is analogous to quotienting a group by a normal subgroup: again,
+the point is that the equivalence relation induced on the algebraic
+structure (the group or the ring) by the subgroup or ideal is
+compatible with the algebraic operations, which thus descend to the quotient.
+
+The
+reduction map $\phi \colon R \to R/I$ is a ring-homomorphism with a
+\emph{universal
+property}.
+Namely, for any ring $B$, there is a map
+ \[ \hom(R/I, B) \to \hom(R, B) \]
+ on the hom-sets
+ by composing with the ring-homomorphism $\phi$; this map is injective and the
+ image consists of all homomorphisms $R \to B$ which vanish on $I$.
+Stated alternatively, to map out of $R/I$ (into some ring $B$) is the same thing as mapping out of
+$R$ while killing the ideal $I \subset R$.
+
+This is best thought through for oneself, but here is the detailed justification.
+Any map $R/I \to B$ pulls back to a map $R \to R/I \to B$
+which annihilates $I$, since $R \to R/I$ annihilates $I$. Conversely, if we have
+a map
+\[ f: R \to B \]
+killing $I$, then we can define $R/I \to B$ by sending $a+I$ to $f(a)$; this is
+uniquely defined since $f$ annihilates $I$.
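+
+As a toy instance (ours): a map out of $\mathbb{Z}$ that kills the ideal $(n)$
+induces a well-defined map out of $\mathbb{Z}/n$. The Python sketch below
+tabulates the induced map on the coset representatives $0, \ldots, n-1$; the
+assertion is exactly the well-definedness condition.
+
+```python
+def induced_map(n, f):
+    """Given a map f out of Z whose value depends only on the residue
+    mod n (i.e. f kills the ideal (n)), return the induced map on Z/n,
+    tabulated on coset representatives 0..n-1."""
+    assert all(f(a) == f(a + n) for a in range(n)), "f must kill (n)"
+    return {a: f(a) for a in range(n)}
+
+# Reduction mod 3 kills (6), so it factors through Z/6:
+fbar = induced_map(6, lambda a: a % 3)
+```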
+
+\begin{exercise}
+ If $R$ is a commutative
+ring, an element $e \in R$ is said to be \textbf{idempotent} if $e^2 =
+e$. Define a covariant functor $\mathbf{Rings} \to \mathbf{Sets}$ sending a
+ring to its idempotents. Prove that it is corepresentable. (Answer: the
+corepresenting object is $\mathbb{Z}[X]/(X - X^2)$.)
+\end{exercise}
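+
+In a small case the corepresentation can be enumerated directly: a ring map
+$\mathbb{Z}[X]/(X - X^2) \to \mathbb{Z}/n$ is determined by the image $e$ of
+$X$, which must satisfy $e^2 = e$. (Illustrative sketch.)
+
+```python
+def idempotents(n):
+    """Idempotents of Z/n, i.e. the images of X under ring maps
+    Z[X]/(X - X^2) -> Z/n: the elements e with e * e = e."""
+    return [e for e in range(n) if e * e % n == e]
+```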
+
+\begin{exercise}
+Show that the functor assigning to each ring the set of elements annihilated
+by 2 is corepresentable.
+\end{exercise}
+
+\begin{exercise}
+If $I \subset J \subset R$, then $J/I$ is an ideal of $R/I$, and there is a
+canonical isomorphism
+\[ (R/I)/(J/I) \simeq R/J. \]
+\end{exercise}
+
+
+
+
+
+
+\subsection{Zerodivisors}
+
+
+Let $R$ be a commutative ring.
+\begin{definition}
+If $r \in R$, then $r$ is called a \textbf{zerodivisor} if there is $s \in R, s
+\neq 0$ with $sr = 0$. Otherwise $r$ is called a \textbf{nonzerodivisor.}
+\end{definition}
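+
+In $\mathbb{Z}/n$ the definition can be tested by brute force; note that $0$
+itself is a zerodivisor in any nonzero ring (take $s = 1$). The sketch below
+is illustrative only.
+
+```python
+def zerodivisors(n):
+    """Zerodivisors of Z/n: classes r with r * s = 0 for some s != 0."""
+    return [r for r in range(n) if any(r * s % n == 0 for s in range(1, n))]
+```
+
+For $n$ prime, only $0$ qualifies; in general the zerodivisors are $0$
+together with the classes not coprime to $n$.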
+
+As an example, we prove a basic result on the zerodivisors in a polynomial ring.
+
+\begin{proposition}
+Let $A=R[x]$. Let $f=a_nx^n+\cdots +a_0\in A$. If there is a non-zero polynomial $g\in
+A$ such that $fg=0$, then there exists $r\in R\smallsetminus\{0\}$ such that $f\cdot
+ r=0$.
+\end{proposition}
+So all the coefficients are zerodivisors.
+\begin{proof}
+ Choose $g$ to be of minimal degree, with leading term $bx^d$. We may assume
+ that $d>0$, for otherwise $g = b \in R\smallsetminus\{0\}$ and we are done.
+ Then $f\cdot b\neq 0$: otherwise $b$ would be a nonzero element of $R$
+ killing $f$, contradicting the minimality of the degree of $g$. We must have
+$a_i g\neq 0$ for some $i$: if $a_i g=0$ for all $i$, then $a_ib=0$ for
+all $i$, and then $fb=0$, a contradiction. Now pick $j$ to be the largest
+ integer such that $a_jg\neq 0$. Then $0=fg=(a_0 + a_1x + \cdots + a_jx^j)g$,
+ and looking at the leading coefficient, we get $a_jb=0$. So
+ $\deg (a_jg) < \deg g$, while $f \cdot (a_jg) = a_j(fg) = 0$ and
+ $a_jg \neq 0$, contradicting the minimality of $g$.
+\end{proof}
+
+\begin{exercise}
+Let $f: M \to N$ be a morphism of $R$-modules and let $K = \ker(f) \subset M$
+be its kernel. Show that a morphism $T \to M$ factors through $K$, as in the
+commutative diagram
+\[\xymatrix{
+& T \ar@{-->}[ld] \ar[d] \\
+K \ar[r] & M \ar[r]^f & N
+}\]
+if and only if the composite $T \to N$ is zero.
+In particular, if we think of the hom-sets as abelian groups (i.e.
+$\mathbb{Z}$-modules)
+\[ \hom_R( T,K) = \ker\left( \hom_R(T, M) \to \hom_R(T, N) \right). \]
+\end{exercise}
+
+In other words, one may think of the kernel as follows. If $X
+\stackrel{f}{\to} Y$ is a morphism, then the kernel $\ker(f)$ is the equalizer
+of $f$ and the zero morphism $X \stackrel{0}{\to} Y$.
+
+\begin{exercise}
+What is the universal property of the cokernel?
+\end{exercise}
+
+\begin{exercise} \label{moduleunderlyingsetrepresentable}
+On the category of modules, the functor assigning to each module $M$ its
+underlying set is corepresentable (cf. \rref{corepresentable}). What
+is the corepresenting object?
+\end{exercise}
+
+We shall now introduce the notions of \emph{direct sum} and \emph{direct
+product}. Let $I$ be a set, and suppose that for each $i \in I$, we are given
+an $R$-module $M_i$.
+
+\begin{definition}
+The \textbf{direct product} $\prod M_i$ is set-theoretically the cartesian product. It is given
+the structure of an $R$-module by addition and multiplication pointwise on
+each factor.
+\end{definition}
+\begin{definition}
+The \textbf{direct sum} $\bigoplus_I M_i$ is the set of elements in the direct
+product such that all but finitely many entries are zero. The direct sum is a
+submodule of the direct product.
+\end{definition}
+
+
+\begin{example} \label{productcoproduct}
+The direct product is a product in the category of modules, and the direct sum
+is a coproduct. This is easy to verify: given maps $f_i: M \to M_i$, we
+get a unique map $f: M \to \prod M_i$ by taking the product in the category
+of sets. The case of a coproduct is dual: given maps $g_i: M_i \to N$, we
+get a map $\bigoplus M_i \to N$ by taking the \emph{sum} $g$ of the $g_i$: on a
+family $(m_i) \in \bigoplus M_i$, we take $g(m_i) = \sum_I g_i(m_i)$; this is
+well-defined as almost all the $m_i$ are zero.
+\end{example}
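+
+The sum map can be written down directly for finitely supported families,
+represented here (our own convention) as dictionaries indexed by $I$; finite
+support is what makes the sum meaningful.
+
+```python
+def coproduct_map(gs, m):
+    """The map from the direct sum of the M_i to N induced by maps
+    g_i: M_i -> N: on a finitely supported family m = {i: m_i},
+    return the finite sum of the images g_i(m_i)."""
+    return sum(gs[i](mi) for i, mi in m.items())
+```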
+
+\cref{productcoproduct} shows that the category of modules over a fixed
+commutative ring has products and coproducts. In fact, the category of modules
+is both complete and cocomplete (see \cref{completecat} for the definition).
+To see this, it suffices (by
+\cref{coprodcoequalsufficeforcocomplete} and its dual) to show that this
+category admits equalizers and coequalizers.
+
+The equalizer of two maps
+\[ M \stackrel{f,g}{\rightrightarrows} N \]
+is easily checked to be the submodule of $M$ consisting of $m \in M$ such that
+$f(m) = g(m)$, or, in other words, the kernel of $f-g$. The coequalizer of these two maps is the quotient module of $N$
+by the submodule $\left\{f(m) - g(m) : m \in M\right\}$, or, in other words,
+the cokernel of $f-g$.
+
+Thus:
+
+\begin{proposition}
+If $R$ is a ring, the category of $R$-modules is complete and cocomplete.
+\end{proposition}
+
+\begin{example}
+Note that limits in the category of $R$-modules are calculated in the same way
+as they are for sets, but colimits are not. That is, the functor from
+$R$-modules to $\mathbf{Sets}$, the forgetful functor, preserves limits but not
+colimits. Indeed, we will see that the forgetful functor is a right adjoint
+(\cref{freeadj}), which implies it preserves limits (by \cref{adjlimits}).
+\end{example}
+
+\subsection{Exactness}
+Finally, we introduce the notion of \emph{exactness}.
+\begin{definition} \label{exactness}
+Let $f: M \to N$ be a morphism of $R$-modules. Suppose $g: N \to P$ is another morphism of
+$R$-modules.
+
+The pair of maps is a \textbf{complex} if $g \circ f = 0: M \to N \to P$.
+This is equivalent to the condition that $\im(f) \subset \ker(g)$.
+
+This complex is \textbf{exact} (or exact at $N$) if $\im(f) = \ker(g)$.
+In other words, anything that is killed when mapped to $P$ actually comes from something in
+$M$.
+
+\end{definition}
+
+
+We shall often write pairs of maps as sequences
+\[ A \stackrel{f}{\to} B \stackrel{g}{\to} C \]
+and say that the sequence is exact if the pair of maps is, as in
+\rref{exactness}. A longer (possibly infinite) sequence of modules
+\[ A_0 \to A_1 \to A_2 \to \dots \]
+will be called a \textbf{complex} if each set of three
+consecutive terms is a complex, and \textbf{exact} if it is exact at each step.
+
+\begin{example}
+The sequence $0 \to A \stackrel{f}{\to} B$ is exact if and only if the map $f$
+is injective. Similarly, $A \stackrel{f}{\to} B \to 0$ is exact if and only if
+$f$ is surjective. Thus, $0 \to A \stackrel{f}{\to} B \to 0$ is exact if and
+only if $f$ is an isomorphism.
+\end{example}
+
+One typically sees this definition applied to sequences of the form
+\[ 0 \to M'\stackrel{f}{ \to} M \stackrel{g}{\to} M'' \to 0, \]
+which, if exact, is called a \textbf{short exact sequence}.
+Exactness here means that $f$ is injective, $g$ is surjective, and the image
+of $f$ is exactly the kernel of $g$. So $M''$ can be thought of as the
+quotient $M/M'$.
+
+\begin{example}
+Conversely, if $M$ is a module and $M' \subset M$ a submodule, then there is a
+short exact sequence
+\[ 0 \to M' \to M \to M/M' \to 0. \]
+So every short exact sequence is of this form.
+\end{example}
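+
+A finite example can be verified mechanically: the sequence
+$0 \to \mathbb{Z}/2 \to \mathbb{Z}/4 \to \mathbb{Z}/2 \to 0$, with first map
+multiplication by $2$ and second map reduction mod $2$, is short exact. The
+Python sketch below (illustrative only) checks injectivity, surjectivity, and
+exactness in the middle.
+
+```python
+def image(f, dom):
+    return {f(x) for x in dom}
+
+def kernel(f, dom):
+    return {x for x in dom if f(x) == 0}
+
+# 0 -> Z/2 --f--> Z/4 --g--> Z/2 -> 0
+f = lambda a: (2 * a) % 4   # injective
+g = lambda b: b % 2         # surjective
+```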
+
+
+Suppose $F$ is a functor from the category of $R$-modules to the
+category of $S$-modules, where $R, S$ are rings. Then:
+
+\begin{definition}
+\begin{enumerate}
+\item $F$ is called \textbf{additive} if $F$ preserves direct sums.
+\item $F$ is called \textbf{exact} if $F$ is additive and preserves exact sequences.
+\item $F$ is called \textbf{left exact} if $F$ is additive and preserves exact sequences of the form
+$0 \to M' \to M \to M''$. Equivalently, $F$ preserves kernels.
+\item $F$ is \textbf{right exact} if $F$ is additive and $F$ preserves exact
+sequences of the form $M' \to M \to M'' \to 0$, i.e. $F$ preserves cokernels.
+\end{enumerate}
+\end{definition}
+
+The reader should note that much of homological algebra can be developed using the more
+general setting of an \emph{abelian category,} which axiomatizes much of the
+standard properties of the category of modules over a ring. Such a
+generalization turns out to be necessary because many natural categories, such
+as the category of chain complexes or the category of sheaves on a topological
+space, are not naturally categories of modules.
+We do not go into this here; cf. \cite{Ma98}.
+
+
+
+A functor $F$ is exact if and only if it is both left and right exact.
+This actually requires proof, though it is not hard. Namely, right-exactness
+implies that $F$ preserves cokernels, and left-exactness implies that $F$
+preserves kernels. $F$
+thus preserves images, as the image of a morphism is the kernel of its cokernel.
+So if
+\[ A \to B \to C \]
+is an exact sequence, then the kernel of the second map is equal to the
+image of the first; we have just seen that both are preserved under $F$.
+
+
+From this, one can check that left-exactness is equivalent to requiring that $F$ preserve
+finite limits (as an additive functor, $F$ automatically preserves products,
+and we have just seen that $F$ is left-exact iff it preserves kernels).
+Similarly, right-exactness is equivalent to requiring that $F$ preserve
+finite colimits.
+So, in \emph{any} category with finite limits and colimits, we can talk about
+right or left exactness of a functor, but the notion is used most often for
+categories with an additive structure (e.g. categories of modules over a ring).
+
+
+
+\begin{exercise}
+Suppose whenever $0 \to A' \to A \to A'' \to 0$ is short exact, then $FA' \to
+FA \to FA'' \to 0$ is exact. Prove that $F$ is right-exact. So we get a
+slightly weaker criterion for right-exactness.
+
+Do the same for left-exact functors.
+\end{exercise}
+
+
+
+
+\subsection{Split exact sequences}
+
+Let $f: A \to B$ be a map of sets which is injective, with $A$ nonempty. Then
+there is a map $g: B \to A$ such that the composite $g \circ f: A
+\stackrel{f}{\to} B \stackrel{g}{\to} A$ is the identity. Namely, we define
+$g$ to be the inverse of $f$ on $f(A)$ and arbitrary on $B-f(A)$.
+Conversely, if $f: A \to B$ admits a map $g: B \to A$ such that $g \circ f
+= 1_A$, then $f$ is injective. This is easy to see, as any $a \in A$ can be
+``recovered'' from $f(a)$ (by applying $g$).
+
+In general, however, this observation does not generalize to arbitrary
+categories.
+
+\begin{definition}
+Let $\mathcal{C}$ be a category. A morphism $A \stackrel{f}{\to} B$ is called a
+\textbf{split injection} if there is $g: B \to A$ with $g \circ f = 1_A$.
+\end{definition}
+
+\begin{exercise}[General nonsense]
+Suppose $f: A \to B$ is a split injection. Show that $f$ is a categorical monomorphism.
+(Idea: the map $\hom(C,A) \to \hom(C,B)$ becomes a split injection of sets
+thanks to $g$.)
+\end{exercise}
+
+Recall that a morphism $f: A \to B$ in a category is a \emph{monomorphism} if,
+for any pair of morphisms $g, g': C \to A$, the equality $f \circ g = f \circ
+g'$ implies $g = g'$.
+
+In the category of sets, we have seen above that \emph{any} monomorphism with
+nonempty domain is a split injection. This is not true in other categories, in
+general.
+
+\begin{exercise}
+Consider the morphism $\mathbb{Z} \to \mathbb{Z}$ given by multiplication by
+2. Show that this is not a split injection: no left inverse $g$ can exist.
+\end{exercise}
+
+We are most interested in the case of modules over a ring.
+
+\begin{proposition}
+A morphism $f: A \to B$ in the category of $R$-modules is a split injection if
+and only if:
+\begin{enumerate}
+\item $f$ is injective.
+\item $f(A)$ is a direct summand in $B$.
+\end{enumerate}
+\end{proposition}
+The second condition means that there is a submodule $B' \subset B$ such that
+$B = B' \oplus f(A)$ (internal direct sum). In other words, $B = B' + f(A)$
+and $B' \cap f(A) = \left\{0\right\}$.
+\begin{proof}
+Suppose the two conditions hold, and we have a module $B'$ which is a
+complement to $f(A)$.
+Then we define a left inverse
+\[ B \stackrel{g}{\to} A \]
+by letting $g|_{f(A)} = f^{-1}$ (note that $f$ becomes an \emph{isomorphism}
+$A \to f(A)$) and $g|_{B'}=0$. It is easy to see that this is indeed a left
+inverse, though in general not a right inverse, as $g$ is likely to be
+non-injective.
+
+Conversely, suppose $f: A \to B$ admits a left inverse $g: B \to A$. The usual
+argument (as for sets) shows that $f$ is injective. The essentially new
+observation is that $f(A) $ is a direct summand in $B$. To define the
+complement, we take $\ker(g) \subset B$.
+It is easy to see (as $g \circ f = 1_A$) that $\ker(g) \cap f(A) =
+\left\{0\right\}$. Moreover, $\ker(g) +f(A)$ fills $B$: given $b \in B$, it is
+easy to check that
+\[ b - f(g(b)) \in \ker(g). \]
+Thus we find that the two conditions are satisfied.
+\end{proof}
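+
+The decomposition used in the proof, $b = f(g(b)) + (b - f(g(b)))$, can be
+seen concretely for the split injection $a \mapsto (a, 0)$ of $\mathbb{Z}$
+into $\mathbb{Z} \oplus \mathbb{Z}$, with left inverse the first projection
+(our own toy example).
+
+```python
+# Split injection f: Z -> Z ⊕ Z with left inverse g = first projection.
+f = lambda a: (a, 0)
+g = lambda b: b[0]
+
+def decompose(b):
+    """Write b = f(g(b)) + k with k in ker(g), as in the proof."""
+    fa = f(g(b))
+    k = (b[0] - fa[0], b[1] - fa[1])
+    return fa, k
+```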
+
+
+
+
+\add{further explanation, exactness of filtered colimits}
+
+
+\subsection{The five lemma}
+
+The five lemma will be a useful tool for us in proving that maps are
+isomorphisms. Often this argument is used in inductive proofs. Namely, we will
+see that often ``long exact sequences'' (extending infinitely in one or both
+directions) arise from short exact sequences in a natural way. In such
+events, the five lemma
+will allow us to prove that certain morphisms are isomorphisms by induction on
+the dimension.
+\begin{theorem}
+Suppose given a commutative diagram
+\[ \xymatrix{
+A \ar[d] \ar[r] & B \ar[d] \ar[r] & C \ar[d] \ar[r] & D \ar[d] \ar[r] & E \ar[d] \\
+A' \ar[r] & B' \ar[r] & C' \ar[r] & D' \ar[r] & E'
+}\]
+such that the rows are exact and the four vertical maps $A \to A', B \to B', D
+\to D', E \to E'$ are isomorphisms. Then $C \to C'$ is an isomorphism.
+\end{theorem}
+
+This is the type of proof that goes by the name of ``diagram-chasing,'' and
+is best thought out visually for oneself, even though we give a complete proof.
+
+\begin{proof}
+We have the diagram
+\[
+\xymatrix{
+A \ar[r]^k \ar[d]^\a & B \ar[r]^l \ar[d]^\b
+ & C \ar[r]^m \ar[d]^\g & D \ar[r]^n \ar[d]^\d & E \ar[d]^\e \\
+F \ar[r]_p & G \ar[r]_q & H \ar[r]_r & I \ar[r]_s & J
+}
+\]
+where the rows are exact at $B, C, D, G, H, I$ and the squares commute. In
+addition, suppose that $\a, \b, \d, \e$ are isomorphisms. We will show that
+$\g$ is an isomorphism.
+
+\emph{We show that $\g$ is surjective:}
+
+Suppose that $h \in H$. Since $\d$ is surjective, there exists an element
+$d \in D$ such that $r(h) = \d(d) \in I$.
+By the commutativity of the rightmost square, $s(r(h)) = \e(n(d))$.
+The exactness at $I$ means that $\im r = \ker s$, hence
+$\e(n(d)) = s(r(h)) = 0$. Because $\e$ is injective, $n(d) = 0$.
+Then $d \in \ker(n) = \im(m)$ by exactness at $D$.
+Therefore, there is some $c \in C$ such that $m(c) = d$.
+Now, $\d(m(c)) = \d(d) = r(h)$ and by the commutativity of squares,
+$\d(m(c)) = r(\g(c))$, so therefore $r(\g(c)) = r(h)$. Since $r$ is a
+homomorphism, $r(\g(c) - h) = 0$. Hence $\g(c) - h \in \ker r = \im q$ by
+exactness at $H$.
+
+Therefore, there exists $g \in G$ such that $q(g) = \g(c) - h$.
+$\b$ is surjective, so there is some $b \in B$ such that $\b(b) = g$ and hence
+$q(\b(b)) = \g(c) - h$. By the commutativity of squares,
+$q(\b(b)) = \g(l(b)) = \g(c) - h$. Hence
+$h = \g(c) - \g(l(b)) = \g(c - l(b))$, and therefore $\g$ is surjective.
+
+So far, we have used that $\b$ and $\d$ are surjective, $\e$ is injective, and
+exactness at $D$, $H$, $I$.
+
+\emph{We show that $\g$ is injective:}
+
+Suppose that $c \in C$ and $\g(c) = 0$.
+Then $r(\g(c)) = 0$, and by the commutativity of squares,
+$\d(m(c)) = 0$. Since $\d$ is injective, $m(c) = 0$, so
+$c \in \ker m = \im l$ by exactness at $C$.
+Therefore, there is $b \in B$ such that $l(b) = c$.
+Then $\g(l(b)) = \g(c) = 0$, and by the commutativity of squares,
+$q(\b(b)) = 0$. Therefore, $\b(b) \in \ker q$, and by exactness at $G$,
+$\b(b) \in \ker q = \im p$.
+
+There is now $f \in F$ such that $p(f) = \b(b)$. Since $\a$ is surjective, this
+means that there is $a \in A$ such that $f = \a(a)$, so then
+$\b(b) = p(\a(a))$. By commutativity of squares,
+$\b(b) = p(\a(a)) = \b(k(a))$, and hence $\b(k(a) - b) = 0$.
+Since $\b$ is injective, we have $k(a) -b = 0$, so $k(a) = b$.
+Hence $b \in \im k = \ker l$ by exactness at $B$, so $l(b) = 0$.
+However, we defined $b$ to satisfy $l(b) = c$, so therefore $c = 0$ and hence
+$\g$ is injective.
+
+Here, we used that $\a$ is surjective, $\b, \d$ are injective, and exactness at
+$B, C, G$.
+
+Putting the two statements together, we see that $\g$ is both surjective and
+injective, so $\g$ is an isomorphism. We only used that $\b, \d$ are
+isomorphisms and that $\a$ is surjective, $\e$ is injective, so we can slightly
+weaken the hypotheses; injectivity of $\a$ and surjectivity of $\e$ were
+unnecessary.
+
+\end{proof}
+
+
+\section{Ideals}
+
+The notion of an \emph{ideal} has already been defined. Now we will introduce additional terminology related to the theory of ideals.
+
+\subsection{Prime and maximal ideals}
+
+Recall that the notion of an ideal generalizes that of divisibility. In
+elementary number theory, though, one finds that questions of divisibility
+basically reduce to questions about primes.
+The notion of a ``prime ideal'' is intended to generalize the familiar idea of a prime
+number.
+
+\begin{definition}
+An ideal $I \subset R$ is said to be \textbf{prime} if
+\begin{enumerate}[\textbf{P} 1]
+\item $1 \notin I$ (by convention, 1 is not a prime number)
+\item If $xy \in I$, either $x \in I$ or $y \in I$.
+\end{enumerate}
+\end{definition}
+
+\begin{example}
+\label{integerprimes}
+If $R = \mathbb{Z}$ and $p \in R$, then $(p) \subset \mathbb{Z}$ is a prime ideal iff $p$ or $-p$ is a
+prime number in $\mathbb{N}$ or if $p$ is zero.
+\end{example}
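+
+Conditions \textbf{P 1} and \textbf{P 2} can be tested on a finite window of
+integers; this is only a heuristic check, not a proof, but it agrees with the
+example above. (Illustrative sketch.)
+
+```python
+def in_ideal(k, p):
+    """Membership in the principal ideal (p) of Z, with (0) = {0}."""
+    return k == 0 if p == 0 else k % p == 0
+
+def looks_prime(p, bound=25):
+    """Check P1 and P2 for the ideal (p) on [-bound, bound]^2."""
+    if in_ideal(1, p):          # p a unit: fails P1
+        return False
+    return all(not in_ideal(x * y, p) or in_ideal(x, p) or in_ideal(y, p)
+               for x in range(-bound, bound + 1)
+               for y in range(-bound, bound + 1))
+```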
+
+
+
+If $R$ is any commutative ring, there are two obvious ideals: the zero ideal
+$(0)$, consisting only of the zero element, and the unit ideal $(1)$,
+consisting of all of $R$.
+
+
+\begin{definition} \label{maximalideal}
+An ideal $I \subset R$ is called \textbf{maximal}\footnote{Maximal with
+respect to not being the unit ideal.} if
+\begin{enumerate}[\textbf{M} 1]
+\item $1 \notin I$
+\item Any larger ideal contains $1$ (i.e., is all of $R$).
+\end{enumerate}
+\end{definition}
+
+So a maximal ideal is a maximal element in the partially ordered set of proper
+ideals (an ideal is \textbf{proper} if it does not contain 1).
+
+\begin{exercise}
+Find the maximal ideals in $\mathbb{C}[t]$.
+\end{exercise}
+
+
+\begin{proposition}
+A maximal ideal is prime.
+\end{proposition}
+\begin{proof}
+First, a maximal ideal does not contain 1.
+
+Let $I \subset R$ be a maximal ideal.
+We need to show that if $xy \in I$,
+then one of $x,y \in I$. If $x \notin I$, then $(I,x) = I + (x)$ (the ideal
+generated by $I$ and $x$) strictly contains $I$, so by maximality contains
+$1$. In particular, $1 \in I+(x)$, so we can write
+\[ 1 = a + xb \]
+where $a \in I, b \in R$. Multiply both sides by $y$:
+\[ y = ay + bxy. \]
+Both terms on the right here are in $I$ ($a \in I$ and $xy \in I$), so we find
+that $y \in I$.
+
+\end{proof}
+
+Given a ring $R$, what can we say about the collection of ideals in $R$?
+There
+are two obvious ideals in $R$, namely $(0)$ and $ (1)$. These are the same if and
+only if $0=1$, i.e. $R$ is the zero ring.
+So for any nonzero commutative ring, we have at least two distinct ideals.
+
+Next, we show that maximal ideals always \emph{do} exist, except in the case
+of the zero ring.
+\begin{proposition} \label{anycontainedinmaximal}
+Let $R$ be a commutative ring. Let $I \subset R$ be a proper ideal. Then $I$
+is contained in a maximal ideal.
+\end{proposition}
+
+\begin{proof}
+This requires the axiom of choice in the form of Zorn's lemma. Let
+$P$ be the collection of all ideals $J \subset R$ such that $I
+\subset J$ and $J \neq R$. Then $P$ is a poset with respect to inclusion. $P$ is
+nonempty because it contains $I$. Note that given a (nonempty) linearly ordered
+collection of ideals $J_{\alpha} \in P$, the union $\bigcup J_{\alpha} \subset
+R$ is an ideal: this is easily seen in view of the linear ordering (if $x,y
+\in \bigcup J_{\alpha}$, then both $x,y$ belong to some $J_{\gamma}$, so $x+y
+\in J_{\gamma}$; multiplicative closure is even easier). The union is not all
+of $R$ because it does not contain $1$.
+
+By Zorn's lemma, $P$ therefore has a maximal element, which we call
+$\mathfrak{M}$; it is a proper ideal containing $I$. I claim that
+$\mathfrak{M}$ is a maximal ideal: any ideal strictly containing it cannot lie
+in $P$ (by the maximality of $\mathfrak{M}$), so it must be all of $R$.
+\end{proof}
+
+\begin{corollary}
+Let $R $ be a nonzero commutative ring. Then $R$ has a maximal ideal.
+\end{corollary}
+\begin{proof}
+Apply the lemma to the zero ideal.
+\end{proof}
+
+\begin{corollary}
+Let $R$ be a nonzero commutative ring. Then $x \in R$ is invertible if and
+only if it belongs to no maximal ideal $\mathfrak{m} \subset R$.
+\end{corollary}
+\begin{proof}
+Indeed, $x$ is invertible if and only if $(x) = (1)$, that is, if and only if
+$(x)$ is not a proper ideal; now \rref{anycontainedinmaximal}
+finishes the argument.
+\end{proof}
+
+\subsection{Fields and integral domains}
+
+Recall:
+
+\begin{definition}
+A commutative ring $R$ is called a \textbf{field} if $1 \neq 0$ and for every $x \in R -
+\left\{0\right\}$ there exists an \textbf{inverse} $x^{-1} \in R$ such that $xx^{-1} =
+1$.
+
+
+\end{definition}
+
+
+This condition has an obvious interpretation in terms of ideals.
+\begin{proposition}
+A commutative ring with $1 \neq 0$ is a field iff it has only the two ideals $(1),
+(0)$.
+\end{proposition}
+
+Alternatively, a ring is a field if and only if $(0)$ is a maximal ideal.
+
+\begin{proof}
+Assume $R$ is a field. Suppose $I \subset R$ is an ideal. If $I \neq (0)$,
+then there is a nonzero $x \in I$, which has an inverse $x^{-1}$. We have
+$x^{-1} x =1 \in I$, so $I = (1)$.
+In a field, there is thus no room for ideals other than $(0)$ and $(1)$.
+
+To prove the converse, assume every ideal of $R$ is $(0)$ or $(1)$. Then for
+each $x \in R$, $(x) = (0)$ or $(1)$. If $x \neq 0$, the first cannot happen, so
+that means that the ideal generated by $x$ is the unit ideal. So $1$ is a
+multiple of $x$, implying that $x$ has a multiplicative inverse.
+\end{proof}
+
+So fields also have an uninteresting ideal structure.
+
+\begin{corollary} \label{maximalfield}
+If $R$ is a ring and $I \subset R$ is an ideal, then $I$ is maximal if and only
+if $R/I$ is a field.
+\end{corollary}
+
+\begin{proof}
+The basic point here is that there is a bijection between the ideals of $R/I$
+and ideals of $R$ containing $I$.
+
+Denote by $\phi: R \to R/I$ the reduction map. There is a
+construction mapping ideals of $R/I$ to ideals of $R$. This sends an ideal in
+$R/I$ to
+its inverse image. This is easily seen to map to ideals of $R$ containing $I$.
+The map from ideals of $R/I$ to ideals of $R$ containing $I$ is a bijection,
+as one checks easily.
+
+It follows that $R/I$ is a field if and only if
+$R/I$ has exactly two ideals, i.e. if and only if there are exactly two
+ideals in $R$ containing $I$. These ideals must be $(1)$ and $I$, so this
+holds if and only if $I$ is maximal.
+\end{proof}
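+
+Here is a concrete instance of this criterion:
+\begin{example}
+For $p$ prime, $(p) \subset \mathbb{Z}$ is maximal, since $\mathbb{Z}/(p)$
+is a field. By contrast, $(x) \subset \mathbb{Z}[x]$ is not maximal:
+$\mathbb{Z}[x]/(x) \simeq \mathbb{Z}$ is not a field, and indeed $(x)$ is
+properly contained in the proper ideal $(x, p)$.
+\end{example}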
+
+There is a similar characterization of prime ideals.
+
+\begin{definition}
+A commutative ring $R$ is an \textbf{integral domain} if for all $ x,y \in R$,
+$x \neq 0 $ and $y \neq 0$ imply $xy \neq 0$.
+\end{definition}
+
+\begin{proposition} \label{primeifdomain}
+An ideal $I \subset R$ is prime iff $R/I$ is a domain.
+\end{proposition}
+
+\begin{exercise}
+Prove \rref{primeifdomain}.
+\end{exercise}
+
+Any field is an integral domain. This is because in a field, nonzero elements
+are invertible, and the product of two invertible elements is invertible. This
+statement translates in ring theory to the statement that a maximal ideal is
+prime.
+
+
+Finally, we include an example that describes what \emph{some} of the prime
+ideals in a polynomial ring look like.
+\begin{example}
+Let $R$ be a ring and $P$ a prime ideal. We claim that $PR[x] \subset R[x]$ is a
+prime ideal.
+
+Consider the map $\tilde{\phi}:R[x]\rightarrow(R/P)[x]$ with
+$\tilde{\phi}(a_0+\cdots+a_nx^n)=(a_0+P)+\cdots+(a_n+P)x^n$. This is clearly
+a homomorphism because $\phi:R\rightarrow R/P$ is, and its kernel consists
+of those polynomials $a_0+\cdots+a_nx^n$ with $a_0,\ldots,a_n\in P$, which is
+precisely $P[x]$. Thus $R[x]/P[x]\simeq (R/P)[x]$, which is an integral domain
+because $R/P$ is an integral domain. Thus $P[x]$ is a prime ideal.
+
+However, if
+$P$ is a maximal ideal, then $P[x]$ is never a maximal ideal because the ideal
+$P[x]+(x)$ (the polynomials with constant term in $P$) always strictly contains
+$P[x]$ (because if $x\in P[x]$ then $1\in P$, which is impossible). Note
+that $P[x]+(x)$ is the kernel of the composition of $\tilde{\phi}$ with
+evaluation at $0$, i.e.\ $(\text{ev}_0\circ\tilde{\phi}):R[x]\rightarrow R/P$;
+this map is a surjection and $R/P$ is a field, so $P[x]+(x)$ is
+a maximal ideal in $R[x]$ containing $P[x]$.
+\end{example}
+
+
+\begin{exercise} \label{quotfld1}
+Let $R$ be a domain. Consider the set of formal quotients $a/b$ with $a, b \in R$
+and $b \neq 0$. Define addition and multiplication using the usual rules. Show
+that the resulting object $K(R)$ is a ring, and in fact a \emph{field}. The
+natural map $R \to K(R)$, $r \mapsto r/1$, has a universal property: if $R
+\hookrightarrow L$ is an injection of $R$ into a field $L$, then there is a
+unique morphism $K(R) \to L$ of fields extending $R \to L$. This construction
+will be generalized when we consider \emph{localization.}
+The field $K(R)$ is called the \textbf{quotient field} of $R$.
+
+Note that a non-injective map $R\to L$ will \emph{not} factor through the
+quotient field!
+\end{exercise}
+
+
+\begin{exercise}\label{Jacobson}
+Let $R$ be a commutative ring. Then the \textbf{Jacobson radical} of $R$ is
+the intersection $\bigcap \mathfrak{m}$ of all maximal ideals $\mathfrak{m}
+\subset R$. Prove that an element $x$ is in the Jacobson radical if and only
+if $1 - yx$ is invertible for all $y \in R$.
+\end{exercise}
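+
+Two examples may help with this exercise:
+\begin{example}
+The Jacobson radical of $\mathbb{Z}$ is $\bigcap_p (p) = (0)$, since no
+nonzero integer is divisible by every prime. In $\mathbb{Z}/(4)$, the unique
+maximal ideal is $(2)$, so the Jacobson radical is $(2)$; consistent with the
+exercise, $1 - 2y$ is odd and hence invertible in $\mathbb{Z}/(4)$ for every
+$y$.
+\end{example}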
+
+\subsection{Prime avoidance}
+
+The following fact will come in handy occasionally. We will, for instance, use
+it much later to show that an ideal consisting of zerodivisors on a module $M$ is
+contained in an associated prime.
+
+\begin{theorem}[Prime avoidance] \label{primeavoidance}
+ Let $I_1,\dots, I_n \subset R$ be ideals. Let $A\subset R$ be a subset which is closed
+ under addition and multiplication. Assume that at least $n-2$ of the ideals are
+ prime. If $A\subset I_1\cup \cdots \cup I_n$, then $A\subset I_j$ for some $j$.
+ \end{theorem}
+
+The result is frequently used in the following specific case: if an ideal $I$
+is contained in a finite union $\bigcup \mathfrak{p}_i$ of primes, then $I
+\subset \mathfrak{p}_i$ for some $i$.
+
+ \begin{proof}
+ Induct on $n$. If $n=1$, the result is trivial. The case $n=2$ is an easy argument: if
+ $a_1\in A\smallsetminus I_1$ and $a_2\in A\smallsetminus I_2$, then $a_1+a_2\in
+ A\smallsetminus (I_1\cup I_2)$.
+
+Now assume $n\ge 3$. We may assume that for each $j$, $A\not\subset I_1\cup \cdots
+ \cup \hat I_j\cup \cdots \cup I_n$.\footnote{The hat means omit $I_j$.} Fix an element
+ $a_j\in A\smallsetminus (I_1\cup \cdots \cup \hat I_j\cup \cdots \cup I_n)$. Then this
+ $a_j$ must be contained in $I_j$ since $A\subset \bigcup I_j$. Since $n\ge 3$, one
+ of the $I_j$ must be prime. We may assume that $I_1$ is prime. Define
+ $x=a_1+a_2a_3\cdots a_n$, which is an element of $A$. Let's show that $x$ avoids
+ \emph{all} of the $I_j$. If $x\in I_1$, then $a_2a_3\cdots a_n\in I_1$; since
+ $I_1$ is prime, this forces some $a_i\in I_1$ with $i\neq 1$, contradicting
+ $a_i\not\in I_j$ for $i\neq j$. If $x\in I_j$ for some $j\ge 2$, then
+ $a_1\in I_j$, which again contradicts $a_i\not\in I_j$ for $i\neq j$.
+ \end{proof}
+\subsection{The Chinese remainder theorem}
+
+Let $m,n$ be relatively prime integers. Suppose $a, b \in \mathbb{Z}$; then
+one can show that the two congruences $x \equiv a \mod m$
+and $x \equiv b \mod n$ can be solved simultaneously in $x \in \mathbb{Z}$.
+The solution is unique, moreover, modulo $mn$.
+The Chinese remainder theorem generalizes this fact:
+
+
+\begin{theorem}[Chinese remainder theorem] Let $I_1, \dots, I_n$ be ideals in a
+ring $R$ which satisfy $I_i + I_j = R$ for $i \neq j$. Then we have $I_1 \cap
+\dots \cap I_n = I_1 \cdots I_n$ and the morphism of rings
+\[ R \to \bigoplus R/I_i \]
+is an epimorphism with kernel $I_1 \cap \dots \cap I_n$.
+\end{theorem}
+
+\begin{proof}
+First, note that for any two ideals $I_1$ and $I_2$, we
+have $I_1I_2\subset I_1\cap I_2$ and $(I_1+I_2)(I_1\cap I_2)\subset
+I_1I_2$ (because any element of $I_1+I_2$ multiplied by any element of
+$I_1\cap I_2$ will clearly be a sum of products of elements from both $I_1$
+and $I_2$). Thus, if $I_1$ and $I_2$ are coprime, i.e. $I_1+I_2=(1)=R$,
+then $(1)(I_1\cap I_2)=(I_1\cap I_2)\subset I_1I_2\subset I_1\cap I_2$,
+so that $I_1\cap I_2=I_1I_2$. This establishes the result for $n=2$.
+
+If the
+ideals $I_1,\ldots,I_n$ are pairwise coprime and the result holds for $n-1$,
+then $$\bigcap_{i=1}^{n-1} I_i=\prod_{i=1}^{n-1}I_i.$$ Because $I_n+I_i=(1)$
+for each $1\leq i\leq n-1$, there must be $x_i\in I_n$ and $y_i\in I_i$ such
+that $x_i+y_i=1$. Thus, $z_n=\prod_{i=1}^{n-1}y_i=\prod_{i=1}^{n-1}(1-x_i)\in
+\prod_{i=1}^{n-1} I_i$, and clearly $z_n+I_n=1+I_n$ since each $x_i\in
+I_n$. Thus $I_n+\prod_{i=1}^{n-1}I_i=I_n+\bigcap_{i=1}^{n-1}I_i=(1)$,
+and we can now apply the $n=2$ case to conclude that $\bigcap_{i=1}^n
+I_i=\prod_{i=1}^n I_i$.
+
+Note that for any $i$, we can construct a $z_i$
+with $z_i\in I_j$ for $j\neq i$ and $z_i+I_i=1+I_i$ via the same procedure.
+
+ Define $\phi:R\rightarrow\bigoplus R/I_i$
+by $\phi(a)=(a+I_1,\ldots,a+I_n)$. The kernel of $\phi$ is
+$\bigcap_{i=1}^n I_i$, because $a+I_i=0+I_i$ iff $a\in I_i$, so that
+$\phi(a)=(0+I_1,\ldots,0+I_n)$ iff $a\in I_i$ for all $i$, that is,
+$a\in\bigcap_{i=1}^n I_i$. Combined with our previous result, the kernel
+of $\phi$ is $\prod_{i=1}^n I_i$.
+
+Finally, recall that we constructed
+$z_i\in R$ such that $z_i+I_i=1+I_i$ and $z_i+I_j=0+I_j$ for all $j\neq
+i$, so that $\phi(z_i)=(0+I_1,\ldots,1+I_{i},\ldots,0+I_n)$. Thus,
+$\phi(a_1z_1+\cdots+a_nz_n)=(a_1+I_1,\ldots,a_n+I_n)$ for all $a_i\in R$,
+so that $\phi$ is onto. By the first isomorphism theorem, we have
+$R/I_1\cdots I_n\simeq \bigoplus_{i=1}^nR/I_i$.
+\end{proof}
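+
+Here is the motivating integer case, worked through the proof:
+\begin{example}
+Take $R = \mathbb{Z}$, $I_1 = (3)$, $I_2 = (5)$; these are coprime since $2
+\cdot 3 - 1 \cdot 5 = 1$. Following the proof, $z_1 = -5$ satisfies $z_1
+\equiv 1 \pmod 3$ and $z_1 \equiv 0 \pmod 5$, while $z_2 = 6$ satisfies $z_2
+\equiv 0 \pmod 3$ and $z_2 \equiv 1 \pmod 5$. Hence $x = 2z_1 + 3z_2 = 8$
+solves $x \equiv 2 \pmod 3$ and $x \equiv 3 \pmod 5$, uniquely modulo $15$.
+\end{example}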
+
+\section{Some special classes of domains}
+
+\subsection{Principal ideal domains}
+
+\begin{definition}
+A ring $R$ is a \textbf{principal ideal domain} or \textbf{PID} if $R \neq 0$, $R$ is not a
+field, $R$ is a domain, and every ideal of $R$ is principal.
+\end{definition}
+
+These have the next simplest theory of ideals.
+Each ideal is very simple---it's principal---though there might be a lot of ideals.
+
+\begin{example}
+$\mathbb{Z}$ is a PID. The only nontrivial fact to check here is that:
+\begin{proposition}
+Any nonzero ideal $I \subset \mathbb{Z}$ is principal.
+\end{proposition}
+\begin{proof}
+Pick $n \in I -
+\left\{0\right\}$; replacing $n$ by $-n$ if necessary, we may
+choose $n \in I$ positive and as small as possible. Then I claim that $I = (n)$. Indeed, we have $(n)
+\subset I$ obviously. If $m \in I$ is another integer, then divide $m$ by $n$
+to find $m = nb + r$ with $r \in [0, n)$. We find that $r = m - nb \in I$ and $0 \leq r <
+n$, so $r=0$ by minimality, and $m$ is divisible by $n$. Hence $I \subset (n)$.
+
+So $I = (n)$.
+\end{proof}
+\end{example}
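+
+For concreteness:
+\begin{example}
+The ideal $(4, 6) \subset \mathbb{Z}$ contains $6 - 4 = 2$, which is its
+smallest positive element; since $2$ divides both generators, $(4,6) = (2)$.
+In general, the generator produced by the proof above is the positive gcd of
+any set of generators of the ideal.
+\end{example}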
+
+A module $M$ is said to be \emph{finitely generated} if there exist elements
+$x_1, \dots, x_n \in M$ such that any element of $M$ is a linear combination
+(with coefficients in $R$) of the $x_i$. (We shall define this more formally
+below.)
+One reason that PIDs are so convenient is:
+
+\begin{theorem}[Structure theorem] \label{structurePID}
+If $M$ is a finitely generated module over a principal ideal domain $R$, then
+$M$ is isomorphic to a direct sum
+\[ M \simeq \bigoplus_{i=1}^n R/a_i, \]
+for various $a_i \in R$ (possibly zero).
+\end{theorem}
+
+\add{at some point, the proof should be added. This is important!}
+
+\subsection{Unique factorization domains}
+
+The integers $\mathbb{Z}$ are especially nice because of the fundamental
+theorem of arithmetic, which states that every integer has a unique
+factorization into primes. This is not true for every integral domain.
+
+\begin{definition}
+An element of a domain $R$ is \textbf{irreducible} if it cannot be written
+as the product of two non-unit elements of $R$.
+\end{definition}
+
+\begin{example}
+Consider the integral domain $\mathbb{Z}[\sqrt{-5}]$. We saw earlier that
+\[
+6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}),
+\]
+which means that $6$ was written as the product of two non-unit elements in
+different ways. $\mathbb{Z}[\sqrt{-5}]$ does not have unique factorization.
+\end{example}
+
+\begin{definition} \label{earlyUFD}
+A domain $R$ is a \textbf{unique factorization domain} or \textbf{UFD} if every
+non-unit $x \in R$ satisfies
+\begin{enumerate}
+\item $x$ can be written as a product $x = p_1 p_2 \cdots p_n$ of
+irreducible elements $p_i \in R$
+\item if $x = q_1 q_2 \cdots q_m$ where $q_i \in R$ are irreducible
+then the $p_i$ and $q_i$ are the same up to order and multiplication by units.
+\end{enumerate}
+\end{definition}
+
+\begin{example}
+$\mathbb{Z}$ is a UFD, while $\mathbb{Z}[\sqrt{-5}]$ is not. In fact, many of
+our favorite domains have unique factorization. We will prove that all PIDs
+are UFDs. In particular, in \rref{gaussianintegersareprincipal} and
+\rref{polyringisprincipal}, we saw that $\mathbb{Z}[i]$ and $F[t]$ are PIDs,
+so they also have unique factorization.
+\end{example}
+
+\begin{theorem} \label{PIDUFD}
+Every principal ideal domain is a unique factorization domain.
+\end{theorem}
+
+\begin{proof}
+Suppose that $R$ is a principal ideal domain and $x$ is an element of $R$. We
+first demonstrate that $x$ can be factored into irreducibles.
+If $x$ is a unit or an irreducible, then we are done. Therefore, we can assume
+that $x$ is reducible, which means that $x = x_1 x_2$ for non-units
+$x_1, x_2 \in R$. If these are irreducible, then we are again done, so we
+assume that they are reducible and repeat this process. We need to show that
+this process terminates.
+
+Suppose that this process continued infinitely. Then we have an infinite
+ascending chain of ideals, where all of the inclusions are proper:
+$(x) \subset (x_1) \subset (x_{11}) \subset \cdots \subset R$.
+We will show that this is impossible because any infinite ascending chain of
+ideals $I_1 \subset I_2 \subset \cdots \subset R$ of a principal ideal domain
+eventually becomes stationary, i.e. for some $n$, $I_k = I_n$ for $k \geq n$.
+Indeed, let $I = \bigcup_{i=1}^\infty I_i$. This is an ideal, so it is
+principally generated as $I = (a)$ for some $a$. Since $a \in I$, we must have
+$a \in I_N$ for some $N$, which means that the chain stabilizes after $I_N$.
+
+It remains to prove that this factorization of $x$ is unique. We induct on
+the number of irreducible factors $n$ of $x$. If $n = 0$, then $x$ is a unit,
+which has unique factorization up to units. Now, suppose that
+$x = p_1 \cdots p_n = q_1 \cdots q_m$ for some $m \ge n$. Since $p_1$ divides
+$x$, it divides the product $q_1 \cdots q_m$, and hence one of
+the factors $q_i$ (in a PID, an irreducible element generates a prime ideal:
+any ideal properly containing $(p_1)$ is of the form $(d)$ with $d$ a proper
+divisor of $p_1$, forcing $d$ to be a unit, so $(p_1)$ is even maximal).
+Reorder the $q_i$ so that $p_1$ divides $q_1$. However,
+$q_1$ is irreducible, so this means that $p_1$ and $q_1$ are the same up to
+multiplication by a unit $u$. Canceling $p_1$ from each of the two
+factorizations, we see that $p_2 \cdots p_n = u q_2 \cdots q_m = q_2' q_3 \cdots
+q_m$, where $q_2' = u q_2$ is again irreducible. By induction, this shows that the factorization of $x$ is unique up to
+order and multiplication by units.
+\end{proof}
+
+
+\subsection{Euclidean domains}
+
+A euclidean domain is a special type of principal ideal domain. In practice,
+it will often happen that one has an explicit proof that a given domain is
+euclidean, while it might not be so trivial to prove that it is a UFD without
+the general implication below.
+
+\begin{definition}
+An integral domain $R$ is a \textbf{euclidean domain} if there is a function
+$|\cdot |:R\to \mathbb{Z}_{\geq 0}$ (called the norm) such that the following hold.
+\begin{enumerate}
+\item $|a|=0$ iff $a=0$.
+\item For any nonzero $a,b\in R$ there exist $q,r\in R$ such that $b=aq+r$ and $|r|<|a|$.
+\end{enumerate}
+In other words, the norm is compatible with division with remainder.
+\end{definition}
+\begin{theorem}\label{EDPID}
+A euclidean domain is a principal ideal domain.
+\end{theorem}
+\begin{proof}
+Let $R$ be a euclidean domain and $I\subset R$ an ideal; we may assume $I \neq (0)$. Let $b$ be a nonzero element of smallest norm in $I$.
+Suppose $ a\in I$. Then we can write $ a = qb + r$ with $|r| < |b|$. Since $r = a - qb \in I$ and $ b$ has minimal norm among nonzero elements of $I$, we get $ r = 0$ and $ b\mid a$. Thus $ I=(b)$ is principal.
+\end{proof}
+
+
+As we will see, this implies that any euclidean domain admits \emph{unique
+factorization.}
+
+\begin{proposition} \label{polyringED}
+Let $F$ be a field. Then the polynomial ring $F[t]$ is a euclidean domain.
+In particular, it is a PID.
+\end{proposition}
+\begin{proof}
+We define $|f| = 2^{\deg f}$ for $f \neq 0$ and $|0| = 0$. Then $|f| = 0$ iff
+$f = 0$, and the usual polynomial long division over a field produces, for
+nonzero $f, g \in F[t]$, polynomials $q, r$ with $g = qf + r$ and either
+$r = 0$ or $\deg r < \deg f$; in both cases $|r| < |f|$. So $F[t]$ is a
+euclidean domain, and by \rref{EDPID} it is a PID.
+\end{proof}
+
+
+\begin{exercise} \label{gaussianintegersareprincipal}
+Prove that $\mathbb{Z}[i]$ is principal.
+(Define the norm as $N(a+ib) = a^2 + b^2$.)
+\end{exercise}
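+
+As a hint for the preceding exercise: division with remainder in
+$\mathbb{Z}[i]$ is carried out by dividing in $\mathbb{Q}(i)$ and rounding
+to a nearest Gaussian integer.
+\begin{example}
+With $a = 5 + 3i$ and $b = 2 - i$, we compute $a/b = (5+3i)(2+i)/5 =
+(7+11i)/5$; rounding gives $q = 1 + 2i$, and then $r = a - qb = (5+3i) -
+(1+2i)(2-i) = 1$, with $N(r) = 1 < 5 = N(b)$.
+\end{example}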
+
+\begin{exercise} \label{polyringisprincipal}
+Prove that the polynomial ring $F[t]$ for $F$ a field is principal.
+\end{exercise}
+
+
+It is \emph{not} true that a PID is necessarily euclidean. Nevertheless, it
+was shown in \cite{Gre97} that the converse is ``almost'' true. Namely,
+\cite{Gre97} defines the notion of an \textbf{almost euclidean domain.}
+A domain $R$ is almost euclidean if there is a function $d: R \to
+\mathbb{Z}_{\geq 0}$ such that
+\begin{enumerate}
+\item $d(a) = 0$ iff $a = 0$.
+\item $d(ab) \geq d(a)$ if $b \neq 0$.
+\item If $a,b \in R - \left\{0\right\}$, then either $b \mid a$ or there is
+$r \in (a,b)$ with $d(r) < d(b)$.
+\end{enumerate}
+It is shown in \cite{Gre97} that a domain is a PID if and only if it is
+almost euclidean.
+
+\section{Finitely generated and finitely presented modules}
+
+\begin{definition}
+An $R$-module $M$ is \textbf{finitely presented} if there is an exact
+sequence $F_1 \to F_0 \to M \to 0$ with $F_0, F_1$ finitely generated and
+free.
+\end{definition}
+
+\begin{proposition}
+If $M$ is a finitely presented $R$-module and $R^m \to M$ is a surjection,
+then its kernel $K$ is finitely generated.
+\end{proposition}
+\begin{proof}
+Choose a finite presentation $F' \to F \to M \to 0$ with $F', F$ finitely
+generated and free. We have a diagram
+\[
+\xymatrix{
+& F' \ar[r] & F \ar[r] \ar@{-->}[d] & M \ar[r] \ar[d] & 0 \\
+0 \ar[r] & K \ar[r] & R^m \ar[r] & M \ar[r] & 0
+}
+\]
+The dotted arrow $F \to R^m$ exists as $F$ is projective. There is induced a
+map $F' \to K$.
+We get a commutative and exact diagram
+\[
+\xymatrix{
+& F' \ar[r]\ar[d]^f & F \ar[r] \ar[d]^g & M \ar[r] \ar[d] & 0 \\
+0 \ar[r] & K \ar[r] & R^m \ar[r] & M \ar[r] & 0
+},
+\]
+to which we can apply the snake lemma. There is an exact sequence
+\[ 0 \to \coker(f) \to \coker(g) \to 0, \]
+which gives an isomorphism $\coker(f) \simeq \coker(g)$.
+However, $\coker(g)$ is finitely generated, as a quotient of $R^m$.
+Thus $\coker(f)$ is too.
+Since we have an exact sequence
+\[ 0 \to \im(f) \to K \to \coker(f) \to 0, \]
+and $\im(f)$ is finitely generated (as the image of a finitely generated
+object, $F'$), we find by \rref{exact-fingen} that $K$ is finitely generated.
+\end{proof}
+
+\begin{proposition} \label{exact-finpres}
+Given an exact sequence
+\[ 0 \to M' \to M \to M'' \to 0, \]
+if $M', M''$ are finitely presented, so is $M$.
+\end{proposition}
+
+In general, it is not true that if $M$ is finitely presented, then $M'$ and
+$M''$ are. For instance, a submodule of the free, finitely
+generated module $R$ (i.e. an ideal) might fail to be finitely generated. We
+shall see in \rref{noetherian} that this does not happen over a
+\emph{noetherian} ring.
+
+\begin{proof}
+Indeed, suppose we have exact sequences
+\[ F_1' \to F_0' \to M' \to 0 \]
+and
+\[ F_1'' \to F_0'' \to M'' \to 0 \]
+where the $F$'s are finitely generated and free.
+We need to get a similar sequence for $M$.
+Let us stack these into a diagram
+\[ \xymatrix{
+& F_1' \ar[d] & & F_1'' \ar[d] \\
+& F_0' \ar[d] & & F_0'' \ar[d] \\
+0 \ar[r] & M' \ar[r] & M \ar[r] & M'' \ar[r] & 0
+}\]
+However, now, using general facts about projective modules (\rref{}), we can
+splice these presentations into a resolution
+\[ F_1' \oplus F_1'' \to F_0' \oplus F_0'' \to M \to 0, \]
+which proves the assertion.
+\end{proof}
+
+
+\begin{corollary}
+The (finite) direct sum of finitely presented modules is finitely presented.
+\end{corollary}
+\begin{proof}
+Immediate from \rref{exact-finpres}.
+\end{proof}
+
+\subsection{Modules of finite length}
+
+A much stronger condition on modules than finite generation is that of \emph{finite
+length}. Here, basically any operation one does will eventually terminate.
+
+Let $R$ be a commutative ring, $M$ an $R$-module.
+
+\begin{definition}
+$M$ is \textbf{simple} if $M \neq 0$ and $M$ has no submodules other than $0$ and $M$.
+\end{definition}
+
+
+\begin{exercise}
+A torsion-free abelian group is never a simple $\mathbb{Z}$-module.
+\end{exercise}
+
+\begin{proposition}
+$M$ is simple if and only if it is isomorphic to $R/\mathfrak{m}$ for $\mathfrak{m} \subset
+R$ a maximal ideal.
+\end{proposition}
+
+\begin{proof} Let $M$ be simple. Then
+$M$ must contain a cyclic submodule $Rx$ generated by some $x \in
+M - \left\{0\right\}$. Now $Rx \simeq R/I$ for $I = \left\{a \in R : ax =
+0\right\}$ the annihilator of $x$, and
+simplicity implies that $M = Rx \simeq R/I$. If $I$ is not maximal,
+say properly contained in $J$,
+then we will get a nontrivial submodule $J/I$ of $R/I \simeq M$. Conversely,
+it is easy to see
+that $R/\mathfrak{m}$ is simple for $\mathfrak{m}$ maximal.
+\end{proof}
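+
+\begin{example}
+The simple $\mathbb{Z}$-modules are thus exactly the groups $\mathbb{Z}/p$
+for $p$ prime, since the maximal ideals of $\mathbb{Z}$ are the $(p)$. This
+explains the exercise above: a nonzero torsion-free abelian group has no
+element of prime order, so it cannot be isomorphic to any $\mathbb{Z}/p$.
+\end{example}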
+
+
+\begin{exercise}[Schur's lemma] Let $f: M \to N$ be a module-homomorphism,
+where $M, N$ are both simple. Then either $f =0$ or $f$ is an isomorphism.
+\end{exercise}
+
+\begin{definition}
+$M$ is of \textbf{finite length} if there is a finite filtration $0 = M^0
+\subset \dots \subset M^n = M$ where each $M^i/M^{i-1}$ is simple.
+\end{definition}
+
+\begin{exercise}
+Modules of finite length are closed under extensions (that is, if $0 \to M'
+\to M \to M'' \to 0$ is an exact sequence, then if $M', M''$ are of finite
+length, so is $M$).
+\end{exercise}
+
+In the next result (which will not be used in this chapter), we shall use the
+notions of a \emph{noetherian} and an \emph{artinian} module. These notions
+will be developed at length in \cref{chnoetherian}, and we refer the reader
+there for more explanation.
+A module is \emph{noetherian} if every ascending chain $M_1 \subset M_2 \subset
+\dots$ of submodules stabilizes, and it is \emph{artinian} if every descending chain
+stabilizes.
+\begin{proposition}
+$M$ is of finite length iff $M$ is both noetherian and artinian.
+\end{proposition}
+\begin{proof}
+Any simple module is obviously both noetherian and artinian: it has only two
+submodules. So if $M$ is of finite length, then the finite filtration with simple
+quotients implies that $M$ is noetherian and artinian, since these two
+properties are stable under extensions (\rref{exactnoetherian}
+and \rref{exactartinian} of \rref{noetherian}).
+
+Suppose $M \neq 0$ is noetherian and artinian. Let $M_1 \subset M$ be a minimal
+nonzero submodule, which exists as $M$ is artinian. This is necessarily simple. Then we have a filtration
+\[ 0 = M_0 \subset M_1. \]
+If $M_1 = M$, then the filtration goes up to $M$, and we have that $M$ is of
+finite length. If not, find a submodule $M_2$ that is minimal among
+submodules properly containing $M_1$; then the quotient
+$M_2/M_1$ is simple. We have the filtration
+\[ 0 = M_0 \subset M_1 \subset M_2, \]
+which we can keep continuing until at some point we reach $M$. Note that since
+$M$ is noetherian, we cannot continue this strictly ascending chain forever.
+\end{proof}
+
+\begin{exercise}
+In particular, any submodule or quotient module of a finite length module is
+of finite length. Note that the analogous statement for submodules of
+finitely generated modules fails unless the ring in question is noetherian.
+\end{exercise}
+
+Our next goal is to show that the length of a filtration of a module with
+simple quotients is well-defined.
+For this, we need:
+\begin{lemma} \label{simplefiltrationint}
+Let $0 = M_0 \subset M_1 \subset \dots \subset M_n = M$ be a filtration of
+$M$ with simple quotients. Let $N \subset M$. Then the filtration
+$0 = M_0 \cap N \subset M_1 \cap N \subset \dots \subset N$ has simple or zero
+quotients.
+\end{lemma}
+\begin{proof}
+Indeed, for each $i$, $(N \cap M_i)/(N \cap M_{i-1})$ is a submodule of
+$M_i / M_{i-1}$, so is either zero or simple.
+\end{proof}
+
+
+\begin{theorem}[Jordan-H\"older]\label{lengthexists} Let $M$ be a module of
+finite length.
+In this case, any two filtrations
+on $M$ with simple quotients have the same length.
+\end{theorem}
+\begin{definition}
+This number is called the \textbf{length} of $M$ and is denoted $\ell(M)$.
+\end{definition}
+\begin{proof}[Proof of \rref{lengthexists}]
+Let us introduce a temporary definition: $l(M)$ is the length of the
+\emph{minimal} filtration on $M$. We will show that any filtration of $M$ (with
+simple quotients) is of length
+$l(M)$. This is the proposition in another form.
+
+The proof of this claim is by induction on $l(M)$. Suppose we have a filtration
+\[ 0 = M_0 \subset M_1 \subset \dots \subset M_n = M \]
+with simple quotients. We would like to show that $n = l(M)$. By definition of
+$l(M)$, there is another filtration
+\[ 0 = N_0 \subset \dots \subset N_{l(M)} = M. \]
+If $l(M) = 0,1$, then $M$ is zero or simple, which will necessarily imply that $n=0,1$
+respectively. So we can assume $l(M) \geq 2$. We can also assume that the
+result is known for strictly smaller submodules of $M$.
+
+There are two cases:
+\begin{enumerate}
+\item $M_{n-1} = N_{l(M) -1 } $. Then $M_{n-1} = N_{l(M)-1}$ satisfies
+$l(M_{n-1}) \leq l(M)-1$. Thus by the inductive hypothesis any two filtrations on $M_{n-1}$
+have the same length, so $n-1 = l(M) -1$, implying what we want.
+\item We have $M_{n-1} \cap N_{l(M) - 1} \subsetneq M_{n-1}, N_{l(M)-1}$.
+Call this intersection $K$.
+
+Now we have two filtrations of the modules $M_{n-1}, N_{l(M)-1}$ with
+simple quotients. We can modify these filtrations so that the second-to-last
+term of each is $K$.
+To do this, consider the filtrations
+\[ 0 = M_0 \cap K \subset M_1 \cap K \subset \dots \subset M_{n-1} \cap K = K
+\subset M_{n-1} \]
+and
+\[ 0 = N_0 \cap K \subset N_1 \cap K \subset \dots \subset N_{l(M)-1} \cap K = K
+\subset N_{l(M)-1} . \]
+These filtrations have simple or zero quotients by
+\rref{simplefiltrationint}. Moreover, since $M_{n-1} + N_{l(M)-1} = M$ (as
+$M_{n-1}$ is a maximal submodule not containing $N_{l(M)-1}$), we have
+$M_{n-1}/K = M_{n-1}/(M_{n-1} \cap N_{l(M)-1}) \simeq M/N_{l(M)-1}$, which is
+simple, and similarly $N_{l(M)-1}/K \simeq M/M_{n-1}$ is simple. We can throw
+out redundancies to eliminate the zero terms.
+So we get two new filtrations of $M_{n-1}$ and $N_{l(M)-1}$ whose second-to-last
+term is $K$.
+
+By the
+inductive hypothesis any two filtrations on either of these proper submodules $M_{n-1},
+N_{l(M)-1} $
+have the same length.
+Thus the lengths of the two new filtrations are $n-1$ and $l(M)-1$,
+respectively.
+So we find that $n-1 = l(K) +1$ and $l(M)-1 = l(K)+1$ by
+the inductive hypothesis. This implies what we want.
+\end{enumerate}
+\end{proof}
+
+\begin{exercise}
+Prove that the successive quotients $M_i/M_{i-1}$ are also determined (up to
+permutation).
+\end{exercise}
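+
+\begin{example}
+As a $\mathbb{Z}$-module, $\mathbb{Z}/12$ has length $3$: the filtration $0
+\subset 6\mathbb{Z}/12\mathbb{Z} \subset 2\mathbb{Z}/12\mathbb{Z} \subset
+\mathbb{Z}/12$ has successive quotients $\mathbb{Z}/2, \mathbb{Z}/3,
+\mathbb{Z}/2$, while $0 \subset 4\mathbb{Z}/12\mathbb{Z} \subset
+2\mathbb{Z}/12\mathbb{Z} \subset \mathbb{Z}/12$ has quotients $\mathbb{Z}/3,
+\mathbb{Z}/2, \mathbb{Z}/2$: the same length, and the same quotients up to
+permutation.
+\end{example}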
+
+
diff --git a/books/cring/graded.tex b/books/cring/graded.tex
new file mode 100644
index 0000000000000000000000000000000000000000..81dde67b7dd0932bfdcca3948be6a605e3ced4db
--- /dev/null
+++ b/books/cring/graded.tex
@@ -0,0 +1,1099 @@
+\chapter{Graded and filtered rings}
+
+In algebraic geometry, working in classical affine space
+$\mathbb{A}^n_{\mathbb{C}}$ of points in $\mathbb{C}^n$ turns out to be
+insufficient for various reasons.
+Instead, it is often more convenient to consider varieties in \emph{projective
+space} $\mathbb{P}^n_{\mathbb{C}}$, which is the set of lines through the
+origin in $\mathbb{C}^{n+1}$.
+In other words, it is the set of all $(n+1)$-tuples $[z_0, \dots, z_n] \in
+\mathbb{C}^{n+1} - \left\{0\right\}$ modulo the relation that
+\begin{equation} \label{rescaling} [z_0, \dots, z_n] = [\lambda z_0, \dots, \lambda z_n], \quad \lambda \in
+\mathbb{C}^*. \end{equation}
+Varieties in projective space have many
+convenient properties that affine varieties do not: for instance,
+intersections work out much more nicely when intersections at the extra
+``points at infinity'' are included.
+Moreover, when endowed with the complex topology, (complex) projective
+varieties are \emph{compact}, unlike all but degenerate affine varieties (i.e.
+finite sets).
+
+It is when defining the notion of a ``variety'' in projective space that one
+encounters gradedness. Now a variety in $\mathbb{P}^n$ must be cut out by
+polynomials $F_1, \dots, F_k \in \mathbb{C}[x_0, \dots, x_n]$; that is, a
+point represented by $[z_0, \dots, z_n]$ lies in the associated variety if and
+only if $F_i(z_0, \dots, z_n) = 0$ for each $i$. For this to make sense, or to
+be independent of the choice of $z_0, \dots, z_n$ up to rescaling as in
+\eqref{rescaling}, it is necessary to assume
+that each $F_i$ is \emph{homogeneous.}
+
+Algebraically, $\mathbb{A}^n_{\mathbb{C}}$ is the set of maximal ideals in the
+polynomial ring $\mathbb{C}[x_1, \dots, x_n]$. Projective space is defined somewhat more
+geometrically (as a set of lines) but it turns out that there is an
+algebraic interpretation here too. The points of projective space are in
+bijection with the \emph{homogeneous maximal ideals} of the polynomial ring
+$\mathbb{C}[x_0, \dots, x_n]$. We shall define more generally the $\proj$ of a
+\emph{graded} ring in this chapter. Although we shall not repeatedly refer to
+this concept in the sequel, it will be useful for readers interested in
+algebraic geometry.
+
+We shall also introduce the notion of a \emph{filtration}. A filtration allows
+one to endow a given module with a topology, and one can in fact complete with
+respect to this topology. This construction will be studied in
+\rref{completions}.
+
+\section{Graded rings and modules}
+
+Much of the material in the present section is motivated by algebraic
+geometry; see \cite{EGA}, volume II for the construction of $\proj R$ as a
+scheme.
+
+\subsection{Basic definitions}
+\begin{definition}
+A \textbf{graded ring} $R$ is a ring together with a decomposition (as abelian
+groups)
+\[ R = R_0 \oplus R_1 \oplus \dots \]
+such that $R_m R_n \subset R_{m+n}$ for all $m, n \in \mathbb{Z}_{\geq 0}$,
+and where $R_0$ is a subring (i.e. $1 \in R_0$).
+A \textbf{$\mathbb{Z}$-graded ring} is one where the decomposition is into
+$\bigoplus_{n \in \mathbb{Z}} R_n$.
+In either case, the elements of the subgroup $R_n$ are called
+\textbf{homogeneous of degree $n$}.
+\end{definition}
+
+The basic example to keep in mind is, of course, the polynomial ring $R[x_1,
+\dots, x_n]$ for $R$ any ring. The graded piece of degree $n$ consists of the
+homogeneous polynomials of degree $n$.
+
+Consider a graded ring $R$.
+\begin{definition}
+A \textbf{graded} $R$-module is an ordinary $R$-module $M$ together with a
+decomposition
+\[ M = \bigoplus_{k \in \mathbb{Z}} M_k \]
+as abelian groups, such that $R_m M_n \subset M_{m+n}$ for all $m \in
+\mathbb{Z}_{\geq 0}, n \in \mathbb{Z}$. Elements in one of these pieces are
+called \textbf{homogeneous.}
+Any $m \in M$ is thus uniquely a finite sum $\sum m_{n_i}$ where each $m_{n_i}
+\in M_{n_i}$ is homogeneous of degree $n_i$.
+\end{definition}
+
+Clearly there is a \emph{category} of graded $R$-modules, where the morphisms
+are the morphisms of $R$-modules that preserve the grading (i.e. take
+homogeneous elements to homogeneous elements of the same degree).
+
+Since we shall focus on positively graded rings, we shall simply call them
+graded rings; when we do have to consider rings with possibly negative
+gradings, we shall highlight this explicitly. Note, however, that we allow
+modules with negative gradings freely.
+
+In fact, we shall note an important construction that will generally shift
+the graded pieces such that some of them might be negative:
+
+\begin{definition}
+Given a graded module $M$, we define the \textbf{twist} $M(n)$ as the
+same $R$-module but with the grading
+\[ M(n)_k = M_{n+k} . \]
+This is a functor on the category of graded $R$-modules.
+\end{definition}
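+
+\begin{example}
+Let $R = k[x]$ with its usual grading. Then $R(-1)_k = R_{k-1}$, so the
+element $1 \in R_0$ sits in degree $1$ of $R(-1)$: the twist $R(-1)$ is free
+of rank one on a generator in degree $1$. More generally, a graded free
+module with one generator in degree $d$ is a copy of $R(-d)$.
+\end{example}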
+
+In algebraic geometry, the process of twisting allows one to construct
+canonical line bundles on projective space. Namely, a twist of $R$ itself
+will lead to a line bundle on projective space that in general is not
+trivial. See \cite{Ha77}, II.5.
+
+Here are examples:
+\begin{example}[An easy example]
+If $R$ is a graded ring, then $R$ is a graded module over itself.
+\end{example}
+
+\begin{example}[Another easy example]
+If $S$ is any ring, then $S$ can be considered as a graded ring with $S_0 = S$
+and $S_i = 0$ for $i>0$. Then a \emph{graded} $S$-module is just a
+$\mathbb{Z}$-indexed collection of (ordinary) $S$-modules.
+\end{example}
+
+\begin{example}[The blowup algebra]
+\label{blowupalg}
+This example is a bit more interesting, and will be used in the sequel. Let $S$
+be any ring, and let $J \subset S$ be an ideal. We can make $R = S \oplus J \oplus
+J^2 \oplus \dots$ (the so-called \emph{blowup algebra}) into a graded ring, by defining the multiplication the normal
+way, except that something in the $i$th component times something in the $j$th
+component goes into the $(i+j)$th component.
+
+Given any $S$-module $M$, there is a graded $R$-module $M \oplus JM \oplus J^2
+M \oplus \dots$, where multiplication is defined in the obvious way. We thus
+get a functor from $S$-modules to graded $R$-modules.
+\end{example}
+
+\begin{definition} Fix a graded ring $R$.
+Let $M$ be a graded $R$-module and $N \subset M$ an $R$-submodule. Then $N$ is
+called a
+\textbf{graded submodule} if the homogeneous components of anything in $N$ are
+in $N$. If $M=R$, then a graded ideal is also called a \textbf{homogeneous
+ideal}.
+\end{definition}
+
+In particular, a graded submodule is automatically a graded module in its own
+right.
+
+\begin{lemma}
+\begin{enumerate}
+\item The sum of two graded submodules (in particular, homogeneous ideals) is
+graded.
+\item The intersection of two graded submodules is graded.
+\end{enumerate}
+\end{lemma}
+\begin{proof}
+Immediate.
+\end{proof}
+
+One can grade the quotients of a graded module by a graded submodule.
+If $N \subset M$ is a graded submodule, then $M/N$ can be made into a graded
+module, via the isomorphism of abelian groups
+\[ M/N \simeq \bigoplus_{k \in \mathbb{Z}} M_k/N_k. \]
+In particular, if $\mathfrak{a} \subset R$ is a homogeneous ideal, then
+$R/\mathfrak{a}$ is a graded ring in a natural way.
+
+
+\begin{exercise}
+Let $R$ be a graded ring. Does the category of graded $R$-modules admit limits and colimits?
+\end{exercise}
+\subsection{Homogeneous ideals}
+
+Recall that a homogeneous ideal in a graded ring $R$ is simply a graded
submodule of $R$. We now prove a useful result that enables us to tell when an
ideal is homogeneous.
+
+\begin{proposition} \label{homgideal}
+Let $R$ be a graded ring, $I \subset R$ an ideal. Then $I$ is a homogeneous
+ideal
+if and only if it can be generated by homogeneous elements.
+\end{proposition}
+\begin{proof}
+If $I$ is a homogeneous ideal, then by definition
+\[ I = \bigoplus_i I \cap R_i, \]
+so $I$ is generated by the sets $\left\{I \cap R_i\right\}_{i \in
+\mathbb{Z}_{\geq 0}}$ of homogeneous elements.
+
+Conversely, let us suppose that $I$ is generated by homogeneous elements
+$\left\{h_\alpha\right\}$. Let $x \in I$ be arbitrary; we can uniquely
+decompose $x$ as a sum of homogeneous elements, $x = \sum x_i$, where each
$x_i \in R_i$. We need to show that each $x_i$ in fact lies in $I$.
+
+To do this, note that $x = \sum q_\alpha h_\alpha$ where the $q_\alpha $
+belong to $R$. If we take $i$th homogeneous components, we find that
+\[ x_i = \sum ( q_{\alpha})_{i - \deg h_\alpha} h_\alpha, \]
+where $(q_\alpha)_{i - \deg h_\alpha}$ refers to the homogeneous component of $q_\alpha$
+concentrated in the degree $i - \deg h_\alpha$.
+From this it is easy to see that each $x_i$ is a linear combination of the
+$h_\alpha$ and consequently lies in $I$.
+\end{proof}
+
+\begin{example}
+If $\mathfrak{a}, \mathfrak{b} \subset R$ are homogeneous ideals, then so is
+$\mathfrak{a}\mathfrak{b}$. This is clear from \cref{homgideal}.
+\end{example}
+
+\begin{example} Let $k$ be a field.
+The ideal $(x^2 + y)$ in $k[x,y]$ is \emph{not} homogeneous.
+However, we find from \cref{homgideal} that the ideal $(x^2 + y^2, y^3)$ is.
+\end{example}
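To spell out the first claim: the degree-one homogeneous component of the
generator $x^2 + y$ is $y$, so if the ideal $(x^2+y)$ were homogeneous we would
have $y \in (x^2+y)$, hence also $x^2 = (x^2+y) - y \in (x^2+y)$. But the
substitution $y \mapsto -x^2$ defines a ring map $k[x,y] \to k[x]$ that kills
the ideal $(x^2+y)$ while sending $x^2$ to $x^2 \neq 0$, a contradiction.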
+
+Since we shall need to use them to define $\proj R$ in the future, we now
+prove a result about homogeneous \emph{prime} ideals specifically. Namely,
+``primeness''
+can be checked just on homogeneous elements for a homogeneous ideal.
+\begin{lemma} \label{homogeneousprimeideal}
+Let $\mathfrak{p} \subset R$ be a homogeneous ideal. In order that
+$\mathfrak{p}$ be prime, it is
+necessary and sufficient that whenever $x,y$ are \emph{homogeneous} elements
+such that $xy \in \mathfrak{p}$, then at least one of $x,y \in \mathfrak{p}$.
+\end{lemma}
+\begin{proof}
+Necessity is immediate. For sufficiency, suppose $a,b \in R$ and $ab \in
+\mathfrak{p}$. We must prove that one of these is in $\mathfrak{p}$. Write
\[ a = a_{k_1} + a_{k_1+1} + \dots + a_{k_2}, \ b = b_{m_1} + b_{m_1+1} + \dots + b_{m_2} \]
as decompositions into homogeneous components (i.e.\ $a_i$ is the $i$th
component of $a$), where $a_{k_1}, a_{k_2}, b_{m_1}, b_{m_2}$ are nonzero.

Let $k = k_2 - k_1, m = m_2 - m_1$; these measure how many homogeneous terms
occur in the expressions for $a$ and $b$.
We will prove that one of $a,b \in \mathfrak{p}$ by induction on $k+m$. When
$k+m = 0$, both $a$ and $b$ are homogeneous, and the claim is just the
condition of the lemma.
Suppose it true for smaller values of $k+m$.
Then $ab$ has highest homogeneous component $a_{k_2} b_{m_2}$, which must be in
$\mathfrak{p}$
since $ab \in \mathfrak{p}$ and $\mathfrak{p}$ is homogeneous. Thus one of $a_{k_2}, b_{m_2}$ belongs to $\mathfrak{p}$. Say for
definiteness it is $a_{k_2}$. Then we have that
\[ (a-a_{k_2})b \equiv ab \equiv 0 \ \mathrm{mod} \ \mathfrak{p} \]
so that $(a-a_{k_2})b \in \mathfrak{p}$. If $a = a_{k_2}$, we are done, since
then $a \in \mathfrak{p}$. Otherwise, the pair $a - a_{k_2}, b$ has a smaller
$k+m$-value: $a - a_{k_2}$ has fewer homogeneous components than $a$. By the inductive hypothesis, it follows that one of $a - a_{k_2}, b$ is in
$\mathfrak{p}$, and since $a_{k_2} \in \mathfrak{p}$, we find that one of $a,b \in
\mathfrak{p}$.
+\end{proof}
+
+\subsection{Finiteness conditions}
+There are various finiteness conditions (e.g. noetherianness) that one often wants to impose in
+algebraic geometry.
+Since projective varieties (and schemes) are obtained from graded rings,
+we briefly discuss these finiteness conditions for them.
+
+\begin{definition}
+For a graded ring $R$, write $R_+ = R_1 \oplus R_2 \oplus \dots$. Clearly $R_+
+\subset R$ is a homogeneous ideal. It is called the \textbf{irrelevant ideal.}
+\end{definition}
+
+When we define the $\proj$ of a ring, prime ideals containing the irrelevant ideal
+will be no good. The intuition is that when one is working with
+$\mathbb{P}^n_{\mathbb{C}}$, the irrelevant ideal in the corresponding ring
+$\mathbb{C}[x_0, \dots, x_n]$ corresponds to \emph{all} homogeneous polynomials
+of positive degree. Clearly these have no zeros except for the origin, which is
+not included in projective space: thus the common zero locus of the irrelevant
+ideal should be $\emptyset \subset \mathbb{P}^n_{\mathbb{C}}$.
+
+\begin{proposition} \label{genirrelevant}
+Suppose $R = R_0 \oplus R_1 \oplus \dots$ is a graded ring. Then if a subset
+$S \subset R_+$ generates the irrelevant ideal $R_+$ as $R$-ideal, it generates $R$ as $R_0$-algebra.
+\end{proposition}
+The converse is clear as well.
+Indeed, if $S \subset R_+$ generates $R$ as an $R_0$-algebra, clearly it
+generates $R_+$ as an $R$-ideal.
+\begin{proof}
+Let $T \subset R$ be the $R_0$-algebra generated by $S$. We shall show
+inductively that $R_n \subset T$. This is true for $n=0$. Suppose $n>0$ and the
+assertion true for smaller $n$. Then, we have
\begin{align*}
R_n & = (RS) \cap R_n \ \text{by assumption} \\
& = (R_0 \oplus R_1 \oplus \dots \oplus R_{n-1})(S) \cap R_n \ \text{because $S
\subset R_+$} \\
& \subset (R_0[S]) (S) \cap R_n \ \text{by the inductive hypothesis} \\
& \subset R_0[S] = T. \end{align*}
+\end{proof}
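For a simple illustration of the proposition, take $R = k[x,y]$ with the
standard grading, so $R_0 = k$. The set $S = \left\{x, y\right\} \subset R_+$
generates the irrelevant ideal $R_+ = (x,y)$ as an $R$-ideal, and indeed it
generates $R = k[x,y]$ as a $k$-algebra.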
+\begin{theorem} \label{gradednoetherian}
+The graded ring $R$ is noetherian if and only if $R_0$ is noetherian and $R$ is finitely
+generated as $R_0$-algebra.
+\end{theorem}
+\begin{proof}
One direction is clear by Hilbert's basis theorem. For the other, suppose $R$
is noetherian. Then $R_0$ is noetherian because any chain $I_1 \subset I_2
\subset \dots$ of ideals of $R_0$ leads to a chain of ideals $I_1 R \subset
I_2 R \subset \dots$ of $R$; these stabilize, and since $I_j R \cap R_0 = I_j$
(as $R_0$ is a direct summand of $R$), the original chain $I_1 \subset I_2
\subset \dots$ must stabilize too. (Alternatively, $R_0 = R/R_+$, and taking
quotients preserves noetherianness.)
+Moreover, since $R_+$ is a finitely generated
+$R$-ideal by noetherianness, it follows that $R$ is a finitely generated
+$R_0$-algebra too: we can, by \cref{genirrelevant}, take as $R_0$-algebra
+generators for $R$ a set of generators for the \emph{ideal} $R_+$.
+\end{proof}
+
+The basic finiteness condition one often needs is that $R$ should be finitely generated as an
$R_0$-algebra. Quite frequently, we also want $R$ to be generated over $R_0$ by
$R_1$---in algebraic geometry, this implies a number of useful things about certain sheaves
being invertible. (See \cite{EGA}, volume II.2.)
+As one example, having $R$ generated as $R_0$-algebra by $R_1$ is equivalent to
+having $R$ a \emph{graded} quotient of a polynomial algebra over $R_0$ (with
+the usual grading).
+Geometrically, this equates to having $\proj R$ contained as a closed subset of
+some projective space over $R_0$.
+
+However, sometimes we have the first condition and not the second, though if
+we massage things we can often assure generation by $R_1$. Then the
+next idea comes in handy.
+
+\begin{definition}
+\label{dpowerofring}
+Let $R$ be a graded ring and $d \in \mathbb{N}$. We set $R^{(d)} = \bigoplus_{k
+\in \mathbb{Z}_{\geq 0}} R_{kd}$; this is a graded ring and $R_0$-algebra. If $M$ is a graded $R$-module and $l \in
+\left\{0, 1, \dots, d-1\right\}$, we write $M^{(d,l)} = \bigoplus_{k \equiv l
+ \ \mathrm{mod} \ d} M_k$. Then $M^{(d,l)}$ is a graded $R^{(d)}$-module.
+\end{definition}
+
+We in fact have a functor $\cdot^{(d,l)}$ from graded $R$-modules to graded
+$R^{(d)}$-modules.
+
+
+One of the implications of the next few results is that, by replacing $R$ with
+$R^{(d)}$, we can make the condition ``generated by terms of degree 1'' happen.
+But first, we show that basic finiteness is preserved if we filter out some of
+the terms.
+
+\begin{proposition} \label{duple preserves finiteness}
+Let $R$ be a graded ring and a finitely generated $R_0$-algebra. Let $M$ be a
+finitely generated $R$-module.
+\begin{enumerate}
+\item Each $M_i$ is finitely generated over $R_0$, and the $M_i$ become zero
+when $i \ll
+0$.
+\item $M^{(d,l)}$ is a finitely generated $R^{(d)}$ module for each $d,l$. In
+particular, $M$ itself is a finitely generated $R^{(d)}$-module.
+\item $R^{(d)}$ is a finitely generated $R_0$-algebra.
+\end{enumerate}
+\end{proposition}
+\begin{proof}
+Choose homogeneous generators $m_1, \dots, m_k \in M$.
+For instance, we can choose the homogeneous components of a finite set of
+generators for $M$.
Then every nonzero
homogeneous element of $M$ has degree at least $\min_i(\deg m_i)$, so $M_j = 0$ for $j < \min_i (\deg m_i)$. This proves the
+last part of (1). Moreover, let $r_1, \dots, r_p$ be algebra generators of $R$ over
+$R_0$.
+We can assume that these are homogeneous with positive degrees $d_1, \dots,
+d_p>0$.
+Then the $R_0$-module $M_i$ is generated by the elements
+\[ r_1^{a_1} \dots r_p^{a_p} m_s \]
+where $\sum a_j d_j + \deg m_s = i$. Since the $d_j>0$ and there are only
+finitely many $m_s$'s, there are only finitely many such elements. This proves
+the rest of (1).
+
+To prove (2), note first that it is sufficient to show that $M$ is finitely
generated over $R^{(d)}$, because the $M^{(d,l)}$ are $R^{(d)}$-homomorphic
images of $M$ (i.e.\ quotients by $\bigoplus_{l' \neq l} M^{(d, l')}$).
+Now $M$ is generated as $R_0$-module by the $r_1^{a_1} \dots r_p^{a_p} m_s $
+for $a_1, \dots, a_p \geq 0$ and $s = 1, \dots, k$.
In particular, by division with remainder on the exponents, it
follows that the
+$r_1^{a_1} \dots r_p^{a_p} m_s $
+for $a_1, \dots, a_p \in [0, d-1]$ and $s = 1, \dots, k$ generate $M$ over
+$R^{(d)}$, as each power $r_i^{d} \in R^{(d)}$.
+In particular, $R$ is finitely generated over $R^{(d)}$.
+
+When we apply (2) to the finitely generated $R$-module $R_+$, it follows that
+$R^{(d)}_+$ is a finitely generated
+$R^{(d)}$-module. This implies that $R^{(d)}$ is a finitely generated
+$R_0$-algebra by \cref{genirrelevant}.
+\end{proof}
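As an illustration of the division-with-remainder step, take $R = k[x]$ with
the standard grading, $M = R$, and $d = 2$. Then $R^{(2)} = k[x^2]$, and $R$ is
generated as a $k[x^2]$-module by $1$ and $x$: any monomial factors as
\[ x^a = (x^2)^{\lfloor a/2 \rfloor} x^{a \bmod 2}. \]
Similarly $R^{(2,1)} = x \cdot k[x^2]$, the odd-degree part, is generated by
$x$ over $R^{(2)}$.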
+
+In particular, by \cref{finitelygeneratedintegral} (later in the book!) $R$ is \emph{integral} over
+$R^{(d)}$: this means that each element of $R$ satisfies a monic polynomial
equation with $R^{(d)}$-coefficients. For \emph{homogeneous} elements this is
clear directly: the $d$th power of a homogeneous element lies in $R^{(d)}$.
+\begin{remark}
+Part (3), the preservation of the basic finiteness condition, could also be
+proved as follows, at least in the noetherian case (with $S = R^{(d)}$).
+We shall assume familiarity with the material in \cref{intchapter} for this
+brief digression.
+\begin{lemma} \label{descintegrality}
+Suppose $R_0 \subset S \subset R$ is an inclusion of rings with $R_0$ noetherian.
+Suppose $R$ is a
+finitely generated $R_0$-algebra and $R/S$ is an integral extension. Then $S$
+is a finitely generated $R_0$-algebra.
+\end{lemma}
+In the case of interest,
+we can take $S = R^{(d)}$.
+The point of the lemma is that finite generation can be deduced for
+\emph{subrings} under nice conditions.
+\begin{proof}
+We shall start by finding a subalgebra $S' \subset S$ such that $R$ is
+integral over $S'$, but $S'$ is a finitely generated $R_0$-algebra. The
+procedure will be a general observation of the flavor of ``noetherian descent''
+to be developed in \cref{noethdescent}.
+Then, since $R$ is integral over $S'$ and finitely generated as an
\emph{algebra}, it will be finitely generated as an $S'$-module. $S$, which
is a sub-$S'$-module, will equally be finitely generated as an $S'$-module,
+hence as an $R_0$-algebra. So the point is to make $S$ finitely generated as a
+module over a ``good'' ring.
+
+
+Indeed, let $r_1, \dots, r_m$ be generators of $R/R_0$. Each satisfies an
+integral equation $r_k^{n_k} + P_k(r_k) = 0$, where $P_k \in S[X]$ has degree
+less than $n_k$. Let $S' \subset S \subset R$ be the subring generated over $R_0$ by the
+coefficients of all these polynomials $P_k$.
+
+Then $R$ is, by definition, integral over $S'$.
+Since $R$ is a finitely generated $S'$-algebra, it follows by
+\cref{finitelygeneratedintegral} that it is a finitely generated $S'$-module.
Then $S$, as an $S'$-submodule, is a finitely generated $S'$-module, since
$S'$ is noetherian by the Hilbert basis theorem.
+Therefore, $S$ is a finitely generated
+$R_0$-algebra.
+\end{proof}
+This result implies, incidentally, the following useful corollary:
+
+\begin{corollary} Let $R$ be a noetherian ring. If a finite group
$G$ acts by $R$-algebra automorphisms on a finitely generated $R$-algebra $S$, the ring of invariants
+$S^G$ is finitely generated.
+\end{corollary}
+\begin{proof}
+Apply \cref{descintegrality} to $R, S^G, S$. One needs to check that $S$ is
integral over $S^G$. But each $s \in S$ is a root of the monic polynomial
\[ \prod_{\sigma \in G} (X - \sigma(s)), \]
whose coefficients are symmetric in the $\sigma(s)$ and hence lie in $S^G$.
+\end{proof}
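As a classical instance (not treated in the text), let $G = \mathbb{Z}/2$ act
on $S = k[x,y]$, $k$ a field, by exchanging $x$ and $y$. Then $S^G = k[x+y,
xy]$ by the fundamental theorem of symmetric polynomials, which is indeed a
finitely generated $k$-algebra; the integral equation of the proof is here
\[ (X - s)(X - \sigma(s)) = X^2 - (s + \sigma(s))X + s\sigma(s), \]
whose coefficients are visibly $G$-invariant.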
+This ends the digression.
+\end{remark}
+
+
+We next return to our main goals, and let $R$ be a graded ring, finitely
+generated as an $R_0$-algebra, as before; let $M$ be a finitely
+generated $R$-module. We show that we can have $R^{(d)}$ generated by terms of degree $d$ (i.e.
+``degree 1'' if we rescale) for $d$ chosen large.
+\begin{lemma} \label{quickfinitenesslem}
+Hypotheses as above, there is a pair $(d, n_0)$ such that
+\[ R_d M_n = M_{n+d} \]
+for $n \geq n_0$.
+\end{lemma}
+\begin{proof}
+Indeed, select $R$-module generators $m_1, \dots, m_k \in M$ and
+$R_0$-algebra generators $r_1, \dots, r_p \in R$
+as in the proof of \cref{duple preserves finiteness}; use the same
+notation for their degrees, i.e. $d_j = \deg r_j$.
+Let $d $ be the least common multiple of the $d_j$. Consider the family of
+elements
+\[ s_i = r_i^{d/d_i} \in R_d. \]
Then suppose $m \in M_n$ for $n > pd + \sup_s \deg m_s$. We have that $m$ is a sum
of products of powers of the $\{r_j\}$ and the $\{m_s\}$, each term of which we
can assume is of degree $n$. In each such term $r_1^{a_1} \dots r_p^{a_p} m_s$,
at least one of the $r_j$ must occur to power $a_j \geq \frac{d}{d_j}$:
otherwise $a_j d_j \leq d - d_j$ for every $j$ (each $d_j$ divides $d$), and
the term would have degree less than $pd + \sup_s \deg m_s < n$, a
contradiction. We can thus write each term in the sum as some $s_j$ times
something in $M_{n-d}$.

In particular,
\( M_n = R_d M_{n-d} \)
for all such $n$, the reverse inclusion being automatic; so the pair $(d, n_0)$
with $n_0 = pd + \sup_s \deg m_s$ works.
+\end{proof}
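To see the lemma (and the necessity of some bound $n_0$) in a concrete case,
take $M = R = k[x,y]$ with $\deg x = 2$, $\deg y = 3$. Here $d =
\operatorname{lcm}(2,3) = 6$ and $s_1 = x^3, s_2 = y^2 \in R_6$. Any monomial
$x^a y^b$ of degree $2a + 3b \geq 8$ has $a \geq 3$ or $b \geq 2$ (otherwise
its degree is at most $2 \cdot 2 + 3 \cdot 1 = 7$), so it is $s_1$ or $s_2$
times a monomial of degree six less; thus $R_6 R_n = R_{n+6}$ for all $n \geq
2$. On the other hand, $R_6 R_1 = 0$ while $R_7 = k\, x^2 y \neq 0$, so the
conclusion genuinely fails for small $n$.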
+
+\begin{proposition} \label{auxfinitenessgraded}
+Suppose $R$ is a graded ring and finitely generated $R_0$-algebra. Then there
+is $d \in \mathbb{N}$ such that $R^{(d)}$ is generated over $R_0$ by $R_d$.
+\end{proposition}
+What this proposition states geometrically is that if we apply the
+functor $R \mapsto R^{(d)}$ for large $d$ (which, geometrically, is actually
+harmless), one can arrange things so that $\proj R$ (not defined yet!) is
+contained as a closed subscheme of ordinary projective space.
+
+\begin{proof} Consider $R$ as a finitely generated, graded $R$-module.
Suppose $d'$ is as in \cref{quickfinitenesslem} (replacing $d$, which we reserve for
+something else), and choose $n_0$ accordingly.
+So we have $R_{d'} R_{m} = R_{m + d'}$ whenever $m \geq n_0$.
+Let $d$ be a multiple of $d'$
+which is greater than $n_0$.
+
Then, iterating the relation $R_{d'} R_m = R_{m+d'}$ (each intermediate degree being at least $n_0$), we have $R_d R_n = R_{d+n}$ if $n \geq d$, since $d$ is a multiple of $d'$.
+In particular, it follows that $R_{nd} = (R_d)^n$ for each $n \in \mathbb{N}$,
+which implies the statement of the proposition.
+\end{proof}
+
+As we will see below, taking $R^{(d)}$ does not affect the $\proj$, so this is
+extremely useful.
+
+\begin{example} Let $k$ be a field. Then
+$R = k[x^2] \subset k[x]$ (with the grading induced from $k[x]$) is a finitely generated graded $k$-algebra,
which is not generated by its elements of degree one (there are none!).
However, $R^{(2)} = k[x^2]$, regraded so that $x^2$ has degree one, is generated over $k$ by $x^2$.
+\end{example}
+
+
+We next show that taking the $R^{(d)}$ \emph{always} preserves noetherianness.
+
+\begin{proposition} \label{filtnoetherian}
+If $R$ is noetherian, then so is $R^{(d)}$ for any $d>0$.
+\end{proposition}
+\begin{proof}
+If $R$ is noetherian, then $R_0$ is noetherian and $R$ is a finitely generated
+$R_0$-algebra by \cref{gradednoetherian}. \cref{duple preserves
+finiteness} now implies that $R^{(d)} $ is also a
+finitely generated $R_0$-algebra, so it is noetherian.
+\end{proof}
+
+The converse is also true, since $R$ is a finitely generated $R^{(d)}$-module.
+
+
+
+
+\subsection{Localization of graded rings}
+Next, we include a few topics that we shall invoke later on.
+First, we discuss the interaction of homogeneity and localization.
+Under favorable circumstances, we can give $\mathbb{Z}$-gradings to localizations of
+graded rings.
+
+\begin{definition}
+If $S \subset R$ is a multiplicative subset of a graded (or
+$\mathbb{Z}$-graded) ring $R$ consisting of homogeneous elements, then $S^{-1}
+R$ is a $\mathbb{Z}$-graded ring: we let the homogeneous elements of
+degree $n$ be of the form $r/s$ where $r \in R_{n + \deg s}$. We write $R_{(S)}$ for the subring of
+elements of degree zero; there is thus a map $R_0 \to R_{(S)}$.
+
+If $S$ consists of the powers of a homogeneous element $f$, we write $R_{(f)}$
for $R_{(S)}$. If $\mathfrak{p}$ is a homogeneous ideal and $S$ the set of
+homogeneous elements of $R$ not in $\mathfrak{p}$, we write
+$R_{(\mathfrak{p})}$ for $R_{(S)}$.
+\end{definition}
+Of course, $R_{(S)}$ has a trivial grading, and is best thought of as a
+plain, unadorned ring.
+We shall show that $R_{(f)}$ is a special case of something familiar.
+
+\begin{proposition} \label{loc interpret as quotient ring}
+Suppose $f$ is of degree $d$. Then, as plain rings, there is a
+canonical isomorphism $R_{(f)} \simeq R^{(d)}/(f-1)$.
+\end{proposition}
+\begin{proof}
+The homomorphism $R^{(d)} \to R_{(f)}$ is defined to map $g \in R_{kd}$ to
$g/f^k \in
+R_{(f)}$. This is then extended by additivity to non-homogeneous elements. It
+is clear that this is multiplicative, and that the ideal $(f-1)$ is annihilated
+by the homomorphism.
+Moreover, this is surjective.
+
We shall now define an inverse map. Let $x/f^n \in R_{(f)}$; then $x$ must be
a homogeneous element of degree $nd$, so that $x \in R^{(d)}$. We map this to
the residue class of $x$ in $R^{(d)}/(f-1)$. This is well-defined; if $x/f^n =
+y/f^m$, then there is $N$ with
+\[ f^N( xf^m - yf^n) = 0, \]
+so upon reduction (note that $f$ gets reduced to $1$!), we find that the
+residue classes of $x,y$ are the same, so the images are the same.
+
+Clearly this defines an inverse to our map.
+\end{proof}
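To make the isomorphism concrete, take $R = k[x,y]$ with the standard grading
and $f = x$, so $d = 1$ and $R^{(1)} = R$. A degree-zero fraction $g/x^n$, with
$g$ homogeneous of degree $n$, equals $g(1, y/x)$, so $R_{(x)} = k[y/x]$, a
polynomial ring in one variable. On the other side, $R/(x-1) \simeq k[y]$ via
$g(x,y) \mapsto g(1,y)$, and the isomorphism of the proposition matches $y/x$
with the residue class of $y$.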
+
+\begin{corollary}
+Suppose $R$ is a graded noetherian ring. Then each of the $R_{(f)}$ is
+noetherian.
+\end{corollary}
+\begin{proof}
+This follows from the previous result and the fact that $R^{(d)}$ is noetherian
+(\rref{filtnoetherian}).\end{proof}
+
+More generally, we can define the localization procedure for graded modules.
+\begin{definition}
+Let $M$ be a graded $R$-module and $S \subset R$ a multiplicative subset
+consisting of homogeneous elements. Then we define $M_{(S)}$ as the submodule
+of the graded module $S^{-1}M$ consisting of elements of degree zero. When $S$
+consists of the powers of a homogeneous element $f \in R$, we write $M_{(f)}$
+instead of $M_{(S)}$. We similarly define $M_{(\mathfrak{p})}$ for a
+homogeneous prime ideal $\mathfrak{p}$.
+\end{definition}
+
+Then clearly $M_{(S)}$ is a $R_{(S)}$-module. This is evidently a functor from
+graded $R$-modules to $R_{(S)}$-modules.
+
+We next observe that there is a generalization of \rref{loc interpret as
+quotient ring}.
+\begin{proposition} \label{loc
+module as quotient}
+Suppose $M$ is a graded $R$-module, $f \in R$ homogeneous of degree $d$. Then
+there is an isomorphism
+\[ M_{(f)} \simeq M^{(d)}/(f-1)M^{(d)} \]
+of $R^{(d)}$-modules.
+\end{proposition}
+\begin{proof}
+This is proved in the same way as \rref{loc interpret as quotient
+ring}. Alternatively, both are right-exact functors that commute with
+arbitrary direct sums and coincide on $R$, so must be naturally isomorphic by
+a well-known bit of abstract nonsense.\footnote{Citation needed.}
+\end{proof}
+
+In particular:
+\begin{corollary}
+Suppose $M$ is a graded $R$-module, $f \in R$ homogeneous of degree 1. Then we
+have
+\[ M_{(f)} \simeq M/(f-1)M \simeq M\otimes_R R/(f-1). \]
+\end{corollary}
+
+\subsection{The $\proj$ of a ring}
+Let $R=R_0 \oplus R_1 \oplus \dots$ be a \textbf{graded ring}.
+
+\begin{definition}
+Let $\proj R$ denote the set of \emph{homogeneous prime ideals} of
+$R$ that do not contain the \textbf{irrelevant ideal} $R_+$.\footnote{Recall
+that an ideal $\mathfrak{a} \subset R$ for $R$ graded is
+\emph{homogeneous} if the homogeneous components of $\mathfrak{a}$ belong to
+$\mathfrak{a}$.}
+
+\end{definition}
+
We can put a topology on $\proj R$ by setting, for a homogeneous ideal
$\mathfrak{b}$,
\[ V(\mathfrak{b}) = \{ \mathfrak{p} \in \proj R:
\mathfrak{p} \supset \mathfrak{b}\}. \]
These sets satisfy
+\begin{enumerate}
+\item $V( \sum \mathfrak{b_i}) = \bigcap V(\mathfrak{b_i})$.
+\item $V( \mathfrak{a}\mathfrak{b}) = V(\mathfrak{a}) \cup V(\mathfrak{b})$.
+\item $V( \rad \mathfrak{a}) = V(\mathfrak{a})$.
+\end{enumerate}
Note incidentally that we would not get any more closed sets if we allowed all
ideals $\mathfrak{b}$, since to any $\mathfrak{b}$ we can consider its
``homogenization,'' which cuts out the same subset of $\proj R$.
We could even allow arbitrary subsets of $R$, since $V$ of a subset equals $V$
of the ideal it generates.
+
+In particular, the $V$'s do in fact yield a topology on $\proj R$ (setting
+the open sets to be complements of the $V$'s).
+As with the affine case, we can define basic open sets. For $f$
+homogeneous of positive degree, define $D'(f)$ to be the
+collection of homogeneous ideals (not containing $R_+$) that do not contain $f$;
+clearly these are
+open sets.
+
+Let $\mathfrak{a}$ be a homogeneous ideal. Then we claim that:
+\begin{lemma}
+\( V(\mathfrak{a}) = V(\mathfrak{a} \cap R_+). \)
+\end{lemma}
+\begin{proof}
Indeed, suppose $\mathfrak{p}$ is a homogeneous prime not containing $R_+$ such
+that all homogeneous
+elements of positive degree in $\mathfrak{a}$ (i.e., anything in $\mathfrak{a}
+\cap R_+$) belongs to $\mathfrak{p}$. We will
+show that $\mathfrak{a} \subset \mathfrak{p}$.
+
Choose $a \in \mathfrak{a} \cap R_0$. Since $\mathfrak{a}$ is homogeneous and
every positive-degree component of an element of $\mathfrak{a}$ lies in
$\mathfrak{a} \cap R_+ \subset \mathfrak{p}$, it is sufficient to show that
any such $a$ belongs to $\mathfrak{p}$.
Let $f$ be a homogeneous element of positive degree that is not in
$\mathfrak{p}$ (one exists, as $\mathfrak{p} \not\supset R_+$). Then $af \in \mathfrak{a} \cap R_+$, so $af \in \mathfrak{p}$.
But $f \notin \mathfrak{p}$, so $a \in \mathfrak{p}$.
+\end{proof}
+
+Thus, when constructing these closed sets $V(\mathfrak{a})$, it suffices to
work with ideals contained in the irrelevant ideal. In fact, we could take
$\mathfrak{a}$ contained in any prescribed power of the irrelevant ideal, since taking
radicals does not affect $V$.
+
+\begin{proposition}
+We have $D'(f) \cap D'(g) = D'(fg)$. Also, the $D'(f)$ form a basis for the
+topology on $\proj R$.
+\end{proposition}
+\begin{proof} The first part is evident, by the definition of a prime ideal. We
+prove the second.
+Note that $V(\mathfrak{a})$ is the intersection of the $V((f))$ for the
+homogeneous $f \in
+\mathfrak{a} \cap R_+$. Thus $\proj R - V(\mathfrak{a})$ is the union of these
+$D'(f)$.
+So every open set is a union of sets of the form $D'(f)$.
+\end{proof}
+
+We shall now
+show that the topology is actually rather familiar from the affine case, which
+is not surprising, since the definition is similar.
+
+\begin{proposition}
+$D'(f)$ is homeomorphic to $\spec R_{(f)}$ under the map
\[ \mathfrak{p} \mapsto \mathfrak{p} R_f \cap R_{(f)} \]
+sending homogeneous prime ideals of $R$ not containing $f$ into primes of
+$R_{(f)}$.
+\end{proposition}
+\begin{proof}
+Indeed, let $\mathfrak{p}$ be a homogeneous prime ideal of $R$ not containing
+$f$. Consider $\phi(\mathfrak{p}) = \mathfrak{p} R_f \cap R_{(f)} $ as above.
+This is a prime ideal, since $\mathfrak{p} R_f$ is a prime ideal in $R_f$ by
+basic properties of localization, and $R_{(f)} \subset R_f$ is a subring. (It
+cannot contain the identity, because that would imply that a power of $f$ lay
+in $\mathfrak{p}$.)
+
+So we have defined a map $\phi: D'(f) \to \spec R_{(f)}$. We can define its
+inverse $\psi$ as follows. Given $\mathfrak{q} \subset R_{(f)} $ prime, we
+define a
+prime ideal $\mathfrak{p} = \psi(\mathfrak{q})$ of $R$ by saying that a
+\textit{homogeneous} element $x \in
+R$ belongs to $\mathfrak{p}$ if and only if $x^{\deg f}/f^{\deg x} \in
+\mathfrak{q}$. It is easy to see that this is indeed an ideal, and that it is
+prime by \rref{homogeneousprimeideal}.
+
+Furthermore, it is clear that $\phi \circ \psi $ and $\psi \circ \phi$ are the
+identity.
+This is because $x \in \mathfrak{p}$ for $\mathfrak{p} \in D'(f)$ if and only
+if $f^n x \in \mathfrak{p}$ for some $n$.
+
+We next need to check that these are continuous, hence homeomorphisms. If
+$\mathfrak{a} \subset R$ is a homogeneous ideal, then $V(\mathfrak{a}) \cap
+D'(f)$ is
+mapped to $V(\mathfrak{a}R_f \cap R_{(f)}) \subset \spec R_{(f)}$, and vice
+versa.
+\end{proof}
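For the motivating example $R = k[x_0, \dots, x_n]$ with the standard grading,
this recovers the usual affine charts of projective space: one checks that
\[ R_{(x_i)} = k[x_0/x_i, \dots, x_n/x_i], \]
a polynomial ring in the $n$ variables $x_j/x_i$ for $j \neq i$, so that
$D'(x_i) \simeq \spec R_{(x_i)}$ is an affine $n$-space. Moreover, the
$D'(x_i)$ cover $\proj R$, since a homogeneous prime not containing the
irrelevant ideal must omit some $x_i$.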
+
+\section{Filtered rings}
+
+In practice, one often has something weaker than a grading. Instead of a way
+of saying that an element is of degree $d$, one simply has a way of saying
+that an element is ``of degree at most $d$.'' This leads to the definition of a
+\emph{filtered} ring (and a filtered module). We shall use this definition in
+placing topologies on rings and modules and, later, completing them.
+
+\subsection{Definition}
+
+\begin{definition}
+A \textbf{filtration} on a ring $R$ is a sequence of ideals $R = I_0
+\supset I_1 \supset \dots$ such that $I_m I_n \subset I_{m + n}$ for each
+$m, n \in \mathbb{Z}_{ \geq 0}$. A ring with a filtration is called a
+\textbf{filtered ring}.
+\end{definition}
+
+A filtered ring is supposed to be a generalization of a graded ring. If $R =
+\bigoplus R_k$ is graded, then we can make $R$ into a filtered ring in a
+canonical way by taking the ideal $I_m = \bigoplus_{k \geq m} R_k$ (notice
+that we are using the fact that $R$ has only pieces in nonnegative gradings!).
+
+We can make filtered rings into a category: a morphism of filtered rings $\phi:
+R \to S$ is a ring-homomorphism preserving the filtration.
+
+
+\begin{example}[The $I$-adic filtration]
+Given an ideal $I \subset R$, we
+can take powers of $I$ to generate a filtration. This filtration $R \supset I
+\supset I^2 \supset \dots$ is called the \textbf{$I$-adic filtration,} and is
+especially important when $R$ is local and $I$ the maximal ideal.
+
+If one chooses the polynomial ring $k[x_1, \dots, x_n]$ over a field with $n$
+variables and takes the $(x_1, \dots, x_n)$-adic filtration, one gets the same
+as the filtration induced by the usual grading.
+\end{example}
+
+\begin{example}
+As a specialization of the previous example, consider the power series ring
+$R=k[[x]]$ over a field $k$ with one indeterminate $x$. This is a local ring
(with maximal ideal $(x)$), and it has a filtration with $I_i = (x^i)$.
+Note that this ring, unlike the polynomial ring, is \emph{not} a graded ring in
+any obvious way.
+\end{example}
+
+
+
+
+When we defined graded rings, the first thing we did thereafter was to define
+the notion of a graded module over a graded ring. We do the analogous thing
+for filtered modules.
+
+\begin{definition}
+Let $R$ be a filtered ring with a filtration $I_0 \supset I_1 \supset \dots$.
+A \textbf{filtration} on an $R$-module $M$ is a decreasing sequence of submodules
+\[ M = M_0 \supset M_1 \supset M_2 \supset \dots \]
+such that $I_m M_n \subset M_{n+m}$ for each $m, n$. A module together with a
+filtration is called a \textbf{filtered module.}
+\end{definition}
+
+
+As usual, there is a category of filtered modules over a fixed filtered ring
+$R$, with morphisms the module-homomorphisms that preserve the filtrations.
+
+\begin{example}[The $I$-adic filtration for modules]
+Let $R$ be any ring and $I \subset R$ any ideal. Then if we make $R$ into a
+filtered ring with the $I$-adic filtration, we can make any $R$-module $M$
+into a filtered $R$-module by giving $M$ the filtration
+\[ M \supset IM \supset I^2M \supset \dots, \]
+which is also called the \textbf{$I$-adic filtration.}
+\end{example}
+
+
+
+
+
+\subsection{The associated graded}
+
+We shall now describe a construction that produces graded things from filtered
+ones.
+
+\begin{definition} Given a filtered ring $R$ (with filtration
+$\left\{I_n\right\}$), the
+\textbf{associated graded ring} $\gr(R)$ is the graded ring
\[ \gr(R) = \bigoplus_{n=0}^\infty I_n /I_{n+1}. \]
+
+This is made into a ring by the following procedure. Given $a \in I_n$
+representing a class $\overline{a} \in I_n/I_{n+1}$ and $b \in I_m$
+representing a class $\overline{b} \in I_m/I_{m+1}$, we define
+$\overline{a}\overline{b} $ to be the class in $I_{n+m}/I_{n+m+1}$ represented
+by $ab$.
+\end{definition}
+
+It is easy to check that if different choices of representing elements $a,b$ were made in the above
+description, the value of $\overline{a}\overline{b}$ thus defined would still
+be the same, so that the definition is reasonable.
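Spelling out the check: if $a' = a + u$ and $b' = b + v$ with $u \in I_{n+1}$
and $v \in I_{m+1}$, then
\[ a'b' - ab = ub + av + uv \in I_{n+1}I_m + I_n I_{m+1} + I_{n+1}I_{m+1}
\subset I_{n+m+1}, \]
so $a'b'$ and $ab$ represent the same class in $I_{n+m}/I_{n+m+1}$.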
+
+\begin{example}
Consider $R = \mathbb{Z}_{(p)}$ (the localization at $(p)$) with the $(p)$-adic
filtration. Then $\gr(R) = \mathbb{Z}/p[t]$, as a graded ring.
+For the successive quotients of ideals are of the form $\mathbb{Z}/p$, and it
+is easy to check that multiplication lines up in the appropriate form.
+\end{example}
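In detail: here $I_n = (p^n)$, and $I_n/I_{n+1}$ is generated by the class of
$p^n$, which is killed by $p$ (as $p \cdot p^n \in I_{n+1}$), so $I_n/I_{n+1}
\simeq \mathbb{Z}/p$. The isomorphism with $\mathbb{Z}/p[t]$ sends the class of
$p^n$ in degree $n$ to $t^n$, and multiplication of classes
$\overline{p^n} \cdot \overline{p^m} = \overline{p^{n+m}}$ corresponds to
$t^n t^m = t^{n+m}$.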
+
+In general, as we will see below, when one takes the $\gr$ of a noetherian ring
+with the $I$-adic topology for some ideal $I$, one always gets a noetherian
+ring.
+
+
+
+\begin{definition}
+Let $R$ be a filtered ring, and $M$ a filtered $R$-module (with filtration
+$\left\{M_n\right\}$). We define the \textbf{associated graded module}
+$\gr(M)$ as the graded $\gr(R)$-module
+\[ \gr(M) = \bigoplus_{n} M_n/M_{n+1} \]
+where multiplication by an element of $\gr(R)$ is defined in a similar manner as above.
+\end{definition}
+
+In other words, we have defined a \emph{functor} $\gr$ from the category of filtered
+$R$-modules to the category of \emph{graded} $\gr(R)$ modules.
+
+
+Let $R$ be a filtered ring, and $M$ a finitely generated filtered $R$-module.
+In general, $\gr(M)$ \emph{cannot} be expected to be a finitely generated
+$\gr(R)$-module.
+\begin{example}
+Consider the ring $\mathbb{Z}_{(p)}$ (the localization of
+$\mathbb{Z}$ at $p$), which we endow with the $p^2$-adic (i.e., $(p^2)$-adic)
+filtration.
+The associated graded is $\mathbb{Z}/p^2[t]$.
+
Consider $M=\mathbb{Z}_{(p)}$ with the filtration $M_m = (p^{m})$, i.e.\ the
usual $(p)$-adic filtration. The claim is that $\gr(M)$ is
\emph{not} a finitely generated $\mathbb{Z}/p^2[t]$-module. We can see
this directly: multiplication by $t$ acts by
+zero on $\gr(M)$ (because this corresponds to multiplying by $p^2$ and shifting
+the degree by one).
+However, $\gr(M)$ is nonzero in every degree. If $\gr(M)$ were finitely
+generated, it would be a finitely generated $\mathbb{Z}/p^2 \mathbb{Z}$-module,
+which it is not.
+\end{example}
+
+\subsection{Topologies}
+
+We shall now see that filtered rings and modules come naturally with
+\emph{topologies} on them.
+
+\begin{definition}
+A \textbf{topological ring} is a ring $R$ together with a topology such that
+the natural maps
+\begin{gather*} R \times R \to R, \quad (x,y) \mapsto x+y \\
+R \times R \to R, \quad (x,y) \mapsto xy \\
+R \to R, \quad x \mapsto -x
+\end{gather*}
+are continuous (where $R \times R$ has the product topology).
+\end{definition}
+
+
+\add{discussion of algebraic objects in categories}
+
+
In practice, the topological rings that we will be interested in will exclusively
be \emph{linearly} topologized rings.
+
+\begin{definition}
+A topological ring is \textbf{linearly topologized} if there is a neighborhood
+basis at $0$ consisting of open ideals.
+\end{definition}
+
+Given a filtered ring $R$ with a filtration of ideals $\left\{I_n\right\}$, we
+can naturally linearly topologize $R$. Namely, we take as a basis the cosets
+$x+I_n$ for $x \in R, n \in \mathbb{Z}_{\geq 0}$.
+It is then clear that the $\left\{I_n\right\}$ form a neighborhood basis at
+the origin (because any neighborhood $x+I_n$ containing $0$ must just be
+$I_n$!).
+
+\begin{example}
+For instance, given any ring $R$ and any ideal $I \subset R$, we can consider
+the \textbf{$I$-adic topology} on $R$. Here an element is ``small'' (i.e.,
+close to zero) if it lies in a high power of $I$.
+\end{example}
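To make this concrete, take $R = \mathbb{Z}$ and $I = (5)$: the integers $2$
and $627$ are close in the $(5)$-adic topology, since $627 - 2 = 625 = 5^4 \in
I^4$, whereas $2$ and $3$ are far apart, as $3 - 2 = 1$ lies in no positive
power of $I$.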
+
+
+\begin{proposition}
+A topology on $R$ defined by the filtration $\left\{I_n\right\}$ is Hausdorff
+if and only if $\bigcap I_n = 0$.
+\end{proposition}
+\begin{proof}
+Indeed, to say that $R$ is Hausdorff is to say that any two distinct elements
+$x,y \in R$ can be separated by disjoint neighborhoods. If $\bigcap I_n = 0$,
+we can find $N$ large such that $x -y \notin I_N$. Then $x+I_N, y + I_N$ are
+disjoint neighborhoods of $x,y$.
+The converse is similar: if $\bigcap I_n \neq 0$, then no neighborhoods can
+separate a nonzero element in $\bigcap I_n$ from $0$.
+\end{proof}
+
+Similarly, if $M$ is a filtered $R$-module with a filtration
+$\left\{M_n\right\}$, we can topologize $M$ by choosing the
+$\left\{M_n\right\}$ to be a neighborhood basis at the origin. Then $M$
+becomes a \emph{topological group,} that is a group with a topology such that
+the group operations are continuous.
+In the same way, we find:
+
+\begin{proposition}
+The topology on $M$ is Hausdorff if and only if $\bigcap M_n = 0$.
+\end{proposition}
+
+Moreover, because of the requirement that $R_m M_{n} \subset M_{n+m}$, it is
+easy to see that the map
+\[ R \times M \to M \]
+is itself continuous. Thus, $M$ is a \emph{topological} module.
+
+Here is another example. Suppose $M$ is a linearly topologized module with a
+basis of submodules $\left\{M_\alpha\right\}$ at the origin. Then any
+submodule $N \subset M$ becomes a linearly topologized module with a basis of
+submodules $\{N \cap M_\alpha\}$ at the origin with the relative topology.
+
+
+\begin{proposition}
+Suppose $M$ is filtered with the $\left\{M_n\right\}$. If $N \subset M$ is any
submodule, then the closure $\overline{N}$ is the intersection $\bigcap (N + M_n)$.
+\end{proposition}
+\begin{proof}
Recall that $x \in \overline{N}$ is the same as stipulating that every
neighborhood of $x$ intersects $N$. In other words, any basic neighborhood of
$x$ has to intersect $N$. This means that for each $n$, $(x+M_n) \cap N \neq
\emptyset$, or in other words $x \in M_n + N$.
+\end{proof}
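In particular, taking $N = 0$, the closure of the origin is $\bigcap M_n$; so
the origin (and hence every point) is closed precisely when $\bigcap M_n = 0$,
consistent with the Hausdorff criterion above.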
+
+\section{The Artin-Rees Lemma}
+
+We shall now show that for \emph{noetherian} rings and modules, the $I$-adic
+topology is stable under passing to submodules; this useful result, the
+Artin-Rees lemma, will become indispensable in our analysis of dimension
+theory in the future.
+
+More precisely, consider the following problem. Let $R$ be a ring and $I
+\subset R$ an ideal. Then for any $R$-module $M$, we can endow $M$ with the
+$I$-adic filtration $\left\{I^n M\right\}$, which defines a topology on $M$.
+If $N \subset M$ is a submodule, then
+$N$ inherits the subspace topology from $M$ (i.e. that defined by the filtration
+$\left\{I^n M \cap N\right\}$). But $N$ can also be topologized by simply
+taking the $I$-adic topology on it. The Artin-Rees lemma states that these two
+approaches give the same result.
+
+\subsection{The Artin-Rees Lemma}
+
+\begin{theorem}[Artin-Rees lemma]
+\label{artinrees}
+Let $R$ be noetherian, $I \subset R$ an
+ideal. Suppose $M$ is a finitely generated $R$-module and $M' \subset M$ a
+submodule. Then the $I$-adic topology on $M$ induces the $I$-adic topology on $M'$.
+More precisely,
+there is a constant $c$ such that
+\[ I^{n+c} M \cap M' \subset I^n M'. \]
+So the two filtrations $\{I^n M \cap M'\}, \{I^n M'\}$ on $M'$ are equivalent up to a
+shift.
+\end{theorem}
+\begin{proof}
+The strategy to prove Artin-Rees will be as follows. Call a filtration
+$\left\{M_n\right\}$ on an $R$-module $M$ (which is expected to be compatible
+with the $I$-adic filtration on $R$, i.e. $I^n M_m \subset M_{m+n}$ for all
+$n,m$) \textbf{$I$-good} if $I M_{n} = M_{n+1}$ for large $n \gg 0$.
Right now, we have the (evidently $I$-good) filtration $\{I^n M\}$ on $M$, and the induced
+filtration $\{I^n M \cap M'\}$ on $M'$. The Artin-Rees lemma can be rephrased
+as saying that this filtration on $M'$ is $I$-good: in fact, this is what we
+shall prove.
+It follows that if one has an $I$-good filtration on $M$, then the induced
+filtration on $M'$ is itself $I$-good.
+
+To do this, we shall give an interpretation of $I$-goodness in terms of the
+\emph{blowup algebra}, and use its noetherianness.
Recall that this is defined as $S = R \oplus I \oplus I^2 \oplus
+\dots$, where multiplication is defined in the obvious manner (see
+\cref{blowupalg}). It can be regarded as a subring
+of the polynomial ring
+$R[t]$ where the coefficient of $t^i$ is required to be in $I^i$.
+The blowup algebra is clearly a graded ring.
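For instance, if $R = \mathbb{Z}$ and $I = (p)$, then under this
identification the blowup algebra is the subring $\mathbb{Z}[pt] \subset
\mathbb{Z}[t]$: a polynomial lies in it exactly when its coefficient of $t^i$
is divisible by $p^i$.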
+
+Given a filtration $\left\{M_n\right\}$ on an $R$-module $M$ (compatible with
+the $I$-adic filtration of $M$), we can make $\bigoplus_{n=0}^{\infty} M_n$
+into a \emph{graded} $S$-module in an obvious manner.
+
+Here is the promised interpretation of $I$-goodness:
+\begin{lemma} \label{subartinrees}
The
+filtration $\left\{M_n\right\}$ of the finitely generated $R$-module $M$ is
+$I$-good if and only if $\bigoplus M_n$ is a finitely generated $S$-module.
+\end{lemma}
+\begin{proof}
+Let $S_1 \subset S$ be the subset of elements of degree one.
+If $\bigoplus M_n$ is finitely generated as an $S$-module, then $S_1
+(\bigoplus M_n) $ and $\bigoplus M_n$ agree in large degrees by
+\cref{quickfinitenesslem};
+however, this means that $IM_{n-1} = M_{n}$ for $n\gg 0$, which is $I$-goodness.
+
+Conversely, if $\left\{M_n\right\}$ is an $I$-good filtration, then once the
+$I$-goodness starts (say, for $n>N$, we have $IM_{n} = M_{n+1}$), there is no
+need to add generators beyond $M_{N}$. In fact, we can use $R$-generators for
+$M_0, \dots, M_N$ in the appropriate degrees to generate $\bigoplus M_n$ as an
$S$-module.
+\end{proof}
+
+Finally, let $\left\{M_n\right\}$ be an $I$-good filtration on the finitely
+generated $R$-module $M$. Let $M' \subset M$ be a submodule; we will, as
+promised, show that the induced filtration on $M'$ is $I$-good.
+Now the associated module $\bigoplus_{n=0}^{\infty} (I^n M \cap M') $
+is an $S$-submodule of $\bigoplus_{n=0}^{\infty} M_n$, which
+by \cref{subartinrees} is finitely generated. We will show next that $S$
+is noetherian, and consequently submodules of finitely generated
+modules are finitely generated. Applying \cref{subartinrees} again, we will find
+that the induced filtration must be $I$-good.
+
+\begin{lemma}
Hypotheses as above, the blowup algebra $S$ is noetherian.
+\end{lemma}
+\begin{proof}
Choose generators $x_1, \dots, x_n \in I$; then there is a map $R[y_1, \dots,
y_n] \to S$ sending $y_i \mapsto x_i $ (where $x_i$ is in degree one). This is surjective. Hence by the basis
theorem (\cref{hilbbasiscor}), $S$ is noetherian.
+\end{proof}
+
+
+\end{proof}
+
+
+\subsection{The Krull intersection theorem}
+
+We now prove a useful consequence of the Artin-Rees lemma and Nakayama's
+lemma. In fancier language, this states that the map from a noetherian local
+ring into its
+completion is an \emph{embedding}. A priori, this might not be obvious. For
+instance, it might be surprising that the inverse limit of the highly torsion
+groups $\mathbb{Z}/p^n$ turns out to be the torsion-free ring of $p$-adic
+integers.
+
+\begin{theorem}[Krull intersection theorem] \label{krullint} Let $R$ be a local noetherian ring with maximal ideal
+$\mathfrak{m}$. Then,
+\[ \bigcap \mathfrak{m}^i = (0). \]
+\end{theorem}
+
+\begin{proof}
+Indeed, the $\mathfrak{m}$-adic topology on $\bigcap \mathfrak{m}^i$ is the
+restriction of the $\mathfrak{m}$-adic topology of $R$ on $\bigcap
+\mathfrak{m}^i$ by the Artin-Rees lemma (\rref{artinrees}).
+However, $\bigcap \mathfrak{m}^i$ is contained in every $\mathfrak{m}$-adic
+neighborhood of $0$ in $R$; the induced topology on $\bigcap \mathfrak{m}^i$
+is thus the indiscrete topology.
+
+But to say that the $\mathfrak{m}$-adic topology on a module $N$ is indiscrete
+is to say that $\mathfrak{m}N=N$, so $N=0$ by Nakayama. The result is thus
+clear.
+
+\end{proof}
+
+By similar logic, or by localizing at each maximal ideal, we find:
+\begin{corollary}
+If $R$ is a commutative ring and $I $ is contained in the Jacobson radical of
+$R$, then $\bigcap I^n = 0$.
+\end{corollary}
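Here is a hands-on instance of the theorem.
\begin{example}
Take $R = \mathbb{Z}_{(p)}$ with $\mathfrak{m} = (p)$. The theorem asserts
that $\bigcap (p^n) = 0$, which one can also see directly: a fraction $a/b$
with $a, b \in \mathbb{Z}$ and $p \nmid b$ that is divisible by $p^n$ in
$\mathbb{Z}_{(p)}$ for every $n$ must satisfy $p^n \mid a$ for every $n$,
whence $a = 0$.
\end{example}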
+
+It turns out that the Krull intersection theorem can be proved in the
+following elementary manner, due to Perdry in \cite{Pe04}. The argument does
+not use the Artin-Rees lemma. One can prove:
+
+\begin{theorem}[\cite{Pe04}]
+Suppose $R$ is a noetherian ring, $I \subset R$ an ideal. Suppose $b \in
+\bigcap I^n$. Then as ideals $(b) = (b)I$.
+\end{theorem}
+In particular, it follows easily that $\bigcap I^n = 0$ under either of the
+following conditions:
+\begin{enumerate}
\item $I$ is contained in the Jacobson radical of $R$ (apply Nakayama's lemma
to the finitely generated module $(b)$).
\item $R$ is a domain and $I$ is proper (if $b = bc$ with $c \in I$, then
$b(1-c) = 0$ and $1 - c \neq 0$, so $b = 0$).
+\end{enumerate}
+
+\begin{proof}
+Let $a_1, \dots, a_k \in I$ be generators.
+For each $n$, the ideal $I^n$ consists of the values of all homogeneous
+polynomials in $R[x_1, \dots, x_k]$ of degree $n$ evaluated on the tuple
+$(a_1, \dots, a_k)$, as one may easily see.
+
+It follows that if $b \in \bigcap I^n$, then for each $n$ there is a polynomial
+$P_n \in
+R[x_1, \dots, x_k]$ which is homogeneous of degree $n$ and which satisfies
+\[ P_n(a_1, \dots, a_k) = b. \]
+The ideal generated by all the $P_n$ in $R[x_1, \dots, x_k]$ is finitely
+generated by the Hilbert basis theorem. Thus there is $N$ such that
+\[ P_N = Q_1 P_1 + Q_2 P_2 + \dots + Q_{N-1} P_{N-1} \]
+for some polynomials $Q_i \in R[x_1, \dots, x_k]$. By taking homogeneous
+components, we can assume moreover that $Q_i$ is homogeneous of degree $N-i$
+for each $i$. If we evaluate each at
+$(a_1, \dots, a_k)$ we find
+\[ b = b (Q_1(a_1, \dots,a_k) + \dots + Q_{N-1}(a_1, \dots, a_k)). \]
+But the $Q_i(a_1, \dots, a_k)$ lie in $I$ as all the $a_i$ do and $Q_i$ is
+homogeneous of positive degree. Thus $b$ equals $b$ times something in $I$.
+\end{proof}
+
diff --git a/books/cring/homological.tex b/books/cring/homological.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7afef8a5ed3b2d4d26eaf84dbe7e9997c512ceb8
--- /dev/null
+++ b/books/cring/homological.tex
@@ -0,0 +1,977 @@
+\chapter{Homological Algebra}
+\label{homological}
+
+
+Homological algebra begins with the notion of a \emph{differential object,}
+that is, an object with an endomorphism $A \stackrel{d}{\to} A$ such that $d^2 =
+0$. This equation leads to the obvious inclusion $\im(d) \subset \ker(d)$, but
+the inclusion generally is not equality. We will find that the difference
+between $\ker(d)$ and $\im(d)$, called the \emph{homology}, is a highly useful
invariant of a differential object: its first basic property is that if an exact
+sequence
+\[ 0 \to A' \to A \to A'' \to 0 \]
+of differential objects is given, the homology of $A$ is related to that of
+$A', A''$ through a long exact sequence. The basic example, and the one we
+shall focus on, is where $A$ is a
+chain complex, and $d$ the usual differential.
+In this case, homology simply measures the failure of a complex to be exact.
+
+After introducing these preliminaries, we develop the theory of \emph{derived
+functors}. Given a functor that is only left or right-exact, derived functors
+allow for an extension of a partially exact sequence to a long exact sequence.
+The most important examples to us, $\mathrm{Tor}$ and $\mathrm{Ext}$, provide
+characterizations of flatness, projectivity, and injectivity.
+
+\section{Complexes}
+
+
+\subsection{Chain complexes}
+The chain complex is the most fundamental construction in
+homological algebra.
+
+\begin{definition} Let $R$ be a ring. A \textbf{chain complex} is a collection
+of $R$-modules
+$\{C_i\}$ (for $i \in \mathbb{Z}$)
+together with boundary
+operators
+$\partial_i:C_i\rightarrow C_{i-1}$ such that
+$\partial_{i-1}\partial_i=0$. The boundary map is also
+called the
+\textbf{differential.} Often, notation is abused and the indices for
+the boundary map are dropped.
+
+A chain complex is often simply denoted $C_*$.
+\end{definition}
+
+In practice, one often has that $C_i = 0$ for $i<0$.
+
+
+\begin{example} All exact sequences are chain complexes.
+\end{example}
+
+\begin{example} Any sequence of abelian groups $\left\{C_i\right\}_{i \in
+\mathbb{Z}}$ with the boundary operators
+identically zero forms a chain complex.
+\end{example}
+
We will see plenty more examples in due time.
+
+At each stage, elements in the image of the boundary $C_{i+1} \to C_i$ lie in
+the kernel of $\partial_i: C_i \to C_{i-1}$. Let us recall that a chain
+complex is \emph{exact} if the kernel and the image coincide. In general, a
+chain complex need not be exact, and this failure of exactness is measured by
+its homology.
+
+\begin{definition}
Let $C_*$ be a chain complex.
The submodule of \textbf{cycles} $Z_i\subset C_i$ is
the kernel $\ker(\partial_i)$. The submodule of \textbf{boundaries}
$B_i\subset C_i$ is the image $\im(\partial_{i+1})$. Thus
homology is said to be ``cycles mod boundaries,'' i.e.
$Z_i/B_i$.
+\end{definition}
+
+To further simplify notation, often all differentials regardless
+of what chain complex they are part of are denoted $\partial$,
+thus the commutativity relation on chain maps is
+$f\partial=\partial f$ with indices and distinction between the
+boundary operators dropped.
+
+
+\begin{definition} Let $C_*$ be a chain complex with boundary
+map $\partial_i$.
+We define the \textbf{homology} of the complex $C_*$ via
$H_i(C_*)=\ker(\partial_i)/\im(\partial_{i+1})$.
+\end{definition}
+
+\begin{example} In a chain complex $C_*$ where all the boundary
+maps are
+trivial, $H_i(C_*)=C_i$.
+\end{example}
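Here is a small example with a nontrivial differential.
\begin{example}
Consider the complex of abelian groups
\[ 0 \to \mathbb{Z} \xrightarrow{\cdot 2} \mathbb{Z} \to 0 \]
concentrated in degrees one and zero. Then $H_1 = \ker(\cdot 2) = 0$, while
$H_0 = \mathbb{Z}/\im(\cdot 2) = \mathbb{Z}/2\mathbb{Z}$; the homology records
precisely the failure of exactness in degree zero.
\end{example}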
+
+Often we will bundle all the modules $C_i$ of a chain complex
+together to form a graded module $C_*=\bigoplus_i C_i$. In this
case, the boundary operator is an
+endomorphism that takes elements from degree $i$ to degree
+$i-1$. Similarly, we
+often bundle together all the homology modules to give a graded
+homology module
+$H_*(C_*)=\bigoplus_i H_i(C_*)$.
+
+\begin{definition}
+A \textbf{differential module} is a module $M$ together with a morphism $d:
+M\to M$ such that $d^2 =0$.
+\end{definition}
+
+Thus, given a chain complex $C_*$, the module $\bigoplus C_i$ is a
+differential module with the direct sum of all the differentials $\partial_i$.
+A chain complex is just a special kind of differential module, one where the
+objects are graded and the differential drops the grading by one.
+
+\subsection{Functoriality}
+We have defined chain complexes now, but we have no notion of a morphism
+between chain complexes.
+We do this next; it turns out that chain complexes form a category when morphisms
+are appropriately defined.
+
+\begin{definition} A \textbf{morphism} of chain complexes $f:C_*\rightarrow
+D_*$, or a \textbf{chain map}, is a sequence of maps $f_i:C_i\rightarrow
+D_i$ such that $f\partial = \partial' f$ where $\partial$ is the
+boundary map of $C_*$ and $\partial'$ of $D_*$ (again we are
+abusing notation and dropping indices).
+\end{definition}
+
+There is thus a \emph{category} of chain complexes where the morphisms are
+chain maps.
+
+One can make a similar definition for differential modules. If $(M, d)$ and
+$(N,d')$ are differential modules, then a \emph{morphism of differential
+modules} $(M,d) \to (N,d')$ is a morphism of modules $M \to N$ such that the diagram
+\[
+\xymatrix{
+M \ar[d] \ar[r]^d & M \ar[d] \\
+N \ar[r]^{d'} & N
+}
+\]
+commutes.
+There is therefore a category of differential modules, and the map $C_* \to
+\bigoplus C_i$ gives a functor from the category of chain complexes to that of
+differential modules.
+
+
+\begin{proposition} A chain map $C_* \to D_*$ induces a map in homology $H_i(C)
+\to H_i(D)$ for each $i$; thus homology is a covariant functor from
+the category of chain complexes to the category of graded
+modules.
+\end{proposition}
+
+More precisely, each $H_i$ is a functor from chain complexes to modules.
+\begin{proof}
+Let $f:C_*\rightarrow D_*$ be a chain map. Let $\partial$ and
+$\partial'$ be the differentials for $C_*$ and $D_*$
+respectively. Then we have a commutative diagram:
+
+\begin{equation}
+\begin{CD}
C_{i+1} @>\partial_{i+1}>> C_i @>\partial_i>> C_{i-1}\\
@VV f_{i+1} V @VV f_i V @VVf_{i-1} V\\
D_{i+1} @>\partial'_{i+1}>> D_i @>\partial'_i>> D_{i-1}
+\end{CD}
+\end{equation}
+
+Now, in order to check that a chain map $f$ induces a map $f_*$
on homology, we need to check that $f_*(\im(\partial))\subset
\im(\partial')$ and $f_*(\ker(\partial))\subset
\ker(\partial')$. We first check the condition on images: we want
to look at $f_i(\im(\partial_{i+1}))$. By commutativity of $f$
and the boundary maps, this is equal to
$\partial'_{i+1}(\im(f_{i+1}))$. Hence we have
$f_i(\im(\partial_{i+1}))\subset \im(\partial_{i+1}')$. For the
condition on kernels, let $x\in \ker(\partial_i)$. Then by
commutativity, $\partial'_i(f_i(x))=f_{i-1}(\partial_i(x))=0$.
Thus we have that $f$ induces for each $i$ a homomorphism
$H_i(C_*)\rightarrow H_i(D_*)$ and hence it induces a
homomorphism on homology as a graded module. \end{proof}
+
+\begin{exercise}
+Define the \emph{homology} $H(M)$ of a differential module $(M, d)$ via $\ker d / \im
+d$. Show that $M \mapsto H(M)$ is a functor from differential modules to
+modules.
+\end{exercise}
+
+
+\subsection{Long exact sequences}
+\add{OMG! We have all this and not the most basic theorem of them all.}
+
+\begin{definition} If $M$ is a complex then for any integer $k$, we define a new complex $M[k]$ by shifting indices, i.e. $(M[k])^i:=M^{i+k}$.\end{definition}
+
+\begin{definition} If $f:M\rightarrow N$ is a map of complexes, we define a complex $\mathrm{Cone}(f):=\{N^i\oplus M^{i+1}\}$ with differential
$$d(n^i,m^{i+1}):= \bigl(d_N^i(n^i)+(-1)^i\cdot f(m^{i+1}),\ d_M^{i+1}(m^{i+1})\bigr)$$
+\end{definition}
+
+Remark: This is a special case of the total complex construction to be seen later.
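With this convention (the differential is $d_N$ plus $(-1)^i f$ in the first
coordinate and $d_M$ in the second), it is a worthwhile exercise to check that
$d^2 = 0$:
\begin{align*}
d^2(n^i,m^{i+1}) &= d\bigl(d_N^i(n^i)+(-1)^i f(m^{i+1}),\ d_M^{i+1}(m^{i+1})\bigr)\\
&= \bigl(d_N^{i+1}d_N^i(n^i) + (-1)^i d_N^{i+1}(f(m^{i+1})) + (-1)^{i+1} f(d_M^{i+1}(m^{i+1})),\ d_M^{i+2}d_M^{i+1}(m^{i+1})\bigr),
\end{align*}
which vanishes because $d_N^2 = d_M^2 = 0$ and $f$ commutes with the
differentials.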
+
+\begin{proposition} A map $f:M\rightarrow N$ is a quasi-isomorphism if and only if $\mathrm{Cone}(f)$ is acyclic.\end{proposition}
+
\begin{proof} Note that by definition we have a short exact sequence of complexes
$$0\rightarrow N\rightarrow \mathrm{Cone}(f)\rightarrow M[1]\rightarrow 0$$
so by Proposition 2.1, we have a long exact sequence
$$\dots \rightarrow H^{i-1}(\mathrm{Cone}(f))\rightarrow H^{i}(M)\rightarrow H^{i}(N)\rightarrow H^{i}(\mathrm{Cone}(f))\rightarrow\dots$$
so by exactness, we see that $H^i(M)\simeq H^i(N)$ if and only if $H^{i-1}(\mathrm{Cone}(f))=0$ and $H^i(\mathrm{Cone}(f))=0$. Since this is the case for all $i$, the claim follows.
\end{proof}
+
+
+
+\subsection{Cochain complexes}
+Cochain complexes are much like chain complexes except the
+arrows point in the
+opposite direction.
+
+\begin{definition} A \textbf{cochain complex} is a sequence of modules
+$C^i$ for $i \in \mathbb{Z}$ with \textbf{coboundary operators}, also
+called
+\textbf{differentials}, $\partial^i:C^i\rightarrow C^{i+1}$ such that
+$\partial^{i+1}\partial^i=0$. \end{definition}
+
+The theory of cochain complexes is entirely dual to that of chain complexes,
+and we shall not spell it out in detail.
+For instance, we can form a category of cochain complexes and
+\textbf{chain maps} (families of morphisms commuting with the
+differential). Moreover, given a cochain complex $C^*$, we
+define the
+\textbf{cohomology objects} to be
$H^i(C^*)=\ker(\partial^i)/\im(\partial^{i-1})$; one obtains cohomology
+functors.
+
+It should be noted that the long exact sequence in cohomology runs in the
+opposite direction.
+If $0 \to C_*' \to C_* \to C_*'' \to 0$ is a short exact sequence of cochain
+complexes, we get a long exact sequence
+\[ \dots \to H^i(C' ) \to H^i(C) \to H^{i}(C'') \to H^{i+1}(C' ) \to H^{i+1}(C) \to
+\dots. \]
+
Similarly, we can bundle the terms of a cochain complex, and its
cohomology modules, into
graded modules.
+
+Let us now give a standard example of a cochain complex.
+\begin{example}[The de Rham complex] Readers unfamiliar with differential
+forms may omit this example. Let $M$ be a smooth manifold. For each $p$, let
+$C^p(M)$ be the $\mathbb{R}$-vector space of smooth $p$-forms on $M$.
+We can make the $\left\{C^p(M)\right\}$ into a complex by defining the maps
+\[ C^p(M) \to C^{p+1}(M) \]
via $\omega \mapsto d \omega$, for $d$ the exterior derivative.
+(Note that $d^2 = 0$.) This complex is called the \textbf{de Rham complex} of
+$M$, and its cohomology is called the \textbf{de Rham cohomology.} It is known
+that the de Rham cohomology is isomorphic to singular cohomology with real
+coefficients.
+
+\end{example}
+
+
+\subsection{Chain Homotopies}
+
+In general, two maps of complexes $C_* \rightrightarrows D_*$ need not be
+equal to induce the same morphisms in homology. It is thus of interest to
+determine conditions when they do. One important condition is given by chain
+homotopy: chain homotopic maps are indistinguishable in homology. In algebraic
+topology, this fact is used to show that singular homology is a homotopy
+invariant.
+We will find it useful in showing that the construction (to be given later) of a
+projective resolution is essentially unique.
+
\begin{definition} Let $C^*, D^*$ be cochain complexes with differentials $d^i$. A \textbf{chain homotopy} between two chain maps
$f,g:C^*\rightarrow D^*$ is a collection of homomorphisms
$h^i:C^i\rightarrow D^{i-1}$ satisfying $f^i-g^i=d^{i-1} h^i+
h^{i+1}d^i$. Again often notation is abused and the
condition is written $f-g=d h +
hd$.
+\end{definition}
+
\begin{proposition} If two morphisms of complexes $f,g: C^* \to D^*$ are chain homotopic, then they
induce the same map on cohomology.
\end{proposition}
+
+\begin{proof}
Write $d^i$ for the various differentials (in both complexes).
Let $m\in Z^i(C)$, the group of $i$-cocycles, so that $d^i(m)=0$.
Suppose there is a chain homotopy $h$ between $f,g$ (that is, a collection of
morphisms $h^i: C^i \to D^{i-1}$).
Then
$$f^i(m)-g^i(m)= h^{i+1}(d^i(m)) + d^{i-1}(h^i(m))= d^{i-1}(h^i(m)) \in \im(d^{i-1}),$$
which is zero in the cohomology $H^i(D)$.
+\end{proof}
+
+
+\begin{corollary} If two chain complexes are chain homotopically equivalent
+(there are maps $f: C_*\rightarrow D_*$ and $g:D_*\rightarrow
+C_*$ such that both $fg$ and $gf$ are chain homotopic to the
+identity), they have isomorphic homology.
+\end{corollary}
+\begin{proof}
+Clear.
+\end{proof}
+
+\begin{example} Not every quasi-isomorphism is a homotopy equivalence. Consider the complex
$$\dots \rightarrow 0\rightarrow\mathbb{Z}\xrightarrow{\cdot 2} \mathbb{Z}\rightarrow 0\rightarrow 0\rightarrow\dots$$
so $H^0=\mathbb{Z}/2\mathbb{Z}$ and all other cohomology groups are $0$. We have a quasi-isomorphism from the above complex to the complex
$$\dots \rightarrow 0\rightarrow 0 \rightarrow \mathbb{Z}/2\mathbb{Z}\rightarrow 0\rightarrow 0\rightarrow\dots$$
but no homotopy inverse can be defined (there is no nonzero homomorphism $\mathbb{Z}/2\mathbb{Z}\rightarrow \mathbb{Z}$).
+\end{example}
+
+
\begin{proposition} Additive functors preserve chain homotopies.
+\end{proposition}
+\begin{proof} Since an additive functor $F$ is a homomorphism on $Hom(-,-)$,
+the chain homotopy condition will be preserved; in
+particular, if $t$ is a chain homotopy, then $F(t)$ is a chain
+homotopy.
+\end{proof}
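Concretely, applying an additive functor $F$ to the relation $f - g = dh + hd$
gives
\[ F(f) - F(g) = F(d)F(h) + F(h)F(d), \]
since $F$ preserves sums and compositions; thus $F(h)$ is a chain homotopy
between $F(f)$ and $F(g)$.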
+
+In more sophisticated homological theory, one often makes the
+definition of the ``homotopy category of chain complexes.''
+\begin{definition} The homotopy category of chain complexes is
+the category $hKom(R)$ where objects are chain complexes of
+$R$-modules and morphisms are chain maps modulo chain homotopy.
+\end{definition}
+
+
+\subsection{Topological remarks}
+
+\add{add more detail}
+The first homology theory
to be developed was simplicial homology, the study of the homology
of simplicial
complexes. To keep things simple, we will not develop the general theory
+and instead
+motivate our definitions with a few basic examples.
+
+\begin{example} Suppose
+our simplicial complex has one line segment with both ends
+identified at
+one point $p$. Call the line segment $a$. The $n$-th homology
+group of this
+space roughly counts how many ``different ways'' there are of
finding $n$-dimensional
sub-simplices that have no boundary and are not themselves the
boundary of
any $(n+1)$-dimensional simplex. For the circle, notice that for
+each integer,
we can find such a way (namely the cycle that wraps counterclockwise
that
+integer number of times). The way we compute this is we look at
+the free abelian group generated by $0$ simplices, and $1$
+simplices (there are no simplices of
+dimension $2$ or higher so we can ignore that). We call these
+groups $C_0$ and
+$C_1$ respectively. There is a boundary map $\partial_1:
+C_1\rightarrow C_0$.
+This boundary map takes a $1$-simplex and associates to it its
+end vertex minus
+its starting vertex (considered as an element in the free
+abelian group on
+vertices of our simplex). In the case of the circle, since there
+is only one
+$1$-simplex and one $0$-simplex, this map is trivial. We then
+get our homology
+group by looking at $\ker(\partial_1)$. In the case that there
+is a nontrivial
+boundary map $\partial_2: C_2\rightarrow C_1$ (which can only
+happen when our
+simplex is at least $2$-dimensional), we have to take the
+quotient
$\ker(\partial_1)/\im(\partial_2)$. This motivates us to define
+homology in a
+general setting.
+\end{example}
+
Originally homology was
intended to be a homotopy invariant, meaning that spaces with the
same homotopy type would have isomorphic homology modules. In fact, any
+homotopy induces what is now known as a chain homotopy on the simplicial chain
+complexes.
+
+\begin{exercise}[Singular homology] Let $X$ be a topological
+space and let $S^n$ be the set of all continuous maps
+$\Delta^n\rightarrow X$ where $\Delta^n$ is the convex hull of
+$n$ distinct points and the origin with orientation given by an
ordering of the $n+1$ vertices. Define $C_n$ to be the free
+abelian group generated by elements of $S^n$. Define
+$\Delta^n_{\hat{i}}$ to be the face of $\Delta^n$ obtained by
+omitting the $i$-th vertex from the simplex. We can then
+construct a boundary map $\partial_n:C_n\rightarrow C_{n-1}$ to
+take a map $\sigma^n:\Delta^n\rightarrow X$ to
$\sum_{i=0}^n(-1)^i\sigma^n|_{\Delta^n_{\hat{i}}}$. Verify that
$\partial^2=0$ (hence making $C_*$ into a chain complex, known as
the ``singular chain complex of $X$''). Its homology groups are
the ``singular homology groups.'' \end{exercise}
+
+\begin{exercise} Compute the singular homology groups of a
+point. \end{exercise}
+
+\section{Derived functors}
+\subsection{Projective resolutions}
+
+Fix a ring $R$.
+Let us recall (\rref{projectives}) that an $R$-module $P$ is called
\emph{projective} if the functor $N \mapsto \hom_R(P,N)$ (which is always
+left-exact) is exact.
+
Projective objects are useful in constructing certain exact sequences
known as ``projective resolutions.'' In the theory of derived functors, the
+projective resolution of a module $M$ is in some sense a replacement for $M$:
+thus, we want it to satisfy some uniqueness and existence properties. The
+uniqueness is not quite true, but it is true modulo chain equivalence.
+
\begin{definition} Let $M$ be an arbitrary module. A \textbf{projective
resolution} of
$M$ is an exact sequence
\begin{equation} \cdots\rightarrow P_i\rightarrow
P_{i-1}\rightarrow
P_{i-2}\rightarrow\cdots\rightarrow P_1\rightarrow P_0\rightarrow M \rightarrow 0
\end{equation} where
the $P_i$ are projective modules. \end{definition}
+
+
+\begin{proposition} Any module admits a projective resolution. \end{proposition}
+The proof will even show that we can take a \emph{free} resolution.
+\begin{proof}
+We construct the resolution inductively.
First, we take a projective module $P_0$ with $P_0 \twoheadrightarrow M$
surjective by the previous part. Given a portion of the resolution
\[ P_n \to P_{n-1} \to \dots \to P_0 \twoheadrightarrow M \to 0 \]
for $n \geq 0$, which is exact at each step, we consider $K = \ker(P_n \to
P_{n-1})$. The sequence
\[ 0 \to K \to P_n \to P_{n-1} \to \dots \to P_0 \twoheadrightarrow M \to 0 \]
+is exact. So if $P_{n+1}$ is chosen such that it is projective and there is an
+epimorphism
+\( P_{n+1} \twoheadrightarrow K, \)
+(which we can construct by \rref{freesurjection}), then
+\[ P_{n+1} \to P_n \to \dots \]
+is exact at every new step by construction. We can repeat this inductively and
+get a full projective resolution.
+\end{proof}
+
+Here is a useful observation:
+\begin{proposition}
+If $R$ is noetherian, and $M$ is finitely generated, then we can
+choose a
+projective resolution where each $P_i$ is finitely generated.
+\end{proposition}
+We can even take a resolution consisting of finitely generated free modules.
+\begin{proof}
+To say that $M$ is finitely generated is to say that it is a
+quotient of a free module on
+finitely many generators, so we can take $P_0$ free and finitely generated. The kernel
+of $P_0 \to M$
+is finitely generated by noetherianness, and we can proceed as
+before, at each step
+choosing a finitely generated object.
+\end{proof}
+
\begin{example} The abelian group $\mathbb{Z}/2$ has the free
resolution
$0\rightarrow\mathbb{Z}\xrightarrow{\cdot 2}\mathbb{Z}\rightarrow\mathbb{Z}/2\rightarrow 0$.
Similarly, since any finitely generated abelian group can be
decomposed into the direct sum of a torsion subgroup and a free
subgroup, all finitely generated abelian groups admit
+a free resolution of length two.
+
+Actually, over a principal ideal domain $R$ (e.g. $R=\mathbb{Z}$),
+\emph{every} module admits a free resolution of length two. The reason is that
+if $F \twoheadrightarrow M$ is a surjection with $F$ free, then the kernel $F'
+\subset F$ is free by a general fact (\add{citation needed}) that a submodule
+of a free module is free (if one works over a PID). So we get a free
+resolution of the type
+\[ 0 \to F' \to F \to M \to 0. \]
+\end{example}
+
+
+In general, projective resolutions are not at all unique.
+Nonetheless, they \emph{are} unique up to chain homotopy. Thus a projective
+resolution is a rather good ``replacement'' for the initial module.
+
+\begin{proposition}
+Let $M, N$ be modules and let $P_* \to M, P'_* \to N$ be projective
+resolutions. Let $f: M \to N$ be a morphism. Then there is a morphism
+\[ P_* \to P'_* \]
+such that the following diagram commutes:
+\[
+\xymatrix{
+\dots \ar[r] & P_1 \ar[r] \ar[d] & P_0 \ar[r] \ar[d] & M \ar[d]^f \\
+\dots \ar[r] & P'_1 \ar[r] & P'_0 \ar[r] & N
+}
+\]
+This morphism is unique up to chain homotopy.
+\end{proposition}
+
+\begin{proof}
+Let $P_* \to M$ and $P'_* \to N$ be projective resolutions. We will define a
+morphism of complexes $P_* \to P'_* $ such that the diagram commutes.
+Let the boundary maps in $P_*, P'_*$ be denoted $d$ (by abuse of notation).
+We have an exact diagram
+\[
+\xymatrix{
+\dots \ar[r] & P_n \ar[r]^d & P_{n-1} \ar[r]^d & \dots \ar[r]^d & P_0
+\ar[r]& M \ar[d]^{f} \ar[r] & 0 \\
+\dots \ar[r] & P'_n \ar[r]^d & P'_{n-1} \ar[r] & \dots \ar[r]^d & P'_0 \ar[r] & N \ar[r] & 0
+}
+\]
+Since $P'_0 \twoheadrightarrow N$ is an epimorphism, the map $P_0 \to M \to N$ lifts
+to a map $P_0 \to P'_0$ making the diagram
+\[ \xymatrix{
+P_0 \ar[d] \ar[r] & M \ar[d]^{f} \\
+P'_0 \ar[r] & N
+}\]
+commute.
+Suppose we have defined maps $P_i \to P'_i$ for $i \leq n$ such that the
+following diagram commutes:
+\[
+\xymatrix{
+P_n \ar[r]^d \ar[d] & P_{n-1} \ar[r]^d \ar[d] & \dots \ar[r]^d & P_0
+\ar[d] \ar[r]& M \ar[d]^{f} \ar[r] & 0 \\
+P'_n \ar[r]^d & P'_{n-1} \ar[r] & \dots \ar[r]^d & P'_0 \ar[r] & N \ar[r] & 0
+}
+\]
+Then we will define $P_{n+1} \to P'_{n+1}$, after which induction will prove
+the existence of a map. To do this, note that
+the map
+\[ P_{n+1} \to P_n \to P'_n \to P'_{n-1} \]
+is zero, because this is the same as $P_{n+1} \to P_n \to P_{n-1} \to P'_{n-1}$
+(by induction, the diagrams before $n$ commute), and this is zero because two
+$P$-differentials were composed one after another. In particular, in the diagram
+\[
+\xymatrix{
+P_{n+1} \ar[r] & P_n \ar[d] \\
+P'_{n+1} \ar[r] & P'_n
+},
+\]
+the image in $P'_n$ of $P_{n+1}$ lies in the kernel of $P'_n \to P'_{n-1}$,
+i.e. in the image $I$ of $P'_{n+1}$. The exact diagram
+\[
+\xymatrix{
+& P_{n+1} \ar[d] \\
+P'_{n+1} \ar[r] & I \ar[r] & 0
+}
+\]
+shows that we can lift $P_{n+1} \to I$ to $P_{n+1} \to P'_{n+1}$ (by
+projectivity). This implies that we can continue the diagram further and get a
+morphism $P_* \to P'_* $ of complexes.
+
+
+
+Suppose $f, g: P_* \to P'_*$ are two morphisms of the projective resolutions
+making $$\xymatrix{
+P_0 \ar[r] \ar[d] & M \ar[d] \\
+P'_0 \ar[r] & N
+}$$ commute. We will show that $f,g$ are chain homotopic.
+
+For this,
+we start by defining $D_0: P_0 \to P'_1$ such that $dD_0 = f-g: P_0 \to P'_0$.
+This we can do because $f-g$ sends $P_0$ into $\ker(P'_0 \to N)$, i.e. into the
+image of $P'_1 \to P'_0$, and $P_0$ is projective.
+Suppose we have defined chain-homotopies $D_i: P_{i} \to P'_{i+1}$ for $i \leq
+n$ such that $dD_i + D_{i-1}d = f-g$ for $i \leq n$. We will define $D_{n+1}$.
+There is a diagram
+\[
+\xymatrix{
+ & P_{n+1} \ar[d] \ar[r] & P_n \ar[ld]^{D_n}\ar[d] \ar[r] & P_{n-1}
+ \ar[ld]^{D_{n-1}} \ar[d] \\
+P'_{n+2} \ar[r] & P'_{n+1} \ar[r] & P'_n \ar[r] & P'_{n-1} \\
+}\]
+where the squares commute regardless of whether you take the vertical maps to
+be $f$ or $g$ (provided that the choice is consistent).
+
+We would like to define $D_{n+1}: P_{n+1} \to P'_{n+2}$.
+The key condition we need satisfied is that
+\[ d D_{n+1} = f - g - D_n d. \]
+However, we know that, by the inductive hypothesis on the $D$'s,
+\[ d( f- g - D_{n}d) = fd - gd - dD_n d = fd - gd - (f-g)d + D_{n-1} dd = 0. \]
+In particular, $f-g - D_n d$ carries $P_{n+1}$ into the kernel of $P'_{n+1}
+\to P'_n$, i.e. into the image of $P'_{n+2} \to P'_{n+1}$.
+The projectivity of $P_{n+1}$ ensures that we can define $D_{n+1}$ satisfying
+the necessary condition.
+
+\end{proof}
+
+
+\begin{corollary}
+Let $P_* \to M, P'_* \to M$ be projective resolutions of $M$. Then there are
+maps $P_* \to P'_*, P'_* \to P_* $ under $M$ such that the compositions are
+chain homotopic to the identity.
+\end{corollary}
+\begin{proof}
+Immediate.
+\end{proof}
+
+\subsection{Injective resolutions}
+
+One can dualize all this to injective resolutions. \add{do this}
+
+\subsection{Definition}
+Often in homological algebra, we see that ``short exact
+sequences induce long exact sequences.'' Using the theory of
+derived functors, we can make this formal.
+
+Let us work with categories of modules over rings; fix two such categories.
+Recall that a right-exact functor $F$ (from the category of modules over a
+ring to the category of modules over another ring) is an additive functor
+ such that for every short
+exact sequence $0\rightarrow A\rightarrow B\rightarrow
+C\rightarrow 0$, we get an exact sequence $F(A)\rightarrow
+F(B)\rightarrow F(C)\rightarrow 0$.
+
+We want a natural way to continue this exact sequence to the
+left; one way of doing this is to define the left derived
+functors.
+\begin{definition} Let $F$ be a right-exact functor and let
+$P_*\rightarrow M$ be a projective resolution. We can form a
+chain complex $F(P_*)$ whose object in degree $i$ is $F(P_i)$,
+with boundary maps $F(\partial)$. The homology groups of this chain
+complex, denoted $L_iF(M)$, are the left derived functors of $F$.
+\end{definition}
+
+For this definition to be useful, it is important to verify that
+deriving a functor yields functors independent of the choice of
+resolution. This is clear by \rref{}.
+
+\begin{theorem} The following properties characterize derived
+functors: \begin{enumerate}
+\item $L_0F(-)=F(-)$.
+\item Suppose $0\rightarrow A\rightarrow B\rightarrow
+C\rightarrow 0$ is an exact sequence and $F$ a right-exact
+functor; the left derived functors fit into the following exact
+sequence:
+\begin{equation} \cdots \rightarrow L_iF(A)\rightarrow L_iF(B)\rightarrow
+L_iF(C)\rightarrow L_{i-1}F(A)\rightarrow\cdots\rightarrow
+L_1F(C)\rightarrow L_0F(A)\rightarrow L_0F(B)\rightarrow
+L_0F(C)\rightarrow 0 \end{equation}
+\end{enumerate}
+\end{theorem}
+\begin{proof} The second property is the hardest to prove, but
+it is by far the most useful; it is essentially an application
+of the snake lemma. \end{proof}
+One can define right derived functors analogously; if one has a
+left-exact functor (an additive functor that takes an exact
+sequence $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ to
+$0\rightarrow F(A)\rightarrow F(B)\rightarrow F(C)$), we can
+pick an injective resolution instead (the injective criterion is simply the
+projective criterion with arrows reversed). If
+$M\rightarrow I^*$ is an injective resolution, then the cohomology of the
+cochain complex $F(I^*)$ gives the right derived functors.
+However, variance must also be taken into consideration, so the
+choice between a projective and an injective
+resolution matters (in all of the above, functors were
+assumed to be covariant). In the following, we see an example of when right
+derived functors can be computed using projective
+resolutions.
+
+\subsection{$\ext$ functors}
+
+\begin{definition} The right derived functors of $\hom_R(-,N)$ are
+called the $\ext$-modules, denoted $\ext^i_R(-,N)$.
+\end{definition}
+We now look at the specific construction:
+
+Let $M, N$ be $R$-modules. Choose a projective resolution
+\[ \dots \to P_2 \to P_1 \to P_0 \to M \to 0 \]
+and consider what happens when you hom this resolution into $N$.
+Namely, we can
+consider $\hom_R(M,N)$, which is the kernel of $\hom(P_0, N)
+\to\hom(P_1, N) $
+by exactness of the sequence
+\[ 0 \to \hom_R(M,N) \to \hom_R(P_0, N) \to \hom_R(P_1, N) . \]
+You might try to continue this with the sequence
+\[ 0 \to \hom_R(M,N) \to \hom_R(P_0, N) \to \hom_R(P_1, N) \to
+\hom_R(P_2, N)
+\to \dots. \]
+In general, it won't be exact, because $\hom_R$ is only
+left-exact. But it is a
+chain complex. You can thus consider the homologies.
+
+\begin{definition}
+The homology of the complex $\{\hom_R(P_i, N)\}$ is denoted
+$\ext^i_R(M,N)$. By
+definition, this is $\ker(\hom(P_i,N) \to \hom(P_{i+1},
+N))/\im(\hom(P_{i-1},
+N) \to \hom(P_i,N))$. This is an $R$-module, and is called the
+$i$th ext group.
+\end{definition}
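+
+For concreteness, here is the classical computation of these groups over
+$\mathbb{Z}$; it is a routine check against the definition. Take $M =
+\mathbb{Z}/n$ and any abelian group $N$, with the projective resolution
+\[ 0 \to \mathbb{Z} \stackrel{n}{\to} \mathbb{Z} \to \mathbb{Z}/n \to 0. \]
+The complex $\{\hom_{\mathbb{Z}}(P_i, N)\}$ is then $0 \to N \stackrel{n}{\to}
+N \to 0$, so that
+\[ \ext^0_{\mathbb{Z}}(\mathbb{Z}/n, N) = \{x \in N : nx = 0\}, \quad
+\ext^1_{\mathbb{Z}}(\mathbb{Z}/n, N) = N/nN, \]
+and $\ext^i_{\mathbb{Z}}(\mathbb{Z}/n, N) = 0$ for $i \geq 2$.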
+
+
+
+Let us list some properties (some of these properties are just
+case-specific examples of general properties of derived
+functors):
+
+\begin{proposition}
+$\ext_R^0(M,N) = \hom_R(M,N)$.
+\end{proposition}
+\begin{proof}
+This is obvious from the left-exactness of $\hom(-,N)$. (We
+discussed this.)
+\end{proof}
+
+\begin{proposition}
+$\ext^i(M,N)$ is a functor of $N$.
+\end{proposition}
+\begin{proof}
+Obvious from the definition.
+\end{proof}
+
+Here is a harder statement.
+\begin{proposition}
+$\ext^i(M,N)$ is well-defined, independent of the projective
+resolution $P_*
+\to M$, and is in fact a contravariant additive functor of
+$M$.\footnote{I.e. a map $M
+\to M'$ induces $\ext^i(M', N) \to \ext^i(M,N)$.}
+\end{proposition}
+\begin{proof}
+Omitted. We won't really need this, though; it requires more
+theory about
+chain complexes.
+\end{proof}
+
+
+\begin{proposition}
+If $M$ is annihilated by some ideal $I \subset R$, then so is
+$\ext^i(M,N)$ for
+each $i$.
+\end{proposition}
+\begin{proof}
+This is a consequence of the functoriality in $M$. If $x \in
+I$, then $x: M \to
+M$ is the zero map, so it induces the zero map on
+$\ext^i(M,N)$.\end{proof}
+
+\begin{proposition}
+$\ext^i(M,N) = 0$ if $M$ is projective and $i>0$.
+\end{proposition}
+\begin{proof}
+In that case, one can use the projective resolution
+\[ 0 \to M \to M \to 0. \]
+Computing $\ext$ via this gives the result.
+\end{proof}
+
+
+
+
+\begin{proposition}
+If there is an exact sequence
+\[ 0 \to N' \to N \to N'' \to 0, \]
+there is a long exact sequence of $\ext$ groups
+\[ 0 \to \hom(M,N') \to \hom(M,N) \to \hom(M,N'') \to
+\ext^1(M,N') \to
+\ext^1(M,N) \to \dots \]
+\end{proposition}
+\begin{proof}
+This proof will assume a little homological algebra. Choose a
+projective
+resolution $P_* \to M$. (The notation $P_*$ means the chain
+complex $\dots \to
+P_2 \to P_1 \to P_0$.) In general, homming out of $M$ is not
+exact, but homming
+out of a projective module is exact. For each $i$, we get an
+exact sequence
+\[ 0 \to \hom_R(P_i, N') \to \hom_R(P_i, N) \to \hom_R(P_i,
+N'')\to 0, \]
+which leads to an exact sequence of \emph{chain complexes}
+\[ 0 \to \hom_R(P_*,N') \to \hom_R(P_*,N) \to \hom_R(P_*,N'')
+\to 0 . \]
+Taking the long exact sequence in homology gives the result.
+\end{proof}
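+
+As a sample application of this long exact sequence (a routine check, using
+only facts established above), take $R = \mathbb{Z}$, $M = \mathbb{Z}/2$, and
+the short exact sequence $0 \to \mathbb{Z} \stackrel{2}{\to} \mathbb{Z} \to
+\mathbb{Z}/2 \to 0$. Since $\hom(\mathbb{Z}/2, \mathbb{Z}) = 0$, and since
+multiplication by $2$ acts as zero on $\ext^1(\mathbb{Z}/2, \mathbb{Z})$
+(because $2$ annihilates $\mathbb{Z}/2$), the sequence begins
+\[ 0 \to 0 \to 0 \to \hom(\mathbb{Z}/2, \mathbb{Z}/2) \to
+\ext^1(\mathbb{Z}/2, \mathbb{Z}) \stackrel{0}{\to} \dots, \]
+which gives $\ext^1(\mathbb{Z}/2,\mathbb{Z}) \simeq \hom(\mathbb{Z}/2,
+\mathbb{Z}/2) = \mathbb{Z}/2$.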
+
+
+Much less obvious is:
+
+\begin{proposition}
+There is a long exact sequence in the $M$ variable. That is, a
+short exact
+sequence
+\[ 0 \to M' \to M \to M'' \to 0 \]
+leads to a long exact sequence
+\[ 0 \to \hom_R(M'', N) \to \hom_R(M,N) \to \hom_R(M', N) \to
+\ext^1(M'', N)
+\to \ext^1(M, N) \to \dots. \]
+\end{proposition}
+\begin{proof}
+Omitted.
+\end{proof}
+
+We now can characterize projectivity:
+\begin{corollary}
+TFAE:
+\begin{enumerate}
+\item $M$ is projective.
+\item $\ext^i(M,N) = 0$ for all $R$-modules $N$ and $i>0$.
+\item $\ext^1(M,N)=0$ for all $N$.
+\end{enumerate}
+\end{corollary}
+\begin{proof}
+We have seen that 1 implies 2 because projective modules have
+simple projective
+resolutions. 2 obviously implies 3. Let's show that 3 implies
+1. Choose a
+projective module $P$ and a surjection $P \twoheadrightarrow M$
+with kernel
+$K$. There is a short exact sequence $0 \to K \to P \to M \to
+0$. The sequence
+\[ 0 \to \hom(M,K) \to \hom(P,K) \to \hom(K,K) \to
+\ext^1(M,K)=0\]
+shows that there is a map $P \to K$ which restricts to the
+identity $K \to K$.
+The sequence $0 \to K \to P \to M \to 0$ thus splits, so $M$ is
+a direct
+summand in a projective module, so is projective.
+\end{proof}
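+
+For instance, the corollary makes it easy to see that $\mathbb{Z}/2$ is not
+projective over $\mathbb{Z}$: applying $\hom_{\mathbb{Z}}(-, \mathbb{Z}/2)$ to
+the projective resolution $0 \to \mathbb{Z} \stackrel{2}{\to} \mathbb{Z} \to
+\mathbb{Z}/2 \to 0$ gives the complex $0 \to \mathbb{Z}/2 \stackrel{0}{\to}
+\mathbb{Z}/2 \to 0$ (multiplication by $2$ is zero on $\mathbb{Z}/2$), so
+\[ \ext^1_{\mathbb{Z}}(\mathbb{Z}/2, \mathbb{Z}/2) = \mathbb{Z}/2 \neq 0. \]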
+
+
+Finally, we note that there is another way of constructing
+$\ext$. We
+constructed them by choosing a projective resolution of $M$. But
+you can also
+do this by resolving $N$ by \emph{injective} modules.
+\begin{definition}
+An $R$-module $Q$ is \textbf{injective} if $\hom_R(-,Q)$ is an
+exact (or,
+equivalently, right-exact) functor. That is, if $M_0 \subset M$
+is an inclusion
+of $R$-modules, then any map $M_0 \to Q$ can be extended to $M
+\to Q$.
+\end{definition}
+
+If we are given $M,N$, and an injective resolution $N \to Q^*$,
+we can look at
+the cochain complex $\left\{\hom(M,Q^i)\right\}$, i.e. the
+complex
+\[ 0 \to \hom(M, Q^0) \to \hom(M, Q^1) \to \dots \]
+and we can consider the cohomologies.
+
+\begin{definition}
+We call these cohomologies
+\[ \ext^i_R(M,N)' = \ker(\hom(M, Q^i) \to \hom(M,
+Q^{i+1}))/\im(\hom(M,
+Q^{i-1}) \to \hom(M, Q^i)). \]
+\end{definition}
+
+This is dual to the previous definitions, and it is easy to
+check that the
+properties that we couldn't verify for the previous $\ext$s are
+true for the
+$\ext'$'s.
+
+Nonetheless:
+
+\begin{theorem}
+There are canonical isomorphisms:
+\[ \ext^i(M,N)' \simeq \ext^i(M,N). \]
+\end{theorem}
+
+In particular, to compute $\ext$ groups, you are free either to
+take a
+projective resolution of $M$, or an injective resolution of
+$N$.
+\begin{proof}[Idea of proof]
+The idea is to construct a third, intermediate
+construction that resembles both. Given $M,N$, construct a
+projective resolution
+$P_* \to M$ and an injective resolution $N \to Q^*$. Having made
+these choices,
+we get a \emph{double complex}
+\[ \hom_R(P_i, Q^j) \]
+of a whole lot of $R$-modules. The claim is that in such a
+situation, where
+you have a double complex $C_{ij}$, you can
+form an ordinary chain complex $C'$
+by adding along the diagonals. Namely, the $n$th term
+is $C'_n = \bigoplus_{i+j=n} C_{ij}$. This \emph{total complex}
+will receive a
+map from the chain complex used to compute the $\ext$ groups
+and a chain
+complex used to compute the $\ext'$ groups. There are maps on
+cohomology,
+\[ \ext^i(M,N) \to H^i(C'_*), \quad \ext^i(M,N)' \to H^i(C'_*).
+\]
+The claim is that both maps induce isomorphisms on
+cohomology. That will prove the
+result, but we
+shall not prove the claim.
+\end{proof}
+
+Last time we were talking about $\ext$ groups over commutative
+rings. For $R$ a
+commutative ring and $M,N$ $R$-modules, we defined an $R$-module
+$\ext^i(M,N)$ for
+each $i$, and proved various properties. We forgot to mention
+one.
+
+\begin{proposition}
+If $R$ is noetherian and $M,N$ are finitely generated, then
+$\ext^i(M,N)$ is also finitely generated.
+\end{proposition}
+\begin{proof}
+We can take a projective resolution $P_*$ of $M$ by finitely
+generated free modules, $R$ being
+noetherian. Consequently the complex $\hom(P_*, N)$ consists of
+finitely
+generated modules. Thus the cohomology is finitely generated,
+and this cohomology
+consists of the $\ext$ groups.
+\end{proof}
+
+\subsection{Application: Modules over DVRs}
+
+
+\begin{definition} Let $M$ be a module over a domain $A$. We say that $M$ is \textbf{torsion-free} if for any nonzero $a \in A$, $a:M \to M$ is injective. We say that $M$ is \textbf{torsion} if for any $m \in M$, there is a nonzero $a \in A$ such that $am=0$.
+\end{definition}
+
+\begin{lemma} For any finitely generated module $M$ over a Noetherian domain $A$, there is a short exact sequence
+\[0 \to M_{tors} \to M \to M_{tors-free} \to 0\]
+where $M_{tors}$ is killed by an element of $A$ and $M_{tors-free}$ is torsion-free.
+\label{tors tors-free ses}
+\end{lemma}
+\begin{proof} This is because we may take $M_{tors}$ to be all the elements which are killed by a non-zero element of $A$. Then this is clearly a sub-module. Since $A$ is Noetherian, it is finitely generated, which means that it can be killed by one element of $A$ (take the product of the elements that kill the generators). Then it is easy to check that the quotient $M/M_{tors}$ is torsion-free.
+\end{proof}
+
+\begin{lemma} For $R$ a PID, a module $M$ over $R$ is flat if and only if it is torsion-free.
+\label{PID means flat=tors free}
+\end{lemma}
+\begin{proof} This is the content of Problem 2 on the Midterm.
+\end{proof}
+
+Using this, we will classify modules over DVRs.
+
+\begin{proposition} Let $M$ be a finitely generated module over a DVR $R$ with uniformizer $\pi$. Then
+\[M=M_{tors}\oplus R^{\oplus n},\] where $M_{tors}$ can be annihilated by $\pi^k$ for some $k$.
+\end{proposition}
+\begin{proof}
+Let $M_{tors} \subset M$ be as in Lemma \ref{tors tors-free ses} so that $M/M_{tors}$ is torsion-free. Therefore, by Corollary \ref{DVR is PID} and Lemma \ref{PID means flat=tors free} we see that it is flat. But it is a finitely generated module over a local ring, so it is free. So we have $M/M_{tors}=R^{\oplus n}$ for some $n$. Furthermore, since $R^{\oplus n}$ is free, it is projective, so the sequence of Lemma \ref{tors tors-free ses} splits, and
+\[M=M_{tors} \oplus R^{\oplus n}\]
+as desired.
+\end{proof}
+
+There is nothing more to say about the free part, so let us discuss the torsion part in more detail.
+
+\begin{lemma} Any finitely generated torsion module over a DVR is of the form
+\[\bigoplus_i R/\pi^{n_i}R.\]
+\label{dvr fin gen tor module struct}
+\end{lemma}
+Before we prove this, let us give two examples:
+\begin{enumerate}
+\item Take $R=k[[t]]$, which is a DVR with maximal ideal $(t)$. For a finitely generated torsion module $M$, the operator $t:M \to M$ is nilpotent, and each summand $k[[t]]/t^n$ is a Jordan block for $t$. So the lemma is exactly the statement that a nilpotent linear transformation can be put in Jordan block form.
+\item Let $R=\mathbb{Z}_p$. Here the lemma implies that finitely generated torsion modules over $\mathbb{Z}_p$ can be written as a direct sum of $p$-groups.
+\end{enumerate}
+Now let us proceed with the proof of the lemma.
+\begin{proof}[Proof of Lemma \ref{dvr fin gen tor module struct}] Let $n$ be the minimal integer such that $\pi^n$ kills $M$. Then $M$ is a module over $R_n=R/\pi^nR$, and there is an injective map $R_n \hookrightarrow M$: choose an element $m \in M$ which is not annihilated by $\pi^{n-1}$, and take the map $1 \mapsto m$.
+
+Proceeding by induction, it suffices to show that the above map $R_n \hookrightarrow M$ splits. But for this it suffices that $R_n$ is an injective module over itself. This property of rings is called the Frobenius property, and it is very rare. We will write this as a lemma.
+\begin{lemma} $R_n$ is injective as a module over itself.
+\label{Rn Frobenius}
+\end{lemma}
+\begin{proof}[Proof of Lemma \ref{Rn Frobenius}] Note that a module $M$ over a ring $R$ is injective if and only if for any ideal $I \subset R$, $\Ext^1(R/I,M)=0$. This was shown on Problem Set 8, Problem 2a.
+
+Thus we wish to show that for any ideal $I$, $\Ext^1_{R_n}(R_n/I,R_n)=0$. Note that since $R$ is a DVR, we know that it is a PID, and also any element has the form $r=\pi^kr_0$ for some $k \geq 0$ and some $r_0$ invertible. Then all ideals in $R$ are of the form $(\pi^k)$ for some $k$, so all ideals in $R_n$ are also of this form. Therefore, $R_n/I=R_m$ for some $m \leq n$, so it suffices to show that for $m \leq n$, $\Ext^1_{R_n}(R_m,R_n)=0$.
+
+But note that we have short exact sequence
+\[ 0 \to R_{n-m} \stackrel{\pi^m}{\to} R_n \to R_m \to 0\]
+which gives a corresponding long exact sequence of $\Ext$s
+\[0 \to \Hom_{R_n}(R_m,R_n) \to \Hom_{R_n}(R_n,R_n) \stackrel{\heartsuit}{\to} \Hom_{R_n}(R_{n-m},R_n)\]
+\[\to \Ext^1_{R_n}(R_m,R_n) \to \Ext^1_{R_n}(R_n,R_n) \to \cdots\]
+But note that any map of $R_n$-modules, $R_{n-m} \to R_n$, must map $1 \in R_{n-m}$ to an element which is killed by $\pi^{n-m}$, which means it must be a multiple of $\pi^m$, say $\pi^ma$. Then the map is
+\[r \mapsto \pi^mar,\]
+which is the image of the map
+\[[r \mapsto ar] \in \Hom_{R_n}(R_n,R_n).\]
+Thus, $\heartsuit$ is surjective.
+Also note that $R_n$ is projective over itself, so $\Ext^1_{R_n}(R_n,R_n)=0$. This, along with the surjectivity of $\heartsuit$, shows that
+\[\Ext^1_{R_n}(R_m,R_n)=0\]
+as desired.
+\end{proof}
+As mentioned earlier, this lemma concludes our proof of Lemma \ref{dvr fin gen tor module struct} as well.
+\end{proof}
+
+
+
diff --git a/books/cring/homologicallocal.tex b/books/cring/homologicallocal.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e2459a701a282366ba5435fbd704feff09120e5c
--- /dev/null
+++ b/books/cring/homologicallocal.tex
@@ -0,0 +1,2118 @@
+\chapter{Homological theory of local rings}
+
+We will then apply the general theory to commutative algebra proper. The use
+of homological machinery provides a new and elegant characterization of regular local
+rings (among noetherian local rings, they are the ones with finite global
+dimension) and leads to proofs of several difficult results about them.
+For instance, we will be able to prove the rather important result (which one
+repeatedly uses in algebraic geometry) that a
+regular local ring is a UFD.
+As another example, the aforementioned criterion leads to a direct proof of
+the otherwise non-obvious fact that a localization of a regular local ring at a
+prime ideal is still a regular local ring.
+
+\textbf{Note: right now, the material on regular local rings is still missing!
+It should be added.}
+
+\section{Depth}
+
+In this section, we first introduce the notion of \emph{depth} for local rings
+via the $\ext$ functor, and then show that depth can be measured as the length
+of a maximal \emph{regular sequence}. After this, we study the theory of
+regular sequences in general (on not-necessarily-local rings), and show that
+the depth of a module can be bounded in terms of both its dimension and its
+associated primes.
+\subsection{Depth over local rings}
+
+Throughout, let $(R, \mathfrak{m})$ be a noetherian
+local ring. Let $k = R/\mathfrak{m}$ be the residue field.
+
+Let $M \neq 0$ be a finitely generated $R$-module. We are going to define an
+arithmetic invariant of $M$, called the \emph{depth}, that will measure in
+some sense the torsion of $M$.
+
+\begin{definition}
+The \textbf{depth} of $M$ is equal to the smallest integer $i$
+such that
+$\ext^i(k,M) \neq 0$. If there is no such integer, we set $\depth M = \infty$.
+\end{definition}
+
+We shall give another characterization of this shortly that makes no reference
+to $\ext$ functors, and is purely elementary.
+We will eventually see that there is always such an $i$ (at least if $M \neq
+0$), so $\depth M < \infty$.
+
+\begin{example}[Depth zero] Let us characterize when a module $M$ has depth zero.
+Depth zero is equivalent to saying that $\ext^0(k,M) = \hom_R(k, M) \neq 0$,
+i.e. that there is a
+nontrivial morphism
+\[ k \to M. \]
+As $k = R/\mathfrak{m}$, the existence of such a map is
+equivalent to the existence of a nonzero $x$
+such that $\ann(x) = \mathfrak{m}$, i.e. $\mathfrak{m} \in
+\ass(M)$. So depth
+zero is equivalent to having $\mathfrak{m} \in \ass(M)$.
+\end{example}
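+
+As a simple instance of this criterion, take $R = k[[x]]$ and $M = k =
+R/(x)$. Every element of $\mathfrak{m} = (x)$ annihilates the generator, so
+$\mathfrak{m} = \ann(1) \in \ass(M)$ and $\depth k = 0$. By contrast, $x$ is a
+nonzerodivisor on $R$ itself, so $\depth R > 0$ (in fact $\depth R = 1$).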
+
+Suppose now that $\depth(M) \neq 0$. In particular,
+$\mathfrak{m} \notin
+\ass(M)$. Since $\ass(M)$ is finite, prime avoidance implies that
+$\mathfrak{m}
+\not\subset \bigcup_{\mathfrak{p} \in \ass(M)} \mathfrak{p}$.
+Thus
+$\mathfrak{m}$ contains an element which is a nonzerodivisor on
+$M$ (see \cref{assmdichotomy}). So we find:
+
+\begin{proposition} \label{depthzero}
+$M$ has depth zero iff every element in $\mathfrak{m}$ is a
+zerodivisor on $M$.
+\end{proposition}
+
+Now suppose $\depth M \neq 0$. There is $a \in \mathfrak{m}$
+which is a
+nonzerodivisor on $M$, i.e. such that there is
+an exact sequence
+\[ 0 \to M \stackrel{a}{\to} M \to M/aM \to 0. \]
+For each $i$, there is an exact sequence in $\ext$ groups:
+\begin{equation} \label{extlongextdepth}\ext^{i-1}(k,M/aM) \to \ext^i(k,M) \stackrel{a}{\to} \ext^i(k,M)
+\to \ext^i(k,
+M/aM) \to \ext^{i+1}(k,M) .\end{equation}
+However, the map $a: \ext^i(k,M) \to \ext^i(k,M)$ is zero as
+multiplication by $a$
+kills $k$. (If $a$ kills a module $N$,
+then it kills
+$\ext^*(N,M)$ for all $M$.) We see from this that
+\[ \ext^i(k,M) \hookrightarrow \ext^i(k,M/aM) \]
+is injective, and
+\[ \ext^{i-1}(k, M/aM) \twoheadrightarrow \ext^i(k,M) \]
+is surjective.
+
+\begin{corollary} \label{depthdropsbyone}
+If $a \in \mathfrak{m}$ is a nonzerodivisor on $M$, then
+\[ \depth(M/aM) = \depth M -1. \]
+\end{corollary}
+\begin{proof}
+When $\depth M = \infty$, this is easy (left to the reader) from
+the exact
+sequence. Suppose $\depth(M) = n$. We would like to see that
+$\depth M/aM =
+n-1$. That is, we want to see that $\ext^{n-1}(k,M/aM) \neq 0$,
+but
+$\ext^i(k,M/aM) =
+0$ for $i < n-1$. This is direct from the sequence \eqref{extlongextdepth} above.
+In fact, surjectivity of $\ext^{n-1}(k,M/aM) \to \ext^n(k,M)$
+shows that
+$\ext^{n-1}(k,M/aM) \neq 0$. Now let $i < n-1$.
+Then in \eqref{extlongextdepth}, $\ext^i(k, M/aM)$ is sandwiched between two
+zeros, so it is zero.
+\end{proof}
+
+The moral of the above discussion is that when one quotients out by a nonzerodivisor, the depth drops by one.
+In fact, we have described a recursive algorithm for computing
+$\depth(M)$.
+\begin{enumerate}
+\item If $\mathfrak{m} \in \ass(M)$, output zero.
+\item If $\mathfrak{m} \notin \ass(M)$, choose an element $a
+\in\mathfrak{m}$
+which is a nonzerodivisor on $M$. Output $\depth(M/aM) +1$.
+\end{enumerate}
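+
+Let us run the algorithm on a small example: $R = M = k[[x,y]]$, with
+$\mathfrak{m} = (x,y)$. As $R$ is a domain, $\mathfrak{m} \notin \ass(R)$;
+take the nonzerodivisor $a = x$. Then $R/xR = k[[y]]$, on which $y$ is a
+nonzerodivisor, and $k[[y]]/(y) = k$ has $\mathfrak{m} \in \ass(k)$. So the
+algorithm terminates with
+\[ \depth k[[x,y]] = 1 + 1 + 0 = 2. \]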
+
+
+If one wished to apply this in practice, one would probably start by
+looking for a
+nonzerodivisor $a_1 \in \mathfrak{m}$ on $M$, and then looking for
+one on $M/a_1
+M$, etc.
+From this we make:
+
+\begin{definition}
+Let $(R, \mathfrak{m})$ be a local noetherian ring, $M$ a finite
+$R$-module. A
+sequence $a_1, \dots, a_n \in \mathfrak{m}$ is said to be
+\textbf{$M$-regular} iff:
+\begin{enumerate}
+\item $a_1$ is a nonzerodivisor on $M$
+\item $a_2$ is a nonzerodivisor on $M/a_1 M$
+\item $\dots$
+\item $a_i$ is a nonzerodivisor on $M/(a_1, \dots, a_{i-1})M$
+for all $i$.
+\end{enumerate}
+A regular sequence $a_1, \dots, a_n$ is \textbf{maximal} if it
+can be extended
+no further, i.e. there is no $a_{n+1}$ such that $a_1, \dots,
+a_{n+1}$ is
+$M$-regular.
+\end{definition}
+
+We now get the promised ``elementary'' characterization of depth.
+\begin{corollary} \label{depthregular}
+$\depth(M)$ is the length of every maximal $M$-regular
+sequence. In particular,
+all maximal $M$-regular sequences have the same length.
+\end{corollary}
+
+\begin{proof}
+If $a_1, \dots, a_n$ is $M$-regular, then
+\[ \depth M/(a_1, \dots, a_i)M = \depth M -i \]
+for each $i$, by an easy induction on $i$ and \cref{depthdropsbyone}.
+From this the result is clear, because depth zero occurs precisely when
+$\mathfrak{m}$ is an associated prime (\cref{depthzero}). But it is also clear
+that a regular sequence $a_1, \dots, a_n$ is maximal precisely when every
+element of $\mathfrak{m}$ acts as a zerodivisor on $M/(a_1, \dots, a_n) M$,
+that is, $\mathfrak{m} \in \ass(M/(a_1, \dots, a_n)M)$.
+\end{proof}
+
+\begin{remark}
+We could \emph{define} the depth via the length of a maximal
+$M$-regular sequence.
+\end{remark}
+
+Finally, we can bound the depth in terms of the dimension.
+
+\begin{corollary}\label{depthboundlocal} Let $M \neq 0$. Then the depth of $M$ is finite. In fact,
+\begin{equation} \label{depthbound} \depth M \leq \dim M. \end{equation}
+\end{corollary}
+\begin{proof}
+When $\depth M = 0$, the assertion is obvious.
+Otherwise,
+there is $ a \in \mathfrak{m}$ which is a nonzerodivisor on $M$.
+We know that
+\[ \depth M/aM = \depth M -1 \]
+and (by \cref{dimdropsbyone})
+\[ \dim M/aM = \dim M -1. \]
+By induction on $\dim M$, we have that $\depth M/aM \leq \dim M/aM$.
+From this the
+induction step is clear, because $\depth$ and $\dim$ both drop by one after
+quotienting.
+\end{proof}
+
+Generally, the depth is not the dimension.
+\begin{example}
+Given any $M$, adding $k$ makes the depth zero: $M \oplus k$
+has $\mathfrak{m}$ as an associated prime. But the dimension
+ does not
+jump to zero just by adding a copy of $k$. If $M$ is a direct sum of pieces of
+differing dimensions, then the bound \eqref{depthbound} does not exhibit
+equality.
+In fact, if $M, M'$ are finitely generated modules, then we have
+\[ \depth M \oplus M' = \min \left(\depth M, \depth M' \right), \]
+which follows at once from the definition of depth in terms of
+vanishing $\ext$ groups.
+\end{example}
+
+\begin{exercise}
+Suppose $R$ is a noetherian local ring whose depth (as a module over itself)
+is zero. If $R$ is reduced, then $R$ is a field.
+\end{exercise}
+
+Finally, we include a result that states that the depth does not depend
+on the ring so much as the module.
+\begin{proposition}[Depth and change of rings]
+Let $(R, \mathfrak{m}) \to (S, \mathfrak{n})$ be a morphism of
+noetherian local rings. Suppose $M$ is a finitely generated $S$-module,
+which is also finitely generated as an $R$-module. Then
+$\depth_R M = \depth_S M$.
+\end{proposition}
+\begin{proof}
+It is clear that we have the inequality $\depth_R M \leq \depth_S M$, by the
+interpretation of depth via regular sequences. Let $x_1, \dots, x_n \in R$
+be a maximal $M$-sequence. We need to show that it is a maximal $M$-sequence
+in $S$ as well. By quotienting, we may replace $M$ with $M/(x_1,\dots, x_n)M$;
+we then have to show that if $M$ has depth zero as an $R$-module, it has
+depth zero as an $S$-module.
+
+But then $\hom_R(R/\mathfrak{m}, M) \neq 0$. This is an $R$-submodule of
+$M$, consisting of elements killed by $\mathfrak{m}$,
+and in fact it is an $S$-submodule. We are going to show that $\mathfrak{n}$
+annihilates some element of it, which will imply that $\depth_S M = 0$.
+
+To see this, note that $\hom_R(R/\mathfrak{m}, M)$ is artinian as an
+$R$-module (as it is killed by $\mathfrak{m}$). As a result, it is an
+artinian $S$-module, which means it contains $\mathfrak{n}$ as an
+associated prime, proving the claim and the result.
+\end{proof}
+\subsection{Regular sequences}
+
+In the previous subsection, we defined the notion of \emph{depth} of a
+finitely generated module over a noetherian local ring using the $\ext$
+functors. We then showed that the depth was the length of a maximal regular
+sequence.
+
+Now, although it will not be necessary for the main results in this chapter, we want to generalize this to the case of a non-local ring. Most of the
+same arguments go through, though there are some subtle differences. For
+instance, regular sequences remain regular under permutation in the local
+case, but not in general. Since there will be some repetition, we shall try to
+be brief.
+
+We start by generalizing the idea of a regular sequence which is not required
+to be contained in the maximal ideal of a local ring.
+Let $R$ be a noetherian ring, and $M$ a finitely generated $R$-module.
+\begin{definition}
+A sequence $x_1, \dots, x_n \in R$ is \textbf{$M$-regular} (or is an
+\textbf{$M$-sequence}) if for each $k \leq n$, $x_k$ is a nonzerodivisor on the
+$R$-module $M/(x_1, \dots, x_{k-1}) M$ and also $(x_1, \dots, x_n) M \neq M$. \end{definition}
+
+So $x_1$ is a nonzerodivisor on $M$, by the first part. That is, the homothety
+$M \stackrel{x_1}{\to} M$ is injective.
+The last condition is also going to turn out to be necessary for us. In the
+previous subsection, it was automatic as $\mathfrak{m}M \neq M$ (unless $M =
+0$) by Nakayama's lemma as $M$ was assumed finitely generated.
+
+
+The property of being a regular sequence is inherently an inductive one. Note
+that $x_1, \dots, x_n$ is a regular sequence on $M$ if and only if $x_1$ is a
+nonzerodivisor on $M$ and $x_2, \dots, x_n$ is an $M/x_1 M$-sequence.
+
+
+\begin{definition}
+If $M$ is an $R$-module and $I \subset R$ an ideal, then we write $\depth_I M$
+for the length of a longest $M$-sequence contained in $I$.
+When $R$ is local and $I \subset R$ the maximal ideal, then we just write
+$\depth M$ as before.
+\end{definition}
+
+While we will in fact have a similar characterization of $\depth$ in terms of
+$\ext$, in this section we \emph{define} it via regular sequences.
+
+\begin{example}
+The basic example one is supposed to keep in mind is the polynomial ring $R =
+R_0[x_1, \dots, x_n]$ and $M = R$. Then the sequence $x_1, \dots, x_n$ is
+regular in $R$.
+\end{example}
+
+\begin{example}
+Let $(R, \mathfrak{m})$ be a regular local ring, and let $x_1, \dots, x_n$ be a
+regular system of parameters in $R$ (i.e. a system of generators for
+$\mathfrak{m}$ of minimal size). Then we have seen that the
+$\left\{x_i\right\}$ form a regular sequence on $R$, in any order. This is
+because each quotient $R/(x_1, \dots, x_i)$ is itself regular, hence a domain.
+\end{example}
+
+As before, we have a simple characterization of depth zero:
+\begin{proposition} Let $R$ be noetherian, $M$ finitely generated.
+If $M$ is an $R$-module with $IM \neq M$, then $\depth_I M = 0$ if and only
+if $I$ is contained in an element of $\ass(M)$.
+\end{proposition}
+\begin{proof}
+This is analogous to \cref{depthzero}. Note that an ideal consists of
+zerodivisors on $M$ if and only if it is contained in an associated prime
+(\cref{assmdichotomy}).
+\end{proof}
+
+The above proof used \cref{assmdichotomy}, a key fact which will be used
+repeatedly in the sequel.
+This is one reason the theory of depth works best for finitely generated
+modules over noetherian rings.
+
+The first observation to make is that regular sequences are \textit{not}
+preserved by permutation in general. This is one nice property that we would
+like, but which fails.
+
+\begin{example} Let $k$ be a field.
+Consider $R=k[x,y,z]/((x-1)y, yz)$. Then $x,z$ is a regular sequence on $R$. Indeed,
+$x$ is a nonzerodivisor and $R/(x) = k[z]$. However, $z,
+x$ is not a regular sequence, because $z$ is a zerodivisor on $R$: the image of
+$y$ is nonzero, but $yz = 0$.
+\end{example}
+
+Nonetheless, regular sequences \emph{are} preserved by permutation for local rings under
+suitable noetherian hypotheses:
+\begin{proposition}
+Let $R$ be a noetherian local ring and $M$ a finite $R$-module. Then if $x_1,
+\dots, x_n$ is an $M$-sequence contained in the maximal ideal, so is any permutation $x_{\sigma(1)}, \dots,
+x_{\sigma(n)}$.
+\end{proposition}
+\begin{proof}
+It is clearly enough to check this for a transposition. Namely, if we have an
+$M$-sequence
+\[ x_1, \dots, x_i, x_{i+1}, \dots, x_n \]
+we would like to check that so is
+\[ x_1, \dots, x_{i+1}, x_i, \dots, x_n. \]
+It is here that we use the inductive nature. Namely, all we need to do is check
+that
+\[ x_{i+1}, x_i, \dots,x_n \]
+is regular on $M/(x_1, \dots,x_{i-1}) M$, since the first part of the sequence
+will automatically be regular. Now $x_{i+2}, \dots, x_n$ will automatically be
+regular on $M/(x_1, \dots, x_{i+1})M$. So all we need to show is that
+$x_{i+1}, x_i$ is regular on $M/(x_1, \dots, x_{i-1})M$.
+
+The moral of the story is that we have reduced to the following lemma.
+
+\begin{lemma}
+Let $R$ be a noetherian local ring. Let $N$
+be a finite $R$-module and
+$a,b \in R$ an $N$-sequence contained in the maximal ideal. Then so is $b,a$.
+\end{lemma}
+\begin{proof}
+We can prove this as follows. First, $a$ will be a nonzerodivisor on $N/bN$.
+Indeed, if not then we can write
+\[ an = bn' \]
+for some $n,n' \in N$ with $n \notin bN$. But $b$ is a nonzerodivisor on
+$N/aN$, which means that $bn' \in aN$ implies $n' \in aN$. Say $n' = an''$. So
+$an = ba n''$. As $a$ is a nonzerodivisor on $N$, we see that $n = bn''$. Thus
+$n \in bN$, contradiction.
+This part has not used the fact that $R$ is local.
+
+Now we claim that $b$ is a nonzerodivisor on $N$. Suppose $n \in N$ and $bn =
+0$. Since $b$ is a nonzerodivisor on $N/aN$, we have that $n \in aN$, say $n =
+an'$. Thus
+\[ b(an') = a(bn') = 0. \]
+The fact that $N \stackrel{a}{\to} N$ is injective implies that $bn' = 0$. So
+we can do the same and get $n' = an''$, $n'' = a n^{(3)}, n^{(3)} =a n^{(4)}$, and
+so on. It follows that $n$ is a multiple of $a, a^2,a^3, \dots$, and hence in
+$\mathfrak{m}^j N$ for each $j$ where $\mathfrak{m} \subset R$ is the maximal
+ideal. The Krull intersection theorem now implies that $n = 0$.
+
+Together, these arguments imply that $b,a$ is an $N$-sequence, proving the
+lemma.
+\end{proof}
+The proof of the result is now complete.
+\end{proof}
+
+
+One might wonder what goes wrong, and why permutations do not preserve
+regular sequences in general; after all, oftentimes we can reduce results
+to their analogs for local rings. Yet the fact that regularity is preserved by
+permutations for local rings does not extend to arbitrary rings.
+The problem is that regular sequences do \emph{not} localize. Well, they almost
+do, but the final condition that $(x_1, \dots, x_n) M \neq M$ doesn't get
+preserved.
+We can state:
+
+\begin{proposition}
+Suppose $x_1, \dots, x_n$ is an $M$-sequence. Let $N$ be a flat $R$-module.
+Then if $(x_1, \dots, x_n)(M \otimes_R N) \neq M \otimes_R N$, then $x_1, \dots, x_n$
+is an $M \otimes_R N$-sequence.
+\end{proposition}
+\begin{proof}
+This is actually very easy now. The fact that $x_i: M/(x_1, \dots, x_{i-1})M
+\to M/(x_1, \dots, x_{i-1})M$ is injective is preserved when $M$ is replaced by
+$M \otimes_R N$ because the functor $- \otimes_R N$ is exact.
+\end{proof}
+
+In particular, it follows that if we have a good reason for supposing that
+$(x_1,\dots, x_n) (M \otimes_R N) \neq M \otimes_R N$, then we'll already be
+done. For instance, suppose $N$ is the localization of $R$ at a prime ideal
+$\mathfrak{p}$ containing the $x_i$, and $M$ is finitely generated with
+$M_{\mathfrak{p}} \neq 0$; then Nakayama's lemma gives $(x_1, \dots, x_n)
+M_{\mathfrak{p}} \neq M_{\mathfrak{p}}$, so automatically $x_1, \dots, x_n$ is
+an $M_{\mathfrak{p}} = M \otimes_R R_{\mathfrak{p}}$-sequence.
+
+Finally, we have an analog of the previous correspondence between depth and
+the vanishing of $\ext$. Since the argument is analogous to
+\cref{depthregular}, we omit it.
+\begin{theorem}
+\label{depthextI}
+Let $R$ be a noetherian ring. Suppose $M$ is a finitely generated $R$-module and $IM \neq M$.
+All maximal $M$-sequences in $I$ have the same length. This length is the
+smallest value of $r$ such that $\mathrm{Ext}^r(R/I, M) \neq 0$.
+\end{theorem}
+
+\begin{exercise}
+Suppose $I$ is an ideal in $R$. Let $M$ be an $R$-module such that $IM \neq
+M$. Show that $\depth_I M \geq 2$ if and only if the natural map
+\[ M \simeq \hom(R, M) \to \hom(I, M) \]
+is an isomorphism.
+\end{exercise}
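+
+As a simple check on this exercise (an illustration, not part of the
+statement), take $R = k[x,y]$ for a field $k$ and $I = (x,y)$, so that
+$\depth_I R = 2$ via the regular sequence $x, y$. If $\phi: I \to R$ is
+$R$-linear, then $y \phi(x) = \phi(xy) = x \phi(y)$; by unique factorization
+$x \mid \phi(x)$, say $\phi(x) = xr$, and then $\phi(y) = yr$. So $\phi$ is
+multiplication by $r \in R$, and $\hom(R,R) \to \hom(I,R)$ is indeed an
+isomorphism.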
+
+
+\subsection{Powers of regular sequences}
+
+Regular sequences don't necessarily behave well with respect to permutation or
+localization without additional hypotheses. However, in all cases they behave
+well with respect to taking powers. The upshot of this is that the invariant
+called \textit{depth} that we will soon introduce is invariant under passing to
+the radical.
+
+We shall deduce this from the following easy fact.
+\begin{lemma}
+Suppose we have an exact sequence of $R$-modules
+\[ 0 \to M' \to M \to M'' \to 0. \]
+Suppose the sequence $x_1, \dots, x_n \in R$ is $M'$-regular and $M''$-regular.
+Then it is $M$-regular.
+\end{lemma}
+The converse is not true, of course.
+\begin{proof}
+Morally, this is the snake lemma. For instance, the fact that multiplication by
+$x_1$ is injective on $M', M''$ implies, by the snake lemma, that $M
+\stackrel{x_1}{\to} M$ is injective. However, it is not a priori clear that a
+simple inductive argument on $n$ will work: one needs to know that quotienting
+each term by $(x_1, \dots, x_{n-1})$ preserves exactness. A general fact,
+proved below, will tell us that this is indeed the case.
+
+Anyway, this general fact now lets us induct on $n$. If we assume
+that $x_1, \dots, x_{n-1}$ is $M$-regular, we need only prove that $x_{n}:
+M/(x_1, \dots, x_{n-1})M
+\to M/(x_1, \dots, x_{n-1})M$ is injective. (It is not surjective, or the
+sequence would not be $M''$-regular.) But we have the exact sequence by the
+next lemma,
+\[ 0 \to M'/(x_1, \dots, x_{n-1})M' \to M/(x_1, \dots, x_{n-1})M \to M''/(x_1,
+\dots, x_{n-1})M'' \to 0 \]
+and the injectivity of $x_n$ on the two ends implies it at the middle by the
+snake lemma.
+\end{proof}
+
+So we need to prove:
+\begin{lemma}
+Suppose $0 \to M' \to M \to M'' \to 0$ is a short exact sequence. Let $x_1,
+\dots, x_m$ be an $M''$-sequence. Then the sequence
+\[ 0 \to M'/(x_1, \dots, x_m)M' \to M/(x_1, \dots, x_m)M \to M''/(x_1, \dots,
+x_m)M'' \to 0\]
+is exact as well.
+\end{lemma}
+One argument here uses the fact that the Tor functors vanish when one has a
+regular sequence like this. We can give a direct argument.
+\begin{proof}
+By induction, this need only be proved when $m=1$, since we have the recursive
+description of regular sequences: in general, $x_2, \dots, x_m$ will be regular
+on $M''/x_1 M''$.
+In any case, we have exactness except possibly at the left as the tensor
+product is right-exact. So let $m' \in M'$; suppose $m'$ maps to a multiple of
+$x_1$ in $M$. We need to show that $m'$ is a multiple of $x_1$ in $M'$.
+
+Suppose $m'$ maps to $x_1 m$. Then $x_1m$ maps to zero in $M''$, so by regularity $m$
+maps to zero in $M''$. Thus $m$ comes from something, $\overline{m}'$, in $M'$. In particular
+$m' - x_1 \overline{m}'$ maps to zero in $M$, so it is zero in $M'$. Thus
+indeed $m'$ is a multiple of $x_1$ in $M'$.
+\end{proof}
+
+With this lemma proved, we can state:
+\begin{proposition}
+\label{powregseq}
+Let $M$ be an $R$-module and $x_1, \dots, x_n$ an $M$-sequence. Then $x_1^{a_1}
+,\dots, x_n^{a_n}$ is an $M$-sequence for any $a_1, \dots, a_n \in
+\mathbb{Z}_{>0}$.
+\end{proposition}
+
+\begin{proof}
+We will use:
+\begin{lemma}
+Suppose $x_1, \dots, x_i, \dots, x_n$ and $x_1, \dots, x_i', \dots, x_n$ are
+$M$-sequences for some $M$. Then so is $x_1, \dots, x_i x_i', \dots, x_n$.
+\end{lemma}
+
+\begin{proof}
+As usual, we can mod out by $(x_1, \dots, x_{i-1})$ and thus assume that $i=1$.
+We have to show that if $x_1, \dots, x_n$ and $x_1', \dots, x_n$ are
+$M$-sequences, then so is $x_1 x_1', \dots, x_n$.
+
+We have an exact sequence
+\[ 0 \to x_1 M/x_1 x_1' M \to M/x_1 x_1' M \to M/x_1 M \to 0. \]
+Now $x_2, \dots, x_n$ is regular on the last term by assumption, and also on
+the first term, which is isomorphic to $M/x_1' M$ as $x_1$ acts as a
+nonzerodivisor on $M$. So $x_2, \dots, x_n$ is regular on both ends, and thus
+in the middle. This means that
+\[ x_1 x_1', \dots, x_n \]
+is $M$-regular. That proves the lemma.
+\end{proof}
+
+So we now can prove the proposition. The case $\sum a_i = n$ (i.e.\ all the
+$a_i$ equal to $1$) is trivial. In general, we can use complete induction on $\sum
+a_i$. Suppose we know the result for smaller values of $\sum a_i$. We can
+assume that some $a_j >1$.
+Then the sequence
+\[ x_1^{a_1}, \dots, x_j^{a_j}, \dots, x_n^{a_n} \]
+is obtained from the sequences
+\[ x_1^{a_1}, \dots,x_j^{a_j - 1}, \dots, x_n^{a_n} \]
+and
+\[ x_1^{a_1}, \dots,x_j^{1}, \dots, x_n^{a_n} \]
+by multiplying the middle terms. But the complete induction hypothesis implies
+that both those two sequences are $M$-regular, so we can apply the lemma.
+\end{proof}
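+
+To illustrate the lemma used above: in $k[x,y]$, both $x,\ y$ and $x+y,\ y$ are
+regular sequences, so the lemma shows that $x(x+y),\ y$ is also a regular
+sequence, which is less obvious by direct inspection.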
+
+In general, the termwise product of two regular sequences is not a regular sequence. For
+instance, consider a regular sequence $x,y$ on some finitely generated module $M$ over a
+noetherian local ring. Then $y,x$ is regular, but the product sequence $xy, xy$
+is \emph{never} regular: $xy$ annihilates $M/xyM$, which is nonzero by
+Nakayama's lemma.
+
+
+\subsection{Depth}
+
+We make the following definition slightly differently than in the local case:
+
+\begin{definition}
+Suppose $I$ is an ideal such that $IM \neq M$. Then we define the
+\textbf{$I$-depth of $M$} to be the maximum length of an $M$-sequence contained
+in $I$. When $R$ is a local ring and $I$ the maximal ideal, that number is
+simply called the \textbf{depth} of $M$.
+
+The \textbf{depth} of a proper ideal $I \subset R$ is its depth on $R$.
+\end{definition}
+
+
+The definition is slightly awkward, but it turns out that all maximal
+$M$-sequences in $I$ have the same length, as we saw in \cref{depthextI}. So we can use any of them to compute
+the depth.
+
+The first thing we can prove using the above machinery is that depth is really
+a ``geometric'' invariant, in that it depends only on the radical of $I$.
+
+\begin{proposition}
+Let $R$ be a ring, $I \subset R$ an ideal, and $M$ an $R$-module
+with $IM \neq M$. Then $\mathrm{depth}_I M = \mathrm{depth}_{\mathrm{Rad}(I)} M$.
+\end{proposition}
+\begin{proof}
+The inequality $\mathrm{depth}_I M \leq \mathrm{depth}_{\mathrm{Rad}(I)} M$ is trivial, so we need only
+show that if $x_1, \dots, x_n$ is an $M$-sequence in $\mathrm{Rad}(I)$, then there is
+an $M$-sequence of length $n$ in $I$. For this we just take a high power
+\[ x_1^N, \dots, x_n^{N} \]
+where $N$ is large enough such that everything is in $I$. We can do this as
+powers of $M$-sequences are $M$-sequences (\cref{powregseq}).
+\end{proof}
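+
+\begin{example}
+For instance, let $R = k[x,y]$ and $I = (x^2, y^2)$, so that $\mathrm{Rad}(I)
+= (x,y)$. The proposition gives $\mathrm{depth}_I R = \mathrm{depth}_{(x,y)} R = 2$;
+explicitly, squaring the regular sequence $x, y$ produces the $R$-sequence
+$x^2, y^2$ contained in $I$.
+\end{example}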
+
+This was a fairly easy consequence of the above result on powers of regular
+sequences. On the other hand, we want to give another proof, because it will
+let us do more. Namely, we will show that depth is really a function of prime
+ideals.
+
+For convenience, we set the following condition: if $IM = M$, we define
+\[ \mathrm{depth}_I (M) = \infty. \]
+
+\begin{proposition} \label{depthlocal}
+Let $R$ be a noetherian ring, $I \subset R$ an ideal, and $M$ a finitely generated $R$-module.
+Then
+\[ \mathrm{depth}_I M = \min_{\mathfrak{p} \in V(I)} \mathrm{depth}_{\mathfrak{p}} M . \]
+\end{proposition}
+
+So the depth of $I$ on $M$ can be calculated from the depths at each
+prime containing $I$. In this sense, it is clear that $\mathrm{depth}_I (M)$ depends
+only on $V(I)$ (and the depths on those primes), so clearly it depends only on
+$I$ \emph{up to radical}.
+
+\begin{proof}
+In this proof, we shall {use the fact that the length of every maximal
+$M$-sequence is the same} (\cref{depthextI}).
+
+It is obvious that we have an inequality
+\[ \mathrm{depth}_I M \leq \min_{\mathfrak{p} \in V(I)} \mathrm{depth}_{\mathfrak{p}} M \]
+as each of those primes contains $I$.
+We are to prove that there is
+a prime $\mathfrak{p}$ containing $I$ with
+\[ \mathrm{depth}_I M = \mathrm{depth}_{\mathfrak{p}} M . \]
+But we shall actually prove the stronger statement that there is $\mathfrak{p}
+\supset I$ with $\mathrm{depth}_{\mathfrak{p}} M_{\mathfrak{p}} = \mathrm{depth}_I M$. Note
+that localization at a prime can only increase depth, because an $M$-sequence in
+$\mathfrak{p}$ leads to an $M_{\mathfrak{p}}$-sequence thanks to
+Nakayama's lemma and the flatness of localization.
+
+So let $x_1, \dots, x_n \in I$ be an $M$-sequence of maximum length. Then $I$
+acts by zerodivisors on
+$M/(x_1 , \dots, x_n) M$ or we could extend the sequence further.
+In particular, $I$ is contained in an associated prime of $M/(x_1, \dots, x_n)
+M$ by elementary commutative algebra (basically, prime avoidance).
+
+Call this associated prime $\mathfrak{p} \in V(I)$. Then $\mathfrak{p}$ is an
+associated prime of $M_{\mathfrak{p}}/(x_1, \dots, x_n) M_{\mathfrak{p}}$,
+and in particular acts only by zerodivisors on this module.
+Thus the $M_{\mathfrak{p}}$-sequence $x_1, \dots, x_n$ can be extended no
+further in $\mathfrak{p}$. In particular, since the depth
+can be computed as the length of \emph{any} maximal $M_{\mathfrak{p}}$-sequence,
+\[ \mathrm{depth}_{\mathfrak{p}} M_{\mathfrak{p}} = \mathrm{depth}_I M. \]
+\end{proof}
+
+Perhaps we should note a corollary of the argument above:
+\begin{corollary} \label{depthlocal2}
+Hypotheses as above, we have $\mathrm{depth}_I M \leq \mathrm{depth}_\mathfrak{p} M_{\mathfrak{p}}$ for
+any prime $\mathfrak{p} \supset I$. However, there is at least one $\mathfrak{p}
+\supset I$ where equality holds. \end{corollary}
+
+We are thus reduced to analyzing depth in the local case.
+
+\begin{exercise}
+\label{exer:depthcompletion}
+If $(R, \mathfrak{m})$ is a local noetherian ring and $M$ a finitely
+generated $R$-module, then show that $\depth M = \depth_{\hat{R}} \hat{M}$,
+where $\hat{M}$ is the $\mathfrak{m}$-adic completion. (Hint: use $\hat{M}= M
+\otimes_R \hat{R}$, and the fact that $\hat{R}$ is flat over $R$.)
+\end{exercise}
+\subsection{Depth and dimension}
+
+Consider an $R$-module $M$, which is always assumed to be finitely generated.
+Let $I \subset R$ be an ideal with $IM \neq M$.
+We deduce from the previous subsections:
+\begin{proposition}
+Let $M$ be a finitely generated module over the noetherian ring $R$. Then
+\[ \mathrm{depth}_I M \leq \dim M \]
+for any ideal $I \subset R$ with $IM \neq M$.
+\end{proposition}
+\begin{proof}
+We have proved this when $R$ is a \emph{local} ring
+(\cref{depthboundlocal}). Now we just use \cref{depthlocal2} to reduce to the
+local case.
+\end{proof}
+
+This does not tell us much about how $\mathrm{depth}_I M$ depends on $I$, though; it
+just says something about how it depends on $M$. In particular, it is not very
+helpful when trying to estimate $\mathrm{depth} I = \mathrm{depth}_I R$.
+Nonetheless, there is a somewhat stronger result, which we will need in the
+future.
+We start by stating the version in the local case.
+
+\begin{proposition} \label{localdepthassbound}
+Let $(R,\mathfrak{m})$ be a noetherian local ring. Let $M$ be a finite
+$R$-module. Then the depth of $\mathfrak{m}$ on $M$ is at most the dimension of
+$R/\mathfrak{p}$ for $\mathfrak{p}$ an associated prime of $M$:
+\[ \depth M \leq \min_{\mathfrak{p} \in \ass(M)}\dim R/\mathfrak{p}. \]
+\end{proposition}
+
+This is sharper than the bound $\depth M \leq \dim M$, because each $\dim
+R/\mathfrak{p}$ is at most $\dim M$ (by definition).
+\begin{proof}
+To prove this, first assume that the depth is zero. In that case, the result is
+immediate. We shall now argue inductively.
+Assume that this is true for modules of smaller depth.
+We will quotient out appropriately to shrink the
+support and change the associated
+primes. Namely, as the depth is positive, choose an $M$-regular element
+$x \in \mathfrak{m}$ (i.e.\ a nonzerodivisor on $M$).
+Then $\mathrm{depth} M/xM = \mathrm{depth} M -1$.
+
+Let $\mathfrak{p}_0$ be an associated prime of $M$.
+We claim that $\mathfrak{p}_0$ is \emph{properly} contained in an associated prime of
+$M/xM$.
+We will prove this below.
+Thus $\mathfrak{p}_0$ is properly contained in some $\mathfrak{q}_0 \in
+\ass(M/xM)$.
+
+Now we know that $\mathrm{depth} M/xM = \mathrm{depth} M -1$. Also, by the inductive
+hypothesis, we know that $\dim R/\mathfrak{q}_0 \geq \mathrm{depth} M/xM = \mathrm{depth} M
+-1$. But the dimension of $R/\mathfrak{q}_0$ is strictly smaller than that of
+$R/\mathfrak{p}_0$, so $\dim R/\mathfrak{p}_0 \geq \dim R/\mathfrak{q}_0 + 1
+\geq \mathrm{depth} M$. This
+proves the proposition, modulo the result:
+
+\begin{lemma} \label{screwylemmaonquotientassprime}
+Let $(R, \mathfrak{m})$ be a noetherian local ring. Let $M$ be a finitely
+generated $R$-module, $x \in \mathfrak{m}$ an $M$-regular element.
+Then each element of $\ass(M)$ is properly contained in an element of
+$\ass(M/xM)$.
+\end{lemma}
+So if we quotient by a regular element, we can make the associated primes jump
+up.
+\begin{proof} Let $\mathfrak{p}_0 \in \ass(M)$; we want to show
+$\mathfrak{p}_0$ is properly contained in something in $\ass(M/xM)$.
+
+Indeed, $x \notin \mathfrak{p}_0$ (as $x$ is a nonzerodivisor on $M$), so
+$\mathfrak{p}_0$ cannot itself be an associated prime of $M/xM$, since every
+associated prime of $M/xM$ contains $x$.
+However, $\mathfrak{p}_0$ annihilates a nonzero element of $M/xM$. To see this,
+consider a maximal principal submodule of $M$ annihilated by $\mathfrak{p}_0$.
+Let this submodule be $Rz$ for some $z \in M$. Then if $z$ is a multiple of
+$x$, say $z = xz'$, then $Rz'$ would be a larger
+submodule of $M$ annihilated by $\mathfrak{p}_0$---here we are using the fact
+that $x$ is a nonzerodivisor on $M$. So the image of this $z$ in $M/xM$ is
+nonzero and is clearly annihilated by $\mathfrak{p}_0$.
+It follows $\mathfrak{p}_0$ is contained in an element of
+$\ass(M/xM)$, necessarily properly.
+\end{proof}
+
+\end{proof}
+\begin{exercise}
+Another argument for \cref{screwylemmaonquotientassprime} is given in \S 16 of \cite{EGA}, vol. IV, by reducing to
+the coprimary case. Here is a sketch.
+
+The strategy is to use the existence of an exact sequence
+\[ 0 \to M' \to M \to M'' \to 0 \]
+with $\ass(M'') = \ass(M) - \left\{\mathfrak{p}_0\right\}$ and
+$\ass(M') = \left\{\mathfrak{p}_0\right\}$.
+Quotienting by $x$ preserves exactness, and we get
+\[ 0 \to M'/xM' \to M/xM \to M''/xM'' \to 0. \]
+Now $\mathfrak{p}_0$ is properly contained in every associated prime of
+$M'/xM'$ (as it acts nilpotently on $M'$). It follows that any element of
+$\ass(M'/xM') \subset \ass(M/xM)$ will do the job.
+
+In essence, the point is that the result is \emph{trivial} when $\ass(M)
+= \left\{\mathfrak{p}_0\right\}$.
+\end{exercise}
+
+\begin{exercise}
+Here is a simpler argument for \cref{screwylemmaonquotientassprime},
+following \cite{Se65}.
+Let $\mathfrak{p}_0 \in \ass(M)$, as before. Again as before, we want to show that
+$\hom_R(R/\mathfrak{p}_0, M/xM) \neq 0$.
+But we have an exact sequence
+\[ 0 \to \hom_R(R/\mathfrak{p}_0, M)
+\stackrel{x}{\to} \hom_R(R/\mathfrak{p}_0, M) \to
+\hom_R(R/\mathfrak{p}_0, M/xM) ,
+\]
+and since the first map is not surjective (by Nakayama), the last
+object is nonzero.
+\end{exercise}
+
+Finally, we can globalize the results:
+
+\begin{proposition}
+Let $R$ be a noetherian ring, $I \subset R$ an ideal, and $M$ a finitely
+generated module. Then $\mathrm{depth}_I M$ is at most the length of every chain
+of primes in $\mathrm{Spec} R$ that starts at an associated prime of $M$ and
+ends at a prime containing $I$.
+\end{proposition}
+
+\begin{proof} Currently omitted.
+\begin{comment}
+Consider a chain of primes $\mathfrak{p}_0 \subset \dots \subset
+\mathfrak{p}_k$ where $\mathfrak{p}_0$ is an associated prime and
+$\mathfrak{p}_k$ contains $I$.
+The goal is to show that
+\[ \mathrm{depth}_I M \leq k . \]
+By localization, we can assume that $\mathfrak{p}_k$ is the maximal ideal of
+$R$; recall that localization can only increase the depth.
+We can also assume $I$ is this maximal ideal, by increasing it.
+
+In this case, the result follows from the local version
+(\cref{localdepthassbound}).
+\end{comment}
+\end{proof}
+
+
+\section{Cohen-Macaulayness}
+
+\subsection{Cohen-Macaulay modules over a local ring}
+For a local noetherian ring, we have discussed two invariants of a module:
+dimension and depth. They generally do not coincide, and Cohen-Macaulay
+modules will be those where they do.
+
+Let $(R, \mathfrak{m})$ be a noetherian local ring.
+\begin{definition}
+A finitely generated $R$-module $M$ is \textbf{Cohen-Macaulay} if $\depth M =
+\dim M$. The ring $R$ is called \textbf{Cohen-Macaulay} if it is
+Cohen-Macaulay as a module over itself.
+\end{definition}
+
+We already know that the inequality $\depth M \leq \dim M$ always holds.
+If there is a system of parameters for $M$ (i.e., a sequence $x_1, \dots, x_r
+\in \mathfrak{m}$ such that $M/(x_1, \dots, x_r) M$ is artinian) which is a
+regular sequence on $M$, then $M$ is Cohen-Macaulay: we see in fact that
+$\dim M = \depth M = r$.
+This is the distinguishing trait of Cohen-Macaulay rings.
+
+Let us now give a few examples:
+
+\begin{example}[Regular local rings are Cohen-Macaulay]
+If $R$ is regular, then $\depth R = \dim R$, so $R$ is Cohen-Macaulay.
+
+Indeed, we have seen that if $x_1, \dots, x_n$ is a regular system of parameters
+for $R$ (i.e. a minimal set of generators for $\mathfrak{m}$), then $n
+= \dim R$ and the $\left\{x_i\right\}$ form a regular sequence. See the remark
+after \cref{quotientreg44}; the point is that $R/(x_1, \dots, x_{i-1})$ is
+regular for each $i$ (by the aforementioned corollary), and hence a
+domain, so $x_i$
+acts on it by a nonzerodivisor.
+\end{example}
+
+The next example easily shows that a Cohen-Macaulay ring need not be
+regular, or even a domain:
+\begin{example}[Local artinian rings are Cohen-Macaulay]
+Any local
+artinian ring is Cohen-Macaulay: its dimension is zero, and hence
+so is its depth.
+\end{example}
+
+
+\begin{example}[Cohen-Macaulayness and completion]
+A finitely generated module $M$ is Cohen-Macaulay if and only if its
+completion $\hat{M}$ is; this follows from \cref{exer:depthcompletion}.
+\end{example}
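+
+\begin{example}
+The local ring $R = k[[x,y]]/(xy)$ is Cohen-Macaulay of dimension one, though
+it is neither regular nor a domain. Indeed, $R$ is reduced with minimal primes
+$(x)$ and $(y)$, so the zerodivisors of $R$ are exactly $(x) \cup (y)$; thus
+$x - y$ is a nonzerodivisor. Since $R/(x-y) \simeq k[[x]]/(x^2)$ is artinian,
+the element $x - y$ is a system of parameters that is a regular sequence, and
+$\depth R = \dim R = 1$.
+\end{example}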
+
+Here is a slightly harder example.
+\begin{example}
+A normal local domain $(R, \mathfrak{m})$ of dimension 2 is Cohen-Macaulay. This is a special case
+of Serre's criterion for normality.
+
+Here is an argument. If $x \in \mathfrak{m}$ is nonzero, we want to
+show that $\depth R/(x) = 1$.
+To do this, we need to show that $\mathfrak{m} \notin \ass(R/(x))$ for
+each such $x$, because then $\depth R/(x) \geq 1$ (which is all we need).
+However, suppose the contrary; then there is $y$ not divisible by $x$ such
+that $\mathfrak{m}y \subset (x)$.
+So $y/x \notin R$, but $\mathfrak{m} (y/x) \subset R$.
+
+This, however, implies $\mathfrak{m}$ is principal. Indeed, we either have
+$\mathfrak{m}(y/x) = R$, in which case $\mathfrak{m}$ is generated by $x/y$,
+or $\mathfrak{m}(y/x) \subset \mathfrak{m}$. The latter would imply
+that $y/x$ is integral over $R$ (as multiplication by it stabilizes a
+finitely generated $R$-module), so by normality $y/x \in R$, a contradiction.
+And a principal maximal ideal would force $\dim R \leq 1$, contradicting
+$\dim R = 2$. We have seen much of this argument before.
+\end{example}
+
+\begin{example}
+Consider $\mathbb{C}[x,y]/(xy)$, the coordinate ring of the
+union of two axes
+intersecting at the origin. This is not
+regular, as its localization at the origin
+is not a domain.
+We will later show that this is a Cohen-Macaulay ring, though.
+\begin{comment}
+Indeed, we can project the associated variety
+$X = V(xy)$
+onto the affine line by adding the coordinates. This corresponds
+to the map
+\[ \mathbb{C}[z] \to \mathbb{C}[x,y]/(xy) \]
+sending $z \to x+y$. This makes $\mathbb{C}[x,y]/(xy)$ into a
+free
+$\mathbb{C}[z]$-module of rank two (with generators $1, x$), as
+one can check.
+So by the previous result (strictly speaking, its extension to
+non-domains),
+the ring in question is Cohen-Macaulay.
+\end{comment}
+\end{example}
+
+\begin{example}
+$R=\mathbb{C}[x,y,z]/(xy, xz)$ is not Cohen-Macaulay (at the
+origin). The associated variety looks
+geometrically like the union of the plane $x=0$ and the line
+$y=z=0$ in affine
+3-space. Here there are two components of different dimensions
+intersecting.
+Let's choose a regular sequence (that is, regular after
+localization at the
+origin). The dimension at the origin is clearly two because of
+the plane.
+First, we need a nonzerodivisor in this ring, which vanishes at
+the origin, say
+$ x+y+z$. (Check this.) When we quotient by
+this, we get
+\[ S=\mathbb{C}[x,y,z]/(xy,xz, x+y+z) = \mathbb{C}[y,z]/(
+(y+z)y, (y+z)z). \]
+
+The claim is that $S$ localized at the ideal corresponding to
+$(0,0)$ has depth
+zero. We have $y+z \neq 0$, which is killed by both $y,z$, and
+hence by the
+maximal ideal at zero. In particular the maximal ideal at zero
+is an associated
+prime, which implies the claim about the depth.
+\end{example}
+
+As it happens, a Cohen-Macaulay variety is always
+equidimensional. The rough
+reason is that each irreducible piece puts an upper bound on the
+depth given by
+the dimension of the piece. If any piece is too small, the total
+depth will be
+too small.
+
+
+Here is the deeper statement:
+
+\begin{proposition} \label{dimthing}
+Let $(R, \mathfrak{m})$ be a noetherian local ring, $M$ a finitely generated,
+Cohen-Macaulay $R$-module.
+Then:
+\begin{enumerate}
+\item For each $\mathfrak{p} \in \ass(M)$, we have $\dim M = \dim
+R/\mathfrak{p}$.
+\item Every associated prime of $M$ is minimal (i.e. minimal in $\supp M$).
+\item $\supp M$ is equidimensional.
+\end{enumerate}
+\end{proposition}
+In general, there may be nontrivial inclusion relations among the
+associated primes of a general module. However, this cannot happen for a Cohen-Macaulay
+module.
+\begin{proof}
+The first statement implies all the others. (Recall that
+\emph{equidimensional} means that all the irreducible components of $\supp
+M$, i.e. the $\spec R/\mathfrak{p}$, have the same dimension.)
+But this in turn follows from the bound of \cref{localdepthassbound}.
+\end{proof}
+
+Next, we would like to obtain a criterion for when a quotient of a
+Cohen-Macaulay module is still Cohen-Macaulay.
+The answer will be similar to \cref{quotientreg} for regular local rings.
+
+\begin{proposition} \label{quotientCM}
+Let $M$ be a Cohen-Macaulay module over the local noetherian ring $(R,
+\mathfrak{m})$. If $x_1, \dots, x_n \in \mathfrak{m}$ is a $M$-regular
+sequence, then $M/(x_1, \dots, x_n)M$ is Cohen-Macaulay of dimension (and
+depth) $\dim M - n$.
+\end{proposition}
+\begin{proof}
+Indeed, we reduce to the case $n=1$ by induction.
+But then, because $x_1$ is a nonzerodivisor on $M$, we have $\dim
+M/x_1 M = \dim M -1$ and $\depth M/x_1 M = \depth M -1$. Thus
+\[ \dim M/x_1 M = \depth M/x_1M. \]
+\end{proof}
+
+So, if we are given a Cohen-Macaulay module $M$ and want one of a smaller
+dimension, we just have to find $x \in \mathfrak{m}$ not contained in any of
+the minimal primes of $\supp M$ (these are the only associated primes). Then,
+$M/xM$ will do the job.
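+
+\begin{example}
+For instance, $R = k[x,y]_{(x,y)}$ is Cohen-Macaulay (being regular) of
+dimension two. The element $x$ avoids the unique minimal prime $(0)$, and
+$R/(x) \simeq k[y]_{(y)}$ is again Cohen-Macaulay, of dimension one.
+\end{example}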
+
+\subsection{The non-local case}
+
+More generally, we would like to make the definition:
+\begin{definition} \label{generalCM}
+ A general noetherian ring $R$ is
+\textbf{Cohen-Macaulay} if
+$R_{\mathfrak{p}}$ is Cohen-Macaulay for all $\mathfrak{p} \in
+\spec R$.
+\end{definition}
+
+We should check that these definitions coincide for a local noetherian ring.
+This, however, is not entirely obvious; we have to show that localization
+preserves Cohen-Macaulayness.
+In this subsection, we shall do that, and we shall furthermore show that Cohen-Macaulay rings are \emph{catenary}, or
+more generally that Cohen-Macaulay modules are catenary. (So far we have
+seen that they are equidimensional, in the local case.)
+
+
+We shall deduce this from the following result, which states that for a
+Cohen-Macaulay module, we can choose partial systems of parameters in any
+given prime ideal in the support.
+
+\begin{proposition} \label{CMintermediatep}
+Let $M$ be a Cohen-Macaulay module over the local noetherian ring $(R,
+\mathfrak{m})$, and let $\mathfrak{p} \in \supp M$.
+Let $x_1, \dots, x_r \in \mathfrak{p}$ be a maximal $M$-sequence contained in
+$\mathfrak{p}$. Then:
+\begin{enumerate}
+\item $\mathfrak{p}$ is an associated and minimal prime of $M/(x_1, \dots, x_r)M$.
+\item $\dim R/\mathfrak{p} = \dim M -r$
+\end{enumerate}
+\end{proposition}
+\begin{proof}
+We know (\cref{quotientCM}) that $M/(x_1, \dots, x_r)M$ is a Cohen-Macaulay module too.
+Clearly $\mathfrak{p}$ is in its support, since all the $x_i \in
+\mathfrak{p}$.
+The claim is that $\mathfrak{p}$ is an associated prime---or minimal prime, it
+is the same thing---of $M/(x_1, \dots, x_r)M$. If not, there is $x \in
+\mathfrak{p}$ that is a nonzerodivisor on this quotient, which means that
+$\left\{x_1, \dots, x_r\right\}$ was not maximal as claimed.
+
+Now we need to verify the assertion on the dimension. Clearly $\dim M/(x_1,
+\dots, x_r)M = \dim M - r$, and moreover $\dim R/\mathfrak{p} =
+\dim M/(x_1, \dots, x_r)M$ by \cref{dimthing}. Combining these gives
+the second assertion.
+\end{proof}
+
+\begin{corollary} \label{CMloc}
+Hypotheses as above,
+ $\dim M_{\mathfrak{p}} = r = \dim M - \dim R/\mathfrak{p} $.
+Moreover, $M_{\mathfrak{p}}$ is a Cohen-Macaulay module over
+$R_{\mathfrak{p}}$.
+\end{corollary}
+This result shows that \cref{generalCM} is a reasonable definition.
+\begin{proof}
+Indeed, if we consider the conclusions of \cref{CMintermediatep}, we find that
+$x_1, \dots, x_r$ becomes a system of parameters for $M_{\mathfrak{p}}$: we
+have that $M_{\mathfrak{p}}/(x_1, \dots, x_r)M_{\mathfrak{p}}$ is an
+artinian $R_{\mathfrak{p}}$-module, while the sequence is also regular. The
+first claim follows, as does the second: any module with a system of
+parameters that is a regular sequence is Cohen-Macaulay.
+\end{proof}
+
+As a result, we can get the promised result that a Cohen-Macaulay ring is
+catenary.
+\begin{proposition}
+If $M$ is Cohen-Macaulay over the local noetherian ring $R$, then $\supp M$ is a catenary space.
+\end{proposition}
+
+In other words, if $\mathfrak{p} \subset \mathfrak{q}$ are elements of
+$\supp M$, then every maximal chain of prime ideals from $\mathfrak{p}$ to
+$\mathfrak{q}$ has the same length.
+\begin{proof}
+We will show that
+\( \dim R/\mathfrak{p} = \dim R/\mathfrak{q} + \dim
+R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}, \) a claim that
+suffices to establish catenariness.
+We will do this by using the dimension formulas computed earlier.
+
+Namely, we know that
+$M$ is Cohen-Macaulay over $R$, so by \cref{CMloc}
+\[ \dim_{R_{\mathfrak{q}}} M_{\mathfrak{q}} = \dim M - \dim
+R/\mathfrak{q}, \quad
+\dim_{ R_{\mathfrak{p}}} M_{\mathfrak{p}} = \dim M - \dim R/\mathfrak{p}.
+\]
+Moreover, $M_{\mathfrak{q}} $ is Cohen-Macaulay over
+$R_{\mathfrak{q}}$. As a result, we have (in view of the previous equation)
+\[ \dim_{R_{\mathfrak{p}}}
+M_{\mathfrak{p}} = \dim_{R_{\mathfrak{q}}} M_{\mathfrak{q}} - \dim
+R_{\mathfrak{q}}/\mathfrak{p}R_{\mathfrak{q}} =
+\dim M - \dim R/\mathfrak{q} - \dim
+R_{\mathfrak{q}}/\mathfrak{p}R_{\mathfrak{q}}
+. \]
+Combining, we find
+\[ \dim M - \dim R/\mathfrak{p} = \dim M - \dim R/\mathfrak{q} - \dim
+R_{\mathfrak{q}}/\mathfrak{p}R_{\mathfrak{q}} ,
+ \]
+which is what we wanted.
+\end{proof}
+
+It thus follows that any Cohen-Macaulay ring, and thus any \emph{quotient} of a
+Cohen-Macaulay ring, is catenary. In particular, it follows that any non-catenary
+local noetherian ring cannot be expressed as a quotient of a
+Cohen-Macaulay (e.g. regular) local ring.
+
+It also follows immediately that if $R$ is any regular (not necessarily local)
+ring, then $R$ is catenary, and the same goes for any quotient of $R$.
+In particular, since a polynomial ring over a field is regular, we find:
+\begin{proposition}
+Any affine ring is catenary.
+\end{proposition}
+
+\subsection{Reformulation of Serre's criterion}
+
+Much earlier, we proved criteria for a noetherian ring to be reduced and (more
+interestingly) normal.
+We can state them more cleanly using the theory of depth developed.
+
+\begin{definition}
+Let $R$ be a noetherian ring, and let $k \in \mathbb{Z}_{\geq 0}$.
+\begin{enumerate}
+\item We say that $R$ satisfies \textbf{condition $R_k$} if, for every
+prime ideal $\mathfrak{p} \in \spec R$ with $\dim R_{\mathfrak{p}} \leq k$,
+the local ring $R_{\mathfrak{p}}$ is regular.
+\item $R$ satisfies \textbf{condition $S_k$} if $\depth R_{\mathfrak{p}} \geq
+\inf(k, \dim R_{\mathfrak{p}})$ for all $\mathfrak{p} \in \spec R$.
+\end{enumerate}
+\end{definition}
+
A Cohen-Macaulay ring satisfies all the conditions $S_k$, and conversely. The
condition $R_k$ means geometrically that the associated variety
is regular (i.e., smooth, at least if one works over an algebraically closed
field) outside a subvariety of codimension $> k$.
+
+
+
+Recall that, according to \cref{reducedcrit1}, a noetherian ring is \textit{reduced} iff:
+\begin{enumerate}
+\item For any minimal prime $\mathfrak{p} \subset R$,
+$R_{\mathfrak{p}}$ is a
+field.
+\item Every associated prime of $R$ is minimal.
+\end{enumerate}
+
Condition 1 can be restated as follows. A prime
$\mathfrak{p}\subset R$ is
minimal if and only if it has height zero, i.e. $R_{\mathfrak{p}}$ is
zero-dimensional; and a zero-dimensional local ring is regular if and only if
it is a field. So the first condition is that \emph{for every height
zero prime $\mathfrak{p}$,
$R_{\mathfrak{p}}$ is regular.}
In other words, it is the condition $R_0$.
+
For the second condition, note that $\mathfrak{p} \in
\ass(R)$ iff $\mathfrak{p}R_{\mathfrak{p}} \in \ass(R_{\mathfrak{p}})$, which is
equivalent to
$\depth R_{\mathfrak{p}} = 0$. So the second condition states that for primes
$\mathfrak{p} \in \spec R$ of height at least 1, we have $\mathfrak{p}R_{\mathfrak{p}} \notin
\ass(R_{\mathfrak{p}})$, that is, $\depth(R_{\mathfrak{p}}) \geq 1$. This
is the condition $S_1$.
+
+We find:
+\begin{proposition}
+A noetherian ring is reduced if and only if it satisfies $R_0$ and $S_1$.
+\end{proposition}
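To illustrate the two conditions, here is a standard pair of examples:
\begin{example}
The ring $R = k[x]/(x^2)$ has a unique prime $(x)$, of height zero, and
$R_{(x)} = R$ is not a field; thus $R$ fails $R_0$, consistent with $R$ being
non-reduced. On the other hand, $R = k[x,y]/(xy)$ satisfies $R_0$ (the minimal
primes are $(x)$ and $(y)$, and the localizations there are the fields $k(y)$
and $k(x)$) and $S_1$ (it is a hypersurface, hence Cohen-Macaulay), so it is
reduced, as one also checks directly.
\end{example}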
+
In particular, for a Cohen-Macaulay ring, checking whether it is reduced is
easy; one just has to check $R_0$, i.e. that the localizations at the minimal
primes are fields.
+
+
+Serre's criterion for normality is in the same spirit, but harder.
+Recall that
+a noetherian ring is \textit{normal} if it is a finite direct
+product of
+integrally closed domains.
+
+The earlier form of Serre's criterion (see \cref{serrecrit1}) was:
+\begin{proposition}
+Let $R$ be a local ring.
+Then $R$ is normal iff
+\begin{enumerate}
+\item $R$ is reduced.
+\item For every height one prime $\mathfrak{p} \in \spec R$,
+$R_{\mathfrak{p}}$ is a DVR (i.e. regular).
+\item For every nonzerodivisor $x \in R$, every associated prime
+of $R/(x)$ is
+minimal.
+\end{enumerate}
+\end{proposition}
+In view of the criterion for reducedness, these conditions are equivalent to:
+\begin{enumerate}
+\item For every prime $\mathfrak{p}$ of height $\leq 1$,
+$R_{\mathfrak{p}} $ is regular.
+\item For every prime $\mathfrak{p}$ of height $\geq 1$,
+$\depth R_{\mathfrak{p}} \geq 1$ (necessary for reducedness)
\item $\depth R_{\mathfrak{p}} \geq 2$ whenever $\mathfrak{p}$ contains a
principal ideal $(x)$, for $x$ a nonzerodivisor, without being
minimal over it. This
is the last
condition of the proposition: to say $\depth R_{\mathfrak{p}} \geq 2$ is to
say that $\depth R_{\mathfrak{p}}/(x)R_{\mathfrak{p}} \geq 1$, i.e.
$\mathfrak{p} \notin
\ass(R_{\mathfrak{p}}/(x)R_{\mathfrak{p}})$.
+\end{enumerate}
+
+Combining all this, we find:
+\begin{theorem}[Serre's criterion] A noetherian ring is normal
+if and only if it satisfies the conditions $R_1$ and $S_2$.
+\end{theorem}
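Here are two standard examples illustrating the two conditions:
\begin{example}
The cuspidal cubic $R = k[x,y]/(y^2 - x^3)$ is a one-dimensional
Cohen-Macaulay domain, so it satisfies $S_2$; but it fails $R_1$, since the
localization at the height one prime $(x,y)$ is not regular. Accordingly $R$
is not normal: the element $y/x$ of the fraction field is integral over $R$
(as $(y/x)^2 = x$) but does not lie in $R$. By contrast, the quadric cone
$k[x,y,z]/(z^2 - xy)$ is singular only at the maximal ideal $(x,y,z)$, which
has height two, so it satisfies $R_1$; being a hypersurface, it is
Cohen-Macaulay, hence satisfies $S_2$, and is therefore normal, though not
regular.
\end{example}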
+
Again, for a Cohen-Macaulay ring, the condition $S_2$ is automatic, since the
depth of each localization equals its dimension.
+
+\section{Projective dimension and free resolutions}
+
We shall introduce the notion of \emph{projective dimension} of a module; this
will be the length of the shortest projective resolution it admits (if there
is no finite one, the dimension is $\infty$). We can think of it as measuring how far a module is
+from being projective. Over a noetherian \emph{local} ring, we will show that
+the projective dimension can be calculated very simply using the $\tor$ functor
+(which is an elaboration of the story that a projective module over a local
+ring is free).
+
+Ultimately we want to show that a noetherian local ring is regular if and only
+if every finitely generated module admits a finite free resolution. Although we
+shall not get to that result until the next section, we will at least relate
+projective dimension to a more familiar invariant of a module: \emph{depth.}
+
+\subsection{Introduction}
+\newcommand{\pr}{\mathrm{pd}}
+Let $R$ be a commutative ring, $M$ an $R$-module.
+
+\begin{definition}
+The \textbf{projective dimension} of $M$ is the largest integer
+$n$ such that
+there exists a module $N$ with
+\[ \ext^n(M,N) \neq 0. \]
+We allow $\infty$, if arbitrarily large such $n$ exist.
+We write $\pr(M)$ for the projective dimension. For convenience, we set $\pr(0)
+= - \infty$.
+\end{definition}
+
+So, if $m> n = \pr(M)$, then we have $\ext^m(M, N) = 0$ for \emph{all} modules $N$, and
+$n$ is the smallest integer with this property.
+As an example, note that $\pr(M) = 0$ if and only if $M$ is projective and
+nonzero. Indeed, we have seen that
+the $\ext$ groups
+$\ext^i(M,N), i >0$
+vanish always for $M$ projective, and conversely.
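Here is the simplest nontrivial example:
\begin{example}
Over $R = \mathbb{Z}$, the module $M = \mathbb{Z}/2$ has $\pr(M) = 1$: the
free resolution $0 \to \mathbb{Z} \stackrel{2}{\to} \mathbb{Z} \to
\mathbb{Z}/2 \to 0$ shows that $\ext^i(\mathbb{Z}/2, N) = 0$ for all $i \geq
2$ and all $N$, while $\ext^1(\mathbb{Z}/2, \mathbb{Z}) = \mathbb{Z}/2 \neq
0$.
\end{example}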
+
+To compute $\pr(M)$ in general, one can proceed as follows.
+Take any $M$. Choose a surjection $P \twoheadrightarrow M$ with
+$P$ projective;
+call the kernel $K$ and draw a short exact sequence
+\[ 0 \to K \to P \to M \to 0. \]
+For any $R$-module $N$, we have a long exact sequence
+\[ \ext^{i-1}(P,N) \to \ext^{i-1}(K,N) \to \ext^i(M,N) \to
+\ext^i(P, N). \]
+If $i >0$, the right end vanishes; if $i >1$, the left end
+vanishes. So if $i
+>1$, this map $\ext^{i-1}(K,N) \to \ext^i(M,N)$ is an
+\emph{isomorphism}.
+
+Suppose that $\pr(K) = d \geq 0$. We find that
+$\ext^{i-1}(K,N)=0$ for $i-1
+> d$.
This implies that $\ext^i(M,N) = 0$ for $i > d+1$. In
particular, $\pr(M)
\leq d+1$.
+This argument is completely reversible if $d >0$.
+Then we see from these isomorphisms that
+\begin{equation} \label{pdeq} \boxed{\pr(M) = \pr(K)+1}, \quad \mathrm{unless} \ \pr(M)=0
+\end{equation}
+If $M$ is projective, the sequence $0 \to K \to P \to M \to 0$
+splits, and
+$\pr(K)=0$ too.
+
The upshot is that \emph{we can compute projective dimension
by choosing a
projective resolution.}
+\begin{proposition}\label{pdprojectiveresolution}
+Let $M$ be an $R$-module. Then $\pr(M) \leq n$ iff there exists
+a finite
+projective resolution of $M$ having $n+1$ terms,
+\[ 0 \to P_n \to \dots \to P_1 \to P_0 \to M \to 0. \]
+\end{proposition}
+\begin{proof}
+Induction on $n$. When $n = 0$, $M$ is projective, and we can
+use the
+resolution $0 \to M \to M \to 0$.
+
+Suppose $\pr(M) \leq n$, where $n >0$. We can get a short exact
+sequence
+\[ 0 \to K \to P_0 \to M \to 0 \]
+with $P_0$ projective, so $\pr(K) \leq n-1$ by \eqref{pdeq}. The inductive
+hypothesis implies
+that there is a projective resolution of $K$ of length $\leq
+n-1$. We can
+splice this in with the short exact sequence to get a projective
+resolution of
+$M$ of length $n$.
+
+The argument is reversible. Choose any projective resolution
+\[ 0 \to P_n \to \dots \to P_1 \to P_0 \to M \to 0 \]
and split it into short exact sequences; one then argues inductively to show
that $\pr(M) \leq n$.
+\end{proof}
+
Let $\pr(M) = n$. Choose any projective resolution $\dots \to P_2 \to P_1 \to P_0 \to M$. Set $K_0 = \ker(P_0 \to M)$ and $K_i = \ker(P_i \to P_{i-1})$ for $i \geq 1$. Then there is a short exact sequence $0 \to K_0 \to P_0 \to M
\to 0$. Moreover,
+there are exact sequences
+\[ 0 \to K_i \to P_i \to K_{i-1} \to 0 \]
+for each $i$. From these, and from \eqref{pdeq}, we see that the projective dimensions
+of the $K_i$
+drop by one as $i$ increments. So $K_{n-1}$ is projective if
+$\pr(M) = n$ as
+$\pr(K_{n-1})=0$. In particular, we can get a projective
+resolution
+\[ 0 \to K_{n-1} \to P_{n-1} \to \dots \to P_0 \to M \to 0 \]
+which is of length $n$.
In particular, if one has a (possibly infinite) projective resolution of $M$,
one can stop after going out $n$ terms, because the kernels
will become
projective. In other words, the projective resolution can be made to
\emph{break off} at the $n$th term.
This applies to \emph{any} projective resolution.
+Conversely, since any module has a (possibly infinite) projective resolution,
+we find:
+
+\begin{proposition}
We have $\pr(M) \leq n$ if and only if some (equivalently, every) projective resolution
\[ \dots \to P_1 \to P_0 \to M \to 0 \]
breaks off at the $n$th stage: that is, the kernel of $P_{n-1} \to P_{n-2}$ is
projective.
+\end{proposition}
+
+
+If $\pr(M) \leq n$, then by definition we have $\ext^{n+1}(M, N) = 0$ for
+\emph{any} module $N$. By itself, this does not say anything about the $\tor$
+functors.
+However, the criterion for projective dimension enables us to show:
+
+\begin{proposition} \label{pdfd}
+If $\pr(M) \leq n$, then $\tor_m(M, N) = 0$ for $m > n$.
+\end{proposition}
+One can define an analog of projective dimension with the $\tor$ functors,
+called \emph{flat dimension}, and it follows that the flat dimension is at most
+the projective dimension.
+
+In fact, we have more generally:
+\begin{proposition}
+Let $F$ be a right-exact functor on the category of $R$-modules, and let $\{L_i
+F\}$ be its left derived functors.
+If $\pr(M) \leq n$, then $L_i F(M) = 0$ for $i > n$.
+\end{proposition}
+
+Clearly this implies the claim about $\tor$ functors.
+\begin{proof}
Recall how $L_i F(M)$ can be computed: one chooses a projective
resolution $P_\bullet \to M$ (any will do) and computes the homology of the
complex
$F(P_\bullet)$. However, we can choose $P_\bullet \to M$ such that $P_i = 0$
+for $i > n$ by \cref{pdprojectiveresolution}. Thus $F(P_\bullet)$ is
+concentrated in degrees between $0$ and $n$, and the result becomes clear when
+one takes the homology.
+\end{proof}
+
+In general, flat modules are not projective (e.g. $\mathbb{Q}$ is flat, but not
+projective, over $\mathbb{Z}$), and while one can use projective dimension to
+bound ``flat dimension'' (the analog for $\tor$-vanishing), one cannot use the
flat dimension to bound the projective dimension. For finitely generated
modules over a noetherian local ring, however, we will see in the next
subsection that this is possible.
+
+\subsection{$\tor$ and projective dimension}
+
+Over a noetherian \emph{local} ring, there is a much simpler way to test whether a
+finitely generated module is projective. This is a special case of the very
+general flatness criterion \cref{bigflatcriterion}, but we can give a simple
+direct proof. So we prefer to keep things self-contained.
+
+\begin{theorem} \label{localflateasy}
+Let $M$ be a finitely generated module over the noetherian local ring $(R,
+\mathfrak{m})$, with residue field $k = R/\mathfrak{m}$. Then, if $\tor_1(M, k)
+= 0$, $M$ is free.
+\end{theorem}
+In particular, projective---or even flat---modules which are of finite type
+over $R$ are automatically free.
+This is a strengthening of the earlier theorem (\cref{}) that a finitely
+generated projective
+module over a local ring is free.
+\begin{proof}
+Indeed, we can find a free module $F$ and a surjection $F \to M$ such that $F
\otimes_R k \to M \otimes_R k$ is an isomorphism. To do this, choose elements
of $M$ whose images form a basis of $M \otimes_R k$, and then define a map $F \to M$
via these elements; it is a surjection by Nakayama's lemma.
+
+Let $K$ be the kernel of $F \twoheadrightarrow M$, so there is an exact sequence
+\[ 0 \to K \to F \to M \to 0. \]
We want to show that $K = 0$, which will imply that $F \to M$ is an isomorphism, so that $M$ is free. By Nakayama's
+lemma, it suffices to show that $K \otimes_R k = 0$. But we have an exact
+sequence
+\[ \tor_1(M, k) \to K \otimes_R k \to F \otimes_R k \to M \otimes_R k \to 0. \]
+The last map is an isomorphism, and $\tor_1(M, k) = 0$, which implies that $K
+\otimes_R k = 0$. The result is now proved.
+\end{proof}
+
+As a result, we can compute the projective dimension of a module in terms of
+$\tor$.
+\begin{corollary}
+Let $M$ be a finitely generated module over the noetherian local ring $R$ with
+residue field $k$. Then $\pr(M)$ is the largest integer $n$
+such that
+$\tor_n(M, k) \neq 0$.
+It is also the smallest integer $n$ such that $\tor_{n+1}(M, k) = 0$.
+\end{corollary}
+There is a certain symmetry: if $\ext$ replaces $\tor$, then one has the
+definition of depth. We will show later that there is indeed a useful connection
+between projective dimension and depth.
+\begin{proof}
+We will show that if
+$\tor_{n+1}(M, k) = 0$, then $\pr(M) \leq n$.
+This implies the claim, in view of \cref{pdfd}. Choose a (possibly infinite)
+projective resolution
+\[ \dots \to P_1 \to P_0 \to M \to 0. \]
+Since $R$ is noetherian, we can assume that each $P_i$ is \emph{finitely
+generated.}
+
+Write $K_i = \ker(P_i \to P_{i-1})$, as before; these are finitely generated
+$R$-modules. We want to show that $K_{n-1}$
+is projective, which will establish the claim, as then the projective
+resolution will ``break off.''
+But we have an exact sequence
+\[ 0 \to K_0 \to P_0 \to M \to 0, \]
+which shows that $\tor_n(K_0, k) = \tor_{n+1}(M, k)= 0$.
Using the exact sequences $0 \to K_{i} \to P_i \to K_{i-1} \to 0$, we
+inductively work downwards to get that $\tor_1(K_{n-1}, k) =0$. So $K_{n-1}$ is
+projective by \cref{localflateasy}.
+\end{proof}
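As an illustration of the corollary:
\begin{example}
Let $R = k[x]/(x^2)$, a noetherian local ring with maximal ideal $(x)$ and
residue field $k$. The complex
\[ \dots \to R \stackrel{x}{\to} R \stackrel{x}{\to} R \to k \to 0 \]
is a free resolution of $k$ (the kernel and the image of multiplication by $x$
are both $(x)$), and tensoring with $k$ kills all the differentials; so
$\tor_i(k,k) = k$ for every $i$. By the corollary, $\pr(k) = \infty$.
\end{example}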
+
In particular, we find that if $\pr(k) \leq n$, then $\pr(M) \leq n$ for all
$M$. This is because if $\pr(k) \leq n$, then $\tor_{n+1}(M, k) = 0$, since
$\tor$ can be computed using a projective resolution of $k$ of length at most
$n$ (the analog of \cref{pdfd} in the second variable).
+\begin{corollary}
+Suppose there exists $n$ such that $\tor_{n+1}(k, k) = 0$.
+Then every finitely generated $R$-module has a finite free resolution of length
+at most $n$.
+\end{corollary}
+
We have thus seen that $k$ is in some sense the ``worst'' $R$-module: it is
as far as possible from being projective, i.e. it has the largest projective
dimension. We can describe this worst-case behavior with the next concept:
+
+\begin{definition}
+Given a ring $R$, the \textbf{global dimension} is the $\sup$ of the projective
+dimensions of all finitely generated $R$-modules.
+\end{definition}
+
+So, to recapitulate: the global dimension of a noetherian local ring $R$ is the
+projective dimension of its residue field $k$, or even the \emph{flat}
+dimension of the residue field.
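For instance:
\begin{example}
If $R$ is a discrete valuation ring with uniformizer $\pi$ and residue field
$k$, then $0 \to R \stackrel{\pi}{\to} R \to k \to 0$ is a free resolution of
$k$ of length one, and $k$ is not free (it is killed by $\pi$); so the global
dimension of $R$ is exactly $1$.
\end{example}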
+\subsection{Minimal projective resolutions}
+Usually projective resolutions are non-unique; they are only unique up to
+chain homotopy. We will introduce a certain restriction that enforces
+uniqueness. These ``minimal'' projective resolutions will make it extremely
+easy to compute the groups $\tor_{\bullet}(\cdot, k)$.
+
+Let $(R, \mathfrak{m})$ be a local noetherian ring with residue field $k$, $M$ a
+finitely generated $R$-module.
+All tensor products will be over $R$.
+
+\begin{definition}
+A projective resolution $P_\bullet \to M$ of finitely generated
+modules is \textbf{minimal} if for each $i$, the
+induced map $P_i \otimes k \to P_{i-1} \otimes
+k$ is
+zero, and the map $P_0 \otimes k \to
+M/\mathfrak{m}M$ is an isomorphism.
+\end{definition}
+
In other words, the differentials of the complex $P_\bullet \otimes k$ all
vanish, and $P_0 \otimes k \simeq M \otimes k$.
This is equivalent to saying that for each $i$, the map $P_i
\to \ker(P_{i-1}
\to P_{i-2})$ is an isomorphism modulo $\mathfrak{m}$.
+
+\begin{proposition}
+Every $M$ (over a local noetherian ring) has a minimal
+projective resolution.
+\end{proposition}
+\begin{proof}
+Start with a module $M$. Then $M/\mathfrak{m}M$ is a
+finite-dimensional vector
+space over $k$, of dimension say $d_0$. We can
+choose a basis for that vector space, which
+we can lift to $M$. That determines a map of free modules
+\[ R^{d_0} \to M, \]
+which is a surjection by Nakayama's lemma. It is by construction
+an
+isomorphism modulo $\mathfrak{m}$. Then define $K =
+\ker(R^{d_0}\to M)$; this
+is finitely generated by noetherianness, and we
+can do the same thing for $K$, and repeat to get a map $R^{d_1}
+\twoheadrightarrow K$ which is an isomorphism modulo
+$\mathfrak{m}$. Then
+\[ R^{d_1} \to R^{d_0} \to M \to 0 \]
+is exact, and minimal; we can continue this by the same
+procedure.
+\end{proof}
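A concrete illustration:
\begin{example}
Let $R = k[[x,y]]$, with $\mathfrak{m} = (x,y)$. The Koszul complex
\[ 0 \to R \xrightarrow{\begin{pmatrix} -y \\ x \end{pmatrix}} R^2
\xrightarrow{(x \ \ y)} R \to k \to 0 \]
is a minimal free resolution of $k$: all the matrix entries lie in
$\mathfrak{m}$, so every differential vanishes modulo $\mathfrak{m}$. In
particular, the ranks $1, 2, 1$ are the dimensions of the groups
$\tor_i(k,k)$, and $\pr(k) = 2 = \dim R$.
\end{example}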
+
+
+\begin{proposition}
+Minimal projective resolutions are unique up to isomorphism.
+\end{proposition}
+\begin{proof}
+Suppose we have one minimal projective resolution:
+\[ \dots \to P_2 \to P_1 \to P_0 \to M \to 0 \]
+and another:
+\[ \dots \to Q_2 \to Q _1 \to Q_0 \to M \to 0 .\]
+There is always a map of projective resolutions $P_* \to Q_*$ by
+general
+homological algebra. There is, equivalently, a commutative
+diagram
+\[\xymatrix{ \dots \ar[d] \ar[r] & P_2\ar[d] \ar[r] & P_1
+\ar[d]\ar[r]
+& P_0 \ar[d] \ar[r] & M \ar[d]^{\mathrm{id}} \ar[r] & 0 \\
+ \dots \ar[r] & Q_2 \ar[r] & Q_1 \ar[r]
+& Q_0 \ar[r] & M \ar[r] & 0 } \]
+If both resolutions are minimal, the claim is that this map is
+an isomorphism.
+That is, $\phi_i: P_i \to Q_i$ is an isomorphism, for each $i$.
+
+To see this, note that $P_i, Q_i$ are finite free
+$R$-modules.\footnote{We are
+using the fact that a finite projective module over a local ring
+is
+\emph{free}.} So $\phi_i$ is an isomorphism iff $\phi_i$ is an
+isomorphism
+modulo the maximal ideal, i.e. if
+\[ P_i/\mathfrak{m}P_i \to Q_i/\mathfrak{m}Q_i \]
+is an isomorphism. Indeed, if $\phi_i$ is an isomorphism, then
+its tensor
+product with $R/\mathfrak{m}$ obviously is an isomorphism.
Conversely, suppose
that the reductions mod $\mathfrak{m}$ give an isomorphism. Then
the ranks of
$P_i, Q_i$ are equal to a common value $n$, and $\phi_i$ is an $n$-by-$n$ matrix
+whose determinant
+is not in the maximal ideal, so is invertible. This means that
+$\phi_i$ is invertible by the
+usual formula for the inverse matrix.
+
+So we are to check that $P_i / \mathfrak{m}P_i \to Q_i /
+\mathfrak{m}Q_i$ is an
+isomorphism for each $i$. This is equivalent to the assertion
+that
+\[ (Q_i/\mathfrak{m}Q_i)^{\vee} \to
+(P_i/\mathfrak{m}P_i)^{\vee}\]
+is an isomorphism. But this is the map
+\[ \hom_R(Q_i, R/\mathfrak{m}) \to \hom_R(P_i, R/\mathfrak{m}).
+\]
+If we look at the chain complexes $\hom(P_*, R/\mathfrak{m}),
+\hom(Q_*,
+R/\mathfrak{m})$, the cohomologies
compute the groups $\ext^i(M, R/\mathfrak{m})$. But all the
+maps in this
+chain complex are zero because the resolution is minimal, and we
+have that the
+image of $P_i$ is contained in $\mathfrak{m}P_{i-1}$ (ditto for
+$Q_i$). So the
+cohomologies are just the individual terms, and the maps
+$ \hom_R(Q_i, R/\mathfrak{m}) \to \hom_R(P_i, R/\mathfrak{m})$
+correspond to
+the identities on $\ext^i(M, R/\mathfrak{m})$. So these are
+isomorphisms.\footnote{We are sweeping under the rug the
+statement that $\ext$
+can be computed via \emph{any} projective resolution. More
+precisely, if you
+take any two projective resolutions, and take the induced maps
+between the
+projective resolutions, hom them into $R/\mathfrak{m}$, then the
+maps on
+cohomology are isomorphisms.}
+\end{proof}
+
+
+\begin{corollary}
+If $\dots \to P_2 \to P_1 \to P_0 \to M$ is a minimal projective
+resolution of
+$M$, then the ranks $\mathrm{rank}(P_i)$ are well-defined (i.e.
+don't depend
+on the choice of the minimal resolution).
+\end{corollary}
+\begin{proof}
+Immediate from the proposition. In fact, the ranks are the
+dimensions (as
+$R/\mathfrak{m}$-vector spaces) of $\ext^i(M, R/\mathfrak{m})$.
+\end{proof}
+
+\subsection{The Auslander-Buchsbaum formula}
+
+
+
\begin{theorem}[Auslander-Buchsbaum formula]
Let $R$ be a local noetherian ring, $M$ a finitely generated $R$-module of
finite
projective dimension. Then $\pr(M) = \depth(R) - \depth(M)$.
\end{theorem}
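Before the proof, a quick sanity check:
\begin{example}
Let $R$ be a regular local ring of dimension $n$, so that $\depth R = n$, and
let $x \in \mathfrak{m}$ be nonzero. Then $M = R/(x)$ has the free resolution
$0 \to R \stackrel{x}{\to} R \to M \to 0$, so $\pr(M) = 1$; and indeed
$\depth M = n - 1$, since cutting by the regular element $x$ drops the depth
by exactly one, in agreement with the formula.
\end{example}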
+
+\begin{proof}
+Induction on $\pr(M)$. When $\pr(M)=0$, then $M$ is projective,
+so isomorphic
+to $R^n$ for some $n$. Thus $\depth(M) = \depth(R)$.
+
+Assume $\pr(M) > 0$.
+Choose a surjection $P \twoheadrightarrow M$ and write an exact
+sequence
+\[ 0 \to K \to P \to M \to 0, \]
+where $\pr(K) = \pr(M)-1$. We also know by induction that
+\[ \pr(K) = \depth R - \depth(K). \]
+What we want to prove is that
+\[ \depth R - \depth M = \pr(M) = \pr(K)+1. \]
This is equivalent to knowing that $\depth(K) = \depth(M) + 1$.
In general this need not be true, but we will prove it under a minimality
hypothesis.
+
Without loss of generality, we can choose $P \twoheadrightarrow M$ to be
\emph{minimal}, i.e.
an isomorphism modulo the maximal ideal $\mathfrak{m}$.
This means
that the rank of $P$ is $\dim_k M/\mathfrak{m}M$.
+So $K = 0$ iff $P \to M$ is an isomorphism; we've assumed that
+$M$ is not
+free, so $K \neq 0$.
+
Recall that the depth of $M$ is the smallest value $i$ such that
$\ext^i(R/\mathfrak{m}, M) \neq 0$. So we should look at the long exact
sequence from the above short exact sequence:
+\[ \ext^i(R/\mathfrak{m}, P) \to \ext^i(R/\mathfrak{m},M) \to
+\ext^{i+1}(R/\mathfrak{m}, K) \to \ext^{i+1}(R/\mathfrak{m},
+P).\]
+Now $P$ is just a direct sum of copies of $R$, so
+$\ext^i(R/\mathfrak{m}, P)$
+and $\ext^{i+1}(R/\mathfrak{m}, P)$ are zero if $i+1< \depth R$.
+In
+particular, if $i+1< \depth R$, then the map $
+\ext^i(R/\mathfrak{m},M) \to
+\ext^{i+1}(R/\mathfrak{m}, K) $ is an isomorphism.
+So we find that $\depth M + 1 = \depth K$ in this case.
+
We have seen that if $\depth K < \depth R$, then, letting $i$ range over
the integers with $i + 1 \leq \depth K$, we find that
+\[ \ext^{i}(R/\mathfrak{m}, M) = \begin{cases}
+0 & \mathrm{if \ } i+1 < \depth K \\
+\ext^{i+1}(R/\mathfrak{m},K) & \mathrm{if \ } i+1 = \depth K
+\end{cases}. \]
+In particular, we are \textbf{done} unless $\depth K \geq \depth
+R$.
+By the inductive hypothesis, this is equivalent to saying that
+$K$ is
+projective.
+
+So let us consider the case where $K$ is projective, i.e.
+$\pr(M)=1$.
We want to show that $\depth M = d-1$, where $d = \depth R$.
We need a
slightly different argument in this case. Note that $d = \depth(R) =
\depth(P) =
\depth(K)$, since $P,K$ are free. We have a short exact sequence
+\[ 0 \to K \to P \to M \to 0 \]
+and a long exact sequence of $\ext$ groups:
+\[ 0 \to \ext^{d-1}(R/\mathfrak{m}, M) \to
+\ext^d(R/\mathfrak{m}, K) \to \ext^d(R/\mathfrak{m}, P) .\]
+We know that $\ext^d(R/\mathfrak{m}, K)$ is nonzero as $K$ is
+free and $R$ has
depth $d$. However, $\ext^i(R/\mathfrak{m}, K) =
\ext^i(R/\mathfrak{m}, P)=0$
for $i < d$.

If $\depth_{S'} M > 0$, then there is an element $a$ in
$\mathfrak{m}'$ such
that
+\[ M \stackrel{\phi(a)}{\to} M \]
+is injective. Now $\phi(a) \in \mathfrak{m}$. So $\phi(a)$ is a
+nonzerodivisor, and we have an exact sequence
+\[ 0 \to M \stackrel{\phi(a)}{\to} M \to M/\phi(a) M \to 0. \]
+Thus we find
+\[ \depth_{S} M > 0 . \]
Moreover, we find that $\depth_S M = \depth_S (M/\phi(a) M) +1$
and
$\depth_{S'} M = \depth_{S'}(M/\phi(a) M)+1$. The inductive
+hypothesis now
+tells us that
+\[ \depth_S M = \depth_{S'}M. \]
+
+The hard case is where $\depth_{S'} M = 0$. We need to show that
+this is
+equivalent to $\depth_{S} M = 0$. So we know at first that
+$\mathfrak{m}' \in
+\ass(M)$. That is, there is an element $x \in M$ such that
+$\ann_{S'}(x) =
+\mathfrak{m}'$.
+Now $\ann_S(x) \subsetneq S$ and contains $\mathfrak{m}' S$.
+
$Sx \subset M$ is a submodule, surjected onto by $S$ via the map
$a \mapsto ax$.
This map, as we have seen, factors through
+$S/\mathfrak{m}' S$. Here
+$S$ is a finite $S'$-module, so $S/\mathfrak{m}'S$ is a finite
+$S'/\mathfrak{m}'$-module. In particular, it is a
+finite-dimensional vector space
over a field, hence an artinian ring; it is moreover local, since $S$ is,
with maximal ideal $\mathfrak{m}/\mathfrak{m}'S$. But $Sx$ is a
module over this
local artinian ring, so it must have an associated prime, which can only be
the maximal ideal
$\mathfrak{m}/\mathfrak{m}'S$. It follows that $\mathfrak{m}
\in \ass(Sx)
\subset \ass(M)$.
+
+In particular, $\depth_S M = 0$ too, and we are done.
+\end{proof}
+
+\end{proof}
+\end{example}
+
+\begin{comment}We shall eventually prove:
+
+\begin{proposition}
+Let $R = \mathbb{C}[X_1, \dots, X_n]/\mathfrak{p}$ for
+$\mathfrak{p}$ prime.
+Choose an injective map $\mathbb{C}[y_1, \dots, y_n]
+\hookrightarrow R$ making $R$ a
+finite module. Then $R$ is Cohen-Macaulay iff $R$ is projective
+as a module
+over $\mathbb{C}[y_1, \dots, y_n]$.\footnote{In fact, this is
+equivalent to
+freeness, although we will not prove it. Any projective finite
+module over a
+polynomial ring over a field is free, though this is a hard
+theorem.}
+\end{proposition}
+
+The picture is that the inclusion $\mathbb{C}[y_1, \dots, y_m ]
+\hookrightarrow
+\mathbb{C}[x_1, \dots, x_n]/\mathfrak{p}$ corresponds to a map
+\[ X \to \mathbb{C}^m \]
+for $X = V(\mathfrak{p}) \subset \mathbb{C}^n$. This statement
+of freeness is a
+statement about how the fibers of this finite map stay similar
+in some sense.
+
+\end{proof}
+\end{comment}
+
+\section{Serre's criterion and its consequences}
+
+
+ We would like to prove
+Serre's
+criterion for regularity.
+
+\begin{theorem}
+Let $(R, \mathfrak{m})$ be a local noetherian ring. Then $R$ is
+regular iff
+$R/\mathfrak{m}$ has finite projective dimension. In this case,
+$\pr(R/\mathfrak{m}) = \dim R$.
+\end{theorem}
+
+\add{proof}
+
+
+\subsection{First consequences}
+
+
+\begin{proposition}
+Let $(R, \mathfrak{m}) \to (S, \mathfrak{n})$ be a flat, local homomorphism of noetherian local
+rings. If $S$ is regular, so is $R$.
+\end{proposition}
+\begin{proof}
+Let $n = \dim S$.
+Let $M$ be a finitely generated $R$-module, and consider a resolution
+\[ P_n \to P_{n-1} \to \dots \to P_0 \to M \to 0, \]
+where all the $\left\{P_i\right\}$ are finite free $R$-modules. If we can show
+that the kernel of $P_n \to P_{n-1}$ is projective, then it will follow that
+$M$ has finite projective dimension. Since $M$ was arbitrary, it will follow
+that $R$ is regular too, by Serre's criterion.
+
+Let $K$ be the kernel, so there is an exact sequence
+\[ 0 \to K \to P_n \to P_{n-1} \to \dots \to P_0 \to M \to 0, \]
+which we can tensor with $S$, by flatness:
+\[ 0 \to K \otimes_R S \to P_n \otimes_R S \to P_{n-1} \otimes_R S \to \dots
+\to P_0 \otimes_R S \to M \otimes_R S\to 0. \]
+Because any finitely generated $S$-module has projective dimension $\leq n$, it
+follows that $K \otimes_R S$ is projective, and in particular flat.
+
+But now $S$ is \emph{faithfully flat} over $R$ (see \cref{}), and it follows
+that $K $ is $R$-flat. Thus $K$ is projective over $R$, proving the claim.
+\end{proof}
+
+\begin{theorem}
+The localization of a regular local ring at a prime ideal is regular.
+\end{theorem}
+Geometrically, this means that to test whether a nice scheme (e.g. a variety) is regular
+(i.e., all the local rings are regular), one only has to test the \emph{closed}
+points.
+\begin{proof}
+Let $(R, \mathfrak{m})$ be a regular local ring. Let $\mathfrak{p} \in \spec R$
+be a prime ideal; we wish to show that $R_{\mathfrak{p}}$ is regular.
+To do this, let $M$ be a finitely generated $R_{\mathfrak{p}}$-module. Then we
+can find a finitely generated $R$-submodule $N \subset M$ such that
+the natural map $N_{\mathfrak{p}} \to M$ is an isomorphism.
+If we take a finite free resolution of $N$ by $R$-modules and localize at
+$\mathfrak{p}$, we get a finite free resolution of $M$ by
+$R_{\mathfrak{p}}$-modules.
+
+It now follows that $M$ has finite projective dimension as an
+$R_{\mathfrak{p}}$-module. By Serre's criterion, this implies that
+$R_{\mathfrak{p}}$ is regular.
+\end{proof}
+
+\subsection{Regular local rings are factorial}
+
+We now aim to prove that a regular local ring is factorial.
+
+First, we need:
+\begin{definition}
Let $R$ be a noetherian ring and $M$ a finitely generated $R$-module. Then $M$ is
+\textbf{stably free} if $M \oplus R^k$ is free for some $k$.
+\end{definition}
+
Stably free obviously implies projective, and free clearly implies stably
free (take $k=0$). Over a local ring, a finitely generated projective module
is free, so all three notions coincide. Over a general ring, they are
genuinely different.
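The classical example of a stably free, non-free module comes from the
two-sphere:
\begin{example}
Let $R = \mathbb{R}[x,y,z]/(x^2+y^2+z^2-1)$, and let $T$ be the kernel of the
map $R^3 \to R$, $(f,g,h) \mapsto xf + yg + zh$. This surjection splits (send
$1 \mapsto (x,y,z)$; the composite is multiplication by $x^2+y^2+z^2 = 1$), so
$T \oplus R \simeq R^3$ and $T$ is stably free. But $T$ is not free: a basis
of $T$ would give two everywhere linearly independent tangent vector fields on
the sphere $S^2$, contradicting the hairy ball theorem.
\end{example}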
+
+We will need the following lemma:
+
+\begin{lemma}
+Let $M$ be an $R$-module with a finite free resolution. If $M$ is projective,
+it is stably free.
+\end{lemma}
+\begin{proof}
+There is an exact sequence
+\[ 0 \to F_k \to F_{k-1} \to \dots \to F_1 \to F_0 \to M \to 0 \]
+with the $F_i$ free and finitely generated, by assumption.
+
+We induct on the length $k$ of the resolution. We know that if $N$ is the
+kernel of $F_0 \to M$, then $N$ is projective (as the sequence $0 \to N \to
+F_0 \to M \to 0$ splits) so there is a resolution
+\[ 0 \to F_k \to \dots \to F_1 \to N \to 0. \]
+By the inductive hypothesis, $N$ is stably free.
+So there is a free module $R^d$ such that $N \oplus R^d$ is free.
+
+We know that $M \oplus N=F_0$ is
+free. Thus $M \oplus N \oplus R^d = F_0 \oplus R^d$ is free and $N \oplus R^d$
+is free. Thus $M$ is stably free.
+\end{proof}
+
+
+\begin{remark}
+Stably freeness does \textbf{not} generally imply freeness, though it does
+over a local noetherian ring.
+\end{remark}
+
+Nonetheless,
+
+\begin{proposition}
+Stably free does imply free for invertible modules.
+\end{proposition}
+\begin{proof}
+Let $I$ be stably free and invertible. We must show that $I \simeq R$.
+Without loss of generality, we can assume that $\spec R$ is connected, i.e.
+$R$ has no nontrivial idempotents. We will assume this in order to talk about
+the \textbf{rank} of a projective module.
+
We know that $I \oplus R^n \simeq R^m$ for some $n, m$. Localizing at any
prime shows that $m = n+1$, since $I$ has rank one. So $I \oplus R^n \simeq R^{n+1}$.
+We will now need to construct the \textbf{exterior powers}, for which we
+digress:
+
+\begin{definition}
+Let $R$ be a commutative ring and $M$ an $R$-module. Then $\wedge M$, the
+\textbf{exterior algebra on $M$}, is the free (noncommutative) graded $R$-algebra
+generated by $M$ (with product $\wedge$) with just enough relations such that
+$\wedge$ is anticommutative (and, \emph{more strongly}, $x \wedge x=0$ for
+$x$ degree one).
+\end{definition}
+
+Clearly $\wedge M$ is a quotient of the \textbf{tensor algebra} $T(M)$, which is by
+definition $R
+\oplus M \oplus M \otimes M \oplus \dots \oplus M^{\otimes n} \oplus \dots$.
+The tensor algebra is a graded $R$-algebra in an obvious way: $(x_1 \otimes
+\dots \otimes x_a) . (y_1 \otimes \dots \otimes y_b) = x_1 \otimes \dots
+\otimes x_a \otimes y_1 \otimes \dots \otimes y_b$. This is an associative
+$R$-algebra.
+Then
\[ \wedge M = T(M)/( x \otimes x, \ x \in M). \]
+The grading on $\wedge M$ comes from the grading of $T(M)$.
+
+We are interested in basically one example:
+\begin{example}
+Say $M = R^m$. Then $\wedge^m M = R$. If $e_1, \dots, e_m \in M$ are
+generators, then $e_1 \wedge \dots \wedge e_m$ is a generator. More generally,
+$\wedge^k M$ is free on $e_{i_1} \wedge \dots \wedge e_{i_k}$ for $i_1 < \dots <
+i_k$.
+\end{example}
+
+We now make:
+
+\begin{definition}
+If $M$ is a projective $R$-module of rank $n$, then
+\[ \det(M) = \wedge^n M. \]
+\end{definition}
If $M$ is free of rank $n$, then $\det(M)$ is free of rank one and
$\wedge^{n+1}M = 0$. So, as we see by localization, for $M$ projective (i.e.
locally free) of rank $n$, $\det(M)$ is always an invertible module and $\wedge^{n+1}M = 0$.
+
+\begin{lemma}
+$\det(M \oplus N) = \det M \otimes \det N$.
+\end{lemma}
+\begin{proof}
+This isomorphism is given by wedging $\wedge^{\mathrm{top}} M \otimes
+\wedge^{\mathrm{top}} N \to \wedge^{\mathrm{top}}(M \oplus N)$. This is easily
+checked for oneself.
+\end{proof}
+
Anyway, let us finally go back to the proof. If $I \oplus R^n \simeq R^{n+1}$, then
taking determinants shows that
\[ \det I \otimes R \simeq R, \]
so $\det I \simeq R$. But $\det I = \wedge^1 I = I$ as $I$ is of rank one. So $I$ is free.
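In more detail, the identification $\det I = I$ can be seen by expanding the exterior power of the direct sum:

```latex
% Expand \wedge^{n+1}(I \oplus R^n) by the direct sum decomposition:
\[ \wedge^{n+1}(I \oplus R^n) \simeq
   \bigoplus_{k=0}^{n+1} \wedge^k I \otimes \wedge^{n+1-k} R^n. \]
% Since I has rank one, \wedge^k I = 0 for k \geq 2 (check locally), and the
% k = 0 term vanishes because \wedge^{n+1} R^n = 0.  Only k = 1 survives:
\[ \wedge^{n+1}(I \oplus R^n) \simeq I \otimes \wedge^n R^n \simeq I. \]
% On the other hand \wedge^{n+1} R^{n+1} \simeq R, so I \simeq R is free.
```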
+
+\end{proof}
+
+\begin{theorem}
+A regular local ring is factorial.
+\end{theorem}
+
Let $R$ be a regular local ring of dimension $n$; we want to show that $R$ is
factorial. Recall that a noetherian domain is factorial if and only if every
height one prime ideal is principal. So choose a prime ideal $\mathfrak{p}$ of
height one; we'd like to show that $\mathfrak{p}$ is principal.
+
+\begin{proof}
+Induction on $n$. If $n=0$, then we are done---we have a field.
+
+If $n=1$, then a height one prime is maximal, hence principal, because
+regularity is equivalent to the ring's being a DVR.
+
Assume $n>1$. The prime ideal $\mathfrak{p}$ has height one, so it is
properly contained in the maximal ideal $\mathfrak{m}$. Note also that
$\mathfrak{m}^2 \subsetneq \mathfrak{m}$, by Nakayama's lemma. I claim that
there is an element $x \in \mathfrak{m} - (\mathfrak{p} \cup \mathfrak{m}^2)$.
This follows from an argument like prime avoidance. To see that $x$ exists,
choose $x_1 \in \mathfrak{m} - \mathfrak{p}$ and $x_2 \in \mathfrak{m} -
\mathfrak{m}^2$. We are done unless $x_1 \in \mathfrak{m}^2$ and $x_2 \in
\mathfrak{p}$ (otherwise we could take $x$ to be $x_1$ or $x_2$). In this
case, we take $x = x_1 + x_2$: then $x \notin \mathfrak{p}$ (as $x_2 \in
\mathfrak{p}$ but $x_1 \notin \mathfrak{p}$) and $x \notin \mathfrak{m}^2$
(as $x_1 \in \mathfrak{m}^2$ but $x_2 \notin \mathfrak{m}^2$).
+
So choose $x \in \mathfrak{m} - (\mathfrak{p} \cup \mathfrak{m}^2)$. Let us examine
the ring $R_{x} = R[1/x]$, which contains the ideal $\mathfrak{p}R_x$.
This is a proper ideal as $x \notin \mathfrak{p}$. Now $R[1/x]$ is regular
(i.e. its localizations at primes are regular local rings). Its dimension,
however, is less than $n$, since by inverting $x$ we have removed
$\mathfrak{m}$, the unique prime of height $n$. By induction we can assume that $R_x$ is locally factorial.
+
+Now $\mathfrak{p}R_{x}$ is prime and of height one, so it is invertible as
+$R_x$ is locally factorial.
+In particular it is projective.
+
+But $\mathfrak{p}$ has a finite resolution by $R$-modules (by regularity), so
+$\mathfrak{p}R_x$ has a finite free resolution. In particular,
+$\mathfrak{p}R_{x}$ is stably free and invertible, hence free.
+Thus $\mathfrak{p}R_x$ is \textbf{principal}.
+
We want to show that $\mathfrak{p}$ itself is principal, not just after localization.
We know that there is a $y \in \mathfrak{p}$ such that $y$ generates
$\mathfrak{p}R_x$. Choose $y$ such that the ideal $(y) \subset \mathfrak{p}$ is as large
as possible; we can do this since $R$ is noetherian. This implies that $x
\nmid y$, because otherwise we could replace $y$ by $y/x \in \mathfrak{p}$,
which still generates $\mathfrak{p}R_x$ and generates a strictly larger ideal.
+
+We shall now show that
+\[ \mathfrak{p} = (y). \]
+So suppose $z \in \mathfrak{p}$. We know that $y$ generates $\mathfrak{p}$
+\textbf{after $x$ is inverted.} In particular, $z \in \mathfrak{p}R_x$. That
+is, $zx^a \in (y)$ for $a$ large. That is, we can write
\[ zx^a = yw, \quad \text{for some } w \in R . \]
We chose $x$ such that $x \notin \mathfrak{m}^2$. In particular, $R/(x)$ is
regular, hence an integral domain; i.e. $x$ is a prime element. If $a > 0$,
then $x$ must divide one of $y,w$. But we know that $x \nmid y$, so $x
\mid w$. Thus $w = w'x$ for some $w' \in R$. We find that, cancelling $x$,
+\[ zx^{a-1} = yw' \]
+and we can repeat this argument over and over until we find that
+\[ z \in (y). \]
+\end{proof}
+
+
+
diff --git a/books/cring/homotopical.tex b/books/cring/homotopical.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9c8d562f5c5451a6469dd9f34816590fde9eef72
--- /dev/null
+++ b/books/cring/homotopical.tex
@@ -0,0 +1,301 @@
+\chapter{Homotopical algebra}
+
+In this chapter, we shall introduce the formalism of \emph{model categories}.
+Model categories provide an abstract setting for homotopy theory: in
+particular, we shall see that topological spaces form a model category. In a
+model category, it is possible to talk about notions such as ``homotopy,'' and
+thus to pass to the homotopy category.
+
+But many algebraic categories form model categories as well. The category of
+chain complexes over a ring forms one. It turns out that this observation
+essentially encodes classical homological algebra. We shall see, in
+particular, how the notion of \emph{derived functor} can be interpreted in a
+model category, via this model structure on chain complexes.
+
+Our ultimate goal in developing this theory, however, is to study the
+\emph{non-abelian} case. We are interested in developing the theory of the
+\emph{cotangent complex}, which is loosely speaking the derived functor of the
+K\"ahler differentials $\Omega_{S/R}$ on the category of $R$-algebras. This is
+not a functor on an additive category; however, we shall see that the
+non-abelian version of derived functors (in the category of \emph{simplicial}
+$R$-algebras) allows one to construct the cotangent complex in an elegant way.
+
+\section{Model categories}
+
+
+
+\subsection{Definition}
+
+We need to begin with the notion of a \emph{retract} of a map.
+
+\begin{definition}
+Let $\mathcal{C}$ be a category. Then we can form a new category
+$\mathrm{Map}\mathcal{C}$ of
+\emph{maps} of $\mathcal{C}$. The objects of this category are the morphisms
+$A \to B$ of $\mathcal{C}$, and a morphism between $A \to B$ and $C \to D$ is
+given by a commutative square
+\[ \xymatrix{
+A \ar[d] \ar[r] & C \ar[d] \\
+B \ar[r] & D
+}.\]
+
+A map in $\mathcal{C}$ is a \textbf{retract} of another map in $\mathcal{C}$
+if it is a retract as an object of $\mathrm{Map}\mathcal{C}$.
+This means that there is a diagram:
+\begin{xyxy}{
+A \ar[r]\ar@/^1pc/[rr]^{Id}\ar[d]^{f} & B\ar[d]^{g} \ar[r] & A\ar[d]^{f}
+\\ X \ar[r]\ar@/_2pc/[rr]^{Id} & Y \ar[r] & X
+}\end{xyxy}
+\end{definition}
+
+For instance, one can prove:
+\begin{proposition}
In any category, the class of isomorphisms is closed under retracts.
+\end{proposition}
+We leave the proof as an exercise.
+
+\begin{definition}
A \textbf{model category} is a category $\mathcal{C}$ equipped with three classes of maps called \emph{cofibrations}, \emph{fibrations}, and \emph{weak equivalences}. They have to satisfy five axioms M1--M5.

Denote cofibrations by $\hookrightarrow$, fibrations by $\twoheadrightarrow$, and weak equivalences by $\xrightarrow{\sim}$.
+\begin{itemize}
+\item [(M1)] $\mathcal{C}$ is closed under all limits and colimits.\footnote{Many of our arguments
+will involve infinite colimits. The original formulation in
+\cite{Qui67} required only finite
+such, but most people assume infinite.}
+\item [(M2)] Each of the three classes of cofibrations, fibrations, and weak
+equivalences is \emph{closed under retracts}.\footnote{Quillen initially
+called model categories satisfying this axiom \emph{closed} model categories.
+All the model categories we consider will be closed, and we have, following
+\cite{Ho07}, omitted this axiom.}
\item [(M3)] (\emph{Two out of three}) If two of the three maps $f$, $g$, $g \circ f$ in a composition are weak equivalences, so is the third.
+\begin{xyxy}{
+\ar[r]^{f}\ar[d]_h & \ar[dl]^g
+\\&
+}\end{xyxy}
+\item [(M4)] (\emph{Lifts}) Suppose we have a diagram
+\begin{xyxy}{
+A\ar[r]\ar@{^(->}[d]^{i}& X\ar@{->>}[d]^{p}
+\\B\ar[r]\ar@{-->}[ru] & Y
+}\end{xyxy}
Here $i: A \to B$ is a cofibration and $p: X \to Y$ is a fibration.
Then a lift (the dotted arrow, making both triangles commute) exists if $i$ or $p$ is a weak equivalence.
+\item [(M5)] (\emph{Factorization}) Every map can be factored in two ways:
+\begin{xyxy}{
+&.\ar@{->>}[dr]^{\sim} &
+\\.\ar@{^(->}[ru]\ar@{_(->}[dr]_{\sim}\ar[rr]^{f} & &.
+\\&.\ar@{->>}[ru]&
+}\end{xyxy}
+In words, it can be factored as a composite of a cofibration followed by a
+fibration which is a weak equivalence, or as a cofibration which is a weak
+equivalence followed by a fibration.
+\end{itemize}
+\end{definition}
+
A map which is both a weak equivalence and a fibration will be called an
\textbf{acyclic fibration}, denoted $\overset{\sim}{\twoheadrightarrow}$. A map
which is both a weak equivalence and a cofibration will be called an
\textbf{acyclic cofibration}, denoted $\overset{\sim}{\hookrightarrow}$.
+(The word ``acyclic'' means for a chain complex that the homology is trivial;
+we shall see that this etymology is accurate when we construct a model
+structure on the category of chain complexes.)
+
+\begin{remark}
If $\mathcal{C}$ is a model category, then $\mathcal{C}^{op}$ is a model category, with the notions of fibrations and cofibrations interchanged (and the same weak equivalences). So if we prove something about fibrations, we automatically know the dual statement about cofibrations.
+\end{remark}
+
+We begin by listing a few elementary examples of model categories:
+
+\begin{example}
+\begin{enumerate}
+\item Given a complete and cocomplete category $\mathcal{C} $, then we can
+give a model structure to $\mathcal{C}$ by taking the weak equivalences to be
+the isomorphisms and the cofibrations and fibrations to be all maps.
\item If $R$ is a \emph{Frobenius ring}, i.e. one for which the classes of projective and
injective $R$-modules coincide, then the category of modules over $R$ is a
+model category. The cofibrations are the injections, the fibrations are the
+surjections, and the weak equivalences are the \emph{stable equivalences} (a
+term which we do not define). See \cite{Ho07}.
+\item The category of topological spaces admits a model structure where the
+fibrations are the \emph{Serre fibrations} and the weak equivalences are the
+\emph{weak homotopy equivalences.} The cofibrations are, as we shall see,
+determined from this, though they can be described explicitly.
+\end{enumerate}
+\end{example}
+
+
+\begin{exercise}
Show that there exists a model structure on the category of sets in which the injections are
the cofibrations, the surjections are the fibrations, and all maps are weak
equivalences.
+\end{exercise}
+
+
+\subsection{The retract argument}
+
The axioms for a model category are somewhat complicated. We are now going to
see that they are partially redundant: any two of
the classes of cofibrations, fibrations, and weak equivalences determine the
third. Along the way we shall introduce a useful trick, the \emph{retract
argument}, that we shall use many times in developing the foundations.
+
+
+\begin{definition}
+Let $\mathcal{C}$ be any category.
+Suppose that $P$ is a class of maps of $\mathcal{C}$. A map $f: A \to B$ has
+the \textbf{left lifting property} with respect to $P$ iff: for all $p: C \to D$ in $ P$ and all diagrams
+\begin{xyxy}{
+A\ar[r]\ar[d]_{f} & C\ar[d]^{p}
+\\B\ar@{-->}[ru]^{\exists }\ar[r] & D
+}\end{xyxy}
+a lift represented by the dotted arrow exists, making the diagram commute. We
+abbreviate this property to \textbf{LLP}. There is also a notion of a
+\textbf{right lifting property}, abbreviated \textbf{RLP}, where $f$ is on the right.
+\end{definition}
+
+\begin{proposition}
+Let $P$ be a class of maps of $\mathcal{C}$. Then the set of maps $f: A \to B $
+that have the LLP (resp. RLP) with respect to $P$ is closed under retracts and
+composition.
+\end{proposition}
+\begin{proof}
+This will be a diagram chase. Suppose $f: A \to B$ and $g: B \to C$ have the
+LLP with respect to maps in $P$. Suppose given a diagram
+\[ \xymatrix{
+A \ar[d]^{ g \circ f} \ar[r] & X \ar[d] \\
+C \ar[r] & Y
+}\]
+with $X \to Y$ in $P$. We have to show that there exists a lift $C \to X$. We can split this into a commutative diagram:
+\[ \xymatrix{
+A \ar[d]^{ f } \ar[r] & X \ar[dd] \\
+B \ar@{-->}[ru] \ar[rd] \ar[d]^g & \\
+C \ar[r] & Y
+}\]
+The lifting property provides a map $\phi: B \to X$ as in the dotted line in the
+diagram. This gives a diagram
+\[ \xymatrix{
+B \ar[d]^{ g } \ar[r]^{\phi} & X \ar[d] \\
+C \ar[r] \ar@{-->}[ru] & Y
+}\]
and in here we can find a lift because $g$ has the LLP with respect to the
map $p: X \to Y$ in $P$. It is easy to check that the resulting lift $C \to X$
solves the original problem. The case of retracts is a similar diagram chase,
which we leave to the reader.
+
+\end{proof}
+
The axioms of a model category imply that cofibrations have the LLP with
respect to acyclic fibrations, and acyclic cofibrations have the LLP with
+respect to fibrations. There are dual statements for fibrations. It turns out
+that these properties \emph{characterize} cofibrations and fibrations (and
+acyclic ones).
+
+
+\begin{theorem}
Suppose $\mathcal{C}$ is a model category. Then:
+\begin{itemize}
+\item [(1)] A map $f$ is a cofibration iff it has the left lifting property with respect to the class of acyclic fibrations.
+\item [(2)] A map is a fibration iff it has the right lifting property w.r.t. the class of acyclic cofibrations.
+\end{itemize}
+\end{theorem}
+\begin{proof}
Suppose $f$ has the LLP with respect to all acyclic fibrations; we want to show that $f$ is a cofibration. (The other direction is an axiom.) The strategy is to exhibit $f$ as a retract of a cofibration, and for that we use factorization. Factor $f$ as a cofibration followed by an acyclic fibration:
+\begin{xyxy}{
+A\ar[d]^{f}\ar@{^(->}[rd] &
+\\ X & X'\ar@{->>}[l]^{\sim}
+}\end{xyxy}
Since $f$ has the LLP with respect to the acyclic fibration $X' \twoheadrightarrow X$, there is a lift:
+\begin{xyxy}{
+A \ar@{^(->}[r]^i\ar[d]^{f} & X'\ar@{->>}[d]^{\sim}
+\\X \ar[r]^{Id}\ar@{-->}[ru] & X
+}\end{xyxy}
This lift exhibits $f$ as a retract of the cofibration $i$, so $f$ is a cofibration by M2:
+\begin{xyxy}{
+A\ar[r]\ar[d]^f & A\ar@{^(->}[d]^{i}\ar[r] & A\ar[d]^{f}
+\\X \ar@{..>}[r]^{\exists} & X' \ar[r] & X
+}\end{xyxy}
+\end{proof}
+\begin{theorem}
+\begin{itemize}
\item [(1)] A map $p$ is an acyclic fibration iff it has the RLP with respect to all cofibrations.
\item [(2)] A map is an acyclic cofibration iff it has the LLP with respect to all fibrations.
+\end{itemize}
+\end{theorem}
\begin{proof}
We prove (1); the proof of (2) is dual. One direction is easy: if a map is an acyclic fibration, it has the lifting property by axiom M4. Conversely, suppose $f$ has the RLP with respect to all cofibrations. Factor $f$ as a cofibration followed by an acyclic fibration, and use the RLP to produce a lift:
\begin{xyxy}{
X\ar[r]^{Id}\ar@{^(->}[d] & X\ar[d]^f
\\Y'\ar@{->>}[r]^p_\sim \ar@{-->}[ru]& Y
}\end{xyxy}
The lift exhibits $f$ as a retract of $p$; thus $f$ is a weak equivalence because $p$ is one, and it is a fibration by the previous theorem.
\end{proof}

Note what this gives us: if we know the cofibrations, we do not yet know the weak equivalences or the fibrations, but we do know the maps that are both; similarly, if we know the fibrations, we know the maps that are both weak equivalences and cofibrations.
+\begin{corollary}
A map is a weak equivalence iff it can be written as the composite of an acyclic cofibration followed by an acyclic fibration.
+\end{corollary}
\begin{proof}
We can always factor
\begin{xyxy}{
&.\ar@{->>}[dr]^p &
\\.\ar[rr]^f\ar@{^(->}[ur]^\sim && .
}\end{xyxy}
with the first map an acyclic cofibration. By two out of three, $f$ is a weak equivalence iff $p$ is, i.e. iff $p$ is an acyclic fibration.
\end{proof}
In particular, the class of weak equivalences is determined by the fibrations and cofibrations.
+\begin{example} [Topological spaces]
+The construction here is called the Serre model structure (although it was defined by Quillen). We have to define some maps.
+\begin{itemize}
+\item [(1)] The fibrations will be Serre fibrations.
\item [(2)] The weak equivalences will be the weak homotopy equivalences.
+\item [(3)] The cofibrations are determined by the above classes of maps.
+\end{itemize}
+\end{example}
+\begin{theorem}
The category of topological spaces, equipped with these classes of maps, is a model category.
+\end{theorem}
+\begin{proof}
This requires more work than one might realize. M1 is not a problem. The retract axiom M2 is also straightforward: any class defined by a lifting property is closed under retracts, and \emph{a map is a weak equivalence iff it becomes an isomorphism upon applying a certain functor} (the homotopy groups), so weak equivalences are closed under retracts too. (This characterization is important.) The two-out-of-three axiom M3 is also immediate from it. So we need the lifting and factorization axioms. One of the lifting axioms is automatic, by the definition of a cofibration.
+%\begin{xyxy}{
+% \ar[d]^{\sim} \ar[r] & .\ar@{->>}[d]^{\sim}
+% \\ .\ar[r]\ar@{-->}[ru] & .
+%}\end{xyxy}
+Let's start with the factorizations. Introduce two classes of maps:
+$$ A = \{D^n \times \{0\} \to D^n \times [0,1] \st n \geq 0\} $$
+$$ B = A \cup \{S^{n-1} \to D^n \st n \geq 0, S^{-1} = \emptyset\} $$
The sources of these maps are compact, in a category-theoretic sense. By the definition of Serre fibrations, a map is a fibration iff it has the right lifting property with respect to $A$; a map is an acyclic fibration iff it has the RLP with respect to $B$. (This was on the homework.) We need another general fact:
+\begin{proposition}
The class of maps having the left lifting property w.r.t. a class $P$ is closed under arbitrary coproducts, cobase change, and countable (or even transfinite) composition. By the countable composition of
$$ A_0 \hookrightarrow A_1 \hookrightarrow A_2 \hookrightarrow \cdots $$
we mean the map $A_0 \to \operatorname{colim}_n A_n$.
+\end{proposition}
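For instance, the coproduct case can be checked directly; here is a sketch, for hypothetical maps $f_i: A_i \to B_i$ each having the LLP with respect to $P$:

```latex
% A lifting problem for \sqcup f_i against some p : C \to D in P:
\[ \xymatrix{
\sqcup A_i \ar[d]_{\sqcup f_i} \ar[r] & C \ar[d]^{p} \\
\sqcup B_i \ar[r] & D
} \]
% Restricting to the i-th summand gives a lifting problem for f_i, which has
% a solution h_i : B_i \to C by hypothesis.  The universal property of the
% coproduct assembles the h_i into a single map h : \sqcup B_i \to C, which
% solves the original problem.  Cobase change and (transfinite) composition
% are similar one-step diagram chases.
```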
Suppose we have a map $f_0: X_0 \to Y$. We want to produce a diagram:
+\begin{xyxy}{
+X_0\ar[r]\ar[dr]_{f_0} & X_1\ar[d]^{f_1}
+\\&Y
+}\end{xyxy}
Form the map $ \sqcup V \to \sqcup D$,
where the disjoint union is taken over all commutative diagrams
+\begin{xyxy}{
V\ar[d]\ar[r] & X_0\ar[d]
+\\ D \ar[r] & Y
+}\end{xyxy}
+where $V \to D$ is in $A$. Sometimes we call these lifting problems. For every lifting problem, we formally create a solution.
+This gives a diagram:
+\begin{xyxy}{
\sqcup V \ar[r]\ar[d] & \sqcup D\ar[d]\ar[ddr] &
\\ X_0\ar[rrd]\ar[r]& X_1\ar[dr]^{f_1} &
\\ && Y
+}\end{xyxy}
where $X_1$ is the pushout of $X_0 \leftarrow \sqcup V \to \sqcup D$, and the map $f_1: X_1 \to Y$ is induced by the universal property. By construction, every lifting problem in $X_0$ can be solved in $X_1$.
+\begin{xyxy}{
+V \ar[r]\ar[d] & X_0\ar[d]\ar@{^(->}[r]^k & X_1\ar[d]
+\\D\ar[r]\ar@{-->}[ru]\ar@{..>}[rru] & Y\ar[r] & Y
+}\end{xyxy}
We know that every map in $A$ is a cofibration, and $\sqcup V \to \sqcup D$ is a homotopy equivalence. Hence $k$ is an acyclic cofibration: it is a cofibration, and it is a weak equivalence (it is in fact a homotopy equivalence).
+
Now iterate this construction to obtain a sequence $X_0 \to X_1 \to X_2 \to \cdots$; let $X_\infty = \operatorname{colim}_n X_n$, with an induced map $f: X_\infty \to Y$. The claim is that $f$ is a fibration:
+\begin{xyxy}{
+X\ar@{^(->}[r]^\sim \ar[dr] & X_\infty\ar[d]^f
+\\ &Y
+}\end{xyxy}
+by which we mean
+\begin{xyxy}{
+V\ar[r]\ar[d]_{\ell} & X_n\ar[d]\ar[r] & X_{n+1}\ar[d]\ar[r] & X_\infty\ar[d]
+\\D \ar@{-->}[ru]\ar[r] & Y \ar[r] & Y \ar[r] & Y
+}\end{xyxy}
where $\ell \in A$. Since $V$ is compact Hausdorff and $X_\infty$ is a colimit along closed inclusions, the map $V \to X_\infty$ factors through some $X_n$; by construction, the resulting lifting problem can be solved in $X_{n+1}$, hence in $X_\infty$.
+
+\end{proof}
So we still owe one lifting property, and the other factorization.
diff --git a/books/cring/integrality.tex b/books/cring/integrality.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5e63e9eafc7e87c71e9f118bf89ac14878307f5f
--- /dev/null
+++ b/books/cring/integrality.tex
@@ -0,0 +1,1940 @@
+\chapter{Integrality and valuation rings}
+\label{intchapter}
+
The notion of integrality is familiar from number theory: it is similar to
``algebraic,'' but with the polynomials involved required to be monic. In algebraic geometry, integral
extensions of rings correspond to particularly nice morphisms on the
+$\spec$'s---when the extension is finitely generated, it turns out that the
+fibers are finite. That is, there are only finitely many ways to lift a prime
+ideal to the extension: if $A \to B$ is integral and finitely generated, then
+$\spec B \to \spec A$ has finite fibers.
+
+Integral domains that are \emph{integrally closed} in their quotient field will play an
+important role for us. Such ``normal domains'' are, for example, regular in
+codimension one, which means that the theory of Weil divisors
+(see \cref{weildivsec}) applies
+to them. It is particularly nice because Weil divisors are sufficient to
+determine whether a function is regular on a normal variety.
+
+A canonical example of an integrally closed ring is a valuation ring; we shall
+see in this chapter that any integrally closed ring is an intersection of such.
+
+\section{Integrality}
+
+\subsection{Fundamentals}
+
+
+As stated in the introduction to the chapter, integrality is a condition on
+rings parallel to that of algebraicity for field extensions.
+\begin{definition} \label{intdefn}
+Let $R$ be a ring, and $R'$ an $R$-algebra. An element $x \in R'$
+is said to be \textbf{integral} over $R$ if $x$ satisfies a monic polynomial
+equation in $R[X]$, say
+\[ x^n + r_1 x^{n-1} + \dots + r_n = 0, \quad r_1, \dots, r_n \in R. \]
We say that $R'$ is \textbf{integral} over $R$ if every $x \in R'$ is
integral over $R$.
+\end{definition}
+
+Note that in the definition, we are not requiring $R$ to be a \emph{subring} of
+$R'$.
+
+\begin{example} \label{sixthroot}
+$\frac{1+\sqrt{-3}}{2}$ is integral over $\mathbb{Z}$; it is in fact a sixth
+root of unity, thus satisfying the equation $X^6 -1 = 0$.
However, $\frac{1+\sqrt{-5}}{2}$ is not integral over $\mathbb{Z}$. To explain this,
however, we will
need to work a bit more (see \cref{onegeneratorintegral} below).
+\end{example}
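In fact, a quadratic relation already witnesses the integrality of the first element; a direct computation:

```latex
% Let \alpha = \frac{1 + \sqrt{-3}}{2}.  Then
\[ \alpha^2 = \frac{1 + 2\sqrt{-3} - 3}{4} = \frac{-1 + \sqrt{-3}}{2}
            = \alpha - 1, \]
% so \alpha is a root of the monic polynomial
\[ X^2 - X + 1 \in \mathbb{Z}[X] \]
% (a factor of X^6 - 1), and \mathbb{Z}[\alpha] is generated by 1, \alpha
% as a \mathbb{Z}-module.
```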
+
+
+\begin{example}
+Let $L/K$ be a field extension. Then $L/K$ is integral if and only if it is
+algebraic, since $K$ is a field and we can divide polynomial equations by the
+leading coefficient to make them monic.
+\end{example}
+
+\begin{example}
+Let $R$ be a graded ring. Then the subring $R^{(d)} \subset R$ was defined in
+\cref{dpowerofring}; recall that this consists of elements of $R$ all of whose
+nonzero homogeneous components live in degrees that are multiples of $d$.
+Then the $d$th power of any homogeneous
+element in $R$ is in $R^{(d)}$. As a result, every homogeneous element of $R$
+is integral over $R^{(d)}$.
+\end{example}
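The monic witness in this example can be written down explicitly:

```latex
% If x \in R is homogeneous of degree k, then x^d is homogeneous of degree
% kd, a multiple of d, so x^d \in R^{(d)}.  Thus x is a root of the monic
% polynomial
\[ T^d - x^d \in R^{(d)}[T]. \]
```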
+
+We shall now interpret the condition of integrality in terms of finite
+generation of certain modules.
+Suppose $R$ is a ring, and $R'$ an $R$-algebra. Let $x \in R'$.
+
+\begin{proposition} \label{onegeneratorintegral}
+$x \in R'$ is integral over $R$ if and only if the subalgebra $R[x] \subset R'$
+(generated by $R, x$) is a finitely generated
+$R$-module.
+\end{proposition}
+
+This notation is an abuse of notation (usually $R[x]$ refers to a polynomial
+ring), but it should not cause confusion.
+
This result, for instance, lets us show that $\frac{1+\sqrt{-5}}{2}$ is not integral
over $\mathbb{Z}$: when one keeps taking powers, one gets arbitrarily
large denominators, so $\mathbb{Z}[\frac{1+\sqrt{-5}}{2}]$ cannot be a
finitely generated $\mathbb{Z}$-module.
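To make the growing denominators concrete, write $\alpha = \frac{1+\sqrt{-5}}{2}$, so that $\alpha^2 = \alpha - \frac{3}{2}$; the first few powers are:

```latex
\[ \alpha^2 = \frac{-2 + \sqrt{-5}}{2}, \qquad
   \alpha^3 = \frac{-7 - \sqrt{-5}}{4}, \qquad
   \alpha^5 = \frac{19 - 5\sqrt{-5}}{8}. \]
% The denominators are unbounded: if all powers lay in 2^{-N}\mathbb{Z}[\sqrt{-5}]
% for a fixed N, then \mathbb{Z}[\alpha] would be a submodule of a finitely
% generated \mathbb{Z}-module, hence finitely generated, making \alpha integral.
```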
+
+\begin{proof}
+If $x \in R'$ is integral, then $x$ satisfies
+\[ x^n + r_1 x^{n-1}+\dots+r_n = 0, \quad r_i \in R. \]
+Then $R[x]$ is generated as an $R$-module by $1, x, \dots, x^{n-1}$. This is
+because the submodule of $R'$ generated by $1, x ,\dots, x^{n-1}$ is closed under
+multiplication by $R$ and by multiplication by $x$ (by the above equation).
+
+Now suppose $x$ generates a subalgebra $R[x] \subset R'$ which is a finitely
+generated $R$-module. Then the increasing sequence
+of $R$-modules generated by $\{1\}, \left\{1, x\right\}, \left\{1, x,
+x^2\right\}
+, \dots$ must stabilize, since the union is $R[x]$.\footnote{As an easy
+exercise, one may see that if a finitely generated module $M$ is the union of
+an increasing sequence of submodules $M_1 \subset M_2 \subset M_3 \subset
+\dots$, then $M = M_n$ for some $n$; we just need to take $n$ large enough
+such that $M_n$ contains each of the finitely many generators of $M$.} It follows that some $x^n$
+can be expressed as a linear combination of smaller powers of $x$. Thus $x$ is
+integral over $R$.
+\end{proof}
+
So, if $R'$ is an $R$-algebra, we can say that
an element $x \in R'$ is \textbf{integral} over $R$ if either of the following
equivalent conditions is satisfied:
+
+\begin{enumerate}
+\item There is a monic polynomial in $R[X]$ which vanishes on $x$.
+\item $R[x] \subset R'$ is a finitely generated $R$-module.
+\end{enumerate}
+
+\begin{example}
+Let $F$ be a field, $V$ a finite-dimensional $F$-vector space, $T: V \to V$ a
+linear transformation. Then the ring generated by $T$ and $F$ inside
+$\mathrm{End}_F(V)$ (which is a noncommutative ring) is finite-dimensional
+over $F$.
+Thus, by similar reasoning, $T$ must satisfy a polynomial equation with
+coefficients in $F$ (e.g. the characteristic polynomial).
+\end{example}
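A concrete instance of this example, say with $F = \mathbb{Q}$ and $T$ a rotation by $\pi/2$:

```latex
\[ T = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad
   \chi_T(t) = \det(t I - T) = t^2 + 1, \]
% and indeed T satisfies its characteristic polynomial (Cayley--Hamilton):
\[ T^2 + I = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}
           + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 0. \]
```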
+
+Of course, if $R'$ is integral over $R$, $R'$ may not be a finitely generated
+$R$-module. For instance, $\overline{\mathbb{Q}}$ is not a finitely generated
+$\mathbb{Q}$-module, although the extension is integral. As we shall see in
+the next section, this is always the case if $R'$ is a finitely
+generated $R$-\emph{algebra}.
+
+
+We now will add a third equivalent condition to this idea of ``integrality,''
+at least in the case where the structure map is an injection.
+
+\begin{proposition} \label{thirdintegralitycriterion}
+Let $R$ be a ring, and suppose $R$ is a subring of $R'$.
+$x \in R'$ is integral if and only if there exists a
+ finitely generated faithful $R$-module $M \subset R'$ such that $R \subset M$ and
+ $xM \subset M$.
+\end{proposition}
A module $M$ is \emph{faithful} if $a M = 0$ for $a \in R$ implies $a=0$. That is, the map
from $R$ into the $\mathbb{Z}$-endomorphisms of $M$ is injective.
+If $R$ is a \emph{subring} of $R'$ (i.e. the structure map $R \to R'$ is
+injective), then $R'$ for instance is a faithful $R$-module.
+\begin{proof}
+It's obvious that the second condition above (equivalent to integrality)
+implies the condition of this
+proposition. Indeed, one could just take $M = R[x]$.
+
Now let us prove that if there exists such an $M$ which is finitely generated,
then $x$ is integral. Note that even though $M$ is finitely generated, the
subalgebra $R[x]$ is not obviously a finitely generated $R$-module, so this
implication requires a bit of proof.
+
+
Suppose $y_1, \dots, y_k \in M$ generate $M$ as an $R$-module. Then multiplication
+by $x$ gives an $R$-module map $M \to M$. In particular, we can write
+\[ xy_i = \sum a_{ij} y_j \]
+where each $a_{ij} \in R$.
+These $\left\{a_{ij}\right\}$ may not be unique, but let us make some choices;
+we get a $k$-by-$k$ matrix $A \in M_k(R)$. The claim is that $x$ satisfies the
+characteristic polynomial of $A$.
+
+Consider the matrix
\[ (x 1 - A) \in M_k(R'). \]
+Note that $(x1-A)$ annihilates each $y_i$, by the choice of $A$.
We can consider the adjugate matrix $B = (x1 -A)^{\mathrm{adj}}$. Then
+\[ B(x1 - A) = \det(x1 - A) 1. \]
+This product of matrices obviously annihilates each vector $y_i$. It follows
+that
\[ \det(x1 - A) y_i = 0, \quad \forall i, \]
+which implies that $\det (x1-A)$ kills $M$. This implies that $\det (x1 -
+A)=0$ since $M$ is faithful.
+
+As a result, $x$ satisfies the characteristic polynomial.
+\end{proof}
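Here is a small worked instance of this determinant trick, taking $R = \mathbb{Z}$, $R' = \mathbb{C}$, $x = i$, and $M = \mathbb{Z}[i]$ with generators $y_1 = 1$, $y_2 = i$:

```latex
% Multiplication by x = i acts on the generators by
\[ i \cdot 1 = 0 \cdot 1 + 1 \cdot i, \qquad
   i \cdot i = (-1) \cdot 1 + 0 \cdot i, \]
% giving the matrix A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, and
\[ \det(x 1 - A) = \det \begin{pmatrix} x & -1 \\ 1 & x \end{pmatrix}
                 = x^2 + 1, \]
% recovering the familiar monic equation i^2 + 1 = 0.
```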
+
+\begin{exercise}
+Let $R$ be a noetherian
+local domain with maximal ideal $\mathfrak{m}$. As we will define shortly, $R$
+is \emph{integrally closed} if every element of the quotient field $K=K(R)$
integral over $R$ belongs to $R$ itself. Show that if $R$ is integrally closed,
$x \in K$, and $x \mathfrak{m} \subset \mathfrak{m}$, then $x \in R$.
+\end{exercise}
+\begin{exercise} Let us say that an $A$-module is \emph{$n$-generated}
+if it is generated by at most $n$ elements.
+
+Let $A$ and $B$ be two rings such that $A\subset B$, so that $B$ is an
+$A$-module.
+
+Let
+$n\in\mathbb{N}$. Let $u\in B$. Then, the following four assertions
+ are equivalent:
+
+\begin{enumerate}
+\item There exists a monic polynomial
+$P\in A\left[ X\right] $ with $\deg P=n$ and $P\left( u\right) =0$.
+\item There exist a $B$-module $C$ and an
+$n$-generated $A$-submodule $U$ of $C$ such that $uU\subset U$ and such that
+every $v\in B$ satisfying $vU=0$ satisfies $v=0$. (Here, $C$ is an $A$-module,
+since $C$ is a $B$-module and $A\subset B$.)
+\item There exists an $n$-generated
+$A$-submodule $U$ of $B$ such that $1\in U$ and $uU\subset U$.
+\item As an $A$-module, $A[u]$ is spanned by $1, u, \dots,
+u^{n-1}$.\end{enumerate}
+\end{exercise}
+
+
We proved \cref{thirdintegralitycriterion} in order to show that the set of integral elements is well behaved.
+
+\begin{proposition}
+Let $R \subset R'$. Let $S = \left\{x \in R': x \text{ is integral over }
+R\right\}$. Then $S$ is a subring of $R'$. In particular, it is closed under
+addition and multiplication.
+\end{proposition}
+\begin{proof}
+Suppose $x,y \in S$.
We can consider the subalgebras $R[x], R[y] \subset R'$
generated by $x$ and $y$ respectively over $R$. By assumption, these are finitely
+generated $R$-modules. In particular, the tensor product
+\[ R[x] \otimes_R R[y] \]
+is a finitely generated $R$-module (by \cref{fingentensor}).
+
+We have a ring-homomorphism $R[x]\otimes_R R[y] \to R'$
+which comes from the inclusions $R[x], R[y] \rightarrowtail R'$.
+Let $M$ be the image of $R[x] \otimes_R R[y]$ in $R'$. Then $M$ is an
+$R$-submodule of $R'$, indeed an $R$-subalgebra containing $x,y$. Also, $M$ is
+finitely generated. Since $x+y, xy\in M$ and $M$ is a subalgebra, it
+follows that
+\[ (x+y) M \subset M, \quad xy M \subset M. \]
+Thus $x+y, xy$ are integral over $R$.
+\end{proof}
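For example, $\sqrt{2}$ and $\sqrt{3}$ are integral over $\mathbb{Z}$, so the proposition guarantees that $\sqrt{2} + \sqrt{3}$ is as well; an explicit monic equation can be found by hand:

```latex
% Set x = \sqrt{2} + \sqrt{3}.  Then
\[ x^2 = 5 + 2\sqrt{6}, \qquad (x^2 - 5)^2 = 4 \cdot 6 = 24, \]
% so x is a root of the monic integer polynomial
\[ X^4 - 10 X^2 + 1. \]
```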
+
+Let us consider the ring $\mathbb{Z}[\sqrt{-5}]$; this is the canonical
+example of a ring where unique factorization fails. This is because \( 6 = 2
+\times 3 = (1+\sqrt{-5})(1-\sqrt{-5}). \)
+One might ask: what about $\mathbb{Z}[\sqrt{-3}]$? It turns out that
+$\mathbb{Z}[\sqrt{-3}]$ lacks unique factorization as well. Indeed, here we have
+\[ (1 - \sqrt{-3})(1+\sqrt{-3}) = 4 = 2 \times 2. \]
These factors admit no further factorization, and $1 - \sqrt{-3}$ and $2$ do not
differ by units.
+So in this ring, we have a failure of unique factorization. Nonetheless, the
+failure of unique factorization in $\mathbb{Z}[\sqrt{-3}]$ is less
+noteworthy, because $\mathbb{Z}[\sqrt{-3}]$ is not \emph{integrally closed}. Indeed, it turns out that $\mathbb{Z}[\sqrt{-3}]$ is
+contained in the larger ring
+\( \mathbb{Z}\left[ \frac{1 + \sqrt{-3}}{2}\right], \)
+which does have unique factorization, and this larger ring is finite over
+$\mathbb{Z}[\sqrt{-3}]$.\footnote{In fact, $\mathbb{Z}[\sqrt{-3}]$ is an index two subgroup of $\mathbb{Z}\left[
+\frac{1 + \sqrt{-3}}{2}\right]$, as the ring $\mathbb{Z}[ \frac{1 + \sqrt{-3}}{2}]$
+can be described as the set of elements $a +
+b\sqrt{-3}$ where $a,b$ are either both integers or both integers plus
+$\frac{1}{2}$, as is easily seen: this set is closed under addition and
+multiplication.}
+ Since being integrally closed is a
+prerequisite for having unique factorization (see \cref{} below), the failure in
+$\mathbb{Z}[\sqrt{-3}]$ is not particularly surprising.
+
+Note that, by contrast, $\mathbb{Z}[ \frac{1 + \sqrt{-5}}{2}]$ does not
+contain $\mathbb{Z}[\sqrt{-5}]$ as a finite index subgroup---it cannot be
+slightly enlarged in the same sense. When one enlarges $\mathbb{Z}[\sqrt{-5}]$,
+one has to add a lot of stuff.
+We will see more formally that $\mathbb{Z}[\sqrt{-5}]$ is \emph{integrally
+closed} in its quotient field, while $\mathbb{Z}[\sqrt{-3}]$ is not. Since
+unique factorization domains are automatically integrally closed, the failure
+of $\mathbb{Z}[\sqrt{-5}]$ to be a UFD is much more significant than that of
+$\mathbb{Z}[\sqrt{-3}]$.
+
+
+\subsection{Le sorite for integral extensions}
+
+In commutative algebra and algebraic geometry, there are a lot of standard
+properties that a \emph{morphism} of rings $\phi: R \to S$ can have: it could
+be of \emph{finite type} (that is, $S$ is finitely generated over $\phi(R)$),
+it could be \emph{finite} (that is, $S$ is a finite $R$-module), or it could
+be \emph{integral} (which we have defined in \cref{intdefn}). There are many more examples
+that we will encounter as we dive deeper into commutative algebra.
+In algebraic geometry, there are corresponding properties of morphisms of
+\emph{schemes,} and there are many more interesting ones here.
+
+In these cases, there is usually---for any reasonable property---a standard
+and familiar list of
+properties that one proves about them. We will refer to such lists as
+``sorites,'' and prove our first one now.
+
+\begin{proposition}[Le sorite for integral morphisms]
+\begin{enumerate}
+\item For any ring $R$ and any ideal $I \subset R$, the map $R \to R/I$ is
+integral.
+\item If $\phi: R \to S$ and $\psi: S \to T$ are integral morphisms, then so
+is $\psi \circ \phi: R \to T$.
+\item If $\phi: R \to S$ is an integral morphism and $R'$ is an $R$-algebra,
+then the base-change
+$R' \to R' \otimes_R S$ is integral.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
The first property is obvious. For the second, note that integrality of a
morphism of rings depends only on the image of the source in the target,
so we can suppose that $R \subset S \subset T$. Suppose $t
+\in T$. By assumption, there is a monic polynomial equation
+\[ t^n + s_1 t^{n-1} + \dots + s_n = 0 \]
+that $t$ satisfies, where each $s_i \in S$.
+
+In particular, we find that $t$ is integral over $R[s_1, \dots, s_n]$.
+As a result, the module $R[s_1, \dots, s_n, t]$ is finitely generated over the
+ring $R'=R[s_1, \dots, s_n]$.
+By the following \cref{finitelygeneratedintegral}, $R'$ is a finitely generated
+$R$-module. In
+particular, $R[s_1, \dots, s_n,t]$ is a finitely generated $R$-module (not
+just a
+finitely generated $R'$-module).
+
Thus $R[s_1, \dots, s_n,t]$ is a finitely generated $R$-module which is
stable under multiplication by $t$ and, being a ring containing $R[t]$,
faithful as an $R[t]$-module; by the determinant-trick criterion for
integrality, $t$ is integral over $R$.

For the third property, note that $R' \otimes_R S$ is generated as a ring by
the images of $R'$ and $S$; each $1 \otimes s$ satisfies, over $R'$, the
image of a monic polynomial that $s$ satisfies over $R$, and the elements of
$R' \otimes_R S$ integral over $R'$ form a subring (any two of them lie in a
common subring that is a finite $R'$-module, by
\cref{finitelygeneratedintegral}).
+\end{proof}
+
+We now prove a result that can equivalently be phrased as ``finite type plus
+integral implies finite'' for a map of rings.
+
+\begin{proposition} \label{finitelygeneratedintegral}
+Let $R'$ be a finitely generated, integral $R$-algebra. Then $R'$ is a
+finitely generated $R$-module: that is, the map $R \to R'$ is finite.
+\end{proposition}
+\begin{proof}
+Induction on the number of generators of $R'$ as $R$-algebra. For one
+generator, this follows from \rref{onegeneratorintegral}.
+In general, we will have $R' = R[\alpha_1 ,\dots, \alpha_n]$ for some
+$\alpha_i \in R'$.
+By the inductive hypothesis, $R[\alpha_1 , \dots, \alpha_{n-1}]$ is a finite
+$R$-module; by the case of one generator, $R'$ is a finite $R[\alpha_1, \dots,
+\alpha_{n-1}]$-module. This establishes the result by the next exercise.
+\end{proof}
+
+\begin{exercise}
Let $R \to S, S \to T$ be morphisms of rings. Suppose $S$ is a finite
$R$-module and $T$ a finite $S$-module. Then $T$ is a finite $R$-module.
+\end{exercise}
+
+
+
+
+\subsection{Integral closure}
+
+Let $R, R'$ be rings.
+\begin{definition}
If $R \subset R'$, then the set $S = \left\{x \in R': x \text{ is integral
over } R\right\}$ is called the \textbf{integral closure} of $R$ in $R'$. We
say that $R$ is \textbf{integrally closed in $R'$} if $S = R$.
+
+
+When $R$ is a domain, and $K$ is the quotient field, we shall simply
+say that $R$ is \textbf{integrally closed} if it is integrally closed in
+$K$.
+Alternatively, some people say that $R$ is \textbf{normal} in this case.
+\end{definition}
+
+Integral closure (in, say, the latter sense) is thus an operation that maps
+integral domains to integral domains. It is easy to see that the operation is
+\emph{idempotent:} the integral closure of the integral closure is the integral
+closure.
+
+
+\begin{example}
+The integers $\mathbb{Z} \subset \mathbb{C}$ have as integral closure (in
+$\mathbb{C}$) the set
+of complex numbers satisfying a monic polynomial with integral
+coefficients. This set is called the set of \textbf{algebraic integers}.
+
+For instance, $i$ is an algebraic integer because it satisfies the equation $X^2 +1 = 0$.
+$\frac{1 - \sqrt{-3}}{2}$ is an algebraic integer, as we talked about last
+time; it is a sixth root of unity. On the other hand, $\frac{1+\sqrt{-5}}{2}$
+is not an algebraic integer.
+\end{example}
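For concreteness, here is the computation behind the last claim. Setting
$\alpha = \frac{1+\sqrt{-5}}{2}$, we have
\[ \alpha^2 - \alpha = \frac{-4 + 2\sqrt{-5}}{4} - \frac{2 + 2\sqrt{-5}}{4}
= -\frac{3}{2}, \]
so the minimal polynomial of $\alpha$ over $\mathbb{Q}$ is $t^2 - t +
\frac{3}{2}$, which does not lie in $\mathbb{Z}[t]$. By Gauss's lemma, an
algebraic number is an algebraic integer precisely when its monic minimal
polynomial over $\mathbb{Q}$ has integer coefficients, so $\alpha$ is not an
algebraic integer.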
+
+\begin{example}
+Take $\mathbb{Z} \subset \mathbb{Q}$. The claim is that $\mathbb{Z}$ is
+integrally closed in its quotient field $\mathbb{Q}$, or simply---integrally
+closed.
+\end{example}
+\begin{proof}
We will build on this proof later. Here is the point. Suppose $\frac{a}{b}
\in \mathbb{Q}$ satisfies an equation
\[ P(a/b) = 0, \quad P(t) = t^n + c_1 t^{n-1} + \dots + c_0 , \quad
\text{each } c_i \in
\mathbb{Z}.\]
Assume that $a,b$ have no common factors; we must prove that $b$ has no prime
factors, so that $b = \pm 1$.
So suppose $b$ has a prime factor, say $q$; we shall derive a contradiction.
+
+We interrupt with a definition.
+\begin{definition}
The \textbf{valuation at $q$} (or \textbf{$q$-adic valuation}) is the map
$v_q: \mathbb{Q}^* \to \mathbb{Z}$ sending a nonzero rational number, written
as $q^k (a/b)$ with $q \nmid a,b$, to $k$. We extend this to all
rational numbers via $v_q(0) = \infty$.
+\end{definition}
+In general, this just counts the number of factors of $q$ in the expression.
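A few sample values:
\[ v_3\left( \tfrac{18}{5} \right) = 2, \qquad v_3\left( \tfrac{5}{18}
\right) = -2, \qquad v_3(7) = 0, \]
since $\frac{18}{5} = 3^2 \cdot \frac{2}{5}$ and $3 \nmid 2, 5, 7$.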
+
+
+Note the general property that
+\begin{equation} \label{val1} v_q(x+y) \geq \min( v_q(x), v_q(y)) .
+\end{equation}
+If $x,y$ are both divisible by some power of $q$, so is $x+y$; this is the
+statement above. We also have the useful property
+\begin{equation} \label{val2} v_q(xy) = v_q(x) + v_q(y). \end{equation}
+
+
+
+
+Now return to the proof that $\mathbb{Z}$ is normal. We would like to show that
+\( v_q(a/b) \geq 0. \)
+This will prove that $b$ is not divisible by $q$. When we show this for all
+$q$, it will follow that $a/b \in \mathbb{Z}$.
+
+We are assuming that $P(a/b) = 0$. In particular,
+\[ \left( \frac{a}{b} \right)^n = -c_1 \left( \frac{a}{b} \right)^{n-1} -
+\dots - c_0. \]
+Apply $v_q$ to both sides:
+\[ n v_q ( a/b) \geq \min_{i>0} v_q( c_i (a/b)^{n-i}). \]
+Since the $c_i \in \mathbb{Z}$, their valuations are nonnegative. In
+particular, the right hand side is at least
+\[ \min_{i>0} (n-i) v_q(a/b). \]
This cannot happen if $v_q(a/b)<0$: since $n-i < n$ for each $i>0$, we would
then have $(n-i) v_q(a/b) > n v_q(a/b)$ for every $i > 0$, contradicting the
displayed inequality.
+\end{proof}
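To see the last step in a concrete case, suppose $3/2$ satisfied a monic
quadratic $t^2 + c_1 t + c_0$ with $c_1, c_0 \in \mathbb{Z}$. Applying
$v_2$ as above would give
\[ -2 = v_2\left( (3/2)^2 \right) \geq \min\left( v_2\left( c_1 \cdot
\tfrac{3}{2} \right), v_2(c_0) \right) \geq -1, \]
which is absurd; this is exactly the displayed inequality with $q = 2$ and
$n = 2$.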
+
+This argument applies more generally. If $K$ is a field, and $R \subset K$ is
+a subring ``defined
+by valuations,'' such as the $v_q$, then $R$ is integrally closed in its
+quotient field.
+More precisely,
+note the reasoning of the previous example: the key idea was that $\mathbb{Z}
+\subset \mathbb{Q}$ was characterized by the rational numbers $x$ such that
+$v_q(x) \geq 0$ for all primes $q$.
We can abstract this idea as follows. Suppose there is a family
$\mathcal{V}$ of functions $K^* \to \mathbb{Z}$ (such as
$\left\{v_q: \mathbb{Q}^* \to \mathbb{Z}\right\}$), each satisfying
\eqref{val1} and
\eqref{val2} above, such that $R$ consists precisely of $0$ together with
the $x \in K^*$ satisfying $v(x) \geq 0$ for all $v \in \mathcal{V}$. Then
$R$ is integrally closed in $K$.
+We will talk more about
+this, and about valuation rings, below.
+
+\begin{example}
+We saw earlier (\cref{sixthroot}) that $\mathbb{Z}[\sqrt{-3}]$ is not
+integrally closed, as $\frac{1 + \sqrt{-3}}{2}$ is integral over this ring and
+in the quotient field, but not in the ring.
+\end{example}
+
+We shall give more examples in the next subsection.
+
+
+\subsection{Geometric examples}
+
+Let us now describe the geometry of a non-integrally closed ring.
+Recall that finitely generated (reduced) $\mathbb{C}$-algebras are supposed to
+correspond to affine algebraic varieties. A \emph{smooth} variety (i.e., one
+that is a complex manifold) will always correspond to an integrally closed
ring (though this relies on a deep result, the Auslander--Buchsbaum theorem,
that a regular local ring is a unique
factorization domain, and consequently integrally closed): non-normality is a sign of singularities.
+\begin{example}
+Here is a ring which is not integrally closed. Take $\mathbb{C}[x, y]/(x^2
+- y^3)$. Algebraically, this is the subring of the polynomial ring
+ $\mathbb{C}[t]$ generated by $t^2$ and $t^3$.
+
+In the complex plane, $\mathbb{C}^2$, this corresponds to the subvariety $C
+\subset \mathbb{C}^2$ defined by $x^2 =
+y^3$. In $\mathbb{R}^2$, this can be drawn: it has a singularity at $(x,y)=0$.
+
+Note that $x^2 = y^3$ if and only if there is a complex number $z$ such that $x
+= z^3, y = z^2$. This complex number $z$ can be recovered via $x/y$ when $x,y
\neq 0$. In particular, there is a map $\mathbb{C} \to C$ which sends $z
\mapsto (z^3, z^2)$. At every point other than the origin, the inverse can be
+recovered using rational functions. But this does not work at the origin.
+
+We can think of $\mathbb{C}[x,y]/(x^2 - y^3)$ as the subring $R'$ of
+$\mathbb{C}[z]$
generated by $\left\{z^n: n \neq 1\right\}$. There is a map from
$\mathbb{C}[x,y]/(x^2 - y^3)$ to $R'$ sending $x \mapsto z^3, y \mapsto z^2$. Since these two
+domains are isomorphic, and $R'$ is not integrally closed, it follows that
+$\mathbb{C}[x,y]/(x^2 - y^3)$ is not integrally closed.
+The element $z$ can be thought of as an element of the fraction field of $R'$
+or of $\mathbb{C}[x,y]/(x^2 - y^3)$.
+It is integral, though.
+
+The failure of the ring to be integrally closed has to do with the singularity
+at the
+origin.
+\end{example}
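To make the integrality assertion at the end explicit: writing $z = x/y$ in
the fraction field of $\mathbb{C}[x,y]/(x^2 - y^3)$, we have
\[ z^2 = \frac{x^2}{y^2} = \frac{y^3}{y^2} = y, \]
so $z$ satisfies the monic equation $T^2 - y = 0$ with coefficients in the
ring, even though $z$ itself does not lie in the ring.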
+
+
+We now give a generalization of the above example.
+
+\begin{example}
+This example is outside the scope of the present course. Say that $X \subset
+\mathbb{C}^n$ is given as the zero locus of some holomorphic functions
+$\left\{f_i: \mathbb{C}^{n} \to \mathbb{C}\right\}$. We just gave an example
+when $n=2$.
+Assume that $0 \in X$, i.e. each $f_i$ vanishes at the origin.
+
Let $R$ be the ring of germs of holomorphic functions at $0$, in other words
holomorphic functions defined on small open neighborhoods of zero. Each of these
+$f_i$ becomes an element of $R$. The ring
+\( R/(\left\{f_i\right\} ) \)
+is called the ring of germs of holomorphic functions on $X$ at zero.
+
+Assume that $R$ is a domain. This assumption, geometrically, means that near
+the point zero in $X$, $X$ can't be broken into two smaller closed analytic
+pieces. The fraction field of $R$ is to be thought of as the ring of
+germs of meromorphic functions on $X$ at zero.
+
+We state the following without proof:
+
+\begin{theorem}
Let $g/g'$ be an element of the fraction field, with $g, g' \in R$ and $g' \neq 0$. Then $g/g'$
+is integral over $R$ if and only if $g/g'$ is bounded near zero.
+\end{theorem}
+
+In the previous example of $X$ defined by $x^2 = y^3$, the function $x/y$
+(defined near the origin on the curve) is
bounded near the origin, so it is integral over the ring of germs of
holomorphic functions. The reason it is not defined near the origin is \emph{not} that it
+blows up. In fact, it extends continuously, but not holomorphically, to the
+rest of the variety $X$.
+\end{example}
+
+\section{Lying over and going up}
+We now interpret integrality in terms of the geometry of $\spec$.
+In general, for $R \to S$ a ring-homomorphism, the induced map $\spec S \to
+\spec R$ need not be topologically nice; for instance, even if $S$ is a
+finitely generated $R$-algebra, the image of $\spec S$ in $\spec R$ need not
+be either open or closed.\footnote{It is, however, true that if $R$ is
+\emph{noetherian} (see \rref{noetherian}) and $S$ finitely generated over
+$R$, then the image of $\spec S$ is \emph{constructible,} that is, a finite
+union of locally closed subsets. \add{this result should be added sometime}.}
+
+We shall see that under conditions of integrality, more can be said.
+
+
+
+\subsection{Lying over}
+
+In general, given a morphism of algebraic varieties $f: X \to Y$, the image of
+a closed subset $Z \subset X$ is far from closed. For instance, a regular function $f: X \to \mathbb{C}$
+that is a closed map would have to be either surjective or constant (if $X$ is
+connected, say).
+Nonetheless, under integrality hypotheses, we can say more.
+
+\begin{proposition}[Lying over] \label{lyingover}
+If $\phi: R \to R'$ is an integral morphism, then the induced map
+\[ \spec R' \to \spec R \]
+is a closed map; it is surjective if $\phi$ is injective.
+ \end{proposition}
+
+Another way to state the last claim, without mentioning $\spec R'$, is the
following. Assume $\phi$ is injective and integral. If
$\mathfrak{p} \subset R$ is prime, then there exists a prime $\mathfrak{q} \subset R'$
+such that $\mathfrak{p}$ is the inverse image $\phi^{-1}(\mathfrak{q})$.
+
+\begin{proof} First suppose $\phi$ injective, in which case we must prove the
+map $\spec R' \to \spec R$ surjective.
+Let us reduce to the case of a local ring.
+For a prime $\mathfrak{p} \in \spec R$, we must show that $\mathfrak{p}$
+arises as the inverse image of an element of $\spec R'$.
+So we replace $R$ with $R_{\mathfrak{p}}$. We get a map
+\[ \phi_{\mathfrak{p}}: R_{\mathfrak{p}} \to (R- \mathfrak{p})^{-1} R' \]
+which is injective if $\phi$ is, since localization is an exact functor. Here
+we
+have localized both $R, R'$ at the multiplicative subset $R - \mathfrak{p}$.
+
+Note that $\phi_{\mathfrak{p}}$ is an integral extension too. This follows
+because integrality is preserved by base-change.
+We will now prove the result for $\phi_{\mathfrak{p}}$; in particular, we
+will show
+that there is a prime ideal of $(R- \mathfrak{p})^{-1} R'$ that pulls back to
$\mathfrak{p}R_{\mathfrak{p}}$. This will imply that if we pull this prime
ideal back to $R'$, it will pull back to $\mathfrak{p}$ in $R$. In detail, we
+can consider the diagram
+\[ \xymatrix{
+\spec (R-\mathfrak{p})^{-1} R'\ar[d] \ar[r] & \spec R_{\mathfrak{p}} \ar[d] \\
+\spec R' \ar[r] & \spec R
+}\]
+which shows that if $\mathfrak{p} R_{\mathfrak{p}}$ appears in the image of the top map, then
+$\mathfrak{p}$
+arises as the image of something in $\spec R'$.
+So it is sufficient for the proposition (that is, the case of $\phi$
+injective) to handle the case of $R$ local, and
+$\mathfrak{p}$ the maximal ideal.
+
+
+In other words, we need to show that:
+\begin{quote}
+If $R$ is a \emph{local} ring, $\phi: R \hookrightarrow R'$ an injective
+integral morphism, then the maximal ideal of $R$ is the inverse image of
+something in $\spec R'$.
+\end{quote}
+
+Assume $R$ is local with maximal ideal $\mathfrak{p}$. We want to find a prime ideal $\mathfrak{q} \subset R'$ such that
+$\mathfrak{p} = \phi^{-1}(\mathfrak{q})$. Since $\mathfrak{p}$ is already
+maximal, it will suffice to show that $\mathfrak{p} \subset
+\phi^{-1}(\mathfrak{q})$. In particular, we need to show that there is a prime
+ideal $\mathfrak{q}$ such that
+\( \mathfrak{p} R' \subset \mathfrak{q}. \)
+The pull-back of this will be $\mathfrak{p}$.
+
+If $\mathfrak{p}R' \neq R'$, then
+$\mathfrak{q}$ exists, since every proper ideal of a ring is contained in a
+maximal ideal. We will in fact show
+\begin{equation} \label{thingdoesn'tgenerate} \mathfrak{p} R' \neq R',
+\end{equation}
+or that $\mathfrak{p}$ does not generate the unit ideal in $R'$.
+If we prove \eqref{thingdoesn'tgenerate}, we will thus be able to find our
+$\mathfrak{q}$, and we will be done.
+
+Suppose the
+contrary, i.e. $\mathfrak{p}R' = R'$. We will derive a contradiction using
+Nakayama's lemma (\cref{nakayama}).
+Right now, we cannot apply Nakayama's lemma directly because $R'$ is not a
+finite $R$-module. The idea is that we will ``descend'' the ``evidence'' that
+\eqref{thingdoesn'tgenerate} fails to a small subalgebra of $R'$, and then
+obtain a contradiction.
+To do this, note that $1 \in \mathfrak{p}R'$, and we can write
+\[ 1 = \sum x_i \phi(y_i) \]
+where $x_i \in R', y_i \in \mathfrak{p}$.
+This is the ``evidence'' that \eqref{thingdoesn'tgenerate} fails, and it
+involves only a finite amount of data.
+
+Let $R''$ be the subalgebra of $R'$ generated by $\phi(R)$ and the $x_i$. Then
+$R'' \subset R'$ and is finitely generated as an $R$-algebra, because it is generated by
+the $x_i$. However, $R''$ is integral over $R$ and thus finitely generated as an
+$R$-module, by \cref{finitelygeneratedintegral}. This
+is where integrality comes in.
+
So $R''$ is a finitely generated $R$-module. Also, the expression
$1 = \sum x_i \phi(y_i)$ shows that $\mathfrak{p}R'' = R''$. But this
contradicts Nakayama's lemma, as $R'' \neq 0$. This contradiction shows that
$\mathfrak{p}$ cannot generate $(1)$ in $R'$, and proves the surjectivity
part of the lying over theorem.
+
+Finally, we need to show that if $\phi: R \to R'$ is \emph{any} integral
+morphism, then $\spec R' \to \spec R$ is a closed map. Let $X = V(I) $ be a
+closed subset of $\spec R'$. Then the image of $X$ in $\spec R$ is the image
+of the map
+\[ \spec R'/I \to \spec R \]
obtained from the morphism $R \to R' \to R'/I$, which is integral. Replacing
$R'$ by $R'/I$, we are thus reduced to the case $X = \spec R'$: we must show
that the image of $\spec$ under any integral morphism is closed.
+
+In other words, we must prove the following statement. Let $\phi: R \to R'$ be
+an integral morphism; then the image of $\spec R'$ in $\spec R$ is closed.
+But, quotienting by $\ker \phi$ and taking the map $R/\ker \phi \to R'$, we
+may reduce to the case of $\phi$ injective; however, then this follows from
+the surjectivity result already proved.
+\end{proof}
+
+In general, there will be \emph{many} lifts of a given prime ideal.
+Consider for instance the inclusion $\mathbb{Z} \subset \mathbb{Z}[i]$.
+Then the prime ideal $(5) \in \spec \mathbb{Z}$ can be lifted either to $(2+i)
+\in \spec \mathbb{Z}[i]$
+or $(2-i) \in \spec \mathbb{Z}[i]$.
+These are distinct prime ideals: $\frac{2+i}{2-i} \notin \mathbb{Z}[i]$. But
+note that any element of $\mathbb{Z}$ divisible by $2+i$ is automatically
+divisible by its conjugate $2-i$, and consequently by their product $5$
+(because $\mathbb{Z}[i]$ is a UFD, being a euclidean domain).
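By contrast, some primes lift in only one way: since $2 = -i(1+i)^2$ and
$(1+i)$ is prime in $\mathbb{Z}[i]$, the ideal $(1+i)$ is the unique prime
of $\mathbb{Z}[i]$ lying over $(2) \in \spec \mathbb{Z}$.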
+
+Nonetheless, the different lifts are incomparable.
+
+\begin{proposition} \label{incomparableintegral}
+Let $\phi: R \to R'$ be an integral morphism. Then given $\mathfrak{p} \in
+\spec R$, there are no inclusions among the elements $\mathfrak{q} \in \spec
+R'$ lifting $\mathfrak{p}$.
+\end{proposition}
+In other words, if $\mathfrak{q}, \mathfrak{q}' \in \spec R'$ lift
+$\mathfrak{p}$, then $\mathfrak{q} \not\subset \mathfrak{q}'$.
+\begin{proof}
+We will give a ``slick'' proof by various reductions.
+Note that the operations of localization and quotienting only shrink the
+$\spec$'s: they do not ``merge'' heretofore distinct prime ideals into one.
+Thus, by quotienting $R$ by $\mathfrak{p}$, we may assume $R$ is a
+\emph{domain} and that $\mathfrak{p} = 0$.
+Suppose we had two primes $\mathfrak{q} \subsetneq \mathfrak{q}'$ of $R'$
+lifting $(0) \in \spec R$.
+Quotienting $R'$ by $\mathfrak{q}$, we may assume that $\mathfrak{q} = 0$.
+We could even assume $R \subset R'$, by quotienting by the kernel of $\phi$.
+The next lemma thus completes the proof, because it shows that $\mathfrak{q}'
+= 0$, contradiction.
+\end{proof}
+
+\begin{lemma}
+Let $R \subset R'$ be an inclusion of integral domains, which is an integral
+morphism. If $\mathfrak{q} \in \spec R'$ is a nonzero prime ideal, then
+$\mathfrak{q} \cap R$ is nonzero.
+\end{lemma}
+\begin{proof}
Let $x \in \mathfrak{q}$ be nonzero. There is an equation
\[ x^n + r_1 x^{n-1} + \dots + r_n = 0, \quad r_i \in R, \]
that $x$ satisfies, by assumption. Dividing by a power of $x$ if necessary
(permissible, as $R'$ is a domain), we can assume $r_n \neq 0$. Then
$r_n = -(x^n + r_1 x^{n-1} + \dots + r_{n-1} x) \in \mathfrak{q} \cap R$, so
this intersection is nonzero.
+
+\end{proof}
+
+\begin{corollary}
+Let $R \subset R'$ be an inclusion of integral domains, such that $R'$ is
+integral over $R$. Then if one of $R, R'$ is a field, so is the other.
+\end{corollary}
+\begin{proof}
+Indeed, $\spec R' \to \spec R$ is surjective by \cref{lyingover}: so if $\spec
+R'$ has one element (i.e., $R'$ is a field), the same holds for $\spec R$
+(i.e., $R$ is a field). Conversely, suppose $R$ a field.
+Then any two prime ideals in $\spec R'$ pull back to the same element of $\spec
+R$. So, by \cref{incomparableintegral}, there can be no inclusions among the prime ideals of $\spec
+R'$. But $R'$ is a domain, so it must then be a field.
+\end{proof}
+
+
+\begin{exercise} Let $k$ be a field.
+Show that $k[\mathbb{Q}_{\geq 0}]$ is integral over the polynomial ring $k[T]$.
+Although this is a \emph{huge} extension, the prime ideal $(T)$ lifts in only
+one way to $\spec k[\mathbb{Q}_{\geq 0}]$.
+\end{exercise}
+
+\begin{exercise}
+Suppose $A \subset B$ is an inclusion of rings over a field of characteristic
+$p$. Suppose $B^p \subset A$, so that $B/A$ is integral in a very strong sense.
+Show that the map $\spec B \to \spec A$ is a \emph{homeomorphism.}
+\end{exercise}
+
+
+
+\subsection{Going up}
+Let $R \subset R'$ be an inclusion of rings with $R'$ integral over $R$. We saw in the
+lying over theorem (\cref{lyingover}) that any prime $\mathfrak{p} \in \spec R$
+has a prime $\mathfrak{q} \in \spec R'$ ``lying over'' $\mathfrak{p}$, i.e.
+such that $R \cap \mathfrak{q} = \mathfrak{p}$.
+We now want to show that we can lift finite \emph{inclusions} of primes to $R'$.
+
+\begin{proposition}[Going up]
+Let $R \subset R'$ be an integral inclusion of rings.
+Suppose $\mathfrak{p}_1 \subset \mathfrak{p}_2 \subset \dots \subset
+\mathfrak{p}_n \subset R$ is a
+finite ascending chain of prime ideals in $R$.
+Then there is an ascending chain $\mathfrak{q}_1 \subset \mathfrak{q}_2 \subset
+\dots \subset \mathfrak{q}_n$ in $\spec R'$ lifting this chain.
+
+Moreover, $\mathfrak{q}_1$ can be chosen arbitrarily so as to lift
+$\mathfrak{p}_1$.
+\end{proposition}
+\begin{proof}
+By induction and lying over (\cref{lyingover}), it suffices to show:
+\begin{quote}
+Let $\mathfrak{p}_1 \subset \mathfrak{p}_2$ be an inclusion of primes in $\spec
+R$. Let $\mathfrak{q}_1 \in \spec R'$ lift $\mathfrak{p}_1$. Then there is
+$\mathfrak{q}_2 \in \spec R'$, which satisfies the dual conditions of lifting
+$\mathfrak{p}_2$ and containing $\mathfrak{q}_1$.
+\end{quote}
+To show that this is true, we apply \cref{lyingover} to the inclusion
+$R/\mathfrak{p}_1 \hookrightarrow R'/\mathfrak{q}_1$. There is an element of
+$\spec R'/\mathfrak{q}_1$ lifting $\mathfrak{p}_2/\mathfrak{p}_1$; the
+corresponding element of $\spec R'$ will do for $\mathfrak{q}_2$.
+\end{proof}
+
+\section{Valuation rings}
+
+A valuation ring is a special type of local ring. Its distinguishing
characteristic is that divisibility is a ``total preorder.'' That is, two
elements of the quotient field are never incomparable under divisibility.
+ We shall see in this section that integrality can be detected using
+valuation rings only.
+
+Geometrically, the valuation ring is something like a local piece of a smooth
+curve. In fact, in algebraic geometry, a more compelling reason to study
+valuation rings is provided by the valuative criteria for separatedness and
properness (cf. \cite{EGA} or \cite{Ha77}). One key observation about
valuation rings that leads to these criteria is that any local domain can be
``dominated'' by a valuation ring with the same quotient field (i.e. mapped
into a valuation ring via a local
homomorphism), and that valuation rings are precisely the maximal elements in
this relation of domination.
+
+\subsection{Definition}
+
+\begin{definition}
+A \textbf{valuation ring} is a domain $R$ such that for every pair of elements
+$a,b \in R$, either $a \mid b$ or $b \mid a$.
+\end{definition}
+
+\begin{example}
+$\mathbb{Z}$ is not a valuation ring. It is neither true that 2 divides 3
+nor that 3 divides 2.
+\end{example}
+
+\begin{example}
+$\mathbb{Z}_{(p)}$, which is the set of all fractions of the form $a/b \in
+\mathbb{Q}$ where $p \nmid b$, is a valuation ring. To check whether $a/b$
+divides $a'/b'$ or vice versa, one just has to check which is divisible by
+the larger power of $p$.
+\end{example}
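As a quick check in $\mathbb{Z}_{(2)}$: for $\frac{4}{3}$ and $\frac{6}{5}$
we have $v_2\left(\frac{4}{3}\right) = 2 \geq 1 =
v_2\left(\frac{6}{5}\right)$, and indeed
\[ \frac{4/3}{6/5} = \frac{20}{18} = \frac{10}{9} \in \mathbb{Z}_{(2)}, \]
so $\frac{6}{5}$ divides $\frac{4}{3}$ in this ring.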
+
+\begin{proposition}
+Let $R$ be a domain with quotient field $K$. Then $R$ is a valuation ring if
+and only if for every $x \in K$, either $x$ or $x^{-1}$ lies in $R$.
+\end{proposition}
+
\begin{proof} Indeed, if $x=a/b$ with $a,b \in R$ nonzero, then either $a \mid
b$ or $b \mid a$; in the first case $x^{-1} \in R$, in the second $x \in R$.
Conversely, if the condition holds and $a,b \in R$ are nonzero, applying it
to $x = a/b \in K$ shows that either $b \mid a$ or $a \mid b$.
+\end{proof}
+
+
+\subsection{Valuations}
+The reason for the name ``valuation ring'' is provided by the next definition.
+As we shall see, any valuation ring comes from a ``valuation.''
+
+By definition, an \emph{ordered abelian group} is an abelian group $A$
+together with a set of \emph{positive elements} $A_+ \subset A$. This set is
+required to be closed under addition and satisfy the property that if $x \in
+A$, then precisely one of the following is true: $x \in A_+$, $-x \in A_+$,
or $x = 0$. This allows one to define an ordering $<$ on $A$ by writing $x <
y$ when $y - x \in A_+$.

\begin{definition}
Let $K$ be a field and $A$ an ordered abelian group. A \textbf{valuation} on
$K$ with values in $A$ is a map $v: K^* \to A$ such that:
\begin{enumerate}
\item $v(xy) = v(x) + v(y)$ for all $x, y \in K^*$;
\item $v(x+y) \geq \min(v(x), v(y))$ whenever $x + y \neq 0$.
\end{enumerate}
\end{definition}

\begin{theorem}
\begin{enumerate}
\item If $v$ is a valuation on $K$ with values in $A$, then $R = \left\{x
\in K^*: v(x) \geq 0\right\} \cup \left\{0\right\}$ is a valuation ring with
quotient field $K$.
\item Let $R$ be a domain with quotient field $K$ and $\mathfrak{p} \subset
R$ a prime ideal. Then there is a valuation $v$ on $K$ that is nonnegative
on $R$ and such that $\mathfrak{p} = R \cap \left\{x \in K^*: v(x) >
0\right\}$.
\end{enumerate}

\end{theorem}
+
+Let us motivate this by the remark:
+\begin{remark}
A valuation ring is automatically a local ring. A local ring is a ring in
which, for every $x$ in the ring, either $x$ or $1-x$ is invertible. Let us
show that this is true for a valuation ring.
+
If $x$ belongs to a valuation ring $R$ with valuation $v$, it is invertible
precisely when $v(x)=0$. So if $x, 1-x$ were both noninvertible, then both
would have positive valuation. However, that would imply $0 = v(1) \geq
\min( v(x), v(1-x)) > 0$, a contradiction.
+\end{remark}
+
+\begin{quote}
+If $R'$ is any valuation ring (say defined by a valuation $v$), then $R'$ is
+local with maximal ideal consisting of elements with positive valuation.
+\end{quote}
+
+The theorem above says that there's a good supply of valuation rings.
In particular, if $R$ is any domain, $\mathfrak{p} \subset R$ a prime ideal,
then we can choose a valuation ring $R' \supset R$ such that $\mathfrak{p}$ is
the intersection of the maximal ideal of $R'$ with $R$.
So the image of the map $\spec R' \to \spec R$ contains $\mathfrak{p}$.
+
+\begin{proof}
+Without loss of generality, replace $R$ by $R_{\mathfrak{p}}$, which is a local
+ring with maximal ideal $\mathfrak{p}R_{\mathfrak{p}}$. The maximal ideal
+intersects $R$ only in $\mathfrak{p}$.
+
+So, we can assume without loss of generality that
+\begin{enumerate}
+\item $R$ is local.
+\item $\mathfrak{p}$ is maximal.
+\end{enumerate}
+
+Let $P$ be the collection of all subrings $R' \subset K$ such that $R' \supset
+R$ but $\mathfrak{p}R' \neq R'$. Then $P$ is a poset under inclusion. The
poset is nonempty, since $R \in P$. Every totally ordered chain in $P$ has an
upper bound: given a totally ordered collection of rings in $P$, one can
take their union.
+We invoke:
+\begin{lemma}
+Let $R_\alpha$ be a chain in $P$ and $R' = \bigcup R_\alpha$. Then $R' \in P$.
+\end{lemma}
+\begin{proof}
+Indeed, it is easy to see that this is a subalgebra of $K$ containing $R$. The
+thing to observe is that
+\[ \mathfrak{p}R' = \bigcup_\alpha \mathfrak{p} R_\alpha ;\]
since by assumption, $1 \notin \mathfrak{p}R_\alpha$ (because each $R_\alpha
\in P$), we get $1 \notin \mathfrak{p}R'$. In particular, $R' \in P$.
+\end{proof}
+
By the lemma, we can apply Zorn's lemma to the poset $P$. In particular, $P$ has a maximal
+element $R'$. By construction, $R'$ is some subalgebra of $K$ and
+$\mathfrak{p}R' \neq R'$. Also, $R'$ is maximal with respect to these
+properties.
+
+We show first that $R'$ is local, with maximal ideal $\mathfrak{m}$ satisfying
+\[ \mathfrak{m} \cap R = \mathfrak{p}. \]
+The second part is evident from locality of $R'$, since $\mathfrak{m} $
+must contain
+the proper ideal $\mathfrak{p}R'$, and $\mathfrak{p} \subset R$ is a maximal
+ideal.
+
Suppose that $x \in R'$; we show that either $x$ or $1-x$ belongs to $R'^*$ (i.e.
is invertible). Take the ring $R'[x^{-1}]$. If $x$ is noninvertible, this
+properly contains $R'$. By maximality, it follows that $\mathfrak{p}R'[x^{-1}]
+= R'[x^{-1}]$.
+
+And we're out of time. We'll pick this up on Monday.
+
+\end{proof}
+
+Let us set a goal.
+
+
+First, recall the notion introduced last time. A \textbf{valuation ring} is a
+domain $R$ where for all $x$ in the fraction field of $R$, either $x$ or
+$x^{-1}$ lies in $R$. We saw that if $R$ is a valuation ring, then $R$ is
+local. That is, there is a unique maximal ideal $\mathfrak{m} \subset R$,
+automatically prime. Moreover, the zero ideal $(0)$ is prime, as $R$ is a
+domain. So if you look at the spectrum $\spec R$ of a valuation ring $R$, there
+is a unique closed point $\mathfrak{m}$, and a unique generic point
$(0)$. There might be some other prime ideals in $\spec R$; this depends on
the ordered abelian group in which the valuation takes its values.
+
+\begin{example}
+Suppose the valuation defining the valuation ring $R$ takes values in
+$\mathbb{R}$. Then the only primes are $\mathfrak{m}$ and zero.
+\end{example}
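By contrast, larger value groups yield longer chains of primes. For
instance, one can check that sending a nonzero rational function in $k(x,y)$
to the lexicographically smallest exponent $(a,b)$ of a monomial $x^a y^b$
appearing in its numerator, minus that of its denominator, defines a
valuation with values in $\mathbb{Z} \times \mathbb{Z}$ ordered
lexicographically; the corresponding valuation ring has a chain of three
distinct primes $(0) \subsetneq \mathfrak{p} \subsetneq \mathfrak{m}$,
corresponding to the convex subgroups of the value group.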
+
+Let $R$ now be any ring, with $\spec R$ containing prime ideals
+$\mathfrak{p} \subset \mathfrak{q}$. In particular, $\mathfrak{q}$ lies in
+the closure of $\mathfrak{p}$.
+As we will see, this implies that there is a map
+\[ \phi: R \to R' \]
+such that $\mathfrak{p} = \phi^{-1}(0)$ and $\mathfrak{q} =
+\phi^{-1}(\mathfrak{m})$, where $\mathfrak{m}$ is the maximal ideal of $R'$.
+This statement says that the relation of closure in $\spec R$ is always
+controlled by valuation rings.
+In yet another phrasing, in the map
+\[ \spec R' \to \spec R \]
+the closed point goes to $\mathfrak{q}$ and the generic point to
+$\mathfrak{p}$. This is our eventual goal.
+
+To carry out this goal, we need some more elementary facts. Let us discuss
+things that don't have any obvious relation to it.
+
+\subsection{Back to the goal} Now we return to the goal of the lecture. Again,
+$R$
+was any ring, and we had primes $\mathfrak{p} \subset \mathfrak{q} \subset
+R$. We
+wanted a valuation ring $R'$ and a map $\phi: R \to R'$ such that zero pulled
+back to $\mathfrak{p}$ and the maximal ideal pulled back to $\mathfrak{q}$.
+
+What does it mean for $\mathfrak{p}$ to be the inverse image of $(0) \subset
+R'$? This means that $\mathfrak{p} = \ker \phi$. So we get an injection
+\[ R/\mathfrak{p} \rightarrowtail R'. \]
+We will let $R'$ be a subring of the quotient field $K$ of the domain
+$R/\mathfrak{p}$. Of course, this subring will contain $R/\mathfrak{p}$.
+
+In this case, we will get a map $R \to R'$ such that the pull-back of zero is
+$\mathfrak{p}$. What we want, further, to be true is that $R'$ is a valuation
+ring and the pull-back of the maximal ideal is $\mathfrak{q}$.
+
+
+
This is starting to look like the problem we discussed last time.
+Namely, let's throw out $R$, and replace it with $R/\mathfrak{p}$.
+Moreover, we can replace $R$ with $R_{\mathfrak{q}}$ and assume that $R$ is
+local with maximal ideal $\mathfrak{q}$.
What we need to show is that there is a valuation ring $R'$ contained in the fraction
field of $R$, containing $R$, such that the intersection of the maximal
ideal of
$R'$ with $R$ is equal to $\mathfrak{q} \subset R$.
If we do this, then we will have accomplished our goal.
+
+\begin{lemma}
+Let $R$ be a local domain. Then there is a valuation subring $R'$ of the
+quotient
field of $R$ that \emph{dominates} $R$, i.e.\ the map $R \to R'$ is a
+\emph{local} homomorphism.
+\end{lemma}
+
+Let's find $R'$ now.
+
+Choose $R'$ maximal such that $\mathfrak{q} R' \neq R'$. Such a ring exists, by
+Zorn's lemma. We gave this argument at the end last time.
+
+\begin{lemma}
+$R'$ as described is local.
+\end{lemma}
+\begin{proof}
Look at $\mathfrak{q}R' \subset R'$; it is a proper ideal, by assumption.
+In particular, $\mathfrak{q}R'$ is contained in some maximal ideal
+$\mathfrak{m}\subset R'$. Replace $R'$ by $R'' = R'_{\mathfrak{m}}$.
+Note that
+\[ R' \subset R'' \]
+and
+\[ \mathfrak{q}R'' \neq R'' \]
+because $\mathfrak{m}R'' \neq R''$. But $R'$ is maximal, so $R' = R''$, and
+$R''$ is a local ring. So $R'$ is a local ring.
+\end{proof}
+
Let $\mathfrak{m}$ be the maximal ideal of $R'$. Then $\mathfrak{m} \supset
\mathfrak{q}R'$, so $\mathfrak{m} \cap R \supset \mathfrak{q}$; since
$\mathfrak{q}$ is maximal in $R$ and $1 \notin \mathfrak{m}$, in fact
$\mathfrak{m} \cap R = \mathfrak{q}$.
+All that is left to prove now is that $R'$ is a valuation ring.
+
+\begin{lemma}
+$R'$ is integrally closed.
+\end{lemma}
+
+\begin{proof}
+Let $R''$ be its integral closure. Then $\mathfrak{m} R'' \neq R''$ by lying
+over, since $\mathfrak{m}$ (the maximal ideal of $R'$) lifts up to $R''$. So
+$R''$ satisfies
+\[ \mathfrak{q}R'' \neq R'' \]
+and by maximality, we have $R'' = R'$.
+\end{proof}
+
+To summarize, we know that $R'$ is a local, integrally closed subring of the
+quotient field of $R$, such that the maximal ideal of $R'$ pulls back to
+$\mathfrak{q}$ in $R$.
+All we now need is:
+
+\begin{lemma}
+$R'$ is a valuation ring.
+\end{lemma}
+\begin{proof}
+Let $x$ lie in the fraction field. We must show that either $x$ or $x^{-1} \in
+R'$. Say $x \notin R'$. This means by maximality of $R'$ that $R'' =
+R'[x]$ satisfies
+\[ \mathfrak{q}R'' = R''. \]
+In particular, we can write
+\[ 1 = \sum q_i x^i, \quad q_i \in \mathfrak{q}R' \subset R'. \]
+This implies that
+\[ (1-q_0) + \sum_{i > 0} -q_i x^i = 0. \]
But $1-q_0$ is invertible in $R'$, since $R'$ is local and $q_0 \in
\mathfrak{q}R'$ is a nonunit. We can divide by $1-q_0$ and by the highest
power of $x$:
\[ x^{-N} + \sum_{i>0} \frac{-q_i}{1-q_0} x^{-N+i} = 0. \]
In particular, $1/x$ is integral over $R'$; this implies that $1/x \in
R'$ since $R'$ is integrally closed. So
$R'$ is a valuation ring.
+\end{proof}
+
+We can state the result formally.
+\begin{theorem}
+Let $R$ be a ring, $\mathfrak{p} \subset \mathfrak{q}$ prime ideals. Then there
+is a homomorphism $\phi: R \to R'$ into a valuation ring $R'$ with maximal
+ideal
+$\mathfrak{m}$ such that
+\[ \phi^{-1}(0) = \mathfrak{p} \]
+and
+\[ \phi^{-1}(\mathfrak{m} ) = \mathfrak{q} .\]
+\end{theorem}
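For concreteness, consider the simplest instance of the theorem.
\begin{example}
Take $R = \mathbb{Z}$ and $\mathfrak{p} = (0) \subset \mathfrak{q} = (p)$ for a
prime number $p$. Then the inclusion
\[ \phi: \mathbb{Z} \hookrightarrow \mathbb{Z}_{(p)} \]
maps into a discrete valuation ring with maximal ideal $\mathfrak{m} =
p\mathbb{Z}_{(p)}$, and $\phi^{-1}(0) = (0)$ while $\phi^{-1}(\mathfrak{m}) =
(p)$, as the theorem predicts.
\end{example}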
+
+There is a related fact which we now state.
+\begin{theorem}
+Let $R$ be any domain. Then the integral closure of $R$ in the quotient field
+$K$ is the intersection
+\[ \bigcap R_{\alpha} \]
+of all valuation rings $R_{\alpha} \subset K$ containing $R$.
+\end{theorem}
+So an element of the quotient field is integral over $R$ if and only if its
+valuation is nonnegative at every valuation which is nonnegative on $R$.
+
+\begin{proof}
+The $\subset$ argument is easy, because one can check that a valuation ring is
+integrally closed. (Exercise.)
+The interesting direction is to assume that $v(x) \geq 0$ for all $v$
+nonnegative
+on $R$.
+
Let us suppose $x$ is nonintegral. Let $R' = R[1/x]$ and let $I$ be the ideal
$(x^{-1}) \subset R'$. There are two cases:
\begin{enumerate}
\item $I = R'$. Then $x^{-1}$ is invertible in the ring $R'$, so we can write
$x^{-1}P(x^{-1}) = 1$ for some polynomial $P$ with coefficients in $R$.
Multiplying by a high power of $x$ shows that $x$ is
integral over $R$. Contradiction.
+\item Suppose $I \subsetneq R'$. Then $I$ is contained in a maximal ideal
+$\mathfrak{q} \subset R'$. There is a valuation subring $R'' \subset K$ ,
+containing $R'$, such that the corresponding valuation is positive on
+$\mathfrak{q}$. In particular, this valuation is positive on $x^{-1}$,
+so it is
+negative on $x$, contradiction.
+\end{enumerate}
+\end{proof}
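To see the criterion in action, consider the cuspidal cubic.
\begin{example}
Let $R = k[t^2, t^3] \subset K = k(t)$. The element $t \in K$ is integral over
$R$, since it satisfies the monic equation $X^2 - t^2 = 0$. Correspondingly,
any valuation $v$ on $K$ that is nonnegative on $R$ satisfies $2v(t) = v(t^2)
\geq 0$, so $v(t) \geq 0$; that is, $t$ lies in every valuation ring
containing $R$. In fact, the integral closure of $R$ in $K$ is $k[t]$.
\end{example}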
+
So the integral closure has this nice characterization via valuation rings. In
some sense, every integrally closed ring is integrally closed for the same
reason that $\mathbb{Z}$ is: it is the common nonnegative locus of a family of
valuations.
+
+\section{The Hilbert Nullstellensatz}
+
+The Nullstellensatz is the basic algebraic fact, which we have invoked in the
+past to justify various examples, that connects the idea of
+the $\spec$ of a ring to classical algebraic geometry.
+
+\subsection{Statement and initial proof of the Nullstellensatz}
+
+There are several ways in which the Nullstellensatz can be stated. Let us
+start with the following very concrete version.
+
+\begin{theorem} \label{nullstellensatzoverC}
+All maximal ideals in the polynomial ring $R=\mathbb{C}[x_1, \dots, x_n]$ come
+from points in $\mathbb{C}^n$. In other words, if $\mathfrak{m} \subset R$ is
+maximal, then there exist $a_1, \dots, a_n \in \mathbb{C}$ such that
+$\mathfrak{m} = (x_1 - a_1, \dots, x_n - a_n)$.
+\end{theorem}
+
+
+The maximal spectrum of $R=\mathbb{C}[x_1, \dots, x_n]$ is thus identified with
+$\mathbb{C}^n$.
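In one variable, the theorem can be checked by hand.
\begin{example}
The ring $\mathbb{C}[x]$ is a principal ideal domain, so a maximal ideal is
generated by an irreducible polynomial. Since $\mathbb{C}$ is algebraically
closed, the irreducible polynomials are, up to scalars, exactly the linear
polynomials $x - a$. So every maximal ideal has the form $(x-a)$ for some $a
\in \mathbb{C}$. The content of the theorem is that this persists in several
variables, where no such factorization argument is available.
\end{example}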
+
+We shall now reduce \rref{nullstellensatzoverC} to an easier claim.
+Let
+$\mathfrak{m}\subset R$ be a maximal ideal. Then there is a map
+\[ \mathbb{C} \to R \to R/\mathfrak{m} \]
+where $R/\mathfrak{m}$ is thus a finitely generated $\mathbb{C}$-algebra, as
+$R$ is. The ring $R/\mathfrak{m}$ is also a field by maximality.
+
+We would like to show that $R/\mathfrak{m}$ is a finitely generated
+$\mathbb{C}$-vector
space. This would imply that $R/\mathfrak{m}$ is integral over $\mathbb{C}$;
since there are no proper algebraic extensions of $\mathbb{C}$, it would then
follow that the map $\mathbb{C} \to R/\mathfrak{m}$ is an
isomorphism. If $a_i \in \mathbb{C}$ ($1 \leq i \leq n$) is the image of
+$x_i $ in $R/\mathfrak{m} =
+\mathbb{C}$, it will follow that $(x_1 - a_1, \dots, x_n - a_n) \subset
+\mathfrak{m}$, so $(x_1 - a_1, \dots, x_n - a_n)=
+\mathfrak{m}$.
+
+
+Consequently, the Nullstellensatz in this form would follow from the next
+claim:
+
+\begin{proposition}
+Let $k$ be a field, $L/k$ an extension of fields. Suppose $L$ is a finitely
generated $k$-algebra. Then $L$ is a finite-dimensional $k$-vector space.
+\end{proposition}
+This is what we will prove.
+
+We start with an easy proof in the special case:
+\begin{lemma}
+Assume $k$ is uncountable (e.g. $\mathbb{C}$, the original case of interest).
+Then the above proposition is true.
+\end{lemma}
+\begin{proof}
+Since $L$ is a finitely generated $k$-algebra, it suffices to show that $L/k$
+is algebraic.
+If not, there exists $x \in L$ which isn't algebraic over $k$. So $x$ satisfies
+no nontrivial polynomials.
I claim now that the uncountably many elements $\frac{1}{x-\lambda}, \lambda
\in k$, are linearly
independent over $k$. This will be a contradiction as $L$ is a finitely
+generated $k$-algebra, hence at most countably dimensional over $k$. (Note that
+the polynomial ring is countably dimensional over $k$, and $L$ is a quotient.)
+
+So let's prove this. Suppose not. Then there is a nontrivial linear dependence
\[ \sum \frac{c_i}{x - \lambda_i} = 0, \quad c_i, \lambda_i \in k. \]
+Here the $\lambda_j$ are all distinct to make this nontrivial. Clearing
+denominators, we find
+\[ \sum_i c_i \prod_{j \neq i } (x- \lambda_j) = 0. \]
+Without loss of generality, $c_1 \neq 0$.
This equality holds in the field $L$. But $x$ is transcendental over $k$, so
we can regard it as a relation in the polynomial ring $k[x]$. Viewed there,
every term of the sum other than the $i = 1$ term is divisible by
$x - \lambda_1$ as a polynomial.
+It follows that, as polynomials in the indeterminate $x$,
+\[ x - \lambda_1 \mid c_1 \prod_{j \neq 1} (x - \lambda_j). \]
This is a contradiction: evaluating at $x = \lambda_1$ gives
$c_1 \prod_{j \neq 1}(\lambda_1 - \lambda_j) = 0$, which is impossible since
the $\lambda_i$ are distinct and $c_1 \neq 0$.
+\end{proof}
+
+This is kind of a strange proof, as it exploits the fact that $\mathbb{C}$ is
+uncountable.
+This shouldn't be relevant.
+
+\subsection{The normalization lemma}
+
+Let's now give a more algebraic proof.
+We shall exploit the following highly useful fact in commutative algebra:
+
+\begin{theorem}[Noether normalization lemma] Let $k$ be a field, and $R =
+k[x_1, \dots, x_n]/\mathfrak{p}$ be a finitely generated domain over $k$ (where
+$\mathfrak{p}$ is a prime ideal in the polynomial ring).
+
+Then there exists a polynomial subalgebra $k[y_1, \dots, y_m] \subset R$ such
+that $R$ is integral over $k[y_1, \dots, y_m]$.
+\end{theorem}
+
+Later we will see that $m$ is the \emph{dimension} of $R$.
+
There is a geometric picture here: $\spec R$ is some irreducible algebraic
variety in $k^n$ (plus some additional points), of dimension smaller than
$n$ if $\mathfrak{p} \neq 0$. Then there exists a \emph{finite map} to $k^m$.
+In particular, we can map surjectively $\spec R \to k^m$ which is integral.
+The fibers are in fact finite, because integrality implies finite fibers. (We
+have not actually proved this yet.)
+
+How do we actually find such a finite projection? In fact, in characteristic
+zero, we just take a
vector space projection $\mathbb{C}^n \to \mathbb{C}^m$. For a ``generic''
projection onto a subspace of the appropriate dimension, the projection will
do as our finite map. In characteristic $p$, this may not work.
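The classic example showing why a generic projection is needed is the
hyperbola.
\begin{example}
Let $R = k[x, y]/(xy - 1)$. The projection to the $x$-axis is not finite: $y =
1/x$ is not integral over $k[x]$, since $k[x]$ is integrally closed in $k(x)$
and $1/x \notin k[x]$. But a generic linear projection works: setting $u = x +
y$, the relation $x(u - x) = 1$ gives
\[ x^2 - ux + 1 = 0, \]
which is monic over $k[u]$. So $x$, and hence $y = u - x$, is integral over
$k[u]$, and $R$ is finite over $k[u]$.
\end{example}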
+
+\begin{proof}
+First, note that $m$ is uniquely determined as the transcendence degree of the
+quotient field of $R$ over $k$.
+
Among the variables $x_1, \dots, x_n$ (which we think of as elements of $R$,
by an abuse of notation), choose a maximal subset which is algebraically
independent.
+This subset has no nontrivial polynomial relations. In particular, the ring
+generated by that subset is just the polynomial ring on that subset.
We can permute these variables and assume that
$$\left\{x_1,\dots, x_m\right\}$$ is a maximal such subset. In particular, $R$
+contains the \emph{polynomial ring} $k[x_1, \dots, x_m]$ and is generated by
+the rest of the variables. The rest of the variables are not adjoined freely
+though.
+
+The strategy is as follows. We will implement finitely many changes of
+variable so that $R$ becomes integral over $k[x_1, \dots, x_m]$.
+
+The essential case is where $m=n-1$. Let us handle this. So we have
+\[ R_0 = k[x_1, \dots, x_m] \subset R = R_0[x_n]/\mathfrak{p}. \]
+Since $x_n$ is not algebraically independent, there is a nonzero polynomial
+$f(x_1, \dots, x_m, x_n) \in \mathfrak{p}$.
+
+We want $f$ to be monic in $x_n$. This will buy us integrality. A priori, this
+might not be true. We will modify the coordinate system to arrange that,
+though. Choose $N \gg 0$. Define for $1 \leq i \leq m$,
+\[ x_i' = x_i + x_n^{N^i}. \]
+Then the equation becomes:
+\[ 0 = f(x_1, \dots, x_m, x_n) = f( \left\{x_i' - x_n^{N^i}\right\} , x_n). \]
Now $f(x_1, \dots, x_m, x_n)$ looks like some sum
+\[ \sum \lambda_{a_1 \dots b} x_1^{a_1} \dots x_m^{a_m} x_n^{b} , \quad
+\lambda_{a_1 \dots b} \in k. \]
+But $N$ is really really big. Let us expand this expression in the $x_i'$ and
+pay attention to the largest power of $x_n$ we see.
+We find that
\[ f(\left\{x_i' - x_n^{N^i}\right\},x_n)
\]
+has the largest power of $x_n$ precisely where, in the expression for $f$,
+$a_m$ is maximized first, then
+$a_{m-1}, $ and so on. The largest exponent would have the form
+\[ x_n^{a_m N^m + a_{m-1}N^{m-1} + \dots + b}. \]
We can't, however, get any exponents of $x_n$ in the expression
\( f(\left\{x_i' - x_n^{N^i}\right\},x_n)\) other than these. If $N$ is
sufficiently large, then all these exponents will be different from each other.
In particular, the distinct monomials of $f$ contribute distinct powers of
$x_n$, so the leading term cannot cancel. Dividing by its coefficient, which
is a nonzero scalar, gives a monic equation for $x_n$ over
$k[x_1', \dots, x_m']$. We see in particular that $x_n$ is integral over
$x_1', \dots, x_m'$. Thus each $x_i = x_i' - x_n^{N^i}$ is as well.
+
+So we find
+\begin{quote}
+$R$ is integral over $k[x_1', \dots, x_m']$.
+\end{quote}
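Here is the change of variables carried out in a small case.
\begin{example}
Take $n = 2$, $m = 1$, and $f(x_1, x_2) = x_1 x_2 - 1 \in \mathfrak{p}$. The
relation $x_1 x_2 - 1 = 0$ is not monic in $x_2$. Setting $x_1' = x_1 + x_2^2$
(i.e. taking $N = 2$), the relation becomes
\[ (x_1' - x_2^2) x_2 - 1 = 0, \quad \text{i.e.} \quad
x_2^3 - x_1' x_2 + 1 = 0, \]
which is monic in $x_2$ over $k[x_1']$. So $x_2$, and hence
$x_1 = x_1' - x_2^2$, is integral over $k[x_1']$.
\end{example}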
+
+We have thus proved the normalization lemma in the codimension one case. What
+about the general case? We repeat this.
+Say we have
+\[ k[x_1, \dots, x_m] \subset R. \]
+Let $R'$ be the subring of $R$ generated by $x_{1}, \dots,x_m, x_{m+1}$. The
+argument we just gave implies that we can choose $x_1', \dots, x_m'$ such that
+$R'$ is integral over $k[x_1', \dots, x_m']$, and the $x_i'$ are
+algebraically independent.
+We know in fact that $R' = k[x_1', \dots, x_m', x_{m+1}]$.
+
+Let us try repeating the argument while thinking about $x_{m+2}$. Let $R'' =
+k[x_1', \dots, x_m', x_{m+2}]$ modulo whatever relations that $x_{m+2}$ has to
+satisfy. So this is a subring of $R$. The same argument shows that we can
+change variables such that $x_1'', \dots, x_m''$ are algebraically independent
+and $R''$ is integral over $k[x_1'', \dots, x_m'']$. We have furthermore that
+$k[x_1'', \dots, x_m'', x_{m+2}] = R''$.
+
+Having done this, let us give the argument where $m=n-2$. You will then see
+how to do the general case. Then I claim that:
+\begin{quote}
+$R$ is integral over $k[x_1'', \dots, x_m'']$.
+\end{quote}
For this, we need to check that $x_{m+1}, x_{m+2}$ are integral (because these
together with the $x''_i$ generate $R''[x_{m+1}]=R$).
+But $x_{m+2}$ is integral over this by construction. The integral closure of
+$k[x_1'', \dots, x_{m}'']$ in $R$ thus contains
+\[ k[x_1'', \dots, x''_m, x_{m+2}] = R''. \]
However, $R''$ contains the elements $x_1', \dots, x_m'$ (each differs from
the corresponding $x_i''$ by a power of $x_{m+2}$). But by construction,
$x_{m+1}$ is integral over the $x_1', \dots, x_m'$; by transitivity of
integrality, the integral closure of
$k[x_1'', \dots, x_{m}'']$ must contain $x_{m+1}$. This completes the proof in
+the case $m=n-2$. The general case is similar; we just make several changes of
+variables, successively.
+
+\end{proof}
+\subsection{Back to the Nullstellensatz}
+
+Consider a finitely generated $k$-algebra $R$ which is a field. We need to
+show that $R$ is a
+finite $k$-module. This will prove the proposition.
+Well, note that $R$ is integral over a polynomial ring $k[x_1, \dots, x_m]$ for
+some $m$.
+If $m > 0$, then this polynomial ring has more than one prime.
+For instance, $(0)$ and $(x_1, \dots, x_m)$. But these must lift to primes in
+$R$. Indeed, we have seen that whenever you have an integral extension, the
+induced map on spectra is surjective. So
+\[ \spec R \to \spec k[x_1, \dots, x_m] \]
+is surjective. If $R$ is a field, this means $\spec k[x_1, \dots, x_m]$ has one
+point and $m=0$. So $R$ is integral over $k$, thus algebraic. This implies that
+$R$ is finite as it is finitely generated. This proves one version of the
+Nullstellensatz.
+
+
+ Another version of the Nullstellensatz, which is
+more precise, says:
+
+\begin{theorem} \label{gennullstellensatz}
Let $I \subset \mathbb{C}[x_1, \dots, x_n]$ be an ideal. Let $V \subset \mathbb{C}^n$ be
+the subset of $\mathbb{C}^n$ defined by the ideal $I$ (i.e. the zero locus of
+$I$).
+
+Then $\rad(I)$ is precisely the collection of $f$ such that $f|_V = 0$. In
+particular,
+\[ \rad(I) = \bigcap_{\mathfrak{m} \supset I, \mathfrak{m} \
+\mathrm{maximal}} \mathfrak{m}. \]
+\end{theorem}
+
+In particular, there is a bijection between radical ideals and algebraic
+subsets of $\mathbb{C}^n$.
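A one-variable example already shows why the radical must appear.
\begin{example}
Take $I = (x^2) \subset \mathbb{C}[x]$. The zero locus is
$V = \left\{0\right\}$, and a polynomial vanishes on $V$ exactly when its
constant term is zero, i.e. when it lies in $(x)$. Indeed,
$\rad((x^2)) = (x)$, which is also the unique maximal ideal containing $I$.
\end{example}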
+
+The last form of the theorem, which follows from the expression of maximal
+ideals in the polynomial ring, is very similar to the result
+\[ \rad(I) = \bigcap_{\mathfrak{p} \supset I, \mathfrak{p} \
+\mathrm{prime}} \mathfrak{p}, \]
true in any commutative ring. However, the analogous formula with maximal
ideals in place of primes can fail in a general ring.
+
+\begin{example}
+The intersection of all primes in a DVR is zero, but the intersection of all
+maximals is nonzero.
+\end{example}
+
+\begin{proof}[Proof of \cref{gennullstellensatz}]
+It now suffices to show that for every $\mathfrak{p} \subset \mathbb{C}[x_1,
+\dots, x_n]$ prime, we have
\[ \mathfrak{p} = \bigcap_{\mathfrak{m} \supset \mathfrak{p} \ \mathrm{maximal}}
\mathfrak{m} \]
+since every radical ideal is an intersection of primes.
+
+Let $R = \mathbb{C}[x_1, \dots,
+x_n]/\mathfrak{p}$. This is a domain finitely generated over $\mathbb{C}$. We
+want to show that the intersection of maximal ideals in $R$ is zero. This is
+equivalent to the above displayed equality.
+
+So fix $f \in R - \left\{0\right\}$. Let $R'$ be the localization $R'= R_f$. Then $R'$ is also an
+integral domain, finitely generated over $\mathbb{C}$. $R'$ has a maximal
+ideal $\mathfrak{m}$ (which a priori could be zero). If
+we look at the map $R' \to R'/\mathfrak{m}$, we get a map into a field
+finitely generated
+over $\mathbb{C}$, which is thus $\mathbb{C}$.
+The composite map
+\[ R \to R' \to R'/\mathfrak{m} \]
is just given by an $n$-tuple of complex numbers, i.e. by a point in
$\mathbb{C}^n$, which lies in the zero locus of $\mathfrak{p}$ since the map
factors through $R$. This corresponds
to a maximal ideal in $R$.
+This maximal ideal does not contain $f$ by construction.
+\end{proof}
+
+\begin{exercise}
+Prove the following result, known as ``Zariski's lemma'' (which easily implies
+the Nullstellensatz): if $k$ is a field, $k'$ a field extension of $k$ which
+is a finitely generated $k$-\emph{algebra}, then $k'$ is finite algebraic over
+$k$. Use the following argument of McCabe (in \cite{Mc76}):
+\begin{enumerate}
+\item $k'$ contains a subring $S$ of the form $S= k[ x_1, \dots, x_t]$ where
+the $x_1, \dots, x_t$ are algebraically independent over $k$, and $k'$ is
+algebraic over the quotient field of $S$ (which is a polynomial ring).
\item If $k'$ is not algebraic over $k$, then $S \neq k$, and $S$ is not a field.
+\item Show that there is $y \in S$ such that $k'$ is integral over $S_y$.
+Deduce that $S_y$ is a field.
\item Since $\spec (S_y) = \left\{0\right\}$, argue that $y$ lies in every
non-zero prime ideal of $S$. Conclude that $1+y \in k$, and $S$ is a
+field---contradiction.
+\end{enumerate}
+\end{exercise}
+
+\subsection{A little affine algebraic geometry}
+
+In what follows, let $k$ be algebraically closed, and let $A$ be a finitely generated $k$-algebra. Recall that $\Specm A$ denotes the set of maximal ideals in $A$. Consider the natural $k$-algebra structure on $\mathrm{Funct}(\Specm A, k)$. We have a map
+$$A \rightarrow \mathrm{Funct}(\Specm A, k)$$
which comes from the Weak Nullstellensatz as follows. Maximal ideals $\mathfrak{m}\subset A$ are in bijection with maps $\varphi_\mathfrak{m}:A\rightarrow k$ where $\ker(\varphi_\mathfrak{m})=\mathfrak{m}$, so we define $a\longmapsto [\mathfrak{m}\longmapsto \varphi_\mathfrak{m}(a)]$. If $A$ is reduced, then this map is injective: if $a\in A$ maps to the zero function, then $a \in \bigcap \mathfrak{m}$, so $a$ is nilpotent, and hence $a=0$.
+
+\begin{definition} A function $f\in \mathrm{Funct}(\Specm A,k)$ is called {\bf algebraic} if it is in the image of $A$ under the above map. (Alternate words for this are {\bf polynomial} and {\bf regular}.) \end{definition}
+
+Let $A$ and $B$ be finitely generated $k$-algebras and $\phi:A\rightarrow B$ a homomorphism. This yields a map $\Phi:\Specm B\rightarrow \Specm A$ given by taking pre-images.
+
+\begin{definition} A map $\Phi:\Specm B\rightarrow \Specm A$ is called {\bf algebraic} if it comes from a homomorphism $\phi$ as above.\end{definition}
+
+To demonstrate how these definitions relate to one another we have the following proposition.
+
+\begin{proposition} A map $\Phi:\Specm B\rightarrow \Specm A$ is algebraic if and only if for any algebraic function $f\in \mathrm{Funct}(\Specm A,k)$, the pullback $f\circ \Phi\in \mathrm{Funct}(\Specm B,k)$ is algebraic.\end{proposition}
+
+\begin{proof}
+Suppose that $\Phi$ is algebraic. It suffices to check that the following diagram is commutative:
+\[
+\xymatrix{
+\mathrm{Funct}(\Specm A,k) \ar[r]^{-\circ\Phi} & \mathrm{Funct}(\Specm B,k) \\
+A \ar[u] \ar[r]_{\phi} & B \ar[u]\\
+}
+\]
+where $\phi:A\rightarrow B$ is the map that gives rise to $\Phi$.
+
+[$\Leftarrow$] Suppose that for all algebraic functions $f\in \mathrm{Funct}(\Specm A,k)$, the pull-back $f\circ\Phi$ is algebraic. Then we have an induced map, obtained by chasing the diagram counter-clockwise:
+$$
+\xymatrix{
+\mathrm{Funct}(\Specm A,k) \ar[r]^{-\circ\Phi} & \mathrm{Funct}(\Specm B,k) \\
+A \ar[u] \ar@{-->}[r]_{\phi} & B \ar[u]\\
+}
+$$
+From $\phi$, we can construct the map $\Phi':\Specm B \rightarrow \Specm A$ given by $\Phi'(\mathfrak{m})=\phi^{-1}(\mathfrak{m})$. I claim that $\Phi=\Phi'$. If not, then for some $\mathfrak{m}\in \Specm B$ we have $\Phi(\mathfrak{m})\neq \Phi'(\mathfrak{m})$. By definition, for all algebraic functions $f\in \mathrm{Funct}(\Specm A,k)$, $f\circ\Phi=f\circ\Phi'$ so to arrive at a contradiction we show the following lemma:\\
+Given any two distinct points in $\Specm A=V(I)\subset k^n$, there exists some
+algebraic $f$ that separates them. This is trivial when we realize that any
+polynomial function is algebraic, and such polynomials separate points.
+\end{proof}
+
+\section{Serre's criterion and its variants}
+
+We are going to now prove a useful criterion for a noetherian ring to be a
+product of
normal domains, due to Serre: it states that a (noetherian) ring is normal if
and only if its localizations at height one primes are discrete valuation
rings (this corresponds to the ring being \emph{regular} in codimension one,
+though we have not defined regularity yet) and a more technical condition that
+we will later interpret in terms of \emph{depth.} One advantage of this
+criterion is that it does \emph{not} require the ring to be a product of
+domains a priori.
+
+\subsection{Reducedness}
+
There is a ``baby'' version of Serre's criterion for testing whether a ring is
reduced, which we start with.
+
+Recall:
+
+\begin{definition}
+A ring $R$ is \textbf{reduced} if it has no nonzero nilpotents.
+\end{definition}
+
+\begin{proposition} \label{reducedcrit}
+If $R$ is noetherian, then $R$ is reduced if and only if it satisfies the
+following conditions:
+\begin{enumerate}
+\item Every associated prime of $R$ is minimal (no embedded primes).
+\item If $\mathfrak{p}$ is minimal, then $R_{\mathfrak{p}}$ is a field.
+\end{enumerate}
+\end{proposition}
+\begin{proof}
+First, assume $R$ reduced. What can we say? Say $\mathfrak{p}$ is a minimal
+prime; then $R_{\mathfrak{p}}$ has precisely one prime ideal (namely,
+$\mathfrak{m}=\mathfrak{p}R_{\mathfrak{p}}$). It is in fact a local artinian
+ring, though we
+don't need that fact. The radical of $R_{\mathfrak{p}}$ is just $\mathfrak{m}$.
+But $R$ was reduced, so $R_{\mathfrak{p}}$ was reduced; it's an easy argument
+that localization preserves reducedness. So $\mathfrak{m}=0$. The fact that 0
+is a maximal ideal in $R_{\mathfrak{p}}$ says that it is a field.
+
+On the other hand, we still have to do part 1. $R$ is reduced, so $\rad(R) =
+\bigcap_{\mathfrak{p} \in \spec R} \mathfrak{p} = 0$. In particular,
+\[ \bigcap_{\mathfrak{p} \ \mathrm{minimal}}\mathfrak{p} = 0. \]
+The map
+\[ R \to \prod_{\mathfrak{p} \ \mathrm{minimal}}R/\mathfrak{p} \]
+is injective. The associated primes of the product, however, are just the
+minimal primes. So $\ass(R)$ can contain only minimal primes.
+
+That's one direction of the proposition. Let us prove the converse now. Assume
+$R$ satisfies the two conditions listed. In other words, $\ass(R)$ consists of
+minimal primes, and each $R_{\mathfrak{p}}$ for $\mathfrak{p} \in \ass(R)$ is a
+field. We would like to show that $R$ is reduced.
+Primary decomposition tells us that there is an injection
\[ R \hookrightarrow \prod_{\mathfrak{p}_i \ \mathrm{minimal}} M_i, \quad M_i
\ \mathfrak{p}_i\text{-primary}. \]
+In this case, each $M_i$ is primary with respect to a minimal prime. We have a
+map
+\[ R \hookrightarrow \prod M_i \to \prod (M_i)_{\mathfrak{p}_i}, \]
+which is injective, because when you localize a primary module at its
+associated prime, you don't kill anything by definition of primariness. Since
+we can draw a diagram
+\[
+\xymatrix{
+R \ar[r] \ar[d] & \prod M_i \ar[d] \\
+\prod R_{\mathfrak{p}_i} \ar[r] & \prod (M_i)_{\mathfrak{p}_i}
+}
+\]
and the map $R \to \prod (M_i)_{\mathfrak{p}_i}$ is injective, the downward
arrow on the left is injective. Thus $R$ can be embedded in
a product of the fields $\prod R_{\mathfrak{p}_i}$, so is reduced.
+\end{proof}
+
+This proof actually shows:
\begin{proposition}[Scholium] A noetherian ring $R$ is reduced iff it injects
+into a product of fields. We can take the fields to be the localizations at the
+minimal primes.
+\end{proposition}
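Let us test the criterion on two small examples.
\begin{example}
Let $R = k[x,y]/(xy)$, the coordinate ring of the union of the two axes. The
minimal primes are $(x)$ and $(y)$, and these are exactly the associated
primes. Localizing at $(x)$ inverts $y$, which kills $x$ (as $xy = 0$), so
$R_{(x)} \simeq k(y)$ is a field; similarly $R_{(y)} \simeq k(x)$. Both
conditions hold, and indeed $R$ is reduced: it embeds in $k(y) \times k(x)$.
By contrast, $k[x]/(x^2)$ has the unique minimal prime $(x)$, but the
localization there is the ring itself, which is not a field; and indeed
$k[x]/(x^2)$ is not reduced.
\end{example}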
+
+\begin{example}
+Let $R = k[X]$ be the coordinate ring of a variety $X$ in
+$\mathbb{C}^n$. Assume $X$ is
reduced. Then $\mathrm{MaxSpec}\, R$ is a union of irreducible components
$X_i$, which
correspond to the minimal primes of $R$. The fields you get by localizing
at minimal primes depend only on the irreducible components, and in fact are
the fields of meromorphic functions on the $X_i$.
+Indeed, we have a map
+\[ k[X] \to \prod k[X_i] \to \prod k(X_i). \]
+
If we don't assume that $R$ is reduced, this is \textbf{not} true.
+\end{example}
+
+There is a stronger condition than being reduced we could impose. We could say:
+
+\begin{proposition}
+If $R$ is a noetherian ring, then $R$ is a domain iff
+\begin{enumerate}
+\item $R$ is reduced.
+\item $R$ has a unique minimal prime.
+\end{enumerate}
+\end{proposition}
+\begin{proof}
+One direction is obvious. A domain is reduced and $(0)$ is the minimal prime.
+
+The other direction is proved as follows. Assume 1 and 2. Let $\mathfrak{p}$ be
+the unique minimal prime of $R$. Then $\rad (R) = 0 = \mathfrak{p}$ as every
+prime ideal contains $\mathfrak{p}$. As $(0)$ is a prime ideal, $R$ is
+a domain.
+\end{proof}
+
+We close by making some remarks about this embedding of $R$ into a product of
+fields.
+
+\begin{definition}
+Let $R$ be any ring, not necessarily a domain. Let $K(R)$ be the localized ring
+$S^{-1}R$ where $S$ is the multiplicatively closed set of nonzerodivisors in
+$R$. $K(R)$ is called the \textbf{total ring of fractions} of $R$.
+
When $R$ is a domain, this is the quotient field.
+\end{definition}
+
+First, to get a feeling for this, we show:
+
+\begin{proposition} Let $R$ be noetherian. The set of nonzerodivisors $S$
+can be described by
+$S = R- \bigcup_{\mathfrak{p} \in \ass(R)} \mathfrak{p}$.
+\end{proposition}
+\begin{proof}
+If $x \in\mathfrak{p} \in \ass(R)$, then $x$ must kill something in $R$ as it
+is in an associated prime. So $x$ is a zerodivisor.
+
+Conversely, suppose $x$ is a zerodivisor, say $xy = 0$ for some $y \in R -
+\left\{0\right\}$. In
+particular, $x \in \ann(y)$. We have an injection $R/\ann(y) \hookrightarrow R$
+sending 1 to $y$. But $R/\ann(y)$ is nonzero, so it has an associated prime
+$\mathfrak{p}$ of $R/\ann(y)$, which contains $\ann(y)$ and thus $x$. But
+$\ass(R/\ann(y)) \subset \ass(R)$.
+So $x$ is contained in a prime in $\ass(R)$.
+\end{proof}
+
+Assume now that $R$ is reduced. Then $K(R) = S^{-1}R$ where $S$ is the
+complement of the union of the minimal primes.
+At least, we can claim:
+
+\begin{proposition} Let $R$ be reduced and noetherian. Then
+$K(R) = \prod_{\mathfrak{p}_i \ \mathrm{minimal}} R_{\mathfrak{p}_i}$.
+\end{proposition}
+
+So $K(R)$ is the product of fields into which $R$ embeds.
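For instance, we can compute the total ring of fractions of the union of the
two axes.
\begin{example}
For $R = k[x,y]/(xy)$, the minimal primes are $(x)$ and $(y)$, so
\[ K(R) \simeq R_{(x)} \times R_{(y)} \simeq k(y) \times k(x). \]
Concretely, the nonzerodivisors are the elements lying in neither $(x)$ nor
$(y)$, and inverting them recovers the rational functions on each axis
separately.
\end{example}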
+
+We now continue the discussion begun last time. Let $R$ be noetherian and $M$ a
+finitely generated $R$-module. We would like to understand very rough features
+of $M$.
+We can embed $M$ into a larger $R$-module.
+Here are two possible approaches.
+
+\begin{enumerate}
\item $S^{-1}M$, where $S$ is a large multiplicatively closed subset of $R$.
+Let us take $S $ to be the set of all $a \in R$ such that $M
+\stackrel{a}{\to}M$ is injective, i.e. $a$ is not a zerodivisor on $M$. Then
+the map
+\[ M \to S^{-1}M \]
is an injection. Note that $S$ is the complement of the union of the primes in $\ass(M)$.
+\item Another approach would be to use a \emph{primary decomposition}
+\[ M \hookrightarrow \prod M_i, \]
+where each $M_i$ is $\mathfrak{p}_i$-primary for some prime $\mathfrak{p}_i$
+(and these primes range over $\ass(M)$). In this case, it is clear that
+anything not in each $\mathfrak{p}_i$ acts injectively. So we can draw a
+commutative diagram
+\[
+\xymatrix{
+M \ar[d] \ar[r] & \prod M_i \ar[d] \\
+\prod M_{\mathfrak{p}_i} \ar[r] & \prod (M_i)_{\mathfrak{p}_i}
+}.
+\]
+The map going right and down is injective.
+It follows that $M$ injects into the product of its localizations at associated
+primes.
+\end{enumerate}
+
+The claim is that these constructions agree if $M$ has no embedded primes.
+I.e., if there are no nontrivial containments among the associated primes of
+$M$, then $S^{-1}M$ (for $S = R - \bigcup_{\mathfrak{p} \in \ass(M)}
+\mathfrak{p}$)
+is just $\prod M_{\mathfrak{p}}$.
+To see this, note that any element of $S$ must act invertibly on $\prod
+M_{\mathfrak{p}}$. We thus see that there is always a map
+\[ S^{-1}M \to \prod_{\mathfrak{p} \in \ass(M)} M_{\mathfrak{p}} . \]
+\begin{proposition}
+This is an isomorphism if $M$ has no embedded primes.
+\end{proposition}
+
+\begin{proof}
+Let us go through a series of reductions. Let $I = \ann(M) = \left\{a: aM
+= 0\right\}$. Without loss of generality, we can replace $R $ by $R/I$. This plays nice with the
+associated primes.
+
+The assumption is now that $\ass(M)$ consists of the minimal
+primes of $R$.
+
+Without loss of generality, we can next replace $R$ by $S^{-1}R$ and $M$ by
+$S^{-1}M$, because that doesn't affect the conclusion; localization plays nice
+with associated primes.
+
+Now, however, $R$ is artinian: i.e., all primes of $R$ are minimal (or
+maximal). Why is this?
+Let $R$ be \emph{any} noetherian ring and $S = R - \bigcup_{\mathfrak{p} \
+\mathrm{minimal}} \mathfrak{p}$. Then I claim that $S^{-1}R$ is artinian. We'll
+prove this in a moment.
+
+So $R$ is artinian, hence a product $\prod R_i$ where each $R_i$ is local
+artinian. Without loss of generality, we can replace $R$ by $R_i$ by taking
products. The condition we are trying to prove is now that
\[ S^{-1}M \to M_{\mathfrak{m}} \]
is an isomorphism, for $\mathfrak{m} \subset R$ the maximal ideal. But $S$ is the complement of
+the union of the minimal primes, so it is $R - \mathfrak{m}$ as $R$ has one
+minimal (and maximal) ideal. This is obviously an isomorphism: indeed, both
+are $M$.
+\end{proof}
+
\add{proof of artinianness}
+\begin{corollary}
+Let $R$ be a noetherian ring with no embedded primes (i.e. $\ass(R)$ consists
+of minimal primes).
Then $K(R) = \prod_{\mathfrak{p}_i \ \mathrm{minimal}} R_{\mathfrak{p}_i}$.
+\end{corollary}
+If $R$ is reduced, we get the statement made last time: there are no
+embedded primes, and $K(R)$ is a product of
+fields.
+
+\subsection{The image of $M \to S^{-1}M$}
+Let's ask now the following question. Let $R$ be a noetherian ring, $M$
+a finitely generated
+$R$-module, and $S$ the set of nonzerodivisors on $M$, i.e. $R -
+\bigcup_{\mathfrak{p} \in \ass(M)} \mathfrak{p}$. We have seen that there is an
+imbedding
+\[ \phi: M \hookrightarrow S^{-1}M. \]
What is the image? Given $x \in S^{-1}M$, when does it lie in the image of the
imbedding above?
+
+To answer such a question, it suffices to check locally. In particular:
+\begin{proposition}
+$x$ belongs to the image of $M $ in $S^{-1}M$ iff for every $\mathfrak{p} \in
+\spec R$, the image of $x$ in $(S^{-1}M)_{\mathfrak{p}}$ lies inside
+$M_{\mathfrak{p}}$.
+\end{proposition}
+
+This isn't all that interesting. However, it turns out that you can check this
+at a smaller set of primes.
+
+\begin{proposition}
+In fact, it suffices to show that $x$ is in the image of $\phi_{\mathfrak{p}}$
+for every $\mathfrak{p} \in \ass(M/sM)$ where $s \in S$.
+\end{proposition}
+This is a little opaque; soon we'll see what it actually means.
+The proof is very simple.
+
+\begin{proof}
+Remember that $ x \in S^{-1}M$. In particular, we can write $x = y/s$ where $y
+\in M, s \in S$. What we'd like to prove that $x \in M$, or equivalently that
+$y \in sM$.\footnote{In general, this would be equivalent to $ty \in tsM$ for
+some $t \in S$; but $S$ consists of nonzerodivisors on $M$.}
+In particular, we want to know that $y$ maps to zero in $M/sM$. If not, there
+exists an associated prime $\mathfrak{p} \in \ass(M/sM)$ such that $y$ does not
+get
+killed in $(M/sM)_{\mathfrak{p}}$.
We have assumed, however, that for every associated prime $\mathfrak{p}\in
\ass(M/sM)$,
$x \in ( S^{-1}M)_{\mathfrak{p}}$ lies in the image of $M_{\mathfrak{p}}$. This
states that the image of $y$ in the quotient $(M/sM)_{\mathfrak{p}}$ is zero,
i.e. that $y$ is divisible by $s$ in this localization, a contradiction.
+\end{proof}
+
+The case we actually care about is the following:
+
+Take $R$ as a noetherian domain and $M = R$. Then $S = R - \left\{0\right\}$
+and $S^{-1}M $ is just the fraction field $K(R)$. The goal is to describe $R$
+as a subset of $K(R)$. What we have proven is that $R$ is the intersection in
+the fraction field
+\[ \boxed{ R = \bigcap_{\mathfrak{p} \in \ass(R/s), s \in R - 0}
+R_{\mathfrak{p}} . }\]
+So to check that something belongs to $R$, we just have to check that in a
+\emph{certain set of localizations}.
+
+Let us state this as a result:
\begin{theorem} \label{notreallykrullthm}
If $R$ is a noetherian domain, then
\[ R = \bigcap_{\mathfrak{p} \in \ass(R/s), s \in R - 0}
R_{\mathfrak{p}}. \]
\end{theorem}
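For $R = \mathbb{Z}$ this recovers a familiar fact.
\begin{example}
If $s = n \in \mathbb{Z}$ is nonzero, the associated primes of $\mathbb{Z}/n$
are the primes $(p)$ with $p \mid n$. So the intersection runs over all
nonzero primes, and the theorem reads
\[ \mathbb{Z} = \bigcap_{p \ \mathrm{prime}} \mathbb{Z}_{(p)}; \]
that is, a rational number is an integer if and only if no prime divides its
denominator in lowest terms.
\end{example}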
+
+\subsection{Serre's criterion}
+
+We can now state a result.
+\begin{theorem}[Serre]
+\label{serrecrit1}
+Let $R$ be a noetherian domain. Then $R $ is integrally
+closed iff it satisfies
+\begin{enumerate}
+\item For any $\mathfrak{p} \subset R$ of height one, $R_{\mathfrak{p}}$ is a
+DVR.
+\item For any $s \neq 0$, $R/s$ has no embedded primes (i.e. all the
+associated primes of $R/s$ are height one).
+\end{enumerate}
+\end{theorem}
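To see what the conditions rule out, consider the standard non-normal example.
\begin{example}
Let $R = k[t^2, t^3]$, a noetherian domain which is not integrally closed:
$t$ is integral over $R$ (it satisfies $X^2 - t^2 = 0$) but $t \notin R$.
Accordingly, condition 1 fails. The prime $\mathfrak{p} = (t^2, t^3)$ has
height one, but $R_{\mathfrak{p}}$ is not a DVR: its maximal ideal requires
the two generators $t^2, t^3$, since these are linearly independent in
$\mathfrak{p}/\mathfrak{p}^2$.
\end{example}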
+
+Here is the non-preliminary version of the Krull theorem.
+\begin{theorem}[Algebraic Hartogs]
Let $R$ be a noetherian integrally closed domain. Then
+\[ R = \bigcap_{\mathfrak{p} \ \mathrm{height \ one}} R_{\mathfrak{p}}, \]
+where each $R_{\mathfrak{p}}$ is a DVR.
+\end{theorem}
+
+\begin{proof}
+Now evident from the earlier result \cref{notreallykrullthm} and Serre's criterion.
+\end{proof}
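+
+Concretely, the theorem can be seen at work in a polynomial ring.
+\begin{example}
+Take $R = \mathbb{C}[x,y]$, a UFD and hence integrally closed. The height one
+primes are the ideals $(f)$ with $f$ irreducible, and each $R_{(f)}$ is a
+DVR: its valuation measures the order of vanishing along the curve $f = 0$.
+The theorem says that a rational function with no pole along any curve is a
+polynomial. For instance, $(x^2 - y^2)/(x - y)$ has no poles, and indeed it
+equals the polynomial $x + y$.
+\end{example}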
+Earlier in the class, we proved that a domain was integrally closed if and only
+if it could be described as an intersection of valuation rings. We have now
+shown that when $R$ is noetherian, we can take \emph{discrete} valuation rings.
+
+\begin{remark}
+In algebraic geometry, say $R = \mathbb{C}[x_1, \dots, x_n]/I$. Its maximal
+spectrum is a subset of $\mathbb{C}^n$. If $I$ is prime, so that $R$ is a domain,
+this variety is
+irreducible. We are trying to describe $R$ inside its field of fractions.
+
+The field of fractions is like the ``meromorphic functions''; $R$ is like the
+holomorphic functions. Geometrically, this says that to check that a meromorphic
+function is holomorphic, it suffices to compute the order of the pole
+along each codimension one subvariety. If the function does not blow up on any
+of the codimension one subvarieties, and $R$ is normal, then it extends
+globally.
+
+This is an algebraic version of Hartogs' theorem, which states that a
+holomorphic function on $\mathbb{C}^2 - \left\{(0,0)\right\}$ extends over the origin, because
+the origin has codimension $>1$.
+
+All the obstructions of extending a function to all of $\spec R$ are in
+codimension one.
+\end{remark}
+
+Now, we prove Serre's criterion.
+\begin{proof}
+Let us first prove that $R$ is integrally closed if conditions 1 and 2 hold. We know that
+\[ R = \bigcap_{\mathfrak{p} \in \ass(R/x), x \neq 0} R_{\mathfrak{p}} ; \]
+by condition 2, each such $\mathfrak{p}$ is of height one, and by condition 1,
+$R_{\mathfrak{p}}$ is a DVR. So $R$ is the intersection of DVRs and thus
+integrally closed.
+
+The hard part is going in the other direction. Assume $R$ is integrally closed.
+We want to prove the two conditions. In $R$, consider the following conditions
+on a prime ideal $\mathfrak{p}$:
+\begin{enumerate}
+\item $\mathfrak{p}$ is an associated prime of $R/x$ for some $x \neq 0$.
+\item $\mathfrak{p} $ is height one.
+\item $\mathfrak{p}_{\mathfrak{p}}$ is principal in $R_{\mathfrak{p}}$.
+\end{enumerate}
+First, 3 implies 2 implies 1. Indeed, 3 implies that $\mathfrak{p}$ contains an element
+$x$ which
+generates $\mathfrak{p}$ after localizing at $\mathfrak{p}$.
+It follows that there can be no prime strictly between $(x)$ and $\mathfrak{p}$, because
+such a prime would be preserved under localization. Next, 2 implies 1: if
+$\mathfrak{p}$ has height one, it is minimal over $(x)$ for any nonzero
+$x \in \mathfrak{p}$, and then $\mathfrak{p} \in \ass R/(x)$ since
+the minimal primes in the support are always associated.
+
+We are trying to prove the reverse implications; once this is done, the claims
+of the theorem will be proved. It suffices to show that 1 implies 3.
+This is an argument we really saw last time, but let's see it again. Say
+$\mathfrak{p} \in \ass(R/x)$. We can replace $R$ by $R_{\mathfrak{p}}$ so that
+we can assume that $\mathfrak{p}$ is maximal. We want to show that
+$\mathfrak{p}$ is generated by one
+element.
+
+What does the condition $\mathfrak{p} \in \ass(R/x)$ buy us? It tells us that
+there is $\overline{y} \in R/x$ such that $\ann(\overline{y}) = \mathfrak{p}$.
+In particular, there is $y \in R$ such that $\mathfrak{p}y \subset (x)$ and $y
+\notin (x)$.
+We have the element $y/x \in K(R)$, and multiplication by it sends $\mathfrak{p}$ into $R$. That
+is,
+\[ (y/x) \mathfrak{p} \subset R. \]
+There are two cases to consider, as in last time:
+\begin{enumerate}
+\item $(y/x) \mathfrak{p} = R$. Then $\mathfrak{p} = R (x/y)$; note that
+$x/y \in \mathfrak{p} \subset R$, so $\mathfrak{p}$ is principal.
+\item $(y/x) \mathfrak{p} \neq R$. In particular, $(y/x)\mathfrak{p} \subset
+\mathfrak{p}$. Then since $\mathfrak{p}$ is finitely generated, we find that
+$y/x $ is
+integral over $R$, hence in $R$. This is a contradiction as $y \notin (x)$.
+\end{enumerate}
+Only the first case is now possible. So $\mathfrak{p}$ is in fact principal.
+\end{proof}
+
+
diff --git a/books/cring/noetherian.tex b/books/cring/noetherian.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b192d5f862dde8cbbc89c5edce1c06411560356c
--- /dev/null
+++ b/books/cring/noetherian.tex
@@ -0,0 +1,1508 @@
+\chapter{Noetherian rings and modules}
+\label{noetherian}
+
+The finiteness condition of a noetherian ring is necessary for much of
+commutative algebra; many of the results we prove after this will apply only (or mostly) to the
+noetherian case. In algebraic geometry, the noetherian condition guarantees
+that the topological space associated to the ring (the $\spec $) has all its
+subsets quasi-compact; this condition can be phrased as saying that the space
+itself is noetherian in a certain sense.
+
+We shall start by proving the basic properties of noetherian rings. These are
+fairly standard and straightforward; they could have been placed after
+\rref{foundations}, in fact. More subtle is the structure theory for
+finitely generated modules over a noetherian ring. While there is nothing as
+concrete as there is for PIDs (there, one has a very explicit description for
+the isomorphism classes), one can still construct a so-called ``primary
+decomposition.'' This will be the primary focus after the basic properties of
+noetherian rings and modules have been established. Finally, we finish with an
+important subclass of noetherian rings, the \emph{artinian} ones.
+
+
+\section{Basics}
+
+\subsection{The noetherian condition}
+
+
+\begin{definition}
+Let $R$ be a commutative ring and $M$ an $R$-module. We say that $M$ is
+\textbf{noetherian} if every submodule of $M$ is finitely generated.
+\end{definition}
+
+
+There is a convenient
+reformulation of the finiteness hypothesis above in terms of the
+\emph{ascending chain condition}.
+
+\begin{proposition} Let $M$ be a module over the ring $R$.
+The following are equivalent:
+\begin{enumerate}
+\item $M$ is noetherian.
+\item Every ascending chain of submodules $M_0 \subset M_1 \subset \dots \subset M$
+eventually stabilizes at some $M_N$. (Ascending chain condition.)
+\item Every nonempty collection of submodules of $M$ has a maximal element.
+\end{enumerate}
+\end{proposition}
+\begin{proof}
+Say $M$ is noetherian and we have such a chain
+\[ M_0 \subset M_1 \subset \dots. \]
+Write
+\[ M' = \bigcup M_i \subset M, \]
+which is finitely generated since $M$ is noetherian. Let it be generated by
+$x_1, \dots,x_n$. Each of these finitely many elements is in the union, so
+they are all contained in some $M_N$. This means that
+\[ M' \subset M_N, \quad \mathrm{so} \quad M_N = M' \]
+and the chain stabilizes.
+
+For the converse, assume the ACC. Let $M' \subset M$ be any submodule. Define
+a chain of submodules $M_0 \subset M_1 \subset \dots \subset M'$ inductively as follows. First, just take
+$M_0 = \left\{0\right\}$. Take $M_{n+1}$ to be $M_n + Rx$ for some $x \in
+M' - M_n$, if such an $x$ exists; if not take $M_{n+1}=M_n$.
+So $M_0$ is zero,
+$M_1$ is generated by some nonzero element of $M'$, $M_2$ is $M_1$ together
+with some element of $M'$ not in $M_1$, and so on, until (if ever) the chain
+stabilizes.
+
+However, by construction, we have an ascending
+chain, so it stabilizes at some finite stage $N$ by the ascending chain condition.
+Thus, at that point, it is
+impossible to choose an element of $M'$ that does not belong to $M_N$; that
+is, $M' = M_N$. In
+particular, $M'$ is generated by $N$ elements, since $M_N$ is.
+This proves the reverse implication. Thus the equivalence of 1 and 2 is clear.
+The equivalence of 2 and 3 is left to the reader.
+The equivalence of 2 and 3 is left to the reader.
+\end{proof}
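+
+Here is a simple example where the ascending chain condition fails.
+\begin{example}
+The $\mathbb{Z}$-module $\mathbb{Q}$ is not noetherian: the ascending chain
+of submodules
+\[ \mathbb{Z} \subset \tfrac{1}{2}\mathbb{Z} \subset \tfrac{1}{4}\mathbb{Z}
+\subset \tfrac{1}{8}\mathbb{Z} \subset \dots \]
+never stabilizes. In fact, $\mathbb{Q}$ is not even finitely generated over
+$\mathbb{Z}$: finitely many fractions admit a common denominator $d$, so they
+generate a submodule of $\frac{1}{d}\mathbb{Z} \neq \mathbb{Q}$.
+\end{example}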
+
+
+Working with noetherian modules over non-noetherian rings can be a little
+funny, though, so normally this definition is combined with:
+
+
+\begin{definition}
+The ring $R$ is \textbf{noetherian} if $R$ is noetherian as an $R$-module.
+Equivalently phrased, $R$ is noetherian if all of its ideals are finitely generated.
+\end{definition}
+
+We start with the basic examples:
+
+\begin{example}
+\begin{enumerate}
+\item Any field is noetherian: its only ideals are $(1)$ and $(0)$.
+\item Any PID is noetherian: any ideal is generated by one element. So
+$\mathbb{Z}$ is noetherian.
+\end{enumerate}
+\end{example}
+
+The first basic result we want to prove is that over a noetherian ring, the
+noetherian modules are precisely the finitely generated ones. This will
+follow from \rref{exactnoetherian} in the next subsection. So the defining
+property of noetherian rings is that a submodule of a finitely generated
+module is finitely generated. (Compare
+\rref{noetherianiffg}.)
+
+\begin{exercise}
+The ring $\mathbb{C}[X_1, X_2, \dots]$ of polynomials in infinitely many
+variables is not noetherian. Note that the ring itself is finitely generated
+(by the element $1$), but there are ideals that are not finitely generated.
+\end{exercise}
+
+\begin{remark}
+Let $R$ be a ring such that every \emph{prime} ideal is finitely generated.
+Then $R$ is noetherian. See \rref{primenoetherian}, or prove it as
+an exercise. \end{remark}
+
+\subsection{Stability properties}
+
+The class of noetherian rings is fairly robust. If one starts with a
+noetherian ring, most of the elementary operations one can do to it lead to
+noetherian rings.
+
+\begin{proposition} \label{exactnoetherian}
+If
+\[ 0 \to M' \to M \to M'' \to 0 \]
+is an exact sequence of modules, then $M$ is noetherian if and only if $M',
+M''$ are.
+\end{proposition}
+
+One direction states that noetherianness is preserved under subobjects and
+quotients. The other direction states that noetherianness is preserved under
+extensions.
+\begin{proof}
+If $M$ is noetherian, then every submodule of $M'$ is a submodule of $M$, so is
+finitely generated. So $M'$ is noetherian too. Now we show that $M''$ is
+noetherian. Let $N \subset M''$ be a submodule and let
+$\widetilde{N} \subset M$ be its inverse image. Then $\widetilde{N}$ is finitely generated, so
+$N$---as the homomorphic image of $\widetilde{N}$---is finitely generated.
+So $M''$ is noetherian.
+
+Suppose now that $M'$ and $M''$ are noetherian; we prove that $M$ is.
+We verify the ascending chain condition. Consider
+\[ M_1 \subset M_2 \subset \dots \subset M. \]
+Let $M_i''$ denote the image of $M_i$ in $M''$ and let $M'_i$ be the
+intersection of $M_i$ with $M'$. Here we think of $M'$ as a submodule of $M$.
+These are ascending chains of submodules of $M', M''$, respectively, so they
+stabilize by noetherianness.
+So for some $N$, we have
+that $n \geq N$ implies
+\[ M'_n = M'_{n+1}, \quad M''_n = M''_{n+1}. \]
+
+We claim that this implies, for such $n$,
+\[ M_n = M_{n+1}. \]
+Indeed, say $x \in M_{n+1} \subset M$. Then $x$ maps into something in $M''_{n+1} = M''_n$.
+So there is something in $M_n$, call it $y$, such that $x,y$ go to the same
+thing in $M''$. In particular,
+\[ x - y \in M_{n+1} \]
+goes to zero in $M''$, so $x-y \in M'$. Thus $x-y \in M'_{n+1} = M'_n$. In
+particular,
+\[ x = (x-y) + y \in M'_n + M_n = M_n. \]
+So $x \in M_n$, and
+\[ M_n = M_{n+1} . \]
+This proves the result.
+\end{proof}
+
+The class of noetherian modules is thus ``robust.'' From this we can deduce
+the following.
+
+\begin{proposition}
+If $\phi: A \to B$ is a surjection of commutative rings and $A$ is noetherian, then $B$ is
+noetherian too.
+\end{proposition}
+\begin{proof}
+Indeed, $B$ is noetherian as an $A$-module, being the quotient of a
+noetherian $A$-module (namely, $A$). However, it is easy to see that the
+$A$-submodules of $B$ are just the ideals of $B$, so $B$ is noetherian as a
+$B$-module too. So $B$ is noetherian.
+\end{proof}
+
+We now show that noetherianness of a ring is preserved by localization:
+\begin{proposition}
+Let $R$ be a commutative ring, $S \subset R$ a multiplicatively closed subset. If
+$R$ is noetherian, then $S^{-1}R$ is noetherian.
+\end{proposition}
+I.e., the class of noetherian rings is closed under localization.
+\begin{proof}
+Say $\phi: R \to S^{-1}R$ is the canonical map. Let $I \subset S^{-1}R$ be an
+ideal. Then $\phi^{-1}(I) \subset R$ is an ideal, so finitely generated. It
+follows that
+\[ \phi^{-1}(I)( S^{-1}R )\subset S^{-1}R \]
+is finitely generated as an ideal in $S^{-1}R$; the generators are the images
+of the generators of $\phi^{-1}(I)$.
+
+Now we claim that
+\[ \phi^{-1}(I)( S^{-1}R ) = I . \]
+The inclusion $\subset$ is trivial. For the reverse inclusion, if $x/s \in I$,
+then $x/1 = s(x/s) \in I$, so $x \in \phi^{-1}(I)$ and
+\[ x/s = (1/s)( x/1) \in (S^{-1}R) \phi^{-1}(I). \] This proves the claim and
+implies that $I$ is finitely generated.
+\end{proof}
+
+Let $R$ be a noetherian ring. We now characterize the noetherian $R$-modules.
+\begin{proposition} \label{noetherianiffg}
+An $R$-module $M$ is noetherian if and only if $M$ is finitely generated.
+\end{proposition}
+\begin{proof}
+The ``only if'' direction is obvious: $M$ is a submodule of itself, and every
+submodule of a noetherian module is finitely generated.
+
+For the if direction, if $M$ is finitely generated, then there is a surjection
+of $R$-modules
+\[ R^n \to M \]
+where $R$ is noetherian. But $R^n$ is noetherian by
+\rref{exactnoetherian} because it is a direct sum
+of copies of $R$. So $M$ is a quotient of a noetherian module and is noetherian.
+
+\end{proof}
+\subsection{The basis theorem}
+Let us now prove something a little less formal. This is, in fact, the biggest
+of the ``stability'' properties of a noetherian ring: we are going to see that finitely generated
+algebras over noetherian rings are still noetherian.
+
+\begin{theorem}[Hilbert basis theorem]\label{hilbbasis}
+If $R$ is a noetherian ring, then the polynomial ring $R[X]$ is noetherian.
+\end{theorem}
+\begin{proof}
+Let $I \subset R[X]$ be an ideal. We prove that it is finitely generated. For
+each $n \in \mathbb{Z}_{\geq 0}$, let $I(n)$ be the collection of elements
+$a \in R$ that occur as the coefficient of $X^n$ in some element of $I$ of degree
+$\leq n$.
+This is an ideal, as is easily seen.
+
+In fact, we claim that
+\[ I(0) \subset I(1) \subset I(2) \subset \dots, \]
+which follows because if $a \in I(n)$, there is an element $aX^n + \dots$ in $I$ of degree $\leq n$.
+Thus $X(aX^n + \dots) = aX^{n+1} + \dots \in I$ has degree $\leq n+1$, so $a \in I(n+1)$.
+
+Since $R$ is noetherian, this chain stabilizes at some $I(N). $
+Also, because $R$ is noetherian, each $I(n)$ is generated by finitely many
+elements $a_{n,1}, \dots, a_{n, m_n} \in I(n)$. All of these come from polynomials
+$P_{n,i} \in I$ such that $P_{n,i} = a_{n,i} X^n + \dots$.
+
+The claim is that the $P_{n,i}$ for $n \leq N$ and $i \leq m_n$ generate $I$.
+This is a finite set of polynomials, so if we prove the claim, we will have
+proved the basis theorem. Let $J$ be the ideal generated by
+$\left\{P_{n,i}, n \leq N, i \leq m_n \right\}$. We know $J \subset I$. We must
+prove $I \subset J$.
+
+We will show that any element $P(X) \in I$ of degree $n$ belongs to $J$ by
+induction on $n$. Here the degree of $P$ is the largest $n$ for which the coefficient of $X^n$ is nonzero. In particular,
+the zero polynomial does not have a degree, but the zero polynomial is
+obviously in $J$.
+
+There are two cases. In the first case, $n \geq N$. Then we write
+\[ P(X) = a X^n + \dots . \] By definition, $a \in I(n) = I(N)$ since the
+chain of ideals $I(n)$ stabilized. Thus we can write $a$ in terms of the
+generators: $a = \sum a_{N, i} \lambda_i$ for some
+$\lambda_i \in R$. Define the polynomial
+\[ Q = \sum \lambda_i P_{N, i} X^{n-N} \in J. \] Then $Q$ has degree $n$ and
+its leading coefficient is just $a$. In particular,
+\[ P - Q \]
+is in $I$ and has degree less than $n$. By the inductive hypothesis, this
+belongs to $J$, and since $Q \in J$, it follows that $P \in J$.
+
+Now consider the case of $n < N$.
+Again, we write $P(X) = a X^n + \dots$. Then $a \in I(n)$. We can write
+\[ a = \sum a_{n,i} \lambda_i, \quad \lambda_i \in R. \]
+But the $a_{n,i} \in I(n)$. The polynomial
+\[ Q = \sum \lambda_i P_{n,i} \]
+belongs to $J$ since $n < N$. In the same way, $P-Q \in I$ has degree less than $n$.
+Induction as before implies that $P \in J$.
+\end{proof}
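+
+It may help to see the ideals $I(n)$ in a small case.
+\begin{example}
+Let $R = \mathbb{Z}$ and $I = (2, X) \subset \mathbb{Z}[X]$. The constants in
+$I$ are the even integers, so $I(0) = 2\mathbb{Z}$; since $X \in I$ has
+degree $1$ and leading coefficient $1$, we get $I(1) = \mathbb{Z}$. The chain
+thus stabilizes at $N = 1$, and the proof produces the generators
+$P_{0,1} = 2$ and $P_{1,1} = X$, which indeed generate $I$.
+\end{example}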
+
+
+\begin{example}
+Let $k$ be a field. Then $k[x_1, \dots, x_n]$ is noetherian for any $n$, by the
+Hilbert basis theorem and induction on $n$.
+\end{example}
+
+
+\begin{corollary} \label{hilbbasiscor}
+If $R$ is a noetherian ring and $R'$ a finitely generated $R$-algebra, then
+$R'$ is noetherian too.
+\end{corollary}
+\begin{proof}
+Each polynomial ring $R[X_1, \dots, X_n]$ is noetherian by \cref{hilbbasis} and an easy
+induction on $n$. Consequently, any quotient of a polynomial ring (i.e. any
+finitely generated $R$-algebra, such as $R'$) is noetherian.
+\end{proof}
+
+\begin{example}
+Any finitely generated commutative ring $R$ is noetherian. Indeed, there
+is a surjection
+\[ \mathbb{Z}[x_1, \dots, x_n] \twoheadrightarrow R \]
+where the $x_i$ get mapped onto generators of $R$. The former ring is noetherian by
+the basis theorem, and $R$, as a quotient, is noetherian.
+\end{example}
+
+
+\begin{corollary}
+Any ring $R$ can be obtained as a filtered direct limit of noetherian rings.
+\end{corollary}
+\begin{proof}
+Indeed, $R$ is the filtered direct limit of its finitely generated subrings.
+\end{proof}
+
+This observation is sometimes useful in commutative algebra and algebraic
+geometry, in order to reduce questions about arbitrary commutative rings to
+noetherian rings. Noetherian rings have strong finiteness hypotheses that let
+you get numerical invariants that may be useful. For instance, we can do things
+like inducting on the dimension for noetherian local rings.
+
+\begin{example}
+Take $R = \mathbb{C}[x_1, \dots, x_n]$. For any algebraic variety $V$ defined
+by polynomial equations, we know that $V$ is the vanishing locus of some ideal
+$I \subset R$. Using the Hilbert basis theorem, we have shown that $I$ is
+finitely generated. This implies that $V$ can be described by \emph{finitely}
+many polynomial equations.
+\end{example}
+
+\subsection{Noetherian induction}
+
+The finiteness condition on a noetherian ring allows for ``induction''
+arguments to be made; we shall see examples of this in the future.
+\begin{proposition}[Noetherian Induction Principle]
+ Let $R$ be a noetherian ring, let $\mathcal{P}$ be a property, and let $\mathcal{F}$ be a family of
+ ideals of $R$. Suppose the following inductive step holds: if all ideals in $\mathcal{F}$ strictly larger than
+ $I\in \mathcal{F}$ satisfy $\mathcal{P}$, then $I$ satisfies $\mathcal{P}$. Then all ideals in
+ $\mathcal{F}$ satisfy $\mathcal{P}$.
+ \end{proposition}
+ \begin{proof}
+ Assume $\mathcal{F}_\text{crim} = \{J\in \mathcal{F}\mid J\text{ does not satisfy }\mathcal{P}\}\neq \varnothing$.
+ Since $R$ is noetherian, $\mathcal{F}_\text{crim}$ has a maximal member $I$. By maximality, all
+ ideals in $\mathcal{F}$ strictly containing $I$ satisfy $\mathcal{P}$, so $I$ also does by the inductive
+ step, contradicting $I \in \mathcal{F}_\text{crim}$.
+ \end{proof}
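+
+Here is a classical application of the principle.
+\begin{example}
+In a noetherian ring $R$, every proper ideal contains a finite product of
+prime ideals. Apply the principle with $\mathcal{F}$ the family of proper
+ideals. For the inductive step, suppose every proper ideal strictly larger
+than $I$ contains a product of primes. If $I$ is prime, we are done.
+Otherwise pick $a, b \notin I$ with $ab \in I$; then
+$(I + (a))(I + (b)) \subset I$, and both factors strictly contain $I$. They
+are also proper: if, say, $I + (a) = R$, then
+$I + (b) = (I+(a))(I+(b)) \subset I$, a contradiction. By the inductive
+hypothesis each factor contains a product of primes, hence so does $I$.
+\end{example}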
+
+
+\section{Associated primes}
+
+We shall now begin the structure theory for noetherian modules. The first step
+will be to associate to each module a collection of primes, called the
+\emph{associated primes}, which lie in a bigger collection of primes (the
+\emph{support}) where the
+localizations are nonzero.
+
+\subsection{The support}
+ Let $R$ be a noetherian ring. An $R$-module $M$ should be thought
+ of as something like a vector bundle, somehow
+spread out over the topological space $\spec R$. If $\mathfrak{p} \in \spec R$, then let
+$\k(\mathfrak{p})$ be the fraction field of $R/\mathfrak{p}$,
+which is the residue field of $R_{\mathfrak{p}}$. If $M$ is any $R$-module, we
+can consider $M \otimes_R \k(\mathfrak{p})$ for each $\mathfrak{p}$; it is a
+vector space over $\k(\mathfrak{p})$. If $M$ is finitely generated, then $M \otimes_R
+\k(\mathfrak{p})$ is a finite-dimensional vector space.
+
+\begin{definition}
+Let $M$ be a finitely generated $R$-module. Then $\supp M$, the
+\textbf{support} of $M$, is defined to be the set of primes
+$\mathfrak{p} \in \spec R$ such that
+\( M \otimes_R \k(\mathfrak{p}) \neq 0. \)
+\end{definition}
+
+One is supposed to think of a module $M$ as something like a vector bundle
+over the topological space
+$\spec R$. At each $\mathfrak{p} \in \spec R$, we associate the vector space $M
+\otimes_R \k(\mathfrak{p})$; this is the ``fiber.'' Of course, the intuition
+of $M$'s being a vector bundle is somewhat limited, since the fibers
+do not generally have the same dimension.
+Nonetheless, we can talk about the support, i.e.\ the set of points where the
+``fiber'' is nonzero.
+
+Note that $\mathfrak{p} \in \supp M$ if and only if $M_{\mathfrak{p}} \neq 0$. This is
+because
+\[ (M \otimes_R R_{\mathfrak{p}})/( \mathfrak{p} R_{\mathfrak{p}} (M \otimes_R
+R_{\mathfrak{p}})) = M_{\mathfrak{p}}
+\otimes_{R_{\mathfrak{p}}} \k(\mathfrak{p}) \]
+and we can use Nakayama's lemma over the local ring $R_{\mathfrak{p}}$. (We
+are using the fact that $M$ is finitely generated.)
+
+A vector bundle whose support is empty is zero. Thus the following easy
+proposition is intuitive:
+
+\begin{proposition}
+$M = 0$ if and only if $\supp M = \emptyset$.
+\end{proposition}
+\begin{proof}
+Indeed, $M=0$ if and only if $M_{\mathfrak{p}} = 0$ for all primes
+$\mathfrak{p} \in \spec R$. This is equivalent to $\supp M = \emptyset$.
+\end{proof}
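+
+The simplest examples of supports already occur over $\mathbb{Z}$.
+\begin{example}
+Let $R = \mathbb{Z}$ and $M = \mathbb{Z}/12$. Then $M_{(p)} \neq 0$ exactly
+when $p \mid 12$, so
+\[ \supp M = \left\{(2), (3)\right\}. \]
+The ``fibers'' are $M \otimes_{\mathbb{Z}} \mathbb{F}_2 \simeq \mathbb{F}_2$
+and $M \otimes_{\mathbb{Z}} \mathbb{F}_3 \simeq \mathbb{F}_3$, while
+$M \otimes_{\mathbb{Z}} \mathbb{Q} = 0$; the fibers indeed jump in dimension
+over the support.
+\end{example}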
+
+\begin{exercise}
+Let $0 \to M' \to M \to M'' \to 0$ be exact. Then
+\[ \supp M = \supp M' \cup \supp M''. \]
+\end{exercise}
+
+
+We will see soon that $\supp M$ is closed in $\spec R$. One imagines that
+$M$ lives on this closed subset $\supp M$, in some sense.
+
+
+
+\subsection{Associated primes}
+Throughout this section, $R$ is a noetherian ring. The \emph{associated
+primes} of a module $M$ will refer to primes that arise as the annihilators of
+elements in $M$. As we shall see, the support of a module is determined by
+the associated primes. Namely, the associated primes contain the ``generic
+points'' (that is, the minimal primes) of the support. In some cases, however,
+they may contain more.
+
+\add{We are currently using the notation $\ann(x)$ for the annihilator of $x
+\in M$. This has not been defined before. Should we add this in a previous
+chapter?}
+
+\begin{definition}
+Let $M$ be a finitely generated $R$-module. The prime ideal $\mathfrak{p}$ is said to be
+\textbf{associated} to $M$ if there exists an element $x \in M$ such that
+$\mathfrak{p}$ is the annihilator of $x$. The set of associated primes is
+$\ass(M)$.
+\end{definition}
+
+Note that the annihilator of an element $x \in M$ is not necessarily prime;
+when it happens to be prime, it is by definition an
+associated prime.
+
+\begin{exercise}
+Show that $\mathfrak{p} \in \ass(M)$ if and only if there is an injection
+$R/\mathfrak{p} \hookrightarrow M$.
+\end{exercise}
+
+\begin{exercise}
+Let $\mathfrak{p} \in \spec R$. Then $\ass(R/\mathfrak{p}) =
+\left\{\mathfrak{p}\right\}$.
+\end{exercise}
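+
+For instance:
+\begin{example}
+Let $R = \mathbb{Z}$ and $M = \mathbb{Z}/12$. The annihilator of
+$\overline{1}$ is $(12)$, which is not prime; but
+$\ann(\overline{6}) = (2)$ and $\ann(\overline{4}) = (3)$ are. One checks
+that these are the only primes arising as annihilators of elements, so
+\( \ass(\mathbb{Z}/12) = \left\{(2), (3)\right\} \).
+\end{example}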
+
+\begin{example}
+Take $R=k[x,y,z]$, where $k$ is an integral domain, and let $I = (x^2-yz,x(z-1))$. Any
+ prime associated to $R/I$ must contain $I$, so let us consider
+ $\mathfrak{p}=(x^2-yz,z-1)=(x^2-y,z-1)$, which is $I:x$ (and hence the
+ annihilator of $\overline{x} \in R/I$). It is prime because $R/\mathfrak{p} \cong k[x]$,
+ which is a domain. The inclusion $\mathfrak{p}\subset (I:x)$ holds because
+ $x(x^2-yz)$ and $x(z-1)$ both lie in $I$. To see that $(I:x)\subset \mathfrak{p}$, assume $tx\in I\subset \mathfrak{p}$; since
+ $x\not\in \mathfrak{p}$, $t\in \mathfrak{p}$, as desired.
+
+ There are two more associated primes, but we will not find them here.
+ \end{example}
+
+
+We shall start by proving that $\ass(M) \neq \emptyset$ for nonzero modules.
+\begin{proposition} \label{assmnonempty}
+If $M \neq 0$, then $M$ has an associated prime.
+\end{proposition}
+\begin{proof} Consider the collection of ideals in $R$ that arise as the
+annihilator of a nonzero element of $M$; this collection is nonempty because $M \neq 0$.
+Let $I \subset R$ be a maximal element of this collection, which exists by the
+noetherianness of
+$R$.
+Then $I = \ann(x)$ for some nonzero $x \in M$, so $1 \notin I$ because the annihilator of a nonzero element is not the full
+ring.
+
+I claim that
+$I$ is prime, and hence $I \in \ass(M)$.
+Indeed, suppose $ab \in I$ where $a,b \in R$. This means that
+\[ (ab)x = 0. \]
+Consider the annihilator $\ann(bx)$ of $bx$. This contains $\ann(x) = I$;
+it also contains $a$, since $a(bx) = (ab)x = 0$.
+
+There are two cases. If $bx = 0$, then $ b \in I$ and we are done. Suppose to
+the contrary $bx \neq 0$. In this case, $\ann(bx)$ contains $(a) + I$, which
+ contains $I$. By maximality, it must happen that $\ann(bx) = I$ and $ a \in
+ I$.
+
+ In either case, we find that one of $a,b $ belongs to $I$, so that $I$ is
+ prime.
+
+\end{proof}
+
+\begin{example}[A module with no associated prime]
+Without the noetherian hypothesis, \rref{assmnonempty} is
+\emph{false}. Consider $R = \mathbb{C}[x_1, x_2, \dots]$, the polynomial ring
+over $\mathbb{C}$ in infinitely many variables, and the ideal $I = (x_1,
+x_2^2, x_3^3, \dots) \subset R$.
+The claim is that
+\[ \ass(R/I ) = \emptyset. \]
+To see this, suppose a prime $\mathfrak{p}$ were the annihilator of some nonzero
+$\overline{f}\in R/I$. Then $\overline{f}$ lifts to $f \in R$; it follows
+that $\mathfrak{p}$ is precisely the set of $g \in R$ such that $fg \in I$.
+Now $f$ involves only finitely many of the variables $x_i$, say $x_1, \dots,
+x_n$. It is then clear that $x_{n+1}^{n+1} f \in I$ (so $x_{n+1}^{n+1} \in
+\mathfrak{p}$), but $x_{n+1} f \notin I$ (so $x_{n+1} \notin \mathfrak{p}$).
+It follows that $\mathfrak{p}$ is not prime, a contradiction.
+\end{example}
+
+We shall now show that the associated primes are finite in number.
+
+\begin{proposition} \label{finiteassm}
+If $M$ is finitely generated, then $\ass(M)$ is finite.
+\end{proposition}
+
+The idea is going to be to use the fact that $M$ is finitely generated to build
+$M$ out of finitely many pieces, and use that to bound the number of associated
+primes to each piece. For this, we need:
+
+\begin{lemma} \label{assexact}
+Suppose we have an exact sequence of finitely generated $R$-modules
+\[ 0 \to M' \to M \to M'' \to 0. \]
+Then
+\[\ass(M') \subset \ass(M) \subset \ass(M') \cup \ass(M'') \]
+\end{lemma}
+\begin{proof}
+The first claim is obvious. If $\mathfrak{p}$ is the annihilator of an element
+$x \in M'$, it is also the annihilator of the image of
+$x$ in $M$, because
+$M' \to M$ is injective. So $\ass(M') \subset \ass(M)$.
+
+The harder direction is the other inclusion. Suppose $\mathfrak{p} \in \ass(M)$.
+Then there is $x \in M$ such that
+$\mathfrak{p} = \ann(x).$
+Consider the submodule $Rx \subset M$. If $Rx \cap M' \neq 0$, then we can
+choose $y \in Rx \cap M' - \left\{0\right\}$. I claim that $\ann(y) =
+\mathfrak{p}$ and so $\mathfrak{p} \in \ass(M')$.
+To see this, $ y = ax$ for some $a \in R$. The annihilator of $y$ is the set of elements
+$b \in R$ such that
+\[ abx = 0 \]
+or, equivalently, the set of $b \in R$ such that $ab \in \mathfrak{p} =
+\ann(x)$. But $y = ax \neq 0$, so $a \notin \mathfrak{p}$. As a
+result, the condition $b \in \ann(y)$ is the same as $b \in \mathfrak{p}$. In
+other words,
+\[ \ann(y) = \mathfrak{p} \]
+which proves the claim.
+
+Suppose now that $Rx \cap M' = 0$. Let $\phi: M \twoheadrightarrow M''$
+be the surjection. I claim that $\mathfrak{p} = \ann(\phi(x))$ and
+consequently that
+$\mathfrak{p} \in \ass(M'')$. The proof is as follows. Clearly $\mathfrak{p}$
+annihilates $\phi(x)$ as it annihilates $x$. Suppose $a \in \ann(\phi(x))$.
+This means that $\phi(ax) = 0$, so $ax \in \ker \phi=M'$; but $\ker \phi \cap Rx =
+0$. So $ax = 0$ and consequently $a \in \mathfrak{p}$. It follows $\ann(\phi(x)) = \mathfrak{p}$.
+\end{proof}
+
+The next step in the proof of \rref{finiteassm} is that any
+finitely generated module
+admits a filtration each of whose successive quotients is of a particularly nice form.
+This result is quite useful and will be referred to in the future.
+
+\begin{proposition}[D{\'e}vissage] \label{filtrationlemma} \label{devissage}
+For any finitely generated $R$-module $M$, there exists a finite filtration
+\[ 0 = M_0 \subset M_1 \subset \dots \subset M_k = M \]
+such that the successive quotients $M_{i+1}/M_i$ are isomorphic to various
+$R/\mathfrak{p}_i$ with the $\mathfrak{p}_i \subset R$ prime.
+\end{proposition}
+\begin{proof}
+Let $M' \subset M$ be maximal among submodules for which such a filtration
+(ending with $M'$)
+exists. We would like to show that $M' = M$. Such a maximal $M'$ exists since
+$0$ has such a filtration and $M$ is
+noetherian.
+
+There is a filtration
+\[ 0 = M_0 \subset M_1 \subset \dots \subset M_l = M' \subset M \]
+where the successive quotients, \emph{except} possibly the last $M/M'$, are of
+the form $R/\mathfrak{p}_i $ for $\mathfrak{p}_i \in \spec R$.
+If $M' = M$, we are done. Otherwise, consider
+the quotient $M/M' \neq 0$. There is an associated prime of $M/M'$. So there is
+a prime $\mathfrak{p}$ which is the annihilator of $x \in M/M'$. This means
+that there is an injection
+\[ R/\mathfrak{p} \hookrightarrow M/M'. \]
+Now, take $M_{l+1}$ as the inverse image in $M$
+of $R/\mathfrak{p} \subset M/M'$.
+Then, we can consider the finite filtration
+\[ 0 = M_0 \subset M_1 \subset \dots \subset M_{l+1} , \]
+all of whose successive quotients are of the form $R/\mathfrak{p}_i$; this is
+because $M_{l+1}/M_l = M_{l+1}/M'$ is of this form by construction.
+We have thus extended this filtration one
+step further, a contradiction since
+$M'$ was assumed to be maximal.
+\end{proof}
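+
+A small example of such a filtration:
+\begin{example}
+For $M = \mathbb{Z}/4$ over $\mathbb{Z}$, the filtration
+\[ 0 \subset 2\mathbb{Z}/4\mathbb{Z} \subset \mathbb{Z}/4\mathbb{Z} \]
+has both successive quotients isomorphic to $\mathbb{Z}/(2)$. Note that the
+primes $\mathfrak{p}_i$ may repeat, and the filtration need not split:
+$\mathbb{Z}/4 \not\simeq \mathbb{Z}/2 \oplus \mathbb{Z}/2$.
+\end{example}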
+
+Now we are in a position to meet the goal, and prove that $\ass(M)$ is
+always a finite set.
+\begin{proof}[Proof of \rref{finiteassm}]
+Suppose $M$ is finitely generated. Take a filtration as in \rref{devissage}:
+\[ 0 = M_0 \subset M_1 \subset \dots \subset M_k = M. \]
+By induction, we show that $\ass(M_i)$ is finite for each $i$. It is obviously
+true for $i=0$. Assume now that $\ass(M_i)$ is finite; we prove the same for
+$\ass(M_{i+1})$. We have an exact sequence
+\[ 0 \to M_i \to M_{i+1} \to R/\mathfrak{p}_i \to 0 \]
+which implies that, by \rref{assexact},
+\[ \ass(M_{i+1}) \subset \ass(M_i) \cup \ass(R/\mathfrak{p}_i) = \ass(M_i)
+\cup \left\{\mathfrak{p}_i\right\} , \]
+so $\ass(M_{i+1})$ is also finite.
+By induction, it is now clear that $\ass(M_i)$ is finite for every $i$.
+
+This proves the proposition; it also shows that the number of
+associated primes is at most the length of the filtration.
+\end{proof}
+
+
+Finally, we characterize the zerodivisors on $M$ in terms of the associated
+primes. The last statement of the result will be useful in the future.
+It implies, for instance, that if $R$ is local and $\mathfrak{m}$ the maximal
+ideal, then if every element of $\mathfrak{m}$ is a zerodivisor on a finitely
+generated module
+$M$, then $\mathfrak{m} \in \ass(M)$.
+
+\begin{proposition} \label{assmdichotomy}
+If $M$ is a finitely generated module over a noetherian ring $R$, then the
+zerodivisors on $M$ are the union $\bigcup_{\mathfrak{p} \in \ass(M)}
+\mathfrak{p}$.
+
+More strongly, if $I \subset R$ is any ideal consisting of zerodivisors on
+$M$, then $I$ is contained in an associated prime.
+\end{proposition}
+\begin{proof}
+Any associated prime is the annihilator of some nonzero element of $M$, so it consists
+of zerodivisors. Conversely, if $a \in R$ annihilates a nonzero $x \in M$, then $a$
+belongs to every associated prime of the nonzero module $Rx \subset M$: if
+$\mathfrak{p} = \ann(bx)$, then $a(bx) = b(ax) = 0$, so $a \in \mathfrak{p}$. (There
+is at least one associated prime by \cref{assmnonempty}.)
+
+For the last statement, we use prime avoidance (\cref{primeavoidance}): if $I$ consists of
+zerodivisors, then $I$ is contained in the union $\bigcup_{\mathfrak{p} \in \ass(M)}
+\mathfrak{p}$ by the first part of the proof. This is a finite union by
+\cref{finiteassm}, so prime avoidance implies $I$ is contained in one of these
+primes.
+\end{proof}
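+
+For instance:
+\begin{example}
+Let $R = \mathbb{Z}$ and $M = \mathbb{Z}/12$, so that
+$\ass(M) = \left\{(2),(3)\right\}$. The zerodivisors on $M$ are exactly the
+integers divisible by $2$ or by $3$, i.e.\ the union $(2) \cup (3)$. Note
+that this union is not an ideal: $2$ and $3$ are zerodivisors on $M$, but
+$5 = 2 + 3$ is a unit modulo $12$ and hence acts invertibly on $M$.
+\end{example}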
+
+
+\begin{exercise}
+For every module $M$ over any (not necessarily noetherian) ring $R$,
+the set of $M$-zerodivisors $\mathcal{Z}(M)$ is a union of prime ideals. In general, there is an easy
+ characterization of sets $Z$ which are a union of primes: it is exactly when
+ $R\smallsetminus Z$ is a \emph{saturated multiplicative set}. This is Kaplansky's
+ Theorem 2.
+ \begin{definition}
+ A multiplicative set $S\neq \varnothing$ is a \emph{saturated multiplicative set} if
+ for all $a,b\in R$, $a,b\in S$ if and only if $ab\in S$. (``multiplicative set'' just
+ means the ``if'' part)
+ \end{definition}
+ To see that $\mathcal{Z}(M)$ is a union of primes, just verify that its complement is a saturated
+ multiplicative set.
+\end{exercise}
+
+\subsection{Localization and $\ass(M)$}
+
+It turns out to be extremely convenient that the construction $M \to \ass(M)$
+behaves about as nicely with respect to localization as we could possibly
+want. This lets us, in fact, reduce arguments to the case of a local ring,
+which is a significant simplification.
+
+So, as usual, let $R $ be noetherian, and $M$ a finitely generated $R$-module.
+Let further $S \subset R$ be a multiplicative subset.
+Then $S^{-1}M$ is a finitely generated module over the noetherian ring
+$S^{-1}R$. So it makes sense to consider both $\ass(M) \subset \spec R$ and
+$\ass(S^{-1}M) \subset \spec S^{-1}R$. But we also know that $\spec S^{-1}R
+\subset \spec R$ is just the set of primes of $R$ that do not intersect $S$.
+Thus, we can directly compare $\ass(M)$ and $\ass(S^{-1}M)$, and one might
+conjecture (correctly, as it happens) that $\ass(S^{-1}M) = \ass(M) \cap \spec
+S^{-1}R$.
+\begin{proposition} \label{assmlocalization}
+Let $R$ be noetherian, $M$ finitely generated, and $S \subset R$ multiplicatively closed.
+Then
+\[ \ass(S^{-1}M) = \left\{S^{-1}\mathfrak{p}: \mathfrak{p} \in \ass(M),
+\mathfrak{p}\cap S = \emptyset \right\} . \]
+\end{proposition}
+\begin{proof}
+We first prove the easy direction, namely that $\ass(S^{-1}M)$
+\emph{contains} $S^{-1}\mathfrak{p}$ for every $\mathfrak{p} \in \ass(M)$
+disjoint from $S$.
+
+Suppose $\mathfrak{p} \in \ass(M)$ and
+$\mathfrak{p} \cap S = \emptyset$. Then $\mathfrak{p} = \ann(x)$ for some $x
+\in M$. Then the annihilator of $x/1 \in S^{-1}M$ is just $S^{-1}\mathfrak{p}$, as one
+can directly check. Thus $S^{-1}\mathfrak{p} \in \ass(S^{-1}M)$.
+So we get the easy inclusion.
+
+Let us now do the harder inclusion.
+Write $\phi$ for the localization map $R \to S^{-1}R$.
+Let $\mathfrak{q} \in \ass(S^{-1}M)$. By definition, this means that $\mathfrak{q} =
+\ann(x/s)$ for some $x \in M$, $s \in S$. We want to see that
+$\phi^{-1}(\mathfrak{q}) \in \ass(M) \subset \spec R$.
+By definition $\phi^{-1}(\mathfrak{q})$ is the set of elements $a \in R$ such that
+\[ \frac{ax}{s} = 0 \in S^{-1}M . \]
+In other words, by definition of the localization, this is
+\[ \phi^{-1}(\mathfrak{q}) = \bigcup_{t \in S} \left\{a \in R: atx = 0 \in M\right\} = \bigcup \ann(tx)
+\subset R.\]
+We know, however, that among elements of the form $\ann(tx)$, there is a
+\emph{maximal} element $I=\ann(t_0 x)$ for some $t_0 \in S$, since $R$ is
+noetherian. The claim is that $I = \phi^{-1}(\mathfrak{q})$, so
+$\phi^{-1}(\mathfrak{q}) \in \ass(M)$.
+
+Indeed, any other annihilator $I' = \ann(tx)$ (for $t \in S$) must be contained in $\ann(t_0 t x)$. However,
+\( I \subset \ann(t_0 t x) \)
+and $I$ is maximal, so $I = \ann(t_0 t x)$ and
+\( I' \subset I. \) In other words, $I$ contains all the other annihilators
+$\ann(tx)$ for $t \in S$.
+In particular, the big union above, i.e. $\phi^{-1}(\mathfrak{q})$, is just
+\( I = \ann(t_0 x). \)
+Since $\phi^{-1}(\mathfrak{q})$ is prime (being the preimage of a prime), it
+follows that $\phi^{-1}(\mathfrak{q}) = \ann(t_0x)$ is in $\ass(M)$.
+This means that every associated prime
+of $S^{-1}M$ comes from an associated prime of $M$, which completes the proof.
+\end{proof}
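+
+Here is a concrete instance of the proposition (a routine check):
+
+\begin{example}
+Take $R = \mathbb{Z}$, $M = \mathbb{Z}/6 \simeq \mathbb{Z}/2 \oplus
+\mathbb{Z}/3$, and $S = \mathbb{Z} - (2)$, so $S^{-1}R = \mathbb{Z}_{(2)}$.
+Then $\ass(M) = \left\{(2), (3)\right\}$. Localizing makes $3$ invertible,
+which kills the $\mathbb{Z}/3$ factor, so $S^{-1}M \simeq \mathbb{Z}/2$ and
+$\ass(S^{-1}M) = \left\{S^{-1}(2)\right\}$. This is what
+\rref{assmlocalization} predicts: $(2)$ survives since $(2) \cap S =
+\emptyset$, while $(3)$ is lost since $3 \in S$.
+\end{example}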
+
+
+
+\begin{exercise}
+Show that if $M$ is a finitely generated module over a noetherian ring, then
+the map
+\[ M \to \bigoplus_{\mathfrak{p} \in \ass(M)} M_{\mathfrak{p}} \]
+is injective. Is this true if $M$ is not finitely generated?
+\end{exercise}
+
+\subsection{Associated primes determine the support}
+The next claim is that the support and the associated primes are related.
+
+\begin{proposition}\label{supportassociated} The support is the closure of the associated primes:
+\[ \supp M = \bigcup_{\mathfrak{q} \in \ass(M)}
+\overline{\left\{\mathfrak{q}\right\}}. \]
+\end{proposition}
+
+By definition of the Zariski topology, this means that a prime $\mathfrak{p}
+\in \spec R$ belongs to $\supp M$ if and only if it contains an associated
+prime.
+
+\begin{proof}
+First, we show that $\supp(M)$ contains the set of primes
+$\mathfrak{p} \in \spec R$ containing an associated prime; this will imply
+that $\supp(M) \supset \bigcup_{\mathfrak{q} \in \ass(M)}
+\overline{\left\{\mathfrak{q}\right\}}$. So let $\mathfrak{q}$ be an
+associated prime and $\mathfrak{p} \supset \mathfrak{q}$. We need to show that
+\[ \mathfrak{p} \in \supp M, \ \text{i.e.} \ M_{\mathfrak{p}} \neq 0. \]
+But, since $\mathfrak{q} \in \ass(M)$, there is an injective map
+\[ R/\mathfrak{q} \hookrightarrow M , \]
+so localization gives an injective map
+\[ (R/\mathfrak{q})_{\mathfrak{p}} \hookrightarrow M_{\mathfrak{p}}. \]
+Here, however, the first object $(R/\mathfrak{q})_{\mathfrak{p}}$ is nonzero since nothing nonzero in $R/\mathfrak{q}$ can be
+annihilated by something outside $\mathfrak{p}$. So $M_{\mathfrak{p}} \neq
+0$, and $\mathfrak{p} \in \supp M$.
+
+Let us now prove the converse inclusion. Suppose that $\mathfrak{p} \in \supp M$. We
+have to show that $\mathfrak{p}$ contains an associated prime.
+By assumption, $M_{\mathfrak{p}} \neq 0$, and $M_{\mathfrak{p}}$ is a finitely generated
+module over the noetherian ring $R_{\mathfrak{p}}$. So $M_{\mathfrak{p}}$ has
+an associated prime.
+It follows by \rref{assmlocalization} that $\ass(M) \cap \spec
+R_{\mathfrak{p}}$ is nonempty. Since the primes of $R_{\mathfrak{p}}$
+correspond to the primes contained in $\mathfrak{p}$, it follows that there
+is a prime contained in $\mathfrak{p}$ that lies in $\ass(M)$. This is
+precisely what we wanted to prove.
+\end{proof}
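+
+For a quick illustration of this result:
+
+\begin{example}
+Let $R = \mathbb{Z}$ and $M = \mathbb{Z} \oplus \mathbb{Z}/5$, so that
+$\ass(M) = \left\{(0), (5)\right\}$. Then \rref{supportassociated} gives
+\[ \supp M = \overline{\{(0)\}} \cup \overline{\{(5)\}} = \spec \mathbb{Z}, \]
+which one can also see directly: every localization $M_{\mathfrak{p}}$
+contains $\mathbb{Z}_{\mathfrak{p}} \neq 0$. Note that the closure
+$\overline{\{(5)\}} = \{(5)\}$ is redundant here, since $(0) \subset (5)$.
+\end{example}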
+
+
+\begin{corollary} \label{suppisclosed} For $M$ finitely generated,
+$\supp M$ is closed. Further, every minimal element of $\supp M$ lies in
+$\ass(M)$.
+\end{corollary}
+\begin{proof}
+Indeed, the above result says that
+\[ \supp M = \bigcup_{\mathfrak{q} \in \ass(M)}
+\overline{\left\{\mathfrak{q}\right\}}. \]
+Since $\ass(M)$ is finite, it follows that $\supp M$ is closed.
+The above equality also shows that any minimal element of $\supp M$ must be an
+associated prime.
+\end{proof}
+
+\begin{example}
+\rref{suppisclosed} is \emph{false} for modules that are not finitely
+generated. Consider for instance the abelian group $\bigoplus_p \mathbb{Z}/p$.
+The support of this as a $\mathbb{Z}$-module is precisely the set of all
+closed points (i.e., maximal ideals) of $\spec \mathbb{Z}$, and is
+consequently not closed.
+\end{example}
+
+\begin{corollary}
+The ring $R$ has finitely many minimal prime ideals.
+\end{corollary}
+\begin{proof}
+Clearly, $\supp R = \spec R$. Thus every prime ideal of $R$
+contains an associated prime of $R$ by \rref{supportassociated}. In
+particular, every minimal prime of $R$ is itself an associated prime, and
+there are only finitely many associated primes.
+\end{proof}
+
+So if $R$ is noetherian, $\spec R$ is the finite union of the irreducible
+closed pieces $\overline{\left\{\mathfrak{q}\right\}}$, for $\mathfrak{q}$
+ranging over the minimal primes.
+\add{I am not sure if ``irreducibility'' has already been defined. Check this.}
+
+We have just seen that $\supp M$ is a closed subset of $\spec R$ and is a union
+of finitely many irreducible subsets. More precisely,
+\[ \supp M = \bigcup_{\mathfrak{q} \in \ass(M)}
+\overline{\left\{\mathfrak{q}\right\}} \]
+though there might be some redundancy in this expression, as some associated
+primes might be contained in others.
+
+\begin{definition}
+A prime $\mathfrak{p} \in \ass(M)$ is an \textbf{isolated} associated prime of
+$M$ if it is minimal (with respect to inclusion among the primes in $\ass(M)$); it is
+\textbf{embedded} otherwise.
+\end{definition}
+
+So the embedded primes are not needed to describe the support of $M$.
+
+\add{Examples of embedded primes}
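+
+One standard example, which the reader can check directly:
+
+\begin{example}
+Let $R = k[x,y]$ for a field $k$, and $M = R/(x^2, xy)$. Then $\ass(M) =
+\left\{(x), (x,y)\right\}$: indeed, $(x) = \ann(\overline{y})$ and $(x,y) =
+\ann(\overline{x})$. The prime $(x)$ is isolated, and
+\[ \supp M = \overline{\left\{(x)\right\}} = V(x), \]
+so the embedded prime $(x,y)$ is invisible in the support: $V(x^2, xy)$ is
+just the line $V(x)$, but the ideal $(x^2, xy)$ remembers some extra
+structure at the origin.
+\end{example}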
+
+\begin{remark}
+It follows that in a noetherian ring, every minimal prime consists of
+zerodivisors. Although we shall not use this in the future, the same is true
+in non-noetherian rings as well. Here is an argument.
+
+Let $R$ be a ring and $\mathfrak{p} \subset R$ a minimal prime. Then
+$R_{\mathfrak{p}}$ has precisely one prime ideal.
+We now use:
+
+\begin{lemma}
+If a ring $R$ has precisely one prime ideal $\mathfrak{p}$, then any $x \in
+\mathfrak{p}$ is nilpotent.
+\end{lemma}
+\begin{proof}
+Indeed, it suffices to see that $R_x = 0$ (\rref{nilpcriterion} in
+\rref{spec}) if $x \in
+\mathfrak{p}$. But $\spec R_x$
+consists of the primes of $R$ not containing $x$. However, there are no such
+primes. Thus $\spec R_x = \emptyset$, so $R_x = 0$.
+\end{proof}
+
+It follows that every element in $\mathfrak{p}$ is a zerodivisor in
+$R_{\mathfrak{p}}$.
+As a result, if $x \in \mathfrak{p}$, there is $\frac{s}{t} \in
+R_{\mathfrak{p}}$ such that $xs/t = 0$ but $\frac{s}{t} \neq 0$.
+In particular, there is $t' \notin \mathfrak{p}$ with
+\[ xst' = 0, \quad st' \neq 0, \]
+so that $x$ is a zerodivisor.
+\end{remark}
+
+
+
+\subsection{Primary modules}
+
+A primary module is one that has only one associated prime. Equivalently,
+every homothety on it is either injective or nilpotent.
+As we will see in the next section, any module has a ``primary
+decomposition:'' in fact, it embeds as a submodule of a sum of primary
+modules.
+
+\begin{definition}
+Let $\mathfrak{p} \subset R$ be prime, $M$ a finitely generated $R$-module. Then $M$ is
+\textbf{$\mathfrak{p}$-primary} if
+\[ \ass(M) = \left\{\mathfrak{p}\right\}. \]
+
+A module is \textbf{primary} if it is $\mathfrak{p}$-primary for some
+prime $\mathfrak{p}$, i.e., has precisely one associated prime.
+\end{definition}
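+
+The simplest examples are over $\mathbb{Z}$:
+
+\begin{example}
+The $\mathbb{Z}$-module $\mathbb{Z}/p^k$ is $(p)$-primary: the annihilator of
+any nonzero element is $(p^j)$ for some $1 \leq j \leq k$, and
+$\ass(\mathbb{Z}/p^k) = \left\{(p)\right\}$. By contrast, $\mathbb{Z}/6$ is
+not primary, since $\ass(\mathbb{Z}/6) = \left\{(2), (3)\right\}$.
+\end{example}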
+
+\begin{proposition} \label{whenisprimary}
+Let $M$ be a finitely generated $R$-module. Then $M$ is $\mathfrak{p}$-primary if
+and only if, for every $m \in M - \left\{0\right\}$,
+the annihilator $\ann(m)$ has radical $\mathfrak{p}$.
+\end{proposition}
+\begin{proof}
+We first need a small observation.
+
+\begin{lemma}
+If $M$ is $\mathfrak{p}$-primary, then any nonzero submodule $M' \subset M$ is
+$\mathfrak{p}$-primary.
+\end{lemma}
+\begin{proof}
+Indeed, we know that $\ass(M') \subset \ass(M)$ by \rref{assexact}.
+Since $M' \neq 0$, we also know that $M'$ has an associated prime
+(\rref{assmnonempty}). Thus $ \ass(M') = \{\mathfrak{p}\}$, so
+$M'$ is $\mathfrak{p}$-primary.
+\end{proof}
+
+Let us now return to the proof of the main result,
+\rref{whenisprimary}.
+Assume first that $M$ is $\mathfrak{p}$-primary. Let $x \in M$, $x \neq 0$. Let
+$I = \ann(x)$; we are to show that $\rad(I) =\mathfrak{p}$. By definition, there is an injection
+\[ R/I \hookrightarrow M \]
+sending $1 \to x$. As a result, $R/I$ is $\mathfrak{p}$-primary by the above
+lemma. We want to know that $\mathfrak{p} = \rad(I)$.
+We saw that the support $\supp R/I = \left\{\mathfrak{q}: \mathfrak{q}
+\supset I\right\}$ is the union of the closures of the associated primes. In
+this case,
+\[ \supp(R/I) = \left\{\mathfrak{q}: \mathfrak{q} \supset \mathfrak{p}\right\}
+.\]
+But we know that $\rad(I) = \bigcap_{\mathfrak{q} \supset I} \mathfrak{q}$,
+which by the above is just $\mathfrak{p}$. This proves that $\rad(I) =
+\mathfrak{p}$.
+We have thus shown that if $M$ is $\mathfrak{p}$-primary, then $\ann(x)$ has
+radical $\mathfrak{p}$ for every nonzero $x \in M$.
+
+The converse is easy.
+Suppose the condition holds and $\mathfrak{q} \in \ass(M)$, so $\mathfrak{q} =
+\ann(x)$ for $x \neq 0$. But then $\rad(\mathfrak{q}) = \mathfrak{p}$, so
+\[ \mathfrak{q} = \mathfrak{p} \] and $\ass(M) = \left\{\mathfrak{p}\right\}$.
+\end{proof}
+
+We have another characterization.
+
+\begin{proposition} \label{whenisprimary2}
+Let $M \neq 0$ be a finitely generated $R$-module. Then $M$ is primary if and
+only if for each $a \in R$,
+the homothety $ M \stackrel{a}{\to} M$ is either injective or nilpotent.
+\end{proposition}
+\begin{proof}
+Suppose first that $M$ is $\mathfrak{p}$-primary. If $a \in \mathfrak{p}$,
+then by \rref{whenisprimary} a power of $a$ annihilates each of the finitely
+many generators of $M$, so a single power of $a$ annihilates $M$: the
+homothety is nilpotent. If $a \notin \mathfrak{p}$, then for any $x \in M -
+\left\{0\right\}$, the ideal $\ann(x)$ has radical $\mathfrak{p}$ and so
+cannot contain $a$; hence $ax \neq 0$, and the homothety is injective.
+
+Let us now do the other direction. Assume that every $a \in R$ acts either injectively or nilpotently on $M$.
+Let $I \subset R$ be the collection of elements $a \in R$ such that $a^n M = 0$
+for $n$ large. Then $I$ is an ideal, since it is closed under addition by the
+binomial formula: if $a, b \in I$ and $a^n, b^n$ act by zero, then $(a+b)^{2n}$
+acts by zero as well.
+
+
+I claim that $I$ is actually prime. If $a,b \notin I$, then $a,b$ act by
+multiplication injectively on $M$. So $a: M \to M, b: M \to M$ are injective.
+However, a composition of injections is injective, so $ab$ acts injectively and
+$ab \notin I$. So $I$ is prime.
+
+We need now to check that if $x \in M$ is nonzero, then $\ann(x)$ has radical
+$I$. Indeed, if $a^k$ annihilates $x$ for some $k$,
+then the homothety $M \stackrel{a^k}{\to} M$ cannot be injective, so it must
+be nilpotent; thus $a^k \in I$, and since $I$ is prime, $a \in I$. Conversely,
+if $a \in I$, then a power of $a$ annihilates all of $M$, so in particular a
+power of $a$ kills $x$.
+It follows that $\rad(\ann(x)) = I$. Now, by \rref{whenisprimary}, we see
+that $M$ is $I$-primary.
+\end{proof}
+
+We now have this notion of a primary module. The idea is that all the torsion is
+somehow concentrated in some prime.
+
+\begin{example}
+If $R$ is a noetherian ring and $\mathfrak{p} \in \spec R$, then
+$R/\mathfrak{p}$ is $\mathfrak{p}$-primary. More generally, if $I \subset R$
+is an ideal whose radical is \emph{maximal}, then $R/I$ is primary; this
+follows from \rref{whenisprimary2}. Note, however, that primality of
+$\rad(I)$ alone is not enough: for $I = (x^2, xy) \subset k[x,y]$, the radical
+$\rad(I) = (x)$ is prime, but $R/I$ is not primary, as $\ass(R/I) = \left\{(x),
+(x,y)\right\}$.
+\end{example}
+
+\begin{exercise}
+If $0 \to M' \to M \to M'' \to 0$ is an exact sequence with $M', M, M''$
+nonzero and finitely generated, then $M$ is $\mathfrak{p}$-primary if and only if $M', M''$ are.
+\end{exercise}
+
+\begin{exercise}
+Let $M$ be a finitely generated $R$-module. Let $\mathfrak{p} \in \spec R$. Show that the sum of two
+$\mathfrak{p}$-primary submodules is $\mathfrak{p}$-primary. Deduce that
+there is a $\mathfrak{p}$-primary submodule of $M$ which contains every
+$\mathfrak{p}$-primary submodule.
+\end{exercise}
+
+\begin{exercise}[Bourbaki]
+Let $M$ be a finitely generated $R$-module. Let $T \subset \ass(M)$ be a
+subset of the associated primes. Prove that there is a submodule $N \subset M$
+such that
+\[ \ass(N) = T, \quad \ass(M/N) = \ass(M) - T. \]
+
+\end{exercise}
+
+\section{Primary decomposition} This is the structure theorem for modules
+over a noetherian ring, in some sense.
+Throughout, we fix a noetherian ring $R$.
+
+\subsection{Irreducible and coprimary modules}
+
+\begin{definition}
+Let $M$ be a finitely generated $R$-module. A submodule $N \subset M$ is
+\textbf{$\mathfrak{p}$-coprimary} if $M/N$ is $\mathfrak{p}$-primary.
+
+Similarly, we can say that $N \subset M$ is \textbf{coprimary} if it is
+$\mathfrak{p}$-coprimary for some $\mathfrak{p} \in \spec R$.
+\end{definition}
+
+We shall now show we can represent any submodule of $M$ as an intersection of
+coprimary submodules. In order to do this, we will define a submodule of $M$ to be
+\emph{irreducible} if it cannot be written as a nontrivial intersection of
+submodules of $M$. It
+will follow by general nonsense that any submodule is an intersection of
+irreducible submodules. We will then see that any irreducible submodule is
+coprimary.
+
+\begin{definition}
+The submodule $N \subsetneq M$ is \textbf{irreducible} if whenever $N = N_1 \cap N_2$ for $N_1,
+N_2 \subset M$ submodules, one of $N_1, N_2$ equals $N$. In other words, $N$ is not
+ the intersection of two strictly larger submodules.
+\end{definition}
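+
+In the simplest case this is a familiar notion:
+
+\begin{example}
+Take $M = R = \mathbb{Z}$. The submodule $(12) = (4) \cap (3)$ is not
+irreducible, while $(p^k)$ is irreducible for any prime $p$: an intersection
+of nonzero ideals $(a) \cap (b) = (\mathrm{lcm}(a,b))$ can equal $(p^k)$ only
+if $a$ or $b$ is $\pm p^k$.
+\end{example}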
+
+\begin{proposition} \label{irrediscoprimary}
+An irreducible submodule $N \subset M$ is coprimary.
+\end{proposition}
+\begin{proof}
+Say $a \in R$. We would like to show that the homothety
+\[ M/N \stackrel{a}{\to} M/N \]
+is either injective or nilpotent.
+Consider the following submodules of $M/N$:
+\[ K(n) = \left\{x \in M/N: a^n x = 0\right\} . \]
+Then clearly $K(0) \subset K(1) \subset \dots$; this chain stabilizes as
+the quotient module is noetherian.
+In particular, $K(n) = K(2n)$ for large $n$.
+
+It follows that if $x \in M/N$ is divisible by $a^n$ ($n$ large) and nonzero, then $a^n x$
+is also nonzero. Indeed, say $x = a^n y \neq 0$; then $y \notin K(n)$, so $a^{n}x =
+a^{2n}y \neq 0$ or we would have $y \in K(2n) = K(n)$. In $M/N$, the submodules
+\[ a^n(M/N) \cap \ker(a^n) \]
+are equal to zero for large $n$, by the previous paragraph. But our
+assumption was that $N$ is irreducible: the preimages in $M$ of these two
+submodules intersect in $N$, so one of them must equal $N$. That is, either
+$a^n(M/N) = 0$ or $\ker (a^n) = 0$. We get either nilpotence or injectivity
+of $a$ on $M/N$. This proves the result.
+\end{proof}
+
+\subsection{Irreducible and primary decompositions}
+
+We shall now show that in a finitely generated module over a noetherian ring,
+we can write $0$ as an intersection of coprimary modules. This decomposition,
+which is called a \emph{primary decomposition}, will be deduced from purely
+general reasoning.
+
+\begin{definition}
+An \textbf{irreducible decomposition} of the module $M$ is a representation
+$N_1 \cap N_2 \cap \dots \cap N_k = 0$, where the $N_i \subset M$ are irreducible
+submodules.
+\end{definition}
+
+\begin{proposition}
+If $M$ is finitely generated, then $M$ has an irreducible decomposition: there exist finitely many irreducible
+submodules $N_1, \dots, N_k$ with
+\[ N_1 \cap \dots \cap N_k = 0. \]
+\end{proposition}
+In other words,
+\[ M \to \bigoplus M/N_i \]
+is injective.
+So a finitely generated module over a noetherian ring can be imbedded in a direct sum of
+primary modules, since by \rref{irrediscoprimary} the $M/N_i$ are
+primary.
+
+\begin{proof} This is now purely formal.
+
+Among the submodules of $M$, some may be expressible as intersections of
+finitely many irreducibles, while some may not be. Our goal is to show that
+$0$ is such an intersection.
+If every submodule of $M$ can be written as such an intersection, we are
+done, since in particular $0$ can. Otherwise, since $M$ is noetherian, we may
+choose $M' \subset M$ maximal among the submodules that \emph{cannot} be
+written as such an intersection.
+
+Now $M'$ is not irreducible, or it would trivially be the intersection of a
+single irreducible submodule, namely itself.
+It follows $M'$ can be written as $M'=M_1' \cap M_2'$ for two strictly
+larger submodules of $M$. But by maximality, $M_1', M_2'$ admit decompositions as
+intersections of irreducibles. So $M'$ admits such a decomposition as well, a contradiction.
+\end{proof}
+
+\begin{corollary}
+For any finitely generated $M$, there exist coprimary submodules $N_1, \dots,
+N_k \subset M$ such that $N_1 \cap \dots \cap N_k = 0$.
+\end{corollary}
+\begin{proof}
+Indeed, every irreducible submodule is coprimary.
+\end{proof}
+
+
+For any finitely generated $M$, we have an \textbf{irreducible decomposition}
+\[ 0 = \bigcap N_i \]
+for the $N_i$ a finite set of irreducible (and thus coprimary) submodules.
+This decomposition here is highly non-unique and non-canonical. Let's try to
+pare it down to something which is a lot more canonical.
+
+The first claim is that we can collect together modules which are coprimary for
+some prime.
+\begin{lemma}
+Let $N_1, N_2 \subset M$ be $\mathfrak{p}$-coprimary submodules. Then $N_1 \cap
+N_2$ is also $\mathfrak{p}$-coprimary.
+\end{lemma}
+\begin{proof}
+We have to show that $M/(N_1 \cap N_2)$ is $\mathfrak{p}$-primary. Indeed, we have an injection
+\[ M/(N_1 \cap N_2) \rightarrowtail M/N_1 \oplus M/N_2, \]
+which implies that $\ass(M/(N_1 \cap N_2)) \subset \ass(M/N_1) \cup \ass(M/N_2) =
+\left\{\mathfrak{p}\right\}$. So we are done.
+\end{proof}
+
+In particular, if we do not want irreducibility but only primariness in the
+decomposition
+\[ 0 = \bigcap N_i, \]
+we can assume that each $N_i$ is $\mathfrak{p}_i$-coprimary for some prime
+$\mathfrak{p}_i$, with the $\mathfrak{p}_i$ \emph{distinct}.
+
+\begin{definition}
+Such a decomposition of zero, where the different modules $N_i$ are
+$\mathfrak{p}_i$-coprimary for different $\mathfrak{p}_i$, is called a \textbf{primary decomposition}.
+\end{definition}
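+
+For instance, over $\mathbb{Z}$ this recovers a familiar decomposition:
+
+\begin{example}
+In $M = \mathbb{Z}/12$, the submodules $N_1 = 4\mathbb{Z}/12\mathbb{Z}$ and
+$N_2 = 3\mathbb{Z}/12\mathbb{Z}$ satisfy $N_1 \cap N_2 = 0$, with $M/N_1
+\simeq \mathbb{Z}/4$ being $(2)$-primary and $M/N_2 \simeq \mathbb{Z}/3$
+being $(3)$-primary. Thus $0 = N_1 \cap N_2$ is a primary decomposition,
+realizing the embedding $\mathbb{Z}/12 \rightarrowtail \mathbb{Z}/4 \oplus
+\mathbb{Z}/3$ (here actually an isomorphism, by the Chinese remainder
+theorem).
+\end{example}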
+
+
+
+\subsection{Uniqueness questions}
+
+In general, primary decomposition is \emph{not} unique. Nonetheless, we shall
+see that a limited amount of uniqueness does hold. For instance, the primes
+that occur are determined.
+
+Let $M$ be a finitely generated module over a noetherian ring $R$, and suppose
+$N_1 \cap \dots \cap N_k = 0$ is a primary decomposition.
+Let us assume that the decomposition is
+\emph{minimal}: that is, if we dropped one of the $N_i$, the intersection would no
+longer be zero.
+This implies that
+\[ N_i \not\supset \bigcap_{j \neq i} N_j \]
+or we could omit one of the $N_i$. Then the decomposition is called a \textbf{reduced primary decomposition}.
+
+Again, what this tells us is that $M \rightarrowtail \bigoplus M/N_i$. What we
+have shown is that $M$ can be imbedded in a sum of pieces, each of which is
+$\mathfrak{p}$-primary for some prime, and the different primes are distinct.
+
+This is \textbf{not} unique. However,
+
+\begin{proposition}
+The primes $\mathfrak{p}_i$ that appear in a reduced primary decomposition of zero are
+uniquely determined. They are the associated primes of $M$.
+\end{proposition}
+\begin{proof}
+All the associated primes of $M$ have to be there, because we have the injection
+\[ M \rightarrowtail \bigoplus M/N_i \]
+so the associated primes of $M$ are among those of $M/N_i$ (i.e. the
+$\mathfrak{p}_i$).
+
+The hard direction is to see that each $\mathfrak{p}_i$ is an associated
+prime: that is, if $M/N_i$ is $\mathfrak{p}_i$-primary, then $\mathfrak{p}_i
+\in \ass(M)$.
+Here we need to use the fact that our decomposition has no redundancy. Without
+loss of generality, it suffices to show that $\mathfrak{p}_1$, for instance,
+belongs to $\ass(M)$. We will use the fact that
+\[ N_1 \not\supset N_2 \cap \dots \cap N_k . \]
+This tells us that $N_2 \cap \dots \cap N_k$ is not zero, or we would have
+such a containment. We have a map
+\[ N_2 \cap \dots \cap N_k \to M/N_1; \]
+it is injective, since the kernel is $N_1 \cap N_2 \cap \dots \cap N_k = 0$ as
+this is a decomposition.
+However, $M/N_1$ is $\mathfrak{p}_1$-primary, so $N_2 \cap \dots \cap N_k$ is
+$\mathfrak{p}_1$-primary. In particular, $\mathfrak{p}_1$ is an associated
+prime of $N_2 \cap \dots \cap N_k$, hence of $M$.
+\end{proof}
+
+The primes are determined. The factors are not. However, in some cases they are.
+
+\begin{proposition}
+Let $\mathfrak{p}_i$ be a minimal associated prime of $M$, i.e. not containing
+any smaller associated prime. Then the submodule $N_i$ corresponding to
+$\mathfrak{p}_i$ in the reduced primary decomposition is uniquely determined:
+it is the kernel of
+\[ M \to M_{\mathfrak{p}_i}. \]
+\end{proposition}
+
+\begin{proof}
+We have that $\bigcap N_j = \left\{0\right\} \subset M$. When we localize at
+$\mathfrak{p}_i$, we find that
+\[ (\bigcap N_j)_{\mathfrak{p}_i} = \bigcap (N_j)_{\mathfrak{p}_i} =0 \]
+as localization is an exact functor. If $j \neq i$, then $M/N_j$ is
+$\mathfrak{p}_j$-primary, and has only $\mathfrak{p}_j$ as an associated prime.
+It follows that $(M/N_j)_{\mathfrak{p}_i}$ has no associated primes, since the
+only possible associated prime would be $\mathfrak{p}_j$, which is not
+contained in $\mathfrak{p}_i$ (as $\mathfrak{p}_i$ is minimal and the primes
+are distinct).
+In particular, $(N_j)_{\mathfrak{p}_i} = M_{\mathfrak{p}_i}$.
+
+Thus, when we localize the primary decomposition at $\mathfrak{p}_i$, we get
+a trivial primary decomposition: every factor $(N_j)_{\mathfrak{p}_i}$ with
+$j \neq i$ is the full
+$M_{\mathfrak{p}_i}$. It follows that $(N_i)_{\mathfrak{p}_i}=0$. When we draw
+a commutative diagram
+\[
+\xymatrix{
+N_i \ar[r] \ar[d] & (N_i)_{\mathfrak{p}_i} = 0 \ar[d] \\
+M \ar[r] & M_{\mathfrak{p}_i}.
+}
+\]
+we find that $N_i$ goes to zero in the localization.
+
+Now if $x \in \ker(M \to M_{\mathfrak{p}_i})$, then $sx = 0$ for some $s \notin
+\mathfrak{p}_i$. When we take the map $M \to M/N_i$, $sx$ maps to zero; but $s$
+acts injectively on $M/N_i$, so $x$ maps to zero in $M/N_i$, i.e. $x \in N_i$.
+\end{proof}
+
+This has been abstract, so:
+\begin{example} Let $ R = \mathbb{Z}$.
+Let $M = \mathbb{Z} \oplus \mathbb{Z}/p$. Then zero can be written as
+\[ (\mathbb{Z} \oplus 0) \cap (0 \oplus \mathbb{Z}/p) \]
+as submodules of $M$. Here $\mathbb{Z} \oplus 0$ is $(p)$-coprimary, while
+$0 \oplus \mathbb{Z}/p$ is $(0)$-coprimary.
+
+This is not unique: we could replace $\mathbb{Z} \oplus 0$ by
+\[ \left\{(n, \overline{n}): n \in \mathbb{Z}\right\} \subset M. \]
+However, the $(0)$-coprimary part has to be the $p$-torsion $0 \oplus
+\mathbb{Z}/p$. This is because $(0)$ is a minimal associated prime, so the
+corresponding submodule is uniquely determined.
+
+The decomposition is unique, in general, when there are no inclusions among
+the associated primes. For $\mathbb{Z}$-modules, this
+means that primary decomposition is unique for torsion modules.
+Any finitely generated torsion group is the direct sum of its $p$-power
+torsion subgroups over all primes $p$.
+\end{example}
+
+\begin{exercise}
+Suppose $R$ is a noetherian ring such that $R_{\mathfrak{p}}$ is a domain for each prime ideal
+$\mathfrak{p} \subset R$. Then $R$ is a finite direct product $\prod R_i$ with
+each $R_i$ a domain.
+
+To see this, consider the minimal primes $\mathfrak{p}_i \in \spec R$. There
+are finitely many of them, and argue that since every localization is a domain,
+$\spec R$ is disconnected into the pieces $V(\mathfrak{p}_i)$.
+It follows that there is a decomposition $R = \prod R_{i}$ where $\spec R_i$
+has $\mathfrak{p}_i$ as the unique minimal prime.
+Each $R_i$ satisfies the same condition as $R$, so we may reduce to the case
+of $R$ having a unique minimal prime ideal. In this case, however, $R$ is
+reduced (a nilpotent element dies in each localization $R_{\mathfrak{p}}$,
+hence is zero), so its unique minimal prime ideal must be zero.
+\end{exercise}
+
+
+
+\section{Artinian rings and modules}
+
+The notion of an \emph{artinian ring} appears to be dual to that of a
+noetherian ring, since the chain condition is simply reversed in the
+definition. However, the artinian condition is much stronger than the
+noetherian one. In fact,
+artinianness actually implies noetherianness, and much more.
+Artinian modules over non-artinian rings are frequently of interest as well;
+for instance, if $R$ is a noetherian ring and $\mathfrak{m}$ is a maximal
+ideal, then for any finitely generated $R$-module $M$, the module
+$M/\mathfrak{m}M$ is artinian.
+
+\subsection{Definitions}
+
+\begin{definition}
+A commutative ring $R$ is \textbf{Artinian} if every descending chain of ideals
+$I_0 \supset I_1 \supset I_2 \supset \dots$
+stabilizes.
+\end{definition}
+
+\begin{definition}
+The same definition makes sense for modules. We can define an $R$-module $M$ to
+be \textbf{Artinian} if every descending chain of submodules stabilizes.
+\end{definition}
+
+As we shall see when we study dimension theory, we often do want to study
+artinian modules over non-artinian rings, so this definition is useful.
+
+\begin{exercise}
+A module is artinian if and only if every nonempty collection of submodules
+has a minimal element.
+\end{exercise}
+\begin{exercise}
+A ring which is a finite-dimensional algebra over a field is artinian.
+\end{exercise}
+\begin{proposition} \label{exactartinian}
+If $0 \to M' \to M \to M'' \to 0$ is an exact sequence, then $M$ is Artinian
+if and only if $M', M''$ are.
+\end{proposition}
+
+This is proved in the same way as for noetherianness.
+
+\begin{corollary}
+Let $R$ be artinian. Then every finitely generated $R$-module is artinian.
+\end{corollary}
+\begin{proof}
+Standard.
+\end{proof}
+
+\subsection{The main result}
+This definition is obviously dual to the notion of noetherianness, but it is
+much more restrictive.
+The main result is:
+
+\begin{theorem} \label{artinianclassification}
+A commutative ring $R$ is artinian if and only if:
+\begin{enumerate}
+\item $R$ is noetherian.
+\item Every prime ideal of $R$ is maximal.\footnote{This is much different from
+the Dedekind ring condition---there, zero is not maximal. An artinian domain is
+necessarily a field, in fact.}
+\end{enumerate}
+\end{theorem}
+
+
+So artinian rings are very simple---small in some sense.
+They all look kind of like fields.
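+
+Some examples to keep in mind:
+
+\begin{example}
+Fields, $\mathbb{Z}/n$ for $n > 0$, $k[x]/(x^n)$, and finite products of
+these are all artinian. By contrast, $\mathbb{Z}$ and $k[x]$ are noetherian
+but not artinian: the chain $(x) \supset (x^2) \supset (x^3) \supset \dots$
+in $k[x]$ never stabilizes.
+\end{example}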
+
+We shall prove this result in a series of small pieces. We begin with a piece
+of the forward implication in \rref{artinianclassification}:
+\begin{lemma} Let $R$ be artinian.
+Every prime $\mathfrak{p} \subset R$ is maximal.
+\end{lemma}
+\begin{proof}
+Indeed, if $\mathfrak{p} \subset R$ is a prime ideal, $R/\mathfrak{p}$ is
+artinian, as it is a quotient of an artinian ring. We want to show that
+$R/\mathfrak{p}$ is a field,
+which is the same thing as saying that $\mathfrak{p}$ is maximal.
+(In particular, we are essentially proving that an artinian \emph{domain} is a
+field.)
+
+Let $x \in
+R/\mathfrak{p}$ be nonzero. We have a descending chain
+\[ R/\mathfrak{p} \supset (x) \supset (x^{2}) \supset \dots \]
+which necessarily stabilizes. Then we have $(x^n) = (x^{n+1})$ for some $n$. In
+particular, we have $x^n = y x^{n+1}$ for some $y \in R/\mathfrak{p}$. But $x$
+is a nonzerodivisor, and we find $ 1 = xy$. So $x$ is invertible. Thus
+$R/\mathfrak{p}$ is a field.
+\end{proof}
+
+Next, we claim there are only a few primes in an artinian ring:
+\begin{lemma}
+If $R$ is artinian, there are only finitely many maximal ideals.
+\end{lemma}
+\begin{proof}
+Assume otherwise. Then we have an infinite sequence
+\[ \mathfrak{m}_1, \mathfrak{m}_2, \dots \]
+of distinct maximal ideals. Then we have the descending chain
+\[ R \supset \mathfrak{m}_1 \supset \mathfrak{m}_1 \cap \mathfrak{m}_2 \supset \dots. \]
+This necessarily stabilizes. So for some $n$, we have that $\mathfrak{m}_1 \cap \dots \cap
+\mathfrak{m}_n \subset \mathfrak{m}_{n+1}$. However, this means that
+$\mathfrak{m}_{n+1}$ contains one of the $\mathfrak{m}_1, \dots,
+\mathfrak{m}_n$ since these are prime ideals (a familiar argument). Maximality
+and distinctness of the $\mathfrak{m}_i$ give a contradiction.
+\end{proof}
+
+In particular, we see that $\spec R$ for an artinian ring is just a finite set.
+In fact, since each point is closed, as each prime is maximal, the set has the
+\emph{discrete topology.} As a result, $\spec R$ for an artinian ring is
+\emph{Hausdorff}. (There are very few other cases.)
+
+This means that $R$ factors as a product of rings. Whenever $\spec R$ can be
+written as a disjoint union of components, there is a factoring of $R$ into a
+product $\prod R_i$. So $R = \prod R_i$ where each $R_i$ has
+only one maximal ideal. Each $R_i$, as a homomorphic image of $R$, is artinian. We find, as a result,
+
+\add{mention that disconnections of $\spec R$ are the same thing as
+idempotents.}
+
+\begin{proposition}
+Any artinian ring is a finite product of local artinian rings.
+\end{proposition}
+
+Now, let us continue our analysis. We may as well assume that we are working
+with \emph{local} artinian rings $R$ in the future. In particular, $R$ has a unique
+prime $\mathfrak{m}$, which must be the radical of $R$ as the radical is the
+intersection of all primes.
+
+We shall now see that the unique prime ideal $\mathfrak{m} \subset R$ is
+nilpotent by:
+\begin{lemma} \label{radnilpotentartinian}
+If $R$ is artinian (not necessarily local), then $\rad (R) $ is nilpotent.
+\end{lemma}
+
+It is, of course, always true that any \emph{element} of the radical $\rad(R)$
+is nilpotent, but it is not true for a general ring $R$ that $\rad(R)$ is
+nilpotent as an \emph{ideal}.
+
+\begin{proof}
+Call $J = \rad(R)$. Consider the decreasing filtration
+\[ R \supset J \supset J^2 \supset J^3 \supset \dots. \]
+We want to show that this stabilizes at zero. A priori, we know that it
+stabilizes \emph{somewhere}. For some $n$, we have
+\[ J^n = J^{n'}, \quad n' \geq n. \]
+Call the eventual stabilization of these ideals $I$. Consider ideals $I'$ such
+that
+\[ II' \neq 0. \]
+There are now two cases:
+\begin{enumerate}
+\item No such $I'$ exists. Then $I = 0$, and we are done: the powers of
+$J$ stabilize at zero.
+\item Otherwise, there is a
+\emph{minimal} such $I'$ (minimal for satisfying $II' \neq 0$) as $R$ is
+artinian. Necessarily $I'$ is nonzero, and furthermore there is $x \in I'$ with $x I \neq
+0$.
+
+It follows by minimality that
+\[ I' = (x) , \]
+so $I'$ is principal. Then $xI \neq 0$; observe
+that $xI$ is also $(xI)I $ as $I^2 = I$ from the definition of $I$. Since
+$(xI) I \neq 0$, it follows again by minimality that
+\[ xI = (x). \] Hence, there is $y \in I$ such that $xy = x$; but now, by construction $I \subset J = \rad (R)$, implying that $y $ is nilpotent.
+It follows that
+\[ x = xy = xy^2 = \dots = 0 \]
+as $y$ is nilpotent. However, $x \neq 0$ as $xI \neq 0$. This is a
+contradiction, which implies that the second case cannot occur.
+\end{enumerate}
+We have now proved the lemma.
+\end{proof}
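+
+To illustrate both the lemma and the caveat above:
+\begin{example}
+If $R = k[x]/(x^n)$, then $R$ is artinian, $\rad(R) = (x)$, and indeed
+$\rad(R)^n = 0$. On the other hand, in the non-artinian ring
+$k[x_1, x_2, \dots]/(x_1^2, x_2^3, x_3^4, \dots)$, every element of the ideal
+$(x_1, x_2, \dots)$ is nilpotent, but the ideal itself is not nilpotent: its
+$m$th power contains $x_m^m \neq 0$ for every $m$.
+\end{example}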
+
+Finally, we may prove:
+
+\begin{lemma}
+A local artinian ring $R$ is noetherian.
+\end{lemma}
+\begin{proof}
+We have the filtration $R \supset \mathfrak{m} \supset \mathfrak{m}^2 \supset
+\dots$. This eventually stabilizes at zero by \rref{radnilpotentartinian}. I
+claim that $R$ is noetherian as an $R$-module. To prove this, it suffices to
+show that $\mathfrak{m}^k/\mathfrak{m}^{k+1}$ is noetherian as an $R$-module.
+But of course, this is annihilated by $\mathfrak{m}$, so it is really a vector
+space over the field $R/\mathfrak{m}$. But $\mathfrak{m}^k/\mathfrak{m}^{k+1}$
+is a subquotient of an artinian module, so is artinian itself. We have to show
+that it is noetherian.
+It suffices to show now that if $k$ is a field, and $V$ a $k$-vector space,
+then TFAE:
+\begin{enumerate}
+\item $V$ is artinian.
+\item $V$ is noetherian.
+\item $V$ is finite-dimensional.
+\end{enumerate}
+This is evident by linear algebra.
+\end{proof}
+
+Now, finally, we have shown that an artinian ring is noetherian. We have to
+discuss the converse. Let us assume now that $R$ is noetherian and has only
+maximal prime ideals. We show that $R$ is artinian. Let us consider $\spec R$;
+there are only finitely many minimal primes by the theory of associated
+primes: every prime ideal is minimal in this case. Once again, we learn that $\spec R$
+is finite and has the discrete topology. This means that $R$ is a product of
+factors $\prod R_i$ where each $R_i$ is a local noetherian ring with a unique
+prime ideal. We might as well now prove:
+
+\begin{lemma}
+Let $(R, \mathfrak{m})$ be a local noetherian ring with one prime ideal. Then
+$R$ is artinian.
+\end{lemma}
+\begin{proof}
+We know that $\mathfrak{m} = \mathrm{rad}(R)$. So $\mathfrak{m}$ consists of
+nilpotent elements, and since it is finitely generated, it is itself nilpotent. Then we have a
+finite filtration
+\[ R \supset \mathfrak{m} \supset \dots \supset \mathfrak{m}^k = 0. \]
+Each of the quotients $\mathfrak{m}^i/\mathfrak{m}^{i+1}$ is a finite-dimensional vector space over $R/\mathfrak{m}$, hence artinian; this
+implies that $R$ itself is artinian.
+\end{proof}
+
+ \begin{remark}
+ Note that artinian implies noetherian! This statement is true for rings (even
+ non-commutative rings), but not for modules. Take the same example $M = \varinjlim
+ \mathbb{Z}/p^n\mathbb{Z}$ over $\mathbb{Z}$. However, there is a module-theoretic statement which is
+ related.
+ \end{remark}
+ \begin{corollary}
+ For a finitely generated module $M$ over any commutative ring $R$, the following are
+ equivalent.
+ \begin{enumerate}
+ \item $M$ is an artinian module.
+ \item $M$ has finite length (i.e.\ is noetherian and artinian).
+ \item $R/\ann M$ is an artinian ring.
+ \end{enumerate}
+ \end{corollary}
+\begin{proof}
+\add{proof}
+\end{proof}
+\begin{exercise}
+If $R$ is an artinian ring, and $S$ is a finite $R$-algebra (finite as an
+$R$-module), then $S$ is artinian.
+\end{exercise}
+
+\begin{exercise}
+Let $M$ be an artinian module over a commutative ring $R$, $f: M \to M$ an \emph{injective} homomorphism.
+Show that $f$ is surjective, hence an isomorphism.
+\end{exercise}
+
+
+\subsection{Vista: zero-dimensional non-noetherian rings}
+ \begin{definition}[von Neumann]
+ An element $a\in R$ is called \emph{von Neumann regular} if there is some $x\in R$
+ such that $a=axa$.
+ \end{definition}
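+ For example:
+ \begin{example}
+ Every unit $a \in R$ is von Neumann regular: take $x = a^{-1}$. So is every
+ idempotent: if $a^2 = a$, then $a \cdot 1 \cdot a = a$, so $x = 1$ works. In
+ particular, every element of a field, or of a product of fields, is von
+ Neumann regular.
+ \end{example}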
+ \begin{definition}[McCoy]
+ An element $a\in R$ is \emph{$\pi$-regular} if some power of $a$ is von Neumann
+ regular.
+ \end{definition}
+ \begin{definition}
+ An element $a\in R$ is \emph{strongly $\pi$-regular} (in the commutative case)
+ if the chain $aR\supset a^2R\supset a^3R\supset \cdots$ stabilizes.
+ \end{definition}
+ A ring $R$ is von Neumann regular (resp.\ (strongly) $\pi$-regular) if every element of
+ $R$ is.
+
+ \begin{theorem}[5.2]
+ For a commutative ring $R$, the following are equivalent.
+ \begin{enumerate}
+ \item $\dim R=0$.
+ \item $R$ is rad-nil (i.e.\ the Jacobson radical $J(R)$ equals the nilradical) and $R/\rad R$ is von Neumann regular.
+ \item $R$ is strongly $\pi$-regular.
+ \item $R$ is $\pi$-regular.
+
+ \item[] \hspace{-7ex} And any one of these implies
+ \item Any non-zero-divisor is a unit.
+ \end{enumerate}
+ \end{theorem}
+ \begin{proof}
+ We prove $3\Rightarrow 4$, $4\Rightarrow 1$, and $4\Rightarrow 5$; we will
+ not do $1\Rightarrow 2\Rightarrow 3$ here.
+
+ ($3\Rightarrow 4$) Given $a\in R$, there is some $n$ such that $a^n R = a^{n+1}
+ R=a^{2n}R$, which implies that $a^n = a^n x a^n$ for some $x$.
+
+ ($4\Rightarrow 1$) Let $\mathfrak{p}$ be a prime ideal; we show it is maximal.
+ Let $a\not\in \mathfrak{p}$. Since $a$ is $\pi$-regular, we
+ have $a^n=a^{2n}x$ for some $n$ and $x$, so $a^n(1-a^nx)=0\in \mathfrak{p}$;
+ since $a^n \notin \mathfrak{p}$, we get $1-a^nx\in \mathfrak{p}$. It follows that $a$ has an
+ inverse mod $\mathfrak{p}$, so $R/\mathfrak{p}$ is a field.
+
+ ($4\Rightarrow 5$) If $a$ is a non-zero-divisor, then $a^n(1-a^nx)=0$ forces $1-a^nx=0$, so $a \cdot a^{n-1}x = 1$ and $a$ is a unit.
+ \end{proof}
+ \begin{example}
+ Any local rad-nil ring is zero-dimensional, since condition 2 holds:
+ $R/\rad R$ is the residue field, which is von Neumann regular.
+ In particular, for a ring $S$ and maximal ideal $\mathfrak{m}$,
+ $R=S/\mathfrak{m}^n$ is zero dimensional
+ because it is a rad-nil local ring.
+ \end{example}
+ \begin{example}[Split-Null Extension]
+ For a ring $A$ and $A$-module $M$, let $R=A\oplus M$
+ with the multiplication $(a,m)(a',m')=(aa',am'+a'm)$ (i.e.\ take the multiplication on
+ $M$ to be zero). In $R$, $M$ is an ideal of square zero. ($A$ is called a
+ \emph{retract} of $R$ because it sits inside $R$ and is recovered as the
+ quotient $R/M$.) If $A$ is a field, then $R$ is a rad-nil local ring, with maximal ideal $M$.
+ \end{example}
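+ To make this concrete:
+ \begin{example}
+ Taking $A = k$ a field and $M = k$, the split-null extension is the ring of
+ dual numbers $k[\epsilon]/(\epsilon^2)$: the element $\epsilon = (0,1)$ spans
+ the square-zero maximal ideal $M$.
+ \end{example}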
+
diff --git a/books/cring/smoothness.tex b/books/cring/smoothness.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9a7fb54ffe56388f61cd96d980f0835e4258392b
--- /dev/null
+++ b/books/cring/smoothness.tex
@@ -0,0 +1,1353 @@
+\chapter{Regularity, differentials, and smoothness}
+
+
+In this chapter, we shall introduce two notions. First, we shall discuss
+\emph{regular} local rings. On varieties over an algebraically closed field,
+regularity corresponds to nonsingularity of the variety at that point.
+(Over non-algebraically closed fields, the connection is more subtle.) This
+will be a continuation of the local algebra done earlier in the chapter
+\cref{chdimension}
+on dimension theory.
+
+We shall next introduce the module of \emph{K\"ahler differentials} of a
+morphism of rings $A \to B$, which itself can measure smoothness (though this
+connection will not be fully elucidated until a later chapter).
+The module of K\"ahler differentials is the algebraic analog of the
+\emph{cotangent bundle} to a manifold, and we will show that for an affine
+ring, it can be computed very explicitly. For a
+smooth variety, we will see that this module is \emph{projective}, and hence a
+good candidate of a vector bundle.
+
+Despite the title, we shall actually wait a few chapters before introducing the
+general theory of smooth morphisms.
+
+
+
+\section{Regular local rings}
+We shall start by introducing the concept of a \emph{regular local} ring, which
+is one where the embedding dimension and Krull dimension coincide.
+\subsection{Regular local rings}
+
+Let $A$ be a local noetherian ring with maximal ideal $\mathfrak{m} \subset A$
+and residue field $k = A/\mathfrak{m}$.
+Endow $A$ with the $\mathfrak{m}$-adic topology, so that there is a natural
+graded $k$-algebra $\gr(A) = \bigoplus \mathfrak{m}^i/\mathfrak{m}^{i+1}$.
+This is a finitely generated $k$-algebra; indeed, a system of generators for
+the ideal $\mathfrak{m}$ (considered as elements of
+$\mathfrak{m}/\mathfrak{m}^2$) generates $\gr(A)$ over $k$.
+As a result, we have a natural surjective map of \emph{graded} $k$-algebras:
+\begin{equation} \label{reglocringmap} \Sym_k \mathfrak{m}/\mathfrak{m}^2 \to
+\gr(A). \end{equation}
+Here $\Sym$ denotes the \emph{symmetric algebra.}
+\begin{definition} The local ring $(A, \mathfrak{m})$ is called \textbf{regular} if the above map is
+an isomorphism, or equivalently if the embedding dimension of $A$ is equal to
+the Krull dimension.
+\end{definition}
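+
+The basic example to keep in mind:
+\begin{example}
+Let $A = k[x_1, \dots, x_n]_{(x_1, \dots, x_n)}$, with maximal ideal
+$\mathfrak{m} = (x_1, \dots, x_n)$. The images of the $x_i$ form a basis of
+$\mathfrak{m}/\mathfrak{m}^2$, and $\gr(A) \simeq k[x_1, \dots, x_n]$ (graded
+by total degree), so \eqref{reglocringmap} is an isomorphism: $A$ is regular
+of dimension $n$.
+\end{example}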
+
+We want to show the ``equivalently'' in the definition is justified.
+One direction is easy: if \eqref{reglocringmap} is an isomorphism, then
+$\gr(A)$ is a polynomial ring with $\dim_k \mathfrak{m}/\mathfrak{m}^2$
+generators. But the dimension of $A$ was defined in terms of the growth of
+$\dim_k \mathfrak{m}^i/\mathfrak{m}^{i+1} = (\gr A)_i$.
+For a polynomial ring on $r$ generators, the $i$th graded piece has
+dimension $\binom{i+r-1}{r-1}$, a polynomial of degree $r-1$ in $i$ (easy
+verification); correspondingly, the partial sums $\ell(A/\mathfrak{m}^{i+1})$
+grow like a polynomial of degree $r$ in $i$, so $\dim A = r = \dim_k
+\mathfrak{m}/\mathfrak{m}^2$.
+As a result, we get the claim in one direction.
+
+However, we still have to show that if the embedding dimension equals the Krull
+dimension, then \eqref{reglocringmap} is an isomorphism. This will follow from
+the next lemma.
+
+\begin{lemma} If the inequality \[\dim(A) \leq
+\dim_{k}(\mathfrak{m}/\mathfrak{m}^2)\]
+is an equality, then \eqref{reglocringmap} is an isomorphism.
+\end{lemma}
+\begin{proof}
+Suppose \eqref{reglocringmap} is not an isomorphism.
+Then there is an element $f \in \Sym_k \mathfrak{m}/\mathfrak{m}^2$ which is
+not zero and which maps to zero in $\gr(A)$; we can assume $f$ homogeneous,
+since the map of graded rings is graded.
+
+
+Now the claim is that if $k[x_1, \dots, x_n]$ is a polynomial ring and $f \neq
+0$ a homogeneous element, then the Hilbert polynomial of $k[x_1, \dots,
+x_n]/(f)$ has strictly smaller degree than that of $k[x_1, \dots, x_n]$
+itself. This will easily imply the lemma: \eqref{reglocringmap} is always a
+surjection, so if it had a kernel, the graded pieces of $\gr(A)$ would grow
+strictly more slowly than those of $\Sym_k \mathfrak{m}/\mathfrak{m}^2$,
+forcing $\dim(A) < \dim_k \mathfrak{m}/\mathfrak{m}^2$.
+Now if $\deg f = d$, then multiplication by $f$ maps $(k[x_1, \dots,
+x_n])_{i-d}$ isomorphically onto $(f)_i$, so the dimension of $(k[x_1, \dots,
+x_n]/(f))_i$ (where $i$ is a degree) is $\dim (k[x_1, \dots, x_n])_i - \dim
+(k[x_1, \dots, x_n])_{i-d}$. It follows that if $P$ is the Hilbert polynomial
+of the polynomial ring, that of the quotient is $P(\cdot) - P(\cdot - d)$,
+which has a strictly smaller degree.
+\end{proof}
+
+We now would like to establish a few properties of regular local rings.
+
+Let $A$ be a local ring and $\hat{A}$ its completion. Then
+$\dim(A)=\dim(\hat{A})$, because
+$A/\mathfrak{m}^n=\hat{A}/\hat{\mathfrak{m}}^n$, so the Hilbert functions are
+the same. Similarly, $\gr(A)=\gr(\hat{A})$. Moreover, $\hat{A}$ is also a
+local ring. So applying the above lemma, we see:
+
+\begin{proposition}
+A noetherian local ring $A$ is regular if and only if its completion $\hat{A}$ is regular.
+\end{proposition}
+
+Regular local rings are well-behaved. We are eventually going to show that any
+regular local ring is in fact a unique factorization domain.
+Right now, we start with a much simpler claim:
+
+\begin{proposition} A regular local ring is a domain.
+\label{reg loc means domain}
+\label{regdomain}
+\end{proposition}
+This is a formal consequence of the fact that $\gr(A)$ is a domain and the
+filtration on $A$ is Hausdorff.
+\begin{proof} Let $a,b \in A$ be nonzero. Note that $\bigcap \mathfrak{m}^n=0$ by the
+Krull intersection theorem (\cref{krullint}), so there are $k_1$ and $k_2$ such that
+$a \in \mathfrak{m}^{k_1} - \mathfrak{m}^{k_1 + 1}$ and $b \in
+\mathfrak{m}^{k_2} - \mathfrak{m}^{k_2 + 1}$.
+Let $\overline{a}, \overline{b}$ be the images of $a,b$ in $\gr(A)$ (in
+degrees $k_1, k_2$); neither is
+zero.
+But then $\bar{a}\bar{b} \neq 0$ in $\gr(A)$, because $\gr(A)=\Sym(\mathfrak{m}/\mathfrak{m}^2)$ is a domain; and $\bar{a}\bar{b}$ is the image of $ab$ in degree $k_1+k_2$. So $ab \neq 0$, as desired.
+\end{proof}
+
+\begin{exercise}
+Prove more generally that if $A$ is a filtered ring with a descending
+filtration of ideals $I_1 \supset I_2 \supset \dots$ such that $\bigcap I_k =
+0$, and such that the associated graded algebra $\gr(A)$ is a domain, then $A$
+is itself a domain.
+\end{exercise}
+
+Later we will prove the aforementioned fact that a regular local ring is a factorial
+ring. One consequence of
+that will be the following algebro-geometric fact. Let $X = \spec
+\mathbb{C}[X_1, \dots, X_n]/I$ for some ideal $I$; so $X$ is basically a subset
+of $\mathbb{C}^n$ plus some nonclosed points. Then if $X$ is smooth, we find
+that $\mathbb{C}[X_1, \dots, X_n]/I$ is locally factorial. Indeed, smoothness
+implies regularity, hence local factoriality. The whole apparatus of Weil and
+Cartier divisors now kicks in.
+
+\begin{exercise}
+Nevertheless, it is possible to prove directly that a regular local ring $(A,
+\mathfrak{m})$ is
+\emph{integrally closed.}
+To do this, we shall use the fact that the associated graded $\gr(A)$ is
+integrally closed (as a polynomial ring).
+Here is the argument:
+\begin{enumerate}[a)]
+\item Let $C$ be a noetherian domain with quotient field $K$. Then $C$ is integrally closed if and
+only if for every $x \in K$ such that there exists $d \in C$ with $dx^n \in C$
+for all $n$, we have $x \in C$. (In general, this fails for $C$ non-noetherian;
+then this condition is called being \emph{completely integrally closed}.)
+\item Let $C$ be a noetherian domain. Suppose on $C$ there is an exhaustive
+filtration $\left\{C_v\right\}$ (i.e. such that $\bigcap C_v = 0$) and such
+that $\gr(C)$ is a \emph{completely} integrally closed domain. Suppose further that
+every principal ideal is closed in the topology on $C$ (i.e., for each
+principal ideal $I$, we have $I = \bigcap I + C_v$.) Then $C$ is integrally
+closed too. Indeed:
+\begin{enumerate}
+\item Suppose $a, b \in C$ are such that every power $(b/a)^n$ is contained in a finitely
+generated submodule of $K$, say $d^{-1}C$ for some $d \in C$. We need to show
+that $b \in Ca + C_v$ for all $v$. Write $b = xa + r$ for $r \in C_{w} -
+C_{w+1}$. We
+will show that ``$w$'' can be improved to $w+1$ (by changing $x$).
+To do this, it suffices to write $r \in Ca + C_{w+1}$.
+\item By hypothesis, $db^n \in Ca^n$ for all $n$. Consequently $dr^n \in Ca^n$
+for all $n$.
+\item Let $\overline{r}$ be the image of $r$ in $\gr(C)$ (in some possibly
+positive homogeneous degree; choose the unique one such that the image of $r$
+is defined and not zero). Choosing $\overline{d}, \overline{a}$ similarly, we
+get $\overline{d} \overline{r}^n$ lies in the ideal of $\overline{a}^n$ for all
+$n$. This implies $\overline{r}$ is a multiple of $\overline{a}$. Deduce that
+$r \in Ca + C_{w+1}$.
+\end{enumerate}
+\item The hypotheses of the previous part apply to a regular local ring, which
+is thus integrally closed.
+\end{enumerate}
+The essential part of this argument is explained in \cite{Bo68}, ch. 5, \S 1.4.
+The application to regular local rings is mentioned in \cite{EGA}, vol. IV, \S
+16.
+\end{exercise}
+
+
+We now give a couple of easy examples. More interesting examples will come in
+the future.
+Let $R$ be a noetherian local ring with maximal ideal $\mathfrak{m}$ and
+residue field $k$.
+
+\begin{example}
+If $\dim(R)=0$, i.e. $R$ is artinian, then $R$ is regular iff the maximal ideal
+is zero, i.e. if $R$ is a field.
+Indeed, the requirement for regularity is that $\dim_k \mathfrak{m}/\mathfrak{m}^2 = 0$, or
+$\mathfrak{m} = 0$ (by Nakayama). This implies that $R$ is a field.
+\end{example}
+
+Recall that $\dim_k \mathfrak{m}/\mathfrak{m}^2$ is the size of the minimal set
+of generators of the ideal $\mathfrak{m}$, by Nakayama's lemma. As a result, a
+local ring is regular if and only if the maximal ideal has a set of generators
+of the appropriate size. This is a refinement of the above remarks.
+
+\begin{example}
+If $\dim(R) =1$, then $R$ is regular iff the maximal ideal $\mathfrak{m}$ is
+principal (by the preceding observation).
+The claim is that this happens if and only if $R$ is a DVR. Certainly a DVR is
+regular, so only the other direction is interesting.
+But it is easy to see that a local \emph{domain} (which $R$ is, by
+\cref{regdomain}) whose maximal ideal is principal is a
+DVR (i.e.\ define the valuation of $x \neq 0$ as the largest $i$ such that $x
+\in \mathfrak{m}^i$).
+\end{example}
+We find:
+\begin{proposition}
+A one-dimensional regular local ring is the same thing as a DVR.
+\end{proposition}
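+
+The standard arithmetic example:
+\begin{example}
+$\mathbb{Z}_{(p)}$ is a one-dimensional regular local ring: its maximal ideal
+is generated by $p$ alone. The corresponding valuation is the $p$-adic
+valuation, writing each nonzero $x$ as $p^{v(x)} u$ with $u$ a unit.
+\end{example}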
+
+
+Finally, we extend the notion to general noetherian rings:
+\begin{definition}
+A general noetherian ring is called \textbf{regular} if every localization at a
+maximal ideal is a regular local ring.
+\end{definition}
+In fact, it turns out that if a noetherian ring is regular, then so are
+\emph{all} its localizations. This relies on the fact, to be proved in the
+distant future, that the localization of a regular local ring at a prime ideal is regular.
+\subsection{Quotients of regular local rings}
+
+We now study quotients of regular local rings.
+In general, if $(A, \mathfrak{m})$ is a regular local ring and $f_1, \dots, f_k \in
+\mathfrak{m}$, the quotient $A/(f_1, \dots, f_k)$ need not be regular.
+For instance, if $k$ is a field and $A$ is $k[x]_{(x)}$ (geometrically, this is
+the local ring of the affine line at the origin), then $A/x^2 =
+k[\epsilon]/\epsilon^2$ is not a regular local ring; it is not even a domain.
+In fact, the local ring of \emph{any} variety at a point is a \emph{quotient} of a
+regular local ring, and this is because any variety locally sits inside affine
+space.\footnote{Incidentally, the condition that a noetherian local ring $(A,
+\mathfrak{m})$ is a
+quotient of a regular local ring $(B, \mathfrak{n})$ imposes conditions on
+$A$: for instance, it has to be \emph{catenary.} As a result, one can obtain
+examples of local rings which cannot be expressed as quotients in this way.}
+
+\begin{proposition}
+Let $(A, \mathfrak{m}_A)$ be a regular local ring, and let $f \in
+\mathfrak{m}_A - \mathfrak{m}_A^2$. Then $A'=A/fA$ is also regular, of dimension $\dim(A)-1$.
+\label{reg loc mod f still reg loc}
+\end{proposition}
+\begin{proof} First let us show the dimension part of the statement. We know
+from \cref{dimdropsbyone} that the dimension has to drop precisely by one (since $f$ is a
+nonzerodivisor on $A$ by \cref{regdomain}).
+
+
+Now we want to show that $A' = A/fA$ is regular.
+Let $\mathfrak{m}_{A'} = \mathfrak{m}/fA$ be the maximal ideal of $A'$.
+Then we should show that
+$\dim_{A'/\mathfrak{m}_{A'}}(\mathfrak{m}_{A'}/\mathfrak{m}_{A'}^2)=\dim(A')$,
+and it suffices to see that \begin{equation} \label{randombnd}
+\dim_{A'/\mathfrak{m}_{A'}}(\mathfrak{m}_{A'}/\mathfrak{m}_{A'}^2) \leq
+\dim_{A/\mathfrak{m}_A}(\mathfrak{m}_{A}/\mathfrak{m}_A^2)-1.\end{equation}
+In other words, we have to show that the embedding dimension drops by at least one.
+
+
+Note that the residue fields $k=A/\mathfrak{m}_A, A'/\mathfrak{m}_{A'}$ are
+naturally isomorphic.
+To see \eqref{randombnd}, we use the natural surjection of $k$-vector spaces
+\[ \mathfrak{m}_A/\mathfrak{m}_A^2 \to \mathfrak{m}_{A'}/\mathfrak{m}_{A'}^2. \]
+Since the kernel is nontrivial (the class of $f$ lies in the kernel, and it is
+nonzero in $\mathfrak{m}_A/\mathfrak{m}_A^2$ because $f \notin \mathfrak{m}_A^2$), we
+obtain the inequality \eqref{randombnd}.
+\end{proof}
+
+
+
+\begin{corollary} \label{quotientreg44} Let $(A, \mathfrak{m})$ be a regular local ring, and consider elements $f_1, \ldots, f_m \in \mathfrak{m}$ such
+that $\bar{f}_1, \ldots, \bar{f}_m \in \mathfrak{m}/\mathfrak{m}^2$ are linearly independent. Then $A/(f_1, \ldots, f_m)$ is regular with $\dim(A/(f_1, \ldots, f_m))=\dim(A)-m$.
+\label{reg local mod fs still reg loc}
+\end{corollary}
+\begin{proof} This follows from \cref{reg loc mod f still reg loc} by induction. One just needs to check that in $A_1=A/(f_1)$, $\mathfrak{m}_1=\mathfrak{m}/(f_1)$, we have that $f_2, \ldots f_m$ are still linearly independent in $\mathfrak{m}_1/\mathfrak{m}_1^2$. This is easy to check.
+\end{proof}
+
+\begin{remark}
+In fact, note in the above result that each $f_i$ is a \emph{nonzerodivisor} on $A/(f_1, \dots,
+f_{i-1})$, because a regular local ring is a domain. We will later say that the
+$\left\{f_i\right\}$ form a \emph{regular sequence.}
+\end{remark}
+
+We can now obtain a full characterization of when a quotient of a regular local
+ring is still regular; it essentially states that the above situation is the
+only possible case. Geometrically, the intuition is that we are analyzing when
+a subvariety of a smooth variety is smooth; the answer is when the subvariety
+is cut out by functions with linearly independent images in the maximal ideal
+mod its square.
+
+This corresponds to the following fact: if $M$ is a smooth manifold and $f_1,
+\dots, f_m$ smooth functions such that the gradients $\left\{df_i\right\}$ are
+everywhere independent, then the common zero locus of the $\left\{f_i\right\}$
+is a smooth submanifold of $M$, and conversely every smooth submanifold of $M$
+locally looks like that.
+
+\begin{theorem} \label{quotientreg} Let $A_0$ be a regular local ring of dimension $n$, and
+let $I \subset A_0$ be a proper ideal. Let $A = A_0/I$.
+ Then the following are equivalent:
+\begin{enumerate}
+\item $A$ is regular.
+\item There are elements $f_1, \ldots, f_m \in I$, with $m=n-\dim(A)$, such that the images $\bar{f}_1, \ldots, \bar{f}_m$ are linearly independent in $\mathfrak{m}_{A_0}/\mathfrak{m}_{A_0}^2$ and $(f_1, \ldots, f_m)=I$.
+\end{enumerate}
+\label{reg loc main thm}
+\end{theorem}
+
+\begin{proof} \textbf{(2) $\Rightarrow$ (1)} This is exactly the statement of
+\cref{reg local mod fs still reg loc}.
+
+\noindent \textbf{(1) $\Rightarrow$ (2)}
+Let $k$ be the residue field of $A$ (or $A_0$, since $I$ is contained in the
+maximal ideal).
+We see that there is an exact sequence
+\[I \otimes_{A_0} k \to \mathfrak{m}_{A_0}/\mathfrak{m}_{A_0}^2 \to \mathfrak{m}_{A}/\mathfrak{m}_{A}^2 \to 0.\]
+We can obtain this by tensoring the exact sequence $I \to \mathfrak{m}_{A_0}
+\to \mathfrak{m}_{A} \to 0$ with $k$, noting that $\mathfrak{m}_{A_0}
+\otimes_{A_0} k = \mathfrak{m}_{A_0}/\mathfrak{m}_{A_0}^2$ and similarly for $\mathfrak{m}_A$.
+
+By assumption $A_0$ and $A$ are regular local, so
+\[\dim_{A_0/\mathfrak{m}_{A_0}}(\mathfrak{m}_{A_0}/\mathfrak{m}_{A_0}^2)=\dim(A_0)=n\]
+and
+\[\dim_{A_0/\mathfrak{m}_{A_0}}(\mathfrak{m}_{A}/\mathfrak{m}_{A}^2)=\dim(A)\]
+so the image of $I\otimes_{A_0} k$ in $\mathfrak{m}_{A_0}/\mathfrak{m}_{A_0}^2$
+has dimension $m=n-\dim(A)$. Let $\bar{f}_1, \ldots \bar{f}_m$ be a set of
+linearly independent generators of the image of $I
+\otimes_{A_0} k$ in $\mathfrak{m}_{A_0}/\mathfrak{m}_{A_0}^2$, and let $f_1, \ldots f_m$ be liftings to $I$.
+The claim is that the $\left\{f_i\right\}$ generate $I$.
+
+Let $I' \subset A_0$ be the ideal generated by $f_1, \ldots f_m$ and consider
+$A'=A_0/I'$. Then by \cref{reg local mod fs still reg loc}, we know that $A'$
+is a regular local ring with dimension $n-m=\dim(A)$. Also $I' \subset I$ so we
+have an exact sequence
+\[0 \to I/I' \to A' \to A \to 0\]
+But by \cref{reg loc means domain}, $A'$ is a domain, and we
+have just seen that it has the same dimension as $A$.
+Now if $I/I' \neq 0$, then $A$ would be a proper quotient of the domain $A'$, and hence of
+a \emph{smaller} dimension (because quotienting a domain by a nonzero ideal drops the
+dimension). This contradiction shows that $I = I'$, which means that $I$ is
+generated by the sequence $\left\{f_i\right\}$ as claimed.
+\end{proof}
+
+So the reason that $k[x]_{(x)}/(x^2)$ was not regular is that $x^2$ vanishes to
+too high an order: it lies in the square of the maximal ideal.
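+
+The criterion of \cref{quotientreg} also detects singularities; for instance:
+\begin{example}
+Let $A_0 = k[x,y]_{(x,y)}$ and $I = (y^2 - x^3)$, so that $A = A_0/I$ is the
+local ring at the origin of the cuspidal cubic. Here $n = 2$ and $\dim(A) = 1$,
+so regularity of $A$ would require an element of $I$ with nonzero image in
+$\mathfrak{m}_{A_0}/\mathfrak{m}_{A_0}^2$. But $y^2 - x^3 \in
+\mathfrak{m}_{A_0}^2$, hence $I \subset \mathfrak{m}_{A_0}^2$, and no such
+element exists: $A$ is not regular, matching the cusp at the origin.
+\end{example}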
+
+We can motivate the results above further with:
+\begin{definition}
+In a regular local ring $(R, \mathfrak{m})$, a \textbf{regular system of
+parameters} is a minimal system of generators for $\mathfrak{m}$, i.e. elements
+of $\mathfrak{m}$ that project to a basis of $\mathfrak{m}/\mathfrak{m}^2$.
+\end{definition}
+So a quotient of a regular local ring is regular if and only if the ideal is
+generated by part of a regular system of parameters.
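+
+For example:
+\begin{example}
+In $R = k[x_1, \dots, x_n]_{(x_1, \dots, x_n)}$, the elements $x_1, \dots, x_n$
+form a regular system of parameters, and the quotient by $(x_1, \dots, x_j)$ is
+the regular local ring $k[x_{j+1}, \dots, x_n]_{(x_{j+1}, \dots, x_n)}$. By
+contrast, $x_1^2$ is not part of any regular system of parameters, since its
+image in $\mathfrak{m}/\mathfrak{m}^2$ vanishes.
+\end{example}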
+
+\subsection{Regularity and smoothness}
+\newcommand{\maxspec}{\mathrm{MaxSpec}}
+
+We now want to connect the intuition (described in the past) that, in the
+algebro-geometric context, regularity of a local ring corresponds to smoothness
+of the associated variety (at that point).
+
+Namely, let $R$ be the (reduced) coordinate ring $\mathbb{C}[x_1, \dots, x_n]/I$ of an algebraic
+variety. Let $\mathfrak{m}$ be the maximal ideal corresponding to the origin,
+i.e.\ the image of $(x_1, \dots, x_n)$. Suppose $I \subset (x_1, \dots, x_n)$, which is
+to say the origin belongs to the corresponding variety.
+Then $\maxspec R \subset \spec R$ is the corresponding subvariety of $\mathbb{C}^n$, which is
+what we apply the intuition to. Note that $0$ is in this subvariety.
+
+Then we claim:
+
+\begin{proposition}
+$R_{\mathfrak{m}}$ is regular iff $\maxspec R$ is a smooth submanifold of $\mathbb{C}^n$ near $0$.
+\end{proposition}
+\begin{proof}
+We will show that regularity implies smoothness. The other direction is
+omitted for now.
+
+Note that $S = \mathbb{C}[x_1, \dots, x_n]_{\mathfrak{m}}$ is clearly a regular
+local ring of dimension $n$ ($\mathbb{C}^n$ is smooth, intuitively), and $R_{\mathfrak{m}}$ is the quotient $S/I$. By
+\cref{quotientreg}, we have a good criterion for when $R_{\mathfrak{m}}$ is
+regular.
+Namely, it is regular if and only if $I$ is generated by elements (without loss
+of generality, polynomials) $f_1, \dots, f_k$ whose images in
+the quotient $\mathfrak{m}_S/\mathfrak{m}_S^2$ are linearly independent (where
+we write $\mathfrak{m}_S$ to emphasize that this is the maximal ideal of $S$).
+
+But we
+know that this ``cotangent space'' corresponds to cotangent vectors in $\mathbb{C}^n$, and in
+particular, we can say the following. There are elements $\epsilon_1, \dots,
+\epsilon_n \in \mathfrak{m}_S/\mathfrak{m}_S^2$ that form a basis for this
+space (namely, the images of $x_1, \dots, x_n \in \mathfrak{m}_S$). If $f$ is a
+polynomial vanishing at the origin, then the image of $f$ in
+$\mathfrak{m}_S/\mathfrak{m}_S^2$ retains only the linear terms---that is, it can
+be identified with
+\[ \sum \frac{\partial f}{\partial x_i}|_{0} \epsilon_i, \]
+which is essentially the gradient of $f$.
+
+It follows that $R_{\mathfrak{m}}$ is regular if and only if $I$ is generated
+(in $R_{\mathfrak{m}}$, so we should really say $IR_{\mathfrak{m}}$)
+by a family of polynomials vanishing at zero with linearly independent
+gradients, or if the variety is cut out by the vanishing of such a family of
+polynomials. However, we know that this implies that the variety is locally a
+smooth manifold (by the inverse function theorem).
+\end{proof}
+
+The other direction is a bit trickier, and will require a bit of ``descent.''
+For now, we omit it. But we have shown \emph{something} in both directions: the
+ring $R_{\mathfrak{m}}$ is regular if and only if $I$ is generated
+locally (i.e., in $R_{\mathfrak{m}}$) by a family of polynomials with linearly
+independent gradients. Hartshorne uses this as the definition of smoothness in
+\cite{Ha77}, and thus obtains the result that a variety over an algebraically
+closed field (not necessarily $\mathbb{C}$!) is smooth if and only if its local rings are regular.
+
+\begin{remark}[Warning] This argument shows that if $R \simeq K[x_1, \dots,
+x_n]/I$ for $K$ algebraically closed and $\mathfrak{m}$ is a maximal ideal,
+then $R_{\mathfrak{m}}$ is regular precisely when the corresponding algebraic variety is smooth at the
+corresponding point. We proved this in the special case $K = \mathbb{C}$ and
+$\mathfrak{m}$ the ideal of the origin.
+
+If $K$ is not algebraically closed, we \textbf{can't assume} that any maximal
+ideal corresponds to a point in the usual sense. Moreover, if $K$ is not
+perfect, regularity does \textbf{not} imply smoothness. We have not quite
+defined smoothness, but here's a definition: smoothness means that the local
+ring you get by base-changing $K$ to the algebraic closure is regular. So what
+this means is that
+regularity of affine rings over a field $K$ is not preserved under
+base-change from $K$ to $\overline{K}$.
+\end{remark}
+
+\begin{example} Let $K$ be non-perfect of characteristic $p$, and let $a \in K$
+have no $p$th root (e.g.\ $K = \mathbb{F}_p(t)$ and $a = t$).
+Consider $K[x]/(x^p -a)$. This is a regular local ring of dimension zero, i.e.
+is a field. If $K$ is replaced by its algebraic closure, then we get
+$\overline{K}[x]/(x^p - a)$, which is $\overline{K}[x]/(x- a^{1/p})^p$. This is
+still zero-dimensional but is not a field. Over the algebraic closure, the ring
+fails to be regular.
+\end{example}
+
+
+
+\subsection{Regular local rings look alike}
+So, as we've seen, regularity corresponds to smoothness. Complex analytically,
+all smooth points are the same though---they're locally $\mathbb{C}^n$.
+Manifolds have no local invariants.
+We'd like
+an algebraic version of this. The vague
+claim is that all regular local rings of the same dimension ``look alike.''
+We have already seen one instance of this phenomenon: a regular local
+ring's associated graded is uniquely determined by its dimension (as a
+polynomial ring). This was in fact how we defined the notion, in part.
+Now we would like to transfer this to statements about things
+closer to $R$.
+
+Let $(R, \mathfrak{m})$ be a regular local ring.
+\textbf{Assume now for simplicity that the residue field $k=R/\mathfrak{m}$
+maps back into $R$.} In other words, $R$ contains a copy of its residue field,
+or there is a section of $R \to k$. This is always true in the case we
+use for geometric intuition---complex algebraic geometry---as the
+residue field at any maximal ideal is just $\mathbb{C}$ (by the
+Nullstellensatz), and one works with $\mathbb{C}$-algebras.
+
+Choose generators $y_1, \dots, y_n \in
+\mathfrak{m}$ where $n = \dim_k \mathfrak{m}/\mathfrak{m}^2$ is the embedding
+dimension. We get a map in the other direction
+\[ \phi:k[Y_1, \dots, Y_n] \to R, \quad Y_i \mapsto y_i, \]
+thanks to the section $k \to R$. This map from the polynomial ring is not
+an isomorphism (the polynomial ring is not local), but if we let $\mathfrak{m} \subset R$ be
+the maximal ideal and $\mathfrak{n} = (Y_1, \dots, Y_n) \subset k[Y_1, \dots, Y_n]$, then
+the map on associated gradeds is an isomorphism, by regularity. That is, $\phi:
+\mathfrak{n}^t/\mathfrak{n}^{t+1} \to \mathfrak{m}^t/\mathfrak{m}^{t+1}$ is an
+isomorphism for each $t \in \mathbb{Z}_{\geq 0}$.
+
+Consequently, $\phi$ induces an isomorphism
+\[ k[Y_1, \dots,Y_n]/\mathfrak{n}^t \simeq R/\mathfrak{m}^t \]
+for all $t$, because it is an isomorphism on the associated graded level.
+So this in turn is equivalent, upon taking inverse limits, to the statement that
+$\phi$ induces an isomorphism
+\[ k[[Y_1, \dots, Y_n ]] \to \hat{R} \]
+at the level of completions.
+
+We can now conclude:
+\begin{theorem}
+Let $R$ be a regular local ring of dimension $n$. Suppose $R$ contains a copy
+of its residue field $k$. Then, as $k$-algebras, $\hat{R} \simeq k[[Y_1, \dots, Y_n]]$.
+\end{theorem}
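+
+For instance:
+\begin{example}
+The local ring $\mathbb{C}[x_1, \dots, x_n]_{(x_1, \dots, x_n)}$ of affine
+$n$-space at the origin is regular of dimension $n$ and contains its residue
+field $\mathbb{C}$; the theorem recovers the familiar fact that its completion
+is the formal power series ring $\mathbb{C}[[x_1, \dots, x_n]]$. The local
+ring at a smooth point of any $n$-dimensional complex variety has the same
+completion, even though the local rings themselves need not be isomorphic.
+\end{example}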
+
+Finally:
+\begin{corollary}
+A complete noetherian regular local ring that contains a copy of its residue
+field $k$ is a power series ring over $k$.
+\end{corollary}
+
+It now makes sense to say:
+\begin{quote}
+\textbf{All \emph{complete} regular local rings of the same dimension look
+alike.} (More precisely, this is true when $R$ is assumed to contain a copy of
+its residue field, but this is not a strong assumption in practice. One can
+show that this will be satisfied if $R$ contains \emph{any}
+field.\footnote{This is not always satisfied---take the $p$-adic integers, for instance.})
+\end{quote}
+
+We won't get into the precise statement of the general structure theorem, when
+the ring is not assumed to contain its residue field, but a safe
+intuition to take away from this is the above bolded statement.
+Note that ``looking alike'' requires the completeness, because completions are
+intuitively like taking analytically local invariants (while localization
+corresponds to working \emph{Zariski} locally, which is much weaker).
+
+
+\section{K\"ahler differentials}
+\subsection{Derivations and K\"ahler differentials} Let $R$ be a ring with a maximal ideal
+$\mathfrak{m}$. Then there is an $R/\mathfrak{m}$-vector space
+$\mathfrak{m}/\mathfrak{m}^2$. This is what we would like to think of as the
+``{cotangent space}'' of $\spec R$ at $\mathfrak{m}$. Intuitively, the
+cotangent space is what you get by differentiating functions which vanish at
+the point, but
+differentiating functions that vanish twice should give zero. This is the moral
+justification.
+(Recall that on a smooth manifold $M$, if $\mathcal{O}_p$ is the local ring of
+smooth functions defined in a neighborhood of $p \in M$, and $\mathfrak{m}_p
+\subset \mathcal{O}_p$ is the maximal ideal consisting of ``germs'' vanishing
+at $p$, then the cotangent space $T_p^* M$ is naturally
+$\mathfrak{m}_p/\mathfrak{m}_p^2$.)
+
+A goal might be to generalize this. What if you wanted to think about all
+points at once? We'd like to describe the ``cotangent bundle'' to $\spec R$ in
+an analogous way. Let's try and describe what would be a section to this
+cotangent bundle. A section of $\Omega^*_{\spec R}$ should be the same
+thing as a ``1-form'' on $\spec R$. We don't know what a 1-form is yet, but at
+least we can give some examples. If $f \in R$, then $f$ is a ``function'' on
+$\spec R$, and its ``differential'' should be a 1-form. So there should be a
+``$df$'' which should be a 1-form.
+This is analogous to the fact that if $g$ is a real-valued function on the
+smooth manifold $M$, then there is a 1-form $dg$.
+
We should expect the rules $d(f+g)= df+dg$ and $d(fg) = f(dg) + g(df)$ as the
+usual rules of differentiation. For this to make sense, 1-forms should be an
+$R$-module.
+Before defining the appropriate object, we start with:
+
+\begin{definition}
+Let $R$ be a commutative ring, $M$ an $R$-module. A \textbf{derivation} from
+$R$ to $M$ is a map $D: R \to M$ such that the two identities below hold:
\begin{gather} D(f+g)= Df + Dg \\
+ D(fg) = f(Dg) + g(Df). \end{gather}
+\end{definition}
+These equations make sense as $M$ is an $R$-module.
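
Before proceeding, here is the basic example to keep in mind (a standard one, easily verified):
\begin{example}
Let $R = k[x]$ for a field $k$, let $M = R$, and let $D = \frac{d}{dx}$ be
formal differentiation of polynomials. The two identities above are the usual
rules of calculus, so $D$ is a derivation. Note that a derivation is generally
\emph{not} $R$-linear: here $D(x \cdot 1) = 1$, while $x \cdot D(1) = 0$.
\end{example}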
+
+Whatever a 1-form on $\spec R$ might be, there should be a derivation
\[ d: R \to \left\{\text{1-forms}\right\}. \]
+An idea would be to \emph{define} the 1-forms or the ``cotangent bundle''
+$\Omega_R$ by a
+universal property. It should be universal among $R$-modules with a derivation.
+
+To make this precise:
+\begin{proposition}
+There is an $R$-module $\Omega_R$ and a derivation $d_{\mathrm{univ}} : R \to
+\Omega_R$ satisfying the following universal property. For all $R$-modules
+$M$, there is a canonical isomorphism
+\[ \hom_{R}(\Omega_R, M) \simeq \mathrm{Der}(R, M) \]
+given by composing the universal $d_{\mathrm{univ}}$ with a map $\Omega_R \to M$.
+\end{proposition}
+
+That is, any derivation $d: R \to M$ factors through this universal derivation
+in a unique way. Given the derivation $d: R \to M$, we can make the following diagram
+commutative in a unique way such that $\Omega_R \to M$ is a morphism of
+$R$-modules:
+\[
+\xymatrix{
+R \ar[r]^d \ar[d] & M \\
+\Omega_R \ar[ru]^{d_{\mathrm{univ}}}
+}
+\]
+
+\begin{definition}
+$\Omega_R$ is called the module of \textbf{K\"ahler differentials} of $R$.
+\end{definition}
+
+Let us now verify this proposition.
+\begin{proof}
+This is like the verification of the tensor product. Namely, build a free
+gadget and quotient out to enforce the desired relations.
+
+Let $\Omega_R$ be the quotient of the free $R$-module generated by elements
+$da$ for $a \in R$ by enforcing the relations
+\begin{enumerate}
+\item $d(a+b) =da + db$.
+\item $d(ab) = adb + bda$.
+\end{enumerate}
+By construction, the map $a \to da$ is a derivation $R \to \Omega_R$.
+It is easy to see that it is universal. Given a derivation $d': R \to M$, we get a
+map $\Omega_R \to M$ sending $da \to d'(a), a \in R$.
+\end{proof}
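
As a sanity check on the universal property, one can compute a trivial case directly:
\begin{example}
$\Omega_{\mathbb{Z}} = 0$. Indeed, for any abelian group $M$, a derivation $D:
\mathbb{Z} \to M$ satisfies $D(1) = D(1 \cdot 1) = 1 \cdot D(1) + 1 \cdot D(1)
= 2D(1)$, so $D(1) = 0$; additivity then forces $D(n) = nD(1) = 0$ for all $n$.
Thus $\mathrm{Der}(\mathbb{Z}, M) = 0$ for every $M$, and hence
$\Omega_{\mathbb{Z}} = 0$ by the universal property.
\end{example}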
+
+The philosophy of Grothendieck says that we should do this, as with everything,
+in a relative context.
+Indeed, we are going to need a slight variant, for the case of a \emph{morphism} of
+rings.
+
+\subsection{Relative differentials}
+
+On a smooth manifold $M$, the derivation $d$ from smooth functions to 1-forms
+satisfies an additional property: it maps the constant functions to zero.
+This is the motivation for the next definition:
+
+\begin{definition}
+Let $f: R \to R'$ be a ring-homomorphism. Let $M$ be an $R'$-module. A
derivation $d: R' \to M$ is \textbf{$R$-linear} if $d(f(a)) = 0$ for all $a \in R$.
By the Leibniz rule, this is equivalent to saying that $d$ is a homomorphism of
$R$-modules.
+\end{definition}
+
+Now we want to construct an analog of the ``cotangent bundle'' taking into
+account linearity.
+
+\begin{proposition}
+Let $R'$ be an $R$-algebra.
+Then there is a universal $R$-linear derivation $R'
+\stackrel{d_{\mathrm{univ}}}{\to} \Omega_{R'/R}$.
+\end{proposition}
+\begin{proof}
+Use the same construction as in the absolute case. We get a map $R' \to
+\Omega_{R'}$ as before. This is not generally $R$-linear, so one has to
+quotient out by the images of $d(f(r)), r \in R$.
+In other words, $\Omega_{R'/R}$ is the quotient of the free $R'$-module on
+symbols $\left\{dr', r' \in R'\right\}$ with the relations:
+\begin{enumerate}
+\item $d(r_1' r_2') = r'_1 d(r_2') + d(r'_1) r_2'$.
+\item $d(r_1' + r_2') = dr_1' + dr_2'$.
+\item $dr = 0$ for $r \in R$ (where we identify $r$ with its image $f(r)$ in
+$R'$, by abuse of notation).
+\end{enumerate}
+\end{proof}
+
+\begin{definition}
+$\Omega_{R'/R}$ is called the module of \textbf{relative K\"ahler
+differentials,} or simply K\"ahler differentials.
+\end{definition}
+
+Here $\Omega_{R'/R}$ also corepresents a simple functor on the category of
+$R'$-modules: given an $R'$-module $M$, we have
+\[ \hom_{R'}(\Omega_{R'/R}, M) = \mathrm{Der}_R(R', M), \]
+where $\mathrm{Der}_R$ denotes $R$-derivations.
This is a \emph{subfunctor} of the functor $\mathrm{Der}(R', \cdot)$ of
\emph{all} derivations, and so by Yoneda's lemma there is a natural map
$\Omega_{R'} \to \Omega_{R'/R}$.
+We shall expand on this in the future.
+
+\subsection{The case of a polynomial ring}
+Let us do a simple example to make this more concrete.
+
+\begin{example} \label{polynomialringdiff}
+Let $R' = \mathbb{C}[x_1, \dots, x_n], R = \mathbb{C}$. In this case, the claim
+is that there is an isomorphism
+\[ \Omega_{R'/R} \simeq R'^n. \]
+More precisely, $\Omega_{R'/R}$ is free on $dx_1, \dots,dx_n$. So the cotangent
+bundle is ``free.'' In general, the module $\Omega_{R'/R}$ will not be free, or
+even projective, so the intuition that it is a vector bundle will be rather
+loose. (The projectivity will be connected to \emph{smoothness} of $R'/R$.)
+
+\begin{proof}
+The construction $f \to \left( \frac{\partial f}{\partial x_i} \right)$ gives
+a map $R' \to R'^n$. By elementary calculus, this is a derivation, even an
+$R$-linear derivation. We get a map
+\[ \phi:\Omega_{R'/R} \to R'^n \]
+by the universal property of the K\"ahler differentials. The claim is that this
+map is an isomorphism. The map is characterized by sending $df$ to $\left(
+\frac{\partial f}{\partial x_i}\right)$. Note that $dx_1, \dots, dx_n$ map to a
+basis of $R'^n$ because differentiating $x_i$ gives 1 at $i$ and zero at $j
+\neq i$. So we see that $\phi$ is surjective.
+
+There is a map $\psi: R'^n \to \Omega_{R'/R}$ sending $\left(a_i \right)$ to
+$\sum a_i dx_i$. It is easy to check that $\phi \circ \psi = 1$ from the
+definition of $\phi$. What we still need to show is that $\psi \circ \phi =1$.
+Namely, for any $f$, we want to show that $\psi \circ \phi(df) = df$ for $f \in
+R'$. This is precisely the claim that $df = \sum \frac{\partial f}{\partial
+x_i} dx_i$. Both sides are additive in $f$, indeed are derivations, and
+coincide on monomials of degree one, so they are equal.
+\end{proof}
+
+\end{example}
+
+By the same reasoning, one can show more generally:
+\begin{proposition}
+If $R$ is any ring, then there is a canonical isomorphism
+\[ \Omega_{R[x_1, \dots, x_n]/R} \simeq \bigoplus_{i=1}^n R[x_1, \dots, x_n]
+dx_i, \]
+i.e. it is $R[x_1, \dots, x_n]$-free on the $dx_i$.
+\end{proposition}
+
+This is essentially the claim that, given an $R[x_1, \dots, x_n]$-module $M$
+and elements $m_1, \dots, m_n \in M$, there is a \emph{unique} $R$-derivation
+from the polynomial ring into $M$ sending $x_i\mapsto m_i$.
+\subsection{Exact sequences of K\"ahler differentials}
+We now want to prove a few basic properties of K\"ahler differentials, which
+can be seen either from the explicit construction or in terms of the functors
+they represent, by formal nonsense.
+These results will be useful in computation.
+
+ Recall that if
+$\phi: A \to B$ is a map of rings, we can define a $B$-module
+\( \Omega_{B/A}\) which is generated by formal symbols $ dx|_{x \in
+B}$ and subject to the relations $d(x+y) = dx+dy$, $d(a)=0, a \in A$,
+and $d(xy) = xdy + ydx$.
By construction, $\Omega_{B/A}$ is the receptacle for the universal $A$-linear
+derivation into a $B$-module.
+
Let $A \to B \to C$ be a triple of maps of rings. There is an obvious map
\[ \Omega_{C/A} \to \Omega_{C/B}, \quad dx \to dx, \]
since both sides have the same generators, except that $\Omega_{C/B}$ has a few
additional relations: one quotients by the $db$ for $b \in B$. There is also a
map $\Omega_{B/A} \to \Omega_{C/A}$, $dx \to dx$, which induces a map
\[ C \otimes_B \Omega_{B/A} \to \Omega_{C/A}. \]
Its image is the $C$-submodule generated by $db|_{b \in B}$, and this is
precisely the kernel of the map $\Omega_{C/A} \to \Omega_{C/B}$.
We have proved:
+\begin{proposition}[First exact sequence] \label{firstexactseq} Given a sequence $A \to B \to C$ of rings, there is an exact sequence
+\[ C \otimes_B \Omega_{B/A} \to \Omega_{C/A} \to \Omega_{C/B} \to 0 .\]
+\end{proposition}
+\begin{proof}[Second proof]
+There is, however, a more functorial means of seeing this sequence, which we
+now describe.
+Namely, let us consider the category of $C$-modules, and the functors
+corepresented by these three objects. We have, for a $C$-module $M$:
+\begin{gather*}
+\hom_C(\Omega_{C/B}, M) = \mathrm{Der}_B(C, M) \\
+\hom_C(\Omega_{C/A}, M) = \mathrm{Der}_A(C, M) \\
+\hom_C(C \otimes_B \Omega_{B/A}, M) = \hom_B(\Omega_{B/A}, M) = \mathrm{Der}_A(B, M).
+\end{gather*}
+By Yoneda's lemma, we know that a map of modules is the same thing as a natural
+transformation between the corresponding corepresentable functors, in the
+reverse direction.
+It is easy to see that there are natural transformations
+\[ \mathrm{Der}_B(C, M) \to \mathrm{Der}_A(C, M), \quad \mathrm{Der}_A(C, M) \to \mathrm{Der}_A(B, M) \]
+obtained by restriction in the second case, and by doing nothing in the first
+case (a $B$-derivation is automatically an $A$-derivation).
+The induced maps on the modules of differentials are precisely those described
+before; this is easy to check (and we could have defined the maps by these
+functors if we wished). Now to say that the sequence is right exact is to say
+that for each $M$, there is an exact sequence of abelian groups
+\[ 0 \to \mathrm{Der}_B(C, M) \to \mathrm{Der}_A(C, M) \to \mathrm{Der}_A(B, M). \]
+But this is obvious from the definitions: an $A$-derivation is a $B$-derivation
+if and only if the restriction to $B$ is trivial.
+This establishes the claim.
+\end{proof}
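
To see the first exact sequence in a concrete case, consider polynomial rings (a quick check one can do by hand):
\begin{example}
Take $A = k$, $B = k[x]$, $C = k[x,y]$. The first exact sequence reads
\[ k[x,y] \otimes_{k[x]} \Omega_{k[x]/k} \to \Omega_{k[x,y]/k} \to
\Omega_{k[x,y]/k[x]} \to 0, \]
that is,
\[ k[x,y]\, dx \to k[x,y]\, dx \oplus k[x,y]\, dy \to k[x,y]\, dy \to 0, \]
where the first map is the inclusion and the second kills $dx$. In this case
the first map happens to be injective as well, though that fails in general.
\end{example}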
+
+
+
+Next, we are interested in a second exact sequence. In the past
+(\cref{polynomialringdiff}), we computed the module of K\"ahler differentials
+of a \emph{polynomial} algebra. While this was a special case, any algebra is a
+quotient of a polynomial algebra. As a result, it will be useful to know how
+$\Omega_{B/A}$ behaves with respect to quotienting $B$.
+
+ Let $A
+\to B$ be a homomorphism of rings and $I \subset B $ an ideal. We would like
+to describe $\Omega_{B/I/A}$. There is a map
+\[ \Omega_{B/A} \to \Omega_{B/I/A} \]
+sending $dx$ to $d \overline{x}$ for $\overline{x}$ the reduction of $x$ in
$B/I$. This is surjective on generators, so it is surjective. It is not
injective, though. In $\Omega_{B/I/A}$, the generators $dx, dx'$ are identified
if $x \equiv x' \mod I$. Moreover, $\Omega_{B/I/A}$ is a
$B/I$-module, while $\Omega_{B/A}$ is only a $B$-module, so there will be
additional relations coming from the module structure. To account for this, we
can tensor with $B/I$ and consider the surjection
+\[ \Omega_{B/A} \otimes_B B/I \to \Omega_{B/I/A} \to 0. \]
+
+Let us now define a map
+\[ \phi: I /I^2 \to \Omega_{B/A} \otimes_B B/I, \]
whose image we claim generates the kernel. Given $x \in I$, we define $\phi(x) =
dx$. If $x \in I^2$, then $dx \in I \Omega_{B/A}$, so $\phi$ is indeed a
well-defined map of abelian groups
$I/I^2 \to \Omega_{B/A} \otimes_B B/I$. Let us check that this is a
$B/I$-module homomorphism. We would like to check that $\phi(xy) = y \phi(x)$
for $x \in I, y \in B$ in
$\Omega_{B/A}/I \Omega_{B/A}$. This follows from the Leibniz rule: $\phi(xy) =
y\, dx + x\, dy \equiv y \phi(x) \mod I \Omega_{B/A}$, as $x \in I$. So $\phi$
is a map of $B/I$-modules. Its image is the submodule of $\Omega_{B/A}/I \Omega_{B/A}$ generated
by $dx, x \in I$. This is precisely what one has to quotient out by to get
$\Omega_{B/I/A}$. In particular:
+
+\begin{proposition}[Second exact sequence] Let $B$ be an $A$-algebra and $I \subset B$ an ideal.
+There is an exact sequence
+\[ I/I^2 \to \Omega_{B/A} \otimes_B B/I \to \Omega_{B/I/A} \to 0. \]
+\end{proposition}
+
+These results will let us compute the module of K\"ahler differentials in cases
+we want.
+
+\begin{example}
+Let $B = A[x_1, \dots, x_n]/I$ for $I$ an ideal. We will compute $\Omega_{B/A}$.
+
+First, $\Omega_{A[x_1, \dots, x_n]/A} \otimes B \simeq B^n$ generated by
+symbols $dx_i$. There is a surjection of
+\[ B^n \to \Omega_{B/A} \to 0 \]
+whose kernel is generated by $dx, x \in I$, by the second exact sequence above.
+If $I = (f_1, \dots, f_m)$, then the kernel is generated by
+$\left\{df_i\right\}$.
+It follows that $\Omega_{B/A}$ is the cokernel of the map
+\[ B^m \to B^n \]
+that sends the $i$th generator of $B^m$ to $df_i$ thought of as an element in
+the free $B$-module $B^n$ on generators $dx_1, \dots, dx_n$. Here, thanks to
the Leibniz rule, $df_i$ is
+given by formally differentiating the polynomial, i.e.
+\[ df_i = \sum_j \frac{\partial f_i}{\partial x_j} dx_j. \] We have thus
+explicitly represented $\Omega_{B/A}$ as the cokernel of the matrix $\left(
+\frac{\partial f_i}{\partial x_j}\right)$.
+\end{example}
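
For instance, we can apply this recipe to the cuspidal cubic (a standard worked case, checked directly from the formula above):
\begin{example}
Let $B = \mathbb{C}[x,y]/(y^2 - x^3)$, so $m = 1$, $n = 2$, and $f_1 = y^2 -
x^3$. Then $\Omega_{B/\mathbb{C}}$ is the cokernel of
\[ B \to B^2, \quad 1 \mapsto \left( -3x^2,\ 2y \right), \]
i.e.\ $\Omega_{B/\mathbb{C}} = (B\, dx \oplus B\, dy)/B(2y\, dy - 3x^2\, dx)$.
The relation $2y\, dy = 3x^2\, dx$ degenerates at the origin, where the cubic
has its cusp.
\end{example}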
+
+In particular, the above example shows:
+\begin{proposition}
+If $B$ is a finitely generated $A$-algebra, then $\Omega_{B/A}$ is a finitely
+generated $B$-module.
+\end{proposition}
+Given how $\Omega$ behaves with respect to localization, we can extend this to
+the case where $B$ is \emph{essentially} of finite type over $A$ (recall that
+this means $B$ is a localization of a finitely generated $A$-algebra).
+
+Let $R = \mathbb{C}[x_1, \dots, x_n]/I$ be the coordinate ring of an algebraic
variety. Let $\mathfrak{m} \subset R$ be a maximal ideal. Then
$\Omega_{R/\mathbb{C}}$ is what one should think of as containing the information
of the cotangent bundle of $\spec R$. One might ask what the \emph{fiber} over a point
$\mathfrak{m} \in \spec R$ is, though. That is, we might ask what
+\( \Omega_{R/\mathbb{C}} \otimes_R R/\mathfrak{m} \)
+is. To see this, we note that there are maps
+\[ \mathbb{C} \to R \to R/\mathfrak{m} \simeq \mathbb{C}. \]
+There is now an exact sequence by
+\cref{firstexactseq}
+\[ \mathfrak{m}/\mathfrak{m}^2 \to \Omega_{R/\mathbb{C}} \otimes_R
R/\mathfrak{m} \to \Omega_{R/\mathfrak{m}/\mathbb{C}} \to 0, \]
+where the last thing is zero as $R/\mathfrak{m} \simeq \mathbb{C} $ by the
+Nullstellensatz.
+The upshot is that $\Omega_{R/\mathbb{C}} \otimes_R R/\mathfrak{m}$ is a
+quotient of $\mathfrak{m}/\mathfrak{m}^2$.
+
+In fact, the natural map $\mathfrak{m}/\mathfrak{m}^2 \to \Omega_{R/\mathbb{C}}
+\otimes_R \mathbb{C}$ (given by $d$) is an \emph{isomorphism} of
+$\mathbb{C}$-vector spaces. We have seen
+that it is surjective, so we need to see that it is injective.
+That is, if $V$ is a $\mathbb{C}$-vector space, then we need to show that the
+map
+\[ \hom_{\mathbb{C}}(\Omega_{R/\mathbb{C}}\otimes_R \mathbb{C}, V) \to
+\hom_{\mathbb{C}}(\mathfrak{m}/\mathfrak{m}^2, V) \]
+is surjective. This means that given any $\mathbb{C}$-linear map
+$\lambda: \mathfrak{m}/\mathfrak{m}^2 \to V$, we can extend this to a derivation $R \to
+V$ (where $V$ becomes an $R$-module by $R/\mathfrak{m} \simeq \mathbb{C}$, as
+usual).
+But this is easy: given $f \in R$, we write $f = f_0 + c$ for $c \in
+\mathbb{C}$ and $f_0 \in \mathfrak{m}$, and have the derivation send $f$ to
+$\lambda(f_0)$.
+(Checking that this is a well-defined derivation is straightforward.)
+
+This goes through if $\mathbb{C}$ is replaced by any algebraically closed field.
+We have found:
+\begin{proposition}
+Let $(R, \mathfrak{m}) $ be the localization of a finitely generated algebra
+over an algebraically closed field $k$ at a maximal ideal $\mathfrak{m}$. Then
+there is a natural isomorphism:
+\[ \Omega_{R/k} \otimes_R k \simeq \mathfrak{m}/\mathfrak{m}^2. \]
+\end{proposition}
+
+This result connects the K\"ahler differentials to the cotangent bundle: the
+fiber of the cotangent bundle at a point in a manifold is, similarly, the maximal ideal
+modulo its square (where the ``maximal ideal'' is the maximal ideal in the ring
+of germs of functions at that point).
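
To illustrate how these fibers detect singularities, we can look at the cuspidal cubic (a sketch; the dimension counts are easily verified by hand):
\begin{example}
Let $R = \mathbb{C}[x,y]/(y^2 - x^3)$. At the origin $\mathfrak{m} = (x,y)$,
the relation $y^2 - x^3$ lies in $(x,y)^2$, so it imposes no condition and
$\mathfrak{m}/\mathfrak{m}^2$ is two-dimensional, spanned by $x, y$. At a
smooth point, say $\mathfrak{m}' = (x-1, y-1)$, expanding $y^2 - x^3$ in the
local coordinates $u = x-1, v = y-1$ gives the linear term $2v - 3u$ modulo
$\mathfrak{m}'^2$, so $\mathfrak{m}'/\mathfrak{m}'^2$ is one-dimensional. The
fiber of $\Omega_{R/\mathbb{C}}$ thus jumps from dimension one to dimension two
at the singular point.
\end{example}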
+\subsection{K\"ahler differentials and base change}
+
+We now want to show that the formation of $\Omega$ is compatible with base
+change. Namely, let $B$ be an $A$-algebra, visualized by a morphism $ A \to B$.
+If $A \to A'$ is any morphism of rings, we can think of the \emph{base-change}
+$A' \to A' \otimes_A B$; we often write $B' = A' \otimes_A B$.
+
+\begin{proposition} \label{basechangediff} With the above notation, there is a canonical isomorphism
+of $B'$-modules:
+\[ \Omega_{B/A} \otimes_A A' \simeq \Omega_{B'/A'}. \]
+\end{proposition}
+Note that, for a $B$-module, the functors $\otimes_A A'$ and $\otimes_B B'$ are
the same. So we could just as well have written $\Omega_{B/A} \otimes_B B' \simeq
+\Omega_{B'/A'}$.
+\begin{proof}
+We will use the functorial approach. Namely, for a $B'$-module $M$, we will
+show that there is a canonical isomorphism
+\[ \hom_{B'}( \Omega_{B/A} \otimes_A A', M) \simeq
+\hom_{B'}( \Omega_{B'/A'}, M) .
+\]
+The right side represents $A'$-derivations $B' \to M$, or $\mathrm{Der}_{A'}(B', M)$.
+The left side represents $\hom_B(\Omega_{B/A}, M)$, or $\mathrm{Der}_A(B, M)$.
+Here the natural map of modules corresponds by Yoneda's lemma to the restriction
+\[ \mathrm{Der}_{A'}(B', M) \to \mathrm{Der}_A(B, M). \]
We need to see that this restriction map is an isomorphism. But this is to say
that any $A$-derivation $B \to M$ extends in a \emph{unique} way to
an $A'$-derivation $B' \to M$. This is easy to verify directly.
+\end{proof}
+
+We next describe how $\Omega$ behaves with respect to forming tensor products.
+\begin{proposition}
+Let $B, B'$ be $A$-algebras. Then there is a natural isomorphism
+\[ \Omega_{B \otimes_A B'/A} \simeq \Omega_{B/A} \otimes_A B' \oplus B
+\otimes_A \Omega_{B'/A} . \]
+\end{proposition}
+Since $\Omega$ is a linearization process, it is somewhat natural that it
+should turn tensor products into direct sums.
+\begin{proof}
+The ``natural map'' can be described in the leftward direction. For instance,
+there is a natural map $\Omega_{B/A} \otimes_A B' \to \Omega_{B \otimes_A
+B'/A}$. We just need to show that it is an isomorphism.
+For this, we essentially have to show that to give an $A$-derivation of $B
+\otimes_A B'$ is the same as giving a derivation of $B$ and one of $B'$. This
+is easy to check.
+\end{proof}
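
A basic case to check the formula against (one can compare it with the polynomial-ring computation above):
\begin{example}
Take $B = k[x]$, $B' = k[y]$ over $A = k$, so $B \otimes_k B' = k[x,y]$. The
proposition predicts
\[ \Omega_{k[x,y]/k} \simeq \Omega_{k[x]/k} \otimes_k k[y] \oplus k[x]
\otimes_k \Omega_{k[y]/k} = k[x,y]\, dx \oplus k[x,y]\, dy, \]
which agrees with the direct computation for polynomial rings.
\end{example}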
+
+\subsection{Differentials and localization}
+We now show that localization behaves \emph{extremely} nicely with respect to
+the formation of K\"ahler differentials. This is important in algebraic
+geometry for knowing that the ``cotangent bundle'' can be defined locally.
+
+\begin{proposition} \label{localizationdiff}
+Let $f: A \to B$ be a map of rings. Let $S \subset B$ be multiplicatively
+closed. Then the natural map
+\[ S^{-1}\Omega_{B/A} \to \Omega_{S^{-1}B/A} \]
+is an isomorphism.
+\end{proposition}
+So the formation of K\"ahler differentials commutes with localization.
+
+\begin{proof}
+We could prove this by the calculational definition, but perhaps it is better
+to prove it via the universal property. If $M$ is any $S^{-1}B$-module, then
+we can look at
+\[ \hom_{S^{-1}B}( \Omega_{S^{-1}B/A}, M) \]
+which is given by the group of $A$-linear derivations $S^{-1}B \to M$, by the
+universal property.
+
+On the other hand,
+\[ \hom_{S^{-1}B}( S^{-1} \Omega_{B/A}, M) \]
+is the same thing as the set of $B$-linear maps $\Omega_{B/A} \to M$, i.e. the
+set of $A$-linear derivations $B \to M$.
+
+We want to show that these two are the same thing. Given an $A$-derivation
+$S^{-1}B \to M$, we get an $A$-derivation $B \to M$ by pulling back. We want to
+show that any $A$-linear derivation $B \to M$ arises in this way. So we need to
+show that any $A$-linear derivation $d: B \to M$ extends uniquely to an $A$-linear
+$\overline{d}: S^{-1}B \to M$.
+Here are two proofs:
+\begin{enumerate}
+\item (Lowbrow proof.) For $x/s \in S^{-1}B$, with $x \in B, s \in S$, we
+define $\overline{d}(x/s) = dx/s - xds/s^2$ as in calculus. The claim is that
+this works, and is the only thing that works. One should check
+this---\textbf{exercise}.
+\item (Highbrow proof.) We start with a digression. Let $B$ be a commutative
+ring, $M$ a $B$-module. Consider $B \oplus M$, which is a $B$-module. We can
+make it into a ring (via \textbf{square zero multiplication}) by multiplying
+\[ (b,x)(b',x') = (bb', bx'+b'x). \]
+This is compatible with the $B$-module structure on $M \subset B \oplus
+M$. Note that $M$ is an ideal in this ring with square zero. Then the
+projection $\pi: B \oplus M \to B$ is a ring-homomorphism as well.
+There is also a ring-homomorphism in the other direction $b \to (b,0)$, which
+is a section of $\pi$. There may be other homomorphisms $B \to B \oplus M$.
+
+You might ask what all the right inverses to $\pi$ are, i.e. ring-homomorphisms
$\phi: B \to B \oplus M $ such that $\pi \circ \phi = 1_{B}$. Any such $\phi$ must be of
the form $\phi: b \to (b, db)$, where $d: B \to M$ is some map. It is easy to check
+that $\phi$ is a homomorphism precisely when $d$ is a derivation.
+
+Suppose now $A \to B$ is a morphism of rings making $B$ an $A$-algebra. Then
+$B \oplus M$ is an $A$-algebra via the inclusion $a \to (a, 0)$. Then
+you might ask when $\phi: b \to (b, db), B \to B \oplus M$ is an
+$A$-homomorphism. The answer is clear: when $d$ is an $A$-derivation.
+
+Recall that we were in the situation of $f: A \to B$ a morphism of rings, $S
+\subset B$ a multiplicatively closed subset, and $M$ an $S^{-1}B$-module. The
+claim was that any $A$-linear derivation $d: B \to M$ extends uniquely to
+$\overline{d}: S^{-1} B \to M$.
+We can draw a diagram
+\[ \xymatrix{
+& B \oplus M \ar[d] \ar[r] & S^{-1}B \oplus M \ar[d] \\
+A \ar[r] & B \ar[r] & S^{-1}B
+}.\]
+This is a cartesian diagram. So given a section of $A$-algebras $B \to B \oplus M$, we have to
+construct a section of $A$-algebras $S^{-1}B \to S^{-1}B \oplus M$. We can do this by the
+universal property of localization, since $S$ acts by invertible elements on
+$S^{-1}B \oplus M$. (To see this, note that $S$ acts by invertible elements on
+$S^{-1}B$, and $M$ is a nilpotent ideal.)
+\end{enumerate}
+\end{proof}
+
+Finally, we note that there is an even slicker argument. (We learned this from
+\cite{Qu}.)
+Namely, it suffices to show that $\Omega_{S^{-1}B/B} =0 $, by the exact
+sequences.
But this is an $S^{-1}B$-module, so we have
\[ \Omega_{S^{-1}B/B} = \Omega_{S^{-1}B/B} \otimes_B S^{-1}B, \]
because tensoring with $S^{-1}B$ localizes at $S$, and this does nothing to an
$S^{-1}B$-module! By the base change formula (\cref{basechangediff}), we have
+\[ \Omega_{S^{-1}B/B} \otimes_B S^{-1}B = \Omega_{S^{-1}B/S^{-1}B} = 0, \]
+where we again use the fact that $S^{-1} B \otimes_B S^{-1} B \simeq S^{-1}B$.
+
+\subsection{Another construction of $\Omega_{B/A}$}
+
+Let $B$ be an $A$-algebra. We have constructed $\Omega_{B/A}$ by quotienting
+generators by relations.
+There is also a simple and elegant ``global'' construction one sometimes finds
+useful in generalizing the procedure to schemes.
+
+Consider the algebra $B \otimes_A B$ and the map $B \otimes_A B \to B$ given by
+multiplication.
+Note that $B$ acts on $B \otimes_A B$ by multiplication on
+the first factor: this is how the latter is a $B$-module, and then the
+multiplication map is a $B$-homomorphism. Let $I \subset B \otimes_A B$ be the
+kernel.
+
+\begin{proposition} \label{alternateOmega}
+There is an isomorphism of $B$-modules
+\[ \Omega_{B/A} \simeq I/I^2 \]
+given by the derivation $b \mapsto 1 \otimes b - b \otimes 1$, from $B$ to
+$I/I^2$.
+\end{proposition}
+\begin{proof}
+It is clear that the maps
+\[ b \to 1 \otimes b, b \to b \otimes 1: \quad B \to B \otimes_A B \]
+are $A$-linear, so their difference is too. The quotient $d:B \to I/I^2$ is thus
+$A$-linear too.
+
+First, note that if $c,c' \in B$, then $1 \otimes c - c \otimes 1, 1 \otimes c'
+- c' \otimes 1 \in I$. Their product is thus zero in $I/I^2$:
+\[ (1 \otimes c - c \otimes 1)(1 \otimes c'
+- c' \otimes 1) = 1 \otimes cc' + cc' \otimes 1 - c \otimes c' - c' \otimes c
+ \in I^2.\]
+Next
+we must check that $d: B \to I/I^2$ is a derivation. So fix $b, b' \in B$; we
+have
+\[ d(bb') = 1 \otimes bb'- bb' \otimes 1\]
+and
+\[ bdb' = b ( 1 \otimes b'-b' \otimes 1), \quad b' db = b'(1
+\otimes b - b \otimes 1 ). \]
+The second relation shows that
+\[ bdb' + b' db = b \otimes b' - bb' \otimes 1+ b' \otimes b - bb' \otimes
+1 . \]
+Modulo $I^2$, we have as above $b \otimes b' + b' \otimes b \equiv 1 \otimes
+bb' + bb' \otimes 1$, so
+\[ bdb' + b' db \equiv 1 \otimes bb' - bb' \otimes 1 \mod I^2, \]
+and this last is equal to $d(bb')$ by definition. So we have an $A$-linear
+derivation $d: B \to I/I^2$. It remains to be checked that this is
+\emph{universal}. In particular, we must check that the induced
\[ \phi: \Omega_{B/A} \to I/I^2 \]
sending $db \to 1 \otimes b - b \otimes 1$
is an isomorphism. We can define an inverse $\psi: I/I^2 \to \Omega_{B/A}$ by sending $\sum b_i \otimes b_i'
+\in I$
+to $\sum b_i db_i'$. This is clearly a $B$-module homomorphism, and it
+ is well-defined mod $I^2$.
+
+It is clear that $\psi (\phi(db)) = db$ from the definitions, since this
+is
+\[ \psi( 1 \otimes b - b \otimes 1) = 1 (db) - b d1 = db, \]
as $d1 = 0$. So $\psi \circ \phi = 1_{\Omega_{B/A}}$, and in particular $\phi$
is injective. It remains to check that $\phi$ is surjective, which will
complete the proof.
+
+\begin{lemma}
+Any element in $I$ is a $B$-linear combination of elements of the form $1
+\otimes b - b \otimes 1$.
+\end{lemma}
+
+Every such element is the image of $db$ under $\phi$ by definition of the
+derivation $B \to I/I^2$. So this lemma will complete the proof.
+
+\begin{proof}
+Let $Q = \sum c_i \otimes d_i \in I$. By assumption, $\sum c_i d_i = 0 \in B$.
+We have by this last identity
+\[Q = \sum \left( ( c_i \otimes d_i ) - (c_i d_i \otimes 1)\right)
+= \sum c_i (1 \otimes d_i - d_i \otimes 1).
+\]
+So $Q$ is in the submodule spanned by the $\left\{1 \otimes b - b \otimes
+1\right\}_{b \in B}$.
+\end{proof}
+\end{proof}
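
Let us check this construction in the simplest case, where everything can be made explicit (a routine verification):
\begin{example}
Let $B = A[x]$. Then $B \otimes_A B \simeq A[x,y]$, where $x \otimes 1$
corresponds to $x$ and $1 \otimes x$ to $y$, and the multiplication map becomes
$A[x,y] \to A[x]$, $y \mapsto x$. Its kernel is $I = (y-x)$, so
\[ I/I^2 = (y-x)/(y-x)^2 \]
is free of rank one over $A[x,y]/(y-x) \simeq A[x]$, generated by the class of
$y - x = 1 \otimes x - x \otimes 1$. This matches $\Omega_{A[x]/A} = A[x]\,
dx$, with $dx$ corresponding to $1 \otimes x - x \otimes 1$.
\end{example}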
+
+\section{Introduction to smoothness}
+\subsection{K\"ahler differentials for fields}
+
+Let us start with the simplest examples---fields.
+
+\begin{example}
+Let $k$ be a field, $k'/k$ an extension.
+\begin{question}
+What does $\Omega_{k'/k}$ look like? When does it vanish?
+\end{question}
+$\Omega_{k'/k}$ is a $k'$-vector space.
+
+\begin{proposition}
+Let $k'/k$ be a separable algebraic extension of fields. Then $\Omega_{k'/k} = 0$.
+\end{proposition}
+\begin{proof}
+We will need a formal property of K\"ahler differentials that is easy to check,
+namely that they are compatible with filtered colimits. If $B = \varinjlim
+B_\alpha$ for $A$-algebras $B_\alpha$, then there is a canonical isomorphism
+\[ \Omega_{B/A} \simeq \varinjlim \Omega_{B_{\alpha}/A}. \]
+One can check this on generators and relations, for instance.
+
+Given this, we can reduce to the case of $k'/k$ finite and separable.
+\begin{remark}
+Given a sequence of fields and morphisms $k \to k' \to k''$, then there is an
+exact sequence
+\[ \Omega_{k'/k} \otimes k'' \to \Omega_{k''/k} \to \Omega_{k''/k'} \to 0. \]
+In particular, if $\Omega_{k'/k} = \Omega_{k''/k'} =0 $, then $\Omega_{k''/k} =
+0$. This is a kind of d\'evissage argument.
+\end{remark}
+
+Anyway, recall that we have a finite separable extension $k'/k$ where $k' =
+k(x_1, \dots, x_n)$.\footnote{We can take $n=1$ by the primitive element
+theorem, but shall not need this.} We will show that
+\[ \Omega_{k(x_1, \dots, x_i)/k(x_1, \dots, x_{i-1})} =0 \quad \forall i, \]
which will imply by the d\'evissage argument that $\Omega_{k'/k} = 0$.
+In particular, we are reduced to showing the proposition when $k'$ is generated
+over $k$ by a \emph{single element} $x$. Then we have that
+\[ k' \simeq k[X]/(f(X)) \]
+for $f(X)$ an irreducible polynomial. Set $I = (f(X))$. We have an exact sequence
\[ I/I^2 \to \Omega_{k[X]/k} \otimes_{k[X]} k' \to \Omega_{k'/k} \to 0. \]
+The middle term is a copy of $k'$ and the first term is isomorphic to $k[X]/I
+\simeq k'$. So there is an exact sequence
+\[ k' \to k' \to \Omega_{k'/k} \to 0. \]
The first map is, as we have computed, multiplication by $f'(x)$, and $f'(x)$
is nonzero by separability, hence invertible in the field $k'$. Thus the first
map is surjective, and we find that $\Omega_{k'/k} =0$.
+\end{proof}
+\end{example}
+
+\begin{remark}
+The above result is \textbf{not true} for inseparable extensions in general.
+\end{remark}
\begin{example}
Let $k$ be an imperfect field of characteristic $p>0$. By definition, there is
$x \in k$ such that $x^{1/p} \notin k$. Let $k' = k(x^{1/p})$. As a ring, this
looks like
$k[t]/(t^p - x)$. Writing the exact sequence as before, we find that
$\Omega_{k'/k}$ is the cokernel of the map $k' \to k'$ given by multiplication
by the derivative $\frac{d}{dt}(t^p - x) = pt^{p-1}$, evaluated at $x^{1/p}$.
But this derivative vanishes identically in characteristic $p$, so the map is
zero. We find that $\Omega_{k'/k}$ is free of rank one: a generator is $dt$,
where $t$ is the $p$th root of $x$, and $\Omega_{k'/k } \simeq k'$.
\end{example}
+
+Now let us consider transcendental extensions. Let $k' = k(x_1, \dots, x_n)$ be
+a purely transcendental extension, i.e. the field of rational functions of
+$x_1, \dots, x_n$.
+
+\begin{proposition}
+If $k' = k(x_1, \dots, x_n)$, then $\Omega_{k'/k}$ is a free $k'$-module on the
+generators $dx_i$.
+\end{proposition}
+This extends to an \emph{infinitely generated} purely transcendental extension,
+because K\"ahler differentials commute with filtered colimits.
+\begin{proof}
+We already know this for the polynomial ring $k[x_1, \dots, x_n]$. However, the
+rational function field is just a localization of the polynomial ring at the
+zero ideal. So the result will follow from \cref{localizationdiff}.
+\end{proof}
+
+We have shown that separable algebraic extensions have no K\"ahler
+differentials, but that purely transcendental extensions have a free module of
+rank equal to the transcendence degree.
+
+We can deduce from this:
+\begin{corollary}
Let $L/K$ be an extension of fields of characteristic zero. Then
+\[ \dim_L \Omega_{L/K} = \mathrm{trdeg}(L/K). \]
+\end{corollary}
+\begin{proof}[Partial proof]
+Put the above two facts together. Choose a transcendence basis $\{x_\alpha\}$
+for $L/K$. This means that $L$ is algebraic over $K(\left\{x_\alpha\right\})$
+and the $\left\{x_\alpha\right\}$ are algebraically independent.
Moreover $L/K(\left\{x_\alpha\right\})$ is \emph{separable} algebraic, as we
are in characteristic zero. Now let
us use the exact sequences of K\"ahler differentials established above. There is an exact sequence:
\[ \Omega_{K(\left\{x_\alpha\right\})/K}
\otimes_{K(\left\{x_\alpha\right\})} L \to \Omega_{L/K} \to \Omega_{L/K(\left\{x_\alpha\right\})} \to 0. \]
+The last thing is zero, and we know what the first thing is; it's free on the
+$dx_\alpha$. So we find that $\Omega_{L/K}$ is generated by
+the elements $dx_\alpha$. If we knew that the $dx_\alpha$ were linearly
+independent, then we would be done. But we don't, yet.
+\end{proof}
+
+This is \textbf{not true} in characteristic $p$. If $L = K(\alpha^{1/p})$ for
+$\alpha \in K$ and $\alpha^{1/p} \notin K$, then $\Omega_{L/K} \neq 0$.
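+
+\begin{example}
+Indeed, write $t = \alpha^{1/p}$, so that $L = K[x]/(x^p - \alpha)$. The minimal polynomial
+$x^p - \alpha$ has derivative $p x^{p-1} = 0$, so the relation $d(t^p - \alpha) =
+p t^{p-1}\, dt = 0$ imposes no condition, and $\Omega_{L/K} = L\, dt$ is free of rank one
+even though $L/K$ is algebraic.
+\end{example}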
+
+\subsection{Regularity, smoothness, and K\"ahler differentials}
+From this, let us revisit a statement made last time.
+Let $k$ be an algebraically closed field, let $R = k[x_1, \dots, x_n]/I$ and
+let $\mathfrak{m} \subset R$ be a maximal ideal. Recall that the
+Nullstellensatz implies that $R/\mathfrak{m} \simeq k$. We were studying
+\[ \Omega_{R/k}. \]
+This is an $R$-module, so $\Omega_{R/k} \otimes_R k$ makes sense. There is a
+surjection
+\[ \mathfrak{m}/\mathfrak{m}^2 \to \Omega_{R/k} \otimes_R k \to 0, \]
+that sends $x \to dx$.
+\begin{proposition}
+This map is an isomorphism.
+\end{proposition}
+\begin{proof}
+We construct a map going the other way. Call the map $\mathfrak{m}/\mathfrak{m}^2 \to
+\Omega_{R/k} \otimes_R k$ above $\phi$. We want to construct
+\[ \psi: \Omega_{R/k} \otimes_R k \to \mathfrak{m}/\mathfrak{m}^2. \]
+This is equivalent to giving an $R$-module map
+\[ \Omega_{R/k} \to \mathfrak{m}/\mathfrak{m}^2, \]
+that is a derivation $\partial: R \to \mathfrak{m}/\mathfrak{m}^2$. This acts
+via $\partial(\lambda + x) = x \bmod \mathfrak{m}^2$ for $\lambda \in k, x \in \mathfrak{m}$. Since
+$R = k \oplus \mathfrak{m}$ as $k$-vector spaces, this is indeed well-defined. We must check that
+$\partial$ is a derivation. That is, we have to compute
+$\partial((\lambda+x)(\lambda' + x'))$.
+But this is
+\[ \partial(\lambda\lambda' + (\lambda x' + \lambda' x) + xx'). \]
+The definition of $\partial$ is to ignore the constant term and look at the
+nonconstant term mod $\mathfrak{m}^2$. So this becomes
+\[ \lambda x' + \lambda' x = (\partial (\lambda+x)) (x'+\lambda') + (\partial (\lambda'+
+x')) (x+\lambda) \]
+because $xx' \in \mathfrak{m}^2$, and because $\mathfrak{m}$ acts trivially on
+$\mathfrak{m}/\mathfrak{m}^2$. Thus we get the map $\psi$ in the inverse
+direction, and one checks that $\phi, \psi$ are inverses. This is because
+$\phi$ sends $x \to dx$ and $\psi$ sends $dx \to x$.
+\end{proof}
+
+\begin{corollary}
+Let $R$ be as before. Then $R_{\mathfrak{m}}$ is regular iff $\dim
+R_{\mathfrak{m}} = \dim_k \Omega_{R/k} \otimes_R R/\mathfrak{m}$.
+\end{corollary}
+In particular, the modules of K\"ahler differentials detect regularity for
+certain rings.
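+
+Here is a concrete instance of how K\"ahler differentials detect a singular point;
+suppose for simplicity that the characteristic of $k$ is not $2$ or $3$.
+\begin{example}
+Let $R = k[x,y]/(y^2 - x^3)$ be the cuspidal cubic, so $\Omega_{R/k} = (R\, dx \oplus
+R\, dy)/R(3x^2\, dx - 2y\, dy)$. At $\mathfrak{m} = (x,y)$ the relation $3x^2\, dx -
+2y\, dy$ vanishes modulo $\mathfrak{m}$, so $\dim_k \Omega_{R/k} \otimes_R R/\mathfrak{m}
+= 2$ while $\dim R_{\mathfrak{m}} = 1$: the cusp is singular at the origin. At any other
+maximal ideal the relation is nonzero modulo $\mathfrak{m}$, the dimension drops to one,
+and $R_{\mathfrak{m}}$ is regular.
+\end{example}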
+
+\begin{definition}
+Let $R$ be a noetherian ring. We say that $R$ is \textbf{regular} if
+$R_{\mathfrak{m}}$ is regular for every maximal ideal $\mathfrak{m}$. (This
+actually implies that $R_{\mathfrak{p}}$ is regular for all primes
+$\mathfrak{p}$, though we are not ready to see this. It will follow from the
+fact that the localization of a regular local ring at a prime ideal is regular.)
+\end{definition}
+
+Let $R = k[x_1, \dots, x_n]/I$ be an affine ring over an algebraically closed
+field $k$.
+Then:
+
+\begin{proposition}
+TFAE:
+\begin{enumerate}
+\item $R$ is regular.
+\item ``$R$ is smooth over $k$'' (to be defined)
+\item $\Omega_{R/k}$ is a projective module over $R$ of rank $\dim R$.
+\end{enumerate}
+\end{proposition}
+A finitely generated projective module is locally free. So the last statement is that
+$(\Omega_{R/k})_{\mathfrak{p}}$ is free of rank $\dim R$ for each prime
+$\mathfrak{p}$.
+
+\begin{remark}
+A projective module does not necessarily have a well-defined rank as an integer. For
+instance, if $R = R_1 \times R_2$ and $M = R_1 \times 0$, then $M$ is a summand
+of $R$, hence is projective. But there are two candidates for what the rank
+should be. The problem is that $\spec R$ is disconnected into two pieces, and
+$M$ is of rank one on one piece, and of rank zero on the other.
+In the situation of the proposition above, however, this does not happen.
+\end{remark}
+
+\begin{remark}
+The smoothness condition states that locally on $\spec R$, we have an isomorphism with
+$k[y_1, \dots, y_n]/(f_1, \dots, f_m)$ with the gradients $\nabla f_i$ linearly
+independent. Equivalently, if $R_{\mathfrak{m}}$ is the localization of $R$ at
+a maximal ideal $\mathfrak{m}$, then $R_{\mathfrak{m}}$ is a regular local
+ring, as we have seen.
+\end{remark}
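+
+The Jacobian condition in the remark is easy to check in a simple case:
+\begin{example}
+The ring $R = k[x,y]/(xy - 1)$ is smooth over $k$: the gradient $\nabla(xy - 1) = (y, x)$
+is nonzero at every point of $V(xy - 1)$, since $xy = 1$ forces $x$ and $y$ to be
+nonzero there. Accordingly, every localization $R_{\mathfrak{m}}$ is a regular local
+ring.
+\end{example}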
+
+\begin{proof}
+We have already seen that 1 and 2 are equivalent. The new thing is that they
+are equivalent to 3. Assume first that 1 (or 2) holds.
+Note that $\Omega_{R/k}$ is a finitely generated $R$-module; that's a general
+observation:
+
+\begin{proposition}
+\label{finitelygeneratedOmega}
+If $f: A \to B$ is a map of rings that makes $B$ a finitely generated $A$-algebra, then
+$\Omega_{B/A}$ is a finitely generated $B$-module.
+\end{proposition}
+\begin{proof}
+We've seen this is true for polynomial rings, and we can use the exact
+sequence. If $B$ is a quotient of a polynomial ring, then $\Omega_{B/A}$ is a
+quotient of the K\"ahler differentials of the polynomial ring.
+\end{proof}
+Return to the main proof. In particular, $\Omega_{R/k}$ is projective if and
+only if $(\Omega_{R/k})_{\mathfrak{m}}$ is projective for every maximal ideal
+$\mathfrak{m}$. According to the second assertion, we have that
+$R_{\mathfrak{m}}$ looks like $(k[y_1, \dots, y_n]/(f_1, \dots,
+f_m))_{\mathfrak{n}}$ for some maximal ideal $\mathfrak{n}$, with the
+gradients $\nabla f_i$ linearly independent. Thus
+$(\Omega_{R/k})_{\mathfrak{m}} = \Omega_{R_{\mathfrak{m}}/k}$ looks like the cokernel of
+\[ R_{\mathfrak{m}}^m \to R_{\mathfrak{m}}^n \]
+where the map is multiplication by the Jacobian matrix $\left(\frac{\partial
+f_i}{\partial y_j} \right)$. By assumption this matrix has full rank. We see
+that the reduced map $k^m \to k^n$ has a left inverse.
+We can lift this to a map $R_{\mathfrak{m}}^n \to R_{\mathfrak{m}}^m$. Since
+this is a left inverse mod $\mathfrak{m}$, the composite $R_{\mathfrak{m}}^m \to
+R_{\mathfrak{m}}^m$ has determinant a unit of the local ring $R_{\mathfrak{m}}$, hence is
+an isomorphism. Thus we see that $(\Omega_{R/k})_{\mathfrak{m}}$ is
+given by the cokernel of a map of free modules that splits, hence is projective.
+The rank is $n-m = \dim R_{\mathfrak{m}}$.
+
+Finally, let us prove that 3 implies 1. Suppose $\Omega_{R/k}$ is projective of
+rank $\dim R$. So this means that $\Omega_{R_{\mathfrak{m}}/k}$ is free of
+rank $\dim R_{\mathfrak{m}}$. But this implies that $(\Omega_{R/k})
+\otimes_R R/\mathfrak{m}$ has $k$-dimension $\dim R_{\mathfrak{m}}$, and that dimension is---as we
+have seen already---the embedding dimension $\dim_k \mathfrak{m}/\mathfrak{m}^2$. So
+if 3 holds, the embedding dimension equals the Krull dimension, and we get
+regularity.
+\end{proof}
+
+\begin{corollary}
+Let $R = \mathbb{C}[x_1, \dots, x_n]/\mathfrak{p}$ for $\mathfrak{p}$ a prime.
+Then there is a nonzero $f \in R$ such that $R[f^{-1}]$ is regular.
+\end{corollary}
+Geometrically, this says the following. $\spec R$ is some algebraic variety,
+and $\spec R[f^{-1}]$ is a Zariski open subset. What we are saying is that, in
+characteristic zero, any algebraic variety has a nonempty open smooth locus.
+The singular locus is always smaller than the entire variety.
+
+\begin{proof}
+$\Omega_{R/\mathbb{C}}$ is a finitely generated $R$-module. Let $K(R) $ be the fraction field of $R$.
+Now
+\[ \Omega_{R/\mathbb{C}} \otimes_R K(R) = \Omega_{K(R)/\mathbb{C}} \]
+is a finite $K(R)$-vector space. The dimension is
+$\mathrm{trdeg}(K(R)/\mathbb{C})$. That is also $d=\dim R$, as we have seen.
+Choose elements $\omega_1, \dots, \omega_d \in \Omega_{R/\mathbb{C}}$ whose images form a basis
+for $\Omega_{K(R)/\mathbb{C}}$. There is a map
+\[ R^d \to \Omega_{R/\mathbb{C}} \]
+which is an isomorphism after localization at $(0)$. This implies that there is
+$f \in R$ such that the map is an isomorphism after localization at
+$f$.\footnote{There is an inverse defined over the fraction field, so it is
+defined over some localization.} We find that $\Omega_{R[f^{-1}]/\mathbb{C}}$
+is free of rank $d$ for some $f$, which is what we wanted.
+\end{proof}
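+
+\begin{example}
+For the cuspidal cubic $R = \mathbb{C}[x,y]/(y^2 - x^3)$, one may take $f = y$. In
+$\Omega_{R[y^{-1}]/\mathbb{C}}$ the relation $3x^2\, dx = 2y\, dy$ can be solved as
+$dy = \frac{3x^2}{2y}\, dx$, so the module is free of rank one on $dx$. Inverting $y$
+removes exactly the singular point, since $y = 0$ forces $x = 0$ on the curve.
+\end{example}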
+
+This argument works over any algebraically closed field of characteristic
+zero, or really any field of characteristic zero.
+\begin{remark}[Warning] Over imperfect fields in characteristic $p$, two things can happen:
+\begin{enumerate}
+\item Varieties need not be generically smooth.
+\item $\Omega_{R/k}$ can be projective with the wrong rank.
+\end{enumerate}
+(Nothing goes wrong for \textbf{algebraically closed fields} of characteristic
+$p$.)
+\begin{example}
+Here is a silly example. Say $R = k[y]/(y^p-x)$ where $x \in k$ has no $p$th
+root. We know that $\Omega_{R/k}$ is free of rank one. However, the rank is
+wrong: the variety has dimension zero.
+\end{example}
+\end{remark}
+
+Last time, we were trying to show that $\Omega_{L/K}$ is free on a transcendence
+basis if $L/K$ is an extension in characteristic zero. So we had a tower of fields
+\[ K \to K' \to L, \]
+where $L/K'$ was separable algebraic.
+We claim in this case that
+\[ \Omega_{L/K} \simeq \Omega_{K'/K} \otimes_{K'} L. \]
+This will prove the result, but we had not yet proved it.
+\begin{proof}
+This doesn't follow directly from the previous calculations. Without loss of generality, $L$ is
+finite over $K'$, and in particular, $L = K'[x]/(f(x))$ for $f$ separable. The claim is that
+\[ \Omega_{L/K} \simeq \left(\Omega_{K'/K}\otimes_{K'}L \,\oplus\, L\, dx\right)/\left(f'(x)\,dx + \dots\right), \]
+where the omitted terms lie in $\Omega_{K'/K}\otimes_{K'}L$ (they come from differentiating
+the coefficients of $f$). Since $f$ is separable, $f'(x)$ is a unit in $L$, so
+when we kill the vector $f'(x) dx + \dots$, we kill exactly the $L\, dx$ summand, and
+the first summand maps isomorphically onto the quotient.
+\end{proof}
+
+
+
diff --git a/books/cring/spec.tex b/books/cring/spec.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d164d1a221af9bc7618e0dd731db88b803c5790e
--- /dev/null
+++ b/books/cring/spec.tex
@@ -0,0 +1,1621 @@
+\chapter{The $\spec$ of a ring}
+\label{spec}
+
+The notion of the $\spec$ of a ring is fundamental in modern
+algebraic
+geometry. It is the scheme-theoretic analog of classical affine
+schemes. The
+identification occurs when one identifies the maximal ideals of
+the polynomial
+ring $k[x_1, \dots, x_n]$ (for $k$ an algebraically closed
+field) with the
+points of the classical variety $\mathbb{A}^n_k = k^n$. In
+modern algebraic
+geometry, one adds the ``non-closed points'' given by the other
+prime ideals.
+Just as general varieties were classically defined by gluing
+affine varieties, a
+scheme is defined by gluing open affines.
+
+This is not a book on schemes, but it will nonetheless be
+convenient to introduce the $\spec$ construction, outside of the obvious
+benefits of including preparatory material for algebraic geometry. First of
+all, it will provide a convenient
+notation. Second, and more importantly, it will provide a
+convenient geometric
+intuition. For example, an $R$-module can be thought of as a
+kind of ``vector
+bundle''---technically, a sheaf---over the space $\spec R$, with
+the caveat
+that the rank might not be locally constant (which is, however,
+the case when the module
+is projective).
+
+\section{The spectrum of a ring}
+
+We shall now associate to every commutative ring a topological
+space $\spec R$
+in a functorial manner.
+That is, there will be a contravariant functor
+\[\spec: \mathbf{CRing} \to \mathbf{Top} \]
+where $\mathbf{Top}$ is the category of topological spaces.
+This construction is the basis for scheme-theoretic
+algebraic geometry and will be used frequently in the sequel.
+
+The motivating observation is the following. If $k$ is an algebraically closed
+field, then the maximal ideals in $k[x_1, \dots, x_n]$ are of the form
+$(x_1-a_1, \dots, x_n - a_n)$ for $(a_1, \dots, a_n) \in k^n$.
+This is the Nullstellensatz, which we have not proved yet. We can thus
+identify the maximal ideals in the polynomial ring with the space $k^n$.
+If $I \subset k[x_1, \dots, x_n]$ is an ideal, then the maximal ideals
+containing $I$ correspond to the points where everything in $I$ vanishes. See
+\rref{twovarpoly} for a more detailed explanation. Classical affine algebraic
+geometry thus studies the set of maximal ideals in an algebra finitely
+generated over an algebraically closed field.
+
+The $\mathrm{Spec}$ of a ring is a generalization of this construction.
+In general, it is more natural to
+use all prime ideals instead of just maximal ideals.
+\subsection{Definition and examples}
+
+We start by defining $\spec$ as a set. We will next
+construct the
+Zariski topology and later the functoriality.
+\begin{definition}
+Let $R$ be a commutative ring. The \textbf{spectrum} of $R$,
+denoted $\spec R$, is
+the set of prime ideals of $R$.
+\end{definition}
+
+We shall now make $\spec R$ into a topological space. First, we
+describe a
+collection of sets which will become the closed sets.
+If $I \subset R$ is an ideal, let
+\[ V(I) = \left\{\mathfrak{p}: \mathfrak{p} \supset I\right\}
+\subset \spec R.
+\]
+
+\begin{proposition}
+There is a topology on $\spec R$ such that the closed subsets
+are of the form
+$V(I)$ for $I \subset R$ an ideal.
+\end{proposition}
+
+\begin{proof}
+Indeed, we have to check the familiar axioms for a topology:
+\begin{enumerate}
+\item $\emptyset = V((1))$ because no prime contains $1$. So
+$\emptyset$ is closed.
+\item $\spec R = V((0))$ because any ideal contains zero. So
+$\spec R$ is
+closed.
+\item We show the closed sets are stable under intersections.
+Let
+$K_{\alpha} = V(I_{\alpha})$ be closed subsets of $\spec R$ for
+$\alpha$
+ranging over some index set. Let $I
+= \sum I_{\alpha}$. Then
+\[ V(I) = \bigcap K_{\alpha} = \bigcap V(I_{\alpha}), \]
+which follows because $I$ is the smallest ideal containing each
+$I_{\alpha}$,
+so a prime contains every $I_{\alpha}$ iff it contains $I$.
+\item The union of two closed sets is closed. Indeed, if $K,K'
+\subset \spec
+R$ are closed, we show $K \cup K'$ is closed. Say $K= V(I), K' =
+V(I')$. Then
+we claim:
+\[ K \cup K' = V(II'). \]
+Here, as usual, $II'$ is the ideal generated by products $ii', i \in I, i'
+\in I'$. If
+$\mathfrak{p}$ is \textbf{prime} and contains $II'$, it must
+contain one of $I$, $I'$;
+this implies the displayed equation above and implies the
+result.
+\end{enumerate}
+\end{proof}
+\begin{definition}
+The topology on $\spec R$ defined above is called the
+\textbf{Zariski
+topology}. With it, $\spec R$ is now a topological space.
+\end{definition}
+
+\begin{exercise}
+What is the $\spec$ of the zero ring?
+\end{exercise}
+
+In order to see the geometry of this construction, let us work
+several examples.
+
+\begin{example}
+Let $R = \mathbb{Z}$, and consider $\spec \mathbb{Z}$. Then
+every prime is generated by one element, since
+$\mathbb{Z}$ is a PID. We have that $\spec \mathbb{Z} = \{(0)\}
+\cup \bigcup_{p \
+\mathrm{prime}} \{ (p)\}$. The picture is that one has all the
+familiar primes $(2), (3),
+(5), \dots, $ and then a special point $(0)$.
+
+Let us now describe the closed subsets. These are of the form
+$V(I)$ where $I
+\subset \mathbb{Z}$ is an ideal, so $I = (n)$ for some $n \in
+\mathbb{Z}$.
+
+\begin{enumerate}
+\item If $n=0$, the closed subset is all of $\spec \mathbb{Z}$.
+\item If $n \neq 0$, then $n$ has finitely many prime divisors.
+So $V((n))$ consists
+of the prime ideals corresponding to these prime divisors.
+\end{enumerate}
+
+The only closed subsets besides the entire space are the finite
+subsets
+that exclude $(0)$.
+\end{example}
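+
+\begin{example}
+Concretely, $V((12)) = \left\{(2), (3)\right\}$: a prime $(p)$ contains $12 = 2^2 \cdot 3$
+exactly when $p \mid 12$, i.e. when $p = 2$ or $p = 3$, while $(0)$ does not contain
+$(12)$.
+\end{example}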
+
+\begin{example} \label{twovarpoly}
+Say $R = \mathbb{C}[x,y]$ is a polynomial ring in two variables.
+We will not give a complete description of $\spec R$ here. But we will write
+down several
+prime ideals.
+
+\begin{enumerate}
+\item For every pair of complex numbers $s,t \in \mathbb{C}$,
+the collection of polynomials
+$f \in R$ such that $f(s,t) = 0$ is a prime ideal $\mathfrak{m}
+_{s,t} \subset R$. In
+fact, it is maximal, as the residue ring is all of
+$\mathbb{C}$. Indeed,
+$R/\mathfrak{m}_{s,t} \simeq \mathbb{C}$ under the map $f \to
+f(s,t)$.
+
+In fact,
+\begin{theorem}
+The $\mathfrak{m}_{s,t}$ are all the maximal ideals in $R$.
+\end{theorem}
+This will follow from the \emph{Hilbert Nullstellensatz} to be
+proved later
+(\rref{gennullstellensatz}).
+\item $(0) \subset R$ is a prime ideal since $R$ is a domain.
+\item If $f(x,y) \in R$ is an irreducible polynomial, then $(f)$
+is a prime
+ideal. This is equivalent to unique factorization in
+$R$.\footnote{To be
+proved later \rref{}.}
+\end{enumerate}
+
+To draw $\spec R$, we start by drawing $\mathbb{C}^2$, which is identified
+with the
+collection of
+maximal ideals $\mathfrak{m}_{s,t}, s, t \in \mathbb{C}$. $\spec R$ has
+additional (non-closed) points
+too, as
+described above, but for now let us
+consider the topology induced on $\mathbb{C}^2$ as a subspace of
+$\spec R$.
+
+The closed subsets of $\spec R$ are subsets $V(I)$ where $I$ is
+an ideal,
+generated by polynomials $\left\{f_{\alpha}(x,y)\right\}$. It is of interest to
+determine the subset of $\mathbb{C}^2$ that
+$V(I)$
+induces. In other words, we ask:
+\begin{quote}
+What points of $\mathbb{C}^2$ (with $(s,t)$ identified with
+$\mathfrak{m}_{s,t}$) lie in $V(I)$?
+\end{quote}
+Now, by definition, we know that $(s,t)$ corresponds to a point of $V(I)$ if
+and only if $I
+\subset \mathfrak{m}_{s,t}$.
+This is true iff all the
+$f_{\alpha} $ lie in $ \mathfrak{m}_{s,t}$, i.e. if
+$f_{\alpha}(s,t) =0$ for all
+$\alpha$. So the closed subsets of $\mathbb{C}^2$ (with the
+induced Zariski
+topology) are \emph{precisely the subsets
+that can be defined by polynomial equations}.
+
+This is
+\textbf{much} coarser
+than the usual topology. For instance, $\left\{(z_1,z_2):
+\Re(z_1) \geq 0\right\}$ is
+not Zariski-closed.
+The Zariski topology is so coarse because one has only algebraic
+data (namely,
+polynomials, or elements of $R$) to define the topology.
+\end{example}
+
+\begin{exercise}
+Let $R_1, R_2$ be commutative rings. Give $R_1 \times R_2$ a
+natural structure
+of a ring, and describe $\spec( R_1 \times R_2)$ in terms of
+$\spec R_1$ and
+$\spec R_2$.
+\end{exercise}
+
+
+\begin{exercise}
+Let $X$ be a compact Hausdorff space, $C(X)$ the ring of real
+continuous
+functions $X \to \mathbb{R}$.
+The maximal ideals in $C(X)$ are in bijection with the
+points of $X$,
+and the topology induced on $X $ (as a subset of $\spec C(X)$ with the Zariski
+topology)
+is just the usual topology.
+\end{exercise}
+
+\begin{exercise}
+Prove the following result: if $X, Y$ are compact Hausdorff
+spaces and $C(X),
+C(Y)$ the associated rings of continuous functions, if $C(X),
+C(Y)$ are
+isomorphic as $\mathbb{R}$-algebras, then $X$ is homeomorphic to
+$Y$.
+\end{exercise}
+
+
+\subsection{The radical ideal-closed subset correspondence}
+
+We now return to the case of an arbitrary commutative ring $R$.
+If $I \subset R$, we get a closed
+subset $V(I) \subset \spec R$. It is called $V(I)$ because one
+is supposed to
+think of it as the places where the elements of $I$ ``vanish,''
+as the
+elements of $R$ are something like ``functions.'' This analogy
+is perhaps best
+seen in the example of a polynomial ring over an algebraically
+closed field,
+e.g. \rref{twovarpoly} above.
+
+The map from ideals into closed sets is very far from being
+injective in
+general, though by definition it is surjective.
+
+\begin{example}
+If $R = \mathbb{Z}$ and $p$ is prime, then $I = (p), I' = (p^2)$
+define the
+same subset (namely, $\left\{(p)\right\}$) of
+$\spec R$.
+\end{example}
+
+We now ask why the map from ideals to closed
+subsets fails to
+be injective. As we shall see, the entire problem disappears if
+we restrict to
+\emph{radical} ideals.
+
+\begin{definition}
+If $I$ is an ideal, then the \textbf{radical} $\rad(I) $ or $
+\sqrt{I}$ is
+defined as $$\rad(I) =
+\left\{x \in R: x^n \in I \ \mathrm{for} \ \mathrm{some} \ n
+\right\}.$$
+An ideal is \textbf{radical} if it is equal to its radical.
+(This is
+equivalent to the earlier \rref{def-radical-ideal}.)
+\end{definition}
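+
+\begin{example}
+In $\mathbb{Z}$, $\rad((12)) = (6)$: on one hand $6^2 = 36 \in (12)$, and on the other
+hand any $x$ with $x^n \in (12)$ is divisible by both $2$ and $3$. More generally, if
+$n = p_1^{a_1} \cdots p_k^{a_k}$, then $\rad((n)) = (p_1 \cdots p_k)$.
+\end{example}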
+
+Before proceeding, we must check:
+\begin{lemma}
+If $I$ an ideal, so is $\rad(I)$.
+\end{lemma}
+\begin{proof}
+Clearly $\rad(I)$ is closed under multiplication since $I$ is.
+Suppose $x,y \in \rad(I)$; we show $x+y \in \rad(I)$. Then $x^n,
+y^n \in I$
+for some $n$ (large) and thus for all larger $n$. The binomial
+expansion now
+gives
+\[ (x+y)^{2n} = x^{2n} + \binom{2n}{1} x^{2n-1}y + \dots +
+y^{2n}, \]
+where every term contains either $x,y$ with power $ \geq n$, so
+every term
+belongs to $I$. Thus $(x+y)^{2n} \in I$ and, by definition, we
+see then that $x+y \in \rad(I)$.
+\end{proof}
+
+The map $I \to V(I)$ does in fact depend only on the radical of
+$I$. In fact, if $I,J$ have the same radical $\rad(I) =
+\rad(J)$, then $V(I) = V(J)$.
+Indeed, $V(I) = V(\rad(I)) = V(\rad(J)) = V(J)$ by:
+\begin{lemma}
+For any $I$, $V(I) = V(\rad(I))$.
+\end{lemma}
+\begin{proof}
+Indeed, $I \subset \rad(I)$ and therefore obviously $V(\rad(I))
+\subset V(I)$. We have to show the
+converse inclusion. Namely, we must prove:
+\begin{quote}
+If $\mathfrak{p} \supset I$, then $\mathfrak{p} \supset
+\rad(I).$
+\end{quote}
+So suppose $\mathfrak{p} \supset I$ is prime and $x \in
+\rad(I)$; then $x^n \in I \subset \mathfrak{p}$ for some $n$.
+But $\mathfrak{p}$ is prime, so whenever a product of things
+belongs to
+$\mathfrak{p}$, a factor does. Thus since $x^n = x \cdot x
+\cdots x$, we must
+have $x \in \mathfrak{p}$. So
+\[ \rad(I) \subset \mathfrak{p}, \]
+proving the quoted claim, and thus the lemma.
+\end{proof}
+
+There is a converse to this remark:
+\begin{proposition}
+If $V(I) = V(J)$, then $\rad(I) = \rad(J)$.
+\end{proposition}
+So two ideals define the same closed subset iff they have the
+same radical.
+\begin{proof}
+We write down a formula for $\rad(I)$ that will imply this at
+once.
+\begin{lemma} \label{radprimescontaining} For a commutative ring $R$ and an
+ideal $I \subset
+R$,
+\[ \rad(I) = \bigcap_{\mathfrak{p} \supset I} \mathfrak{p}. \]
+\end{lemma}
+From this, it follows that $V(I)$ determines $\rad(I)$. This
+will thus imply
+the proposition.
+We now prove the lemma:
+\begin{proof}
+\begin{enumerate}
+\item We show $\rad(I) \subset \bigcap_{\mathfrak{p} \in V(I)}
+\mathfrak{p} $. This
+follows if we show that if a prime contains
+$I$, it contains $\rad(I)$; but we have already
+discussed this above.
+\item If $x \notin \rad(I)$, we will show that there is a prime
+ideal $\mathfrak{p}
+\supset I$ not containing $x$. This will imply the reverse
+inclusion and the
+lemma.
+\end{enumerate}
+
+
+We want to find $\mathfrak{p}$ not containing $x$, more
+generally not
+containing any power of $x$. In particular, we want
+$\mathfrak{p} \cap \left\{1,
+x, x^2 \dots, \right\} = \emptyset$. This set $S = \left\{1, x,
+\dots\right\}$
+is multiplicatively closed, in that it contains 1 and is closed
+under
+finite products. Right now, it does not intersect $I$; we want to find
+a
+\emph{prime} containing $I$ that still does not intersect $\left\{x^n, n
+\geq 0\right\}$.
+
+
+More generally, we will prove:
+
+\begin{sublemma}\label{sublemmamultclosed}
+Let $S$ be a multiplicatively closed set in any ring $R$ and let
+$I$ be any ideal with $I \cap S =
+\emptyset$. There is a prime ideal $\mathfrak{p} \supset I$ that
+does not
+intersect $S$ (in fact, any ideal maximal with respect to the condition of
+not intersecting $S$ will do).
+\end{sublemma}
+In English, any ideal missing $S$ can be enlarged to a prime
+ideal missing $S$.
+This is actually a fancier version of a previous argument. We
+showed earlier that any ideal not
+containing the multiplicatively closed subset $\left\{1\right\}$
+can be
+contained in a prime ideal not containing $1$, in
+\rref{anycontainedinmaximal}.
+
+Note that the sublemma clearly implies the lemma when applied to
+$S =
+\left\{1, x, \dots\right\}.$
+
+\begin{proof}[Proof of the sublemma]
+Let $P = \left\{J: J \supset I, J \cap S = \emptyset \right\}$.
+Then $P$ is a
+poset with respect to inclusion. Note that $P \neq \emptyset$
+because $I \in P$. Also,
+for any nonempty linearly ordered subset of $P$, the union is in
+$P$ (i.e. there is an
+upper bound).
+We can invoke Zorn's lemma to get a maximal element of $P$. This
+element is an
+ideal $\mathfrak{p} \supset I$ with $\mathfrak{p} \cap S =
+\emptyset$. We claim
+that $\mathfrak{p}$ is prime.
+
+First of all, $1 \notin \mathfrak{p}$ because $1 \in S$. We need
+only check
+that if $xy \in \mathfrak{p}$, then $x \in \mathfrak{p}$ or $y
+\in
+\mathfrak{p}$. Suppose otherwise, so $x,y \notin \mathfrak{p}$.
+Then $(x,\mathfrak{p}) \notin P$ or
+$\mathfrak{p}$ would not be maximal. Ditto for $(y,
+\mathfrak{p})$.
+
+In particular, we have that these bigger ideals both intersect
+$S$. This means
+that there are
+\[ a \in \mathfrak{p} , r \in R \quad \text{such that}\quad a+rx
+\in S \]
+and
+\[ b \in \mathfrak{p} , r' \in R \quad \text{such that}\quad
+b+r'y \in S .\]
+Now $S$ is multiplicatively closed, so the product $(a+rx)(b+r'y)
+\in S$.
+We find:
+\[ ab + ar'y+brx+rr'xy \in S. \]
+Now $a,b \in \mathfrak{p}$ and $xy \in \mathfrak{p}$, so all the
+terms above are in $\mathfrak{p}$, and the sum is too. But this contradicts
+$\mathfrak{p}
+\cap S = \emptyset$.
+\end{proof}
+\end{proof}
+\end{proof}
+The upshot of the previous lemmata is:
+\begin{proposition}
+There is a bijection between the closed subsets of $\spec R$ and
+radical ideals
+$I \subset R$.
+\end{proposition}
+
+\subsection{A meta-observation about prime ideals}
+
+We saw in the previous subsection (\cref{sublemmamultclosed})
+that an ideal maximal with respect to the property of not intersecting a
+multiplicatively closed subset is prime.
+It turns out that this is the case for many such properties of ideals.
+A general method of seeing this was developed in \cite{LaRe08}.
+In this (optional) subsection, we digress to explain this phenomenon.
+
+If $I$ is an ideal and $a \in R$, we define the notation
+\[ (I:a) = \left\{ x\in R: xa \in I\right\} . \]
+More generally, if $J$ is an ideal, we define
+\[ (I:J) = \left\{x \in R: xJ \subset I\right\} . \]
+
+Let $R$ be a ring, and $\mathcal{F}$ a collection of ideals of $R$.
+We are interested in conditions that will guarantee that the maximal elements
+of $\mathcal{F}$ are \emph{prime}.
+Actually, we will do the opposite: the following condition will guarantee that
+the ideals maximal with respect to \emph{not} being in $\mathcal{F}$ are prime.
+
+\begin{definition} \label{okafamily}
+The family $\mathcal{F}$ is called an \textbf{Oka family} if $R \in
+\mathcal{F}$ (where $R$ is considered as an ideal) and whenever $I \subset R$ is an
+ideal and $(I:a), (I,a) \in \mathcal{F}$ (for some $a \in R$), then $I \in
+\mathcal{F}$.
+\end{definition}
+
+\begin{example} \label{exm:okacard}
+Let us begin with a simple observation. If $(I:a)$ is generated by
+$a_1, \dots, a_n$ and $(I,a)$ is generated by $a, b_1, \dots, b_m$ (where we
+may take
+$b_1, \dots, b_m \in I$, without loss of generality), then $I$
+is generated by $aa_1, \dots, aa_n, b_1, \dots, b_m$.
+To see this, note that if $x \in I$, then $x \in (I,a)$ is a linear
+combination of the $\left\{a, b_1, \dots, b_m\right\}$, but the coefficient of
+$a$ must
+lie in $(I:a)$.
+
+As a result, we may deduce that
+the family of finitely generated ideals is an Oka family.
+\end{example}
+
+\begin{example}
+Let us now show that the family of \emph{principal} ideals is an Oka family.
+Indeed, suppose $I \subset R$ is an ideal, and $(I,a)$ and $(I:a)$ are
+principal.
+One can easily check that
+$(I:a) = (I: (I, a))$.
+Setting $J = (I,a)$, we find that $J$ is principal and $(I:J)$ is too.
+However, for \emph{any} principal ideal $J$, and for any ideal $I \subset J$,
+\[ I = J (I: J) \]
+as one easily checks. Thus we find in our situation that since $J=(I,a)$ and
+$(I:J)$
+are principal, $I$ is principal.
+\end{example}
+
+\begin{proposition}[\cite{LaRe08}]\label{okathm} If $\mathcal{F}$ is an Oka
+family of
+ideals, then any maximal element of the complement of $\mathcal{F}$ is prime.
+\end{proposition}
+\begin{proof}
+Suppose $I \notin \mathcal{F}$ is maximal with respect
+to not being in $\mathcal{F}$
+but $I$ is not prime. Note that $I \neq R$ by hypothesis.
+Then there is $a \in R$ such that $(I:a), (I,a)$ both strictly contain $I$,
+so they must belong to $\mathcal{F}$.
+Indeed, we can find $a,b \in R - I$ with $ab \in I$; it follows that $(I,a)
+\neq I$ and $(I:a)$ contains $b \notin I$.
+
+By the Oka condition, we have $I \in
+\mathcal{F}$, a contradiction.
+\end{proof}
+
+\begin{corollary}[Cohen] \label{primenoetherian}
+If every prime ideal of $R$ is finitely generated, then every ideal of $R$ is
+finitely generated.\footnote{Later we will say that $R$ is \emph{noetherian.}}
+\end{corollary}
+
+\begin{proof}
+Suppose that there existed ideals $I \subset R$ which were not finitely
+generated.
+The union of a totally ordered chain $\left\{I_\alpha\right\}$ of ideals that
+are not finitely generated is not finitely
+generated; indeed, if $I = \bigcup I_\alpha$ were generated by $a_1, \dots,
+a_n$, then all the generators would belong to some $I_\alpha $ and would
+consequently generate it.
+
+By Zorn's lemma, there is an ideal maximal with respect to being not finitely
+generated. However, by \rref{okathm}, this ideal is necessarily
+prime (since the family of finitely generated ideals is an Oka family). This contradicts the hypothesis.
+\end{proof}
+
+\begin{corollary}
+If every prime ideal of $R$ is principal, then every ideal of $R$ is principal.
+\end{corollary}
+\begin{proof}
+This is proved in the same way.
+\end{proof}
+
+\begin{exercise}
+Suppose every nonzero prime ideal in $R$ contains a non-zerodivisor. Then $R$
+is a domain. (Hint: consider the set $S$ of nonzerodivisors, and argue that
+any ideal maximal with respect to not intersecting $S$ is prime. Thus, $(0)$
+is prime.)
+\end{exercise}
+
+
+\begin{remark}
+\label{remark-cohen-bound-cardinality}
+Let $R$ be a ring. Let $\kappa$ be an infinite cardinal.
+By applying
+\rref{exm:okacard} and
+\rref{okathm}
+we see that any ideal maximal with respect to the property of not being
+generated by $\kappa$ elements is prime. This result is not so
+useful because there exists a ring for which every prime ideal
+of $R$ can be generated by $\aleph_0$ elements, but some
+ideal cannot. Namely, let $k$ be a field, let $T$ be a set whose
+cardinality is greater than $\aleph_0$ and let
+\[ R = k[\{x_n\}_{n \geq 1}, \{z_{t, n}\}_{t \in T, n \geq 0}]/
+(x_n^2, z_{t, n}^2, x_n z_{t, n} - z_{t, n - 1}). \]
+This is a local ring with unique prime ideal
+$\mathfrak m = (\{x_n\}_{n \geq 1})$. But the ideal $(\{z_{t, n}\}_{t \in T, n \geq 0})$ cannot
+be generated by countably many elements.
+\end{remark}
+
+\subsection{Functoriality of $\spec$}
+ The construction $R \to \spec R$ is functorial in $R$ in a
+contravariant sense. That is, if $f: R \to R'$, there is a
+continuous map $\spec
+R' \to \spec R$. This map sends $\mathfrak{p} \subset R'$ to
+$f^{-1}(\mathfrak{p}) \subset R$, which is easily seen to be a
+prime ideal
+in $R$. Call this map $F: \spec R' \to \spec R$. So far, we have
+seen that
+$\spec$ induces a contravariant functor $\mathbf{Rings}
+\to \mathbf{Sets}$.
+
+\begin{exercise}
+A contravariant functor $F: \mathcal{C} \to \mathbf{Sets}$ (for
+some category
+$\mathcal{C}$) is called \textbf{representable} if it is
+naturally isomorphic
+to a functor of the form $X \to \hom(X, X_0)$ for some $X_0 \in
+\mathcal{C}$,
+or equivalently if the induced covariant functor on
+$\mathcal{C}^{\mathrm{op}}$ is corepresentable.
+
+The functor $R \to \spec R $ is not representable. (Hint:
+Indeed, a representable
+functor must send the initial object into a one-point set.)
+\end{exercise}
+
+Next, we check that the morphisms induced on $\spec$'s from a
+ring-homomorphism are in fact \emph{continuous} maps of
+topological spaces.
+
+\begin{proposition}
+$\spec $ induces a contravariant functor from $\mathbf{Rings}$
+to the category
+$\mathbf{Top}$ of topological spaces.
+\end{proposition}
+\begin{proof} Let $f : R \to R'$.
+We need to check that this map $ \spec R' \to \spec R$, which we
+call $F$, is
+continuous.
+That is, we must check that $F^{-1}$ sends closed
+subsets of $\spec R$ to closed subsets of $\spec R'$.
+
More precisely, if $I \subset
R$ is an ideal and we take the inverse image $F^{-1}(V(I)) \subset \spec
R'$, it is just
the closed set $V(f(I))$. Indeed, if $\mathfrak{p} \in \spec R'$, then
$F(\mathfrak{p}) = f^{-1}(\mathfrak{p})
\supset I$ if and only if $\mathfrak{p} \supset f(I)$. So
$F(\mathfrak{p}) \in
V(I)$ if and only if $\mathfrak{p} \in V(f(I))$.
+
+\end{proof}
+
+
+
+\begin{example}
+Let $R$ be a commutative ring, $I \subset R$ an ideal, $f: R \to
+R/I$. There is a map
+of topological spaces
+\[ F: \spec (R/I) \to \spec R .\]
+This map is a closed embedding whose image is $V(I)$. Most of
+this follows because
+there is a bijection between ideals of $R$ containing $I$ and
+ideals of $R/I$, and this bijection preserves primality.
+
+\begin{exercise}
+Show that this map $\spec R/I \to \spec R$ is indeed a
+homeomorphism from $\spec R/I
+\to V(I)$.
+\end{exercise}
+\end{example}
+
+
+\subsection{A basis for the Zariski topology}
+In the previous section, we were talking about the Zariski
+topology. If $R$ is a
+commutative ring, we recall that $\spec R$ is defined to be the
+collection of
+prime ideals in $R$. This has a topology where the closed sets
+are the sets of
+the form
+\[ V(I) = \left\{\mathfrak{p} \in \spec R: \mathfrak{p} \supset
+I\right\} . \]
+There is another way to describe the Zariski topology in terms
+of
+\emph{open} sets.
+
+\begin{definition}
+If $f \in R$, we let
+\[ U_f = \left\{\mathfrak{p}: f \notin \mathfrak{p}\right\} \]
+so that $U_f$ is the subset of $\spec R$ consisting of primes
+not containing
+$f$. This is the complement of $V((f))$, so it is open.
+\end{definition}
+
+\begin{proposition}
+The sets $U_f$ form a basis for the Zariski topology.
+\end{proposition}
+
+\begin{proof}
+Suppose $U \subset \spec R$ is open. We claim that $U$ is a
+union of basic
+open sets $U_f$.
+
+Now $U = \spec R - V(I)$ for some ideal $I$. Then
+\[ U = \bigcup_{f \in I} U_f \]
+because if an ideal is not in $V(I)$, then it fails to contain
+some $f \in I$,
+i.e. is in $U_f$ for that $f$. Alternatively, we could take
+complements, whence
+the above statement becomes
+\[ V(I) = \bigcap_{f \in I} V((f)) \]
+which is clear.
+\end{proof}
+
+The basic open sets have nice properties.
+\begin{enumerate}
+\item $U_1 = \spec R$ because prime ideals are not allowed to
+contain the
+unit element.
+\item $U_0 = \emptyset$ because every prime ideal contains $0$.
+\item $U_{fg} = U_f \cap U_g$ because $fg$ lies in a prime ideal
+$\mathfrak{p}$ if and only if one of $f,g$ does.
+\end{enumerate}
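These identities can be checked concretely in $\spec \mathbb{Z}$, where the nonzero primes are the ideals $(p)$ and $(p) \in U_f$ exactly when $p \nmid f$. Here is a small Python sketch (the helper names are our own, and we only look at primes below a bound, which suffices for illustration):

```python
# Basic open sets U_f in Spec(Z): a nonzero prime ideal (p) lies in
# U_f exactly when p does not divide f.  We only look at primes below
# a bound, which is enough to illustrate the identities.

def primes_below(n):
    """Primes p < n, each standing for the prime ideal (p) of Z."""
    return [p for p in range(2, n) if all(p % d for d in range(2, p))]

def U(f, bound=50):
    """Primes p < bound with f not in (p), i.e. p does not divide f."""
    return {p for p in primes_below(bound) if f % p != 0}

# U_1 = Spec R and U_0 = empty set:
assert U(1) == set(primes_below(50))
assert U(0) == set()

# U_{fg} = U_f ∩ U_g, since fg lies in (p) iff f or g does:
for f in range(1, 30):
    for g in range(1, 30):
        assert U(f * g) == U(f) & U(g)
```

(The generic point $(0)$, which lies in every $U_f$ with $f \neq 0$, is omitted from the model.)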
+
+Now let us describe what the Zariski topology has to do with
+localization.
+Let $R$ be a ring and $f \in R$. Consider $S = \left\{1, f, f^2,
+\dots
+\right\}$; this is a multiplicatively closed subset. Last week,
+we defined
+$S^{-1}R$.
+
+\begin{definition}
+For $S$ the powers of $f$, we write $R_f$ or $R[f^{-1}]$ for the
+localization $S^{-1}R$.
+\end{definition}
+
+There is a map $\phi: R \to R[f^{-1}]$ and a corresponding map
+\[ \spec R[f^{-1}] \to \spec R \]
+sending a prime $\mathfrak{p} \subset R[f^{-1}]$ to
+$\phi^{-1}(\mathfrak{p})$.
+
+\begin{proposition}
+This map induces a homeomorphism of $\spec R[f^{-1}]$ onto $U_f
+\subset \spec
+R$.
+\end{proposition}
+So if one takes a commutative ring and inverts an element, one
+just gets an open
+subset of $\spec$. This is why it's called localization: one is
+restricting to
+an open subset on the $\spec $ level when one inverts something.
+
+\begin{proof}
+The reader is encouraged to work this proof out for herself.
+
+\begin{enumerate}
+\item
+First, we show that $\spec R[f^{-1}] \to \spec R$ lands in
+$U_f$. If
+$\mathfrak{p} \subset R[f^{-1}]$, then we must show that the
+inverse image
$\phi^{-1}(\mathfrak{p})$ cannot contain $f$. Otherwise, we would have
$\phi(f) \in \mathfrak{p}$; but $\phi(f)$ is invertible,
so
$\mathfrak{p}$ would be the unit ideal $(1)$, a contradiction.
\item Let us show that the map surjects onto $U_f$. Suppose
$\mathfrak{p} \subset R$ is a prime
ideal not containing $f$, i.e.\ $\mathfrak{p} \in U_f$. We want
+to construct a
+corresponding prime in the ring $R[f^{-1}]$ whose inverse image
+is $\mathfrak{p}$.
+
+Let $\mathfrak{p}[f^{-1}]$ be the collection of all fractions
\[ \left\{\frac{x}{f^n} : x \in \mathfrak{p}\right\} \subset R[f^{-1}], \]
+which is evidently an ideal. Note that whether the numerator is
+in
+$\mathfrak{p}$ is \textbf{independent} of the
+representing fraction $\frac{x}{f^n}$ used.\footnote{Suppose
+$\frac{x}{f^n} =
+\frac{y}{f^k}$ for $y \in \mathfrak{p}$. Then there is $N$ such
+that
+$f^N(f^k x - f^n y) = 0 \in \mathfrak{p}$; since $y \in
+\mathfrak{p}$ and $f
+\notin \mathfrak{p}$, it follows that $x \in \mathfrak{p}$.}
+In fact, $\mathfrak{p}[f^{-1}]$ is a prime ideal. Indeed,
+suppose
+\[ \frac{a}{f^m} \frac{b}{f^n} \in \mathfrak{p}[f^{-1}] .\]
+Then $\frac{ab}{f^{m+n}}$ belongs to this ideal, which means $ab
+\in
+\mathfrak{p}$; so one of $a,b \in \mathfrak{p}$ and one of the
+two fractions
+$\frac{a}{f^m}, \frac{b}{f^n}$ belongs to
+$\mathfrak{p}[f^{-1}]$. Also, $1/1
+\notin \mathfrak{p}[f^{-1}]$.
+
+It is clear that the inverse image of $\mathfrak{p}[f^{-1}]$ is
+$\mathfrak{p}$,
+because the image of $x \in R$ is $x/1$, and this belongs to
+$\mathfrak{p}[f^{-1}]$ precisely when $x \in \mathfrak{p}$.
+\item The map $\spec R[f^{-1}] \to \spec R$ is injective.
+Suppose
+$\mathfrak{p}, \mathfrak{p'}$ are prime ideals in the
+localization and the
+inverse images are the same.
+We must show that $\mathfrak{p} = \mathfrak{p'}$.
+
+Suppose $\frac{x}{f^n} \in \mathfrak{p}$. Then $x/1 \in
+\mathfrak{p}$, so $x
+\in \phi^{-1}(\mathfrak{p}) = \phi^{-1}(\mathfrak{p}')$. This
+means that $x/1
+\in \mathfrak{p}'$, so
+$\frac{x}{f^n} \in \mathfrak{p}'$ too. So a fraction that
+belongs to
+$\mathfrak{p}$ belongs to $\mathfrak{p}'$. By symmetry the two
+ideals must be
+the same.
+\item We now know that the map $\psi: \spec R[f^{-1}] \to U_f$
+is a continuous
+bijection. It is left to see that it is a homeomorphism. We will
+show that it
+is open.
+In particular, we have to show that a basic open set on the left
+side is mapped
+to an open set on the right side.
+If $y/f^n \in R[f^{-1}]$, we have to show that $U_{y/f^n}
+\subset \spec
+R[f^{-1}]$ has open image under $\psi$. We'll in fact show what
+open set it is.
+
+We claim that
+\[ \psi(U_{y/f^n}) = U_{fy} \subset \spec R. \]
To see this, suppose $\mathfrak{p} \in U_{y/f^n}$. This
means that
$\mathfrak{p}$ doesn't contain $y/f^n$. In particular,
+$\mathfrak{p}$ doesn't
+contain the multiple $yf/1$. So $\psi(\mathfrak{p})$ doesn't
+contain $yf$.
+This proves the inclusion $\subset$.
+
+\item
+To complete the proof of the claim, and
+the result, we must show that if $\mathfrak{p} \subset \spec
+R[f^{-1}]$ and
+$\psi(\mathfrak{p}) = \phi^{-1}(\mathfrak{p}) \in U_{fy}$, then
+$y/f^n$ doesn't
+belong to $\mathfrak{p}$. (This is kosher and dandy because we
+have a bijection.) But the hypothesis implies that $fy \notin
+\phi^{-1}(\mathfrak{p})$, so $fy/1 \notin \mathfrak{p}$.
+Dividing by $f^{n+1}$
+implies that
+\[ y/f^{n} \notin \mathfrak{p} \]
and $\mathfrak{p} \in U_{y/f^n}$.
+\end{enumerate}
+\end{proof}
+
+If $\spec R$ is a space, and $f$ is thought of as a ``function''
+defined on
+$\spec R$, the space $U_f$ is to be thought of as the set of
+points where $f$
+``doesn't vanish'' or ``is invertible.''
+Thinking about rings in terms of their spectra is a very useful
+idea. We will bring it up when appropriate.
+
+\begin{remark}
+The construction $R \to R[f^{-1}]$ as discussed above is an
+instance of
+localization. More generally, we defined $S^{-1}R$ for $S
+\subset R$
+multiplicatively closed. We can thus define maps
+\( \spec S^{-1}R \to \spec R . \)
To understand $S^{-1}R$, it may help to note that
\[ S^{-1}R = \varinjlim_{f \in S} R[f^{-1}], \]
which is a direct limit of rings where one inverts more and more elements.
+
As an example, consider $S = R - \mathfrak{p}$ for a prime
$\mathfrak{p}$, and assume for
simplicity that $R$ is countable. We can write $S =
+S_0 \cup S_1 \cup \dots$, where each $S_k$ is generated by a
+finite number of
+elements $f_0, \dots, f_k$. Then $R_{\mathfrak{p}} = \varinjlim
+S_k^{-1} R$.
+So we have
+\[ S^{-1}R = \varinjlim_k R[f_0^{-1} , f_1^{-1}, \dots, f_k^{-1}
+] = \varinjlim
+R[(f_0\dots f_k)^{-1}]. \]
The functions we invert in this construction are precisely those
which do not
lie in $\mathfrak{p}$, i.e.\ the functions that ``don't vanish at $\mathfrak{p}$.''
+
The geometric idea is
that to construct $\spec S^{-1}R = \spec R_{\mathfrak{p}}$, we
keep cutting out
from $\spec R$ the vanishing loci of various functions not lying in
$\mathfrak{p}$, so that these loci do not
contain the point $\mathfrak{p}$. In the end, one does not restrict to an
open set, but
to an intersection of open sets.
+\end{remark}
+\begin{exercise} \label{semilocal}
+Say that $R$ is \emph{semi-local} if it has finitely many maximal ideals.
+Let $\mathfrak{p}_1$, \dots, $\mathfrak{p}_n\subset R$ be primes. The complement of
+the union, $S=R\smallsetminus \bigcup \mathfrak{p}_i$, is closed under
+multiplication, so we can
+ localize. $R[S^{-1}] = R_S$ is called the \emph{semi-localization}
+ \index{semi-localization} of $R$ at the $\mathfrak{p}_i$.
+
The result of semi-localization is always semi-local. To see this, recall that
the prime ideals
 in $R_S$ are in bijection with the prime ideals in $R$ contained in $\bigcup
 \mathfrak{p}_i$. Now use prime avoidance.
+\end{exercise}
+
+\begin{definition}
+For a finitely generated $R$-module $M$, define $\mu_R(M)$ to be the smallest
+number
+ of elements that can generate $M$.
+ \end{definition}
This is not the same as the cardinality of a generating set that is minimal
under inclusion. For
example, $\{2, 3\}$ is a generating set for $\mathbb{Z}$ over itself that is minimal under inclusion, but
$\mu_\mathbb{Z} (\mathbb{Z}) =1$.
+
+ \begin{theorem}
+Let $R$ be semi-local with maximal ideals $\mathfrak{m}_1,\dots,
+\mathfrak{m}_n$. Let $k_i = R/\mathfrak{m}_i$. Then
+ \[
+ \mathfrak{m}u_R(M) = \mathfrak{m}ax \{\dim_{k_i} M/\mathfrak{m}_i M\}
+ \]
+ \end{theorem}
+\begin{proof}
+\add{proof}
+\end{proof}
+
+\section{Nilpotent elements}
+
We will now prove a few general results about nilpotent elements in a ring.
+Topologically, the nilpotents do very little: quotienting by them will not
+change the $\spec$. Nonetheless, they carry geometric importance, and one
+thinks of these nilpotents as ``infinitesimal thickenings'' (in a sense to be
+elucidated below).
+
+\subsection{The radical of a ring}
+There is a useful corollary of the analysis in the previous section about the
+$\spec$ of a ring.
+
+\begin{definition}
+ $x \in R$ is called \textbf{nilpotent} if a power of $x$ is zero. The set of
+ nilpotent elements in $R$ is called the \textbf{radical} of $R$ and is denoted
+ $\rad(R)$ (which is an abuse of notation).
+\end{definition}
+
+The set of nilpotents is just the radical $\rad((0))$ of the zero ideal, so it
+is an ideal. It can vary greatly.
+A domain clearly has no nonzero nilpotents. On the other hand, many rings do:
+
+\begin{example}
+For any $n \geq 2$, the ring $\mathbb{Z}[X]/(X^n)$ has a nilpotent, namely $X$.
+The ideal of nilpotent elements is $(X)$.
+\end{example}
+
+It is easy to see that a nilpotent must lie in any prime ideal. The converse
+is also true by the previous analysis.
+As a corollary of it, we find in fact:
+
+\begin{corollary} \label{nilradicalisprimes}
+Let $R$ be a commutative ring. Then the set of nilpotent elements of $R$ is
+precisely $\bigcap_{\mathfrak{p} \in \spec R} \mathfrak{p}$.
+\end{corollary}
+\begin{proof}
+Apply \rref{radprimescontaining} to the zero ideal.
+\end{proof}
+
+We now consider a few examples of nilpotent elements.
+\begin{example}[Nilpotents in polynomial rings]
Let us now compute the nilpotent elements in the polynomial ring $R[x]$.
+The claim is that a polynomial $\sum_{m=0}^n a_m x^m \in R[x]$ is nilpotent if
+and only
+if all the coefficients $a_m \in R$ are nilpotent. That is, $\rad(R[x]) =
+(\rad(R))R[x]$.
+
+If $a_0,\ldots,a_n$ are nilpotent, then because the nilpotent
+elements form an ideal, $f=a_0+\cdots+a_nx^n$ is nilpotent. Conversely,
+if $f$ is nilpotent, then $f^m=0$ and thus $(a_nx^n)^m=0$. Thus $a_nx^n$
+is nilpotent, and because the nilpotent elements form an ideal, $f-a_nx^n$
+is nilpotent. By induction, $a_ix^i$ is nilpotent for all $i$, so that all
+$a_i$ are nilpotent.
+\end{example}
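To make the criterion concrete, here is a small Python sketch (an illustration of ours, not part of the text) verifying it in $(\mathbb{Z}/8)[x]$, where $2$ is nilpotent:

```python
# Nilpotents in R[x] for R = Z/8: a polynomial is nilpotent iff all of
# its coefficients are.  Polynomials are coefficient lists
# [a_0, a_1, ...] with arithmetic done mod 8.

M = 8  # coefficient ring Z/8, in which 2 is nilpotent (2^3 = 0)

def poly_mul(f, g):
    """Product in (Z/M)[x], coefficients reduced mod M."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % M
    return h

def is_zero(f):
    return all(c == 0 for c in f)

# f = 2 + 4x + 6x^2 has every coefficient nilpotent mod 8 ...
f = [2, 4, 6]
power = [1]
for _ in range(9):          # f^9 = 0 is more than enough here
    power = poly_mul(power, f)
assert is_zero(power)

# ... while g = 2 + x has the unit leading coefficient 1, so no power
# of g ever vanishes.
g = [2, 1]
power = [1]
for _ in range(9):
    power = poly_mul(power, g)
assert not is_zero(power)
```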
+
+Before the next example, we need to define a new notion.
We now define power series rings, intuitively in much the same way they are used in
calculus. In fact, we will use power series rings much the same way we used them
in calculus; they will serve to keep track of fine local data that the
+Zariski topology might ``miss'' due to its coarseness.
+\begin{definition} \label{powerseriesring} Let $R$ be a ring. The \textbf{power series ring} $R[[x]]$ is just the set of all
+expressions of the form $\sum_{i=0}^\infty c_i x^i$. The arithmetic for the
+power series ring will be done term by term formally (since we have no topology,
+we can't consider questions of convergence, though a natural topology can be
+defined making $R[[x]]$ the \emph{completion} of another ring, as we shall
+see later). \end{definition}
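The formal arithmetic can be sketched in Python by truncating series at a fixed order $N$ (the truncation order and helper names are our own choices); the product below is the formal Cauchy product, with no convergence questions involved:

```python
# Formal (term-by-term) arithmetic in R[[x]], truncated modulo x^N.
# A series is the coefficient list [c_0, ..., c_{N-1}]; the product is
# the purely formal Cauchy product.

N = 6

def series_add(f, g):
    return [a + b for a, b in zip(f, g)]

def series_mul(f, g):
    """Cauchy product: the coefficient of x^k is sum_{i+j=k} f_i g_j."""
    h = [0] * N
    for i in range(N):
        for j in range(N - i):
            h[i + j] += f[i] * g[j]
    return h

# In Z[[x]], 1 - x is already invertible: its inverse is the formal
# geometric series 1 + x + x^2 + ... (truncated here at x^N).
one_minus_x = [1, -1, 0, 0, 0, 0]
geometric = [1] * N
assert series_mul(one_minus_x, geometric) == [1, 0, 0, 0, 0, 0]
assert series_add(geometric, geometric) == [2] * N
```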
+
+
+
+
+\begin{example}[Nilpotence in power series rings]
+Let $R$ be a ring such that $\rad(R)$ is a finitely generated ideal. (This is
+satisfied, e.g., if $R$ is \emph{noetherian}, cf. \rref{noetherian}.)
+Let us consider the question of how $\rad(R)$ and $\rad(R[[x]])$ are related.
+The claim is that
+\[ \rad(R[[x]]) = (\rad(R))R[[x]]. \]
+
If $f = \sum_{i \geq 0} a_i x^i \in R[[x]]$ is nilpotent, say with $f^n=0$, then
certainly $a_0^n=0$, so that $a_0$ is nilpotent. Because the nilpotent elements
+form an ideal, we have that $f-a_0$ is also nilpotent, and hence by induction
+every coefficient of $f$ must be nilpotent in $R$.
+For the converse, let $I =
+\rad(R)$. There
+exists an $N>0$ such that the ideal power $I^N = 0$ by finite generation.
+Thus if $f \in IR[[x]]$, then $f^N \in I^N R[[x]] = 0$.
+\end{example}
+\begin{exercise} \label{nilpcriterion}
+Prove that $x \in R$ is nilpotent if and only if the localization $R_x$ is the
+zero ring.
+\end{exercise}
+
+\begin{exercise}
+Construct an example where $\rad(R) R[[x]] \neq \rad(R[[x]])$. (Hint: consider
+$R = \mathbb{C}[X_1, X_2, X_3, \dots]/(X_1, X_2^2, X_3^3, \dots)$.)
+\end{exercise}
+
+\subsection{Lifting idempotents}
+
+If $R$ is a ring, and $I \subset R$ a nilpotent ideal, then we want to think
+of $R/I$ as somehow close to $R$. For instance, the inclusion $\spec R/I
+\hookrightarrow \spec R$ is a homeomorphism, and one pictures that $\spec R$
+has some ``fuzz'' added (with the extra nilpotents in $I$) that is killed in
+$\spec R/I$.
+
+One manifestation of the ``closeness'' of $R$ and $R/I$ is the following
+result, which states that the idempotent elements\footnote{Recall that an
+element $e \in R$ is idempotent if $e^2 = e$.} of the two are in natural
+bijection.
+For convenience, we state it in additional generality (that is, for
+noncommutative rings).
+
+\begin{lemma}[Lifting idempotents]
+Suppose $I \subset R$ is a nilpotent two-sided ideal, for $R$ any\footnote{Not
+necessarily commutative.} ring. Let
+$\overline{e} \in R/I$ be an idempotent. Then there is an idempotent $e
+\in R$ which reduces to $\overline{e}$.
+\end{lemma}
+
+Note that if $J$ is a two-sided ideal in a noncommutative ring, then so are the
+powers of $J$.
+
+\begin{proof} Let us first assume that $I^2 = 0$.
We can find $e_1 \in R$ which reduces to $\overline{e}$, but $e_1$ is not necessarily
idempotent.
+By replacing $R$ with $\mathbb{Z}[e_1]$ and $I$ with $\mathbb{Z}[e_1] \cap I$,
+we may assume that $R$ is in fact commutative.
+However,
+\[ e_1^2 \in e_1 + I. \]
+Suppose we want to modify $e_1$ by $i$ such that $e = e_1 + i$ is
+idempotent and $i \in I$; then $e$ will do as in the lemma. We would then
+necessarily have
\[ e_1 + i = (e_1 + i)^2 = e_1^2 + 2e_1 i\quad \text{as } I^2 =0 . \]
+In particular, we must satisfy
+\[ i(1-2e_1) = e_1^2 - e_1 \in I. \]
+
+We claim that $1 - 2e_1 \in R$ is invertible, so that we can solve for $i \in I$.
+However, $R$ is commutative. It thus suffices to check that $1 - 2e_1$ lies in
no maximal ideal of $R$. But $e_1^2 - e_1 \in I$ is nilpotent, hence lies in every
maximal ideal $\mathfrak{m} \subset R$, so the image of $e_1$ in the field $R/\mathfrak{m}$ is idempotent, i.e.\ either zero or one. So $1 - 2e_1$ has
image either $1$ or $-1$ in $R/\mathfrak{m}$. Thus it is invertible.
+
+This establishes the result when $I$ has zero square. In general, suppose $I^n
+= 0$. We have the sequence of noncommutative rings:
+\[ R \twoheadrightarrow R/I^{n-1} \twoheadrightarrow R/I^{n-2} \dots
+\twoheadrightarrow R/I. \]
The kernel at each step is an ideal whose square is zero. Thus, we can apply the
partial result on lifting idempotents proved above at each step of the way and lift
$\overline{e} \in R/I$ to some $e \in R$.
+\end{proof}
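A concrete instance of the lemma, sketched in Python: take $R = \mathbb{Z}/100$ and $I = (10)$, so that $I^2 = (100) = 0$ and $R/I = \mathbb{Z}/10$. One standard way to realize the lift (our choice of formula, not taken from the proof above) is the Newton-type step $e \mapsto 3e^2 - 2e^3$:

```python
# Lifting idempotents along Z/100 -> Z/10.  The kernel ideal I = (10)
# satisfies I^2 = (100) = 0 in Z/100, so the lemma applies.

def lift(e_bar, big):
    """One Newton-type step e |-> 3e^2 - 2e^3 in Z/big.  If e_bar is
    idempotent modulo an ideal I with I^2 = 0, the output is a genuine
    idempotent with the same reduction."""
    return (3 * e_bar ** 2 - 2 * e_bar ** 3) % big

for e_bar in [0, 1, 5, 6]:          # all the idempotents of Z/10
    e = lift(e_bar, 100)
    assert (e * e - e) % 100 == 0   # idempotent in Z/100
    assert e % 10 == e_bar          # reduces to the original idempotent
```

For instance, the idempotent $6 \in \mathbb{Z}/10$ lifts to $76 \in \mathbb{Z}/100$.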
+
+
+While the above proof has the virtue of applying to noncommutative rings,
+there is a more conceptual argument for commutative rings. The idea is that
+idempotents in $A$ measure disconnections of $\spec A$.\footnote{More
+generally, in any \emph{ringed space} (a space with a sheaf of rings), the
+idempotents in the ring of global sections correspond to the disconnections of
+the topological space.} Since the topological space underlying $\spec A$ is
+unchanged when one quotients by nilpotents, idempotents are unaffected.
+We prove:
+
+\begin{proposition} If $X = \mathrm{Spec} \ A$, then there is a one-to-one
+correspondence between $\idem(A)$ and the open and closed subsets of $X$.
+\end{proposition}
\begin{proof} Suppose $I$ is the radical of $(e)$ for an
idempotent $e \in R$. We show that $V(I)$ is open and closed. Since $V$ is
+unaffected by passing to the radical, we will assume without loss of
+generality that
+\[ I = (e). \]
+
+I claim that $\spec R - V(I)$ is just $V(1-e) = V((1-e))$. This is a closed
+set, so proving this claim will imply that $V(I)$ is open. Indeed,
+$V(e)=V((e))$ cannot intersect $V(1-e)$ because if
+\[ \mathfrak{p} \in V(e) \cap V(1-e), \]
+then $e, 1-e \in \mathfrak{p}$, so $1 \in \mathfrak{p}$. This is a
+contradiction since $\mathfrak{p}$ is necessarily prime.
+
+Conversely, suppose that $\mathfrak{p} \in \spec R$ belongs to neither $V(e)$
+nor $V(1-e)$. Then $e \notin \mathfrak{p}$ and $1-e \notin \mathfrak{p}$. So
+the product
+\[ e(1-e) = e-e^2 = 0 \]
+cannot lie in $\mathfrak{p}$. But necessarily $0 \in \mathfrak{p}$,
+contradiction. So $V(e) \cup V(1-e) = \spec R$. This implies the claim.
+
+Next, we show that if $V(I)$ is open, then $I$ is the radical of $(e)$ for an
+idempotent $e$. For this it is sufficient to prove:
+
+\begin{lemma}
Let $I \subset R$ be such that $V(I) \subset \spec R$ is open. Then $I$
is principal, generated by an idempotent $e \in R$.
+\end{lemma}
+\begin{proof}
+Suppose that $\spec R - V(I) = V(J)$ for some ideal $J \subset R$. Then the
intersection $V(I) \cap V(J) = V(I+J)$ is empty, so $I+J$ cannot be a
proper ideal (or it would be contained in a prime ideal, which would then lie in $V(I+J)$). In particular, $I+J =
R$. So we can write
+\[ 1 = x + y, \quad x \in I, y \in J. \]
+
+Now $V(I) \cup V(J) = V(IJ) = \spec R$. This implies that every element of
+$IJ$ is nilpotent by the next lemma.
+\end{proof}
+\begin{lemma}
+Suppose $V(X)=\spec R$ for $X \subset R$ an ideal. Then every element of $X$ is
+nilpotent.
+\end{lemma}
+\begin{proof}
+Indeed, suppose $x \in X$ were non-nilpotent. Then the ring $R_x$ is not the
+zero ring, so it has a prime ideal. The map $\spec R_x \to \spec R$ is, as
+discussed in class, a homeomorphism of $\spec R_x$ onto $D(x)$. So $D(x)
+\subset \spec R$ (the collection of primes not containing $x$) is nonempty. In
+particular, there is $\mathfrak{p} \in \spec R$ with $x \notin \mathfrak{p}$,
+so $\mathfrak{p} \notin V(X)$. So $V(X) \neq \spec R$, contradiction.
+\end{proof}
+
We now return to the proof of the main result. We have shown that every element of $IJ$ is nilpotent.
+In particular, in the expression $x+y=1$ we had earlier, we have that $xy$ is
+nilpotent. Say $(xy)^k = 0$. Then expand
+\[ 1 = (x+y)^{2k} = \sum_{i=0}^{2k} \binom{2k}{i}x^i y^{2k-i} = \sum' + \sum'' \]
+where $\sum'$ is the sum from $i=0$ to $i=k$ and $\sum''$ is the sum from
+$k+1$ to $2k$. Then $\sum' \sum'' = 0$ because in every term occurring in the
+expansion, a multiple of $x^k y^k$ occurs. Also, $\sum' \in I$ and $\sum'' \in
+J$ because $x \in I, y \in J$.
+
+All in all, we find that it is possible to write
+\[ 1 = x' + y', \quad x' \in I, y' \in J, \ x'y' = 0. \]
+(We take $x' = \sum', y' = \sum''$.)
+Then $x'(1-x') = 0$ so $x' \in I$ is idempotent. Similarly $y' = 1-x'$ is.
+We have that
+\[ V(I) \subset V(x'), \quad V(J) \subset V(y') \]
+and $V(x'), V(y')$ are complementary by the earlier arguments, so necessarily
+\[ V(I) = V(x'), \quad V(J) = V(y'). \]
+Since an ideal generated by an idempotent is automatically radical, it follows
+that:
\[ I = (x'), \quad J = (y'). \]
+\end{proof}
+
+
+There are some useful applications of this in representation theory, because
+one can look for idempotents in endomorphism rings; these indicate whether a module can be decomposed as a direct sum into smaller parts. Except, of course, that endomorphism rings aren't necessarily commutative and this proof breaks down.
+
+Thus we get:
+\begin{proposition} Let $A$ be a ring and $I$ a nilpotent ideal. Then
+$\idem(A) \to \idem(A/I)$ is bijective.
+\end{proposition}
+\begin{proof}
+Indeed, the topological spaces of $\mathrm{Spec} \ A$ and $\mathrm{Spec} \
A/I$ are the same. The result then follows from the preceding proposition.
+\end{proof}
+
+
+\subsection{Units}
+Finally, we make a few remarks on \emph{units} modulo nilideals.
+It is a useful and frequently used trick that adding a nilpotent does not
+affect the collection of units. This trick is essentially an algebraic version of
+the
+familiar ``geometric series;'' convergence questions do not appear thanks to
+nilpotence.
+
+\begin{example}
Suppose $u$ is a unit in a ring $R$ and $v \in R$ is nilpotent; we show that $u+v$ is a unit.

Suppose $ua=1$ and $v^m=0$ for some
$m>1$. Then
\[ (u+v)\cdot a\left(1-av+(av)^2-\cdots\pm(av)^{m-1}\right)
=(1-(-av))\left(1+(-av)+(-av)^2+\cdots+(-av)^{m-1}\right)=1-(-av)^m=1-0=1, \]
so $u+v$
is a unit.
+\end{example}
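Here is this computation carried out in Python for $R = \mathbb{Z}/81$, with $u = 2$ a unit and $v = 3$ nilpotent (the choice of ring is our own; the modular inverse `pow(u, -1, M)` needs Python 3.8+):

```python
# Unit plus nilpotent is a unit: in Z/81, u = 2 is a unit and v = 3 is
# nilpotent (3^4 = 81 = 0).  The "geometric series" inverse of u + v
# is a * (1 - av + (av)^2 - (av)^3) with a = u^{-1}; the series is
# finite because (av)^4 = 0.

M = 81
u, v = 2, 3
a = pow(u, -1, M)                   # u^{-1} mod 81 (Python 3.8+)
assert v ** 4 % M == 0              # v is nilpotent

inv = a * sum((-a * v) ** k for k in range(4)) % M
assert (u + v) * inv % M == 1       # u + v is a unit
```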
+
+
+
+
+So let $R$ be a ring, $I \subset R$ a nilpotent ideal \emph{of square zero}.
+Let $R^*$ denote the group of units in $R$, as usual, and let $(R/I)^*$ denote
+the
+group of units in $R/I$.
+We have an exact sequence of abelian groups:
+\[ 0 \to I \to R^* \to (R/I)^* \to 0 \]
where the second map is reduction and the first map sends $i \mapsto 1+i$.
+The hypothesis that $I^2 = 0$ shows that the first map is a homomorphism.
We should check that the last map is surjective. But if any $a \in R$ maps to a
unit in $R/I$, then $a$ lies in no prime ideal of $R$ (every prime contains the nilpotent ideal $I$), so $a$ is a unit itself.
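This exact sequence can be tested numerically (an illustration of ours) for $R = \mathbb{Z}/49$ and $I = (7)$, where $I^2 = (49) = 0$:

```python
import math

# The sequence 0 -> I -> R^* -> (R/I)^* -> 0 for R = Z/49, I = (7):
# here I^2 = (49) = 0, and i |-> 1 + i maps (I, +) into the units.

M = 49
I = [7 * k for k in range(7)]       # the ideal (7) in Z/49

# (1 + i)(1 + j) = 1 + i + j + ij = 1 + (i + j), since ij lies in
# I^2 = 0, so i |-> 1 + i is a group homomorphism:
for i in I:
    for j in I:
        assert (1 + i) * (1 + j) % M == (1 + i + j) % M

# Reduction (Z/49)^* -> (Z/7)^* is surjective: every unit mod 7 lifts.
units_mod_49 = [x for x in range(M) if math.gcd(x, M) == 1]
assert {x % 7 for x in units_mod_49} == {1, 2, 3, 4, 5, 6}
```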
+\section{Vista: sheaves on $\spec R$}
+
+\subsection{Presheaves}
+Let $X$ be a topological space.
+\begin{definition}
+A \textbf{presheaf of sets} $\mathcal{F}$ on $X$ assigns to
+every open subset
+$U \subset X$ a set $\mathcal{F} (U)$, and to every inclusion $U
+\subset V$ a
+\textbf{restriction map}
+$\mathrm{res}^V_U : \mathcal{F}(V) \to \mathcal{F}(U)$. The
+restriction map is
+required to satisfy:
+\begin{enumerate}
+\item $\res^U_U = \id_{\mathcal{F}(U)} $ for all open sets $U$.
+\item $\res^W_U = \res^V_U \circ \res^W_V $ if $U \subset V
+\subset W$.
+\end{enumerate}
+
If the sets $\mathcal{F}(U)$ are all groups (resp.\ rings), and
the restriction
maps are morphisms of groups (resp.\ rings), then we say that
$\mathcal{F}$ is a
presheaf of groups (resp.\ rings). Often the restriction of an
element $a\in \mathcal{F}(U)$ to an open subset $W \subset U$ is denoted $a|_W$.
+
+A \textbf{morphism} of presheaves $\mathcal{F} \to \mathcal{G}$ is a
+collection of maps $\mathcal{F}(U) \to \mathcal{G}(U)$ for each open set $U$,
+that commute with the restriction maps in the obvious way. Thus the collection
+of presheaves on a topological space forms a category.
+\end{definition}
+
+
+One should think of the restriction maps as kind of like
+restricting the
+domain of a function.
+The standard example of presheaves is given in this way, in
+fact.
+
+\begin{example}
+Let $X$ be a topological space, and $\mathcal{F}$ the presheaf
+assigning to
+each $U \subset X$ the set of continuous functions $U \to
+\mathbb{R}$. The
+restriction maps come from restricting the domain of a function.\end{example}
+
+Now, in classical algebraic geometry, there are likely to be
+more
+continuous functions in the Zariski topology than one really
+wants. One wants
+to focus on functions that are given by polynomial equations.
+
+\begin{example}
+Let $X$ be the topological space $\mathbb{C}^n$ with the
+topology where the
+closed sets are those defined by the zero loci of polynomials
+(that is, the
+topology induced on $\mathbb{C}^n$ from the Zariski topology of
+$\spec
+\mathbb{C}[x_1, \dots, x_n]$ via the canonical imbedding
+$\mathbb{C}^n
+\hookrightarrow \spec \mathbb{C}[x_1, \dots, x_n]$). Then there
+is a presheaf
+assigning to each open set $U$ the collection of rational
+functions defined
+everywhere on $U$, with the restriction maps being the obvious
+ones.
+\end{example}
+
+
+
+
+\begin{remark}
+The notion of presheaf thus defined relied very little on the topology of $X$.
+In fact, we could phrase it in purely categorical terms. Let $\mathcal{C}$ be
+the category consisting of open subsets $U \subset X$ and inclusions of open
+subsets $U
+\subset U'$. This is a rather simple category (the hom-sets are either empty
+or consist of one element). Then a \emph{presheaf} is just a contravariant
+functor from $\mathcal{C}$ to $\mathbf{Sets}$ (or $\mathbf{Grp}$, etc.). A
+morphism of presheaves is a natural transformation of functors.
+
+In fact, given any category $\mathcal{C}$, we can define the \emph{category of
+presheaves} on it to be the category of functors $\mathbf{Fun}(\mathcal{C}^{op}, \mathbf{Set})$.
+This category is complete and cocomplete (we can calculate limits and colimits
+``pointwise''), and the Yoneda embedding realizes $\mathcal{C}$ as a full
+subcategory of it. So if $X \in \mathcal{C}$, we get a presheaf $Y \mapsto
+\hom_{\mathcal{C}}(Y, X)$. In general, however, such representable presheaves
+are rather special; for instance, what do they look like for the category of
+open sets in a topological space?
+\end{remark}
+
+\subsection{Sheaves}
+
+\begin{definition} Let $\mathcal{F}$ be a presheaf of sets
 on a topological space $X$. We call $\mathcal{F}$ a
+\textbf{sheaf} if $\mathcal{F}$ further satisfies the following two
+``sheaf conditions.''
+\begin{enumerate}
+\item(Separatedness) {If $U$ is an open set of $X$ covered by a family of open subsets $\{U_i\}$ and there
+are two elements $a,b\in \mathcal{F}(U)$ such that
+$a|_{U_i}=b|_{U_i}$ for all $U_i$, then $a=b$.}
+\item(Gluability) {If $U$ is an open set of $X$ covered by $U_i$ and there
+are elements $a_i\in \mathcal{F}(U_i)$ such that $a_i|_{U_i\cap
+U_j} = a_j|_{U_i\cap U_j}$ for all $i$ and $j$, then there
+exists an element $a\in\mathcal{F}(U)$ that restricts to the
+$a_i$. Notice that by the first axiom, this element is unique.}
+\end{enumerate}
+A \emph{morphism} of sheaves is just a morphism of presheaves, so the sheaves
+on a topological space $X$ form a
+full subcategory of presheaves on $X$.
+\end{definition}
+
+The above two conditions can be phrased more compactly as follows. Whenever
+$\left\{U_i\right\}_{i \in I}$ is an open cover of $U \subset X$, we require that the
+following sequence be an equalizer of sets:
+\[ \mathcal{F}(U) \to \prod_{i \in I} \mathcal{F}(U_i) \rightrightarrows \prod_{i,j \in I}
+\mathcal{F}(U_i \cap U_j) \]
+where the two arrows correspond to the two allowable restriction maps.
+Similarly, we say that a presheaf of abelian groups (resp. rings) is a
+\textbf{sheaf} if it is a sheaf of sets.
+
+\begin{example}
+The example of functions gives an example of a sheaf, because functions are
+determined by their restrictions to an open cover! Namely, if $X$ is a
+topological space, and we consider the presheaf
+\[ U \mapsto \left\{\text{continuous functions } U \to \mathbb{R}\right\} , \]
then this is in fact a sheaf, because we can piece together continuous
functions in a unique manner.
+\end{example}
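The two sheaf axioms for functions can be illustrated with a toy Python model (entirely our own), where ``functions on $U$'' are dictionaries keyed by the points of $U$:

```python
# A toy model of the sheaf axioms for functions: a "section over U" is
# a dict keyed by the points of U, and restriction is restriction of
# dicts.

U1 = {0, 1, 2}
U2 = {2, 3, 4}
f1 = {0: "a", 1: "b", 2: "c"}   # a section over U1
f2 = {2: "c", 3: "d", 4: "e"}   # a section over U2

# The sections agree on the overlap U1 ∩ U2 = {2} ...
assert all(f1[x] == f2[x] for x in U1 & U2)

# ... so they glue to a section over U1 ∪ U2 (gluability):
glued = {**f1, **f2}
assert set(glued) == U1 | U2
assert all(glued[x] == f1[x] for x in U1)
assert all(glued[x] == f2[x] for x in U2)

# Separatedness holds automatically here: two sections over U1 ∪ U2
# with equal restrictions to U1 and U2 are equal, since a dict is
# determined by its values at the points.
```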
+
+\begin{example}
+Here is a refinement of the above example. Let $X$ be a smooth manifold.
+For each $U$, let $\mathcal{F} (U)$ denote the group of smooth functions $U
+\to \mathbb{R}$. This is easily checked to be a sheaf.
+
+We could, of course, replace ``smooth'' by ``$C^r$'' or by ``holomorphic'' in
+the case of a complex manifold.
+\end{example}
+
+
+\begin{remark}
+As remarked above, the notion of presheaf can be defined on any category, and
+does not really require a topological space. The definition of a sheaf
+requires a bit more topologically, because the idea that a family
+$\left\{U_i\right\}$ \emph{covers} an open set $U$ was used inescapably in the
+definition. The idea of covering required the internal structure of the open
+sets and was not a purely categorical idea. However, Grothendieck developed a
+way to axiomatize this, and introduced the idea of a \emph{Grothendieck
+topology} on a category (which is basically a notion of when a family of maps
+\emph{covers} something). On a category with a Grothendieck topology (also
+known as a \emph{site}), one can define the notion of a sheaf in a similar
+manner as above. See \cite{Vi08}.
+\end{remark}
+
+
+
+
+There is a process that allows one to take any presheaf and
+associate a sheaf to it. In some sense, this associated sheaf should also be the best ``approximation'' of our presheaf with a sheaf. This motivates the
+following universal property:
+
+\begin{definition} Let $\mathcal{F}$ be a presheaf. Then $\mathcal{F'}$ is said
+to be the sheafification of $\mathcal{F}$ if for any sheaf $\mathcal{G}$ and a
+morphism $\mathcal{F}\rightarrow \mathcal{G}$, there is a unique factorization
+of this morphism as $\mathcal{F}\rightarrow\mathcal{F'}\rightarrow\mathcal{G}$.
+\end{definition}
+
\begin{theorem} We can construct the sheafification of a presheaf $\mathcal{F}$
as follows:
\[ \mathcal{F}'(U)=\Bigl\{s:U\rightarrow\coprod_{x\in U}\mathcal{F}_x \ \Big|\
s(x)\in\mathcal{F}_x \text{ for all }x\in U\text{, and each }x\in U\text{ has a neighborhood }
V\subset U \text{ and }t\in \mathcal{F}(V) \text{ such that for all }y\in V,
s(y) \text{ is the image of } t \text{ in the stalk }\mathcal{F}_y\Bigr\}. \]
\end{theorem}
+\add{proof}
+
+In the theory of schemes, when one wishes to replace polynomial
+rings over
+$\mathbb{C}$ (or an algebraically closed field) with arbitrary
+commutative
+rings, one must drop the idea that a sheaf is necessarily given
+by functions.
+A \emph{scheme} is defined as a space with a certain type of
+sheaf of rings on
+it. We shall not define a scheme formally, but show how on the
+building blocks
+of schemes---objects of the form $\spec A$---a sheaf of rings
+can be defined.
+
+
+
+\subsection{Sheaves on $\spec A$}
+
+\add{we need to describe how giving sections over basic open sets gives a
+presheaf in general}.
+
+\begin{proposition}
+Let $ A$ be a ring and let $ X = \mathrm{Spec}(A)$. Then the
+assignment of the ring $A_f$ to the basic open set $X_f$ defines
+a presheaf of rings on $X$.
+\end{proposition}
+
+\begin{proof} \mbox{}
+
+\emph{Part (i)}. If $ X_g \subset X_f$ are basic open sets,
+then there exist $ n \geq 1$ and $ u \in A$ such that $ g^n =
+uf$.
+
+\emph{Proof of part (i)}. Let $ S = \{g^n : n \geq 0\}$ and
suppose $ S \cap (f) = \emptyset$. Then the extension $ (f)^e$
into $ S^{-1}A$ is a proper ideal, so it is contained in a maximal
ideal $ S^{-1}\mathfrak{p}$ of $ S^{-1}A$, where $ \mathfrak{p}
\cap S = \emptyset$. Since $ (f)^e \subset S^{-1}\mathfrak{p}$, we
see that $ f/1 \in S^{-1}\mathfrak{p}$, so $ f \in
+\mathfrak{p}$. But $ S \cap \mathfrak{p} = \emptyset$ implies
+that $ g \notin \mathfrak{p}$. This is a contradiction, since
+then $ \mathfrak{p} \in X_g \setminus X_f$.
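Part (i) can be checked concretely in $A = \mathbb{Z}$, where $X_g \subset X_f$ means every prime dividing $f$ also divides $g$; the following Python sketch (names ours) searches directly for the promised $n$ and $u$ with $g^n = uf$:

```python
# Part (i) in A = Z: if X_g ⊂ X_f, i.e. every prime dividing f also
# divides g, then some power of g is a multiple of f: g^n = u f.

def power_in(f, g, max_n=64):
    """Search for the smallest n >= 1 with f | g^n; return (n, u) with
    g^n = u * f, or None if no such n <= max_n exists."""
    power = 1
    for n in range(1, max_n + 1):
        power *= g
        if power % f == 0:
            return n, power // f
    return None

# f = 12 = 2^2 * 3, g = 6 = 2 * 3: X_g ⊂ X_f, and indeed 6^2 = 3 * 12.
assert power_in(12, 6) == (2, 3)

# f = 12, g = 2: the prime 3 divides f but not g, so X_g is not
# contained in X_f, and no power of 2 is divisible by 12.
assert power_in(12, 2) is None
```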
+
+\emph{Part (ii)}. If $ X_g \subset X_f$, then there exists a
+unique map $ \rho : A_f \to A_g$, called the restriction map,
+which makes the following diagram commute.
+\[ \xymatrix{ & A \ar[dl] \ar[dr] & \\ A_f \ar[rr] & & A_g } \]
+
+\emph{Proof of part (ii)}.
+Let $ n \geq 1$ and $ u \in A$ be such that $ g^n = uf$ by part
+(i). Note that in $ A_g$,
+\[ (f/1)(u/g^n) = (fu/g^n) = 1/1 = 1 \]
+which means that $ f$ maps to a unit in $ A_g$. Hence every $
+f^m$ maps to a unit in $ A_g$, so the universal property of $
+A_f$ yields the desired unique map $ \rho : A_f \to A_g$.
+
+\emph{Part (iii)}.
+If $ X_g = X_f$, then the corresponding restriction $ \rho : A_f
+\to A_g$ is an isomorphism.
+
+\emph{Proof of part (iii)}.
+The reverse inclusion yields a $ \rho' : A_g \to A_f$ such that
+the diagram
+\[ \xymatrix{
+& A \ar[dr] \ar[dl] & \\
+A_f \ar@/^/[rr]^{\rho} & & A_g \ar@/^/[ll]^{\rho'}
+} \]
+commutes. But since the localization map is epic, this implies
+that $ \rho \rho' = \rho' \rho = \mathbf{1}$.
+
+\emph{Part (iv)}.
+If $ X_h \subset X_g \subset X_f$, then the diagram
+\[ \xymatrix{
+A_f \ar[rr] \ar[dr] & & A_h \\
+& A_g \ar[ur] &
+} \]
+of restriction maps commutes.
+
+\emph{Proof of part (iv)}.
+Consider the following tetrahedron.
+\[ \xymatrix{
+& A \ar[dl] \ar[dr] \ar[dd] & \\
+A_f \ar@{.>}[rr] \ar[dr] & & A_h \\
+& A_g \ar[ur] &
+} \]
+Except for the base, the commutativity of each face of the
+tetrahedron follows from the universal property of part (ii).
+But it is easy to see that commutativity of those faces
+implies commutativity of the base, which is what we want to
+show.
+
+\emph{Part (v)}.
+If $ X_{\tilde{g}} = X_g \subset X_f = X_{\tilde{f}}$, then
+the diagram
+\[ \xymatrix{
+A_f \ar[r] \ar[d] & A_g \ar[d] \\
+A_{\tilde{f}} \ar[r] & A_{\tilde{g}}
+} \]
+of restriction maps commutes. (Note that the vertical maps here
+are isomorphisms.)
+
+\emph{Proof of part (v)}.
+By part (iv), the two triangles of
+\[ \xymatrix{
+A_f \ar[r] \ar[d] \ar[dr] & A_g \ar[d] \\
+A_{\tilde{f}} \ar[r] & A_{\tilde{g}}
+} \]
+commute. Therefore the square commutes.
+
+\emph{Part (vi)}.
+Fix a prime ideal $ \mathfrak{p}$ in $ A$. Consider the direct
+system consisting of rings $ A_f$ for every $ f \notin
+\mathfrak{p}$ and restriction maps $ \rho_{fg} : A_f \to A_g$
+whenever $ X_g \subset X_f$. Then $ \varinjlim A_f \cong
+A_{\mathfrak{p}}$.
+
+\emph{Proof of part (vi)}.
+First, note that since $ f \notin \mathfrak{p}$ and $
+\mathfrak{p}$ is prime, we know that $ f^m \notin \mathfrak{p}$
+for all $ m \geq 0$. Therefore the image of $ f^m$ under the
+localization $ A \to A_\mathfrak{p}$ is a unit, which means the
+universal property of $ A_f$ yields a unique map $ \alpha_f :
+A_f \to A_\mathfrak{p}$ such that the following diagram
+commutes.
+\[ \xymatrix{
+& A \ar[dr] \ar[dl] & \\
+A_f \ar[rr]^{\alpha_f} & & A_{\mathfrak{p}}
+} \]
+Then consider the following tetrahedron.
+\[ \xymatrix{
+& A \ar[dl] \ar[dr] \ar[dd] & \\
+A_f \ar@{.>}[rr] \ar[dr] & & A_g \ar[dl] \\
+& A_\mathfrak{p} &
+} \]
+All faces containing the vertex $A$ commute by construction (for
+the face through the restriction map, this is part (ii)), so the
+bottom face commutes as well, by the uniqueness in the universal
+property of $A_f$. This implies that the $ \alpha_f$
+commute with the restriction maps, as necessary. Now, to see
+that $ \varinjlim A_f \cong A_\mathfrak{p}$, we show that $
+A_\mathfrak{p}$ satisfies the universal property of $ \varinjlim
+A_f$.
+
+Suppose $ B$ is a ring and there exist maps $ \beta_f : A_f \to
+B$ which commute with the restrictions. Define $ \beta : A \to
+B$ as the composition $ A \to A_f \to B$. The fact that $ \beta$
+is independent of choice of $ f$ follows from the commutativity
+of the following diagram.
+\[ \xymatrix{
+& A \ar[dr] \ar[dl] & \\
+A_f \ar[rr]^{\rho_{fg}} \ar[dr]^{\beta_f} & & A_g
+\ar[dl]_{\beta_g} \\
+& B
+} \]
+Now, for every $ f \notin \mathfrak{p}$, we know that $
+\beta(f)$ must be a unit since $ \beta(f) = \beta_f(f/1)$ and $
+f/1$ is a unit in $ A_f$. Therefore the universal property of $
+A_\mathfrak{p}$ yields a unique map $ A_{\mathfrak{p}} \to B$,
+which clearly commutes with all the arrows necessary to make $
+\varinjlim A_f \cong A_\mathfrak{p}$.
+\end{proof}
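+
+For concreteness, let us illustrate parts (i), (ii) and (vi) in the small
+example $A = \mathbb{Z}$, $f = 2$, $g = 6$. Here $X_6 \subset X_2$, since a
+prime avoiding $6$ must avoid $2$ (if $2 \in \mathfrak{p}$, then $6 = 2 \cdot
+3 \in \mathfrak{p}$). In part (i) we may take $n = 1$ and $u = 3$, as $6 = 3
+\cdot 2$, and the restriction map of part (ii) is
+\[ \rho: \mathbb{Z}[1/2] \to \mathbb{Z}[1/6], \quad \frac{a}{2^m} \mapsto
+\frac{3^m a}{6^m}. \]
+For part (vi) with $\mathfrak{p} = (p)$, the direct system consists of the
+rings $\mathbb{Z}[1/f]$ for $f \notin (p)$, and the colimit is the ring of
+all fractions $a/b$ with $b \notin (p)$ (each such fraction already lies in
+$\mathbb{Z}[1/b]$); this is exactly the localization $\mathbb{Z}_{(p)}$.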
+
+
+\begin{proposition}
+Let $ A$ be a ring and let $ X = \mathrm{Spec}(A)$. The presheaf
+of rings $ \mathcal{O}_X$ defined on $ X$ is a sheaf.
+\end{proposition}
+
+\begin{proof}
+The proof proceeds in two parts. Let $ (U_i)_{i \in I}$ be a
+covering of $ X$ by basic open sets.
+
+\emph{Part 1}. If $ s \in A$ is such that $ s_i := \rho_{X,
+U_i}(s) = 0$ for all $ i \in I$, then $ s = 0$.
+
+\emph{Proof of part 1}. Suppose $ U_i = X_{f_i}$. Note that $
+s_i$ is the fraction $ s/1$ in the ring $ A_{f_i}$, so $ s_i =
+0$ implies that there exists some integer $ m_i$ such that $
+sf_i^{m_i} = 0$. Define $ g_i = f_i ^{m_i}$, and note that we
+still have an open cover by sets $ X_{g_i}$ since $ X_{f_i} =
+X_{g_i}$ (a prime ideal contains an element if and only if it
+contains every power of that element). Also $ s g_i = 0$, so the
+fraction $ s/1$ is still $ 0$ in the ring $ A_{g_i}$.
+(Essentially, all we're observing here is that we are free to
+change representation of the basic open sets in our cover to
+make notation more convenient).
+
+Since $ X$ is quasi-compact, choose a finite subcover $ X =
+X_{g_1} \cup \dotsb \cup X_{g_n}$. This means that $ g_1,
+\dotsc, g_n$ must generate the unit ideal, so there exists some
+linear combination $ \sum x_i g_i = 1$ with $ x_i \in A$. But
+then
+\[ s = s \cdot 1 = s \left( \sum x_i g_i \right) = \sum x_i (s
+g_i) = 0.\]
+
+\emph{Part 2}. Let $ s_i \in \mathcal{O}_X(U_i)$ be such that
+for every $ i, j \in I$,
+\[ \rho_{U_i, U_i \cap U_j}(s_i) = \rho_{U_j, U_i \cap
+U_j}(s_j).\]
+(That is, the collection $ (s_i)_{i \in I}$ agrees on overlaps).
+Then there exists a unique $ s \in A$ such that $ \rho_{X,
+U_i}(s) = s_i$ for every $ i \in I$.
+
+\emph{Proof of part 2}. Let $ U_i = X_{f_i}$, so that $ s_i =
+a_i/(f_i^{m_i})$ for some $ a_i \in A$ and integers $ m_i$. As in part 1, we can
+clean up notation by defining $ g_i = f_i^{m_i}$, so that $ s_i
+= a_i/g_i$. Choose a finite subcover $ X = X_{g_1} \cup \dotsb
+\cup X_{g_n}$. Then the condition that the cover agrees on
+overlaps means that
+\[ \frac{a_i g_j}{g_i g_j} = \frac{a_j g_i}{g_i g_j} \]
+for all $ i, j$ in the finite subcover. This is equivalent to
+the existence of some $ k_{ij}$ such that
+\[ (a_i g_j - a_j g_i) (g_i g_j)^{k_{ij}} = 0.\]
+Let $ k$ be the maximum of all the $ k_{ij}$, so that $ (a_i g_j
+- a_j g_i)(g_i g_j)^k = 0$ for all $ i, j$ in the finite
+subcover. Define $ b_i = a_i g_i^k$ and $ h_i = g_i^{k+1}$. We
+make the following observations:
+\[ b_i h_j - b_j h_i = 0, X_{g_i} = X_{h_i}, \text{ and } s_i =
+a_i/g_i = b_i/h_i \]
+The second observation implies that the $ X_{h_i}$ cover $ X$, so
+the $ h_i$ generate the unit ideal. Then there exists some
+linear combination $ \sum x_i h_i = 1$. Define $ s = \sum x_i
+b_i$. I claim that this is the global section that restricts to
+$ s_i$ on the open cover.
+
+The first step is to show that it restricts to $ s_i$ on our
+chosen finite subcover. In other words, we want to show that $
+s/1 = s_i = b_i/h_i$ in $ A_{h_i}$, which is equivalent to the
+condition that there exist some $ l_i$ such that $ (sh_i - b_i)
+h_i^{l_i} = 0$. But in fact, even $ l_i = 0$ works:
+\[ sh_i - b_i = \left(\sum x_j b_j\right) h_i - b_i\left(\sum
+x_j h_j\right) = \sum x_j\left(h_i b_j - b_i h_j\right) = 0. \]
+This shows that $ s$ restricts to $ s_i$ on each set in our
+finite subcover. Now we need to show that in fact, it restricts
+to $ s_i$ for all of the sets in our cover. Choose any $ j \in
+I$. Then $ U_1, \dotsc, U_n, U_j$ still cover $ X$, so the above
+process yields an $ s'$ such that $ s'$ restricts to $ s_i$ for
+all $ i \in \{1, \dotsc, n, j\}$. But then $ s - s'$ satisfies
+the assumptions of part 1 using the cover $ \{U_1, \dotsc, U_n,
+U_j\}$, so this means $ s = s'$. Hence the restriction of $ s$
+to $ U_j$ is also $ s_j$.
+\end{proof}
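+
+As a quick illustration of the gluing axiom just verified, take $X =
+\mathrm{Spec}(\mathbb{Z})$ with the cover $X = X_2 \cup X_3$ (a cover, since
+$2$ and $3$ generate the unit ideal). Sections $s_1 \in \mathcal{O}_X(X_2) =
+\mathbb{Z}[1/2]$ and $s_2 \in \mathcal{O}_X(X_3) = \mathbb{Z}[1/3]$ agree on
+$X_2 \cap X_3 = X_6$ precisely when they are equal as rational numbers, and a
+rational number lying in both $\mathbb{Z}[1/2]$ and $\mathbb{Z}[1/3]$ must be
+an integer. So a compatible pair glues to a unique global section $s \in
+\mathbb{Z}$, as the proposition predicts.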
+
diff --git a/books/cring/threeimportantfunctors.tex b/books/cring/threeimportantfunctors.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bb9e48de1a76d58cff52389caad0e815b3f9468e
--- /dev/null
+++ b/books/cring/threeimportantfunctors.tex
@@ -0,0 +1,2399 @@
+
+\chapter{Three important functors}
+
+There are three functors that will be integral to our study of commutative
+algebra in the future: localization, the tensor product, and $\hom$.
+While localization is an \emph{exact} functor, the tensor product and $\hom$
+are not. The failure of exactness in those cases leads to the theory of
+flatness and projectivity (and injectivity), and eventually the \emph{derived functors}
+$\mathrm{Tor}$ and $\mathrm{Ext}$ that crop up in commutative algebra.
+
+\section{Localization}
+
+Localization is the process of making invertible a collection of elements in a
+ring. It is a generalization of the process of forming a quotient field of an
+integral domain.
+
+\subsection{Geometric intuition}
+We first start off with some of the geometric intuition behind the idea of
+localization. Suppose we have a Riemann surface $X$ (for example, the Riemann
+sphere). Let $A(U)$ be the ring of holomorphic functions over some neighborhood
+$U\subset X$. Now, for holomorphicity to hold, all that is required is
+that a function doesn't have a pole inside of $U$, thus when $U=X$, this
+condition is the strictest and as $U$ gets smaller functions begin to show
+up that may not arise from the restriction of a holomorphic function over
+a larger domain. For example, if we want to study holomorphicity ``near a
+point $z_0$'' all that we should require is that the function doesn't pole at
+$z_0$. This means that we should consider quotients of holomorphic functions
+$f/g$ where $g(z_0)\neq 0$. This process of inverting a collection of elements
+is expressed through the algebraic construction known as ``localization.''
+
+
+\subsection{Localization at a multiplicative subset}
+
+Let $R$ be a commutative ring.
+We start by constructing the notion of \emph{localization} in the most general
+sense.
+
+We have already implicitly used this definition, but nonetheless, we make it
+formally:
+\begin{definition} \label{multset}
+A subset $S \subset R$ is a \textbf{multiplicative subset} if $1 \in S$ and
+if $x,y \in S$ implies $xy \in S$.
+\end{definition}
+
+We now define the notion of \emph{localization}. Formally, this means
+inverting things.
+This will give us a functor from $R$-modules to $R$-modules.
+
+\begin{definition}
+If $M$ is an $R$-module, we define the module $S^{-1}M$ as the set of formal
+fractions
+\[ \left\{m/s, m \in M, s \in S\right\} \]
+modulo the following equivalence relation: $m/s \sim m'/s'$ if and only if
+\[ t( s'm - m's ) = 0 \]
+for some $t \in S$. The reason we need to include the $t$ in the definition
+is that otherwise the
+ relation would not be transitive (i.e. would not be an
+equivalence relation).
+\end{definition}
+So two fractions agree if they agree after clearing denominators and
+multiplying by some element of $S$.
+
+It is easy to check that this is indeed an equivalence relation. Moreover,
+$S^{-1}M$ is an abelian group under the usual addition of fractions
+\[ \frac{m}{s}+\frac{m'}{s'} = \frac{s'm + sm'}{ss'}, \]
+which one checks is well-defined on equivalence classes.
+
+\begin{definition}
+Let $M$ be an $R$-module and $S \subset R$ a multiplicative subset.
+The abelian group $S^{-1}M$ is naturally an $R$-module. We define
+\[ x(m/s) = (xm)/s, \quad x \in R. \]
+It is easy to check that this is well-defined and makes it into a module.
+
+Finally, we note that localization is a \emph{functor} from the category of
+$R$-modules to itself. Indeed, given $f: M \to N$, there is a naturally
+induced map $S^{-1}M \stackrel{S^{-1}f}{\to} S^{-1}N$.
+
+\end{definition}
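+
+\begin{example}
+Localization can genuinely kill elements. Let $R = \mathbb{Z}$, let $S = \{1,
+2, 4, 8, \dots\}$, and let $M = \mathbb{Z}/6$. In $S^{-1}M$ the class of $3$
+becomes zero, since $2 \cdot 3 = 0$ in $\mathbb{Z}/6$ and $2 \in S$. In fact
+$S^{-1}M \cong \mathbb{Z}/3$, on which $2$ acts invertibly (its inverse is
+again $2$, as $2 \cdot 2 \equiv 1 \pmod 3$).
+\end{example}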
+
+We now consider the special case when the localized module is the initial ring
+itself.
+Let $M = R$. Then $S^{-1}R$ is an $R$-module, and it is in fact a commutative
+ring in its own right. The ring structure is quite tautological:
+\[ (x/s)(y/s') = (xy/ss'). \]
+There is a map $R \to S^{-1}R$ sending $x \mapsto x/1$, which is a
+ring-homomorphism.
+
+\begin{definition}
+For $S \subset R$ a multiplicative set, the localization $S^{-1}R$ is a
+commutative ring as above. In fact, it is an $R$-algebra; there is a natural
+map $\phi: R \to S^{-1}R$ sending $r \mapsto r/1$.
+\end{definition}
+
+We can, in fact, describe $\phi: R \to S^{-1}R$ by a \emph{universal
+property}. Note
+that for each $s \in S$, $\phi(s)$ is invertible. This is because $\phi(s) =
+s/1$ which has a multiplicative inverse $1/s$. This property characterizes
+$S^{-1}R$.
+
+For any commutative ring $B$, $\hom(S^{-1}R, B)$ is naturally isomorphic to the
+subset of $\hom(R,B)$ consisting of homomorphisms that send $S$ to units. The
+map takes $S^{-1}R \to B$ to the composite $R \to S^{-1}R \to B$. The proof of
+this is very simple.
+Suppose that $f: R \to B$ is such that $f(s) \in B$ is invertible for each $s
+\in S$. Then we must define $S^{-1}R \to B$ by sending $r/s$ to
+$f(r)f(s)^{-1}$. It is easy to check that this is well-defined and gives the
+claimed natural isomorphism.
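+
+\begin{example}
+Let us apply this universal property in a simple case. With $S = \{1, 2, 4,
+\dots\} \subset \mathbb{Z}$, ring maps $S^{-1}\mathbb{Z} = \mathbb{Z}[1/2]
+\to \mathbb{Z}/5$ correspond to ring maps $\mathbb{Z} \to \mathbb{Z}/5$
+sending each power of $2$ to a unit. There is only one map $\mathbb{Z} \to
+\mathbb{Z}/5$, and it does send $2$ to a unit ($2 \cdot 3 \equiv 1 \pmod 5$),
+so there is exactly one ring map $\mathbb{Z}[1/2] \to \mathbb{Z}/5$, namely
+$a/2^m \mapsto a \cdot 3^m$. By contrast, there is no ring map
+$\mathbb{Z}[1/2] \to \mathbb{Z}/4$, because $2$ is not a unit in
+$\mathbb{Z}/4$.
+\end{example}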
+
+Let $R$ be a ring, $M$ an $R$-module, $S \subset R$ a multiplicatively closed
+subset. We defined a ring of fractions $S^{-1}R$ and an $R$-module $S^{-1}M$.
+But in fact this is a module over the ring $S^{-1}R$.
+We just multiply $(x/t)(m/s) = (xm/st)$.
+
+In particular, localization at $S$ gives a \emph{functor} from $R$-modules to
+$S^{-1}R$-modules.
+
+\begin{exercise}
+Let $R$ be a ring, $S$ a multiplicative subset. Let $T$ be the $R$-algebra
+$R[\left\{x_s\right\}_{s \in S}]/( \left\{sx_s - 1\right\})$. This is the
+polynomial ring in the variables $x_s$, one for each $s \in S$, modulo the
+ideal generated by the elements $sx_s - 1$. Prove that this $R$-algebra is naturally
+isomorphic to $S^{-1}R$, using the universal property.
+\end{exercise}
+
+\begin{exercise} Define a functor $\mathbf{Rings} \to \mathbf{Sets}$ sending
+a ring to
+its set of units, and show that it is corepresentable (use $\mathbb{Z}[X,
+X^{-1}]$).
+\end{exercise}
+\subsection{Local rings}
+
+A special case of great importance in the future is when the multiplicative
+subset is the complement of a prime ideal, and we study this in the present
+subsection. Such localizations will be ``local rings'' and geometrically
+correspond to the process of zooming in at a point.
+
+\begin{example}
+Let $R$ be an integral domain and let $S = R - \left\{0\right\}$. This is a
+multiplicative subset because $R$ is a domain. In this case, $S^{-1}R$ is just
+the ring of fractions by allowing arbitrary nonzero denominators; it is a
+field, and is called the \textbf{quotient field}. The most familiar example is
+the construction of $\mathbb{Q}$ as the quotient field of $\mathbb{Z}$.
+\end{example}
+
+We'd like to generalize this example.
+
+\begin{example}
+Let $R$ be arbitrary and let $\mathfrak{p} \subset R$ be a prime ideal. This means that $1
+\notin \mathfrak{p}$ and $x,y \in R - \mathfrak{p}$ implies that $xy \in R -
+\mathfrak{p}$. Hence, the complement $S = R- \mathfrak{p}$ is multiplicatively
+closed. We get a ring $S^{-1}R$.
+
+\begin{definition}
+This ring is denoted $R_{\mathfrak{p}}$ and is called the \textbf{localization
+at $\mathfrak{p}$.} If $M$ is an $R$-module, we write $M_{\mathfrak{p}}$ for
+the localization of $M$ at $R - \mathfrak{p}$.
+\end{definition}
+This generalizes the previous example (where $\mathfrak{p} = (0)$).
+\end{example}
+
+There is a nice property of the rings $R_{\mathfrak{p}}$. To elucidate this,
+we start with a lemma.
+
+\begin{lemma}
+Let $R$ be a nonzero commutative ring. The following are equivalent:
+\begin{enumerate}
+\item $R$ has a unique maximal ideal.
+\item If $x \in R$, then either $x$ or $1-x$ is invertible.
+\end{enumerate}
+\end{lemma}
+
+\begin{definition}
+In this case, we call $R$ \textbf{local}. A local ring is one with a unique
+maximal ideal.
+\end{definition}
+
+\begin{proof}[Proof of the lemma]
+First we prove $(2) \implies (1)$.
+
+Assume $R$ is such that for
+each $x$, either $x$ or $1-x$ is invertible. We will find the maximal ideal.
+Let $\mathfrak{M} $ be the collection of noninvertible elements of $R$. This is
+a subset of $R$, not containing $1$, and it is closed under multiplication.
+Any proper ideal must be a subset of $\mathfrak{M}$, because otherwise that
+proper ideal would contain an invertible element.
+
+We just need to check that $\mathfrak{M}$ is closed under addition.
+Suppose to the
+contrary that $x, y \in \mathfrak{M}$ but $x+y$ is invertible. We get (with
+$a = x/(x+y)$)
+\[ 1 = \frac{x}{x+y} + \frac{y}{x+y} =a+(1-a). \]
+Then one of $a,1-a$ is invertible. So either $x(x+y)^{-1}$ or $y(x+y)^{-1}$ is
+invertible, which implies that one of $x,y$ is invertible, a contradiction.
+
+Now prove the reverse direction. Assume $R$ has a unique maximal ideal
+$\mathfrak{M}$. We claim that $\mathfrak{M}$ consists precisely of the
+noninvertible elements. To see this, first note that $\mathfrak{M}$
+can't contain any invertible elements since it is proper. Conversely, suppose
+$x$ is not invertible, i.e. $(x) \subsetneq R$. Then $(x)$ is contained in a
+maximal ideal by \rref{anycontainedinmaximal}, so $(x) \subset
+\mathfrak{M}$ since $\mathfrak{M}$ is unique among maximal ideals.
+Thus $x \in \mathfrak{M}$.
+
+Suppose $x \in R$; we can write $1 = x + (1-x)$. Since $1 \notin \mathfrak{M}$,
+one of $x, 1-x$ must not be in $\mathfrak{M}$, so one of them is
+invertible. So $(1) \implies (2)$. The lemma is proved.
+\end{proof}
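+
+\begin{example}
+It is instructive to verify criterion (2) directly in the ring of rational
+numbers $a/b$ whose denominator $b$ is not divisible by $p$ (the localization
+$\mathbb{Z}_{(p)}$ discussed below). Given such an $x = a/b$: if $p$ does not
+divide $a$, then $b/a$ is an inverse of $x$; if $p$ divides $a$, then $1 - x
+= (b-a)/b$ has numerator not divisible by $p$, so $1-x$ is invertible.
+\end{example}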
+
+Let us give some examples of local rings.
+
+\begin{example}
+Any field is a local ring because the unique maximal ideal is $(0)$.
+\end{example}
+
+\begin{example}
+Let $R$ be any commutative ring and $\mathfrak{p}\subset R$ a prime ideal. Then
+$R_{\mathfrak{p}}$ is a local ring.
+
+We state this as a result.
+\begin{proposition}
+$R_{\mathfrak{p}}$ is a local ring if $\mathfrak{p}$ is prime.\end{proposition}
+\begin{proof}
+Let $\mathfrak{m} \subset R_{\mathfrak{p}}$ consist of elements $x/s$ for $x
+\in \mathfrak{p}$ and $s \in R - \mathfrak{p}$. It is left as an exercise to
+the reader (using the primality of $\mathfrak{p}$) to check that whether the
+numerator belongs to $\mathfrak{p}$ is \emph{independent} of the
+representation $x/s$ used for it.
+
+Then I claim that $\mathfrak{m}$ is the
+unique maximal ideal. First, note that $\mathfrak{m}$ is
+an ideal; this is evident since the numerators form an ideal. If $x/s, y/s'$
+belong to $\mathfrak{m}$ with appropriate expressions, then
+the numerator of
+\[ \frac{xs'+ys}{ss'} \]
+belongs to $\mathfrak{p}$, so this sum belongs to $\mathfrak{m}$. Moreover,
+$\mathfrak{m}$ is a proper ideal because $\frac{1}{1}$ is not of the
+appropriate form.
+
+I claim that $\mathfrak{m}$ contains all other proper ideals, which will imply
+that it is the unique maximal ideal. Let $I \subset R_{\mathfrak{p}}$ be any
+proper ideal. Suppose $x/s \in I$. We want to prove $x/s \in \mathfrak{m}$.
+In other words, we have to show $x \in \mathfrak{p}$. But if not, $x/s$ would be
+invertible (with inverse $s/x$), and $I = (1)$, a contradiction. This proves locality.
+\end{proof}
+\end{example}
+
+\begin{exercise}
+Any local ring is of the form $R_{\mathfrak{p}}$ for some ring $R$ and for
+some prime ideal $\mathfrak{p} \subset R$.
+\end{exercise}
+
+\begin{example}
+Let $R = \mathbb{Z}$. This is not a local ring; the maximal ideals are given by
+$(p)$ for $p$ prime. We can thus construct the localization
+$\mathbb{Z}_{(p)}$, consisting of all fractions $a/b \in \mathbb{Q}$ where $b
+\notin (p)$. That is, $\mathbb{Z}_{(p)}$ consists of all rational numbers
+whose denominator (in lowest terms) is not divisible by $p$.
+\end{example}
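+
+To make this concrete: $\frac{3}{4} \in \mathbb{Z}_{(5)}$ because $4 \notin
+(5)$, while $\frac{3}{4} \notin \mathbb{Z}_{(2)}$, since $3/4 = a/b$ with $b$
+odd would give $3b = 4a$ with an odd left-hand side. The unique maximal ideal
+of $\mathbb{Z}_{(p)}$ is $(p)\mathbb{Z}_{(p)}$, the set of fractions whose
+numerator is divisible by $p$.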
+
+\begin{exercise}
+A local ring has no idempotents other than $0$ and $1$. (Recall that $e \in R$
+is \emph{idempotent} if $e^2 = e$.) In particular, the product of two rings is
+never local.
+\end{exercise}
+
+It may not yet be clear why localization is such a useful process. It turns
+out that many problems can be checked on the localizations at prime (or even
+maximal) ideals, so certain proofs can reduce to the case of a local ring.
+Let us give a small taste.
+
+\begin{proposition}
+Let $f: M \to N$ be a homomorphism of $R$-modules. Then $f$ is injective if
+and only if for every maximal ideal $\mathfrak{m} \subset R$, we have that
+$f_{\mathfrak{m}}: M_{\mathfrak{m}} \to N_{\mathfrak{m}}$ is injective.
+\end{proposition}
+Recall that, by definition, $M_{\mathfrak{m}}$ is the localization at $R -
+\mathfrak{m}$.
+
+There are many variants on this (e.g.\ replace injectivity with surjectivity or bijectivity).
+This is a general observation that lets you reduce lots of commutative algebra
+to local rings, which are easier to work with.
+
+\begin{proof}
+Suppose first that each $f_{\mathfrak{m}}$ is injective. I claim that $f$ is
+injective. Suppose $x \in M - \left\{0\right\}$. We must show that $f(x) \neq
+0$. If $f(x)=0$, then $f_{\mathfrak{m}}(x)=0$ for every maximal ideal
+$\mathfrak{m}$. Then by
+injectivity it follows that $x$ maps to zero in each $M_{\mathfrak{m}}$.
+We would now like to get a contradiction.
+
+Let $I = \left\{ a \in R: ax = 0 \in M \right\}$. This is proper since $x \neq
+0$. So $I$ is contained in some maximal ideal $\mathfrak{m}$. Then $x$
+maps to zero in $M_{\mathfrak{m}}$ by the previous paragraph; this means that
+there is $s \in R - \mathfrak{m}$ with $sx = 0 \in M$. But $s \notin I$,
+contradiction.
+
+Now let us do the other direction. Suppose $f$ is injective and $\mathfrak{m}$
+a maximal ideal; we prove $f_{\mathfrak{m}}$ injective. Suppose
+$f_{\mathfrak{m}}(x/s)=0 \in N_{\mathfrak{m}}$. This means that $f(x)/s=0$ in
+the localized module, so that $f(x) \in M$ is killed by some $t \in R -
+\mathfrak{m}$. We thus have $f(tx) = t(f(x)) = 0 \in M$. This means that $tx
+= 0 \in M$ since $f$ is injective. But this in turn means that $x/s = 0 \in
+M_{\mathfrak{m}}$. This is what we wanted to show.
+\end{proof}
+
+
+
+\subsection{Localization is exact}
+Localization is to be thought of as a very mild procedure.
+
+The next result says how inoffensive localization is. This result is a key
+tool in reducing problems to the local case.
+\begin{proposition}
+Suppose $f: M \to N, g: N \to P$ and $M \to N \to P$ is exact. Let $S \subset
+R$ be multiplicatively closed. Then
+\[ S^{-1}M \to S^{-1}N \to S^{-1}P \]
+is exact.
+\end{proposition}
+
+Or, as one can alternatively express it, localization is an \emph{exact
+functor.}
+
+Before proving it, we note a few corollaries:
+\begin{corollary}
+If $f: M \to N$ is surjective, then $S^{-1}M \to S^{-1}N$ is too.
+\end{corollary}
+\begin{proof}
+To say that $A \to B$ is surjective is the same as saying that $A \to B \to 0$
+is exact. From this the corollary is evident.
+\end{proof}
+
+Similarly:
+\begin{corollary}
+If $f: M \to N$ is injective, then $S^{-1}M \to S^{-1}N$ is too.
+\end{corollary}
+\begin{proof}
+To say that $A \to B$ is injective is the same as saying that $0 \to A \to B $
+is exact. From this the corollary is evident.
+\end{proof}
+
+\begin{proof}[Proof of the proposition] We adopt the notation of the
+proposition.
+If the composite $g\circ f$ is zero, clearly the localization $S^{-1}M \to
+S^{-1}N \to S^{-1}P$ is zero too.
+Call the maps $S^{-1}M \to S^{-1}N, S^{-1}N \to S^{-1}P$ as $\phi, \psi$. We
+know that $\psi \circ \phi = 0$ so $\ker(\psi) \supset \im(\phi)$. Conversely,
+suppose something belongs to $\ker(\psi). $ This can be written as a fraction
+\[ x/s \in \ker(\psi) \]
+where $x \in N, s \in S$. This is mapped to
+\[ g(x)/s \in S^{-1}P, \]
+which we're assuming is zero. This means that there is $t \in S$ with $tg(x) =
+0 \in P$. This means that $g(tx)=0$ as an element of $P$. But $tx \in N$ and
+its image under $g$ vanishes, so by exactness $tx$ must come from something in $M$. In
+particular,
+\[ tx = f(y) \ \text{for some} \ y \in M. \]
+Hence
+\[ \frac{x}{s} = \frac{tx}{ts} = \frac{f(y)}{ts} = \phi( y/ts) \in \im(\phi).
+\]
+This proves that anything belonging to the kernel of $\psi$ lies in
+$\im(\phi)$.
+\end{proof}
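+
+\begin{example}
+As an illustration, localize the exact sequence $0 \to \mathbb{Z}
+\stackrel{2}{\to} \mathbb{Z} \to \mathbb{Z}/2 \to 0$ at $S = \{1, 2, 4,
+\dots\}$. We obtain $0 \to \mathbb{Z}[1/2] \stackrel{2}{\to} \mathbb{Z}[1/2]
+\to S^{-1}(\mathbb{Z}/2) \to 0$, in which multiplication by $2$ has become an
+isomorphism; correspondingly $S^{-1}(\mathbb{Z}/2) = 0$ (the class of $1$ is
+killed by $2 \in S$), and the localized sequence is indeed exact.
+\end{example}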
+
+\subsection{Nakayama's lemma}
+
+We now state a very useful criterion for determining when a module over a
+\emph{local} ring is zero.
+
+
+\begin{lemma}[Nakayama's lemma] \label{nakayama} Let $R$ be a local ring with
+maximal ideal
+$\mathfrak{m}$, and let $M$ be a finitely generated $R$-module. If
+$\mathfrak{m}M = M$, then $M = 0$.
+\end{lemma}
+
+Note that $\mathfrak{m}M$ is the submodule generated by products of
+elements of $\mathfrak{m}$ and $M$.
+
+\begin{remark}
+Once one has the theory of the tensor product, this equivalently states that
+if $M$ is finitely generated and nonzero, then
+\[ M \otimes_R R/\mathfrak{m} = M/\mathfrak{m}M \neq 0. \]
+So to prove that a finitely generated module over a local ring is zero, it
+suffices to study its reduction modulo $\mathfrak{m}$. This is thus a very
+useful criterion.
+\end{remark}
+
+Nakayama's lemma highlights why it is so useful to work over a local ring.
+Thus, it is useful to reduce questions about general rings to questions about
+local rings.
+Before proving it, we note a corollary.
+
+\begin{corollary}
+Let $R$ be a local ring with maximal ideal $\mathfrak{m}$, and $M$ a finitely
+generated module. If $N \subset M$ is a submodule such that $N +
+\mathfrak{m}M =
+M$, then $N=M$.
+\end{corollary}
+\begin{proof}
+Apply Nakayama above (\cref{nakayama}) to $M/N$.
+\end{proof}
+
+
+We shall prove more generally:
+
+\begin{proposition}
+Suppose $M$ is a finitely generated $R$-module, $J \subset R$ an ideal.
+Suppose $JM = M$. Then there is $a \in 1+J$ such that $aM = 0$.
+\end{proposition}
+
+If $J$ is the maximal ideal of a local ring, then $a$ is a unit, so that $M=0$.
+
+\begin{proof}
+Suppose $M$ is generated by $\left\{x_1, \dots, x_n\right\} \subset M$. This
+means that every element of $M$ is a linear combination of elements of
+$x_i$. However, each $x_i \in JM$ by assumption. In particular, each
+$x_i$ can be written as
+\[ x_i = \sum a_{ij} x_j, \ \mathrm{where} \ a_{ij} \in J. \]
+If we let $A$ be the matrix $\left\{a_{ij}\right\}$, then $A$ sends the
+vector $(x_i)$ into itself. In particular, $I-A$ kills
+the vector $(x_i)$.
+
+Now $I-A$ is an $n$-by-$n$ matrix in the ring $R$. We could, of course,
+reduce everything modulo $J$ to get the identity; this is
+because $A$ consists of elements of $J$. It follows that the
+determinant must be congruent to $1$ modulo $J$.
+
+In particular, $a=\det (I - A)$ lies in $1+J$.
+Now by familiar linear algebra, $aI$ can be represented as the product of $I-A$
+and its matrix of cofactors; in particular, $aI$ annihilates the vector
+$(x_i)$, so that $aM=0$.
+\end{proof}
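+
+\begin{example}
+When $M$ is generated by a single element $x$, the determinant trick is
+transparent: $JM = M$ gives $x = ax$ for some $a \in J$, the matrix $I - A$
+is the $1$-by-$1$ matrix $(1-a)$, and $a' = \det(I-A) = 1 - a \in 1 + J$
+satisfies $a'M = 0$. If moreover $J$ lies in the maximal ideal of a local
+ring, then $1-a$ is a unit, so $M = 0$.
+\end{example}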
+
+Before returning to the special case of local rings, we observe the following
+useful fact from ideal theory:
+
+\begin{proposition} \label{idempotentideal}
+Let $R$ be a commutative ring, $I \subset R$ a finitely generated ideal such that $I^2 = I$.
+Then $I$ is generated by an idempotent element.
+\end{proposition}
+\begin{proof}
+We know that there is $x \in 1+I$ such that $xI =0$. Write $x = 1-y$ with $y
+\in I$ (possible since $I$ is an ideal); then $xt = 0$ gives
+\[ yt = t \]
+for all $t \in I$. In particular, $y$ is idempotent and $(y) = I$.
+\end{proof}
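+
+\begin{example}
+Take $R = \mathbb{Z}/6$ and $I = (3) = \{0, 3\}$. Then $I^2 = (9) = (3) = I$,
+and indeed $I$ is generated by the idempotent $3$, since $3^2 = 9 = 3$ in
+$\mathbb{Z}/6$. In the notation of the proof, $x = 4$ satisfies $x \in 1 + I$
+and $xI = 0$.
+\end{example}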
+
+\begin{exercise}
+\rref{idempotentideal} fails if the ideal is not finitely generated.
+\end{exercise}
+
+\begin{exercise}
+Let $M$ be a finitely generated module over a ring $R$. Suppose $f: M \to M$
+is a surjection. Then $f$ is an isomorphism. To see this, consider $M$ as a
+module over $R[t]$ with $t$ acting by $f$; since $(t)M = M$, argue that there
+is a polynomial $Q(t) \in R[t]$ such that $Q(t)t$ acts as the identity on
+$M$, i.e. $Q(f)f=1_M$.
+\end{exercise}
+
+\begin{exercise}
+Give a counterexample to the conclusion of Nakayama's lemma when the module is
+not finitely generated.
+\end{exercise}
+\begin{exercise}
+Let $M$ be a finitely generated module over the ring $R$. Let $\mathfrak{I}$
+be the Jacobson
+radical of $R$ (cf. \rref{Jacobson}). If $\mathfrak{I} M = M$,
+then $M =
+0$.
+\end{exercise}
+
+\begin{exercise}[A converse to Nakayama's lemma]
+Suppose conversely that $R$ is a ring, and $\mathfrak{a} \subset R$ an ideal
+such that $\mathfrak{a} M \neq M$ for every nonzero finitely generated
+$R$-module. Then $\mathfrak{a}$ is contained in every maximal ideal of $R$.
+\end{exercise}
+
+
+\begin{exercise}
+Here is an alternative proof of Nakayama's lemma. Let $R$ be local with
+maximal ideal $\mathfrak{m}$, and let $M$ be a finitely generated module with
+$\mathfrak{m}M = M$. Let $n$ be the minimal number of generators for $M$. If
+$n>0$, pick generators $x_1, \dots, x_n$. Then write $x_1 = a_1 x_1 + \dots +
+a_n x_n$ where each $a_i \in \mathfrak{m}$. Deduce that $x_1$ is in the
+submodule generated by the $x_i, i \geq 2$, so that $n$ was not actually
+minimal, contradiction.
+\end{exercise}
+
+Let $M, M'$ be finitely generated modules over a local ring $(R,
+\mathfrak{m})$, and let $\phi: M \to M'$ be a homomorphism of modules. Then
+Nakayama's lemma gives a criterion for $\phi$ to be a surjection: namely, $\phi$
+is surjective if and only if $\overline{\phi}: M/\mathfrak{m}M \to M'/\mathfrak{m}M'$ is a surjection.
+For injections, this is false. For instance, if $\phi$ is multiplication by any element of
+$\mathfrak{m}$, then $\overline{\phi}$ is zero but $\phi$ may yet be injective.
+Nonetheless, we give a criterion for a map of \emph{free} modules over a local ring to
+be a \emph{split} injection.
+
+\begin{proposition} \label{splitcriterion1}
+Let $R$ be a local ring with maximal ideal $\mathfrak{m}$. Let $F, F'$ be two
+finitely generated free $R$-modules, and let $\phi: F \to F'$ be a homomorphism.
+Then $\phi$ is a split injection if and only if the reduction $\overline{\phi}$
+\[ F/\mathfrak{m}F \stackrel{\overline{\phi}}{\to} F'/\mathfrak{m}F' \]
+is an injection.
+\end{proposition}
+\begin{proof}
+One direction is easy. If $\phi$ is a split injection, then it has a left
+inverse
+$\psi: F' \to F$ such that $\psi \circ \phi = 1_F$. The reduction of $\psi$ as a
+map $F'/\mathfrak{m}F' \to F/\mathfrak{m}F$ is a left inverse to
+$\overline{\phi}$, which is thus injective.
+
+Conversely, suppose $\overline{\phi}$ injective. Let $e_1, \dots, e_r$ be a
+``basis'' for $F$, and let $f_1, \dots, f_r$ be the images under $\phi$ in
+$F'$. Then the reductions $\overline{f_1}, \dots, \overline{f_r}$ are linearly
+independent in the $R/\mathfrak{m}$-vector space $F'/\mathfrak{m}F'$. Let us
+complete this to a basis of $F'/\mathfrak{m}F'$ by adding elements
+$\overline{g_1}, \dots, \overline{g_s} \in F'/\mathfrak{m}F'$, which we can
+lift to elements $g_1, \dots, g_s \in F'$. It is clear that $F'$ has rank $r+s $
+since its reduction $F'/\mathfrak{m}F'$ does.
+
+We claim that the set $\left\{f_1, \dots, f_r, g_1, \dots, g_s\right\}$ is a
+basis for $F'$. Indeed, we have a map
+\[ R^{r+s} \to F' \]
+of free modules of rank $r+s$. It can be expressed as an $r+s$-by-$r+s$ matrix
+$M$; we need to show that $M$ is invertible. But if we reduce modulo
+$\mathfrak{m}$, it is invertible since the reductions of $f_1, \dots, f_r,
+g_1, \dots, g_s$ form a basis of $F'/\mathfrak{m}F'$.
+Thus the determinant of $M$ is not in $\mathfrak{m}$, so by locality it is
+invertible.
+The claim about $F'$ is thus proved.
+
+We can now define the left inverse $F' \to F$ of $\phi$. Indeed, given $x \in F'$,
+we can write it uniquely as a linear combination $\sum a_i f_i + \sum b_j g_j$
+by the above. We define $\psi(\sum a_i f_i + \sum b_j g_j) = \sum a_i e_i \in
+F$. It is clear that this is a left inverse of $\phi$.
+\end{proof}
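+
+\begin{example}
+Let $(R, \mathfrak{m})$ be local and $t \in \mathfrak{m}$. The map $\phi: R
+\to R^2$ sending $1 \mapsto (1, t)$ reduces to $1 \mapsto (1, 0)$ modulo
+$\mathfrak{m}$, which is injective; so $\phi$ is a split injection, and
+indeed $(x, y) \mapsto x$ is a left inverse. In the notation of the proof,
+$f_1 = (1, t)$ completes to the basis $\{(1,t), (0,1)\}$ of $R^2$, whose
+matrix has determinant $1$, a unit.
+\end{example}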
+
+We next note a slight strengthening of the above result, which is sometimes
+useful. Namely, the first module does not have to be free.
+\begin{proposition}
+Let $R$ be a local ring with maximal ideal $\mathfrak{m}$. Let $M, F$ be two
+finitely generated $R$-modules with $F$ free, and let $\phi: M \to F$ be a homomorphism.
+Then $\phi$ is a split injection if and only if the reduction $\overline{\phi}$
+\[ M/\mathfrak{m}M \stackrel{\overline{\phi}}{\to} F/\mathfrak{m}F \]
+is an injection.
+\end{proposition}
It will in fact follow that $M$ is itself free, because $M$ is projective (see
\cref{projmod} below) as it is a direct summand of a free module.
+\begin{proof}
+Let $L$ be a ``free approximation'' to $M$.
+That is, choose a basis $\overline{x_1}, \dots, \overline{x_n}$ for $M/\mathfrak{m}M$ (as an $R/\mathfrak{m}$-vector
+space) and lift this to elements $x_1, \dots, x_n \in M$. Define a map
+\[ L = R^n \to M \]
+by sending the $i$th basis vector to $x_i$.
+Then $L/\mathfrak{m} L \to M/\mathfrak{m}M$ is an isomorphism.
+By Nakayama's lemma,
+$L \to M$ is surjective.
+
+Then the composite map
$L \to M \to F$ is such that the reduction $L/\mathfrak{m}L \to F/\mathfrak{m}F$ is injective, so
+$L \to F$ is a split injection (by \cref{splitcriterion1}).
+It follows that we can find a splitting $F \to L$, which when composed with $L
+\to M$ is a splitting of $M \to F$.
+\end{proof}
+
+\begin{exercise}
+Let $A$ be a local ring, and $B$ a ring which is finitely generated and free as an
+$A$-module. Suppose $A \to B$ is an injection. Then $A \to B$ is a \emph{split
+injection.} (Note that any nonzero morphism mapping out of a field is
+injective.)
+\end{exercise}
+
+\section{The functor $\hom$}
+
+In any category, the morphisms between two objects form a
+set.\footnote{Strictly speaking, this may depend on your set-theoretic
+foundations.} In many
+categories, however, the hom-sets have additional structure. For instance,
+the hom-sets
+between abelian groups are themselves abelian groups. The same situation holds
+for the category of modules over a commutative ring.
+
+
+\begin{definition}
Let $R$ be a commutative ring and $M, N$ be $R$-modules. We write
+$\hom_R(M,N)$ for
+the set of all $R$-module homomorphisms $M \to N$.
+ $\hom_R(M,N)$ is an $R$-module because one can add homomorphisms $f,g: M
+\to N$ by adding
+them pointwise: if $f,g$ are homomorphisms $M \to N$, define $f+g: M \to N$ via
+\( (f+g)(m) = f(m)+g(m); \)
+similarly, one can multiply homomorphisms $f: M \to N$ by elements $ a \in
+R$: one sets
+\( (af)(m) = a(f(m)). \)
+\end{definition}
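For instance, a homomorphism out of a cyclic group is determined by the image of a generator, which makes small hom-modules easy to compute:
\begin{example}
Take $R = \mathbb{Z}$. A homomorphism $f: \mathbb{Z}/2\mathbb{Z} \to \mathbb{Z}/4\mathbb{Z}$
is determined by $f(1)$, which must satisfy $2f(1) = 0$; thus $f(1) \in
\left\{0, 2\right\}$, and
$\hom_{\mathbb{Z}}(\mathbb{Z}/2\mathbb{Z}, \mathbb{Z}/4\mathbb{Z}) \simeq \mathbb{Z}/2\mathbb{Z}$.
\end{example}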
+
+Recall that in any category, the hom-sets are \emph{functorial}. For instance,
+given $f: N \to N'$, post-composition with $f$ defines a map $\hom_R(M,N) \to
+\hom_R(M,N')$ for any $M$.
+Similarly precomposition gives a natural map $\hom_R(N', M) \to \hom_R(N, M)$.
+In particular, we get a bifunctor $\hom$, contravariant in the first variable
+and covariant in the second, of $R$-modules into $R$-modules.
+
+\subsection{Left-exactness of $\hom$}
+
+We now discuss the exactness properties of this construction of forming
+$\hom$-sets. The following result is basic and is, in fact, a reflection of
+the universal property of the kernel.
+\begin{proposition} \label{homcovleftexact}
+If $M$ is an $R$-module, then the functor
\[ N \mapsto \hom_R(M,N) \]
+is left exact (but \emph{not exact} in general).
+\end{proposition}
+This means that if
+\[ 0 \to N' \to N \to N'' \]
+is exact,
+then
+\[ 0 \to \hom_R(M, N') \to \hom_R(M, N) \to \hom_R(M, N'') \]
+is exact as well.
+
+\begin{proof}
+ First, we have to show that the map
+$\hom_R(M,N') \to \hom_R(M,N)$ is injective; this is because $N' \to N$ is
+injective, and composition with $N' \to N$ can't kill any nonzero $M \to N'$.
+Similarly, exactness in the middle can be checked easily, and follows from
+\rref{univpropertykernel}; it states simply that a map $M \to N$ has
+image landing inside $N'$ (i.e. factors through $N'$) if and only if it
+composes to zero in $N''$.
+\end{proof}
+
+\newcommand{\ol}[1]{\mathbf{#1}}
+This functor $\hom_R(M, \cdot)$ is not exact in general. Indeed:
+\begin{example}
+Suppose $R = \mathbb{Z}$, and consider the $R$-module (i.e. abelian group)
+$M = \mathbb{Z}/2\mathbb{Z}$. There is a short exact
+sequence
+\[ 0 \to 2\mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0. \]
+Let us apply $\hom_R(M, \cdot)$. We get a \emph{complex}
+\[ 0 \to \hom(\mathbb{Z}/2\mathbb{Z}, 2\mathbb{Z}) \to
+\hom(\mathbb{Z}/2\mathbb{Z}, \mathbb{Z}) \to \hom(\mathbb{Z}/2\mathbb{Z},
+\mathbb{Z}/2\mathbb{Z}) \to 0. \]
+The second-to-last term is $\mathbb{Z}/2\mathbb{Z}$; everything else is
+zero. Thus the sequence is not exact, and in particular the functor
+$\hom_{\mathbb{Z}}(\mathbb{Z}/2, -)$ is not an exact functor.
+\end{example}
+
+
+We have seen that homming out of a module is left-exact. Now, we see the same
+for homming \emph{into} a module.
+
+\begin{proposition} \label{homcontleftexact}
+If $M$ is a module, then $\hom_R(-,M)$ is a left-exact contravariant functor.
+\end{proposition}
+
+We write this proof in slightly more detail than \cref{homcovleftexact},
+because of the
+contravariance.
+\begin{proof}
+We want to show that $\hom(\cdot, M)$ is a left-exact contravariant functor,
+which means that
if $ A \xrightarrow u B \xrightarrow v C \to 0$ is exact, then so is
$$
0 \to \hom(C, M) \xrightarrow{\ol v} \hom(B, M) \xrightarrow{\ol u} \hom(A, M).
$$
Here, the bold notation refers to the induced maps of $u,v$ on the
+hom-sets: if $f \in \hom(B,M)$ and $g \in \hom(C, M)$, we define
+$\ol u$ and $\ol v$ via $\ol v(g) = g \circ v$ and
+$\ol u(f) = f \circ u$.
+
+Let us show first that $\ol v$ is injective.
+Suppose that $g \in \hom(C, M)$. If $\ol v(g) = g \circ v = 0$ then
+$(g \circ v)(b) = 0$ for all $b \in B$. Since $v$ is a surjection, this means
+that $g(C) = 0$ and hence $g = 0$. Therefore, $\ol v$ is injective, and we
+have exactness at $\hom(C, M)$.
+
Since $v \circ u = 0$, it is clear that $\ol u \circ \ol v = 0$.
+
+Now, suppose that $f \in \ker(\ol u) \subset \hom(B, M)$. Then
+$\ol u(f) = f \circ u = 0$.
+Thus $f: B \to M$ factors through $B/\im(u)$.
+However, $\im(u) = \ker(v)$, so $f$ factors through $B/\ker(v)$.
+Exactness shows that there is an isomorphism $B/\ker(v) \simeq C$.
+In particular, we find that $f$ factors through $C$. This is what we wanted.
+\end{proof}
+
+
+\begin{exercise}
+Come up with an example where $\hom_R(-, M)$ is not exact.
+\end{exercise}
+
+\begin{exercise}
+Over a \emph{field}, $\hom$ is always exact.
+\end{exercise}
+
+\subsection{Projective modules}
+
+Let $M$ be an $R$-module for a fixed commutative ring $R$. We have seen that
+$\hom_R(M,-)$ is generally only a left-exact functor.
+Sometimes, however, we do have exactness. We axiomatize this with the
+following.
+
+\begin{definition} \label{projectives}
+An $R$-module $M$ is called \textbf{projective} if the functor $\hom_R(M,
+\cdot)$ is
+exact.\footnote{It is possible to define a projective module over a
+noncommutative ring. The definition is the same, except that the $\hom$-sets
+are no longer modules, but simply abelian groups. }
+\end{definition}
+
+One may first observe that a free module is projective.
Indeed, let $F = R^I$ for an indexing set $I$. Then the functor $N \mapsto \hom_R(F,
N)$ is
naturally
isomorphic to $N \mapsto N^I$. It is easy to see that this functor preserves
+exact sequences (that is, if $0 \to A \to B \to C \to 0$ is exact, so is $0
+\to A^I \to B^I \to C^I \to 0$).
+Thus $F$ is projective.
+One can also easily check that a \emph{direct summand} of a projective module
+is projective.
+
+It turns out that projective modules have a very clean characterization. They
+are \emph{precisely} the direct
+summands in free modules.
+
+\add{check this}
+\begin{proposition} \label{projmod}
+The following are equivalent for an $R$-module $M$:
+\begin{enumerate}
+\item $M$ is projective.
\item Given any map $M \to N/N'$ from $M$ into a quotient $N/N'$ of an
$R$-module $N$, we can lift
it to a map $M \to N$.
+\item There is a module $M'$ such that $M \oplus M'$ is free.
+\end{enumerate}
+\end{proposition}
+\begin{proof}
+The equivalence of 1 and 2 is just unwinding the definition of projectivity,
+because we just need to show that $\hom_R(M, \cdot)$ preserves surjective
+maps, i.e. quotients. ($\hom_R(M, \cdot)$ is already left-exact, after all.)
+To say that $\hom_R(M, N) \to \hom_R(M, N/N')$ is surjective is just the
+statement that any map $M \to N/N'$
+can be lifted to $M \to N$.
+
+Let us show that 2 implies 3. Suppose $M$ satisfies 2. Then choose a
+surjection $P \twoheadrightarrow M$ where $P$ is free, by
+\cref{freesurjection}. Then we can
+write $M \simeq P/P'$ for a submodule $P' \subset P$. The isomorphism map
+$M \to P/P'$
+leads by 2 to a lifting $M \to P$. In particular, there is a section of $P
+\to M$,
+namely this lifting. Since a section leads to a split exact sequence by
+\cref{}, we find then that $P \simeq \ker(P \to M) \oplus \im(M \to P) \simeq
+\ker(P \to M) \oplus M$,
+verifying 3 since $P$ is free.
+
+Now let us show that 3 implies 2.
+Suppose $M \oplus M'$ is free, isomorphic to $P$. Then a map $M \to N/N'$ can
+be extended to
+\[ P \to N/N' \]
+by declaring it to be trivial on $M'$. But now $P \to N/N'$ can be lifted to
+$N$ because $P$ is free, and we have observed that a free module is
+projective above; alternatively, we just lift the image of a basis. This
+defines $P
+\to N$. We may then compose this with the inclusion $M \to P$ to get the
+desired map $M \to P \to N$,
+which is a lifting of $M \to N/N'$.
+\end{proof}
+
+Of course, the lifting $P \to N$ of a given map $P \to N/N'$ is generally not
+unique, and in fact is unique precisely when $\hom_R(P,N') = 0$.
+
+So projective modules are precisely those with the following lifting property.
+Consider a diagram
+\[ \xymatrix{
+& P \ar[d] \\
+M \ar[r] & M'' \ar[r] & 0
+}\]
+where the bottom row is exact. Then, if $P$ is projective, there is a lifting
+$P \to M$ making commutative the diagram
+\[ \xymatrix{
+& P \ar[d]\ar@{-->}[ld] \\
+M \ar[r] & M'' \ar[r] & 0
+}\]
+
+\begin{corollary}
+Let $M$ be a module. Then there is a surjection $P \twoheadrightarrow M$,
+where $P$ is projective.
+\end{corollary}
+\begin{proof}
+Indeed, we know (\rref{freesurjection}) that we can always get a surjection
+from a free
+module. Since free modules are projective by \rref{projmod}, we are
+done.
+\end{proof}
+
+\begin{exercise}
+Let $R$ be a principal ideal domain, $F'$ a submodule of a free module
+$F$. Show that
+$F'$ is free. (Hint: well-order the set of generators of $F$, and climb up by
+transfinite induction.)
In particular, any projective module over a principal ideal domain is free.
+\end{exercise}
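The hypothesis that $R$ be a principal ideal domain is essential here; over more general rings, projective modules need not be free:
\begin{example}
Let $R = \mathbb{Z}/6\mathbb{Z}$. The decomposition $\mathbb{Z}/6\mathbb{Z}
\simeq \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z}$ exhibits
$\mathbb{Z}/2\mathbb{Z}$ as a direct summand of a free $R$-module, so it is
projective by \cref{projmod}. It is not free, however, since any nonzero free
$\mathbb{Z}/6\mathbb{Z}$-module has at least six elements.
\end{example}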
+
+\subsection{Example: the Serre-Swan theorem}
+
+We now briefly digress to describe an important correspondence between
+projective modules and vector bundles. The material in this section will not
+be used in the sequel.
+
+Let $X$ be a compact space. We shall not recall the topological notion of a
+\emph{vector bundle} here.
+
+We note, however, that if $E$ is a (complex) vector bundle,
+then the set $\Gamma(X, E)$ of global sections is naturally a module over the
+ring $C(X)$ of complex-valued continuous functions on $X$.
+
+\begin{proposition}
+If $E$ is a vector bundle on a compact Hausdorff space $X$, then there is a
+surjection $\mathcal{O}^N \twoheadrightarrow E$ for some $N$.
+\end{proposition}
+Here $\mathcal{O}^N$ denotes the trivial bundle.
+
+It is known that in the category of vector bundles, every epimorphism splits.
+In particular, it follows that $E$ can be viewed as a \emph{direct summand} of
+the bundle $\mathcal{O}^N$. Since $\Gamma(X, E)$ is then a direct summand of
$\Gamma(X, \mathcal{O}^N) = C(X)^N$, we find that $\Gamma(X, E)$ is a direct
summand of a free $C(X)$-module, and hence projective. Thus:
+
+\begin{proposition}
+$\Gamma(X, E)$ is a finitely generated projective $C(X)$-module.
+\end{proposition}
+
+\begin{theorem}[Serre-Swan]
+The functor $E \mapsto \Gamma(X, E)$ induces an equivalence of categories
+between vector bundles on $X$ and finitely generated projective modules over
+$C(X)$.
+\end{theorem}
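One consequence is a geometric source of projective modules that are not free:
\begin{example}
Under this equivalence, trivial bundles correspond to free modules. Thus
nontrivial bundles give projective modules that are not free: in the real
analogue of the theorem (real vector bundles and the ring $C(X, \mathbb{R})$ of
real-valued continuous functions), the M\"obius line bundle on the circle $S^1$
is nontrivial, so its module of global sections is a finitely generated
projective $C(S^1, \mathbb{R})$-module that is not free.
\end{example}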
+
+
+
+\subsection{Injective modules}
+\label{ssecinj}
+
+We have given a complete answer to the question of when the functor
+$\hom_R(M,-)$ is exact. We have shown that there are a lot of such
+\emph{projective} modules in the category of $R$-modules, enough that any
+module admits a surjection from one such.
+However, we now have to answer the dual question: when is the functor
+$\hom_R(-, Q)$ exact?
+
+Let us make the dual definition:
+
+\begin{definition}
+An $R$-module $Q$ is \textbf{injective} if the functor $\hom_R(-,Q)$ is exact.
+\end{definition}
+
+
+Thus, a module $Q$ over a ring $R$ is injective if
+whenever $M \to N$ is an injection, and one has a map $M \to Q$, it can be
+extended to $N \to Q$: in other words, $\hom_R(N,Q ) \to \hom_R(M,Q)$ is
+surjective.
+We can visualize this by a diagram
+\[ \xymatrix{
+0 \ar[r] & M \ar[r] \ar[d] & N \ar@{-->}[ld] \\
+& Q
+}\]
+where the dotted arrow always exists if $Q$ is injective.
+
+The notion is dual to projectivity, in some sense, so just as every module $M$
+admits an epimorphic map $P \to M$ for $P$ projective, we expect by duality
+that every module admits a monomorphic map $M \to Q$ for $Q$ injective.
+This is in fact true, but will require some work.
+We start, first, with a fact about injective abelian groups.
+
+\begin{theorem}\label{divisibleimpliesinj}
A divisible abelian group (i.e. one where the map $x
\mapsto nx$ is surjective for every positive integer $n$) is injective as a
$\mathbb{Z}$-module (i.e. abelian group).
+\end{theorem}
+
+\begin{proof}
+The actual idea of the proof is rather simple, and similar to the proof
+of the Hahn-Banach theorem.
+Namely, we extend bit by bit, and then use Zorn's lemma.
+
+The first step is that we have a subgroup $M $ of a larger abelian
+group $N$.
We have a map $f: M \to Q$ for $Q$ some divisible abelian group, and we
+want to extend it to $N$.
+
Now we can consider the poset of pairs $(\tilde{f}, M')$ where $M' \supset
 M$, and $\tilde{f}: M' \to Q$ is a map extending $f$.
 Naturally, we make this into a poset by defining the order as ``$(\tilde{f},
 M') \leq (\tilde{f}', M'')$ if $M''$ contains $M'$ and $\tilde{f}'$
 is an extension of $\tilde{f}$.''
 It is clear that every chain has an upper bound, so Zorn's lemma implies
 that we have a submodule $M' \subset N$ containing $M$, and a map $\tilde{f}: M'
 \to Q$ extending $f$, such that there is no proper extension of $\tilde{f}$.
+ From this we will derive a contradiction unless $M' = N$.
+
+So suppose we have $M' \neq N$, for $M'$ the maximal submodule to which $f$
+can be extended, as in the above paragraph. Pick $m \in N - M'$, and consider the
+submodule $M' + \mathbb{Z} m \subset N$. We are going to show how to extend $\tilde{f}$
+to this bigger submodule. First, suppose $\mathbb{Z}m \cap M' = \{0\}$,
+i.e. the sum is direct. Then we can extend $\tilde{f}$ because $M' +
+\mathbb{Z}m$ is a
+direct sum: just define it to be zero on $\mathbb{Z}m$.
+
+The slightly harder part is what happens if $\mathbb{Z} m \cap M' \neq \{ 0\}$.
+In this case, there is an ideal $I \subset \mathbb{Z}$ such that $n \in I$
+if and only if $nm \in M'$.
+This ideal, however, is principal; let $g \in \mathbb{Z} - \left\{0\right\}$ be a generator. Then $gm = p
+\in M'$. In particular, $\tilde{f}(gm)$ is defined.
+We can ``divide'' this
+by $g$, i.e. find $u \in Q$ such that $gu = \tilde{f}(gm)$.
+
+Now we may extend to a
+map $\tilde{f}'$ from $\mathbb{Z} m + M'$ into $Q$ as follows. Choose $m'
+\in M', k \in \mathbb{Z}$. Define $\tilde{f}'( m' + km) = \tilde{f}(m')
++ k u$. It is easy to see that this is well-defined by the choice of $u$,
+and gives a proper extension of $\tilde{f}$. This contradicts maximality of
+$M'$ and completes the proof.
+\end{proof}
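Some standard examples illustrating the theorem:
\begin{example}
The groups $\mathbb{Q}$ and $\mathbb{Q}/\mathbb{Z}$ are divisible, hence
injective abelian groups. By contrast, $\mathbb{Z}$ is not divisible (the map
$x \mapsto 2x$ is not surjective), and indeed $\mathbb{Z}$ is not injective:
the identity $\mathbb{Z} \to \mathbb{Z}$ cannot be extended along the
inclusion $\mathbb{Z} \hookrightarrow \mathbb{Q}$, since every homomorphism
$\mathbb{Q} \to \mathbb{Z}$ is zero (each element of the image would have to
be divisible by every positive integer).
\end{example}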
+
+\begin{exercise}
+\cref{divisibleimpliesinj} works over any principal ideal domain.
+\end{exercise}
+\begin{exercise}[Baer] \label{baercriterion}
+Let $N$ be an $R$-module such that for any ideal $I \subset R$, any morphism
+$I \to N$ can be extended to $R \to N$. Then $N$ is injective. (Imitate the
+above argument.)
+\end{exercise}
+
+From this, we may prove:
+\begin{theorem}
+Any $R$-module $M$ can be imbedded in an injective $R$-module $Q$.
+\end{theorem}
+\begin{proof}
+First of all, we know that any $R$-module $M$ is a quotient of a free
$R$-module. We are going to show that the \emph{dual} (to be defined shortly) of a free module is injective. And so since
+every module admits a surjection from a free module, we will use a dualization
+argument to prove the present theorem.
+
+First, for any abelian group $G$, define the \textbf{dual group} as $G^\vee
+= \hom_{\mathbb{Z}}(G, \mathbb{Q}/\mathbb{Z})$.
+Dualization is clearly a contravariant functor from abelian groups to abelian
+groups.
+By \cref{homcontleftexact}
+and \cref{divisibleimpliesinj}, an exact
+sequence of groups
+\[ 0 \to A \to B \to C \to 0 \]
+induces an exact sequence
+\[ 0 \to C^\vee \to B^\vee \to A^\vee \to 0 .\]
+In particular, dualization is an exact functor:
+
+\begin{proposition} Dualization preserves exact sequences (but reverses
+the order).
+\end{proposition}
+
+Now, we are going to apply this to $R$-modules. The dual of a left $R$-module
+is acted upon by $R$.
+The action, which is natural enough, is as follows. Let $M$ be an
+$R$-module, and $f: M \to
+\mathbb{Q}/\mathbb{Z}$ be a homomorphism of abelian groups (since
+$\mathbb{Q}/\mathbb{Z}$ has in general no $R$-module structure), and $r \in
+R$; then we define $rf$ to be the map $M \to \mathbb{Q}/\mathbb{Z}$ defined via
+\[ (rf)(m) = f(rm).\]
+It is easy to check that $M^{\vee}$ is thus made into an
+$R$-module.\footnote{If $R$ is noncommutative, this would not work: instead
$M^{\vee}$ would be a \emph{right} $R$-module. For commutative rings, we have
+no such distinction between left and right modules.}
+In particular, dualization into $\mathbb{Q}/\mathbb{Z}$ gives a contravariant
+exact functor from $R$-\emph{modules} to $R$-\emph{modules}.
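For instance, dualization fixes finite cyclic groups: a homomorphism
$\mathbb{Z}/n\mathbb{Z} \to \mathbb{Q}/\mathbb{Z}$ of abelian groups is
determined by the image of $1$, which may be any element of the subgroup
$\frac{1}{n}\mathbb{Z}/\mathbb{Z}$ of elements killed by $n$; consequently
$(\mathbb{Z}/n\mathbb{Z})^{\vee} \simeq \mathbb{Z}/n\mathbb{Z}$.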
+
+
+ Let $M$ be as before,
+and now consider the $R$-module $M^{\vee}$. By \cref{freesurjection}, we can
+find a free
+ module $F$ and a surjection
+\[ F \to M^{\vee} \to 0.\]
+Now dualizing gives an exact sequence of $R$-modules
+\[ 0 \to M^{\vee \vee} \to F^{\vee}. \]
+However, there is a natural map (of $R$-modules) $M \to M^{\vee \vee}$: given $m \in M$, we can
+define a functional $\hom(M, \mathbb{Q}/\mathbb{Z}) \to \mathbb{Q}/\mathbb{Z}$
+by evaluation at $m$. One can check that this is a homomorphism. Moreover, this morphism $M \to M^{\vee \vee}$ is actually injective: if $m \in M$ were
+in the kernel, then by definition every functional $M \to
+\mathbb{Q}/\mathbb{Z}$ must vanish on $m$. It is easy to see (using
+$\mathbb{Z}$-injectivity of $\mathbb{Q}/\mathbb{Z}$) that this cannot happen
+if $m \neq 0$: we could just pick a nontrivial functional on the monogenic
+\emph{subgroup} $\mathbb{Z} m$ and extend to $M$.
+
+
+
+We claim now that $F^{\vee}$ is injective. This will prove the theorem, as
+we have the composite of monomorphisms $M \hookrightarrow M^{\vee \vee} \hookrightarrow F^{\vee}$ that
+embeds $M$ inside an injective module.
+
+\begin{lemma} The dual of a free $R$-module $F$ is an injective
+$R$-module.
+\end{lemma}
+\begin{proof}
Let $0 \to A \to B$ be exact; we have to show that
\[ \hom_R(B, F^\vee) \to \hom_R(A, F^\vee) \to 0 \]
is exact.
+Now we can reduce to the case where $F$ is the $R$-module $R$ itself.
+Indeed, $F$ is a direct sum of $R$'s by assumption, and taking hom's turns
+them into direct products; moreover the direct product of
+exact sequences is exact.
+
+So we are reduced to showing that $R^{\vee}$ is injective.
+Now we claim that
+\begin{equation} \label{weirddualityexpr} \hom_R(B, R^{\vee}) =
+\hom_{\mathbb{Z}}(B, \mathbb{Q}/\mathbb{Z}). \end{equation}
+In particular, $\hom_R( -, R^\vee)$ is an exact functor because
+$\mathbb{Q}/\mathbb{Z}$ is an injective abelian group.
+The proof of \cref{weirddualityexpr} is actually ``trivial.'' For instance,
an $R$-homomorphism $f: B \to R^\vee$ induces $\tilde{f}: B \to
\mathbb{Q}/\mathbb{Z}$ by sending $b \mapsto (f(b))(1)$. One checks that this
+is bijective.
+
+\end{proof}
+
+\end{proof}
+
+\subsection{The small object argument}
+
+There is another, more set-theoretic approach to showing that any $R$-module
+$M$ can be imbedded in an injective module.
+This approach, which constructs the injective module by a transfinite
+colimit of push-outs, is essentially analogous to the ``small object
+argument'' that one uses in homotopy theory to show that certain categories
+(e.g. the category of CW complexes) are model categories in the sense of
+Quillen; see \cite{Ho07}.
+While this method is somewhat abstract and more complicated than the one of
+\cref{ssecinj}, it is also more general. Apparently this method originates with Baer,
+and was revisited by Cartan and Eilenberg in
+\cite{Cartan-Eilenberg} and by Grothendieck in \cite{Gr57}.
+There Grothendieck uses it to show that
+many other abelian categories have enough injectives.
+
+We first begin with a few remarks on smallness.
+Let $\{B_{\alpha}\}, \alpha \in \mathcal{A}$ be an inductive system of objects in some
+category $\mathcal{C}$, indexed by
+an ordinal $\mathcal{A}$. Let us assume that $\mathcal{C}$ has (small)
+colimits. If $A$ is an object of $\mathcal{C}$, then there is a
+natural map
+\begin{equation} \label{naturalmapcolim} \varinjlim \hom(A, B_\alpha) \to
+\hom(A, \varinjlim B_\alpha) \end{equation}
+because if one is given a map $A \to B_\beta$ for some $\beta$, one
+naturally gets a map from $A$ into the colimit by composing with $B_\beta
+\to \varinjlim B_\alpha$. (Note that the left colimit is one of sets!)
+
+
In general, the map \cref{naturalmapcolim} is neither injective nor surjective.
+
+\begin{example}
+Consider the category of sets. Let $A = \mathbb{N}$ and $B_n = \left\{1,
+\dots, n\right\}$ be the inductive system indexed by the natural numbers
+(where $B_n \to B_{m}, n \leq m$ is the obvious map). Then $\varinjlim B_n =
+\mathbb{N}$, so there is a map
+\[ A \to \varinjlim B_n, \]
+which does not factor as
+\[ A \to B_m \]
+for any $m$. Consequently, $\varinjlim \hom(A, B_n) \to \hom(A, \varinjlim
+B_n)$ is not surjective.
+\end{example}
+
+\begin{example}
+Next we give an example where the map fails to be injective. Let $B_n =
+\mathbb{N}/\left\{1, 2, \dots, n\right\}$, that is, the quotient set of
+$\mathbb{N}$ with the first $n$ elements collapsed to one element.
+There are natural maps $B_n \to B_m$ for $n \leq m$, so the
+$\left\{B_n\right\}$ form an inductive system. It is easy to see that the
+colimit $\varinjlim B_n = \left\{\ast \right\}$: it is the one-point set.
+So it follows that $\hom(A, \varinjlim B_n)$ is a one-element set.
+
+However, $\varinjlim \hom(A , B_n)$ is \emph{not} a one-element set.
+Consider the family of maps $A \to B_n$ which are just the natural projections
+$\mathbb{N} \to \mathbb{N}/\left\{1, 2, \dots, n\right\}$ and the family of
+maps $A \to B_n$ which map the whole of $A$ to the class of $1$.
+These two families of maps are distinct at each step and thus are distinct in
+$\varinjlim \hom(A, B_n)$, but they induce the same map $A \to \varinjlim B_n$.
+\end{example}
+
+
+Nonetheless, if $A$ is a \emph{finite set}, it is easy to see that for any
+sequence of sets $B_1 \to B_2 \to \dots$, we have
+\[ \varinjlim \hom(A, B_n) = \hom(A, \varinjlim B_n). \]
+\begin{proof}
Let $f: A \to \varinjlim B_n$. The image of $f$ is finite, containing say
+elements $c_1, \dots, c_r \in \varinjlim B_n$. These all come from some
+elements in $B_N$ for $N$ large by definition of the colimit. Thus we can
+define $\widetilde{f}: A \to B_N$ lifting $f$ at a finite stage.
+
Next, suppose two maps $f: A \to B_m,
g : A \to B_m$ define the same map $A \to \varinjlim B_n$.
+Then each of the finitely many elements of $A$ gets sent to the same point in
+the colimit. By definition of the colimit for sets, there is $N \geq m$ such
+that the finitely many elements of $A$ get sent to the same points in $B_N$
+under $f$ and $g$. This shows that $\varinjlim \hom(A, B_n) \to \hom(A,
+\varinjlim B_n)$ is injective.
+\end{proof}
+
+
+The essential idea is that $A$ is ``small'' relative to the long chain of
+compositions $B_1 \to B_2 \to \dots$, so that it has to factor through a
+finite step.
+
+Let us generalize this.
+
+\begin{definition} \label{smallness}
+Let $\mathcal{C}$ be a category, $I $ a class of maps, and $\omega$ an ordinal.
+An object $A \in \mathcal{C}$ is said to be $\omega$-\textbf{small} (with
+respect to $I$) if
+whenever $\{B_\alpha\}$ is an inductive system parametrized by $\omega$ with
+maps in $I$, then
+the map
+\[ \varinjlim \hom(A, B_\alpha) \to \hom(A, \varinjlim B_\alpha) \]
+is an isomorphism.
+\end{definition}
+
+Our definition varies slightly from that of \cite{Ho07}, where only ``nice''
+transfinite sequences $\left\{B_\alpha\right\}$ are considered.
+
+In our applications, we shall begin by restricting ourselves to the category
+of $R$-modules for a fixed commutative ring $R$.
+We shall also take $I$ to be the set of \emph{monomorphisms,} or
+injections.\footnote{There are, incidentally, categories, such as the category
+of rings, where a categorical epimorphism may not be a surjection of sets.}
+Then each of the maps
+\[ B_\beta \to \varinjlim B_\alpha \]
+is an injection, so it follows that
$\hom(A, B_\beta) \to \hom(A, \varinjlim B_\alpha)$ is an injection as well, and in
+particular the canonical map
+\begin{equation} \label{homcolimmap} \varinjlim \hom(A, B_\alpha) \to \hom (A,
+\varinjlim B_\alpha) \end{equation}
+is an \emph{injection.}
+We can in fact interpret the $B_\alpha$'s as subobjects of the big module
+$\varinjlim B_\alpha$, and think of their union as $\varinjlim B_\alpha$.
+(This is not an abuse of notation if we identify $B_\alpha$ with the image in
+the colimit.)
+
+We now want to show that modules are always small for ``large'' ordinals
+$\omega$.
+For this, we have to digress to do some set theory:
+
+\begin{definition}
+Let $\omega$ be a \emph{limit} ordinal, and $\kappa$ a cardinal. Then $\omega$ is
+\textbf{$\kappa$-filtered} if every collection $C$ of ordinals strictly less
+than $\omega$ and of cardinality at most $\kappa$ has an upper bound strictly
+less than $\omega$.
+\end{definition}
+
+\begin{example} \label{limitordfinfiltered}
+A limit ordinal (e.g. the natural numbers $\omega_0$) is $\kappa$-filtered for any finite cardinal $\kappa$.
+\end{example}
+
+
+\begin{proposition}
+Let $\kappa$ be a cardinal. Then there exists a $\kappa$-filtered ordinal
+$\omega$.
+\end{proposition}
+\begin{proof}
+If $\kappa$ is finite, \cref{limitordfinfiltered} shows that any limit ordinal
+will do. Let us thus assume that $\kappa$ is infinite.
+
+Consider the smallest ordinal $\omega$ whose cardinality is strictly greater
+than that of $\kappa$. Then we claim that $\omega$ is $\kappa$-filtered.
+Indeed, if $C$ is a collection of at most $\kappa$ ordinals strictly smaller
+than $\omega$, then each of these ordinals is of size at most $\kappa$. Thus
+the union of all the ordinals in $C$ (which is an ordinal) is of size at most
+$\kappa$, so is strictly smaller than $\omega$, and it provides an upper bound as in the definition.
+\end{proof}
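\begin{example}
For $\kappa = \aleph_0$, the above proof produces the first uncountable
ordinal $\omega_1$: a countable collection of countable ordinals has countable
union, which is again a countable ordinal and hence strictly less than
$\omega_1$. By contrast, $\omega_0$ itself is not $\aleph_0$-filtered, since
the collection of \emph{all} finite ordinals has no upper bound strictly below
$\omega_0$.
\end{example}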
+
+
+\begin{proposition} \label{modulesaresmall}
+Let $M$ be a module, $\kappa$ the cardinality of the set of its submodules.
If $\omega$ is $\kappa$-filtered, then $M$ is $\omega$-small (with
respect to injections).
+\end{proposition}
+
+The proof is straightforward, but let us first think about a special case. If
+$M$ is finite, then the claim is that for any inductive system
+$\left\{B_\alpha\right\}$ with injections between them, parametrized by a
+limit ordinal, any map $M \to
\varinjlim B_\alpha$ factors through one of the $B_\alpha$. But this is clear:
$M$ is finite, and each element of the image must land inside one of the
$B_\alpha$; since the $B_\alpha$ are totally ordered, all of $M$ lands inside some finite stage.
+\begin{proof}
+We need only show that the map \cref{homcolimmap} is a surjection when
+$\omega$ is $\kappa$-filtered.
Let $f: M \to \varinjlim B_\alpha$ be a map.
Consider the subobjects $\{f^{-1}(B_\alpha)\}$ of $M$, where $B_\alpha$ is considered as a
subobject of the colimit. If one of these, say $f^{-1}(B_\beta)$, fills $M$,
then the map factors through $B_\beta$.

So suppose to the contrary that all of the $f^{-1}(B_\alpha)$ were proper
subobjects of $M$.
However, we know that
\[ \bigcup f^{-1}(B_\alpha) = f^{-1}\left(\bigcup B_\alpha\right) = M. \]
Now there are at most $\kappa$ different subobjects of $M$ that occur among
the $f^{-1}(B_\alpha)$, by hypothesis.
Thus we can find a set $S$ of ordinals of cardinality at most $\kappa$ such that as
$\alpha'$ ranges over $S$, the
$f^{-1}(B_{\alpha'})$ range over \emph{all} the $f^{-1}(B_\alpha)$.

However, $S$ has an upper bound $\widetilde{\omega} < \omega$ as $\omega$ is
$\kappa$-filtered. In particular,
all the $f^{-1}(B_{\alpha'})$ are contained in
$f^{-1}(B_{\widetilde{\omega}})$. It follows that
$f^{-1}(B_{\widetilde{\omega}}) = M$.
In particular, the map $f$ factors through $B_{\widetilde{\omega}}$.
+\end{proof}
+
+From this, we will be able to deduce the existence of lots of injectives.
+Let us recall the criterion of Baer (\cref{baercriterion}): a module $Q$ is
+injective if and only if in every commutative diagram
+\[ \xymatrix{
+\mathfrak{a} \ar[d] \ar[r] & Q \\
+R \ar@{-->}[ru]
+}\]
for $\mathfrak{a} \subset R$ an ideal, the dotted arrow exists. In other
words, we are trying to solve an \emph{extension problem} with respect to the
inclusion $\mathfrak{a} \hookrightarrow R$ into the module $Q$.

If $Q$ is an $R$-module, then in general we may have only a semi-complete diagram as above. In
+it, we can form the \emph{push-out}
+\[ \xymatrix{
+\mathfrak{a} \ar[d] \ar[r] & Q \ar[d] \\
+R \ar[r] & R \oplus_{\mathfrak{a}} Q
+}.\]
+Here the vertical map is injective, and the diagram commutes. The point is
+that we can extend $\mathfrak{a} \to Q$ to $R$ \emph{if} we extend $Q$ to the
+larger module $R \oplus_{\mathfrak{a}} Q$.
+
+
+The point of the small object argument is to repeat this procedure
+transfinitely many times.
+
+\begin{theorem}
+Let $M$ be an $R$-module. Then there is an embedding $M \hookrightarrow Q$ for
+$Q$ injective.
+\end{theorem}
+\begin{proof}
+We start by defining a functor $\mathbf{M}$ on the category of $R$-modules.
+Given $N$, we consider the set of all maps $\mathfrak{a} \to N$ for
+$\mathfrak{a} \subset R$ an ideal, and consider the push-out
+\begin{equation} \label{hugediag}
+\xymatrix{
+\bigoplus \mathfrak{a}\ar[r] \ar[d] & N \ar[d] \\
+\bigoplus R \ar[r] & N \oplus_{\bigoplus \mathfrak{a}} \bigoplus R
+}
+\end{equation}
where the direct sums are taken over the set of all pairs $(\mathfrak{a}, \phi)$
consisting of an ideal $\mathfrak{a} \subset R$ and a morphism $\phi: \mathfrak{a} \to N$,
so that each copy of an ideal $\mathfrak{a}$ maps into the corresponding copy of $R$.
+We define $\mathbf{M}(N)$ to be this push-out. Given a map $N \to N'$, there
+is a natural morphism of diagrams \cref{hugediag}, so $\mathbf{M}$ is a
+functor.
+Note furthermore that there is a natural transformation
+\[ N \to \mathbf{M}(N), \]
+which is \emph{always an injection.}
+
+The key property of $\mathbf{M}$ is that if $\mathfrak{a} \to N$ is any
+morphism, it can be extended to $R \to \mathbf{M}(N)$, by the very
+construction of $\mathbf{M}(N)$. The idea will now be to
+apply $\mathbf{M}$ a transfinite number of times and to use the small object
+property.
+
+
+We define for each ordinal $\omega$ a functor $\mathbf{M}_{\omega}$ on the
+category of $R$-modules, together with a natural injection $N \to
+\mathbf{M}_{\omega}(N)$. We do this by transfinite induction.
+First, $\mathbf{M}_1 = \mathbf{M}$ is the functor defined above.
+Now, suppose given an ordinal $\omega$, and suppose $\mathbf{M}_{\omega'}$ is
+defined for $\omega' < \omega$. If $\omega$ has an immediate predecessor
+$\widetilde{\omega}$, we let
+$$\mathbf{M}_{\omega} = \mathbf{M} \circ \mathbf{M}_{\widetilde{\omega}}.$$
+If not, we let $\mathbf{M}_{\omega}(N) = \varinjlim_{\omega' < \omega}
+\mathbf{M}_{\omega'}(N)$.
+It is clear (e.g. inductively) that the $\mathbf{M}_{\omega}(N)$ form an inductive system over
+ordinals $\omega$, so this is reasonable.
+
+Let $\kappa$ be the cardinality of the set of ideals in $R$, and let $\Omega$
+be a $\kappa$-filtered ordinal.
+The claim is as follows.
+
+\begin{lemma}
+For any $N$, $\mathbf{M}_{\Omega}(N)$ is injective.
+\end{lemma}
+
+If we prove this, we will be done. In fact, we will have shown that there is a
+\emph{functorial} embedding of a module into an injective.
+Thus, we have only to prove this lemma.
+
+\begin{proof}
+By Baer's criterion (\cref{baercriterion}), it suffices to show that if
+$\mathfrak{a} \subset R$ is an ideal, then any map $f: \mathfrak{a} \to
+\mathbf{M}_{\Omega}(N)$ extends to $R \to \mathbf{M}_{\Omega}(N)$. However, we
+know since $\Omega$ is a limit ordinal that
+\[ \mathbf{M}_{\Omega}(N) = \varinjlim_{\omega < \Omega}
+\mathbf{M}_{\omega}(N), \]
+so by \cref{modulesaresmall}, we find that
+\[ \hom_R(\mathfrak{a}, \mathbf{M}_{\Omega}(N)) = \varinjlim_{\omega < \Omega}
+\hom_R(\mathfrak{a}, \mathbf{M}_{\omega}(N)). \]
+This means in particular that there is some $\omega' < \Omega$ such that $f$
+factors through the submodule $\mathbf{M}_{\omega'}(N)$, as
+\[ f: \mathfrak{a} \to \mathbf{M}_{\omega'}(N) \to \mathbf{M}_{\Omega}(N). \]
+However, by the fundamental property of the functor $\mathbf{M}$, we know that
+the map $\mathfrak{a} \to \mathbf{M}_{\omega'}(N)$ can be extended to
+\[ R \to \mathbf{M}( \mathbf{M}_{\omega'}(N)) = \mathbf{M}_{\omega' + 1}(N), \]
and the last object embeds in $\mathbf{M}_{\Omega}(N)$.
+In particular, $f$ can be extended to $\mathbf{M}_{\Omega}(N)$.
+\end{proof}
+
+
+\end{proof}
+
+\subsection{Split exact sequences}
+
+\add{additive functors preserve split exact seq}
+Suppose that
+$\xymatrix@1{0 \ar[r] & L \ar[r]^\psi & M \ar[r]^f & N \ar[r] & 0}$
+is a split short exact sequence.
+Since $\Hom_R (D, \cdot)$ is a left-exact functor, we see that
$$\xymatrix@1{0 \ar[r]
 & \Hom_R(D, L) \ar[r]^{\psi'}
 & \Hom_R(D, M) \ar[r]^{f'}
 & \Hom_R(D, N)}$$
+is exact. In addition,
+$\Hom_R (D, L \oplus N) \cong \Hom_R(D, L) \oplus \Hom_R (D, N)$. Therefore, in
+the case that we start with a split short exact sequence $M \cong L \oplus N$,
+applying $\Hom_R (D, \cdot)$ does yield a split short exact sequence
$$\xymatrix@1{0 \ar[r]
 & \Hom_R(D, L) \ar[r]^{\psi'}
 & \Hom_R(D, M) \ar[r]^{f'}
 & \Hom_R(D, N) \ar[r] & 0}.$$
+
+Now, assume that
$$\xymatrix@1{0 \ar[r]
 & \Hom_R(D, L) \ar[r]^{\psi'}
 & \Hom_R(D, M) \ar[r]^{f'}
 & \Hom_R(D, N) \ar[r] & 0}$$
+is a short exact sequence of abelian groups for all $R$-modules $D$.
Setting $D = R$ and using $\Hom_R (R, N) \cong N$ yields that
+$\xymatrix@1{0 \ar[r] & L \ar[r]^\psi & M \ar[r]^f & N \ar[r] & 0}$
+is a short exact sequence.
+
+Set $D = N$, so we have
$$\xymatrix@1{0 \ar[r]
 & \Hom_R(N, L) \ar[r]^{\psi'}
 & \Hom_R(N, M) \ar[r]^{f'}
 & \Hom_R(N, N) \ar[r] & 0}$$
Here, $f'$ is surjective, so the identity map of $\Hom_R (N, N)$ lifts to a
map $g \in \Hom_R (N, M)$ so that $f \circ g = f'(g) = \id$.
+This means that $g$ is a splitting homomorphism for the sequence
+$\xymatrix@1{0 \ar[r] & L \ar[r]^\psi & M \ar[r]^f & N \ar[r] & 0}$,
+and therefore the sequence is a split short exact sequence.
+
+
+\section{The tensor product}
+\label{sec:tensorprod}
+We shall now introduce the third functor of this chapter: the tensor product.
+The tensor product's key property is that it allows one to ``linearize''
+bilinear maps. When taking the tensor product of rings, it provides a
+categorical coproduct as well.
+
+\subsection{Bilinear maps and the tensor product}
+
+Let $R$ be a commutative ring, as usual.
+We have seen that the $\hom$-sets $\hom_R(M,N)$ of $R$-modules $M,N$ are themselves
+$R$-modules.
+Consequently, if we have three $R$-modules $M,N,P$, we can think about
+module-homomorphisms
+\[ M \stackrel{\lambda}{\to}\hom_R(N,P). \]
+Suppose $x \in M, y \in N$. Then we can consider
+\( \lambda(x) \in \hom_R(N,P) \)
+and thus we can consider the element
+\( \lambda(x)(y) \in P. \)
+We denote this element $\lambda(x)(y)$, which depends on the variables $x \in
+M, y \in N$, by $\lambda(x,y)$ for convenience; it
+is a function of two variables $M \times N \to P$.
+
+There are
+certain properties of $\lambda(\cdot, \cdot)$ that we list below.
+Fix $x , x' \in M$; $y, y' \in N; \ a \in R$. Then:
+\begin{enumerate}
+\item $\lambda(x,y+y') = \lambda(x,y) + \lambda(x, y')$ because $\lambda(x)$
+is
+ additive.
+\item $\lambda(x, ay) = a \lambda(x,y)$ because $\lambda(x)$ is an
+$R$-module homomorphism.
+\item $\lambda(x+x', y) = \lambda(x,y) + \lambda(x', y)$ because
+$\lambda$ is additive.
+\item $\lambda(ax, y) = a\lambda(x,y)$ because $\lambda$ is an $R$-module
+homomorphism.
+\end{enumerate}
+
+Conversely, given a function $\lambda: M \times N \to P$ of two variables satisfying the above properties,
+it is easy to see that we can get a morphism of $R$-modules $M \to
+\hom_R(N,P)$.
+
+
+
+\begin{definition}
+An \textbf{$R$-bilinear map $\lambda: M \times N \to P$} is a map satisfying
+the above listed conditions. In other words, it is required to be $R$-linear
+in each variable separately.
+\end{definition}
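For concreteness, here are two standard instances (and one non-instance) of
bilinearity:
\begin{example}
The multiplication map $R \times R \to R$, $(a,b) \mapsto ab$, is $R$-bilinear,
as is the evaluation map $\hom_R(M,N) \times M \to N$, $(f,x) \mapsto f(x)$.
By contrast, addition $R \times R \to R$, $(a,b) \mapsto a+b$, is linear as a
map out of $R \oplus R$ but is \emph{not} bilinear: for fixed $a$, the map
$b \mapsto a + b$ is not additive unless $a = 0$.
\end{example}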
+
The previous discussion shows that there is a \emph{bijection} between $R$-bilinear
maps $M \times N \to P$ and $R$-module maps $M \to \hom_R(N,P)$.
+Note that the first interpretation is symmetric in $M,N$; the second, by
+contrast, can be interpreted in terms of the old concepts of an $R$-module map.
+So both are useful.
+
+\begin{exercise}
+Prove that a $\mathbb{Z}$-bilinear map out of $\mathbb{Z}/2 \times
+\mathbb{Z}/3$ is identically zero, whatever the target module.
+\end{exercise}
+
+
+Let us keep the notation of the previous discussion: in particular, $M,N, P$ will
+be modules over a commutative ring $R$.
+
+Given a bilinear map $M \times N \to P$ and a homomorphism $P \to P'$, we can
+clearly get a bilinear map $M \times N \to P'$ by composition.
+In particular, given $M,N$, there is a \emph{covariant functor} from
+$R$-modules to
+$\mathbf{Sets}$ sending any $R$-module $P$ to the collection of $R$-bilinear
+maps $M \times N
+\to P$. As usual, we are interested in when this functor is
+\emph{corepresentable.}
+As a result,
+we are interested in \emph{universal} bilinear maps out of $M \times N$.
+
+
+\begin{definition}
+An $R$-bilinear map $\lambda: M \times N \to P$ is called \textbf{universal} if
+for all $R$-modules $Q$, the composition of $P \to Q$ with $M \times N
+\stackrel{\lambda}{\to} P$
+gives a \textbf{bijection}
+\[ \hom_R(P,Q) \simeq \left\{\mathrm{bilinear \ maps} \ M \times N \to
+Q\right\} \]
+So, given a bilinear map $M \times N \to Q$, there is a \textit{unique} map $P
+\to Q$ making the diagram
+\[
+\xymatrix{
+& P \ar[dd] \\
+M \times N \ar[ru]^{\lambda} \ar[rd] & \\
+& Q
+}
+\]
+
+Alternatively, $P$ \emph{corepresents} the functor $Q \to
+\left\{\mathrm{bilinear \ maps \ } M \times N \to Q\right\}$.
+\end{definition}
+
General nonsense says that given $M,N$, a universal $R$-bilinear map $M
\times N \to P$ is
\textbf{unique} up to isomorphism (if it exists). This follows from \emph{Yoneda's lemma}.
+For convenience, we give a direct proof.
+
Suppose $M \times N \stackrel{\lambda}{\to} P$ and $M \times N
\stackrel{\lambda'}{\to} P'$ are both
universal. Then by the universal property, there are unique maps $P \to P'$
+and $P' \to P$ making the
+following diagram commutative:
+\[
+\xymatrix{
+& P \ar[dd] \\
+M \times N \ar[ru]^{\lambda} \ar[rd]^{\lambda'} & \\
+& P' \ar[uu]
+}
+\]
+These compositions $P \to P' \to P, P' \to P \to P'$ have to be the identity
+because of the uniqueness part of the universal property.
+As a result, $P \to P'$ is an isomorphism.
+
+We shall now show that this universal object does indeed exist.
+
+\begin{proposition} \label{tensorexists}
+Given $M,N$, a universal bilinear map out of $M \times N$ exists.
+\end{proposition}
+
+Before proving it we make:
+\begin{definition}
+We denote the codomain of the universal map out of $M \times N $ by $M
+\otimes_R N$. This is called the \textbf{tensor product} of $M,N$, so there
+is a universal bilinear map out of $M \times N$ into $M \otimes_R N$.
+\end{definition}
+
+\begin{proof}[Proof of \rref{tensorexists}] We will simply give
+a presentation of the tensor product by
+``generators and relations.''
Take the free $R$-module generated by the symbols $\left\{x \otimes
y\right\}_{x \in M, y \in N}$, and define $M \otimes_R N$ as its quotient by
the relations forced upon us
by the definition of a bilinear map (for $x, x' \in M, \ y, y' \in N, \ a
\in R$):
+\begin{enumerate}
+\item $(x+x') \otimes y = x \otimes y + x' \otimes y$.
+\item $(ax) \otimes y = a(x \otimes y) = x \otimes (ay)$.
+\item $x \otimes (y+y') = x \otimes y + x \otimes y'$.
+\end{enumerate}
+
We will abuse notation and write $x \otimes y$ for its image in $M \otimes_R
N$ (as opposed to the symbol generating the free module).
+
There is a map $M \times N \to M \otimes_R N$ sending $(x,y) \to x
\otimes y$; the relations imposed imply that this map is bilinear. We have to
check
that it is universal, but this is actually quite direct.
+
+Suppose we had a bilinear map $\lambda: M \times N \to P$. We must construct
+a linear map $M
+\otimes_R N \to P$.
+To do this, we can just give a map on generators, and show that it is zero on
+each of the relations.
+It is easy to see that to make the appropriate diagrams commute, the linear
+map $M \otimes N \to P$ has to send $x \otimes y \to \lambda(x,y)$.
+This factors
+through the relations on $x \otimes y$ by bilinearity and leads to an
+$R$-linear map $M \otimes_{R} N \to P$ such that the following diagram
+commutes:
+\[
+\xymatrix{
+M \times N \ar[r] \ar[rd]^{\lambda} & M \otimes_R N \ar[d] \\
+& P
+}.\]
+It is easy to see that $M \otimes_R N \to P$ is unique because the $x \otimes
+y$ generate it.
+\end{proof}
+
+
+The theory of the tensor product allows one to do away with bilinear maps and
+just think of linear maps.
+
+Given $M, N$, we have constructed an object $M \otimes_R N$. We now wish to see
+the functoriality of the tensor product. In fact, $(M,N) \to M \otimes_R N$ is a \emph{covariant
+functor} in two variables from $R$-modules to $R$-modules.
+In particular, if $M \to M', N \to N'$ are morphisms, there is a canonical map
+\begin{equation} \label{tensorisfunctor} M \otimes_R N \to M' \otimes_R N'.
+\end{equation}
+To obtain \cref{tensorisfunctor}, we take the natural bilinear map $M \times N \to M' \times N'
+\to M' \otimes_R N'$ and use the universal property of $M \otimes_R N$ to get
+a map out of it.
+
+\subsection{Basic properties of the tensor product}
+We make some observations and prove a few basic properties. As the proofs will
+show, one powerful way to prove things about an object is to reason about its
+universal property. If two objects have the same universal property, they are
+isomorphic.
+
+\begin{proposition}
+The tensor product is symmetric: for $R$-modules $M,N$, we have $M \otimes_R
+N \simeq N \otimes_R M$
+canonically.
+\end{proposition}
+\begin{proof}
This is clear from the universal properties: giving a bilinear map
out of $M \times N$ is the same as a bilinear map out of $N \times M$.
Thus $M \otimes_R N$ and $N \otimes_R M$ have the same universal property.
+It is also
+clear from the explicit construction.
+\end{proof}
+
+\begin{proposition}
+For an $R$-module $M$, there is a canonical isomorphism $M \to M \otimes_R R$.
+\end{proposition}
+\begin{proof}
 If we think in terms of
bilinear maps, this statement is equivalent to the statement that a bilinear
map $\lambda: M \times R \to P$ is the same as a linear map $M \to P$. Indeed,
given $\lambda$, we restrict it to $\lambda(\cdot, 1)$. Conversely, given
$f: M \to P$, we define $\lambda$ by $\lambda(x,a) = af(x)$. This gives a
bijection as claimed.
+\end{proof}
+
+\begin{proposition}
+The tensor product is associative. There are canonical isomorphisms $M
+\otimes_R (N \otimes_R P) \simeq (M
+\otimes_R N) \otimes_R P$.
+\end{proposition}
+\begin{proof}
+ There are a few ways to see this: one is to build
+it explicitly from the construction given, sending $x \otimes (y \otimes z) \to
+(x \otimes y) \otimes z$.
+
+More conceptually, both have the same universal
+property: by general categorical nonsense (Yoneda's lemma), we need to show
+that for all $Q$, there is a canonical bijection
\[ \hom_R(M \otimes (N \otimes P), Q) \simeq \hom_R( (M \otimes N)
\otimes P, Q) \]
+where the $R$'s are dropped for simplicity. But both of these sets can be
+identified with the set of trilinear maps\footnote{Easy to define.} $M \times N
+\times P \to Q$. Indeed
+\begin{align*}
+\hom_R(M \otimes (N \otimes P), Q) & \simeq \mathrm{bilinear} \ M \times (N
+\otimes P) \to Q \\
+& \simeq \hom(N \otimes P, \hom(M,Q)) \\
+& \simeq \mathrm{bilinear} \ N \times P \to \hom(M,Q) \\
& \simeq \hom(N, \hom(P, \hom(M,Q))) \\
+& \simeq \mathrm{trilinear\ maps}.
+\end{align*}
+
+\end{proof}
+
+\subsection{The adjoint property}
+Finally, while we defined the tensor product in terms of a ``universal
+bilinear map,'' we saw earlier that bilinear maps could be interpreted as maps
+into a suitable $\hom$-set.
+In particular, fix $R$-modules $M,N,P$. We know that the set of bilinear maps
+$M \times N \to P$ is naturally in bijection with
+\[ \hom_R(M, \hom_R(N,P)) \]
+as well as with
\[ \hom_R(M \otimes_R N, P). \]
+
+As a result, we find:
+\begin{proposition} For $R$-modules $M,N,P$, there is a natural bijection
+\[ \hom_R(M,\hom_R(N,P)) \simeq \hom_R(M \otimes_R N, P). \]
+\end{proposition}
+
+There is a more evocative way of phrasing the above natural bijection. Given
+$N$, let us define the functors $F_N, G_N$ via
+\[ F_N(M) = M \otimes_R N, \quad G_N(P) = \hom_R(N,P). \]
+Then the above proposition states that there is a natural isomorphism
+\[ \hom_R( F_N(M), P) \simeq \hom_R( M, G_N(P)). \]
+In particular, $F_N$ and $G_N$ are \emph{adjoint functors}. So, in a sense,
+the operations of $\hom$ and $\otimes$ are dual to each other.
+
+\begin{proposition} \label{tensorcolimit}
+Tensoring commutes with colimits.
+\end{proposition}
+
+In particular, it follows that if $\left\{N_\alpha\right\}$ is a family of
+modules, and $M$ is a module, then
\[ M \otimes_R \bigoplus N_\alpha = \bigoplus (M \otimes_R N_\alpha). \]
+\begin{exercise}
+Give an explicit proof of the above relation.
+\end{exercise}
+
+\begin{proof}
+This is a formal consequence of the fact that the tensor product is a left
+adjoint and consequently commutes with all colimits.
+\add{proof}
+\end{proof}
+
+In particular, by \cref{tensorcolimit}, the tensor product commutes with \emph{cokernels.}
+That is, if $A \to B \to C \to 0$ is an exact sequence of $R$-modules and $M$
+is an $R$-module, $A \otimes_R M \to B \otimes_R M \to C \otimes_R M \to 0$ is
+also exact, because exactness of such a sequence is precisely a condition on
+the cokernel.
+That is, the tensor product is \emph{right exact.}
+
+We can thus prove a simple result on finite generation:
+\begin{proposition} \label{fingentensor}
+If $M, N$ are finitely generated, then $M \otimes_R N$ is finitely generated.
+\end{proposition}
+\begin{proof}
Indeed, if we have surjections $R^m \to M, R^n \to N$, we can tensor them; we
get a surjection since the tensor product is right-exact.
So we have a surjection
$R^{m n} = R^m \otimes_R R^n \to M \otimes_R N$.
+\end{proof}
+
+
+\subsection{The tensor product as base-change}
+
+Before this, we have considered the tensor product as a functor within a
+fixed category. Now, we shall see that when one takes the tensor product with a
+\emph{ring}, one gets additional structure. As a result, we will be able to
+get natural functors between \emph{different} module categories.
+
+Suppose we have a
+ring-homomorphism $\phi:R \to R'$. In this case, any $R'$-module can be
+regarded as
+an $R$-module.
+In particular, there is a canonical functor of \emph{restriction}
+\[ R'\mbox{-}\mathrm{modules} \to R\mbox{-}\mathrm{modules}. \]
+
+We shall see that the tensor product provides an \emph{adjoint} to this
+functor.
Namely, if $M$ has an $R$-module
structure, then $M \otimes_R R'$ has an $R'$-module structure, where $R'$ acts
through the second factor. Since the tensor product is functorial, this gives a functor
+in the opposite direction:
+\[ R\mbox{-}\mathrm{modules} \to R'\mbox{-}\mathrm{modules}. \]
+
+
+Let $M'$ be an $R'$-module and $M$ an $R$-module. In view of the above,
+we can talk about
+\[ \hom_R(M, M') \]
+by thinking of $M'$ as an $R$-module.
+
+\begin{proposition}
+There is a canonical isomorphism between
+\[ \hom_R(M, M') \simeq \hom_{R'}(M \otimes_R R', M'). \]
+In particular, the restriction functor and the functor $M \to M \otimes_R R'$
+are adjoints to each other.
+\end{proposition}
+
+
+\begin{proof}
+We can describe the bijection explicitly. Given an $R'$-homomorphism $f:M
+\otimes_R R' \to M'$, we get a map
+\[ f_0:M \to M' \]
+sending
+\[ m \to m \otimes 1 \to f(m \otimes 1). \]
+This is easily seen to be an $R$-module-homomorphism. Indeed,
+\[ f_0(ax) = f(ax \otimes 1) = f(\phi(a)(x \otimes 1)) = a f(x \otimes 1) =
+a f_0(x) \]
+since $f$ is an $R'$-module homomorphism.
+
+Conversely, if we are given a homomorphism of $R$-modules
+\[ f_0: M \to M' \]
+then we can define
+\[ f: M \otimes_R R' \to M' \]
by sending $m \otimes r' \to r' f_0(m)$, which is a homomorphism of
$R'$-modules.
+This is well-defined because $f_0$ is a homomorphism of $R$-modules. We leave
+some details to the reader.
+\end{proof}
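A standard special case may make the adjunction more concrete:
\begin{example}
Take $R = \mathbb{R}$ and $R' = \mathbb{C}$. Base change sends a real vector
space $V$ to its complexification $V \otimes_{\mathbb{R}} \mathbb{C}$; for
instance, $\mathbb{R}^n \otimes_{\mathbb{R}} \mathbb{C} \simeq \mathbb{C}^n$.
The proposition then says that an $\mathbb{R}$-linear map $V \to W$ into a
complex vector space $W$ is the same thing as a $\mathbb{C}$-linear map
$V \otimes_{\mathbb{R}} \mathbb{C} \to W$.
\end{example}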
+
+\begin{example}
+In the representation theory of finite groups, the operation of tensor product
+corresponds to the procedure of \emph{inducing} a representation. Namely, if
+$H \subset G$ is a subgroup of a group $G$, then there is an obvious
+restriction functor from $G$-representations to $H$-representations.
The adjoint to this is the induction operator. Since an $H$-representation
(resp. a $G$-representation) is just a module over the group ring, the
+operation of induction is really a special case of the tensor product. Note
+that the group rings are generally not commutative, so this should be
+interpreted with some care.
+\end{example}
+
+\subsection{Some concrete examples}
+
+We now present several concrete computations of tensor products in explicit
+cases to illuminate what is happening.
+
+\begin{example} Let us compute $\mathbb{Z}/10 \otimes_{\mathbb{Z}}
+\mathbb{Z}/12$.
+Since $1$ spans $\mathbb{Z} / (10)$ and $1$ spans $\mathbb{Z} / (12)$,
+we see that $1 \otimes 1$ spans $\mathbb{Z} / (10) \otimes \mathbb{Z} /
+(12)$ and this tensor
+product is a cyclic group.
+
Note that
$x \otimes 0 = x \otimes (0 + 0) = x \otimes 0 + x \otimes 0$, so that
$x \otimes 0 = 0$ for every $x$; similarly
$0 \otimes y = 0$ for every $y$.
+Now,
+$10 (1 \otimes 1) = 10 \otimes 1 = 0 \otimes 1 = 0$
+and
+$12 (1 \otimes 1) = 1 \otimes 12 = 1 \otimes 0 = 0$,
+so the cyclic group $\mathbb{Z} / (10) \otimes \mathbb{Z} / (12)$ has order
+dividing both
+$10$ and $12$. This means that the cyclic group has order dividing
+$\gcd(10, 12) = 2$.
+
To show that the order of $\mathbb{Z} / (10) \otimes \mathbb{Z} / (12)$ is
exactly two, define a bilinear map
$g: \mathbb{Z} / (10) \times \mathbb{Z} / (12) \to \mathbb{Z} / (2)$ via
$g : (x, y) \mapsto xy$; this is well-defined since $2$ divides both $10$
and $12$. The universal property of tensor products then
+says that there is a unique linear map
+$f: \mathbb{Z} / (10) \otimes \mathbb{Z} / (12) \to \mathbb{Z} / (2)$ making
+the diagram
+\[
+\xymatrix{
+\mathbb{Z} / (10) \times \mathbb{Z} / (12) \ar[r]^\otimes \ar[rd]_g
+ & \mathbb{Z} / (10) \otimes \mathbb{Z} / (12) \ar[d]^f \\
& \mathbb{Z} / (2)
}
\]
commute. In particular, this means that $f (x \otimes y) = g(x, y) = xy$.
+Hence, $f(1 \otimes 1) = 1$, so $f$ is surjective, and therefore,
+$\mathbb{Z} / (10) \otimes \mathbb{Z} / (12)$ has size at least two. This
+allows us to
+conclude that $\mathbb{Z} / (10) \otimes \mathbb{Z} / (12) = \mathbb{Z} / (2)$.
+\end{example}
+
+We now generalize the above example to tensor products of cyclic groups.
+\begin{example}
+Let $d=\gcd(m,n)$. We will show that
+$(\mathbb{Z}/m\mathbb{Z})\otimes(\mathbb{Z}/n\mathbb{Z})\simeq(\mathbb{Z}/d\mathbb{Z})$,
+and thus in particular if $m$ and $n$
+are relatively prime, then
+$(\mathbb{Z}/m\mathbb{Z})\otimes(\mathbb{Z}/n\mathbb{Z})\simeq(0)$. First,
+note that
+any $a\otimes b\in(\mathbb{Z}/m\mathbb{Z})\otimes(\mathbb{Z}/n\mathbb{Z})$
+can be written as $ab(1\otimes 1)$,
+so that $(\mathbb{Z}/m\mathbb{Z})\otimes(\mathbb{Z}/n\mathbb{Z})$ is generated
+by $1\otimes 1$ and hence
+is a cyclic group. We know from elementary number theory that $d=xm+yn$
+for some $x,y\in\mathbb{Z}$. We have $m(1\otimes 1)=m\otimes 1=0\otimes
+1=0$ and
+$n(1\otimes 1)=1\otimes n=1\otimes0=0$. Thus $d(1\otimes 1)=(xm+yn)(1\otimes
+1)=0$, so that $1\otimes1$ has order dividing $d$.
+
+Conversely, consider the map
+$f:(\mathbb{Z}/m\mathbb{Z})\times(\mathbb{Z}/n\mathbb{Z})\rightarrow(\mathbb{Z}/d\mathbb{Z})$
+defined by
+$f(a+m\mathbb{Z},b+n\mathbb{Z})=ab+d\mathbb{Z}$. This is well-defined,
+since if $a'+m\mathbb{Z}=a+m\mathbb{Z}$
+and $b'+n\mathbb{Z}=b+n\mathbb{Z}$ then $a'=a+mr$ and $b'=b+ns$ for some
+$r,s$ and
+thus $a'b'+d\mathbb{Z}=ab+(mrb+nsa+mnrs)+d\mathbb{Z}=ab+d\mathbb{Z}$
+(since $d=\gcd(m,n)$
+divides $m$ and $n$). This is obviously bilinear, and hence induces a map
+$\tilde{f}:(\mathbb{Z}/m\mathbb{Z})\otimes(\mathbb{Z}/n\mathbb{Z})\rightarrow(\mathbb{Z}/d\mathbb{Z})$,
+which
+has $\tilde{f}(1\otimes1)=1+d\mathbb{Z}$. But the order of $1+d\mathbb{Z}$
+in $\mathbb{Z}/d\mathbb{Z}$ is $d$, so that the order of $1\otimes1$ in
+$(\mathbb{Z}/m\mathbb{Z})\otimes(\mathbb{Z}/n\mathbb{Z})$ must be at least
+$d$. Thus $1\otimes1$ is in fact
+of order $d$, and the map $\tilde{f}$ is an isomorphism between cyclic groups
+of order $d$.
+\end{example}
+
+Finally, we present an example involving the interaction of $\hom$ and the
+tensor product.
+
+\begin{example}
+Given an $R$-module $M$, let us use the notation $M^* = \hom_R(M,R)$.
+We shall define a functorial map
+\[ M^* \otimes_R N \to \hom_R(M,N), \]
+and show that it is an isomorphism when $M$ is finitely generated and free.
+
+Define $\rho':M^*\times N\rightarrow\hom_R(M,N)$ by
+$\rho'(f,n)(m)=f(m)n$ (note that $f(m)\in R$, and the multiplication $f(m)n$
is that between an element of $R$ and an element of $N$). This is bilinear:
\[\rho'(af+bg,n)(m)=(af+bg)(m)n=(af(m)+bg(m))n=af(m)n+bg(m)n=a\rho'(f,n)(m)+b\rho'(g,n)(m)\]
+
+\[\rho'(f,an_1+bn_2)(m)=f(m)(an_1+bn_2)=af(m)n_1+bf(m)n_2=a\rho'(f,n_1)(m)+b\rho'(f,n_2)(m)\]
+
+so it induces a map $\rho:M^*\otimes N \rightarrow \hom(M,N)$ with
+$\rho(f\otimes n)(m)=f(m)n$. This homomorphism is unique since the $f\otimes
+n$ generate $M^*\otimes N$. \\
+
+\noindent Suppose $M$ is free on the set $\{a_1,\ldots,a_k\}$. Then
+$M^*=\hom(M,R)$ is free on the set $\{f_i:M\rightarrow R,$ $
+f_i(r_1a_1+\cdots+r_ka_k)=r_i\}$, because there are clearly no
+relations among the $f_i$ and because any $f:M\rightarrow R$ has
$f=f(a_1)f_1+\cdots+f(a_k)f_k$. Also note that any element $\sum h_j\otimes
+p_j \in M^*\otimes N$ can be written in the form $\sum_{i=1}^k f_i\otimes
+n_i$, by setting $n_i=\sum h_j(a_i)p_j$, and \textit{that this is unique}
+because the $f_i$ are a basis for $M^*$.\\
+
+\noindent We claim that the map $\psi:\hom_R(M,N)\rightarrow M^*\otimes N$
+defined by $\psi(g)=\sum_{i=1}^k f_i\otimes g(a_i)$ is inverse to $\rho$. Given
+any $\sum_{i=1}^k f_i\otimes n_i\in M^*\otimes N$, we have
+\[\rho(\sum_{i=1}^k f_i\otimes n_i)(a_j)=\sum_{i=1}^k\rho(f_i\otimes
+n_i)(a_j)=\sum_{i=1}^kf_i(a_j)n_i=n_j\]
+ Thus, $\rho(\sum_{i=1}^k f_i\otimes n_i)(a_i)=n_i$, and thus
+ $\psi(\rho(\sum_{i=1}^k f_i\otimes n_i))=\sum_{i=1}^k f_i\otimes n_i$. Thus,
+ $\psi\circ\rho=\id_{M^*\otimes N}$.\\
+
 \noindent Conversely, recall that for $g \in \hom_R(M,N)$,
+ we defined $\psi(g)=\sum_{i=1}^k f_i\otimes g(a_i)$. Thus,
+\[\rho(\psi(g))(a_j)=\rho(\sum_{i=1}^k f_i\otimes
+g(a_i))(a_j)=\sum_{i=1}^k\rho(f_i\otimes g(a_i))(a_j)=\sum_{i=1}^k
+f_i(a_j)g(a_i)=g(a_j)\]
+and because $\rho(\psi(g))$ agrees with $g$ on the $a_i$, it is the
+same element of $\hom_R(M,N)$ because the $a_i$ generate $M$. Thus,
+$\rho\circ\psi=\id_{\hom_R(M,N)}$.\\
+
+\noindent Thus, $\rho$ is an isomorphism.
+
+
+\end{example}
+
+
+We now interpret localization as a tensor product.
+\begin{proposition} \label{locisbasechange}
+Let $R$ be a commutative ring, $S \subset R$ a multiplicative subset. Then
+there
+exists a canonical isomorphism of functors:
\[ \phi: S^{-1}M \simeq S^{-1}R \otimes_R M. \]
+\end{proposition}
+\begin{proof}
+Here is a construction of $\phi$. If $x/s \in S^{-1}M$ where $x \in M, s \in
+S$, we define
\[ \phi(x/s) = (1/s) \otimes x. \]
+Let us check that this is well-defined. Suppose $x/s = x'/s'$; then this means
+there is $t \in S$ with
+\[ xs't = x'st . \]
+
+From this we need to check that $\phi(x/s) = \phi(x'/s')$, i.e. that $1/s
+\otimes x$ and $1/s' \otimes x'$ represent the same elements in the tensor
+product. But we know from the last statement that
\[ \frac{1}{ss't} \otimes xs't = \frac{1}{ss't} \otimes x'st \in S^{-1}R \otimes M \]
+and the first is just
+\[ s't( \frac{1}{ss't} \otimes x) = \frac{1}{s} \otimes x \]
+by linearity, while the second is just
+\[ \frac{1}{s'} \otimes x' \]
+similarly. One next checks that $\phi$ is an $R$-module homomorphism, which we
+leave to the reader.
+
+Finally, we need to describe the inverse. The inverse $\psi: S^{-1}R \otimes M
+\to S^{-1}M$ is easy to construct because it's a map out of the tensor product,
+and we just need to give a bilinear map
+\[ S^{-1} R \times M \to S^{-1}M , \]
+and this sends $(r/s, m)$ to $mr/s$.
+
+It is easy to see that $\phi, \psi$ are inverses to each other by the
+definitions.
+\end{proof}
+
+It is, perhaps, worth making a small categorical comment, and offering an
+alternative argument.
+We are given two functors $F,G$ from $R$-modules to $S^{-1}R$-modules, where
+$F(M) = S^{-1}R \otimes_R M$ and $G(M) = S^{-1}M$.
By the universal property of base change, the $R$-linear map $M \to S^{-1}M$
induces a natural map
+\[ S^{-1}R \otimes_R M \to S^{-1}M, \]
+that is a natural transformation $F \to G$.
+Since it is an isomorphism for free modules, it is an isomorphism for all
+modules by a standard argument.
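As a quick illustration of the proposition:
\begin{example}
Take $R = \mathbb{Z}$ and $S = \mathbb{Z} \setminus \left\{0\right\}$, so that
$S^{-1}\mathbb{Z} = \mathbb{Q}$. The proposition identifies
$\mathbb{Q} \otimes_{\mathbb{Z}} M$ with the localization $S^{-1}M$. For
instance, $\mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Z}/n = 0$ for $n > 0$:
indeed, for $x \in \mathbb{Q}$ and $y \in \mathbb{Z}/n$,
\[ x \otimes y = n\left(\frac{x}{n}\right) \otimes y = \frac{x}{n} \otimes ny
= \frac{x}{n} \otimes 0 = 0. \]
\end{example}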
+
+
+\subsection{Tensor products of algebras}
+\label{tensprodalg}
+There is one other basic property of tensor products to discuss before moving
+on: namely, what happens when one tensors a ring with another ring. We shall
+see that this gives rise to \emph{push-outs} in the category of rings, or
+alternatively, coproducts in the category of $R$-algebras.
Let $R$ be a commutative ring and suppose $R_0, R_1$ are $R$-algebras. That is, we have ring homomorphisms
\( \phi_0: R \to R_0, \quad \phi_1: R \to R_1. \)
+
+\begin{proposition}
+$R_0 \otimes_R R_1$ has the structure of a commutative ring in a natural way.
+\end{proposition}
+
 Indeed, this
multiplication sends two typical elements $x \otimes y, x' \otimes y'$ of the
tensor product to
$xx' \otimes yy'$.
+The ring structure is determined by this formula. One ought to check that this
+approach respects the relations of the tensor product. We will do so in an
+indirect way.
+
+
+\begin{proof}
+Notice that giving a multiplication law on $R_0 \otimes_R R_1$ is equivalent to giving an $R$-bilinear map
\[ (R_0 \otimes_R R_1) \times (R_0 \otimes_R R_1) \to R_0 \otimes_R R_1,\]
+i.e. an $R$-linear map
\[ (R_0 \otimes_R R_1) \otimes_R (R_0 \otimes_R R_1) \to R_0 \otimes_R R_1\]
+which satisfies certain constraints (associativity, commutativity, etc.).
+But the left side is isomorphic to $(R_0 \otimes_R R_0) \otimes_R (R_1
+\otimes_R R_1)$. Since we have bilinear maps $R_0 \times R_0 \to R_0$ and $R_1
+\times R_1 \to R_1$, we get linear maps
+$R_0 \otimes_R R_0 \to R_0$ and $R_1 \otimes_R R_1 \to R_1$.
+Tensoring these maps gives the multiplication as a bilinear map. It is easy to
+see that these two approaches are the same.
+
+We now need to check that this operation is commutative and associative, with
+$1 \otimes 1$ as a unit; moreover, it distributes over addition. Distributivity
+over addition is built into the construction (i.e. in view of bilinearity). The
+rest (commutativity, associativity, units) can be checked directly on the
+generators, since we have distributivity.
+We shall leave the details to the reader.
+\end{proof}
+
+
+We can in fact describe the tensor product of $R$-algebras by a universal
+property. We will
+describe a commutative diagram:
+\[
+\xymatrix{
+& R \ar[rd] \ar[ld] & \\
+R_0 \ar[rd] & & R_1 \ar[ld] \\
+& R_0 \otimes_R R_1
+}
+\]
+Here $R_0 \to R_0 \otimes_R R_1$ sends $x \mapsto x \otimes 1$; similarly for $R_1
+\mapsto R_0 \otimes_R R_1$. These are ring-homomorphisms, and it is easy to
+see that
+the above
+diagram commutes, since $r \otimes 1 = 1 \otimes r = r(1 \otimes 1)$ for $r \in
+R$.
+In fact,
+\begin{proposition}
+$R_0 \otimes_R R_1$ is universal with respect to this property: in the language
+of category theory, the above diagram is a pushout square.
+\end{proposition}
+
This means that for any commutative ring $B$ and every pair of maps $u_0: R_0 \to
B$ and $u_1: R_1 \to B$ such that the compositions $R \to R_0 \to B$ and $R \to
R_1 \to B$ agree, there is a unique map of rings
+\[ R_0 \otimes_R R_1 \to B \]
+which restricts on $R_0, R_1$ to the morphisms $u_0, u_1$ that we started with.
\begin{proof} If $B$ is a ring as in the previous paragraph, we make $B$ into an $R$-module by the map $R \to R_0 \to B$ (or
$R \to R_1 \to B$; it is the same by assumption).
We define the map $R_0 \otimes_R R_1 \to B$ by sending
+\[ x \otimes y \to u_0(x) u_1(y). \]
+It is easy to check that $(x,y) \to u_0(x)u_1(y)$ is $R$-bilinear (because of
+the condition that the two pull-backs of $u_0, u_1$ to $R$ are the same), and
+that it gives a homomorphism of rings $R_0 \otimes_R R_1 \to B$ which
+restricts to $u_0, u_1$ on $R_0,
+R_1$. One can check, for instance, that this is a homomorphism of rings by
+looking at the generators.
+
+It is also clear that $R_0 \otimes_R R_1 \to B$ is unique, because we know
+that the
+map on elements of the form $x \otimes 1$ and $1 \otimes y$ is determined by
+$u_0, u_1$; these generate $R_0 \otimes_R R_1$, though.
+\end{proof}
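Here is a classical computation with tensor products of algebras, using the
presentation of $\mathbb{C}$ as a quotient of a polynomial ring:
\begin{example}
Consider $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C}$. Writing $\mathbb{C}
\simeq \mathbb{R}[x]/(x^2+1)$ and using the fact that base change commutes
with quotients, we get
\[ \mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \simeq \mathbb{C}[x]/(x^2+1)
\simeq \mathbb{C}[x]/(x-i) \times \mathbb{C}[x]/(x+i) \simeq \mathbb{C} \times
\mathbb{C}, \]
where the middle isomorphism is the Chinese remainder theorem. In particular,
the tensor product of two fields need not be a field.
\end{example}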
+
In fact, we now claim that the category of rings has \emph{all} coproducts. We
have seen that the coproduct of any two rings exists (as the tensor product over
$\mathbb{Z}$). It turns out that arbitrary coproducts exist. More generally,
+if $\left\{R_\alpha\right\}$ is a family of $R$-algebras, then one can define
+an object
+\[ \bigotimes_\alpha R_\alpha, \]
+which is a coproduct of the $R_\alpha$ in the category of $R$-algebras. To do
+this, we simply take the generators as before, as formal objects
+\[ \bigotimes r_\alpha, \quad r_\alpha \in R_\alpha, \]
+except that all but finitely many of the $r_\alpha$ are required to be the
+identity. One quotients by the usual relations.
+
+Alternatively, one may use the fact that filtered colimits exist, and
+construct the infinite coproduct as a colimit of finite coproducts (which are
+just ordinary tensor products).
+
+\section{Exactness properties of the tensor product}
+
+In general, the tensor product is not exact; it is only exact on the right,
+but it can fail to preserve injections. Yet in some important cases it
+\emph{is}
+exact. We study that in the present section.
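A standard example of the failure of exactness on the left: tensoring the
injection $\mathbb{Z} \stackrel{2}{\to} \mathbb{Z}$ with $\mathbb{Z}/2$ over
$\mathbb{Z}$ yields the map $\mathbb{Z}/2 \to \mathbb{Z}/2$ given by
multiplication by $2$, which is identically zero and hence not injective.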
+
+\subsection{Right-exactness of the tensor product}
+
We will start by discussing the extent to which tensor products do preserve
exactness in general.
+First, let's recall what is going on. If $M,N$ are $R$-modules over the
+commutative ring $R$, we have defined another $R$-module $\hom_R(M,N)$
+of morphisms
+$M \to N$. This is left-exact as a functor of $N$. In other words, if we fix
+$M$ and let $N$ vary, then the construction of homming out of $M$ preserves
+kernels.
+
+In the language of category theory, this construction $N \to \hom_R(M,N)$ has
+an adjoint. The other construction we discussed last time was this adjoint,
+and it is the tensor
+product. Namely, given $M,N$ we defined a \textbf{tensor product} $M \otimes_R
+N$ such that giving a map $M \otimes_R N \to P$ into some $R$-module $P$
+is the same as giving a
+bilinear map $\lambda: M \times N \to P$, which in turn is the same as giving
+an $R$-linear map
+\[ M \to \hom_R(N, P). \]
+So we have a functorial isomorphism
+\[ \hom_R(M \otimes_R N, P) \simeq \hom_R(M, \hom_R(N,P)). \]
+Alternatively, tensoring is the left-adjoint to the
+hom functor. By abstract nonsense, it follows that since $\hom(M, \cdot)$
+preserves cokernels, the left-adjoint preserves cokernels and is right-exact.
+We shall see this directly.
+
+\begin{proposition}
+The functor $N \to M \otimes_R N$ is right-exact, i.e. preserves cokernels.
+\end{proposition}
+In fact, the tensor product is symmetric, so it's right exact in either
+variable.
+
+\begin{proof}
+We have to show that if $N' \to N \to N'' \to 0$ is exact, then so is
+\[ M \otimes_R N' \to M \otimes_R N \to M \otimes_R N'' \to 0. \]
+There are a lot of different ways to think about this. For instance, we can
+look at the direct construction. The tensor product is a certain quotient of a
+free module.
+
+$M \otimes_R N''$ is the quotient of the free module generated by $m \otimes
+n'', m \in M, n'' \in N''$ modulo the usual relations. The map $M \otimes N \to
+M \otimes N''$ sends $m \otimes n \to m \otimes n''$ if $n'' $ is the image of
+$n$ in $N''$. Since each $n''$ can be lifted to some $n$, it is obvious that
+the map $M \otimes_R N \to M \otimes_R N''$ is surjective.
+
+Now we know that $M \otimes_R N''$ is a quotient of $M \otimes_R N$. But which
+relations do we have to impose on $M \otimes_R N$ to get $M \otimes_R N''$?
+Each relation in $M \otimes_R N''$ can be lifted to a relation in $M \otimes_R
+N$, so the only new identifications come from the non-uniqueness of the
+lifting: if $y, y'$ have the same image in $N''$, then $x \otimes y$ and $x
+\otimes y'$ have the same image in $M \otimes_R N''$. In other words, we have
+to quotient out by the elements
+\[ x \otimes y - x\otimes y', \quad y - y' \in N', \]
+so that if we kill off $x \otimes n'$ for $n' \in N' \subset N$, then we get $M
+\otimes N''$. This is a direct proof.
+
+One can also give a conceptual proof. We would like to know that $M \otimes_R
+N''$ is the cokernel of $M \otimes_R N' \to M \otimes_R N$. In other words,
+we would like to know that if we map $M \otimes_R N$ into some $P$ and the
+pull-back to $M \otimes_R N'$ vanishes, then the map factors uniquely through
+$M \otimes_R N''$.
+Namely, we need to show that
+\[ \hom_R(M \otimes N'', P) = \ker(\hom_R(M \otimes N, P) \to \hom_R(M
+\otimes N', P)). \]
+But the first is just $\hom_R(N'', \hom_R(M,P))$ by the adjointness property.
+Similarly, the second is just
+\[ \ker( \hom_R(N, \hom_R(M,P)) \to \hom_R(N', \hom_R(M,P))), \]
+and this kernel is $\hom_R(N'', \hom_R(M,P))$ precisely by the statement
+that $N'' = \mathrm{coker}(N' \to N)$.
+To give a map from $N''$ into some module (e.g. $\hom_R(M,P)$) is the same thing as
+giving a map out of $N$ which kills $N'$.
+So we get the functorial isomorphism.
+\end{proof}
+
+\begin{remark}
+Formation of tensor products is, in general, \textbf{not} exact.
+\end{remark}
+
+\begin{example} \label{tensorbad}
+Let $R = \mathbb{Z}$. Let $M = \mathbb{Z}/2\mathbb{Z}$. Consider the exact
+sequence
+\[ 0 \to \mathbb{Z} \to \mathbb{Q} \to \mathbb{Q}/\mathbb{Z} \to 0 \]
+which we can tensor with $M$, yielding
+\[ 0 \to \mathbb{Z}/2\mathbb{Z} \to \mathbb{Q} \otimes
+\mathbb{Z}/2\mathbb{Z} \to \mathbb{Q}/\mathbb{Z} \otimes
+\mathbb{Z}/2\mathbb{Z} \to 0 \]
+I claim that the second thing $\mathbb{Q} \otimes \mathbb{Z}/2\mathbb{Z}$
+is zero. This is because by tensoring with
+$\mathbb{Z}/2\mathbb{Z}$, we've made multiplication by 2 identically zero. By
+tensoring with $\mathbb{Q}$, we've made multiplication by 2 invertible. The
+only way to reconcile this is to have the second term zero. In particular, the
+sequence becomes
+\[ 0 \to \mathbb{Z}/2\mathbb{Z} \to 0 \to 0 \to 0 \]
+which is not exact.
+\end{example}
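The vanishing of $\mathbb{Q} \otimes \mathbb{Z}/2\mathbb{Z}$ can also be checked on elementary tensors, using divisibility in $\mathbb{Q}$:

```latex
x \otimes y \;=\; \left( 2 \cdot \tfrac{x}{2} \right) \otimes y
           \;=\; \tfrac{x}{2} \otimes 2y
           \;=\; \tfrac{x}{2} \otimes 0 \;=\; 0.
```

Since the elementary tensors generate, the whole module vanishes.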
+
+\begin{exercise}
+Let $R$ be a ring, $I, J \subset R$ ideals. Show that $R/I \otimes_R R/J
+\simeq R/(I+J)$.
+\end{exercise}
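For $R = \mathbb{Z}$, $I = (a)$, $J = (b)$, the exercise specializes to $\mathbb{Z}/a \otimes_{\mathbb{Z}} \mathbb{Z}/b \simeq \mathbb{Z}/\gcd(a,b)$, since $(a) + (b) = (\gcd(a,b))$. A small sketch (function names are ours) that exhibits the B\'ezout witness for this identity of ideals:

```python
from math import gcd

def tensor_cyclic_order(a: int, b: int) -> int:
    """Order of Z/a (x)_Z Z/b predicted by the exercise:
    Z/a (x) Z/b = Z/((a) + (b)) = Z/gcd(a, b)."""
    return gcd(a, b)

def bezout(a: int, b: int):
    """Extended Euclid: (g, s, t) with s*a + t*b == g == gcd(a, b),
    witnessing the ideal identity (a) + (b) = (g)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = bezout(b, a % b)
    return (g, t, s - (a // b) * t)

# Z/4 (x) Z/6 = Z/2, witnessed by (-1)*4 + 1*6 == 2.
g, s, t = bezout(4, 6)
```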
+
+\subsection{A characterization of right-exact functors}
+
+Let us consider additive functors on the category of $R$-modules. So far,
+we know a very easy way of getting such functors: given an $R$-module $N$, we
+have a functor
+\[ T_N: M \to M \otimes_R N. \]
+In other words, we have a way of generating a functor on the category of
+$R$-modules for each $R$-module. These functors are all right-exact, as we
+have seen.
+Now we will prove a converse.
+
+\begin{proposition}
+Let $F$ be a right-exact functor on the category of $R$-modules that commutes
+with direct sums. Then $F$ is isomorphic to some $T_N$.
+\end{proposition}
+\begin{proof}
+The idea is that $N$ will be $F(R)$.
+
+Without the right-exactness hypothesis, we shall construct a natural morphism
+\[ F(R) \otimes M \to F(M) \]
+as follows. Given $m \in M$, there is a natural map $R \to M$ sending $1 \to
+m$. This identifies $M = \hom_R(R, M)$. But functoriality gives a map $F(R)
+\times \hom_R(R, M) \to F(M)$, which is clearly $R$-bilinear; the universal
+property of the tensor product now produces the desired transformation
+$T_{F(R)} \to F$.
+
+It is clear that $T_{F(R)}(M) \to F(M)$ is an isomorphism for $M = R$, and
+thus for $M$ free, as both $T_{F(R)}$ and $F$ commute with direct sums. Now
+let $M$ be any $R$-module. There is a ``free presentation,'' that is an exact
+sequence
+\[ R^I \to R^J \to M \to 0 \]
+for some sets $I,J$; we get a commutative, exact diagram
+\[ \xymatrix{
+T_{F(R)}(R^I)\ar[d] \ar[r] & T_{F(R)} (R^J) \ar[d] \ar[r] & T_{F(R)} (M) \ar[d] \ar[r] & 0 \\
+F(R^I) \ar[r] & F(R^J) \ar[r] & F( M )\ar[r] & 0
+}\]
+where the leftmost two vertical arrows are isomorphisms. A diagram chase now
+shows that $T_{F(R)}(M) \to F(M)$ is an isomorphism. In particular, $F \simeq
+T_{F(R)}$ as functors.
+\end{proof}
+
+Without the hypothesis that $F$ commutes with arbitrary direct sums, we could only draw
+the same conclusion on the category of \emph{finitely presented} modules; the
+same proof as above goes through, though $I$ and $J$ are required to be
+finite.\footnote{Recall that an additive functor commutes with finite direct
+sums.}
+\begin{proposition}
+Let $F$ be a right-exact functor on the category of finitely presented $R$-modules that commutes
+with direct sums. Then $F$ is isomorphic to some $T_N$.
+\end{proposition}
+
+From this we can easily see that localization at a multiplicative subset $S
+\subset R$ is given by tensoring with $S^{-1}R$. Indeed, localization is a
+right-exact functor on the category of $R$-modules, so it is given by
+tensoring with some module $M$; applying to $R$ shows that $M=S^{-1}R$.
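As a concrete instance (our own toy computation): localizing $\mathbb{Z}/12$ at $S = \{1, 2, 4, \dots\}$ amounts to tensoring with $\mathbb{Z}[1/2]$, which kills the $2$-primary part and leaves $\mathbb{Z}/3$:

```python
def localize_cyclic_at_two(m: int) -> int:
    """Order of S^{-1}(Z/m) for S the powers of 2: inverting 2 kills the
    2-primary part of Z/m, leaving Z/(odd part of m)."""
    while m % 2 == 0:
        m //= 2
    return m

order = localize_cyclic_at_two(12)
# 2 is now a unit: it has an inverse modulo the odd part of 12
inverse_of_two = next(x for x in range(order) if (2 * x) % order == 1)
```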
+
+\subsection{Flatness}
+In some cases, though, the tensor product is exact.
+
+\begin{definition} \label{flatdefn}
+Let $R$ be a commutative ring. An $R$-module $M$ is called \textbf{flat} if the
+functor $N \to M \otimes_R N$ is exact. An $R$-algebra is \textbf{flat} if it is flat as an
+$R$-module.
+\end{definition}
+
+We already know that tensoring with anything is right exact,
+so the only thing to be checked for flatness of $M$ is that the operation of tensoring by $M$
+preserves injections.
+
+\begin{example}
+$\mathbb{Z}/2\mathbb{Z}$ is not flat as a $\mathbb{Z}$-module by
+\rref{tensorbad}.
+
+\end{example}
+
+\begin{example} \label{projmoduleisflat}
+If $R$ is a ring, then $R$ is flat as an $R$-module, because tensoring by $R$
+is the identity functor.
+
+More generally, if $P$ is a projective module (i.e., homming out of $P$
+is exact), then $P$ is flat.
+\end{example}
+\begin{proof}
+If $P = \bigoplus_A R$ is free, then tensoring with $P$ corresponds to taking
+the direct sum $|A|$ times, i.e.
+\[ P \otimes_R M = \bigoplus_A M. \]
+This is because the tensor product commutes with (finite or infinite) direct sums.
+ The functor $M \to \bigoplus_A M$ is exact, so free
+modules are flat.
+
+A projective module, as discussed earlier, is a direct summand of a free
+module. So if $P$ is projective, $P \oplus P' \simeq \bigoplus_A R$ for some
+$P'$. Then we have that
+\[ (P \otimes_R M) \oplus (P' \otimes_R M) \simeq \bigoplus_A M. \]
+If we have an injection $M \to M'$, then the direct sum decomposition
+yields a commutative diagram of maps
+\[ \xymatrix{
+P \otimes_R M \ar[d] \ar[r] & \bigoplus_A M \ar[d] \\
+P \otimes_R M' \ar[r] & \bigoplus_A M'
+}.\]
+A diagram chase now shows that the left vertical map is injective. Namely, the
+composition $P \otimes_R M \to \bigoplus_A M \to \bigoplus_A M'$ is injective, so the
+left vertical map has to be injective too.
+\end{proof}
+
+\begin{example}
+If $S \subset R$ is a multiplicative subset, then $S^{-1}R $ is a flat $R$-module, because localization is an
+exact functor.
+\end{example}
+
+Let us make a few other comments.
+
+\begin{remark}
+Let $\phi: R \to R'$ be a homomorphism of rings. Then, first of all, any
+$R'$-module can be regarded as an $R$-module by composition with $\phi$. In
+particular, $R'$ is an $R$-module.
+
+If $M$ is an $R$-module, we can define
+\[ M \otimes_R R' \]
+as an $R$-module. But in fact this tensor product is an $R'$-module; it has
+an action of $R'$. If $x \in M$ and $a \in R'$ and $b \in R'$, multiplication
+of $(x \otimes a) \in M \otimes_R R'$ by $b \in R'$ sends this, \emph{by
+definition}, to
+\[ b(x \otimes a) = x \otimes ab. \]
+It is easy to check that this defines an action of $R'$ on $M \otimes_R R'$.
+(One has to check that this action factors through the appropriate relations,
+etc.)
+
+\end{remark}
+
+The following fact shows that the hom-sets behave nicely with respect to flat
+base change.
+\begin{proposition}
+Let $M$ be a finitely presented $R$-module, $N$ an $R$-module. Let $S$ be a
+flat $R$-algebra. Then the natural map
+\[ \hom_R(M,N) \otimes_R S \to \hom_S( M \otimes_R S, N \otimes_R S) \]
+is an isomorphism.
+\end{proposition}
+\begin{proof}
+Indeed, it is clear that there is a natural map
+\[ \hom_R(M, N) \to \hom_S(M \otimes_R S, N \otimes_R S) \]
+of $R$-modules. The latter is an $S$-module, so the universal property gives
+the map $\hom_R(M, N) \otimes_R S \to \hom_S(M \otimes_R S, N \otimes_R S)$ as
+claimed.
+If $N$ is fixed, then we have two contravariant functors
+in $M$,
+\[ T_1(M) = \hom_R(M, N) \otimes_R S, \quad T_2(M) = \hom_S(M \otimes_R S, N
+\otimes_R S). \]
+We also have a natural transformation $T_1(M) \to T_2(M)$.
+It is clear that if $M$ is \emph{finitely generated} and \emph{free}, then the
+natural transformation is an isomorphism (for example, if $M = R$, then we just
+have the map $N \otimes_R S \to N \otimes_R S$).
+
+Note moreover that both functors are left-exact: that is, given an exact
+sequence
+\[ M' \to M \to M'' \to 0, \]
+there are induced exact sequences
+\[ 0 \to T_1(M'') \to T_1(M) \to T_1(M'), \quad 0 \to T_2(M'') \to T_2(M) \to
+T_2(M') .\]
+Here we are using the fact that $\hom$ is always a left-exact functor and the
+fact that tensoring with $S$ preserves exactness. (Thus it is here that we use
+flatness.)
+
+Now the following lemma will complete the proof:
+\begin{lemma}
+Let $T_1, T_2$ be contravariant, left-exact additive functors from the category of
+$R$-modules to the category of abelian groups. Suppose a natural transformation
+$t: T_1(M) \to T_2(M)$ is given, and suppose this is an isomorphism whenever
+$M$ is finitely generated and free. Then it is an isomorphism for any finitely
+presented module $M$.
+\end{lemma}
+\begin{proof}
+This lemma is a diagram chase. Fix a finitely presented $M$, and choose a
+presentation
+\[ F' \to F \to M \to 0, \]
+with $F', F$ finitely generated and free.
+Then we have an exact and commutative diagram
+\[ \xymatrix{
+0 \ar[r] & T_1(M) \ar[d]^{} \ar[r] & T_1(F) \ar[d]^{\simeq} \ar[r] &
+T_1(F') \ar[d]^{\simeq} \\
+0 \ar[r] & T_2(M) \ar[r] & T_2(F) \ar[r] & T_2(F') .
+}\]
+By hypotheses, the two vertical arrows to the right are isomorphisms, as
+indicated. A diagram chase now shows that the remaining arrow is an
+isomorphism, which is what we wanted to prove.
+\end{proof}
+\end{proof}
+
+\begin{example}
+Let us now consider finitely generated flat modules over a principal ideal
+domain $R$. By \rref{structurePID}, we know that any such $M$ is isomorphic to a
+direct sum $\bigoplus R/a_i$ for some $a_i \in R$. But if any of the $a_i$ is
+not zero, then that $a_i$ would be a nonzero zerodivisor on $M$. However, we
+know no element of $R - \left\{0\right\}$ can be a zerodivisor on $M$. It
+follows that all the $a_i = 0$. In particular, we have proved:
+
+\begin{proposition}
+A finitely generated module over a PID is flat if and only if it is free.
+\end{proposition}
+\end{example}
+
+\subsection{Finitely presented flat modules}
+In \cref{projmoduleisflat}, we saw that a projective module over any ring $R$
+was automatically flat. In general, the converse is false. For instance,
+$\mathbb{Q}$ is a flat $\mathbb{Z}$-module (as tensoring by $\mathbb{Q}$ is a
+form of localization). However, because $\mathbb{Q}$ is divisible (namely,
+multiplication by $n$ is surjective for any $n$), $\mathbb{Q}$ cannot be a free
+abelian group.
+
+Nonetheless:
+
+\begin{theorem} \label{fpflatmeansprojective}
+A finitely presented flat module over a ring $R$ is projective.
+\end{theorem}
+\begin{proof}
+We follow \cite{We95}.
+
+Let us define the following contravariant functor from $R$-modules to $R$-modules.
+Given $M$, we send it to $M^* = \hom_\mathbb{Z}(M, \mathbb{Q}/\mathbb{Z})$.
+This is made into an $R$-module in the following manner: given $\phi: M \to
+\mathbb{Q}/\mathbb{Z}$ (which is just a homomorphism of abelian groups!) and $r
+\in R$, we send this to $r\phi$ defined by $(r\phi)(m) = \phi(rm)$.
+Since $\mathbb{Q}/\mathbb{Z}$ is an injective abelian group, we see that $M
+\mapsto M^*$ is an \emph{exact} contravariant functor from $R$-modules to
+$R$-modules.
+In fact, we note that if $0 \to A \to B \to C \to 0$ is exact, then $0 \to C^* \to B^* \to A^* \to 0$ is exact.
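To make the dualizing functor concrete (a toy model of ours, not part of the proof): a homomorphism $\mathbb{Z}/a \to \mathbb{Q}/\mathbb{Z}$ is determined by the image $x$ of $1$, subject to $ax = 0$ in $\mathbb{Q}/\mathbb{Z}$, so $(\mathbb{Z}/a)^*$ has exactly $a$ elements:

```python
from fractions import Fraction

def dual_of_cyclic(a: int):
    """(Z/a)* = Hom_Z(Z/a, Q/Z): the image x of 1 must satisfy a*x = 0
    in Q/Z, i.e. x = k/a mod 1 for some 0 <= k < a."""
    homs = []
    for k in range(a):
        x = Fraction(k, a) % 1
        assert (a * x) % 1 == 0      # the map is well-defined on Z/a
        homs.append(x)
    return homs

# (Z/6)* has 6 elements; in particular (Z/a)* != 0 for a > 1.
dual = dual_of_cyclic(6)
```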
+
+Let $F$ be any $R$-module. There is a natural homomorphism
+\begin{equation} \label{twoduals} M^* \otimes_R F \to \hom_R(F, M)^*.
+\end{equation}
+This is defined as follows. Given $\phi: M \to \mathbb{Q}/\mathbb{Z}$ and $x \in
+F$, we define a new map $\hom(F, M) \to \mathbb{Q}/\mathbb{Z}$ by sending a
+homomorphism $\psi: F \to M$ to $\phi(\psi(x))$.
+In other words, we have a natural map
+\[ \hom_{\mathbb{Z}}(M, \mathbb{Q}/\mathbb{Z} ) \otimes_R F \to
+\hom_{\mathbb{Z}}( \hom_R(F, M), \mathbb{Q}/\mathbb{Z}). \]
+
+Now fix $M$.
+This map \eqref{twoduals} is an isomorphism if $F$ is \emph{finitely
+generated} and free.
+ Both sides are right-exact functors of $F$ (because dualizing is exact and
+contravariant).
+The ``finite presentation trick'' now shows that the map is an isomorphism if
+$F$ is finitely presented: choose a presentation $R^m \to R^n \to F \to 0$ and
+apply both functors. We obtain a commutative diagram with exact rows in which
+the maps at $R^m$ and $R^n$ are isomorphisms, so a diagram chase shows that
+the map at $F$ is an isomorphism as well.
+
+Fix now $F$ finitely presented and flat, and consider the above two quantities
+in \eqref{twoduals} as functors in $M$.
+Then the first functor is exact, so the second one is too.
+In particular, $\hom_R(F, M)^*$ is an exact functor in $M$; in particular, if
+$M \twoheadrightarrow M''$ is a surjection, then
+\[ \hom_R(F, M'')^* \to \hom_R(F, M)^* \]
+is an injection. But this implies that
+\[ \hom_R(F, M) \to \hom_R(F, M'') \]
+is a \emph{surjection,} i.e. that $F$ is projective.
+Indeed:
+\begin{lemma} $ A \to B \to C $ is exact if and only if $C^* \to B^* \to A^* $ is exact.
+\end{lemma}
+\begin{proof}
+Indeed, one direction was already clear (from $\mathbb{Q}/\mathbb{Z}$ being an
+injective abelian group).
+Conversely, we note that $M = 0$ if and only if $M^* = 0$ (again by
+injectivity and the fact that $(\mathbb{Z}/a)^* \neq 0$ for any $a > 1$).
+Thus dualizing reflects isomorphisms: if a map becomes an isomorphism after
+dualized, then it was an isomorphism already. From here it is easy to deduce
+the result (by applying the above fact to the kernel and image).
+\end{proof}
+\end{proof}
diff --git a/books/cring/various.tex b/books/cring/various.tex
new file mode 100644
index 0000000000000000000000000000000000000000..409df4950e2a237c450afc45e2cf6914941e74f2
--- /dev/null
+++ b/books/cring/various.tex
@@ -0,0 +1,849 @@
+\chapter{Various topics}
+
+This chapter is currently a repository for various topics that may or may not
+reach a status worthy of their own chapters in the future, but in any event
+should be included.
+
+\section{Linear algebra over rings}
+
+\subsection{The determinant trick}
+We want to understand what $IN=N$ means.
+
+ Let $I\subset R$ be an ideal and let ${}_R M$ be a finitely generated module. Let
+ $E=\End_R (M)$, which is not commutative in general. We may view $M$ as an $E$-module
+ ${}_E M$. Since every element of $R$ acts by an endomorphism commuting with all of $E$,
+ $E$ is an $R$-algebra (i.e.\ there is a homomorphism $R\to E$ sending $R$ into the center of $E$).
+ \begin{lemma}[Determinant Trick]
+ \begin{enumerate}\item[]
+ \item Every $\phi\in E$ such that $\phi(M)\subset IM$ satisfies a monic equation
+ of the form $\phi^n+a_1\phi^{n-1} +\cdots + a_n=0$, where each $a_i\in I$, i.e.\
+ $\phi$ is ``integral over $I$''.
+
+ \item $IM=M$ if and only if $(1-a)M=0$ for some $a\in I$.
+ \end{enumerate}
+ \end{lemma}
+ \begin{proof}
+ (1) Fix a finite set of generators, $M=Rm_1+\cdots + Rm_n$. Then we have
+ $\phi(m_i)=\sum_j a_{ij} m_j$, with $a_{ij}\in I$ by assumption. Let $A=(a_{ij})$.
+ Then these equations tell us that $(I\phi-A)\vec{m}=0$, where $I$ here denotes the
+ identity matrix. Multiplying by the adjugate (classical adjoint) of
+ the matrix $I\phi-A$, we get that $\det(I\phi-A)m_i=0$ for each $i$. It follows that
+ $\det(I\phi-A)=0\in E$. But $\det(I\phi-A)=\phi^n+a_1\phi^{n-1}+\cdots +a_n$ for some
+ $a_i\in I$.
+
+ (2) The ``if'' part is clear. The ``only if'' part follows from (1), applied to
+ $\phi=\id_M$.
+ \end{proof}
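A numerical sanity check (our own, for $R=\mathbb{Z}$, $M=\mathbb{Z}^3$, $I=(2)$): if $\phi$ is given by a matrix $A$ with entries in $I$, the characteristic polynomial is monic with lower coefficients in $I$, and Cayley--Hamilton gives the monic equation of part (1):

```python
# phi: Z^3 -> Z^3 with phi(M) in I*M for I = (2): all entries of A are even.
A = [[2, 4, 0], [0, 2, 6], [8, 0, 4]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# characteristic polynomial t^3 + a1 t^2 + a2 t + a3 of a 3x3 matrix
a1 = -(A[0][0] + A[1][1] + A[2][2])
a2 = (A[0][0] * A[1][1] - A[0][1] * A[1][0]
      + A[0][0] * A[2][2] - A[0][2] * A[2][0]
      + A[1][1] * A[2][2] - A[1][2] * A[2][1])
a3 = -det3(A)

# part (1): each a_i lies in I = (2) (in fact a_i lies in I^i)
assert all(c % 2 == 0 for c in (a1, a2, a3))

# Cayley-Hamilton: A^3 + a1*A^2 + a2*A + a3*Id = 0, the monic equation for phi
A2 = matmul(A, A)
A3 = matmul(A2, A)
Id = [[int(i == j) for j in range(3)] for i in range(3)]
pA = [[A3[i][j] + a1 * A2[i][j] + a2 * A[i][j] + a3 * Id[i][j]
       for j in range(3)] for i in range(3)]
assert all(entry == 0 for row in pA for entry in row)
```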
+ \begin{remark}
+ Determinant trick (part 2) actually includes Nakayama's Lemma, because if $I$ is in
+ $\rad R$, $(1-a)$ is a unit, so $M=(1-a)M=0$.
+ \end{remark}
+ \begin{corollary}
+ For a finitely generated ideal $I\subset R$, $I=I^2$ if and only if $I=eR$ for some
+ $e=e^2$.
+ \end{corollary}
+ \begin{proof}
+ ($\Leftarrow$) clear.
+
+ ($\Rightarrow$) Apply determinant trick (part 2) to the case $M={}_R I$. We get
+ $(1-e)I=0$ for some $e\in I$, so $(1-e)a=0$ for each $a\in I$, so $a=ea$, so $I$ is
+ generated by $e$. Letting $a=e$, we see that $e$ is idempotent.
+ \end{proof}
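A toy verification of the corollary in $R = \mathbb{Z}/6$ (which is $\mathbb{Z}/2 \times \mathbb{Z}/3$, so every ideal satisfies $I = I^2$): for each principal ideal we search for the promised idempotent generator. The helper names are ours.

```python
n = 6  # R = Z/6

def ideal(g):
    """The principal ideal (g) in Z/n, as a set of elements."""
    return {(g * r) % n for r in range(n)}

idempotent_generator = {}
for g in range(n):
    I = ideal(g)
    # I = I^2: for principal ideals, (g)^2 = (g^2)
    assert I == ideal((g * g) % n)
    # the corollary promises I = (e) with e^2 = e; find such an e
    e = next(e for e in sorted(I) if (e * e) % n == e and ideal(e) == I)
    idempotent_generator[g] = e
```

For instance, $(2) = (4)$ with $4^2 \equiv 4$, and $(3)$ is generated by the idempotent $3$ itself.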
+ \begin{corollary}[Vasconcelos-Strooker Theorem]
+ Let $M$ be a finitely generated module over \emph{any} commutative ring $R$. If $\phi\in
+ \End_R(M)$ is onto, then it is injective.
+ \end{corollary}
+ \begin{proof}
+ We can view $M$ as a module over $R[t]$, where $t$ acts by $\phi$. Apply the
+ determinant trick (part 2) to $I=t\cdot R[t]\subset R[t]$. We have that $IM=M$
+ because $\phi$ is surjective: any $m\in M$ can be written as $m =\phi(m_0)=t\cdot m_0\in IM$. It follows that
+ there is some $th(t)$ such that $(1-th(t))M=0$. In particular, if $m\in \ker \phi$,
+ we have that $0=(1-h(t)t)m=m-h(t)\phi(m)=m$, so $\phi$ is injective.
+ \end{proof}
+
+\subsection{Determinantal ideals}
+\begin{definition}
+ An ideal $I\subset R$ is called \emph{dense}\index{dense ideal} if $rI=0$ implies $r=0$.
+ This is denoted $I\subset_d R$. This is the same as saying that ${}_RI$ is a
+ faithful module over $R$.
+ \end{definition}
+ If $I$ is a principal ideal, say $Rb$, then $I$ is dense exactly when $b\in \mathcal{C}(R)$, the set of regular elements (non-zero-divisors) of $R$. The
+ easiest case is when $R$ is a domain, in which case an ideal is dense exactly when it is
+ non-zero.
+
+ If $R$ is an integral domain, then by working over the quotient field, one can define
+ the rank of a matrix with entries in $R$. But if $R$ is not a domain, rank becomes
+ tricky. Let $\mathcal{D}_i(A)$ be the $i$-th \emph{determinantal ideal} in $R$, generated by all
+ the determinants of $i\times i$ minors of $A$. We define $\mathcal{D}_0(A)=R$. If $i\ge
+ \min\{n,m\}$, define $\mathcal{D}_i(A)=(0)$.
+
+ Note that $\mathcal{D}_{i+1}(A)\supset \mathcal{D}_i(A)$ because you can expand by minors, so we have a
+ chain
+ \[
+ R=\mathcal{D}_0(A)\supset \mathcal{D}_1(A)\supset \cdots \supset (0).
+ \]
+ \begin{definition}
+ Over a non-zero ring $R$, the \emph{McCoy rank} (or just \emph{rank}) of $A$ is defined to be
+ the maximum $i$ such that $\mathcal{D}_i(A)$ is dense in $R$. The rank of $A$ is denoted
+ $rk(A)$.
+ \end{definition}
+ If $R$ is an integral domain, then $rk(A)$ is just the usual rank. Note that over any
+ ring, $rk(A)\le \min\{n,m\}$.
+
+ If $rk(A)=0$, then $\mathcal{D}_1(A)$ fails to be dense, so there is some non-zero element $r$
+ such that $rA=0$. That is, $r$ zero-divides all of the entries of $A$.
+
+ If $A\in \mathbb{M}_{n,n}(R)$, then $A$ has rank $n$ (full rank) if and only if $\det A$ is a
+ regular element.
+
+ \begin{exercise}
+ Let $R=\mathbb{Z}/6\mathbb{Z} $, and let $A=diag(0,2,4)$, $diag(1,2,4)$, $diag(1,2,3)$, $diag(1,5,5)$
+ ($3\times 3$ matrices). Compute the rank of $A$ in each case.
+ \end{exercise}
+ \begin{solution}\raisebox{-2\baselineskip}{
+ $\begin{array}{c|cccc}
+ A & \mathcal{D}_1(A) & \mathcal{D}_2(A) & \mathcal{D}_3(A) & \\ \hline
+ diag(0,2,4) & (2) & (2) & (0) & 3\cdot (2)=0\text{, so }rk=0 \\
+ diag(1,2,4) & R & (2) & (2) & 3\cdot (2)=0\text{, so }rk=1 \\
+ diag(1,2,3) & R & R & (2) & 3\cdot (2)=0\text{, so }rk=2 \\
+ diag(1,5,5) & R & R & R & \text{so }rk=3
+ \end{array}$}
+ \end{solution}
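The table can be checked mechanically. A brute-force sketch (our own helper names) that computes determinantal ideals and the McCoy rank over $\mathbb{Z}/6$:

```python
from itertools import combinations

n = 6  # work over R = Z/6

def det(M):
    """Determinant over Z/n by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0] % n
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M))) % n

def determinantal_ideal(A, i):
    """Generators of D_i(A): the determinants of all i x i minors."""
    if i == 0:
        return {1}
    return {det([[A[r][c] for c in cs] for r in rs])
            for rs in combinations(range(len(A)), i)
            for cs in combinations(range(len(A[0])), i)}

def is_dense(gens):
    """An ideal of Z/n is dense iff no non-zero r annihilates all generators."""
    return all(any(r * g % n for g in gens) for r in range(1, n))

def mccoy_rank(A):
    r = 0
    while r < min(len(A), len(A[0])) and is_dense(determinantal_ideal(A, r + 1)):
        r += 1
    return r

def diag(*d):
    return [[d[i] * int(i == j) for j in range(len(d))] for i in range(len(d))]
```

For the four matrices in the exercise this reproduces the ranks $0, 1, 2, 3$.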
+ \subsection{McCoy's theorem}
+
+ Let $A\in \mathbb{M}_{n,m}(R)$. If $R$ is a field, the rank of $A$ is the dimension of the
+ image of $A:R^m\to R^n$, and $m-rk(A)$ is the dimension of the null space. That
+ is, whenever $rk(A)< m$, there is a solution to the system of linear equations
+ \begin{equation}
+ 0 = A\cdot x \label{lec02ast}
+ \end{equation}
+ which says that the columns $\alpha_i\in R^n$ of $A$ satisfy the dependence $\sum
+ x_i\alpha_i=0$. The following theorem of McCoy generalizes this so that $R$ can be any
+ non-zero commutative ring.
+ \begin{theorem}[McCoy]\label{lec02T:McCoy3}
+ If $R$ is not the zero ring, the following are equivalent:
+ \begin{enumerate}
+ \item The columns $\alpha_1$, \dots, $\alpha_m$ are linearly dependent.
+ \item Equation \ref{lec02ast} has a nontrivial solution.
+ \item $rk(A)<m$.
+ \end{enumerate}
+ \end{theorem}
+ The proof is given below. First, we record some consequences.
+ \begin{corollary}
+ Let $R\ne 0$. Then:
+ \begin{enumerate}
+ \item[(a)] If $m>n$, then for any $A\in \mathbb{M}_{n,m}(R)$, Equation \ref{lec02ast}
+ has a non-trivial solution.
+ \item[(b)] $R^m\hookrightarrow R^n$ implies $m\le n$.
+ \item[(c)] $R^n\twoheadrightarrow R^m$ implies $m\le n$.
+ \item[(d)] $R^m\cong R^n$ implies $m=n$.
+ \end{enumerate}
+ \end{corollary}
+ \begin{proof}
+ $(a)$ If $m>n$, then $rk(A)\le \min\{n,m\}=n<m$, so Theorem \ref{lec02T:McCoy3} applies.
+
+ $(a\Rightarrow b)$ If $m>n$, then by $(a)$, any $R$-linear map $R^m\to R^n$
+ has a kernel. Thus, $R^m\hookrightarrow R^n$ implies $m\le n$.
+
+ $(b\Rightarrow c)$ If $R^n\twoheadrightarrow R^m$, then since $R^m$ is free,
+ there is a section $R^m\hookrightarrow R^n$ (which must be injective), so $m\le n$.
+
+ $(c\Rightarrow d)$ If $R^m\cong R^n$, then we have surjections both ways, so
+ $m\le n\le m$, so $m=n$.
+ \end{proof}
+ \begin{corollary}
+ Let $R\ne 0$, and let $A$ be an $n\times n$ matrix. Then the following are equivalent:
+ (1) $\det A\in \mathcal{C}(R)$; (2) the columns of $A$ are linearly independent; (3) the rows of
+ $A$ are linearly independent.
+ \end{corollary}
+ \begin{proof}
+ The columns are linearly independent if and only if Equation \ref{lec02ast} has no
+ non-trivial solutions, which occurs if and only if the rank of $A$ is equal to $n$,
+ which occurs if and only if $\det A$ is a non-zero-divisor.
+
+ The transpose argument shows that $\det A\in \mathcal{C}(R)$ if and only if the rows are
+ independent.
+ \end{proof}
+ \begin{proof}[Proof of the Theorem]
+ $0=Ax = \sum \alpha_i x_i$ if and only if the $\alpha_i$ are dependent, so $(1)$ and
+ $(2)$ are equivalent.
+
+ $(2\Rightarrow 3)$ Let $x\in R^m$ be a non-zero solution to $A\cdot x=0$. If $n<m$,
+ then $rk(A)\le n<m$ automatically, so assume $n\ge m$. Say $x_j\ne 0$. For any
+ $m\times m$ minor $B$ of $A$ (restrict $Ax=0$ to the corresponding rows), we have
+ $B\cdot x=0$, so multiplying by the adjugate of $B$ gives $\det(B)\,x_j=0$. Thus
+ $x_j$ annihilates $\mathcal{D}_m(A)$, which is therefore not dense, so $rk(A)<m$.
+
+ $(3\Rightarrow 2)$ Let $r=rk(A)<m$. Then $\mathcal{D}_{r+1}(A)$ is not dense, so
+ there is a non-zero $a\in R$ with $a\cdot \mathcal{D}_{r+1}(A)=0$. If $r=0$, then
+ $aA=0$, and $x=(a,0,\dots,0)^T$ is a non-trivial solution. If $r>0$, then since
+ $\mathcal{D}_r(A)$ is dense, there is an $r\times r$ minor $M$ with $a\det(M)\ne 0$;
+ after permuting rows and columns, we may assume $M$ occupies the first $r$ rows and
+ columns. For $j\le r+1$, let $M_j$ be the $r\times r$ minor formed from the first
+ $r$ rows and first $r+1$ columns of $A$ by deleting column $j$, and set
+ \[ x_j = (-1)^{j+1} a\det(M_j) \text{ for } j\le r+1, \qquad x_j=0 \text{ for } j>r+1. \]
+ Then $x_{r+1}=\pm a\det(M)\ne 0$, so $x$ is non-trivial. For any row index $i$, the
+ sum $\sum_j a_{ij}x_j$ is $\pm a$ times the cofactor expansion, along the row
+ $(a_{i1},\dots,a_{i,r+1})$, of the determinant of the $(r+1)\times (r+1)$ matrix
+ $B'$ whose rows are the first $r$ rows of $A$ (in the first $r+1$ columns) together
+ with this row. If $i\le r$, then $B'$ has a repeated row, so $\det B'=0$. If $i>r$, then $B'$ is an $(r+1)\times
+ (r+1)$ minor of $A$, so its determinant is annihilated by $a$.
+ In either case $\sum_j a_{ij}x_j=0$, so $A\cdot x=0$ and $(2)$ holds.
+ \end{proof}
+ \begin{corollary}
+ Suppose a module ${}_RM$ over a non-zero ring $R$ is generated by $\beta_1,\dots,
+ \beta_n\in M$. If $M$ contains $n$ linearly independent vectors, $\gamma_1,\dots,
+ \gamma_n$, then the $\beta_i$ form a free basis.
+ \end{corollary}
+ \begin{proof}
+ Since the $\beta_i$ generate, we have $\gamma = \beta\cdot A$ for some $n\times n$
+ matrix $A$. If $Ax=0$ for some non-zero $x$, then $\gamma \cdot x = \beta Ax = 0$,
+ contradicting independence of the $\gamma_i$. By Theorem \ref{lec02T:McCoy3},
+ $rk(A)=n$, so $d=\det(A)$ is a regular element.
+
+ Over $R[d^{-1}]$, there is an inverse $B$ to $A$. If $\beta\cdot
+ y=0$ for some $y\in R^n$, then $\gamma By = \beta y=0$. But the $\gamma_i$ remain
+ independent over $R[d^{-1}]$ since we can clear the denominators of any linear
+ dependence to get a dependence over $R$ (this is where we use that $d\in \mathcal{C}(R)$), so
+ $By=0$. But then $y=A\cdot 0 = 0$. Therefore, the $\beta_i$ are linearly independent,
+ so they are a free basis for $M$.
+\end{proof}
+
+\section{Finite presentation}
+
+\label{noetheriandescent}
+\subsection{Compact objects in a category}
+
+Let $\mathcal{C}$ be a category.
+In general, colimits tell one how to map \emph{out of} them, not into them,
+and there is no a priori reason to assume that if $F: I \to \mathcal{C}$ is a
+functor, that
+\begin{equation} \label{filtcolimhom} \varinjlim_i \hom(X, Fi) \to \hom(X,
+\varinjlim Fi) \end{equation}
+is an isomorphism.
+In practice, though, it often happens that when $I$ is
+\emph{filtered}, the above map is an isomorphism. For simplicity, we shall
+restrict to the case when $I$ is a \emph{directed} set
+(which is naturally a category); in this case, we call the colimits
+\textbf{inductive limits}.
+
+\begin{definition}
+The object $X$ is called \textbf{compact} if \eqref{filtcolimhom} is an
+isomorphism whenever $I$ is inductive.
+\end{definition}
+The following example motivates the term ``compact.''
+\begin{example}
+Let $\mathcal{C}$ be the category of Hausdorff topological spaces and
+\emph{closed inclusions} (so that we do not obtain a full subcategory of the
+category of topological spaces), and let $X$
+be a compact space. Then $X$ is a compact object in $\mathcal{C}$.
+
+Indeed, suppose $\left\{X_i\right\}_{i \in I}$ is an inductive system of
+Hausdorff spaces and closed inclusions. Suppose given a map $f:X \to \varinjlim
+X_i$. Then each $X_i$ is a closed subspace of the colimit, so we need to show that
+$f(X)$ lands inside one of the $X_i$. This will easily imply compactness.
+
+Suppose not. Then $f(X)$ contains, for each $i$, a point $x_i$ that belongs to
+no $X_j, j < i$. Choose a countable subset $T \subset I$ (if $I$ is finite,
+then this is automatic!). For each $t \in T$, we get an element $x_t \in
+f(X)$ that belongs to no $X_i$ for $i < t$. Note that if $t' \in T$, then it
+follows that $X_{t'} \cap \left\{x_t\right\}$ is finite.
+
+
+In particular, if $F \subset \left\{x_t\right\}$ is \emph{any} subset, then
+$X_{t'} \cap F$ is closed for each $t' \in T$.
+Thus $\varinjlim_T X_{t'}$ contains the set $F$ as a closed
+subset, and since this embeds as a closed subset of $\varinjlim X_i$, $F$ is
+thus closed in there too.
+The induced topology on $\left\{x_t\right\}$ is thus the discrete one.
+
+We have thus seen that the set $\left\{x_t\right\}$ is an infinite, discrete
+closed subset of $\varinjlim X_i$. However, it is a subset of $f(X)$ as well,
+which is compact, so it is itself compact; this is a contradiction.
+
+This example allows one to run the ``small object argument'' of Quillen for
+the category of topological spaces, and in particular to construct the
+\emph{Quillen model structure} on it. See \cite{Ho07}. As a simple example,
+we may note that if we have a sequence of closed subspaces (such as the
+skeleton filtration of a CW complex)
+\[ X_1 \subset X_2 \subset \dots \]
+it then follows easily from this that (where $[K, -]$ denotes homotopy
+classes of maps)
+\[ [K, \varinjlim X_i] = \varinjlim [K, X_i] \]
+for any compact space $K$. Taking $K$ to be a sphere, one finds that the
+homotopy group functors commute with inductive limits of closed inclusions.
+\end{example}
+
+
+
+This notion is closely related to that of ``smallness'' introduced in
+\cref{smallness} to prove an object can be imbedded in an injective module.
+For instance, smallness with respect to any limit ordinal and the class of all
+maps is basically equivalent to compactness in this sense.
+
+\add{this should be clarified. Can we replace any inductive limit by an
+ordinal one, assuming there's no largest element?}
+
+
+
+\subsection{Finitely presented modules}
+
+
+Let us recall that a module $M$ over a ring $R$ is said to be \emph{finitely
+presented} if there is an exact sequence
+\[ R^m \to R^n \to M \to 0. \]
+In particular, $M$ can be described by a ``finite amount of data:'' $M$ is
+uniquely determined by the matrix describing the map $R^m \to R^n$.
+Thus, to hom out of $M$ into an $R$-module $N$ is to specify the images of the $n$ generators
+(that are the images of the standard basis elements in $R^n$), that is to
+pick $n$ elements of $N$, and these
+images are required to satisfy $m$ relations (that come from the map $R^m \to
+R^n$).
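This recipe is easy to carry out by brute force in a toy case (our own): take $M = \mathbb{Z}^2/\mathrm{im}(P)$ with presentation matrix $P = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$, so $M \simeq \mathbb{Z}/2 \oplus \mathbb{Z}/3$, and count the maps $M \to \mathbb{Z}/6$:

```python
from itertools import product

# M = Z^2 / im(P): two generators, relations given by the rows of P
P = [[2, 0], [0, 3]]      # M = Z/2 (+) Z/3
n = 6                     # target module N = Z/6

# a hom M -> N is a choice of images (x, y) of the two generators
# satisfying the relations 2x = 0 and 3y = 0 in Z/6
homs = [(x, y) for x, y in product(range(n), repeat=2)
        if all((row[0] * x + row[1] * y) % n == 0 for row in P)]
```

There are $2 \cdot 3 = 6$ of them, matching $\hom(\mathbb{Z}/2 \oplus \mathbb{Z}/3, \mathbb{Z}/6)$.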
+
+
+Note that the theory of finitely presented modules is only special and new
+when one works with a non-noetherian rings; over a noetherian ring, every
+finitely generated module is finitely presented. Nonetheless, the techniques
+described here are useful even if one restricts one's attention to noetherian
+rings.
+
+\begin{exercise}
+Show that a finitely generated \emph{projective} module is finitely presented.
+\end{exercise}
+
+
+\begin{proposition} \label{fpcompact}
+In the category of $R$-modules, the compact objects are the finitely presented
+ones.
+\end{proposition}
+\begin{proof}
+First, let us show that a finitely presented module is in fact compact.
+Suppose $M$ is finitely presented and $\left\{N_i, i \in I\right\}$ is an
+inductive system of modules. Suppose given $M \to \varinjlim N_i$; we show
+that it factors through one of the $N_i$.
+
+There are finitely many generators $m_1, \dots,
+m_n$, and in the colimit
+\[ N = \varinjlim N_i , \]
+they must all lie in the image of some $N_j, j \in I$. Thus we can choose
+$r^{(j)}_1, \dots, r^{(j)}_n \in N_j$ such that $r^{(j)}_k$ and $m_k$ both map to the
+same thing in $\varinjlim N_i$.
+This alone does not enable us to conclude that $M \to \varinjlim N_i$
+factors through $N_j$, since the relations between the $m_1, \dots, m_n$ may not be
+satisfied between the putative liftings $r^{(j)}_k$ to $N_j$.
+
+However, we know that the relations \emph{are} satisfied when we push down to
+the colimit. Since there are only finitely many relations that we need to
+have satisfied, we can choose $j' > j$
+such that the relations all do become satisfied by the images of the
+$r^{(j)}_k$ in $N_{j'}$. We thus get a lifting $M \to N_{j'}$.
+
+We see from this that the map
+\[ \varinjlim \hom_R(M, N_i) \to \hom_R( M, \varinjlim N_i) \]
+is in fact surjective. To see that it is injective, note that if two maps $f,g:M
+\to N_j$ become the same map $M \to \varinjlim N_i$, then the finite set of
+generators $m_1, \dots, m_n$ must both be mapped to the same thing in some
+$N_{j'}, j' > j$.
+
+Now suppose $M$ is a compact object in the category of $R$-modules.
+First, we claim that $M$ is finitely generated. Indeed, we know that $M$ is
+the \emph{inductive} limit of its finitely generated submodules.
+Thus we get a map
+\[ M \to \varinjlim_{M_F \subset M, \text{f. gen}} M_F ,\]
+and by hypothesis it factors as $M \to M_F$ for some $M_F$. This
+implies that $M \to M_F \to M $ is the identity, and so $M = M_F$ and $M$ is
+finitely generated.
+
+Finally, we need to see that $M$ is finitely presented. Choose a surjection
+\[ R^n \twoheadrightarrow M \]
+and let the kernel be $K$. We would like to show that $K$ is finitely
+generated. Now $M \simeq R^n/K$, and consequently $M$ is the inductive limit
+$\varinjlim R^n/ K_F$ for $K_F$ ranging over the finitely generated submodules
+of $K$. It follows that the natural isomorphism $M \simeq \varinjlim R^n/K_F$
+factors as $M \to R^n/K_F$ for some $K_F$, which is thus an isomorphism. Hence
+$M$ is finitely presented.
+\end{proof}
+
+The above argument shows, incidentally, that if $M$ is finitely
+\emph{generated}, then
$\varinjlim \hom_R(M, N_i) \to \hom_R( M, \varinjlim N_i) $ is
+always \emph{injective.}
+
+\add{any module is an inductive limit of finitely presented modules}
+\add{Lazard's theorem on flat modules}
+
+\subsection{Finitely presented algebras}
+
+Let $R$ be a commutative ring.
+\begin{definition}
+An $R$-algebra $A$ is called \textbf{finitely presented} if $A$ is isomorphic
+to an $R$-algebra of the form $R[x_1, \dots, x_n]/I$, where $I \subset R[x_1,
+\dots, x_n]$ is a finitely generated ideal in the polynomial ring.
+A morphism of rings $\phi: R \to R'$ is called \textbf{finitely presented} if
+it makes $R'$ into a finitely presented $R$-algebra.
+\end{definition}
+
+For instance, a quotient of $R$ by a finitely generated ideal is a finitely
+presented $R$-algebra. If $R$ is noetherian, then by the Hilbert basis
+theorem, an $R$-algebra is finitely presented if and only if it is finitely
+generated.
+
+\begin{proposition}
+The finitely presented $R$-algebras are the compact objects in the category of
+$R$-algebras.
+\end{proposition}
+We leave the proof to the reader, as it is analogous to \cref{fpcompact}.
+
+The notion of a finitely presented algebra is analogous to that of a finitely
+presented module, insofar as a finitely presented algebra can be specified by a
+finite amount of ``data.''
+Namely, this data consists of the generators $x_1, \dots, x_n$ and the
+finitely many relations that they are required to satisfy (these finitely
+many relations can be taken to be generators of $I$).
+Thus, to hom out of $A$ is ``easy:'' to map into an $R$-algebra $B$, we need
+to specify $n$ elements of $B$, which have to satisfy the finitely many
+relations that generate the ideal $I$.
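
Concretely, if $A = R[x_1, \dots, x_n]/(f_1, \dots, f_m)$, then for any
$R$-algebra $B$ the universal property of the polynomial ring gives a natural
bijection
\[ \hom_{R\text{-}\mathrm{alg}}(A, B) \simeq \left\{ (b_1, \dots, b_n) \in B^n
: f_i(b_1, \dots, b_n) = 0 \text{ for } 1 \leq i \leq m \right\}, \]
a homomorphism corresponding to the tuple of images of the $x_i$.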
+
+
+Like most nice types of morphisms, finitely presented morphisms have a
+``sorite.''
+\begin{proposition}[Le sorite for finitely presented morphisms] \label{soritefp}
Finitely presented morphisms are preserved under composition and base-change.
+That is, if $\phi: A \to B$ is a finitely presented morphism, then:
+\begin{enumerate}
+\item If $A'$ is any $A$-algebra, then $\phi \otimes A': A' \to B \otimes_A
+A'$ is finitely presented.
\item If $\psi: B \to C$ is finitely presented, then $C$ is finitely
presented over $A$ (that is, $\psi \circ \phi$ is finitely presented).
+\end{enumerate}
+\end{proposition}
+\begin{proof}
+First, we show that finitely presented morphisms are preserved under base-change.
+Suppose $B$ is finitely presented over $A$, thus isomorphic to a quotient $A[x_1, \dots,
+x_n]/I$, where $I$ is a finitely generated ideal in the polynomial ring. Then
+for any $A$-algebra $A'$, we have that
+\[ B \otimes_A A' = A'[x_1, \dots, x_n]/ I' \]
+where $I'$ is the ideal in $A'[x_1, \dots, x_n]$ generated by $I$. (This
follows by right-exactness of the tensor product.) Thus $I'$ is finitely
generated and $B \otimes_A A'$ is finitely presented over $A'$.
+
+Next, we show that finitely presented morphisms are closed under composition.
+Suppose $A \to B$ and $B \to C$ are finitely presented morphisms. Then $B$ is isomorphic as
$A$-algebra to $A[x_1, \dots, x_n]/I$ and $C$ is isomorphic as $B$-algebra to
+$B[y_1, \dots, y_m]/J$, where $I, J$ are finitely generated ideals.
Thus $C \simeq A[x_1, \dots, x_n, y_1, \dots, y_m]/(I+\widetilde{J})$, where
$\widetilde{J}$ is generated by lifts of the finitely many generators of $J$ to
polynomials in $A[x_1, \dots, x_n, y_1, \dots, y_m]$. This is clearly a
finitely generated ideal.
+\end{proof}
+
+Finitely presented morphisms have a curious cancellation property that we
+tackle next. In algebraic geometry, one often finds properties $\mathcal{P}$ of morphisms
+of schemes such that if a composite
+\[ X \stackrel{f}{\to} Y \stackrel{g}{\to} Z \]
+has $\mathcal{P}$, then so does $f$ (possibly with weak conditions on $g$).
+One example of this (in any category) is the class of monomorphisms. A more
+interesting example (for schemes) is the property of separatedness; the
+interested reader
+may consult \cite{EGA}.
+
+In our case, we shall illustrate this cancellation phenomenon in the category
+of commutative rings. Since arrows for schemes go in the opposite direction as
+arrows of rings, this will look slightly different.
+
+\begin{proposition}
+Suppose we have a composite
+\[ A \stackrel{f}{\to} B \stackrel{g}{\to} C \]
+such that $g \circ f: A \to C$ is finitely presented, and $f$ is of finite
+type (that is, $B$ is a finitely generated $A$-algebra). Then $g: B \to C$
+is finitely presented.
+\end{proposition}
+\begin{proof}
We shall prove this using the fact that the \emph{codiagonal} map in the
category of commutative rings is finitely presented whenever the structure map is of finite type:
+
+\begin{lemma}
+Let $S$ be a finitely generated $R$-algebra. Then the map $S \otimes_R S \to
+S$ is finitely presented.
+\end{lemma}
+\begin{proof}
+We shall show that the kernel $I$ of $S \otimes_R S \to S$ is a \emph{finitely generated} ideal. This will
+clearly imply the claim, as $S \otimes_R S \to S$ is obviously a surjection.
+
+To see this, let $\alpha_1, \dots, \alpha_n \in S$ be generators for $S$ as an
+$R$-algebra. The claim is that the elements $1 \otimes \alpha_i - \alpha_i
+\otimes 1$ generate $I$ as an $S \otimes_R S$-module.
Clearly these live in $I$. Conversely, it is clear that $I$ is generated by
elements of the form $ x \otimes 1 - 1 \otimes x$: if $z=\sum x_k
\otimes y_k \in I$, then $z = \sum (x_k \otimes 1) \left( 1 \otimes y_k - y_k
\otimes 1 \right) + \sum x_k y_k \otimes 1$, and the last term vanishes by
definition of $I$.
+
+In other words, if we define $d(\alpha) = \alpha \otimes 1 - 1 \otimes
+\alpha$ for $\alpha \in S$, then $I$ is generated by elements $d(\alpha)$.
+Now $d$ is clearly $R$-linear, and we have the identity
+\begin{align*} d(\alpha \beta) & = \alpha \beta \otimes 1 - 1 \otimes \alpha
+\beta \\
+& =
+ \alpha \beta \otimes 1 - \alpha \otimes \beta + \alpha \otimes \beta - 1 \otimes \alpha
+\beta \\
+& = (\alpha \otimes 1) d(\beta) + (1 \otimes \beta) d(\alpha).
+\end{align*}
+Thus $d(\alpha \beta)$ is in the $S \otimes_R S$-module spanned by $d(\alpha)$
+and $d(\beta)$.
+From this, it is clear that $d(\alpha_1), d(\alpha_2), \dots, d(\alpha_n)$
+generate $I$ as a $S \otimes_R S $-module.
+\end{proof}
+
From this lemma, we can prove the proposition as follows.
+We can write $g: B \to C$ as the composite
+\[ B \to B \otimes_A C \to C \]
+where the first map is the base-change of the finitely presented morphism $A
+\to C$ and the second morphism is the base-change of the finitely presented
+morphism $B \otimes_A B \to B$. Thus the composite $B \to C$ is finitely
+presented.
+\end{proof}
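
To make the lemma concrete, take $S = R[t]$. Then $S \otimes_R S \simeq
R[t_1, t_2]$, where $t_1 = t \otimes 1$ and $t_2 = 1 \otimes t$, and the
codiagonal is the map
\[ R[t_1, t_2] \to R[t], \qquad t_1, t_2 \mapsto t. \]
The lemma asserts that its kernel is generated by $d(t) = t_1 - t_2$, which
can also be seen directly here: if $f(t_1, t_2)$ satisfies $f(t,t) = 0$, then
\[ f(t_1, t_2) = f(t_1, t_2) - f(t_2, t_2) \in (t_1 - t_2), \]
since a polynomial in $t_1$ vanishing at $t_1 = t_2$ is divisible by $t_1 - t_2$.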
+\section{Inductive limits of rings}
+
+We shall now find ourselves in the following situation. We shall have an
+inductive system $\left\{A_\alpha\right\}_{\alpha \in I}$ of rings, indexed by a
+directed set $I$. With $A = \varinjlim A_\alpha$, we will be interested in relating
+categories of modules and algebras over $A$ to the categories over $A_\alpha$.
+
The basic idea will be as follows. Given an object (e.g.\ a module) $M$ of finite presentation over
+$A$, we will be able to find an object $M_\alpha$ of finite presentation over some
+$A_\alpha$ such that $M$ is obtained from $M_\alpha$ by base-change $A_\alpha
+\to A$.
+Moreover, given a morphism $M \to N$ of objects over $A$, we will be able to
+``descend'' this to a morphism $M_\alpha \to N_\alpha$ of objects of finite
+presentation over some $A_\alpha$, which will induce $M \to N$ by base-change.
+In other words, the \emph{category} of objects over $A$ of finite presentation
+will be the inductive limit of the \emph{categories} of such objects over the
+$A_\alpha$.
+
+\subsection{Prologue: fixed points of polynomial involutions over $\mathbb{C}$}
+
+Following \cite{Se09}, we give an application of these ideas to a simple
+concrete problem. This will help illustrate some of them, even though we have
+not formally developed the machinery yet.
+
+
+If $k$ is an algebraically closed field, a map $k^n \to k^n$ is called \emph{polynomial} if each of
+the components is a polynomial function in the input coordinates.
+So if we identify $k^n $ with the closed points of $\spec
+k[x_1, \dots, x_n]$, then a polynomial function is just the
restriction to the closed points of an endomorphism of $\spec
+k[x_1, \dots, x_n]$ induced by an algebra endomorphism.
+
+\begin{theorem}
+Let $F: \mathbb{C}^n \to \mathbb{C}^n$ be a polynomial map with $F \circ F =
+1_{\mathbb{C}^n}$. Then $F$ has a fixed point.
+\end{theorem}
+
+We can phrase this alternatively as follows. Let $\sigma: \mathbb{C}[x_1,
\dots, x_n] \to \mathbb{C}[x_1, \dots, x_n]$ be a $\mathbb{C}$-algebra involution.
+Then the map on the $\spec$'s has a fixed point (which is a closed
+point\footnote{One can show that if there is a fixed point, there is a fixed
+point that is a closed point.}).
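
When $n = 1$, the theorem can be verified by hand, and the computation already
shows why $\frac{1}{2}$ matters. If $F \circ F = 1_{\mathbb{C}}$, then
$(\deg F)^2 = \deg(F \circ F) = 1$, so $F(x) = ax + b$ is linear, and
$F(F(x)) = a^2 x + ab + b = x$ forces $a^2 = 1$. Either $a = 1$ and $b = 0$,
so $F$ is the identity, or $a = -1$, in which case
\[ F(x) = b - x, \qquad F\left( \frac{b}{2} \right) = \frac{b}{2}. \]
The fixed point $\frac{b}{2}$ requires dividing by $2$, which is one way to
see why we arrange $\frac{1}{2} \in R$ in the proof below.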
+
+
+\begin{proof}
+It is clear that the presentation of $\sigma$ involves only a finite amount of
data, so we can construct a finitely generated
+$\mathbb{Z}$-algebra $R \subset \mathbb{C}$ and an involution
+\[ \overline{\sigma}: R[x_1, \dots, x_n] \to R[x_1, \dots, x_n] \] such that $\sigma$ is obtained from
+$\overline{\sigma}$ by base-changing $R \to \mathbb{C}$.
+We can assume that $\frac{1}{2} \in R$ as well.
To see this explicitly, we need only add to $R$ the coefficients of the
polynomials $\sigma(x_1), \dots, \sigma(x_n)$, together with $\frac{1}{2}$, and
consider the $\mathbb{Z}$-algebra they generate.
+
Suppose now that the system of equations $\sigma(x_i) = x_i$, $1 \leq i \leq
n$, has no solution in $\mathbb{C}^n$. By the Nullstellensatz, this is
equivalent to stating that the finite
system of polynomials $\sigma(x_i) - x_i$ generates the unit ideal in $\mathbb{C}[x_1, \dots,
x_n]$, so that there are polynomials $P_i \in \mathbb{C}[x_1, \dots, x_n]$
such that $\sum P_i \left( \sigma(x_i) - x_i \right) = 1$.
+
+Let us now enlarge $R$ so that the coefficients of the $P_i$ lie in $R$.
+Since the coefficients of the $\sigma(x_i)$ are already in $R$, we find
+that the polynomials $\sigma(x_i) - x_i$ will generate the unit ideal in
+$R[x_1, \dots, x_n]$.
+If $R'$ is a homomorphic image of $R$, then this will be true in $R'[x_1,
+\dots, x_n]$.
+
+Choose a maximal ideal $\mathfrak{m} \subset R$. Then $R/\mathfrak{m}$ is a
+finite field, and $\sigma$ becomes an involution
+\[ (R/\mathfrak{m})[x_1, \dots, x_n] \to (R/\mathfrak{m})[x_1, \dots, x_n]. \]
If we let $k$ be the algebraic closure of $R/\mathfrak{m}$, then we
have an involution
\[ \widetilde{\sigma}: k[x_1, \dots, x_n] \to k[x_1, \dots, x_n]. \]
But the map induced by $\widetilde{\sigma}$ on $k^n$ has \emph{no fixed points.} This follows because the
$\widetilde{\sigma}(x_i) - x_i$ generate the unit ideal in $k[x_1, \dots,
x_n]$ (because we can consider the images of the $P_i$ in $k[x_1, \dots, x_n]$).
Moreover, $\mathrm{char}\, k \neq 2$: since $\frac{1}{2} \in R$, the element $2$ is
invertible in $k$ as well.
+
+So from the initial fixed-point-free involution $F$ (or $\sigma$), we have
+induced a
+polynomial map $k^n \to k^n$ with no fixed points. We need only now prove:
+
+\begin{lemma} \label{easycaseoffptheorem}
+If $k$ is the algebraic closure of $\mathbb{F}_p$ for $p \neq 2$, then any
+involution $F: k^n \to k^n$ which is a polynomial map has a fixed point.
+\end{lemma}
+\begin{proof}
+This is very simple. There is a finite field $\mathbb{F}_q$ in which the
+coefficients of $F$ all lie; thus $F$ induces a map
+\[ \mathbb{F}_q^n \to \mathbb{F}_q^n \]
+which is necessarily an involution. But an involution on a finite set of odd
+cardinality necessarily has a fixed point (or all orbits would be even).
+\end{proof}
+
+\end{proof}
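
The parity argument in the lemma is finite and can be checked mechanically.
The sketch below (purely illustrative; the swap map $F(x,y) = (y,x)$ is a
hypothetical example of a polynomial involution, not one from the text)
verifies over $\mathbb{F}_3^2$ that the non-fixed points pair off into
$2$-element orbits, so an ambient set of odd cardinality forces a fixed point.

```python
# Check the parity argument: an involution on a finite set of odd
# cardinality must have a fixed point.  We use the polynomial involution
# F(x, y) = (y, x) on F_3^2, a set of 9 (odd) elements.
from itertools import product

q, n = 3, 2
points = list(product(range(q), repeat=n))  # the 9 points of F_3^2

def F(p):
    x, y = p
    return (y % q, x % q)  # the swap map, a polynomial involution

assert all(F(F(p)) == p for p in points)  # F really is an involution
fixed = [p for p in points if F(p) == p]

# Non-fixed points come in 2-element orbits {p, F(p)}, so their number is
# even; since |F_3^2| = 9 is odd, at least one fixed point must exist.
assert (len(points) - len(fixed)) % 2 == 0
assert len(fixed) >= 1
print(len(fixed))  # prints 3: the diagonal (0,0), (1,1), (2,2)
```

Here the fixed points are exactly the diagonal, but the parity count alone
already guarantees their existence.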
+
+
+\begin{remark}
+An alternative approach to the above proof is to use a little bit of model
+theory. There is a general principle due to Abraham Robinson, that can be
+stated roughly as follows. If a sentence $P$ in the first-order logic of fields
+(that is, one is allowed to refer to the elements $0,1$ and to addition and
+multiplication; in addition, one is allowed to make existential and universal
+quantifications, negations, disjunctions, and conjunctions) has the property
+that $P$ is true for an algebraically closed field of characteristic $p$ for
+each $p \gg 0$, then $P$ holds in \emph{every} algebraically closed field of
+characteristic zero.
+This principle follows from a combination of the compactness theorem and the
+fact that the theory of algebraically closed fields of a fixed characteristic
is \emph{complete}: any sentence is true in all such fields, or in none of them.
+
+Consider the statement $S_{n,d}$ that for any polynomial map $F: k^n \to k^n$
consisting of polynomials of degree $\leq d$ such that $F \circ F = 1_{k^n}$, there is
+$(x_1, \dots, x_n) \in k^n$ with $F(x_1, \dots, x_n) = (x_1, \dots, x_n)$.
+Then $S_{n,d}$ is clearly a statement of first-order logic.
+\cref{easycaseoffptheorem} shows that $S_{n,d}$ holds in
+$\overline{\mathbb{F}_p}$ whenever $p > 2$. Thus, $S_{n,d}$ holds in
+$\mathbb{C}$ by Robinson's principle.
+
+These types of model-theoretic arguments can be used to prove the \textbf{Ax-Grothendieck
+theorem}: an injective polynomial map $\mathbb{C}^n \to \mathbb{C}^n$ is
+surjective. See \cite{Ma02}.
+\end{remark}
+
+\subsection{The inductive limit of categories}
+
+\add{general formalism to clarify all this}
+
+
+
+\subsection{The category of finitely presented modules}
+
+Throughout, we let $\left\{A_{\alpha}\right\}_{\alpha \in I}$ be an inductive
+system of rings, and $A = \varinjlim A_\alpha$.
+We are going to relate the category of finitely presented modules over $A$ to
+the categories of finitely presented modules over the $A_\alpha$.
+
+We start by showing that any module over $A$ ``descends'' to one of the
+$A_\alpha$.
+\begin{proposition} \label{descentfpmodule}
+Suppose $M$ is a finitely presented module over $A$. Then there is $\alpha \in
+I$ and a finitely presented $A_\alpha$-module $M_\alpha$ such that $M \simeq
+M_\alpha \otimes_{A_\alpha} A$.
+\end{proposition}
+\begin{proof}
+Indeed, $M$ is the cokernel of a morphism
+\[ f: A^m \to A^n \]
by definition. This morphism is described by an $m$-by-$n$ (or $n$-by-$m$,
depending on conventions) matrix with coefficients in $A$. Each of these
finitely many coefficients lies in the image of some $A_\alpha$ (by
definition of the inductive limit), and choosing $\alpha$ ``large'' we can
assume that every coefficient in the matrix is in the image of $A_\alpha \to A$.
+Then we have a morphism
+\[ f_\alpha: A_\alpha^m \to A_\alpha^n \]
+that induces $f$ by base-change to $A$. Then we may let $M_\alpha$ be the
+cokernel of $f_\alpha$ since the tensor product is right-exact.
+\end{proof}
+
Now, we want to show that if the base-changes of two finitely presented modules
over $A_\alpha$ to $A$ become isomorphic, then they ``become isomorphic'' at some
+$A_\beta $ (for $\beta > \alpha$).
+We shall actually prove a more general result.
+Namely, we shall see that
+a morphism at the colimit ``descends'' to one of the steps.
+
+\begin{proposition} \label{colimfpmodules} We keep the same notation as above.
+Suppose $M_\alpha, N_\alpha$ are finitely presented modules over $A_\alpha$.
+Write $M_\beta = M_\alpha \otimes_{A_\alpha} A_\beta, N_\beta = N_\alpha
+\otimes_{A_\alpha} A_\beta$ for each $\beta > \alpha$ and $M, N$ for the
base-changes to $A$.
+
+Suppose there is a morphism $f: M \to N$. Then there is $\beta \geq \alpha$ such
+that $f$ is obtained by base-changing a morphism $f_\beta: M_\beta \to N_\beta$.
+If $f_\beta, f_\gamma$ are any two morphisms that do this, then there is
+$\delta \geq \beta, \gamma$ such that $f_\beta, f_\gamma$ become equal when
+base-changed to $A_\delta$.
+\end{proposition}
+The conclusion of this result is then
+\[ \hom_A(M, N) = \varinjlim_{\beta} \hom_{A_\beta}(M_\beta, N_\beta). \]
+The last part is essentially the ``uniqueness'' that we were discussing previously.
+\begin{proof} Suppose the transition maps $A_\alpha \to A_\beta$ are denoted
+$\phi_{\alpha \beta}$, and the natural maps $A_\alpha \to A$ are denoted
+$\phi_\alpha$.
+
+We know that there are exact sequences
+\[ A_\alpha^m \stackrel{\textbf{M}}{\to} A_\alpha^n \to M_\alpha \to 0, \]
+and
+\[ A_\alpha^p \to N_\alpha \to 0. \]
+These are preserved by tensoring with $A$. Here $\textbf{M}$ is a suitable matrix.
+So we get exact sequences
+\begin{gather*}
+A^m \stackrel{\phi_\alpha(\textbf{M})}{\to} A^n \to M \to 0 \\
+A^p \to N \to 0
+\end{gather*}
and the freeness of $A^n$ shows that the map $A^n \to M \to N$ can be
lifted along the surjection $A^p \to N$ to a map $A^n \to A^p$, given by some
matrix $\textbf{M}'$ with coefficients in $A$. The composite $A^m
\stackrel{\phi_\alpha(\textbf{M})}{\to} A^n \stackrel{\textbf{M}'}{\to} A^p
\to N$ is zero, because it equals $A^m \to A^n \to M \to N$, and $A^m \to A^n
\to M$ is already zero.

Now $\textbf{M}'$ can be written as $\phi_\beta(\textbf{M}'')$ for some $\beta
\geq \alpha$ and some matrix $\textbf{M}''$ with coefficients in $A_\beta$, or
in other words a map $A_\beta^n \to A_\beta^p$. We would like to use this to
get a map $M_\beta \to N_\beta$ induced by $A_\beta^n \to A_\beta^p \to
N_\beta$; for this we need the composite $A_\beta^m \to A_\beta^n \to
A_\beta^p \to N_\beta$ to vanish. This need not hold for our initial choice of
$\beta$, but the composite is determined by the images in $N_\beta$ of the $m$
standard basis vectors of $A_\beta^m$, and these images become zero in the
colimit $N$; hence they become zero in $N_{\beta'}$ for some $\beta' \geq
\beta$. Replacing $\beta$ by such a $\beta'$, we obtain the desired map
$M_\beta \to N_\beta$, whose base-change to $A$ is $f$.
+
+Finally, we need uniqueness. Suppose $f_\beta: M_\beta \to N_\beta$ and
+$f_\gamma: M_\gamma \to N_\gamma$ both are such that the base-changes to $A$
+are the same morphism $M \to N$. We need to find a $\delta$ as in the
+proposition. By replacing $\beta, \gamma$ with a mutual upper bound, we may
+suppose that $\beta = \gamma$; we shall write the two morphisms as $f_\beta,
+g_\beta$ then.
+
+Consider the pull-backs $A_\beta^n \stackrel{f_\beta, g_\beta}{\to } N_\beta$.
+These uniquely determine $f_\beta, g_\beta$ (since the map $A_\beta^n \to
+M_\beta$ is a surjection). These pull-backs are specified by $n$ elements of
+$N_\beta$. If the base-changes of $f_\beta, g_\beta$ via $\phi_{\beta}:
+A_\beta \to A$ are the same, then these $n$ elements of $N_\beta$ become the
same in $N = \varinjlim_{\beta'} N_\beta \otimes_{A_\beta} A_{\beta'}$; thus they
become equal at some finite stage, so there is $\beta' > \beta$ such that the
base-changes satisfy $f_{\beta'} = g_{\beta'}$.
+
+\end{proof}
+
+\begin{remark}
The idea of the above proof was to exploit the fact that a homomorphism
carries a finite amount of data, namely the images of the generators, together
with the condition that these images satisfy finitely many relations. In essence, it is
+analogous to the argument that finitely presented modules over a \emph{fixed}
+ring are compact objects in that category.
+\end{remark}
+
+
+\begin{remark}
+In fact, we can give an alternative (and slightly simpler) argument for \cref{colimfpmodules}.
+We know that
+\[ \hom_{A_\beta}(M_\beta, N_\beta) = \hom_{A_\alpha}(M_\alpha, N_\beta) \]
+by the adjoint property of the tensor product, and similarly
+\[ \hom_{A}(M,N) = \hom_{A_\alpha}(M_\alpha, N). \]
+So the assertion we are trying to prove is
+\[ \hom_{A_\alpha}(M_\alpha, N) = \varinjlim_{\beta} \hom_{A_\alpha}(M_\alpha,
+N_\beta) , \]
+which follows from \cref{fpcompact}.
+\end{remark}
+
+\begin{exercise}
+Give a proof of the following claim. If $M$ is a finitely generated module
+over a noetherian ring $R$, $\mathfrak{p} \in \spec R$ is such that
+$M_{\mathfrak{p}}$ is free over $R_{\mathfrak{p}}$, then there is $f \in R - \mathfrak{p}$ such that
+$M_f$ is free over $R_f$.
+\end{exercise}
+
+\subsection{The category of finitely presented algebras}
+
+We can treat the category of finitely presented algebras over such an
+inductive limit in a similar manner.
+As before, let $\left\{A_\alpha\right\}_{\alpha \in I}$ be an inductive system
+of rings with $A = \varinjlim A_\alpha$.
+For each $\alpha$, there is a functor from the category of finitely presented $A_\alpha$-algebras
+to the category of finitely presented $A$-algebras sending $C \mapsto C
+\otimes_{A_\alpha} A$.
+(Note that morphisms of finite presentation are preserved under base-change by
+\cref{soritefp}.)
+
+\begin{proposition}
+Suppose $B$ is a finitely presented $A$-algebra. Then there is $\alpha \in I$
+and a finitely presented $A_\alpha$-algebra $B_\alpha$ such that $B \simeq
+B_\alpha \otimes_{A_\alpha} A$.
+\end{proposition}
+\begin{proof}
+This is analogous to the proof of \cref{descentfpmodule}.
+\end{proof}
+
+\add{analog of the next result}
+
+\subsection{$\spec$ and inductive limits}
+
+Suppose $\left\{A_\alpha\right\}_{\alpha\in I}$ is an inductive system of
+commutative rings, as before; we let $A = \varinjlim A_\alpha$.
Since $\spec$ is a contravariant functor, the spaces $\spec A_\alpha$
form a \emph{projective} system of topological spaces.\footnote{Or schemes.}
+We are now interested in relating $\spec A$ to the individual $\spec A_\alpha$.
+
+\begin{proposition}
+$\spec A$ is the projective limit $\varprojlim \spec A_\alpha$ in the category
+of topological spaces.
+\end{proposition}
+
+Recall that if $\left\{X_\alpha\right\}$ is a projective system of topological
+spaces with transition maps $\phi_{\beta \alpha}: X_\beta \to X_\alpha$ whenever $\alpha \leq
+\beta$, then the projective limit $\varprojlim X_\alpha$ can be constructed as
follows. One considers the subset of $\prod X_\alpha$ consisting of sequences
$(x_\alpha)$ such that $\phi_{\beta \alpha}(x_\beta) = x_\alpha$ whenever
$\alpha \leq \beta$. One can easily check that this has the universal property
+of the projective limit.
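
As a finite-stage illustration of this construction (for sets rather than
spectra; the specific system $\mathbb{Z}/2 \leftarrow \mathbb{Z}/4 \leftarrow
\mathbb{Z}/8 \leftarrow \mathbb{Z}/16$ is our choice of example), one can
compute the limit literally as the set of compatible tuples inside the
product:

```python
# Inverse limit of the truncated system Z/2 <- Z/4 <- Z/8 <- Z/16, where each
# transition map is reduction, computed as the subset of the product
# consisting of compatible sequences, exactly as described in the text.
from itertools import product

stages = [2, 4, 8, 16]  # moduli; Z/16 -> Z/8 etc. is reduction mod the smaller modulus

def compatible(tup):
    # phi_{beta alpha}(x_beta) = x_alpha: each coordinate is the reduction
    # of the next, finer one.
    return all(tup[i] == tup[i + 1] % stages[i] for i in range(len(tup) - 1))

limit = [t for t in product(*(range(m) for m in stages)) if compatible(t)]

# A compatible tuple is determined by its coordinate at the finest stage,
# so the truncated limit has exactly 16 elements.
assert len(limit) == 16
```

Letting the tower grow, the same recipe yields the $2$-adic integers as
$\varprojlim \mathbb{Z}/2^n$.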
+
+\begin{proof}
+Let us first verify that the assertion is true as \emph{sets.} There are maps
+\[ \spec A \to \spec A_\alpha \]
+for each $\alpha \in I$, which are obviously compatible (since the
+$\left\{A_\alpha\right\}$ form an inductive system) so that they lead to a
+(continuous) map of topological spaces
+\[ \spec A \to \varprojlim \spec A_\alpha. \]
+We first verify injectivity. Suppose two primes $\mathfrak{p}, \mathfrak{p}'$ were sent to the same element
+of $\varprojlim \spec A_\alpha$. This means that if $\phi_\alpha: A_\alpha \to
+A$ is the natural morphism for each $\alpha$, we have
+$\phi_\alpha^{-1}(\mathfrak{p}) = \phi_\alpha^{-1}(\mathfrak{p}')$ for all
+$\alpha$. It follows that the intersections of $\mathfrak{p}, \mathfrak{p}'$
+with the image of $A_\alpha$ are identical; since $A$ is the union of
+$\phi_\alpha(A_\alpha)$ over all $\alpha$, this implies $\mathfrak{p} =
+\mathfrak{p}'$.
+
+Now let us verify surjectivity. Suppose given a sequence $\mathfrak{p}_\alpha$
+of primes in $A_\alpha$, for each $\alpha$, such that $\mathfrak{p}_\alpha$ is
+the pre-image of $\mathfrak{p}_\beta$ under $A_\alpha \to A_\beta$ whenever
+$\alpha \leq \beta$. We want to form a prime ideal $\mathfrak{p} \in \spec A$
+pulling back to all these. To do this, we decide that $x \in \mathfrak{p}$ if
+and only if there exists $\alpha \in I$ such that $x \in
+\phi_\alpha(\mathfrak{p}_\alpha)$ (recall that $\phi_\alpha: A_\alpha \to A$
+is the natural map). This does not depend on the choice of $\alpha$, and one
+verifies easily that this is a prime ideal with the appropriate properties.
+
+We now have to show that the map $\spec A \to \varprojlim \spec A_\alpha$ is
+in fact a homeomorphism. We have seen that it is continuous and bijective, so
+we must prove that it is open. If $a \in A$, we will be done if we can show
+that the image of the basic open set $D(a) \subset \spec A$ is open in
+$\varprojlim \spec A_\alpha$.
+
Suppose $a = \phi_\beta(a_\beta)$ for some $a_\beta \in A_\beta$. Then the
claim is that the image of $D(a)$ is precisely the subset of $\varprojlim
\spec A_\alpha$ consisting of those sequences whose $\beta$th coordinate
(which is an element of $\spec A_\beta$!)
lies in $D(a_\beta)$. This is clearly an open set, so if we prove this, then
we are done. Indeed, if $\mathfrak{p} \in D(a) \subset \spec A$,
+then clearly the preimage in $A_\beta$ cannot contain $a_\beta$ (since
+$a_\beta$ maps to $a$). Conversely, if we have a compatible sequence
+$\left\{\mathfrak{p}_\alpha\right\}$ of primes such that $\mathfrak{p}_\beta
+\in D(a_\beta)$, then the above construction of a prime $\mathfrak{p} \in
+\spec A$ from this shows that $a \notin \mathfrak{p}$.
+\end{proof}
diff --git a/books/hott/basics.tex b/books/hott/basics.tex
new file mode 100644
index 0000000000000000000000000000000000000000..16b1f660f7fa62a78b2076797eeb0bd84a7ccd13
--- /dev/null
+++ b/books/hott/basics.tex
@@ -0,0 +1,2693 @@
+\chapter{Homotopy type theory}
+\label{cha:basics}
+
+The central new idea in homotopy type theory is that types can be regarded as
+spaces in homotopy theory, or higher-dimensional groupoids in category
+theory.
+
+\index{classical!homotopy theory|(}
+\index{higher category theory|(}
+We begin with a brief summary of the connection between homotopy theory
+and higher-dimensional category theory.
+In classical homotopy theory, a space $X$ is a set of points equipped
+with a topology,
+\indexsee{space!topological}{topological space}
+\index{topological!space}
+and a path between points $x$ and $y$ is represented by
+a continuous map $p : [0,1] \to X$, where $p(0) = x$ and $p(1) = y$.
+\index{path!topological}
+\index{topological!path}
+This function can be thought of as giving a point in $X$ at each
+``moment in time''. For many purposes, strict equality of paths
+(meaning, pointwise equal functions) is too fine a notion. For example,
+one can define operations of path concatenation (if $p$ is a path from
+$x$ to $y$ and $q$ is a path from $y$ to $z$, then the concatenation $p
+\ct q$ is a path from $x$ to $z$) and inverses ($\opp p$ is a path
+from $y$ to $x$). However, there are natural equations between these
+operations that do not hold for strict equality: for example, the path
+$p \ct \opp p$ (which walks from $x$ to $y$, and then back along the
+same route, as time goes from $0$ to $1$) is not strictly equal to the
+identity path (which stays still at $x$ at all times).
+
+The remedy is to consider a coarser notion of equality of paths called
+\emph{homotopy}.
+\index{homotopy!topological}
+A homotopy between a pair of continuous maps $f :
+X_1 \to X_2$ and $g : X_1\to X_2$ is a continuous map $H : X_1
+\times [0, 1] \to X_2$ satisfying $H(x, 0) = f (x)$ and $H(x, 1) =
+g(x)$. In the specific case of paths $p$ and $q$ from $x$ to $y$, a homotopy is a
+continuous map $H : [0,1] \times [0,1] \rightarrow X$
+such that $H(s,0) = p(s)$ and $H(s,1) = q(s)$ for all $s\in [0,1]$.
+In this case we require also that $H(0,t) = x$ and $H(1,t)=y$ for all $t\in [0,1]$,
+so that for each $t$ the function $H(\blank,t)$ is again a path from $x$ to $y$;
+a homotopy of this sort is said to be \emph{endpoint-preserving} or \emph{rel endpoints}.
+In simple cases, we can think of the image of the square $[0,1]\times [0,1]$ under $H$ as ``filling the space'' between $p$ and $q$, although for general $X$ this doesn't really make sense; it is better to think of $H$ as a continuous deformation of $p$ into $q$ that doesn't move the endpoints.
+Since $[0,1]\times [0,1]$ is 2-dimensional, we also speak of $H$ as a 2-dimensional \emph{path between paths}.\index{path!2-}
+
+For example, because
+$p \ct \opp p$ walks out and back along the same route, you know that
+you can continuously shrink $p \ct \opp p$ down to the identity
+path---it won't, for example, get snagged around a hole in the space.
+Homotopy is an equivalence relation, and operations such as
+concatenation, inverses, etc., respect it. Moreover, the homotopy
+equivalence classes of loops\index{loop} at some point $x_0$ (where two loops $p$
+and $q$ are equated when there is a \emph{based} homotopy between them,
+which is a homotopy $H$ as above that additionally satisfies $H(0,t) =
+H(1,t) = x_0$ for all $t$) form a group called the \emph{fundamental
+ group}.\index{fundamental!group} This group is an \emph{algebraic invariant} of a space, which
+can be used to investigate whether two spaces are \emph{homotopy
+ equivalent} (there are continuous maps back and forth whose composites
+are homotopic to the identity), because equivalent spaces have
+isomorphic fundamental groups.
+
+Because homotopies are themselves a kind of 2-dimensional path, there is
+a natural notion of 3-dimensional \emph{homotopy between homotopies},\index{path!3-}
+and then \emph{homotopy between homotopies between homotopies}, and so
+on. This infinite tower of points, paths, homotopies, homotopies between
+homotopies, \ldots, equipped with algebraic operations such as the
+fundamental group, is an instance of an algebraic structure called a
+(weak) \emph{$\infty$-groupoid}. An $\infty$-groupoid\index{.infinity-groupoid@$\infty$-groupoid} consists of a
+collection of objects, and then a collection of \emph{morphisms}\indexdef{morphism!in an .infinity-groupoid@in an $\infty$-groupoid} between
+objects, and then \emph{morphisms between morphisms}, and so on,
+equipped with some complex algebraic structure; a morphism at level $k$ is called a \define{$k$-morphism}\indexdef{k-morphism@$k$-morphism}. Morphisms at each level
+have identity, composition, and inverse operations, which are weak in
+the sense that they satisfy the groupoid laws (associativity of
+composition, identity is a unit for composition, inverses cancel) only
+up to morphisms at the next level, and this weakness gives rise to
+further structure. For example, because associativity of composition of
+morphisms $p \ct (q \ct r) = (p \ct q) \ct r$ is itself a
+higher-dimensional morphism, one needs an additional operation relating
+various proofs of associativity: the various ways to reassociate $p \ct
+(q \ct (r \ct s))$ into $((p \ct q) \ct r) \ct s$ give rise to Mac
+Lane's pentagon\index{pentagon, Mac Lane}. Weakness also creates non-trivial interactions between
+levels.
+
+Every topological space $X$ has a \emph{fundamental $\infty$-groupoid}
+\index{.infinity-groupoid@$\infty$-groupoid!fundamental}
+\index{fundamental!.infinity-groupoid@$\infty$-groupoid}
+whose
+$k$-mor\-ph\-isms are the $k$-dimen\-sional paths in $X$. The weakness of the
+$\infty$-group\-oid corresponds directly to the fact that paths form a
+group only up to homotopy, with the $(k+1)$-paths serving as the
+homotopies between the $k$-paths. Moreover, the view of a space as an
+$\infty$-groupoid preserves enough aspects of the space to do homotopy theory:
+the fundamental $\infty$-groupoid construction is adjoint\index{adjoint!functor} to the
+geometric\index{geometric realization} realization of an $\infty$-groupoid as a space, and this
+adjunction preserves homotopy theory (this is called the \emph{homotopy
+ hypothesis/theorem},
+\index{hypothesis!homotopy}
+\index{homotopy!hypothesis}
+because whether it is a hypothesis or theorem
+depends on how you define $\infty$-groupoid). For example, you can
+easily define the fundamental group of an $\infty$-groupoid, and if you
+calculate the fundamental group of the fundamental $\infty$-groupoid of
+a space, it will agree with the classical definition of fundamental
+group of that space. Because of this correspondence, homotopy theory
+and higher-dimensional category theory are intimately related.
+
+\index{classical!homotopy theory|)}%
+\index{higher category theory|)}%
+
+\mentalpause
+
+Now, in homotopy type theory each type can be seen to have the structure
+of an $\infty$-groupoid. Recall that for any type $A$, and any $x,y:A$,
+we have an identity type $\id[A]{x}{y}$, also written $\idtype[A]{x}{y}$
+or just $x=y$. Logically, we may think of elements of $x=y$ as evidence
+that $x$ and $y$ are equal, or as identifications of $x$ with
+$y$. Furthermore, type theory (unlike, say, first-order logic) allows us
+to consider such elements of $\id[A]{x}{y}$ also as individuals which
+may be the subjects of further propositions. Therefore, we can
+\emph{iterate} the identity type: we can form the type
+$\id[{(\id[A]{x}{y})}]{p}{q}$ of identifications between
+identifications $p,q$, and the type
+$\id[{(\id[{(\id[A]{x}{y})}]{p}{q})}]{r}{s}$, and so on. The structure
+of this tower of identity types corresponds precisely to that of the
+continuous paths and (higher) homotopies between them in a space, or an
+$\infty$-groupoid.\index{.infinity-groupoid@$\infty$-groupoid}
+
+
+Thus, we will frequently refer to an element $p : \id[A]{x}{y}$ as
+a \define{path}
+\index{path}
+from $x$ to $y$; we call $x$ its \define{start point}
+\indexdef{start point of a path}
+\indexdef{path!start point of}
+and $y$ its \define{end point}.
+\indexdef{end point of a path}
+\indexdef{path!end point of}
+Two paths $p,q : \id[A]{x}{y}$ with the same start and end point are said to be \define{parallel},
+\indexdef{parallel paths}
+\indexdef{path!parallel}
+in which case an element $r : \id[{(\id[A]{x}{y})}]{p}{q}$ can
+be thought of as a homotopy, or a morphism between morphisms;
+we will often refer to it as a \define{2-path}
+\indexdef{path!2-}\indexsee{2-path}{path, 2-}%
+or a \define{2-dimensional path}.
+\index{dimension!of paths}%
+\indexsee{2-dimensional path}{path, 2-}\indexsee{path!2-dimensional}{path, 2-}%
+Similarly, $\id[{(\id[{(\id[A]{x}{y})}]{p}{q})}]{r}{s}$ is the type of
+\define{3-dimensional paths}
+\indexdef{path!3-}\indexsee{3-path}{path, 3-}\indexsee{3-dimensional path}{path, 3-}\indexsee{path!3-dimensional}{path, 3-}%
+between two parallel 2-dimensional paths, and so on. If the
+type $A$ is ``set-like'', such as \nat, these iterated identity types
+will be uninteresting (see \cref{sec:basics-sets}), but in the
+general case they can model non-trivial homotopy types.
+
+%% (Obviously, the
+%% notation ``$\id[A]{x}{y}$'' has its limitations here. The style
+%% $\idtype[A]{x}{y}$ is only slightly better in iterations:
+%% $\idtype[{\idtype[{\idtype[A]{x}{y}}]{p}{q}}]{r}{s}$.)
+
+An important difference between homotopy type theory and classical homotopy theory is that homotopy type theory provides a \emph{synthetic}
+\index{synthetic mathematics}%
+\index{geometry, synthetic}%
+\index{Euclid of Alexandria}%
+description of spaces, in the following sense. Synthetic geometry is geometry in the style of Euclid~\cite{Euclid}: one starts from some basic notions (points and lines), constructions (a line connecting any two points), and axioms
+(all right angles are equal), and deduces consequences logically. This is in contrast with analytic
+\index{analytic mathematics}%
+geometry, where notions such as points and lines are represented concretely using cartesian coordinates in $\R^n$---lines are sets of points---and the basic constructions and axioms are derived from this representation. While classical homotopy theory is analytic (spaces and paths are made of points), homotopy type theory is synthetic: points, paths, and paths between paths are basic, indivisible, primitive notions.
+
+Moreover, one of the amazing things about homotopy type theory is that all of the basic constructions and axioms---all of the
+higher groupoid structure---arises automatically from the induction
+principle for identity types.
+Recall from \cref{sec:identity-types} that this says that if
+\begin{itemize}
+\item for every $x,y:A$ and every $p:\id[A]xy$ we have a type $D(x,y,p)$, and
+\item for every $a:A$ we have an element $d(a):D(a,a,\refl a)$,
+\end{itemize}
+then
+\begin{itemize}
+\item there exists an element $\indid{A}(D,d,x,y,p):D(x,y,p)$ for \emph{every} two elements $x,y:A$ and $p:\id[A]xy$, such that $\indid{A}(D,d,a,a,\refl a) \jdeq d(a)$.
+\end{itemize}
+In other words, given dependent functions
+\begin{align*}
+D & :\prd{x,y:A} (\id{x}{y}) \to \type\\
+d & :\prd{a:A} D(a,a,\refl{a})
+\end{align*}
+there is a dependent function
+\[\indid{A}(D,d):\prd{x,y:A}{p:\id{x}{y}} D(x,y,p)\]
+such that
+\begin{equation}\label{eq:Jconv}
+\indid{A}(D,d,a,a,\refl{a})\jdeq d(a)
+\end{equation}
+for every $a:A$.
+Usually, every time we apply this induction rule we will either not care about the specific function being defined, or we will immediately give it a different name.
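+
+Readers experimenting with a proof assistant may find it helpful to see the induction principle transcribed. The following Lean 4 sketch (the name \texttt{J} is our own; note that Lean's built-in identity type lives in \texttt{Prop} and is definitionally proof-irrelevant, so it does not model the higher structure discussed in this chapter) illustrates only the formal shape of the rule and its conversion rule:
+
+```lean
+-- Path induction (the J rule), sketched in Lean 4.
+-- `J` is our own name; Lean's `=` is Prop-valued and proof-irrelevant,
+-- so this illustrates only the formal shape of the rule.
+def J {A : Type} (D : (x y : A) → x = y → Type)
+    (d : (a : A) → D a a rfl) :
+    (x y : A) → (p : x = y) → D x y p :=
+  fun x y p => match p with | rfl => d x
+
+-- The conversion rule: on `rfl`, `J` computes to the given `d`.
+example {A : Type} (D : (x y : A) → x = y → Type)
+    (d : (a : A) → D a a rfl) (a : A) :
+    J D d a a rfl = d a := rfl
+```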
+
+Informally, the induction principle for identity types says that if we want to construct an object (or prove a statement) which depends on an inhabitant $p:\id[A]xy$ of an identity type, then it suffices to perform the construction (or the proof) in the special case when $x$ and $y$ are the same (judgmentally) and $p$ is the reflexivity element $\refl{x}:x=x$ (judgmentally).
+When writing informally, we may express this with a phrase such as ``by induction, it suffices to assume\dots''.
+This reduction to the ``reflexivity case'' is analogous to the reduction to the ``base case'' and ``inductive step'' in an ordinary proof by induction on the natural numbers, and also to the ``left case'' and ``right case'' in a proof by case analysis on a disjoint union or disjunction.\index{induction principle!for identity type}%
+
+
+The ``conversion rule''~\eqref{eq:Jconv} is less familiar in the context of proof by induction on natural numbers, but there is an analogous notion in the related concept of definition by recursion.
+If a sequence\index{sequence} $(a_n)_{n\in \mathbb{N}}$ is defined by giving $a_0$ and specifying $a_{n+1}$ in terms of $a_n$, then in fact the $0^{\mathrm{th}}$ term of the resulting sequence \emph{is} the given one, and the given recurrence relation relating $a_{n+1}$ to $a_n$ holds for the resulting sequence.
+(This may seem so obvious as to not be worth saying, but if we view a definition by recursion as an algorithm\index{algorithm} for calculating values of a sequence, then it is precisely the process of executing that algorithm.)
+The rule~\eqref{eq:Jconv} is analogous: it says that if we define an object $f(p)$ for all $p:x=y$ by specifying what the value should be when $p$ is $\refl{x}:x=x$, then the value we specified is in fact the value of $f(\refl{x})$.
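+
+The analogy with recursion can be checked concretely. In the following Lean 4 sketch (the sequence \texttt{a} is our own example), the given base value and recurrence hold for the resulting sequence by computation, just as the conversion rule~\eqref{eq:Jconv} holds judgmentally:
+
+```lean
+-- A sequence defined by recursion: a₀ given, aₙ₊₁ specified in terms of aₙ.
+def a : Nat → Nat
+  | 0     => 5         -- the given 0th term
+  | n + 1 => 2 * a n   -- the given recurrence
+
+-- The 0th term *is* the given one, and values compute by the recurrence.
+example : a 0 = 5 := rfl
+example : a 3 = 40 := rfl
+```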
+
+This induction principle endows each type with the structure of an $\infty$-groupoid\index{.infinity-groupoid@$\infty$-groupoid}, and each function between two types with the structure of an $\infty$-functor\index{.infinity-functor@$\infty$-functor} between two such groupoids. This is interesting from a mathematical point of view, because it gives a new way to work with
+$\infty$-groupoids. It is interesting from a type-theoretic point of view, because it reveals new operations that are associated with each type and function. In the remainder of this chapter, we begin to explore this structure.
+
+\section{Types are higher groupoids}
+\label{sec:equality}
+
+\index{type!identity|(}%
+\index{path|(}%
+\index{.infinity-groupoid@$\infty$-groupoid!structure of a type|(}%
+We now derive from the induction principle the beginnings of the structure of a higher groupoid.
+We begin with symmetry of equality, which, in topological language, means that ``paths can be reversed''.
+
+\begin{lem}\label{lem:opp}
+ For every type $A$ and every $x,y:A$ there is a function
+ \begin{equation*}
+ (x= y)\to(y= x)
+ \end{equation*}
+ denoted $p\mapsto \opp{p}$, such that $\opp{\refl{x}}\jdeq\refl{x}$ for each $x:A$.
+ We call $\opp{p}$ the \define{inverse} of $p$.
+ \indexdef{path!inverse}%
+ \indexdef{inverse!of path}%
+  \index{equality!symmetry of}%
+ \index{symmetry!of equality}%
+\end{lem}
+
+Since this is our first time stating something as a ``Lemma'' or ``Theorem'', let us pause to consider what that means.
+Recall that propositions (statements susceptible to proof) are identified with types, whereas lemmas and theorems (statements that have been proven) are identified with \emph{inhabited} types.
+Thus, the statement of a lemma or theorem should be translated into a type, as in \cref{sec:pat}, and its proof translated into an inhabitant of that type.
+According to the interpretation of the universal quantifier ``for every'', the type corresponding to \cref{lem:opp} is
+\[ \prd{A:\UU}{x,y:A} (x= y)\to(y= x). \]
+The proof of \cref{lem:opp} will consist of constructing an element of this type, i.e.\ deriving the judgment $f:\prd{A:\UU}{x,y:A} (x= y)\to(y= x)$ for some $f$.
+We then introduce the notation $\opp{(\blank)}$ for this element $f$, in which the arguments $A$, $x$, and $y$ are omitted and inferred from context.
+(As remarked in \cref{sec:types-vs-sets}, the secondary statement ``$\opp{\refl{x}}\jdeq\refl{x}$ for each $x:A$'' should be regarded as a separate judgment.)
+
+\begin{proof}[First proof]
+ Assume given $A:\UU$, and
+ let $D:\prd{x,y:A}(x= y) \to \type$ be the type family defined by $D(x,y,p)\defeq (y= x)$.
+ In other words, $D$ is a function assigning to any $x,y:A$ and $p:x=y$ a type, namely the type $y=x$.
+ Then we have an element
+ \begin{equation*}
+ d\defeq \lam{x} \refl{x}:\prd{x:A} D(x,x,\refl{x}).
+ \end{equation*}
+ Thus, the induction principle for identity types gives us an element
+ \narrowequation{ \indid{A}(D,d,x,y,p): (y= x)}
+ for each $p:(x= y)$.
+ We can now define the desired function $\opp{(\blank)}$ to be $\lam{p} \indid{A}(D,d,x,y,p)$, i.e.\ we set $\opp{p} \defeq \indid{A}(D,d,x,y,p)$.
+ The conversion rule~\eqref{eq:Jconv} gives $\opp{\refl{x}}\jdeq \refl{x}$, as required.
+\end{proof}
+
+We have written out this proof in a very formal style, which may be helpful while the induction rule on identity types is unfamiliar.
+To be even more formal, we could say that \cref{lem:opp} and its proof together consist of the judgment
+\begin{narrowmultline*}
+ \lam{A}{x}{y}{p} \indid{A}((\lam{x}{y}{p} (y=x)), (\lam{x} \refl{x}), x, y, p)
+ \narrowbreak : \prd{A:\UU}{x,y:A} (x= y)\to(y= x)
+\end{narrowmultline*}
+(along with an additional equality judgment).
+However, eventually we prefer to use more natural language, such as in the following equivalent proof.
+
+\begin{proof}[Second proof]
+ We want to construct, for each $x,y:A$ and $p:x=y$, an element $\opp{p}:y=x$.
+ By induction, it suffices to do this in the case when $y$ is $x$ and $p$ is $\refl{x}$.
+ But in this case, the type $x=y$ of $p$ and the type $y=x$ in which we are trying to construct $\opp{p}$ are both simply $x=x$.
+ Thus, in the ``reflexivity case'', we can define $\opp{\refl{x}}$ to be simply $\refl{x}$.
+ The general case then follows by the induction principle, and the conversion rule $\opp{\refl{x}}\jdeq\refl{x}$ is precisely the proof in the reflexivity case that we gave.
+\end{proof}
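+
+In a proof assistant, the second proof becomes a one-line definition by matching on reflexivity. As a sketch in Lean 4 (the name \texttt{symm'} is our own, and Lean's \texttt{=} is proof-irrelevant, so only the formal pattern is illustrated):
+
+```lean
+-- Inversion of paths, by induction: it suffices to treat the case `rfl`.
+def symm' {A : Type} {x y : A} (p : x = y) : y = x :=
+  match p with | rfl => rfl
+
+-- The conversion rule (refl x)⁻¹ ≡ refl x holds by computation.
+example {A : Type} (x : A) : symm' (rfl : x = x) = rfl := rfl
+```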
+
+We will write out the next few proofs in both styles, to help the reader become accustomed to the latter one.
+Next we prove the transitivity of equality, or equivalently we ``concatenate paths''.
+
+\begin{lem}\label{lem:concat}
+ For every type $A$ and every $x,y,z:A$ there is a function
+ \begin{equation*}
+ (x= y) \to (y= z)\to (x= z),
+ \end{equation*}
+ written $p \mapsto q \mapsto p\ct q$, such that $\refl{x}\ct \refl{x}\jdeq \refl{x}$ for any $x:A$.
+ We call $p\ct q$ the \define{concatenation} or \define{composite} of $p$ and $q$.
+ \indexdef{path!concatenation}%
+ \indexdef{path!composite}%
+ \indexdef{concatenation of paths}%
+ \indexdef{composition!of paths}%
+ \index{equality!transitivity of}%
+ \index{transitivity!of equality}%
+\end{lem}
+
+Note that we choose to notate path concatenation in the opposite order from function composition: from $p:x=y$ and $q:y=z$ we get $p\ct q : x=z$, whereas from $f:A\to B$ and $g:B\to C$ we get $g\circ f : A\to C$ (see \cref{ex:composition}).
+
+\begin{proof}[First proof]
+ The desired function has type $\prd{x,y,z:A} (x= y) \to (y= z)\to (x= z)$.
+ We will instead define a function with the equivalent type $\prd{x,y:A} (x= y) \to \prd{z:A} (y= z)\to (x= z)$, which allows us to apply path induction twice.
+ Let $D:\prd{x,y:A} (x=y) \to \type$ be the type family
+ \begin{equation*}
+ D(x,y,p)\defeq \prd{z:A}{q:y=z} (x=z).
+ \end{equation*}
+ Note that $D(x,x,\refl x) \jdeq \prd{z:A}{q:x=z} (x=z)$.
+ Thus, in order to apply the induction principle for identity types to this $D$, we need a function of type
+ \begin{equation}\label{eq:concatD}
+ \prd{x:A} D(x,x,\refl{x})
+ \end{equation}
+ which is to say, of type
+ \[ \prd{x,z:A}{q:x=z} (x=z). \]
+ Now let $E:\prd{x,z:A}{q:x=z}\type$ be the type family $E(x,z,q)\defeq (x=z)$.
+ Note that $E(x,x,\refl x) \jdeq (x=x)$.
+ Thus, we have the function
+ \begin{equation*}
+ e(x) \defeq \refl{x} : E(x,x,\refl{x}).
+ \end{equation*}
+ By the induction principle for identity types applied to $E$, we obtain a function
+ \begin{equation*}
+ d : \prd{x,z:A}{q:x=z} E(x,z,q).
+ \end{equation*}
+ But $E(x,z,q)\jdeq (x=z)$, so the type of $d$ is~\eqref{eq:concatD}.
+ Thus, we can use this function $d$ and apply the induction principle for identity types to $D$, to obtain our desired function of type
+ \begin{equation*}
+ \prd{x,y:A} (x= y) \to \prd{z:A} (y= z)\to (x= z)
+ \end{equation*}
+  and hence $\prd{x,y,z:A} (x=y) \to (y=z) \to (x=z)$.
+ The conversion rules for the two induction principles give us $\refl{x}\ct \refl{x}\jdeq \refl{x}$ for any $x:A$.
+\end{proof}
+
+\begin{proof}[Second proof]
+ We want to construct, for every $x,y,z:A$ and every $p:x=y$ and $q:y=z$, an element of $x=z$.
+ By induction on $p$, it suffices to assume that $y$ is $x$ and $p$ is $\refl{x}$.
+ In this case, the type $y=z$ of $q$ is $x=z$.
+ Now by induction on $q$, it suffices to assume also that $z$ is $x$ and $q$ is $\refl{x}$.
+ But in this case, $x=z$ is $x=x$, and we have $\refl{x}:(x=x)$.
+\end{proof}
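+
+The double induction of the second proof likewise transcribes directly. As a Lean 4 sketch (our own name \texttt{trans'}; the same proof-irrelevance caveat applies):
+
+```lean
+-- Concatenation of paths, by induction on both paths.
+def trans' {A : Type} {x y z : A} (p : x = y) (q : y = z) : x = z :=
+  match p, q with | rfl, rfl => rfl
+
+-- The conversion rule refl x ∙ refl x ≡ refl x holds by computation.
+example {A : Type} (x : A) : trans' (rfl : x = x) rfl = rfl := rfl
+```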
+
+The reader may well feel that we have given an overly convoluted proof of this lemma.
+In fact, we could stop after the induction on $p$, since at that point what we want to produce is an equality $x=z$, and we already have such an equality, namely $q$.
+Why do we go on to do another induction on $q$?
+
+The answer is that, as described in the introduction, we are doing \emph{proof-relevant} mathematics.
+\index{mathematics!proof-relevant}%
+When we prove a lemma, we are defining an inhabitant of some type, and it can matter what \emph{specific} element we defined in the course of the proof, not merely the type inhabited by that element (that is, the \emph{statement} of the lemma).
+\cref{lem:concat} has three obvious proofs: we could do induction over $p$, induction over $q$, or induction over both of them.
+If we proved it three different ways, we would have three different elements of the same type.
+It's not hard to show that these three elements are equal (see \cref{ex:basics:concat}), but as they are not \emph{definitionally} equal, there can still be reasons to prefer one over another.
+
+In the case of \cref{lem:concat}, the difference hinges on the computation rule.
+If we proved the lemma using a single induction over $p$, then we would end up with a computation rule of the form $\refl{y} \ct q \jdeq q$.
+If we proved it with a single induction over $q$, we would have instead $p\ct\refl{y}\jdeq p$, while proving it with a double induction (as we did) gives only $\refl{x}\ct\refl{x} \jdeq \refl{x}$.
+
+\index{mathematics!formalized}%
+The asymmetrical computation rules can sometimes be convenient when doing formalized mathematics, as they allow the computer to simplify more things automatically.
+However, in informal mathematics, and arguably even in the formalized case, it can be confusing to have a concatenation operation which behaves asymmetrically and to have to remember which side is the ``special'' one.
+Treating both sides symmetrically makes for more robust proofs; this is why we have given the proof that we did.
+(However, this is admittedly a stylistic choice.)
+
+The table below summarizes the ``equality'', ``homotopical'', and ``higher-groupoid'' points of view on what we have done so far.
+\begin{center}
+ \medskip
+ \begin{tabular}{ccc}
+ \toprule
+ Equality & Homotopy & $\infty$-Groupoid\\
+ \midrule
+ reflexivity\index{equality!reflexivity of} & constant path & identity morphism\\
+ symmetry\index{equality!symmetry of} & inversion of paths & inverse morphism\\
+ transitivity\index{equality!transitivity of} & concatenation of paths & composition of morphisms\\
+ \bottomrule
+ \end{tabular}
+ \medskip
+\end{center}
+
+In practice, transitivity is often applied to prove an equality by a chain of intermediate steps.
+We will use the common notation for this, such as $a=b=c=d$.
+If the intermediate expressions are long, or we want to specify the witness of each equality, we may write
+\begin{align*}
+ a &= b & \text{(by $p$)}\\ &= c &\text{(by $q$)} \\ &= d &\text{(by $r$)}.
+\end{align*}
+In either case, the notation indicates construction of the element $(p\ct q)\ct r: (a=d)$.
+(We choose left-associativity for concreteness, although in view of \cref{thm:omg}\ref{item:omg4} below it makes little difference.)
+If it should happen that $b$ and $c$, say, are judgmentally equal, then we may write
+\begin{align*}
+ a &= b & \text{(by $p$)}\\ &\jdeq c \\ &= d &\text{(by $r$)}
+\end{align*}
+to indicate construction of $p\ct r : (a=d)$.
+We also follow common mathematical practice in not requiring the justifications in this notation (``by $p$'' and ``by $r$'') to supply the exact witness needed; instead we allow them to simply mention the most important (or least obvious) ingredient in constructing that witness.
+For instance, if ``Lemma A'' states that for all $x$ and $y$ we have $f(x)=g(y)$, then we may write ``by Lemma A'' as a justification for the step $f(a) = g(b)$, trusting the reader to deduce that we apply Lemma A with $x\defeq a$ and $y\defeq b$.
+We may also omit a justification entirely if we trust the reader to be able to guess it.
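+
+This chain notation has a direct analogue in proof assistants. For instance, a Lean 4 \texttt{calc} block (a sketch with hypothetical paths \texttt{p}, \texttt{q}, \texttt{r}) constructs exactly such a composite witness of $a=d$:
+
+```lean
+-- A chain of equalities, elaborated to a concatenation of the witnesses.
+example {A : Type} {a b c d : A}
+    (p : a = b) (q : b = c) (r : c = d) : a = d :=
+  calc a = b := p
+       _ = c := q
+       _ = d := r
+```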
+
+Now, because of proof-relevance, we can't stop after proving ``symmetry'' and ``transitivity'' of equality: we need to know that these \emph{operations} on equalities are well-behaved.
+(This issue is invisible in set theory, where symmetry and transitivity are mere \emph{properties} of equality, rather than structure on
+paths.)
+From the homotopy-theoretic point of view, concatenation and inversion are just the ``first level'' of higher groupoid structure --- we also need coherence\index{coherence} laws on these operations, and analogous operations at higher dimensions.
+For instance, we need to know that concatenation is \emph{associative}, and that inversion provides \emph{inverses} with respect to concatenation.
+
+\begin{lem}\label{thm:omg}%[The $\omega$-groupoid structure of types]
+ \index{associativity!of path concatenation}%
+ \index{unit!law for path concatenation}%
+ Suppose $A:\type$, that $x,y,z,w:A$ and that $p:x= y$ and $q:y = z$ and $r:z=w$.
+ We have the following:
+ \begin{enumerate}
+ \item $p= p\ct \refl{y}$ and $p = \refl{x} \ct p$.\label{item:omg1}
+ \item $\opp{p}\ct p= \refl{y}$ and $p\ct \opp{p}= \refl{x}$.\label{item:omg2}
+ \item $\opp{(\opp{p})}= p$.\label{item:omg3}
+ \item $p\ct (q\ct r)= (p\ct q)\ct r$.\label{item:omg4}
+ \end{enumerate}
+\end{lem}
+
+Note, in particular, that \ref{item:omg1}--\ref{item:omg4} are themselves propositional equalities, living in the identity types \emph{of} identity types, such as $p=_{x=y}q$ for $p,q:x=y$.
+Topologically, they are \emph{paths of paths}, i.e.\ homotopies.
+It is a familiar fact in topology that when we concatenate a path $p$ with the reversed path $\opp p$, we don't literally obtain a constant path (which corresponds to the equality $\refl{}$ in type theory) --- instead we have a homotopy, or higher path, from $p\ct\opp p$ to the constant path.
+
+\begin{proof}[Proof of~\cref{thm:omg}]
+ All the proofs use the induction principle for equalities.
+ \begin{enumerate}
+ \item \emph{First proof:} let $D:\prd{x,y:A} (x=y) \to \type$ be the type family given by
+ \begin{equation*}
+ D(x,y,p)\defeq (p= p\ct \refl{y}).
+ \end{equation*}
+ Then $D(x,x,\refl{x})$ is $\refl{x}=\refl{x}\ct\refl{x}$.
+ Since $\refl{x}\ct\refl{x}\jdeq\refl{x}$, it follows that $D(x,x,\refl{x})\jdeq (\refl{x}=\refl{x})$.
+ Thus, there is a function
+ \begin{equation*}
+ d\defeq\lam{x} \refl{\refl{x}}:\prd{x:A} D(x,x,\refl{x}).
+ \end{equation*}
+ Now the induction principle for identity types gives an element $\indid{A}(D,d,x,y,p):(p= p\ct\refl{y})$ for each $p:x= y$.
+ The other equality is proven similarly.
+
+ \mentalpause
+
+ \noindent
+ \emph{Second proof:} by induction on $p$, it suffices to assume that $y$ is $x$ and that $p$ is $\refl x$.
+ But in this case, we have $\refl{x}\ct\refl{x}\jdeq\refl{x}$.
+ \item \emph{First proof:} let $D:\prd{x,y:A} (x=y) \to \type$ be the type family given by
+ \begin{equation*}
+ D(x,y,p)\defeq (\opp{p}\ct p= \refl{y}).
+ \end{equation*}
+ Then $D(x,x,\refl{x})$ is $\opp{\refl{x}}\ct\refl{x}=\refl{x}$.
+ Since $\opp{\refl{x}}\jdeq\refl{x}$ and $\refl{x}\ct\refl{x}\jdeq\refl{x}$, we get that $D(x,x,\refl{x})\jdeq (\refl{x}=\refl{x})$.
+ Hence we find the function
+ \begin{equation*}
+ d\defeq\lam{x} \refl{\refl{x}}:\prd{x:A} D(x,x,\refl{x}).
+ \end{equation*}
+ Now path induction gives an element $\indid{A}(D,d,x,y,p):\opp{p}\ct p=\refl{y}$ for each $p:x= y$ in $A$.
+ The other equality is similar.
+
+ \mentalpause
+
+ \noindent \emph{Second proof:} by induction, it suffices to assume $p$ is $\refl x$.
+ But in this case, we have $\opp{p} \ct p \jdeq \opp{\refl x} \ct \refl x \jdeq \refl x$.
+
+ \item \emph{First proof:} let $D:\prd{x,y:A} (x=y) \to \type$ be the type family given by
+ \begin{equation*}
+ D(x,y,p)\defeq (\opp{\opp{p}}= p).
+ \end{equation*}
+ Then $D(x,x,\refl{x})$ is the type $(\opp{\opp{\refl x}}=\refl{x})$.
+ But since $\opp{\refl{x}}\jdeq \refl{x}$ for each $x:A$, we have $\opp{\opp{\refl{x}}}\jdeq \opp{\refl{x}} \jdeq\refl{x}$, and thus $D(x,x,\refl{x})\jdeq(\refl{x}=\refl{x})$.
+ Hence we find the function
+ \begin{equation*}
+ d\defeq\lam{x} \refl{\refl{x}}:\prd{x:A} D(x,x,\refl{x}).
+ \end{equation*}
+ Now path induction gives an element $\indid{A}(D,d,x,y,p):\opp{\opp{p}}= p$ for each $p:x= y$.
+
+ \mentalpause
+
+ \noindent \emph{Second proof:} by induction, it suffices to assume $p$ is $\refl x$.
+ But in this case, we have $\opp{\opp{p}}\jdeq \opp{\opp{\refl x}} \jdeq \refl x$.
+
+ \item \emph{First proof:} let $D_1:\prd{x,y:A} (x=y) \to \type$ be the type family given by
+ \begin{equation*}
+ D_1(x,y,p)\defeq\prd{z,w:A}{q:y= z}{r:z= w} \big(p\ct (q\ct r)= (p\ct q)\ct r\big).
+ \end{equation*}
+ Then $D_1(x,x,\refl{x})$ is
+ \begin{equation*}
+ \prd{z,w:A}{q:x= z}{r:z= w} \big(\refl{x}\ct(q\ct r)= (\refl{x}\ct q)\ct r\big).
+ \end{equation*}
+ To construct an element of this type, let $D_2:\prd{x,z:A} (x=z) \to \type$ be the type family
+ \begin{equation*}
+ D_2 (x,z,q) \defeq \prd{w:A}{r:z=w} \big(\refl{x}\ct(q\ct r)= (\refl{x}\ct q)\ct r\big).
+ \end{equation*}
+ Then $D_2(x,x,\refl{x})$ is
+ \begin{equation*}
+ \prd{w:A}{r:x=w} \big(\refl{x}\ct(\refl{x}\ct r)= (\refl{x}\ct \refl{x})\ct r\big).
+ \end{equation*}
+ To construct an element of \emph{this} type, let $D_3:\prd{x,w:A} (x=w) \to \type$ be the type family
+ \begin{equation*}
+ D_3(x,w,r) \defeq \big(\refl{x}\ct(\refl{x}\ct r)= (\refl{x}\ct \refl{x})\ct r\big).
+ \end{equation*}
+ Then $D_3(x,x,\refl{x})$ is
+ \begin{equation*}
+ \big(\refl{x}\ct(\refl{x}\ct \refl{x})= (\refl{x}\ct \refl{x})\ct \refl{x}\big)
+ \end{equation*}
+ which is definitionally equal to the type $(\refl{x} = \refl{x})$, and is therefore inhabited by $\refl{\refl{x}}$.
+ Applying the path induction rule three times, therefore, we obtain an element of the overall desired type.
+
+ \mentalpause
+
+ \noindent \emph{Second proof:} by induction, it suffices to assume $p$, $q$, and $r$ are all $\refl x$.
+ But in this case, we have
+ \begin{align*}
+ p\ct (q\ct r)
+ &\jdeq \refl{x}\ct(\refl{x}\ct \refl{x})\\
+ &\jdeq \refl{x}\\
+ &\jdeq (\refl{x}\ct \refl x)\ct \refl x\\
+ &\jdeq (p\ct q)\ct r.
+ \end{align*}
+ Thus, we have $\refl{\refl{x}}$ inhabiting this type. \qedhere
+ \end{enumerate}
+\end{proof}
+
+\begin{rmk}
+ There are other ways to define these higher paths.
+ For instance, in \cref{thm:omg}\ref{item:omg4} we might do induction only over one or two paths rather than all three.
+ Each possibility will produce a \emph{definitionally} different proof, but they will all be equal to each other.
+ Such an equality between any two particular proofs can, again, be proven by induction, reducing all the paths in question to reflexivities and then observing that both proofs reduce themselves to reflexivities.
+\end{rmk}
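+
+The second-style proofs of \cref{thm:omg} transcribe almost verbatim into Lean 4, where \texttt{cases p} plays the role of path induction. This is only a sketch: since Lean's \texttt{=} is proof-irrelevant, these equalities between proofs are in fact trivial there, unlike in homotopy type theory.
+
+```lean
+section
+variable {A : Type} {x y z w : A}
+
+example (p : x = y) : p = p.trans rfl := by cases p; rfl        -- unit law
+example (p : x = y) : p.symm.trans p = rfl := by cases p; rfl   -- inverse law
+example (p : x = y) : p.symm.symm = p := by cases p; rfl        -- involution
+example (p : x = y) (q : y = z) (r : z = w) :
+    p.trans (q.trans r) = (p.trans q).trans r := by             -- associativity
+  cases p; cases q; cases r; rfl
+
+end
+```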
+
+In view of \cref{thm:omg}\ref{item:omg4}, we will often write $p\ct q\ct r$ for $(p\ct q)\ct r$, and similarly $p\ct q\ct r \ct s$ for $((p\ct q)\ct r)\ct s$ and so on.
+We choose left-associativity for definiteness, but it makes no real difference.
+We generally trust the reader to insert instances of \cref{thm:omg}\ref{item:omg4} to reassociate such expressions as necessary.
+
+We are still not really done with the higher groupoid structure: the paths~\ref{item:omg1}--\ref{item:omg4} must also satisfy their own higher coherence\index{coherence} laws, which are themselves higher paths,
+\index{associativity!of path concatenation!coherence of}%
+\index{globular operad}%
+\index{operad}%
+\index{groupoid!higher}%
+and so on ``all the way up to infinity'' (this can be made precise using e.g.\ the notion of a globular operad).
+However, for most purposes it is unnecessary to make the whole infinite-dimensional structure explicit.
+One of the nice things about homotopy type theory is that all of this structure can be \emph{proven} starting from only the inductive property of identity types, so we can make explicit as much or as little of it as we need.
+
+In particular, in this book we will not need any of the complicated combinatorics involved in making precise notions such as ``coherent structure at all higher levels''.
+In addition to ordinary paths, we will use paths of paths (i.e.\ elements of a type $p =_{x=_A y} q$), which as remarked previously we call \emph{2-paths}\index{path!2-} or \emph{2-dimensional paths}, and perhaps occasionally paths of paths of paths (i.e.\ elements of a type $r = _{p =_{x=_A y} q} s$), which we call \emph{3-paths}\index{path!3-} or \emph{3-dimensional paths}.
+It is possible to define a general notion of \emph{$n$-dimensional path}
+\indexdef{path!n-@$n$-}%
+\indexsee{n-path@$n$-path}{path, $n$-}%
+\indexsee{n-dimensional path@$n$-dimensional path}{path, $n$-}%
+\indexsee{path!n-dimensional@$n$-dimensional}{path, $n$-}%
+(see \cref{ex:npaths}), but we will not need it.
+
+We will, however, use one particularly important and simple case of higher paths, which is when the start and end points are the same.
+In set theory, the proposition $a=a$ is entirely uninteresting, but in homotopy theory, paths from a point to itself are called \emph{loops}\index{loop} and carry lots of interesting higher structure.
+Thus, given a type $A$ with a point $a:A$, we define its \define{loop space}
+\index{loop space}%
+$\Omega(A,a)$ to be the type $\id[A]{a}{a}$.
+We may sometimes write simply $\Omega A$ if the point $a$ is understood from context.
+
+Since any two elements of $\Omega A$ are paths with the same start and end points, they can be concatenated;
+thus we have an operation $\Omega A\times \Omega A\to \Omega A$.
+More generally, the higher groupoid structure of $A$ gives $\Omega A$ the analogous structure of a ``higher group''.
+
+It can also be useful to consider the loop space\index{loop space!iterated}\index{iterated loop space} \emph{of} the loop space of $A$, which is the space of 2-dimensional loops on the identity loop at $a$.
+This is written $\Omega^2(A,a)$ and represented in type theory by the type $\id[({\id[A]{a}{a}})]{\refl{a}}{\refl{a}}$.
+While $\Omega^2(A,a)$, as a loop space, is again a ``higher group'', it now also has some additional structure resulting from the fact that its elements are 2-dimensional loops between 1-dimensional loops.
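+
+These loop-space constructions can be sketched in Lean 4. Since Lean's built-in \texttt{=} lives in \texttt{Prop}, we use a \texttt{Type}-valued identity type of our own, \texttt{Id}, to mirror the situation in homotopy type theory; all the names below (\texttt{Id}, \texttt{Pointed}, \texttt{Loop}, \texttt{iterLoop}) are ours.
+
+```lean
+-- A Type-valued identity type, mirroring the HoTT identity type
+-- (Lean's built-in `=` lives in Prop and is proof-irrelevant).
+inductive Id {A : Type} : A → A → Type
+  | refl (a : A) : Id a a
+
+-- A pointed type: a type together with a basepoint.
+structure Pointed where
+  carrier : Type
+  pt : carrier
+
+-- The loop space Ω(A,a), pointed by reflexivity.
+def Loop (X : Pointed) : Pointed :=
+  ⟨Id X.pt X.pt, Id.refl X.pt⟩
+
+-- The n-fold iterated loop space: Ω⁰(A,a) = (A,a), Ωⁿ⁺¹(A,a) = Ωⁿ(Ω(A,a)).
+def iterLoop : Nat → Pointed → Pointed
+  | 0,     X => X
+  | n + 1, X => iterLoop n (Loop X)
+```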
+
+\begin{thm}[Eckmann--Hilton]\label{thm:EckmannHilton}
+ The composition operation on the second loop space
+ %
+ \begin{equation*}
+ \Omega^2(A)\times \Omega^2(A)\to \Omega^2(A)
+ \end{equation*}
+ is commutative: $\alpha\ct\beta = \beta\ct\alpha$, for any $\alpha, \beta:\Omega^2(A)$.
+ \index{Eckmann--Hilton argument}%
+\end{thm}
+
+\begin{proof}
+First, observe that the composition of $1$-loops $\Omega A\times \Omega A\to \Omega A$ induces an operation
+\[
+\star : \Omega^2(A)\times \Omega^2(A)\to \Omega^2(A)
+\]
+as follows: consider elements $a, b, c : A$ and 1- and 2-paths,
+%
+\begin{align*}
+ p &: a = b, & r &: b = c \\
+ q &: a = b, & s &: b = c \\
+ \alpha &: p = q, & \beta &: r = s
+\end{align*}
+%
+as depicted in the following diagram (with paths drawn as arrows).
+% Changed this to xymatrix in the name of having uniform source code,
+% maybe the original using xy looked better (I think it was too big).
+% It is commented out below in case you want to reinstate it.
+\[
+ \xymatrix@+5em{
+ {a} \rtwocell<10>^p_q{\alpha}
+ &
+ {b} \rtwocell<10>^r_s{\beta}
+ &
+ {c}
+ }
+\]
+Composing the upper and lower 1-paths, respectively, we get two paths $p\ct r,\ q\ct s : a = c$, and there is then a ``horizontal composition''
+%
+\begin{equation*}
+ \alpha\hct\beta : p\ct r = q\ct s
+\end{equation*}
+%
+between them, defined as follows.
+First, we define $\alpha \rightwhisker r : p\ct r = q\ct r$ by path induction on $r$, so that
+\[ \alpha \rightwhisker \refl{b} \jdeq \opp{\mathsf{ru}_p} \ct \alpha \ct \mathsf{ru}_q \]
+where $\mathsf{ru}_p : p = p \ct \refl{b}$ is the right unit law from \cref{thm:omg}\ref{item:omg1}.
+We could similarly define $\rightwhisker$ by induction on $\alpha$, or on all paths in sight, resulting in different judgmental equalities, but for present purposes the definition by induction on $r$ will make things simpler.
+Similarly, we define $q\leftwhisker \beta : q\ct r = q\ct s$ by induction on $q$, so that
+\[ \refl{b} \leftwhisker \beta \jdeq \opp{\mathsf{lu}_r} \ct \beta \ct \mathsf{lu}_s \]
+where $\mathsf{lu}_r$ denotes the left unit law.
+The operations $\leftwhisker$ and $\rightwhisker$ are called \define{whiskering}\indexdef{whiskering}.
+Next, since $\alpha \rightwhisker r$ and $q\leftwhisker \beta$ are composable 2-paths, we can define the \define{horizontal composition}
+\indexdef{horizontal composition!of paths}%
+\indexdef{composition!of paths!horizontal}%
+by:
+\[
+\alpha\hct\beta\ \defeq\ (\alpha\rightwhisker r) \ct (q\leftwhisker \beta).
+\]
+Now suppose that $a \jdeq b \jdeq c$, so that all the 1-paths $p$, $q$, $r$, and $s$ are elements of $\Omega(A,a)$, and assume moreover that $p\jdeq q \jdeq r \jdeq s\jdeq \refl{a}$, so that $\alpha:\refl{a} = \refl{a}$ and $\beta:\refl{a} = \refl{a}$ are composable in both orders.
+In that case, we have
+\begin{align*}
+ \alpha\hct\beta
+ &\jdeq (\alpha\rightwhisker\refl{a}) \ct (\refl{a}\leftwhisker \beta)\\
+ &= \opp{\mathsf{ru}_{\refl{a}}} \ct \alpha \ct \mathsf{ru}_{\refl{a}} \ct \opp{\mathsf{lu}_{\refl a}} \ct \beta \ct \mathsf{lu}_{\refl{a}}\\
+ &\jdeq \opp{\refl{\refl{a}}} \ct \alpha \ct \refl{\refl{a}} \ct \opp{\refl{\refl a}} \ct \beta \ct \refl{\refl{a}}\\
+ &= \alpha \ct \beta.
+\end{align*}
+(Recall that $\mathsf{ru}_{\refl{a}} \jdeq \mathsf{lu}_{\refl{a}} \jdeq \refl{\refl{a}}$, by the computation rule for path induction.)
+On the other hand, we can define another horizontal composition analogously by
+\[
+\alpha\hct'\beta\ \defeq\ (p\leftwhisker \beta)\ct (\alpha\rightwhisker s)
+\]
+and we similarly learn that
+\[
+\alpha\hct'\beta = \beta\ct\alpha.
+\]
+\index{interchange law}%
+But, in general, the two ways of defining horizontal composition agree, $\alpha\hct\beta = \alpha\hct'\beta$, as we can see by induction on $\alpha$ and $\beta$ and then on the two remaining 1-paths, to reduce everything to reflexivity.
+Thus we have
+\[\alpha \ct \beta = \alpha\hct\beta = \alpha\hct'\beta = \beta\ct\alpha.
+\qedhere
+\]
+\end{proof}
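+
+The whiskering and horizontal-composition operations from the proof can also be transcribed. The following Lean 4 sketch uses our own names (\texttt{whiskerR}, \texttt{whiskerL}, \texttt{hcomp}); once again Lean's proof irrelevance collapses the 2-dimensional content, so only the typing of the operations is illustrated.
+
+```lean
+-- Right whiskering α ▷ r, by induction on r, as in the proof above.
+def whiskerR {A : Type} {a b c : A} {p q : a = b}
+    (α : p = q) (r : b = c) : p.trans r = q.trans r := by
+  cases r; cases α; rfl
+
+-- Left whiskering q ◁ β, by induction on q.
+def whiskerL {A : Type} {a b c : A} (q : a = b)
+    {r s : b = c} (β : r = s) : q.trans r = q.trans s := by
+  cases q; cases β; rfl
+
+-- Horizontal composition α ⋆ β ≔ (α ▷ r) ∙ (q ◁ β).
+def hcomp {A : Type} {a b c : A} {p q : a = b} {r s : b = c}
+    (α : p = q) (β : r = s) : p.trans r = q.trans s :=
+  (whiskerR α r).trans (whiskerL q β)
+```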
+
+The foregoing fact, which is known as the \emph{Eckmann--Hilton argument}, comes from classical homotopy theory, and indeed it is used in \cref{cha:homotopy} below to show that the higher homotopy groups of a type are always abelian\index{group!abelian} groups.
+The whiskering and horizontal composition operations defined in the proof are also a general part of the $\infty$-groupoid structure of types.
+They satisfy their own laws (up to higher homotopy), such as
+\[ \alpha \rightwhisker (p\ct q) = (\alpha \rightwhisker p) \rightwhisker q \]
+and so on.
+From now on, we trust the reader to apply path induction whenever needed to define further operations of this sort and verify their properties.
+
+As this example suggests, the algebra of higher path types is much more intricate than just the groupoid-like structure at each level; the levels interact to give many further operations and laws, as in the study of iterated loop spaces in homotopy theory.
+Indeed, as in classical homotopy theory, we can make the following general definitions:
+
+\begin{defn} \label{def:pointedtype}
+ A \define{pointed type}
+ \indexsee{pointed!type}{type, pointed}%
+ \indexdef{type!pointed}%
+ $(A,a)$ is a type $A:\type$ together with a point $a:A$, called its \define{basepoint}.
+ \indexdef{basepoint}%
+ We write $\pointed{\type} \defeq \sm{A:\type} A$ for the type of pointed types in the universe $\type$.
+\end{defn}
+
+\begin{defn} \label{def:loopspace}
+ Given a pointed type $(A,a)$, we define the \define{loop space}
+ \indexdef{loop space}%
+ of $(A,a)$ to be the following pointed type:
+ \[\Omega(A,a)\defeq ((\id[A]aa),\refl a).\]
+ An element of it will be called a \define{loop}\indexdef{loop} at $a$.
+ For $n:\N$, the \define{$n$-fold iterated loop space} $\Omega^{n}(A,a)$
+ \indexdef{loop space!iterated}%
+ \indexsee{loop space!n-fold@$n$-fold}{loop space, iterated}%
+ of a pointed type $(A,a)$ is defined recursively by:
+ \begin{align*}
+ \Omega^0(A,a)&\defeq(A,a)\\
+ \Omega^{n+1}(A,a)&\defeq\Omega^n(\Omega(A,a)).
+ \end{align*}
+ An element of it will be called an \define{$n$-loop}
+ \indexdef{loop!n-@$n$-}%
+ \indexsee{n-loop@$n$-loop}{loop, $n$-}%
+ or an \define{$n$-dimensional loop}
+ \indexsee{loop!n-dimensional@$n$-dimensional}{loop, $n$-}%
+ \indexsee{n-dimensional loop@$n$-dimensional loop}{loop, $n$-}%
+ at $a$.
+\end{defn}
+
+We will return to iterated loop spaces in \cref{cha:hlevels,cha:hits,cha:homotopy}.
+\index{.infinity-groupoid@$\infty$-groupoid!structure of a type|)}%
+\index{type!identity|)}
+\index{path|)}%
+
+\section{Functions are functors}
+\label{sec:functors}
+
+\index{function|(}%
+\index{functoriality of functions in type theory@``functoriality'' of functions in type theory}%
+Now we wish to establish that functions $f:A\to B$ behave functorially on paths.
+In traditional type theory, this is equivalently the statement that functions respect equality.
+\index{continuity of functions in type theory@``continuity'' of functions in type theory}%
+Topologically, this corresponds to saying that every function is ``continuous'', i.e.\ preserves paths.
+
+\begin{lem}\label{lem:map}
+ Suppose that $f:A\to B$ is a function.
+ Then for any $x,y:A$ there is an operation
+ \begin{equation*}
+ \apfunc f : (\id[A] x y) \to (\id[B] {f(x)} {f(y)}).
+ \end{equation*}
+ Moreover, for each $x:A$ we have $\apfunc{f}(\refl{x})\jdeq \refl{f(x)}$.
+ \indexdef{application!of function to a path}%
+ \indexdef{path!application of a function to}%
+ \indexdef{function!application to a path of}%
+ \indexdef{action!of a function on a path}%
+\end{lem}
+
+The notation $\apfunc f$ can be read either as the \underline{ap}plication of $f$ to a path, or as the \underline{a}ction on \underline{p}aths of $f$.
+
+\begin{proof}[First proof]
+ Let $D:\prd{x,y:A} (x=y) \to \type$ be the type family defined by
+ \[D(x,y,p)\defeq (f(x)= f(y)).\]
+ Then we have
+ \begin{equation*}
+ d\defeq\lam{x} \refl{f(x)}:\prd{x:A} D(x,x,\refl{x}).
+ \end{equation*}
+ By path induction, we obtain $\apfunc f : \prd{x,y:A} (x=y) \to (f(x)=f(y))$.
+ The computation rule implies $\apfunc f({\refl{x}})\jdeq\refl{f(x)}$ for each $x:A$.
+\end{proof}
+
+\begin{proof}[Second proof]
+ To define $\apfunc{f}(p)$ for all $p:x=y$, it suffices, by induction, to assume
+ $p$ is $\refl{x}$.
+ In this case, we may define $\apfunc f(p) \defeq \refl{f(x)}:f(x)= f(x)$.
+\end{proof}
+
+We will often write $\apfunc f (p)$ as simply $\ap f p$.
+This is, strictly speaking, ambiguous, but generally no confusion arises.
+It matches the common convention in category theory of using the same symbol for the application of a functor to objects and to morphisms.
+
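+As a concrete check, the second proof can be transcribed into a proof assistant.
+Below is a sketch in Lean 4, where matching on \texttt{rfl} plays the role of path induction; the name \texttt{ap} is the book's, but Lean's proof-irrelevant identity type makes this only an approximation of the type theory used here.
+
+```lean
+-- Action on paths, by path induction: it suffices to treat p ≡ rfl.
+def ap {A B : Type} (f : A → B) {x y : A} (p : x = y) : f x = f y :=
+  match p with
+  | rfl => rfl
+
+-- The computation rule ap f rfl ≡ rfl holds definitionally:
+example {A B : Type} (f : A → B) (x : A) : ap f (rfl : x = x) = rfl := rfl
+```
+
+Lean's standard library already provides this operation under the name \texttt{congrArg}.
+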
+We note that $\apfunc{}$ behaves functorially, in all the ways that one might expect.
+
+\begin{lem}\label{lem:ap-functor}
+ For functions $f:A\to B$ and $g:B\to C$ and paths $p:\id[A]xy$ and $q:\id[A]yz$, we have:
+ \begin{enumerate}
+ \item $\apfunc f(p\ct q) = \apfunc f(p) \ct \apfunc f(q)$.\label{item:apfunctor-ct}
+ \item $\apfunc f(\opp p) = \opp{\apfunc f (p)}$.\label{item:apfunctor-opp}
+ \item $\apfunc g (\apfunc f(p)) = \apfunc{g\circ f} (p)$.\label{item:apfunctor-compose}
+ \item $\apfunc {\idfunc[A]} (p) = p$.
+ \end{enumerate}
+\end{lem}
+\begin{proof}
+ Left to the reader.
+\end{proof}
+\index{function|)}%
+
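+The four equalities in \cref{lem:ap-functor} all follow the same pattern: induct on the path(s), after which both sides reduce to reflexivity.
+A Lean 4 sketch, writing \texttt{congrArg} for $\apfunc f$ (the lemma names are ours; Lean's identity type is proof-irrelevant, so these statements are weaker there than in homotopy type theory, but the proofs have the expected shape):
+
+```lean
+theorem ap_concat {A B : Type} (f : A → B) {x y z : A}
+    (p : x = y) (q : y = z) :
+    congrArg f (p.trans q) = (congrArg f p).trans (congrArg f q) := by
+  cases p; cases q; rfl
+
+theorem ap_inv {A B : Type} (f : A → B) {x y : A} (p : x = y) :
+    congrArg f p.symm = (congrArg f p).symm := by
+  cases p; rfl
+
+theorem ap_comp {A B C : Type} (f : A → B) (g : B → C) {x y : A}
+    (p : x = y) :
+    congrArg g (congrArg f p) = congrArg (g ∘ f) p := by
+  cases p; rfl
+
+theorem ap_id {A : Type} {x y : A} (p : x = y) :
+    congrArg id p = p := by
+  cases p; rfl
+```
+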
+As was the case for the equalities in \cref{thm:omg}, those in \cref{lem:ap-functor} are themselves paths, which satisfy their own coherence laws (which can be proved in the same way), and so on.
+
+
+\section{Type families are fibrations}
+\label{sec:fibrations}
+
+\index{type!family of|(}%
+\index{transport|(defstyle}%
+Since \emph{dependently typed} functions are essential in type theory, we will also need a version of \cref{lem:map} for these.
+However, this is not quite so simple to state, because if $f:\prd{x:A} B(x)$ and $p:x=y$, then $f(x):B(x)$ and $f(y):B(y)$ are elements of distinct types, so that \emph{a priori} we cannot even ask whether they are equal.
+The missing ingredient is that $p$ itself gives us a way to relate the types $B(x)$ and $B(y)$.
+
+We have already seen this in \cref{sec:identity-types}, where we called it ``indiscernibility of identicals''.
+\index{indiscernibility of identicals}%
+We now introduce a different name and notation for it that we will use from now on.
+
+\begin{lem}[Transport]\label{lem:transport}
+ Suppose that $P$ is a type family over $A$ and that $p:\id[A]xy$.
+ Then there is a function $\transf{p}:P(x)\to P(y)$.
+\end{lem}
+
+\begin{proof}[First proof]
+ Let $D:\prd{x,y:A} (\id{x}{y}) \to \type$ be the type family defined by
+ \[D(x,y,p)\defeq P(x)\to P(y).\]
+ Then we have the function
+ \begin{equation*}
+ d\defeq\lam{x} \idfunc[P(x)]:\prd{x:A} D(x,x,\refl{x}),
+ \end{equation*}
+ so that the induction principle gives us $\indid{A}(D,d,x,y,p):P(x)\to P(y)$ for $p:x= y$, which we define to be $\transf p$.
+\end{proof}
+
+\begin{proof}[Second proof]
+ By induction, it suffices to assume $p$ is $\refl x$.
+ But in this case, we can take $\transf{(\refl x)}:P(x)\to P(x)$ to be the identity function.
+\end{proof}
+
+Sometimes, it is necessary to notate the type family $P$ in which the transport operation happens.
+In this case, we may write
+\[\transfib P p \blank : P(x) \to P(y).\]
+
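+In Lean 4 notation (a sketch; \texttt{transport} is our name for the operation), the second proof becomes a one-line definition, and the computation rule for transporting along reflexivity holds definitionally:
+
+```lean
+-- Transport along p : x = y, by path induction.
+def transport {A : Type} (P : A → Type) {x y : A} (p : x = y) :
+    P x → P y :=
+  match p with
+  | rfl => id
+
+-- Computation rule: transporting along rfl is the identity.
+example {A : Type} (P : A → Type) {x : A} (u : P x) :
+    transport P (rfl : x = x) u = u := rfl
+```
+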
+Recall that a type family $P$ over a type $A$ can be seen as a property of elements of $A$, which holds at $x$ in $A$ if $P(x)$ is inhabited.
+Then the transport lemma says that $P$ respects equality, in the sense that if $x$ is equal to $y$, then $P(x)$ holds if and only if $P(y)$ holds.
+In fact, we will see later on that if $x=y$ then actually $P(x)$ and $P(y)$ are \emph{equivalent}.
+
+Topologically, the transport lemma can be viewed as a ``path lifting'' operation in a fibration.
+\index{fibration}%
+\indexdef{total!space}%
+We think of a type family $P:A\to \type$ as a \emph{fibration} with base space $A$, with $P(x)$ being the fiber over $x$, and with $\sm{x:A}P(x)$ being the \define{total space} of the fibration, with first projection $\sm{x:A}P(x)\to A$.
+The defining property of a fibration is that given a path $p:x=y$ in the base space $A$ and a point $u:P(x)$ in the fiber over $x$, we may lift the path $p$ to a path in the total space starting at $u$ (and this lifting can be done continuously).
+The point $\trans p u$ can be thought of as the other endpoint of this lifted path.
+We can also define the path itself in type theory:
+
+\begin{lem}[Path lifting property]\label{thm:path-lifting}
+ \indexdef{path!lifting}%
+ \indexdef{lifting!path}%
+ Let $P:A\to\type$ be a type family over $A$ and assume we have $u:P(x)$ for some $x:A$.
+ Then for any $p:x=y$, we have
+ \begin{equation*}
+ \mathsf{lift}(u,p):(x,u)=(y,\trans{p}{u})
+ \end{equation*}
+ in $\sm{x:A}P(x)$, such that $\ap{\proj1}{\mathsf{lift}(u,p)} = p$.
+\end{lem}
+\begin{proof}
+ Left to the reader.
+ We will prove a more general theorem in \cref{sec:compute-sigma}.
+\end{proof}
+
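+Although the proof is left to the reader, the statement is easy to transcribe; here is a Lean 4 sketch (names ours, with \texttt{transport} repeated for self-containment), including the compatibility $\ap{\proj1}{\mathsf{lift}(u,p)} = p$, again by path induction:
+
+```lean
+def transport {A : Type} (P : A → Type) {x y : A} (p : x = y) :
+    P x → P y :=
+  match p with
+  | rfl => id
+
+-- Path lifting: a path in the total space Σ x, P x over p,
+-- from (x, u) to (y, transport P p u).
+def lift {A : Type} {P : A → Type} {x y : A} (u : P x) (p : x = y) :
+    (⟨x, u⟩ : Sigma P) = ⟨y, transport P p u⟩ :=
+  match p with
+  | rfl => rfl
+
+-- Its image under the first projection is p itself.
+theorem lift_fst {A : Type} {P : A → Type} {x y : A} (u : P x) (p : x = y) :
+    congrArg Sigma.fst (lift u p) = p := by
+  cases p
+  rfl
+```
+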
+In classical homotopy theory, a fibration is defined as a map for which there \emph{exist} liftings of paths; in type theory, by contrast, we have just shown that every type family comes with a \emph{specified} ``path-lifting function''.
+This accords with the philosophy of constructive mathematics, according to which we cannot show that something exists except by exhibiting it.
+\index{continuity of functions in type theory@``continuity'' of functions in type theory}%
+It also ensures automatically that the path liftings are chosen ``continuously'', since as we have seen, all functions in type theory are ``continuous''.
+
+\begin{rmk}
+ Although we may think of a type family $P:A\to \type$ as like a fibration, it is generally not a good idea to say things like ``the fibration $P:A\to\type$'', since this sounds like we are talking about a fibration with base $\type$ and total space $A$.
+ To repeat, when a type family $P:A\to \type$ is regarded as a fibration, the base is $A$ and the total space is $\sm{x:A} P(x)$.
+
+ We may also occasionally use other topological terminology when speaking about type families.
+ For instance, we may refer to a dependent function $f:\prd{x:A} P(x)$ as a \define{section}
+ \indexdef{section!of a type family}%
+ of the fibration $P$, and we may say that something happens \define{fiberwise}
+ \indexdef{fiberwise}%
+ if it happens for each $P(x)$.
+ For instance, a section $f:\prd{x:A} P(x)$ shows that $P$ is ``fiberwise inhabited''.
+\end{rmk}
+
+\index{function!dependent|(}
+Now we can prove the dependent version of \cref{lem:map}.
+The topological intuition is that given $f:\prd{x:A} P(x)$ and a path $p:\id[A]xy$, we ought to be able to apply $f$ to $p$ and obtain a path in the total space of $P$ which ``lies over'' $p$, as shown below.
+
+\begin{center}
+ \begin{tikzpicture}[yscale=.5,xscale=2]
+ \draw (0,0) arc (-90:170:8ex) node[anchor=south east] {$A$} arc (170:270:8ex);
+ \draw (0,6) arc (-90:170:8ex) node[anchor=south east] {$\sm{x:A} P(x)$} arc (170:270:8ex);
+ \draw[->] (0,5.8) -- node[auto] {$\proj1$} (0,3.2);
+ \node[circle,fill,inner sep=1pt,label=left:{$x$}] (b1) at (-.5,1.4) {};
+ \node[circle,fill,inner sep=1pt,label=right:{$y$}] (b2) at (.5,1.4) {};
+ \draw[decorate,decoration={snake,amplitude=1}] (b1) -- node[auto,swap] {$p$} (b2);
+ \node[circle,fill,inner sep=1pt,label=left:{$f(x)$}] (b1) at (-.5,7.2) {};
+ \node[circle,fill,inner sep=1pt,label=right:{$f(y)$}] (b2) at (.5,7.2) {};
+ \draw[decorate,decoration={snake,amplitude=1}] (b1) -- node[auto] {$f(p)$} (b2);
+ \end{tikzpicture}
+\end{center}
+
+We \emph{can} obtain such a thing from \cref{lem:map}.
+Given $f:\prd{x:A} P(x)$, we can define a non-dependent function $f':A\to \sm{x:A} P(x)$ by setting $f'(x)\defeq (x,f(x))$, and then consider $\ap{f'}{p} : f'(x) = f'(y)$.
+Since $\proj1 \circ f' \jdeq \idfunc[A]$, by \cref{lem:ap-functor} we have $\ap{\proj1}{\ap{f'}{p}} = p$; thus $\ap{f'}{p}$ does ``lie over'' $p$ in this sense.
+However, it is not obvious from the \emph{type} of $\ap{f'}{p}$ that it lies over any specific path in $A$ (in this case, $p$), which is sometimes important.
+
+The solution is to use the transport lemma.
+By \cref{thm:path-lifting} we have a canonical path $\mathsf{lift}(u,p)$ from $(x,u)$ to $(y,\trans p u)$ which lies over $p$.
+Thus, any path from $u:P(x)$ to $v:P(y)$ lying over $p$ should factor through $\mathsf{lift}(u,p)$, essentially uniquely, by a path from $\trans p u$ to $v$ lying entirely in the fiber $P(y)$.
+Thus, up to equivalence, it makes sense to define ``a path from $u$ to $v$ lying over $p:x=y$'' to mean a path $\trans p u = v$ in $P(y)$.
+And, indeed, we can show that dependent functions produce such paths.
+
+\begin{lem}[Dependent map]\label{lem:mapdep}
+ \indexdef{application!of dependent function to a path}%
+ \indexdef{path!application of a dependent function to}%
+ \indexdef{function!dependent!application to a path of}%
+ \indexdef{action!of a dependent function on a path}%
+ Suppose $f:\prd{x: A} P(x)$; then we have a map
+ \[\apdfunc f : \prd{p:x=y}\big(\id[P(y)]{\trans p{f(x)}}{f(y)}\big).\]
+\end{lem}
+
+\begin{proof}[First proof]
+ Let $D:\prd{x,y:A} (\id{x}{y}) \to \type$ be the type family defined by
+ \begin{equation*}
+ D(x,y,p)\defeq \trans p {f(x)}= f(y).
+ \end{equation*}
+ Then $D(x,x,\refl{x})$ is $\trans{(\refl{x})}{f(x)}= f(x)$.
+ But since $\trans{(\refl{x})}{f(x)}\jdeq f(x)$, we get that $D(x,x,\refl{x})\jdeq (f(x)= f(x))$.
+ Thus, we find the function
+ \begin{equation*}
+ d\defeq\lam{x} \refl{f(x)}:\prd{x:A} D(x,x,\refl{x})
+ \end{equation*}
+ and now path induction gives us $\apdfunc f(p):\trans p{f(x)}= f(y)$ for each $p:x= y$.
+\end{proof}
+
+\begin{proof}[Second proof]
+ By induction, it suffices to assume $p$ is $\refl x$.
+ But in this case, the desired equation is $\trans{(\refl{x})}{f(x)}= f(x)$, which holds judgmentally.
+\end{proof}
+
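+The second proof again becomes a one-liner in Lean 4 (a sketch; \texttt{apd} is our transcription of $\apdfunc f$, with \texttt{transport} as before):
+
+```lean
+def transport {A : Type} (P : A → Type) {x y : A} (p : x = y) :
+    P x → P y :=
+  match p with
+  | rfl => id
+
+-- Dependent action on paths: the result lies over p, i.e. it is an
+-- equality in P y between the transported value and f y.
+def apd {A : Type} {P : A → Type} (f : (x : A) → P x) {x y : A}
+    (p : x = y) :
+    transport P p (f x) = f y :=
+  match p with
+  | rfl => rfl
+```
+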
+We will refer generally to paths which ``lie over other paths'' in this sense as \emph{dependent paths}.
+\indexsee{dependent!path}{path, dependent}%
+\index{path!dependent}%
+They will play an increasingly important role starting in \cref{cha:hits}.
+In \cref{sec:computational} we will see that for a few particular kinds of type families, there are equivalent ways to represent the notion of dependent paths that are sometimes more convenient.
+
+Now recall from \cref{sec:pi-types} that a non-dependently typed function $f:A\to B$ is just the special case of a dependently typed function $f:\prd{x:A} P(x)$ when $P$ is a constant type family, $P(x) \defeq B$.
+In this case, $\apdfunc{f}$ and $\apfunc{f}$ are closely related, because of the following lemma:
+
+\begin{lem}\label{thm:trans-trivial}
+ If $P:A\to\type$ is defined by $P(x) \defeq B$ for a fixed $B:\type$, then for any $x,y:A$ and $p:x=y$ and $b:B$ we have a path
+ \[ \transconst Bpb : \transfib P p b = b. \]
+\end{lem}
+\begin{proof}[First proof]
+ Fix a $b:B$, and let $D:\prd{x,y:A} (\id{x}{y}) \to \type$ be the type family defined by
+ \[ D(x,y,p) \defeq (\transfib P p b = b). \]
+ Then $D(x,x,\refl x)$ is $(\transfib P{\refl{x}}{b} = b)$, which is judgmentally equal to $(b=b)$ by the computation rule for transporting.
+ Thus, we have the function
+ \[ d \defeq \lam{x} \refl{b} : \prd{x:A} D(x,x,\refl x). \]
+ Now path induction gives us an element of
+ \narrowequation{
+ \prd{x,y:A}{p:x=y}(\transfib P p b = b),}
+ as desired.
+\end{proof}
+\begin{proof}[Second proof]
+ By induction, it suffices to assume $y$ is $x$ and $p$ is $\refl x$.
+ But $\transfib P {\refl x} b \jdeq b$, so in this case what we have to prove is $b=b$, and we have $\refl{b}$ for this.
+\end{proof}
+
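+In Lean 4 this is again a direct path induction (a sketch; names ours):
+
+```lean
+def transport {A : Type} (P : A → Type) {x y : A} (p : x = y) :
+    P x → P y :=
+  match p with
+  | rfl => id
+
+-- Transport in a constant family is (propositionally) trivial.
+theorem transport_const {A : Type} (B : Type) {x y : A}
+    (p : x = y) (b : B) :
+    transport (fun _ => B) p b = b := by
+  cases p
+  rfl
+```
+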
+Thus, for any $x,y:A$ and $p:x=y$ and $f:A\to B$, by concatenating with $\transconst Bp{f(x)}$ and its inverse, respectively, we obtain functions
+\begin{align}
+ \big(f(x) = f(y)\big) &\to \big(\trans{p}{f(x)} = f(y)\big)\label{eq:ap-to-apd}
+ \qquad\text{and} \\
+ \big(\trans{p}{f(x)} = f(y)\big) &\to \big(f(x) = f(y)\big).\label{eq:apd-to-ap}
+\end{align}
+In fact, these functions are inverse equivalences (in the sense to be introduced in \cref{sec:basics-equivalences}), and they relate $\apfunc f (p)$ to $\apdfunc f (p)$.
+
+\begin{lem}\label{thm:apd-const}
+ For $f:A\to B$ and $p:\id[A]xy$, we have
+ \[ \apdfunc f(p) = \transconst B p{f(x)} \ct \apfunc f (p). \]
+\end{lem}
+\begin{proof}[First proof]
+ Let $D:\prd{x,y:A} (\id xy) \to \type$ be the type family defined by
+ \[ D(x,y,p) \defeq \big(\apdfunc f (p) = \transconst Bp{f(x)} \ct \apfunc f (p)\big). \]
+ Thus, we have
+ \[D(x,x,\refl x) \jdeq \big(\apdfunc f (\refl x) = \transconst B{\refl x}{f(x)} \ct \apfunc f ({\refl x})\big).\]
+ But by definition, all three paths appearing in this type are $\refl{f(x)}$, so we have
+ \[ \refl{\refl{f(x)}} : D(x,x,\refl x). \]
+ Thus, path induction gives us an element of $\prd{x,y:A}{p:x=y} D(x,y,p)$, which is what we wanted.
+\end{proof}
+\begin{proof}[Second proof]
+ By induction, it suffices to assume $y$ is $x$ and $p$ is $\refl x$.
+ In this case, what we have to prove is $\refl{f(x)} = \refl{f(x)} \ct \refl{f(x)}$, which is true judgmentally.
+\end{proof}
+
+Because the types of $\apdfunc{f}$ and $\apfunc{f}$ are different, it is often clearer to use different notations for them.
+% We may sometimes use a notation $\apd f p$ for $\apdfunc{f}(p)$, which is similar to the notation $\ap f p$ for $\apfunc{f}(p)$.
+
+\index{function!dependent|)}%
+
+At this point, we hope the reader is starting to get a feel for proofs by induction on identity types.
+From now on we stop giving both styles of proofs, allowing ourselves to use whatever is most clear and convenient (and often the second, more concise one).
+Here are a few other useful lemmas about transport; we leave it to the reader to give the proofs (in either style).
+
+\begin{lem}\label{thm:transport-concat}
+ Given $P:A\to\type$ with $p:\id[A]xy$ and $q:\id[A]yz$ while $u:P(x)$, we have
+ \[ \trans{q}{\trans{p}{u}} = \trans{(p\ct q)}{u}. \]
+\end{lem}
+
+\begin{lem}\label{thm:transport-compose}
+ For a function $f:A\to B$ and a type family $P:B\to\type$, and any $p:\id[A]xy$ and $u:P(f(x))$, we have
+ \[ \transfib{P\circ f}{p}{u} = \transfib{P}{\apfunc f(p)}{u}. \]
+\end{lem}
+
+\begin{lem}\label{thm:ap-transport}
+ For $P,Q:A\to \type$ and a family of functions $f:\prd{x:A} P(x)\to Q(x)$, and any $p:\id[A]xy$ and $u:P(x)$, we have
+ \[ \transfib{Q}{p}{f_x(u)} = f_y(\transfib{P}{p}{u}). \]
+\end{lem}
+
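+For readers who want to check their proofs, here are the three lemmas transcribed into Lean 4 (a sketch; the lemma names are ours, and each proof is the by-now-familiar path induction):
+
+```lean
+def transport {A : Type} (P : A → Type) {x y : A} (p : x = y) :
+    P x → P y :=
+  match p with
+  | rfl => id
+
+-- Transporting in stages equals transporting along the concatenation.
+theorem transport_concat {A : Type} (P : A → Type) {x y z : A}
+    (p : x = y) (q : y = z) (u : P x) :
+    transport P q (transport P p u) = transport P (p.trans q) u := by
+  cases p; cases q; rfl
+
+-- Transport in a family pulled back along f.
+theorem transport_compose {A B : Type} (f : A → B) (P : B → Type)
+    {x y : A} (p : x = y) (u : P (f x)) :
+    transport (P ∘ f) p u = transport P (congrArg f p) u := by
+  cases p; rfl
+
+-- Fiberwise functions commute with transport.
+theorem transport_family {A : Type} {P Q : A → Type}
+    (f : (x : A) → P x → Q x) {x y : A} (p : x = y) (u : P x) :
+    transport Q p (f x u) = f y (transport P p u) := by
+  cases p; rfl
+```
+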
+\index{type!family of|)}%
+\index{transport|)}
+
+\section{Homotopies and equivalences}
+\label{sec:basics-equivalences}
+
+\index{homotopy|(defstyle}%
+
+So far, we have seen how the identity type $\id[A]xy$ can be regarded as a type of \emph{identifications}, \emph{paths}, or \emph{equivalences} between two elements $x$ and~$y$ of a type $A$.
+Now we investigate the appropriate notions of ``identification'' or ``sameness'' between \emph{functions} and between \emph{types}.
+In \cref{sec:compute-pi,sec:compute-universe}, we will see that homotopy type theory allows us to identify these with instances of the identity type, but before we can do that we need to understand them in their own right.
+
+Traditionally, we regard two functions as the same if they take equal values on all inputs.
+Under the propositions-as-types interpretation, this suggests that two functions $f$ and $g$ (perhaps dependently typed) should be the same if the type $\prd{x:A} (f(x)=g(x))$ is inhabited.
+Under the homotopical interpretation, this dependent function type consists of \emph{continuous} paths or \emph{functorial} equivalences, and thus may be regarded as the type of \emph{homotopies} or of \emph{natural isomorphisms}.\index{isomorphism!natural}
+We will adopt the topological terminology for this.
+
+\begin{defn} \label{defn:homotopy}
+ Let $f,g:\prd{x:A} P(x)$ be two sections of a type family $P:A\to\type$.
+ A \define{homotopy}
+ from $f$ to $g$ is a dependent function of type
+ \begin{equation*}
+ (f\htpy g) \defeq \prd{x:A} (f(x)=g(x)).
+ \end{equation*}
+\end{defn}
+
+Note that a homotopy is not the same as an identification $(f=g)$.
+However, in \cref{sec:compute-pi} we will introduce an axiom making homotopies and identifications ``equivalent''.
+
+The following proofs are left to the reader.
+
+\begin{lem}\label{lem:homotopy-props}
+ Homotopy is an equivalence relation on each dependent function type $\prd{x:A} P(x)$.
+ That is, we have elements of the types
+ \begin{gather*}
+ \prd{f:\prd{x:A} P(x)} (f\htpy f)\\
+ \prd{f,g:\prd{x:A} P(x)} (f\htpy g) \to (g\htpy f)\\
+ \prd{f,g,h:\prd{x:A} P(x)} (f\htpy g) \to (g\htpy h) \to (f\htpy h).
+ \end{gather*}
+\end{lem}
+
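+These three elements are constructed pointwise from reflexivity, inversion, and concatenation; a Lean 4 sketch, with \texttt{Htpy} as our name for $f \htpy g$:
+
+```lean
+-- Homotopy between two sections of a type family P.
+def Htpy {A : Type} {P : A → Type} (f g : (x : A) → P x) : Type :=
+  (x : A) → f x = g x
+
+def Htpy.refl {A : Type} {P : A → Type} (f : (x : A) → P x) :
+    Htpy f f :=
+  fun _ => rfl
+
+def Htpy.symm {A : Type} {P : A → Type} {f g : (x : A) → P x}
+    (H : Htpy f g) : Htpy g f :=
+  fun x => (H x).symm
+
+def Htpy.trans {A : Type} {P : A → Type} {f g h : (x : A) → P x}
+    (H : Htpy f g) (K : Htpy g h) : Htpy f h :=
+  fun x => (H x).trans (K x)
+```
+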
+% This is judgmental and is \cref{ex:composition}.
+% \begin{lem}
+% Composition is associative and unital up to homotopy.
+% That is:
+% \begin{enumerate}
+% \item If $f:A\to B$ then $f\circ \idfunc[A]\htpy f\htpy \idfunc[B]\circ f$.
+% \item If $f:A\to B, g:B\to C$ and $h:C\to D$ then $h\circ (g\circ f) \htpy (h\circ g)\circ f$.
+% \end{enumerate}
+% \end{lem}
+
+\index{functoriality of functions in type theory@``functoriality'' of functions in type theory}%
+\index{continuity of functions in type theory@``continuity'' of functions in type theory}%
+Just as functions in type theory are automatically ``functors'', homotopies are automatically
+\index{naturality of homotopies@``naturality'' of homotopies}%
+``natural transformations''.
+We will state and prove this only for non-dependent functions $f,g:A\to B$; in \cref{ex:dep-htpy-natural} we ask the reader to generalize it to dependent functions.
+
+Recall that for $f:A\to B$ and $p:\id[A]xy$, we may write $\ap f p$ to mean $\apfunc{f} (p)$.
+
+\begin{lem}\label{lem:htpy-natural}
+ Suppose $H:f\htpy g$ is a homotopy between functions $f,g:A\to B$ and let $p:\id[A]xy$. Then we have
+ \begin{equation*}
+ H(x)\ct\ap{g}{p}=\ap{f}{p}\ct H(y).
+ \end{equation*}
+ We may also draw this as a commutative diagram:\index{diagram}
+ \begin{align*}
+ \xymatrix{
+ f(x) \ar@{=}[r]^{\ap fp} \ar@{=}[d]_{H(x)} & f(y) \ar@{=}[d]^{H(y)} \\
+ g(x) \ar@{=}[r]_{\ap gp} & g(y)
+ }
+ \end{align*}
+\end{lem}
+\begin{proof}
+ By induction, we may assume $p$ is $\refl x$.
+ Since $\apfunc{f}$ and $\apfunc g$ compute on reflexivity, in this case what we must show is
+ \[ H(x) \ct \refl{g(x)} = \refl{f(x)} \ct H(x). \]
+ But this follows since both sides are equal to $H(x)$.
+\end{proof}
+
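+The same proof transcribes to Lean 4 (a sketch; in Lean the statement is trivialized by definitional proof irrelevance, but the path-induction argument below is the one that works in a proof-relevant setting).
+The only wrinkle is that for Lean's \texttt{Eq.trans}, only the right unit law holds definitionally, so the left unit law is proved first:
+
+```lean
+-- refl is a left unit for trans (the right unit law is definitional).
+theorem refl_trans {A : Type} {a b : A} (h : a = b) :
+    Eq.trans rfl h = h := by
+  cases h; rfl
+
+-- Naturality: H(x) ⬝ g(p) = f(p) ⬝ H(y), by induction on p.
+theorem htpy_natural {A B : Type} {f g : A → B}
+    (H : (x : A) → f x = g x) {x y : A} (p : x = y) :
+    (H x).trans (congrArg g p) = (congrArg f p).trans (H y) := by
+  cases p
+  exact (refl_trans (H x)).symm
+```
+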
+\begin{cor}\label{cor:hom-fg}
+ Let $H : f \htpy \idfunc[A]$ be a homotopy, with $f : A \to A$. Then for any $x : A$ we have \[ H(f(x)) = \ap f{H(x)}. \]
+ % The above path will be denoted by $\com{H}{f}{x}$.
+\end{cor}
+\noindent
+Here $f(x)$ denotes the ordinary application of $f$ to $x$, while $\ap f{H(x)}$ denotes $\apfunc{f}(H(x))$.
+\begin{proof}
+By naturality of $H$, the following diagram of paths commutes:
+\begin{align*}
+\xymatrix@C=3pc{
+ffx \ar@{=}[r]^-{\ap f{Hx}} \ar@{=}[d]_{H(fx)} & fx \ar@{=}[d]^{Hx} \\
+fx \ar@{=}[r]_-{Hx} & x
+}
+\end{align*}
+That is, $\ap f{H x} \ct H x = H(f x) \ct H x$.
+We can now whisker by $\opp{(H x)}$ to cancel $H x$, obtaining
+\[ \ap f{H x}
+= \ap f{H x} \ct H x \ct \opp{(H x)}
+= H(f x) \ct H x \ct \opp{(H x)}
+= H(f x)
+\]
+as desired (with some associativity paths suppressed).
+\end{proof}
+
+Of course, like the functoriality of functions (\cref{lem:ap-functor}), the equality in \cref{lem:htpy-natural} is a path which satisfies its own coherence laws, and so on.
+
+\index{homotopy|)}%
+
+\index{equivalence|(}%
+Moving on to types, from a traditional perspective one may say that a function $f:A\to B$ is an \emph{isomorphism} if there is a function $g:B\to A$ such that both composites $f\circ g$ and $g\circ f$ are pointwise equal to the identity, i.e.\ such that $f \circ g \htpy \idfunc[B]$ and $g\circ f \htpy \idfunc[A]$.
+\indexsee{homotopy!equivalence}{equivalence}%
+A homotopical perspective suggests that this should be called a \emph{homotopy equivalence}, and from a categorical one, it should be called an \emph{equivalence of (higher) groupoids}.
+However, when doing proof-relevant mathematics,
+\index{mathematics!proof-relevant}%
+the corresponding type
+\begin{equation}
+ \sm{g:B\to A} \big((f \circ g \htpy \idfunc[B]) \times (g\circ f \htpy \idfunc[A])\big)\label{eq:qinvtype}
+\end{equation}
+is poorly behaved.
+For instance, for a single function $f:A\to B$ there may be multiple unequal inhabitants of~\eqref{eq:qinvtype}.
+(This is closely related to the observation in higher category theory that often one needs to consider \emph{adjoint} equivalences\index{adjoint!equivalence} rather than plain equivalences.)
+For this reason, we give~\eqref{eq:qinvtype} the following historically accurate, but slightly de\-rog\-a\-to\-ry-sounding name instead.
+
+\begin{defn}\label{defn:quasi-inverse}
+ For a function $f:A\to B$, a \define{quasi-inverse}
+ \indexdef{quasi-inverse}%
+ \indexsee{function!quasi-inverse of}{quasi-inverse}%
+ of $f$ is a triple $(g,\alpha,\beta)$ consisting of a function $g:B\to A$ and homotopies
+$\alpha:f\circ g\htpy \idfunc[B]$ and $\beta:g\circ f\htpy \idfunc[A]$.
+\end{defn}
+
+\symlabel{qinv}
+Thus,~\eqref{eq:qinvtype} is \emph{the type of quasi-inverses of $f$}; we may denote it by $\qinv(f)$.
+
+\begin{eg}\label{eg:idequiv}
+ \index{identity!function}%
+ \index{function!identity}%
+ The identity function $\idfunc[A]:A\to A$ has a quasi-inverse given by $\idfunc[A]$ itself, together with homotopies defined by $\alpha(y) \defeq \refl{y}$ and $\beta(x) \defeq \refl{x}$.
+\end{eg}
+
+\begin{eg}\label{eg:concatequiv}
+ For any $p:\id[A]xy$ and $z:A$, the functions
+ \begin{align*}
+ (p\ct \blank)&:(\id[A]yz) \to (\id[A]xz) \qquad\text{and}\\
+ (\blank \ct p)&:(\id[A]zx) \to (\id[A]zy)
+ \end{align*}
+ have quasi-inverses given by $(\opp p \ct \blank)$ and $(\blank \ct \opp p)$, respectively; see \cref{ex:equiv-concat}.
+\end{eg}
+
+\begin{eg}\label{thm:transportequiv}
+ For any $p:\id[A]xy$ and $P:A\to\type$, the function
+ \[\transfib{P}{p}{\blank}:P(x) \to P(y)\]
+ has a quasi-inverse given by $\transfib{P}{\opp p}{\blank}$; this follows from \narrowbreak \cref{thm:transport-concat}.
+\end{eg}
+
+\symlabel{basics-isequiv}\symlabel{basics:iso}
+In general, we will only use the word \emph{isomorphism}
+\index{isomorphism!of sets}
+(and similar words such as \emph{bijection}, and the associated notation $A\cong B$)
+\index{bijection}
+in the special case when the types $A$ and $B$ ``behave like sets'' (see \cref{sec:basics-sets}).
+In this case, the type~\eqref{eq:qinvtype} is unproblematic.
+We will reserve the word \emph{equivalence} for an improved notion $\isequiv (f)$ with the following properties:%
+\begin{enumerate}
+\item For each $f:A\to B$ there is a function $\qinv(f) \to \isequiv (f)$.\label{item:be1}
+\item Similarly, for each $f$ we have $\isequiv (f) \to \qinv(f)$; thus the two are logically equivalent (see \cref{sec:pat}).\label{item:be2}
+\item For any two inhabitants $e_1,e_2:\isequiv(f)$ we have $e_1=e_2$.\label{item:be3}
+\end{enumerate}
+In \cref{cha:equivalences} we will see that there are many different definitions of $\isequiv(f)$ which satisfy these three properties, but that all of them are equivalent.
+For now, to convince the reader that such things exist, we mention only the easiest such definition:
+\begin{equation}\label{eq:isequiv-invertible}
+ \isequiv(f) \;\defeq\;
+ \Parens{\sm{g:B\to A} (f\circ g \htpy \idfunc[B])}
+ \times
+ \Parens{\sm{h:B\to A} (h\circ f \htpy \idfunc[A])}.
+\end{equation}
+We can show~\ref{item:be1} and~\ref{item:be2} for this definition now.
+A function $\qinv(f) \to \isequiv (f)$ is easy to define by taking $(g,\alpha,\beta)$ to $(g,\alpha,g,\beta)$.
+In the other direction, given $(g,\alpha,h,\beta)$, let $\gamma$ be the composite homotopy
+\[ g \overset{\beta}{\htpy} h\circ f\circ g \overset{\alpha}{\htpy} h, \]
+meaning that $\gamma(x) \defeq \opp{\beta(g(x))} \ct \ap{h}{\alpha(x)}$.
+Now define $\beta':g\circ f\htpy \idfunc[A]$ by $\beta'(x) \defeq \gamma(f(x)) \ct \beta(x)$.
+Then $(g,\alpha,\beta'):\qinv(f)$.
+
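+Properties~\ref{item:be1} and~\ref{item:be2} can be transcribed directly; here is a Lean 4 sketch (the structure and definition names are ours, and Lean's proof-irrelevant identity type means this does not capture the full higher-dimensional content of the discussion above):
+
+```lean
+structure QInv {A B : Type} (f : A → B) where
+  g : B → A
+  α : (y : B) → f (g y) = y
+  β : (x : A) → g (f x) = x
+
+structure IsEquiv {A B : Type} (f : A → B) where
+  g : B → A
+  α : (y : B) → f (g y) = y
+  h : B → A
+  β : (x : A) → h (f x) = x
+
+-- Property (i): a quasi-inverse yields an equivalence, reusing g twice.
+def QInv.toIsEquiv {A B : Type} {f : A → B} (q : QInv f) : IsEquiv f :=
+  ⟨q.g, q.α, q.g, q.β⟩
+
+-- Property (ii): from (g, α, h, β), build γ : g ∼ h and then β', as in
+-- the text: γ y := (β (g y))⁻¹ ⬝ ap h (α y) and β' x := γ (f x) ⬝ β x.
+def IsEquiv.toQInv {A B : Type} {f : A → B} (e : IsEquiv f) : QInv f :=
+  let γ : (y : B) → e.g y = e.h y :=
+    fun y => (e.β (e.g y)).symm.trans (congrArg e.h (e.α y))
+  ⟨e.g, e.α, fun x => (γ (f x)).trans (e.β x)⟩
+```
+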
+Property~\ref{item:be3} for this definition is not too hard to prove either, but it requires identifying the identity types of cartesian products and dependent pair types, which we will discuss in \cref{sec:compute-cartprod,sec:compute-sigma}.
+Thus, we postpone it as well; see \cref{sec:biinv}.
+At this point, the main thing to take away is that there is a well-behaved type which we can pronounce as ``$f$ is an equivalence'', and that we can prove $f$ to be an equivalence by exhibiting a quasi-inverse to it.
+In practice, this is the most common way to prove that a function is an equivalence.
+
+In accord with the proof-relevant philosophy,
+\index{mathematics!proof-relevant}%
+\emph{an equivalence} from $A$ to $B$ is defined to be a function $f:A\to B$ together with an inhabitant of $\isequiv (f)$, i.e.\ a proof that it is an equivalence.
+We write $(\eqv A B)$ for the type of equivalences from $A$ to $B$, i.e.\ the type
+\begin{equation}\label{eq:eqv}
+ (\eqv A B) \defeq \sm{f:A\to B} \isequiv(f).
+\end{equation}
+Property~\ref{item:be3} above will ensure that if two equivalences are equal as functions (that is, the underlying elements of $A\to B$ are equal), then they are also equal as equivalences (see \cref{sec:compute-sigma}).
+Thus, we often abuse notation and blur the distinction between equivalences and their underlying functions.
+For instance, if we have a function $f:A\to B$ and we know that $e:\isequiv(f)$, we may write $f:\eqv A B$, rather than $\tup{f}{e}$.
+Or conversely, if we have an equivalence $g:\eqv A B$, we may write $g(a)$ when given $a:A$, rather than $(\proj1 g)(a)$.
+
+We conclude by observing:
+
+\begin{lem}\label{thm:equiv-eqrel}
+ Type equivalence is an equivalence relation on \type.
+ More specifically:
+ \begin{enumerate}
+ \item For any $A$, the identity function $\idfunc[A]$ is an equivalence; hence $\eqv A A$.
+ \item For any $f:\eqv A B$, we have an equivalence $f^{-1} : \eqv B A$.
+ \item For any $f:\eqv A B$ and $g:\eqv B C$, we have $g\circ f : \eqv A C$.
+ \end{enumerate}
+\end{lem}
+\begin{proof}
+ The identity function is clearly its own quasi-inverse; hence it is an equivalence.
+
+ If $f:A\to B$ is an equivalence, then it has a quasi-inverse, say $f^{-1}:B\to A$.
+ Then $f$ is also a quasi-inverse of $f^{-1}$, so $f^{-1}$ is an equivalence $B\to A$.
+
+ Finally, given $f:\eqv A B$ and $g:\eqv B C$ with quasi-inverses $f^{-1}$ and $g^{-1}$, say, then for any $a:A$ we have $f^{-1} g^{-1} g f a = f^{-1} f a = a$, and for any $c:C$ we have $g f f^{-1} g^{-1} c = g g^{-1} c = c$.
+ Thus $f^{-1} \circ g^{-1}$ is a quasi-inverse to $g\circ f$, hence the latter is an equivalence.
+\end{proof}
+
+\index{equivalence|)}%
+
+
+\section{The higher groupoid structure of type formers}
+\label{sec:computational}
+
+In \cref{cha:typetheory}, we introduced many ways to form new types: cartesian products, disjoint unions, dependent products, dependent sums, etc.
+In \cref{sec:equality,sec:functors,sec:fibrations}, we saw that \emph{all} types in homotopy type theory behave like spaces or higher groupoids.
+Our goal in the rest of the chapter is to make explicit how this higher structure behaves in the case of the particular types defined in \cref{cha:typetheory}.
+
+It turns out that for many types $A$, the equality types $\id[A]xy$ can be characterized, up to equivalence, in terms of whatever data was used to construct $A$.
+For example, if $A$ is a cartesian product $B\times C$, and $x\jdeq (b,c)$ and $y\jdeq(b',c')$, then we have an equivalence
+\begin{equation}\label{eq:prodeqv}
+ \eqv{\big((b,c)=(b',c')\big)}{\big((b=b')\times (c=c')\big)}.
+\end{equation}
+In more traditional language, two ordered pairs are equal just when their components are equal (but the equivalence~\eqref{eq:prodeqv} says rather more than this).
+The higher structure of the identity types can also be expressed in terms of these equivalences; for instance, concatenating two equalities between pairs corresponds to pairwise concatenation.
+
+Similarly, when a type family $P:A\to\type$ is built up fiberwise using the type forming rules from \cref{cha:typetheory}, the operation $\transfib{P}{p}{\blank}$ can be characterized, up to homotopy, in terms of the corresponding operations on the data that went into $P$.
+For instance, if $P(x) \jdeq B(x)\times C(x)$, then we have
+\[\transfib{P}{p}{(b,c)} = \big(\transfib{B}{p}{b},\transfib{C}{p}{c}\big).\]
+
+Finally, the type forming rules are also functorial, and if a function $f$ is built from this functoriality, then the operations $\apfunc f$ and $\apdfunc f$ can be computed based on the corresponding ones on the data going into $f$.
+For instance, if $g:B\to B'$ and $h:C\to C'$ and we define $f:B\times C \to B'\times C'$ by $f(b,c)\defeq (g(b),h(c))$, then modulo the equivalence~\eqref{eq:prodeqv}, we can identify $\apfunc f$ with ``$(\apfunc g,\apfunc h)$''.
+
+The next few sections (\crefrange{sec:compute-cartprod}{sec:compute-nat}) will be devoted to stating and proving theorems of this sort for all the basic type forming rules, with one section for each basic type former.
+Here we encounter a certain apparent deficiency in currently available type theories;
+as will become clear in later chapters, it would seem to be more convenient and intuitive if these characterizations of identity types, transport, and so on were \emph{judgmental}\index{judgmental equality} equalities.
+However, in the theory presented in \cref{cha:typetheory}, the identity types are defined uniformly for all types by their induction principle, so we cannot ``redefine'' them to be different things at different types.
+Thus, the characterizations for particular types to be discussed in this chapter are, for the most part, \emph{theorems} which we have to discover and prove, if possible.
+
+Actually, the type theory of \cref{cha:typetheory} is insufficient to prove the desired theorems for two of the type formers: $\Pi$-types and universes.
+For this reason, we are forced to introduce axioms into our type theory, in order to make those ``theorems'' true.
+Type-theoretically, an \emph{axiom} (cf.~\cref{sec:axioms}) is an ``atomic'' element that is declared to inhabit some specified type, without there being any rules governing its behavior other than those pertaining to the type it inhabits.
+\index{axiom!versus rules}%
+
+\index{function extensionality}%
+\indexsee{extensionality, of functions}{function extensionality}
+\index{univalence axiom}%
+The axiom for $\Pi$-types (\cref{sec:compute-pi}) is familiar to type theorists: it is called \emph{function extensionality}, and states (roughly) that if two functions are homotopic in the sense of \cref{sec:basics-equivalences}, then they are equal.
+The axiom for universes (\cref{sec:compute-universe}), however, is a new contribution of homotopy type theory due to Voevodsky: it is called the \emph{univalence axiom}, and states (roughly) that if two types are equivalent in the sense of \cref{sec:basics-equivalences}, then they are equal.
+We have already remarked on this axiom in the introduction; it will play a very important role in this book.%
+\footnote{We have chosen to introduce these principles as axioms, but there are potentially other ways to formulate a type theory in which they hold.
+ See the Notes to this chapter.}
+
+It is important to note that not \emph{all} identity types can be ``determined'' by induction over the construction of types.
+Counterexamples include most nontrivial higher inductive types (see \cref{cha:hits,cha:homotopy}).
+For instance, calculating the identity types of the types $\Sn^n$ (see \cref{sec:circle}) is equivalent to calculating the higher homotopy groups of spheres, a deep and important field of research in algebraic topology.
+
+
+\section{Cartesian product types}
+\label{sec:compute-cartprod}
+
+\index{type!product|(}%
+Given types $A$ and $B$, consider the cartesian product type $A \times B$.
+For any elements $x,y:A\times B$ and a path $p:\id[A\times B]{x}{y}$, by functoriality we can extract paths $\ap{\proj1}p:\id[A]{\proj1(x)}{\proj1(y)}$ and $\ap{\proj2}p:\id[B]{\proj2(x)}{\proj2(y)}$.
+Thus, we have a function
+\begin{equation}\label{eq:path-prod}
+ (\id[A\times B]{x}{y}) \to (\id[A]{\proj1(x)}{\proj1(y)}) \times (\id[B]{\proj2(x)}{\proj2(y)}).
+\end{equation}
+
+\begin{thm}\label{thm:path-prod}
+ For any $x$ and $y$, the function~\eqref{eq:path-prod} is an equivalence.
+\end{thm}
+
+Read logically, this says that two pairs are equal just when they are equal
+componentwise. Read category-theoretically, this says that the
+morphisms in a product groupoid are pairs of morphisms. Read
+homotopy-theoretically, this says that the paths in a product
+space are pairs of paths.
+
+\begin{proof}
+ We need a function in the other direction:
+ \begin{equation}
+ (\id[A]{\proj1(x)}{\proj1(y)}) \times (\id[B]{\proj2(x)}{\proj2(y)}) \to (\id[A\times B]{x}{y}). \label{eq:path-prod-inverse}
+ \end{equation}
+ By the induction rule for cartesian products, we may assume that $x$ and $y$ are both pairs, i.e.\ $x\jdeq (a,b)$ and $y\jdeq (a',b')$ for some $a,a':A$ and $b,b':B$.
+ In this case, what we want is a function
+ \begin{equation*}
+ (\id[A]{a}{a'}) \times (\id[B]{b}{b'}) \to \big(\id[A\times B]{(a,b)}{(a',b')}\big).
+ \end{equation*}
+ Now by induction for the cartesian product in its domain, we may assume given $p:a=a'$ and $q:b=b'$.
+ And by two path inductions, we may assume that $a\jdeq a'$ and $b\jdeq b'$ and both $p$ and $q$ are reflexivity.
+ But in this case, we have $(a,b)\jdeq(a',b')$ and so we can take the output to also be reflexivity.
+
+ It remains to prove that~\eqref{eq:path-prod-inverse} is quasi-inverse to~\eqref{eq:path-prod}.
+ This is a simple sequence of inductions, but they have to be done in the right order.
+
+ In one direction, let us start with $r:\id[A\times B]{x}{y}$.
+ We first do a path induction on $r$ in order to assume that $x\jdeq y$ and $r$ is reflexivity.
+ In this case, since $\apfunc{\proj1}$ and $\apfunc{\proj2}$ are defined by path induction,~\eqref{eq:path-prod} takes $r\jdeq \refl{x}$ to the pair $(\refl{\proj1x},\refl{\proj2x})$.
+ Now by induction on $x$, we may assume $x\jdeq (a,b)$, so that this is $(\refl a, \refl b)$.
+ Thus,~\eqref{eq:path-prod-inverse} takes it by definition to $\refl{(a,b)}$, which (under our current assumptions) is $r$.
+
+ In the other direction, if we start with $s:(\id[A]{\proj1(x)}{\proj1(y)}) \times (\id[B]{\proj2(x)}{\proj2(y)})$, then we first do induction on $x$ and $y$ to assume that they are pairs $(a,b)$ and $(a',b')$, and then induction on $s:(\id[A]{a}{a'}) \times (\id[B]{b}{b'})$ to reduce it to a pair $(p,q)$ where $p:a=a'$ and $q:b=b'$.
+ Now by induction on $p$ and $q$, we may assume they are reflexivities $\refl a$ and $\refl b$, in which case~\eqref{eq:path-prod-inverse} yields $\refl{(a,b)}$ and then~\eqref{eq:path-prod} returns us to $(\refl a,\refl b)\jdeq (p,q)\jdeq s$.
+\end{proof}
+
+In particular, we have shown that~\eqref{eq:path-prod} has an inverse~\eqref{eq:path-prod-inverse}, which we may denote by
+\symlabel{defn:pairpath}
+\[
+\pairpath : (\id{\proj{1}(x)}{\proj{1}(y)}) \times (\id{\proj{2}(x)}{\proj{2}(y)}) \to (\id x y).
+\]
+Note that a special case of this yields the propositional uniqueness principle\index{uniqueness!principle, propositional!for product types} for products: $z = (\proj1(z),\proj2(z))$.
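+
+For readers who like to experiment, the logical content of \cref{thm:path-prod} can be checked in a proof assistant. The following Lean~4 sketch (the name \texttt{pair\_eq\_iff} is ours) states the characterization for explicit pairs, as in the proof above; note that Lean's identity types satisfy uniqueness of identity proofs, so only the logical ``if and only if'' survives there, not the full equivalence.
+```lean
+-- Componentwise characterization of equality of pairs, stated for
+-- explicit pairs as in the proof of the theorem.  `congrArg` plays
+-- the role of `ap`, and the backward direction is `pairpath`.
+theorem pair_eq_iff {A B : Type} (a a' : A) (b b' : B) :
+    (a, b) = (a', b') ↔ a = a' ∧ b = b' := by
+  constructor
+  · intro p
+    exact ⟨congrArg Prod.fst p, congrArg Prod.snd p⟩
+  · intro ⟨p, q⟩
+    -- two path inductions, then reflexivity, exactly as in the text
+    cases p; cases q
+    rfl
+```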
+
+It can be helpful to view \pairpath as a \emph{constructor} or \emph{introduction rule} for $\id x y$, analogous to the ``pairing'' constructor of $A\times B$ itself, which introduces the pair $(a,b)$ given $a:A$ and $b:B$.
+From this perspective, the two components of~\eqref{eq:path-prod}:
+\begin{align*}
+ \projpath{1} &: (\id{x}{y}) \to (\id{\proj{1}(x)}{\proj{1} (y)})\\
+ \projpath{2} &: (\id{x}{y}) \to (\id{\proj{2}(x)}{\proj{2} (y)})
+\end{align*}
+are \emph{elimination} rules.
+Similarly, the two homotopies which witness~\eqref{eq:path-prod-inverse} as quasi-inverse to~\eqref{eq:path-prod} consist, respectively, of \emph{propositional computation rules}:
+\index{computation rule!propositional!for identities between pairs}%
+\begin{align*}
+ \projpath{1}(\pairpath(p, q))
+ &= %_{(\id{\proj{1} x}{\proj{1} y})}
+ {p} \\
+ \projpath{2}(\pairpath(p,q))
+ &= %_{(\id{\proj{2} x}{\proj{2} y})}
+ {q}
+\end{align*}
+for $p:\id{\proj{1} x}{\proj{1} y}$ and $q:\id{\proj{2} x}{\proj{2} y}$,
+and a \emph{propositional uniqueness principle}:
+\index{uniqueness!principle, propositional!for identities between pairs}%
+\[
+\id{r}{\pairpath(\projpath{1} (r), \projpath{2} (r)) }
+\qquad\text{for } r : \id[A \times B] x y.
+\]
+
+We can also characterize the reflexivity, inverses, and composition of paths in $A\times B$ componentwise:
+\begin{align*}
+ {\refl{(z : A \times B)}}
+ &= {\pairpath (\refl{\proj{1} z},\refl{\proj{2} z})} \\
+ {\opp{p}}
+ &= {\pairpath \big(\opp{\projpath{1} (p)},\, \opp{\projpath{2} (p)}\big)} \\
+ {{p \ct q}}
+ &= {\pairpath \big({\projpath{1} (p)} \ct {\projpath{1} (q)},\,{\projpath{2} (p)} \ct {\projpath{2} (q)}\big)}.
+\end{align*}
+Or, written differently:
+\begin{alignat*}{2}
+ \projpath{i}(\refl{(z : A \times B)}) &= \refl{\proj{i} z} &\qquad (i=1,2)\\
+ \pairpath(\opp p, \opp q) &= \opp{\pairpath(p,q)}\\
+ \pairpath(p\ct q, p'\ct q') &= \pairpath(p,p') \ct \pairpath(q,q').
+\end{alignat*}
+All of these equations can be derived by using path induction on the given paths and then returning reflexivity.
+The same is true for the rest of the higher groupoid structure considered in \cref{sec:equality}, although it begins to get tedious to insert enough other coherence paths to yield an equation that will typecheck.
+For instance, if we denote the inverse of the path in \cref{thm:omg}\ref{item:omg4} by $\ctassoc(p,q,r)$ and the last path displayed above by $\pairct(p,q,p',q')$, then for any $u,v,z,w:A\times B$ and $p,q,r,p',q',r'$ of appropriate types we have
+\begin{equation*}
+ \begin{array}{l}
+ \pairct(p\ct q, r, p'\ct q', r') \\
+ \ct\; (\pairct(p,q,p',q') \rightwhisker \pairpath(r,r')) \\
+ \ct\; \ctassoc(\pairpath(p,p'),\pairpath(q,q'),\pairpath(r,r'))\\
+ =
+ \begin{array}[t]{l}
+ \apfunc{\pairpath}({\pairpath(\ctassoc(p,q,r),\ctassoc(p',q',r'))})\\
+ \ct\; \pairct(p, q\ct r, p', q'\ct r')\\
+ \ct\; (\pairpath(p,p') \leftwhisker \pairct(q,r,q',r')).
+ \end{array}
+ \end{array}
+\end{equation*}
+Fortunately, we will never have to use any such higher-dimensional coherences.
+
+\index{transport!in product types}%
+We now consider transport in a pointwise product of type families.
+Given type families $ A, B : Z \to \type$, we abusively write $A\times B:Z\to \type$ for the type family defined by $(A\times B)(z) \defeq A(z) \times B(z)$.
+Now given $p : \id[Z]{z}{w}$ and $x : A(z) \times B(z)$, we can transport $x$ along $p$ to obtain an element of $A(w)\times B(w)$.
+
+\begin{thm}\label{thm:trans-prod}
+ In the above situation, we have
+ \[
+ \id[A(w) \times B(w)]
+ {\transfib{A\times B}px}
+ {(\transfib{A}{p}{\proj{1}x}, \transfib{B}{p}{\proj{2}x})}.
+ \]
+\end{thm}
+\begin{proof}
+ By path induction, we may assume $p$ is reflexivity, in which case we have
+ \begin{align*}
+ \transfib{A\times B}px&\jdeq x\\
+ \transfib{A}{p}{\proj{1}x}&\jdeq \proj1x\\
+ \transfib{B}{p}{\proj{2}x}&\jdeq \proj2x.
+ \end{align*}
+ Thus, it remains to show $x = (\proj1 x, \proj2x)$.
+ But this is the propositional uniqueness principle for product types, which, as we remarked above, follows from \cref{thm:path-prod}.
+\end{proof}
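+
+A hedged Lean~4 rendering of \cref{thm:trans-prod} (the names \texttt{transport} and \texttt{transport\_prod} are ours; Lean ordinarily writes transport as \texttt{p ▸ x}):
+```lean
+-- Transport along p : z = w in a family P, made explicit.
+def transport {Z : Type} (P : Z → Type) {z w : Z}
+    (p : z = w) (x : P z) : P w :=
+  p ▸ x
+
+-- Transport in a pointwise product of families acts componentwise.
+-- The proof is path induction followed by reflexivity, as in the text.
+theorem transport_prod {Z : Type} (A B : Z → Type) {z w : Z}
+    (p : z = w) (x : A z × B z) :
+    transport (fun z => A z × B z) p x
+      = (transport A p x.1, transport B p x.2) := by
+  cases p; rfl
+```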
+
+Finally, we consider the functoriality of $\apfunc{}$ under cartesian products.
+Suppose given types $A,B,A',B'$ and functions $g:A\to A'$ and $h:B\to B'$; then we can define a function $f:A\times B\to A'\times B'$ by $f(x) \defeq (g(\proj1x),h(\proj2x))$.
+
+\begin{thm}\label{thm:ap-prod}
+ In the above situation, given $x,y:A\times B$ and $p:\proj1x=\proj1y$ and $q:\proj2x=\proj2y$, we have
+ \[ \id[(f(x)=f(y))]{\ap{f}{\pairpath(p,q)}} {\pairpath(\ap{g}{p},\ap{h}{q})}. \]
+\end{thm}
+\begin{proof}
+ Note first that the above equation is well-typed.
+ On the one hand, since $\pairpath(p,q):x=y$ we have $\ap{f}{\pairpath(p,q)}:f(x)=f(y)$.
+ On the other hand, since $\proj1(f(x))\jdeq g(\proj1x)$ and $\proj2(f(x))\jdeq h(\proj2x)$, we also have $\pairpath(\ap{g}{p},\ap{h}{q}):f(x)=f(y)$.
+
+ Now, by induction, we may assume $x\jdeq(a,b)$ and $y\jdeq(a',b')$, in which case we have $p:a=a'$ and $q:b=b'$.
+ Thus, by path induction, we may assume $p$ and $q$ are reflexivity, in which case the desired equation holds judgmentally.
+\end{proof}
+
+\index{type!product|)}%
+
+\section{\texorpdfstring{$\Sigma$}{Σ}-types}
+\label{sec:compute-sigma}
+
+\index{type!dependent pair|(}%
+Let $A$ be a type and $P:A\to\type$ a type family.
+Recall that the $\Sigma$-type, or dependent pair type, $\sm{x:A} P(x)$ is a generalization of the cartesian product type.
+Thus, we expect its higher groupoid structure to also be a generalization of the previous section.
+In particular, its paths should be pairs of paths, but it takes a little thought to give the correct types of these paths.
+
+Suppose that we have a path $p:w=w'$ in $\sm{x:A}P(x)$.
+Then we get $\ap{\proj{1}}{p}:\proj{1}(w)=\proj{1}(w')$.
+However, we cannot directly ask whether $\proj{2}(w)$ is identical to $\proj{2}(w')$ since they don't have to be in the same type.
+But we can transport\index{transport} $\proj{2}(w)$ along the path $\ap{\proj{1}}{p}$, and this does give us an element of the same type as $\proj{2}(w')$.
+By path induction, we do in fact obtain a path $\trans{\ap{\proj{1}}{p}}{\proj{2}(w)}=\proj{2}(w')$.
+
+Recall from the discussion preceding \cref{lem:mapdep} that
+\narrowequation{
+ \trans{\ap{\proj{1}}{p}}{\proj{2}(w)}=\proj{2}(w')
+}
+can be regarded as the type of paths from $\proj2(w)$ to $\proj2(w')$ which lie over the path $\ap{\proj1}{p}$ in $A$.
+\index{fibration}%
+\index{total!space}%
+Thus, we are saying that a path $w=w'$ in the total space determines (and is determined by) a path $p:\proj1(w)=\proj1(w')$ in $A$ together with a path from $\proj2(w)$ to $\proj2(w')$ lying over $p$, which seems sensible.
+
+\begin{rmk}
+ Note that if we have $x:A$ and $u,v:P(x)$ such that $(x,u)=(x,v)$, it does not follow that $u=v$.
+ All we can conclude is that there exists $p:x=x$ such that $\trans p u = v$.
+ This is a well-known source of confusion for newcomers to type theory, but it makes sense from a topological viewpoint: the existence of a path $(x,u)=(x,v)$ in the total space of a fibration between two points that happen to lie in the same fiber does not imply the existence of a path $u=v$ lying entirely \emph{within} that fiber.
+\end{rmk}
+
+The next theorem states that we can also reverse this process.
+Since it is a direct generalization of \cref{thm:path-prod}, we will be more concise.
+
+\begin{thm}\label{thm:path-sigma}
+Suppose that $P:A\to\type$ is a type family over a type $A$ and let $w,w':\sm{x:A}P(x)$. Then there is an equivalence
+\begin{equation*}
+\eqvspaced{(w=w')}{\dsm{p:\proj{1}(w)=\proj{1}(w')} \trans{p}{\proj{2}(w)}=\proj{2}(w')}.
+\end{equation*}
+\end{thm}
+
+\begin{proof}
+We define a function
+\begin{equation*}
+f : \prd{w,w':\sm{x:A}P(x)} (w=w') \to \dsm{p:\proj{1}(w)=\proj{1}(w')} \trans{p}{\proj{2}(w)}=\proj{2}(w')
+\end{equation*}
+by path induction, with
+\begin{equation*}
+f(w,w,\refl{w})\defeq(\refl{\proj{1}(w)},\refl{\proj{2}(w)}).
+\end{equation*}
+We want to show that $f$ is an equivalence.
+
+In the reverse direction, we define
+\begin{narrowmultline*}
+ g : \prd{w,w':\sm{x:A}P(x)}
+ \Parens{\sm{p:\proj{1}(w)=\proj{1}(w')}\trans{p}{\proj{2}(w)}=\proj{2}(w')}
+ \to
+ \narrowbreak
+ (w=w')
+\end{narrowmultline*}
+by first inducting on $w$ and $w'$, which splits them into $(w_1,w_2)$ and
+$(w_1',w_2')$ respectively, so it suffices to show
+\begin{equation*}
+\Parens{\sm{p:w_1 = w_1'}\trans{p}{w_2}=w_2'} \to ((w_1,w_2)=(w_1',w_2')).
+\end{equation*}
+Next, given a pair $\sm{p:w_1 = w_1'}\trans{p}{w_2}=w_2'$, we can
+use $\Sigma$-induction to get $p : w_1 = w_1'$ and $q :
+\trans{p}{w_2}=w_2'$. Inducting on $p$, we have $q :
+\trans{(\refl{w_1})}{w_2}=w_2'$, and it suffices to show
+$(w_1,w_2)=(w_1,w_2')$. But $\trans{(\refl{w_1})}{w_2} \jdeq w_2$, so
+inducting on $q$ reduces the goal to
+$(w_1,w_2)=(w_1,w_2)$, which we can prove with $\refl{(w_1,w_2)}$.
+
+Next we show that $f(g(r))=r$ for all $w$, $w'$ and
+$r$, where $r$ has type
+\[\dsm{p:\proj{1}(w)=\proj{1}(w')} (\trans{p}{\proj{2}(w)}=\proj{2}(w')).\]
+First, we break apart the pairs $w$, $w'$, and $r$ by pair induction, as in the
+definition of $g$, and then use two path inductions to reduce both components
+of $r$ to \refl{}. Then it suffices to show that
+$f (g(\refl{w_1},\refl{w_2})) = (\refl{w_1},\refl{w_2})$, which is true by definition.
+
+Similarly, to show that $g(f(p))=p$ for all $w$, $w'$,
+and $p : w = w'$, we can do path induction on $p$, and then pair induction to
+split $w$, at which point it suffices to show that
+$g(f (\refl{(w_1,w_2)})) = \refl{(w_1,w_2)}$, which is true by
+definition.
+
+Thus, $f$ has a quasi-inverse, and is therefore an equivalence.
+\end{proof}
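+
+The two directions of \cref{thm:path-sigma} can likewise be sketched in Lean~4, with the same caveats as before: Lean's Prop-valued $\exists$ stands in for the book's $\Sigma$, the theorem names are ours, and only the logical content of the equivalence is visible in a theory with uniqueness of identity proofs.
+```lean
+-- Elimination: a path in the total space yields a path in the base
+-- together with a path lying over it.
+theorem sigma_eq_elim {A : Type} {P : A → Type} {w w' : Sigma P}
+    (h : w = w') : ∃ p : w.1 = w'.1, p ▸ w.2 = w'.2 := by
+  cases h; exact ⟨rfl, rfl⟩
+
+-- Introduction: the analogue of pairpath for dependent pairs, proved
+-- by two path inductions as in the definition of g above.
+theorem sigma_eq_intro {A : Type} {P : A → Type} {a a' : A}
+    {b : P a} {b' : P a'} (p : a = a') (q : p ▸ b = b') :
+    Sigma.mk a b = Sigma.mk a' b' := by
+  cases p; cases q; rfl
+```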
+
+As we did in the case of cartesian products, we can deduce a propositional uniqueness principle as a special case.
+
+\begin{cor}\label{thm:eta-sigma}
+ \index{uniqueness!principle, propositional!for dependent pair types}%
+ For $z:\sm{x:A} P(x)$, we have $z = (\proj1(z),\proj2(z))$.
+\end{cor}
+\begin{proof}
+ We have $\refl{\proj1(z)} : \proj1(z) = \proj1(\proj1(z),\proj2(z))$, so by \cref{thm:path-sigma} it will suffice to exhibit a path $\trans{(\refl{\proj1(z)})}{\proj2(z)} = \proj2(\proj1(z),\proj2(z))$.
+ But both sides are judgmentally equal to $\proj2(z)$.
+\end{proof}
+
+As with binary cartesian products, we can think of
+the backward direction of \cref{thm:path-sigma} as
+an introduction form (\pairpath), the forward direction as
+elimination forms (\projpath{1} and \projpath{2}), and the equivalence
+as giving a propositional computation rule and uniqueness principle for these.
+
+Note that the lifted path $\mathsf{lift}(u,p)$ of $p:x=y$ at $u:P(x)$ defined in \cref{thm:path-lifting} may be identified with the special case of the introduction form
+\[\pairpath(p,\refl{\trans p u}):(x,u) = (y,\trans p u).\]
+\index{transport!in dependent pair types}%
+This appears in the statement of action of transport on $\Sigma$-types, which is also a generalization of the action for binary cartesian products:
+
+\begin{thm}\label{transport-Sigma}
+ Suppose we have type families
+ %
+ \begin{equation*}
+ P:A\to\type
+ \qquad\text{and}\qquad
+ Q:\Parens{\sm{x:A} P(x)}\to\type.
+ \end{equation*}
+ %
+ Then we can construct the type family over $A$ defined by
+ \begin{equation*}
+ x \mapsto \sm{u:P(x)} Q(x,u).
+ \end{equation*}
+ For any path $p:x=y$ and any $(u,z):\sm{u:P(x)} Q(x,u)$ we have
+ \begin{equation*}
+ \trans{p}{u,z}=\big(\trans{p}{u},\,\trans{\pairpath(p,\refl{\trans pu})}{z}\big).
+ \end{equation*}
+\end{thm}
+
+\begin{proof}
+Immediate by path induction.
+\end{proof}
+
+We leave it to the reader to state and prove a generalization of
+\cref{thm:ap-prod} (see \cref{ex:ap-sigma}), and to characterize
+the reflexivity, inverses, and composition of $\Sigma$-types
+componentwise.
+
+\index{type!dependent pair|)}%
+
+\section{The unit type}
+\label{sec:compute-unit}
+
+\index{type!unit|(}%
+Trivial cases are sometimes important, so we mention briefly the case of the unit type~\unit.
+
+\begin{thm}\label{thm:path-unit}
+ For any $x,y:\unit$, we have $\eqv{(x=y)}{\unit}$.
+\end{thm}
+
+It may be tempting to begin this proof by $\unit$-induction on $x$ and $y$, reducing the problem to $\eqv{(\ttt=\ttt)}{\unit}$.
+However, at this point we would be stuck, since we would be unable to perform a path induction on $p:\ttt=\ttt$.
+Thus, we instead work with a general $x$ and $y$ as much as possible, reducing them to $\ttt$ by induction only at the last moment.
+
+\begin{proof}
+ A function $(x=y)\to\unit$ is easy to define by sending everything to \ttt.
+ Conversely, for any $x,y:\unit$ we may assume by induction that $x\jdeq \ttt\jdeq y$.
+ In this case we have $\refl{\ttt}:x=y$, yielding a constant function $\unit\to(x=y)$.
+
+ To show that these are inverses, consider first an element $u:\unit$.
+ We may assume that $u\jdeq\ttt$, but this is also the result of the composite $\unit \to (x=y)\to\unit$.
+
+ On the other hand, suppose given $p:x=y$.
+ By path induction, we may assume $x\jdeq y$ and $p$ is $\refl x$.
+ We may then assume that $x$ is \ttt, in which case the composite $(x=y) \to \unit\to(x=y)$ takes $p$ to $\refl x$, i.e.\ to~$p$.
+\end{proof}
+
+In particular, any two elements of $\unit$ are equal.
+We leave it to the reader to formulate this equivalence in terms of introduction, elimination, computation, and uniqueness rules.
+\index{transport!in unit type}%
+The transport lemma for \unit is simply the transport lemma for constant type families (\cref{thm:trans-trivial}).
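+
+In Lean~4 the corresponding fact is immediate, since \texttt{Unit} has definitional $\eta$:
+```lean
+-- Any two elements of the unit type are equal, by reflexivity alone:
+-- both sides are definitionally Unit.unit.
+example (x y : Unit) : x = y := rfl
+```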
+
+\index{type!unit|)}%
+
+\section{\texorpdfstring{$\Pi$}{Π}-types and the function extensionality axiom}
+\label{sec:compute-pi}
+
+\index{type!dependent function|(}%
+\index{type!function|(}%
+\index{homotopy|(}%
+Given a type $A$ and a type family $B : A \to \type$, consider the dependent function type $\prd{x:A}B(x)$.
+We expect the type $f=g$ of paths from $f$ to $g$ in $\prd{x:A} B(x)$ to be equivalent to
+the type of pointwise paths:\index{pointwise!equality of functions}
+\begin{equation}
+ \eqvspaced{(\id{f}{g})}{\Parens{\prd{x:A} (\id[B(x)]{f(x)}{g(x)})}}.\label{eq:path-forall}
+\end{equation}
+From a traditional perspective, this would say that two functions which are equal at each point are equal as functions.
+\index{continuity of functions in type theory@``continuity'' of functions in type theory}%
+From a topological perspective, it would say that a path in a function space is the same as a continuous homotopy.
+\index{functoriality of functions in type theory@``functoriality'' of functions in type theory}%
+And from a categorical perspective, it would say that an isomorphism in a functor category is a natural family of isomorphisms.
+
+Unlike the case in the previous sections, however, the basic type theory presented in \cref{cha:typetheory} is insufficient to prove~\eqref{eq:path-forall}.
+All we can say is that there is a certain function
+\begin{equation}\label{eq:happly}
+ \happly : (\id{f}{g}) \to \prd{x:A} (\id[B(x)]{f(x)}{g(x)})
+\end{equation}
+which is easily defined by path induction.
+For the moment, therefore, we will assume:
+
+\begin{axiom}[Function extensionality]\label{axiom:funext}
+ \indexsee{axiom!function extensionality}{function extensionality}%
+ \indexdef{function extensionality}%
+ For any $A$, $B$, $f$, and $g$, the function~\eqref{eq:happly} is an equivalence.
+\end{axiom}
+
+We will see in later chapters that this axiom follows both from univalence (see \cref{sec:compute-universe,sec:univalence-implies-funext}) and from an interval type (see \cref{sec:interval} and \cref{ex:funext-from-interval}).
+
+In particular, \cref{axiom:funext} implies that~\eqref{eq:happly} has a quasi-inverse
+\[
+\funext : \Parens{\prd{x:A} (\id{f(x)}{g(x)})} \to {(\id{f}{g})}.
+\]
+This function is also referred to as ``function extensionality''.
+As we did with $\pairpath$ in \cref{sec:compute-cartprod}, we can regard $\funext$ as an \emph{introduction rule} for the type $\id f g$.
+From this point of view, $\happly$ is the \emph{elimination rule}, while the homotopies witnessing $\funext$ as quasi-inverse to $\happly$ become a propositional computation rule\index{computation rule!propositional!for identities between functions}
+\[
+\id{\happly({\funext{(h)}},x)}{h(x)} \qquad\text{for }h:\prd{x:A} (\id{f(x)}{g(x)})
+\]
+and a propositional uniqueness principle\index{uniqueness!principle!for identities between functions}:
+\[
+\id{p}{\funext (x \mapsto \happly(p,{x}))} \qquad\text{for } p: \id f g.
+\]
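+
+Lean~4 happens to provide both directions natively, which makes for a convenient (if UIP-bound) illustration: \texttt{funext} is a theorem there (derived from quotient types), and \texttt{congrFun} plays the role of $\happly$.
+```lean
+-- funext: pointwise equal (dependent) functions are equal.
+example {A : Type} {B : A → Type} (f g : (x : A) → B x)
+    (h : ∀ x, f x = g x) : f = g :=
+  funext h
+
+-- congrFun is the elimination direction, the `happly` of the text.
+example {A : Type} {B : A → Type} (f g : (x : A) → B x)
+    (p : f = g) (x : A) : f x = g x :=
+  congrFun p x
+```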
+
+We can also compute the identity, inverses, and composition in $\Pi$-types; they are simply given by pointwise operations:\index{pointwise!operations on functions}
+\begin{align*}
+\refl{f} &= \funext(x \mapsto \refl{f(x)}) \\
+\opp{\alpha} &= \funext (x \mapsto \opp{\happly (\alpha,x)}) \\
+{\alpha} \ct \beta &= \funext (x \mapsto {\happly({\alpha},x) \ct \happly({\beta},x)}).
+\end{align*}
+The first of these equalities follows from the definition of $\happly$, while the second and third are easy path inductions.
+
+Since the non-dependent function type $A\to B$ is a special case of the dependent function type $\prd{x:A} B(x)$ when $B$ is independent of $x$, everything we have said above applies in non-dependent cases as well.
+\index{transport!in function types}%
+The rules for transport, however, are somewhat simpler in the non-dependent case.
+Given a type $X$, a path $p:\id[X]{x_1}{x_2}$, type families $A,B:X\to \type$, and a function $f : A(x_1) \to B(x_1)$, we have
+\begin{align}\label{eq:transport-arrow}
+ \transfib{A\to B}{p}{f} &=
+ \Big(x \mapsto \transfib{B}{p}{f(\transfib{A}{\opp p}{x})}\Big)
+\end{align}
+where $A\to B$ denotes abusively the type family $X\to \type$ defined by
+\[(A\to B)(x) \defeq (A(x)\to B(x)).\]
+In other words, when we transport a function $f:A(x_1)\to B(x_1)$ along a path $p:x_1=x_2$, we obtain the function $A(x_2)\to B(x_2)$ which transports its argument backwards along $p$ (in the type family $A$), applies $f$, and then transports the result forwards along $p$ (in the type family $B$).
+This can be proven easily by path induction.
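+
+The path-induction proof of~\eqref{eq:transport-arrow} can be replayed in Lean~4, with an explicit \texttt{transport} helper (names ours):
+```lean
+def transport {X : Type} (P : X → Type) {x₁ x₂ : X}
+    (p : x₁ = x₂) (u : P x₁) : P x₂ :=
+  p ▸ u
+
+-- Transporting a function: pull the argument back along p, apply f,
+-- then push the result forward along p.
+theorem transport_arrow {X : Type} (A B : X → Type) {x₁ x₂ : X}
+    (p : x₁ = x₂) (f : A x₁ → B x₁) :
+    transport (fun x => A x → B x) p f
+      = fun a => transport B p (f (transport A p.symm a)) := by
+  cases p; rfl
+```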
+
+\index{transport!in dependent function types}%
+Transporting dependent functions is similar, but more complicated.
+Suppose given $X$ and $p$ as before, type families $A:X\to \type$ and $B:\prd{x:X} (A(x)\to\type)$, and also a dependent function $f : \prd{a:A(x_1)} B(x_1,a)$.
+Then for $a:A(x_2)$, we have
+\begin{narrowmultline*}
+ \transfib{\Pi_A(B)}{p}{f}(a) = \narrowbreak
+ \Transfib{\widehat{B}}{\opp{(\pairpath(\opp{p},\refl{ \trans{\opp p}{a} }))}}{f(\transfib{A}{\opp p}{a})}
+\end{narrowmultline*}
+where $\Pi_A(B)$ and $\widehat{B}$ denote respectively the type families
+\begin{equation}\label{eq:transport-arrow-families}
+\begin{array}{rclcl}
+\Pi_A(B) &\defeq& \big(x\mapsto \prd{a:A(x)} B(x,a) \big) &:& X\to \type\\
+\widehat{B} &\defeq& \big(w \mapsto B(\proj1w,\proj2w) \big) &:& \big(\sm{x:X} A(x)\big) \to \type.
+\end{array}
+\end{equation}
+If these formulas look a bit intimidating, don't worry about the details.
+The basic idea is just the same as for the non-dependent function type: we transport the argument backwards, apply the function, and then transport the result forwards again.
+
+Now recall that for a general type family $P:X\to\type$, in \cref{sec:functors} we defined the type of \emph{dependent paths} over $p:\id[X]xy$ from $u:P(x)$ to $v:P(y)$ to be $\id[P(y)]{\trans{p}{u}}{v}$.
+When $P$ is a family of function types, there is an equivalent way to represent this which is often more convenient.
+\index{path!dependent!in function types}
+
+\begin{lem}\label{thm:dpath-arrow}
+ Given type families $A,B:X\to\type$ and $p:\id[X]xy$, and also $f:A(x)\to B(x)$ and $g:A(y)\to B(y)$, we have an equivalence
+ \[ \eqvspaced{ \big(\trans{p}{f} = {g}\big) } { \prd{a:A(x)} (\trans{p}{f(a)} = g(\trans{p}{a})) }. \]
+ Moreover, if $q:\trans{p}{f} = {g}$ corresponds under this equivalence to $\widehat q$, then for $a:A(x)$, the path
+ \[ \happly(q,\trans p a) : (\trans p f)(\trans p a) = g(\trans p a)\]
+ is equal to the concatenated path $i\ct j\ct k$, where
+ \begin{itemize}
+ \item $i:(\trans p f)(\trans p a) = \trans p {f (\trans {\opp p}{\trans p a})}$ comes from~\eqref{eq:transport-arrow},
+ \item $j:\trans p {f (\trans {\opp p}{\trans p a})} = \trans p {f(a)}$ comes from \cref{thm:transport-concat,thm:omg}, and
+ \item $k:\trans p {f(a)}= g(\trans p a)$ is $\widehat{q}(a)$.
+ \end{itemize}
+\end{lem}
+\begin{proof}
+ By path induction, we may assume $p$ is reflexivity, in which case the desired equivalence reduces to function extensionality.
+ The second statement then follows by the computation rule for function extensionality.
+\end{proof}
+
+In general, it happens quite frequently that we want to consider a concatenation of paths each of which arises from some previously proven lemmas or hypothesized objects, and it can be rather tedious to describe this by giving a name to each path in the concatenation as we did in the second statement above.
+Thus, we adopt a convention of writing such concatenations in the familiar mathematical style of ``chains of equalities with reasons'', and allow ourselves to omit reasons that the reader can easily fill in.
+For instance, the path $i\ct j\ct k$ from \cref{thm:dpath-arrow} would be written like this:
+ \begin{align*}
+ (\trans p f)(\trans p a)
+ &= \trans p {f (\trans {\opp p}{\trans p a})}
+ \tag{by~\eqref{eq:transport-arrow}}\\
+ &= \trans p {f(a)}\\
+ &= g(\trans p a).
+ \tag{by $\widehat{q}$}
+ \end{align*}
+In ordinary mathematics, such a chain of equalities would be merely proving that two things are equal.
+We are enhancing this by using it to describe a \emph{particular} path between them.
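+
+This style of path-description has a direct analogue in proof assistants: Lean~4's \texttt{calc} blocks record exactly such a chain, with a particular proof term as the ``reason'' for each step.
+```lean
+-- A chain of equalities with reasons: each step names the proof used,
+-- and the whole block denotes their concatenation.
+example (a b c d : Nat) (h₁ : a = b) (h₂ : b = c) (h₃ : c = d) :
+    a = d :=
+  calc a = b := h₁
+    _ = c := h₂
+    _ = d := h₃
+```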
+
+As usual, there is a version of \cref{thm:dpath-arrow} for dependent functions that is similar, but more complicated.
+\index{path!dependent!in dependent function types}
+
+\begin{lem}\label{thm:dpath-forall}
+ Given type families $A:X\to\type$ and $B:\prd{x:X} A(x)\to\type$ and $p:\id[X]xy$, and also $f:\prd{a:A(x)} B(x,a)$ and $g:\prd{a:A(y)} B(y,a)$, we have an equivalence
+ \[ \eqvspaced{ \big(\trans{p}{f} = {g}\big) } { \Parens{\prd{a:A(x)} \transfib{\widehat{B}}{\pairpath(p,\refl{\trans pa})}{f(a)} = g(\trans{p}{a}) } } \]
+ with $\widehat{B}$ as in~\eqref{eq:transport-arrow-families}.
+\end{lem}
+
+We leave it to the reader to prove this and to formulate a suitable computation rule.
+
+\index{homotopy|)}%
+\index{type!dependent function|)}%
+\index{type!function|)}%
+
+\section{Universes and the univalence axiom}
+\label{sec:compute-universe}
+
+\index{type!universe|(}%
+\index{equivalence|(}%
+Given two types $A$ and $B$, we may consider them as elements of some universe type \type, and thereby form the identity type $\id[\type]AB$.
+As mentioned in the introduction, \emph{univalence} is the identification of $\id[\type]AB$ with the type $(\eqv AB)$ of equivalences from $A$ to $B$, which we described in \cref{sec:basics-equivalences}.
+We perform this identification by way of the following canonical function.
+
+\begin{lem}\label{thm:idtoeqv}
+ For types $A,B:\type$, there is a certain function,
+ \begin{equation}\label{eq:uidtoeqv}
+ \idtoeqv : (\id[\type]AB) \to (\eqv A B),
+ \end{equation}
+ defined in the proof.
+\end{lem}
+\begin{proof}
+ We could construct this directly by induction on equality, but the following description is more convenient.
+ \index{identity!function}%
+ \index{function!identity}%
+ Note that the identity function $\idfunc[\type]:\type\to\type$ may be regarded as a type family indexed by the universe \type; it assigns to each type $X:\type$ the type $X$ itself.
+ (When regarded as a fibration, its total space is the type $\sm{A:\type}A$ of ``pointed types''; see also \cref{sec:object-classification}.)
+ Thus, given a path $p:A =_\type B$, we have a transport\index{transport} function $\transf{p}:A \to B$.
+ We claim that $\transf{p}$ is an equivalence.
+ But by induction, it suffices to assume that $p$ is $\refl A$, in which case $\transf{p} \jdeq \idfunc[A]$, which is an equivalence by \cref{eg:idequiv}.
+ Thus, we can define $\idtoeqv(p)$ to be $\transf{p}$ (together with the above proof that it is an equivalence).
+\end{proof}
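+
+As a sketch only: the map $\idtoeqv$ itself can be written down in any proof assistant, for instance in Lean~4 below, with a bare-bones quasi-inverse-style structure \texttt{Equiv'} of our own devising (Mathlib's \texttt{Equiv.cast} plays the same role). The univalence \emph{axiom}, by contrast, cannot be soundly added to Lean, whose logic proves uniqueness of identity proofs.
+```lean
+-- A minimal quasi-inverse-style notion of equivalence.
+structure Equiv' (A B : Type) where
+  toFun    : A → B
+  invFun   : B → A
+  leftInv  : ∀ a, invFun (toFun a) = a
+  rightInv : ∀ b, toFun (invFun b) = b
+
+-- idtoeqv: path induction reduces to the identity equivalence.
+def idtoeqv {A B : Type} (p : A = B) : Equiv' A B := by
+  cases p
+  exact ⟨id, id, fun _ => rfl, fun _ => rfl⟩
+```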
+
+We would like to say that \idtoeqv is an equivalence.
+However, as with $\happly$ for function types, the type theory described in \cref{cha:typetheory} is insufficient to guarantee this.
+Thus, as we did for function extensionality, we formulate this property as an axiom: Voevodsky's \emph{univalence axiom}.
+
+\begin{axiom}[Univalence]\label{axiom:univalence}
+ \indexdef{univalence axiom}%
+ \indexsee{axiom!univalence}{univalence axiom}%
+ For any $A,B:\type$, the function~\eqref{eq:uidtoeqv} is an equivalence.
+\end{axiom}
+
+In particular, therefore, we have
+ \[
+\eqv{(\id[\type]{A}{B})}{(\eqv A B)}.
+\]
+
+Technically, the univalence axiom is a statement about a particular universe type $\UU$.
+If a universe $\UU$ satisfies this axiom, we say that it is \define{univalent}.
+\indexdef{type!universe!univalent}%
+\indexdef{univalent universe}%
+Except when otherwise noted (e.g.\ in \cref{sec:univalence-implies-funext}) we will assume that \emph{all} universes are univalent.
+
+\begin{rmk}
+ It is important for the univalence axiom that we defined $\eqv AB$ using a ``good'' version of $\isequiv$ as described in \cref{sec:basics-equivalences}, rather than (say) as $\sm{f:A\to B} \qinv(f)$.
+ See \cref{ex:qinv-univalence}.
+\end{rmk}
+
+In particular, univalence means that \emph{equivalent types may be identified}.
+As we did in previous sections, it is useful to break this equivalence into:
+%
+\symlabel{ua}
+\begin{itemize}
+\item An introduction rule for {(\id[\type]{A}{B})}, denoted $\ua$ for ``univalence axiom'':
+ \[
+ \ua : ({\eqv A B}) \to (\id[\type]{A}{B}).
+ \]
+\item The elimination rule, which is $\idtoeqv$,
+ \[
+ \idtoeqv \jdeq \transfibf{X \mapsto X} : (\id[\type]{A}{B}) \to (\eqv A B).
+ \]
+\item The propositional computation rule\index{computation rule!propositional!for univalence},
+ \[
+ \transfib{X \mapsto X}{\ua(f)}{x} = f(x).
+ \]
+\item The propositional uniqueness principle: \index{uniqueness!principle, propositional!for univalence}
+ for any $p : \id A B$,
+ \[
+ \id{p}{\ua(\transfibf{X \mapsto X}(p))}.
+ \]
+\end{itemize}
+%
+We can also identify the reflexivity, concatenation, and inverses of equalities in the universe with the corresponding operations on equivalences:
+\begin{align*}
+ \refl{A} &= \ua(\idfunc[A]) \\
+ \ua(f) \ct \ua(g) &= \ua(g\circ f) \\
+ \opp{\ua(f)} &= \ua(f^{-1}).
+\end{align*}
+The first of these follows because $\idfunc[A] = \idtoeqv(\refl{A})$ by definition of \idtoeqv, and \ua is the inverse of \idtoeqv.
+For the second, if we define $p \defeq \ua(f)$ and $q\defeq \ua(g)$, then we have
+\[ \ua(g\circ f) = \ua(\idtoeqv(q) \circ \idtoeqv(p)) = \ua(\idtoeqv(p\ct q)) = p\ct q\]
+using \cref{thm:transport-concat} and the definition of $\idtoeqv$.
+The third is similar.
+
+The following observation, which is a special case of \cref{thm:transport-compose}, is often useful when applying the univalence axiom.
+
+\begin{lem}\label{thm:transport-is-ap}
+ For any type family $B:A\to\type$ and $x,y:A$ with a path $p:x=y$ and $u:B(x)$, we have
+ \begin{align*}
+ \transfib{B}{p}{u} &= \transfib{X\mapsto X}{\apfunc{B}(p)}{u}\\
+ &= \idtoeqv(\apfunc{B}(p))(u).
+ \end{align*}
+\end{lem}
+
+\index{equivalence|)}%
+\index{type!universe|)}%
+
+\section{Identity type}
+\label{sec:compute-paths}
+
+\index{type!identity|(}%
+Just as the type $\id[A]{a}{a'}$ is characterized up to isomorphism, with
+a separate ``definition'' for each $A$, so there is no simple uniform
+characterization of the type $\id[{\id[A]{a}{a'}}]{p}{q}$ of paths between
+paths $p,q : \id[A]{a}{a'}$.
+However, our other general classes of theorems do extend to identity types, such as the fact that they respect equivalence.
+
+\begin{thm}\label{thm:paths-respects-equiv}
+ If $f : A \to B$ is an equivalence, then for all $a,a':A$, so is
+ \[\apfunc{f} : (\id[A]{a}{a'}) \to (\id[B]{f(a)}{f(a')}).\]
+\end{thm}
+\begin{proof}
+ Let $\opp f$ be a quasi-inverse of $f$, with homotopies
+ %
+ \begin{equation*}
+ \alpha:\prd{b:B} (f(\opp f(b))=b)
+ \qquad\text{and}\qquad
+ \beta:\prd{a:A} (\opp f(f(a)) = a).
+ \end{equation*}
+ %
+ The quasi-inverse of $\apfunc{f}$ is, essentially,
+ \[\apfunc{\opp f} : (\id{f(a)}{f(a')}) \to (\id{\opp f(f(a))}{\opp f(f(a'))}).\]
+ However, in order to obtain an element of $\id[A]{a}{a'}$ from $\apfunc{\opp f}(q)$, we must concatenate with the paths $\opp{\beta_a}$ and $\beta_{a'}$ on either side.
+ To show that this gives a quasi-inverse of $\apfunc{f}$, on one hand we must show that for any $p:\id[A]{a}{a'}$ we have
+ \[ \opp{\beta_a} \ct \apfunc{\opp f}(\apfunc{f}(p)) \ct \beta_{a'} = p. \]
+ This follows from the functoriality of $\apfunc{}$ and the naturality of homotopies, \cref{lem:ap-functor,lem:htpy-natural}.
+ On the other hand, we must show that for any $q:\id[B]{f(a)}{f(a')}$ we have
+ \[ \apfunc{f}\big( \opp{\beta_a} \ct \apfunc{\opp f}(q) \ct \beta_{a'} \big) = q. \]
+ The proof of this is a little more involved, but each step is again an application of \cref{lem:ap-functor,lem:htpy-natural} (or simply canceling inverse paths):
+ \begin{align*}
+ \apfunc{f}\big( \narrowamp\opp{\beta_a} \ct \apfunc{\opp f}(q) \ct \beta_{a'} \big) \narrowbreak
+ &= \opp{\alpha_{f(a)}} \ct {\alpha_{f(a)}} \ct
+ \apfunc{f}\big( \opp{\beta_a} \ct \apfunc{\opp f}(q) \ct \beta_{a'} \big)
+ \ct \opp{\alpha_{f(a')}} \ct {\alpha_{f(a')}}\\
+ &= \opp{\alpha_{f(a)}} \ct
+ \apfunc f \big(\apfunc{\opp f}\big(\apfunc{f}\big( \opp{\beta_a} \ct \apfunc{\opp f}(q) \ct \beta_{a'} \big)\big)\big)
+ \ct {\alpha_{f(a')}}\\
+ &= \opp{\alpha_{f(a)}} \ct
+ \apfunc f \big(\beta_a \ct \opp{\beta_a} \ct \apfunc{\opp f}(q) \ct \beta_{a'} \ct \opp{\beta_{a'}} \big)
+ \ct {\alpha_{f(a')}}\\
+ &= \opp{\alpha_{f(a)}} \ct
+ \apfunc f (\apfunc{\opp f}(q))
+ \ct {\alpha_{f(a')}}\\
+ &= q.\qedhere
+ \end{align*}
+\end{proof}
+
+Thus, if for some type $A$ we have a full characterization of $\id[A]{a}{a'}$, the type $\id[{\id[A]{a}{a'}}]{p}{q}$ is determined as well.
+For example:
+\begin{itemize}
+\item Paths $p = q$, where $p,q : \id[A \times B]{w}{w'}$, are equivalent to pairs of paths
+ \[\id[{\id[A]{\proj{1} w}{\proj{1} w'}}]{\projpath{1}{p}}{\projpath{1}{q}}
+ \quad\text{and}\quad
+ \id[{\id[B]{\proj{2} w}{\proj{2} w'}}]{\projpath{2}{p}}{\projpath{2}{q}}.
+ \]
+\item Paths $p = q$, where $p,q : \id[\prd{x:A} B(x)]{f}{g}$, are equivalent to homotopies
+ \[\prd{x:A} (\id[f(x)=g(x)] {\happly(p)(x)}{\happly(q)(x)}).\]
+\end{itemize}
+
+\index{transport!in identity types}%
+Next we consider transport in families of paths, i.e.\ transport in $C:A\to\type$ where each $C(x)$ is an identity type.
+The simplest case is when $C(x)$ is a type of paths in $A$ itself, perhaps with one endpoint fixed.
+
+\begin{lem}\label{cor:transport-path-prepost}
+ For any type $A$ with $a:A$ and $x_1,x_2:A$, and any path $p:x_1=x_2$, we have
+ %
+ \begin{align*}
+ \transfib{x \mapsto (\id{a}{x})} {p} {q} &= q \ct p
+ & &\text{for $q:a=x_1$,}\\
+ \transfib{x \mapsto (\id{x}{a})} {p} {q} &= \opp {p} \ct q
+ & &\text{for $q:x_1=a$,}\\
+ \transfib{x \mapsto (\id{x}{x})} {p} {q} &= \opp{p} \ct q \ct p
+ & &\text{for $q:x_1=x_1$.}
+ \end{align*}
+\end{lem}
+\begin{proof}
+ Path induction on $p$, followed by the unit laws for composition.
+\end{proof}
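+
+These equations can be stated in Lean~4 using an explicit \texttt{transport} (a definition of ours standing in for $\transfibf{}$). Note that Lean's \texttt{Eq} is a definitionally proof-irrelevant proposition, so the equations are trivially true there; the sketch records only the statements and the shape of the proof (path induction, then definitional unit laws), not the homotopical content.
+
+```lean
+-- Explicit transport along p : x = y in a family C.
+def transport {A : Type} (C : A → Sort _) {x y : A}
+    (p : x = y) (c : C x) : C y := p ▸ c
+
+-- Transport in x ↦ (a = x) is post-composition with p …
+theorem transport_post {A : Type} {a x₁ x₂ : A}
+    (p : x₁ = x₂) (q : a = x₁) :
+    transport (fun x => a = x) p q = q.trans p := by
+  cases p; rfl
+
+-- … and in x ↦ (x = a) it is pre-composition with p⁻¹.
+theorem transport_pre {A : Type} {a x₁ x₂ : A}
+    (p : x₁ = x₂) (q : x₁ = a) :
+    transport (fun x => x = a) p q = p.symm.trans q := by
+  cases p; rfl
+```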
+
+In other words, transporting with ${x \mapsto \id{c}{x}}$ is post-composition, and transporting with ${x \mapsto \id{x}{c}}$ is contravariant pre-composition.
+These may be familiar as the functorial actions of the covariant and contravariant hom-functors $\hom(c, {\blank})$ and $\hom({\blank},c)$ in category theory.
+
+Similarly, we can prove the following more general form of \cref{cor:transport-path-prepost}, which is related to \cref{thm:transport-compose}.
+
+\begin{thm}\label{thm:transport-path}
+ For $f,g:A\to B$, with $p : \id[A]{a}{a'}$ and $q : \id[B]{f(a)}{g(a)}$, we have
+ \begin{equation*}
+ \id[f(a') = g(a')]{\transfib{x \mapsto \id[B]{f(x)}{g(x)}}{p}{q}}
+ {\opp{(\apfunc{f}{p})} \ct q \ct \apfunc{g}{p}}.
+ \end{equation*}
+\end{thm}
+
+Because $\apfunc{(x \mapsto x)}$ is the identity function and $\apfunc{(x \mapsto c)}$ (where $c$ is a constant) is $p \mapsto\refl{c}$, \cref{cor:transport-path-prepost} is a special case.
+A yet more general version is when $B$ can be a family of types indexed on $A$:
+
+\begin{thm}\label{thm:transport-path2}
+ Let $B : A \to \type$ and $f,g : \prd{x:A} B(x)$, with $p : \id[A]{a}{a'}$ and $q : \id[B(a)]{f(a)}{g(a)}$.
+ Then we have
+ \begin{equation*}
+ \transfib{x \mapsto \id[B(x)]{f(x)}{g(x)}}{p}{q} =
+ \opp{(\apdfunc{f}(p))} \ct \apfunc{(\transfibf{B}{p})}(q) \ct \apdfunc{g}(p).
+ \end{equation*}
+\end{thm}
+
+Finally, as in \cref{sec:compute-pi}, for families of identity types there is another equivalent characterization of dependent paths.
+\index{path!dependent!in identity types}
+
+\begin{thm}\label{thm:dpath-path}
+ For $p:\id[A]a{a'}$ with $q:a=a$ and $r:a'=a'$, we have
+ \[ \eqvspaced{ \big(\transfib{x\mapsto (x=x)}{p}{q} = r \big) }{ \big( q \ct p = p \ct r \big). } \]
+\end{thm}
+\begin{proof}
+ Path induction on $p$, followed by the fact that composing with the unit equalities $q\ct 1 = q$ and $r = 1\ct r$ is an equivalence.
+\end{proof}
+
+There are more general equivalences involving the application of functions, akin to \cref{thm:transport-path,thm:transport-path2}.
+
+\index{type!identity|)}%
+
+\section{Coproducts}
+\label{sec:compute-coprod}
+
+\index{type!coproduct|(}%
+\index{encode-decode method|(}%
+So far, most of the type formers we have considered have been what are called \emph{negative}.
+\index{type!negative}\index{negative!type}%
+\index{polarity}%
+Intuitively, this means that their elements are determined by their behavior under the elimination rules: a (dependent) pair is determined by its projections, and a (dependent) function is determined by its values.
+The identity types of negative types can almost always be characterized straightforwardly, along with all of their higher structure, as we have done in \crefrange{sec:compute-cartprod}{sec:compute-pi}.
+The universe is not exactly a negative type, but its identity types behave similarly: we have a straightforward characterization (univalence) and a description of the higher structure.
+Identity types themselves, of course, are a special case.
+
+We now consider our first example of a \emph{positive} type former.
+\index{type!positive}\index{positive!type}%
+Again informally, a positive type is one which is ``presented'' by certain constructors, with the universal property of a presentation\index{presentation!of a positive type by its constructors} being expressed by its elimination rule.
+(Categorically speaking, a positive type has a ``mapping out'' universal property, while a negative type has a ``mapping in'' universal property.)
+Because computing with presentations is, in general, an uncomputable problem, for positive types we cannot always expect a straightforward characterization of the identity type.
+However, in many particular cases, a characterization or partial characterization does exist, and can be obtained by the general method that we introduce with this example.
+
+(Technically, our chosen presentation of cartesian products and $\Sigma$-types is also positive.
+However, because these types also admit a negative presentation which differs only slightly, their identity types have a direct characterization that does not require the method to be described here.)
+
+Consider the coproduct type $A+B$, which is ``presented'' by the injections $\inl:A\to A+B$ and $\inr:B\to A+B$.
+Intuitively, we expect that $A+B$ contains exact copies of $A$ and $B$ disjointly, so that we should have
+\begin{align}
+ {(\inl(a_1)=\inl(a_2))}&\eqvsym {(a_1=a_2)} \label{eq:inlinj}\\
+ {(\inr(b_1)=\inr(b_2))}&\eqvsym {(b_1=b_2)}\\
+ {(\inl(a)= \inr(b))} &\eqvsym {\emptyt}. \label{eq:inlrdj}
+\end{align}
+We prove this as follows.
+Fix an element $a_0:A$; we will characterize the type family
+\begin{equation}
+ (x\mapsto (\inl(a_0)=x)) : A+B \to \type.\label{eq:sumcodefam}
+\end{equation}
+A similar argument would characterize the analogous family $x\mapsto (x = \inr(b_0))$ for any $b_0:B$.
+Together, these characterizations imply~\eqref{eq:inlinj}--\eqref{eq:inlrdj}.
+
+In order to characterize~\eqref{eq:sumcodefam}, we will define a type family $\code:A+B\to\type$ and show that $\prd{x:A+B} (\eqv{(\inl(a_0)=x)}{\code(x)})$.
+Since we want to conclude~\eqref{eq:inlinj} from this, we should have $\code(\inl(a)) = (a_0=a)$, and since we also want to conclude~\eqref{eq:inlrdj}, we should have $\code (\inr(b)) = \emptyt$.
+The essential insight is that we can use the recursion principle of $A+B$ to \emph{define} $\code:A+B\to\type$ by these two equations:
+\begin{align*}
+ \code(\inl(a)) &\defeq (a_0=a),\\
+ \code(\inr(b)) &\defeq \emptyt.
+\end{align*}
+This is a very simple example of a proof technique that is used quite a
+bit when doing homotopy theory in homotopy type theory; see
+e.g.\ \cref{sec:pi1-s1-intro,sec:general-encode-decode}.
+%
+We can now show:
+
+\begin{thm}\label{thm:path-coprod}
+ For all $x:A+B$ we have $\eqv{(\inl(a_0)=x)}{\code(x)}$.
+\end{thm}
+\begin{proof}
+ The key to the following proof is that we do it for all points $x$ together, enabling us to use the elimination principle for the coproduct.
+ We first define a function
+ \[ \encode : \prd{x:A+B}{p:\inl(a_0)=x} \code(x) \]
+ by transporting reflexivity along $p$:
+ \[ \encode(x,p) \defeq \transfib{\code}{p}{\refl{a_0}}. \]
+ Note that $\refl{a_0} : \code(\inl(a_0))$, since $\code(\inl(a_0))\jdeq (a_0=a_0)$ by definition of \code.
+ Next, we define a function
+ \[ \decode : \prd{x:A+B}{c:\code(x)} (\inl(a_0)=x). \]
+ To define $\decode(x,c)$, we may first use the elimination principle of $A+B$ to divide into cases based on whether $x$ is of the form $\inl(a)$ or the form $\inr(b)$.
+
+ In the first case, where $x\jdeq \inl(a)$, then $\code(x)\jdeq (a_0=a)$, so that $c$ is an identification between $a_0$ and $a$.
+ Thus, $\apfunc{\inl}(c):(\inl(a_0)=\inl(a))$ so we can define this to be $\decode(\inl(a),c)$.
+
+ In the second case, where $x\jdeq \inr(b)$, then $\code(x)\jdeq \emptyt$, so that $c$ inhabits the empty type.
+ Thus, the elimination rule of $\emptyt$ yields a value for $\decode(\inr(b),c)$.
+
+ This completes the definition of \decode; we now show that $\encode(x,{\blank})$ and $\decode(x,{\blank})$ are quasi-inverses for all $x$.
+ On the one hand, suppose given $x:A+B$ and $p:\inl(a_0)=x$; we want to show
+ \narrowequation{
+ \decode(x,\encode(x,p)) = p.
+ }
+ But now by (based) path induction, it suffices to consider $x\jdeq\inl(a_0)$ and $p\jdeq \refl{\inl(a_0)}$:
+ \begin{align*}
+ \decode(x,\encode(x,p))
+ &\jdeq \decode(\inl(a_0),\encode(\inl(a_0),\refl{\inl(a_0)}))\\
+ &\jdeq \decode(\inl(a_0),\transfib{\code}{\refl{\inl(a_0)}}{\refl{a_0}})\\
+ &\jdeq \decode(\inl(a_0),\refl{a_0})\\
+ &\jdeq \apfunc{\inl}(\refl{a_0})\\
+ &\jdeq \refl{\inl(a_0)}\\
+ &\jdeq p.
+ \end{align*}
+ On the other hand, let $x:A+B$ and $c:\code(x)$; we want to show $\encode(x,\decode(x,c))=c$.
+ We may again divide into cases based on $x$.
+ If $x\jdeq\inl(a)$, then $c:a_0=a$ and $\decode(x,c)\jdeq \apfunc{\inl}(c)$, so that
+ \begin{align}
+ \encode(x,\decode(x,c))
+ &\jdeq \transfib{\code}{\apfunc{\inl}(c)}{\refl{a_0}}
+ \notag\\
+ &= \transfib{a\mapsto (a_0=a)}{c}{\refl{a_0}}
+ \tag{by \cref{thm:transport-compose}}\\
+ &= \refl{a_0} \ct c
+ \tag{by \cref{cor:transport-path-prepost}}\\
+ &= c. \notag
+ \end{align}
+ Finally, if $x\jdeq \inr(b)$, then $c:\emptyt$, so we may conclude anything we wish.
+\end{proof}
+
+\noindent
+Of course, there is a corresponding theorem if we fix $b_0:B$ instead of $a_0:A$.
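+
+The encode-decode proof translates quite directly into Lean~4. In the following sketch (all names ours) the codes are \texttt{Prop}-valued, so it captures the logical content of \cref{thm:path-coprod} (injectivity and disjointness of the injections) rather than the full homotopical equivalence.
+
+```lean
+-- code: defined by the recursion principle of the coproduct.
+def code {A B : Type} (a₀ : A) : Sum A B → Prop
+  | Sum.inl a => a₀ = a
+  | Sum.inr _ => False
+
+-- encode: transport reflexivity along p.
+def encode {A B : Type} (a₀ : A) (x : Sum A B)
+    (p : (Sum.inl a₀ : Sum A B) = x) : code a₀ x := by
+  cases p; exact rfl
+
+-- decode: ap of inl in the first case, ex falso in the second.
+def decode {A B : Type} (a₀ : A) :
+    (x : Sum A B) → code a₀ x → (Sum.inl a₀ : Sum A B) = x
+  | Sum.inl _, c => congrArg Sum.inl c
+  | Sum.inr _, c => False.elim c
+
+-- In particular, the images of inl and inr are disjoint.
+theorem inl_ne_inr {A B : Type} (a : A) (b : B) :
+    (Sum.inl a : Sum A B) ≠ Sum.inr b :=
+  fun p => encode a _ p
+```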
+
+In particular, \cref{thm:path-coprod} implies that for any $a : A$ and $b : B$ there are functions
+%
+\[ \encode(\inl(a), {\blank}) : (\inl(a_0)=\inl(a)) \to (a_0=a)\]
+%
+and
+%
+\[ \encode(\inr(b), {\blank}) : (\inl(a_0)=\inr(b)) \to \emptyt. \]
+%
+The second of these states
+``$\inl(a_0)$ is not equal to $\inr(b)$'', i.e.\ the images of \inl and \inr are disjoint. The traditional reading of the first one, where identity types are viewed as propositions, is just injectivity of $\inl$. The
+full homotopical statement of \cref{thm:path-coprod} gives more information: the types $\inl(a_0)=\inl(a)$ and
+$a_0=a$ are actually equivalent, as are $\inr(b_0)=\inr(b)$ and $b_0=b$.
+
+\begin{rmk}\label{rmk:true-neq-false}
+In particular, since the two-element type $\bool$ is equivalent to $\unit+\unit$, we have $\bfalse\neq\btrue$.
+\end{rmk}
+
+This proof illustrates a general method for describing path spaces, which we will use often. To characterize a path space, the first step is to define a comparison fibration ``$\code$'' that provides a more explicit description of the paths. There are several different methods for proving that such a comparison fibration is equivalent to the paths (we show a few different proofs of the same result in \cref{sec:pi1-s1-intro}). The one we have used here is called the \define{encode-decode method}:
+\indexdef{encode-decode method}
+the key idea is to define $\decode$ generally for all instances of the fibration (i.e.\ as a function $\prd{x:A+B} \code(x) \to (\inl(a_0)=x)$), so that path induction can be used to analyze $\decode(x,\encode(x,p))$.
+
+\index{transport!in coproduct types}%
+As usual, we can also characterize the action of transport in coproduct types.
+Given a type~$X$, a path $p:\id[X]{x_1}{x_2}$, and type families $A,B:X\to\type$, we have
+\begin{align*}
+ \transfib{A+B}{p}{\inl(a)} &= \inl (\transfib{A}{p}{a}),\\
+ \transfib{A+B}{p}{\inr(b)} &= \inr (\transfib{B}{p}{b}),
+\end{align*}
+where as usual, $A+B$ in the superscript denotes abusively the type family $x\mapsto A(x)+B(x)$.
+The proof is an easy path induction.
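+
+For instance, here is the $\inl$ case as a Lean~4 sketch, with an explicit \texttt{transport} of ours; since the family here is \texttt{Type}-valued, the computation is a genuine one rather than an instance of proof irrelevance.
+
+```lean
+def transport {X : Type} (C : X → Type) {x y : X}
+    (p : x = y) (c : C x) : C y := p ▸ c
+
+-- Transport in x ↦ A x ⊕ B x commutes with the injection inl;
+-- the proof is an easy path induction (`cases p`).
+theorem transport_sum_inl {X : Type} (A B : X → Type) {x y : X}
+    (p : x = y) (a : A x) :
+    transport (fun x => Sum (A x) (B x)) p (Sum.inl a)
+      = Sum.inl (transport A p a) := by
+  cases p; rfl
+```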
+
+\index{encode-decode method|)}%
+\index{type!coproduct|)}%
+
+\section{Natural numbers}
+\label{sec:compute-nat}
+
+\index{natural numbers|(}%
+\index{encode-decode method|(}%
+We use the encode-decode method to characterize the path space of the natural numbers, which are also a positive type.
+In this case, rather than fixing one endpoint, we characterize the two-sided path space all at once.
+Thus, the codes for identities are a type family
+\[\code:\N\to\N\to\type,\]
+defined by double recursion over \N as follows:
+\begin{align*}
+ \code(0,0) &\defeq \unit\\
+ \code(\suc(m),0) &\defeq \emptyt\\
+ \code(0,\suc(n)) &\defeq \emptyt\\
+ \code(\suc(m),\suc(n)) &\defeq \code(m,n).
+\end{align*}
+We also define by recursion a dependent function $r:\prd{n:\N} \code(n,n)$, with
+\begin{align*}
+ r(0) &\defeq \ttt\\
+ r(\suc(n)) &\defeq r(n).
+\end{align*}
+
+\begin{thm}\label{thm:path-nat}
+ For all $m,n:\N$ we have $\eqv{(m=n)}{\code(m,n)}$.
+\end{thm}
+\begin{proof}
+ We define
+ \[ \encode : \prd{m,n:\N} (m=n) \to \code(m,n) \]
+ by transporting: $\encode(m,n,p) \defeq \transfib{\code(m,{\blank})}{p}{r(m)}$.
+ And we define
+ \[ \decode : \prd{m,n:\N} \code(m,n) \to (m=n) \]
+ by double induction on $m,n$.
+ When $m$ and $n$ are both $0$, we need a function $\unit \to (0=0)$, which we define to send everything to $\refl{0}$.
+ When $m$ is a successor and $n$ is $0$ or vice versa, the domain $\code(m,n)$ is \emptyt, so the eliminator for \emptyt suffices.
+ And when both are successors, we can define $\decode(\suc(m),\suc(n))$ to be the composite
+ %
+ \begin{narrowmultline*}
+ \code(\suc(m),\suc(n))\jdeq\code(m,n)
+ \xrightarrow{\decode(m,n)} \narrowbreak
+ (m=n)
+ \xrightarrow{\apfunc{\suc}}
+ (\suc(m)=\suc(n)).
+ \end{narrowmultline*}
+ %
+ Next we show that $\encode(m,n)$ and $\decode(m,n)$ are quasi-inverses for all $m,n$.
+
+ On one hand, if we start with $p:m=n$, then by induction on $p$ it suffices to show
+ \[\decode(n,n,\encode(n,n,\refl{n}))=\refl{n}.\]
+ But $\encode(n,n,\refl{n}) \jdeq r(n)$, so it suffices to show that $\decode(n,n,r(n)) =\refl{n}$.
+ We can prove this by induction on $n$.
+ If $n\jdeq 0$, then $\decode(0,0,r(0)) =\refl{0}$ by definition of \decode.
+ And in the case of a successor, by the inductive hypothesis we have $\decode(n,n,r(n)) = \refl{n}$, so it suffices to observe that $\apfunc{\suc}(\refl{n}) \jdeq \refl{\suc(n)}$.
+
+ On the other hand, if we start with $c:\code(m,n)$, then we proceed by double induction on $m$ and $n$.
+ If both are $0$, then $\decode(0,0,c) \jdeq \refl{0}$, while $\encode(0,0,\refl{0})\jdeq r(0) \jdeq \ttt$.
+ Thus, it suffices to recall from \cref{sec:compute-unit} that every inhabitant of $\unit$ is equal to \ttt.
+ If $m$ is $0$ but $n$ is a successor, or vice versa, then $c:\emptyt$, so we are done.
+ And in the case of two successors, we have
+ \begin{multline*}
+ \encode(\suc(m),\suc(n),\decode(\suc(m),\suc(n),c))\\
+ \begin{aligned}
+ &= \encode(\suc(m),\suc(n),\apfunc{\suc}(\decode(m,n,c)))\\
+ &= \transfib{\code(\suc(m),{\blank})}{\apfunc{\suc}(\decode(m,n,c))}{r(\suc(m))}\\
+ &= \transfib{\code(\suc(m),\suc({\blank}))}{\decode(m,n,c)}{r(\suc(m))}\\
+ &= \transfib{\code(m,{\blank})}{\decode(m,n,c)}{r(m)}\\
+ &= \encode(m,n,\decode(m,n,c))\\
+ &= c
+ \end{aligned}
+ \end{multline*}
+ using the inductive hypothesis.
+\end{proof}
+
+In particular, we have
+\begin{equation}\label{eq:zero-not-succ}
+ \encode(\suc(m),0) : (\suc(m)=0) \to \emptyt
+\end{equation}
+which shows that ``$0$ is not the successor of any natural number''.
+We also have the composite
+\begin{narrowmultline}\label{eq:suc-injective}
+ (\suc(m)=\suc(n))
+ \xrightarrow{\encode} \narrowbreak
+ \code(\suc(m),\suc(n))
+ \jdeq \code(m,n) \xrightarrow{\decode} (m=n)
+\end{narrowmultline}
+which shows that the function $\suc$ is injective.
+\index{successor}%
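+
+The whole development for \nat fits in a few lines of Lean~4 (again with \texttt{Prop}-valued codes, and all names ours): \texttt{encode} transports $r(m)$, \texttt{decode} is defined by double recursion, and the two consequences above fall out definitionally.
+
+```lean
+-- Codes for identities of natural numbers, by double recursion.
+def code : Nat → Nat → Prop
+  | 0,     0     => True
+  | _ + 1, 0     => False
+  | 0,     _ + 1 => False
+  | m + 1, n + 1 => code m n
+
+-- r : every n is related to itself.
+def r : (n : Nat) → code n n
+  | 0     => True.intro
+  | n + 1 => r n
+
+def encode {m n : Nat} (p : m = n) : code m n := by
+  cases p; exact r m
+
+def decode : (m n : Nat) → code m n → m = n
+  | 0,     0,     _ => rfl
+  | _ + 1, 0,     c => False.elim c
+  | 0,     _ + 1, c => False.elim c
+  | m + 1, n + 1, c => congrArg Nat.succ (decode m n c)
+
+-- "0 is not a successor" and "suc is injective" follow at once.
+theorem succ_ne_zero (m : Nat) : m + 1 ≠ 0 := fun p => encode p
+theorem succ_inj {m n : Nat} (p : m + 1 = n + 1) : m = n :=
+  decode m n (encode p)
+```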
+
+We will study more general positive types in \cref{cha:induction,cha:hits}.
+In \cref{cha:homotopy}, we will see that the same technique used here to characterize the identity types of coproducts and \nat can also be used to calculate homotopy groups of spheres.
+
+\index{encode-decode method|)}%
+\index{natural numbers|)}%
+
+\section{Example: equality of structures}
+\label{sec:equality-of-structures}
+
+We now consider one example to illustrate the interaction between the groupoid structure on a type and the type
+formers. In the introduction we remarked that one of the
+advantages of univalence is that two isomorphic things are interchangeable,
+in the sense that every property or construction involving one also
+applies to the other. Common ``abuses of notation''\index{abuse!of notation} become formally
+true. Univalence itself says that equivalent types are equal, and
+therefore interchangeable, which includes e.g.\ the common practice of identifying isomorphic sets. Moreover, when we define other
+mathematical objects as sets, or even general types, equipped with structure or properties, we
+can derive the correct notion of equality for them from univalence. We will illustrate this
+point with a significant example in \cref{cha:category-theory}, where we
+define the basic notions of category theory in such a way that equality
+of categories is equivalence, equality of functors is natural
+isomorphism, etc. See in particular \cref{sec:sip}.
+ In this section, we describe a very simple example, coming from algebra.
+
+For simplicity, we use \emph{semigroups} as our example, where a
+semigroup is a type equipped with an associative ``multiplication''
+operation. The same ideas apply to other algebraic structures, such as
+monoids, groups, and rings.
+Recall from \cref{sec:sigma-types,sec:pat} that the definition of a kind of mathematical structure should be interpreted as defining the type of such structures as a certain iterated $\Sigma$-type.
+In the case of semigroups this yields the following.
+
+\begin{defn}
+Given a type $A$, the type \semigroupstr{A} of \define{semigroup structures}
+\indexdef{semigroup!structure}%
+\index{structure!semigroup}%
+\index{associativity!of semigroup operation}%
+with carrier\index{carrier} $A$ is defined by
+\[
+\semigroupstr{A} \defeq \sm{m:A \to A \to A} \prd{x,y,z:A} m(x,m(y,z)) = m(m(x,y),z).
+\]
+%
+A \define{semigroup}
+\indexdef{semigroup}%
+is a type together with such a structure:
+%
+\[
+\semigroup \defeq \sm{A:\type} \semigroupstr A.
+\]
+\end{defn}
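+
+As a Lean~4 sketch (using \texttt{Σ'}, which permits a \texttt{Prop}-valued second component; the primed names are ours, to avoid clashing with Lean's own \texttt{Semigroup}), the definition reads:
+
+```lean
+-- A semigroup structure on A: a multiplication together with
+-- an associativity proof, packaged as an iterated Σ-type.
+def SemigroupStr (A : Type) : Type :=
+  Σ' m : A → A → A, ∀ x y z : A, m x (m y z) = m (m x y) z
+
+-- A semigroup: a carrier type together with such a structure.
+def Semigroup' : Type 1 :=
+  Σ' A : Type, SemigroupStr A
+
+-- Example: Nat with addition.
+example : Semigroup' :=
+  ⟨Nat, fun a b => a + b, fun x y z => (Nat.add_assoc x y z).symm⟩
+```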
+
+\noindent
+In the next two subsections, we describe two ways in which univalence makes
+it easier to work with such semigroups.
+
+\subsection{Lifting equivalences}
+
+\index{lifting!equivalences}%
+When working loosely, one might say that a bijection between sets $A$
+and $B$ ``obviously'' induces an isomorphism between semigroup
+structures on $A$ and semigroup structures on $B$. With univalence,
+this is indeed obvious, because given an equivalence between types $A$
+and $B$, we can automatically derive a semigroup structure on $B$ from
+one on $A$, and moreover show that this derivation is an equivalence of
+semigroup structures. The reason is that \semigroupstrsym\ is a family
+of types, and therefore has an action on paths between types given by
+$\mathsf{transport}$:
+\[
+\transfibf{\semigroupstrsym}{(\ua(e))} : \semigroupstr{A} \to \semigroupstr{B}.
+\]
+Moreover, this map is an equivalence, because
+$\transfibf{C}(\alpha)$ is always an equivalence with inverse
+$\transfibf{C}{(\opp \alpha)}$, see \cref{thm:transport-concat,thm:omg}.
+
+While the univalence axiom\index{univalence axiom} ensures that this map exists, we need to use
+facts about $\mathsf{transport}$ proven in the preceding sections to
+calculate what it actually does. Let $(m,a)$ be a semigroup structure on
+$A$, and we investigate the induced semigroup structure on $B$ given by
+\[
+\transfib{\semigroupstrsym}{\ua(e)}{(m,a)}.
+\]
+First, because
+\semigroupstr{X} is defined to be a $\Sigma$-type, by
+\cref{transport-Sigma},
+\[
+\transfib{\semigroupstrsym}{\ua(e)}{(m,a)} = (m',a')
+\]
+where $m'$ is an induced multiplication operation on $B$
+\begin{flalign*}
+& m' : B \to B \to B \\
+& m'(b_1,b_2) \defeq \transfib{X \mapsto (X \to X \to X)}{\ua(e)}{m}(b_1,b_2)
+\end{flalign*}
+and $a'$ an induced proof that $m'$ is associative.
+We have, again by \cref{transport-Sigma},
+\begin{equation}\label{eq:transport-semigroup-step1}
+ \begin{aligned}
+{}&a' : \mathsf{Assoc}(B,m')\\
+ &a' \defeq \transfib{(X,m) \mapsto \mathsf{Assoc}(X,m)}{(\pairpath(\ua(e),\refl{m'}))}{a},
+ \end{aligned}
+\end{equation}
+where $\mathsf{Assoc}(X,m)$ is the type $\prd{x,y,z:X} m(x,m(y,z)) = m(m(x,y),z)$.
+By function extensionality, it suffices to investigate the behavior of $m'$ when
+applied to arguments $b_1,b_2 : B$. By applying
+\eqref{eq:transport-arrow} twice, we have that $m'(b_1,b_2)$ is equal to
+%
+\begin{narrowmultline*}
+ \transfibf{X \mapsto X}\big(
+ \ua(e), \narrowbreak
+ m(\transfib{X \mapsto X}{\opp{\ua(e)}}{b_1},
+ \transfib{X \mapsto X}{\opp{\ua(e)}}{b_2}
+ )
+ \big).
+\end{narrowmultline*}
+%
+Then, because $\ua$ is quasi-inverse to $\transfibf{X\mapsto X}$, this is equal to
+\[
+e(m(\opp{e}(b_1), \opp{e}(b_2))).
+\]
+Thus, given two elements of $B$, the induced multiplication $m'$
+sends them to $A$ using the equivalence $e$, multiplies them in $A$, and
+then brings the result back to $B$ by $e$, just as one would expect.
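+
+Concretely, the induced operation and the transported associativity proof can be written out directly; here is a hypothetical Lean~4 sketch, with the record \texttt{Equiv'} (a name of ours) standing in for equivalences, in which the associativity proof uses exactly the inverse laws for $e$ and the proof $a$ for $m$.
+
+```lean
+structure Equiv' (A B : Type) where
+  toFun    : A → B
+  invFun   : B → A
+  leftInv  : ∀ a, invFun (toFun a) = a
+  rightInv : ∀ b, toFun (invFun b) = b
+
+-- The multiplication induced on B: send the arguments to A along
+-- the inverse of e, multiply there, and return along e.
+def inducedMul {A B : Type} (e : Equiv' A B) (m : A → A → A) :
+    B → B → B :=
+  fun b₁ b₂ => e.toFun (m (e.invFun b₁) (e.invFun b₂))
+
+-- Associativity transports as well, using only the inverse laws
+-- and the associativity proof a for m.
+theorem inducedMul_assoc {A B : Type} (e : Equiv' A B)
+    (m : A → A → A)
+    (a : ∀ x y z, m x (m y z) = m (m x y) z) (b₁ b₂ b₃ : B) :
+    inducedMul e m b₁ (inducedMul e m b₂ b₃)
+      = inducedMul e m (inducedMul e m b₁ b₂) b₃ := by
+  simp [inducedMul, e.leftInv, a]
+```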
+
+Moreover, though we do not show the proof, one can calculate that the
+induced proof that $m'$ is associative (see \eqref{eq:transport-semigroup-step1})
+is equal to a function sending
+$b_1,b_2,b_3 : B$ to a path given by the following steps:
+\begin{equation}
+ \label{eq:transport-semigroup-assoc}
+ \begin{aligned}
+ m'(m'(b_1,b_2),b_3)
+ &= e(m(\opp{e}(m'(b_1,b_2)),\opp{e}(b_3))) \\
+ &= e(m(\opp{e}(e(m(\opp{e}(b_1),\opp{e}(b_2)))),\opp{e}(b_3))) \\
+ &= e(m(m(\opp{e}(b_1),\opp{e}(b_2)),\opp{e}(b_3))) \\
+ &= e(m(\opp{e}(b_1),m(\opp{e}(b_2),\opp{e}(b_3)))) \\
+ &= e(m(\opp{e}(b_1),\opp{e}(e(m(\opp{e}(b_2),\opp{e}(b_3)))))) \\
+ &= e(m(\opp{e}(b_1),\opp{e}(m'(b_2,b_3)))) \\
+ &= m'(b_1,m'(b_2,b_3)).
+\end{aligned}
+\end{equation}
+These steps use the proof $a$ that $m$ is associative and the inverse
+laws for~$e$. From an algebra perspective, it may seem strange to
+investigate the identity of a proof that an operation is associative,
+but this makes sense if we think of $A$ and $B$ as general spaces, with
+non-trivial homotopies between paths. In \cref{cha:logic}, we will
+introduce the notion of a \emph{set}, which is a type with only trivial
+homotopies, and if we consider semigroup structures on sets, then any
+two such associativity proofs are automatically equal.
+
+\subsection{Equality of semigroups}
+\label{sec:equality-semigroups}
+
+Using the equations for path spaces discussed in the previous sections,
+we can investigate when two semigroups are equal. Given semigroups
+$(A,m,a)$ and $(B,m',a')$, by \cref{thm:path-sigma}, the type of paths
+\narrowequation{
+ (A,m,a) =_\semigroup (B,m',a')
+}
+is equal to the type of pairs
+\begin{align*}
+p_1 &: A =_{\type} B \qquad\text{and}\\
+p_2 &: \transfib{\semigroupstrsym}{p_1}{(m,a)} = {(m',a')}.
+\end{align*}
+By univalence, $p_1$ is $\ua(e)$ for some equivalence $e$. By
+\cref{thm:path-sigma}, function extensionality, and the above analysis of
+transport in the type family $\semigroupstrsym$, $p_2$ is equivalent to a pair
+of proofs, the first of which shows that
+\begin{equation} \label{eq:equality-semigroup-mult}
+\prd{y_1,y_2:B} e(m(\opp{e}(y_1), \opp{e}(y_2))) = m'(y_1,y_2)
+\end{equation}
+and the second of which shows that $a'$ is equal to the induced
+associativity proof constructed from $a$ in
+\eqref{eq:transport-semigroup-assoc}. But by cancellation of inverses
+\eqref{eq:equality-semigroup-mult} is equivalent to
+\[
+\prd{x_1,x_2:A} e(m(x_1, x_2)) = m'(e(x_1),e(x_2)).
+\]
+This says that $e$ commutes with the binary operation, in the sense
+that it takes multiplication in $A$ (i.e.\ $m$) to multiplication in $B$
+(i.e.\ $m'$). A similar rearrangement is possible for the equation relating
+$a$ and $a'$. Thus, an equality of semigroups consists exactly of an
+equivalence on the carrier types that commutes with the semigroup
+structure.
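+
+Spelled out as data, such an equality therefore corresponds to the following notion; the Lean~4 record below (a hypothetical sketch, with names of ours) packages an equivalence of carriers together with the commutation condition, of which the book's statement is the homotopical refinement.
+
+```lean
+-- An equality of semigroups unpacks to: an equivalence of the
+-- carriers that commutes with the two multiplications.
+structure SemigroupIso (A B : Type)
+    (m : A → A → A) (m' : B → B → B) where
+  toFun    : A → B
+  invFun   : B → A
+  leftInv  : ∀ a, invFun (toFun a) = a
+  rightInv : ∀ b, toFun (invFun b) = b
+  mulHom   : ∀ x y : A, toFun (m x y) = m' (toFun x) (toFun y)
+```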
+
+For general types, the proof of associativity is thought of as part of
+the structure of a semigroup. However, if we restrict to set-like types
+(again, see \cref{cha:logic}), the
+equation relating $a$ and $a'$ is trivially true. Moreover, in this
+case, an equivalence between sets is exactly a bijection. Thus, we have
+arrived at a standard definition of a \emph{semigroup isomorphism}:\index{isomorphism!semigroup} a
+bijection on the carrier sets that preserves the multiplication
+operation. It is also possible to use the category-theoretic definition
+of isomorphism, by defining a \emph{semigroup homomorphism}\index{homomorphism!semigroup} to be a map
+that preserves the multiplication, and arrive at the conclusion that equality of
+semigroups is the same as two mutually inverse homomorphisms; but we
+will not show the details here; see \cref{sec:sip}.
+
+The conclusion is that, thanks to univalence, semigroups are equal
+precisely when they are isomorphic as algebraic structures. As we will see in \cref{sec:sip}, the
+conclusion applies more generally: in homotopy type theory, all constructions of
+mathematical structures automatically respect isomorphisms, without any
+tedious proofs or abuse of notation.
+
+\section{Universal properties}
+\label{sec:universal-properties}
+
+\index{universal!property|(}%
+By combining the path computation rules described in the preceding sections, we can show that various type forming operations satisfy the expected universal properties, interpreted in a homotopical way as equivalences.
+For instance, given types $X,A,B$, we have a function
+\index{type!product}%
+\begin{equation}\label{eq:prod-ump-map}
+ (X\to A\times B) \to (X\to A)\times (X\to B)
+\end{equation}
+defined by $f \mapsto (\proj1 \circ f, \proj2\circ f)$.
+
+\begin{thm}\label{thm:prod-ump}
+ \index{universal!property!of cartesian product}%
+ \eqref{eq:prod-ump-map} is an equivalence.
+\end{thm}
+\begin{proof}
+ We define the quasi-inverse by sending $(g,h)$ to $\lam{x}(g(x),h(x))$.
+ (Technically, we have used the induction principle for the cartesian product $(X\to A)\times (X\to B)$, to reduce to the case of a pair.
+ From now on we will often apply this principle without explicit mention.)
+
+ Now given $f:X\to A\times B$, the round-trip composite yields the function
+ \begin{equation}
+ \lam{x} (\proj1(f(x)),\proj2(f(x))).\label{eq:prod-ump-rt1}
+ \end{equation}
+ By \cref{thm:path-prod}, for any $x:X$ we have $(\proj1(f(x)),\proj2(f(x))) = f(x)$.
+ Thus, by function extensionality, the function~\eqref{eq:prod-ump-rt1} is equal to $f$.
+
+ On the other hand, given $(g,h)$, the round-trip composite yields the pair $(\lam{x} g(x),\lam{x} h(x))$.
+ By the uniqueness principle for functions, this is (judgmentally) equal to $(g,h)$.
+\end{proof}
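The maps in this proof can be transcribed into Lean 4 as a sanity check (a sketch; all identifiers below are ours, not the book's notation). Note that Lean has judgmental $\eta$-rules for functions and for pairs, so both round trips hold by `rfl` there, whereas the book's type theory needs function extensionality for one of them.

```lean
universe u
variable {X A B : Type u}

-- the map (X → A × B) → (X → A) × (X → B) of the theorem
def prodUmp (f : X → A × B) : (X → A) × (X → B) :=
  ⟨fun x => (f x).1, fun x => (f x).2⟩

-- its quasi-inverse, sending (g, h) to λ x. (g x, h x)
def prodUmpInv (gh : (X → A) × (X → B)) : X → A × B :=
  fun x => (gh.1 x, gh.2 x)

-- in Lean both round trips are definitional, thanks to η for
-- functions and pairs; in the book one direction uses funext
example (f : X → A × B) : prodUmpInv (prodUmp f) = f := rfl
example (gh : (X → A) × (X → B)) : prodUmp (prodUmpInv gh) = gh := rfl
```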
+
+In fact, we also have a dependently typed version of this universal property.
+Suppose given a type $X$ and type families $A,B:X\to \type$.
+Then we have a function
+\begin{equation}\label{eq:prod-umpd-map}
+ \Parens{\prd{x:X} (A(x)\times B(x))} \to \Parens{\prd{x:X} A(x)} \times \Parens{\prd{x:X} B(x)}
+\end{equation}
+defined as before by $f \mapsto (\proj1 \circ f, \proj2\circ f)$.
+
+\begin{thm}\label{thm:prod-umpd}
+ \eqref{eq:prod-umpd-map} is an equivalence.
+\end{thm}
+\begin{proof}
+ Left to the reader.
+\end{proof}
+
+Just as $\Sigma$-types are a generalization of cartesian products, they satisfy a generalized version of this universal property.
+Jumping right to the dependently typed version, suppose we have a type $X$ and type families $A:X\to \type$ and $P:\prd{x:X} A(x)\to\type$.
+Then we have a function
+\index{type!dependent pair}%
+\begin{equation}
+ \label{eq:sigma-ump-map}
+ \Parens{\prd{x:X}\dsm{a:A(x)} P(x,a)} \to
+ \Parens{\sm{g:\prd{x:X} A(x)} \prd{x:X} P(x,g(x))}.
+\end{equation}
+Note that if we have $P(x,a) \defeq B(x)$ for some $B:X\to\type$, then~\eqref{eq:sigma-ump-map} reduces to~\eqref{eq:prod-umpd-map}.
+
+\begin{thm}\label{thm:ttac}
+ \index{universal!property!of dependent pair type}%
+ \eqref{eq:sigma-ump-map} is an equivalence.
+\end{thm}
+\begin{proof}
+ As before, we define a quasi-inverse to send $(g,h)$ to the function $\lam{x} (g(x),h(x))$.
+ Now given $f:\prd{x:X} \sm{a:A(x)} P(x,a)$, the round-trip composite yields the function
+ \begin{equation}
+ \lam{x} (\proj1(f(x)),\proj2(f(x))).\label{eq:prod-ump-rt2}
+ \end{equation}
+ Now for any $x:X$, by \cref{thm:eta-sigma} (the uniqueness principle for $\Sigma$-types) we have
+ %
+ \begin{equation*}
+ (\proj1(f(x)),\proj2(f(x))) = f(x).
+ \end{equation*}
+ %
+ Thus, by function extensionality,~\eqref{eq:prod-ump-rt2} is equal to $f$.
+ On the other hand, given $(g,h)$, the round-trip composite yields $(\lam {x} g(x),\lam{x} h(x))$, which is judgmentally equal to $(g,h)$ as before.
+\end{proof}
+
+\index{axiom!of choice!type-theoretic}
+This is noteworthy because the propositions-as-types interpretation of~\eqref{eq:sigma-ump-map} is ``the axiom of choice''.
+If we read $\Sigma$ as ``there exists'' and $\Pi$ (sometimes) as ``for all'', we can pronounce:
+\begin{itemize}
+\item $\prd{x:X} \sm{a:A(x)} P(x,a)$ as ``for all $x:X$ there exists an $a:A(x)$ such that $P(x,a)$'', and
+\item $\sm{g:\prd{x:X} A(x)} \prd{x:X} P(x,g(x))$ as ``there exists a choice function $g:\prd{x:X} A(x)$ such that for all $x:X$ we have $P(x,g(x))$''.
+\end{itemize}
+Thus, \cref{thm:ttac} says that not only is the axiom of choice ``true'', its antecedent is actually equivalent to its conclusion.
+(On the other hand, the classical\index{mathematics!classical} mathematician may find that~\eqref{eq:sigma-ump-map} does not carry the usual meaning of the axiom of choice, since we have already specified the values of $g$, and there are no choices left to be made.
+We will return to this point in \cref{sec:axiom-choice}.)
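In Lean 4 notation (names ours), the type-theoretic axiom of choice and its quasi-inverse are each a one-line re-bracketing of the given data, which makes vivid the remark that no choices remain to be made:

```lean
universe u v w
variable {X : Type u} {A : X → Type v} {P : (x : X) → A x → Type w}

-- "for all x there is an a : A x with P x a" yields a choice function g
-- together with a proof that P x (g x) holds for all x
def ttac (f : (x : X) → Σ a : A x, P x a) :
    Σ g : (x : X) → A x, (x : X) → P x (g x) :=
  ⟨fun x => (f x).1, fun x => (f x).2⟩

-- the quasi-inverse just re-pairs the two components
def ttacInv (gh : Σ g : (x : X) → A x, (x : X) → P x (g x)) :
    (x : X) → Σ a : A x, P x a :=
  fun x => ⟨gh.1 x, gh.2 x⟩

-- in Lean the round trip is definitional (η for functions and pairs)
example (f : (x : X) → Σ a : A x, P x a) : ttacInv (ttac f) = f := rfl
```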
+
+The above universal property for pair types is for ``mapping in'', which is familiar from the category-theoretic notion of products.
+However, pair types also have a universal property for ``mapping out'', which may look less familiar.
+In the case of cartesian products, the non-dependent version simply expresses
+the cartesian closure adjunction\index{adjoint!functor}:
+\[ \eqvspaced{\big((A\times B) \to C\big)}{\big(A\to (B\to C)\big)}.\]
+The dependent version of this is formulated for a type family $C:A\times B\to \type$:
+\[ \eqvspaced{\Parens{\prd{w:A\times B} C(w)}}{\Parens{\prd{x:A}{y:B} C(x,y)}}. \]
+Here the right-to-left function is simply the induction principle for $A\times B$, while the left-to-right is evaluation at a pair.
+We leave it to the reader to prove that these are quasi-inverses.
+There is also a version for $\Sigma$-types:
+\begin{equation}
+ \eqvspaced{\Parens{\prd{w:\sm{x:A} B(x)} C(w)}}{\Parens{\prd{x:A}{y:B(x)} C(x,y)}}.\label{eq:sigma-lump}
+\end{equation}
+Again, the right-to-left function is the induction principle.
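The two directions of~\eqref{eq:sigma-lump} are dependent currying and uncurrying, which can be sketched in Lean 4 as follows (names ours; Lean's judgmental $\eta$ for pairs makes the round trips hold by `rfl`, where the book proves them propositionally):

```lean
universe u v w
variable {A : Type u} {B : A → Type v} {C : (Σ x : A, B x) → Type w}

-- left to right: evaluation at a pair
def sigCurry (h : (w : Σ x : A, B x) → C w) :
    (x : A) → (y : B x) → C ⟨x, y⟩ :=
  fun x y => h ⟨x, y⟩

-- right to left: the induction principle for Σ-types
def sigUncurry (k : (x : A) → (y : B x) → C ⟨x, y⟩) :
    (w : Σ x : A, B x) → C w :=
  fun w => k w.1 w.2

-- one round trip, definitional in Lean via η for pairs
example (h : (w : Σ x : A, B x) → C w) : sigUncurry (sigCurry h) = h := rfl
```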
+
+Some other induction principles are also part of universal properties of this sort.
+For instance, path induction is the right-to-left direction of an equivalence as follows:
+\index{type!identity}%
+\index{universal!property!of identity type}%
+\begin{equation}
+ \label{eq:path-lump}
+ \eqvspaced{\Parens{\prd{x:A}{p:a=x} B(x,p)}}{B(a,\refl a)}
+\end{equation}
+for any $a:A$ and type family $B:\prd{x:A} (a=x) \to\type$.
+However, inductive types with recursion, such as the natural numbers, have more complicated universal properties; see \cref{cha:induction}.
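The equivalence~\eqref{eq:path-lump} can likewise be exhibited in Lean 4 (a sketch under our own naming), where the right-to-left map is exactly the eliminator `Eq.rec`, i.e.\ based path induction:

```lean
universe u v
variable {A : Type u} {a : A} {B : (x : A) → a = x → Type v}

-- left to right: evaluate at the basepoint and reflexivity
def pathEval (k : (x : A) → (p : a = x) → B x p) : B a rfl :=
  k a rfl

-- right to left: based path induction, i.e. the recursor Eq.rec
def pathInd (b : B a rfl) : (x : A) → (p : a = x) → B x p :=
  fun x p => @Eq.rec A a B b x p

-- the computation rule for path induction holds judgmentally
example (b : B a rfl) : pathEval (pathInd b) = b := rfl
```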
+
+\index{type!limit}%
+\index{type!colimit}%
+\index{limit!of types}%
+\index{colimit!of types}%
+Since \cref{thm:prod-ump} expresses the usual universal property of a cartesian product (in an appropriate homotopy-theoretic sense), the categorically inclined reader may well wonder about other limits and colimits of types.
+In \cref{ex:coprod-ump} we ask the reader to show that the coproduct type $A+B$ also has the expected universal property, and the nullary cases of $\unit$ (the terminal object) and $\emptyt$ (the initial object) are easy.
+\index{type!empty}%
+\index{type!unit}%
+\indexsee{initial!type}{type, empty}%
+\indexsee{terminal!type}{type, unit}%
+
+\indexdef{pullback}%
+For pullbacks, the expected explicit construction works: given $f:A\to C$ and $g:B\to C$, we define
+\begin{equation}
+ A\times_C B \defeq \sm{a:A}{b:B} (f(a)=g(b)).\label{eq:defn-pullback}
+\end{equation}
+In \cref{ex:pullback} we ask the reader to verify this.
+Some more general homotopy limits can be constructed in a similar way, but for colimits we will need a new ingredient; see \cref{cha:hits}.
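The explicit construction~\eqref{eq:defn-pullback} can be written out in Lean 4 (names ours). Since Lean's built-in identity type `Eq` is a proof-irrelevant proposition, this sketch captures only the set-level content of the definition, using a subtype for the inner $\Sigma$:

```lean
universe u
variable {A B C : Type u}

-- the type-theoretic pullback of f and g, as in eq:defn-pullback
def Pullback (f : A → C) (g : B → C) : Type u :=
  Σ a : A, { b : B // f a = g b }

variable {f : A → C} {g : B → C}

-- the two projections of the pullback square
def pb₁ : Pullback f g → A := fun w => w.1
def pb₂ : Pullback f g → B := fun w => w.2.val

-- the path witnessing that the square commutes
def pbPath (w : Pullback f g) : f (pb₁ w) = g (pb₂ w) := w.2.property
```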
+
+\index{universal!property|)}%
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\sectionNotes
+
+The definition of identity types, with their induction principle, is due to Martin-L\"of \cite{Martin-Lof-1972}.
+\index{intensional type theory}%
+\index{extensional!type theory}%
+\index{type theory!intensional}%
+\index{type theory!extensional}%
+\index{reflection rule}%
+As mentioned in the notes to \cref{cha:typetheory}, our identity types are those that belong to \emph{intensional} type theory, rather than \emph{extensional} type theory.
+In general, a notion of equality is said to be ``intensional'' if it distinguishes objects based on their particular definitions, and ``extensional'' if it does not distinguish between objects that have the same ``extension'' or ``observable behavior''.
+In the terminology of Frege, an intensional equality compares \emph{sense}, while an extensional one compares only \emph{reference}.
+We may also speak of one equality being ``more'' or ``less'' extensional than another, meaning that it takes account of fewer or more intensional aspects of objects, respectively.
+
+\emph{Intensional} type theory is so named because its \emph{judgmental} equality, $x\jdeq y$, is a very intensional equality: it says essentially that $x$ and $y$ ``have the same definition'', after we expand the defining equations of functions.
+By contrast, the propositional equality type $\id xy$ is more extensional, even in the axiom-free intensional type theory of \cref{cha:typetheory}: for instance, we can prove by induction that $n+m=m+n$ for all $m,n:\N$, but we cannot say that $n+m\jdeq m+n$ for all $m,n:\N$, since the \emph{definition} of addition treats its arguments asymmetrically.
+We can make the identity type of intensional type theory even more extensional by adding axioms such as function extensionality (two functions are equal if they have the same behavior on all inputs, regardless of how they are defined) and univalence (which can be regarded as an extensionality property for the universe: two types are equal if they behave the same in all contexts).
+The axioms of function extensionality, and univalence in the special case of mere propositions (``propositional extensionality''), appeared already in the first type theories of Russell and Church.
+
+As mentioned before, \emph{extensional} type theory includes also a ``reflection rule'' saying that if $p:x=y$, then in fact $x\jdeq y$.
+Thus extensional type theory is so named because it does \emph{not} admit any purely \emph{intensional} equality: the reflection rule forces the judgmental equality to coincide with the more extensional identity type.
+Moreover, from the reflection rule one may deduce function extensionality (at least in the presence of a judgmental uniqueness principle for functions).
+However, the reflection rule also implies that all the higher groupoid structure collapses (see \cref{ex:equality-reflection}), and hence is inconsistent with the univalence axiom (see \cref{thm:type-is-not-a-set}).
+Therefore, regarding univalence as an extensionality property, one may say that intensional type theory permits identity types that are ``more extensional'' than extensional type theory does.
+
+The proofs of symmetry (inversion) and transitivity (concatenation) for equalities are well-known in type theory.
+The fact that these make each type into a 1-groupoid (up to homotopy) was exploited in~\cite{hs:gpd-typethy} to give the first ``homotopy'' style semantics for type theory.
+
+The actual homotopical interpretation, with identity types as path spaces, and type families as fibrations, is due to \cite{AW}, who used the formalism of Quillen model categories. An interpretation in (strict) $\infty$-groupoids\index{.infinity-groupoid@$\infty$-groupoid} was also given in the thesis \cite{mw:thesis}.
+For a construction of \emph{all} the higher operations and coherences of an $\infty$-groupoid in type theory, see~\cite{pll:wkom-type} and~\cite{bg:type-wkom}.
+
+\index{proof!assistant!Coq@\textsc{Coq}}%
+Operations such as $\transfib{P}{p}{\blank}$ and $\apfunc{f}$, and one good notion of equivalence, were first studied extensively in type theory by Voevodsky, using the proof assistant \Coq.
+Subsequently, many other equivalent definitions of equivalence have been found, which are compared in \cref{cha:equivalences}.
+
+The ``computational'' interpretation of identity types, transport, and so on described in \cref{sec:computational} has been emphasized by~\cite{lh:canonicity}.
+They also described a ``1-truncated'' type theory (see \cref{cha:hlevels}) in which these rules are judgmental equalities.
+The possibility of extending this to the full untruncated theory is a subject of current research.
+
+\index{function extensionality}%
+The naive form of function extensionality which says that ``if two functions are pointwise equal, then they are equal'' is a common axiom in type theory, going all the way back to \cite{PM2}.
+Some stronger forms of function extensionality were considered in~\cite{garner:depprod}.
+The version we have used, which identifies the identity types of function types up to equivalence, was first studied by Voevodsky, who also proved that it is implied by the naive version (and by univalence; see \cref{sec:univalence-implies-funext}).
+
+\index{univalence axiom}%
+The univalence axiom is also due to Voevodsky.
+It was originally motivated by semantic considerations in the simplicial set model; see~\cite{klv:ssetmodel}.
+A similar axiom motivated by the groupoid model was proposed by Hofmann and Streicher~\cite{hs:gpd-typethy} under the name ``universe extensionality''.
+It used quasi-inverses~\eqref{eq:qinvtype} rather than a good notion of ``equivalence'', and hence is correct (and equivalent to univalence) only for a universe of 1-types (see \cref{defn:1type}).
+
+In the type theory we are using in this book, function extensionality and univalence have to be assumed as axioms, i.e.\ elements asserted to belong to some type but not constructed according to the rules for that type.
+While serviceable, this has a few drawbacks.
+For instance, type theory is formally better-behaved if we can base it entirely on rules rather than asserting axioms.
+It is also sometimes inconvenient that the theorems of \crefrange{sec:compute-cartprod}{sec:compute-nat} are only propositional equalities (paths) or equivalences, since then we must explicitly mention whenever we pass back and forth across them.
+One direction of current research in homotopy type theory is to describe a type system in which these rules are \emph{judgmental} equalities, solving both of these problems at once.
+So far this has only been done in some simple cases, although preliminary results such as~\cite{lh:canonicity} are promising.
+There are also other potential ways to introduce univalence and function extensionality into a type theory, such as having a sufficiently powerful notion of ``higher quotients'' or ``higher inductive-recursive types''.
+
+The simple conclusions in \crefrange{sec:compute-coprod}{sec:compute-nat} such as ``$\inl$ and $\inr$ are injective and disjoint'' are well-known in type theory, and the construction of the function \encode is the usual way to prove them.
+The more refined approach we have described, which characterizes the entire identity type of a positive type (up to equivalence), is a more recent development; see e.g.~\cite{ls:pi1s1}.
+
+\index{axiom!of choice!type-theoretic}%
+The type-theoretic axiom of choice~\eqref{eq:sigma-ump-map} was noticed in William Howard's original paper~\cite{howard:pat} on the propositions-as-types correspondence, and was studied further by Martin-L\"of with the introduction of his dependent type theory. It is mentioned as a ``distributivity law'' in Bourbaki's set theory \cite{Bourbaki}.\index{Bourbaki}%
+
+For a more comprehensive (and formalized) discussion of pullbacks and more general homotopy limits in homotopy type theory, see~\cite{AKL13}.
+Limits\index{limit!of types} of diagrams over directed graphs\index{graph} are the easiest general sort of limit to formalize; the problem with diagrams over categories (or more generally $(\infty,1)$-categories)
+\index{.infinity1-category@$(\infty,1)$-category}%
+\indexsee{category!.infinity1-@$(\infty,1)$-}{$(\infty,1)$-category}%
+is that in general, infinitely many coherence conditions are involved in the notion of (homotopy coherent) diagram.\index{diagram}
+Resolving this problem is an important open question\index{open!problem} in homotopy type theory.
+
+\sectionExercises
+
+\begin{ex}\label{ex:basics:concat}
+ Show that the three obvious proofs of \cref{lem:concat} are pairwise equal.
+\end{ex}
+
+\begin{ex}\label{ex:eq-proofs-commute}
+ Show that the three equalities of proofs constructed in the previous exercise form a commutative triangle.
+ In other words, if the three definitions of concatenation are denoted by $(p \mathbin{\ct_1} q)$, $(p\mathbin{\ct_2} q)$, and $(p\mathbin{\ct_3} q)$, then the concatenated equality
+ \[(p\mathbin{\ct_1} q) = (p\mathbin{\ct_2} q) = (p\mathbin{\ct_3} q)\]
+ is equal to the equality $(p\mathbin{\ct_1} q) = (p\mathbin{\ct_3} q)$.
+\end{ex}
+
+\begin{ex}\label{ex:fourth-concat}
+ Give a fourth, different, proof of \cref{lem:concat}, and prove that it is equal to the others.
+\end{ex}
+
+\begin{ex}\label{ex:npaths}
+ Define, by induction on $n$, a general notion of \define{$n$-dimensional path}\index{path!n-@$n$-} in a type $A$, simultaneously with the type of boundaries for such paths.
+\end{ex}
+
+\begin{ex}\label{ex:ap-to-apd-equiv-apd-to-ap}
+ Prove that the functions~\eqref{eq:ap-to-apd} and~\eqref{eq:apd-to-ap} are inverse equivalences.
+ % and that they take $\apfunc f(p)$ to $\apdfunc f (p)$ and vice versa. (that was \cref{thm:apd-const})
+\end{ex}
+
+\begin{ex}\label{ex:equiv-concat}
+ Prove that if $p:x=y$, then the function $(p\ct \blank):(y=z) \to (x=z)$ is an equivalence.
+\end{ex}
+
+\begin{ex}\label{ex:ap-sigma}
+ State and prove a generalization of \cref{thm:ap-prod} from cartesian products to $\Sigma$-types.
+\end{ex}
+
+\begin{ex}\label{ex:ap-coprod}
+ State and prove an analogue of \cref{thm:ap-prod} for coproducts.
+\end{ex}
+
+\begin{ex}\label{ex:coprod-ump}
+ \index{universal!property!of coproduct}%
+ Prove that coproducts have the expected universal property,
+ \[ \eqv{(A+B \to X)}{(A\to X)\times (B\to X)}. \]
+ Can you generalize this to an equivalence involving dependent functions?
+\end{ex}
+
+\begin{ex}\label{ex:sigma-assoc}
+ Prove that $\Sigma$-types are ``associative'',
+ \index{associativity!of Sigma-types@of $\Sigma$-types}%
+ in that for any $A:\UU$ and families $B:A\to\UU$ and $C:(\sm{x:A} B(x))\to\UU$, we have
+ \[\eqvspaced{\Parens{\sm{x:A}{y:B(x)} C(\pairr{x,y})}}{\Parens{\sm{p:\sm{x:A}B(x)} C(p)}}. \]
+\end{ex}
+
+\begin{ex}\label{ex:pullback}
+ A (homotopy) \define{commutative square}
+ \indexdef{commutative!square}%
+ \begin{equation*}
+ \vcenter{\xymatrix{
+ P\ar[r]^h\ar[d]_k &
+ A\ar[d]^f\\
+ B\ar[r]_g &
+ C
+ }}
+ \end{equation*}
+ consists of functions $f$, $g$, $h$, and $k$ as shown, together with a path $f \circ h= g \circ k$.
+ Note that this is exactly an element of the pullback $(P\to A) \times_{(P\to C)} (P\to B)$ as defined in~\eqref{eq:defn-pullback}.
+ A commutative square is called a (homotopy) \define{pullback square}
+ \indexdef{pullback}%
+ if for any $X$, the induced map
+ \[ (X\to P) \to (X\to A) \times_{(X\to C)} (X\to B) \]
+ is an equivalence.
+ Prove that the pullback $P \defeq A\times_C B$ defined in~\eqref{eq:defn-pullback} is the corner of a pullback square.
+\end{ex}
+
+\begin{ex}\label{ex:pullback-pasting}
+ Suppose given two commutative squares
+ \begin{equation*}
+ \vcenter{\xymatrix{
+ A\ar[r]\ar[d] &
+ C\ar[r]\ar[d] &
+ E\ar[d]\\
+ B\ar[r] &
+ D\ar[r] &
+ F
+ }}
+ \end{equation*}
+ and suppose that the right-hand square is a pullback square.
+ Prove that the left-hand square is a pullback square if and only if the outer rectangle is a pullback square.
+\end{ex}
+
+\begin{ex}\label{ex:eqvboolbool}
+ Show that $\eqv{(\eqv\bool\bool)}{\bool}$.
+\end{ex}
+
+\begin{ex}\label{ex:equality-reflection}
+ Suppose we add to type theory the \emph{equality reflection rule} which says that if there is an element $p:x=y$, then in fact $x\jdeq y$.
+ Prove that for any $p:x=x$ we have $p\jdeq \refl{x}$.
+ (This implies that every type is a \emph{set} in the sense to be introduced in \cref{sec:basics-sets}; see \cref{sec:hedberg}.)
+\end{ex}
+
+\begin{ex}\label{ex:strengthen-transport-is-ap}
+ Show that \cref{thm:transport-is-ap} can be strengthened to
+ \[\transfib{B}{p}{{\blank}} =_{B(x)\to B(y)} \idtoeqv(\apfunc{B}(p))\]
+ without using function extensionality.
+ (In this and other similar cases, the apparently weaker formulation has been chosen for readability and consistency.)
+\end{ex}
+
+\begin{ex}\label{ex:strong-from-weak-funext}
+ Suppose that rather than function extensionality (\cref{axiom:funext}), we suppose only the existence of an element
+ \[ \funext : \prd{A:\UU}{B:A\to \UU}{f,g:\prd{x:A} B(x)} (f\htpy g) \to (f=g) \]
+ (with no relationship to $\happly$ assumed).
+ Prove that in fact, this is sufficient to imply the whole function extensionality axiom (that $\happly$ is an equivalence).
+ This is due to Voevodsky; its proof is tricky and may require concepts from later chapters.
+\end{ex}
+
+\begin{ex}\label{ex:equiv-functor-types}\
+ \begin{enumerate}
+ \item Show that if $\eqv{A}{A'}$ and $\eqv{B}{B'}$, then $\eqv{(A\times B)}{(A'\times B')}$.
+ \item Give two proofs of this fact, one using univalence and one not using it, and show that the two proofs are equal.
+ \item Formulate and prove analogous results for the other type formers: $\Sigma$, $\to$, $\Pi$, and $+$.
+\end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:dep-htpy-natural}
+ State and prove a version of \cref{lem:htpy-natural} for dependent functions.
+\end{ex}
+
+% Local Variables:
+% TeX-master: "hott-online"
+% End:
diff --git a/books/hott/blurb.tex b/books/hott/blurb.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8fb95945923535077c195247821137bd463c4cee
--- /dev/null
+++ b/books/hott/blurb.tex
@@ -0,0 +1,32 @@
+% Blurb on back cover, gets included in lulu cover as well
+% as regular version, so be careful with formatting
+
+{
+\parindent=0pt
+\parskip=\baselineskip
+{\OPTbacktitlefont
+\textit{From the Introduction:}}
+\OPTbackfont
+
+\emph{Homotopy type theory} is a new branch of mathematics that combines aspects of several different fields in a surprising way. It is based on a recently discovered connection between \emph{homotopy theory} and \emph{type theory}.
+It touches on topics as seemingly distant as the homotopy groups of spheres, the algorithms for type checking, and the definition of weak $\infty$-groupoids.
+
+Homotopy type theory brings new ideas into the very foundation of mathematics.
+On the one hand, there is Voevodsky's subtle and beautiful \emph{univalence axiom}.
+The univalence axiom implies, in particular, that isomorphic structures can be identified, a principle that mathematicians have been happily using on workdays, despite its incompatibility with the ``official'' doctrines of conventional foundations.
+On the other hand, we have \emph{higher inductive types}, which provide direct, logical descriptions of some of the basic spaces and constructions of homotopy theory: spheres, cylinders, truncations, localizations, etc.
+Both ideas are impossible to capture directly in classical set-theoretic foundations, but when combined in homotopy type theory, they permit an entirely new kind of ``logic of homotopy types''.
+
+This suggests a new conception of foundations of mathematics, with intrinsic homotopical content, an ``invariant'' conception of the objects of mathematics --- and convenient machine implementations, which can serve as a practical aid to the working mathematician.
+This is the \emph{Univalent Foundations} program.
+
+The present book is intended as a first systematic exposition of the basics of univalent foundations, and a collection of examples of this new style of reasoning --- but without requiring the reader to know or learn any formal logic, or to use any computer proof assistant.
+We believe that univalent foundations will eventually become a viable alternative to set theory as the ``implicit foundation'' for the unformalized mathematics done by most mathematicians.
+
+\bigskip
+
+\begin{center}
+ {\Large
+ \textit{Get a free copy of the book at HomotopyTypeTheory.org.}}
+\end{center}
+}
diff --git a/books/hott/categories.tex b/books/hott/categories.tex
new file mode 100644
index 0000000000000000000000000000000000000000..567c397a2ea517b3a98a105238167617b11aabfc
--- /dev/null
+++ b/books/hott/categories.tex
@@ -0,0 +1,1854 @@
+\chapter{Category theory}
+\label{cha:category-theory}
+
+Of the branches of mathematics, category theory is one which perhaps fits the least comfortably in set theoretic foundations.
+One problem is that most of category theory is invariant under weaker notions of ``sameness'' than equality, such as isomorphism in a category or equivalence of categories, in a way which set theory fails to capture.
+But this is the same sort of problem that the univalence axiom solves for types, by identifying equality with equivalence.
+Thus, in univalent foundations it makes sense to consider a notion of ``category'' in which equality of objects is identified with isomorphism in a similar way.
+
+Ignoring size issues, in set-based mathematics a category consists of a \emph{set} $A_0$ of objects and, for each $x,y\in A_0$, a \emph{set} $\hom_A(x,y)$ of morphisms.
+Under univalent foundations, a ``naive'' definition of category would simply mimic this with a \emph{type} of objects and \emph{types} of morphisms.
+If we allowed these types to contain arbitrary higher homotopy, then we ought to impose higher coherence conditions, leading to some notion of $(\infty,1)$-category,
+\index{.infinity1-category@$(\infty,1)$-category}%
+but at present our goal is more modest.
+We consider only 1-categories, and therefore we restrict the types $\hom_A(x,y)$ to be sets, i.e.\ 0-types.
+If we impose no further conditions, we will call this notion a \emph{precategory}.
+
+If we add the requirement that the type $A_0$ of objects is a set, then we end up with a definition that behaves much like the traditional set-theoretic one.
+Following Toby Bartels, we call this notion a \emph{strict category}.
+\index{strict!category}%
+Alternatively, we can require a generalized version of the univalence axiom, identifying $(x=_{A_0} y)$ with the type $\mathsf{iso}(x,y)$ of isomorphisms from $x$ to $y$.
+Since we regard the latter choice as usually the ``correct'' definition, we will call it simply a \emph{category}.
+
+A good example of the difference between the three notions of category is provided by the statement ``every fully faithful and essentially surjective functor is an equivalence of categories'', which in classical set-based category theory is equivalent to the axiom of choice.
+\index{mathematics!classical}%
+\index{axiom!of choice}%
+\index{classical!category theory}%
+\begin{enumerate}
+\item For strict categories, this is still equivalent to the axiom of choice.
+\item For precategories, there is no consistent axiom of choice which can make it true.
+\item For categories, it is provable \emph{without} any axiom of choice.
+\end{enumerate}
+We will prove the latter statement in this chapter, as well as other pleasant properties of categories, e.g.\ that equivalent categories are equal (as elements of the type of categories).
+We will also describe a universal way of ``saturating'' a precategory $A$ into a category $\widehat A$, which we call its \emph{Rezk completion},
+\index{completion!Rezk}%
+although it could also reasonably be called the \emph{stack completion} (see the Notes).
+
+The Rezk completion also sheds further light on the notion of equivalence of categories.
+For instance, the functor $A \to \widehat{A}$ is always fully faithful and essentially surjective, hence a ``weak equivalence''.
+It follows that a precategory is a category exactly when it ``sees'' all fully faithful and essentially surjective functors as equivalences; thus our notion of ``category'' is already inherent in the notion of ``fully faithful and essentially surjective functor''.
+
+We assume the reader has some basic familiarity with classical category theory.\index{classical!category theory}
+Recall that whenever we write \type it denotes some universe of types, but perhaps a different one at different times; everything we say remains true for any consistent choice of universe levels\index{universe level}.
+We will use the basic notions of homotopy type theory from \cref{cha:typetheory,cha:basics} and the propositional truncation from \cref{cha:logic}, but not much else from \cref{part:foundations}, except that our second construction of the Rezk completion will use a higher inductive type.
+
+
+\section{Categories and precategories}
+\label{sec:cats}
+
+In classical mathematics, there are many equivalent definitions of a category.
+In our case, since we have dependent types, it is natural to choose the arrows to be a type family indexed by the objects.
+This matches the way hom-types are always used in category theory: we never even consider comparing two arrows unless we know their domains and codomains agree.
+Furthermore, it seems clear that for a theory of 1-categories, the hom-types should all be sets.
+This leads us to the following definition.
+
+\begin{defn}\label{ct:precategory}
+ A \define{precategory}
+ \indexdef{precategory}
+ $A$ consists of the following.
+ \begin{enumerate}
+ \item A type $A_0$, whose elements are called \define{objects}.%
+ \indexdef{object!in a (pre)category}
+ We write $a:A$ for $a:A_0$.
+ \item For each $a,b:A$, a set $\hom_A(a,b)$, whose elements are called \define{arrows} or \define{morphisms}.%
+ \indexsee{arrow}{morphism}%
+ \indexdef{morphism!in a (pre)category}%
+ \indexdef{hom-set}%
+ \item For each $a:A$, a morphism $1_a:\hom_A(a,a)$, called the \define{identity morphism}.%
+ \indexdef{identity!morphism in a (pre)category}
+ \item For each $a,b,c:A$, a function%
+ \indexdef{composition!of morphisms in a (pre)category}
+ \[ \hom_A(b,c) \to \hom_A(a,b) \to \hom_A(a,c) \]
+ called \define{composition}, and denoted infix by $g\mapsto f\mapsto g\circ f$, or sometimes simply by $gf$.
+ \item For each $a,b:A$ and $f:\hom_A(a,b)$, we have $\id f {1_b\circ f}$ and $\id f {f\circ 1_a}$.
+ \item For each $a,b,c,d:A$ and
+ \begin{equation*}
+ f:\hom_A(a,b), \qquad
+ g:\hom_A(b,c), \qquad
+ h:\hom_A(c,d),
+ \end{equation*}
+ we have $\id {h\circ (g\circ f)}{(h\circ g)\circ f}$.
+ \end{enumerate}
+\end{defn}
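As an illustration, the data of this definition transcribes directly into a Lean 4 structure (field names are ours). The requirement that each $\hom_A(a,b)$ be a set is left implicit here, since Lean's `Eq` is proof-irrelevant anyway:

```lean
universe u v

structure Precategory where
  Obj : Type u
  Hom : Obj → Obj → Type v
  -- the identity morphism at each object
  ident : (a : Obj) → Hom a a
  -- composition, written comp g f for g ∘ f
  comp : {a b c : Obj} → Hom b c → Hom a b → Hom a c
  -- the unit laws
  idLeft : ∀ {a b : Obj} (f : Hom a b), comp (ident b) f = f
  idRight : ∀ {a b : Obj} (f : Hom a b), comp f (ident a) = f
  -- associativity
  assoc : ∀ {a b c d : Obj} (f : Hom a b) (g : Hom b c) (h : Hom c d),
    comp h (comp g f) = comp (comp h g) f
```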
+
+The problem with the notion of precategory is that for objects $a,b:A$, we have two possibly-different notions of ``sameness''.
+On the one hand, we have the type $(\id[A_0]{a}{b})$.
+But on the other hand, there is the standard categorical notion of \emph{isomorphism}.
+
+\begin{defn}\label{ct:isomorphism}
+ A morphism $f:\hom_A(a,b)$ is an \define{isomorphism}
+ \indexdef{isomorphism!in a (pre)category}%
+ if there is a morphism $g:\hom_A(b,a)$ such that $\id{g\circ f}{1_a}$ and $\id{f\circ g}{1_b}$.
+ We write $a\cong b$ for the type of such isomorphisms.
+\end{defn}
+
+\begin{lem}\label{ct:isoprop}
+ For any $f:\hom_A(a,b)$, the type ``$f$ is an isomorphism'' is a mere proposition.
+ Therefore, for any $a,b:A$ the type $a\cong b$ is a set.
+\end{lem}
+\begin{proof}
+ Suppose given $g:\hom_A(b,a)$ and $\eta:(\id{1_a}{g\circ f})$ and $\epsilon:(\id{f\circ g}{1_b})$, and similarly $g'$, $\eta'$, and $\epsilon'$.
+We must show $\id{(g,\eta,\epsilon)}{(g',\eta',\epsilon')}$.
+ But since all hom-sets are sets, their identity types are mere propositions, so it suffices to show $\id g {g'}$.
+ For this we have
+ \[g' = 1_a\circ g' = (g\circ f)\circ g' = g\circ (f\circ g') = g\circ 1_b = g\]
+ using $\eta$ and $\epsilon'$.
+\end{proof}
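The equational chain in this proof is short enough to replay in Lean 4 for the special case of ordinary functions (a sketch; names ours):

```lean
-- uniqueness of a two-sided inverse, following the chain
--   g' = 1 ∘ g' = (g ∘ f) ∘ g' = g ∘ (f ∘ g') = g ∘ 1 = g,
-- with η : g ∘ f = id and ε' : f ∘ g' = id as in the proof
example {α β : Type} (f : α → β) (g g' : β → α)
    (η : g ∘ f = id) (ε' : f ∘ g' = id) : g' = g :=
  calc g' = id ∘ g'    := rfl
    _ = (g ∘ f) ∘ g'   := by rw [η]
    _ = g ∘ (f ∘ g')   := rfl
    _ = g ∘ id         := by rw [ε']
    _ = g              := rfl
```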
+
+\symlabel{ct:inv}
+\index{inverse!in a (pre)category}%
+If $f:a\cong b$, then we write $\inv f$ for its inverse, which by \cref{ct:isoprop} is uniquely determined.
+
+The only relationship between these two notions of sameness that we have in a precategory is the following.
+
+\begin{lem}[\textsf{idtoiso}]\label{ct:idtoiso}
+ If $A$ is a precategory and $a,b:A$, then
+ \[(\id a b)\to (a \cong b).\]
+\end{lem}
+\begin{proof}
+ By induction on identity, we may assume $a$ and $b$ are the same.
+ But then we have $1_a:\hom_A(a,a)$, which is clearly an isomorphism.
+\end{proof}
+
+Evidently, this situation is analogous to the issue that motivated us to introduce the univalence axiom.
+In fact, we have the following:
+
+\begin{eg}\label{ct:precatset}
+ \index{set}%
+ There is a precategory \uset, whose type of objects is \set, and with $\hom_{\uset}(A,B) \defeq (A\to B)$.
+ The identity morphisms are identity functions and the composition is function composition.
+ For this precategory, \cref{ct:idtoiso} is equal to (the restriction to sets of) the map $\idtoeqv$ from \cref{sec:compute-universe}.
+
+ Of course, to be more precise we should call this category $\uset_\UU$, since its objects are only the \emph{small sets}
+ \index{small!set}%
+ relative to a universe \UU.
+\end{eg}
+
+Thus, it is natural to make the following definition.
+
+\begin{defn}\label{ct:category}
+ A \define{category}
+ \indexdef{category}
+ is a precategory such that for all $a,b:A$, the function $\idtoiso_{a,b}$ from \cref{ct:idtoiso} is an equivalence.
+\end{defn}
+
+In particular, in a category, if $a\cong b$, then $a=b$.
+
+\begin{eg}\label{ct:eg:set}
+ \index{univalence axiom}%
+ The univalence axiom implies immediately that \uset is a category.
+ One can also show, using univalence, that any precategory of set-level structures such as groups, rings, topological spaces, etc.\ is a category; see \cref{sec:sip}.
+\end{eg}
+
+We also note the following.
+
+\begin{lem}\label{ct:obj-1type}
+ In a category, the type of objects is a 1-type.
+\end{lem}
+\begin{proof}
+ It suffices to show that for any $a,b:A$, the type $\id a b$ is a set.
+ But $\id a b$ is equivalent to $a \cong b$, which is a set.
+\end{proof}
+
+\symlabel{isotoid}
+We write $\isotoid$ for the inverse $(a\cong b) \to (\id a b)$ of the map $\idtoiso$ from \cref{ct:idtoiso}.
+The following relationship between the two is important.
+
+\begin{lem}\label{ct:idtoiso-trans}
+ For $p:\id a a'$ and $q:\id b b'$ and $f:\hom_A(a,b)$, we have
+ \begin{equation}\label{ct:idtoisocompute}
+ \id{\trans{(p,q)}{f}}
+ {\idtoiso(q)\circ f \circ \inv{\idtoiso(p)}}.
+ \end{equation}
+\end{lem}
+\begin{proof}
+ By induction, we may assume $p$ and $q$ are $\refl a$ and $\refl b$ respectively.
+Then the left-hand side of~\eqref{ct:idtoisocompute} is simply $f$.
+ But by definition, $\idtoiso(\refl a)$ is $1_a$, and $\idtoiso(\refl b)$ is $1_b$, so the right-hand side of~\eqref{ct:idtoisocompute} is $1_b\circ f\circ 1_a$, which is equal to $f$.
+\end{proof}
+
+Similarly, we can show
+\begin{gather}
+ \id{\idtoiso(\rev p)}{\inv {(\idtoiso(p))}}\\
+ \id{\idtoiso(p\ct q)}{\idtoiso(q)\circ \idtoiso(p)}\\
+ \id{\isotoid(f\circ e)}{\isotoid(e)\ct \isotoid(f)}
+\end{gather}
+and so on.
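For instance, the first of these follows by induction on $p$: when $p$ is $\refl a$, both sides reduce to the identity isomorphism, since $1_a$ is its own inverse by uniqueness of inverses:
\[ \idtoiso(\rev{\refl a}) = \idtoiso(\refl a) = 1_a = \inv{(1_a)} = \inv{(\idtoiso(\refl a))}. \]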
+
+\begin{eg}\label{ct:orders}
+ A precategory in which each set $\hom_A(a,b)$ is a mere proposition is equivalently a type $A_0$ equipped with a mere relation ``$\le$'' that is reflexive ($a\le a$) and transitive (if $a\le b$ and $b\le c$, then $a\le c$).
+ We call this a \define{preorder}.
+ \indexdef{preorder}
+
+ In a preorder, a witness $f: a\le b$ is an isomorphism just when there exists some witness $g: b\le a$.
+ Thus, $a\cong b$ is the mere proposition that $a\le b$ and $b\le a$.
+ Therefore, a preorder $A$ is a category just when (1) each type $a=b$ is a mere proposition, and (2) for any $a,b:A_0$ there exists a function $(a\cong b) \to (a=b)$.
+ In other words, $A_0$ must be a set, and $\le$ must be antisymmetric\index{relation!antisymmetric} (if $a\le b$ and $b\le a$, then $a=b$).
+ We call this a \define{(partial) order} or a \define{poset}.
+ \indexdef{partial order}%
+ \indexdef{poset}%
+\end{eg}
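A concrete non-example may be helpful (it is not used in the sequel): the divisibility preorder on the integers,
\[ m \le n \defeq \Brck{\sm{k:\Z} \id{n}{km}}, \]
is reflexive and transitive, hence a preorder; but $2\le -2$ and $-2\le 2$ while $2\neq -2$, so it is not antisymmetric, and the corresponding precategory is not a category: the objects $2$ and $-2$ are isomorphic but unequal.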
+
+\begin{eg}\label{ct:gaunt}
+ If $A$ is a category, then $A_0$ is a set if and only if for any $a,b:A_0$, the type $a\cong b$ is a mere proposition.
+ Given that $A$ is a category, this is equivalent to saying that every automorphism in $A$ is an identity arrow. On the other hand, if $A$ is a precategory such that $A_0$ is a set, then $A$ is a category precisely if it is skeletal\index{mathematics!classical} (any two isomorphic objects are equal) \emph{and} every automorphism is an identity arrow.
+ Categories of this sort are sometimes called \define{gaunt}~\cite{bsp12infncats}.
+ \indexdef{category!gaunt}%
+ \indexdef{gaunt category}%
+ \index{skeletal category}%
+ \index{category!skeletal}%
+ There is not really any notion of ``skeletality'' for our categories, unless one considers \cref{ct:category} itself to be such.
+\end{eg}
+
+\begin{eg}\label{ct:discrete}
+ For any 1-type $X$, there is a category with $X$ as its type of objects and with $\hom(x,y) \defeq (x=y)$.
+ If $X$ is a set, we call this the \define{discrete}
+ \indexdef{category!discrete}%
+ \indexdef{discrete!category}%
+ category on $X$.
+ In general, we call it a \define{groupoid}
+ \indexdef{groupoid}
+ (see \cref{ct:groupoids}).
+\end{eg}
+
+\begin{eg}\label{ct:fundgpd}
+ For \emph{any} type $X$, there is a precategory with $X$ as its type of objects and with $\hom(x,y) \defeq \pizero{x=y}$.
+ The composition operation
+ \[ \pizero{y=z} \to \pizero{x=y} \to \pizero{x=z} \]
+ is defined by induction on truncation from concatenation $(y=z)\to(x=y)\to(x=z)$.
+ We call this the \define{fundamental pregroupoid}
+ \indexdef{fundamental!pregroupoid}%
+ \indexsee{pregroupoid, fundamental}{fundamental pregroupoid}%
+ of $X$.
+ (In fact, we have met it already in \cref{sec:van-kampen}; see also \cref{ex:rezk-vankampen}.)
+\end{eg}
+
+\begin{eg}\label{ct:hoprecat}
+ There is a precategory whose type of objects is \type and with $\hom(X,Y) \defeq \pizero{X\to Y}$, and composition defined by induction on truncation from ordinary composition $(Y\to Z) \to (X\to Y) \to (X\to Z)$.
+ We call this the \define{homotopy precategory of types}.
+ \indexdef{precategory!of types}%
+ \index{homotopy!category of types@(pre)category of types}%
+\end{eg}
+
+\begin{eg}\label{ct:rel}
+ Let \urel be the following precategory:
+ \begin{itemize}
+ \item Its objects are sets.
+ \item $\hom_{\urel}(X,Y) = X\to Y\to \prop$.
 \item $\hom_{\urel}(X,Y) \defeq X\to Y\to \prop$.
+ \item For a set $X$, we have $1_X(x,x') \defeq (x=x')$.
+ \item For $R:\hom_{\urel}(X,Y)$ and $S:\hom_{\urel}(Y,Z)$, their composite is defined by
+ \[ (S\circ R)(x,z) \defeq \Brck{\sm{y:Y} R(x,y) \times S(y,z)}.\]
+ \end{itemize}
+ Suppose $R:\hom_{\urel}(X,Y)$ is an isomorphism, with inverse $S$.
+ We observe the following.
+ \begin{enumerate}
+ \item If $R(x,y)$ and $S(y',x)$, then $(R\circ S)(y',y)$, and hence $y'=y$.
+ Similarly, if $R(x,y)$ and $S(y,x')$, then $x=x'$.\label{item:rel1}
+ \item For any $x$, we have $x=x$, hence $(S\circ R)(x,x)$.
+ Thus, there merely exists a $y:Y$ such that $R(x,y)$ and $S(y,x)$.\label{item:rel2}
+ \item Suppose $R(x,y)$.
+ By~\ref{item:rel2}, there merely exists a $y'$ with $R(x,y')$ and $S(y',x)$.
+ But then by~\ref{item:rel1}, merely $y'=y$, and hence $y'=y$ since $Y$ is a set.
+ Therefore, by transporting $S(y',x)$ along this equality, we have $S(y,x)$.
+ In conclusion, $R(x,y)\to S(y,x)$.
+ Similarly, $S(y,x) \to R(x,y)$.\label{item:rel3}
+ \item If $R(x,y)$ and $R(x,y')$, then by~\ref{item:rel3}, $S(y',x)$, so that by~\ref{item:rel1}, $y=y'$.
+ Thus, for any $x$ there is at most one $y$ such that $R(x,y)$.
+ And by~\ref{item:rel2}, there merely exists such a $y$, hence there exists such a $y$.
+ \end{enumerate}
+ In conclusion, if $R:\hom_{\urel}(X,Y)$ is an isomorphism, then for each $x:X$ there is exactly one $y:Y$ such that $R(x,y)$, and dually.
+ Thus, there is a function $f:X\to Y$ sending each $x$ to this $y$, which is an equivalence; hence $X=Y$.
+ With a little more work, we conclude that \urel is a category.
+\end{eg}
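To indicate the shape of that remaining work (a sketch only): by the uniqueness just established, the function $f$ satisfies
\[ R(x,y) \leftrightarrow (\id{f(x)}{y}), \]
so transporting along the equality $X=Y$ obtained from $f$ by univalence identifies $R$ with the identity relation $1_Y$, from which one checks that $\idtoiso$ is an equivalence.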
+
+We might now restrict ourselves to considering categories rather than precategories.
+Instead, we will develop many concepts for precategories as well as categories, in order to emphasize how much better-behaved categories are, as compared both to precategories and to ordinary categories in classical\index{mathematics!classical} mathematics.
+
+We will also see in \crefrange{sec:strict-categories}{sec:dagger-categories} that in slightly more exotic contexts, there are uses for certain kinds of precategories other than categories, each of which ``fixes'' the equality of objects in different ways.
+This emphasizes the ``pre''-ness of precategories: they are the raw material out of which multiple important categorical structures can be defined.
+
+
+\section{Functors and transformations}
+\label{sec:transfors}
+
+The following definitions are fairly obvious, and need no modification.
+
+\begin{defn}\label{ct:functor}
+ Let $A$ and $B$ be precategories.
+ A \define{functor}
+ \indexdef{functor}%
+ $F:A\to B$ consists of
+ \begin{enumerate}
+ \item A function $F_0:A_0\to B_0$, generally also denoted $F$.
+ \item For each $a,b:A$, a function $F_{a,b}:\hom_A(a,b) \to \hom_B(Fa,Fb)$, generally also denoted $F$.
+ \item For each $a:A$, we have $\id{F(1_a)}{1_{Fa}}$.
+ \item For each $a,b,c:A$ and $f:\hom_A(a,b)$ and $g:\hom_A(b,c)$, we have
+ \[\id{F(g\circ f)}{Fg\circ Ff}.\]\label{ct:functor:comp}
+ \end{enumerate}
+\end{defn}
+
+Note that by induction on identity, a functor also preserves \idtoiso.
+
+\begin{defn}\label{ct:nattrans}
+ For functors $F,G:A\to B$, a \define{natural transformation}
+ \indexdef{natural!transformation}%
+ \indexsee{transformation!natural}{natural transformation}%
+ $\gamma:F\to G$ consists of
+ \begin{enumerate}
+ \item For each $a:A$, a morphism $\gamma_a:\hom_B(Fa,Ga)$ (the ``components'').
+ \item For each $a,b:A$ and $f:\hom_A(a,b)$, we have $\id{Gf\circ \gamma_a}{\gamma_b\circ Ff}$ (the ``naturality axiom'').
+ \end{enumerate}
+\end{defn}
+
+Since each type $\hom_B(Fa,Gb)$ is a set, its identity type is a mere proposition.
+Thus, the naturality axiom is a mere proposition, so identity of natural transformations is determined by identity of their components.
+In particular, for any $F$ and $G$, the type of natural transformations from $F$ to $G$ is again a set.
+
+Similarly, identity of functors is determined by identity of the functions $A_0\to B_0$ and (transported along this) of the corresponding functions on hom-sets.
+
+\begin{defn}\label{ct:functor-precat}
+ \indexdef{precategory!of functors}%
+ For precategories $A,B$, there is a precategory $B^A$, called the \define{functor precategory}, defined by
+ \begin{itemize}
+ \item $(B^A)_0$ is the type of functors from $A$ to $B$.
+ \item $\hom_{B^A}(F,G)$ is the type of natural transformations from $F$ to $G$.
+ \end{itemize}
+\end{defn}
+\begin{proof}
+ We define $(1_F)_a\defeq 1_{Fa}$.
+ Naturality follows by the unit axioms of a precategory.
+ For $\gamma:F\to G$ and $\delta:G\to H$, we define $(\delta\circ\gamma)_a\defeq \delta_a\circ \gamma_a$.
+ Naturality follows by associativity.
+ Similarly, the unit and associativity laws for $B^A$ follow from those for $B$.
+\end{proof}
+
+\begin{lem}\label{ct:natiso}
+ \index{natural!isomorphism}%
+ \index{isomorphism!natural}%
+ A natural transformation $\gamma:F\to G$ is an isomorphism in $B^A$ if and only if each $\gamma_a$ is an isomorphism in $B$.
+\end{lem}
+\begin{proof}
+ If $\gamma$ is an isomorphism, then we have $\delta:G\to F$ that is its inverse.
 By definition of composition in $B^A$, $(\delta\gamma)_a\jdeq \delta_a\gamma_a$ and similarly $(\gamma\delta)_a\jdeq \gamma_a\delta_a$.
+ Thus, $\id{\delta\gamma}{1_F}$ and $\id{\gamma\delta}{1_G}$ imply $\id{\delta_a\gamma_a}{1_{Fa}}$ and $\id{\gamma_a\delta_a}{1_{Ga}}$, so $\gamma_a$ is an isomorphism.
+
+ Conversely, suppose each $\gamma_a$ is an isomorphism, with inverse called $\delta_a$, say.
+We define a natural transformation $\delta:G\to F$ with components $\delta_a$; for the naturality axiom we have
+ \[ Ff\circ \delta_a = \delta_b\circ \gamma_b\circ Ff \circ \delta_a = \delta_b\circ Gf\circ \gamma_a\circ \delta_a = \delta_b\circ Gf. \]
+ Now since composition and identity of natural transformations is determined on their components, we have $\id{\gamma\delta}{1_G}$ and $\id{\delta\gamma}{1_F}$.
+\end{proof}
+
+The following result is fundamental.
+
+\begin{thm}\label{ct:functor-cat}
+ \indexdef{category!of functors}%
+ \indexdef{functor!category of}%
+ If $A$ is a precategory and $B$ is a category, then $B^A$ is a category.
+\end{thm}
+\begin{proof}
+ Let $F,G:A\to B$; we must show that $\idtoiso:(\id{F}{G}) \to (F\cong G)$ is an equivalence.
+
+ To give an inverse to it, suppose $\gamma:F\cong G$ is a natural isomorphism.
+ Then for any $a:A$, we have an isomorphism $\gamma_a:Fa \cong Ga$, hence an identity $\isotoid(\gamma_a):\id{Fa}{Ga}$.
+ By function extensionality, we have an identity $\bar{\gamma}:\id[(A_0\to B_0)]{F_0}{G_0}$.
+
+ Now since the last two axioms of a functor are mere propositions, to show that $\id{F}{G}$ it will suffice to show that for any $a,b:A$, the functions
+ \begin{align*}
+ F_{a,b}&:\hom_A(a,b) \to \hom_B(Fa,Fb)\mathrlap{\qquad\text{and}}\\
+ G_{a,b}&:\hom_A(a,b) \to \hom_B(Ga,Gb)
+ \end{align*}
+ become equal when transported along $\bar\gamma$.
+ By computation for function extensionality, when applied to $a$, $\bar\gamma$ becomes equal to $\isotoid(\gamma_a)$.
+ But by \cref{ct:idtoiso-trans}, transporting $Ff:\hom_B(Fa,Fb)$ along $\isotoid(\gamma_a)$ and $\isotoid(\gamma_b)$ is equal to the composite $\gamma_b\circ Ff\circ \inv{(\gamma_a)}$, which by naturality of $\gamma$ is equal to $Gf$.
+
+ This completes the definition of a function $(F\cong G) \to (\id F G)$.
+ Now consider the composite
+ \[ (\id F G) \to (F\cong G) \to (\id F G). \]
+ Since hom-sets are sets, their identity types are mere propositions, so to show that two identities $p,q:\id F G$ are equal, it suffices to show that $\id[\id{F_0}{G_0}]{p}{q}$.
+ But in the definition of $\bar\gamma$, if $\gamma$ were of the form $\idtoiso(p)$, then $\gamma_a$ would be equal to $\idtoiso(p_a)$ (this can easily be proved by induction on $p$).
+ Thus, $\isotoid(\gamma_a)$ would be equal to $p_a$, and so by function extensionality we would have $\id{\bar\gamma}{p}$, which is what we need.
+
+ Finally, consider the composite
+ \[(F\cong G)\to (\id F G) \to (F\cong G). \]
+ Since identity of natural transformations can be tested componentwise, it suffices to show that for each $a$ we have $\id{\idtoiso(\bar\gamma)_a}{\gamma_a}$.
+ But as observed above, we have $\id{\idtoiso(\bar\gamma)_a}{\idtoiso((\bar\gamma)_a)}$, while $\id{(\bar\gamma)_a}{\isotoid(\gamma_a)}$ by computation for function extensionality.
+ Since $\isotoid$ and $\idtoiso$ are inverses, we have $\id{\idtoiso(\bar\gamma)_a}{\gamma_a}$ as desired.
+\end{proof}
+
+In particular, naturally isomorphic functors between categories (as opposed to precategories) are equal.
+
+\mentalpause
+
+We now define all the usual ways to compose functors and natural transformations.
+
+\begin{defn}\label{ct:functor-composition}
+ For functors $F:A\to B$ and $G:B\to C$, their composite $G\circ F:A\to C$ is given by
+ \begin{itemize}
+ \item The composite $(G_0\circ F_0) : A_0 \to C_0$
+ \item For each $a,b:A$, the composite
+ \[(G_{Fa,Fb}\circ F_{a,b}):\hom_A(a,b) \to \hom_C(GFa,GFb).\]
+ \end{itemize}
+ It is easy to check the axioms.
+\end{defn}
+
+\begin{defn}\label{ct:whisker}
+ For functors $F:A\to B$ and $G,H:B\to C$ and a natural transformation $\gamma:G\to H$, the composite $(\gamma F):GF\to HF$ is given by
+ \begin{itemize}
+ \item For each $a:A$, the component $\gamma_{Fa}$.
+ \end{itemize}
+ Naturality is easy to check.
+ Similarly, for $\gamma$ as above and $K:C\to D$, the composite $(K\gamma):KG\to KH$ is given by
+ \begin{itemize}
+ \item For each $b:B$, the component $K(\gamma_b)$.
+ \end{itemize}
+\end{defn}
+
+\begin{lem}\label{ct:interchange}
+ \index{interchange law}%
+ For functors $F,G:A\to B$ and $H,K:B\to C$ and natural transformations $\gamma:F\to G$ and $\delta:H\to K$, we have
+ \[\id{(\delta G)(H\gamma)}{(K\gamma)(\delta F)}.\]
+\end{lem}
+\begin{proof}
+ It suffices to check componentwise: at $a:A$ we have
+ \begin{align*}
+ ((\delta G)(H\gamma))_a
+ &\jdeq (\delta G)_{a}(H\gamma)_a\\
+ &\jdeq \delta_{Ga}\circ H(\gamma_a)\\
+ &= K(\gamma_a) \circ \delta_{Fa} \tag{by naturality of $\delta$}\\
+ &\jdeq (K \gamma)_a\circ (\delta F)_a\\
+ &\jdeq ((K \gamma)(\delta F))_a.\qedhere
+ \end{align*}
+\end{proof}
+
+\index{horizontal composition!of natural transformations}%
+\index{classical!category theory}%
+Classically, one defines the ``horizontal composite'' of $\gamma:F\to G$ and $\delta:H\to K$ to be the common value of ${(\delta G)(H\gamma)}$ and ${(K\gamma)(\delta F)}$.
+We will refrain from doing this, because while equal, these two transformations are not \emph{definitionally} equal.
+This also has the consequence that we can use the symbol $\circ$ (or juxtaposition) for all kinds of composition unambiguously: there is only one way to compose two natural transformations (as opposed to composing a natural transformation with a functor on either side).
+
+\begin{lem}\label{ct:functor-assoc}
+ \index{associativity!of functor composition}
+ Composition of functors is associative: $\id{H(GF)}{(HG)F}$.
+\end{lem}
+\begin{proof}
+ Since composition of functions is associative, this follows immediately for the actions on objects and on homs.
+ And since hom-sets are sets, the rest of the data is automatic.
+\end{proof}
+
+The equality in \cref{ct:functor-assoc} is likewise not definitional.
+(Composition of functions is definitionally associative, but the axioms that go into a functor must also be composed, and this breaks definitional associativity.) For this reason, we need also to know about \emph{coherence}\index{coherence} for associativity.
+
+\begin{lem}\label{ct:pentagon}
+ \index{associativity!of functor composition!coherence of}%
+ \cref{ct:functor-assoc} is coherent, i.e.\ the following pentagon\index{pentagon, Mac Lane} of equalities commutes:
+ \[ \xymatrix{ & K(H(GF)) \ar@{=}[dl] \ar@{=}[dr]\\
+ (KH)(GF) \ar@{=}[d] && K((HG)F) \ar@{=}[d]\\
+ ((KH)G)F && (K(HG))F \ar@{=}[ll] }
+ \]
+\end{lem}
+\begin{proof}
+ As in \cref{ct:functor-assoc}, this is evident for the actions on objects, and the rest is automatic.
+\end{proof}
+
+We will henceforth abuse notation by writing $H\circ G\circ F$ or $HGF$ for either $H(GF)$ or $(HG)F$, transporting along \cref{ct:functor-assoc} whenever necessary.
+We have a similar coherence result for units.
+
+\begin{lem}\label{ct:units}
+ For a functor $F:A\to B$, we have equalities $\id{(1_B\circ F)}{F}$ and $\id{(F\circ 1_A)}{F}$, such that given also $G:B\to C$, the following triangle of equalities commutes.
+ \[ \xymatrix{
+ G\circ (1_B \circ F) \ar@{=}[rr] \ar@{=}[dr] &&
+ (G\circ 1_B)\circ F \ar@{=}[dl] \\
+ & G \circ F.}
+ \]
+\end{lem}
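A proof sketch: on objects, both composites act as $F_0$,
\[ (1_B\circ F)_0 \jdeq \idfunc[B_0]\circ F_0 = F_0
 \qquad\text{and}\qquad
 (F\circ 1_A)_0 \jdeq F_0\circ \idfunc[A_0] = F_0, \]
and since hom-sets are sets, the remaining functor data, and the commutativity of the triangle, are automatic as in \cref{ct:functor-assoc,ct:pentagon}.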
+
+See \cref{ct:pre2cat,ct:2cat} for further development of these ideas.
+
+
+\section{Adjunctions}
+\label{sec:adjunctions}
+
+The definition of adjoint functors is straightforward; the main interesting aspect arises from proof-relevance.
+
+\begin{defn}\label{ct:adjoints}
+ A functor $F:A\to B$ is a \define{left adjoint}
+ \indexdef{left!adjoint}%
+ \indexdef{adjoint!functor}%
+ \indexdef{right!adjoint}%
+ \indexdef{adjoint!functor}%
+ \index{functor!adjoint}%
+ if there exists
+ \begin{itemize}
+ \item A functor $G:B\to A$.
+ \item A natural transformation $\eta:1_A \to GF$ (the \define{unit}\indexdef{unit!of an adjunction}).
+ \item A natural transformation $\epsilon:FG\to 1_B$ (the \define{counit}\indexdef{counit of an adjunction}).
+ \item $\id{(\epsilon F)(F\eta)}{1_F}$.
+ \item $\id{(G\epsilon)(\eta G)}{1_G}$.
+ \end{itemize}
+\end{defn}
+
+The last two equations are called the \define{triangle identities}\indexdef{triangle!identity} or \define{zigzag identities}\indexdef{zigzag identity}.
+\indexdef{identity!triangle}\indexdef{identity!zigzag}
+We leave it to the reader to define right adjoints analogously.
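For concreteness, the dual definition would read as follows: a functor $G:B\to A$ is a \emph{right adjoint} if there exist a functor $F:A\to B$ and natural transformations $\eta:1_A\to GF$ and $\epsilon:FG\to 1_B$ satisfying the same triangle identities
\[ \id{(\epsilon F)(F\eta)}{1_F}
 \qquad\text{and}\qquad
 \id{(G\epsilon)(\eta G)}{1_G}. \]
Thus $G$ is a right adjoint precisely when some $F$ is a left adjoint witnessed by this data.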
+
+\begin{lem}\label{ct:adjprop}
+ If $A$ is a category (but $B$ may be only a precategory), then the type ``$F$ is a left adjoint'' is a mere proposition.
+\end{lem}
+\begin{proof}
+ Suppose we are given $(G,\eta,\epsilon)$ with the triangle identities and also $(G',\eta',\epsilon')$.
+ Define $\gamma:G\to G'$ to be $(G'\epsilon)(\eta' G)$, and $\delta:G'\to G$ to be $(G\epsilon')(\eta G')$.
+ Then
+ \begin{align*}
+ \delta\gamma &=
+ (G\epsilon')(\eta G')(G'\epsilon)(\eta'G)\\
+ &= (G\epsilon')(G F G'\epsilon)(\eta G' F G)(\eta'G)\\
+ &= (G\epsilon)(G\epsilon'FG)(G F \eta' G)(\eta G)\\
+ &= (G\epsilon)(\eta G)\\
+ &= 1_G
+ \end{align*}
+ using \cref{ct:interchange} and the triangle identities.
+ Similarly, we show $\id{\gamma\delta}{1_{G'}}$, so $\gamma$ is a natural isomorphism $G\cong G'$.
+ By \cref{ct:functor-cat}, we have an identity $\id G {G'}$.
+
+ Now we need to know that when $\eta$ and $\epsilon$ are transported along this identity, they become equal to $\eta'$ and $\epsilon'$.
+ By \cref{ct:idtoiso-trans}, this transport is given by composing with $\gamma$ or $\delta$ as appropriate.
+ For $\eta$, this yields
+ \begin{equation*}
+ (G'\epsilon F)(\eta'GF)\eta
+ = (G'\epsilon F)(G'F\eta)\eta'
+ = \eta'
+ \end{equation*}
+ using \cref{ct:interchange} and the triangle identity.
+ The case of $\epsilon$ is similar.
+ Finally, the triangle identities transport correctly automatically, since hom-sets are sets.
+\end{proof}
+
+In \cref{sec:yoneda} we will give another proof of \cref{ct:adjprop}.
+
+
+\section{Equivalences}
+\label{sec:equivalences}
+
+It is usual in category theory to define an \emph{equivalence of categories} to be a functor $F:A\to B$ such that there exists a functor $G:B\to A$ and natural isomorphisms $F G \cong 1_B$ and $G F \cong 1_A$.
+Unlike the property of being an adjunction, however, this would not be a mere proposition without truncating it, for the same reasons that the type of quasi-inverses is ill-behaved (see \cref{sec:quasi-inverses}).
+And as in \cref{sec:hae}, we can avoid this by using the usual notion of \emph{adjoint} equivalence.
+\indexdef{adjoint!equivalence!of (pre)categories}
+
+\begin{defn}\label{ct:equiv}
+ A functor $F:A\to B$ is an \define{equivalence of (pre)categories}
+ \indexdef{equivalence!of (pre)categories}%
+ \indexdef{category!equivalence of}%
+ \indexdef{precategory!equivalence of}%
+ \index{functor!equivalence}%
+ if it is a left adjoint for which $\eta$ and $\epsilon$ are isomorphisms.
+ We write $\cteqv A B$ for the type of equivalences of categories from $A$ to $B$.
+\end{defn}
+
+By \cref{ct:adjprop,ct:isoprop}, if $A$ is a category, then the type ``$F$ is an equivalence of precategories'' is a mere proposition.
+
+\begin{lem}\label{ct:adjointification}
+ If for $F:A\to B$ there exists $G:B\to A$ and isomorphisms $GF\cong 1_A$ and $FG\cong 1_B$, then $F$ is an equivalence of precategories.
+\end{lem}
+\begin{proof}
+ Just like the proof of \cref{thm:equiv-iso-adj} for equivalences of types.
+\end{proof}
+
+\begin{defn}\label{ct:full-faithful}
+ We say a functor $F:A\to B$ is \define{faithful}
+ \indexdef{functor!faithful}%
 \index{faithful functor}%
+ if for all $a,b:A$, the function
+ \[F_{a,b}:\hom_A(a,b) \to \hom_B(Fa,Fb)\]
+ is injective, and \define{full}
+ \indexdef{functor!full}%
+ \indexdef{full functor}%
+ if for all $a,b:A$ this function is surjective.
+ If it is both (hence each $F_{a,b}$ is an equivalence) we say $F$ is \define{fully faithful}.
+ \indexdef{functor!fully faithful}%
+ \indexdef{fully faithful functor}%
+\end{defn}
+
+\begin{defn}\label{ct:split-essentially-surjective}
+ We say a functor $F:A\to B$ is \define{split essentially surjective}
+ \indexdef{functor!split essentially surjective}%
+ \indexdef{split!essentially surjective functor}%
+ if for all $b:B$ there exists an $a:A$ such that $Fa\cong b$.
+\end{defn}
+
+\begin{lem}\label{ct:ffeso}
+ For any precategories $A$ and $B$ and functor $F:A\to B$, the following types are equivalent.
+ \begin{enumerate}
+ \item $F$ is an equivalence of precategories.\label{item:ct:ffeso1}
+ \item $F$ is fully faithful and split essentially surjective.\label{item:ct:ffeso2}
+ \end{enumerate}
+\end{lem}
+\begin{proof}
+ Suppose $F$ is an equivalence of precategories, with $G,\eta,\epsilon$ specified.
+ Then we have the function
+ \begin{align*}
+ \hom_B(Fa,Fb) &\to \hom_A(a,b),\\
+ g &\mapsto \inv{\eta_b}\circ G(g)\circ \eta_a.
+ \end{align*}
+ For $f:\hom_A(a,b)$, we have
+ \[ \inv{\eta_{b}}\circ G(F(f))\circ \eta_{a} =
+ \inv{\eta_{b}} \circ \eta_{b} \circ f=
+ f
+ \]
+ while for $g:\hom_B(Fa,Fb)$ we have
+ \begin{align*}
+ F(\inv{\eta_b} \circ G(g)\circ\eta_a)
+ &= F(\inv{\eta_b})\circ F(G(g))\circ F(\eta_a)\\
+ &= \epsilon_{Fb}\circ F(G(g))\circ F(\eta_a)\\
+ &= g\circ\epsilon_{Fa}\circ F(\eta_a)\\
+ &= g
+ \end{align*}
+ using naturality of $\epsilon$, and the triangle identities twice.
+ Thus, $F_{a,b}$ is an equivalence, so $F$ is fully faithful.
+ Finally, for any $b:B$, we have $Gb:A$ and $\epsilon_b:FGb\cong b$.
+
+ On the other hand, suppose $F$ is fully faithful and split essentially surjective.
+ Define $G_0:B_0\to A_0$ by sending $b:B$ to the $a:A$ given by the specified essential splitting, and write $\epsilon_b$ for the likewise specified isomorphism $FGb\cong b$.
+
+ Now for any $g:\hom_B(b,b')$, define $G(g):\hom_A(Gb,Gb')$ to be the unique morphism such that $\id{F(G(g))}{\inv{(\epsilon_{b'})}\circ g \circ \epsilon_b }$ (which exists since $F$ is fully faithful).
+ Finally, for $a:A$ define $\eta_a:\hom_A(a,GFa)$ to be the unique morphism such that $\id{F\eta_a}{\inv{\epsilon_{Fa}}}$.
+ It is easy to verify that $G$ is a functor and that $(G,\eta,\epsilon)$ exhibit $F$ as an equivalence of precategories.
+
+ Now consider the composite~\ref{item:ct:ffeso1}$\to$\ref{item:ct:ffeso2}$\to$\ref{item:ct:ffeso1}.
+ We clearly recover the same function $G_0:B_0 \to A_0$.
+ For the action of $G$ on hom-sets, we must show that for $g:\hom_B(b,b')$, $G(g)$ is the (necessarily unique) morphism such that $F(G(g)) = \inv{(\epsilon_{b'})}\circ g \circ \epsilon_b$.
+ But this equation holds by the assumed naturality of $\epsilon$.
+ We also clearly recover $\epsilon$, while $\eta$ is uniquely characterized by $\id{F\eta_a}{\inv{\epsilon_{Fa}}}$ (which is one of the triangle identities assumed to hold in the structure of an equivalence of precategories).
+ Thus, this composite is equal to the identity.
+
+ Finally, consider the other composite~\ref{item:ct:ffeso2}$\to$\ref{item:ct:ffeso1}$\to$\ref{item:ct:ffeso2}.
+ Since being fully faithful is a mere proposition, it suffices to observe that we recover, for each $b:B$, the same $a:A$ and isomorphism $F a \cong b$.
+ But this is clear, since we used this function and isomorphism to define $G_0$ and $\epsilon$ in~\ref{item:ct:ffeso1}, which in turn are precisely what we used to recover~\ref{item:ct:ffeso2} again.
+ Thus, the composites in both directions are equal to identities, hence we have an equivalence \eqv{\text{\ref{item:ct:ffeso1}}}{\text{\ref{item:ct:ffeso2}}}.
+\end{proof}
+
However, if $A$ is not a category, then the types in \cref{ct:ffeso} need not be mere propositions.
+This suggests considering as well the following notions.
+
+\begin{defn}\label{ct:essentially-surjective}
+ A functor $F:A\to B$ is \define{essentially surjective}
+ \indexdef{functor!essentially surjective}%
+ \indexdef{essentially surjective functor}%
+ if for all $b:B$, there \emph{merely} exists an $a:A$ such that $Fa\cong b$.
+ We say $F$ is a \define{weak equivalence}
+ \indexsee{equivalence!of (pre)categories!weak}{weak equivalence}%
+ \indexdef{weak equivalence!of precategories}%
+ \indexsee{functor!weak equivalence}{weak equivalence}%
+ if it is fully faithful and essentially surjective.
+\end{defn}
+
+Being a weak equivalence is \emph{always} a mere proposition.
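Indeed, unfolding the definitions, the type in question is a product of mere propositions: for each $a,b:A$, being injective and being surjective are mere propositions about $F_{a,b}$, while essential surjectivity asserts, for each $b:B$, the truncation
\[ \Brck{\sm{a:A} (Fa\cong b)}, \]
which is a mere proposition by definition.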
+For categories, however, there is no difference between equivalences and weak ones.
+
+\index{acceptance}
+\begin{lem}\label{ct:catweq}
+ If $F:A\to B$ is fully faithful and $A$ is a category, then for any $b:B$ the type $\sm{a:A} (Fa\cong b)$ is a mere proposition.
+ Hence a functor between categories is an equivalence if and only if it is a weak equivalence.
+\end{lem}
+\begin{proof}
+ Suppose given $(a,f)$ and $(a',f')$ in $\sm{a:A} (Fa\cong b)$.
+ Then $\inv{f'}\circ f$ is an isomorphism $Fa \cong Fa'$.
+ Since $F$ is fully faithful, we have $g:a\cong a'$ with $Fg = \inv{f'}\circ f$.
+ And since $A$ is a category, we have $p:a=a'$ with $\idtoiso(p)=g$.
+ Now $Fg = \inv{f'}\circ f$ implies $\trans{(\map{(F_0)}{p})}{f} = f'$, hence (by the characterization of equalities in dependent pair types) $(a,f)=(a',f')$.
+
+ Thus, for fully faithful functors whose domain is a category, essential surjectivity is equivalent to split essential surjectivity, and so being a weak equivalence is equivalent to being an equivalence.
+\end{proof}
+
+This is an important advantage of our category theory over set-based approaches.
+With a purely set-based definition of category, the statement ``every fully faithful and essentially surjective functor is an equivalence of categories'' is equivalent to the axiom of choice \choice{}.
+Here we have it for free, as a category-theoretic version of the principle of unique choice (\cref{sec:unique-choice}).
+(In fact, this property characterizes categories among precategories; see \cref{sec:rezk}.)
+
+On the other hand, the following characterization of equivalences of categories is perhaps even more useful.
+
+\begin{defn}\label{ct:isocat}
+ A functor $F:A\to B$ is an \define{isomorphism of (pre)cat\-ego\-ries}
+ \indexdef{isomorphism!of (pre)categories}%
+ \indexdef{category!isomorphism of}%
+ \indexdef{precategory!isomorphism of}%
+ if $F$ is fully faithful and $F_0:A_0\to B_0$ is an equivalence of types.
+\end{defn}
+
+This definition is an exception to our general rule (see \cref{sec:basics-equivalences}) of only using the word ``isomorphism'' for sets and set-like objects.
+However, it does carry an appropriate connotation here, because for general precategories, isomorphism is stronger than equivalence.
+
+Note that being an isomorphism of precategories is always a mere property.
+Let $A\cong B$ denote the type of isomorphisms of (pre)categories from $A$ to $B$.
+
+\begin{lem}\label{ct:isoprecat}
+ For precategories $A$ and $B$ and $F:A\to B$, the following are equivalent.
+ \begin{enumerate}
+ \item $F$ is an isomorphism of precategories.\label{item:ct:ipc1}
+ \item There exist $G:B\to A$ and $\eta:1_A = GF$ and $\epsilon:FG=1_B$ such that\label{item:ct:ipc2}
+ \begin{equation}
+ \apfunc{(\lam{H} F H)}({\eta}) = \apfunc{(\lam{K} K F)}({\opp\epsilon}).\label{eq:ct:isoprecattri}
+ \end{equation}
+ \item There merely exist $G:B\to A$ and $\eta:1_A = GF$ and $\epsilon:FG=1_B$.\label{item:ct:ipc3}
+ \end{enumerate}
+\end{lem}
+
+Note that if $B_0$ is not a 1-type, then~\eqref{eq:ct:isoprecattri} may not be a mere proposition.
+
+\begin{proof}
+ First note that since hom-sets are sets, equalities between equalities of functors are uniquely determined by their object-parts.
+ Thus, by function extensionality,~\eqref{eq:ct:isoprecattri} is equivalent to
+ \begin{equation}
 \map{(F_0)}{\eta_0}_a = \opp{(\epsilon_0)}_{F_0 a}\label{eq:ct:ipctri}
+ \end{equation}
+ for all $a:A_0$.
+ Note that this is precisely the triangle identity for $G_0$, $\eta_0$, and $\epsilon_0$ to be a proof that $F_0$ is a half adjoint equivalence of types.
+
+ Now suppose~\ref{item:ct:ipc1}.
+ Let $G_0:B_0 \to A_0$ be the inverse of $F_0$, with $\eta_0: \idfunc[A_0] = G_0 F_0$ and $\epsilon_0:F_0G_0 = \idfunc[B_0]$ satisfying the triangle identity, which is precisely~\eqref{eq:ct:ipctri}.
+ Now define $G_{b,b'}:\hom_B(b,b') \to \hom_A(G_0b,G_0b')$ by
+ \[ G_{b,b'}(g) \defeq
+ \inv{(F_{G_0b,G_0b'})}\Big(\idtoiso(\opp{(\epsilon_0)}_{b'}) \circ g \circ \idtoiso((\epsilon_0)_b)\Big)
+ \]
+ (using the assumption that $F$ is fully faithful).
+ Since \idtoiso takes inverses to inverses and concatenation to composition, and $F$ is a functor, it follows that $G$ is a functor.
+
+ By definition, we have $(GF)_0 \jdeq G_0 F_0$, which is equal to $\idfunc[A_0]$ by $\eta_0$.
+ To obtain $1_A = GF$, we need to show that when transported along $\eta_0$, the identity function of $\hom_A(a,a')$ becomes equal to the composite $G_{Fa,Fa'} \circ F_{a,a'}$.
+ In other words, for any $f:\hom_A(a,a')$ we must have
+ \begin{multline*}
+ \idtoiso((\eta_0)_{a'}) \circ f \circ \idtoiso(\opp{(\eta_0)}_a)\\
+ = \inv{(F_{GFa,GFa'})}\Big(\idtoiso(\opp{(\epsilon_0)}_{Fa'})
+ \circ F_{a,a'}(f) \circ \idtoiso((\epsilon_0)_{Fa})\Big).
+ \end{multline*}
+ But this is equivalent to
+ \begin{multline*}
+ (F_{GFa,GFa'})\Big(\idtoiso((\eta_0)_{a'}) \circ f \circ \idtoiso(\opp{(\eta_0)}_a)\Big)\\
+ = \idtoiso(\opp{(\epsilon_0)}_{Fa'})
+ \circ F_{a,a'}(f) \circ \idtoiso((\epsilon_0)_{Fa}).
+ \end{multline*}
+ which follows from functoriality of $F$, the fact that $F$ preserves \idtoiso, and~\eqref{eq:ct:ipctri}.
+ Thus we have $\eta:1_A = GF$.
+
+ On the other side, we have $(FG)_0\jdeq F_0 G_0$, which is equal to $\idfunc[B_0]$ by $\epsilon_0$.
+ To obtain $FG=1_B$, we need to show that when transported along $\epsilon_0$, the identity function of $\hom_B(b,b')$ becomes equal to the composite $F_{Gb,Gb'} \circ G_{b,b'}$.
+ That is, for any $g:\hom_B(b,b')$ we must have
+ \begin{multline*}
+ F_{Gb,Gb'}\Big(\inv{(F_{Gb,Gb'})}\Big(\idtoiso(\opp{(\epsilon_0)}_{b'}) \circ g \circ \idtoiso((\epsilon_0)_b)\Big)\Big)\\
+ = \idtoiso((\opp{\epsilon_0})_{b'}) \circ g \circ \idtoiso((\epsilon_0)_b).
+ \end{multline*}
+ But this is just the fact that $\inv{(F_{Gb,Gb'})}$ is the inverse of $F_{Gb,Gb'}$.
+ And we have remarked that~\eqref{eq:ct:isoprecattri} is equivalent to~\eqref{eq:ct:ipctri}, so~\ref{item:ct:ipc2} holds.
+
+ Conversely, suppose given~\ref{item:ct:ipc2}; then the object-parts of $G$, $\eta$, and $\epsilon$ together with~\eqref{eq:ct:ipctri} show that $F_0$ is an equivalence of types.
+ And for $a,a':A_0$, we define $\overline{G}_{a,a'}: \hom_B(Fa,Fa') \to \hom_A(a,a')$ by
+ \begin{equation}
+ \overline{G}_{a,a'}(g) \defeq \idtoiso(\opp{\eta})_{a'} \circ G(g) \circ \idtoiso(\eta)_a.\label{eq:ct:gbar}
+ \end{equation}
+ By naturality of $\idtoiso(\eta)$, for any $f:\hom_A(a,a')$ we have
+ \begin{align*}
+ \overline{G}_{a,a'}(F_{a,a'}(f))
+ &= \idtoiso(\opp{\eta})_{a'} \circ G(F(f)) \circ \idtoiso(\eta)_a\\
+ &= \idtoiso(\opp{\eta})_{a'} \circ \idtoiso(\eta)_{a'} \circ f \\
+ &= f.
+ \end{align*}
+ On the other hand, for $g:\hom_B(Fa,Fa')$ we have
+ \begin{align*}
+ F_{a,a'}(\overline{G}_{a,a'}(g))
+ &= F(\idtoiso(\opp{\eta})_{a'}) \circ F(G(g)) \circ F(\idtoiso(\eta)_a)\\
+ &= \idtoiso(\epsilon)_{Fa'}
+ \circ F(G(g))
+ \circ \idtoiso(\opp{\epsilon})_{Fa}\\
+ &= \idtoiso(\epsilon)_{Fa'}
+ \circ \idtoiso(\opp{\epsilon})_{Fa'}
+ \circ g\\
+ &= g.
+ \end{align*}
+ (There are lemmas needed here regarding the compatibility of \idtoiso and whiskering, which we leave to the reader to state and prove.)
+ Thus, $F_{a,a'}$ is an equivalence, so $F$ is fully faithful; i.e.~\ref{item:ct:ipc1} holds.
+
+ Now the composite~\ref{item:ct:ipc1}$\to$\ref{item:ct:ipc2}$\to$\ref{item:ct:ipc1} is equal to the identity since~\ref{item:ct:ipc1} is a mere proposition.
+ On the other side, tracing through the above constructions we see that the composite~\ref{item:ct:ipc2}$\to$\ref{item:ct:ipc1}$\to$\ref{item:ct:ipc2} essentially preserves the object-parts $G_0$, $\eta_0$, $\epsilon_0$, and the object-part of~\eqref{eq:ct:isoprecattri}.
+ And in the latter three cases, the object-part is all there is, since hom-sets are sets.
+
+ Thus, it suffices to show that we recover the action of $G$ on hom-sets.
+ In other words, we must show that if $g:\hom_B(b,b')$, then
+ \[ G_{b,b'}(g) =
+ \overline{G}_{G_0b,G_0b'}\Big(\idtoiso(\opp{(\epsilon_0)}_{b'}) \circ g \circ \idtoiso((\epsilon_0)_b)\Big)
+ \]
+ where $\overline{G}$ is defined by~\eqref{eq:ct:gbar}.
+ However, this follows from functoriality of $G$ and the \emph{other} triangle identity, which we have seen in \cref{cha:equivalences} is equivalent to~\eqref{eq:ct:ipctri}.
+
+ Now since~\ref{item:ct:ipc1} is a mere proposition, so is~\ref{item:ct:ipc2}, so it suffices to show they are logically equivalent to~\ref{item:ct:ipc3}.
+ Of course,~\ref{item:ct:ipc2}$\to$\ref{item:ct:ipc3}, so let us assume~\ref{item:ct:ipc3}.
+ Since~\ref{item:ct:ipc1} is a mere proposition, we may assume given $G$, $\eta$, and $\epsilon$.
+ Then $G_0$ along with $\eta$ and $\epsilon$ imply that $F_0$ is an equivalence.
+ Moreover, we also have natural isomorphisms $\idtoiso(\eta):1_A\cong GF$ and $\idtoiso(\epsilon):FG\cong 1_B$, so by \cref{ct:adjointification}, $F$ is an equivalence of precategories, and in particular fully faithful.
+\end{proof}
+
+From \cref{ct:isoprecat}\ref{item:ct:ipc2} and $\idtoiso$ in functor categories, we conclude immediately that any isomorphism of precategories is an equivalence.
+For precategories, the converse can fail.
+
+\begin{eg}\label{ct:chaotic}
+ Let $X$ be a type and $x_0:X$ an element, and let $X_{\mathrm{ch}}$ denote the \emph{chaotic}\indexdef{chaotic precategory} or \emph{indiscrete}\indexdef{indiscrete precategory} precategory on $X$.
+ By definition, we have $(X_{\mathrm{ch}})_0\defeq X$, and $\hom_{X_{\mathrm{ch}}}(x,x') \defeq \unit$ for all $x,x'$.
+ Then the unique functor $X_{\mathrm{ch}}\to \unit$ is an equivalence of precategories, but not an isomorphism unless $X$ is contractible, since an isomorphism requires in particular that the object-function $X\to\unit$ be an equivalence of types.
+
+ This example also shows that a precategory can be equivalent to a category without itself being a category.
+ Of course, if a precategory is \emph{isomorphic} to a category, then it must itself be a category.
+\end{eg}
+
+However, for categories, the two notions coincide.
+
+\begin{lem}\label{ct:eqv-levelwise}
+ For categories $A$ and $B$, a functor $F:A\to B$ is an equivalence of categories if and only if it is an isomorphism of categories.
+\end{lem}
+\begin{proof}
+ Since both are mere properties, it suffices to show they are logically equivalent.
+ So first suppose $F$ is an equivalence of categories, with $(G,\eta,\epsilon)$ given.
+ We have already seen that $F$ is fully faithful.
+ By \cref{ct:functor-cat}, the natural isomorphisms $\eta$ and $\epsilon$ yield identities $\id{1_A}{GF}$ and $\id{FG}{1_B}$, hence in particular identities $\id{\idfunc[A]}{G_0\circ F_0}$ and $\id{F_0\circ G_0}{\idfunc[B]}$.
+Thus, $F_0$ is an equivalence of types.
+
+ Conversely, suppose $F$ is fully faithful and $F_0$ is an equivalence of types, with inverse $G_0$, say.
+ Then for each $b:B$ we have $G_0 b:A$ and an identity $\id{FGb}{b}$, hence an isomorphism $FGb\cong b$.
+ Thus, by \cref{ct:ffeso}, $F$ is an equivalence of categories.
+\end{proof}
+
+Of course, there is yet a third notion of sameness for (pre)categories: equality.
+However, the univalence axiom implies that it coincides with isomorphism.
+
+\begin{lem}\label{ct:cat-eq-iso}
+ If $A$ and $B$ are precategories, then the function
+ \[(\id A B) \to (A\cong B)\]
+ (defined by induction from the identity functor) is an equivalence of types.
+\end{lem}
+\begin{proof}
+ As usual for dependent sum types, to give an element of $\id A B$ is equivalent to giving
+ \begin{itemize}
+ \item an identity $P_0:\id{A_0}{B_0}$,
+ \item for each $a,b:A_0$, an identity
+ \[P_{a,b}:\id{\hom_A(a,b)}{\hom_B(\trans {P_0} a,\trans {P_0} b)},\]
+ \item identities $\id{\trans {(P_{a,a})} {1_a}}{1_{\trans {P_0} a}}$ and
+ \narrowequation{\id{\trans {(P_{a,c})} {gf}}{\trans {(P_{b,c})} g \circ \trans {(P_{a,b})} f}.}
+ \end{itemize}
+ (Again, we use the fact that the identity types of hom-sets are mere propositions.)
+ However, by univalence, this is equivalent to giving
+ \begin{itemize}
+ \item an equivalence of types $F_0:\eqv{A_0}{B_0}$,
+ \item for each $a,b:A_0$, an equivalence of types
+ \[F_{a,b}:\eqv{\hom_A(a,b)}{\hom_B(F_0 (a),F_0 (b))},\]
+ \item and identities $\id{F_{a,a}(1_a)}{1_{F_0 (a)}}$ and $\id{F_{a,c}(gf)}{F_{b,c} (g)\circ F_{a,b} (f)}$.
+ \end{itemize}
+ But this consists exactly of a functor $F:A\to B$ that is an isomorphism of categories.
+ And by induction on identity, this equivalence $\eqv{(\id A B)}{(A\cong B)}$ is equal to the one obtained by induction.
+\end{proof}
+
+Thus, for categories, equality also coincides with equivalence.
+We can interpret this as saying that categories, functors, and natural transformations form, not just a pre-2-category, but a 2-category (see \cref{ct:pre2cat}).
+
+\begin{thm}\label{ct:cat-2cat}
+ If $A$ and $B$ are categories, then the function
+ \[(\id A B) \to (\cteqv A B)\]
+ (defined by induction from the identity functor) is an equivalence of types.
+\end{thm}
+\begin{proof}
+ By \cref{ct:cat-eq-iso,ct:eqv-levelwise}.
+\end{proof}
+
+As a consequence, the type of categories is a 2-type.
+For since $\cteqv A B$ is a subtype of the type of functors from $A$ to $B$, which are the objects of a category, it is a 1-type; hence the identity types $\id A B$ are also 1-types.
+
+
+\section{The Yoneda lemma}
+\label{sec:yoneda}
+\index{Yoneda!lemma|(}
+
+Recall that we have a category \uset whose objects are sets and whose morphisms are functions.
+We now show that every precategory has a \uset-valued hom-functor.
+First we need to define opposites and products of (pre)categories.
+
+\begin{defn}\label{ct:opposite-category}
+ For a precategory $A$, its \define{opposite}
+ \indexdef{opposite of a (pre)category}%
+ \indexdef{precategory!opposite}%
+ \indexdef{category!opposite}%
+ $A\op$ is a precategory with the same type of objects, with $\hom_{A\op}(a,b) \defeq \hom_A(b,a)$, and with identities and composition inherited from $A$.
+\end{defn}
+
+\begin{defn}\label{ct:prod-cat}
+ For precategories $A$ and $B$, their \define{product}
+ \index{precategory!product of}%
+ \index{category!product of}%
+ \index{product!of (pre)categories}%
+ $A\times B$ is a precategory with $(A\times B)_0 \defeq A_0 \times B_0$ and
+ \[\hom_{A\times B}((a,b),(a',b')) \defeq \hom_A(a,a') \times \hom_B(b,b').\]
+ Identities are defined by $1_{(a,b)}\defeq (1_a,1_b)$ and composition by
+ \narrowequation{(g,g')(f,f') \defeq ((gf),(g'f')).}
+\end{defn}
+
+\begin{lem}\label{ct:functorexpadj}
+ For precategories $A,B,C$, the following types are equivalent.
+ \begin{enumerate}
+ \item Functors $A\times B\to C$.
+ \item Functors $A\to C^B$.
+ \end{enumerate}
+\end{lem}
+\begin{proof}
+ Given $F:A\times B\to C$, for any $a:A$ we obviously have a functor $F_a : B\to C$.
+ This gives a function $A_0 \to (C^B)_0$.
+ Next, for any $f:\hom_A(a,a')$, we have for any $b:B$ the morphism $F_{(a,b),(a',b)}(f,1_b):F_a(b) \to F_{a'}(b)$.
+ These are the components of a natural transformation $F_a \to F_{a'}$.
+ Functoriality in $a$ is easy to check, so we have a functor $\hat{F}:A\to C^B$.
+
+ Conversely, suppose given $G:A\to C^B$.
+ Then for any $a:A$ and $b:B$ we have the object $G(a)(b):C$, giving a function $A_0 \times B_0 \to C_0$.
+ And for $f:\hom_A(a,a')$ and $g:\hom_B(b,b')$, we have the morphism
+ \begin{equation*}
+ G(a')_{b,b'}(g)\circ G_{a,a'}(f)_b = G_{a,a'}(f)_{b'} \circ G(a)_{b,b'}(g)
+ \end{equation*}
+ in $\hom_C(G(a)(b), G(a')(b'))$.
+ Functoriality is again easy to check, so we have a functor $\check{G}:A\times B \to C$.
+
+ Finally, it is also clear that these operations are inverses.
+\end{proof}
+
+Now for any precategory $A$, we have a hom-functor
+\indexdef{hom-functor}%
+\[\hom_A : A\op \times A \to \uset.\]
+It takes a pair $(a,b): (A\op)_0 \times A_0 \jdeq A_0 \times A_0$ to the set $\hom_A(a,b)$.
+For a morphism $(f,f') : \hom_{A\op\times A}((a,b),(a',b'))$, by definition we have $f:\hom_A(a',a)$ and $f':\hom_A(b,b')$, so we can define
+\begin{align*}
+ (\hom_A)_{(a,b),(a',b')}(f,f')
+ &\defeq (g \mapsto (f'gf))\\
+ &: \hom_A(a,b) \to \hom_A(a',b').
+\end{align*}
+Functoriality is easy to check.
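+
+For instance, spelling out the composition case: a composite in $A\op\times A$ of $(f_1,f_1'):(a,b)\to(a',b')$ and $(f_2,f_2'):(a',b')\to(a'',b'')$ is $(f_1\circ f_2,\; f_2'\circ f_1')$, where the first composition happens in $A$. Applying $\hom_A$ to this composite, or applying $\hom_A$ to the two morphisms separately and composing in \uset, both send $g:\hom_A(a,b)$ to
+\[ f_2'\circ f_1'\circ g \circ f_1\circ f_2 \]
+by associativity; preservation of identities is immediate.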
+
+By \cref{ct:functorexpadj}, therefore, we have an induced functor $\y:A\to \uset^{A\op}$, which we call the \define{Yoneda embedding}.
+\indexdef{Yoneda!embedding}%
+\indexdef{embedding!Yoneda}%
+
+\begin{thm}[The Yoneda lemma]\label{ct:yoneda}
+ \indexdef{Yoneda!lemma}
+ For any precategory $A$, any $a:A$, and any functor $F:\uset^{A\op}$, we have an isomorphism
+ \begin{equation}\label{eq:yoneda}
+ \hom_{\uset^{A\op}}(\y a, F) \cong Fa.
+ \end{equation}
+ Moreover, this is natural in both $a$ and $F$.
+\end{thm}
+\begin{proof}
+ Given a natural transformation $\alpha:\y a \to F$, we can consider the component $\alpha_a : \y a(a) \to F a$.
+ Since $\y a(a)\jdeq \hom_A(a,a)$, we have $1_a : \y a(a)$, so that $\alpha_a(1_a) : F a$.
+ This gives a function $(\alpha \mapsto \alpha_a(1_a))$ from left to right in~\eqref{eq:yoneda}.
+
+ In the other direction, given $x:F a$, we define $\alpha:\y a \to F$ by
+ \[\alpha_{a'}(f) \defeq F_{a,a'}(f)(x). \]
+ Naturality is easy to check, so this gives a function from right to left in~\eqref{eq:yoneda}.
+
+ To show that these are inverses, first suppose given $x:F a$.
+ Then with $\alpha$ defined as above, we have $\alpha_a(1_a) = F_{a,a}(1_a)(x) = 1_{F a}(x) = x$.
+ On the other hand, if we suppose given $\alpha:\y a \to F$ and define $x$ as above, then for any $f:\hom_A(a',a)$ we have
+ \begin{align*}
+ \alpha_{a'}(f)
+ &= \alpha_{a'} (\y a_{a,a'}(f)(1_a))\\
+ &= (\alpha_{a'}\circ \y a_{a,a'}(f))(1_a)\\
+ &= (F_{a,a'}(f)\circ \alpha_a)(1_a)\\
+ &= F_{a,a'}(f)(\alpha_a(1_a))\\
+ &= F_{a,a'}(f)(x).
+ \end{align*}
+ Thus, both composites are equal to identities.
+ We leave the proof of naturality to the reader.
+\end{proof}
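+
+As a classical illustration of this lemma (not needed in what follows), let $M$ be a monoid, regarded as a precategory $A$ with $A_0\defeq\unit$ and $\hom_A(\star,\star)\defeq M$. A functor $F:\uset^{A\op}$ is then exactly a set with a right $M$-action, and $\y\star$ is $M$ acting on itself by multiplication, so~\eqref{eq:yoneda} reads
+\[ \hom_{\uset^{A\op}}(\y\star,F)\cong F\star, \]
+i.e.\ an $M$-equivariant map out of $M$ is determined freely by where it sends the unit. The faithfulness of \y in this case recovers a form of Cayley's theorem: $m\mapsto (g\mapsto mg)$ embeds $M$ into the monoid of functions $M\to M$.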
+
+\begin{cor}\label{ct:yoneda-embedding}
+ The Yoneda embedding $\y :A\to \uset^{A\op}$ is fully faithful.
+\end{cor}
+\begin{proof}
+ By \cref{ct:yoneda}, we have
+ \[ \hom_{\uset^{A\op}}(\y a, \y b) \cong \y b(a) \jdeq \hom_A(a,b). \]
+ It is easy to check that this isomorphism is in fact the action of \y on hom-sets.
+\end{proof}
+
+\begin{cor}\label{ct:yoneda-mono}
+ If $A$ is a category, then $\y_0 : A_0 \to (\uset^{A\op})_0$ is an embedding.
+ In particular, if $\y a = \y b$, then $a=b$.
+\end{cor}
+\begin{proof}
+ By \cref{ct:yoneda-embedding}, \y induces an isomorphism on sets of isomorphisms.
+ But as $A$ and $\uset^{A\op}$ are categories and \y is a functor, this is equivalently an isomorphism on identity types, which is the definition of being an embedding.
+\end{proof}
+
+\begin{defn}\label{ct:representable}
+ A functor $F:\uset^{A\op}$ is said to be \define{representable}
+ \indexdef{functor!representable}%
+ \indexdef{representable functor}%
+ if there exists $a:A$ and an isomorphism $\y a \cong F$.
+\end{defn}
+
+\begin{thm}\label{ct:representable-prop}
+ If $A$ is a category, then the type ``$F$ is representable'' is a mere proposition.
+\end{thm}
+\begin{proof}
+ By definition ``$F$ is representable'' is just the fiber of $\y_0$ over $F$.
+ Since $\y_0$ is an embedding by \cref{ct:yoneda-mono}, this fiber is a mere proposition.
+\end{proof}
+
+In particular, in a category, any two representations of the same functor are equal.
+We can use this to give a different proof of \cref{ct:adjprop}.
+First we give a characterization of adjunctions in terms of representability.
+
+\begin{lem}\label{ct:adj-repr}
+ For any precategories $A$ and $B$ and a functor $F:A\to B$, the following types are equivalent.
+ \begin{enumerate}
+ \item $F$ is a left adjoint\index{adjoint!functor}.\label{item:ct:ar1}
+ \item For each $b:B$, the functor $(a \mapsto \hom_B(Fa,b))$ from $A\op$ to \uset is representable\index{representable functor}.\label{item:ct:ar2}
+ \end{enumerate}
+\end{lem}
+\begin{proof}
+ An element of the type~\ref{item:ct:ar2} consists of a function $G_0:B_0 \to A_0$ together with, for every $a:A$ and $b:B$ an isomorphism
+ \[ \gamma_{a,b}:\hom_B(Fa,b) \cong \hom_A(a,G_0 b) \]
+ such that $\gamma_{a,b}(g \circ Ff) = \gamma_{a',b}(g)\circ f$ for $f:\hom_{A}(a,a')$ and $g:\hom_B(Fa',b)$.
+
+ Given this, for $a:A$ we define $\eta_a \defeq \gamma_{a,Fa}(1_{Fa})$, and for $b:B$ we define $\epsilon_b \defeq \inv{(\gamma_{Gb,b})}(1_{Gb})$.
+ Now for $g:\hom_B(b,b')$ we define
+ \[ G_{b,b'}(g) \defeq \gamma_{G b, b'}(g \circ \epsilon_b). \]
+ The verifications that $G$ is a functor and $\eta$ and $\epsilon$ are natural transformations satisfying the triangle identities are exactly as in the classical case, and as they are all mere propositions we will not care about their values.
+ Thus, we have a function~\ref{item:ct:ar2}$\to$\ref{item:ct:ar1}.
+
+ In the other direction, if $F$ is a left adjoint, we of course have $G_0$ specified, and we can take $\gamma_{a,b}$ to be the composite
+ \[ \hom_B(Fa,b)
+ \xrightarrow{G_{Fa,b}} \hom_A(GFa,Gb)
+ \xrightarrow{(\blank\circ \eta_a)} \hom_A(a,Gb).
+ \]
+ This is clearly natural since $\eta$ is, and it has an inverse given by
+ \[ \hom_A(a,Gb)
+ \xrightarrow{F_{a,Gb}} \hom_B(Fa,FGb)
+ \xrightarrow{(\epsilon_b \circ \blank )} \hom_B(Fa,b)
+ \]
+ (by the triangle identities).
+ Thus we also have~\ref{item:ct:ar1}$\to$\ref{item:ct:ar2}.
+
+ For the composite~\ref{item:ct:ar2}$\to$\ref{item:ct:ar1}$\to$\ref{item:ct:ar2}, clearly the function $G_0$ is preserved, so it suffices to check that we get back $\gamma$.
+ But the new $\gamma$ is defined to take $f:\hom_B(Fa,b)$ to
+ \begin{align*}
+ G(f) \circ \eta_a
+ &\jdeq \gamma_{G Fa, b}(f \circ \epsilon_{Fa}) \circ \eta_a\\
+ &= \gamma_{G Fa, b}(f \circ \epsilon_{Fa} \circ F\eta_a)\\
+ &= \gamma_{G Fa, b}(f)
+ \end{align*}
+ so it agrees with the old one.
+
+ Finally, for~\ref{item:ct:ar1}$\to$\ref{item:ct:ar2}$\to$\ref{item:ct:ar1}, we certainly get back the functor $G$ on objects.
+ The new $G_{b,b'}:\hom_B(b,b') \to \hom_A(Gb,Gb')$ is defined to take $g$ to
+ \begin{align*}
+ \gamma_{G b, b'}(g \circ \epsilon_b)
+ &\jdeq G(g \circ \epsilon_b) \circ \eta_{Gb}\\
+ &= G(g) \circ G\epsilon_b \circ \eta_{Gb}\\
+ &= G(g)
+ \end{align*}
+ so it agrees with the old one.
+ The new $\eta_a$ is defined to be $\gamma_{a,Fa}(1_{Fa}) \jdeq G(1_{Fa}) \circ \eta_a$, so it equals the old $\eta_a$.
+ And finally, the new $\epsilon_b$ is defined to be $\inv{(\gamma_{Gb,b})}(1_{Gb}) \jdeq \epsilon_b \circ F(1_{Gb})$, which also equals the old $\epsilon_b$.
+\end{proof}
+
+\begin{cor}[\cref{ct:adjprop}]\label{ct:adjprop2}
+ If $A$ is a category and $F:A\to B$, then the type ``$F$ is a left adjoint'' is a mere proposition.
+\end{cor}
+\begin{proof}
+ By \cref{ct:representable-prop}, if $A$ is a category then the type in \cref{ct:adj-repr}\ref{item:ct:ar2} is a mere proposition.
+\end{proof}
+\index{Yoneda!lemma|)}
+
+
+\section{Strict categories}
+\label{sec:strict-categories}
+
+\index{bargaining|(}%
+
+\begin{defn}\label{ct:strict-category}
+ A \define{strict category}
+ \indexdef{category!strict}%
+ \indexdef{strict!category}%
+ is a precategory whose type of objects is a set.
+\end{defn}
+
+In accordance with the mathematical red herring principle,\index{red herring principle} a strict category is not necessarily a category.
+In fact, a category is a strict category precisely when it is gaunt (\cref{ct:gaunt}).
+\index{gaunt category}%
+\index{category!gaunt}%
+Most of the time, category theory is about categories, not strict ones, but sometimes one wants to consider strict categories.
+The main advantage of this is that strict categories have a stricter notion of ``sameness'' than equivalence, namely isomorphism (or equivalently, by \cref{ct:cat-eq-iso}, equality).
+
+Here is one origin of strict categories.
+
+\begin{eg}\label{ct:mono-cat}
+ Let $A$ be a precategory and $x:A$ an object.
+ Then there is a precategory $\mathsf{mono}(A,x)$ as follows:
+ \index{monomorphism}
+ \indexsee{mono}{monomorphism}
+ \indexsee{monic}{monomorphism}
+ \begin{itemize}
+ \item Its objects consist of an object $y:A$ and a monomorphism $m:\hom_A(y,x)$.
+ (As usual, $m:\hom_A(y,x)$ is a \define{monomorphism} (or is \define{monic}) if $(m\circ f = m\circ g) \Rightarrow (f=g)$.)
+ \item Its morphisms from $(y,m)$ to $(z,n)$ are arbitrary morphisms from $y$ to $z$ in $A$ (not necessarily respecting $m$ and $n$).
+ \end{itemize}
+ An equality $(y,m)=(z,n)$ of objects in $\mathsf{mono}(A,x)$ consists of an equality $p:y=z$ and an equality $\trans{p}{m}=n$, which by \cref{ct:idtoiso-trans} is equivalently an equality $m=n\circ \idtoiso(p)$.
+ Since hom-sets are sets, the type of such equalities is a mere proposition.
+ But since $m$ and $n$ are monomorphisms, the type of morphisms $f$ such that $m = n\circ f$ is also a mere proposition.
+ Thus, if $A$ is a category, then $(y,m)=(z,n)$ is a mere proposition, and hence $\mathsf{mono}(A,x)$ is a strict category.
+\end{eg}
+
+This example can be dualized, and generalized in various ways.
+Here is an interesting application of strict categories.
+
+\begin{eg}\label{ct:galois}
+ Let $E/F$ be a finite Galois extension
+ \index{Galois!extension}%
+ of fields, and $G$ its Galois group.
+ \index{Galois!group}%
+ Then there is a strict category whose objects are intermediate fields $F\subseteq K\subseteq E$, and whose morphisms are field homomorphisms\index{homomorphism!field} which fix $F$ pointwise (but need not commute with the inclusions into $E$).
+ There is another strict category whose objects are subgroups $H\subseteq G$, and whose morphisms are morphisms of $G$-sets $G/H \to G/K$.
+ The fundamental theorem of Galois theory
+ \index{fundamental!theorem of Galois theory}%
+ says that these two precategories are isomorphic (not merely equivalent).
+\end{eg}
+
+\index{bargaining|)}%
+
+\section{\texorpdfstring{$\dagger$}{†}-categories}
+\label{sec:dagger-categories}
+
+It is also worth mentioning a useful kind of precategory whose type of objects is not a set, but which is not a category either.
+
+\begin{defn}\label{ct:dagger-precategory}
+ A \define{$\dagger$-precategory}
+ \indexdef{.dagger-precategory@$\dagger$-precategory}%
+ \indexdef{precategory!.dagger-@$\dagger$-}%
+ is a precategory $A$ together with the following.
+ \begin{enumerate}
+ \item For each $x,y:A$, a function $\dgr{(-)}:\hom_A(x,y) \to \hom_A(y,x)$.
+ \item For all $x:A$, we have $\dgr{(1_x)} = 1_x$.
+ \item For all $f,g$ we have $\dgr{(g\circ f)} = \dgr f \circ \dgr g$.
+ \item For all $f$ we have $\dgr{(\dgr f)} = f$.
+ \end{enumerate}
+\end{defn}
+
+\begin{defn}\label{ct:unitary}
+ A morphism $f:\hom_A(x,y)$ in a $\dagger$-precategory is \define{unitary}
+ \indexdef{.dagger-precategory@$\dagger$-precategory!unitary morphism in}%
+ \indexdef{unitary morphism}%
+ \indexdef{morphism!unitary}%
+ \indexdef{isomorphism!unitary}%
+ if $\dgr f \circ f = 1_x$ and $f\circ \dgr f = 1_y$.
+\end{defn}
+
+Of course, every unitary morphism is an isomorphism, and being unitary is a mere proposition.
+Thus for each $x,y:A$ we have a set of unitary isomorphisms from $x$ to $y$, which we denote $(x\unitaryiso y)$.
+
+\begin{lem}\label{ct:idtounitary}
+ If $p:(x=y)$, then $\idtoiso(p)$ is unitary.
+\end{lem}
+\begin{proof}
+ By induction, we may assume $p$ is $\refl x$.
+ But then $\dgr{(1_x)} \circ 1_x = 1_x\circ 1_x = 1_x$ and similarly.
+\end{proof}
+
+\begin{defn}\label{ct:dagger-category}
+ A \define{$\dagger$-category}
+ \indexdef{.dagger-category@$\dagger$-category}%
+ is a $\dagger$-precategory such that for all $x,y:A$, the function
+ \[ (x=y) \to (x \unitaryiso y) \]
+ from \cref{ct:idtounitary} is an equivalence.
+\end{defn}
+
+\begin{eg}\label{ct:rel-dagger-cat}
+ The category \urel from \cref{ct:rel} becomes a $\dagger$-pre\-cat\-e\-go\-ry if we define $(\dgr R)(y,x) \defeq R(x,y)$.
+ The proof that \urel is a category actually shows that every isomorphism is unitary; hence \urel is also a $\dagger$-category.
+\end{eg}
+
+\begin{eg}\label{ct:groupoid-dagger-cat}
+ Any groupoid becomes a $\dagger$-category if we define $\dgr f \defeq \inv{f}$.
+\end{eg}
+
+\begin{eg}\label{ct:hilb}
+ Let \uhilb be the following precategory.
+ \begin{itemize}
+ \item Its objects are finite-dimensional \index{finite!-dimensional vector space} vector spaces\index{vector!space} equipped with an inner product $\langle \blank,\blank\rangle$.
+ \item Its morphisms are arbitrary linear maps.
+ \index{function!linear}%
+ \indexsee{linear map}{function, linear}%
+ \end{itemize}
+ By standard linear algebra, any linear map $f:V\to W$ between finite-dimensional inner product spaces has a uniquely defined adjoint\index{adjoint!linear map} $\dgr f:W\to V$, characterized by $\langle f v,w\rangle = \langle v,\dgr f w\rangle$.
+ In this way, \uhilb becomes a $\dagger$-precategory.
+ Moreover, a linear isomorphism is unitary precisely when it is an \define{isometry},
+ \indexdef{isometry}%
+ i.e.\ $\langle fv,fw\rangle = \langle v,w\rangle$.
+ It follows from this that \uhilb is a $\dagger$-category, though it is not a category (not every linear isomorphism is unitary).
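+
+ Concretely, on the one-dimensional space $\mathbb{R}$ with its standard inner product, the map $f(v)\defeq 2v$ satisfies $\dgr f = f$, so
+ \[ (\dgr f\circ f)(v) = 4v \neq v, \]
+ exhibiting a linear isomorphism that is not unitary.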
+\end{eg}
+
+There has been a good deal of general theory developed for $\dagger$-cat\-e\-gor\-ies under classical\index{mathematics!classical}\index{classical!category theory} foundations.
+It was observed early on that the unitary isomorphisms, not arbitrary isomorphisms, are the correct notion of ``sameness'' for objects of a $\dagger$-category, which has caused some consternation among category theorists.
+Homotopy type theory resolves this issue by identifying $\dagger$-categories, like strict categories, as simply a different kind of precategory.
+
+
+\section{The structure identity principle}
+\label{sec:sip}
+ \index{structure!identity principle|(}
+
+The \emph{structure identity principle} is an informal principle
+that expresses that isomorphic structures are identical. We aim to
+prove a general abstract result which can be applied to a wide family
+of notions of structure, where structures may be many-sorted,
+dependently sorted, infinitary, or even higher order.
+
+The simplest kind of single-sorted structure consists of a type with
+no additional structure. The univalence axiom expresses the structure identity principle for that
+notion of structure in a strong form: for types $A,B$, the
+canonical function $(A=B)\to (\eqv A B)$ is an equivalence.
+
+We start with a precategory $X$. In our application to
+single-sorted first-order structures, $X$ will be the category
+of $\bbU$-small sets, where $\bbU$ is a univalent type universe.
+
+\begin{defn}\label{ct:sig}
+ A \define{notion of structure}
+ \indexdef{structure!notion of}%
+ $(P,H)$ over $X$ consists of the following.
+ \begin{enumerate}
+ \item A type family $P:X_0 \to \type$.
+ For each $x:X_0$ the elements of $Px$ are called \define{$(P,H)$-structures}
+ \indexsee{PH-structure@$(P,H)$-structure}{structure}%
+ \indexdef{structure!PH@$(P,H)$-}%
+ on $x$.
+ \item For $x,y:X_0$, $f:\hom_X(x,y)$ and $\alpha:Px$, $\;\beta:Py$, a mere proposition
+ \[ H_{\alpha\beta}(f).\]
+ If $H_{\alpha\beta}(f)$ is true, we say that $f$ is a \define{$(P,H)$-homomorphism}
+ \indexdef{homomorphism!of structures}%
+ \indexdef{structure!homomorphism of}%
+ from $\alpha$ to $\beta$.
+ \item For $x:X_0$ and $\alpha:Px$, we have $H_{\alpha\alpha}(1_x)$.\label{item:sigid}
+ \item For $x,y,z:X_0$ and $\alpha:Px$, $\;\beta:Py$, $\;\gamma:Pz$,
+if $f:\hom_X(x,y)$ and $g:\hom_X(y,z)$, we have\label{item:sigcmp}
+ \[ H_{\alpha\beta}(f)\to H_{\beta\gamma}(g)\to H_{\alpha\gamma}(g\circ f).\]
+ \end{enumerate}
+ When $(P,H)$ is a notion of structure, for $\alpha,\beta:Px$ we define
+ \[ (\alpha\leq_x\beta) \defeq H_{\alpha\beta}(1_x).\]
+ By~\ref{item:sigid} and~\ref{item:sigcmp}, this is a preorder (\cref{ct:orders}) with $Px$ its type of objects.
+ We say that $(P,H)$ is a \define{standard notion of structure}
+ \indexdef{structure!standard notion of}%
+ if this preorder is in fact a partial order, for all $x:X$.
+\end{defn}
+
+Note that for a standard notion of structure, each type $Px$ must actually be a set.
+We now define, for any notion of structure $(P,H)$, a \define{precategory of $(P,H)$-structures},
+\indexdef{precategory!of PH-structures@of $(P,H)$-structures}%
+\indexdef{structure!precategory of PH@precategory of $(P,H)$-}%
+$A = \mathsf{Str}_{(P,H)}(X)$.
+\begin{itemize}
+\item The type of objects of $A$ is the type $A_0 \defeq \sm{x:X_0} Px$.
+ If $a\jdeq (x,\alpha):A_0$, we may write $|a| \defeq x$.
+\item For $(x,\alpha):A_0$ and $(y,\beta):A_0$, we define
+ \[\hom_A((x,\alpha),(y,\beta)) \defeq \setof{ f:x \to y | H_{\alpha\beta}(f)}.\]
+\end{itemize}
+The composition and identities are inherited from $X$; conditions~\ref{item:sigid} and \ref{item:sigcmp} ensure that these lift to $A$.
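+
+A minimal example (not among the applications considered below): take $X\defeq\uset$ and define
+\[ Px\defeq x, \qquad H_{\alpha\beta}(f)\defeq (f(\alpha)=\beta). \]
+Conditions~\ref{item:sigid} and~\ref{item:sigcmp} hold trivially, and $(\alpha\leq_x\beta)$ is just $(\alpha=\beta)$, so this is a standard notion of structure; the resulting precategory $\mathsf{Str}_{(P,H)}(\uset)$ is the familiar one of pointed sets and point-preserving functions.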
+
+\begin{thm}[Structure identity principle]\label{thm:sip}
+ \indexdef{structure!identity principle}%
+ If $X$ is a category and $(P,H)$ is a standard notion of structure over $X$, then the precategory $\mathsf{Str}_{(P,H)}(X)$ is a category.
+\end{thm}
+\begin{proof}
+ By the definition of equality in dependent pair types, to give an equality $(x,\alpha)=(y,\beta)$ consists of
+ \begin{itemize}
+ \item An equality $p:x=y$, and
+ \item An equality $\trans{p}{\alpha}=\beta$.
+ \end{itemize}
+ Since $P$ is set-valued, the latter is a mere proposition.
+ On the other hand, it is easy to see that an isomorphism $(x,\alpha)\cong (y,\beta)$ in $\mathsf{Str}_{(P,H)}(X)$ consists of
+ \begin{itemize}
+ \item An isomorphism $f:x\cong y$ in $X$, such that
+ \item $H_{\alpha\beta}(f)$ and $H_{\beta\alpha}(\inv f)$.
+ \end{itemize}
+ Of course, the second of these is also a mere proposition.
+ And since $X$ is a category, the function $(x=y) \to (x\cong y)$ is an equivalence.
+ Thus, it will suffice to show that for any $p:x=y$ and for any $(\alpha:Px)$, $(\beta:Py)$, we have $\trans{p}{\alpha}=\beta$ if and only if both $H_{\alpha\beta}(\idtoiso (p))$ and $H_{\beta\alpha}(\inv{\idtoiso(p)})$.
+
+ The ``only if'' direction is just the existence of the function $\idtoiso$ for the category $\mathsf{Str}_{(P,H)}(X)$.
+ For the ``if'' direction, by induction on $p$ we may assume that $y\jdeq x$ and $p\jdeq\refl x$.
+ However, in this case $\idtoiso (p)\jdeq 1_x$ and therefore $\inv{\idtoiso(p)}=1_x$.
+ Thus, $\alpha\leq_x \beta$ and $\beta\leq_x \alpha$, which implies $\alpha=\beta$ since $(P,H)$ is a standard notion of structure.
+\end{proof}
+
+As an example, this methodology gives an alternative way to express the proof of \cref{ct:functor-cat}.
+
+\begin{eg}\label{ct:sip-functor-cat}
+ Let $A$ be a precategory and $B$ a category.
+ There is a precategory $B^{A_0}$ whose objects are functions $A_0 \to B_0$, and whose set of morphisms from $F_0:A_0 \to B_0$ to $G_0:A_0 \to B_0$ is $\prd{a:A_0} \hom_B(F_0 a, G_0 a)$.
+ Composition and identities are inherited directly from those in $B$.
+ It is easy to show that $\gamma:\hom_{B^{A_0}}(F_0, G_0)$ is an isomorphism exactly when each component $\gamma_a$ is an isomorphism, so that we have $\eqv{(F_0 \cong G_0)}{\prd{a:A_0} (F_0 a \cong G_0 a)}$.
+ Moreover, the map $\idtoiso : (F_0 = G_0) \to (F_0 \cong G_0)$ of $B^{A_0}$ is equal to the composite
+ \[ (F_0 = G_0) \longrightarrow \prd{a:A_0} (F_0 a = G_0 a) \longrightarrow \prd{a:A_0} (F_0 a \cong G_0 a) \longrightarrow (F_0 \cong G_0) \]
+ in which the first map is an equivalence by function extensionality, the second because it is a dependent product of equivalences (since $B$ is a category), and the third as remarked above.
+ Thus, $B^{A_0}$ is a category.
+
+ Now we define a notion of structure on $B^{A_0}$ for which $P(F_0)$ is the type of operations $F:\prd{a,a':A_0} \hom_A(a,a') \to \hom_B(F_0 a,F_0 a')$ which extend $F_0$ to a functor (i.e.\ preserve composition and identities).
+ This is a set since each $\hom_B(\blank,\blank)$ is so.
+ Given such $F$ and $G$, we define $\gamma:\hom_{B^{A_0}}(F_0, G_0)$ to be a homomorphism if it forms a natural transformation.\index{natural!transformation}
+ In \cref{ct:functor-precat} we essentially verified that this is a notion of structure.
+ Moreover, if $F$ and $F'$ are both structures on $F_0$ and the identity is a natural transformation from $F$ to $F'$, then for any $f:\hom_A(a,a')$ we have $F'f = F'f \circ 1_{F_0 a} = 1_{F_0 a}\circ F f = F f$.
+ Applying function extensionality, we conclude $F = F'$.
+ Thus, we have a \emph{standard} notion of structure, and so by \cref{thm:sip}, the precategory $B^A$ is a category.
+\end{eg}
+
+As another example, we consider categories of structures for a first-order signature.
+We define a \define{first-order signature},
+\indexdef{first-order!signature}%
+\indexdef{signature!first-order}%
+$\Omega$, to consist of a set $\Omega_0$ of function symbols and a set $\Omega_1$ of relation symbols, each symbol $\omega$ having an arity\index{arity} $|\omega|$ that is a set.
+An \define{$\Omega$-structure}
+\indexdef{structure!Omega@$\Omega$-}%
+\indexsee{omega-structure@$\Omega$-structure}{structure}%
+$a$ consists of a set $|a|$ together with, for each function symbol $\omega:\Omega_0$, an $|\omega|$-ary function $\omega^a:|a|^{|\omega|}\to |a|$, and for each relation symbol $\omega:\Omega_1$, an $|\omega|$-ary relation $\omega^a$ on $|a|$, i.e.\ a mere proposition $\omega^ax$ for each $x:|a|^{|\omega|}$.
+And given $\Omega$-structures $a,b$, a function $f:|a|\to |b|$ is a \define{homomorphism $a\to b$}
+\indexdef{homomorphism!of Omega-structures@of $\Omega$-structures}%
+\indexdef{structure!homomorphism of Omega@homomorphism of $\Omega$-}%
+if it preserves the structure; i.e.\ if for each symbol $\omega$ of the signature and each $x:|a|^{|\omega|}$,
+\begin{enumerate}
+\item $f(\omega^ax) = \omega^b(f\circ x)$ if $\omega:\Omega_0$, and
+\item $\omega^ax\to\omega^b(f\circ x)$ if $\omega:\Omega_1$.
+\end{enumerate}
+Note that each $x:|a|^{|\omega|}$ is a function $x:|\omega|\to |a|$, so that $f\circ x : |b|^{|\omega|}$.
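+The data of a signature, its structures, and their homomorphisms transcribe quite directly into a proof assistant.
+The following Lean~4 sketch is our own illustration, not part of the text: it simplifies by using \texttt{Type} in place of sets and \texttt{Prop} in place of mere propositions, and all the names are ours.
+
+```lean
+-- Hypothetical Lean 4 transcription of first-order signatures,
+-- Ω-structures, and their homomorphisms.  `Type` stands in for sets
+-- and `Prop` for mere propositions.
+structure Signature where
+  Fun : Type                  -- function symbols, Ω₀
+  Rel : Type                  -- relation symbols, Ω₁
+  funArity : Fun → Type       -- arity |ω| for ω : Ω₀
+  relArity : Rel → Type       -- arity |ω| for ω : Ω₁
+
+structure Str (Ω : Signature) where
+  carrier : Type                                           -- |a|
+  funI : ∀ ω : Ω.Fun, (Ω.funArity ω → carrier) → carrier   -- ω^a
+  relI : ∀ ω : Ω.Rel, (Ω.relArity ω → carrier) → Prop      -- ω^a
+
+-- f is a homomorphism when it preserves every function symbol and
+-- every relation symbol, exactly as in clauses (i) and (ii) above.
+def IsHom {Ω : Signature} (a b : Str Ω) (f : a.carrier → b.carrier) : Prop :=
+  (∀ (ω : Ω.Fun) (x : Ω.funArity ω → a.carrier),
+      f (a.funI ω x) = b.funI ω (f ∘ x)) ∧
+  (∀ (ω : Ω.Rel) (x : Ω.relArity ω → a.carrier),
+      a.relI ω x → b.relI ω (f ∘ x))
+```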
+
+Now we assume given a (univalent) universe $\bbU$ and a $\bbU$-small signature $\Omega$; i.e.\ $\Omega_0$ and $\Omega_1$ are $\bbU$-small sets and, for each symbol $\omega$, the set $|\omega|$ is $\bbU$-small.
+Then we have the category $\uset_\bbU$ of $\bbU$-small sets. We want to define the precategory of $\bbU$-small $\Omega$-structures over $\uset_\bbU$ and use \cref{thm:sip} to show that it is a category.
+
+We use the first order signature $\Omega$ to give us a standard notion of structure $(P,H)$ over $\uset_\bbU$.
+
+\begin{defn}\label{defn:fo-notion-of-structure}
+\mbox{}
+\begin{enumerate}
+\item For each $\bbU$-small set $x$ define
+ \[ Px \defeq P_0x\times P_1x.\]
+ Here
+ %
+ \begin{align*}
+ P_0x &\defeq \prd{\omega:\Omega_0} x^{|\omega|}\to x, \mbox{ and } \\
+ P_1x &\defeq \prd{\omega:\Omega_1} x^{|\omega|}\to \propU,
+ \end{align*}
+\item For $\bbU$-small sets $x,y$ and
+ $\alpha:Px,\;\beta:Py,\; f:x\to y$, define
+ \[ H_{\alpha\beta}(f) \defeq H_{0,\alpha\beta}(f)\wedge H_{1,\alpha\beta}(f).\]
+ Here
+ \begin{align*}
+ H_{0,\alpha\beta}(f) &\defeq
+ \fall{\omega:\Omega_0}{u:x^{|\omega|}} f(\alpha u)=\;\beta(f\circ u),
+ \mbox{ and }\\
+ H_{1,\alpha\beta}(f) &\defeq
+ \fall{\omega:\Omega_1}{u:x^{|\omega|}} \alpha u\to\beta(f\circ u).
+ \end{align*}
+\end{enumerate}
+\end{defn}
+
+It is now routine to check that $(P,H)$ is a standard notion of structure over $\uset_\bbU$ and hence we may use \cref{thm:sip} to get that the precategory $Str_{(P,H)}(\uset_\bbU)$ is a category. It only remains to observe that this is essentially the same as the precategory of $\bbU$-small $\Omega$-structures over $\uset_\bbU$.
+ \index{structure!identity principle|)}
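+To see what the definition amounts to in a concrete case, consider the following illustration (ours, not part of the text): the signature with a single binary function symbol $\mu$, so that $\Omega_0$ is a one-element set, $|\mu|$ is a two-element set $\mathbf{2}$, and $\Omega_1\defeq\emptyset$.
+
+```latex
+% For this signature P_1 x is trivial, so up to equivalence
+\[ Px \simeq (x^{\mathbf{2}} \to x) \simeq (x\times x \to x), \]
+% a binary operation on x; and the mere proposition H reduces to
+\[ H_{\alpha\beta}(f) \leftrightarrow
+   \fall{u:x^{\mathbf{2}}} f(\alpha u) = \beta(f\circ u), \]
+% which, writing u as a pair (u_0,u_1), says that
+% f(alpha(u_0,u_1)) = beta(f u_0, f u_1): homomorphisms for this
+% standard notion of structure are exactly magma homomorphisms.
+```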
+
+
+\section{The Rezk completion}
+\label{sec:rezk}
+
+In this section we will give a universal way to replace a precategory by a category.
+In fact, we will give two.
+Both rely on the fact that ``categories see weak equivalences as equivalences''.
+
+To prove this, we begin with a couple of lemmas which are completely standard category theory, phrased carefully so as to make sure we are using the eliminator for $\truncf{-1}$ correctly.
+One would have to be similarly careful in classical\index{mathematics!classical}\index{classical!category theory} category theory if one wanted to avoid the axiom of choice: any time we want to define a function, we need to characterize its values uniquely somehow.
+
+\begin{lem}\label{ct:esosurj-postcomp-faithful}
+ If $A,B,C$ are precategories and $H:A\to B$ is an essentially surjective functor, then $(\blank\circ H):C^B \to C^A$ is faithful.
+\end{lem}
+\begin{proof}
+ Let $F,G:B\to C$, and $\gamma,\delta:F\to G$ be such that $\gamma H = \delta H$; we must show $\gamma=\delta$.
+ Thus let $b:B$; we want to show $\gamma_b=\delta_b$.
+ This is a mere proposition, so since $H$ is essentially surjective, we may assume given an $a:A$ and an isomorphism $f:Ha\cong b$.
+ But now we have
+ \[ \gamma_b = G(f) \circ \gamma_{Ha} \circ F(\inv{f})
+ = G(f) \circ \delta_{Ha} \circ F(\inv{f})
+ = \delta_b.\qedhere
+ \]
+\end{proof}
+
+\begin{lem}\label{ct:esofull-precomp-ff}
+ If $A,B,C$ are precategories and $H:A\to B$ is essentially surjective and full, then $(\blank\circ H):C^B \to C^A$ is fully faithful.
+\end{lem}
+\begin{proof}
+ It remains to show fullness.
+ Thus, let $F,G:B\to C$ and $\gamma:FH \to GH$.
+ We claim that for any $b:B$, the type
+ \begin{equation}\label{eq:fullprop}
+ \sm{g:\hom_C(Fb,Gb)} \prd{a:A}{f:Ha\cong b} (\gamma_a = \inv{Gf}\circ g\circ Ff)
+ \end{equation}
+ is contractible.
+ Since contractibility is a mere property, and $H$ is essentially surjective, we may assume given $a_0:A$ and $h:Ha_0\cong b$.
+
+ Now take $g\defeq Gh \circ \gamma_{a_0} \circ \inv{Fh}$.
+ Then given any other $a:A$ and $f:Ha\cong b$, we must show $\gamma_a = \inv{Gf}\circ g\circ Ff$.
+ Since $H$ is full, there merely exists a morphism $k:\hom_A(a,a_0)$ such that $Hk = \inv{h}\circ f$.
+ And since our goal is a mere proposition, we may assume given some such $k$.
+ Then we have
+ \begin{align*}
+ \gamma_a &= \inv{GHk}\circ \gamma_{a_0} \circ FHk\\
+ &= \inv{Gf} \circ Gh \circ \gamma_{a_0} \circ \inv{Fh} \circ Ff\\
+ &= \inv{Gf}\circ g\circ Ff.
+ \end{align*}
+ Thus,~\eqref{eq:fullprop} is inhabited.
+ It remains to show it is a mere proposition.
+ Let $g,g':\hom_C(Fb, Gb)$ be such that for all $a:A$ and $f:Ha\cong b$, we have both $(\gamma_a = \inv{Gf}\circ g\circ Ff)$ and $(\gamma_a = \inv{Gf}\circ g'\circ Ff)$.
+ The dependent product types are mere propositions, so all we have to prove is $g=g'$.
+ But this is a mere proposition, so we may assume $a_0:A$ and $h:Ha_0\cong b$, in which case we have
+ \[ g = Gh \circ \gamma_{a_0} \circ \inv{Fh} = g'.\]
+ %
+ This proves that~\eqref{eq:fullprop} is contractible for all $b:B$.
+ Now we define $\delta:F\to G$ by taking $\delta_b$ to be the unique $g$ in~\eqref{eq:fullprop} for that $b$.
+ To see that this is natural, suppose given $f:\hom_B(b,b')$; we must show $Gf \circ \delta_b = \delta_{b'}\circ Ff$.
+ As before, we may assume $a:A$ and $h:Ha\cong b$, and likewise $a':A$ and $h':Ha'\cong b'$.
+ Since $H$ is full as well as essentially surjective, we may also assume $k:\hom_A(a,a')$ with $Hk = \inv{h'}\circ f\circ h$.
+
+ Since $\gamma$ is natural, $GHk\circ \gamma_a = \gamma_{a'} \circ FHk$.
+ Using the definition of $\delta$, we have
+ \begin{align*}
+ Gf \circ \delta_b
+ &= Gf \circ Gh \circ \gamma_a \circ \inv{Fh}\\
+ &= Gh' \circ GHk\circ \gamma_a \circ \inv{Fh}\\
+ &= Gh' \circ \gamma_{a'} \circ FHk \circ \inv{Fh}\\
+ &= Gh' \circ \gamma_{a'} \circ \inv{Fh'} \circ Ff\\
+ &= \delta_{b'} \circ Ff.
+ \end{align*}
+ Thus, $\delta$ is natural.
+ Finally, for any $a:A$, applying the definition of $\delta_{Ha}$ to $a$ and $1_{Ha}$, we obtain $\gamma_a = \delta_{Ha}$.
+ Hence, $\delta \circ H = \gamma$.
+\end{proof}
+
+The rest of the theorem follows almost exactly the same lines, with the category-ness of $C$ inserted in one crucial step, which we have italicized below for emphasis.
+This is the point at which we are trying to define a function into \emph{objects} without using choice, and so we must be careful about what it means for an object to be ``uniquely specified''.
+In classical\index{mathematics!classical}\index{classical!category theory} category theory, all one can say is that this object is specified up to unique isomorphism, but in set-theoretic foundations this is not a sufficient amount of uniqueness to give us a function without invoking \choice{}.
+In univalent foundations, however, if $C$ is a category, then isomorphism is equality, and we have the appropriate sort of uniqueness (namely, living in a contractible space).
+
+\index{weak equivalence!of precategories|(}%
+
+\begin{thm}\label{ct:cat-weq-eq}
+ If $A,B$ are precategories, $C$ is a category, and $H:A\to B$ is a weak equivalence, then $(\blank\circ H):C^B \to C^A$ is an isomorphism.
+\end{thm}
+\begin{proof}
+ By \cref{ct:functor-cat}, $C^B$ and $C^A$ are categories.
+ Thus, by \cref{ct:eqv-levelwise} it will suffice to show that $(\blank\circ H)$ is an equivalence.
+ But since we know from the preceding two lemmas that it is fully faithful, by \cref{ct:catweq} it will suffice to show that it is essentially surjective.
+ Thus, suppose $F:A\to C$; we want there to merely exist a $G:B\to C$ such that $GH\cong F$.
+
+ For each $b:B$, let $X_b$ be the type whose elements consist of:
+ \begin{enumerate}
+ \item An element $c:C$; and
+ \item For each $a:A$ and $h:Ha\cong b$, an isomorphism $k_{a,h}:Fa\cong c$; such that\label{item:eqvprop2}
+ \item For each $(a,h)$ and $(a',h')$ as in~\ref{item:eqvprop2} and each $f:\hom_A(a,a')$ such that $h'\circ Hf = h$, we have $k_{a',h'}\circ Ff = k_{a,h}$.\label{item:eqvprop3}
+ \end{enumerate}
+ We claim that for any $b:B$, the type $X_b$ is contractible.
+ As this is a mere proposition, we may assume given $a_0:A$ and $h_0:Ha_0 \cong b$.
+ Let $c^0\defeq Fa_0$.
+ Next, given $a:A$ and $h:Ha\cong b$, since $H$ is fully faithful there is a unique isomorphism $g_{a,h}:a\to a_0$ with $Hg_{a,h} = \inv{h_0}\circ h$; define $k^0_{a,h} \defeq Fg_{a,h}$.
+ Finally, if $h'\circ Hf = h$, then $\inv{h_0}\circ h'\circ Hf = \inv{h_0}\circ h$, hence $g_{a',h'} \circ f = g_{a,h}$ and thus $k^0_{a',h'}\circ Ff = k^0_{a,h}$.
+ Therefore, $X_b$ is inhabited.
+
+ Now suppose given another $(c^1,k^1): X_b$.
+ Then $k^1_{a_0,h_0}:c^0 \jdeq Fa_0 \cong c^1$.
+ \emph{Since $C$ is a category, we have $p:c^0=c^1$ with $\idtoiso(p) = k^1_{a_0,h_0}$.}
+ And for any $a:A$ and $h:Ha\cong b$, by~\ref{item:eqvprop3} for $(c^1,k^1)$ with $f\defeq g_{a,h}$, we have
+ \[k^1_{a,h} = k^1_{a_0,h_0} \circ k^0_{a,h} = \trans{p}{k^0_{a,h}}.\]
+ This gives the requisite data for an equality $(c^0,k^0)=(c^1,k^1)$, completing the proof that $X_b$ is contractible.
+
+ Now since $X_b$ is contractible for each $b$, the type $\prd{b:B} X_b$ is also contractible.
+ In particular, it is inhabited, so we have a function assigning to each $b:B$ a $c$ and a $k$.
+ Define $G_0(b)$ to be this $c$; this gives a function $G_0 :B_0 \to C_0$.
+
+ Next we need to define the action of $G$ on morphisms.
+ For each $b,b':B$ and $f:\hom_B(b,b')$, let $Y_f$ be the type whose elements consist of:
+ \begin{enumerate}[resume]
+ \item A morphism $g:\hom_C(Gb,Gb')$, such that
+ \item For each $a:A$ and $h:Ha\cong b$, and each $a':A$ and $h':Ha'\cong b'$, and any $\ell:\hom_A(a,a')$, we have\label{item:eqvprop5}
+ \[ (h' \circ H\ell = f \circ h)
+ \to
+ (k_{a',h'} \circ F\ell = g\circ k_{a,h}). \]
+ \end{enumerate}
+ We claim that for any $b,b'$ and $f$, the type $Y_f$ is contractible.
+ As this is a mere proposition, we may assume given $a_0:A$ and $h_0:Ha_0\cong b$, and likewise $a'_0:A$ and $h'_0:Ha'_0\cong b'$.
+ Then since $H$ is fully faithful, there is a unique $\ell_0:\hom_A(a_0,a_0')$ such that $h'_0 \circ H\ell_0 = f \circ h_0$.
+ Define $g_0 \defeq k_{a_0',h_0'} \circ F \ell_0 \circ \inv{(k_{a_0,h_0})}$.
+
+ Now for any $a,h,a',h'$, and $\ell$ such that $(h' \circ H\ell = f \circ h)$, we have $\inv{h}\circ h_0:Ha_0\cong Ha$, hence there is a unique $m:a_0\cong a$ with $Hm = \inv{h}\circ h_0$ and hence $h\circ Hm = h_0$.
+ Similarly, we have a unique $m':a_0'\cong a'$ with $h'\circ Hm' = h_0'$.
+ Now by~\ref{item:eqvprop3}, we have $k_{a,h}\circ Fm = k_{a_0,h_0}$ and $k_{a',h'}\circ Fm' = k_{a_0',h_0'}$.
+ We also have
+ \begin{align*}
+ Hm' \circ H\ell_0
+ &= \inv{(h')} \circ h_0' \circ H\ell_0\\
+ &= \inv{(h')} \circ f \circ h_0\\
+ &= \inv{(h')} \circ f \circ h \circ \inv{h} \circ h_0\\
+ &= H\ell \circ Hm
+ \end{align*}
+ and hence $m'\circ \ell_0 = \ell\circ m$ since $H$ is fully faithful.
+ Finally, we can compute
+ \begin{align*}
+ g_0 \circ k_{a,h}
+ &= k_{a_0',h_0'} \circ F \ell_0 \circ \inv{(k_{a_0,h_0})} \circ k_{a,h}\\
+ &= k_{a_0',h_0'} \circ F \ell_0 \circ \inv{Fm}\\
+ &= k_{a_0',h_0'} \circ \inv{(Fm')} \circ F\ell\\
+ &= k_{a',h'}\circ F\ell.
+ \end{align*}
+ This completes the proof that $Y_f$ is inhabited.
+ To show it is contractible, since hom-sets are sets, it suffices to take another $g_1:\hom_C(Gb,Gb')$ satisfying~\ref{item:eqvprop5} and show $g_0=g_1$.
+ However, we still have our specified $a_0,h_0,a_0',h_0',\ell_0$ around, and~\ref{item:eqvprop5} implies both $g_0$ and $g_1$ must be equal to $k_{a_0',h_0'} \circ F \ell_0 \circ \inv{(k_{a_0,h_0})}$.
+
+ This completes the proof that $Y_f$ is contractible for each $b,b':B$ and $f:\hom_B(b,b')$.
+ Therefore, there is a function assigning to each such $f$ its unique inhabitant; denote this function $G_{b,b'}:\hom_B(b,b') \to \hom_C(Gb,Gb')$.
+ The proof that $G$ is a functor is straightforward; in each case we can choose $a,h$ and apply~\ref{item:eqvprop5}.
+
+ Finally, for any $a_0:A$, defining $c\defeq Fa_0$ and $k_{a,h}\defeq F g$, where $g:\hom_A(a,a_0)$ is the unique isomorphism with $Hg = h$, gives an element of $X_{Ha_0}$.
+ Thus, it is equal to the specified one; hence $GHa_0=Fa_0$.
+ Similarly, for $f:\hom_A(a_0,a_0')$ we can define an element of $Y_{Hf}$ by transporting along these equalities, which must therefore be equal to the specified one.
+ Hence, we have $GH=F$, and thus $GH\cong F$ as desired.
+\end{proof}
+
+\index{universal!property!of Rezk completion}%
+Therefore, if a precategory $A$ admits a weak equivalence functor $A\to \widehat{A}$ into a category, then that is its ``reflection'' into categories: any functor from $A$ into a category will factor essentially uniquely through $\widehat{A}$.
+We now give two constructions of such a weak equivalence.
+
+\indexsee{Rezk completion}{completion, Rezk}%
+\index{completion!Rezk|(defstyle}%
+
+\begin{thm}\label{thm:rezk-completion}
+ For any precategory $A$, there is a category $\widehat A$ and a weak equivalence $A\to\widehat{A}$.
+\end{thm}
+
+\begin{proof}[First proof]
+ Let $\widehat{A}_0 \defeq \setof{ F:\uset^{A\op} | \exis{a:A} (\y a \cong F)}$, with hom-sets inherited from $\uset^{A\op}$.
+ Then the inclusion $\widehat{A} \to \uset^{A\op}$ is fully faithful and an embedding on objects.
+ Since $\uset^{A\op}$ is a category (by \cref{ct:functor-cat}, since \uset is so by univalence), $\widehat A$ is also a category.
+
+ Let $A\to\widehat A$ be the Yoneda embedding.
+ This is fully faithful by \cref{ct:yoneda-embedding}, and essentially surjective by definition of $\widehat{A}_0$.
+ Thus it is a weak equivalence.
+\end{proof}
+
+This proof is very slick, but it has the drawback that it increases universe level.
+If $A$ is a precategory in a universe \bbU, then in this proof \uset must be at least as large as $\uset_\bbU$.
+Then $\uset_\bbU$ and $(\uset_\bbU)^{A\op}$ are not themselves categories in \bbU, but only in a higher universe, and \emph{a priori} the same is true of $\widehat A$.
+One could imagine a resizing axiom that could deal with this, but it is also possible to give a direct construction using higher inductive types.
+
+\begin{proof}[Second proof]
+ We define a higher inductive type $\widehat A_0$ with the following constructors:
+ \begin{itemize}
+ \item A function $i:A_0 \to \widehat A_0$.
+ \item For each $a,b:A$ and $e:a\cong b$, an equality $je:\id{ia}{ib}$.
+ \item For each $a:A$, an equality $\id{j(1_a)}{\refl{ia}}$.
+ \item For each $(a,b,c:A)$, $(f:a\cong b)$, and $(g:b\cong c)$, an equality $\id{j(g \circ f)}{j(f)\ct j(g)}$.
+ \item 1-truncation: for all $x,y:\widehat A_0$ and $p,q:\id x y$ and $r,s:\id p q$, an equality $\id r s$.
+ \end{itemize}
+ Note that for any $a,b:A$ and $p:\id a b$, we have $\id{j(\idtoiso(p))}{\map i p}$.
+ This follows by path induction on $p$ and the third constructor.
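+ Since this higher inductive type is the heart of the construction, it may help to see its constructors laid out in proof-assistant style.
+ The following Lean~4 fragment is our own sketch, not part of the text: Lean has no native higher inductive types, so every constant below is an assumption, \texttt{Iso A a b} is a stand-in for $a\cong b$, and the 1-truncation constructor is omitted.
+
+```lean
+-- Axiomatized sketch of the HIT Â₀; all constants are assumptions.
+universe u
+axiom Iso (A : Type u) : A → A → Type u          -- stand-in for a ≅ b
+axiom idIso {A : Type u} (a : A) : Iso A a a     -- 1ₐ
+axiom compIso {A : Type u} {a b c : A} :
+  Iso A b c → Iso A a b → Iso A a c              -- g ∘ f
+
+axiom Rezk₀ (A : Type u) : Type u                -- the type Â₀
+axiom i {A : Type u} : A → Rezk₀ A               -- point constructor
+axiom j {A : Type u} {a b : A} :
+  Iso A a b → (i a = i b)                        -- path constructor
+axiom j_id {A : Type u} (a : A) :
+  j (idIso a) = rfl                              -- j(1ₐ) = refl
+axiom j_comp {A : Type u} {a b c : A} (f : Iso A a b) (g : Iso A b c) :
+  j (compIso g f) = (j f).trans (j g)            -- j(g∘f) = j(f) ⬝ j(g)
+```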
+
+ The type $\widehat A_0$ will be the type of objects of $\widehat A$; we now build all the rest of the structure.
+ (The following proof is of the sort that can benefit a lot from the help of a computer proof assistant:\index{proof!assistant} it is wide and shallow with many short cases to consider, and a large part of the work consists of writing down what needs to be checked.)
+
+ \mentalpause
+
+ \emph{Step 1:} We define a family $\hom_{\widehat A}:\widehat A_0\to \widehat A_0 \to \set$ by double induction on $\widehat A_0$.
+ Since \set is a 1-type, we can ignore the 1-truncation constructor.
+ When $x$ and $y$ are of the form $ia$ and $ib$, we take $\hom_{\widehat A}(ia,ib) \defeq \hom_A(a,b)$.
+ It remains to consider all the other possible pairs of constructors.
+
+ Let us keep $x=ia$ fixed at first.
+ If $y$ varies along the identity $je:\id{ib}{ib'}$, for some $e:b\cong b'$, we require an identity $\id{\hom_A(a,b)}{\hom_A(a,b')}$.
+ By univalence, it suffices to give an equivalence $\eqv{\hom_A(a,b)}{\hom_A(a,b')}$.
+ We take this to be the function $(e\circ \blank ):\hom_A(a,b)\to \hom_A(a,b')$.
+ To see that this is an equivalence, we give its inverse as $(\inv e\circ \blank )$, with witnesses to inversion coming from the fact that $\inv e$ is the inverse of $e$ in $A$.
+
+ As $y$ varies along the identity $\id{j(1_b)}{\refl{ib}}$, we require an identity $\id{(1_b\circ \blank )}{\refl{\hom_A(a,b)}}$; this follows from the identity axiom $\id{1_b\circ g}{g}$ of a precategory.
+ Similarly, as $y$ varies along the identity $\id{j(g\circ f)}{j(f)\ct j(g)}$, we require an identity $\id{((g\circ f)\circ \blank )}{(g\circ (f\circ \blank ))}$, which follows from associativity.
+ % Finally, as $y$ varies along the 1-truncation constructor, we need only to observe that \set is 1-truncated.
+
+ Now we consider the other constructors for $x$.
+ Say that $x$ varies along the identity $j(e):\id{ia}{ia'}$, for some $e:a \cong a'$; we again must deal with all the constructors for $y$.
+ If $y$ is $ib$, then we require an identity $\id{\hom_A(a,b)}{\hom_A(a',b)}$.
+ By univalence, this may come from an equivalence, and for this we can use $(\blank\circ \inv e)$, with inverse $(\blank\circ e)$.
+
+ Still with $x$ varying along $j(e)$, suppose now that $y$ also varies along $j(f)$ for some $f:b\cong b'$.
+ Then we need to know that the two concatenated identities
+ \begin{gather*}
+ \hom_A(a,b) = \hom_A(a',b) = \hom_A(a',b') \mathrlap{\qquad\text{and}}\\
+ \hom_A(a,b) = \hom_A(a,b') = \hom_A(a',b')
+ \end{gather*}
+ are identical.
+ This follows from associativity: $(f\circ \blank)\circ \inv e = f\circ (\blank\circ \inv e)$.
+ The other two constructors for $y$ are trivial, since they are 2-fold equalities in sets.
+
+ For the next two constructors of $x$, all but the first constructor for $y$ are likewise trivial.
+ When $x$ varies along $j(1_a)=\refl{ia}$ and $y$ is $ib$, we use the identity axiom again.
+ Similarly, when $x$ varies along $\id{j(g\circ f)}{j(f)\ct j(g)}$, we use associativity again.
+ This completes the construction of $\hom_{\widehat A}:\widehat A_0 \to \widehat A_0 \to \set$.
+
+ \mentalpause
+
+ \emph{Step 2:} We give the precategory structure on $\widehat A$, always by induction on $\widehat A_0$.
+ % The reader is probably getting bored at this point, so we skip the details.
+ We are now eliminating into sets (the hom-sets of $\widehat A$), so all but the first two constructors are trivial to deal with.
+
+ For identities, if $x$ is $ia$ then we have $\hom_{\widehat A}(x,x) \jdeq \hom_A(a,a)$ and we define $1_{ia} \defeq 1_a$.
+ If $x$ varies along $je$ for $e:a\cong a'$, we must show that $\transfib{x\mapsto \hom_{\widehat A}(x,x)}{je}{1_{ia}} = 1_{ia'}$.
+ But by definition of $\hom_{\widehat A}$, transporting along $je$ is given by composing with $e$ and $\inv e$, and we have $e\circ 1_{ia} \circ \inv{e} = 1_{ia'}$.
+
+ For composition, if $x,y,z$ are $ia,ib,ic$ respectively, then $\hom_{\widehat A}$ reduces to $\hom_A$ and we can define composition in $\widehat A$ to be composition in $A$.
+ And when $x$, $y$, or $z$ varies along $je$, we must verify, respectively, the following equalities:
+ \begin{align*}
+ e \circ (g\circ f) &= (e\circ g) \circ f,\\
+ g\circ f &= (g\circ \inv e) \circ (e\circ f),\\
+ (g\circ f) \circ \inv e &= g \circ (f\circ \inv e).
+ \end{align*}
+ Finally, the associativity and unitality axioms are mere propositions, so all constructors except the first are trivial.
+ But in that case, we have the corresponding axioms in $A$.
+
+ \mentalpause
+
+ \emph{Step 3}: We show that $\widehat A$ is a category.
+ That is, we must show that for all $x,y:\widehat A$, the function $\idtoiso:(x=y) \to (x\cong y)$ is an equivalence.
+ First we define, for all $x,y:\widehat A$, a function $k_{x,y}:(x\cong y) \to (x=y)$ by induction.
+ As before, since our goal is a set, it suffices to deal with the first two constructors.
+
+ When $x$ and $y$ are $ia$ and $ib$ respectively, we have $\hom_{\widehat A}(ia,ib)\jdeq \hom_A(a,b)$, with composition and identities inherited as well, so that $(ia\cong ib)$ is equivalent to $(a\cong b)$.
+ But now we have the constructor $j:(a\cong b) \to (ia=ib)$.
+
+ Next, if $y$ varies along $j(e)$ for some $e:b\cong b'$, we must show that for $f:a\cong b$ we have $j(\trans{j(e)}{f}) = j(f) \ct j(e)$.
+ But by definition of $\hom_{\widehat A}$ on equalities, transporting along $j(e)$ is equivalent to post-composing with $e$, so this equality follows from the last constructor of $\widehat A_0$.
+ The remaining case when $x$ varies along $j(e)$ for $e:a\cong a'$ is similar.
+ This completes the definition of $k:\prd{x,y:\widehat A_0} (x\cong y) \to (x=y)$.
+
+ Now one thing we must show is that if $p:x=y$, then $k(\idtoiso(p))=p$.
+ By induction on $p$, we may assume it is $\refl x$, and hence $\idtoiso(p)\jdeq 1_x$.
+ Now we argue by induction on $x:\widehat A_0$, and since our goal is a mere proposition (since $\widehat A_0$ is a 1-type), all constructors except the first are trivial.
+ But if $x$ is $ia$, then $k(1_{ia}) \jdeq j(1_a)$, which is equal to $\refl{ia}$ by the third constructor of $\widehat A_0$.
+
+ To complete the proof that $\widehat A$ is a category, we must show that if $f:x\cong y$, then $\idtoiso(k(f))=f$.
+ By induction we may assume that $x$ and $y$ are $ia$ and $ib$ respectively, in which case $f$ must arise from an isomorphism $g:a\cong b$ and we have $k(f)\jdeq j(g)$.
+ However, for any $p$ we have $\idtoiso(p) = \trans{p}{1}$, so in particular $\idtoiso (j(g)) = \trans{j(g)}{1_{ia}}$.
+ And by definition of $\hom_{\widehat A}$ on equalities, this is given by composing $1_{ia}$ with the equivalence $g$, hence is equal to $g$.
+
+ \index{encode-decode method}%
+ Note the similarity of this step to the encode-decode method\index{encode-decode method} used in \cref{sec:compute-coprod,sec:compute-nat,cha:homotopy}.
+ Once again we are characterizing the identity types of a higher inductive type (here, $\widehat A_0$) by defining recursively a family of codes (here, $(x,y)\mapsto (x\cong y)$) and encoding and decoding functions by induction on $\widehat A_0$ and on paths.
+
+ \mentalpause
+
+ \emph{Step 4}: We define a weak equivalence $I:A \to \widehat A$.
+ We take $I_0 \defeq i : A_0 \to \widehat A_0$, and by construction of $\hom_{\widehat A}$ we have functions $I_{a,b}:\hom_A(a,b) \to \hom_{\widehat A}(Ia,Ib)$ forming a functor $I:A \to \widehat A$.
+ This functor is fully faithful by construction, so it remains to show it is essentially surjective.
+ That is, for all $x:\widehat A$ we want there to merely exist an $a:A$ such that $Ia\cong x$.
+ As always, we argue by induction on $x$, and since the goal is a mere proposition, all but the first constructor are trivial.
+ But if $x$ is $ia$, then of course we have $a:A$ and $Ia\jdeq ia$, hence $Ia \cong ia$.
+ (Note that if we were trying to prove $I$ to be \emph{split} essentially surjective, we would be stuck, because we know nothing about equalities in $A_0$ and thus have no way to deal with any further constructors.)
+\end{proof}
+
+We call the construction $A\mapsto \widehat A$ the \define{Rezk completion},
+although there is also an argument (coming from higher topos semantics)
+\index{.infinity1-topos@$(\infty,1)$-topos}%
+for calling it the \define{stack completion}.
+\index{stack}%
+\index{completion!Rezk|)}%
+
+We have seen that most precategories arising in practice are categories, since they are constructed from \uset, which is a category by the univalence axiom.
+However, there are a few cases in which the Rezk completion is necessary to obtain a category.
+
+\begin{eg}\label{ct:rezk-fundgpd-trunc1}
+ Recall from \cref{ct:fundgpd} that for any type $X$ there is a pregroupoid with $X$ as its type of objects and $\hom(x,y) \defeq \pizero{x=y}$.
+ \indexdef{fundamental!groupoid}%
+ \index{fundamental!pregroupoid}%
+ \indexsee{groupoid!fundamental}{fundamental group\-oid}%
+ Its Rezk completion is the \emph{fundamental groupoid} of $X$.
+ Recalling that group\-oids are equivalent to 1-types, it is not hard to identify this groupoid with $\trunc1X$.
+\end{eg}
+
+\begin{eg}\label{ct:hocat}
+ Recall from \cref{ct:hoprecat} that there is a precategory whose type of objects is \type and with $\hom(X,Y) \defeq \pizero{X\to Y}$.
+ Its Rezk completion may be called the \define{homotopy category of types}.
+ \index{category!of types}%
+ \index{homotopy!category of types@(pre)category of types}%
+ Its type of objects can be identified with $\trunc1\type$ (see \cref{ct:ex:hocat}).
+\end{eg}
+
+The Rezk completion also allows us to show that the notion of ``category'' is determined by the notion of ``weak equivalence of precategories''.
+Thus, insofar as the latter is inevitable, so is the former.
+
+\begin{thm}\label{ct:weq-iso-precat-cat}
+ A precategory $C$ is a category if and only if for every weak equivalence of precategories $H:A\to B$, the induced functor $(\blank\circ H):C^B \to C^A$ is an isomorphism of precategories.
+\end{thm}
+\begin{proof}
+ ``Only if'' is \cref{ct:cat-weq-eq}.
+ In the other direction, let $H$ be the weak equivalence $I:C\to\widehat C$ from \cref{thm:rezk-completion}.
+ Then since $(\blank\circ I)_0$ is an equivalence, there exists $R:\widehat C\to C$ such that $RI=1_C$.
+ Hence $IRI=I$, but again since $(\blank\circ I)_0$ is an equivalence, this implies $IR =1_{\widehat C}$.
+ By \cref{ct:isoprecat}\ref{item:ct:ipc3}, $I$ is an isomorphism of precategories.
+ But then since $\widehat C$ is a category, so is $C$.
+\end{proof}
+
+\index{weak equivalence!of precategories|)}%
+
+
+\newpage
+
+\sectionNotes
+
+The original definition of categories, of course, was in set-theoretic foundations, so that the collection of objects of a category formed a set (or, for large categories, a class).
+Over time, it became clear that all ``category-theoretic'' properties of objects were invariant under isomorphism, and that equality of objects in a category was not usually a very useful notion.
+Numerous authors~\cite{blanc:eqv-log,freyd:invar-eqv,makkai:folds,makkai:comparing} discovered that a dependently typed logic enabled formulating the definition of category without invoking any notion of equality for objects, and that the statements provable in this logic are precisely the ``category-theoretic'' ones that are invariant under isomorphism.
+\index{evil}%
+
+Although most of category theory appears to be invariant under isomorphism of objects and under equivalence of categories, there are some interesting exceptions, which have led to philosophical discussions about what it means to be ``category-theoretic''.
+For instance, \cref{ct:galois} was brought up by Peter May on the categories mailing list in May 2010, as a case where it matters that two categories (defined as usual in set theory) are isomorphic rather than only equivalent.
+The case of $\dagger$-categories was also somewhat confounding to those advocating an isomorphism-invariant version of category theory, since the ``correct'' notion of sameness between objects of a $\dagger$-category is not ordinary isomorphism but \emph{unitary} isomorphism.
+\index{isomorphism!invariance under}%
+
+Categories satisfying the ``saturation'' or ``univalence'' principle as in \cref{ct:category} were first considered by Hofmann and Streicher~\cite{hs:gpd-typethy}.
+The condition then occurred independently to Voevodsky, Shulman, and perhaps others around the same time several years later, and was formalized by Ahrens and Kapulkin~\cite{aks:rezk}.
+This framework puts all the above examples in a unified context: some precategories are categories, others are strict categories, and so on.
+A general theorem that ``isomorphism implies equality'' for a large class of algebraic structures (assuming the univalence axiom) was proven by Coquand and Danielsson; the formulation of the structure identity principle in \cref{sec:sip} is due to Aczel.
+
+Independently of philosophical considerations about category theory, Rezk~\cite{rezk01css} discovered that when defining a notion of $(\infty,1)$-cat\-e\-go\-ry,
+\index{.infinity1-category@$(\infty,1)$-category}%
+it was very convenient to use not merely a \emph{set} of objects with spaces of morphisms between them, but a \emph{space} of objects incorporating all the equivalences and homotopies between them.
+This yields a very well-behaved sort of model for $(\infty,1)$-categories as particular simplicial spaces, which Rezk called \emph{complete Segal spaces}.
+\index{complete!Segal space}%
+\index{Segal!space}%
+One especially good aspect of this model is the analogue of \cref{ct:eqv-levelwise}: a map of complete Segal spaces is an equivalence just when it is a levelwise equivalence of simplicial spaces.
+
+When interpreted in Voevodsky's simplicial\index{simplicial!sets} set model of univalent foundations, our precategories are similar to a truncated analogue of Rezk's ``Segal spaces'', while our categories correspond to his ``complete Segal spaces''.
+\index{Segal!category}%
+Strict categories correspond instead to (a weakened and truncated version of) what are called ``Segal categories''.
+It is known that Segal categories and complete Segal spaces are equivalent models for $(\infty,1)$-categories (see e.g.~\cite{bergner:infty-one}), so that in the simplicial set model, categories and strict categories yield ``equivalent'' category theories---although as we have seen, the former still have many advantages.
+However, in the more general categorical semantics of a higher topos,
+\index{.infinity1-topos@$(\infty,1)$-topos}%
+a strict category corresponds to an internal category (in the traditional sense) in the corresponding 1-topos\index{topos} of sheaves, while a category corresponds to a \emph{stack}.
+\index{stack}%
+The latter are generally a more appropriate sort of ``category'' relative to a topos.
+
+In Rezk's context, what we have called the ``Rezk completion'' corresponds to fibrant replacement
+\index{fibrant replacement}
+in the model category for complete Segal spaces.
+Since this is built using a transfinite induction argument, it most closely matches our second construction as a higher inductive type.
+However, in higher topos models of homotopy type theory, the Rezk completion corresponds to \emph{stack completion},\index{completion!stack}\index{stack!completion} which can be constructed either with a transfinite induction~\cite{jt:strong-stacks} or using a Yoneda embedding \cite{bunge:stacks-morita-internal}.
+
+
+\sectionExercises
+
+\begin{ex}\label{ex:slice-precategory}
+ For a precategory $A$ and $a:A$, define the \define{slice precategory} $A/a$.
+ \indexsee{precategory!slice}{category, slice}%
+ \indexsee{slice (pre)category}{category, slice}%
+ Show that if $A$ is a category, so is $A/a$.
+ \indexdef{category!slice}%
+\end{ex}
+
+\begin{ex}\label{ex:set-slice-over-equiv-functor-category}
+ For any set $X$, prove that the slice category $\uset/X$ is equivalent to the functor category $\uset^X$, where in the latter case we regard $X$ as a discrete category.
+\end{ex}
+
+\begin{ex}\label{ex:functor-equiv-right-adjoint}
+ \index{adjoint!functor}%
+ \index{adjoint!equivalence}%
+ Prove that a functor is an equivalence of categories if and only if it is a \emph{right} adjoint whose unit and counit are isomorphisms.
+\end{ex}
+
+\begin{ex}\label{ct:pre2cat}
+ Define the notion of \define{pre-2-category}.
+ \indexdef{pre-2-category}%
+ Show that precategories, functors, and natural transformations as defined in \cref{sec:transfors} form a pre-2-category.
+ Similarly, define a \define{pre-bicategory}
+ \indexdef{pre-bicategory}%
+ by replacing the equalities (such as those in \cref{ct:functor-assoc,ct:units}) with natural isomorphisms satisfying analogous coherence conditions.
+ Define a function from pre-2-categories to pre-bicategories, and show that it becomes an equivalence when restricted and corestricted to those whose hom-pre\-cat\-egories are categories.
+\end{ex}
+
+\begin{ex}\label{ct:2cat}
+ Define a \define{2-category}
+ \indexdef{2-category}%
+ to be a pre-2-category satisfying a condition analogous to that of \cref{ct:category}.
+ Verify that the pre-2-category of categories \ucat is a 2-category.
+ How much of this chapter can be done internally to an arbitrary 2-category?
+\end{ex}
+
+\begin{ex}\label{ct:groupoids}
+ Define a 2-category whose objects are 1-types, whose morphisms are functions, and whose 2-morphisms are homotopies.
+ Prove that it is equivalent, in an appropriate sense, to the full sub-2-category of \ucat spanned by the \emph{groupoids} (categories in which every arrow is an isomorphism).
+\end{ex}
+
+\begin{ex}\label{ex:2strict-cat}
+ \index{strict!category}%
+ Recall that a \emph{strict category} is a precategory whose type of objects is a set.
+ Prove that the pre-2-category of strict categories is equivalent to the following pre-2-category.
+ \begin{itemize}
+ \item Its objects are categories $A$ equipped with a surjection
+ % \footnote{Recall that a function $f:X\to Y$ is a \emph{surjection} if for every $y:Y$, there \emph{merely exists} an $x:X$ such that $f(x)=y$. This is to be distinguished from a \emph{split surjection}, which has the property that for every $y:Y$ there \emph{exists} an $x:X$ such that $f(x)=y$.}
+ $p_A:A_0'\to A_0$, where $A_0'$ is a set.
+ \item Its morphisms are functors $F:A\to B$ equipped with a function $F_0':A_0' \to B_0'$ such that $p_B \circ F_0' = F_0 \circ p_A$.
+ \item Its 2-morphisms are simply natural transformations.
+ \end{itemize}
+\end{ex}
+
+\begin{ex}\label{ex:pre2dagger-cat}
+ Define the pre-2-category of $\dagger$-categories, which has $\dagger$-struc\-tures on its hom-pre\-cat\-egories.
+ Show that two $\dagger$-categories are equal precisely when they are ``unitarily equivalent'' in a suitable sense.
+\end{ex}
+
+\begin{ex}\label{ct:ex:hocat}
+ Prove that a function $X\to Y$ is an equivalence if and only if its image in the homotopy category of \cref{ct:hocat} is an isomorphism.
+ Show that the type of objects of this category is $\trunc1\type$.
+\end{ex}
+
+\begin{ex}\label{ex:dagger-rezk}
+ Construct the $\dagger$-Rezk completion of a $\dagger$-precategory into a $\dagger$-category, and give it an appropriate universal property.
+\end{ex}
+
+\begin{ex}\label{ex:rezk-vankampen}
+ \index{van Kampen theorem}%
+ \index{theorem!van Kampen}%
+ \index{fundamental!groupoid}%
+ \index{fundamental!pregroupoid}%
+ Using fundamental (pre)groupoids from \cref{ct:fundgpd,ct:rezk-fundgpd-trunc1} and the Rezk completion from \cref{sec:rezk}, give a different proof of van Kampen's theorem (\cref{sec:van-kampen}).
+\end{ex}
+
+\begin{ex}\label{ex:stack}
+ Let $X$ and $Y$ be sets and $p:Y\to X$ a surjection.
+ \begin{enumerate}
+ \item Define, for any precategory $A$, the category $\mathrm{Desc}(A,p)$ of \define{descent data}
+ \indexdef{descent data}%
+ in $A$ relative to $p$.
+ \item Show that any precategory $A$ is a \define{prestack}
+ \indexdef{prestack}%
+ for $p$, i.e.\ the canonical functor $A^X \to \mathrm{Desc}(A,p)$ is fully faithful.
+ \item Show that if $A$ is a category, then it is a \define{stack}
+ \indexdef{stack}%
+ for $p$, i.e.\ $A^X \to \mathrm{Desc}(A,p)$ is an equivalence.
+ \item Show that the statement ``every strict category is a stack for every surjection of sets'' is equivalent to the axiom of choice.
+ \index{axiom!of choice}%
+ \index{strict!category}%
+ \end{enumerate}
+\end{ex}
+
+% Local Variables:
+% TeX-master: "hott-online"
+% End:
diff --git a/books/hott/equivalences.tex b/books/hott/equivalences.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f14f7bc3cea1a44ef81d08bb274d6173b1ffdf86
--- /dev/null
+++ b/books/hott/equivalences.tex
@@ -0,0 +1,1117 @@
+\chapter{Equivalences}
+\label{cha:equivalences}
+
+We now study in more detail the notion of \emph{equivalence of types} that was introduced briefly in \cref{sec:basics-equivalences}.
+Specifically, we will give several different ways to define a type $\isequiv(f)$ having the properties mentioned there.
+Recall that we wanted $\isequiv(f)$ to have the following properties, which we restate here:
+\begin{enumerate}
+\item $\qinv(f) \to \isequiv (f)$.\label{item:beb1}
+\item $\isequiv (f) \to \qinv(f)$.\label{item:beb2}
+\item $\isequiv(f)$ is a mere proposition.\label{item:beb3}
+\end{enumerate}
+Here $\qinv(f)$ denotes the type of quasi-inverses to $f$:
+\begin{equation*}
+ \sm{g:B\to A} \big((f \circ g \htpy \idfunc[B]) \times (g\circ f \htpy \idfunc[A])\big).
+\end{equation*}
+By function extensionality, it follows that $\qinv(f)$ is equivalent to the type
+\begin{equation*}
+ \sm{g:B\to A} \big((f \circ g = \idfunc[B]) \times (g\circ f = \idfunc[A])\big).
+\end{equation*}
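As a concrete rendering, the type of quasi-inverses can be written down directly in a proof assistant. The following Lean 4 sketch is ours, not part of the book's formal development: the names \texttt{Qinv}, \texttt{g}, \texttt{eps}, \texttt{eta} are invented, Lean's ambient theory is not univalent, and the homotopies are given pointwise (the function-extensionality form above).

```lean
-- A quasi-inverse to f, packaged as data: an inverse g together with
-- the two (pointwise) homotopies f ∘ g ~ id_B and g ∘ f ~ id_A.
-- Names are ours; this is a hedged sketch, not the book's formalization.
structure Qinv {A B : Type} (f : A → B) where
  g   : B → A
  eps : ∀ b : B, f (g b) = b  -- f ∘ g ~ id_B
  eta : ∀ a : A, g (f a) = a  -- g ∘ f ~ id_A

-- Example: the identity function is its own quasi-inverse.
example {A : Type} : Qinv (fun a : A => a) :=
  ⟨fun a => a, fun _ => rfl, fun _ => rfl⟩
```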
+We will define three different types having properties~\ref{item:beb1}--\ref{item:beb3}, which we call
+\begin{itemize}
+\item half adjoint equivalences,
+\item bi-invertible maps,
+ \index{function!bi-invertible}
+ and
+\item contractible functions.
+\end{itemize}
+We will also show that all these types are equivalent.
+These names are intentionally somewhat cumbersome, because after we know that they are all equivalent and have properties~\ref{item:beb1}--\ref{item:beb3}, we will revert to saying simply ``equivalence'' without needing to specify which particular definition we choose.
+But for purposes of the comparisons in this chapter, we need different names for each definition.
+
+Before we examine the different notions of equivalence, however, we give a little more explanation of why a different concept than quasi-invertibility is needed.
+
+\section{Quasi-inverses}
+\label{sec:quasi-inverses}
+
+\index{quasi-inverse|(}%
+We have said that $\qinv(f)$ is unsatisfactory because it is not a mere proposition, whereas we would rather that a given function could ``be an equivalence'' in at most one way.
+However, we have given no evidence that $\qinv(f)$ is not a mere proposition.
+In this section we exhibit a specific counterexample.
+
+\begin{lem}\label{lem:qinv-autohtpy}
+ If $f:A\to B$ is such that $\qinv (f)$ is inhabited, then
+ \[\eqv{\qinv(f)}{\Parens{\prd{x:A}(x=x)}}.\]
+\end{lem}
+\begin{proof}
+ By assumption, $f$ is an equivalence; that is, we have $e:\isequiv(f)$ and so $(f,e):\eqv A B$.
+ By univalence, $\idtoeqv:(A=B) \to (\eqv A B)$ is an equivalence, so we may assume that $(f,e)$ is of the form $\idtoeqv(p)$ for some $p:A=B$.
+ Then by path induction, we may assume $p$ is $\refl{A}$, in which case $f$ is $\idfunc[A]$.
+ Thus we are reduced to proving $\eqv{\qinv(\idfunc[A])}{(\prd{x:A}(x=x))}$.
+ Now by definition we have
+ \[ \qinv(\idfunc[A]) \jdeq
+ \sm{g:A\to A} \big((g \htpy \idfunc[A]) \times (g \htpy \idfunc[A])\big).
+ \]
+ By function extensionality, this is equivalent to
+ \[ \sm{g:A\to A} \big((g = \idfunc[A]) \times (g = \idfunc[A])\big).
+ \]
+ And by \cref{ex:sigma-assoc}, this is equivalent to
+  \[ \sm{h:\sm{g:A\to A} (g = \idfunc[A])} (\proj1(h) = \idfunc[A]).
+ \]
+ However, by \cref{thm:contr-paths}, $\sm{g:A\to A} (g = \idfunc[A])$ is contractible with center $(\idfunc[A],\refl{\idfunc[A]})$; therefore by \cref{thm:omit-contr} this type is equivalent to $\idfunc[A] = \idfunc[A]$.
+ And by function extensionality, $\idfunc[A] = \idfunc[A]$ is equivalent to $\prd{x:A} x=x$.
+\end{proof}

+
+\noindent
+We remark that \cref{ex:qinv-autohtpy-no-univalence} asks for a proof of the above lemma which avoids univalence.
+
+Thus, what we need is some $A$ which admits a nontrivial element of $\prd{x:A}(x=x)$.
+Thinking of $A$ as a higher groupoid, an inhabitant of $\prd{x:A}(x=x)$ is a natural transformation\index{natural!transformation} from the identity functor of $A$ to itself.
+Such transformations are said to form the \define{center of a category},
+\index{center!of a category}%
+\index{category!center of}%
+since the naturality axiom requires that they commute with all morphisms.
+Classically, if $A$ is simply a group regarded as a one-object groupoid, then this yields precisely its center in the usual group-theoretic sense.
+This provides some motivation for the following.
+
+\begin{lem}\label{lem:autohtpy}
+ Suppose we have a type $A$ with $a:A$ and $q:a=a$ such that
+ \begin{enumerate}
+ \item The type $a=a$ is a set.\label{item:autohtpy1}
+ \item For all $x:A$ we have $\brck{a=x}$.\label{item:autohtpy2}
+ \item For all $p:a=a$ we have $p\ct q = q \ct p$.\label{item:autohtpy3}
+ \end{enumerate}
+ Then there exists $f:\prd{x:A} (x=x)$ with $f(a)=q$.
+\end{lem}
+\begin{proof}
+ Let $g:\prd{x:A} \brck{a=x}$ be as given by~\ref{item:autohtpy2}. First we
+ observe that each type $\id[A]xy$ is a set. For since being a set is a mere
+ proposition, we may apply the induction principle of propositional truncation, and assume that $g(x)=\bproj
+ p$ and $g(y)=\bproj{p'}$ for $p:a=x$ and $p':a=y$. In this case, composing with
+ $p$ and $\opp{p'}$ yields an equivalence $\eqv{(x=y)}{(a=a)}$. But $(a=a)$ is
+ a set by~\ref{item:autohtpy1}, so $(x=y)$ is also a set.
+
+ Now, we would like to define $f$ by assigning to each $x$ the path $\opp{g(x)}
+ \ct q \ct g(x)$, but this does not work because $g(x)$ does not inhabit $a=x$
+ but rather $\brck{a=x}$, and the type $(x=x)$ may not be a mere proposition,
+ so we cannot use induction on propositional truncation. Instead we can apply
+ the technique mentioned in \cref{sec:unique-choice}: we characterize
+ uniquely the object we wish to construct. Let us define, for each $x:A$, the
+ type
+ \[ B(x) \defeq \sm{r:x=x} \prd{s:a=x} (r = \opp s \ct q\ct s).\]
+ We claim that $B(x)$ is a mere proposition for each $x:A$.
+ Since this claim is itself a mere proposition, we may again apply induction on
+ truncation and assume that $g(x) = \bproj p$ for some $p:a=x$.
+ Now suppose given $(r,h)$ and $(r',h')$ in $B(x)$; then we have
+ \[ h(p) \ct \opp{h'(p)} : r = r'. \]
+ It remains to show that $h$ is identified with $h'$ when transported along this equality, which by transport in identity types and function types (\cref{sec:compute-paths,sec:compute-pi}), reduces to showing
+ \[ h(s) = h(p) \ct \opp{h'(p)} \ct h'(s) \]
+ for any $s:a=x$.
+ But each side of this is an equality between elements of $(x=x)$, so it follows from our above observation that $(x=x)$ is a set.
+
+ Thus, each $B(x)$ is a mere proposition; we claim that $\prd{x:A} B(x)$.
+ Given $x:A$, we may now invoke the induction principle of propositional truncation to assume that $g(x) = \bproj p$ for $p:a=x$.
+ We define $r \defeq \opp p \ct q \ct p$; to inhabit $B(x)$ it remains to show that for any $s:a=x$ we have
+ $r = \opp s \ct q \ct s$.
+ Manipulating paths, this reduces to showing that $q\ct (p\ct \opp s) = (p\ct \opp s) \ct q$.
+ But this is just an instance of~\ref{item:autohtpy3}.
+\end{proof}
+
+\begin{thm}\label{thm:qinv-notprop}
+ There exist types $A$ and $B$ and a function $f:A\to B$ such that $\qinv(f)$ is not a mere proposition.
+\end{thm}
+\begin{proof}
+ It suffices to exhibit a type $X$ such that $\prd{x:X} (x=x)$ is not a mere proposition.
+ Define $X\defeq \sm{A:\type} \brck{\bool=A}$, as in the proof of \cref{thm:no-higher-ac}.
+ It will suffice to exhibit an $f:\prd{x:X} (x=x)$ which is unequal to $\lam{x} \refl{x}$.
+
+ Let $a \defeq (\bool,\bproj{\refl{\bool}}) : X$, and let $q:a=a$ be the path corresponding to the nonidentity equivalence $e:\eqv\bool\bool$ defined by $e(\bfalse)\defeq\btrue$ and $e(\btrue)\defeq\bfalse$.
+ We would like to apply \cref{lem:autohtpy} to build an $f$.
+ By definition of $X$, equalities in subset types (\cref{subsec:prop-subsets}), and univalence, we have $\eqv{(a=a)}{(\eqv{\bool}{\bool})}$, which is a set, so~\ref{item:autohtpy1} holds.
+ Similarly, by definition of $X$ and equalities in subset types we have~\ref{item:autohtpy2}.
+ Finally, \cref{ex:eqvboolbool} implies that every equivalence $\eqv\bool\bool$ is equal to either $\idfunc[\bool]$ or $e$, so we can show~\ref{item:autohtpy3} by a four-way case analysis.
+
+ Thus, we have $f:\prd{x:X} (x=x)$ such that $f(a) = q$.
+ Since $e$ is not equal to $\idfunc[\bool]$, $q$ is not equal to $\refl{a}$, and thus $f$ is not equal to $\lam{x} \refl{x}$.
+ Therefore, $\prd{x:X} (x=x)$ is not a mere proposition.
+\end{proof}
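The nonidentity self-equivalence of $\bool$ that drives this proof is easy to exhibit concretely. Here is a Lean 4 sketch (the name \texttt{flip'} is ours; this is illustration, not the book's formal development) showing that boolean negation is its own quasi-inverse and differs from the identity:

```lean
-- Boolean negation: the nonidentity self-equivalence e of Bool
-- used in the proof above. The name flip' is our own choice.
def flip' : Bool → Bool := fun b => !b

-- It is involutive, hence its own quasi-inverse ...
example : ∀ b, flip' (flip' b) = b := by
  intro b; cases b <;> rfl

-- ... and it is not the identity function:
example : flip' true = false := rfl
```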
+
+More generally, \cref{lem:autohtpy} implies that any ``Eilenberg--Mac Lane space'' $K(G,1)$, where $G$ is a nontrivial abelian\index{group!abelian} group, will provide a counterexample; see \cref{cha:homotopy}.
+The type $X$ we used turns out to be equivalent to $K(\mathbb{Z}_2,1)$.
+In \cref{cha:hits} we will see that the circle $\Sn^1 = K(\mathbb{Z},1)$ is another easy-to-describe example.
+
+We now move on to describing better notions of equivalence.
+
+\index{quasi-inverse|)}%
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\section{Half adjoint equivalences}
+\label{sec:hae}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\index{equivalence!half adjoint|(defstyle}%
+\index{half adjoint equivalence|(defstyle}%
+\index{adjoint!equivalence!of types, half|(defstyle}%
+
+In \cref{sec:quasi-inverses} we concluded that $\qinv(f)$ is equivalent to $\prd{x:A} (x=x)$ by discarding a contractible type.
+Roughly, the type $\qinv(f)$ contains three data $g$, $\eta$, and $\epsilon$, of which two ($g$ and $\eta$) could together be seen to be contractible when $f$ is an equivalence.
+The problem is that removing these data left one remaining ($\epsilon$).
+In order to solve this problem, the idea is to add one \emph{additional} datum which, together with $\epsilon$, forms a contractible type.
+
+\begin{defn}\label{defn:ishae}
+ A function $f:A\to B$ is a \define{half adjoint equivalence}
+ if there are $g:B\to A$ and homotopies $\eta: g \circ f \htpy \idfunc[A]$ and $\epsilon:f \circ g \htpy \idfunc[B]$ such that there exists a homotopy
+ \[\tau : \prd{x:A} \map{f}{\eta x} = \epsilon(fx).\]
+\end{defn}
+
+Thus we have a type $\ishae(f)$, defined to be
+\begin{equation*}
+ \sm{g:B\to A}{\eta: g \circ f \htpy \idfunc[A]}{\epsilon:f \circ g \htpy \idfunc[B]} \prd{x:A} \map{f}{\eta x} = \epsilon(fx).
+\end{equation*}
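Spelled out as data, a half adjoint equivalence is thus a quadruple. The following Lean 4 sketch uses our own names and pointwise homotopies; \texttt{congrArg} is Lean's name for the action $\apfunc{f}$ of a function on paths. One caveat: in Lean equations between proofs of equalities are trivialized by proof irrelevance, whereas in homotopy type theory the cell $\tau$ is genuine data, so this is only a shape-level illustration.

```lean
-- A half adjoint equivalence: a quasi-inverse (g, eta, eps)
-- plus the single coherence cell tau : ap_f (eta a) = eps (f a).
-- congrArg f plays the role of ap_f; names are our own, and in
-- Lean (unlike HoTT) the field tau carries no real information.
structure IsHAE {A B : Type} (f : A → B) where
  g   : B → A
  eta : ∀ a : A, g (f a) = a
  eps : ∀ b : B, f (g b) = b
  tau : ∀ a : A, congrArg f (eta a) = eps (f a)
```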
+Note that in the above definition, the coherence\index{coherence} condition relating $\eta$ and $\epsilon$ only involves $f$.
+We might consider instead an analogous coherence condition involving $g$:
+\[\upsilon : \prd{y:B} \map{g}{\epsilon y} = \eta(gy)\]
+and a resulting analogous definition $\ishae'(f)$.
+
+Fortunately, it turns out each of the conditions implies the other one:
+
+\begin{lem}\label{lem:coh-equiv}
+For functions $f : A \to B$ and $g:B\to A$ and homotopies $\eta: g \circ f \htpy \idfunc[A]$ and $\epsilon:f \circ g \htpy \idfunc[B]$, the following conditions are logically equivalent:
+\begin{itemize}
+\item $\prd{x:A} \map{f}{\eta x} = \epsilon(fx)$
+\item $\prd{y:B} \map{g}{\epsilon y} = \eta(gy)$
+\end{itemize}
+\end{lem}
+\begin{proof}
+ It suffices to show one direction; the other one is obtained by replacing $A$, $f$, and $\eta$ by $B$, $g$, and $\epsilon$ respectively.
+ Let $\tau : \prd{x:A}\;\map{f}{\eta x} = \epsilon(fx)$.
+ Fix $y : B$.
+ Using naturality of $\epsilon$ and applying $g$, we get the following commuting diagram of paths:
+\[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }}
+ \xymatrix@C=3pc{gfgfgy \ar@{=}^-{gfg(\epsilon y)}[r] \ar@{=}_{g(\epsilon (fgy))}[d] & gfgy \ar@{=}^{g(\epsilon y)}[d] \\ gfgy \ar@{=}_{g(\epsilon y)}[r] & gy
+ }\]
+Using $\tau(gy)$ on the left side of the diagram gives us
+\[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }}
+ \xymatrix@C=3pc{gfgfgy \ar@{=}^-{gfg(\epsilon y)}[r] \ar@{=}_{gf(\eta (gy))}[d] & gfgy \ar@{=}^{g(\epsilon y)}[d] \\ gfgy \ar@{=}_{g(\epsilon y)}[r] & gy
+ }\]
+Using the commutativity of $\eta$ with $g \circ f$ (\cref{cor:hom-fg}), we have
+\[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }}
+ \xymatrix@C=3pc{gfgfgy \ar@{=}^-{gfg(\epsilon y)}[r] \ar@{=}_{\eta (gfgy)}[d] & gfgy \ar@{=}^{g(\epsilon y)}[d] \\ gfgy \ar@{=}_{g(\epsilon y)}[r] & gy
+ }\]
+However, by naturality of $\eta$ we also have
+\[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }}
+ \xymatrix@C=3pc{gfgfgy \ar@{=}^-{gfg(\epsilon y)}[r] \ar@{=}_{\eta (gfgy)}[d] & gfgy \ar@{=}^{\eta(gy)}[d] \\ gfgy \ar@{=}_{g(\epsilon y)}[r] & gy
+ }\]
+Thus, canceling all but the right-hand homotopy, we have $g(\epsilon y) = \eta(g y)$ as desired.
+\end{proof}
+
+However, it is important that we do not include \emph{both} $\tau$ and $\upsilon$ in the definition of $\ishae (f)$ (whence the name ``\emph{half} adjoint equivalence'').
+If we did, then after canceling contractible types we would still have one remaining datum --- unless we added another higher coherence condition.
+In general, we expect to get a well-behaved type if we cut off after an odd number of coherences.
+
+Of course, it is obvious that $\ishae(f) \to\qinv(f)$: simply forget the coherence datum.
+The other direction is a version of a standard argument from homotopy theory and category theory.
+
+\begin{thm}\label{thm:equiv-iso-adj}
+ For any $f:A\to B$ we have $\qinv(f)\to\ishae(f)$.
+\end{thm}
+\begin{proof}
+Suppose that $(g,\eta,\epsilon)$ is a quasi-inverse for $f$. We have to provide
+a quadruple $(g',\eta',\epsilon',\tau)$ witnessing that $f$ is a half adjoint equivalence. To
+define $g'$ and $\eta'$, we can just make the obvious choice by setting $g'
+\defeq g$ and $\eta'\defeq \eta$. However, in the definition of $\epsilon'$ we
+need to start worrying about the construction of $\tau$, so we cannot just follow our nose
+and take $\epsilon'$ to be $\epsilon$. Instead, we take
+\begin{equation*}
+\epsilon'(b) \defeq \opp{\epsilon(f(g(b)))}\ct (\ap{f}{\eta(g(b))}\ct \epsilon(b)).
+\end{equation*}
+Now we need to find
+\begin{equation*}
+\tau(a): \ap{f}{\eta(a)}=\opp{\epsilon(f(g(f(a))))}\ct (\ap{f}{\eta(g(f(a)))}\ct \epsilon(f(a))).
+\end{equation*}
+Note first that by \cref{cor:hom-fg}, we have
+%$\eta(g(f(a)))\ct\eta(a)=\ap{g}{\ap{f}{\eta(a)}}\ct\eta(a)$ and hence it follows that
+$\eta(g(f(a)))=\ap{g}{\ap{f}{\eta(a)}}$. Therefore, we can apply
+\cref{lem:htpy-natural} to compute
+\begin{align*}
+\ap{f}{\eta(g(f(a)))}\ct \epsilon(f(a))
+& = \ap{f}{\ap{g}{\ap{f}{\eta(a)}}}\ct \epsilon(f(a))\\
+& = \epsilon(f(g(f(a))))\ct \ap{f}{\eta(a)}
+\end{align*}
+from which we get the desired path $\tau(a)$.
+\end{proof}
+
+Combining this with \cref{lem:coh-equiv} (or symmetrizing the proof), we also have $\qinv(f)\to\ishae'(f)$.
+
+It remains to show that $\ishae(f)$ is a mere proposition.
+For this, we will need to know that the fibers of an equivalence are contractible.
+
+\begin{defn}\label{defn:homotopy-fiber}
+ The \define{fiber}
+ \indexdef{fiber}%
+ \indexsee{function!fiber of}{fiber}%
+ of a map $f:A\to B$ over a point $y:B$ is
+ \[ \hfib f y \defeq \sm{x:A} (f(x) = y).\]
+\end{defn}
+
+In homotopy theory, this is what would be called the \emph{homotopy fiber} of $f$.
+The path lemmas in \cref{sec:computational} yield the following characterization of paths in fibers:
+
+\begin{lem}\label{lem:hfib}
+ For any $f : A \to B$, $y : B$, and $(x,p),(x',p') : \hfib{f}{y}$, we have
+ \[ \big((x,p) = (x',p')\big) \eqvsym \Parens{\sm{\gamma : x = x'} f(\gamma) \ct p' = p} \qedhere\]
+\end{lem}
+
+\begin{thm}\label{thm:contr-hae}
+ If $f:A\to B$ is a half adjoint equivalence, then for any $y:B$ the fiber $\hfib f y$ is contractible.
+\end{thm}
+\begin{proof}
+ Let $(g,\eta,\epsilon,\tau) : \ishae(f)$, and fix $y : B$.
+ As our center of contraction for $\hfib{f}{y}$ we choose $(gy, \epsilon y)$.
+ Now take any $(x,p) : \hfib{f}{y}$; we want to construct a path from $(gy, \epsilon y)$ to $(x,p)$.
+ By \cref{lem:hfib}, it suffices to give a path $\gamma : \id{gy}{x}$ such that $\ap f\gamma \ct p = \epsilon y$.
+ We put $\gamma \defeq \opp{g(p)} \ct \eta x$.
+ Then we have
+ \begin{align*}
+ f(\gamma) \ct p & = \opp{fg(p)} \ct f (\eta x) \ct p \\
+ & = \opp{fg(p)} \ct \epsilon(fx) \ct p \\
+ & = \epsilon y
+ \end{align*}
+ where the second equality follows by $\tau x$ and the third equality is naturality of $\epsilon$.
+\end{proof}
+
+We now define the types which encapsulate contractible pairs of data.
+The following types put together the quasi-inverse $g$ with one of the homotopies.
+
+\begin{defn}\label{defn:linv-rinv}
+ Given a function $f:A\to B$, we define the types
+ \begin{align*}
+ \linv(f) &\defeq \sm{g:B\to A} (g\circ f\htpy \idfunc[A])\\
+ \rinv(f) &\defeq \sm{g:B\to A} (f\circ g\htpy \idfunc[B])
+ \end{align*}
+ of \define{left inverses}
+ \indexdef{left!inverse}%
+ \indexdef{inverse!left}%
+ and \define{right inverses}
+ \indexdef{right!inverse}%
+ \indexdef{inverse!right}%
+ to $f$, respectively.
+ We call $f$ \define{left invertible}
+ \indexdef{function!left invertible}%
+ \indexdef{function!right invertible}%
+ if $\linv(f)$ is inhabited, and similarly \define{right invertible}
+ \indexdef{left!invertible function}%
+ \indexdef{right!invertible function}%
+ if $\rinv(f)$ is inhabited.
+\end{defn}
+
+\begin{lem}\label{thm:equiv-compose-equiv}
+ If $f:A\to B$ has a quasi-inverse, then so do
+ \begin{align*}
+ (f\circ \blank) &: (C\to A) \to (C\to B)\\
+ (\blank\circ f) &: (B\to C) \to (A\to C).
+ \end{align*}
+\end{lem}
+\begin{proof}
+ If $g$ is a quasi-inverse of $f$, then $(g\circ \blank)$ and $(\blank\circ g)$ are quasi-inverses of $(f\circ \blank)$ and $(\blank\circ f)$ respectively.
+\end{proof}
+
+\begin{lem}\label{lem:inv-hprop}
+ If $f : A \to B$ has a quasi-inverse, then the types $\rinv(f)$ and $\linv(f)$ are contractible.
+\end{lem}
+\begin{proof}
+ By function extensionality, we have
+ \[\eqv{\linv(f)}{\sm{g:B\to A} (g\circ f = \idfunc[A])}.\]
+ But this is the fiber of $(\blank\circ f)$ over $\idfunc[A]$, and so
+ by \cref{thm:equiv-compose-equiv,thm:equiv-iso-adj,thm:contr-hae}, it is contractible.
+ Similarly, $\rinv(f)$ is equivalent to the fiber of $(f\circ \blank)$ over $\idfunc[B]$ and hence contractible.
+\end{proof}
+
+Next we define the types which put together the other homotopy with the additional coherence datum.\index{coherence}%
+
+\begin{defn}\label{defn:lcoh-rcoh}
+For $f : A \to B$, a left inverse $(g,\eta) : \linv(f)$, and a right inverse $(g,\epsilon) : \rinv(f)$, we denote
+\begin{align*}
+\lcoh{f}{g}{\eta} & \defeq \sm{\epsilon : f\circ g \htpy \idfunc[B]} \prd{y:B} g(\epsilon y) = \eta (gy), \\
+\rcoh{f}{g}{\epsilon} & \defeq \sm{\eta : g\circ f \htpy \idfunc[A]} \prd{x:A} f(\eta x) = \epsilon (fx).
+\end{align*}
+\end{defn}
+
+\begin{lem}\label{lem:coh-hfib}
+For any $f,g,\epsilon,\eta$, we have
+\begin{align*}
+\lcoh{f}{g}{\eta} & \eqvsym {\prd{y:B} \id[\hfib{g}{gy}]{(fgy,\eta(gy))}{(y,\refl{gy})}}, \\
+\rcoh{f}{g}{\epsilon} & \eqvsym {\prd{x:A} \id[\hfib{f}{fx}]{(gfx,\epsilon(fx))}{(x,\refl{fx})}}.
+\end{align*}
+\end{lem}
+\begin{proof}
+Using \cref{lem:hfib}.
+\end{proof}
+
+\begin{lem}\label{lem:coh-hprop}
+ If $f$ is a half adjoint equivalence, then for any $(g,\epsilon) : \rinv(f)$, the type $\rcoh{f}{g}{\epsilon}$ is contractible.
+\end{lem}
+\begin{proof}
+ By \cref{lem:coh-hfib} and the fact that dependent function types preserve contractible spaces, it suffices to show that for each $x:A$, the type $\id[\hfib{f}{fx}]{(gfx,\epsilon(fx))}{(x,\refl{fx})}$ is contractible.
+ But by \cref{thm:contr-hae}, $\hfib{f}{fx}$ is contractible, and any path space of a contractible space is itself contractible.
+\end{proof}
+
+\begin{thm}\label{thm:hae-hprop}
+ For any $f : A \to B$, the type $\ishae(f)$ is a mere proposition.
+\end{thm}
+\begin{proof}
+ By \cref{ex:prop-inhabcontr} it suffices to assume $f$ to be a half adjoint equivalence and show that $\ishae(f)$ is contractible.
+ Now by associativity of $\Sigma$ (\cref{ex:sigma-assoc}), the type $\ishae(f)$ is equivalent to
+ \[\sm{u : \rinv(f)} \rcoh{f}{\proj{1}(u)}{\proj{2}(u)}.\]
+ But by \cref{lem:inv-hprop,lem:coh-hprop} and the fact that $\Sigma$ preserves contractibility, the latter type is also contractible.
+\end{proof}
+
+Thus, we have shown that $\ishae(f)$ has all three desiderata for the type $\isequiv(f)$.
+In the next two sections we consider a couple of other possibilities.
+
+\index{equivalence!half adjoint|)}%
+\index{half adjoint equivalence|)}%
+\index{adjoint!equivalence!of types, half|)}%
+
+\section{Bi-invertible maps}
+\label{sec:biinv}
+
+\index{function!bi-invertible|(defstyle}%
+\index{bi-invertible function|(defstyle}%
+\index{equivalence!as bi-invertible function|(defstyle}%
+
+Using the language introduced in \cref{sec:hae}, we can restate the definition proposed in \cref{sec:basics-equivalences} as follows.
+
+\begin{defn}\label{defn:biinv}
+ We say $f:A\to B$ is \define{bi-invertible}
+ if it has both a left inverse and a right inverse:
+ \[ \biinv (f) \defeq \linv(f) \times \rinv(f). \]
+\end{defn}
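In the same spirit as the earlier sketches, bi-invertibility can be rendered in Lean 4 as the product of the two inverse types. All names below are our own, the homotopies are pointwise, and the definitions are local rather than drawn from any library:

```lean
-- Left and right inverses as subtypes of B → A, and
-- bi-invertibility as their product. Names are ours.
def LInv {A B : Type} (f : A → B) : Type :=
  { g : B → A // ∀ a : A, g (f a) = a }

def RInv {A B : Type} (f : A → B) : Type :=
  { h : B → A // ∀ b : B, f (h b) = b }

def BiInv {A B : Type} (f : A → B) : Type :=
  LInv f × RInv f

-- Example: the identity function is bi-invertible.
example {A : Type} : BiInv (fun a : A => a) :=
  ⟨⟨fun a => a, fun _ => rfl⟩, ⟨fun a => a, fun _ => rfl⟩⟩
```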
+
+In \cref{sec:basics-equivalences} we proved that $\qinv(f)\to\biinv(f)$ and $\biinv(f)\to\qinv(f)$.
+What remains is the following.
+
+\begin{thm}\label{thm:isprop-biinv}
+ For any $f:A\to B$, the type $\biinv(f)$ is a mere proposition.
+\end{thm}
+\begin{proof}
+ We may suppose $f$ to be bi-invertible and show that $\biinv(f)$ is contractible.
+ But since $\biinv(f)\to\qinv(f)$, by \cref{lem:inv-hprop} in this case both $\linv(f)$ and $\rinv(f)$ are contractible, and the product of contractible types is contractible.
+\end{proof}
+
+Note that this also fits the proposal made at the beginning of \cref{sec:hae}: we combine $g$ and $\eta$ into a contractible type and add an additional datum which combines with $\epsilon$ into a contractible type.
+The difference is that instead of adding a \emph{higher} datum (a 2-dimensional path) to combine with $\epsilon$, we add a \emph{lower} one (a right inverse that is separate from the left inverse).
+
+\begin{cor}\label{thm:equiv-biinv-isequiv}
+ For any $f:A\to B$ we have $\eqv{\biinv(f)}{\ishae(f)}$.
+\end{cor}
+\begin{proof}
+ We have $\biinv(f) \to \qinv(f) \to \ishae(f)$ and $\ishae(f) \to \qinv(f) \to \biinv(f)$.
+ Since both $\ishae(f)$ and $\biinv(f)$ are mere propositions, the equivalence follows from \cref{lem:equiv-iff-hprop}.
+\end{proof}
+
+\index{function!bi-invertible|)}%
+\index{bi-invertible function|)}%
+\index{equivalence!as bi-invertible function|)}%
+
+\section{Contractible fibers}
+\label{sec:contrf}
+
+\index{function!contractible|(defstyle}%
+\index{contractible!function|(defstyle}%
+\index{equivalence!as contractible function|(defstyle}%
+
+Note that our proofs about $\ishae(f)$ and $\biinv(f)$ made essential use of the fact that the fibers of an equivalence are contractible.
+In fact, it turns out that this property is itself a sufficient definition of equivalence.
+
+\begin{defn}[Contractible maps] \label{defn:equivalence}
+ A map $f:A\to B$ is \define{contractible}
+ if for all $y:B$, the fiber $\hfib f y$ is contractible.
+\end{defn}
+
+Thus, the type $\iscontr(f)$ is defined to be
+\begin{align}
+ \iscontr(f) &\defeq \prd{y:B} \iscontr(\hfib f y)\label{eq:iscontrf}
+ % \\
+ % &\defeq \prd{y:B} \iscontr (\setof{x:A | f(x) = y}).
+\end{align}
+Note that in \cref{sec:contractibility} we defined what it means for a \emph{type} to be contractible.
+Here we are defining what it means for a \emph{map} to be contractible.
+Our terminology follows the general homotopy-theoretic practice of saying that a map has a certain property if all of its (homotopy) fibers have that property.
+Thus, a type $A$ is contractible just when the map $A\to\unit$ is contractible.
+From \cref{cha:hlevels} onwards we will also call contractible maps and types \emph{$(-2)$-truncated}.
+
+We have already shown in \cref{thm:contr-hae} that $\ishae(f) \to \iscontr(f)$.
+Conversely:
+
+\begin{thm}\label{thm:lequiv-contr-hae}
+For any $f:A\to B$ we have ${\iscontr(f)} \to {\ishae(f)}$.
+\end{thm}
+\begin{proof}
+Let $P : \iscontr(f)$. We define an inverse mapping $g : B \to A$ by sending each $y : B$ to the center of contraction of the fiber at $y$:
+\[ g(y) \defeq \proj{1}(\proj{1}(Py)). \]
+We can thus define the homotopy $\epsilon$ by mapping $y$ to the witness that $g(y)$ indeed belongs to the fiber at $y$:
+\[ \epsilon(y) \defeq \proj{2}(\proj{1}(P y)). \]
+It remains to define $\eta$ and $\tau$. This of course amounts to giving an element of $\rcoh{f}{g}{\epsilon}$. By \cref{lem:coh-hfib}, this is the same as giving for each $x:A$ a path from $(gfx,\epsilon(fx))$ to $(x,\refl{fx})$ in the fiber of $f$ over $fx$. But this is easy: for any $x : A$, the type $\hfib{f}{fx}$
+is contractible by assumption, hence such a path must exist. We can construct it explicitly as
+\[\opp{\big(\proj{2}(P(fx))(gfx,\epsilon(fx))\big)} \ct \big(\proj{2}(P(fx)) (x,\refl{fx})\big). \qedhere \]
+\end{proof}
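The first half of this proof is pure projection: the inverse and the homotopy $\epsilon$ are simply read off from the centers of contraction of the fibers. A self-contained Lean 4 sketch of that step (all names ours; \texttt{Fib} is the fiber as a subtype, \texttt{PointedContr} packages a center with a contraction):

```lean
-- Contractibility data for a type: a center and a contraction.
structure PointedContr (X : Type) where
  center : X
  contr  : ∀ x : X, center = x

-- The (homotopy) fiber of f over y, as a subtype.
def Fib {A B : Type} (f : A → B) (y : B) : Type :=
  { x : A // f x = y }

-- g(y) := pr1(pr1(P y)), the center of the fiber over y ...
def inv {A B : Type} (f : A → B)
    (P : ∀ y : B, PointedContr (Fib f y)) : B → A :=
  fun y => ((P y).center).val

-- ... and eps(y) := pr2(pr1(P y)) witnesses f (g y) = y.
theorem inv_section {A B : Type} (f : A → B)
    (P : ∀ y : B, PointedContr (Fib f y)) (y : B) :
    f (inv f P y) = y :=
  ((P y).center).property
```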
+
+It is also easy to see:
+
+\begin{lem}\label{thm:contr-hprop}
+ For any $f$, the type $\iscontr(f)$ is a mere proposition.
+\end{lem}
+\begin{proof}
+ By \cref{thm:isprop-iscontr}, each type $\iscontr (\hfib f y)$ is a mere proposition.
+ Thus, by \cref{thm:isprop-forall}, so is~\eqref{eq:iscontrf}.
+\end{proof}
+
+\begin{thm}\label{thm:equiv-contr-hae}
+ For any $f:A\to B$ we have $\eqv{\iscontr(f)}{\ishae(f)}$.
+\end{thm}
+\begin{proof}
+ We have already established a logical equivalence ${\iscontr(f)} \Leftrightarrow {\ishae(f)}$, and both are mere propositions (\cref{thm:contr-hprop,thm:hae-hprop}).
+ Thus, \cref{lem:equiv-iff-hprop} applies.
+\end{proof}
+
+Usually, we prove that a function is an equivalence by exhibiting a quasi-inverse, but sometimes this definition is more convenient.
+For instance, it implies that when proving a function to be an equivalence, we are free to assume that its codomain is inhabited.
+
+\begin{cor}\label{thm:equiv-inhabcod}
+ If $f:A\to B$ is such that $B\to \isequiv(f)$, then $f$ is an equivalence.
+\end{cor}
+\begin{proof}
+ To show $f$ is an equivalence, it suffices to show that $\hfib f y$ is contractible for any $y:B$.
+ But if $e:B\to \isequiv(f)$, then given any such $y$ we have $e(y):\isequiv(f)$, so that $f$ is an equivalence and hence $\hfib f y$ is contractible, as desired.
+\end{proof}
+
+\index{function!contractible|)}%
+\index{contractible!function|)}%
+\index{equivalence!as contractible function|)}%
+
+\section{On the definition of equivalences}
+\label{sec:concluding-remarks}
+
+\indexdef{equivalence}
+We have shown that all three definitions of equivalence satisfy the three desirable properties and are pairwise equivalent:
+\[ \iscontr(f) \eqvsym \ishae(f) \eqvsym \biinv(f). \]
+(There are yet more possible definitions of equivalence, but we will stop with these three.
+See \cref{ex:brck-qinv} and the exercises in this chapter for some more.)
+Thus, we may choose any one of them as ``the'' definition of $\isequiv (f)$.
+For definiteness, we choose to define
+\[ \isequiv(f) \defeq \ishae(f).\]
+\index{mathematics!formalized}%
+This choice is advantageous for formalization, since $\ishae(f)$ contains the most directly useful data.
+On the other hand, for other purposes, $\biinv(f)$ is often easier to deal with, since it contains no 2-dimensional paths and its two symmetrical halves can be treated independently.
+However, for purposes of this book, the specific choice will make little difference.
+
+In the rest of this chapter, we study some other properties and characterizations of equivalences.
+\index{equivalence!properties of}%
+
+
+\section{Surjections and embeddings}
+\label{sec:mono-surj}
+
+\index{set}
+When $A$ and $B$ are sets and $f:A\to B$ is an equivalence, we also call it an \define{isomorphism}
+\indexdef{isomorphism!of sets}%
+or a \define{bijection}.
+\indexdef{bijection}%
+\indexsee{function!bijective}{bijection}%
+(We avoid these words for types that are not sets, since in homotopy theory and higher category theory they often denote a stricter notion of ``sameness'' than homotopy equivalence.)
+In set theory, a function is a bijection just when it is both injective and surjective.
+The same is true in type theory, if we formulate these conditions appropriately.
+For clarity, when dealing with types that are not sets, we will speak of \emph{embeddings} instead of injections.
+
+\begin{defn}\label{defn:surj-emb}
+ Let $f:A\to B$.
+ \begin{enumerate}
+ \item We say $f$ is \define{surjective}
+ \indexsee{surjective!function}{function, surjective}%
+ \indexdef{function!surjective}%
+ (or a \define{surjection})
+ \indexsee{surjection}{function, surjective}%
+ if for every $b:B$ we have $\brck{\hfib f b}$.
+ \item We say $f$ is an \define{embedding}
+ \indexdef{function!embedding}%
+ \indexsee{embedding}{function, embedding}%
+ if for every $x,y:A$ the function $\apfunc f : (\id[A]xy) \to (\id[B]{f(x)}{f(y)})$ is an equivalence.
+ \end{enumerate}
+\end{defn}
+
+In other words, $f$ is surjective if every fiber of $f$ is merely inhabited, or equivalently if for all $b:B$ there merely exists an $a:A$ such that $f(a)=b$.
+In traditional logical notation, $f$ is surjective if $\fall{b:B}\exis{a:A} (f(a)=b)$.
+This must be distinguished from the stronger assertion that $\prd{b:B}\sm{a:A} (f(a)=b)$; if this holds we say that $f$ is a \define{split surjection}.
+\indexsee{split!surjection}{function, split surjective}%
+\indexsee{surjection!split}{function, split surjective}%
+\indexsee{surjective!function!split}{function, split surjective}%
+\indexdef{function!split surjective}%
+(Since this latter type is equivalent to $\sm{g:B\to A}\prd{b:B} (f(g(b))=b)$, being a split surjection is the same as being a \emph{retraction} as defined in \cref{sec:contractibility}.)
+\index{retraction}%
+\index{function!retraction}%
+
+The axiom of choice from \cref{sec:axiom-choice} says exactly that every surjection \emph{between sets} is split.
+However, in the presence of the univalence axiom, it is simply false that \emph{all} surjections are split.
+In \cref{thm:no-higher-ac} we constructed a type family $Y:X\to \type$ such that $\prd{x:X} \brck{Y(x)}$ but $\neg \prd{x:X} Y(x)$;
+for any such family, the first projection $(\sm{x:X} Y(x)) \to X$ is a surjection that is not split.
+
+If $A$ and $B$ are sets, then by \cref{lem:equiv-iff-hprop}, $f$ is an embedding just when
+\begin{equation}
+ \prd{x,y:A} (\id[B]{f(x)}{f(y)}) \to (\id[A]xy).\label{eq:injective}
+\end{equation}
+In this case we say that $f$ is \define{injective},
+\indexsee{injective function}{function, injective}%
+\indexdef{function!injective}%
+or an \define{injection}.
+\indexsee{injection}{function, injective}%
+We avoid these words for types that are not sets, because they might be interpreted as~\eqref{eq:injective}, which is an ill-behaved notion for non-sets.
+It is also true that any function between sets is surjective if and only if it is an \emph{epimorphism} in a suitable sense, but this also fails for more general types, and surjectivity is generally the more important notion.
+
+\begin{thm}\label{thm:mono-surj-equiv}
+ A function $f:A\to B$ is an equivalence if and only if it is both surjective and an embedding.
+\end{thm}
+\begin{proof}
+ If $f$ is an equivalence, then each $\hfib f b$ is contractible, hence so is $\brck{\hfib f b}$, so $f$ is surjective.
+ And we showed in \cref{thm:paths-respects-equiv} that any equivalence is an embedding.
+
+ Conversely, suppose $f$ is a surjective embedding.
+ Let $b:B$; we show that $\sm{x:A}(f(x)=b)$ is contractible.
+ Since $f$ is surjective, there merely exists an $a:A$ such that $f(a)=b$.
+ Thus, the fiber of $f$ over $b$ is inhabited; it remains to show it is a mere proposition.
+ For this, suppose given $x,y:A$ with $p:f(x)=b$ and $q:f(y)=b$.
+ Then since $\apfunc f$ is an equivalence, there exists $r:x=y$ with $\apfunc f (r) = p \ct \opp q$.
+ However, using the characterization of paths in $\Sigma$-types, the latter equality rearranges to $\trans{r}{p} = q$.
+ Thus, together with $r$ it exhibits $(x,p) = (y,q)$ in the fiber of $f$ over $b$.
+\end{proof}
+
+\begin{cor}
+ For any $f:A\to B$ we have
+ \[ \isequiv(f) \eqvsym (\mathsf{isEmbedding}(f) \times \mathsf{isSurjective}(f)).\]
+\end{cor}
+\begin{proof}
+ Being a surjection and an embedding are both mere propositions; now apply \cref{lem:equiv-iff-hprop}.
+\end{proof}
+
+Of course, this cannot be used as a definition of ``equivalence'', since the definition of embeddings refers to equivalences.
+However, this characterization can still be useful; see \cref{sec:whitehead}.
+We will generalize it in \cref{cha:hlevels}.
+
+
+% \section{Fiberwise equivalences}
+\section{Closure properties of equivalences}
+\label{sec:equiv-closures}
+\label{sec:fiberwise-equivalences}
+\index{equivalence!properties of}%
+
+
+% We end this chapter by observing some important closure properties of equivalences.
+We have already seen in \cref{thm:equiv-eqrel} that equivalences are closed under composition.
+Furthermore, we have:
+
+\begin{thm}[The 2-out-of-3 property]\label{thm:two-out-of-three}
+ \index{2-out-of-3 property}%
+ Suppose $f:A\to B$ and $g:B\to C$.
+ If any two of $f$, $g$, and $g\circ f$ are equivalences, so is the third.
+\end{thm}
+\begin{proof}
+ If $g\circ f$ and $g$ are equivalences, then $\opp{(g\circ f)} \circ g$ is a quasi-inverse to $f$.
+ On the one hand, we have $\opp{(g\circ f)} \circ g \circ f \htpy \idfunc[A]$, while on the other we have
+ \begin{align*}
+ f \circ \opp{(g\circ f)} \circ g
+ &\htpy \opp g \circ g \circ f \circ \opp{(g\circ f)} \circ g\\
+ &\htpy \opp g \circ g\\
+ &\htpy \idfunc[B].
+ \end{align*}
+ Similarly, if $g\circ f$ and $f$ are equivalences, then $f\circ \opp{(g\circ f)}$ is a quasi-inverse to $g$.
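+ (For completeness, the nontrivial half of this second verification runs just as before: the other half is immediate, since $g\circ f\circ \opp{(g\circ f)} \htpy \idfunc[C]$, while inserting a quasi-inverse $\opp f$ of $f$ on the right gives
+ \begin{align*}
+ f\circ \opp{(g\circ f)} \circ g
+ &\htpy f\circ \opp{(g\circ f)} \circ g \circ f \circ \opp f\\
+ &\htpy f \circ \opp f\\
+ &\htpy \idfunc[B].\rlap{)}
+ \end{align*}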
+\end{proof}
+
+This is a standard closure condition on equivalences from homotopy theory.
+Also well-known is that they are closed under retracts, in the following sense.
+
+\index{retract!of a function|(defstyle}%
+
+\begin{defn}\label{defn:retract}
+A function $g:A\to B$ is said to be a \define{retract}
+of a function $f:X\to Y$ if there is a diagram
+\begin{equation*}
+ \xymatrix{
+ {A} \ar[r]^{s} \ar[d]_{g}
+ &
+ {X} \ar[r]^{r} \ar[d]_{f}
+ &
+ {A} \ar[d]^{g}
+ \\
+ {B} \ar[r]_{s'}
+ &
+ {Y} \ar[r]_{r'}
+ &
+ {B}
+ }
+\end{equation*}
+for which there are
+\begin{enumerate}
+\item a homotopy $R:r\circ s \htpy \idfunc[A]$.
+\item a homotopy $R':r'\circ s' \htpy\idfunc[B]$.
+\item a homotopy $L:f\circ s\htpy s'\circ g$.
+\item a homotopy $K:g\circ r\htpy r'\circ f$.
+\item for every $a:A$, a path $H(a)$ witnessing the commutativity of the square
+\begin{equation*}
+ \xymatrix@C=3pc{
+ {g(r(s(a)))} \ar@{=}[r]^-{K(s(a))} \ar@{=}[d]_{\ap g{R(a)}}
+ &
+ {r'(f(s(a)))} \ar@{=}[d]^{\ap{r'}{L(a)}}
+ \\
+ {g(a)} \ar@{=}[r]_-{\opp{R'(g(a))}}
+ &
+ {r'(s'(g(a)))}
+ }
+\end{equation*}
+\end{enumerate}
+\end{defn}
+
+Recall that in \cref{sec:contractibility} we defined what it means for a type to be a retract of another.
+This is a special case of the above definition where $B$ and $Y$ are $\unit$.
+Conversely, just as with contractibility, retractions of maps induce retractions of their fibers.
+
+\begin{lem}\label{lem:func_retract_to_fiber_retract}
+If a function $g:A\to B$ is a retract of a function $f:X\to Y$, then $\hfib{g}b$ is a retract of $\hfib{f}{s'(b)}$
+for every $b:B$, where $s':B\to Y$ is as in \cref{defn:retract}.
+\end{lem}
+
+\begin{proof}
+Suppose that $g:A\to B$ is a retract of $f:X\to Y$. Then for any $b:B$ we have the functions
+\begin{align*}
+\varphi_b &:\hfiber{g}b\to\hfib{f}{s'(b)}, &
+\varphi_b(a,p) & \defeq \pairr{s(a),L(a)\ct s'(p)},\\
+\psi_b &:\hfib{f}{s'(b)}\to\hfib{g}b, &
+\psi_b(x,q) &\defeq \pairr{r(x),K(x)\ct r'(q)\ct R'(b)}.
+\end{align*}
+Then we have $\psi_b(\varphi_b({a,p}))\equiv\pairr{r(s(a)),K(s(a))\ct r'(L(a)\ct s'(p))\ct R'(b)}$.
+We claim $\psi_b$ is a retraction with section $\varphi_b$ for all $b:B$, which is to say that for all $(a,p):\hfib g b$ we have $\psi_b(\varphi_b({a,p}))= \pairr{a,p}$.
+In other words, we want to show
+\begin{equation*}
+\prd{b:B}{a:A}{p:g(a)=b} \psi_b(\varphi_b({a,p}))= \pairr{a,p}.
+\end{equation*}
+By reordering the first two $\Pi$s and applying a version of \cref{thm:omit-contr}, this is equivalent to
+\begin{equation*}
+\prd{a:A}\psi_{g(a)}(\varphi_{g(a)}({a,\refl{g(a)}}))=\pairr{a,\refl{g(a)}}.
+\end{equation*}
+For any $a$, by \cref{thm:path-sigma}, this equality of pairs is equivalent to a pair of equalities. The first components are equal by $R(a):r(s(a))= a$, so we need only show
+\begin{equation*}
+\trans{R(a)}{K(s(a))\ct r'(L(a))\ct R'(g(a))} = \refl{g(a)}.
+\end{equation*}
+But this transport computes as $\opp{g(R(a))}\ct K(s(a))\ct r'(L(a))\ct R'(g(a))$, so the required path is given by $H(a)$.
+\end{proof}
+
+\begin{thm}\label{thm:retract-equiv}
+ If $g$ is a retract of an equivalence $f$, then $g$ is also an equivalence.
+\end{thm}
+\begin{proof}
+ By \cref{lem:func_retract_to_fiber_retract}, every fiber of $g$ is a retract of a fiber of $f$.
+ Thus, by \cref{thm:retract-contr}, if the latter are all contractible, so are the former.
+\end{proof}
+
+\index{retract!of a function|)}%
+
+\index{fibration}%
+\index{total!space}%
+Finally, we show that fiberwise equivalences can be characterized in terms of equivalences of total spaces.
+To explain the terminology, recall from \cref{sec:fibrations} that a type family $P:A\to\type$ can be viewed as a fibration over $A$ with total space $\sm{x:A} P(x)$, the fibration being the projection $\proj1:\sm{x:A} P(x) \to A$.
+From this point of view, given two type families $P,Q:A\to\type$, we may refer to a function $f:\prd{x:A} (P(x)\to Q(x))$ as a \define{fiberwise map} or a \define{fiberwise transformation}.
+\indexsee{transformation!fiberwise}{fiberwise transformation}%
+\indexsee{function!fiberwise}{fiberwise transformation}%
+\index{fiberwise!transformation|(defstyle}%
+\indexsee{fiberwise!map}{fiberwise transformation}%
+\indexsee{map!fiberwise}{fiberwise transformation}
+Such a map induces a function on total spaces:
+
+\begin{defn}\label{defn:total-map}
+ Given type families $P,Q:A\to\type$ and a map $f:\prd{x:A} P(x)\to Q(x)$, we define
+ \begin{equation*}
+ \total f \defeq \lam{w}\pairr{\proj{1}w,f(\proj{1}w,\proj{2}w)} : \sm{x:A}P(x)\to\sm{x:A}Q(x).
+ \end{equation*}
+\end{defn}
+
+\begin{thm}\label{fibwise-fiber-total-fiber-equiv}
+Suppose that $f$ is a fiberwise transformation between families $P$ and
+$Q$ over a type $A$ and let $x:A$ and $v:Q(x)$. Then we have an equivalence
+\begin{equation*}
+\eqv{\hfib{\total{f}}{\pairr{x,v}}}{\hfib{f(x)}{v}}.
+\end{equation*}
+\end{thm}
+\begin{proof}
+ We calculate:
+\begin{align}
+ \hfib{\total{f}}{\pairr{x,v}}
+ & \jdeq \sm{w:\sm{x:A}P(x)}\pairr{\proj{1}w,f(\proj{1}w,\proj{2}w)}=\pairr{x,v}
+ \notag \\
+ & \eqv{}{} \sm{a:A}{u:P(a)}\pairr{a,f(a,u)}=\pairr{x,v}
+ \tag{by~\cref{ex:sigma-assoc}} \\
+ & \eqv{}{} \sm{a:A}{u:P(a)}{p:a=x}\trans{p}{f(a,u)}=v
+ \tag{by \cref{thm:path-sigma}} \\
+ & \eqv{}{} \sm{a:A}{p:a=x}{u:P(a)}\trans{p}{f(a,u)}=v
+ \notag \\
+ & \eqv{}{} \sm{u:P(x)}f(x,u)=v
+ \tag{$*$}\label{eq:uses-sum-over-paths} \\
+ & \jdeq \hfib{f(x)}{v}. \notag
+\end{align}
+The equivalence~\eqref{eq:uses-sum-over-paths} follows from \cref{thm:omit-contr,thm:contr-paths,ex:sigma-assoc}.
+\end{proof}
+
+We say that a fiberwise transformation $f:\prd{x:A} P(x)\to Q(x)$ is a \define{fiberwise equivalence}%
+\indexdef{fiberwise!equivalence}%
+\indexdef{equivalence!fiberwise}
+if each $f(x):P(x) \to Q(x)$ is an equivalence.
+
+\begin{thm}\label{thm:total-fiber-equiv}
+Suppose that $f$ is a fiberwise transformation between families
+$P$ and $Q$ over a type $A$.
+Then $f$ is a fiberwise equivalence if and only if $\total{f}$ is an equivalence.
+\end{thm}
+
+\begin{proof}
+Let $f$, $P$, $Q$ and $A$ be as in the statement of the theorem.
+By \cref{fibwise-fiber-total-fiber-equiv} it follows for all
+$x:A$ and $v:Q(x)$ that
+$\hfib{\total{f}}{\pairr{x,v}}$ is contractible if and only if
+$\hfib{f(x)}{v}$ is contractible.
+Thus, $\hfib{\total{f}}{w}$ is contractible for all $w:\sm{x:A}Q(x)$ if and only if $\hfib{f(x)}{v}$ is contractible for all $x:A$ and $v:Q(x)$.
+\end{proof}
+
+\index{fiberwise!transformation|)}%
+
+
+\section{The object classifier}
+\label{sec:object-classification}
+
+In type theory we have a basic notion of \emph{family of types}, namely a function $B:A\to\type$.
+We have seen that such families behave somewhat like \emph{fibrations} in homotopy theory, with the fibration being the projection $\proj1:\sm{a:A} B(a) \to A$.
+A basic fact in homotopy theory is that every map is equivalent to a fibration.
+With univalence at our disposal, we can prove the same thing in type theory.
+
+\begin{lem}\label{thm:fiber-of-a-fibration}
+ For any type family $B:A\to\type$, the fiber of $\proj1:\sm{x:A} B(x) \to A$ over $a:A$ is equivalent to $B(a)$:
+ \[ \eqv{\hfib{\proj1}{a}}{B(a)} \]
+\end{lem}
+\begin{proof}
+ We have
+ \begin{align*}
+ \hfib{\proj1}{a} &\defeq \sm{u:\sm{x:A} B(x)} \proj1(u)=a\\
+ &\eqvsym \sm{x:A}{b:B(x)} (x=a)\\
+ &\eqvsym \sm{x:A}{p:x=a} B(x)\\
+ &\eqvsym B(a)
+ \end{align*}
+ using the left universal property of identity types.
+\end{proof}
+
+\begin{lem}\label{thm:total-space-of-the-fibers}
+ For any function $f:A\to B$, we have $\eqv{A}{\sm{b:B}\hfib{f}{b}}$.
+\end{lem}
+\begin{proof}
+ We have
+ \begin{align*}
+ \sm{b:B}\hfib{f}{b} &\defeq \sm{b:B}{a:A} (f(a)=b)\\
+ &\eqvsym \sm{a:A}{b:B} (f(a)=b)\\
+ &\eqvsym A
+ \end{align*}
+ using the fact that $\sm{b:B} (f(a)=b)$ is contractible.
+\end{proof}
+
+\begin{thm}\label{thm:nobject-classifier-appetizer}
+For any type $B$ there is an equivalence
+\begin{equation*}
+\chi:\Parens{\sm{A:\type} (A\to B)}\eqvsym (B\to\type).
+\end{equation*}
+\end{thm}
+\begin{proof}
+We have to construct quasi-inverses
+\begin{align*}
+\chi & : \Parens{\sm{A:\type} (A\to B)}\to B\to\type\\
+\psi & : (B\to\type)\to\Parens{\sm{A:\type} (A\to B)}.
+\end{align*}
+We define $\chi$ by $\chi((A,f),b)\defeq\hfiber{f}b$, and $\psi$ by $\psi(P)\defeq\Pairr{(\sm{b:B} P(b)),\proj1}$.
+Now we have to verify that $\chi\circ\psi\htpy\idfunc{}$ and that $\psi\circ\chi \htpy\idfunc{}$.
+\begin{enumerate}
+\item Let $P:B\to\type$.
+ By \cref{thm:fiber-of-a-fibration},
+$\hfiber{\proj1}{b}\eqvsym P(b)$ for any $b:B$, so it follows immediately
+that $P\htpy\chi(\psi(P))$.
+\item Let $f:A\to B$ be a function. We have to find a path
+\begin{equation*}
+\Pairr{\tsm{b:B} \hfiber{f}b,\,\proj1}=\pairr{A,f}.
+\end{equation*}
+First note that by \cref{thm:total-space-of-the-fibers}, we have
+$e:\sm{b:B} \hfiber{f}b\eqvsym A$ with $e(b,a,p)\defeq a$ and $e^{-1}(a)
+\defeq(f(a),a,\refl{f(a)})$.
+By \cref{thm:path-sigma}, it remains to show $\trans{(\ua(e))}{\proj1} = f$.
+But by the computation rule for univalence and~\eqref{eq:transport-arrow}, we have $\trans{(\ua(e))}{\proj1} = \proj1\circ e^{-1}$, and the definition of $e^{-1}$ immediately yields $\proj1 \circ e^{-1} \jdeq f$.\qedhere
+\end{enumerate}
+\end{proof}
+
+\noindent
+\indexdef{object!classifier}%
+\indexdef{classifier!object}%
+\index{.infinity1-topos@$(\infty,1)$-topos}%
+In particular, this implies that we have an \emph{object classifier} in the sense of higher topos theory.
+Recall from \cref{def:pointedtype} that $\pointed\type$ denotes the type $\sm{A:\type} A$ of pointed types.
+
+\begin{thm}\label{thm:object-classifier}
+Let $f:A\to B$ be a function. Then the diagram
+\begin{equation*}
+ \vcenter{\xymatrix{
+ A\ar[r]^-{\vartheta_f} \ar[d]_{f} &
+ \pointed{\type}\ar[d]^{\proj1}\\
+ B\ar[r]_{\chi_f} &
+ \type
+ }}
+\end{equation*}
+is a pullback\index{pullback} square (see \cref{ex:pullback}).
+Here the function $\vartheta_f$ is defined by
+\begin{equation*}
+ \lam{a} \pairr{\hfiber{f}{f(a)},\pairr{a,\refl{f(a)}}}.
+\end{equation*}
+\end{thm}
+\begin{proof}
+Note that we have the equivalences
+\begin{align*}
+A & \eqvsym \sm{b:B} \hfiber{f}b\\
+& \eqvsym \sm{b:B}{X:\type}{p:\hfiber{f}b= X} X\\
+& \eqvsym \sm{b:B}{X:\type}{x:X} \hfiber{f}b= X\\
+& \eqvsym \sm{b:B}{Y:\pointed{\type}} \hfiber{f}b = \proj1 Y\\
+& \jdeq B\times_{\type}\pointed{\type}
+\end{align*}
+which gives us a composite equivalence $e:A\eqvsym B\times_\type\pointed{\type}$.
+We may display the action of this composite equivalence step by step by
+\begin{align*}
+a & \mapsto \pairr{f(a),\; \pairr{a,\refl{f(a)}}}\\
+& \mapsto \pairr{f(a), \; \hfiber{f}{f(a)}, \; \refl{\hfiber{f}{f(a)}}, \; \pairr{a,\refl{f(a)}}}\\
+& \mapsto \pairr{f(a), \; \hfiber{f}{f(a)}, \; \pairr{a,\refl{f(a)}}, \; \refl{\hfiber{f}{f(a)}}}.
+\end{align*}
+Therefore, we get homotopies $f\htpy\proj1\circ e$ and $\vartheta_f\htpy \proj2\circ e$.
+\end{proof}
+
+
+
+\section{Univalence implies function extensionality}
+\label{sec:univalence-implies-funext}
+
+\index{function extensionality!proof from univalence}%
+In the last section of this chapter we include a proof that the univalence axiom implies function
+extensionality. Thus, in this section we work \emph{without} the function extensionality axiom.
+The proof consists of two steps. First we show
+in \cref{uatowfe} that the univalence
+axiom implies a weak form of function extensionality, defined in \cref{weakfunext} below. The
+principle of weak function extensionality in turn implies the usual function extensionality,
+and it does so without the univalence axiom (\cref{wfetofe}).
+
+\index{univalence axiom}%
+Let $\type$ be a universe; we will explicitly indicate where we assume that it is univalent.
+
+\begin{defn}\label{weakfunext}
+The \define{weak function extensionality principle}
+\indexdef{function extensionality!weak}%
+asserts that there is a function
+\begin{equation*}
+\Parens{\prd{x:A}\iscontr(P(x))} \to\iscontr\Parens{\prd{x:A}P(x)}
+\end{equation*}
+for any family $P:A\to\type$ of types over any type $A$.
+\end{defn}
+
+The following lemma is easy to prove using function extensionality; the point here is that it also follows from univalence without assuming function extensionality separately.
+
+\begin{lem} \label{UA-eqv-hom-eqv}
+Assuming $\type$ is univalent, for any $A,B,X:\type$ and any $e:\eqv{A}{B}$, there is an equivalence
+\begin{equation*}
+\eqv{(X\to A)}{(X\to B)}
+\end{equation*}
+of which the underlying map is given by post-composition with the underlying function of $e$.
+\end{lem}
+
+\begin{proof}
+ % Immediate by induction on $\eqv{}{}$ (see \cref{thm:equiv-induction}).
+ As in the proof of \cref{lem:qinv-autohtpy}, we may assume that $e = \idtoeqv(p)$ for some $p:A=B$.
+ Then by path induction, we may assume $p$ is $\refl{A}$, so that $e = \idfunc[A]$.
+ But in this case, post-composition with $e$ is the identity, hence an equivalence.
+\end{proof}
+
+\begin{cor}\label{contrfamtotalpostcompequiv}
+Let $P:A\to\type$ be a family of contractible types, i.e.\ \narrowequation{\prd{x:A}\iscontr(P(x)).}
+Then the projection $\proj{1}:(\sm{x:A}P(x))\to A$ is an equivalence. Assuming $\type$ is univalent, it follows immediately that post-composition with $\proj{1}$ gives an equivalence
+\begin{equation*}
+\alpha : \eqv{\Parens{A\to\sm{x:A}P(x)}}{(A\to A)}.
+\end{equation*}
+\end{cor}
+
+\begin{proof}
+ By \cref{thm:fiber-of-a-fibration}, for $\proj{1}:\sm{x:A}P(x)\to A$ and $x:A$ we have an equivalence
+ \begin{equation*}
+ \eqv{\hfiber{\proj{1}}{x}}{P(x)}.
+ \end{equation*}
+ Therefore $\proj{1}$ is an equivalence whenever each $P(x)$ is contractible. The assertion is now a consequence of \cref{UA-eqv-hom-eqv}.
+\end{proof}
+
+In particular, the homotopy fiber of the above equivalence at $\idfunc[A]$ is contractible. Therefore, we can show that univalence implies weak function extensionality by showing that the dependent function type $\prd{x:A}P(x)$ is a retract of $\hfiber{\alpha}{\idfunc[A]}$.
+
+\begin{thm}\label{uatowfe}
+In a univalent universe $\type$, suppose that $P:A\to\type$ is a family of contractible types
+and let $\alpha$ be the function of \cref{contrfamtotalpostcompequiv}.
+Then $\prd{x:A}P(x)$ is a retract of $\hfiber{\alpha}{\idfunc[A]}$. As a consequence, $\prd{x:A}P(x)$ is contractible. In other words, the univalence axiom implies the weak function extensionality principle.
+\end{thm}
+
+\begin{proof}
+Define the functions
+\begin{align*}
+ \varphi &: (\tprd{x:A}P(x))\to\hfiber{\alpha}{\idfunc[A]},\\
+ \varphi(f) &\defeq (\lam{x} (x,f(x)),\refl{\idfunc[A]}),
+\intertext{and}
+ \psi &: \hfiber{\alpha}{\idfunc[A]}\to \tprd{x:A}P(x), \\
+ \psi(g,p) &\defeq \lam{x} \trans {\happly (p,x)}{\proj{2} (g(x))}.
+\end{align*}
+Then $\psi(\varphi(f))=\lam{x} f(x)$, which is $f$, by the uniqueness principle for dependent function types.
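+Indeed, unfolding the definitions, this computation is judgmental: since $\happly(\refl{\idfunc[A]},x)\jdeq \refl{x}$, we have
+\begin{align*}
+\psi(\varphi(f)) &\jdeq \lam{x} \trans{\happly (\refl{\idfunc[A]},x)}{\proj{2} \pairr{x,f(x)}}\\
+&\jdeq \lam{x} \trans{\refl{x}}{f(x)}\\
+&\jdeq \lam{x} f(x).
+\end{align*}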
+\end{proof}
+
+We now show that weak function extensionality implies the usual function extensionality.
+Recall from~\eqref{eq:happly} the function $\happly (f,g) : (f = g)\to(f\htpy g)$ which
+converts equality of functions to homotopy. In the proof that follows, the univalence
+axiom is not used.
+
+\begin{thm}\label{wfetofe}
+ \index{function extensionality}%
+Weak function extensionality implies the function extensionality \cref{axiom:funext}.
+\end{thm}
+
+\begin{proof}
+We want to show that
+\begin{equation*}
+\prd{A:\type}{P:A\to\type}{f,g:\prd{x:A}P(x)}\isequiv(\happly (f,g)).
+\end{equation*}
+Since a fiberwise map induces an equivalence on total spaces if and only if it is fiberwise an equivalence by \cref{thm:total-fiber-equiv}, it suffices to show that the function of type
+\begin{equation*}
+\Parens{\sm{g:\prd{x:A}P(x)}(f= g)} \to \sm{g:\prd{x:A}P(x)}(f\htpy g)
+\end{equation*}
+induced by $\lam{g:\prd{x:A}P(x)} \happly (f,g)$ is an equivalence.
+Since the type on the left is contractible by \cref{thm:contr-paths}, it suffices to show that the type on the right:
+\begin{equation}\label{eq:uatofesp}
+\sm{g:\prd{x:A}P(x)}\prd{x:A}f(x)= g(x)
+\end{equation}
+is contractible.
+Now \cref{thm:ttac} says that this is equivalent to
+\begin{equation}\label{eq:uatofeps}
+\prd{x:A}\sm{u:P(x)}f(x)= u.
+\end{equation}
+The proof of \cref{thm:ttac} uses function extensionality, but only for one of the composites.
+Thus, without assuming function extensionality, we can conclude that~\eqref{eq:uatofesp} is a retract\index{retract!of a type} of~\eqref{eq:uatofeps}.
+And~\eqref{eq:uatofeps} is a product of contractible types, which is contractible by the weak function extensionality principle; hence~\eqref{eq:uatofesp} is also contractible.
+\end{proof}
+
+\sectionNotes
+
+The fact that the space of continuous maps equipped with quasi-inverses has the wrong homotopy type to be the ``space of homotopy equivalences'' is well-known in algebraic topology.
+In that context, the ``space of homotopy equivalences'' $(\eqv AB)$ is usually defined simply as the subspace of the function space $(A\to B)$ consisting of the functions that are homotopy equivalences.
+In type theory, this would correspond most closely to $\sm{f:A\to B} \brck{\qinv(f)}$; see \cref{ex:brck-qinv}.
+
+The first definition of equivalence given in homotopy type theory was the one that we have called $\iscontr(f)$, which was due to Voevodsky.
+The possibility of the other definitions was subsequently observed by various people.
+The basic theorems about adjoint equivalences\index{adjoint!equivalence} such as \cref{lem:coh-equiv,thm:equiv-iso-adj} are adaptations of standard facts in higher category theory and homotopy theory.
+Using bi-invertibility as a definition of equivalences was suggested by Andr\'e Joyal.
+
+The properties of equivalences discussed in \cref{sec:mono-surj,sec:equiv-closures} are well-known in homotopy theory.
+Most of them were first proven in type theory by Voevodsky.
+
+The fact that every function is equivalent to a fibration is a standard fact in homotopy theory.
+The notion of object classifier
+\index{object!classifier}%
+\index{classifier!object}%
+in $(\infty,1)$-category
+\index{.infinity1-category@$(\infty,1)$-category}%
+theory (the categorical analogue of \cref{thm:nobject-classifier-appetizer}) is due to Rezk (see~\cite{Rezk05,lurie:higher-topoi}).
+
+Finally, the fact that univalence implies function extensionality (\cref{sec:univalence-implies-funext}) is due to Voevodsky.
+Our proof is a simplification of his.
+\cref{ex:funext-from-nondep} is also due to Voevodsky.
+
+\sectionExercises
+
+\begin{ex}\label{ex:two-sided-adjoint-equivalences}
+ Consider the type of ``two-sided adjoint equivalence\index{adjoint!equivalence} data'' for $f:A\to B$,
+ \begin{narrowmultline*}
+ \sm{g:B\to A}{\eta: g \circ f \htpy \idfunc[A]}{\epsilon:f \circ g \htpy \idfunc[B]}
+ \narrowbreak
+ \Parens{\prd{x:A} \map{f}{\eta x} = \epsilon(fx)} \times
+ \Parens{\prd{y:B} \map{g}{\epsilon y} = \eta(gy) }.
+ \end{narrowmultline*}
+ By \cref{lem:coh-equiv}, we know that if $f$ is an equivalence, then this type is inhabited.
+ Give a characterization of this type analogous to \cref{lem:qinv-autohtpy}.
+
+ Can you give an example showing that this type is not generally a mere proposition?
+ (This will be easier after \cref{cha:hits}.)
+\end{ex}
+
+\begin{ex}\label{ex:symmetric-equiv}
+ Show that for any $A,B:\UU$, the following type is equivalent to $\eqv A B$.
+ \begin{equation*}
+ \sm{R:A\to B\to \type}
+ \Parens{\prd{a:A} \iscontr\Parens{\sm{b:B} R(a,b)}} \times
+ \Parens{\prd{b:B} \iscontr\Parens{\sm{a:A} R(a,b)}}.
+ \end{equation*}
+ Can you extract from this a definition of a type satisfying the three desiderata of $\isequiv(f)$?
+\end{ex}
+
+\begin{ex} \label{ex:qinv-autohtpy-no-univalence}
+ Reformulate the proof of \cref{lem:qinv-autohtpy} without using univalence.
+\end{ex}
+
+\begin{ex}[The unstable octahedral axiom]\label{ex:unstable-octahedron}
+ \index{axiom!unstable octahedral}%
+ \index{octahedral axiom, unstable}%
+ Suppose $f:A\to B$ and $g:B\to C$ and $b:B$.
+ \begin{enumerate}
+ \item Show that there is a natural map $\hfib{g\circ f}{g(b)} \to \hfib{g}{g(b)}$ whose fiber over $(b,\refl{g(b)})$ is equivalent to $\hfib f b$.
+ \item Show that $\eqv{\hfib{g\circ f}{c}}{\sm{w:\hfib{g}{c}} \hfib f {\proj1 w}}$.
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:2-out-of-6}
+ \index{2-out-of-6 property}%
+ Prove that equivalences satisfy the \emph{2-out-of-6 property}: given $f:A\to B$ and $g:B\to C$ and $h:C\to D$, if $g\circ f$ and $h\circ g$ are equivalences, so are $f$, $g$, $h$, and $h\circ g\circ f$.
+ Use this to give a higher-level proof of \cref{thm:paths-respects-equiv}.
+\end{ex}
+
+\begin{ex}\label{ex:qinv-univalence}
+ For $A,B:\UU$, define
+ \[ \mathsf{idtoqinv}_{A,B} :(A=B) \to \sm{f:A\to B}\qinv(f) \]
+ by path induction in the obvious way.
+ Let \textbf{\textsf{qinv}-univalence} denote the modified form of the univalence axiom which asserts that for all $A,B:\UU$ the function $\mathsf{idtoqinv}_{A,B}$ has a quasi-inverse.
+ \begin{enumerate}
+ \item Show that \qinv-univalence can be used instead of univalence in the proof of function extensionality in \cref{sec:univalence-implies-funext}.
+ \item Show that \qinv-univalence can be used instead of univalence in the proof of \cref{thm:qinv-notprop}.
+ \item Show that \qinv-univalence is inconsistent (i.e.\ allows construction of an inhabitant of $\emptyt$).
+ Thus, the use of a ``good'' version of $\isequiv$ is essential in the statement of univalence.
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:embedding-cancellable}
+ Show that a function $f:A\to B$ is an embedding if and only if the following two conditions hold:
+ \begin{enumerate}
+ \item $f$ is \emph{left cancellable}, i.e.\ for any $x,y:A$, if $f(x)=f(y)$ then $x=y$.\label{item:ex:ec1}
+ \item For any $x:A$, the map $\apfunc f: \Omega(A,x) \to \Omega(B,f(x))$ is an equivalence.\label{item:ex:ec2}
+ \end{enumerate}
+ (In particular, if $A$ is a set, then $f$ is an embedding if and only if it is left-cancellable and $\Omega(B,f(x))$ is contractible for all $x:A$.)
+ Give examples to show that neither of~\ref{item:ex:ec1} or~\ref{item:ex:ec2} implies the other.
+\end{ex}
+
+\begin{ex}\label{ex:cancellable-from-bool}
+ Show that the type of left-cancellable functions $\bool\to B$ (see \cref{ex:embedding-cancellable}) is equivalent to $\sm{x,y:B}(x\neq y)$.
+ Give a similar explicit characterization of the type of embeddings $\bool\to B$.
+\end{ex}
+
+\begin{ex}\label{ex:funext-from-nondep}
+ The \textbf{na\"{i}ve non-dependent function extensionality axiom} says that for $A,B:\type$ and $f,g:A\to B$ there is a function $(\prd{x:A} f(x)=g(x)) \to (f=g)$.
+ \indexdef{function extensionality!non-dependent}%
+ Modify the argument of \cref{sec:univalence-implies-funext} to show that this axiom implies the full function extensionality axiom (\cref{axiom:funext}).
+\end{ex}
+
+% Local Variables:
+% TeX-master: "hott-online"
+% End:
diff --git a/books/hott/errata.tex b/books/hott/errata.tex
new file mode 100644
index 0000000000000000000000000000000000000000..fded14f1516c7e11665bbf638203a18f690a9b06
--- /dev/null
+++ b/books/hott/errata.tex
@@ -0,0 +1,938 @@
+% This is the errata document for the homotopy type theory book.
+
+% This file supports two book sizes:
+% - Letter size (8.5" x 11")
+% - US Trade size (6" x 9")
+%
+% To activate one or the other, uncomment the appropriate font size in
+% the documentclass below, and then one of the two page geometry incantations
+%
+% NOTE: The 6" x 9" format is only experimental. It will break the
+% title page, for example.
+
+\PassOptionsToPackage{table}{xcolor}
+
+% DOCUMENT CLASS
+\documentclass[
+%
+%10pt % for US Trade 6" x 9" book
+%
+11pt % for Letter size book
+]{article}
+\usepackage{etex} % We're running out of registers and dimensions, or some such
+
+\newcounter{chapter} % So that macros.tex doesn't choke
+
+% PAGE GEOMETRY
+%
+% Uncomment one of these
+
+% We make the page 40pt taller than the standard LaTeX book.
+
+% OPTION 1: Letter
+\usepackage[papersize={8.5in,11in},
+ twoside,
+ includehead,
+ top=1in,
+ bottom=1in,
+ inner=0.75in,
+ outer=1.0in,
+ bindingoffset=0.35in]{geometry}
+
+% OPTION 2: US Trade
+% \usepackage[papersize={6in,9in},
+% twoside,
+% includehead,
+% top=0.75in,
+% bottom=0.75in,
+% inner=0.5in,
+% outer=0.75in,
+% bindingoffset=0.35in]{geometry}
+
+% HYPERLINKING AND PDF METADATA
+
+\usepackage[pagebackref,
+ colorlinks,
+ citecolor=darkgreen,
+ linkcolor=darkgreen,
+ unicode,
+ pdfauthor={Univalent Foundations Program},
+ pdftitle={Homotopy Type Theory: Univalent Foundations of Mathematics},
+ pdfsubject={Mathematics},
+ pdfkeywords={type theory, homotopy theory, univalence axiom}]{hyperref}
+
+% OTHER PACKAGES
+
+% Use this package and stick \layout somewhere in the text to see
+% page margins, text size and width etc. Useful for debugging page format.
+%\usepackage{layout}
+
+%%% Because Germans have umlauts and Slavs have even stranger ways of mangling letters
+\usepackage[utf8]{inputenc}
+
+%%% For table {tab:theorems}
+\usepackage{pifont}
+
+%%% Multi-Columns for long lists of names
+\usepackage{multicol}
+
+%%% Set the fonts
+\usepackage{mathpazo}
+\usepackage[scaled=0.95]{helvet}
+\usepackage{courier}
+\linespread{1.05} % Palatino looks better with this
+
+\usepackage{graphicx}
+\DeclareGraphicsExtensions{.png}
+\input{bmpsize-hack} % for bounding boxes in dvi mode
+\usepackage{comment}
+
+\usepackage{wallpaper} % For the background image on the cover page
+
+\usepackage{fancyhdr} % To set headers and footers
+
+\usepackage{nextpage} % So we can jump to odd-numbered pages
+
+\usepackage{amssymb,amsmath,amsthm,stmaryrd,mathrsfs,wasysym}
+\usepackage{enumitem,mathtools,xspace}
+\usepackage{xcolor} % For colored cells in tables we need \cellcolor
+\usepackage{booktabs} % For nice tables
+\usepackage{array} % For nice tables
+\usepackage{supertabular} % For index of symbols
+\definecolor{darkgreen}{rgb}{0,0.45,0}
+\usepackage{aliascnt}
+\usepackage[capitalize]{cleveref}
+\usepackage[all,2cell]{xy}
+\UseAllTwocells
+\usepackage{braket} % used for \setof{ ... } macro
+\usepackage{tikz}
+\usetikzlibrary{decorations.pathmorphing}
+
+\usepackage{etoolbox} % hacking commands for TOC
+
+\usepackage{mathpartir} % for formal.tex appendix, section 3
+
+\usepackage[numbered]{bookmark} % add chapter/section numbers to the toc in the pdf metadata
+
+\input{macros}
+
+%%%% Indexing
+\usepackage{makeidx}
+\makeindex
+
+%%%% Header and footers
+\pagestyle{fancyplain}
+\setlength{\headheight}{15pt}
+\renewcommand{\sectionmark}[1]{\markright{\textsc{\thesection\ #1}}}
+
+\lhead[\fancyplain{}{{\thepage}}]%
+ {\fancyplain{}{\nouppercase{\rightmark}}}
+\rhead[\fancyplain{}{\nouppercase{\leftmark}}]%
+ {\fancyplain{}{\thepage}}
+\cfoot[]{}
+\lfoot[]{}
+\rfoot[]{}
+
+%%%% Chapter & part style
+\usepackage{titlesec}
+\titleformat{\part}[display]{\fontsize{40}{40}\fontseries{m}\fontshape{sc}\selectfont}{\hfil\partname\ \Roman{part}}{20pt}{\fontsize{60}{60}\fontseries{b}\fontshape{sc}\selectfont\hfil}
+\titleformat{\chapter}[display]{\fontsize{23}{25}\fontseries{m}\fontshape{it}\selectfont}{\chaptertitlename\ \thechapter}{20pt}{\fontsize{35}{35}\fontseries{b}\fontshape{n}\selectfont}
+
+\input{main.labels}
+\input{version.tex}
+
+\usepackage{longtable}
+
+\title{Errata for the HoTT Book, first edition%
+%% VERSION MARKER
+}
+
+\begin{document}
+\maketitle
+
+For the benefit of all readers, the available PDF and printed copies of the book are being updated on a rolling basis with minor corrections and clarifications as we receive them. Every copy has a version marker that can be found on the title page and is of the form ``first-edition-XX-gYYYYYYY'', where XX is a natural number and YYYYYYY is the git commit hash that uniquely identifies the exact version. Higher values of XX indicate more recent copies.
+
+Below is a list of corrections and clarifications that have been made
+%% BEGIN STARTPOINT
+so far
+%% END STARTPOINT
+(except for trivial formatting and spacing changes), along with the version marker in which they were first made.
+This list is current as of \today\ and version marker ``\OPTversion''.
+
+While the page numbering may differ between copies with different version markers (and indeed, already differs between the letter/A4 and printed/ebook copies with the same version marker), we promise that the numbering of chapters, sections, theorems, and equations will remain constant, and no new mathematical content will be added, unless and until there is a second edition.
+
+\noindent
+\begin{longtable}{llp{10.5cm}}
+ \textbf{Location} & \textbf{Fixed in} & \textbf{Change} \\ \hline \endhead
+%% BEGIN ERRATA
+ %
+ % Chapter 1
+ %
+ \cref{sec:types-vs-sets}
+ & 182-gb29ea2f
+ & Change notation $a\jdeq_A b$ to $a\jdeq b : A$, to match that used in \cref{cha:rules}.
+  (Neither notation is used anywhere else in the book.)\\
+ %
+ \cref{sec:types-vs-sets}
+ & 154-g42698c2
+ & Clarify that algorithmic decidability of judgmental equality is only meta-theoretic.\\
+ %
+ \cref{sec:types-vs-sets}
+ & 154-gac9b226
+ & Mention notation $a=b=c=d$ to mean ``$a=b$ and $b=c$ and $c=d$, hence $a=d$'', possibly including judgmental equalities.\\
+ %
+ \cref{sec:universes}
+ & 42-g4bc5cc2
+ & Cumulativity means some elements do not have unique types, the index $i$ on $\UU_i$ is not an internal natural number, and typical ambiguity must be justified by reinserting indices.\\
+ %
+ \cref{sec:universes,sec:pi-types}
+ & 42-ga34b313
+ & Explain that we can't define $\Fin$ and $\fmax$ yet where we first mention them.\\
+ %
+ \cref{sec:pi-types}
+ & 165-g0ad2aba
+ & Add $\mathsf{swap}$ as another example of a polymorphic function, and discuss the use of subscripts and implicit arguments to dependent functions.\\
+ %
+ \cref{rmk:introducing-new-concepts}
+ & 80-g8f95fa5
+ & In the discussion of formation rules, the dependent function type example should be $\prd{x:A} B(x)$.\\
+ %
+ \cref{sec:finite-product-types}
+ & 51-g67e86db
+ & Better explanation of recursion on product types, why it is justified, and how it relates to the uniqueness principle.\\
+ %
+ \cref{sec:sigma-types}
+ & 2-gbe277a8
+ & In the types of $g$ and $\ind{\sm{x:A}B(x)}$, there is a $\prd{a:A}{b:B(x)}$ in which $x$ should be $a$.\\
+ %
+ \cref{sec:sigma-types}
+ & 27-gd0bfa0d
+ & At two places in the definition of $\ac$, $R(a,\fst(g(x)))$ should be $R(x,\fst(g(x)))$.\\
+ %
+ \cref{sec:sigma-types}
+ & 125-g7fdadbf
+ & When substituting $\lam{x} \fst(g(x))$ for $f$ while verifying that $\ac$ is well-typed, the left side of the judgmental equality should be $\tprd{x:A} R(x,\fst(g(x)))$, not $\tprd{x:A} R(x,\fst(f(x)))$.\\
+ %
+ \cref{sec:coproduct-types}
+ & 30-g264d934
+ & In two displayed equations, $f(\inl(b))$ should be $f(\inr(b))$.\\
+ %
+ Theorem \ref{thm:allbool-trueorfalse} % NB: We have to write out "Theorem" instead of using \cref here, since in the (post-erratum) version this label no longer denotes a Theorem.
+ & 391-g1ce619a
+ & This should not be called a ``Theorem'', since we have not yet introduced what that means.
+ Instead it should say ``We construct an element of\dots''.\\
+ %
+ \cref{sec:type-booleans}
+ & 125-g433f87e
+ & In the definition of binary products in terms of $\bool$, the definitions of $\fst(p)$ and $\snd(p)$ should be switched to match the order of arguments to $\rec\bool$ and $\ind\bool$.\\
+  %
+  \cref{sec:pat}
+ & 111-g1e868fa
+ & When translating English to type theory, ``unnamed variables'' are unnamed in English but must be named in type theory.\\
+ %
+ \cref{sec:identity-types}
+ & 154-g4ef49f7
+ & Emphasize that path induction, like all other induction principles, defines a \emph{specified} function.\\
+ %
+ \cref{sec:identity-types}
+ & 244-gd58529d
+ & In proof that path induction implies based path induction, $D(x,y,p)$ should be written $\prd{C : \prd{z:A} (\id[A]{x}{z}) \to \UU} \left( \cdots \right)$ so the type of $C$ matches the premise of based path induction. \\
+ %
+ \cref{rmk:the-only-path-is-refl}
+ & 563-g3286941
+ & The facts that any $(x,y,p): \sm{x,y:A}(\id{x}{y})$ is equal to $(x,x,\refl{x})$, and that any $(y,p):\sm{y:A}(\id[A]{a}{y})$ is equal to $(a,\refl{a})$, can be proven by path induction and based path induction respectively.\\
+ %
+ \cref{ex:iterator}
+ & 78-gcce4dc0
+ & The second defining equation of $\ite$ should have right-hand side $c_s(\ite(C,c_0,c_s,n))$.\\
+ %
+ \cref{ex:iterator}
+ & 293-g4663bfe
+ & The defining equations of the recursor derived from the iterator only hold propositionally, and require the induction principle to prove.\\
+ %
+ \cref{ex:prod-via-bool}
+ & 229-ged891f3
+ & This exercise requires function extensionality (\cref{sec:compute-pi}).\\
+ %
+ \cref{ex:nat-semiring}
+ & 450-g7f38c9a
+ & This exercise requires symmetry and transitivity of equality, \cref{lem:opp,lem:concat}.\\
+ %
+ \cref{ex:ackermann}
+ & 110-gfe4641b
+ & To match the usual Ackermann--P\'eter function, the second displayed equation should be $\ack(\suc(m),0) \jdeq \ack(m,1)$.\\
+ %
+ % Chapter 2
+ %
+ \cref{cha:basics}
+ & 239-gaf3d682
+ & In the chapter introduction, clarify that topological homotopies between paths must be endpoint-preserving.\\
+ %
+ \cref{lem:opp}
+ & 166-g37b78ef
+ & Add remarks before and after the proof about how a theorem's statement and proof should be interpreted as exhibiting an element of some type.\\
+ %
+ \cref{lem:concat}
+ & 374-g0bc0908
+ & In the penultimate display in the first proof, $d(x,z,q)$ should be simply $d$.\\
+ %
+ \cref{thm:omg}
+ & 750-g91b7348
+ & In the first proofs of~\ref{item:omg1}--\ref{item:omg3}, $\indid{A}(D,d,p)$ should be $\indid{A}(D,d,x,y,p)$.\\
+ %
+ \cref{sec:equality}
+ & 435-gee0b28a
+ & In the third paragraph after \cref{lem:concat}, $p\ct\refl{x}\jdeq p$ should be $p\ct\refl{y}\jdeq p$.\\
+ %
+ \cref{sec:equality}
+ & 165-g18642ca
+ & Mention that the notation $a=b=c=d$, and its displayed variant, indicate concatenation of paths.\\
+ %
+ \cref{sec:equality}
+ & 253-gdd47c75
+ & \cref{thm:omg}\ref{item:omg4} justifies writing $p\ct q \ct r$ and so on.\\
+ %
+ \cref{thm:EckmannHilton}
+ & 253-gdd47c75
+ & The induction defining $\alpha\rightwhisker r$ has defining equation $\alpha \rightwhisker \refl{b} \jdeq \opp{\mathsf{ru}_p} \ct \alpha \ct \mathsf{ru}_q$, with $\mathsf{ru}_p$ the right unit law.
+ For $\alpha\hct\beta = \alpha\ct\beta$ to be well-typed, we assume $p\jdeq q \jdeq r \jdeq s\jdeq \refl{a}$ and use $\mathsf{ru}_{\refl{a}} = \refl{\refl{a}}$ and its dual.
+ Proving $\alpha\hct\beta = \alpha\hct'\beta$ requires induction not only on $\alpha$ and $\beta$ but then on the two remaining 1-paths.
+ After the proof, remark that we trust the reader to construct such operations from now on.\\
+ %
+ \cref{def:loopspace}
+ & 233-gc3fb777
+ & The three displays should be $\defeq$'s rather than $=$'s.\\
+ %
+ \cref{sec:functors}
+ & 336-g8ff8a7f
+ & In the type of $\apfunc{f}$ towards the end of the first proof of \cref{lem:map}, $g(x)$ should be $f(y)$.\\
+ %
+ \cref{sec:fibrations}
+ & 154-g4ef49f7
+ & Emphasize that unlike fibrations in classical homotopy theory, type families come with a \emph{specified} path-lifting function.\\
+ %
+ \cref{sec:fibrations}
+ & 343-g6efd724
+ & The functions \cref{eq:ap-to-apd} and \cref{eq:apd-to-ap} are obtained by concatenating with $\transconst Bp{f(x)}$ and its inverse, respectively.\\
+ %
+ \cref{cor:hom-fg}
+ & 253-gdd47c75
+ & Canceling $H(x)$ may be done by whiskering with $\opp{(H(x))}$.\\
+ %
+ \cref{sec:basics-equivalences}
+ & 1171-gab3c0aa
+ & In the proof that $\isequiv(f) \to \qinv (f)$, the definition of $\gamma$ should be $\gamma(x) \defeq \opp{\beta(g(x))} \ct \ap{h}{\alpha(x)}$.\\
+ %
+ \cref{sec:compute-cartprod}
+ & 74-g9896e32
+ & In the type of $\pairpath$ (just after the proof of \cref{thm:path-prod}), the second factor in the domain should be $\id{\proj{2}(x)}{\proj{2}(y)}$.\\
+ %
+ \cref{sec:compute-cartprod}
+ & 895-g96db894
+ & In the displayed equation just before \cref{thm:trans-prod}, $\pairct(p\ct q, r, p'\ct q', r)$ should be $\pairct(p\ct q, r, p'\ct q', r')$ and $\pairct(p, q\ct r, p', q'\ct r)$ should be $\pairct(p, q\ct r, p', q'\ct r')$ (two primes on $r$s are missing).\\
+ %
+ \cref{thm:trans-prod}
+ & 349-gc7fd9d8
+ & The path is in $A(w)\times B(w)$, not $A(y)\times B(y)$.\\
+ %
+ \cref{thm:trans-prod}
+ & 76-ga42354c
+ & The third displayed judgmental equality in the proof should be $\transfib{B}{p}{\proj{2}x} \jdeq \proj2x$.\\
+ %
+ \cref{thm:path-sigma}
+ & 507-g8f10eda
+ & In the proof, the equation $f(g(\refl{},\refl{}))=\refl{}$ should be $f (g(\refl{w_1},\refl{w_2})) = (\refl{w_1},\refl{w_2})$.\\
+ %
+ \cref{sec:compute-pi}
+ & 269-g3880fe2
+ & The paragraph preceding the definition of $\transfib{\Pi_A(B)}{p}{f}$ (before \cref{eq:transport-arrow-families}) misstated the (already given) type of $p$.\\
+ %
+ \cref{axiom:univalence}
+ & 992-gc4a5314
+  & The axiom should read ``For any $A,B:\type$, the function~\eqref{eq:uidtoeqv} is an equivalence.'' The display $\eqv{(\id[\type]{A}{B})}{(\eqv A B)}$ should be deduced afterwards, outside the axiom statement.\\
+ %
+ \cref{thm:paths-respects-equiv}
+ & 310-gd5fa240
+ & The second half of the proof is more involved than the first.
+ It follows abstractly using the 2-out-of-6 property (\cref{ex:2-out-of-6}), or more concretely by concatenating with $\opp{\alpha_{f(a)}} \ct {\alpha_{f(a)}}$ on each side and then repeatedly using naturality and functoriality.\\
+ %
+ \cref{sec:compute-paths}
+ & 236-g32be999
+ & The second display after the proof of \cref{thm:paths-respects-equiv} should be $\prd{x:A} (\id[f(x)=g(x)] {\happly(p)(x)}{\happly(q)(x)})$.\\
+ %
+ \cref{thm:transport-path}
+ & 628-g1bd8602
+ & The sentence preceding the theorem suggests that it follows from \cref{cor:transport-path-prepost,thm:transport-compose}, but actually it requires a separate path induction.\\
+ %
+ \cref{thm:transport-path}
+ & 704-g70c069e
+ & The sentence after the theorem should say that $\apfunc{(x \mapsto c)}$ is $p \mapsto\refl{c}$, not $\refl{c}$.\\
+ %
+ \cref{thm:transport-path2}
+ & 364-g3c47534
+ & The right-hand side of the displayed equality should be $\opp{(\apdfunc{f}(p))} \ct \apfunc{(\transfibf{B}{p})}(q) \ct \apdfunc{g}(p)$.\\
+ %
+ \cref{sec:compute-coprod}
+ & 101-g645f763
+ & In \cref{thm:path-coprod} and the preceding paragraph, in the equivalence $\eqv{(\inl(a)=x)}{\code(x)}$, the variable $a$ should be $a_0$. \\
+ %
+ \cref{sec:compute-coprod}
+ & 370-g114db82
+ & In the two displays after the proof of \cref{thm:path-coprod}, the terms should be $\encode(\inl(a), {\blank})$ and $\encode(\inr(b), {\blank})$.\\
+ %
+ \cref{sec:equality-semigroups}
+ & 261-g4ccda0a
+ & In the first displayed pair of equations, the type of $p_2$ should be $\transfib{\semigroupstrsym}{p_1}{(m,a)} = {(m',a')}$.\\
+ %
+ \cref{sec:equality-semigroups}
+ & 402-g2297ecb
+ & The right hand side of the last displayed equation should be $m'(e(x_1),e(x_2))$.\\
+ %
+ \cref{sec:universal-properties}
+ & 305-g64685f1
+ & In the discussion of universal properties for product types and $\Sigma$-types surrounding \cref{eq:sigma-lump}, the phrases ``left-to-right'' and ``right-to-left'' should be switched.\\
+ %
+ \cref{cha:basics} Notes
+ & 379-ga57eab2
+ & It should be mentioned that Hofmann and Streicher (1998) proposed an axiom similar to univalence, which is correct (and equivalent to univalence) for a universe of 1-types.\\
+ %
+ % Chapter 3
+ %
+ \cref{eq:english-ac}
+ & 1193-g54b20e3
+ & The domain of $g:\prd{x:A} A(x)$ should be $X$.\\
+ %
+ \cref{subsec:prop-subsets}
+ & 86-g39feab1
+ & The definition of subset containment should say $\prd{x:A}(P(x)\rightarrow Q(x))$, not $\fall{x:A}(P(x)\Rightarrow Q(x))$, as the latter notation has not been introduced yet.\\
+ %
+ \cref{thm:retract-contr}
+ & 95-gce0131f
+ & In the proof, $p$ should be $r$ to match the preceding definition of retraction.\\
+ %
+ \cref{ex:lem-brck}
+ & 1162-ga97cb70
+ & Should be to show that $\neg\neg A$ satisfies the recursion principle of $\brck{A}$ but with only a propositional computation rule.\\
+ %
+ % Chapter 4
+ %
+ \cref{lem:qinv-autohtpy}
+ & 87-g693e9b9
+ & At the end of the proof, \cref{thm:contr-paths} should be cited as the reason why $\sm{g:A\to A} (g = \idfunc[A])$ is contractible.\\
+ %
+ \cref{thm:equiv-iso-adj}
+ & 275-g8ea9f71
+ & In the proof, the path concatenations in the definitions of $\epsilon'$ and $\tau$ were written in reverse order.\\
+ %
+ \cref{thm:equiv-iso-adj}
+ & 1043-gcfce4d7
+ & In the proof, the type of $\tau(a)$ should be $\ap{f}{\eta(a)}=\opp{\epsilon(f(g(f(a))))}\ct (\ap{f}{\eta(g(f(a)))}\ct \epsilon(f(a)))$, instead of $\opp{\epsilon(f(g(f(a))))}\ct (\ap{f}{\eta(g(f(a)))}\ct \epsilon(f(a)))=\ap{f}{\eta(a)}$.\\
+ %
+ \cref{lem:coh-hprop}
+ & 296-ge3dc076
+ & In the proof, $\id[\hfib{f}{fx}]{(fgx,\epsilon(fx))}{(x,\refl{fx})}$ should be $\id[\hfib{f}{fx}]{(gfx,\epsilon(fx))}{(x,\refl{fx})}$.\\
+ %
+ \cref{thm:equiv-biinv-isequiv}
+ & 272-gfd47093
+ & At the end of the proof, the equivalence follows from the fact that $\ishae(f)$, not $\iscontr(f)$, is a mere proposition. \\
+ %
+ \cref{thm:lequiv-contr-hae}
+ & 299-g85b729b
+ & In the proof, $\lcoh{f}{g}{\epsilon}$ should be $\rcoh{f}{g}{\epsilon}$, and the final displayed equation should have $\proj{2}$ applied to both occurrences of $P(fx)$.\\
+ %
+ \cref{lem:func_retract_to_fiber_retract}
+ & 265-g64000fb
+ & The path concatenations in the definitions of $\varphi_b$ and $\psi_b$ (and subsequent equations) are reversed, and each $f(a)$ in the next two displayed equations should be $g(a)$.\\
+ %
+ \cref{fibwise-fiber-total-fiber-equiv}
+ & 275-g84ab032
+ & The first equivalence in the proof is not by~\eqref{eq:sigma-lump} but by \cref{ex:sigma-assoc}.\\
+ %
+ \cref{fibwise-fiber-total-fiber-equiv}
+ & 202-g775a3f0
+ & The last equivalence in the proof is not by~\eqref{eq:path-lump} but by \cref{thm:omit-contr,thm:contr-paths,ex:sigma-assoc}.\\
+ %
+ \cref{thm:nobject-classifier-appetizer}
+ & 205-gf9fe386
+ & In the proof, $e\cdot \proj1$ should be $\trans{(\ua(e))}{\proj1}$. Also, explain its computation better.\\
+ %
+ \cref{sec:univalence-implies-funext}
+ & 114-gaba76c8
+ & The point of \cref{UA-eqv-hom-eqv} is that it follows from univalence without assuming function extensionality separately.\\
+ %
+ \cref{contrfamtotalpostcompequiv}
+ & 484-g2ce1249
+ & In the statement, ``precomposition'' should be ``post-composition''.\\
+ %
+ \cref{uatowfe}
+ & 746-g4d540d6
+ & In the definition of $\psi$ in the proof, transport has to be along $\happly(p,x)$ instead of along $p$.\\
+ %
+ \cref{ex:symmetric-equiv}
+ & 358-g9543064
+ & The text should be ``Show that for any $A,B:\UU$, the following type is equivalent to $\eqv A B$. Can you extract from this a definition of a type satisfying the three desiderata of $\isequiv(f)$?''\\
+ %
+ % Chapter 5
+ %
+ \cref{sec:appetizer-univalence}
+ & 706-ged2c765
+ & In the proof that $\eqv{\nat}{\natp}$, the definitions of $f$ and $g$ should be $\rec\nat(\natp, \; \zerop, \; \lamu{n:\nat} \sucp)$ and $\rec\natp(\nat, \; 0, \; \lamu{n:\natp} \suc)$ respectively.\\
+ %
+ \cref{sec:w-types}
+ & 125-g433f87e
+ & In the definition of $\natw$, use $\bfalse$ for $0$ and $\btrue$ for $\suc$, to match the ordering of $\bfalse$ and $\btrue$ in \cref{sec:type-booleans}.\\
+ %
+ \cref{sec:w-types}
+ & 551-g82b74bf
+ & The definitions of $\natw$ and $\lst A$ as $\w$-types should be $\wtype{b:\bool} \rec\bool(\bbU,\emptyt,\unit,b)$ and $\wtype{x: \unit + A} \rec{\unit + A}(\bbU, \emptyt, \lamu{a:A} \unit, x)$.\\
+ %
+ \cref{sec:w-types}
+ & 218-g42219cb
+ & In the description of the constructor $\supp$, its second argument is more clearly written as $f : B(a) \to \wtype{x:A} B(x)$.\\
+ %
+ \cref{sec:w-types}
+ & 525-gb1957b8
+ & In the computation rule, the recursive call to $\rec{}$ is missing an argument.
+ It should read $\rec{\wtype{x:A} B(x)}(E,e,\supp(a,f)) \jdeq e(a,f,\big(\lamu{b:B(a)} \rec{\wtype{x:A} B(x)}(E,e,f(b))\big))$.\\
+ %
+ \cref{sec:w-types}
+ & 570-g6ec04c3
+ & In the verification that $\dbl$ computes as expected, $e_t$ should be $e_0$ and $e_f$ should be $e_1$.\\
+ %
+ \cref{sec:initial-alg}
+ & 554-g9b2a34b
+ & The definition of the type of $\w$-homomorphisms (just before \cref{thm:w-hinit}) should read $\whom_{A,B}((C, s_C),(D,s_D)) \defeq \sm{f : C \to D} \prd{a:A}{h:B(a)\to C} \id{f(s_C(a,h))}{s_D(a, f\circ h)}$.\\
+ %
+ \cref{sec:htpy-inductive}
+ & 917-gd6960ad
+ & In the first paragraph, the definition of $\natw$ should be $\wtype{b:\bool} \rec\bool(\bbU,\emptyt,\unit,b)$.\\
+ %
+ \cref{sec:htpy-inductive}
+ & 608-g6af101f
+ & In the computation rule for homotopy $\w$-types, the left-hand side should be $\rec{\wtypeh{x:A} B(x)}(E,e,\supp(a,f))$.\\
+ %
+ \cref{sec:htpy-inductive}
+ & 1261-g4cdab82
+ & In the commutative diagram preceding the definition of $\w_s(A, B)$, all occurrences of $x$ should be replaced with $a$.\\
+ %
+ \cref{sec:htpy-inductive}
+ & 1261-g4cdab82
+ & In the definition of $\w_s(A,B)$, $\alpha(\supp(x,f))$ should be $\alpha(\supp(a,f))$, and $\prd{a,f}$ should be inserted after $\sm\alpha$.\\
+ %
+ \cref{eq:example-comp}
+ & 912-g04d3fb6
+  & In the preceding sentence, $\delta:d$ should be $\delta:D$.\\
+ %
+ \cref{sec:generalizations}
+ & 908-g4b2eb10
+ & The second two constructors of $\mathsf{paritynat}$ should be $\mathsf{esucc} : \mathsf{paritynat}(\btrue) \to \mathsf{paritynat}(\bfalse)$ and $\mathsf{osucc} : \mathsf{paritynat}(\bfalse) \to \mathsf{paritynat}(\btrue)$.\\
+ %
+ \cref{thm:identity-systems}
+ & 139-gd5c5d01
+ & In the proof of \ref{item:identity-systems4}$\Rightarrow$\ref{item:identity-systems1}, the type of $D'$ should be $(\sm{b:A} R(b)) \to \type$.\\
+ %
+ \cref{ex:same-recurrence-not-defeq}
+ & 622-ga0bd007
+ & The two functions should satisfy the same recurrence judgmentally.\\
+ %
+ \cref{ex:one-function-two-recurrences}
+ & 622-ga0bd007
+ & The function should satisfy both recurrences judgmentally.\\
+ %
+ \cref{sec:identity-systems}
+ & 171-gdc4966e
+ & The subscript of $\refl A : a=_A a$ should be $a$, i.e. $\refl a$.\\
+ %
+ % Chapter 6
+ %
+ \cref{sec:dependent-paths}
+ & 54-gd4a47c2
+ & Soon after \cref{rmk:defid}, the phrase ``An element $b:P(\base)$ in the fiber over the constructor $\base:\nat$'' should say $\base:\Sn^1$.\\
+ %
+ \cref{thm:uniqueness-for-functions-on-S1}
+ & 423-gf763ae1
+ & \cref{thm:transport-path,thm:dpath-path} are needed to put $q$ in the form required by the induction principle.\\
+ %
+ \cref{thm:interval-funext}
+ & 417-g4aa6a15
+ & Added \cref{ex:funext-from-interval}: the function constructed in \cref{thm:interval-funext} is actually an inverse to $\happly$, so that the full function extensionality axiom follows from an interval type.\\
+ %
+ \cref{thm:S1-autohtpy}
+ & 625-g950efa9
+ & In the second paragraph of the proof, the appeal to function extensionality should be omitted.\\
+ %
+ \cref{sec:circle}
+ & 327-g7cbe31c
+ & In the first sentence after the proof of \cref{thm:apd2}, ``$P:\Sn^2\to P$'' should be ``$P:\Sn^2\to\type$''.\\
+ %
+ \cref{sec:circle}
+ & 1039-g30da4c6
+ & In the sentence after the proof of \cref{thm:apd2}, the type family in which $s$ is a dependent path should be $\lam{p} \dpath P p b b$ instead of $P$.\\
+ %
+ \cref{sec:cell-complexes}
+ & 289-gdefeb8c
+ & In the induction principle for the torus, the types of $p'$ and $q'$ should be $\dpath P p {b'} {b'}$ and $\dpath P q b b$ respectively.\\
+ %
+ \cref{sec:hubs-spokes}
+ & 289-gdefeb8c
+ & In the induction principle for the torus, the types of $p'$ and $q'$ should be $\dpath P p {b'} {b'}$ and $\dpath P q b b$ respectively.\\
+ %
+ \cref{sec:hittruncations}
+ & 468-g5472874
+ & The induction principle for $\brck{A}$ should conclude $f(\bproj a)\jdeq g(a)$, not $f(\bproj a)\jdeq a$. And in the hypotheses of the induction principle for $\trunc0 A$ and in the proof of \cref{thm:trunc0-ind}, $v:\dpath{B}{u(x,y,p,q)}{p}{q}$ should instead be $v:\dpath{B}{u(x,y,p,q)}{r}{s}$.\\
+ %
+ \cref{sec:hittruncations}
+ & 860-gc7d862c
+ & In the penultimate paragraph, the ``unobjectionable'' constructor for $\trunc0 A$ should begin ``For every $f:S\to \trunc0 A$'', not ``For every $f:S\to A$''.\\
+ %
+ \cref{thm:quotient-ump}
+ & 961-gde36592
+ & The first sentence of the second paragraph of the proof should end with $g(x) = \overline{g\circ q}(x)$.\\
+ %
+ \cref{lem:quotient-when-canonical-representatives}
+ & 514-g18ade45
+ & Instead of ``is the set-quotient of $A$ by $\eqr$'', the statement should say ``satisfies the universal property of the set-quotient of $A$ by~$\eqr$, and hence is equivalent to it''.
+ In the proof, the second displayed equation should be $e'(g, s) (x,p) \defeq g(x)$.
+ The fourth displayed equation should be $e(e'(g, s)) \jdeq e(g \circ \proj{1}) \jdeq (g \circ \proj{1} \circ q, {\nameless})$, the fifth should be $g(\proj{1}(q(x))) \jdeq g(r(x)) = g(x)$, and the proof should conclude with ``$g$ respects $\eqr$ by the assumption $s$''.\\
+ %
+ \cref{thm:sign-induction}
+ & 535-g0a9abfe
+ & The ``computation rules'' satisfied by $f$ are only propositional equalities.
+ Also, the proof requires transport across a few unmentioned equivalences.\\
+ %
+ \cref{thm:looptothe}
+ & 535-g0a9abfe
+ & The defining clauses should use $\defid$ rather than $\defeq$ (see the erratum for \cref{thm:sign-induction}).
+ Also, the first clause should say $\refl{a}$ rather than $\refl{\base}$.\\
+ %
+ \cref{thm:transport-is-given}
+ & 682-g3af5dbe
+ & Three occurrences of $P$ in the statement should be $B$.\\
+ %
+ \cref{thm:flattening-cp}
+ & 457-g411ec6d
+ & The right-hand side of the displayed equation in the proof should be $(\cc(g(b)),D(b)(y))$.\\
+ %
+ \cref{thm:flattening-cp}
+ & 961-gde36592
+ & After the display we should have $\pp(b):\cc(f(b))=\cc(g(b))$.\\
+ %
+ \cref{sec:flattening}
+ & 519-gc99a54c
+ & $f$ denotes a map $B\to A$ in this section and should not be re-used for functions defined by induction on $\sm{w:W} P(w)$; we may use $k$ instead.
+ Thus $f$ should be $k$ in the last sentence of \cref{thm:flattening-rect}; the first sentence of its proof; the second and third sentences of the paragraph after its proof; the last sentence of \cref{thm:flattening-rectnd}; the first, second, and last sentences of its proof; throughout the statement and proof of \cref{thm:ap-sigma-rect-path-pair}; the statement of \cref{thm:flattening-rectnd-beta-ppt}; and the second sentence of its proof.\\
+ %
+ \cref{thm:flattening-rect}
+ & 537-gdf3b51d
+ & In the display after the definition of $q$, the transport in the first line should be with respect to $x\mapsto Q(\cct'(g(b),x))$, and in the second line the subscript of $\apfunc{}$ should be $x\mapsto \cct'(g(b),x)$.\\
+ %
+ \cref{thm:flattening-rect}
+ & 961-gde36592
+ & The subscript of $\apfunc{}$ should also be $x\mapsto \cct'(g(b),x)$ in the third, fourth, and fifth displays.
+ In the fourth and fifth displays, the path-concatenations should be in the other order.
+ And in the fifth display, $\refl{g(b)}$ should be $\refl{\cc(g(b))}$.\\
+ %
+ \cref{thm:ap-sigma-rect-path-pair}
+ & 501-ge895f81
+ & Both occurrences of $P$ in the statement should be $Y$, and both occurrences of $Q$ in the proof should be $Z$.\\
+ %
+ % Chapter 7
+ %
+ \cref{thm:h-level-retracts}
+ & 180-gb672a4d
+ & In the last displayed equation of the proof, $q$ should be $r$.\\
+ %
+ \cref{thm:isaprop-isofhlevel}
+ & 101-g713f48c
+ & The base case in the proof is just \cref{thm:isprop-iscontr}.\\
+ %
+ \cref{sec:truncations}
+ & 480-gdc84050
+ & The third paragraph is wrong: in contrast to \cref{rmk:spokes-no-hub}, it \emph{would} actually work to define $\trunc nA$ omitting the hub point.\\
+ %
+ \cref{thm:h-set-refrel-in-paths-sets}
+ & 1131-gc1748fa
+ & In the second paragraph of the first proof, the codomain of the function $f(x,x)$ should be $\id[X]xx$, not $\id[X]xy$.\\
+ %
+ \cref{lem:hedberg-helper}
+ & 644-g627c0a8
+ & In the proof of the lemma, ``If $x$ is $\inr(f)$'' should be ``If $x$ is $\inr(t)$''.\\
+ %
+ \cref{thm:path-truncation}
+ & 412-gb9582fc
+ & In the proof, \encode and \decode should be switched.\\
+ %
+ \cref{lem:nconnected_postcomp_variation}
+ & 801-g01922a8
+ & The converse direction is false unless $Q$ is fiberwise merely inhabited. Also, the occurrences of $\ap f p$ and $\ap f {\proj 2 w}$ in the proof should be just $p$ and $\proj 2 w$, respectively.\\
+ %
+ \cref{lem:connected-map-equiv-truncation}
+ & 367-g1c8c07e
+ & In the proof that the first composite is the identity, all occurrences of $y$ should be $f(x)$.\\
+ %
+ \cref{thm:modal-char}
+ & 658-g016f3a4
+ & In the second paragraph of the proof, the first two occurrences of $\proj2$ (but not the third) should be $\proj1$.\\
+ %
+ \cref{ex:s2-colim-unit}
+ & 101-ga366be2
+ & ``entires'' should be ``entirely''.\\
+ %
+ \cref{ex:s2-colim-unit}
+ & 683-g8941e50
+ & This exercise needs more precise definitions of ``diagram'' and ``colimit''.\\
+ %
+ \cref{ex:acnm}
+ & 1074-gcd42187
+ & $\choice{\infty,\infty}$ is not \cref{thm:ttac}, but the identity function.\\
+ %
+ \cref{ex:acnm}
+ & 603-ge113e08
+ & The penultimate sentence should ask ``Is $\choice{n,m}$ consistent with univalence for any $m\ge 0$ and any $n$?''.\\
+ %
+ % Chapter 8
+ %
+ \cref{lem:s1-encode-decode}
+ & 535-g0a9abfe
+ & The proof by induction on $n:\Z$ is justified by \cref{thm:sign-induction}, not \cref{thm:looptothe}.\\
+ %
+ \cref{thm:iscontr-s1cover}
+ & 535-g0a9abfe
+ & The clauses defining $q_z$ should use $\defid$ rather than $\defeq$ (see the erratum for \cref{thm:sign-induction}).\\
+ %
+ \cref{thm:suspension-increases-connectedness}
+ & 1062-gf3bfeae
+ & In the proof, $E$ is not $(n + 1)$-connected but $(n + 1)$-truncated.\\
+ %
+ \cref{thm:fiber-of-the-fiber}
+ & 1181-g3e51973
+ & In the proof, $(x:A)$ should be $(x:X)$.\\
+ %
+ \cref{thm:conn-pik}
+ & 1023-gf188aeb
+ & The proof requires a separate argument for $k=0$.\\
+ %
+ \cref{thm:hopf-fibration}
+ & 256-g9e6fcb8
+ & The phrase ``whose fibers are $\Sn^1$'' should be ``whose fiber over the basepoint is $\Sn ^1$''.
+ The same change should be made in \cref{ex:HopfJr,ex:SuperHopf}.\\
+ %
+ \cref{lem:fibration-over-pushout}
+ & 1062-gf3bfeae
+ & In the definition of ${E^{\mathrm{tot}}}'$ in the proof, $e_C$ should be $e_X$.\\
+ %
+ \cref{thm:conn-trunc-variable-ind}
+ & 396-g868335b
+ & In the proof, the function $k$ should have type $\prd{a:A} P(f(a))$.
+ It should also be named $\ell$, to avoid confusion with the integer $k$.\\
+ %
+ \cref{thm:freudcode}
+ & 87-g3f977b2
+ & In the second displayed equation in the proof, $\merid(x_1)$ should be $\opp{\merid(x_1)}$.\\
+ %
+ \cref{thm:wedge-connectivity}
+ & 1203-g7464bf1
+ & The type family $P$ defined in the proof should instead be called $Q$, to avoid clashes with the type family $P$ assumed in the statement.\\
+ %
+ \cref{thm:wedge-connectivity}
+ & 399-g8897c94
+ & In the last sentence of the proof, ``$(n-1)$-connected'' should be ``$(n-1)$-truncated''.\\
+ %
+ \cref{thm:freudlemma}
+ & 88-g0c0be67
+ & The type of $m$ should be $a_1=a_2$, the second display should begin with $C(a_1,\transfib{B}{\opp m}{b})$, and the proof should say ``we may assume $a_2$ is $a_1$ and $m$ is $\refl{a_1}$''.\\
+ %
+ \cref{sec:freudenthal}
+ & 165-gd5584c6
+ & In~\eqref{eq:freudcompute1}, $r''$ should be $r'$, the end point of $r$ should be $\transfib{B}{\opp{\merid(x_0)}}{q}$, and obtaining $r'$ requires also identifying this with $q \ct \opp{\merid(x_0)}$.
+ Similarly, in~\eqref{eq:freudcompute2}, the end point of $r$ should be $\transfib{B}{\opp{\merid(x_1)}}{q}$.\\
+ %
+ \cref{sec:freudenthal}
+ & 474-g5289470
+ & $\pi_3(\Sn^2)=\Z$ should be stated as \cref{thm:pi3s2}, following from \cref{cor:pis2-hopf,thm:pinsn}.\\
+ %
+ \cref{thm:whiteheadn}
+ & 1092-ge3b8b71
+ & After applying the induction hypothesis, it additionally needs to be checked that for every path $p : a = a$ the map $\pi_k(\apfunc f):\pi_k(x = x,p) \to \pi_k(f(x) = f(x),\apfunc f(p))$ is a bijection. \\
+ %
+ \cref{sec:general-encode-decode}
+ & 1154-g301662b
+ & In the strengthening of condition (iii) from \cref{lem:encode-decode-loop}, the right side should read just ``$c$'' instead of ``$c.a$''.\\
+ %
+ % Chapter 9
+ %
+ \cref{ct:gaunt}
+ & 1307-gfe63517
+ & Stating that every isomorphism is an identity is not very accurate (consider the discrete category on the interval type): a more accurate statement is that every automorphism is an identity arrow. Notice that for precategories, this property must be combined with skeletality for the equivalence to hold.\\
+ %
+ \cref{ct:functor}
+ & 807-gebec78b
+ & In \cref{ct:functor:comp}, it should read ``$\hom_A(b,c)$'' instead of ``$\hom_B(b,c)$''.\\
+ %
+ \cref{sec:equivalences}
+ & 1218-gcb6ba30
+ & Just before \cref{ct:essentially-surjective}, it should say ``However, if $A$ is not a category'' instead of ``However, if $B$ is not a category''.\\
+ %
+ \cref{ct:yoneda}
+ & 971-g6096085
+ & The sequence of equations at the end of the proof should begin with $\alpha_{a'}(f) = \alpha_{a'} (\y a_{a,a'}(f)(1_a))$, and thereafter the subscripts should remain $a,a'$ rather than $a',a$.\\
+ %
+ \cref{ct:sig}
+ & 897-g94fb722
+ & In~\ref{item:sigcmp}, ``if $f:\hom_X(x,y)$'' should be ``if $f:\hom_X(x,y)$ and $g:\hom_X(y,z)$''.\\
+ %
+ \cref{sec:sip}
+ & 1111-g3332a31
+ & The type of objects $A_0$ of the precategory $A$ of $(P,H)$-structures should be defined as $\sm{x:X_0} Px$, not $\sm{x:X} Px$.\\
+ %
+ \cref{cha:category-theory}
+ & 966-g04374f5
+ & The first sentence after \cref{ct:cat-weq-eq} should begin ``Therefore, if a precategory $A$ admits a weak equivalence functor $A\to \widehat{A}$ \emph{into a category}\dots''.\\
+ %
+ \cref{thm:rezk-completion}
+ & 313-g8ee79db
+ & In the second proof, the third constructor of $\widehat A_0$ is unneeded; it follows from the fourth constructor and path induction.
+ In the fifth constructor, $j(g)\ct j(f)$ should be $j(f)\ct j(g)$, and similarly throughout the proof.
+  Finally, for consistency, the 1-truncation constructor should be included explicitly (this was intended to be implied by ``higher inductive 1-type'').\\
+ %
+ \cref{cha:category-theory} Notes
+ & 379-ga57eab2
+ & It should be mentioned that Hofmann and Streicher (1998) also considered this definition of category.\\
+ %
+ % Chapter 10
+ %
+ \cref{thm:wfmin}
+ & 1290-g4101ad3
+ & In the proof, the second sentence of the second paragraph should have ``$s(a'):\acc(a')$'' rather than ``$s(a'):\acc(a)$''.\\
+ %
+ \cref{thm:ordord}
+ & 140-g55de417
+  & The second sentence of the proof should say ``By well-founded induction on $A$, suppose $\ordsl A b$ is accessible for all $b<a$''.\\
+%% END ERRATA
+\end{longtable}
+
+% The beginning of the diagram below was lost; the node definitions are
+% reconstructed from the surrounding text, and the label $\Sn^\infty$ for the
+% colimit object is an assumption.
+Given $y : \Sn^n$, by the diagram
+\begin{center}
+\begin{tikzpicture}
+\node (N0) at (0,2) {$\Sn^n$};
+\node (N1) at (4,2) {$\Sn^{n+1}$};
+\node (N2) at (2,0) {$\Sn^\infty$};
+\node (N3) at (1.5,1.0) {};
+\node (N4) at (2.5,1.5) {};
+\draw[->] (N0) -- node[above]{\footnotesize $i_n$} (N1);
+\draw[->] (N0) -- node[left]{\footnotesize $j_n$} (N2);
+\draw[->] (N1) -- node[right]{\footnotesize $j_{n+1}$} (N2);
+\draw[double, double equal sign distance] (N3) -- node[left,above]{\footnotesize $\glue_n$} (N4);
+\end{tikzpicture}
+\end{center}
+we get a path $x \defeq j_n(y) = j_{n+1} (i_n(y))$.
But, as we will show, the inclusion $i_n$ of $\Sn^n$ in $\Sn^{n+1}$ is nullhomotopic:
every point in its image is equal to $\north_{n+1}$. Thus we obtain
$j_{n+1}(i_n(y)) = j_{n+1}(\north_{n+1})$. Composing the path from the first step
with the path from the second step concludes the exercise.
+
To construct a function $D_{\blank} : \prd{n:\nat} j_n(\north_n) = j_0(\north_0)$, we
proceed by induction on $n$. For the base case we can
+use $\refl{j_0(\north_0)}$. For the inductive case, we have by inductive
+hypothesis $j_n(\north_n) = j_0(\north_0)$. By our definition of $i_{\blank}$,
+we have that $\north_{n+1} \equiv i_n(\north_n)$. So $j_{n+1} (\north_{n+1})$
+equals $j_{n+1}(i_n(\north_n))$. By concatenation with $\glue_n(\north_n)$,
+we reduce our goal to the inductive hypothesis.
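In symbols, the construction of $D_{\blank}$ just described amounts to the clauses
\begin{align*}
D_0 &\defeq \refl{j_0(\north_0)},\\
D_{n+1} &\defeq \glue_n(\north_n)^{-1} \ct D_n.
\end{align*}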
+
+Let's now show that the inclusion of $\Sn^n$ in $\Sn^{n+1}$ can be
+continuously retracted to $\north_{n+1}$.
+That is, let's construct a homotopy:
+\[ H_{\blank} : \prd{n:\nat}{x:\Sn^n} i_n(x) = \north_{n+1} \]
+For the case $n\equiv 0$ we know that $i_0(\btrue) \equiv \north_1$ and
+$i_0(\bfalse) \equiv \south_1$. This is because we constructed the inclusion
that way. So we can prove the equalities
using $\refl{\north_1}$ and $\merid_1(\btrue)^{-1}$, where $\merid_1(\btrue) : \north_1 = \south_1$.
+
+For the inductive case we defined, previously, $i_n(\north_n) \equiv \north_{n+1}$
+and $i_n(\south_{n}) \equiv \south_{n+1}$. So we can prove the equalities using
+$\refl{\north_{n+1}}$ and $(\merid_{n+1}(\north_n))^{-1}$.
+Then we have to prove that the function respects $\merid_n$:
+\[ \prd{x:\Sn^{n-1}} \dpath {x\mapsto (i_n(x) = \north_{n+1})} {\merid_n(x)} {\refl{\north_{n+1}}} {(\merid_{n+1}(\north_n))^{-1}} \]
+By \cref{thm:transport-path} (and some straightforward computation) this reduces to:
+\[ i_n(\merid_n(x)) = \merid_{n+1}(\north_{n}) \]
+But, by our definition of $i_{\blank}$, and the computation rule of the
+suspension induction $i_n(\merid_n(x))$ equals $\merid_{n+1} (i_{n-1}(x))$.
+And, by inductive hypothesis, $i_{n-1}(x) = \north_n$, which gives us the
+desired result.
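Summarizing, the point clauses of $H_{\blank}$ given above are
\begin{align*}
H_0(\btrue) &\defeq \refl{\north_1}, & H_0(\bfalse) &\defeq \merid_1(\btrue)^{-1},\\
H_n(\north_n) &\defeq \refl{\north_{n+1}}, & H_n(\south_n) &\defeq \merid_{n+1}(\north_n)^{-1} && (n\ge 1).
\end{align*}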
+
Composing the two proofs we just gave, we obtain a function $J$ given by
\[ J : \prd{n:\nat}{x:\Sn^n} j_n(x) = j_0(\north_0), \qquad
   J_n(x) \defeq \glue_n(x)\ct \apfunc{j_{n+1}}{H_n(x)}\ct D_{n+1}. \]
+We use this function and induction on $\Sn^{\infty}$ to derive
+the contractibility of the space.
+Now it remains to show that our function respects the gluing:
+\[ \prd{n:\nat}{x:\Sn^n} \dpath {x\mapsto (x = j_0(\north_0))}
+ {\glue_n(x)} {J_n(x)} {J_{n+1}(i_n (x))} \]
+By definition this is:
+\[ \prd{n:\nat}{x:\Sn^n}
+ \transfib{x\mapsto (x = j_0(\north_0))}{\glue_n(x)}{J_n(x)}
+ = {J_{n+1}(i_n (x))} \]
+
+The LHS is equal to $\glue_{n}(x)^{-1}\ct J_n(x)$, which, by definition of
+$J_n(x)$, is:
+\[ \glue_n(x)^{-1}\ct\glue_n(x)\ct\apfunc{j_{n+1}}{H_n(x)}\ct D_{n+1} \]
+Cancelling we get:
+\[ \apfunc{j_{n+1}}{H_n(x)}\ct D_{n+1} \]
+
+We also use the definition of $J_{\blank}$ in the RHS, and then the computation
+rule of $D_{\blank}$, giving us the equalities:
+\begin{align*}
+ & J_{n+1}(i_n(x))\\
+ &= \glue_{n+1}(i_n(x))\ct\apfunc{j_{n+2}}{H_{n+1}(i_n(x))}\ct D_{n+2}\\
+ &= \glue_{n+1}(i_n(x))\ct\apfunc{j_{n+2}}{H_{n+1}(i_n(x))}\ct
+ \glue_{n+1}(\north_{n+1})^{-1}\ct D_{n+1}.
+\end{align*}
+
+So it suffices to show:
+\[ \apfunc{j_{n+1}}{H_n(x)} = \glue_{n+1}(i_n(x))\ct\apfunc{j_{n+2}}{H_{n+1}(i_n(x))}
+ \ct\glue_{n+1}(\north_{n+1})^{-1} \]
+Or equivalently:
+\[ \apfunc{j_{n+1}}{H_n(x)} \ct\glue_{n+1}(\north_{n+1}) =
+ \glue_{n+1}(i_n(x))\ct\apfunc{j_{n+2}}{H_{n+1}(i_n(x))} \]
+
+But we remember that we have the homotopy
+$\glue_{n+1} : j_{n+1} = j_{n+2}\circ i_{n+1}$, so, by a simple application
+of \cref{lem:htpy-natural} and the functoriality of $\apfunc{}{}$, we get
+a proof of the equality:
+\[ \apfunc{j_{n+1}}{H_n(x)} \ct\glue_{n+1}(\north_{n+1}) =
+ \glue_{n+1}(i_n(x))\ct\apfunc{j_{n+2}}{\apfunc{i_{n+1}}{H_n(x)}} \]
+So we reduced the goal to showing:
+\[ \apfunc{i_{n+1}}{H_n(x)} = H_{n+1}(i_n(x)) \]
+
This can be done easily by induction on $\Sn^n$ using the definition of $H_{\blank}$.
+
+
+\subsection*{Solution to \cref{ex:contr-infinity-sphere-susp}}
+
+First we write down the type of the induction principle explicitly:
+
+\[ \ind{\Sn^\infty} : \prd{C:\Sn^\infty \to \UU}{n:C(\north)}{s:C(\south)}
+ \Parens{\prd{x:\Sn^\infty} C(x) \to \dpath C {\merid(x)} {n}{s}} \to \prd{x:\Sn^\infty} C(x) . \]
+
+ We take $\north$ as center of contraction. So we have to prove
+ $\prd{x:\Sn^\infty} \north = x$. For this we use induction on $\Sn^\infty$ taking:
+\[C \defeq (\lambda x. \north = x) : \Sn^\infty \to \UU .\]
+
+ When $x$ is $\north$ we just use $\refl{\north} : \north = \north$. When $x$
+ is $\south$ we use $\merid(\north) : \north = \south$.
+ When $x$ varies along $\merid$ we have to give a function of type:
+\[\prd{x:\Sn^\infty} \Parens{\north = x} \to \Parens{\dpath C {\merid(x)} {\refl{\north}}{\merid(\north)}}. \]
+ So, given $x : \Sn^\infty$ and $p : \north = x$, we have to prove:
+ \[\transfib{x \mapsto (\north = x)}{\merid(x)}{\refl{\north}} = \merid(\north).\]
+
+By \cref{cor:transport-path-prepost} it suffices to show
+$\refl{\north}\ct \merid(x) = \merid(\north)$. Canceling $\refl{\north}$ and
+applying $\merid$ to $p$ gets us the desired result.
+
+\subsection*{Solution to \cref{ex:unique-fiber}}
+
We know that every two points $y_1,y_2 : Y$ are merely equal, because $Y$ is
connected. That is, we have a function
+$c : \prd{y_1,y_2:Y}\trunc {} {y_1 = y_2}$. To prove this we can use the remark
+after \cref{thm:connected-pointed}. If we want to show that any pair of
+points $y_1,y_2 : Y$ are merely equal we can use the first point $y_1$ to get
+a pointed space $(Y,y_1)$, and then use the remark.
+
+We note that it suffices to show that for any $y_1,y_2:Y$ we have
+$\trunc {} {\hfib{f}{y_1} = \hfib{f}{y_2}}$ because
+$\hfib{f}{y_1} = \hfib{f}{y_2}$ implies (using $\idtoeqv$)
+$\hfib{f}{y_1}\simeq \hfib{f}{y_2}$
+and thus, by recursion on the truncation of $\hfib{f}{y_1} = \hfib{f}{y_2}$,
+we get that $\trunc {} {\hfib{f}{y_1} = \hfib{f}{y_2}}$ implies
+$\trunc {} {\hfib{f}{y_1} \simeq \hfib{f}{y_2}}$.
+
+The type of $c(y_1,y_2)$ is a truncation, so we can use its recursion to
+prove the desired result.
+By recursion we can assume that $y_1 = y_2$, and in that case we obviously have
+$\trunc {} {\hfib{f}{y_1} = \hfib{f}{y_2}}$. We also have to show that
+the proposition we want to prove is $-1$-truncated, but that is straightforward
+because it is a $-1$-truncation.
+
+
+\section*{Exercises from \cref{cha:category-theory}}
+
+\subsection*{Solution to \cref{ex:stack}}
+
+Define $K$ to be the precategory with $K_0 \defeq Y$ and $\hom_K(y_1,y_2) \defeq (p(y_1)=p(y_2))$.
+Then $\mathrm{Desc}(A,p)\defeq A^K$ is a good definition.
+Moreover, the obvious functor $K\to X$ (where $X$ denotes the discrete category on itself) is a weak equivalence, so \cref{ct:esofull-precomp-ff,ct:cat-weq-eq} yield the second and third parts.
+Finally, $K$ is a strict category, so if it is a stack, then $p$ has a section, while conversely if $p$ has a section then $K\to X$ is a (strong) equivalence.
+
+\section*{Exercises from \cref{cha:set-math}}
+
+\subsection*{Solution to \cref{ex:prop-ord}}
+
Define $A<B$ to mean that $A = \ordsl B b$ for some $b:B$.

\begin{figure}
  \centering
  \begin{tikzpicture}
    \draw (0,0) ellipse (3 and .5);
    \draw (0,3) ellipse (3.5 and 1.5);
    \node (P) at (4.5,3) {$P$};
    \node (S1) at (4.5,0) {$\Sn^1$};
    \draw[->>,thick] (P) -- (S1);
+ \node[fill,circle,inner sep=1pt,label={below right:$\base$}] at (0,-.5) {};
+ \node at (-2.6,.6) {$\lloop$};
+ \node[fill,circle,\OPTblue,inner sep=1pt] (b) at (0,2.3) {};
+ \node[\OPTblue] at (-.2,2.1) {$b$};
+ \begin{scope}
+ \draw[\OPTblue] (b) to[out=180,in=-150] (-2.7,3.5) to[out=30,in=180] (0,3.35);
+ \draw[\OPTblue,dotted] (0,3.35) to[out=0,in=175] (1.4,4.35);
+ \draw[\OPTblue] (1.4,4.35) to[out=-5,in=90] (2.5,3) to[out=-90,in=0,looseness=.8] (b);
+ \end{scope}
+ \node[\OPTblue] at (-2.2, 3.3) {$\ell$};
+ \end{tikzpicture}
+ \caption{The topological induction principle for $\Sn^1$}
+ \label{fig:topS1ind}
+\end{figure}
+
+\begin{figure}
+ \centering
+ \begin{tikzpicture}
+ \draw (0,0) ellipse (3 and .5);
+ \draw (0,3) ellipse (3.5 and 1.5);
+ \begin{scope}[yshift=4]
+ \clip (-3,3) -- (-1.8,3) -- (-1.8,3.7) -- (1.8,3.7) -- (1.8,3) -- (3,3) -- (3,0) -- (-3,0) -- cycle;
+ \draw[clip] (0,3.5) ellipse (2.25 and 1);
+ \draw (0,2.5) ellipse (1.7 and .7);
+ \end{scope}
+ \node (P) at (4.5,3) {$P$};
+ \node (S1) at (4.5,0) {$\Sn^1$};
+ \draw[->>,thick] (P) -- (S1);
+ \node[fill,circle,inner sep=1pt,label={below right:$\base$}] at (0,-.5) {};
+ \node at (-2.6,.6) {$\lloop$};
+ \node[fill,circle,\OPTblue,inner sep=1pt] (b) at (0,2.3) {};
+ \node[\OPTblue] at (-.3,2.3) {$b$};
+ \node[fill,circle,\OPTpurple,inner sep=1pt] (tb) at (0,1.8) {};
+ % \draw[\OPTpurple,dashed] (b) to[out=0,in=0,looseness=5] (0,4) to[out=180,in=180] (tb);
+ \draw[\OPTpurple,dashed] (b) arc (-90:90:2.9 and 0.85) arc (90:270:2.8 and 1.1);
+ \begin{scope}
+ \clip (b) -- ++(.1,0) -- (.1,1.8) -- ++(-.2,0) -- ++(0,-1) -- ++(3,2) -- ++(-3,0) -- (-.1,2.3) -- cycle;
+ \draw[\OPTred,dotted,thick] (.2,2.07) ellipse (.2 and .57);
+ \begin{scope}
+ % \draw[clip] (b) -- ++(.1,0) |- (tb) -- ++(-.2,0) -- ++(0,-1) -| ++(3,3) -| (b);
+ \clip (.2,0) rectangle (-2,3);
+ \draw[\OPTred,thick] (.2,2.07) ellipse (.2 and .57);
+ \end{scope}
+ \end{scope}
+ \node[\OPTred] at (1,1.2) {$\ell: \trans \lloop b=b$};
+ \end{tikzpicture}
+ \caption{The type-theoretic induction principle for $\Sn^1$}
+ \label{fig:ttS1ind}
+\end{figure}
+
+Of course, we expect to be able to prove the recursion principle from the induction principle, by taking $P$ to be a constant type family.
+This is in fact the case, although deriving the non-dependent computation rule for $\lloop$ (which refers to $\apfunc f$) from the dependent one (which refers to $\apdfunc f$) is surprisingly a little tricky.
+
+\begin{lem}\label{thm:S1rec}
+ \index{recursion principle!for S1@for $\Sn^1$}%
+ \index{computation rule!for S1@for $\Sn^1$}%
+ If $A$ is a type together with $a:A$ and $p:\id[A]aa$, then there is a
+ function $f:\Sn^1\to{}A$ with
+ \begin{align*}
+ f(\base)&\defeq a \\
+ \apfunc f(\lloop)&\defid p.
+ \end{align*}
+\end{lem}
+\begin{proof}
+ We would like to apply the induction principle of $\Sn^1$ to the constant type family, $(\lam{x} A): \Sn^1\to \UU$.
+ The required hypotheses for this are a point of $(\lam{x} A)(\base) \jdeq A$, which we have (namely $a:A$), and a dependent path in $\dpath {x \mapsto A}{\lloop} a a$, or equivalently $\transfib{x \mapsto A}{\lloop} a = a$.
+ This latter type is not the same as the type $\id[A]aa$ where $p$ lives, but it is equivalent to it, because by \cref{thm:trans-trivial} we have $\transconst{A}{\lloop}{a} : \transfib{x \mapsto A}{\lloop} a= a$.
+ Thus, given $a:A$ and $p:a=a$, we can consider the composite
+ \[\transconst{A}{\lloop}{a} \ct p:(\dpath {x \mapsto A}\lloop aa).\]
+ Applying the induction principle, we obtain $f:\Sn^1\to A$ such that
+ \begin{align}
+ f(\base) &\jdeq a \qquad\text{and}\label{eq:S1recindbase}\\
+ \apdfunc f(\lloop) &= \transconst{A}{\lloop}{a} \ct p.\label{eq:S1recindloop}
+ \end{align}
+ It remains to derive the equality $\apfunc f(\lloop)=p$.
+ However, by \cref{thm:apd-const}, we have
+ \[\apdfunc f(\lloop) = \transconst{A}{\lloop}{f(\base)} \ct \apfunc f(\lloop).\]
+ Combining this with~\eqref{eq:S1recindloop} and canceling the occurrences of $\transconstf$ (which are the same by~\eqref{eq:S1recindbase}), we obtain $\apfunc f(\lloop)=p$.
+\end{proof}
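For instance, \cref{thm:S1rec} yields a type-theoretic analogue of the topological degree-$2$ map on the circle: taking $A\defeq\Sn^1$, $a\defeq\base$, and $p\defeq\lloop\ct\lloop$, we obtain $d:\Sn^1\to\Sn^1$ with
\[ d(\base)\jdeq\base \qquad\text{and}\qquad \apfunc d(\lloop) = \lloop\ct\lloop, \]
which wraps the circle around itself twice.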
+
+% Similarly, in this case we speak of defining $f$ by $f(\base)\defeq a$ and $\ap f \lloop \defid p$.
+We also have a corresponding uniqueness principle.
+
+\begin{lem}\label{thm:uniqueness-for-functions-on-S1}
+ \index{uniqueness!principle, propositional!for functions on the circle}%
+ If $A$ is a type and $f,g:\Sn^1\to{}A$ are two maps together with two
+ equalities $p,q$:
+ \begin{align*}
+ p:f(\base)&=_Ag(\base),\\
+ q:\map{f}\lloop&=^{\lam{x} x=_Ax}_p\map{g}\lloop.
+ \end{align*}
+ Then for all $x:\Sn^1$ we have $f(x)=g(x)$.
+\end{lem}
+\begin{proof}
+ We apply the induction principle of $\Sn^1$ at the type family $P(x)\defeq(f(x)=g(x))$.
+ When $x$ is $\base$, $p$ is exactly what we need.
+ And when $x$ varies along $\lloop$, we need
+ \(p=^{\lam{x} f(x)=g(x)}_{\lloop} p,\)
+ which by \cref{thm:transport-path,thm:dpath-path} can be reduced to $q$.
+\end{proof}
+
+\index{universal!property!of S1@of $\Sn^1$}%
+These two lemmas imply the expected universal property of the circle:
+
+\begin{lem}\label{thm:S1ump}
+ For any type $A$ we have a natural equivalence
+ \[ (\Sn^1 \to A) \;\eqvsym\;
+ \sm{x:A} (x=x).
+ \]
+\end{lem}
+\begin{proof}
+ We have a canonical function $f:(\Sn^1 \to A) \to \sm{x:A} (x=x)$ defined by $f(g) \defeq (g(\base),\ap g \lloop)$.
+ The induction principle shows that the fibers of $f$ are inhabited, while the uniqueness principle shows that they are mere propositions.
+ Hence they are contractible, so $f$ is an equivalence.
+\end{proof}
+
+\index{type!circle|)}%
+
+As in \cref{sec:htpy-inductive}, we can show that the conclusion of \cref{thm:S1ump} is equivalent to having an induction principle with propositional computation rules.
+Other higher inductive types also satisfy lemmas analogous to \cref{thm:S1rec,thm:S1ump}; we will generally leave their proofs to the reader.
+We now proceed to consider many examples.
+
+
+\section{The interval}
+\label{sec:interval}
+
+\index{type!interval|(defstyle}%
+\indexsee{interval!type}{type, interval}%
+The \define{interval}, which we denote $\interval$, is perhaps an even simpler higher inductive type than the circle.
+It is generated by:
+\begin{itemize}
+\item a point $\izero:\interval$,
+\item a point $\ione:\interval$, and
+\item a path $\seg : \id[\interval]\izero\ione$.
+\end{itemize}
+\index{recursion principle!for interval type}%
+The recursion principle for the interval says that given a type $B$ along with
+\begin{itemize}
+\item a point $b_0:B$,
+\item a point $b_1:B$, and
+\item a path $s:b_0=b_1$,
+\end{itemize}
+there is a function $f:\interval\to B$ such that $f(\izero)\jdeq b_0$, $f(\ione)\jdeq b_1$, and $\ap f \seg = s$.
+\index{induction principle!for interval type}%
+The induction principle says that given $P:\interval\to\type$ along with
+\begin{itemize}
+\item a point $b_0:P(\izero)$,
+\item a point $b_1:P(\ione)$, and
+\item a path $s:\dpath{P}{\seg}{b_0}{b_1}$,
+\end{itemize}
+there is a function $f:\prd{x:\interval} P(x)$ such that $f(\izero)\jdeq b_0$, $f(\ione)\jdeq b_1$, and $\apd f \seg = s$.
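As with the circle, the recursion principle can be derived from the induction principle by taking $P$ to be the constant family $\lam{x} B$: by \cref{thm:trans-trivial}, the path $s:b_0=b_1$ is converted into the required dependent path
\[ \transconst{B}{\seg}{b_0} \ct s \;:\; \dpath{x\mapsto B}{\seg}{b_0}{b_1}, \]
and the non-dependent computation rule for $\seg$ then follows from \cref{thm:apd-const}, just as in the proof of \cref{thm:S1rec}.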
+
+Regarded purely up to homotopy, the interval is not really interesting:
+
+\begin{lem}\label{thm:contr-interval}
+ The type $\interval$ is contractible.
+\end{lem}
+
+\begin{proof}
+ We prove that for all $x:\interval$ we have $x=_\interval\ione$. In other words we want a
+ function $f$ of type $\prd{x:\interval}(x=_\interval\ione)$. We begin to define $f$ in the following way:
+ \begin{alignat*}{2}
+ f(\izero)&\defeq \seg &:\izero&=_\interval\ione,\\
+ f(\ione)&\defeq \refl\ione &:\ione &=_\interval\ione.
+ \end{alignat*}
+ It remains to define $\apd{f}\seg$, which must have type $\seg =_\seg^{\lam{x} x=_\interval\ione}\refl \ione$.
+ By definition this type is $\trans\seg\seg=_{\ione=_\interval\ione}\refl\ione$, which in turn is equivalent to $\rev\seg\ct\seg=\refl\ione$.
+ But there is a canonical element of that type, namely the proof that path inverses are in fact inverses.
+\end{proof}
+
+However, type-theoretically the interval does still have some interesting features, just like the topological interval in classical homotopy theory.
+For instance, it enables us to give an easy proof of function extensionality.
+(Of course, as in \cref{sec:univalence-implies-funext}, for the duration of the following proof we suspend our overall assumption of the function extensionality axiom.)
+
+\begin{lem}\label{thm:interval-funext}
+ \index{function extensionality!proof from interval type}%
+ If $f,g:A\to{}B$ are two functions such that $f(x)=g(x)$ for every $x:A$, then
+ $f=g$ in the type $A\to{}B$.
+\end{lem}
+
+\begin{proof}
+ Let's call the proof we have $p:\prd{x:A}(f(x)=g(x))$. For all $x:A$ we define
+ a function $\widetilde{p}_x:\interval\to{}B$ by
+ \begin{align*}
+ \widetilde{p}_x(\izero) &\defeq f(x), \\
+ \widetilde{p}_x(\ione) &\defeq g(x), \\
+ \map{\widetilde{p}_x}\seg &\defid p(x).
+ \end{align*}
+ We now define $q:\interval\to(A\to{}B)$ by
+ \[q(i)\defeq(\lam{x} \widetilde{p}_x(i))\]
+ Then $q(\izero)$ is the function $\lam{x} \widetilde{p}_x(\izero)$, which is equal to $f$ because $\widetilde{p}_x(\izero)$ is defined by $f(x)$.
+ Similarly, we have $q(\ione)=g$, and hence
+ \[\map{q}\seg:f=_{(A\to{}B)}g \qedhere\]
+\end{proof}
+
+In \cref{ex:funext-from-interval} we ask the reader to complete the proof of the full function extensionality axiom from \cref{thm:interval-funext}.
+
+\index{type!interval|)}%
+
+\section{Circles and spheres}
+\label{sec:circle}
+
+\index{type!circle|(}%
+We have already discussed the circle $\Sn^1$ as the higher inductive type generated by
+\begin{itemize}
+\item A point $\base:\Sn^1$, and
+\item A path $\lloop : {\id[\Sn^1]\base\base}$.
+\end{itemize}
+\index{induction principle!for S1@for $\Sn^1$}%
+Its induction principle says that given $P:\Sn^1\to\type$ along with $b:P(\base)$ and $\ell :\dpath P \lloop b b$, we have $f:\prd{x:\Sn^1} P(x)$ with $f(\base)\jdeq b$ and $\apd f \lloop = \ell$.
+Its non-dependent recursion principle says that given $B$ with $b:B$ and $\ell:b=b$, we have $f:\Sn^1\to B$ with $f(\base)\jdeq b$ and $\ap f \lloop = \ell$.
+
+We observe that the circle is nontrivial.
+
+\begin{lem}\label{thm:loop-nontrivial}
+ $\lloop\neq\refl{\base}$.
+\end{lem}
+\begin{proof}
+ Suppose that $\lloop=\refl{\base}$.
+ Then since for any type $A$ with $x:A$ and $p:x=x$, there is a function $f:\Sn^1\to A$ defined by $f(\base)\defeq x$ and $\ap f \lloop \defid p$, we have
  \[p = \ap f \lloop = \ap f {\refl{\base}} = \refl{x}.\]
+ But this implies that every type is a set, which as we have seen is not the case (see \cref{thm:type-is-not-a-set}).
+\end{proof}
+
+The circle also has the following interesting property, which is useful as a source of counterexamples.
+
+\begin{lem}\label{thm:S1-autohtpy}
+ There exists an element of $\prd{x:\Sn^1} (x=x)$ which is not equal to $x\mapsto \refl{x}$.
+\end{lem}
+\begin{proof}
+ We define $f:\prd{x:\Sn^1} (x=x)$ by $\Sn^1$-induction.
+ When $x$ is $\base$, we let $f(\base)\defeq \lloop$.
+ Now when $x$ varies along $\lloop$ (see \cref{rmk:varies-along}), we must show that $\transfib{x\mapsto x=x}{\lloop}{\lloop} = \lloop$.
+ However, in \cref{sec:compute-paths} we observed that $\transfib{x\mapsto x=x}{p}{q} = \opp{p} \ct q \ct p$, so what we have to show is that $\opp{\lloop} \ct \lloop \ct \lloop = \lloop$.
+ But this is clear by canceling an inverse.
+
+ To show that $f\neq (x\mapsto \refl{x})$, it suffices to show that $f(\base) \neq \refl{\base}$.
+ But $f(\base)=\lloop$, so this is just the previous lemma.
+\end{proof}
+
+For instance, this enables us to extend \cref{thm:type-is-not-a-set} by showing that any universe which contains the circle cannot be a 1-type.
+
+\begin{cor}
+ If the type $\Sn^1$ belongs to some universe \type, then \type is not a 1-type.
+\end{cor}
+\begin{proof}
+ The type $\Sn^1=\Sn^1$ in \type is, by univalence, equivalent to the type $\eqv{\Sn^1}{\Sn^1}$ of auto\-equivalences of $\Sn^1$, so it suffices to show that $\eqv{\Sn^1}{\Sn^1}$ is not a set.
+ \index{automorphism!of S1@of $\Sn^1$}%
+ For this, it suffices to show that its equality type $\id[(\eqv{\Sn^1}{\Sn^1})]{\idfunc[\Sn^1]}{\idfunc[\Sn^1]}$ is not a mere proposition.
+ Since being an equivalence is a mere proposition, this type is equivalent to $\id[(\Sn^1\to\Sn^1)]{\idfunc[\Sn^1]}{\idfunc[\Sn^1]}$.
+ But by function extensionality, this is equivalent to $\prd{x:\Sn^1} (x=x)$, which as we have seen in \cref{thm:S1-autohtpy} contains two unequal elements.
+\end{proof}
+
+\index{type!circle|)}%
+
+\index{type!2-sphere|(}%
+\indexsee{sphere type}{type, sphere}%
+We have also mentioned that the 2-sphere $\Sn^2$ should be the higher inductive type generated by
+\symlabel{s2b}
+\begin{itemize}
+\item A point $\base:\Sn^2$, and
+\item A 2-dimensional path $\surf:\refl{\base} = \refl{\base}$ in ${\base=\base}$.
+\end{itemize}
+\index{recursion principle!for S2@for $\Sn^2$}%
+The recursion principle for $\Sn^2$ is not hard: it says that given $B$ with $b:B$ and $s:\refl b = \refl b$, we have $f:\Sn^2\to B$ with $f(\base)\jdeq b$ and $\aptwo f \surf = s$.
+Here by ``$\aptwo f \surf$'' we mean an extension of the functorial action of $f$ to two-dimensional paths, which can be stated precisely as follows.
+
+\begin{lem}\label{thm:ap2}
+ Given $f:A\to B$ and $x,y:A$ and $p,q:x=y$, and $r:p=q$, we have a path $\aptwo f r : \ap f p = \ap f q$.
+\end{lem}
+\begin{proof}
+ By path induction, we may assume $p\jdeq q$ and $r$ is reflexivity.
+ But then we may define $\aptwo f {\refl p} \defeq \refl{\ap f p}$.
+\end{proof}
+
+In order to state the general induction principle, we need a version of this lemma for dependent functions, which in turn requires a notion of dependent two-dimensional paths.
+As before, there are many ways to define such a thing; one is by way of a two-dimensional version of transport.
+
+\begin{lem}\label{thm:transport2}
+ Given $P:A\to\type$ and $x,y:A$ and $p,q:x=y$ and $r:p=q$, for any $u:P(x)$ we have $\transtwo r u : \trans p u = \trans q u$.
+\end{lem}
+\begin{proof}
+ By path induction.
+\end{proof}
+
+Now suppose given $x,y:A$ and $p,q:x=y$ and $r:p=q$ and also points $u:P(x)$ and $v:P(y)$ and dependent paths $h:\dpath P p u v$ and $k:\dpath P q u v$.
+By our definition of dependent paths, this means $h:\trans p u = v$ and $k:\trans q u = v$.
+Thus, it is reasonable to define the type of dependent 2-paths over $r$ to be
+\[ (\dpath P r h k )\defeq (h = \transtwo r u \ct k). \]
+We can now state the dependent version of \cref{thm:ap2}.
+
+\begin{lem}\label{thm:apd2}
+ Given $P:A\to\type$ and $x,y:A$ and $p,q:x=y$ and $r:p=q$ and a function $f:\prd{x:A} P(x)$, we have
+ $\apdtwo f r : \dpath P r {\apd f p}{\apd f q}$.
+\end{lem}
+\begin{proof}
+ Path induction.
+\end{proof}
+
+\index{induction principle!for S2@for $\Sn^2$}%
+Now we can state the induction principle for $\Sn^2$: suppose we are given $P:\Sn^2\to\type$ with $b:P(\base)$ and $s:\dpath Q \surf {\refl b}{\refl b}$ where $Q\defeq\lam{p} \dpath P p b b$. Then there is a function $f:\prd{x:\Sn^2} P(x)$ such that $f(\base)\jdeq b$ and $\apdtwo f \surf = s$.
+
+\index{type!2-sphere|)}%
+
+Of course, this explicit approach gets more and more complicated as we go up in dimension.
+Thus, if we want to define $n$-spheres for all $n$, we need some more systematic idea.
+One approach is to work with $n$-dimensional loops\index{loop!n-@$n$-} directly, rather than general $n$-dimensional paths.\index{path!n-@$n$-}
+
+\index{type!pointed}%
+Recall from \cref{sec:equality} the definitions of \emph{pointed types} $\type_*$, and the $n$-fold loop space\index{loop space!iterated} $\Omega^n : \type_* \to \type_*$
+(\cref{def:pointedtype,def:loopspace}). Now we can define the
+$n$-sphere $\Sn^n$ to be the higher inductive type generated by
+\index{type!n-sphere@$n$-sphere}%
+\begin{itemize}
+\item A point $\base:\Sn^n$, and
+\item An $n$-loop $\lloop_n : \Omega^n(\Sn^n,\base)$.
+\end{itemize}
+In order to write down the induction principle for this presentation, we would need to define a notion of ``dependent $n$-loop\indexdef{loop!dependent n-@dependent $n$-}'', along with the action of dependent functions on $n$-loops.
+We leave this to the reader (see \cref{ex:nspheres}); in the next section we will discuss a different way to define the spheres that is sometimes more tractable.
+
+
+\section{Suspensions}
+\label{sec:suspension}
+
+\indexsee{type!suspension of}{suspension}%
+\index{suspension|(defstyle}%
+The \define{suspension} of a type $A$ is the universal way of making the points of $A$ into paths (and hence the paths in $A$ into 2-paths, and so on).
+It is a type $\susp A$ defined by the following generators:\footnote{There is an unfortunate clash of notation with dependent pair types, which of course are also written with a $\Sigma$.
+ However, context usually disambiguates.}
+\begin{itemize}
+\item a point $\north:\susp A$,
+\item a point $\south:\susp A$, and
+\item a function $\merid:A \to (\id[\susp A]\north\south)$.
+\end{itemize}
+The names are intended to suggest a ``globe'' of sorts, with a north pole, a south pole, and an $A$'s worth of meridians
+\indexdef{pole}%
+\indexdef{meridian}%
+from one to the other.
+Indeed, as we will see, if $A=\Sn^1$, then its suspension is equivalent to the surface of an ordinary sphere, $\Sn^2$.
+
+\index{recursion principle!for suspension}%
+The recursion principle for $\susp A$ says that given a type $B$ together with
+\begin{itemize}
+\item points $n,s:B$ and
+\item a function $m:A \to (n=s)$,
+\end{itemize}
+we have a function $f:\susp A \to B$ such that $f(\north)\jdeq n$ and $f(\south)\jdeq s$, and for all $a:A$ we have $\ap f {\merid(a)} = m(a)$.
+\index{induction principle!for suspension}%
+Similarly, the induction principle says that given $P:\susp A \to \type$ together with
+\begin{itemize}
+\item a point $n:P(\north)$,
+\item a point $s:P(\south)$, and
+\item for each $a:A$, a path $m(a):\dpath P{\merid(a)}ns$,
+\end{itemize}
+there exists a function $f:\prd{x:\susp A} P(x)$ such that $f(\north)\jdeq n$ and $f(\south)\jdeq s$ and for each $a:A$ we have $\apd f {\merid(a)} = m(a)$.
+
+Our first observation about suspension is that it gives another way to define the circle.
+
+\begin{lem}\label{thm:suspbool}
+ \index{type!circle}%
+ $\eqv{\susp\bool}{\Sn^1}$.
+\end{lem}
+\begin{proof}
+ Define $f:\susp\bool\to\Sn^1$ by recursion such that $f(\north)\defeq \base$ and $f(\south)\defeq\base$, while $\ap f{\merid(\bfalse)}\defid\lloop$ but $\ap f{\merid(\btrue)} \defid \refl{\base}$.
+ Define $g:\Sn^1\to\susp\bool$ by recursion such that $g(\base)\defeq \north$ and $\ap g \lloop \defid \merid(\bfalse) \ct \opp{\merid(\btrue)}$.
+ We now show that $f$ and $g$ are quasi-inverses.
+
+ First we show by induction that $g(f(x))=x$ for all $x:\susp \bool$.
+ If $x\jdeq\north$, then $g(f(\north)) \jdeq g(\base)\jdeq \north$, so we have $\refl{\north} : g(f(\north))=\north$.
+ If $x\jdeq\south$, then $g(f(\south)) \jdeq g(\base)\jdeq \north$, and we choose the equality $\merid(\btrue) : g(f(\south)) = \south$.
+ It remains to show that for any $y:\bool$, these equalities are preserved as $x$ varies along $\merid(y)$, which is to say that when $\refl{\north}$ is transported along $\merid(y)$ it yields $\merid(\btrue)$.
+ By transport in path spaces and pulled back fibrations, this means we are to show that
+ \[ \opp{\ap g {\ap f {\merid(y)}}} \ct \refl{\north} \ct \merid(y) = \merid(\btrue). \]
+ Of course, we may cancel $\refl{\north}$.
+ Now by \bool-induction, we may assume either $y\jdeq \bfalse$ or $y\jdeq \btrue$.
+ If $y\jdeq \bfalse$, then we have
+ \begin{align*}
+ \opp{\ap g {\ap f {\merid(\bfalse)}}} \ct \merid(\bfalse)
+ &= \opp{\ap g {\lloop}} \ct \merid(\bfalse)\\
+ &= \opp{(\merid(\bfalse) \ct \opp{\merid(\btrue)})} \ct \merid(\bfalse)\\
+ &= \merid(\btrue) \ct \opp{\merid(\bfalse)} \ct \merid(\bfalse)\\
+ &= \merid(\btrue)
+ \end{align*}
+ while if $y\jdeq \btrue$, then we have
+ \begin{align*}
+ \opp{\ap g {\ap f {\merid(\btrue)}}} \ct \merid(\btrue)
+ &= \opp{\ap g {\refl{\base}}} \ct \merid(\btrue)\\
+ &= \opp{\refl{\north}} \ct \merid(\btrue)\\
+ &= \merid(\btrue).
+ \end{align*}
+ Thus, for all $x:\susp \bool$, we have $g(f(x))=x$.
+
+ Now we show by induction that $f(g(x))=x$ for all $x:\Sn^1$.
+ If $x\jdeq \base$, then $f(g(\base))\jdeq f(\north)\jdeq\base$, so we have $\refl{\base} : f(g(\base))=\base$.
+ It remains to show that this equality is preserved as $x$ varies along $\lloop$, which is to say that it is transported along $\lloop$ to itself.
+ Again, by transport in path spaces and pulled back fibrations, this means to show that
+ \[ \opp{\ap f {\ap g {\lloop}}} \ct \refl{\base} \ct \lloop = \refl{\base}.\]
+ However, we have
+ \begin{align*}
+ \ap f {\ap g {\lloop}} &= \ap f {\merid(\bfalse) \ct \opp{\merid(\btrue)}}\\
+ &= \ap f {\merid(\bfalse)} \ct \opp{\ap f {\merid(\btrue)}}\\
+ &= \lloop \ct \refl{\base}
+ \end{align*}
+ so this follows easily.
+\end{proof}
+
+Topologically, the two-point space \bool is also known as the \emph{0-dimensional sphere}, $\Sn^0$.
+(For instance, it is the space of points at distance $1$ from the origin in $\mathbb{R}^1$, just as the topological 1-sphere is the space of points at distance $1$ from the origin in $\mathbb{R}^2$.)
+Thus, \cref{thm:suspbool} can be phrased suggestively as $\eqv{\susp\Sn^0}{\Sn^1}$.
+\index{type!n-sphere@$n$-sphere|defstyle}%
+\indexsee{n-sphere@$n$-sphere}{type, $n$-sphere}%
+In fact, this pattern continues: we can define all the spheres inductively by
+\begin{equation}\label{eq:Snsusp}
+ \Sn^0 \defeq \bool
+ \qquad\text{and}\qquad
+ \Sn^{n+1} \defeq \susp \Sn^n.
+\end{equation}
+We can even start one dimension lower by defining $\Sn^{-1}\defeq \emptyt$, and observe that $\eqv{\susp\emptyt}{\bool}$.
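The latter equivalence can be proved just as in \cref{thm:suspbool}, but with vacuous path data: define $f:\susp\emptyt\to\bool$ by recursion with
\[ f(\north)\defeq\btrue \qquad\text{and}\qquad f(\south)\defeq\bfalse, \]
the required function $\emptyt\to(\btrue=\bfalse)$ being given by the recursion principle of $\emptyt$, and $g:\bool\to\susp\emptyt$ by $g(\btrue)\defeq\north$ and $g(\bfalse)\defeq\south$. Since $\emptyt$ has no elements, the meridian cases in the proof that $f$ and $g$ are quasi-inverses hold vacuously.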
+
+To prove carefully that this agrees with the definition of $\Sn^n$ from the previous section would require making the latter more explicit.
+However, we can show that the recursive definition has the same universal property that we would expect the other one to have.
+If $(A,a_0)$ and $(B,b_0)$ are pointed types (with basepoints often left implicit), let $\Map_*(A,B)$ denote the type of based maps:
+\index{based map}
+\symlabel{based-maps}
+\[ \Map_*(A,B) \defeq \sm{f:A\to B} (f(a_0)=b_0). \]
+Note that any type $A$ gives rise to a pointed type $A_+ \defeq A+\unit$ with basepoint $\inr(\ttt)$; this is called \emph{adjoining a disjoint basepoint}.
+\indexdef{basepoint!adjoining a disjoint}%
+\index{disjoint!basepoint}%
+\index{adjoining a disjoint basepoint}%
+
+\begin{lem}
+ For a type $A$ and a pointed type $(B,b_0)$, we have
+ \[ \eqv{\Map_*(A_+,B)}{(A\to B)} \]
+\end{lem}
+Note that on the right we have the ordinary type of \emph{unbased} functions from $A$ to $B$.
+\begin{proof}
+ From left to right, given $f:A_+ \to B$ with $p:f(\inr(\ttt)) = b_0$, we have $f\circ \inl : A \to B$.
+ And from right to left, given $g:A\to B$ we define $g':A_+ \to B$ by $g'(\inl(a))\defeq g(a)$ and $g'(\inr(u)) \defeq b_0$.
+ We leave it to the reader to show that these are quasi-inverse operations.
+\end{proof}
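A sketch of that verification: for $g:A\to B$, the composite $g'\circ\inl$ is $g$ by definition, with basepoint equality $\refl{b_0}$. Conversely, given $f:A_+\to B$ with $p:f(\inr(\ttt))=b_0$, the functions $(f\circ\inl)'$ and $f$ agree on $\inl(a)$ by $\refl{f(\inl(a))}$ and on $\inr(\ttt)$ by $\opp p$, hence are equal by function extensionality (using induction on the coproduct and on $\unit$); the remaining compatibility of $p$ with this path is a short transport computation.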
+
+In particular, note that $\eqv{\bool}{\unit_+}$.
+Thus, for any pointed type $B$ we have
+\[{\Map_*(\bool,B)} \eqvsym {(\unit \to B)}\eqvsym B.\]
+%
+Now recall that the loop space\index{loop space} operation $\Omega$ acts on pointed types, with definition $\Omega(A,a_0) \defeq (\id[A]{a_0}{a_0},\refl{a_0})$.
+We can also make the suspension $\susp$ act on pointed types, by $\susp(A,a_0)\defeq (\susp A,\north)$.
+
+\begin{lem}\label{lem:susp-loop-adj}
+ \index{universal!property!of suspension}%
+ For pointed types $(A,a_0)$ and $(B,b_0)$ we have
+ \[ \eqv{\Map_*(\susp A, B)}{\Map_*(A,\Omega B)}.\]
+\end{lem}
+\addtocounter{thm}{1} % Because we removed a numbered equation in commit 8f54d16
+\begin{proof}
+We first observe the following chain of equivalences:
+\begin{align*}
+\Map_*(\susp A, B) & \defeq \sm{f:\susp A\to B} (f(\north)=b_0) \\
+ & \eqvsym \sm{f:\sm{b_n : B}{b_s : B} (A \to (b_n = b_s))} (\fst(f)=b_0) \\
+ & \eqvsym \sm{b_n : B}{b_s : B} \big(A \to (b_n = b_s)\big) \times (b_n=b_0) \\
+ & \eqvsym \sm{p : \sm{b_n : B} (b_n=b_0)}{b_s : B} (A \to (\fst(p) = b_s)) \\
+ & \eqvsym \sm{b_s : B} (A \to (b_0 = b_s))
+\end{align*}
+The first equivalence is by the universal property of suspensions, which says that
+\[ \Parens{\susp A \to B} \eqvsym \Parens{\sm{b_n : B} \sm{b_s : B} (A \to (b_n = b_s)) } \]
+with the function from right to left given by the recursor (see \cref{ex:susp-lump}).
+The second and third equivalences are by \cref{ex:sigma-assoc}, along with a reordering of components.
+Finally, the last equivalence follows from \cref{thm:omit-contr}, since by \cref{thm:contr-paths}, $\sm{b_n : B} (b_n=b_0)$ is contractible with center $(b_0, \refl{b_0})$.
+
+The proof is now completed by the following chain of equivalences:
+\begin{align*}
+ \sm{b_s : B} (A \to (b_0 = b_s))
+ &\eqvsym \sm{b_s : B}{g:A \to (b_0 = b_s)}{q:b_0 = b_s} (g(a_0) = q)\\
+ &\eqvsym \sm{r : \sm{b_s : B}(b_0 = b_s)}{g:A \to (b_0 = \proj1(r))} (g(a_0) = \proj2(r))\\
+ &\eqvsym \sm{g:A \to (b_0 = b_0)} (g(a_0) = \refl{b_0})\\
+ &\jdeq \Map_*(A,\Omega B).
+\end{align*}
+Similar to before, the first and last equivalences are by \cref{thm:omit-contr,thm:contr-paths}, and the second is by \cref{ex:sigma-assoc} and reordering of components.
+\end{proof}
+
+\index{type!n-sphere@$n$-sphere|defstyle}%
+In particular, for the spheres defined as in~\eqref{eq:Snsusp} we have
+\index{universal!property!of Sn@of $\Sn^n$}%
+\[ \Map_*(\Sn^n,B) \eqvsym \Map_*(\Sn^{n-1}, \Omega B) \eqvsym \cdots \eqvsym \Map_*(\bool,\Omega^n B) \eqvsym \Omega^n B. \]
+Thus, these spheres $\Sn^n$ have the universal property that we would expect from the spheres defined directly in terms of $n$-fold loop spaces\index{loop space!iterated} as in \cref{sec:circle}.
+
+\index{suspension|)}%
+
+\section{Cell complexes}
+\label{sec:cell-complexes}
+
+\index{cell complex|(defstyle}%
+\index{CW complex|(defstyle}%
+In classical topology, a \emph{cell complex} is a space obtained by successively attaching discs along their boundaries.
+It is called a \emph{CW complex} if the boundary of an $n$-dimensional disc\index{disc} is constrained to lie in the discs of dimension strictly less than $n$ (the $(n-1)$-skeleton).\index{skeleton!of a CW-complex}
+
+Any finite CW complex can be presented as a higher inductive type, by turning $n$-dimensional discs into $n$-dimensional paths and partitioning the image of the attaching\index{attaching map} map into a source\index{source!of a path constructor} and a target\index{target!of a path constructor}, with each written as a composite of lower dimensional paths.
+Our explicit definitions of $\Sn^1$ and $\Sn^2$ in \cref{sec:circle} had this form.
+
+\index{torus}%
+Another example is the torus $T^2$, which is generated by:
+\begin{itemize}
+\item a point $b:T^2$,
+\item a path $p:b=b$,
+\item another path $q:b=b$, and
+\item a 2-path $t: p\ct q = q \ct p$.
+\end{itemize}
+Perhaps the easiest way to see that this is a torus is to start with a rectangle, having four corners $a,b,c,d$, four edges $p,q,r,s$, and an interior which is manifestly a 2-path $t$ from $p\ct q$ to $r\ct s$:
+\begin{equation*}
+ \xymatrix{
+ a\ar@{=}[r]^p\ar@{=}[d]_r \ar@{}[dr]|{\Downarrow t} &
+ b\ar@{=}[d]^q\\
+ c\ar@{=}[r]_s &
+ d
+ }
+\end{equation*}
+Now identify the edge $r$ with $q$ and the edge $s$ with $p$, resulting in also identifying all four corners.
+Topologically, this identification can be seen to produce a torus.
+
+\index{induction principle!for torus}%
+\index{torus!induction principle for}%
+The induction principle for the torus is the trickiest of any we've written out so far.
+Given $P:T^2\to\type$, for a section $\prd{x:T^2} P(x)$ we require
+\begin{itemize}
+\item a point $b':P(b)$,
+\item a path $p' : \dpath P p {b'} {b'}$,
+\item a path $q' : \dpath P q {b'} {b'}$, and
+\item a 2-path $t'$ between the ``composites'' $p'\ct q'$ and $q'\ct p'$, lying over $t$.
+\end{itemize}
+In order to make sense of this last datum, we need a composition operation for dependent paths, but this is not hard to define.
+Then the induction principle gives a function $f:\prd{x:T^2} P(x)$ such that $f(b)\jdeq b'$ and $\apd f {p} = p'$ and $\apd f {q} = q'$ and something like ``$\apdtwo f t = t'$''.
+However, this is not well-typed as it stands, firstly because the equalities $\apd f {p} = p'$ and $\apd f {q} = q'$ are not judgmental, and secondly because $\apdfunc f$ only preserves path concatenation up to homotopy.
+We leave the details to the reader (see \cref{ex:torus}).
+
+Of course, another definition of the torus is $T^2 \defeq \Sn^1 \times \Sn^1$ (in \cref{ex:torus-s1-times-s1} we ask the reader to verify the equivalence of the two).
+\index{Klein bottle}%
+\index{projective plane}%
+The cell-complex definition, however, generalizes easily to other spaces without such descriptions, such as the Klein bottle, the projective plane, etc.
+But it does get increasingly difficult to write down the induction principles, requiring us to define notions of dependent $n$-paths and of $\apdfunc{}$ acting on $n$-paths.
+Fortunately, once we have the spheres in hand, there is a way around this.
+
+\section{Hubs and spokes}
+\label{sec:hubs-spokes}
+
+\indexsee{spoke}{hub and spoke}%
+\index{hub and spoke|(defstyle}%
+
+In topology, one usually speaks of building CW complexes by attaching $n$-dimensional discs along their $(n-1)$-dimensional boundary spheres.
+\index{attaching map}%
+However, another way to express this is by gluing in the \emph{cone}\index{cone!of a sphere} on an $(n-1)$-dimensional sphere.
+That is, we regard a disc\index{disc} as consisting of a cone point (or ``hub''), with meridians
+\index{meridian}%
+(or ``spokes'') connecting that point to every point on the boundary, continuously, as shown in \cref{fig:hub-and-spokes}.
+
+\begin{figure}
+ \centering
+ \begin{tikzpicture}
+ \draw (0,0) circle (2cm);
+ \foreach \x in {0,20,...,350}
+ \draw[\OPTblue] (0,0) -- (\x:2cm);
+ \node[\OPTblue,circle,fill,inner sep=2pt] (hub) at (0,0) {};
+ \end{tikzpicture}
+ \caption{A 2-disc made out of a hub and spokes}
+ \label{fig:hub-and-spokes}
+\end{figure}
+
+We can use this idea to express higher inductive types containing $n$-dimensional path con\-struc\-tors for $n>1$ in terms of ones containing only 1-di\-men\-sion\-al path con\-struc\-tors.
+The point is that we can obtain an $n$-dimensional path as a continuous family of 1-dimensional paths parametrized by an $(n-1)$-di\-men\-sion\-al object.
+The simplest $(n-1)$-dimensional object to use is the $(n-1)$-sphere, although in some cases a different one may be preferable.
+(Recall that we were able to define the spheres in \cref{sec:suspension} inductively using suspensions, which involve only 1-dimensional path constructors.
+Indeed, suspension can also be regarded as an instance of this idea, since it involves a family of 1-dimensional paths parametrized by the type being suspended.)
+
+\index{torus}
+For instance, the torus $T^2$ from the previous section could be defined instead to be generated by:
+\begin{itemize}
+\item a point $b:T^2$,
+\item a path $p:b=b$,
+\item another path $q:b=b$,
+\item a point $h:T^2$, and
+\item for each $x:\Sn^1$, a path $s(x) : f(x)=h$, where $f:\Sn^1\to T^2$ is defined by $f(\base)\defeq b$ and $\ap f \lloop \defid p \ct q \ct \opp p \ct \opp q$.
+\end{itemize}
+The induction principle for this version of the torus says that given $P:T^2\to\type$, for a section $\prd{x:T^2} P(x)$ we require
+\begin{itemize}
+\item a point $b':P(b)$,
+\item a path $p' : \dpath P p {b'} {b'}$,
+\item a path $q' : \dpath P q {b'} {b'}$,
+\item a point $h':P(h)$, and
+\item for each $x:\Sn^1$, a path $\dpath {P}{s(x)}{g(x)}{h'}$, where $g:\prd{x:\Sn^1} P(f(x))$ is defined by $g(\base)\defeq b'$ and $\apd g \lloop \defid t(p' \ct q' \ct \opp{(p')} \ct \opp{(q')})$.
+ In the latter, $\ct$ denotes concatenation of dependent paths, and the definition of $t:\eqv{(\dpath{P}{\ap f \lloop}{b'}{b'})}{(\dpath{P\circ f}{\lloop}{b'}{b'})}$ is left to the reader.
+\end{itemize}
+Note that there is no need for dependent 2-paths or $\apdtwofunc{}$.
+We leave it to the reader to write out the computation rules.
+
+\begin{rmk}\label{rmk:spokes-no-hub}
+One might question the need for introducing the hub point $h$; why couldn't we instead simply add paths continuously relating the boundary of the disc to a point \emph{on} that boundary, as shown in \cref{fig:spokes-no-hub}?
+However, this does not work without further modification.
+For if, given some $f:\Sn^1 \to X$, we give a path constructor connecting each $f(x)$ to $f(\base)$, then what we end up with is more like the picture in \cref{fig:spokes-no-hub-ii} of a cone whose vertex is twisted around and glued to some point on its base.
+The problem is that the specified path from $f(\base)$ to itself may not be reflexivity.
+We could remedy the problem by adding a 2-dimensional path constructor to ensure this, but using a separate hub avoids the need for any path constructors of dimension above~$1$.
+\end{rmk}
+
+\begin{figure}
+ \centering
+ \begin{minipage}{2in}
+ \begin{center}
+ \begin{tikzpicture}
+ \draw (0,0) circle (2cm);
+ \clip (0,0) circle (2cm);
+ \foreach \x in {0,15,...,165}
+ \draw[\OPTblue] (0,-2cm) -- (\x:4cm);
+ \end{tikzpicture}
+ \end{center}
+ \caption{Hubless spokes}
+ \label{fig:spokes-no-hub}
+ \end{minipage}
+ \qquad
+ \begin{minipage}{2in}
+ \begin{center}
+ \begin{tikzpicture}[xscale=1.3]
+ \draw (0,0) arc (-90:90:.7cm and 2cm) ;
+ \draw[dashed] (0,4cm) arc (90:270:.7cm and 2cm) ;
+ \draw[\OPTblue] (0,0) to[out=90,in=0] (-1,1) to[out=180,in=180] (0,0);
+ \draw[\OPTblue] (0,4cm) to[out=180,in=180,looseness=2] (0,0);
+ \path (0,0) arc (-90:-60:.7cm and 2cm) node (a) {};
+ \draw[\OPTblue] (a.center) to[out=120,in=10] (-1.2,1.2) to[out=190,in=180] (0,0);
+ \path (0,0) arc (-90:-30:.7cm and 2cm) node (b) {};
+ \draw[\OPTblue] (b.center) to[out=150,in=20] (-1.4,1.4) to[out=200,in=180] (0,0);
+ \path (0,0) arc (-90:0:.7cm and 2cm) node (c) {};
+ \draw[\OPTblue] (c.center) to[out=180,in=30] (-1.5,1.5) to[out=210,in=180] (0,0);
+ \path (0,0) arc (-90:30:.7cm and 2cm) node (d) {};
+ \draw[\OPTblue] (d.center) to[out=190,in=50] (-1.7,1.7) to[out=230,in=180] (0,0);
+ \path (0,0) arc (-90:60:.7cm and 2cm) node (e) {};
+ \draw[\OPTblue] (e.center) to[out=200,in=70] (-2,2) to[out=250,in=180] (0,0);
+ \clip (0,0) to[out=90,in=0] (-1,1) to[out=180,in=180] (0,0);
+ \draw (0,4cm) arc (90:270:.7cm and 2cm) ;
+ \end{tikzpicture}
+ \end{center}
+ \caption{Hubless spokes, II}
+ \label{fig:spokes-no-hub-ii}
+ \end{minipage}
+\end{figure}
+
+\begin{rmk}
+ \index{computation rule!propositional}%
+ Note also that this ``translation'' of higher paths into 1-paths does not preserve judgmental computation rules for these paths, though it does preserve propositional ones.
+\end{rmk}
+
+\index{cell complex|)}%
+\index{CW complex|)}%
+
+\index{hub and spoke|)}%
+
+
+\section{Pushouts}
+\label{sec:colimits}
+
+\index{type!limit}%
+\index{type!colimit}%
+\index{limit!of types}%
+\index{colimit!of types}%
+From a category-theoretic point of view, one of the important aspects of any foundational system is the ability to construct limits and colimits.
+In set-theoretic foundations, these are limits and colimits of sets, whereas in our case they are limits and colimits of \emph{types}.
+We have seen in \cref{sec:universal-properties} that cartesian product types have the correct universal property of a categorical product of types, and in \cref{ex:coprod-ump} that coproduct types likewise have their expected universal property.
+
+As remarked in \cref{sec:universal-properties}, more general limits can be constructed using identity types and $\Sigma$-types, e.g.\ the pullback\index{pullback} of $f:A\to C$ and $g:B\to C$ is $\sm{a:A}{b:B} (f(a)=g(b))$ (see \cref{ex:pullback}).
However, more general \emph{colimits} require identifying elements coming from different types, for which higher inductive types are well-adapted.
+Since all our constructions are homotopy-invariant, all our colimits are necessarily \emph{homotopy colimits}, but we drop the ubiquitous adjective in the interests of concision.
+
+In this section we discuss \emph{pushouts}, as perhaps the simplest and one of the most useful colimits.
+Indeed, one expects all finite colimits (for a suitable homotopical definition of ``finite'') to be constructible from pushouts and finite coproducts.
+It is also possible to give a direct construction of more general colimits using higher inductive types, but this is somewhat technical, and also not completely satisfactory since we do not yet have a good fully general notion of homotopy coherent diagrams.
+
+\indexsee{type!pushout of}{pushout}%
+\index{pushout|(defstyle}%
+\index{span}%
+Suppose given a span of types and functions:
+\[\Ddiag=\;\vcenter{\xymatrix{C \ar^g[r] \ar_f[d] & B \\ A & }}\]
+The \define{pushout} of this span is the higher inductive type $A\sqcup^CB$ presented by
+\begin{itemize}
+\item a function $\inl:A\to A\sqcup^CB$,
+\item a function $\inr:B \to A\sqcup^CB$, and
+\item for each $c:C$ a path $\glue(c):(\inl(f(c))=\inr(g(c)))$.
+\end{itemize}
In other words, $A\sqcup^CB$ is the disjoint union of $A$ and $B$, together with, for every $c:C$, a witness that the images $\inl(f(c))$ and $\inr(g(c))$ are equal.
+The recursion principle says that if $D$ is another type, we can define a map $s:A\sqcup^CB\to{}D$ by defining
+\begin{itemize}
+\item for each $a:A$, the value of $s(\inl(a)):D$,
+\item for each $b:B$, the value of $s(\inr(b)):D$, and
+\item for each $c:C$, the value of $\mapfunc{s}(\glue(c)):s(\inl(f(c)))=s(\inr(g(c)))$.
+\end{itemize}
+We leave it to the reader to formulate the induction principle.
+It also implies the uniqueness principle that if $s,s':A\sqcup^CB\to{}D$ are two maps such that
+\index{uniqueness!principle, propositional!for functions on a pushout}%
+\begin{align*}
+ s(\inl(a))&=s'(\inl(a))\\
+ s(\inr(b))&=s'(\inr(b))\\
+ \mapfunc{s}(\glue(c))&=\mapfunc{s'}(\glue(c))
+ \qquad\text{(modulo the previous two equalities)}
+\end{align*}
+for every $a,b,c$, then $s=s'$.
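A theory with quotients but without general higher inductive types can still encode this pushout as a quotient of the coproduct. The following Lean sketch illustrates the constructors and the recursion principle just stated (the names \texttt{Pushout}, \texttt{pinl}, \texttt{pinr}, \texttt{pglue} are illustrative; note that Lean's quotients automatically $0$-truncate, so this yields the set-level pushout rather than the homotopy pushout):

```lean
-- The span A ← C → B; the relation identifies inl (f c) with inr (g c).
def PushoutRel {A B C : Type} (f : C → A) (g : C → B) :
    A ⊕ B → A ⊕ B → Prop :=
  fun x y => ∃ c : C, x = Sum.inl (f c) ∧ y = Sum.inr (g c)

def Pushout {A B C : Type} (f : C → A) (g : C → B) : Type :=
  Quot (PushoutRel f g)

def pinl {A B C : Type} (f : C → A) (g : C → B) (a : A) : Pushout f g :=
  Quot.mk (PushoutRel f g) (Sum.inl a)

def pinr {A B C : Type} (f : C → A) (g : C → B) (b : B) : Pushout f g :=
  Quot.mk (PushoutRel f g) (Sum.inr b)

-- The path constructor glue(c) becomes an equality, via Quot.sound.
theorem pglue {A B C : Type} (f : C → A) (g : C → B) (c : C) :
    pinl f g (f c) = pinr f g (g c) :=
  Quot.sound ⟨c, rfl, rfl⟩

-- The recursion principle: a cocone (i, j, h) on D induces a map to D.
def pushoutRec {A B C D : Type} {f : C → A} {g : C → B}
    (i : A → D) (j : B → D) (h : ∀ c, i (f c) = j (g c)) :
    Pushout f g → D :=
  Quot.lift (Sum.elim i j) fun x y hxy => by
    cases hxy with
    | intro c hc => rw [hc.1, hc.2]; exact h c
```

Here the three clauses of the recursion principle appear as the two components of \texttt{Sum.elim} together with the compatibility proof passed to \texttt{Quot.lift}.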
+
+To formulate the universal property of a pushout, we introduce the following.
+
+\begin{defn}\label{defn:cocone}
+ Given a span $\Ddiag= (A \xleftarrow{f} C \xrightarrow{g} B)$ and a type $D$, a \define{cocone under $\Ddiag$ with vertex $D$}
+ \indexdef{cocone}%
+ \index{vertex of a cocone}%
+ consists of functions $i:A\to{}D$ and $j:B\to{}D$ and a homotopy $h : \prd{c:C} (i(f(c))=j(g(c)))$:
+ \[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }}
+ \xymatrix{C \ar^g[r] \ar_f[d] \drtwocell{^h} & B \ar^j[d] \\ A \ar_i[r] & D
+ }\]
+ We denote by $\cocone{\Ddiag}{D}$ the type of all such cocones, i.e.
+ \[ \cocone{\Ddiag}{D} \defeq
+ \sm{i:A\to D}{j:B\to D} \prd{c:C} (i(f(c))=j(g(c))).
+ \]
+\end{defn}
+
+Of course, there is a canonical cocone under $\Ddiag$ with vertex $A\sqcup^C B$ consisting of $\inl$, $\inr$, and $\glue$.
+\[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }}
+\xymatrix{C \ar^g[r] \ar_f[d] \drtwocell{^\glue\ \ } & B \ar^\inr[d] \\
+ A \ar_-\inl[r] & A\sqcup^CB }\]
+The following lemma says that this is the universal such cocone.
+
+\begin{lem}\label{thm:pushout-ump}
+ \index{universal!property!of pushout}%
+ For any type $E$, there is an equivalence
+ \[ (A\sqcup^C B \to E) \;\eqvsym\; \cocone{\Ddiag}{E}. \]
+\end{lem}
+\begin{proof}
+ Let's consider an arbitrary type $E:\type$.
+ There is a canonical function $c_\sqcup$ defined by
+ \[\function{(A\sqcup^CB\to{}E)}{\cocone{\Ddiag}{E}}
+ {t}{(t\circ{}\inl,t\circ{}\inr,\mapfunc{t}\circ{}\glue)}\]
+ We write informally $t\mapsto\composecocone{t}c_\sqcup$ for this function.
+ We show that this is an equivalence.
+
 Firstly, given a cocone $c=(i,j,h):\cocone{\Ddiag}{E}$, we need to construct a
 map $\mathsf{s}(c)$ from $A\sqcup^CB$ to $E$.
+ \[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }}
+ \xymatrix{C \ar^g[r] \ar_f[d] \drtwocell{^h} & B \ar^{j}[d] \\
+ A \ar_-{i}[r] & E }\]
+ The map $\mathsf{s}(c)$ is defined in the following way
+ \begin{align*}
+ \mathsf{s}(c)(\inl(a))&\defeq i(a),\\
+ \mathsf{s}(c)(\inr(b))&\defeq j(b),\\
+ \mapfunc{\mathsf{s}(c)}(\glue(x))&\defid h(x).
+ \end{align*}
+We have defined a map
+\[\function{\cocone{\Ddiag}{E}}{(A\sqcup^CB\to{}E)}{c}{\mathsf{s}(c)}\]
+and we need to prove that this map is an inverse to
+$t\mapsto{}\composecocone{t}c_\sqcup$.
+On the one hand, if $c=(i,j,h):\cocone{\Ddiag}{E}$, we have
+\begin{align*}
+ \composecocone{\mathsf{s}(c)}c_\sqcup & =
+ (\mathsf{s}(c)\circ\inl,\mathsf{s}(c)\circ\inr,
+ \mapfunc{\mathsf{s}(c)}\circ\glue) \\
+ & = (\lamu{a:A} \mathsf{s}(c)(\inl(a)),\;
+ \lamu{b:B} \mathsf{s}(c)(\inr(b)),\;
+ \lamu{x:C} \mapfunc{\mathsf{s}(c)}(\glue(x))) \\
+ & = (\lamu{a:A} i(a),\;
+ \lamu{b:B} j(b),\;
+ \lamu{x:C} h(x)) \\
+ & \jdeq (i, j, h) \\
+ & = c.
+\end{align*}
+%
+On the other hand, if $t:A\sqcup^CB\to{}E$, we want to prove that
+$\mathsf{s}(\composecocone{t}c_\sqcup)=t$.
+For $a:A$, we have
+\[\mathsf{s}(\composecocone{t}c_\sqcup)(\inl(a))=t(\inl(a))\]
+because the first component of $\composecocone{t}c_\sqcup$ is $t\circ\inl$. In
+the same way, for $b:B$ we have
+\[\mathsf{s}(\composecocone{t}c_\sqcup)(\inr(b))=t(\inr(b))\]
+and for $x:C$ we have
+\[\mapfunc{\mathsf{s}(\composecocone{t}c_\sqcup)}(\glue(x))
+=\mapfunc{t}(\glue(x))\]
+hence $\mathsf{s}(\composecocone{t}c_\sqcup)=t$.
+
+This proves that $c\mapsto\mathsf{s}(c)$ is a quasi-inverse to $t\mapsto{}\composecocone{t}c_\sqcup$, as desired.
+\end{proof}
+
+A number of standard homotopy-theoretic constructions can be expressed as (homotopy) pushouts.
+\begin{itemize}
+\item The pushout of the span $\unit \leftarrow A \to \unit$ is the \define{suspension} $\susp A$ (see \cref{sec:suspension}).%
+ \index{suspension}
+\symlabel{join}
+\item The pushout of $A \xleftarrow{\proj1} A\times B \xrightarrow{\proj2} B$ is called the \define{join} of $A$ and $B$, written $A*B$.%
+ \indexdef{join!of types}
+\item The pushout of $\unit \leftarrow A \xrightarrow{f} B$ is the \define{cone} or \define{cofiber} of $f$.%
+ \indexdef{cone!of a function}%
+ \indexsee{mapping cone}{cone of a function}%
+ \indexdef{cofiber of a function}%
+\symlabel{wedge}
+\item If $A$ and $B$ are equipped with basepoints $a_0:A$ and $b_0:B$, then the pushout of $A \xleftarrow{a_0} \unit \xrightarrow{b_0} B$ is the \define{wedge} $A\vee B$.%
+ \indexdef{wedge}
+\symlabel{smash}
+\item If $A$ and $B$ are pointed as before, define $f:A\vee B \to A\times B$ by $f(\inl(a))\defeq (a,b_0)$ and $f(\inr(b))\defeq (a_0,b)$, with $\ap f \glue \defid \refl{(a_0,b_0)}$.
+ Then the cone of $f$ is called the \define{smash product} $A\wedge B$.%
+ \indexdef{smash product}
+\end{itemize}
+We will discuss pushouts further in \cref{cha:hlevels,cha:homotopy}.
+
+\begin{rmk}
+ As remarked in \cref{subsec:prop-trunc}, the notations $\wedge$ and $\vee$ for the smash product and wedge of pointed spaces are also used in logic for ``and'' and ``or'', respectively.
+ Since types in homotopy type theory can behave either like spaces or like propositions, there is technically a potential for conflict --- but since they rarely do both at once, context generally disambiguates.
+ Furthermore, the smash product and wedge only apply to \emph{pointed} spaces, while the only pointed mere proposition is $\top\jdeq\unit$ --- and we have $\unit\wedge \unit = \unit$ and $\unit\vee\unit=\unit$ for either meaning of $\wedge$ and $\vee$.
+\end{rmk}
+
+\index{pushout|)}%
+
+\begin{rmk}
+ Note that colimits do not in general preserve truncatedness.
+ For instance, $\Sn^0$ and \unit are both sets, but the pushout of $\unit \leftarrow \Sn^0 \to \unit$ is $\Sn^1$, which is not a set.
+ If we are interested in colimits in the category of $n$-types, therefore (and, in particular, in the category of sets), we need to ``truncate'' the colimit somehow.
+ We will return to this point in \cref{sec:hittruncations,cha:hlevels,cha:set-math}.
+\end{rmk}
+
+
+\section{Truncations}
+\label{sec:hittruncations}
+
+\index{truncation!propositional|(}%
+In \cref{subsec:prop-trunc} we introduced the propositional truncation as a new type forming operation;
+we now observe that it can be obtained as a special case of higher inductive types.
+This reduces the problem of understanding truncations to the problem of understanding higher inductives, which at least are amenable to a systematic treatment.
+It is also interesting because it provides our first example of a higher inductive type which is truly \emph{recursive}, in that its constructors take inputs from the type being defined (as does the successor $\suc:\nat\to\nat$).
+
+Let $A$ be a type; we define its propositional truncation $\brck A$ to be the higher inductive type generated by:
+\begin{itemize}
+\item A function $\bprojf : A \to \brck A$, and
+\item for each $x,y:\brck A$, a path $x=y$.
+\end{itemize}
+Note that the second constructor is by definition the assertion that $\brck A$ is a mere proposition.
+Thus, the definition of $\brck A$ can be interpreted as saying that $\brck A$ is freely generated by a function $A\to\brck A$ and the fact that it is a mere proposition.
+
+The recursion principle for this higher inductive definition is easy to write down: it says that given any type $B$ together with
+\begin{itemize}
+\item a function $g:A\to B$, and
+\item for any $x,y:B$, a path $x=_B y$,
+\end{itemize}
+there exists a function $f:\brck A \to B$ such that
+\begin{itemize}
+\item $f(\bproj a) \jdeq g(a)$ for all $a:A$, and
+\item for any $x,y:\brck A$, the function $\apfunc f$ takes the specified path $x=y$ in $\brck A$ to the specified path $f(x) = f(y)$ in $B$ (propositionally).
+\end{itemize}
+\index{recursion principle!for truncation}%
+These are exactly the hypotheses that we stated in \cref{subsec:prop-trunc} for the recursion principle of propositional truncation --- a function $A\to B$ such that $B$ is a mere proposition --- and the first part of the conclusion is exactly what we stated there as well.
+The second part (the action of $\apfunc f$) was not mentioned previously, but it turns out to be vacuous in this case, because $B$ is a mere proposition, so \emph{any} two paths in it are automatically equal.
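The recursion principle just described can be mimicked in any theory with quotients by truncating with respect to the total relation, so that any two elements become identified. A brief Lean sketch (essentially Lean's built-in \texttt{Trunc}; the names \texttt{PTrunc}, \texttt{irrel}, \texttt{lift} are illustrative, and since all of Lean's types are sets this models only the set-level behavior):

```lean
-- Propositional truncation as the quotient by the always-true relation.
def PTrunc (A : Type) : Type :=
  Quot fun (_ _ : A) => True

def PTrunc.mk {A : Type} (a : A) : PTrunc A :=
  Quot.mk _ a

-- The second constructor: any two elements of ∥A∥ are equal.
theorem PTrunc.irrel {A : Type} (x y : PTrunc A) : x = y := by
  induction x using Quot.ind
  induction y using Quot.ind
  exact Quot.sound trivial

-- The recursion principle: to map ∥A∥ into a "mere proposition" B
-- (a type all of whose elements are equal), it suffices to map A into B.
def PTrunc.lift {A B : Type} (hB : ∀ x y : B, x = y) (g : A → B) :
    PTrunc A → B :=
  Quot.lift g fun a b _ => hB (g a) (g b)
```

The hypothesis \texttt{hB} is exactly the assumption that $B$ is a mere proposition, and the compatibility proof handed to \texttt{Quot.lift} is the (vacuous, as noted above) action on the specified paths.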
+
+\index{induction principle!for truncation}%
+There is also an induction principle for $\brck A$, which says that given any $B:\brck A \to \type$ together with
+\begin{itemize}
+\item a function $g:\prd{a:A} B(\bproj a)$, and
+\item for any $x,y:\brck A$ and $u:B(x)$ and $v:B(y)$, a dependent path $q:\dpath{B}{p(x,y)}{u}{v}$, where $p(x,y)$ is the path coming from the second constructor of $\brck A$,
+\end{itemize}
+there exists $f:\prd{x:\brck A} B(x)$ such that $f(\bproj a)\jdeq g(a)$ for $a:A$, and also another computation rule.
+However, because there can be at most one function between any two mere propositions (up to homotopy), this induction principle is not really useful (see also \cref{ex:prop-trunc-ind}).
+
+\index{truncation!propositional|)}%
+\index{truncation!set|(}%
+
+\index{set|(}%
+We can, however, extend this idea to construct similar truncations landing in $n$-types, for any $n$.
+For instance, we might define the \emph{0-trun\-ca\-tion} $\trunc0A$ to be generated by
+\begin{itemize}
+\item A function $\tprojf0 : A \to \trunc0 A$, and
+\item For each $x,y:\trunc0A$ and each $p,q:x=y$, a path $p=q$.
+\end{itemize}
+Then $\trunc0A$ would be freely generated by a function $A\to \trunc0A$ together with the assertion that $\trunc0A$ is a set.
+A natural induction principle for it would say that given $B:\trunc0 A \to \type$ together with
+\begin{itemize}
+\item a function $g:\prd{a:A} B(\tproj0a)$, and
+\item for any $x,y:\trunc0A$ with $z:B(x)$ and $w:B(y)$, and each $p,q:x=y$ with $r:\dpath{B}{p}{z}{w}$ and $s:\dpath{B}{q}{z}{w}$, a 2-path $v:\dpath{\dpath{B}{-}{z}{w}}{u(x,y,p,q)}{r}{s}$, where $u(x,y,p,q):p=q$ is obtained from the second constructor of $\trunc0A$,
+\end{itemize}
+there exists $f:\prd{x:\trunc0A} B(x)$ such that $f(\tproj0a)\jdeq g(a)$ for all $a:A$, and also $\apdtwo{f}{u(x,y,p,q)}$ is the 2-path specified above.
+(As in the propositional case, the latter condition turns out to be uninteresting.)
+From this, however, we can prove a more useful induction principle.
+
+\begin{lem}\label{thm:trunc0-ind}
+ Suppose given $B:\trunc0 A \to \type$ together with $g:\prd{a:A} B(\tproj0a)$, and assume that each $B(x)$ is a set.
+ Then there exists $f:\prd{x:\trunc0A} B(x)$ such that $f(\tproj0a)\jdeq g(a)$ for all $a:A$.
+\end{lem}
+\begin{proof}
+ It suffices to construct, for any $x,y,z,w,p,q,r,s$ as above, a 2-path $v:\dpath{B}{u(x,y,p,q)}{r}{s}$.
+ However, by the definition of dependent 2-paths, this is an ordinary 2-path in the fiber $B(y)$.
+ Since $B(y)$ is a set, a 2-path exists between any two parallel paths.
+\end{proof}
+
+This implies the expected universal property.
+
+\begin{lem}\label{thm:trunc0-lump}
+ \index{universal!property!of truncation}%
+ For any set $B$ and any type $A$, composition with $\tprojf0:A\to \trunc0A$ determines an equivalence
+ \[ \eqvspaced{(\trunc0A\to B)}{(A\to B)}. \]
+\end{lem}
+\begin{proof}
+ The special case of \cref{thm:trunc0-ind} when $B$ is the constant family gives a map from right to left, which is a right inverse to the ``compose with $\tprojf0$'' function from left to right.
+ To show that it is also a left inverse, let $h:\trunc0A\to B$, and define $h':\trunc0A\to B$ by applying \cref{thm:trunc0-ind} to the composite $a\mapsto h(\tproj0a)$.
+ Thus, $h'(\tproj0a)=h(\tproj0a)$.
+
+ However, since $B$ is a set, for any $x:\trunc0A$ the type $h(x)=h'(x)$ is a mere proposition, and hence also a set.
+ Therefore, by \cref{thm:trunc0-ind}, the observation that $h'(\tproj0a)=h(\tproj0a)$ for any $a:A$ implies $h(x)=h'(x)$ for any $x:\trunc0A$, and hence $h=h'$.
+\end{proof}
+
+\index{limit!of sets}%
+\index{colimit!of sets}%
+For instance, this enables us to construct colimits of sets.
+We have seen that if $A \xleftarrow{f} C \xrightarrow{g} B$ is a span of sets, then the pushout $A\sqcup^C B$ may no longer be a set.
+(For instance, if $A$ and $B$ are \unit and $C$ is \bool, then the pushout is $\Sn^1$.)
+However, we can construct a pushout that is a set, and has the expected universal property with respect to other sets, by truncating.
+
+\begin{lem}\label{thm:set-pushout}
+ \index{universal!property!of pushout}%
+ Let $A \xleftarrow{f} C \xrightarrow{g} B$ be a span\index{span} of sets.
+ Then for any set $E$, there is a canonical equivalence
+ \[ \Parens{\trunc0{A\sqcup^C B} \to E} \;\eqvsym\; \cocone{\Ddiag}{E}. \]
+\end{lem}
+\begin{proof}
+ Compose the equivalences in \cref{thm:pushout-ump,thm:trunc0-lump}.
+\end{proof}
+
+We refer to $\trunc0{A\sqcup^C B}$ as the \define{set-pushout}
+\indexdef{set-pushout}%
+\index{pushout!of sets}
+of $f$ and $g$, to distinguish it from the (homotopy) pushout $A\sqcup^C B$.
+Alternatively, we could modify the definition of the pushout in \cref{sec:colimits} to include the $0$-truncation constructor directly, avoiding the need to truncate afterwards.
+Similar remarks apply to any sort of colimit of sets; we will explore this further in \cref{cha:set-math}.
+
+However, while the above definition of the 0-truncation works --- it gives what we want, and is consistent --- it has a couple of issues.
+Firstly, it doesn't fit so nicely into the general theory of higher inductive types.
+In general, it is tricky to deal directly with constructors such as the second one we have given for $\trunc0A$, whose \emph{inputs} involve not only elements of the type being defined, but paths in it.
+
However, this can be circumvented fairly easily.
Recall that in \cref{sec:bool-nat} we mentioned that we can allow a constructor of an inductive type $W$ to take ``infinitely many arguments'' of type $W$ by having it take a single argument of type $\nat\to W$.
+There is a general principle behind this: to model a constructor with funny-looking inputs, use an auxiliary inductive type (such as \nat) to parametrize them, reducing the input to a simple function with inductive domain.
+
+For the 0-truncation, we can consider the auxiliary \emph{higher} inductive type $S$ generated by two points $a,b:S$ and two paths $p,q:a=b$.
+Then the fishy-looking constructor of $\trunc 0A$ can be replaced by the unobjectionable
+\begin{itemize}
+\item For every $f:S\to \trunc 0A$, a path $\apfunc{f}(p) = \apfunc{f}(q)$.
+\end{itemize}
+Since to give a map out of $S$ is the same as to give two points and two parallel paths between them, this yields the same induction principle.
+
+\index{set|)}%
+
+\index{truncation!set|)}%
+\index{truncation!n-truncation@$n$-truncation}%
+A more serious problem with our current definition of $0$-truncation, however, is that it doesn't generalize very well.
If we want to describe a definition of ``$n$-truncation'' into $n$-types uniformly for all $n:\nat$, then this approach is infeasible, since the second constructor would need a number of arguments that increases with $n$.
+In \cref{sec:truncations}, therefore, we will use a different idea to construct these, based on the observation that the type $S$ introduced above is equivalent to the circle $\Sn^1$.
+This includes the 0-truncation as a special case, and satisfies generalized versions of \cref{thm:trunc0-ind,thm:trunc0-lump}.
+
+
+\section{Quotients}
+\label{sec:set-quotients}
+
+A particularly important sort of colimit of sets is the \emph{quotient} by a relation.
+That is, let $A$ be a set and $R:A\times A \to \prop$ a family of mere propositions (a \define{mere relation}).
+\indexdef{relation!mere}%
+\indexdef{mere relation}%
+Its quotient should be the set-coequalizer of the two projections
+\[ \tsm{a,b:A} R(a,b) \rightrightarrows A. \]
+We can also describe this directly, as the higher inductive type $A/R$ generated by
+\index{set-quotient|(defstyle}%
+\indexsee{quotient of sets}{set-quotient}%
+\indexsee{type!quotient}{set-quotient}%
+\begin{itemize}
+\item A function $q:A\to A/R$;
+\item For each $a,b:A$ such that $R(a,b)$, an equality $q(a)=q(b)$; and
+\item The $0$-truncation constructor: for all $x,y:A/R$ and $r,s:x=y$, we have $r=s$.
+\end{itemize}
+We will sometimes refer to this higher inductive type $A/R$ as the \define{set-quotient} of $A$ by $R$, to emphasize that it produces a set by definition.
+(There are more general notions of ``quotient'' in homotopy theory, but they are mostly beyond the scope of this book.
+However, in \cref{sec:rezk} we will consider the ``quotient'' of a type by a 1-groupoid, which is the next level up from set-quotients.)
+
+\begin{rmk}\label{rmk:quotient-of-non-set}
+ It is not actually necessary for the definition of set-quotients, and most of their properties, that $A$ be a set.
+ However, this is generally the case of most interest.
+\end{rmk}
+
+\begin{lem}\label{thm:quotient-surjective}
+ The function $q:A\to A/R$ is surjective.
+\end{lem}
+\begin{proof}
+ We must show that for any $x:A/R$ there merely exists an $a:A$ with $q(a)=x$.
+ We use the induction principle of $A/R$.
+ The first case is trivial: if $x$ is $q(a)$, then of course there merely exists an $a$ such that $q(a)=q(a)$.
+ And since the goal is a mere proposition, it automatically respects all path constructors, so we are done.
+\end{proof}
+
+We can now prove that the set-quotient has the expected universal property of a (set-)coequalizer.
+
+\begin{lem}\label{thm:quotient-ump}
+ For any set $B$, precomposing with $q$ yields an equivalence
+ \[ \eqvspaced{(A/R \to B)}{\Parens{\sm{f:A\to B} \prd{a,b:A} R(a,b) \to (f(a)=f(b))}}.\]
+\end{lem}
+\begin{proof}
+ The quasi-inverse of $\blank\circ q$, going from right to left, is just the recursion principle for $A/R$.
+ That is, given $f:A\to B$ such that
+ \narrowequation{\prd{a,b:A} R(a,b) \to (f(a)=f(b)),} we define $\bar f:A/R\to B$ by $\bar f(q(a))\defeq f(a)$.
+ This defining equation says precisely that $(f\mapsto \bar f)$ is a right inverse to $(\blank\circ q)$.
+
+ For it to also be a left inverse, we must show that for any $g:A/R\to B$ and $x:A/R$ we have $g(x) = \overline{g\circ q}(x)$.
+ However, by \cref{thm:quotient-surjective} there merely exists $a$ such that $q(a)=x$.
+ Since our desired equality is a mere proposition, we may assume there purely exists such an $a$, in which case $g(x) = g(q(a)) = \overline{g\circ q}(q(a)) = \overline{g\circ q}(x)$.
+\end{proof}
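To make the universal property concrete, the following is an illustrative sketch in Python, not part of the formal development: a function out of a set-quotient corresponds exactly to a function on the underlying set that respects the relation. All names (`descend`, the particular $A$, $R$, and $f$) are ours, chosen for illustration.

```python
# Illustrative sketch: the universal property of a set-quotient, modeled
# naively with finite sets. A function A/R -> B corresponds to a function
# f : A -> B such that R(a, b) implies f(a) = f(b).

A = range(6)
R = lambda a, b: a % 3 == b % 3          # an equivalence relation on A

# The set-quotient, modeled as the set of equivalence classes.
quotient = {frozenset(x for x in A if R(a, x)) for a in A}

def descend(f):
    """Given f : A -> B respecting R, return the induced map on classes."""
    assert all(f(a) == f(b) for a in A for b in A if R(a, b)), \
        "f does not respect R, so it does not descend to A/R"
    # f is constant on each class, so any representative gives the value
    return {cls: f(next(iter(cls))) for cls in quotient}

f_bar = descend(lambda a: a % 3)          # respects R, so this succeeds
```

Precomposing `f_bar` with the quotient map (sending `a` to its class) recovers the original `f`, mirroring the right-inverse part of the proof above.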
+
+Of course, classically the usual case to consider is when $R$ is an \define{equivalence relation}, i.e.\ we have
+\indexdef{relation!equivalence}%
+\indexsee{equivalence!relation}{relation, equivalence}%
+%
+\begin{itemize}
+\item \define{reflexivity}: $\prd{a:A} R(a,a)$,
+ \indexdef{reflexivity!of a relation}%
+ \indexdef{relation!reflexive}%
+\item \define{symmetry}: $\prd{a,b:A} R(a,b) \to R(b,a)$, and
+ \indexdef{symmetry!of a relation}%
+ \indexdef{relation!symmetric}%
+\item \define{transitivity}: $\prd{a,b,c:A} R(a,b) \times R(b,c) \to R(a,c)$.
+ \indexdef{transitivity!of a relation}%
+ \indexdef{relation!transitive}%
+\end{itemize}
+%
+In this case, the set-quotient $A/R$ has additional good properties, as we will see in \cref{sec:piw-pretopos}: for instance, we have $R(a,b) \eqvsym (\id[A/R]{q(a)}{q(b)})$.
+\symlabel{equivalencerelation}
+We often write an equivalence relation $R(a,b)$ infix as $a\eqr b$.
+
+The quotient by an equivalence relation can also be constructed in other ways.
+The set theoretic approach is to consider the set of equivalence classes, as a subset of the power set\index{power set} of $A$.
+We can mimic this ``impredicative'' construction in type theory as well.
+\index{impredicative!quotient}
+
+\begin{defn}
+ A predicate $P:A\to\prop$ is an \define{equivalence class}
+ \indexdef{equivalence!class}%
+ of a relation $R : A \times A \to \prop$ if there merely exists an $a:A$ such that for all $b:A$ we have $\eqv{R(a,b)}{P(b)}$.
+\end{defn}
+
+As $R$ and $P$ are mere propositions, the equivalence $\eqv{R(a,b)}{P(b)}$ is the same thing as implications $R(a,b) \to P(b)$ and $P(b) \to R(a,b)$.
+And of course, for any $a:A$ we have the canonical equivalence class $P_a(b) \defeq R(a,b)$.
+
+\begin{defn}\label{def:VVquotient}
+ We define
+ \begin{equation*}
+ A\sslash R \defeq \setof{ P:A\to\prop | P \text{ is an equivalence class of } R}.
+ \end{equation*}
+ The function $q':A\to A\sslash R$ is defined by $q'(a) \defeq P_a$.
+\end{defn}
+
+\begin{thm}
+ For any equivalence relation $R$ on $A$, the type $A\sslash R$ is equivalent to the set-quotient $A/R$.
+\end{thm}
+\begin{proof}
+ First, note that if $R(a,b)$, then since $R$ is an equivalence relation we have $R(a,c) \Leftrightarrow R(b,c)$ for any $c:A$.
+ Thus, $R(a,c) = R(b,c)$ by univalence, hence $P_a=P_b$ by function extensionality, i.e.\ $q'(a)=q'(b)$.
+ Therefore, by \cref{thm:quotient-ump} we have an induced map $f:A/R \to A\sslash R$ such that $f\circ q = q'$.
+
+ We show that $f$ is injective and surjective, hence an equivalence.
+ Surjectivity follows immediately from the fact that $q'$ is surjective, which in turn is true essentially by definition of $A\sslash R$.
+ For injectivity, if $f(x)=f(y)$, then to show the mere proposition $x=y$, by surjectivity of $q$ we may assume $x=q(a)$ and $y=q(b)$ for some $a,b:A$.
+ Then $R(a,c) = f(q(a))(c) = f(q(b))(c) = R(b,c)$ for any $c:A$, and in particular $R(a,b) = R(b,b)$.
+ But $R(b,b)$ is inhabited, since $R$ is an equivalence relation, hence so is $R(a,b)$.
+ Thus $q(a)=q(b)$ and so $x=y$.
+\end{proof}
+
+In \cref{subsec:quotients} we will give an alternative proof of this theorem.
+Note that unlike $A/R$, the construction $A\sslash R$ raises universe level: if $A:\UU_i$ and $R:A\to A\to \prop_{\UU_i}$, then in the definition of $A\sslash R$ we must also use $\prop_{\UU_i}$ to include all the equivalence classes, so that $A\sslash R : \UU_{i+1}$.
+Of course, we can avoid this if we assume the propositional resizing axiom from \cref{subsec:prop-subsets}.
+
+\begin{rmk}\label{defn-Z}
+The previous two constructions provide quotients in full generality, but in particular cases there may be easier constructions.
+For instance, we may define the integers \Z as a set-quotient
+\indexdef{integers}%
+\indexdef{number!integers}%
+%
+\[ \Z \defeq (\N \times \N)/{\eqr} \]
+%
+where $\eqr$ is the equivalence relation defined by
+%
+\[ (a,b) \eqr (c,d) \defeq (a + d = b + c). \]
+%
+In other words, a pair $(a,b)$ represents the integer $a - b$.
+In this case, however, there are \emph{canonical representatives} of the equivalence classes: those of the form $(n,0)$ or $(0,n)$.
+\end{rmk}
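The following is an illustrative sketch in Python, not part of the formal development, of this encoding of the integers: a pair $(a,b)$ of naturals stands for $a-b$, and the relation $\eqr$ identifies exactly those pairs denoting the same integer. The names `eqr` and `meaning` are ours.

```python
# Illustrative sketch: integers encoded as pairs of naturals, where (a, b)
# represents a - b, with the equivalence relation from the text.

def eqr(x, y):
    (a, b), (c, d) = x, y
    return a + d == b + c          # (a, b) ~ (c, d)  iff  a + d = b + c

def meaning(x):
    a, b = x
    return a - b                   # the integer a pair stands for
```

Two pairs are related precisely when they denote the same integer, which is what makes the quotient $(\N\times\N)/{\eqr}$ a copy of $\Z$.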
+
+The following lemma says that when this sort of thing happens, we don't need either general construction of quotients.
+(A function $r:A\to A$ is called \define{idempotent}
+\indexdef{function!idempotent}%
+\indexdef{idempotent!function}%
+if $r\circ r = r$.)
+
+\begin{lem}\label{lem:quotient-when-canonical-representatives}
+ Suppose $\eqr$ is a relation on a set $A$, and there exists an idempotent $r
+ : A \to A$ such that $\eqv{(r(x) = r(y))}{(x \eqr y)}$ for all $x, y: A$.
+ (This implies $\eqr$ is an equivalence relation.)
+ Then the type
+ %
+ \begin{equation*}
+ (A/{\eqr}) \defeq \Parens{\sm{x : A} r(x) = x}
+ \end{equation*}
+ %
+ satisfies the universal property of the set-quotient of $A$ by~$\eqr$, and hence is equivalent to it.
+ In other words, there is a map $q : A \to (A/{\eqr})$ such that for every set $B$, precomposition with $q$ induces an equivalence
+ %
+ \begin{equation}
+ \label{eq:quotient-when-canonical}
+ \Parens{(A/{\eqr}) \to B} \eqvsym \Parens{\sm{g : A \to B} \prd{x, y : A} (x \eqr y) \to (g(x) = g(y))}.
+ \end{equation}
+\end{lem}
+
+\begin{proof}
+ Let $i : \prd{x : A} r(r(x)) = r(x)$ witness idempotence of~$r$.
+ The map $q : A \to (A/{\eqr})$ is defined by $q(x) \defeq (r(x), i(x))$.
+ Note that since $A$ is a set, we have $q(x)=q(y)$ if and only if $r(x)=r(y)$, hence (by assumption) if and only if $x \eqr y$.
+ We define a map $e$ from left to right in~\eqref{eq:quotient-when-canonical} by
+ \[ e(f) \defeq (f \circ q, \nameless), \]
+ %
+ where the underscore $\nameless$ denotes the following proof: if $x, y : A$ and $x \eqr y$, then $q(x)=q(y)$ as observed above, hence $f(q(x)) = f(q(y))$.
+ To see that $e$ is an equivalence, consider the map $e'$ in the opposite direction defined by
+ %
+ \[ e'(g, s) (x,p) \defeq g(x). \]
+ %
+ Given any $f : (A/{\eqr}) \to B$,
+ %
+ \[ e'(e(f))(x, p) \jdeq f(q(x)) \jdeq f(r(x), i(x)) = f(x, p) \]
+ %
+ where the last equality holds because $p : r(x) = x$ and so $(x,p) = (r(x), i(x))$
+ because $A$ is a set. Similarly we compute
+ %
+ \[ e(e'(g, s)) \jdeq e(g \circ \proj{1}) \jdeq (g \circ \proj{1} \circ q, {\nameless}). \]
+ %
+ Because $B$ is a set we need not worry about the $\nameless$ part, while for the first
+ component we have
+ %
+ \[ g(\proj{1}(q(x))) \jdeq g(r(x)) = g(x), \]
+ %
+ where the last equation holds because $r(x) \eqr x$, and $g$ respects $\eqr$ by
+ the assumption $s$.
+\end{proof}
+
+\begin{cor}\label{thm:retraction-quotient}
+ Suppose $p:A\to B$ is a retraction between sets.
+ Then $B$ is the quotient of $A$ by the equivalence relation $\eqr$ defined by
+ \[ (a_1 \eqr a_2) \defeq (p(a_1) = p(a_2)). \]
+\end{cor}
+\begin{proof}
+ Suppose $s:B\to A$ is a section of $p$.
+ Then $s\circ p : A\to A$ is an idempotent which satisfies the condition of \cref{lem:quotient-when-canonical-representatives} for this $\eqr$, and $s$ induces an isomorphism from $B$ to its set of fixed points.
+\end{proof}
+
+\begin{rmk}\label{Z-quotient-by-canonical-representatives}
+\cref{lem:quotient-when-canonical-representatives} applies to $\Z$ with the idempotent $r : \N \times \N \to \N \times \N$
+defined by
+%
+\begin{equation*}
+ r(a, b) =
+ \begin{cases}
+ (a - b, 0) & \text{if $a \geq b$,} \\
+ (0, b - a) & \text{otherwise.}
+ \end{cases}
+\end{equation*}
+%
+(This is a valid definition even constructively, since the relation $\geq$ on $\N$ is decidable.)
+Thus a non-negative integer is canonically represented by $(k, 0)$ and a non-positive one by $(0, m)$, for $k,m:\N$.
+This division into cases implies the following ``induction principle'' for integers, which will be useful in \cref{cha:homotopy}.
+\index{natural numbers}%
+(As usual, we identify a natural number $n$ with the corresponding non-negative integer, i.e.\ with the image of $(n,0):\N\times\N$ in $\Z$.)
+\end{rmk}
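As an illustrative sketch, not part of the formal development, the idempotent $r$ of the remark can be written out directly and checked against the two conditions of \cref{lem:quotient-when-canonical-representatives}: it is idempotent, and two pairs have the same canonical representative exactly when they are related.

```python
# Illustrative sketch: the idempotent r from the remark, picking the
# canonical representative (a - b, 0) or (0, b - a) of a pair of naturals.

def r(x):
    a, b = x
    return (a - b, 0) if a >= b else (0, b - a)

def eqr(x, y):
    (a, b), (c, d) = x, y
    return a + d == b + c
```

The case split on `a >= b` is unproblematic constructively, mirroring the decidability of $\geq$ on $\N$ noted in the text.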
+
+\begin{lem}\label{thm:sign-induction}
+ \index{integers!induction principle for}%
+ \index{induction principle!for integers}%
+ Suppose $P:\Z\to\type$ is a type family and that we have
+ \begin{itemize}
+ \item $d_0: P(0)$,
+ \item $d_+: \prd{n:\N} P(n) \to P(\suc(n))$, and
+ \item $d_- : \prd{n:\N} P(-n) \to P(-\suc(n))$.
+ \end{itemize}
+ Then we have $f:\prd{z:\Z} P(z)$ such that
+ \begin{itemize}
+ \item $f(0) = d_0$,
+ \item $f(\suc(n)) = d_+(n,f(n))$ for all $n:\N$, and
+ \item $f(-\suc(n)) = d_-(n,f(-n))$ for all $n:\N$.
+ \end{itemize}
+\end{lem}
+\begin{proof}
+ For purposes of this proof, let $\Z$ denote $\sm{x:\N\times\N}(r(x)=x)$, where $r$ is the above idempotent.
+ (We can then transport the result to any equivalent definition of $\Z$.)
+ Let $q:\N\times\N\to\Z$ be the quotient map, defined by $q(x) = (r(x),i(x))$ as in \cref{lem:quotient-when-canonical-representatives}.
+ Now define $Q\defeq P\circ q:\N\times \N \to \type$.
+ By transporting the given data across appropriate equalities, we obtain
+ \begin{align*}
+ d'_0 &: Q(0,0)\\
+ d'_+ &: \prd{n:\N} Q(n,0) \to Q(\suc(n),0)\\
+ d'_- &: \prd{n:\N} Q(0,n) \to Q(0,\suc(n)).
+ \end{align*}
+ Note also that since $q(n,m) = q(\suc(n),\suc(m))$, we have an induced equivalence
+ \[e_{n,m}:\eqv{Q(n,m)}{Q(\suc(n),\suc(m))}.\]
+ We can then construct $g:\prd{x:\N\times \N} Q(x)$ by double induction on $x$:
+ \begin{align*}
+ g(0,0) &\defeq d'_0,\\
+ g(\suc(n),0) &\defeq d'_+(n,g(n,0)),\\
+ g(0,\suc(m)) &\defeq d'_-(m,g(0,m)),\\
+ g(\suc(n),\suc(m)) &\defeq e_{n,m}(g(n,m)).
+ \end{align*}
+ Now we have $\proj1 : \Z \to \N\times\N$, with the property that $q\circ \proj1 = \idfunc$.
+ In particular, therefore, we have $Q\circ \proj1 = P$, and hence a family of equivalences $s:\prd{z:\Z} \eqv{Q(\proj1(z))}{P(z)}$.
+ Thus, we can define $f(z) = s(z,g(\proj1(z)))$ to obtain $f:\prd{z:\Z} P(z)$, and verify the desired equalities.
+\end{proof}
+
+We will sometimes denote a function $f:\prd{z:\Z} P(z)$ obtained from \cref{thm:sign-induction} with a pattern-matching syntax, involving the three cases $d_0$, $d_+$, and $d_-$:
+\begin{align*}
+ f(0) &\defid d_0\\
+ f(\suc(n)) &\defid d_+(n,f(n))\\
+ f(-\suc(n)) &\defid d_-(n,f(-n))
+\end{align*}
+We use $\defid$ rather than $\defeq$, as we did for the path constructors of higher inductive types, to indicate that the ``computation'' rules implied by \cref{thm:sign-induction} are only propositional equalities.
+For example, in this way we can define the $n$-fold concatenation of a loop for any integer $n$.
+
+\begin{cor}\label{thm:looptothe}
+ \indexdef{path!concatenation!n-fold@$n$-fold}%
+ Let $A$ be a type with $a:A$ and $p:a=a$.
+ There is a function $\prd{n:\Z} (a=a)$, denoted $n\mapsto p^n$, defined by
+ \begin{align*}
+ p^0 &\defid \refl{a}\\
+ p^{n+1} &\defid p^n \ct p
+ & &\text{for $n\ge 0$}\\
+ p^{n-1} &\defid p^n \ct \opp p
+ & &\text{for $n\le 0$.}
+ \end{align*}
+\end{cor}
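To see the three-clause recursion of \cref{thm:sign-induction} in action, here is an illustrative sketch in Python, not part of the formal development. As a stand-in group we use the integers under addition, so that ``$p^n$'' is simply $n\cdot p$, computed by recursion on the sign of $n$ exactly as in the corollary; the name `power` is ours.

```python
# Illustrative sketch: a function on Z defined by the three clauses
# d0, d+, d- of the sign-induction principle. The "group" is (Z, +),
# with unit 0 and inverse negation, so p^n is n * p.

def power(p, n):
    if n == 0:
        return 0                        # p^0      := e   (here: 0)
    elif n > 0:
        return power(p, n - 1) + p      # p^(n+1)  := p^n . p
    else:
        return power(p, n + 1) + (-p)   # p^(n-1)  := p^n . p^(-1)
```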
+
+We will discuss the integers further in \cref{sec:free-algebras,sec:field-rati-numb}.
+
+\index{set-quotient|)}%
+
+\section{Algebra}
+\label{sec:free-algebras}
+
+In addition to constructing higher-dimensional objects such as spheres and cell complexes, higher inductive types are also very useful even when working only with sets.
+We have seen one example already in \cref{thm:set-pushout}: they allow us to construct the colimit of any diagram of sets, which is not possible in the base type theory of \cref{cha:typetheory}.
+Higher inductive types are also very useful when we study sets with algebraic structure.
+
+As a running example in this section, we consider \emph{groups}, which are familiar to most mathematicians and exhibit the essential phenomena (and will be needed in later chapters).
+However, most of what we say applies equally well to any sort of algebraic structure.
+
+\index{monoid|(}%
+
+\begin{defn}
+ A \define{monoid}
+ \indexdef{monoid}%
+ is a set $G$ together with
+ \begin{itemize}
+ \item a \emph{multiplication}
+ \indexdef{multiplication!in a monoid}%
+ \indexdef{multiplication!in a group}%
+ function $G\times G\to G$, written infix as $(x,y) \mapsto x\cdot y$; and
+ \item a \emph{unit}
+ \indexdef{unit!of a monoid}%
+ \indexdef{unit!of a group}%
+ element $e:G$; such that
+ \item for any $x:G$, we have $x\cdot e = x$ and $e\cdot x = x$; and
+ \item for any $x,y,z:G$, we have $x\cdot (y\cdot z) = (x\cdot y)\cdot z$.
+ \index{associativity!in a monoid}%
+ \index{associativity!in a group}%
+ \end{itemize}
+ A \define{group}
+ \indexdef{group}%
+ is a monoid $G$ together with
+ \begin{itemize}
+ \item an \emph{inversion} function $i:G\to G$, written $x\mapsto \opp x$; such that
+ \index{inverse!in a group}%
+ \item for any $x:G$ we have $x\cdot \opp x = e$ and $\opp x \cdot x = e$.
+ \end{itemize}
+\end{defn}
+
+\begin{rmk}\label{rmk:infty-group}
+Note that we require a group to be a set.
+We could consider a more general notion of ``$\infty$-group''%
+\index{.infinity-group@$\infty$-group}
+which is not a set, but this would take us further afield than is appropriate at the moment.
+With our current definition, we may expect the resulting ``group theory'' to behave similarly to the way it does in set-theoretic mathematics (with the caveat that, unless we assume \LEM{}, it will be ``constructive'' group theory).\index{mathematics!constructive}
+\end{rmk}
+
+\begin{eg}
+ The natural numbers \N are a monoid under addition, with unit $0$, and also under multiplication, with unit $1$.
+ If we define the arithmetical operations on the integers \Z in the obvious way, then as usual they are a group under addition and a monoid under multiplication (and, of course, a ring).
+ For instance, if $u, v : \Z$ are represented by $(a,b)$ and $(c,d)$, respectively, then $u + v$ is represented by $(a + c, b + d)$, $-u$ is represented by $(b, a)$, and $u v$ is represented by $(a c + b d, a d + b c)$.
+\end{eg}
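The pair-representative formulas in the example can be checked mechanically. The following illustrative Python sketch, not part of the formal development, implements them and verifies that each operation on representatives matches the corresponding operation on the integers they denote; the names are ours.

```python
# Illustrative sketch: arithmetic on pair-representatives of integers,
# where (a, b) stands for a - b, using the formulas from the example.

def add(u, v):
    (a, b), (c, d) = u, v
    return (a + c, b + d)              # (a - b) + (c - d)

def neg(u):
    a, b = u
    return (b, a)                      # -(a - b) = b - a

def mul(u, v):
    (a, b), (c, d) = u, v
    return (a * c + b * d, a * d + b * c)   # (a - b)(c - d)

def meaning(u):
    a, b = u
    return a - b
```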
+
+\begin{eg}\label{thm:homotopy-groups}
+ We essentially observed in \cref{sec:equality} that if $(A,a)$ is a pointed type, then its loop space\index{loop space} $\Omega(A,a)\defeq (\id[A]aa)$ has all the structure of a group, except that it is not in general a set.
+ It should be an ``$\infty$-group'' in the sense mentioned in \cref{rmk:infty-group}, but we can also make it a group by truncation.
+ Specifically, we define the \define{fundamental group}
+ \indexsee{group!fundamental}{fundamental group}%
+ \indexdef{fundamental!group}%
+ of $A$ based at $a:A$ to be
+ \[\pi_1(A,a)\defeq \trunc0{\Omega(A,a)}.\]
+ This inherits a group structure; for instance, the multiplication $\pi_1(A,a) \times \pi_1(A,a) \to \pi_1(A,a)$ is defined by double induction on truncation from the concatenation of paths.
+
+ More generally, the \define{$n^{\mathrm{th}}$ homotopy group}
+ \index{homotopy!group}%
+ \indexsee{group!homotopy}{homotopy group}%
+ of $(A,a)$ is $\pi_n(A,a)\defeq \trunc0{\Omega^n(A,a)}$.
+ \index{loop space!iterated}%
+ Then $\pi_n(A,a) = \pi_1(\Omega^{n-1}(A,a))$ for $n\ge 1$, so it is also a group.
+ (When $n=0$, we have $\pi_0(A) \jdeq \trunc0 A$, which is not a group.)
+ Moreover, the Eckmann--Hilton argument \index{Eckmann--Hilton argument} (\cref{thm:EckmannHilton}) implies that if $n\ge 2$, then $\pi_n(A,a)$ is an \emph{abelian}\index{group!abelian} group, i.e.\ we have $x\cdot y = y\cdot x$ for all $x,y$.
+ \cref{cha:homotopy} will be largely the study of these groups.
+\end{eg}
+
+\index{algebra!free}%
+\index{free!algebraic structure}%
+One important notion in group theory is that of the \emph{free group} generated by a set, or more generally of a group \emph{presented} by generators\index{generator!of a group} and relations.
+It is well-known in type theory that \emph{some} free algebraic objects can be defined using \emph{ordinary} inductive types.
+\symlabel{lst-freemonoid}%
+\indexdef{type!of lists}%
+\indexsee{list type}{type, of lists}%
+\index{monoid!free|(}%
+For instance, the free monoid on a set $A$ can be identified with the type $\lst A$ of \emph{finite lists} \index{finite!lists, type of} of elements of $A$, which is inductively generated by
+\begin{itemize}
+\item a constructor $\nil:\lst A$, and
+\item for each $\ell:\lst A$ and $a:A$, an element $\cons(a,\ell):\lst A$.
+\end{itemize}
+We have an obvious inclusion $\eta : A\to \lst A$ defined by $a\mapsto \cons(a,\nil)$.
+The monoid operation on $\lst A$ is concatenation, defined recursively by
+\begin{align*}
+ \nil \cdot \ell &\defeq \ell\\
+ \cons (a,\ell_1) \cdot \ell_2 &\defeq \cons(a, \ell_1\cdot\ell_2).
+\end{align*}
+It is straightforward to prove, using the induction principle for $\lst A$, that $\lst A$ is a set and that concatenation of lists is associative
+\index{associativity!of list concatenation}%
+and has $\nil$ as a unit.
+Thus, $\lst A$ is a monoid.
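The recursive definition of concatenation transcribes directly into code. The following is an illustrative sketch in Python, not part of the formal development, with $\nil$ modeled as the empty list and $\cons$ as prepending; the name `concat` is ours.

```python
# Illustrative sketch: the monoid operation on List(A), defined by the
# recursion from the text.

def concat(l1, l2):
    if not l1:                          # nil . l           := l
        return l2
    # cons(a, l1') . l2 := cons(a, l1' . l2)
    return [l1[0]] + concat(l1[1:], l2)
```

The unit and associativity laws of the monoid $\lst A$ can then be spot-checked on particular lists.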
+
+\begin{lem}\label{thm:free-monoid}
+ \indexsee{free!monoid}{monoid, free}%
+ For any set $A$, the type $\lst A$ is the free monoid on $A$.
+ In other words, for any monoid $G$, composition with $\eta$ is an equivalence
+ \[ \eqv{\hom_{\mathrm{Monoid}}(\lst A,G)}{(A\to G)}, \]
+ where $\hom_{\mathrm{Monoid}}(\blank,\blank)$ denotes the set of monoid homomorphisms (functions which preserve the multiplication and unit).
+ \indexdef{homomorphism!monoid}%
+ \indexdef{monoid!homomorphism}%
+\end{lem}
+\begin{proof}
+ Given $f:A\to G$, we define $\bar{f}:\lst A \to G$ by recursion:
+ \begin{align*}
+ \bar{f}(\nil) &\defeq e\\
+ \bar{f}(\cons(a,\ell)) &\defeq f(a) \cdot \bar{f}(\ell).
+ \end{align*}
+ It is straightforward to prove by induction that $\bar{f}$ is a monoid homomorphism, and that $f\mapsto \bar f$ is a quasi-inverse of $(\blank\circ \eta)$; see \cref{ex:free-monoid}.
+\end{proof}
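The extension $\bar f$ in this proof is a fold over the list. As an illustrative sketch, not part of the formal development, here it is in Python for the target monoid of strings under concatenation; the names `extend`, `f`, and `f_bar` are ours.

```python
# Illustrative sketch: extending f : A -> G to a monoid homomorphism
# f-bar : List(A) -> G, by the recursion from the proof. Here G is the
# monoid of strings under concatenation, with unit "".

def extend(f, unit="", op=lambda x, y: x + y):
    def f_bar(l):
        if not l:
            return unit                       # f-bar(nil)        := e
        return op(f(l[0]), f_bar(l[1:]))      # f-bar(cons(a, l)) := f(a) . f-bar(l)
    return f_bar

f = lambda a: a.upper()
f_bar = extend(f)
```

That `f_bar` sends concatenation to the monoid operation and `[]` to the unit is exactly the homomorphism property asserted in the proof.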
+
+\index{monoid!free|)}%
+
+This construction of the free monoid is possible essentially because elements of the free monoid have computable canonical forms (namely, finite lists).
+However, elements of other free (and presented) algebraic structures --- such as groups --- do not in general have \emph{computable} canonical forms.
+For instance, equality of words in group presentations is algorithmically\index{algorithm} undecidable.
+However, we can still describe free algebraic objects as \emph{higher} inductive types, by simply asserting all the axiomatic equations as path constructors.
+
+\indexsee{free!group}{group, free}%
+\index{group!free|(}%
+For example, let $A$ be a set, and define a higher inductive type $\freegroup{A}$ with the following generators.
+\begin{itemize}
+\item A function $\eta:A\to \freegroup{A}$.
+\item A function $m: \freegroup{A} \times \freegroup{A} \to \freegroup{A}$.
+\item An element $e:\freegroup{A}$.
+\item A function $i:\freegroup{A} \to \freegroup{A}$.
+\item For each $x,y,z:\freegroup{A}$, an equality $m(x,m(y,z)) = m(m(x,y),z)$.
+\item For each $x:\freegroup{A}$, equalities $m(x,e) = x$ and $m(e,x) = x$.
+\item For each $x:\freegroup{A}$, equalities $m(x,i(x)) = e$ and $m(i(x),x) = e$.
+\item The $0$-truncation constructor: for any $x,y:\freegroup{A}$ and $p,q:x=y$, we have $p=q$.
+\end{itemize}
+The first constructor says that $A$ maps to $\freegroup{A}$.
+The next three give $\freegroup{A}$ the operations of a group: multiplication, an identity element, and inversion.
+The three constructors after that assert the axioms of a group: associativity\index{associativity}, unitality, and inverses.
+Finally, the last constructor asserts that $\freegroup{A}$ is a set.
+
+Therefore, $\freegroup{A}$ is a group.
+It is also straightforward to prove:
+
+\begin{thm}
+ \index{universal!property!of free group}%
+ $\freegroup{A}$ is the free group on $A$.
+ In other words, for any (set) group $G$, composition with $\eta:A\to \freegroup{A}$ determines an equivalence
+ \[ \hom_{\mathrm{Group}}(\freegroup{A},G) \eqvsym (A\to G) \]
+ where $\hom_{\mathrm{Group}}(\blank,\blank)$ denotes the set of group homomorphisms between two groups.
+ \indexdef{group!homomorphism}%
+ \indexdef{homomorphism!group}%
+\end{thm}
+\begin{proof}
+ The recursion principle of the higher inductive type $\freegroup{A}$ says \emph{precisely} that if $G$ is a group and we have $f:A\to G$, then we have $\bar{f}:\freegroup{A} \to G$.
+ Its computation rules say that $\bar{f}\circ \eta \jdeq f$, and that $\bar f$ is a group homomorphism.
+ Thus, $(\blank\circ \eta) : \hom_{\mathrm{Group}}(\freegroup{A},G) \to (A\to G)$ has a right inverse.
+ It is straightforward to use the induction principle of $\freegroup{A}$ to show that this is also a left inverse.
+\end{proof}
+
+\index{acceptance}
+It is worth taking a step back to consider what we have just done.
+We have proven that the free group on any set exists \emph{without} giving an explicit construction of it.
+Essentially all we had to do was write down the universal property that it should satisfy.
+In set theory, we could achieve a similar result by appealing to black boxes such as the adjoint functor theorem\index{adjoint!functor theorem}; type theory builds such constructions into the foundations of mathematics.
+
+Of course, it is sometimes also useful to have a concrete description of free algebraic structures.
+In the case of free groups, we can provide one, using quotients.
+Consider $\lst{A+A}$, where in $A+A$ we write $\inl(a)$ as $a$, and $\inr(a)$ as $\hat{a}$ (intended to stand for the formal inverse of $a$).
+The elements of $\lst{A+A}$ are \emph{words} for the free group on $A$.
+
+\begin{thm}
+ Let $A$ be a set, and let $\freegroupx{A}$ be the set-quotient of $\lst{A+A}$ by the following relations.
+ \begin{align*}
+ (\dots,a_1,a_2,\widehat{a_2},a_3,\dots) &=
+ (\dots,a_1,a_3,\dots)\\
+ (\dots,a_1,\widehat{a_2},a_2,a_3,\dots) &=
+ (\dots,a_1,a_3,\dots).
+ \end{align*}
+ Then $\freegroupx{A}$ is also the free group on the set $A$.
+\end{thm}
+\begin{proof}
+ First we show that $\freegroupx{A}$ is a group.
+ We have seen that $\lst{A+A}$ is a monoid; we claim that the monoid structure descends to the quotient.
+ We define $\freegroupx{A} \times \freegroupx{A} \to \freegroupx{A}$ by double quotient recursion; it suffices to check that the equivalence relation generated by the given relations is preserved by concatenation of lists.
+ Similarly, we prove the associativity and unit laws by quotient induction.
+
+ In order to define inverses in $\freegroupx{A}$, we first define, for any type $B$, a function $\mathsf{reverse}:\lst B\to\lst B$ by recursion on lists:
+ \begin{align*}
+ \mathsf{reverse}(\nil) &\defeq \nil,\\
+ \mathsf{reverse}(\cons(b,\ell))&\defeq \mathsf{reverse}(\ell)\cdot \cons(b,\nil).
+ \end{align*}
+ Now we define $i:\freegroupx{A}\to \freegroupx{A}$ by quotient recursion, acting on a list $\ell:\lst{A+A}$ by switching the two copies of $A$ and reversing the list.
+ This preserves the relations, hence descends to the quotient.
+ And we can prove that $i(x) \cdot x = e$ for $x:\freegroupx{A}$ by induction.
+ First, quotient induction allows us to assume $x$ comes from $\ell:\lst{A+A}$, and then we can do list induction; if we write $q:\lst{A+A}\to \freegroupx{A}$ for the quotient map, the cases are
+ \begin{align*}
+ i(q(\nil)) \cdot q(\nil) &= q(\nil) \cdot q(\nil)\\
+ &= q(\nil)\\
+ i(q(\cons(a,\ell))) \cdot q(\cons(a,\ell)) &= i(q(\ell)) \cdot q(\cons(\hat{a},\nil)) \cdot q(\cons(a,\ell))\\
+ &= i(q(\ell)) \cdot q(\cons(\hat{a},\cons(a,\ell)))\\
+ &= i(q(\ell)) \cdot q(\ell)\\
+ &= q(\nil). \tag{by the inductive hypothesis}
+ \end{align*}
+ (We have omitted a number of fairly evident lemmas about the behavior of concatenation of lists, etc.)
+
+ This completes the proof that $\freegroupx{A}$ is a group.
+ Now if $G$ is any group with a function $f:A\to G$, we can define $A+A\to G$ to be $f$ on the first copy of $A$ and $f$ composed with the inversion map of $G$ on the second copy.
+ Now the fact that $G$ is a monoid yields a monoid homomorphism $\lst{A+A} \to G$.
+ And since $G$ is a group, this map respects the relations, hence descends to a map $\freegroupx{A}\to G$.
+ It is straightforward to prove that this is a group homomorphism, and the unique one which restricts to $f$ on $A$.
+\end{proof}
+
+\index{monoid|)}%
+
+If $A$ has decidable equality\index{decidable!equality} (such as if we assume excluded middle), then the quotient defining $\freegroupx{A}$ can be obtained from an idempotent as in \cref{lem:quotient-when-canonical-representatives}.
+We define a word, which we recall is just an element of $\lst{A+A}$, to be \define{reduced}
+\indexdef{reduced word in a free group}
+if it contains no adjacent pairs of the form $(a,\hat a)$ or $(\hat a,a)$.
+When $A$ has decidable equality, it is straightforward to define the \define{reduction}
+\index{reduction!of a word in a free group}%
+of a word, which is an idempotent generating the appropriate quotient; we leave the details to the reader.
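One possible choice of the details left to the reader, offered here as an illustrative sketch rather than the canonical definition: represent an element of $A+A$ as a pair of a letter and a sign ($+1$ for $a$, $-1$ for $\hat a$), reduce by a single left-to-right pass that cancels adjacent inverse pairs, and obtain inverses by reversing and swapping the two copies of $A$. All names in the Python code below are ours.

```python
# Illustrative sketch: words in List(A + A) as lists of (letter, sign),
# with one candidate "reduction" idempotent and the inversion map.

def reduce_word(w):
    out = []
    for letter, sign in w:
        if out and out[-1] == (letter, -sign):
            out.pop()                  # cancel an adjacent (a, a-hat) pair
        else:
            out.append((letter, sign))
    return out

def inverse(w):
    # reverse the word and swap the two copies of A
    return [(letter, -sign) for letter, sign in reversed(w)]
```

The output of `reduce_word` is always reduced, so the function is idempotent, and a word concatenated with its inverse reduces to the empty word, matching the group laws for $\freegroupx{A}$.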
+
+If $A\defeq \unit$, which has decidable equality, a reduced word must consist either entirely of $\ttt$'s or entirely of $\hat{\ttt}$'s.
+Thus, the free group on $\unit$ is equivalent to the integers \Z, with $0$ corresponding to $\nil$, the positive integer $n$ corresponding to a reduced word of $n$ $\ttt$'s, and the negative integer $(-n)$ corresponding to a reduced word of $n$ $\hat{\ttt}$'s.
+One could also, of course, show directly that \Z has the universal property of $\freegroup{\unit}$.
+
+\begin{rmk}\label{thm:freegroup-nonset}
+ Nowhere in the construction of $\freegroup{A}$ and $\freegroupx{A}$, and the proof of their universal properties, did we use the assumption that $A$ is a set.
+ Thus, we can actually construct the free group on an arbitrary type.
+ Comparing universal properties, we conclude that $\eqv{\freegroup{A}}{\freegroup{\trunc0A}}$.
+\end{rmk}
+
+\index{group!free|)}%
+
+\index{algebra!colimits of}%
+We can also use higher inductive types to construct colimits of algebraic objects.
+For instance, suppose $f:G\to H$ and $g:G\to K$ are group homomorphisms.
+Their pushout in the category of groups, called the \define{amalgamated free product}
+\indexdef{amalgamated free product}%
+\indexdef{free!product!amalgamated}%
+$H *_G K$, can be constructed as the higher inductive type generated by
+\begin{itemize}
+\item Functions $h:H\to H *_G K$ and $k:K\to H *_G K$.
+\item The operations and axioms of a group, as in the definition of $\freegroup{A}$.
+\item Axioms asserting that $h$ and $k$ are group homomorphisms.
+\item For $x:G$, we have $h(f(x)) = k(g(x))$.
+\item The $0$-truncation constructor.
+\end{itemize}
+On the other hand, it can also be constructed explicitly, as the set-quotient of $\lst{H+K}$ by the following relations:
+\begin{align*}
+ (\dots, x_1, x_2, \dots) &= (\dots, x_1\cdot x_2, \dots)
+ & &\text{for $x_1,x_2:H$}\\
+ (\dots, y_1, y_2, \dots) &= (\dots, y_1\cdot y_2, \dots)
+ & &\text{for $y_1,y_2:K$}\\
+ (\dots, 1_H, \dots) &= (\dots, \dots) && \\
+ (\dots, 1_K, \dots) &= (\dots, \dots) && \\
+ (\dots, f(x), \dots) &= (\dots, g(x), \dots)
+ & &\text{for $x:G$.}
+\end{align*}
+We leave the proofs to the reader.
+In the special case that $G$ is the trivial group, the last relation is unnecessary, and we obtain the \define{free product}
+\indexdef{free!product}%
+$H*K$, the coproduct in the category of groups.
+(This notation unfortunately clashes with that for the \emph{join} of types, as in \cref{sec:colimits}, but context generally disambiguates.)
+
+\index{presentation!of a group}%
+Note that groups defined by \emph{presentations} can be regarded as a special case of colimits.
+Suppose given a set (or more generally a type) $A$, and a pair of functions $R\rightrightarrows \freegroup{A}$.
+We regard $R$ as the type of ``relations'', with the two functions assigning to each relation the two words that it sets equal.
+For instance, in the presentation $\langle a \mid a^2 = e \rangle$ we would have $A\defeq \unit$ and $R\defeq \unit$, with the two morphisms $R\rightrightarrows \freegroup{A}$ picking out the list $(a,a)$ and the empty list $\nil$, respectively.
+Then by the universal property of free groups, we obtain a pair of group homomorphisms $\freegroup{R} \rightrightarrows \freegroup{A}$.
+Their coequalizer in the category of groups, which can be built just like the pushout, is the group \emph{presented} by this presentation.
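For the presentation $\langle a \mid a^2 = e \rangle$ this coequalizer is $\Z/2$: every word in the single generator reduces, in the presented group, to $e$ or $a$ according to the parity of its exponent sum. The following illustrative Python sketch, not part of the formal development, computes this normal form; the encoding (a word is a list of $\pm 1$, with $+1$ for $a$ and $-1$ for its formal inverse) and the name `normal_form` are ours.

```python
# Illustrative sketch: normal forms in the group presented by <a | a^2 = e>.
# A word is a list of +1 (the generator a) and -1 (its formal inverse);
# modulo a^2 = e, only the exponent sum mod 2 survives.

def normal_form(word):
    return sum(word) % 2       # 0 represents e, 1 represents a
```

Concatenation of words corresponds to addition of normal forms modulo $2$, exhibiting the two-element group.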
+
+\mentalpause
+
+Note that all these sorts of construction only apply to \emph{algebraic} theories,\index{theory!algebraic} which are theories whose axioms are (universally quantified) equations referring to variables, constants, and operations from a given signature\index{signature!of an algebraic theory}.
+They can be modified to apply also to what are called \emph{essentially algebraic theories}:\index{theory!essentially algebraic} those whose operations are partially defined on a domain specified by equalities between previous operations.
+They do not apply, for instance, to the theory of fields, in which the ``inversion'' operation is partially defined on a domain $\setof{x | x \mathrel{\#} 0}$ specified by an \emph{apartness} $\#$ between previous operations; see \cref{RD-inverse-apart-0}.
+And indeed, it is well-known that the category of fields has no initial object.
+\index{initial!field}%
+
+On the other hand, these constructions do apply just as well to \emph{infinitary}\index{infinitary!algebraic theory} algebraic theories, whose ``operations'' can take infinitely many inputs.
+In such cases, there may not be any presentation of free algebras or colimits of algebras as a simple quotient, unless we assume the axiom of choice.
+This means that higher inductive types represent a significant strengthening of constructive type theory (not necessarily in terms of proof-theoretic strength, but in terms of practical power), and indeed are stronger in some ways than Zermelo--Fraenkel\index{set theory!Zermelo--Fraenkel} set theory (without choice)~\cite{blass:freealg}.
+% We will see an example of this in \cref{sec:ordinals}.
+
+
+\section{The flattening lemma}
+\label{sec:flattening}
+
+As we will see in \cref{cha:homotopy}, amazing things happen when we combine higher inductive types with univalence.
+The principal way this comes about is that if $W$ is a higher inductive type and \UU is a type universe, then we can define a type family $P:W\to \UU$ by using the recursion principle for $W$.
+When we come to the clauses of the recursion principle dealing with the path constructors of $W$, we will need to supply paths in \UU, and this is where univalence comes in.
+
+For example, suppose we have a type $X$ and a self-equivalence $e:\eqv X X$.
+Then we can define a type family $P:\Sn^1 \to \UU$ by using $\Sn^1$-recursion:
+\begin{equation*}
+ P(\base) \defeq X
+ \qquad\text{and}\qquad
+ \ap P\lloop \defid \ua(e).
+\end{equation*}
+The type $X$ thus appears as the fiber $P(\base)$ of $P$ over the basepoint.
+The self-equivalence $e$ is a little more hidden in $P$, but the following lemma says that it can be extracted by transporting along \lloop.
+
+\begin{lem}\label{thm:transport-is-given}
+ Given $B:A\to\type$ and $x,y:A$, with a path $p:x=y$ and an equivalence $e:\eqv{B(x)}{B(y)}$ such that $\ap{B}p = \ua(e)$, then for any $u:B(x)$ we have
+ \begin{align*}
+ \transfib{B}{p}{u} &= e(u).
+ \end{align*}
+\end{lem}
+\begin{proof}
+ Applying \cref{thm:transport-is-ap}, we have
+ \begin{align*}
+ \transfib{B}{p}{u} &= \idtoeqv(\ap{B}p)(u)\\
+ &= \idtoeqv(\ua(e))(u)\\
+ &= e(u).\qedhere
+ \end{align*}
+\end{proof}
+
+We have seen type families defined by recursion before: in \cref{sec:compute-coprod,sec:compute-nat} we used them to characterize the identity types of (ordinary) inductive types.
+In \cref{cha:homotopy}, we will use similar ideas to calculate homotopy groups of higher inductive types.
+
+In this section, we describe a general lemma about type families of this sort which will be useful later on.
+We call it the \define{flattening lemma}:
+\indexdef{flattening lemma}%
+\indexdef{lemma!flattening}%
+it says that if $P:W\to\UU$ is defined recursively as above, then its total space $\sm{x:W} P(x)$ is equivalent to a ``flattened'' higher inductive type, whose constructors may be deduced from those of $W$ and the definition of $P$.
+(From a category-theoretic point of view, $\sm{x:W} P(x)$ is the ``Grothendieck\index{Grothendieck construction} construction'' of $P$, and the flattening lemma expresses its universal property as a ``lax\index{lax colimit} colimit''. However, because types in homotopy type theory (like $W$) correspond categorically to $\infty$-groupoids, in which all paths are invertible, in this case the lax colimit is the same as a pseudo colimit.)
+
+We prove here one general case of the flattening lemma, which directly implies many particular cases and suggests the method to prove others.
+Suppose we have $A,B:\type$ and $f,g:B\to{}A$, and that the higher inductive type $W$ is generated by
+\begin{itemize}
+\item $\cc:A\to{}W$ and
+\item $\pp:\prd{b:B} (\cc(f(b))=_W\cc(g(b)))$.
+\end{itemize}
+Thus, $W$ is the \define{(homotopy) coequalizer}
+\indexdef{coequalizer}%
+\indexdef{type!coequalizer}%
+of $f$ and $g$.
+Using binary sums (coproducts) and dependent sums ($\Sigma$-types), many interesting nonrecursive higher
+inductive types can be represented in this form: all point constructors must be
+bundled into the type $A$ and all path constructors into the type $B$.
+For instance:
+\begin{itemize}
+\item The circle $\Sn^1$ can be represented by taking $A\defeq \unit$ and $B\defeq \unit$, with $f$ and $g$ the identity.
+\item The pushout of $j:X\to Y$ and $k:X\to Z$ can be represented by taking $A\defeq Y+Z$ and $B\defeq X$, with $f\defeq \inl \circ j$ and $g\defeq \inr\circ k$.
+\end{itemize}
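+A further example in the same spirit (not needed in what follows, and using the same bundling of constructors as above): the suspension $\susp X$ can be represented by taking $A\defeq \unit+\unit$ and $B\defeq X$, with $f\defeq \lam{x} \inl(\ttt)$ and $g\defeq \lam{x} \inr(\ttt)$, so that the two points of $A$ become the two poles and each $\pp(x)$ becomes a meridian.
+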
+Now suppose in addition that
+\begin{itemize}
+\item $C:A\to\type$ is a family of types over $A$, and
+\item $D:\prd{b:B}\eqv{C(f(b))}{C(g(b))}$ is a family of equivalences over $B$.
+\end{itemize}
+Define a type family $P : W\to\type$ recursively by
+\begin{align*}
+ P(\cc(a)) &\defeq C(a)\\
+ \map{P}{\pp(b)} &\defid \ua(D(b)).
+\end{align*}
+Let \Wtil be the higher inductive type generated by
+\begin{itemize}
+\item $\cct:\prd{a:A} C(a) \to \Wtil$ and
+\item $\ppt:\prd{b:B}{y:C(f(b))} (\cct(f(b),y)=_{\Wtil}\cct(g(b),D(b)(y)))$.
+\end{itemize}
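+
+For instance, specializing to the circle example from the beginning of this section (so that $A\defeq\unit$ and $B\defeq\unit$, with $C$ constantly $X$ and $D$ constantly the self-equivalence $e$), the type \Wtil is generated, suppressing the trivial $\unit$ arguments, by
+\begin{itemize}
+\item $\cct : X \to \Wtil$ and
+\item $\ppt : \prd{y:X} (\cct(y)=_{\Wtil}\cct(e(y)))$,
+\end{itemize}
+which is classically the \emph{mapping torus} of $e$.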
+
+The flattening lemma is:
+
+\begin{lem}[Flattening lemma]\label{thm:flattening}
+ In the above situation, we have
+ \[ \eqvspaced{\Parens{\sm{x:W} P(x)}}{\widetilde{W}}. \]
+\end{lem}
+
+\index{universal!property!of dependent pair type}%
+As remarked above, this equivalence can be seen as expressing the universal property of $\sm{x:W} P(x)$ as a ``lax\index{lax colimit} colimit'' of $P$ over $W$.
+It can also be seen as part of the \emph{stability and descent} property of colimits, which characterizes higher toposes.%
+\index{.infinity1-topos@$(\infty,1)$-topos}%
+\index{stability!and descent}%
+
+The proof of \cref{thm:flattening} occupies the rest of this section.
+It is somewhat technical and can be skipped on a first reading.
+But it is also a good example of ``proof-relevant mathematics'',
+\index{mathematics!proof-relevant}%
+so we recommend it on a second reading.
+
+The idea is to show that $\sm{x:W} P(x)$ has the same universal property as \Wtil.
+We begin by showing that it comes with analogues of the constructors $\cct$ and $\ppt$.
+
+\begin{lem}\label{thm:flattening-cp}
+ There are functions
+ \begin{itemize}
+ \item $\cct':\prd{a:A} C(a) \to \sm{x:W} P(x)$ and
+ \item $\ppt':\prd{b:B}{y:C(f(b))} \Big(\cct'(f(b),y)=_{\sm{w:W}P(w)}\cct'(g(b),D(b)(y))\Big)$.
+ \end{itemize}
+\end{lem}
+\begin{proof}
+ The first is easy; define $\cct'(a,x) \defeq (\cc(a),x)$ and note that by definition $P(\cc(a))\jdeq C(a)$.
+ For the second, suppose given $b:B$ and $y:C(f(b))$; we must give an equality
+ \[ (\cc(f(b)),y) = (\cc(g(b)),D(b)(y)). \]
+ Since we have $\pp(b):\cc(f(b))=\cc(g(b))$, by equalities in $\Sigma$-types it suffices to give an equality $\trans{\pp(b)}{y} = D(b)(y)$.
+ But this follows from \cref{thm:transport-is-given}, using the definition of $P$.
+\end{proof}
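+
+Explicitly, unfolding the proof just given, the second function is
+\[ \ppt'(b,y) \defeq \pairpath(\pp(b),q), \]
+where $q : \trans{\pp(b)}{y} = D(b)(y)$ is the path obtained from \cref{thm:transport-is-given}; this description will reappear in \cref{thm:flattening-rectnd-beta-ppt}.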
+
+Now the following lemma says that to define a section of a type family over $\sm{w:W} P(w)$, it suffices to give data analogous to the input for the induction principle of \Wtil.
+
+\begin{lem}\label{thm:flattening-rect}
+ Suppose $Q:\big(\sm{x:W} P(x)\big) \to \type$ is a type family and that we have
+ \begin{itemize}
+ \item $c : \prd{a:A}{x:C(a)} Q(\cct'(a,x))$ and
+ \item $p : \prd{b:B}{y:C(f(b))} \Big(\trans{\ppt'(b,y)}{c(f(b),y)} = c(g(b),D(b)(y))\Big)$. %_{Q(\cct'(g(b),D(b)(y)))}
+ \end{itemize}
+ Then there exists $k:\prd{z:\sm{w:W} P(w)} Q(z)$ such that $k(\cct'(a,x)) \jdeq c(a,x)$.
+\end{lem}
+\begin{proof}
+ Suppose given $w:W$ and $x:P(w)$; we must produce an element $k(w,x):Q(w,x)$.
+ By induction on $w$, it suffices to consider two cases.
+ When $w\jdeq \cc(a)$, then we have $x:C(a)$, and so $c(a,x):Q(\cc(a),x)$ as desired.
+ (This part of the definition also ensures that the stated computational rule holds.)
+
+ Now we must show that this definition is preserved by transporting along $\pp(b)$ for any $b:B$.
+ Since what we are defining, for all $w:W$, is a function of type $\prd{x:P(w)} Q(w,x)$, by \cref{thm:dpath-forall} it suffices to show that for any $y:C(f(b))$, we have
+ \[ \transfib{Q}{\pairpath(\pp(b),\refl{\trans{\pp(b)}{y}})}{c(f(b),y)} = c(g(b),\trans{\pp(b)}{y}). \]
+ Let $q:\trans{\pp(b)}{y} = D(b)(y)$ be the path obtained from \cref{thm:transport-is-given}.
+ Then we have
+ \begin{align}
+ c(g(b),\trans{\pp(b)}{y})
+ &= \transfib{x\mapsto Q(\cct'(g(b),x))}{\opp{q}}{c(g(b),D(b)(y))}
+ \tag{by $\opp{\apdfunc{x\mapsto c(g(b),x)}(\opp q)}$} \\
+ &= \transfib{Q}{\apfunc{x\mapsto \cct'(g(b),x)}(\opp q)}{c(g(b),D(b)(y))}
+ \tag{by \cref{thm:transport-compose}}.
+ \end{align}
+ Thus, it suffices to show
+ \begin{multline*}
+ \Transfib{Q}{\pairpath(\pp(b),\refl{\trans{\pp(b)}{y}})}{c(f(b),y)} = {}\\
+ \Transfib{Q}{\apfunc{x\mapsto \cct'(g(b),x)}(\opp q)}{c(g(b),D(b)(y))}.
+ \end{multline*}
+ Moving the right-hand transport to the other side, and combining two transports, this is equivalent to
+ %
+ \begin{narrowmultline*}
+ \Transfib{Q}{\pairpath(\pp(b),\refl{\trans{\pp(b)}{y}}) \ct
+ \apfunc{x\mapsto \cct'(g(b),x)}(q)}{c(f(b),y)} =
+ \narrowbreak
+ c(g(b),D(b)(y)).
+ \end{narrowmultline*}
+ %
+ However, we have
+ \begin{multline*}
+ \pairpath(\pp(b),\refl{\trans{\pp(b)}{y}}) \ct \apfunc{x\mapsto \cct'(g(b),x)}(q)
+ = {} \\
+ \pairpath(\pp(b),\refl{\trans{\pp(b)}{y}}) \ct \pairpath(\refl{\cc(g(b))},q)
+ = \pairpath(\pp(b),q)
+ = \ppt'(b,y)
+ \end{multline*}
+ so the construction is completed by the assumption $p(b,y)$ of type
+ \[ \transfib{Q}{\ppt'(b,y)}{c(f(b),y)} = c(g(b),D(b)(y)). \qedhere \]
+\end{proof}
+
+\cref{thm:flattening-rect} \emph{almost} gives $\sm{w:W}P(w)$ the same induction principle as \Wtil.
+The missing bit is the equality $\apdfunc{k}(\ppt'(b,y)) = p(b,y)$.
+In order to prove this, we would need to analyze the proof of \cref{thm:flattening-rect}, which of course is the definition of $k$.
+
+It should be possible to do this, but it turns out that we only need the computation rule for the non-dependent recursion principle.
+Thus, we now give a somewhat simpler direct construction of the recursor, and a proof of its computation rule.
+
+\begin{lem}\label{thm:flattening-rectnd}
+ Suppose $Q$ is a type and that we have
+ \begin{itemize}
+ \item $c : \prd{a:A} C(a) \to Q$ and
+ \item $p : \prd{b:B}{y:C(f(b))} \Big(c(f(b),y) =_Q c(g(b),D(b)(y))\Big)$.
+ \end{itemize}
+ Then there exists $k:\big(\sm{w:W} P(w)\big) \to Q$ such that $k(\cct'(a,x)) \jdeq c(a,x)$.
+\end{lem}
+\begin{proof}
+ As in \cref{thm:flattening-rect}, we define $k(w,x)$ by induction on $w:W$.
+ When $w\jdeq \cc(a)$, we define $k(\cc(a),x)\defeq c(a,x)$.
+ Now by \cref{thm:dpath-arrow}, it suffices to consider, for $b:B$ and $y:C(f(b))$, the composite path
+ \begin{equation}\label{eq:flattening-rectnd}
+ \transfib{x\mapsto Q}{\pp(b)}{c(f(b),y)}
+ = c(g(b),\transfib{P}{\pp(b)}{y})
+ \end{equation}
+ %
+ defined as the composition
+ %
+ \begin{align}
+ \transfib{x\mapsto Q}{\pp(b)}{c(f(b),y)}
+ &= c(f(b),y) \tag{by \cref{thm:trans-trivial}}\\
+ &= c(g(b),D(b)(y)) \tag{by $p(b,y)$}\\
+ &= c(g(b),\transfib{P}{\pp(b)}{y}). \tag{by \cref{thm:transport-is-given}}
+ \end{align}
+ The computation rule $k(\cct'(a,x)) \jdeq c(a,x)$ follows by definition, as before.
+\end{proof}
+
+For the second computation rule, we need the following lemma.
+
+\begin{lem}\label{thm:ap-sigma-rect-path-pair}
+ Let $Y:X\to\type$ be a type family and let $k:(\sm{x:X}Y(x)) \to Z$ be defined componentwise by $k(x,y) \defeq d(x)(y)$ for a curried function $d:\prd{x:X} Y(x)\to Z$.
+ Then for any $s:\id[X]{x_1}{x_2}$ and any $y_1:Y(x_1)$ and $y_2:Y(x_2)$ with a path $r:\trans{s}{y_1}=y_2$, the path
+ \[\apfunc k (\pairpath(s,r)) :k(x_1,y_1) = k(x_2,y_2)\]
+ is equal to the composite
+ \begin{align}
+ k(x_1,y_1)
+ &\jdeq d(x_1)(y_1) \notag\\
+ &= \transfib{x\mapsto Z}{s}{d(x_1)(y_1)}
+ \tag{by $\opp{\text{(\cref{thm:trans-trivial})}}$}\\
+ &= \transfib{x\mapsto Z}{s}{d(x_1)(\trans{\opp s}{\trans{s}{y_1}})}
+ \notag\\
+ &= \big(\transfib{x\mapsto (Y(x)\to Z)}{s}{d(x_1)}\big)(\trans{s}{y_1})
+ \tag{by~\eqref{eq:transport-arrow}}\\
+ &= d(x_2)(\trans{s}{y_1})
+ \tag{by $\happly(\apdfunc{d}(s),\trans{s}{y_1})$}\\
+ &= d(x_2)(y_2)
+ \tag{by $\apfunc{d(x_2)}(r)$}\\
+ &\jdeq k(x_2,y_2).
+ \notag
+ \end{align}
+\end{lem}
+\begin{proof}
+ After path induction on $s$ and $r$, both equalities reduce to reflexivities.
+\end{proof}
+
+At first it may seem surprising that \cref{thm:ap-sigma-rect-path-pair} has such a complicated statement, while it can be proven so simply.
+The reason for the complication is to ensure that the statement is well-typed: $\apfunc k (\pairpath(s,r))$ and the composite path it is claimed to be equal to must both have the same start and end points.
+Once we have managed this, the proof is easy by path induction.
+
+\begin{lem}\label{thm:flattening-rectnd-beta-ppt}
+ In the situation of \cref{thm:flattening-rectnd}, we have $\apfunc{k}(\ppt'(b,y)) = p(b,y)$.
+\end{lem}
+\begin{proof}
+ Recall that $\ppt'(b,y) \defeq \pairpath(\pp(b),q)$ where $q:\trans{\pp(b)}{y} = D(b)(y)$ comes from \cref{thm:transport-is-given}.
+ Thus, since $k$ is defined componentwise, we may compute $\apfunc{k}(\ppt'(b,y))$ by \cref{thm:ap-sigma-rect-path-pair}, with
+ \begin{align*}
+ x_1 &\defeq \cc(f(b)) & y_1 &\defeq y\\
+ x_2 &\defeq \cc(g(b)) & y_2 &\defeq D(b)(y)\\
+ s &\defeq \pp(b) & r &\defeq q.
+ \end{align*}
+ The curried function $d:\prd{w:W} P(w) \to Q$ was defined by induction on $w:W$;
+ to apply \cref{thm:ap-sigma-rect-path-pair} we need to understand $\apfunc{d(x_2)}(r)$ and $\happly(\apdfunc{d}(s),\trans s {y_1})$.
+
+ For the first, since $d(\cc(a),x)\jdeq c(a,x)$, we have
+ \[ \apfunc{d(x_2)}(r) \jdeq \apfunc{c(g(b),-)}(q). \]
+ For the second, the computation rule for the induction principle of $W$ tells us that $\apdfunc{d}(\pp(b))$ is equal to the composite~\eqref{eq:flattening-rectnd}, passed across the equivalence of \cref{thm:dpath-arrow}.
+ Thus, the computation rule given in \cref{thm:dpath-arrow} implies that $\happly(\apdfunc{d}(\pp(b)),\trans {\pp(b)}{y})$ is equal to the composite
+ \begin{align}
+ \big(\trans{\pp(b)}{c(f(b),-)}\big)(\trans {\pp(b)}{y})
+ &= \trans{\pp(b)}{c(f(b),\trans{\opp {\pp(b)}}{\trans {\pp(b)}{y}})}
+ \tag{by~\eqref{eq:transport-arrow}}\\
+ &= \trans{\pp(b)}{c(f(b),y)}
+ \notag \\
+ &= c(f(b),y)
+ \tag{by \cref{thm:trans-trivial}}\\
+ &= c(g(b),D(b)(y))
+ \tag{by $p(b,y)$}\\
+ &= c(g(b),\trans{\pp(b)}{y}).
+ \tag{by $\opp{\apfunc{c(g(b),-)}(q)}$}
+ \end{align}
+ Finally, substituting these values of $\apfunc{d(x_2)}(r)$ and $\happly(\apdfunc{d}(s),\trans s {y_1})$ into \cref{thm:ap-sigma-rect-path-pair}, we see that all the paths cancel out in pairs, leaving only $p(b,y)$.
+\end{proof}
+
+Now we are finally ready to prove the flattening lemma.
+
+\begin{proof}[Proof of \cref{thm:flattening}]
+ We define $h:\Wtil \to \sm{w:W}P(w)$ by using the recursion principle for \Wtil, with $\cct'$ and $\ppt'$ as input data.
+ Similarly, we define $k:(\sm{w:W}P(w)) \to \Wtil$ by using the recursion principle of \cref{thm:flattening-rectnd}, with $\cct$ and $\ppt$ as input data.
+
+ On the one hand, we must show that for any $z:\Wtil$, we have $k(h(z))=z$.
+ By induction on $z$, it suffices to consider the two constructors of \Wtil.
+ But we have
+ \[k(h(\cct(a,x))) \jdeq k(\cct'(a,x)) \jdeq \cct(a,x)\]
+ by definition, while similarly
+ \[\ap k{\ap h{\ppt(b,y)}} = \ap k{\ppt'(b,y)} = \ppt(b,y) \]
+ using the propositional computation rule for $\Wtil$ and \cref{thm:flattening-rectnd-beta-ppt}.
+
+ On the other hand, we must show that for any $z:\sm{w:W}P(w)$, we have $h(k(z))=z$.
+ But this is essentially identical, using \cref{thm:flattening-rect} for ``induction on $\sm{w:W}P(w)$'' and the same computation rules.
+\end{proof}
+
+\section{The general syntax of higher inductive definitions}
+\label{sec:naturality}
+
+In \cref{sec:strictly-positive}, we discussed the conditions on a putative ``inductive definition'' which make it acceptable, namely that all inductive occurrences of the type in its constructors are ``strictly positive''.\index{strict!positivity}
+In this section, we say something about the additional conditions required for \emph{higher} inductive definitions.
+Finding a general syntactic description of valid higher inductive definitions is an area of current research, and all of the solutions proposed to date are somewhat technical in nature; thus we only give a general description and not a precise definition.
+Fortunately, the corner cases never seem to arise in practice.
+
+Like an ordinary inductive definition, a higher inductive definition is specified by a list of \emph{constructors}, each of which is a (dependent) function.
+For simplicity, we may require the inputs of each constructor to satisfy the same condition as the inputs for constructors of ordinary inductive types.
+In particular, they may contain the type being defined only strictly positively.
+Note that this excludes definitions such as the $0$-truncation as presented in \cref{sec:hittruncations}, where the input of a constructor contains not only the inductive type being defined, but its identity type as well.
+It may be possible to extend the syntax to allow such definitions; but also, in \cref{sec:truncations} we will give a different construction of the $0$-truncation whose constructors do satisfy the more restrictive condition.
+
+The only difference between an ordinary inductive definition and a higher one, then, is that the \emph{output} type of a constructor may be, not the type being defined ($W$, say), but some identity type of it, such as $\id[W]uv$, or more generally an iterated identity type such as $\id[({\id[W]uv})]pq$.
+Thus, when we give a higher inductive definition, we have to specify not only the inputs of each constructor, but the expressions $u$ and $v$ (or $u$, $v$, $p$, and $q$, etc.)\ which determine the source\index{source!of a path constructor} and target\index{target!of a path constructor} of the path being constructed.
+
+Importantly, these expressions may refer to \emph{other} constructors of $W$.
+For instance, in the definition of $\Sn^1$, the constructor $\lloop$ has both $u$ and $v$ being $\base$, the previous constructor.
+To make sense of this, we require the constructors of a higher inductive type to be specified \emph{in order}, and we allow the source and target expressions $u$ and $v$ of each constructor to refer to previous constructors, but not later ones.
+(Of course, in practice the constructors of any inductive definition are written down in some order, but for ordinary inductive types that order is irrelevant.)
+
+Note that this order is not necessarily the order of ``dimension'': in principle, a 1-dimensional path constructor could refer to a 2-dimensional one and hence need to come after it.
+However, we have not given the 0-dimensional constructors (point constructors) any way to refer to previous constructors, so they might as well all come first.
+And if we use the hub-and-spoke construction (\cref{sec:hubs-spokes}) to reduce all constructors to points and 1-paths, then we might assume that all point constructors come first, followed by all 1-path constructors --- but the order among the 1-path constructors continues to matter.
+
+The remaining question is, what sort of expressions can $u$ and $v$ be?
+We might hope that they could be any expression at all involving the previous constructors.
+However, the following example shows that a naive approach to this idea does not work.
+
+\begin{eg}\label{eg:unnatural-hit}
+ Consider a family of functions $f:\prd{X:\type} (X\to X)$.
+ Of course, $f_X$ might be just $\idfunc[X]$ for all $X$, but other such $f$s may also exist.
+ For instance, nothing prevents $f_{\bool}:\bool\to\bool$ from being the nonidentity automorphism\index{automorphism!of 2, nonidentity@of $\bool$, nonidentity} (see \cref{ex:unnatural-endomorphisms}).
+
+ Now suppose that we attempt to define a higher inductive type $K$ generated by:
+ \begin{itemize}
+ \item two elements $a,b:K$, and
+ \item a path $\sigma:f_K(a)=f_K(b)$.
+ \end{itemize}
+ What would the induction principle for $K$ say?
+ We would assume a type family $P:K\to\type$, and of course we would need $x:P(a)$ and $y:P(b)$.
+ The remaining datum should be a dependent path in $P$ living over $\sigma$, which must therefore connect some element of $P(f_K(a))$ to some element of $P(f_K(b))$.
+ But what could these elements possibly be?
+ We know that $P(a)$ and $P(b)$ are inhabited by $x$ and $y$, respectively, but this tells us nothing about $P(f_K(a))$ and $P(f_K(b))$.
+\end{eg}
+
+Clearly some condition on $u$ and $v$ is required in order for the definition to be sensible.
+It seems that, just as the domain of each constructor is required to be (among other things) a \emph{covariant functor}, the appropriate condition on the expressions $u$ and $v$ is that they define \emph{natural transformations}.
+Making precise sense of this requirement is beyond the scope of this book, but informally it means that $u$ and $v$ must only involve operations which are preserved by all functions between types.
+
+For instance, it is permissible for $u$ and $v$ to refer to concatenation of paths, as in the case of the final constructor of the torus in \cref{sec:cell-complexes}, since all functions in type theory preserve path concatenation (up to homotopy).
+However, it is not permissible for them to refer to an operation like the function $f$ in \cref{eg:unnatural-hit}, which is not necessarily natural: there might be some function $g:X\to Y$ such that $f_Y \circ g \neq g\circ f_X$.
+(Univalence implies that $f_X$ must be natural with respect to all \emph{equivalences}, but not necessarily with respect to functions that are not equivalences.)
+
+The intuition of naturality supplies only a rough guide for when a higher inductive definition is permissible.
+Even if it were possible to give a precise specification of permissible forms of such definitions in this book, such a specification would probably be out of date quickly, as new extensions to the theory are constantly being explored.
+For instance, the presentation of $n$-spheres in terms of ``dependent $n$-loops\index{loop!dependent n-@dependent $n$-}'' referred to in \cref{sec:circle}, and the ``higher inductive-recursive definitions'' used in \cref{cha:real-numbers}, were innovations introduced while this book was being written.
+We encourage the reader to experiment --- with caution.
+
+
+\sectionNotes
+
+The general idea of higher inductive types was conceived in discussions between Andrej Bauer, Peter Lumsdaine, Mike Shulman, and Michael Warren at the Oberwolfach meeting in 2011, although there are some suggestions of some special cases in earlier work. Subsequently, Guillaume Brunerie and Dan Licata contributed substantially to the general theory, especially by finding convenient ways to represent them in computer proof assistants
+\index{proof!assistant}
+and do homotopy theory with them (see \cref{cha:homotopy}).
+
+A general discussion of the syntax of higher inductive types, and their semantics in higher-categorical models, appears in~\cite{ls:hits}.
+As with ordinary inductive types, models of higher inductive types can be constructed by transfinite iterative processes; a slogan is that ordinary inductive types describe \emph{free} monads while higher inductive types describe \emph{presentations} of monads.\index{monad}
+The introduction of path constructors also involves the model-category-theoretic equivalence between ``right homotopies'' (defined using path spaces) and ``left homotopies'' (defined using cylinders) --- the fact that this equivalence is generally only up to homotopy provides a semantic reason to prefer propositional computation rules for path constructors.
+
+Another (temporary) reason for this preference comes from the limitations of existing computer implementations.
+Proof assistants\index{proof!assistant} like \Coq and \Agda have ordinary inductive types built in, but not yet higher inductive types.
+We can of course introduce them by assuming lots of axioms, but this results in only propositional computation rules.
+However, there is a trick due to Dan Licata which implements higher inductive types using private data types; this yields judgmental rules for point constructors but not path constructors.
+
+The type-theoretic description of higher spheres using loop spaces and suspensions in \cref{sec:circle,sec:suspension} is largely due to Brunerie and Licata; Hou has given a type-theoretic version of the alternative description that uses $n$-dimensional paths\index{path!n-@$n$-}.
+The reduction of higher paths to 1-dimensional paths with hubs and spokes (\cref{sec:hubs-spokes}) is due to Lumsdaine and Shulman.
+The description of truncation as a higher inductive type is due to Lumsdaine; the $(-1)$-truncation is closely related to the ``bracket types'' of~\cite{ab:bracket-types}.
+The flattening lemma was first formulated in generality by Brunerie.
+
+\index{set-quotient}
+Quotient types are unproblematic in extensional type theory, such as \NuPRL~\cite{constable+86nuprl-book}.
+They are often added by passing to an extended system of setoids.\index{setoid}
+However, quotients are a trickier issue in intensional type theory (the starting point for homotopy type theory), because one cannot simply add new propositional equalities without specifying how they are to behave. Some solutions to this problem have been studied~\cite{hofmann:thesis,Altenkirch1999,altenkirch+07ott}, and several different notions of quotient types have been considered. The construction of set-quotients using higher-inductives provides an argument for our particular approach (which is similar to some that have previously been considered), because it arises as an instance of a general mechanism. Our construction does not yet provide a new solution to all the computational problems related to quotients, since we still lack a good computational understanding of higher inductive types in general---but it does mean that ongoing work on the computational interpretation of higher inductives applies to the quotients as well. The construction of quotients in terms of equivalence classes is, of
+course, a standard set-theoretic idea, and a well-known aspect of elementary topos theory; its use in type theory (which depends on the univalence axiom, at least for mere propositions) was proposed by Voevodsky. The fact that quotient types in intensional type theory imply function extensionality was proved by~\cite{hofmann:thesis}, inspired by the work of~\cite{carboni} on exact completions; \cref{thm:interval-funext} is an adaptation of such arguments.
+
+\sectionExercises
+
+\begin{ex}\label{ex:torus}
+ Define concatenation of dependent paths, prove that application of dependent functions preserves concatenation, and write out the precise induction principle for the torus $T^2$ with its computation rules.\index{torus}
+\end{ex}
+
+\begin{ex}\label{ex:suspS1}
+ Prove that $\eqv{\susp \Sn^1}{\Sn^2}$, using the explicit definition of $\Sn^2$ in terms of $\base$ and $\surf$ given in \cref{sec:circle}.
+\end{ex}
+
+\begin{ex}\label{ex:torus-s1-times-s1}
+ Prove that the torus $T^2$ as defined in \cref{sec:cell-complexes} is equivalent to $\Sn^1\times \Sn^1$.
+ (Warning: the path algebra for this is rather difficult.)
+\end{ex}
+
+\begin{ex}\label{ex:nspheres}
+ Define dependent $n$-loops\index{loop!dependent n-@dependent $n$-} and the action of dependent functions on $n$-loops, and write down the induction principle for the $n$-spheres as defined at the end of \cref{sec:circle}.
+\end{ex}
+
+\begin{ex}\label{ex:susp-spheres-equiv}
+ Prove that $\eqv{\susp \Sn^n}{\Sn^{n+1}}$, using the definition of $\Sn^n$ in terms of $\Omega^n$ from \cref{sec:circle}.
+\end{ex}
+
+\begin{ex}\label{ex:spheres-make-U-not-2-type}
+ Prove that if the type $\Sn^2$ belongs to some universe \type, then \type is not a 2-type.
+\end{ex}
+
+\begin{ex}\label{ex:monoid-eq-prop}
+ Prove that if $G$ is a monoid and $x:G$, then $\sm{y:G}((x\cdot y = e) \times (y\cdot x =e))$ is a mere proposition.
+ Conclude, using the principle of unique choice (\cref{cor:UC}), that it would be equivalent to define a group to be a monoid such that for every $x:G$, there merely exists a $y:G$ such that $x\cdot y = e$ and $y\cdot x=e$.
+\end{ex}
+
+\begin{ex}\label{ex:free-monoid}
+ Prove that if $A$ is a set, then $\lst A$ is a monoid.
+ Then complete the proof of \cref{thm:free-monoid}.\index{monoid!free}
+\end{ex}
+
+\begin{ex}\label{ex:unnatural-endomorphisms}
+ Assuming \LEM{}, construct a family $f:\prd{X:\type}(X\to X)$ such that $f_\bool:\bool\to\bool$ is the nonidentity automorphism.\index{automorphism!of 2, nonidentity@of $\bool$, nonidentity}
+\end{ex}
+
+\begin{ex}\label{ex:funext-from-interval}
+ Show that the map constructed in \cref{thm:interval-funext} is in fact a quasi-inverse to $\happly$, so that an interval type implies the full function extensionality axiom.
+ (You may have to use \cref{ex:strong-from-weak-funext}.)
+\end{ex}
+
+\begin{ex}\label{ex:susp-lump}
+ Prove the universal property of suspension:
+ \[ \Parens{\susp A \to B} \eqvsym \Parens{\sm{b_n : B} \sm{b_s : B} (A \to (b_n = b_s)) } \]
+\end{ex}
+
+\begin{ex}\label{ex:alt-integers}
+ Show that $\eqv{\Z}{\N+\unit+\N}$.
+ Show that if we were to define $\Z$ as $\N+\unit+\N$, then we could obtain \cref{thm:sign-induction} with judgmental computation rules.
+\end{ex}
+
+\begin{ex}\label{ex:trunc-bool-interval}
+ Show that we can also prove \cref{thm:interval-funext} by using $\brck \bool$ instead of $\interval$.
+\end{ex}
+
+\index{type!higher inductive|)}%
+
+% Local Variables:
+% TeX-master: "hott-online"
+% End:
diff --git a/books/hott/hlevels.tex b/books/hott/hlevels.tex
new file mode 100644
index 0000000000000000000000000000000000000000..465bb0d488adbbc58284fa66026480bdc2d17724
--- /dev/null
+++ b/books/hott/hlevels.tex
@@ -0,0 +1,2139 @@
+\chapter{Homotopy \texorpdfstring{$n$}{n}-types}
+\label{cha:hlevels}
+
+\index{n-type@$n$-type|(}%
+\indexsee{h-level}{$n$-type}
+
+One of the basic notions of homotopy theory is that of a \emph{homotopy $n$-type}: a space containing no interesting homotopy above dimension $n$.
+For instance, a homotopy $0$-type is essentially a set, containing no nontrivial paths, while a homotopy $1$-type may contain nontrivial paths, but no nontrivial paths between paths.
+Homotopy $n$-types are also called \emph{$n$-truncated spaces}.
+We have mentioned this notion already in \cref{sec:basics-sets}; our first goal in this chapter is to give it a precise definition in homotopy type theory.
+
+A dual notion to truncatedness is connectedness: a space is \emph{$n$-connected} if it has no interesting homotopy in dimensions $n$ and \emph{below}.
+For instance, a space is $0$-connected (also called just ``connected'') if it has only one connected component, and $1$-connected (also called ``simply connected'') if it also has no nontrivial loops (though it may have nontrivial higher loops between loops\index{loop!n-@$n$-}).
+
+The duality between truncatedness and connectedness is most easily seen by extending both notions to maps.
+We call a map \emph{$n$-truncated} or \emph{$n$-connected} if all its fibers are so.
+Then $n$-connected and $n$-truncated maps form the two classes of maps in an \emph{orthogonal factorization system},
+\index{orthogonal factorization system}
+\indexsee{factorization!system, orthogonal}{orthogonal factorization system}
+i.e.\ every map factors uniquely as an $n$-connected map followed by an $n$-truncated one.
+
+In the case $n={-1}$, the $n$-truncated maps are the embeddings and the $n$-connected maps are the surjections, as defined in \cref{sec:mono-surj}.
+Thus, the $n$-connected factorization system is a massive generalization of the standard image factorization of a function between sets into a surjection followed by an injection.
+At the end of this chapter, we sketch briefly an even more general theory: any type-theoretic \emph{modality} gives rise to an analogous factorization system.
+
+
+\section{Definition of \texorpdfstring{$n$}{n}-types}
+\label{sec:n-types}
+
+As mentioned in \cref{sec:basics-sets,sec:contractibility}, it turns out to be convenient to define $n$-types starting two levels below zero, with the $(-1)$-types being the mere propositions and the $(-2)$-types the contractible ones.
+
+\begin{defn}\label{def:hlevel}
+ Define the predicate $\istype{n} : \type \to \type$ for $n \geq -2$ by recursion as follows:
+ \[ \istype{n}(X) \defeq
+ \begin{cases}
+ \iscontr(X) & \text{ if } n = -2, \\
+ \prd{x,y : X} \istype{n'}(\id[X]{x}{y}) & \text{ if } n = n'+1.
+ \end{cases}
+ \]
+ We say that $X$ is an \define{$n$-type}, or sometimes that it is \emph{$n$-truncated},
+ \indexdef{n-type@$n$-type}%
+ \indexsee{n-truncated@$n$-truncated!type}{$n$-type}%
+ \indexsee{type!n-type@$n$-type}{$n$-type}%
+ \indexsee{type!n-truncated@$n$-truncated}{$n$-type}%
+ if $\istype{n}(X)$ is inhabited.
+\end{defn}
+
+\begin{rmk}
+ The number $n$ in \cref{def:hlevel} ranges over all integers greater than or equal to $-2$.
+ We could make sense of this formally by defining a type $\Z_{{\geq}-2}$ of such integers (a type whose induction principle is identical to that of $\nat$), or instead defining a predicate $\istype{(k-2)}$ for $k : \nat$.
+ Either way, we can prove theorems about $n$-types by induction on $n$, with $n = -2$ as the base case.
+\end{rmk}
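+
+For instance, unfolding \cref{def:hlevel} twice, a type $X$ is a $0$-type exactly when the type
+\[ \prd{x,y:X}{p,q:\id[X]{x}{y}} \iscontr(\id{p}{q}) \]
+is inhabited, i.e.\ when for any $x,y:X$ and any $p,q:\id{x}{y}$, the type $\id{p}{q}$ is contractible.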
+
+\begin{eg}
+ \index{set}
+ We saw in \cref{thm:prop-minusonetype} that $X$ is a $(-1)$-type if and only if it is a mere proposition.
+ Therefore, $X$ is a $0$-type if and only if it is a set.
+\end{eg}
+
+We have also seen that there are types which are not sets (\cref{thm:type-is-not-a-set}).
+So far, however, we have not shown for any $n>0$ that there exist types which are not $n$-types.
+In \cref{cha:homotopy} we will show that the $(n+1)$-sphere $\Sn^{n+1}$ is not an $n$-type.
+(Kraus has also shown that the $n^{\mathrm{th}}$ nested univalent universe is not an $n$-type, without using any higher inductive types.)
+Moreover, in \cref{sec:whitehead} we will give an example of a type that is not an $n$-type for \emph{any} (finite) number $n$.
+
+We begin the general theory of $n$-types by showing they are closed under certain operations and constructors.
+
+\begin{thm}\label{thm:h-level-retracts}
+ \index{retract!of a type}%
+ \index{retraction}%
+ Let $p : X \to Y$ be a retraction and suppose that $X$ is an $n$-type, for any $n\geq -2$.
+ Then $Y$ is also an $n$-type.
+\end{thm}
+
+\begin{proof}
+ We proceed by induction on $n$.
+ The base case $n=-2$ is handled by \cref{thm:retract-contr}.
+
+ For the inductive step, assume that any retract of an $n$-type is an $n$-type, and that $X$ is an $\nplusone$-type.
+ Let $y, y' : Y$; we must show that $\id{y}{y'}$ is an $n$-type.
+ Let $s$ be a section of $p$, and let $\epsilon$ be a homotopy $\epsilon : p \circ s \htpy 1$.
+ Since $X$ is an $\nplusone$-type, $\id[X]{s(y)}{s(y')}$ is an $n$-type.
+ We claim that $\id{y}{y'}$ is a retract of $\id[X]{s(y)}{s(y')}$.
+ For the section, we take
+ \[ \apfunc s : (y=y') \to (s(y)=s(y')). \]
+ For the retraction, we define $t:(s(y)=s(y'))\to(y=y')$ by
+ \[ t(q) \defeq \opp{\epsilon_y} \ct \ap p q \ct \epsilon_{y'}.\]
+ To show that $t$ is a retraction of $\apfunc s$, we must show that
+ \[ \opp{\epsilon_y} \ct \ap p {\ap sr} \ct \epsilon_{y'} = r \]
+ for any $r:y=y'$.
+ But this follows from \cref{lem:htpy-natural}.
+\end{proof}
+
+As an immediate corollary we obtain the stability of $n$-types under equivalence (which is also immediate from univalence):
+
+\begin{cor}\label{cor:preservation-hlevels-weq}
+ If $\eqv{X}{Y}$ and $X$ is an $n$-type, then so is $Y$.
+\end{cor}
+
+Recall also the notion of embedding from \cref{sec:mono-surj}.
+
+\begin{thm}\label{thm:isntype-mono}
+ \index{function!embedding}
+ If $f:X\to Y$ is an embedding and $Y$ is an $n$-type for some $n\ge -1$, then so is $X$.
+\end{thm}
+\begin{proof}
+ Let $x,x':X$; we must show that $\id[X]{x}{x'}$ is an $\nminusone$-type.
+ But since $f$ is an embedding, we have $(\id[X]{x}{x'}) \eqvsym (\id[Y]{f(x)}{f(x')})$, and the latter is an $\nminusone$-type by assumption.
+\end{proof}
+
+Note that this theorem fails when $n=-2$: the map $\emptyt \to \unit$ is an embedding, but $\unit$ is a $(-2)$-type while $\emptyt$ is not.
+
+\begin{thm}\label{thm:hlevel-cumulative}
+ The hierarchy of $n$-types is cumulative in the following sense:
+ given a number $n \geq -2$, if $X$ is an $n$-type, then it is also an $\nplusone$-type.
+\end{thm}
+
+\begin{proof}
+ We proceed by induction on $n$.
+
+ For $n = -2$, we need to show that a contractible type, say, $A$, has contractible path spaces.
+ Let $a_0: A$ be the center of contraction of $A$, and let $x, y : A$. We show that $\id[A]{x}{y}$
+ is contractible.
+ By contractibility of $A$ we have a path $\contr_x \ct \opp{\contr_y} : x = y$, which we choose as
+ the center of contraction for $\id{x}{y}$.
+ Given any $p : x = y$, we need to show $p = \contr_x \ct \opp{\contr_y}$.
+ By path induction, it suffices to show that
+ $\refl{x} = \contr_x \ct \opp{\contr_x}$, which is trivial.
+
+ For the inductive step, we need to show that $\id[X]{x}{y}$ is an $\nplusone$-type, provided
+ that $X$ is an $\nplusone$-type. Applying the inductive hypothesis to $\id[X]{x}{y}$
+ yields the desired result.
+\end{proof}
+
+% \section{Preservation under constructors}
+% \label{sec:ntype-pres}
+
+We now show that $n$-types are preserved by most of the type forming operations.
+
+\begin{thm}\label{thm:ntypes-sigma}
+ Let $n \geq -2$, and let $A : \type$ and $B : A \to \type$.
+ If $A$ is an $n$-type and for all $a : A$, $B(a)$ is an $n$-type, then so is $\sm{x : A} B(x)$.
+\end{thm}
+
+\begin{proof}
+ We proceed by induction on $n$.
+
+ For $n = -2$, we choose the center of contraction for $\sm{x : A} B(x)$ to be the pair
+ $(a_0, b_0)$, where $a_0 : A$ is the center of contraction of $A$ and $b_0 : B(a_0)$ is the center of contraction of $B(a_0)$.
+ Given any other element $(a,b)$ of $\sm{x : A} B(x)$, we provide a path $\id{(a, b)}{(a_0,b_0)}$
+ by contractibility of $A$ and $B(a_0)$, respectively.
+
+ For the inductive step, suppose that $A$ is an $\nplusone$-type and
+ for any $a : A$, $B(a)$ is an $\nplusone$-type. We show that $\sm{x : A} B(x)$ is an $\nplusone$-type:
+ fix $(a_1, b_1)$ and $(a_2,b_2)$ in $\sm{x : A} B(x)$,
+ and show that $\id{(a_1, b_1)}{(a_2,b_2)}$ is an $n$-type.
+ By \cref{thm:path-sigma} we have
+ \[ \eqvspaced{(\id{(a_1, b_1)}{(a_2,b_2)})}{\sm{p : \id{a_1}{a_2}} (\id[B(a_2)]{\trans{p}{b_1}}{b_2})} \]
+ and by preservation of $n$-types under equivalences (\cref{cor:preservation-hlevels-weq})
+ it suffices to prove that the latter is an $n$-type. This follows from the
+ inductive hypothesis.
+\end{proof}
+
+As a special case, if $A$ and $B$ are $n$-types, so is $A\times B$.
+Note also that \cref{thm:hlevel-cumulative} implies that if $A$ is an $n$-type, then so is $\id[A]xy$ for any $x,y:A$.
+Combining this with \cref{thm:ntypes-sigma}, we see that for any functions $f:A\to C$ and $g:B\to C$ between $n$-types, their pullback\index{pullback}
+\[ A\times_C B \defeq \sm{x:A}{y:B} (f(x)=g(y)) \]
+(see \cref{ex:pullback}) is also an $n$-type.
+More generally, $n$-types are closed under all \emph{limits}.
+
+\begin{thm}\label{thm:hlevel-prod}
+ Let $n\geq -2$, and let $A : \type$ and $B : A \to \type$.
+ If for all $a : A$, $B(a)$ is an $n$-type, then so is $\prd{x : A} B(x)$.
+\end{thm}
+
+\begin{proof}
+ We proceed by induction on $n$.
+ For $n = -2$, the result is simply \cref{thm:contr-forall}.
+
+ For the inductive step, assume the result is true for $n$-types, and that each $B(a)$ is an $\nplusone$-type.
+ Let $f, g : \prd{a:A}B(a)$.
+ We need to show that $\id{f}{g}$ is an $n$-type.
+ By function extensionality and closure of $n$-types under equivalence, it suffices to show that $\prd{a : A} (\id[B(a)]{f(a)}{g(a)})$ is an $n$-type.
+ This follows from the inductive hypothesis.
+\end{proof}
+
+As a special case of the above theorem, the function space $A \to B$ is an $n$-type provided that $B$ is an $n$-type.
+We can now generalize our observations in \cref{cha:basics} that $\isset(A)$ and $\isprop(A)$ are mere propositions.
+
+\begin{thm}\label{thm:isaprop-isofhlevel}
+ For any $n \geq -2$ and any type $X$, the type $\istype{n}(X)$ is a mere proposition.
+\end{thm}
+\begin{proof}
+ We proceed by induction with respect to $n$.
+
+ For the base case, we need to show that for any $X$, the type $\iscontr(X)$ is a mere proposition.
+ This is \cref{thm:isprop-iscontr}.
+
+For the inductive step we need to show
+\[\prd{X : \type} \isprop (\istype{n}(X)) \to \prd{X : \type} \isprop (\istype{\nplusone}(X)). \]
+To show the conclusion of this implication, we need to show that for any type $X$, the type
+\[\prd{x, x' : X}\istype{n}(x = x')\]
+is a mere proposition. By \cref{thm:isprop-forall} or \cref{thm:hlevel-prod}, it suffices to show that for any $x, x' : X$, the type $\istype{n}(x =_X x')$ is a mere
+proposition.
+But this follows from the inductive hypothesis applied to the type $(x =_X x')$.
+\end{proof}
+
+Finally, we show that the type of $n$-types is itself an $\nplusone$-type.
+We define this to be:
+\symlabel{universe-of-ntypes}
+\[\ntype{n} \defeq \sm{X : \type} \istype{n}(X). \]
+If necessary, we may specify the universe $\UU$ by writing $\ntypeU{n}$.
+In particular, we have $\prop \defeq \ntype{(-1)}$ and $\set \defeq \ntype{0}$, as defined in \cref{cha:basics}.
+Note that just as for \prop and \set, because $\istype{n}(X)$ is a mere proposition, by \cref{thm:path-subset} for any $(X,p), (X',p'):\ntype{n}$ we have
+\begin{align*}
+ \Big(\id[\ntype{n}]{(X, p)}{(X', p')}\Big) &\eqvsym (\id[\type] X X')\\
+ &\eqvsym (\eqv{X}{X'}).
+\end{align*}
+
+\begin{thm}\label{thm:hleveln-of-hlevelSn}
+ For any $n \geq -2$, the type $\ntype{n}$ is an $\nplusone$-type.
+\end{thm}
+\begin{proof}%[Proof of \cref{thm:hleveln-of-hlevelSn}]
+ Let $(X, p), (X', p') : \ntype{n}$; we need to show that $\id{(X, p)}{(X', p')}$ is an $n$-type.
+ By the above observation, this type is equivalent to $\eqv{X}{X'}$.
+ Next, we observe that the projection
+ \[(\eqv{X}{X'}) \to (X \rightarrow X')\]
+ is an embedding, so that if $n\geq -1$, then by \cref{thm:isntype-mono} it suffices to show that $X \rightarrow X'$ is an $n$-type.
+ But since $n$-types are preserved under the arrow type, this reduces to an assumption that $X'$ is an $n$-type.
+
+ In the case $n=-2$, this argument shows that $\eqv{X}{X'}$ is a $(-1)$-type --- but it is also inhabited, since any two contractible types
+are equivalent to \unit, and hence to each other.
+ Thus, $\eqv{X}{X'}$ is also a $(-2)$-type.
+\end{proof}
+
+\section{Uniqueness of identity proofs and Hedberg's theorem}
+\label{sec:hedberg}
+
+\index{set|(}%
+
+In \cref{sec:basics-sets} we defined a type $X$ to be a \emph{set} if for all $x, y : X$ and $p, q : x =_X y$ we have $p = q$.
+In conventional type theory, this property goes by the name of \define{uniqueness of identity proofs (UIP)}.
+\indexdef{uniqueness!of identity proofs}%
+We have seen also that it is equivalent to being a $0$-type in the sense of the previous section.
+Here is another equivalent characterization, involving Streicher's ``Axiom K'' \cite{Streicher93}:
+
+\begin{thm}\label{thm:h-set-uip-K}
+ A type $X$ is a set if and only if it satisfies \define{Axiom K}:
+ \indexdef{axiom!Streicher's Axiom K}%
+ for all $x : X$ and $p : (x =_X x)$ we have $p = \refl{x}$.
+\end{thm}
+
+\begin{proof}
+ Clearly Axiom K is a special case of UIP.
+ Conversely, if $X$ satisfies Axiom K, let $x, y : X$ and $p, q : (\id{x}{y})$; we want to show $p=q$.
+ But induction on $q$ reduces this goal precisely to Axiom K.
+\end{proof}
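+
+The implication from Axiom K to UIP can be sketched in Lean-style syntax (a hypothetical standalone statement: Lean's own equality type satisfies K already, so here K is taken as an explicit hypothesis, as in the proof above):
+\begin{verbatim}
+-- Sketch: Axiom K implies UIP, following the proof above.
+theorem uip_of_K {X : Type}
+    (K : ∀ (x : X) (p : x = x), p = rfl) :
+    ∀ {x y : X} (p q : x = y), p = q := by
+  intro x y p q
+  cases q            -- induction on q reduces the goal to Axiom K
+  exact K x p
+\end{verbatim}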
+
+We stress that \emph{we} are not assuming UIP or the K principle as axioms!
+They are simply properties which a particular type may or may not satisfy (which are equivalent to being a set).
+Recall from \cref{thm:type-is-not-a-set} that \emph{not} all types are sets.
+
+The following theorem is another useful way to show that types are sets.
+
+\begin{thm}\label{thm:h-set-refrel-in-paths-sets}
+ \index{relation!reflexive}%
+ Suppose $R$ is a reflexive\index{reflexivity!of a relation} mere relation on a type $X$ implying identity.
+ Then $X$ is a set, and $R(x,y)$ is equivalent to $\id[X]{x}{y}$ for all $x,y:X$.
+\end{thm}
+
+\begin{proof}
+ Let $\rho : \prd{x:X} R(x,x)$ witness reflexivity of $R$, and let \narrowequation{f : \prd{x,y:X} R(x,y) \to (\id[X]{x}{y})} be a witness that $R$
+implies identity.
+ Note first that the two statements in the theorem are equivalent.
+ For on one hand, if $X$ is a set, then $\id[X]xy$ is a mere proposition, and since it is logically equivalent to the mere proposition
+$R(x,y)$ by hypothesis, it must also be equivalent to it.
+ On the other hand, if $\id[X]xy$ is equivalent to $R(x,y)$, then like the latter it is a mere proposition for all $x,y:X$, and hence $X$
+is a set.
+
+ We give two proofs of this theorem.
+ The first shows directly that $X$ is a set; the second shows directly that $R(x,y)\eqvsym (x=y)$.
+
+ \emph{First proof:} we show that $X$ is a set.
+ The idea is the same as that of \cref{thm:prop-set}: the function $f$ must be continuous in its arguments $x$ and $y$.
+ However, it is slightly more notationally complicated because we have to deal with the additional argument of type $R(x,y)$.
+
+ Firstly, for any $x:X$ and $p:\id[X]xx$, consider $\apdfunc{f(x)}(p)$.
+ This is a dependent path from $f(x,x)$ to itself.
+ Since $f(x,x)$ is still a function $R(x,x) \to (\id[X]xx)$, by \cref{thm:dpath-arrow} this yields for any $r:R(x,x)$ a path
+ \[\trans{p}{f(x,x,r)} = f(x,x,\trans{p}r).
+ \]
+ On the left-hand side, we have transport in an identity type, which is concatenation.
+ And on the right-hand side, we have $\trans{p}r = r$, since both lie in the mere proposition $R(x,x)$.
+ Thus, substituting $r\defeq \rho(x)$, we obtain
+ \[ f(x,x,\rho(x)) \ct p = f(x,x,\rho(x)). \]
+ By cancellation, $p=\refl{x}$.
+ So $X$ satisfies Axiom K, and hence is a set.
+
+ \emph{Second proof:} we show that each $f(x,y) : R(x,y) \to \id[X]{x}{y}$ is an equivalence.
+ By \cref{thm:total-fiber-equiv}, it suffices to show that $f$ induces an equivalence of total spaces:
+ \begin{equation*}
+ \eqv{\Parens{\sm{y:X}R(x,y)}}{\Parens{\sm{y:X}\id[X]{x}{y}}}.
+ \end{equation*}
+ By \cref{thm:contr-paths}, the type on the right is contractible, so it
+ suffices to show that the type on the left is contractible. As the center of
+ contraction we take the pair $\pairr{x,\rho(x)}$. It remains to show, for
+ every ${y:X}$ and every ${H:R(x,y)}$ that
+ \begin{equation*}
+ \id{\pairr{x,\rho(x)}}{\pairr{y,H}}.
+ \end{equation*}
+ But since $R(x,y)$ is a mere proposition, by \cref{thm:path-sigma} it suffices to show that
+ $\id[X]{x}{y}$, which we get from $f(H)$.
+\end{proof}
+
+\begin{cor}\label{notnotstable-equality-to-set}
+ If a type $X$ has the property that $\neg\neg(x=y)\to(x=y)$ for any $x,y:X$, then $X$ is a set.
+\end{cor}
+\begin{proof}
+ The relation $R(x,y)\defeq \neg\neg(x=y)$ is a mere relation (any negation is a mere proposition) which is reflexive (given $p:\neg(x=x)$, we have $p(\refl x):\emptyt$) and implies identity by hypothesis; so \cref{thm:h-set-refrel-in-paths-sets} applies.
+\end{proof}
+
+Another convenient way to show that a type is a set is the following.
+Recall from \cref{sec:intuitionism} that a type $X$ is said to have \emph{decidable equality}
+\index{decidable!equality|(}%
+if for all $x, y : X$ we have
+\[(x =_X y) + \neg (x =_X y).\]
+\index{continuity of functions in type theory@``continuity'' of functions in type theory}%
+\index{functoriality of functions in type theory@``functoriality'' of functions in type theory}%
+This is a very strong condition: it says that a path $x=y$ can be chosen, when it exists, continuously (or computably, or functorially) in $x$ and $y$.
+This turns out to imply that $X$ is a set, by way of \cref{thm:h-set-refrel-in-paths-sets} and the following lemma.
+
+\begin{lem}\label{lem:hedberg-helper}
+For any type $A$ we have $(A+\neg A)\to(\neg\neg A\to A)$.
+\end{lem}
+
+\begin{proof}
+This was essentially already proven in \cref{thm:not-lem}, but we repeat the argument.
+Suppose $x:A+\neg A$. We have two cases to consider.
+If $x$ is $\inl(a)$ for some $a:A$, then we have the constant function $\neg\neg A
+\to A$ which maps everything to $a$. If $x$ is $\inr(t)$ for some $t:\neg A$,
+we have $g(t):\emptyt$ for every $g:\neg\neg A$. Hence we may use
+\emph{ex falso quodlibet}, that is $\rec{\emptyt}$, to obtain an element of $A$ for any $g:\neg\neg A$.
+\end{proof}
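+
+The same case analysis can be written in Lean, as a hedged sketch (using \texttt{Prop} and disjunction in place of the type-theoretic sum; this is an illustration, not a formalization in the system of this book):
+\begin{verbatim}
+-- (A ∨ ¬A) → (¬¬A → A): case analysis as in the proof above.
+def stab {A : Prop} : A ∨ ¬A → (¬¬A → A)
+  | Or.inl a => fun _ => a           -- the constant function at a
+  | Or.inr t => fun g => absurd t g  -- ex falso, from g t : False
+\end{verbatim}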
+
+\index{anger}
+\begin{thm}[Hedberg]\label{thm:hedberg}
+ \index{Hedberg's theorem}%
+ \index{theorem!Hedberg's}%
+ If $X$ has decidable equality, then $X$ is a set.
+\end{thm}
+
+\begin{proof}
+If $X$ has decidable equality, it follows that $\neg\neg(x=y)\to(x=y)$ for any
+$x,y:X$. Therefore, Hedberg's theorem follows from
+\cref{notnotstable-equality-to-set}.
+\end{proof}
+
+There is, of course, a strong connection between this theorem and \cref{thm:not-lem}.
+The statement \LEM{\infty} that is denied by \cref{thm:not-lem} clearly implies that every type has decidable equality, and hence is a set, which we know is not the case.
+\index{excluded middle}%
+Note that the consistent axiom \LEM{} from \cref{sec:intuitionism} implies only that every type has \emph{merely decidable equality}, i.e.\ that for any $A$ we have
+\indexdef{equality!merely decidable}%
+\indexdef{merely!decidable equality}%
+\[ \prd{a,b:A} (\brck{a=b} + \neg\brck{a=b}). \]
+
+\index{decidable!equality|)}%
+
+As an example application of \cref{thm:hedberg}, recall that in \cref{thm:nat-set} we observed that $\nat$ is a set, using our characterization of its equality types in
+\cref{sec:compute-nat}.
+A more traditional proof of this theorem uses only~\eqref{eq:zero-not-succ} and~\eqref{eq:suc-injective}, rather than the full
+characterization of \cref{thm:path-nat}, with \cref{thm:hedberg} to fill in the blanks.
+
+\begin{thm}\label{prop:nat-is-set}
+ The type $\nat$ of natural numbers has decidable equality, and hence is a set.
+\end{thm}
+
+\begin{proof}
+ Let $x, y : \nat$ be given; we proceed by induction on $x$ and case analysis on $y$ to prove $(x=y)+\neg(x=y)$.
+ If $x \jdeq 0$ and $y \jdeq 0$, we take $\inl(\refl0)$.
+ If $x \jdeq 0$ and $y \jdeq \suc(n)$, then by~\eqref{eq:zero-not-succ} we get $\neg (0 = \suc (n))$.
+
+ For the inductive step, let $x \jdeq \suc (n)$.
+ If $y \jdeq 0$, we use~\eqref{eq:zero-not-succ} again.
+ Finally, if $y \jdeq \suc (m)$, the inductive hypothesis gives $(m = n)+\neg(m = n)$.
+ In the first case, if $p:m=n$, then $\ap \suc p:\suc(m)=\suc(n)$.
+ And in the second case,~\eqref{eq:suc-injective} yields $\neg(\suc(m)=\suc(n))$.
+\end{proof}
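+
+The same double induction can be sketched in Lean (a hypothetical standalone definition; Lean's standard library already provides decidable equality for \texttt{Nat}):
+\begin{verbatim}
+-- Decidable equality for Nat, mirroring the proof above.
+def decEqNat : (x y : Nat) → Decidable (x = y)
+  | 0,     0     => isTrue rfl
+  | 0,     _ + 1 => isFalse (fun h => Nat.noConfusion h)   -- 0 ≠ suc n
+  | _ + 1, 0     => isFalse (fun h => Nat.noConfusion h)
+  | m + 1, n + 1 =>
+    match decEqNat m n with
+    | isTrue p   => isTrue (congrArg Nat.succ p)            -- ap of suc
+    | isFalse np => isFalse (fun h => np (Nat.succ.inj h))  -- suc injective
+\end{verbatim}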
+
+\index{set|)}%
+
+\index{axiom!Streicher's Axiom K!generalization to n-types@generalization to $n$-types}%
+Although Hedberg's theorem appears rather special to sets ($0$-types), ``Axiom K'' generalizes naturally to $n$-types.
+Note that the ordinary Axiom K (as a property of a type $X$) states that for all $x:X$, the loop space\index{loop space} $\Omega(X,x)$ (see \cref{def:loopspace}) is contractible.
+Since $\Omega(X,x)$ is always inhabited (by $\refl{x}$), this is equivalent to its being a mere proposition (a $(-1)$-type).
+Since $0 = (-1)+1$, this suggests the following generalization.
+
+\begin{thm}\label{thm:hlevel-loops}
+ For any $n\geq -1$, a type $X$ is an $\nplusone$-type if and only if for all $x : X$, the type $\Omega(X, x)$ is an $n$-type.
+\end{thm}
+
+Before proving this, we prove an auxiliary lemma:
+
+\begin{lem}\label{lem:hlevel-if-inhab-hlevel}
+ Let $n \geq -1$ and $X : \type$.
+ If, from any inhabitant of $X$, it follows that $X$ is an $n$-type, then $X$ is an $n$-type.
+\end{lem}
+\begin{proof}
+ Let $f : X \to \istype{n}(X)$ be the given map.
+ We need to show that for any $x, x' : X$, the type $\id{x}{x'}$ is an $\nminusone$-type.
+ But then $f(x)$ shows that $X$ is an $n$-type, hence all its path spaces are $\nminusone$-types.
+\end{proof}
+
+\begin{proof}[Proof of \cref{thm:hlevel-loops}]
+ The ``only if'' direction is obvious, since $\Omega(X,x)\defeq (\id[X]xx)$.
+ Conversely, in order to show that $X$ is an $\nplusone$-type, we need to show that for any $x, x' : X$, the type $\id{x}{x'}$ is an
+$n$-type.
+ Following \cref{lem:hlevel-if-inhab-hlevel} it suffices to give a map
+ \[ (\id{x}{x'}) \to \istype{n}(\id{x}{x'}). \]
+ By path induction, it suffices to do this when $x\jdeq x'$, in which case it follows from the assumption that $\Omega(X, x)$ is an
+$n$-type.
+\end{proof}
+
+\index{whiskering}
+By induction and some slightly clever whiskering, we can obtain a generalization of the K property to $n>0$.
+
+\begin{thm}\label{thm:ntype-nloop}
+ \index{loop space!iterated}%
+ For every $n\ge -1$, a type $A$ is an $n$-type if and only if $\Omega^{n+1}(A,a)$ is contractible for all $a:A$.
+\end{thm}
+\begin{proof}
+ Recalling that $\Omega^0(A,a) = (A,a)$, the case $n=-1$ is \cref{ex:prop-inhabcontr}.
+ The case $n=0$ is \cref{thm:h-set-uip-K}.
+ Now we use induction; suppose the statement holds for $n:\N$.
+ By \cref{thm:hlevel-loops}, $A$ is an $(n+1)$-type iff $\Omega(A,a)$ is an $n$-type for all $a:A$.
+ By the inductive hypothesis, the latter is equivalent to saying that $\Omega^{n+1}(\Omega(A,a),p)$ is contractible for all $p:\Omega(A,a)$.
+
+ Since $\Omega^{n+2}(A,a) \defeq \Omega^{n+1}(\Omega(A,a),\refl{a})$, and $\Omega^{n+1} = \Omega^n \circ \Omega$, it will suffice to show that $\Omega(\Omega(A,a),p)$ is equal to $\Omega(\Omega(A,a),\refl{a})$, in the type $\pointed\type$ of pointed types.
+ For this, it suffices to give an equivalence
+ \[ g : \Omega(\Omega(A,a),p) \eqvsym \Omega(\Omega(A,a),\refl{a}) \]
+ which carries the basepoint $\refl{p}$ to the basepoint $\refl{\refl{a}}$.
+ For $q:p=p$, define $g(q):\refl{a} = \refl{a}$ to be the following composite:
+ \[ \refl{a} = p\ct \opp p \overset{q}{=} p\ct\opp p = \refl{a}, \]
+ where the path labeled ``$q$'' is actually $\apfunc{\lam{r} r\ct\opp p} (q)$.
+ Then $g$ is an equivalence because it is a composite of equivalences
+ \[ (p=p) \xrightarrow{\apfunc{\lam{r} r\ct\opp p}} (p\ct \opp p = p\ct \opp p) \xrightarrow{i\ct - \ct \opp i} (\refl{a} = \refl{a}) \]
+ using \cref{eg:concatequiv,thm:paths-respects-equiv}, where $i:\refl{a} = p\ct \opp p$ is the canonical equality.
+ And it is evident that $g(\refl{p}) = \refl{\refl{a}}$.
+\end{proof}
+
+\section{Truncations}
+\label{sec:truncations}
+
+\indexsee{n-truncation@$n$-truncation}{truncation}%
+\index{truncation!n-truncation@$n$-truncation|(defstyle}%
+
+In \cref{subsec:prop-trunc} we introduced the propositional truncation, which makes the ``best approximation'' of a type that is a mere
+proposition, i.e.\ a $(-1)$-type.
+In \cref{sec:hittruncations} we constructed this truncation as a higher inductive type, and gave one way to generalize it to a
+0-truncation.
+We now explain a better generalization of this, which truncates any type into an $n$-type for any $n\geq -2$; in classical homotopy theory this would be called its \define{$n^{\mathrm{th}}$ Postnikov section}.\index{Postnikov tower}
+
+The idea is to make use of \cref{thm:ntype-nloop}, which states that $A$ is an $n$-type just when $\Omega^{n+1}(A,a)$
+\index{loop space!iterated}%
+is contractible for
+all $a:A$, and \cref{lem:susp-loop-adj}, which implies that
+\narrowequation{\Omega^{n+1}(A,a) \eqvsym \Map_{*}(\Sn^{n+1},(A,a)),} where $\Sn^{n+1}$ is equipped with some basepoint which we may as well call \base.
+However, contractibility of $\Map_*(\Sn^{n+1},(A,a))$ is something that we can ensure directly by giving path constructors.
+
+\index{hub and spoke}%
+We will use the ``hub and spoke'' construction as in \cref{sec:hubs-spokes}.
+Thus, for $n\ge -1$, we take $\trunc nA$ to be the higher inductive type generated by:
+\begin{itemize}
+\item a function $\tprojf n : A \to \trunc n A$,
+\item for each $r:\Sn^{n+1} \to \trunc n A$, a \emph{hub} point $h(r):\trunc n A$, and
+\item for each $r:\Sn^{n+1} \to \trunc n A$ and each $x:\Sn^{n+1}$, a \emph{spoke} path $s_r(x):r(x) = h(r)$.
+\end{itemize}
+
+\noindent
+The existence of these constructors is now enough to show:
+
+\begin{lem}
+ $\trunc n A$ is an $n$-type.
+\end{lem}
+\begin{proof}
+ By \cref{thm:ntype-nloop}, it suffices to show that $\Omega ^{n+1}(\trunc nA,b)$ is contractible for all $b:\trunc nA$, which by
+\cref{lem:susp-loop-adj} is equivalent to \narrowequation{\Map_*(\Sn^{n+1},(\trunc nA,b)).}
+ As center of contraction for the latter, we choose the function $c_b:\Sn^{n+1} \to \trunc nA$ which is constant at $b$, together with
+$\refl b : c_b(\base) = b$.
+
+ Now, an arbitrary element of $\Map_*(\Sn^{n+1},(\trunc nA,b))$ consists of a map $r:\Sn^{n+1} \to \trunc n A$ together with a path
+$p:r(\base)=b$.
+ By function extensionality, to show $r = c_b$ it suffices to give, for each $x:\Sn^{n+1}$, a path $r(x)=c_b(x) \jdeq b$.
+ We choose this to be the composite $s_r(x) \ct \opp{s_r(\base)} \ct p$, where $s_r(x)$ is the spoke at $x$.
+
+
+ Finally, we must show that when transported along this equality $r=c_b$, the path $p$ becomes $\refl b$.
+ By transport in path types, this means we need
+ \[\opp{(s_r(\base) \ct \opp{s_r(\base)} \ct p)} \ct p = \refl b.\]
+ But this is immediate from path operations.
+\end{proof}
+
+(This construction fails for $n=-2$, but in that case we can simply define $\trunc{-2}{A}\defeq \unit$ for all $A$.
+From now on we assume $n\ge -1$.)
+
+\index{induction principle!for truncation}%
+To show the desired universal property of the $n$-truncation, we need the induction principle.
+We extract this from the constructors in the usual way; it says that given $P:\trunc nA\to\type$ together with
+\begin{itemize}
+\item For each $a:A$, an element $g(a) : P(\tproj na)$,
+\item For each $r:\Sn^{n+1} \to \trunc n A$ and $r':\prd{x:\Sn^{n+1}} P(r(x))$, an element $h'(r,r'):P(h(r))$,
+\item For each $r:\Sn^{n+1} \to \trunc n A$ and $r':\prd{x:\Sn^{n+1}} P(r(x))$, and each $x:\Sn^{n+1}$, a dependent path
+$\dpath{P}{s_r(x)}{r'(x)}{h'(r,r')}$,
+\end{itemize}
+there exists a section $f:\prd{x:\trunc n A} P(x)$ with $f(\tproj n a) \jdeq g(a)$ for all $a:A$.
+To make this more useful, we reformulate it as follows.
+
+\begin{thm}\label{thm:truncn-ind}
+ For any type family $P:\trunc n A \to \type$ such that each $P(x)$ is an $n$-type, and any function $g : \prd{a:A} P(\tproj n a)$, there
+exists a section $f:\prd{x:\trunc n A} P(x)$ such that $f(\tproj n a)\defeq g(a)$ for all $a:A$.
+\end{thm}
+\begin{proof}
+ It will suffice to construct the second and third data listed above, since $g$ has exactly the type of the first datum.
+ Given $r:\Sn^{n+1} \to \trunc n A$ and $r':\prd{x:\Sn^{n+1}} P(r(x))$, we have $h(r):\trunc n A$ and $s_r :\prd{x:\Sn^{n+1}} (r(x) =
+h(r))$.
+ Define $t:\Sn^{n+1} \to P(h(r))$ by $t(x) \defeq \trans{s_r(x)}{r'(x)}$.
+ Then since $P(h(r))$ is $n$-truncated, there exists a point $u:P(h(r))$ and a contraction $v:\prd{x:\Sn^{n+1}} (t(x) = u)$.
+ Define $h'(r,r') \defeq u$, giving the second datum.
+ Then (recalling the definition of dependent paths), $v$ has exactly the type required of the third datum.
+\end{proof}
+
+In particular, if $E$ is some $n$-type, we can consider the constant family of types equal to $E$ for every point of $A$.
+\symlabel{extend}
+\index{recursion principle!for truncation}%
+Thus, every map $f:A\to{}E$ can be extended to a map $\extend{f}:\trunc nA\to{}E$ defined by $\extend{f}(\tproj na)\defeq f(a)$; this is the \emph{recursion principle} for $\trunc n A$.
+
+The induction principle also implies a uniqueness principle for functions of this form.
+\index{uniqueness!principle, propositional!for functions on a truncation}%
+Namely, if $E$ is an $n$-type and $g,g':\trunc nA\to{}E$ are such
+that $g(\tproj na)=g'(\tproj na)$ for every $a:A$, then $g(x)=g'(x)$ for all $x:\trunc nA$, since the type $g(x)=g'(x)$ is an $n$-type.
+Thus, $g=g'$.
+(In fact, this uniqueness principle holds more generally when $E$ is an $\nplusone$-type.)
+This yields the following universal property.
+
+\begin{lem}[Universal property of truncations]\label{thm:trunc-reflective}
+ \index{universal!property!of truncation}%
+ Let $n\ge-2$, $A:\type$ and $B:\typele{n}$. The following map is an
+ equivalence:
+ \[\function{(\trunc nA\to{}B)}{(A\to{}B)}{g}{g\circ\tprojf n}\]
+\end{lem}
+
+\begin{proof}
+ Given that $B$ is $n$-truncated, any $f:A\to{}B$ can be extended to a map $\extend{f}:\trunc nA\to{}B$.
+ The map $\extend{f}\circ\tprojf n$ is equal to $f$, because for every $a:A$ we have $\extend{f}(\tproj na)=f(a)$ by definition.
+ And the map $\extend{g\circ\tprojf n}$ is equal to $g$, because they both send $\tproj na$ to $g(\tproj na)$.
+\end{proof}
+
+In categorical language, this says that the $n$-types form a \emph{reflective subcategory} of the category of types.
+\index{reflective!subcategory}%
+(To state this fully precisely, one ought to use the language of $(\infty,1)$-categories.)
+\index{.infinity1-category@$(\infty,1)$-category}%
+In particular, this implies that the $n$-truncation is functorial:
+given $f:A\to B$, applying the recursion principle to the composite $A\xrightarrow{f} B \to \trunc n B$ yields a map $\trunc n f: \trunc n A \to \trunc n B$.
+By definition, we have a homotopy
+\begin{equation}
+ \mathsf{nat}^f_n : \prd{a:A} \trunc n f(\tproj n a) = \tproj n {f(a)},\label{eq:trunc-nat}
+\end{equation}
+expressing \emph{naturality} of the maps $\tprojf n$.
+
+Uniqueness implies functoriality laws such as $\trunc n {g\circ f} = \trunc n g \circ \trunc n f$ and $\trunc n{\idfunc[A]} = \idfunc[\trunc n A]$, with attendant coherence laws.
+We also have higher functoriality, for instance:
+
+\begin{lem}\label{thm:trunc-htpy}
+ Given $f,g:A\to B$ and a homotopy $h:f\htpy g$, there is an induced homotopy $\trunc n h : \trunc n f \htpy \trunc n g$ such that the composite
+ \begin{equation}
+ \xymatrix@C=3.6pc{\tproj n{f(a)} \ar@{=}[r]^-{\opp{\mathsf{nat}^f_n(a)}} &
+ \trunc n f(\tproj n a) \ar@{=}[r]^-{\trunc n h(\tproj na)} &
+ \trunc n g(\tproj n a) \ar@{=}[r]^-{\mathsf{nat}^g_n(a)} &
+ \tproj n{g(a)}}\label{eq:trunc-htpy}
+ \end{equation}
+ is equal to $\apfunc{\tprojf n}(h(a))$.
+\end{lem}
+\begin{proof}
+ First, we indeed have a homotopy with components $\apfunc{\tprojf n}(h(a)) : \tproj n{f(a)} = \tproj n{g(a)}$.
+ Composing on either side with the paths $\tproj n{f(a)} = \trunc n f(\tproj n a)$ and $\tproj n{g(a)} = \trunc n g(\tproj n a)$, which arise from the definitions of $\trunc n f$ and $\trunc ng$, we obtain a homotopy $(\trunc n f \circ \tprojf n) \htpy (\trunc n g \circ \tprojf n)$, and hence an equality by function extensionality.
+ But since $(\blank\circ \tprojf n)$ is an equivalence, there must be a path $\trunc nf = \trunc ng$ inducing it, and the coherence laws for function extensionality imply~\eqref{eq:trunc-htpy}.
+\end{proof}
+
+The following observation about reflective subcategories is also standard.
+
+\begin{cor}
+ A type $A$ is an $n$-type if and only if $\tprojf n : A \to \trunc n A$ is an equivalence.
+\end{cor}
+\begin{proof}
+ ``If'' follows from closure of $n$-types under equivalence.
+ On the other hand, if $A$ is an $n$-type, we can define $\ext(\idfunc[A]):\trunc n A\to{}A$.
+ Then we have $\ext(\idfunc[A])\circ\tprojf n=\idfunc[A]:A\to{}A$ by
+ definition. In order to prove that
+ $\tprojf n\circ\ext(\idfunc[A])=\idfunc[\trunc nA]$, we only need to prove
+ that $\tprojf n\circ\ext(\idfunc[A])\circ\tprojf n=
+ \idfunc[\trunc nA]\circ\tprojf n$.
+ This is again true:
+ \[\raisebox{\depth-\height+1em}{\xymatrix{
+ A \ar^-{\tprojf n}[r] \ar_{\idfunc[A]}[rd] &
+ \trunc nA \ar^>>>{\ext(\idfunc[A])}[d] \ar@/^40pt/^{\idfunc[\trunc nA]}[dd] \\
+ & A \ar_{\tprojf n}[d] \\
+ & \trunc nA}}
+ \qedhere\]
+\end{proof}
+
+The category of $n$-types also has some special properties not possessed by all reflective subcategories.
+For instance, the reflector $\trunc n-$ preserves finite products.
+
+\begin{thm}\label{cor:trunc-prod}
+ For any types $A$ and $B$, the induced map $\trunc n{A\times B} \to \trunc nA \times \trunc nB$ is an equivalence.
+\end{thm}
+\begin{proof}
+ It suffices to show that $\trunc nA \times \trunc nB$ has the same universal property as $\trunc n{A\times B}$.
+ Thus, let $C$ be an $n$-type; we have
+ \begin{align*}
+ (\trunc nA \times \trunc nB \to C)
+ &= (\trunc nA \to (\trunc nB \to C))\\
+ &= (\trunc nA \to (B \to C))\\
+ &= (A \to (B \to C))\\
+ &= (A \times B \to C)
+ \end{align*}
+ using the universal properties of $\trunc nB$ and $\trunc nA$, along with the fact that $B\to C$ is an $n$-type since $C$ is.
+ It is straightforward to verify that this equivalence is given by composing with $\tprojf n \times \tprojf n$, as needed.
+\end{proof}
+
+The following related fact about dependent sums is often useful.
+
+\begin{thm}\label{thm:trunc-in-truncated-sigma}
+Let $P:A\to\type$ be a family of types. Then there is an equivalence
+\begin{equation*}
+\eqv{\Trunc n{\sm{x:A}\trunc n{P(x)}}}{\Trunc n{\sm{x:A}P(x)}}.
+\end{equation*}
+\end{thm}
+
+\begin{proof}
+We use the induction principle of $n$-truncation several times to construct
+functions
+\begin{align*}
+\varphi & : \Trunc n{\sm{x:A}\trunc n{P(x)}}\to\Trunc n{\sm{x:A}P(x)}\\
+\psi & : \Trunc n{\sm{x:A}P(x)}\to \Trunc n{\sm{x:A} \trunc n{P(x)}}
+\end{align*}
+and homotopies $H:\varphi\circ\psi\htpy \idfunc$ and $K:\psi\circ\varphi\htpy
+\idfunc$ exhibiting them as quasi-inverses.
+We define $\varphi$ by setting $\varphi(\tproj n{\pairr{x,\tproj nu}})\defeq\tproj n{\pairr{x,u}}$.
+We define $\psi$ by setting $\psi(\tproj n{\pairr{x,u}})\defeq\tproj n{\pairr{x,\tproj nu}}$.
+Then we define $H(\tproj n{\pairr{x,u}})\defeq \refl{\tproj n{\pairr{x,u}}}$ and
+$K(\tproj n{\pairr{x,\tproj nu}})\defeq \refl{\tproj n{\pairr{x,\tproj nu}}}$.
+\end{proof}
+
+\begin{cor}\label{thm:refl-over-ntype-base}
+ If $A$ is an $n$-type and $P:A\to\type$ is any type family, then
+ \[ \eqv{\sm{a:A} \trunc n{P(a)}}{\Trunc n{\sm{a:A}P(a)}} \]
+\end{cor}
+\begin{proof}
+ If $A$ is an $n$-type, then the left-hand type above is already an $n$-type, hence equivalent to its $n$-truncation; thus this follows from \cref{thm:trunc-in-truncated-sigma}.
+\end{proof}
+
+We can characterize the path spaces of a truncation using the same method that we used in \cref{sec:compute-coprod,sec:compute-nat} for
+coproducts and natural numbers (and which we will use in \cref{cha:homotopy} to calculate homotopy groups).
+Unsurprisingly, the path spaces in the $(n+1)$-truncation of $A$ are the $n$-truncations of the path spaces of $A$.
+Indeed, for any $x,y:A$ there is a canonical map
+\begin{equation}
+ f:\ttrunc n{x=_Ay}\to \Big(\tproj {n+1}x=_{\trunc{n+1}A}\tproj {n+1}y\Big)\label{eq:path-trunc-map}
+\end{equation}
+defined by
+\[f(\tproj n{p})\defeq \apfunc{\tprojf {n+1}}(p). \]
+This definition uses the recursion principle for $\truncf n$, which is correct because $\trunc {n+1}A$ is $(n+1)$-truncated, so that the
+codomain of $f$ is $n$-truncated.
+
+\begin{thm} \label{thm:path-truncation}
+ For any $A$ and $x,y:A$ and $n\ge -2$, the map~\eqref{eq:path-trunc-map} is an equivalence; thus we have
+ \[ \eqv{\ttrunc n{x=_Ay}}{\Big(\tproj {n+1}x=_{\trunc{n+1}A}\tproj {n+1}y\Big)}. \]
+\end{thm}
+
+\begin{proof}
+ The proof is a simple application of the encode-decode method.
+ As in previous situations, we cannot directly define a quasi-inverse to the map~\eqref{eq:path-trunc-map} because there is no way to induct on an
+equality between $\tproj {n+1}x$ and $\tproj {n+1}y$.
+ Thus, instead we generalize its type, in order to have general elements of the type $\trunc{n+1}A$ instead of $\tproj {n+1}x$ and $\tproj
+{n+1}y$.
+ Define $P:\trunc {n+1}A\to\trunc {n+1}A\to\typele{n}$ by
+ \[P(\tproj {n+1}x,\tproj {n+1}y)\defeq \trunc n{x=_Ay}.\]
+ This definition is correct because $\trunc n{x=_Ay}$ is $n$-truncated, and $\typele{n}$ is $(n+1)$-truncated by
+\cref{thm:hleveln-of-hlevelSn}.
+ Now for every $u,v:\trunc{n+1}A$, there is a map
+ \[\decode:P(u,v) \to \big(u=_{\trunc{n+1}A}v\big)\]
+ defined for $u=\tproj {n+1}x$ and $v=\tproj {n+1}y$ and $p:x=y$ by
+ \[\decode(\tproj n{p})\defeq \apfunc{\tprojf{n+1}} (p).\]
+ Since the codomain of $\decode$ is $n$-truncated, it suffices to define it only for $u$ and $v$ of this form, and then it's just the same
+definition as before.
+ We also define a function
+ \[ r : \prd{u:\trunc{n+1} A} P(u,u) \]
+ by induction on $u$, where $r(\tproj{n+1} x) \defeq \tproj n {\refl x}$.
+
+ Now we can define an inverse map
+ \[\encode: (u=_{\trunc{n+1}A}v) \to P(u,v)\]
+ by
+ \[\encode(p) \defeq \transfib{v\mapsto P(u,v)}{p}{r(u)}. \]
+ To show that the composite
+ \[ (u=_{\trunc{n+1}A}v) \xrightarrow{\encode} P(u,v) \xrightarrow{\decode} (u=_{\trunc{n+1}A}v) \]
+ is the identity function, by path induction it suffices to check it for $\refl u : u=u$, in which case what we need to know is that
+$\decode(r(u)) = \refl{u}$.
+ But since this is an $(n-1)$-type, hence also an $(n+1)$-type, we may assume $u\jdeq \tproj {n+1} x$, in which case it follows by definition
+of $r$ and $\decode$.
+ Finally, to show that
+ \[ P(u,v) \xrightarrow{\decode} (u=_{\trunc{n+1}A}v) \xrightarrow{\encode} P(u,v) \]
+ is the identity function, since this goal is again an $(n-1)$-type, we may assume that $u=\tproj {n+1}x$ and $v=\tproj {n+1}y$ and that we are
+considering $\tproj n p:P(\tproj{n+1}x,\tproj{n+1}y)$ for some $p:x=y$.
+ Then we have
+ \begin{align*}
+ \encode(\decode(\tproj n p)) &= \encode(\apfunc{\tprojf{n+1}}(p))\\
+ &= \transfib{v\mapsto P(\tproj{n+1}x,v)}{\apfunc{\tprojf{n+1}}(p)}{\tproj n {\refl x}}\\
+ &= \transfib{y\mapsto \trunc n{x=y}}{p}{\tproj n {\refl x}}\\
+ &= \tproj n {\transfib{y \mapsto (x=y)}{p}{\refl x}}\\
+ &= \tproj n p,
+ \end{align*}
+ using \cref{thm:transport-compose,thm:ap-transport}.
+ (Alternatively, we could do path induction on $p$; the desired equality would then hold judgmentally.)
+ This completes the proof that \decode and \encode are quasi-inverses.
+ The stated result is then the special case where $u=\tproj {n+1}x$ and $v=\tproj {n+1}y$.
+\end{proof}
+
+\begin{cor}
+ Let $n\ge-2$ and $(A,a)$ be a pointed type. Then
+ \[\ttrunc n{\Omega(A,a)}=\Omega\mathopen{}\left(\trunc{n+1}{(A,a)}\right).\]
+\end{cor}
+\begin{proof}
+ This is a special case of \cref{thm:path-truncation} where $x=y=a$.
+\end{proof}
+
+\begin{cor}
+ Let $n\ge -2$ and $k\ge 0$ and $(A,a)$ a pointed type. Then
+ \[\ttrunc n{\Omega^k(A,a)} = \Omega^k\mathopen{}\left(\trunc{n+k}{(A,a)}\right). \]
+\end{cor}
+\begin{proof}
+ By induction on $k$, using the recursive definition of $\Omega^k$.
+\end{proof}
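+
+To spell out the inductive step, write $\Omega^k=\Omega\circ\Omega^{k-1}$ and apply the previous corollary followed by the inductive hypothesis (with $n+1$ and $k-1$ in place of $n$ and $k$):
+\begin{align*}
+\ttrunc n{\Omega^k(A,a)} &= \Omega\mathopen{}\left(\trunc{n+1}{\Omega^{k-1}(A,a)}\right)\\
+&= \Omega\mathopen{}\left(\Omega^{k-1}\mathopen{}\left(\trunc{n+k}{(A,a)}\right)\right)\\
+&= \Omega^k\mathopen{}\left(\trunc{n+k}{(A,a)}\right).
+\end{align*}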
+
+We also observe that ``truncations are cumulative'': if we truncate to an $n$-type and then to a $k$-type with $k\le n$, then we might as
+well have truncated directly to a $k$-type.
+
+\begin{lem} \label{lem:truncation-le}
+ Let $k,n\ge-2$ with $k\le{}n$ and $A:\type$. Then
+ $\trunc k{\trunc nA}=\trunc kA$.
+\end{lem}
+\begin{proof}
+ We define two maps $f:\trunc k{\trunc nA}\to\trunc kA$ and
+ $g:\trunc kA\to\trunc k{\trunc nA}$ by
+ %
+ \[
+ f(\tproj k{\tproj na}) \defeq \tproj ka
+ \qquad\text{and}\qquad
+ g(\tproj ka) \defeq \tproj k{\tproj na}.
+ \]
+ %
+ The map $f$ is well-defined because $\trunc kA$ is $k$-truncated and also
+ $n$-truncated (because $k\le{}n$), and the map $g$ is well-defined because
+ $\trunc k{\trunc nA}$ is $k$-truncated.
+
+ The composition $f\circ{}g:\trunc kA\to\trunc kA$ satisfies
+ $(f\circ{}g)(\tproj ka)=\tproj ka$, hence $f\circ{}g=\idfunc[\trunc kA]$.
+ Similarly, we have $(g\circ{}f)(\tproj k{\tproj na})=\tproj k{\tproj na}$ and hence $g\circ{}f=\idfunc[\trunc k{\trunc nA}]$.
+\end{proof}
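+
+For instance, taking $k=-1$ gives
+\[ \brck{\trunc nA}=\brck{A}, \]
+so that the $n$-truncation of a type is merely inhabited exactly when the type itself is.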
+
+% \begin{lem}
+% We have $\trunc n{\unit}=\unit$.
+% \end{lem}
+% \begin{proof}
+% Indeed, $\unit$ is $n$-truncated for every $n$ hence $\trunc n{\unit}=\unit$ by
+% \cref{reflectPequiv}.
+% \end{proof}
+
+\index{truncation!n-truncation@$n$-truncation|)}%
+
+\section{Colimits of \texorpdfstring{$n$}{n}-types}
+\label{sec:pushouts}
+
+Recall that in \cref{sec:colimits}, we used higher inductive types to define pushouts of types, and proved their universal property.
+In general, a (homotopy) colimit of $n$-types may no longer be an $n$-type (for an extreme counterexample, see \cref{ex:s2-colim-unit}).
+However, if we $n$-truncate it, we obtain an $n$-type which satisfies the correct universal property with respect to other $n$-types.
+
+In this section we prove this for pushouts, which are the most important and nontrivial case of colimits.
+Recall the following definitions from \cref{sec:colimits}.
+
+\begin{defn}
+ A \define{span} % in $\P$
+ \indexdef{span} %
+ is a 5-tuple $\Ddiag=(A,B,C,f,g)$ with % $A,B,C:\P$ and
+ $f:C\to{}A$ and $g:C\to{}B$.
+ \[\Ddiag=\quad\vcenter{\xymatrix{C \ar^g[r] \ar_f[d] & B \\ A & }}\]
+\end{defn}
+
+\begin{defn}
+ Given a span $\Ddiag=(A,B,C,f,g)$ and a type $D$, a %$D:\P$, a
+ \define{cocone under $\Ddiag$ with base $D$} is a triple $(i, j, h)$
+ \index{cocone} %
+ with $i:A\to{}D$, $j:B\to{}D$ and $h : \prd{c:C}i(f(c))=j(g(c))$:
+ \[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }}
+ \xymatrix{C \ar^g[r] \ar_f[d] \drtwocell{^h} & B \ar^j[d] \\ A \ar_i[r] & D
+ }\]
+ We denote by $\cocone{\Ddiag}{D}$ the type of all such cocones.
+\end{defn}
+
+The type of cocones is (covariantly) functorial.
+For instance, given $D,E$ % $D,E:\P$
+and a map $t:D\to{}E$, there is a map
+ \[\function{\cocone{\Ddiag}{D}}{\cocone{\Ddiag}{E}}{c}{\composecocone{t}c}\]
+ defined by:
+ \[\composecocone{t}(i,j,h)=(t\circ{}i,t\circ{}j,\mapfunc{t}\circ{}h).\]
+And given $D,E,F$, %$:\P$,
+functions $t:D\to{}E$, $u:E\to{}F$ and $c:\cocone{\Ddiag}{D}$, we have
+\begin{align}
+ \composecocone{\idfunc[D]}c &= c \label{eq:composeconeid}\\
+ \composecocone{(u\circ{}t)}c&=\composecocone{u}(\composecocone{t}c). \label{eq:composeconefunc}
+\end{align}
+
+\begin{defn}
+ Given a span $\Ddiag$ of $n$-types, an $n$-type $D$, and a cocone
+ $c:\cocone{\Ddiag}{D}$, the pair $(D,c)$ is said to be a \define{pushout
+ of $\Ddiag$ in $n$-types}
+ \indexdef{pushout!in ntypes@in $n$-types}%
+ if for every $n$-type $E$, the map
+ \[\function{(D\to{}E)}{\cocone{\Ddiag}{E}}{t}{\composecocone{t}c}\]
+ is an equivalence.
+\end{defn}
+
+\begin{comment}
+We showed in \cref{thm:pushout-ump} that pushouts exist when $\P$ is \type itself, by giving a direct construction in terms of higher
+inductive types.
+For a general \P, pushouts may or may not exist, but if they do, then they are unique.
+
+\begin{lem}
+ If $(D,c)$ and $(D',c')$ are two pushouts of $\Ddiag$ in $\P$, then
+ $(D,c)=(D',c')$.
+\end{lem}
+\begin{proof}
+ We first prove that the two types $D$ and $D'$ are equivalent.
+
+ Using the universal property of $D$ with $D'$, we see that the following map is an
+ equivalence
+ %
+ \[
+ \function{(D\to{}D')}{\cocone{\Ddiag}{D'}}{t}{\composecocone{t}c}
+ \]
+ %
+ In particular, there is a function $f:D\to{}D'$ satisfying $\composecocone{f}c=c'$. In the
+ same way there is a function $g:D'\to{}D$ such that $\composecocone{g}c'=c$.
+
+ In order to prove that $g\circ{}f=\idfunc[D]$ we use the universal property of
+ $D$ for $D$, which says that the following map is an equivalence:
+ %
+ \[
+ \function{(D\to{}D)}{\cocone{\Ddiag}{D}}{t}{\composecocone{t}c}
+ \]
+ %
+ Using the functoriality of $t\mapsto{}\composecocone{t}c$ we see that
+ \begin{align*}
+ \composecocone{(g\circ{}f)}c &= \composecocone{g}(\composecocone{f}c) \\
+ &= \composecocone{g}c' \\
+ &= c \\
+ &= \composecocone{\idfunc[D]}c
+ \end{align*}
+ hence
+ $g\circ{}f=\idfunc[D]$, because equivalences are injective. The same argument
+ with $D'$ instead of $D$ shows that $f\circ{}g=\idfunc[D']$.
+
+ Hence $D$ and $D'$ are equal, and the fact that $(D,c)=(D',c')$ follows from
+ the fact that the equivalence between $D$ and $D'$ we just defined sends $c$
+ to $c'$.
+\end{proof}
+
+\begin{cor}
+ The type of pushouts of $\Ddiag$ in $\P$ is a mere proposition. In particular if
+ pushouts merely exist then they actually exist.
+\end{cor}
+
+As in the case of pullbacks, if \P is reflective, then pushouts in \P always exist.
+However, unlike the case of pullbacks, pushouts in \P are not the same as the pushouts in \type: they are obtained by applying the
+reflector.
+\end{comment}
+
+In order to construct pushouts of $n$-types, we need to explain how to reflect spans and cocones.
+
+\bgroup
+\def\reflect(#1){\trunc n{#1}}
+
+\begin{defn}
+ Let
+ \[\Ddiag=\quad\vcenter{\xymatrix{C \ar^g[r] \ar_f[d] & B \\ A & }}\]
+ be a span. We denote by $\reflect(\Ddiag)$ the following
+ span of $n$-types:
+ \[\reflect(\Ddiag)\defeq\quad \vcenter{\xymatrix{\reflect(C) \ar^{\reflect(g)}[r]
+ \ar_{\reflect(f)}[d] & \reflect(B) \\ \reflect(A) & }}\]
+\end{defn}
+
+\begin{defn}
+ Let $D:\type$ and $c=(i,j,h):\cocone{\Ddiag}{D}$.
+ We define
+ \[\reflect(c)=(\reflect(i),\reflect(j),k):
+ \cocone{\reflect(\Ddiag)}{\reflect(D)}\]
+ where $k$ is the composite homotopy
+ \[ \reflect(i) \circ \reflect(f) \htpy \reflect(i\circ f) \htpy \reflect(j\circ g) \htpy \reflect(j) \circ \reflect(g) \]
+ using \cref{thm:trunc-htpy} and the functoriality of $\reflect(\blank)$.
+ % \[\reflect(h):\prd{c:\reflect(C)}\reflect(i)(\reflect(f)(c))=\reflect(j)(\reflect(g)(c))\]
+ % is defined in the following way:
+\end{defn}
+
+\egroup
+
+We now observe that the maps from each type to its $n$-truncation assemble into a map of spans, in the following sense.
+
+\begin{defn}
+ Let
+ \[\Ddiag=\quad\vcenter{\xymatrix{C \ar^g[r] \ar_f[d] & B \\ A & }}
+ \qquad\text{and}\qquad
+ \Ddiag'=\quad\vcenter{\xymatrix{C' \ar^{g'}[r] \ar_{f'}[d] & B' \\ A' & }}
+ \]
+ be spans.
+ A \define{map of spans}
+ \indexdef{map!of spans}%
+ $\Ddiag \to \Ddiag'$ consists of functions $\alpha:A\to A'$, $\beta:B\to B'$, and $\gamma:C\to C'$ and homotopies $\phi: \alpha\circ f \htpy f'\circ \gamma$ and $\psi:\beta\circ g \htpy g' \circ \gamma$.
+\end{defn}
+
+Thus, for any span $\Ddiag$, we have a map of spans $\tprojf[\Ddiag] n : \Ddiag \to \trunc n\Ddiag$ consisting of $\tprojf[A]n$, $\tprojf[B]n$, $\tprojf[C]n$, and the naturality homotopies $\mathsf{nat}^f_n$ and $\mathsf{nat}^g_n$ from~\eqref{eq:trunc-nat}.
+
+We also need to know that maps of spans behave functorially.
+Namely, if $(\alpha,\beta,\gamma,\phi,\psi):\Ddiag \to \Ddiag'$ is a map of spans and $D$ any type, then we have
+\[ \function{\cocone{\Ddiag'}{D}}{\cocone{\Ddiag}{D}}{(i,j,h)}{(i\circ \alpha,j\circ\beta, k)} \]
+where $k: \prd{z:C} i(\alpha(f(z))) = j(\beta(g(z)))$ is the composite
+\begin{equation}\label{eq:mapofspans-htpy}
+\xymatrix{
+ i(\alpha(f(z))) \ar@{=}[r]^{\apfunc{i}(\phi)} &
+ i(f'(\gamma(z))) \ar@{=}[r]^{h(\gamma(z))} &
+ j(g'(\gamma(z))) \ar@{=}[r]^{\apfunc{j}(\psi)} &
+ j(\beta(g(z))). }
+\end{equation}
+We denote this cocone by $(i,j,h) \circ (\alpha,\beta,\gamma,\phi,\psi)$.
+Moreover, this functorial action commutes with the other functoriality of cocones:
+
+\begin{lem}\label{thm:conemap-funct}
+ Given $(\alpha,\beta,\gamma,\phi,\psi):\Ddiag \to \Ddiag'$ and $t:D\to E$, the following diagram commutes:
+ \begin{equation*}
+ \vcenter{\xymatrix{
+ \cocone{\Ddiag'}{D}\ar[r]^{t \circ {\blank}}\ar[d] &
+ \cocone{\Ddiag'}{E}\ar[d]\\
+ \cocone{\Ddiag}{D}\ar[r]_{t \circ {\blank}} &
+ \cocone{\Ddiag}{E}
+ }}
+ \end{equation*}
+\end{lem}
+\begin{proof}
+ Given $(i,j,h):\cocone{\Ddiag'}{D}$, note that both composites yield a cocone whose first two components are $t\circ i\circ \alpha$ and $t\circ j\circ\beta$.
+ Thus, it remains to verify that the homotopies agree.
+ For the top-right composite, the homotopy is~\eqref{eq:mapofspans-htpy} with $(i,j,h)$ replaced by $(t\circ i, t\circ j, \apfunc{t}\circ h)$:
+ \begin{equation*}
+ \xymatrix@+2.8em{
+ {t \, i \, \alpha \, f \, z} \ar@{=}[r]^{\apfunc{t\circ i}(\phi)} &
+ {t \, i \, f' \, \gamma \, z} \ar@{=}[r]^{\apfunc{t}(h(\gamma(z)))} &
+ {t \, j \, g' \, \gamma \, z} \ar@{=}[r]^{\apfunc{t\circ j}(\psi)} &
+ {t \, j \, \beta \, g \, z}
+ }
+ \end{equation*}
+ (For brevity, we are omitting the parentheses around the arguments of functions.)
+ On the other hand, for the left-bottom composite, the homotopy is $\apfunc{t}$ applied to~\eqref{eq:mapofspans-htpy}.
+ Since $\apfunc{}$ respects path-concatenation, this is equal to
+ \begin{equation*}
+ \xymatrix@+2.8em{
+ {t \, i \, \alpha \, f \, z} \ar@{=}[r]^{\apfunc{t}(\apfunc{i}(\phi))} &
+ {t \, i \, f' \, \gamma \, z} \ar@{=}[r]^{\apfunc{t}(h(\gamma(z)))} &
+ {t \, j \, g' \, \gamma \, z} \ar@{=}[r]^{\apfunc{t}(\apfunc{j}(\psi))} &
+ {t \, j \, \beta \, g \, z}. }
+ \end{equation*}
+ But $\apfunc{t}\circ \apfunc{i} = \apfunc{t\circ i}$ and similarly for $j$, so these two homotopies are equal.
+\end{proof}
+
+Finally, note that since we defined $\trunc nc : \cocone{\trunc n \Ddiag}{\trunc n D}$ using \cref{thm:trunc-htpy}, the additional condition~\eqref{eq:trunc-htpy} implies
+\begin{equation}
+ \tprojf[D] n \circ c = \trunc n c \circ \tprojf[\Ddiag]n \label{eq:conetrunc}
+\end{equation}
+for any $c:\cocone{\Ddiag}{D}$.
+Now we can prove our desired theorem.
+
+\begin{thm}
+ \label{reflectcommutespushout}
+ \index{universal!property!of pushout}%
+ Let $\Ddiag$ be a span and $(D,c)$ its pushout.
+ Then $(\trunc nD,\trunc n c)$ is a pushout of $\trunc n\Ddiag$ in $n$-types.
+\end{thm}
+\begin{proof}
+ Let $E$ be an $n$-type, and consider the following diagram:
+\bgroup
+\def\reflect(#1){\trunc n{#1}}
+ \begin{equation*}
+ \vcenter{\xymatrix{
+ (\trunc nD \to E)\ar[r]^-{\blank\circ \tprojf[D] n}\ar[d]_{\blank\circ \trunc nc} &
+ (D\to E)\ar[d]^{\blank\circ c}\\
+ \cocone{\trunc n \Ddiag}{E}\ar[r]^-{\blank\circ \tprojf[\Ddiag]n}\ar@{<-}[d]_{\ell_1} &
+ \cocone{\Ddiag}{E}\ar@{<-}[d]^{\ell_2}\\
+ (\reflect(A)\to{}E)\times_{(\reflect(C)\to{}E)}(\reflect(B)\to{}E)\ar[r] &
+ (A\to{}E)\times_{(C\to{}E)}(B\to{}E)
+ }}
+ \end{equation*}
+\egroup
+ The upper horizontal arrow is an equivalence since $E$ is an $n$-type, while $\blank\circ c$ is an equivalence since $c$ is a pushout cocone.
+ Thus, by the 2-out-of-3 property, to show that $\blank\circ \trunc nc$ is an equivalence, it will suffice to show that the upper square commutes and that the middle horizontal arrow is an equivalence.
+ To see that the upper square commutes, let $t:\trunc nD \to E$; then
+ \begin{align}
+ \big(t \circ \trunc n c\big) \circ \tprojf[\Ddiag] n
+ &= t \circ \big(\trunc n c \circ \tprojf[\Ddiag] n\big)
+ \tag{by \cref{thm:conemap-funct}}\\
+ &= t\circ \big(\tprojf[D]n \circ c\big)
+ \tag{by~\eqref{eq:conetrunc}}\\
+ &= \big(t\circ \tprojf[D]n\big) \circ c
+ \tag{by~\eqref{eq:composeconefunc}}.
+ \end{align}
+ To show that the middle horizontal arrow is an equivalence, consider the lower square.
+ The two lower vertical arrows are simply applications of $\happly$:
+ \begin{align*}
+ \ell_1(i,j,p) &\defeq (i,j,\happly(p))\\
+ \ell_2(i,j,p) &\defeq (i,j,\happly(p))
+ \end{align*}
+ and hence are equivalences by function extensionality.
+ The lowest horizontal arrow is defined by
+ \[ (i,j,p) \mapsto \big( i\circ \tprojf[A]n,\;\; j \circ \tprojf[B] n,\;\; q\big) \]
+ where $q$ is the composite
+ \begin{align}
+ i\circ \tprojf[A]n \circ f
+ &= i\circ \trunc nf \circ \tprojf[C]n
+ \tag{by $\funext(\lam{z} \apfunc{i}(\mathsf{nat}^f_n(z)))$}\\
+ &= j\circ \trunc ng \circ \tprojf[C]n
+ \tag{by $\apfunc{\blank\circ \tprojf[C] n}(p)$}\\
+ &= j\circ \tprojf[B]n \circ g.
+ \tag{by $\funext(\lam{z} \apfunc{j}(\mathsf{nat}^g_n(z)))$}
+ \end{align}
+ This is an equivalence, because it is induced by an equivalence of cospans.
+ Thus, by 2-out-of-3, it will suffice to show that the lower square commutes.
+ But the two composites around the lower square agree definitionally on the first two components, so it suffices to show that for $(i,j,p)$ in the lower left corner and $z:C$, the path
+ \[ \happly(q,z) : i(\tproj n{f(z)}) = j(\tproj n{g(z)}) \]
+ (with $q$ as above)
+ is equal to the composite
+ \begin{align}
+ i(\tproj n{f(z)})
+ &= i(\trunc nf(\tproj nz))
+ \tag{by $\apfunc{i}(\mathsf{nat}^f_n(z))$}\\
+ &= j(\trunc ng(\tproj nz))
+ \tag{by $\happly(p,\tproj nz)$}\\
+ &= j(\tproj n{g(z)}).
+ \tag{by $\apfunc{j}(\mathsf{nat}^g_n(z))$}
+ \end{align}
+ However, since $\happly$ is functorial, it suffices to check equality for the three component paths:
+ \begin{align*}
+ \happly({\funext(\lam{z} \apfunc{i}(\mathsf{nat}^f_n(z)))},z)
+ &= {\apfunc{i}(\mathsf{nat}^f_n(z))}\\
+ \happly(\apfunc{\blank\circ \tprojf[C] n}(p), z)
+ &= {\happly(p,\tproj nz)}\\
+ \happly({\funext(\lam{z} \apfunc{j}(\mathsf{nat}^g_n(z)))},z)
+ &= {\apfunc{j}(\mathsf{nat}^g_n(z))}.
+ \end{align*}
+ The first and third of these are just the fact that $\happly$ is quasi-inverse to $\funext$, while
+ the second is an easy general lemma about $\happly$ and precomposition.
+\end{proof}
+
+
+\section{Connectedness}
+\label{sec:connectivity}
+
+An $n$-type is one that has no interesting information above dimension $n$.
+By contrast, an \emph{$n$-connected type} is one that has no interesting information \emph{below} dimension $n$.
+It turns out to be natural to study a more general notion for functions as well.
+
+\begin{defn}
+A function $f:A\to B$ is said to be \define{$n$-connected}
+\indexdef{function!n-connected@$n$-connected}%
+\indexsee{n-connected@$n$-connected!function}{function, $n$-connected}%
+if for all $b:B$, the type $\trunc n{\hfiber f b}$ is contractible:
+\begin{equation*}
+ \mathsf{conn}_n(f)\defeq \prd{b:B}\iscontr(\trunc n{\hfiber{f}b}).
+\end{equation*}
+A type $A$ is said to be \define{$n$-connected}
+\indexsee{n-connected@$n$-connected!type}{type, $n$-connected}%
+\indexdef{type!n-connected@$n$-connected}%
+ if the unique function $A\to\unit$ is $n$-connected, i.e.\ if $\trunc nA$ is contractible.
+\end{defn}
+\indexsee{connected!function}{function, $n$-connected}
+
+Thus, a function $f:A\to B$ is $n$-connected if and only if $\hfib{f}b$ is $n$-connected for every $b:B$.
+Of course, every function is $(-2)$-connected.
+At the next level, we have:
+
+\begin{lem}\label{thm:minusoneconn-surjective}
+ \index{function!surjective}%
+ A function $f$ is $(-1)$-connected if and only if it is surjective in the sense of \cref{sec:mono-surj}.
+\end{lem}
+\begin{proof}
+ We defined $f$ to be surjective if $\trunc{-1}{\hfiber f b}$ is inhabited for all $b$.
+ But since it is a mere proposition, inhabitation is equivalent to contractibility.
+\end{proof}
+
+Thus, $n$-connectedness of a function for $n\ge 0$ can be thought of as a strong form of surjectivity.
+Category-theoretically, $(-1)$-connectedness corresponds to essential surjectivity on objects, while $n$-connectedness corresponds to essential surjectivity on $k$-morphisms for $k\le n+1$.
+
+\cref{thm:minusoneconn-surjective} also implies that a type $A$ is $(-1)$-connected if and only if it is merely inhabited.
+When a type is $0$-connected we may simply say that it is \define{connected},
+\indexdef{connected!type}%
+\indexdef{type!connected}%
+and when it is $1$-connected we say it is \define{simply connected}.
+\indexdef{simply connected type}%
+\indexdef{type!simply connected}%
+
+\begin{rmk}\label{rmk:connectedness-indexing}
+ While our notion of $n$-connectedness for types agrees with the standard notion in homotopy theory, our notion of $n$-connectedness for \emph{functions} is off by one from a common indexing in classical homotopy theory.
+ Whereas we say a function $f$ is $n$-connected if all its fibers are $n$-connected, some classical homotopy theorists would call such a function $(n+1)$-connected.
+ (This is due to a historical focus on \emph{cofibers} rather than fibers.)
+\end{rmk}
+
+We now observe a few closure properties of connected maps.
+\index{function!n-connected@$n$-connected}
+
+\begin{lem}
+\index{retract!of a function}%
+Suppose that $g$ is a retract of an $n$-connected function $f$. Then $g$ is
+$n$-connected.
+\end{lem}
+\begin{proof}
+This is a direct consequence of \cref{lem:func_retract_to_fiber_retract}.
+\end{proof}
+
+\begin{cor}
+If $g$ is homotopic to an $n$-connected function $f$, then $g$ is $n$-connected.
+\end{cor}
+
+\begin{lem}\label{lem:nconnected_postcomp}
+Suppose that $f:A\to B$ is $n$-connected. Then $g:B\to C$ is $n$-connected if and only if $g\circ f$ is
+$n$-connected.
+\end{lem}
+
+\begin{proof}
+For any $c:C$, we have
+\begin{align*}
+ \trunc n{\hfib{g\circ f}c}
+ & \eqvsym \Trunc n{ \sm{w:\hfib{g}c}\hfib{f}{\proj1 w}}
+ \tag{by \cref{ex:unstable-octahedron}}\\
+ & \eqvsym \Trunc n{\sm{w:\hfib{g}c} \trunc n{\hfib{f}{\proj1 w}}}
+ \tag{by \cref{thm:trunc-in-truncated-sigma}}\\
+ & \eqvsym \trunc n{\hfib{g}c}.
+ \tag{since $\trunc n{\hfib{f}{\proj1 w}}$ is contractible}
+\end{align*}
+It follows that $\trunc n{\hfib{g}c}$ is contractible if and only if $\trunc n{\hfib{g\circ f}c}$ is
+contractible.
+\end{proof}
+
+Importantly, $n$-connected functions can be equivalently characterized as those which satisfy an ``induction principle'' with respect to $n$-types.\index{induction principle!for connected maps}
+This idea will lead directly into our proof of the Freudenthal suspension theorem in \cref{sec:freudenthal}.
+
+\begin{lem}\label{prop:nconnected_tested_by_lv_n_dependent types}
+For $f:A\to B$ and $P:B\to\type$, consider the following function:
+\begin{equation*}
+\lam{s} s\circ f :\Parens{\prd{b:B} P(b)}\to\Parens{\prd{a:A}P(f(a))}.
+\end{equation*}
+For a fixed $f$ and $n\ge -2$, the following are equivalent.
+\begin{enumerate}
+\item $f$ is $n$-connected.\label{item:conntest1}
+\item For every $P:B\to\ntype{n}$, the map $\lam{s} s\circ f$ is an equivalence.\label{item:conntest2}
+\item For every $P:B\to\ntype{n}$, the map $\lam{s} s\circ f$ has a section.\label{item:conntest3}
+\end{enumerate}
+\end{lem}
+
+\begin{proof}
+Suppose that $f$ is $n$-connected and let $P:B\to\ntype{n}$. Then we have the equivalences
+\begin{align}
+ \prd{b:B} P(b) & \eqvsym \prd{b:B} \Parens{\trunc n{\hfib{f}b} \to P(b)}
+ \tag{since $\trunc n{\hfib{f}b}$ is contractible}\\
+ & \eqvsym \prd{b:B} \Parens{\hfib{f}b\to P(b)}
+ \tag{since $P(b)$ is an $n$-type}\\
+ & \eqvsym \prd{b:B}{a:A}{p:f(a)= b} P(b)
+ \tag{by the left universal property of $\Sigma$-types}\\
+ & \eqvsym \prd{a:A} P(f(a)).
+ \tag{by the left universal property of path types}
+\end{align}
+We omit the proof that this equivalence is indeed given by $\lam{s} s\circ f$.
+Thus,~\ref{item:conntest1}$\Rightarrow$\ref{item:conntest2}, and clearly~\ref{item:conntest2}$\Rightarrow$\ref{item:conntest3}.
+To show~\ref{item:conntest3}$\Rightarrow$\ref{item:conntest1}, consider the type family
+\begin{equation*}
+P(b)\defeq \trunc n{\hfib{f}b}.
+\end{equation*}
+Then~\ref{item:conntest3} yields a map $c:\prd{b:B} \trunc n{\hfib{f}b}$ with
+$c(f(a))=\tproj n{\pairr{a,\refl{f(a)}}}$. To show that each $\trunc n{\hfib{f}b}$ is contractible,
+we will find a function of type
+\begin{equation*}
+\prd{b:B}{w:\trunc n{\hfib{f}b}} w= c(b).
+\end{equation*}
+By \cref{thm:truncn-ind}, for this it suffices to find a function of type
+\begin{equation*}
+\prd{b:B}{a:A}{p:f(a)= b} \tproj n{\pairr{a,p}}= c(b).
+\end{equation*}
+But by rearranging variables and path induction, this is equivalent to the type
+\begin{equation*}
+\prd{a:A} \tproj n{\pairr{a,\refl{f(a)}}}= c(f(a)).
+\end{equation*}
+This property holds by our choice of $c(f(a))$.
+\end{proof}
+
+\begin{cor}\label{cor:totrunc-is-connected}
+For any $A$, the canonical function $\tprojf n:A\to\trunc n A$ is $n$-connected.
+\end{cor}
+\begin{proof}
+By \cref{thm:truncn-ind} and the associated uniqueness principle, the condition of \cref{prop:nconnected_tested_by_lv_n_dependent types} holds.
+\end{proof}
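+
+In more detail, for any $P:\trunc nA\to\ntype{n}$ the induction principle supplies a section of the map
+\[ \lam{s} s\circ \tprojf n : \Parens{\prd{u:\trunc nA} P(u)}\to\Parens{\prd{a:A} P(\tproj na)}, \]
+which is condition~\ref{item:conntest3} of \cref{prop:nconnected_tested_by_lv_n_dependent types}.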
+
+For instance, when $n=-1$, \cref{cor:totrunc-is-connected} says that the map $A\to \brck A$ from a type to its propositional truncation is surjective.
+
+\begin{cor}\label{thm:nconn-to-ntype-const}\label{connectedtotruncated}
+A type $A$ is $n$-connected if and only if the map
+\begin{equation*}
+ \lam{b}{a} b: B \to (A\to B)
+\end{equation*}
+is an equivalence for every $n$-type $B$.
+In other words, ``every map from $A$ to an $n$-type is constant''.
+\end{cor}
+\begin{proof}
+ By \cref{prop:nconnected_tested_by_lv_n_dependent types} applied to a function with codomain $\unit$.
+\end{proof}
+
+\begin{lem}\label{lem:nconnected_to_leveln_to_equiv}
+Let $B$ be an $n$-type and let $f:A\to B$ be a function. Then the induced function $g:\trunc n A\to B$ is an
+equivalence if and only if $f$ is $n$-connected.
+\end{lem}
+
+\begin{proof}
+By \cref{cor:totrunc-is-connected}, $\tprojf n$ is $n$-connected.
+Thus, since $f = g\circ \tprojf n$, by
+\cref{lem:nconnected_postcomp} $f$ is $n$-connected if and only if $g$ is $n$-connected.
+But since $g$ is a function between $n$-types, its fibers are also $n$-types.
+Thus, $g$ is $n$-connected if and only if it is an equivalence.
+\end{proof}
+
+We can also characterize connected pointed types in terms of connectivity of the inclusion of their basepoint.
+
+\begin{lem}\label{thm:connected-pointed}
+ \index{basepoint}%
+ Let $A$ be a type and $a_0:\unit\to A$ a basepoint, with $n\ge -1$.
+ Then $A$ is $n$-connected if and only if the map $a_0$ is $(n-1)$-connected.
+\end{lem}
+\begin{proof}
+ First suppose $a_0:\unit\to A$ is $(n-1)$-connected and let $B$ be an $n$-type; we will use \cref{thm:nconn-to-ntype-const}.
+ The map $\lam{b}{a} b: B \to (A\to B)$ has a retraction given by $f\mapsto f(a_0)$, so it suffices to show it also has a section, i.e.\ that for any $f:A\to B$ there is $b:B$ such that $f = \lam{a}b$.
+ We choose $b\defeq f(a_0)$.
+ Define $P:A\to\type$ by $P(a) \defeq (f(a)=f(a_0))$.
+ Then $P$ is a family of $(n-1)$-types and we have $P(a_0)$; hence we have $\prd{a:A} P(a)$ since $a_0:\unit\to A$ is $(n-1)$-connected.
+ Thus, $f = \lam{a} f(a_0)$ as desired.
+
+ Now suppose $A$ is $n$-connected, and let $P:A\to\ntype{(n-1)}$ and $u:P(a_0)$ be given.
+ By \cref{prop:nconnected_tested_by_lv_n_dependent types}, it will suffice to construct $f:\prd{a:A} P(a)$ such that $f(a_0)=u$.
+ Now $\ntype{(n-1)}$ is an $n$-type and $A$ is $n$-connected, so by \cref{thm:nconn-to-ntype-const}, there is an $(n-1)$-type $B$ such that $P = \lam{a} B$.
+ Hence, we have a family of equivalences $g:\prd{a:A} (\eqv{P(a)}{B})$.
+ Define $f(a) \defeq \opp{g_a}(g_{a_0}(u))$; then $f:\prd{a:A} P(a)$ and $f(a_0) = u$ as desired.
+\end{proof}
+
+In particular, a pointed type $(A,a_0)$ is 0-connected if and only if $a_0:\unit\to A$ is surjective, which is to say $\prd{x:A} \brck{x=a_0}$.
+For a similar result in the not-necessarily-pointed case, see \cref{ex:connectivity-inductively}.
+
+A useful variation on \cref{lem:nconnected_postcomp} is:
+
+\begin{lem}\label{lem:nconnected_postcomp_variation}
+Let $f:A\to B$ be a function and $P:A\to\type$ and $Q:B\to\type$ be type families. Suppose that $g:\prd{a:A} P(a)\to Q(f(a))$
+is a fiberwise $n$-connected%
+\index{fiberwise!n-connected family of functions@$n$-connected family of functions}
+family of functions, i.e.\ each function $g_a : P(a) \to Q(f(a))$ is $n$-connected.
+If $f$ is also $n$-connected, then so is the function
+\begin{align*}
+\varphi &:\Parens{\sm{a:A} P(a)}\to\Parens{\sm{b:B} Q(b)}\\
+\varphi(a,u) &\defeq \pairr{f(a),g_a(u)}.
+\end{align*}
+Conversely, if $\varphi$ and each $g_a$ are $n$-connected, and moreover $Q$ is fiberwise merely inhabited (i.e.\ we have $\brck{Q(b)}$ for all $b:B$), then $f$ is $n$-connected.
+\end{lem}
+
+\begin{proof}
+For any $b:B$ and $v:Q(b)$ we have
+{\allowdisplaybreaks
+\begin{align*}
+\trunc n{\hfib{\varphi}{\pairr{b,v}}} & \eqvsym \Trunc n{\sm{a:A}{u:P(a)}{p:f(a)= b} \trans{p}{g_a(u)}= v}\\
+& \eqvsym \Trunc n{\sm{w:\hfib{f}b}{u:P(\proj1(w))} g_{\proj 1 w}(u)= \trans{\opp{\proj2(w)}}{v}}\\
+& \eqvsym \Trunc n{\sm{w:\hfib{f}b} \hfib{g(\proj1 w)}{\trans{\opp{\proj 2(w)}}{v}}}\\
+& \eqvsym \Trunc n{\sm{w:\hfib{f}b} \trunc n{\hfib{g(\proj1 w)}{\trans{\opp{\proj 2(w)}}{v}}}}\\
+& \eqvsym \trunc n{\hfib{f}b}
+\end{align*}}%
+where the transports in the first two steps are along $p$ and $\opp{\proj2(w)}$ respectively, with respect to the family $Q$.
+Therefore, if either is contractible, so is the other.
+
+In particular, if $f$ is $n$-connected, then $\trunc n{\hfib{f}b}$ is contractible for all $b:B$, and hence so is $\trunc n{\hfib{\varphi}{\pairr{b,v}}}$ for all $(b,v):\sm{b:B} Q(b)$.
+On the other hand, if $\varphi$ is $n$-connected, then $\trunc n{\hfib{\varphi}{\pairr{b,v}}}$ is contractible for all $(b,v)$, hence so is $\trunc n{\hfib{f}b}$ for any $b:B$ such that there exists some $v:Q(b)$.
+Finally, since contractibility is a mere proposition, it suffices to merely have such a $v$.
+\end{proof}
+
+The converse direction of \cref{lem:nconnected_postcomp_variation} can fail if $Q$ is not fiberwise merely inhabited.
+For example, if $P$ and $Q$ are both constant at $\emptyt$, then $\varphi$ and each $g_a$ are equivalences, but $f$ could be arbitrary.
+
+In the other direction, we have
+
+\begin{lem}\label{prop:nconn_fiber_to_total}
+Let $P,Q:A\to\type$ be type families and consider a fiberwise transformation\index{fiberwise!transformation}
+\begin{equation*}
+f:\prd{a:A} \Parens{P(a)\to Q(a)}
+\end{equation*}
+from $P$ to $Q$. Then the induced map $\total f: \sm{a:A}P(a) \to \sm{a:A} Q(a)$ is $n$-connected if and only if each $f(a)$ is $n$-connected.
+\end{lem}
+
+Of course, the ``only if'' direction is also a special case of \cref{lem:nconnected_postcomp_variation}.
+
+\begin{proof}
+By \cref{fibwise-fiber-total-fiber-equiv}, we have
+$\hfib{\total f}{\pairr{x,v}}\eqvsym\hfib{f(x)}v$
+for each $x:A$ and $v:Q(x)$. Hence $\trunc n{\hfib{\total f}{\pairr{x,v}}}$ is contractible if and only if
+$\trunc n{\hfib{f(x)}v}$ is contractible.
+\end{proof}
+
+Another useful fact about connected maps is that they induce an
+equivalence on $n$-truncations:
+
+\begin{lem} \label{lem:connected-map-equiv-truncation}
+If $f : A \to B$ is $n$-connected, then it induces an equivalence
+$\eqv{\trunc{n}{A}}{\trunc{n}{B}}$.
+\end{lem}
+\begin{proof}
+Let $c$ be the proof that $f$ is $n$-connected. From left to right, we
+use the map $\trunc{n}{f} : \trunc{n}{A} \to \trunc{n}{B}$.
+To define the map from right to left, by the universal property of
+truncations, it suffices to give a map $\mathsf{back} : B \to {\trunc{n}{A}}$. We can
+define this map as follows:
+\[
+\mathsf{back}(y) \defeq \trunc{n}{\proj{1}}{(\proj{1}{(c(y))})}.
+\]
+By definition, $c(y)$ has type $\iscontr(\trunc n {\hfiber{f}y})$, so its
+first component has type $\trunc n{\hfiber{f}y}$, and we can obtain an
+element of $\trunc n A$ from this by projection.
+
+Next, we show that the composites are the identity. In both directions,
+because the goal is a path in an $n$-truncated type, it suffices to
+cover the case of the constructor $\tprojf{n}$.
+
+In one direction, we must show that for all $x:A$,
+\[
+\trunc{n}{\proj{1}}{(\proj{1}{(c(f(x)))})} = \tproj{n}{x}.
+\]
+But $\tproj{n}{(x, \refl{f(x)})} : \trunc n{\hfiber{f}{f(x)}}$, and
+$c(f(x))$ says that this type is contractible, so
+\[
+\proj{1}{(c(f(x)))} = \tproj{n}{(x, \refl{})}.
+\]
+Applying $\trunc{n}{\proj{1}}$ to both sides of this equation gives the
+result.
+
+In the other direction, we must show that for all $y:B$,
+\[
+\trunc{n}{f}(\trunc{n}{\proj{1}} (\proj{1}{(c(y))})) = \tproj{n}{y}.
+\]
+$\proj{1}{(c(y))}$ has type $\trunc n {\hfiber{f}y}$, and the path we
want is essentially the second component of an element of $\hfiber{f}y$, but we
+need to make sure the truncations work out.
+
+In general, suppose we are given $p:\trunc{n}{\sm{x:A} B(x)}$ and wish to prove
+$P(\trunc{n}{\proj{1}{}}(p))$. By truncation induction, it suffices to
+prove $P(\tproj{n}{a})$ for all $a:A$ and $b:B(a)$. Applying this
+principle in this case, it suffices to prove
+\[
+\trunc{n}{f}(\tproj{n}{a}) = \tproj{n}{y}
+\]
+given $a:A$ and $b:f (a) = y$. But the left-hand side equals $\tproj{n}{f (a)}$,
+so applying $\tprojf{n}$ to both sides of $b$ gives the result.
+\end{proof}
+
+One might guess that this fact characterizes the $n$-connected maps, but in fact being $n$-connected is a bit stronger than this.
+For instance, the inclusion $\bfalse:\unit \to\bool$ induces an equivalence on $(-1)$-truncations, but is not surjective (i.e.\ $(-1)$-connected).
+In \cref{sec:long-exact-sequence-homotopy-groups} we will see that the difference in general is an analogous extra bit of surjectivity.
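
To spell out this counterexample (a routine unwinding of the definitions): both $(-1)$-truncations are contractible, since both types are inhabited, but the fiber of $\bfalse$ over $\btrue$ is empty, and the $(-1)$-truncation of the empty type is not contractible:
\begin{equation*}
\trunc{-1}{\unit} \eqvsym \unit \eqvsym \trunc{-1}{\bool},
\qquad\text{whereas}\qquad
\trunc{-1}{\hfib{\bfalse}{\btrue}} \eqvsym \trunc{-1}{(\bfalse = \btrue)} \eqvsym \emptyt.
\end{equation*}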
+
+
+\section{Orthogonal factorization}
+\label{sec:image-factorization}
+
+\index{unique!factorization system|(}%
+\index{orthogonal factorization system|(}%
+In set theory, the surjections and the injections form a unique factorization system: every function factors essentially uniquely as a surjection followed by an injection.
+We have seen that surjections generalize naturally to $n$-connected maps, so it is natural to inquire whether these also participate in a factorization system.
+Here is the corresponding generalization of injections.
+
+\begin{defn}
+ A function $f:A\to B$ is \define{$n$-truncated}
+ \indexdef{n-truncated@$n$-truncated!function}%
+ \indexdef{function!n-truncated@$n$-truncated}%
+if the fiber $\hfib f b$ is an $n$-type for all $b:B$.
+\end{defn}
+
+In particular, $f$ is $(-2)$-truncated if and only if it is an equivalence.
+And of course, $A$ is an $n$-type if and only if $A\to\unit$ is $n$-truncated.
+Moreover, $n$-truncated maps could equivalently be defined recursively, like $n$-types.
+
+\begin{lem}\label{thm:modal-mono}
+ For any $n\ge -2$, a function $f:A\to B$ is $(n+1)$-truncated if and only if for all $x,y:A$, the map $\apfunc{f}:(x=y) \to (f(x)=f(y))$ is $n$-truncated.
+ \index{function!embedding}%
+ \index{function!injective}%
+ In particular, $f$ is $(-1)$-truncated if and only if it is an embedding in the sense of \cref{sec:mono-surj}.
+\end{lem}
+\begin{proof}
+ Note that for any $(x,p),(y,q):\hfib f b$, we have
+ \begin{align*}
+ \big((x,p) = (y,q)\big)
+ &= \sm{r:x=y} (p = \apfunc f(r)\ct q)\\
+ &= \sm{r:x=y} (\apfunc f (r) = p\ct \opp q)\\
+ &= \hfib{\apfunc{f}}{p\ct \opp q}.
+ \end{align*}
+ Thus, any path space in any fiber of $f$ is a fiber of $\apfunc{f}$.
+ On the other hand, choosing $b\defeq f(y)$ and $q\defeq \refl{f(y)}$ we see that any fiber of $\apfunc f$ is a path space in a fiber of $f$.
+ The result follows, since $f$ is $\nplusone$-truncated if all path spaces of its fibers are $n$-types.
+\end{proof}
+
+We can now construct the factorization, in a fairly obvious way.
+
+\begin{defn}\label{defn:modal-image}
+Let $f:A\to B$ be a function. The \define{$n$-image}
+\indexdef{image}%
+\indexdef{image!n-image@$n$-image}%
+\indexdef{n-image@$n$-image}%
+\indexdef{function!n-image of@$n$-image of}%
+of $f$ is defined as
+\begin{equation*}
+\im_n(f)\defeq \sm{b:B} \trunc n{\hfib{f}b}.
+\end{equation*}
+When $n=-1$, we write simply $\im(f)$ and call it the \define{image} of $f$.
+\end{defn}
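
As a simple sanity check (a worked instance, not developed further here), consider the map $\btrue:\unit\to\bool$. Its fiber over $b:\bool$ is equivalent to the identity type $\btrue = b$, so
\begin{equation*}
\im(\btrue) \eqvsym \sm{b:\bool} \trunc{-1}{\btrue = b} \eqvsym \unit,
\end{equation*}
since the summand at $\btrue$ is contractible while the summand at $\bfalse$ is empty. Thus the $(-1)$-image of a point is a point, as one would expect.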
+
+\begin{lem}\label{prop:to_image_is_connected}
+For any function $f:A\to B$, the canonical function $\tilde{f}:A\to\im_n(f)$ is $n$-connected.
+Consequently, any function factors as an $n$-connected function followed by an $n$-truncated function.
+\end{lem}
+
+\begin{proof}
+Note that $A\eqvsym\sm{b:B}\hfib{f}b$. The function $\tilde{f}$ is the function on total spaces induced by the canonical fiberwise transformation
+\begin{equation*}
+\prd{b:B} \Parens{\hfib{f}b\to\trunc n{\hfib{f}b}}.
+\end{equation*}
+Since each map $\hfib{f}b\to\trunc n{\hfib{f}b}$ is $n$-connected by \cref{cor:totrunc-is-connected}, $\tilde{f}$ is $n$-connected by \cref{prop:nconn_fiber_to_total}.
+Finally, the projection $\proj1:\im_n(f) \to B$ is $n$-truncated, since its fibers are equivalent to the $n$-truncations of the fibers of $f$.
+\end{proof}
+
+In the following lemma we set up some machinery to prove the unique factorization theorem.
+
+\begin{lem}\label{prop:factor_equiv_fiber}
+Suppose we have a commutative diagram of functions
+\begin{equation*}
+ \xymatrix{
+ {A} \ar[r]^{g_1} \ar[d]_{g_2} &
+ {X_1} \ar[d]^{h_1} &
+ \\
+ {X_2} \ar[r]_{h_2}
+ &
+ {B}
+ }
+\end{equation*}
+with $H:h_1\circ g_1\htpy h_2\circ g_2$, where $g_1$ and $g_2$ are $n$-connected and where $h_1$ and $h_2$ are $n$-truncated.
+Then there is an equivalence
+\begin{equation*}
+E(H,b):\hfib{h_1}b\eqvsym\hfib{h_2}b
+\end{equation*}
+for any $b:B$, such that for any $a:A$ we have an identification
+\[\overline{E}(H,a) : E(H,h_1(g_1(a)))({g_1(a),\refl{h_1(g_1(a))}}) = \pairr{g_2(a),\opp{H(a)}}.\]
+\end{lem}
+
+\begin{proof}
+Let $b:B$. Then we have the following equivalences:
+\begin{align}
+\hfib{h_1}b
+& \eqvsym \sm{w:\hfib{h_1}b} \trunc n{ \hfib{g_1}{\proj1 w}}
+\tag{since $g_1$ is $n$-connected}\\
+& \eqvsym \Trunc n{\sm{w:\hfib{h_1}b}\hfib{g_1}{\proj1 w}}
+\tag{by \cref{thm:refl-over-ntype-base}, since $h_1$ is $n$-truncated}\\
+& \eqvsym \trunc n{\hfib{h_1\circ g_1}b}
+\tag{by \cref{ex:unstable-octahedron}}
+\end{align}
+and likewise for $h_2$ and $g_2$.
+Also, since we have a homotopy $H:h_1\circ g_1\htpy h_2\circ g_2$, there is an obvious equivalence $\hfib{h_1\circ g_1}b\eqvsym\hfib{h_2\circ g_2}b$.
+Hence we obtain
+\begin{equation*}
+\hfib{h_1}b\eqvsym\hfib{h_2}b
+\end{equation*}
+for any $b:B$. By analyzing the underlying functions, we get the following representation of what happens to the element
+$\pairr{g_1(a),\refl{h_1(g_1(a))}}$ after applying each of the equivalences of which $E$ is composed.
+Some of the identifications are definitional, but others (marked with a $=$ below) are only propositional; putting them together we obtain $\overline E(H,a)$.
+{\allowdisplaybreaks
+\begin{align*}
+\pairr{g_1(a),\refl{h_1(g_1(a))}} &
+ \overset{=}{\mapsto} \Pairr{\pairr{g_1(a),\refl{h_1(g_1(a))}}, \tproj n{ \pairr{a,\refl{g_1(a)}}}}\\
+ & \mapsto \tproj n { \pairr{\pairr{g_1(a),\refl{h_1(g_1(a))}}, \pairr{a,\refl{g_1(a)}} }}\\
+ & \mapsto \tproj n { \pairr{a,\refl{h_1(g_1(a))}}}\\
+ & \overset{=}{\mapsto} \tproj n { \pairr{a,\opp{H(a)}}}\\
+ & \mapsto \tproj n { \pairr{\pairr{g_2(a),\opp{H(a)}},\pairr{a,\refl{g_2(a)}}} }\\
+ & \mapsto \Pairr{\pairr{g_2(a),\opp{H(a)}}, \tproj n {\pairr{a,\refl{g_2(a)}}} }\\
+ & \mapsto \pairr{g_2(a),\opp{H(a)}}
+\end{align*}}
+The first equality is because for general $b$, the map
+\narrowequation{ \hfib{h_1}b \to \sm{w:\hfib{h_1}b} \trunc n{ \hfib{g_1}{\proj1 w}} }
inserts the center of contraction for $\trunc n{ \hfib{g_1}{\proj1 w}}$ supplied by the assumption that $g_1$ is $n$-connected; whereas in the case in question this type has the obvious inhabitant $\tproj n{ \pairr{a,\refl{g_1(a)}}}$, which by contractibility must be equal to the center.
+The second propositional equality is because the equivalence $\hfib{h_1\circ g_1}b\eqvsym\hfib{h_2\circ g_2}b$ concatenates the second components with $\opp{H(a)}$, and we have $\opp{H(a)} \ct \refl{} = \opp{H(a)}$.
+The reader may check that the other equalities are definitional (assuming a reasonable solution to \cref{ex:unstable-octahedron}).
+\end{proof}
+
+% The equivalences $E(H,b)$ are such that $E(H^{-1},b)= E(H,b)^{-1}$.
+
+Combining \cref{prop:to_image_is_connected,prop:factor_equiv_fiber}, we have the following unique factorization result:
+
+\begin{thm}\label{thm:orth-fact}
+For each $f:A\to B$, the space $\fact_n(f)$ defined by
+\begin{equation*}
+\sm{X:\type}{g:A\to X}{h:X\to B} (h\circ g\htpy f)\times\mathsf{conn}_n(g)\times\mathsf{trunc}_n(h)
+\end{equation*}
is contractible, where $\mathsf{conn}_n(g)$ denotes the proposition that $g$ is $n$-connected and $\mathsf{trunc}_n(h)$ the proposition that $h$ is $n$-truncated.
+Its center of contraction is the element
+\begin{equation*}
+\pairr{\im_n(f),\tilde{f},\proj1,\theta,\varphi,\psi}:\fact_n(f)
+\end{equation*}
+arising from \cref{prop:to_image_is_connected},
+where $\theta:\proj1\circ\tilde{f}\htpy f$ is the canonical homotopy, where $\varphi$ is the proof of
+\cref{prop:to_image_is_connected}, and where $\psi$ is the obvious proof that $\proj1:\im_n(f)\to B$ has $n$-truncated fibers.
+\end{thm}
+
+\begin{proof}
+By \cref{prop:to_image_is_connected} we know that there is an element of $\fact_n(f)$, hence it is enough to
+show that $\fact_n(f)$ is a mere proposition. Suppose we have two $n$-factorizations
+\begin{equation*}
+\pairr{X_1,g_1,h_1,H_1,\varphi_1,\psi_1}\qquad\text{and}\qquad\pairr{X_2,g_2,h_2,H_2,\varphi_2,\psi_2}
+\end{equation*}
+of $f$. Then we have the pointwise-concatenated homotopy
+\[ H\defeq (\lam{a} H_1(a) \ct H_2^{-1}(a)) \,:\, (h_1\circ g_1\htpy h_2\circ g_2).\]
+By univalence and the characterization of paths and transport in $\Sigma$-types, function types, and path types, it suffices to show that
+\begin{enumerate}
+\item there is an equivalence $e:X_1\eqvsym X_2$,
+\item there is a homotopy $\zeta:e\circ g_1\htpy g_2$,
+% \note{Is it easy enough to see that these elements are the various transports?}
+\item there is a homotopy $\eta:h_2\circ e\htpy h_1$,
+\item for any $a:A$ we have $\opp{\apfunc{h_2}(\zeta(a))} \ct \eta(g_1(a)) \ct H_1(a) = H_2(a)$.
+\end{enumerate}
+%where $\underline{e}$ is the function underlying the equivalence.
+We prove these four assertions in that order.
+\begin{enumerate}
+\item By \cref{prop:factor_equiv_fiber}, we have a fiberwise equivalence
+% \note{It could be a nice exercise for the book to show
+% that if $f_1:A_1\to B$ and $f_2:A_2\to B$ have equivalent fibers, then $A_1\eqvsym A_2$}.
+\begin{equation*}
+E(H) : \prd{b:B} \eqv{\hfib{h_1}b}{\hfib{h_2}b}.
+\end{equation*}
+This induces an equivalence of total spaces, i.e.\ we have
+\begin{equation*}
+\eqvspaced{\Parens{\sm{b:B} \hfib{h_1}b}}{\Parens{\sm{b:B}\hfib{h_2}b}}.
+\end{equation*}
+Of course, we also have the equivalences $X_1\eqvsym\sm{b:B}\hfib{h_1}b$ and $X_2\eqvsym\sm{b:B}
+\hfib{h_2}b$ from \cref{thm:total-space-of-the-fibers}.
+This gives us our equivalence $e:X_1\eqvsym X_2$; the reader may verify that the underlying function of $e$ is given by
+\begin{equation*}
+e(x) \jdeq \proj1(E(H,h_1(x))(x,\refl{h_1(x)})).
+\end{equation*}
+\item By \cref{prop:factor_equiv_fiber}, we may choose
+ $\zeta(a) \defeq \apfunc{\proj1}(\overline E(H,a)) : e(g_1(a)) = g_2(a)$.
+ \label{item:orth-fact-2}
+\item For every $x:X_1$, we have
+\begin{equation*}
+\proj2(E(H,h_1(x))({x,\refl{h_1(x)}})) :h_2(e(x))= h_1(x),
+\end{equation*}
+giving us a homotopy $\eta:h_2\circ e\htpy h_1$.
+\item By the characterization of paths in fibers (\cref{lem:hfib}), the path $\overline E(H,a)$ from \cref{prop:factor_equiv_fiber} gives us
+ $\eta(g_1(a)) = \apfunc{h_2}(\zeta(a)) \ct \opp{H(a)}$.
+ The desired equality follows by substituting the definition of $H$ and rearranging paths.\qedhere
+\end{enumerate}
+\end{proof}
+
+% I can't make sense of this, and it doesn't seem necessary
+%
+% \begin{cor}
+% A function $f:A\to B$ is $n$-connected if and only if
+% \begin{equation*}
+% \prd C\prd{g:\modalfunc(B\to C)} \iscontr\big(\sm{h:\modalfunc(B\to C)}\underline{h}\circ
+% f\htpy\underline{g}\circ f\big).
+% \end{equation*}
+% \end{cor}
+
+By standard arguments, this yields the following orthogonality principle.
+
+\begin{thm}
+ Let $e:A\to B$ be $n$-connected and $m:C\to D$ be $n$-truncated.
+ Then the map
+ \[ \varphi: (B\to C) \;\to\; \sm{h:A\to C}{k:B\to D} (m\circ h \htpy k \circ e) \]
+ is an equivalence.
+\end{thm}
+\begin{proof}[Sketch of proof]
+ For any $(h,k,H)$ in the codomain, let $h = h_2 \circ h_1$ and $k = k_2 \circ k_1$, where $h_1$ and $k_1$ are $n$-connected and $h_2$ and $k_2$ are $n$-truncated.
 Then $(m\circ h_2) \circ h_1$ and $k_2 \circ (k_1\circ e)$ are both $n$-factorizations of $m \circ h = k\circ e$.
+ Thus, there is a unique equivalence between them.
+ It is straightforward (if a bit tedious) to extract from this that $\hfib\varphi{(h,k,H)}$ is contractible.
+\end{proof}
+
+\index{orthogonal factorization system|)}%
+\index{unique!factorization system|)}%
+
+We end by showing that images are stable under pullback.
+\index{image!stability under pullback}
+\index{factorization!stability under pullback}
+
+\begin{lem}\label{lem:hfiber_wrt_pullback}
+Suppose that the square
+\begin{equation*}
+ \vcenter{\xymatrix{
+ A\ar[r]\ar[d]_f &
+ C\ar[d]^g\\
+ B\ar[r]_-h &
+ D
+ }}
+\end{equation*}
+is a pullback square and let $b:B$. Then $\hfib{f}b\eqvsym\hfib{g}{h(b)}$.
+\end{lem}
+
+\begin{proof}
+This follows from pasting of pullbacks (\cref{ex:pullback-pasting}), since the type $X$ in the diagram
+\begin{equation*}
+ \vcenter{\xymatrix{
+ X\ar[r]\ar[d] &
+ A\ar[r]\ar[d]_f &
+ C\ar[d]^g\\
+ \unit\ar[r]_b &
+ B\ar[r]_h &
+ D
+ }}
+\end{equation*}
+is the pullback of the left square if and only if it is the pullback of the outer rectangle, while $\hfib{f}b$ is the pullback of the square on the left and $\hfib{g}{h(b)}$ is the pullback of the outer rectangle.
+\end{proof}
+
+\begin{thm}\label{thm:stable-images}
+\index{stability!of images under pullback}%
+Consider functions $f:A\to B$, $g:C\to D$ and the diagram
+\begin{equation*}
+ \vcenter{\xymatrix{
+ A\ar[r]\ar[d]_{\tilde{f}_n} &
+ C\ar[d]^{\tilde{g}_n}\\
+ \im_n(f)\ar[r]\ar[d]_{\proj1} &
+ \im_n(g)\ar[d]^{\proj1}\\
+ B\ar[r]_h &
+ D
+ }}
+\end{equation*}
+If the outer rectangle is a pullback, then so is the bottom square (and hence so is the top square, by \cref{ex:pullback-pasting}). Consequently, images are stable under pullbacks.
+\end{thm}
+
+\begin{proof}
+Assuming the outer square is a pullback, we have equivalences
+\begin{align*}
+B\times_D\im_n(g) & \jdeq \sm{b:B}{w:\im_n(g)} h(b)=\proj1 w\\
+& \eqvsym \sm{b:B}{d:D}{w:\trunc n{\hfib{g}d}} h(b)= d\\
+& \eqvsym \sm{b:B} \trunc n{\hfib{g}{h(b)}}\\
+& \eqvsym \sm{b:B} \trunc n{\hfib{f}b} &&
+\text{(by \cref{lem:hfiber_wrt_pullback})}\\
& \jdeq \im_n(f). && \qedhere
+\end{align*}
+\end{proof}
+
+\index{n-type@$n$-type|)}%
+
+\section{Modalities}
+\label{sec:modalities}
+
+\index{modality|(}
+
+Nearly all of the theory of $n$-types and connectedness can be done in much greater generality.
+This section will not be used in the rest of the book.
+
+Our first thought regarding generalizing the theory of $n$-types might be to take \cref{thm:trunc-reflective} as a definition.
+
+\begin{defn}\label{defn:reflective-subuniverse}
+ A \define{reflective subuniverse}
+ \indexdef{reflective!subuniverse}%
+ \indexdef{subuniverse, reflective}%
+ is a predicate $P:\type\to\prop$ such that
+ for every $A:\type$ we have a type $\reflect A$ such that $P(\reflect A)$ and a map
+ $\project_A:A\to\reflect A$, with the property that for every $B:\type$ with $P(B)$, the following map is an equivalence:
+ \[\function{(\reflect A\to{}B)}{(A\to{}B)}{f}{f\circ\project_A}.\]
+\end{defn}
+
+We write $\P \defeq \setof{A:\type | P(A)}$, so $A:\P$ means that $A:\type$ and we have $P(A)$.
+We also write $\rec{\modal}$ for the quasi-inverse of the above map.
+The notation $\reflect$ may seem slightly odd, but it will make more sense soon.
+
+For any reflective subuniverse, we can prove all the familiar facts about reflective subcategories from category theory, in the usual way.
+For instance, we have:
+\begin{itemize}
+\item A type $A$ lies in $\P$ if and only if $\project_A:A\to\reflect A$ is an equivalence.
+\item $\P$ is closed under retracts.
+ In particular, $A$ lies in $\P$ as soon as $\project_A$ admits a retraction.
+\item The operation $\reflect$ is a functor in a suitable up-to-coherent-homotopy sense, which we can make precise at as high levels as necessary.
+\item The types in $\P$ are closed under all limits such as products and pullbacks.
+ In particular, for any $A:\P$ and $x,y:A$, the identity type $(x=_A y)$ is also in $\P$, since it is a pullback of two functions $\unit\to A$.
+\item Colimits in $\P$ can be constructed by applying $\reflect$ to ordinary colimits of types.
+\end{itemize}
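
To spell out why $(x=_A y)$ is such a pullback (a routine verification): for $x,y:A$, the square
\begin{equation*}
 \vcenter{\xymatrix{
 (x=_A y)\ar[r]\ar[d] &
 \unit\ar[d]^{y}\\
 \unit\ar[r]_{x} &
 A
 }}
\end{equation*}
is a pullback, since by definition $\unit\times_A\unit \jdeq \sm{u:\unit}{v:\unit} (x = y) \eqvsym (x=_A y)$; hence if $A$ lies in $\P$, then so does $(x=_A y)$.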
+
+Importantly, closure under products extends also to ``infinite products'', i.e.\ dependent function types.
+
+\begin{thm}\label{thm:reflsubunv-forall}
+ If $B:A\to\P$ is any family of types in a reflective subuniverse \P, then $\prd{x:A} B(x)$ is also in \P.
+\end{thm}
+\begin{proof}
+ For any $x:A$, consider the function $\mathsf{ev}_x : (\prd{x:A} B(x)) \to B(x)$ defined by $\mathsf{ev}_x(f) \defeq f(x)$.
+ Since $B(x)$ lies in $P$, this extends to a function
+ \[ \rec{\modal}(\mathsf{ev}_x) : \reflect\Parens{\prd{x:A} B(x)} \to B(x). \]
+ Thus we can define $h:\reflect(\prd{x:A} B(x)) \to \prd{x:A} B(x)$ by $h(z)(x) \defeq \rec{\modal}(\mathsf{ev}_x)(z)$.
+ Then $h$ is a retraction of $\project_{\prd{x:A} B(x)}$, so that ${\prd{x:A} B(x)}$ is in $\P$.
+\end{proof}
+
+In particular, if $B:\P$ and $A$ is any type, then $(A\to B)$ is in \P.
+In categorical language, this means that any reflective subuniverse is an \define{exponential ideal}.
+\indexdef{exponential ideal}%
+This, in turn, implies by a standard argument that the reflector preserves finite products.
+
+\begin{cor}\label{cor:trunc_prod}
+ For any types $A$ and $B$ and any reflective subuniverse, the induced map $\reflect(A\times B) \to \reflect(A) \times \reflect(B)$ is an equivalence.
+\end{cor}
+\begin{proof}
+ It suffices to show that $\reflect(A) \times \reflect(B)$ has the same universal property as $\reflect(A\times B)$.
+ It lies in $\P$ by the above remark that types in $\P$ are closed under limits.
+ Now let $C:\P$; we have
+ \begin{align*}
+ (\reflect(A) \times \reflect(B) \to C)
+ &= (\reflect(A) \to (\reflect(B) \to C))\\
+ &= (\reflect(A) \to (B \to C))\\
+ &= (A \to (B \to C))\\
+ &= (A \times B \to C)
+ \end{align*}
+ using the universal properties of $\reflect(B)$ and $\reflect(A)$, along with the fact that $B\to C$ is in \P since $C$ is.
 It is straightforward to verify that this equivalence is given by composing with $\project_A \times \project_B$, as needed.
+\end{proof}
+
+It may seem odd that every reflective subcategory of types is automatically an exponential ideal, with a product-preserving reflector.
+However, this is also the case classically in the category of \emph{sets}, for the same reasons.
+It's just that this fact is not usually remarked on, since the classical category of sets---in contrast to the category of homotopy
+types---does not have many interesting reflective subcategories.
+
+Two basic properties of $n$-types are \emph{not} shared by general reflective subuniverses: \cref{thm:ntypes-sigma} (closure under $\Sigma$-types) and \cref{thm:truncn-ind} (truncation induction).
+However, the analogues of these two properties are equivalent to each other.
+
+
+\begin{thm}\label{thm:modal-char}
+ For a reflective subuniverse \P, the following are logically equivalent.
+ \begin{enumerate}
+ \item If $A:\P$ and $B:A\to \P$, then $\sm{x:A} B(x)$ is in \P.\label{item:mchr1}
+ \item for every $A:\type$, type family $B:\reflect A\to\P$, and map $g:\prd{a:A} B(\project(a))$, there exists $f:\prd{z:\reflect A} B(z)$ such that $f(\project(a)) = g(a)$ for all $a:A$.\label{item:mchr2}
+ \end{enumerate}
+\end{thm}
+\begin{proof}
+ Suppose~\ref{item:mchr1}.
+ Then in the situation of~\ref{item:mchr2}, the type $\sm{z:\reflect A} B(z)$ lies in $\P$, and we have $g':A\to \sm{z:\reflect A} B(z)$ defined by $g'(a)\defeq (\project(a),g(a))$.
+ Thus, we have $\rec{\modal}(g'):\reflect A \to \sm{z:\reflect A} B(z)$ such that $\rec{\modal}(g')(\project(a)) = (\project(a),g(a))$.
+
+ Now consider the functions $\proj1 \circ \rec{\modal}(g') : \reflect A \to \reflect A$ and $\idfunc[\reflect A]$.
+ By assumption, these become equal when precomposed with $\project$.
+ Thus, by the universal property of $\reflect$, they are equal already, i.e.\ we have $p_z:\proj1(\rec{\modal}(g')(z)) = z$ for all $z$.
+ Now we can define
+ %
 \narrowequation{f(z) \defeq \trans{p_z}{\proj2(\rec{\modal}(g')(z))}.}
 %
 Using the adjunction property of the equivalence in
 \cref{defn:reflective-subuniverse}, one can show that the first component of
+ %
+ \narrowequation{\rec{\modal}(g')(\project(a)) = (\project(a),g(a))}
+ %
+ is equal to $p_{\project(a)}$. Thus, its second component yields
+ $f(\project(a)) = g(a)$, as needed.
+
+ Conversely, suppose~\ref{item:mchr2}, and that $A:\P$ and $B:A\to\P$.
+ Let $h$ be the composite
+ \[ \reflect\Parens{\sm{x:A} B(x)} \xrightarrow{\reflect(\proj1)} \reflect A \xrightarrow{\opp{(\project_A)}} A. \]
+ Then for $z:\sm{x:A} B(x)$ we have
+ \begin{align*}
+ h(\project(z)) &= \opp\project(\reflect(\proj1)(\project(z)))\\
+ &= \opp\project(\project(\proj1(z)))\\
+ &= \proj1(z).
+ \end{align*}
+ Denote this path by $p_z$.
+ Now if we define $C:\reflect(\sm{x:A} B(x)) \to \type$ by $C(w) \defeq B(h(w))$, we have
+ \[ g \defeq \lam{z} \trans{p_z}{\proj2(z)} \;:\; \prd{z:\sm{x:A} B(x)} C(\project(z)). \]
+ Thus, the assumption yields
+ %
+ \narrowequation{f:\prd{w:\reflect(\sm{x:A}B(x))} C(w)}
+ %
+ such that $f(\project(z)) = g(z)$.
+ Together, $h$ and $f$ give a function
+ %
+ \narrowequation{k:\reflect(\sm{x:A}B(x)) \to \sm{x:A}B(x)}
+ %
+ defined by $k(w) \defeq (h(w),f(w))$, while $p_z$ and the equality $f(\project(z)) = g(z)$ show that $k$ is a retraction of $\project_{\sm{x:A}B(x)}$.
+ Therefore, $\sm{x:A}B(x)$ is in \P.
+\end{proof}
+
+Note the similarity to the discussion in \cref{sec:htpy-inductive}.
+\index{recursion principle!for a modality}%
+\index{induction principle!for a modality}%
+\index{uniqueness!principle, propositional!for a modality}%
+The universal property of the reflector of a reflective subuniverse is like a recursion principle with its uniqueness property, while \cref{thm:modal-char}\ref{item:mchr2} is like the corresponding induction principle.
+Unlike in \cref{sec:htpy-inductive}, the two are not equivalent here, because of the restriction that we can only eliminate into types that lie in $\P$.
+Condition~\ref{item:mchr1} of \cref{thm:modal-char} is what fixes the disconnect.
+
+Unsurprisingly, of course, if we have the induction principle, then we can derive the recursion principle.
+We can also derive its uniqueness property, as long as we allow ourselves to eliminate into path types.
+This suggests the following definition.
+Note that any reflective subuniverse can be characterized by the operation $\reflect:\type\to\type$ and the functions $\project_A:A\to \reflect A$, since we have $P(A) = \isequiv(\project_A)$.
+
+\begin{defn}\label{defn:modality}
+A \define{modality}
+\indexdef{modality}
+is an operation $\modal:\type\to\type$ for which there are
+\begin{enumerate}
+\item functions $\mreturn^\modal_A:A\to\modal(A)$ for every type $A$.\label{item:modal1}
+\item for every $A:\type$ and every type family $B:\modal(A)\to\type$, a function\label{item:modal2}
+\begin{equation*}
+\ind{\modal}:\Parens{\prd{a:A}\modal(B(\mreturn^\modal_A(a)))}\to\prd{z:\modal(A)}\modal(B(z)).
+\end{equation*}
+\item A path $\ind\modal(f)(\mreturn^\modal_A(a)) = f(a)$ for each $f:\prd{a:A}\modal(B(\mreturn^\modal_A(a)))$.\label{item:modal3}
+\item For any $z,z':\modal(A)$, the function $\mreturn^\modal_{z=z'} : (z=z') \to \modal(z=z')$ is an equivalence.\label{item:modal4}
+\end{enumerate}
+We say that $A$ is \define{modal}
+\indexdef{modal!type}%
+\indexdef{type!modal}%
+for $\modal$ if $\mreturn^\modal_A:A\to\modal(A)$ is an equivalence, and we write
+\begin{equation}
+ \modaltype\defeq\setof{X:\type | X \text{ is $\modal$-modal} }\label{eq:modaltype}
+\end{equation}
+for the type of modal types.
+\end{defn}
+
+Conditions~\ref{item:modal2} and~\ref{item:modal3} are very similar to \cref{thm:modal-char}\ref{item:mchr2}, but phrased using $\modal B(z)$ rather than assuming $B$ to be valued in $\P$.
+This allows us to state the condition purely in terms of the operation $\modal$, rather than requiring the predicate $P:\type\to\prop$ to be given in advance.
+(It is not entirely satisfactory, since we still have to refer to $P$ not-so-subtly in clause~\ref{item:modal4}.
+We do not know whether~\ref{item:modal4} follows from~\ref{item:modal1}--\ref{item:modal3}.)
+However, the stronger-looking property of \cref{thm:modal-char}\ref{item:mchr2} follows from \cref{defn:modality}\ref{item:modal2} and~\ref{item:modal3}, since for any $C:\modal A \to \modaltype$ we have $C(z) \eqvsym \modal C(z)$, and we can pass back across this equivalence.
+
+\index{universal!property!of a modality}%
+As with other induction principles, this implies a universal property.
+
+\begin{thm}\label{prop:lv_n_deptype_sec_equiv_by_precomp}
+Let $A$ be a type and let $B:\modal(A)\to\modaltype$. Then the function
+\begin{equation*}
+(\blank\circ \mreturn^\modal_A) : \Parens{\prd{z:\modal(A)}B(z)} \to \Parens{\prd{a:A}B(\mreturn^\modal_A(a))}
+\end{equation*}
+is an equivalence.
+\end{thm}
+\begin{proof}
+By definition, the operation $\ind{\modal}$ is a right inverse to $(\blank\circ \mreturn^\modal_A)$.
+Thus, we only need to find a homotopy
+\begin{equation*}
+\prd{z:\modal(A)}s(z)= \ind{\modal}(s\circ \mreturn^\modal_A)(z)
+\end{equation*}
+for each $s:\prd{z:\modal(A)}B(z)$, exhibiting it as a left inverse as well.
By assumption, each $B(z)$ is modal, and hence each type $s(z)= \ind{\modal}(s\circ \mreturn^\modal_A)(z)$
+is also modal.
+Thus, it suffices to find a function of type
+\begin{equation*}
+\prd{a:A}s(\mreturn^\modal_A(a))= \ind{\modal}(s\circ \mreturn^\modal_A)(\mreturn^\modal_A(a))
+\end{equation*}
+which follows from \cref{defn:modality}\ref{item:modal3}.
+\end{proof}
+
+In particular, for every type $A$ and every modal type $B$, we have an equivalence $(\modal A\to B)\eqvsym (A\to B)$.
+
+\begin{cor}
+ For any modality $\modal$, the $\modal$-modal types form a reflective subuniverse satisfying the equivalent conditions of \cref{thm:modal-char}.
+\end{cor}
+
+Thus, modalities can be identified with reflective subuniverses closed under $\Sigma$-types.
+The name \emph{modality} comes, of course, from \emph{modal logic}\index{modal!logic}, which studies logic where we can form statements such as ``possibly $A$'' (usually written $\diamond A$) or ``necessarily $A$'' (usually written $\Box A$).
+The symbol $\modal$ is somewhat common for an arbitrary modal operator\index{modal!operator}. % (rather than a specific one such as $\diamond$ or $\Box$).
+Under the propositions-as-types principle, a modality in the sense of modal logic corresponds to an operation on \emph{types}, and \cref{defn:modality} seems a reasonable candidate for how such an operation should be defined.
+(More precisely, we should perhaps call these \emph{idempotent, monadic} modalities; see the Notes.)
+\index{idempotent!modality}%
+As mentioned in \cref{subsec:when-trunc}, we may in general use adverbs\index{adverb} to speak informally about such modalities, such as ``merely''\index{merely} for the propositional truncation and ``purely''\index{purely} for the identity modality
+\index{identity!modality}%
+\index{modality!identity}%
+(i.e.\ the one defined by $\modal A \defeq A$).
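
A further standard example, stated here only as an illustration, is the \emph{double negation} modality, defined by
\begin{equation*}
\modal A \defeq \neg\neg A
\qquad\text{with}\qquad
\mreturn^\modal_A(a) \defeq \lam{g} g(a).
\end{equation*}
Since $\neg\neg A$ is always a mere proposition, its modal types are precisely the $\neg\neg$-stable propositions; one might pronounce this modality with an adverb such as ``irrefutably''.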
+
+For any modality $\modal$, we define a map $f:A\to B$ to be \define{$\modal$-connected}
+\indexdef{function!.circle-connected@$\modal$-connected}%
+\indexdef{.circle-connected function@$\modal$-connected function}%
+if $\modal(\hfib f b)$ is contractible for all $b:B$, and to be \define{$\modal$-truncated}
+\indexdef{function!.circle-truncated@$\modal$-truncated}%
+\indexdef{.circle-truncated function@$\modal$-truncated function}%
+if $\hfib f b$ is modal for all $b:B$.
+All of the theory of \cref{sec:connectivity,sec:image-factorization} which doesn't involve relating $n$-types for different values of $n$ applies verbatim in this generality.
+\index{orthogonal factorization system}%
+\index{unique!factorization system}%
+In particular, we have an orthogonal factorization system.
+
+An important class of modalities which does \emph{not} include the $n$-trun\-ca\-tions is the \emph{left exact} modalities: those for which the functor $\modal$ preserves pullbacks as well as finite products.
+\index{topology!Lawvere-Tierney}%
+These are a categorification of ``Lawvere-Tierney\index{Lawvere}\index{Tierney} topologies'' in elementary topos\index{topos} theory,
+and correspond in higher-categorical semantics to sub-$(\infty,1)$-toposes.
+\index{.infinity1-topos@$(\infty,1)$-topos}%
+However, this is beyond the scope of this book.
+
+Some particular examples of modalities other than $n$-truncation can be found in the exercises.
+
+\index{modality|)}
+
+\sectionNotes
+
+The notion of homotopy $n$-type in classical homotopy theory is quite old.
+It was Voevodsky who realized that the notion can be defined recursively in homotopy type theory, starting from contractibility.
+
+\index{axiom!Streicher's Axiom K}%
+The property ``Axiom K'' was so named by Thomas Streicher, as a property of identity types which comes after J, the latter being the traditional name for the eliminator of identity types.
+\cref{thm:hedberg} is due to Hedberg~\cite{hedberg1998coherence}; \cite{krausgeneralizations} contains more information and generalizations.
+
+The notions of $n$-connected spaces and functions are also classical in homotopy theory, although as mentioned before, our indexing for connectedness of functions is off by one from the classical indexing.
+The importance of the resulting factorization system has been emphasized by recent work in higher topos theory by Rezk, Lurie, and others.%
+\index{.infinity1-topos@$(\infty,1)$-topos}
+In particular, the results of this chapter should be compared with~\cite[\S6.5.1]{lurie:higher-topoi}.
+In \cref{sec:freudenthal}, the theory of $n$-connected maps will be crucial to our proof of the Freudenthal suspension theorem.
+
+Modal operators\index{modal!operator} in \emph{simple} type theory have been studied extensively; see e.g.~\cite{modalTT}. In the setting of dependent type theory, \cite{ab:bracket-types} treats the special case of propositional truncation ($(-1)$-truncation) as a modal operator\index{modal!operator}. The development presented here greatly extends and generalizes this work, while drawing also on ideas from topos theory.\index{topos}
+
+Generally, modal operators\index{modal!operator} come in (at least) two flavors: those such as $\diamond$ (``possibly'') for which $A\Rightarrow \diamond A$, and those such as $\Box$ (``necessarily'') for which $\Box A \Rightarrow A$.
+When they are also \emph{idempotent} (i.e.\ $\diamond A = \diamond{\diamond A}$ or $\Box A = \Box{\Box A}$), the former may be identified with reflective subcategories (or equivalently, idempotent monads), and the latter with coreflective subcategories (or idempotent comonads).
+\index{monad}
+\index{comonad}
+However, in dependent type theory it is trickier to deal with the comonadic sort, since they are more rarely stable under pullback, and thus cannot be interpreted as operations on the universe \UU.
+Sometimes there are ways around this (see e.g.~\cite{QGFTinCHoTT12}), but for simplicity, here we stick to the monadic sort.
+
+On the computational side, monads (and hence modalities\index{modality}) are used to model computational effects in functional programming~\cite{Moggi89}.%
+\index{programming}%
+\index{computational effect}
+A computation is said to be \emph{pure} if its execution results in no side effects (such as printing a message to the screen, playing music, or sending data over the Internet).
+There exist ``purely functional'' programming languages, such as Haskell\index{Haskell}, in which it is technically only possible to write pure functions: side effects are represented by applying ``monads'' to output types.
+For instance, a function of type $\mathsf{Int}\to\mathsf{Int}$ is pure, while a function of type $\mathsf{Int}\to \mathsf{IO}(\mathsf{Int})$ may perform input and output along the way to computing its result; the operation $\mathsf{IO}$ is a monad.
+\index{purely}%
+(This is the origin of our use of the adverb ``purely'' for the identity monad, since it corresponds computationally to pure functions with no side-effects.)
+The modalities we have considered in this chapter are all idempotent, whereas those used in functional programming rarely are, but the ideas are still closely related.
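To make the computational side concrete, here is a minimal Haskell sketch; the names `double` and `countedDouble` are ours, purely for illustration, not from any library. A function of type `Int -> Int` is pure, while one of type `Int -> IO Int` carries its side effects in the `IO` monad, exactly as described above.

```haskell
import Data.IORef

-- A pure function: same result for the same input, no side effects.
double :: Int -> Int
double x = x + x

-- An effectful function: the type Int -> IO Int records that running it
-- may perform side effects (here, incrementing a mutable counter).
countedDouble :: IORef Int -> Int -> IO Int
countedDouble counter x = do
  modifyIORef counter (+ 1)   -- side effect: bump the call counter
  pure (double x)             -- 'pure' is the unit of the IO monad

main :: IO ()
main = do
  counter <- newIORef 0
  y <- countedDouble counter 21
  calls <- readIORef counter
  print (y, calls)            -- prints (42,1)
```

Note that `IO` is monadic but not idempotent: `IO (IO a)` is not the same type as `IO a`, although the monad structure provides a map collapsing one into the other.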
+
+
+\sectionExercises
+
+\begin{ex}\label{ex:all-types-sets}\
+ \begin{enumerate}
+ \item Use \cref{thm:h-set-refrel-in-paths-sets} to show
+ that if $\brck{A}\to A$ for every type $A$,
+ then every type is a set.
+ \item Show that if every surjective function (purely) splits,
+ i.e.~if
+ %
+ \narrowequation{\prd{b:B}\brck{\hfib{f}{b}}\to\prd{b:B}\hfib{f}{b}}
+ %
+ for every $f:A\to B$, then every type is a set.
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:s2-colim-unit}
+ For this exercise, we consider the following general notion of colimit.
+ Define a \define{graph}\indexdef{graph} $\Gamma$ to consist of a type $\Gamma_0$ and a family $\Gamma_1 : \Gamma_0 \to \Gamma_0 \to \UU$.
+ A \define{diagram}\index{diagram} (of types) over a graph $\Gamma$ consists of a family $F:\Gamma_0 \to \UU$ together with for each $x,y:\Gamma_0$, a function $F_{x,y}:\Gamma_1(x,y) \to F(x) \to F(y)$.
+ The \define{colimit}\index{colimit!of types} of such a diagram is the higher inductive type $\colim(F)$ generated by
+ \begin{itemize}
+ \item for each $x:\Gamma_0$, a function $\inc_x:F(x) \to \colim(F)$, and
+ \item for each $x,y:\Gamma_0$ and $\gamma:\Gamma_1(x,y)$ and $a:F(x)$, a path $\inc_y(F_{x,y}(\gamma,a)) = \inc_x(a)$.
+ \end{itemize}
+ There are more general kinds of colimits as well (see e.g.\ \cref{ex:s2-colim-unit-2}), but this is good enough for many purposes.
+ \begin{enumerate}
+ \item Exhibit a graph $\Gamma$ such that colimits of $\Gamma$-diagrams can be identified with pushouts as defined in \cref{sec:colimits}.
+ In other words, each span should induce a diagram over $\Gamma$ whose colimit is the pushout of the span.
+ \item Exhibit a graph $\Gamma$ and a diagram $F$ over $\Gamma$ such that $F(x)=\unit$ for all $x$, but such that $\colim(F)=\Sn^2$.
+ Note that $\unit$ is a $(-2)$-type, while $\Sn^2$ is not expected to be an $n$-type for any finite $n$.
+ See also \cref{ex:s2-colim-unit-2}.
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:ntypes-closed-under-wtypes}
+ Show that if $A$ is an $n$-type and $B:A\to \ntype{n}$ is a family of $n$-types, where $n\ge -1$, then the $W$-type $\wtype{a:A} B(a)$ (see \cref{sec:w-types}) is also an $n$-type.
+\end{ex}
+
+\begin{ex}\label{ex:connected-pointed-all-section-retraction}
+ Use \cref{prop:nconn_fiber_to_total} to extend \cref{thm:connected-pointed} to any section-retraction pair.
+\end{ex}
+
+\begin{ex}\label{ex:ntype-from-nconn-const}
+ Show that \cref{thm:nconn-to-ntype-const} also works as a characterization in the other direction: $B$ is an $n$-type if and only if every map into $B$ from an $n$-con\-nect\-ed type is constant.
+ Ideally, your proof should work for any modality as in \cref{sec:modalities}.
+\end{ex}
+
+\begin{ex}\label{ex:connectivity-inductively}
+ Prove that for $n\ge -1$, a type $A$ is $n$-connected if and only if it is merely inhabited and for all $a,b:A$ the type $\id[A]ab$ is $(n-1)$-connected.
+ Thus, since every type is $(-2)$-connected, $n$-connectedness of types can be defined inductively using only propositional truncations.
+ (In particular, $A$ is 0-connected if and only if $\brck{A}$ and $\prd{a,b:A} \brck{a=b}$.)
+\end{ex}
+
+\begin{ex}\label{ex:lemnm}
+ \indexdef{excluded middle!LEMnm@$\LEM{n,m}$}%
+ For $-1\le n,m \le\infty$, let $\LEM{n,m}$ denote the statement
+ \[ \prd{A:\ntype{n}} \trunc m{A + \neg A},\]
+ where $\ntype{\infty} \defeq \type$ and $\trunc{\infty}{X}\defeq X$.
+ Show that:
+ \begin{enumerate}
+ \item If $n=-1$ or $m=-1$, then $\LEM{n,m}$ is equivalent to $\LEM{}$ from \cref{sec:intuitionism}.
+ \item If $n\ge 0$ and $m\ge 0$, then $\LEM{n,m}$ is inconsistent with univalence.
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:acnm}
+ \indexdef{axiom!of choice!ACnm@$\choice{n,m}$}%
+ For $-1\le n,m\le\infty$, let $\choice{n,m}$ denote the statement
+ \[ \prd{X:\set}{Y:X\to\ntype{n}}
+ \Parens{\prd{x:X} \trunc m{Y(x)}}
+ \to
+ \Trunc m{\prd{x:X} Y(x)},
+ \]
+ with conventions as in \cref{ex:lemnm}.
+ Thus $\choice{0,-1}$ is the axiom of choice from \cref{sec:axiom-choice},
+ while $\choice{\infty,\infty}$ is the identity function.
+ (If we had formulated $\choice{n,m}$ analogously to \eqref{eq:ac}
+ rather than \eqref{eq:epis-split},
+ $\choice{\infty,\infty}$ would be like \cref{thm:ttac}.)
+ It is known that $\choice{\infty,-1}$ is consistent with univalence, since it holds in Voevodsky's simplicial model.
+ \begin{enumerate}
+ \item Without using univalence, show that $\LEM{n,\infty}$ implies $\choice{n,m}$ for all $m$.
+ (On the other hand, in \cref{subsec:emacinsets} we will show that $\choice{}=\choice{0,-1}$ implies $\LEM{}=\LEM{-1,-1}$.)
+ \item Of course, $\choice{n,m}\Rightarrow \choice{k,m}$ if $k\le n$.
+ Are there any other implications between the principles $\choice{n,m}$?
+ Is $\choice{n,m}$ consistent with univalence for any $m\ge 0$ and any $n$?
+ (These are open questions.)\index{open!problem}
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:acnm-surjset}
+ Show that $\choice{n,-1}$ implies that for any $n$-type $A$, there merely exists a set $B$ and a surjection $B\to A$.
+\end{ex}
+
+\begin{ex}\label{ex:acconn}
+ Define the \define{$n$-connected axiom of choice}
+ \indexdef{n-connected@$n$-connected!axiom of choice}%
+ \indexdef{axiom!of choice!n-connected@$n$-connected}%
+ to be the statement
+ \begin{quote}
+ If $X$ is a set and $Y:X\to \type$ is a family of types such that each $Y(x)$ is $n$-connected, then $\prd{x:X} Y(x)$ is $n$-connected.
+ \end{quote}
+ Note that the $(-1)$-connected axiom of choice is $\choice{\infty,-1}$ from \cref{ex:acnm}.
+ \begin{enumerate}
+ \item Prove that the $(-1)$-connected axiom of choice implies the $n$-con\-nect\-ed axiom of choice for all $n\ge -1$.
+ \item Are there any other implications between the $n$-connected axioms of choice and the principles $\choice{n,m}$?
+ (This is an open question.)\index{open!problem}
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:n-truncation-not-left-exact}
+ Show that the $n$-truncation modality is not left exact for any $n\ge -1$.
+ That is, exhibit a pullback which it fails to preserve.
+\end{ex}
+
+\begin{ex}\label{ex:double-negation-modality}
+ Show that $X\mapsto (\neg\neg X)$ is a modality.\index{modal!operator}%
+\end{ex}
+
+\begin{ex}\label{ex:prop-modalities}
+ Let $P$ be a mere proposition.
+ \begin{enumerate}
+ \item Show that $X\mapsto (P\to X)$ is a left exact modality.
+ This is called the \define{open modality}
+ \indexdef{open!modality}%
+ \indexdef{modality!open}%
+ associated to $P$.
+ \item Show that $X\mapsto P*X$ is a left exact modality, where $*$ denotes the join (see \cref{sec:colimits}).
+ This is called the \define{closed modality}
+ \indexdef{closed!modality}%
+ \indexdef{modality!closed}%
+ associated to $P$.
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:f-local-type}
+ Let $f:A\to B$ be a map; a type $Z$ is \define{$f$-local}
+ \indexdef{f-local type@$f$-local type}%
+ \indexdef{type!f-local@$f$-local}%
+ if $(\blank\circ f):(B\to Z) \to (A\to Z)$ is an equivalence.
+ \begin{enumerate}
+ \item Prove that the $f$-local types form a reflective subuniverse.
+ You will want to use a higher inductive type to define the reflector (localization).
+ \item Prove that if $B=\unit$, then this subuniverse is a modality.
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:trunc-spokes-no-hub}
+ Show that in contrast to \cref{rmk:spokes-no-hub}, we could equivalently define $\trunc nA$ to be generated by a function $\tprojf n : A \to \trunc n A$ together with for each $r:\Sn^{n+1} \to \trunc n A$ and each $x:\Sn^{n+1}$, a path $s_r(x):r(x) = r(\base)$.
+\end{ex}
+
+\begin{ex}\label{ex:s2-colim-unit-2}
+ In this exercise, we consider a slightly fancier notion of colimit than in \cref{ex:s2-colim-unit}.
+ Define a \define{graph with composition}\indexdef{graph!with composition} $\Gamma$ to be a graph as in \cref{ex:s2-colim-unit} together with for each $x,y,z:\Gamma_0$, a function $\Gamma_1(y,z) \to \Gamma_1(x,y) \to \Gamma_1(x,z)$, written as $\delta\mapsto\gamma \mapsto \delta \circ \gamma$.
+ (For instance, any precategory as in \cref{cha:category-theory} is a graph with composition.)
+ A \define{diagram}\index{diagram} $F$ over a graph with composition $\Gamma$ consists of a diagram over the underlying graph, together with for each $x,y,z:\Gamma_0$ and $\gamma:\Gamma_1(x,y)$ and $\delta:\Gamma_1(y,z)$, a homotopy $\cmp_{x,y,z}(\delta,\gamma) : F_{y,z}(\delta) \circ F_{x,y}(\gamma) \htpy F_{x,z}(\delta\circ\gamma)$.
+ The \define{colimit}\index{colimit!of types} of such a diagram is the higher inductive type $\colim(F)$ generated by
+ \begin{itemize}
+ \item for each $x:\Gamma_0$, a function $\inc_x:F(x) \to \colim(F)$,
+ \item for each $x,y:\Gamma_0$ and $\gamma:\Gamma_1(x,y)$ and $a:F(x)$, a path $\glue_{x,y}(\gamma,a) : \inc_y(F_{x,y}(\gamma,a)) = \inc_x(a)$, and
+ \item for each $x,y,z:\Gamma_0$ and $\gamma:\Gamma_1(x,y)$ and $\delta:\Gamma_1(y,z)$ and $a:F(x)$, a path
+ \[ \ap{\inc_z}{\cmp_{x,y,z}(\delta,\gamma,a)} \ct \glue_{x,z}(\delta\circ \gamma,a) = \glue_{y,z}(\delta,F_{x,y}(\gamma,a)) \ct \glue_{x,y}(\gamma,a). \]
+ \end{itemize}
+ (This is a ``second-order approximation'' to a fully homotopy-theoretic notion of diagram and colimit, which ought to involve ``coherence paths'' of this sort at all higher levels.
+ Defining such things in type theory is an important open problem.)
+
+ Exhibit a graph with composition $\Gamma$ such that $\Gamma_0$ is a set and each type $\Gamma_1(x,y)$ is a mere proposition, and a diagram $F$ over $\Gamma$ such that $F(x)=\unit$ for all $x$, for which $\colim(F)=\Sn^2$.
+\end{ex}
+
+\begin{ex}\label{ex:fiber-map-not-conn}
+ Comparing \cref{lem:nconnected_postcomp_variation,prop:nconn_fiber_to_total}, one might be tempted to conjecture that if $f:A\to B$ is $n$-connected and $g:\prd{a:A} P(a) \to Q(f(a))$ induces an $n$-connected map $\Parens{\sm{a:A} P(a)} \to \Parens{\sm{b:B} Q(b)}$, then $g$ is fiberwise $n$-connected.
+ Give a counterexample to show that this is false.
+ (In fact, when generalized to modalities, this property characterizes the left exact ones; see \cref{ex:prop-modalities}.)
+\end{ex}
+
+\begin{ex}\label{ex:is-conn-trunc-functor}
+ Show that if $f : A \to B$ is $n$-connected, then $\trunc kf : \trunc kA \to \trunc kB$ is also $n$-connected.
+\end{ex}
+
+\begin{ex}\label{ex:categorical-connectedness}
+ We say a type $A$ is \define{categorically connected}
+ \indexdef{connected!categorically} if for all types $B, C$ the canonical map
+ $e_{A, B, C}:((A\to B) + (A\to C)) \to (A \to B + C)$ defined by
+ \begin{align*}
+ e_{A,B,C}(\inl(g)) &\defeq \lam{x} \inl(g(x)),\\
+ e_{A,B,C}(\inr(g)) &\defeq \lam{x} \inr(g(x))
+ \end{align*}
+ is an equivalence.
+ \begin{enumerate}
+ \item Show that any connected type is categorically connected.
+ \item Show that all categorically connected types are connected if and only if $\LEM{}$ holds. (Hint: consider $A \defeq \Sigma P$ such that $\neg \neg P$ holds.)
+ \end{enumerate}
+\end{ex}
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "hott-online"
+%%% End:
diff --git a/books/hott/homotopy.tex b/books/hott/homotopy.tex
new file mode 100644
index 0000000000000000000000000000000000000000..87136c1ab05846d96a71b784fe652447b354434a
--- /dev/null
+++ b/books/hott/homotopy.tex
@@ -0,0 +1,2892 @@
+\chapter{Homotopy theory}
+\label{cha:homotopy}
+
+\index{acceptance|(}
+
+In this chapter, we develop some homotopy theory within type theory. We
+use the \emph{synthetic approach} to homotopy theory introduced in
+\cref{cha:basics}: Spaces, points, paths, and homotopies are basic
+notions, which are represented by types and elements of types, particularly
+the identity type. The algebraic structure of paths and homotopies is
+represented by the natural $\infty$-groupoid
+\index{.infinity-groupoid@$\infty$-groupoid}%
+structure on types, which is generated
+by the rules for the identity type. Using higher inductive types, as
+introduced in \cref{cha:hits}, we can describe spaces directly by their
+universal properties.
+
+\index{synthetic mathematics}%
+There are several interesting aspects of this synthetic approach.
+First, it combines advantages of concrete models (such as topological
+spaces\index{topological!space}
+or simplicial sets)\index{simplicial!sets}
+with advantages of abstract categorical frameworks
+for homotopy theory (such as Quillen model categories).\index{Quillen model category}
+ On the one hand,
+our proofs feel elementary, and refer concretely to
+points, paths, and homotopies in types. On the other hand, our approach nevertheless abstracts away from
+any concrete presentation of these objects --- for example,
+associativity of path concatenation is proved by path induction, rather
+than by reparametrization of maps $[0,1] \to X$ or by horn-filling conditions.
+Type theory seems to be a very convenient way to study the abstract homotopy theory
+of $\infty$-groupoids: by using the rules for the identity type, we
+can avoid the complicated combinatorics involved in many definitions of
+$\infty$-groupoids, and explicate only as much of the
+structure as is needed in any particular proof.
+
+The abstract nature of type theory means that our proofs apply automatically in a variety of settings.
+In particular, as mentioned previously, homotopy type theory has one interpretation in
+Kan\index{Kan complex} simplicial sets\index{simplicial!sets},
+which is one model for the homotopy theory of $\infty$-groupoids. Thus,
+our proofs apply to this model, and transferring them along the geometric
+realization\index{geometric realization} functor from simplicial sets to topological spaces gives
+proofs of corresponding theorems in classical homotopy theory.
+However, though the details are work in progress, we can also
+interpret type theory in a wide variety of other categories
+that look like the category of $\infty$-groupoids, such as
+$(\infty,1)$-toposes\index{.infinity1-topos@$(\infty,1)$-topos}.
+Thus, proving a result in type theory will show
+that it holds in these settings as well.
+This sort of extra generality is well-known as a property of ordinary categorical logic:
+univalent foundations extends it to homotopy theory as well.
+
+Second, our synthetic approach has suggested new type-theoretic
+methods and proofs. Some of our proofs are fairly
+direct transcriptions of classical proofs. Others have a more
+type-theoretic feel, and consist mainly of calculations with
+$\infty$-groupoid operations, in a style that is very similar to how
+computer scientists use type theory to reason about computer programs.
+One thing that seems to have permitted these new proofs is the fact that type theory
+emphasizes different aspects of homotopy theory than other approaches:
+while tools like path induction and the universal properties of higher
+inductives are available in a setting like Kan\index{Kan complex} simplicial sets, type
+theory elevates their importance, because they are the \emph{only}
+primitive tools available for working with these types. Focusing on
+these tools has led to new descriptions of familiar constructions such
+as the universal cover of the circle and the Hopf fibration, using just the
+recursion principles for higher inductive types. These descriptions are
+very direct, and many of the proofs in this chapter involve
+computational calculations with such fibrations.
+\index{mathematics!constructive}%
+Another new aspect of our proofs is that they are constructive (assuming
+univalence and higher inductive types are constructive); we describe an
+application of this to homotopy groups of spheres in
+\cref{sec:moreresults}.
+
+\index{mathematics!formalized}%
+\indexsee{formalization of mathematics}{mathematics, formalized}%
+Third, our synthetic approach is very amenable to computer-checked
+proofs in proof assistants\index{proof!assistant} such as \Coq and \Agda.
+Almost all of the proofs
+described in this chapter have been computer-checked, and many of these
+proofs were first given in a proof assistant, and then ``unformalized''
+for this book. The computer-checked proofs are comparable in length and
+effort to the informal proofs presented here, and in some cases they are
+even shorter and easier to do.
+
+\mentalpause
+
+Before turning to the presentation of our results, we briefly review some
+ basic concepts and theorems from homotopy theory for the benefit of the reader who is not familiar with them.
+ We also give an overview of the
+results proved in this chapter.
+
+\index{classical!homotopy theory|(}%
+Homotopy theory is a branch of algebraic topology, and uses tools from abstract algebra,
+such as group theory, to investigate properties of spaces. One question
+homotopy theorists investigate is how to tell whether two spaces are the
+same, where ``the same'' means \emph{homotopy equivalence}
+\index{homotopy!equivalence!topological}%
+(continuous
+maps back and forth that compose to the identity up to homotopy---this
+gives the opportunity to ``correct'' maps that don't exactly compose to
+the identity). One common way to tell whether two spaces are the same
+is to calculate \emph{algebraic invariants} associated with a space,
+which include its \emph{homotopy groups} and \emph{homology} and
+\emph{cohomology groups}.
+\index{homology}%
+\index{cohomology}%
+Equivalent spaces have isomorphic
+homotopy/(co)homology groups, so if two spaces have different groups,
+then they are not equivalent. Thus, these algebraic invariants provide
+global information about a space, which can be used to tell spaces
+apart, and complements the local information provided by notions such as
+continuity. For example, the torus locally looks like the $2$-sphere, but it
+has a global difference, because it has a hole in it, and this difference
+is visible in the homotopy groups of these two spaces.
+
+The simplest example of a homotopy group is the \emph{fundamental group}
+\index{fundamental!group}%
+of a space, which is written $\pi_1(X,x_0)$: Given a space $X$ and a
+point $x_0$ in it, one can make a group whose elements are loops at
+$x_0$ (continuous paths from $x_0$ to $x_0$), considered up to homotopy, with the
+group operations given by the identity path (standing still), path
+concatenation, and path reversal. For example, the fundamental group of
+the $2$-sphere is trivial, but the fundamental group of the torus is not,
+which shows that the sphere and the torus are not homotopy equivalent.
+The intuition is that every loop on the sphere is homotopic to the
+identity, because its inside can be filled in. In contrast, a loop on
+the torus that goes through the donut's hole is not homotopic to the
+identity, so there are non-trivial elements in the fundamental group.
+
+\index{homotopy!group}
+The \emph{higher homotopy groups} provide additional information about a
+space. Fix a point $x_0$ in $X$, and consider the constant path
+$\refl{x_0}$. Then the homotopy classes of homotopies between $\refl{x_0}$
+and itself form a group $\pi_2(X,x_0)$, which tells us something about
+the two-dimensional structure of the space. Then $\pi_3(X,x_0)$ is the
+group of homotopy classes of homotopies between homotopies, and so on.
+One of the basic problems of algebraic topology is
+\emph{calculating the homotopy groups of a space $X$}, which means
+giving a group isomorphism between $\pi_k(X,x_0)$ and some more direct
+description of a group (e.g., by a multiplication table or
+presentation). Somewhat surprisingly, this is a very difficult
+question, even for spaces as simple as the spheres. As can be seen from
+\cref{tab:homotopy-groups-of-spheres}, some patterns emerge in the
+higher homotopy groups of spheres, but there is no general formula, and
+many homotopy groups of spheres are currently still unknown.
+
+\bgroup
+% Colors for the table of homotopy groups of spheres
+\definecolor{xA}{\OPTcolormodel}{\OPTcolxA}
+\definecolor{xB}{\OPTcolormodel}{\OPTcolxB}
+\definecolor{xC}{\OPTcolormodel}{\OPTcolxC}
+\definecolor{xD}{\OPTcolormodel}{\OPTcolxD}
+\definecolor{xE}{\OPTcolormodel}{\OPTcolxE}
+\definecolor{xF}{\OPTcolormodel}{\OPTcolxF}
+\definecolor{xG}{\OPTcolormodel}{\OPTcolxG}
+\definecolor{xH}{\OPTcolormodel}{\OPTcolxH}
+\definecolor{xI}{\OPTcolormodel}{\OPTcolxI}
+\definecolor{xJ}{\OPTcolormodel}{\OPTcolxJ}
+\definecolor{xK}{\OPTcolormodel}{\OPTcolxK}
+\definecolor{xL}{\OPTcolormodel}{\OPTcolxL}
+\definecolor{xM}{\OPTcolormodel}{\OPTcolxM}
+
+\newcommand{\cA}{\colorbox{xG}{\hbox to 20pt {\hfil$0$\hfil}}}
+\newcommand{\cB}{\colorbox{xH}{\hbox to 20pt {\hfil$\Z$\hfil}}}
+\newcommand{\cC}{\colorbox{xI}{\hbox to 20pt {\hfil$\Z_{2}$\hfil}}}
+\newcommand{\cD}{\colorbox{xJ}{\hbox to 20pt {\hfil$\Z_{2}$\hfil}}}
+\newcommand{\cE}{\colorbox{xK}{\hbox to 20pt {\hfil$\Z_{24}$\hfil}}}
+\newcommand{\cF}{\colorbox{xL}{\hbox to 20pt {\hfil$0$\hfil}}}
+\newcommand{\cG}{\colorbox{xM}{\hbox to 20pt {\hfil$0$\hfil}}}
+\newcommand{\cH}{\colorbox{xF}{\hbox to 20pt {\hfil$0$\hfil}}}
+\newcommand{\cI}{\colorbox{xE}{\hbox to 20pt {\hfil$0$\hfil}}}
+\newcommand{\cJ}{\colorbox{xD}{\hbox to 20pt {\hfil$0$\hfil}}}
+\newcommand{\cK}{\colorbox{xC}{\hbox to 20pt {\hfil$0$\hfil}}}
+\newcommand{\cL}{\colorbox{xB}{\hbox to 20pt {\hfil$0$\hfil}}}
+\newcommand{\cM}{\colorbox{xA}{\hbox to 20pt {\hfil$0$\hfil}}}
+
+\begin{table}[htb]
+\centering\small
+\begin{tabular}{p{15pt}>{\centering\arraybackslash}p{\OPTspherescolwidth}>{\centering\arraybackslash}p{\OPTspherescolwidth}>{\centering\arraybackslash}p{\OPTspherescolwidth}>{\centering\arraybackslash}p{\OPTspherescolwidth}>{\centering\arraybackslash}p{\OPTspherescolwidth}>{\centering\arraybackslash}p{\OPTspherescolwidth}>{\centering\arraybackslash}p{\OPTspherescolwidth}>{\centering\arraybackslash}p{\OPTspherescolwidth}>{\centering\arraybackslash}p{\OPTspherescolwidth}}
+\toprule
+ & $\Sn^0$ & $\Sn^1$ & $\Sn^{2}$ & $\Sn^{3}$ & $\Sn^{4}$ & $\Sn^{5}$ & $\Sn^{6}$ & $\Sn^{7}$ & $\Sn^{8}$ \\ \addlinespace[3pt] \midrule
+$\pi_{1}$ & $0$ & $\Z$ & \cA & \cH & \cI & \cJ & \cK & \cL & \cM \\ \addlinespace[3pt]
+$\pi_{2}$ & $0$ & $0$ & \cB & \cA & \cH & \cI & \cJ & \cK & \cL \\ \addlinespace[3pt]
+$\pi_{3}$ & $0$ & $0$ & $\Z$ & \cB & \cA & \cH & \cI & \cJ & \cK \\ \addlinespace[3pt]
+$\pi_{4}$ & $0$ & $0$ & $\Z_{2}$ & \cC & \cB & \cA & \cH & \cI & \cJ \\ \addlinespace[3pt]
+$\pi_{5}$ & $0$ & $0$ & $\Z_{2}$ & $\Z_{2}$ & \cC & \cB & \cA & \cH & \cI \\ \addlinespace[3pt]
+$\pi_{6}$ & $0$ & $0$ & $\Z_{12}$ & $\Z_{12}$ & \cD & \cC & \cB & \cA & \cH \\ \addlinespace[3pt]
+$\pi_{7}$ & $0$ & $0$ & $\Z_{2}$ & $\Z_{2}$ & {\footnotesize $\Z {\times} \Z_{12}$} & \cD & \cC & \cB & \cA \\ \addlinespace[3pt]
+$\pi_{8}$ & $0$ & $0$ & $\Z_{2}$ & $\Z_{2}$ & $\Z_{2}^{2}$ & \cE & \cD & \cC & \cB \\ \addlinespace[3pt]
+$\pi_{9}$ & $0$ & $0$ & $\Z_{3}$ & $\Z_{3}$ & $\Z_{2}^{2}$ & $\Z_{2}$ & \cE & \cD & \cC \\ \addlinespace[3pt]
+$\pi_{10}$ & $0$ & $0$ & $\Z_{15}$ & $\Z_{15}$ & \footnotesize{$\Z_{24} {\times} \Z_{3}$} & $\Z_{2}$ & \cF & \cE & \cD \\ \addlinespace[3pt]
+$\pi_{11}$ & $0$ & $0$ & $\Z_{2}$ & $\Z_{2}$ & $\Z_{15}$ & $\Z_{2}$ & $\Z$ & \cF & \cE \\ \addlinespace[3pt]
+$\pi_{12}$ & $0$ & $0$ & $\Z_{2}^{2}$ & $\Z_{2}^{2}$ & $\Z_{2}$ & $\Z_{30}$ & $\Z_{2}$ & \cG & \cF \\ \addlinespace[3pt]
+$\pi_{13}$ & $0$ & $0$ & {\footnotesize $\Z_{12} {\times} \Z_{2}$} & {\footnotesize $\Z_{12} {\times} \Z_{2}$} & $\Z_{2}^{3}$ & $\Z_{2}$ & $\Z_{60}$ & $\Z_{2}$ & \cG \\ \addlinespace[3pt]
+%$\pi_{14}$ & $0$ & $0$ & $\Z_{84} {\times} \Z_{2}^{2}$ & $\Z_{84} {\times} \Z_{2}^{2}$ & {\footnotesize $\Z_{120} {\times} \Z_{12} {\times} \Z_{2}$} & $\Z_{2}^{3}$ & $\Z_{24} {\times} \Z_{2}$ & $\Z_{120}$ & $\Z_{2}$ \\ \addlinespace[3pt]
+%$\pi_{15}$ & $0$ & $0$ & $\Z_{2}^{2}$ & $\Z_{2}^{2}$ & $\Z_{84} {\times} \Z_{2}^{5}$ & $\Z_{72} {\times} \Z_{2}$ & $\Z_{2}^{3}$ & $\Z_{2}^{3}$ & $\Z {\times} \Z_{120}$ \\ \addlinespace[3pt]
+\bottomrule
+\end{tabular}
+
+\caption{Homotopy groups of spheres~\cite{wikipedia-groups}.\index{homotopy!group!of sphere}
+The $k^{\textrm{th}}$ homotopy group $\pi_k$ of the $n$-dimensional sphere
+$\Sn^n$ is isomorphic to the group listed in each entry, where $\Z$ is
+the additive group of integers,\index{integers} and $\Z_{m}$ is the cyclic group\index{group!cyclic}\index{cyclic group} of order~$m$.
+}
+\label{tab:homotopy-groups-of-spheres}
+\end{table}
+\egroup
+
+
+One way of understanding this complexity is through the correspondence
+between spaces and $\infty$-groupoids
+\index{.infinity-groupoid@$\infty$-groupoid}%
+introduced in \cref{cha:basics}.
+As discussed in \cref{sec:circle}, the 2-sphere is presented by a higher
+inductive type with one point and one 2-dimensional loop. Thus, one
+might wonder why $\pi_3(\Sn ^2)$ is $\Z$, when the type $\Sn ^2$ has no
+generators creating 3-dimensional cells. It turns out that the
+generating element of $\pi_3(\Sn ^2)$ is constructed using the
+interchange law described in the proof of \cref{thm:EckmannHilton}: the
+algebraic structure of an $\infty$-groupoid includes non-trivial
+interactions between levels, and these interactions create elements of
+higher homotopy groups.
+
+\index{classical!homotopy theory|)}%
+
+Type theory provides a natural setting for investigating this structure, as
+we can easily define the higher homotopy groups. Recall from \cref{def:loopspace} that for $n:\N$, the
+$n$-fold iterated loop space\index{loop space!iterated} of a pointed type $(A,a)$ is defined
+recursively by:
+\begin{align*}
+ \Omega^0(A,a)&=(A,a)\\
+ \Omega^{n+1}(A,a)&=\Omega^n(\Omega(A,a)).
+\end{align*}
+%
+This gives a \emph{space} (i.e.\ a type) of $n$-dimensional loops\index{loop!n-@$n$-}, which itself has
+higher homotopies. We obtain the set of $n$-dimensional loops by
+truncation (this was also defined as an example in
+\cref{sec:free-algebras}):
+
+\begin{defn}[Homotopy Groups]\label{def-of-homotopy-groups}
+ Given $n\ge 1$ and $(A,a)$ a pointed type, we define the \define{homotopy groups} of $A$
+ \indexdef{homotopy!group}%
+ at $a$ by
+ \[\pi_n(A,a)\defeq \Trunc0{\Omega^n(A,a)}.\]
+\end{defn}
+
+\noindent
+Since $n\ge 1$, the path concatenation and inversion operations on
+$\Omega^n(A)$ induce operations on $\pi_n(A)$ making it into a group in
+a straightforward way. If $n\ge 2$, then the group $\pi_n(A)$ is
+abelian\index{group!abelian}, by the Eckmann--Hilton argument \index{Eckmann--Hilton argument} (\cref{thm:EckmannHilton}).
+It is convenient to also write $\pi_0(A) \defeq \trunc0 A$,
+but this case behaves somewhat differently: not only is it not a group,
+it is defined without reference to any basepoint in $A$.
+
+\index{presentation!of an infinity-groupoid@of an $\infty$-groupoid}%
+This definition is a suitable one for investigating homotopy groups
+because the (higher) inductive definition of a type $X$ presents $X$ as
+a free type, analogous to a free $\infty$-groupoid,
+\index{.infinity-groupoid@$\infty$-groupoid}%
+and this
+presentation \emph{determines} but does not \emph{explicitly describe}
+the higher identity types of $X$. The identity types are populated by
+both the generators ($\lloop$, for the circle) and the results of applying to them all of the groupoid
+operations (identity, composition, inverses, associativity, interchange,
+\ldots). Thus, the higher-inductive presentation of a space allows us
+to pose the question ``what does the identity type of $X$ really turn out
+to be?'' though it can take some significant mathematics to answer it.
+This is a higher-dimensional generalization of a familiar fact in type
+theory: characterizing the identity type of $X$ can take some work,
+even if $X$ is an ordinary inductive type, such as the natural numbers
+or booleans. For example, the theorem that $\bfalse$ is different
+from $\btrue$ does not follow immediately from the definition;
+see \cref{sec:compute-coprod}.
+
+\index{univalence axiom}%
+The univalence axiom plays an essential role in calculating homotopy
+groups (without univalence, type theory is compatible with an
+interpretation where all paths, including, for example, the loop on the
+circle, are reflexivity). We will see this in the calculation of the
+fundamental group of the circle below: the map from $\Omega(\Sn^1)$ to $\Z$ is defined by mapping a loop on the circle to an
+automorphism\index{automorphism!of Z, successor@of $\Z$, successor} of the set $\Z$, so that, for example, $\lloop \ct \opp
+\lloop$ is sent to $\mathsf{successor} \ct \mathsf{predecessor}$ (where
+$\mathsf{successor}$ and $\mathsf{predecessor}$ are automorphisms of
+$\Z$ viewed, by univalence, as paths in the universe), and then applying
+the automorphism to 0. Univalence produces non-trivial paths in the
+universe, and this is used to extract information from paths in higher
+inductive types.
+
+In this chapter, we first calculate some homotopy groups of spheres,
+including $\pi_k(\Sn^1)$ (\cref{sec:pi1-s1-intro}) and $\pi_k(\Sn^n)$ for $k<n$.
+In fact, $\pi_k(\Sn^1)$ is trivial for $k>1$, so we will actually have calculated \emph{all} the homotopy groups of $\Sn^1$.
+
+\subsection{Getting started}
+\label{sec:pi1s1-initial-thoughts}
+
+It is not too hard to define functions in both directions between $\Omega(\Sn^1)$ and \Z.
+By specializing \cref{thm:looptothe} to $\lloop:\base=\base$, we have a function $\lloop^{\blank} : \Z \rightarrow (\id{\base}{\base})$ defined (loosely speaking) by
+\[
+ \lloop^n =
+ \begin{cases}
+ \underbrace{\lloop \ct \lloop \ct \cdots \ct \lloop}_{n} & \text{if $n > 0$,} \\
+ \underbrace{\opp \lloop \ct \opp \lloop \ct \cdots \ct \opp \lloop}_{-n} & \text{if $n < 0$,} \\
+ \refl{\base} & \text{if $n = 0$.}
+\end{cases}
+\]
+%
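+One way to make this precise is by recursion (cf.\ the induction principle \cref{thm:sign-induction}):
+\begin{align*}
+ \lloop^{0} &\defeq \refl{\base},\\
+ \lloop^{n+1} &\defeq \lloop^{n} \ct \lloop &&\text{for $n\ge 0$, and}\\
+ \lloop^{n-1} &\defeq \lloop^{n} \ct \opp\lloop &&\text{for $n\le 0$.}
+\end{align*}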
+Defining a function $g:\Omega(\Sn^1)\to\Z$ in the other direction is a bit trickier.
+Note that the successor function $\Zsuc:\Z\to\Z$ is an equivalence,
+\index{successor!isomorphism on Z@isomorphism on $\Z$}%
+and hence induces a path $\ua(\Zsuc):\Z=\Z$ in the universe \type.
+Thus, the recursion principle of $\Sn^1$ induces a map $c:\Sn^1\to\type$ by $c(\base)\defeq \Z$ and $\apfunc c (\lloop) \defid \ua(\Zsuc)$.
+Then we have $\apfunc{c} : (\base=\base) \to (\Z=\Z)$, and we can define $g(p)\defeq \transfib{X\mapsto X}{\apfunc{c}(p)}{0}$.
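+For example, on the generator we compute
+\begin{align*}
+ g(\lloop) &\jdeq \transfib{X\mapsto X}{\apfunc{c}(\lloop)}{0}\\
+ &= \transfib{X\mapsto X}{\ua(\Zsuc)}{0}\\
+ &= \Zsuc(0) = 1,
+\end{align*}
+using the (propositional) computation rule for $\ua$.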
+
+With these definitions, we can even prove that $g(\lloop^n)=n$ for any $n:\Z$, using the induction principle \cref{thm:sign-induction} for $n$.
+(We will prove something more general a little later on.)
+However, the other equality $\lloop^{g(p)}=p$ is significantly harder.
+The obvious thing to try is path induction, but path induction does not apply to loops such as $p:(\base=\base)$ that have \emph{both} endpoints fixed!
+A new idea is required, one which can be explained both in terms of classical homotopy theory and in terms of type theory.
+We begin with the former.
+
+
+\subsection{The classical proof}
+\label{sec:pi1s1-classical-proof}
+
+\index{classical!homotopy theory|(}%
+In classical homotopy theory, there is a standard proof of $\pi_1(\Sn^1)=\Z$ using universal covering spaces.
+Our proof can be regarded as a type-theoretic version of this proof, with covering spaces appearing here as fibrations whose fibers are sets.
+\index{fibration}%
+\index{total!space}%
+Recall that \emph{fibrations} over a space $B$ in homotopy theory correspond to type families $B\to\type$ in type theory.
+\index{path!fibration}%
+\index{fibration!of paths}%
+In particular, for a point $x_0:B$, the type family $(x\mapsto (x_0=x))$ corresponds to the \emph{path fibration} $P_{x_0} B \to B$, in which the points of $P_{x_0} B$ are paths in $B$ starting at $x_0$, and the map to $B$ selects the other endpoint of such a path.
+This total space $P_{x_0} B$ is contractible, since we can ``retract'' any path to its initial endpoint $x_0$ --- we have seen the type-theoretic version of this as \cref{thm:contr-paths}.
+Moreover, the fiber over $x_0$ is the loop space\index{loop space} $\Omega(B,x_0)$ --- in type theory this is obvious by definition of the loop space.
+
+\begin{figure}\centering
+ \begin{tikzpicture}[xscale=1.4,yscale=.6]
+ \node (R) at (2,1) {$\mathbb{R}$};
+ \node (S1) at (2,-2) {$S^1$};
+ \draw[->] (R) -- node[auto] {$w$} (S1);
+ \draw (0,-2) ellipse (1 and .4);
+ \draw[dotted] (1,0) arc (0:-30:1 and .8);
+ \draw (1,0) arc (0:90:1 and .8) arc (90:270:1 and .3) coordinate (t1);
+ \draw[white,line width=4pt] (t1) arc (-90:90:1 and .8);
+ \draw (t1) arc (-90:90:1 and .8) arc (90:270:1 and .3) coordinate (t2);
+ \draw[white,line width=4pt] (t2) arc (-90:90:1 and .8);
+ \draw (t2) arc (-90:90:1 and .8) arc (90:270:1 and .3) coordinate (t3);
+ \draw[white,line width=4pt] (t3) arc (-90:90:1 and .8);
+ \draw (t3) arc (-90:-30:1 and .8) coordinate (t4);
+ \draw[dotted] (t4) arc (-30:0:1 and .8);
+ \node[fill,circle,inner sep=1pt,label={below:\scriptsize \base}] at (0,-2.4) {};
+ \node[fill,circle,inner sep=1pt,label={above left:\scriptsize 0}] at (0,.2) {};
+ \node[fill,circle,inner sep=1pt,label={above left:\scriptsize 1}] at (0,1.2) {};
+ \node[fill,circle,inner sep=1pt,label={above left:\scriptsize 2}] at (0,2.2) {};
+ \end{tikzpicture}
+ \caption{The winding map in classical topology}\label{fig:winding}
+\end{figure}
+
+Now in classical homotopy theory, where $\Sn^1$ is regarded as a topological space, we may proceed as follows.
+\index{winding!map}%
+Consider the ``winding'' map $w:\mathbb{R}\to \Sn ^1$, which looks like a helix projecting down onto the circle (see \cref{fig:winding}).
+This map $w$ sends each point on the helix\index{helix} to the point on the circle that it is ``sitting above''.
+It is a fibration, and the fiber over each point is isomorphic to the integers.\index{integers}
+If we lift the path that goes counterclockwise around the loop on the bottom, we go up one level in the helix, incrementing the integer in the fiber.
+Similarly, going clockwise around the loop on the bottom corresponds to going down one level in the helix, decrementing this count.
+This fibration is called the \emph{universal cover} of the circle.
+\index{universal!cover}%
+\index{cover!universal}%
+\index{covering space!universal}%
+
+Now a basic fact in classical homotopy theory is that a map $E_1\to E_2$ of fibrations over $B$ which is a homotopy equivalence between $E_1$ and $E_2$ induces a homotopy equivalence on all fibers.
+(We have already seen the type-theoretic version of this as well in \cref{thm:total-fiber-equiv}.)
+Since $\mathbb{R}$ and $P_{\base} S^1$ are both contractible topological spaces, they are homotopy equivalent, and thus their fibers $\Z$ and $\Omega(\Sn ^1)$ over the basepoint are also homotopy equivalent.
+
+\index{classical!homotopy theory|)}%
+
+\subsection{The universal cover in type theory}
+\label{sec:pi1s1-universal-cover}
+
+\index{universal!cover|(}%
+\index{cover!universal|(}%
+\index{covering space!universal|(}%
+
+Let us consider how we might express the preceding proof in type theory.
+We have already remarked that the path fibration of $\Sn^1$ is represented by the type family $(x\mapsto (\base=x))$.
+We have also already seen a good candidate for the universal cover of $\Sn^1$: it's none other than the type family $c:\Sn^1\to\type$ which we defined in \cref{sec:pi1s1-initial-thoughts}!
+By definition, the fiber of this family over $\base$ is $\Z$, while the effect of transporting around $\lloop$ is to add one --- thus it behaves just as we would expect from \cref{fig:winding}.
+
+However, since we don't know yet that this family behaves like a universal cover is supposed to (for instance, that its total space is simply connected), we use a different name for it.
+For reference, therefore, we repeat the definition.
+\index{integers}
+
+\begin{defn}[Universal Cover of $\Sn^1$] \label{S1-universal-cover}
+ Define $\code : \Sn ^1 \to \type$ by circle-recursion, with
+ \begin{align*}
+ \code(\base) &\defeq \Z \\
+ \apfunc{\code}({\lloop}) &\defid \ua(\Zsuc).
+ \end{align*}
+\end{defn}
+
+We emphasize briefly the definition of this family, since it is so different from how one usually defines covering spaces in classical homotopy theory.
+To define a function by circle recursion, we need to find a point and a
+loop in the codomain. In this case, the codomain is $\type$, and the point
+we choose is $\Z$, corresponding to our expectation that the
+fiber of the universal cover should be the integers. The loop we choose
+is the successor/predecessor
+\index{successor!isomorphism on Z@isomorphism on $\Z$}%
+\index{predecessor!isomorphism on Z@isomorphism on $\Z$}%
+isomorphism on $\Z$, which
+corresponds to the fact that going around the loop in the base goes up
+one level on the helix. Univalence is necessary for this part of the
+proof, because we need to convert a \emph{non-trivial} equivalence on $\Z$ into an identity.
+
+We call this the fibration of ``codes'', because its elements are combinatorial data that act as codes for paths on the circle: the integer $n$ codes for the path which loops around the circle $n$ times.
+
+From this definition, it is simple to calculate that transporting with
+$\code$ takes $\lloop$ to the successor function, and
+$\opp{\lloop}$ to the predecessor function:
+\begin{lem} \label{lem:transport-s1-code}
+\id{\transfib \code \lloop x} {x + 1} and
+\id{\transfib \code {\opp \lloop} x} {x - 1}.
+\end{lem}
+\begin{proof}
+For the first equation, we calculate as follows:
+\begin{align}
+{\transfib \code \lloop x}
+&= \transfib {A \mapsto A} {(\ap{\code}{\lloop})} x \tag{by \cref{thm:transport-compose}}\\
+&= \transfib {A \mapsto A} {\ua (\Zsuc)} x \tag{by computation for $\rec{\Sn^1}$}\\
+&= x + 1 \tag{by computation for \ua}.
+\end{align}
+The second equation follows from the first, because $\transfib{B}{p}{\blank}$ and $\transfib{B}{\opp p}{\blank}$ are always inverses, so $\transfib\code {\opp \lloop}{\blank}$ must be the inverse of $\Zsuc$.
+\end{proof}
+
+We can now see what was wrong with our first approach: we defined $f$ and $g$ only on the fibers $\Omega(\Sn^1)$ and \Z, when we should have defined a whole morphism \emph{of fibrations} over $\Sn^1$.
+In type theory, this means we should have defined functions having types
+\begin{align}
+ \prd{x:\Sn^1} &((\base=x) \to \code(x)) \qquad\text{and/or} \label{eq:pi1s1-encode}\\
+ \prd{x:\Sn^1} &(\code(x) \to (\base=x))\label{eq:pi1s1-decode}
+\end{align}
+instead of only the special cases of these when $x$ is \base.
+This is also an instance of a common observation in type theory: when attempting to prove something about particular inhabitants of some inductive type, it is often easier to generalize the statement so that it refers to \emph{all} inhabitants of that type, which we can then prove by induction.
+Looked at in this way, the proof of $\Omega(\Sn^1)=\Z$ fits into the same pattern as the characterization of the identity types of coproducts and natural numbers in \cref{sec:compute-coprod,sec:compute-nat}.
+
+At this point, there are two ways to finish the proof.
+We can continue mimicking the classical argument by constructing~\eqref{eq:pi1s1-encode} or~\eqref{eq:pi1s1-decode} (it doesn't matter which), proving that a homotopy equivalence between total spaces induces an equivalence on fibers, and then that the total space of the universal cover is contractible.
+The first type-theoretic proof of $\Omega(\Sn^1)=\Z$ followed this pattern; we call it the \emph{homotopy-theoretic} proof.
+
+Later, however, we discovered that there is an alternative proof, which has a more type-theoretic feel and more closely follows the proofs in \cref{sec:compute-coprod,sec:compute-nat}.
+In this proof, we directly construct both~\eqref{eq:pi1s1-encode} and~\eqref{eq:pi1s1-decode}, and prove that they are mutually inverse by calculation.
+We will call this the \emph{encode-decode} proof, because we call the functions~\eqref{eq:pi1s1-encode} and~\eqref{eq:pi1s1-decode} \emph{encode} and \emph{decode} respectively.
+Both proofs use the same construction of the cover given above.
+Where the classical proof induces an equivalence on fibers from an equivalence between total spaces, the encode-decode proof constructs the inverse map (\emph{decode}) explicitly as a map between fibers.
+And where the classical proof uses contractibility, the encode-decode proof uses path induction, circle induction, and integer induction.
+These are the same tools used to prove contractibility---indeed, path induction \emph{is} essentially contractibility of the path fibration composed with $\mathsf{transport}$---but they are applied in a different way.
+
+Since this is a book about homotopy type theory, we present the encode-decode proof first.
+A homotopy theorist who gets lost is encouraged to skip to the homotopy-theoretic proof (\cref{subsec:pi1s1-homotopy-theory}).
+
+\index{universal!cover|)}%
+\index{cover!universal|)}%
+\index{covering space!universal|)}%
+
+\subsection{The encode-decode proof}
+\label{subsec:pi1s1-encode-decode}
+
+\index{encode-decode method|(}%
+\indexsee{encode}{encode-decode method}%
+\indexsee{decode}{encode-decode method}%
+We begin with the function~\eqref{eq:pi1s1-encode} that maps paths to codes:
+\begin{defn}
+Define $\encode : \prd{x : \Sn ^1} (\base=x) \rightarrow \code(x)$ by
+\[
+\encode \: p \defeq \transfib{\code} p 0
+\]
+(we leave the argument $x$ implicit).
+\end{defn}
+Encode is defined by lifting a path into the universal cover, which
+determines an equivalence, and then applying the resulting equivalence
+to $0$.
+The interesting thing about this function is that it computes a concrete
+number from a loop on the circle, when this loop is represented using
+the abstract groupoidal framework of homotopy type theory. To gain an
+intuition for how it does this, observe that by \cref{lem:transport-s1-code},
+$\transfib \code \lloop \blank$ is the successor map and $\transfib \code {\opp
+ \lloop} \blank$ is the predecessor map.
+Further, $\mathsf{transport}$ is functorial (\cref{cha:basics}), so
+$\transfib{\code} {\lloop \ct \lloop}{\blank}$ is
+\[(\transfib \code \lloop \blank) \circ (\transfib \code \lloop \blank)\]
+and so on.
+Thus, when $p$ is a composition like
+\[
+\lloop \ct \opp \lloop \ct \lloop \ct \cdots
+\]
+$\transfib{\code}{p}{\blank}$ will compute a composition of functions like
+\[
+\Zsuc \circ \Zpred \circ \Zsuc \circ \cdots
+\]
+Applying this composition of functions to 0 will compute the
+\index{winding!number}%
+\emph{winding number} of the path---how many times it goes around the
+circle, with orientation marked by whether it is positive or negative,
+after inverses have been canceled. Thus, the computational behavior of
+$\encode$ follows from the reduction rules for higher-inductive types and
+univalence, and the action of $\mathsf{transport}$ on compositions and inverses.
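+The computational behavior just described can be mimicked in a small toy
+model. The sketch below is illustrative only, with invented names
+(\texttt{transport\_code}, \texttt{encode}): a loop is represented as a word
+in the generator $\lloop$, i.e.\ a list of steps $+1$ (for $\lloop$) and $-1$
+(for $\opp\lloop$).

```python
# Toy model (not the type-theoretic definition): a loop on the circle is
# a list of steps, +1 for loop and -1 for its inverse.  Transport in
# `code` sends loop to the successor function and its inverse to the
# predecessor function; encode applies the composite of these to 0.

def transport_code(step, x):
    """Transport in `code` along a single step of the path."""
    return x + 1 if step == 1 else x - 1

def encode(path):
    """Compose the transports along `path` and apply the result to 0,
    computing the winding number of the path."""
    x = 0
    for step in path:
        x = transport_code(step, x)
    return x

# loop . loop⁻¹ . loop . loop winds around the circle twice:
assert encode([+1, -1, +1, +1]) == 2
```

+In this model the cancellation of inverses is just arithmetic, which is why
+the winding number is insensitive to how the word was written.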
+
+Note that the instance $\encode' \defeq \encode_{\base}$ has type
+$(\id \base \base) \rightarrow \Z$.
+This will be one half of our desired equivalence; indeed, it is exactly the function $g$ defined in \cref{sec:pi1s1-initial-thoughts}.
+
+Similarly, the function~\eqref{eq:pi1s1-decode} is a generalization of the function $\lloop^{\blank}$ from \cref{sec:pi1s1-initial-thoughts}.
+
+\begin{defn}\label{thm:pi1s1-decode}
+Define $\decode : \prd{x : \Sn ^1}\code(x) \rightarrow (\base=x)$ by
+circle induction on $x$. It suffices to give a function
+${\code(\base) \rightarrow (\base=\base)}$, for which we use $\lloop^{\blank}$, and
+to show that $\lloop^{\blank}$ respects the loop.
+\end{defn}
+
+\begin{proof}
+To show that $\lloop^{\blank}$ respects the loop, it suffices to give a path
+from $\lloop^{\blank}$ to itself that lies over $\lloop$.
+By the definition of dependent paths, this means a path from
+\[\transfib {(x' \mapsto \code(x') \rightarrow (\base=x'))} {\lloop} {\lloop^{\blank}}\]
+to $\lloop^{\blank}$. We define such a
+path as follows:
+\begin{align*}
+ \MoveEqLeft \transfib {(x' \mapsto \code(x') \rightarrow (\base=x'))} \lloop {\lloop^{\blank}}\\
+&= \transfibf {x'\mapsto (\base=x')}(\lloop) \circ {\lloop^{\blank}} \circ \transfibf \code ({\opp \lloop}) \\
+&= (- \ct \lloop) \circ (\lloop^{\blank}) \circ \transfibf \code ({\opp \lloop}) \\
+&= (- \ct \lloop) \circ (\lloop^{\blank}) \circ \Zpred \\
+&= (n \mapsto \lloop^{n - 1} \ct \lloop).
+\end{align*}
+On the first line, we apply the characterization of $\mathsf{transport}$
+when the outer connective of the fibration is $\rightarrow$, which
+reduces the $\mathsf{transport}$ to pre- and post-composition with
+$\mathsf{transport}$ at the domain and codomain types. On the second line,
+we apply the characterization of $\mathsf{transport}$ when the type family
+is $x\mapsto \id{\base}{x}$, which is post-composition of paths. On the third line,
+we use the action of $\code$ on $\opp \lloop$ from
+\cref{lem:transport-s1-code}. And on the fourth line, we simply
+reduce the function composition. Thus, it suffices to show that for all
+$n$, $\id{\lloop^{n - 1} \ct \lloop}{\lloop^{n}}$. This is an easy application of \cref{thm:sign-induction}, using the groupoid laws.
+\end{proof}
+
+We can now show that $\encode$ and $\decode$ are quasi-inverses.
+What used to be the difficult direction is now easy!
+
+\begin{lem} \label{lem:s1-decode-encode}
+For all $x: \Sn ^1$ and $p : \id \base x$, $\id
+{\decode_x({{\encode_x(p)}})} p$.
+\end{lem}
+
+\begin{proof}
+By path induction, it suffices to show that
+\narrowequation{\id {\decode_{\base}({{\encode_{\base}(\refl{\base})}})} {\refl{\base}}.}
+But
+\narrowequation{\encode_{\base}(\refl{\base}) \jdeq \transfib{\code}{\refl{\base}} 0 \jdeq 0,}
+and $\decode_{\base}(0) \jdeq \lloop^ 0 \jdeq \refl{\base}$.
+\end{proof}
+
+The other direction is not much harder.
+
+\begin{lem} \label{lem:s1-encode-decode} For all
+$x: \Sn ^1$ and $c : \code(x)$, we have $\id
+{\encode_x({{\decode_x(c)}})} c$.
+\end{lem}
+
+\begin{proof}
+The proof is by circle induction. It suffices to show the case for
+\base, because the case for \lloop is a path between paths in
+$\Z$, which is immediate because $\Z$ is a set.
+
+Thus, it suffices to show, for all $n : \Z$, that
+\[
+\id {\encode'(\lloop^n)} {n}.
+\]
+The proof is by induction, using \cref{thm:sign-induction}.
+%
+\begin{itemize}
+
+\item In the case for $0$, the result is true by definition.
+
+\item In the case for $n+1$,
+\begin{align}
+ {\encode'(\lloop^{n+1})}
+&= {\encode'(\lloop^{n} \ct \lloop)} \tag{by definition of $\lloop^{\blank}$} \\
+&= \transfib{\code}{(\lloop^{n} \ct \lloop)}{0} \tag{by definition of $\encode$}\\
+&= \transfib{\code}{\lloop}{(\transfib{\code}{\lloop^n}{0})} \tag{by functoriality}\\
+&= {(\transfib{\code}{\lloop^n}{0})} + 1 \tag{by \cref{lem:transport-s1-code}}\\
+&= n + 1. \tag{by the inductive hypothesis}
+\end{align}
+
+\item The case for negatives is analogous. \qedhere
+\end{itemize}
+\end{proof}
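+Both round trips can be checked numerically in a toy model (a sketch only,
+with invented names: a loop is a list of $+1$/$-1$ steps for $\lloop$ and
+$\opp\lloop$, and the winding number plays the role of $\encode$).  Note that
+$\decode\circ\encode$ returns the \emph{reduced} word, mirroring the fact
+that $\lloop^{g(p)}=p$ is a path equality rather than a syntactic one.

```python
def encode(path):
    """Winding number: net count of +1 (loop) over -1 (loop⁻¹) steps."""
    return sum(path)

def decode(n):
    """loop^n as a word: n copies of loop if n >= 0, else -n inverses."""
    return [+1] * n if n >= 0 else [-1] * (-n)

# encode ∘ decode is the identity on the integers:
assert all(encode(decode(n)) == n for n in range(-5, 6))

# decode ∘ encode reduces a word to a canonical loop power equal to it:
assert decode(encode([+1, -1, +1, +1])) == [+1, +1]
```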
+
+Finally, we conclude the theorem.
+
+\begin{thm}
+There is a family of equivalences $\prd{x : \Sn ^1} (\eqv {(\base=x)} {\code(x)})$.
+\end{thm}
+\begin{proof}
+The maps $\encode$ and $\decode$ are quasi-inverses by
+\cref{lem:s1-decode-encode,lem:s1-encode-decode}.
+\end{proof}
+
+Instantiating at {\base} gives
+\begin{cor}\label{cor:omega-s1}
+$\eqv {\Omega(\Sn^1,\base)} {\Z}$.
+\end{cor}
+
+A simple induction shows that this equivalence takes addition to
+composition, so that $\Omega(\Sn ^1) = \Z$ as groups.
+
+\begin{cor} \label{cor:pi1s1}
+$\id{\pi_1(\Sn ^1)} {\Z}$, while $\id{\pi_n(\Sn ^1)}0$ for $n>1$.
+\end{cor}
+\begin{proof}
+For $n=1$, we sketched the proof from \cref{cor:omega-s1} above.
+For $n > 1$, we have $\trunc 0 {\Omega^{n}(\Sn ^1)} = \trunc 0 {\Omega^{n-1}(\Omega{\Sn ^1})} = \trunc 0 {\Omega^{n-1}(\Z)}$.
+And since $\Z$ is a set, $\Omega^{n-1}(\Z)$ is contractible, so this is trivial.
+\end{proof}
+
+\index{encode-decode method|)}%
+
+
+\subsection{The homotopy-theoretic proof}
+\label{subsec:pi1s1-homotopy-theory}
+
+In \cref{sec:pi1s1-universal-cover}, we defined the putative universal cover $\code:\Sn^1\to\type$ in type theory, and in \cref{subsec:pi1s1-encode-decode} we defined a map $\encode : \prd{x:\Sn^1} (\base=x) \to \code(x)$ from the path fibration to the universal cover.
+What remains for the classical proof is to show that this map induces an equivalence on total spaces because both are contractible, and to deduce from this that it must be an equivalence on each fiber.
+
+\index{total!space}%
+In \cref{thm:contr-paths} we saw that the total space $\sm{x:\Sn^1} (\base=x)$ is contractible.
+For the other, we have:
+
+\begin{lem}\label{thm:iscontr-s1cover}
+ The type $\sm{x:\Sn^1}\code(x)$ is contractible.
+\end{lem}
+\begin{proof}
+ We apply the flattening lemma (\cref{thm:flattening}) with the following values:
+ \begin{itemize}
+ \item $A\defeq\unit$ and $B\defeq\unit$, with $f$ and $g$ the obvious functions.
+ Thus, the base higher inductive type $W$ in the flattening lemma is equivalent to $\Sn^1$.
+ \item $C:A\to\type$ is constant at \Z.
+ \item $D:\prd{b:B} (\eqv{\Z}{\Z})$ is constant at $\Zsuc$.
+ \end{itemize}
+ Then the type family $P:\Sn^1\to\type$ defined in the flattening lemma is equivalent to $\code:\Sn^1\to\type$.
+ Thus, the flattening lemma tells us that $\sm{x:\Sn^1}\code(x)$ is equivalent to a higher inductive type with the following generators, which we denote $R$:
+ \begin{itemize}
+ \item A function $\mathsf{c}: \Z \to R$.
+ \item For each $z:\Z$, a path $\mathsf{p}_z:\mathsf{c}(z) = \mathsf{c}(\Zsuc(z))$.
+ \end{itemize}
+ We might call this type the \define{homotopical reals};
+ \indexdef{real numbers!homotopical}%
+ it plays the same role as the topological space $\mathbb{R}$ in the classical proof.
+
+ Thus, it remains to show that $R$ is contractible.
+ As center of contraction we choose $\mathsf{c}(0)$; we must now show that $x=\mathsf{c}(0)$ for all $x:R$.
+ We do this by induction on $R$.
+ Firstly, when $x$ is $\mathsf{c}(z)$, we must give a path $q_z:\mathsf{c}(0) = \mathsf{c}(z)$, which we can do by induction on $z:\Z$, using \cref{thm:sign-induction}:
+ \begin{align*}
+ q_0 &\defid \refl{\mathsf{c}(0)}\\
+ q_{n+1} &\defid q_n \ct \mathsf{p}_n & &\text{for $n\ge 0$}\\
+ q_{n-1} &\defid q_n \ct \opp{\mathsf{p}_{n-1}} & &\text{for $n\le 0$.}
+ \end{align*}
+ Secondly, we must show that for any $z:\Z$, the path $q_z$ is transported along $\mathsf{p}_z$ to $q_{z+1}$.
+ By transport of paths, this means we want $q_z \ct \mathsf{p}_z = q_{z+1}$.
+ This is easy by induction on $z$, using the definition of $q_z$.
+ This completes the proof that $R$ is contractible, and thus so is $\sm{x:\Sn^1}\code(x)$.
+\end{proof}
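+The contraction paths $q_z$ can likewise be sketched in a toy model, where a
+path in $R$ is a formal composite of the generators $\mathsf{p}_k$ (recorded
+as \texttt{('p', k)}) and their inverses (recorded as \texttt{('p⁻¹', k)});
+the function name \texttt{q} and this encoding are our own.

```python
def q(z):
    """Toy contraction path q_z : c(0) = c(z) in the homotopical reals,
    as a formal composite of generating paths p_k and their inverses,
    following the integer induction in the proof above."""
    if z == 0:
        return []                          # q_0 := refl
    if z > 0:
        return q(z - 1) + [("p", z - 1)]   # q_{n+1} := q_n . p_n
    return q(z + 1) + [("p⁻¹", z)]         # q_{n-1} := q_n . p_{n-1}⁻¹

assert q(2) == [("p", 0), ("p", 1)]
assert q(-2) == [("p⁻¹", -1), ("p⁻¹", -2)]
```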
+
+\begin{cor}\label{thm:encode-total-equiv}
+ The map induced by \encode:
+ \[ \tsm{x:\Sn^1} (\base=x) \to \tsm{x:\Sn^1}\code(x) \]
+ is an equivalence.
+\end{cor}
+\begin{proof}
+ Both types are contractible.
+\end{proof}
+
+\begin{thm}
+ $\eqv {\Omega(\Sn^1,\base)} {\Z}$.
+\end{thm}
+\begin{proof}
+ Apply \cref{thm:total-fiber-equiv} to $\encode$, using \cref{thm:encode-total-equiv}.
+\end{proof}
+
+In essence, the two proofs are not very different: the encode-decode one may be seen as a ``reduction'' or ``unpackaging'' of the homotopy-theoretic one.
+Each has its advantages; the interplay between the two points of view is part of the interest of the subject.
+\index{fundamental!group!of circle|)}
+
+
+\subsection{The universal cover as an identity system}
+\label{sec:pi1s1-idsys}
+
+Note that the fibration $\code:\Sn^1\to\type$ together with $0:\code(\base)$ is a \emph{pointed predicate} in the sense of \cref{defn:identity-systems}.
+From this point of view, we can see that the encode-decode proof in \cref{subsec:pi1s1-encode-decode} consists of proving that \code satisfies \cref{thm:identity-systems}\ref{item:identity-systems3}, while the homotopy-theoretic proof in \cref{subsec:pi1s1-homotopy-theory} consists of proving that it satisfies \cref{thm:identity-systems}\ref{item:identity-systems4}.
+This suggests a third approach.
+
+\begin{thm}
+ The pair $(\code,0)$ is an identity system at $\base:\Sn^1$ in the sense of \cref{defn:identity-systems}.
+\end{thm}
+\begin{proof}
+ Let $D:\prd{x:\Sn^1} \code(x) \to \type$ and $d:D(\base,0)$ be given; we want to define a function $f:\prd{x:\Sn^1}{c:\code(x)} D(x,c)$.
+ By circle induction, it suffices to specify $f(\base):\prd{c:\code(\base)} D(\base,c)$ and verify that $\trans{\lloop}{f(\base)} = f(\base)$.
+
+ Of course, $\code(\base)\jdeq \Z$.
+ By \cref{lem:transport-s1-code} and induction on $n$, we may obtain a path $p_n : \transfib{\code}{\lloop^n}{0} = n$ for any integer $n$.
+ Therefore, by paths in $\Sigma$-types, we have a path $\pairpath(\lloop^n,p_n) : (\base,0) = (\base,n)$ in $\sm{x:\Sn^1} \code(x)$.
+ Transporting $d$ along this path in the fibration $\widehat{D}:(\sm{x:\Sn^1} \code(x)) \to\type$ associated to $D$, we obtain an element of $D(\base,n)$ for any $n:\Z$.
+ We define this element to be $f(\base)(n)$:
+ \[ f(\base)(n) \defeq \transfib{\widehat{D}}{\pairpath(\lloop^n,p_n)}{d}. \]
+ %
+ Now we need $\transfib{\lam{x} \prd{c:\code(x)} D(x,c)}{\lloop}{f(\base)} = f(\base)$.
+ By \cref{thm:dpath-forall}, this means we need to show that for any $n:\Z$,
+ %
+ \begin{narrowmultline*}
+ \transfib{\widehat D}{\pairpath(\lloop,\refl{\trans\lloop n})}{f(\base)(n)}
+ =_{D(\base,\trans\lloop n)} \narrowbreak
+ f(\base)(\trans\lloop n).
+ \end{narrowmultline*}
+ %
+ Now we have a path $q:\trans\lloop n = n+1$, so transporting along this, it suffices to show
+ \begin{multline*}
+ \transfib{D(\base)}{q}{\transfib{\widehat D}{\pairpath(\lloop,\refl{\trans\lloop n})}{f(\base)(n)}}\\
+ =_{D(\base,n+1)} \transfib{D(\base)}{q}{f(\base)(\trans\lloop n)}.
+ \end{multline*}
+ By a couple of lemmas about transport and dependent application, this is equivalent to
+ \[ \transfib{\widehat D}{\pairpath(\lloop,q)}{f(\base)(n)} =_{D(\base,n+1)} f(\base)(n+1). \]
+ However, expanding out the definition of $f(\base)$, we have
+ \begin{narrowmultline*}
+ \transfib{\widehat D}{\pairpath(\lloop,q)}{f(\base)(n)}
+ \narrowbreak
+ \begin{aligned}[t]
+ &= \transfib{\widehat D}{\pairpath(\lloop,q)}{\transfib{\widehat{D}}{\pairpath(\lloop^n,p_n)}{d}}\\
+ &= \transfib{\widehat D}{\pairpath(\lloop^n,p_n) \ct \pairpath(\lloop,q)}{d}\\
+ &= \transfib{\widehat D}{\pairpath(\lloop^{n+1},p_{n+1})}{d}\\
+ &= f(\base)(n+1).
+ \end{aligned}
+ \end{narrowmultline*}
+ We have used the functoriality of transport, the characterization of composition in $\Sigma$-types (which was an exercise for the reader), and a lemma relating $p_n$ and $q$ to $p_{n+1}$, which we leave to the reader to state and prove.
+
+ This completes the construction of $f:\prd{x:\Sn^1}{c:\code(x)} D(x,c)$.
+ Since
+ \[f(\base,0) \jdeq \trans{\pairpath(\lloop^0,p_0)}{d} = \trans{\refl{\base}}{d} = d,\]
+ we have shown that $(\code,0)$ is an identity system.
+\end{proof}
+
+\begin{cor}
+ For any $x:\Sn^1$, we have $\eqv{(\base=x)}{\code(x)}$.
+\end{cor}
+\begin{proof}
+ By \cref{thm:identity-systems}.
+\end{proof}
+
+Of course, this proof also contains essentially the same elements as the previous two.
+Roughly, we can say that it unifies the proofs of \cref{thm:pi1s1-decode,lem:s1-encode-decode}, performing the requisite inductive argument only once in a generic case.
+
+\begin{rmk}
+ Note that all of the above proofs that $\eqv{\pi_1(\Sn^1)}{\Z}$ use the univalence axiom in an essential way.
+ This is unavoidable: univalence or something like it is \emph{necessary} in order to prove $\eqv{\pi_1(\Sn^1)}{\Z}$.
+ In the absence of univalence, it is consistent to assume the statement ``all types are sets'' (a.k.a.\ ``uniqueness of identity proofs'' or ``Axiom K'', as discussed in \cref{sec:hedberg}), and this statement implies instead that $\eqv{\pi_1(\Sn^1)}{\unit}$.
+ In fact, the (non)triviality of $\pi_1(\Sn^1)$ detects exactly whether all types are sets: the proof of \cref{thm:loop-nontrivial} showed conversely that if $\lloop=\refl{\base}$ then all types are sets.
+\end{rmk}
+
+\section{Connectedness of suspensions}
+\label{sec:conn-susp}
+
+Recall from \cref{sec:connectivity} that a type $A$ is called \define{$n$-connected} if $\trunc nA$ is contractible.
+The aim of this section is to prove that the operation of suspension from \cref{sec:suspension} increases connectedness.
+
+\begin{thm} \label{thm:suspension-increases-connectedness}
+ If $A$ is $n$-connected then the suspension of $A$ is $(n+1)$-connected.
+\end{thm}
+
+\begin{proof}
+ We remarked in \cref{sec:colimits} that the suspension of $A$ is the pushout $\unit\sqcup^A\unit$, so we need to
+ prove that the following type is contractible:
+ %
+ \[\trunc{n+1}{\unit\sqcup^A\unit}.\]
+ %
+ By \cref{reflectcommutespushout} we know that
+ $\trunc{n+1}{\unit\sqcup^A\unit}$ is a pushout in $\typelep{n+1}$ of the diagram
+ \[\xymatrix{\trunc{n+1}A \ar[d] \ar[r] & \trunc{n+1}{\unit} \\
+ \trunc{n+1}{\unit} & }.\]
+ %
+ Given that $\trunc{n+1}{\unit}=\unit$, the type
+ $\trunc{n+1}{\unit\sqcup^A\unit}$ is also a pushout of the following diagram in
+ $\typelep{n+1}$ (because the two diagrams are equal):
+ %
+ \[\Ddiag=\vcenter{\xymatrix{\trunc{n+1}A \ar[d] \ar[r] & \unit \\
+ \unit & }}.\]
+ %
+ We will now prove that $\unit$ is also a pushout of $\Ddiag$ in
+ $\typelep{n+1}$.
+ %
+ Let $E$ be an $(n+1)$-truncated type; we need to prove that the following map
+ is an equivalence:
+ %
+ \[\function{(\unit \to E)}{\cocone{\Ddiag}{E}}{y}
+ {(y,y,\lamu{u:{\trunc{n+1}A}} \refl{y(\ttt)})},\]
+ %
+ where we recall that $\cocone{\Ddiag}{E}$ is the type
+ \[\sm{f:\unit \to E}{g:\unit \to E}(\trunc{n+1}A\to
+ (f(\ttt)=_E{}g(\ttt))).\]
+ %
+ The map $\function{(\unit\to E)}{E}{f}{f(\ttt)}$ is an equivalence, hence
+ we also have
+ \[\cocone{\Ddiag}{E}=\sm{x:E}{y:E}(\trunc{n+1}A\to(x=_Ey)).\]
+ %
+ Now $A$ is $n$-connected, hence so is $\trunc{n+1}A$ because
+ $\trunc n{\trunc{n+1}A}=\trunc nA=\unit$, and $(x=_Ey)$ is $n$-truncated because
+ $E$ is $(n+1)$-truncated. Hence by \cref{connectedtotruncated} the
+ following map is an equivalence:
+ %
+ \[\function{(x=_Ey)}{(\trunc{n+1}A\to(x=_Ey))}{p}{\lam{z} p}.\]
+ %
+ Hence we have
+ %
+ \[\cocone{\Ddiag}{E}=\sm{x:E}{y:E}(x=_Ey).\]
+ %
+ But the following map is an equivalence
+ %
+ \[\function{E}{\sm{x:E}{y:E}(x=_Ey)}{x}{(x,x,\refl{x})}.\]
+ %
+ Hence
+ %
+ \[\cocone{\Ddiag}{E}=E.\]
+ %
+ Finally we get an equivalence
+ %
+ \[(\unit \to E)\eqvsym\cocone{\Ddiag}{E}.\]
+ %
+ We can now unfold the definitions in order to get the explicit expression of
+ this map, and we see easily that this is exactly the map we had at the
+ beginning.
+
+ Hence we proved that $\unit$ is a pushout of $\Ddiag$ in $\typelep{n+1}$. Using
+ uniqueness of pushouts we get that $\trunc{n+1}{\unit\sqcup^A\unit}=\unit$
+ which proves that the suspension of $A$ is $(n+1)$-connected.
+\end{proof}
+
+\begin{cor} \label{cor:sn-connected}
+ For all $n:\N$, the sphere $\Sn^n$ is $(n-1)$-connected.
+\end{cor}
+
+\begin{proof}
+ We prove this by induction on $n$.
+ %
+ For $n=0$ we have to prove that $\Sn^0$ is merely inhabited, which is clear.
+ %
+ Let $n:\N$ be such that $\Sn^n$ is $(n-1)$-connected. By definition $\Sn^{n+1}$
+ is the suspension of $\Sn^n$, hence by the previous lemma $\Sn^{n+1}$ is
+ $n$-connected.
+\end{proof}
+
+\section{\texorpdfstring{$\pi_{k \le n}$}{π\_(k≤n)} of an \texorpdfstring{$n$}{n}-connected space and \texorpdfstring{$\pi_{k < n}(\Sn ^n)$}{π\_(k<n)(Sⁿ)}}
+
+Let $(A,a)$ be a pointed type and $n:\N$. Recall that if $n>0$ the set $\pi_n(A,a)$ has a group
+structure, and if $n>1$ the group is abelian\index{group!abelian}.
+
+We can now say something about homotopy groups of $n$-truncated and
+$n$-connected types.
+
+\begin{lem}
+ If $A$ is $n$-truncated and $a:A$, then $\pi_k(A,a)=\unit$ for all $k>n$.
+\end{lem}
+
+\begin{proof}
+ The loop space of an $n$-type is an
+ $(n-1)$-type, hence $\Omega^k(A,a)$ is an $(n-k)$-type, and we have
+ $(n-k)\le-1$ so $\Omega^k(A,a)$ is a mere proposition. But $\Omega^k(A,a)$ is inhabited,
+ so it is actually contractible and
+ $\pi_k(A,a)=\trunc0{\Omega^k(A,a)}=\trunc0{\unit}=\unit$.
+\end{proof}
+
+\begin{lem} \label{lem:pik-nconnected}
+ If $A$ is $n$-connected and $a:A$, then $\pi_k(A,a)=\unit$ for all $k\le{}n$.
+\end{lem}
+
+\begin{proof}
+ We have the following sequence of equalities:
+ %
+ \begin{narrowmultline*}
+ \pi_k(A,a) = \trunc0{\Omega^k(A,a)}
+ = \Omega^k(\trunc k{(A,a)})
+ = \Omega^k(\trunc k{\trunc n{(A,a)}})
+ = \narrowbreak
+ \Omega^k(\trunc k{\unit})
+ = \Omega^k(\unit)
+ = \unit.
+ \end{narrowmultline*}
+ %
+ The third equality uses the fact that $k\le{}n$ in order to use that
+ $\truncf k\circ\truncf n=\truncf k$ and the fourth equality uses the fact that $A$ is
+ $n$-connected.
+\end{proof}
+
+\begin{cor}
+ $\pi_k(\Sn ^n) = \unit$ for $k < n$.
+\end{cor}
+\begin{proof}
+ The sphere $\Sn^n$ is $(n-1)$-connected by \cref{cor:sn-connected}, so
+ we can apply \cref{lem:pik-nconnected}.
+\end{proof}
+
+\section{Fiber sequences and the long exact sequence}
+\label{sec:long-exact-sequence-homotopy-groups}
+
+\index{fiber sequence|(}%
+\index{sequence!fiber|(}%
+
+If the codomain of a function $f:X\to Y$ is equipped with a basepoint $y_0:Y$, then we refer to the fiber $F\defeq \hfib f {y_0}$ of $f$ over $y_0$ as \define{the fiber of $f$}\index{fiber}.
+(If $Y$ is connected, then $F$ is determined up to mere equivalence; see \cref{ex:unique-fiber}.)
+We now show that if $X$ is also pointed and $f$ preserves basepoints, then there is a relation between the homotopy groups of $F$, $X$, and $Y$ in the form of a \emph{long exact sequence}.
+We derive this by way of the \emph{fiber sequence} associated to such an $f$.
+
+\begin{defn}\label{def:pointedmap}
+ A \define{pointed map}
+ \indexdef{pointed!map}%
+ \indexsee{function!pointed}{pointed map}%
+ between pointed types $(X,x_0)$ and $(Y,y_0)$ is a
+ map $f:X\to Y$ together with a path $f_0:f(x_0)=y_0$.
+\end{defn}
+
+For any pointed types $(X,x_0)$ and $(Y,y_0)$, there is a pointed map $(\lam{x} y_0) : X\to Y$ which is constant at the basepoint.
+We call this the \define{zero map}\indexdef{zero!map}\indexdef{function!zero} and sometimes write it as $0:X\to Y$.
+
+Recall that every pointed type $(X,x_0)$ has a loop space\index{loop space} $\Omega (X,x_0)$.
+We now note that this operation is functorial on pointed maps.\index{loop space!functoriality of}\index{functor!loop space}
+
+\begin{defn}\label{def:loopfunctor}
+ Given a pointed map between pointed types $f:X \to Y$, we define a pointed
+ map $\Omega f:\Omega X
+ \to \Omega Y$ by
+ \[(\Omega f)(p) \defeq \rev{f_0}\ct\ap{f}{p}\ct f_0.\]
+ The path $(\Omega f)_0 : (\Omega f) (\refl{x_0}) = \refl{y_0}$, which exhibits $\Omega f$ as a pointed map, is the obvious path of type
+ \[\rev{f_0}\ct\ap{f}{\refl{x_0}}\ct f_0=\refl{y_0}.\]
+\end{defn}
+
+There is another functor on pointed maps, which takes $f:X\to Y$ to $\proj1 : \hfib f {y_0} \to X$.
+When $f$ is pointed, we always consider $\hfib f {y_0}$ to be pointed with basepoint $(x_0,f_0)$, in which case $\proj1$ is also a pointed map, with witness $(\proj1)_0 \defeq \refl{x_0}$.
+Thus, this operation can be iterated.
+
+\begin{defn}
+ The \define{fiber sequence}
+ \indexdef{fiber sequence}%
+ \indexdef{sequence!fiber}%
+ of a pointed map $f:X\to Y$ is the infinite sequence of pointed types and pointed maps
+ \[\xymatrix{\dots \ar[r]^{f^{(n+1)}} & X^{(n+1)} \ar[r]^{f^{(n)}} & X^{(n)} \ar^-{f^{(n-1)}}[r] & \dots \ar[r] & X^{(2)} \ar^-{f^{(1)}}[r] & X^{(1)} \ar[r]^{f^{(0)}} & X^{(0)}}\]
+ defined recursively by
+ \[ X^{(0)} \defeq Y\qquad
+ X^{(1)} \defeq X\qquad
+ f^{(0)} \defeq f\qquad
+ \]
+ and
+ \begin{alignat*}{2}
+ X^{(n+1)} &\defeq \hfib {f^{(n-1)}}{x^{(n-1)}_0}\\
+ f^{(n)} &\defeq \proj1 &: X^{(n+1)} \to X^{(n)},
+ \end{alignat*}
+ where $x^{(n)}_0$ denotes the basepoint of $X^{(n)}$, chosen recursively as above.
+\end{defn}
+
+Thus, any adjacent pair of maps in this fiber sequence is of the form
+\[ \xymatrix{ X^{(n+1)} \jdeq \hfib{f^{(n-1)}}{x^{(n-1)}_0} \ar[rr]^-{f^{(n)}\jdeq \proj1} && X^{(n)} \ar[r]^{f^{(n-1)}} & X^{(n-1)}. } \]
+In particular, we have $f^{(n-1)} \circ f^{(n)} = 0$.
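+Spelled out, this equation holds because the second component of a point of the fiber provides the required path: for any $(x,p):\hfib{f^{(n-1)}}{x^{(n-1)}_0}$ we have
+\[ f^{(n-1)}\big(f^{(n)}(x,p)\big) \jdeq f^{(n-1)}(x), \]
+and $p : f^{(n-1)}(x) = x^{(n-1)}_0$ identifies this with the basepoint, naturally in $(x,p)$; thus $f^{(n-1)}\circ f^{(n)}$ is equal to the zero map.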
+We now observe that the types occurring in this sequence are the iterated loop spaces\index{loop space!iterated} of the base
+space $Y$, the total space\index{total!space} $X$, and the fiber $F\defeq \hfib f {y_0}$, and similarly for the maps.
+
+\begin{lem}\label{thm:fiber-of-the-fiber}
+ Let $f:X\to Y$ be a pointed map of pointed spaces. Then:
+ \begin{enumerate}
+ \item The fiber of $f^{(1)}\defeq \proj1 : \hfib f {y_0} \to X$ is equivalent to $\Omega Y$.\label{item:fibseq1}
+ \item Similarly, the fiber of $f^{(2)} : \Omega Y \to \hfib f {y_0}$ is equivalent to $\Omega X$.\label{item:fibseq2}
+ \item Under these equivalences, the pointed map $f^{(3)} : \Omega X\to \Omega Y$ is identified with the pointed map $\Omega f\circ\rev{(\blank)}$.\label{item:fibseq3}
+ \end{enumerate}
+\end{lem}
+\begin{proof}
+ For~\ref{item:fibseq1}, we have
+ \begin{align*}
+ \hfib{f^{(1)}}{x_0}
+ &\defeq \sm{z:\hfib{f}{y_0}} (\proj{1}(z) = x_0)\\
+ &\eqvsym \sm{x:X}{p:f(x)=y_0} (x = x_0) &\text{(by \cref{ex:sigma-assoc})}\\
+ &\eqvsym (f(x_0) = y_0) &\text{(as $\tsm{x:X} (x=x_0)$ is contractible)}\\
+ &\eqvsym (y_0 = y_0) &\text{(by $(f_0 \ct \blank)$)}\\
+ &\jdeq \Omega Y.
+ \end{align*}
+ Tracing through, we see that this equivalence sends $((x,p),q)$ to $\opp{f_0} \ct \ap{f}{\opp q} \ct p$, while its inverse sends $r:y_0=y_0$ to $((x_0, f_0 \ct r), \refl{x_0})$.
+ In particular, the basepoint $((x_0,f_0),\refl{x_0})$ of $\hfib{f^{(1)}}{x_0}$ is sent to $\opp{f_0} \ct \ap{f}{\opp {\refl{x_0}}} \ct f_0$, which equals $\refl{y_0}$.
+ Hence this equivalence is a pointed map (see \cref{ex:pointed-equivalences}).
+ Moreover, under this equivalence, $f^{(2)}$ is identified with $\lam{r} (x_0, f_0 \ct r): \Omega Y \to \hfib f {y_0}$.
+
+ \cref{item:fibseq2} follows immediately by applying~\ref{item:fibseq1} to $f^{(1)}$ in place of $f$.
+ % The resulting equivalence sends $(((x,p),p'),q') : \hfib{f^{(2)}}{(x_0,f_0)}$ to $\ap{f}{\opp {q'}} \ct p'$, while its inverse sends $s:x_0=x_0$ to $(((x_0,f_0), s), \refl{(x_0,f_0)})$.
+ Since $(f^{(1)})_0 \defeq \refl{x_0}$, under this equivalence $f^{(3)}$ is identified with the map $\Omega X \to \hfib {f^{(1)}}{x_0}$ defined by $s \mapsto ((x_0,f_0),s)$.
+ Thus, when we compose with the previous equivalence $\hfib {f^{(1)}}{x_0} \eqvsym \Omega Y$, we see that $s$ maps to $\opp{f_0} \ct \ap{f}{\opp s} \ct f_0$, which is by definition $(\Omega f)(\opp s)$. We omit the proof that this is an equality of pointed maps rather than just of functions.
+\end{proof}
+
+Thus, the fiber sequence of $f:X\to Y$ can be pictured as:
+\[\xymatrix@C=1.5pc{
+ \dots \ar[r] &
+ \Omega^2 X \ar[r]^-{\Omega^2 f} &
+ \Omega^2 Y \ar[r]^-{-\Omega \partial} &
+ \Omega F \ar[r]^-{-\Omega i} &
+ \Omega X \ar[r]^-{-\Omega f} &
+ \Omega Y \ar[r]^-{\partial} &
+ F \ar[r]^-{i} &
+ X \ar[r]^{f} & Y.}\]
+where the minus signs denote composition with path inversion $\rev{(\blank)}$.
+Note that by \cref{ex:ap-path-inversion}, we have
+\[ \Omega\left(\Omega f\circ \opp{(\blank)}\right) \circ \opp{(\blank)}
+= \Omega^2 f \circ \opp{(\blank)} \circ \opp{(\blank)}
+= \Omega^2 f.
+\]
+Thus, there are minus signs on the $k$-fold loop maps whenever $k$ is odd.
+
+From this fiber sequence we will deduce an \emph{exact sequence of pointed sets}.
+\index{image}%
+Let $A$ and $B$ be sets and $f:A\to B$ a function, and recall from \cref{defn:modal-image} the definition of the \emph{image} $\im(f)$, which can be regarded as a subset of $B$:
+\[\im(f) \defeq \setof{b:B | \exis{a:A} f(a)=b}. \]
+If $A$ and $B$ are moreover pointed with basepoints $a_0$ and $b_0$, and $f$ is a pointed map, we define the \define{kernel}
+\indexdef{kernel}%
+\indexdef{pointed!map!kernel of}%
+of $f$ to be the following subset of $A$:
+\symlabel{kernel}
+\[\ker(f) \defeq \setof{x:A | f(x) = b_0}. \]
+Of course, this is just the fiber of $f$ over the basepoint $b_0$; it is a subset of $A$ because $B$ is a set.
+
+Note that any group is a pointed set, with its unit element as basepoint, and any group homomorphism is a pointed map.
+In this case, the kernel and image agree with the usual notions from group theory.
+
+\begin{defn}
+ An \define{exact sequence of pointed sets}
+ \indexdef{exact sequence}%
+ \indexdef{sequence!exact}%
+ is a (possibly bounded) sequence of pointed sets and pointed maps:
+ \[\xymatrix{\dots \ar[r] & A^{(n+1)} \ar[r]^-{f^{(n)}} & A^{(n)} \ar[r]^{f^{(n-1)}} & A^{(n-1)} \ar[r] &
+ \dots}\]
+ such that for every $n$, the image of $f^{(n)}$ is equal, as a subset of $A^{(n)}$, to the kernel of $f^{(n-1)}$.
+ In other words, for all $a:A^{(n)}$ we have
+ \[ (f^{(n-1)}(a) = a^{(n-1)}_0) \iff \exis{b:A^{(n+1)}} (f^{(n)}(b)=a), \]
+ where $a^{(n)}_0$ denotes the basepoint of $A^{(n)}$.
+\end{defn}
+
+Usually, most or all of the pointed sets in an exact sequence are groups, and often abelian groups.
+When we speak of an \define{exact sequence of groups}, it is assumed moreover that the maps are group homomorphisms and not just pointed maps.
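+For example, any homomorphism $f:G\to H$ of abelian groups sits in an exact sequence
+\[ 0 \to \ker(f) \to G \xrightarrow{f} H \to H/\im(f) \to 0, \]
+where $0$ denotes the trivial group: exactness at $G$ says precisely that the kernel of $f$ is the image of the inclusion $\ker(f)\to G$, while exactness at $H$ says that the kernel of the quotient map is $\im(f)$.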
+
+\index{exact sequence}
+\begin{thm}\label{thm:les}
+ Let $f:X \to Y$ be a pointed map between pointed spaces with fiber $F\defeq \hfib f {y_0}$.
+ Then we have the following long exact sequence, which consists of groups except for the last three terms, and abelian groups except for the last six.
+
+ \[
+ \xymatrix@R=1.2pc@C=3pc{
+ \vdots & \vdots & \vdots \ar[lld] \\
+ \pi_k(F) \ar[r] & \pi_k(X) \ar[r] & \pi_k(Y) \ar[lld] \\
+ \vdots & \vdots & \vdots \ar[lld] \\
+ \pi_2(F) \ar[r] & \pi_2(X) \ar[r] & \pi_2(Y) \ar[lld] \\
+ \pi_1(F) \ar[r] & \pi_1(X) \ar[r] & \pi_1(Y) \ar[lld]\\
+ \pi_0(F) \ar[r] & \pi_0(X) \ar[r] & \pi_0(Y)}
+ \]
+\end{thm}
+
+% In US trade we might want a page break here, and extra stretch, otherwise the
+% whole page looks ugly.
+\vspace*{0pt plus 10ex}
+\goodbreak
+
+\begin{proof}
+ We begin by showing that the 0-truncation of a fiber sequence is an exact sequence of pointed sets.
+ Thus, we need to show that for any adjacent pair of maps in a fiber sequence:
+ %
+ \[\xymatrix{\hfib{f}{z_0} \ar^-g[r] & W \ar^-f[r] & Z}\]
+ %
+ with $g\defeq \proj1$, the sequence
+ %
+ \[\xymatrix{\trunc{0}{\hfib{f}{z_0}} \ar^-{\trunc0g}[r] & \trunc0{W}
+ \ar^-{\trunc0{f}}[r] & \trunc0{Z}}\]
+ %
+ is exact, i.e.\ that $\im(\trunc0g)\subseteq\ker(\trunc0f)$ and $\ker(\trunc0f)\subseteq\im(\trunc0g)$.
+
+ The first inclusion is equivalent to $\trunc0f\circ\trunc0g=0$, which holds by functoriality of $\truncf0$ and the fact that $f\circ g=0$.
+ For the second, we assume $w':\trunc0W$ and $p':\trunc0f(w')=\tproj0{z_0}$ and show there merely exists ${t:\hfib{f}{z_0}}$ such that $\tproj0{g(t)}=w'$.
+ Since our goal is a mere proposition, we can assume that $w'$ is of the
+ form $\tproj0w$ for some $w:W$.
+ Now by \cref{thm:path-truncation}, $p' : \tproj0{f(w)}=\tproj0{z_0}$ yields $p'': \trunc{-1}{f(w)=z_0}$, so by a further truncation induction we may assume some $p:f(w)=z_0$.
+ But now we have $\tproj0{(w,p)}:\trunc0{\hfib{f}{z_0}}$ whose image under $\trunc0g$ is $\tproj0w\jdeq w'$, as desired.
+
+ Thus, applying $\truncf0$ to the fiber sequence of $f$, we obtain a long exact sequence involving the pointed sets $\pi_k(F)$, $\pi_k(X)$, and $\pi_k(Y)$ in the desired order.
+ And of course, $\pi_k$ is a group for $k\ge1$, being the 0-truncation of a loop space, and an abelian group for $k\ge 2$ by the Eckmann--Hilton argument
+ \index{Eckmann--Hilton argument} (\cref{thm:EckmannHilton}).
+ Moreover, \cref{thm:fiber-of-the-fiber} allows us to identify the maps $\pi_k(F) \to \pi_k(X)$ and $\pi_k(X) \to \pi_k(Y)$ in this exact sequence as $(-1)^k \pi_k(i)$ and $(-1)^k \pi_k(f)$ respectively.
+
+ More generally, every map in this long exact sequence except the last three is of the form $\trunc0{\Omega h}$ or $\trunc0{-\Omega h}$ for some $h$.
+ In the former case it is a group homomorphism, while in the latter case it is a homomorphism if the groups are abelian; otherwise it is an ``anti-homomorphism''.
+ However, the kernel and image of a group homomorphism are unchanged when we replace it by its negative, and hence so is the exactness of any sequence involving it.
+ Thus, we can modify our long exact sequence to obtain one involving $\pi_k(i)$ and $\pi_k(f)$ directly and in which all the maps are group homomorphisms (except the last three).
+\end{proof}
+
+The usual properties of exact sequences of abelian groups\index{group!abelian!exact sequence of} can be proved in the
+usual way. In particular we have:
+\begin{lem}\label{thm:ses}
+ Suppose given an exact sequence of abelian groups:
+ %
+ \[\xymatrix{K \ar[r]& G \ar^f[r] & H \ar[r] & Q.}\]
+ %
+ \begin{enumerate}
+ \item If $K=0$, then $f$ is injective.\label{item:sesinj}
+ \item If $Q=0$, then $f$ is surjective.\label{item:sessurj}
+ \item If $K=Q=0$, then $f$ is an isomorphism.\label{item:sesiso}
+ \end{enumerate}
+\end{lem}
+\begin{proof}
+ Since the kernel of $f$ is the image of $K\to G$, if $K=0$ then the kernel of $f$ is $\{0\}$;
+ hence $f$ is injective, since it is a group homomorphism.
+ Similarly, since the image of $f$ is the kernel of $H\to Q$, if $Q=0$ then the image of $f$ is all of $H$, so $f$ is surjective.
+ Finally,~\ref{item:sesiso} follows from~\ref{item:sesinj} and~\ref{item:sessurj} by \cref{thm:mono-surj-equiv}.
+\end{proof}
+
+As an immediate application, we can now quantify in what way $n$-connectedness of a map is stronger than inducing an equivalence on $n$-truncations.
+
+\begin{cor}\label{thm:conn-pik}
+ Let $f:A\to B$ be $n$-connected and $a:A$, and define $b\defeq f(a)$. Then:
+ \begin{enumerate}
+ \item If $k\le n$, then $\pi_k(f):\pi_k(A,a) \to \pi_k(B,b)$ is an isomorphism.\label{item:conn-pik1}
+ \item If $k=n+1$, then $\pi_k(f):\pi_k(A,a) \to \pi_k(B,b)$ is surjective.\label{item:conn-pik2}
+ \end{enumerate}
+\end{cor}
+\begin{proof}
+ For $k=0$, part~\ref{item:conn-pik1} follows from \cref{lem:connected-map-equiv-truncation}, noticing that $\pi_0(f)\jdeq\trunc 0f$.
+ For $k=0$, part~\ref{item:conn-pik2} follows from \cref{ex:is-conn-trunc-functor}, noticing that a function is surjective if and only if it is $(-1)$-connected, by \cref{thm:minusoneconn-surjective}.
+ For $k>0$ we have as part of the long exact sequence an exact sequence
+ \[\xymatrix{\pi_k(\hfib f b) \ar[r]& \pi_k(A,a) \ar^f[r] & \pi_k(B,b) \ar[r] & \pi_{k-1}(\hfib f b).}\]
+ Now since $f$ is $n$-connected, $\trunc n{\hfib f b}$ is contractible.
+ Therefore, if $k\le n$, then $\pi_k(\hfib f b) = \trunc0{\Omega^k(\hfib f b)} = \Omega^k(\trunc k{\hfib f b})$ is also contractible.
+ Thus, $\pi_k(f)$ is an isomorphism for $k\le n$ by \cref{thm:ses}\ref{item:sesiso}, while for $k=n+1$ it is surjective by \cref{thm:ses}\ref{item:sessurj}.
+\end{proof}
+
+In \cref{sec:whitehead} we will see that the converse of \cref{thm:conn-pik} also holds.
+
+\index{fiber sequence|)}%
+\index{sequence!fiber|)}%
+
+\section{The Hopf fibration}
+\label{sec:hopf}
+
+In this section we will define the \define{Hopf fibration}.
+\indexdef{Hopf!fibration}%
+
+\begin{thm}[Hopf Fibration]\label{thm:hopf-fibration}
+There is a fibration $H$ over $\Sn^2$ whose fiber over the basepoint is $\Sn^1$ and
+whose total space is $\Sn^3$.
+\end{thm}
+
+The Hopf fibration will allow us to compute several homotopy groups of
+spheres.
+Indeed, it yields the following long exact sequence
+\index{homotopy!group!of sphere}
+\index{sequence!exact}
+of homotopy groups
+(see
+\cref{sec:long-exact-sequence-homotopy-groups}):
+%
+\[
+\xymatrix@R=1.2pc{
+ \pi_k(\Sn^1) \ar[r] & \pi_k(\Sn^3) \ar[r] & \pi_k(\Sn^2) \ar[lld] \\
+ \vdots & \vdots & \vdots \ar[lld] \\
+ \pi_2(\Sn^1) \ar[r] & \pi_2(\Sn^3) \ar[r] & \pi_2(\Sn^2) \ar[lld] \\
+ \pi_1(\Sn^1) \ar[r] & \pi_1(\Sn^3) \ar[r] & \pi_1(\Sn^2)}
+\]
+%
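+To illustrate how this sequence is used (a sketch, granting the standard facts that $\Sn^3$ is $2$-connected, so that $\pi_1(\Sn^3)=\pi_2(\Sn^3)=0$, and that $\pi_k(\Sn^1)=0$ for $k\ge 2$): the exact segment
+\[\xymatrix{\pi_2(\Sn^3) \ar[r] & \pi_2(\Sn^2) \ar[r] & \pi_1(\Sn^1) \ar[r] & \pi_1(\Sn^3)}\]
+has vanishing outer terms, so \cref{thm:ses}\ref{item:sesiso} yields $\pi_2(\Sn^2)\cong\pi_1(\Sn^1)=\Z$; similarly, for $k\ge 3$ the vanishing of $\pi_k(\Sn^1)$ and $\pi_{k-1}(\Sn^1)$ yields $\pi_k(\Sn^2)\cong\pi_k(\Sn^3)$.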
+We've already computed all $\pi_n(\Sn^1)$, and $\pi_k(\Sn^n)$ for $k<n$, so this exact sequence will allow us to compute $\pi_2(\Sn^2)$ and $\pi_3(\Sn^2)$.
+
+\subsection{The Hopf construction}
+
+Let $A$ be a connected H-space, i.e.\ a pointed type equipped with a multiplication $\mu:A\times A\to A$ for which the basepoint is a two-sided unit.
+Connectedness implies that the maps $\mu(a,\blank):A\to A$ and $\mu(\blank,a):A\to A$ are equivalences for every $a:A$.
+Presenting $\susp A$ as the pushout of $\unit \leftarrow A \rightarrow \unit$ and gluing, over each $a:A$, the equivalence $\mu(\blank,a)$, we obtain a fibration over $\susp A$ with fiber $A$.
+The pushout of the bottom line of the following commutative diagram is $\susp A$,
+%
+\[\xymatrix{A \ar@{->>}[d] & A \times A \ar[l]_-{\proj2} \ar@{->>}_{\proj1}[d]
+ \ar[r]^-{\mu} & A \ar@{->>}[d] \\
+ 1 & A \ar[r] \ar[l] & 1}\]
+%
+and the fibration we just constructed is a fibration over $\susp A$ whose total
+space is the pushout of the top line.
+
+Moreover, with $f(x,y)\defeq(\mu(x,y),y)$ we have the following diagram:
+%
+\[\xymatrix{A \ar_\idfunc[d] & A \times A \ar[l]_-{\proj2} \ar^f[d]
+ \ar[r]^-{\mu} & A \ar^\idfunc[d] \\
+ A & A\times A \ar^-{\proj2}[l] \ar_-{\proj1}[r] & A}\]
+%
+The diagram commutes and the three vertical maps are equivalences, the inverse
+of $f$ being the function $g$ defined by
+\[g(u,v)\defeq(\opp{\mu(\blank,v)}(u),v).\]
+%
+This shows that the two lines are equivalent (hence equal) spans, so the total
+space of the fibration we constructed is equivalent to the pushout of the bottom
+line.
+And by definition, this latter pushout is the \emph{join} of $A$ with itself (see \cref{sec:colimits}).
+%
+We have proven:
+
+\begin{lem}\label{lem:hopf-construction}
+ Given a connected H-space $A$, there is a fibration, called the
+ \define{Hopf construction},
+ \indexdef{Hopf!construction}%
+ over $\susp A$ with fiber $A$ and total space $A*A$.
+\end{lem}
+
+\subsection{The Hopf fibration}
+
+\index{Hopf fibration|(}
+\indexsee{fibration!Hopf}{Hopf fibration}
+We will first construct a structure of H-space on the circle $\Sn^1$, hence by
+\cref{lem:hopf-construction} we will get a fibration over $\Sn^2$ with fiber
+$\Sn^1$ and total space $\Sn^1*\Sn^1$. We will then prove that this join is
+equivalent to $\Sn^3$.
+
+\begin{lem}\label{lem:hspace-S1}
+ There is an H-space structure on the circle $\Sn^1$.
+\end{lem}
+\begin{proof}
+ For the base point of the H-space structure we choose $\base$.
+ %
+ Now we need to define the multiplication operation
+ $\mu:\Sn^1\times\Sn^1\to\Sn^1$.
+ We will define the curried form $\widetilde\mu:\Sn^1\to(\Sn^1\to\Sn^1)$ of $\mu$
+ by recursion on $\Sn^1$:
+ \begin{equation*}
+ \widetilde\mu(\base) \defeq\idfunc[\Sn^1],
+ \qquad\text{and}\qquad
+ \ap{\widetilde\mu}{\lloop} \defid\funext(h),
+ \end{equation*}
+ where $h:\prd{x:\Sn^1}(x=x)$ is the function defined in \cref{thm:S1-autohtpy},
+ which has the property that $h(\base) \defeq\lloop$.
+
+ Now we just have to prove that $\mu(x,\base)=\mu(\base,x)=x$ for every
+ $x:\Sn^1$.
+ %
+ By definition, if $x:\Sn^1$ we have
+ $\mu(\base,x)=\widetilde\mu(\base)(x)=\idfunc[\Sn^1](x)=x$. We prove the equality
+ $\mu(x,\base)=x$ by induction on $x:\Sn^1$:
+ \begin{itemize}
+ \item If $x$ is $\base$ then $\mu(\base,\base)=\base$ by definition, so we
+ have $\refl\base:\mu(\base,\base)=\base$.
+ \item When $x$ varies along $\lloop$, we need to prove that
+ \[\refl\base\ct\apfunc{\lam{x}x}({\lloop})=
+ \apfunc{\lam{x}\mu(x,\base)}({\lloop})\ct\refl\base.\]
+ The left-hand side is equal to $\lloop$, and for the right-hand side we have:
+ \begin{align*}
+ \apfunc{\lam{x}\mu(x,\base)}({\lloop})\ct\refl\base &=
+ \apfunc{\lam{x}(\widetilde\mu(x))(\base)}({\lloop})\\
+ &=\happly(\apfunc{\lam{x}(\widetilde\mu(x))}({\lloop}),\base)\\
+ &=\happly(\funext(h),\base)\\
+ &=h(\base)\\
+ &=\lloop. \qedhere
+ \end{align*}
+ \end{itemize}
+\end{proof}
+
+Now recall from \cref{sec:colimits} that the \emph{join} $A*B$ of types $A$ and $B$ is the pushout of the diagram
+\index{join!of types}%
+\[A \xleftarrow{\proj1}A\times B \xrightarrow{\proj2} B. \]
+
+\begin{lem}
+ \index{associativity!of join}%
+ The operation of join is associative: if $A$, $B$ and $C$ are three types
+ then we have an equivalence $\eqv{(A*B)*C}{A*(B*C)}$.
+\end{lem}
+
+\begin{proof}
+ We define a map $f:(A*B)*C\to A*(B*C)$ by induction. We first need to define
+ $f\circ\inl:A*B\to A*(B*C)$ which will be done by induction, then
+ $f\circ\inr:C\to A*(B*C)$, and then $\apfunc{f}\circ\glue:\prd{t:(A*B)\times
+ C}f(\inl(\fst(t)))=f(\inr(\snd(t)))$ which will be done by induction on the
+ first component of $t$:
+ \begin{align*}
+ (f\circ\inl)(\inl(a)) &\defeq \inl(a), \\
+ (f\circ\inl)(\inr(b)) &\defeq \inr(\inl(b)), \\
+ \apfunc{f\circ\inl}(\glue(a,b)) &\defid \glue(a,\inl(b)), \\
+ f(\inr(c)) &\defeq \inr(\inr(c)),\\
+ \apfunc{f}(\glue(\inl(a),c)) &\defid\glue(a,\inr(c)),\\
+ \apfunc{f}(\glue(\inr(b),c)) &\defid\apfunc{\inr}(\glue(b,c)),\\
+ \apdfunc{\lam{x}\apfunc{f}(\glue(x,c))}(\glue(a,b)) &\defid
+ ``\apdfunc{\lam{x}\glue(a,x)}(\glue(b,c))''.
+ \end{align*}
+ %
+ For the last equation, note that the right-hand side is of type
+ \[\transfib{\lam{x}\inl(a)=\inr(x)}{\glue(b,c)}{\glue(a,\inl(b))}=
+ \glue(a,\inr(c))\]
+ whereas it is supposed to be of type
+ %
+ \begin{narrowmultline*}
+ \transfib{\lam{x}f(\inl(x))=f(\inr(c))}{\glue(a,b)}{\apfunc{f}(\glue(\inl(a),c))}
+ = \narrowbreak
+ \apfunc{f}(\glue(\inr(b),c)).
+ \end{narrowmultline*}
+ %
+ But by the previous clauses in the definition, both of these types are equivalent to the following type:
+ \[\glue(a,\inr(c))=\glue(a,\inl(b))\ct\apfunc\inr(\glue(b,c)),\]
+ and so we can coerce by an equivalence to obtain the necessary element.
+ %
+ Similarly, we can define a map $g:A*(B*C)\to(A*B)*C$, and checking that $f$ and
+ $g$ are inverse to each other is a long and tedious but essentially
+ straightforward computation.
+\end{proof}
+
+A more conceptual proof sketch is as follows.
+
+\begin{proof}
+ Let us consider the following diagram where the maps are the obvious
+ projections:
+ \[\xymatrix{
+ A & A\times C \ar[l] \ar[r] & A\times C\\
+ A\times B \ar[u] \ar[d] & A\times B\times C \ar[l]\ar[u]\ar[r]\ar[d] &
+ A\times C \ar[u] \ar[d] \\
+ B & B\times C \ar[l] \ar[r] & C}\]
+ %
+ Taking the colimit of the columns gives the following
+ diagram, whose colimit is $(A*B)*C$:
+ \[\xymatrix{A*B & (A*B)\times C \ar[l]\ar[r] & C}\]
+ %
+ On the other hand, taking the colimit of the lines gives a diagram whose
+ colimit is $A*(B*C)$.
+
+ Hence using a Fubini-like theorem for colimits (that we haven’t proved) we
+ have an equivalence $\eqv{(A*B)*C}{A*(B*C)}$. The proof of this Fubini theorem
+ \index{Fubini theorem for colimits}
+ for colimits still requires the long and tedious computation, though.
+\end{proof}
+
+\begin{lem}
+ For any type $A$, there is an equivalence $\eqv{\susp A}{\bool*A}$.
+\end{lem}
+
+\begin{proof}
+ It is easy to define the two maps back and forth and to prove that they are
+ inverse to each other. The details are left as an exercise for the reader.
+\end{proof}
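+One possible choice of the two maps, sketched here as an illustration (the names $f$ and $g$ are ours, and the coherence checks use the computation rules of the two pushouts): writing $f:\susp A\to\bool*A$ and $g:\bool*A\to\susp A$, we can take
+\begin{align*}
+ f(\north) &\defeq \inl(0_{\bool}), & g(\inl(0_{\bool})) &\defeq \north,\\
+ f(\south) &\defeq \inl(1_{\bool}), & g(\inl(1_{\bool})) &\defeq \south,\\
+ \apfunc{f}(\merid(a)) &\defid \glue(0_{\bool},a)\ct\opp{\glue(1_{\bool},a)}, & g(\inr(a)) &\defeq \south,\\
+ & & \apfunc{g}(\glue(0_{\bool},a)) &\defid \merid(a),\\
+ & & \apfunc{g}(\glue(1_{\bool},a)) &\defid \refl{\south}.
+\end{align*}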
+
+We can now construct the Hopf fibration:
+
+\begin{thm}
+ There is a fibration over $\Sn^2$ with fiber $\Sn^1$ and total space $\Sn^3$.
+ \index{total!space}%
+\end{thm}
+\begin{proof}
+ We proved that $\Sn^1$ has an H-space structure (cf.\ \cref{lem:hspace-S1}),
+ hence by \cref{lem:hopf-construction} there is a fibration over $\Sn^2$ with
+ fiber $\Sn^1$ and total space $\Sn^1*\Sn^1$. But by the two previous results
+ and \cref{thm:suspbool} we have:
+ \begin{equation*}
+ \Sn^1*\Sn^1 = (\susp\bool)*\Sn^1
+ =(\bool*\bool)*\Sn^1
+ =\bool*(\bool*\Sn^1)
+ =\susp(\susp\Sn^1)
+ =\Sn^3. \qedhere
+ \end{equation*}
+\end{proof}
+\index{Hopf fibration|)}
+
+\section{The Freudenthal suspension theorem}
+\label{sec:freudenthal}
+
+\index{Freudenthal suspension theorem|(}%
+\index{theorem!Freudenthal suspension|(}%
+
+Before proving the Freudenthal suspension theorem, we need some auxiliary lemmas about connectedness.
+In \cref{cha:hlevels} we proved a number of facts about $n$-connected maps and $n$-types for fixed $n$; here we are now interested in what happens when we vary $n$.
+For instance, in \cref{prop:nconnected_tested_by_lv_n_dependent types} we showed that $n$-connected maps are characterized by an ``induction principle'' relative to families of $n$-types.
+If we want to ``induct along'' an $n$-connected map into a family of $k$-types for $k> n$, we don't immediately know that there is a function defined by such an induction principle, but the following lemma says that at least our ignorance can be quantified.
+
+\begin{lem}\label{thm:conn-trunc-variable-ind}
+ If $f:A\to B$ is $n$-connected and $P:B\to \ntype{k}$ is a family of $k$-types for $k\ge n$, then the induced function
+ \[ (\blank\circ f) : \Parens{\prd{b:B} P(b)} \to \Parens{\prd{a:A} P(f(a)) } \]
+ is $(k-n-2)$-truncated.
+\end{lem}
+\begin{proof}
+ We induct on the natural number $k-n$.
+ When $k=n$, this is \cref{prop:nconnected_tested_by_lv_n_dependent types}.
+ %
+ For the inductive step, suppose $f$ is $n$-connected and $P$ is a family of $(k+1)$-types.
+ To show that $(\blank\circ f)$ is $(k-n-1)$-truncated, let $\ell:\prd{a:A} P(f(a))$; then we have
+ \[ \hfib{(\blank\circ f)}{\ell} \eqvsym \sm{g:\prd{b:B} P(b)} \prd{a:A} g(f(a)) = \ell(a).\]
+ Let $(g,p)$ and $(h,q)$ lie in this type, so $p:g\circ f \htpy \ell$ and $q:h\circ f \htpy \ell$; then we also have
+ \[ \big((g,p) = (h,q)\big) \eqvsym
+ \Parens{\sm{r:g\htpy h} r\circ f = p \ct \opp{q}}.
+ \]
+ However, here the right-hand side is a fiber of the map
+ \[ (\blank\circ f) : \Parens{\prd{b:B} Q(b)} \to \Parens{\prd{a:A} Q(f(a)) } \]
+ where $Q(b) \defeq (g(b)=h(b))$.
+ Since $P$ is a family of $(k+1)$-types, $Q$ is a family of $k$-types, so the inductive hypothesis implies that this fiber is a $(k-n-2)$-type.
+ Thus, all path spaces of $\hfib{(\blank\circ f)}{\ell}$ are $(k-n-2)$-types, so it is a $(k-n-1)$-type.
+\end{proof}
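+The two boundary cases of this lemma are worth spelling out as examples: when $k=n$ it says that $(\blank\circ f)$ is $(-2)$-truncated, i.e.\ an equivalence, recovering \cref{prop:nconnected_tested_by_lv_n_dependent types}; when $k=n+1$ it says that $(\blank\circ f)$ is $(-1)$-truncated, i.e.\ that an extension of a given $\ell:\prd{a:A}P(f(a))$ along $f$, if it exists, is unique up to homotopy.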
+
+Recall that if $\pairr{A,a_0}$ and $\pairr{B,b_0}$ are pointed types, then
+their \define{wedge}
+\index{wedge}%
+$A\vee B$ is defined to be the pushout of $A\xleftarrow{a_0}
+\unit\xrightarrow{b_0} B$.
+There is a canonical map $i:A\vee B \to A\times B$ defined by the two maps $\lam{a} (a,b_0)$ and $\lam{b} (a_0,b)$; the following lemma essentially says that this map is highly connected if $A$ and $B$ are so.
+It is a bit more convenient both to prove and use, however, if we use the characterization of connectedness from \cref{prop:nconnected_tested_by_lv_n_dependent types} and substitute in the universal property of the wedge (generalized to type families).
+
+\begin{lem}[Wedge connectivity lemma]\label{thm:wedge-connectivity}
+ Suppose that $\pairr{A,a_0}$ and $\pairr{B,b_0}$ are $n$- and $m$-connected pointed types, respectively, with $n,m\geq0$, and let
+%
+\narrowequation{P:A\to B\to \ntype{(n+m)}.}
+%
+Then for any ${f:\prd{a:A} P(a,b_0)}$ and ${g:\prd{b:B} P(a_0,b)}$ with $p:f(a_0) = g(b_0)$, there exists $h:\prd{a:A}{b:B} P(a,b)$ with homotopies
+%
+\begin{equation*}
+ q:\prd{a:A} h(a,b_0)=f(a)
+ \qquad\text{and}\qquad
+ r:\prd{b:B} h(a_0,b)=g(b)
+ \end{equation*}
+%
+such that $p = \opp{q(a_0)} \ct r(b_0)$.
+\end{lem}
+\begin{proof}
+ Define $Q:A\to\type$ by
+ \[ Q(a) \defeq \sm{k:\prd{b:B} P(a,b)} (f(a) = k(b_0)). \]
+ Then we have $(g,p):Q(a_0)$.
+ Since $a_0:\unit\to A$ is $(n-1)$-connected, if $Q$ is a family of $(n-1)$-types then we will have $\ell:\prd{a:A} Q(a)$ such that $\ell(a_0) = (g,p)$, in which case we can define $h(a,b) \defeq \proj1(\ell(a))(b)$.
+ However, for fixed $a$, the type $Q(a)$ is the fiber over $f(a)$ of the map
+ \[ \Parens{\prd{b:B} P(a,b) } \to P(a,b_0) \]
+ given by precomposition with $b_0:\unit\to B$.
+ Since $b_0:\unit\to B$ is $(m-1)$-connected, for this fiber to be $(n-1)$-truncated, by \cref{thm:conn-trunc-variable-ind} it suffices for each type $P(a,b)$ to be an $(n+m)$-type, which we have assumed.
+\end{proof}
+
+Let $(X,x_0)$ be a pointed type, and recall the definition of the suspension $\susp X$ from \cref{sec:suspension}, with constructors $\north,\south:\susp X$ and $\merid:X \to (\north=\south)$.
+We regard $\susp X$ as a pointed space with basepoint $\north$, so that we have $\Omega\susp X \defeq (\id[\susp X]\north\north)$.
+Then there is a canonical map
+\begin{align*}
+ \sigma &: X \to \Omega\susp X\\
+ \sigma(x) &\defeq \merid(x) \ct \opp{\merid(x_0)}.
+\end{align*}
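+Note that $\sigma$ is automatically a pointed map, taking the basepoint to $\refl{\north}$:
+\[ \sigma(x_0) \jdeq \merid(x_0)\ct\opp{\merid(x_0)} = \refl{\north}, \]
+where the displayed path is the usual right-inverse law for path concatenation.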
+
+\begin{rmk}
+ In classical algebraic topology, one considers the \emph{reduced suspension}, in which the path $\merid(x_0)$ is collapsed down to a point, identifying $\north$ and $\south$.
+ The reduced and unreduced suspensions are homotopy equivalent, so the distinction is invisible to our purely homotopy-theoretic eyes --- and since higher inductive types only allow us to ``identify'' points up to a higher path anyway, there is no purpose to considering reduced suspensions in homotopy type theory.
+ However, the ``unreducedness'' of our suspension is the reason for the (possibly unexpected) appearance of $\opp{\merid(x_0)}$ in the definition of $\sigma$.
+\end{rmk}
+
+Our goal is now to prove the following.
+
+\begin{thm}[The Freudenthal suspension theorem]\label{thm:freudenthal}
+ Suppose that $X$ is $n$-connected and pointed, with $n\geq 0$.
+ Then the map $\sigma:X\to \Omega\susp X$ is $2n$-connected.
+\end{thm}
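+Anticipating the main application: for $m\ge 1$ the sphere $\Sn^m$ is $(m-1)$-connected, so the theorem makes $\sigma:\Sn^m\to\Omega\Sn^{m+1}$ a $(2m-2)$-connected map, and hence by \cref{thm:conn-pik}, using $\pi_k(\Omega Y)=\pi_{k+1}(Y)$, we obtain isomorphisms $\pi_k(\Sn^m)\cong\pi_{k+1}(\Sn^{m+1})$ for $k\le 2m-2$ (the \emph{stability} of homotopy groups of spheres).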
+
+\index{encode-decode method|(}%
+
+We will use the encode-decode method, but applied in a slightly different way.
+In most cases so far, we have used it to characterize the loop space $\Omega (A,a_0)$ of some type as equivalent to some other type $B$, by constructing a family $\code:A\to \type$ with $\code(a_0)\defeq B$ and a family of equivalences $\decode:\prd{x:A}\code(x) \eqvsym (a_0=x)$.
+% We have also generalized it to characterize truncations of loop spaces by way of a family of equivalences $\prd{x:A}\code(x) \eqvsym \trunc n{a_0=x}$.
+
+In this case, however, we want to show that $\sigma:X\to \Omega \susp X$ is $2n$-connected.
+We could use a truncated version of the previous method, such as we will see in \cref{sec:van-kampen}, to prove that $\trunc{2n}X\to \trunc{2n}{\Omega \susp X}$ is an equivalence---but this is a slightly weaker statement than the map being $2n$-connected (see \cref{thm:conn-pik,thm:pik-conn}).
+However, note that in the general case, to prove that $\decode(x)$ is an equivalence, we could equivalently be proving that its fibers are contractible, and we would still be able to use induction over the base type.
+This we can generalize to prove connectedness of a map into a loop space, i.e.\ that the \emph{truncations} of its fibers are contractible.
+Moreover, instead of constructing $\code$ and $\decode$ separately, we can construct directly a family of \emph{codes for the truncations of the fibers}.
+
+\begin{defn}\label{thm:freudcode}
+ If $X$ is $n$-connected and pointed with $n\geq 0$, then there is a family
+ \begin{equation}
+ \code:\prd{y:\susp X} (\north=y) \to \type\label{eq:freudcode}
+ \end{equation}
+ such that
+ \begin{align}
+ \code(\north,p) &\defeq \trunc{2n}{\hfib{\sigma}{p}}
+ \jdeq \trunc{2n}{\tsm{x:X} (\merid(x) \ct \opp{\merid(x_0)} = p)}\label{eq:freudcodeN}\\
+ \code(\south,q) &\defeq \trunc{2n}{\hfib{\merid}{q}}
+ \jdeq \trunc{2n}{\tsm{x:X} (\merid(x) = q)}.\label{eq:freudcodeS}
+ \end{align}
+\end{defn}
+
+Our eventual goal will be to prove that $\code(y,p)$ is contractible for all $y:\susp X$ and $p:\north=y$.
+Applying this with $y\defeq \north$ will show that all fibers of $\sigma$ are $2n$-connected, and thus $\sigma$ is $2n$-connected.
+
+\begin{proof}[Proof of \cref{thm:freudcode}]
+ We define $\code(y,p)$ by induction on $y:\susp X$, where the first two cases are~\eqref{eq:freudcodeN} and~\eqref{eq:freudcodeS}.
+ It remains to construct, for each $x_1:X$, a dependent path
+ \[ \dpath{\lam{y}(\north=y)\to\type}{\merid(x_1)}{\code(\north)}{\code(\south)}. \]
+ By \cref{thm:dpath-arrow}, this is equivalent to giving a family of paths
+ \[ \prd{q:\north=\south} \code(\north)(\transfib{\lam{y}(\north=y)}{\opp{\merid(x_1)}}{q}) = \code(\south)(q). \]
+ And by univalence and transport in path types, this is equivalent to a family of equivalences
+ \[ \prd{q:\north=\south} \code(\north,q \ct \opp{\merid(x_1)}) \eqvsym \code(\south,q). \]
+ We will define a family of maps
+ \begin{equation}\label{eq:freudmap}
+ \prd{q:\north=\south} \code(\north,q \ct \opp{\merid(x_1)}) \to \code(\south,q),
+ \end{equation}
+ and then show that they are all equivalences.
+ Thus, let $q:\north=\south$; by the universal property of truncation and the definitions of $\code(\north,\blank)$ and $\code(\south,\blank)$, it will suffice to define, for each $x_2:X$, a map
+ \begin{equation*}
+ \big(\merid(x_2)\ct \opp{\merid(x_0)} = q \ct \opp{\merid(x_1)}\big)
+ \to \trunc{2n}{\tsm{x:X} (\merid(x) = q)}.
+ \end{equation*}
+ Now for each $x_1,x_2:X$, this type is $2n$-truncated, while $X$ is $n$-connected.
+ Thus, by \cref{thm:wedge-connectivity}, it suffices to define this map when $x_1$ is $x_0$, when $x_2$ is $x_0$, and check that they agree when both are $x_0$.
+
+ When $x_1$ is $x_0$, the hypothesis is $r:\merid(x_2)\ct \opp{\merid(x_0)} = q \ct \opp{\merid(x_0)}$.
+ Thus, we can cancel $\opp{\merid(x_0)}$ from $r$ to get $r':\merid(x_2)=q$, and define the image to be $\tproj{2n}{(x_2,r')}$.
+
+ When $x_2$ is $x_0$, the hypothesis is $r:\merid(x_0)\ct \opp{\merid(x_0)} = q \ct \opp{\merid(x_1)}$.
+ Rearranging this, we obtain $r'':\merid(x_1)=q$, and we can define the image to be $\tproj{2n}{(x_1,r'')}$.
+
+ Finally, when both $x_1$ and $x_2$ are $x_0$, it suffices to show the resulting $r'$ and $r''$ agree; this is an easy lemma about path composition.
+ This completes the definition of~\eqref{eq:freudmap}.
+ To show that it is a family of equivalences, since being an equivalence is a mere proposition and $x_0:\unit\to X$ is (at least) $(-1)$-connected, it suffices to assume $x_1$ is $x_0$.
+ In this case, inspecting the above construction we see that it is essentially the $2n$-truncation of the function that cancels $\opp{\merid(x_0)}$, which is an equivalence.
+\end{proof}
+
+In addition to~\eqref{eq:freudcodeN} and~\eqref{eq:freudcodeS}, we will need to extract from the construction of $\code$ some information about how it acts on paths.
+For this we use the following lemma.
+
+\begin{lem}\label{thm:freudlemma}
+ Let $A:\UU$, $B:A\to \UU$, and $C:\prd{a:A} B(a)\to\UU$, and also $a_1,a_2:A$ with $m:a_1=a_2$ and $b:B(a_2)$.
+ Then the function
+ \[\transfib{\widehat{C}}{\pairpath(m,t)}{\blank} : C(a_1,\transfib{B}{\opp m}{b}) \to C(a_2,b),\]
+ where $t:\transfib{B}{m}{\transfib{B}{\opp m}{b}} = b$ is the obvious coherence path and $\widehat{C}:(\sm{a:A} B(a)) \to\type$ is the uncurried form of $C$, is equal to the equivalence obtained by univalence from the composite
+ \begin{align}
+ C(a_1,\transfib{B}{\opp m}{b})
+ &= \transfib{\lam{a} B(a)\to \UU}{m}{C(a_1)}(b)
+ \tag{by~\eqref{eq:transport-arrow}}\\
+ &= C(a_2,b). \tag{by $\happly(\apd{C}{m},b)$}
+ \end{align}
+\end{lem}
+\begin{proof}
+ By path induction, we may assume $a_2$ is $a_1$ and $m$ is $\refl{a_1}$, in which case both functions are the identity.
+\end{proof}
+
+We apply this lemma with $A\defeq\susp X$ and $B\defeq \lam{y}(\north=y)$ and $C\defeq\code$, while $a_1\defeq\north$ and $a_2\defeq\south$ and $m\defeq \merid(x_1)$ for some $x_1:X$, and finally $b\defeq q$ is some path $\north=\south$.
+The computation rule for induction over $\susp X$ identifies $\apd{C}{m}$ with a path constructed in a certain way out of univalence and function extensionality.
+The second function described in \cref{thm:freudlemma} essentially consists of undoing these applications of univalence and function extensionality, reducing back to the particular functions~\eqref{eq:freudmap} that we defined using \cref{thm:wedge-connectivity}.
+Therefore, \cref{thm:freudlemma} says that transporting along $\pairpath(q,t)$ essentially recovers these functions.
+
+Finally, by construction, when $x_1$ or $x_2$ coincides with $x_0$ and the input is in the image of $\tproj{2n}{\blank}$, we know more explicitly what these functions are.
+Thus, for any $x_2:X$, we have
+\begin{equation}
+ \transfib{\hat{\code}}{\pairpath(\merid(x_0),t)}{\tproj{2n}{(x_2,r)}}
+ =\tproj{2n}{(x_1,r')}\label{eq:freudcompute1}
+\end{equation}
+where $r:\merid(x_2) \ct \opp{\merid(x_0)} = \transfib{B}{\opp{\merid(x_0)}}{q}$ is arbitrary as before, and $r':\merid(x_2)=q$ is obtained from $r$ by identifying its end point with $q \ct \opp{\merid(x_0)}$ and canceling $\opp{\merid(x_0)}$.
+Similarly, for any $x_1:X$, we have
+\begin{equation}
+ \transfib{\hat{\code}}{\pairpath(\merid(x_1),t)}{\tproj{2n}{(x_0,r)}}
+ = \tproj{2n}{(x_1,r'')}\label{eq:freudcompute2}
+\end{equation}
+where $r:\merid(x_0) \ct \opp{\merid(x_0)} = \transfib{B}{\opp{\merid(x_1)}}{q}$, and $r'':\merid(x_1)=q$ is obtained by identifying its end point and rearranging paths.
+
+\begin{proof}[Proof of \cref{thm:freudenthal}]
+ It remains to show that $\code(y,p)$ is contractible for each $y:\susp X$ and $p:\north=y$.
+ First we must choose a center of contraction, say $c(y,p):\code(y,p)$.
+ This corresponds to the definition of the function $\encode$ in our previous proofs, so we define it by transport.
+ Note that in the special case when $y$ is $\north$ and $p$ is $\refl{\north}$, we have
+ \[\code(\north,\refl{\north}) \jdeq \trunc{2n}{\tsm{x:X} (\merid(x) \ct \opp{\merid(x_0)} = \refl{\north})}.\]
+ Thus, we can choose $c(\north,\refl{\north})\defeq \tproj{2n}{(x_0,\mathsf{rinv}_{\merid(x_0)})}$, where $\mathsf{rinv}_q$ is the obvious path $q\ct\opp q = \refl{}$ for any $q$.
+ We can now obtain $c:\prd{y:\susp X}{p:\north=y} \code(y,p)$ by path induction on $p$, but it will be important below that we can also give a concrete definition in terms of transport:
+ \[ c(y,p) \defeq \transfib{\hat{\code}}{\pairpath(p,\mathsf{tid}_p)}{c(\north,\refl{\north})}
+ \]
+ where $\hat{\code}: \big(\sm{y:\susp X} (\north=y)\big) \to \type$ is the uncurried version of \code, and $\mathsf{tid}_p:\trans{p}{\refl{}} = p$ is a standard lemma.
+
+ Next, we must show that every element of $\code(y,p)$ is equal to $c(y,p)$.
+ Again, by path induction, it suffices to assume $y$ is $\north$ and $p$ is $\refl{\north}$.
+ In fact, we will prove it more generally when $y$ is $\north$ and $p$ is arbitrary.
+ That is, we will show that for any $p:\north=\north$ and $d:\code(\north,p)$ we have $d = c(\north,p)$.
+ Since this equality is a $(2n-1)$-type, we may assume $d$ is of the form $\tproj{2n}{(x_1,r)}$ for some $x_1:X$ and $r:\merid(x_1) \ct \opp{\merid(x_0)} = p$.
+
+ Now by a further path induction, we may assume that $r$ is reflexivity, and $p$ is $\merid(x_1) \ct \opp{\merid(x_0)}$.
+ (This is why we generalized to arbitrary $p$ above.)
+ Thus, we have to prove that
+ \begin{equation}
+ \tproj{2n}{(x_1, \refl{\merid(x_1) \ct \opp{\merid(x_0)}})}
+ \;=\;
+ c\left(\north,\refl{\merid(x_1) \ct \opp{\merid(x_0)}}\right).\label{eq:freudgoal}
+ \end{equation}
+ By definition, the right-hand side of this equality is
+ \begin{multline*}
+ \Transfib{\hat{\code}}{\pairpath(\merid(x_1) \ct \opp{\merid(x_0)}, \nameless)}{\tproj{2n}{(x_0,\nameless)}} \\
+ = \transfibf{\hat{\code}}
+ \begin{aligned}[t]
+ \Big(
+ &{\pairpath(\opp{\merid(x_0)}, \nameless)},\\
+ &{\Transfib{\hat{\code}}{\pairpath(\merid(x_1), \nameless)}{\tproj{2n}{(x_0,\nameless)}}}
+ \Big)
+ \end{aligned}
+ \\
+ = \Transfib{\hat{\code}}{\pairpath(\opp{\merid(x_0)}, \nameless)}{\tproj{2n}{(x_1,\nameless)}}
+ = \tproj{2n}{(x_1,\nameless)}
+ \end{multline*}
+ where the underscore $\nameless$ ought to be filled in with suitable coherence paths.
+ Here the first step is functoriality of transport, the second invokes~\eqref{eq:freudcompute2}, and the third invokes~\eqref{eq:freudcompute1} (with transport moved to the other side).
+ Thus we have the same first component as the left-hand side of~\eqref{eq:freudgoal}.
+ We leave it to the reader to verify that the coherence paths all cancel, giving reflexivity in the second component.
+\end{proof}
+
+% As a corollary, we have the following equivalence.
+
+\begin{cor}[Freudenthal Equivalence] \label{cor:freudenthal-equiv}
+Suppose that $X$ is $n$-connected and pointed, with $n\geq 0$.
+Then $\eqv{\trunc{2n}{X}}{\trunc{2n}{\Omega\susp(X)}}$.
+\end{cor}
+\begin{proof}
+By \cref{thm:freudenthal}, $\sigma$ is $2n$-connected. By
+\cref{lem:connected-map-equiv-truncation}, it is therefore an
+equivalence on $2n$-truncations.
+\end{proof}
+
+\index{encode-decode method|)}%
+
+\index{Freudenthal suspension theorem|)}%
+\index{theorem!Freudenthal suspension|)}%
+
+\index{homotopy!group!of sphere}%
+\index{stability!of homotopy groups of spheres}%
+\index{type!n-sphere@$n$-sphere}%
+One important corollary of the Freudenthal suspension theorem is that the homotopy groups of
+spheres are stable in a certain range (these are the northeast-to-southwest diagonals
+in \cref{tab:homotopy-groups-of-spheres}):
+
+\begin{cor}[Stability for Spheres] \label{cor:stability-spheres}
+If $k \le 2n-2$, then $\pi_{k+1}(\Sn^{n+1}) = \pi_{k}(\Sn^{n})$.
+\end{cor}
+\begin{proof}
+Assume $k \le 2n-2$.
+%
+By \cref{cor:sn-connected}, $\Sn ^{n}$ is $\nminusone$-connected. Therefore,
+by \cref{cor:freudenthal-equiv},
+\[
+\trunc{2(n-1)}{\Omega(\susp(\Sn^{n}))} = \trunc{2(n-1)}{\Sn^{n}}.
+\]
+By \cref{lem:truncation-le}, because $k \le 2(n-1)$, applying $\trunc{k}{\blank}$
+to both sides shows that this equation holds for $k$:
+\begin{equation}\label{eq:freudenthal-for-spheres}
+\trunc{k}{\Omega(\susp(\Sn^{n}))} = \trunc{k}{\Sn^{n}}.
+\end{equation}
+%
+Then, the main idea of the proof is as follows; we omit checking that these
+equivalences act appropriately on the base points of these spaces, and that for
+$k > 0$ the equivalences respect multiplication:
+%
+\begin{align*}
+\pi_{k+1}(\Sn^{n+1}) &\jdeq \trunc{0}{\Omega^{k+1}(\Sn^{n+1})} \\
+ &\jdeq \trunc{0}{\Omega^k(\Omega(\Sn^{n+1}))} \\
+ &\jdeq \trunc{0}{\Omega^k(\Omega(\susp(\Sn^{n})))} \\
+ &= \Omega^k(\trunc{k}{(\Omega(\susp(\Sn^{n})))})
+ \tag{by \cref{thm:path-truncation}}\\
+ &= \Omega^k(\trunc{k}{\Sn^{n}})
+ \tag{by \eqref{eq:freudenthal-for-spheres}}\\
+ &= \trunc{0}{\Omega^k(\Sn^{n})}
+ \tag{by \cref{thm:path-truncation}}\\
+ &\jdeq \pi_k(\Sn^{n}). \qedhere
+\end{align*}
+%
+\end{proof}
+
+This means that once we have calculated one entry in one of these stable
+diagonals, we know all of them. For example:
+\begin{thm}\label{thm:pinsn}
+$\pi_n(\Sn^n)=\Z$ for every $n\geq 1$.
+\end{thm}
+
+\begin{proof}
+The proof is by induction on $n$. We already have $\pi_1(\Sn ^1) = \Z$
+(\cref{cor:pi1s1}) and $\pi_2(\Sn ^2) = \Z$ (\cref{cor:pis2-hopf}).
+When $n \ge 2$, we have $n \le 2n - 2$. Therefore, by
+\cref{cor:stability-spheres}, $\pi_{n+1}(\Sn^{n+1}) = \pi_{n}(\Sn^{n})$, and
+this equivalence, combined with the inductive hypothesis, gives the result.
+\end{proof}
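The arithmetic driving this induction --- that the diagonal $k=n$ enters the stable range $k\le 2n-2$ precisely from $n=2$ onward --- can be checked mechanically. A minimal sketch in Python; the function names are ours, purely illustrative:

```python
def in_stable_range(k, n):
    # hypothesis of the stability corollary: pi_{k+1}(S^{n+1}) = pi_k(S^n)
    return k <= 2 * n - 2

def stable_diagonal(k0, n0, steps):
    # walk northeast along a diagonal (k0+i, n0+i); once the condition
    # holds it holds for every later entry, since 2n-2 grows twice as
    # fast as k does
    return [(k0 + i, n0 + i, in_stable_range(k0 + i, n0 + i))
            for i in range(steps)]

# the diagonal k = n used for pi_n(S^n) is stable from n = 2 onward
assert all(in_stable_range(n, n) for n in range(2, 50))
assert not in_stable_range(1, 1)
```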
+
+\begin{cor}
+ $\Sn^{n+1}$ is not an $n$-type for any $n\ge -1$.
+\end{cor}
+
+\begin{cor}\label{thm:pi3s2}
+ $\pi_3(\Sn^2)=\Z$.
+\end{cor}
+\begin{proof}
+ By \cref{cor:pis2-hopf}, $\pi_3(\Sn^2) = \pi_3(\Sn^3)$.
+ But by \cref{thm:pinsn}, $\pi_3(\Sn^3) = \Z$.
+\end{proof}
+
+\section{The van Kampen theorem}
+\label{sec:van-kampen}
+
+\index{van Kampen theorem|(}%
+\index{theorem!van Kampen|(}%
+
+\index{fundamental!group}%
+The van Kampen theorem calculates the fundamental group $\pi_1$ of a (homotopy) pushout of spaces.
+It is traditionally stated for a topological space $X$ which is the union of two open subspaces $U$ and $V$, but in homotopy-theoretic terms this is just a convenient way of ensuring that $X$ is the pushout of $U$ and $V$ over their intersection.
+Thus, we will prove a version of the van Kampen theorem for arbitrary pushouts.
+
+In this section we will describe a proof of the van Kampen theorem which uses the same encode-decode method that we used for $\pi_1(\Sn^1)$ in \cref{sec:pi1-s1-intro}.
+There is also a more homotopy-theoretic approach; see \cref{ex:rezk-vankampen}.
+
+We need a more refined version of the encode-decode method.
+In \cref{sec:pi1-s1-intro} (as well as in \cref{sec:compute-coprod,sec:compute-nat}) we used it to characterize the path space of a (higher) inductive type $W$ --- deriving as a consequence a characterization of the loop space $\Omega(W)$, and thereby also of its 0-truncation $\pi_1(W)$.
+In the van Kampen theorem, our goal is only to characterize the fundamental group $\pi_1(W)$, and we do not have any explicit description of the loop spaces or the path spaces to use.
+
+It turns out that we can use the same technique directly for a truncated version of the path fibration, thereby characterizing not only the fundamental \emph{group} $\pi_1(W)$, but also the whole fundamental \emph{groupoid}.
+\index{fundamental!pregroupoid}%
+Spe\-cif\-ical\-ly, for a type $X$, write $\Pi_1 X: X\to X\to \type$ for the $0$-truncation of its identity type, i.e.\ $\Pi_1 X(x,y) \defeq \trunc0{x=y}$.
+Note that we have induced groupoid operations
+\begin{align*}
+ (\blank\ct\blank) &\;:\; \Pi_1X(x,y) \to \Pi_1X(y,z) \to \Pi_1X(x,z)\\
+ \opp{(\blank)} &\;:\; \Pi_1X(x,y) \to \Pi_1X(y,x)\\
+ \refl{x} &\;:\; \Pi_1X(x,x)\\
+ \apfunc{f} &\;:\; \Pi_1X(x,y) \to \Pi_1Y(fx,fy)
+\end{align*}
+for which we use the same notation as the corresponding operations on paths.
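These operations are induced from the corresponding operations on paths by induction on truncation; for instance (a routine unfolding, not spelled out above, and well-defined because the codomains are $0$-truncated):

```latex
\begin{align*}
  \tproj0p \ct \tproj0q &\defeq \tproj0{p\ct q}\\
  \opp{(\tproj0p)} &\defeq \tproj0{\opp p}\\
  \apfunc{f}(\tproj0p) &\defeq \tproj0{\apfunc{f}(p)}
\end{align*}
```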
+
+\subsection{Naive van Kampen}
+\label{sec:naive-vankampen}
+
+We begin with a ``naive'' version of the van Kampen theorem, which is useful but not quite as useful as the classical version.
+In \cref{sec:better-vankampen} we will improve it to a more useful version.
+
+\index{encode-decode method|(}%
+
+Given types $A,B,C$ and functions $f:A\to B$ and $g:A\to C$, let $P$ be their pushout $B\sqcup^A C$.
+As we saw in \cref{sec:colimits}, $P$ is the higher inductive type generated by
+\begin{itemize}
+\item $i:B\to P$,
+\item $j:C\to P$, and
+\item for all $x:A$, a path $h x:ifx = jgx$.
+\end{itemize}
+Define $\code:P\to P\to \type$ by double induction on $P$ as follows.
+\begin{itemize}
+\item $\code(ib,ib')$ is a set-quotient (see \cref{sec:set-quotients}) of the type of sequences %pairs $(\vec a, \vec a', \vec p, \vec q)$
+ \[ (b, p_0, x_1, q_1, y_1, p_1, x_2, q_2, y_2, p_2, \dots, y_n, p_n, b') \]
+ where
+ \begin{itemize}
+ \item $n:\mathbb{N}$
+ \item $x_k:A$ and $y_k:A$ for $1\le k\le n$
+ \item $p_0:\Pi_1B(b,fx_1)$ for $n>0$, and $p_0:\Pi_1B(b,b')$ for $n=0$
+ \item $p_k:\Pi_1B(f y_k, fx_{k+1})$ for $1\le k < n$
+ \item $p_n:\Pi_1B(f y_n, b')$ for $n>0$
+ \item $q_k:\Pi_1C(gx_k, gy_k)$ for $1\le k\le n$
+ \end{itemize}
+ The quotient is generated by the following equalities:
+ \begin{align*}
+ (\dots, q_k, y_k, \refl{fy_k}, y_k, q_{k+1},\dots)
+ &= (\dots, q_k\ct q_{k+1},\dots)\\
+ (\dots, p_k, x_k, \refl{gx_k}, x_k, p_{k+1},\dots)
+ &= (\dots, p_k\ct p_{k+1},\dots)
+ \end{align*}
+ (see \cref{rmk:naive} below).
+ We leave it to the reader to define this type of sequences precisely as an inductive type.
+\item $\code(jc,jc')$ is identical, with the roles of $B$ and $C$ reversed.
+ We likewise notationally reverse the roles of $x$ and $y$, and of $p$ and $q$.
+\item $\code(ib,jc)$ and $\code(jc,ib)$ are similar, with the parity changed so that they start in one type and end in the other.
+\item For $a:A$ and $b:B$, we require an equivalence
+ \begin{equation}
+ \code(ib, ifa) \eqvsym \code(ib,jga).\label{eq:bfa-bga}
+ \end{equation}
+ We define this to consist of the two functions defined on sequences by
+ \begin{align*}
+ (\dots, y_n, p_n,fa) &\mapsto (\dots,y_n,p_n,a,\refl{ga},ga),\\
+ (\dots, x_n, q_n, a, \refl{fa}, fa) &\mapsfrom (\dots, x_n, q_n, ga).
+ \end{align*}
+ Both of these functions are easily seen to respect the equivalence relations, and hence to define functions on the types of codes.
+ The left-to-right-to-left composite is
+ \[ (\dots, y_n, p_n,fa) \mapsto
+ (\dots,y_n,p_n,a,\refl{ga},a,\refl{fa},fa)
+ \]
+ which is equal to the identity by a generating equality of the quotient.
+ The other composite is analogous.
+ Thus we have defined an equivalence~\eqref{eq:bfa-bga}.
+\item Similarly, we require equivalences
+ \begin{align*}
+ \code(jc,ifa) &\eqvsym \code(jc,jga)\\
+ \code(ifa,ib)&\eqvsym \code(jga,ib)\\
+ \code(ifa,jc)&\eqvsym \code(jga,jc)
+ \end{align*}
+ all of which are defined in exactly the same way (the second two by adding reflexivity terms on the beginning rather than the end).
+\item Finally, we need to know that for $a,a':A$, the following diagram commutes:
+ \begin{equation}\label{eq:bfa-bga-comm}
+ \vcenter{\xymatrix{
+ \code(ifa,ifa') \ar[r]\ar[d] &
+ \code(ifa,jga')\ar[d]\\
+ \code(jga,ifa')\ar[r] &
+ \code(jga,jga')
+ }}
+ \end{equation}
+ This amounts to saying that if we add something to the beginning and then something to the end of a sequence, we might as well have done it in the other order.
+\end{itemize}
+
+\begin{rmk}\label{rmk:naive}
+ One might expect to see in the definition of \code some additional generating equations for the set-quotient, such as
+ \begin{align*}
+ (\dots, p_{k-1} \ct fw, x_{k}', q_{k}, \dots) &=
+ (\dots, p_{k-1}, x_{k}, gw \ct q_{k}, \dots)
+ \tag{for $w:\Pi_1A(x_{k},x_{k}')$}\\
+ (\dots, q_k \ct gw, y_k', p_k, \dots) &=
+ (\dots, q_k, y_k, fw \ct p_k, \dots).
+ \tag{for $w:\Pi_1A(y_k, y_k')$}
+ \end{align*}
+ However, these are not necessary!
+ In fact, they follow automatically by path induction on $w$.
+ This is the main difference between the ``naive'' van Kampen theorem and the more refined one we will consider in the next subsection.
+\end{rmk}
+
+Continuing on, we can characterize transporting in the fibration $\code$:
+\begin{itemize}
+\item For $p:b=_B b'$ and $u:P$, we have
+ \[ \mathsf{transport}^{b\mapsto \code(u,ib)}(p, (\dots, y_n,p_n,b))
+ = (\dots,y_n,p_n\ct p,b').
+ \]
+\item For $q:c=_C c'$ and $u:P$, we have
+ \[ \mathsf{transport}^{c\mapsto \code(u,jc)}(q, (\dots, x_n,q_n,c))
+ = (\dots,x_n,q_n\ct q,c').
+ \]
+\end{itemize}
+Here we are abusing notation by using the same name for a path in $X$ and its image in $\Pi_1X$.
+Note that transport in $\Pi_1X$ is also given by concatenation with (the image of) a path.
+From this we can prove the above statements by induction on $u$.
+We also have:
+\begin{itemize}
+\item For $a:A$ and $u:P$,
+ \[ \mathsf{transport}^{v\mapsto \code(u,v)}(ha, (\dots, y_n,p_n,fa))
+ = (\dots,y_n,p_n,a,\refl{ga},ga).
+ \]
+\end{itemize}
+This follows essentially from the definition of $\code$.
+
+We also construct a function
+\[ r : \prd{u:P} \code(u,u) \]
+by induction on $u$ as follows:
+\begin{align*}
+ rib &\defeq (b,\refl{b},b)\\
+ rjc &\defeq (c,\refl{c},c)
+\end{align*}
+and for the path constructor $h a$ we take the composite equality
+\begin{align*}
+ (ha,ha)_* (fa,\refl{fa},fa)
+ &= (ga,\refl{ga},a,\refl{fa},a,\refl{ga},ga) \\
+ &= (ga,\refl{ga},ga)
+\end{align*}
+where the first equality is by the observation above about transporting in $\code$, and the second is an instance of the set quotient relation used to define $\code$.
+
+We will now prove:
+\begin{thm}[Naive van Kampen theorem]\label{thm:naive-van-kampen}
+ For all $u,v:P$ there is an equivalence
+ \[ \Pi_1P(u,v) \eqvsym \code(u,v). \]
+\end{thm}
+\begin{proof}
+
+To define a function
+\[ \encode : \Pi_1P(u,v) \to \code(u,v) \]
+it suffices to define a function $(u=_P v) \to \code(u,v)$,
+since $\code(u,v)$ is a set.
+We do this by transport:
+\[\encode(p) \defeq \mathsf{transport}^{v\mapsto \code(u,v)}(p,r(u)).\]
+Now to define
+\[ \decode: \code(u,v) \to \Pi_1P(u,v) \]
+we proceed as usual by induction on $u,v:P$.
+In each case for $u$ and $v$, we apply $i$ or $j$ to all the equalities $p_k$ and $q_k$ as appropriate and concatenate the results in $P$, using $h$ to identify the endpoints.
+For instance, when $u\jdeq ib$ and $v\jdeq ib'$, we define
+\begin{narrowmultline}\label{eq:decode}
+ \decode(b, p_0, x_1, q_1, y_1, p_1, \dots, y_n, p_n, b') \defeq\narrowbreak
+ i(p_0)\ct h(x_1) \ct j(q_1) \ct \opp{h(y_1)} \ct i(p_1) \ct \cdots \ct \opp{h(y_n)}\ct i(p_n).
+\end{narrowmultline}
+This respects the set-quotient equivalence relation and the equivalences such as~\eqref{eq:bfa-bga}, since $h: i\circ f \htpy j\circ g$ is natural and $f$ and $g$ are functorial.
+
+As usual, to show that the composite
+\[ \Pi_1P(u,v) \xrightarrow{\encode} \code(u,v) \xrightarrow{\decode} \Pi_1P(u,v) \]
+is the identity, we first peel off the 0-truncation (since the codomain is a set) and then apply path induction.
+The input $\refl{u}$ goes to $ru$, which then goes back to $\refl u$ (applying a further induction on $u$ to decompose $\decode(ru)$).
+
+Finally, consider the composite
+\[ \code(u,v) \xrightarrow{\decode} \Pi_1P(u,v) \xrightarrow{\encode} \code(u,v). \]
+We proceed by induction on $u,v:P$.
+When $u\jdeq ib$ and $v\jdeq ib'$, this composite is
+%
+\begin{narrowmultline*}
+(b, p_0, x_1, q_1, y_1, p_1, \dots, y_n, p_n, b')
+\narrowbreak
+\begin{aligned}[t]
+ &\mapsto \Big(ip_0\ct hx_1 \ct jq_1 \ct \opp{hy_1} \ct ip_1 \ct \cdots \ct \opp{hy_n}\ct ip_n\Big)_*(rib)\\
+ &= (ip_n)_* \cdots(jq_1)_* (hx_1)_*(ip_0)_*(b,\refl{b},b)\\
+ &= (ip_n)_* \cdots(jq_1)_* (hx_1)_*(b,p_0,ifx_1)\\
+ &= (ip_n)_* \cdots(jq_1)_* (b,p_0,x_1,\refl{gx_1},jgx_1)\\
+ &= (ip_n)_* \cdots (b,p_0,x_1,q_1,jgy_1)\\
+ &= \quad\vdots\\
+ &= (b, p_0, x_1, q_1, y_1, p_1, \dots, y_n, p_n, b').
+\end{aligned}
+\end{narrowmultline*}
+%
+i.e., the identity function.
+(To be precise, there is an implicit inductive argument needed here.)
+The other three point cases are analogous, and the path cases are trivial since all the types are sets.
+\end{proof}
+
+\index{encode-decode method|)}%
+
+\cref{thm:naive-van-kampen} allows us to calculate the fundamental groups of many types, provided $A$ is a set,
+for in that case, each $\code(u,v)$ is, by definition, a set-quotient of a \emph{set} by a relation.
+
+\begin{eg}\label{eg:circle}
+ Let $A\defeq \bool$, $B\defeq\unit$, and $C\defeq \unit$.
+ Then $P \eqvsym S^1$.
+ Inspecting the definition of, say, $\code(i(\ttt),i(\ttt))$, we see that the paths may as well all be trivial, so the only information is in the sequence of elements $x_1,y_1,\dots,x_n,y_n: \bool$.
+ Moreover, if we have $x_k=y_k$ or $y_k=x_{k+1}$ for any $k$, then the set-quotient relations allow us to excise both of those elements.
+ Thus, every such sequence is equal to a canonical \emph{reduced} one in which no two adjacent elements are equal.
+ Clearly such a reduced sequence is uniquely determined by its length (a natural number $n$) together with, if $n>0$, the information of whether $x_1$ is $\bfalse$ or $\btrue$, since that determines the rest of the sequence uniquely.
+ And these data can, of course, be identified with an integer, where $n$ is the absolute value and $x_1$ encodes the sign.
+ Thus we recover $\pi_1(S^1)\cong \Z$.
+\end{eg}
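The reduction described in this example is easy to make algorithmic. A minimal sketch in Python, with illustrative names of our own: booleans stand for elements of $\bool$, a stack pass performs the cancellations, and the reduced sequence is read off as an integer:

```python
def reduce_code(seq):
    # cancel adjacent equal elements (the set-quotient relations
    # x_k = y_k and y_k = x_{k+1}); a single stack pass computes
    # the unique reduced sequence
    out = []
    for s in seq:
        if out and out[-1] == s:
            out.pop()
        else:
            out.append(s)
    return out

def code_to_int(seq):
    # a reduced sequence (x_1, y_1, ..., x_n, y_n) alternates, so it
    # is determined by its length n and by x_1, which we read as a sign
    r = reduce_code(seq)
    n = len(r) // 2
    if n == 0:
        return 0
    return n if r[0] else -n

assert code_to_int([True, False, True, False]) == 2   # already reduced
assert code_to_int([True, False, False, True]) == 0   # everything cancels
```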
+
+Since \cref{thm:naive-van-kampen} asserts only a bijection of families of sets, this isomorphism $\pi_1(S^1)\cong \Z$ is likewise only a bijection of sets.
+We could, however, define a concatenation operation on $\code$ (by concatenating sequences) and show that $\encode$ and $\decode$ form an isomorphism respecting this structure.
+(In the language of \cref{cha:category-theory}, these would be ``pregroupoids''.)
+We leave the details to the reader.
+
+\index{fundamental!group|(}%
+
+\begin{eg}\label{eg:suspension}
+ More generally, let $B\defeq\unit$ and $C\defeq \unit$ but $A$ be arbitrary, so that $P$ is the suspension of $A$.
+ Then once again the paths $p_k$ and $q_k$ are trivial, so that the only information in a path code is a sequence of elements $x_1,y_1,\dots,x_n,y_n: A$.
+ The first two generating equalities say that adjacent equal elements can be canceled, so it makes sense to think of this sequence as a word of the form
+ \[ x_1 y_1^{-1} x_2 y_2^{-1} \cdots x_n y_n^{-1} \]
+ in a group.
+ Indeed, it looks similar to the free group on $A$ (or equivalently on $\trunc0A$; see \cref{thm:freegroup-nonset}), but we are considering only words that start with a non-inverted element, alternate between inverted and non-inverted elements, and end with an inverted one.
+ This effectively reduces the size of the generating set by one.
+ For instance, if $A$ has a point $a:A$, then we can identify $\pi_1(\susp A)$ with the group presented by $\trunc0A$ as generators with the relation $\tproj0a = e$; see \cref{ex:vksusppt,ex:vksuspnopt} for details.
+\end{eg}
+
+\begin{eg}\label{eg:wedge}
+ Let $A\defeq\unit$ and $B$ and $C$ be arbitrary, so that $f$ and $g$ simply equip $B$ and $C$ with basepoints $b$ and $c$, say.
+ Then $P$ is the \emph{wedge} $B\vee C$ of $B$ and $C$ (the coproduct in the category of based spaces).
+ In this case, it is the elements $x_k$ and $y_k$ which are trivial, so that the only information is a sequence of loops $(p_0,q_1,p_1,\dots,p_n)$ with $p_k:\pi_1(B,b)$ and $q_k:\pi_1(C,c)$.
+ Such sequences, modulo the equivalence relation we have imposed, are easily identified with the explicit description of the \emph{free product} of the groups $\pi_1(B,b)$ and $\pi_1(C,c)$, as constructed in \cref{sec:free-algebras}.
+ Thus, we have $\pi_1(B\vee C) \cong \pi_1(B) * \pi_1(C)$.
+\end{eg}
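The normal form for elements of the free product mentioned here --- alternating sequences of nonidentity elements, multiplied by concatenating and merging --- can be sketched as follows, taking both factors to be $\Z$ for concreteness (a hypothetical illustration, not a construction from the text):

```python
def normalize(word):
    # word: list of (factor, element) pairs with factor in {0, 1} and
    # element an integer; identity letters are dropped and adjacent
    # letters from the same factor are merged, cascading as needed
    out = []
    for f, g in word:
        if g == 0:
            continue
        if out and out[-1][0] == f:
            h = out[-1][1] + g
            out.pop()
            if h != 0:
                out.append((f, h))
        else:
            out.append((f, g))
    return out

def multiply(w1, w2):
    # multiplication in the free product Z * Z: concatenate, normalize
    return normalize(w1 + w2)

# (p q) * (q^-1 p^-1) = e, writing p, q for generators of the two factors
assert multiply([(0, 1), (1, 1)], [(1, -1), (0, -1)]) == []
```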
+
+\index{fundamental!group|)}%
+
+However, \cref{thm:naive-van-kampen} stops just short of being the full classical van Kampen theorem, which handles
+the case where $A$ is not necessarily a set,
+and states that $\pi_1(B\sqcup^A C) \cong \pi_1(B) *_{\pi_1(A)} \pi_1(C)$ (with base point coming from $A$).
+Indeed, the conclusion of \cref{thm:naive-van-kampen} says nothing at all about $\pi_1(A)$; the paths in $A$ are ``built into the quotienting'' in a type-theoretic way that makes it hard to extract explicit information, in that $\code(u,v)$ is a set-quotient of a non-set by a relation.
+For this reason, in the next subsection we consider a better version of the van Kampen theorem.
+
+
+\subsection{The van Kampen theorem with a set of basepoints}
+\label{sec:better-vankampen}
+
+\index{basepoint!set of}%
+The improvement of van Kampen we present now is closely analogous to a similar improvement in classical algebraic topology, where $A$ is equip\-ped with a \emph{set $S$ of base points}.
+In fact, it turns out to be unnecessary for our proof to assume that the ``set of basepoints'' is a \emph{set} --- it might just as well be an arbitrary type; the utility of assuming $S$ is a set arises later, when applying the theorem to obtain computations.
+What is important is that $S$ contains at least one point in each connected component of $A$.
+We state this in type theory by saying that we have a type $S$ and a function $k:S \to A$ which is surjective, i.e.\ $(-1)$-connected.
+If $S\jdeq A$ and $k$ is the identity function, then we will recover the naive van Kampen theorem.
+Another example to keep in mind is when $A$ is pointed and (0-)connected, with $k:\unit\to A$ the point: by \cref{thm:minusoneconn-surjective,thm:connected-pointed} this map is surjective just when $A$ is 0-connected.
+
+Let $A,B,C,f,g,P,i,j,h$ be as in the previous section.
+We now define, given our surjective map $k:S\to A$, an auxiliary type which improves the connectedness of $k$.
+Let $T$ be the higher inductive type generated by
+\begin{itemize}
+\item A function $\ell:S\to T$, and
+\item For each $s,s':S$, a function $m:(\id[A]{ks}{ks'}) \to (\id[T]{\ell s}{\ell s'})$.
+\end{itemize}
+There is an obvious induced function $\kbar:T\to A$ such that $\kbar \ell = k$, and any $p:ks=ks'$ is equal to the composite $ks = \kbar \ell s \overset{\kbar m p}{=} \kbar \ell s' = k s'$.
+
+\begin{lem}\label{thm:kbar}
+ $\kbar$ is 0-connected.
+\end{lem}
+\begin{proof}
+ We must show that for all $a:A$, the 0-truncation of the type $\sm{t:T}(\kbar t = a)$ is contractible.
+ Since contractibility is a mere proposition and $k$ is $(-1)$-connected, we may assume that $a=ks$ for some $s:S$.
+ Now we can take the center of contraction to be $\tproj0{(\ell s,q)}$ where $q$ is the equality $\kbar\ell s = k s$.
+
+ It remains to show that for any $\phi:\trunc0{\sm{t:T} (\kbar t = ks)}$ we have $\phi = \tproj0{(\ell s,q)}$.
+ Since the latter is a mere proposition, and in particular a set, we may assume that $\phi=\tproj0{(t,p)}$ for $t:T$ and $p:\kbar t = ks$.
+
+ Now we can do induction on $t:T$.
+ If $t\jdeq\ell s'$, then $ks' = \kbar \ell s' \overset{p}{=} ks$ yields via $m$ an equality $\ell s' = \ell s$.
+ Hence by definition of $\kbar$ and of equality in homotopy fibers, we obtain an equality $(\ell s',p) = (\ell s,q)$, and thus $\tproj0{(\ell s',p)} = \tproj0{(\ell s,q)}$.
+ Next we must show that as $t$ varies along $m$ these equalities agree.
+ But they are equalities in a set (namely $\trunc0{\sm{t:T} (\kbar t = ks)}$), and hence this is automatic.
+\end{proof}
+
+\begin{rmk}
+ \index{kernel!pair}%
+ $T$ can be regarded as the (homotopy) coequalizer of the ``kernel pair'' of $k$.
+ If $S$ and $A$ were sets, then the $(-1)$-connectivity of $k$ would imply that $A$ is the $0$-truncation of this coequalizer (see \cref{cha:set-math}).
+ For general types, higher topos theory suggests that $(-1)$-con\-nec\-tiv\-i\-ty of $k$ will imply instead that $A$ is the colimit (a.k.a.\ ``geometric realization'') of the ``simplicial kernel'' of $k$.
+ \index{.infinity1-topos@$(\infty,1)$-topos}%
+ \index{geometric realization}%
+ \index{simplicial!kernel}%
+ \index{kernel!simplicial}%
+ The type $T$ is the colimit of the ``1-skeleton'' of this simplicial kernel, so it makes sense that it improves the connectivity of $k$ by $1$.
+ More generally, we might expect the colimit of the $n$-skeleton\index{skeleton!of a CW-complex} to improve connectivity by $n$.
+\end{rmk}
+
+\index{encode-decode method|(}%
+
+Now we define $\code:P\to P\to \type$ by double induction as follows.
+\begin{itemize}
+\item $\code(ib,ib')$ is now a set-quotient of the type of sequences
+ \[ (b, p_0, x_1, q_1, y_1, p_1, x_2, q_2, y_2, p_2, \dots, y_n, p_n, b') \]
+ where
+ \begin{itemize}
+ \item $n:\mathbb{N}$,
+ \item $x_k:S$ and $y_k:S$ for $1\le k\le n$,
+ \item $p_0:\Pi_1B(b,fkx_1)$ for $n>0$, and $p_0:\Pi_1B(b,b')$ for $n=0$,
+ \item $p_k:\Pi_1B(fk y_k, fkx_{k+1})$ for $1\le k < n$,
+ \item $p_n:\Pi_1B(fk y_n, b')$ for $n>0$,
+ \item $q_k:\Pi_1C(gkx_k, gky_k)$ for $1\le k\le n$.
+ \end{itemize}
+ The quotient is generated by the following equalities (see \cref{rmk:naive}):
+ \begin{align*}
+ (\dots, q_k, y_k, \refl{fky_k}, y_k, q_{k+1},\dots)
+ &= (\dots, q_k\ct q_{k+1},\dots)\\
+ (\dots, p_k, x_k, \refl{gkx_k}, x_k, p_{k+1},\dots)
+ &= (\dots, p_k\ct p_{k+1},\dots)\\
+ (\dots, p_{k-1} \ct fw, x_{k}', q_{k}, \dots) &=
+ (\dots, p_{k-1}, x_{k}, gw \ct q_{k}, \dots)
+ \tag{for $w:\Pi_1A(kx_{k},kx_{k}')$}\\
+ (\dots, q_k \ct gw, y_k', p_k, \dots) &=
+ (\dots, q_k, y_k, fw \ct p_k, \dots).
+ \tag{for $w:\Pi_1A(ky_k, ky_k')$}
+ \end{align*}
+ We will need below the definition of the case of $\decode$ on such a sequence, which as before concatenates all the paths $p_k$ and $q_k$ together with instances of $h$ to give an element of $\Pi_1P(ifb,ifb')$, cf.~\eqref{eq:decode}.
+ As before, the other three point cases are nearly identical.
+\item For $a:A$ and $b:B$, we require an equivalence
+ \begin{equation}
+ \code(ib, ifa) \eqvsym \code(ib,jga).\label{eq:bfa-bga2}
+ \end{equation}
+ Since $\code$ is set-valued, by \cref{thm:kbar} we may assume that $a=\kbar t$ for some $t:T$.
+ Next, we can do induction on $t$.
+ If $t\jdeq \ell s$ for $s:S$, then we define~\eqref{eq:bfa-bga2} as in \cref{sec:naive-vankampen}:
+ \begin{align*}
+ (\dots, y_n, p_n,fks) &\mapsto (\dots,y_n,p_n,s,\refl{gks},gks),\\
+ (\dots, x_n, q_n, s, \refl{fks}, fks) &\mapsfrom (\dots, x_n, q_n, gks).
+ \end{align*}
+ These respect the equivalence relations, and define quasi-inverses just as before.
+ Now suppose $t$ varies along $m_{s,s'}(w)$ for some $w:ks=ks'$; we must show that~\eqref{eq:bfa-bga2} respects transporting along $\kbar mw$.
+ By definition of $\kbar$, this essentially boils down to transporting along $w$ itself.
+ By the characterization of transport in path types, what we need to show is that
+ \[ w_*(\dots, y_n, p_n,fks) = (\dots,y_n, p_n \ct fw, fks') \]
+ is mapped by~\eqref{eq:bfa-bga2} to
+ \[ w_*(\dots,y_n,p_n,s,\refl{gks},gks) = (\dots, y_n, p_n, s, \refl{gks} \ct gw, gks'). \]
+ But this follows directly from the new generators we have imposed on the set-quotient relation defining \code.
+\item The other three requisite equivalences are defined similarly.
+\item Finally, since the commutativity~\eqref{eq:bfa-bga-comm} is a mere proposition, by $(-1)$-connectedness of $k$ we may assume that $a=ks$ and $a'=ks'$, in which case it follows exactly as before.
+\end{itemize}
+
+\begin{thm}[van Kampen with a set of basepoints]\label{thm:van-Kampen}
+ For all $u,v:P$ there is an equivalence
+ \[ \Pi_1P(u,v) \eqvsym \code(u,v), \]
+ with \code defined as in this section.
+\end{thm}
+
+\begin{proof}
+ Basically just like before.
+ To show that $\decode$ respects the new generators of the quotient relation, we use the naturality of $h$.
+ And to show that $\decode$ respects the equivalences such as~\eqref{eq:bfa-bga2}, we need to use the connectivity of $\kbar$ and induct on $T$ in order to decompose those equivalences into their definitions, but then it becomes again simply functoriality of $f$ and $g$.
+ The rest is easy.
+ In particular, no additional argument is required for $\encode\circ\decode$, since the goal is to prove an equality in a set, and so the case of $h$ is trivial.
+\end{proof}
+
+\index{encode-decode method|)}%
+
+\index{fundamental!group|(}%
+\cref{thm:van-Kampen} allows us to calculate the fundamental group of the pushout~$P$,
+even when $A$ is not a set, provided $S$ is a set, for in that case,
+each $\code(u,v)$ is, by definition, a set-quotient of a \emph{set} by a
+relation. In that respect, it is an improvement over
+\cref{thm:naive-van-kampen}.
+
+\begin{eg}\label{eg:clvk}
+ Suppose $S\defeq \unit$, so that $A$ has a basepoint $a \defeq k(\ttt)$ and is connected.
+ Then the codes for loops in the pushout can be identified with alternating sequences of loops in $\pi_1(B,f(a))$ and $\pi_1(C,g(a))$, modulo an equivalence relation which allows us to slide elements of $\pi_1(A,a)$ between them (after applying $f$ and $g$ respectively).
+ Thus, $\pi_1(P)$ can be identified with the \emph{amalgamated free product}
+ \index{amalgamated free product}%
+ \index{free!product!amalgamated}%
+ $\pi_1(B) *_{\pi_1(A)} \pi_1(C)$ (the pushout in the category of groups), as constructed in \cref{sec:free-algebras}.
+ This (in the case when $B$ and $C$ are open subspaces of $P$ and $A$ their intersection) is probably the most classical version of the van Kampen theorem.
+\end{eg}
+
+\begin{eg}\label{eg:cofiber}
+ \index{cofiber}
+ As a special case of \cref{eg:clvk}, suppose additionally that $C\defeq\unit$, so that $P$ is the cofiber $B/A$.
+ Then every loop in $C$ is equal to reflexivity, so the relations on path codes allow us to collapse all sequences to a single loop in $B$.
+ The additional relations require that multiplying on the left, right, or in the middle by an element in the image of $\pi_1(A)$ is the identity.
+ We can thus identify $\pi_1(B/A)$ with the quotient of the group $\pi_1(B)$ by the normal subgroup generated by the image of $\pi_1(A)$.
+\end{eg}
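As a concrete (hypothetical) instance of this example, one can take $B\defeq S^1$ and let the map from $A\defeq S^1$ wind around $d$ times, so that the image of $\pi_1(A)$ in $\pi_1(B)\cong\Z$ is the subgroup $d\Z$ and the quotient is $\Z/d$. Numerically:

```python
def quotient_by_image(d, m):
    # reduce a loop class m : pi_1(B) = Z modulo the image d*Z of
    # pi_1(A); for d = 0 the image is trivial, so nothing is collapsed
    return m % d if d != 0 else m

assert quotient_by_image(3, 7) == quotient_by_image(3, 1)  # 7 = 1 in Z/3
assert quotient_by_image(0, 5) == 5
```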
+
+\begin{eg}\label{eg:torus}
+ \index{torus}
+ As a further special case of \cref{eg:cofiber}, let $B\defeq S^1 \vee S^1$, let $A\defeq S^1$, and let $f:A\to B$ pick out the composite loop $p \ct q \ct \opp p \ct \opp q$, where $p$ and $q$ are the generating loops in the two copies of $S^1$ comprising $B$.
+ Then $P$ is a presentation of the torus $T^2$.
+ Indeed, it is not hard to identify $P$ with the presentation of $T^2$ as described in \cref{sec:hubs-spokes}, using the cone on a particular loop.
+ Thus, $\pi_1(T^2)$ is the quotient of the free group on two generators\index{generator!of a group} (i.e., $\pi_1(B)$) by the relation $p \ct q \ct \opp p \ct \opp q = 1$.
+ This clearly yields the free \emph{abelian}\index{group!abelian} group on two generators, which is $\Z\times\Z$.
+\end{eg}
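Since the relator $p \ct q \ct \opp p \ct \opp q$ forces the two generators to commute, a class in $\pi_1(T^2)\cong\Z\times\Z$ is determined by the exponent sum of each generator. A minimal sketch, with names of our own choosing:

```python
def exponent_sums(word):
    # word: list of (generator, exponent) pairs with generator 'p' or
    # 'q'; in the abelianization Z x Z only the exponent sums survive
    a = sum(e for g, e in word if g == 'p')
    b = sum(e for g, e in word if g == 'q')
    return (a, b)

# the relator p q p^-1 q^-1 becomes trivial, as the quotient demands
assert exponent_sums([('p', 1), ('q', 1), ('p', -1), ('q', -1)]) == (0, 0)
```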
+
+
+\begin{eg}
+ \index{CW complex}
+ \index{hub and spoke}
+ More generally, any CW complex can be obtained by repeatedly ``coning off'' spheres, as described in \cref{sec:hubs-spokes}.
+ That is, we start with a set $X_0$ of points (``0-cells''), which is the ``0-skeleton'' of the CW complex.
+ We take the pushout
+ \begin{equation*}
+ \vcenter{\xymatrix{
+ S_1 \times \Sn^0\ar[r]^-{f_1}\ar[d] &
+ X_0\ar[d]\\
+ \unit \ar[r] &
+ X_1
+ }}
+ \end{equation*}
+ for some set $S_1$ of 1-cells and some family $f_1$ of ``attaching maps'', obtaining the ``1-skeleton''\index{skeleton!of a CW-complex} $X_1$.
+ \index{attaching map}%
+ Then we take the pushout
+ \begin{equation*}
+ \vcenter{\xymatrix{
+ S_2 \times \Sn^1\ar[r]^{f_2}\ar[d] &
+ X_1\ar[d]\\
+ \unit \ar[r] &
+ X_2
+ }}
+ \end{equation*}
+ for some set $S_2$ of 2-cells and some family $f_2$ of attaching maps, obtaining the 2-skeleton $X_2$, and so on.
+ The fundamental group of each pushout can be calculated from the van Kampen theorem: we obtain the group presented by generators derived from the 1-skeleton, and relations derived from $S_2$ and $f_2$.
+ The pushouts after this stage do not alter the fundamental group, since $\pi_1(\Sn^n)$ is trivial for $n>1$ (see \cref{sec:pik-le-n}).
+\end{eg}
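+
+For instance, the real projective plane $\mathbb{RP}^2$ has a CW structure with one cell in each of the dimensions $0$, $1$, and $2$: the 1-skeleton is $X_1 = S^1$ with generating loop $p$, and the single 2-cell is attached along the word $p \ct p$.
+The van Kampen theorem then presents its fundamental group as
+\begin{equation*}
+\pi_1(\mathbb{RP}^2) \;\cong\; \langle\, p \mid p \ct p \,\rangle \;\cong\; \Z/2\Z.
+\end{equation*}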
+
+
+\begin{eg}\label{eg:kg1}
+ In particular, suppose given any presentation\index{presentation!of a group} of a (set-)group $G = \langle X \mid R \rangle$, with $X$ a set of generators and $R$ a set of words in these generators\index{generator!of a group}.
+ Let $B\defeq \bigvee_X S^1$ and $A\defeq \bigvee_R S^1$, with $f:A\to B$ sending each copy of $S^1$ to the corresponding word in the generating loops of $B$.
+ It follows that $\pi_1(P) \cong G$; thus we have constructed a connected type whose fundamental group is $G$.
+ Since any group has a presentation, any group is the fundamental group of some type.
+ If we 1-truncate such a type, we obtain a type whose only nontrivial homotopy group is $G$; this is called an \define{Eilenberg--Mac Lane space} $K(G,1)$.%
+ \indexdef{Eilenberg--Mac Lane space}%
+\end{eg}
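+
+For example, the presentation $\Z/2\Z = \langle x \mid x^2 \rangle$ yields $B \defeq S^1$ and $A \defeq S^1$, with $f : A \to B$ sending the generating loop of $A$ to the square of the generating loop of $B$.
+The 1-truncation of the resulting pushout is then $K(\Z/2\Z,1)$, which classically has the homotopy type of the infinite real projective space $\mathbb{RP}^\infty$.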
+
+\index{fundamental!group|)}%
+
+\index{van Kampen theorem|)}%
+\index{theorem!van Kampen|)}%
+
+\section{Whitehead's theorem and Whitehead's principle}
+\label{sec:whitehead}
+
+In classical homotopy theory, a map $f:A\to B$ which induces an isomorphism $\pi_n(A,a) \cong \pi_n(B,f(a))$ for all points $a$ in $A$ (and also an isomorphism $\pi_0(A)\cong\pi_0(B)$) is necessarily a homotopy equivalence, as long as the spaces $A$ and $B$ are well-behaved (e.g.\ have the homotopy types of CW-complexes).
+\index{theorem!Whitehead's}%
+\index{Whitehead's!theorem}%
+This is known as \emph{Whitehead's theorem}.
+In fact, the ``ill-behaved'' spaces for which Whitehead's theorem fails are invisible to type theory.
+Roughly, the well-behaved topological spaces suffice to present $\infty$-groupoids,%
+\index{.infinity-groupoid@$\infty$-groupoid}
+and homotopy type theory deals with $\infty$-groupoids directly rather than actual topological spaces.
+Thus, one might expect that Whitehead's theorem would be true in univalent foundations.
+
+However, this is \emph{not} the case: Whitehead's theorem is not provable.
+In fact, there are known models of type theory in which it fails to be true, although for entirely different reasons than its failure for ill-behaved topological spaces.
+These models are ``non-hypercomplete $\infty$-toposes''
+\index{.infinity1-topos@$(\infty,1)$-topos}%
+\index{.infinity1-topos@$(\infty,1)$-topos!non-hypercomplete}%
+(see~\cite{lurie:higher-topoi}); roughly speaking, they consist of sheaves of $\infty$-groupoids over $\infty$-dimensional base spaces.
+
+\index{axiom!Whitehead's principle|(}%
+\index{Whitehead's!principle|(}%
+
+From a foundational point of view, therefore, we may speak of \emph{Whitehead's principle} as a ``classicality axiom'', akin to \LEM{} and \choice{}.
+It may consistently be assumed, but it is not part of the computationally motivated type theory, nor does it hold in all natural models.
+But when working from set-theoretic foundations, this principle is invisible: it cannot fail to be true in a world where $\infty$-groupoids are built up out of sets (using topological spaces, simplicial sets, or any other such model).
+
+This may seem odd, but actually it should not be surprising.
+Homotopy type theory is the \emph{abstract} theory of homotopy types, whereas the homotopy theory of topological spaces or simplicial sets in set theory is a \emph{concrete} model of this theory, in the same way that the integers are a concrete model of the abstract theory of rings.
+It is to be expected that any concrete model will have special properties which are not intrinsic to the corresponding abstract theory, but which we might sometimes want to assume as additional axioms (e.g.\ the integers are a Principal Ideal Domain, but not all rings are).
+
+It is beyond the scope of this book to describe any models of type theory, so we will not explain how Whitehead's principle might fail in some of them.
+However, we can prove that it holds whenever the types involved are $n$-truncated for some finite $n$, by ``downward'' induction on $n$.
+In addition to being of interest in its own right (for instance, it implies the essential uniqueness of Eilenberg--Mac Lane spaces), the proof of this result will hopefully provide some intuitive explanation for why we cannot hope to prove an analogous theorem without truncation hypotheses.
+
+We begin with the following modification of \cref{thm:mono-surj-equiv}, which will eventually supply the induction step in the proof of the truncated Whitehead's principle.
+It may be regarded as a type-theoretic, $\infty$-group\-oid\-al version of the classical statement that a fully faithful and essentially surjective functor is an equivalence of categories.
+
+\begin{thm}\label{thm:whitehead0}
+ Suppose $f:A\to B$ is a function such that
+ \begin{enumerate}
+ \item $\trunc0 f : \trunc0 A \to \trunc0 B$ is surjective, and\label{item:whitehead01}
+ \item for any $x,y:A$, the function $\apfunc f : (\id[A]xy) \to (\id[B]{f(x)}{f(y)})$ is an equivalence.\label{item:whitehead02}
+ \end{enumerate}
+ Then $f$ is an equivalence.
+\end{thm}
+\begin{proof}
+ Note that~\ref{item:whitehead02} is precisely the statement that $f$ is an embedding, cf.~\cref{sec:mono-surj}.
+ Thus, by \cref{thm:mono-surj-equiv}, it suffices to show that $f$ is surjective, i.e.\ that for any $b:B$ we have $\trunc{-1}{\hfib f b}$.
+ Suppose given $b$; then since $\trunc0 f$ is surjective, there merely exists an $a:A$ such that $\trunc 0 f(\tproj0a) = \tproj0b$.
+ And since our goal is a mere proposition, we may assume given such an $a$.
+ Then we have $\tproj0{f(a)} = \trunc 0 f(\tproj0a) =\tproj0b$, hence $\trunc{-1}{f(a)=b}$.
+ Again, since our goal is still a mere proposition, we may assume $f(a)=b$.
+ Hence $\hfib f b$ is inhabited, and thus merely inhabited.
+\end{proof}
+
+Since homotopy groups are truncations of loop spaces\index{loop space}, rather than path spaces, we need to modify this theorem to speak about these instead. Recall the map $\Omega f$ from \cref{def:loopfunctor}.
+
+\begin{cor}\label{thm:whitehead1}
+ Suppose $f:A\to B$ is a function such that
+ \begin{enumerate}
+ \item $\trunc0 f : \trunc0 A \to \trunc0 B$ is a bijection, and
+ \item for any $x:A$, the function $\Omega f : \Omega(A,x) \to \Omega(B,f(x))$ is an equivalence.
+ \end{enumerate}
+ Then $f$ is an equivalence.
+\end{cor}
+\begin{proof}
+ By \cref{thm:whitehead0}, it suffices to show that $\apfunc f : (\id[A]xy) \to (\id[B]{f(x)}{f(y)})$ is an equivalence for any $x,y:A$.
+ And by \cref{thm:equiv-inhabcod}, we may assume $\id[B]{f(x)}{f(y)}$.
+ In particular, $\tproj0{f(x)} = \tproj0{f(y)}$, so since $\trunc0 f$ is an equivalence, we have $\tproj0 x = \tproj0y$, hence $\trunc{-1}{x=y}$.
+ Since we are trying to prove a mere proposition ($\apfunc f$ being an equivalence), we may assume given $p:x=y$.
+ But now the following square commutes up to homotopy:
+ \begin{equation*}
+ \vcenter{\xymatrix@C=3pc{
+ \Omega(A,x)\ar[r]^-{\blank\ct p}\ar[d]_{\Omega f} &
+ (\id[A]xy) \ar[d]^{\apfunc f}\\
+ \Omega(B,f(x))\ar[r]_-{\blank\ct f(p)} &
+ (\id[B]{f(x)}{f(y)}).
+ }}
+ \end{equation*}
+ The top and bottom maps are equivalences, and the left-hand map is so by assumption.
+ Hence, by the 2-out-of-3 property, so is the right-hand map.
+\end{proof}
+
+Now we can prove the truncated Whitehead's principle.
+
+\begin{thm}\label{thm:whiteheadn}
+ Suppose $A$ and $B$ are $n$-types and $f:A\to B$ is such that
+ \begin{enumerate}
+ \item $\trunc0f:\trunc0A \to \trunc0B$ is a bijection, and\label{item:wh0}
+ \item $\pi_k(f):\pi_k(A,x) \to \pi_k(B,f(x))$ is a bijection for all $k\ge 1$ and all $x:A$.\label{item:whk}
+ \end{enumerate}
+ Then $f$ is an equivalence.
+\end{thm}
+
+\noindent
+Condition~\ref{item:wh0} is almost the case of~\ref{item:whk} when $k=0$, except that it makes no reference to any basepoint $x:A$.
+
+\begin{proof}
+ We proceed by induction on $n$.
+ When $n=-2$, the statement is trivial.
+ Thus, suppose it to be true for all functions between $n$-types, and let $A$ and $B$ be $(n+1)$-types and $f:A\to B$ as above.
+ The first condition in \cref{thm:whitehead1} holds by assumption, so it will suffice to show that for any $x:A$, the function $\Omega f: \Omega(A,x) \to \Omega(B,f(x))$ is an equivalence.
+
+ Since $\Omega(A,x)$ and $\Omega(B,f(x))$ are $n$-types we can apply the induction hypothesis.
+ We need to check that $\trunc 0{\Omega f}$ is a bijection, and that for all $k\geq1$ and $p : x = x$ the map $\pi_k(\Omega f):\pi_k(x = x,p) \to \pi_k(f(x) = f(x),\Omega f(p))$ is a bijection.
+ The first statement holds by assumption, since $\trunc 0{\Omega f} \jdeq \pi_1(f)$.
+ To prove the second statement, we generalize it first: we show that for all $y : A$ and $q : x = y$, the map $\pi_k(\apfunc f):\pi_k(x = y,q) \to \pi_k(f(x) = f(y),\apfunc f(q))$ is a bijection.
+ This implies the desired statement, since when $y\defeq x$, we have $\pi_k(\Omega f)=\pi_k(\apfunc f)$ modulo identifying their base points $\Omega f(p)=\apfunc f(p)$.
+ To prove the generalization, it suffices by path induction to prove it when $q$ is $\refl{x}$.
+ In this case, we have $\pi_k(\apfunc f) = \pi_k(\Omega f) = \pi_{k+1}(f)$, and $\pi_{k+1}(f)$ is a bijection by the original assumptions.
+\end{proof}
+
+Note that if $A$ and $B$ are not $n$-types for any finite $n$, then there is no way for the induction to get started.
+
+\begin{cor}\label{thm:whitehead-contr}
+ If $A$ is a $0$-connected $n$-type and $\pi_k(A,a)=0$ for all $k$ and $a:A$, then $A$ is contractible.
+\end{cor}
+\begin{proof}
+ Apply \cref{thm:whiteheadn} to the map $A\to\unit$.
+\end{proof}
+
+As an application, we can deduce the converse of \cref{thm:conn-pik}.
+
+\begin{cor}\label{thm:pik-conn}
+ For $n\ge 0$, a map $f:A\to B$ is $n$-connected if and only if the following all hold:
+ \begin{enumerate}
+ \item $\trunc0f:\trunc0A \to \trunc0B$ is an isomorphism.
+ \item For any $a:A$ and $k\le n$, the map $\pi_k(f):\pi_k(A,a) \to \pi_k(B,f(a))$ is an isomorphism.
+ \item For any $a:A$, the map $\pi_{n+1}(f):\pi_{n+1}(A,a) \to \pi_{n+1}(B,f(a))$ is surjective.
+ \end{enumerate}
+\end{cor}
+\begin{proof}
+ The ``only if'' direction is \cref{thm:conn-pik}.
+ Conversely, by the long exact sequence of a fibration (\cref{thm:les}),
+ \index{sequence!exact}%
+ the hypotheses imply that $\pi_k(\hfib f {f(a)})=0$ for all $k\le n$ and $a:A$, and that $\trunc 0{\hfib f{f(a)}}$ is contractible.
+ Since $\pi_k(\hfib f {f(a)}) = \pi_k(\trunc n{\hfib f{f(a)}})$ for $k\le n$, and $\trunc n{\hfib f{f(a)}}$ is a $0$-connected $n$-type (its $0$-truncation being the contractible type $\trunc 0{\hfib f{f(a)}}$), by \cref{thm:whitehead-contr} it is contractible for any $a$.
+
+ It remains to show that $\trunc n{\hfib f{b}}$ is contractible for $b:B$ not necessarily of the form $f(a)$.
+ However, by assumption, there is $x:\trunc0A$ with $\tproj 0b = \trunc0f(x)$.
+ Since contractibility is a mere proposition, we may assume $x$ is of the form $\tproj0a$ for $a:A$, in which case $\tproj 0 b = \trunc0f(\tproj0a) = \tproj0{f(a)}$, and therefore $\trunc{-1}{b=f(a)}$.
+ Again since contractibility is a mere proposition, we may assume $b=f(a)$, and the result follows.
+\end{proof}
+
+A map $f$ such that $\trunc0f$ is a bijection and $\pi_k(f)$ is a bijection for all $k$ is called \define{$\infty$-connected}%
+\indexdef{function!.infinity-connected@$\infty$-connected}%
+\indexdef{.infinity-connected function@$\infty$-connected function}
+or a \define{weak equivalence}.%
+\indexdef{equivalence!weak}%
+\indexdef{weak equivalence!of types}
+This is equivalent to asking that $f$ be $n$-connected for all $n$.
+A type $Z$ is called \define{$\infty$-truncated}%
+\indexdef{type!.infinity-truncated@$\infty$-truncated}%
+\indexdef{.infinity-truncated type@$\infty$-truncated type}
+or \define{hypercomplete}%
+\indexdef{type!hypercomplete}%
+\indexdef{hypercomplete type}
+if the induced map
+\[(\blank\circ f):(B\to Z) \to (A\to Z)\]
+is an equivalence whenever $f$ is $\infty$-connected --- that is, if $Z$ thinks every $\infty$-connected map is an equivalence.
+\indexdef{axiom!Whitehead's principle}%
+Then if we want to assume Whitehead's principle as an axiom, we may use either of the following equivalent forms.
+\begin{itemize}
+\item Every $\infty$-connected function is an equivalence.
+\item Every type is $\infty$-truncated.
+\end{itemize}
+In higher topos models,
+\index{.infinity1-topos@$(\infty,1)$-topos}%
+the $\infty$-truncated types form a reflective subuniverse in the sense of
+\cref{sec:modalities} (the ``hypercompletion'' of an $(\infty,1)$-topos), but we do not know whether this is
+true in general.
+
+\index{axiom!Whitehead's principle|)}%
+\index{Whitehead's!principle|)}%
+
+It may not be obvious that there \emph{are} any types which are not $n$-types for any $n$, but in fact there are.
+Indeed, in classical homotopy theory, $\Sn^n$ has this property for any $n\ge 2$.
+We have not proven this fact in homotopy type theory yet, but there are other types which we can prove to have ``infinite truncation level''.
+
+\begin{eg}
+ Suppose we have $B:\nat\to\type$ such that for each $n$, the type $B(n)$ contains an $n$-loop\index{loop!n-@$n$-} which is not equal to $n$-fold reflexivity, say $p_n:\Omega^n(B(n),b_n)$ with $p_n \neq \refl{b_n}^n$.
+ (For instance, we could define $B(n)\defeq \Sn^n$, with $p_n$ the image of $1:\Z$ under the isomorphism $\pi_n(\Sn^n)\cong \Z$.)
+ Consider $C\defeq \prd{n:\nat} B(n)$, with the point $c:C$ defined by $c(n)\defeq b_n$.
+ Since loop spaces commute with products, for any $m$ we have
+ \[\eqvspaced{\Omega^m (C,c)}{\prd{n:\nat}\Omega^m(B(n),b_n)}.\]
+ Under this equivalence, $\refl{c}^m$ corresponds to the function $(n\mapsto \refl{b_n}^m)$.
+ Now define $q_m$ in the right-hand type by
+ \[ q_m(n) \defeq
+ \begin{cases}
+ p_n &\quad m=n\\
+ \refl{b_n}^m &\quad m\neq n.
+ \end{cases}
+ \]
+ If we had $q_m = (n\mapsto \refl{b_n}^m)$, then we would have $p_n = \refl{b_n}^n$, which is not the case.
+ Thus, $q_m \neq (n\mapsto \refl{b_n}^m)$, and so there is a point of $\Omega^m(C,c)$ which is unequal to $\refl{c}^m$.
+ Hence $C$ is not an $m$-type, for any $m:\nat$.
+\end{eg}
+
+We expect it should also be possible to show that a universe $\UU$ itself is not an $n$-type for any $n$, using the fact that it contains higher inductive types such as $\Sn^n$ for all $n$.
+However, this has not yet been done.
+
+\section{A general statement of the encode-decode method}
+\label{sec:general-encode-decode}
+
+\indexdef{encode-decode method}
+
+We have used the encode-decode method to characterize the path spaces
+of various types, including coproducts (\cref{thm:path-coprod}), natural
+numbers (\cref{thm:path-nat}), truncations (\cref{thm:path-truncation}),
+the circle (\cref{cor:omega-s1}), suspensions (\cref{thm:freudenthal}), and pushouts
+(\cref{thm:van-Kampen}). Variants of this technique are used in the
+proofs of many of the other theorems mentioned in the introduction to
+this chapter, such as a direct calculation of $\pi_n(\Sn^n)$, the Blakers--Massey theorem, and the construction of Eilenberg--Mac Lane spaces.
+While it is tempting to try to
+abstract the method into a lemma, this is difficult because
+slightly different variants are needed for different problems. For
+example, different variations on the same method can be used to
+characterize a loop space (as in \cref{thm:path-coprod,cor:omega-s1}) or
+a whole path space (as in \cref{thm:path-nat}), to give a complete
+characterization of a loop space (e.g.\ $\Omega^1(\Sn ^1)$) or only to
+characterize some truncation of it (e.g.\ van Kampen), and to calculate
+homotopy groups or to prove that a map is $n$-connected (e.g.\ Freudenthal and
+Blakers--Massey).
+
+However, we can state lemmas for specific variants of the method.
+The proofs of these lemmas are almost trivial; the main point is to
+clarify the method by stating them in generality. The simplest
+case is using an encode-decode method to characterize the loop space of a
+type, as in \cref{thm:path-coprod} and \cref{cor:omega-s1}.
+
+\begin{lem}[Encode-decode for Loop Spaces]\label{lem:encode-decode-loop}
+ \index{loop space}%
+Given a pointed type $(A,a_0)$ and a fibration
+$\code : A \to \type$, if
+\begin{enumerate}
+\item $c_0 : \code(a_0)$,\label{item:ed1}
+\item $\decode : \prd{x:A} \code(x) \to (\id{a_0}{x})$,\label{item:ed2}
+\item for all $c : \code(a_0)$, $\id{\transfib{\code}{\decode(c)}{c_0}}{c}$, and\label{item:ed3}
+\item $\id{\decode(c_0)}{\refl{}}$,\label{item:ed4}
+\end{enumerate}
+then $(\id{a_0}{a_0})$ is equivalent to $\code(a_0)$.
+\end{lem}
+
+\begin{proof}
+Define
+$\encode : \prd{x:A} (\id{a_0}{x}) \to \code(x)$ by
+\[
+\encode_x(\alpha) = \transfib{\code}{\alpha}{c_0}.
+\]
+We show that $\encode_{a_0}$ and $\decode_{a_0}$ are quasi-inverses.
+That the composite $\encode_{a_0} \circ \decode_{a_0}$ is the identity is immediate by
+assumption~\ref{item:ed3}. For the other composite, we show
+\[
+\prd{x:A}{p : \id{a_0}{x}} \id{\decode_{x} (\encode_{x} p)}{p}.
+\]
+By path induction, it suffices to show
+$\id{\decode_{a_0} (\encode_{a_0} \refl{})}{\refl{}}$.
+After reducing the $\mathsf{transport}$, it suffices to show
+$\id{{\decode_{{a_0}} (c_0)}}{\refl{}}$, which is assumption~\ref{item:ed4}.
+\end{proof}
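+
+For instance, the calculation of $\Omega(\Sn ^1)$ in \cref{cor:omega-s1} is the instance of this lemma in which $(A,a_0)$ is the circle with its base point, $\code$ is defined by circle recursion sending the base point to $\Z$ and the loop to the equality of types obtained from the successor equivalence by univalence, $c_0 \defeq 0$, and $\decode_{a_0}(n) \defeq \mathsf{loop}^n$.
+Conditions~\ref{item:ed3} and~\ref{item:ed4} then amount to $\transfib{\code}{\mathsf{loop}^n}{0} = n$ and $\mathsf{loop}^0 = \refl{}$, respectively.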
+
+If a fiberwise equivalence between $(\id{a_0}{\blank})$ and $\code$ is desired,
+it suffices to strengthen condition~\ref{item:ed3} to
+\[
+\prd{x:A}{c : \code(x)} \id{\encode_{x}(\decode_{x}(c))}{c}.
+\]
+However, to calculate a loop space (e.g.\ $\Omega(\Sn ^1)$), this
+stronger assumption is not necessary.
+
+Another variation, which comes up often when calculating homotopy
+groups, characterizes the truncation of a loop space:
+
+\begin{lem}[Encode-decode for Truncations of Loop Spaces]
+Assume a pointed type $(A,a_0)$ and a fibration
+$\code : A \to \type$, where for every $x$, $\code(x)$ is a $k$-type.
+Define
+\[
+\encode : \prd{x:A} \trunc{k}{\id{a_0}{x}} \to \code(x)
+\]
+by truncation recursion (using the fact
+that $\code(x)$ is a $k$-type), mapping $\alpha : \id{a_0}{x}$ to
+$\transfib{\code}{\alpha}{c_0}$. Suppose:
+\begin{enumerate}
+\item $c_0 : \code(a_0)$,
+\item $\decode : \prd{x:A} \code(x) \to \trunc{k}{\id{a_0}{x}}$,
+\item \label{item:decode-encode-loop-iii}
+ $\id{\encode_{a_0}(\decode_{a_0}(c))}{c}$ for all $c : \code(a_0)$, and
+\item \label{item:decode-encode-loop-iv}
+ $\id{\decode(c_0)}{\tproj{}{\refl{}}}$.
+\end{enumerate}
+Then $\trunc{k}{\id{a_0}{a_0}}$ is equivalent to $\code(a_0)$.
+\end{lem}
+
+\begin{proof}
+That $\encode \circ \decode$ is the identity is immediate by \ref{item:decode-encode-loop-iii}.
+%
+To prove that $\decode \circ \encode$ is the identity, we first do a truncation induction, by
+which it suffices to show
+\[
+\prd{x:A}{p : \id{a_0}{x}} \id{\decode_{x}(\encode_{x}(\tproj{k}{p}))}{\tproj{k}{p}}.
+\]
+The truncation induction is allowed because paths in a $k$-type are a
+$k$-type. To show this type, we do a path induction, and after reducing
+the \encode, use assumption~\ref{item:decode-encode-loop-iv}.
+\end{proof}
+
+\section{Additional Results}
+\label{sec:moreresults}
+
+Though we do not present the proofs in this chapter, the following results have also been established in homotopy type theory.
+
+\begin{thm}
+\index{homotopy!group!of sphere}
+There exists a $k$ such that for all $n \ge 3$, $\pi_{n+1}(\Sn ^n) =
+\Z_k$.
+\end{thm}
+
+\begin{proof}[Notes on the proof]
+The proof consists of a calculation of $\pi_4(\Sn ^3)$, together with an
+appeal to stability (\cref{cor:stability-spheres}). In the classical
+statement of this result, $k$ is $2$. While we have not yet checked that
+$k$ is in fact $2$, our calculation of $\pi_4(\Sn ^3)$ is constructive,\index{mathematics!constructive}
+like all the rest of the proofs in this chapter.
+(More precisely, it doesn't use any additional axioms such as \LEM{} or \choice{}, making it as constructive as
+univalence and higher inductive types are.) Thus, given a
+computational interpretation of homotopy type theory, we could run the
+proof on a computer to verify that $k$ is $2$. This example is quite
+intriguing, because it is the first calculation of a homotopy group
+for which we have not needed to know the answer in advance.
+\end{proof}
+
+% Recall from \cref{sec:colimits} that $X \sqcup^C Y$ denotes the
+% (homotopy) pushout of $X$ and $Y$ along $C$.
+\index{pushout}%
+
+\begin{thm}[Blakers--Massey theorem]\label{Blakers-Massey}
+ \indexdef{theorem!Blakers--Massey}%
+ \indexsee{Blakers--Massey theorem}{theorem, Blakers--Massey}%
+ Suppose we are given maps $f : C \rightarrow X$ and $g : C \rightarrow Y$. Taking first the pushout $X \sqcup^C Y $ of $f$ and $g$ and then the pullback of its inclusions $\inl : X \rightarrow X \sqcup^C Y \leftarrow Y : \inr$, we have an induced map $C \to X \times_{(X \sqcup^C Y)} Y$.
+
+ If $f$ is $i$-connected and $g$ is $j$-connected, then this induced map is $(i+j)$-connected. In other words, for any points $x:X$, $y:Y$, the corresponding fiber $C_{x,y}$ of $(f,g) : C \to X \times Y $ gives an approximation to the path space $\id[X \sqcup^C Y]{\inl(x)}{\inr(y)}$ in the pushout.
+\end{thm}
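+
+For instance, taking $X \defeq \unit$ and $Y \defeq \unit$, the pushout $\unit \sqcup^C \unit$ is the suspension $\Sigma C$, while the pullback is the type of paths from the north pole to the south pole of $\Sigma C$, which is equivalent to $\Omega \Sigma C$ once a base point of $C$ is chosen.
+If $C$ is $n$-connected, then both maps $C \to \unit$ are $n$-connected (their fibers are equivalent to $C$), so the induced map $C \to \Omega\Sigma C$ is $(2n)$-connected; this recovers the Freudenthal suspension theorem (\cref{thm:freudenthal}).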
+
+It should be noted that in classical algebraic topology, the Blakers--Massey theorem is often stated in a somewhat different form, where the maps $f$ and $g$ are replaced by inclusions of subcomplexes of CW complexes, and the homotopy pushout and homotopy pullback by a union and intersection, respectively.
+In order to express the theorem in homotopy type theory, we have to replace notions of this sort with ones that are homotopy-invariant.
+We have seen another example of this in the van Kampen theorem (\cref{sec:van-kampen}), where we had to replace a union of open subsets by a homotopy pushout.
+
+\begin{thm}[Eilenberg--Mac Lane Spaces]\label{Eilenberg-Mac-Lane-Spaces}
+\index{Eilenberg--Mac Lane space}
+For any abelian\index{group!abelian} group $G$ and positive integer $n$, there is an $n$-type
+$K(G,n)$ such that $\pi_n(K(G,n)) = G$, and $\pi_k(K(G,n)) = 0$
+for $k\neq n$.
+\end{thm}
+
+\begin{thm}[Covering spaces]\label{thm:covering-spaces}
+ \index{covering space}%
+ For a connected space $A$, there is an equivalence between covering spaces over $A$ and sets with an action of $\pi_1(A)$.
+\end{thm}
+
+
+\sectionNotes
+
+%% For the spheres, two
+%% different definitions of the $n$-sphere $\Sn ^n$ have been used: the first as the
+%% suspension of $\Sn ^ {n-1}$ (\cref{sec:suspension}), and the second
+%% as a higher inductive type with one base point and one loop in
+%% $\Omega^n$ (\cref{sec:circle}); we list the status for both
+%% definitions. The equivalence of the two can be deduced from the remarks
+%% at the end of \cref{sec:suspension}.
+
+{
+\newcommand{\humancheck}{\ding{52}}
+\newcommand{\computercheck}{\ding{52}\kern-0.5em\ding{52}}
+\begin{table}[htb]
+ \centering
+\begin{tabular}{lc}
+\toprule
+Theorem & Status \\
+\midrule
+$\pi_1(\Sn ^1)$ & \computercheck \\
+$\pi_{k<n}(\Sn ^n)$ & \computercheck \\
+\bottomrule
+\end{tabular}
+\end{table}
+}
+
+% Macros for the categories chapter
+\newcommand{\inv}[1]{{#1}^{-1}}
+\newcommand{\idtoiso}{\ensuremath{\mathsf{idtoiso}}\xspace}
+\newcommand{\isotoid}{\ensuremath{\mathsf{isotoid}}\xspace}
+\newcommand{\op}{^{\mathrm{op}}}
+\newcommand{\y}{\ensuremath{\mathbf{y}}\xspace}
+\newcommand{\dgr}[1]{{#1}^{\dagger}}
+\newcommand{\unitaryiso}{\mathrel{\cong^\dagger}}
+\newcommand{\cteqv}[2]{\ensuremath{#1 \simeq #2}\xspace}
+\newcommand{\cteqvsym}{\simeq} % Symbol for equivalence of categories
+
+%%% Natural numbers
+\newcommand{\N}{\ensuremath{\mathbb{N}}\xspace}
+%\newcommand{\N}{\textbf{N}}
+\let\nat\N
+\newcommand{\natp}{\ensuremath{\nat'}\xspace} % alternative nat in induction chapter
+
+\newcommand{\zerop}{\ensuremath{0'}\xspace} % alternative zero in induction chapter
+\newcommand{\suc}{\mathsf{succ}}
+\newcommand{\sucp}{\ensuremath{\suc'}\xspace} % alternative suc in induction chapter
+\newcommand{\add}{\mathsf{add}}
+\newcommand{\ack}{\mathsf{ack}}
+\newcommand{\ite}{\mathsf{iter}}
+\newcommand{\assoc}{\mathsf{assoc}}
+\newcommand{\dbl}{\ensuremath{\mathsf{double}}}
+\newcommand{\dblp}{\ensuremath{\dbl'}\xspace} % alternative double in induction chapter
+
+
+%%% Lists
+\newcommand{\lst}[1]{\mathsf{List}(#1)}
+\newcommand{\nil}{\mathsf{nil}}
+\newcommand{\cons}{\mathsf{cons}}
+\newcommand{\lost}[1]{\mathsf{Lost}(#1)}
+
+%%% Vectors of given length, used in induction chapter
+\newcommand{\vect}[2]{\ensuremath{\mathsf{Vec}_{#1}(#2)}\xspace}
+
+%%% Integers
+\newcommand{\Z}{\ensuremath{\mathbb{Z}}\xspace}
+\newcommand{\Zsuc}{\mathsf{succ}}
+\newcommand{\Zpred}{\mathsf{pred}}
+
+%%% Rationals
+\newcommand{\Q}{\ensuremath{\mathbb{Q}}\xspace}
+
+%%% Function extensionality
+\newcommand{\funext}{\mathsf{funext}}
+\newcommand{\happly}{\mathsf{happly}}
+
+%%% A naturality lemma
+\newcommand{\com}[3]{\mathsf{swap}_{#1,#2}(#3)}
+
+%%% Code/encode/decode
+\newcommand{\code}{\ensuremath{\mathsf{code}}\xspace}
+\newcommand{\encode}{\ensuremath{\mathsf{encode}}\xspace}
+\newcommand{\decode}{\ensuremath{\mathsf{decode}}\xspace}
+
+% Function definition with domain and codomain
+\newcommand{\function}[4]{\left\{\begin{array}{rcl}#1 &
+ \longrightarrow & #2 \\ #3 & \longmapsto & #4 \end{array}\right.}
+
+%%% Cones and cocones
+\newcommand{\cone}[2]{\mathsf{cone}_{#1}(#2)}
+\newcommand{\cocone}[2]{\mathsf{cocone}_{#1}(#2)}
+% Apply a function to a cocone
+\newcommand{\composecocone}[2]{#1\circ#2}
+\newcommand{\composecone}[2]{#2\circ#1}
+%%% Diagrams
+\newcommand{\Ddiag}{\mathscr{D}}
+
+%%% (pointed) mapping spaces
+\newcommand{\Map}{\mathsf{Map}}
+
+%%% The interval
+\newcommand{\interval}{\ensuremath{I}\xspace}
+\newcommand{\izero}{\ensuremath{0_{\interval}}\xspace}
+\newcommand{\ione}{\ensuremath{1_{\interval}}\xspace}
+
+%%% Arrows
+\newcommand{\epi}{\ensuremath{\twoheadrightarrow}}
+\newcommand{\mono}{\ensuremath{\rightarrowtail}}
+
+%%% Sets
+\newcommand{\bin}{\ensuremath{\mathrel{\widetilde{\in}}}}
+
+%%% Semigroup structure
+\newcommand{\semigroupstrsym}{\ensuremath{\mathsf{SemigroupStr}}}
+\newcommand{\semigroupstr}[1]{\ensuremath{\mathsf{SemigroupStr}}(#1)}
+\newcommand{\semigroup}[0]{\ensuremath{\mathsf{Semigroup}}}
+
+%%% Macros for the formal type theory
+\newcommand{\emptyctx}{\ensuremath{\cdot}}
+\newcommand{\production}{\vcentcolon\vcentcolon=}
+\newcommand{\conv}{\downarrow}
+\newcommand{\ctx}{\ensuremath{\mathsf{ctx}}}
+\newcommand{\wfctx}[1]{#1\ \ctx}
+\newcommand{\oftp}[3]{#1 \vdash #2 : #3}
+\newcommand{\jdeqtp}[4]{#1 \vdash #2 \jdeq #3 : #4}
+\newcommand{\judg}[2]{#1 \vdash #2}
+\newcommand{\tmtp}[2]{#1 \mathord{:} #2}
+
+% rule names
+\newcommand{\rform}{\textsc{form}}
+\newcommand{\rintro}{\textsc{intro}}
+\newcommand{\relim}{\textsc{elim}}
+\newcommand{\rcomp}{\textsc{comp}}
+\newcommand{\runiq}{\textsc{uniq}}
+\newcommand{\Weak}{\mathsf{Wkg}}
+\newcommand{\Vble}{\mathsf{Vble}}
+\newcommand{\Exch}{\mathsf{Exch}}
+\newcommand{\Subst}{\mathsf{Subst}}
+
+%%% Macros for HITs
+\newcommand{\cc}{\mathsf{c}}
+\newcommand{\pp}{\mathsf{p}}
+\newcommand{\cct}{\widetilde{\mathsf{c}}}
+\newcommand{\ppt}{\widetilde{\mathsf{p}}}
+\newcommand{\Wtil}{\ensuremath{\widetilde{W}}\xspace}
+
+%%% Macros for n-types
+\newcommand{\istype}[1]{\mathsf{is}\mbox{-}{#1}\mbox{-}\mathsf{type}}
+\newcommand{\nplusone}{\ensuremath{(n+1)}}
+\newcommand{\nminusone}{\ensuremath{(n-1)}}
+\newcommand{\fact}{\mathsf{fact}}
+
+%%% Macros for homotopy
+\newcommand{\kbar}{\overline{k}} % Used in van Kampen's theorem
+
+%%% Macros for induction
+\newcommand{\natw}{\ensuremath{\mathbf{N^w}}\xspace}
+\newcommand{\zerow}{\ensuremath{0^\mathbf{w}}\xspace}
+\newcommand{\sucw}{\ensuremath{\mathsf{succ}^{\mathbf{w}}}\xspace}
+\newcommand{\nalg}{\nat\mathsf{Alg}}
+\newcommand{\nhom}{\nat\mathsf{Hom}}
+\newcommand{\ishinitw}{\mathsf{isHinit}_{\mathsf{W}}}
+\newcommand{\ishinitn}{\mathsf{isHinit}_\nat}
+\newcommand{\w}{\mathsf{W}}
+\newcommand{\walg}{\w\mathsf{Alg}}
+\newcommand{\whom}{\w\mathsf{Hom}}
+
+%%% Macros for real numbers
+\newcommand{\RC}{\ensuremath{\mathbb{R}_\mathsf{c}}\xspace} % Cauchy
+\newcommand{\RD}{\ensuremath{\mathbb{R}_\mathsf{d}}\xspace} % Dedekind
+\newcommand{\R}{\ensuremath{\mathbb{R}}\xspace} % Either
+\newcommand{\barRD}{\ensuremath{\bar{\mathbb{R}}_\mathsf{d}}\xspace} % Dedekind completion of Dedekind
+
+\newcommand{\close}[1]{\sim_{#1}} % Relation of closeness
+\newcommand{\closesym}{\mathord\sim}
+\newcommand{\rclim}{\mathsf{lim}} % HIT constructor for Cauchy reals
+\newcommand{\rcrat}{\mathsf{rat}} % Embedding of rationals into Cauchy reals
+\newcommand{\rceq}{\mathsf{eq}_{\RC}} % HIT path constructor
+\newcommand{\CAP}{\mathcal{C}} % The type of Cauchy approximations
+\newcommand{\Qp}{\Q_{+}}
+\newcommand{\apart}{\mathrel{\#}} % apartness
+\newcommand{\dcut}{\mathsf{isCut}} % Dedekind cut
+\newcommand{\cover}{\triangleleft} % inductive cover
+\newcommand{\intfam}[3]{(#2, \lam{#1} #3)} % family of rational intervals
+
+% Macros for the Cauchy reals construction
+\newcommand{\bsim}{\frown}
+\newcommand{\bbsim}{\smile}
+
+\newcommand{\hapx}{\diamondsuit\approx}
+\newcommand{\hapname}{\diamondsuit}
+\newcommand{\hapxb}{\heartsuit\approx}
+\newcommand{\hapbname}{\heartsuit}
+\newcommand{\tap}[1]{\bullet\approx_{#1}\triangle}
+\newcommand{\tapname}{\triangle}
+\newcommand{\tapb}[1]{\bullet\approx_{#1}\square}
+\newcommand{\tapbname}{\square}
+
+%%% Macros for surreals
+\newcommand{\NO}{\ensuremath{\mathsf{No}}\xspace}
+\newcommand{\surr}[2]{\{\,#1\,\big|\,#2\,\}}
+\newcommand{\LL}{\mathcal{L}}
+\newcommand{\RR}{\mathcal{R}}
+\newcommand{\noeq}{\mathsf{eq}_{\NO}} % HIT path constructor
+
+\newcommand{\ble}{\trianglelefteqslant}
+\newcommand{\blt}{\vartriangleleft}
+\newcommand{\bble}{\sqsubseteq}
+\newcommand{\bblt}{\sqsubset}
+
+\newcommand{\hle}{\diamondsuit\preceq}
+\newcommand{\hlt}{\diamondsuit\prec}
+\newcommand{\hlname}{\diamondsuit}
+\newcommand{\hleb}{\heartsuit\preceq}
+\newcommand{\hltb}{\heartsuit\prec}
+\newcommand{\hlbname}{\heartsuit}
+% \newcommand{\tle}{(\bullet\preceq\triangle)}
+% \newcommand{\tlt}{(\bullet\prec\triangle)}
+\newcommand{\tle}{\triangle\preceq}
+\newcommand{\tlt}{\triangle\prec}
+\newcommand{\tlname}{\triangle}
+% \newcommand{\tleb}{(\bullet\preceq\square)}
+% \newcommand{\tltb}{(\bullet\prec\square)}
+\newcommand{\tleb}{\square\preceq}
+\newcommand{\tltb}{\square\prec}
+\newcommand{\tlbname}{\square}
+
+%%% Macros for set theory
+\newcommand{\vset}{\mathsf{set}} % point constructor for cummulative hierarchy V
+\def\cd{\tproj0}
+\newcommand{\inj}{\ensuremath{\mathsf{inj}}} % type of injections
+\newcommand{\acc}{\ensuremath{\mathsf{acc}}} % accessibility
+
+\newcommand{\atMostOne}{\mathsf{atMostOne}}
+
+\newcommand{\power}[1]{\mathcal{P}(#1)} % power set
+\newcommand{\powerp}[1]{\mathcal{P}_+(#1)} % inhabited power set
+
+%%%% THEOREM ENVIRONMENTS %%%%
+
+% The cleveref package provides \cref{...} which is like \ref{...}
+% except that it automatically inserts the type of the thing you're
+% referring to, e.g. it produces "Theorem 3.8" instead of just "3.8"
+% (and hyperref makes the whole thing a hyperlink). This saves a slight amount
+% of typing, but more importantly it means that if you decide later on
+% that 3.8 should be a Lemma or a Definition instead of a Theorem, you
+% don't have to change the name in all the places you referred to it.
+
+% The following hack improves on this by using the same counter for
+% all theorem-type environments, so that after Theorem 1.1 comes
+% Corollary 1.2 rather than Corollary 1.1. This makes it much easier
+% for the reader to find a particular theorem when flipping through
+% the document.
+\makeatletter
+\def\defthm#1#2#3{%
+ %% Ensure all theorem types are numbered with the same counter
+ \newaliascnt{#1}{thm}
+ \newtheorem{#1}[#1]{#2}
+ \aliascntresetthe{#1}
+ %% This command tells cleveref's \cref what to call things
+ \crefname{#1}{#2}{#3}% following brace must be on separate line to support poorman cleveref sed file
+}
+
+% Now define a bunch of theorem-type environments.
+\newtheorem{thm}{Theorem}[section]
+\crefname{thm}{Theorem}{Theorems}
+%\defthm{prop}{Proposition} % Probably we shouldn't use "Proposition" in this way
+\defthm{cor}{Corollary}{Corollaries}
+\defthm{lem}{Lemma}{Lemmas}
+\defthm{axiom}{Axiom}{Axioms}
+% Since definitions and theorems in type theory are synonymous, should
+% we actually use the same theoremstyle for them?
+\theoremstyle{definition}
+\defthm{defn}{Definition}{Definitions}
+\theoremstyle{remark}
+\defthm{rmk}{Remark}{Remarks}
+\defthm{eg}{Example}{Examples}
+\defthm{egs}{Examples}{Examples}
+\defthm{notes}{Notes}{Notes}
+% Number exercises within chapters, with their own counter.
+\newtheorem{ex}{Exercise}[chapter]
+\crefname{ex}{Exercise}{Exercises}
+
+% Display format for sections
+\crefformat{section}{\S#2#1#3}
+\Crefformat{section}{Section~#2#1#3}
+\crefrangeformat{section}{\S\S#3#1#4--#5#2#6}
+\Crefrangeformat{section}{Sections~#3#1#4--#5#2#6}
+\crefmultiformat{section}{\S\S#2#1#3}{ and~#2#1#3}{, #2#1#3}{ and~#2#1#3}
+\Crefmultiformat{section}{Sections~#2#1#3}{ and~#2#1#3}{, #2#1#3}{ and~#2#1#3}
+\crefrangemultiformat{section}{\S\S#3#1#4--#5#2#6}{ and~#3#1#4--#5#2#6}{, #3#1#4--#5#2#6}{ and~#3#1#4--#5#2#6}
+\Crefrangemultiformat{section}{Sections~#3#1#4--#5#2#6}{ and~#3#1#4--#5#2#6}{, #3#1#4--#5#2#6}{ and~#3#1#4--#5#2#6}
+
+% Display format for appendices
+\crefformat{appendix}{Appendix~#2#1#3}
+\Crefformat{appendix}{Appendix~#2#1#3}
+\crefrangeformat{appendix}{Appendices~#3#1#4--#5#2#6}
+\Crefrangeformat{appendix}{Appendices~#3#1#4--#5#2#6}
+\crefmultiformat{appendix}{Appendices~#2#1#3}{ and~#2#1#3}{, #2#1#3}{ and~#2#1#3}
+\Crefmultiformat{appendix}{Appendices~#2#1#3}{ and~#2#1#3}{, #2#1#3}{ and~#2#1#3}
+\crefrangemultiformat{appendix}{Appendices~#3#1#4--#5#2#6}{ and~#3#1#4--#5#2#6}{, #3#1#4--#5#2#6}{ and~#3#1#4--#5#2#6}
+\Crefrangemultiformat{appendix}{Appendices~#3#1#4--#5#2#6}{ and~#3#1#4--#5#2#6}{, #3#1#4--#5#2#6}{ and~#3#1#4--#5#2#6}
+
+\crefname{part}{Part}{Parts}
+
+% Number subsubsections
+\setcounter{secnumdepth}{5}
+
+% Display format for figures
+\crefname{figure}{Figure}{Figures}
+
+%%%% EQUATION NUMBERING %%%%
+
+% The following hack uses the single theorem counter to number
+% equations as well, so that we don't have both Theorem 1.1 and
+% equation (1.1).
+\let\c@equation\c@thm
+\numberwithin{equation}{section}
+
+
+%%%% ENUMERATE NUMBERING %%%%
+
+% Number the first level of enumerates as (i), (ii), ...
+\renewcommand{\theenumi}{(\roman{enumi})}
+\renewcommand{\labelenumi}{\theenumi}
+
+
+%%%% MARGINS %%%%
+
+% This is a matter of personal preference, but I think the left
+% margins on enumerates and itemizes are too wide.
+\setitemize[1]{leftmargin=2em}
+\setenumerate[1]{leftmargin=*}
+
+% Likewise that they are too spaced out.
+\setitemize[1]{itemsep=-0.2em}
+\setenumerate[1]{itemsep=-0.2em}
+
+%%% Notes %%%
+\def\noteson{%
+\gdef\note##1{\mbox{}\marginpar{\color{blue}\textasteriskcentered\ ##1}}}
+\gdef\notesoff{\gdef\note##1{\null}}
+\noteson
+
+\newcommand{\Coq}{\textsc{Coq}\xspace}
+\newcommand{\Agda}{\textsc{Agda}\xspace}
+\newcommand{\NuPRL}{\textsc{NuPRL}\xspace}
+
+%%%% CITATIONS %%%%
+
+% \let \cite \citep
+
+%%%% INDEX %%%%
+
+\newcommand{\footstyle}[1]{{\hyperpage{#1}}n} % If you index something that is in a footnote
+\newcommand{\defstyle}[1]{\textbf{\hyperpage{#1}}} % Style for pageref to a definition
+
+\newcommand{\indexdef}[1]{\index{#1|defstyle}} % Index a definition
+\newcommand{\indexfoot}[1]{\index{#1|footstyle}} % Index a term in a footnote
+\newcommand{\indexsee}[2]{\index{#1|see{#2}}} % Index "see also"
+
+
+%%%% Standard phrasing or spelling of common phrases %%%%
+
+\newcommand{\ZF}{Zermelo--Fraenkel}
+\newcommand{\CZF}{Constructive \ZF{} Set Theory}
+
+\newcommand{\LEM}[1]{\ensuremath{\mathsf{LEM}_{#1}}\xspace}
+\newcommand{\choice}[1]{\ensuremath{\mathsf{AC}_{#1}}\xspace}
+
+%%%% MISC %%%%
+
+\newcommand{\mentalpause}{\medskip} % Use for "mental" pause, instead of \smallskip or \medskip
+
+%% Use \symlabel instead of \label to mark a pageref that you need in the index of symbols
+\newcounter{symindex}
+\newcommand{\symlabel}[1]{\refstepcounter{symindex}\label{#1}}
+
+% Local Variables:
+% mode: latex
+% TeX-master: "hott-online"
+% End:
diff --git a/books/hott/preface.tex b/books/hott/preface.tex
new file mode 100644
index 0000000000000000000000000000000000000000..265876a8264783593ea4bac51b156e62fc980818
--- /dev/null
+++ b/books/hott/preface.tex
@@ -0,0 +1,111 @@
+\chapter*{Preface}
+\label{cha:preface}
+
+% Uncomment this if you think Preface should appear in TOC
+%\addcontentsline{toc}{chapter}{Preface}
+\subsection*{IAS Special Year on Univalent Foundations}
+
+A Special Year on Univalent Foundations of Mathematics was held in 2012--13 at the Institute for Advanced Study, School of Mathematics, organized by Steve Awodey, Thierry Coquand, and Vladimir Voevodsky. The following people were the official participants.
+
+\begin{multicols}{\OPTprefacecols}{
+\begin{itemize}
+\item[] Peter Aczel
+\item[] Benedikt Ahrens
+\item[] Thorsten Altenkirch
+\item[] Steve Awodey
+\item[] Bruno Barras
+\item[] Andrej Bauer
+\item[] Yves Bertot
+\item[] Marc Bezem
+\item[] Thierry Coquand
+\item[] Eric Finster
+\item[] Daniel Grayson
+\item[] Hugo Herbelin
+\item[] Andr\'e Joyal
+\item[] Dan Licata
+\item[] Peter Lumsdaine
+\item[] Assia Mahboubi
+\item[] Per Martin-L\"of
+\item[] Sergey Melikhov
+\item[] Alvaro Pelayo
+\item[] Andrew Polonsky
+\item[] Michael Shulman
+\item[] Matthieu Sozeau
+\item[] Bas Spitters
+\item[] Benno van den Berg
+\item[] Vladimir Voevodsky
+\item[] Michael Warren
+\item[] Noam Zeilberger
+\end{itemize}
+}
+\end{multicols}
+
+\noindent There were also the following students, whose participation was no less valuable.
+
+\begin{multicols}{\OPTprefacecols}{
+\begin{itemize}
+\item[] Carlo Angiuli
+\item[] Anthony Bordg
+\item[] Guillaume Brunerie
+\item[] Chris Kapulkin
+\item[] Egbert Rijke
+\item[] Kristina Sojakova
+\end{itemize}
+}
+\end{multicols}
+
+\noindent In addition, there were the following short- and long-term visitors, including student visitors, whose contributions to the Special Year were also essential.
+
+\begin{multicols}{\OPTprefacecols}{
+\begin{itemize}
+\item[] Jeremy Avigad
+\item[] Cyril Cohen
+\item[] Robert Constable
+\item[] Pierre-Louis Curien
+\item[] Peter Dybjer
+\item[] Mart{\'\i}n Escard{\'o}
+\item[] Kuen-Bang Hou
+\item[] Nicola Gambino
+\item[] Richard Garner
+\item[] Georges Gonthier
+\item[] Thomas Hales
+\item[] Robert Harper
+\item[] Martin Hofmann
+\item[] Pieter Hofstra
+\item[] Joachim Kock
+\item[] Nicolai Kraus
+\item[] Nuo Li
+\item[] Zhaohui Luo
+\item[] Michael Nahas
+\item[] Erik Palmgren
+\item[] Emily Riehl
+\item[] Dana Scott
+\item[] Philip Scott
+\item[] Sergei Soloviev
+\end{itemize}
+}
+\end{multicols}
+
+\subsection*{About this book}
+
+We did not set out to write a book. The present work has its origins in our collective attempts to develop a new style of ``informal type theory'' that can be read and understood by a human being, as a complement to a formal proof that can be checked by a machine.
+Univalent foundations is closely tied to the idea of a foundation of mathematics that can be implemented in a computer proof assistant. Although such a formalization is not part of this book, much of the material presented here was actually done first in the fully formalized setting inside a proof assistant, and only later ``unformalized'' to arrive at the presentation you find before you --- a remarkable inversion of the usual state of affairs in formalized mathematics.
+
+Each of the above-named individuals contributed something to the Special Year --- and so to this book --- in the form of ideas, words, or deeds. The spirit of collaboration that prevailed throughout the year was truly extraordinary.
+
+\mentalpause
+
+Special thanks are due to the Institute for Advanced Study, without which this book would obviously never have come to be. It proved to be an ideal setting for the creation of this new branch of mathematics: stimulating, congenial, and supportive. May some trace of this unique atmosphere linger in the pages of this book, and in the future development of this new field of study.
+
+\bigskip
+
+\begin{flushright}
+The Univalent Foundations Program\\
+Institute for Advanced Study\\
+Princeton, April 2013
+\end{flushright}
+
+%%%%%%%%%%%%% end of scope of local macros
+% Local Variables:
+% TeX-master: "hott-online"
+% End:
diff --git a/books/hott/preliminaries.tex b/books/hott/preliminaries.tex
new file mode 100644
index 0000000000000000000000000000000000000000..98a3f7575170988fb48a3894844911ce76527088
--- /dev/null
+++ b/books/hott/preliminaries.tex
@@ -0,0 +1,2044 @@
+\chapter{Type theory}
+\label{cha:typetheory}
+
+\section{Type theory versus set theory}
+\label{sec:types-vs-sets}
+\label{sec:axioms}
+
+\index{type theory}
+Homotopy type theory is (among other things) a foundational language for mathematics, i.e., an alternative to Zermelo--Fraenkel\index{set theory!Zermelo--Fraenkel} set theory.
+However, it behaves differently from set theory in several important ways, and that can take some getting used to.
+Explaining these differences carefully requires us to be more formal here than we will be in the rest of the book.
+As stated in the introduction, our goal is to write type theory \emph{informally}; but for a mathematician accustomed to set theory, more precision at the beginning can help avoid some common misconceptions and mistakes.
+
+We note that a set-theoretic foundation has two ``layers'': the deductive system of first-order logic,\index{first-order!logic} and, formulated inside this system, the axioms of a particular theory, such as ZFC.
+Thus, set theory is not only about sets, but rather about the interplay between sets (the objects of the second layer) and propositions (the objects of the first layer).
+
+By contrast, type theory is its own deductive system: it need not be formulated inside any superstructure, such as first-order logic.
+Instead of the two basic notions of set theory, sets and propositions, type theory has one basic notion: \emph{types}.
+Propositions (statements which we can prove, disprove, assume, negate, and so on\footnote{Confusingly, it is also a common practice (dating
+back to Euclid) to use the word ``proposition'' synonymously with ``theorem''.
+ We will confine ourselves to the logician's usage, according to which a \emph{proposition} is a statement \emph{susceptible to} proof, whereas a \emph{theorem}\indexfoot{theorem} (or ``lemma''\indexfoot{lemma} or ``corollary''\indexfoot{corollary}) is such a statement that \emph{has been} proven.
+Thus ``$0=1$'' and its negation ``$\neg(0=1)$'' are both propositions, but only the latter is a theorem.}) are identified with particular types, via the correspondence shown in \cref{tab:pov} on page~\pageref{tab:pov}.
+Thus, the mathematical activity of \emph{proving a theorem} is identified with a special case of the mathematical activity of \emph{constructing an object}---in this case, an inhabitant of a type that represents a proposition.
+
+\index{deductive system}%
+This leads us to another difference between type theory and set theory, but to explain it we must say a little about deductive systems in general.
+Informally, a deductive system is a collection of \define{rules}
+\indexdef{rule}%
+for deriving things called \define{judgments}.
+\indexdef{judgment}%
+If we think of a deductive system as a formal game,
+\index{game!deductive system as}%
+then the judgments are the ``positions'' in the game which we reach by following the game rules.
+We can also think of a deductive system as a sort of algebraic theory, in which case the judgments are the elements (like the elements of a group) and the deductive rules are the operations (like the group multiplication).
+From a logical point of view, the judgments can be considered to be the ``external'' statements, living in the metatheory, as opposed to the ``internal'' statements of the theory itself.
+
+In the deductive system of first-order logic (on which set theory is based), there is only one kind of judgment: that a given proposition has a proof.
+That is, each proposition $A$ gives rise to a judgment ``$A$ has a proof'', and all judgments are of this form.
+A rule of first-order logic such as ``from $A$ and $B$ infer $A\wedge B$'' is actually a rule of ``proof construction'' which says that given the judgments ``$A$ has a proof'' and ``$B$ has a proof'', we may deduce that ``$A\wedge B$ has a proof''.
+Note that the judgment ``$A$ has a proof'' exists at a different level from the \emph{proposition} $A$ itself, which is an internal statement of the theory.
+% In particular, we cannot manipulate it to construct propositions such as ``if $A$ has a proof, then $B$ does not have a proof''---unless we are using our set-theoretic foundation as a meta-theory with which to talk about some other axiomatic system.
+
+The basic judgment of type theory, analogous to ``$A$ has a proof'', is written ``$a:A$'' and pronounced as ``the term $a$ has type $A$'', or more loosely ``$a$ is an element of $A$'' (or, in homotopy type theory, ``$a$ is a point of $A$'').
+\indexdef{term}%
+\indexdef{element}%
+\indexdef{point!of a type}%
+When $A$ is a type representing a proposition, then $a$ may be called a \emph{witness}\index{witness!to the truth of a proposition} to the provability of $A$, or \emph{evidence}\index{evidence, of the truth of a proposition} of the truth of $A$ (or even a \emph{proof}\index{proof} of $A$, but we will try to avoid this confusing terminology).
+In this case, the judgment $a:A$ is derivable in type theory (for some $a$) precisely when the analogous judgment ``$A$ has a proof'' is derivable in first-order logic (modulo differences in the axioms assumed and in the encoding of mathematics, as we will discuss throughout the book).
+
+On the other hand, if the type $A$ is being treated more like a set than like a proposition (although as we will see, the distinction can become blurry), then ``$a:A$'' may be regarded as analogous to the set-theoretic statement ``$a\in A$''.
+However, there is an essential difference in that ``$a:A$'' is a \emph{judgment} whereas ``$a\in A$'' is a \emph{proposition}.
+In particular, when working internally in type theory, we cannot make statements such as ``if $a:A$ then it is not the case that $b:B$'', nor can we ``disprove'' the judgment ``$a:A$''.
+
+A good way to think about this is that in set theory, ``membership'' is a relation which may or may not hold between two pre-existing objects ``$a$'' and ``$A$'', while in type theory we cannot talk about an element ``$a$'' in isolation: every element \emph{by its very nature} is an element of some type, and that type is (generally speaking) uniquely determined.
+Thus, when we say informally ``let $x$ be a natural number'', in set theory this is shorthand for ``let $x$ be a thing and assume that $x\in\nat$'', whereas in type theory ``let $x:\nat$'' is an atomic statement: we cannot introduce a variable without specifying its type.\index{membership}
+
+
+At first glance, this may seem an uncomfortable restriction, but it is arguably closer to the intuitive mathematical meaning of ``let $x$ be a natural number''.
+In practice, it seems that whenever we actually \emph{need} ``$a\in A$'' to be a proposition rather than a judgment, there is always an ambient set $B$ of which $a$ is known to be an element and $A$ is known to be a subset.
+This situation is also easy to represent in type theory, by taking $a$ to be an element of the type $B$, and $A$ to be a predicate on $B$; see \cref{subsec:prop-subsets}.
+
+A last difference between type theory and set theory is the treatment of equality.
+The familiar notion of equality in mathematics is a proposition: e.g.\ we can disprove an equality or assume an equality as a hypothesis.
+Since in type theory, propositions are types, this means that equality is a type: for elements $a,b:A$ (that is, both $a:A$ and $b:A$) we have a type ``$\id[A]ab$''.
+(In \emph{homotopy} type theory, of course, this equality proposition can behave in unfamiliar ways: see \cref{sec:identity-types,cha:basics}, and the rest of the book.)
+When $\id[A]ab$ is inhabited, we say that $a$ and $b$ are \define{(propositionally) equal}.
+\index{propositional!equality}%
+\index{equality!propositional}%
+
+However, in type theory there is also a need for an equality \emph{judgment}, existing at the same level as the judgment ``$x:A$''.\index{judgment}
+\symlabel{defn:judgmental-equality}%
+This is called \define{judgmental equality}
+\indexdef{equality!judgmental}%
+\indexdef{judgmental equality}%
+or \define{definitional equality},
+\indexdef{equality!definitional}%
+\indexsee{definitional equality}{equality, definitional}%
+and we write it as $a\jdeq b : A$ or simply $a \jdeq b$.
+It is helpful to think of this as meaning ``equal by definition''.
+For instance, if we define a function $f:\nat\to\nat$ by the equation $f(x)=x^2$, then the expression $f(3)$ is equal to $3^2$ \emph{by definition}.
+Inside the theory, it does not make sense to negate or assume an equality-by-definition; we cannot say ``if $x$ is equal to $y$ by definition, then $z$ is not equal to $w$ by definition''.
+Whether or not two expressions are equal by definition is just a matter of expanding out the definitions; in particular, it is algorithmically\index{algorithm} decidable (though the algorithm is necessarily meta-theoretic, not internal to the theory).\index{decidable!definitional equality}
+
+As type theory becomes more complicated, judgmental equality can get more subtle than this, but it is a good intuition to start from.
+Alternatively, if we regard a deductive system as an algebraic theory, then judgmental equality is simply the equality in that theory, analogous to the equality between elements of a group---the only potential for confusion is that there is \emph{also} an object \emph{inside} the deductive system of type theory (namely the type ``$a=b$'') which behaves internally as a notion of ``equality''.
+
+The reason we \emph{want} a judgmental notion of equality is so that it can control the other form of judgment, ``$a:A$''.
+For instance, suppose we have given a proof that $3^2=9$, i.e.\ we have derived the judgment $p:(3^2=9)$ for some $p$.
+Then the same witness $p$ ought to count as a proof that $f(3)=9$, since $f(3)$ is $3^2$ \emph{by definition}.
+The best way to represent this is with a rule saying that given the judgments $a:A$ and $A\jdeq B$, we may derive the judgment $a:B$.
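As an illustrative aside, both the decidability of definitional equality and this conversion rule are visible in modern proof assistants. The following sketch is in Lean 4, which is not a system used in this book; the names `f` and the use of `rfl` are illustrative, not the book's notation.

```lean
-- Judgmental (definitional) equality, sketched in Lean 4.
def f (x : Nat) : Nat := x ^ 2

-- `rfl` type-checks exactly when both sides are definitionally equal:
-- the checker unfolds `f` and computes 3 ^ 2.
example : f 3 = 9 := rfl

-- The conversion rule in action: a witness of type 3 ^ 2 = 9 is
-- accepted at the definitionally equal type f 3 = 9 without change.
example : f 3 = 9 := (rfl : 3 ^ 2 = 9)
```

Here the kernel's ability to decide definitional equality by computation is what lets the second example be accepted without any explicit proof step.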
+
+Thus, for us, type theory will be a deductive system based on two forms of judgment:
+\begin{center}
+\medskip
+\begin{tabular}{cl}
+ \toprule
+ Judgment & Meaning\\
+ \midrule
+ $a : A$ & ``$a$ is an object of type $A$''\\
+ $a \jdeq b : A$ & ``$a$ and $b$ are definitionally equal objects of type $A$''\\
+ \bottomrule
+\end{tabular}
+\medskip
+\end{center}
+%
+\symlabel{defn:defeq}%
+When introducing a definitional equality, i.e., defining one thing to be equal to another, we will use the symbol ``$\defeq$''.
+Thus, the above definition of the function $f$ would be written as $f(x)\defeq x^2$.
+
+Because judgments cannot be put together into more complicated statements, the symbols ``$:$'' and ``$\jdeq$'' bind more loosely than anything else.%
+\footnote{In formalized\indexfoot{mathematics!formalized} type theory, commas and turnstiles can bind even more loosely.
+ For instance, $x:A,y:B\vdash c:C$ is parsed as $((x:A),(y:B))\vdash (c:C)$.
+ However, in this book we refrain from such notation until \cref{cha:rules}.}
+Thus, for instance, ``$p:\id{x}{y}$'' should be parsed as ``$p:(\id{x}{y})$'', which makes sense since ``$\id{x}{y}$'' is a type, and not as ``$\id{(p:x)}{y}$'', which is senseless since ``$p:x$'' is a judgment and cannot be equal to anything.
+Similarly, ``$A\jdeq \id{x}{y}$'' can only be parsed as ``$A\jdeq(\id{x}{y})$'', although in extreme cases such as this, one ought to add parentheses anyway to aid reading comprehension.
+Moreover, later on we will fall into the common notation of chaining together equalities --- e.g.\ writing $a=b=c=d$ to mean ``$a=b$ and $b=c$ and $c=d$, hence $a=d$'' --- and we will also include judgmental equalities in such chains.
+Context usually suffices to make the intent clear.
+
+This is perhaps also an appropriate place to mention that the common mathematical notation ``$f:A\to B$'', expressing the fact that $f$ is a function from $A$ to $B$, can be regarded as a typing judgment, since we use ``$A\to B$'' as notation for the type of functions from $A$ to $B$ (as is standard practice in type theory; see \cref{sec:pi-types}).
+
+\index{assumption|(defstyle}%
+Judgments may depend on \emph{assumptions} of the form $x:A$, where $x$ is a variable
+\indexdef{variable}%
+and $A$ is a type.
+For example, we may construct an object $m + n : \nat$ under the assumptions that $m,n : \nat$.
+Another example is that assuming $A$ is a type, $x,y : A$, and $p : \id[A]{x}{y}$, we may construct an element $p^{-1} : \id[A]{y}{x}$.
+The collection of all such assumptions is called the \define{context};%
+\index{context}
+from a topological point of view it may be thought of as a ``parameter\index{parameter!space} space''.
+In fact, technically the context must be an ordered list of assumptions, since later assumptions may depend on previous ones: the assumption $x:A$ can only be made \emph{after} the assumptions of any variables appearing in the type $A$.
+
+If the type $A$ in an assumption $x:A$ represents a proposition, then the assumption is a type-theoretic version of a \emph{hypothesis}:
+\indexdef{hypothesis}%
+we assume that the proposition $A$ holds.
+When types are regarded as propositions, we may omit the names of their proofs.
+Thus, in the second example above we may instead say that assuming $\id[A]{x}{y}$, we can prove $\id[A]{y}{x}$.
+However, since we are doing ``proof-relevant'' mathematics,
+\index{mathematics!proof-relevant}%
+we will frequently refer back to proofs as objects.
+In the example above, for instance, we may want to establish that $p^{-1}$ together with the proofs of transitivity and reflexivity behave like a groupoid; see \cref{cha:basics}.
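As a sketch of the running example (in Lean 4, which is not the notation of this book), the inversion of a propositional equality is a constructed object, written `p.symm` where the book writes $p^{-1}$:

```lean
-- From the assumptions A : Type, x y : A, and p : x = y,
-- we construct an inverse proof p.symm : y = x.
example (A : Type) (x y : A) (p : x = y) : y = x := p.symm
```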
+
+Note that under this meaning of the word \emph{assumption}, we can assume a propositional equality (by assuming a variable $p:x=y$), but we cannot assume a judgmental equality $x\jdeq y$, since it is not a type that can have an element.
+However, we can do something else which looks kind of like assuming a judgmental equality: if we have a type or an element which involves a variable $x:A$, then we can \emph{substitute} any particular element $a:A$ for $x$ to obtain a more specific type or element.
+We will sometimes use language like ``now assume $x\jdeq a$'' to refer to this process of substitution, even though it is not an \emph{assumption} in the technical sense introduced above.
+\index{assumption|)}%
+
+By the same token, we cannot \emph{prove} a judgmental equality either, since it is not a type in which we can exhibit a witness.
+Nevertheless, we will sometimes state judgmental equalities as part of a theorem, e.g.\ ``there exists $f:A\to B$ such that $f(x)\jdeq y$''.
+This should be regarded as the making of two separate judgments: first we make the judgment $f:A\to B$ for some element $f$, then we make the additional judgment that $f(x)\jdeq y$.
+
+In the rest of this chapter, we attempt to give an informal presentation of type theory, sufficient for the purposes of this book; we give a more formal account in \cref{cha:rules}.
+Aside from some fairly obvious rules (such as the fact that judgmentally equal things can always be substituted\index{substitution} for each other), the rules of type theory can be grouped into \emph{type formers}.
+Each type former consists of a way to construct types (possibly making use of previously constructed types), together with rules for the construction and behavior of elements of that type.
+In most cases, these rules follow a fairly predictable pattern, but we will not attempt to make this precise here; see however the beginning of \cref{sec:finite-product-types} and also \cref{cha:induction}.\index{type theory!informal}
+
+
+\index{axiom!versus rules}%
+\index{rule!versus axioms}%
+An important aspect of the type theory presented in this chapter is that it consists entirely of \emph{rules}, without any \emph{axioms}.
+In the description of deductive systems in terms of judgments, the \emph{rules} are what allow us to conclude one judgment from a collection of others, while the \emph{axioms} are the judgments we are given at the outset.
+If we think of a deductive system as a formal game, then the rules are the rules of the game, while the axioms are the starting position.
+And if we think of a deductive system as an algebraic theory, then the rules are the operations of the theory, while the axioms are the \emph{generators} for some particular free model of that theory.
+
+In set theory, the only rules are the rules of first-order logic (such as the rule allowing us to deduce ``$A\wedge B$ has a proof'' from ``$A$ has a proof'' and ``$B$ has a proof''): all the information about the behavior of sets is contained in the axioms.
+By contrast, in type theory, it is usually the \emph{rules} which contain all the information, with no axioms being necessary.
+For instance, in \cref{sec:finite-product-types} we will see that there is a rule allowing us to deduce the judgment ``$(a,b):A\times B$'' from ``$a:A$'' and ``$b:B$'', whereas in set theory the analogous statement would be (a consequence of) the pairing axiom.
+
+The advantage of formulating type theory using only rules is that rules are ``procedural''.
+In particular, this property is what makes possible (though it does not automatically ensure) the good computational properties of type theory, such as ``canonicity''.\index{canonicity}
+However, while this style works for traditional type theories, we do not yet understand how to formulate everything we need for \emph{homotopy} type theory in this way.
+In particular, in \cref{sec:compute-pi,sec:compute-universe,cha:hits} we will have to augment the rules of type theory presented in this chapter by introducing additional axioms, notably the \emph{univalence axiom}.
+In this chapter, however, we confine ourselves to a traditional rule-based type theory.
+
+
+\section{Function types}
+\label{sec:function-types}
+
+\index{type!function|(defstyle}%
+\indexsee{function type}{type, function}%
+Given types $A$ and $B$, we can construct the type $A \to B$ of \define{functions}
+\index{function|(defstyle}%
+\indexsee{map}{function}%
+\indexsee{mapping}{function}%
+with domain $A$ and codomain $B$.
+We also sometimes refer to functions as \define{maps}.
+\index{domain!of a function}%
+\index{codomain, of a function}%
+\index{function!domain of}%
+\index{function!codomain of}%
+\index{functional relation}%
+Unlike in set theory, functions are not defined as
+functional relations; rather they are a primitive concept in type theory.
+We explain the function type by prescribing what we can do with functions,
+how to construct them and what equalities they induce.
+
+Given a function $f : A \to B$ and an element of the domain $a : A$, we
+can \define{apply}
+\indexdef{application!of function}%
+\indexdef{function!application}%
+\indexsee{evaluation}{application, of a function}
+the function to obtain an element of the codomain $B$,
+denoted $f(a)$ and called the \define{value} of $f$ at $a$.
+\indexdef{value!of a function}%
+It is common in type theory to omit the parentheses\index{parentheses} and denote $f(a)$ simply by $f\,a$, and we will sometimes do this as well.
+
+But how can we construct elements of $A \to B$? There are two equivalent ways:
+either by direct definition or by using
+$\lambda$-abstraction. Introducing a function by definition
+\indexdef{definition!of function, direct}%
+means that
+we introduce a function by giving it a name --- let's say, $f$ --- and saying
+we define $f : A \to B$ by giving an equation
+\begin{equation}
+ \label{eq:expldef}
+ f(x) \defeq \Phi
+\end{equation}
+where $x$ is a variable
+\index{variable}%
+and $\Phi$ is an expression which may use $x$.
+In order for this to be valid, we have to check that $\Phi : B$ assuming $x:A$.
+
+Now we can compute $f(a)$ by replacing the variable $x$ in $\Phi$ with
+$a$. As an example, consider the function $f : \nat \to \nat$ which is
+defined by $f(x) \defeq x+x$. (We will define $\nat$ and $+$ in \cref{sec:inductive-types}.)
+Then $f(2)$ is judgmentally equal to $2+2$.
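A minimal sketch of this direct definition, rendered in Lean 4 (not a system used in this book):

```lean
-- Direct definition of a function by an equation f(x) :≡ x + x.
def f (x : Nat) : Nat := x + x

-- f 2 is judgmentally equal to 2 + 2, so reflexivity closes the goal.
example : f 2 = 2 + 2 := rfl
```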
+
+If we don't want to introduce a name for the function, we can use
+\define{$\lambda$-abstraction}.
+\index{lambda abstraction@$\lambda$-abstraction|defstyle}%
+\indexsee{function!lambda abstraction@$\lambda$-abstraction}{$\lambda$-abstraction}%
+\indexsee{abstraction!lambda-@$\lambda$-}{$\lambda$-abstraction}%
+Given an expression $\Phi$ of type $B$ which may use $x:A$, as above, we write $\lam{x:A} \Phi$ to indicate the same function defined by~\eqref{eq:expldef}.
+Thus, we have
+\[ (\lamt{x:A}\Phi) : A \to B. \]
+For the example in the previous paragraph, we have the typing judgment
+\[ (\lam{x:\nat}x+x) : \nat \to \nat. \]
+As another example, for any types $A$ and $B$ and any element $y:B$, we have a \define{constant function}
+\indexdef{constant!function}%
+\indexdef{function!constant}%
+$(\lam{x:A} y): A\to B$.
+
+We generally omit the type of the variable $x$ in a $\lambda$-abstraction and write $\lam{x}\Phi$, since the typing $x:A$ is inferable from the judgment that the function $\lam x \Phi$ has type $A\to B$.
+By convention, the ``scope''
+\indexdef{variable!scope of}%
+\indexdef{scope}%
+of the variable binding ``$\lam{x}$'' is the entire rest of the expression, unless delimited with parentheses\index{parentheses}.
+Thus, for instance, $\lam{x} x+x$ should be parsed as $\lam{x} (x+x)$, not as $(\lam{x}x)+x$ (which would, in this case, be ill-typed anyway).
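The $\lambda$-abstractions of this section can be sketched in Lean 4 (illustrative only; `const` is a hypothetical name, not the book's notation):

```lean
-- λ-abstraction: an anonymous function (λ x : ℕ. x + x) : ℕ → ℕ.
#check (fun x : Nat => x + x)   -- Nat → Nat

-- The constant function (λ x : A. y) : A → B, for any y : B.
def const (A : Type) {B : Type} (y : B) : A → B := fun _ => y

-- Scope of the binder: fun x => x + x parses as fun x => (x + x).
example : (fun x : Nat => x + x) = (fun x : Nat => (x + x)) := rfl
```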
+
+Another equivalent notation is
+\symlabel{mapsto}%
+\[ (x \mapsto \Phi) : A \to B. \]
+\symlabel{blank}%
+We may also sometimes use a blank ``$\blank$'' in the expression $\Phi$ in place of a variable, to denote an implicit $\lambda$-abstraction.
+For instance, $g(x,\blank)$ is another way to write $\lam{y} g(x,y)$.
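The blank notation has a direct analogue in Lean 4's placeholder syntax, sketched below (the function `g` is hypothetical):

```lean
-- The text's g(x, –), written with Lean 4's · placeholder.
def g (x y : Nat) : Nat := x * y

-- (g 2 ·) elaborates to fun y => g 2 y.
example : (g 2 ·) 5 = g 2 5 := rfl
```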
+
+Now a $\lambda$-abstraction is a function, so we can apply it to an argument $a:A$.
+We then have the following \define{computation rule}\indexdef{computation rule!for function types}\footnote{Use of this equality is often referred to as \define{$\beta$-conversion}
+\indexsee{beta-conversion@$\beta $-conversion}{$\beta$-reduction}%
+\indexsee{conversion!beta@$\beta $-}{$\beta$-reduction}%
+or \define{$\beta$-reduction}.%
+\index{beta-reduction@$\beta $-reduction|footstyle}%
+\indexsee{reduction!beta@$\beta $-}{$\beta$-reduction}%
+}, which is a definitional equality:
+\[(\lamu{x:A}\Phi)(a) \jdeq \Phi'\]
+where $\Phi'$ is the
+expression $\Phi$ in which all occurrences of $x$ have been replaced by $a$.
+Continuing the above example, we have
+%
+\[ (\lamu{x:\nat}x+x)(2) \jdeq 2+2. \]
+%
+Note that from any function $f:A\to B$, we can construct a lambda abstraction function $\lam{x} f(x)$.
+Since this is by definition ``the function that applies $f$ to its argument'', we consider it to be definitionally equal to $f$:\footnote{Use of this equality is often referred to as \define{$\eta$-conversion}
+\indexsee{eta-conversion@$\eta $-conversion}{$\eta$-expansion}%
+\indexsee{conversion!eta@$\eta $-}{$\eta$-expansion}%
or \define{$\eta$-expansion}.
\index{eta-expansion@$\eta $-expansion|footstyle}%
\indexsee{expansion, eta-@expansion, $\eta $-}{$\eta$-expansion}%
}
+\[ f \jdeq (\lam{x} f(x)). \]
+This equality is the \define{uniqueness principle for function types}\indexdef{uniqueness!principle!for function types}, because it shows that $f$ is uniquely determined by its values.
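In Lean 4 (used here only as an illustration of a closely related type theory), this uniqueness principle is likewise a definitional equality, so it too is witnessed by bare reflexivity:

```lean
-- η: a function `f` is definitionally equal to `fun x => f x`.
example (f : Nat → Nat) : f = (fun x => f x) := rfl
```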
+
+The introduction of functions by definitions with explicit parameters can be reduced
+to simple definitions by using $\lambda$-abstraction: i.e., we can read
+a definition of $f: A\to B$ by
+\[ f(x) \defeq \Phi \]
+as
+\[ f \defeq \lamu{x:A}\Phi.\]
+
+When doing calculations involving variables, we have to be
+careful when replacing a variable with an expression that also involves
+variables, because we want to preserve the binding structure of
+expressions. By the \emph{binding structure}\indexdef{binding structure} we mean the
+invisible link generated by binders such as $\lambda$, $\Pi$ and
+$\Sigma$ (the latter we are going to meet soon) between the place where the variable is introduced and where it is used. As an example, consider $f : \nat \to (\nat \to \nat)$
+defined as
+\[ f(x) \defeq \lamu{y:\nat} x + y. \]
+Now if we have assumed somewhere that $y : \nat$, then what is $f(y)$? It would be wrong to just naively replace $x$ by $y$ everywhere in the expression ``$\lam{y}x+y$'' defining $f(x)$, obtaining $\lamu{y:\nat} y + y$, because this means that $y$ gets \define{captured}.
+\indexdef{capture, of a variable}%
+\indexdef{variable!captured}%
+Previously, the substituted\index{substitution} $y$ was referring to our assumption, but now it is referring to the argument of the $\lambda$-abstraction. Hence, this naive substitution would destroy the binding structure, allowing us to perform calculations which are semantically unsound.
+
+But what \emph{is} $f(y)$ in this example? Note that bound (or ``dummy'')
+variables
+\indexdef{variable!bound}%
+\indexdef{variable!dummy}%
+\indexsee{bound variable}{variable, bound}%
+\indexsee{dummy variable}{variable, bound}%
+such as $y$ in the expression $\lamu{y:\nat} x + y$
+have only a local meaning, and can be consistently replaced by any
other variable, preserving the binding structure. Indeed, $\lamu{y:\nat} x + y$ is declared to be judgmentally equal\footnote{Use of this equality is often referred to as \define{$\alpha$-conversion}.
\indexfoot{alpha-conversion@$\alpha $-conversion}
\indexsee{conversion!alpha@$\alpha$-}{$\alpha$-conversion}
} to
+$\lamu{z:\nat} x + z$. It follows that
+$f(y)$ is judgmentally equal to $\lamu{z:\nat} y + z$, and that answers our question. (Instead of $z$,
+any variable distinct from $y$ could have been used, yielding an equal result.)
+
+Of course, this should all be familiar to any mathematician: it is the same phenomenon as the fact that if $f(x) \defeq \int_1^2 \frac{dt}{x-t}$, then $f(t)$ is not $\int_1^2 \frac{dt}{t-t}$ but rather $\int_1^2 \frac{ds}{t-s}$.
+A $\lambda$-abstraction binds a dummy variable in exactly the same way that an integral does.
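Proof assistants implement exactly this capture-avoiding substitution. As a sketch in Lean 4 (an illustration outside the text's formal system, with the name \texttt{f} chosen to match the example above):

```lean
def f (x : Nat) : Nat → Nat := fun y => x + y

-- Substituting `y` for `x` does not capture: `f y` unfolds to `fun y' => y + y'`,
-- which is α-equivalent (equal up to renaming of the bound variable)
-- to `fun z => y + z`, so `rfl` closes the goal.
example (y : Nat) : f y = fun z => y + z := rfl
```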
+
+We have seen how to define functions in one variable. One
+way to define functions in several variables would be to use the
+cartesian product, which will be introduced later; a function with
+parameters $A$ and $B$ and results in $C$ would be given the type
+$f : A \times B \to C$. However, there is another choice that avoids
+using product types, which is called \define{currying}
+\indexdef{currying}%
+\indexdef{function!currying of}%
+(after the mathematician Haskell Curry).
+\index{programming}%
+
+The idea of currying is to represent a function of two inputs $a:A$ and $b:B$ as a function which takes \emph{one} input $a:A$ and returns \emph{another function}, which then takes a second input $b:B$ and returns the result.
+That is, we consider two-variable functions to belong to an iterated function type, $f : A \to (B \to C)$.
+We may also write this without the parentheses\index{parentheses}, as $f : A \to B \to C$, with
+associativity\index{associativity!of function types} to the right as the default convention. Then given $a : A$ and $b : B$,
+we can apply $f$ to $a$ and then apply the result to $b$, obtaining
+$f(a)(b) : C$. To avoid the proliferation of parentheses, we allow ourselves to
+write $f(a)(b)$ as $f(a,b)$ even though there are no products
+involved.
+When omitting parentheses around function arguments entirely, we write $f\,a\,b$ for $(f\,a)\,b$, with the default associativity now being to the left so that $f$ is applied to its arguments in the correct order.
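The two associativity conventions can be sketched in Lean 4 (illustrative only; the names \texttt{g} and \texttt{g1} are ours):

```lean
-- A curried two-argument function: Nat → (Nat → Nat).
def g : Nat → Nat → Nat := fun x y => 10 * x + y

-- Application associates to the left: `g 1 2` parses as `(g 1) 2`.
#eval g 1 2     -- 12
#eval (g 1) 2   -- 12

-- Partial application returns the intermediate one-argument function:
def g1 : Nat → Nat := g 1
#eval g1 2      -- 12
```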
+
+Our notation for definitions with explicit parameters extends to
+this situation: we can define a named function $f : A \to B \to C$ by
+giving an equation
+\[ f(x,y) \defeq \Phi\]
+where $\Phi:C$ assuming $x:A$ and $y:B$. Using $\lambda$-abstraction\index{lambda abstraction@$\lambda$-abstraction} this
+corresponds to
+\[ f \defeq \lamu{x:A}{y:B} \Phi, \]
+which may also be written as
+\[ f \defeq x \mapsto y \mapsto \Phi. \]
+We can also implicitly abstract over multiple variables by writing multiple blanks, e.g.\ $g(\blank,\blank)$ means $\lam{x}{y} g(x,y)$.
+Currying a function of three or more arguments is a straightforward extension of what we have just described.
+
+\index{type!function|)}%
+\index{function|)}%
+
+
+\section{Universes and families}
+\label{sec:universes}
+
+So far, we have been using the expression ``$A$ is a type'' informally. We
+are going to make this more precise by introducing \define{universes}.
+\index{type!universe|(defstyle}%
+\indexsee{universe}{type, universe}%
+A universe is a type whose elements are types. As in naive set theory,
+we might wish for a universe of all types $\UU_\infty$ including itself
+(that is, with $\UU_\infty : \UU_\infty$).
+However, as in set
+theory, this is unsound, i.e.\ we can deduce from it that every type,
+including the empty type representing the proposition False (see \cref{sec:coproduct-types}), is inhabited.
+For instance, using a
+representation of sets as trees, we can directly encode Russell's
+paradox\index{paradox} \cite{coquand:paradox}.
+% or alternatively, in order to avoid the use of
+% inductive types to define trees, we can follow Girard \cite{girard:paradox} and encode the Burali-Forti paradox,
+% which shows that the collection of all ordinals cannot be an ordinal.
+
+To avoid the paradox we introduce a hierarchy of universes
+\indexsee{hierarchy!of universes}{type, universe}%
+\[ \UU_0 : \UU_1 : \UU_2 : \cdots \]
+where every universe $\UU_i$ is an element of the next universe
+$\UU_{i+1}$. Moreover, we assume that our universes are
+\define{cumulative},
+\indexdef{type!universe!cumulative}%
+\indexdef{cumulative!universes}%
+that is that all the elements of the $i^{\mathrm{th}}$
+universe are also elements of the $(i+1)^{\mathrm{st}}$ universe, i.e.\ if
+$A:\UU_i$ then also $A:\UU_{i+1}$.
+This is convenient, but has the slightly unpleasant consequence that elements no longer have unique types, and is a bit tricky in other ways that need not concern us here; see the Notes.
+
+When we say that $A$ is a type, we mean that it inhabits some universe
+$\UU_i$. We usually want to avoid mentioning the level
+\indexdef{universe level}%
+\indexsee{level}{universe level or $n$-type}%
+\indexsee{type!universe!level}{universe level}%
+$i$ explicitly,
+and just assume that levels can be assigned in a consistent way; thus we
+may write $A:\UU$ omitting the level. This way we can even write
+$\UU:\UU$, which can be read as $\UU_i:\UU_{i+1}$, having left the
+indices implicit. Writing universes in this style is referred to as
+\define{typical ambiguity}.
+\indexdef{typical ambiguity}%
+It is convenient but a bit dangerous, since it allows us to write valid-looking proofs that reproduce the paradoxes of self-reference.
+If there is any doubt about whether an argument is correct, the way to check it is to try to assign levels consistently to all universes appearing in it.
+When some universe \UU is assumed, we may refer to types belonging to \UU as \define{small types}.
+\indexdef{small!type}%
+\indexdef{type!small}%
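Lean 4 implements such a hierarchy concretely (though its universes are \emph{not} cumulative, unlike the ones assumed here); the following illustrative checks show the first few levels:

```lean
-- `Type` abbreviates `Type 0`, and each universe is an element of the next.
#check (Nat : Type)       -- Nat : Type
#check (Type : Type 1)    -- Type : Type 1
#check (Type 1 : Type 2)  -- Type 1 : Type 2
```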
+
+To model a collection of types varying over a given type $A$, we use functions $B : A \to \UU$ whose
+codomain is a universe. These functions are called
+\define{families of types} (or sometimes \emph{dependent types});
+\indexsee{family!of types}{type, family of}%
+\indexdef{type!family of}%
+\indexsee{type!dependent}{type, family of}%
+\indexsee{dependent!type}{type, family of}%
+they correspond to families of sets as used in
+set theory.
+
+\symlabel{fin}%
+An example of a type family is the family of finite sets $\Fin
+: \nat \to \UU$, where $\Fin(n)$ is a type with exactly $n$ elements.
+(We cannot \emph{define} the family $\Fin$ yet --- indeed, we have not even introduced its domain $\nat$ yet --- but we will be able to soon; see \cref{ex:fin}.)
+We may denote the elements of $\Fin(n)$ by $0_n,1_n,\dots,(n-1)_n$, with subscripts to emphasize that the elements of $\Fin(n)$ are different from those of $\Fin(m)$ if $n$ is different from $m$, and all are different from the ordinary natural numbers (which we will introduce in \cref{sec:inductive-types}).
+\index{finite!sets, family of}%
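In Lean 4 the family $\Fin$ happens to be available out of the box, which lets us illustrate the idea of a type family even before the text formally defines it (illustration only, not the text's construction):

```lean
-- `Fin` is a family of types indexed by `Nat`: `Fin n` has exactly n elements.
#check (Fin : Nat → Type)   -- Fin : Nat → Type

-- The literal `2` below plays the role of the text's 2₅, an element of Fin 5;
-- internally it is a number packaged with a proof that it is below 5.
#eval (2 : Fin 5)           -- 2
```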
+
+A more trivial (but very important) example of a type family is the \define{constant} type family
+\indexdef{constant!type family}%
+\indexdef{type!family of!constant}%
+at a type $B:\UU$, which is of course the constant function $(\lam{x:A} B):A\to\UU$.
+
+As a \emph{non}-example, in our version of type theory there is no type family ``$\lam{i:\nat} \UU_i$''.
+Indeed, there is no universe large enough to be its codomain.
+Moreover, we do not even identify the indices $i$ of the universes $\UU_i$ with the natural numbers \nat of type theory (the latter to be introduced in \cref{sec:inductive-types}).
+
+\index{type!universe|)}%
+
+\section{Dependent function types (\texorpdfstring{$\Pi$}{Π}-types)}
+\label{sec:pi-types}
+
+\index{type!dependent function|(defstyle}%
+\index{function!dependent|(defstyle}%
+\indexsee{dependent!function}{function, dependent}%
+\indexsee{type!Pi-@$\Pi$-}{type, dependent function}%
+\indexsee{Pi-type@$\Pi$-type}{type, dependent function}%
+In type theory we often use a more general version of function
+types, called a \define{$\Pi$-type} or \define{dependent function type}. The elements of
a $\Pi$-type are functions
whose codomain type can vary depending on the
element of the domain to which the function is applied; such functions are called \define{dependent functions}. The name ``$\Pi$-type''
+is used because this type can also be regarded as the cartesian
+product over a given type.
+
+Given a type $A:\UU$ and a family $B:A \to \UU$, we may construct
+the type of dependent functions $\prd{x:A}B(x) : \UU$.
+There are many alternative notations for this type, such as
+\[ \tprd{x:A} B(x) \qquad \dprd{x:A}B(x) \qquad \lprd{x:A} B(x). \]
+If $B$ is a constant family, then the dependent product type is the ordinary function type:
+\[\tprd{x:A} B \jdeq (A \to B).\]
+Indeed, all the constructions of $\Pi$-types are generalizations of the corresponding constructions on ordinary function types.
+
+\indexdef{definition!of function, direct}%
+We can introduce dependent functions by explicit definitions: to
+define $f : \prd{x:A}B(x)$, where $f$ is the name of a dependent function to be
+defined, we need an expression $\Phi : B(x)$ possibly involving the variable $x:A$,
+\index{variable}%
+and we write
+\[ f(x) \defeq \Phi \qquad \mbox{for $x:A$}.\]
+Alternatively, we can use \define{$\lambda$-abstraction}%
+\index{lambda abstraction@$\lambda$-abstraction|defstyle}%
+\begin{equation}
+ \label{eq:lambda-abstraction}
+ \lamu{x:A} \Phi \ :\ \prd{x:A} B(x).
+\end{equation}
+\indexdef{application!of dependent function}%
+\indexdef{function!dependent!application}%
+As with non-dependent functions, we can \define{apply} a dependent function $f : \prd{x:A}B(x)$ to an argument $a:A$ to obtain an element $f(a):B(a)$.
+The equalities are the same as for the ordinary function type, i.e.\ we have the computation rule
+\index{computation rule!for dependent function types}%
+given $a:A$ we have $f(a) \jdeq \Phi'$ and
+$(\lamu{x:A} \Phi)(a) \jdeq \Phi'$, where $\Phi' $ is obtained by replacing all
+occurrences of $x$ in $\Phi$ by $a$ (avoiding variable capture, as always).
+Similarly, we have the uniqueness principle $f\jdeq (\lam{x} f(x))$ for any $f:\prd{x:A} B(x)$.
+\index{uniqueness!principle!for dependent function types}%
+
+As an example, recall from \cref{sec:universes} that there is a type family $\Fin:\nat\to\UU$ whose values are the standard finite sets, with elements $0_n,1_n,\dots,(n-1)_n : \Fin(n)$.
+There is then a dependent function $\fmax : \prd{n:\nat} \Fin(n+1)$
+which returns the ``largest'' element of each nonempty finite type, $\fmax(n) \defeq n_{n+1}$.
+\index{finite!sets, family of}%
+As was the case for $\Fin$ itself, we cannot define $\fmax$ yet, but we will be able to soon; see \cref{ex:fin}.
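Although the text must postpone the definition, Lean 4's library already provides $\Fin$, so a version of $\fmax$ can be sketched now (illustrative; `Nat.lt_succ_self n` is a library proof of $n < n+1$, which is the data an element of `Fin (n + 1)` carries):

```lean
-- A dependent function: the codomain `Fin (n + 1)` varies with the input `n`.
def fmax : (n : Nat) → Fin (n + 1) :=
  fun n => ⟨n, Nat.lt_succ_self n⟩

#eval fmax 4   -- 4, the largest element of Fin 5
```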
+
+Another important class of dependent function types, which we can define now, are functions which are \define{polymorphic}
+\indexdef{function!polymorphic}%
+\indexdef{polymorphic function}%
+over a given universe.
+A polymorphic function is one which takes a type as one of its arguments, and then acts on elements of that type (or of other types constructed from it).
+\symlabel{idfunc}%
+\indexdef{function!identity}%
+\indexdef{identity!function}%
+An example is the polymorphic identity function $\idfunc : \prd{A:\UU} A \to A$, which we define by $\idfunc{} \defeq \lam{A:\type}{x:A} x$.
+(Like $\lambda$-abstractions, $\Pi$s automatically scope\index{scope} over the rest of the expression unless delimited; thus $\idfunc : \prd{A:\UU} A \to A$ means $\idfunc : \prd{A:\UU} (A \to A)$.
+This convention, though unusual in mathematics, is common in type theory.)
+
+We sometimes write some arguments of a dependent function as subscripts.
+For instance, we might equivalently define the polymorphic identity function by $\idfunc[A](x) \defeq x$.
+Moreover, if an argument can be inferred from context, we may omit it altogether.
+For instance, if $a:A$, then writing $\idfunc(a)$ is unambiguous, since $\idfunc$ must mean $\idfunc[A]$ in order for it to be applicable to $a$.
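Both conventions have direct Lean 4 counterparts (an illustration; the names are ours): an explicit type argument, and an implicit one in braces that Lean infers from the ordinary argument, just as $\idfunc(a)$ determines $\idfunc[A]$:

```lean
-- Polymorphic identity with the type as an explicit first argument:
def idfunc : (A : Type) → A → A := fun A x => x
#eval idfunc Nat 3   -- 3

-- The same function with the type argument implicit and inferred:
def id' {A : Type} (x : A) : A := x
#eval id' 3          -- 3
```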
+
+Another, less trivial, example of a polymorphic function is the ``swap'' operation that switches the order of the arguments of a (curried) two-argument function:
+\[ \mathsf{swap} : \prd{A:\UU}{B:\UU}{C:\UU} (A\to B\to C) \to (B\to A \to C). \]
+We can define this by
+\[ \mathsf{swap}(A,B,C,g) \defeq \lam{b}{a} g(a)(b). \]
+We might also equivalently write the type arguments as subscripts:
+\[ \mathsf{swap}_{A,B,C}(g)(b,a) \defeq g(a,b). \]
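As a sketch, essentially the same definition typechecks in Lean 4 (illustration only):

```lean
def swap (A B C : Type) (g : A → B → C) : B → A → C :=
  fun b a => g a b

-- Applying the swapped function to b = 2 and a = 1 computes g 1 2:
#eval swap Nat Nat Nat (fun a b => 10 * a + b) 2 1   -- 12
```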
+
+Note that as we did for ordinary functions, we use currying to define dependent functions with
+several arguments (such as $\mathsf{swap}$). However, in the dependent case the second domain may
+depend on the first one, and the codomain may depend on both. That is,
+given $A:\UU$ and type families $B : A \to \UU$ and $C : \prd{x:A}B(x) \to \UU$, we may construct
+the type $\prd{x:A}{y : B(x)} C(x,y)$ of functions with two
+arguments.
+In the case when $B$ is constant and equal to $A$, we may condense the notation and write $\prd{x,y:A}$; for instance, the type of $\mathsf{swap}$ could also be written as
+\[ \mathsf{swap} : \prd{A,B,C:\UU} (A\to B\to C) \to (B\to A \to C). \]
+Finally, given $f:\prd{x:A}{y : B(x)} C(x,y)$ and arguments $a:A$ and $b:B(a)$, we have $f(a)(b) : C(a,b)$, which,
+as before, we write as $f(a,b) : C(a,b)$.
+
+\index{type!dependent function|)}%
+\index{function!dependent|)}%
+
+
+\section{Product types}
+\label{sec:finite-product-types}
+
+Given types $A,B:\UU$ we introduce the type $A\times B:\UU$, which we call their \define{cartesian product}.
+\indexsee{cartesian product}{type, product}%
+\indexsee{type!cartesian product}{type, product}%
+\index{type!product|(defstyle}%
+\indexsee{product!of types}{type, product}%
+We also introduce a nullary product type, called the \define{unit type} $\unit : \UU$.
+\indexsee{nullary!product}{type, unit}%
+\indexsee{unit!type}{type, unit}%
+\index{type!unit|(defstyle}%
+We intend the elements of $A\times B$ to be pairs $\tup{a}{b} : A \times B$, where $a:A$ and $b:B$, and the only element of $\unit$ to be some particular object $\ttt : \unit$.
+\indexdef{pair!ordered}%
+However, unlike in set theory, where we define ordered pairs to be particular sets and then collect them all together into the cartesian product, in type theory, ordered pairs are a primitive concept, as are functions.
+
+\begin{rmk}\label{rmk:introducing-new-concepts}
+ There is a general pattern for introduction of a new kind of type in type theory.
+ We have already seen this pattern in \cref{sec:function-types,sec:pi-types}\footnote{The description of universes above is an exception.}, so it is worth emphasizing the general form.
+ To specify a type, we specify:
+ \begin{enumerate}
+ \item how to form new types of this kind, via \define{formation rules}.
+ \indexdef{formation rule}%
+ \index{rule!formation}%
+(For example, we can form the function type $A \to B$ when $A$ is a type and when $B$ is a type. We can form the dependent function type $\prd{x:A} B(x)$ when $A$ is a type and $B(x)$ is a type for $x:A$.)
+
+ \item how to construct elements of that type.
+ These are called the type's \define{constructors} or \define{introduction rules}.
+ \indexdef{constructor}%
+ \indexdef{rule!introduction}%
+ \indexdef{introduction rule}%
+ (For example, a function type has one constructor, $\lambda$-abstraction.
+ Recall that a direct definition like $f(x)\defeq 2x$ can equivalently be phrased
+ as a $\lambda$-abstraction $f\defeq \lam{x} 2x$.)
+
+ \item how to use elements of that type.
+ These are called the type's \define{eliminators} or \define{elimination rules}.
+ \indexsee{rule!elimination}{eliminator}%
+ \indexsee{elimination rule}{eliminator}%
+ \indexdef{eliminator}%
+ (For example, the function type has one eliminator, namely function application.)
+
+ \item
+ a \define{computation rule}\indexdef{computation rule}\footnote{also referred to as \define{$\beta$-reduction}\index{beta-reduction@$\beta $-reduction|footstyle}}, which expresses how an eliminator acts on a constructor.
+(For example, for functions, the computation rule states that $(\lamu{x:A}\Phi)(a)$ is judgmentally equal to the substitution of $a$ for $x$ in $\Phi$.)
+
+ \item
+ an optional \define{uniqueness principle}\indexdef{uniqueness!principle}\footnote{also referred to as \define{$\eta$-expansion}\index{eta-expansion@$\eta $-expansion|footstyle}}, which expresses
+uniqueness of maps into or out of that type.
+For some types, the uniqueness principle characterizes maps into the type, by stating that
+every element of the type is uniquely determined by the results of applying eliminators to it, and can be reconstructed from those results by applying a constructor---thus expressing how constructors act on eliminators, dually to the computation rule.
+(For example, for functions, the uniqueness principle says that any function $f$ is judgmentally equal to the ``expanded'' function $\lamu{x} f(x)$, and thus is uniquely determined by its values.)
+For other types, the uniqueness principle says that every map (function) \emph{from} that type is uniquely determined by some data. (An example is the coproduct type introduced in \cref{sec:coproduct-types}, whose uniqueness principle is mentioned in \cref{sec:universal-properties}.)
+
+ When the uniqueness principle is not taken as a rule of judgmental equality, it is often nevertheless provable as a \emph{propositional} equality from the other rules for the type.
+ In this case we call it a \define{propositional uniqueness principle}.
+ \indexdef{uniqueness!principle, propositional}%
+ \indexsee{propositional!uniqueness principle}{uniqueness principle, propositional}%
+ (In later chapters we will also occasionally encounter \emph{propositional computation rules}.)
+ \indexdef{computation rule!propositional}%
+ \end{enumerate}
+The inference rules in \cref{sec:syntax-more-formally} are organized and named accordingly; see, for example, \cref{sec:more-formal-pi}, where each possibility is realized.
+\end{rmk}
+
+The way to construct pairs is obvious: given $a:A$ and $b:B$, we may form $(a,b):A\times B$.
+Similarly, there is a unique way to construct elements of $\unit$, namely we have $\ttt:\unit$.
+We expect that ``every element of $A\times B$ is a pair'', which is the uniqueness principle for products; we do not assert this as a rule of type theory, but we will prove it later on as a propositional equality.
+
+Now, how can we \emph{use} pairs, i.e.\ how can we define functions out of a product type?
+Let us first consider the definition of a non-dependent function $f : A\times B \to C$.
+Since we intend the only elements of $A\times B$ to be pairs, we expect to be able to define such a function by prescribing the result
+when $f$ is applied to a pair $\tup{a}{b}$.
+We can prescribe these results by providing a function $g : A \to B \to C$.
+Thus, we introduce a new rule (the elimination rule for products), which says that for any such $g$, we can define a function $f : A\times B \to C$ by
+\[ f(\tup{a}{b}) \defeq g(a)(b). \]
+We avoid writing $g(a,b)$ here, in order to emphasize that $g$ is not a function on a product.
+(However, later on in the book we will often write $g(a,b)$ both for functions on a product and for curried functions of two variables.)
+This defining equation is the computation rule for product types\index{computation rule!for product types}.
+
+Note that in set theory, we would justify the above definition of $f$ by the fact that every element of $A\times B$ is an ordered pair, so that it suffices to define $f$ on such pairs.
+By contrast, type theory reverses the situation: we assume that a function on $A\times B$ is well-defined as soon as we specify its values on pairs, and from this (or more precisely, from its more general version for dependent functions, below) we will be able to \emph{prove} that every element of $A\times B$ is a pair.
+From a category-theoretic perspective, we can say that we define the product $A\times B$ to be left adjoint to the ``exponential'' $B\to C$, which we have already introduced.
+
+As an example, we can derive the \define{projection}
+\indexsee{function!projection}{projection}%
+\indexsee{component, of a pair}{projection}%
+\indexdef{projection!from cartesian product type}%
+functions
+\symlabel{defn:proj}%
+\begin{align*}
+ \fst & : A \times B \to A \\
+ \snd & : A \times B \to B
+\end{align*}
+with the defining equations
+\begin{align*}
+ \fst(\tup{a}{b}) & \defeq a \\
+ \snd(\tup{a}{b}) & \defeq b.
+\end{align*}
+%
+\symlabel{defn:recursor-times}%
+Rather than invoking this principle of function definition every time we want to define a function, an alternative approach is to invoke it once, in a universal case, and then simply apply the resulting function in all other cases.
+That is, we may define a function of type
+\begin{equation}
+ \rec{A\times B} : \prd{C:\UU}(A \to B \to C) \to A \times B \to C
+\end{equation}
+with the defining equation
+\[\rec{A\times B}(C,g,\tup{a}{b}) \defeq g(a)(b). \]
+Then instead of defining functions such as $\fst$ and $\snd$ directly by a defining equation, we could define
+\begin{align*}
+ \fst &\defeq \rec{A\times B}(A, \lam{a}{b} a)\\
+ \snd &\defeq \rec{A\times B}(B, \lam{a}{b} b).
+\end{align*}
+We refer to the function $\rec{A\times B}$ as the \define{recursor}
+\indexsee{recursor}{recursion principle}%
+for product types. The name ``recursor'' is a bit unfortunate here, since no recursion is taking place. It comes from the fact that product types are a degenerate example of a general framework for inductive types, and for types such as the natural numbers, the recursor will actually be recursive. We may also speak of the \define{recursion principle} for cartesian products, meaning the fact that we can define a function $f:A\times B\to C$ as above by giving its value on pairs.
+\index{recursion principle!for cartesian product}%
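A hedged Lean 4 sketch of the recursor and of the projections derived from it (the names \texttt{recProd}, \texttt{fst'}, \texttt{snd'} are ours, chosen to avoid Lean's built-in projections):

```lean
-- The recursor: a function out of A × B is determined by its value on pairs.
def recProd {A B : Type} (C : Type) (g : A → B → C) : A × B → C :=
  fun (a, b) => g a b

-- The projections as instances of the recursor:
def fst' {A B : Type} : A × B → A := recProd A (fun a _ => a)
def snd' {A B : Type} : A × B → B := recProd B (fun _ b => b)

#eval fst' (1, 2)   -- 1
#eval snd' (1, 2)   -- 2
```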
+
+We leave it as a simple exercise to show that the recursor can be
+derived from the projections and vice versa.
+% Ex: Derive from projections
+
+\symlabel{defn:recursor-unit}%
+We also have a recursor for the unit type:
+\[\rec{\unit} : \prd{C:\UU}C \to \unit \to C\]
+with the defining equation
+\[ \rec{\unit}(C,c,\ttt) \defeq c. \]
+Although we include it to maintain the pattern of type definitions, the recursor for $\unit$ is completely useless,
+because we could have defined such a function directly
+by simply ignoring the argument of type $\unit$.
+
+To be able to define \emph{dependent} functions over the product type, we have
+to generalize the recursor. Given $C: A \times B \to \UU$, we may
+define a function $f : \prd{x : A \times B} C(x)$ by providing a
+function
+\narrowequation{
+ g : \prd{x:A}\prd{y:B} C(\tup{x}{y})
+}
+with defining equation
+\[ f(\tup x y) \defeq g(x)(y). \]
+For example, in this way we can prove the propositional uniqueness principle, which says that every element of $A\times B$ is equal to a pair.
+\index{uniqueness!principle, propositional!for product types}%
+Specifically, we can construct a function
+\[ \uniq{A\times B} : \prd{x:A \times B} (\id[A\times B]{\tup{\fst {(x)}}{\snd {(x)}}}{x}). \]
+Here we are using the identity type, which we are going to introduce below in \cref{sec:identity-types}.
+However, all we need to know now is that there is a reflexivity element $\refl{x} : \id[A]{x}{x}$ for any $x:A$.
+Given this, we can define
+\label{uniquenessproduct}
+\[ \uniq{A\times B}(\tup{a}{b}) \defeq \refl{\tup{a}{b}}. \]
+This construction works, because in the case that $x \defeq \tup{a}{b}$ we can
+calculate
+\[ \tup{\fst(\tup{a}{b})}{\snd{(\tup{a}{b})}} \jdeq \tup{a}{b} \]
+using the defining equations for the projections. Therefore,
+\[ \refl{\tup{a}{b}} : \id{\tup{\fst(\tup{a}{b})}{\snd{(\tup{a}{b})}}}{\tup{a}{b}} \]
+is well-typed, since both sides of the equality are judgmentally equal.
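The corresponding statement can be written in Lean 4 as follows (illustrative; note that Lean in fact makes this equality \emph{definitional}, via its η-rule for structures, whereas in the text's system it is only propositional):

```lean
-- The proof in the style of the text: define the dependent function on pairs,
-- where both sides become judgmentally equal and `rfl` applies.
def uniqProd {A B : Type} : (x : A × B) → (x.1, x.2) = x :=
  fun (a, b) => rfl

-- Lean's structure η makes the pattern match unnecessary:
example {A B : Type} (x : A × B) : (x.1, x.2) = x := rfl
```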
+
+More generally, the ability to define dependent functions in this way means that to prove a property for all elements of a product, it is enough
+to prove it for its canonical elements, the ordered pairs.
+When we come to inductive types such as the natural numbers, the analogous property will be the ability to write proofs by induction.
+Thus, if we do as we did above and apply this principle once in the universal case, we call the resulting function \define{induction} for product types: given $A,B : \UU$ we have
+\symlabel{defn:induction-times}%
+\[ \ind{A\times B} : \prd{C:A \times B \to \UU}
+\Parens{\prd{x:A}{y:B} C(\tup{x}{y})} \to \prd{x:A \times B} C(x) \]
+with the defining equation
+\[ \ind{A\times B}(C,g,\tup{a}{b}) \defeq g(a)(b). \]
+Similarly, we may speak of a dependent function defined on pairs being obtained from the \define{induction principle}
+\index{induction principle}%
+\index{induction principle!for product}%
+of the cartesian product.
+It is easy to see that the recursor is just the special case of induction
+in the case that the family $C$ is constant.
+Because induction describes how to use an element of the product type, induction is also called the \define{(dependent) eliminator},
+\indexsee{eliminator!of inductive type!dependent}{induction principle}%
+and recursion the \define{non-dependent eliminator}.
+\indexsee{eliminator!of inductive type!non-dependent}{recursion principle}%
+\indexsee{non-dependent eliminator}{recursion principle}%
+\indexsee{dependent eliminator}{induction principle}%
+
+% We can read induction propositionally as saying that a property which
+% is true for all pairs holds for all elements of the product type.
+
+Induction for the unit type turns out to be more useful than the
+recursor:
+\symlabel{defn:induction-unit}%
+\[ \ind{\unit} : \prd{C:\unit \to \UU} C(\ttt) \to \prd{x:\unit}C(x)\]
+with the defining equation
+\[ \ind{\unit}(C,c,\ttt) \defeq c. \]
+Induction enables us to prove the propositional uniqueness principle for $\unit$, which asserts that its only inhabitant is $\ttt$.
+That is, we can construct
+\label{uniquenessunit}
+\[\uniq{\unit} : \prd{x:\unit} \id{x}{\ttt} \]
by using the defining equation
+\[\uniq{\unit}(\ttt) \defeq \refl{\ttt} \]
+or equivalently by using induction:
+\[\uniq{\unit} \defeq \ind{\unit}(\lamu{x:\unit} \id{x}{\ttt},\refl{\ttt}). \]
+
+\index{type!product|)}%
+\index{type!unit|)}%
+
+\section{Dependent pair types (\texorpdfstring{$\Sigma$}{Σ}-types)}
+\label{sec:sigma-types}
+
+\index{type!dependent pair|(defstyle}%
+\indexsee{type!dependent sum}{type, dependent pair}%
+\indexsee{type!Sigma-@$\Sigma$-}{type, dependent pair}%
+\indexsee{Sigma-type@$\Sigma$-type}{type, dependent pair}%
+\indexsee{sum!dependent}{type, dependent pair}%
+
+Just as we generalized function types (\cref{sec:function-types}) to dependent function types (\cref{sec:pi-types}), it is often useful to generalize the product types from \cref{sec:finite-product-types} to allow the type of
+the second component of a pair to vary depending on the choice of the first
+component. This is called a \define{dependent pair type}, or \define{$\Sigma$-type}, because in set theory it
+corresponds to an indexed sum (in the sense of coproduct or
+disjoint union) over a given type.
+
+Given a type $A:\UU$ and a family $B : A \to \UU$, the dependent
+pair type is written as $\sm{x:A} B(x) : \UU$.
+Alternative notations are
+\[ \tsm{x:A} B(x) \hspace{2cm} \dsm{x:A}B(x) \hspace{2cm} \lsm{x:A} B(x). \]
+Like other binding constructs such as $\lambda$-abstractions and $\Pi$s, $\Sigma$s automatically scope\index{scope} over the rest of the expression unless delimited, so e.g.\ $\sm{x:A} B(x) \to C$ means $\sm{x:A} (B(x) \to C)$.
+
+\symlabel{defn:dependent-pair}%
+\indexdef{pair!dependent}%
+The way to construct elements of a dependent pair type is by pairing: we have
+$\tup{a}{b} : \sm{x:A} B(x)$ given $a:A$ and $b:B(a)$.
+If $B$ is constant, then the dependent pair type is the
+ordinary cartesian product type:
+\[ \Parens{\sm{x:A} B} \jdeq (A \times B).\]
+All the constructions on $\Sigma$-types arise as straightforward generalizations of the ones for product types, with dependent functions often replacing non-dependent ones.
+
+For instance, the recursion principle%
+\index{recursion principle!for dependent pair type}
+says that to define a non-dependent function out of a $\Sigma$-type
+$f : (\sm{x:A} B(x)) \to C$, we provide a function
+$g : \prd{x:A} B(x) \to C$, and then we can define $f$ via the defining
+equation
+\[ f(\tup{a}{b}) \defeq g(a)(b). \]
+\indexdef{projection!from dependent pair type}%
+For instance, we can derive the first projection from a $\Sigma$-type:
+\symlabel{defn:dependent-proj1}%
+\begin{equation*}
+ \fst : \Parens{\sm{x : A}B(x)} \to A
+\end{equation*}
+by the defining equation
+\begin{equation*}
+ \fst(\tup{a}{b}) \defeq a.
+\end{equation*}
+However, since the type of the second component of a pair
+\narrowequation{
+ (a,b):\sm{x:A} B(x)
+}
+is $B(a)$, the second projection must be a \emph{dependent} function, whose type involves the first projection function:
+\symlabel{defn:dependent-proj2}%
+\[ \snd : \prd{p:\sm{x : A}B(x)}B(\fst(p)). \]
+Thus we need the \emph{induction} principle%
+\index{induction principle!for dependent pair type}
+for $\Sigma$-types (the ``dependent eliminator'').
+This says that to construct a dependent function out of a $\Sigma$-type into a family $C : (\sm{x:A} B(x)) \to \UU$, we need a function
+\[ g : \prd{a:A}{b:B(a)} C(\tup{a}{b}). \]
+We can then derive a function
+\[ f : \prd{p : \sm{x:A}B(x)} C(p) \]
+with defining equation\index{computation rule!for dependent pair type}
+\[ f(\tup{a}{b}) \defeq g(a)(b).\]
+Applying this with $C(p)\defeq B(\fst(p))$, we can define
+\narrowequation{
+\snd : \prd{p:\sm{x : A}B(x)}B(\fst(p))
+}
+with the obvious equation
+\[ \snd(\tup{a}{b}) \defeq b. \]
+To convince ourselves that this is correct, we note that $B (\fst(\tup{a}{b})) \jdeq B(a)$, using the defining equation for $\fst$, and
+indeed $b : B(a)$.
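For readers who wish to experiment, the two projections can be written out, for instance, in the Lean~4 proof assistant, whose $\Sigma$-types play the role of our dependent pair types. This is only a sketch; the primed names are ours, chosen to avoid clashing with the library's built-in projections.

```lean
-- First projection: non-dependent, defined by the recursion principle
-- (pattern matching on the pair).
def fst' {A : Type} {B : A → Type} : (Σ x : A, B x) → A
  | ⟨a, _⟩ => a

-- Second projection: dependent, since its codomain mentions fst'.
-- The case ⟨a, b⟩ is accepted because B (fst' ⟨a, b⟩) reduces to B a.
def snd' {A : Type} {B : A → Type} : (p : Σ x : A, B x) → B (fst' p)
  | ⟨_, b⟩ => b
```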
+
+We can package the recursion and induction principles into the recursor for $\Sigma$:
+\symlabel{defn:recursor-sm}%
+\[ \rec{\sm{x:A}B(x)} : \dprd{C:\UU}\Parens{\tprd{x:A} B(x) \to C} \to
+\Parens{\tsm{x:A}B(x)} \to C \]
+with the defining equation
+\[ \rec{\sm{x:A}B(x)}(C,g,\tup{a}{b}) \defeq g(a)(b) \]
+and the corresponding induction operator:
+\symlabel{defn:induction-sm}%
+\begin{narrowmultline*}
+ \ind{\sm{x:A}B(x)} : \narrowbreak
+ \dprd{C:(\sm{x:A} B(x)) \to \UU}
+ \Parens{\tprd{a:A}{b:B(a)} C(\tup{a}{b})}
+ \to \dprd{p : \sm{x:A}B(x)} C(p)
+\end{narrowmultline*}
+with the defining equation
+\[ \ind{\sm{x:A}B(x)}(C,g,\tup{a}{b}) \defeq g(a)(b). \]
+As before, the recursor is the special case of induction
+when the family $C$ is constant.
+
+As a further example, consider the following principle, where $A$ and $B$ are types and $R:A\to B\to \UU$:
+\[ \ac : \Parens{\tprd{x:A} \tsm{y :B} R(x,y)} \to
+\Parens{\tsm{f:A\to B} \tprd{x:A} R(x,f(x))}.
+\]
+We may regard $R$ as a ``proof-relevant relation''
+\index{mathematics!proof-relevant}%
+between $A$ and $B$, with $R(a,b)$ the type of witnesses for relatedness of $a:A$ and $b:B$.
+Then $\ac$ says intuitively that if we have a dependent function $g$ assigning to every $a:A$ a dependent pair $(b,r)$ where $b:B$ and $r:R(a,b)$, then we have a function $f:A\to B$ and a dependent function assigning to every $a:A$ a witness that $R(a,f(a))$.
+Our intuition tells us that we can just split up the values of $g$ into their components.
+Indeed, using the projections we have just defined, we can define:
+\[ \ac(g) \defeq \Parens{\lamu{x:A} \fst(g(x)),\, \lamu{x:A} \snd(g(x))}. \]
+To verify that this is well-typed, note that if $g:\prd{x:A} \sm{y :B} R(x,y)$, we have
+\begin{align*}
+\lamu{x:A} \fst(g(x)) &: A \to B, \\
+\lamu{x:A} \snd(g(x)) &: \tprd{x:A} R(x,\fst(g(x))).
+\end{align*}
+Moreover, the type $\prd{x:A} R(x,\fst(g(x)))$ is the result of applying the type family $\lamu{f:A\to B} \tprd{x:A} R(x,f(x))$ being summed over in the codomain of \ac to the function $\lamu{x:A} \fst(g(x))$:
+\[ \tprd{x:A} R(x,\fst(g(x))) \jdeq
+\Parens{\lamu{f:A\to B} \tprd{x:A} R(x,f(x))}\big(\lamu{x:A} \fst(g(x))\big). \]
+Thus, we have
+\[ \Parens{\lamu{x:A} \fst(g(x)),\, \lamu{x:A} \snd(g(x))} : \tsm{f:A\to B} \tprd{x:A} R(x,f(x))\]
+as required.
+
+If we read $\Pi$ as ``for all'' and $\Sigma$ as ``there exists'', then the type of the function $\ac$ expresses:
+\emph{if for all $x:A$ there is a $y:B$ such that $R(x,y)$, then there is a function $f : A \to B$ such that for all $x:A$ we have $R(x,f(x))$}.
+Since this sounds like a version of the axiom of choice, the function \ac has traditionally been called the \define{type-theoretic axiom of choice}, and as we have just shown, it can be proved directly from the rules of type theory, rather than having to be taken as an axiom.
+\index{axiom!of choice!type-theoretic}%
However, note that no choice is actually involved, since the choices have already been given to us in the premise; all we have to do is take it apart into two functions, one representing the choice and the other its correctness.
+In \cref{sec:axiom-choice} we will give another formulation of an ``axiom of choice'' which is closer to the usual one.
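The same observation can be made formal: the function $\ac$ typechecks as stated, with no axiom required. A minimal sketch in Lean~4, where `.1` and `.2` are the library's pair projections:

```lean
-- The "type-theoretic axiom of choice": provable outright, since the
-- premise already contains the choices.
def ac {A B : Type} {R : A → B → Type}
    (g : (x : A) → Σ y : B, R x y) :
    Σ f : A → B, (x : A) → R x (f x) :=
  ⟨fun x => (g x).1, fun x => (g x).2⟩
```

The well-typedness check in the text happens silently here: the expected type of the second component beta-reduces to $R(x,\fst(g(x)))$, which is judgmentally the type of `(g x).2`.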
+
+Dependent pair types are often used to define types of mathematical structures, which commonly consist of several dependent pieces of data.
+To take a simple example, suppose we want to define a \define{magma}\indexdef{magma} to be a type $A$ together with a binary operation $m:A\to A\to A$.
+The precise meaning of the phrase ``together with''\index{together with} (and the synonymous ``equipped with''\index{equipped with}) is that ``a magma'' is a \emph{pair} $(A,m)$ consisting of a type $A:\UU$ and an operation $m:A\to A\to A$.
+Since the type $A\to A\to A$ of the second component $m$ of this pair depends on its first component $A$, such pairs belong to a dependent pair type.
+Thus, the definition ``a magma is a type $A$ together with a binary operation $m:A\to A\to A$'' should be read as defining \emph{the type of magmas} to be
+\[ \mathsf{Magma} \defeq \sm{A:\UU} (A\to A\to A). \]
+Given a magma, we extract its underlying type (its ``carrier''\index{carrier}) with the first projection $\proj1$, and its operation with the second projection $\proj2$.
+Of course, structures built from more than two pieces of data require iterated pair types, which may be only partially dependent; for instance the type of pointed magmas (magmas $(A,m)$ equipped with a basepoint $e:A$) is
+\[ \mathsf{PointedMagma} \defeq \sm{A:\UU} (A\to A\to A) \times A. \]
+We generally also want to impose axioms on such a structure, e.g.\ to make a pointed magma into a monoid or a group.
+This can also be done using $\Sigma$-types; see \cref{sec:pat}.
+
+In the rest of the book, we will sometimes make definitions of this sort explicit, but eventually we trust the reader to translate them from English into $\Sigma$-types.
+We also generally follow the common mathematical practice of using the same letter for a structure of this sort and for its carrier (which amounts to leaving the appropriate projection function implicit in the notation): that is, we will speak of a magma $A$ with its operation $m:A\to A\to A$.
+
+Note that the canonical elements of $\mathsf{PointedMagma}$ are of the form $(A,(m,e))$ where $A:\UU$, $m:A\to A\to A$, and $e:A$.
+Because of the frequency with which iterated $\Sigma$-types of this sort arise, we use the usual notation of ordered triples, quadruples and so on to stand for nested pairs (possibly dependent) associating to the right.
+That is, we have $(x,y,z) \defeq (x,(y,z))$ and $(x,y,z,w)\defeq (x,(y,(z,w)))$, etc.
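Such structure types can be written down directly in a proof assistant; here is a sketch in Lean~4 (the example element `natPointedMagma` is ours, packaging the hypothetical choice of $\nat$ with addition and basepoint $0$).

```lean
-- The type of magmas: a carrier together with a binary operation.
def Magma : Type 1 := Σ A : Type, A → A → A

-- Pointed magmas add a basepoint; note the partially dependent Σ/×.
def PointedMagma : Type 1 := Σ A : Type, (A → A → A) × A

-- The triple (Nat, +, 0), read as the nested pair (Nat, (+, 0)).
def natPointedMagma : PointedMagma := ⟨Nat, (· + ·), 0⟩
```

Note that `Magma` lives in the next universe `Type 1`, since it quantifies over the universe `Type` of its carriers.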
+
+\index{type!dependent pair|)}%
+
+\section{Coproduct types}
+\label{sec:coproduct-types}
+
+Given $A,B:\UU$, we introduce their \define{coproduct} type $A+B:\UU$.
+\indexsee{coproduct}{type, coproduct}%
+\index{type!coproduct|(defstyle}%
+\indexsee{disjoint!sum}{type, coproduct}%
+\indexsee{disjoint!union}{type, coproduct}%
+\indexsee{sum!disjoint}{type, coproduct}%
+\indexsee{union!disjoint}{type, coproduct}%
+This corresponds to the \emph{disjoint union} in set theory, and we may also use that name for it.
+In type theory, as was the case with functions and products, the coproduct must be a fundamental construction, since there is no previously given notion of ``union of types''.
+We also introduce a
+nullary version: the \define{empty type $\emptyt:\UU$}.
+\indexsee{nullary!coproduct}{type, empty}%
+\indexsee{empty type}{type, empty}%
+\index{type!empty|(defstyle}%
+
+There are two ways to construct elements of $A+B$, either as $\inl(a) : A+B$ for $a:A$, or as
+$\inr(b):A+B$ for $b:B$.
+(The names $\inl$ and $\inr$ are short for ``left injection'' and ``right injection''.)
+There are no ways to construct elements of the empty type.
+
+\index{recursion principle!for coproduct}
+To construct a non-dependent function $f : A+B \to C$, we need
+functions $g_0 : A \to C$ and $g_1 : B \to C$. Then $f$ is defined
+via the defining equations
+\begin{align*}
+ f(\inl(a)) &\defeq g_0(a), \\
+ f(\inr(b)) &\defeq g_1(b).
+\end{align*}
+That is, the function $f$ is defined by \define{case analysis}.
+\indexdef{case analysis}%
+As before, we can derive the recursor:
+\symlabel{defn:recursor-plus}%
+\[ \rec{A+B} : \dprd{C:\UU}(A \to C) \to (B\to C) \to A+B \to C\]
+with the defining equations
+\begin{align*}
+\rec{A+B}(C,g_0,g_1,\inl(a)) &\defeq g_0(a), \\
+\rec{A+B}(C,g_0,g_1,\inr(b)) &\defeq g_1(b).
+\end{align*}
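Case analysis of this kind can be tried out in Lean~4, whose `Sum` type (written $\oplus$) corresponds to our coproduct; the name `copRec` is ours.

```lean
-- The recursor for a coproduct: one function per constructor.
def copRec {A B C : Type} (g₀ : A → C) (g₁ : B → C) : A ⊕ B → C
  | .inl a => g₀ a
  | .inr b => g₁ b

-- The defining equation for inl, checked by reduction.
example : copRec (fun n : Nat => n + 1) (fun _ : Bool => 0) (.inl 3) = 4 :=
  rfl
```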
+
+\index{recursion principle!for empty type}
+We can always construct a function $f : \emptyt \to C$ without
+having to give any defining equations, because there are no elements of \emptyt on which to define $f$.
+Thus, the recursor for $\emptyt$ is
+\symlabel{defn:recursor-emptyt}%
+\[\rec{\emptyt} : \tprd{C:\UU} \emptyt \to C,\]
+which constructs the canonical function from the empty type to any other type.
+Logically, it corresponds to the principle \textit{ex falso quodlibet}.
+\index{ex falso quodlibet@\textit{ex falso quodlibet}}
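In Lean~4 the analogous function out of the empty type needs no defining equations either; the vacuous case analysis is written `nomatch`. (The name `fromEmpty` is ours.)

```lean
-- The recursor for the empty type: no cases to give, so no equations.
def fromEmpty {C : Type} : Empty → C := fun e => nomatch e
```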
+
+\index{induction principle!for coproduct}
+To construct a dependent function $f:\prd{x:A+B}C(x)$ out of a coproduct, we assume as given the family
+$C: (A + B) \to \UU$, and
+require
+\begin{align*}
+ g_0 &: \prd{a:A} C(\inl(a)), \\
+ g_1 &: \prd{b:B} C(\inr(b)).
+\end{align*}
+This yields $f$ with the defining equations:\index{computation rule!for coproduct type}
+\begin{align*}
+ f(\inl(a)) &\defeq g_0(a), \\
+ f(\inr(b)) &\defeq g_1(b).
+\end{align*}
+We package this scheme into the induction principle for coproducts:
+\symlabel{defn:induction-plus}%
+\begin{narrowmultline*}
+ \ind{A+B} :
+ \dprd{C: (A + B) \to \UU}
+ \Parens{\tprd{a:A} C(\inl(a))} \to \narrowbreak
+ \Parens{\tprd{b:B} C(\inr(b))} \to \tprd{x:A+B}C(x).
+\end{narrowmultline*}
+As before, the recursor arises in the case that the family $C$ is
+constant.
+
+\index{induction principle!for empty type}
+The induction principle for the empty type
+\symlabel{defn:induction-emptyt}%
+\[ \ind{\emptyt} : \prd{C:\emptyt \to \UU}{z:\emptyt} C(z) \]
+gives us a way to define a trivial dependent function out of the
+empty type. % In the presence of $\eta$-equality it is derivable
+% from the recursor.
+% ex
+
+\index{type!coproduct|)}%
+\index{type!empty|)}%
+
+
+\section{The type of booleans}
+\label{sec:type-booleans}
+
+\indexsee{boolean!type of}{type of booleans}%
+\index{type!of booleans|(defstyle}%
+The type of booleans $\bool:\UU$ is intended to have exactly two elements
+$\bfalse,\btrue : \bool$. It is clear that we could construct this
+type out of coproduct
+% this one results in a warning message just because it's on the same page as the previous entry
+% for {type!coproduct}, so it's not our fault
+\index{type!coproduct}%
+and unit\index{type!unit} types as $\unit + \unit$. However,
+since it is used frequently, we give the explicit rules here.
+Indeed, we are going to observe that we can also go the other way
+and derive binary coproducts from $\Sigma$-types and $\bool$.
+
+\index{recursion principle!for type of booleans}
+To derive a function $f : \bool \to C$ we need $c_0,c_1 : C$ and
+add the defining equations
+\begin{align*}
+ f(\bfalse) &\defeq c_0, \\
+ f(\btrue) &\defeq c_1.
+\end{align*}
+The recursor corresponds to the if-then-else construct in
+functional programming:
+\symlabel{defn:recursor-bool}%
+\[ \rec{\bool} : \prd{C:\UU} C \to C \to \bool \to C \]
+with the defining equations
+\begin{align*}
+ \rec{\bool}(C,c_0,c_1,\bfalse) &\defeq c_0, \\
+ \rec{\bool}(C,c_0,c_1,\btrue) &\defeq c_1.
+\end{align*}
+
+\index{induction principle!for type of booleans}
+Given $C : \bool \to \UU$, to derive a dependent function
+$f : \prd{x:\bool}C(x)$ we need $c_0:C(\bfalse)$ and $c_1 : C(\btrue)$, in which case we can give the defining equations
+\begin{align*}
+ f(\bfalse) &\defeq c_0, \\
+ f(\btrue) &\defeq c_1.
+\end{align*}
+We package this up into the induction principle
+\symlabel{defn:induction-bool}%
+\[ \ind{\bool} : \dprd{C:\bool \to \UU} C(\bfalse) \to C(\btrue)
+\to \tprd{x:\bool} C(x) \]
+with the defining equations
+\begin{align*}
+ \ind{\bool}(C,c_0,c_1,\bfalse) &\defeq c_0, \\
+ \ind{\bool}(C,c_0,c_1,\btrue) &\defeq c_1.
+\end{align*}
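As a sketch, the dependent eliminator for the two-element type can be written in Lean~4 as follows (using Lean's built-in `Bool`; the name `indBool` is ours).

```lean
-- The dependent eliminator for Bool: two cases suffice.
def indBool {C : Bool → Type} (c₀ : C false) (c₁ : C true) :
    (x : Bool) → C x
  | false => c₀
  | true  => c₁

-- With a constant family it behaves like if-then-else,
-- with true selecting the second input, as in the text.
example : indBool (C := fun _ => Nat) 0 1 true = 1 := rfl
```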
+
+As an example, using the induction principle we can deduce that, as we expect, every element of $\bool$ is either $\btrue$ or $\bfalse$.
+As before, in order to state this we use the equality types which we have not yet introduced, but we need only the fact that everything is equal to itself by $\refl{x}:x=x$.
+Thus, we construct an element of
+\begin{equation}\label{thm:allbool-trueorfalse}
+ \prd{x:\bool}(x=\bfalse)+(x=\btrue),
+\end{equation}
+i.e.\ a function assigning to each $x:\bool$ either an equality $x=\bfalse$ or an equality $x=\btrue$.
+We define this element using the induction principle for \bool, with $C(x) \defeq (x=\bfalse)+(x=\btrue)$;
+the two inputs are $\inl(\refl{\bfalse}) : C(\bfalse)$ and $\inr(\refl{\btrue}):C(\btrue)$.
+In other words, our element of~\eqref{thm:allbool-trueorfalse} is
+\[ \ind{\bool}\big(\lam{x}(x=\bfalse)+(x=\btrue),\, \inl(\refl{\bfalse}),\, \inr(\refl{\btrue})\big). \]
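This element can be constructed verbatim in Lean~4, where `rfl` plays the role of $\refl{}$ and $\oplus$ that of the coproduct (the name `boolCases` is ours):

```lean
-- Every boolean is false or true, witnessed by a reflexivity proof
-- in each branch of the induction.
def boolCases : (x : Bool) → (x = false) ⊕ (x = true)
  | false => .inl rfl
  | true  => .inr rfl
```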
+
+We have remarked that $\Sigma$-types can be regarded as analogous to indexed disjoint unions, while coproducts are binary disjoint unions.
+It is natural to expect that a binary disjoint union $A+B$ could be constructed as an indexed one over the two-element type \bool.
+For this we need a type family $P:\bool\to\type$ such that $P(\bfalse)\jdeq A$ and $P(\btrue)\jdeq B$.
+Indeed, we can obtain such a family precisely by the recursion principle for $\bool$.
+\index{type!family of}%
+(The ability to define \emph{type families} by induction and recursion, using the fact that the universe $\UU$ is itself a type, is a subtle and important aspect of type theory.)
+Thus, we could have defined
+\index{type!coproduct}%
+\[ A + B \defeq \sm{x:\bool} \rec{\bool}(\UU,A,B,x) \]
+with
+\begin{align*}
+ \inl(a) &\defeq \tup{\bfalse}{a}, \\
+ \inr(b) &\defeq \tup{\btrue}{b}.
+\end{align*}
+We leave it as an exercise to derive the induction principle of a coproduct type from this definition.
+(See also \cref{ex:sum-via-bool,sec:appetizer-univalence}.)
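As a sketch of this encoding in Lean~4 (with hypothetical names `Cop`, `inl'`, `inr'` to avoid clashing with the library's coproduct, and a `match` on `Bool` standing in for $\rec{\bool}$):

```lean
-- A coproduct built from Bool and a Σ-type; the type family is
-- defined by recursion on Bool.
def Cop (A B : Type) : Type :=
  Σ x : Bool, (match x with | false => A | true => B)

-- The injections pair a tag with a value; ⟨false, a⟩ typechecks
-- because the family applied to false reduces to A.
def inl' {A B : Type} (a : A) : Cop A B := ⟨false, a⟩
def inr' {A B : Type} (b : B) : Cop A B := ⟨true, b⟩
```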
+
+We can apply the same idea to products and $\Pi$-types: we could have defined
+\[ A \times B \defeq \prd{x:\bool}\rec{\bool}(\UU,A,B,x). \]
+Pairs could then be constructed using induction for \bool:
+\[ \tup{a}{b} \defeq \ind{\bool}(\rec{\bool}(\UU,A,B),a,b) \]
+while the projections are straightforward applications
+\begin{align*}
+ \fst(p) &\defeq p(\bfalse), \\
+ \snd(p) &\defeq p(\btrue).
+\end{align*}
+The derivation of the induction principle for binary products defined in this way is a bit more involved, and requires function extensionality, which we will introduce in \cref{sec:compute-pi}.
+Moreover, we do not get the same judgmental equalities; see \cref{ex:prod-via-bool}.
+This is a recurrent issue when encoding one type as another; we will return to it in \cref{sec:htpy-inductive}.
+
+We may occasionally refer to the elements $\bfalse$ and $\btrue$ of $\bool$ as ``false'' and ``true'' respectively.
+However, note that unlike in classical\index{mathematics!classical} mathematics, we do not use elements of $\bool$ as truth values
+\index{value!truth}%
+or as propositions.
+(Instead we identify propositions with types; see \cref{sec:pat}.)
+In particular, the type $A \to \bool$ is not generally the power set\index{power set} of $A$; it represents only the ``decidable'' subsets of $A$ (see \cref{cha:logic}).
+\index{decidable!subset}%
+
+\index{type!of booleans|)}%
+
+
+\section{The natural numbers}
+\label{sec:inductive-types}
+
+\indexsee{type!of natural numbers}{natural numbers}%
+\index{natural numbers|(defstyle}%
+\indexsee{number!natural}{natural numbers}%
+So far we have rules for constructing new types by abstract operations, but for doing concrete mathematics we also require some concrete types, such as types of numbers.
+The most basic such is the type $\nat : \UU$ of natural numbers; once we have this we can construct integers, rational numbers, real numbers, and so on (see \cref{cha:real-numbers}).
+
+The elements of $\nat$ are constructed using $0 : \nat$\indexdef{zero} and the successor\indexdef{successor} operation $\suc : \nat \to \nat$.
+When denoting natural numbers, we adopt the usual decimal notation $1 \defeq \suc(0)$, $2 \defeq \suc(1)$, $3 \defeq \suc(2)$, \dots.
+
+The essential property of the natural numbers is that we can define functions by recursion and perform proofs by induction --- where now the words ``recursion'' and ``induction'' have a more familiar meaning.
+\index{recursion principle!for natural numbers}%
+To construct a non-dependent function $f : \nat \to C$ out of the natural numbers by recursion, it is enough to provide a starting point $c_0 : C$ and a ``next step'' function $c_s : \nat \to C \to C$.
+This gives rise to $f$ with the defining equations\index{computation rule!for natural numbers}
+\begin{align*}
+ f(0) &\defeq c_0, \\
+ f(\suc(n)) &\defeq c_s(n,f(n)).
+\end{align*}
+We say that $f$ is defined by \define{primitive recursion}.
+\indexdef{primitive!recursion}%
+\indexdef{recursion!primitive}%
+
+As an example, we look at how to define a function on natural numbers which doubles its argument.
+In this case we have $C\defeq \nat$.
+We first need to supply the value of $\dbl(0)$, which is easy: we put $c_0 \defeq 0$.
+Next, to compute the value of $\dbl(\suc(n))$ for a natural number $n$, we first compute the value of $\dbl(n)$ and then perform the successor operation twice.
+This is captured by the recurrence\index{recurrence} $c_s(n,y) \defeq \suc(\suc(y))$.
+Note that the second argument $y$ of $c_s$ stands for the result of the \emph{recursive call}\index{recursive call} $\dbl(n)$.
+
+Defining $\dbl:\nat\to\nat$ by primitive recursion in this way, therefore, we obtain the defining equations:
+\begin{align*}
+ \dbl(0) &\defeq 0\\
+ \dbl(\suc(n)) &\defeq \suc(\suc(\dbl(n))).
+\end{align*}
+This indeed has the correct computational behavior: for example, we have
+\begin{align*}
+ \dbl(2) &\jdeq \dbl(\suc(\suc(0)))\\
+ & \jdeq c_s(\suc(0), \dbl(\suc(0))) \\
+ & \jdeq \suc(\suc(\dbl(\suc(0)))) \\
+ & \jdeq \suc(\suc(c_s(0,\dbl(0)))) \\
+ & \jdeq \suc(\suc(\suc(\suc(\dbl(0))))) \\
+ & \jdeq \suc(\suc(\suc(\suc(c_0)))) \\
+ & \jdeq \suc(\suc(\suc(\suc(0))))\\
+ &\jdeq 4.
+\end{align*}
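The same definition, written by its defining equations in Lean~4 (whose `Nat` corresponds to $\nat$), computes exactly as above:

```lean
-- Doubling by primitive recursion; the recursive call appears
-- only as dbl n, standing for the second argument of c_s.
def dbl : Nat → Nat
  | 0     => 0
  | n + 1 => Nat.succ (Nat.succ (dbl n))

-- The chain of judgmental equalities in the text collapses to rfl.
example : dbl 2 = 4 := rfl
```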
+We can define multi-variable functions by primitive recursion as well, by currying and allowing $C$ to be a function type.
+\indexdef{addition!of natural numbers}
+For example, we define addition $\add : \nat \to \nat \to \nat$ with $C \defeq \nat \to \nat$ and the following ``starting point'' and ``next step'' data:
+\begin{align*}
+ c_0 & : \nat \to \nat \\
+ c_0 (n) & \defeq n \\
+ c_s & : \nat \to (\nat \to \nat) \to (\nat \to \nat) \\
+ c_s(m,g)(n) & \defeq \suc(g(n)).
+\end{align*}
+We thus obtain $\add : \nat \to \nat \to \nat$ satisfying the definitional equalities
+\begin{align*}
+ \add(0,n) &\jdeq n \\
+ \add(\suc(m),n) &\jdeq \suc(\add(m,n)).
+\end{align*}
+As usual, we write $\add(m,n)$ as $m+n$.
+The reader is invited to verify that $2+2\jdeq 4$.
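In Lean~4 this verification is a one-liner. We define addition by recursion on the first argument, as in the text, under the name `add'` (Lean's built-in `Nat.add` instead recurses on the second argument):

```lean
-- Addition via primitive recursion on the first argument.
def add' : Nat → Nat → Nat
  | 0,     n => n
  | m + 1, n => Nat.succ (add' m n)

-- 2 + 2 and 4 are judgmentally equal, so reflexivity suffices.
example : add' 2 2 = 4 := rfl
```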
+% ex: define multiplication and exponentiation.
+
+As in previous cases, we can package the principle of primitive recursion into a recursor:
+\[\rec{\nat} : \dprd{C:\UU} C \to (\nat \to C \to C) \to \nat \to C \]
+with the defining equations
+\symlabel{defn:recursor-nat}%
+\begin{align*}
+\rec{\nat}(C,c_0,c_s,0) &\defeq c_0, \\
+\rec{\nat}(C,c_0,c_s,\suc(n)) &\defeq c_s(n,\rec{\nat}(C,c_0,c_s,n)).
+\end{align*}
+%ex derive rec from it
+Using $\rec{\nat}$ we can present $\dbl$ and $\add$ as follows:
+\begin{align}
+\dbl &\defeq \rec\nat\big(\nat,\, 0,\, \lamu{n:\nat}{y:\nat} \suc(\suc(y))\big) \label{eq:dbl-as-rec}\\
+\add &\defeq \rec{\nat}\big(\nat \to \nat,\, \lamu{n:\nat} n,\, \lamu{m:\nat}{g:\nat \to \nat}{n :\nat} \suc(g(n))\big).
+\end{align}
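The packaged recursor, and the re-derivation of $\dbl$ from it, can be sketched in Lean~4 as follows (names `recNat`, `dbl'` are ours; we make $C$ an implicit argument rather than an explicit one).

```lean
-- The recursor for the natural numbers.
def recNat {C : Type} (c₀ : C) (cs : Nat → C → C) : Nat → C
  | 0     => c₀
  | n + 1 => cs n (recNat c₀ cs n)

-- dbl, defined without explicit recursion, as in the display above.
def dbl' : Nat → Nat := recNat 0 (fun _ y => Nat.succ (Nat.succ y))

example : dbl' 3 = 6 := rfl
```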
+Of course, all functions definable only using the primitive recursion principle will be \emph{computable}.
+(The presence of higher function types --- that is, functions with other functions as arguments --- does, however, mean we can define more than the usual primitive recursive functions; see e.g.~\cref{ex:ackermann}.)
+This is appropriate in constructive mathematics;
+\index{mathematics!constructive}%
+in \cref{sec:intuitionism,sec:axiom-choice} we will see how to augment type theory so that we can define more general mathematical functions.
+
+\index{induction principle!for natural numbers}
+We now follow the same approach as for other types, generalizing primitive recursion to dependent functions to obtain an \emph{induction principle}.
+Thus, assume as given a family $C : \nat \to \UU$, an element $c_0 : C(0)$, and a function $c_s : \prd{n:\nat} C(n) \to C(\suc(n))$; then we can construct $f : \prd{n:\nat} C(n)$ with the defining equations:\index{computation rule!for natural numbers}
+\begin{align*}
+ f(0) &\defeq c_0, \\
+ f(\suc(n)) &\defeq c_s(n,f(n)).
+\end{align*}
+We can also package this into a single function
+\symlabel{defn:induction-nat}%
+\[\ind{\nat} : \dprd{C:\nat\to \UU} C(0) \to \Parens{\tprd{n : \nat} C(n) \to C(\suc(n))} \to \tprd{n : \nat} C(n) \]
+with the defining equations
+\begin{align*}
+\ind{\nat}(C,c_0,c_s,0) &\defeq c_0, \\
+\ind{\nat}(C,c_0,c_s,\suc(n)) &\defeq c_s(n,\ind{\nat}(C,c_0,c_s,n)).
+\end{align*}
+Here we finally see the connection to the classical notion of proof by induction.
+Recall that in type theory we represent propositions by types, and proving a proposition by inhabiting the corresponding type.
+In particular, a \emph{property} of natural numbers is represented by a family of types $P:\nat\to\type$.
+From this point of view, the above induction principle says that if we can prove $P(0)$, and if for any $n$ we can prove $P(\suc(n))$ assuming $P(n)$, then we have $P(n)$ for all $n$.
+This is, of course, exactly the usual principle of proof by induction on natural numbers.
+
+\index{associativity!of addition!of natural numbers}
+As an example, consider how we might represent an explicit proof that $+$ is associative.
+(We will not actually write out proofs in this style, but it serves as a useful example for understanding how induction is represented formally in type theory.)
+To derive
+\[\assoc : \prd{i,j,k:\nat} \id{i + (j + k)}{(i + j) + k}, \]
+it is sufficient to supply
+\[ \assoc_0 : \prd{j,k:\nat} \id{0 + (j + k)}{(0+ j) + k} \]
+and
+\begin{narrowmultline*}
+ \assoc_s : \prd{i:\nat} \left(\prd{j,k:\nat} \id{i + (j + k)}{(i + j) + k}\right)
+ \narrowbreak
+ \to \prd{j,k:\nat} \id{\suc(i) + (j + k)}{(\suc(i) + j) + k}.
+\end{narrowmultline*}
+To derive $\assoc_0$, recall that $0+n \jdeq n$, and hence $0 + (j + k) \jdeq j+k \jdeq (0+ j) + k$.
+Hence we can just set
+\[ \assoc_0(j,k) \defeq \refl{j+k}. \]
+For $\assoc_s$, recall that the definition of $+$ gives $\suc(m)+n \jdeq \suc(m+n)$, and hence
+\begin{align*}
+ \suc(i) + (j + k) &\jdeq \suc(i+(j+k)) \qquad\text{and}\\
+ (\suc(i)+j)+k &\jdeq \suc((i+j)+k).
+\end{align*}
+Thus, the output type of $\assoc_s$ is equivalently $\id{\suc(i+(j+k))}{\suc((i+j)+k)}$.
+But its input (the ``inductive hypothesis'')
+\index{hypothesis!inductive}%
+\index{inductive!hypothesis}%
+yields $\id{i+(j+k)}{(i+j)+k}$, so it suffices to invoke the fact that if two natural numbers are equal, then so are their successors.
+(We will prove this obvious fact in \cref{lem:map}, using the induction principle of identity types.)
+We call this latter fact
+$\apfunc{\suc} : %\prd{m,n:\nat}
+(\id[\nat]{m}{n}) \to (\id[\nat]{\suc(m)}{\suc(n)})$, so we can define
+\[\assoc_s(i,h,j,k) \defeq \apfunc{\suc}( %n+(j+k),(n+j)+k,
+h(j,k)). \]
+Putting these together with $\ind{\nat}$, we obtain a proof of associativity.
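The whole proof can be rendered in Lean~4, with `congrArg Nat.succ` playing the role of $\apfunc{\suc}$; we again use our `add'` (recursion on the first argument, matching the text) rather than Lean's built-in addition, whose defining equations differ.

```lean
def add' : Nat → Nat → Nat
  | 0,     n => n
  | m + 1, n => Nat.succ (add' m n)

-- Induction on i: the base case is refl (both sides reduce to
-- add' j k), and the successor case applies ap_succ to the
-- inductive hypothesis.
theorem assoc' : (i j k : Nat) → add' i (add' j k) = add' (add' i j) k
  | 0,     _, _ => rfl
  | i + 1, j, k => congrArg Nat.succ (assoc' i j k)
```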
+
+\index{natural numbers|)}%
+
+
+\section{Pattern matching and recursion}
+\label{sec:pattern-matching}
+
+\index{pattern matching|(defstyle}%
+\indexsee{matching}{pattern matching}%
+\index{definition!by pattern matching|(}%
+The natural numbers introduce an additional subtlety over the types considered up until now.
+In the case of coproducts, for instance, we could define a function $f:A+B\to C$ either with the recursor:
+\[ f \defeq \rec{A+B}(C, g_0, g_1) \]
+or by giving the defining equations:
+\begin{align*}
+ f(\inl(a)) &\defeq g_0(a)\\
+ f(\inr(b)) &\defeq g_1(b).
+\end{align*}
+To go from the former expression of $f$ to the latter, we simply use the computation rules for the recursor.
+Conversely, given any defining equations
+\begin{align*}
+ f(\inl(a)) &\defeq \Phi_0\\
+ f(\inr(b)) &\defeq \Phi_1
+\end{align*}
+where $\Phi_0$ and $\Phi_1$ are expressions that may involve the variables
+\index{variable}%
+$a$ and $b$ respectively, we can express these equations equivalently in terms of the recursor by using $\lambda$-abstraction\index{lambda abstraction@$\lambda$-abstraction}:
+\[ f\defeq \rec{A+B}(C, \lam{a} \Phi_0, \lam{b} \Phi_1).\]
+In the case of the natural numbers, however, the ``defining equations'' of a function such as $\dbl$:
+\begin{align}
+ \dbl(0) &\defeq 0 \label{eq:dbl0}\\
+ \dbl(\suc(n)) &\defeq \suc(\suc(\dbl(n)))\label{eq:dblsuc}
+\end{align}
+involve \emph{the function $\dbl$ itself} on the right-hand side.
+However, we would still like to be able to give these equations, rather than~\eqref{eq:dbl-as-rec}, as the definition of \dbl, since they are much more convenient and readable.
+The solution is to read the expression ``$\dbl(n)$'' on the right-hand side of~\eqref{eq:dblsuc} as standing in for the result of the recursive call, which in a definition of the form $\dbl\defeq \rec{\nat}(\nat,c_0,c_s)$ would be the second argument of $c_s$.
+
+More generally, if we have a ``definition'' of a function $f:\nat\to C$ such as
+\begin{align*}
+ f(0) &\defeq \Phi_0\\
+ f(\suc(n)) &\defeq \Phi_s
+\end{align*}
+where $\Phi_0$ is an expression of type $C$, and $\Phi_s$ is an expression of type $C$ which may involve the variable $n$ and also the symbol ``$f(n)$'', we may translate it to a definition
+\[ f \defeq \rec{\nat}(C,\,\Phi_0,\,\lam{n}{r} \Phi_s') \]
+where $\Phi_s'$ is obtained from $\Phi_s$ by replacing all occurrences of ``$f(n)$'' by the new variable $r$.
+
+This style of defining functions by recursion (or, more generally, dependent functions by induction) is so convenient that we frequently adopt it.
+It is called definition by \define{pattern matching}.
+Of course, it is very similar to how a computer programmer may define a recursive function with a body that literally contains recursive calls to itself.
+However, unlike the programmer, we are restricted in what sort of recursive calls we can make: in order for such a definition to be re-expressible using the recursion principle, the function $f$ being defined can only appear in the body of $f(\suc(n))$ as part of the composite symbol ``$f(n)$''.
+Otherwise, we could write nonsense functions such as
+\begin{align*}
+ f(0)&\defeq 0\\
+ f(\suc(n)) &\defeq f(\suc(\suc(n))).
+\end{align*}
+If a programmer wrote such a function, it would simply call itself forever on any positive input, going into an infinite loop and never returning a value.
+In mathematics, however, to be worthy of the name, a \emph{function} must always associate a unique output value to every input value, so this would be unacceptable.
+
+This point will be even more important when we introduce more complicated inductive types in \cref{cha:induction,cha:hits,cha:real-numbers}.
+Whenever we introduce a new kind of inductive definition, we always begin by deriving its induction principle.
+Only then do we introduce an appropriate sort of ``pattern matching'' which can be justified as a shorthand for the induction principle.
+
+\index{pattern matching|)}%
+\index{definition!by pattern matching|)}%
+
+\section{Propositions as types}
+\label{sec:pat}
+
+\index{proposition!as types|(defstyle}%
+\index{logic!propositions as types|(}%
+As mentioned in the introduction, to show that a proposition is true in type theory corresponds to exhibiting an element of the type corresponding to that proposition.
+\index{evidence, of the truth of a proposition}%
+\index{witness!to the truth of a proposition}%
+\index{proof|(}
+We regard the elements of this type as \emph{evidence} or \emph{witnesses} that the proposition is true. (They are sometimes even called \emph{proofs}, but this terminology can be misleading, so we generally avoid it.)
+In general, however, we will not construct witnesses explicitly; instead we present the proofs in ordinary mathematical prose, in such a way that they could be translated into an element of a type.
+This is no different from reasoning in classical set theory, where we don't expect to see an explicit derivation using the rules of predicate logic and the axioms of set theory.
+
+However, the type-theoretic perspective on proofs is nevertheless different in important ways.
+The basic principle of the logic of type theory is that a proposition is not merely true or false, but rather can be seen as the collection of all possible witnesses of its truth.
+Under this conception, proofs are not just the means by which mathematics is communicated, but rather are mathematical objects in their own right, on a par with more familiar objects such as numbers, mappings, groups, and so on.
+Thus, since types classify the available mathematical objects and govern how they interact, propositions are nothing but special types --- namely, types whose elements are proofs.
+
+\index{propositional!logic}%
+\index{logic!propositional}%
+The basic observation which makes this identification feasible is that we have the following natural correspondence between \emph{logical} operations on propositions, expressed in English, and \emph{type-theoretic} operations on their corresponding types of witnesses.
+\index{false}%
+\index{true}%
+\index{conjunction}%
+\index{disjunction}%
+\index{implication}%
+\begin{center}
+\medskip
+\begin{tabular}{ll}
+ \toprule
+ English & Type Theory\\
+ \midrule
+ True & $\unit$ \\
+ False & $\emptyt$ \\
+ $A$ and $B$ & $A \times B$ \\
+ $A$ or $B$ & $A + B$ \\
+ If $A$ then $B$ & $A \to B$ \\
+ $A$ if and only if $B$ & $(A \to B) \times (B \to A)$ \\
+ Not $A$ & $A \to \emptyt$ \\
+ \bottomrule
+\end{tabular}
+\medskip
+\end{center}
+
+The point of the correspondence is that in each case, the rules for constructing and using elements of the type on the right correspond to the rules for reasoning about the proposition on the left.
+For instance, the basic way to prove a statement of the form ``$A$ and $B$'' is to prove $A$ and also prove $B$, while the basic way to construct an element of $A\times B$ is as a pair $(a,b)$, where $a$ is an element (or witness) of $A$ and $b$ is an element (or witness) of $B$.
+And if we want to use ``$A$ and $B$'' to prove something else, we are free to use both $A$ and $B$ in doing so, analogously to how the induction principle for $A\times B$ allows us to construct a function out of it by using elements of $A$ and of $B$.
+
+Similarly, the basic way to prove an implication\index{implication} ``if $A$ then $B$'' is to assume $A$ and prove $B$, while the basic way to construct an element of $A\to B$ is to give an expression which denotes an element (witness) of $B$ which may involve an unspecified variable element (witness) of type $A$.
And the basic way to use an implication ``if $A$ then $B$'' is to deduce $B$ if we know $A$, analogously to how we can apply a function $f:A\to B$ to an element of $A$ to produce an element of $B$.
+We strongly encourage the reader to do the exercise of verifying that the rules governing the other type constructors translate sensibly into logic.
+
+Of special note is that the empty type $\emptyt$ corresponds to falsity.\index{false}
+When speaking logically, we refer to an inhabitant of $\emptyt$ as a \define{contradiction}:
+\indexdef{contradiction}%
+thus there is no way to prove a contradiction,%
+\footnote{More precisely, there is no \emph{basic} way to prove a contradiction, i.e.\ \emptyt has no constructors.
+If our type theory were inconsistent, then there would be some more complicated way to construct an element of \emptyt.}
+while from a contradiction anything can be derived.
+We also define the \define{negation}
+\indexdef{negation}%
+of a type $A$ as
+%
+\begin{equation*}
+ \neg A \ \defeq\ A \to \emptyt.
+\end{equation*}
+%
+Thus, a witness of $\neg A$ is a function $A \to \emptyt$, which we may construct by assuming $x : A$ and deriving an element of~$\emptyt$.
+\index{proof!by contradiction}%
+\index{logic!constructive vs classical}
+Note that although the logic we obtain is ``constructive'', as discussed in the introduction, this sort of ``proof by contradiction'' (assume $A$ and derive a contradiction, concluding $\neg A$) is perfectly valid constructively: it is simply invoking the \emph{meaning} of ``negation''.
+The sort of ``proof by contradiction'' which is disallowed is to assume $\neg A$ and derive a contradiction as a way of proving $A$.
+Constructively, such an argument would only allow us to conclude $\neg\neg A$, and the reader can verify that there is no obvious way to get from $\neg\neg A$ (that is, from $(A\to \emptyt)\to\emptyt$) to $A$.
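The constructively valid direction can be exhibited as an explicit term. Here is a sketch in Lean 4 syntax (which is not the type theory of this book, but close enough for illustration; the name \texttt{toDoubleNeg} is ours, and Lean's \texttt{Empty} plays the role of $\emptyt$):

```lean
-- From a witness a : A we obtain a witness of ¬¬A, i.e. of (A → Empty) → Empty:
-- given f : A → Empty, simply apply f to a.
def toDoubleNeg {A : Type} : A → ((A → Empty) → Empty) :=
  fun a f => f a
```

There is no analogous term taking `(A → Empty) → Empty` back to `A`, which is exactly the observation made above.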
+
+\mentalpause
+
+The above translation of logical connectives into type-forming operations is referred to as \define{propositions as types}: it gives us a way to translate propositions and their proofs, written in English, into types and their elements.
+For example, suppose we want to prove the following tautology (one of ``de Morgan's laws''):
+\index{law!de Morgan's|(}%
+\index{de Morgan's laws|(}%
+\begin{equation}\label{eq:tautology1}
+ \text{\emph{``If not $A$ and not $B$, then not ($A$ or $B$)''}.}
+\end{equation}
+An ordinary English proof of this fact might go as follows.
+\begin{quote}
+ Suppose not $A$ and not $B$, and also suppose $A$ or $B$; we will derive a contradiction.
+ There are two cases.
+ If $A$ holds, then since not $A$, we have a contradiction.
+ Similarly, if $B$ holds, then since not $B$, we also have a contradiction.
+ Thus we have a contradiction in either case, so not ($A$ or $B$).
+\end{quote}
+Now, the type corresponding to our tautology~\eqref{eq:tautology1}, according to the rules given above, is
+\begin{equation}\label{eq:tautology2}
+ (A\to \emptyt) \times (B\to\emptyt) \to (A+B\to\emptyt)
+\end{equation}
+so we should be able to translate the above proof into an element of this type.
+
+As an example of how such a translation works, let us describe how a mathematician reading the English proof above might simultaneously construct, in their head, an element of~\eqref{eq:tautology2}.
+The introductory phrase ``Suppose not $A$ and not $B$'' translates into defining a function, with an implicit application of the recursion principle for the cartesian product in its domain $(A\to\emptyt)\times (B\to\emptyt)$.
+This introduces unnamed variables
+\index{variable}%
+(hypotheses)
+\index{hypothesis}%
+of types $A\to\emptyt$ and $B\to\emptyt$.
+When translating into type theory, we have to give these variables names; let us call them $x$ and $y$.
+At this point our partial definition of an element of~\eqref{eq:tautology2} can be written as
+\[ f((x,y)) \defeq\; \Box\;:A+B\to\emptyt \]
+with a ``hole'' $\Box$ of type $A+B\to\emptyt$ indicating what remains to be done.
+(We could equivalently write $f \defeq \rec{(A\to\emptyt)\times (B\to\emptyt)}(A+B\to\emptyt,\lam{x}{y} \Box)$, using the recursor instead of pattern matching.)
+The next phrase ``also suppose $A$ or $B$; we will derive a contradiction'' indicates filling this hole by a function definition, introducing another unnamed hypothesis $z:A+B$, leading to the proof state:
+\[ f((x,y))(z) \defeq \;\Box\; :\emptyt. \]
+Now saying ``there are two cases'' indicates a case split, i.e.\ an application of the recursion principle for the coproduct $A+B$.
+If we write this using the recursor, it would be
+\[ f((x,y))(z) \defeq \rec{A+B}(\emptyt,\lam{a} \Box,\lam{b}\Box,z) \]
+while if we write it using pattern matching, it would be
+\begin{align*}
+ f((x,y))(\inl(a)) &\defeq \;\Box\;:\emptyt\\
+ f((x,y))(\inr(b)) &\defeq \;\Box\;:\emptyt.
+\end{align*}
+Note that in both cases we now have two ``holes'' of type $\emptyt$ to fill in, corresponding to the two cases where we have to derive a contradiction.
+Finally, the conclusion of a contradiction from $a:A$ and $x:A\to\emptyt$ is simply application of the function $x$ to $a$, and similarly in the other case.
+\index{application!of hypothesis or theorem}%
+(Note the convenient coincidence of the phrase ``applying a function'' with that of ``applying a hypothesis'' or theorem.)
+Thus our eventual definition is
+\begin{align*}
+ f((x,y))(\inl(a)) &\defeq x(a)\\
+ f((x,y))(\inr(b)) &\defeq y(b).
+\end{align*}
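The finished definition translates almost verbatim into a proof assistant. Here is a sketch in Lean 4 (the name \texttt{deMorgan} is ours; Lean's \texttt{×}, \texttt{⊕} and \texttt{Empty} play the roles of the product, coproduct and $\emptyt$):

```lean
-- An element of (A → 𝟎) × (B → 𝟎) → (A + B → 𝟎), by the same case split:
def deMorgan {A B : Type} : (A → Empty) × (B → Empty) → (A ⊕ B → Empty) :=
  fun ⟨x, y⟩ z =>
    match z with
    | Sum.inl a => x a  -- first case: apply the hypothesis x : A → Empty to a
    | Sum.inr b => y b  -- second case: apply y : B → Empty to b
```

The two `match` arms correspond precisely to the two pattern-matching clauses above.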
+
+As an exercise, you should verify
+the converse tautology \emph{``If not ($A$ or $B$), then (not $A$) and (not $B$)}'' by exhibiting an element of
+\[ ((A + B) \to \emptyt) \to (A \to \emptyt) \times (B \to \emptyt), \]
+for any types $A$ and $B$, using the rules we have just introduced.
+
+\index{logic!classical vs constructive|(}
+However, not all classical\index{mathematics!classical} tautologies hold under this interpretation.
+For example, the rule
+\emph{``If not ($A$ and $B$), then (not $A$) or (not $B$)''} is not valid: we cannot, in general, construct an element of the corresponding type
+\[ ((A \times B) \to \emptyt) \to (A \to \emptyt) + (B \to \emptyt).\]
+This reflects the fact that the ``natural'' propositions-as-types logic of type theory is \emph{constructive}.
+This means that it does not include certain classical principles, such as the law of excluded middle (\LEM{})\index{excluded middle}
+or proof by contradiction,\index{proof!by contradiction}
+and others which depend on them, such as this instance of de Morgan's law.
+\index{law!de Morgan's|)}%
+\index{de Morgan's laws|)}%
+
+Philosophically, constructive logic is so-called because it confines itself to constructions that can be carried out \emph{effectively}, which is to say those with a computational meaning.
+Without being too precise, this means there is some sort of algorithm\index{algorithm} specifying, step-by-step, how to build an object (and, as a special case, how to see that a theorem is true).
+This requires omission of \LEM{}, since there is no \emph{effective}\index{effective!procedure} procedure for deciding whether a proposition is true or false.
+
+The constructivity of type-theoretic logic means it has an intrinsic computational meaning, which is of interest to computer scientists.
+It also means that type theory provides \emph{axiomatic freedom}.\index{axiomatic freedom}
+For example, while by default there is no construction witnessing \LEM{}, the logic is still compatible with the existence of one (see \cref{sec:intuitionism}).
+Thus, because type theory does not \emph{deny} \LEM{}, we may consistently add it as an assumption, and work conventionally without restriction.
+In this respect, type theory enriches, rather than constrains, conventional mathematical practice.
+
+We encourage the reader who is unfamiliar with constructive logic to work through some more examples as a means of getting familiar with it.
+See \cref{ex:tautologies,ex:not-not-lem} for some suggestions.
+\index{logic!classical vs constructive|)}
+
+\mentalpause
+
+So far we have discussed only propositional logic.
+\index{quantifier}%
+\index{quantifier!existential}%
+\index{quantifier!universal}%
+\index{predicate!logic}%
+\index{logic!predicate}%
+Now we consider \emph{predicate} logic, where in addition to logical connectives like ``and'' and ``or'' we have quantifiers ``there exists'' and ``for all''.
+In this case, types play a dual role: they serve as propositions and also as types in the conventional sense, i.e., domains we quantify over.
+A predicate over a type $A$ is represented as a family $P : A \to \UU$, assigning to every element $a : A$ a type $P(a)$ corresponding to the proposition that $P$ holds for $a$. We now extend the above translation with an explanation of the quantifiers:
+\begin{center}
+ \medskip
+ \begin{tabular}{ll}
+ \toprule
+ English & Type Theory\\
+ \midrule
+ For all $x:A$, $P(x)$ holds & $\prd{x:A} P(x)$ \\
 There exists $x:A$ such that $P(x)$ & $\sm{x:A} P(x)$ \\
+ \bottomrule
+ \end{tabular}
+ \medskip
+\end{center}
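To make the existential concrete: a witness of ``there exists $x:A$ such that $P(x)$'' is a dependent pair of an element and a proof. A sketch in Lean 4 (the name is ours; Lean's \texttt{PSigma} is an untruncated $\Sigma$-type that accepts the propositional family used here):

```lean
-- A witness that some natural number has successor 3: the pair (2, proof).
def witness : PSigma (fun x : Nat => x + 1 = 3) :=
  ⟨2, rfl⟩  -- 2 + 1 computes to 3, so the proof is reflexivity
```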
+As before, we can show that tautologies of (constructive) predicate logic translate into inhabited types.
+For example, \emph{If for all $x:A$, $P(x)$ and $Q(x)$ then (for all $x:A$, $P(x)$) and (for all $x:A$, $Q(x)$)} translates to
+\[ (\tprd{x:A} P(x) \times Q(x)) \to (\tprd{x:A} P(x)) \times (\tprd{x:A} Q(x)). \]
+An informal proof of this tautology might go as follows:
+\begin{quote}
+ Suppose for all $x$, $P(x)$ and $Q(x)$.
+ First, we suppose given $x$ and prove $P(x)$.
+ By assumption, we have $P(x)$ and $Q(x)$, and hence we have $P(x)$.
+ Second, we suppose given $x$ and prove $Q(x)$.
+ Again by assumption, we have $P(x)$ and $Q(x)$, and hence we have $Q(x)$.
+\end{quote}
+The first sentence begins defining an implication as a function, by introducing a witness for its hypothesis:\index{hypothesis}
+\[ f(p) \defeq \;\Box\; : (\tprd{x:A} P(x)) \times (\tprd{x:A} Q(x)). \]
+At this point there is an implicit use of the pairing constructor to produce an element of a product type, which is somewhat signposted in this example by the words ``first'' and ``second'':
+\[ f(p) \defeq \Big( \;\Box\; : \tprd{x:A} P(x) \;,\; \Box\; : \tprd{x:A}Q(x) \;\Big). \]
+The phrase ``we suppose given $x$ and prove $P(x)$'' now indicates defining a \emph{dependent} function in the usual way, introducing a variable
+\index{variable}%
+for its input.
+Since this is inside a pairing constructor, it is natural to write it as a $\lambda$-abstraction\index{lambda abstraction@$\lambda$-abstraction}:
+\[ f(p) \defeq \Big( \; \lam{x} \;\big(\Box\; : P(x)\big) \;,\; \Box\; : \tprd{x:A}Q(x) \;\Big). \]
+Now ``we have $P(x)$ and $Q(x)$'' invokes the hypothesis, obtaining $p(x) : P(x)\times Q(x)$, and ``hence we have $P(x)$'' implicitly applies the appropriate projection:
+\[ f(p) \defeq \Big( \; \lam{x} \proj1(p(x)) \;,\; \Box\; : \tprd{x:A}Q(x) \;\Big). \]
+The next two sentences fill the other hole in the obvious way:
+\[ f(p) \defeq \Big( \; \lam{x} \proj1(p(x)) \;,\; \lam{x} \proj2(p(x)) \; \Big). \]
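The completed term can likewise be written down in a proof assistant. A sketch in Lean 4 (the name \texttt{distrib} is ours; \texttt{∀} plays the role of $\prd{}$ and \texttt{×} of the product type):

```lean
-- (∀ x, P x × Q x) → (∀ x, P x) × (∀ x, Q x), as a pair of projections:
def distrib {A : Type} {P Q : A → Type} :
    (∀ x : A, P x × Q x) → (∀ x : A, P x) × (∀ x : A, Q x) :=
  ⟨fun x => (p x).1, fun x => (p x).2⟩ᵀ where
```

(In full: `fun p => ⟨fun x => (p x).1, fun x => (p x).2⟩`; the first projection of $p(x)$ fills the first component and the second projection the second, exactly as in the displayed definition of $f$.)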
+Of course, the English proofs we have been using as examples are much more verbose than those that mathematicians usually use in practice; they are more like the sort of language one uses in an ``introduction to proofs'' class.
+The practicing mathematician has learned to fill in the gaps, so in practice we can omit plenty of details, and we will generally do so.
+The criterion of validity for proofs, however, is always that they can be translated back into the construction of an element of the corresponding type.
+
+\symlabel{leq-nat}%
+As a more concrete example, consider how to define inequalities of natural numbers.
+One natural definition is that $n\le m$ if there exists a $k:\nat$ such that $n+k=m$.
+(This uses again the identity types that we will introduce in the next section, but we will not need very much about them.)
+Under the propositions-as-types translation, this would yield:
+\[ (n\le m) \defeq \sm{k:\nat} (\id{n+k}{m}). \]
+The reader is invited to prove the familiar properties of $\le$ from this definition.
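As an illustration of two such properties, here is a sketch in Lean 4 (definitions and names ours; \texttt{PSigma} is an untruncated $\Sigma$-type):

```lean
-- (n ≤ m) := Σ (k : Nat), n + k = m, with two sample properties.
def Le (n m : Nat) : Type := PSigma (fun k : Nat => n + k = m)

-- Reflexivity: the witness is k = 0, since n + 0 = n holds by computation.
def Le.refl (n : Nat) : Le n n := ⟨0, rfl⟩

-- Compatibility with successor: reuse the witness k, bumped by one.
def Le.succ {n m : Nat} : Le n m → Le n (m + 1) :=
  fun ⟨k, h⟩ => ⟨k + 1, congrArg Nat.succ h⟩
```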
For strict inequality, there are a couple of natural choices, such as
\[ (n<m) \defeq \sm{k:\nat} (\id{n+k+1}{m}). \]
In what follows we denote by $\Qp \defeq \setof{ q : \Q | q > 0 }$ the type of positive rational numbers.
+
+\section{Dedekind reals}
+\label{sec:dedekind-reals}
+
+\index{real numbers!Dedekind|(}%
+Let us first recall the basic idea of Dedekind's construction. We use two-sided Dedekind
+cuts, as opposed to an often used one-sided version, because the symmetry makes
+constructions more elegant, and it works constructively as well as classically.
+\index{mathematics!constructive}%
+A \emph{Dedekind cut}\index{cut!Dedekind} consists of a pair $(L, U)$ of subsets $L, U \subseteq \Q$, called the
+\emph{lower} and \emph{upper cut} respectively, which are:
+%
+\begin{enumerate}
+\item \emph{inhabited:} there are $q \in L$ and $r \in U$,
+\item \emph{rounded:} $q \in L \Leftrightarrow \exis {r \in \Q} q < r \land r \in L$
+ and $r \in U \Leftrightarrow \exis {q \in \Q} q \in U \land q < r$,
+ \index{rounded!Dedekind cut}
+\item \emph{disjoint:} $\lnot (q \in L \land q \in U)$, and
+\item \emph{located:} $q < r \Rightarrow q \in L \lor r \in U$.
+ \index{locatedness}%
+\end{enumerate}
+%
+Reading the roundedness condition from left to right tells us that cuts are \emph{open},
+\index{open!cut}%
+and from right to left that they are \emph{lower}, respectively \emph{upper}, sets. The
+locatedness condition states that there is no large gap between $L$ and $U$. Because cuts
+are always open, they never include the ``point in between'', even when it is rational. A
+typical Dedekind cut looks like this:
+%
+\begin{center}
+ \begin{tikzpicture}[x=\textwidth]
+ \draw[<-),line width=0.75pt] (0,0) -- (0.297,0) node[anchor=south east]{$L\ $};
+ \draw[(->,line width=0.75pt] (0.300, 0) node[anchor=south west]{$\ U$} -- (0.9, 0) ;
+ \end{tikzpicture}
+\end{center}
+%
+We might naively translate the informal definition into type theory by saying that a cut
+is a pair of maps $L, U : \Q \to \prop$. But we saw in \cref{subsec:prop-subsets} that
+$\prop$ is an ambiguous\index{typical ambiguity} notation for $\prop_{\UU_i}$ where~$\UU_i$ is a universe. Once we
+use a particular $\UU_i$ to define cuts, the type of reals will reside in the next
+universe $\UU_{i+1}$, a property of reals two levels higher in $\UU_{i+2}$, a property of
+subsets of reals in $\UU_{i+3}$, etc. In principle we should be able to keep track of the
+universe levels\index{universe level}, especially with the help of a proof assistant, but doing so here would
+just burden us with bureaucracy that we prefer to avoid. We shall therefore make a
+simplifying assumption that a single type of propositions $\Omega$ is sufficient for all
+our purposes.
+
+In fact, the construction of the Dedekind reals is quite resilient to logical
+manipulations. There are several ways in which we can make sense of using a single type
+$\Omega$:
+%
+\begin{enumerate}
+
+\item We could identify $\Omega$ with the ambiguous $\prop$ and track all the universes
+ that appear in definitions and constructions.
+
+\item We could assume the propositional resizing axiom,
+ \index{propositional!resizing}%
+ as in \cref{subsec:prop-subsets}, which essentially collapses the $\prop_{\UU_i}$'s to the
+ lowest level\index{universe level}, which we call $\Omega$.
+
+\item A classical mathematician who is not interested in the intricacies of type-theoretic
+ universes or computation may simply assume the law of excluded middle~\eqref{eq:lem} for
+ mere propositions so that $\Omega \jdeq \bool$.
+ \index{excluded middle}
+ This not only eradicates questions about
+ levels\index{universe level} of $\prop$, but also turns everything we do into the standard classical\index{mathematics!classical}
+ construction of real numbers.
+
+\item On the other end of the spectrum one might ask for a minimal requirement that makes
+ the constructions work. The condition that a mere predicate be a Dedekind cut is
 expressible using only conjunctions, disjunctions, and existential quantifiers\index{quantifier!existential} over~$\Q$, which
+ is a countable set. Thus we could take $\Omega$ to be the initial \emph{$\sigma$-frame},
+ \index{initial!sigma-frame@$\sigma$-frame}%
+ \index{sigma-frame@$\sigma$-frame!initial|defstyle}%
+ i.e., a lattice\index{lattice} with countable joins\index{join!in a lattice} in which binary meets distribute over countable
+ joins. (The initial $\sigma$-frame cannot be the two-point lattice $\bool$ because
+ $\bool$ is not closed under countable joins, unless we assume excluded middle.) This
+ would lead to a construction of~$\Omega$ as a higher inductive-inductive type, but one
+ experiment of this kind in \cref{sec:cauchy-reals} is enough.
+\end{enumerate}
+
+In all of the above cases $\Omega$ is a set.
+%
+Without further ado, we translate the informal definition into type theory.
+Throughout this chapter, we use the
+logical notation from \cref{defn:logical-notation}.
+
+\begin{defn} \label{defn:dedekind-reals}
+ A \define{Dedekind cut}
+ \indexsee{Dedekind!cut}{cut, Dedekind}%
+ \indexdef{cut!Dedekind}%
+ is a pair $(L, U)$ of mere predicates $L : \Q \to \Omega$ and $U
+ : \Q \to \Omega$ which is:
+ %
+ \begin{enumerate}
+ \item \label{defn:dedekind-reals-inhabited}
+ \emph{inhabited (i.e., bounded):} $\exis{q : \Q} L(q)$ and $\exis{r : \Q} U(r)$,
+ \item \emph{rounded:} for all $q, r : \Q$,
+ \index{rounded!Dedekind cut}
+ %
+ \begin{align*}
+ L(q) &\Leftrightarrow \exis{r : \Q} (q < r) \land L(r)
+ \qquad\text{and}\\
+ U(r) &\Leftrightarrow \exis{q : \Q} (q < r) \land U(q),
+ \end{align*}
+ \item \emph{disjoint:} $\lnot (L(q) \land U(q))$ for all $q : \Q$,
+ \item \emph{located:} $(q < r) \Rightarrow L(q) \lor U(r)$ for all $q, r : \Q$.
+ \index{locatedness}%
+ \end{enumerate}
+ %
+ We let $\dcut(L, U)$ denote the conjunction of these conditions. The type of
+ \define{Dedekind reals} is
+ \indexsee{Dedekind!real numbers}{real numbers, De\-de\-kind}%
+ \indexdef{real numbers!Dedekind}%
+ %
+ \begin{equation*}
+ \RD \defeq \setof{ (L, U) : (\Q \to \Omega) \times (\Q \to \Omega) | \dcut(L,U)}.
+ \end{equation*}
+\end{defn}
+
+It is apparent that $\dcut(L, U)$ is a mere proposition, and since $\Q \to \Omega$ is a
set, the Dedekind reals form a set too. See
+\cref{ex:RD-extended-reals,ex:RD-lower-cuts,ex:RD-interval-arithmetic} for variants of
+Dedekind cuts which lead to extended reals, lower and upper reals, and the interval
+domain.
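The four conditions can be transcribed directly into a proof assistant. Here is a sketch in Lean 4, with an abstract type \texttt{Q} equipped with \texttt{<} standing in for $\Q$ and Lean's \texttt{Prop} standing in for $\Omega$ (all names are ours):

```lean
-- A Dedekind cut as a pair of predicates satisfying the four conditions.
structure DedekindCut (Q : Type) [LT Q] where
  L : Q → Prop                             -- lower cut
  U : Q → Prop                             -- upper cut
  inhabitedL : ∃ q, L q                    -- inhabited (bounded below)
  inhabitedU : ∃ r, U r                    -- inhabited (bounded above)
  roundedL : ∀ q, L q ↔ ∃ r, q < r ∧ L r   -- open lower set
  roundedU : ∀ r, U r ↔ ∃ q, q < r ∧ U q   -- open upper set
  disjoint : ∀ q, ¬(L q ∧ U q)
  located : ∀ q r, q < r → L q ∨ U r
```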
+
+There is an embedding $\Q \to \RD$ which associates with each rational $q : \Q$ the cut
+$(L_q, U_q)$ where
+%
+\begin{equation*}
+ L_q(r) \defeq (r < q)
+ \qquad\text{and}\qquad
+ U_q(r) \defeq (q < r).
+\end{equation*}
+%
+We shall simply write $q$ for the cut $(L_q, U_q)$ associated with a rational number.
+
+\subsection{The algebraic structure of Dedekind reals}
+\label{sec:algebr-struct-dedek}
+
+The construction of the algebraic and order-theoretic structure of Dedekind reals proceeds
+as usual in intuitionistic logic. Rather than dwelling on details we point out the
+differences between the classical\index{mathematics!classical} and intuitionistic setup. Writing $L_x$ and $U_x$ for
+the lower and upper cut of a real number $x : \RD$, we define addition as%
+%
+\indexdef{addition!of Dedekind reals}%
+\begin{align*}
+ L_{x + y}(q) &\defeq \exis{r, s : \Q} L_x(r) \land L_y(s) \land q = r + s, \\
+ U_{x + y}(q) &\defeq \exis{r, s : \Q} U_x(r) \land U_y(s) \land q = r + s,
+\end{align*}
+%
+and the additive inverse by
+%
+\begin{align*}
+ L_{-x}(q) &\defeq \exis{r : \Q} U_x(r) \land q = - r, \\
+ U_{-x}(q) &\defeq \exis{r : \Q} L_x(r) \land q = - r.
+\end{align*}
+%
+With these operations $(\RD, 0, {+}, {-})$ is an abelian\index{group!abelian} group. Multiplication is a bit
+more cumbersome:
+%
+\indexdef{multiplication!of Dedekind reals}%
+\begin{align*}
+ L_{x \cdot y}(q) &\defeq
+ \begin{aligned}[t]
+ \exis{a, b, c, d : \Q} & L_x(a) \land U_x(b) \land L_y(c) \land U_y(d) \land {}\\
+ & \qquad q < \min (a \cdot c, a \cdot d, b \cdot c, b \cdot d),
+ \end{aligned} \\
+ U_{x \cdot y}(q) &\defeq
+ \begin{aligned}[t]
+ \exis{a, b, c, d : \Q} & L_x(a) \land U_x(b) \land L_y(c) \land U_y(d) \land {}\\
+ & \qquad \max (a \cdot c, a \cdot d, b \cdot c, b \cdot d) < q.
+ \end{aligned}
+\end{align*}
+%
+\index{interval!arithmetic}%
+These formulas are related to multiplication of intervals in interval arithmetic, where
+intervals $[a,b]$ and $[c,d]$ with rational endpoints multiply to the interval
+%
+\begin{equation*}
+ [a,b] \cdot [c,d] =
+ [\min(a c, a d, b c, b d), \max(a c, a d, b c, b d)].
+\end{equation*}
+%
+For instance, the formula for the lower cut can be read as saying that $q < x \cdot y$
+when there are intervals $[a,b]$ and $[c,d]$ containing $x$ and $y$, respectively, such
+that $q$ is to the left of $[a,b] \cdot [c,d]$. It is generally useful to think of an
+interval $[a,b]$ such that $L_x(a)$ and $U_x(b)$ as an approximation of~$x$, see
+\cref{ex:RD-interval-arithmetic}.
+
+We now have a commutative ring\index{ring} with unit
+\index{unit!of a ring}%
+$(\RD, 0, 1, {+}, {-}, {\cdot})$. To treat
+multiplicative inverses, we must first introduce order. Define $\leq$ and $<$ as
+%
+\begin{align*}
+ (x \leq y) &\ \defeq \ \fall{q : \Q} L_x(q) \Rightarrow L_y(q), \\
+ (x < y) &\ \defeq \ \exis{q : \Q} U_x(q) \land L_y(q).
+\end{align*}
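These two definitions transcribe directly as well; a sketch in Lean 4, phrased on the pair of lower/upper predicates of a cut (names ours, with \texttt{Q} an abstract stand-in for $\Q$):

```lean
-- x ≤ y: every rational below x is below y; x < y: some rational separates them.
def cutLe {Q : Type} (x y : (Q → Prop) × (Q → Prop)) : Prop :=
  ∀ q, x.1 q → y.1 q

def cutLt {Q : Type} (x y : (Q → Prop) × (Q → Prop)) : Prop :=
  ∃ q, x.2 q ∧ y.1 q
```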
+
+\begin{lem} \label{dedekind-in-cut-as-le}
+ For all $x : \RD$ and $q : \Q$, $L_x(q) \Leftrightarrow (q < x)$ and $U_x(q)
+ \Leftrightarrow (x < q)$.
+\end{lem}
+
+\begin{proof}
+ If $L_x(q)$ then by roundedness there merely is $r > q$ such that $L_x(r)$, and since
+ $U_q(r)$ it follows that $q < x$. Conversely, if $q < x$ then there is $r : \Q$ such
+ that $U_q(r)$ and $L_x(r)$, hence $L_x(q)$ because $L_x$ is a lower set. The other half
+ of the proof is symmetric.
+\end{proof}
+
+\index{partial order}%
+\index{transitivity!of . for reals@of $<$ for reals}
+\index{transitivity!of . for reals@of $\leq$ for reals}
+\index{relation!irreflexive}
+\index{irreflexivity!of . for reals@of $<$ for reals}
+The relation $\leq$ is a partial order, and $<$ is transitive and irreflexive. Linearity
+\index{order!linear}%
+\index{linear order}%
+%
+\begin{equation*}
+ (x < y) \lor (y \leq x)
+\end{equation*}
+%
+is valid if we assume excluded middle, but without it we get weak linearity
+%
+\index{order!weakly linear}
+\index{weakly linear order}
+\begin{equation} \label{eq:RD-linear-order}
+ (x < y) \Rightarrow (x < z) \lor (z < y).
+\end{equation}
+%
+At first sight it might not be clear what~\eqref{eq:RD-linear-order} has to do with
+linear order. But if we take $x \jdeq u - \epsilon$ and $y \jdeq u + \epsilon$ for
+$\epsilon > 0$, then we get
+%
+\begin{equation*}
+ (u - \epsilon < z) \lor (z < u + \epsilon).
+\end{equation*}
+%
+This is linearity ``up to a small numerical error'', i.e., since it is unreasonable to
+expect that we can actually compute with infinite precision, we should not be surprised
+that we can decide~$<$ only up to whatever finite precision we have computed.
+
+To see that~\eqref{eq:RD-linear-order} holds, suppose $x < y$. Then there merely exists $q : \Q$ such that $U_x(q)$ and
+$L_y(q)$. By roundedness there merely exist $r, s : \Q$ such that $r < q < s$, $U_x(r)$
+and $L_y(s)$. Then, by locatedness $L_z(r)$ or $U_z(s)$. In the first case we get $x < z$
+and in the second $z < y$.
+
+Classically, multiplicative inverses exist for all numbers which are different from zero.
+However, without excluded middle, a stronger condition is required. Say that $x, y : \RD$
+are \define{apart}
+\indexdef{apartness}%
+from each other, written $x \apart y$, when $(x < y) \lor (y < x)$:
+%
+\symlabel{apart}
+\begin{equation*}
+ (x \apart y) \defeq (x < y) \lor (y < x).
+\end{equation*}
+%
+If $x \apart y$, then $\lnot (x = y)$.
+The converse is true if we assume excluded middle, but is not provable constructively.
+\index{mathematics!constructive}%
+Indeed, if $\lnot (x = y)$ implies $x\apart y$, then a little bit of excluded middle follows; see \cref{ex:reals-apart-neq-MP}.
+
+\begin{thm} \label{RD-inverse-apart-0}
+ A real is invertible if, and only if, it is apart from $0$.
+\end{thm}
+
+\begin{rmk}
+ We observe that a real is invertible if, and only if, it is merely
+ invertible. Indeed, the same is true in any ring,\index{ring} since a ring is a set, and
+ multiplicative inverses are unique if they exist. See the discussion
+ following \cref{cor:UC}.
+\end{rmk}
+
+\begin{proof}
+ Suppose $x \cdot y = 1$. Then there merely exist $a, b, c, d : \Q$ such that
+ $a < x < b$, $c < y < d$ and $0 < \min (a c, a d, b c, b d)$. From $0 < a c$ and $0 < b c$ it follows
+ that $a$, $b$, and $c$ are either all positive or all negative.
+ Hence either $0 < a < x$ or $x < b < 0$, so that $x \apart 0$.
+
+ Conversely, if $x \apart 0$ then
+ %
+ \begin{align*}
+ L_{x^{-1}}(q) &\defeq
+ \exis{r : \Q} U_x(r) \land ((0 < r \land q r < 1) \lor (r < 0 \land 1 < q r))
+ \\
+ U_{x^{-1}}(q) &\defeq
+ \exis{r : \Q} L_x(r) \land ((0 < r \land q r > 1) \lor (r < 0 \land 1 > q r))
+ \end{align*}
+ %
+ defines the desired inverse. Indeed, $L_{x^{-1}}$ and $U_{x^{-1}}$ are inhabited because
+ $x \apart 0$.
+\end{proof}
+
+\index{ordered field!archimedean}%
+\index{dense}%
+\indexsee{order-dense}{dense}%
+The archimedean principle can be stated in several ways. We find it most illuminating in the
+form which says that $\Q$ is dense in $\RD$.
+
+\begin{thm}[Archimedean principle for $\RD$] \label{RD-archimedean}
+ %
+ For all $x, y : \RD$ if $x < y$ then there merely exists $q : \Q$ such that
+ $x < q < y$.
+\end{thm}
+
+\begin{proof}
+ By definition of $<$.
+\end{proof}
+
+Before tackling completeness of Dedekind reals, let us state precisely what algebraic
+structure they possess. In the following definition we are not aiming at a minimal
+axiomatization, but rather at a useful amount of structure and properties.
+
+\begin{defn} \label{ordered-field} An \define{ordered field}
+ \indexdef{ordered field}%
+ \indexsee{field!ordered}{ordered field}%
+ is a set $F$ together with
+ constants $0$, $1$, operations $+$, $-$, $\cdot$, $\min$, $\max$, and mere relations
+ $\leq$, $<$, $\apart$ such that:
+ %
+ \begin{enumerate}
+ \item $(F, 0, 1, {+}, {-}, {\cdot})$ is a commutative ring with unit;
+ \index{unit!of a ring}%
+ \index{ring}%
+ \item $x : F$ is invertible if, and only if, $x \apart 0$;
+ \item $(F, {\leq}, {\min}, {\max})$ is a lattice;
+ \item the strict order $<$ is transitive, irreflexive,
+ \index{relation!irreflexive}
+ \index{irreflexivity!of . in a field@of $<$ in a field}%
+ and weakly linear ($x < y \Rightarrow x < z \lor z < y$);\index{transitivity!of . in a field@of $<$ in a field}
+ \index{order!weakly linear}
+ \index{weakly linear order}
+ \index{strict!order}%
+ \index{order!strict}%
+ \item apartness $\apart$ is irreflexive, symmetric and cotransitive ($x \apart y \Rightarrow x \apart z \lor y \apart z$);
+ \index{relation!irreflexive}
+ \index{irreflexivity!of apartness}%
+ \indexdef{relation!cotransitive}%
+ \index{cotransitivity of apartness}%
+ \item for all $x, y, z : F$:
+ %
+ \begin{align*}
+ x \leq y &\Leftrightarrow \lnot (y < x), &
+ x < y \leq z &\Rightarrow x < z, \\
+ x \apart y &\Leftrightarrow (x < y) \lor (y < x), &
+ x \leq y < z &\Rightarrow x < z, \\
+ x \leq y &\Leftrightarrow x + z \leq y + z, &
+ x \leq y \land 0 \leq z &\Rightarrow x z \leq y z, \\
+ x < y &\Leftrightarrow x + z < y + z, &
+ 0 < z \Rightarrow (x < y &\Leftrightarrow x z < y z), \\
+ 0 < x + y &\Rightarrow 0 < x \lor 0 < y, &
+ 0 &< 1.
+ \end{align*}
+ \end{enumerate}
+ %
+ Every such field has a canonical embedding $\Q \to F$. An ordered field is
+ \define{archimedean}
+ \indexdef{ordered field!archimedean}%
+ \indexsee{archimedean property}{ordered field, archi\-mede\-an}%
+ when for all $x, y : F$, if $x < y$ then there merely exists $q :
+ \Q$ such that $x < q < y$.
+\end{defn}
+
+\begin{thm} \label{RD-archimedean-ordered-field}
 The Dedekind reals form an archimedean ordered field.
+\end{thm}
+
+\begin{proof}
+ We omit the proof in the hope that what we have demonstrated so far makes the theorem
+ plausible.
+\end{proof}
+
+\subsection{Dedekind reals are Cauchy complete}
+\label{sec:RD-cauchy-complete}
+
+Recall that $x : \N \to \Q$ is a \emph{Cauchy sequence}\indexdef{Cauchy!sequence} when it satisfies
+%
+\begin{equation} \label{eq:cauchy-sequence}
+ \prd{\epsilon : \Qp} \sm{n : \N} \prd{m, k \geq n} |x_m - x_k| < \epsilon.
+\end{equation}
+%
+Note that we did \emph{not} truncate the inner existential because we actually want to
+compute rates of convergence---an approximation without an error estimate carries little
+useful information. By \cref{thm:ttac}, \eqref{eq:cauchy-sequence} yields a function $M
+: \Qp \to \N$, called the \emph{modulus of convergence}\indexdef{modulus!of convergence}, such that $m, k \geq M(\epsilon)$
+implies $|x_m - x_k| < \epsilon$. From this we get $|x_{M(\delta/2)} - x_{M(\epsilon/2)}|<
+\delta + \epsilon$ for all $\delta, \epsilon : \Qp$. In fact, the map $(\epsilon \mapsto
+x_{M(\epsilon/2)}) : \Qp \to \Q$ carries the same information about the limit as the
+original Cauchy condition~\eqref{eq:cauchy-sequence}. We shall work with these
+approximation functions rather than with Cauchy sequences.
+
+\begin{defn} \label{defn:cauchy-approximation}
+ A \define{Cauchy approximation}
+ \indexdef{Cauchy!approximation}%
+ is a map $x : \Qp \to \RD$ which satisfies
+ %
+ \begin{equation}
+ \label{eq:cauchy-approx}
+ \fall{\delta, \epsilon :\Qp} |x_\delta - x_\epsilon| < \delta + \epsilon.
+ \end{equation}
+ %
+ The \define{limit}
+ \index{limit!of a Cauchy approximation}%
+ of a Cauchy approximation $x : \Qp \to \RD$ is a number $\ell : \RD$ such
+ that
+ %
+ \begin{equation*}
+ \fall{\epsilon, \theta : \Qp} |x_\epsilon - \ell| < \epsilon + \theta.
+ \end{equation*}
+\end{defn}
+
+\begin{thm} \label{RD-cauchy-complete}
+ Every Cauchy approximation in $\RD$ has a limit.
+\end{thm}
+
+\begin{proof}
+ Note that we are showing existence, not mere existence, of the limit.
+ Given a Cauchy approximation $x : \Qp \to \RD$, define
+ %
+ \begin{align*}
+ L_y(q) &\defeq \exis{\epsilon, \theta : \Qp} L_{x_\epsilon}(q + \epsilon + \theta),\\
+ U_y(q) &\defeq \exis{\epsilon, \theta : \Qp} U_{x_\epsilon}(q - \epsilon - \theta).
+ \end{align*}
+ %
+ It is clear that $L_y$ and $U_y$ are inhabited, rounded, and disjoint. To establish
+ locatedness, consider any $q, r : \Q$ such that $q < r$. There is $\epsilon : \Qp$ such
 that $5 \epsilon < r - q$. Since $q + 2 \epsilon < r - 2 \epsilon$, locatedness of
 $x_\epsilon$ gives merely $L_{x_\epsilon}(q + 2 \epsilon)$ or $U_{x_\epsilon}(r - 2 \epsilon)$. In the first case
+ we have $L_y(q)$ and in the second $U_y(r)$.
+
+ To show that $y$ is the limit of $x$, consider any $\epsilon, \theta : \Qp$. Because
+ $\Q$ is dense in $\RD$ there merely exist $q, r : \Q$ such that
+ %
+ \begin{narrowmultline*}
+ x_\epsilon - \epsilon - \theta/2 < q < x_\epsilon - \epsilon - \theta/4
+ < x_\epsilon < \\
+ x_\epsilon + \epsilon + \theta/4 < r < x_\epsilon + \epsilon + \theta/2,
+ \end{narrowmultline*}
+ %
+ and thus $q < y < r$. Now either $y < x_\epsilon + \theta/2$ or $x_\epsilon - \theta/2 < y$.
+ In the first case we have
+ %
+ \begin{equation*}
+ x_\epsilon - \epsilon - \theta/2 < q < y < x_\epsilon + \theta/2,
+ \end{equation*}
+ %
+ and in the second
+ %
+ \begin{equation*}
+ x_\epsilon - \theta/2 < y < r < x_\epsilon + \epsilon + \theta/2.
+ \end{equation*}
+ %
+ In either case it follows that $|y - x_\epsilon| < \epsilon + \theta$.
+\end{proof}
+
For the sake of completeness we record the classic formulation as well.
+
+\begin{cor}
+ Suppose $x : \N \to \RD$ satisfies the Cauchy condition~\eqref{eq:cauchy-sequence}. Then
+ there exists $y : \RD$ such that
+ %
+ \begin{equation*}
+ \prd{\epsilon : \Qp} \sm{n : \N} \prd{m \geq n} |x_m - y| < \epsilon.
+ \end{equation*}
+\end{cor}
+
+\begin{proof}
+ By \cref{thm:ttac} there is $M : \Qp \to \N$ such that $\bar{x}(\epsilon) \defeq
+ x_{M(\epsilon/2)}$ is a Cauchy approximation. Let $y$ be its limit, which exists by
+ \cref{RD-cauchy-complete}. Given any $\epsilon : \Qp$, let $n \defeq M(\epsilon/4)$
+ and observe that, for any $m \geq n$,
+ %
+ \begin{narrowmultline*}
+ |x_m - y| \leq |x_m - x_n| + |x_n - y| =
+ |x_m - x_n| + |\bar{x}(\epsilon/2) - y| < \narrowbreak
+ \epsilon/4 + \epsilon/2 + \epsilon/4 = \epsilon.\qedhere
+ \end{narrowmultline*}
+\end{proof}
+
+\subsection{Dedekind reals are Dedekind complete}
+\label{sec:RD-dedekind-complete}
+
+We obtained $\RD$ as the type of Dedekind cuts on $\Q$. But we could have instead started
+with any archimedean ordered field $F$ and constructed Dedekind cuts\index{cut!Dedekind} on $F$. These would
+again form an archimedean ordered field $\bar{F}$, the \define{Dedekind completion of $F$},%
+\index{completion!Dedekind}%
+\indexsee{Dedekind!completion}{completion, Dedekind}
+with $F$ contained as a subfield. What happens if we apply this construction to
$\RD$? Do we get even more real numbers? The answer is negative. In fact, we shall prove a
+stronger result: $\RD$ is final.
+
+Say that an ordered field~$F$ is \define{admissible for $\Omega$}
+\indexsee{admissible!ordered field}{ordered field, admissible}%
+\indexdef{ordered field!admissible}%
+when the strict order
+$<$ on~$F$ is a map ${<} : F \to F \to \Omega$.
+
+\begin{thm} \label{RD-final-field}
+ Every archimedean ordered field which is admissible for $\Omega$ is a subfield of~$\RD$.
+\end{thm}
+
+\begin{proof}
+ Let $F$ be an archimedean ordered field. For every $x : F$ define $L_x, U_x : \Q \to
+ \Omega$ by
+ %
+ \begin{equation*}
+ L_x(q) \defeq (q < x)
+ \qquad\text{and}\qquad
+ U_x(q) \defeq (x < q).
+ \end{equation*}
+ %
+ (We have just used the assumption that $F$ is admissible for $\Omega$.)
+ Then $(L_x, U_x)$ is a Dedekind cut.\index{cut!Dedekind} Indeed, the cuts are inhabited and rounded because
+ $F$ is archimedean and $<$ is transitive, disjoint because $<$ is irreflexive, and
+ located because $<$ is a weak linear order. Let $e : F \to \RD$ be the map $e(x) \defeq (L_x,
+ U_x)$.
+
+ We claim that $e$ is a field embedding which preserves and reflects the order. First of
+ all, notice that $e(q) = q$ for a rational number $q$. Next we have the equivalences,
+ for all $x, y : F$,
+ %
+ \begin{narrowmultline*}
+ x < y \Leftrightarrow
+ (\exis{q : \Q} x < q < y) \Leftrightarrow \narrowbreak
+ (\exis{q : \Q} U_x(q) \land L_y(q)) \Leftrightarrow
+ e(x) < e(y),
+ \end{narrowmultline*}
+ %
+ so $e$ indeed preserves and reflects the order. That $e(x + y) = e(x) + e(y)$ holds
+ because, for all $q : \Q$,
+ %
+ \begin{equation*}
+ q < x + y \Leftrightarrow
+ \exis{r, s : \Q} r < x \land s < y \land q = r + s.
+ \end{equation*}
+ %
+ The implication from right to left is obvious. For the other direction, if $q < x +
+ y$ then there merely exists $r : \Q$ such that $q - y < r < x$, and by taking $s \defeq
+ q - r$ we get the desired $r$ and $s$. We leave preservation of multiplication by $e$ as
+ an exercise.
+\end{proof}
+
+To establish that the Dedekind cuts on $\RD$ do not give us anything new, we need just one
+more lemma.
+
+\begin{lem} \label{lem:cuts-preserve-admissibility}
+ If $F$ is admissible for $\Omega$ then so is its Dedekind completion.
+ \index{completion!Dedekind}%
+\end{lem}
+
+\begin{proof}
+ Let $\bar{F}$ be the Dedekind completion of $F$. The strict order on $\bar{F}$ is
+ defined by
+ %
+ \begin{equation*}
+ ((L,U) < (L',U')) \defeq \exis{q : \Q} U(q) \land L'(q).
+ \end{equation*}
+ %
+ Since $U(q)$ and $L'(q)$ are elements of $\Omega$, the lemma holds as long as $\Omega$
+ is closed under conjunctions and countable existentials, which we assumed from the outset.
+\end{proof}
+
+
+\begin{cor} \label{RD-dedekind-complete}
+ %
+ \indexdef{complete!ordered field, Dedekind}%
+ \indexdef{Dedekind!completeness}%
+ The Dedekind reals are Dedekind complete: for every real-valued Dedekind cut $(L, U)$
+ there is a unique $x : \RD$ such that $L(y) = (y < x)$ and $U(y) = (x < y)$ for all $y :
+ \RD$.
+\end{cor}
+
+\begin{proof}
+ By \cref{lem:cuts-preserve-admissibility} the Dedekind completion $\barRD$ of $\RD$
+ is admissible for $\Omega$, so by \cref{RD-final-field} we have an embedding $\barRD
+ \to \RD$, as well as an embedding $\RD \to \barRD$. But these embeddings must be
+ isomorphisms, because their compositions are order-preserving field homomorphisms\index{homomorphism!field} which
+ fix the dense subfield~$\Q$, which means that they are the identity. The corollary now
+ follows immediately from the fact that $\barRD \to \RD$ is an isomorphism.
+\end{proof}
+
+\index{real numbers!Dedekind|)}%
+
+\section{Cauchy reals}
+\label{sec:cauchy-reals}
+
+\index{real numbers!Cauchy|(}%
+\index{completion!Cauchy|(}%
+\indexsee{Cauchy!completion}{completion, Cauchy}%
+The Cauchy reals are, by intent, the completion of \Q under limits of Cauchy sequences.\index{Cauchy!sequence}
+In the classical construction of the Cauchy reals, we consider the set $\mathcal{C}$ of all Cauchy sequences in \Q and then form a suitable quotient $\mathcal{C}/{\approx}$.
+Then, to show that $\mathcal{C}/{\approx}$ is Cauchy complete, we consider a Cauchy sequence $x : \N \to \mathcal{C}/{\approx}$, lift it to a sequence of sequences $\bar{x} : \N \to \mathcal{C}$, and construct the limit of $x$ using $\bar{x}$. However, the lifting of~$x$ to $\bar{x}$ uses
+the axiom of countable choice (the instance of~\eqref{eq:ac} where $X=\N$) or the law of excluded middle, which we may wish to avoid.
+\indexdef{axiom!of choice!countable}%
+Every construction of reals whose last step is a quotient suffers from this deficiency.
+There are three common ways out of the conundrum in constructive mathematics:
+\index{mathematics!constructive}%
+%
+\index{bargaining}%
+\begin{enumerate}
+\item Pretend that the reals are a setoid $(\mathcal{C}, {\approx})$, i.e., the type of
+ Cauchy sequences $\mathcal{C}$ with a coincidence\index{coincidence, of Cauchy approximations} relation attached to it by
+ administrative decree. A sequence of reals then simply \emph{is} a sequence of Cauchy
+ sequences representing them.
+\item Give in to temptation and accept the axiom of countable choice. After all, the axiom
+ is valid in most models of constructive mathematics based on a computational viewpoint,
+ such as realizability models.
+\item Declare the Cauchy reals unworthy and construct the Dedekind reals instead.
+ Such a verdict is perfectly valid in certain contexts, such as in sheaf-theoretic models of constructive mathematics.
+ However, as we saw in \cref{sec:dedekind-reals}, the constructive Dedekind reals have their own problems.
+\end{enumerate}
+
+Using higher inductive types, however, there is a fourth solution, which we believe to be preferable to any of the above, and interesting even to a classical mathematician.
+The idea is that the Cauchy real numbers should be the \emph{free complete metric space}\index{free!complete metric space} generated by~\Q.
+In general, the construction of a free gadget of any sort requires applying the gadget operations repeatedly many times to the generators.
+For instance, the elements of the free group on a set $X$ are not just binary products and inverses of elements of $X$, but words obtained by iterating the product and inverse constructions.
+Thus, we might naturally expect the same to be true for Cauchy completion, with the relevant ``operation'' being ``take the limit of a Cauchy sequence''.
+(In this case, the iteration would have to take place transfinitely, since even after infinitely many steps there will be new Cauchy sequences to take the limit of.)
+
+The argument referred to above shows that if excluded middle or countable choice hold, then Cauchy completion is very special: when building the completion of a space, it suffices to stop applying the operation after \emph{one step}.
+This may be regarded as analogous to the fact that free monoids and free groups can be given explicit descriptions in terms of (reduced) words.
+However, we saw in \cref{sec:free-algebras} that higher inductive types allow us to construct free gadgets \emph{directly}, whether or not there is also an explicit description available.
+In this section we show that the same is true for the Cauchy reals (a similar technique would construct the Cauchy completion of any metric space; see \cref{ex:metric-completion}).
+Specifically, higher inductive types allow us to \emph{simultaneously} add limits of Cauchy sequences and quotient by the coincidence relation, so that we can avoid the problem of lifting a sequence of reals to a sequence of representatives.
+\index{completion!Cauchy|)}%
+
+
+\subsection{Construction of Cauchy reals}
+\label{sec:constr-cauchy-reals}
+
+The construction of the Cauchy reals $\RC$ as a higher inductive type is a bit more subtle than that of the free algebraic structures considered in \cref{sec:free-algebras}.
+We intend to include a ``take the limit'' constructor whose input is a Cauchy sequence of reals, but the notion of ``Cauchy sequence of reals'' depends on having some way to measure the ``distance'' between real numbers.
+In general, of course, the distance between two real numbers will be another real number, leading to a potentially problematic circularity.
+
+However, what we actually need for the notion of Cauchy sequence of reals is not the general notion of ``distance'', but a way to say that ``the distance\index{distance} between two real numbers is less than $\epsilon$'' for any $\epsilon:\Qp$.
+This can be represented by a family of binary relations, which we will denote $\mathord{\close\epsilon} : \RC\to\RC\to \prop$.
+The intended meaning of $x \close\epsilon y$ is $|x - y| < \epsilon$, but since we do not have notions of subtraction, absolute value, or inequality available yet (we are only just defining $\RC$, after all), we will have to define these relations $\close\epsilon$ at the same time as we define $\RC$ itself.
+And since $\close\epsilon$ is a type family indexed by two copies of $\RC$, we cannot do this with an ordinary mutual (higher) inductive definition; instead we have to use a \emph{higher inductive-inductive definition}.
+\index{inductive-inductive type!higher}
+
+Recall from \cref{sec:generalizations} that the ordinary notion of inductive-inductive definition allows us to define a type and a type family indexed by it by simultaneous induction.
+Of course, the ``higher'' version of this allows both the type and the family to have path constructors as well as point constructors.
+We will not attempt to formulate any general theory of higher inductive-inductive definitions, but hopefully the description we will give of $\RC$ and $\close\epsilon$ will make the idea transparent.
+
+\begin{rmk}
+ We might also consider a \emph{higher inductive-recursive definition}, in which $\close\epsilon$ is defined using the \emph{recursion} principle of $\RC$, simultaneously with the \emph{inductive} definition of $\RC$.
+ We choose the inductive-inductive route instead for two reasons.
+ Firstly, higher inductive-re\-cur\-sive definitions seem to be more difficult to justify in homotopical semantics.
+ Secondly, and more importantly, the inductive-inductive definition yields a more powerful induction principle, which we will need in order to develop even the basic theory of Cauchy reals.
+\end{rmk}
+
+Finally, as we did for the discussion of Cauchy completeness of the Dedekind reals in \cref{sec:RD-cauchy-complete}, we will work with \emph{Cauchy approximations} (\cref{defn:cauchy-approximation}) instead of Cauchy sequences.
+Of course, our Cauchy approximations will now consist of Cauchy reals, rather than Dedekind reals or rational numbers.
+
+\begin{defn}\label{defn:cauchy-reals}
+ Let $\RC$ and the relation $\closesym:\RC \to \RC \to \Qp \to \type$ be the following higher inductive-inductive type family.
+ The type $\RC$ of \define{Cauchy reals}
+ \indexdef{real numbers!Cauchy}%
+ \indexsee{Cauchy!real numbers}{real numbers, Cau\-chy}%
+ is generated by the following constructors:
+ \begin{itemize}
+ \item \emph{rational points:}
+ for any $q : \Q$ there is a real $\rcrat(q)$.
+ \index{rational numbers!as Cauchy real numbers}%
+ \item \emph{limit points}:
+ for any $x : \Qp \to \RC$ such that
+ %
+ \begin{equation}
+ \label{eq:RC-cauchy}
+ \fall{\delta, \epsilon : \Qp} x_\delta \close{\delta + \epsilon} x_\epsilon
+ \end{equation}
+ %
+ there is a point $\rclim(x) : \RC$. We call $x$ a \define{Cauchy approximation}.
+ \indexdef{Cauchy!approximation}%
+ \index{limit!of a Cauchy approximation}%
+ %
+ \item \emph{paths:}
+ for $u, v : \RC$ such that
+ %
+ \begin{equation}
+ \label{eq:RC-path}
+ \fall{\epsilon : \Qp} u \close\epsilon v
+ \end{equation}
+ %
+ there is a path $\rceq(u, v) : \id[\RC]{u}{v}$.
+ \end{itemize}
+ Simultaneously, the type family $\closesym:\RC\to\RC\to\Qp \to\type$ is generated by the following constructors.
+ Here $q$ and $r$ denote rational numbers; $\delta$, $\epsilon$, and $\eta$ denote positive rationals; $u$ and $v$ denote Cauchy reals; and $x$ and $y$ denote Cauchy approximations:
+ \begin{itemize}
+ \item for any $q,r,\epsilon$, if $-\epsilon < q - r < \epsilon$, then $\rcrat(q) \close\epsilon \rcrat(r)$,
+ \item for any $q,y,\epsilon,\delta$, if $\rcrat(q) \close{\epsilon - \delta} y_\delta$, then $\rcrat(q) \close{\epsilon} \rclim(y)$,
+ \item for any $x,r,\epsilon,\delta$, if $x_\delta \close{\epsilon - \delta} \rcrat(r)$, then $\rclim(x) \close\epsilon \rcrat(r)$,
+ \item for any $x,y,\epsilon,\delta,\eta$, if $x_\delta \close{\epsilon - \delta - \eta} y_\eta$, then $\rclim(x) \close\epsilon \rclim(y)$,
+ \item for any $u,v,\epsilon$, if $\xi,\zeta : u \close{\epsilon} v$, then $\xi=\zeta$ (propositional truncation).
+ \end{itemize}
+\end{defn}
+
+\mentalpause
+
+The first constructor of $\RC$ says that any rational number can be regarded as a real number.
+The second says that from any Cauchy approximation to a real number, we can obtain a new real number called its ``limit''.
+And the third expresses the idea that if two Cauchy approximations coincide, then their limits are equal.
+
+The first four constructors of $\closesym$ specify when two rational points are close, when a rational point is close to a limit (on either side), and when two limits are close.
+In the case of two rational numbers, this is just the usual notion of $\epsilon$-closeness for rational numbers, whereas the other cases can be derived by noting that each approximant $x_\delta$ is supposed to be within $\delta$ of the limit $\rclim(x)$.
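+
+For instance, in the second constructor the premise $\rcrat(q) \close{\epsilon - \delta} y_\delta$ yields the conclusion $\rcrat(q) \close{\epsilon} \rclim(y)$ because, informally,
+%
+\begin{equation*}
+ |q - \rclim(y)| \leq |q - y_\delta| + |y_\delta - \rclim(y)| < (\epsilon - \delta) + \delta = \epsilon.
+\end{equation*}
+%
+Of course, this computation is only heuristic at this stage, since subtraction and absolute value are not yet defined on $\RC$; the constructor turns its conclusion into a definition instead.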
+
+We remind ourselves of proof-relevance: a real number obtained from $\rclim$ is represented not
+just by a Cauchy approximation $x$, but also a proof $p$ of~\eqref{eq:RC-cauchy}, so we
+should technically have written $\rclim(x,p)$ instead of just $\rclim(x)$.
+A similar observation also applies to $\rceq$ and~\eqref{eq:RC-path}, but we shall write just
+$\rceq : u = v$ instead of $\rceq(u, v, p) : u = v$. These abuses of notation are
+mitigated by the fact that we are omitting mere propositions and information that is
+readily guessed.
+Likewise, the last constructor of $\mathord{\close\epsilon}$ justifies our leaving the other four nameless.
+
+We are immediately able to populate $\RC$ with many real numbers. For suppose $x : \N \to
+\Q$ is a traditional Cauchy sequence\index{Cauchy!sequence} of rational numbers, and let $M : \Qp \to \N$ be its
+modulus of convergence. Then $\rcrat \circ x \circ M : \Qp \to \RC$ is a Cauchy
+approximation, using the first constructor of $\closesym$ to produce the necessary witness.
+Thus, $\rclim(\rcrat \circ x \circ M)$ is a real number. Various famous
+real numbers such as $\sqrt{2}$, $\pi$, $e$, \dots{} are all limits of such Cauchy sequences of
+rationals.
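+
+For example, consider $x : \N \to \Q$ defined by $x_0 \defeq 1$ and $x_{n+1} \defeq x_n/2 + 1/x_n$, the Newton iteration for $\sqrt{2}$. A routine estimate (the error at least halves at each step) shows that $|x_m - x_n| < 2^{-n}$ whenever $m \geq n$, so any $M : \Qp \to \N$ with $2^{-M(\epsilon)} \leq \epsilon$ serves as a modulus of convergence, and $\rclim(\rcrat \circ x \circ M)$ is the Cauchy real deserving the name $\sqrt{2}$.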
+
+\subsection{Induction and recursion on Cauchy reals}
+\label{sec:induct-recurs-cauchy}
+
+In order to do anything useful with $\RC$, of course, we need to give its induction principle.
+As is the case whenever we inductively define two or more objects at once, the basic induction principle for $\RC$ and $\closesym$ requires a simultaneous induction over both at once.
+Thus, we should expect it to say that assuming two type families over $\RC$ and $\closesym$, respectively, together with data corresponding to each constructor, there exist sections of both of these families.
+However, since $\closesym$ is indexed on two copies of $\RC$, the precise dependencies of these families are a bit subtle.
+The induction principle will apply to any pair of type families:
+\begin{align*}
+A&:\RC\to\type\\
+B&:\prd{x,y:\RC} A(x) \to A(y) \to \prd{\epsilon:\Qp} (x\close\epsilon y) \to \type.
+\end{align*}
+The type of $A$ is obvious, but the type of $B$ requires a little thought.
+Since $B$ must depend on $\closesym$, but $\closesym$ in turn depends on two copies of $\RC$ and one copy of $\Qp$, it is fairly obvious that $B$ must also depend on the variables $x,y:\RC$ and $\epsilon:\Qp$ as well as an element of $(x\close\epsilon y)$.
+What is slightly less obvious is that $B$ must also depend on $A(x)$ and $A(y)$.
+
+This may be more evident if we consider the non-dependent case (the recursion principle), where $A$ is a simple type (rather than a type family).
+In this case we would expect $B$ not to depend on $x,y:\RC$ or $x\close\epsilon y$.
+But the recursion principle (along with its associated uniqueness principle) is supposed to say that $\RC$ with $\close\epsilon$ is an ``initial object'' in some category, so in this case the dependency structure of $A$ and $B$ should mirror that of $\RC$ and $\close\epsilon$: that is, we should have $B:A\to A\to \Qp \to \type$.
+Combining this observation with the fact that, in the dependent case, $B$ must also depend on $x,y:\RC$ and $x\close\epsilon y$, leads inevitably to the type given above for $B$.
+
+\symlabel{RC-recursion}
+It is helpful to think of $B$ as an $\epsilon$-indexed family of relations between the types $A(x)$ and $A(y)$.
+With this in mind, we may write $B(x,y,a,b,\epsilon,\xi)$ as $(x,a) \bsim_\epsilon^\xi (y,b)$.
+Since $\xi:x\close\epsilon y$ is unique when it exists, we generally omit it from the notation and write $(x,a) \bsim_\epsilon (y,b)$; this is harmless as long as we keep in mind that this relation is only defined when $x\close\epsilon y$.
+We may also sometimes simplify further and write $a\bsim_\epsilon b$, with $x$ and $y$ inferred from the types of $a$ and $b$, but sometimes it will be necessary to include them for clarity.
+
+\index{induction principle!for Cauchy reals}%
+Now, given a type family $A:\RC\to\type$ and a family of relations $\bsim$ as above, the hypotheses of the induction principle consist of the following data, one for each constructor of $\RC$ or $\closesym$:
+\begin{itemize}
+\item For any $q : \Q$, an element $f_q:A(\rcrat(q))$.
+\item For any Cauchy approximation $x$, and any $a:\prd{\epsilon:\Qp} A(x_\epsilon)$ such that
+ \begin{equation}
+ \fall{\delta, \epsilon : \Qp}
+ (x_\delta,a_\delta) \bsim_{\delta+\epsilon} (x_\epsilon,a_\epsilon),
+ \label{eq:depCauchyappx}
+ \end{equation}
+ an element $f_{x,a}:A(\rclim(x))$.
+ We call such $a$ a \define{dependent Cauchy approximation}
+ \indexdef{Cauchy!approximation!dependent}%
+ \indexsee{approximation, Cauchy}{Cauchy approximation}%
+ \indexdef{dependent!Cauchy approximation}%
+ over $x$.
+\item For $u, v : \RC$ such that $h:\fall{\epsilon : \Qp} u \close\epsilon v$, and all $a:A(u)$ and $b:A(v)$ such that
+ $\fall{\epsilon:\Qp} (u,a) \bsim_\epsilon (v,b)$,
+ a dependent path $\dpath{A}{\rceq(u,v)}{a}{b}$.
+\item For $q,r:\Q$ and $\epsilon:\Qp$, if $-\epsilon < q - r < \epsilon$, we have
+ \narrowequation{(\rcrat(q),f_q) \bsim_\epsilon (\rcrat(r),f_r).}
+\item For $q:\Q$ and $\delta,\epsilon:\Qp$ and $y$ a Cauchy approximation, and $b$ a dependent Cauchy approximation over $y$, if $\rcrat(q) \close{\epsilon - \delta} y_\delta$, then
+ \[(\rcrat(q),f_q) \bsim_{\epsilon-\delta} (y_\delta,b_\delta)
+ \;\Rightarrow\;
+ (\rcrat(q),f_q) \bsim_\epsilon (\rclim(y),f_{y,b}).\]
+\item Similarly, for $r:\Q$ and $\delta,\epsilon:\Qp$ and $x$ a Cauchy approximation, and $a$ a dependent Cauchy approximation over $x$, if $x_\delta \close{\epsilon - \delta} \rcrat(r)$, then
+ \[(x_\delta,a_\delta) \bsim_{\epsilon-\delta} (\rcrat(r),f_r)
+ \;\Rightarrow\;
+ (\rclim(x),f_{x,a}) \bsim_\epsilon (\rcrat(r),f_r).
+ \]
+\item For $\epsilon,\delta,\eta:\Qp$ and $x,y$ Cauchy approximations, and $a$ and $b$ dependent Cauchy approximations over $x$ and $y$ respectively, if we have $x_\delta \close{\epsilon - \delta - \eta} y_\eta$, then
+ \[ (x_\delta,a_\delta) \bsim_{\epsilon - \delta - \eta} (y_\eta,b_\eta)
+ \;\Rightarrow\;
+ (\rclim(x),f_{x,a}) \bsim_\epsilon (\rclim(y),f_{y,b}).\]
+\item For $\epsilon:\Qp$ and $x,y:\RC$ and $\xi,\zeta:x\close{\epsilon} y$, and $a:A(x)$ and $b:A(y)$, any two elements of $(x,a) \bsim_\epsilon^\xi (y,b)$ and $(x,a) \bsim_\epsilon^\zeta (y,b)$ are dependently equal over $\xi=\zeta$.
+ Note that as usual, this is equivalent to asking that $\bsim$ takes values in mere propositions.
+\end{itemize}
+Under these hypotheses, we deduce functions
+\begin{align*}
+ f&:\prd{x:\RC} A(x)\\
+ g&:\prd{x,y:\RC}{\epsilon:\Qp}{\xi:x\close{\epsilon} y}
+ (x,f(x)) \bsim_\epsilon^\xi (y,f(y))
+\end{align*}
+which compute as expected:
+\begin{align}
+ f(\rcrat(q)) &\defeq f_q, \label{eq:rcsimind1}\\
+ f(\rclim(x)) &\defeq f_{x,(f,g)[x]}. \label{eq:rcsimind2}
+\end{align}
+Here $(f,g)[x]$ denotes the result of applying $f$ and $g$ to a Cauchy approximation $x$ to obtain a dependent Cauchy approximation over $x$.
+That is, we define $(f,g)[x]_\epsilon \defeq f(x_\epsilon) : A(x_\epsilon)$, and then for any $\epsilon,\delta:\Qp$ we have $g(x_\epsilon,x_\delta,\epsilon+\delta,\xi)$ to witness the fact that $(f,g)[x]$ is a dependent Cauchy approximation, where $\xi: x_\epsilon \close{\epsilon+\delta} x_\delta$ arises from the assumption that $x$ is a Cauchy approximation.
+
+We will never use this notation again, so don't worry about remembering it.
+Generally we use the pattern-matching convention, where $f$ is defined by equations such as~\eqref{eq:rcsimind1} and~\eqref{eq:rcsimind2} in which the right-hand side of~\eqref{eq:rcsimind2} may involve the symbols $f(x_\epsilon)$ and an assumption that they form a dependent Cauchy approximation.
+
+However, this induction principle is admittedly still quite a mouthful.
+To help make sense of it, we observe that it contains as special cases two separate induction principles for~$\RC$ and for~$\closesym$.
+Firstly, suppose given only a type family $A:\RC\to\type$, and define $\bsim$ to be constant at \unit.
+Then much of the required data becomes trivial, and we are left with:
+\begin{itemize}
+\item for any $q : \Q$, an element $f_q:A(\rcrat(q))$,
+\item for any Cauchy approximation $x$, and any $a:\prd{\epsilon:\Qp} A(x_\epsilon)$, an element $f_{x,a}:A(\rclim(x))$,
+\item for $u, v : \RC$ and $h:\fall{\epsilon : \Qp} u \close\epsilon v$, and $a:A(u)$ and $b:A(v)$, we have $\dpath{A}{\rceq(u,v)}{a}{b}$.
+\end{itemize}
+Given these data, the induction principle yields a function $f:\prd{x:\RC} A(x)$ such that
+\begin{align*}
+ f(\rcrat(q)) &\defeq f_q,\\
+ f(\rclim(x)) &\defeq f_{x,f(x)}.
+\end{align*}
+We call this principle \define{$\RC$-induction}; it says essentially that if we take $\close\epsilon$ as given, then $\RC$ is inductively generated by its constructors.
+
+Note that, if $A$ is a mere property, then the third hypothesis in $\RC$-induction is automatic (we will see in a moment that these are in fact equivalent statements).
+Thus, we may prove mere properties of real numbers by simply proving them for rationals and for limits of Cauchy approximations.
+Here is an example.
+
+\begin{lem} \label{lem:close-reflexive}
+ For any $u:\RC$ and $\epsilon:\Qp$, we have $u\close\epsilon u$.
+\end{lem}
+\begin{proof}
+ Define $A(u) \defeq \fall{\epsilon:\Qp} (u\close\epsilon u)$.
+ Since this is a mere proposition (by the last constructor of $\closesym$), by $\RC$-induction, it suffices to prove it when $u$ is $\rcrat(q)$ and when $u$ is $\rclim(x)$.
+ In the first case, we obviously have $|q-q|<\epsilon$ for any $\epsilon$, hence $\rcrat(q) \close\epsilon \rcrat(q)$ by the first constructor of $\closesym$.
+ %
+ And in the second case, we may assume inductively that $x_\delta \close\epsilon x_\delta$ for all $\delta,\epsilon:\Qp$.
+ Then in particular, we have $x_{\epsilon/3} \close{\epsilon/3} x_{\epsilon/3}$, whence $\rclim(x) \close{\epsilon} \rclim(x)$ by the fourth constructor of $\closesym$.
+\end{proof}
+
+From \cref{lem:close-reflexive}, we infer that a direct application of $\RC$-induction only has a chance to succeed if the family $A:\RC\to\type$ is a mere property.
+To see this, fix $u:\RC$.
+Taking $v$ to be $u$, the third hypothesis of $\RC$-induction tells us that, for any $a : A(u)$, we have $\dpath{A}{\rceq(u,u)}{a}{a}$.
+Given a point $b : A(u)$ in addition, we also get $\dpath{A}{\rceq(u,u)}{a}{b}$.
+From the definition of the dependent path type, we conclude that the combination of these two paths implies $a = b$, i.e.\ all points in $A(u)$ are equal.
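+
+Explicitly, unwinding dependent paths into transports, the two hypotheses provide paths
+%
+\begin{equation*}
+ \transfib{A}{\rceq(u,u)}{a} = a
+ \qquad\text{and}\qquad
+ \transfib{A}{\rceq(u,u)}{a} = b,
+\end{equation*}
+%
+and concatenating the inverse of the first with the second gives $a = b$.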
+
+\begin{thm}\label{thm:Cauchy-reals-are-a-set}
+ $\RC$ is a set.
+\end{thm}
+\begin{proof}
+ We have just shown that the mere relation
+ \narrowequation{P(u,v) \defeq \fall{\epsilon:\Qp} (u\close\epsilon v)}
+ is reflexive.
+ Since it implies identity, by the path constructor of $\RC$, the result follows from \cref{thm:h-set-refrel-in-paths-sets}.
+\end{proof}
+
+We can also show that although $\RC$ may not be a quotient of the set of Cauchy sequences of \emph{rationals}, it is nevertheless a quotient of the set of Cauchy sequences of \emph{reals}.
+(Of course, this is not a valid \emph{definition} of $\RC$, but it is a useful property.)
+We define the type of Cauchy approximations to be
+%
+\symlabel{cauchy-approximations}%
+\index{Cauchy!approximation!type of}%
+\begin{equation*}
+ \CAP \defeq
+ \setof{ x : \Qp \to \RC |
+ \fall{\epsilon, \delta : \Qp} x_\delta \close{\delta + \epsilon} x_\epsilon
+ }.
+\end{equation*}
+The second constructor of $\RC$ gives a function $\rclim:\CAP\to\RC$.
+
+\begin{lem} \label{RC-lim-onto}
+ Every real merely is a limit point: $\fall{u : \RC} \exis{x : \CAP} u = \rclim(x)$.
+ In other words, $\rclim:\CAP\to\RC$ is surjective.
+\end{lem}
+\begin{proof}
+ By $\RC$-induction, we may divide into cases on $u$.
+ Of course, if $u$ is a limit $\rclim(x)$, the statement is trivial.
+ So suppose $u$ is a rational point $\rcrat(q)$; we claim $u$ is equal to $\rclim(\lam{\epsilon} \rcrat(q))$.
+ By the path constructor of $\RC$, it suffices to show $\rcrat(q) \close\epsilon \rclim(\lam{\epsilon} \rcrat(q))$ for all $\epsilon:\Qp$.
+ And by the second constructor of $\closesym$, for this it suffices to find $\delta:\Qp$ such that $\rcrat(q)\close{\epsilon-\delta} \rcrat(q)$.
+ But by the first constructor of $\closesym$, we may take any $\delta:\Qp$ with $\delta<\epsilon$.
+\end{proof}
+
+%
+
+\begin{lem} \label{RC-lim-factor}
+ If $A$ is a set and $f : \CAP \to A$ respects coincidence\index{coincidence!of Cauchy approximations} of Cauchy approximations, in the sense that
+ %
+ \begin{equation*}
+ \fall{x, y : \CAP} \rclim(x) = \rclim(y) \Rightarrow f(x) = f(y),
+ \end{equation*}
+ %
+ then $f$ factors uniquely through $\rclim : \CAP \to \RC$.
+\end{lem}
+\begin{proof}
+ Since $\rclim$ is surjective, by \cref{lem:images_are_coequalizers}, $\RC$ is the quotient of $\CAP$ by the kernel pair\index{kernel!pair} of $\rclim$.
+ But this is exactly the statement of the lemma.
+\end{proof}
+
+For the second special case of the induction principle, suppose instead that we take $A$ to be constant at $\unit$.
+In this case, $\bsim$ is simply an $\epsilon$-indexed family of relations on $\epsilon$-close pairs of real numbers, so we may write $u\bsim_\epsilon v$ instead of $(u,\ttt)\bsim_\epsilon (v,\ttt)$.
+Then the required data reduces to the following, where $q, r$ denote rational numbers, $\epsilon, \delta, \eta$ positive rational numbers, and $x, y$ Cauchy approximations:
+\begin{itemize}
+\item if $-\epsilon < q - r < \epsilon$, then
+ $\rcrat(q) \bsim_\epsilon \rcrat(r)$,
+\item if $\rcrat(q) \close{\epsilon - \delta} y_\delta$ and
+ $\rcrat(q)\bsim_{\epsilon-\delta} y_\delta$,
+ then $\rcrat(q) \bsim_\epsilon \rclim(y)$,
+\item if $x_\delta \close{\epsilon - \delta} \rcrat(r)$ and
+ $x_\delta \bsim_{\epsilon-\delta} \rcrat(r)$,
+ then $\rclim(x) \bsim_\epsilon \rcrat(r)$,
+\item if $x_\delta \close{\epsilon - \delta - \eta} y_\eta$ and
+ $x_\delta\bsim_{\epsilon - \delta - \eta} y_\eta$,
+ then $\rclim(x) \bsim_\epsilon \rclim(y)$.
+\end{itemize}
+The resulting conclusion is $\fall{u,v:\RC}{\epsilon:\Qp} (u\close\epsilon v) \to (u \bsim_\epsilon v)$.
+We call this principle \define{$\closesym$-induction}; it says essentially that if we take $\RC$ as given, then $\close\epsilon$ is inductively generated (as a family of types) by \emph{its} constructors.
+For example, we can use this to show that $\closesym$ is symmetric.
+
+\begin{lem}\label{thm:RCsim-symmetric}
+ For any $u,v:\RC$ and $\epsilon:\Qp$, we have $(u\close\epsilon v) = (v\close\epsilon u)$.
+\end{lem}
+\begin{proof}
+ Since both are mere propositions, by symmetry it suffices to show one implication.
+ Thus, let $(u\bsim_\epsilon v) \defeq (v\close\epsilon u)$.
+ By $\closesym$-induction, we may reduce to the case that $u\close\epsilon v$ is derived from one of the four interesting constructors of $\closesym$.
+ In the first case when $u$ and $v$ are both rational, the result is trivial (we can apply the first constructor again).
+ In the other three cases, the inductive hypothesis (together with commutativity of addition in $\Q$) yields exactly the input to another of the constructors of $\closesym$ (the second and third constructors switch, while the fourth stays put).
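+Concretely, if $u \close\epsilon v$ was obtained from the second constructor, so that $u$ is $\rcrat(q)$, $v$ is $\rclim(y)$, and $\rcrat(q) \close{\epsilon - \delta} y_\delta$ for some $\delta$, then
+%
+\begin{equation*}
+y_\delta \close{\epsilon - \delta} \rcrat(q)
+\;\Rightarrow\;
+\rclim(y) \close\epsilon \rcrat(q)
+\end{equation*}
+%
+is an instance of the third constructor, and the inductive hypothesis supplies exactly its premise.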
+\end{proof}
+
+The general induction principle, which we may call \define{$(\RC,\closesym)$-induction}, is therefore a sort of joint $\RC$-induction and $\closesym$-induction.
+Consider, for instance, its non-dependent version, which we call \define{$(\RC,\closesym)$-recursion}, which is the one that we will have the most use for.
+\index{recursion principle!for Cauchy reals}%
+Ordinary $\RC$-recursion tells us that to define a function $f : \RC \to A$ it suffices to:
+\begin{enumerate}
+\item for every $q : \Q$ construct $f(\rcrat(q)) : A$,
+\item for every Cauchy approximation $x : \Qp \to \RC$, construct $f(\rclim(x)) : A$,
+ assuming that $f(x_\epsilon)$ has already been defined for all $\epsilon : \Qp$,
+\item prove $f(u) = f(v)$ for all $u, v : \RC$ satisfying $\fall{\epsilon:\Qp} u\close\epsilon v$.\label{item:rcrec3}
+\end{enumerate}
+However, it is generally quite difficult to show~\ref{item:rcrec3} without knowing something about how $f$ acts on $\epsilon$-close Cauchy reals.
+The enhanced principle of $(\RC,\closesym)$-recursion remedies this deficiency, allowing us to specify an \emph{arbitrary} ``way in which $f$ acts on $\epsilon$-close Cauchy reals'', which we can then prove to be the case by a simultaneous induction with the definition of $f$.
+This is the family of relations $\bsim$.
+Since $A$ is independent of $\RC$, we may assume for simplicity that $\bsim$ depends only on $A$ and $\Qp$, and thus there is no ambiguity in writing $a\bsim_\epsilon b$ instead of $(u,a) \bsim_\epsilon (v,b)$.
+In this case, defining a function $f:\RC\to A$ by $(\RC,\closesym)$-recursion requires the following cases (which we now write using the pattern-matching convention).
+\begin{itemize}
+\item For every $q : \Q$, construct $f(\rcrat(q)) : A$.
+\item For every Cauchy approximation $x : \Qp \to \RC$, construct $f(\rclim(x)) : A$, assuming inductively that $f(x_\epsilon)$ has already been defined for all $\epsilon : \Qp$ and that these values form a ``Cauchy approximation with respect to $\bsim$'', i.e.\ that $\fall{\epsilon,\delta:\Qp} (f(x_\epsilon) \bsim_{\epsilon+\delta} f(x_\delta))$.
+\item Prove that the relations $\bsim$ are \emph{separated}, i.e.\ that, for any $a,b:A$,
+ \indexdef{relation!separated family of}%
+ \indexdef{separated family of relations}%
+\narrowequation{(\fall{\epsilon:\Qp} a\bsim_\epsilon b) \Rightarrow (a=b).}
+\item Prove that if $-\epsilon< q-r <\epsilon$ for $q,r:\Q$, then $f(\rcrat(q))\bsim_\epsilon f(\rcrat(r))$.
+\item For any $q:\Q$ and any Cauchy approximation $y$, prove that
+\narrowequation{f(\rcrat(q)) \bsim_\epsilon f(\rclim(y)),} assuming inductively that $\rcrat(q)\close{\epsilon-\delta} y_\delta$ and $f(\rcrat(q)) \bsim_{\epsilon-\delta} f(y_\delta)$ for some $\delta:\Qp$, and that $\eta \mapsto f(y_\eta)$ is a Cauchy approximation with respect to $\bsim$.
+\item For any Cauchy approximation $x$ and any $r:\Q$, prove that
+\narrowequation{f(\rclim(x)) \bsim_\epsilon f(\rcrat(r)),}
+assuming inductively that $x_\delta \close{\epsilon-\delta} \rcrat(r)$ and $f(x_\delta) \bsim_{\epsilon-\delta} f(\rcrat(r))$ for some $\delta:\Qp$, and that $\eta\mapsto f(x_\eta)$ is a Cauchy approximation with respect to $\bsim$.
+\item For any Cauchy approximations $x,y$, prove that
+\narrowequation{f(\rclim(x)) \bsim_\epsilon f(\rclim(y)),}
+assuming inductively that $x_\delta \close{\epsilon-\delta-\eta} y_\eta$ and $f(x_\delta) \bsim_{\epsilon-\delta-\eta} f(y_\eta)$ for some $\delta,\eta:\Qp$, and that $\theta\mapsto f(x_\theta)$ and $\theta\mapsto f(y_\theta)$ are Cauchy approximations with respect to $\bsim$.
+\end{itemize}
+Note that in the last four proofs, we are free to use the specific definitions of $f(\rcrat(q))$ and $f(\rclim(x))$ given in the first two data.
+However, the proof of separatedness must apply to \emph{any} two elements of $A$, without any relation to $f$: it is a sort of ``admissibility'' condition on the family of relations $\bsim$.
+Thus, we often verify it first, immediately after defining $\bsim$, before going on to define $f(\rcrat(q))$ and $f(\rclim(x))$.
+
+Under the above hypotheses, $(\RC,\closesym)$-recursion yields a function $f:\RC\to A$ such that $f(\rcrat(q))$ and $f(\rclim(x))$ are judgmentally equal to the definitions given for them in the first two clauses.
+Moreover, we may also conclude
+\begin{equation}
+ \fall{u,v:\RC}{\epsilon:\Qp} (u\close\epsilon v) \to (f(u) \bsim_\epsilon f(v)).\label{eq:RC-sim-recursion-extra}
+\end{equation}
+
+As a paradigmatic example, $(\RC,\closesym)$-recursion allows us to extend functions defined on $\Q$ to all of $\RC$, as long as they are sufficiently continuous.
+\index{function!continuous}%
+
+\begin{defn}\label{defn:lipschitz}
+ A function $f:\Q\to\RC$ is \define{Lipschitz}
+ \indexdef{function!Lipschitz}%
+ \indexdef{Lipschitz!function}%
+ \indexdef{Lipschitz!constant}%
+ \indexdef{constant!Lipschitz}%
+ if there exists $L:\Qp$ (the \define{Lipschitz constant}) such that
+ \[ |q - r|<\epsilon \Rightarrow (f(q) \close{L\epsilon} f(r)) \]
+ for all $\epsilon:\Qp$ and $q,r:\Q$.
+ %
+ Similarly, $g:\RC\to\RC$ is \define{Lipschitz} if there exists $L:\Qp$ such that
+ \[ (u\close\epsilon v) \Rightarrow (g(u) \close{L\epsilon} g(v)) \]
+ for all $\epsilon:\Qp$ and $u,v:\RC$.
+\end{defn}
+
+In particular, note that by the first constructor of $\closesym$, if $f:\Q\to\Q$ is Lipschitz in the obvious sense, then so is the composite $\Q\xrightarrow{f} \Q \to \RC$.
+
+\begin{lem}\label{RC-extend-Q-Lipschitz}
+ Suppose $f : \Q \to \RC$ is Lipschitz with constant $L : \Qp$.
+ Then there exists a Lipschitz map $\bar{f} : \RC \to \RC$, also with constant $L$, such that $\bar{f}(\rcrat(q)) \jdeq f(q)$ for all $q:\Q$.
+\end{lem}
+
+\begin{proof}
+ % Uniqueness follows directly from \cref{RC-continuous-eq}.
+ We define $\bar{f}$ by $(\RC,\closesym)$-recursion, with codomain $A\defeq \RC$.
+ We define the relation $\mathord{\bsim}: \RC \to \RC \to \Qp \to \prop$ to be
+ \begin{align*}
+ (u \bsim_\epsilon v) &\defeq (u \close{L\epsilon} v).
+ \end{align*}
+ For $q : \Q$, we define
+ %
+ \begin{equation*}
+ \bar{f}(\rcrat(q)) \defeq f(q).
+ \end{equation*}
+ %
+ For a Cauchy approximation $x : \Qp \to \RC$, we define
+ %
+ \begin{equation*}
+ \bar{f}(\rclim(x)) \defeq \rclim(\lamu{\epsilon : \Qp} \bar{f}(x_{\epsilon/L})).
+ \end{equation*}
+ %
+ For this to make sense, we must verify that $y \defeq \lamu{\epsilon : \Qp} \bar{f}(x_{\epsilon/L})$ is a Cauchy approximation.
+ However, the inductive hypothesis for this step is that for any $\delta,\epsilon:\Qp$ we have $\bar{f}(x_\delta) \bsim_{\delta+\epsilon} \bar{f}(x_\epsilon)$, i.e.\ $\bar{f}(x_\delta) \close{L\delta+L\epsilon} \bar{f}(x_\epsilon)$.
+ Thus we have
+ \[y_\delta \jdeq \bar{f}(x_{\delta/L}) \close{\delta + \epsilon} \bar{f}(x_{\epsilon/L}) \jdeq y_\epsilon. \]
+
+ For proving separatedness, we simply observe that $\fall{\epsilon:\Qp} a\bsim_\epsilon b$ means $\fall{\epsilon:\Qp} a\close{L\epsilon} b$, which implies $\fall{\epsilon:\Qp}a\close\epsilon b$ and thus $a=b$.
+
+ To complete the $(\RC,\closesym)$-recursion, it remains to verify the four conditions on $\bsim$.
+ This basically amounts to proving that $\bar f$ is Lipschitz for all the four constructors of $\closesym$.
+ \begin{enumerate}
+ \item When $u$ is $\rcrat(q)$ and $v$ is $\rcrat(r)$ with $-\epsilon < q-r <\epsilon$, the assumption that $f$ is Lipschitz yields $f(q) \close{L\epsilon} f(r)$, hence $\bar{f}(\rcrat(q)) \bsim_\epsilon \bar{f}(\rcrat(r))$ by definition.
+ \item When $u$ is $\rclim(x)$ and $v$ is $\rcrat(q)$ with $x_\eta \close{\epsilon - \eta} \rcrat(q)$, then the
+ inductive hypothesis is $\bar{f}(x_\eta) \close{L \epsilon - L \eta} \rcrat(f(q))$, which proves
+ \narrowequation{\bar{f}(\rclim(x)) \close{L \epsilon} \bar{f}(\rcrat(q))}
+ by the third constructor of $\closesym$.
+ \item The symmetric case when $u$ is rational and $v$ is a limit is essentially identical.
+ \item When $u$ is $\rclim(x)$ and $v$ is $\rclim(y)$, with $\delta, \eta : \Qp$ such that $x_\delta \close{\epsilon - \delta - \eta} y_\eta$,
+ the inductive hypothesis is $\bar{f}(x_\delta) \close{L \epsilon - L \delta - L \eta} \bar{f}(y_\eta)$, which proves $\bar{f}(\rclim(x)) \close{L
+ \epsilon} \bar{f}(\rclim(y))$ by the fourth constructor of $\closesym$.
+ \end{enumerate}
+ This completes the $(\RC,\closesym)$-recursion, and hence the construction of $\bar f$.
+ The desired equality $\bar f(\rcrat(q))\jdeq f(q)$ is exactly the first computation rule for $(\RC,\closesym)$-recursion, and the additional condition~\eqref{eq:RC-sim-recursion-extra} says exactly that $\bar f$ is Lipschitz with constant $L$.
+\end{proof}
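+
+For example, rational negation is Lipschitz with constant $1$, since
+\begin{equation*}
+ |q - r| < \epsilon \;\Rightarrow\; |({-q}) - ({-r})| = |q - r| < \epsilon,
+\end{equation*}
+so the composite $\Q \xrightarrow{{-}} \Q \to \RC$ is Lipschitz with constant $1$, and \cref{RC-extend-Q-Lipschitz} extends it to a map $\RC \to \RC$; this is how the additive inverse is obtained in \cref{sec:algebr-struct-cauchy}.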
+
+At this point we have gone about as far as we can without a better characterization of $\closesym$.
+We have specified, in the constructors of $\closesym$, the conditions under which we want Cauchy reals of the two different forms to be $\epsilon$-close.
+However, how do we know that in the resulting inductive-inductive type family, these are the \emph{only} witnesses to this fact?
+We have seen that inductive type families (such as identity types, see \cref{sec:identity-systems}) and higher inductive types have a tendency to contain ``more than was put into them'', so this is not an idle question.
+
+In order to characterize $\closesym$ more precisely, we will define a family of relations $\approx_\epsilon$ on $\RC$ \emph{recursively}, so that they will compute on constructors, and prove that this family is equivalent to $\close\epsilon$.
+
+\begin{thm}\label{defn:RC-approx}
+ There is a family of mere relations $\mathord\approx:\RC\to\RC\to\Qp\to\prop$ such that
+ \begin{align}
+ (\rcrat(q) \approx_\epsilon \rcrat(r)) &\defeq
+ (-\epsilon < q - r < \epsilon)\label{eq:RCappx1}\\
+ (\rcrat(q) \approx_\epsilon \rclim(y)) &\defeq
+ \exis{\delta : \Qp} \rcrat(q) \approx_{\epsilon - \delta} y_\delta\label{eq:RCappx2}\\
+ (\rclim(x) \approx_\epsilon \rcrat(r)) &\defeq
+ \exis{\delta : \Qp} x_\delta \approx_{\epsilon - \delta} \rcrat(r)\label{eq:RCappx3}\\
+ (\rclim(x) \approx_\epsilon \rclim(y)) &\defeq
+ \exis{\delta, \eta : \Qp} x_\delta \approx_{\epsilon - \delta - \eta} y_\eta.\label{eq:RCappx4}
+ \end{align}
+ Moreover, we have
+ \begin{gather}
+ (u \approx_\epsilon v) \Leftrightarrow \exis{\theta:\Qp} (u \approx_{\epsilon-\theta} v) \label{RC-sim-rounded}\\
+ (u \approx_\epsilon v) \to (v\close\delta w) \to (u\approx_{\epsilon+\delta} w)\label{eq:RC-sim-rtri}\\
+ (u \close\epsilon v) \to (v\approx_\delta w) \to (u\approx_{\epsilon+\delta} w)\label{eq:RC-sim-ltri}.
+ \end{gather}
+\end{thm}
+
+The additional conditions~\eqref{RC-sim-rounded}--\eqref{eq:RC-sim-ltri} turn out to be required in order to make the inductive definition go through.
+Condition~\eqref{RC-sim-rounded} is called being \define{rounded}.
+\indexsee{relation!rounded}{rounded relation}%
+\indexdef{rounded!relation}%
+Reading it from right to left gives \define{monotonicity} of $\approx$,
+\index{monotonicity}%
+\index{relation!monotonic}%
+%
+\begin{equation*}
+ (\delta < \epsilon) \land (u \approx_\delta v) \Rightarrow (u \approx_\epsilon v)
+\end{equation*}
+%
+while reading it from left to right gives \define{openness} of $\approx$,
+\index{open!relation}%
+\index{relation!open}%
+%
+\begin{equation*}
+ (u \approx_\epsilon v) \Rightarrow \exis{\delta : \Qp} (\delta < \epsilon) \land (u \approx_\delta v).
+\end{equation*}
+%
+Conditions~\eqref{eq:RC-sim-rtri} and~\eqref{eq:RC-sim-ltri} are forms of the triangle inequality, which say that $\approx$ is a ``module'' over $\closesym$ on both sides.
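+
+For instance, monotonicity follows from the right-to-left direction of~\eqref{RC-sim-rounded} by exhibiting the witness $\theta \defeq \epsilon - \delta : \Qp$: if $\delta < \epsilon$ and $u \approx_\delta v$, then since $\epsilon - \theta = \delta$ we have
+\begin{equation*}
+ \exis{\theta:\Qp} (u \approx_{\epsilon-\theta} v),
+\end{equation*}
+and hence $u \approx_\epsilon v$.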
+
+\begin{proof}
+ We will define $\mathord\approx:\RC\to\RC\to\Qp\to\prop$ by double $(\RC,\closesym)$-recursion.
+ First we will apply $(\RC,\closesym)$-recursion with codomain the subset of $\RC\to\Qp\to\prop$ consisting of those families of predicates which are rounded and satisfy the one appropriate form of the triangle inequality.
+ Thinking of these predicates as half of a binary relation, we will write them as $(u,\epsilon) \mapsto (\hapx_\epsilon u)$, with the symbol $\hapname$ referring to the whole relation.
+ Now we can write $A$ precisely as
+ \begin{multline*}
+ A \defeq\; \Bigg\{ \hapname :\RC\to\Qp\to\prop \;\bigg|\; \\
+ \Big(\fall{u:\RC}{\epsilon:\Qp}
+ \big((\hapx_\epsilon u) \Leftrightarrow \exis{\theta:\Qp} (\hapx_{\epsilon-\theta} u)\big)\Big) \\
+ \land \Big(\fall{u,v:\RC}{\eta,\epsilon:\Qp} (u\close\epsilon v) \to\\
+ \big((\hapx_\eta u) \to (\hapx_{\eta+\epsilon} v) \big) \land \big((\hapx_\eta v) \to (\hapx_{\eta+\epsilon} u) \big)\Big)\Bigg\}
+ \end{multline*}
+ As usual with subsets, we will use the same notation for an inhabitant of $A$ and its first component $\hapname$.
+ As the family of relations required for $(\RC,\closesym)$-recursion, we consider the following, which will ensure the other form of the triangle inequality:
+ \begin{narrowmultline*}
+ (\hapname \bsim_\epsilon \hapbname ) \defeq \narrowbreak
+ \fall{u:\RC}{\eta:\Qp} ((\hapx_\eta u) \to (\hapxb_{\epsilon+\eta} u))
+ \land \narrowbreak
+ ((\hapxb_\eta u) \to (\hapx_{\epsilon+\eta} u)).
+ \end{narrowmultline*}
+ We observe that these relations are separated.
+ For assuming
+ \narrowequation{\fall{\epsilon:\Qp} (\hapname \bsim_\epsilon \hapbname),}
+ to show $\hapname = \hapbname$ it suffices to show $(\hapx_\epsilon u) \Leftrightarrow (\hapxb_\epsilon u)$ for all $u:\RC$.
+ But $\hapx_\epsilon u$ implies $\hapx_{\epsilon-\theta} u$ for some $\theta$, by roundedness, which together with $\hapname \bsim_\theta \hapbname$ implies $\hapxb_\epsilon u$; and the converse is identical.
+
+ Now the first two data the recursion principle requires are the following.
+ \begin{itemize}
+ \item For any $q:\Q$, we must give an element of $A$, which we denote $(\rcrat(q)\approx_{(\blank)} \blank)$.
+ \item For any Cauchy approximation $x$, if we assume defined a function $\Qp \to A$, which we will denote by $\epsilon \mapsto (x_\epsilon \approx_{(\blank)} \blank)$, with the property that
+ % \[ \fall{u,v:\RC}{\delta,\epsilon,\eta:\Qp} (x_\delta \approx_\eta u) \to (u\close{\delta+\epsilon} v) \to (x_\epsilon \approx_{\eta+\delta+\epsilon} v) \]
+ \begin{equation}
+ \fall{u:\RC}{\delta,\epsilon,\eta:\Qp} (x_\delta \approx_\eta u) \to (x_\epsilon \approx_{\eta+\delta+\epsilon} u),\label{eq:appxrec2}
+ \end{equation}
+ we must give an element of $A$, which we write as $(\rclim(x)\approx_{(\blank)} \blank)$.
+ \end{itemize}
+ In both cases, we give the required definition by using a nested $(\RC,\closesym)$-recursion, with codomain the subset of $\Qp\to\prop$ consisting of rounded families of mere propositions.
+ Thinking of these propositions as zero halves of a binary relation, we will write them as $\epsilon \mapsto (\tap{\epsilon})$, with the symbol $\tapname$ referring to the whole family.
+ Now we can write the codomain of these inner recursions precisely as
+ \begin{narrowmultline*}
+ C \defeq
+ \bigg\{ \tapname :\Qp\to\prop \;\;\Big|\;\; \narrowbreak
+ \fall{\epsilon:\Qp} \Big((\tap\epsilon) \Leftrightarrow \exis{\theta:\Qp} (\tap{\epsilon-\theta})\Big)\bigg\}
+ \end{narrowmultline*}
+ We take the required family of relations to be the remnant of the triangle inequality:
+ \begin{narrowmultline*}
+ (\tapname \bbsim_\epsilon \tapbname) \defeq
+ \fall{\eta:\Qp} ((\tap\eta) \to (\tapb{\epsilon+\eta})) \land
+ \narrowbreak
+ ((\tapb\eta) \to (\tap{\epsilon+\eta})).
+ \end{narrowmultline*}
+ These relations are separated by the same argument as for $\bsim$, using roundedness of all elements of $C$.
+
+ Note that if such an inner recursion succeeds, it will yield a family of predicates $\hapname : \RC\to\Qp\to \prop$ which are rounded
+(since their image in $\Qp\to\prop$ lies in $C$) and satisfy
+ \[ \fall{u,v:\RC}{\epsilon:\Qp} (u\close\epsilon v) \to \big((\hapx_{(\blank)} u) \bbsim_\epsilon (\hapx_{(\blank)} v)\big). \]
+ Expanding out the definition of $\bbsim$, this yields precisely the second condition for $\hapname$ to belong to $A$; thus it is exactly what we need.
+
+ It is at this point that we can give the definitions~\eqref{eq:RCappx1}--\eqref{eq:RCappx4}, as the first two clauses of each of the two inner recursions, corresponding to rational points and limits.
+ In each case, we must verify that the relation is rounded and hence lies in $C$.
+ In the rational-rational case~\eqref{eq:RCappx1} this is clear, while in the other cases it follows from an inductive hypothesis.
+ (In~\eqref{eq:RCappx2} the relevant inductive hypothesis is that $(\rcrat(q) \approx_{(\blank)} y_\delta) : C$, while in~\eqref{eq:RCappx3} and~\eqref{eq:RCappx4} it is that $(x_\delta \approx_{(\blank)} \blank) : A$.)
+
+ The remaining data of the sub-recursions consist of showing that \eqref{eq:RCappx1}--\eqref{eq:RCappx4} satisfy the triangle inequality on the right with respect to the constructors of $\closesym$.
+ There are eight cases --- four in each sub-recursion --- corresponding to the eight possible ways that $u$, $v$, and $w$ in~\eqref{eq:RC-sim-rtri} can be chosen to be rational points or limits.
+ First we consider the cases when $u$ is $\rcrat(q)$.
+ \begin{enumerate}
+ \item Assuming $\rcrat(q)\approx_\phi \rcrat(r)$ and $-\epsilon<r-s<\epsilon$, we must show $\rcrat(q)\approx_{\phi+\epsilon} \rcrat(s)$.
+ But by definition of $\approx$, this reduces to the triangle inequality for rational numbers.
+ \item We assume $\phi,\epsilon,\delta:\Qp$ such that $\rcrat(q)\approx_\phi \rcrat(r)$ and $\rcrat(r) \close{\epsilon-\delta} y_\delta$, and inductively that
+ \begin{equation}
+ \fall{\psi:\Qp}(\rcrat(q) \approx_{\psi} \rcrat(r)) \to (\rcrat(q) \approx_{\psi+\epsilon-\delta} y_\delta).\label{eq:RCappx-rtri-rrl1}
+ \end{equation}
+ We assume also that $\psi,\delta\mapsto (\rcrat(q) \approx_{\psi} y_\delta)$ is a Cauchy approximation with respect to $\bbsim$, i.e.\
+ \begin{equation}
+ \fall{\psi,\xi,\zeta:\Qp} (\rcrat(q) \approx_{\psi} y_\xi) \to (\rcrat(q) \approx_{\psi+\xi+\zeta} y_\zeta),\label{eq:RCappx-rtri-rrl2}
+ \end{equation}
+ although we do not need this assumption in this case.
+ Indeed, \eqref{eq:RCappx-rtri-rrl1} with $\psi\defeq \phi$ yields immediately $\rcrat(q) \approx_{\phi+\epsilon-\delta} y_\delta$, and hence $\rcrat(q) \approx_{\phi+\epsilon} \rclim(y)$ by definition of $\approx$.
+ \item We assume $\phi,\epsilon,\delta:\Qp$ such that $\rcrat(q)\approx_\phi \rclim(y)$ and $y_\delta \close{\epsilon-\delta} \rcrat(r)$, and inductively that
+ \begin{gather}
+ \fall{\psi:\Qp}(\rcrat(q) \approx_{\psi} y_\delta) \to (\rcrat(q) \approx_{\psi+\epsilon-\delta} \rcrat(r)).\label{eq:RCappx-rtri-rlr1}\\
+ \fall{\psi,\xi,\zeta:\Qp} (\rcrat(q) \approx_{\psi} y_\xi) \to (\rcrat(q) \approx_{\psi+\xi+\zeta} y_\zeta).\label{eq:RCappx-rtri-rlr2}
+ \end{gather}
+ By definition, $\rcrat(q)\approx_\phi \rclim(y)$ means that we have $\theta:\Qp$ with $\rcrat(q) \approx_{\phi-\theta} y_\theta$.
+ By assumption~\eqref{eq:RCappx-rtri-rlr2}, therefore, we have also $\rcrat(q) \approx_{\phi+\delta} y_\delta$, and then by~\eqref{eq:RCappx-rtri-rlr1} it follows that $\rcrat(q) \approx_{\phi+\epsilon} \rcrat(r)$, as desired.
+ \item We assume $\phi,\epsilon,\delta,\eta:\Qp$ such that $\rcrat(q)\approx_\phi \rclim(y)$ and $y_\delta \close{\epsilon-\delta-\eta} z_\eta$, and inductively that
+ \begin{gather}
+ \fall{\psi:\Qp}(\rcrat(q) \approx_{\psi} y_\delta) \to (\rcrat(q) \approx_{\psi+\epsilon-\delta-\eta} z_\eta), \label{eq:RCappx-rtri-rll1}\\
+ \fall{\psi,\xi,\zeta:\Qp} (\rcrat(q) \approx_{\psi} y_\xi) \to (\rcrat(q) \approx_{\psi+\xi+\zeta} y_\zeta), \label{eq:RCappx-rtri-rll2}\\
+ \fall{\psi,\xi,\zeta:\Qp} (\rcrat(q) \approx_{\psi} z_\xi) \to (\rcrat(q) \approx_{\psi+\xi+\zeta} z_\zeta). \label{eq:RCappx-rtri-rll3}
+ \end{gather}
+ Again, $\rcrat(q)\approx_\phi \rclim(y)$ means we have $\xi:\Qp$ with $\rcrat(q) \approx_{\phi-\xi} y_\xi$, while~\eqref{eq:RCappx-rtri-rll2} then implies $\rcrat(q) \approx_{\phi+\delta} y_\delta$ and~\eqref{eq:RCappx-rtri-rll1} implies $\rcrat(q) \approx_{\phi+\epsilon-\eta} z_\eta$.
+ But by definition of $\approx$, this implies $\rcrat(q) \approx_{\phi+\epsilon} \rclim(z)$ as desired.
+ \end{enumerate}
+ Now we move on to the cases when $u$ is $\rclim(x)$, with $x$ a Cauchy approximation.
+ In this case, the ambient inductive hypothesis of the definition of $(\rclim(x) \approx_{(\blank)} {\blank}) : A$ is that we have ${(x_\delta \approx_{(\blank)} {\blank})}: A$, so that in addition to being rounded they satisfy the triangle inequality on the right.
+ \begin{enumerate}\setcounter{enumi}{4}
+ \item Assuming $\rclim(x)\approx_\phi \rcrat(r)$ and $-\epsilon<r-s<\epsilon$, we must show $\rclim(x)\approx_{\phi+\epsilon} \rcrat(s)$.
+ By definition of $\approx$, the former means $x_\delta \approx_{\phi-\delta} \rcrat(r)$ for some $\delta:\Qp$, so that the above triangle inequality implies $x_\delta \approx_{\epsilon+\phi-\delta} \rcrat(s)$, hence $\rclim(x)\approx_{\phi+\epsilon} \rcrat(s)$ as desired.
+ \item We assume $\phi,\epsilon,\delta:\Qp$ such that $\rclim(x)\approx_\phi \rcrat(r)$ and $\rcrat(r) \close{\epsilon-\delta} y_\delta$, and two unneeded inductive hypotheses.
+ %
+ By definition, we have $\eta:\Qp$ such that $x_\eta \approx_{\phi-\eta} \rcrat(r)$, so the inductive triangle inequality gives $x_\eta \approx_{\phi+\epsilon-\eta-\delta} y_\delta$.
+ The definition of $\approx$ then immediately yields $\rclim(x) \approx_{\phi+\epsilon} \rclim(y)$.
+ \item We assume $\phi,\epsilon,\delta:\Qp$ such that $\rclim(x)\approx_\phi \rclim(y)$ and $y_\delta \close{\epsilon-\delta} \rcrat(r)$, and two unneeded inductive hypotheses.
+ By definition we have $\xi,\theta:\Qp$ such that $x_\xi \approx_{\phi-\xi-\theta} y_\theta$.
+ Since $y$ is a Cauchy approximation, we have $y_\theta \close{\theta+\delta} y_\delta$, so the inductive triangle inequality gives $x_\xi \approx_{\phi+\delta-\xi} y_\delta$ and then $x_\xi \approx_{\phi+\epsilon-\xi} \rcrat(r)$.
+ The definition of $\approx$ then gives $\rclim(x) \approx_{\phi+\epsilon}\rcrat(r)$, as desired.
+ \item Finally, we assume $\phi,\epsilon,\delta,\eta:\Qp$ such that $\rclim(x)\approx_\phi \rclim(y)$ and $y_\delta \close{\epsilon-\delta-\eta} z_\eta$.
+ Then as before we have $\xi,\theta:\Qp$ with $x_\xi \approx_{\phi-\xi-\theta} y_\theta$, and two applications of the triangle inequality suffice.
+ \end{enumerate}
+
+ This completes the two inner recursions, and thus the definitions of the families of relations $(\rcrat(q)\approx_{(\blank)}\blank)$ and $(\rclim(x)\approx_{(\blank)}\blank)$.
+ Since all are elements of $A$, they are rounded and satisfy the triangle inequality on the right with respect to $\closesym$.
+% , and satisfy~\eqref{eq:appxrec2}.
+ What remains is to verify the conditions relating to $\bsim$, which is to say that these relations satisfy the triangle inequality on the \emph{left} with respect to the constructors of $\closesym$.
+ The four cases correspond to the four choices of rational or limit points for $u$ and $v$ in~\eqref{eq:RC-sim-ltri}, and since they are all mere propositions, we may apply $\RC$-induction and assume that $w$ is also either rational or a limit.
+ This yields another eight cases, whose proofs are essentially identical to those just given; so we will not subject the reader to them.
+\end{proof}
+
+We can now prove:
+
+\begin{thm}\label{thm:RC-sim-characterization}
+ For any $u,v:\RC$ and $\epsilon:\Qp$ we have $(u\close\epsilon v) = (u\approx_\epsilon v)$.
+\end{thm}
+\begin{proof}
+ Since both are mere propositions, it suffices to prove bidirectional implication.
+ For the left-to-right direction, we use $\closesym$-induction applied to $C(u,v,\epsilon)\defeq (u\approx_\epsilon v)$.
+ Thus, it suffices to consider the four constructors of $\closesym$.
+ In each case, $u$ and $v$ are specialized to either rational points or limits, so that the definition of $\approx$ evaluates, and the inductive hypothesis always applies.
+
+ For the right-to-left direction, we use $\RC$-induction to assume that $u$ and $v$ are rational points or limits, allowing $\approx$ to evaluate.
+ But now the definitions of $\approx$, and the inductive hypotheses, supply exactly the data required for the relevant constructors of $\closesym$.
+\end{proof}
+
+\index{encode-decode method}%
+Stretching a point, one might call $\approx$ a fibration of ``codes'' for $\closesym$, with the two directions of the above proof being \encode and \decode respectively.
+By the definition of $\approx$, from \cref{thm:RC-sim-characterization} we get equivalences
+\begin{align*}
+ (\rcrat(q) \close\epsilon \rcrat(r)) &=
+ (-\epsilon < q - r < \epsilon)\\
+ (\rcrat(q) \close\epsilon \rclim(y)) &=
+ \exis{\delta : \Qp} \rcrat(q) \close{\epsilon - \delta} y_\delta\\
+ (\rclim(x) \close\epsilon \rcrat(r)) &=
+ \exis{\delta : \Qp} x_\delta \close{\epsilon - \delta} \rcrat(r)\\
+ (\rclim(x) \close\epsilon \rclim(y)) &=
+ \exis{\delta, \eta : \Qp} x_\delta \close{\epsilon - \delta - \eta} y_\eta.
+\end{align*}
+Our proof also provides the following additional information.
+
+\begin{cor}
+ \index{triangle!inequality for R@inequality for $\RC$}%
+ \indexsee{inequality!triangle}{triangle inequality}%
+ $\closesym$ is rounded\index{rounded!relation} and satisfies the triangle inequality:
+ \begin{gather}
+ \eqvspaced{
+ (u \close\epsilon v)
+ }{
+ \exis{\theta : \Qp} u \close{\epsilon - \theta} v
+ }\\
+ (u\close\epsilon v) \to (v\close\delta w) \to (u\close{\epsilon+\delta} w). \label{item:RC-sim-triangle}
+ \end{gather}
+\end{cor}
+% \begin{proof}
+% The construction of $\approx$ showed simultaneously that it is rounded, and satisfies ``triangle inequalities'' such as
+% \[ (u\approx_\epsilon v) \to (v\close\delta w) \to (u\approx_{\epsilon+\delta} w). \]
+% Thus, both properties follow from \cref{thm:RC-sim-characterization}.
+% \end{proof}
+
+With the triangle inequality in hand, we can show that ``limits'' of Cauchy approximations actually behave like limits.
+
+\begin{lem}\label{thm:RC-sim-lim}
+ For any $u:\RC$, Cauchy approximation $y$, and $\epsilon,\delta:\Qp$, if $u\close\epsilon y_\delta$ then $u\close{\epsilon+\delta} \rclim(y)$.
+\end{lem}
+\begin{proof}
+ We use $\RC$-induction on $u$.
+ If $u$ is $\rcrat(q)$, then this is exactly the second constructor of $\closesym$.
+ Now suppose $u$ is $\rclim(x)$, and that each $x_\eta$ has the property that for any $y,\epsilon,\delta$, if $x_\eta\close\epsilon y_\delta$ then $x_\eta \close{\epsilon+\delta} \rclim(y)$.
+ In particular, taking $y\defeq x$ and $\delta\defeq\eta$ in this assumption, we conclude that $x_\eta \close{\eta+\theta} \rclim(x)$ for any $\eta,\theta:\Qp$.
+
+ Now let $y,\epsilon,\delta$ be arbitrary and assume $\rclim(x) \close\epsilon y_\delta$.
+ By roundedness, there is a $\theta$ such that $\rclim(x) \close{\epsilon-\theta} y_\delta$.
+ Then by the above observation, for any $\eta$ we have $x_\eta \close{\eta+\theta/2} \rclim(x)$, and hence $x_\eta \close{\epsilon+\eta-\theta/2} y_\delta$ by the triangle inequality.
+ Hence, the fourth constructor of $\closesym$ yields $\rclim(x) \close{\epsilon+2\eta+\delta-\theta/2} \rclim(y)$.
+ Thus, if we choose $\eta \defeq \theta/4$, then $\epsilon + 2\eta + \delta - \theta/2 = \epsilon + \delta$ and the result follows.
+\end{proof}
+
+\begin{lem}\label{thm:RC-sim-lim-term}
+ For any Cauchy approximation $y$ and any $\delta,\eta:\Qp$ we have $y_\delta \close{\delta+\eta} \rclim(y)$.
+\end{lem}
+\begin{proof}
+ Take $u\defeq y_\delta$ and $\epsilon\defeq \eta$ in the previous lemma.
+\end{proof}
+
+\begin{rmk}
+ We might have expected to have $y_\delta \close{\delta} \rclim(y)$, but this fails in examples.
+ For instance, consider $x$ defined by $x_\epsilon \defeq \epsilon$.
+ Its limit is clearly $0$, but we do not have $|\epsilon - 0 |<\epsilon$, only $\le$.
+\end{rmk}
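+
+Concretely, for the approximation $x_\epsilon \defeq \rcrat(\epsilon)$ of the remark, \cref{thm:RC-sim-lim-term} asserts that $\rcrat(\delta) \close{\delta+\eta} \rclim(x)$, i.e.\ (since $\rclim(x) = \rcrat(0)$) that
+\begin{equation*}
+ -(\delta+\eta) < \delta - 0 < \delta+\eta,
+\end{equation*}
+which holds for every $\eta:\Qp$, whereas the corresponding inequality $\delta < \delta$ for the bound $\delta$ alone fails.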
+
+As an application, \cref{thm:RC-sim-lim-term} enables us to show that the extensions of Lipschitz functions from \cref{RC-extend-Q-Lipschitz} are unique.
+
+\begin{lem}\label{RC-continuous-eq}
+ \index{function!continuous}%
+ Let $f,g:\RC\to\RC$ be continuous, in the sense that
+ \[ \fall{u:\RC}{\epsilon:\Qp}\exis{\delta:\Qp}\fall{v:\RC} (u\close\delta v) \to (f(u) \close\epsilon f(v)) \]
+ and analogously for $g$.
+ If $f(\rcrat(q))=g(\rcrat(q))$ for all $q:\Q$, then $f=g$.
+\end{lem}
+\begin{proof}
+ We prove $f(u)=g(u)$ for all $u$ by $\RC$-induction.
+ The rational case is just the hypothesis.
+ Thus, suppose $f(x_\delta)=g(x_\delta)$ for all $\delta$.
+ We will show that $f(\rclim(x))\close\epsilon g(\rclim(x))$ for all $\epsilon$, so that the path constructor of $\RC$ applies.
+
+ Since $f$ and $g$ are continuous, there exist $\theta,\eta$ such that for all $v$, we have
+ \begin{align*}
+ (\rclim(x)\close\theta v) &\to (f(\rclim(x)) \close{\epsilon/2} f(v))\\
+ (\rclim(x)\close\eta v) &\to (g(\rclim(x)) \close{\epsilon/2} g(v)).
+ \end{align*}
+ Choosing $\delta < \min(\theta,\eta)$, by \cref{thm:RC-sim-lim-term} we have both $\rclim(x)\close\theta x_\delta$ and $\rclim(x)\close\eta x_\delta$.
+ Hence
+ \[ f(\rclim(x)) \close{\epsilon/2} f(x_\delta) = g(x_\delta) \close{\epsilon/2} g(\rclim(x))\]
+ and thus $f(\rclim(x))\close\epsilon g(\rclim(x))$ by the triangle inequality.
+\end{proof}
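+
+In particular, any Lipschitz map $g:\RC\to\RC$ with constant $L$ is continuous in this sense: given $\epsilon:\Qp$, take $\delta \defeq \epsilon/L$, so that
+\begin{equation*}
+ (u \close{\epsilon/L} v) \Rightarrow (g(u) \close{L(\epsilon/L)} g(v)),
+\end{equation*}
+and $L(\epsilon/L) = \epsilon$. Thus \cref{RC-continuous-eq} shows that the extension $\bar f$ of \cref{RC-extend-Q-Lipschitz} is the unique continuous map agreeing with $f$ on rational points.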
+
+\subsection{The algebraic structure of Cauchy reals}
+\label{sec:algebr-struct-cauchy}
+
+We first define the additive structure $(\RC, 0, {+}, {-})$. Clearly, the additive unit element
+$0$ is just $\rcrat(0)$, while the additive inverse ${-} : \RC \to \RC$ is obtained as the
+extension of the additive inverse ${-} : \Q \to \Q$, using \cref{RC-extend-Q-Lipschitz}
+with Lipschitz constant~$1$. We have to work a bit harder for addition.
+
+\begin{lem} \label{RC-binary-nonexpanding-extension}
+ Suppose $f : \Q \times \Q \to \Q$ satisfies, for all $q, r, s : \Q$,
+ %
+ \begin{equation*}
+ |f(q, s) - f(r, s)| \leq |q - r|
+ \qquad\text{and}\qquad
+ |f(q, r) - f(q, s)| \leq |r - s|.
+ \end{equation*}
+ %
+ Then there is a function $\bar{f} : \RC \times \RC \to \RC$ such that
+ $\bar{f}(\rcrat(q), \rcrat(r)) = \rcrat(f(q,r))$ for all $q, r : \Q$. Furthermore,
+ for all $u, v, w : \RC$ and $\epsilon : \Qp$,
+ %
+ \begin{equation*}
+ u \close\epsilon v \Rightarrow \bar{f}(u,w) \close\epsilon \bar{f}(v,w)
+ \quad\text{and}\quad
+ v \close\epsilon w \Rightarrow \bar{f}(u,v) \close\epsilon \bar{f}(u,w).
+ \end{equation*}
+\end{lem}
+
+\begin{proof}
+ We use $(\RC, {\closesym})$-recursion to construct the curried form of $\bar{f}$ as a map
+ $\RC \to A$ where $A$ is the space of non-expanding\index{function!non-expanding}\index{non-expanding function} real-valued
+ functions:
+ %
+ \begin{equation*}
+ A \defeq
+ \setof{ h : \RC \to \RC |
+ \fall{\epsilon : \Qp} \fall{u, v : \RC}
+ u \close\epsilon v \Rightarrow h(u) \close\epsilon h(v)
+ }.
+ \end{equation*}
+ %
+ We shall also need a suitable $\bsim_\epsilon$ on $A$, which we define as
+ %
+ \begin{equation*}
+ (h \bsim_\epsilon k) \defeq \fall{u : \RC} h(u) \close\epsilon k(u).
+ \end{equation*}
+ %
+ Clearly, if $\fall{\epsilon : \Qp} h \bsim_\epsilon k$ then $h(u) = k(u)$ for all $u :
+ \RC$, so $\bsim$ is separated.
+
+ For the base case we define $\bar{f}(\rcrat(q)) : A$, where $q : \Q$, as the
+ extension of the Lipschitz map $\lam{r} f(q,r)$ from $\Q \to \Q$ to $\RC \to \RC$, as
+ constructed in \cref{RC-extend-Q-Lipschitz} with Lipschitz constant~$1$. Next, for a
+ Cauchy approximation $x$, we define $\bar{f}(\rclim(x)) : \RC \to \RC$ as
+ %
+ \begin{equation*}
+ \bar{f}(\rclim(x))(v) \defeq \rclim (\lam{\epsilon} \bar{f}(x_\epsilon)(v)).
+ \end{equation*}
+ %
+ For this to be a valid definition, $\lam{\epsilon} \bar{f}(x_\epsilon)(v)$ should be a
+ Cauchy approximation, so consider any $\delta, \epsilon : \Qp$. Then by assumption
+ $\bar{f}(x_\delta) \bsim_{\delta + \epsilon} \bar{f}(x_\epsilon)$, hence
+ $\bar{f}(x_\delta)(v) \close{\delta + \epsilon} \bar{f}(x_\epsilon)(v)$. Furthermore,
+ $\bar{f}(\rclim(x))$ is non-expanding because each $\bar{f}(x_\epsilon)$ is such by the induction
+ hypothesis. Indeed, if $u \close\epsilon v$ then, by roundedness, there merely exists $\theta : \Qp$ with $u \close{\epsilon - \theta} v$, hence
+ %
+ \begin{equation*}
+ \bar{f}(x_{\theta/2})(u) \close{\epsilon - \theta} \bar{f}(x_{\theta/2})(v),
+ \end{equation*}
+ %
+ therefore $\bar{f}(\rclim(x))(u) \close\epsilon \bar{f}(\rclim(x))(v)$ by the fourth constructor of $\closesym$.
+
+ We still have to check four more conditions; let us illustrate just one. Suppose
+ $\epsilon : \Qp$ and for some $\delta : \Qp$ we have $\rcrat(q) \close{\epsilon - \delta}
+ y_\delta$ and $\bar{f}(\rcrat(q)) \bsim_{\epsilon - \delta} \bar{f}(y_\delta)$. To show
+ $\bar{f}(\rcrat(q)) \bsim_\epsilon \bar{f}(\rclim(y))$, consider any $v : \RC$ and observe that
+ %
+ \begin{equation*}
+ \bar{f}(\rcrat(q))(v) \close{\epsilon - \delta} \bar{f}(y_\delta)(v).
+ \end{equation*}
+ %
+ Therefore, by \cref{thm:RC-sim-lim}, we have
+ \narrowequation{\bar{f}(\rcrat(q))(v) \close\epsilon \bar{f}(\rclim(y))(v)}
+ as required.
+\end{proof}
+
+We may apply \cref{RC-binary-nonexpanding-extension} to any bivariate rational function
+which is non-expanding separately in each variable. Addition is such a function, therefore
+we get ${+} : \RC \times \RC \to \RC$.
+\indexdef{addition!of Cauchy reals}%
+Furthermore, the extension is unique as long as we
+require it to be non-expanding in each variable, and just as in the univariate case,
+identities on rationals extend to identities on reals. Since composition of non-expanding
+maps is again non-expanding, we may conclude that addition satisfies the usual properties,
+such as commutativity and associativity.
+\index{associativity!of addition!of Cauchy reals}%
+Therefore, $(\RC, 0, {+}, {-})$ is a commutative
+group.
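+
+Concretely, addition satisfies the hypotheses of \cref{RC-binary-nonexpanding-extension} with equality:
+\begin{equation*}
+ |(q+s) - (r+s)| = |q - r|
+ \qquad\text{and}\qquad
+ |(q+r) - (q+s)| = |r - s|.
+\end{equation*}
+Commutativity illustrates how identities transfer: $(u,v) \mapsto u+v$ and $(u,v) \mapsto v+u$ are both non-expanding in each variable and agree on rational points, since $q+r = r+q$ in $\Q$; hence by uniqueness of such extensions, $u+v = v+u$ for all $u,v:\RC$.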
+
+We may also apply \cref{RC-binary-nonexpanding-extension} to the functions $\min : \Q \times
+\Q \to \Q$ and $\max : \Q \times \Q \to \Q$, which turns $\RC$ into a lattice. The partial
+order $\leq$ on $\RC$ is defined in terms of $\max$ as
+%
+\symlabel{leq-RC}
+\index{order!non-strict}%
+\index{non-strict order}%
+\begin{equation*}
+ (u \leq v) \defeq (\max(u, v) = v).
+\end{equation*}
+%
+The relation $\leq$ is a partial order because it is such on $\Q$, and the axioms of a
+partial order are expressible as equations in terms of $\min$ and $\max$, so they transfer
+to $\RC$.
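+
+For example, antisymmetry of $\leq$ transfers in this way: commutativity of $\max$ holds on $\Q$ and hence on $\RC$, so if $u \leq v$ and $v \leq u$ then
+\begin{equation*}
+ u = \max(v, u) = \max(u, v) = v.
+\end{equation*}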
+
+\index{absolute value}%
+Another function which extends to $\RC$ by the same method is the absolute value $|{\blank}|$.
+Again, it has the expected properties because they transfer from $\Q$ to $\RC$.
+
+\symlabel{lt-RC}
+From $\leq$ we get the strict order $<$ by
+\index{strict!order}%
+\index{order!strict}%
+%
+\begin{equation*}
+ (u < v) \defeq \exis{q, r : \Q} (u \leq \rcrat(q)) \land (q < r) \land (\rcrat(r) \leq v).
+\end{equation*}
+%
+That is, $u < v$ holds when there merely exists a pair of rational numbers $q < r$ such that $u \leq
+\rcrat(q)$ and $\rcrat(r) \leq v$. It is not hard to check that $<$ is irreflexive and
+transitive, and has other properties that are expected for an ordered field.
+The archimedean principle follows directly from the definition of~$<$.
+
+\index{ordered field!archimedean}%
+\begin{thm}[Archimedean principle for $\RC$] \label{RC-archimedean}
+ %
+ For every $u, v : \RC$ such that $u < v$ there merely exists $q : \Q$ such that $u < \rcrat(q) < v$.
+\end{thm}
+
+\begin{proof}
+ From $u < v$ we merely get $r, s : \Q$ such that $u \leq r < s \leq v$, and we may take $q
+ \defeq (r + s) / 2$.
+\end{proof}
+
+We now have enough structure on $\RC$ to express $u \close\epsilon v$ with standard concepts.
+
+\begin{lem}\label{thm:RC-le-grow}
+ If $q:\Q$ and $u:\RC$ satisfy $u\le \rcrat(q)$, then for any $v:\RC$ and $\epsilon:\Qp$, if $u\close\epsilon v$ then $v\le \rcrat(q+\epsilon)$.
+\end{lem}
+\begin{proof}
+ Note that the function $\max(\rcrat(q),\blank):\RC\to\RC$ is Lipschitz with constant $1$.
+ First consider the case when $u=\rcrat(r)$ is rational.
+ For this we use induction on $v$.
+ If $v$ is rational, then the statement is obvious.
+ If $v$ is $\rclim(y)$, we assume inductively that for any $\epsilon,\delta$, if $\rcrat(r)\close\epsilon y_\delta$ then $y_\delta \le \rcrat(q+\epsilon)$, i.e.\ $\max(\rcrat(q+\epsilon),y_\delta)=\rcrat(q+\epsilon)$.
+
 Now, given $\epsilon$ with $\rcrat(r)\close\epsilon \rclim(y)$, we have $\theta$ such that $\rcrat(r)\close{\epsilon-\theta} \rclim(y)$, hence $\rcrat(r)\close\epsilon y_\delta$ whenever $\delta<\theta$.
+ Thus, the inductive hypothesis gives $\max(\rcrat(q+\epsilon),y_\delta)=\rcrat(q+\epsilon)$ for such $\delta$.
+ But by definition,
+ \[\max(\rcrat(q+\epsilon),\rclim(y)) \jdeq \rclim(\lam{\delta} \max(\rcrat(q+\epsilon),y_\delta)).\]
+ Since the limit of an eventually constant Cauchy approximation is that constant, we have
+ \[\max(\rcrat(q+\epsilon),\rclim(y)) = \rcrat(q+\epsilon),\] hence $\rclim(y)\le \rcrat(q+\epsilon)$.
+
+ Now consider a general $u:\RC$.
+ Since $u\le \rcrat(q)$ means $\max(\rcrat(q),u)=\rcrat(q)$, the assumption $u\close\epsilon v$ and the Lipschitz property of $\max(\rcrat(q),-)$ imply $\max(\rcrat(q),v) \close\epsilon \rcrat(q)$.
+ Thus, since $\rcrat(q)\le \rcrat(q)$, the first case implies $\max(\rcrat(q),v) \le \rcrat(q+\epsilon)$, and hence $v\le \rcrat(q+\epsilon)$ by transitivity of $\le$.
+\end{proof}
+
+\begin{lem}\label{thm:RC-lt-open}
+ Suppose $q:\Q$ and $u:\RC$ satisfy $u<\rcrat(q)$. Then:
+ \begin{enumerate}
+ \item For any $v:\RC$ and $\epsilon:\Qp$, if $u\close\epsilon v$ then $v< \rcrat(q+\epsilon)$.\label{item:RCltopen1}
+ \item There exists $\epsilon:\Qp$ such that for any $v:\RC$, if $u\close\epsilon v$ we have $v<\rcrat(q)$.\label{item:RCltopen2}
+ \end{enumerate}
+\end{lem}
+\begin{proof}
 By definition, $u<\rcrat(q)$ means there is $r:\Q$ with $r<q$ and $u\le\rcrat(r)$.
 Then by \cref{thm:RC-le-grow}, $u\close\epsilon v$ implies $v\le\rcrat(r+\epsilon)$,
 and hence $v<\rcrat(q+\epsilon)$, which proves~\ref{item:RCltopen1}.
 For~\ref{item:RCltopen2}, take $\epsilon\defeq (q-r)/2$: if $u\close\epsilon v$ then
 $v\le\rcrat((q+r)/2)$, and $(q+r)/2 < q$, so $v<\rcrat(q)$.
\end{proof}

\symlabel{apart}
\indexdef{apartness}%
Multiplication on $\RC$ can be defined from the squaring map of \cref{RC-squaring},
since $u \cdot v = ((u+v)^2 - u^2 - v^2)/2$, and the ring axioms once more transfer
from $\Q$. As for inverses, a real can only be invertible if it is \define{apart}
from zero, where $(u \apart v) \defeq (u < v) \lor (v < u)$.

\begin{thm} \label{RC-inv-apart}
  A Cauchy real is invertible if and only if it is apart from $0$.
\end{thm}

\begin{proof}
  Suppose $u \cdot v = 1$. There merely exists $q : \Qp$ with $|v| < q$, and since
  $|u| \cdot |v| = 1$ this gives $|u| >
  1/q$, which is to say that $u \apart 0$.
+
+ For the converse we construct the inverse map
+ %
+ \begin{equation*}
+ ({\blank})^{-1} : \setof{ u : \RC | u \apart 0 } \to \RC
+ \end{equation*}
+ %
+ by patching together functions, similarly to the construction of squaring in
+ \cref{RC-squaring}. We only outline the main steps. For every $q : \Q$ let
+ %
+ \begin{equation*}
+ [q, \infty) \defeq \setof{u : \RC | q \leq u}
+ \qquad\text{and}\qquad
+ (-\infty, q] \defeq \setof{u : \RC | u \leq -q}.
+ \end{equation*}
+ %
+ Then, as $q$ ranges over $\Qp$, the types $(-\infty, q]$ and $[q, \infty)$ jointly cover
+ $\setof{u : \RC | u \apart 0}$. On each such $[q, \infty)$ and $(-\infty, q]$ the
+ inverse function is obtained by an application of \cref{RC-extend-Q-Lipschitz}
+ with Lipschitz constant $1/q^2$. Finally, \cref{lem:images_are_coequalizers}
+ guarantees that the inverse function factors uniquely through $\setof{ u : \RC | u
+ \apart 0 }$.
+\end{proof}
+
+We summarize the algebraic structure of $\RC$ with a theorem.
+
+\begin{thm} \label{RC-archimedean-ordered-field}
+ The Cauchy reals form an archimedean ordered field.
+\end{thm}
+
+\subsection{Cauchy reals are Cauchy complete}
+\label{sec:cauchy-reals-cauchy-complete}
+
We constructed $\RC$ by closing $\Q$ under limits of Cauchy approximations, so it had better
be the case that $\RC$ is Cauchy complete. Thanks to \cref{RC-sim-eqv-le} there is no
+difference between a Cauchy approximation $x : \Qp \to \RC$ as defined in the construction
+of $\RC$, and a Cauchy approximation in the sense of \cref{defn:cauchy-approximation}
+(adapted to $\RC$).
+
+Thus, given a Cauchy approximation $x : \Qp \to \RC$ it is quite natural to expect that
+$\rclim(x)$ is its limit, where the notion of limit is defined as in
+\cref{defn:cauchy-approximation}. But this is so by \cref{RC-sim-eqv-le} and
+\cref{thm:RC-sim-lim-term}. We have proved:
+
+\begin{thm}
+ Every Cauchy approximation in $\RC$ has a limit.
+\end{thm}
+
+An archimedean ordered field in which every Cauchy approximation has a limit is called
+\define{Cauchy complete}.
+\indexdef{Cauchy!completeness}%
+\indexdef{complete!ordered field, Cauchy}%
+\index{ordered field}%
+The Cauchy reals are the least such field.
+
+\begin{thm} \label{RC-initial-Cauchy-complete}
+ The Cauchy reals embed into every Cauchy complete ar\-chi\-me\-de\-an ordered field.
+\end{thm}
+
+\begin{proof}
+ \index{limit!of a Cauchy approximation}%
+ Suppose $F$ is a Cauchy complete archimedean ordered field. Because limits are unique,
+ there is an operator $\lim$ which takes Cauchy approximations in $F$ to their limits. We
+ define the embedding $e : \RC \to F$ by $(\RC, {\closesym})$-recursion as
+ %
+ \begin{equation*}
+ e(\rcrat(q)) \defeq q
+ \qquad\text{and}\qquad
+ e(\rclim(x)) \defeq \lim (e \circ x).
+ \end{equation*}
+ %
+ A suitable $\bsim$ on $F$ is
+ %
+ \begin{equation*}
+ (a \bsim_\epsilon b) \defeq |a - b| < \epsilon.
+ \end{equation*}
+ %
+ This is a separated relation because $F$ is archimedean. The rest of the clauses for
+ $(\RC, {\closesym})$-recursion are easily checked. One would also have to check that $e$ is
+ an embedding of ordered fields which fixes the rationals.
+\end{proof}
+
+\index{real numbers!Cauchy|)}%
+
+\section{Comparison of Cauchy and Dedekind reals}
+\label{sec:comp-cauchy-dedek}
+
+\index{real numbers!Dedekind|(}%
+\index{real numbers!Cauchy|(}%
+\index{depression|(}
+
+Let us also say something about the relationship between the Cauchy and Dedekind reals. By
+\cref{RC-archimedean-ordered-field}, $\RC$ is an archimedean ordered field. It is also
+admissible\index{ordered field!admissible} for $\Omega$, as can be easily checked. (In case $\Omega$ is the initial
+$\sigma$-frame
+\index{initial!sigma-frame@$\sigma$-frame}%
+\index{sigma-frame@$\sigma$-frame!initial}%
+it takes a simple induction, while in other cases it is immediate.)
+Therefore, by \cref{RD-final-field} there is an embedding of ordered fields
+%
+\begin{equation*}
+ \RC \to \RD
+\end{equation*}
+%
+which fixes the rational numbers.
+(We could also obtain this from \cref{RC-initial-Cauchy-complete,RD-cauchy-complete}.)
+In general we do not expect $\RC$ and $\RD$ to coincide
+without further assumptions.
+
+\begin{lem} \label{lem:untruncated-linearity-reals-coincide}
+ %
+ If for every $x : \RD$ there merely exists
+ %
+ \begin{equation}
+ \label{eq:untruncated-linearity}
+ c : \prd{q, r : \Q} (q < r) \to (q < x) + (x < r)
+ \end{equation}
+ %
+ then the Cauchy and Dedekind reals coincide.
+\end{lem}
+
+\begin{proof}
+ Note that the type in~\eqref{eq:untruncated-linearity} is an untruncated variant
+ of~\eqref{eq:RD-linear-order}, which states that~$<$ is a weak linear order.
+ We already know that $\RC$ embeds into $\RD$, so it suffices to show that every Dedekind
+ real merely is the limit of a Cauchy sequence\index{Cauchy!sequence} of rational numbers.
+
+ Consider any $x : \RD$. By assumption there merely exists $c$ as in the statement of the
+ lemma, and by inhabitation of cuts\index{cut!Dedekind} there merely exist $a, b : \Q$ such that $a < x < b$.
+ We construct a sequence\index{sequence} $f : \N \to \setof{ \pairr{q, r} \in \Q \times \Q | q < r }$ by
+ recursion:
+ %
+ \begin{enumerate}
+ \item Set $f(0) \defeq \pairr{a, b}$.
+ \item Suppose $f(n)$ is already defined as $\pairr{q_n, r_n}$ such that $q_n < r_n$.
+ Define $s \defeq (2 q_n + r_n)/3$ and $t \defeq (q_n + 2 r_n)/3$. Then $c(s,t)$
+ decides between $s < x$ and $x < t$. If it decides $s < x$ then we set $f(n+1) \defeq
+ \pairr{s, r_n}$, otherwise $f(n+1) \defeq \pairr{q_n, t}$.
+ \end{enumerate}
+ %
+ Let us write $\pairr{q_n, r_n}$ for the $n$-th term of the sequence~$f$. Then it is easy
+ to see that $q_n < x < r_n$ and $|q_n - r_n| \leq (2/3)^n \cdot |q_0 - r_0|$ for all $n
+ : \N$. Therefore $q_0, q_1, \ldots$ and $r_0, r_1, \ldots$ are both Cauchy sequences
+ converging to the Dedekind cut~$x$. We have shown that for every $x : \RD$ there merely
+ exists a Cauchy sequence converging to $x$.
+\end{proof}
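The recursion in this proof is an algorithm: each appeal to $c$ shrinks the interval to
$2/3$ of its length while keeping $x$ inside it. A minimal sketch in Python, where the
oracle $c$ is simulated by exact comparison with a known rational stand-in for $x$ (an
assumption that an abstract Dedekind real does not license):

```python
from fractions import Fraction

def trisect(c, a, b, n):
    """Iterate the trisection from the proof: c(s, t), called only with s < t,
    returns True to assert s < x and False to assert x < t."""
    q, r = Fraction(a), Fraction(b)
    for _ in range(n):
        s, t = (2 * q + r) / 3, (q + 2 * r) / 3
        if c(s, t):
            q = s      # the oracle decided s < x: move the left endpoint up
        else:
            r = t      # the oracle decided x < t: move the right endpoint down
    return q, r

# Stand-in oracle for x = 2/3, using an exact comparison that an abstract
# real number would not provide.
x = Fraction(2, 3)
q, r = trisect(lambda s, t: s < x, 0, 1, 20)
# Each step multiplies the interval length by exactly 2/3.
```

Either branch replaces $[q_n, r_n]$ by an interval of length $\tfrac{2}{3}(r_n - q_n)$,
which is the bound $|q_n - r_n| \leq (2/3)^n \cdot |q_0 - r_0|$ used in the proof.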
+
The lemma implies that either countable choice or excluded middle suffices for the coincidence
of $\RC$ and $\RD$.
+
+\begin{cor} \label{when-reals-coincide}
+ \index{axiom!of choice!countable}%
+ \index{excluded middle}%
+ If excluded middle or countable choice holds then $\RC$ and $\RD$ are equivalent.
+\end{cor}
+
+\begin{proof}
+ If excluded middle holds then $(x < y) \to (x < z) + (z < y)$ can be proved: either $x <
+ z$ or $\lnot (x < z)$. In the former case we are done, while in the latter we get $z <
+ y$ because $z \leq x < y$. Therefore, we get~\eqref{eq:untruncated-linearity} so that we
+ can apply \cref{lem:untruncated-linearity-reals-coincide}.
+
+ Suppose countable choice holds. The set $S = \setof{ \pairr{q, r} \in \Q \times \Q | q <
+ r }$ is equivalent to $\N$, so we may apply countable choice to the statement that $x$
+ is located,
+ %
+ \begin{equation*}
+ \fall{\pairr{q, r} : S} (q < x) \lor (x < r).
+ \end{equation*}
+ %
+ Note that $(q < x) \lor (x < r)$ is expressible as an existential statement $\exis{b :
 \bool} (b = \bfalse \to q < x) \land (b = \btrue \to x < r)$. The curried form of
+ the choice function is then precisely~\eqref{eq:untruncated-linearity} so that
+ \cref{lem:untruncated-linearity-reals-coincide} is applicable again.
+\end{proof}
+
+\index{real numbers!Dedekind|)}%
+\index{real numbers!Cauchy|)}%
+\index{real numbers!agree}%
+
+\index{depression|)}
+
+\section{Compactness of the interval}
+\label{sec:compactness-interval}
+
+\index{mathematics!classical|(}%
+\index{mathematics!constructive|(}%
+
+We already pointed out that our constructions of reals are entirely compatible with
+classical logic. Thus, by assuming the law of excluded middle~\eqref{eq:lem} and the axiom
+of choice~\eqref{eq:ac} we could develop classical analysis,\index{classical!analysis}\index{analysis!classical} which would essentially
+amount to copying any standard book on analysis.
+
+\index{analysis!constructive}%
+\index{constructive!analysis}%
+Nevertheless, anyone interested in computation, for example a numerical analyst, ought to
+be curious about developing analysis in a computationally meaningful setting. That
+analysis in a constructive setting is even possible was demonstrated by~\cite{Bishop1967}.
+As a sample of the differences and similarities between classical and constructive
+analysis we shall briefly discuss just one topic---compactness of the closed interval
+$[0,1]$ and a couple of theorems surrounding the concept.
+
+Compactness is no exception to the common phenomenon in constructive mathematics that
+classically equivalent notions bifurcate. The three most frequently used notions of
+compactness are:
+%
+\indexdef{compactness}%
+\begin{enumerate}
+\item \define{metrically compact:} ``Cauchy complete and totally bounded'',
+ \indexdef{metrically compact}%
+ \indexdef{compactness!metric}%
+\item \define{Bolzano--Weierstra\ss{} compact:} ``every sequence has a convergent subsequence'',
+ \index{compactness!Bolzano--Weierstrass@Bolzano--Weierstra\ss{}}%
+ \indexsee{Bolzano--Weierstrass@Bolzano--Weierstra\ss{}}{compactness}%
+ \index{sequence}%
+\item \define{Heine--Borel compact:} ``every open cover has a finite subcover''.
+ \index{compactness!Heine--Borel}%
+ \indexsee{Heine--Borel}{compactness}%
+\end{enumerate}
+%
+These are all equivalent in classical mathematics.
+Let us see how they fare in homotopy type theory. We can use either the Dedekind or the
+Cauchy reals, so we shall denote the reals just as~$\R$. We first recall several basic
+definitions.
+
+\indexsee{space!metric}{metric space}
+\index{metric space|(}%
+
+\begin{defn} \label{defn:metric-space}
+ A \define{metric space}
+ \indexdef{metric space}%
+ $(M, d)$ is a set $M$ with a map $d : M \times M \to \R$
+ satisfying, for all $x, y, z : M$,
+ %
+ \begin{align*}
+ d(x,y) &\geq 0, &
+ d(x,y) &= d(y,x), \\
+ d(x,y) &= 0 \Leftrightarrow x = y, &
+ d(x,z) &\leq d(x,y) + d(y,z).
+ \end{align*}
+ %
+\end{defn}
+
+\begin{defn} \label{defn:complete-metric-space}
+ A \define{Cauchy approximation}
+ \index{Cauchy!approximation}%
+ in $M$ is a sequence $x : \Qp \to M$ satisfying
+ %
+ \begin{equation*}
+ \fall{\delta, \epsilon} d(x_\delta, x_\epsilon) < \delta + \epsilon.
+ \end{equation*}
+ %
+ \index{limit!of a Cauchy approximation}%
+ The \define{limit} of a Cauchy approximation $x : \Qp \to M$ is a point $\ell : M$
+ satisfying
+ %
+ \begin{equation*}
+ \fall{\epsilon, \theta : \Qp} d(x_\epsilon, \ell) < \epsilon + \theta.
+ \end{equation*}
+ %
+ \indexdef{metric space!complete}%
+ \indexdef{complete!metric space}%
+ A \define{complete metric space} is one in which every Cauchy approximation has a limit.
+\end{defn}
+
+\begin{defn} \label{defn:total-bounded-metric-space}
+ For a positive rational $\epsilon$, an \define{$\epsilon$-net}
+ \indexdef{epsilon-net@$\epsilon$-net}%
+ in a metric space $(M,
+ d)$ is an element of
+ %
+ \begin{equation*}
+ \sm{n : \N}{x_1, \ldots, x_n : M}
+ \fall{y : M} \exis{k \leq n} d(x_k, y) < \epsilon.
+ \end{equation*}
+ %
+ In words, this is a finite sequence of points $x_1, \ldots, x_n$ such that every point
+ in $M$ merely is within $\epsilon$ of some~$x_k$.
+
+ A metric space $(M, d)$ is \define{totally bounded}
+ \indexdef{totally bounded metric space}%
+ \indexdef{metric space!totally bounded}%
+ when it has $\epsilon$-nets of all
+ sizes:
+ %
+ \begin{equation*}
+ \prd{\epsilon : \Qp}
+ \sm{n : \N}{x_1, \ldots, x_n : M}
+ \fall{y : M} \exis{k \leq n} d(x_k, y) < \epsilon.
+ \end{equation*}
+\end{defn}
+
+\begin{rmk}
+ In the definition of total boundedness we used sloppy notation $\sm{n : \N}{x_1, \ldots, x_n : M}$. Formally, we should have written $\sm{x : \lst{M}}$ instead,
+ where $\lst{M}$ is the inductive type of finite lists\index{type!of lists} from \cref{sec:bool-nat}.
+ However, that would make the rest of the statement a bit more cumbersome to express.
+\end{rmk}
+
+Note that in the definition of total boundedness we require pure existence of an
+$\epsilon$-net, not mere existence. This way we obtain a function which assigns to each
+$\epsilon : \Qp$ a specific $\epsilon$-net. Such a function might be called a ``modulus of
+total boundedness''. In general, when porting classical metric notions to homotopy type
+theory, we should use propositional truncation sparingly, typically so that we avoid
+asking for a non-constant map from $\R$ to $\Q$ or $\N$. For instance, here is the
+``correct'' definition of uniform continuity.
+
+\begin{defn} \label{defn:uniformly-continuous}
+ A map $f : M \to \R$ on a metric space is \define{uniformly continuous}
+ \indexdef{function!uniformly continuous}%
+ \indexdef{uniformly continuous function}%
+ when
+ %
+ \begin{equation*}
+ \prd{\epsilon : \Qp}
+ \sm{\delta : \Qp}
+ \fall{x, y : M}
+ d(x,y) < \delta \Rightarrow |f(x) - f(y)| < \epsilon.
+ \end{equation*}
+ %
+ In particular, a uniformly continuous map has a modulus of uniform continuity\indexdef{modulus!of uniform continuity},
+ which is a function that assigns to each $\epsilon$ a corresponding $\delta$.
+\end{defn}
+
+Let us show that $[0,1]$ is compact in the first sense.
+
+\begin{thm} \label{analysis-interval-ctb}
+ \index{compactness!metric}%
+ \index{interval!open and closed}%
+ The closed interval $[0,1]$ is complete and totally bounded.
+\end{thm}
+
+\begin{proof}
+ Given $\epsilon : \Qp$, there is $k : \N$ such that $2/k < \epsilon$, so we may take the
+ $\epsilon$-net $x_i = i/k$ for $i = 0, \ldots, k$. This is an $\epsilon$-net because,
+ for every $y : [0,1]$ there merely exists $i$ such that $0 \leq i \leq k$ and $(i -
+ 1)/k < y < (i+1)/k$, and so $|y - x_i| < 2/k < \epsilon$.
+
+ For completeness of $[0,1]$, consider a Cauchy approximation $x : \Qp \to
+ [0,1]$ and let $\ell$ be its limit in $\R$. Since $\max$ and $\min$ are Lipschitz maps,
+ the retraction $r : \R \to [0,1]$ defined by $r(x) \defeq \max(0, \min(1, x))$ commutes
+ with limits of Cauchy approximations, therefore
+ %
+ \begin{equation*}
+ r(\ell) =
+ r (\lim x) =
+ \lim (r \circ x) =
+ \lim x =
+ \ell,
+ \end{equation*}
+ %
+ which means that $0 \leq \ell \leq 1$, as required.
+\end{proof}
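The $\epsilon$-net in the first half of this proof is completely explicit and can be
transcribed directly. A minimal Python sketch (the grid spot-check at the end is only
illustrative, whereas the proof handles every $y$ in $[0,1]$):

```python
from fractions import Fraction

def epsilon_net(eps):
    """The net from the proof: the points i/k for i = 0, ..., k, where 2/k < eps."""
    k = 1
    while Fraction(2, k) >= eps:
        k += 1
    return [Fraction(i, k) for i in range(k + 1)]

net = epsilon_net(Fraction(1, 10))
# Spot-check the net property on a rational grid: every sampled y in [0, 1]
# lies within eps of some net point.
ok = all(min(abs(y - p) for p in net) < Fraction(1, 10)
         for y in (Fraction(j, 1000) for j in range(1001)))
```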
+
+We thus have at least one good notion of compactness in homotopy type theory.
+Unfortunately, it is limited to metric spaces because total boundedness is a metric
+notion. We shall consider the other two notions shortly, but first we prove that a
+uniformly continuous map on a totally bounded space has a \define{supremum},
+\indexsee{least upper bound}{supremum}%
+i.e.\ an upper bound which is less than or equal to all other upper bounds.
+
+\begin{thm} \label{ctb-uniformly-continuous-sup}
+ %
+ \indexdef{supremum!of uniformly continuous function}%
+ A uniformly continuous map $f : M \to \R$ on a totally bounded metric space
+ $(M, d)$ has a supremum $m : \R$. For every $\epsilon : \Qp$ there exists $u : M$ such
+ that $|m - f(u)| < \epsilon$.
+\end{thm}
+
+\begin{proof}
+ Let $h : \Qp \to \Qp$ be the modulus of uniform continuity of~$f$.
 We define an approximation $x : \Qp \to \R$ as follows: for any $\epsilon : \Qp$, total
 boundedness of $M$ gives an $h(\epsilon)$-net $y_0, \ldots, y_n$. Define
+ %
+ \begin{equation*}
+ x_\epsilon \defeq \max (f(y_0), \ldots, f(y_n)).
+ \end{equation*}
+ %
 We claim that $x$ is a Cauchy approximation. Consider any $\epsilon, \eta : \Qp$, so that
+ %
+ \begin{equation*}
+ x_\epsilon \jdeq \max (f(y_0), \ldots, f(y_n))
+ \quad\text{and}\quad
+ x_\eta \jdeq \max (f(z_0), \ldots, f(z_m))
+ \end{equation*}
+ %
+ for some $h(\epsilon)$-net $y_0, \ldots, y_n$ and $h(\eta)$-net $z_0, \ldots, z_m$.
+ Every $z_i$ is merely $h(\epsilon)$-close to some $y_j$, therefore $|f(z_i) - f(y_j)| <
+ \epsilon$, from which we may conclude that
+ %
+ \begin{equation*}
+ f(z_i) < \epsilon + f(y_j) \leq \epsilon + x_\epsilon,
+ \end{equation*}
+ %
 therefore $x_\eta < \epsilon + x_\epsilon$. Symmetrically we obtain $x_\epsilon < \eta +
 x_\eta$, therefore $|x_\eta - x_\epsilon| < \eta + \epsilon$.
+
+ We claim that $m \defeq \lim x$ is the supremum of~$f$. To prove that $f(x) \leq m$ for
+ all $x : M$ it suffices to show $\lnot (m < f(x))$. So suppose to the contrary that $m <
+ f(x)$. There is $\epsilon : \Qp$ such that $m + \epsilon < f(x)$. But now merely for
 some $y_i$ participating in the definition of $x_\epsilon$ we get $|f(x) - f(y_i)| <
 \epsilon$, therefore $m < f(x) - \epsilon < f(y_i) \leq m$, a contradiction.
+
+ We finish the proof by showing that $m$ satisfies the second part of the theorem, because
+ it is then automatically a least upper bound. Given any $\epsilon : \Qp$, on one hand
 $|m - x_{\epsilon/2}| < 3 \epsilon/4$, and on the other $|x_{\epsilon/2} - f(y_i)| <
+ \epsilon/4$ merely for some $y_i$ participating in the definition of $x_{\epsilon/2}$,
+ therefore by taking $u \defeq y_i$ we obtain $|m - f(u)| < \epsilon$ by triangle
+ inequality.
+\end{proof}
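The construction of $x_\epsilon$ can be run directly when $M = [0,1]$. A hedged Python
sketch, using the example $f(x) = x(1-x)$, which satisfies $|f(x)-f(y)| \leq |x-y|$ on
$[0,1]$, so that $h(\epsilon) \defeq \epsilon$ serves as a modulus of uniform continuity;
the true supremum is $1/4$:

```python
from fractions import Fraction

def sup_approx(f, h, eps):
    """x_eps from the proof: the maximum of f over an h(eps)-net of [0, 1].
    The net is the explicit one from the compactness proof: i/k with 2/k < h(eps)."""
    delta = h(eps)
    k = 1
    while Fraction(2, k) >= delta:
        k += 1
    return max(f(Fraction(i, k)) for i in range(k + 1))

# f(x) = x(1 - x) with modulus h(eps) = eps; every point of [0, 1] is within
# h(eps) of a net point, so the maximum over the net is within eps of sup f.
m = sup_approx(lambda x: x * (1 - x), lambda e: e, Fraction(1, 100))
```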
+
+Now, if in \cref{ctb-uniformly-continuous-sup} we also knew that $M$ were complete, we
+could hope to weaken the assumption of uniform continuity to continuity, and strengthen
+the conclusion to existence of a point at which the supremum is attained. The usual proofs
+of these improvements rely on the facts that in a complete totally bounded space
+%
+\begin{enumerate}
+\item continuity implies uniform continuity, and
+\item every sequence has a convergent subsequence.
+\end{enumerate}
+%
+The first statement follows easily from Heine--Borel compactness, and the second is just
+Bolzano--Weierstra\ss{} compactness.
+\index{compactness!Bolzano--Weierstrass@Bolzano--Weierstra\ss{}}%
+Unfortunately, these are both somewhat problematic. Let
+us first show that Bolzano--Weierstra\ss{} compactness implies an instance of excluded middle
+known as the \define{limited principle of omniscience}:
+\indexsee{axiom!limited principle of omniscience}{limited principle of omniscience}%
+\indexdef{limited principle of omniscience}%
+for every $\alpha : \N \to \bool$,
+%
+\begin{equation} \label{eq:lpo}
+ \Parens{\sm{n : \N} \alpha(n) = \btrue} +
+ \Parens{\prd{n : \N} \alpha(n) = \bfalse}.
+\end{equation}
+%
+Computationally speaking, we would not expect this principle to hold, because it asks us to decide
+whether infinitely many values of a function are~$\bfalse$.
+
+\begin{thm} \label{analysis-bw-lpo}
+ %
+ Bolzano--Weierstra\ss{} compactness of $[0,1]$ implies the limited principle of omniscience.
+ \index{compactness!Bolzano--Weierstrass@Bolzano--Weierstra\ss{}}%
+\end{thm}
+
+\begin{proof}
+ Given any $\alpha : \N \to \bool$, define the sequence\index{sequence} $x : \N \to [0,1]$ by
+ %
+ \begin{equation*}
+ x_n \defeq
+ \begin{cases}
+ 0 & \text{if $\alpha(k) = \bfalse$ for all $k < n$,}\\
+ 1 & \text{if $\alpha(k) = \btrue$ for some $k < n$}.
+ \end{cases}
+ \end{equation*}
+ %
+ If the Bolzano--Weierstra\ss{} property holds, there exists a strictly increasing $f : \N \to
+ \N$ such that $x \circ f$ is a Cauchy sequence\index{Cauchy!sequence}. For a sufficiently large $n :
+ \N$ the $n$-th term $x_{f(n)}$ is within $1/6$ of its limit. Either $x_{f(n)} < 2/3$ or
+ $x_{f(n)} > 1/3$. If $x_{f(n)} < 2/3$ then~$x_n$ converges to $0$ and so $\prd{n : \N}
+ \alpha(n) = \bfalse$. If $x_{f(n)} > 1/3$ then $x_{f(n)} = 1$, therefore $\sm{n : \N}
+ \alpha(n) = \btrue$.
+\end{proof}
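The sequence $x$ built in this proof is computable from $\alpha$ even though its limiting
behaviour is not. A small Python sketch (the sample $\alpha$ is an arbitrary choice of
ours):

```python
def x_seq(alpha, n):
    """x_n from the proof: 0 while alpha(k) is false for all k < n, and 1 as
    soon as alpha(k) is true for some k < n."""
    return 1 if any(alpha(k) for k in range(n)) else 0

alpha = lambda k: k == 5          # sample alpha with a single true value
xs = [x_seq(alpha, n) for n in range(10)]
# Each x_n is decidable, but deciding which of the two limiting cases occurs
# for an arbitrary alpha is exactly the limited principle of omniscience.
```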
+
+While we might not mourn Bolzano--Weierstra\ss{} compactness too much, it seems harder to live
+without Heine--Borel compactness, as attested by the fact that both classical mathematics
+and Brouwer's Intuitionism accepted it. As we do not want to wade too deeply into general
+topology, we shall work with basic open sets. In the case of $\R$ these are the open
+intervals with rational endpoints. A family of such intervals, indexed by a type~$I$,
+would be a map
+%
+\begin{equation*}
+ \mathcal{F} : I \to \setof{(q, r) : \Q \times \Q | q < r},
+\end{equation*}
+%
+with the idea that a pair of rationals $(q, r)$ with $q < r$ determines the type $\setof{ x : \R | q < x < r}$. It is slightly more convenient to allow degenerate intervals as well, so we take a
+\define{family of basic intervals}
+\indexdef{family!of basic intervals}%
+\indexdef{interval!family of basic}%
+to be a map
+%
+\begin{equation*}
+ \mathcal{F} : I \to \Q \times \Q.
+\end{equation*}
+%
+To be quite precise, a family is a dependent pair $(I, \mathcal{F})$, not just
+$\mathcal{F}$. A \define{finite family of basic intervals} is one indexed by $\setof{ m :
+ \N | m < n}$ for some $n : \N$. We usually present it by a finite list $[(q_0, r_0), \ldots,
+(q_{n-1}, r_{n-1})]$. Finally, a \define{finite subfamily}\indexdef{subfamily, finite, of intervals} of $(I, \mathcal{F})$ is given
+by a list of indices $[i_1, \ldots, i_n]$ which then determine the finite family
+$[\mathcal{F}(i_1), \ldots, \mathcal{F}(i_n)]$.
+
+As long as we are aware of the distinction between a pair $(q, r)$ and the corresponding
+interval $\setof{ x : \R | q < x < r}$, we may safely use the same notation $(q, r)$ for
+both. Intersections\indexdef{intersection!of intervals} and inclusions\indexdef{inclusion!of intervals}\indexdef{containment!of intervals} of intervals are expressible in terms of their
+endpoints:
+%
+\symlabel{interval-intersection}
+\symlabel{interval-subset}
+\begin{align*}
+ (q, r) \cap (s, t) &\ \defeq\ (\max(q, s), \min(r, t)),\\
+ (q, r) \subseteq (s, t) &\ \defeq\ (q < r \Rightarrow s \leq q < r \leq t).
+\end{align*}
+%
+We say that $\intfam{i}{I}{(q_i, r_i)}$ \define{(pointwise) covers $[a,b]$}
+\indexdef{interval!pointwise cover}%
+\indexdef{cover!pointwise}%
+\indexdef{pointwise!cover}%
+when
+%
+\begin{equation} \label{eq:cover-pointwise-truncated}
+ \fall{x : [a,b]} \exis{i : I} q_i < x < r_i.
+\end{equation}
+%
+The \define{Heine--Borel compactness for $[0,1]$}
+\indexdef{compactness!Heine--Borel}%
+states that every covering family of $[0,1]$
+merely has a finite subfamily which still covers $[0,1]$.
+
+\index{depression}
+\begin{thm} \label{classical-Heine-Borel}
+ \index{excluded middle}%
+ If excluded middle holds then $[0,1]$ is Heine--Borel compact.
+\end{thm}
+
+\begin{proof}
+ Assume for the purpose of reaching a contradiction that a family $\intfam{i}{I}{(a_i,
+ b_i)}$ covers $[0,1]$ but no finite subfamily does. We construct a sequence of closed
+ intervals $[q_n, r_n]$ which are nested, their sizes shrink to~$0$, and none of them is covered
+ by a finite subfamily of $\intfam{i}{I}{(a_i, b_i)}$.
+
+ We set $[q_0, r_0] \defeq [0,1]$. Assuming $[q_n, r_n]$ has been constructed, let $s
+ \defeq (2 q_n + r_n)/3$ and $t \defeq (q_n + 2 r_n)/3$. Both $[q_n, t]$ and $[s, r_n]$
+ are covered by $\intfam{i}{I}{(a_i, b_i)}$, but they cannot both have a finite subcover,
+ or else so would $[q_n, r_n]$. Either $[q_n, t]$ has a finite subcover or it does not.
+ If it does we set $[q_{n+1}, r_{n+1}] \defeq [s, r_n]$, otherwise we set $[q_{n+1},
+ r_{n+1}] \defeq [q_n, t]$.
+
+ The sequences $q_0, q_1, \ldots$ and $r_0, r_1, \ldots$ are both Cauchy and they
+ converge to a point $x : [0,1]$ which is contained in every $[q_n, r_n]$.
+ There merely exists $i : I$ such that $a_i < x < b_i$. Because the sizes of the
+ intervals $[q_n, r_n]$ shrink to zero, there is $n : \N$ such that $a_i < q_n \leq x
+ \leq r_n < b_i$, but this means that $[q_n, r_n]$ is covered by a single interval $(a_i,
+ b_i)$, while at the same time it has no finite subcover. A contradiction.
+\end{proof}
+
+Without excluded middle, or a pinch of Brouwerian Intuitionism, we seem to be stuck.
+Nevertheless, Heine--Borel compactness of $[0,1]$ \emph{can} be recovered in a constructive
+setting, in a fashion that is still compatible with classical mathematics! For this to be
+done, we need to revisit the notion of cover. The trouble with
+\eqref{eq:cover-pointwise-truncated} is that the truncated existential allows a space to
+be covered in any haphazard way, and so computationally speaking, we stand no chance of
+merely extracting a finite subcover. By removing the truncation we get
+%
+\begin{equation} \label{eq:cover-pointwise}
+ \prd{x : [0,1]} \sm{i : I} q_i < x < r_i,
+\end{equation}
+%
+which might help, were it not too demanding of covers. With this definition we
+could not even show that $(0,3)$ and $(2,5)$ cover $[1,4]$ because that would amount
+to exhibiting a non-constant map $[1,4] \to \bool$, see
+\cref{ex:reals-non-constant-into-Z}. Here we can take a lesson from ``pointfree topology''
+\index{pointfree topology}%
+\index{topology!pointfree}%
+(i.e.\ locale theory):
+\index{locale}%
+the notion of cover ought to be expressed in terms of open sets, without
+reference to points. Such a ``holistic'' view of space will then allow us to analyze the
+notion of cover, and we shall be able to recover Heine--Borel compactness. Locale
+theory uses power sets,
+\index{power set}%
+which we could obtain by assuming propositional resizing;
+\index{propositional!resizing}%
+but instead we can steal ideas from the predicative cousin of locale theory,
+\index{mathematics!predicative}%
+which is called ``formal topology''.
+\index{formal!topology}%
+
+\index{acceptance|(}
+
+Suppose that we have a family $\pairr{I, \mathcal{F}}$ and an interval $(a, b)$. How might
+we express the fact that $(a,b)$ is covered by the family, without referring to points?
+Here is one: if $(a, b)$ equals some $\mathcal{F}(i)$ then it is covered by the family.
+And another one: if $(a,b)$ is covered by some other family $(J, \mathcal{G})$, and in
turn each $\mathcal{G}(j)$ is covered by $\pairr{I, \mathcal{F}}$, then $(a,b)$ is covered by
$\pairr{I, \mathcal{F}}$. Notice that we are listing \emph{rules} which can be used to
+\emph{deduce} that $\pairr{I, \mathcal{F}}$ covers $(a,b)$. We should find sufficiently
+good rules and turn them into an inductive definition.
+
+\begin{defn} \label{defn:inductive-cover}
+ %
+ The \define{inductive cover $\cover$}
+ \indexdef{inductive!cover}%
+ \indexdef{cover!inductive}%
+ is a mere relation
+ %
+ \begin{equation*}
+ {\cover} : (\Q \times \Q) \to \Parens{\sm{I : \type} (I \to \Q \times \Q)} \to \prop
+ \end{equation*}
+ %
+ defined inductively by the following rules, where $q, r, s, t$ are rational numbers and
+ $\pairr{I, \mathcal{F}}$, $\pairr{J, \mathcal{G}}$ are families of basic intervals:
+ %
+ \begin{enumerate}
+
+ \item \emph{reflexivity:}
+ \index{reflexivity!of inductive cover}%
+ $\mathcal{F}(i) \cover \pairr{I, \mathcal{F}}$ for all $i : I$,
+
+ \item \emph{transitivity:}
+ \index{transitivity!of inductive cover}%
+ if $(q, r) \cover \pairr{J, \mathcal{G}}$ and $\fall{j : J} \mathcal{G}(j) \cover \pairr{I,\mathcal{F}}$
+ then $(q, r) \cover \pairr{I, \mathcal{F}}$,
+
+ \item \emph{monotonicity:}
+ \index{monotonicity!of inductive cover}%
+ if $(q, r) \subseteq (s, t)$ and $(s,t) \cover \pairr{I, \mathcal{F}}$ then $(q, r) \cover
+ \pairr{I, \mathcal{F}}$,
+
+ \item \emph{localization:}
+ \index{localization of inductive cover}%
+ if $(q, r) \cover (I, \mathcal{F})$ then $(q, r) \cap (s, t) \cover
+ \intfam{i}{I}{(\mathcal{F}(i) \cap (s, t))}$.
+
+ \item \label{defn:inductive-cover-interval-1}
    if $q < s < t < r$ then $(q, r) \cover [(q, t), (s, r)]$,
+
+ \item \label{defn:inductive-cover-interval-2}
+ $(q, r) \cover \intfam{u}{\setof{ (s,t) : \Q \times \Q | q < s < t < r}}{u}$.
+ \end{enumerate}
+\end{defn}
+
+The definition should be read as a higher-inductive type in which the listed rules are
+point constructors, and the type is $(-1)$-truncated. The first four clauses are of a
+general nature and should be intuitively clear. The last two clauses are specific to the
+real line: one says that an interval may be covered by two intervals if they overlap,
+while the other one says that an interval may be covered from within. Incidentally, if $r
+\leq q$ then $(q, r)$ is covered by the empty family by the last clause.
+
+Inductive covers enjoy the Heine--Borel property, the proof of which requires a lemma.
+
+\begin{lem} \label{reals-formal-topology-locally-compact}
+ Suppose $q < s < t < r$ and $(q, r) \cover \pairr{I, \mathcal{F}}$. Then there merely
+ exists a finite subfamily of $\pairr{I, \mathcal{F}}$ which inductively covers $(s, t)$.
+\end{lem}
+
+\begin{proof}
+ We prove the statement by induction on $(q, r) \cover \pairr{I, \mathcal{F}}$. There are
+ six cases:
+ %
+ \begin{enumerate}
+
+ \item Reflexivity: if $(q, r) = \mathcal{F}(i)$ then by monotonicity $(s, t)$ is covered
+ by the finite subfamily $[\mathcal{F}(i)]$.
+
+ \item Transitivity:
+ suppose $(q, r) \cover \pairr{J, \mathcal{G}}$ and $\fall{j : J} \mathcal{G}(j) \cover
+ \pairr{I, \mathcal{F}}$. By the inductive hypothesis there merely exists
+ $[\mathcal{G}(j_1), \ldots, \mathcal{G}(j_n)]$ which covers $(s, t)$.
+ Again by the inductive hypothesis, each of $\mathcal{G}(j_k)$ is covered by a finite
+ subfamily of $\pairr{I, \mathcal{F}}$, and we can collect these into a finite
+ subfamily which covers $(s, t)$.
+
+ \item Monotonicity:
+ if $(q, r) \subseteq (u, v)$ and $(u, v) \cover \pairr{I, \mathcal{F}}$ then we may
+ apply the inductive hypothesis to $(u, v) \cover \pairr{I, \mathcal{F}}$ because $u <
+ s < t < v$.
+
+ \item Localization:
+ suppose $(q', r') \cover \pairr{I, \mathcal{F}}$ and $(q, r) = (q', r') \cap (a, b)$.
+ Because $q' < s < t < r'$, by the inductive hypothesis there is a finite subcover
+ $[\mathcal{F}(i_1), \ldots, \mathcal{F}(i_n)]$ of $(s, t)$. We also know that $a < s <
+ t < b$, therefore $(s, t) = (s, t) \cap (a, b)$ is covered by
+ $[\mathcal{F}(i_1) \cap (a,b), \ldots, \mathcal{F}(i_n) \cap (a,b)]$, which is a
+ finite subfamily of $\intfam{i}{I}{(\mathcal{F}(i) \cap (a, b))}$.
+
+ \item If $(q, r) \cover [(q, v), (u, r)]$ for some $q < u < v < r$ then by monotonicity
+ $(s, t) \cover [(q, v), (u, r)]$.
+
+ \item Finally, $(s, t) \cover \intfam{z}{\setof{ (u,v):\Q \times \Q | q < u < v < r}}{z}$ by
+ reflexivity. \qedhere
+ \end{enumerate}
+\end{proof}
+
+Say that \define{$\pairr{I, \mathcal{F}}$ inductively covers
+ $[a, b]$} when there merely exists $\epsilon : \Qp$ such that $(a - \epsilon, b +
+\epsilon) \cover \pairr{I, \mathcal{F}}$.
+
+\begin{cor} \label{interval-Heine-Borel}
+ \index{compactness!Heine-Borel}%
+ \index{interval!open and closed}%
+ A closed interval is Heine--Borel compact for inductive covers.
+\end{cor}
+
+\begin{proof}
+ Suppose $[a, b]$ is inductively covered by $\pairr{I, \mathcal{F}}$, so there merely is
+ $\epsilon : \Qp$ such that $(a - \epsilon, b + \epsilon) \cover \pairr{I, \mathcal{F}}$.
+ By \cref{reals-formal-topology-locally-compact} there is a finite subcover of
+ $(a - \epsilon/2, b + \epsilon/2)$, which is therefore a finite subcover of $[a, b]$.
+\end{proof}
+
+Experience from formal topology\index{topology!formal} shows that the rules for inductive covers are sufficient
+for a constructive development of pointfree topology. But we can also provide our own
+evidence that they are a reasonable notion.
+
+\begin{thm} \label{inductive-cover-classical}
+ \mbox{}
+ %
+ \begin{enumerate}
+ \item An inductive cover is also a pointwise cover.
+ \item Assuming excluded middle, a pointwise cover is also an inductive cover.
+ \end{enumerate}
+\end{thm}
+
+\begin{proof}
+ \mbox{}
+ %
+ \begin{enumerate}
+
+ \item
+ Consider a family of basic intervals $\pairr{I, \mathcal{F}}$, where we write $(q_i,
+ r_i) \defeq \mathcal{F}(i)$, an interval $(a,b)$ inductively covered by $\pairr{I,
+ \mathcal{F}}$, and $x$ such that $a < x < b$.
+ %
+ We prove by induction on $(a,b) \cover \pairr{I, \mathcal{F}}$ that there merely
+ exists $i : I$ such that $q_i < x < r_i$. Most cases are pretty obvious, so we show
+ just two. If $(a,b) \cover \pairr{I, \mathcal{F}}$ by reflexivity, then there merely
+ is some $i : I$ such that $(a,b) = (q_i, r_i)$ and so $q_i < x < r_i$. If $(a,b)
+ \cover \pairr{I, \mathcal{F}}$ by transitivity via $\intfam{j}{J}{(s_j, t_j)}$ then by
+ the inductive hypothesis there merely is $j : J$ such that $s_j < x < t_j$, and then since
+ $(s_j, t_j) \cover \pairr{I, \mathcal{F}}$ again by the inductive hypothesis there merely
+ exists $i : I$ such that $q_i < x < r_i$. Other cases are just as exciting.
+
+ \item Suppose $\intfam{i}{I}{(q_i, r_i)}$ pointwise covers $(a, b)$. By
+ \cref{defn:inductive-cover-interval-2} of \cref{defn:inductive-cover} it
+ suffices to show that $\intfam{i}{I}{(q_i, r_i)}$ inductively covers $(c, d)$ whenever
+ $a < c < d < b$, so consider such $c$ and $d$. By \cref{classical-Heine-Borel}
+ there is a finite subfamily $[i_1, \ldots, i_n]$ which already pointwise covers $[c,
+ d]$, and hence $(c,d)$. Let $\epsilon : \Qp$ be a Lebesgue number
+ \index{Lebesgue number}
+ for $(q_{i_1}, r_{i_1}), \ldots, (q_{i_n}, r_{i_n})$ as in
+ \cref{ex:finite-cover-lebesgue-number}. There is a positive $k : \N$ such that $2 (d - c)/k
+ < \min(1, \epsilon)$. For $0 \leq i \leq k$ let
+ %
+ \begin{equation*}
+ c_k \defeq ((k - i) c + i d) / k.
+ \end{equation*}
+ %
+ The intervals $(c_0, c_2)$, $(c_1, c_3)$, \dots, $(c_{k-2}, c_k)$ inductively cover
+ $(c,d)$ by repeated use of transitivity and~\cref{defn:inductive-cover-interval-1}
+ in \cref{defn:inductive-cover}. Because their widths are below $\epsilon$ each of
+ them is contained in some $(q_i, r_i)$, and we may use transitivity and monotonicity to
+ conclude that $\intfam{i}{I}{(q_i, r_i)}$ inductively cover $(c, d)$. \qedhere
+ \end{enumerate}
+\end{proof}
+
+The upshot of the previous theorem is that, as far as classical mathematics is concerned,
+there is no difference between a pointwise and an inductive cover. In particular, since it
+is consistent to assume excluded middle in homotopy type theory, we cannot exhibit an
+inductive cover which fails to be a pointwise cover. Or to put it in a different way, the
+difference between pointwise and inductive covers is not what they cover but in the
+\emph{proofs} that they cover.
+
+We could write another book by going on like this, but let us stop here and hope that we
+have provided ample justification for the claim that analysis can be developed in homotopy
+type theory. The curious reader should consult \cref{ex:mean-value-theorem} for
+constructive versions of the intermediate value theorem.
+
+\index{acceptance|)}
+
+\index{mathematics!classical|)}%
+\index{mathematics!constructive|)}%
+
+\section{The surreal numbers}
+\label{sec:surreals}
+
+\index{surreal numbers|(}%
+
+In this section we consider another example of a higher inductive-in\-duc\-tive type, which draws together many of our threads: Conway's field \NO of \emph{surreal numbers}~\cite{conway:onag}.
+The surreal numbers are the natural common generalization of the (Dedekind) real numbers (\cref{sec:dedekind-reals}) and the ordinal numbers (\cref{sec:ordinals}).
+Conway, working in classical\index{mathematics!classical} mathematics with excluded middle and Choice, defines a surreal number to be a pair of \emph{sets} of surreal numbers, written $\surr L R$, such that every element of $L$ is strictly less than every element of $R$.
+This obviously looks like an inductive definition, but there are three issues with regarding it as such.
+
+Firstly, the definition requires the relation of (strict) inequality between surreals, so that relation must be defined simultaneously with the type \NO of surreals.
+(Conway avoids this issue by first defining \emph{games}\index{game!Conway}, which are like surreals but omit the compatibility condition on $L$ and $R$.)
+As with the relation $\closesym$ for the Cauchy reals, this simultaneous definition could \emph{a priori} be either inductive-inductive or inductive-recursive.
+We will choose to make it inductive-inductive, for the same reasons we made that choice for $\closesym$.
+
+Moreover, we will define strict inequality $<$ and non-strict inequality $\le$ for surreals separately (and mutually inductively).
+Conway defines $<$ in terms of $\le$, in a way which is sensible classically but not constructively.
+\index{mathematics!constructive}%
+Furthermore, a negative definition of $<$ would make it unacceptable as a hypothesis of the constructor of a higher inductive type (see \cref{sec:strictly-positive}).
+
+Secondly, Conway says that $L$ and $R$ in $\surr L R$ should be ``sets of surreal numbers'', but the naive meaning of this as a predicate $\NO\to\prop$ is not positive, hence cannot be used as input to an inductive constructor.
+However, this would not be a good type-theoretic translation of what Conway means anyway, because in set theory the surreal numbers form a proper class, whereas the sets $L$ and $R$ are true (small) sets, not arbitrary subclasses of \NO.
+In type theory, this means that \NO will be defined relative to a universe \UU, but will itself belong to the next higher universe $\UU'$, like the sets \ord and \card of ordinals and cardinals, the cumulative hierarchy $V$, or even the Dedekind reals in the absence of propositional resizing.
+\index{propositional!resizing}%
+We will then require the ``sets'' $L$ and $R$ of surreals to be \UU-small, and so it is natural to represent them by \emph{families} of surreals indexed by some \UU-small type.
+(This is all exactly the same as what we did with the cumulative hierarchy in \cref{sec:cumulative-hierarchy}.)
+That is, the constructor of surreals will have type
+\[ \prd{\LL,\RR:\UU} (\LL\to\NO) \to (\RR\to \NO) \to (\text{some condition}) \to \NO \]
+which is indeed strictly positive.\index{strict!positivity}
+
+Finally, after giving the mutual definitions of \NO and its ordering, Conway declares two surreal numbers $x$ and $y$ to be \emph{equal} if $x\le y$ and $y\le x$.
+This is naturally read as passing to a quotient of the set of ``pre-surreals'' by an equivalence relation.
+%(In set-theoretic foundations, one has to us an additional trick to deal with large equivalence classes.)
+However, in the absence of the axiom of choice, such a quotient presents the same problem as the quotient in the usual construction of Cauchy reals: it will no longer be the case that a pair of families \emph{of surreals} yield a new surreal $\surr L R$, since we cannot necessarily ``lift'' $L$ and $R$ to families of pre-surreals.
+Of course, we can solve this problem in the same way we did for Cauchy reals, by using a \emph{higher} inductive-inductive definition.
+
+\begin{defn}\label{defn:surreals}
+ The type \NO of \define{surreal numbers},
+ \indexdef{surreal numbers}%
+ \indexsee{number!surreal}{surreal numbers}%
+ along with the relations $\mathord<:\NO\to\NO\to\type$ and $\mathord\le:\NO\to\NO\to\type$, are defined higher inductive-inductively as follows.
+ The type \NO has the following constructors.
+ \begin{itemize}
+ \item For any $\LL,\RR:\UU$ and functions $\LL\to \NO$ and $\RR\to \NO$, whose values we write as $x^L$ and $x^R$ for $L:\LL$ and $R:\RR$ respectively, if $\fall{L:\LL}{R:\RR} x^Ly$ iff ($x\ge y$ and $y\not\ge x$).
+\end{itemize}
+The inclusion of $x\ge y$ in the definition of $x>y$ is unnecessary if all objects are [surreal] numbers rather than ``games''\index{game!Conway}.
+Thus, Conway's $<$ is just the negation of his $\ge$, so that his condition for $\surr L R$ to be a surreal is the same as ours.
+Negating Conway's $\le$ and canceling double negations, we arrive at our definition of $<$, and we can then reformulate his $\le$ in terms of $<$ without negations.
+
+We can immediately populate $\NO$ with many surreal numbers.
+Like Conway, we write
+\symlabel{surreal-cut}
+\[\surr{x,y,z,\dots}{u,v,w,\dots}\]
+for the surreal number defined by a cut where $\LL\to\NO$ and $\RR\to\NO$ are families described by $x,y,z,\dots$ and $u,v,w,\dots$.
+Of course, if $\LL$ or $\RR$ are $\emptyt$, we leave the corresponding part of the notation empty.
+There is an unfortunate clash with the standard notation $\setof{x:A | P(x)}$ for subsets, but we will not use the latter in this section.
+\begin{itemize}
+\item We define $\iota_{\nat}:\nat\to\NO$ recursively by
+ \begin{align*}
+ \iota_{\nat}(0) &\defeq \surr{}{},\\
+ \iota_\nat(\suc(n)) &\defeq \surr{\iota_\nat(n)}{}.
+ \end{align*}
+ That is, $\iota_\nat(0)$ is defined by the cut consisting of $\emptyt\to\NO$ and $\emptyt\to\NO$.
+ Similarly, $\iota_\nat(\suc(n))$ is defined by $\unit\to\NO$ (picking out $\iota_\nat(n)$) and $\emptyt\to\NO$.
+\item Similarly, we define $\iota_{\Z}:\Z\to\NO$ using the sign-case recursion principle (\cref{thm:sign-induction}):
+ \begin{align*}
+ \iota_{\Z}(0) &\defeq \surr{}{},\\
+ \iota_\Z(n+1) &\defeq \surr{\iota_\Z(n)}{} & &\text{$n\ge 0$,}\\
+ \iota_\Z(n-1) &\defeq \surr{}{\iota_\Z(n)} & &\text{$n\le 0$.}
+ \end{align*}
+\item By a \define{dyadic rational}
+ \indexdef{rational numbers!dyadic}%
+ \indexsee{dyadic rational}{rational numbers, dyadic}%
+ we mean a pair $(a,n)$ where $a:\Z$ and $n:\nat$, and such that if $n>0$ then $a$ is odd.
+ We will write it as $a/2^n$, and identify it with the corresponding rational number.
+ If $\Q_D$ denotes the set of dyadic rationals, we define $\iota_{\Q_D}:\Q_D\to\NO$ by induction on $n$:
+ \begin{align*}
+ \iota_{\Q_D}(a/2^0) &\defeq \iota_\Z(a),\\
+ \iota_{\Q_D}(a/2^n) &\defeq \surr{\iota_{\Q_D}(a/2^n - 1/2^n)}{\iota_{\Q_D}(a/2^n + 1/2^n)},
+ \quad \text{for $n>0$.}
+ \end{align*}
+ Here we use the fact that if $n>0$ and $a$ is odd, then $a/2^n \pm 1/2^n$ is a dyadic rational with a smaller denominator than $a/2^n$.
+\item We define $\iota_{\RD}:\RD\to\NO$,\label{reals-into-surreals} where $\RD$ is (any version of) the Dedekind reals from \cref{sec:dedekind-reals}, by
+ \begin{align*}
+ \iota_{\RD}(x) &\defeq
+ \surr{q\in\Q_D \text{ such that } q z$.
+We equip $A$ with the relations $\le$ and $<$ induced from $\NO$, so that antisymmetry is obvious.
+For the primary clause of the inner recursion, we suppose also $y$ defined by a cut, with each $x+y^L$ and $x+y^R$ defined and satisfying $x^L+y^L < x+y^L$, $x^L+y^R < x+y^R$, $x+y^L < x^R + y^L$, and $x+y^R < x^R+y^R$ (these come from the additional conditions imposed on elements of $A(y)$), and also $x+y^L < x+y^R$ (since the elements $x+y^L$ and $x+y^R$ of $A(y)$ form a dependent cut).
+Now we give Conway's definition:
+\[ x+y \defeq \surr{x^L+y, x+y^L}{x^R+y,x+y^R}. \]
+In other words, the left options of $x+y$ are all numbers of the form $x^L+y$ for some left option $x^L$, or $x+y^L$ for some left option $y^L$.
+We must show that each of these left options is less than each of these right options:
+\begin{itemize}
+\item $x^L+y < x^R+y$ by the outer inductive hypothesis.
+\item $x^L+y < x^L + y^R < x + y^R$, the first since $(x^L+\blank)$ preserves inequalities, and the second since $x+y^R : A(y^R)$.
+\item $x+y^L < x^R+ y^L < x^R + y$, the first since $x+y^L : A(y^L)$ and the second since $(x^R+\blank)$ preserves inequalities.
+\item $x+y^L < x+y^R$ by the inner inductive hypothesis (specifically, the fact that we have a dependent cut).
+\end{itemize}
+We also have to show that $x+y$ thusly defined lies in $A(y)$, i.e.\ that $x^L + y < x+y$ and $x+y < x^R + y$; but this is true by \cref{thm:NO-refl-opt}\ref{item:NO-lt-opt}.
+
+Next we have to verify that the definition of $(x+\blank)$ preserves inequality:
+\begin{itemize}
+\item If $y\le z$ arises from knowing that $y^L
+ 0$. Derive from this the limited principle of omniscience~\eqref{eq:lpo}.
+\index{limited principle of omniscience}%
+ \end{enumerate}
+\end{ex}
+
+\begin{ex} \label{ex:traditional-archimedean}
+ \index{ordered field!archimedean}%
+ Show that in an ordered field $F$, density of $\Q$ and the traditional archimedean axiom
+ are equivalent:
+ %
+ \begin{equation*}
+ (\fall{x, y : F} x < y \Rightarrow \exis{q : \Q} x < q < y)
+ \Leftrightarrow
+ (\fall{x : F} \exis{k : \Z} x < k).
+ \end{equation*}
+\end{ex}
+
+\begin{ex} \label{RC-Lipschitz-on-interval} Suppose $a, b : \Q$ and $f : \setof{ q : \Q |
+ a \leq q \leq b } \to \RC$ is Lipschitz with constant~$L$. Show that there exists a unique
+ extension $\bar{f} : [a,b] \to \RC$ of $f$ which is Lipschitz with
+ constant~$L$. Hint: rather than redoing \cref{RC-extend-Q-Lipschitz} for closed
+ intervals, observe that there is a retraction $r : \RC \to [-n,n]$ and apply
+ \cref{RC-extend-Q-Lipschitz} to $f \circ r$.
+\end{ex}
+
+\begin{ex} \label{ex:metric-completion}
+ \index{completion!of a metric space}%
+ Generalize the construction of $\RC$ to construct the Cauchy completion of any metric space. First, think about which notion of real numbers is most natural as the codomain for the distance\index{distance} function of a metric space. Does it matter? Next, work out the details of two constructions:
+ %
+ \begin{enumerate}
+ \item Follow the construction of Cauchy reals to define the completion of a metric space as an inductive-inductive type closed under limits of Cauchy sequences.\index{Cauchy!sequence}
+ \item Use the following construction due to Lawvere~\cite{lawvere:metric-spaces}\index{Lawvere} and Richman~\cite{Richman00thefundamental}, where the completion of a metric space $(M, d)$ is given as the type of \define{locations}.
+ \indexdef{location}%
+ A location is a function $f : M \to \R$ such that
+ %
+ \begin{enumerate}
+ \item $f(x) \geq |f(y) - d(x,y)|$ for all $x, y : M$, and
+ \item $\inf_{x \in M} f(x) = 0$, by which we mean $\fall{\epsilon : \Qp} \exis{x : M} |f(x)| < \epsilon$ and $\fall{x : M} f(x) \geq 0$.
+ \end{enumerate}
+ %
+ The idea is that $f$ looks like it is measuring the distance from a point.
+ \end{enumerate}
+ %
+ \index{universal!property!of metric completion}%
+ Finally, prove the following universal property of metric completions: a locally uniformly continuous map from a metric space to a Cauchy complete metric space extends uniquely to a locally uniformly continuous map on the completion. (We say that a map is \define{locally uniformly continuous}
+ \indexdef{function!locally uniformly continuous}%
+ \indexdef{locally uniformly continuous map}%
+ if it is uniformly continuous on open balls.)
+\end{ex}
+
+\index{metric space|)}%
+
+\begin{ex} \label{ex:reals-apart-neq-MP}
+ \define{Markov's principle}
+ \indexdef{axiom!Markov's principle}%
+ \indexdef{Markov's principle}%
+ says that for all $f : \nat \to \bool$,
+ %
+ \begin{equation*}
+ (\lnot \lnot \exis{n : \nat} f(n) = \btrue)
+ \Rightarrow
+ \exis{n : \nat} f(n) = \btrue.
+ \end{equation*}
+ %
+ This is a particular instance of the law of double negation~\eqref{eq:ldn}. Show that
+ $\fall{x, y: \RD} x \neq y \Rightarrow x \apart y$ implies Markov's principle. Does the
+ converse hold as well?
+\end{ex}
+
+\begin{ex} \label{ex:reals-apart-zero-divisors}
+ \index{apartness}%
+ Verify that the following ``no zero divisors'' property holds for the real numbers:
+ $x y \apart 0 \Leftrightarrow x \apart 0 \land y \apart 0$.
+\end{ex}
+
+\begin{ex} \label{ex:finite-cover-lebesgue-number}
+ %
+ Suppose $(q_1, r_1), \ldots, (q_n, r_n)$ pointwise cover $(a, b)$. Then there is
+ $\epsilon : \Qp$ such that whenever $a < x < y < b$ and $|x - y| < \epsilon$
+ then there merely exists $i$ such that $q_i < x < r_i$ and $q_i < y < r_i$. Such an
+ $\epsilon$ is called a \define{Lebesgue number}
+ \indexdef{Lebesgue number}%
+ for the given cover.
+\end{ex}
+
+\begin{ex} \label{ex:mean-value-theorem}
+ %
+ Prove the following approximate version of the intermediate value theorem:
+ %
+ \begin{quote}
+ \emph{
+ If $f : [0,1] \to \R$ is uniformly continuous and $f(0) < 0 < f(1)$ then
+ for every $\epsilon : \Qp$ there merely exists $x : [0,1]$ such that $|f(x)| <
+ \epsilon$.
+ }
+ \end{quote}
+ %
+ Hint: do not try to use the bisection method because it leads to the axiom of choice.
+ Instead, approximate $f$ with a piecewise linear map. How do you construct a piecewise
+ linear map?
+\end{ex}
+
+\begin{ex}\label{ex:knuth-surreal-check}
+ Check whether everything in~\cite{knuth74:_surreal_number} can be done using the higher
+ inductive-inductive surreals of \cref{sec:surreals}.
+\end{ex}
+
+\begin{ex}\label{ex:reals-into-surreals}
+ Recall the function $\iota_{\RD}:\RD\to\NO$ defined on page~\pageref{reals-into-surreals}.
+ \begin{enumerate}
+ \item Show that $\iota_{\RD}$ is injective.
+ \item There are obvious extensions of $\iota_{\RD}$ to the extended reals (\cref{ex:RD-extended-reals}) and the interval domain (\cref{ex:RD-interval-arithmetic}).
+ Are they injective?
+ \end{enumerate}
+\end{ex}
+
+\begin{ex}\label{ex:ord-into-surreals}
+ Show that the function $\iota_{\ord}:\ord\to\NO$ defined on page~\pageref{ord-into-surreals} is injective if and only if \LEM{} holds.
+\end{ex}
+
+\begin{ex}\label{ex:hiit-plump}
+ Define a type $\mathsf{POrd}$ equipped with binary relations $\le$ and $<$ by mimicking the definition of \NO but using only left options.
+ \begin{enumerate}
+ \item Construct a map $j:\mathsf{POrd} \to \NO$ and show that it is an embedding.
+ \item Show that $\mathsf{POrd}$ is an ordinal (in the next higher universe, like \ord) under the relation $<$.
+ \item Assuming propositional resizing, show that $\mathsf{POrd}$ is equivalent to the subset
+ \[\setof{A:\ord | \mathsf{isPlump}(A)}\]
+ of \ord from \cref{ex:plump-ordinals}.
+ Conclude that $\iota_{\ord}:\ord\to\NO$ is injective when restricted to plump ordinals.
+ \end{enumerate}
+ In the absence of propositional resizing, we may still refer to elements of $\mathsf{POrd}$ (or their images in \NO) as \define{plump ordinals}.\index{ordinal!plump}\index{plump!ordinal}
+\end{ex}
+
+\begin{ex}\label{ex:pseudo-ordinals}
+ Define a surreal number to be a \define{pseudo-ordinal}\index{pseudo-ordinal}\index{ordinal!pseudo-} if it is equal to a cut $\surr{x^L}{}$ with no right options (but its left options may themselves have right options).
+ Show that the statement ``every pseudo-ordinal is a plump ordinal'' is equivalent to \LEM{}.
+\end{ex}
+
+\begin{ex}\label{ex:double-No-recursion}
+ Note that \cref{defn:No-codes} and \cref{eg:surreal-addition} both use a similar pattern to define a function $\NO \to \NO \to B$: an outer \NO-recursion whose codomain is the set of order-preserving functions $\NO\to B$, followed by an inner \NO-induction into a family $A:\NO\to\type$ where $A(y)$ is a subset of $B$ ensuring that the inequalities $x^L[r]^{\proj1}
+ \ar@<-0.25em>[r]_{\proj2}
+ &
+ {A}
+ \ar[r]^(0.4){\tilde{f}}
+ \ar[rd]_{f}
+ &
+ {\im(f)}
+ \ar@{..>}[d]^{i_f}
+ \\ & &
+ B
+ }
+\end{equation*}
+
+Recall that a function $f:A\to B$ is called \emph{surjective} if
+\index{function!surjective}%
+\narrowequation{\fall{b:B}\brck{\hfib f b},}
+or equivalently $\fall{b:B} \exis{a:A} f(a)=b$.
+We have also said that a function $f:A\to B$ between sets is called \emph{injective} if
+\index{function!injective}%
+$\fall{a,a':A} (f(a) = f(a')) \Rightarrow (a=a')$, or equivalently if each of its fibers is a mere proposition.
+Since these are the $(-1)$-connected and $(-1)$-truncated maps in the sense of \cref{cha:hlevels}, the general theory there implies that $\tilde f$ above is surjective and $i_f$ is injective, and that this factorization is stable under pullback.
+
+We now identify surjectivity and injectivity with the appropriate cat\-e\-go\-ry-theoretic notions.
+First we observe that categorical monomorphisms and epimorphisms have a slightly stronger equivalent formulation.
+
+\begin{lem}\label{thm:mono}
+ For a morphism $f:\hom_A(a,b)$ in a category $A$, the following are equivalent.
+ \begin{enumerate}
+ \item $f$ is a \define{monomorphism}:
+ \indexdef{monomorphism}%
+ for all $x:A$ and ${g,h:\hom_A(x,a)}$, if $f\circ g = f\circ h$ then $g=h$.\label{item:mono1}
+ \item (If $A$ has pullbacks) the diagonal map $a\to a\times_b a$ is an isomorphism.\label{item:mono4}
+ \item For all $x:A$ and $k:\hom_A(x,b)$, the type $\sm{h:\hom_A(x,a)} (k = f\circ h)$ is a mere proposition.\label{item:mono2}
+ \item For all $x:A$ and ${g:\hom_A(x,a)}$, the type $\sm{h:\hom_A(x,a)} (f\circ g = f\circ h)$ is contractible.\label{item:mono3}
+ \end{enumerate}
+\end{lem}
+\begin{proof}
+ The equivalence of conditions~\ref{item:mono1} and~\ref{item:mono4} is standard category theory.
+ Now consider the function $(f\circ \blank ):\hom_A(x,a) \to \hom_A(x,b)$ between sets.
+ Condition~\ref{item:mono1} says that it is injective, while~\ref{item:mono2} says that its fibers are mere propositions; hence they are equivalent.
+ And~\ref{item:mono2} implies~\ref{item:mono3} by taking $k\defeq f\circ g$ and recalling that an inhabited mere proposition is contractible.
+ Finally,~\ref{item:mono3} implies~\ref{item:mono1} since if $p:f\circ g= f\circ h$, then $(g,\refl{})$ and $(h,p)$ both inhabit the type in~\ref{item:mono3}, hence are equal and so $g=h$.
+\end{proof}
+
+\begin{lem}\label{thm:inj-mono}
+ A function $f:A\to B$ between sets is injective if and only if it is a monomorphism\index{monomorphism} in \uset.
+\end{lem}
+\begin{proof}
+ Left to the reader.
+\end{proof}
+
+Of course, an \define{epimorphism}
+\indexdef{epimorphism}%
+\indexsee{epi}{epimorphism}%
+is a monomorphism in the opposite category.
+We now show that in \uset, the epimorphisms are precisely the surjections, and also precisely the coequalizers (regular epimorphisms).
+
+The coequalizer of a pair of maps $f,g:A\to B$ in $\uset$ is defined as the 0-truncation of a general (homotopy) coequalizer.
+For clarity, we may call this the \define{set-coequalizer}.
+\indexdef{set-coequalizer}%
+\indexsee{coequalizer!of sets}{set-coequalizer}%
+It is convenient to express its universal property as follows.
+
+\begin{lem}
+\index{universal!property!of set-coequalizer}%
+Let $f,g:A\to B$ be functions between sets $A$ and $B$. The
+{set-co}equalizer $c_{f,g}:B\to Q$ has the property that, for any set $C$ and any $h:B\to C$ with $h\circ f = h\circ g$, the type
+\begin{equation*}
+\sm{k:Q\to C} (k\circ c_{f,g} = h)
+\end{equation*}
+is contractible.
+\end{lem}
+
+\begin{lem}\label{epis-surj}
+For any function $f:A\to B$ between sets, the following are equivalent:
+\begin{enumerate}
+\item $f$ is an epimorphism.
+\item Consider the pushout diagram
+\begin{equation*}
+ \xymatrix{
+ {A}
+ \ar[r]^{f}
+ \ar[d]
+ &
+ {B}
+ \ar[d]^{\iota}
+ \\
+ {\unit}
+ \ar[r]_{t}
+ &
+ {C_f}
+ }
+\end{equation*}
+in $\uset$ defining the mapping cone\index{cone!of a function}. Then the type $C_f$ is contractible.
+\item $f$ is surjective.
+\end{enumerate}
+\end{lem}
+
+\begin{proof}
+Let $f:A\to B$ be a function between sets, and suppose it to be an epimorphism; we show $C_f$ is contractible.
+The constructor $\unit\to C_f$ of $C_f$ gives us an element $t:C_f$.
+We have to show that
+\begin{equation*}
+\prd{x:C_f} x= t.
+\end{equation*}
+Note that $x= t$ is a mere proposition, hence we can use induction on $C_f$.
+Of course when $x$ is $t$ we have $\refl{t}:t=t$, so it suffices to find
+\begin{align*}
+I_0 & : \prd{b:B} \iota(b)= t\\
+I_1 & : \prd{a:A} \opp{\alpha_1(a)} \ct I_0(f(a))=\refl{t}
+\end{align*}
+where $\iota:B\to C_f$ and $\alpha_1:\prd{a:A} \iota(f(a))= t$ are the other constructors
+of $C_f$. Note that $\alpha_1$ is a homotopy from $\iota\circ f$ to
+$\mathsf{const}_t\circ f$, so we find the elements
+\begin{equation*}
+\pairr{\iota,\refl{\iota\circ f}},\pairr{\mathsf{const}_t,\alpha_1}:
+\sm{h:B\to C_f} \iota\circ f \htpy h\circ f.
+\end{equation*}
+By the dual of \cref{thm:mono}\ref{item:mono3} (and function extensionality), there is a path
+\begin{equation*}
+\gamma:\pairr{\iota,\refl{\iota\circ f}}=\pairr{\mathsf{const}_t,\alpha_1}.
+\end{equation*}
+Hence, we may define $I_0(b)\defeq \happly(\projpath1(\gamma),b):\iota(b)=t$.
+We also have
+\[\projpath2(\gamma) : \trans{\projpath1(\gamma)}{\refl{\iota\circ f}} = \alpha_1. \]
+This transport involves precomposition with $f$, which commutes with $\happly$.
+Thus, from transport in path types we obtain $I_0(f(a)) = \alpha_1(a)$ for any $a:A$, which gives us $I_1$.
+
+Now suppose $C_f$ is contractible; we show $f$ is surjective.
+We first construct a type family $P:C_f\to\prop$ by recursion on $C_f$, which is valid since \prop is a set.
+On the point constructors, we define
+\begin{align*}
+P(t) & \defeq \unit\\
+P(\iota(b)) & \defeq \brck{\hfiber{f}b}.
+\end{align*}
+To complete the construction of $P$, it remains to give a path
+\narrowequation{\brck{\hfiber{f}{f(a)}} =_\prop \unit}
+for all $a:A$.
+However, $\brck{\hfiber{f}{f(a)}}$ is inhabited by $(f(a),\refl{f(a)})$.
+Since it is a mere proposition, this means it is contractible --- and thus equivalent, hence equal, to \unit.
+This completes the definition of $P$.
+Now, since $C_f$ is assumed to be contractible, it follows that $P(x)$ is equivalent to $P(t)$ for any $x:C_f$.
+In particular, $P(\iota(b))\jdeq \brck{\hfiber{f}b}$ is equivalent to $P(t)\jdeq \unit$ for each $b:B$, and hence contractible.
+Thus, $f$ is surjective.
+
+Finally, suppose $f:A\to B$ to be surjective, and consider a set $C$ and two functions
+$g,h:B\to C$ with the property that $g\circ f = h\circ f$. Since $f$
+is assumed to be surjective, for all $b:B$ the type $\brck{\hfib f b}$ is contractible.
+Thus we have the following equivalences:
+\begin{align*}
+\prd{b:B} (g(b)= h(b))
+& \eqvsym \prd{b:B} \Parens{\brck{\hfib f b} \to (g(b)= h(b))}\\
+& \eqvsym \prd{b:B} \Parens{\hfib f b \to (g(b)= h(b))}\\
+& \eqvsym \prd{b:B}{a:A}{p:f(a)= b} g(b)= h(b)\\
+& \eqvsym \prd{a:A} g(f(a))= h(f(a))
+\end{align*}
+using on the second line the fact that $g(b)=h(b)$ is a mere proposition, since $C$ is a set.
+But by assumption, there is an element of the latter type.
+\end{proof}
+
+% \begin{rem}
+% The above theorem is not true when we replace $\set$ by $\type$
+% (replacing it also in the definition of $\mathsf{epi}$ and $\mathsf{epi}'$).
+% However, we do
+% get the implications $\textit{ii.}\Rightarrow\textit{iii.}\Rightarrow
+% \textit{iv.}$
+% \end{rem}
+
+\begin{thm}\label{thm:set_regular}\label{lem:images_are_coequalizers}
+The category $\uset$ is regular. Moreover, surjective functions between sets are regular epimorphisms.
+\end{thm}
+
+\begin{proof}
+It is a standard lemma in category theory that a category is regular as soon as it admits finite limits and a pullback-stable orthogonal
+factorization system\index{orthogonal factorization system} $(\mathcal{E},\mathcal{M})$ with $\mathcal{M}$ the monomorphisms, in which case $\mathcal{E}$ consists automatically of
+the regular epimorphisms.
+(See e.g.~\cite[A1.3.4]{elephant}.)
+The existence of the factorization system was proved in \cref{thm:orth-fact}.
+\end{proof}
+
+\begin{lem}\label{lem:pb_of_coeq_is_coeq}
+Pullbacks of regular epis in \uset are regular epis.
+\end{lem}
+\begin{proof}
+ We showed in \cref{thm:stable-images} that pullbacks of $n$-connected functions are $n$-connected.
+ By \cref{lem:images_are_coequalizers}, it suffices to apply this when $n=-1$.
+\end{proof}
+
+\indexdef{image!of a subset}
+One of the consequences of \uset being a regular category is that we have an ``image'' operation on subsets.
+That is, given $f:A\to B$, any subset $P:\power A$ (i.e.\ a predicate $P:A\to \prop$) has an \define{image} which is a subset of $B$.
+This can be defined directly as $\setof{ y:B | \exis{x:A} f(x)=y \land P(x)}$, or indirectly as the image (in the previous sense) of the composite function
+\[ \setof{ x:A | P(x) } \to A \xrightarrow{f} B.\]
+\symlabel{subset-image}
+We will also sometimes use the common notation $\setof{f(x) | P(x)}$ for the image of $P$.
+
+
+\subsection{Quotients}\label{subsec:quotients}
+
+\index{set-quotient|(}%
+Now that we know that $\uset$ is regular, to show that $\uset$ is exact, we need to show that every
+equivalence relation is effective.
+\index{effective!equivalence relation|(}%
+\index{relation!effective equivalence|(}%
+In other words, given an equivalence
+relation $R:A\to A\to\prop$, there is a coequalizer $c_R$ of the pair
+$\proj1,\proj2:\sm{x,y:A} R(x,y)\to A$ and, moreover, the $\proj1$ and $\proj2$
+form the kernel\index{kernel!pair} pair of $c_R$.
+
+We have already seen, in \cref{sec:set-quotients}, two general ways to construct the quotient of a set by an equivalence relation $R:A\to A\to\prop$.
+The first can be described as the set-coequalizer of the two projections
+\[\proj1,\proj2:\Parens{\sm{x,y:A} R(x,y)} \to A.\]
+The important property of such a quotient is the following.
+
+\begin{defn}
+ A relation $R:A\to A\to\prop$ is said to be \define{effective}
+ \indexdef{effective!relation}
+ \indexdef{effective!equivalence relation}%
+ \indexdef{relation!effective equivalence}%
+ if the square
+\begin{equation*}
+ \xymatrix{
+ {\sm{x,y:A} R (x,y)}
+ \ar[r]^(0.7){\proj1}
+ \ar[d]_{\proj2}
+ &
+ {A}
+ \ar[d]^{c_R}
+ \\
+ {A}
+ \ar[r]_{c_R}
+ &
+ {A/R}
+ }
+\end{equation*}
+is a pullback.
+\end{defn}
+
+Since the standard pullback of $c_R$ and itself is $\sm{x,y:A} (c_R(x)=c_R(y))$, by \cref{thm:total-fiber-equiv} this is equivalent to asking that the canonical transformation $\prd{x,y:A} R(x,y) \to (c_R(x)=c_R(y))$ be a fiberwise equivalence.
+
+\begin{lem}\label{lem:sets_exact}
+Suppose $\pairr{A,R}$ is an equivalence relation. Then there is an equivalence
+\begin{equation*}
+(c_R(x)= c_R(y))\eqvsym R(x,y)
+\end{equation*}
+for any $x,y:A$. In other words, equivalence relations are effective.
+\end{lem}
+
+\begin{proof}
+We begin by extending $R$ to a relation $\widetilde{R}:A/R\to A/R\to\prop$, which we will then show is equivalent
+to the identity type on $A/R$. We define $\widetilde{R}$ by double induction on
+$A/R$ (note that $\prop$ is a set by univalence for mere propositions). We
+define $\widetilde{R}(c_R(x),c_R(y)) \defeq R(x,y)$. For $r:R(x,x')$ and $s:R(y,y')$,
+the transitivity and symmetry
+of $R$ gives an equivalence from $R(x,y)$ to $R(x',y')$. This completes the
+definition of $\widetilde{R}$.
+
+It remains to show that $\widetilde{R}(w,w')\eqvsym (w= w')$ for every $w,w':A/R$.
+The direction $(w=w')\to \widetilde{R}(w,w')$ follows by transport once we show that $\widetilde{R}$ is reflexive, which is an easy induction.
+The other direction $\widetilde{R}(w,w')\to (w= w')$ is a mere proposition, so since $c_R:A\to A/R$ is surjective, it suffices to assume that $w$ and $w'$ are of the form $c_R(x)$ and $c_R(y)$.
+But in this case, we have the canonical map $\widetilde{R}(c_R(x),c_R(y)) \defeq R(x,y) \to (c_R(x)=c_R(y))$.
+(Note again the appearance of the encode-decode method.\index{encode-decode method})
+\end{proof}
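For readers who use proof assistants: Lean's set quotients are effective in exactly this sense. The following sketch is Lean 4; the lemma name `Quotient.eq` exists in core/Mathlib, though its exact signature may differ slightly between versions.

```lean
-- Effectiveness of set quotients in Lean 4: paths in the quotient
-- correspond to the underlying relation.  (This is `Quotient.eq`;
-- the precise statement may vary across Lean/Mathlib versions.)
example {A : Type} [s : Setoid A] (x y : A) :
    Quotient.mk s x = Quotient.mk s y ↔ x ≈ y :=
  Quotient.eq
```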
+
+The second construction of quotients is as the set of equivalence classes of $R$ (a subset
+of its power set\index{power set}):
+\[ A\sslash R \defeq \setof{ P:A\to\prop | P \text{ is an equivalence class of } R}. \]
+This requires propositional resizing\index{propositional resizing}\index{impredicative!quotient}\index{resizing} in order to remain in the same universe as $A$ and $R$.
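As a concrete, finite, and entirely classical illustration of this second construction, one can compute the set of equivalence classes of a decidable equivalence relation directly; the helper `quotient` below is purely illustrative and not part of the text.

```python
def quotient(A, R):
    """Equivalence classes of a decidable equivalence relation R on a finite set A.

    Mirrors the construction of A // R: each class is the subset
    {b in A | R(a, b)} for some a, and distinct points of the quotient
    are distinct classes.
    """
    classes = set()
    for a in A:
        classes.add(frozenset(b for b in A if R(a, b)))
    return classes

# Quotient of {0,...,5} by congruence mod 3 has three classes.
mod3 = quotient(range(6), lambda a, b: a % 3 == b % 3)
```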
+
+Note that if we regard $R$ as a function from $A$ to $A\to \prop$, then $A\sslash R$ is equivalent to $\im(R)$, as constructed in \cref{sec:image}.
+Now in \cref{lem:images_are_coequalizers} we have shown that images are
+coequalizers. In particular, we immediately get the coequalizer diagram
+\begin{equation*}
+ \xymatrix{
+ **[l]{\sm{x,y:A} R (x)= R (y)}
+ \ar@<0.25em>[r]^{\proj1}
+ \ar@<-0.25em>[r]_{\proj2}
+ &
+ {A}
+ \ar[r]
+ &
+ {A \sslash R.}
+ }
+\end{equation*}
+We can use this to give an alternative proof that any equivalence relation is effective and that the two definitions of quotients agree.
+
+\begin{thm}\label{prop:kernels_are_effective}
+For any function $f:A\to B$ between any two sets,
+the relation $\ker(f):A\to A\to\prop$ given by
+$\ker(f,x,y)\defeq (f(x)= f(y))$ is effective.
+\end{thm}
+
+\begin{proof}
+We will use that $\im(f)$ is the coequalizer of $\proj1,\proj2:
+(\sm{x,y:A} f(x)= f(y))\to A$.
+%we get this equivalence from~\cref{prop:images_are_coequalizers}
+Note that the kernel pair of the function
+\[c_f\defeq\lam{a} \Parens{f(a),\brck{\pairr{a,\refl{f(a)}}}}
+: A \to \im(f)
+\]
+consists of the two projections
+\begin{equation*}
+\proj1,\proj2:\Parens{\sm{x,y:A} c_f(x)= c_f(y)}\to A.
+\end{equation*}
+For any $x,y:A$, we have equivalences
+\begin{align*}
+ (c_f(x)= c_f(y))
 & \eqvsym \Parens{\sm{p:f(x)= f(y)} \trans{p}{\brck{\pairr{x,\refl{f(x)}}}} =\brck{\pairr{y,\refl{f(y)}}}}\\
+ & \eqvsym (f(x)= f(y)),
+\end{align*}
+where the last equivalence holds because
+$\brck{\hfiber{f}b}$ is a mere proposition for any $b:B$.
+Therefore, we get that
+\begin{equation*}
+\Parens{\sm{x,y:A} c_f(x)= c_f(y)}\eqvsym \Parens{\sm{x,y:A} f(x)= f(y)}
+\end{equation*}
+and hence we may conclude that $\ker f$ is an effective relation
+for any function $f$.
+\end{proof}
+
+\begin{thm}
+Equivalence relations are effective and there is an equivalence $A/R \eqvsym A\sslash R $.
+\end{thm}
+
+\begin{proof}
+We need to analyze the coequalizer diagram
+\begin{equation*}
+ \xymatrix{
+ **[l]{\sm{x,y:A} R (x)= R (y)}
+ \ar@<0.25em>[r]^{\proj1}
+ \ar@<-0.25em>[r]_{\proj2}
+ &
+ {A}
+ \ar[r]
+ &
 {A \sslash R.}
+ }
+\end{equation*}
+By the univalence axiom, the type $R(x) = R(y)$ is equivalent to the type of homotopies from $R(x)$ to $R(y)$, which is
+equivalent to
+\narrowequation{\prd{z:A} R (x,z)\eqvsym R (y,z).}
+Since $R$ is an equivalence relation, the latter space is equivalent to $R(x,y)$. To
+summarize, we get that $(R(x) = R(y)) \eqvsym R(x,y)$, so $R $ is effective since it is equivalent to an effective relation. Also,
+the diagram
+\begin{equation*}
+ \xymatrix{
+ **[l]{\sm{x,y:A} R(x, y)}
+ \ar@<0.25em>[r]^{\proj1}
+ \ar@<-0.25em>[r]_{\proj2}
+ &
+ {A}
+ \ar[r]
+ &
 {A \sslash R}
+ }
+\end{equation*}
+is a coequalizer diagram. Since coequalizers are unique up to equivalence, it follows that $A/R \eqvsym A\sslash R $.
+\end{proof}
+
+We finish this section by mentioning a possible third construction of the quotient of a set $A$ by an equivalence relation $R$.
+Consider the precategory with objects $A$ and hom-sets $R$; the type of objects of the Rezk completion
+\index{completion!Rezk}%
+(see \cref{sec:rezk}) of this precategory will then be the
+quotient. The reader is invited to check the details.
+
+\index{effective!equivalence relation|)}%
+\index{relation!effective equivalence|)}%
+\index{set-quotient|)}%
+
+\subsection{\texorpdfstring{$\uset$}{Set} is a \texorpdfstring{$\Pi\mathsf{W}$}{ΠW}-pretopos}
+\label{subsec:piw}
+
+\index{structural!set theory|(}%
+
+The notion of a \emph{$\Pi\mathsf{W}$-pretopos}
+\index{PiW-pretopos@$\Pi\mathsf{W}$-pretopos}%
+\indexsee{pretopos}{$\Pi\mathsf{W}$-pretopos}
+--- that is, a locally cartesian closed category
+\index{locally cartesian closed category}%
+\index{category!locally cartesian closed}%
+with disjoint finite coproducts, effective equivalence relations, and initial algebras for polynomial endofunctors --- is intended as a ``predicative''
+\index{mathematics!predicative}%
+notion of topos, i.e.\ a category of ``predicative sets'', which can serve the purpose for constructive mathematics
+\index{mathematics!constructive}%
+that the usual category of sets does for classical
+\index{mathematics!classical}%
+mathematics.
+
+Typically, in constructive type theory, one resorts to an external construction of ``setoids'' --- an exact completion --- to obtain a category with such closure properties.
+\index{setoid}\index{completion!exact}%
 In particular, well-behaved quotients are required for many constructions in mathematics that usually involve (non-constructive) power sets. In univalent foundations, by contrast, these constructions are available \emph{internally}, via higher inductive types, with no need for an external exact completion.
+
+\begin{thm}
+ \index{PiW-pretopos@$\Pi\mathsf{W}$-pretopos}
+ The category $\uset$ is a $\Pi\mathsf{W}$-pretopos.
+\end{thm}
+\begin{proof}
+ We have an initial object
+ \index{initial!set}%
+ $\emptyt$ and finite, disjoint sums $A+B$. These are stable under pullback, simply because pullback has a right adjoint\index{adjoint!functor}. Indeed, $\uset$ is locally cartesian closed, since for any map $f:A\to B$ between sets, the ``fibrant replacement'' \index{fibrant replacement} $\sm{a:A}f(a)=b$ is equivalent to $A$ (over $B$), and we have dependent function types for the replacement.
+We've just shown that $\uset$ is regular (\cref{thm:set_regular}) and that quotients are effective (\cref{lem:sets_exact}). We thus have a locally cartesian closed pretopos. Finally, since the $n$-types are closed under the formation of $W$-types by \cref{ex:ntypes-closed-under-wtypes}, and by \cref{thm:w-hinit} $W$-types are initial algebras for polynomial endofunctors, we see that $\uset$ is a $\Pi\mathsf{W}$-pretopos.
+\end{proof}
+
+
+\index{topos|(}
One naturally wonders what, if anything, prevents $\uset$ from being an (elementary) topos.
+In addition to the structure already mentioned, a topos has a
+\emph{subobject classifier}:
+\indexdef{subobject classifier}%
+\index{classifier!subobject}%
+\index{power set}%
+a pointed object classifying (equivalence classes of) monomorphisms\index{monomorphism}. (In fact, in the presence of a subobject
+classifier, things become somewhat simpler: one merely needs cartesian closure in order to get the colimits.)
+In homotopy type theory, univalence implies that the type $\prop \defeq \sm{X:\UU}\isprop(X)$ does classify monomorphisms (by an argument similar to \cref{sec:object-classification}), but in general it is as large as the ambient universe $\UU$.
+Thus, it is a ``set'' in the sense of being a $0$-type, but it is not ``small'' in the sense of being an object of $\UU$, hence not an object of the category \uset.
+However, if we assume an appropriate form of propositional resizing (see \cref{subsec:prop-subsets}), then we can find a small version of $\prop$, so that \uset becomes an elementary topos.
+
+\begin{thm}\label{thm:settopos}
+ \index{propositional!resizing}%
+ If there is a type $\Omega:\UU$ of all mere propositions, then the category $\uset_\UU$ is an elementary topos.
+\end{thm}
+\index{topos|)}
+
+A sufficient condition for this is the law of excluded middle, in the ``mere-propositional'' form that we have called \LEM{}; for then we have $\prop = \bool$, which \emph{is} small, and which then also classifies all mere propositions.
+Moreover, in topos theory a well-known sufficient condition for \LEM{} is the axiom of choice, which is of course often assumed as an axiom in classical\index{mathematics!classical} set theory.
+In the next section, we briefly investigate the relation between these conditions in our setting.
+
+\index{structural!set theory|)}%
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\subsection{The axiom of choice implies excluded middle}
+\label{subsec:emacinsets}
+
+% In this section we prove a classic result that the axiom of choice implies excluded
+% middle.
+
+We begin with the following lemma.
+
+\begin{lem}\label{prop:trunc_of_prop_is_set}
+If $A$ is a mere proposition then its suspension $\susp(A)$ is a set,
+and $A$ is equivalent to $\id[\susp(A)]{\north}{\south}$.
+\end{lem}
+
+\begin{proof}
+To show that $\susp(A)$ is a set, we define a
+family $P:\susp(A)\to\susp(A)\to\type$ with the
+property that $P(x,y)$ is a mere proposition for each $x,y:\susp(A)$,
+and which is equivalent to its identity type $\idtypevar{\susp(A)}$.
+%
+We make the following definitions:
+\begin{align*}
+P(\north,\north) & \defeq \unit &
+P(\south,\north) & \defeq A\\
+P(\north,\south) & \defeq A &
+P(\south,\south) & \defeq \unit.
+\end{align*}
+We have to check that the definition preserves paths.
+Given any $a : A$, there is a meridian $\merid(a) : \north = \south$,
+so we should also have
+%
+\begin{equation*}
+ P(\north, \north) = P(\north, \south) = P(\south, \north) = P(\south, \south).
+\end{equation*}
+%
+But since $A$ is inhabited by $a$, it is equivalent to $\unit$, so we have
+%
+\begin{equation*}
+ P(\north, \north) \eqvsym P(\north, \south) \eqvsym P(\south, \north) \eqvsym P(\south, \south).
+\end{equation*}
+%
+The univalence axiom turns these into the desired equalities. Also, $P(x,y)$ is a mere
+proposition for all $x, y : \susp(A)$, which is proved by induction on $x$ and $y$, and
+using the fact that being a mere proposition is a mere proposition.
+
+Note that $P$ is a reflexive relation.
+Therefore we may apply \cref{thm:h-set-refrel-in-paths-sets}, so it suffices to
+construct $\tau : \prd{x,y:\susp(A)}P(x,y)\to(x=y)$. We do this by a double induction.
+When $x$ is $\north$, we define $\tau(\north)$ by
+%
+\begin{equation*}
+ \tau(\north,\north,u) \defeq \refl{\north}
+ \qquad\text{and}\qquad
+ \tau(\north,\south,a) \defeq \merid(a).
+\end{equation*}
+%
+If $A$ is inhabited by $a$ then $\merid(a) : \north = \south$ so we also need
+\narrowequation{
+ \trans{\merid(a)}{\tau(\north, \north)} = \tau(\north, \south).
+}
+This we get by function extensionality using the fact that, for all $x : A$,
+%
+\begin{multline*}
+ \trans{\merid(a)}{\tau(\north,\north,x)} =
 \tau(\north,\north,x) \ct \merid(a) \jdeq \\
+ \refl{\north} \ct \merid(a) =
+ \merid(a) =
+ \merid(x) \jdeq
+ \tau(\north, \south, x).
+\end{multline*}
+In a symmetric fashion we may define $\tau(\south)$ by
+%
+\begin{equation*}
+ \tau(\south,\north, a) \defeq \opp{\merid(a)}
+ \qquad\text{and}\qquad
+ \tau(\south,\south, u) \defeq \refl{\south}.
+\end{equation*}
+%
+To complete the construction of $\tau$, we need to check $\trans{\merid(a)}{\tau(\north)} = \tau(\south)$,
+given any $a : A$. The verification proceeds much along the same lines by induction on the
+second argument of $\tau$.
+
+Thus, by \cref{thm:h-set-refrel-in-paths-sets} we have that $\susp(A)$ is a set and that $P(x,y) \eqvsym (\id{x}{y})$ for all $x,y:\susp(A)$.
+Taking $x\defeq \north$ and $y\defeq \south$ yields $\eqv{A}{(\id[\susp(A)]\north\south)}$ as desired.
+\end{proof}
+
+\begin{thm}[Diaconescu]\label{thm:1surj_to_surj_to_pem}
+ \index{axiom!of choice}%
+ \index{excluded middle}%
+ \index{Diaconescu's theorem}\index{theorem!Diaconescu's}%
+ The axiom of choice implies the law of excluded middle.
+\end{thm}
+
+\begin{proof}
+We use the equivalent form of choice given in \cref{thm:ac-epis-split}.
+Consider a mere proposition $A$.
+The function $f:\bool\to\susp(A)$ defined by
+$f(\bfalse) \defeq \north$ and $f(\btrue) \defeq \south$
+is surjective.
+Indeed, we have
+$\pairr{\bfalse,\refl{\north}} : \hfiber{f}{\north}$
+and $\pairr{\btrue,\refl{\south}} :\hfiber{f}{\south}$.
+Since $\bbrck{\hfiber{f}{x}}$ is a mere proposition, by induction the claimed surjectivity follows.
+
+By \cref{prop:trunc_of_prop_is_set} the suspension $\susp(A)$
+is a set, so by the axiom of choice there merely exists a
+section $g: \susp(A) \to \bool$ of $f$.
+As equality on $\bool$ is decidable we get
+\begin{equation*}
+ (g(f(\bfalse))= g(f(\btrue))) +
+ \lnot (g(f(\bfalse))= g(f(\btrue))),
+\end{equation*}
+and, since $g$ is a section of $f$, hence injective,
+\begin{equation*}
+(f(\bfalse) = f(\btrue)) +
+\lnot (f(\bfalse) = f(\btrue)).
+\end{equation*}
+Finally, since $(f(\bfalse)=f(\btrue)) = (\north=\south) = A$ by \cref{prop:trunc_of_prop_is_set}, we have $A+\neg A$.
+\end{proof}
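Diaconescu's argument is not only of theoretical interest: it is essentially how Lean's core library derives excluded middle from choice (together with propositional and function extensionality). A quick check in Lean 4:

```lean
-- Lean proves excluded middle from choice via a Diaconescu-style
-- argument; `#print axioms` shows that `Classical.em` depends on the
-- choice axiom (plus extensionality principles), not on a primitive
-- excluded-middle axiom.
#check (Classical.em : ∀ p : Prop, p ∨ ¬p)
#print axioms Classical.em
```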
+
+% This conclusion needs only \LEM{}, see \cref{ex:lemnm}.
+
+% \begin{cor}\label{cor:ACtoLEM0}
+% If the axiom of choice \choice{} holds then $\brck{A + \neg A}$ for every set $A$.
+% \end{cor}
+
+% \begin{proof}
+% There is a surjection
+% \[
+% A + \neg A \epi \brck{A} + \brck{\neg A} \epi
+% \brck{(\brck{A} + \brck{\neg A})} = \brck{A} \vee \brck{\neg A} = \brck{A} \vee \neg \brck{A} = \unit,
+% \]
+% %
+% where in the last step excluded middle is available as a consequence of the axiom of choice.
+% Again by the axiom of choice there merely exists a section of the surjection, but this
+% is none other than an inhabitant of $A + \neg A$. Therefore $\brck{A+\neg A}$.
+% \end{proof}
+
+\index{denial}
+\begin{thm}\label{thm:ETCS}
+ \index{Elementary Theory of the Category of Sets}%
+ \index{category!well-pointed}%
+ If the axiom of choice holds then the category $\uset$ is a well-pointed boolean\index{topos!boolean}\index{boolean!topos} elementary topos\index{topos} with choice.
+\end{thm}
+
+\begin{proof}
+ Since \choice{} implies \LEM{}, we have a boolean elementary topos with choice by \cref{thm:settopos} and the remark following it. We leave the proof of well-pointedness as
+an exercise for the reader (\cref{ex:well-pointed}).
+\end{proof}
+
+\begin{rmk}
+ The conditions on a category mentioned in the theorem are known as Lawvere's\index{Lawvere}
+ axioms for the \emph{Elementary Theory of the Category of Sets}~\cite{lawvere:etcs-long}.
+\end{rmk}
+
+\section{Cardinal numbers}
+\label{sec:cardinals}
+
+\begin{defn}\label{defn:card}
+ The \define{type of cardinal numbers}
+ \indexdef{type!of cardinal numbers}%
+ \indexdef{cardinal number}%
+ \indexsee{number!cardinal}{cardinal number}%
+ is the 0-truncation of the type \set of sets:
+ \[ \card \defeq \pizero{\set} \]
+ Thus, a \define{cardinal number}, or \define{cardinal}, is an inhabitant of $\card\jdeq \pizero\set$.
 Technically, of course, there is a separate type $\card_\UU$ associated to each universe $\UU$.
+\end{defn}
+
+%\begin{rmk}
+
+ % , but with these conventions we can state theorems beginning with ``for all cardinal numbers\dots''\ and give them exactly the same sort of meaning as those beginning ``for all types\dots''.
+%\end{rmk}
+
+As usual for truncations, if $A$ is a set, then $\cd{A}$ denotes its image under the canonical projection $\set \to \trunc0\set \jdeq \card$; we call $\cd{A}$ the \define{cardinality}\indexdef{cardinality} of $A$.
+By definition, \card is a set.
+It also inherits the structure of a semiring from \set.
+
+\begin{defn}
+ The operation of \define{cardinal addition}
+ \indexdef{addition!of cardinal numbers}%
+ \index{cardinal number!addition of}%
+ \[ (\blank+\blank) : \card \to \card \to \card \]
+ is defined by induction on truncation:
+ \[ \cd{A} + \cd{B} \defeq \cd{A+B}.\]
+\end{defn}
+\begin{proof}
+ Since $\card\to\card$ is a set, to define $(\alpha+\blank):\card\to\card$ for all $\alpha:\card$, by induction it suffices to assume that $\alpha$ is $\cd{A}$ for some $A:\set$.
+ Now we want to define $(\cd{A}+\blank) :\card\to\card$, i.e.\ we want to define $\cd{A}+\beta :\card$ for all $\beta:\card$.
+ However, since $\card$ is a set, by induction it suffices to assume that $\beta$ is $\cd{B}$ for some $B:\set$.
+ But now we can define $\cd{A}+\cd{B}$ to be $\cd{A+B}$.
+\end{proof}
+
+\begin{defn}
+ Similarly, the operation of \define{cardinal multiplication}
+ \indexdef{multiplication!of cardinal numbers}%
+ \index{cardinal number!multiplication of}%
+ \[ (\blank\cdot\blank) : \card \to \card \to \card \]
+ is defined by induction on truncation:
 \[ \cd{A} \cdot \cd{B} \defeq \cd{A\times B}. \]
+\end{defn}
+
+\begin{lem}\label{card:semiring}
+ \card is a commutative semiring\index{semiring}, i.e.\ for $\alpha,\beta,\gamma:\card$ we have the following.
+ \begin{align*}
+ (\alpha+\beta)+\gamma &= \alpha+(\beta+\gamma)\\
+ \alpha+0 &= \alpha\\
+ \alpha + \beta &= \beta + \alpha\\
+ (\alpha \cdot \beta) \cdot \gamma &= \alpha \cdot (\beta\cdot\gamma)\\
+ \alpha \cdot 1 &= \alpha\\
+ \alpha\cdot\beta &= \beta\cdot\alpha\\
+ \alpha\cdot(\beta+\gamma) &= \alpha\cdot\beta + \alpha\cdot\gamma
+ \end{align*}
+ where $0 \defeq \cd{\emptyt}$ and $1\defeq\cd{\unit}$.
+\end{lem}
+\begin{proof}
+ We prove the commutativity of multiplication, $\alpha\cdot\beta = \beta\cdot\alpha$; the others are exactly analogous.
+ Since \card is a set, the type $\alpha\cdot\beta = \beta\cdot\alpha$ is a mere proposition, and in particular a set.
+ Thus, by induction it suffices to assume $\alpha$ and $\beta$ are of the form $\cd{A}$ and $\cd{B}$ respectively, for some $A,B:\set$.
 Now $\cd{A}\cdot \cd{B} \jdeq \cd{A\times B}$ and $\cd{B}\cdot\cd{A} \jdeq \cd{B\times A}$, so it suffices to show $A\times B = B\times A$.
+ Finally, by univalence, it suffices to give an equivalence $A\times B \eqvsym B\times A$.
+ But this is easy: take $(a,b) \mapsto (b,a)$ and its obvious inverse.
+\end{proof}
+
+\begin{defn}
+ The operation of \define{cardinal exponentiation} is also defined by induction on truncation:
+ \indexdef{exponentiation, of cardinal numbers}%
+ \index{cardinal number!exponentiation of}%
+ \[ \cd{A}^{\cd{B}} \defeq \cd{B\to A}. \]
+\end{defn}
+
+\begin{lem}\label{card:exp}
+ For $\alpha,\beta,\gamma:\card$ we have
+ \begin{align*}
+ \alpha^0 &= 1\\
+ 1^\alpha &= 1\\
+ \alpha^1 &= \alpha\\
+ \alpha^{\beta+\gamma} &= \alpha^\beta \cdot \alpha^\gamma\\
+ \alpha^{\beta\cdot \gamma} &= (\alpha^{\beta})^\gamma\\
+ (\alpha\cdot\beta)^\gamma &= \alpha^\gamma \cdot \beta^\gamma
+ \end{align*}
+\end{lem}
+\begin{proof}
+ Exactly like \cref{card:semiring}.
+\end{proof}
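On finite sets, cardinal arithmetic is ordinary arithmetic, so the exponential laws above can be checked by brute-force enumeration of function types. This is a toy illustration (the encoding of a function as a tuple of values is an assumption of the sketch, not part of the text):

```python
from itertools import product

def functions(dom, cod):
    """All functions dom -> cod, encoded as tuples of values,
    one entry per element of the domain."""
    return list(product(cod, repeat=len(dom)))

A, B, C = [0, 1, 2], ['x', 'y'], ['z']

# alpha^(beta+gamma) = alpha^beta * alpha^gamma: a function out of a
# disjoint sum is exactly a pair of functions.
assert len(functions(B + C, A)) == len(functions(B, A)) * len(functions(C, A))

# (alpha*beta)^gamma = alpha^gamma * beta^gamma: a function into a
# product is exactly a pair of functions.
AxB = list(product(A, B))
assert len(functions(C, AxB)) == len(functions(C, A)) * len(functions(C, B))
```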
+
+\begin{defn}
+ The relation of \define{cardinal inequality}
+ \index{order!non-strict}%
+ \index{cardinal number!inequality of}%
+ \[ (\blank\le\blank) : \card\to\card\to\prop \]
+ is defined by induction on truncation:
+ \symlabel{inj}
+ \[ \cd{A} \le \cd{B} \defeq \brck{\inj(A,B)} \]
+ where $\inj(A,B)$ is the type of injections from $A$ to $B$.
+ \index{function!injective}%
+ In other words, $\cd{A} \le \cd{B}$ means that there merely exists an injection from $A$ to $B$.
+\end{defn}
+
+\begin{lem}
 Cardinal inequality is a preorder, i.e.\ for $\alpha,\beta,\gamma:\card$ we have
+ \index{preorder!of cardinal numbers}%
+ \begin{gather*}
+ \alpha \le \alpha\\
+ (\alpha \le \beta) \to (\beta\le\gamma) \to (\alpha\le\gamma)
+ \end{gather*}
+\end{lem}
+\begin{proof}
+ As before, by induction on truncation.
+ For instance, since $(\alpha \le \beta) \to (\beta\le\gamma) \to (\alpha\le\gamma)$ is a mere proposition, by induction on 0-truncation we may assume $\alpha$, $\beta$, and $\gamma$ are $\cd{A}$, $\cd{B}$, and $\cd{C}$ respectively.
+ Now since $\cd{A} \le \cd{C}$ is a mere proposition, by induction on $(-1)$-truncation we may assume given injections $f:A\to B$ and $g:B\to C$.
+ But then $g\circ f$ is an injection from $A$ to $C$, so $\cd{A} \le \cd{C}$ holds.
+ Reflexivity is even easier.
+\end{proof}
+
+We may likewise show that cardinal inequality is compatible with the semiring operations.
+
+\begin{lem}\label{thm:injsurj}
+ \index{function!injective}%
+ \index{function!surjective}%
+ Consider the following statements:
+ \begin{enumerate}
+ \item There is an injection $A\to B$.\label{item:cle-inj}
+ \item There is a surjection $B\to A$.\label{item:cle-surj}
+ \end{enumerate}
+ Then, assuming excluded middle:
+ \index{excluded middle}%
+ \index{axiom!of choice}%
+ \begin{itemize}
+ \item Given $a_0:A$, we have~\ref{item:cle-inj}$\to$\ref{item:cle-surj}.
+ \item Therefore, if $A$ is merely inhabited, we have~\ref{item:cle-inj} $\to$ merely \ref{item:cle-surj}.
+ \item Assuming the axiom of choice, we have~\ref{item:cle-surj} $\to$ merely \ref{item:cle-inj}.
+ \end{itemize}
+\end{lem}
+\begin{proof}
+ If $f:A\to B$ is an injection, define $g:B\to A$ at $b:B$ as follows.
+ Since $f$ is injective, the fiber of $f$ at $b$ is a mere proposition.
+ Therefore, by excluded middle, either there is an $a:A$ with $f(a)=b$, or not.
+ In the first case, define $g(b)\defeq a$; otherwise set $g(b)\defeq a_0$.
+ Then for any $a:A$, we have $a = g(f(a))$, so $g$ is surjective.
+
+ The second statement follows from this by induction on truncation.
+ For the third, if $g:B\to A$ is surjective, then by the axiom of choice, there merely exists a function $f:A\to B$ with $g(f(a)) = a$ for all $a$.
+ But then $f$ must be injective.
+\end{proof}
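For finite sets, the excluded-middle case split in the first part ("either there is an $a$ with $f(a)=b$, or not") is decidable, so the construction of $g$ from $f$ can be carried out literally. The names below are illustrative:

```python
def surjection_from_injection(f, A, a0):
    """Given an injection f defined on the finite set A and a default
    element a0 in A, return g with g(f(a)) = a for all a in A, so g is
    surjective onto A.  The decidable membership test 'is b in the image
    of f?' plays the role of excluded middle in the proof above."""
    inv = {f(a): a for a in A}   # well-defined because f is injective
    return lambda b: inv.get(b, a0)

A = [0, 1, 2]
f = lambda a: 10 * a             # an injection out of A
g = surjection_from_injection(f, A, a0=0)
```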
+
+\begin{thm}[Schroeder--Bernstein]
+ \index{theorem!Schroeder--Bernstein}%
+ \index{Schroeder--Bernstein theorem}%
+ Assuming excluded middle, for sets $A$ and $B$ we have
+ \[ \inj(A,B) \to \inj(B,A) \to (A\cong B) \]
+\end{thm}
+\begin{proof}
+ The usual ``back-and-forth'' argument applies without significant changes.
+ Note that it actually constructs an isomorphism $A\cong B$ (assuming excluded middle so that we can decide whether a given element belongs to a cycle, an infinite chain, a chain beginning in $A$, or a chain beginning in $B$).
+\end{proof}
+
+\begin{cor}
+ Assuming excluded middle, cardinal inequality is a partial order, i.e.\ for $\alpha,\beta:\card$ we have
+ \[ (\alpha\le\beta) \to (\beta\le\alpha) \to (\alpha=\beta). \]
+\end{cor}
+\begin{proof}
+ Since $\alpha=\beta$ is a mere proposition, by induction on truncation we may assume $\alpha$ and $\beta$ are $\cd{A}$ and $\cd{B}$, respectively, and that we have injections $f:A\to B$ and $g:B\to A$.
+ But then the Schroeder--Bernstein theorem gives an isomorphism $A\cong B$, hence an equality $\cd{A}=\cd{B}$.
+\end{proof}
+
+Finally, we can reproduce Cantor's theorem, showing that for every cardinal there is a greater one.
+
+\begin{thm}[Cantor]
+ \index{Cantor's theorem}%
+ \index{theorem!Cantor's}%
+ For $A:\set$, there is no surjection $A \to (A\to \bool)$.
+\end{thm}
+\begin{proof}
+ Suppose $f:A \to (A\to \bool)$ is any function, and define $g:A\to \bool$ by $g(a) \defeq \neg f(a)(a)$.
+ If $g = f(a_0)$, then $g(a_0) = f(a_0)(a_0)$ but $g(a_0) = \neg f(a_0)(a_0)$, a contradiction.
+ Thus, $f$ is not surjective.
+\end{proof}
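The diagonal construction in this proof is completely explicit, so on a finite set it can simply be run. Here a function $A \to \bool$ is encoded as a dict of booleans (a toy encoding chosen for this sketch):

```python
def diagonal(f, A):
    """Given f : A -> (A -> bool), encoded as a dict of dicts, return
    the diagonal function g(a) = not f(a)(a), which differs from every
    f(a) at the point a, so g is not in the image of f."""
    return {a: not f[a][a] for a in A}

A = [0, 1, 2]
f = {0: {0: True,  1: False, 2: True},
     1: {0: False, 1: False, 2: False},
     2: {0: True,  1: True,  2: True}}
g = diagonal(f, A)
```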
+
+\begin{cor}
+ Assuming excluded middle, for any $\alpha:\card$, there is a cardinal $\beta$ such that $\alpha\le\beta$ and $\alpha\neq\beta$.
+\end{cor}
+\begin{proof}
+ Let $\beta = 2^\alpha$.
+ Now we want to show a mere proposition, so by induction we may assume $\alpha$ is $\cd{A}$, so that $\beta\jdeq \cd{A\to \bool}$.
+ Using excluded middle, we have a function $f:A\to (A\to \bool)$ defined by
+ \[f(a)(a') \defeq
+ \begin{cases}
+ \btrue &\quad a=a'\\
+ \bfalse &\quad a\neq a'.
+ \end{cases}
+ \]
+ And if $f(a)=f(a')$, then $f(a')(a) = f(a)(a) = \btrue$, so $a=a'$; hence $f$ is injective.
+ Thus, $\alpha \jdeq \cd{A} \le \cd{A\to \bool} \jdeq 2^\alpha$.
+
+ On the other hand, if $2^\alpha \le \alpha$, then we would have an injection $(A\to\bool)\to A$.
+ By \cref{thm:injsurj}, since we have $(\lam{x} \bfalse):A\to \bool$ and excluded middle, there would then be a surjection $A \to (A\to \bool)$, contradicting Cantor's theorem.
+\end{proof}
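The injection $f$ in this corollary is the "indicator" map sending $a$ to the characteristic function of $\{a\}$; on a finite set it is easy to compute and to see that it is injective. A small sketch (the function name is hypothetical):

```python
def indicator(a, A):
    """The injection a |-> (a' |-> [a = a']) from the corollary's proof:
    each a is sent to the characteristic function of the singleton {a}."""
    return {x: (x == a) for x in A}

A = [0, 1, 2]
rows = [indicator(a, A) for a in A]
# Injectivity: distinct elements yield distinct characteristic functions.
distinct = len({tuple(sorted(r.items())) for r in rows})
```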
+
+\section{Ordinal numbers}
+\label{sec:ordinals}
+
+\index{ordinal|(}%
+
+\begin{defn}\label{defn:accessibility}
+ Let $A$ be a set and
+ \[(\blank<\blank):A\to A\to \prop\]
+ a binary relation on $A$.
+ We define by induction what it means for an element $a:A$ to be \define{accessible}
+ \indexdef{accessibility}%
+ \indexsee{accessible}{accessibility}%
+ by $<$:
+ \begin{itemize}
 \item If $b$ is accessible for every $b<a$, then $a$ is accessible.
+ Thus, $A'$ is itself an ordinal.
+
+ Finally, since \ord is an ordinal, we can take $A\defeq\ord$.
+ Let $X'$ be the image of $g_\ord':\ord' \to X$; then the inverse of $g_\ord'$ yields an injection $H:X'\to \ord$.
+ By \cref{thm:ordunion}, there is an ordinal $C$ such that $Hx\le C$ for all $x:X'$.
 Then by \cref{thm:ordsucc}, there is a further ordinal $D$ such that $C<D$.
+In particular, the countable chain condition will show that $\Po$ preserves all the cardinals.
+Then, we'll show that $\opname{Add}(\omega, \omega_2)$ does indeed have this property.
+This will complete the proof.
+
+We isolate a useful lemma:
+\begin{lemma}[Possible values argument]
+ Suppose $M$ is a transitive model of $\ZFC$ and $\Po$ is a partial order
+ such that $\Po$ has the $\kappa$-chain condition in $M$.
+ Let $X,Y \in M$ and let $f: X \to Y$
+ be some function in $M[G]$, but $f \notin M$.
+
+ Then there exists a function $F \in M$, with $F: X \to \PP(Y)$ and such that
+ for any $x \in X$,
+ \[ f(x) \in F(x) \quad\text{and}\quad \left\lvert F(x) \right\rvert^M < \kappa. \]
+\end{lemma}
What this is saying is that if $f$ is some new function that's generated,
$M$ is still able to pin down each value $f(x)$ to a set of fewer than $\kappa$ candidates.
+
+\begin{proof}
+ The idea behind the proof is easy: any possible value of $f$ gives us some condition in
+ the poset $\Po$ which forces it.
+ Since distinct values must have incompatible conditions,
 the $\kappa$-chain condition guarantees
 there are fewer than $\kappa$ such values.
+
+ Here are the details.
+ Let $\dot f$, $\check X$, $\check Y$ be names for $f$, $X$, $Y$.
+ Start with a condition $p$ such that $p$ forces the sentence
+ \[ \text{``$\dot f$ is a function from $\check X$ to $\check Y$''}. \]
+ We'll work just below here.
+
 For each $x \in X$, use the Axiom of Choice to pick a maximal strong antichain $A(x)$
 of conditions $q \le p$ such that each $q$ forces $\dot f(\check x)$ to equal some particular value $y \in Y$.
 Then, we let $F(x)$ collect all the resulting $y$-values.
 By maximality, every possible value of $f(x)$ appears in $F(x)$,
 and by the $\kappa$-chain condition there are fewer than $\kappa$ of them.
+\end{proof}
+
+\section{Preserving cardinals}
+As we saw earlier, cardinal collapse can still occur.
+For the Continuum Hypothesis we want to avoid this possibility,
+so we can add in $\aleph_2^M$ many real numbers and have $\aleph_2^{M[G]} = \aleph_2^M$.
+It turns out that to verify this, one can check a weaker result.
+
+\begin{definition}
+ For $M$ a transitive model of $\ZFC$ and $\Po \in M$ a poset,
+ we say $\Po$ \vocab{preserves cardinals} if
+ $\forall G \subseteq \Po$ an $M$-generic,
+ the model $M$ and $M[G]$ agree on the sentence ``$\kappa$ is a cardinal'' for every $\kappa$.
+ Similarly we say $\Po$ \vocab{preserves regular cardinals} if $M$ and $M[G]$
+ agree on the sentence ``$\kappa$ is a regular cardinal'' for every $\kappa$.
+\end{definition}
+Intuition:
+In a model $M$, it's possible that two ordinals which are in bijection in $V$ are no longer in bijection in $M$.
+Similarly, it might be the case that some cardinal $\kappa \in M$ is regular,
+but stops being regular in $V$ because some function $f : \ol\kappa \to \kappa$ is cofinal but happened to only exist in $V$.
In still other words, ``$\kappa$ is a regular cardinal'' turns out to be a $\Pi_1$ statement too.
+
+Fortunately, each implies the other.
+We quote the following without proof.
+\begin{proposition}[Preserving cardinals $\iff$ preserving regular cardinals]
+ Let $M$ be a transitive model of $\ZFC$.
+ Let $\Po \in M$ be a poset.
+ Then for any $\lambda$,
+ $\Po$ preserves cardinalities less than or equal to $\lambda$
+ if and only if $\Po$ preserves regular cardinals less than or equal to $\lambda$.
+ Moreover the same holds if we replace ``less than or equal to''
+ by ``greater than or equal to''.
+\end{proposition}
+
Thus, to show that $\Po$ preserves cardinals and cofinalities
it suffices to show that $\Po$ preserves regular cardinals.
+The following theorem lets us do this:
+\begin{theorem}[Chain conditions preserve regular cardinals]
+ Let $M$ be a transitive model of ZFC, and let $\Po \in M$ be a poset.
 Suppose $M$ satisfies the sentence ``$\Po$ has the $\kappa$-chain condition and $\kappa$ is regular''.
 Then $\Po$ preserves regular cardinals greater than or equal to $\kappa$.
+\end{theorem}
+\begin{proof}
+ Use the Possible Values Argument.
+ \Cref{prob:chain}.
+\end{proof}
+
+In particular, if $\Po$ has the countable chain condition then $\Po$ preserves \emph{all} the cardinals (and cofinalities).
+Therefore, it remains to show that $\opname{Add}(\omega, \omega_2)$ satisfies the countable chain condition.
+
+\section{Infinite combinatorics}
+We now prove that $\opname{Add}(\omega, \omega_2)$ satisfies the countable chain condition.
+This is purely combinatorial, and so we work briefly.
+
+\begin{definition}
+ Suppose $C$ is an uncountable collection of finite sets.
+ $C$ is a \vocab{$\Delta$-system} if there exists a \vocab{root} $R$
+ with the condition that for any distinct $X$ and $Y$
+ in $C$, we have $X \cap Y = R$.
+\end{definition}
+
+\begin{lemma}
+ [$\Delta$-System lemma] Suppose $C$ is an uncountable collection of finite sets.
+ Then $\exists \ol C \subseteq C$ such that $\ol C$ is an uncountable $\Delta$-system.
+\end{lemma}
+\begin{proof}
 There exists an integer $n$ such that $C$ has uncountably many guys of size $n$.
+ So we can throw away all the other sets, and just assume that all sets in $C$ have size $n$.
+
+ We now proceed by induction on $n$.
+ The base case $n=1$ is trivial, since we can just take $R = \varnothing$.
+ For the inductive step we consider two cases.
+
+ First, assume there exists an $a \in C$ contained in uncountably many $F \in C$.
+ Throw away all the other guys.
+ Then we can just delete $a$, and apply the inductive hypothesis.
+
+ Now assume that for every $a$, only countably many members of $C$ have $a$ in them.
+ We claim we can even get a $\ol C$ with $R = \varnothing$.
 First, pick $F_0 \in C$.
 Since each element of $F_0$ lies in only countably many members of $C$,
 uncountably many members of $C$ are disjoint from $F_0$; pick one of them as $F_1$.
 Continuing by transfinite recursion up to $\omega_1$
 (at each stage, only countably many members of $C$ meet the countably many sets chosen so far),
 we obtain an uncountable pairwise disjoint subfamily $\left\{ F_\alpha \right\}$, as claimed.
+\end{proof}
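On a small finite family one can hunt for a $\Delta$-subsystem by brute force; of course "uncountable" degrades to "a few" in this toy, so the sketch only illustrates what the root condition means, not the lemma's cardinality content. Both function names are hypothetical:

```python
from itertools import combinations

def is_delta_system(sets):
    """True if all pairwise intersections of the given sets coincide
    (that common intersection being the root R)."""
    inters = {frozenset(a) & frozenset(b) for a, b in combinations(sets, 2)}
    return len(inters) <= 1

def largest_delta_subsystem(collection):
    """Brute-force search for a largest sub-collection forming a Delta-system."""
    for k in range(len(collection), 1, -1):
        for sub in combinations(collection, k):
            if is_delta_system(sub):
                return [set(s) for s in sub]
    return [set(collection[0])]

family = [{1, 2}, {1, 3}, {1, 4}, {2, 3}]
sunflower = largest_delta_subsystem(family)   # root R = {1}
```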
+
+\begin{lemma}
+ For all $\kappa$, $\opname{Add}(\omega, \kappa)$ satisfies the countable chain condition.
+\end{lemma}
+\begin{proof}
+ Assume not. Let
+ \[ \left\{ p_\alpha : \alpha < \omega_1 \right\} \]
+ be a strong antichain. Let
+ \[ C = \left\{ \dom(p_\alpha) : \alpha < \omega_1 \right\}. \]
+ Let $\ol C \subseteq C$ be such that $\ol C$ is uncountable, and $\ol C$ is a $\Delta$-system with root $R$.
+ Then let
 \[ B = \left\{ p_\alpha : \dom(p_\alpha) \in \ol C \right\}. \]
 Each $p_\alpha \in B$ restricts to a function $p_\alpha|_R : R \to \{0,1\}$,
 and since $R$ is finite there are only finitely many such restrictions,
 while $B$ is uncountable.
 So two distinct conditions $p_\alpha$ and $p_\beta$ agree on $R = \dom(p_\alpha) \cap \dom(p_\beta)$,
 hence are compatible, contradicting the assumption that we chose an antichain.
+\end{proof}
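The pigeonhole step at the end of this proof (finitely many restrictions to the root, uncountably many conditions) can be run directly on a toy family of conditions; the helper name is hypothetical:

```python
def compatible_pair_via_root(conditions, root):
    """Pigeonhole step of the proof: among partial functions (dicts)
    whose pairwise domain intersections all equal `root`, once there are
    more than 2**len(root) of them, two must agree on the root; any two
    such conditions are compatible, since their union is again a function."""
    seen = {}
    for p in conditions:
        key = tuple(p[x] for x in sorted(root))   # restriction of p to the root
        if key in seen:
            return seen[key], p
        seen[key] = p
    return None

root = {0}
ps = [{0: 0, 1: 1}, {0: 1, 2: 0}, {0: 0, 3: 1}]   # a Delta-system of domains with root {0}
pair = compatible_pair_via_root(ps, root)
```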
+
+Thus, we have proven that the Continuum Hypothesis cannot be proven in $\ZFC$.
+
+\section\problemhead
+\begin{problem}
+ \label{prob:chain}
+ Let $M$ be a transitive model of ZFC, and let $\Po \in M$ be a poset.
 Suppose $M$ satisfies the sentence ``$\Po$ has the $\kappa$-chain condition and $\kappa$ is regular''.
 Show that $\Po$ preserves regular cardinals greater than or equal to $\kappa$.
+ \begin{hint}
+ Assume not, and take $\lambda > \kappa$ regular in $M$;
+ if $f : \ol \lambda \to \lambda$,
+ use the Possible Values Argument on $f$ to generate a function in $M$
+ that breaks cofinality of $\lambda$.
+ \end{hint}
+ \begin{sol}
+ Consider $\lambda > \kappa$ which is regular in $M$,
+ and suppose for contradiction that $\lambda$ is not regular in $M[G]$.
+ That's the same as saying that there is a function $f \in M[G]$,
+ $f : \ol \lambda \to \lambda$ cofinal, with $\ol \lambda < \lambda$.
+ Then by the Possible Values Argument,
+ there exists a function $F \in M$ from $\ol \lambda \to \PP(\lambda)$
+ such that $f(\alpha) \in F(\alpha)$ and $\left\lvert F(\alpha) \right\rvert^M < \kappa$
+ for every $\alpha$.
+
+ Now we work in $M$ again.
+ Note for each $\alpha \in \ol\lambda$,
+ $F(\alpha)$ is bounded in $\lambda$ since $\lambda$ is regular in $M$ and
+ greater than $\left\lvert F(\alpha) \right\rvert$.
+ Now consider the function $\ol \lambda \to \lambda$ in $M$ given by
+ \[ \alpha \mapsto \sup F(\alpha). \]
+ This is cofinal in $\lambda$, contradicting the regularity of $\lambda$ in $M$.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/abelian.tex b/books/napkin/abelian.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ecdb680f5e8ad91f0b012640b6d8e2e97e0622c1
--- /dev/null
+++ b/books/napkin/abelian.tex
@@ -0,0 +1,451 @@
+\chapter{Abelian categories}
+In this chapter I'll translate some more familiar concepts into categorical language;
+this will require some additional assumptions about our category,
+culminating in the definition of a so-called ``abelian category''.
+Once that's done, I'll be able to tell you what this ``diagram chasing'' thing is all about.
+
+Throughout this chapter, ``$\injto$'' will be used for monic maps and ``$\surjto$'' for epic maps.
+
+\section{Zero objects, kernels, cokernels, and images}
+\prototype{In $\catname{Grp}$, the trivial group and homomorphism
+are the zero objects and morphisms.
+If $G$, $H$ are abelian then the cokernel
+of $\phi : G \to H$ is $H/\img \phi$.}
+
+A \vocab{zero object} of a category is an object $0$ which is both initial and terminal;
+of course, it's unique up to unique isomorphism.
+For example, in $\catname{Grp}$ the zero object is the trivial group,
+in $\catname{Vect}_k$ it's the zero-dimensional vector space consisting of one point, and so on.
+\begin{ques}
+ Show that $\catname{Set}$ and $\catname{Top}$ don't have zero objects.
+\end{ques}
+For the rest of this chapter, all categories will have zero objects.
+
+In a category $\AA$ with zero objects, any two objects $A$ and $B$ thus have a distinguished morphism
+\[ A \to 0 \to B \]
+which is called the \vocab{zero morphism} and also denoted $0$.
+For example, in $\catname{Grp}$ this is the trivial homomorphism.
+
+We can now define:
+\begin{definition}
+ Consider a map $A \taking f B$.
+ The \vocab{kernel} is defined as the equalizer of this map and the map $A \taking 0 B$.
+ Thus, it's a map $\ker f \colon \Ker f \injto A$ such that
+ \begin{center}
+ \begin{tikzcd}
+ \Ker f \ar[rd, "0", dashed] \ar[d, "\ker f"', hook] \\
+ A \ar[r, "f"'] & B
+ \end{tikzcd}
+ \end{center}
+ commutes, and moreover any other map with the same property factors uniquely through $\Ker f$
+ (so it is universal with this property).
+ By \Cref{prob:equalizer_monic}, $\ker f$ is a monic morphism,
+ which justifies the use of ``$\injto$''.
+\end{definition}
+Notice that we're using $\ker f$ to represent the map and $\Ker f$ to represent the object.
+Similarly, we define the cokernel, the dual notion:
+\begin{definition}
+ Consider a map $A \taking f B$.
+ The \vocab{cokernel} of $f$ is a map $\coker f \colon B \surjto \Coker f$ such that
+ \begin{center}
+ \begin{tikzcd}
+ A \ar[r, "f"] \ar[rd, "0"', dashed] & B \ar[d, "\coker f", two heads] \\
+ & \Coker f
+ \end{tikzcd}
+ \end{center}
+ commutes, and moreover any other map with the same property factors
+ uniquely through $\Coker f$ (so it is universal with this property).
+ Thus it is the ``coequalizer'' of this map and the map $A \taking 0 B$.
+ By the dual of \Cref{prob:equalizer_monic}, $\coker f$ is an epic morphism,
+ which justifies the use of ``$\surjto$''.
+\end{definition}
+Think of the cokernel of a map $A \taking f B$ as ``$B$ modulo the image of $f$''.
+\begin{example}
+ [Cokernels]
+ Consider the map $\Zc6 \to D_{12} = \left\langle r,s \mid r^6=s^2=1, rs=sr\inv\right\rangle$ sending $1 \mapsto r$.
+ Then the cokernel of this map in $\catname{Grp}$ is $D_{12} / \left\langle r \right\rangle \cong \Zc 2$.
+\end{example}
+This doesn't always work out quite the way we want since in general the image of
+a homomorphism need not be normal in the codomain.
+Nonetheless, we can use this to define:
+\begin{definition}
+ The \vocab{image} of $A \taking f B$ is the kernel of $\coker f$.
+ We denote $\Img f = \Ker(\coker f)$.
+ This gives a unique map $\img f : A \to \Img f$.
+\end{definition}
+When it exists, this coincides with our concrete notion of ``image''.
+Picture:
+\begin{center}
+\begin{tikzcd}
+ A \ar[rd, "\exists!"'] \ar[rr, "f"] \ar[rrrd, "0"', near start, dashed]
+ && B \ar[rd, two heads, "\coker f"] \\
+ & \Img f \ar[ur, hook] \ar[rr, "0", dashed] && \Coker f
+\end{tikzcd}
+\end{center}
+Note that by universality of $\Img f$,
+we find that there is a unique map
+$\img f \colon A \to \Img f$ that makes the entire diagram commute.
+
+\section{Additive and abelian categories}
+\prototype{$\catname{Ab}$, $\catname{Vect}_k$, or more generally $\catname{Mod}_R$.}
+We can now define the notion of an additive and abelian category,
+which are the types of categories where this notion is most useful.
+
+\begin{definition}
+ An \vocab{additive category} $\AA$ is one such that:
+ \begin{itemize}
+ \ii $\AA$ has a zero object, and any two objects have a product.
+ \ii More importantly: every $\Hom_\AA(A, B)$ forms an \emph{abelian group} (written additively)
+ such that composition distributes over addition:
+ \[ (g+h)\circ f = g\circ f + h\circ f
+ \quad\text{and}\quad
+ f\circ(g+h) = f\circ g + f \circ h. \]
+ The zero map serves as the identity element for each group.
+\end{itemize}
+\end{definition}
+\begin{definition}
+ An \vocab{abelian category} $\AA$ is one with the additional properties that
+ for any morphism $A \taking f B$,
+ \begin{itemize}
+ \ii The kernel and cokernel exist, and
+ \ii The morphism factors through the image so that $\img(f)$ is epic.
+ \end{itemize}
+ So, this yields a diagram
+ \begin{center}
+ \begin{tikzcd}
+ \Ker(f) \ar[rd, hook, "\ker(f)"']
+ &&
+ \Img(f) \ar[rd, hook]
+ &&
+ \Coker(f) \\
+ & A \ar[ru, "\img(f)", two heads] \ar[rr, "f"', dashed]
+ && B \ar[ru, "\coker(f)"', two heads]
+ \end{tikzcd}
+ \end{center}
+\end{definition}
+
+\begin{example}[Examples of abelian categories]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\catname{Vect}_k$, $\catname{Ab}$ are abelian categories,
+ where $f+g$ takes its usual meaning.
+ \ii Generalizing this, the category $\catname{Mod}_R$ of $R$-modules is abelian.
+ \ii $\catname{Grp}$ is not even additive, because there is no way to assign
+ a commutative addition to pairs of morphisms.
+ \end{enumerate}
+\end{example}
+
+In general, once you assume a category is abelian, all the properties you would guess
+to hold for kernels, cokernels, and so on are indeed true.
+For example,
+\begin{proposition}[Monic $\iff$ trivial kernel]
+ A map $A \taking f B$ is monic if and only if its kernel is $0 \to A$.
+ Dually, $A \taking f B$ is epic if and only if its cokernel is $B \to 0$.
+\end{proposition}
+\begin{proof}
+ The easy direction is:
+ \begin{exercise}
+ Show that if $A \taking f B$ is monic, then $0 \to A$ is a kernel.
+ (This holds even in non-abelian categories.)
+ \end{exercise}
+ Of course, since kernels are unique up to isomorphism, monic $\implies$ $0$ kernel.
+ On the other hand, assume that $0 \to A$ is a kernel of $A \taking f B$.
+ For this we can exploit the group structure of the underlying homomorphisms now.
+ Assume the diagram
+ \begin{center}
+ \begin{tikzcd}
+ Z \ar[r, "g", shift left] \ar[r, "h"', shift right] & A \ar[r, "f"] & B
+ \end{tikzcd}
+ \end{center}
+ commutes.
+ Then $f \circ (g - h) = f \circ g - f \circ h = 0$, and we've arrived at a commutative diagram.
+ \begin{center}
+ \begin{tikzcd}
+ Z \ar[d, "g-h"'] \ar[rd, dashed, "0"] & \\
+ A \ar[r, "f"'] & B
+ \end{tikzcd}
+ \end{center}
+ But since $0 \to A$ is a kernel it follows that $g-h$ factors through $0$,
+ so $g-h = 0 \implies g = h$, which is to say that $f$ is monic.
+\end{proof}
+\begin{proposition}[Isomorphism $\iff$ monic and epic]
+ In an abelian category,
+ a map is an isomorphism if and only if it is monic and epic.
+\end{proposition}
+\begin{proof}
+ Omitted. (The Mitchell embedding theorem
+ presented later implies this anyways for
+ most situations we care about,
+ by looking at a small sub-category.)
+\end{proof}
+
+\section{Exact sequences}
+\prototype{$0 \to G \to G \times H \to H \to 0$ is exact.}
+Exact sequences will seem exceedingly unmotivated until you learn about homology groups,
+which is one of the most natural places that exact sequences appear.
+In light of this, it might be worth trying to read the chapter on homology groups
+simultaneously with this one.
+
+First, let me state the definition for groups, to motivate the general categorical definition.
+A sequence of groups
+\[ G_0 \taking{f_1} G_1 \taking{f_2} G_2 \taking{f_3} \dots \taking{f_n} G_n \]
+is \emph{exact} at $G_k$ if the image of $f_k$ is the kernel of $f_{k+1}$.
+We say the entire sequence is exact if it's exact at $G_k$ for each $k=1,\dots,n-1$.
+\begin{example}
+ [Exact sequences]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The sequence
+ \[ 0 \to \Zc 3
+ \overset{\times 5}{\injto} \Zc{15}
+ \surjto \Zc{5}
+ \to 0 \]
+ is exact.
+ Actually, $0 \to G \injto G \times H \surjto H \to 0$ is exact in general.
+ (Here $0$ denotes the trivial group.)
+ \ii For groups, the sequence $0 \to A \to B$ is exact if and only if $A \to B$ is injective.
+ \ii For groups, the sequence $A \to B \to 0$ is exact if and only if $A \to B$ is surjective.
+ \end{enumerate}
+\end{example}
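If you like, the first example can be checked by hand or by machine. Here is a quick computational sanity check (my own sketch, not part of the text; the encoding of $\Zc n$ as `range(n)` is an ad-hoc choice):

```python
# Check exactness of 0 -> Z/3 -> Z/15 -> Z/5 -> 0,
# where f(x) = 5x mod 15 and g(y) = y mod 5.
Z3, Z15, Z5 = range(3), range(15), range(5)
f = {x: (5 * x) % 15 for x in Z3}
g = {y: y % 5 for y in Z15}

image_f = set(f.values())
kernel_g = {y for y in Z15 if g[y] == 0}

assert len(image_f) == 3           # f is injective (exactness at Z/3)
assert image_f == kernel_g         # exactness at Z/15: img f = ker g
assert set(g.values()) == set(Z5)  # g is surjective (exactness at Z/5)
```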
+
+Now, we want to mimic this definition in a general \emph{abelian} category $\AA$.
+So, let's write down a criterion for when $A \taking f B \taking g C$ is exact.
+First, we had better have that $g \circ f = 0$,
+which encodes the fact that $\img(f) \subseteq \ker(g)$.
+Adding in all the relevant objects, we get the commutative diagram below.
+\begin{center}
+\begin{tikzcd}
+ A \ar[rd, "f"] \ar[rr, dashed, "0"] \ar[dd, "\img f"', two heads] && C \\
+ & B \ar[ru, "g"] \\
+ \Img f \ar[ru, hook, "\iota"] \ar[rr, dashed, "\exists!"] &&
+ \Ker g \ar["0"', dashed, uu] \ar[lu, hook']
+\end{tikzcd}
+\end{center}
+Here the map $A \surjto \Img f$ is epic since we are assuming $\AA$ is an abelian category.
+So, we have that
+\[ (g \circ \iota) \circ \img f = g \circ (\iota \circ \img f) = g \circ f = 0, \]
+and since $\img f$ is epic, this means that $g \circ \iota = 0$.
+So there is a \emph{unique} map $\Img f \to \Ker g$, and we require that this diagram commutes.
+In short,
+\begin{definition}
+ Let $\AA$ be an abelian category. The sequence
+ \[ \dots \to A_{n-1} \taking{f_n} A_n \taking{f_{n+1}} A_{n+1} \to \dots \]
+ is \vocab{exact} at $A_n$ if $f_{n+1} \circ f_n = 0$ and
+ the canonical map $\Img f_n \to \Ker f_{n+1}$ is an isomorphism.
+ The entire sequence is exact if it is exact at each $A_i$.
+ (For finite sequences we don't impose condition on the very first and very last object.)
+\end{definition}
+
+\begin{exercise}
+ Show that, as before, $0 \to A \to B$ is exact $\iff$ $A \to B$ is monic.
+\end{exercise}
+
+\section{The Freyd-Mitchell embedding theorem}
+We now introduce the Freyd-Mitchell embedding theorem,
+which essentially says that any abelian category can be realized as a concrete one.
+
+\begin{definition}
+ A category is \vocab{small} if $\obj(\AA)$ is a set (as opposed to a class),
+ i.e.\ there is a ``set of all objects in $\AA$''.
+ For example, $\catname{Set}$ is not small because there is no set of all sets.
+\end{definition}
+
+\begin{theorem}
+ [Freyd-Mitchell embedding theorem]
+ Let $\AA$ be a small abelian category.
+ Then there exists a ring $R$ (with $1$ but possibly non-commutative)
+ and a full, faithful, exact functor from $\AA$ to the category of left $R$-modules.
+\end{theorem}
+Here a functor is \vocab{exact} if it preserves exact sequences.
+This theorem is good because it means
+\begin{moral}
+ You can basically forget about all the weird definitions
+ that work in any abelian category.
+\end{moral}
+Any time you're faced with a statement about an abelian category,
+it suffices to just prove it for a ``concrete'' category
+where injective/surjective/kernel/image/exact/etc.\
+agree with your previous notions.
+A proof by this means is sometimes called \emph{diagram chasing}.
+
+\begin{remark}
+ The ``small'' condition is a technical obstruction
+ that requires the objects of $\AA$ to actually form a set.
+ I'll ignore this distinction,
+ because one can almost always work around it
+ by doing enough set-theoretic technicalities.
+\end{remark}
+
+For example, let's prove:
+\begin{lemma}
+ [Short five lemma]
+ In an abelian category, consider the commutative diagram
+ \begin{center}
+ \begin{tikzcd}
+ 0 \ar[r]
+ & A \ar[r, hook, "p"] \ar[d, "\alpha", "\cong"']
+ & B \ar[r, two heads, "q"] \ar[d, "\beta"]
+ & C \ar[r] \ar[d, "\gamma", "\cong"']
+ & 0 \\
+ 0 \ar[r] & A' \ar[r, hook, "p'"'] & B' \ar[r, two heads, "q'"'] & C' \ar[r] & 0
+ \end{tikzcd}
+ \end{center}
+ and assume the top and bottom rows are exact.
+ If $\alpha$ and $\gamma$ are isomorphisms, then so is $\beta$.
+\end{lemma}
+
+\begin{proof}
+ We prove that $\beta$ is epic (with a similar proof to get monic).
+% One can show that it's possible to take a small subcategory of $\AA$
+% containing the $10$ elements and $13$ arrows above, as well as all necessary
+% kernels, cokernels, et cetera.
+% (Essentially, let $\BB_0$ be the diagram above and let $\BB_{i+1}$ add in any needed objects;
+% then $\bigcup \BB_i$ is a set-sized category).
+ By the embedding theorem we can treat the category as $R$-modules over some $R$.
+ This lets us do a so-called ``diagram chase'' where we move elements around the picture,
+ using the concrete interpretation of our category as $R$-modules.
+
+ Let $b'$ be an element of $B'$.
+ Then $q'(b') \in C'$, and since $\gamma$ is surjective, we have a $c \in C$ such that $\gamma(c) = q'(b')$;
+ since $q$ is surjective, we then find a $b \in B$ such that $q(b) = c$.
+ Picture:
+ \begin{center}
+ \begin{tikzcd}
+ b \in B \ar[r, "q", mapsto] \ar[d, "\beta"', dashed] & c \in C \ar[d, "\gamma", "\cong"', mapsto] \\
+ b' \in B' \ar[r, mapsto, "q'"] & c' \in C'
+ \end{tikzcd}
+ \end{center}
+ Now, it is not necessarily the case that $\beta(b) = b'$.
+ However, since the diagram commutes we at least have that
+ \[ q'(b') = q'(\beta(b)) \]
+ so $b' - \beta(b) \in \Ker q' = \Img p'$, and there is an $a' \in A'$ such that
+ $p'(a') = b' - \beta(b)$;
+ use $\alpha$ now to lift it to $a \in A$.
+ Picture:
+ \begin{center}
+ \begin{tikzcd}
+ a \in A \ar[d, mapsto] & b \in B \\
+ a' \in A' \ar[r, mapsto] & b' - \beta(b) \in B' \ar[r, mapsto] & 0 \in C'
+ \end{tikzcd}
+ \end{center}
+ Then, we have
+ \[
+ \beta(b + p(a)) = \beta b + \beta p a
+ = \beta b + p' \alpha a
+ = \beta b + (b' - \beta b)
+ = b'
+ \]
+ so $b' \in \Img \beta$, which completes the proof that $\beta$ is surjective.
+\end{proof}
+
+\section{Breaking long exact sequences}
+\prototype{First isomorphism theorem.}
+
+In fact, it turns out that any exact sequence breaks into short exact sequences.
+This relies on:
+\begin{proposition}[``First isomorphism theorem'' in abelian categories]
+ \label{prop:break_exact}
+ Let $A \taking f B$ be an arrow of an abelian category.
+ Then there is an exact sequence
+ \[ 0 \to \Ker f \taking{\ker f} A \taking{\img f} \Img f \to 0. \]
+\end{proposition}
+
+\begin{example}
+Let's analyze this theorem in our two examples of abelian categories:
+\begin{enumerate}[(a)]
+ \ii In the category of abelian groups,
+ this is basically the first isomorphism theorem.
+ \ii In the category $\catname{Vect}_k$,
+ this amounts to the rank-nullity theorem, \Cref{thm:rank_nullity}.
+\end{enumerate}
+\end{example}
+Thus, any exact sequence can be broken into short exact sequences, as
+\begin{center}
+ \begin{tikzcd}[sep=0.8em]
+ && 0 \ar[rd] && 0 && 0 \ar[rd] && 0 \\
+ &&& C_{n} \ar[rd] \ar[ru] &&&& C_{n+2} \ar[ru] \ar[rd] \\
+ {\color{red}\dots} \ar[rr, red] \ar[rd]
+ && {\color{red}A_{n-1}} \ar[rr, "f_{n-1}"', red] \ar[ru]
+ && {\color{red}A_n} \ar[rr, "f_n", red] \ar[rd]
+ && {\color{red}A_{n+1}} \ar[rr, "f_{n+1}"', red] \ar[ru]
+ && \dots
+ \\
+ & C_{n-1} \ar[ru] \ar[rd] &&&& C_{n+1} \ar[ru] \ar[rd] \\
+ 0 \ar[ru] && 0 && 0 \ar[ru] && 0
+\end{tikzcd}
+\end{center}
+where $C_k = \Img f_{k-1} = \Ker f_k$ for every $k$.
+
+\section\problemhead
+\begin{problem}
+ [Four lemma]
+ In an abelian category, consider the commutative diagram
+ \begin{center}
+ \begin{tikzcd}
+ A \ar[r, "p"] \ar[d, "\alpha"', two heads]
+ & B \ar[r, "q"] \ar[d, "\beta"', hook]
+ & C \ar[r, "r"] \ar[d, "\gamma"']
+ & D \ar[d, "\delta"', hook] \\
+ A' \ar[r, "p'"'] & B' \ar[r, "q'"'] & C' \ar[r, "r'"'] & D'
+ \end{tikzcd}
+ \end{center}
+ where the first and second rows are exact.
+ Prove that if $\alpha$ is epic, and $\beta$ and $\delta$ are monic,
+ then $\gamma$ is monic.
+ \begin{sol}
+ Let $c \in C$ with $\gamma(c) = 0$.
+ We show $c = 0$.
+ This proceeds in a diagram chase:
+ \begin{itemize}
+ \ii Note that $0 = r'(\gamma(c)) = \delta(r(c))$, and since $\delta$
+ is injective, it follows that $r(c) = 0$.
+ \ii Since the top row is exact,
+ it follows $c = q(b)$ for some $b \in B$.
+ \ii Then $q'(\beta(b)) = 0$,
+ so if we let $b' = \beta(b)$,
+ then $b' \in \ker(q')$.
+ As the bottom row is exact, there exists $a'$ with $p'(a') = b'$.
+ \ii Since $\alpha$ is surjective,
+ there is $a \in A$ with $\alpha(a) = a'$.
+ \ii Since $\beta(p(a)) = p'(\alpha(a)) = p'(a') = b' = \beta(b)$
+ and $\beta$ is injective, it follows that $p(a) = b$.
+ \ii Since the top row is exact, and $b$ is in the image of $p$,
+ it follows that $0 = q(b) = c$ as needed.
+ \end{itemize}
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Five lemma]
+ \gim
+ In an abelian category, consider the commutative diagram
+ \begin{center}
+ \begin{tikzcd}
+ A \ar[r, "p"] \ar[d, "\alpha"', two heads]
+ & B \ar[r, "q"] \ar[d, "\beta"', "\cong"]
+ & C \ar[r, "r"] \ar[d, "\gamma"']
+ & D \ar[r, "s"] \ar[d, "\delta"', "\cong"]
+ & E \ar[d, "\eps"', hook] \\
+ A' \ar[r, "p'"'] & B' \ar[r, "q'"'] & C' \ar[r, "r'"'] & D' \ar[r, "s'"'] & E'
+ \end{tikzcd}
+ \end{center}
+ where the two rows are exact,
+ $\beta$ and $\delta$ are isomorphisms,
+ $\alpha$ is epic, and $\eps$ is monic.
+ Prove that $\gamma$ is an isomorphism.
+\end{problem}
+
+\begin{sproblem}
+ [Snake lemma]
+ \gim
+ In an abelian category, consider the diagram
+ \begin{center}
+ \begin{tikzcd}
+ & A \ar[r, "f"] \ar[d, "a"] & B \ar[r, "g", two heads] \ar[d, "b"] & C \ar[r] \ar[d, "c"] & 0 \\
+ 0 \ar[r] & A' \ar[r, hook, "f'"'] & B' \ar[r, "g'"'] & C'
+ \end{tikzcd}
+ \end{center}
+ where the first and second rows are exact sequences.
+ Prove that there is an exact sequence
+ \[ \Ker a \to \Ker b \to \Ker c \to \Coker a \to \Coker b \to \Coker c. \]
+\end{sproblem}
diff --git a/books/napkin/action.tex b/books/napkin/action.tex
new file mode 100644
index 0000000000000000000000000000000000000000..541998f6a3f99880777ec99b5a9b40455f0e2421
--- /dev/null
+++ b/books/napkin/action.tex
@@ -0,0 +1,339 @@
+\chapter{Group actions overkill AIME problems}
+Consider this problem from the 1996 AIME:
+\begin{quote}
+ (AIME 1996) Two of the squares of a $7 \times 7$ checkerboard are painted yellow, and the rest are painted green. Two color schemes are equivalent if one can be obtained from the other by applying a rotation in the plane of the board. How many inequivalent color schemes are possible?
+\end{quote}
+
+What's happening here? Let $X$ be the set of the $\binom{49}{2}$ possible colorings of the board.
+What's the natural interpretation of ``rotation''?
+Answer: the group $\Zc 4 = \left\langle r \right\rangle$ somehow ``acts'' on this set $X$ by sending one state $x \in X$ to another state $r \cdot x$, which is just $x$ rotated by $90\dg$.
+Intuitively we're just saying that two configurations are the same if they can be reached from one another by this ``action''.
+
+We can make all of this precise using the idea of a group action.
+
+\section{Definition of a group action}
+\prototype{The AIME problem.}
+
+\begin{definition}
+ Let $X$ be a set and $G$ a group.
+ A \vocab{group action} is a binary operation $\cdot : G \times X \to X$
+ which lets a $g \in G$ send an $x \in X$ to $g \cdot x$.
+ It satisfies the axioms
+ \begin{itemize}
+ \ii $(g_1g_2) \cdot x = g_1 \cdot (g_2 \cdot x)$ for any $g_1, g_2 \in G$
+ for all $x \in X$.
+ \ii $1_G \cdot x = x$ for any $x \in X$.
+ \end{itemize}
+\end{definition}
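The two axioms are concrete enough to check by computer for the AIME action. Below is a minimal sketch (my own code, not part of the text; the encoding of a coloring as a frozenset of two cells, and the names `rot` and `act`, are ad-hoc choices):

```python
from itertools import combinations

def rot(cell):
    # one 90-degree rotation of a cell on the 7x7 board
    r, c = cell
    return (c, 6 - r)

def act(k, coloring):
    # the element k of Z/4 acts by rotating the coloring k times
    for _ in range(k % 4):
        coloring = frozenset(rot(cell) for cell in coloring)
    return coloring

# X = all colorings of the board with exactly two yellow squares
cells = [(r, c) for r in range(7) for c in range(7)]
X = [frozenset(p) for p in combinations(cells, 2)]

x = X[0]
assert act(0, x) == x                      # identity axiom: 1_G . x = x
assert act(1 + 2, x) == act(1, act(2, x))  # (g1 g2) . x = g1 . (g2 . x)
```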
+
+\begin{example}[Examples of group actions]
+ Let $G=(G,\star)$ be a group.
+ \begin{enumerate}[(a)]
+ \ii The group $\Zc 4$ can act on the set of ways to color a $7 \times 7$
+ board either yellow or green.
+ \ii The group $\Zc4 = \left$ acts on the $xy$-plane $\RR^2$ as follows: $r \cdot (x,y) = (y,-x)$.
+ In other words, it's a rotation by $90\dg$.
+ \ii The dihedral group $D_{2n}$ acts on the set of ways to color the vertices of an $n$-gon.
+ \ii The group $S_n$ acts on $X = \left\{ 1,2,\dots,n \right\}$
+ by applying the permutation $\sigma$: $\sigma \cdot x \defeq \sigma(x)$.
+ \ii The group $G$ can act on itself (i.e.\ $X=G$) by left multiplication: put $g \cdot g' \defeq g \star g'$.
+ \end{enumerate}
+\end{example}
+
+\section{Stabilizers and orbits}
+\prototype{Again the AIME problem.}
+
+Given a group action $G$ on $X$,
+we can define an equivalence relation $\sim$ on $X$ as follows:
+$x \sim y$ if $x = g \cdot y$ for some $g \in G$.
+For example, in the AIME problem, $\sim$ means ``one can be obtained from the other by a rotation''.
+\begin{ques}
+ Why is this an equivalence relation?
+\end{ques}
+In that case, the AIME problem wants the number of equivalence classes under $\sim$.
+So let's give these equivalence classes a name: \vocab{orbits}.
+We usually denote orbits by $\OO$.
+
+As usual, orbits carve out $X$ into equivalence classes.
+\begin{center}
+ \begin{asy}
+ bigbox("$X$");
+ draw(ellipse(origin,0.8,2.5));
+ draw(ellipse((-2,0),0.8,1.5));
+ draw(ellipse(( 2,0),0.8,2.25));
+ for (int i=-3; i<=3; ++i) {
+ dot( (0, 0.7*i) );
+ }
+ dot( (-2,0) );
+ dot( (-2,-1) );
+ dot( (-2,1) );
+ dot( ( 2,0) );
+ dot( ( 2,-1) );
+ dot( ( 2,1) );
+ dot( ( 2,-2) );
+ dot( ( 2,2) );
+
+ MP("\mathcal O_1", (-2,-1.5), dir(225));
+ MP("\mathcal O_2", ( 0,-2.5), dir(180));
+ MP("\mathcal O_3", ( 2,-2.25), dir(225));
+ \end{asy}
+\end{center}
+
+It turns out that a very closely related concept is:
+\begin{definition}
+ The \vocab{stabilizer} of a point $x \in X$,
+ denoted $\Stab_G(x)$, is the set of $g \in G$ which fix $x$; in other words
+ \[ \Stab_G(x) \defeq \left\{ g \in G \mid g \cdot x = x \right\}. \]
+\end{definition}
+\begin{example}
+ Consider the AIME problem again, with $X$ the possible set of states
+ (again $G = \Zc4$).
+ Let $x$ be the configuration where two opposite corners are colored yellow.
+ Evidently $1_G$ fixes $x$, but so does the $180\dg$ rotation $r^2$.
+ But $r$ and $r^3$ do not preserve $x$, so
+ $\Stab_G(x) = \{1,r^2\} \cong \Zc2$.
+\end{example}
+\begin{ques}
+ Why is $\Stab_G(x)$ a subgroup of $G$?
+\end{ques}
+
+Once we realize the stabilizer is a group, this leads us to what I privately call the ``fundamental theorem of how big an orbit is''.
+\begin{theorem}[Orbit-stabilizer theorem]
+ Let $\OO$ be an orbit, and pick any $x \in \OO$.
+ Let $S = \Stab_G(x)$ be a subgroup of $G$.
+ There is a natural bijection between $\OO$ and the left cosets of $S$.
+ In particular,
+ \[ \left\lvert \OO \right\rvert \left\lvert S \right\rvert = \left\lvert G \right\rvert. \]
+ Consequently, the stabilizers of each $x \in \OO$ have the same size.
+\end{theorem}
+\begin{proof}
+ The point is that every coset $gS$ just specifies an element of $\OO$,
+ namely $g \cdot x$. The fact that $S$ is a stabilizer implies
+ that it is irrelevant which representative we pick.
+
+ \begin{center}
+ \begin{asy}
+ size(6cm);
+ draw(ellipse(origin, 0.5, 2));
+ label("$\mathcal O \subseteq X$", (0,2), dir(90));
+ for (real i=-1.5; i<=1.5; ++i) {
+ dot( (0,i) );
+ }
+ draw( (0.3,1.8)--(0.3,1.2)--(4,1.2)--(4,1.8)--cycle );
+ label("$S \subseteq G$", (4,1.5), dir(0));
+ for (real i=0.7; i < 4; i+=0.7) {
+ label("$\circ$", (i, 1.5), origin);
+ }
+ draw( (-0.2,1.5)..(-1,0.5)..(-0.2,-0.5), EndArrow);
+ label("$g$", (-1,0.5), dir(180));
+ \end{asy}
+ \end{center}
+
+ Since the $\left\lvert \mathcal O \right\rvert$ cosets partition $G$,
+ each of size $\left\lvert S \right\rvert$, we obtain the second result.
+\end{proof}
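The identity $\left\lvert \OO \right\rvert \left\lvert S \right\rvert = \left\lvert G \right\rvert$ can be verified numerically for a small action. Here is a sketch (mine, not from the text) with $\Zc 4$ acting by rotation on the $49$ cells of the board: the center cell has orbit size $1$ and stabilizer size $4$, every other cell the reverse.

```python
def rot(cell):
    # one 90-degree rotation of a cell on the 7x7 board
    r, c = cell
    return (c, 6 - r)

def act(k, cell):
    for _ in range(k % 4):
        cell = rot(cell)
    return cell

G = range(4)  # Z/4 = {1, r, r^2, r^3}, encoded by exponents
for x in [(r, c) for r in range(7) for c in range(7)]:
    orbit = {act(k, x) for k in G}
    stab = [k for k in G if act(k, x) == x]
    assert len(orbit) * len(stab) == 4  # |O| * |S| = |G|
```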
+
+\section{Burnside's lemma}
+Now for the crux of this chapter: a way to count the number of orbits.
+\begin{theorem}
+ [Burnside's lemma]
+ Let $G$ act on a set $X$.
+ The number of orbits of the action is equal to
+ \[ \frac{1}{\left\lvert G \right\rvert}
+ \sum_{g \in G} \left\lvert \FixPt g \right\rvert \]
+ where $\FixPt g$ is the set of points $x \in X$
+ such that $g \cdot x = x$.
+\end{theorem}
+The proof is deferred as a bonus problem,
+since it has a very olympiad-flavored solution.
+As usual, this lemma was not actually proven by Burnside;
+Cauchy got there first, and thus it is sometimes called
+\emph{the lemma that is not Burnside's}.
+Example application:
+\begin{example}
+ [AIME 1996]
+ { \footnotesize Two of the squares of a $7 \times 7$ checkerboard are painted yellow, and the rest are painted green. Two color schemes are equivalent if one can be obtained from the other by applying a rotation in the plane of the board. How many inequivalent color schemes are possible? }
+
+ We know that $G = \Zc4$ acts on the set $X$ of $\binom{49}{2}$ possible coloring schemes.
+ Now we can compute $\FixPt g$ explicitly for each $g \in \Zc4$.
+ \begin{itemize}
+ \ii If $g = 1_G$, then every coloring is fixed, for a count of $\binom{49}{2} = 1176$.
+ \ii If $g = r^2$ there are exactly $24$ coloring schemes fixed by $g$:
+ this occurs when the two squares are reflections across the center,
+ which means they are preserved under a $180\dg$ rotation.
+ \ii If $g = r$ or $g=r^3$, then there are no fixed coloring schemes.
+ \end{itemize}
+ As $\left\lvert G \right\rvert = 4$, the average is
+ \[ \frac{1176 + 24 + 0 + 0}{4} = 300. \]
+\end{example}
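Both the fixed-point counts and the final answer of $300$ can be confirmed by brute force. A sketch (my own code, not part of the text):

```python
from itertools import combinations

def rot(cell):
    # one 90-degree rotation of a cell on the 7x7 board
    r, c = cell
    return (c, 6 - r)

def act(k, coloring):
    for _ in range(k % 4):
        coloring = frozenset(rot(cell) for cell in coloring)
    return coloring

cells = [(r, c) for r in range(7) for c in range(7)]
X = [frozenset(p) for p in combinations(cells, 2)]  # 1176 colorings

# Burnside: average the number of fixed points over g in Z/4
fixed = [sum(1 for x in X if act(k, x) == x) for k in range(4)]
assert fixed == [1176, 0, 24, 0]   # counts for 1, r, r^2, r^3
assert sum(fixed) // 4 == 300

# the direct orbit count agrees
orbits = {frozenset(act(k, x) for k in range(4)) for x in X}
assert len(orbits) == 300
```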
+
+\begin{exercise}[MathCounts Chapter Target Round]
+ A circular spinner has seven sections of equal size,
+ each of which is colored either red or blue.
+ Two colorings are considered the same if one can be rotated to yield the other.
+ In how many ways can the spinner be colored? (Answer: 20)
+\end{exercise}
+% The group in question is $\Zc7$; the answer should be $20$.
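The spinner exercise is small enough to brute-force as well. A hedged sketch (my own code; encoding a coloring as a length-$7$ tuple over `"RB"` is an ad-hoc choice):

```python
from itertools import product

# all 2^7 = 128 red/blue colorings of the seven sections
colorings = set(product("RB", repeat=7))

def rotations(c):
    # the orbit of a coloring under the Z/7 rotation action
    return {c[i:] + c[:i] for i in range(7)}

orbits = {frozenset(rotations(c)) for c in colorings}
assert len(orbits) == 20  # matches (2^7 + 6*2)/7 from Burnside
```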
+
+Consult \cite{ref:aops_burnside}
+for some more examples of ``hands-on'' applications.
+
+\section{Conjugation of elements}
+\prototype{In $S_n$, conjugacy classes are ``cycle types''.}
+A particularly common type of action is the so-called \vocab{conjugation}.
+We let $G$ act on itself as follows:
+\[ g : h \mapsto ghg\inv. \]
+You might think this definition is a little artificial.
+Who cares about the element $ghg\inv$?
+Let me try to convince you this definition is not so unnatural.
+\begin{example}
+ [Conjugacy in $S_n$]
+ Let $G = S_5$, and fix a $\pi \in S_5$.
+ Here's the question: how is $\pi \sigma \pi \inv$ related to $\sigma$?
+ To illustrate this,
+ I'll write out a completely random example of a permutation $\sigma \in S_5$.
+ \[
+ \text{If }
+ \sigma = \;
+ \begin{array}{ccc}
+ 1 & \mapsto & 3 \\
+ 2 & \mapsto & 1 \\
+ 3 & \mapsto & 5 \\
+ 4 & \mapsto & 2 \\
+ 5 & \mapsto & 4
+ \end{array}
+ \qquad
+ \text{then}
+ \qquad
+ \pi \sigma \pi\inv =
+ \begin{array}{ccc}
+ \pi(1) & \mapsto & \pi(3) \\
+ \pi(2) & \mapsto & \pi(1) \\
+ \pi(3) & \mapsto & \pi(5) \\
+ \pi(4) & \mapsto & \pi(2) \\
+ \pi(5) & \mapsto & \pi(4)
+ \end{array}
+ \]
+ Thus our fixed $\pi$ doesn't really change the structure of $\sigma$ at all:
+ it just ``renames'' each of the elements $1$, $2$, $3$, $4$, $5$
+ to $\pi(1)$, $\pi(2)$, $\pi(3)$, $\pi(4)$, $\pi(5)$.
+\end{example}
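The ``renaming'' claim can be verified mechanically. Here is a minimal sketch (my own code; the particular $\pi$ is an arbitrary choice, and permutations are encoded as dictionaries on $\{1,\dots,5\}$):

```python
# sigma from the example above, as a dict on {1,...,5}
sigma = {1: 3, 2: 1, 3: 5, 4: 2, 5: 4}
pi = {1: 2, 2: 4, 3: 1, 4: 5, 5: 3}   # an arbitrary pi in S_5
pi_inv = {v: k for k, v in pi.items()}

# the conjugate pi sigma pi^{-1}
conj = {x: pi[sigma[pi_inv[x]]] for x in range(1, 6)}

# it sends pi(i) to pi(sigma(i)) for every i, i.e. it is sigma renamed
assert all(conj[pi[i]] == pi[sigma[i]] for i in range(1, 6))
```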
+But wait, you say.
+That's just a very particular type of group behaving nicely under conjugation.
+Why does this mean anything more generally?
+All I have to say is: remember Cayley's theorem!
+(This was \Cref{thm:cayley_theorem}.)
+
+In any case, we may now define:
+\begin{definition}
+ The \vocab{conjugacy classes} of a group $G$ are the orbits of $G$ under
+ the conjugacy action.
+\end{definition}
+
+Let's see what the conjugacy classes of $S_n$ are, for example.
+\begin{example}
+ [Conjugacy classes of $S_n$ correspond to cycle types]
+ Intuitively, the discussion above says that two elements of $S_n$ should
+ be conjugate if they have the same ``shape'', regardless of what the elements are named.
+ The right way to make the notion of ``shape'' rigorous is cycle notation.
+ For example, consider the permutation
+ \[ \sigma_1 = (1 \; 3 \; 5)(2 \; 4) \]
+ in cycle notation, meaning $1 \mapsto 3 \mapsto 5 \mapsto 1$ and $2 \mapsto 4 \mapsto 2$.
+ It is conjugate to the permutation
+ \[ \sigma_2 = (1 \; 2 \; 3)(4 \; 5) \]
+ or any other way of relabeling the elements.
+ So, we could think of $\sigma_1$ as having conjugacy class
+ \[ (- \; - \; -)(- \; -). \]
+ More generally, you can show that two elements of $S_n$ are conjugate
+ if and only if they have the same ``shape'' under cycle decomposition.
+\end{example}
+\begin{ques}
+ Show that the number of conjugacy classes of $S_n$
+ equals the number of \emph{partitions} of $n$.
+\end{ques}
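For small $n$ you can confirm this count directly. A sketch (my own code, not part of the text; `cycle_type` and the recursive partition counter are ad-hoc helpers):

```python
from itertools import permutations

def cycle_type(perm):
    # perm is a tuple giving the image of 0..n-1; return sorted cycle lengths
    n, seen, lengths = len(perm), set(), []
    for i in range(n):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

def partitions_count(n, max_part=None):
    # number of partitions of n into parts of size at most max_part
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions_count(n - k, k) for k in range(1, min(n, max_part) + 1))

n = 5
types = {cycle_type(p) for p in permutations(range(n))}
assert len(types) == partitions_count(n) == 7  # 7 partitions of 5
```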
+
+While we're on the subject of conjugation, I may as well also define:
+\begin{definition}
+ Let $G$ be a group.
+ The \vocab{center} of $G$, denoted $Z(G)$, is the set of elements $x \in G$
+ such that $xg = gx$ for every $g \in G$.
+ More succinctly,
+ \[ Z(G) \defeq \left\{ x \in G \mid gx=xg \; \forall g \in G \right\}. \]
+\end{definition}
+You can check this is indeed a subgroup of $G$.
+\begin{ques}
+ Why is $Z(G)$ normal in $G$?
+\end{ques}
+\begin{ques}
+ What are the conjugacy classes of elements in the center?
+\end{ques}
+
+A trivial result that gets used enough that I should explicitly call it out:
+\begin{corollary}[Conjugacy in abelian groups is trivial]
+ If $G$ is abelian, then the conjugacy classes all have size one.
+\end{corollary}
+
+\section\problemhead
+\begin{problem}
+ [PUMaC 2009 C8]
+ Taotao wants to buy a bracelet consisting of seven beads,
+ each of which is orange, white or black.
+ (The bracelet can be rotated and reflected in space.)
+ Find the number of possible bracelets.
+ \begin{hint}
+ Just apply Burnside's lemma directly to get the answer of $198$
+ (the relevant group is $D_{14}$).
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Show that two elements in the same conjugacy class
+ have the same order.
+ \begin{hint}
+ There are multiple ways to see this.
+ One is to just do the algebraic manipulation.
+ Another is to use Cayley's theorem to embed $G$ into a symmetric group.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ \gim
+ Prove Burnside's lemma.
+ \begin{hint}
+ Double-count pairs $(g,x)$ with $g \cdot x = x$.
+ \end{hint}
+\end{problem}
+
+\begin{sproblem}
+ [The ``class equation'']
+ \label{prob:class_eq}
+ Let $G$ be a finite group.
+ We define the \vocab{centralizer} $C_G(g) = \{ x \in G \mid xg = gx \}$
+ for each $g \in G$.
+ Show that
+ \[ \left\lvert G \right\rvert = \left\lvert Z(G) \right\rvert + \sum_{s \in S}
+ \frac{\left\lvert G \right\rvert}{\left\lvert C_G(s) \right\rvert} \]
+ where $S \subseteq G$ is defined as follows:
+ for each conjugacy class $C \subseteq G$ with $|C| > 1$,
+ we pick a representative of $C$ and add it to $S$.
+\end{sproblem}
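To see the class equation in action before proving it, here is a toy Python verification (hypothetical code, not needed for the problem) for $G = S_3$: the conjugacy classes have sizes $1$, $2$, $3$, matching $|Z(G)| = 1$ plus the two terms $|G| / |C_G(s)|$.

```python
from itertools import permutations

G = list(permutations(range(3)))  # S_3, as permutation tuples

def mul(p, q):
    # Composition: (p∘q)(i) = p[q[i]].
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

# Center, conjugacy classes, and one representative per nontrivial class.
Z = [x for x in G if all(mul(x, g) == mul(g, x) for g in G)]
classes = {tuple(sorted({mul(mul(x, g), inv(x)) for x in G})) for g in G}
reps = [c[0] for c in classes if len(c) > 1]
centralizer = lambda s: [x for x in G if mul(x, s) == mul(s, x)]

rhs = len(Z) + sum(len(G) // len(centralizer(s)) for s in reps)
print(len(G), rhs)  # → 6 6
```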
+
+\begin{dproblem}
+ [Classical]
+ \gim
+ Assume $G$ is a finite group and $p$ is the smallest prime dividing its order.
+ Let $H$ be a subgroup of $G$ with $\left\lvert G \right\rvert / \left\lvert H \right\rvert = p$.
+ Show that $H$ is normal in $G$.
+ \begin{hint}
+ Let $H$ act on the left cosets $\{gH \mid g \in G\}$
+ by left multiplication: $h \cdot gH = hgH$.
+ By the orbit-stabilizer theorem, every orbit has size dividing $\left\lvert H \right\rvert$.
+ But $\{H\}$ is an orbit of size $1$, so the remaining orbits have size at most $p-1$;
+ since $p$ is the smallest prime dividing $\left\lvert G \right\rvert$
+ (hence dividing $\left\lvert H \right\rvert$), every orbit has size $1$.
+ Conclude that $g\inv h g \in H$ for all $h \in H$ and $g \in G$.
+ \end{hint}
+ \begin{sol}
+ \url{https://math.stackexchange.com/a/3012179/229197}
+ \end{sol}
+\end{dproblem}
diff --git a/books/napkin/advice.tex b/books/napkin/advice.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e050556e8a4c3a8614fdb776230209c0664a4931
--- /dev/null
+++ b/books/napkin/advice.tex
@@ -0,0 +1,246 @@
+%\addtocounter{chapter}{-1}
+\chapter{Advice for the reader}
+
+\section{Prerequisites}
+As explained in the preface,
+the main prerequisite is some amount of mathematical maturity.
+This means I expect the reader to know how
+to read and write a proof, follow logical arguments, and so on.
+
+I also assume the reader is familiar with basic terminology
+about sets and functions (e.g.\ ``what is a bijection?'').
+If not, one should consult \Cref{ch:sets_functions}.
+
+\section{Deciding what to read}
+There is no need to read this book in linear order:
+it covers all sorts of areas in mathematics,
+and there are many paths you can take.
+In \Cref{ch:sales}, I give a short overview for each part
+explaining what you might expect to see in that part.
+
+For now, here is a brief chart showing
+how the chapters depend on each other;
+again see \Cref{ch:sales} for details.
+Dependencies are indicated by arrows;
+dotted lines are optional dependencies.
+\textbf{I suggest that you simply pick a chapter you find interesting,
+and then find the shortest path.}
+With that in mind, I hope the length of the entire PDF is not intimidating.
+
+(The text in the following diagram should be clickable
+and links to the relevant part.)
+
+\input{tex/frontmatter/digraph}
+\newpage
+
+\section{Questions, exercises, and problems}
+In this book, there are three hierarchies:
+\begin{itemize}
+ \ii An inline \vocab{question} is intended to be offensively easy,
+ mostly a chance to help you internalize definitions.
+ If you find yourself unable to answer one or two of them,
+ it probably means I explained it badly and you should complain to me.
+ But if you can't answer many,
+ you likely missed something important: read back.
+ \ii An inline \vocab{exercise} is more meaty than a question,
+ but shouldn't have any ``tricky'' steps.
+ Often I leave proofs of theorems and propositions as exercises
+ if they are instructive and at least somewhat interesting.
+ \ii Each chapter features several trickier \vocab{problems} at the end.
+ Some are reasonable, but others are legitimately
+ difficult olympiad-style problems.
+ \gim Harder problems are marked with up to
+ three chili peppers (\scalebox{0.7}{\chili}), like this paragraph.
+
+ In addition to difficulty annotations,
+ the problems are also marked by how important they are to the big picture.
+ \begin{itemize}
+ \ii \textbf{Normal problems},
+ which are hopefully fun but non-central.
+ \ii \textbf{Daggered problems},
+ which are (usually interesting) results that one should know,
+ but won't be used directly later.
+ \ii \textbf{Starred problems},
+ which are results which will be used later
+ on in the book.\footnote{This is to avoid the classic
+ ``we are done by PSet 4, Problem 8''
+ that happens in college sometimes,
+ as if I remembered what that was.}
+ \end{itemize}
+\end{itemize}
+Several hints and solutions can be found in \Cref{app:hints,app:sol}.
+
+\begin{center}
+ \includegraphics[width=14cm]{media/abstruse-goose-exercise.png}
+ \\ \scriptsize Image from \cite{img:exercise}
+\end{center}
+
+% I personally find most exercises to not be that interesting, and I've tried to keep boring ones to a minimum.
+% Regardless, I've tried hard to pick problems that are fun to think about and, when possible, to give them
+% the kind of flavor you might find on the IMO or Putnam (even when the underlying background is different).
+
+\section{Paper}
+At the risk of being blunt,
+\begin{moral}
+Read this book with pencil and paper.
+\end{moral}
+Here's why:
+
+\begin{center}
+ \includegraphics[width=0.5\textwidth]{media/read-with-pencil.jpg}
+ \\ \scriptsize Image from \cite{img:read_with_pencil}
+\end{center}
+You are not God.
+You cannot keep everything in your head.\footnote{
+ See also \url{https://usamo.wordpress.com/2015/03/14/writing/}
+ and the source above.}
+If you've printed out a hard copy, then write in the margins.
+If you're trying to save paper,
+grab a notebook or something along for the ride.
+Somehow, some way, make sure you can write. Thanks.
+
+\section{On the importance of examples}
+I am pathologically obsessed with examples.
+In this book, I place all examples in large boxes to draw emphasis to them,
+which leads to some pages of the book simply consisting of sequences of boxes
+one after another. I hope the reader doesn't mind.
+
+I also often highlight a ``prototypical example'' for some sections,
+and reserve the color red for such a note.
+The philosophy is that any time the reader sees a definition
+or a theorem about such an object, they should test it
+against the prototypical example.
+If the example is a good prototype, it should be immediately clear
+why this definition is intuitive, or why the theorem should be true,
+or why the theorem is interesting, et cetera.
+
+Let me tell you a secret. Whenever I wrote a definition or a theorem in this book,
+I would have to recall the exact statement from my (quite poor) memory.
+So instead I often consider the prototypical example,
+and then only after that do I remember what the definition or the theorem is.
+Incidentally, this is also how I learned all the definitions in the first place.
+I hope you'll find it useful as well.
+
+\section{Conventions and notations}
+This part describes some of the less familiar notations and definitions
+and settles for once and for all some annoying issues
+(``is zero a natural number?'').
+Most of these are ``remarks for experts'':
+if something doesn't make sense,
+you probably don't have to worry about it for now.
+
+A full glossary of notation used can be found in the appendix.
+
+\subsection{Natural numbers are positive}
+The set $\NN$ is the set of \emph{positive} integers, not including $0$.
+In the set theory chapters, we use $\omega = \{0, 1, \dots\}$
+instead, for consistency with the set-theoretic literature.
+
+\subsection{Sets and equivalence relations}
+This is brief, intended as a reminder for experts.
+Consult \Cref{ch:sets_functions} for full details.
+
+An \vocab{equivalence relation} on a set $X$ is a relation $\sim$
+which is symmetric, reflexive, and transitive.
+An equivalence relation partitions $X$
+into several \vocab{equivalence classes};
+the set of these classes is denoted by $X / {\sim}$.
+An element of such an equivalence class is a
+\vocab{representative} of that equivalence class.
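As a toy illustration (hypothetical Python, names my own), one can compute $X / {\sim}$ by greedily grouping elements with a representative already seen; here $a \sim b$ iff $a \equiv b \pmod 3$.

```python
def quotient(X, rel):
    # Partition X into equivalence classes of the relation rel,
    # comparing each element against one representative per class.
    classes = []
    for x in X:
        for c in classes:
            if rel(x, c[0]):
                c.append(x)
                break
        else:
            classes.append([x])
    return classes

print(quotient(range(7), lambda a, b: (a - b) % 3 == 0))
# → [[0, 3, 6], [1, 4], [2, 5]]
```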
+
+I always use $\cong$ for an ``isomorphism''-style relation
+(formally: a relation which is an isomorphism in a reasonable category).
+The only time $\simeq$ is used in the Napkin is for homotopic paths.
+
+I generally use $\subseteq$ and $\subsetneq$ since these are non-ambiguous,
+unlike $\subset$. I only use $\subset$ on rare occasions in which equality
+obviously does not hold yet pointing it out would be distracting.
+For example, I write $\QQ \subset \RR$
+since ``$\QQ \subsetneq \RR$'' is distracting.
+
+I prefer $S \setminus T$ to $S - T$.
+
+The power set of $S$ (i.e.,\ the set of subsets of $S$)
+is denoted either by $2^S$ or $\mathcal P(S)$.
+
+\subsection{Functions}
+This is brief, intended as a reminder for experts.
+Consult \Cref{ch:sets_functions} for full details.
+
+Let $X \taking f Y$ be a function:
+\begin{itemize}
+\ii By $f\pre(T)$ I mean the \vocab{pre-image}
+\[ f\pre(T) \defeq \left\{ x \in X \mid f(x) \in T \right\}. \]
+This is in contrast to the $f\inv(T)$ used in the rest of the world;
+I only use $f\inv$ for an inverse \emph{function}.
+
+By abuse of notation, we may abbreviate $f\pre(\{y\})$ to $f\pre(y)$.
+We call $f\pre(y)$ a \vocab{fiber}.
+
+\ii By $f\im(S)$ I mean the \vocab{image}
+\[ f\im(S) \defeq \left\{ f(x) \mid x \in S \right\}. \]
+Almost everyone else in the world uses $f(S)$
+(though $f[S]$ sees some use, and $f``(S)$ is often used in logic)
+but this is abuse of notation,
+and I prefer $f\im(S)$ for emphasis.
+This image notation is \emph{not} standard.
+
+\ii If $S \subseteq X$, then the \vocab{restriction} of $f$ to $S$
+is denoted $f \restrict{S}$,
+i.e.\ it is the function $f \restrict{S} \colon S \to Y$.
+
+\ii Sometimes functions $f \colon X \to Y$
+are \emph{injective} or \emph{surjective};
+I may emphasize this sometimes by writing
+$f \colon X \injto Y$ or $f \colon X \surjto Y$, respectively.
+\end{itemize}
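In Python terms (a quick hypothetical sketch, with names of my own choosing), the image and pre-image are one-line set comprehensions; a fiber is the pre-image of a single point.

```python
def image(f, S):
    # f_img(S) = { f(x) : x in S }
    return {f(x) for x in S}

def preimage(f, X, T):
    # f_pre(T) = { x in X : f(x) in T }
    return {x for x in X if f(x) in T}

X = range(-3, 4)
f = lambda x: x * x
print(sorted(image(f, X)))          # → [0, 1, 4, 9]
print(sorted(preimage(f, X, {4})))  # → [-2, 2]  (the fiber over 4)
```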
+
+\subsection{Cycle notation for permutations}
+\label{subsec:cycle_notation}
+
+Additionally, a permutation on a finite set may be denoted
+in \emph{cycle notation},
+as described in say \url{https://en.wikipedia.org/wiki/Permutation#Cycle_notation}.
+For example the notation $(1 \; 2 \; 3 \; 4)(5 \; 6 \; 7)$
+refers to the permutation with
+$1 \mapsto 2$, $2 \mapsto 3$, $3 \mapsto 4$, $4 \mapsto 1$,
+$5 \mapsto 6$, $6 \mapsto 7$, $7 \mapsto 5$.
+Usage of this notation will usually be obvious from context.
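The conversion from cycles to a mapping can be sketched in a few lines of Python (hypothetical code, not part of the text): each cycle sends every entry to the next one, wrapping around.

```python
def perm_from_cycles(cycles):
    # Build the mapping defined by a product of disjoint cycles:
    # within each cycle, every entry maps to its successor (cyclically).
    f = {}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            f[a] = b
    return f

f = perm_from_cycles([(1, 2, 3, 4), (5, 6, 7)])
print(f[1], f[4], f[7])  # → 2 1 5
```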
+
+\subsection{Rings}
+All rings have a multiplicative identity $1$ unless otherwise specified.
+We allow $0=1$ in general rings but not in integral domains.
+
+\textbf{All rings are commutative unless otherwise specified.}
+There is an elaborate scheme for naming rings which are not commutative,
+used only in the chapter on cohomology rings:
+
+\begin{center}
+ \small
+ \begin{tabular}[h]{|c|cc|}
+ \hline
+ & Graded & Not Graded \\ \hline
+ $1$ not required & graded pseudo-ring & pseudo-ring \\
+ Anticommutative, $1$ not required & anticommutative pseudo-ring & N/A \\
+ Has $1$ & graded ring & N/A \\
+ Anticommutative with $1$ & anticommutative ring & N/A \\
+ Commutative with $1$ & commutative graded ring & ring \\ \hline
+ \end{tabular}
+\end{center}
+
+On the other hand, an \emph{algebra} always has $1$,
+but it need not be commutative.
+
+\subsection{Choice}
+We accept the Axiom of Choice, and use it freely.
+
+\section{Further reading}
+The appendix \Cref{ch:refs} contains a list of resources I like,
+and explanations of pedagogical choices that I made for each chapter.
+I encourage you to check it out.
+
+In particular, this is where you should go for further reading!
+There are some topics that should be covered in the Napkin,
+but are not, due to my own ignorance or laziness.
+The references provided in this appendix should hopefully help partially
+atone for my omissions.
diff --git a/books/napkin/affine-var.tex b/books/napkin/affine-var.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f65b2e9e986f126efa76a86fd403103d1e084357
--- /dev/null
+++ b/books/napkin/affine-var.tex
@@ -0,0 +1,600 @@
+\chapter{Affine varieties}
+In this chapter we introduce affine varieties.
+We introduce them in the context of coordinates,
+but over the course of the other chapters
+we'll gradually move away from this perspective to
+viewing varieties as ``intrinsic objects'',
+rather than embedded in coordinates.
+
+For simplicity, we'll do almost everything over the field of complex numbers,
+but the discussion generalizes to any algebraically closed field.
+
+\section{Affine varieties}
+\prototype{$\VV(y-x^2)$ is a parabola in $\Aff^2$.}
+%An \vocab{affine variety} is just the zero locus of a set of polynomials.
+%We think of it as living in the $n$-dimensional space $\Aff^n$.
+
+\begin{definition}
+ Given a set of polynomials $S \subseteq \CC[x_1, \dots, x_n]$
+ (not necessarily finite or even countable),
+ we let $\VV(S)$ denote the set of points vanishing on \emph{all}
+ the polynomials in $S$.
+ Such a set is called an \vocab{affine variety}.
+ It lives in \vocab{$n$-dimensional affine space}, denoted $\Aff^n$
+ (to distinguish it from projective space later).
+\end{definition}
+For example, a parabola is the zero locus of the polynomial $y-x^2$, that is, $\VV(y-x^2)$. Picture:
+\begin{center}
+ \begin{asy}
+ import graph;
+ size(5cm);
+
+ real f(real x) { return x*x; }
+ graph.xaxis("$x$");
+ graph.yaxis("$y$");
+ draw(graph(f,-2,2,operator ..), blue, Arrows);
+ label("$\mathcal V(y-x^2)$", (0.8, f(0.8)), dir(-45), blue);
+ label("$\mathbb A^2$", (2,3), dir(45));
+ \end{asy}
+\end{center}
+
+\begin{example}[Examples of affine varieties]
+ These examples are in two-dimensional space $\Aff^2$,
+ whose points are pairs $(x,y)$.
+ \begin{enumerate}[(a)]
+ \ii A straight line can be thought of as $\VV(Ax + By + C)$.
+ \ii A parabola as above can be pictured as $\VV(y-x^2)$.
+ \ii A hyperbola can be realized as $\VV(xy-1)$, the zero locus of the polynomial $xy-1$.
+ \ii The two axes can be thought of as $\VV(xy)$; this is the set of points
+ such that $x=0$ \emph{or} $y=0$.
+ \ii A point $(x_0, y_0)$ can be thought of as $\VV(x-x_0, y-y_0)$.
+ \ii The entire space $\Aff^2$ can be thought of as $\VV(0)$.
+ \ii The empty set is the zero locus of the constant polynomial $1$, that is $\VV(1)$.
+ \end{enumerate}
+\end{example}
+
+\section{Naming affine varieties via ideals}
+\prototype{$\VV(I)$ is a parabola, where $I=(y-x^2)$.}
+As you might have already noticed, a variety can be named by $\VV(-)$ in multiple ways.
+For example, the set of solutions to
+\[ x=3 \text{ and } y=4 \]
+is just the point $(3,4)$.
+But this is also the set of solutions to
+\[ x=3 \text{ and } y=x+1. \]
+So, for example
+\[ \{(3,4)\}
+ = \VV(x-3, y-4)
+ = \VV(x-3, y-x-1).
+ \]
+That's a little annoying, because in an ideal\footnote{Pun not intended
+ but left for amusement value.}
+world we would have \emph{one} name
+for every variety.
+Let's see if we can achieve this.
+
+A partial solution is to use \emph{ideals} rather than small sets.
+That is, consider the ideal
+\[
+ I = \left( x-3, y-4 \right)
+ = \left\{ p(x,y) \cdot (x-3) + q(x,y) \cdot (y-4)
+ \mid p,q \in \CC[x,y] \right\}
+\]
+and look at $\VV(I)$.
+\begin{ques}
+ Convince yourself that $\VV(I) = \{(3,4)\}$.
+\end{ques}
+So rather than writing $\VV(x-3, y-4)$ it makes sense to
+think about this as $\VV\left( I \right)$, where $I = (x-3,y-4)$ is the \emph{ideal}
+generated by the two polynomials $x-3$ and $y-4$.
+This is an improvement because
+\begin{ques}
+ Check that $(x-3, y-x-1) = (x-3, y-4)$.
+\end{ques}
+
+Needless to say, this pattern holds in general.
+\begin{ques}
+ Let $\{f_i\}$ be a set of polynomials, and consider
+ the ideal $I$ generated by these $\{f_i\}$.
+ Show that $\VV(\{f_i\}) = \VV(I)$.
+\end{ques}
+
+Thus we will only consider $\VV(I)$ when $I$ is an ideal.
+Of course, frequently our ideals are generated by one or two polynomials,
+which leads to:
+\begin{abuse}
+ Given a set of polynomials $f_1, \dots, f_m$
+ we let $\VV(f_1, \dots, f_m)$ be shorthand for
+ $\VV\left( \left( f_1, \dots, f_m \right) \right)$.
+ In other words we let $\VV(f_1, \dots, f_m)$
+ abbreviate $\VV(I)$, where $I$ is the \emph{ideal} $I=(f_1, \dots, f_m)$.
+\end{abuse}
+
+This is where the Noetherian condition really shines:
+it guarantees that every ideal $I \subseteq \CC[x_1, \dots, x_n]$
+can be written in the form above with \emph{finitely} many polynomials,
+because it is \emph{finitely generated}.
+(The fact that $\CC[x_1, \dots, x_n]$ is Noetherian follows from the Hilbert basis theorem,
+which is \Cref{thm:hilbert_basis}).
+This is a relief, because dealing with infinite sets of polynomials is not much fun.
+
+\section{Radical ideals and Hilbert's Nullstellensatz}
+\prototype{$\sqrt{(x^2)} = (x)$ in $\CC[x]$, $\sqrt{(12)} = (6)$ in $\ZZ$.}
+You might ask whether the name is unique now:
+that is, if $\VV(I) = \VV(J)$, does it follow that $I=J$?
+The answer is unfortunately no: a counterexample can already be found in $\Aff^1$.
+It is
+\[ \VV(x) = \VV(x^2). \]
+In other words, the set of solutions to $x=0$
+is the same as the set of solutions to $x^2=0$.
+
+Well, that's stupid.
+We want an operation which takes the ideal $(x^2)$ and makes it into the ideal $(x)$.
+The way to do so is using the radical of an ideal.
+
+\begin{definition}
+ Let $R$ be a ring.
+ The \vocab{radical} of an ideal $I \subseteq R$, denoted $\sqrt I$,
+ is defined by
+ \[ \sqrt I = \left\{ r \in R
+ \mid r^m \in I \text{ for some integer $m \ge 1$} \right\}. \]
+ If $I = \sqrt I$, we say the ideal $I$ itself is \vocab{radical}.
+\end{definition}
+For example, $\sqrt{(x^2)} = (x)$.
+You may like to take the time to verify that $\sqrt I$ is actually an ideal.
+
+\begin{remark}
+ [Number theoretic motivation]
+ This is actually the same as the notion of ``radical'' in number theory.
+ In $\ZZ$, the radical of an ideal $(n)$ corresponds to just
+ removing all the duplicate prime factors, so for example
+ \[ \sqrt{(12)} = (6). \]
+ In particular, if you try to take $\sqrt{(6)}$,
+ you just get $(6)$ back;
+ you don't squeeze out any new prime factors.
+
+ This is actually true more generally,
+ and there is a nice corresponding alternate definition:
+ for any ideal $I$, we have
+ \[ \sqrt I = \bigcap_{I \subseteq \kp \text{ prime}} \kp. \]
+ Although we could prove this now,
+ it will be proved later in \Cref{thm:radical_intersect_prime},
+ when we first need it.
+\end{remark}
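For the number-theoretic analogy, the radical of $(n)$ in $\ZZ$ is generated by the product of the distinct prime factors of $n$; a short Python sketch (hypothetical, names my own) computes it by trial division.

```python
def radical(n):
    # Product of the distinct prime factors of n, by trial division.
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:   # leftover prime factor
        r *= n
    return r

print(radical(12), radical(6))  # → 6 6, i.e. sqrt((12)) = sqrt((6)) = (6)
```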
+
+Here are the immediate properties you should know.
+\begin{proposition}
+ [Properties of radical]
+ \label{prop:radical}
+ In any ring:
+ \begin{itemize}
+ \ii If $I$ is an ideal, then $\sqrt I$ is always a radical ideal.
+ \ii Prime ideals are radical.
+ \ii For $I \subseteq \CC[x_1, \dots, x_n]$
+ we have $\VV(I) = \VV(\sqrt I)$.
+ \end{itemize}
+\end{proposition}
+\begin{proof}
+ These are all obvious.
+ \begin{itemize}
+ \ii If $f^m \in \sqrt I$ then $(f^m)^n = f^{mn} \in I$ for some $n \ge 1$, so $f \in \sqrt I$.
+ \ii If $f^n \in \kp$ for a prime $\kp$,
+ then either $f \in \kp$ or $f^{n-1} \in \kp$,
+ and in the latter case we may continue by induction.
+ \ii We have $f(x_1, \dots, x_n) = 0$
+ if and only if $f(x_1, \dots, x_n)^m = 0$ for some integer $m$.
+ \qedhere
+ \end{itemize}
+\end{proof}
+
+The last bit makes sense: you would never refer to $x=0$ as $x^2=0$,
+and hence we would always want to call $\VV(x^2)$ just $\VV(x)$.
+With this, we obtain a theorem called Hilbert's Nullstellensatz.
+\begin{theorem}[Hilbert's Nullstellensatz]
+ \label{thm:hilbert_null}
+ Given an affine variety $V = \VV(I)$,
+ the set of polynomials which vanish
+ on all points of $V$ is precisely $\sqrt I$.
+ Thus if $I$ and $J$ are ideals in $\CC[x_1, \dots, x_n]$, then
+ \[ \VV(I) = \VV(J) \text{ if and only if $\sqrt I = \sqrt J$}. \]
+\end{theorem}
+In other words
+\begin{moral}
+ Radical ideals in $\CC[x_1, \dots, x_n]$ correspond
+ exactly to affine varieties in $\Aff^n$.
+\end{moral}
+The proof of Hilbert's Nullstellensatz will be given in
+\Cref{prob:hilbert_from_weak}; for now it is worth remarking that
+it relies essentially on the fact that $\CC$ is
+\emph{algebraically closed}.
+For example, it is false in $\RR[x]$,
+with $(x^2+1)$ being a maximal ideal with empty vanishing set.
+
+\section{Pictures of varieties in $\Aff^1$}
+\prototype{Finite sets of points (in fact these are the only nontrivial examples).}
+Let's first draw some pictures.
+In what follows I'll draw $\CC$ as a straight line\dots sorry.
+
+First of all, let's look at just the complex line $\Aff^1$.
+What are the various varieties on it?
+For starters, we have a single point $9 \in \CC$,
+cut out by the ideal $(x-9)$.
+
+\begin{center}
+ \begin{asy}
+ size(6cm);
+ pair A = (-9,0); pair B = (9,0);
+ draw(A--B, Arrows);
+ label("$\mathcal V(x-9)$", (0,0), 2*dir(-90), blue);
+ dot("$9$", (3,0), dir(90), blue);
+ label("$\mathbb A^1$", A+(2,0), dir(90));
+ \end{asy}
+\end{center}
+
+Another example is the point $4$.
+And in fact, if we like we can get a variety consisting of just these two points:
+consider $\VV\left( (x-4)(x-9) \right)$.
+
+\begin{center}
+ \begin{asy}
+ size(6cm);
+ pair A = (-9,0); pair B = (9,0);
+ draw(A--B, Arrows);
+ label("$\mathcal V( (x-4)(x-9) )$", (0,0), 2*dir(-90), blue);
+ dot("$4$", (-1,0), dir(90), blue);
+ dot("$9$", (3,0), dir(90), blue);
+ label("$\mathbb A^1$", A+(2,0), dir(90));
+ \end{asy}
+\end{center}
+
+In general, in $\Aff^1$ you can get finitely
+many points $\left\{ a_1, \dots, a_n \right\}$ by
+just taking \[ \VV\left( (x-a_1)(x-a_2)\dots(x-a_n) \right). \]
+On the other hand, you can't get the set $\{0,1,2,\dots\}$ as an affine variety;
+the only polynomial vanishing
+on all those points is the zero polynomial.
+In fact, you can convince yourself that these
+are the only affine varieties, with two exceptions:
+\begin{itemize}
+ \ii The entire line $\Aff^1$ is given by $\VV(0)$, and
+ \ii The empty set is given by $\VV(1)$.
+\end{itemize}
+\begin{exercise}
+ Show that these are the only varieties of $\Aff^1$.
+ (Let $\VV(I)$ be the variety and pick a $0 \neq f \in I$.)
+\end{exercise}
+
+As you might correctly guess, we have:
+\begin{theorem}[Intersections and unions of varieties]
+ \label{thm:many_aff_variety}
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The intersection of affine varieties
+ (even infinitely many) is an affine variety.
+ \ii The union of finitely many affine varieties
+ is an affine variety.
+ \end{enumerate}
+ In fact we have
+ \[ \bigcap_\alpha \VV(I_\alpha)
+ = \VV\left( \sum_\alpha I_\alpha \right)
+ \qquad\text{and}\qquad
+ \bigcup_{k=1}^n \VV(I_k)
+ = \VV\left( \bigcap_{k=1}^n I_k \right). \]
+\end{theorem}
+You are welcome to prove this easy result yourself.
+\begin{remark}
+ Part (a) is a little misleading in that the sum $I+J$ need not be radical:
+ take for example $I = (y-x^2)$ and $J = (y)$ in $\CC[x,y]$,
+ where $x \in \sqrt{I+J}$ and $x \notin I+J$.
+ But in part (b) for radical ideals $I$ and $J$,
+ the intersection $I \cap J$ is radical.
+\end{remark}
+
+\section{Prime ideals correspond to irreducible affine varieties}
+\prototype{$(xy)$ corresponds to the union of two lines in $\Aff^2$.}
+
+Note that most of the affine varieties of $\Aff^1$, like $\{4,9\}$,
+are just unions of the simplest ``one-point'' ideals.
+To ease our classification,
+we can restrict our attention to the case of \emph{irreducible} varieties:
+\begin{definition}
+ A variety $V$ is \vocab{irreducible} if it cannot be written
+ as the union of two proper sub-varieties $V = V_1 \cup V_2$.
+\end{definition}
+\begin{abuse}
+ Warning: in other literature,
+ irreducible is part of the definition of variety.
+\end{abuse}
+
+\begin{example}
+ [Irreducible varieties of $\Aff^1$]
+ The irreducible varieties of $\Aff^1$ are:
+ \begin{itemize}
+ \ii the empty set $\VV(1)$,
+ \ii a single point $\VV(x-a)$, and
+ \ii the entire line $\Aff^1 = \VV(0)$.
+ \end{itemize}
+\end{example}
+\begin{example}
+ [The union of two axes]
+ Let's take a non-prime ideal in $\CC[x,y]$, such as $I = (xy)$.
+ Its vanishing set $\VV(I)$ is the union of two lines $x=0$ and $y=0$.
+ So $\VV(I)$ is reducible.
+\end{example}
+
+%We have already seen that the radical ideals
+%are in one-to-one correspondence with affine varieties.
+%In the next sections we answer the two questions:
+%\begin{itemize}
+% \ii What property of $\VV(I)$ corresponds to $I$ being prime?
+% \ii What property of $\VV(I)$ corresponds to $I$ being maximal?
+%\end{itemize}
+%The first question is easier to answer.
+
+In general:
+\begin{theorem}[Prime $\iff$ irreducible]
+ Let $I$ be a radical ideal, and $V = \VV(I)$ a nonempty variety.
+ Then $I$ is prime if and only if $V$ is irreducible.
+\end{theorem}
+\begin{proof}
+ First, assume $V$ is irreducible; we'll show $I$ is prime.
+ Let $f,g \in \CC[x_1, \dots, x_n]$ so that $fg \in I$.
+ Then $V$ is a subset of the union $\VV(f) \cup \VV(g)$;
+ actually, $V = \left( V \cap \VV(f) \right) \cup \left( V \cap \VV(g) \right)$.
+ Since $V$ is irreducible, we may assume $V = V \cap \VV(f)$,
+ hence $f$ vanishes on all of $V$. So $f \in I$.
+
+ The reverse direction is similar.
+\end{proof}
+
+%\begin{remark}
+% The above proof illustrates the following principle:
+% Let $V$ be an irreducible variety.
+% Suppose that $V \subseteq V_1 \cup V_2$;
+% this implies $V = (V_1 \cap V) \cup (V_2 \cap V)$.
+% Recall that the intersection of two varieties is a variety.
+% Thus an irreducible variety can't even be \emph{contained}
+% in a nontrivial union of two varieties.
+%\end{remark}
+
+\section{Pictures in $\Aff^2$ and $\Aff^3$}
+\prototype{Various curves and hypersurfaces.}
+
+With this notion, we can now draw pictures in
+``complex affine plane'', $\Aff^2$.
+What are the irreducible affine varieties in it?
+
+As we saw in the previous discussion,
+naming irreducible affine varieties in $\Aff^2$
+amounts to naming the prime ideals of $\CC[x,y]$.
+Here are a few.
+\begin{itemize}
+ \ii The ideal $(0)$ is prime. $\VV(0)$ as usual corresponds to the entire plane.
+ \ii The ideal $(x-a, y-b)$ is prime,
+ since $\CC[x,y] / (x-a, y-b) \cong \CC$ is an integral domain.
+ (In fact, since $\CC$ is a field, the ideal $(x-a,y-b)$ is \emph{maximal}).
+ The vanishing set of this is $\VV(x-a, y-b) = \{ (a,b) \} \subseteq \Aff^2$,
+ so these ideals correspond to a single point.
+ \ii Let $f(x,y)$ be an irreducible polynomial, like $y-x^2$.
+ Then $(f)$ is a prime ideal! Here $\VV(f)$ is a one-dimensional curve.
+\end{itemize}
+
+By using some polynomial algebra
+(again, you're welcome to check this using the Euclidean algorithm),
+these are in fact the only prime ideals of $\CC[x,y]$.
+Here's a picture.
+
+\begin{center}
+ \begin{asy}
+ import graph;
+ graph.xaxis("$x$", -4, 4);
+ graph.yaxis("$y$", -4, 4);
+
+ real f (real x) { return x*x; }
+ draw(graph(f,-2,2,operator ..), blue);
+ label("$\mathcal V(y-x^2)$", (1,1), dir(-45), blue);
+ dot("$\mathcal V(x-1,y+2)$", (1,-2), dir(-45), red);
+ \end{asy}
+\end{center}
+
+
+As usual, you can make varieties which are just unions of these irreducible ones.
+For example, if you wanted the variety consisting of a parabola $y=x^2$
+plus the point $(20,15)$ you would write
+\[ \VV \left( (y-x^2)(x-20), (y-x^2)(y-15) \right). \]
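As a quick numerical sanity check (hypothetical Python, not part of the text), points on the parabola and the point $(20, 15)$ kill both displayed generators, while a point in neither piece does not.

```python
# The two generators of the ideal cutting out (parabola) ∪ {(20, 15)}.
def g1(x, y):
    return (y - x * x) * (x - 20)

def g2(x, y):
    return (y - x * x) * (y - 15)

# Sample points on the parabola y = x^2, plus the isolated point (20, 15).
pts = [(t, t * t) for t in range(-3, 4)] + [(20, 15)]
print(all(g1(x, y) == 0 and g2(x, y) == 0 for x, y in pts))  # → True
print(g1(1, 3), g2(1, 3))  # → -38 -24  (the point (1,3) lies in neither piece)
```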
+
+The picture in $\Aff^3$ is harder to describe.
+Again, you have points $\VV(x-a, y-b, z-c)$ corresponding to
+zero-dimensional points $(a,b,c)$, and two-dimensional surfaces
+$\VV(f)$ for each irreducible polynomial $f$ (for example, $x+y+z=0$ is a plane).
+But there are more prime ideals, like $(x,y)$, whose variety $\VV(x,y)$ is the
+intersection of the planes $x=0$ and $y=0$: this is the one-dimensional $z$-axis.
+It turns out there is no reasonable way to classify the ``one-dimensional'' varieties;
+they correspond to ``irreducible curves''.
+
+Thus, as Ravi Vakil \cite{ref:vakil} says:
+the purely algebraic question
+of determining the prime ideals of $\CC[x,y,z]$
+has a fundamentally geometric answer.
+
+\section{Maximal ideals}
+\prototype{All maximal ideals are $(x_1-a_1, \dots, x_n-a_n)$.}
+We begin by noting:
+\begin{proposition}
+ [$\VV(-)$ is inclusion reversing]
+ If $I \subseteq J$ then $\VV(I) \supseteq \VV(J)$.
+ Thus $\VV(-)$ is \emph{inclusion-reversing}.
+\end{proposition}
+\begin{ques}
+ Verify this.
+\end{ques}
+Thus, bigger ideals correspond to smaller varieties.
+As the above pictures might have indicated,
+the smallest varieties are \emph{single points}.
+Moreover, as you might guess from the name,
+the biggest ideals are the \emph{maximal ideals}.
+As an example, all ideals of the form
+\[ \left( x_1-a_1, \dots, x_n-a_n \right) \]
+are maximal, since the quotient
+\[ \CC[x_1, \dots, x_n] / \left( x_1-a_1, \dots, x_n-a_n \right) \cong \CC \]
+is a field.
+The question is: are all maximal ideals of this form?
+
+The answer is in the affirmative.
+%It's equivalent to:
+%\begin{theorem}
+% [Weak Nullstellensatz, phrased as nonempty varieties]
+% Let $I \subsetneq \CC[x_1, \dots, x_n]$ be a proper ideal.
+% Then the variety $\VV(I) \neq \varnothing$.
+%\end{theorem}
+% From this we can deduce that all maximal ideals are of the above form.
+\begin{theorem}
+ [Weak Nullstellensatz, phrased with maximal ideals]
+ Every maximal ideal of $\CC[x_1, \dots, x_n]$
+ is of the form $(x_1-a_1, \dots, x_n-a_n)$.
+\end{theorem}
+The proof of this is surprisingly nontrivial,
+so we won't include it here yet; see \cite[\S7.4.3]{ref:vakil}.
+%% TODO we might include this eventually
+%\begin{proof}
+% [WN implies MI]
+% Let $J$ be a maximal ideal, and consider the corresponding variety $V = \VV(J)$.
+% By WN, it contains some point $p=(a_1, \dots, a_n)$.
+% Now, define $I = (x_1-a_1, \dots, x_n-a_n)$; this ideal contains all polynomials
+% vanishing at $p$, so necessarily $J \subseteq I \subsetneq \CC[x_1, \dots, x_n]$.
+% Then by maximality of $J$ we have $J=I$.
+%\end{proof}
+Again this uses the fact that $\CC$ is algebraically closed.
+(For example $(x^2+1)$ is a maximal ideal of $\RR[x]$.)
+Thus:
+\begin{moral}
+ Over $\CC$, maximal ideals correspond to single points.
+\end{moral}
+
+Consequently, our various ideals over $\CC$ correspond to various flavors
+of affine varieties:
+\begin{center}
+ \begin{tabular}[h]{|cc|}
+ \hline
+ Algebraic flavor & Geometric flavor \\ \hline
+ radical ideal & affine variety \\
+ prime ideal & irreducible variety \\
+ maximal ideal & single point \\
+ any ideal & (scheme?) \\ \hline
+ \end{tabular}
+\end{center}
+There's one thing I haven't talked about: what's the last entry?
+
+\section{Motivating schemes with non-radical ideals}
+One of the most elementary motivations for schemes
+is that we would like to use them to count multiplicity.
+That is, consider the intersection
+\[ \VV(y-x^2) \cap \VV(y) \subseteq \Aff^2. \]
+This is the intersection of the parabola with the tangent $x$-axis;
+it is the green dot below.
+
+\begin{center}
+ \begin{asy}
+ import graph;
+ size(5cm);
+
+ real f(real x) { return x*x; }
+ graph.xaxis("$x$", red);
+ graph.yaxis("$y$");
+ draw(graph(f,-2,2,operator ..), blue, Arrows);
+ label("$\mathcal V(y-x^2)$", (0.8, f(0.8)), dir(-45), blue);
+ label("$\mathbb A^2$", (2,3), dir(45));
+ dotfactor *= 1.5;
+ dot(origin, heavygreen);
+ \end{asy}
+\end{center}
+
+Unfortunately, as a variety, it is just a single point!
+However, we want to think of this as a ``double point'':
+after all, in some sense it has multiplicity $2$.
+You can detect this when you look at the ideals:
+\[ (y-x^2) + (y) = (x^2,y) \]
+and thus, if we blithely ignore taking the radical, we get
+\[ \CC[x,y] / (x^2,y) \cong \CC[\eps] / (\eps^2). \]
+So the ideals in question are noticing the presence of a double point.
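One can play with this quotient ring concretely: the following Python sketch (hypothetical code) implements $\CC[\eps]/(\eps^2)$ as ``dual numbers'' and checks that $y - x^2$ vanishes identically on the thickened point $x = \eps$, $y = 0$.

```python
class Dual:
    # Elements a + b*eps of C[eps]/(eps^2), so eps*eps = 0.
    def __init__(self, a, b=0):
        self.a, self.b = a, b

    def __mul__(self, other):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0.
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __sub__(self, other):
        return Dual(self.a - other.a, self.b - other.b)

eps = Dual(0, 1)
x, y = eps, Dual(0)   # the double point: x = eps, y = 0
v = y - x * x         # evaluate y - x^2 there
print(v.a, v.b)       # → 0 0, i.e. zero in C[eps]/(eps^2)
```

Since $\eps \neq 0$ but $\eps^2 = 0$, the ring remembers a first-order direction at the point, which is exactly the multiplicity-two behavior.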
+
+In order to encapsulate this, we need a more refined object than
+a variety, which (at the end of the day) is just a set of points;
+it's not possible using topology alone to encode more information
+(there is only one topology on a single point!).
+This refined object is the \emph{scheme}.
+
+\section\problemhead
+\todo{some actual computation here would be good}
+
+\begin{problem}
+ Show that a \emph{real} affine variety $V \subseteq \Aff_\RR^n$
+ can always be written in the form $\VV(f)$.
+ \begin{hint}
+ Squares are nonnegative.
+ \end{hint}
+ \begin{sol}
+ If $V = \VV(I)$ with $I = (f_1, \dots, f_m)$
+ (as usual there are finitely many polynomials since $\RR[x_1, \dots, x_n]$ is Noetherian)
+ then we can take $f = f_1^2 + \dots + f_m^2$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Complex varieties can't be empty]
+ \label{prob:complex_variety_nonempty}
+ Prove that if $I$ is a proper ideal in $\CC[x_1, \dots, x_n]$
+ then $\VV(I) \ne \varnothing$.
+ \begin{hint}
+ This is actually an equivalent formulation
+ of the Weak Nullstellensatz.
+ \end{hint}
+ \begin{sol}
+ Let $I$ be a proper ideal, and let $\km$ be a maximal ideal containing it.
+ (If you are worried about the existence of $\km$,
+ it follows from Krull's Theorem, \Cref{prob:krull_max_ideal}).
+ Then $\km = (x_1 - a_1, \dots, x_n - a_n)$ by the Weak Nullstellensatz.
+ Consequently, $(a_1, \dots, a_n)$ is the unique point of $\VV(\km)$,
+ and hence this point is also in $\VV(I)$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ \yod
+ \label{prob:hilbert_from_weak}
+ Show that Hilbert's Nullstellensatz in $n$ dimensions
+ follows from the Weak Nullstellensatz.
+ (This solution is called the \vocab{Rabinowitsch Trick}.)
+ \begin{hint}
+ Use the weak Nullstellensatz in $n+1$ dimensions.
+ Given $f$ vanishing on everything,
+ consider $x_{n+1}f-1$.
+ \end{hint}
+ \begin{sol}
+ The point is to check that if $f$ vanishes on all of $\VV(I)$,
+ then $f \in \sqrt I$.
+
+ Take a set of generators $f_1, \dots, f_m$,
+ in the original ring $\CC[x_1, \dots, x_n]$;
+ we may assume it's finite by the Hilbert basis theorem.
+
+ We're going to do a trick now:
+ consider $S = \CC[x_1, \dots, x_n, x_{n+1}]$ instead.
+ Consider the ideal $I' \subseteq S$ in the bigger ring
+ generated by $\{f_1, \dots, f_m\}$ and the polynomial $x_{n+1} f - 1$.
+ The point of the last generator is that it forces $f$ to be nonzero:
+ at any point of $\VV(I')$ all the $f_i$ vanish,
+ so the point lies in $\VV(I) \times \CC \subseteq \Aff^{n+1}$
+ and hence $f$ vanishes there too;
+ but then $x_{n+1}f - 1 = -1 \neq 0$, contradiction.
+ Thus $\VV(I') = \varnothing$, and by the weak Nullstellensatz
+ we in fact have $I' = \CC[x_1, \dots, x_{n+1}]$.
+ So
+ \[ 1 = g_1f_1 + \dots + g_mf_m + g_{m+1} \left( x_{n+1}f-1 \right). \]
+ Now the hack: \textbf{replace every instance of $x_{n+1}$ by $\frac 1f$},
+ and then clear all denominators.
+ Thus for some large enough integer $N$ we get
+ \[ f^N = (f^N g_1) f_1 + \dots + (f^N g_m) f_m \]
+ where $N$ is chosen large enough that each $f^N g_i$
+ (after the substitution $x_{n+1} = \frac 1f$)
+ has no negative powers of $f$ remaining, i.e.\ is a polynomial.
+ It follows that $f^N \in I$.
+ \end{sol}
+\end{problem}
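+
+To make the Rabinowitsch trick concrete, here is a minimal worked instance,
+with the illustrative choices $I = (x^2)$ and $f = x$ in $\CC[x]$
+(so $f$ vanishes on $\VV(I) = \{0\}$).
+In $\CC[x, x_2]$ one has the identity
+\[ 1 = x_2^2 \cdot x^2 - (x_2 x + 1)(x_2 x - 1). \]
+Substituting $x_2 = \frac 1x$ kills the second term,
+and multiplying through by $x^2$ gives $f^2 = x^2 \in I$,
+hence $f \in \sqrt I$, as the trick predicts.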
diff --git a/books/napkin/applications.tex b/books/napkin/applications.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e613ec3b73ad65035d7302895868fc65e660f3b4
--- /dev/null
+++ b/books/napkin/applications.tex
@@ -0,0 +1,245 @@
+\chapter{Some applications}
+With all this setup, we now take the time to develop some nice
+results which are of independent interest.
+
+\section{Frobenius divisibility}
+\begin{theorem}
+ [Frobenius divisibility]
+ Let $V$ be a complex irrep of a finite group $G$.
+ Then $\dim V$ divides $|G|$.
+\end{theorem}
+The proof of this will require algebraic integers
+(developed in the algebraic number theory chapter).
+Recall that an \emph{algebraic integer} is a complex number
+which is the root of a monic polynomial with integer coefficients,
+and that these algebraic integers form a ring $\ol\ZZ$
+under addition and multiplication, and that $\ol\ZZ \cap \QQ = \ZZ$.
+
+First, we prove:
+\newcommand{\tempuuujbhtkx}{$\ZZ[G]$}
+\begin{lemma}[Elements of \tempuuujbhtkx\ are integral]
+ \label{lem:group_ring_integral}
+ Let $\alpha \in \ZZ[G]$.
+ Then there exists a monic polynomial $P$ with integer coefficients
+ such that $P(\alpha) = 0$.
+\end{lemma}
+\begin{proof}
+ Let $A_k$ be the $\ZZ$-span of $1, \alpha^1, \dots, \alpha^k$.
+ Since $\ZZ[G]$ is Noetherian,
+ the inclusions $A_0 \subseteq A_1 \subseteq A_2 \subseteq \dots$
+ cannot all be strict, hence $A_k = A_{k+1}$ for some $k$,
+ which means $\alpha^{k+1}$ can be expressed in terms of
+ lower powers of $\alpha$.
+\end{proof}
+
+\begin{proof}
+ [Proof of Frobenius divisibility]
+ Let $C_1$, \dots, $C_m$ denote the conjugacy classes of $G$.
+ Then consider the rational number \[ \frac{|G|}{\dim V}; \]
+ we will show it is an algebraic integer, which will prove the theorem.
+ Observe that we can rewrite it as
+ \[
+ \frac{|G|}{\dim V}
+ = \frac{|G| \left< \chi_V, \chi_V \right>}{\dim V}
+ = \sum_{g \in G} \frac{\chi_V(g) \ol{\chi_V(g)}}{\dim V}.
+ \]
+ We split the sum over conjugacy classes, so
+ \[
+ \frac{|G|}{\dim V}
+ =
+ \sum_{i=1}^m \ol{\chi_V(C_i)} \cdot \frac{|C_i| \chi_V(C_i)}{\dim V}.
+ \]
+ We claim that for every $i$,
+ \[ \frac{|C_i| \chi_V(C_i)}{\dim V}
+ = \frac{1}{\dim V} \Tr T_i \]
+ is an algebraic integer,
+ where \[ T_i \defeq \rho\left(\sum_{h \in C_i} h\right). \]
+ To see this, note that $T_i$ commutes with elements of $G$,
+ and hence is an intertwining operator $T_i : V \to V$.
+ Thus by Schur's lemma, $T_i = \lambda_i \cdot \id_V$
+ and $\Tr T_i = \lambda_i \dim V$.
+ By \Cref{lem:group_ring_integral}, $\lambda_i \in \ol\ZZ$, as desired.
+
+ Now we are done, since $\ol{\chi_V(C_i)} \in \ol\ZZ$ too
+ (it is the sum of conjugates of roots of unity),
+ so $\frac{|G|}{\dim V}$ is the sum of products of algebraic integers,
+ hence itself an algebraic integer.
+\end{proof}
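+
+As a sanity check, take $G = S_3$ with $|G| = 6$:
+its complex irreps have dimensions $1$, $1$, $2$,
+each of which indeed divides $6$.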
+
+\section{Burnside's theorem}
+We now prove a group-theoretic result.
+This is the famous poster child for representation theory
+(in the same way that RSA is the poster child of number theory)
+because the result is purely group theoretic.
+
+Recall that a group is \vocab{simple} if its only normal subgroups
+are the trivial subgroup and the whole group.
+In fact, we will prove:
+\begin{theorem}[Burnside]
+ Let $G$ be a nonabelian group of order $p^a q^b$ (where $p,q$ are distinct primes and $a,b \ge 0$).
+ Then $G$ is not simple.
+\end{theorem}
+In what follows $p$ and $q$ will always denote prime numbers.
+
+\begin{lemma}[On $\gcd(|C|, \dim V) = 1$]
+ \label{lem:burnside_ant_lemma}
+ Let $V = (V, \rho)$ be a complex irrep of $G$.
+ Assume $C$ is a conjugacy class of $G$ with $\gcd(|C|, \dim V) = 1$.
+ Then for any $g \in C$, either
+ \begin{itemize}
+ \ii $\rho(g)$ is multiplication by a scalar, or
+ \ii $\chi_V(g) = \Tr \rho(g) = 0$.
+ \end{itemize}
+\end{lemma}
+\begin{proof}
+ If $\eps_i$ are the $n = \dim V$ eigenvalues of $\rho(g)$ (which are roots of unity),
+ then from the proof of Frobenius divisibility we know
+ $\frac{|C|}{n} \chi_V(g) \in \ol\ZZ$,
+ thus from $\gcd(|C|, n) = 1$ we get
+ \[ \frac1n \chi_V(g) = \frac1n(\eps_1 + \dots + \eps_n) \in \ol\ZZ. \]
+ So this follows readily from a fact from algebraic number theory,
+ namely \Cref{prob:rep_lemma}:
+ either $\eps_1 = \dots = \eps_n$ (first case) or
+ $\eps_1 + \dots + \eps_n = 0$ (second case).
+\end{proof}
+
+\begin{lemma}
+ [Simple groups don't have prime power conjugacy classes]
+ Let $G$ be a finite simple group.
+ Then $G$ cannot have a conjugacy class of size $p^k$ (where $k > 0$).
+\end{lemma}
+\begin{proof}
+ By contradiction.
+ Assume $C$ is such a conjugacy class, and fix any $g \in C$.
+ By the second orthogonality formula (\Cref{prob:second_orthog})
+ applied to $g$ and $1_G$ (which are not conjugate since $g \neq 1_G$) we have
+ \[ \sum_{i=1}^r \dim V_i \chi_{V_i}(g) = 0 \]
+ where $V_i$ are as usual all irreps of $G$.
+ \begin{exercise}
+ Show that there exists a nontrivial irrep $V$
+ such that $p \nmid \dim V$ and $\chi_V(g) \neq 0$.
+ (Proceed by contradiction to show that $-\frac1p \in \ol\ZZ$ if not.)
+ \end{exercise}
+ Let $V = (V, \rho)$ be the irrep mentioned.
+ By the previous lemma, we now know that $\rho(g)$ acts as a scalar on $V$.
+
+ Now consider the subgroup
+ \[ H = \left< ab\inv \mid a,b \in C \right> \subseteq G. \]
+ We claim this is a nontrivial normal subgroup of $G$.
+ It is easy to check $H$ is normal,
+ and since $|C| > 1$ we have that $H$ is nontrivial.
+ Under the representation $V$, each element of $H$ acts trivially on $V$,
+ so since $V$ is nontrivial and irreducible, $H \neq G$.
+ This contradicts the assumption that $G$ was simple.
+\end{proof}
+
+With this lemma, Burnside's theorem follows by partitioning
+the $|G|$ elements of our group into conjugacy classes.
+Assume for contradiction $G$ is simple.
+Each conjugacy class has size either $1$ (of which there are $|Z(G)|$, by \Cref{prob:class_eq})
+or divisible by $pq$ (by the previous lemma applied to both $p$ and $q$),
+while the sum of the sizes equals $|G| = p^aq^b$.
+If $a, b \ge 1$, reducing modulo $pq$ forces $pq \mid |Z(G)|$;
+if instead $a = 0$ or $b = 0$, then $G$ is a prime-power group,
+whose center is nontrivial by the class equation.
+Either way, we must have $|Z(G)| > 1$.
+But $G$ is not abelian, hence $Z(G) \neq G$,
+thus the center $Z(G)$ is a nontrivial normal subgroup,
+contradicting the assumption that $G$ was simple.
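+
+As a sanity check, this is consistent with the smallest nonabelian simple group:
+$A_5$ has order $60 = 2^2 \cdot 3 \cdot 5$,
+which is divisible by \emph{three} distinct primes,
+so Burnside's theorem does not apply to it.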
+
+
+
+\section{Frobenius determinant}
+We finish with the following result,
+the problem that started the branch of representation theory.
+Given a finite group $G$,
+we create $n$ variables $\{x_g\}_{g \in G}$,
+and an $n \times n$ matrix $M_G$ whose $(g,h)$th entry is $x_{gh}$.
+\begin{example}[Frobenius determinants]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii If $G = \Zc 2 = \left< T \mid T^2 = 1\right>$
+ then the matrix would be \[ M_G =
+ \begin{bmatrix} x_{\id} & x_T \\ x_T & x_{\id} \end{bmatrix}. \]
+ Then $\det M_G = (x_{\id}-x_T)(x_{\id}+x_T)$.
+ \ii If $G = S_3$, a long computation gives
+ the irreducible factorization of $\det M_G$ is
+ \[
+ \left( \sum_{\sigma \in S_3} x_{\sigma} \right)
+ \left( \sum_{\sigma \in S_3} \sign(\sigma)x_\sigma \right)
+ \Big( F\left(x_\id, x_{(123)}, x_{(321)}\right)
+ - F\left(x_{(12)}, x_{(23)}, x_{(31)}\right) \Big)^2 \]
+ where $F(a,b,c) = a^2+b^2+c^2-ab-bc-ca$;
+ the latter factor is irreducible.
+ \end{enumerate}
+\end{example}
+\begin{theorem}
+ [Frobenius determinant]
+ The polynomial $\det M_G$ (in $|G|$ variables) factors
+ into a product of irreducible polynomials such that
+ \begin{enumerate}[(i)]
+ \ii The number of polynomials equals the number
+ of conjugacy classes of $G$, and
+ \ii The multiplicity of each polynomial
+ equals its degree.
+ \end{enumerate}
+\end{theorem}
+You may already be able to guess how the ``sum of squares'' result
+is related! (Indeed, look at $\deg\det M_G$.)
+
+Legend has it that Dedekind observed this behavior first in 1896.
+He didn't know how to prove it in general,
+so he sent it in a letter to Frobenius,
+who created representation theory to solve the problem.
+
+With all the tools we've built, it is now fairly straightforward
+to prove the result.
+
+\begin{proof}
+ Let $V = (V, \rho) = \Reg(\CC[G])$ and let $V_1$, \dots, $V_r$
+ be the irreps of $G$.
+ Let's consider the map $T \colon \CC[G] \to \CC[G]$
+ which has matrix $M_G$ in the usual basis of $\CC[G]$, namely
+ \[ T(\{x_g\}_{g \in G}) = \sum_{g \in G} x_g \rho(g) \in \Mat(V). \]
+ Thus we want to examine $\det T$.
+
+ But we know that $V = \bigoplus_{i=1}^r V_i^{\oplus \dim V_i}$
+ as before, and so breaking down $T$ over its subspaces we know
+ \[
+ \det T
+ = \prod_{i=1}^r \left( \det (T \restrict{V_i}) \right)^{\dim V_i}.
+ \]
+ So we only have to show two things:
+ the polynomials $\det (T \restrict{V_i})$ are irreducible,
+ and they are pairwise different for different $i$.
+
+ Let $V_i = (V_i, \rho)$, and pick $k = \dim V_i$.
+ \begin{itemize}
+ \ii \emph{Irreducible}:
+ By the density theorem, for any $M \in \Mat(V_i)$ there exists
+ a \emph{particular} choice of complex numbers $x_g \in \CC$ such that
+ \[
+ M = \sum_{g \in G} x_g
+ \cdot \rho_i(g)
+ = (T \restrict{V_i})(\{x_g\}).
+ \]
+ View $\rho_i(g)$ as a $k \times k$ matrix with complex coefficients.
+ Thus the ``generic'' $(T \restrict{V_i})(\{x_g\})$, viewed as a matrix with
+ polynomial entries, must have linearly independent entries
+ (or there would be some matrix in $\Mat(V_i)$ that we can't achieve).
+
+ Then, the assertion follows (by a linear variable change)
+ from the simple fact that the polynomial
+ $\det (y_{ij})_{1 \le i, j \le m}$ in $m^2$ variables
+ is always irreducible.
+
+ \ii \emph{Pairwise distinct}:
+ We show that from $\det T|_{V_i}(\{x_g\})$ we can read
+ off the character $\chi_{V_i}$, which proves the claim.
+ In fact
+ \begin{exercise}
+ Pick \emph{any} basis for $V_i$.
+ If $\dim V_i = k$, and $1_G \neq g \in G$, then
+ \[
+ \chi_{V_i} (g)
+ \text{ is the coefficient of } x_g x_{1_G}^{k-1}.
+ \]
+ \end{exercise}
+ Thus, we are done. \qedhere
+ \end{itemize}
+\end{proof}
diff --git a/books/napkin/artin.tex b/books/napkin/artin.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e631a13993212503a1230fe0665cd1b13c175a77
--- /dev/null
+++ b/books/napkin/artin.tex
@@ -0,0 +1,572 @@
+\chapter{Bonus: A Bit on Artin Reciprocity}
+In this chapter, I'm going to state some big theorems of
+global class field theory and use them to deduce the
+Kronecker-Weber theorem and the existence of Hilbert class fields.
+No proofs, but hopefully still appreciable.
+For experts: this is global class field theory, without ideles.
+
+Here's the executive summary: let $K$ be a number field.
+Then all abelian extensions $L/K$ can be understood
+using solely information intrinsic to $K$:
+namely, the ray class groups (generalizing ideal class groups).
+
+\section{Infinite primes}
+\prototype{$\QQ(\sqrt{-5})$ has a complex infinite prime,
+$\QQ(\sqrt5)$ has two real infinite ones.}
+Let $K$ be a number field of degree $n$ and signature $(r,s)$.
+We know what a prime ideal of $\OO_K$ is,
+but we now allow for the so-called infinite primes,
+which I'll describe using the embeddings.\footnote{This is
+ not really the right definition; the ``correct'' way to think of
+ primes, finite or infinite, is in terms of valuations.
+ But it'll be sufficient for me to state the theorems I want.}
+Recall there are $n$ embeddings $\sigma : K \to \CC$, which consist of
+\begin{itemize}
+ \ii $r$ real embeddings where $\img\sigma \subseteq \RR$, and
+ \ii $s$ pairs of conjugate complex embeddings.
+\end{itemize}
+Hence $r+2s = n$.
+The first class of embeddings form the \vocab{real infinite primes},
+while the \vocab{complex infinite primes} are the second type.
+We say $K$ is \vocab{totally real} (resp \vocab{totally complex})
+if all its infinite primes are real (resp complex).
+\begin{example}
+ [Examples of infinite primes]
+ \listhack
+ \begin{itemize}
+ \ii $\QQ$ has a single real infinite prime.
+ We often write it as $\infty$.
+ \ii $\QQ(\sqrt{-5})$ has a single complex infinite prime,
+ and no real infinite primes. Hence totally complex.
+ \ii $\QQ(\sqrt{5})$ has two real infinite primes,
+ and no complex infinite primes. Hence totally real.
+ \end{itemize}
+\end{example}
+
+\section{Modular arithmetic with infinite primes}
+A \vocab{modulus} is a formal product
+\[ \km = \prod_{\kp} \kp^{\nu(\kp)} \]
+where the product runs over all primes, finite and infinite.
+(Here $\nu(\kp)$ is a nonnegative integer,
+of which only finitely many are nonzero.)
+We also require that
+\begin{itemize}
+ \ii $\nu(\kp) = 0$ for any complex infinite prime $\kp$, and
+ \ii $\nu(\kp) \le 1$ for any real infinite prime $\kp$.
+\end{itemize}
+Obviously, every $\km$ can be written as $\km = \km_0\km_\infty$
+by separating the finite from the (real) infinite primes.
+
+We say $a \equiv b \pmod{\km}$ if
+\begin{itemize}
+ \ii If $\kp$ is a finite prime, then $a \equiv b \pmod{\kp^{\nu(\kp)}}$
+ means exactly what you think it should mean:
+ $a-b \in \kp^{\nu(\kp)}$.
+ \ii If $\kp$ is a \emph{real} infinite prime $\sigma : K \to \RR$, then
+ $a \equiv b \pmod{\kp}$ means that $\sigma(a/b) > 0$.
+\end{itemize}
+Of course, $a \equiv b \pmod{\km}$ means $a \equiv b$
+modulo each prime power in $\km$.
+With this, we can define a generalization of the class group:
+\begin{definition}
+ Let $\km$ be a modulus of a number field $K$.
+ \begin{itemize}
+ \ii Let $I_K(\km)$ denote the set of all fractional ideals of $K$
+ which are relatively prime to $\km$, which is an abelian group.
+ \ii Let $P_K(\km)$ be the subgroup of $I_K(\km)$ generated by
+ \[
+ \left\{
+ \alpha \OO_K
+ \mid
+ \alpha \in K^\times \text{ and }
+ \alpha \equiv 1 \pmod{\km}
+ \right\}.
+ \]
+ This is sometimes called a ``ray'' of principal ideals.
+ \end{itemize}
+ Finally define the \vocab{ray class group} of $\km$
+ to be $C_K(\km) = I_K(\km) / P_K(\km)$.
+\end{definition}
+This group is known to always be finite.
+Note the usual class group is $C_K(1)$.
+
+One last definition that we'll use right after Artin reciprocity:
+\begin{definition}
+ A \vocab{congruence subgroup} of $\km$ is a subgroup $H$ with
+ \[ P_K(\km) \subseteq H \subseteq I_K(\km). \]
+\end{definition}
+Thus the quotients $I_K(\km) / H$ by congruence subgroups $H$
+form a lattice of quotients of $C_K(\km)$.
+
+This definition takes a while to get used to, so here are examples.
+\begin{example}
+ [Ray class groups in $\QQ$, finite modulus]
+ Consider $K = \QQ$ with infinite prime $\infty$. Then
+ \begin{itemize}
+ \ii If we take $\km = 1$ then $I_\QQ(1)$ is all fractional ideals,
+ and $P_\QQ(1)$ is all principal fractional ideals.
+ Their quotient is the usual class group of $\QQ$, which is trivial.
+
+ \ii Now take $\km = 8$.
+ Thus $I_\QQ(8) \cong \left\{ \frac ab\ZZ \mid
+ a/b \equiv 1,3,5,7 \pmod 8 \right\}$.
+ Moreover
+ \[ P_\QQ(8) \cong \left\{ \frac ab\ZZ \mid
+ a/b \equiv 1 \pmod 8 \right\}. \]
+ You might at first glance think that
+ the quotient is thus $(\ZZ/8\ZZ)^\times$.
+ But the issue is that we are dealing with \emph{ideals}:
+ specifically, we have
+ \[ 7\ZZ = -7\ZZ \in P_\QQ(8) \]
+ because $-7 \equiv 1 \pmod 8$.
+ So \emph{actually}, we get
+ \[
+ C_\QQ(8)
+ \cong \left\{ 1,3,5,7 \text{ mod } 8 \right\}
+ / \left\{ 1,7 \text{ mod } 8 \right\}
+ \cong (\ZZ/4\ZZ)^\times.
+ \]
+ \end{itemize}
+ More generally,
+ \[ C_\QQ(m) = (\ZZ/m\ZZ)^\times / \{\pm1\}. \]
+\end{example}
+\begin{example}
+ [Ray class groups in $\QQ$, infinite moduli]
+ Consider $K = \QQ$ with infinite prime $\infty$ again.
+ \begin{itemize}
+ \ii Now take $\km = \infty$.
+ As before, $I_\QQ(\infty)$ is all fractional ideals.
+ Now, by definition we have
+ \[ P_\QQ(\infty) = \left\{ \frac ab \ZZ \mid a/b > 0 \right\}. \]
+ At first glance you might think this excludes some ideals,
+ but since $\frac ab \ZZ = -\frac ab \ZZ$,
+ every principal fractional ideal has a positive generator.
+ So in this case, $P_\QQ(\infty)$ still contains all principal fractional ideals.
+ Therefore, $C_\QQ(\infty)$ is still trivial.
+
+ \ii Finally, let $\km = 8\infty$.
+ As before $I_\QQ(8\infty) \cong \left\{ \frac ab\ZZ \mid
+ a/b \equiv 1,3,5,7 \pmod 8 \right\}$.
+ Now in this case:
+ \[ P_\QQ(8\infty) \cong \left\{ \frac ab\ZZ \mid
+ a/b \equiv 1 \pmod 8 \text{ and } a/b > 0 \right\}. \]
+ This time, we really do have $-7\ZZ \notin P_\QQ(8\infty)$:
+ we have $7 \not\equiv 1 \pmod 8$ and also $-7 < 0$.
+ So neither of the generators of $7\ZZ$ are in $P_\QQ(8\infty)$.
+ Thus we finally obtain
+ \[
+ C_\QQ(8\infty)
+ \cong \left\{ 1,3,5,7 \text{ mod } 8 \right\}
+ / \left\{ 1 \text{ mod } 8 \right\}
+ \cong (\ZZ/8\ZZ)^\times
+ \]
+ with the bijection $C_\QQ(8\infty) \to (\ZZ/8\ZZ)^\times$
+ given by $a\ZZ \mapsto |a| \pmod 8$.
+ \end{itemize}
+ More generally,
+ \[ C_\QQ(m\infty) = (\ZZ/m\ZZ)^\times. \]
+\end{example}
+
+\section{Infinite primes in extensions}
+I want to emphasize that everything above is
+\emph{intrinsic} to a particular number field $K$.
+After this point we are going to consider extensions $L/K$
+but it is important to keep in mind the distinction that
+the concept of modulus and ray class group are objects
+defined solely from $K$ rather than the above $L$.
+
+Now take a \emph{Galois} extension $L/K$ of degree $m$.
+We already know prime ideals $\kp$ of $K$ break into
+a product of prime ideals $\kP$ of $L$ in a nice way,
+so we want to do the same thing with infinite primes.
+This is straightforward: each of the $n$ infinite primes
+$\sigma : K \to \CC$ lifts to $m$ infinite primes $\tau : L \to \CC$,
+by which I mean the diagram
+\begin{center}
+\begin{tikzcd}
+ L \ar[r, "\tau", dashed] & \CC \\
+ K \ar[u, hook] \ar[ur, "\sigma", swap]
+\end{tikzcd}
+\end{center}
+commutes.
+Hence like before, each infinite prime $\sigma$ of $K$
+has $m$ infinite primes $\tau$ of $L$ which lie above it.
+
+For a real prime $\sigma$ of $K$, if any of the resulting $\tau$ above it
+are complex, we say that the prime $\sigma$ \vocab{ramifies}
+in the extension $L/K$. Otherwise it is \vocab{unramified} in $L/K$.
+A \emph{complex} infinite prime of $K$ is always unramified in $L/K$.
+In this way, we can talk about an unramified Galois extension $L/K$:
+it is one where all primes (finite or infinite) are unramified.
+
+\begin{example}
+ [Ramification of $\infty$]
+ Let $\infty$ be the real infinite prime of $\QQ$.
+ \begin{itemize}
+ \ii $\infty$ is ramified in $\QQ(\sqrt{-5})/\QQ$.
+ \ii $\infty$ is unramified in $\QQ(\sqrt{5})/\QQ$.
+ \end{itemize}
+ Note also that if $K$ is totally complex
+ then no infinite prime of $K$ ramifies in any extension $L/K$.
+\end{example}
+
+\section{Frobenius element and Artin symbol}
+Recall the key result:
+\begin{theorem}
+ [Frobenius element]
+ Let $L/K$ be a \emph{Galois} extension.
+ Let $\kp$ be a prime of $K$ unramified in $L/K$,
+ and $\kP$ a prime above it in $L$.
+ Then there is a unique element of $\Gal(L/K)$,
+ denoted $\Frob_\kP$, obeying
+ \[ \Frob_\kP(\alpha) \equiv \alpha^{N\kp} \pmod{\kP}
+ \qquad \forall \alpha \in \OO_L. \]
+\end{theorem}
+\begin{example}
+ [Example of Frobenius elements]
+ Let $L = \QQ(i)$, $K = \QQ$.
+ We have $\Gal(L/K) \cong \ZZ/2\ZZ$.
+
+ If $p$ is an odd prime with $\kP$ above it,
+ then $\Frob_\kP$ is the unique element such that
+ \[ (a+bi)^p \equiv \Frob_\kP(a+bi) \pmod{\kP} \]
+ in $\ZZ[i]$. In particular,
+ \[ \Frob_\kP(i) = i^p =
+ \begin{cases}
+ i & p \equiv 1 \pmod 4 \\
+ -i & p \equiv 3 \pmod 4.
+ \end{cases}
+ \]
+ From this we see that $\Frob_\kP$ is the identity when $p \equiv 1 \pmod 4$
+ and $\Frob_\kP$ is complex conjugation when $p \equiv 3 \pmod 4$.
+\end{example}
+\begin{example}
+ [Cyclotomic Frobenius element]
+ Generalizing previous example, let $L = \QQ(\zeta)$ and $K = \QQ$,
+ with $\zeta$ an $m$th root of unity.
+ It's well-known that $L/K$ is unramified outside $\infty$
+ and prime factors of $m$.
+ Moreover, the Galois group $\Gal(L/K)$ is $(\ZZ/m\ZZ)^\times$:
+ the Galois group consists of elements of the form
+ \[ \sigma_n : \zeta \mapsto \zeta^n \]
+ and $\Gal(L/K) = \left\{ \sigma_n \mid n \in (\ZZ/m\ZZ)^\times \right\}$.
+
+ Then it follows just like before that
+ if $p \nmid m$ is prime and $\kP$ is above $p$, then
+ \[ \Frob_\kP = \sigma_p. \]
+\end{example}
+
+An important property of the Frobenius element is its order
+is related to the decomposition of $\kp$ in the higher field $L$
+in the nicest way possible:
+\begin{lemma}
+ [Order of the Frobenius element]
+ The Frobenius element $\Frob_\kP \in \Gal(L/K)$
+ of an extension $L/K$ has order equal to the
+ inertial degree of $\kP$, that is,
+ \[ \ord \Frob_\kP = f(\kP \mid \kp). \]
+ In particular, $\Frob_\kP = \id$ if and only if $\kp$
+ splits completely in $L/K$.
+\end{lemma}
+\begin{proof}
+ We want to understand the order of the map $T : x \mapsto x^{N\kp}$ on
+ the field $\OO_L / \kP$.
+ But the latter is isomorphic to the splitting field
+ of $X^{N\kP} - X$ over $\FF_p$, by Galois theory of finite fields.
+ Hence the order is $\log_{N\kp} (N\kP) = f(\kP \mid \kp)$.
+\end{proof}
+
+\begin{exercise}
+ Deduce from this that the rational primes which split completely
+ in $\QQ(\zeta)$ are exactly those with $p \equiv 1 \pmod m$.
+ Here $\zeta$ is an $m$th root of unity.
+\end{exercise}
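+
+For instance, taking $m = 4$:
+the primes splitting completely in $\QQ(i)$ should be exactly those
+with $p \equiv 1 \pmod 4$, which matches the earlier computation
+that $\Frob_\kP = \id$ precisely when $p \equiv 1 \pmod 4$.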
+
+The Galois group acts transitively among the set of $\kP$ above a given $\kp$,
+so that we have
+\[ \Frob_{\sigma(\kP)} = \sigma \circ \Frob_{\kP} \circ \sigma\inv. \]
+Thus $\Frob_\kP$ is determined by its underlying $\kp$ up to conjugation.
+
+In class field theory, we are interested in \vocab{abelian extensions},
+i.e.\ those for which $\Gal(L/K)$ is abelian.
+Here the theory becomes extra nice:
+the conjugacy classes have size one.
+\begin{definition}
+ Assume $L/K$ is an \textbf{abelian} extension.
+ Then for a given unramified prime $\kp$ in $K$,
+ the element $\Frob_\kP$ doesn't depend on the choice of $\kP$.
+ We denote the resulting $\Frob_\kP$ by the \vocab{Artin symbol},
+ \[ \left( \frac{L/K}{\kp} \right). \]
+\end{definition}
+The definition of the Artin symbol is written deliberately to
+look like the Legendre symbol.
+To see why:
+\begin{example}[Legendre symbol subsumed by Artin symbol]
+ Suppose we want to understand
+ $(2/p) \equiv 2^{\frac{p-1}{2}}$ where $p > 2$ is prime.
+ Consider the element
+ \[ \left( \frac{\QQ(\sqrt 2)/\QQ}{p\ZZ} \right)
+ \in \Gal(\QQ(\sqrt 2) / \QQ). \]
+ It is uniquely determined by where it sends $\sqrt 2$.
+ But in fact we have
+ \[
+ \left( \frac{\QQ(\sqrt 2)/\QQ}{p\ZZ} \right) \left( \sqrt 2 \right)
+ \equiv \left( \sqrt 2 \right)^{p}
+ \equiv 2^{\frac{p-1}{2}} \cdot \sqrt 2
+ \equiv \left( \frac 2p \right) \sqrt 2
+ \pmod{\kP}
+ \]
+ where $\left( \frac 2p \right)$ is the usual Legendre symbol,
+ and $\kP$ is above $p$ in $\QQ(\sqrt 2)$.
+ Thus the Artin symbol generalizes the quadratic Legendre symbol.
+\end{example}
+\begin{example}[Cubic Legendre symbol subsumed by Artin symbol]
+ Similarly, it also generalizes the cubic Legendre symbol.
+ To see this, assume $\theta$ is primary in $K = \QQ(\sqrt{-3}) = \QQ(\omega)$
+ (thus $\OO_K = \ZZ[\omega]$ is Eisenstein integers).
+ Then for example
+ \[
+ \left( \frac{K(\cbrt 2)/K}{\theta \OO_K} \right) \left( \cbrt 2 \right)
+ \equiv \left( \cbrt 2 \right)^{N(\theta)}
+ \equiv 2^{\frac{N\theta-1}{3}} \cdot \sqrt 2
+ \equiv \left( \frac{2}{\theta} \right)_3 \cbrt 2.
+ \pmod{\kP}
+ \]
+ where $\kP$ is above $p$ in $K(\cbrt 2)$.
+\end{example}
+
+\section{Artin reciprocity}
+Now, we further capitalize on the fact that $\Gal(L/K)$ is abelian.
+For brevity, in what follows let $\Ram(L/K)$ denote the primes of $K$
+(either finite or infinite) which ramify in $L$.
+
+\begin{definition}
+ Let $L/K$ be an abelian extension and let $\km$ be
+ divisible by every prime in $\Ram(L/K)$.
+ Then since $L/K$ is abelian we can extend the Artin symbol
+ multiplicatively to a map
+ \[
+ \left( \frac{L/K}{\bullet} \right) :
+ I_K(\km) \surjto \Gal(L/K).
+ \]
+ This is called the \vocab{Artin map},
+ and it is surjective (for example by Chebotarev Density).
+ Thus we denote its kernel by
+ \[ H(L/K, \km) \subseteq I_K(\km). \]
+\end{definition}
+In particular we have
+$\Gal(L/K) \cong I_K(\km) / H(L/K, \km)$.
+
+We can now present the long-awaited Artin reciprocity theorem.
+\begin{theorem}
+ [Artin reciprocity]
+ Let $L/K$ be an abelian extension.
+ Then there is a modulus $\kf = \kf(L/K)$,
+ divisible by exactly the primes of $\Ram(L/K)$, such that:
+ for any modulus $\km$ divisible by all primes of $\Ram(L/K)$, we have
+ \[
+ P_K(\km) \subseteq H(L/K, \km) \subseteq I_K(\km)
+ \quad\text{if and only if}\quad
+ \kf \mid \km.
+ \]
+ We call $\kf$ the \vocab{conductor} of $L/K$.
+\end{theorem}
+So the conductor $\kf$ plays a similar role to the discriminant
+(divisible by exactly the primes which ramify),
+and when $\km$ is divisible by the conductor,
+$H(L/K, \km)$ is a \emph{congruence subgroup}.
+
+Here's the reason this is called a ``reciprocity'' theorem.
+Recalling that $C_K(\kf) = I_K(\kf) / P_K(\kf)$,
+the above theorem tells us we get a sequence of maps
+\begin{center}
+\begin{tikzcd}
+ I_K(\kf) \ar[r, two heads] & C_K(\kf) \ar[rd, two heads]
+ \ar[rr, "\left( \frac{L/K}{\bullet} \right)", two heads]
+ && \Gal(L/K) \\
+ && I_K(\kf) / H(L/K, \kf) \ar[ru, "\cong", swap] &
+\end{tikzcd}
+\end{center}
+Consequently:
+\begin{moral}
+ For primes $\kp \in I_K(\kf)$,
+ $\left( \frac{L/K}{\kp} \right)$ depends
+ only on ``$\kp \pmod \kf$''.
+\end{moral}
+
+Let's see how this result relates to quadratic reciprocity.
+\begin{example}
+ [Artin reciprocity implies quadratic reciprocity]
+ The big miracle of quadratic reciprocity states that:
+ for a fixed (squarefree) $a$,
+ the Legendre symbol $\left( \frac ap \right)$
+ should only depend on the residue of $p$ modulo something.
+ Let's see why Artin reciprocity tells us this \emph{a priori}.
+
+ Let $L = \QQ(\sqrt a)$, $K = \QQ$.
+ Then we've already seen that the Artin symbol
+ \[ \left( \frac{\QQ(\sqrt a)/\QQ}{\bullet} \right) \]
+ is the correct generalization of the Legendre symbol.
+ Thus, Artin reciprocity tells us that there is a conductor
+ $\kf = \kf(\QQ(\sqrt a)/\QQ)$ such that
+ $\left( \frac{\QQ(\sqrt a)/\QQ}{p} \right)$ depends only on
+ the residue of $p$ modulo $\kf$, which is what we wanted.
+\end{example}
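+
+Concretely, for $a = 2$ the conductor $\kf(\QQ(\sqrt 2)/\QQ)$ divides $8\infty$
+(since $\QQ(\sqrt 2) \subseteq \QQ(\zeta_8)$),
+and indeed the classical supplement to quadratic reciprocity says that
+$\left( \frac 2p \right)$ depends only on $p \bmod 8$:
+it equals $+1$ exactly when $p \equiv \pm 1 \pmod 8$.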
+
+Here is an example along the same lines.
+\begin{example}
+ [Cyclotomic field]
+ Let $\zeta$ be a primitive $m$th root of unity.
+ For primes $p$, we know that $\Frob_p \in \Gal(\QQ(\zeta)/\QQ)$
+ is ``exactly'' $p \pmod m$.
+ Let's translate this idea into the notation of Artin reciprocity.
+
+ We are going to prove
+ \[
+ H(\QQ(\zeta) / \QQ, m\infty)
+ = P_\QQ(m\infty)
+ = \left\{ \frac ab \ZZ \mid a/b \equiv 1 \pmod m \right\}.
+ \]
+ This is the generic example of achieving the lower bound in Artin reciprocity.
+ It also implies that $\kf(\QQ(\zeta)/\QQ) \mid m\infty$.
+
+ It's well-known $\QQ(\zeta)/\QQ$ is unramified outside $\infty$
+ and the finite primes dividing $m$,
+ so that the Artin symbol is defined on $I_\QQ(m\infty)$.
+ Now the Artin map is given by
+ \begin{center}
+ \begin{tikzcd}
+ I_\QQ(m\infty) \ar[r, "\left( \frac{\QQ(\zeta)/\QQ}{\bullet} \right)"]
+ & \Gal(\QQ(\zeta)/\QQ) \ar[r, "\cong"] & (\ZZ/m\ZZ)^\times \\[-2em]
+ p \ar[r, mapsto] & (x \mapsto x^p) \ar[r, mapsto] & p \pmod m.
+ \end{tikzcd}
+ \end{center}
+ So we see that the kernel of this map consists of the fractional ideals
+ with a positive generator congruent to $1 \pmod m$.
+ On the other hand, we've also computed $P_\QQ(m\infty)$ already,
+ so we have the desired equality.
+\end{example}
+
+In fact, we also have the following ``existence theorem'':
+every congruence subgroup appears uniquely once we fix $\km$.
+\begin{theorem}
+ [Takagi existence theorem]
+ Fix $K$ and let $\km$ be a modulus.
+ Consider any congruence subgroup $H$, i.e.\
+ \[ P_K(\km) \subseteq H \subseteq I_K(\km). \]
+ Then $H = H(L/K, \km)$ for a \emph{unique} abelian extension $L/K$.
+\end{theorem}
+
+Finally, such subgroups reverse inclusion in the best way possible:
+\begin{lemma}
+ [Inclusion-reversing congruence subgroups]
+ Fix a modulus $\km$.
+ Let $L/K$ and $M/K$ be abelian extensions
+ and suppose $\km$ is divisible by the conductors of $L/K$ and $M/K$.
+ Then
+ \[ L \subseteq M
+ \quad\text{if and only if}\quad
+ H(M/K, \km) \subseteq H(L/K, \km). \]
+\end{lemma}
+Here by $L \subseteq M$ we mean that $L$ is isomorphic to some subfield of $M$.
+\begin{proof}
+ [Sketch of proof]
+ Let us first prove the equivalence with $\km$ fixed.
+ In one direction, assume $L \subseteq M$;
+ one can check from the definitions that the diagram
+ \begin{center}
+ \begin{tikzcd}
+ I_K(\km)
+ \ar[r, "\left( \frac{M/K}\bullet \right)"]
+ \ar[rd, "\left( \frac{L/K}\bullet \right)", swap]
+ & \Gal(M/K) \ar[d, two heads] \\
+ & \Gal(L/K)
+ \end{tikzcd}
+ \end{center}
+ commutes, because it suffices to verify this for prime powers,
+ which is just saying that Frobenius elements behave well
+ with respect to restriction.
+ Then the inclusion of kernels follows directly.
+ The reverse direction is essentially the Takagi existence theorem.
+\end{proof}
+Note that we can always take $\km$ to be the product of conductors here.
+
+\bigskip
+
+To finish, here is a quote from Emil Artin on his reciprocity law:
+\begin{quote}
+ I will tell you a story about the Reciprocity Law.
+ After my thesis, I had the idea to define $L$-series
+ for non-abelian extensions. But for them to agree
+ with the $L$-series for abelian extensions,
+ a certain isomorphism had to be true.
+ I could show it implied all the standard reciprocity laws.
+ So I called it the General Reciprocity Law and tried to prove it but couldn't,
+ even after many tries. Then I showed it to the other number theorists,
+ but they all laughed at it, and I remember Hasse in particular
+ telling me it couldn't possibly be true.
+
+ Still, I kept at it, but nothing I tried worked.
+ Not a week went by --- \emph{for three years!} ---
+ that I did not try to prove the Reciprocity Law.
+ It was discouraging, and meanwhile I turned to other things.
+ Then one afternoon I had nothing special to do, so I said,
+ `Well, I try to prove the Reciprocity Law again.'
+ So I went out and sat down in the garden. You see,
+ from the very beginning I had the idea to use the cyclotomic fields,
+ but they never worked, and now I suddenly saw that all this time
+ I had been using them in the wrong way
+ --- and in half an hour I had it.
+\end{quote}
+
+\section{\problemhead}
+\begin{dproblem}
+ [Kronecker-Weber theorem]
+ Let $L$ be an abelian extension of $\QQ$.
+ Then $L$ is contained in a cyclotomic extension $\QQ(\zeta)$
+ where $\zeta$ is an $m$th root of unity (for some $m$).
+ \begin{hint}
+ Pick $m$ so that $\kf(L/\QQ) \mid m\infty$.
+ \end{hint}
+ \begin{sol}
+ Suppose $\kf(L/\QQ) \mid m\infty$ for some $m$.
+ Then by the example from earlier we have the chain
+ \[ P_{\QQ}(m\infty) = H(\QQ(\zeta)/\QQ, m\infty)
+ \subseteq H(L/\QQ, m\infty) \subseteq I_\QQ(m\infty). \]
+ So by inclusion reversal we're done.
+ \end{sol}
+\end{dproblem}
+
+\begin{dproblem}
+ [Hilbert class field]
+ Let $K$ be any number field.
+ Then there exists a unique abelian extension $E/K$
+ which is unramified at all primes (finite or infinite)
+ and such that
+ \begin{itemize}
+ \ii $E/K$ is the maximal such extension by inclusion.
+ \ii $\Gal(E/K)$ is isomorphic to the class group of $K$.
+ \ii A prime $\kp$ of $K$ splits completely in $E$
+ if and only if it is a principal ideal of $\OO_K$.
+ \end{itemize}
+ We call $E$ the \vocab{Hilbert class field} of $K$.
+ \begin{hint}
+ Apply the Takagi existence theorem with $\km = 1$.
+ \end{hint}
+ \begin{sol}
+ Apply the Takagi existence theorem with $\km = 1$
+ to obtain an unramified extension $E/K$ such that
+ $H(E/K, 1) = P_K(1)$.
+ We claim this works:
+ \begin{itemize}
+ \ii To see it is maximal by inclusion, note that any other extension $M/K$
+ with this property has conductor $1$ (no primes divide the conductor),
+ and then we have $P_K(1) = H(E/K, 1) \subseteq H(M/K, 1) \subseteq I_K(1)$,
+ so inclusion reversal gives $M \subseteq E$.
+ \ii We have $\Gal(E/K) \cong I_K(1) / P_K(1) = C_K(1)$, the class group of $K$.
+ \ii The isomorphism in the previous part is given by the Artin symbol.
+ So $\kp$ splits completely if and only if $\left( \frac{E/K}{\kp} \right) = \id$,
+ which holds if and only if $\kp$ is principal (trivial in $C_K(1)$).
+ \end{itemize}
+ This completes the proof.
+ \end{sol}
+\end{dproblem}
diff --git a/books/napkin/bezout.tex b/books/napkin/bezout.tex
new file mode 100644
index 0000000000000000000000000000000000000000..46fad42353a06fc5e735b4b0bef3f3b6c2b32141
--- /dev/null
+++ b/books/napkin/bezout.tex
@@ -0,0 +1,520 @@
+\chapter{Bonus: B\'ezout's theorem}
+In this chapter we discuss B\'ezout's theorem.
+It makes precise the idea that two degree $d$ and $e$
+curves in $\CP^2$ should intersect at ``exactly'' $de$ points.
+(We work in projective space so e.g.\ any two lines intersect.)
+
+\section{Non-radical ideals}
+\prototype{Tangent to the parabola.}
+We need to account for multiplicities.
+So we will whenever possible work with homogeneous ideals $I$,
+rather than varieties $V$,
+because we want to allow the possibility that $I$ is not radical.
+Let's see how we might do so.
+
+For a first example, suppose we intersect $y=x^2$ with the line $y=1$;
+or more accurately, in projective coordinates of $\CP^2$,
+the parabola $zy=x^2$ and $y=z$.
+The ideal generated by both (i.e.\ the sum of the two ideals) is
+\[ (zy-x^2, y-z) = (x^2-z^2, y-z) \subseteq \CC[x,y,z]. \]
+This corresponds to two intersection points: $(1:1:1)$ and $(-1:1:1)$.
+Here is a picture of the two varieties in the affine $z=1$ chart:
+\begin{center}
+ \begin{asy}
+ import graph;
+ size(4cm);
+ real f(real x) { return x*x-1; }
+ graph.xaxis("$\mathcal V(y-z)$", red);
+ draw(graph(f,-2,2,operator ..), blue, Arrows);
+ label("$\mathcal V(zy-x^2)$", (1.4, f(1.4)), dir(15), blue);
+ label("$\mathbb{CP}^2$", (2,3), dir(45));
+ dotfactor *= 1.5;
+ dot(dir(0), heavygreen);
+ dot(dir(180), heavygreen);
+ \end{asy}
+\end{center}
+That's fine, but now suppose we intersect $zy=x^2$ with the line $x=0$ instead.
+Then we instead get a ``double point'':
+\begin{center}
+ \begin{asy}
+ import graph;
+ size(4cm);
+ real f(real x) { return x*x; }
+ graph.xaxis("$\mathcal V(y)$", red);
+ draw(graph(f,-2,2,operator ..), blue, Arrows);
+ label("$\mathcal V(zy-x^2)$", (1.4, f(1.4)), dir(15), blue);
+ label("$\mathbb{CP}^2$", (2,3), dir(45));
+ dotfactor *= 1.5;
+ dot(origin, heavygreen);
+ \end{asy}
+\end{center}
+The corresponding ideal is this time
+\[ (zy-x^2, y) = (x^2,y) \subseteq \CC[x,y,z]. \]
+This ideal is \emph{not} radical,
+and when we take $\sqrt{(x^2,y)} = (x,y)$ we get the ideal
+which corresponds to a single projective point $(0:0:1)$ of $\CP^2$.
+This is why we work with ideals rather than varieties:
+we need to tell the difference between $(x^2,y)$ and $(x,y)$.
+
+\section{Hilbert functions of finitely many points}
+\prototype{The Hilbert function attached to the double point $(x^2,y)$
+ is eventually the constant $2$.}
+\begin{definition}
+ Given a nonempty projective variety $V$, there is a unique
+ radical ideal $I$ such that $V = \Vp(I)$.
+ In this chapter we denote it by $\II(V)$.
+ For an empty variety we set $\II(\varnothing) = (1)$,
+ rather than choosing the irrelevant ideal.
+\end{definition}
+\begin{definition}
+ Let $I \subseteq \CC[x_0, \dots, x_n]$ be homogeneous.
+ We define the \vocab{Hilbert function} of $I$,
+ denoted $h_I : \ZZ_{\ge 0} \to \ZZ_{\ge 0}$ by
+ \[ h_I(d) = \dim_{\CC} \left( \CC[x_0, \dots, x_n]/I \right)^d \]
+ i.e.\ $h_I(d)$ is the dimension of the $d$th graded part of
+ $\CC[x_0, \dots, x_n] / I$.
+\end{definition}
+\begin{definition}
+ If $V$ is a projective variety, we set $h_V = h_{\II(V)}$,
+ where $\II(V)$ is the \emph{radical} ideal satisfying $V = \Vp(\II(V))$.
+ (If $V = \varnothing$, this means $\II(V) = (1)$, as above.)
+\end{definition}
+\begin{example}[Examples of Hilbert functions in zero dimensions]
+ \label{ex:hilbert_zero}
+ For concreteness, let us use $\CP^2$.
+ \begin{enumerate}[(a)]
+ \ii If $V$ is the single point $(0:0:1)$,
+ with ideal $I = \II(V) = (x,y)$,
+ then
+ \[ \CC[x,y,z] / (x,y) \cong \CC[z]
+ \cong \CC \oplus z\CC \oplus z^2\CC \oplus z^3\CC \oplus \dots \]
+ which has dimension $1$ in all degrees.
+ Consequently, we have \[ h_I(d) \equiv 1. \]
+ \ii Now suppose we use the ``double point'' ideal $I = (x^2,y)$.
+ This time, we have
+ \begin{align*}
+ \CC[x,y,z] / (x^2,y)
+ &\cong \CC[z] \oplus x\CC[z] \\
+ &\cong \CC \oplus (x\CC \oplus z\CC) \oplus (xz\CC \oplus z^2\CC)
+ \oplus (xz^2\CC \oplus z^3\CC) \oplus\dots.
+ \end{align*}
+ From this we deduce that
+ \[
+ h_I(d) =
+ \begin{cases}
+ 2 & d = 1, 2, 3, \dots \\
+ 1 & d = 0.
+ \end{cases}
+ \]
+ \ii Let's now take the variety $V = \{(1:1:1), (-1:1:1)\}$
+ consisting of two points, with $\II(V) = (x^2-z^2, y-z)$. Then
+ \begin{align*}
+ \CC[x,y,z] / (x^2-z^2,y-z)
+ &\cong \CC[x,z] / (x^2-z^2) \\
+ &\cong \CC[z] \oplus x\CC[z].
+ \end{align*}
+ So this example has the same Hilbert function as the previous one.
+ \end{enumerate}
+\end{example}
+\begin{abuse}
+ I'm abusing the isomorphism symbol
+ $\CC[z] \cong \CC \oplus z\CC \oplus z^2\CC$ and similarly
+ in other examples.
+ This is an isomorphism only on the level of $\CC$-vector spaces.
+ However, in computing Hilbert functions of other examples
+ I will continue using this abuse of notation.
+\end{abuse}
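If you like, these little computations can be verified by computer.
The following Python sketch (purely illustrative, and valid only for
\emph{monomial} ideals like $(x,y)$ and $(x^2,y)$) tabulates $h_I(d)$
by counting the degree-$d$ monomials divisible by no generator of $I$:

```python
from itertools import product

def hilbert_fn(gens, nvars, d):
    """h_I(d) for a *monomial* ideal I: count the degree-d monomials
    (as exponent tuples) divisible by no generator of I."""
    def divisible(m, g):
        return all(mi >= gi for mi, gi in zip(m, g))
    return sum(
        1
        for e in product(range(d + 1), repeat=nvars)
        if sum(e) == d and not any(divisible(e, g) for g in gens)
    )

# In CC[x,y,z]: the point (0:0:1) has ideal (x, y),
# the "double point" has ideal (x^2, y); generators as exponent tuples.
point = [(1, 0, 0), (0, 1, 0)]
double = [(2, 0, 0), (0, 1, 0)]

h_point = [hilbert_fn(point, 3, d) for d in range(5)]    # [1, 1, 1, 1, 1]
h_double = [hilbert_fn(double, 3, d) for d in range(5)]  # [1, 2, 2, 2, 2]
```

The two-point ideal $(x^2-z^2, y-z)$ is not monomial, so this naive
counter does not apply to it, but we computed by hand above that it has
the same Hilbert function as the double point.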
+\begin{example}
+ [Hilbert functions for empty varieties]
+ Suppose $I \subsetneq \CC[x_0, \dots, x_n]$
+ is an ideal, possibly not radical
+ but such that \[ \Vp(I) = \varnothing \]
+ hence $\sqrt I = (x_0, \dots, x_n)$ is the irrelevant ideal.
+ Thus there are integers $d_i$ for $i=0,\dots,n$ such that
+ $x_i^{d_i} \in I$ for every $i$; consequently, $h_I(d) = 0$
+ for any $d > d_0 + \dots + d_n$.
+ We summarize this by saying that
+ \[ h_I(d) = 0 \text{ for all $d \gg 0$}. \]
+\end{example}
+Here the notation $d\gg 0$ means ``all sufficiently large $d$''.
+
+From these examples we see that if $I$ is an ideal,
+then the Hilbert function appears to eventually be constant,
+with the desired constant equal to the size of $\Vp(I)$,
+``with multiplicity'' in the case that $I$ is not radical.
+
+Let's prove this.
+Before proceeding we briefly remind the reader of short exact sequences:
+a sequence of maps $0 \to V \injto W \surjto X \to 0$
+is one such that $\img(V \injto W) = \ker(W \surjto X)$
+(and of course the maps $V \injto W$ and $W \surjto X$ are
+injective and surjective).
+If $V$, $W$, $X$ are finite-dimensional vector spaces over $\CC$
+this implies that $\dim W = \dim V + \dim X$.
+
+\begin{proposition}
+ [Hilbert functions of $I \cap J$ and $I+J$]
+ Let $I$ and $J$ be homogeneous ideals in $\CC[x_0, \dots, x_n]$.
+ Then \[ h_{I \cap J} + h_{I+J} = h_I + h_J. \]
+\end{proposition}
+\begin{proof}
+ Consider any $d \ge 0$.
+ Let $S = \CC[x_0, \dots, x_n]$ for brevity.
+ Then
+ \begin{center}
+ \begin{tikzcd}
+ 0 \ar[r]
+ & \left[ S / (I \cap J) \right]^d \ar[r, hook]
+ & \left[ S / I \right]^d \oplus \left[ S / J \right]^d \ar[r, two heads]
+ & \left[ S / (I+J) \right]^d \ar[r] & 0 \\
+ & f \ar[r, mapsto] & (f,f) \\
+ && (f,g) \ar[r, mapsto] & f-g
+ \end{tikzcd}
+ \end{center}
+ is a short exact sequence of vector spaces.
+ Therefore, for every $d \ge 0$ we have that
+ \[
+ \dim \left( \left[ S / I \right]^d \oplus \left[ S / J \right]^d \right)
+ = \dim \left[ S / (I \cap J) \right]^d
+ + \dim \left[ S / (I+J) \right]^d
+ \]
+ which gives the conclusion.
+\end{proof}
+\begin{example}
+ [Hilbert function of two points in $\CP^1$]
+ In $\CP^1$ with coordinate ring $\CC[s,t]$,
+ consider $I = (s)$ the ideal corresponding to the point $(0:1)$
+ and $J = (t)$ the ideal corresponding to the point $(1:0)$.
+ Then $I \cap J = (st)$ is the ideal corresponding
+ to the disjoint union of these two points,
+ while $I+J = (s,t)$ is the irrelevant ideal.
+ Consequently $h_{I+J}(d) = 0$ for $d \gg 0$.
+ Therefore, we get
+ \[ h_{I \cap J}(d) = h_I(d) + h_J(d) \text{ for $d \gg 0$} \]
+ so the Hilbert function of a two-point projective variety
+ is the constant $2$ for $d \gg 0$.
+\end{example}
+
+This example illustrates the content of the main result:
+\begin{theorem}
+ [Hilbert functions of zero-dimensional varieties]
+ Let $V$ be a projective variety consisting of $m$ points
+ (where $m \ge 0$ is an integer).
+ Then \[ h_V(d) = m \text{ for $d \gg 0$}. \]
+\end{theorem}
+\begin{proof}
+ We already did $m = 0$, so assume $m \ge 1$.
+ Let $I = \II(V)$ and for $k=1,\dots,m$
+ let $I_k = \II(\text{$k$th point of $V$})$.
+ \begin{exercise}
+ Show that $h_{I_k} (d) = 1$ for every $d$.
+ (Modify \Cref{ex:hilbert_zero}(a).)
+ \label{ques:hilbert_always_one}
+ \end{exercise}
+
+ Hence we can proceed by induction on $m$,
+ with the base case $m=1$ given by the exercise.
+ For the inductive step,
+ we use the projective analogues of \Cref{thm:many_aff_variety}.
+ We know that $h_{I_1 \cap \dots \cap I_{m-1}}(d) = m-1$ for $d \gg 0$
+ (this is the first $m-1$ points;
+ note that $I_1 \cap \dots \cap I_{m-1}$ is radical).
+ To add in the $m$th point we note that
+ \[
+ h_{I_1 \cap \dots \cap I_m}(d)
+ = h_{I_1 \cap \dots \cap I_{m-1}}(d) + h_{I_m}(d)
+ - h_J(d)
+ \]
+ where $J = (I_1 \cap \dots \cap I_{m-1}) + I_m$.
+ The ideal $J$ may not be radical, but satisfies $\Vp(J) = \varnothing$
+ by an earlier example, hence $h_J(d) = 0$ for $d \gg 0$.
+ This completes the proof.
+\end{proof}
+In exactly the same way we can prove that:
+\begin{corollary}[$h_I$ eventually constant when $\dim \Vp(I) = 0$]
+ Let $I$ be an ideal, not necessarily radical,
+ such that $\Vp(I)$ consists of finitely many points.
+ Then the Hilbert function $h_I$ is eventually constant.
+\end{corollary}
+\begin{proof}
+ Induction on the number of points, $m \ge 1$.
+ The base case $m = 1$ was essentially done in \Cref{ex:hilbert_zero}(b)
+ and \Cref{ques:hilbert_always_one}.
+ The inductive step is literally the same as in the proof above,
+ except no fuss about radical ideals.
+\end{proof}
+
+\section{Hilbert polynomials}
+So far we have only talked about Hilbert functions
+of zero-dimensional varieties, and showed that
+they are eventually constant.
+Let's look at some more examples.
+\begin{example}
+ [Hilbert function of $\CP^n$]
+ The Hilbert function of $\CP^n$ is
+ \[ h_{\CP^n}(d) = \binom{d+n}{n}
+ = \frac{1}{n!} (d+n)(d+n-1) \dots (d+1) \]
+ by a ``balls and urns'' argument.
+ This is a polynomial of degree $n$.
+\end{example}
+\begin{example}
+ [Hilbert function of the parabola]
+ Consider the parabola $zy-x^2$ in $\CP^2$
+ with coordinates $\CC[x,y,z]$.
+ Then
+ \[ \CC[x,y,z] / (zy-x^2) \cong \CC[y,z] \oplus x\CC[y,z]. \]
+ A combinatorial computation gives that
+ \begin{align*}
+ h_{(zy-x^2)}(0) &= 1 & \text{Basis $1$} \\
+ h_{(zy-x^2)}(1) &= 3 & \text{Basis $x,y,z$} \\
+ h_{(zy-x^2)}(2) &= 5 & \text{Basis $xy$, $xz$, $y^2$, $yz$, $z^2$}.
+ \end{align*}
+ We thus in fact see that $h_{(zy-x^2)}(d) = 2d+1$.
+\end{example}
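To double-check the pattern: as vector spaces
$\CC[x,y,z]/(zy-x^2) \cong \CC[y,z] \oplus x\CC[y,z]$,
so $h_{(zy-x^2)}(d)$ counts the monomials $x^a y^b z^c$
with $a \le 1$ and $a+b+c = d$, of which there are $(d+1) + d = 2d+1$.
A quick confirmation of this count (illustrative only):

```python
def h_parabola(d):
    # Count monomials x^a y^b z^c with a <= 1 and a + b + c = d:
    # a = 0 gives d + 1 choices of (b, c), and a = 1 gives d more.
    return sum(1 for a in (0, 1) for b in range(d - a + 1))

values = [h_parabola(d) for d in range(6)]  # [1, 3, 5, 7, 9, 11]
```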
+
+In fact, this behavior of ``eventually polynomial'' always works.
+\begin{theorem}
+ [Hilbert polynomial]
+ Let $I \subseteq \CC[x_0, \dots, x_n]$ be a homogeneous ideal,
+ not necessarily radical. Then
+ \begin{enumerate}[(a)]
+ \ii There exists a polynomial $\chi_I$ such that
+ $h_I(d) = \chi_I(d)$ for all $d \gg 0$.
+ \ii $\deg \chi_I = \dim\Vp(I)$ (if $\Vp(I) = \varnothing$
+ then $\chi_I = 0$).
+ \ii The polynomial $m! \cdot \chi_I$ has integer coefficients,
+ where $m = \dim \Vp(I)$.
+ \end{enumerate}
+\end{theorem}
+\begin{proof}
+ We induct on $m = \dim \Vp(I)$; the base case,
+ where $\Vp(I)$ is finite, was addressed in the previous section.
+
+ For the inductive step, consider $\Vp(I)$ with dimension $m$.
+ Consider a hyperplane $H$ such that no irreducible
+ component of $\Vp(I)$ is contained inside $H$
+ (we quote this fact without proof, as it is geometrically obvious,
+ but the last time I tried to write the proof I messed up).
+ For simplicity, assume WLOG that $H = \Vp(x_0)$.
+
+ Let $S = \CC[x_0, \dots, x_n]$ again.
+ Now, consider the short exact sequence
+ \begin{center}
+ \begin{tikzcd}
+ 0 \ar[r]
+ & \left[ S/I \right]^{d-1} \ar[r, hook]
+ & \left[ S / I \right]^d \ar[r, two heads]
+ & \left[ S / (I+(x_0)) \right]^d \ar[r] & 0 \\
+ & f \ar[r, mapsto] & f \cdot x_0 \\
+ && f \ar[r, mapsto] & f.
+ \end{tikzcd}
+ \end{center}
+ (The injectivity of the first map follows from the assumption
+ about irreducible components of $\Vp(I)$.)
+ Now exactness implies that
+ \[ h_I(d) - h_I(d-1) = h_{I + (x_0)}(d). \]
+ The last term geometrically corresponds to $\Vp(I) \cap H$;
+ it has dimension $m-1$, so by the inductive hypothesis
+ we know that
+ \[ h_I(d) - h_I(d-1)
+ = \frac{c_0 d^{m-1} + c_1 d^{m-2} + \dots + c_{m-1}}{(m-1)!}
+ \qquad d \gg 0 \]
+ for some integers $c_0$, \dots, $c_{m-1}$.
+ Then we are done by the theory of
+ \textbf{finite differences} of polynomials.
+\end{proof}
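The finite-differences step can be seen concretely with $I = (0)$ in $\CP^2$:
slicing by the hyperplane $H = \Vp(x_0)$ leaves a copy of $\CP^1$,
and indeed the first differences of $h_{\CP^2}(d) = \binom{d+2}{2}$
are exactly $h_{\CP^1}(d) = d+1$, a polynomial of one lower degree.
A quick check (illustrative only):

```python
from math import comb

h_plane = [comb(d + 2, 2) for d in range(8)]  # Hilbert function of CP^2
diffs = [h_plane[d] - h_plane[d - 1] for d in range(1, 8)]
# diffs equals [2, 3, ..., 8], i.e. h_{CP^1}(d) = d + 1 in degrees 1..7
```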
+
+\section{B\'ezout's theorem}
+
+\begin{definition}
+ We call $\chi_I$ the \vocab{Hilbert polynomial} of $I$.
+ If $\chi_I$ is nonzero and $m = \deg \chi_I$,
+ we call the leading coefficient of
+ $m! \chi_I$ the \vocab{degree} of $I$, which is an integer,
+ denoted $\deg I$.
+
+ Of course, for a projective variety $V$ we let
+ $\chi_V = \chi_{\II(V)}$ and $\deg V = \deg \II(V)$.
+\end{definition}
+
+\begin{example}[Examples of degrees]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii If $V$ is a finite set of $n \ge 1$ points, it has degree $n$.
+ \ii If $I$ corresponds to a double point, it has degree $2$.
+ \ii $\CP^n$ has degree $1$.
+ \ii The parabola has degree $2$.
+ \end{enumerate}
+\end{example}
+
+Now, you might guess that if $f$ is a homogeneous quadratic polynomial
+then the degree of the principal ideal $(f)$ is $2$, and so on.
+(Thus for example we expect a circle to have degree $2$.)
+This is true:
+
+\begin{theorem}
+ [B\'ezout's theorem]
+ Let $I$ be a homogeneous ideal of $\CC[x_0, \dots, x_n]$,
+ such that $\dim \Vp(I) \ge 1$.
+ Let $f \in \CC[x_0, \dots, x_n]$ be a homogeneous polynomial of degree $k$
+ which does not vanish on any irreducible component of $\Vp(I)$.
+ Then
+ \[ \deg\left( I + (f) \right) = k \deg I. \]
+\end{theorem}
+\begin{proof}
+ Let $S = \CC[x_0, \dots, x_n]$ again.
+ This time the exact sequence is
+ \begin{center}
+ \begin{tikzcd}
+ 0 \ar[r]
+ & \left[ S/I \right]^{d-k} \ar[r, hook]
+ & \left[ S / I \right]^d \ar[r, two heads]
+ & \left[ S / (I+(f)) \right]^d \ar[r] & 0.
+ \end{tikzcd}
+ \end{center}
+ We leave this olympiad-esque exercise as \Cref{prob:bezout}.
+\end{proof}
+
+\section{Applications}
+First, we show that the notion of degree is what we expect.
+\begin{corollary}
+ [Hypersurfaces: the degree deserves its name]
+ Let $V$ be a hypersurface, i.e.\ $\II(V) = (f)$
+ for $f$ a homogeneous polynomial of degree $k$.
+ Then $\deg V = k$.
+\end{corollary}
+\begin{proof}
+ Recall $\deg(0) = \deg \CP^n = 1$.
+ Take $I = (0)$ in B\'ezout's theorem.
+\end{proof}
+
+The common special case in $\CP^2$ is:
+\begin{corollary}[B\'ezout's theorem for curves]
+ For any two curves $X$ and $Y$ in $\CP^2$ without
+ a common irreducible component,
+ \[ \left\lvert X \cap Y \right\rvert
+ \le \deg X \cdot \deg Y. \]
+\end{corollary}
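As a sanity check, take $X$ the circle $x^2+y^2 = z^2$ and $Y$ the
parabola $yz = x^2$, both of degree $2$.
In the affine chart $z = 1$ one can solve the system directly over $\CC$:
substituting $y = x^2$ gives $y^2 + y - 1 = 0$, and the four resulting
points (two real, two with complex coordinates) attain the bound
$2 \cdot 2 = 4$.
A numerical version of this computation (illustrative only):

```python
import cmath

# Intersect x^2 + y^2 = 1 with y = x^2 over CC (affine chart z = 1):
# substituting y = x^2 gives y^2 + y - 1 = 0, then x = +/- sqrt(y).
sols = []
for s in (1, -1):
    y = (-1 + s * cmath.sqrt(5)) / 2
    for t in (1, -1):
        sols.append((t * cmath.sqrt(y), y))
# Four intersection points in CC^2, matching deg X * deg Y = 2 * 2.
```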
+
+Now, we use this to prove Pascal's theorem.
+\begin{theorem}
+ [Pascal's theorem]
+ Let $A$, $B$, $C$, $D$, $E$, $F$ be six
+ distinct points which lie on a conic $\mathscr C$ in $\CP^2$.
+ Then the points $AB \cap DE$, $BC \cap EF$, $CD \cap FA$ are collinear.
+\end{theorem}
+\begin{proof}
+ Let $X$ be the variety equal to the union of the
+ three lines $AB$, $CD$, $EF$, hence $X = \Vp(f)$
+ for some cubic polynomial $f$ (which is the product of three linear ones).
+ Similarly, let $Y = \Vp(g)$ be the variety
+ equal to the union of the three lines $BC$, $DE$, $FA$.
+
+ \begin{center}
+ \begin{asy}
+ filldraw(unitcircle, opacity(0.2)+lightcyan, blue);
+ pair A = dir(110);
+ pair B = dir(210);
+ pair C = dir(40);
+ pair D = dir(270);
+ pair E = dir(130);
+ pair F = dir(-30);
+
+ pair P = extension(A, B, D, E);
+ pair X = extension(B, C, E, F);
+ pair Q = extension(C, D, F, A);
+
+ draw(A--B--C--D--E--F--cycle);
+ draw(P--Q, red+dashed);
+
+ dot("$A$", A, dir(A));
+ dot("$B$", B, dir(B));
+ dot("$C$", C, dir(C));
+ dot("$D$", D, dir(D));
+ dot("$E$", E, dir(E));
+ dot("$F$", F, dir(F));
+ dot(P);
+ dot(X);
+ dot(Q);
+
+ /* Source generated by TSQ */
+ \end{asy}
+ \end{center}
+
+ Now let $P$ be an arbitrary point on the conic $\mathscr C$,
+ distinct from the six points $A$, $B$, $C$, $D$, $E$, $F$.
+ Consider the projective variety
+ \[ V = \Vp(\alpha f + \beta g) \]
+ where the constants $\alpha$ and $\beta$ are chosen such that $P \in V$.
+ \begin{ques}
+ Show that $V$ also contains the six points $A$, $B$, $C$, $D$, $E$, $F$
+ as well as the three points $AB \cap DE$, $BC \cap EF$, $CD \cap FA$
+ regardless of which $\alpha$ and $\beta$ are chosen.
+ \end{ques}
+
+ Now, note that $|V \cap \mathscr C| \ge 7$.
+ But $\deg V = 3$ and $\deg \mathscr C = 2$.
+ This contradicts B\'ezout's theorem unless $V$ and $\mathscr C$
+ share an irreducible component.
+ Since the conic $\mathscr C$ is irreducible,
+ that component must be $\mathscr C$ itself,
+ and for degree reasons the rest of $V$ is a line; i.e.\ we must have that
+ \[ V = \mathscr C \cup \text{line}. \]
+ Finally note that the three intersection points $AB \cap DE$,
+ $BC \cap EF$ and $CD \cap FA$ do not lie on $\mathscr C$,
+ so they must lie on this line.
+\end{proof}
+
+
+\section{\problemhead}
+\begin{problem}
+ \label{prob:bezout}
+ Complete the proof of B\'ezout's theorem from before.
+ \begin{sol}
+ From the exactness,
+ $h_I(d) = h_I(d-k) + h_{I+(f)}(d)$,
+ and it follows that
+ \[ \chi_{I+(f)}(d) = \chi_I(d) - \chi_I(d-k). \]
+ Let $m = \dim \Vp(I) \ge 1$.
+ Now $\dim \Vp(I+(f)) = m-1$,
+ so comparing leading coefficients of both sides we have
+ \[
+ \frac{\deg (I+(f)) d^{m-1} + \dots}{(m-1)!}
+ =
+ \frac{1}{m!}
+ \left( \deg I (d^m - (d-k)^m)
+ + \text{lower order terms} \right)
+ \]
+ from which we read off
+ \[ \deg (I+(f)) = \frac{(m-1)!}{m!} \cdot k \binom m1 \deg I
+ = k \deg I \]
+ as needed.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [USA TST 2016/6]
+ \yod
+ Let $ABC$ be an acute scalene triangle
+ and let $P$ be a point in its interior.
+ Let $A_1$, $B_1$, $C_1$ be projections of $P$ onto
+ triangle sides $BC$, $CA$, $AB$, respectively.
+ Find the locus of points $P$ such that $AA_1$, $BB_1$, $CC_1$
+ are concurrent and $\angle PAB + \angle PBC + \angle PCA = 90^{\circ}$.
+ \begin{hint}
+ You will need to know about complex numbers
+ in Euclidean geometry to solve this problem.
+ \end{hint}
+ \begin{sol}
+ In complex numbers with $ABC$ the unit circle,
+ it is equivalent to solving the two cubic equations
+ \begin{align*}
+ (p-a)(p-b)(p-c) &= (abc)^2 (q -1/a)(q - 1/b)(q - 1/c) \\
+ 0 &= \prod_{\text{cyc}} (p+c-b-bcq) + \prod_{\text{cyc}} (p+b-c-bcq)
+ \end{align*}
+ in $p$ and $q = \overline p$.
+ Viewing this as two cubic curves in $(p,q) \in \mathbb C^2$,
+ by B\'{e}zout's theorem it follows there are at most nine solutions
+ (unless the two cubics share a common component,
+ but one can check the first one is irreducible).
+ Moreover it is easy to name nine solutions (for $ABC$ scalene):
+ the three vertices, the three excenters, and $I$, $O$, $H$.
+ Hence the answer is just those three triangle centers $I$, $O$ and $H$.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/caratheodory.tex b/books/napkin/caratheodory.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9c8db69aed924abbed76fbd21914e6d848d17cb2
--- /dev/null
+++ b/books/napkin/caratheodory.tex
@@ -0,0 +1,641 @@
+\chapter{Constructing the Borel and Lebesgue measure}
+It's very difficult to define in one breath a measure
+on the Borel space $\SB(\RR^n)$.
+It is easier if we define a weaker notion first.
+There are two such weaker notions that we will define:
+\begin{itemize}
+ \ii A \textbf{pre-measure}:
+ satisfies the axioms of a measure,
+ but defined on \emph{fewer} sets than a measure:
+ they'll be defined on an ``algebra''
+ rather than the full-fledged ``$\sigma$-algebra''.
+
+ \ii An \textbf{outer measure}:
+ defined on $2^\Omega$ but satisfies weaker axioms.
+\end{itemize}
+It will turn out that pre-measures yield outer measures,
+and outer measures yield measures.
+
+
+\section{Pre-measures}
+\prototype{Let $\Omega = \RR^2$. Then we take $\SA_0$ generated by rectangles,
+ with $\mu_0$ the usual area.}
+The way to define a pre-measure is to weaken
+the $\sigma$-algebra to an algebra.
+\begin{definition}
+ Let $\Omega$ be a set.
+ We define notions of an \vocab{algebra},
+ which is the same as $\sigma$-algebra except
+ with ``countable'' replaced by ``finite'' everywhere.
+
+ That is: an algebra $\SA_0$ on $\Omega$ is a
+ nonempty subset of $2^\Omega$,
+ which is closed under complement and \emph{finite} union.
+ The smallest algebra containing a subset $\SF \subseteq 2^\Omega$
+ is the \vocab{algebra generated by $\SF$}.
+\end{definition}
+In practice, we will basically always use generation for algebras.
+\begin{example}
+ When $\Omega = \RR^n$,
+ we can let $\mathcal{L}_0$ be the algebra generated by
+ $[a_1, b_1] \times \dots \times [a_n, b_n]$.
+ A typical element might look like:
+ \begin{center}
+ \begin{asy}
+ size(5cm);
+ filldraw( (0,0)--(9,0)--(9,2)--(6,2)--(6,5)--(0,5)--cycle,
+ opacity(0.1)+lightcyan, heavycyan );
+ filldraw( (7,3)--(12,3)--(12,6)--(7,6)--cycle,
+ opacity(0.1)+lightcyan, heavycyan );
+ \end{asy}
+ \end{center}
+ Unsurprisingly, since we have \emph{finitely} many
+ rectangles and their complements involved,
+ in this case we actually \emph{can}
+ unambiguously assign an area, and will do so soon.
+\end{example}
+
+\begin{definition}
+ A \vocab{pre-measure} $\mu_0$ on an algebra $\SA_0$
+ is a function $\mu_0 \colon \SA_0 \to [0, +\infty]$
+ which satisfies the axioms
+ \begin{itemize}
+ \ii $\mu_0(\varnothing) = 0$, and
+ \ii \textbf{Countable additivity}:
+ if $A_1$, $A_2$, \dots are disjoint sets in $\SA_0$
+ and \emph{moreover} the disjoint union $\bigsqcup A_i$
+ is contained in $\SA_0$ (not guaranteed by algebra axioms!),
+ then
+ \[ \mu_0\left( \bigsqcup_n A_n \right) = \sum_n \mu_0(A_n). \]
+ \end{itemize}
+\end{definition}
+
+\begin{example}
+ [The pre-measure on $\RR^2$]
+ Let $\Omega = \RR^2$.
+ Then, let $\mathcal{L}_0$ be the algebra generated by rectangles
+ $[a_1, a_2] \times [b_1, b_2]$.
+ We then let
+ \[ \mu_0\left( [a_1, a_2] \times [b_1, b_2] \right)
+ = (a_2-a_1)(b_2-b_1) \]
+ the area of the rectangle.
+ As elements of $\mathcal{L}_0$ are simply \emph{finite} unions
+ of rectangles and their complements (picture drawn earlier),
+ it's not difficult to extend this to a pre-measure $\lambda_0$
+ which behaves as you expect --- although we won't do this.
+\end{example}
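For instance, the L-shaped region drawn earlier is the union of the
rectangles $[0,9] \times [0,2]$ and $[0,6] \times [0,5]$ (coordinates
read off the picture), and inclusion-exclusion pins down its area
unambiguously. A sketch of that bookkeeping (illustrative only):

```python
def area(r):
    (x1, y1), (x2, y2) = r
    return (x2 - x1) * (y2 - y1)

def overlap(r, s):
    # area of the intersection of two axis-aligned rectangles
    (ax1, ay1), (ax2, ay2) = r
    (bx1, by1), (bx2, by2) = s
    w = min(ax2, bx2) - max(ax1, bx1)
    h = min(ay2, by2) - max(ay1, by1)
    return max(w, 0) * max(h, 0)

r1 = ((0, 0), (9, 2))  # bottom bar of the L-shape
r2 = ((0, 0), (6, 5))  # left bar of the L-shape
union_area = area(r1) + area(r2) - overlap(r1, r2)  # 18 + 30 - 12 == 36
```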
+
+Since we are sweeping something under the rug that
+turns out to be conceptually important,
+I'll go ahead and blue-box it.
+\begin{proposition}
+ [Geometry sanity check that we won't prove]
+ \label{prop:lebesgue_rectangle}
+ For $\Omega = \RR^n$ and $\mathcal{L}_0$
+ the algebra generated by rectangular prisms,
+ one can define a pre-measure $\lambda_0$ on $\mathcal{L}_0$.
+\end{proposition}
+From this point forwards, we will basically do
+almost no geometry\footnote{White lie.
+ Technically, we will use one more fact:
+ that open sets of $\RR^n$ can be covered by countably
+ infinitely many rectangles,
+ as in \Cref{exer:cubes_vs_open}.
+ This step doesn't involve any area assignments, though.}
+whatsoever in defining the measure $\SB(\RR^n)$,
+and only use set theory to extend our measure.
+So, \Cref{prop:lebesgue_rectangle} is the only sentry
+which checks to make sure that our ``initial definition'' is sane.
+
+To put the point another way,
+suppose an \textbf{insane scientist}\footnote{Because
+ ``mad scientists'' are overrated.}
+tried to define a notion
+of area in which every rectangle had area $1$.
+Intuitively, this shouldn't be possible:
+every rectangle can be dissected into two halves
+and we ought to have $1+1 \ne 1$.
+However, the only thing that would stop them is that they couldn't
+extend their pre-measure on the algebra $\mathcal{L}_0$.
+If they somehow got past that barrier and got a pre-measure,
+nothing in the rest of the section would prevent them
+from getting an entire \emph{bona fide} measure with this property.
+Thus, in our construction of the Lebesgue measure,
+most of the geometric work is captured in the (omitted) proof
+of \Cref{prop:lebesgue_rectangle}.
+
+\section{Outer measures}
+\prototype{Keep taking $\Omega = \RR^2$; see the picture to follow.}
+The other way to weaken a measure is to relax the countable additivity,
+and this yields the following:
+\begin{definition}
+ An \vocab{outer measure} $\mu^\ast$ on a set $\Omega$
+ is a function $\mu^\ast \colon 2^\Omega \to [0, +\infty]$
+ satisfying the following axioms:
+ \begin{itemize}
+ \ii $\mu^\ast(\varnothing) = 0$;
+ \ii if $E \subseteq F$ and $E,F \in 2^{\Omega}$
+ then $\mu^\ast(E) \le \mu^\ast(F)$;
+ \ii for any subsets $E_1$, $E_2$, \dots of $\Omega$ we have
+ \[ \mu^\ast \left( \bigcup_n E_n \right)
+ \le \sum_n \mu^\ast(E_n). \]
+ \end{itemize}
+ (I don't really like the word ``outer measure'',
+ since I think it is a bit of a misnomer:
+ I would rather call it ``fake measure'',
+ since it's not a measure either.)
+\end{definition}
+
+The reason for the name ``outer measure''
+is that you almost always obtain outer measures
+by approximating them from ``outside'' sets.
+Officially, the result is often stated as follows
+(as \Cref{pr:construct_outer_measure}).
+\begin{quote}
+ For a set $\Omega$, let $\mathcal{E}$ be \emph{any} subset of $2^{\Omega}$
+ and let $\rho \colon \mathcal{E} \to [0,+\infty]$ be \emph{any} function.
+ Then
+ \[ \mu^\ast(E) = \inf \left\{ \sum_{n=1}^\infty \rho(E_n) \mid
+ E_n \in \mathcal{E}, \;
+ E \subseteq \bigcup_{n=1}^\infty E_n \right\} \]
+ is an outer measure.
+\end{quote}
+
+However, I think the above theorem is basically always
+wrong to use in practice, because it is \emph{way too general}.
+As I warned with the insane scientist,
+we really do want some sort of sanity conditions on $\rho$:
+otherwise, if we apply the above result as stated,
+there is no guarantee that $\mu^\ast$ will
+be compatible with $\rho$ in any way.
+
+%So, the recipe so far goes like:
+%define an algebra $\SA_0$ generated by some sets with
+%``known'' measure (like rectangles) and checking it extends well,
+%then get an outer measure from it.
+%In the next section, we will then see every outer measure gives a true measure.
+
+So, I think it is really better to apply the theorem to pre-measures $\mu_0$
+for which one \emph{does} have some sort of guarantee
+that the resulting $\mu^\ast$ is compatible with $\mu_0$.
+In practice, this is always how we will want to construct our outer measures.
+\begin{theorem}
+ [Constructing outer measures from pre-measures]
+ \label{thm:construct_outer}
+ Let $\mu_0$ be a pre-measure on an algebra $\SA_0$ on a set $\Omega$.
+ \begin{enumerate}[(a)]
+ \ii The map $\mu^\ast \colon 2^\Omega \to [0,+\infty]$ defined by
+ \[ \mu^\ast(E) = \inf \left\{ \sum_{n=1}^\infty \mu_0(A_n) \mid
+ A_n \in \SA_0, \; E \subseteq \bigcup_{n=1}^\infty A_n \right\} \]
+ is an outer measure.
+ \ii Moreover, this outer measure agrees with $\mu_0$ on sets in $\SA_0$.
+ \end{enumerate}
+\end{theorem}
+Intuitively, what is going on is that
+$\mu^\ast(A)$ is the infimum of coverings of $A$ by
+countable unions of elements in $\SA_0$.
+Part (b) is the first half of the compatibility condition I promised;
+the other half appears later as \Cref{prop:cm_compatible}.
+
+\begin{proof}
+ [Proof of \Cref{thm:construct_outer}]
+ As alluded to already, part (a)
+ is a special case of \Cref{pr:construct_outer_measure}
+ (and proving it in this generality is actually easier,
+ because you won't be distracted by unnecessary properties).
+
+ We now check (b), that $\mu^\ast(A) = \mu_0(A)$ for $A \in \SA_0$.
+ One bound is quick:
+ \begin{ques}
+ Show that $\mu^\ast(A) \le \mu_0(A)$.
+ \end{ques}
+ For the reverse, suppose that $A \subseteq \bigcup_n A_n$.
+ Then, define the sets
+ \begin{align*}
+ B_1 &= A \cap A_1 \\
+ B_2 &= (A \cap A_2) \setminus B_1 \\
+ B_3 &= (A \cap A_3) \setminus (B_1 \cup B_2) \\
+ &\vdotswithin=
+ \end{align*}
+ and so on.
+ Then the $B_n$ are disjoint elements of $\SA_0$ with $B_n \subset A_n$,
+ and we have rigged the definition so that $\bigsqcup_n B_n = A$.
+ Thus by definition of pre-measure,
+ \[ \mu_0(A) = \sum_n \mu_0(B_n) \le \sum_n \mu_0(A_n) \]
+ as desired.
+\end{proof}
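The disjointification trick used here (each $B_n$ is what $A \cap A_n$
adds beyond the earlier $B_i$'s) is worth internalizing, and is easy to
play with on finite sets. A toy illustration, with nothing
measure-theoretic happening:

```python
A = set(range(10))
covers = [set(range(4)), set(range(2, 7)), set(range(5, 12))]  # A lies in the union

B, used = [], set()
for An in covers:
    Bn = (A & An) - used  # B_n = (A cap A_n) minus everything already used
    B.append(Bn)
    used |= Bn
# The B_n are pairwise disjoint, each B_n lies inside A_n,
# and together they partition A.
```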
+
+\begin{example}
+ Let $\Omega = \RR^2$ and $\lambda_0$ the pre-measure from before.
+ Then $\lambda^\ast(A)$ is, intuitively,
+ the infimum of coverings of the set $A$ by rectangles.
+ Here is a picture you might use to imagine the
+ situation with $A$ being the unit disk.
+ \missingfigure{circles covered by rectangles}
+\end{example}
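To make the intuition quantitative: covering the unit disk with the
squares of a fine grid is one particular rectangle cover, and its total
area exceeds $\pi$ by an amount that shrinks with the grid;
the infimum over \emph{all} covers is exactly $\pi$.
A crude numerical illustration (grid side $0.01$ chosen arbitrarily):

```python
h = 0.01  # grid side length
n = int(1 / h) + 1
cover_area = 0.0
for i in range(-n, n):
    for j in range(-n, n):
        # closest point of the square [ih, (i+1)h] x [jh, (j+1)h] to the origin
        x = min(max(0.0, i * h), (i + 1) * h)
        y = min(max(0.0, j * h), (j + 1) * h)
        if x * x + y * y <= 1.0:  # the square meets the closed unit disk
            cover_area += h * h
# cover_area is the total area of one rectangle cover of the disk:
# at least pi, and within about 0.1 of pi for this grid size.
```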
+
+
+
+\section{Carath\'{e}odory extension for outer measures}
+We will now take any outer measure and turn it into a proper measure.
+To do this, we first need to specify the $\sigma$-algebra
+on which we will define the measure.
+
+\begin{definition}
+ Let $\mu^\ast$ be an outer measure.
+ We say a set $A$ is \vocab{Carath\'{e}odory measurable with respect
+ to $\mu^\ast$}, or just \vocab{$\mu^\ast$-measurable},
+ if the following condition holds:
+ for any set $E \in 2^{\Omega}$,
+ \[ \mu^\ast(E) = \mu^\ast(E \cap A) + \mu^\ast(E \setminus A). \]
+\end{definition}
+This definition is hard to motivate, but turns out to be the right one.
+One way to motivate is this:
+it turns out that in $\RR^n$,
+it will be equivalent to a reasonable geometric condition
+(which I will state in \Cref{prop:lebesgue_geo}),
+but since that geometric definition requires information about $\RR^n$ itself,
+this is the ``right'' generalization for general measure spaces.
+
+Since our goal was to extend our $\SA_0$,
+we had better make sure this definition
+lets us measure the initial sets that we started with!
+\begin{proposition}
+ [Carath\'{e}odory measurability is compatible with the initial $\SA_0$]
+ \label{prop:cm_compatible}
+ Suppose $\mu^\ast$ was obtained from a pre-measure $\mu_0$ on
+ an algebra $\SA_0$, as in \Cref{thm:construct_outer}.
+ Then every set in $\SA_0$ is $\mu^\ast$-measurable.
+\end{proposition}
+This is the second half of the compatibility condition
+that we get if we make sure our initial $\mu_0$
+at least satisfies the pre-measure axioms.
+(The first half was (b) of \Cref{thm:construct_outer}.)
+\begin{proof}
+ Let $A \in \SA_0$ and $E \in 2^{\Omega}$; we wish to prove
+ $\mu^\ast(E) = \mu^\ast(E \cap A) + \mu^\ast(E \setminus A)$.
+ The definition of outer measure already requires
+ $\mu^\ast(E) \le \mu^\ast(E \cap A) + \mu^\ast(E \setminus A)$
+ and so it's enough to prove the reverse inequality.
+
+ By definition of infimum, for any $\eps > 0$,
+ there is a covering $E \subset \bigcup_n A_n$ by sets $A_n \in \SA_0$
+ with $\mu^\ast(E) + \eps \ge \sum_n \mu_0(A_n)$.
+ But \[ \sum_n \mu_0(A_n)
+ = \sum_n \left( \mu_0(A_n \cap A) + \mu_0(A_n \setminus A) \right)
+ \ge \mu^\ast(E \cap A) + \mu^\ast(E \setminus A) \]
+ with the equality holding by additivity of the pre-measure
+ on $\SA_0$, and the inequality by definition of $\mu^\ast$
+ (since the sets $A_n \cap A$ together cover $E \cap A$, for example).
+ Thus $\mu^\ast(E) + \eps \ge \mu^\ast(E \cap A) + \mu^\ast(E \setminus A)$.
+ Since the inequality holds for any $\eps > 0$, we're done.
+\end{proof}
+
+To add extra icing onto the cake,
+here is one more niceness condition
+which our constructed measure will happen to satisfy.
+\begin{definition}
+ A \vocab{null set} of a measure space $(\Omega, \SA, \mu)$
+ is a set $A \in \SA$ with $\mu(A) = 0$.
+ A measure space $(\Omega, \SA, \mu)$ is \vocab{complete}
+ if whenever $A$ is a null set,
+ then all subsets of $A$ are in $\SA$ as well (and hence null sets).
+\end{definition}
+This is a nice property to have, for obvious reasons.
+Visually, if I have a bunch of dust which I \emph{already} assigned weight zero,
+and I blow away some of the dust,
+then the remainder should still have an assigned weight --- zero.
+The extension theorem will give us $\sigma$-algebras with this property.
+
+\begin{theorem}
+ [Carath\'{e}odory extension theorem for outer measures]
+ \label{thm:cara_outer}
+ If $\mu^\ast$ is an outer measure,
+ and $\SA\cme$ is the set of $\mu^\ast$-measurable sets
+ with respect to $\mu^\ast$,
+ then $\SA\cme$ is a $\sigma$-algebra on $\Omega$,
+ and the restriction $\mu\cme$ of $\mu^\ast$ to $\SA\cme$
+ gives a \emph{complete} measure space.
+\end{theorem}
+(Phonetic remark: you can think of the superscript ${}\cme$ as standing
+for either ``Carath\'{e}odory measurable'' or ``complete''.
+Both are helpful for remembering what this represents.
+This notation is not standard but the pun was too good to resist.)
+
+Thus, if we compose \Cref{thm:construct_outer} with \Cref{thm:cara_outer},
+we find that every pre-measure $\mu_0$ on an algebra $\SA_0$ naturally
+gives a $\sigma$-algebra $\SA\cme$ with a complete measure $\mu\cme$,
+and our two compatibility results
+(namely (b) of \Cref{thm:construct_outer},
+together with \Cref{prop:cm_compatible})
+means that $\SA\cme \supset \SA_0$
+ and $\mu\cme$ agrees with $\mu_0$ on $\SA_0$.
+
+Here is a table showing the process,
+where going down a row of the table corresponds to restricting the measure.
+\begin{center}
+ \begin{tabular}[h]{llcl}
+ & & Construct order & Notes \\ \hline
+ $2^\Omega$ & $\mu^\ast$ & Step 2 &
+ $\mu^\ast$ is outer measure obtained from $\mu_0$ \\[1em]
+ $\SA\cme$ & $\mu\cme$ & Step 3 & $\SA\cme$ defined as $\mu^\ast$-measurable sets, \\
+ &&& $(\SA\cme, \mu\cme)$ is complete. \\[1em]
+ $\SA_0$ & $\mu_0$ & Step 1 & $\mu_0$ is a pre-measure
+ \end{tabular}
+\end{center}
+
+
+\section{Defining the Lebesgue measure}
+This lets us finally define the Lebesgue measure on $\RR^n$.
+We wrap everything together at once now.
+\begin{definition}
+ We create a measure on $\RR^n$ by the following procedure.
+ \begin{itemize}
+ \ii Start with the algebra $\mathcal{L}_0$
+ generated by rectangular prisms,
+ and define a \emph{pre-measure} $\lambda_0$ on this $\mathcal{L}_0$
+ (this was glossed over in the example).
+ \ii By \Cref{thm:construct_outer},
+ this gives the \vocab{Lebesgue outer measure}
+ $\lambda^\ast$ on $2^{\RR^n}$,
+ which is compatible on all the rectangular prisms.
+ \ii By Carath\'{e}odory (\Cref{thm:cara_outer}),
+ this restricts to a complete measure $\lambda$
+ on the $\sigma$-algebra $\mathcal{L}(\RR^n)$
+ of $\lambda^\ast$-measurable sets
+ (which as promised contains all rectangular prisms).\footnote{If
+ I wanted to be consistent with the previous theorems,
+ I might prefer to write $\mathcal{L}\cme$
+ and $\lambda\cme$ for emphasis.
+ It seems no one does this, though, so I won't.}
+ \end{itemize}
+ The resulting complete measure, denoted $\lambda$,
+ is called the \vocab{Lebesgue measure}.
+
+ The algebra $\mathcal{L}(\RR^n)$ we obtained will be called the
+ \vocab{Lebesgue $\sigma$-algebra};
+ sets in it are said to be \vocab{Lebesgue measurable}.
+\end{definition}
+
+Here is the same table from before,
+with the values filled in for the special case $\Omega = \RR^n$,
+which gives us the Lebesgue algebra.
+\begin{center}
+ \begin{tabular}[h]{llcl}
+ & & Construct order & Notes \\ \hline
+ $2^{\RR^n}$ & $\lambda^\ast$ & Step 2 &
+ $\lambda^\ast$ is Lebesgue outer measure \\[1em]
+ $\mathcal L(\RR^n)$ & $\lambda$ & Step 3 & Lebesgue $\sigma$-algebra (complete) \\[1em]
+ $\mathcal L_0$ & $\lambda_0$ & Step 1 & Define pre-measure on rectangles
+ \end{tabular}
+\end{center}
+
+
+Of course, now that we've gotten all the way here,
+if we actually want to \emph{compute} any measures,
+we can mostly gleefully forget about how we actually constructed
+the measure and just use the properties.
+The hard part was showing that there \emph{is}
+a way to assign measures consistently;
+actually figuring out what that measure's value is
+\emph{given that it exists} is often much easier.
+Here is an example.
+
+\begin{example}
+ [The Cantor set has measure zero]
+ The standard \vocab{middle-thirds Cantor set} is the subset
+ of $[0,1]$ obtained as follows:
+ we first delete the open interval $(1/3, 2/3)$.
+ This leaves two intervals $[0,1/3]$ and $[2/3,1]$
+ from which we delete the middle thirds again from both,
+ i.e.\ deleting $(1/9,2/9)$ and $(7/9,8/9)$.
+ We repeat this procedure indefinitely and let $C$ denote the result.
+ An illustration is shown below.
+ \begin{center}
+ \includegraphics[width=0.8\textwidth]{media/cantor-thirds.png} \\
+ \footnotesize Image from \cite{img:cantor}
+ \end{center}
+ It is a classic fact that $C$ is uncountable
+ (it consists of ternary expansions omitting the digit $1$).
+ But it is measurable (it is an intersection of closed sets!)
+ and we contend it has measure zero.
+ Indeed, after the $n$th step of the construction,
+ the set remaining has measure $(2/3)^n$.
+ So $\lambda(C) \le (2/3)^n$ for every $n$, forcing $\lambda(C) = 0$.
+\end{example}
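+
+As a cross-check on the example above,
+one can also sum the lengths removed directly:
+at the $n$th step we delete $2^{n-1}$ open intervals of length $3^{-n}$,
+so the total length removed is
+\[ \sum_{n=1}^{\infty} \frac{2^{n-1}}{3^n}
+ = \frac13 \sum_{n=0}^{\infty} \left( \frac23 \right)^n
+ = \frac13 \cdot \frac{1}{1-2/3} = 1 \]
+and hence the Cantor set has measure $1 - 1 = 0$, as claimed.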
+
+This is fantastic, but there is one elephant in the room:
+how are the Lebesgue $\sigma$-algebra and the Borel $\sigma$-algebra related?
+To answer this question briefly, I will state two results
+(but another answer is given in the next section).
+The first is a geometric interpretation of the strange
+Carath\'{e}odory measurable hypothesis.
+\begin{proposition}
+ [A geometric interpretation of Lebesgue measurability]
+ \label{prop:lebesgue_geo}
+ A set $A \subseteq \RR^n$ is Lebesgue measurable
+ if and only if for every $\eps > 0$,
+ there is an open set $U \supset A$ such that
+ \[ \lambda^\ast(U \setminus A) < \eps \]
+ where $\lambda^\ast$ is the Lebesgue outer measure.
+\end{proposition}
+I want to say that this was Lebesgue's original formulation
+of ``measurable'', but I'm not sure about that.
+In any case, we won't need to use this,
+but it's good to see that our definition of Lebesgue measurable
+has a down-to-earth geometric interpretation.
+
+\begin{ques}
+ Deduce that every open set is Lebesgue measurable.
+ Conclude that the Lebesgue $\sigma$-algebra
+ contains the Borel $\sigma$-algebra.
+ (A different proof is given later on.)
+\end{ques}
+
+However, the containment is proper:
+there are more Lebesgue measurable sets than Borel ones.
+Indeed, it can actually be proven using transfinite induction
+(though we won't) that
+$\left\lvert \SB(\RR) \right\rvert = \left\lvert \RR \right\rvert$.
+Using this, one obtains:
+\begin{exercise}
+ Show the Borel $\sigma$-algebra is not complete.
+ (Hint: consider the Cantor set.
+ You won't be able to write down an example of a non-measurable
+ set, but you can use cardinality arguments.)
+ Thus the Lebesgue $\sigma$-algebra strictly contains the Borel one.
+ % It should contain every subset of the Cantor set,
+ % since Lebesgue is complete.
+\end{exercise}
+
+Nonetheless, there is a great way to describe the Lebesgue $\sigma$-algebra,
+using the idea of completeness.
+\begin{definition}
+ Let $(\Omega, \SA, \mu)$ be a measure space.
+ The \vocab{completion} $(\Omega, \ol{\SA}, \ol{\mu})$
+ is defined as follows:
+ we let
+ \[ \ol{\SA} = \left\{ A \cup N \mid A \in \SA,
+ N \text{ a subset of a null set} \right\} \]
+ and $\ol{\mu}(A \cup N) = \mu(A)$.
+ One can check this is well-defined,
+ and in fact $\ol{\mu}$ is the unique extension
+ of $\mu$ from $\SA$ to $\ol{\SA}$.
+
+ This looks more complicated than it is.
+ Intuitively, all we are doing is ``completing'' the measure
+ by telling $\ol{\mu}$ to regard any subset of a null set
+ as having measure zero, too.
+\end{definition}
+
+Then, the saving grace:
+\begin{theorem}
+ [Lebesgue is completion of Borel]
+ For $\RR^n$, the Lebesgue measure is the completion of the Borel measure.
+\end{theorem}
+\begin{proof}
+ This actually follows from results in the next section,
+ namely \Cref{exer:cubes_vs_open}
+ and part (c) of Carath\'{e}odory for pre-measures (\Cref{thm:cara_premeasure}).
+\end{proof}
+
+\section{A fourth row: Carath\'{e}odory for pre-measures}
+\prototype{The fourth row for the Lebesgue measure is $\SB(\RR^n)$.}
+In many cases, $\SA\cme$ is actually bigger than our original goal,
+and instead we only need to extend $\mu_0$ on $\SA_0$
+to $\mu$ on $\SA$, where $\SA$ is the $\sigma$-algebra generated by $\SA_0$.
+Indeed, our original goal was to get $\SB(\RR^n)$, and in fact:
+\begin{exercise}
+ Show that $\SB(\RR^n)$ is the $\sigma$-algebra generated
+ by the $\mathcal{L}_0$ we defined earlier.
+ \label{exer:cubes_vs_open}
+\end{exercise}
+
+Fortunately, this restriction is trivial to do.
+\begin{ques}
+ Show that $\SA\cme \supset \SA$,
+ so we can just restrict $\mu\cme$ to $\SA$.
+\end{ques}
+We will in a moment add this as the fourth row in our table.
+
+However, if this is the end goal,
+then a somewhat different Carath\'{e}odory theorem
+can be stated because often one more niceness condition holds:
+\begin{definition}
+ A pre-measure or measure $\mu$ on $\Omega$ is \vocab{$\sigma$-finite}
+ if $\Omega$ can be written as a countable union $\Omega = \bigcup_n A_n$
+ with $\mu(A_n) < \infty$ for each $n$.
+\end{definition}
+\begin{ques}
+ Show that the pre-measure $\lambda_0$ we had,
+ as well as the Borel measure on $\SB(\RR^n)$,
+ are both $\sigma$-finite.
+\end{ques}
+Actually, for us, $\sigma$-finite is basically always going to be true,
+so you can more or less just take it for granted.
+
+\begin{theorem}
+ [Carath\'{e}odory extension theorem for pre-measures]
+ \label{thm:cara_premeasure}
+ Let $\mu_0$ be a pre-measure on an algebra $\SA_0$ of $\Omega$,
+ and let $\SA$ denote the $\sigma$-algebra generated by $\SA_0$.
+ Let $\SA\cme$, $\mu\cme$ be as in \Cref{thm:cara_outer}.
+ Then:
+ \begin{enumerate}[(a)]
+ \ii The restriction of $\mu\cme$ to $\SA$
+ gives a measure $\mu$ extending $\mu_0$.
+ \ii If $\mu_0$ was $\sigma$-finite,
+ then $\mu$ is the unique extension of $\mu_0$ to $\SA$.
+ \ii If $\mu_0$ was $\sigma$-finite,
+ then $\mu\cme$ is the completion of $\mu$,
+ hence the unique extension of $\mu_0$ to $\SA\cme$.
+ \end{enumerate}
+\end{theorem}
+Here is the updated table, with comments if $\mu_0$ was indeed $\sigma$-finite.
+\begin{center}
+ \begin{tabular}[h]{llcl}
+ & & Construct order & Notes \\ \hline
+ $2^\Omega$ & $\mu^\ast$ & Step 2 &
+ $\mu^\ast$ is outer measure obtained from $\mu_0$ \\[1em]
+ $\SA\cme$ & $\mu\cme$ & Step 3 & $(\SA\cme, \mu\cme)$ is completion of $(\SA, \mu)$, \\
+ &&& $\SA\cme$ defined as $\mu^\ast$-measurable sets \\[1em]
+ $\SA$ & $\mu$ & Step 4 & $\SA$ defined as $\sigma$-alg.\ generated by $\SA_0$ \\[1em]
+ $\SA_0$ & $\mu_0$ & Step 1 & $\mu_0$ is a pre-measure
+ \end{tabular}
+\end{center}
+And here is the table for $\Omega = \RR^n$,
+with Borel and Lebesgue in it.
+\begin{center}
+ \begin{tabular}[h]{llcl}
+ & & Construct order & Notes \\ \hline
+ $2^{\RR^n}$ & $\lambda^\ast$ & Step 2 &
+ $\lambda^\ast$ is Lebesgue outer measure \\[1em]
+ $\mathcal L(\RR^n)$ & $\lambda$ & Step 3 &
+ Lebesgue $\sigma$-algebra, completion of Borel one \\[1em]
+ $\SB(\RR^n)$ & $\mu$ & Step 4 &
+ Borel $\sigma$-algebra, generated by $\mathcal{L}_0$ \\[1em]
+ $\mathcal L_0$ & $\lambda_0$ & Step 1 & Define pre-measure on rectangles
+ \end{tabular}
+\end{center}
+
+Going down one row of the table corresponds to restriction,
+while each of $\mu_0 \to \mu \to \mu\cme$ is a unique extension
+when $\mu_0$ is $\sigma$-finite.
+\begin{proof}
+ [Proof of \Cref{thm:cara_premeasure}]
+ For (a): this is just \Cref{thm:construct_outer} and \Cref{thm:cara_outer}
+ put together, combined with the observation that $\SA\cme \supset \SA_0$
+ and hence $\SA\cme \supset \SA$.
+ Parts (b) and (c) are more technical, and omitted.
+\end{proof}
+
+\section{From now on, we assume the Borel measure}
+\todo{explain why}
+
+\section{\problemhead}
+\begin{dproblem}
+ [Constructing outer measures from arbitrary $\rho$]
+ \label{pr:construct_outer_measure}
+ For a set $\Omega$,
+ let $\mathcal{E}$ be \emph{any} subset of $2^{\Omega}$
+ and let $\rho \colon \mathcal{E} \to [0,+\infty]$
+ be \emph{any} function.
+ Prove that
+ \[ \mu^\ast(E) = \inf \left\{ \sum_{n=1}^\infty \rho(E_n) \mid
+ E_n \in \mathcal{E}, \;
+ E \subseteq \bigcup_{n=1}^\infty E_n \right\} \]
+ is an outer measure.
+\end{dproblem}
+
+\begin{problem}
+ [The insane scientist]
+ Let $\Omega = \RR^2$, and let $\mathcal{E}$
+ be the set of (non-degenerate) rectangles.
+ Let $\rho(E) = 1$ for every rectangle $E \in \mathcal{E}$.
+ Ignoring my advice, the insane scientist
+ uses $\rho$ to construct an outer measure $\mu^\ast$,
+ as in \Cref{pr:construct_outer_measure}.
+ \begin{enumerate}[(a)]
+ \ii Find $\mu^\ast(S)$ for each subset $S$ of $\RR^2$.
+ \ii Which sets are $\mu^\ast$-measurable?
+ \end{enumerate}
+ You should find that no rectangle is $\mu^\ast$-measurable,
+ unsurprisingly foiling the scientist.
+ \begin{hint}
+ Show that
+ \[ \mu^\ast(S) = \begin{cases}
+ 0 & S = \varnothing \\
+ 1 & S \text{ bounded and nonempty} \\
+ \infty & S \text{ not bounded}.
+ \end{cases}
+ \]
+ This lets you solve (b) readily;
+ I think the answer is just unbounded sets,
+ $\varnothing$, and one-point sets.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ \gim
+ A function $f \colon \RR \to \RR$ is continuous.
+ Must $f$ be measurable with respect to the Lebesgue measure on $\RR$?
+\end{problem}
diff --git a/books/napkin/cardinal.tex b/books/napkin/cardinal.tex
new file mode 100644
index 0000000000000000000000000000000000000000..32fd7a079924ad66637ed776f45f72855a07437a
--- /dev/null
+++ b/books/napkin/cardinal.tex
@@ -0,0 +1,484 @@
+\chapter{Cardinals}
+An ordinal measures a total ordering.
+However, it does not do a fantastic job at measuring size.
+For example, there is a bijection between the elements of $\omega$ and $\omega+1$:
+\[
+ \begin{array}{rccccccc}
+ \omega+1 = & \{ & \omega & 0 & 1 & 2 & \dots & \} \\
+ \omega = & \{ & 0 & 1 & 2 & 3 & \dots & \}.
+ \end{array}
+\]
+In fact, as you likely already know,
+there is even a bijection between $\omega$ and $\omega^2$:
+\[
+ \begin{array}{l|cccccc}
+ + & 0 & 1 & 2 & 3 & 4 & \dots \\ \hline
+ 0 & 0 & 1 & 3 & 6 & 10 & \dots \\
+ \omega & 2 & 4 & 7 & 11 & \dots & \\
+ \omega \cdot 2 & 5 & 8 & 12 & \dots & & \\
+ \omega \cdot 3 & 9 & 13 & \dots & & & \\
+ \omega \cdot 4 & 14 & \dots & & & &
+ \end{array}
+\]
+So ordinals do not do a good job of keeping track of size.
+For this, we turn to the notion of a cardinal number.
+
+\section{Equinumerous sets and cardinals}
+\begin{definition}
+ Two sets $A$ and $B$ are \vocab{equinumerous}, written $A \approx B$,
+ if there is a bijection between them.
+\end{definition}
+
+\begin{definition}
+ A \vocab{cardinal} is an ordinal $\kappa$ such that
+ for no $\alpha < \kappa$ do we have $\alpha \approx \kappa$.
+\end{definition}
+\begin{example}[Examples of cardinals]
+ Every finite number is a cardinal.
+ Moreover, $\omega$ is a cardinal.
+ However, $\omega+1$, $\omega^2$, $\omega^{2015}$ are not,
+ because they are countable.
+\end{example}
+\begin{example}[$\omega^\omega$ is countable]
+ Even $\omega^\omega$ is not a cardinal,
+ since it is a countable union
+ \[ \omega^\omega = \bigcup_n \omega^n \]
+ and each $\omega^n$ is countable.
+\end{example}
+\begin{ques}
+ Why must an infinite cardinal be a limit ordinal?
+\end{ques}
+
+\begin{remark}
+ There is something fishy about the definition of a cardinal:
+ it relies on the absence of certain \emph{external} functions.
+ That is, to verify $\kappa$ is a cardinal I can't just look at $\kappa$ itself;
+ I need to examine the entire universe $V$ to make sure
+ there does not exist a bijection $f : \kappa \to \alpha$ for $\alpha < \kappa$.
+ For now this is no issue, but later in model theory
+ this will lead to some highly counterintuitive behavior.
+\end{remark}
+
+\section{Cardinalities}
+Now that we have defined a cardinal, we can discuss the size
+of a set by linking it to a cardinal.
+
+\begin{definition}
+ The \vocab{cardinality} of a set $X$
+ is the \emph{least} ordinal $\kappa$ such that $X \approx \kappa$.
+ We denote it by $\left\lvert X \right\rvert$.
+\end{definition}
+\begin{ques}
+ Why must $\left\lvert X \right\rvert$ be a cardinal?
+\end{ques}
+\begin{remark}
+ One needs the well-ordering theorem (equivalently, choice)
+ in order to establish that such an ordinal $\kappa$ actually exists.
+\end{remark}
+Since cardinals are ordinals, it makes sense to ask whether $\kappa_1 \le \kappa_2$,
+and so on.
+Our usual intuition works well here.
+\begin{proposition}[Restatement of cardinality properties]
+ Let $X$ and $Y$ be sets.
+ \begin{enumerate}[(i)]
+ \ii $X \approx Y$ if and only if $\left\lvert X \right\rvert = \left\lvert Y \right\rvert$,
+ if and only if there's a bijection from $X$ to $Y$.
+ \ii $\left\lvert X \right\rvert \le \left\lvert Y \right\rvert$
+ if and only if there is an injective map $X \injto Y$.
+ \end{enumerate}
+\end{proposition}
+Diligent readers are invited to try and prove this.
+
+\section{Aleph numbers}
+\prototype{$\aleph_0 = \omega$, and $\aleph_1$ is the first uncountable ordinal.}
+First, let us check that cardinals can get arbitrarily large:
+\begin{proposition}
+ We have $\left\lvert X \right\rvert < \left\lvert \PP(X) \right\rvert$ for every set $X$.
+\end{proposition}
+\begin{proof}
+ There is an injective map $X \injto \PP(X)$
+ but there is no injective map $\PP(X) \injto X$ by \Cref{lem:cantor_diag}.
+\end{proof}
+
+Thus we can define:
+\begin{definition}
+ For a cardinal $\kappa$, we define $\kappa^+$ to be the least cardinal above $\kappa$,
+ called the \vocab{successor cardinal}.
+\end{definition}
+This $\kappa^+$ exists and has $\kappa^+ \le \left\lvert \PP(\kappa) \right\rvert$.
+
+Next, we claim that:
+\begin{exercise}
+ Show that if $A$ is a set of cardinals, then $\bigcup A$ is a cardinal.
+\end{exercise}
+
+Thus by transfinite induction we obtain that:
+\begin{definition}
+ For any $\alpha \in \On$, we define the \vocab{aleph numbers} as
+ \begin{align*}
+ \aleph_0 &= \omega \\
+ \aleph_{\alpha+1} &= \left( \aleph_\alpha \right)^+ \\
+ \aleph_{\lambda} &= \bigcup_{\alpha < \lambda} \aleph_\alpha.
+ \end{align*}
+\end{definition}
+
+Thus we have the sequence of cardinals
+\[
+ 0 < 1 < 2 < \dots < \aleph_0 < \aleph_1 < \dots < \aleph_\omega < \aleph_{\omega+1} < \dots.
+\]
+By definition, $\aleph_0$ is the cardinality of the natural numbers,
+$\aleph_1$ is the first uncountable ordinal, \dots.
+
+We claim the aleph numbers constitute all the cardinals:
+\begin{lemma}[Aleph numbers constitute all infinite cardinals]
+ If $\kappa$ is a cardinal then
+ either $\kappa$ is finite (i.e.\ $\kappa \in \omega$) or
+ $\kappa = \aleph_\alpha$ for some $\alpha \in \On$.
+\end{lemma}
+\begin{proof}
+ Assume $\kappa$ is infinite, and take $\alpha$ minimal with $\aleph_\alpha \ge \kappa$.
+ Suppose for contradiction that we have $\aleph_\alpha > \kappa$.
+ We may assume $\alpha > 0$, since the case $\alpha = 0$ is trivial.
+
+ If $\alpha = \ol\alpha + 1$ is a successor, then
+ \[ \aleph_{\ol\alpha} < \kappa < \aleph_{\alpha}
+ = (\aleph_{\ol\alpha})^+ \]
+ which contradicts the definition of the successor cardinal.
+
+ If $\alpha = \lambda$ is a limit ordinal, then $\aleph_\lambda$ is the
+ supremum $\bigcup_{\gamma < \lambda} \aleph_\gamma$.
+ So there must be some $\gamma < \lambda$ with $\aleph_\gamma > \kappa$,
+ which contradicts the minimality of $\alpha$.
+\end{proof}
+
+\begin{definition}
+ An infinite cardinal which is not a successor cardinal
+ is called a \vocab{limit cardinal}.
+ It is exactly those cardinals of the form $\aleph_\lambda$,
+ for $\lambda$ a limit ordinal, plus $\aleph_0$.
+\end{definition}
+
+
+\section{Cardinal arithmetic}
+\prototype{$\aleph_0 \cdot \aleph_0 = \aleph_0 + \aleph_0 = \aleph_0$}
+Recall the way we set up ordinal arithmetic.
+Note that in particular, $\omega + \omega > \omega$ and $\omega^2 > \omega$.
+Since cardinals count size, this property is undesirable, and
+we want to have
+\begin{align*}
+ \aleph_0 + \aleph_0 &= \aleph_0 \\
+ \aleph_0 \cdot \aleph_0 &= \aleph_0
+\end{align*}
+because $\omega + \omega$ and $\omega \cdot \omega$ are countable.
+In the case of cardinals, we simply ``ignore order''.
+
+The definition of cardinal arithmetic is as expected:
+\begin{definition}[Cardinal arithmetic]
+ Given cardinals $\kappa$ and $\mu$, define
+ \[ \kappa + \mu
+ \defeq
+ \left\lvert
+ \left( \left\{ 0 \right\} \times \kappa \right)
+ \cup
+ \left( \left\{ 1 \right\} \times \mu \right)
+ \right\rvert
+ \]
+ and
+ \[
+ \kappa \cdot \mu
+ \defeq
+ \left\lvert \mu \times \kappa \right\rvert
+ .
+ \]
+\end{definition}
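+
+For instance, the bijection witnessing $\aleph_0 + \aleph_0 = \aleph_0$
+can be written down explicitly in this notation:
+the map sending $(0, n) \mapsto 2n$ and $(1, n) \mapsto 2n+1$
+carries $\left( \left\{ 0 \right\} \times \omega \right)
+\cup \left( \left\{ 1 \right\} \times \omega \right)$
+bijectively onto $\omega$.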
+
+
+\begin{ques}
+ Check this agrees with what you learned in pre-school
+ for finite cardinals.
+\end{ques}
+
+\begin{abuse}
+ This is a slight abuse of notation since we are using
+ the same symbols as for ordinal arithmetic,
+ even though the results are different ($\omega \cdot \omega = \omega^2$
+ but $\aleph_0 \cdot \aleph_0 = \aleph_0$).
+ In general, I'll make it abundantly clear whether I am talking
+ about cardinal arithmetic or ordinal arithmetic.
+\end{abuse}
+To help combat this confusion, we use separate symbols for ordinals and cardinals.
+Specifically, $\omega$ will always refer to $\{0,1,\dots\}$ viewed as an ordinal;
+$\aleph_0$ will always refer to the same set viewed as a cardinal.
+More generally,
+\begin{definition}
+ Let $\omega_\alpha = \aleph_\alpha$ viewed as an ordinal.
+\end{definition}
+
+However, as we have already seen, $\aleph_0 \cdot \aleph_0 = \aleph_0$.
+In fact, this holds even more generally:
+
+\begin{theorem}[Infinite cardinals squared]
+ Let $\kappa$ be an infinite cardinal.
+ Then $\kappa \cdot \kappa = \kappa$.
+\end{theorem}
+\begin{proof}
+ Obviously $\kappa \cdot \kappa \ge \kappa$,
+ so we want to show $\kappa \cdot \kappa \le \kappa$.
+
+ The idea is to repeat the proof
+ that we had for $\aleph_0 \cdot \aleph_0 = \aleph_0$,
+ so we re-iterate it here.
+ We took the ``square'' $\aleph_0 \times \aleph_0$, and then
+ \emph{re-ordered} it according to the diagonal:
+ \[
+ \begin{array}{l|cccccc}
+ & 0 & 1 & 2 & 3 & 4 & \dots \\ \hline
+ 0 & 0 & 1 & 3 & 6 & 10 & \dots \\
+ 1 & 2 & 4 & 7 & 11 & \dots & \\
+ 2 & 5 & 8 & 12 & \dots & & \\
+ 3 & 9 & 13 & \dots & & & \\
+ 4 & 14 & \dots & & & &
+ \end{array}
+ \]
+ We'd like to copy this idea for a general $\kappa$;
+ however, since addition is less well-behaved for infinite ordinals
+ it will be more convenient to use $\max\{\alpha,\beta\}$
+ rather than $\alpha+\beta$.
+ Specifically, we put the ordering $<_{\text{max}}$
+ on $\kappa \times \kappa$ as follows:
+ for $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$ in $\kappa \times \kappa$
+ we declare $(\alpha_1, \beta_1) <_{\text{max}} (\alpha_2, \beta_2)$ if
+ \begin{itemize}
+ \ii $\max \left\{ \alpha_1, \beta_1 \right\} < \max \left\{ \alpha_2, \beta_2 \right\}$ or
+ \ii $\max \left\{ \alpha_1, \beta_1 \right\} = \max \left\{ \alpha_2, \beta_2 \right\}$ and $(\alpha_1, \beta_1)$
+ is lexicographically earlier than $(\alpha_2, \beta_2)$.
+ \end{itemize}
+ This alternate ordering (which deliberately avoids referring
+ to the addition) looks like:
+ \[
+ \begin{array}{l|cccccc}
+ & 0 & 1 & 2 & 3 & 4 & \dots \\ \hline
+ 0 & 0 & 1 & 4 & 9 & 16 & \dots \\
+ 1 & 2 & 3 & 5 & 10 & 17 & \dots \\
+ 2 & 6 & 7 & 8 & 11 & 18 & \dots \\
+ 3 & 12 & 13 & 14 & 15 & 19 & \dots \\
+ 4 & 20 & 21 & 22 & 23 & 24 & \dots \\
+ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\
+ \end{array}
+ \]
+
+ Now we proceed by transfinite induction on $\kappa$.
+ The base case is $\kappa = \aleph_0$, done above.
+ Now, $<_{\text{max}}$ is a well-ordering of $\kappa \times \kappa$,
+ so we know it is in order-preserving bijection with some ordinal $\gamma$.
+ Our goal is to show that $\left\lvert \gamma \right\rvert \le \kappa$.
+ To do so, it suffices to prove that for any $\ol\gamma \in \gamma$,
+ we have $\left\lvert \ol\gamma \right\rvert < \kappa$.
+
+ Suppose $\ol\gamma$ corresponds to the point $(\alpha, \beta) \in \kappa \times \kappa$
+ under this bijection.
+ If $\alpha$ and $\beta$ are both finite
+ then certainly $\ol\gamma$ is finite too.
+ Otherwise, let $\ol\kappa = \max \{\alpha, \beta\} < \kappa$;
+ every point below $\ol\gamma$ lies in $(\ol\kappa+1) \times (\ol\kappa+1)$,
+ so the number of such points is at most
+ \[
+ \left\lvert \ol\kappa+1 \right\rvert \cdot \left\lvert \ol\kappa+1 \right\rvert
+ = \left\lvert \ol\kappa \right\rvert \cdot \left\lvert \ol\kappa \right\rvert
+ = \left\lvert \ol\kappa \right\rvert
+ \]
+ by the inductive hypothesis
+ (using $\left\lvert \ol\kappa+1 \right\rvert = \left\lvert \ol\kappa \right\rvert$,
+ valid as $\ol\kappa$ is infinite).
+ So $\left\lvert \ol\gamma \right\rvert \le \left\lvert \ol\kappa \right\rvert < \kappa$
+ as desired.
+\end{proof}
+
+From this it follows that cardinal addition and multiplication is really boring:
+\begin{theorem}[Infinite cardinal arithmetic is trivial]
+ Given cardinals $\kappa$ and $\mu$,
+ one of which is infinite, we have
+ \[ \kappa \cdot \mu = \kappa + \mu
+ = \max\left\{ \kappa, \mu \right\}.\]
+\end{theorem}
+\begin{proof}
+ The point is that both of these are less than the square of the maximum.
+ Writing out the details:
+ \begin{align*}
+ \max \left\{ \kappa, \mu \right\}
+ &\le \kappa + \mu \\
+ &\le \kappa \cdot \mu \\
+ &\le \max \left\{ \kappa, \mu \right\}
+ \cdot \max \left\{ \kappa, \mu \right\} \\
+ &= \max\left\{ \kappa, \mu \right\}. \qedhere
+ \end{align*}
+\end{proof}
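+
+For instance, granting the theorem,
+$\aleph_3 + \aleph_{17} = \aleph_3 \cdot \aleph_{17} = \aleph_{17}$.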
+
+
+
+
+\section{Cardinal exponentiation}
+\prototype{$2^\kappa = \left\lvert \PP(\kappa) \right\rvert$.}
+\begin{definition}
+ Suppose $\kappa$ and $\lambda$ are cardinals.
+ Then
+ \[ \kappa^\lambda
+ \defeq \left\lvert \mathscr F(\lambda, \kappa) \right\rvert.
+ \]
+ Here $\mathscr F(A,B)$ is the set of functions from $A$ to $B$.
+\end{definition}
+
+\begin{abuse}
+ As before, we are using the same notation for
+ both cardinal and ordinal arithmetic. Sorry!
+\end{abuse}
+
+In particular, $2^\kappa = \left\lvert \PP(\kappa) \right\rvert > \kappa$,
+and so from now on we can use the notation $2^\kappa$ freely.
+(Note that this is totally different from ordinal arithmetic;
+there we had $2^\omega = \bigcup_{n\in\omega} 2^n = \omega$.
+In cardinal arithmetic $2^{\aleph_0} > \aleph_0$.)
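+
+One pleasant feature of this definition
+(easy to verify, though we will not need it)
+is that the usual exponent laws survive, by currying:
+a function $\lambda \times \mu \to \kappa$ is the same data as
+a function $\mu \to \mathscr F(\lambda, \kappa)$, whence
+\[ \left( \kappa^{\lambda} \right)^{\mu} = \kappa^{\lambda \cdot \mu}. \]
+For example, $\left( 2^{\aleph_0} \right)^{\aleph_0}
+= 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0}$.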
+
+I have unfortunately not told you what $2^{\aleph_0}$ equals.
+A natural conjecture is that $2^{\aleph_0} = \aleph_1$; this is called the
+\vocab{Continuum Hypothesis}.
+It turns out that this is \emph{undecidable} -- it is not possible
+to prove or disprove this from the $\ZFC$ axioms.
+
+\section{Cofinality}
+\prototype{$\aleph_0$, $\aleph_1$, \dots\ are all regular, but $\aleph_\omega$ has cofinality $\omega$.}
+
+\begin{definition}
+ Let $\lambda$ be an ordinal (usually a limit ordinal),
+ and $\alpha$ another ordinal.
+ A map $f : \alpha \to \lambda$ of ordinals is called \vocab{cofinal}
+ if for every $\ol\lambda < \lambda$, there is some $\ol\alpha \in \alpha$
+ such that $f(\ol\alpha) \ge \ol\lambda$.
+ In other words, the map reaches arbitrarily high into $\lambda$.
+\end{definition}
+\begin{example}
+ [Examples of cofinal maps]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The map $\omega \to \omega^\omega$ by $n \mapsto \omega^n$ is cofinal.
+ \ii For any ordinal $\alpha$, the identity map $\alpha \to \alpha$ is cofinal.
+ \end{enumerate}
+\end{example}
+
+\begin{definition}
+ Let $\lambda$ be a limit ordinal.
+ The \vocab{cofinality} of $\lambda$, denoted $\cof(\lambda)$,
+ is the smallest ordinal $\alpha$ such that there is a cofinal map
+ $\alpha \to \lambda$.
+\end{definition}
+\begin{ques}
+ Why must $\cof(\lambda)$ be an infinite cardinal?
+\end{ques}
+
+Usually, we are interested in taking the cofinality of a cardinal $\kappa$.
+
+Pictorially, you can imagine standing at the bottom of the universe and looking
+up the chain of ordinals to $\kappa$.
+You have a machine gun and are firing bullets upwards, and you want to get arbitrarily
+high but less than $\kappa$.
+The cofinality is then the number of bullets you need to do this.
+
+We now observe that ``most'' of the time, the cofinality of a cardinal is itself.
+Such a cardinal is called \vocab{regular}.
+\begin{example}[$\aleph_0$ is regular]
+ $\cof(\aleph_0) = \aleph_0$, because no finite subset of
+ $\aleph_0 = \omega$ can reach arbitrarily high.
+\end{example}
+\begin{example}[$\aleph_1$ is regular]
+ $\cof(\aleph_1) = \aleph_1$.
+ Indeed, assume for contradiction that some countable
+ set of ordinals $A = \{ \alpha_0, \alpha_1, \dots \} \subseteq \aleph_1$
+ reaches arbitrarily high inside $\aleph_1$.
+ Then $\Lambda = \cup A$ is a \emph{countable} ordinal,
+ because it is a countable union of countable ordinals.
+ In other words $\Lambda \in \aleph_1$.
+ But $\Lambda$ is an upper bound for $A$, contradiction.
+\end{example}
+On the other hand, there \emph{are} cardinals which are not regular;
+since these are the ``rare'' cases we call them \vocab{singular}.
+\begin{example}[$\aleph_\omega$ is not regular]
+ Notice that $\aleph_0 < \aleph_1 < \aleph_2 < \dots$ reaches
+ arbitrarily high in $\aleph_\omega$, despite only having $\aleph_0$ terms.
+ It follows that $\cof(\aleph_\omega) = \aleph_0$.
+\end{example}
+
+We now confirm a suspicion you may have:
+\begin{theorem}
+ [Successor cardinals are regular]
+ If $\kappa = \ol\kappa^+$ is a successor cardinal,
+ then it is regular.
+\end{theorem}
+\begin{proof}
+ We copy the proof that $\aleph_1$ was regular.
+
+ Assume for contradiction that $\cof(\kappa) = \mu \le \ol\kappa$,
+ so that some $\mu$ ordinals reach arbitrarily high in $\kappa$.
+ Each of these ordinals, being less than $\kappa$,
+ has cardinality at most $\ol\kappa$.
+ Their union is an ordinal $\Lambda$
+ serving as an upper bound for all of them.
+
+ The number of elements in the union is at most
+ \[ \#\text{ordinals} \cdot \#\text{elements}
+ \le \mu \cdot \ol\kappa = \ol\kappa \]
+ and hence $\left\lvert \Lambda \right\rvert \le \ol\kappa < \kappa$,
+ so $\Lambda < \kappa$; this contradicts the assumption
+ that the ordinals reach arbitrarily high.
+\end{proof}
+
+\section{Inaccessible cardinals}
+So, what about limit cardinals?
+It seems that most of them are singular:
+if $\aleph_\lambda \ne \aleph_0$ is a limit cardinal (so $\lambda$ is a limit ordinal),
+then the sequence $\{\aleph_\alpha\}_{\alpha \in \lambda}$ (of length $\lambda$)
+is certainly cofinal.
+
+\begin{example}[Aleph fixed point]
+ Consider the monstrous cardinal
+ \[ \kappa = \aleph_{\aleph_{\aleph_{\ddots}}}. \]
+ This might look frighteningly huge, as $\kappa = \aleph_\kappa$,
+ but its cofinality is $\omega$ as it is the limit of the sequence
+ \[ \aleph_0, \aleph_{\aleph_0}, \aleph_{\aleph_{\aleph_0}}, \dots \]
+\end{example}
+
+More generally, one can in fact prove that
+\[ \cof(\aleph_\lambda) = \cof(\lambda). \]
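+For example, taking this formula on faith for a moment, it gives
+\[ \cof\left( \aleph_{\aleph_1} \right) = \cof(\aleph_1) = \aleph_1 \]
+so $\aleph_{\aleph_1}$ is singular, even though its cofinality is uncountable.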
+But it is actually conceivable that $\lambda$ is so large
+that $\left\lvert \lambda \right\rvert = \left\lvert \aleph_\lambda \right\rvert$.
+
+A regular limit cardinal other than $\aleph_0$ has a special name: it is \vocab{weakly inaccessible}.
+Such cardinals are so large that it is impossible to prove or disprove their existence in $\ZFC$.
+It is the first of many so-called ``large cardinals''.
+
+An infinite cardinal $\kappa$ is a \vocab{strong limit cardinal} if
+\[ 2^{\ol\kappa} < \kappa \]
+for every cardinal $\ol\kappa < \kappa$. For example, $\aleph_0$ is a strong limit cardinal.
+\begin{ques}
+ Why must strong limit cardinals actually be limit cardinals?
+ (This is offensively easy.)
+\end{ques}
+A regular strong limit cardinal other than $\aleph_0$
+is called \vocab{strongly inaccessible}.
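+To see how the two conditions interact, here is a standard example;
+it uses the beth numbers, which we have not defined elsewhere, so we recall them:
+$\beth_0 = \aleph_0$, $\beth_{\alpha+1} = 2^{\beth_\alpha}$, with suprema at limit stages.
+Then the cardinal
+\[ \beth_\omega = \sup \left\{ \beth_0, \beth_1, \beth_2, \dots \right\} \]
+is a strong limit cardinal: any $\ol\kappa < \beth_\omega$ satisfies
+$\ol\kappa \le \beth_n$ for some $n$, whence $2^{\ol\kappa} \le \beth_{n+1} < \beth_\omega$.
+But its cofinality is $\aleph_0$, so it is singular,
+and in particular not strongly inaccessible.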
+
+\section\problemhead
+\begin{problem}
+ Compute $\left\lvert V_\omega \right\rvert$.
+ \begin{hint}
+ $\sup_{k \in \omega} \left\lvert V_k \right\rvert$.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Prove that for any limit ordinal $\alpha$, $\cof(\alpha)$ is a \emph{regular} cardinal.
+ \begin{hint}
+ Rearrange the cofinal maps to be nondecreasing.
+ \end{hint}
+\end{problem}
+
+\begin{sproblem}
+ [Strongly inaccessible cardinals]
+ \label{prob:strongly_inaccessible}
+ Show that for any strongly inaccessible $\kappa$,
+ we have $\left\lvert V_\kappa \right\rvert = \kappa$.
+\end{sproblem}
+
+\begin{problem}
+ [K\"onig's theorem]
+ Show that \[ \kappa^{\cof(\kappa)} > \kappa \] for every infinite cardinal $\kappa$.
+\end{problem}
diff --git a/books/napkin/categories.tex b/books/napkin/categories.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b08cc6ec69473775c22b0d30075e658b8c841c65
--- /dev/null
+++ b/books/napkin/categories.tex
@@ -0,0 +1,635 @@
+\chapter{Objects and morphisms}
+\label{ch:cats}
+I can't possibly hope to do category theory any justice in these few chapters;
+thus I'll just give a very high-level overview of how many of the concepts we've
+encountered so far can be re-cast into categorical terms.
+So I'll say what a category is, give some examples,
+then talk about a few things that categories can do.
+For my examples, I'll be drawing from all the previous chapters;
+feel free to skip over the examples corresponding to things you haven't seen.
+
+If you're interested in category theory (like I was!), perhaps in
+what surprising results are true for general categories, I strongly recommend \cite{ref:msci}.
+
+\section{Motivation: isomorphisms}
+From earlier chapters let's recall the definition of an \emph{isomorphism} of two objects:
+\begin{itemize}
+ \ii Two groups $G$ and $H$ are isomorphic if there is a bijective homomorphism:
+ equivalently, if there are homomorphisms $\phi : G \to H$ and $\psi : H \to G$
+ which are mutual inverses, meaning $\phi \circ \psi = \id_H$ and $\psi \circ \phi = \id_G$.
+ \ii Two metric (or topological) spaces $X$ and $Y$ are isomorphic
+ if there is a continuous bijection $f : X \to Y$ such that $f\inv$ is also continuous.
+ \ii Two vector spaces $V$ and $W$ are isomorphic if there is a bijection $T : V \to W$
+ which is a linear map.
+ Again, this can be re-cast as saying that $T$ and $T\inv$ are linear maps.
+ \ii Two rings $R$ and $S$ are isomorphic if there is a bijective ring homomorphism $\phi$;
+ again, we can re-cast this as two mutually inverse ring homomorphisms.
+\end{itemize}
+
+In each case we have some collection of objects and some maps,
+and the isomorphisms can be characterized purely in terms of the maps.
+Let's use this to motivate the definition of a general \emph{category}.
+
+\section{Categories, and examples thereof}
+\prototype{$\catname{Grp}$ is possibly the most natural example.}
+\begin{definition}
+ A \vocab{category} $\AA$ consists of:
+ \begin{itemize}
+ \ii A class of \vocab{objects}, denoted $\obj(\AA)$.
+ \ii For any two objects $A_1, A_2 \in \obj(\AA)$,
+ a class of \vocab{arrows} (also called \vocab{morphisms} or \vocab{maps}) between them.
+ We'll denote the set of these arrows by $\Hom_\AA(A_1, A_2)$.
+ \ii For any $A_1, A_2, A_3 \in \obj(\AA)$,
+ if $f \colon A_1 \to A_2$ is an arrow and $g \colon A_2 \to A_3$ is an arrow,
+ we can compose these arrows to get an arrow $g \circ f \colon A_1 \to A_3$.
+
+ We can represent this in a \vocab{commutative diagram}
+ \begin{center}
+ \begin{tikzcd}
+ A_1 \ar[r, "f"] \ar[rd, "h"'] & A_2 \ar[d, "g"] \\
+ & A_3
+ \end{tikzcd}
+ \end{center}
+ where $h = g \circ f$.
+ The composition operation $\circ$ is part of the data of $\AA$;
+ it must be associative.
+ In the diagram above we say that $h$ \vocab{factors} through $A_2$.
+
+ \ii Finally, every object $A \in \obj(\AA)$ has a special \vocab{identity arrow} $\id_A$;
+ you can guess what it does.\footnote{To be painfully explicit:
+ if $f \colon A' \to A$ is an arrow then $\id_A \circ f = f$;
+ similarly, if $g \colon A \to A'$ is an arrow then $g \circ \id_A = g$.}
+ \end{itemize}
+\end{definition}
+\begin{abuse}
+ From now on, by $A \in \AA$ we'll mean $A \in \obj(\AA)$.
+\end{abuse}
+\begin{abuse}
+ You can think of ``class'' as just ``set''.
+ The reason we can't use the word ``set'' is
+ because of some paradoxical issues with
+ collections which are too large;
+ Cantor's Paradox says there is no set of all sets.
+ So referring to these by ``class'' is a way of sidestepping these issues.
+
+ Now and forever I'll be sloppy and assume all my categories
+ are \vocab{locally small}, meaning that $\Hom_{\AA} (A_1, A_2)$
+ is a set for any $A_1, A_2 \in \AA$.
+ So elements of $\AA$ may not form a set,
+ but the set of morphisms between
+ two \emph{given} objects will always be assumed to be a set.
+\end{abuse}
+
+Let's formalize the motivation we began with.
+\begin{example}
+ [Basic examples of categories]
+ \listhack
+ \label{example:basic_categories}
+ \begin{enumerate}[(a)]
+ \ii There is a category of groups $\catname{Grp}$. The data is
+ \begin{itemize}
+ \ii The objects of $\catname{Grp}$ are the groups.
+ \ii The arrows of $\catname{Grp}$ are the homomorphisms between these groups.
+ \ii The composition $\circ$ in $\catname{Grp}$ is function composition.
+ \end{itemize}
+ \ii In the same way we can conceive a category $\catname{CRing}$ of (commutative) rings.
+ \ii Similarly, there is a category $\catname{Top}$ of topological spaces,
+ whose arrows are the continuous maps.
+ \ii There is a category $\catname{Top}_\ast$ of topological spaces with a \emph{distinguished basepoint};
+ that is, a pair $(X, x_0)$ where $x_0 \in X$.
+ Arrows are continuous maps $f : (X, x_0) \to (Y, y_0)$ with $f(x_0) = y_0$.
+ \ii Similarly, there is a category $\catname{Vect}_k$ of
+ vector spaces (possibly infinite-dimensional) over a field $k$,
+ whose arrows are the linear maps.
+ There is even a category $\catname{FDVect}_k$ of
+ \emph{finite-dimensional} vector spaces.
+ \ii We have a category $\catname{Set}$ of sets,
+ where the arrows are \emph{any} maps.
+ \end{enumerate}
+\end{example}
+And of course, we can now define what an isomorphism is!
+\begin{definition}
+ An arrow $A_1 \taking{f} A_2$ is an \vocab{isomorphism}
+ if there exists $A_2 \taking{g} A_1$ such that $f \circ g = \id_{A_2}$
+ and $g \circ f = \id_{A_1}$.
+ In that case we say $A_1$ and $A_2$ are \vocab{isomorphic}, written $A_1 \cong A_2$.
+\end{definition}
+\begin{remark}
+ Note that in $\catname{Set}$, $X \cong Y
+ \iff \left\lvert X \right\rvert = \left\lvert Y \right\rvert$.
+\end{remark}
+\begin{ques}
+ Check that every object in a category is isomorphic to itself.
+ (This is offensively easy.)
+\end{ques}
+More importantly, this definition should strike you as a little impressive.
+We're able to define whether two groups (rings, spaces, etc.) are isomorphic
+solely by the functions between the objects.
+Indeed, one of the key themes in category theory (and even algebra) is that
+\begin{moral}
+ One can learn about objects by the functions between them.
+ Category theory takes this to the extreme by \emph{only} looking at arrows,
+ and ignoring what the objects themselves are.
+\end{moral}
+
+But there are some trickier interesting examples of categories.
+\begin{example}
+ [Posets are categories]
+ Let $\mathcal P$ be a partially ordered set.
+ We can construct a category $P$ for it as follows:
+ \begin{itemize}
+ \ii The objects of $P$ are going to be the elements of $\mathcal P$.
+ \ii The arrows of $P$ are defined as follows:
+ \begin{itemize}
+ \ii For every object $p \in P$, we add an identity arrow $\id_p$, and
+ \ii For any pair of distinct objects $p \le q$, we add a single arrow $p \to q$.
+ \end{itemize}
+ There are no other arrows.
+ \ii There's only one way to do the composition. What is it?
+ \end{itemize}
+\end{example}
+For example, for the poset $\mathcal P$ on four objects $\{a,b,c,d\}$ with $a \le b$ and $a \le c \le d$, we get:
+\begin{center}
+\begin{tikzpicture}[scale=3.5]
+ \SetVertexMath
+ \Vertices{square}{d,c,a,b}
+ \Edge[style={->}, label={$a \le b$}](a)(b)
+ \Edge[style={->}, label={$a \le c$}](a)(c)
+ \Edge[style={->}, label={$a \le d$}](a)(d)
+ \Edge[style={->}, label={$c \le d$}](c)(d)
+ \Loop[dist=8, dir=NO, label={$\id_a$}, labelstyle={above=1pt}](a)
+ \Loop[dist=8, dir=WE, label={$\id_b$}, labelstyle={left=1pt}](b)
+ \Loop[dist=8, dir=EA, label={$\id_c$}, labelstyle={right=1pt}](c)
+ \Loop[dist=8, dir=WE, label={$\id_d$}, labelstyle={left=1pt}](d)
+\end{tikzpicture}
+\end{center}
+
+This illustrates the point that
+\begin{moral}
+ The arrows of a category can be totally different from functions.
+\end{moral}
+In fact, in a way that can be made precise, the term ``concrete category'' refers
+to one where the arrows really are ``structure-preserving maps between sets'',
+like $\catname{Grp}$, $\catname{Top}$, or $\catname{CRing}$.
+
+\begin{ques}
+ Check that no two distinct objects of a poset are isomorphic.
+\end{ques}
+
+Here's a second quite important example of a non-concrete category.
+\begin{example}
+ [Important: groups are one-object categories]
+ A group $G$ can be interpreted as a category $\mathcal G$ with one object $\ast$,
+ all of whose arrows are isomorphisms.
+
+ \begin{center}
+ \begin{tikzpicture}[scale=5.5]
+ \Vertex[x=0,y=0,L={$\ast$}]{a}
+ \Loop[dist=8, dir=NO, label={$1 = \id_a$}, labelstyle={above=1pt}](a)
+ \Loop[dist=7, dir=WE, label={$g_2$}, labelstyle={left=1pt}](a)
+ \Loop[dist=9, dir=SO, label={$g_3$}, labelstyle={below=1pt}](a)
+ \Loop[dist=8, dir=EA, label={$g_4$}, labelstyle={right=1pt}](a)
+ \end{tikzpicture}
+ \end{center}
+
+ As \cite{ref:msci} says:
+
+ \begin{quote}
+ The first time you meet the idea that a group is a kind of category,
+ it's tempting to dismiss it as a coincidence or a trick.
+ It's not: there's real content.
+ To see this, suppose your education had been shuffled and you took a course
+ on category theory before ever learning what a group was.
+ Someone comes to you and says:
+
+ ``There are these structures called `groups', and the idea is this:
+ a group is what you get when you collect together all the symmetries
+ of a given thing.''
+
+ ``What do you mean by a `symmetry'?'' you ask.
+
+ ``Well, a symmetry of an object $X$ is a way of transforming $X$ or mapping
+ $X$ into itself, in an invertible way.''
+
+ ``Oh,'' you reply, ``that's a special case of an idea I've met before.
+ A category is the structure formed by \emph{lots} of objects and mappings
+ between them -- not necessarily invertible. A group's just the very special case
+ where you've only got one object, and all the maps happen to be invertible.''
+ \end{quote}
+\end{example}
+
+\begin{exercise}
+ Verify the above!
+ That is, show that the data of a one-object category with all isomorphisms
+ is the same as the data of a group.
+\end{exercise}
+
+Finally, here are some examples of categories you can make from other categories.
+\begin{example}
+ [Deriving categories]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Given a category $\AA$, we can construct the \vocab{opposite category}
+ $\AA\op$, which is the same as $\AA$ but with all arrows reversed.
+ \ii Given categories $\AA$ and $\BB$, we can construct the \vocab{product category} $\AA \times \BB$
+ as follows: the objects are pairs $(A, B)$ for $A \in \AA$ and $B \in \BB$,
+ and the arrows from $(A_1, B_1)$ to $(A_2, B_2)$
+ are pairs \[ \left( A_1 \taking{f} A_2, B_1 \taking{g} B_2 \right). \]
+ What do you think the composition and identities are?
+ \end{enumerate}
+\end{example}
+
+\section{Special objects in categories}
+\prototype{$\catname{Set}$ has initial object $\varnothing$ and final object $\{\ast\}$. An element of $S$ corresponds to a map $\{\ast\} \to S$.}
+Certain objects in categories have special properties.
+Here are a couple examples.
+\begin{example}
+ [Initial object]
+ An \vocab{initial object} of $\AA$ is an object
+ $A_{\text{init}} \in \AA$ such that for any $A \in \AA$ (possibly $A = A_{\text{init}}$),
+ there is exactly one arrow from $A_{\text{init}}$ to $A$.
+ For example,
+ \begin{enumerate}[(a)]
+ \ii The initial object of $\catname{Set}$ is the empty set $\varnothing$.
+ \ii The initial object of $\catname{Grp}$ is the trivial group $\{1\}$.
+ \ii The initial object of $\catname{CRing}$ is the ring $\ZZ$
+ (recall that ring homomorphisms $R \to S$ map $1_R$ to $1_S$).
+ \ii The initial object of $\catname{Top}$ is the empty space.
+ \ii The initial object of a partially ordered set is its smallest element, if one exists.
+ \end{enumerate}
+\end{example}
+
+We will usually refer to ``the'' initial object of a category, since:
+\begin{exercise}
+ [Important!]
+ Show that any two initial objects $A_1$, $A_2$ of $\AA$ are \emph{uniquely isomorphic},
+ meaning there is a unique isomorphism between them.
+\end{exercise}
+
+\begin{remark}
+ In mathematics, we usually neither know nor care if two objects are actually equal
+ or whether they are isomorphic.
+ For example, there are many competing ways to define $\RR$,
+ but we still just refer to it as ``the'' real numbers.
+
+ Thus when we define categorical notions, we would like to check they are
+ unique up to isomorphism.
+ This is really clean in the language of categories, and definitions
+ often cause objects to be unique up to isomorphism for elegant reasons like the above.
+\end{remark}
+
+One can take the ``dual'' notion, a terminal object.
+\begin{example}
+ [Terminal object]
+ A \vocab{terminal object} of $\AA$ is an object
+ $A_{\text{final}} \in \AA$ such that for any $A \in \AA$ (possibly $A = A_{\text{final}}$),
+ there is exactly one arrow from $A$ to $A_{\text{final}}$.
+ For example,
+ \begin{enumerate}[(a)]
+ \ii The terminal object of $\catname{Set}$ is the singleton set $\{\ast\}$.
+ (There are many singleton sets, of course, but \emph{as sets} they are all isomorphic!)
+ \ii The terminal object of $\catname{Grp}$ is the trivial group $\{1\}$.
+ \ii The terminal object of $\catname{CRing}$ is the zero ring $0$.
+ (Recall that ring homomorphisms $R \to S$ must map $1_R$ to $1_S$).
+ \ii The terminal object of $\catname{Top}$ is the single-point space.
+ \ii The terminal object of a partially ordered set is its largest element, if one exists.
+ \end{enumerate}
+\end{example}
+
+Again, terminal objects are unique up to isomorphism.
+The reader is invited to repeat the proof from the preceding exercise here.
+However, we can instead use the notion of duality to give a short proof.
+\begin{ques}
+ Verify that terminal objects of $\AA$ are equivalent to initial objects of $\AA\op$.
+ Thus terminal objects of $\AA$ are unique up to isomorphism.
+\end{ques}
+In general, one can consider in this way the dual of \emph{any} categorical notion:
+properties of $\AA$ can all be translated to dual properties of $\AA\op$
+(often by adding the prefix ``co'' in front).
+
+One last neat construction: suppose we're working in a concrete category,
+meaning (loosely) that the objects are ``sets with additional structure''.
+Now suppose you're sick of maps and just want to think about elements of these sets.
+Well, I won't let you do that since you're reading a category theory chapter,
+but I will offer you some advice:
+\begin{itemize}
+ \ii In $\catname{Set}$, arrows from $\{\ast\}$ to $S$ correspond to elements of $S$.
+ \ii In $\catname{Top}$, arrows from $\{\ast\}$ to $X$ correspond to points of $X$.
+ \ii In $\catname{Grp}$, arrows from $\ZZ$ to $G$ correspond to elements of $G$.
+ \ii In $\catname{CRing}$, arrows from $\ZZ[x]$ to $R$ correspond to elements of $R$.
+\end{itemize}
+and so on.
+So in most concrete categories, you can think of elements as functions from special sets to the set in question.
+In each of these cases we call the object in question a \vocab{free object}.
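+For instance, in $\catname{Vect}_k$ the free object is the one-dimensional space $k$ itself:
+a linear map $T : k \to V$ is determined by where it sends $1$, giving a bijection
+\[ \Hom_{\catname{Vect}_k}(k, V) \leftrightarrow V
+ \qquad\text{by}\qquad T \mapsto T(1). \]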
+
+\section{Binary products}
+\prototype{$X \times Y$ in most concrete categories is the set-theoretic product.}
+The ``universal property'' is a way of describing objects in terms of maps
+in such a way that it defines the object up to unique isomorphism
+(much the same as the initial and terminal objects).
+
+To show how this works in general, let me give a concrete example.
+Suppose I'm in a category -- let's say $\catname{Set}$ for now.
+I have two sets $X$ and $Y$, and I want to construct the Cartesian product $X \times Y$ as we know it.
+The philosophy of category theory dictates that I should talk about maps only,
+and avoid referring to anything about the sets themselves.
+How might I do this?
+
+Well, let's think about maps into $X \times Y$.
+The key observation is that
+\begin{moral}
+A function $A \taking f X \times Y$
+amounts to a pair of functions $(A \taking g X, A \taking h Y)$.
+\end{moral}
+Put another way, there is a natural projection map $X \times Y \surjto X$ and $X \times Y \surjto Y$:
+\begin{center}
+\begin{tikzcd}
+ & X \\
+ X \times Y \ar[ru, two heads, "\pi_X"] \ar[rd, two heads, "\pi_Y"'] & \\
+ & Y
+\end{tikzcd}
+\end{center}
+(We have to do this in terms of projection maps rather than elements,
+because category theory forces us to talk about arrows.)
+Now how do I add $A$ to this diagram?
+The point is that there is a bijection between functions $A \taking f X \times Y$
+and pairs $(g,h)$ of functions.
+Thus for every pair $A \taking g X$ and $A \taking h Y$ there is a \emph{unique} function
+$A \taking f X \times Y$.
+
+But $X \times Y$ is special in that it is ``universal'':
+for any \emph{other} set $A$, if you give me functions $A \to X$ and $A \to Y$,
+I can use them to build a \emph{unique} function $A \to X \times Y$.
+Picture:
+\begin{center}
+\begin{tikzcd}
+ &&& X \\
+ A \ar[rrru, bend left, "g"'] \ar[rrrd, bend right, "h"] \ar[rr, dotted, "\exists! f"] &&
+ X \times Y \ar[ru, two heads, "\pi_X"] \ar[rd, two heads, "\pi_Y"] & \\
+ &&& Y
+\end{tikzcd}
+\end{center}
+
+We can do this in any general category, defining a so-called product.
+\begin{definition}
+ Let $X$ and $Y$ be objects in any category $\AA$.
+ The \vocab{product} consists of an object $X \times Y$
+ and arrows $\pi_X$, $\pi_Y$ to $X$ and $Y$ (thought of as projection).
+ We require that for any object $A$ and arrows $A \taking g X$, $A \taking h Y$, there
+ is a \emph{unique} arrow $A \taking f X \times Y$ such that the above diagram commutes.
+\end{definition}
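+To check this definition against the concrete case: in $\catname{Set}$,
+given $g$ and $h$, the commutativity conditions
+$\pi_X \circ f = g$ and $\pi_Y \circ f = h$ force
+\[ f(a) = \left( g(a), h(a) \right) \quad \text{for each } a \in A, \]
+so the required $f$ exists and is unique, as claimed earlier.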
+\begin{abuse}
+ Strictly speaking, the product should consist of \emph{both}
+ the object $X \times Y$
+ and the projection maps $\pi_X$ and $\pi_Y$.
+ However, if $\pi_X$ and $\pi_Y$ are understood,
+ then we often use $X \times Y$ to refer to the object,
+ and refer to it also as the product.
+ \label{abuse:object}
+\end{abuse}
+
+Products do not always exist; for example,
+take a category with just two objects and no non-identity morphisms.
+Nonetheless:
+\begin{proposition}[Uniqueness of products]
+ When they exist, products are unique up to isomorphism:
+ given two products $P_1$ and $P_2$ of $X$ and $Y$
+ there is an isomorphism between the two objects.
+\end{proposition}
+\begin{proof}
+ This is very similar to the proof that initial objects are unique up to unique isomorphism.
+ Consider two such objects $P_1$ and $P_2$, and the associated projection maps.
+ So, we have a diagram
+ \begin{center}
+ \begin{tikzcd}
+ & & X & & \\
+ \\
+ P_1 \ar[rrdd, "\pi_Y^1"', two heads] \ar[rruu, "\pi_X^1", two heads] \ar[rr, "f", two heads]
+ && P_2 \ar[rr, "g", two heads] \ar[uu, "\pi_X^2"', two heads] \ar[dd, "\pi_Y^2", two heads]
+ && P_1 \ar[lluu, "\pi_X^1"', two heads] \ar[lldd, "\pi_Y^1"', two heads] \\
+ \\
+ && Y &&
+ \end{tikzcd}
+ \end{center}
+ There are unique morphisms $f$ and $g$ between $P_1$ and $P_2$ that
+ make the entire diagram commute, according to the universal property.
+
+ On the other hand, look at $g \circ f$ and focus on just the outer square.
+ Observe that $g \circ f$ is a map which makes the outer square commute,
+ so by the universal property of $P_1$ it is the only one.
+ But $\id_{P_1}$ works as well.
+ Thus $\id_{P_1} = g \circ f$.
+ Similarly, $f \circ g = \id_{P_2}$ so $f$ and $g$ are isomorphisms.
+\end{proof}
+\begin{abuse}
+ Actually, this is not really the morally correct theorem,
+ since we've only shown that the objects $P_1$ and $P_2$ are isomorphic
+ and have not made any assertion about the projection maps.
+ But I haven't (and won't) define isomorphism of the entire product,
+ and so in what follows if I say ``$P_1$ and $P_2$ are isomorphic''
+ I really just mean the objects are isomorphic.
+\end{abuse}
+\begin{exercise}
+ In fact, show the products are unique up to \emph{unique} isomorphism:
+ the $f$ and $g$ above are the only isomorphisms between
+ the objects $P_1$ and $P_2$ respecting the projections.
+\end{exercise}
+
+The nice fact about this ``universal property'' mindset
+is that we don't have to give explicit constructions: assuming existence,
+the universal property pins the object down up to unique isomorphism,
+so we don't need to understand the internal workings of the object
+to use its properties.
+
+Of course, that's not to say we can't give concrete examples.
+\begin{example}
+ [Examples of products]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii In $\catname{Set}$, the product of two sets
+ $X$ and $Y$ is their Cartesian product $X \times Y$.
+ \ii In $\catname{Grp}$, the product of $G$, $H$
+ is the group product $G \times H$.
+ \ii In $\catname{Vect}_k$, the product
+ of $V$ and $W$ is $V \oplus W$.
+ \ii In $\catname{CRing}$, the product
+ of $R$ and $S$ is appropriately the ring product $R \times S$.
+ \ii Let $\mathcal P$ be a poset interpreted as a category.
+ Then the product of two objects $x$ and $y$
+ is the \vocab{greatest lower bound}; for example,
+ \begin{itemize}
+ \ii If the poset is $(\RR, \le)$ then it's $\min\{x,y\}$.
+ \ii If the poset is the subsets
+ of a finite set by inclusion,
+ then it's $x \cap y$.
+ \ii If the poset is the positive integers ordered by division,
+ then it's $\gcd(x,y)$.
+ \end{itemize}
+ \end{enumerate}
+\end{example}
+
+Of course, we can define products of more than just one object.
+Consider a set of objects $(X_i)_{i \in I}$ in a category $\AA$.
+We define a \vocab{cone} on the $X_i$ to be an object $A$
+with some ``projection'' maps to each $X_i$.
+Then the \vocab{product} is a cone $P$ which is ``universal'' in the same sense as before:
+given any other cone $A$ there is a unique map $A \to P$ making the diagram commute.
+In short, a product is a ``universal cone''.
+
+The picture of this is
+\begin{center}
+\begin{tikzcd}
+ && A
+ \ar[dd, "\exists! f"]
+ \ar[llddd, two heads, bend right]
+ \ar[lddd, two heads, bend right]
+ \ar[rddd, two heads, bend left]
+ \ar[rrddd, two heads, bend left]
+ && \\
+ &&&& \\
+ && P
+ \ar[lld, two heads]
+ \ar[ld, two heads]
+ \ar[rd, two heads]
+ \ar[rrd, two heads]
+ && \\
+ X_1 & X_2 && X_3 & X_4
+\end{tikzcd}
+\end{center}
+See also \Cref{prob:associative_product}.
+
+One can also do the dual construction to get a \vocab{coproduct}:
+given $X$ and $Y$, it's the object $X+Y$
+together with maps $X \taking{\iota_X} X+Y$ and $Y \taking{\iota_Y} X+Y$
+(that's Greek iota, think inclusion)
+such that for any object $A$ and maps $X \taking g A$, $Y \taking h A$
+there is a unique $f$ for which
+\begin{center}
+\begin{tikzcd}
+ X \ar[rd, "\iota_X"'] \ar[rrd, "g", bend left] \\
+ & X+Y \ar[r, "\exists! f"] & A \\
+ Y \ar[ru, "\iota_Y"] \ar[rru, "h"', bend right]
+\end{tikzcd}
+\end{center}
+commutes.
+We'll leave some of the concrete examples as an exercise this time,
+for example:
+\begin{exercise}
+ Describe the coproduct in $\catname{Set}$.
+\end{exercise}
+Predictable terminology: a coproduct is a universal \vocab{cocone}.
+
+Spoiler alert:
+this construction can be generalized vastly to so-called ``limits'',
+and we'll do so later on.
+
+\section{Monic and epic maps}
+The notion of ``injective'' doesn't make sense
+in an arbitrary category since arrows need not be functions.
+The correct categorical notion is:
+\begin{definition}
+ A map $X \taking f Y$ is \vocab{monic}
+ (or a monomorphism) if for any commutative diagram
+ \begin{center}
+ \begin{tikzcd}
+ A \ar[r, shift left, "g"] \ar[r, shift right, "h"'] & X \ar[r, "f"] & Y
+ \end{tikzcd}
+ \end{center}
+ we must have $g = h$.
+ In other words, $f \circ g = f \circ h \implies g = h$.
+\end{definition}
+\begin{ques}
+ Verify that in a \emph{concrete} category, injective $\implies$ monic.
+\end{ques}
+\begin{ques}
+ Show that the composition of two monic maps is monic.
+\end{ques}
+
+In most but not all situations, the converse is also true.
+\begin{exercise}
+ Show that in $\catname{Set}$, $\catname{Grp}$, $\catname{CRing}$,
+ monic implies injective. (Take $A = \{\ast\}$, $A = \ZZ$, $A = \ZZ[x]$.)
+\end{exercise}
+More generally, as we said before there are many categories
+with a ``free'' object that lets you think of arrows as elements.
+An element of a set is a function $1 \to S$,
+an element of a ring is a function $\ZZ[x] \to R$, et cetera.
+In all these categories,
+the definition of monic literally reads
+``$f$ is injective on $\Hom_\AA(A, X)$''.
+So in these categories, ``monic'' and ``injective'' coincide.
+
+That said, here is the standard counterexample.
+An additive abelian group $G = (G,+)$ is called \emph{divisible}
+if for every $x \in G$ and integer $n > 0$ there exists $y \in G$ with $ny = x$.
+Let $\catname{DivAbGrp}$ be the category of such groups.
+\begin{exercise}
+ Show that in $\catname{DivAbGrp}$, the projection $\QQ \to \QQ/\ZZ$ is monic but not injective.
+\end{exercise}
+
+Of course, we can also take the dual notion.
+\begin{definition}
+ A map $X \taking f Y$ is \vocab{epic}
+ (or an epimorphism) if for any commutative diagram
+ \begin{center}
+ \begin{tikzcd}
+ X \ar[r, "f"] & Y \ar[r, "g", shift left] \ar[r, "h"', shift right] & A
+ \end{tikzcd}
+ \end{center}
+ we must have $g = h$.
+ In other words, $g \circ f = h \circ f \implies g = h$.
+\end{definition}
+
+This is kind of like surjectivity, although the analogy is a bit looser than last time.
+Note that in concrete categories, surjective $\implies$ epic.
+\begin{exercise}
+ Show that in $\catname{Set}$, $\catname{Grp}$, $\catname{Ab}$, $\catname{Vect}_k$, $\catname{Top}$,
+ the notions of epic and surjective coincide.
+ (For $\catname{Set}$, take $A = \{0, 1\}$.)
+\end{exercise}
+However, there are more cases where it fails.
+Most notably:
+\begin{example}
+ [Epic but not surjective]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii In $\catname{CRing}$, for instance, the inclusion $\ZZ \injto \QQ$ is epic
+ (and not surjective).
+ Indeed, if two homomorphisms $\QQ \to A$ agree on
+ every integer then they agree everywhere (why?).
+ \ii In the category of \emph{Hausdorff} topological spaces
+ (every two points have disjoint open neighborhoods),
+ in fact epic $\iff$ dense image (like $\QQ \injto \RR$).
+ \end{enumerate}
+ Thus failures arise when a function $f : X \to Y$ can be determined by just some of the points of $X$.
+\end{example}
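+Here is a quick sketch of the ``why'' in part (a), in case you want it.
+If $\phi : \QQ \to A$ is a ring homomorphism, then for each nonzero integer $q$
+the value $\phi(1/q)$ is forced, since
+\[ \phi(q) \cdot \phi(1/q) = \phi(1) = 1_A \]
+exhibits $\phi(1/q)$ as the (unique) inverse of $\phi(q)$.
+Hence $\phi$ is completely determined by its values on $\ZZ$.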
+
+\section\problemhead
+
+\begin{problem}
+ In the category $\catname{Vect}_k$ of $k$-vector spaces
+ (for a field $k$),
+ what are the initial and terminal objects?
+\end{problem}
+
+\begin{dproblem}
+ What is the coproduct $X+Y$ in the categories
+ $\catname{Set}$, $\catname{Vect}_k$, and a poset?
+\end{dproblem}
+
+\begin{problem}
+ In any category $\AA$ where all products exist,
+ show that \[ (X \times Y) \times Z \cong X \times (Y \times Z) \]
+ where $X$, $Y$, $Z$ are arbitrary objects.
+ (Here both sides refer to the objects, as in \Cref{abuse:object}.)
+ \label{prob:associative_product}
+\end{problem}
+
+\begin{problem}
+ \gim
+ Consider a category $\AA$ with a \vocab{zero object},
+ meaning an object which is both initial and terminal.
+ Given objects $X$ and $Y$ in $\AA$,
+ prove that the projection $X \times Y \to X$ is epic.
+\end{problem}
diff --git a/books/napkin/cellular.tex b/books/napkin/cellular.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e33ae245ef5ff68af4b31c636d85448020dc6be5
--- /dev/null
+++ b/books/napkin/cellular.tex
@@ -0,0 +1,449 @@
+\chapter{Bonus: Cellular homology}
+We now introduce cellular homology, which essentially lets us compute
+the homology groups of any CW complex we like.
+
+\section{Degrees}
+\prototype{$z \mapsto z^d$ has degree $d$.}
+For any $n > 0$ and map $f : S^n \to S^n$, consider
+\[ f_\ast : \underbrace{H_n(S^n)}_{\cong \ZZ} \to \underbrace{H_n(S^n)}_{\cong \ZZ} \]
+which must be multiplication by some constant $d$.
+This $d$ is called the \vocab{degree} of $f$, denoted $\deg f$.
+\begin{ques}
+ Show that $\deg(f \circ g) = \deg(f) \deg(g)$.
+\end{ques}
+
+\begin{example}
+ [Degree]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii For $n=1$, the map $z \mapsto z^k$ (viewing $S^1 \subseteq \CC$)
+ has degree $k$.
+ \ii A reflection map $(x_0, x_1, \dots, x_n) \mapsto (-x_0, x_1, \dots, x_n)$
+ has degree $-1$; we won't prove this, but geometrically this should be clear.
+ \ii The antipodal map $x \mapsto -x$ has degree $(-1)^{n+1}$
+ since it's the composition of $n+1$ reflections as above.
+ We denote this map by $-\id$.
+ \end{enumerate}
+\end{example}
+
+Obviously, if $f$ and $g$ are homotopic, then $\deg f = \deg g$.
+In fact, a theorem of Hopf says that this is a classifying invariant:
+anytime $\deg f = \deg g$, we have that $f$ and $g$ are homotopic.
+
+One nice application of this:
+\begin{theorem}
+ [Hairy ball theorem]
+ If $n > 0$ is even, then $S^n$ doesn't have a continuous field
+ of nonzero tangent vectors.
+\end{theorem}
+\begin{proof}
+ If the vectors are nonzero then WLOG they have norm $1$;
+ that is, for every $x$ we have an orthogonal unit tangent vector $v(x)$.
+ Then we can construct a homotopy $F : S^n \times [0,1] \to S^n$ by
+ \[ (x,t) \mapsto (\cos \pi t)x + (\sin \pi t) v(x) \]
+ which gives a homotopy from $\id$ to $-\id$.
+ So $\deg(\id) = \deg(-\id)$, which means $1 = (-1)^{n+1}$
+ so $n$ must be odd.
+\end{proof}
+Of course, one can construct such a vector field whenever $n$ is odd.
+For example, when $n=1$ such a vector field is drawn below.
+\begin{center}
+ \begin{asy}
+ size(5cm);
+ draw(unitcircle, blue+1);
+ label("$S^1$", dir(100), dir(100), blue);
+ void arrow(real theta) {
+ pair P = dir(theta);
+ dot(P);
+ pair delta = 0.8*P*dir(90);
+ draw( P--(P+delta), EndArrow );
+ }
+ arrow(0);
+ arrow(50);
+ arrow(140);
+ arrow(210);
+ arrow(300);
+ \end{asy}
+\end{center}
+
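+In general, for odd $n = 2m-1$ one explicit choice
+(generalizing the rotating field drawn above) is
+\[ v(x_1, x_2, \dots, x_{2m-1}, x_{2m})
+ = (-x_2, x_1, \dots, -x_{2m}, x_{2m-1}), \]
+which is continuous, has norm $1$, and satisfies $v(x) \cdot x = 0$.
+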
+
+\section{Cellular chain complex}
+Before starting, we state:
+\begin{lemma}
+ [CW homology groups]
+ Let $X$ be a CW complex. Then
+ \begin{align*}
+ H_k(X^n, X^{n-1}) &\cong
+ \begin{cases}
+ \ZZ^{\oplus\text{\#$n$-cells of $X$}} & k = n \\
+ 0 & \text{otherwise}.
+ \end{cases} \\
+ \intertext{and}
+ H_k(X^n) &\cong
+ \begin{cases}
+ H_k(X) & k \le n-1 \\
+ 0 & k \ge n+1.
+ \end{cases}
+ \end{align*}
+\end{lemma}
+\begin{proof}
+ % I'll prove just the case where $X$ is finite-dimensional as usual.
+ The first part is immediate by noting that $(X^n, X^{n-1})$ is a good pair
+ and $X^n/X^{n-1}$ is a wedge sum of $n$-spheres, one for each $n$-cell.
+ For the second part, fix $k$ and note that, as long as $n \le k-1$ or $n \ge k+2$,
+ the long exact sequence of the pair $(X^n, X^{n-1})$ contains the exact segment
+ \[
+ \underbrace{H_{k+1}(X^n, X^{n-1})}_{=0}
+ \to H_k(X^{n-1})
+ \to H_k(X^n)
+ \to \underbrace{H_{k}(X^n, X^{n-1})}_{=0}.
+ \]
+ So we have isomorphisms
+ \[ H_k(X^{k-1}) \cong H_k(X^{k-2}) \cong \dots \cong H_k(X^0) = 0 \]
+ and
+ \[ H_k(X^{k+1}) \cong H_k(X^{k+2}) \cong \dots \cong H_k(X). \qedhere \]
+\end{proof}
+
+So, we know that the groups $H_k(X^k, X^{k-1})$ are super nice:
+they are free abelian with basis given by the cells of $X$.
+So, we give them a name:
+\begin{definition}
+ For a CW complex $X$, we define
+ \[ \Cells_k(X) = H_k(X^k, X^{k-1}) \]
+ where $\Cells_0(X) = H_0(X^0, \varnothing) = H_0(X^0)$ by convention.
+ So $\Cells_k(X)$ is a free abelian group with basis given by
+ the $k$-cells of $X$.
+\end{definition}
+
+Now, let's use our long exact sequences to string together maps
+between the groups $\Cells_k = H_k(X^k, X^{k-1})$.
+Consider the following diagram.
+
+\begin{center}
+\newcommand{\CX}[1]{\boxed{\color{blue}\Cells_{#1}(X)}}
+\begin{tikzcd}[column sep=tiny]
+ & \underbrace{H_3(X^2)}_{=0} \ar[d, "0"] \\
+ \CX{4} \ar[r, "\partial_4"] \ar[rd, "d_4", blue]
+ & H_3(X^3) \ar[r, two heads] \ar[d, "0"]
+ & \underbrace{H_3(X^4)}_{\cong H_3(X)} \ar[r, "0"]
+ & \underbrace{H_3(X^4, X^3)}_{= 0} \\
+ & \CX{3} \ar[d, "\partial_3"] \ar[rd, "d_3", blue]
+ && \underbrace{H_1(X^0)}_{=0} \ar[d, "0"] \\
+ \underbrace{H_2(X^1)}_{=0} \ar[r, "0"]
+ & H_2(X^2) \ar[r, hook] \ar[d, two heads]
+ & \CX{2} \ar[r, "\partial_2"] \ar[rd, "d_2", blue]
+ & H_1(X^1) \ar[r, two heads]
+ & \underbrace{H_1(X^2)}_{\cong H_1(X)} \ar[r, "0"]
+ & \underbrace{H_1(X^2, X^1)}_{=0} \\
+ & \underbrace{H_2(X^3)}_{\cong H_2(X)} \ar[d, "0"]
+ && \CX{1} \ar[d, "\partial_1"] \ar[rd, "d_1", blue] \\
+ & \underbrace{H_2(X^3, X^2)}_{=0}
+ & \underbrace{H_0(\varnothing)}_{=0} \ar[r, "0"]
+ & H_0(X^0) \ar[r, hook] \ar[d, two heads]
+ & \CX{0} \ar[r, "\partial_0"]
+ & \dots \\
+ &&& \underbrace{H_0(X^1)}_{\cong H_0(X)} \ar[d, "0"] \\
+ &&& \underbrace{H_0(X^1, X^0)}_{=0}
+\end{tikzcd}
+\end{center}
+
+The idea is that we have taken all the exact sequences generated by adjacent
+skeletons, and strung them together at the groups $H_k(X^k)$,
+with half the exact sequences being laid out vertically
+and the other half horizontally.
+
+In that case, composition generates a sequence of blue maps $d_k$
+between the $H_k(X^k, X^{k-1})$ as shown.
+\begin{ques}
+ Show that the composition of two adjacent blue arrows is zero.
+\end{ques}
+
+So from the diagram above, we can read off a sequence of arrows
+\[
+ \dots \taking{d_5} \Cells_4(X) \taking{d_4} \Cells_3(X)
+ \taking{d_3} \Cells_2(X) \taking{d_2} \Cells_1(X)
+ \taking{d_1} \Cells_0(X) \taking{d_0} 0.
+\]
+This is a chain complex, called the \vocab{cellular chain complex};
+as mentioned before, all the groups in it are free abelian,
+but these ones are especially nice because for most reasonable CW complexes,
+they are also finitely generated
+(unlike the massive $C_\bullet(X)$ that we had earlier).
+In other words, the $H_k(X^k, X^{k-1})$ are especially nice ``concrete'' free groups
+that one can actually work with.
+
+The other reason we care is that in fact:
+\begin{theorem}[Cellular chain complex gives $H_n(X)$]
+ \label{thm:cellular_chase}
+ The $k$th homology group of the cellular chain complex
+ is isomorphic to $H_k(X)$.
+\end{theorem}
+\begin{proof}
+ Follows from the diagram; \Cref{prob:diagram_chase}.
+\end{proof}
+
+A nice application of this is to define
+the \vocab{Euler characteristic} of a finite CW complex $X$.
+Of course we can write
+\[ \chi(X) = \sum_n (-1)^n \cdot \#(\text{$n$-cells of $X$}) \]
+which generalizes the familiar $V-E+F$ formula.
+However, this definition is unsatisfactory because it
+depends on the choice of CW complex, while we actually
+want $\chi(X)$ to only depend on the space $X$ itself
+(and not how it was built). In light of this, we prove that:
+\begin{theorem}
+ [Euler characteristic via Betti numbers]
+ For any finite CW complex $X$ we have
+ \[ \chi(X) = \sum_n (-1)^n \rank H_n(X). \]
+\end{theorem}
+Thus $\chi(X)$ does not depend on the choice of CW decomposition.
+The numbers
+\[ b_n = \rank H_n(X) \]
+are called the \vocab{Betti numbers} of $X$.
+In fact, we can use this to define $\chi(X)$ for any reasonable space;
+we are happy because in the (frequent) case that $X$ is a CW complex,
+the two definitions agree.
+
+\begin{proof}
+ We quote the fact that if $0 \to A \to B \to C \to D \to 0$
+ is exact then $\rank B + \rank D = \rank A + \rank C$.
+ Then for example the row
+ \begin{center}
+ \begin{tikzcd}
+ \underbrace{H_2(X^1)}_{=0} \ar[r, "0"] & H_2(X^2) \ar[r, hook] &
+ H_2(X^2, X^1) \ar[r, "\partial_2"] & H_1(X^1) \ar[r, two heads] &
+ \underbrace{H_1(X^2)}_{\cong H_1(X)} \ar[r, "0"]
+ & \underbrace{H_1(X^2, X^1)}_{=0}
+ \end{tikzcd}
+ \end{center}
+ from the cellular diagram gives
+ \[ \#(\text{$2$-cells}) + \rank H_1(X)
+ = \rank H_2(X^2) + \rank H_1(X^1). \]
+ More generally,
+ \[ \#(\text{$k$-cells}) + \rank H_{k-1}(X)
+ = \rank H_k(X^k) + \rank H_{k-1}(X^{k-1}) \]
+ which holds also for $k=0$ if we drop the $H_{-1}$ terms
+ (since $\#\text{$0$-cells} = \rank H_0(X^0)$ is obvious).
+ Multiplying this by $(-1)^k$ and summing across $k \ge 0$,
+ the right-hand side telescopes to zero
+ (the sum is finite since $X$ is a finite CW complex), so
+ \[ \chi(X) + \sum_{k \ge 1} (-1)^k \rank H_{k-1}(X) = 0 \]
+ which rearranges to the conclusion.
+\end{proof}
+
+\begin{example}
+ [Examples of Betti numbers]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The Betti numbers of $S^n$ are $b_0 = b_n = 1$,
+ and zero elsewhere. The Euler characteristic is $1 + (-1)^n$.
+ \ii The Betti numbers of a torus $S^1 \times S^1$
+ are $b_0 = 1$, $b_1 = 2$, $b_2 = 1$, and zero elsewhere.
+ Thus the Euler characteristic is $0$.
+ \ii The Betti numbers of $\CP^n$ are $b_0 = b_2 = \dots = b_{2n} = 1$,
+ and zero elsewhere. Thus the Euler characteristic is $n+1$.
+ \ii The Betti numbers of the Klein bottle
+ are $b_0 = 1$, $b_1 = 1$ and zero elsewhere.
+ Thus the Euler characteristic is $0$, the same as the torus
+ (also since their CW structures use the same number of cells).
+ \end{enumerate}
+ One notices that in the ``nice'' spaces $S^n$, $S^1 \times S^1$ and $\CP^n$
+ there is a nice symmetry in the Betti numbers, namely $b_k = b_{n-k}$.
+ This is true more generally; see Poincar\'e duality and \Cref{prob:betti}.
+\end{example}
+
+\section{The cellular boundary formula}
+In fact, one can describe explicitly what the maps $d_n$ are.
+Recalling that $H_k(X^k, X^{k-1})$ has as basis the $k$-cells of $X$, we obtain:
+\begin{theorem}
+ [Cellular boundary formula for $k=1$]
+ For $k=1$, \[ d_1 : \Cells_1(X) \to \Cells_0(X) \] is just the boundary map.
+\end{theorem}
+\begin{theorem}
+ [Cellular boundary for $k > 1$]
+ Let $k > 1$ be an integer.
+ Let $e^k$ be a $k$-cell, and let $\{e_\beta^{k-1}\}_\beta$
+ denote all $(k-1)$-cells of $X$.
+ Then \[ d_k : \Cells_k(X) \to \Cells_{k-1}(X) \]
+ is given on basis elements by
+ \[ d_k(e^k) = \sum_\beta d_\beta e_\beta^{k-1} \]
+ where $d_\beta$ is the degree of the composed map
+ \[ S^{k-1} = \partial D_\beta^k \xrightarrow{\text{attach}}
+ X^{k-1} \surjto S_\beta^{k-1}. \]
+ Here the first arrow is the attaching map for $e^k$
+ and the second arrow is the quotient of collapsing
+ $X^{k-1} \setminus e^{k-1}_\beta$ to a point.
+\end{theorem}
+This gives us an algorithm for computing homology groups of a CW complex:
+\begin{itemize}
+ \ii Construct the cellular chain complex,
+ where $\Cells_k(X)$ is $\ZZ^{\oplus \# \text{$k$-cells}}$.
+ \ii $d_1 : \Cells_1(X) \to \Cells_0(X)$ is just the boundary map
+ (so $d_1(e^1)$ is the difference of the two endpoints).
+ \ii For any $k > 1$, we compute $d_k : \Cells_k(X) \to \Cells_{k-1}(X)$
+ on basis elements as follows.
+ Repeat the following for each $k$-cell $e^k$:
+ \begin{itemize}
+ \ii For every $(k-1)$-cell $e^{k-1}_\beta$,
+ compute the degree $d_\beta$ of the composition of the
+ attaching map of $e^k$ with the collapse onto $e^{k-1}_\beta$.
+ \ii Then $d_k(e^k) = \sum_\beta d_\beta e^{k-1}_\beta$.
+ \end{itemize}
+ \ii Now we have the maps of the cellular chain complex,
+ so we can compute the homologies directly
+ (by taking the quotient of the kernel by the image).
+\end{itemize}
+
+We can use this for example to compute the homology groups of the torus again,
+as well as the Klein bottle and other spaces.
+
+\begin{example}
+ [Cellular homology of a torus]
+ Consider the torus built from $e^0$, $e^1_a$, $e^1_b$ and $e^2$ as before,
+ where $e^2$ is attached via the word $aba\inv b\inv$.
+ For example, $X^1$ is
+ \begin{center}
+ \begin{asy}
+ unitsize(0.8cm);
+ draw(shift(-1,0)*unitcircle, blue, MidArrow);
+ draw(shift(1,0)*rotate(180)*unitcircle, red, MidArrow);
+ label("$e^1_a$", 2*dir(180), dir(180), blue);
+ label("$e^1_b$", 2*dir(0), dir(0), red);
+ dotfactor *= 1.4;
+ dot("$e^0$", origin, dir(0));
+ \end{asy}
+ \end{center}
+ The cellular chain complex is
+ \begin{center}
+ \begin{tikzcd}
+ 0 \ar[r]
+ & \ZZ e^2 \ar[r, "d_2"]
+ & \ZZ e^1_a \oplus \ZZ e^1_b \ar[r, "d_1"]
+ & \ZZ e^0 \ar[r, "d_0"]
+ & 0
+ \end{tikzcd}
+ \end{center}
+ Now apply the cellular boundary formulas:
+ \begin{itemize}
+ \ii Recall that $d_1$ was the boundary formula.
+ We have $d_1(e^1_a) = e^0 - e^0 = 0$ and similarly $d_1(e^1_b) = 0$.
+ So $d_1 = 0$.
+
+ \ii For $d_2$, consider the image of the boundary of $e^2$ on $e^1_a$.
+ Around $X^1$, it wraps once around $e^1_a$, once around $e^1_b$,
+ again around $e^1_a$ (in the opposite direction),
+ and again around $e^1_b$ (also in the opposite direction).
+ Once we collapse the entire $e^1_b$ to a point,
+ we see that the degree of the map is $0$.
+ So $d_2(e^2)$ has no $e^1_a$ coefficient.
+ Similarly, it has no $e^1_b$ coefficient, hence $d_2 = 0$.
+ \end{itemize}
+ Thus \[ d_1=d_2=0. \]
+ So at every map in the complex, the kernel of the map
+ is the whole space while the image is $\{0\}$.
+ So the homology groups are $H_0 \cong \ZZ$, $H_1 \cong \ZZ^{\oplus 2}$,
+ $H_2 \cong \ZZ$.
+\end{example}
+\begin{example}
+ [Cellular homology of the Klein bottle]
+ Let $X$ be a Klein bottle.
+ Consider cells $e^0$, $e^1_a$, $e^1_b$ and $e^2$ as before,
+ but this time $e^2$ is attached via the word $abab\inv$.
+ So $d_1$ is still zero, but this time we have
+ $d_2(e^2) = 2e^1_a$ instead (why?).
+ So our diagram looks like
+ \begin{center}
+ \begin{tikzcd}[row sep = tiny]
+ 0 \ar[r, "0"]
+ & \ZZ e^2 \ar[r, "d_2"]
+ & \ZZ e^1_a \oplus \ZZ e^1_b \ar[r, "d_1"]
+ & \ZZ e^0 \ar[r, "d_0"] & 0 \\
+ & e^2 \ar[r, mapsto] & 2e^1_a \\
+ && e^1_a \ar[r, mapsto] & 0 && \\
+ && e^1_b \ar[r, mapsto] & 0
+ \end{tikzcd}
+ \end{center}
+ So we get that $H_0(X) \cong \ZZ$,
+ but \[ H_1(X) \cong \ZZ \oplus \Zc2 \] this time
+ (it is $\ZZ^{\oplus 2}$ modulo the subgroup generated by $2e^1_a$).
+ Also, $\ker d_2 = 0$, and so now $H_2(X) = 0$.
+\end{example}
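Both computations above can be mechanized. If we encode each $d_k$ as an integer matrix (rows indexed by the $(k-1)$-cells, columns by the $k$-cells), then $H_k = \ker d_k / \img d_{k+1}$ can be read off from ranks together with the Smith normal form of $d_{k+1}$, whose diagonal entries give the torsion coefficients. Here is a minimal sketch (the function names are our own, not from any library):

```python
def smith_diagonal(M):
    """Nonzero diagonal entries after diagonalizing an integer matrix by
    row/column operations; their count is the rank, and entries > 1 are
    the torsion coefficients of the cokernel."""
    A = [row[:] for row in M]
    m, n = len(A), (len(A[0]) if M else 0)
    diag, t = [], 0
    while t < min(m, n):
        entries = [(abs(A[i][j]), i, j)
                   for i in range(t, m) for j in range(t, n) if A[i][j]]
        if not entries:
            break
        _, pi, pj = min(entries)          # move smallest entry to (t, t)
        A[t], A[pi] = A[pi], A[t]
        for row in A:
            row[t], row[pj] = row[pj], row[t]
        done = False
        while not done:                   # clear row t and column t
            done = True
            for i in range(t + 1, m):
                q = A[i][t] // A[t][t]
                for j in range(t, n):
                    A[i][j] -= q * A[t][j]
                if A[i][t]:               # nonzero remainder: retry
                    A[t], A[i] = A[i], A[t]
                    done = False
            for j in range(t + 1, n):
                q = A[t][j] // A[t][t]
                for i in range(t, m):
                    A[i][j] -= q * A[i][t]
                if A[t][j]:
                    for i in range(t, m):
                        A[i][t], A[i][j] = A[i][j], A[i][t]
                    done = False
        diag.append(abs(A[t][t]))
        t += 1
    return diag

def homology(d_k, d_k1, num_k_cells):
    """H_k = ker(d_k)/im(d_{k+1}) as (Betti number, torsion coefficients).
    Pass [] for a zero map with no matrix, e.g. d_0."""
    rank_k = len(smith_diagonal(d_k))
    divisors = smith_diagonal(d_k1)
    betti = num_k_cells - rank_k - len(divisors)
    return betti, [d for d in divisors if d > 1]

# Klein bottle: d_1 = 0 (a 1x2 matrix), d_2(e^2) = 2 e^1_a (a 2x1 matrix)
d1, d2 = [[0, 0]], [[2], [0]]
print(homology([], d1, 1))   # H_0: (1, [])  i.e. Z
print(homology(d1, d2, 2))   # H_1: (1, [2]) i.e. Z + Z/2
print(homology(d2, [], 1))   # H_2: (0, [])  i.e. 0
```

Replacing $d_2$ with the zero matrix recovers the torus answer $\ZZ$, $\ZZ^{\oplus 2}$, $\ZZ$.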
+
+\section\problemhead
+\begin{dproblem}
+ Let $n$ be a positive integer.
+ Show that
+ \[
+ H_k(\CP^n) \cong
+ \begin{cases}
+ \ZZ & k=0,2,4,\dots,2n \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ \begin{hint}
+ $\CP^n$ has no cells in adjacent dimensions,
+ so all $d_k$ maps must be zero.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ Show that a non-surjective map $f : S^n \to S^n$ has degree zero.
+ \begin{hint}
+ The space $S^n - \{x_0\}$ is contractible.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[Moore spaces]
+ \gim
+ Let $G_1$, $G_2$, \dots, $G_N$ be a sequence of
+ finitely generated abelian groups.
+ Construct a space $X$ such that
+ \[
+ \wt H_n(X)
+ \cong
+ \begin{cases}
+ G_n & 1 \le n \le N \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+\end{problem}
+
+\begin{problem}
+ \label{prob:diagram_chase}
+ Prove \Cref{thm:cellular_chase},
+ showing that the homology groups of $X$
+ coincide with the homology groups of the cellular chain complex.
+ \begin{hint}
+ You won't need to refer to any elements.
+ Start with \[ H_2(X) \cong H_2(X^3) \cong
+ H_2(X^2) / \ker \left[ H_2(X^2) \surjto H_2(X^3) \right], \] say.
+ Take note of the marked injective and surjective arrows.
+ \end{hint}
+ \begin{sol}
+ For concreteness, let's just look at the homology at $H_2(X^2, X^1)$
+ and show it's isomorphic to $H_2(X)$.
+ According to the diagram
+ \begin{align*}
+ H_2(X) &\cong H_2(X^3) \\
+ &\cong H_2(X^2) / \ker \left[ H_2(X^2) \surjto H_2(X^3) \right] \\
+ &\cong H_2(X^2) / \img \partial_3 \\
+ &\cong \img\left[ H_2(X^2) \injto H_2(X^2, X^1) \right] / \img \partial_3 \\
+ &\cong \ker(\partial_2) / \img\partial_3 \\
+ &\cong \ker d_2 / \img d_3. \qedhere
+ \end{align*}
+ \end{sol}
+\end{problem}
+
+\begin{dproblem}
+ \gim
+ Let $n$ be a positive integer. Show that
+ \[
+ H_k(\RP^n)
+ \cong
+ \begin{cases}
+ \ZZ & \text{if $k=0$ or $k=n\equiv 1 \pmod 2$} \\
+ \Zc2 & \text{if $k$ is odd and $0 < k < n$} \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ \begin{hint}
+ There is one cell of each dimension.
+ Show that the degree of $d_k$ is $\deg(\id)+\deg(-\id) = 1 + (-1)^k$,
+ hence $d_k$ is zero or multiplication by $2$ depending
+ on whether $k$ is odd or even.
+ \end{hint}
+\end{dproblem}
diff --git a/books/napkin/characters.tex b/books/napkin/characters.tex
new file mode 100644
index 0000000000000000000000000000000000000000..64aa41b12e375162c9e6474266da59dd92106a08
--- /dev/null
+++ b/books/napkin/characters.tex
@@ -0,0 +1,504 @@
+\chapter{Characters}
+Characters are basically the best thing ever.
+To every representation $V$ of $A$ we will attach a
+so-called character $\chi_V : A \to k$.
+It will turn out that the character $\chi_V$
+will determine the representation $V$ completely (up to isomorphism).
+Thus a representation is just specified by a set of $\dim A$ numbers.
+
+\section{Definitions}
+\begin{definition}
+ Let $V = (V, \rho)$ be a finite-dimensional representation of $A$.
+ The \vocab{character} $\chi_V : A \to k$ attached to
+ $V$ is defined by $\chi_V = \Tr \circ \rho$, i.e.\
+ \[ \chi_V(a) \defeq \Tr\left( \rho(a) : V \to V \right). \]
+\end{definition}
+Since $\Tr$ and $\rho$ are additive, this is a $k$-linear map
+(but it is not multiplicative).
+Note also that $\chi_{V \oplus W} = \chi_V + \chi_W$
+for any representations $V$ and $W$.
+
+We are especially interested in the case $A = k[G]$, of course.
+As usual, we just have to specify $\chi_V(g)$ for each
+ $g \in G$ to get the whole map $k[G] \to k$.
+Thus we often think of $\chi_V$ as a function $G \to k$,
+called a character of the group $G$.
+Here is the case $G = S_3$:
+\begin{example}
+ [Character table of $S_3$]
+ Let's consider the three irreps of $G = S_3$ from before.
+ For $\CC_{\text{triv}}$ all traces are $1$;
+ for $\CC_{\text{sign}}$ the traces are $\pm 1$ depending on sign
+ (obviously, for one-dimensional maps $k \to k$ the trace ``is''
+ just the map itself).
+ For $\refl_0$ we take a basis $(1,0,-1)$ and $(0,1,-1)$, say,
+ and compute the traces directly in this basis.
+ \[
+ \begin{array}{|r|rrrrrr|}
+ \hline
+ \chi_V(g) & \id & (1\;2) & (2\;3) & (3\;1)
+ & (1\;2\;3) & (3\;2\;1) \\ \hline
+ \Ctriv & 1 & 1 & 1 & 1 & 1 & 1 \\
+ \CC_{\mathrm{sign}} & 1 & -1 & -1 & -1 & 1 & 1 \\
+ \refl_0 & 2 & 0 & 0 & 0 & -1 & -1 \\ \hline
+ \end{array}
+ \]
+\end{example}
+The above table is called the \vocab{character table} of the group $G$.
+The table above has certain mysterious properties,
+which we will prove as the chapter progresses.
+\begin{enumerate}[(I)]
+ \ii The value of $\chi_V(g)$ only depends on the conjugacy class of $g$.
+ \ii The number of rows equals the number of conjugacy classes.
+ \ii The sum of the squares of any row is $6$ again!
+ \ii The ``dot product'' of any two rows is zero.
+\end{enumerate}
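Properties (III) and (IV) can be spot-checked mechanically; below we sum over all six group elements, so each conjugacy class's column is repeated according to its size:

```python
# Rows of the character table of S_3, one value per group element,
# in the order: id, (12), (23), (31), (123), (132).
table = {
    "triv": [1, 1, 1, 1, 1, 1],
    "sign": [1, -1, -1, -1, 1, 1],
    "refl": [2, 0, 0, 0, -1, -1],
}

for name, row in table.items():
    assert sum(x * x for x in row) == 6              # property (III)

for a in table:
    for b in table:
        if a != b:
            dot = sum(x * y for x, y in zip(table[a], table[b]))
            assert dot == 0                          # property (IV)
print("orthogonality checks pass")
```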
+
+\begin{abuse}
+ The name ``character'' for $\chi_V : G \to k$ is a bit of a misnomer.
+ This $\chi_V$ is not multiplicative in any way,
+ as the above example shows: one can almost think of it as
+ an element of $k^{\oplus |G|}$.
+\end{abuse}
+
+\begin{ques}
+ Show that $\chi_V(1_A) = \dim V$,
+ so one can read the dimensions of the representations
+ from the leftmost column of a character table.
+\end{ques}
+
+\section{The dual space modulo the commutator}
+For any algebra, we first observe that since $\Tr(TS) = \Tr(ST)$,
+we have for any $V$ that
+\[ \chi_V(ab) = \chi_V(ba). \]
+This explains observation (I) from earlier:
+\begin{ques}
+ Deduce that if $g$ and $h$ are in the same conjugacy class of a
+ group $G$, and $V$ is a representation of $\CC[G]$,
+ then $\chi_V(g) = \chi_V(h)$.
+\end{ques}
+Now, given our algebra $A$ we define the \vocab{commutator} $[A,A]$
+to be the $k$-vector subspace spanned by $xy-yx$ for $x,y \in A$.
+Thus $[A,A]$ is contained in the kernel of each $\chi_V$.
+\begin{definition}
+ The space $A\ab \coloneqq A / [A,A]$ is called the \vocab{abelianization} of $A$.
+ Each character of $A$ can be viewed as a map $A\ab \to k$, i.e.\ an element of $(A\ab)^\vee$.
+\end{definition}
+\begin{example}
+ [Examples of abelianizations]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii If $A$ is commutative, then $[A,A] = \{0\}$
+ and $A\ab = A$.
+ \ii If $A = \Mat_k(d)$, then $[A,A]$ consists exactly
+ of the $d \times d$ matrices of trace zero.
+ (Proof: harmless exercise.)
+ Consequently, $A\ab$ is one-dimensional.
+ \ii Suppose $A = k[G]$.
+ Then in $A\ab$, we identify $gh$ and $hg$ for each $g,h \in G$;
+ equivalently $ghg\inv = h$.
+ So in other words, $A\ab$ is isomorphic to the space of
+ $k$-linear combinations of the \emph{conjugacy classes} of $G$.
+ \end{enumerate}
+\end{example}
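The fact used in (b), that every commutator $xy - yx$ of matrices has trace zero (since $\Tr(xy) = \Tr(yx)$), is easy to spot-check:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

X = [[1, 2], [3, 4]]
Y = [[0, 5], [7, -1]]
C = [[a - b for a, b in zip(r1, r2)]
     for r1, r2 in zip(matmul(X, Y), matmul(Y, X))]  # C = XY - YX
assert trace(C) == 0
```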
+
+\begin{theorem}
+ [Character of representations of algebras]
+ Let $A$ be an algebra over an algebraically closed field. Then
+ \begin{enumerate}[(a)]
+ \ii Characters of pairwise non-isomorphic irreps are
+ linearly independent in $(A\ab)^\vee$.
+ \ii If $A$ is finite-dimensional and semisimple,
+ then the characters attached to irreps
+ form a basis of $(A\ab)^\vee$.
+ \end{enumerate}
+ In particular, in (b) the number of irreps of $A$ equals $\dim A\ab$.
+\end{theorem}
+\begin{proof}
+ Part (a) is more or less obvious by the density theorem:
+ suppose there is a linear dependence, so that for every $a$ we have
+ \[ c_1 \chi_{V_1}(a) + c_2 \chi_{V_2}(a) + \dots + c_r \chi_{V_r} (a) = 0\]
+ for some constants $c_1, \dots, c_r \in k$,
+ where $V_1$, \dots, $V_r$ are pairwise non-isomorphic irreps.
+ \begin{ques}
+ Deduce that $c_1 = \dots = c_r = 0$ from the density theorem.
+ \end{ques}
+ For part (b), assume there are $r$ irreps.
+ We may assume that \[ A = \bigoplus_{i=1}^r \Mat(V_i) \]
+ where $V_1$, \dots, $V_r$ are the irreps of $A$.
+ Since we have already shown the characters are linearly independent
+ we need only show that $\dim ( A / [A,A] ) = r$,
+ which follows from the observation earlier that each $\Mat(V_i)$
+ has a one-dimensional abelianization.
+\end{proof}
+Since $G$ has $\dim \CC[G]\ab$ conjugacy classes,
+this completes the proof of (II).
+
+\section{Orthogonality of characters}
+Now we specialize to the case of finite groups $G$, represented over $\CC$.
+\begin{definition}
+ Let $\Classes(G)$ denote the set of conjugacy classes of $G$.
+\end{definition}
+If $G$ has $r$ conjugacy classes, then it has $r$ irreps.
+Each (finite-dimensional) representation $V$, irreducible or not, gives a
+character $\chi_V$.
+\begin{abuse}
+ From now on, we will often regard $\chi_V$ as a function $G \to \CC$
+ or as a function $\Classes(G) \to \CC$.
+ So for example, we will write both $\chi_V(g)$ (for $g \in G$)
+ and $\chi_V(C)$ (for a conjugacy class $C$);
+ the latter just means $\chi_V(g_C)$ for any representative $g_C \in C$.
+\end{abuse}
+\begin{definition}
+ Let $\FunCl(G)$ denote the set of functions $\Classes(G) \to \CC$
+ viewed as a vector space over $\CC$.
+ We endow it with the inner product
+ \[
+ \left< f_1, f_2 \right> =
+ \frac{1}{|G|}
+ \sum_{g \in G} f_1(g) \ol{f_2(g)}.
+ \]
+\end{definition}
+This is the same ``dot product'' that we mentioned at the beginning,
+when we looked at the character table of $S_3$.
+We now aim to prove the following orthogonality theorem,
+which will imply (III) and (IV) from earlier.
+\begin{theorem}[Orthogonality]
+ For any finite-dimensional complex representations $V$ and $W$
+ of $G$ we have
+ \[ \left< \chi_V, \chi_W \right> = \dim \Homrep(W, V). \]
+ In particular, if $V$ and $W$ are irreps then
+ \[ \left< \chi_V, \chi_W \right>
+ =
+ \begin{cases}
+ 1 & V \cong W \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+\end{theorem}
+\begin{corollary}[Irreps give an orthonormal basis]
+ The characters associated to irreps
+ form an \emph{orthonormal} basis of $\FunCl(G)$.
+\end{corollary}
+
+In order to prove this theorem, we have to define
+the dual representation and the tensor representation,
+which give a natural way to deal with the quantity $\chi_V(g)\ol{\chi_W(g)}$.
+\begin{definition}
+ Let $V = (V, \rho)$ be a representation of $G$.
+ The \vocab{dual representation} $V^\vee$ is the representation on $V^\vee$
+ with the action of $G$ given as follows: for each $\xi \in V^\vee$,
+ the action of $g$ gives a $g \cdot \xi \in V^\vee$ specified by
+ \[ v \xmapsto{g \cdot \xi} \xi\left( \rho(g\inv)(v) \right). \]
+\end{definition}
+\begin{definition}
+ Let $V = (V, \rho_V)$ and $W = (W, \rho_W)$
+ be \emph{group} representations of $G$.
+ The \vocab{tensor product} of $V$ and $W$ is the group representation
+ on $V \otimes W$ with the action of $G$ given on pure tensors by
+ \[
+ g \cdot (v \otimes w)
+ =
+ (\rho_V(g)(v)) \otimes (\rho_W(g)(w)) \]
+ which extends linearly to define the action of $G$ on all of $V \otimes W$.
+\end{definition}
+\begin{remark}
+ Warning: the definition for tensors does \emph{not} extend to algebras.
+ We might hope that $a \cdot (v \otimes w) = (a \cdot v) \otimes (a \cdot w)$
+ would work, but this is not even linear in $a \in A$
+ (what happens if we take $a=2$, for example?).
+\end{remark}
+
+\begin{theorem}
+ [Character traces]
+ If $V$ and $W$ are finite-dimensional representations of $G$,
+ then for any $g \in G$:
+ \begin{enumerate}[(a)]
+ \ii $\chi_{V \oplus W}(g) = \chi_V(g) + \chi_W(g)$.
+ \ii $\chi_{V \otimes W}(g) = \chi_V(g) \cdot \chi_W(g)$.
+ \ii $\chi_{V^\vee}(g) = \ol{\chi_V(g)}$.
+ \end{enumerate}
+\end{theorem}
+\begin{proof}
+ Parts (a) and (b) follow from the identities
+ $\Tr(S \oplus T) = \Tr(S) + \Tr(T)$
+ and $\Tr(S \otimes T) = \Tr(S) \Tr(T)$.
+ However, part (c) is trickier.
+ As $(\rho(g))^{|G|} = \rho(g^{|G|}) = \rho(1_G) = \id_V$
+ by Lagrange's theorem, we can diagonalize $\rho(g)$,
+ say with eigenvalues $\lambda_1$, \dots, $\lambda_n$
+ which are $|G|$th roots of unity,
+ corresponding to eigenvectors $e_1$, \dots, $e_n$.
+ Then we see that in the basis $e_1^\vee$, \dots, $e_n^\vee$,
+ the action of $g$ on $V^\vee$ has eigenvalues
+ $\lambda_1\inv$, $\lambda_2\inv$, \dots, $\lambda_n\inv$.
+ So
+ \[
+ \chi_V(g) = \sum_{i=1}^n \lambda_i \quad\text{and}\quad
+ \chi_{V^\vee}(g) = \sum_{i=1}^n \lambda_i\inv = \sum_{i=1}^n \ol{\lambda_i}
+ \]
+ where the last step follows from the identity $|z|=1 \iff z\inv = \ol z$.
+\end{proof}
+\begin{remark}
+ [Warning]
+ The identities (b) and (c) do not extend linearly to $\CC[G]$,
+ i.e.\ it is not true for example that $\chi_{V^\vee}(a) = \ol{\chi_V(a)}$
+ if we think of $\chi_V$ as a map $\CC[G] \to \CC$.
+\end{remark}
+\begin{proof}
+ [Proof of orthogonality relation]
+ The key point is that we can now reduce
+ the sums of products to just a single character by
+ \[ \chi_V(g) \ol{\chi_W(g)} = \chi_{V \otimes W^\vee} (g). \]
+ So we can rewrite the sum in question as just
+ \[
+ \left< \chi_V, \chi_W \right>
+ = \frac{1}{|G|} \sum_{g \in G} \chi_{V \otimes W^\vee} (g)
+ = \chi_{V \otimes W^\vee}
+ \left( \frac{1}{|G|} \sum_{g \in G} g \right).
+ \]
+ Let $P : V \otimes W^\vee \to V \otimes W^\vee$ be the
+ action of $\frac{1}{|G|} \sum_{g \in G} g$,
+ so we wish to find $\Tr P$.
+ \begin{exercise}
+ Show that $P$ is idempotent.
+ (Compute $P \circ P$ directly.)
+ \end{exercise}
+ Hence $V \otimes W^\vee = \ker P \oplus \img P$ (by \Cref{prob:idempotent})
+ and $\img P$ is the subspace of elements which are fixed under $G$.
+ From this we deduce that
+ \[ \Tr P = \dim \img P =
+ \dim \left\{ x \in V \otimes W^\vee
+ \mid g \cdot x = x \; \forall g \in G \right\}.
+ \]
+ Now, consider the natural isomorphism $V \otimes W^\vee \to \Hom(W, V)$.
+ \begin{exercise}
+ Let $g \in G$.
+ Show that under this isomorphism, $T \in \Hom(W, V)$
+ satisfies $g \cdot T = T$ if and only if
+ $T(g \cdot w) = g \cdot T(w)$ for each $w \in W$.
+ (This is just unwinding three or four definitions.)
+ \end{exercise}
+ Consequently, $\left< \chi_V, \chi_W \right> = \Tr P = \dim \Homrep(W,V)$
+ as desired.
+\end{proof}
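The averaging operator $P$ in the proof can be seen concretely. Below is a sketch using the regular representation of $\ZZ/3$ (so each $g$ acts by a cyclic permutation matrix): the average $P$ of the three matrices satisfies $P^2 = P$, and $\Tr P = 1$ matches the one-dimensional space of invariants.

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The three permutation matrices of the regular representation of Z/3.
def perm_matrix(shift, n=3):
    return [[1 if (i - j) % n == shift else 0 for j in range(n)]
            for i in range(n)]

mats = [perm_matrix(s) for s in range(3)]
third = Fraction(1, 3)
P = [[third * sum(M[i][j] for M in mats) for j in range(3)]
     for i in range(3)]

assert matmul(P, P) == P                      # P is idempotent
assert sum(P[i][i] for i in range(3)) == 1    # Tr P = dim of invariants
```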
+
+The orthogonality relation gives us a fast and mechanical way to check
+whether a finite-dimensional representation $V$ is irreducible.
+Namely, compute the traces $\chi_V(g)$ for each $g \in G$,
+and then check whether $\left< \chi_V, \chi_V \right> = 1$.
+So, for example, we could have seen directly from the character table
+that the three representations of $S_3$ we found are irreps.
+Thus, we can now efficiently verify whether a given collection of
+representations is a complete set of irreps.
+
+\section{Examples of character tables}
+\begin{example}
+ [Dihedral group on $10$ elements]
+ Let $D_{10} = \left< r,s \mid r^5 = s^2 = 1, rs = sr\inv \right>$.
+ Let $\omega = \exp(\frac{2\pi i}{5})$.
+ We write four representations of $D_{10}$:
+ \begin{itemize}
+ \ii $\Ctriv$, all elements of $D_{10}$ act as the identity.
+ \ii $\Csign$, $r$ acts as the identity while $s$ acts by negation.
+ \ii $V_1$, which is two-dimensional and given by
+ $r \mapsto \begin{bmatrix} \omega & 0 \\ 0 & \omega^4 \end{bmatrix}$
+ and $s \mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$.
+ \ii $V_2$, which is two-dimensional and given by
+ $r \mapsto \begin{bmatrix} \omega^2 & 0 \\ 0 & \omega^3 \end{bmatrix}$
+ and $s \mapsto \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$.
+ \end{itemize}
+ We claim that these four representations are irreducible
+ and pairwise non-isomorphic.
+ We do so by writing the character table:
+ \[
+ \begin{array}{|c|rccr|}
+ \hline
+ D_{10} & 1 & r, r^4 & r^2, r^3 & sr^k \\ \hline
+ \Ctriv & 1 & 1 & 1 & 1 \\
+ \Csign & 1 & 1 & 1 & -1 \\
+ V_1 & 2 & \omega+\omega^4 & \omega^2+\omega^3 & 0 \\
+ V_2 & 2 & \omega^2+\omega^3 & \omega+\omega^4 & 0 \\ \hline
+ \end{array}
+ \]
+ Then a direct computation shows the orthogonality relations,
+ hence we indeed have an orthonormal basis.
+ For example, $\left< \chi_{\Ctriv}, \chi_{\Csign} \right>
+ = \frac{1}{10}\left( 1 + 2 \cdot 1 + 2 \cdot 1 + 5 \cdot (-1) \right) = 0$.
+\end{example}
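The full orthonormality claim for this table can be verified numerically, weighting each column by its class size $(1, 2, 2, 5)$; this is just a sketch double-checking the table above:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 5)       # omega, a primitive 5th root of unity
weights = [1, 2, 2, 5]                 # sizes of the conjugacy classes
table = [
    [1, 1, 1, 1],                      # C_triv
    [1, 1, 1, -1],                     # C_sign
    [2, w + w**4, w**2 + w**3, 0],     # V_1
    [2, w**2 + w**3, w + w**4, 0],     # V_2
]

def inner(r1, r2):
    return sum(k * a * b.conjugate()
               for k, a, b in zip(weights, r1, r2)) / 10

for i, r1 in enumerate(table):
    for j, r2 in enumerate(table):
        expected = 1 if i == j else 0
        assert abs(inner(r1, r2) - expected) < 1e-9
print("the four rows are orthonormal")
```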
+
+\begin{example}
+ [Character table of $S_4$]
+ We now have enough machinery to compute the character
+ table of $S_4$, which has five conjugacy classes
+ (corresponding to cycle types $\id$, $2$, $3$, $4$ and $2+2$).
+ First of all, we note that it has two one-dimensional representations,
+ $\Ctriv$ and $\Csign$, and these are the only ones
+ (because there are only two homomorphisms $S_4 \to \CC^\times$).
+ So thus far we have the table
+ \[
+ \begin{array}{|c|rrrrr|}
+ \hline
+ S_4 & 1 & (\bullet\;\bullet) & (\bullet\;\bullet\;\bullet)
+ & (\bullet\;\bullet\;\bullet\;\bullet)
+ & (\bullet\;\bullet)(\bullet\;\bullet)
+ \\ \hline
+ \Ctriv & 1 & 1 & 1 & 1 & 1 \\
+ \Csign & 1 & -1 & 1 & -1 & 1 \\
+ \vdots & \multicolumn{5}{|c|}{\vdots}
+ \end{array}
+ \]
+ Note the columns represent $1+6+8+6+3=24$ elements.
+
+ Now, the remaining three representations have dimensions
+ $d_1$, $d_2$, $d_3$ with
+ \[ d_1^2 + d_2^2 + d_3^2 = 4! - 2 = 22 \]
+ which has only $(d_1, d_2, d_3) = (2,3,3)$ and permutations.
+ Now, we can take the $\refl_0$ representation
+ \[ \left\{ (w,x,y,z) \mid w+x+y+z=0 \right\} \]
+ with basis $(1,0,0,-1)$, $(0,1,0,-1)$ and $(0,0,1,-1)$.
+ This can be geometrically checked to be irreducible,
+ but we can also do this numerically by computing the
+ character directly (this is tedious):
+ it comes out to have $3$, $1$, $0$, $-1$, $-1$
+ which indeed gives norm
+ \[
+ \left< \chi_{\refl_0}, \chi_{\refl_0} \right>
+ =
+ \frac{1}{4!}
+ \left(
+ \underbrace{3^2}_{\id}
+ + \underbrace{6\cdot(1)^2}_{(\bullet\;\bullet)}
+ + \underbrace{8\cdot(0)^2}_{(\bullet\;\bullet\;\bullet)}
+ + \underbrace{6\cdot(-1)^2}_{(\bullet\;\bullet\;\bullet\;\bullet)}
+ + \underbrace{3\cdot(-1)^2}_{(\bullet\;\bullet)(\bullet\;\bullet)}
+ \right)
+ = 1.
+ \]
+ Note that we can also tensor this with the sign representation,
+ to get another irreducible representation
+ (since $\Csign$ has all traces $\pm 1$, the norm doesn't change).
+ Finally, we recover the final row using orthogonality
+ (which we name $\CC^2$, for lack of a better name);
+ hence the completed table is as follows.
+ \[
+ \begin{array}{|c|rrrrr|}
+ \hline
+ S_4 & 1 & (\bullet\;\bullet) & (\bullet\;\bullet\;\bullet)
+ & (\bullet\;\bullet\;\bullet\;\bullet)
+ & (\bullet\;\bullet)(\bullet\;\bullet)
+ \\ \hline
+ \Ctriv & 1 & 1 & 1 & 1 & 1 \\
+ \Csign & 1 & -1 & 1 & -1 & 1 \\
+ \CC^2 & 2 & 0 & -1 & 0 & 2 \\
+ \refl_0 & 3 & 1 & 0 & -1 & -1 \\
+ \refl_0 \otimes \Csign & 3 & -1 & 0 & 1 & -1 \\\hline
+ \end{array}
+ \]
+\end{example}
+
+\section\problemhead
+
+\begin{dproblem}
+ [Reading decompositions from characters]
+ Let $W$ be a complex representation of a finite group $G$.
+ Let $V_1$, \dots, $V_r$ be the complex irreps of $G$
+ and set $n_i = \left< \chi_W, \chi_{V_i} \right>$.
+ Prove that each $n_i$ is a non-negative integer and
+ \[ W = \bigoplus_{i=1}^r V_i^{\oplus n_i}. \]
+ \begin{hint}
+ Obvious.
+ Let $W = \bigoplus V_i^{\oplus m_i}$ (possible since $\CC[G]$ is semisimple)
+ thus $\chi_W = \sum_i m_i \chi_{V_i}$.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ Consider complex representations of $G = S_4$.
+ The representation $\refl_0 \otimes \refl_0$
+ is $9$-dimensional, so it is clearly reducible.
+ Compute its decomposition in terms of the five
+ irreducible representations.
+ \begin{hint}
+ Use the previous problem, with $\chi_W = \chi_{\refl_0}^2$.
+ \end{hint}
+ \begin{sol}
+ $\Ctriv \oplus \CC^2 \oplus \refl_0 \oplus (\refl_0\otimes\Csign)$.
+ \end{sol}
+\end{problem}
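Since the multiplicities here are easy to get wrong by hand, the computation $n_i = \left< \chi_W, \chi_{V_i} \right>$ can be carried out numerically from the completed $S_4$ character table (class sizes $1, 6, 8, 6, 3$); all characters here are real, so no conjugation is needed:

```python
from fractions import Fraction

sizes = [1, 6, 8, 6, 3]  # class sizes in S_4: id, 2, 3, 4, 2+2
irreps = {
    "triv":      [1, 1, 1, 1, 1],
    "sign":      [1, -1, 1, -1, 1],
    "C^2":       [2, 0, -1, 0, 2],
    "refl":      [3, 1, 0, -1, -1],
    "refl*sign": [3, -1, 0, 1, -1],
}
chi_W = [x * x for x in irreps["refl"]]  # character of refl_0 (x) refl_0

def inner(r1, r2):
    return Fraction(sum(k * a * b for k, a, b in zip(sizes, r1, r2)), 24)

mult = {name: inner(chi_W, row) for name, row in irreps.items()}
print(mult)  # triv: 1, sign: 0, C^2: 1, refl: 1, refl*sign: 1
```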
+
+\begin{problem}
+ [Tensoring by one-dimensional irreps]
+ Let $V$ and $W$ be irreps of $G$, with $\dim W = 1$.
+ Show that $V \otimes W$ is irreducible.
+ \begin{hint}
+ Characters. Note that $|\chi_W| = 1$ everywhere.
+ \end{hint}
+ \begin{sol}
+ First, observe that $|\chi_W(g)|=1$ for all $g \in G$.
+ \begin{align*}
+ \left< \chi_{V \otimes W}, \chi_{V \otimes W} \right>
+ &= \left< \chi_V \chi_W, \chi_V \chi_W \right> \\
+ &= \frac{1}{|G|} \sum_{g \in G}
+ \left\lvert \chi_V(g) \right\rvert^2
+ \left\lvert \chi_W(g) \right\rvert^2 \\
+ &= \frac{1}{|G|} \sum_{g \in G}
+ \left\lvert \chi_V(g) \right\rvert^2 \\
+ &= \left< \chi_V, \chi_V \right> = 1.
+ \end{align*}
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Quaternions]
+ Compute the character table of the quaternion group $Q_8$.
+ \begin{hint}
+ There are five conjugacy classes, $1$, $-1$
+ and $\pm i$, $\pm j$, $\pm k$.
+ Given four of the representations, orthogonality
+ can give you the fifth one.
+ \end{hint}
+ \begin{sol}
+ The table is given by
+ \[
+ \begin{array}{|c|rrrrr|}
+ \hline
+ Q_8 & 1 & -1 & \pm i & \pm j & \pm k \\ \hline
+ \Ctriv & 1 & 1 & 1 & 1 & 1 \\
+ \CC_i & 1 & 1 & 1 & -1 & -1 \\
+ \CC_j & 1 & 1 & -1 & 1 & -1 \\
+ \CC_k & 1 & 1 & -1 & -1 & 1 \\
+ \CC^2 & 2 & -2 & 0 & 0 & 0 \\\hline
+ \end{array}
+ \]
+ The one-dimensional representations (first four rows)
+ follow by considering the homomorphisms $Q_8 \to \CC^\times$.
+ The last row is two-dimensional and can be recovered
+ by using the orthogonality formula.
+ \end{sol}
+\end{problem}
+
+\begin{sproblem}
+ [Second orthogonality formula]
+ \label{prob:second_orthog}
+ \gim
+ Let $g$ and $h$ be elements of a finite group $G$,
+ and let $V_1$, \dots, $V_r$ be the irreps of $G$.
+ Prove that
+ \[
+ \sum_{i = 1}^r \chi_{V_i}(g) \ol{\chi_{V_i}(h)}
+ =
+ \begin{cases}
+ |C_G(g)| & \text{if $g$ and $h$ are conjugates} \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ Here, $C_G(g) = \left\{ x \in G : xg = gx \right\}$
+ is the centralizer of $g$.
+ \begin{hint}
+ Let $G \times G$ act on $\CC[G]$ by $x \mapsto g x h\inv$,
+ so that $\CC[G] \cong \bigoplus_i V_i \otimes V_i^\vee$
+ as $G \times G$ representations. Then
+ \[ \sum_{i=1}^r \chi_{V_i}(g) \ol{\chi_{V_i}(h)}
+ = \chi_{\bigoplus_i V_i \otimes V_i^\vee}\left( (g,h) \right)
+ = \chi_{\CC[G]}\left( (g,h) \right).
+ \]
+ Now compute the trace of $x \mapsto gxh\inv$
+ using the usual basis for $\CC[G]$.
+ \end{hint}
+\end{sproblem}
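For a concrete check of the identity, one can verify the column sums directly on the character table of $Q_8$ computed in an earlier problem (all characters there are real). A Python sketch:

```python
sizes = [1, 1, 2, 2, 2]  # class sizes of {1}, {-1}, {±i}, {±j}, {±k}
table = [
    [1,  1,  1,  1,  1],
    [1,  1,  1, -1, -1],
    [1,  1, -1,  1, -1],
    [1,  1, -1, -1,  1],
    [2, -2,  0,  0,  0],
]
for c in range(5):
    for d in range(5):
        s = sum(row[c] * row[d] for row in table)  # characters are real here
        expected = 8 // sizes[c] if c == d else 0  # |C_G(g)| = |G| / |class|
        assert s == expected
```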
diff --git a/books/napkin/circuits.tex b/books/napkin/circuits.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6dac302f5236049f7909f8ef04df2740eb328682
--- /dev/null
+++ b/books/napkin/circuits.tex
@@ -0,0 +1,717 @@
+\chapter{Quantum circuits}
+Now that we've discussed qubits, we can talk about how to use them in circuits.
+The key change --- and the reason that quantum circuits can do things that
+classical circuits cannot --- is the fact that we are allowing
+linear combinations of $0$ and $1$.
+
+\section{Classical logic gates}
+In classical logic, we build circuits which take in some bits for input,
+and output some more bits.
+These circuits are built out of individual logic gates.
+For example, the \vocab{AND gate} can be pictured as follows.
+\[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{0} &\qw & \multigate{1}{\textsc{and}} & \rstick{0} \qw \\
+ \lstick{0} &\qw & \ghost{\textsc{and}} &
+ }
+ \hspace{4.5em}
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{0} &\qw & \multigate{1}{\textsc{and}} & \rstick{0} \qw \\
+ \lstick{1} &\qw & \ghost{\textsc{and}} &
+ }
+ \hspace{4.5em}
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{1} &\qw & \multigate{1}{\textsc{and}} & \rstick{0} \qw \\
+ \lstick{0} &\qw & \ghost{\textsc{and}} &
+ }
+ \hspace{4.5em}
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{1} &\qw & \multigate{1}{\textsc{and}} & \rstick{1} \qw \\
+ \lstick{1} &\qw & \ghost{\textsc{and}} &
+ }
+\]
+One can also represent the AND gate using the ``truth table'':
+\[
+ \begin{array}{|cc|c|}
+ \hline
+ A & B & A \text{ and } B \\ \hline
+ 0 & 0 & 0 \\
+ 0 & 1 & 0 \\
+ 1 & 0 & 0 \\
+ 1 & 1 & 1 \\
+ \hline
+ \end{array}
+\]
+Similarly, we have the \vocab{OR gate} and the \vocab{NOT gate}:
+\[
+ \begin{array}{|cc|c|}
+ \hline
+ A & B & A \text{ or } B \\ \hline
+ 0 & 0 & 0 \\
+ 0 & 1 & 1 \\
+ 1 & 0 & 1 \\
+ 1 & 1 & 1 \\
+ \hline
+ \end{array}
+ \qquad
+ \begin{array}{|c|c|}
+ \hline
+ A & \text{not } A \\ \hline
+ 0 & 1 \\
+ 1 & 0 \\
+ \hline
+ \end{array}
+\]
+We also have a so-called \vocab{COPY gate}, which duplicates a bit.
+\[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{0} & \qw & \multigate{1}{\textsc{copy}} & \rstick{0} \qw & \\
+ && & \rstick{0} \qw &
+ }
+ \qquad\qquad
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{1} & \qw & \multigate{1}{\textsc{copy}} & \rstick{1} \qw & \\
+ && & \rstick{1} \qw &
+ }
+\]
+Of course, the first theorem you learn about these gates is that:
+\begin{theorem}
+ [AND, OR, NOT, COPY are universal]
+ The set of four gates AND, OR, NOT, COPY is universal in the sense that
+ any boolean function $f : \{0,1\}^n \to \{0,1\}$
+ can be implemented as a circuit using only these gates.
+\end{theorem}
+\begin{proof}
+ Somewhat silly: we essentially write down a circuit that OR's across
+ all input strings in $f\pre(1)$.
+ For example, suppose we have $n=3$ and want to simulate the function
+ $f(abc)$ with $f(011) = f(110) = 1$ and $0$ otherwise.
+ Then the corresponding Boolean expression for $f$ is simply
+ \[
+ f(abc) =
+ \left[ \text{(not $a$) and $b$ and $c$} \right]
+ \text{ or }
+ \left[ \text{$a$ and $b$ and (not $c$)} \right].
+ \]
+ Clearly, one can do the same for any other $f$,
+ and implement this logic into a circuit.
+\end{proof}
+\begin{remark}
+ Since
+ $x \text{ and } y = \text{not } ( (\text{not $x$}) \text{ or } (\text{not $y$}))$,
+ it follows that in fact, we can dispense with the AND gate.
+\end{remark}
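The construction in the proof is short enough to implement directly. Here is a Python sketch (the function name `sum_of_products` is mine) which builds the circuit as a function, using only AND (`&`), OR (`|`) and NOT (`1 - b`), and checks it on the example from the proof.

```python
from itertools import product

def sum_of_products(f, n):
    # OR across all input strings in the preimage f^{-1}(1),
    # exactly as in the proof
    ones = [x for x in product([0, 1], repeat=n) if f(*x)]
    def circuit(*bits):
        result = 0
        for pattern in ones:
            term = 1
            for b, p in zip(bits, pattern):
                term &= b if p else 1 - b  # AND with (not b) when p = 0
            result |= term                 # OR across the patterns
        return result
    return circuit

# the example from the proof: f(011) = f(110) = 1, and 0 otherwise
f = lambda a, b, c: int((a, b, c) in {(0, 1, 1), (1, 1, 0)})
g = sum_of_products(f, 3)
assert all(f(*x) == g(*x) for x in product([0, 1], repeat=3))
```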
+
+\section{Reversible classical logic}
+\prototype{CNOT gate, Toffoli gate.}
+
+For the purposes of quantum mechanics, this is not enough.
+To carry through the analogy we in fact need gates that are \vocab{reversible},
+meaning the gates are bijections from the input space to the output space.
+In particular, such gates must take the same number of input and output bits.
+\begin{example}[Reversible gates]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii None of the gates AND, OR, COPY are reversible for dimension reasons.
+ \ii The NOT gate, however, is reversible:
+ it is a bijection $\{0,1\} \to \{0,1\}$.
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [The CNOT gate]
+ The controlled-NOT gate, or the \vocab{CNOT} gate,
+ is a reversible $2$-bit gate with the following truth table.
+ \[
+ \begin{array}{|rr|rr|}
+ \hline
+ \multicolumn{2}{|c|}{\text{In}} & \multicolumn{2}{|c|}{\text{Out}} \\
+ \hline
+ 0 & 0 & 0 & 0 \\
+ 1 & 0 & 1 & 1 \\
+ 0 & 1 & 0 & 1 \\
+ 1 & 1 & 1 & 0 \\ \hline
+ \end{array}
+ \]
+ In other words, this gate XOR's the first bit to the second bit,
+ while leaving the first bit unchanged.
+ It is depicted as follows.
+ \[
+ \Qcircuit @C=1em @R=.7em {
+ \lstick{x} & \ctrl{1} & \rstick{x} \qw \\
+ \lstick{y} & \targ & \rstick{x+y \mod 2} \qw
+ }
+ \]
+ The first dot is called the ``control'',
+ while the $\oplus$ is the ``negation'' operation:
+ the first bit controls whether the second bit gets flipped or not.
+ Thus, a typical application might be as follows.
+ \[
+ \Qcircuit @C=1em @R=.7em {
+ \lstick{1} & \ctrl{1} & \rstick{1} \qw \\
+ \lstick{0} & \targ & \rstick{1} \qw
+ }
+ \]
+\end{example}
+So, NOT and CNOT are the basic examples of nontrivial reversible gates
+on one and two bits.
+
+We now need a different definition of universal for our reversible gates.
+\begin{definition}
+ A set of reversible gates can \vocab{simulate} a Boolean function $f(x_1 \dots x_n)$,
+ if one can implement a circuit which takes
+ \begin{itemize}
+ \ii As input, $x_1 \dots x_n$ plus some fixed bits set to $0$ or $1$,
+ called \vocab{ancilla bits}\footnote{%
+ The English word ``ancilla'' means ``maid''.}.
+ \ii As output, the input bits $x_1, \dots, x_n$,
+ the output bit $f(x_1, \dots, x_n)$,
+ and possibly some extra bits (called \vocab{garbage bits}).
+ \end{itemize}
+ The gate(s) are \vocab{universal} if they can simulate any Boolean function.
+\end{definition}
+For example, the CNOT gate can simulate the NOT gate,
+using a single ancilla bit $1$,
+according to the following circuit.
+\[
+ \Qcircuit @C=1em @R=.7em {
+ \lstick{x} & \ctrl{1} & \rstick{x} \qw \\
+ \lstick{1} & \targ & \rstick{\text{not } x} \qw
+ }
+\]
+Unfortunately, it is not universal.
+\begin{proposition}
+ [CNOT $\not\Rightarrow$ AND]
+ The CNOT gate cannot simulate the boolean function ``$x \text{ and } y$''.
+\end{proposition}
+\begin{proof}[Sketch of Proof]
+ One can see that any function simulated using only CNOT gates
+ must be of the form
+ \[ a_1 x_1 + a_2 x_2 + \dots + a_n x_n \pmod 2 \]
+ because CNOT is the map $(x,y) \mapsto (x, x+y)$.
+ Thus, even with ancilla bits, we can only create functions
+ of the form $ax+by+c \pmod 2$ for fixed $a$, $b$, $c$.
+ The AND gate is not of this form.
+\end{proof}
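The key finite check in this proof, that AND is not of the form $ax+by+c \pmod 2$, can be carried out exhaustively. A Python sketch (the helper name is mine):

```python
from itertools import product

def is_affine_mod2(f):
    # does f(x, y) equal a*x + b*y + c (mod 2) for some fixed bits a, b, c?
    return any(all(f(x, y) == (a * x + b * y + c) % 2
                   for x, y in product([0, 1], repeat=2))
               for a, b, c in product([0, 1], repeat=3))

assert is_affine_mod2(lambda x, y: (x + y) % 2)  # XOR is affine
assert not is_affine_mod2(lambda x, y: x & y)    # AND is not
```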
+
+So, we need at least a three-bit gate.
+The most commonly used one is:
+\begin{definition}
+ The three-bit \vocab{Toffoli gate}, also called the CCNOT gate, is given by
+ \[
+ \Qcircuit @C=1em @R=.7em {
+ \lstick{x} & \ctrl{1} & \rstick{x} \qw \\
+ \lstick{y} & \ctrl{1} & \rstick{y} \qw \\
+ \lstick{z} & \targ & \rstick{z + xy \pmod 2} \qw
+ }
+ \]
+ So the Toffoli has two controls, and toggles the last bit if and only if
+ both of the control bits are $1$.
+\end{definition}
+This replacement is sufficient.
+\begin{theorem}
+ [Toffoli gate is universal]
+ The Toffoli gate is universal.
+\end{theorem}
+\begin{proof}
+ We will show it can \emph{reversibly} simulate
+ AND, NOT, hence OR,
+ which we know is enough to show universality.
+ (We don't need a separate COPY: note $\opname{Tof}(x,1,0) = (x,1,x)$.)
+
+ For the AND gate, we draw the circuit
+ \[
+ \Qcircuit @C=1em @R=.7em {
+ \lstick{x} & \ctrl{1} & \rstick{x} \qw \\
+ \lstick{y} & \ctrl{1} & \rstick{y} \qw \\
+ \lstick{0} & \targ & \rstick{x \text{ and } y} \qw
+ }
+ \]
+ with one ancilla bit, and no garbage bits.
+
+ For the NOT gate, we use two ancilla $1$ bits and one garbage bit:
+ \[
+ \Qcircuit @C=1em @R=.7em {
+ \lstick{1} & \ctrl{1} & \rstick{1} \qw \\
+ \lstick{z} & \ctrl{1} & \rstick{z} \qw \\
+ \lstick{1} & \targ & \rstick{\text{not } z} \qw
+ }
+ \]
+ This completes the proof.
+\end{proof}
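Both simulations are easy to check by brute force; here is a Python sketch modeling the Toffoli gate as a map on bit triples.

```python
def toffoli(x, y, z):
    # CCNOT: toggle the last bit exactly when both controls are 1
    return (x, y, z ^ (x & y))

# AND via one ancilla 0 on the last wire, no garbage
assert all(toffoli(x, y, 0) == (x, y, x & y) for x in (0, 1) for y in (0, 1))

# NOT via two ancilla 1 bits, with the input z on the middle wire
assert all(toffoli(1, z, 1) == (1, z, 1 - z) for z in (0, 1))
```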
+
+Hence, in theory we can create any classical circuit we desire
+using the Toffoli gate alone.
+Of course, this could require exponentially many gates for even the
+simplest of functions.
+Fortunately, this is NO BIG DEAL because I'm a math major,
+and having $2^n$ gates is a problem best left for the CS majors.
+
+\section{Quantum logic gates}
+In quantum mechanics, since we can have \emph{linear combinations} of basis
+elements, our logic gates will instead consist of \emph{linear maps}.
+Moreover, in quantum computation, gates are always reversible,
+which was why we took the time in the previous section to show
+that we can still simulate any function when restricted to reversible gates
+(e.g.\ using the Toffoli gate).
+
+First, some linear algebra:
+\begin{definition}
+ Let $V$ be a finite-dimensional inner product space.
+ Then for a linear map $U : V \to V$, the following are equivalent:
+ \begin{itemize}
+ \ii $\left< U(x), U(y) \right> = \left< x,y \right>$ for all $x,y \in V$.
+ \ii $U^\dagger$ is the inverse of $U$.
+ \ii $\norm{x} = \norm{U(x)}$ for all $x \in V$.
+ \end{itemize}
+ The map $U$ is called \vocab{unitary}
+ if it satisfies these equivalent conditions.
+\end{definition}
+
+Then
+\begin{moral}
+ Quantum logic gates are unitary matrices.
+\end{moral}
+In particular, unlike the classical situation,
+quantum gates are always reversible
+(and hence they always take the same number of input and output bits).
+
+For example, consider the CNOT gate.
+Its quantum analog should be a unitary map $\UCNOT : H \to H$,
+where $H = \CC^{\oplus 2} \otimes \CC^{\oplus 2}$,
+given on basis elements by
+\[
+ \UCNOT(\ket{00}) = \ket{00}, \quad
+ \UCNOT(\ket{01}) = \ket{01}
+\]
+\[
+ \UCNOT(\ket{10}) = \ket{11}, \quad
+ \UCNOT(\ket{11}) = \ket{10}.
+\]
+So pictorially, the quantum CNOT gate is given by
+\[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket0} & \ctrl{1} & \rstick{\ket0} \qw \\
+ \lstick{\ket0} & \targ & \rstick{\ket0} \qw \\
+ }
+ \hspace{6em}
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket0} & \ctrl{1} & \rstick{\ket0} \qw \\
+ \lstick{\ket1} & \targ & \rstick{\ket1} \qw \\
+ }
+ \hspace{6em}
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket1} & \ctrl{1} & \rstick{\ket1} \qw \\
+ \lstick{\ket0} & \targ & \rstick{\ket1} \qw \\
+ }
+ \hspace{6em}
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket1} & \ctrl{1} & \rstick{\ket1} \qw \\
+ \lstick{\ket1} & \targ & \rstick{\ket0} \qw \\
+ }
+\]
+OK, so what?
+The whole point of quantum mechanics is that we allow
+qubits to be in linear combinations of $\ket0$ and $\ket1$,
+and this will produce interesting results.
+For example, let's take $\xdown = \frac{1}{\sqrt2} (\ket0-\ket1)$
+and plug it into the top, with $\ket 1$ on the bottom, and see what happens:
+\[
+ \UCNOT \left( \xdown \otimes \ket1 \right)
+ = \UCNOT \left( \frac{1}{\sqrt2} (\ket{01}-\ket{11}) \right)
+ = \frac{1}{\sqrt2} \left( \ket{01}-\ket{10} \right)
+ = \ket{\Psi_-}
+\]
+which is the fully entangled \emph{singlet state}! Picture:
+\[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\xdown} & \ctrl{1} & \rstick{\ket{\Psi_-}} \qw \\
+ \lstick{\ket1} & \targ & \rstick{} \qw \\
+ }
+\]
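One can verify this computation mechanically. The Python sketch below models a two-qubit state as a dictionary of amplitudes (a representation chosen here just for brevity) and applies CNOT basis vector by basis vector.

```python
from math import sqrt

def ucnot(state):
    # state: dict mapping basis string "xy" to amplitude;
    # CNOT sends |xy> to |x, x+y mod 2>
    out = {}
    for basis, amp in state.items():
        x, y = int(basis[0]), int(basis[1])
        image = f"{x}{x ^ y}"
        out[image] = out.get(image, 0.0) + amp
    return out

# xdown = (|0> - |1>)/sqrt2 on the top wire, |1> on the bottom wire
xdown_tensor_one = {"01": 1 / sqrt(2), "11": -1 / sqrt(2)}
singlet = ucnot(xdown_tensor_one)
assert abs(singlet["01"] - 1 / sqrt(2)) < 1e-9
assert abs(singlet["10"] + 1 / sqrt(2)) < 1e-9  # (|01> - |10>)/sqrt2
```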
+
+Thus, when we input superposed states into our quantum gates,
+the outputs are often entangled states,
+even when the original inputs are not entangled.
+
+\begin{example}
+ [More examples of quantum gates]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Every reversible classical gate that we encountered before
+ has a quantum analog obtained in the same way as CNOT:
+ by specifying the values on basis elements.
+ For example, there is a quantum Toffoli gate, which sends
+ \[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket1} & \ctrl{1} & \rstick{\ket1} \qw \\
+ \lstick{\ket1} & \ctrl{1} & \rstick{\ket1} \qw \\
+ \lstick{\ket0} & \targ & \rstick{\ket1} \qw.
+ }
+ \]
+ \ii The \vocab{Hadamard gate} on one qubit is a rotation given by
+ \[
+ \begin{bmatrix}
+ \frac{1}{\sqrt2} & \frac{1}{\sqrt2} \\
+ \frac{1}{\sqrt2} & -\frac{1}{\sqrt2}
+ \end{bmatrix}.
+ \]
+ Thus, it sends $\ket0$ to $\xup$ and $\ket1$ to $\xdown$.
+ Note that the Hadamard gate is its own inverse.
+ It is depicted by an ``$H$'' box.
+ \[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket0} & \gate{H} & \rstick{\xup} \qw
+ }
+ \]
+ \ii More generally, if $U$ is a $2 \times 2$ unitary matrix
+ (i.e.\ a map $\CC^{\oplus 2} \to \CC^{\oplus 2}$) then
+ there is a \vocab{$U$-rotation gate} similar to the previous one,
+ which applies $U$ to the input.
+ \[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket\psi} & \gate{U} & \rstick{U\ket\psi} \qw
+ }
+ \]
+ For example, the classical NOT gate is represented by $U = \sigma_x$.
+ \ii A \vocab{controlled $U$-rotation gate} generalizes the CNOT gate.
+ Let $U : \CC^{\oplus 2} \to \CC^{\oplus 2}$ be a rotation gate,
+ and let $H = \CC^{\oplus 2} \otimes \CC^{\oplus 2}$ be a $2$-qubit space.
+ Then the controlled $U$ gate has the following circuit diagrams.
+ \[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket0} & \ctrl{1} & \rstick{\ket0} \qw \\
+ \lstick{\ket\psi} & \gate{U} & \rstick{\ket\psi} \qw
+ }
+ \hspace{8em}
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket1} & \ctrl{1} & \rstick{\ket1} \qw \\
+ \lstick{\ket\psi} & \gate{U} & \rstick{U\ket\psi} \qw
+ }
+ \]
+ Thus, $U$ is applied when the controlling bit is $1$,
+ and CNOT is the special case $U = \sigma_x$. As before,
+ we get interesting behavior if the control is in a superposition.
+ \end{enumerate}
+\end{example}
+
+And now, some more counterintuitive quantum behavior.
+Suppose we try to use CNOT as a copying gate; recall its truth table:
+\[
+ \begin{array}{|rr|rr|}
+ \hline
+ \multicolumn{2}{|c|}{\text{In}} & \multicolumn{2}{|c|}{\text{Out}} \\
+ \hline
+ 0 & 0 & 0 & 0 \\
+ 1 & 0 & 1 & 1 \\
+ 0 & 1 & 0 & 1 \\
+ 1 & 1 & 1 & 0 \\ \hline
+ \end{array}
+\]
+The point of this gate is to be used with an ancilla $0$ at the bottom
+to try and simulate a ``copy'' operation.
+So indeed, one can check that
+\[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket0} & \multigate{1}{U} & \rstick{\ket0} \qw \\
+ \lstick{\ket0} & \ghost{U} & \rstick{\ket0} \qw
+ }
+ \hspace{8em}
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket1} & \multigate{1}{U} & \rstick{\ket1} \qw \\
+ \lstick{\ket0} & \ghost{U} & \rstick{\ket1} \qw
+ }
+\]
+Thus we can copy $\ket0$ and $\ket1$.
+But if we instead input $\xdown \otimes \ket0$ into $U$,
+we end up with the entangled state
+$\frac{1}{\sqrt2}\left( \ket{00}-\ket{11} \right)$,
+which is decisively \emph{not} the $\xdown \otimes \xdown$ we wanted.
+And in fact, the so-called \vocab{no-cloning theorem} implies
+that it's impossible to duplicate an arbitrary $\ket\psi$;
+the best we can do is copy specific orthogonal states as in the classical case.
+See also \Cref{prob:baby_no_clone}.
+
+\section{Deutsch-Jozsa algorithm}
+The Deutsch-Jozsa algorithm is the first example of a nontrivial
+quantum algorithm which cannot be performed classically:
+it is a ``proof of concept'' that would later inspire Grover's search algorithm
+and Shor's factoring algorithm.
+
+The problem is as follows: we're given a function $f : \{0,1\}^n \to \{0,1\}$,
+and promised that the function $f$ is either
+\begin{itemize}
+ \ii A constant function, or
+ \ii A balanced function, meaning that exactly half the inputs map to
+ $0$ and half the inputs map to $1$.
+\end{itemize}
+The function $f$ is given in the form of a reversible black box $U_f$,
+which uses $f(x)$ as the control of a NOT gate,
+so it can be represented as the circuit diagram
+\[
+ \Qcircuit @C=1em @R=0.7em {
+ \lstick{\ket{x_1x_2 \dots x_n}} & /^n \qw & \multigate{1}{U_f} &
+ \rstick{\ket{x_1x_2\dots x_n}} \qw \\
+ \lstick{\ket{y}} & \qw & \ghost{U_f} & \rstick{\ket{y+f(x) \mod 2}}\qw \\
+ }
+\]
+i.e.\ if $f(x_1, \dots, x_n) = 0$ then the gate does nothing,
+otherwise the gate flips the $y$ bit at the bottom.
+The slash with the $n$ indicates that the top of the input really consists
+of $n$ qubits, not just the one qubit drawn,
+and so the black box $U_f$ is a map on $n+1$ qubits.
+
+The problem is to determine,
+with as few calls to the black box $U_f$ as possible,
+whether $f$ is balanced or constant.
+
+\begin{ques}
+ Classically, show that in the worst case we may need
+ up to $2^{n-1}+1$ calls to the function $f$ to answer the question.
+\end{ques}
+
+So with only classical tools, it would take $O(2^n)$ queries to determine
+whether $f$ is balanced or constant.
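To see the worst case concretely: a constant function and a suitably chosen balanced function can agree on exactly $2^{n-1}$ inputs, so $2^{n-1}$ unlucky queries reveal nothing. A Python sketch (the particular balanced function below is a hypothetical choice for illustration):

```python
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))
constant = {x: 1 for x in inputs}
# a balanced function chosen to agree with `constant`
# on exactly half of the inputs
balanced = {x: 1 if i < len(inputs) // 2 else 0
            for i, x in enumerate(inputs)}
agreements = sum(constant[x] == balanced[x] for x in inputs)
assert agreements == 2 ** (n - 1)
# so after 2^{n-1} queries the two cases may still be indistinguishable
```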
+However,
+\begin{theorem}
+ [Deutsch-Jozsa]
+ The Deutsch-Jozsa problem can be determined in a quantum circuit
+ with only a single call to the black box.
+\end{theorem}
+\begin{proof}
+ For concreteness, we do the case $n=1$ explicitly;
+ the general case is contained in \Cref{prob:deutsch_jozsa}.
+ We claim that the necessary circuit is
+ \[
+ \Qcircuit @C=1em @R=0.7em {
+ \lstick{\ket0} & \gate{H} & \multigate{1}{U_f} & \gate{H} & \meter \qw \\
+ \lstick{\ket1} & \gate{H} & \ghost{U_f} & \qw & \\
+ }
+ \]
+ Here the $H$'s are Hadamard gates,
+ and the meter at the end of the rightmost wire
+ indicates that we make a measurement along the usual $\ket0$, $\ket1$ basis.
+ This is not a typo! Even though classically the top wire is just
+ a repeat of the input information,
+ we are about to see that it's the top we want to measure.
+
+ Note that after the two Hadamard operations, the state we get is
+ \begin{align*}
+ \ket{01} &\xmapsto{H^{\otimes 2}}
+ \left( \frac{1}{\sqrt2}(\ket0+\ket1) \right)
+ \otimes
+ \left( \frac{1}{\sqrt2}(\ket0-\ket1) \right) \\
+ &=
+ \half \Big( \ket0\otimes\big(\ket0-\ket1\big)
+ \; + \; \ket1\otimes\big(\ket0-\ket1\big) \Big).
+ \end{align*}
+ So after applying $U_f$, we obtain
+ \[
+ \half\Big(
+ \ket0\otimes\big(\ket{0+f(0)}-\ket{1+f(0)}\big)
+ \; + \; \ket1\otimes\big(\ket{0+f(1)}-\ket{1+f(1)}\big)
+ \Big)
+ \]
+ where the modulo $2$ has been left implicit.
+ Now, observe that the effect of going from
+ $\ket0-\ket1$ to $\ket{0+f(x)}-\ket{1+f(x)}$ is merely
+ to either keep the state the same (if $f(x)=0$)
+ or to negate it (if $f(x)=1$).
+ So we can simplify and factor to get
+ \[
+ \half
+ \left( (-1)^{f(0)}\ket0 + (-1)^{f(1)}\ket1 \right)
+ \otimes
+ \left( \ket0-\ket1 \right).
+ \]
+ Thus, the picture so far is:
+ \[
+ \Qcircuit @C=1em @R=0.7em {
+ \lstick{\ket0} & \gate{H} & \multigate{1}{U_f} &
+ \rstick{\frac{1}{\sqrt2}%
+ \Big((-1)^{f(0)}\ket0+(-1)^{f(1)}\ket1\Big)} \qw \\
+ \lstick{\ket1} & \gate{H} & \ghost{U_f} &
+ \rstick{\frac{1}{\sqrt2}(\ket0-\ket1)} \qw
+ }
+ \]
+ In particular, the resulting state is not entangled,
+ and we can simply discard the last qubit (!).
+ Now observe:
+ \begin{itemize}
+ \ii If $f$ is constant, then the upper-most state is $\pm\xup$.
+ \ii If $f$ is balanced, then the upper-most state is $\pm\xdown$.
+ \end{itemize}
+ So simply doing a measurement along $\sigma_x$ will give us the answer.
+ Equivalently, we can perform another $H$ gate
+ (so that $H\xup = \ket0$, $H\xdown = \ket1$)
+ and measure along $\sigma_z$ in the usual $\ket0$, $\ket1$ basis.
+ Thus for $n=1$ we only need a single call to the oracle.
+\end{proof}
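The $n=1$ case is small enough to simulate outright. The Python sketch below (state vectors as length-4 amplitude lists, with my own helper names) runs the circuit above and confirms the measurement on the top wire is deterministic.

```python
from math import sqrt

def h_on(state, qubit):
    # apply a Hadamard to one qubit (0 = top wire) of a 2-qubit state,
    # indexed so that basis |xy> sits at position 2x + y
    out = [0.0] * 4
    for idx, amp in enumerate(state):
        bits = [(idx >> 1) & 1, idx & 1]
        for new in (0, 1):
            sign = -1.0 if (bits[qubit] == 1 and new == 1) else 1.0
            b = bits[:]
            b[qubit] = new
            out[(b[0] << 1) | b[1]] += sign * amp / sqrt(2)
    return out

def u_f(state, f):
    # the black box: |x>|y| -> |x>|y + f(x) mod 2>
    out = [0.0] * 4
    for idx, amp in enumerate(state):
        x, y = (idx >> 1) & 1, idx & 1
        out[(x << 1) | (y ^ f(x))] += amp
    return out

def prob_top_zero(f):
    state = [0.0] * 4
    state[0b01] = 1.0  # |0> on top, |1> on bottom
    state = h_on(h_on(state, 0), 1)
    state = u_f(state, f)
    state = h_on(state, 0)
    return state[0b00] ** 2 + state[0b01] ** 2

assert abs(prob_top_zero(lambda x: 0) - 1.0) < 1e-9  # constant: top is |0>
assert abs(prob_top_zero(lambda x: 1) - 1.0) < 1e-9  # constant: top is |0>
assert abs(prob_top_zero(lambda x: x)) < 1e-9        # balanced: top is |1>
```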
+
+
+\section\problemhead
+\begin{problem}[Fredkin gate]
+ The \vocab{Fredkin gate} (also called the controlled swap, or CSWAP gate)
+ is the three-bit gate with the following truth table:
+ \[
+ \begin{array}{|rrr|rrr|}
+ \hline
+ \multicolumn{3}{|c|}{\text{In}} & \multicolumn{3}{|c|}{\text{Out}} \\
+ \hline
+ 0 & 0 & 0 & 0 & 0 & 0 \\
+ 0 & 0 & 1 & 0 & 0 & 1 \\
+ 0 & 1 & 0 & 0 & 1 & 0 \\
+ 0 & 1 & 1 & 0 & 1 & 1 \\
+ 1 & 0 & 0 & 1 & 0 & 0 \\
+ 1 & 0 & 1 & 1 & 1 & 0 \\
+ 1 & 1 & 0 & 1 & 0 & 1 \\
+ 1 & 1 & 1 & 1 & 1 & 1 \\\hline
+ \end{array}
+ \]
+ Thus the gate swaps the last two input bits whenever the first bit is $1$.
+ Show that this gate is also reversible and universal.
+ \begin{hint}
+ One way is to create CCNOT using a few Fredkin gates.
+ \end{hint}
+ \begin{sol}
+ To show the Fredkin gate is universal
+ it suffices to reversibly create a CCNOT gate with it.
+ We write the system
+ \begin{align*}
+ (z,\neg z,-) &= \opname{Fred}(z,1,0) \\
+ (x,a,-) &= \opname{Fred}(x,1,0) \\
+ (-,b,-) &= \opname{Fred}(a,y,0) \\
+ (-,c,-) &= \opname{Fred}(b,0,1) \\
+ (-,d,-) &= \opname{Fred}(c, z, \neg z).
+ \end{align*}
+ Here $a = \neg x$, hence $b = (\neg a) \wedge y = x \wedge y$,
+ and $c = b$ is a copy of it.
+ Direct computation then shows that $d = z+xy\pmod 2$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Baby no-cloning theorem]
+ \label{prob:baby_no_clone}
+ Show that there is no unitary map $U$ on two qubits
+ which sends $U(\ket\psi \otimes \ket0) = \ket\psi \otimes \ket\psi$
+ for any qubit $\ket\psi$, i.e.\
+ the following circuit diagram is impossible.
+ \[
+ \Qcircuit @C=0.8em @R=.7em {
+ \lstick{\ket\psi} & \multigate{1}{U} & \rstick{\ket\psi} \qw \\
+ \lstick{\ket0} & \ghost{U} & \rstick{\ket\psi} \qw
+ }
+ \]
+ \begin{hint}
+ Plug in $\ket\psi=\ket0$, $\ket\psi=\ket1$, $\ket\psi = \xup$
+ and derive a contradiction.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[Deutsch-Jozsa]
+ \label{prob:deutsch_jozsa}
+ Given the black box $U_f$ described in the Deutsch-Jozsa algorithm,
+ consider the following circuit.
+ \[
+ \Qcircuit @C=1em @R=0.7em {
+ \lstick{\ket{0\dots0}} & /^n \qw & \gate{H^{\otimes n}} & \multigate{1}{U_f}
+ & \gate{H^{\otimes n}} & \meter \qw \\
+ \lstick{\ket1} & \qw & \gate{H} & \ghost{U_f} & \qw & \\
+ }
+ \]
+ That is, take $n$ copies of $\ket 0$, apply the Hadamard rotation to all of them,
+ apply $U_f$, apply the Hadamard again to all $n$ input bits
+ (discarding the last qubit, as before), then measure all $n$ bits
+ in the $\ket0$/$\ket1$ basis (as in \Cref{ex:simult_measurement}).
+
+ Show that the probability of measuring $\ket{0\dots0}$
+ is $1$ if $f$ is constant and $0$ if $f$ is balanced.
+ \begin{hint}
+ First show that the box sends
+ $\ket{x_1} \otimes \dots \otimes \ket{x_n} \otimes \xdown$
+ to $(-1)^{f(x_1, \dots, x_n)}
+ (\ket{x_1} \otimes \dots \otimes \ket{x_n} \otimes \xdown)$.
+ \end{hint}
+ \begin{sol}
+ Put $\xdown = \frac{1}{\sqrt2} (\ket0-\ket1)$.
+ Then we have that $U_f$ sends
+ \[
+ \ket{x_1} \dots \ket{x_n} \ket 0
+ - \ket{x_1} \dots \ket{x_n} \ket 1
+ \xmapsto{U_f}
+ \pm \ket{x_1} \dots \ket{x_n} \ket 0
+ \mp \ket{x_1} \dots \ket{x_n} \ket 1
+ \]
+ where the top signs occur when $f(x_1, \dots, x_n) = 0$
+ and the bottom signs when $f(x_1, \dots, x_n) = 1$.
+
+ Now, upon inputting $\ket0 \dots \ket0 \ket1$, we find that $H^{\otimes (n+1)}$ maps it to
+ \[ 2^{-n/2} \sum_{x_1, \dots, x_n} \ket{x_1} \dots \ket{x_n} \xdown. \]
+ Then the image under $U_f$ is
+ \[ 2^{-n/2} \sum_{x_1, \dots, x_n} (-1)^{f(x_1, \dots, x_n)} \ket{x_1} \dots \ket{x_n} \xdown. \]
+ We now discard the last qubit, leaving us with
+ \[ 2^{-n/2} \sum_{x_1, \dots, x_n} (-1)^{f(x_1, \dots, x_n)} \ket{x_1} \dots \ket{x_n}. \]
+ Applying $H^{\otimes n}$ to this, we get
+ \[ 2^{-n/2} \sum_{x_1, \dots, x_n} (-1)^{f(x_1, \dots, x_n)}
+ \cdot
+ \left(
+ 2^{-n/2}
+ \sum_{y_1, \dots, y_n}
+ (-1)^{x_1 y_1 + \dots + x_n y_n}
+ \ket{y_1} \ket{y_2} \dots \ket{y_n}
+ \right)
+ \]
+ since $H\ket0 = \frac{1}{\sqrt2}(\ket0+\ket1)$
+ while $H\ket1 = \frac{1}{\sqrt2}(\ket0-\ket1)$,
+ so minus signs arise exactly if $x_i = 1$ and $y_i = 1$ simultaneously,
+ hence the term $(-1)^{x_1 y_1 + \dots + x_n y_n}$.
+ Swapping the order of summation, we get
+ \[
+ 2^{-n}
+ \sum_{y_1, \dots, y_n}
+ C(y_1, \dots, y_n)
+ \ket{y_1} \ket{y_2} \dots \ket{y_n}
+ \]
+ where $C(y_1, \dots, y_n)
+ = \sum_{x_1, \dots, x_n} (-1)^{f(x_1, \dots, x_n)
+ +x_1 y_1 + \dots + x_n y_n}$.
+ Now, we finally consider two cases.
+ \begin{itemize}
+ \ii If $f$ is the constant function, then we find that
+ \[
+ C(y_1, \dots, y_n) =
+ \begin{cases}
+ \pm 2^n & y_1 = \dots = y_n = 0 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ To see this, note that for $y_1 = \dots = y_n = 0$
+ all $2^n$ terms have the same sign;
+ otherwise, if WLOG $y_1 = 1$, then the terms for $x_1 = 0$ exactly cancel
+ the terms for $x_1 = 1$, pair by pair.
+ Thus in this state, the measurements all result in $\ket0 \dots \ket0$.
+
+ \ii On the other hand if $f$ is balanced, we derive that
+ \[ C(0, \dots, 0) = 0. \]
+ Thus \emph{no} measurements result in $\ket 0 \dots \ket 0$.
+ \end{itemize}
+ In this way, we can tell whether $f$ is balanced or not.
+ \end{sol}
+\end{problem}
+
+\begin{dproblem}[Barenco et al, 1995; arXiv:quant-ph/9503016v1]
+ Let
+ \[
+ P = \begin{bmatrix} 1 & 0 \\ 0& i \end{bmatrix}
+ \qquad
+ Q = \frac{1}{\sqrt2}\begin{bmatrix} 1 & -i \\ -i & 1 \end{bmatrix}
+ \]
+ Verify that the quantum Toffoli gate can be implemented
+ using just controlled rotations via the circuit
+ \[
+ \Qcircuit @R=1em @C=0.7em {
+ \lstick{\ket{x_1}} & \qw & \ctrl{2} & \ctrl{1} & \ctrl{1} & \qw & \ctrl{1} & \qw \\
+ \lstick{\ket{x_2}} & \ctrl{1} & \qw & \gate{P} & \targ & \ctrl{1} & \targ & \qw \\
+ \lstick{\ket{x_3}} & \gate{Q} & \gate{Q} & \qw & \qw & \gate{Q^\dagger} & \qw & \qw
+ }
+ \]
+ This was a big surprise to researchers when discovered,
+ because universal classical reversible logic requires three-bit gates
+ (e.g.\ Toffoli, Fredkin).
+ \begin{hint}
+ This is direct computation.
+ \end{hint}
+\end{dproblem}
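One way to organize the direct computation is by computer: build each controlled gate as an $8 \times 8$ matrix, multiply them in circuit order, and compare with the Toffoli permutation matrix. A Python sketch (helper names mine):

```python
X = [[0, 1], [1, 0]]
P = [[1, 0], [0, 1j]]
s = 2 ** -0.5
Q = [[s, -1j * s], [-1j * s, s]]
Qd = [[s, 1j * s], [1j * s, s]]  # conjugate transpose of Q

def controlled(ctrl, targ, u, n=3):
    # n-qubit unitary applying the 2x2 matrix u to wire `targ`
    # whenever wire `ctrl` is 1 (wire 0 = top wire)
    dim = 2 ** n
    m = [[0j] * dim for _ in range(dim)]
    for col in range(dim):
        bits = [(col >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[ctrl] == 0:
            m[col][col] = 1
            continue
        for new in (0, 1):
            b = list(bits)
            b[targ] = new
            row = sum(bit << (n - 1 - k) for k, bit in enumerate(b))
            m[row][col] += u[new][bits[targ]]
    return m

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# the gates of the circuit, read left to right
circuit = [controlled(1, 2, Q), controlled(0, 2, Q), controlled(0, 1, P),
           controlled(0, 1, X), controlled(1, 2, Qd), controlled(0, 1, X)]
total = circuit[0]
for g in circuit[1:]:
    total = matmul(g, total)  # later gates act later: multiply on the left

# Toffoli as a permutation matrix on |x1 x2 x3>
toffoli = [[0j] * 8 for _ in range(8)]
for col in range(8):
    x1, x2, x3 = (col >> 2) & 1, (col >> 1) & 1, col & 1
    toffoli[(x1 << 2) | (x2 << 1) | (x3 ^ (x1 & x2))][col] = 1

assert all(abs(total[i][j] - toffoli[i][j]) < 1e-9
           for i in range(8) for j in range(8))
```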
diff --git a/books/napkin/classgrp.tex b/books/napkin/classgrp.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f0337a0c60fd3d8fa2054b9ac51540342410f462
--- /dev/null
+++ b/books/napkin/classgrp.tex
@@ -0,0 +1,850 @@
+\chapter{Minkowski bound and class groups}
+We now have a neat theory of unique factorization of ideals.
+In the case of a PID, this in fact gives us a UFD. Sweet.
+
+We'll define, in a moment, something called the \emph{class group}
+which measures how far $\OO_K$ is from being a PID;
+the bigger the class group, the farther $\OO_K$ is from being a PID.
+In particular, $\OO_K$ is a PID if and only if it has trivial class group.
+
+Then we will provide some inequalities which let us put restrictions on the class group;
+for instance, this will let us show in some cases that the class group must be trivial.
+Astonishingly, the proof will use Minkowski's theorem, a result from geometry.
+
+\section{The class group}
+\prototype{PID's have trivial class group.}
+Let $K$ be a number field,
+and let $J_K$ denote the multiplicative group of fractional ideals of $\OO_K$.
+Let $P_K$ denote the multiplicative group of \vocab{principal fractional ideals}:
+those of the form $(x) = x \OO_K$ for some $x \in K$.
+\begin{ques}
+Check that $P_K$ is also a multiplicative group.
+(This is really easy: name $x\OO_K \cdot y\OO_K$ and $(x\OO_K)\inv$.)
+\end{ques}
+As $J_K$ is abelian, we can now define the \vocab{class group} to be the quotient
+\[ \Cl_K \defeq J_K / P_K. \]
+The elements of $\Cl_K$ are called \vocab{classes}.
+
+Equivalently,
+\begin{moral}
+The class group $\Cl_K$ is the set of nonzero fractional ideals modulo
+scaling by a constant in $K$.
+\end{moral}
+In particular, $\Cl_K$ is trivial if all ideals are principal,
+since the nonzero principal ideals are the same up to scaling.
+
+The size of the class group is called the \vocab{class number}.
+It's a beautiful theorem that the class number is always finite,
+and the bulk of this chapter will build up to this result.
+It requires several ingredients.
+
+
+\section{The discriminant of a number field}
+\prototype{Quadratic fields.}
+Let's say I have $K = \QQ(\sqrt 2)$.
+As we've seen before, this means $\OO_K = \ZZ[\sqrt 2]$, meaning
+\[ \OO_K = \left\{ a + b \sqrt 2 \mid a,b \in \ZZ \right\}. \]
+The key insight now is that you might think of this as a \emph{lattice}:
+geometrically, we want to think about this the same way we think about $\ZZ^2$.
+
+Perversely, we might try to embed this into $\QQ^2$ by sending $a+b\sqrt 2$ to $(a, b)$.
+But this is a little stupid, since we're rudely making $K$,
+which somehow lives inside $\RR$ and is
+``one-dimensional'' in that sense, into a two-dimensional space.
+It also depends on a choice of basis, which we don't like.
+A better way is to think about the fact that there are two embeddings
+$\sigma_1 : K \to \CC$ and $\sigma_2 : K \to \CC$, namely the identity, and conjugation:
+\begin{align*}
+ \sigma_1(a+b\sqrt2) &= a+b\sqrt 2 \\
+ \sigma_2(a+b\sqrt2) &= a-b\sqrt 2.
+\end{align*}
+Fortunately for us, these embeddings both have real image.
+This leads us to consider the set of points
+\[ \left( \sigma_1(\alpha), \sigma_2(\alpha) \right) \in \RR^2
+\quad\text{as}\quad \alpha \in K. \]
+This lets us visualize what $\OO_K$ looks like in $\RR^2$.
+The points of $K$ are dense in $\RR^2$, but the points of $\OO_K$ cut out a lattice.
+
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ real T = 1.414;
+ int N = 10;
+ real M = 5;
+ for (int a = -N; a <= N; ++a) {
+ for (int b = -N; b <= N; ++b) {
+ if ((abs(a+T*b) <= M) && (abs(a-T*b) <= M))
+ dot( (a+T*b, a-T*b) );
+ }
+ }
+ draw( (-M,0)--(M,0), dotted);
+ draw( (0,-M)--(0,M), dotted);
+ label("$-2$", (-2,-2), dir(135));
+ label("$-1$", (-1,-1), dir(135));
+ label("$0$", (0,0), dir(135));
+ label("$1$", (1,1), dir(135));
+ label("$2$", (2,2), dir(135));
+ label("$-\sqrt 2$", (-T,T), dir(135));
+ label("$\sqrt 2$", (T,-T), dir(-45));
+ label("$1+\sqrt 2$", (1+T,1-T), dir(-45));
+
+ filldraw((0,0)--(1,1)--(1+T,1-T)--(T,-T)--cycle, heavycyan, blue);
+ \end{asy}
+\end{center}
+
+To see how big the lattice is, we look at how $\{1, \sqrt2\}$, the generators
+of $\OO_K$, behave.
+The point corresponding to $a+b\sqrt2$ in the lattice is
+\[ a \cdot (1,1) + b \cdot (\sqrt 2, -\sqrt 2). \]
+The \vocab{mesh} of the lattice\footnote{Most authors call this the volume, but I
+think this is not the right word to use -- lattices have ``volume'' zero since they
+are just a bunch of points! In contrast, the English word ``mesh'' really
+does refer to the width of a ``gap''.}
+is defined as the hypervolume of the ``fundamental parallelepiped'' I've colored blue above.
+For this particular case, it ought to be equal to the
+area of that parallelogram, which is
+\[
+ \det
+ \begin{bmatrix}
+ 1 & -\sqrt 2 \\
+ 1 & \sqrt 2
+ \end{bmatrix}
+ = 2\sqrt 2.
+\]
+The definition of the discriminant is precisely this, except with an extra square factor
+(since permuting the rows could change the sign of the determinant above).
+\Cref{prob:trace_discriminant} shows that the squaring makes $\Delta_K$ an integer.
+
+To make the next definition, we invoke:
+\begin{theorem}[The $n$ embeddings of a number field]
+ Let $K$ be a number field of degree $n$.
+ Then there are exactly $n$ field homomorphisms $K \injto \CC$,
+ say $\sigma_1, \dots, \sigma_n$, which fix $\QQ$.
+\end{theorem}
+\begin{proof}
+ Deferred to \Cref{thm:n_embeddings}, once we have the tools of Galois theory.
+\end{proof}
+In fact, in \Cref{thm:conj_distrb} we see that
+for $\alpha \in K$, we have that $\sigma_i(\alpha)$
+runs over the conjugates of $\alpha$ as $i=1,\dots,n$.
+It follows that
+\[
+ \Tr_{K/\QQ}(\alpha) = \sum_{i=1}^n \sigma_i(\alpha)
+ \quad\text{and}\quad
+ \Norm_{K/\QQ}(\alpha) = \prod_{i=1}^n \sigma_i(\alpha).
+\]
+
+This allows us to define:
+\begin{definition}
+ Suppose $\alpha_1, \dots, \alpha_n$ is a $\ZZ$-basis of $\OO_K$.
+ The \vocab{discriminant} of the number field $K$ is defined by
+ \[
+ \Delta_K \defeq \det
+ \begin{bmatrix}
+ \sigma_1(\alpha_1) & \dots & \sigma_n(\alpha_1) \\
+ \vdots & \ddots & \vdots \\
+ \sigma_1(\alpha_n) & \dots & \sigma_n(\alpha_n) \\
+ \end{bmatrix}^2.
+ \]
+\end{definition}
+This does not depend on the choice of the $\{\alpha_i\}$;
+we will not prove this here.
+\begin{example}[Discriminant of $K = \QQ(\sqrt2)$]
+ We have $\OO_K = \ZZ[\sqrt 2]$
+ and as discussed above the discriminant is
+ \[
+ \Delta_K =
+ (-2 \sqrt 2)^2 = 8.
+ \]
+\end{example}
+\begin{example}[Discriminant of $\QQ(i)$]
+ Let $K = \QQ(i)$.
+ We have $\OO_K = \ZZ[i] = \ZZ \oplus i \ZZ$.
+ The embeddings are the identity and complex conjugation
+ which take $1$ to $(1,1)$ and $i$ to $(i, -i)$.
+ So
+ \[
+ \Delta_K =
+ \det
+ \begin{bmatrix}
+ 1 & 1 \\
+ i & -i
+ \end{bmatrix}^2
+ =
+ (-2i)^2 = -4.
+ \]
+ This example illustrates that the discriminant need not be positive
+ for number fields which wander into the complex plane
+ (the lattice picture is a less perfect analogy).
+ But again, as we'll prove in the problems the discriminant is always
+ an integer.
+\end{example}
+\begin{example}
+ [Discriminant of $\QQ(\sqrt 5)$]
+ Let $K = \QQ(\sqrt5)$.
+ This time, $\OO_K = \ZZ \oplus \frac{1+\sqrt5}{2} \ZZ$, and so the discriminant
+ is going to look a little bit different.
+ The embeddings are still $a+b\sqrt 5 \mapsto a \pm b\sqrt5$.
+
+ Applying this to the $\ZZ$-basis $\left\{ 1, \frac{1+\sqrt5}{2} \right\}$, we get
+ \[
+ \Delta_K
+ =
+ \det
+ \begin{bmatrix}
+ 1 & 1 \\
+ \frac{1+\sqrt5}{2} & \frac{1-\sqrt5}{2}
+ \end{bmatrix}^2
+ = (-\sqrt 5)^2 = 5.
+ \]
+\end{example}
+\begin{exercise}
+ Extend all this to show that
+ if $K = \QQ(\sqrt d)$ for $d \neq 1$ squarefree, we have
+ \[
+ \Delta_K =
+ \begin{cases}
+ d & \text{if } d \equiv 1 \pmod 4 \\
+ 4d & \text{if } d \equiv 2, 3 \pmod 4.
+ \end{cases}
+ \]
+\end{exercise}
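The casework in this exercise can be sanity-checked numerically. Here is a throwaway Python sketch (the helper name is mine, not standard) that evaluates the $2\times 2$ embedding matrix determinant straight from the definition of $\Delta_K$:

```python
import cmath

def disc_quadratic(d: int) -> int:
    """Delta_K for K = Q(sqrt d), d != 1 squarefree, via the embedding matrix."""
    s = cmath.sqrt(d)  # possibly imaginary, so work over the complex numbers
    # Z-basis of O_K: {1, (1+sqrt d)/2} if d = 1 mod 4, else {1, sqrt d}
    w, w_conj = ((1 + s) / 2, (1 - s) / 2) if d % 4 == 1 else (s, -s)
    det = w_conj - w  # det [[1, 1], [w, w_conj]]
    return round((det * det).real)

assert disc_quadratic(2) == 8    # 4d, since 2 = 2 mod 4
assert disc_quadratic(5) == 5    # d,  since 5 = 1 mod 4
assert disc_quadratic(-1) == -4  # 4d, since -1 = 3 mod 4
```

(Python's `%` returns nonnegative residues even for negative `d`, which is exactly the convention the casework wants.)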
+
+Actually, let me point out something curious: recall that the polynomial discriminant of $Ax^2+Bx+C$ is $B^2-4AC$. Then:
+\begin{itemize}
+ \ii In the $d \equiv 1 \pmod 4$ case,
+ $\Delta_K$ is the discriminant of $x^2 - x - \frac{d-1}{4}$,
+ which is the minimal polynomial of $\half(1+\sqrt d)$.
+ Of course, $\OO_K = \ZZ[\half(1+\sqrt d)]$.
+ \ii In the $d \equiv 2,3 \pmod 4$ case,
+ $\Delta_K$ is the discriminant of $x^2 - d$
+ which is the minimal polynomial of $\sqrt d$. Once again, $\OO_K = \ZZ[\sqrt d]$.
+\end{itemize}
+This is not a coincidence! \Cref{prob:root_discriminant} asserts that this is true in general;
+hence the name ``discriminant''.
+
+\section{The signature of a number field}
+\prototype{$\QQ(\sqrt[100]{2})$ has signature $(2, 49)$.}
+In the example of $K = \QQ(i)$,
+we more or less embedded $K$ into the space $\CC$.
+However, $K$ is a degree two extension,
+so what we'd really like to do is embed it into $\RR^2$.
+To do so, we're going to take advantage of complex conjugation.
+
+Let $K$ be a number field and $\sigma_1, \dots, \sigma_n$ be its embeddings.
+We distinguish between the \vocab{real embeddings}
+(which map all of $K$ into $\RR$)
+and the \vocab{complex embeddings}
+(which map some part of $K$ outside $\RR$).
+Notice that if $\sigma$ is a complex embedding,
+then so is the conjugate $\ol{\sigma} \ne \sigma$;
+hence complex embeddings come in pairs.
+
+\begin{definition}
+ Let $K$ be a number field of degree $n$, and set
+ \begin{align*}
+ r_1 &= \text{number of real embeddings} \\
+ r_2 &= \text{number of pairs of complex embeddings}.
+ \end{align*}
+ The \vocab{signature} of $K$ is the pair $(r_1, r_2)$.
+ Observe that $r_1 + 2r_2 = n$.
+\end{definition}
+\begin{example}[Basic examples of signatures]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\QQ$ has signature $(1,0)$.
+ \ii $\QQ(\sqrt 2)$ has signature $(2,0)$.
+ \ii $\QQ(i)$ has signature $(0,1)$.
+ \ii Let $K = \QQ(\sqrt[3]{2})$, and let $\omega$ be a primitive cube root of unity.
+ The elements of $K$ are
+ \[ K = \left\{ a + b\cbrt2 + c\cbrt4 \mid a,b,c \in \QQ \right\}. \]
+ Then the signature is $(1,1)$, because the three embeddings are
+ \[ \sigma_1 : \cbrt 2 \mapsto \cbrt 2,
+ \quad
+ \sigma_2 : \cbrt 2 \mapsto \cbrt 2 \omega,
+ \quad
+ \sigma_3 : \cbrt 2 \mapsto \cbrt 2 \omega^2. \]
+ The first of these is real and the latter two form a conjugate pair.
+ \end{enumerate}
+\end{example}
+\begin{example}[Even more signatures]
+ In the same vein $\QQ(\sqrt[99]{2})$ and $\QQ(\sqrt[100]{2})$
+ have signatures $(1,49)$ and $(2,49)$.
+\end{example}
+\begin{ques}
+ Verify the signatures of the above two number fields.
+\end{ques}
+
+From now on, we will number the embeddings of $K$ in such a way that
+\[ \sigma_1, \sigma_2, \dots, \sigma_{r_1} \]
+are the real embeddings, while
+\[
+ \sigma_{r_1+1} = \ol{\sigma_{r_1+r_2+1}}, \quad
+ \sigma_{r_1+2} = \ol{\sigma_{r_1+r_2+2}}, \quad
+ \dots, \quad
+ \sigma_{r_1+r_2} = \ol{\sigma_{r_1+2r_2}}
+\]
+are the $r_2$ pairs of complex embeddings.
+We define the \vocab{canonical embedding} of $K$ as
+\[
+ K \overset\iota\injto \RR^{r_1} \times \CC^{r_2}
+ \quad\text{by}\quad
+ \alpha \mapsto \left( \sigma_1(\alpha), \dots, \sigma_{r_1}(\alpha),
+ \sigma_{r_1+1}(\alpha), \dots, \sigma_{r_1+r_2}(\alpha) \right).
+\]
+All we've done is omit, for the complex case, the second of the embeddings in each conjugate pair.
+This is no big deal, since they are just conjugates;
+the above tuple is all the information we need.
+
+For reasons that will become obvious in a moment, I'll let $\tau$ denote the isomorphism
+\[
+ \tau : \RR^{r_1} \times \CC^{r_2} \taking{\sim} \RR^{r_1+2r_2} = \RR^n
+\]
+given by breaking each complex number into its real and imaginary parts,
+so that the composite $\tau \circ \iota$ sends
+\begin{align*}
+\alpha \mapsto \big(& \sigma_1(\alpha), \dots, \sigma_{r_1}(\alpha), \\
+& \Re \sigma_{r_1+1}(\alpha), \;\; \Im \sigma_{r_1+1}(\alpha), \\
+& \Re \sigma_{r_1+2}(\alpha), \;\; \Im \sigma_{r_1+2}(\alpha), \\
+& \dots, \\
+& \Re \sigma_{r_1+r_2}(\alpha), \;\; \Im \sigma_{r_1+r_2}(\alpha) \big).
+\end{align*}
+\begin{example}[Example of canonical embedding]
+ As before let $K = \QQ(\sqrt[3]{2})$ and set
+ \[ \sigma_1 : \cbrt 2 \mapsto \cbrt 2,
+ \quad
+ \sigma_2 : \cbrt 2 \mapsto \cbrt 2 \omega,
+ \quad
+ \sigma_3 : \cbrt 2 \mapsto \cbrt 2 \omega^2 \]
+ where $\omega = - \half + \frac{\sqrt3}{2} i$,
+ noting that we've already arranged indices so $\sigma_1 = \id$ is real
+ while $\sigma_2$ and $\sigma_3$ are a conjugate pair.
+ So the embeddings $K \overset\iota\injto \RR \times \CC \taking\sim \RR^3$ are given by
+ \[
+ \alpha \overset\iota\longmapsto \left( \sigma_1(\alpha), \sigma_2(\alpha) \right)
+ \overset\tau\longmapsto \left( \sigma_1(\alpha), \; \Re\sigma_2(\alpha), \; \Im\sigma_2(\alpha) \right). \]
+ For concreteness, taking $\alpha = 9 + \cbrt 2$ gives
+ \begin{align*}
+ 9 + \cbrt 2 &\overset\iota\mapsto
+ \left( 9 + \cbrt 2, \;\; 9 + \cbrt2\omega \right) \\
+ &= \left( 9 + \cbrt 2, \;\;
+ 9 - \half\cbrt2 + \frac{\sqrt[6]{108}}{2} i \right) \in \RR \times \CC \\
+ &\overset\tau\mapsto \left( 9 + \cbrt 2, \;\; 9 - \half \cbrt 2,
+ \;\; \frac{\sqrt[6]{108}}{2} \right) \in \RR^3.
+ \end{align*}
+\end{example}
+
+Now, the whole point of this is that we want to consider the resulting lattice
+when we take $\OO_K$. In fact, we have:
+\begin{lemma}
+ \label{lem:vol_OK_mesh}
+ Consider the composition of the embeddings $K \injto \RR^{r_1} \times \CC^{r_2} \taking\sim \RR^n$.
+ Then as before, $\OO_K$ becomes a lattice $L$ in $\RR^n$, with mesh equal to
+ \[ \frac{1}{2^{r_2}} \sqrt{\left\lvert \Delta_K \right\rvert}. \]
+\end{lemma}
+\begin{proof}
+ Fun linear algebra problem (you just need to manipulate determinants).
+ Left as \Cref{prob:OK_linalg}.
+\end{proof}
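Though we defer the proof, the mesh formula is easy to spot-check numerically in the quadratic cases we have already computed (a throwaway sketch, using the discriminants $8$ and $-4$ from earlier):

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# K = Q(sqrt 2): signature (2,0), Delta_K = 8; rows are images of 1, sqrt 2
s = math.sqrt(2)
mesh_real = abs(det2([[1, 1], [s, -s]]))
assert abs(mesh_real - 2 ** 0 * math.sqrt(8)) < 1e-9

# K = Q(i): signature (0,1), Delta_K = -4; rows are tau(iota(1)), tau(iota(i))
mesh_gauss = abs(det2([[1, 0], [0, 1]]))
assert abs(mesh_gauss - 2 ** (-1) * math.sqrt(4)) < 1e-9
```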
+From this we can deduce:
+\begin{lemma}
+ Consider the composition of the embeddings $K \injto \RR^{r_1} \times \CC^{r_2} \taking\sim \RR^n$.
+ Let $\ka$ be an ideal in $\OO_K$.
+ Then the image of $\ka$ is a lattice $L_\ka$ in $\RR^n$ with mesh equal to
+ \[ \frac{\Norm(\ka)}{2^{r_2}} \sqrt{\left\lvert \Delta_K \right\rvert}. \]
+\end{lemma}
+\begin{proof}[Sketch of Proof]
+ Let \[ d = \Norm(\ka) \defeq [\OO_K : \ka]. \]
+ Then the lattice $L_\ka$ contains only $\frac 1d$th of the points
+ of the lattice $L$, which is why the mesh increases by a factor of $\Norm(\ka) = d$.
+ To make this all precise I would need to do a lot more with lattices and geometry
+ than I have space for in this chapter, so I will omit the details.
+ But I hope you can see why this is intuitively true.
+\end{proof}
+
+\section{Minkowski's theorem}
+Now I can tell you why I insisted we move from $\RR^{r_1} \times \CC^{r_2}$ to $\RR^n$.
+In geometry, there's a really cool theorem of Minkowski's that goes as follows.
+\begin{theorem}
+ [Minkowski]
+ Let $S \subseteq \RR^n$ be a convex set containing $0$
+ which is centrally symmetric (meaning that $x \in S \iff -x \in S$).
+ Let $L$ be a lattice with mesh $d$.
+ If either
+ \begin{enumerate}[(a)]
+ \ii The volume of $S$ exceeds $2^n d$, or
+ \ii The volume of $S$ equals $2^n d$ and $S$ is compact,
+ \end{enumerate}
+ then $S$ contains a nonzero lattice point of $L$.
+\end{theorem}
+\begin{ques}
+ Show that the condition $0 \in S$ is actually extraneous
+ in the sense that any nonempty, convex, centrally symmetric set contains the origin.
+\end{ques}
+
+\begin{proof}[Sketch of Proof]
+ Part (a) is surprisingly simple and has a very olympiad-esque solution:
+ it's basically Pigeonhole on areas.
+ We'll prove part (a) in the special case $n=2$,
+ $L = \ZZ^2$ for simplicity as the proof
+ can easily be generalized to any lattice and any $n$.
+ Thus we want to show that any such convex set $S$
+ with area more than $4$ contains a lattice point.
+
+ Dissect the plane into $2 \times 2$ squares
+ \[ [2a-1, 2a+1] \times [2b-1, 2b+1] \]
+ and overlay all these squares on top of each other.
+ By the Pigeonhole Principle, we find there exist two points $p \neq q \in S$ which map to the same point.
+ Since $S$ is symmetric, $-q \in S$. Then $\half (p-q) \in S$ (convexity) and is a nonzero lattice point.
+
+ I'll briefly sketch part (b): the idea is to consider $(1+\eps) S$ for $\eps > 0$
+ (this is ``$S$ magnified by a small factor $1+\eps$'').
+ This satisfies condition (a). So for each $\eps > 0$ the set of nonzero lattice points in $(1+\eps) S$,
+ say $S_\eps$, is a \emph{finite nonempty set} of (discrete) points
+ (the ``finite'' part follows from the fact that $(1+\eps)S$ is bounded).
+ So there has to be some point that's in $S_\eps$ for every $\eps > 0$ (why?), which implies it's in $S$.
+\end{proof}
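For part (a), nothing beats trying it out: the sketch below brute-forces nonzero points of $\ZZ^2$ inside some thin rotated ellipses whose area is just over $2^2 = 4$. (The ellipse parameters and rotation angle are arbitrary choices of mine; Minkowski's theorem is what guarantees every assertion passes.)

```python
import math, itertools

def ellipse_has_nonzero_lattice_point(a, b, theta):
    """Does the origin-symmetric ellipse with semi-axes a, b, rotated by
    theta, contain a nonzero point of Z^2?  (Brute-force search.)"""
    c, s = math.cos(theta), math.sin(theta)
    R = int(max(a, b)) + 1  # the ellipse fits inside [-R, R]^2
    for x, y in itertools.product(range(-R, R + 1), repeat=2):
        u, v = c * x + s * y, -s * x + c * y  # coordinates along the axes
        if (x, y) != (0, 0) and (u / a) ** 2 + (v / b) ** 2 <= 1:
            return True
    return False

for k in range(1, 6):
    a = 0.1 * k                    # thinner and thinner ellipses ...
    b = 1.05 * (4 / math.pi) / a   # ... with area pi*a*b about 5% above 4
    assert ellipse_has_nonzero_lattice_point(a, b, theta=0.3)
```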
+
+\section{The trap box}
+The last ingredient we need is a set to apply Minkowski's theorem to. I propose:
+\begin{definition}
+ Let $M$ be a positive real.
+ In $\RR^{r_1} \times \CC^{r_2}$, define the box $S$ to be the
+ set of points $(x_1, \dots, x_{r_1}, z_1, \dots, z_{r_2})$ such that
+ \[
+ \sum_{i=1}^{r_1} \left\lvert x_i \right\rvert
+ + 2 \sum_{j=1}^{r_2} \left\lvert z_j \right\rvert
+ \le M.
+ \]
+ Note that this depends on the value of $M$.
+\end{definition}
+
+Think of this box as a \emph{mousetrap}: anything that falls in it is going to have a small norm,
+and our goal is to use Minkowski to lure some nonzero element into it.
+
+\begin{center}
+ \begin{asy}
+ size(6cm);
+ real T = 1.414;
+ int N = 10;
+ real M = 3;
+
+ real K = 2.4;
+ filldraw( (K,0)--(0,K)--(-K,0)--(0,-K)--cycle, palered, darkred);
+ label("$S$", (2*K/3,K/3), dir(45));
+
+ for (int a = -N; a <= N; ++a) {
+ for (int b = -N; b <= N; ++b) {
+ if ((abs(a+T*b) <= M) && (abs(a-T*b) <= M))
+ dot( (a+T*b, a-T*b) );
+ }
+ }
+ draw( (-M,0)--(M,0), dotted);
+ draw( (0,-M)--(0,M), dotted);
+ label("$-2$", (-2,-2), dir(135));
+ label("$-1$", (-1,-1), dir(135));
+ label("$0$", (0,0), dir(135));
+ label("$1$", (1,1), dir(135));
+ label("$2$", (2,2), dir(135));
+ label("$-\sqrt 2$", (-T,T), dir(135));
+ label("$\sqrt 2$", (T,-T), dir(-45));
+ label("$1+\sqrt 2$", (1+T,1-T), dir(-45));
+ \end{asy}
+\end{center}
+
+That is, suppose $\alpha \in \ka$ falls into the box I've defined above, which means
+\[
+ M \ge
+ \sum_{i=1}^{r_1} \left\lvert \sigma_i(\alpha) \right\rvert
+ + 2 \sum_{i=r_1+1}^{r_1+r_2} \left\lvert \sigma_i(\alpha) \right\rvert
+ = \sum_{i=1}^{n} \left\lvert \sigma_i(\alpha) \right\rvert,
+\]
+where we are remembering that the last few $\sigma$'s come in conjugate pairs.
+This looks like the trace, but the absolute values are in the way.
+So instead, we apply AM-GM to obtain:
+\begin{lemma}[Effect of the mousetrap]
+ Let $\alpha \in \OO_K$, and suppose $\iota(\alpha)$ is in $S$
+ (where $\iota : K \injto \RR^{r_1} \times \CC^{r_2}$ as usual).
+ Then
+ \[ \left\lvert \Norm_{K/\QQ}(\alpha) \right\rvert
+ = \prod_{i=1}^n \left\lvert \sigma_i(\alpha) \right\rvert
+ \le \left( \frac Mn \right)^n. \]
+\end{lemma}
+The last step we need to do is compute the volume of the box.
+This is again some geometry I won't do, but take my word for it:
+\begin{lemma}[Size of the mousetrap]
+ Let $\tau : \RR^{r_1} \times \CC^{r_2} \taking\sim \RR^n$ as before.
+ Then the image of $S$ under $\tau$ is a convex, compact, centrally symmetric set with volume
+ \[ 2^{r_1} \cdot \left( \frac{\pi}{2} \right)^{r_2} \cdot \frac{M^n}{n!}. \]
+\end{lemma}
+\begin{ques}
+ (Sanity check)
+ Verify that the above is correct for the signatures $(r_1, r_2) = (2,0)$ and $(r_1,r_2) = (0,1)$,
+ which are the possible signatures when $n=2$.
+\end{ques}
+
+
+\section{The Minkowski bound}
+We can now put everything we have together to obtain the great Minkowski bound.
+\begin{theorem}
+ [Minkowski bound]
+ Let $\ka \subseteq \OO_K$ be any nonzero ideal.
+ Then there exists $0 \neq \alpha \in \ka$ such that
+ \[ \left\lvert \Norm_{K/\QQ}(\alpha) \right\rvert \le \left( \frac 4\pi \right)^{r_2} \frac{n!}{n^n} \sqrt{\left\lvert \Delta_K \right\rvert}
+ \cdot \Norm(\ka). \]
+\end{theorem}
+\begin{proof}
+ This is a matter of putting all our ingredients together.
+ Let's see what things we've defined already:
+ \begin{center}
+ \begin{tikzcd}
+ K \ar[r, "\iota", hook]
+ & \RR^{r_1} \times \CC^{r_2} \ar[r, "\tau"]
+ & \RR^n \\
+ & \text{box } S \ar[r, mapsto]
+ & \tau\im(S)
+ & \quad\text{with volume }
+ 2^{r_1} \left( \frac\pi2 \right)^{r_2} \frac{M^n}{n!} \\
+ \OO_K \ar[rr, mapsto]
+ && \text{Lattice } L
+ & \quad\text{with mesh }
+ 2^{-r_2} \sqrt{\left\lvert \Delta_K \right\rvert} \\
+ \ka \ar[rr, mapsto]
+ && \text{Lattice } L_\ka
+ & \quad\text{with mesh }
+ 2^{-r_2} \sqrt{\left\lvert \Delta_K \right\rvert} \Norm(\ka)
+ \end{tikzcd}
+ \end{center}
+ Pick a value of $M$ such that the mesh of $L_\ka$
+ equals $2^{-n}$ of the volume of the box.
+ Then Minkowski's theorem gives that some $0 \neq \alpha \in \ka$ lands inside the box ---
+ the mousetrap is configured to force $\left\lvert \Norm_{K/\QQ}(\alpha) \right\rvert \le \frac{1}{n^n} M^n$.
+ The correct choice of $M$ is
+ \[
+ M^n
+ = M^n \cdot 2^n \cdot \frac{\text{mesh}}{\text{vol box}}
+ = 2^n \cdot \frac{n!}{2^{r_1} \cdot \left( \frac \pi 2 \right)^{r_2}}
+ \cdot 2^{-r_2} \sqrt{\left\lvert \Delta_K \right\rvert} \Norm(\ka)
+ \]
+ which gives the bound after some arithmetic.
+\end{proof}
+
+\section{The class group is finite}
+\begin{definition}
+ Let
+ $M_K = \left( \frac 4\pi \right)^{r_2} \frac{n!}{n^n} \sqrt{\left\lvert \Delta_K \right\rvert}$
+ for brevity.
+ Note that it is a constant depending on $K$.
+\end{definition}
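Since we will be evaluating $M_K$ repeatedly from here on, a quick numerical sketch is handy (the helper name is mine, not standard; the asserted values match the examples worked out below):

```python
import math

def minkowski_bound(r1: int, r2: int, disc: int) -> float:
    """M_K = (4/pi)^{r2} * n!/n^n * sqrt(|Delta_K|) with n = r1 + 2 r2."""
    n = r1 + 2 * r2
    return (4 / math.pi) ** r2 * math.factorial(n) / n ** n * math.sqrt(abs(disc))

assert abs(minkowski_bound(0, 1, -67) - 5.21) < 0.01   # Q(sqrt(-67))
assert minkowski_bound(0, 1, -4) < 2                   # Q(i)
assert abs(minkowski_bound(2, 0, 28) - 2.646) < 0.001  # Q(sqrt 7)
```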
+So that's cool and all, but what we really wanted was to show that
+the class group is finite.
+How can the Minkowski bound help?
+Well, you might notice that we can rewrite it to say
+\[ \Norm\left( (\alpha) \cdot \ka\inv \right) \le M_K \]
+where $M_K$ is some constant depending on $K$, and $\alpha \in \ka$.
+\begin{ques}
+ Show that $(\alpha) \cdot \ka\inv$ is an integral ideal.
+ (Unwind definitions.)
+\end{ques}
+But in the class group we \emph{mod out} by principal ideals like $(\alpha)$.
+If we shut our eyes for a moment and mod out, the above statement becomes ``$\Norm(\ka\inv)\le M_K$''.
+The precise statement of this is
+\begin{corollary}
+ \label{cor:class_rep_minkowski}
+ Let $K$ be a number field, and pick a fractional ideal $J$.
+ Then we can find $\alpha$ such that $\kb = (\alpha) \cdot J$ is integral
+ and $\Norm(\kb) \le M_K$.
+\end{corollary}
+\begin{proof}
+ For fractional ideals $I$ and $J$ write $I \sim J$ to mean
+ that $I = (\alpha)J$ for some $\alpha$; then $\Cl_K$ is just modding out by $\sim$.
+ Let $J$ be a fractional ideal.
+ Then $J\inv$ is some other fractional ideal.
+ By definition, for some $\alpha \in \OO_K$ we have that $\alpha J\inv$ is an integral ideal $\ka$.
+ The Minkowski bound tells us that for some $x \in \ka$, we have $\Norm(x\ka\inv) \le M_K$.
+ But $x\ka\inv \sim \ka\inv = (\alpha J\inv)\inv \sim J$,
+ so taking $\kb = x\ka\inv$ finishes the job.
+\end{proof}
+
+\begin{corollary}[Finiteness of class group]
+ Class groups are always finite.
+\end{corollary}
+\begin{proof}
+ For every class in $\Cl_K$,
+ we can identify an integral ideal $\ka$ with norm at most $M_K$.
+ We just have to show there are finitely many such integral ideals;
+ this will mean there are finitely many classes.
+
+ Suppose we want to build such an ideal $\ka = \kp_1^{e_1} \dots \kp_m^{e_m}$.
+ Recall that a prime ideal $\kp_i$ must have some rational prime $p$ inside it,
+ meaning $\kp_i$ divides $(p)$ and $p$ divides $\Norm(\kp_i)$.
+ So let's group all the $\kp_i$ we want to build $\ka$ with based on which $(p)$ they came from.
+ \begin{center}
+ \begin{asy}
+ size(4cm);
+ label("$(2)$", (1,0), dir(90));
+ for (int i=1; i <= 4; ++i) dot( (1, -i/4) );
+ label("$(3)$", (2,0), dir(90));
+ for (int i=1; i <= 6; ++i) dot( (2, -i/4) );
+ label("$(5)$", (3,0), dir(90));
+ for (int i=1; i <= 2; ++i) dot( (3, -i/4) );
+ label("$\dots$", (4,0), dir(90));
+ \end{asy}
+ \end{center}
+ To be more dramatic: imagine you have a \emph{cherry tree};
+ each branch corresponds to a prime $(p)$
+ and contains as cherries (prime ideals) the factors of $(p)$ (finitely many).
+ Your bucket (the ideal $\ka$ you're building) can only hold a total weight
+ (norm) of $M_K$. So you can't even touch the branches higher than $M_K$.
+ You can repeat cherries (oops),
+ but the weight of a cherry on branch $(p)$ is definitely $\ge p$;
+ all this means that the number of ways to build $\ka$ is finite.
+\end{proof}
+
+\section{Computation of class numbers}
+\begin{definition}
+The order of $\Cl_K$ is called the \vocab{class number} of $K$.
+\end{definition}
+\begin{remark}
+ If $\Cl_K = 1$, then $\OO_K$ is a PID, hence a UFD.
+\end{remark}
+
+By computing the actual value of $M_K$,
+we can quite literally build the entire ``cherry tree'' mentioned in the previous proof.
+Let's give an example how!
+\begin{proposition}
+ The field $\QQ(\sqrt{-67})$ has class number $1$.
+\end{proposition}
+\begin{proof}
+ Since $K = \QQ(\sqrt{-67})$ has signature $(0,1)$
+ and discriminant $\Delta_K = -67$ (since $-67 \equiv 1 \pmod 4$)
+ we can compute
+ \[ M_K = \left( \frac 4\pi \right)^{1} \cdot \frac{2!}{2^2} \sqrt{67} \approx 5.2. \]
+ That means we can cut off the cherry tree after $(2)$, $(3)$, $(5)$, since any
+ cherries on later branches will necessarily have norm $> M_K$.
+ We now want to factor each of these in $\OO_K = \ZZ[\theta]$, where $\theta = \frac{1+\sqrt{-67}}{2}$
+ has minimal polynomial $x^2 - x + 17$.
+ But something miraculous happens:
+ \begin{itemize}
+ \ii When we try to reduce $x^2-x+17 \pmod 2$, we get an irreducible polynomial $x^2-x+1$.
+ By the factoring algorithm (\Cref{thm:factor_alg}) this means $(2)$ is prime.
+ \ii Similarly, reducing mod $3$ gives $x^2-x+2$, which is irreducible.
+ This means $(3)$ is prime.
+ \ii Finally, for the same reason, $(5)$ is prime.
+ \end{itemize}
+ It's our lucky day;
+ all of the ideals $(2)$, $(3)$, $(5)$ are prime (already principal).
+ To put it another way,
+ each of the three branches has only one (large) cherry on it.
+ That means any time we put together an integral ideal with norm $\le M_K$,
+ it is actually principal.
+ In fact, these guys have norm $4$, $9$, $25$ respectively\dots
+ so we can't even touch $(3)$ and $(5)$,
+ and the only ideals we can get are $(1)$ and $(2)$ (with norms $1$ and $4$).
+
+ Now we claim that's all.
+ Pick a fractional ideal $J$.
+ By \Cref{cor:class_rep_minkowski},
+ we can find an integral ideal $\kb \sim J$ with $\Norm(\kb) \le M_K$.
+ But by the above, either $\kb = (1)$ or $\kb = (2)$, both of which are principal,
+ and hence trivial in $\Cl_K$.
+ So $J$ is trivial in $\Cl_K$ too, as needed.
+\end{proof}
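The miracle in this proof is a finite check: a quadratic is irreducible mod $p$ exactly when it has no roots mod $p$. A two-line sketch confirming all three branches at once:

```python
# x^2 - x + 17 has no roots mod 2, 3, 5, hence is irreducible mod each prime
# (a quadratic with no roots in F_p is irreducible over F_p).
for p in (2, 3, 5):
    assert all((x * x - x + 17) % p != 0 for x in range(p))
```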
+Let's do a couple more.
+\begin{theorem}[Gaussian integers {$\ZZ[i]$} form a UFD]
+ The field $\QQ(i)$ has class number $1$.
+\end{theorem}
+\begin{proof}
+ This is $\OO_K$ where $K = \QQ(i)$, so we just want $\Cl_K$ to be trivial.
+ So every class
+ contains an integral ideal $\kb$ satisfying
+ \[
+ \Norm(\kb)
+ \le \left( \frac4\pi \right)^{1} \cdot \frac{2!}{2^2} \cdot \sqrt{4}
+ = \frac4\pi < 2.
+ \]
+ Well, that's silly: we don't have any branches to pick from at all.
+ In other words, we can only have $\kb = (1)$.
+\end{proof}
+
+Here's another example of something that still turns out to be unique factorization,
+but this time our cherry tree will actually have cherries that can be picked.
+\begin{proposition}[{$\ZZ[\sqrt7]$} is a UFD]
+ The field $\QQ(\sqrt7)$ has class number $1$.
+\end{proposition}
+\begin{proof}
+ First we compute the Minkowski bound.
+ \begin{ques}
+ Check that $M_K \approx 2.646$.
+ \end{ques}
+ So this time, the only branch is $(2)$. Let's factor $(2)$ as usual: the minimal polynomial $x^2-7$
+ reduces as $(x+1)^2 \pmod 2$, and hence
+ \[ (2) = \left( 2, \sqrt7+1 \right)^2. \]
+ So there is one cherry $\left( 2, \sqrt7+1 \right)$, of norm $2$,
+ and it is not obviously principal.
+ But actually, I claim that
+ \[ \left( 2, \sqrt7+1 \right) = \left( 3 - \sqrt 7 \right). \]
+ \begin{ques}
+ Prove this.
+ \end{ques}
+ So the cherry is a principal ideal, and as before we conclude that $\Cl_K$ is trivial.
+ But note that this time, the prime $(2)$ actually ramifies; we got lucky
+ that the cherry was principal, but this won't always work.
+\end{proof}
+
+How about some nontrivial class groups?
+First, we use a lemma that will help us with
+narrowing down the work in our cherry tree.
+\begin{lemma}[Ideals divide their norms]
+ Let $\kb$ be an integral ideal with $\Norm(\kb) = n$.
+ Then $\kb$ divides the ideal $(n)$.
+\end{lemma}
+\begin{proof}
+ By definition, $n = \left\lvert \OO_K / \kb \right\rvert$.
+ Treating $\OO_K/\kb$ as an (additive) abelian group and using Lagrange's theorem, we find
+ \[ 0 \equiv
+ \underbrace{\alpha + \dots + \alpha}_{n\text{ times}} = n\alpha
+ \pmod \kb \qquad\text{for all } \alpha \in \OO_K. \]
+ Thus $(n) \subseteq \kb$, done.
+\end{proof}
+
+Now we can give such an example.
+\begin{proposition}[Class group of $\QQ(\sqrt{-17})$]
+ The number field $K = \QQ(\sqrt{-17})$ has class group $\Zc 4$.
+\end{proposition}
+You are not obliged to read the entire proof in detail,
+as it is somewhat gory.
+The idea is just that there are some cherries which are not trivial in the class group.
+\begin{proof}
+Since $\Delta_K = -68$, we compute
+the Minkowski bound
+\[ M_K = \frac{4}{\pi} \sqrt{17} < 6. \]
+
+Now, it suffices to factor with $(2)$, $(3)$, $(5)$.
+The minimal polynomial of $\sqrt{-17}$ is $x^2+17$, so as usual
+\begin{align*}
+ (2) &= (2, \sqrt{-17}+1)^2 \\
+ (3) &= (3, \sqrt{-17}-1)(3,\sqrt{-17}+1) \\
+ (5) &= (5)
+\end{align*}
+corresponding to the factorizations of $x^2+17$ modulo each of $2$, $3$, $5$.
+Set $\kp = (2, \sqrt{-17}+1)$ and $\kq_1 = (3, \sqrt{-17}-1)$, $\kq_2 = (3, \sqrt{-17}+1)$.
+We can compute
+\[ \Norm(\kp) = 2 \quad\text{and}\quad \Norm(\kq_1) = \Norm(\kq_2) = 3. \]
+In particular, they are not principal:
+an element $a + b\sqrt{-17}$ has norm $a^2 + 17b^2$, which is never $2$ or $3$.
+The ideal $(5)$ is out the window; it has norm $25$.
+Hence, the three cherries are $\kp$, $\kq_1$, $\kq_2$.
+
+The possible ways to arrange these cherries into ideals with norm $\le 5$ are
+\[ \left\{ (1), \kp, \kq_1, \kq_2, \kp^2 \right\}. \]
+However, you can compute \[ \kp^2 = (2) \] so $\kp^2$ and $(1)$ are in the same class;
+that is, $[\kp]^2$ is trivial in the class group.
+In particular, the class group has order at most $4$.
+
+From now on, let $[\ka]$ denote the class (member of the class group) that $\ka$ is in.
+Since $\kp$ isn't principal (so $[\kp] \neq [(1)]$), it follows that $[\kp]$ has order two.
+So Lagrange's theorem says that $\Cl_K$ has order either $2$ or $4$.
+
+Now we claim $[\kq_1]^2 \neq [(1)]$, which implies that $[\kq_1]$ has order greater than $2$.
+If not, $\kq_1^2$ is principal.
+We know $\Norm(\kq_1) = 3$,
+so this can only occur if $\kq_1^2 = (3)$;
+this would force $\kq_1 = \kq_2$.
+This is impossible since $\kq_1 + \kq_2 = (1)$.
+
+Thus, $[\kq_1]$ has order greater than $2$.
+Since its order divides $\left\lvert \Cl_K \right\rvert \in \{2, 4\}$, it has to be exactly $4$.
+From this we deduce \[ \Cl_K \cong \Zc 4. \qedhere \]
+\end{proof}
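The non-principality claim in this proof rests on the fact that $a^2 + 17b^2$ never equals $2$ or $3$, which is itself a finite check:

```python
# Norms of small elements a + b*sqrt(-17).  For b != 0 the norm is >= 17,
# so the ranges below cover every possible norm up to 3.
norms = {a * a + 17 * b * b for a in range(-5, 6) for b in range(-1, 2)}
assert 2 not in norms and 3 not in norms
assert 18 in norms  # e.g. 1 + sqrt(-17) has norm 18
```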
+
+
+
+
+\begin{remark}
+ When we did this at Harvard during Math 129,
+ there was a five-minute interruption in which students (jokingly) complained
+ about the difficulty of evaluating $\frac{4}{\pi} \sqrt{17}$. Excerpt:
+ \begin{quote}
+ ``Will we be allowed to bring a small calculator on the exam?'' -- Student 1 \\
+ ``What does the size have to do with anything? You could have an Apple Watch'' -- Professor \\
+ ``Just use the fact that $\pi \ge 3$'' -- me \\
+ ``Even [other professor] doesn't know that, how are we supposed to?'' -- Student 2 \\
+ ``You have to do this yourself!'' -- Professor \\
+ ``This is an outrage.'' -- Student 1
+ \end{quote}
+\end{remark}
+\section\problemhead
+\begin{problem}
+ Show that $K = \QQ(\sqrt{-163})$ has trivial class group,
+ and hence $\OO_K = \ZZ\left[ \frac{1+\sqrt{-163}}{2} \right]$
+ is a UFD.\footnote{In fact,
+ $n = 163$ is the largest number
+ for which $\QQ(\sqrt{-n})$ has trivial class group.
+ The complete list is $1, 2, 3, 7, 11, 19, 43, 67, 163$,
+ the \vocab{Heegner numbers}.
+ You might notice Euler's prime-generating polynomial $t^2+t+41$
+ when doing the above problem. Not a coincidence!}
+ \begin{hint}
+ Repeat the previous procedure.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Determine the class group of $\QQ(\sqrt{-31})$.
+ \begin{hint}
+ You should get a group of order three.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[China TST 1998]
+ Let $n$ be a positive integer.
+ A polygon in the plane (not necessarily convex) has area greater than $n$.
+ Prove that one can translate it so that it contains at least $n+1$ lattice points.
+ \begin{hint}
+ Mimic the proof of part (a) of Minkowski's theorem.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [\Cref{lem:vol_OK_mesh}]
+ \label{prob:OK_linalg}
+ Consider the composition of the embeddings $K \injto \RR^{r_1} \times \CC^{r_2} \taking\sim \RR^n$.
+ Show that the image of $\OO_K \subseteq K$ has mesh equal to
+ \[ \frac{1}{2^{r_2}} \sqrt{\left\lvert \Delta_K \right\rvert}. \]
+ \begin{hint}
+ Linear algebra.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Let $p \equiv 1 \pmod 4$ be a prime.
+ Show that there are unique integers $a > b > 0$ such that $a^2+b^2=p$.
+ \begin{hint}
+ Factor in $\QQ(i)$.
+ \end{hint}
+% \begin{sol}
+% Let's factor $(p)$ in $\QQ(i)$, which has ring of integers $\ZZ[i]$.
+% Using \Cref{thm:factor_alg}, we get $\ZZ[x] / (p,x^2+1) \cong \FF_p[x] / (x^2+1)$.
+% Since $p \equiv 1 \pmod 4$, $x^2+1 \equiv (x+t)(x-t) \pmod p$ for some $t$.
+% Thus $(p) = (p, t+i)(p, t-i)$ is the full decomposition into prime factors.
+% \end{sol}
+\end{problem}
+
+
+\begin{problem}
+ [Korea national olympiad 2014]
+ Let $p$ be an odd prime and $k$ a positive integer such that $p \mid k^2+5$.
+ Prove that there exist positive integers $m$, $n$ such that $p^2 = m^2+5n^2$.
+ \begin{hint}
+ Factor $p$, show that the class group of $\QQ(\sqrt{-5})$ has order two.
+ \end{hint}
+ \begin{sol}
+ Let $K = \QQ(\sqrt{-5})$. Check that $\Cl_K$ has order two using the Minkowski bound;
+ moreover $\Delta_K = -20$.
+ Now note that $\OO_K = \ZZ[\sqrt{-5}]$, and $x^2+5$ factors mod $p$ as $(x+k)(x-k)$;
+ hence in $\OO_K$ we have $(p) = (p, \sqrt{-5}+k)(p, \sqrt{-5}-k) = \kp_1 \kp_2$, say.
+ For $p > 5$ the prime $p$ does not ramify (as $p \nmid \Delta_K$) and we have $\kp_1 \neq \kp_2$.
+
+ Then $(p^2) = \kp_1^2 \cdot \kp_2^2$. Because the class group has order two,
+ both $\kp_1^2$ and $\kp_2^2$ are principal, and because $\kp_1 \neq \kp_2$ they are distinct.
+ Thus $p^2$ is a nontrivial product of two elements of $\OO_K$; from this we can extract the desired factorization.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/cohomology.tex b/books/napkin/cohomology.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d7f546dc558cad42ef10908b681875b02c186d4c
--- /dev/null
+++ b/books/napkin/cohomology.tex
@@ -0,0 +1,449 @@
+\chapter{Singular cohomology}
+Here's one way to motivate this chapter. It turns out that:
+\begin{itemize}
+ \ii $H_n(\CP^2) \cong H_n(S^2 \vee S^4)$ for every $n$.
+ \ii $H_n(\CP^3) \cong H_n(S^2 \times S^4)$ for every $n$.
+\end{itemize}
+This is unfortunate, because if possible we would like
+to be able to tell these spaces apart (as they are
+in fact not homotopy equivalent), but the homology groups
+cannot tell the difference between them.
+
+In this chapter, we'll define \emph{cohomology groups} $H^n(X)$ for every space $X$.
+In fact, the $H^n$'s are completely determined by the $H_n$'s
+by the so-called \emph{universal coefficient theorem}.
+However, it turns out that one can take all the cohomology groups and put
+them together to form a \emph{cohomology ring} $H^\bullet$.
+We will then see that these rings do tell our spaces apart:
+for example, $H^\bullet(\CP^3) \not\cong H^\bullet(S^2 \times S^4)$ as rings.
+
+\section{Cochain complexes}
+\begin{definition}
+A \vocab{cochain complex} $A^\bullet$ is algebraically the same as a chain complex, except that the indices increase.
+So it is a sequence of abelian groups
+\[ \dots \taking{\delta} A^{n-1} \taking\delta A^n \taking\delta A^{n+1} \taking\delta \dots \]
+such that $\delta^2 = 0$.
+Notation-wise, we're now using superscripts, and use $\delta$ rather than $\partial$.
+We define the \vocab{cohomology groups} by
+\[ H^n(A^\bullet) = \ker\left( A^n \taking\delta A^{n+1} \right)
+ / \img\left( A^{n-1} \taking\delta A^n \right). \]
+\end{definition}
+
+\begin{example}[de Rham cohomology]
+ We have already met one example of a cochain complex:
+ let $M$ be a smooth manifold and $\Omega^k(M)$ be the
+ additive group of $k$-forms on $M$.
+ Then we have a cochain complex
+ \[ 0 \taking d \Omega^0(M)
+ \taking d \Omega^1(M) \taking d \Omega^2(M)
+ \taking d \dots. \]
+ The resulting cohomology is called \vocab{de Rham cohomology},
+ described later.
+\end{example}
+
+Aside from de Rham's cochain complex,
+\textbf{the most common way to get a cochain complex
+is to \emph{dualize} a chain complex.}
+Specifically, pick an abelian group $G$;
+note that $\Hom(-, G)$ is a contravariant functor,
+and thus takes every chain complex
+\[ \dots \taking\partial A_{n+1} \taking\partial
+ A_n \taking\partial A_{n-1} \taking\partial \dots \]
+into a cochain complex: letting $A^n = \Hom(A_n, G)$ we obtain
+\[ \dots \taking\delta A^{n-1} \taking\delta
+ A^n \taking\delta A^{n+1} \taking\delta \dots. \]
+where $\delta(A_n \taking{f} G) = A_{n+1} \taking\partial A_n \taking{f} G$.
+
+These are the cohomology groups we study most in algebraic topology,
+so we give a special notation to them.
+\begin{definition}
+ Given a chain complex $A_\bullet$ of abelian groups and another group $G$,
+ we let \[ H^n(A_\bullet; G) \] denote the cohomology groups
+ of the dual cochain complex $A^\bullet$ obtained by applying $\Hom(-,G)$.
+ In other words, $H^n(A_\bullet; G) = H^n(A^\bullet)$.
+\end{definition}
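Concretely, dualizing by $\Hom(-, \ZZ)$ replaces each boundary matrix by its transpose, and $\partial^2 = 0$ then gives $\delta^2 = 0$ for free. A toy sketch with the chain complex of a filled triangle $[v_0, v_1, v_2]$ (boundary matrices written out by hand):

```python
def matmul(A, B):
    """Multiply matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Chain complex of a filled triangle: 1 face, 3 edges, 3 vertices.
# Columns of D1 are the boundaries of edges [v0,v1], [v0,v2], [v1,v2].
D1 = [[-1, -1, 0], [1, 0, -1], [0, 1, 1]]  # C_1 -> C_0
D2 = [[1], [-1], [1]]                      # C_2 -> C_1, boundary of the face
assert matmul(D1, D2) == [[0], [0], [0]]   # partial^2 = 0

# Dualizing: delta^n is the transpose of D_{n+1}, so delta^2 = 0 as well.
delta0, delta1 = transpose(D1), transpose(D2)
assert matmul(delta1, delta0) == [[0, 0, 0]]
```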
+
+\section{Cohomology of spaces}
+\prototype{$C^0(X;G)$ is all functions $X \to G$, while $H^0(X;G)$ is those functions $X \to G$
+constant on path components.}
+
+The case of interest is our usual geometric situation, with $C_\bullet(X)$.
+\begin{definition}
+ For a space $X$ and abelian group $G$,
+ we define $C^\bullet(X;G)$ to be the dual to the
+ singular chain complex $C_\bullet(X)$,
+ called the \vocab{singular cochain complex} of $X$;
+ its elements are called \vocab{cochains}.
+
+ Then we define the \vocab{cohomology groups}
+ of the space $X$ as
+ \[ H^n(X; G) \defeq H^n(C_\bullet(X); G) = H^n(C^\bullet(X;G)). \]
+\end{definition}
+\begin{remark}
+ Note that if $G$ is also a ring (like $\ZZ$ or $\RR$),
+ then $H^n(X; G)$ is not only an abelian group but actually a $G$-module.
+\end{remark}
+
+\begin{example}
+ [$C^0(X; G)$, $C^1(X; G)$, and $H^0(X;G)$]
+ Let $X$ be a topological space and consider $C^\bullet(X)$.
+ \begin{itemize}
+ \ii $C_0(X)$ is the free abelian group on $X$,
+ and $C^0(X) = \Hom(C_0(X), G)$.
+ So a $0$-cochain is a function that
+ takes every point of $X$ to an element of $G$.
+ \ii $C_1(X)$ is the free abelian group on $1$-simplices in $X$.
+ So $C^1(X)$ needs to take every $1$-simplex to an element of $G$.
+ \end{itemize}
+ Let's now try to understand $\delta : C^0(X) \to C^1(X)$.
+ Given a $0$-cochain $\phi \in C^0(X)$,
+ i.e.\ a homomorphism $\phi : C_0(X) \to G$,
+ what is $\delta\phi : C_1(X) \to G$?
+ Answer:
+ \[ \delta\phi : [v_0, v_1] \mapsto \phi([v_1]) - \phi([v_0]). \]
+ Hence, elements of
+ $\ker(C^0 \taking\delta C^1) \cong H^0(X;G)$
+ are those cochains
+ that are \emph{constant on path-connected components}.
+\end{example}
+In particular, much like $H_0(X)$, we have \[ H^0(X;G) \cong G^{\oplus r} \]
+if $X$ has $r$ path-connected components (where $r$ is finite\footnote{%
+ Something funny happens if $X$ has \emph{infinitely} many path-connected components:
+ say $X = \coprod_\alpha X_\alpha$ over an infinite indexing set.
+ In this case we have
+ $H_0(X) = \bigoplus_\alpha G$ while $H^0(X) = \prod_\alpha G$.
+ For homology we get a \emph{direct sum} while
+ for cohomology we get a \emph{direct product}.
+
+ These are actually different for infinite indexing sets.
+ For general modules, $\bigoplus_\alpha M_\alpha$ is \emph{defined} to consist of tuples
+ with only \emph{finitely many} nonzero terms.
+ (This was never mentioned earlier in the Napkin,
+ since I only ever defined $M \oplus N$ and extended it to finite direct sums.)
+ No such restriction holds for $\prod_\alpha G_\alpha$ a product of groups.
+ This corresponds to the fact that a class in $H_0(X)$ is a formal linear sum
+ (which, like all formal sums, is finite)
+ of path-connected components of $X$.
+ But a class in $H^0(X)$ is a \emph{function}
+ from the set of path-connected components of $X$ to $G$,
+ with no finiteness restriction.
+}).
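+
+As a quick sanity check that the sum and product in the footnote genuinely differ:
+with $G = \Zc 2$ and countably many components,
+$\bigoplus_{n \ge 1} \Zc 2$ consists of the \emph{finitely supported} sequences of $0$s and $1$s,
+which is countable, while $\prod_{n \ge 1} \Zc 2$ consists of \emph{all} such sequences,
+which is uncountable.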
+
+To the best of my knowledge, the higher cohomology groups $H^n(X; G)$
+(or even the cochain groups $C^n(X; G) = \Hom(C_n(X), G)$) are harder to describe concretely.
+
+\begin{abuse}
+ In this chapter the only cochain complexes
+ we will consider are dual complexes as above.
+ So, any time we write a cochain complex $A^\bullet$ it is implicitly given
+ by applying $\Hom(-,G)$ to $A_\bullet$.
+\end{abuse}
+
+\section{Cohomology of spaces is functorial}
+We now check that the cohomology groups still exhibit the same nice functorial behavior.
+First, let's categorize the previous results we had:
+
+\begin{ques}
+ Define $\catname{CoCmplx}$,
+ the category of cochain complexes.
+\end{ques}
+
+\begin{exercise}
+ Interpret $\Hom(-,G)$ as a contravariant functor
+ \[ \Hom(-,G) : \catname{Cmplx}\op \to \catname{CoCmplx}. \]
+ This means in particular that given a chain map $f : A_\bullet \to B_\bullet$,
+ we naturally obtain a dual map $f^\vee : B^\bullet \to A^\bullet$.
+\end{exercise}
+
+\begin{ques}
+ Interpret $H^n : \catname{CoCmplx} \to \catname{Grp}$ as a functor.
+ Compose these to get a contravariant functor
+ $H^n(-;G) : \catname{Cmplx}\op \to \catname{Grp}$.
+\end{ques}
+
+Then in exact analog to our result that $H_n : \catname{hTop} \to \catname{Grp}$ is a functor, we have:
+\begin{theorem}[$H^n (-;G): \catname{hTop}\op \to \catname{Grp}$]
+ For every $n$, $H^n(-;G)$ is a contravariant functor
+ from $\catname{hTop}\op$ to $\catname{Grp}$.
+\end{theorem}
+\begin{proof}
+ The idea is to leverage the work we already did in constructing
+ the prism operator earlier.
+ First, we construct the entire sequence of functors
+ from $\catname{Top}\op \to \catname{Grp}$:
+ \begin{center}
+ \begin{tikzcd}
+ \catname{Top}\op \ar[rr, "C_\bullet"]
+ && \catname{Cmplx}\op \ar[rr, "\Hom(-;G)"]
+ && \catname{CoCmplx} \ar[rr, "H^n"]
+ && \catname{Grp} \\
+ X \ar[dd, "f"', near start]
+ && C_\bullet(X) \ar[dd, "f_\sharp"', near start]
+ && C^\bullet(X;G)
+ && H^n(X;G) \\
+ \quad \ar[rr, mapsto] &&
+ \quad \ar[rr, mapsto] &&
+ \quad \ar[rr, mapsto] &&
+ \quad \\
+ Y
+ && C_\bullet(Y)
+ && C^\bullet(Y;G) \ar[uu, "f^\sharp"', near start]
+ && H^n(Y;G). \ar[uu, "f^\ast"', near start]
+ \end{tikzcd}
+ \end{center}
+ Here $f^\sharp = (f_\sharp)^\vee$, and $f^\ast$
+ is the resulting induced map on homology groups of the cochain complex.
+
+ So as before all we have to show is that if $f \simeq g$,
+ then $f^\ast = g^\ast$.
+ Recall now that there is a prism operator such that
+ $f_\sharp - g_\sharp = P \partial + \partial P$.
+ If we apply the entire functor $\Hom(-;G)$ we get that
+ $f^\sharp - g^\sharp = \delta P^\vee + P^\vee \delta$
+ where $P^\vee : C^{n+1}(Y;G) \to C^n(X;G)$.
+ So $f^\sharp$ and $g^\sharp$ are chain homotopic, and thus $f^\ast = g^\ast$.
+\end{proof}
+
+
+\section{Universal coefficient theorem}
+We now wish to show that the cohomology groups are determined up to isomorphism
+by the homology groups: given $H_n(A_\bullet)$, we can extract $H^n(A_\bullet; G)$.
+This is achieved by the \emph{universal coefficient theorem}.
+\begin{theorem}
+ [Universal coefficient theorem]
+ Let $A_\bullet$ be a chain complex of \emph{free} abelian groups,
+ and let $G$ be another abelian group.
+ Then there is a natural short exact sequence
+ \[
+ 0 \to \Ext(H_{n-1}(A_\bullet), G) \to H^n(A_\bullet; G)
+ \taking{h} \Hom(H_n(A_\bullet), G) \to 0. \]
+ In addition, this exact sequence is \emph{split}
+ so in particular
+ \[ H^n(A_\bullet; G) \cong \Ext(H_{n-1}(A_\bullet), G)
+ \oplus \Hom(H_n(A_\bullet), G). \]
+\end{theorem}
+Fortunately, in our case of interest, $A_\bullet$ is $C_\bullet(X)$
+which is by definition free.
+
+There are two things we need to explain: what the map $h$ is, and what the functor $\Ext$ is.
+
+It's not too hard to guess how \[ h : H^n(A_\bullet; G) \to \Hom(H_n(A_\bullet), G) \] is defined.
+An element of $H^n(A_\bullet;G)$ is represented by a cocycle $\phi : A_n \to G$;
+the condition $\delta\phi = 0$ says exactly that $\phi$ vanishes on boundaries,
+so restricting $\phi$ to cycles gives a well-defined map $H_n(A_\bullet) \to G$.
+The content of the theorem is to show that $h$ is surjective with kernel $\Ext(H_{n-1}(A_\bullet), G)$.
+
+What about $\Ext$?
+It turns out that $\Ext(-,G)$ is the so-called \vocab{Ext functor}, defined as follows.
+Let $H$ be an abelian group, and consider a \vocab{free resolution} of $H$,
+by which we mean an exact sequence
+\[ \dots \taking{f_2} F_1 \taking{f_1} F_0 \taking{f_0} H \to 0 \]
+with each $F_i$ free.
+Then we can apply $\Hom(-,G)$ to get a cochain complex
+\[ \dots \xleftarrow{f_2^\vee} \Hom(F_1, G) \xleftarrow{f_1^\vee}
+ \Hom(F_0, G) \xleftarrow{f_0^\vee} \Hom(H,G) \leftarrow 0. \]
+but \emph{this cochain complex need not be exact}
+(in categorical terms, $\Hom(-,G)$ does not preserve exactness).
+We define \[ \Ext(H,G) \defeq \ker(f_2^\vee) / \img(f_1^\vee) \]
+and it's a theorem that this doesn't depend on the choice of the free resolution.
+There's a lot of homological algebra that goes into this,
+which I won't take the time to discuss;
+but the upshot of the little bit that I did include is that the $\Ext$
+functor is very easy to compute in practice, since
+you can pick any free resolution you want and compute the above.
+
+%By ``natural'', we mean that if $f : A_\bullet \to B_\bullet$ is a chain map,
+%then we obtain a commutative diagram
+%\begin{diagram}
+% 0 & \rTo & \Ext(H_{n-1}(A_\bullet), G) & \rTo
+% & H^n(A_\bullet;G) & \rTo & \Hom(H_n(A_\bullet), G) & \rTo & 0 \\
+% & & \uTo^{ \Ext(f_\ast, G) } & & \uTo^{f^\ast} & & \uTo^{\Hom(f_\ast, G)} & & \\
+% 0 & \rTo & \Ext(H_{n-1}(B_\bullet), G) & \rTo
+% & H^n(A_\bullet;G) & \rTo & \Hom(H_n(B_\bullet), G) & \rTo & 0 \\
+%\end{diagram}
+%where $f_\ast$ is the induced arrow $H_n(A_\bullet) \to H_n(B_\bullet)$.
+
+\begin{lemma}
+ [Computing the $\Ext$ functor]
+ For any abelian groups $G$, $H$, $H'$ we have
+ \begin{enumerate}[(a)]
+ \ii $\Ext(H \oplus H', G) = \Ext(H, G) \oplus \Ext(H', G)$.
+ \ii $\Ext(H,G) = 0$ for $H$ free, and
+ \ii $\Ext(\Zc n, G) = G / nG$.
+ \end{enumerate}
+\end{lemma}
+\begin{proof}
+ For (a), note that if $\dots \to F_1 \to F_0 \to H \to 0$
+ and $\dots \to F_1' \to F_0' \to H' \to 0$ are free resolutions,
+ then so is $\dots \to F_1 \oplus F_1' \to F_0 \oplus F_0' \to H \oplus H' \to 0$.
+
+ For (b), note that $0 \to H \to H \to 0$ is a free resolution.
+
+ Part (c) follows by taking the free resolution
+ \[ 0 \to \ZZ \taking{\times n} \ZZ \to \Zc n \to 0 \]
+ and applying $\Hom(-,G)$ to it.
+ \begin{ques}
+ Finish the proof of (c) from here. \qedhere
+ \end{ques}
+\end{proof}
+
+\begin{ques}
+ Some $\Ext$ practice: compute
+ $\Ext(\ZZ^{\oplus 2015}, G)$ and $\Ext(\Zc{30}, \Zc 4)$.
+\end{ques}
+
+\section{Example computation of cohomology groups}
+\prototype{Possibly $H^n(S^m)$.}
+
+The universal coefficient theorem gives us a direct way to compute
+any cohomology groups, provided we know the homology ones.
+
+\begin{example}
+ [Cohomology groups of $S^m$]
+ It is straightforward to compute $H^n(S^m; G)$ now:
+ all the $\Ext$ terms vanish since $H_n(S^m)$ is always free,
+ and hence we obtain that
+ \[ H^n(S^m; G) \cong \Hom(H_n(S^m), G) \cong
+ \begin{cases}
+ G & n=m \text{ or } n=0 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+% By UCT for reduced groups, we also have
+% \[ \wt H^n(S^m) \cong \Hom(\wt H_n(S^m), G) \cong
+% \begin{cases}
+% G & n=m \\
+% 0 & \text{otherwise}.
+% \end{cases}
+% \]
+% since $\Hom(\ZZ, G)$.
+\end{example}
+
+\begin{example}
+ [Cohomology groups of torus]
+ This example has no nonzero $\Ext$ terms either,
+ since this time $H_n(S^1 \times S^1)$ is always free.
+ So we obtain
+ \[ H^n(S^1 \times S^1; G) \cong \Hom(H_n(S^1 \times S^1), G). \]
+ Since $H_n(S^1 \times S^1)$ is $\ZZ$, $\ZZ^{\oplus 2}$, $\ZZ$
+ in dimensions $n=0,1,2$ we derive that
+ \[
+ H^n(S^1 \times S^1; G)
+ \cong
+ \begin{cases}
+ G & n = 0,2 \\
+ G^{\oplus 2} & n = 1 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+\end{example}
+
+From these examples one might notice that:
+\begin{lemma}
+ [$0$th and $1$st cohomology groups are just duals]
+ For $n = 0$ and $n = 1$, we have
+ \[ H^n(X;G) \cong \Hom(H_n(X), G). \]
+\end{lemma}
+\begin{proof}
+ It's already been shown for $n=0$.
+ For $n=1$, notice that $H_0(X)$ is free,
+ so the $\Ext$ term vanishes.
+\end{proof}
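+
+For instance, taking $G = \ZZ$ here shows that
+\[ H^1(X; \ZZ) \cong \Hom(H_1(X), \ZZ) \]
+is always torsion-free: a homomorphism to $\ZZ$ of finite order must be zero.
+So any torsion in $H_1(X)$ is invisible to $H^1(X;\ZZ)$,
+and reappears instead in the $\Ext$ term of $H^2(X;\ZZ)$.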
+
+\begin{example}
+ [Cohomology groups of Klein bottle]
+ This example will actually have an $\Ext$ term.
+ Recall that if $K$ is a Klein bottle then its homology groups are
+ $\ZZ$ in dimension $n=0$ and $\ZZ \oplus \Zc 2$ in $n=1$, and $0$ elsewhere.
+
+ For $n=0$, we again just have $H^0(K;G) \cong \Hom(\ZZ, G) \cong G$.
+ For $n=1$, the $\Ext$ term is $\Ext(H_0(K), G) \cong \Ext(\ZZ, G) = 0$
+ so \[ H^1(K;G) \cong \Hom(\ZZ \oplus \Zc2, G) \cong G \oplus \Hom(\Zc2, G). \]
+ We have that $\Hom(\Zc2,G)$ is the subgroup
+ of elements of $G$ of order dividing $2$.
+
+ But for $n=2$, we have our first interesting $\Ext$ group:
+ the exact sequence is
+ \[ 0 \to \Ext(\ZZ \oplus \Zc 2, G) \to H^2(K;G) \to \Hom(\underbrace{H_2(K)}_{=0}, G) \to 0. \]
+ Thus, we have
+ \[ H^2(K;G) \cong \left( \Ext(\ZZ,G) \oplus \Ext(\Zc2,G) \right) \oplus 0
+ \cong G/2G. \]
+ All the higher groups vanish.
+ In summary:
+ \[
+ H^n(K;G) \cong
+ \begin{cases}
+ G & n = 0 \\
+ G \oplus \Hom(\Zc2, G) & n = 1 \\
+ G/2G & n = 2 \\
+ 0 & n \ge 3.
+ \end{cases}
+ \]
+\end{example}
+
+
+\section{Relative cohomology groups}
+One can also define relative cohomology groups in the obvious way:
+dualize the chain complex
+\[ \dots \taking\partial C_1(X,A) \taking\partial C_0(X,A) \to 0 \]
+to obtain a cochain complex
+\[
+ \dots \xleftarrow\delta C^1(X,A;G) \xleftarrow\delta C^0(X,A;G)
+ \leftarrow 0.
+\]
+We can take the cohomology groups of this.
+\begin{definition}
+ The groups thus obtained are called the \vocab{relative cohomology groups},
+ denoted $H^n(X,A;G)$.
+\end{definition}
+
+In addition, we can define reduced cohomology groups as well.
+One way to do it is to take the augmented singular chain complex
+\[ \dots \taking\partial C_1(X) \taking\partial C_0(X) \taking\eps \ZZ \to 0 \]
+and dualize it to obtain
+\[
+ \dots \xleftarrow\delta C^1(X;G) \xleftarrow\delta C^0(X;G)
+ \xleftarrow{\eps^\vee} \underbrace{\Hom(\ZZ, G)}_{\cong G}
+ \leftarrow 0.
+\]
+Since the $\ZZ$ we add is also free,
+the universal coefficient theorem still applies.
+So this will give us reduced cohomology groups.
+
+However, since we already defined the relative cohomology groups,
+it is easiest to simply define:
+\begin{definition}
+ The \vocab{reduced cohomology groups} of a nonempty space $X$,
+ denoted $\wt H^n(X; G)$,
+ are defined to be $H^n(X, \{\ast\} ; G)$
+ for some point $\ast \in X$.
+\end{definition}
+
+
+\section{\problemhead}
+\begin{sproblem}
+ [Wedge sum cohomology]
+ For any $G$ and $n$ we have
+ \[
+ \wt H^n(X \vee Y; G)
+ \cong
+ \wt H^n(X; G) \oplus \wt H^n(Y; G).
+ \]
+\end{sproblem}
+
+\begin{dproblem}
+ Prove that for a field $F$ of characteristic zero and a space $X$
+ with finitely generated homology groups:
+ \[ H^k(X; F) \cong \left( H_k(X) \right)^\vee. \]
+ Thus over fields cohomology is the dual of homology.
+\end{dproblem}
+
+\begin{problem}[$\Zc2$-cohomology of $\RP^n$]
+ Prove that
+ \[
+ H^m(\RP^n; \Zc2)
+ \cong
+ \begin{cases}
+ \Zc2 & 0 \le m \le n \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+\end{problem}
diff --git a/books/napkin/compactness.tex b/books/napkin/compactness.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6f3b22dc38ee798e27559c62e3bcf002a8eb1bea
--- /dev/null
+++ b/books/napkin/compactness.tex
@@ -0,0 +1,684 @@
+\chapter{Compactness}
+One of the most important notions of topological spaces is that of \emph{compactness}.
+It generalizes the notion of ``closed and bounded'' in Euclidean space
+to any topological space
+(e.g.\ see \Cref{thm:bzw}).
+
+For metric spaces, there are two equivalent ways of formulating compactness:
+\begin{itemize}
+ \ii A ``natural'' definition using \emph{sequences}, called sequential compactness.
+ \ii A less natural definition using open covers.
+\end{itemize}
+As I alluded to earlier, sequences in metric spaces are super nice,
+but sequences in general topological spaces \emph{suck} (to the point where
+I didn't bother to define convergence of general sequences).
+So it's the second definition that will be used for general spaces.
+
+\section{Definition of sequential compactness}
+\prototype{$[0,1]$ is compact, but $(0,1)$ is not.}
+To emphasize, compactness is one of the
+\emph{best} possible properties that a metric space can have.
+\begin{definition}
+ A \vocab{subsequence} of an infinite sequence
+ $x_1, x_2, \dots$ is exactly what it sounds like:
+ a sequence $x_{i_1}, x_{i_2}, \dots$
+ where $i_1 < i_2 < \cdots$ are positive integers.
+ Note that the sequence is required to be infinite.
+\end{definition}
+Another way to think about this is ``selecting infinitely many terms''
+or ``deleting some terms'' of the sequence, depending on whether
+your glass is half empty or half full.
+
+\begin{definition}
+ A metric space $M$ is \vocab{sequentially compact} if
+ every sequence has a subsequence which converges.
+\end{definition}
+This time, let me give some non-examples before the examples.
+\begin{example}
+ [Non-examples of compact metric spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The space $\RR$ is not compact: consider the sequence $1,2,3,4,\dots$.
+ Any subsequence explodes, hence $\RR$ cannot possibly be compact.
+ \ii More generally, if a space is
+ not bounded it cannot be compact.
+ (You can prove this if you want.)
+ \ii The open interval $(0,1)$ is bounded but not compact:
+ consider the sequence $\frac12, \frac13, \frac14, \dots$.
+ No subsequence can converge to a point in $(0,1)$ because the sequence ``converges to $0$''.
+ \ii More generally, any space which is not complete cannot be compact.
+ \end{enumerate}
+\end{example}
+
+Now for the examples!
+\begin{ques}
+ Show that a finite set is compact.
+ (Pigeonhole Principle.)
+\end{ques}
+\begin{example}[Examples of compact spaces]
+ Here are some more examples of compact spaces.
+ I'll prove they're compact in just a moment;
+ for now just convince yourself they are.
+ \begin{enumerate}[(a)]
+ \ii $[0,1]$ is compact. Convince yourself of this!
+ Imagine having a large number of dots in the unit interval\dots
+ \ii The surface of a sphere, $S^2 = \left\{ (x,y,z) \mid x^2+y^2+z^2=1 \right\}$ is compact.
+ \ii The unit ball $B^2 = \left\{ (x,y) \mid x^2+y^2 \le 1 \right\}$ is compact.
+ \ii The \vocab{Hawaiian earring} living in $\RR^2$ is compact:
+ it consists of mutually tangent circles of radius $\frac 1n$ for each $n$,
+ as in \Cref{fig:hawaiian}.
+ \end{enumerate}
+\end{example}
+\begin{figure}[ht]
+ \centering
+ \begin{asy}
+ size(4cm);
+ for (int n=1; n<=15; ++n) draw(CP(dir(0)/n, origin));
+ fill(CP(dir(0)/15, origin), black);
+ \end{asy}
+ \caption{Hawaiian Earring.}
+ \label{fig:hawaiian}
+\end{figure}
+
+To aid in generating more examples, we remark:
+\begin{proposition}[Closed subsets of compacts]
+ Closed subsets of sequentially compact sets are sequentially compact.
+\end{proposition}
+\begin{ques}
+ Prove this. (It should follow easily from definitions.)
+\end{ques}
+
+We need to do a bit more work for these examples, which we do in the next section.
+
+\section{Criteria for compactness}
+%Quick note: right now I've only defined compactness for metric spaces.
+%In the next section I'll define compactness for general spaces, but
+%all the results in this section will still remain true.
+%However the proofs become much harder (in particular, \Cref{thm:tychonoff}
+%becomes notoriously difficult).
+%So you should assume all spaces in this section are metric spaces.
+
+\begin{theorem}
+ [Tychonoff's theorem]
+ \label{thm:tychonoff}
+ If $X$ and $Y$ are compact spaces, then so is $X \times Y$.
+\end{theorem}
+\begin{proof}
+ \Cref{prob:tychonoff}.
+\end{proof}
+
+We also have:
+\begin{theorem}[The interval is compact]
+ \label{thm:interval_compact}
+ $[0,1]$ is compact.
+\end{theorem}
+\begin{proof}
+ Killed by \Cref{thm:bzw};
+ however, here is a sketch of a direct proof.
+ Given a sequence in $[0,1]$, split $[0,1]$ into $[0,\half] \cup [\half,1]$.
+ By Pigeonhole, infinitely many terms of the sequence lie in the left half (say);
+ let $x_1$ be the first one and then keep only the terms in the left half after $x_1$.
+ Now split $[0, \half]$ into $[0,\frac14] \cup [\frac14,\half]$.
+ Again, by Pigeonhole, infinitely many terms fall in some half; pick one of them, call it $x_2$.
+ Rinse and repeat.
+ In this way we generate a sequence $x_1$, $x_2$, \dots which is Cauchy,
+ implying that it converges since $[0,1]$ is complete.
+\end{proof}
+
+Now we can prove the main theorem about Euclidean space:
+in $\RR^n$, compactness is equivalent to being ``closed and bounded''.
+\begin{theorem}[Bolzano-Weierstra\ss]
+ A subset of $\RR^n$ is compact if and only if it is closed and bounded.
+ \label{thm:fakeBW}
+\end{theorem}
+\begin{ques}
+ Why does this imply the spaces in our examples are compact?
+\end{ques}
+\begin{proof}
+ Well, look at a closed and bounded $S \subseteq \RR^n$.
+ Since it's bounded, it lives inside some box $[a_1, b_1] \times [a_2, b_2] \times \dots \times [a_n, b_n]$.
+ By Tychonoff's theorem, since each $[a_i, b_i]$ is compact the entire box is.
+ Since $S$ is a closed subset of this compact box, we're done.
+
+ Conversely, a compact $S$ is bounded as in the non-examples,
+ and it is closed since any convergent sequence in $S$ has a subsequence
+ converging to a point of $S$, which must be the original limit.
+\end{proof}
+
+One really has to work in $\RR^n$ for this to be true!
+In other spaces, this criterion can easily fail.
+\begin{example}[Closed and bounded but not compact]
+ Let $S = \{s_1, s_2, \dots\}$ be any infinite set equipped with the discrete metric.
+ Then $S$ is closed (since all convergent sequences are constant sequences)
+ and $S$ is bounded (all points are a distance $1$ from each other)
+ but it's certainly not compact since the sequence $s_1, s_2, \dots$ doesn't converge.
+\end{example}
+
+The Bolzano-Weierstrass theorem, which is \Cref{thm:bzw}, tells you exactly
+which sets are compact in metric spaces in a geometric way.
+
+\section{Compactness using open covers}
+\prototype{$[0,1]$ is compact.}
+There's a second related notion of compactness which I'll now define.
+The following definitions might appear very unmotivated, but bear with me.
+\begin{definition}
+ An \vocab{open cover} of a topological space $X$
+ is a collection of open sets $\{U_\alpha\}$
+ (possibly infinite or uncountable) which \emph{cover} it:
+ every point in $X$ lies in at least one of the $U_\alpha$,
+ so that \[ X = \bigcup U_\alpha. \]
+
+ A \vocab{subcover} is exactly what it sounds like:
+ it takes only some of the $U_\alpha$,
+ while ensuring that $X$ remains covered.
+\end{definition}
+
+Some art:
+\begin{center}
+\begin{asy}
+size(12cm);
+path blob = (-8,3)..(-10,1)..(-9.4,0)..(-8.2,-3)..(-8,-4)
+ ..(-2,-4.3)..(2,-4.2)..(8,-4)
+ ..(8.6,-2)..(8.2,0.5)..(8,3)
+ ..(4,3.3)..(0,3.1)..(-6,2.9)..cycle;
+void open_ball(pair O, real r, pen p) {
+ dot(O, p);
+ filldraw(CR(O, r), p+opacity(0.1), p+dashed);
+}
+
+filldraw(blob, cyan+opacity(0.2), blue+0.7);
+open_ball((-7,0.8), 3.5, red);
+open_ball((-4.7,-1.2), 4.6, orange);
+open_ball((-1.7,-0.3), 3.9, red);
+open_ball((1.4,0.9), 2.9, yellow);
+open_ball((2.5,-0.9), 3.7, orange);
+open_ball((0.4,-3.7), 1.3, yellow);
+open_ball((4.3,0.8), 2.7, red);
+open_ball((6.3,1.8), 2.4, orange);
+open_ball((6.0,-2.4), 4.0, yellow);
+
+label("$X$", (8,3), dir(5), blue);
+
+label(scale(2)*"$X = \bigcup_\alpha U_\alpha$", (0,4), blue);
+\end{asy}
+\end{center}
+
+
+\begin{definition}
+ A topological space $X$ is \vocab{quasicompact}
+ if \emph{every} open cover has a finite subcover.
+ It is \vocab{compact} if it is also Hausdorff.
+\end{definition}
+\begin{remark}
+ The ``Hausdorff'' hypothesis that I snuck in
+ is a sanity condition which is not worth worrying about unless you're
+ working on the algebraic geometry chapters,
+ since all the spaces you will deal with are Hausdorff.
+ (In fact, some authors don't even bother to include it.)
+ For example all metric spaces are Hausdorff
+ and thus this condition can be safely ignored
+ if you are working with metric spaces.
+\end{remark}
+What does this mean? Here's an example:
+\begin{example}[Example of a finite subcover]
+ Suppose we cover the unit square $M = [0,1]^2$ by
+ putting an open disk of diameter $1$ centered at every point
+ (trimming any overflow).
+ This is clearly an open cover because,
+ well, every point lies in \emph{many} of the open sets,
+ and in particular is the center of one.
+
+ But this is way overkill -- we only need about four
+ of these circles to cover the whole square.
+ That's what is meant by a ``finite subcover''.
+ \begin{center}
+ \begin{asy}
+ size(4cm);
+ draw(shift( (-0.5,-0.5) )*unitsquare, black+1);
+ real d = 0.4;
+ real r = 0.5;
+ draw(CR(dir( 45)*d, r), dotted);
+ draw(CR(dir(135)*d, r), dotted);
+ draw(CR(dir(225)*d, r), dotted);
+ draw(CR(dir(315)*d, r), dotted);
+ \end{asy}
+ \end{center}
+\end{example}
+
+Why do we care?
+Because of this:
+\begin{theorem}[Sequentially compact $\iff$ compact]
+ A metric space $M$ is sequentially compact if and only if it is compact.
+ \label{thm:compactness_metric}
+\end{theorem}
+We defer the proof to the last section.
+
+This gives us the motivation we wanted for our definition.
+Sequential compactness was a condition that made sense.
+The open-cover definition looked strange,
+but it turned out to be equivalent.
+But we now prefer it, because we have seen that
+whenever possible we want to resort to open-set-only based definitions:
+so that e.g.\ they are preserved under homeomorphism.
+
+\begin{example}[An example of non-compactness]
+ The space $X = [0,1)$ is not compact in either sense. % chktex 9
+ We can already see it is not sequentially compact, because it is not even complete (look at $x_n = 1 - \frac 1n$).
+ To see it is not compact under the covering definition, consider the sets
+ \[ U_m = \left[0, 1 - \frac{1}{m+1} \right) \] % chktex 9
+ for $m = 1, 2, \dots$. Then $X = \bigcup_m U_m$; hence the $U_m$ are indeed a cover.
+ But no finite collection of the $U_m$'s will cover $X$.
+\end{example}
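+\begin{example}
+ [Discrete spaces, revisited]
+ The infinite discrete space $S = \{s_1, s_2, \dots\}$ from before
+ fails the covering definition in the simplest possible way:
+ each singleton $\{s_i\}$ is open, so the singletons form an open cover of $S$,
+ but no finite subcollection of them covers the infinite set $S$.
+\end{example}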
+\begin{ques}
+ Convince yourself that $[0,1]$ \emph{is} compact;
+ this is a little less intuitive than it being sequentially compact.
+\end{ques}
+\begin{abuse}
+ Thus, we'll never call a metric space ``sequentially compact'' again
+ --- we'll just say ``compact''.
+ (Indeed, I kind of already did this in the previous few sections.)
+\end{abuse}
+
+
+\section{Applications of compactness}
+Compactness lets us reduce \emph{infinite} open covers to finite ones.
+Actually, it lets us do this even if the open covers are \emph{blithely stupid}.
+Very often one takes an open cover consisting
+of an open neighborhood of $x \in X$ for every single point $x$ in the space;
+this is a huge number of open sets,
+and yet compactness lets us reduce to a finite set.
+
+To give an example of a typical usage:
+\begin{proposition}[Compact $\implies$ totally bounded]
+ Let $M$ be compact. Then $M$ is totally bounded.
+\end{proposition}
+\begin{proof}[Proof using covers]
+ Fix $\eps > 0$. For every point $p \in M$, take an $\eps$-neighborhood of $p$, say $U_p$.
+ These cover $M$ for the horrendously stupid reason that each point $p$ is
+ at the very least covered by its open neighborhood $U_p$.
+ Compactness then lets us take a finite subcover,
+ i.e.\ finitely many $\eps$-neighborhoods covering all of $M$.
+\end{proof}
+
+Next, an important result about maps between compact spaces.
+\begin{theorem}[Images of compacts are compact]
+ Let $f \colon X \to Y$ be a continuous function, where $X$ is compact.
+ Then the image \[ f\im(X) \subseteq Y \] is compact.
+\end{theorem}
+\begin{proof}[Proof using covers]
+ Take any open cover $\{V_\alpha\}$ in $Y$ of $f\im(X)$.
+ By continuity of $f$,
+ it pulls back to an open cover $\{U_\alpha\}$ of $X$.
+ Thus some finite subcover of this covers $X$.
+ The corresponding $V$'s cover $f\im(X)$.
+\end{proof}
+\begin{ques}
+ Give another proof using the sequential definitions
+ of continuity and compactness.
+ (This is even easier.)
+\end{ques}
+
+Some nice corollaries of this:
+\begin{corollary}
+ [Extreme value theorem]
+ Let $X$ be compact and consider a continuous function $f : X \to \RR$.
+ Then $f$ achieves a \emph{maximum value} at some point,
+ i.e.\ there is a point $p \in X$ such that $f(p) \ge f(q)$ for any
+ other $q \in X$.
+\end{corollary}
+\begin{corollary}
+ [Intermediate value theorem]
+ Consider a continuous function $f: [0,1] \to \RR$.
+ Then the image of $f$ is of the form $[a,b]$ for some real numbers $a \le b$.
+\end{corollary}
+
+\begin{proof}[Sketch of Proof]
+ The point is that the image of $f$ is compact in $\RR$,
+ and hence closed and bounded.
+ A nonempty closed and bounded subset of $\RR$ contains its supremum,
+ and that implies the extreme value theorem.
+
+ When $X=[0,1]$, the image is also connected,
+ and a connected, closed and bounded subset of $\RR$
+ is exactly a closed interval $[a,b]$.
+ (To give a full proof, you would use the so-called \emph{least upper bound}
+ property, but that's a little involved for a bedtime story;
+ also, I think $\RR$ is boring.)
+\end{proof}
+
+\begin{example}
+ [$1/x$]
+ The compactness hypothesis is really important here.
+ Otherwise, consider the function
+ \[ (0,1) \to \RR \quad \text{ by } \quad
+ x \mapsto \frac 1x. \]
+ This function (which you plot as a hyperbola) is not bounded;
+ essentially, you can see graphically that the issue
+ is we can't extend it to a function on $[0,1]$ because it explodes near $x=0$.
+\end{example}
+
+One last application: if $M$ is a compact metric space,
+then continuous functions $f : M \to N$
+are continuous in an especially ``nice'' way:
+\begin{definition}
+ A function $f : M \to N$ of metric spaces
+ is called \vocab{uniformly continuous}
+ if for any $\eps > 0$, there exists a $\delta > 0$
+ (depending only on $\eps$) such that
+ whenever $d_M(x,y) < \delta$ we also have $d_N(fx, fy) < \eps$.
+\end{definition}
+The name means that for $\eps > 0$,
+we need a $\delta$ that works for \emph{every point} of $M$.
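+In logical notation, the only difference is the position of the quantifier $\exists \delta$:
+\begin{align*}
+ \text{continuous:} &\quad \forall \eps > 0 \; \forall x \in M \; \exists \delta > 0 \; \forall y \in M : d_M(x,y) < \delta \implies d_N(fx,fy) < \eps \\
+ \text{uniformly continuous:} &\quad \forall \eps > 0 \; \exists \delta > 0 \; \forall x, y \in M : d_M(x,y) < \delta \implies d_N(fx,fy) < \eps.
+\end{align*}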
+\begin{example}[Uniform continuity]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The functions $\RR$ to $\RR$ of the form
+ $x \mapsto ax+b$ are all uniformly continuous,
+ since one can always take $\delta = \eps/|a|$ (or $\delta=1$ if $a=0$).
+ \ii Actually, it is true that a differentiable function $\RR \to \RR$
+ with a bounded derivative is uniformly continuous.
+ (The converse is false for the reason that uniformly continuous
+ doesn't imply differentiable at all.)
+ \ii The function $f : \RR \to \RR$ by $x \mapsto x^2$
+ is \emph{not} uniformly continuous, since for large $x$,
+ tiny $\delta$ changes to $x$ lead to fairly large changes in $x^2$.
+ (If you like, you can try to prove this formally now.)
+
+ Think $f(2017.01) - f(2017) > 40$;
+ even when $\delta = 0.01$, one can still cause large changes in $f$.
+
+ \ii However, when restricted to $(0,1)$ or $[0,1]$
+ the function $x \mapsto x^2$ becomes uniformly continuous.
+ (For $\eps > 0$ one can now pick for example $\delta = \min\{1,\eps\}/3$.)
+
+ \ii The function $(0,1) \to \RR$ by $x \mapsto 1/x$ is \emph{not}
+ uniformly continuous (same reason as before).
+ \end{enumerate}
+\end{example}
+
+Now, as promised:
+\begin{proposition}[Continuous on compact $\implies$ uniformly continuous]
+ If $M$ is compact and $f \colon M \to N$ is continuous,
+ then $f$ is uniformly continuous.
+\end{proposition}
+\begin{proof}[Proof using sequences]
+ Assume for contradiction that $f$ is not uniformly continuous:
+ then there is some $\eps > 0$ such that for every $\delta = 1/k$
+ there exist points $x_k$ and $y_k$ within $\delta$ of each
+ other but with images at least $\eps$ apart.
+ By compactness, take a convergent subsequence $x_{i_k} \to p$.
+ Then $y_{i_k} \to p$ as well, since the $x_k$'s and $y_k$'s are close to each other.
+ So both sequences $f(x_{i_k})$ and $f(y_{i_k})$ should converge to $f(p)$ by sequential continuity,
+ but this can't be true since the two sequences are always $\eps$ apart.
+\end{proof}
+
+\section{(Optional) Equivalence of formulations of compactness}
+We will prove that:
+\begin{theorem}
+ [Heine-Borel for general metric spaces]
+ For a metric space $M$, the following are equivalent:
+ \begin{enumerate}[(i)]
+ \ii Every sequence has a convergent subsequence,
+ \ii The space $M$ is complete and totally bounded, and
+ \ii Every open cover has a finite subcover.
+ \end{enumerate}
+\end{theorem}
+We leave the proof that (i) $\iff$ (ii) as \Cref{thm:bzw};
+the idea of the proof is much in the spirit of \Cref{thm:interval_compact}.
+\begin{proof}
+ [Proof that (i) and (ii) $\implies$ (iii)]
+ We prove the following lemma, which is interesting in its own right.
+ \begin{lemma}
+ [Lebesgue number lemma]
+ Let $M$ be a compact metric space and $\{U_\alpha\}$ an open cover.
+ Then there exists a real number $\delta > 0$,
+ called a \vocab{Lebesgue number} for that covering,
+ such that the $\delta$-neighborhood of any point
+ $p$ lies entirely in some $U_\alpha$.
+ \end{lemma}
+ \begin{subproof}[Proof of lemma]
+ Assume for contradiction that for every $\delta = 1/k$
+ there is a point $x_k \in M$
+ such that its $1/k$-neighborhood isn't contained in any $U_\alpha$.
+ In this way we construct a sequence $x_1$, $x_2$, \dots;
+ thus we're allowed to take a subsequence which converges to some $x$.
+ Then for every $\eps > 0$ we can find $n$ along the subsequence with $d(x_n, x) + 1/n < \eps$,
+ so that the $1/n$-neighborhood of $x_n$ lies inside the $\eps$-neighborhood of $x$;
+ hence the $\eps$-neighborhood of $x$ isn't contained in any $U_\alpha$ either.
+ This is impossible, because we assumed $x$ was covered by some open set.
+ \end{subproof}
+ Now, take a Lebesgue number $\delta$ for the covering.
+ Since $M$ is totally bounded, finitely many $\delta$-neighborhoods cover the space,
+ so finitely many $U_\alpha$ do as well.
+\end{proof}
+
+\begin{proof}
+ [Proof that (iii) $\implies$ (ii)]
+ One step is immediate:
+ \begin{ques}
+ Show that the covering condition $\implies$ totally bounded.
+ \end{ques}
+ The tricky part is showing $M$ is complete.
+ Assume for contradiction it isn't: then there is a Cauchy sequence $(x_k)$
+ which doesn't converge to any point of $M$.
+ \begin{ques}
+ Show that this implies for each $p \in M$, there is an $\eps_p$-neighborhood $U_p$
+ which contains at most finitely many of the points of the sequence $(x_k)$.
+ (You will have to use the fact that $x_k \not\to p$ and $(x_k)$ is Cauchy.)
+ \end{ques}
+ Now $M = \bigcup_p U_p$, so there is a
+ finite subcover of these open neighborhoods;
+ but finitely many such $U_p$ can contain only finitely
+ many terms of the sequence, which is a contradiction.
+\end{proof}
+
+
+\section{\problemhead}
+The later problems are pretty hard;
+some have the flavor of IMO 3/6-style constructions.
+It's important to draw lots of pictures so one can tell what's happening.
+% I'd be happy to learn of any easier ones.
+Of these \Cref{thm:bzw} is definitely my favorite.
+
+\begin{problem}
+ Show that the closed interval $[0,1]$ and
+ open interval $(0,1)$ are not homeomorphic.
+ \begin{hint}
+ $[0,1]$ is compact.
+ \end{hint}
+ \begin{sol}
+ Compactness is preserved under homeomorphism,
+ but $[0,1]$ is compact while $(0,1)$ is not.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Let $X$ be a topological space with the discrete topology.
+ Under what conditions is $X$ compact?
+ \begin{hint}
+ If and only if it is finite.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [The cofinite topology is quasicompact only]
+ We let $X$ be an infinite set and equip it with the
+ \vocab{cofinite topology}:
+ the open sets are the empty set and complements of finite sets.
+ This makes $X$ into a topological space.
+ Show that $X$ is quasicompact but not Hausdorff.
+\end{problem}
+
+\begin{problem}
+ [Cantor's intersection theorem]
+ \label{prob:cantor_intersect}
+ Let $X$ be a compact topological space, and suppose
+ \[ X = K_0 \supseteq K_1 \supseteq K_2 \supseteq \dots \]
+ is an infinite sequence of nested nonempty closed subsets.
+ Show that $\bigcap_{n \ge 0} K_n \neq \varnothing$.
+\end{problem}
+
+%\begin{sproblem}[Compact Implies Bounded]
+% Let $f : X \to \RR$ be a continuous function,
+% where $X$ is a compact topological space.
+% Show that $f$ is bounded.
+% \begin{hint}
+% Immediate by the fact that the image of $f$ is compact,
+% and hence bounded.
+% Remember this!
+% \end{hint}
+%\end{sproblem}
+
+\begin{problem}
+ [Tychonoff's theorem]
+ Let $X$ and $Y$ be compact metric spaces. Show that $X \times Y$ is compact.
+ (This is also true for general topological spaces,
+ but the proof is surprisingly hard,
+ and we haven't even defined $X \times Y$ in general yet.)
+ \label{prob:tychonoff}
+ \begin{hint}
+ Suppose $p_i = (x_i, y_i)$ is a sequence in $X \times Y$ ($i=1,2,\dots$).
+ Take a sub-sequence such that the $x$-coordinate converges
+ (throwing out some terms).
+ Then take a sub-sequence of \emph{that} sub-sequence
+ such that $y$-coordinate converges (throwing out more terms).
+ \end{hint}
+ \begin{sol}
+ Suppose $p_i = (x_i, y_i)$ is a sequence in $X \times Y$ ($i=1,2,\dots$).
+ Looking on the $X$ side, some subsequence converges:
+ for the sake of illustration say it's $x_1, x_4, x_9, x_{16}, \dots \to x$.
+ Then look at the corresponding sequence $y_1, y_4, y_9, y_{16}, \dots$.
+ Using compactness of $Y$, it has a convergent subsequence, say
+ $y_1, y_{16}, y_{81}, y_{256}, \dots \to y$.
+ Then $p_1, p_{16}, p_{81}, \dots$ will converge to $(x,y)$.
+
+ One common mistake is to just conclude
+ that $(x_n)$ has a convergent subsequence
+ and that $(y_n)$ does too.
+ But these sequences could be totally unrelated.
+ For this proof to work,
+ you do need to apply compactness of $X$ first,
+ and then compactness of $Y$ on the resulting \emph{filtered}
+ sequence like we did here.
+ \end{sol}
+\end{problem}
+
+\begin{dproblem}
+ [Bolzano-Weierstra\ss\ theorem for general metric spaces]
+ \gim
+ Prove that a metric space $M$ is sequentially compact
+ if and only if it is complete and totally bounded.
+ \label{thm:bzw}
+ \begin{hint}
+ Mimic the proof of \Cref{thm:interval_compact}.
+ The totally bounded condition lets you do Pigeonhole.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ [Almost Arzel\`a-Ascoli theorem]
+ \gim
+ Let
+ $f_1, f_2, \ldots \colon [0,1] \to [-100,100]$
+ be an \vocab{equicontinuous} sequence of functions, meaning
+ \[
+ \forall \eps > 0 \quad
+ \exists \delta > 0 \quad
+ \forall n \;
+ \forall x,y \quad
+ \left( \left\lvert x-y \right\rvert <\delta
+ \implies \left\lvert f_n(x) - f_n(y) \right\rvert < \eps \right)
+ \]
+ Show that we can extract a subsequence $f_{i_1}, f_{i_2}, \dots$
+ of these functions such that for every $x \in [0,1]$,
+ the sequence $f_{i_1}(x)$, $f_{i_2}(x)$, \dots\ converges.
+\end{problem}
+
+\begin{problem}
+ \gim
+ Let $M = (M,d)$ be a bounded metric space.
+ Suppose that whenever $d'$ is another metric on $M$
+ for which $(M,d)$ and $(M,d')$ are homeomorphic
+ (i.e.\ have the same open sets), then $d'$ is also bounded.
+ Prove that $M$ is compact.
+\end{problem}
+
+\begin{problem}
+ \yod
+ In this problem a ``circle''
+ refers to the boundary of a disk with \emph{nonzero} radius.
+ \begin{enumerate}[(a)]
+ \ii Is it possible to partition the plane $\RR^2$
+ into disjoint circles?
+ \ii From the plane $\RR^2$ we delete two distinct points $p$ and $q$.
+ Is it possible to partition the remaining points into disjoint circles?
+ \end{enumerate}
+ \begin{hint}
+ The answer to both parts is no.
+
+ For (a) use \Cref{prob:cantor_intersect}.
+
+ For (b), color each circle in the partition
+ based on whether it contains $p$ but not $q$,
+ $q$ but not $p$, or both.
+ \end{hint}
+ \begin{sol}
+ Part (a) follows by the Cantor intersection theorem
+ (\Cref{prob:cantor_intersect}).
+ Assume for contradiction such a partition existed.
+ Take any of the circles $C_0$, and let $K_0$ denote the closed disk
+ with boundary $C_0$.
+ Now take the circle $C_1$ passing through the center of $C_0$,
+ and let $K_1$ denote the closed disk with boundary $C_1$.
+ If we repeat in this way,
+ we get a nested sequence $K_0 \supseteq K_1 \supseteq \dots$
+ and the radii of $C_i$ approach zero
+ (since each is at most half the previous one).
+ Thus some point $p$ lies in $\bigcap_n K_n$ which is impossible.
+
+ Now for part (b),
+ again assume for contradiction a partition into circles exists.
+ Color a circle magenta if it contains $p$ but not $q$
+ and color a circle cyan if it contains $q$ but not $p$.
+ Color $p$ itself magenta and $q$ itself cyan as well.
+ Finally, color a circle neon yellow if it contains both $p$ and $q$.
+ (When we refer to coloring a circle,
+ we mean to color all the points on it.)
+
+ By repeating the argument in (a) there are no circles
+ enclosing neither $p$ nor $q$.
+ Hence every point is either magenta, cyan, or neon yellow.
+ Now note that given any magenta circle,
+ its interior is completely magenta.
+ Actually, the magenta circles can be totally ordered
+ by inclusion (since they can't intersect).
+ So we consider two cases:
+ \begin{itemize}
+ \ii If there is a magenta circle which is maximal by inclusion
+ (i.e.\ a magenta circle not contained in any other magenta circle)
+ then the set of all magenta points is just a closed disk.
+ \ii If there is no such magenta circle,
+ then the set of magenta points can also be expressed
+ as the union over all magenta circles of their interiors.
+ This is a union of open sets, so it is itself open.
+ \end{itemize}
+
+ We conclude the set of magenta points is
+ either a closed disk or an open set.
+ Similarly for the set of cyan points.
+ Moreover, the set of such points is convex.
+
+ To finish the problem:
+ \begin{itemize}
+ \ii Suppose there are no neon yellow points.
+ If the magenta points form a closed disk,
+ then the cyan points are $\mathbb R^2$ minus a disk which is not convex.
+ Contradiction. So the magenta points must be open.
+ Similarly the cyan points must be open.
+ But $\mathbb R^2$ is connected,
+ so it can't be written as the union of two open sets.
+ \ii Now suppose there are neon yellow points.
+ We claim there is a neon yellow circle minimal by inclusion.
+ If not, then repeat the argument of (a) to get a contradiction,
+ since any neon yellow circle must have diameter at least the distance from $p$ to $q$.
+ So we can find a neon yellow circle $\mathscr C$ whose
+ interior is all magenta and cyan.
+ Now repeat the argument of the previous part,
+ replacing $\mathbb R^2$ by the interior of $\mathscr C$.
+ \end{itemize}
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/constructions.tex b/books/napkin/constructions.tex
new file mode 100644
index 0000000000000000000000000000000000000000..edd7179682cf5dd1112bc598f4334f53bc871ec1
--- /dev/null
+++ b/books/napkin/constructions.tex
@@ -0,0 +1,716 @@
+\chapter{Some topological constructions}
+In this short chapter we briefly describe some common spaces and constructions
+in topology that we haven't yet discussed.
+
+\section{Spheres}
+Recall that
+\[ S^n = \left\{ (x_0, \dots, x_n)
+ \mid x_0^2 + \dots + x_n^2 = 1 \right\} \subset \RR^{n+1} \]
+is the surface of an $n$-sphere while
+\[ D^{n+1} = \left\{ (x_0, \dots, x_n)
+ \mid x_0^2 + \dots + x_n^2 \le 1 \right\} \subset \RR^{n+1} \]
+is the corresponding \emph{closed ball}.
+(So for example, $D^2$ is a disk in the plane, while $S^1$ is the unit circle.)
+\begin{exercise}
+ Show that the open ball $D^n \setminus S^{n-1}$
+ is homeomorphic to $\RR^n$.
+\end{exercise}
+
+In particular, $S^0$ consists of two points,
+while $D^1$ can be thought of as the interval $[-1,1]$.
+
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ draw(dir(0)--dir(180), blue);
+ dot(dir(0), red+4);
+ dot(dir(180), red+4);
+ label("$S^0$", dir(0), dir(90), red);
+ label("$D^1$", dir(0)--dir(180), blue);
+ add(shift(-4,0)*CC());
+ unitsize(2cm);
+ filldraw(unitcircle, lightblue+opacity(0.2), red);
+ label("$D^2$", origin, blue);
+ label("$S^1$", dir(45), dir(45), red);
+ \end{asy}
+\end{center}
+
+
+\section{Quotient topology}
+\prototype{$D^n / S^{n-1} = S^n$, or the torus.}
+
+Given a space $X$, we can \emph{identify} some of the points together
+by any equivalence relation $\sim$;
+for an $x \in X$ we denote its equivalence class by $[x]$.
+Geometrically, this is the space achieved by welding together points
+equivalent under $\sim$.
+
+Formally,
+\begin{definition}
+ Let $X$ be a topological space, and $\sim$ an equivalence relation
+ on the points of $X$.
+ Then $X / {\sim}$ is the space whose
+ \begin{itemize}
+ \ii Points are equivalence classes of $X$, and
+ \ii $U \subseteq X / {\sim}$ is open if and only if
+ $\left\{ x \in X \text{ such that } [x] \in U \right\}$
+ is open in $X$.
+ \end{itemize}
+\end{definition}
+As far as I can tell, this definition is mostly useless for intuition,
+so here are some examples.
+
+\begin{example}[Interval modulo endpoints]
+ Suppose we take $D^1 = [-1, 1]$
+ and quotient by the equivalence relation which identifies
+ the endpoints $-1$ and $1$.
+ (Formally, $x \sim y \iff (x=y) \text{ or } \{x,y\} = \{-1,1\}$.)
+ In that case, we simply recover $S^1$:
+ \begin{center}
+ \begin{asy}
+ size(8cm);
+ draw(dir(0)--dir(180), blue);
+    dot("$1$", dir(0), dir(90), red+4);
+ dot("$-1$", dir(180), dir(90), red+4);
+ label("$D^1$", dir(0)--dir(180), blue);
+ add(shift(-4,0)*CC());
+ unitsize(2cm);
+ draw(unitcircle, blue);
+ label("$S^1 \approx D^1 / {\sim}$", dir(45), dir(45), blue);
+ dot("$-1 \sim 1$", dir(90), dir(90), red);
+ \end{asy}
+ \end{center}
+ Observe that a small open neighborhood around $-1 \sim 1$ in the quotient space
+ corresponds to two half-intervals at $-1$ and $1$ in the original space $D^1$.
+ This should convince you the definition we gave is the right one.
+\end{example}
+
+\begin{example}[More quotient spaces]
+ Convince yourself that:
+ \begin{itemize}
+ \ii Generalizing the previous example, $D^n$ modulo its boundary $S^{n-1}$ is $S^n$.
+ \ii Given a square $ABCD$, suppose we identify segments $AB$ and $DC$ together.
+ Then we get a cylinder. (Think elementary school, when you would tape
+ up pieces of paper together to get cylinders.)
+ \ii In the previous example, if we also identify $BC$ and $DA$ together,
+ then we get a torus. (Imagine taking our cylinder and putting the two
+ circles at the end together.)
+ \ii Let $X = \RR$, and let $x \sim y$ if $y -x \in \ZZ$.
+ Then $X / {\sim}$ is $S^1$ as well.
+ \end{itemize}
+\end{example}
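+
+For the last example, one candidate homeomorphism can be written down
+explicitly (writing $[t]$ for the equivalence class of $t \in \RR$):
+\[ \RR / {\sim} \to S^1, \qquad
+	[t] \mapsto (\cos 2\pi t, \; \sin 2\pi t). \]
+This is well defined precisely because $\cos$ and $\sin$ have period $2\pi$:
+if $y - x \in \ZZ$, then $x$ and $y$ map to the same point of $S^1$.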
+
+One special case that we did above:
+\begin{definition}
+ Let $A \subseteq X$.
+ Consider the equivalence relation which identifies
+ all the points of $A$ with each other
+ while leaving all remaining points inequivalent.
+ (In other words, $x \sim y$ if $x=y$ or $x,y \in A$.)
+ Then the resulting quotient space is denoted $X/A$.
+\end{definition}
+
+So in this notation, \[ D^n / S^{n-1} = S^n. \]
+
+\begin{abuse}
+ Note that I'm deliberately being sloppy, and saying
+ ``$D^n / S^{n-1} = S^n$'' or ``$D^n / S^{n-1}$ \emph{is} $S^n$'',
+ when I really ought to say ``$D^n / S^{n-1}$ is homeomorphic to $S^n$''.
+ This is a general theme in mathematics:
+	objects which are homeomorphic/isomorphic/etc.\ are generally
+ not carefully distinguished from each other.
+\end{abuse}
+
+\section{Product topology}
+\prototype{$\RR \times \RR$ is $\RR^2$, $S^1 \times S^1$ is the torus.}
+
+\begin{definition}
+ Given topological spaces $X$ and $Y$,
+ the \vocab{product topology} on $X \times Y$ is the space whose
+ \begin{itemize}
+ \ii Points are pairs $(x,y)$ with $x \in X$, $y \in Y$, and
+	\ii Topology is given as follows:
+	a \emph{basis} of the topology for $X \times Y$
+	consists of the sets $U \times V$,
+	for $U \subseteq X$ open and $V \subseteq Y$ open.
+ \end{itemize}
+\end{definition}
+
+\begin{remark}
+ It is not hard to show that, in fact,
+ one need only consider basis elements for $U$ and $V$.
+ That is to say,
+ \[ \left\{ U \times V \mid
+ U,V \text{ basis elements for } X,Y \right\} \]
+ is also a basis for $X \times Y$.
+
+ We really do need to fiddle with the basis:
+ in $\RR \times \RR$, an open unit disk better be open,
+ despite not being of the form $U \times V$.
+\end{remark}
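+
+To spell out why the open unit disk is still open in the product topology:
+any union of basis elements is open, and one can check that
+\[ \left\{ (x,y) \mid x^2 + y^2 < 1 \right\}
+	= \bigcup \left\{ (a,b) \times (c,d)
+	\mid (a,b) \times (c,d) \text{ lies inside the disk} \right\} \]
+since every point of the disk is contained in some such open box.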
+
+This does exactly what you think it would.
+\begin{example}[The unit square]
+ Let $X = [0,1]$ and consider $X \times X$.
+ We of course expect this to be the unit square.
+ Pictured below is an open set of $X \times X$ in the basis.
+ \begin{center}
+ \begin{asy}
+ size(6cm);
+ filldraw(unitsquare, opacity(0.2)+lightblue, black);
+
+ pair B = (0,1);
+ pair A = (1,0);
+ fill(box(0.3*A+0.2*B,0.6*A+0.7*B), lightred+opacity(0.5));
+ label("$U \times V$", (0.45,0.45), brown);
+
+ draw(0.3*A--(0.3*A+B), heavygreen+dashed+1);
+ draw(0.6*A--(0.6*A+B), heavygreen+dashed+1);
+ draw(0.2*B--(0.2*B+A), heavycyan+dashed+1);
+ draw(0.7*B--(0.7*B+A), heavycyan+dashed+1);
+
+ draw( 0.3*A--0.6*A, heavygreen+2 );
+ opendot( 0.3*A, heavygreen+2);
+ opendot( 0.6*A, heavygreen+2);
+ label("$U$", 0.45*A, dir(-90), heavygreen);
+ draw( 0.2*B--0.7*B, heavycyan+2 );
+ opendot( 0.2*B, heavycyan+2);
+ opendot( 0.7*B, heavycyan+2);
+ label("$V$", 0.45*B, dir(180), heavycyan);
+ \end{asy}
+ \end{center}
+\end{example}
+\begin{exercise}
+ Convince yourself this basis gives the same topology
+ as the product metric on $X \times X$.
+ So this is the ``right'' definition.
+\end{exercise}
+
+\begin{example}[More product spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\RR \times \RR$ is the Euclidean plane.
+ \ii $S^1 \times [0,1]$ is a cylinder.
+ \ii $S^1 \times S^1$ is a torus! (Why?)
+ \end{enumerate}
+\end{example}
+
+\section{Disjoint union and wedge sum}
+\prototype{$S^1 \vee S^1$ is the figure eight.}
+
+The disjoint union of two spaces is geometrically exactly
+what it sounds like: you just imagine the two spaces side by side.
+For completeness, here is the formal definition.
+\begin{definition}
+ Let $X$ and $Y$ be two topological spaces.
+ The \vocab{disjoint union}, denoted $X \amalg Y$, is defined by
+ \begin{itemize}
+ \ii The points are the disjoint union $X \amalg Y$, and
+ \ii A subset $U \subseteq X \amalg Y$ is open if
+ and only if $U \cap X$ and $U \cap Y$ are open.
+ \end{itemize}
+\end{definition}
+\begin{exercise}
+ Show that the disjoint union of two nonempty spaces is disconnected.
+\end{exercise}
+
+More interesting is the wedge sum, where two topological spaces $X$
+and $Y$ are fused together only at a single base point.
+\begin{definition}
+ Let $X$ and $Y$ be topological spaces, and $x_0 \in X$ and $y_0 \in Y$
+ be points.
+ We define the equivalence relation $\sim$ by declaring $x_0 \sim y_0$ only.
+ Then the \vocab{wedge sum} of two spaces is defined as
+ \[ X \vee Y = (X \amalg Y) / {\sim}. \]
+\end{definition}
+
+\begin{example}
+ [$S^1 \vee S^1$ is a figure eight]
+ Let $X = S^1$ and $Y = S^1$,
+ and let $x_0 \in X$ and $y_0 \in Y$ be any points.
+ Then $X \vee Y$ is a ``figure eight'': it is two
+ circles fused together at one point.
+ \begin{center}
+ \begin{asy}
+ size(3cm);
+ draw(shift(-1,0)*unitcircle);
+ draw(shift(1,0)*unitcircle);
+ dotfactor *= 1.4;
+ dot(origin);
+ \end{asy}
+ \end{center}
+\end{example}
+\begin{abuse}
+ We often don't mention $x_0$ and $y_0$ when they are understood
+ (or irrelevant). For example, from now on we will just
+ write $S^1 \vee S^1$ for a figure eight.
+\end{abuse}
+
+\begin{remark}
+ Annoyingly, in \LaTeX\ \verb+\wedge+ gives $\wedge$ instead
+ of $\vee$ (which is \verb+\vee+).
+ So this really should be called the ``vee product'', but too late.
+\end{remark}
+
+
+\section{CW complexes}
+Using this construction, we can start building some spaces.
+One common way to do so is using a so-called \vocab{CW complex}.
+Intuitively, a CW complex is built as follows:
+\begin{itemize}
+ \ii Start with a set of points $X^0$.
+ \ii Define $X^1$ by taking some line segments (copies of $D^1$)
+ and fusing the endpoints (copies of $S^0$) onto $X^0$.
+ \ii Define $X^2$ by taking copies of $D^2$ (a disk)
+ and welding its boundary (a copy of $S^1$) onto $X^1$.
+ \ii Repeat inductively up until a finite stage $n$;
+ we say $X$ is \vocab{$n$-dimensional}.
+\end{itemize}
+The resulting space $X$ is the CW-complex.
+The set $X^k$ is called the \vocab{$k$-skeleton} of $X$.
+Each $D^k$ is called a \vocab{$k$-cell}; it is customary to
+denote it by $e_\alpha^k$ where $\alpha$ is some index.
+We say that $X$ is \vocab{finite} if only finitely many cells were used.
+\begin{abuse}
+ Technically, most sources (like \cite{ref:hatcher}) allow one to
+ construct infinite-dimensional CW complexes.
+ We will not encounter any such spaces in the Napkin.
+\end{abuse}
+
+\begin{example}
+ [$D^2$ with $2+2+1$ and $1+1+1$ cells]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii First, we start with $X^0$ having two points $e_a^0$ and $e_b^0$.
+ Then, we join them with two $1$-cells $D^1$ (green),
+ call them $e_c^1$ and $e_d^1$.
+ The endpoints of each $1$-cell (the copy of $S^0$) get identified
+ with distinct points of $X^0$; hence $X^1 \cong S^1$.
+ Finally, we take a single $2$-cell $e^2$ (yellow) and weld it in,
+ with its boundary fitting into the copy of $S^1$ that we just drew.
+ This gives the figure on the left.
+
+ \ii In fact, one can do this using just $1+1+1=3$ cells.
+ Start with $X^0$ having a single point $e^0$.
+ Then, use a single $1$-cell $e^1$, fusing its two endpoints
+ into the single point of $X^0$.
+	Then, one can weld in a single $2$-cell $e^2$ as before,
+	with its boundary wrapping around the copy of $S^1$ that $X^1$ forms,
+	giving $D^2$ as on the right.
+ \end{enumerate}
+ \begin{center}
+ \begin{asy}
+ size(4cm);
+ filldraw(unitcircle, opacity(0.2)+yellow, heavygreen);
+ dotfactor *= 1.4;
+ dot(dir(90), blue);
+ dot(dir(-90), blue);
+ label("$e_a^0$", dir(90), dir(90), blue);
+ label("$e_b^0$", dir(-90), dir(-90), blue);
+ label("$e_c^1$", dir(0), dir(0), heavygreen);
+ label("$e_d^1$", dir(180), dir(180), heavygreen);
+ label("$e^2$", origin, origin);
+ \end{asy}
+ \qquad
+ \begin{asy}
+ size(4cm);
+ filldraw(unitcircle, opacity(0.2)+yellow, heavygreen);
+ dotfactor *= 1.4;
+ dot(dir(90), blue);
+ label("$e^0$", dir(90), dir(90), blue);
+ label("$e^1$", dir(-90), dir(-90), heavygreen);
+ label("$e^2$", origin, origin);
+ \end{asy}
+ \end{center}
+\end{example}
+
+\begin{example}
+ [$S^n$ as a CW complex]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii One can obtain $S^n$ (for $n \ge 1$) with just two cells.
+ Namely, take a single point $e^0$ for $X^0$, and to obtain $S^n$
+ take $D^n$ and weld its entire boundary into $e^0$.
+
+ We already saw this example in the beginning with $n=2$,
+ when we saw that the sphere $S^2$ was the result when we fuse
+ the boundary of a disk $D^2$ together.
+
+ \ii Alternatively, one can do a ``hemisphere'' construction,
+ by constructing $S^n$ inductively using two cells in each dimension.
+ So $S^0$ consists of two points, then $S^1$ is obtained
+ by joining these two points by two segments ($1$-cells),
+ and $S^2$ is obtained by gluing two hemispheres (each a $2$-cell)
+ with $S^1$ as its equator.
+ \end{enumerate}
+\end{example}
+
+\begin{definition}
+ Formally, for each $k$-cell $e^k_\alpha$ we want to add to $X^k$,
+ we take its boundary $S^{k-1}_\alpha$ and weld it onto
+ $X^{k-1}$ via an \vocab{attaching map} $S^{k-1}_\alpha \to X^{k-1}$.
+ Then
+ \[ X^k = X^{k-1} \amalg \left(\coprod_\alpha e^k_\alpha\right) / {\sim} \]
+ where $\sim$ identifies each boundary point of $e^k_\alpha$
+ with its image in $X^{k-1}$.
+\end{definition}
+
+
+\section{The torus, Klein bottle, $\RP^n$, $\CP^n$}
+\label{sec:top_spaces}
+We now present four of the most important examples of CW complexes.
+
+\subsection{The torus}
+The \vocab{torus} can be formed by taking
+a square and identifying the opposite edges in the same direction:
+if you walk off the right edge, you re-appear at the corresponding
+point on the left edge.
+(Think \emph{Asteroids} from Atari!)
+
+\begin{center}
+ \begin{asy}
+ size(2cm);
+ fill(unitsquare, yellow+opacity(0.2));
+ pair C = (0,0);
+ pair B = (1,0);
+ pair A = (1,1);
+ pair D = (0,1);
+ draw(A--B, red, MidArrow);
+ draw(B--C, blue, MidArrow);
+ draw(D--C, red, MidArrow);
+ draw(A--D, blue, MidArrow);
+ \end{asy}
+\end{center}
+
+Thus the torus is $(\RR/\ZZ)^2 \cong S^1 \times S^1$.
+
+Note that all four corners get identified together to a single point. One
+can realize the torus in $3$-space by treating the square as a sheet of paper,
+taping together the left and right (red) edges to form a cylinder,
+then bending the cylinder and fusing the top and bottom (blue) edges
+to form the torus.
+\begin{center}
+ \includegraphics[width=0.8\textwidth]{media/Projection_color_torus.jpg}
+ \\ \scriptsize Image from \cite{img:torus}
+\end{center}
+
+The torus can be realized as a CW complex with
+\begin{itemize}
+ \ii A $0$-skeleton consisting of a single point,
+ \ii A $1$-skeleton consisting of two $1$-cells $e^1_a$, $e^1_b$, and
+ \begin{center}
+ \begin{asy}
+ unitsize(1cm);
+ draw(shift(-1,0)*unitcircle, blue, MidArrow);
+ draw(shift(1,0)*rotate(180)*unitcircle, red, MidArrow);
+ label("$e^1_a$", 2*dir(180), dir(180), blue);
+ label("$e^1_b$", 2*dir(0), dir(0), red);
+ dotfactor *= 1.4;
+ dot("$e^0$", origin, dir(0));
+ \end{asy}
+ \end{center}
+ \ii A $2$-skeleton with a single $2$-cell $e^2$,
+ whose circumference is divided into four parts,
+ and welded onto the $1$-skeleton ``via $aba\inv b \inv$''.
+ This means: wrap a quarter of the circumference around $e^1_a$,
+ then another quarter around $e^1_b$,
+ then the third quarter around $e^1_a$ but in the opposite direction,
+ and the fourth quarter around $e^1_b$ again in the opposite direction as before.
+ \begin{center}
+ \begin{asy}
+ size(3cm);
+ fill(unitcircle, yellow+opacity(0.2));
+ defaultpen(linewidth(1));
+ draw(arc(origin, 1, 45, 135), blue, MidArrow);
+ draw(arc(origin, 1, 315, 225), blue, MidArrow);
+ draw(arc(origin, 1, 135, 225), red, MidArrow);
+ draw(arc(origin, 1, 45, -45), red, MidArrow);
+ label("$e^2$", origin, origin);
+ \end{asy}
+ \end{center}
+\end{itemize}
+We say that $aba\inv b\inv$ is the \vocab{attaching word};
+this shorthand will be convenient later on.
+
+\subsection{The Klein bottle}
+The \vocab{Klein bottle} is defined similarly to
+the torus, except one pair of edges is identified in the opposite manner,
+as shown.
+
+\begin{center}
+ \begin{asy}
+ size(2cm);
+ fill(unitsquare, yellow+opacity(0.2));
+ pair C = (0,0);
+ pair B = (1,0);
+ pair A = (1,1);
+ pair D = (0,1);
+ draw(A--B, red, MidArrow);
+ draw(C--B, blue, MidArrow);
+ draw(D--C, red, MidArrow);
+ draw(A--D, blue, MidArrow);
+ \end{asy}
+\end{center}
+
+Unlike the torus, one cannot realize this in $3$-space
+without self-intersecting. One can tape together the red edges
+as before to get a cylinder, but to then fuse the resulting blue
+circles in opposite directions is not possible in 3D.
+Nevertheless, we often draw a picture in 3-dimensional space
+in which we tacitly allow the cylinder to intersect itself.
+
+\begin{center}
+ \begin{minipage}[c]{0.5\textwidth}
+ \includegraphics[width=\textwidth]{media/klein-fold.png}
+ \end{minipage}
+ \quad
+ \begin{minipage}[c]{0.3\textwidth}
+ \includegraphics[width=\textwidth]{media/KleinBottle-01.png}
+ \end{minipage}
+ \par \scriptsize Image from \cite{img:kleinfold,img:kleinbottle}
+\end{center}
+
+
+Like the torus, the Klein bottle is realized as a CW complex with
+\begin{itemize}
+ \ii One $0$-cell,
+ \ii Two $1$-cells $e^1_a$ and $e^1_b$, and
+ \ii A single $2$-cell attached this time via the word $abab\inv$.
+\end{itemize}
+
+\subsection{Real projective space}
+Let's start with $n=2$.
+The space $\RP^2$ is obtained if we reverse both directions of
+the square from before, as shown.
+
+\begin{center}
+ \begin{asy}
+ size(2cm);
+ fill(unitsquare, yellow+opacity(0.2));
+ pair C = (0,0);
+ pair B = (1,0);
+ pair A = (1,1);
+ pair D = (0,1);
+ draw(B--A, red, MidArrow);
+ draw(C--B, blue, MidArrow);
+ draw(D--C, red, MidArrow);
+ draw(A--D, blue, MidArrow);
+ \end{asy}
+\end{center}
+
+However, once we do this the fact that the original
+polygon is a square is kind of irrelevant;
+we can combine a red and blue edge to get the single purple edge.
+Equivalently, one can think of this as a circle with half
+its circumference identified with the other half:
+
+\begin{center}
+ \begin{asy}
+ size(3cm);
+ dotfactor *= 2;
+ fill(unitcircle, opacity(0.2)+yellow);
+ draw(dir(-90)..dir(0)..dir(90), purple, MidArrow);
+ draw(dir(90)..dir(180)..dir(-90), purple, MidArrow);
+ dot(dir(90));
+ dot(dir(-90));
+ label("$\mathbb{RP}^2$", origin, origin);
+ \end{asy}
+ \qquad
+ \begin{asy}
+ size(3cm);
+ dotfactor *= 2;
+ draw(dir(-90)..dir(0)..dir(90));
+ draw(dir(90)..dir(180)..dir(-90), dashed);
+ fill(unitcircle, yellow+opacity(0.2));
+ dot(dir(90));
+ opendot(dir(-90));
+ label("$\mathbb{RP}^2$", origin, origin);
+ \end{asy}
+\end{center}
+
+The resulting space should be familiar to those of you who do
+projective (Euclidean) geometry.
+Indeed, there are several possible geometric interpretations:
+\begin{itemize}
+ \ii One can think of $\RP^2$ as the set of lines through the
+ origin in $\RR^3$, with each line being a point in $\RP^2$.
+
+ Of course, we can identify each line with a point on the unit sphere $S^2$,
+ except for the property that two antipodal points actually
+	correspond to the same line, so that $\RP^2$ can almost be thought
+ of as ``half a sphere''. Flattening it gives the picture above.
+
+ \ii Imagine $\RR^2$, except augmented with ``points at infinity''.
+ This means that we add some points ``infinitely far away'',
+ one for each possible direction of a line.
+ Thus in $\RP^2$, any two lines indeed intersect
+ (at a Euclidean point if they are not parallel, and at a point
+	at infinity if they are).
+
+ This gives an interpretation of $\RP^2$,
+ where the boundary represents the \emph{line at infinity}
+ through all of the points at infinity.
+ Here we have used the fact that $\RR^2$
+ and interior of $D^2$ are homeomorphic.
+\end{itemize}
+\begin{exercise}
+ Observe that these formulations are equivalent
+ by considering the plane $z=1$ in $\RR^3$,
+ and intersecting each line in the first formulation with this plane.
+\end{exercise}
+
+We can also express $\RP^2$ using coordinates:
+it is the set of triples $(x : y : z)$ of real numbers not all zero
+up to scaling, meaning that
+\[ (x : y : z) = (\lambda x : \lambda y : \lambda z) \]
+for any $\lambda \neq 0$.
+Using the ``lines through the origin in $\RR^3$'' interpretation
+makes it clear why this coordinate system gives the right space.
+The points at infinity are those with $z = 0$,
+and any point with $z \neq 0$ gives a Cartesian point since
+\[ (x : y : z) = \left( \frac xz : \frac yz : 1 \right) \]
+hence we can think of it as the Cartesian point $(\frac xz, \frac yz)$.
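+
+As a small worked example with these coordinates:
+the parallel lines $y = x$ and $y = x + 1$ in $\RR^2$
+homogenize to $y = x$ and $y = x + z$ respectively.
+Solving both equations forces $z = 0$, so the two lines meet at the single point
+\[ (x : y : z) = (1 : 1 : 0), \]
+the point at infinity corresponding to their common direction.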
+
+In this way we can actually define \vocab{real-projective $n$-space},
+$\RP^n$ for any $n$, as either
+\begin{enumerate}[(i)]
+ \ii The set of lines through the origin in $\RR^{n+1}$,
+ \ii Using $n+1$ coordinates as above, or
+ \ii As $\RR^n$ augmented with points at infinity,
+ which themselves form a copy of $\RP^{n-1}$.
+\end{enumerate}
+
+As a possibly helpful example, we give all three pictures of $\RP^1$.
+\begin{example}
+ [Real projective $1$-Space]
+ $\RP^1$ can be thought of as $S^1$ modulo the relation
+	that antipodal points are identified.
+ Projecting onto a tangent line, we see that we get
+ a copy of $\RR$ plus a single point at infinity, corresponding
+ to the parallel line (drawn in cyan below).
+ \begin{center}
+ \begin{asy}
+ size(7cm);
+ filldraw(unitcircle, lightblue+opacity(0.2), heavyblue+opacity(0.4));
+ label("$S^1$", dir(225), dir(225), lightblue);
+ dot("$\vec 0$", origin, dir(45));
+ pair X1 = (-2.1,1);
+ pair X2 = (1.9,1);
+ draw(X1--X2, heavyred, Arrows);
+ dot("$0$", (0,1), dir(90), heavyred);
+ dot("$1$", (1,1), dir(90), heavyred);
+ pair P = extension( (0,1), (1,1), dir(250), dir(70) );
+ dot("$0.36$", P, dir(90), heavyred);
+ label("$\mathbb R$", X2, dir(105), heavyred);
+
+ path L(pair A, pair B, real a=0.6, real b=a)
+ { return (a*(A-B)+A)--(b*(B-A)+B); }
+ draw(L(dir(130),-dir(130),0.2,0.2), gray);
+ draw(L(dir(250),-dir(250),0.2,0.2), gray);
+ draw(L(dir(-20),-dir(-20),0.2,0.2), gray);
+ draw(L(dir(0), -dir(0), 0.4,0.4), heavycyan+1);
+ \end{asy}
+ \end{center}
+
+ Thus, the points of $\RP^1$ have two forms:
+ \begin{itemize}
+ \ii $(x:1)$, which we think of as $x \in \RR$ (in dark red above), and
+ \ii $(1:0)$, which we think of as $1/0 = \infty$,
+ corresponding to the cyan line above.
+ \end{itemize}
+ So, we can literally write
+ \[ \RP^1 = \RR \cup \{\infty\}. \]
+ Note that $\RP^1$ is also the boundary of $\RP^2$.
+ In fact, note also that topologically we have
+ \[ \RP^1 \cong S^1 \]
+ since it is the ``real line with endpoints fused together''.
+ \begin{center}
+ \begin{asy}
+ size(2cm);
+ draw(unitcircle, heavyred);
+ dot("$\infty$", dir(90), dir(90), heavycyan);
+ dot("$0$", dir(-90), dir(-90), heavyred);
+ \end{asy}
+ \end{center}
+\end{example}
+
+Since $\RP^n$ is just ``$\RR^n$ (or $D^n$) with $\RP^{n-1}$ as its boundary'',
+we can construct $\RP^n$ as a CW complex inductively.
+Note that $\RP^n$ thus consists of \textbf{one cell in each dimension}.
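+
+In cell notation, one way to record this construction is
+\[ \RP^n = e^0 \cup e^1 \cup \dots \cup e^n \]
+where for each $k$ the cell $e^k$ is attached along
+the map $S^{k-1} \to \RP^{k-1}$ which identifies antipodal points.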
+
+\begin{example}[$\RP^n$ as a cell complex]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\RP^0$ is a single point.
+ \ii $\RP^1 \cong S^1$ is a circle, which as a CW complex
+ is a $0$-cell plus a $1$-cell.
+ \ii $\RP^2$ can be formed by taking a $2$-cell
+ and wrapping its perimeter twice around a copy of $\RP^1$.
+ \end{enumerate}
+\end{example}
+
+\subsection{Complex projective space}
+The \vocab{complex projective space} $\CP^n$ is
+defined like $\RP^n$ with coordinates, i.e.\
+\[ (z_0 : z_1 : \dots : z_n) \]
+under scaling; this time $z_i$ are complex.
+As before, $\CP^n$ can be thought of as $\CC^n$ augmented
+with some points at infinity (corresponding to $\CP^{n-1}$).
+\begin{example}
+ [Complex projective space]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\CP^0$ is a single point.
+ \ii $\CP^1$ is $\CC$ plus a single point at infinity
+ (``complex infinity'' if you will).
+ That means as before we can think of $\CP^1$ as
+ \[ \CP^1 = \CC \cup \{\infty\}. \]
+ So, imagine taking the complex plane and then adding
+ a single point to encompass the entire boundary.
+	The result is just the sphere $S^2$.
+ \end{enumerate}
+ Here is a picture of $\CP^1$ with its coordinate system,
+ the \vocab{Riemann sphere}.
+ \begin{center}
+ \includegraphics[width=0.9\textwidth]{media/earth.pdf}
+ \end{center}
+\end{example}
+
+\begin{remark}
+ [For Euclidean geometers]
+ You may recognize that while $\RP^2$ is the setting for projective geometry,
+ inversion about a circle is done in $\CP^1$ instead.
+ When one does an inversion sending generalized circles to generalized
+ circles, there is only one point at infinity:
+ this is why we work in $\CP^1$.
+\end{remark}
+
+Like $\RP^n$, $\CP^n$ is a CW complex, built inductively
+by taking $\CC^n$ and welding its boundary onto $\CP^{n-1}$.
+The difference is that as topological spaces,
+\[ \CC^n \cong \RR^{2n} \cong D^{2n}. \]
+Thus, we attach the cells $D^0$, $D^2$, $D^4$ and so on
+inductively to construct $\CP^n$.
+Thus we see that
+\begin{moral}
+ $\CP^n$ consists of one cell in each \emph{even} dimension.
+\end{moral}
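+
+For instance, one step further up the induction,
+$\CP^2$ should be built from the cells $D^0$, $D^2$, $D^4$:
+the $4$-cell is attached along a map $S^3 \to \CP^1 \cong S^2$
+sending each point of $S^3 \subseteq \CC^2$ to the complex line
+through it (the so-called \emph{Hopf map}).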
+
+
+\section{\problemhead}
+\begin{problem}
+ Show that a space $X$ is Hausdorff if and only if the diagonal
+ $\{(x,x) \mid x \in X\}$ is closed in the product space $X \times X$.
+\end{problem}
+\begin{problem}
+ Realize the following spaces as CW complexes:
+ \begin{enumerate}[(a)]
+ \ii M\"obius strip.
+ \ii $\RR$.
+ \ii $\RR^n$.
+ \end{enumerate}
+\end{problem}
+\begin{dproblem}
+ Show that a finite CW complex is compact.
+ \begin{hint}
+ Prove and use the fact that quotients of compact spaces remain compact.
+ \end{hint}
+\end{dproblem}
diff --git a/books/napkin/cover-project.tex b/books/napkin/cover-project.tex
new file mode 100644
index 0000000000000000000000000000000000000000..942ddeaf7171df7fdcb5750d43f24229933188a4
--- /dev/null
+++ b/books/napkin/cover-project.tex
@@ -0,0 +1,593 @@
+\chapter{Covering projections}
+A few chapters ago we talked about what a fundamental group was,
+but we didn't actually show how to compute any of them
+except for the most trivial case of a simply connected space.
+In this chapter we'll introduce the notion of a \emph{covering projection},
+which will let us see how some of these groups can be found.
+
+\section{Even coverings and covering projections}
+\prototype{$\RR$ covers $S^1$.}
+What we want now is a notion where a big space $E$, a ``covering space'',
+can be projected down onto a base space $B$ in a nice way.
+Here is the notion of ``nice'':
+\begin{definition}
+ Let $p : E \to B$ be a continuous function.
+ Let $U$ be an open set of $B$.
+ We call $U$ \vocab{evenly covered} (by $p$) if $p\pre(U)$ is a disjoint
+ union of open sets (possibly infinitely many) such that $p$ restricted
+ to each of these sets is a homeomorphism onto $U$.
+\end{definition}
+Picture:
+\begin{center}
+ \includegraphics[width=4cm]{media/even-covering.png}
+ \\ \scriptsize Image from \cite{img:even_covering}
+\end{center}
+All we're saying is that $U$ is evenly covered if its pre-image
+is a bunch of copies of it. (Actually, a little more: each of the pancakes is homeomorphic to $U$, but we also require that $p$ is the homeomorphism.)
+
+\begin{definition}
+ A \vocab{covering projection} $p : E \to B$
+ is a surjective continuous map such that every base point $b \in B$
+ has an open neighborhood $U \ni b$ which is evenly covered by $p$.
+\end{definition}
+\begin{exercise}
+ [On requiring surjectivity of $p$]
+ Let $p \colon E \to B$ be satisfying this definition,
+ except that $p$ need not be surjective.
+ Show that the image of $p$ is a union of connected components of $B$.
+ Thus if $B$ is connected and $E$ is nonempty,
+ then $p \colon E \to B$ is already surjective.
+ For this reason, some authors omit the surjectivity hypothesis
+ as usually $B$ is path-connected.
+\end{exercise}
+
+Here is the most stupid example of a covering projection.
+\begin{example}[Tautological covering projection]
+ Let's take $n$ disconnected copies of any space $B$:
+ formally, $E = B \times \{1, \dots, n\}$ with the discrete topology
+ on $\{1, \dots, n\}$.
+ Then there exists a tautological covering projection
+ $E \to B$ by $(x,m) \mapsto x$;
+ we just project all $n$ copies.
+
+ This is a covering projection because \emph{every} open set in $B$
+ is evenly covered.
+\end{example}
+This is not really that interesting because $B \times [n]$ is not path-connected.
+
+A much more interesting example is that of $\RR$ and $S^1$.
+
+\begin{example}[Covering projection of $S^1$]
+ Take $p : \RR \to S^1$ by $\theta \mapsto e^{2\pi i \theta}$.
+ This is essentially wrapping the real line
+ into a single helix and projecting it down.
+\end{example}
+\missingfigure{helix}
+
+We claim this is a covering projection.
+Indeed, consider the point $1 \in S^1$
+(where we view $S^1$ as the unit circle in the complex plane).
+We can draw a small open neighborhood of it
+whose pre-image is a bunch of copies in $\RR$.
+\begin{center}
+ \begin{asy}
+ size(12cm);
+
+ real[] t = {-2,-1,0,1,2};
+ xaxis(-3.5,3.5, graph.LeftTicks(Ticks=t), Arrows);
+
+ pen bloo = blue+1.5;
+
+ dotfactor *= 2;
+ pair A,B;
+ for (real x = -2; x <= 2; ++x) {
+ A = (x-0.2, 0); B = (x+0.2, 0);
+ draw(A--B, bloo); opendot(A, blue); opendot(B, blue);
+ }
+ MP("\mathbb R", (3,0), dir(90));
+
+ add(shift( (0,3) ) * CC());
+
+ path darrow = (0,2.5)--(0,1.5);
+ MP("p", midpoint(darrow), dir(0));
+ draw(darrow, EndArrow);
+
+ real r = 1.4;
+ draw(scale(r)*unitcircle);
+ MP("S^1", r*dir(45), dir(45));
+ A = r*dir(-20);
+ B = r*dir(20);
+ draw(arc(origin, A, B), bloo);
+ opendot(A, blue); opendot(B, blue);
+ dot("$1$", r*dir(0), dir(0));
+ \end{asy}
+\end{center}
+
+Note that not all open neighborhoods work this time:
+notably, $U = S^1$ does not work because the pre-image
+would be the entire $\RR$.
+
+\begin{example}[Covering of $S^1$ by itself]
+ The map $S^1 \to S^1$ by
+ $z \mapsto z^{3}$ is also a covering projection.
+ Can you see why?
+\end{example}
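+
+In case it is not obvious, here is a sketch for the last example:
+a small open arc $U$ around any point $z_0 \in S^1$ has pre-image
+consisting of three disjoint arcs, one near each cube root of $z_0$,
+and $z \mapsto z^3$ maps each of them homeomorphically onto $U$.
+So every point has an evenly covered neighborhood.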
+
+\begin{example}
+ [Covering projections of $\CC \setminus \{0\}$]
+ For those comfortable with complex arithmetic,
+ \begin{enumerate}[(a)]
+ \ii The exponential map $\exp : \CC \to \CC \setminus \{0\}$
+ is a covering projection.
+ \ii For each $n$, the $n$th power map
+ $-^n: \CC \setminus \{0\} \to \CC \setminus \{0\}$
+ is a covering projection.
+ \end{enumerate}
+\end{example}
+
+\section{Lifting theorem}
+\prototype{$\RR$ covers $S^1$.}
+Now here's the key idea: we are going to try to interpret
+loops in $B$ as paths in $\RR$.
+This is often much simpler.
+For example, we had no idea how to compute the fundamental group of $S^1$,
+but the fundamental group of $\RR$ is just the trivial group.
+So if we can interpret loops in $S^1$ as paths in $\RR$,
+that might (and indeed it does!) make computing $\pi_1(S^1)$ tractable.
+
+\begin{definition}
+ Let $\gamma : [0,1] \to B$ be a path and $p : E \to B$ a covering projection.
+ A \vocab{lifting} of $\gamma$ is a path $\tilde\gamma : [0,1] \to E$
+ such that $p \circ \tilde\gamma = \gamma$.
+\end{definition}
+Picture:
+\begin{center}
+\begin{tikzcd}
+ & E \ar[d, "p"] \\
+ {[0,1]} \ar[r, "\gamma"'] \ar[ru, "\tilde{\gamma}"] & B
+\end{tikzcd}
+\end{center}
+
+
+\begin{example}[Typical example of lifting]
+ Take $p : \RR \to S^1 \subseteq \CC$ by $\theta \mapsto e^{2 \pi i \theta}$
+ (so $S^1$ is considered again as the unit circle).
+ Consider the path $\gamma$ in $S^1$ which starts at $1 \in \CC$
+ and wraps around $S^1$ once, counterclockwise, ending at $1$ again.
+ In symbols, $\gamma : [0,1] \to S^1$ by $t \mapsto e^{2\pi i t}$.
+
+ Then one lifting $\tilde\gamma$ is the path which walks from $0$ to $1$.
+ In fact, \emph{for any integer $n$}, walking from $n$ to $n+1$ works.
+
+ \begin{center}
+ \begin{asy}
+ size(6cm);
+
+ real[] t = {-1,0,1,2};
+ xaxis(-2,3, graph.LeftTicks(Ticks=t), Arrows);
+ MP("\mathbb R", (2.5,0), dir(90));
+ path gt = (0,0.3)--(1,0.3);
+ draw(gt, blue, EndArrow);
+ label("$\tilde\gamma$", midpoint(gt), dir(90), blue);
+ add(shift( (0,3) ) * CC());
+
+ path darrow = (0,2.5)--(0,1.5);
+ MP("p", midpoint(darrow), dir(0));
+ draw(darrow, EndArrow);
+
+ real r = 1.2;
+ draw(scale(r)*unitcircle);
+ MP("S^1", r*dir(45), dir(45));
+ dot("$1$", r*dir(0), dir(0));
+ path g = dir(20)..dir(100)..dir(180)..dir(260)..dir(340);
+ draw(g, red, EndArrow);
+ label("$\gamma$", midpoint(g), -dir(midpoint(g)), red);
+
+ MP("p(0) = 1", (2.5,0.5));
+ MP("p(1) = 1", (2.5,0));
+ \end{asy}
+ \end{center}
+
+ Similarly, the counterclockwise path from $1 \in S^1$ to $-1 \in S^1$
+ has a lifting: for some integer $n$, the path from $n$ to $n+\half$.
+ \label{example:lifting_circle}
+\end{example}
+
+The above is the primary example of a lifting.
+It seems like we have the following structure: given a path $\gamma$
+in $B$ starting at $b_0$, we start at any point in the fiber $p\pre(b_0)$.
+(In our prototypical example, $B = S^1$, $b_0 = 1 \in \CC$
+and that's why we start at any integer $n$.)
+After that we just trace along the path in $B$, and we get
+a corresponding path in $E$.
+\begin{ques}
+ Take a path $\gamma$ in $S^1$ with $\gamma(0) = 1 \in \CC$.
+ Convince yourself that once we select an integer $n \in \ZZ$,
+ then there is exactly one lifting starting at $n$.
+\end{ques}
+
+It turns out this is true more generally.
+\begin{theorem}[Lifting paths]
+ Suppose $\gamma : [0,1] \to B$ is a path with $\gamma(0) = b_0$, and
+ $ p : (E,e_0) \to (B,b_0) $
+ is a covering projection.
+ Then there exists a \emph{unique} lifting $\tilde\gamma : [0,1] \to E$
+ such that $\tilde\gamma(0) = e_0$.
+\end{theorem}
+\begin{proof}
+ For every point $b \in B$, consider an evenly covered
+ open neighborhood $U_b$ in $B$.
+ Then the family of open sets
+ \[ \left\{ \gamma\pre(U_b) \mid b \in B \right\} \]
+ is an open cover of $[0,1]$.
+ As $[0,1]$ is compact we can take a finite subcover.
+ Thus we can chop $[0,1]$ into finitely many closed intervals
+ $[0,1] = I_1 \cup I_2 \cup \dots \cup I_N$ in that order,
+ overlapping only at endpoints,
+ such that for every $I_k$, $\gamma\im(I_k)$ is contained
+ in some $U_b$.
+
+ We'll construct $\tilde\gamma$ interval by interval now,
+ starting at $I_1$.
+ Initially, place a robot at $e_0 \in E$ and a mouse at $b_0 \in B$.
+ For each interval $I_k$, the mouse moves around according
+ to however $\gamma$ behaves on $I_k$.
+ But the whole time it's in some evenly covered $U_k$;
+ the fact that $p$ is a covering projection tells us that
+ there are several copies of $U_k$ living in $E$.
+ Exactly one of them, say $V_k$, contains our robot.
+ So the robot just mimics the mouse until it gets to the end of $I_k$.
+ Then the mouse is in some new evenly covered $U_{k+1}$,
+ and we can repeat.
+\end{proof}
+
+The theorem can be generalized to a diagram
+\begin{center}
+\begin{tikzcd}
+ & (E, e_0) \ar[d, "p"] \\
+ (Y, y_0) \ar[ru, "\tilde{f}"] \ar[r, "f"'] & (B, b_0)
+\end{tikzcd}
+\end{center}
+where $Y$ is some general path-connected space, as follows.
+\begin{theorem}[General lifting criterion]
+ \label{thm:lifting}
+ Let $f: (Y,y_0) \to (B, b_0)$ be continuous and consider a covering projection $p : (E, e_0) \to (B, b_0)$.
+ (As usual, $Y$, $B$, $E$ are path-connected.)
+ Then a lifting $\tilde f$ with $\tilde f(y_0) = e_0$ exists if and only if
+ \[ f_\sharp\im(\pi_1(Y, y_0)) \subseteq p_\sharp\im(\pi_1(E, e_0)), \]
+ i.e.\ the image of $\pi_1(Y, y_0)$ under $f_\sharp$ is contained in
+ the image of $\pi_1(E, e_0)$ under $p_\sharp$ (both viewed as subgroups of $\pi_1(B, b_0)$).
+ If this lifting exists, it is unique.
+\end{theorem}
+As $p_\sharp$ is injective (as we will prove shortly),
+we actually have $p_\sharp\im(\pi_1(E, e_0)) \cong \pi_1(E, e_0)$.
+But in this case we are interested in the actual elements, not just the isomorphism classes of the groups.
+\begin{ques}
+ What happens if we put $Y= [0,1]$?
+\end{ques}
+
+\begin{remark}[Lifting homotopies]
+ Here's another cool special case:
+ Recall that a homotopy can be encoded as a continuous function $[0,1] \times [0,1] \to X$.
+ But $[0,1] \times [0,1]$ is also simply connected.
+ Hence given a homotopy $\gamma_1 \simeq \gamma_2$ in the base space $B$, we can lift it to get
+ a homotopy $\tilde\gamma_1 \simeq \tilde\gamma_2$ in $E$.
+\end{remark}
+Another nice application of this result is \Cref{ch:complex_log}.
+
+\section{Lifting correspondence}
+\prototype{$(\RR,0)$ covers $(S^1,1)$.}
+Let's return to the task of computing fundamental groups.
+Consider a covering projection $p : (E, e_0) \to (B, b_0)$.
+
+A loop $\gamma$ can be lifted uniquely to $\tilde\gamma$ in $E$
+which starts at $e_0$ and ends at some point $e$ in the fiber $p\pre(b_0)$.
+You can easily check that this $e \in E$ does not change if we
+pick a different path $\gamma'$ homotopic to $\gamma$.
+\begin{ques}
+ Look at the picture in \Cref{example:lifting_circle}.
+
+ Put one finger at $1 \in S^1$, and one finger on $0 \in \RR$.
+ Trace a loop homotopic to $\gamma$ in $S^1$ (meaning, you can
+ go backwards and forwards but you must end with exactly one full
+ counterclockwise rotation)
+ and follow along with the other finger in $\RR$.
+
+ Convince yourself that you have to end at the point $1 \in \RR$.
+\end{ques}
+
+Thus every homotopy class of a loop at $b_0$ (i.e.\ an element of $\pi_1(B, b_0)$) can be associated with some $e$ in the fiber of $b_0$.
+The below proposition summarizes this and more.
+\begin{proposition}
+ Let $p : (E,e_0) \to (B,b_0)$ be a covering projection.
+ Then we have a function of sets
+ \[ \Phi : \pi_1(B, b_0) \to p\pre(b_0) \]
+ by $[\gamma] \mapsto \tilde\gamma(1)$, where $\tilde\gamma$
+ is the unique lifting starting at $e_0$.
+ Furthermore,
+ \begin{itemize}
+ \ii If $E$ is path-connected, then $\Phi$ is surjective.
+ \ii If $E$ is simply connected, then $\Phi$ is injective.
+ \end{itemize}
+\end{proposition}
+\begin{ques}
+ Prove that $E$ path-connected implies $\Phi$ is surjective.
+ (This is really offensively easy.)
+\end{ques}
+\begin{proof}
+ To prove the proposition, we've done everything except show
+ that $E$ simply connected implies $\Phi$ injective.
+ To do this suppose that $\gamma_1$ and $\gamma_2$ are loops
+ such that $\Phi([\gamma_1]) = \Phi([\gamma_2])$.
+
+ Applying the lifting theorem, we get paths $\tilde\gamma_1$ and $\tilde\gamma_2$
+ both starting at $e_0$ and ending at the same point $e_1 \in E$
+ (this is what $\Phi([\gamma_1]) = \Phi([\gamma_2])$ means).
+ Since $E$ is simply connected that means they are \emph{homotopic},
+ and we can write a homotopy $F : [0,1] \times [0,1] \to E$
+ which unites them.
+ But then consider the composition of maps
+ \[ [0,1] \times [0,1] \taking{F} E \taking{p} B. \]
+ You can check this is a homotopy from $\gamma_1$ to $\gamma_2$.
+ Hence $[\gamma_1] = [\gamma_2]$, done.
+\end{proof}
+
+This motivates:
+\begin{definition}
+ A \vocab{universal cover} of a space $B$ is a covering projection
+ $p : E \to B$ where $E$ is simply connected (and in particular path-connected).
+\end{definition}
+\begin{abuse}
+ When $p$ is understood, we sometimes just say $E$ is the universal cover.
+\end{abuse}
+
+\begin{example}[Fundamental group of $S^1$]
+ Let's return to our standard $p : \RR \to S^1$.
+ Since $\RR$ is simply connected, this is a universal cover of $S^1$.
+ And indeed, the fiber of any point in $S^1$
+ is a copy of the integers, naturally in bijection with homotopy classes of loops in $S^1$.
+
+ You can show (and it's intuitively obvious) that the bijection
+ \[ \Phi : \pi_1(S^1) \leftrightarrow \ZZ \]
+ is in fact a group homomorphism if we equip $\ZZ$ with its
+ additive group structure.
+ Since it's a bijection, this leads us to conclude $\pi_1(S^1) \cong \ZZ$.
+\end{example}
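+
+If you want to verify the homomorphism claim by hand, here is a sketch:
+suppose the lifts of loops $\gamma_1$ and $\gamma_2$ starting at $0$
+end at $m$ and $n$ respectively.
+Then $\tilde\gamma_1$ followed by the \emph{translate}
+$t \mapsto m + \tilde\gamma_2(t)$ is a lift of the concatenated loop
+starting at $0$ and ending at $m + n$.
+By uniqueness of liftings, $\Phi([\gamma_1][\gamma_2]) = m + n$.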
+
+\section{Regular coverings}
+\prototype{$\RR \to S^1$ comes from $n \cdot x = n + x$.}
+
+Here's another way to generate some coverings.
+Let $X$ be a topological space and $G$ a group acting on its points.
+Thus for every $g$, we get a map $X \to X$ by
+\[ x \mapsto g \cdot x. \]
+We require that this map is continuous\footnote{%
+ Another way of phrasing this: the action,
+ interpreted as a map $G \times X \to X$, should be continuous,
+ where $G$ on the left-hand side is interpreted as a set with
+ the discrete topology.}
+for every $g \in G$, and that the stabilizer of each point in $X$ is trivial.
+Then we can consider a quotient space $X/G$ defined by fusing any points
+in the same orbit of this action.
+Thus the points of $X/G$ are identified with the orbits of the action.
+Then we get a natural ``projection''
+\[ X \to X/G \]
+by simply sending every point to the orbit it lives in.
+\begin{definition}
+ Such a projection is called \vocab{regular}.
+ (Terrible, I know.)
+\end{definition}
+
+\begin{example}[$\RR \to S^1$ is regular]
+ Let $G = \ZZ$, $X = \RR$
+ and define the group action of $G$ on $X$ by
+ \[ n \cdot x = n + x. \]
+ You can then think of $X/G$ as ``real numbers modulo $1$'',
+ with $[0,1)$ a complete set of representatives and $0 \sim 1$. % chktex 9
+ \begin{center}
+ \begin{asy}
+ size(9cm);
+ dotfactor *= 2;
+ pair A = MP("0", (-5.1,0), 1.4*dir(90));
+ pair B = MP("1", (-3,0), 1.4*dir(90));
+ draw(A--B);
+ Drawing("\frac13", (-4.4,0), 1.4*dir(90));
+ Drawing("\frac23", (-3.7,0), 1.4*dir(90));
+ MP("\mathbb R / G", (-4,-0.6), dir(-90));
+ dot(A); opendot(B);
+ draw(unitcircle);
+ draw( (-2.4,0)--(-1.6,0), EndArrow);
+ dot("$0=1$", dir(0), dir(0));
+ dot("$\frac13$", dir(120), dir(120));
+ dot("$\frac23$", dir(240), dir(240));
+ label("$S^1$", origin, origin);
+ \end{asy}
+ \end{center}
+ So we can identify $X/G$ with $S^1$
+ and the associated regular projection
+ is just our usual $p : \theta \mapsto e^{2\pi i \theta}$.
+\end{example}
+
+\begin{example}[The torus]
+ Let $G = \ZZ \times \ZZ$ and $X = \RR^2$,
+ and define the group action of $G$ on $X$ by $(m,n) \cdot (x,y)
+ = (m+x, n+y)$.
+ As $[0,1)^2$ is a complete set of representatives, % chktex 9
+ you can think of it as a unit square with the edges identified.
+ We obtain the torus $S^1 \times S^1$
+ and a covering projection $\RR^2 \to S^1 \times S^1$.
+\end{example}
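+
+Combined with the lifting correspondence, this gives another
+fundamental group for free (sketch): $\RR^2$ is simply connected,
+so this is a universal cover, and the fiber of any point is a copy
+of $\ZZ \times \ZZ$.
+As with $S^1$, one can check the resulting bijection is a group
+homomorphism, whence
+\[ \pi_1(S^1 \times S^1) \cong \ZZ \times \ZZ. \]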
+
+\begin{example}[$\mathbb {RP}^2$]
+ Let $G = \Zc 2 = \left\langle T \right\rangle$ and
+ let $X = S^2$ be the surface of the sphere,
+ viewed as a subset of $\RR^3$.
+ We'll let $G$ act on $X$ by $T \cdot \vec x = - \vec x$;
+ hence the orbits are pairs of opposite points (e.g.\ North and South pole).
+
+ Let's draw a picture of the space.
+ All the orbits have size two:
+ every point below the equator gets fused with a point above the equator.
+ As for the points on the equator, we can take half of them; the other half
+ gets fused with the corresponding antipodes.
+
+ Now if we flatten everything,
+ you can think of the result as a disk with half its boundary:
+ this is $\RP^2$ from before.
+ The resulting space has a name: \emph{real projective $2$-space},
+ denoted $\mathbb{RP}^2$.
+ \begin{center}
+ \begin{asy}
+ size(3cm);
+ dotfactor *= 2;
+ draw(dir(-90)..dir(0)..dir(90));
+ draw(dir(90)..dir(180)..dir(-90), dashed);
+ fill(unitcircle, yellow+opacity(0.2));
+ dot(dir(90));
+ opendot(dir(-90));
+ label("$\mathbb{RP}^2$", origin, origin);
+ \end{asy}
+ \end{center}
+
+ This gives us a covering projection $S^2 \to \mathbb{RP}^2$
+ (note that the pre-image of a sufficiently small patch is just two copies
+ of it on $S^2$).
+\end{example}
+\begin{example}
+ [Fundamental group of $\mathbb{RP}^2$]
+ As above, we saw that there was a covering projection
+ $S^2 \to \mathbb{RP}^2$.
+ Moreover the fiber of any point has size two.
+ Since $S^2$ is simply connected, we have a natural bijection
+ $\pi_1(\mathbb{RP}^2)$ to a set of size two; that is,
+ \[ \left\lvert \pi_1(\mathbb{RP}^2) \right\rvert = 2. \]
+ This can only occur if $\pi_1(\mathbb{RP}^2) \cong \Zc 2$,
+ as there is only one group of order two!
+\end{example}
+
+\begin{ques}
+ Show each of the continuous maps $x \mapsto g \cdot x$ is in fact a homeomorphism.
+ (Name its continuous inverse.)
+\end{ques}
+% WOW I thought this was always a covering projection gg
+
+\section{The algebra of fundamental groups}
+\prototype{$S^1$, with fundamental group $\ZZ$.}
+Next up, we're going to turn functions between spaces into homomorphisms of fundamental groups.
+
+Let $X$ and $Y$ be topological spaces and $f : (X, x_0) \to (Y, y_0)$.
+Recall that we defined a group homomorphism
+\[ f_\sharp : \pi_1(X, x_0) \to \pi_1(Y, y_0)
+ \quad\text{by}\quad
+ [\gamma] \mapsto [f \circ \gamma]. \]
+% which gave us a functor $\catname{Top}_\ast \to \catname{Grp}$.
+
+More importantly, we have:
+\begin{proposition}
+ Let $p : (E,e_0) \to (B,b_0)$ be a covering projection of path-connected spaces.
+ Then the homomorphism $p_\sharp : \pi_1(E, e_0) \to \pi_1(B, b_0)$ is \emph{injective}.
+ Hence $p_\sharp \im(\pi_1(E, e_0))$ is an isomorphic copy of $\pi_1(E, e_0)$
+ as a subgroup of $\pi_1(B, b_0)$.
+\end{proposition}
+\begin{proof}
+ We'll show $\ker p_\sharp$ is trivial.
+ It suffices to show that if $\gamma$ is a nulhomotopic loop in $B$
+ then its lift is nulhomotopic.
+
+ By definition, there's a homotopy $F : [0,1] \times [0,1] \to B$
+ taking $\gamma$ to the constant loop $1_B$.
+ We can lift it to a homotopy $\tilde F : [0,1] \times [0,1] \to E$
+ that establishes $\tilde\gamma \simeq \tilde 1_B$.
+ But $1_E$ is a lift of $1_B$ (duh) and lifts are unique:
+ hence $\tilde\gamma$ is homotopic to the constant loop $1_E$, as needed.
+\end{proof}
+
+\begin{example}[Subgroups of $\ZZ$]
+ Let's look at the space $S^1$ with fundamental group $\ZZ$.
+ The group $\ZZ$ has two types of subgroups:
+ \begin{itemize}
+ \ii The trivial subgroup.
+ This corresponds to the canonical projection $\RR \to S^1$,
+ since $\pi_1(\RR)$ is the trivial group ($\RR$ is simply connected)
+ and hence its image in $\ZZ$ is the trivial group.
+ \ii $n\ZZ$ for $n \ge 1$.
+ This is given by the covering projection $S^1 \to S^1$
+ by $z \mapsto z^n$.
+ The image of a loop in the covering $S^1$ is a ``multiple of $n$''
+ in the base $S^1$.
+ \end{itemize}
+\end{example}
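+
+In this example you can also see a general pattern (which we won't
+prove here): the number of sheets of a covering projection,
+i.e.\ the size of each fiber, equals the index of the corresponding
+subgroup.
+Indeed $n\ZZ$ has index $n$ in $\ZZ$ and $z \mapsto z^n$ is
+$n$-to-$1$, while the trivial subgroup has infinite index and the
+fibers of $\RR \to S^1$ are copies of $\ZZ$.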
+
+It turns out that these are the \emph{only} covering projections of $S^1$ by path-connected spaces: there's one for each subgroup of $\ZZ$.
+(We don't care about disconnected spaces because, again, a covering projection
+via disconnected spaces is just a bunch of unrelated ``good'' coverings.)
+For this statement to make sense I need to tell you what it means for
+two covering projections to be equivalent.
+
+\begin{definition}
+ Fix a space $B$.
+ Given two covering projections $p_1 : E_1 \to B$ and $p_2 : E_2 \to B$
+ a \vocab{map of covering projections} is a continuous function $f : E_1 \to E_2$
+ such that $p_2 \circ f = p_1$.
+ \begin{center}
+ \begin{tikzcd}
+ E_1 \ar[r, "f"] \ar[rd, "p_1"'] & E_2 \ar[d, "p_2"] \\
+ & B
+ \end{tikzcd}
+ \end{center}
+ Then two covering projections $p_1$ and $p_2$ are isomorphic if there are
+ maps of covering projections $f : E_1 \to E_2$ and $g : E_2 \to E_1$
+ such that $f \circ g = \id_{E_2}$ and $g \circ f = \id_{E_1}$.
+\end{definition}
+\begin{remark}
+ [For category theorists]
+ The set of covering projections forms a category in this way.
+\end{remark}
+
+It's an absolute miracle that this is true more generally:
+the greatest triumph of covering spaces is the following result.
+Suppose a space $X$ satisfies some nice conditions, like:
+\begin{definition}
+ A space $X$ is called \vocab{locally connected}
+ if for each point $x \in X$ and open neighborhood $V$ of it,
+ there is a connected open set $U$ with $x \in U \subseteq V$.
+\end{definition}
+\begin{definition}
+ A space $X$ is \vocab{semi-locally simply connected}
+ if for every point $x \in X$
+ there is an open neighborhood $U$
+ such that all loops in $U$ are nulhomotopic.
+ (But the contraction need not take place in $U$.)
+\end{definition}
+\begin{example}[These conditions are weak]
+ Pretty much every space I've shown you has these two properties.
+ In other words, they are rather mild conditions, and you can think of them as just
+ saying ``the space is not too pathological''.
+\end{example}
+Then we get:
+\begin{theorem}[Group theory via covering spaces]
+ Suppose $B$ is a locally connected, semi-locally simply connected space.
+ Then:
+ \begin{itemize}
+ \ii Every subgroup $H \subseteq \pi_1(B)$ corresponds
+ to exactly one covering projection $p : E \to B$
+ with $E$ path-connected (up to isomorphism).
+
+ (Specifically, $H$ is the image of $\pi_1(E)$ in $\pi_1(B)$ through $p_\sharp$.)
+ \ii Moreover, the \emph{normal} subgroups of $\pi_1(B)$
+ correspond exactly to the regular covering projections.
+ \end{itemize}
+\end{theorem}
+Hence it's possible to understand the group theory of $\pi_1(B)$ completely
+in terms of the covering projections.
+
+Moreover, this is how the ``universal cover'' gets its name:
+it is the one corresponding to the trivial subgroup of $\pi_1(B)$.
+Actually, you can show that it really is universal in the sense
+that if $p : E \to B$ is another covering projection,
+then $E$ is in turn covered by the universal space.
+More generally, if $H_1 \subseteq H_2 \subseteq G$ are subgroups,
+then the space corresponding to $H_2$ can be covered by the space
+corresponding to $H_1$.
+
+% According to \cite{ref:covering_all_we_know}, this statement and
+% its extension to group actions are ``pretty much all there is to know
+% about covering projections''.
+
+\section{\problemhead}
+\todo{problems}
diff --git a/books/napkin/cup-product.tex b/books/napkin/cup-product.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8ffa4d748e7827bfcae738cb8b84c4f6c29c03eb
--- /dev/null
+++ b/books/napkin/cup-product.tex
@@ -0,0 +1,549 @@
+\chapter{Application of cohomology}
+In this final chapter on topology, I'll state (mostly without proof)
+some nice properties of cohomology groups, and in particular
+introduce the so-called cup product.
+For an actual treatise on the cup product,
+see \cite{ref:hatcher} or \cite{ref:maxim752}.
+
+\section{Poincar\'e duality}
+First cool result:
+you may have noticed symmetry in the (co)homology groups of
+``nice'' spaces like the torus or $S^n$.
+In fact this is predicted by:
+\begin{theorem}
+ [Poincar\'e duality]
+ If $M$ is a smooth oriented compact $n$-manifold,
+ then we have a natural isomorphism
+ \[ H^k(M; \ZZ) \cong H_{n-k}(M) \]
+ for every $k$.
+ In particular, $H^k(M) = 0$ for $k > n$.
+\end{theorem}
+So for smooth oriented compact manifolds,
+cohomology and homology groups are not so different.
+
+From this follows the symmetry that we mentioned
+when we first defined the Betti numbers:
+\begin{corollary}
+ [Symmetry of Betti numbers]
+ Let $M$ be a smooth oriented compact $n$-manifold,
+ and let $b_k$ denote its $k$th Betti number.
+ Then \[ b_k = b_{n-k}. \]
+\end{corollary}
+\begin{proof}
+ \Cref{prob:betti}.
+\end{proof}
+
+
+\section{de Rham cohomology}
+We now reveal the connection between
+differential forms and singular cohomology.
+
+Let $M$ be a smooth manifold.
+We are interested in the homology and cohomology groups of $M$.
+We specialize to the case $G = \RR$, the additive group of real numbers.
+\begin{ques}
+ Check that $\Ext(H, \RR) = 0$ for any finitely generated abelian group $H$.
+\end{ques}
+Thus, with real coefficients the universal coefficient theorem says that
+\[ H^k(M; \RR) \cong \Hom(H_k(M), \RR) = \left( H_k(M) \right)^\vee \]
+where we view $H_k(M)$ as a real vector space.
+So, we'd like to get a handle on either $H_k(M)$ or $H^k(M; \RR)$.
+
+Consider the cochain complex
+\[
+ 0 \to \Omega^0(M)
+ \taking d \Omega^1(M)
+ \taking d \Omega^2(M)
+ \taking d \Omega^3(M)
+ \taking d \dots
+\]
+and let $\HdR^k(M)$ denote its cohomology groups.
+Thus the de Rham cohomology is the closed forms modulo the exact forms.
+\[
+ \text{Cochain} : \text{Cocycle} : \text{Coboundary}
+ = \text{$k$-form} : \text{Closed form} : \text{Exact form}.
+\]
+
+The whole punch line is:
+\begin{theorem}
+ [de Rham's theorem]
+ For any smooth manifold $M$, we have a natural isomorphism
+ \[ H^k(M; \RR) \cong \HdR^k(M). \]
+\end{theorem}
+So the theorem is that the real cohomology groups of manifolds $M$
+are actually just given by the behavior of differential forms.
+Thus,
+\begin{moral}
+ One can metaphorically think of elements of cohomology groups
+ as $G$-valued differential forms on the space.
+\end{moral}
+
+Why does this happen?
+In fact, we observed already behavior of differential
+forms which reflects holes in the space.
+For example, let $M = S^1$ be a circle
+and consider the \textbf{angle form} $\alpha$
+(see \Cref{ex:angle_form}).
+The form $\alpha$ is closed, but not exact,
+because it is possible to run a full circle around $S^1$.
+So the failure of $\alpha$ to be exact is signaling
+that $H_1(S^1) \cong \ZZ$.
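+
+For concreteness, on $S^1 \subseteq \RR^2$ one can take the angle
+form to be (up to scaling)
+\[ \alpha = \frac{x \, dy - y \, dx}{x^2 + y^2}. \]
+Its integral over the full circle is $2\pi \neq 0$,
+while Stokes' theorem forces the integral of any exact form over the
+boundaryless $S^1$ to vanish; so $\alpha$ cannot be exact.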
+
+\section{Graded rings}
+\prototype{Polynomial rings are commutative graded rings,
+while $\Lambda^\bullet(V)$ is anticommutative.}
+In the de Rham cohomology, the differential forms can interact in another way:
+given a $k$-form $\alpha$ and an $\ell$-form $\beta$, we can consider
+a $(k+\ell)$-form
+\[ \alpha \wedge \beta. \]
+So we can equip the set of forms with a ``product'', satisfying
+$\beta \wedge \alpha = (-1)^{k\ell} \alpha \wedge \beta$.
+This is a special case of a more general structure:
+
+\begin{definition}
+ A \vocab{graded pseudo-ring} $R$ is an abelian group
+ \[ R = \bigoplus_{d \ge 0} R^d \]
+ where $R^0$, $R^1$, \dots, are abelian groups,
+ with an additional associative binary operation $\times : R \times R \to R$.
+ We require that if $r \in R^d$ and $s \in R^e$, we have $rs \in R^{d+e}$.
+ Elements of an $R^d$ are called \vocab{homogeneous elements};
+ if $r \in R^d$ and $r \neq 0$, we write $|r| = d$.
+\end{definition}
+Note that we do \emph{not} assume commutativity.
+In fact, these ``rings'' may not even have an identity $1$.
+We use other words if there are additional properties:
+\begin{definition}
+ A \vocab{graded ring} is a graded pseudo-ring with $1$.
+ If it is commutative we say it is a \vocab{commutative graded ring}.
+\end{definition}
+\begin{definition}
+ A graded (pseudo-)ring $R$ is \vocab{anticommutative} if
+ for any homogeneous $r$ and $s$ we have
+ \[ rs = (-1)^{|r| |s|} sr. \]
+\end{definition}
+
+To summarize:
+\begin{center}
+ \small
+ \begin{tabular}[h]{|c|cc|}
+ \hline
+ \textbf{Flavors of graded rings} &
+ Need not have $1$ & Must have a $1$ \\ \hline
+ No Assumption & graded pseudo-ring & graded ring \\
+ Anticommutative & anticommutative pseudo-ring & anticommutative ring \\
+ Commutative & & commutative graded ring \\ \hline
+ \end{tabular}
+\end{center}
+
+\begin{example}[Examples of graded rings]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The ring $R = \ZZ[x]$ is a \textbf{commutative graded ring},
+ with the $d$th component being the multiples of $x^d$.
+ \ii The ring $R = \ZZ[x,y,z]$ is a \textbf{commutative graded ring},
+ with the $d$th component being the abelian group
+ of homogeneous degree $d$ polynomials (and $0$).
+ \ii Let $V$ be a vector space, and consider
+ the abelian group
+ \[ \Lambda^\bullet(V) = \bigoplus_{d \ge 0} \Lambda^d(V). \]
+ For example, $e_1 + (e_2 \wedge e_3) \in \Lambda^\bullet(V)$, say.
+ We endow $\Lambda^\bullet(V)$ with the product $\wedge$,
+ which makes it into an \textbf{anticommutative ring}.
+ \ii Consider the set of differential forms of a manifold $M$,
+ say \[ \Omega^\bullet(M) = \bigoplus_{d \ge 0} \Omega^d(M) \]
+ endowed with the product $\wedge$.
+ This is an \textbf{anticommutative ring}.
+ \end{enumerate}
+ All four examples have a multiplicative identity.
+\end{example}
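+
+As a sanity check on the signs in $\Lambda^\bullet(V)$,
+take $r = e_1$ and $s = e_2 \wedge e_3$, so $|r| = 1$ and $|s| = 2$.
+Anticommutativity predicts $rs = (-1)^{1 \cdot 2} sr = sr$,
+and indeed
+\[ e_1 \wedge (e_2 \wedge e_3) = (e_2 \wedge e_3) \wedge e_1 \]
+since moving $e_1$ past each of $e_2$ and $e_3$ contributes one
+factor of $-1$, and $(-1)^2 = 1$.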
+
+Let's return to the situation of $\Omega^\bullet(M)$.
+Consider again the de Rham cohomology groups $\HdR^k(M)$,
+whose elements are closed forms modulo exact forms.
+We claim that:
+\begin{lemma}
+ [Wedge product respects de Rham cohomology]
+ The wedge product induces a map
+ \[ \wedge : \HdR^k(M) \times \HdR^\ell(M) \to \HdR^{k+\ell}(M). \]
+\end{lemma}
+\begin{proof}
+ First, we recall that the operator $d$ satisfies
+ \[
+ d(\alpha \wedge \beta)
+ = (d\alpha) \wedge \beta + \alpha \wedge (d\beta).
+ \]
+ Now suppose $\alpha$ and $\beta$ are closed forms.
+ Then from the above, $\alpha \wedge \beta$ is clearly closed.
+ Also if $\alpha$ is closed and $\beta = d\omega$ is exact,
+ then $\alpha \wedge \beta$ is exact, from the identity
+ \[ d(\alpha \wedge \omega)
+ = d\alpha \wedge\omega + \alpha \wedge d\omega = \alpha \wedge \beta. \]
+ Similarly if $\alpha$ is exact and $\beta$ is closed
+ then $\alpha \wedge \beta$ is exact.
+ Thus it makes sense to take the product modulo exact forms,
+ giving the lemma above.
+\end{proof}
+
+Therefore, we obtain an \emph{anticommutative ring}
+\[ \HdR^\bullet(M) = \bigoplus_{k \ge 0} \HdR^k(M) \]
+with $\wedge$ as the product,
+and the class of the constant function $1 \in \Omega^0(M)$ as the identity.
+
+\section{Cup products}
+Inspired by this, we want to see if we can construct a similar product
+on $\bigoplus_{k \ge 0} H^k(X; R)$ for any topological space $X$ and ring $R$
+(where $R$ is commutative with $1$ as always).
+The way to do this is via the \emph{cup product}.
+
+The cup product gives us a way to multiply two cochains, as follows.
+\begin{definition}
+ Suppose $\phi \in C^k(X;R)$ and $\psi \in C^\ell(X;R)$.
+ Then we can define their \vocab{cup product}
+ $\phi\smile\psi \in C^{k+\ell}(X;R)$ to be
+ \[
+ (\phi\smile\psi)([v_0, \dots, v_{k+\ell}])
+ =
+ \phi\left( [v_0, \dots, v_k] \right)
+ \cdot
+ \psi\left( [v_k, \dots, v_{k+\ell}] \right)
+ \]
+ where the multiplication is in $R$.
+\end{definition}
+
+\begin{ques}
+ Assuming $R$ has a $1$, which $0$-cochain is the identity for $\smile$?
+\end{ques}
+
+First, we prove an analogous result as before:
+\begin{lemma}[$\delta$ with cup products]
+ We have
+ $\delta(\phi\smile\psi) = \delta\phi\smile\psi
+ + (-1)^k\phi\smile\delta\psi$.
+\end{lemma}
+\begin{proof}
+ Direct $\sum$ computations.
+\end{proof}
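Because the Leibniz rule is an identity of cochains, it can be checked term by term on a single simplex, for \emph{arbitrary} integer-valued functions on vertex tuples. Here is a minimal sketch of that computation in Python (all function names are ours):

```python
def delta(f):
    """Simplicial coboundary: (delta f)(v0..vn) = sum_i (-1)^i f(.. v_i omitted ..)."""
    return lambda s: sum((-1) ** i * f(s[:i] + s[i + 1:]) for i in range(len(s)))

def cup(f, g, k):
    """Cup product of a k-cochain f with a cochain g:
    (f cup g)(v0..v_m) = f(v0..v_k) * g(v_k..v_m)."""
    return lambda s: f(s[:k + 1]) * g(s[k:])

def leibniz_holds(f, g, k, s):
    """Check delta(f cup g) == (delta f) cup g + (-1)^k f cup (delta g) on s."""
    lhs = delta(cup(f, g, k))(s)
    rhs = cup(delta(f), g, k + 1)(s) + (-1) ** k * cup(f, delta(g), k)(s)
    return lhs == rhs

# Two arbitrary integer-valued "cochains"; the identity is combinatorial,
# so any functions on vertex tuples will do.
f_ex = lambda s: 1 + sum((i + 1) * (v + 2) for i, v in enumerate(s))
g_ex = lambda s: 2 + sum(v * v for v in s)
```

Evaluating both sides on, say, the $4$-simplex $(0,1,2,3,4)$ with $k=1$ confirms the lemma in that case.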
+Thus, by the same routine we used for de Rham cohomology, we get
+an induced map
+\[ \smile : H^k(X;R) \times H^\ell(X;R) \to H^{k+\ell}(X;R). \]
+We then define the \vocab{singular cohomology ring}
+whose elements are finite sums in
+\[ H^\bullet(X;R) = \bigoplus_{k \ge 0} H^k(X;R) \]
+and with multiplication given by $\smile$.
+Thus it is a graded ring (with $1_R \in R$ the identity)
+and is in fact anticommutative:
+\begin{proposition}[Cohomology is anticommutative]
+ $H^\bullet(X; R)$ is an anticommutative ring,
+ meaning $\phi \smile \psi = (-1)^{k\ell} \psi \smile \phi$
+ for $\phi \in H^k(X;R)$ and $\psi \in H^\ell(X;R)$.
+\end{proposition}
+For a proof, see \cite[Theorem 3.11, pages 210--212]{ref:hatcher}.
+Moreover, we have the de Rham isomorphism
+\begin{theorem}
+ [de Rham extends to ring isomorphism]
+ For any smooth manifold $M$, the isomorphism
+ of de Rham cohomology groups to singular cohomology
+ groups in fact gives an isomorphism
+ \[ H^\bullet(M; \RR) \cong \HdR^\bullet(M) \]
+ of anticommutative rings.
+\end{theorem}
+
+Therefore, if ``differential forms'' are the way to visualize
+the elements of a cohomology group, the wedge product is the
+correct way to visualize the cup product.
+
+We now present (mostly without proof)
+the cohomology rings of some common spaces.
+
+\begin{example}
+ [Cohomology of torus]
+ The cohomology ring $H^\bullet(S^1 \times S^1; \ZZ)$
+ of the torus is generated by elements $\alpha$, $\beta$ of degree $|\alpha| = |\beta| = 1$
+ which satisfy the relations
+ $\alpha \smile \alpha = \beta \smile \beta = 0$,
+ and $\alpha \smile \beta = -\beta \smile \alpha$.
+ (It also includes an identity $1$.)
+ Thus as a $\ZZ$-module it is
+ \[ H^\bullet(S^1 \times S^1; \ZZ)
+ \cong \ZZ \oplus \left[ \alpha \ZZ \oplus \beta \ZZ \right]
+ \oplus (\alpha \smile \beta) \ZZ. \]
+ This gives the expected dimensions $1+2+1=4$.
+ It is anticommutative.
+\end{example}
+
+\begin{example}[Cohomology ring of $S^n$]
+ Consider $S^n$ for $n \ge 1$.
+ The nontrivial cohomology groups are given by
+ $H^0(S^n; \ZZ) \cong H^n(S^n; \ZZ) \cong \ZZ$.
+ So as an abelian group
+ \[ H^\bullet(S^n; \ZZ) \cong \ZZ \oplus \alpha \ZZ \]
+ where $\alpha$ is the generator of $H^n(S^n; \ZZ)$.
+
+ Now, observe that $|\alpha\smile\alpha| = 2n$, but
+ since $H^{2n}(S^n; \ZZ) = 0$ we must have $\alpha\smile\alpha=0$.
+ So even more succinctly,
+ \[ H^\bullet(S^n; \ZZ) \cong \ZZ[\alpha]/(\alpha^2). \]
+ Confusingly enough, this graded ring is both
+ commutative \emph{and} anticommutative.
+ The reason is that $\alpha \smile \alpha = 0 = -(\alpha \smile \alpha)$.
+\end{example}
+
+\begin{example}[Cohomology ring of real and complex projective space]
+ It turns out that
+ \begin{align*}
+ H^\bullet(\RP^n; \Zc2) &\cong \Zc2[\alpha]/(\alpha^{n+1}) \\
+ H^\bullet(\CP^n; \ZZ) &\cong \ZZ[\beta]/(\beta^{n+1})
+ \end{align*}
+ where $|\alpha| = 1$ is a generator of $H^1(\RP^n; \Zc2)$
+ and $|\beta| = 2$ is a generator of $H^2(\CP^n; \ZZ)$.
+
+ Confusingly enough, both graded rings are commutative \emph{and} anticommutative.
+ In the first case it is because we work in $\Zc 2$, for which $1 = -1$,
+ so anticommutative is actually equivalent to commutative.
+ In the second case, all nonzero homogeneous elements have even degree,
+ so the sign $(-1)^{k\ell}$ is always $+1$.
+\end{example}
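Since every class in $H^\bullet(\CP^n; \ZZ)$ is a polynomial in $\beta$, cup products there can be modeled by polynomial multiplication truncated past $\beta^n$. A small sketch (the function name is ours):

```python
def trunc_poly_mul(p, q, n):
    """Multiply two elements of Z[b]/(b^(n+1)), e.g. the cohomology ring of
    CP^n with |b| = 2; coefficient lists are indexed by the power of b."""
    out = [0] * (n + 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            if i + j <= n:  # b^(n+1) = 0 truncates all higher terms
                out[i + j] += a * c
    return out
```

For $\RP^n$ with $\Zc2$ coefficients one would additionally reduce the coefficients mod $2$.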
+
+
+\section{Relative cohomology pseudo-rings}
+For $A \subseteq X$, one can also define a relative cup product
+\[ H^k(X,A;R) \times H^\ell(X,A;R) \to H^{k+\ell}(X,A;R). \]
+After all, if either cochain vanishes on chains in $A$,
+then so does their cup product.
+This lets us define the \vocab{relative cohomology pseudo-ring}
+and the \vocab{reduced cohomology pseudo-ring} (by taking $A = \{\ast\}$), say
+\begin{align*}
+H^\bullet(X,A;R) &= \bigoplus_{k \ge 0} H^k(X,A; R) \\
+\wt H^\bullet(X;R) &= \bigoplus_{k \ge 0} \wt H^k(X;R).
+\end{align*}
+These are both \textbf{anticommutative pseudo-rings}.
+Indeed, often we have $\wt H^0(X;R) = 0$ and thus there is no identity at all.
+
+Once again we have functoriality:
+\begin{theorem}
+ [Cohomology (pseudo-)rings are functorial]
+ Fix a ring $R$ (commutative with $1$).
+ Then we have functors
+ \begin{align*}
+ H^\bullet(-; R) &: \catname{hTop}\op \to \catname{GradedRings} \\
+ H^\bullet(-,-; R) &: \catname{hPairTop}\op \to \catname{GradedPseudoRings}.
+ \end{align*}
+\end{theorem}
+
+Unfortunately, unlike with (co)homology groups,
+it is a nontrivial task to determine the cup product
+for even nice spaces like CW complexes.
+So we will not do much in the way of computation.
+However, there is a little progress we can make.
+
+\section{Wedge sums}
+Our goal is to now compute $\wt H^\bullet(X \vee Y)$.
+To do this, we need to define the product of two graded pseudo-rings:
+\begin{definition}
+ Let $R$ and $S$ be two graded pseudo-rings.
+ The \vocab{product pseudo-ring} $R \times S$ is the graded pseudo-ring
+ defined by taking the underlying abelian group as
+ \[ R \oplus S = \bigoplus_{d \ge 0} (R^d \oplus S^d). \]
+ Multiplication comes from $R$ and $S$, followed by
+ declaring $r \cdot s = 0$ for $r \in R$, $s \in S$.
+\end{definition}
+Note that this is just the graded version of the product ring
+defined in \Cref{ex:product_ring}.
+\begin{exercise}
+ Show that if $R$ and $S$ are graded rings (meaning they have $1_R$ and $1_S$),
+ then so is $R \times S$.
+\end{exercise}
+
+Now, the theorem is that:
+\begin{theorem}
+ [Cohomology pseudo-rings of wedge sums]
+ We have
+ \[
+ \wt H^\bullet(X \vee Y; R)
+ \cong \wt H^\bullet(X;R)
+ \times \wt H^\bullet(Y;R)
+ \]
+ as graded pseudo-rings.
+\end{theorem}
+
+This allows us to resolve the first question posed at the beginning.
+Let $X = \CP^2$ and $Y = S^2 \vee S^4$.
+We have that
+\[ H^\bullet(\CP^2; \ZZ) \cong \ZZ[\alpha] / (\alpha^3). \]
+Hence this is a graded ring generated by three elements:
+\begin{itemize}
+ \ii $1$, in dimension $0$.
+ \ii $\alpha$, in dimension $2$.
+ \ii $\alpha^2$, in dimension $4$.
+\end{itemize}
+Next, consider the reduced cohomology pseudo-ring
+\[ \wt H^\bullet(S^2 \vee S^4; \ZZ) \cong
+ \wt H^\bullet(S^2; \ZZ)
+ \oplus \wt H^\bullet(S^4 ; \ZZ).
+\]
+Thus the absolute cohomology ring $H^\bullet(S^2 \vee S^4 ; \ZZ)$
+is a graded ring also generated by three elements:
+\begin{itemize}
+ \ii $1$, in dimension $0$ (once we add back in the $0$th dimension).
+ \ii $a_2$, in dimension $2$ (from $H^\bullet(S^2 ; \ZZ)$).
+ \ii $a_4$, in dimension $4$ (from $H^\bullet(S^4 ; \ZZ)$).
+\end{itemize}
+In each degree, the two graded components are isomorphic, as we expected.
+However, in the former, the product of two degree $2$ generators is
+\[ \alpha \cdot \alpha = \alpha^2. \]
+In the latter, the product of two degree $2$ generators is
+\[ a_2 \cdot a_2 = a_2^2 = 0 \]
+since $a_2 \smile a_2 = 0 \in H^\bullet(S^2; \ZZ)$.
+
+Thus $S^2 \vee S^4$ and $\CP^2$ are not homotopy equivalent.
+
+\section{K\"unneth formula}
+We now wish to tell apart the spaces $S^2 \times S^4$ and $\CP^3$.
+In order to do this, we will need a formula
+for $H^n(X \times Y; R)$ in terms of $H^n(X;R)$ and $H^n(Y;R)$.
+Such formulas are called \vocab{K\"unneth formulas}.
+In this section we will only use a very special case,
+which involves the tensor product of two graded rings.
+
+\begin{definition}
+ Let $A$ and $B$ be two graded rings which are also $R$-modules
+ (where $R$ is a commutative ring with $1$).
+ We define the \vocab{tensor product} $A \otimes_R B$ as follows.
+ As an abelian group, it is
+ \[ A \otimes_R B = \bigoplus_{d \ge 0}
+ \left( \bigoplus_{k=0}^{d} A^k \otimes_R B^{d-k} \right). \]
+ The multiplication is given on basis elements by
+ \[ \left( a_1 \otimes b_1 \right)\left( a_2 \otimes b_2 \right)
+ = (a_1a_2) \otimes (b_1b_2).
+ \]
+ Of course the multiplicative identity is $1_A \otimes 1_B$.
+\end{definition}
+
+Now let $X$ and $Y$ be topological spaces, and take the product:
+we have a diagram
+\begin{center}
+\begin{tikzcd}
+ & X \times Y \ar[ld, "\pi_X"'] \ar[rd, "\pi_Y"] \\
+ X && Y
+\end{tikzcd}
+\end{center}
+where $\pi_X$ and $\pi_Y$ are projections.
+As $H^k(-; R)$ is functorial, this gives induced maps
+\begin{align*}
+ \pi_X^\ast &: H^k(X \times Y; R) \to H^k(X; R) \\
+ \pi_Y^\ast &: H^k(X \times Y; R) \to H^k(Y; R)
+\end{align*}
+for every $k$.
+
+By using this, we can define a so-called cross product.
+\begin{definition}
+ Let $R$ be a ring, and $X$ and $Y$ spaces.
+ Let $\pi_X$ and $\pi_Y$ be the projections of $X \times Y$
+ onto $X$ and $Y$.
+ Then the \vocab{cross product} is the map
+ \[
+ H^\bullet(X; R) \otimes_R H^\bullet(Y;R)
+ \taking{\times} H^\bullet(X \times Y; R)
+ \]
+ acting on cocycles as follows:
+ $\phi \times \psi = \pi_X^\ast(\phi) \smile \pi_Y^\ast(\psi)$.
+\end{definition}
+
+This is just the most natural way to take a $k$-cocycle
+on $X$ and an $\ell$-cocycle on $Y$, and create a $(k+\ell)$-cocycle
+on the product space $X \times Y$.
+
+
+\begin{theorem}
+ [K\"unneth formula]
+ Let $X$ and $Y$ be CW complexes such that $H^k(Y;R)$
+ is a finitely generated free $R$-module for every $k$.
+ Then the cross product is an isomorphism of anticommutative rings
+ \[
+ H^\bullet(X;R) \otimes_R H^\bullet(Y;R)
+ \to H^\bullet(X \times Y; R).
+ \]
+\end{theorem}
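At the level of graded ranks, the K\"unneth formula says the Betti numbers of $X \times Y$ are the convolution of those of $X$ and $Y$ (equivalently, Poincar\'e polynomials multiply). A quick sketch under the freeness hypothesis:

```python
def kunneth_betti(bx, by):
    """Betti numbers of X x Y from those of X and Y, assuming free
    (or field) coefficients: b_n(X x Y) = sum_k b_k(X) * b_(n-k)(Y)."""
    out = [0] * (len(bx) + len(by) - 1)
    for i, a in enumerate(bx):
        for j, c in enumerate(by):
            out[i + j] += a * c
    return out
```

For example, $S^2 \times S^4$ and $\CP^3$ get the same list $1,0,1,0,1,0,1$, which is why we must look at the ring structure to tell them apart.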
+
+In any case, this finally lets us resolve the question
+set out at the beginning.
+We saw that $H_n(\CP^3) \cong H_n(S^2 \times S^4)$ for every $n$,
+and thus it follows that $H^n(\CP^3; \ZZ) \cong H^n(S^2 \times S^4; \ZZ)$ too.
+
+But now let us look at the cohomology rings. First, we have
+\[ H^\bullet(\CP^3; \ZZ) \cong \ZZ[\alpha] / (\alpha^4)
+ \cong \ZZ \oplus \alpha\ZZ \oplus \alpha^2\ZZ \oplus \alpha^3\ZZ
+\]
+where $|\alpha| = 2$; hence this is a graded ring generated by
+\begin{itemize}
+ \ii $1$, in degree $0$.
+ \ii $\alpha$, in degree $2$.
+ \ii $\alpha^2$, in degree $4$.
+ \ii $\alpha^3$, in degree $6$.
+\end{itemize}
+
+Now let's analyze
+\[ H^\bullet(S^2 \times S^4; \ZZ) \cong
+ \ZZ[\beta] / (\beta^2)
+ \otimes
+ \ZZ[\gamma] / (\gamma^2).
+\]
+It is thus generated by the following elements:
+\begin{itemize}
+ \ii $1 \otimes 1$, in degree $0$.
+ \ii $\beta \otimes 1$, in degree $2$.
+ \ii $1 \otimes \gamma$, in degree $4$.
+ \ii $\beta \otimes \gamma$, in degree $6$.
+\end{itemize}
+Again in each dimension we have the same abelian group.
+But notice that if we square $\beta \otimes 1$ we get
+\[ (\beta \otimes 1)(\beta \otimes 1) = \beta^2 \otimes 1 = 0. \]
+Yet the degree $2$ generator of $H^\bullet(\CP^3; \ZZ)$
+does not have this property.
+Hence these two graded rings are not isomorphic.
+
+So it follows that $\CP^3$ and $S^2 \times S^4$ are not homotopy equivalent.
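The decisive computation can be mechanized. In the sketch below (representation ours), an element of $\ZZ[\beta]/(\beta^2) \otimes_\ZZ \ZZ[\gamma]/(\gamma^2)$ is a dictionary sending the exponent pair $(i,j)$ of $\beta^i \otimes \gamma^j$ to its coefficient; multiplication follows the definition of the tensor product given earlier (no signs arise, since $|\beta|$ and $|\gamma|$ are even):

```python
def tensor_mul(u, v):
    """Multiply in Z[b]/(b^2) tensor Z[c]/(c^2).
    Elements are dicts {(i, j): coeff} meaning coeff * b^i tensor c^j."""
    out = {}
    for (i1, j1), a in u.items():
        for (i2, j2), c in v.items():
            i, j = i1 + i2, j1 + j2
            if i <= 1 and j <= 1:  # b^2 = 0 and c^2 = 0 kill everything else
                out[i, j] = out.get((i, j), 0) + a * c
    return {key: coef for key, coef in out.items() if coef}
```

One checks that the square of $\beta \otimes 1$ vanishes, in contrast to $\alpha^2 \neq 0$ in $H^\bullet(\CP^3; \ZZ)$.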
+
+
+% Borsuk Ulam
+
+\section\problemhead
+
+\begin{dproblem}
+ [Symmetry of Betti numbers by Poincar\'e duality]
+ \label{prob:betti}
+ Let $M$ be a smooth oriented compact $n$-manifold,
+ and let $b_k$ denote its $k$th Betti number.
+ Prove that $b_k = b_{n-k}$.
+ \begin{hint}
+ Write $H^k(M; \ZZ)$ in terms of $H_k(M)$
+ using the UCT, and analyze the ranks.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ Show that $\RP^n$ is not orientable for even $n$.
+ \begin{hint}
+ Use the previous result on Betti numbers.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Show that $\RP^3$ is not homotopy equivalent to $\RP^2 \vee S^3$.
+ \begin{hint}
+ Use the $\Zc2$ cohomologies, and find the cup product.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ \gim
+ Show that $S^m \vee S^n$ is not a deformation retract
+ of $S^m \times S^n$ for any $m,n \ge 1$.
+ \begin{hint}
+ Assume that $r : S^m \times S^n \to S^m \vee S^n$ is such a map.
+ Show that the induced map
+ $H^\bullet(S^m \vee S^n; \ZZ) \to H^\bullet(S^m \times S^n; \ZZ)$
+ between their cohomology rings is monic
+ (since there exists an inverse map $i$).
+ \end{hint}
+ \begin{sol}
+ See \cite[Example 3.3.14, pages 68--69]{ref:maxim752}.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/dedekind.tex b/books/napkin/dedekind.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7916280ba7c514875fa6cda9e472e7b86ac7d20a
--- /dev/null
+++ b/books/napkin/dedekind.tex
@@ -0,0 +1,725 @@
+\chapter{Unique factorization (finally!)}
+Took long enough.
+
+\section{Motivation}
+Suppose we're interested in solutions to the
+Diophantine equation $n = x^2 + 5y^2$ for a given $n$.
+The idea is to try and ``factor'' $n$ in $\ZZ[\sqrt{-5}]$,
+for example \[ 6 = (1+\sqrt{-5})(1-\sqrt{-5}). \]
+Unfortunately, this is not so simple, because as I've said before
+we don't have unique factorization of elements:
+\[ 6 = 2 \cdot 3 = \left( 1+\sqrt{-5} \right)\left( 1-\sqrt{-5} \right). \]
+One reason this doesn't work is that we don't have a notion of a
+\emph{greatest common divisor}.
+We can write $(35, 77) = 7$, but what do we make of $(3, 1+\sqrt{-5})$?
+
+The trick is to use ideals as a ``generalized GCD''.
+Recall that by $(a,b)$ I mean the ideal $\{ax + by \mid x,y \in \ZZ[\sqrt{-5}] \}$.
+You can see that $(35, 77) = (7)$,
+but $(3, 1+\sqrt{-5})$ will be left ``unsimplified'' because it doesn't
+represent an actual value in the ring.
+Using these \emph{sets} (ideals) as elements,
+it turns out that we can develop a full theory
+of prime factorization, and we do so in this chapter.
+
+In other words, we use the ideal $(a_1, \dots, a_m)$
+to interpret a ``generalized GCD'' of $a_1$, \dots, $a_m$.
+In particular, if we have a number $x$ we want to represent,
+we encode it as just $(x)$.
+
+Going back to our example of $6$,
+\[ (6) = (2) \cdot (3)
+ = \left( 1+\sqrt{-5} \right) \cdot \left( 1-\sqrt{-5} \right). \]
+Please take my word for it that in fact,
+the complete prime factorization of $(6)$ into prime ideals is
+\[
+ (6)
+ = (2,1-\sqrt{-5})^2 (3,1+\sqrt{-5})(3,1-\sqrt{-5})
+ = \kp^2 \kq_1 \kq_2. \]
+In fact, $(2) = \kp^2$, $(3) = \kq_1 \kq_2$,
+$(1+\sqrt{-5}) = \kp \kq_1$, $(1-\sqrt{-5}) = \kp \kq_2$.
+So $6$ indeed factorizes uniquely into ideals,
+even though it doesn't factor into elements.
+
+As one can see above,
+ideal factorization is more refined than element factorization.
+Once you have the factorization into \emph{ideals},
+you can from there recover all the factorizations into \emph{elements}.
+The upshot of this is that if we want to write $n$ as $x^2+5y^2$,
+we just have to factor $n$ into ideals,
+and from there we can recover all factorizations into elements,
+and finally all ways to write $n$ as $x^2+5y^2$.
+Since we can already break $n$ into rational prime factors
+(for example $6 = 2 \cdot 3$ above)
+we just have to figure out how each rational prime $p \mid n$ breaks down.
+There's a recipe for this, \Cref{thm:factor_alg}!
+In fact, I'll even tell you what it says in this special case:
+\begin{itemize}
+ \ii If $t^2+5$ factors as $(t+c)(t-c) \pmod p$,
+ then $(p) = (p, c+\sqrt{-5})(p, c-\sqrt{-5})$.
+ \ii Otherwise, $(p)$ is a prime ideal.
+\end{itemize}
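This recipe is easy to run by brute force. A quick sketch in Python (the function name is ours; for brevity it lumps the ramified primes $2$ and $5$ into the "root found" case):

```python
def factor_shape(p):
    """How the rational prime p factors in Z[sqrt(-5)]: if t^2 + 5 has a root
    c mod p, then (p) = (p, c + sqrt(-5))(p, c - sqrt(-5)); else (p) is prime."""
    for c in range(p):
        if (c * c + 5) % p == 0:
            return ("splits", c)
    return ("inert", None)
```

For $p = 3$ this finds the root $c = 1$, matching the factorization $(3) = \kq_1 \kq_2$ above, while $p = 11$ stays prime.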
+In this chapter we'll develop this theory of unique factorization in full generality.
+
+%We saw earlier that in rings, such as $\ZZ[\sqrt{-5}]$, unique factorization can fail:
+%\[ 6 = 2 \cdot 3 = \left( 1 - \sqrt{-5} \right)\left( 1 + \sqrt{5} \right). \]
+%I mentioned that we thus turned to the notion of \emph{ideals},
+%and defined the notion of a prime ideal.
+%Then I said
+%\begin{quote}
+% I now must regrettably inform you that prime factorization is still
+% not true even with the notion of a ``prime'' ideal
+% (though not I haven't told you how to multiply two ideals yet).
+% But it will work in the situations we care about most:
+% this is covered in the chapter on Dedekind domains.
+%\end{quote}
+%I can be precise about what's going to happen now.
+%We'll define a Dedekind domain, which in particular includes all rings of integers $\OO_K$.
+%Then prime factorization will work in Dedekind domains, and we'll throw a small party.
+
+\begin{remark}
+ In this chapter, I'll be using the letters $\ka$, $\kb$, $\kp$, $\kq$
+ for ideals of $\OO_K$.
+ When fractional ideals arise, I'll use $I$ and $J$ for them.
+\end{remark}
+
+\section{Ideal arithmetic}
+\prototype{$(x)(y) = (xy)$. In any case, think in terms of generators.}
+First, I have to tell you how to add and multiply two ideals $\ka$ and $\kb$.
+\begin{definition}
+ Given two ideals $\ka$ and $\kb$ of a ring $R$, we define
+ \begin{align*}
+ \ka + \kb &\defeq \left\{ a+b \mid a \in \ka, b \in \kb \right\} \\
+ \ka \cdot \kb &\defeq \left\{ a_1b_1 + \dots + a_n b_n
+ \mid a_i \in \ka, b_i \in \kb \right\}.
+ \end{align*}
+\end{definition}
+(Note that infinite sums don't make sense in general rings, which is why in $\ka \cdot \kb$
+we cut off the sum after some finite number of terms.)
+You can readily check these are actually ideals.
+This definition is more natural if you think about it in terms of
+the generators of $\ka$ and $\kb$.
+\begin{proposition}[Ideal arithmetic via generators]
+ Suppose $\ka = \left( a_1, a_2, \dots, a_n \right)$
+ and $\kb = \left( b_1, \dots, b_m \right)$ are ideals in a ring $R$.
+ Then
+ \begin{enumerate}[(a)]
+ \ii $\ka + \kb$ is the ideal generated by $a_1, \dots, a_n, b_1, \dots, b_m$.
+ \ii $\ka \cdot \kb$ is the ideal generated by $a_i b_j$,
+ for $1 \le i \le n$ and $1 \le j \le m$.
+ \end{enumerate}
+\end{proposition}
+\begin{proof}
+ Pretty straightforward; just convince yourself that this result is correct.
+\end{proof}
+In other words, for sums you append the two sets of generators together,
+and for products you take products of the generators.
+Note that for principal ideals, this coincides with ``normal'' multiplication,
+for example
+\[ (3) \cdot (5) = (15) \]
+in $\ZZ$.
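In $\ZZ$, where every ideal is principal, this ideal arithmetic collapses to familiar integer arithmetic: sums of ideals are GCDs and products are products. A tiny sketch (function names ours):

```python
from math import gcd

def ideal_sum(a, b):
    """(a) + (b) = (gcd(a, b)) in Z: append the generators, then simplify."""
    return gcd(a, b)

def ideal_mul(a, b):
    """(a) * (b) = (ab) in Z: take products of the generators."""
    return a * b
```
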
+\begin{remark}
+Note that for an ideal $\ka$ and an element $c$,
+the set \[ c \ka = \left\{ ca \mid a \in \ka \right\} \]
+is equal to $(c) \cdot \ka$.
+So ``scaling'' and ``multiplying by principal ideals'' are the same thing.
+This is important, since we'll be using the two notions interchangeably.
+\end{remark}
+
+Finally, since we want to do factorization we better have some notion of divisibility.
+So we define:
+\begin{definition}
+ We say $\ka$ divides $\kb$ and write $\ka \mid \kb$ if $\ka \supseteq \kb$.
+\end{definition}
+Note the reversal of inclusions!
+So $(3)$ divides $(15)$, because $(15)$ is contained in $(3)$;
+every multiple of $15$ is a multiple of $3$.
+And from the example in the previous section: In $\ZZ[\sqrt{-5}]$,
+$(3,1-\sqrt{-5})$ divides $(3)$ and $(1 - \sqrt{-5})$.
+
+Finally, the \vocab{prime ideals} are defined as in \Cref{def:prime_ideal}:
+$\kp$ is prime if $xy \in \kp$ implies $x \in \kp$ or $y \in \kp$.
+This is compatible with the definition of divisibility:
+\begin{exercise}
+ A nonzero proper ideal $\kp$ is prime
+ if and only if whenever $\kp$ divides $\ka \kb$,
+ $\kp$ divides one of $\ka$ or $\kb$.
+\end{exercise}
+As mentioned in \Cref{rem:unit_sign_issue},
+this also lets us ignore multiplication by units: $(-3) = (3)$.
+
+\section{Dedekind domains}
+\prototype{Any $\OO_K$ is a Dedekind domain.}
+We now define a Dedekind domain as follows.
+\begin{definition}
+ An integral domain $\mathcal A$ is a \vocab{Dedekind domain}
+ if it is Noetherian, integrally closed, and
+ \emph{every nonzero prime ideal of $\mathcal A$ is in fact maximal}.
+ (The last condition is the important one.)
+\end{definition}
+Here there's one new word I have to define for you, but we won't make much use of it.
+\begin{definition}
+ Let $R$ be an integral domain and let $K$ be its field of fractions.
+ We say $R$ is \vocab{integrally closed} if
+ the only elements $a \in K$ which are roots of \emph{monic} polynomials in $R$
+ are the elements of $R$ (each $r \in R$ being a root of the monic polynomial $x - r$).
+\end{definition}
+The \emph{interesting} condition in the definition
+of a Dedekind domain is the last one: prime ideals and maximal ideals
+are the same thing.
+The other conditions are just technicalities,
+but ``primes are maximal'' has real substance.
+\begin{example}[$\ZZ$ is a Dedekind domain]
+ The ring $\ZZ$ is a Dedekind domain.
+ Note that
+ \begin{itemize}
+ \ii $\ZZ$ is Noetherian (for obvious reasons).
+ \ii $\ZZ$ has field of fractions $\QQ$.
+ If $f(x) \in \ZZ[x]$ is monic, then by the rational root theorem
+ any rational roots are integers
+ (this is the same as the proof that $\ol\ZZ \cap \QQ = \ZZ$).
+ Hence $\ZZ$ is integrally closed.
+ \ii The nonzero prime ideals of $\ZZ$ are $(p)$,
+ which also happen to be maximal.
+ \end{itemize}
+\end{example}
+
+The case of interest is a ring $\OO_K$ in which we wish to do factorizing.
+We're now going to show that for any number field $K$, the ring $\OO_K$ is a Dedekind domain.
+First, the boring part.
+\begin{proposition}[$\OO_K$ integrally closed and Noetherian]
+ For any number field $K$, the ring $\OO_K$ is integrally closed and Noetherian.
+\end{proposition}
+\begin{proof}
+ Boring, but here it is anyways for completeness.
+
+ Since $\OO_K \cong \ZZ^{\oplus n}$, we get that it's Noetherian.
+
+ Now we show that $\OO_K$ is integrally closed.
+ Suppose that $\eta \in K$ is the root of some polynomial with coefficients in $\OO_K$.
+ Thus
+ \[ \eta^n = \alpha_{n-1} \cdot \eta^{n-1} + \alpha_{n-2} \cdot \eta^{n-2}
+ + \dots + \alpha_0 \]
+ where $\alpha_i \in \OO_K$. We want to show that $\eta \in \OO_K$ as well.
+
+ Well, from the above, $\OO_K[\eta]$ is a finitely generated $\ZZ$-module;
+ thus its $\ZZ$-submodule $\ZZ[\eta]$ is also finitely generated.
+ So $\eta \in \ol\ZZ$, and hence $\eta \in K \cap \ol\ZZ = \OO_K$.
+\end{proof}
+Now let's do the fun part.
+We'll prove a stronger result, which will re-appear repeatedly.
+\begin{theorem}[Important: prime ideals divide rational primes]
+ Let $\OO_K$ be a ring of integers
+ and $\kp$ a nonzero prime ideal inside it.
+ Then $\kp$ contains a rational prime $p$.
+ Moreover, $\kp$ is maximal.
+\end{theorem}
+\begin{proof}
+ Take any $\alpha \neq 0$ in $\kp$.
+ Its Galois conjugates are algebraic integers
+ so their product $\Norm(\alpha)/\alpha$ is in $\OO_K$
+ (even though each individual conjugate need not be in $K$).
+ Consequently, $\Norm(\alpha) \in \kp$,
+ and we conclude $\kp$ contains some integer.
+
+ Then take the smallest positive integer in $\kp$, say $p$.
+ We must have that $p$ is a rational prime, since otherwise $\kp \ni p = xy$
+ implies one of $x,y \in \kp$, contradicting the minimality of $p$.
+ This shows the first part.
+
+ We now do something pretty tricky to show $\kp$ is maximal.
+ Look at $\OO_K / \kp$;
+ since $\kp$ is prime it's supposed to be an integral domain\dots\
+ but we claim that it's actually finite!
+ To do this, we forget that we can multiply on $\OO_K$.
+ Recalling that $\OO_K \cong \ZZ^{\oplus n}$ as an abelian group,
+ we obtain a map
+ \[ {\FF_p}^{\oplus n} \cong \OO_K / (p) \surjto \OO_K / \kp. \]
+ Hence $\left\lvert \OO_K / \kp \right\rvert \le p^n$ is \emph{finite}.
+ Since finite integral domains are fields (\Cref{prob:finite_domain_field})
+ we are done.
+\end{proof}
+Since every nonzero prime $\kp$ is maximal, we now know that $\OO_K$ is a Dedekind domain.
+Note that this tricky proof is essentially inspired by the solution to \Cref{prob:dedekind_sample}.
+
+
+\section{Unique factorization works}
+Okay, I'll just say it now!
+\begin{moral}
+ Unique factorization works perfectly in Dedekind domains!
+\end{moral}
+\begin{theorem}[Prime factorization works]
+ Let $\ka$ be a nonzero proper ideal of a Dedekind domain $\mathcal A$.
+ Then $\ka$ can be written as a finite product of nonzero prime ideals $\kp_i$, say
+ \[ \ka = \kp_1^{e_1} \kp_2^{e_2} \dots \kp_g^{e_g} \]
+ and this factorization is unique up to the order of the $\kp_i$.
+
+ Moreover, $\ka$ divides $\kb$ if and only if for every prime ideal $\kp$,
+ the exponent of $\kp$ in $\ka$ is at most the corresponding exponent in $\kb$.
+\end{theorem}
+%% Ofer joke
+% As ideals, you and I are coprime because together we are (1).
+
+I won't write out the proof, but I'll describe the basic method of attack.
+Section 3 of \cite{ref:ullery} does a nice job of explaining it.
+When we proved the fundamental theorem of arithmetic, the basic plot was:
+\begin{enumerate}[(1)]
+ \ii Show that if $p$ is a rational prime\footnote{
+ Note that the kindergarten definition of a prime is
+ that ``$p$ isn't the product of two smaller integers''.
+ This isn't the correct definition of a prime:
+ the definition of a prime is that $p \mid bc$
+ means $p \mid b$ or $p \mid c$.
+ The kindergarten definition is something called ``irreducible''.
+ Fortunately, in $\ZZ$, primes and irreducibles are the same thing,
+ so no one ever told you that your definition of ``prime'' was wrong.}
+ then $p \mid bc$ means $p \mid b$ or $p \mid c$. (This is called Euclid's Lemma.)
+ \ii Use strong induction to show that every $N > 1$ can be written as the product of primes (easy).
+ \ii Show that if $p_1 \dots p_m = q_1 \dots q_n$ for some primes (not necessarily unique),
+ then $p_1 = q_i$ for some $i$, say $q_1$.
+ \ii Divide both sides by $p_1$ and use induction.
+\end{enumerate}
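Step (2) is the only step that needs a notion of size; in $\ZZ$ it amounts to peeling off the smallest divisor greater than $1$ (which is automatically prime, since any factor of it would be a smaller divisor) and recursing. A sketch with a hypothetical function name:

```python
def prime_factor(n):
    """Factor n > 1 into primes by strong induction, as in step (2):
    the smallest d > 1 dividing n must be prime, so split it off and recurse."""
    assert n > 1
    d = 2
    while n % d:
        d += 1
    return [d] if d == n else [d] + prime_factor(n // d)
```
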
+What happens if we try to repeat the proof here?
+We get step 1 for free, because we're using a better definition of ``prime''.
+We can also do step 3, since it follows from step 1.
+But step 2 doesn't work,
+because for abstract Dedekind domains
+we don't really have a notion of size.
+And step 4 doesn't work because we don't yet have a
+notion of what the inverse of a prime ideal is.
+
+Well, it turns out that we \emph{can} define the inverse $\ka\inv$ of an ideal,
+and I'll do so by the end of this chapter.
+You then need to check that $\ka \cdot \ka\inv = (1) = \mathcal A$.
+In fact, even this isn't easy.
+You have to check it's true for prime ideals $\kp$ first,
+\emph{then} prove prime factorization,
+and only then deduce that $\ka \cdot \ka\inv = (1)$ holds in general.
+Moreover, $\ka\inv$ is not actually an ideal, so you need to
+work in the field of fractions $K$ instead of $\mathcal A$.
+
+So the main steps in the new situation are as follows:
+\begin{enumerate}[(1)]
+ \ii First, show that every ideal $\ka$ divides $\kp_1 \dots \kp_g$
+ for some finite collection of primes.
+ (This is an application of Zorn's Lemma.)
+ \ii Define $\kp\inv$ and show that $\kp \kp\inv = (1)$.
+ \ii Show that a factorization exists (again using Zorn's Lemma).
+ \ii Show that it's unique, using the new inverse we've defined.
+\end{enumerate}
+
+Finally, let me comment on how nice this is if $\mathcal A$ is a PID (like $\ZZ$).
+Thus every element $a \in \mathcal A$ is in direct correspondence with an ideal $(a)$.
+Now suppose $(a)$ factors as a product of ideals $\kp_i = (p_i)$, say,
+\[ (a) = (p_1)^{e_1} (p_2)^{e_2} \dots (p_n)^{e_n} . \]
+This verbatim reads \[ a = u p_1^{e_1} p_2^{e_2} \dots p_n^{e_n} \]
+where $u$ is some unit (recall \Cref{def:unit}).
+Hence, Dedekind domains which are PID's satisfy unique factorization
+for \emph{elements}, just like in $\ZZ$.
+(In fact, the converse of this is true.)
+
+\section{The factoring algorithm}
+Let's look at some examples from quadratic fields.
+Recall that if $K = \QQ(\sqrt{d})$, then
+\[
+ \OO_K =
+ \begin{cases}
+ \ZZ[\sqrt d] & d \equiv 2,3 \pmod 4 \\
+ \ZZ\left[ \frac{1+\sqrt d}{2} \right] & d \equiv 1 \pmod 4.
+ \end{cases}
+\]
+Also, recall that the norm of $a+b\sqrt{d}$ is $a^2 - db^2$;
+in particular, for $d = -5$ it is $a^2 + 5b^2$.
+
+%In what follows, we are going to often use the trick that
+%\[ \ZZ[\alpha] \cong \ZZ[x] / (f) \]
+%where $f$ is the minimal polynomial of $\alpha$.
+
+\begin{example}[Factoring $6$ in the integers of $\QQ(\sqrt{-5})$]
+ Let $\OO_K = \ZZ[\sqrt{-5}]$ arise from $K = \QQ(\sqrt{-5})$.
+ We've already seen that
+ \[ (6) = (2) \cdot (3) = \left( 1+\sqrt{-5} \right)\left( 1-\sqrt{-5} \right) \]
+ and you can't get any further with these principal ideals.
+ But let
+ \[ \kp = \left( 1+\sqrt{-5}, 2 \right) = \left( 1-\sqrt{-5}, 2 \right)
+ \quad\text{and}\quad \kq_1 = (1+\sqrt{-5},3),
+ \; \kq_2 = (1-\sqrt{-5},3). \]
+ Then it turns out $(6) = \kp^2\kq_1\kq_2$.
+ More specifically, $(2) = \kp^2$, $(3) = \kq_1\kq_2$,
+ and $(1+\sqrt{-5}) = \kp\kq_1$ and $(1-\sqrt{-5}) = \kp\kq_2$.
+ (Proof in just a moment.)
+\end{example}
+I want to stress that all our ideals are computed relative to $\OO_K$.
+So for example, \[ (2) = \left\{ 2x \mid x \in \OO_K \right\}. \]
+
+How do we know in this example that $\kp$ is prime/maximal?
+(Again, these are the same since we're in a Dedekind domain.)
+Answer: look at $\OO_K / \kp$ and see if it's a field.
+There is a trick to this: we can express
+\[ \OO_K = \ZZ[\sqrt{-5}] \cong \ZZ[x] / (x^2+5). \]
+%\begin{ques}
+% Convince yourself this is true.
+% (More generally, if $\theta \in \ol\ZZ$ has minimal polynomial $p$,
+% then $\ZZ[\theta] \cong \ZZ[x] / (p)$).
+%\end{ques}
+So when we take \emph{that} mod $\kp$, we get that
+\[ \OO_K / \kp = \ZZ[x] / (x^2+5, 2, 1+x) \cong \FF_2[x] / (x^2+5,x+1) \]
+as rings.
+\begin{ques}
+ Conclude that $\OO_K / \kp \cong \mathbb F_2$,
+ and satisfy yourself that $\kq_1$ and $\kq_2$ are also maximal.
+\end{ques}
+I should give an explicit example of an ideal multiplication: let's compute
+\begin{align*}
+ \kq_1\kq_2 &= \left( (1+\sqrt{-5})(1-\sqrt{-5}), 3(1+\sqrt{-5}), 3(1-\sqrt{-5}), 9 \right) \\
+ &= \left( 6, 3+3\sqrt{-5}, 3-3\sqrt{-5}, 9 \right) \\
+ &= \left( 6, 3+3\sqrt{-5}, 3-3\sqrt{-5}, 3 \right) \\
+ &= (3)
+\end{align*}
+where we first did $9-6=3$ (think Euclidean algorithm!),
+then noted that all the other generators don't contribute
+anything we don't already have with the $3$
+(again these are ideals computed in $\OO_K$).
+You can do the computation for $\kp^2$, $\kp\kq_1$, $\kp\kq_2$ in the same way.
+
+Finally, it's worth pointing out that we should quickly
+verify that $\kp \neq (x)$ for some $x$;
+in other words, that $\kp$ is not principal.
+Assume for contradiction that it is.
+Then $x$ divides both $1+\sqrt{-5}$ and $2$, in the sense
+that $1+\sqrt{-5} = \alpha_1 x$ and $2 = \alpha_2 x$
+for some $\alpha_1, \alpha_2 \in \OO_K$.
+(Principal ideals are exactly the ``multiples'' of $x$, so $(x) = x \OO_K$.)
+Taking the norms, we find that $\Norm_{K/\QQ}(x)$ divides both
+\[ \Norm_{K/\QQ}(1+\sqrt{-5}) = 6 \quad\text{and}\quad \Norm_{K/\QQ}(2) = 4. \]
+So $\Norm_{K/\QQ}(x)$ divides $\gcd(6,4) = 2$;
+since $\kp \neq (1)$, $x$ cannot be a unit, so its norm must be exactly $2$.
+But there are no elements of norm $2 = a^2+5b^2$ in $\OO_K$.
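That last claim is a finite check: enumerate the possible norms $a^2 + 5b^2$ and observe that neither $2$ nor $3$ occurs (the latter shows $\kq_1$ and $\kq_2$ are not principal either). A quick sketch:

```python
def norms_up_to(bound):
    """All values a^2 + 5b^2 <= bound, i.e. the norms attained in Z[sqrt(-5)]."""
    vals = set()
    b = 0
    while 5 * b * b <= bound:
        a = 0
        while a * a + 5 * b * b <= bound:
            vals.add(a * a + 5 * b * b)
            a += 1
        b += 1
    return vals
```
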
+
+\begin{example}[Factoring $3$ in the integers of $\QQ(\sqrt{-17})$]
+ Let $\OO_K = \ZZ[\sqrt{-17}]$ arise from $K = \QQ(\sqrt{-17})$.
+ We know $\OO_K \cong \ZZ[x] / (x^2+17)$.
+ Now
+ \[
+ \OO_K / 3\OO_K \cong \ZZ[x] / (3,x^2+17)
+ \cong \FF_3[x] / (x^2-1).
+ \]
+ This already shows that $(3)$ cannot be a prime (i.e.\ maximal) ideal,
+ since otherwise the quotient would be a field.
+ Anyways, we have a projection
+ \[ \OO_K \surjto \FF_3[x] / \left( (x-1)(x+1) \right). \]
+ Let $\kq_1$ be the pre-image of $(x-1)$ in the image, that is,
+ \[ \kq_1 = (3, \sqrt{-17}-1). \]
+ Similarly, \[ \kq_2 = (3, \sqrt{-17}+1). \]
+ We have $\OO_K / \kq_1 \cong \FF_3$, so $\kq_1$ is maximal (prime).
+ Similarly $\kq_2$ is prime.
+ Magically, you can check explicitly that
+ \[ \kq_1 \kq_2 = (3). \]
+ Hence this is the factorization of $(3)$ into prime ideals.
+\end{example}
+
+The fact that $\kq_1 \kq_2 = (3)$ looks magical, but it's really true:
+\begin{align*}
+ \kq_1\kq_2
+ &= (3, \sqrt{-17}-1) (3, \sqrt{-17}+1) \\
+ &= (9, 3\sqrt{-17}+3, 3\sqrt{-17}-3, 18) \\
+ &= (9, 3\sqrt{-17}+3, 6) \\
+ &= (3, 3\sqrt{-17}+3, 6) \\
+ &= (3).
+\end{align*}
+In fact, it turns out this always works in general:
+given a rational prime $p$, there is an algorithm
+to factor $p$ in any $\OO_K$ of the form $\ZZ[\theta]$.
+
+\begin{theorem}[Factoring algorithm / Dedekind-Kummer theorem]
+ \label{thm:factor_alg}
+ Let $K$ be a number field.
+ Let $\theta \in \OO_K$ with $[\OO_K : \ZZ[\theta]] = j < \infty$,
+ and let $p$ be a prime not dividing $j$.
+ Then $(p) = p \OO_K$ is factored as follows:
+ \begin{quote}
+ Let $f$ be the minimal polynomial of $\theta$ and
+ factor $\ol f$ mod $p$ as
+ \[ \ol f \equiv \prod_{i=1}^g (\ol f_i)^{e_i} \pmod p. \]
+ Then $\kp_i = (f_i(\theta), p)$ is prime for each $i$
+ and the factorization of $(p)$ is
+ \[ \OO_K \supseteq (p) = \prod_{i=1}^g \kp_i^{e_i}. \]
+ \end{quote}
+ In particular, if $K$ is monogenic with $\OO_K = \ZZ[\theta]$ then $j=1$
+ and the theorem applies for all primes $p$.
+\end{theorem}
+In almost all our applications in this book, $K$ will be monogenic; i.e.\ $j=1$.
+Here $\ol \psi$ denotes the image in $\FF_p[x]$ of a polynomial $\psi \in \ZZ[x]$.
+
+\begin{ques}
+ There are many possible pre-images $f_i$ we could have chosen
+ (for example, if $\ol{f_i} = x^2+1 \pmod 3$, we could pick $f_i = x^2 + 3x + 7$).
+ Why does this not affect the value of $\kp_i$?
+\end{ques}
+
+Note that earlier, we could check the factorization worked
+for any particular case.
+The proof that this works is much the same, but we need one extra tool, the ideal norm.
+After that we leave the proof as \Cref{prob:prove_factoring_algorithm}.
+
+This algorithm gives us a concrete way to compute prime factorizations of $(p)$
+in any monogenic number field with $\OO_K = \ZZ[\theta]$. To summarize the recipe:
+\begin{enumerate}
+ \ii Find the minimal polynomial of $\theta$, say $f \in \ZZ[x]$.
+ \ii Factor $f$ mod $p$ into irreducible polynomials
+ ${\ol f_1}^{e_1} {\ol f_2}^{e_2} \dots {\ol f_g}^{e_g}$.
+ \ii Compute $\kp_i = (f_i(\theta), p)$ for each $i$.
+\end{enumerate}
+Then your $(p) = \kp_1^{e_1} \dots \kp_g^{e_g}$.
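+For quadratic fields this recipe is easy to run by machine too.
+Here is a minimal Python sketch (the function name is my own, not from the text)
+which factors $f(x) = x^2 + d$ modulo a prime $p$ by brute-force root search,
+mirroring the recipe for $\OO_K = \ZZ[\sqrt{-d}]$;
+for $d = 17$, $p = 3$ it recovers the factors $x - 1$ and $x + 1$ from the example above.

```python
def factor_x2_plus_d_mod_p(d, p):
    """Factor f(x) = x^2 + d modulo a prime p by brute-force root search.

    Returns (factor, exponent) pairs describing f mod p, mirroring the
    Dedekind--Kummer recipe for factoring (p) in Z[sqrt(-d)]
    (assuming p does not divide the index j).
    """
    roots = [r for r in range(p) if (r * r + d) % p == 0]
    if not roots:                     # f irreducible mod p: (p) stays prime
        return [(f"x^2 + {d}", 1)]
    if len(roots) == 1:               # double root: (p) = p_1^2
        return [(f"x - {roots[0]}", 2)]
    # two distinct roots r and p - r: (p) = p_1 * p_2
    return [(f"x - {r}", 1) for r in sorted(roots)]

# Factor (3) in Z[sqrt(-17)]: the roots of x^2 + 17 mod 3 are 1 and 2 = -1,
# giving (3) = (3, sqrt(-17) - 1)(3, sqrt(-17) + 1) as in the example.
print(factor_x2_plus_d_mod_p(17, 3))  # [('x - 1', 1), ('x - 2', 1)]
```

+Of course, for higher-degree $f$ one would use a genuine polynomial
+factoring routine; this brute-force version only handles the quadratic case.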
+
+\begin{exercise}
+ Factor $(29)$ in $\ZZ[i]$ using the above algorithm.
+\end{exercise}
+
+%\begin{remark}
+% What if $K$ isn't monogenic?
+% It turns out that we can still apply the
+% factoring algorithm to ``almost all primes'' as follows.
+% Suppose $K$ is a number field and $\alpha \in \OO_K$ such that
+% $[\OO_K : \ZZ[\alpha]] = j < \infty$.
+% Then as long as $p \nmid j$, we can apply the above algorithm
+% with $f$ the minimal polynomial of $\alpha$.
+% The formulation we presented above was the special case where $j=1$.
+%\end{remark}
+
+\section{Fractional ideals}
+\prototype{Analog to $\QQ$ for $\ZZ$, allowing us to take inverses of ideals.
+Prime factorization works in the nicest way possible.}
+We now have a neat theory of factoring ideals of $\mathcal A$,
+just like factoring the integers.
+Now note that our factorization of $\ZZ$ naturally gives a way to factor
+elements of $\QQ$; just factor the numerator and denominator separately.
+
+Let's make the analogy clearer.
+The analogue of a rational number is as follows.
+
+\begin{definition}
+ Let $\mathcal A$ be a Dedekind domain with field of fractions $K$.
+ A \vocab{fractional ideal} $J$ of $K$ is a set of the form
+ \[ J = \frac{1}{x} \cdot \ka \quad \text{where $x \in \mathcal A$, and $\ka$ is an integral ideal.} \]
+ For emphasis, ideals of $\mathcal A$ will sometimes be referred to as \vocab{integral ideals}.
+\end{definition}
+
+You might be a little surprised by this definition:
+one would expect that a fractional ideal should be of the form $\frac{\ka}{\kb}$
+for some integral ideals $\ka$, $\kb$.
+But in fact, it suffices to just take $x \in \mathcal A$ in the denominator.
+The analogy is that when we looked at $\OO_K$, we found that we only needed
+integer denominators: $\frac{1}{4-\sqrt3} = \frac{1}{13}(4+\sqrt3)$.
+Similarly here, it will turn out that we only need to look at $\frac1x \cdot \ka$
+rather than $\frac{\ka}{\kb}$, and so we define it this way from the beginning.
+See \Cref{prob:fractional_ideal_alt_def} for a different equivalent definition.
+
+\begin{example}[$\frac52\ZZ$ is a fractional ideal]
+ The set \[ \frac52 \ZZ = \left\{ \frac52n \mid n \in \ZZ \right\} = \half (5) \]
+ is a fractional ideal of $\ZZ$.
+\end{example}
+
+Now, as promised, the fractional ideals form a multiplicative group:
+\begin{theorem}[Fractional ideals form a group]
+ Let $\mathcal A$ be a Dedekind domain and $K$ its field of fractions.
+ For any integral ideal $\ka$, the set
+ \[ \ka\inv = \left\{ x \in K
+ \mid x\ka \subseteq (1) = \mathcal A \right\} \]
+ is a fractional ideal with $\ka \ka\inv = (1)$.
+\end{theorem}
+\begin{definition}
+ Thus nonzero fractional ideals of $K$ form a group under multiplication
+ with identity $(1) = \mathcal A$.
+ This \vocab{ideal group} is denoted $J_K$.
+\end{definition}
+
+\begin{example}[$(3)\inv$ in $\ZZ$]
+ Please check that in $\ZZ$ we have
+ \[ (3)\inv = \left\{ \frac 13 n \mid n \in \ZZ \right\} = \frac 13 \ZZ. \]
+\end{example}
+
+It follows that every fractional ideal $J$ can be uniquely written as
+\[ J = \prod_i \kp_i^{n_i} \cdot \prod_i \kq_i^{-m_i} \]
+where the $\kp_i$ and $\kq_i$ are distinct prime ideals,
+and $n_i$ and $m_i$ are positive integers.
+In fact, $J$ is an integral ideal if and only if all its exponents are nonnegative,
+just like the case with integers.
+So, a perhaps better way to think about fractional ideals is
+as products of prime ideals, possibly with negative exponents.
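+For instance, the fractional ideal from the earlier example factors as
+\[ \frac 52 \ZZ = (5) \cdot (2)^{-1} \]
+and the exponent $-1$ on the prime $(2)$ is exactly
+what prevents it from being an integral ideal.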
+
+\section{The ideal norm}
+One last tool is the ideal norm,
+which gives us a notion of the ``size'' of an ideal.
+\begin{definition}
+ The \vocab{ideal norm} (or absolute norm)
+ of a nonzero ideal $\ka \subseteq \OO_K$ is defined as
+ $\left\lvert \OO_K / \ka \right\rvert$ and denoted $\Norm(\ka)$.
+\end{definition}
+\begin{example}[Ideal norm of $(5)$ in the Gaussian integers]
+ Let $K = \QQ(i)$, $\OO_K = \ZZ[i]$.
+ Consider the ideal $(5)$ in $\OO_K$.
+ We have that
+ \[ \OO_K / (5) \cong \{ a+bi \mid a,b \in \Zc5 \} \]
+ so $(5)$ has ideal norm $25$,
+ corresponding to the fact that $\OO_K/(5)$ has $5^2=25$ elements.
+\end{example}
+
+\begin{example}[Ideal norm of $(2+i)$ in the Gaussian integers]
+ You'll notice that \[ \OO_K / (2+i) \cong \FF_5 \]
+ since mod $2+i$ we have both $5 \equiv 0$ and $i \equiv -2$.
+ (Indeed, since $(2+i)$ is prime we had better get a field!)
+ Thus $\Norm\left( (2+i) \right) = 5$; similarly $\Norm\left( (2-i) \right) = 5$.
+\end{example}
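+The isomorphism $\OO_K / (2+i) \cong \FF_5$ can even be verified by brute force.
+The map sends $a + bi \mapsto a - 2b \pmod 5$ (using $i \equiv -2$),
+and two Gaussian integers should have the same image exactly when
+their difference is divisible by $2+i$.
+A short Python check (the helper names are mine) over a box of Gaussian integers:

```python
def divisible_by_2_plus_i(a, b):
    """Is a + bi divisible by 2 + i in Z[i]?

    (a + bi)/(2 + i) = (a + bi)(2 - i)/5 lies in Z[i] exactly when
    both coordinates 2a + b and 2b - a are multiples of 5.
    """
    return (2 * a + b) % 5 == 0 and (2 * b - a) % 5 == 0

def residue(a, b):
    """Image of a + bi in F_5 under the map i |-> -2."""
    return (a - 2 * b) % 5

# Equal residues <=> difference divisible by 2 + i,
# so O_K/(2+i) really does have exactly 5 elements.
pts = [(a, b) for a in range(-6, 7) for b in range(-6, 7)]
for a1, b1 in pts:
    for a2, b2 in pts:
        assert (residue(a1, b1) == residue(a2, b2)) \
            == divisible_by_2_plus_i(a1 - a2, b1 - b2)
print(len({residue(a, b) for a, b in pts}))  # 5
```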
+
+Thus the ideal norm measures how ``roomy'' the ideal is:
+that is, $(5)$ is a lot more spaced out in $\ZZ[i]$ than it is in $\ZZ$.
+(This intuition will be important when we will actually view $\OO_K$ as a lattice.)
+
+\begin{ques}
+ What are the ideals with ideal norm one?
+\end{ques}
+% Regrettably, ideals with more elements have smaller norms.
+
+Our example with $(5)$ suggests several properties of the ideal norm
+which turn out to be true:
+\begin{lemma}[Properties of the absolute norm]
+ Let $\ka$ be a nonzero ideal of $\OO_K$.
+ \begin{enumerate}[(a)]
+ \ii $\Norm(\ka)$ is finite.
+ \ii For any other nonzero ideal $\kb$, $\Norm(\ka\kb) = \Norm(\ka)\Norm(\kb)$.
+ \ii If $\ka = (a)$ is principal, then $\Norm(\ka) = \left\lvert \Norm_{K/\QQ}(a) \right\rvert$.
+ % \ii The ideal norm $\Norm(\ka)$ equals the GCD of $\Norm_{K/\QQ}(\alpha)$ across
+ % all $\alpha \in \ka$. In particular, if $\ka = (\alpha)$ is principal
+ % then $\Norm(\ka) = \Norm_{K/\QQ}(\alpha)$.
+ \end{enumerate}
+\end{lemma}
+I unfortunately won't prove these properties, though we already did (a) in our proof that $\OO_K$
+was a Dedekind domain.
+% Anyways, this lemma provides another way to show an ideal isn't principal.
+
+%\begin{example}[Example of an Ideal Norm]
+% Let $K = \QQ(\sqrt 3)$, so $\OO_K = \ZZ[\sqrt 3]$.
+% % \ii The norm of the principal ideal $(4+\sqrt 3)$ is the norm of the element, $(4+\sqrt3)(4-\sqrt3) = 13$.
+% % \ii
+% Consider $\ka = (2, \sqrt3)$.
+% We need to have
+% \[ \Norm(\ka) \mid \gcd(\Norm_{K/\QQ}(2), \Norm_{K/\QQ}(3)) = \gcd(4, 3) = 1. \]
+% So in fact, $\Norm(\ka) = 1$. Surprise!
+% This actually means $\ka = (1)$,
+% which in hindsight also follows from the fact that
+% $\sqrt 3 \in \ka$, thus $\sqrt 3 \cdot \sqrt 3 \in \ka$ and $3 \in \ka$;
+% but $2 \in \ka$ so $1 \in \ka$.
+%\end{example}
+
+The fact that $\Norm$ is completely multiplicative lets us also consider the norm
+of a fractional ideal $J$ by the natural extension
+\[ J = \prod_i \kp_i^{n_i} \cdot \prod_i \kq_i^{-m_i}
+ \quad \implies \quad
+ \Norm(J) \defeq \frac{\prod_i \Norm(\kp_i)^{n_i}}{\prod_i \Norm(\kq_i)^{m_i}}. \]
+Thus $\Norm$ is a natural group homomorphism $J_K \to \QQ_{>0}$.
+
+
+\section\problemhead
+\begin{problem}
+ Show that there are three different factorizations of $77$ in $\OO_K$,
+ where $K = \QQ(\sqrt{-13})$.
+\end{problem}
+
+\begin{problem}
+ Let $K = \QQ(\cbrt 2)$;
+ take for granted that $\OO_K = \ZZ[\cbrt 2]$.
+ Find the factorization of $(5)$ in $\OO_K$.
+\end{problem}
+
+\begin{problem}
+ [Fermat's little theorem]
+ Let $\kp$ be a prime ideal in some ring of integers $\OO_K$.
+ Show that for $\alpha \in \OO_K$,
+ \[ \alpha^{\Norm(\kp)} \equiv \alpha \pmod{\kp}. \]
+ \begin{hint}
+ Copy the proof of the usual Fermat's little theorem.
+ \end{hint}
+ \begin{sol}
+ If $\alpha \equiv 0 \pmod{\kp}$ it's clear, so assume this isn't the case.
+ Then $\OO_K/\kp$ is a finite field with $\Norm(\kp)$ elements.
+ Looking at $(\OO_K/\kp)^\ast$, it's a multiplicative group with $\Norm(\kp)-1$ elements,
+ so $\alpha^{\Norm(\kp)-1} \equiv 1 \pmod{\kp}$, as desired.
+ \end{sol}
+\end{problem}
+
+\begin{dproblem}
+ Let $\mathcal A$ be a Dedekind domain with field of fractions $K$,
+ and pick $J \subseteq K$.
+ Show that $J$ is a fractional ideal if and only if
+ \begin{enumerate}[(i)]
+ \ii $J$ is closed under addition and multiplication
+ by elements of $\mathcal A$, and
+ \ii $J$ is finitely generated as an abelian group.
+ \end{enumerate}
+ More succinctly: $J$ is a fractional ideal $\iff$ $J$ is a finitely generated $\mathcal A$-module.
+ \label{prob:fractional_ideal_alt_def}
+ \begin{hint}
+ Clear denominators!
+ \end{hint}
+ \begin{sol}
+ Suppose it's generated by some elements in $K$; we can write them as
+ $\frac{\beta_i}{\alpha_i}$ for $\alpha_i, \beta_i \in \mathcal A$.
+ Hence
+ \[ J = \left\{ \sum_i \gamma_i \cdot \frac{\beta_i}{\alpha_i}
+ \mid \gamma_i \in \mathcal A \right\}. \]
+ Now ``clear denominators''. Set $\alpha = \alpha_1 \dots \alpha_n$,
+ and show that $\alpha J$ is an integral ideal.
+ \end{sol}
+\end{dproblem}
+
+\begin{problem}
+ \label{prob:prove_factoring_algorithm}
+ In the notation of \Cref{thm:factor_alg}, let $I = \prod_{i=1}^g \kp_i^{e_i}$.
+ Assume for simplicity that $K$ is monogenic, hence $\OO_K = \ZZ[\theta]$.
+ \begin{enumerate}[(a)]
+ \ii Prove that each $\kp_i$ is prime.
+ \ii Show that $(p)$ divides $I$.
+ \ii Use the norm to show that $(p) = I$.
+ \end{enumerate}
+ \begin{hint}
+ (a) is straightforward.
+ For (b) work mod $p$.
+ For (c) use norms.
+ \end{hint}
+ \begin{sol}
+ For part (a), note that the $\kp_i$ are prime
+ just because
+ \[ \OO_K / \kp_i
+ \cong (\ZZ[x] / f) / (p, f_i)
+ \cong \FF_p[x] / (f_i) \]
+ is a field, since the $f_i$ are irreducible.
+
+ We check (b).
+ Computing the product modulo $p$ yields\footnote{%
+ For example, suppose we want to know that $(3, 1+\sqrt{7})(3, 1-\sqrt{7})$ is contained in $(3)$.
+ We could do the full computation and get $(9, 3+3\sqrt{7}, 3-3\sqrt{7}, 6)$.
+ But if all we care about is that every element is divisible by $3$, we could have just taken ``mod $3$''
+ at the beginning and looked at just $(1+\sqrt{7})(1-\sqrt{7}) = (6)$;
+ all the other products we get will obviously have factors of $3$.
+ }
+ \[ \prod_{i=1}^{g} (f_i(\theta))^{e_i}
+ \equiv (f(\theta)) \equiv 0 \pmod p \]
+ so we've shown that $I \subseteq (p)$.
+
+ Finally, we prove (c) with a size argument.
+ The idea is that $I$ and $(p)$ really should have the same size;
+ to nail this down we'll use the ideal norm.
+ Since $(p)$ divides $I$, we can write
+ $ (p) = \prod_{i=1}^g \kp_i^{e_i'} $
+ where $e_i' \le e_i$ for each $i$.
+ Note that $\OO_K / (p) \cong \Zc p[x] / (f)$ has size $p^{\deg f}$.
+ Similarly, $\OO_K / \kp_i$ has size $p^{\deg f_i}$ for each $i$.
+ Compute $\Norm( (p) )$ using the $e_i'$ now and compare the results.
+% Hence
+% \[ p^{\deg f} = \Norm(p)
+% = \prod_{i=1}^r \Norm(\kp_i)^{e_i'}
+% = p^{e_1' \deg f_1 + \dots + e_g' \deg f_g}. \]
+% Hence
+% \[ e_1' \deg f_1 + \dots e_g' \deg f_g = \deg f. \]
+% On the other hand, it's obvious that \[ \deg f = e_1 \deg f_1 + \dots + e_g \deg f_g. \]
+% As $e_i' \le e_i$ for each $i$, we can only have $e_i = e_i'$.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/dets.tex b/books/napkin/dets.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e81996686db569ed4f79e4ef480c59c27dad137c
--- /dev/null
+++ b/books/napkin/dets.tex
@@ -0,0 +1,734 @@
+\chapter{Determinant}
+The goal of this chapter is to give the basis-free
+definition of the determinant:
+that is, we're going to define $\det T$
+for $T \colon V \to V$ without making reference to the encoding for $T$.
+This will make it obvious that the determinant of a matrix
+does not depend on the choice of basis,
+and that several properties are vacuously true
+(e.g.\ that the determinant is multiplicative).
+
+The determinant is only defined for finite-dimensional
+vector spaces, so if you want you can restrict
+your attention to finite-dimensional vector spaces for this chapter.
+On the other hand we do not need the
+ground field to be algebraically closed.
+
+\section{Wedge product}
+\prototype{$\Lambda^2(\RR^2)$ gives parallelograms.}
+We're now going to define something called the wedge product.
+It will look at first like the tensor product $V \otimes V$,
+but we'll have one extra relation.
+
+For simplicity, I'll first define the wedge product $\Lambda^2(V)$.
+But we will later replace $2$ with any $n$.
+
+\begin{definition}
+ Let $V$ be a $k$-vector space.
+ The $2$-wedge product $\Lambda^2(V)$ is the abelian group
+ generated by elements of the form $v \wedge w$ (where $v,w \in V$),
+ subject to the same relations as the tensor product
+ \begin{align*}
+ (v_1 + v_2) \wedge w &= v_1 \wedge w + v_2 \wedge w \\
+ v \wedge (w_1 + w_2) &= v \wedge w_1 + v \wedge w_2 \\
+ (c \cdot v) \wedge w &= v \wedge (c \cdot w)
+ \end{align*}
+ plus two additional relations:
+ \[ v \wedge v = 0 \quad\text{and}\quad
+ v \wedge w = - w \wedge v. \]
+ The $k$-vector space structure is given by
+ $c \cdot (v \wedge w) = (c \cdot v) \wedge w = v \wedge (c \cdot w)$.
+\end{definition}
+\begin{exercise}
+ Show that the condition $v \wedge w = - (w \wedge v)$
+ is actually extraneous:
+ you can derive it from the fact that $v \wedge v = 0$.
+ (Hint: expand $(v + w) \wedge (v + w) = 0$.)
+\end{exercise}
+
+This looks almost exactly the same as the definition for a tensor product,
+with two subtle differences.
+The first is that we only have $V$ now, rather than $V$ and $W$
+as with the tensor product.\footnote{So maybe the wedge product
+ might be more accurately called the ``wedge power''!}
+Secondly, there is a new \emph{mysterious} relation
+\[ v \wedge v = 0 \implies v \wedge w = - (w \wedge v). \]
+What's that doing there?
+It seems kind of weird.
+
+I'll give you a hint.
+\begin{example}
+ [Wedge product explicit computation]
+ Let $V = \RR^2$, and let $v = ae_1 + be_2$, $w = ce_1 + de_2$.
+ Now let's compute $v \wedge w$ in $\Lambda^2(V)$.
+ \begin{align*}
+ v \wedge w &= (ae_1 + be_2) \wedge (ce_1 + de_2) \\
+ &= ac (e_1 \wedge e_1) + bd (e_2 \wedge e_2)
+ + ad (e_1 \wedge e_2) + bc (e_2 \wedge e_1) \\
+ &= ad (e_1 \wedge e_2) + bc (e_2 \wedge e_1) \\
+ &= (ad-bc) (e_1 \wedge e_2).
+ \end{align*}
+\end{example}
+
+What is $ad-bc$? You might already recognize it:
+\begin{itemize}
+ \ii You might know that the area of the parallelogram
+ formed by $v$ and $w$ is $ad-bc$.
+ \ii You might recognize it as the determinant of
+ $ \begin{bmatrix} a & c \\ b & d \end{bmatrix}$.
+ In fact, you might even know that the determinant
+ is meant to interpret hypervolumes.
+\end{itemize}
+
+\begin{center}
+\begin{asy}
+ pair v = (4,1);
+ pair w = (2,3);
+ dot("$0$", origin, dir(225));
+ dot("$v = ae_1 + be_2$", v, dir(-45), red);
+ dot("$w = ce_1 + de_2$", w, dir(135), red);
+ dot("$v+w$", v+w, dir(45));
+ label("$ad-bc$", (v+w)/2, blue);
+ fill(origin--v--(v+w)--w--cycle, opacity(0.1)+lightcyan);
+ draw(origin--v, EndArrow, Margins);
+ draw(origin--w, EndArrow, Margins);
+ draw(v--(v+w), EndArrow, Margins);
+ draw(w--(v+w), EndArrow, Margins);
+\end{asy}
+\end{center}
+
+This is absolutely no coincidence.
+The wedge product is designed to interpret signed areas.
+That is, $v \wedge w$ is meant to interpret the area of the parallelogram
+formed by $v$ and $w$.
+You can see why the condition $(cv) \wedge w = v \wedge (cw)$ would make sense now.
+And now of course you know why $v \wedge v$ ought to be zero:
+it's an area zero parallelogram!
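+To make the area interpretation concrete,
+here is a quick numerical check (in Python, with names of my own choosing)
+that $v \wedge w = ad - bc$ agrees with the signed area of the parallelogram
+computed by the shoelace formula,
+using the vectors $v = (4,1)$ and $w = (2,3)$ from the figure above:

```python
def wedge2(v, w):
    """Coefficient of e1 ^ e2 in v ^ w, namely ad - bc."""
    return v[0] * w[1] - v[1] * w[0]

def shoelace(poly):
    """Signed area of a polygon given by its vertices in order."""
    n = len(poly)
    twice_area = sum(
        poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
        for i in range(n)
    )
    return twice_area / 2

v, w = (4, 1), (2, 3)
parallelogram = [(0, 0), v, (v[0] + w[0], v[1] + w[1]), w]
print(wedge2(v, w), shoelace(parallelogram))  # 10 10.0
```

+Swapping $v$ and $w$ flips the orientation of the parallelogram
+and negates both quantities, matching $v \wedge w = -(w \wedge v)$.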
+
+The \textbf{miracle of wedge products} is that the only additional condition
+we need to add to the tensor product axioms is that $v \wedge w = -(w \wedge v)$.
+Then suddenly, the wedge will do all our work of interpreting volumes for us.
+
+In analog to earlier:
+\begin{proposition}
+ [Basis of $\Lambda^2(V)$]
+ Let $V$ be a vector space
+ with basis $e_1$, \dots, $e_n$.
+ Then a basis of $\Lambda^2(V)$ is
+ \[ e_i \wedge e_j \]
+ where $i < j$.
+ Hence $\Lambda^2(V)$ has dimension $\binom n2$.
+\end{proposition}
+\begin{proof}
+ Surprisingly slippery, and also omitted.
+ (You can derive it from the corresponding theorem on tensor products.)
+\end{proof}
+
+Now I have the courage to define a multi-dimensional wedge product.
+It's just the same thing with more wedges.
+\begin{definition}
+ Let $V$ be a vector space and $m$ a positive integer.
+ The space $\Lambda^m(V)$ is generated by wedges of the form
+ \[ v_1 \wedge v_2 \wedge \dots \wedge v_m \]
+ subject to relations
+ \begin{align*}
+ \dots \wedge (v_1+v_2) \wedge \dots
+ &= (\dots \wedge v_1 \wedge \dots)
+ + (\dots \wedge v_2 \wedge \dots) \\
+ \dots \wedge (cv_1) \wedge v_2 \wedge \dots
+ &= \dots \wedge v_1 \wedge (cv_2) \wedge \dots \\
+ \dots \wedge v \wedge v \wedge \dots &= 0 \\
+ \dots \wedge v \wedge w \wedge \dots &=
+ - (\dots \wedge w \wedge v \wedge \dots)
+ \end{align*}
+ As a vector space
+ \[ c \cdot (v_1 \wedge v_2 \wedge \dots \wedge v_m)
+ = (cv_1) \wedge v_2 \wedge \dots \wedge v_m
+ = v_1 \wedge (cv_2) \wedge \dots \wedge v_m
+ = \dots .
+ \]
+\end{definition}
+This definition is pretty wordy, but in English the three conditions say
+\begin{itemize}
+ \ii We should be able to add products like before,
+ \ii You can put constants onto any of the $m$ components
+ (as is directly pointed out in the ``vector space'' action), and
+ \ii Switching any two \emph{adjacent} wedges negates the whole wedge.
+\end{itemize}
+So this is the natural generalization of $\Lambda^2(V)$.
+You can convince yourself that any element of the form
+\[ \dots \wedge v \wedge \dots \wedge v \wedge \dots \]
+should still be zero.
+
+Just like $e_1 \wedge e_2$ was a basis earlier, we can find the basis
+for general $m$ and $n$.
+\begin{proposition}[Basis of the wedge product]
+ Let $V$ be a vector space with basis $e_1, \dots, e_n$.
+ A basis for $\Lambda^m(V)$ consists of the elements
+ \[ e_{i_1} \wedge e_{i_2} \wedge \dots \wedge e_{i_m} \]
+ where
+ \[ 1 \le i_1 < i_2 < \dots < i_m \le n. \]
+ Hence $\Lambda^m(V)$ has dimension $\binom nm$.
+\end{proposition}
+\begin{proof}[Sketch of proof]
+ We knew earlier that $e_{i_1} \otimes \dots \otimes e_{i_m}$
+ was a basis for the tensor product.
+ Here we have the additional property that (a)
+ if two basis elements re-appear then the whole thing becomes zero,
+ thus we should assume the $i$'s are all distinct;
+ and (b) we can shuffle around elements,
+ and so we arbitrarily decide to put the basis elements
+ in increasing order.
+\end{proof}
+
+
+\section{The determinant}
+\prototype{$(ae_1+be_2)\wedge(ce_1+de_2) = (ad-bc)(e_1\wedge e_2)$.}
+Now we're ready to define the determinant.
+Suppose $T \colon V \to V$ is a linear map.
+We claim that the map $\Lambda^m(V) \to \Lambda^m(V)$ given on wedges by
+\[ v_1 \wedge v_2 \wedge \dots \wedge v_m
+ \mapsto T(v_1) \wedge T(v_2) \wedge \dots \wedge T(v_m) \]
+and extending linearly to all of $\Lambda^m(V)$ is a linear map.
+(You can check this yourself if you like.)
+We call that map $\Lambda^m(T)$.
+\begin{example}
+ [Example of $\Lambda^m(T)$]
+ In $V = \RR^4$ with standard basis $e_1$, $e_2$, $e_3$, $e_4$,
+ let $T(e_1) = e_2$, $T(e_2) = 2e_3$, $T(e_3) = e_3$ and $T(e_4) = 2e_2 + e_3$.
+ Then, for example, $\Lambda^2(T)$ sends
+ \begin{align*}
+ e_1 \wedge e_2 + e_3 \wedge e_4
+ &\mapsto T(e_1) \wedge T(e_2) + T(e_3) \wedge T(e_4) \\
+ &= e_2 \wedge 2e_3 + e_3 \wedge (2e_2 + e_3) \\
+ &= 2(e_2 \wedge e_3 + e_3 \wedge e_2) \\
+ &= 0.
+ \end{align*}
+\end{example}
+
+Now here's something interesting.
+Suppose $V$ has dimension $n$, and let $m=n$.
+Then $\Lambda^n(V)$ has dimension $\binom nn = 1$ --- it's a one-dimensional space!
+Hence $\Lambda^n(V) \cong k$.
+
+So $\Lambda^n(T)$ can be thought of as a linear map from $k$ to $k$.
+But we know that \emph{a linear map from $k$ to $k$ is just multiplication by a constant}.
+Hence $\Lambda^n(T)$ is multiplication by some constant.
+\begin{definition}
+ Let $T \colon V \to V$, where $V$ is an $n$-dimensional vector space.
+ Then $\Lambda^n(T)$ is multiplication by a constant $c$;
+ we define the \vocab{determinant} of $T$ as $c = \det T$.
+\end{definition}
+
+\begin{example}[The determinant of a $2 \times 2$ matrix]
+ Let $V = \RR^2$ again with basis $e_1$ and $e_2$.
+ Let
+ \[ T = \begin{bmatrix}
+ a & c \\ b & d
+ \end{bmatrix}.
+ \]
+ In other words, $T(e_1) = ae_1 + be_2$
+ and $T(e_2) = ce_1 + de_2$.
+
+ Now let's consider $\Lambda^2(V)$.
+ It has a basis $e_1 \wedge e_2$.
+ Now $\Lambda^2(T)$ sends it to
+ \[ e_1 \wedge e_2 \xmapsto{\Lambda^2(T)}
+ T(e_1) \wedge T(e_2) =
+ (ae_1 + be_2) \wedge (ce_1 + de_2)
+ = (ad-bc)(e_1 \wedge e_2).
+ \]
+ So $\Lambda^2(T) : \Lambda^2(V) \to \Lambda^2(V)$
+ is multiplication by $\det T = ad-bc$,
+ because it sent $e_1 \wedge e_2$ to
+ $(ad-bc)(e_1 \wedge e_2)$.
+\end{example}
+And that is the definition of a determinant.
+Once again, since we defined it in terms of $\Lambda^n(T)$,
+this definition is totally independent of the choice of basis.
+In other words, the determinant can be defined based on $T \colon V \to V$ alone
+without any reference to matrices.
+
+\begin{ques}
+ Why does $\Lambda^n(S \circ T) = \Lambda^n(S) \circ \Lambda^n(T)$?
+\end{ques}
+In this way, we also get \[ \det(S \circ T) = \det(S) \det(T) \] for free.
+
+More generally, if we replace $2$ by $n$
+and write out the result of expanding
+\[ \left( a_{11}e_1 + a_{21}e_2 + \dots + a_{n1} e_n \right) \wedge \dots \wedge
+ \left( a_{1n}e_1 + a_{2n}e_2 + \dots + a_{nn} e_n \right) \]
+then you will get the formula
+\[ \det(A) = \sum_{\sigma \in S_n} \opname{sgn}(\sigma)
+ a_{1, \sigma(1)} a_{2, \sigma(2)} \dots a_{n, \sigma(n)} \]
+called the \vocab{Leibniz formula} for determinants.
+American high school students will recognize it;
+this is (unfortunately) taught as the definition of the determinant,
+rather than a corollary of the better definition using wedge products.
+
+\begin{exercise}
+ Verify that expanding the wedge product
+ yields the Leibniz formula for $n=3$.
+\end{exercise}
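+The wedge definition can in fact be turned directly into (inefficient but faithful) code.
+The Python sketch below (the naming is my own) represents an element of
+$\Lambda^k(V)$ as a dictionary from sorted index tuples to coefficients,
+wedges the columns $T(e_1), \dots, T(e_n)$ together one at a time
+while tracking the sign incurred by sorting,
+and reads off $\det T$ as the coefficient of $e_1 \wedge \dots \wedge e_n$;
+a direct Leibniz-formula computation is included for comparison.

```python
from itertools import permutations

def wedge_det(A):
    """det(A), computed by wedging the columns T(e_1), ..., T(e_n) together.

    Elements of Lambda^k are dicts {sorted tuple of basis indices: coefficient};
    wedging in a repeated index gives 0, and sorting the indices costs a sign.
    """
    n = len(A)
    state = {(): 1}                       # the empty wedge: 1 in Lambda^0
    for col in range(n):
        new = {}
        for idx, coeff in state.items():
            for row in range(n):
                a = A[row][col]           # T(e_col) = sum_row A[row][col] e_row
                if a == 0 or row in idx:  # e ^ e = 0 kills repeated indices
                    continue
                s = tuple(sorted(idx + (row,)))
                # moving the new index into sorted position costs one sign
                # per element of idx that is larger than it
                sign = (-1) ** (len(idx) - s.index(row))
                new[s] = new.get(s, 0) + sign * a * coeff
        state = new
    return state.get(tuple(range(n)), 0)  # coefficient of e_1 ^ ... ^ e_n

def leibniz_det(A):
    """The Leibniz formula, for comparison."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        sgn = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sgn = -sgn
        prod = 1
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sgn * prod
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(wedge_det(A), leibniz_det(A))  # -3 -3
```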
+
+\section{Characteristic polynomials, and Cayley-Hamilton}
+Let's connect with the theory of eigenvalues.
+Take a map $T \colon V \to V$, where $V$ is $n$-dimensional
+over an algebraically closed field,
+and suppose its eigenvalues
+are $\lambda_1$, $\lambda_2$, \dots, $\lambda_n$ (with repetition).
+Then the \vocab{characteristic polynomial} is given by
+\[
+ p_T(X) = (X-\lambda_1)(X-\lambda_2) \dots (X-\lambda_n).
+\]
+Note that if we've written $T$ in Jordan form, that is,
+\[
+ T = \begin{bmatrix}
+ \lambda_1 & \ast & 0 & \dots & 0 \\
+ 0 & \lambda_2 & \ast & \dots & 0 \\
+ 0 & 0 & \lambda_3 & \dots & 0 \\
+ \vdots & \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & 0 & \dots & \lambda_n
+ \end{bmatrix}
+\]
+(here each $\ast$ is either $0$ or $1$),
+then we can hack together the definition
+\[
+ p_T(X) \defeq
+ \det \left( X \cdot \id_n - T \right)
+ = \det \begin{bmatrix}
+ X - \lambda_1 & \ast & 0 & \dots & 0 \\
+ 0 & X - \lambda_2 & \ast & \dots & 0 \\
+ 0 & 0 & X - \lambda_3 & \dots & 0 \\
+ \vdots & \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & 0 & \dots & X - \lambda_n
+ \end{bmatrix}.
+\]
+The latter definition is what you'll see in most
+linear algebra books because it lets you define the characteristic polynomial
+without mentioning the word ``eigenvalue''
+(i.e.\ entirely in terms of arrays of numbers).
+I'll admit it does have the merit that it means that given any matrix,
+it's easy to compute the characteristic polynomial and hence
+compute the eigenvalues;
+but I still think the definition should be done in terms of
+eigenvalues to begin with.
+For instance the determinant definition obscures the following theorem,
+which is actually a complete triviality.
+\begin{theorem}[Cayley-Hamilton]
+ Let $T \colon V \to V$ be a map of finite-dimensional
+ vector spaces over an algebraically closed field.
+ Then the map $p_T(T)$ is the zero map.
+\end{theorem}
+Here, by $p_T(T)$ we mean that if
+\[ p_T(X) = X^n + c_{n-1} X^{n-1} + \dots + c_0 \]
+then \[ p_T(T) = T^n + c_{n-1} T^{n-1} + \dots + c_1 T + c_0 I \]
+is the zero map,
+where $T^k$ denotes $T$ applied $k$ times.
+We saw this concept already when we proved
+that $T$ had at least one nonzero eigenvector.
+
+\begin{example}[Example of Cayley-Hamilton using determinant definition]
+ Suppose $T = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$.
+ Using the determinant definition of characteristic polynomial,
+ we find that $p_T(X) = (X-1)(X-4)-(-2)(-3) = X^2 - 5X - 2$.
+ Indeed, you can verify that
+ \[ T^2 - 5T - 2
+ = \begin{bmatrix}
+ 7 & 10 \\
+ 15 & 22
+ \end{bmatrix}
+ - 5 \cdot \begin{bmatrix}
+ 1 & 2 \\
+ 3 & 4
+ \end{bmatrix}
+ - 2 \cdot \begin{bmatrix}
+ 1 & 0 \\
+ 0 & 1
+ \end{bmatrix}
+ = \begin{bmatrix}
+ 0 & 0 \\
+ 0 & 0
+ \end{bmatrix}.
+ \]
+\end{example}
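+If you want to watch the matrix arithmetic above happen mechanically,
+here is a short Python check (plain nested lists, no libraries)
+that $T^2 - 5T - 2I$ really is the zero matrix:

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[1, 2], [3, 4]]
T2 = matmul(T, T)  # [[7, 10], [15, 22]]
I = [[1, 0], [0, 1]]
result = [[T2[i][j] - 5 * T[i][j] - 2 * I[i][j] for j in range(2)]
          for i in range(2)]
print(result)  # [[0, 0], [0, 0]]
```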
+If you define $p_T$ without the word eigenvalue,
+and adopt the evil view that matrices are arrays of numbers,
+then this looks like a complete miracle.
+(Indeed, just look at the terrible proofs on Wikipedia.)
+
+But if you use the abstract viewpoint of $T$ as a linear map,
+then the theorem is almost obvious:
+\begin{proof}[Proof of Cayley-Hamilton]
+ Suppose we write $V$ in Jordan normal form as
+ \[ V = J_1 \oplus \dots \oplus J_m \]
+ where $J_i$ has eigenvalue $\lambda_i$ and dimension $d_i$.
+ By definition,
+ \[ p_T(T) = (T - \lambda_1)^{d_1} (T - \lambda_2)^{d_2}
+ \dots (T - \lambda_m)^{d_m}. \]
+ By definition, $(T - \lambda_1)^{d_1}$ is the zero map on $J_1$.
+ So $p_T(T)$ is zero on $J_1$.
+ Similarly it's zero on each of the other $J_i$'s --- end of story.
+\end{proof}
+\begin{remark}
+ [Tensoring up]
+ The Cayley-Hamilton theorem holds without the hypothesis that
+ $k$ is algebraically closed:
+ because for example any real matrix can be regarded
+ as a matrix with complex coefficients
+ (a trick we've mentioned before).
+ I'll briefly hint at how you can use tensor products to formalize this idea.
+
+ Let's take the space $V = \RR^3$, with basis $e_1$, $e_2$, $e_3$.
+ Thus objects in $V$ are of the form $r_1 e_1 + r_2 e_2 + r_3 e_3$
+ where $r_1$, $r_2$, $r_3$ are real numbers.
+ We want to consider essentially the same vector space,
+ but with complex coefficients $z_i$ rather than real coefficients $r_i$.
+
+ So here's what we do: view $\CC$ as a $\RR$-vector space
+ (with basis $\{1,i\}$, say)
+ and consider the \vocab{complexification}
+ \[ V_\CC \defeq \CC \otimes_\RR V. \]
+ Then you can check that our elements are actually of the form
+ \[ z_1 \otimes e_1 + z_2 \otimes e_2 + z_3 \otimes e_3. \]
+ Here, the tensor product is over $\RR$,
+ so we have $z \otimes re_i = (zr) \otimes e_i$ for $r \in \RR$.
+ Then $V_{\CC}$ can be thought of as a three-dimensional vector space over $\CC$,
+ with basis $1 \otimes e_i$ for $i \in \{1,2,3\}$.
+ In this way, the tensor product lets us formalize the idea
+ that we ``fuse on'' complex coefficients.
+
+ If $T \colon V \to W$ is a map, then $T_\CC \colon V_\CC \to W_\CC$
+ is just the map $z \otimes v \mapsto z \otimes T(v)$.
+ You'll see this written sometimes as $T_\CC = \id \otimes T$.
+ One can then apply theorems to $T_\CC$
+ and try to deduce the corresponding results on $T$.
+\end{remark}
+
+\section\problemhead
+\begin{problem}[Column operations]
+ Show that for any real numbers $x_{ij}$ (here $1 \le i,j \le n$) we have
+ \[
+ \det
+ \begin{bmatrix}
+ x_{11} & x_{12} & \dots & x_{1n} \\
+ x_{21} & x_{22} & \dots & x_{2n} \\
+ \vdots & \vdots & \ddots & \vdots \\
+ x_{n1} & x_{n2} & \dots & x_{nn} \\
+ \end{bmatrix}
+ =
+ \det
+ \begin{bmatrix}
+ x_{11} + cx_{12} & x_{12} & \dots & x_{1n} \\
+ x_{21} + cx_{22} & x_{22} & \dots & x_{2n} \\
+ \vdots & \vdots & \ddots & \vdots \\
+ x_{n1} + cx_{n2} & x_{n2} & \dots & x_{nn} \\
+ \end{bmatrix}.
+ \]
+ \begin{hint}
+ The point is that
+ \[
+ (v_1+cv_2) \wedge v_2 \wedge \dots \wedge v_n
+ = v_1 \wedge v_2 \wedge \dots \wedge v_n
+ + c(v_2 \wedge v_2 \wedge \dots \wedge v_n)
+ \]
+ and the latter term is zero.
+ \end{hint}
+\end{problem}
+\begin{problem}
+ [Determinant is product of eigenvalues]
+ Let $V$ be an $n$-dimensional vector space
+ over an algebraically closed field $k$.
+ Let $T \colon V \to V$ be a linear map with
+ eigenvalues $\lambda_1$, $\lambda_2$, \dots, $\lambda_n$
+ (counted with algebraic multiplicity).
+ Show that $\det T = \lambda_1 \dots \lambda_n$.
+ \begin{hint}
+ You can either do this by writing $T$ in matrix form,
+ or you can use the wedge definition of $\det T$
+ with the basis given by Jordan form.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [Exponential matrix]
+ Let $X$ be an $n \times n$ matrix with complex coefficients.
+ We define the exponential map by
+ \[ \exp(X) = 1 + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \dots \]
+ (take it for granted that this converges to some $n \times n$ matrix).
+ Prove that
+ \[ \det(\exp(X)) = e^{\Tr X}. \]
+ \begin{hint}
+ This is actually immediate by taking any basis
+ in which $X$ is upper-triangular!
+ \end{hint}
+\end{problem}
+
+
+\begin{problem}
+ [Extension to \Cref{prob:equal_dimension}]
+ Let $T \colon V \to V$ be a map of finite-dimensional vector spaces.
+ Prove that $T$ is an isomorphism
+ if and only if $\det T \ne 0$.
+ \begin{hint}
+ You don't need eigenvalues (though they could work also).
+ In one direction, recall that (by \Cref{prob:equal_dimension})
+ we can replace ``isomorphism'' by ``injective''.
+ In the other, if $T$ is an isomorphism,
+ let $S$ be the inverse map and look at $\det(S \circ T)$.
+ \end{hint}
+ \begin{sol}
+ Recall that (by \Cref{prob:equal_dimension})
+ we can replace ``isomorphism'' by ``injective''.
+
+ If $T(v) = 0$ for any nonzero $v$,
+ then by taking a basis for which $e_1 = v$,
+ we find $\Lambda^n(T)$ will map $e_1 \wedge \dots$
+ to $0 \wedge T(e_2) \wedge \dots = 0$,
+ hence is the zero map, so $\det T = 0$.
+
+ Conversely, if $T$ is an isomorphism,
+ we let $S$ denote the inverse map.
+ Then $1 = \det(\id) = \det(S \circ T) = \det S \det T$,
+ so $\det T \ne 0$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Based on Sweden 2010]
+ \gim
+ A herd of $1000$ cows of nonzero weight is given.
+ Prove that we can remove one cow such that the
+ remaining $999$ cows cannot be split
+ into two halves of equal weights.
+ \begin{hint}
+ Consider a $1000 \times 1000$ matrix $M$
+ with entries $0$ on diagonal and $\pm 1$ off-diagonal.
+ Mod $2$.
+ \end{hint}
+ \begin{sol}
+ We proceed by contradiction.
+ Let $v$ be a vector of length $1000$
+ whose entries are the weights of the cows.
+ Assume the existence of a matrix $M$ such that $Mv = 0$,
+ with entries $0$ on diagonal and $\pm 1$ off-diagonal.
+ But $\det M \pmod 2$ is equal to the number of derangements
+ of $\{1, \dots, 1000\}$, which is odd.
+ Thus $\det M$ is odd and in particular not zero,
+ so $M$ is invertible.
+ Thus $Mv = 0 \implies v = 0$, contradiction.
+ \end{sol}
+\end{problem}
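A quick Python sketch of the parity fact driving the solution, scaled down from $1000$ to $4$ cows (our choice; any even size shows the same parity): reducing mod $2$, a matrix with $0$ on the diagonal and $\pm 1$ off it becomes all-ones-minus-identity, and its determinant mod $2$ counts derangements.

```python
from itertools import permutations

n = 4  # stand-in for 1000
M = [[0 if i == j else 1 for j in range(n)] for i in range(n)]  # entries mod 2

def det(A):
    # determinant via the permutation expansion (fine for tiny n)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sign * prod
    return total

d = det(M)
derangements = sum(1 for p in permutations(range(n))
                   if all(p[i] != i for i in range(n)))
```

Both `d` and the derangement count come out odd, as the solution asserts.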
+
+\begin{problem}
+ [Putnam 2015]
+ \yod
+ Define $S$ to be the set of real matrices
+ $\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)$
+ such that $a$, $b$, $c$, $d$ form
+ an arithmetic progression in that order.
+ Find all $M \in S$ such that for some integer $k > 1$, $M^k \in S$.
+ \begin{hint}
+ There is a family of solutions other than just $a=b=c=d$.
+
+ One can solve the problem using Cayley-Hamilton.
+ A more ``bare-hands'' approach is to
+ show the matrix is invertible (unless $a=b=c=d$)
+ and then diagonalize the matrix as
+ $
+ M =
+ \begin{bmatrix} s & -q \\ -r & p \end{bmatrix}
+ \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
+ \begin{bmatrix} p & q \\ r & s \end{bmatrix}
+ =
+ \begin{bmatrix}
+ ps\lambda_1 - qr\lambda_2 & qs(\lambda_1-\lambda_2) \\
+ pr(\lambda_2-\lambda_1) & ps\lambda_2 - qr\lambda_1
+ \end{bmatrix}
+ $.
+ \end{hint}
+ \begin{sol}
+ The answer is
+ \[ \begin{bmatrix} t&t\\t&t \end{bmatrix}
+ \quad\text{and}\quad
+ \begin{bmatrix} -3t&-t\\t&3t \end{bmatrix} \]
+ for $t \in \RR$.
+ These work by taking $k=3$.
+
+ Now to see these are the only ones, consider an arithmetic matrix
+ \[ M = \begin{bmatrix} a & a+e \\ a+2e & a+3e \end{bmatrix} \]
+ with $e \neq 0$.
+ Its characteristic polynomial is $t^2 - (2a+3e)t - 2e^2$,
+ with discriminant $(2a+3e)^2 + 8e^2 > 0$,
+ so it has two distinct real roots; moreover, since the product of
+ the roots is $-2e^2 < 0$, the roots are nonzero and of opposite signs.
+ Now we can diagonalize $M$ by writing
+ \[
+ M =
+ \begin{bmatrix} s & -q \\ -r & p \end{bmatrix}
+ \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
+ \begin{bmatrix} p & q \\ r & s \end{bmatrix}
+ =
+ \begin{bmatrix}
+ ps\lambda_1 - qr\lambda_2 & qs(\lambda_1-\lambda_2) \\
+ pr(\lambda_2-\lambda_1) & ps\lambda_2 - qr\lambda_1
+ \end{bmatrix}
+ \]
+ where $ps-qr=1$. Using the fact that the diagonal entries
+ and the off-diagonal entries have equal sums (both equal $2a+3e$), we obtain
+ \[ (ps-qr)(\lambda_1+\lambda_2) = (qs-pr)(\lambda_1-\lambda_2)
+ \implies qs-pr = \frac{\lambda_1+\lambda_2}{\lambda_1-\lambda_2}. \]
+ Now if $M^k \in S$ too then the same calculation gives
+ \[ qs-pr = \frac{\lambda_1^k+\lambda_2^k}{\lambda_1^k-\lambda_2^k}. \]
+ Let $x = \lambda_1/\lambda_2 < 0$ (negative since $\lambda_1 \lambda_2 = -2e^2 < 0$).
+ Comparing the two expressions for $qs-pr$ gives
+ \[ \frac{x+1}{x-1} = \frac{x^k+1}{x^k-1}
+ \implies \frac{2}{x-1} = \frac{2}{x^k-1}
+ \implies x = x^k. \]
+ Since $x < 0$, this forces $x = -1$ and $k$ odd; then $\lambda_1 = -\lambda_2$,
+ i.e.\ $2a+3e = 0$, which gives the curve of solutions that we claimed.
+
+ A slicker approach is by Cayley-Hamilton.
+ Assume that $e \neq 0$, so $M$ has two distinct real eigenvalues as above.
+ We have $M^k = cM + d\id$ for some constants $c$ and $d$
+ (since $M$ satisfies some quadratic polynomial).
+ Since $M \in S$ and $M^k \in S$, we obtain $d=0$,
+ as adding $d$ to the two diagonal entries of the arithmetic matrix $cM$
+ breaks the arithmetic progression unless $d = 0$.
+ Thus $M^k = cM$, so it follows the eigenvalues of $M$ are negatives of each other.
+ That means $\Tr M = 0$, and the rest is clear.
+ \end{sol}
+\end{problem}
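The claimed second family is easy to confirm numerically; here is a throwaway Python check at $t = 1$, i.e.\ $M = \left(\begin{smallmatrix} -3 & -1 \\ 1 & 3 \end{smallmatrix}\right)$, verifying that $M^3$ has entries in arithmetic progression.

```python
# Check the claimed family at t = 1: the cube of M = [[-3,-1],[1,3]] should lie in S.
def matmul2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0],
             A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0],
             A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

M = [[-3, -1], [1, 3]]
M3 = matmul2(matmul2(M, M), M)
a, b = M3[0]
c, d = M3[1]
diffs = (b - a, c - b, d - c)  # all equal <=> arithmetic progression
```

Indeed $M^2 = 8 \cdot \mathrm{id}$, so $M^3 = 8M$ is again arithmetic.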
+
+
+%\begin{problem}[USAMO 2008, edited]
+% At a certain mathematical conference,
+% every two mathematicians are either friends or strangers.
+% At mealtime, every participant eats in one of two large dining rooms.
+% Each mathematician insists upon eating in a room which contains an
+% even number of his or her friends.
+% Assuming that such a split exists, prove that the number of ways
+% that the mathematicians may be split between the two rooms is a power of two.
+% % http://www.artofproblemsolving.com/Forum/viewtopic.php?p=3338962#p3338962
+%\end{problem}
+
+
+
+\begin{problem}
+ \yod
+ Let $V$ be a finite-dimensional vector space over $k$
+ and let $T \colon V \to V$ be a linear map.
+ Show that
+ \[
+ \det(a \cdot \id_V - T) =
+ \sum_{n=0}^{\dim V} a^{\dim V-n} \cdot (-1)^n
+ \Tr\left( \Lambda^n(T) \right)
+ \]
+ where the trace is taken by viewing $\Lambda^n(T) : \Lambda^n(V) \to \Lambda^n(V)$.
+ \begin{hint}
+ Take bases, and do a fairly long calculation.
+ \end{hint}
+ \begin{sol}
+ \newcommand{\Fix}{\opname{Fix}}
+ \newcommand{\NoFix}{\opname{NoFix}}
+ Pick a basis $e_1, \dots, e_n$ of $V$.
+ Let $T$ have matrix $(x_{ij})$, and let $m = \dim V$.
+ Let $\delta_{ij}$ be the Kronecker delta.
+ Also, let $\Fix(\sigma)$ denote the fixed points of a permutation $\sigma$
+ and let $\NoFix(\sigma)$ denote the non-fixed points.
+
+ Expanding then gives
+ \begin{align*}
+ &\qquad \det (a \cdot \id - T) \\
+ &= \sum_{\sigma \in S_m} \left( \sign(\sigma)
+ \cdot \prod_{i=1}^m \left( a \cdot \delta_{i \sigma(i)} - x_{i \sigma(i)} \right)\right) \\
+ % -----------------------
+ &=
+ \sum_{\sigma \in S_m}
+ \left( \sign(\sigma)
+ \cdot \prod_{i \in \NoFix(\sigma)} -x_{i \sigma(i)}
+ \prod_{i \in \Fix(\sigma)} \left( a - x_{ii}
+ \right)\right) \\
+ % -----------------------
+ &=
+ \sum_{\sigma \in S_m}
+ \left( \sign(\sigma)
+ \cdot \left( \prod_{i \in \NoFix(\sigma)} -x_{i \sigma(i)} \right)
+ \left( \sum_{t=0}^{\left\lvert \Fix(\sigma) \right\rvert}
+ a^{\left\lvert \Fix(\sigma) \right\rvert - t} \cdot \sum_{i_1 < \dots < i_t \in \Fix(\sigma)}
+ \prod_{k=1}^t -x_{i_k i_k} \right)
+ \right) \\
+ % -----------------------
+ &=
+ \sum_{\sigma \in S_m}
+ \left( \sign(\sigma)
+ \left( \sum_{t=0}^{\left\lvert \Fix(\sigma) \right\rvert}
+ a^{m-t-\left\lvert \NoFix(\sigma) \right\rvert}
+ \sum_{\substack{X \subseteq \{1, \dots, m\} \\ \NoFix(\sigma) \subseteq X \\ X \text{ contains exactly $t$ fixed points}}} \prod_{i \in X} -x_{i \sigma(i)}
+ \right) \right) \\
+ % -----------------------
+ &=
+ \sum_{n=0}^m
+ a^{m-n}
+ \left(
+ \sum_{\sigma \in S_m}
+ \sign(\sigma)
+ \sum_{\substack{X \subseteq \{1, \dots, m\} \\ \NoFix(\sigma) \subseteq X \\ \left\lvert X \right\rvert = n} }
+ \prod_{i \in X} -x_{i \sigma(i)}
+ \right) \\
+ % -----------------------
+ &= \sum_{n=0}^m
+ a^{m-n} (-1)^n
+ \left(
+ \sum_{\substack{X \subseteq \{1, \dots, m\} \\ \left\lvert X \right\rvert = n} }
+ \sum_{\substack{\sigma \in S_m \\ \NoFix(\sigma) \subseteq X}}
+ \sign(\sigma) \prod_{i \in X} x_{i \sigma(i)}
+ \right).
+ \end{align*}
+
+ Hence it's the same to show that
+ \[
+ \sum_{\substack{X \subseteq \{1, \dots, m\} \\ \left\lvert X \right\rvert = n} }
+ \sum_{\substack{\sigma \in S_m \\ \NoFix(\sigma) \subseteq X}}
+ \sign(\sigma) \prod_{i \in X} x_{i \sigma(i)}
+ = \Tr_{\Lambda^n(V)} \left( \Lambda^n(T) \right)
+ \]
+ holds for every $n$.
+
+ We can expand the definition of trace as using basis elements as
+ \begin{align*}
+ \Tr\left( \Lambda^n(T) \right)
+ &= \sum_{1 \le i_1 < \dots < i_n \le m}
+ \left( \bigwedge_{k=1}^n e_{i_k} \right)^\vee
+ \left( \Lambda^n(T) \left( \bigwedge_{k=1}^n e_{i_k} \right) \right) \\
+ &= \sum_{1 \le i_1 < \dots < i_n \le m}
+ \left( \bigwedge_{k=1}^n e_{i_k} \right)^\vee
+ \left( \bigwedge_{k=1}^n T(e_{i_k}) \right) \\
+ &= \sum_{1 \le i_1 < \dots < i_n \le m}
+ \left( \bigwedge_{k=1}^n e_{i_k} \right)^\vee
+ \left( \bigwedge_{k=1}^n
+ \left( \sum_{j=1}^m x_{i_k j} e_j \right)
+ \right) \\
+ &= \sum_{1 \le i_1 < \dots < i_n \le m}
+ \sum_{\pi \in S_n} \sign(\pi) \prod_{k=1}^n x_{i_k i_{\pi(k)}} \\
+ &= \sum_{\substack{X \subseteq \{1,\dots,m\} \\ \left\lvert X \right\rvert = n}}
+ \sum_{\pi \in S_X} \sign(\pi) \prod_{t \in X} x_{t \pi(t)}
+ \end{align*}
+ Hence it remains to show that the permutations of $X$
+ are in bijection with the permutations $\sigma \in S_m$
+ satisfying $\NoFix(\sigma) \subseteq X$,
+ i.e.\ those fixing $\{1, \dots, m\} - X$ pointwise;
+ this is clear, and moreover the signs coincide.
+ \end{sol}
+\end{problem}
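For a concrete sanity check of the identity (separate from the proof), here is a Python sketch for an arbitrary $3 \times 3$ integer matrix of our choosing, using the standard fact that $\Tr(\Lambda^n T)$ equals the sum of the $n \times n$ principal minors of $T$.

```python
from itertools import combinations

T = [[2, 1, 0],
     [0, 3, 1],
     [1, 0, 1]]  # arbitrary integer test matrix, m = 3

def det(A):
    # Laplace expansion along the first row (exact integer arithmetic)
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

def trace_wedge(n):
    # Tr(Lambda^n T) = sum of n x n principal minors of T
    if n == 0:
        return 1
    return sum(det([[T[i][j] for j in idx] for i in idx])
               for idx in combinations(range(3), n))

def lhs(a):  # det(a * id - T)
    return det([[a * (i == j) - T[i][j] for j in range(3)] for i in range(3)])

def rhs(a):  # the claimed alternating sum
    return sum(a ** (3 - n) * (-1) ** n * trace_wedge(n) for n in range(4))
```

Evaluating both sides at several integer values of $a$ gives exact agreement.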
diff --git a/books/napkin/differentiate.tex b/books/napkin/differentiate.tex
new file mode 100644
index 0000000000000000000000000000000000000000..46f9746fb0a0fc7f10f4e8b45d6ff5051b070670
--- /dev/null
+++ b/books/napkin/differentiate.tex
@@ -0,0 +1,667 @@
+\chapter{Differentiation}
+\section{Definition}
+\prototype{$x^3$ has derivative $3x^2$.}
+
+I suspect most of you have seen this before, but:
+\begin{definition}
+ Let $U$ be an open subset\footnote{We
+ will almost always use $U = (a,b)$ or $U = \RR$,
+ and you will not lose much by restricting
+ the definition to those.}
+ of $\RR$ and let $f \colon U \to \RR$ be a function.
+ Let $p \in U$.
+ We say $f$ is \vocab{differentiable} at $p$
+ if the limit\footnote{Remember we are following the convention in
+ \Cref{abuse:limit}.
+ So we mean ``the limit of the function $h \mapsto \frac{f(p+h)-f(p)}{h}$
+ except the value at $h=0$ can be anything''.
+ And this is important because that fraction
+ does not have a definition at $h = 0$.
+ As promised, we pay this no attention.}
+ \[ \lim_{h \to 0} \frac{f(p+h) - f(p)}{h} \]
+ exists.
+ If so, we denote its value by $f'(p)$ and refer
+ to this as the \vocab{derivative} of $f$ at $p$.
+
+ The function $f$ is differentiable if it is differentiable
+ at every point.
+ In that case, we regard the derivative $f' \colon U \to \RR$
+ as a function in its own right.
+\end{definition}
+
+\begin{exercise}
+ Show that if $f$ is differentiable at $p$
+ then it is continuous at $p$ too.
+\end{exercise}
+
+Here is the picture.
+Suppose $f \colon \RR \to \RR$ is differentiable
+(hence continuous).
+We draw a graph of $f$ in the usual way and consider values of $h$.
+For any nonzero $h$, what we get is the slope of the \emph{secant}
+line joining $(p, f(p))$ to $(p+h, f(p+h))$.
+However, as $h$ gets close to zero,
+that secant line begins to approach a line
+which is tangent to the graph of the curve.
+A picture with $f$ a parabola is shown below,
+with the tangent in red, and the secant in dashed green.
+
+\begin{center}
+\begin{asy}
+ import graph;
+ size(8cm);
+ real f(real x) { return (x-2)*(x-2)/2 - 0.1; }
+ graph.xaxis("$x$");
+ graph.yaxis("$y$");
+ draw(graph(f,-1,5,operator ..), blue, Arrows);
+ pair P = (3, f(3));
+ dot("$(p, f(p))$", P, dir(-20), red);
+ draw((1.8,-0.8)--(4.2, 1.6), red);
+ label("Slope $f'(p)$", (4.2, 1.6), dir(-25), red);
+ pair Q = (4.3, f(4.3));
+ dot("$(p+h, f(p+h))$", Q, dir(-20), deepgreen);
+ draw(P--Q, dashed+deepgreen);
+ label("Slope $\frac{f(p+h)-f(p)}{h}$", P--Q, 1.5*dir(165), deepgreen);
+\end{asy}
+\end{center}
+
+So the picture in your head should be that
+\begin{moral}
+ $f'(p)$ looks like the slope of the tangent line at $(p, f(p))$.
+\end{moral}
+
+\begin{remark}
+ Note that the derivatives are defined
+ for functions on \emph{open} intervals.
+ This is important.
+ If $f \colon [a,b] \to \RR$ for example,
+ we could still define the derivative at each interior point,
+ but $f'(a)$ no longer makes sense
+ since $f$ is not given a value on any open neighborhood of $a$.
+\end{remark}
+
+Let's do one computation and get on with this.
+\begin{example}
+ [Derivative of $x^3$ is $3x^2$]
+ Let $f \colon \RR \to \RR$ by $f(x) = x^3$.
+ For any point $p$, and \emph{nonzero} $h$ we can compute
+ \begin{align*}
+ \frac{f(p+h) - f(p)}{h} &= \frac{(p+h)^3 - p^3}{h} \\
+ &= \frac{3p^2h + 3ph^2 + h^3}{h} \\
+ &= 3p^2 + 3ph + h^2.
+ \end{align*}
+ Thus,
+ \[ \lim_{h \to 0} \frac{f(p+h)-f(p)}{h}
+ = \lim_{h \to 0} (3p^2+3ph+h^2) = 3p^2. \]
+ Thus the slope at each point of $f$ is given by the formula $3p^2$.
+ It is customary to then write $f'(x) = 3x^2$
+ as the derivative of the entire function $f$.
+\end{example}
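The limit in this example is easy to watch numerically; here is a throwaway Python check (the point $p = 2$ and the step sizes are our own choices).

```python
# Difference quotients of f(x) = x^3 at p = 2 should approach f'(2) = 3 * 2^2 = 12.
f = lambda x: x ** 3
p = 2.0
quotients = [(f(p + h) - f(p)) / h for h in (0.1, 0.01, 0.001)]
# by the algebra in the example, each quotient equals 12 + 6h + h^2 exactly
```

As $h$ shrinks, the quotients close in on $12$.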
+\begin{abuse}
+ We will now be sloppy and write this as $(x^3)' = 3x^2$.
+ This is shorthand for the significantly more verbose
+ ``the real-valued function $x^3$ on domain so-and-so
+ has derivative $3p^2$ at every point $p$ in its domain''.
+
+ In general, a real-valued differentiable function
+ $f \colon U \to \RR$ naturally gives rise to derivative
+ $f'(p)$ at every point $p \in U$,
+ so it is customary to just give up on $p$ altogether
+ and treat $f'$ itself as a function $U \to \RR$,
+ even though this real number has a ``different interpretation'':
+ $f'(p)$ is meant to be interpreted as a slope (e.g.\ your hourly pay rate)
+ as opposed to a value (e.g.\ your total dollar worth at time $t$).
+ If $f$ is a function from real life, the units do not even match!
+
+ This convention is so deeply entrenched I cannot uproot it
+ without more confusion than it is worth.
+ But if you read the chapters on multivariable calculus
+ you will see how it comes back to bite us,
+ when I need to re-define the derivative to be a \emph{linear map},
+ rather than single real numbers.
+\end{abuse}
+
+\section{How to compute them}
+Same old, right?
+Sum rule, all that jazz.
+
+\begin{theorem}
+ [Your friendly high school calculus rules]
+ In what follows $f$ and $g$ are differentiable functions,
+ and $U$, $V$ are open subsets of $\RR$.
+ \begin{itemize}
+ \ii (Sum rule) If $f,g \colon U \to \RR$
+ then $(f+g)'(x) = f'(x) + g'(x)$.
+
+ \ii (Product rule) If $f,g \colon U \to \RR$
+ then $(f \cdot g)'(x) = f'(x) g(x) + f(x) g'(x)$.
+
+ \ii (Chain rule) If $f \colon U \to V$ and $g \colon V \to \RR$
+ then the derivative of the composed function
+ $g \circ f \colon U \to \RR$ is $g'(f(x)) \cdot f'(x)$.
+ \end{itemize}
+\end{theorem}
+\begin{proof}
+ \begin{itemize}
+ \ii Sum rule: trivial, do it yourself if you care.
+
+ \ii Product rule: for every nonzero $h$ and point $p \in U$
+ we may write
+ \[
+ \frac{f(p+h) g(p+h) - f(p) g(p)}{h}
+ = \frac{f(p+h) - f(p)}{h} \cdot g(p+h)
+ + \frac{g(p+h)-g(p)}{h} \cdot f(p)
+ \]
+ which as $h \to 0$ gives the desired expression.
+
+ \ii Chain rule: this is where \Cref{abuse:limit}
+ will actually bite us.
+ Let $p \in U$, $q = f(p) \in V$, so that
+ \[ (g \circ f)'(p) = \lim_{h \to 0} \frac{g(f(p+h)) - g(q)}{h}. \]
+ We would like to write the expression in the limit as
+ \[ \frac{g(f(p+h)) - g(q)}{h}
+ = \frac{g(f(p+h)) - g(q)}{f(p+h) - q}
+ \cdot \frac{f(p+h) - f(p)}{h}. \]
+ The problem is that the denominator $f(p+h)-f(p)$ might be zero.
+ So instead, we define the expression
+ \[
+ Q(y) = \begin{cases}
+ \frac{g(y) - g(q)}{y - q} & \text{if } y \ne q \\
+ g'(q) & \text{if } y = q
+ \end{cases}
+ \]
+ which is continuous since $g$ was differentiable at $q$.
+ Then, we \emph{do} have the equality
+ \[ \frac{g(f(p+h)) - g(q)}{h}
+ = Q\left( f(p+h) \right) \cdot \frac{f(p+h) - f(p)}{h} \]
+ because if $f(p+h) \ne q$ this is just the earlier factorization,
+ while if $f(p+h) = q$ with $h \ne 0$,
+ then both sides are equal to zero anyways.
+
+ Then, in the limit as $h \to 0$,
+ we have $\lim_{h \to 0} \frac{f(p+h)-f(p)}{h} = f'(p)$,
+ while $\lim_{h \to 0} Q(f(p+h)) = Q(q) = g'(q)$ by continuity.
+ This was the desired result. \qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{exercise}
+ Compute the derivative of the polynomial $f(x) = x^3 + 10x^2 + 2019$,
+ viewed as a function $f \colon \RR \to \RR$.
+\end{exercise}
+
+\begin{remark}
+ Quick linguistic point:
+ the theorems above all hold at each individual point.
+ For example the sum rule really should say that
+ if $f,g \colon U \to \RR$ are differentiable at the point $p$
+ then so is $f+g$ and the derivative equals $f'(p) + g'(p)$.
+ Thus if $f$ and $g$ are differentiable on all of $U$,
+ then it of course follows that $(f+g)' = f' + g'$.
+ So each of the above rules has a ``point-by-point'' form
+ which then implies the ``whole $U$'' form.
+
+ We only state the latter since that is what is used in practice.
+ However, in the rare situations where you have a function
+ differentiable only at certain points of $U$ rather
+ than the whole interval $U$, you can still use the point-by-point forms.
+\end{remark}
+
+We next list some derivatives of
+well-known functions,
+but as we do not give rigorous definitions
+of these functions, we do not prove these here.
+\begin{proposition}
+ [Derivatives of some well-known functions]
+ \listhack
+ \begin{itemize}
+ \ii The exponential function $\exp \colon \RR \to \RR$
+ defined by $\exp(x) = e^x$ is its own derivative.
+ \ii The trig functions $\sin$ and $\cos$
+ have $\sin' = \cos$, $\cos' = -\sin$.
+ \end{itemize}
+\end{proposition}
+
+\begin{example}
+ [A typical high-school calculus question]
+ This means that you can mechanically compute
+ the derivatives of any artificial function obtained by using the above,
+ which makes it a great source of busy work
+ in American high schools and universities.
+ For example, if
+ \[ f(x) = e^x + x \sin(x^2) \qquad f \colon \RR \to \RR \]
+ then one can compute $f'$ by:
+ \begin{align*}
+ f'(x) &= (e^x)' + (x \sin(x^2))' & \text{sum rule} \\
+ &= e^x + (x \sin(x^2))' & \text{above table} \\
+ &= e^x + (x)' \sin(x^2) + x (\sin(x^2))' & \text{product rule} \\
+ &= e^x + \sin(x^2) + x (\sin(x^2))' & (x)' = 1 \\
+ &= e^x + \sin(x^2) + x \cdot 2x \cdot \cos(x^2) & \text{chain rule}.
+ \end{align*}
+ Of course, this function $f$ is totally artificial and has no meaning,
+ which is why calculus is the topic of widespread scorn in the USA.
+ That said, it is worth appreciating that calculations like
+ this are possible: it would be better to write the pseudo-theorem
+ ``derivatives can actually be computed''.
+\end{example}
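If you distrust the bookkeeping, a symmetric difference quotient in Python agrees with the computed $f'$; the evaluation point $x = 1.3$ and the step size are arbitrary choices of ours.

```python
import math

# f(x) = e^x + x sin(x^2) and the derivative computed in the example
f = lambda x: math.exp(x) + x * math.sin(x * x)
fprime = lambda x: math.exp(x) + math.sin(x * x) + 2 * x * x * math.cos(x * x)

x, h = 1.3, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference estimate of f'(x)
```

The numeric estimate matches `fprime(1.3)` to many decimal places.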
+
+If we take for granted that $(e^x)' = e^x$,
+then we can derive two more useful functions
+to add to our library of functions we can differentiate.
+\begin{corollary}
+ [Power rule]
+ Let $r$ be a real number.
+ The function $\RR_{>0} \to \RR$ by $x \mapsto x^r$
+ has derivative $(x^r)' = rx^{r-1}$.
+\end{corollary}
+
+\begin{proof}
+ We knew this for integers $r$ already,
+ but now we can prove it for any positive real number $r$.
+ Write
+ \[ f(x) = x^r = e^{r \log x} \]
+ considered as a function $f \colon \RR_{>0} \to \RR$.
+ The chain rule (together with the fact that $(e^x)' = e^x$)
+ now gives
+ \begin{align*}
+ f'(x) &= e^{r \log x} \cdot (r \log x)' \\
+ &= e^{r \log x} \cdot \frac rx = x^r \cdot \frac rx = rx^{r-1}.
+ \end{align*}
+ The reason we don't prove the formulas for $e^x$ and $\log x$
+ is that we don't at the moment even have a rigorous
+ definition for either, or even for $2^x$ if $x$ is not rational.
+ However it's nice to know that some things imply the other.
+\end{proof}
+\begin{corollary}
+ [Derivative of $\log$ is $1/x$]
+ The function $\log \colon \RR_{>0} \to \RR$
+ has derivative $(\log x)' = 1/x$.
+\end{corollary}
+\begin{proof}
+ We have that $x = e^{\log x}$.
+ Differentiate both sides, and again use the chain rule\footnote{There
+ is actually a small subtlety here:
+ we are taking for granted that $\log$ is differentiable.}
+ \[ 1 = e^{\log x} \cdot (\log x)'. \]
+ Thus $(\log x)' = \frac{1}{e^{\log x}} = 1/x$.
+\end{proof}
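Both corollaries survive a quick numeric spot check; the point $x = 2$ and the step size below are arbitrary.

```python
import math

x, h = 2.0, 1e-6
# power rule at r = 1/2: the derivative of sqrt(x) should be 0.5 * x^(-0.5)
num_power = ((x + h) ** 0.5 - (x - h) ** 0.5) / (2 * h)
# derivative of log at x = 2 should be 1/2
num_log = (math.log(x + h) - math.log(x - h)) / (2 * h)
```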
+
+\section{Local (and global) maximums}
+\prototype{Horizontal tangent lines to the parabola
+ are typically good pictures.}
+You may remember from high school
+that one classical use of calculus
+was to extract the minimum or maximum values of functions.
+We will give a rigorous description of how to do this here.
+
+\begin{definition}
+ Let $f \colon U \to \RR$ be a function.
+ A \vocab{local maximum} is a point $p \in U$
+ such that there exists an open neighborhood $V$ of $p$
+ (contained inside $U$)
+ such that $f(p) \ge f(x)$ for every $x \in V$.
+
+ A \vocab{local minimum} is defined similarly.\footnote{Equivalently,
+ it is a local maximum of $-f$.}
+\end{definition}
+\begin{definition}
+ A point $p$ is a \vocab{local extremum}
+ if it is either a local minimum or a local maximum.
+\end{definition}
+
+The nice thing about derivatives is that they pick up all extrema.
+\begin{theorem}
+ [Fermat's theorem on stationary points]
+ Suppose $f \colon U \to \RR$ is differentiable
+ and $p \in U$ is a local extremum.
+ Then $f'(p) = 0$.
+\end{theorem}
+
+If you draw a picture, this result is not surprising.
+\begin{center}
+\begin{asy}
+ import graph;
+ size(7cm);
+ real f(real x) { return 2-(x-2)*(x-2)/3; }
+ graph.xaxis("$x$");
+ graph.yaxis("$y$");
+ draw(graph(f,-1,5,operator ..), blue, Arrows);
+ pair P = (2, f(2));
+ dot("$(p, f(p))$", P, dir(90), red);
+ draw( (0.3,f(2))--(3.7,f(2)), red );
+\end{asy}
+\end{center}
+(Note also: the converse is not true.
+Say, $f(x) = x^{2019}$ has $f'(0) = 0$
+ but $x=0$ is not a local extremum for $f$.)
+
+\begin{proof}
+ Assume for contradiction $f'(p) > 0$.
+ Choose any $\eps > 0$ with $\eps < f'(p)$.
+ Then for all sufficiently small $h \ne 0$ we should have
+ \[ \frac{f(p+h)-f(p)}{h} > \eps. \]
+ In particular $f(p+h) > f(p)$ for small $h > 0$
+ while $f(p+h) < f(p)$ for small $h < 0$.
+ So $p$ is not a local extremum.
+
+ The proof for $f'(p) < 0$ is similar.
+\end{proof}
+
+However, this is not actually adequate
+if we want a complete method for optimization.
+The issue is that we seek \emph{global} extrema,
+which may not even exist:
+for example $f(x) = x$ (which has $f'(x) = 1$)
+obviously has no local extrema at all.
+The key to resolving this is to use \emph{compactness}:
+we change the domain to be a compact set $Z$,
+for which we know that $f$ will achieve some global maximum.
+The set $Z$ will naturally have some \emph{interior} $S$,
+and calculus will give us all the extrema within $S$.
+ Then we manually check the remaining points:
+ those of $Z$ outside $S$, and any points of the domain outside $Z$.
+
+Let's see two extended examples.
+The one is simple, and you probably already know about it,
+but I want to show you how to use compactness to argue thoroughly,
+and how the ``boundary'' points naturally show up.
+
+\begin{example}
+ [Rectangle area optimization]
+ Suppose we consider rectangles with perimeter $20$
+ and want the rectangle with the smallest or largest area.
+ \begin{center}
+ \begin{asy}
+ size(4cm);
+ draw( (0,0)--(7,0)--(7,3)--(0,3)--cycle );
+ label("$10-x$", (3.5,0), dir(-90));
+ label("$x$", (0,1.5), dir(180));
+ \end{asy}
+ \end{center}
+ If we choose the side lengths of the rectangle to be $x$ and
+ $10-x$, then we are trying to optimize the function
+ \[ f(x) = x(10-x) = 10x-x^2 \qquad f \colon [0,10] \to \RR. \]
+ By compactness, there exists \emph{some} global maximum
+ and \emph{some} global minimum.
+
+ As $f$ is differentiable on $(0,10)$,
+ we find that any global extremum $p \in (0,10)$
+ is a local extremum too, and hence must satisfy
+ \[ 0 = f'(p) = 10 - 2p \implies p = 5. \]
+ Also, the points $x = 0$ and $x = 10$ lie in the domain
+ but not the interior $(0,10)$.
+ Therefore the global extrema (in addition to existing)
+ must be among the three suspects $\{0, 5, 10\}$.
+
+ We finally check $f(0) = 0$, $f(5) = 25$, $f(10) = 0$.
+ So the $5 \times 5$ square has the largest area
+ and the degenerate rectangles have the smallest (zero) area.
+\end{example}
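A brute-force grid search in Python corroborates the calculus here (the grid resolution is an arbitrary choice).

```python
# Sample f(x) = x(10 - x) on a fine grid of [0, 10]; the max should sit at x = 5.
f = lambda x: x * (10 - x)
xs = [i / 1000 for i in range(10001)]  # grid step 0.001
best = max(xs, key=f)
```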
+
+Here is a non-elementary example.
+\begin{proposition}[$e^x \ge 1+x$]
+ For all real numbers $x$ we have $e^x \ge 1+x$.
+\end{proposition}
+\begin{proof}
+ Define the differentiable function
+ \[ f(x) = e^x - (x+1) \qquad f \colon \RR \to \RR. \]
+ Consider the compact interval $Z = [-1,100]$.
+ If $x \le -1$ then obviously $f(x) > 0$.
+ Similarly if $x \ge 100$ then obviously $f(x) > 0$ too.
+ So we just want to prove that if $x \in Z$, we have $f(x) \ge 0$.
+
+ Indeed, there exists \emph{some} global minimum $p$.
+ It could be the endpoints $-1$ or $100$.
+ Otherwise, if it lies in $U = (-1, 100)$
+ then it would have to satisfy
+ \[ 0 = f'(p) = e^p - 1 \implies p = 0. \]
+ As $f(-1) > 0$, $f(100) > 0$, $f(0) = 0$,
+ we conclude $p = 0$ is the global minimum of $f$ on $Z$;
+ and hence $f(x) \ge 0$ for all $x \in Z$, hence for all $x$.
+\end{proof}
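A throwaway spot check of the inequality in Python (sample points our own choice); the gap $e^x - (1+x)$ should be nonnegative everywhere and zero exactly at $x = 0$.

```python
import math

samples = [-5.0, -1.0, -0.5, 0.0, 0.5, 1.0, 5.0, 50.0]
gaps = [math.exp(x) - (1 + x) for x in samples]  # each should be >= 0
```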
+
+\begin{remark}
+ If you are willing to use limits at $\pm \infty$,
+ you can rewrite proofs like the above in such a way
+ that you don't have to explicitly come
+ up with endpoints like $-1$ or $100$.
+ We won't do so here, but it's nice food for thought.
+\end{remark}
+
+\section{Rolle and friends}
+\prototype{The racetrack principle, perhaps?}
+
+One corollary of the work in the previous section is Rolle's theorem.
+\begin{theorem}
+ [Rolle's theorem]
+ Suppose $f \colon [a,b] \to \RR$ is a continuous function,
+ which is differentiable on the open interval $(a,b)$,
+ such that $f(a) = f(b)$.
+ Then there is a point $c \in (a,b)$ such that $f'(c) = 0$.
+\end{theorem}
+
+\begin{proof}
+ Assume $f$ is nonconstant (otherwise any $c$ works).
+ By compactness, there exists both a global maximum and minimum.
+ As $f(a) = f(b)$, either the global maximum
+ or the global minimum must lie inside the open interval $(a,b)$,
+ and then Fermat's theorem on stationary points finishes.
+\end{proof}
+
+I was going to draw a picture until I realized xkcd \#2042 has one already.
+\begin{center}
+ \includegraphics[scale=0.66]{media/xkcd-rolles.png}
+ \\ \scriptsize Image from \cite{img:xkcd_rolles}
+\end{center}
+
+One can adapt the theorem as follows.
+\begin{theorem}
+ [Mean value theorem]
+ Suppose $f \colon [a,b] \to \RR$ is a continuous function,
+ which is differentiable on the open interval $(a,b)$.
+ Then there is a point $c \in (a,b)$ such that
+ \[ f'(c) = \frac{f(b)-f(a)}{b-a}. \]
+\end{theorem}
+
+Pictorially, there is a $c$ such that the tangent at $c$
+has the same slope as the secant joining $(a, f(a))$, to $(b, f(b))$;
+and Rolle's theorem is the special case where that secant is horizontal.
+
+\begin{center}
+\begin{asy}
+ import graph;
+ size(7cm);
+ real f(real x) { return x*x/2 - 0.2; }
+ graph.xaxis("$x$");
+ graph.yaxis("$y$");
+ draw(graph(f,-2,2.5,operator ..), blue, Arrows);
+ pair A = (-1, f(-1));
+ pair B = (2, f(2));
+ dot("$(a, f(a))$", A, dir(A-B), deepgreen);
+ dot("$(b, f(b))$", B, dir(10), deepgreen);
+ draw(A--B, deepgreen);
+ label("Slope $\frac{f(b)-f(a)}{b-a}$", A--B, dir(120), deepgreen);
+ draw(A--(A.x,0), deepgreen+dashed);
+ draw(B--(B.x,0), deepgreen+dashed);
+ label("$a$", (A.x,0), dir(-90), deepgreen);
+ label("$b$", (B.x,0), dir(-90), deepgreen);
+
+ real c = (A.y-B.y) / (A.x-B.x);
+ pair C = (c, f(c));
+ dot("$(c, f(c))$", C, dir(-70), red);
+ draw( (c-1, f(c)-c)--(c+1, f(c)+c), red );
+\end{asy}
+\end{center}
+
+\begin{proof}
+ [Proof of mean value theorem]
+ Let $s = \frac{f(b)-f(a)}{b-a}$ be the slope of the secant line,
+ and define
+ \[ g(x) = f(x) - s x \]
+ which intuitively shears $f$ downwards so that the
+ secant becomes horizontal.
+ In fact $g(a) = g(b)$ now, so we apply Rolle's theorem to $g$.
+\end{proof}
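The shearing trick is easy to see concretely; here is a Python sketch with $f(x) = x^2/2$ on $[a,b] = [-1,2]$ (our example, matching the picture above), where $f'(x) = x$ so the mean value point is $c = s$ itself.

```python
# Shear f by the secant slope s; the sheared g then agrees at both endpoints,
# so Rolle's theorem applies to g.
f = lambda x: x * x / 2
a, b = -1.0, 2.0
s = (f(b) - f(a)) / (b - a)  # secant slope
g = lambda x: f(x) - s * x   # sheared function
c = s                        # solves f'(c) = c = s, and lies in (a, b)
```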
+
+\begin{remark}
+ [For people with driver's licenses]
+ There is a nice real-life interpretation of this I should mention.
+ A car is travelling along a one-dimensional road
+ (with $f(t)$ denoting the position at time $t$).
+ Suppose you cover $900$ kilometers in your car
+ over the course of $5$ hours
+ (say $f(0) = 0$, $f(5) = 900$).
+ Then there is \emph{some} point in time at which
+ your speed was exactly $180$ kilometers per hour,
+ and so you cannot really complain
+ when the cops pull you over for speeding.
+\end{remark}
+
+ The mean value theorem is important because it lets
+ you \textbf{use derivative information
+to get information about the function}
+in a way that is really not possible without it.
+Here is one quick application to illustrate my point:
+
+\begin{proposition}
+ [Racetrack principle]
+ Let $f, g \colon \RR \to \RR$ be two differentiable functions
+ with $f(0) = g(0)$.
+ \begin{enumerate}[(a)]
+ \ii If $f'(x) \ge g'(x)$ for every $x > 0$,
+ then $f(x) \ge g(x)$ for every $x > 0$.
+ \ii If $f'(x) > g'(x)$ for every $x > 0$,
+ then $f(x) > g(x)$ for every $x > 0$.
+ \end{enumerate}
+\end{proposition}
+
+This proposition might seem obvious.
+You can think of it as a race track for a reason:
+if $f$ and $g$ denote the positions of two cars (or horses etc)
+and the first car is always faster than the second car,
+then the first car should end up ahead of the second car.
+ As a special case with $g = 0$, this says that if $f'(x) \ge 0$,
+i.e.\ ``$f$ is increasing'',
+then, well, $f(x) \ge f(0)$ for $x > 0$, which had better be true.
+However, if you try to prove this by definition from derivatives,
+you will find that it is not easy!
+ However, it's almost a prototypical application of the mean value theorem.
+
+\begin{proof}
+ [Proof of racetrack principle]
+ We prove (a). Let $h = f-g$, so $h(0) = 0$.
+ Assume for contradiction $h(p) < 0$ for some $p > 0$.
+ Then the secant joining $(0, h(0))$ to $(p, h(p))$ has negative slope;
+ in other words, by the mean value theorem there is some $c$ with $0 < c < p$
+ such that
+ \[ f'(c) - g'(c) = h'(c) = \frac{h(p)-h(0)}{p} = \frac{h(p)}{p} < 0 \]
+ so $f'(c) < g'(c)$, contradiction.
+ Part (b) is the same.
+\end{proof}
+
+Sometimes you will be faced with two functions which you cannot
+easily decouple; the following form may be more useful in that case.
+\begin{theorem}
+ [Ratio mean value theorem]
+ Let $f, g \colon [a,b] \to \RR$ be two continuous functions
+ which are differentiable on $(a,b)$,
+ and such that $g(a) \neq g(b)$.
+ Then there exists $c \in (a,b)$ such that
+ \[ f'(c)(g(b)-g(a)) = g'(c)(f(b)-f(a)). \]
+\end{theorem}
+\begin{proof}
+ Use Rolle's theorem on the function
+ \[ h(x) = \left[ f(x)-f(a) \right] \left[ g(b)-g(a) \right]
+ - \left[ g(x)-g(a) \right] \left[ f(b)-f(a) \right]. \qedhere \]
+\end{proof}
+This is also called Cauchy's mean value theorem
+or the extended mean value theorem.
+
+\section{Smooth functions}
+\prototype{All the functions you're used to.}
+
+Let $f \colon U \to \RR$ be differentiable,
+thus giving us a function $f' \colon U \to \RR$.
+If our initial function was nice enough,
+then we can take the derivative again,
+giving a function $f'' \colon U \to \RR$, and so on.
+In general, after taking the derivative $n$ times,
+we denote the resulting function by $f^{(n)}$.
+By convention, $f^{(0)} = f$.
+\begin{definition}
+ A function $f \colon U \to \RR$ is \vocab{smooth}
+ if it is infinitely differentiable;
+ that is, the function $f^{(n)}$ exists for all $n$.
+\end{definition}
+\begin{ques}
+ Show that the absolute value function is not smooth.
+\end{ques}
+
+Most of the functions we encounter,
+such as polynomials, $e^x$, $\log$, $\sin$, $\cos$
+are smooth, and so are their compositions.
+ Here is a weird example which we'll explore more next time.
+\begin{example}
+ [A smooth function with all derivatives zero]
+ Consider the function
+ \[ f(x) = \begin{cases}
+ e^{-1/x} & x > 0 \\
+ 0 & x \le 0.
+ \end{cases}
+ \]
+ This function can be shown to be smooth,
+ with $f^{(n)}(0) = 0$.
+ So this function has every derivative at the origin
+ equal to zero, despite being nonconstant!
+\end{example}
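You can watch the flatness numerically; here is a Python sketch of the difference quotients at $0$ from the right (the step sizes are our own choices).

```python
import math

def f(x):
    # the flat function from the example
    return math.exp(-1 / x) if x > 0 else 0.0

# the quotients collapse extremely fast, consistent with f'(0) = 0
quotients = [(f(h) - f(0)) / h for h in (0.5, 0.1, 0.05)]
```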
+
+\section{\problemhead}
+\begin{problem}
+ [Quotient rule]
+ Let $f \colon (a,b) \to \RR$ and $g \colon (a,b) \to \RR_{>0}$
+ be differentiable functions.
+ Let $h = f/g$ be their quotient
+ (also a function $(a,b) \to \RR$).
+ Show that the derivative of $h$ is given by
+ \[ h'(x) = \frac{f'(x) g(x) - f(x) g'(x)}{g(x)^2}. \]
+\end{problem}
+
+\begin{problem}
+ For real numbers $x > 0$, how small can $x^x$ be?
+\end{problem}
+
+\begin{problem}
+ [RMM 2018]
+ \gim
+ Determine whether or not there exist
+ nonconstant polynomials $P(x)$ and $Q(x)$ with
+ real coefficients satisfying
+ \[ P(x)^{10} + P(x)^9 = Q(x)^{21} + Q(x)^{20}. \]
+\end{problem}
+
+\begin{problem}
+ \gim
+ Let $P(x)$ be a degree $n$ polynomial with real coefficients.
+ Prove that the equation $e^x = P(x)$ has at most $n+1$ real solutions in $x$.
+\end{problem}
+
+\begin{problem}
+ [Jensen's inequality]
+ Let $f \colon (a,b) \to \RR$ be a twice differentiable function
+ such that $f''(x) \ge 0$ for all $x$
+ (i.e.\ $f$ is \emph{convex}).
+ Prove that
+ \[ f\left( \frac{x+y}{2} \right)
+ \le \frac{f(x) + f(y)}{2} \]
+ for all real numbers $x$ and $y$ in the interval $(a,b)$.
+\end{problem}
+
+\begin{problem}
+ [L'H\^{o}pital's rule, or at least one case]
+ Let $f,g \colon \RR \to \RR$ be differentiable functions
+ and let $p$ be a real number.
+ Suppose that
+ \[ \lim_{x \to p} f(x) = \lim_{x \to p} g(x) = 0. \]
+ Prove that
+ \[ \lim_{x \to p} \frac{f(x)}{g(x)}
+ = \lim_{x \to p} \frac{f'(x)}{g'(x)} \]
+ provided the right-hand limit exists.
+\end{problem}
diff --git a/books/napkin/digraph.tex b/books/napkin/digraph.tex
new file mode 100644
index 0000000000000000000000000000000000000000..822ffbec275aa2cd1b26ebc538ce463f4b6da82a
--- /dev/null
+++ b/books/napkin/digraph.tex
@@ -0,0 +1,74 @@
+%\chapter*{Graph of Chapter Dependencies}
+%\addcontentsline{toc}{chapter}{Graph of Chapter Dependencies}
+
+\bgroup
+\renewcommand{\href}[1]{} % temp disable links
+\hypersetup{linkcolor=black} % temporarily make internal refs black, so part labels are not all maroon
+\renewcommand{\solidwidth}{0.7pt}
+\renewcommand{\boldwidth}{1.5pt}
+
+\setcounter{diagheight}{50}
+\begin{chart}
+\reqhalfcourse 20,45:{Ch 1,3-5}{\hyperref[part:absalg]{Abs Alg}}{}
+\reqhalfcourse 55,45:{Ch 2,6-8}{\hyperref[part:basictop]{Topology}}{}
+\halfcourse 33,45:{Ch 9-15,18}{\hyperref[part:linalg]{Lin Alg}}{}
+\halfcourse 5,35:{Ch 16}{\hyperref[part:groups]{Grp Act}}{}
+\halfcourse 5,24:{Ch 17}{\hyperref[chapter:sylow]{Grp Classif}}{}
+\halfcourse 30,35:{Ch 19-22}{\hyperref[part:repth]{Rep Th}}{}
+\halfcourse 45,43:{Ch 23-25}{\hyperref[part:quantum]{Quantum}}{}
+\halfcourse 64,38:{Ch 26-30}{\hyperref[part:calc]{Calc}}{}
+\halfcourse 64,30:{Ch 31-33}{\hyperref[part:cmplxana]{Cmplx Ana}}{}
+\halfcourse 55,20:{Ch 34-41}{\hyperref[part:measure]{Measure/Pr}}{}
+\halfcourse 48,28:{Ch 42-45}{\hyperref[part:diffgeo]{Diff Geo}}{}
+\halfcourse 40,10:{Ch 46-51}{\hyperref[part:algnt1]{Alg NT 1}}{}
+\halfcourse 40,0:{Ch 52-56}{\hyperref[part:algnt2]{Alg NT 2}}{}
+\halfcourse 23,10:{Ch 57-59}{\hyperref[part:algtop1]{Alg Top 1}}{}
+\halfcourse 20,28:{Ch 60-63}{\hyperref[part:cats]{Cat Th}}{}
+\halfcourse 23,0:{Ch 64-69}{\hyperref[part:algtop2]{Alg Top 2}}{}
+\halfcourse 6,10:{Ch 70-74}{\hyperref[part:ag1]{Alg Geo 1}}{}
+\halfcourse 6,0:{Ch 75-80}{\hyperref[part:ag2]{Alg Geo 2-3}}{}
+\reqhalfcourse 5,45:{Ch 81-87}{\hyperref[part:st1]{Set Theory}}{}
+
+%% Basic alg ch
+\prereqc 20,45,30,35;0: % Abs Alg -> Rep Th
+\coreqc 20,45,33,45;0: % Abs Alg -> Lin Alg
+\prereqc 33,45,30,35;0: % Lin Alg -> Rep Th
+\prereqc 33,45,45,43;0: % Lin Alg -> Quantum
+%% Category theory
+\prereqc 20,45,20,28;0: % Grp -> Cat Th
+\coreqc 20,45,20,28;0: % Abs Alg -> Cat Th
+\coreqc 33,45,20,28;80: % Lin Alg -> Cat Th
+\coreqc 55,45,20,28;-40: % Top -> Cat Th
+\coreqc 20,28,30,35;0: % Cat Th -> Rep Th
+%% Group theory chain
+\prereqc 20,45,5,35;0: % Grp -> Grp Act
+\prereqc 5,35,5,24;0: % Grp Act -> Grp Class
+%% Analysis chain
+\prereqc 33,45,48,28;30: % Lin Alg -> Diff Geo
+\coreqc 64,38,48,28;50: % Calc -> Diff Geo
+\prereqc 55,45,48,28;10: % Top -> Diff Geo
+\prereqc 55,45,64,38;0: % Top -> Calc
+\coreqc 64,38,64,30;0: % Calc -> Cmplx Ana
+\prereqc 55,45,64,30;-10: % Top -> Cmplx Ana
+\prereqc 55,45,55,20;0: % Top -> Meas/Pr
+\coreqc 64,38,55,20;50: % Calc -> Meas/Pr
+%% Alg Geom
+\prereqc 6,10,6,0;0: % AG1 -> AG2
+\prereqc 20,45,6,10;0: % Abs Alg -> AG1
+\prereqc 55,45,6,10;-40: % Top -> AG1
+\coreqc 20,28,6,10;0: % Cat Th -> AG1
+\prereqc 20,28,6,0;-18: % Cat -> AG2
+%% Alg Top
+\prereqc 23,10,23,0;0: % AT1 -> AT2
+\prereqc 55,45,23,10;0: % Top -> AT1
+\prereqc 5,35,23,10;20: % Grp Act -> AT1
+\coreqc 23,10,20,28;20: % AT1 -> Cat
+\prereqc 20,28,23,0;-190: % Cat -> AT2
+%% Alg NT
+\prereqc 40,10,40,0;0: % ANT1 -> ANT2
+\prereqc 20,45,40,10;-40: % Abs Alg -> ANT1
+\prereqc 33,45,40,10;50: % Lin Alg -> ANT1
+\prereqc 5,35,40,0;10: % Grp Act -> ANT2
+\end{chart}
+\egroup
diff --git a/books/napkin/discriminant.tex b/books/napkin/discriminant.tex
new file mode 100644
index 0000000000000000000000000000000000000000..34922beb4cfbcb770cd3024dffdcf21f81c64418
--- /dev/null
+++ b/books/napkin/discriminant.tex
@@ -0,0 +1,84 @@
+\chapter{More properties of the discriminant}
+I'll remind you that the discriminant of a number field $K$ is given by
+\[
+\Delta_K \defeq \det
+\begin{bmatrix}
+ \sigma_1(\alpha_1) & \dots & \sigma_n(\alpha_1) \\
+ \vdots & \ddots & \vdots \\
+ \sigma_1(\alpha_n) & \dots & \sigma_n(\alpha_n) \\
+\end{bmatrix}^2
+\]
+where $\alpha_1$, \dots, $\alpha_n$ is a $\ZZ$-basis for $\OO_K$,
+and the $\sigma_i$ are the $n$ embeddings of $K$ into $\CC$.
+
+Several examples, properties, and equivalent definitions follow.
+
+\section{\problemhead}
+\begin{sproblem}[Discriminant of cyclotomic field]
+ \label{prob:discrim_cyclotomic_field}
+ Let $p$ be an odd rational prime and $\zeta_p$ a primitive $p$th root of unity.
+ Let $K = \QQ(\zeta_p)$.
+ Show that \[ \Delta_K = (-1)^{\frac{p-1}{2}} p^{p-2}. \]
+ \begin{hint}
+ Direct linear algebra computation.
+ \end{hint}
+\end{sproblem}
+
+\begin{sproblem}[Trace representation of $\Delta_K$]
+ \gim
+ Let $\alpha_1$, \dots, $\alpha_n$ be a $\ZZ$-basis for $\OO_K$.
+ Prove that
+ \[ \Delta_K
+ =
+ \det
+ \begin{bmatrix}
+ \TrK(\alpha_1^2) & \TrK(\alpha_1\alpha_2) & \dots & \TrK(\alpha_1\alpha_n) \\
+ \TrK(\alpha_2\alpha_1) & \TrK(\alpha_2^2) & \dots & \TrK(\alpha_2\alpha_n) \\
+ \qquad\vdots & \qquad\vdots & \ddots & \qquad\vdots \\
+ \TrK(\alpha_n\alpha_1) & \TrK(\alpha_n\alpha_2) & \dots & \TrK(\alpha_n\alpha_n) \\
+ \end{bmatrix}.
+ \]
+ In particular, $\Delta_K$ is an integer.
+ \label{prob:trace_discriminant}
+ \begin{hint}
+ Let $M$ be the ``embedding'' matrix.
+ Look at $M^\top M$, where $M^\top$ is the transpose matrix.
+ \end{hint}
+\end{sproblem}
+
+
+\begin{sproblem}[Root representation of $\Delta_K$]
+ The \vocab{discriminant} of a quadratic polynomial $Ax^2+Bx+C$ is defined as $B^2-4AC$.
+ More generally, the polynomial discriminant of a polynomial $f \in \ZZ[x]$ of degree $n$ is
+ \[ \Delta(f) \defeq c^{2n-2} \prod_{1 \le i < j \le n} \left( z_i - z_j \right)^2 \]
+ where $z_1, \dots, z_n$ are the roots of $f$, and $c$ is the leading coefficient of $f$.
+
+ Suppose $K$ is monogenic with $\OO_K = \ZZ[\theta]$.
+ Let $f$ denote the minimal polynomial of $\theta$ (hence monic).
+ Show that \[ \Delta_K = \Delta(f). \]
+ \label{prob:root_discriminant}
+ \begin{hint}
+ Vandermonde matrices.
+ \end{hint}
+\end{sproblem}
+
+
+\begin{problem}
+ Show that if $K \neq \QQ$ is a number field then $\left\lvert \Delta_K \right\rvert > 1$.
+ \begin{hint}
+ $M_K \ge 1$ must hold. Bash.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [Brill's theorem]
+ For a number field $K$ with signature $(r_1, r_2)$, show that
+ $\Delta_K > 0$ if and only if $r_2$ is even.
+\end{problem}
+
+\begin{problem}
+ [Stickelberger's theorem]
+ \kurumi
+ Let $K$ be a number field. Prove that \[ \Delta_K \equiv 0 \text{ or } 1 \pmod 4. \]
+ % P N
+\end{problem}
diff --git a/books/napkin/dual-trace.tex b/books/napkin/dual-trace.tex
new file mode 100644
index 0000000000000000000000000000000000000000..954fe60c4b7786c4307a00447dffb56a6ff5c069
--- /dev/null
+++ b/books/napkin/dual-trace.tex
@@ -0,0 +1,556 @@
+\chapter{Dual space and trace}
+You may have learned in high school that given a matrix
+\[
+ \begin{bmatrix}
+ a & c \\
+ b & d
+ \end{bmatrix}
+\]
+the trace is the sum along the main diagonal, $a+d$,
+and the determinant is $ad-bc$.
+But we know that a matrix is somehow
+just encoding a linear map using a choice of basis.
+Why would these random formulas somehow not
+depend on the choice of a basis?
+
+In this chapter, we are going to
+give an intrinsic definition of $\Tr T$,
+where $T \colon V \to V$ and $\dim V < \infty$.
+This will give a coordinate-free definition
+which will in particular imply the trace $a+d$
+doesn't change if we take a different basis.
+
+In doing so, we will introduce two new constructions:
+the \emph{tensor product} $V \otimes W$
+(which is a sort of product of two spaces,
+with dimension $\dim V \cdot \dim W$)
+and the \emph{dual space} $V^\vee$,
+which is the set of linear maps $V \to k$ (a $k$-vector space).
+Later on, when we upgrade from a vector space $V$
+to an inner product space,
+we will see that the dual space $V^\vee$ gives a nice
+interpretation of the ``transpose'' of a matrix.
+You'll already see some of that come through here.
+
+The trace is only defined for finite-dimensional
+vector spaces, so if you want you can restrict
+your attention to finite-dimensional vector spaces for this chapter.
+(On the other hand we do not need the
+ground field to be algebraically closed.)
+
+The next chapter will then do the same for the determinant.
+
+\section{Tensor product}
+\prototype{$\RR[x] \otimes \RR[y] = \RR[x,y]$.}
+We know that $\dim (V \oplus W) = \dim V + \dim W$,
+even though as sets $V \oplus W$ looks like $V \times W$.
+What if we wanted a real ``product'' of spaces,
+with multiplication of dimensions?
+
+For example, let's pull out
+my favorite example of a real vector space, namely
+\[ V = \left\{ ax^2 + bx + c \mid a,b,c \in \RR \right\}. \]
+Here's another space, a little smaller:
+\[ W = \left\{ dy + e \mid d,e \in \RR \right\}. \]
+If we take the direct sum, then we would get some rather unnatural
+vector space of dimension five
+(whose elements can be thought of as pairs $(ax^2+bx+c,dy+e)$).
+But suppose we want a vector space
+whose elements are \emph{products} of polynomials in $V$ and $W$;
+it would contain elements like $4x^2y + 5xy + y + 3$.
+In particular, the basis would be
+\[ \left\{ x^2y, x^2, xy, x, y, 1 \right\} \]
+and thus have dimension six.
+
+For this we resort to the \emph{tensor product}.
+It does exactly this, except that the ``multiplication''
+is done by a scary\footnote{%
+ Seriously, $\otimes$ looks \emph{terrifying} to non-mathematicians,
+ and even to many math undergraduates.}
+symbol $\otimes$:
+think of it as a ``wall'' that separates the elements
+between the two vector spaces.
+For example, the above example might be written as
+\[ 4x^2 \otimes y + 5x \otimes y + 1 \otimes y + 3 \otimes 1. \]
+(This should be read as $(4x^2 \otimes y) + (5x \otimes y) + \dots$;
+addition comes after $\otimes$.)
+Of course there should be no distinction
+between writing $4x^2 \otimes y$ and $x^2 \otimes 4y$
+or even $2x^2 \otimes 2y$.
+While we want to keep the $x$ and $y$ separate,
+the scalars should be free to float around.
+
+Of course, there's no need to do everything
+in terms of just the monomials.
+We are free to write
+\[ (x + 1) \otimes (y + 1). \]
+If you like, you can expand this as
+\[ x \otimes y + 1 \otimes y + x \otimes 1 + 1 \otimes 1. \]
+Same thing.
+The point is that we can take any two of our polynomials
+and artificially ``tensor'' them together.
+
+The definition of the tensor product does exactly this,
+and nothing else.\footnote{I'll only define this
+ for vector spaces for simplicity.
+ The definition for modules over a commutative ring $R$ is exactly the same.}
+\begin{definition}
+ Let $V$ and $W$ be vector spaces over the same field $k$.
+ The \vocab{tensor product} $V \otimes_k W$ is the abelian group
+ generated by elements of the form $v \otimes w$, subject to relations
+ \begin{align*}
+ (v_1 + v_2) \otimes w &= v_1 \otimes w + v_2 \otimes w \\
+ v \otimes (w_1 + w_2) &= v \otimes w_1 + v \otimes w_2 \\
+ (c \cdot v) \otimes w &= v \otimes (c \cdot w).
+ \end{align*}
+ As a vector space,
+ the scalar multiplication is given by
+ $c \cdot (v \otimes w) = (c \cdot v) \otimes w = v \otimes (c \cdot w)$.
+\end{definition}
+Here's another way to phrase the same idea.
+We define a \vocab{pure tensor} as an
+element of the form $v \otimes w$ for $v \in V$ and $w \in W$.
+But we let the $\otimes$ wall be ``permeable'' in the sense that
+\[ (c \cdot v) \otimes w = v \otimes (c \cdot w) = c \cdot (v \otimes w) \]
+and we let multiplication and addition distribute as we expect.
+Then $V \otimes W$ consists of sums of pure tensors.
+
+\begin{example}
+ [Infinite-dimensional example of tensor product: two-variable polynomials]
+ Although it's not relevant to this chapter,
+ this definition works equally well with infinite-dimensional
+ vector spaces.
+ The best example might be
+ \[ \RR[x] \otimes_\RR \RR[y] = \RR[x,y]. \]
+ That is, the tensor product of polynomials in $x$
+ with real polynomials in $y$
+ turns out to just be two-variable polynomials $\RR[x,y]$.
+\end{example}
+
+\begin{remark}
+ [Warning on sums of pure tensors]
+ Remember the elements of $V \otimes_k W$
+ really are \emph{sums} of these pure tensors!
+ If you liked the previous example,
+ this fact has a nice interpretation ---
+ not every polynomial in $\RR[x,y] = \RR[x] \otimes_\RR \RR[y]$ factors
+ as a polynomial in $x$ times a polynomial in $y$
+ (i.e.\ as pure tensors $f(x) \otimes g(y)$).
+ But they all can be written as sums of pure tensors $x^a \otimes y^b$.
+\end{remark}
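As a quick sanity check (a sketch along the same lines), here is why $1 + xy$ is a sum of pure tensors but not itself pure:

```latex
% Suppose 1 + xy = f(x) g(y) for some polynomials f and g.
% Setting y = 0 gives f(x) g(0) = 1, forcing f to be constant;
% setting x = 0 similarly forces g to be constant.
% Then f(x) g(y) would be constant, contradiction. So
\[
  1 + xy \;=\; 1 \otimes 1 + x \otimes y \;\in\; \RR[x] \otimes_\RR \RR[y]
\]
% is a sum of two pure tensors, but cannot be written as a
% single pure tensor f(x) \otimes g(y).
```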
+
+
+As the example we gave suggested,
+the basis of $V \otimes_k W$ is literally the ``product''
+of the bases of $V$ and $W$.
+In particular, this fulfills our desire that
+$\dim (V \otimes_k W) = \dim V \cdot \dim W$.
+\begin{proposition}[Basis of $V \otimes W$]
+ Let $V$ and $W$ be finite-dimensional $k$-vector spaces.
+ If $e_1, \dots, e_m$ is a basis of $V$ and $f_1, \dots, f_n$ is a basis of $W$,
+ then a basis of $V \otimes_k W$
+ is given by the $mn$ elements $e_i \otimes f_j$, where $i=1,\dots,m$ and $j=1,\dots,n$.
+\end{proposition}
+\begin{proof}
+ Omitted; it's easy at least to see that this basis is spanning.
+\end{proof}
+
+\begin{example}[Explicit computation]
+ Let $V$ have basis $e_1$, $e_2$ and $W$ have basis $f_1, f_2$.
+ Let $v = 3e_1 + 4e_2 \in V$ and $w = 5f_1 + 6f_2 \in W$.
+ Let's write $v \otimes w$ in this basis for $V \otimes_k W$:
+ \begin{align*}
+ v \otimes w &= (3e_1+4e_2) \otimes (5f_1+6f_2) \\
+ &= (3e_1) \otimes (5f_1) + (4e_2) \otimes (5f_1)
+ + (3e_1) \otimes (6f_2) + (4e_2) \otimes (6f_2) \\
+ &= 15 (e_1 \otimes f_1) + 20(e_2 \otimes f_1)
+ + 18 (e_1 \otimes f_2) + 24(e_2 \otimes f_2).
+ \end{align*}
+ So you can see why tensor products are a nice ``product'' to
+ consider if we're really interested in $V \times W$
+ in a way that's more intimate than just a direct sum.
+\end{example}
+
+\begin{abuse}
+ Moving forward, we'll almost always abbreviate $\otimes_k$ to just $\otimes$,
+ since $k$ is usually clear.
+\end{abuse}
+\begin{remark}
+ Observe that to define a linear map $V \otimes W \to X$,
+ I only have to say what happens to each pure tensor $v \otimes w$,
+ since the pure tensors \emph{generate} $V \otimes W$.
+ But again, keep in mind that
+ $V \otimes W$ consists of \emph{sums} of these pure tensors!
+ In other words, $V \otimes W$ is generated by pure tensors.
+\end{remark}
+
+\begin{remark}
+ Much like the Cartesian product $A \times B$ of sets,
+ you can tensor together any two vector spaces $V$ and $W$ over the same field $k$;
+ the relationship between $V$ and $W$ is completely irrelevant.
+ One can think of the $\otimes$ as a ``wall'' through which one can pass
+ scalars in $k$, but otherwise keeps the elements of $V$ and $W$ separated.
+ Thus, $\otimes$ is \textbf{content-agnostic}.
+
+ This also means that even if $V$ and $W$ have some relation to each other,
+ the tensor product doesn't remember this.
+ So for example $v \otimes 1 \neq 1 \otimes v$,
+ just like $(g,1_G) \neq (1_G,g)$ in the group $G \times G$.
+\end{remark}
+
+
+\section{Dual space}
+\prototype{Rotate a column matrix by $90$ degrees.}
+
+Consider the following vector space:
+\begin{example}
+ [Functions from $\RR^3 \to \RR$]
+ The set of real functions $f(x,y,z)$ is an
+ infinite-dimensional real vector space.
+ Indeed, we can add two functions to get $f+g$,
+ and we can scale a function to get $2f$.
+\end{example}
+This is a terrifyingly large vector space,
+but you can do some reasonable reductions.
+For example, you can restrict your attention to just
+the \emph{linear maps} from $\RR^3$ to $\RR$.
+
+That's exactly what we're about to do.
+This definition might seem strange at first, but bear with me.
+
+\begin{definition}
+ Let $V$ be a $k$-vector space.
+ Then $V^\vee$, the \vocab{dual space} of $V$, is defined
+ as the vector space whose elements are \emph{linear maps from $V$ to $k$}.
+\end{definition}
+The addition and multiplication are pointwise:
+it's the same notation we use when we write $cf+g$ for the function $x \mapsto c \cdot f(x) + g(x)$.
+The dual space itself is less easy to think about.
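To spell out the pointwise operations (a routine check): for $f, g \in V^\vee$ and $c \in k$, the combination $cf + g$ really is linear, since for any $a \in k$ and $v, w \in V$:

```latex
\begin{align*}
  (cf + g)(av + w)
  &= c \cdot f(av + w) + g(av + w) \\
  &= a \left( c \cdot f(v) + g(v) \right)
    + \left( c \cdot f(w) + g(w) \right) \\
  &= a \cdot (cf+g)(v) + (cf+g)(w).
\end{align*}
```

So $V^\vee$ is closed under these operations, and the vector space axioms follow in the same pointwise way.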
+
+Let's try to find a basis for $V^\vee$.
+First, here is a very concrete interpretation of the vector space.
+Suppose for example $V = \RR^3$.
+We can think of elements of $V$ as column matrices, like
+\[ v = \begin{bmatrix}
+ 2 \\ 5 \\ 9
+ \end{bmatrix}
+ \in V. \]
+Then a linear map $f \colon V \to k$ can be interpreted as a \emph{row matrix}:
+\[
+ f = \begin{bmatrix}
+ 3 & 4 & 5
+ \end{bmatrix}
+ \in V^\vee. \]
+Then
+\[
+ f(v) = \begin{bmatrix}
+ 3 & 4 & 5
+ \end{bmatrix}
+ \begin{bmatrix}
+ 2 \\ 5 \\ 9
+ \end{bmatrix}
+ = 71. \]
+
+More precisely: \textbf{to specify a linear map $V \to k$,
+I only have to tell you where each basis element of $V$ goes}.
+In the above example, $f$ sends $e_1$ to $3$, $e_2$ to $4$, and $e_3$ to $5$.
+So $f$ sends \[ 2e_1 + 5e_2 + 9e_3 \mapsto 2 \cdot 3 + 5 \cdot 4 + 9 \cdot 5 = 71. \]
+
+Let's make all this precise.
+\begin{proposition}[The dual basis for $V^\vee$]
+ Let $V$ be a finite-dimensional vector space with basis $e_1, \dots, e_n$.
+ For each $i$ consider the function $e_i^\vee \colon V \to k$
+ defined by
+ \[
+ e_i^\vee(e_j)
+ = \begin{cases}
+ 1 & i=j \\
+ 0 & i \neq j.
+ \end{cases}
+ \]
+ In more humane terms, $e_i^\vee(v)$
+ gives the coefficient of $e_i$ in $v$.
+
+ Then $e_1^\vee$, $e_2^\vee$, \dots, $e_n^\vee$ is a basis of $V^\vee$.
+\end{proposition}
+
+\begin{example}[Explicit example of element in $V^\vee$]
+ In this notation, $f = 3e_1^\vee + 4e_2^\vee + 5e_3^\vee$.
+ Do you see why the ``sum'' notation works as expected here?
+ Indeed
+ \begin{align*}
+ f(e_1) &= (3e_1^\vee + 4e_2^\vee + 5e_3^\vee)(e_1) \\
+ &= 3e_1^\vee(e_1) + 4e_2^\vee(e_1) + 5e_3^\vee(e_1) \\
+ &= 3 \cdot 1 + 4 \cdot 0 + 5 \cdot 0 = 3.
+ \end{align*}
+ That's exactly what we wanted.
+\end{example}
+
+You might be inclined to point out that $V \cong V^\vee$ at this point,
+with an isomorphism given by $e_i \mapsto e_i^\vee$.
+You might call it ``rotating the column matrix by $90\dg$''.
+
+This statement is technically true,
+but for a generic vector space $V$ without any extra information,
+you can just think of this as an artifact of the fact that $\dim V = \dim V^\vee$
+(as \emph{any} two vector spaces of equal dimension are isomorphic).
+Most importantly, the isomorphism given above depends
+on what basis you picked.
+
+\begin{remark*}
+ [Explicit example showing that the isomorphism $V \to V^\vee$
+ given above is unnatural]
+ Alice and Bob are looking at the same two-dimensional real vector space
+ \[ V = \left\{ (x,y,z) \mid x+y+z = 0 \right\}. \]
+ Also, let $v_{\text{example}} = (3,5,-8)$
+ be an example of an arbitrary element of $V$ for concreteness.
+
+ Suppose Alice chooses the following basis vectors for $V$.
+ \begin{align*}
+ e_1 &= (1,0,-1) \\
+ e_2 &= (0,1,-1).
+ \end{align*}
+ Alice uses this to construct an isomorphism $A \colon V \to V^\vee$
+ as described above, and considers $e_1^\vee = A(e_1)$.
+ The element $e_1^\vee \in V^\vee$ is a
+ function $e_1^\vee \colon V \to \RR$,
+ meaning Alice can plug any vector in $V$ into it.
+ As an example, for $v_{\text{example}}$,
+ \[ e_1^\vee(v_{\text{example}})
+ = e_1^\vee\left( (3,5,-8) \right)
+ = e_1^\vee\left( 3e_1 + 5e_2 \right) = 3. \]
+ Meanwhile, Bob chooses the different basis vectors
+ \begin{align*}
+ f_1 &= (1,0,-1) \\
+ f_2 &= (1,-1,0).
+ \end{align*}
+ This gives Bob an isomorphism $B \colon V \to V^\vee$,
+ and a corresponding $f_1^\vee = B(f_1)$.
+ Bob can also evaluate it anywhere, e.g.
+ \[ f_1^\vee\left( v_{\text{example}} \right)
+ = f_1^\vee\left( (3, 5, -8) \right)
+ = f_1^\vee\left( 8f_1 - 5f_2 \right) = 8. \]
+
+ It follows that $e_1^\vee = A\left( (1,0,-1) \right)$
+ and $f_1^\vee = B\left( (1,0,-1) \right)$
+ are different elements of $V^\vee$.
+ In other words Alice and Bob got different isomorphisms
+ because they picked different bases.
+\end{remark*}
+
+\section{$V^\vee \otimes W$ gives matrices from $V$ to $W$}
+Goal of this section:
+\begin{moral}
+ If $V$ and $W$ are finite-dimensional $k$-vector spaces
+ then $V^\vee \otimes W$ represents linear maps $V \to W$.
+\end{moral}
+
+Here's the intuition.
+If $V$ is three-dimensional and $W$ is five-dimensional, then we can think
+of the maps $V \to W$ as a $5 \times 3$ array of numbers.
+We want to think of these maps as a vector space
+(since one can add or scale matrices).
+So it had better be a vector space with dimension $15$,
+but just saying ``$k^{\oplus 15}$'' is not really that satisfying
+(what is the basis?).
+
+To do better, we consider the tensor product
+\[ V^\vee \otimes W \]
+which somehow is a product of maps out of $V$ and the target space $W$.
+We claim that this is in fact the space we want:
+i.e.\ \textbf{there is a natural bijection between elements of $V^\vee \otimes W$
+and linear maps from $V$ to $W$}.
+
+First, how do we interpret an element of $V^\vee \otimes W$ as a map $V \to W$?
+For concreteness, suppose $V$ has a basis $e_1$, $e_2$, $e_3$,
+and $W$ has a basis $f_1$, $f_2$, $f_3$, $f_4$, $f_5$.
+Consider an element of $V^\vee \otimes W$, say
+\[ e_1^\vee \otimes (f_2 + 2f_4) + 4e_2^\vee \otimes f_5. \]
+We want to interpret this element as a function $V \to W$:
+so given a $v \in V$,
+we want to output an element of $W$.
+There's really only one way to do this:
+feed in $v \in V$ into the $V^\vee$ guys on the left.
+That is, take the map
+\[ v \mapsto e_1^\vee(v) \cdot (f_2 + 2f_4) + 4e_2^\vee(v) \cdot f_5 \in W. \]
+So, there's a natural way to interpret any element
+$\xi_1 \otimes w_1 + \dots + \xi_m \otimes w_m \in V^\vee \otimes W$
+as a linear map $V \to W$.
+The claim is that in fact, every linear map $V \to W$ has
+such an interpretation.
+
+First, for notational convenience,
+\begin{definition}
+ Let $\Hom(V,W)$ denote the set of linear maps from $V$ to $W$
+ (which one can interpret as matrices which send $V$ to $W$),
+ viewed as a vector space over $k$.
+ (The ``$\Hom$'' stands for homomorphism.)
+\end{definition}
+\begin{ques}
+ Identify $\Hom(V,k)$ by name.
+\end{ques}
+
+We can now write down something that's more true generally.
+\begin{theorem}[$V^\vee \otimes W$ $\iff$ linear maps $V \to W$]
+ \label{thm:vect_hom_dualization}
+ Let $V$ and $W$ be finite-dimensional vector spaces.
+ We described a map
+ \[ \Psi \colon V^\vee \otimes W \to \Hom(V,W) \]
+ by sending $\xi_1 \otimes w_1 + \dots + \xi_m \otimes w_m$ to the linear map
+ \[ v \mapsto \xi_1(v) w_1 + \dots + \xi_m(v) w_m. \]
+ Then $\Psi$ is an isomorphism of vector spaces, i.e.\ every linear map $V \to W$
+ can be uniquely represented as an element of $V^\vee \otimes W$ in this way.
+\end{theorem}
+
+The above is perhaps a bit dense, so here is a concrete example.
+\begin{example}[Explicit example]
+ Let $V = \RR^2$ and take a basis $e_1$, $e_2$ of $V$.
+ Then define $T : V \to V$ by
+ \[ T = \begin{bmatrix}
+ 1 & 2 \\ 3 & 4
+ \end{bmatrix}. \]
+ Then we have
+ \[ \Psi(e_1^\vee \otimes e_1 + 2e_2^\vee \otimes e_1
+ + 3e_1^\vee \otimes e_2 + 4e_2^\vee \otimes e_2) = T. \]
+ The beauty is that the $\Psi$ definition is basis-free;
+ thus even if we change the basis,
+ although the above expression will look completely different,
+ the \emph{actual element} in $V^\vee \otimes V$ doesn't change.
+\end{example}
+
+Despite this, we'll indulge ourselves in using coordinates for the proof.
+\begin{proof}
+ [Proof of \Cref{thm:vect_hom_dualization}]
+ This looks intimidating, but it's actually not difficult.
+ We proceed in two steps:
+ \begin{enumerate}
+ \ii First, we check that $\Psi$ is \emph{surjective};
+ every linear map has at least one representation in $V^\vee \otimes W$.
+ To see this, take any $T : V \to W$.
+ Suppose for concreteness that $V$ has basis $e_1$, $e_2$, $e_3$ and that
+ $T(e_1) = w_1$, $T(e_2) = w_2$ and $T(e_3) = w_3$.
+ Then the element
+ \[ e_1^\vee \otimes w_1 + e_2^\vee \otimes w_2 + e_3^\vee \otimes w_3 \]
+ works, as it is contrived to agree with $T$ on the basis elements $e_i$.
+ \ii So it suffices to check now that $\dim V^\vee \otimes W = \dim \Hom(V,W)$.
+ Certainly, $V^\vee \otimes W$ has dimension $\dim V \cdot \dim W$.
+ But by viewing $\Hom(V,W)$ as $\dim W \times \dim V$ matrices, we see that
+ it too has dimension $\dim V \cdot \dim W$. \qedhere
+ \end{enumerate}
+\end{proof}
+So there is a \textbf{natural isomorphism} $V^\vee \otimes W \cong \Hom(V,W)$.
+While we did use a basis liberally in the
+\emph{proof that it works}, this doesn't change the
+fact that the isomorphism is ``God-given'',
+depending only on the spirit of $V$ and $W$ itself
+and not which basis we choose to express the vector spaces in.
+
+
+\section{The trace}
+We are now ready to give the definition of a trace.
+Recall that a square matrix $T$ can be thought of as a map $T \colon V \to V$.
+According to the above theorem,
+\[ \Hom(V, V) \cong V^\vee \otimes V \]
+so every map $V \to V$ can be thought of as an element of $V^\vee \otimes V$.
+But we can also define an
+\emph{evaluation map} $\opname{ev} : V^\vee \otimes V \to k$
+by ``collapsing'' each pure tensor: $f \otimes v \mapsto f(v)$.
+So this gives us a composed map
+\begin{center}
+\begin{tikzcd}
+ \Hom(V, V) \ar[r, "\cong"] & V^\vee \otimes V \ar[r, "\opname{ev}"] & k.
+\end{tikzcd}
+\end{center}
+The result of applying this composed map to $T$ is called the \vocab{trace} of the matrix $T$.
+
+\begin{example}[Example of a trace]
+ Continuing the previous example,
+ \[ \Tr T = e_1^\vee(e_1) + 2e_2^\vee(e_1)
+ + 3e_1^\vee(e_2) + 4e_2^\vee(e_2)
+ = 1 + 0 + 0 + 4 = 5. \]
+ And that is why the trace is the sum of the diagonal entries.
+\end{example}
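The same computation works in general (a short derivation, with the matrix entries $a_{ij}$ defined by $T(e_j) = \sum_i a_{ij} e_i$):

```latex
% Rewrite T as an element of V^\vee \otimes V, then collapse with ev:
\[
  T \;\longleftrightarrow\; \sum_j e_j^\vee \otimes T(e_j)
  = \sum_{i,j} a_{ij} \; e_j^\vee \otimes e_i
  \;\stackrel{\opname{ev}}{\longmapsto}\;
  \sum_{i,j} a_{ij} \, e_j^\vee(e_i)
  = \sum_i a_{ii}
\]
% using e_j^\vee(e_i) = 1 if i = j and 0 otherwise.
```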
+
+\section{\problemhead}
+
+\begin{problem}
+ [Trace is sum of eigenvalues]
+ Let $V$ be an $n$-dimensional vector space
+ over an algebraically closed field $k$.
+ Let $T \colon V \to V$ be a linear map with
+ eigenvalues $\lambda_1$, $\lambda_2$, \dots, $\lambda_n$
+ (counted with algebraic multiplicity).
+ Show that $\Tr T = \lambda_1 + \dots + \lambda_n$.
+ \begin{hint}
+ Follows by writing $T$ in an eigenbasis:
+ then the diagonal entries are the eigenvalues.
+ \end{hint}
+\end{problem}
+
+\begin{dproblem}
+ [Product of traces]
+ Let $T \colon V \to V$ and $S \colon W \to W$ be linear maps
+ of finite-dimensional vector spaces $V$ and $W$.
+ Define $T \otimes S \colon V \otimes W \to V \otimes W$
+ by $v \otimes w \mapsto T(v) \otimes S(w)$.
+ Prove that \[ \Tr(T \otimes S) = \Tr(T) \Tr(S). \]
+ \begin{hint}
+ Again one can just take a basis.
+ \end{hint}
+\end{dproblem}
+
+\begin{dproblem}
+ [Traces kind of commute]
+ \gim
+ Let $T \colon V \to W$ and $S \colon W \to V$ be linear maps
+ between finite-dimensional vector spaces $V$ and $W$.
+ Show that \[ \Tr(T \circ S) = \Tr(S \circ T). \]
+ \begin{hint}
+ One solution is to just take a basis.
+ Otherwise, interpret $T \otimes S \mapsto \Tr(T \circ S)$ as a
+ linear map $(V^\vee \otimes W) \otimes (W^\vee \otimes V) \to k$,
+ and verify that it is commutative.
+ \end{hint}
+ \begin{sol}
+ Although we could give a coordinate calculation,
+ we instead opt to give a cleaner proof.
+ This amounts to drawing the diagram
+ \begin{center}
+ \begin{tikzcd}[column sep=tiny]
+ & (W^\vee \otimes V) \otimes (V^\vee \otimes W) \ar[d]
+ \ar[r, equals] \ar[ld, "\text{compose}"']
+ & (V^\vee \otimes W) \otimes (W^\vee \otimes V) \ar[d]
+ \ar[rd, "\text{compose}"]
+ \\
+ \Hom(W, W) \ar[r, leftrightarrow] \ar[rd, "\Tr"']
+ & W^\vee \otimes W \ar[d, "\opname{ev}"]
+ & V^\vee \otimes V \ar[r, leftrightarrow] \ar[d, "\opname{ev}"]
+ & \Hom(V, V) \ar[ld, "\Tr"']
+ \\
+ & k \ar[r, equals] & k
+ \end{tikzcd}
+ \end{center}
+ It is easy to check that the center rectangle commutes,
+ by checking it on pure tensors $\xi_W \otimes v \otimes \xi_V \otimes w$.
+ So the outer hexagon commutes and we're done.
+ This is really the same as the proof with bases;
+ what it amounts to is checking the assertion is true for
+ matrices that have a $1$ somewhere and $0$ elsewhere,
+ then extending by linearity.
+ \end{sol}
+\end{dproblem}
+
+\begin{problem}
+ [Putnam 1988]
+ \gim
+ Let $V$ be an $n$-dimensional vector space.
+ Let $T : V \to V$ be a linear map and suppose there exist $n+1$ eigenvectors,
+ any $n$ of which are linearly independent.
+ Does it follow that $T$ is a scalar multiple of the identity?
+ \begin{hint}
+ Look at the trace of $T$.
+ \end{hint}
+ \begin{sol}
+ See \url{https://mks.mff.cuni.cz/kalva/putnam/psoln/psol886.html}.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/eigenvalues.tex b/books/napkin/eigenvalues.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0290dd11004f6bd6fdd9e068cf22a62a1fedb73f
--- /dev/null
+++ b/books/napkin/eigenvalues.tex
@@ -0,0 +1,685 @@
+\chapter{Eigen-things}
+This chapter will develop the theory of eigenvalues and eigenvectors,
+culminating in the so-called ``Jordan canonical form''.
+(Later on we will use it to
+define the characteristic polynomial.)
+
+\section{Why you should care}
+We know that a square matrix $T$ is really just
+a linear map from $V$ to $V$.
+What's the simplest type of linear map?
+It would just be multiplication by some scalar $\lambda$,
+which would have associated matrix (in any basis!)
+\[
+ T =
+ \begin{bmatrix}
+ \lambda & 0 & \dots & 0 \\
+ 0 & \lambda & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \dots & \lambda
+ \end{bmatrix}.
+\]
+That's perhaps \emph{too} simple, though.
+If we had a fixed basis $e_1, \dots, e_n$
+then another very ``simple'' operation
+would just be scaling each basis element $e_i$ by $\lambda_i$,
+i.e.\ a \vocab{diagonal matrix} of the form
+\[
+ T = \begin{bmatrix}
+ \lambda_1 & 0 & \dots & 0 \\
+ 0 & \lambda_2 & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \dots & \lambda_n
+ \end{bmatrix}.
+\]
+These maps are more general.
+Indeed, you can, for example, compute $T^{100}$ in a heartbeat:
+the map sends $e_i \mapsto \lambda_i^{100} e_i$.
+(Try doing that with an arbitrary $n \times n$ matrix.)
+
+Of course, most linear maps are probably not that nice.
+Or are they?
+\begin{example}
+ [Getting lucky]
+ Let $V$ be some two-dimensional vector space
+ with $e_1$ and $e_2$ as basis elements.
+ Let's consider a map $T \colon V \to V$
+ by $e_1 \mapsto 2e_1$ and $e_2 \mapsto e_1+3e_2$,
+ which you can even write concretely as
+ \[ T = \begin{bmatrix}
+ 2 & 1 \\
+ 0 & 3
+ \end{bmatrix} \quad\text{in basis $e_1$, $e_2$}. \]
+ This doesn't look anywhere near as nice until we realize we can rewrite it as
+ \begin{align*}
+ e_1 &\mapsto 2e_1 \\
+ e_1+e_2 &\mapsto 3(e_1+e_2).
+ \end{align*}
+ So suppose we change to the basis $e_1$ and $e_1 + e_2$.
+ Thus in the new basis,
+ \[ T = \begin{bmatrix}
+ 2 & 0 \\
+ 0 & 3
+ \end{bmatrix} \quad\text{in basis $e_1$, $e_1+e_2$}. \]
+ So our completely random-looking map,
+ under a suitable change of basis,
+ looks like the very nice maps we described before!
+\end{example}
+In this chapter, we will be \emph{making} our luck,
+and we will see that our better understanding of matrices
+gives us the right way to think about this.
+
+\section{Warning on assumptions}
+Most theorems in this chapter only work for
+\begin{itemize}
+ \ii finite-dimensional vector spaces $V$,
+ \ii over a field $k$ which is \emph{algebraically closed}.
+\end{itemize}
+On the other hand, the definitions work fine without
+these assumptions.
+
+\section{Eigenvectors and eigenvalues}
+Let $k$ be a field and $V$ a vector space over it.
+In the above example, we saw that there were two very nice
+vectors, $e_1$ and $e_1+e_2$, for which $V$ did something very simple.
+Naturally, these vectors have a name.
+\begin{definition}
+ Let $T \colon V \to V$ be a linear map and let $v \in V$ be a \emph{nonzero} vector.
+ We say that $v$ is an \vocab{eigenvector} if $T(v) = \lambda v$
+ for some $\lambda \in k$ (possibly zero, but remember $v \neq 0$).
+ The value $\lambda$ is called an \vocab{eigenvalue} of $T$.
+
+ We will sometimes abbreviate
+ ``$v$ is an eigenvector with eigenvalue $\lambda$''
+ to just ``$v$ is a $\lambda$-eigenvector''.
+\end{definition}
+Of course, there is no mention of a basis anywhere in this definition.
+
+\begin{example}
+ [An example of an eigenvector and eigenvalue]
+ Consider the example earlier with
+ $T = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$.
+ \begin{enumerate}[(a)]
+ \ii Note that $e_1$ is a $2$-eigenvector
+ and $e_1 + e_2$ is a $3$-eigenvector.
+ \ii Of course, $5e_1$ is also a $2$-eigenvector.
+ \ii Similarly, $7e_1 + 7e_2$ is also a $3$-eigenvector.
+ \end{enumerate}
+\end{example}
+So you can quickly see the following observation.
+\begin{ques}
+ Show that the set of $\lambda$-eigenvectors,
+ together with the zero vector, forms a subspace of $V$.
+\end{ques}
+\begin{definition}
+ For any $\lambda$, we define the $\lambda$-\vocab{eigenspace}
+ as the set of $\lambda$-eigenvectors together with $0$.
+\end{definition}
+This lets us state succinctly that
+``$2$ is an eigenvalue of $T$
+with one-dimensional eigenspace spanned by $e_1$''.
+
+Unfortunately, it's not exactly true that eigenvalues always exist.
+\begin{example}[Eigenvalues need not exist]
+ Let $V = \RR^2$ and let $T$ be the map
+ which rotates a vector by $90\dg$
+ around the origin.
+ Then $T(v)$ is not a multiple of $v$ for any $v \in V$,
+ other than the trivial $v=0$.
+\end{example}
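+One way to see this algebraically: rotating by $90\dg$ twice negates
+every vector, i.e.\ $T^2 = -\id$.
+So an eigenvector $v$ with $T(v) = \lambda v$ would give
+\[ -v = T^2(v) = \lambda^2 v \implies \lambda^2 = -1, \]
+which no real number $\lambda$ satisfies.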
+
+However, it is true if we replace $k$ with an
+algebraically closed field\footnote{A field is \vocab{algebraically closed}
+ if all its nonconstant polynomials have roots,
+ the archetypal example being $\CC$.}.
+\begin{theorem}[Eigenvalues always exist over algebraically closed fields]
+ Suppose $k$ is an \emph{algebraically closed} field.
+ Let $V$ be a finite dimensional $k$-vector space.
+ Then if $T \colon V \to V$ is a linear map,
+ there exists an eigenvalue $\lambda \in k$.
+\end{theorem}
+\begin{proof}
+ (From \cite{ref:axler})
+ The idea behind this proof is to consider ``polynomials'' in $T$.
+ For example, $2T^2-4T+5\id$ would be shorthand for the map
+ $v \mapsto 2T(T(v)) - 4T(v) + 5v$.
+ In this way we can consider ``polynomials'' $P(T)$;
+ this lets us tie in the ``algebraically closed'' condition.
+ These polynomials behave nicely:
+ \begin{ques}
+ Show that $P(T)+Q(T) = (P+Q)(T)$ and $P(T) \circ Q(T) = (P \cdot Q)(T)$.
+ \end{ques}
+
+ Let $n = \dim V < \infty$ and fix any nonzero vector $v \in V$,
+ and consider vectors $v$, $T(v)$, \dots, $T^n (v)$.
+ There are $n+1$ of them,
+ so they can't be linearly independent for dimension reasons;
+ thus there is a nonzero polynomial $P$ of degree at most $n$
+ such that $P(T)(v) = 0$.
+ WLOG suppose $P$ is a monic polynomial;
+ since $k$ is algebraically closed, we may factor
+ $P(z) = (z-r_1)\dots(z-r_m)$.
+ Then we get
+ \[ 0 = (T - r_1 \id) \circ (T - r_2 \id) \circ \dots
+ \circ (T - r_m \id)(v) \]
+ (where $\id$ is the identity map). This means at least one of the maps
+ $T - r_i \id$ is not injective (otherwise the whole composition would be
+ injective, and could not send $v \neq 0$ to zero), i.e.\ has a nontrivial kernel.
+ Any nonzero element of that kernel is an eigenvector with eigenvalue $r_i$.
+\end{proof}
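+To see the proof in action, take the $90\dg$ rotation from before,
+now viewed as a map $\CC^2 \to \CC^2$ with $T(e_1) = e_2$ and $T(e_2) = -e_1$.
+Starting from $v = e_1$ we get $T^2(v) = -v$, so we may take
+$P(z) = z^2 + 1 = (z-i)(z+i)$.
+Sure enough, both factors have nontrivial kernel:
+\[ T(e_1 - ie_2) = e_2 + ie_1 = i(e_1 - ie_2), \qquad
+ T(e_1 + ie_2) = e_2 - ie_1 = -i(e_1 + ie_2). \]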
+So in general we like to consider algebraically closed fields.
+This is not a big loss:
+any real matrix can be interpreted as a complex matrix
+whose entries just happen to be real, for example.
+
+\section{The Jordan form}
+So that you know exactly where I'm going,
+here's the main theorem.
+\begin{definition}
+ A \vocab{Jordan block} is an $n \times n$ matrix of the following shape:
+ \[
+ \begin{bmatrix}
+ \lambda & 1 & 0 & 0 & \dots & 0 & 0 \\
+ 0 & \lambda & 1 & 0 & \dots & 0 & 0 \\
+ 0 & 0 & \lambda & 1 & \dots & 0 & 0 \\
+ 0 & 0 & 0 & \lambda & \dots & 0 & 0 \\
+ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
+ 0 & 0 & 0 & 0 & \dots & \lambda & 1 \\
+ 0 & 0 & 0 & 0 & \dots & 0 & \lambda
+ \end{bmatrix}.
+ \]
+ In other words, it has $\lambda$ on the diagonal,
+ and $1$'s just above the diagonal.
+ We allow $n = 1$,
+ so $\begin{bmatrix} \lambda \end{bmatrix}$ is a Jordan block.
+\end{definition}
+
+\begin{theorem}
+ [Jordan canonical form]
+ Let $T \colon V \to V$ be a linear map
+ of finite-dimensional vector spaces
+ over an algebraically closed field $k$.
+ Then we can choose a basis of $V$
+ such that the matrix $T$ is ``block-diagonal''
+ with each block being a Jordan block.
+
+ Such a matrix is said to be in \vocab{Jordan form}.
+ This form is unique up to rearranging the order of the blocks.
+\end{theorem}
+As an example, this means the matrix should look something like:
+\[
+ \begin{bmatrix}
+ \lambda_1 & 1 \\
+ 0 & \lambda_1 \\
+ && \lambda_2 \\
+ &&& \lambda_3 & 1 & 0 \\
+ &&& 0 & \lambda_3 & 1 \\
+ &&& 0 & 0 & \lambda_3 \\
+ &&&&&& \ddots \\
+ &&&&&&& \lambda_m & 1 \\
+ &&&&&&& 0 & \lambda_m
+ \end{bmatrix}
+\]
+\begin{ques}
+ Check that diagonal matrices are the special case
+ when each block is $1 \times 1$.
+\end{ques}
+
+What does this mean?
+Basically, it means \emph{our dream is almost true}.
+What happens is that $V$ can get broken down as a direct sum
+\[ V = J_1 \oplus J_2 \oplus \dots \oplus J_m \]
+and $T$ acts on each of these subspaces independently.
+These subspaces correspond to the blocks in the matrix above.
+In the simplest case, $\dim J_i = 1$,
+so $J_i$ has a basis element $e$
+for which $T(e) = \lambda_i e$;
+in other words, we just have a simple eigenvalue.
+But on occasion, the situation is not quite so simple,
+and we have a block of size greater than $1$;
+this leads to $1$'s just above the diagonals.
+
+I'll explain later how to interpret the $1$'s,
+when I make up the word \emph{descending staircase}.
+For now, you should note that even if $\dim J_i \ge 2$,
+we still have a basis element
+which is an eigenvector with eigenvalue $\lambda_i$.
+
+\begin{example}
+ [A concrete example of Jordan form]
+ Let $T : k^6 \to k^6$ and suppose $T$ is given by the matrix
+ \[
+ T = \begin{bmatrix}
+ 5 & 0 & 0 & 0 & 0 & 0 \\
+ 0 & 2 & 1 & 0 & 0 & 0 \\
+ 0 & 0 & 2 & 0 & 0 & 0 \\
+ 0 & 0 & 0 & 7 & 0 & 0 \\
+ 0 & 0 & 0 & 0 & 3 & 0 \\
+ 0 & 0 & 0 & 0 & 0 & 3 \\
+ \end{bmatrix}.
+ \]
+ Reading the matrix, we can compute all the eigenvectors and eigenvalues:
+ for any constants $a, b \in k$ we have
+ \begin{align*}
+ T(a \cdot e_1) &= 5a \cdot e_1 \\
+ T(a \cdot e_2) &= 2a \cdot e_2 \\
+ T(a \cdot e_4) &= 7a \cdot e_4 \\
+ T(a \cdot e_5 + b \cdot e_6) &= 3\left[ a \cdot e_5 + b \cdot e_6 \right].
+ \end{align*}
+ The element $e_3$ on the other hand,
+ is not an eigenvector since $T(e_3) = e_2 + 2e_3$.
+\end{example}
+
+%Here's the idea behind the proof.
+%We would like to be able to break down $V$ directly into components as above,
+%but some of the $\lambda_i$'s from the different Jordan blocks could coincide,
+%and this turns out to give a technical difficulty for detecting them.
+%So instead, we're going to break down $V$ first into subspaces
+%based on the values of $\lambda$'s;
+%these are called the \emph{generalized eigenspaces}.
+%In other words, the generalized eigenspace of a given $\lambda$
+%is just all the Jordan blocks which have eigenvalue $\lambda$.
+%Only after that will we break the generalized eigenspace
+%into the individual Jordan blocks.
+%
+%The sections below record this proof in the other order:
+%the next part deals with breaking generalized
+%eigenspaces into Jordan blocks,
+%and the part after that is the one where we break down $V$ into
+%generalized eigenspaces.
+
+\section{Nilpotent maps}
+Bear with me for a moment. First, define:
+\begin{definition}
+ A map $T: V \to V$ is \vocab{nilpotent} if $T^m$ is the zero map for some positive integer $m$.
+ (Here $T^m$ means ``$T$ applied $m$ times''.)
+\end{definition}
+What's an example of a nilpotent map?
+\begin{example}
+ [The ``descending staircase'']
+ Let $V = k^{\oplus 3}$ have basis $e_1$, $e_2$, $e_3$.
+ Then the map $T$ which sends
+ \[ e_3 \mapsto e_2 \mapsto e_1 \mapsto 0 \]
+ is nilpotent, since $T(e_1) = T^2(e_2) = T^3(e_3) = 0$,
+ and hence $T^3(v) = 0$ for all $v \in V$.
+\end{example}
+The $3 \times 3$ descending staircase has matrix representation
+\[ T = \begin{bmatrix}
+ 0 & 1 & 0 \\
+ 0 & 0 & 1 \\
+ 0 & 0 & 0
+ \end{bmatrix}. \]
+You'll notice this is a Jordan block.
+\begin{exercise}
+ Show that the descending staircase above
+ has $0$ as its only eigenvalue.
+\end{exercise}
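+You can also watch the nilpotency happen at the level of matrices:
+\[
+ T^2 = \begin{bmatrix}
+ 0 & 0 & 1 \\
+ 0 & 0 & 0 \\
+ 0 & 0 & 0
+ \end{bmatrix},
+ \qquad
+ T^3 = \begin{bmatrix}
+ 0 & 0 & 0 \\
+ 0 & 0 & 0 \\
+ 0 & 0 & 0
+ \end{bmatrix}.
+\]
+Each successive power pushes the diagonal of $1$'s one step further
+toward the corner, until it falls off entirely.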
+
+That's a pretty nice example.
+As another example, we can have multiple such staircases.
+\begin{example}
+ [Double staircase]
+ Let $V = k^{\oplus 5}$ have basis $e_1$, $e_2$, $e_3$, $e_4$, $e_5$.
+ Then the map
+ \[ e_3 \mapsto e_2 \mapsto e_1 \mapsto 0 \text{ and }
+ e_5 \mapsto e_4 \mapsto 0 \]
+ is nilpotent.
+\end{example}
+Picture, with some zeros omitted for emphasis:
+\[ T = \begin{bmatrix}
+ 0 & 1 & 0 & & \\
+ 0 & 0 & 1 & & \\
+ 0 & 0 & 0 & & \\
+ & & & 0 & 1 \\
+ & & & 0 & 0 \\
+ \end{bmatrix}
+\]
+You can see this isn't really that different
+from the previous example;
+it's just the same idea repeated multiple times.
+And in fact we now claim that \emph{all}
+nilpotent maps have essentially that form.
+\begin{theorem}
+ [Nilpotent Jordan]
+ Let $V$ be a finite-dimensional vector space
+ over an algebraically closed field $k$.
+ Let $T \colon V \to V$ be a nilpotent map.
+ Then we can write $V = \bigoplus_{i=1}^m V_i$
+ where each $V_i$ has a basis of the form
+ $v_i$, $T(v_i)$, \dots, $T^{\dim V_i - 1}(v_i)$
+ for some $v_i \in V_i$.
+\end{theorem}
+Hence:
+\begin{moral}
+ Every nilpotent map can be viewed as independent staircases.
+\end{moral}
+Each chain $v_i$, $T(v_i)$, $T(T(v_i))$, \dots is just one staircase.
+The proof is given later, but first let me point out where this is going.
+
+Here's the punch line.
+Let's take the double staircase again.
+Expressing it as a matrix gives, say
+\[
+ S = \begin{bmatrix}
+ 0 & 1 & 0 & & \\
+ 0 & 0 & 1 & & \\
+ 0 & 0 & 0 & & \\
+ & & & 0 & 1 \\
+ & & & 0 & 0
+ \end{bmatrix}.
+\]
+Then we can compute
+\[
+ S + \lambda \id = \begin{bmatrix}
+ \lambda & 1 & 0 & & \\
+ 0 & \lambda & 1 & & \\
+ 0 & 0 & \lambda & & \\
+ & & & \lambda & 1 \\
+ & & & 0 & \lambda
+ \end{bmatrix}.
+\]
+It's a bunch of Jordan blocks, all with eigenvalue $\lambda$!
+This gives us a plan to proceed: we need to break $V$ into
+a bunch of subspaces such that on each subspace,
+$T - \lambda \id$ is nilpotent for some scalar $\lambda$
+(which may differ from subspace to subspace).
+Then Nilpotent Jordan will finish the job.
+
+\section{Reducing to the nilpotent case}
+\begin{definition}
+ Let $T \colon V \to V$. A subspace $W \subseteq V$
+ is called $T$-\vocab{invariant}
+ if $T(w) \in W$ for any $w \in W$.
+ In this way, $T$ can be thought of as a map $W \to W$.
+\end{definition}
+Thus the Jordan form amounts to a decomposition of $V$ into $T$-invariant subspaces.
+
+Now I'm going to be cheap, and define:
+\begin{definition}
+ A map $T \colon V \to V$ is called \vocab{indecomposable}
+ if it's impossible to write $V = W_1 \oplus W_2$
+ where both $W_1$ and $W_2$ are nontrivial $T$-invariant spaces.
+\end{definition}
+Picture of a \emph{decomposable} map:
+\[
+ \begin{bmatrix}
+ \multicolumn{2}{c|}{\multirow{2}{*}{$W_1$}} & 0 & 0 & 0 \\
+ \multicolumn{2}{c|}{} & 0 & 0 & 0 \\ \hline
+ 0 & 0 & \multicolumn{3}{|c}{\multirow{3}{*}{$W_2$}} \\
+ 0 & 0 & \multicolumn{3}{|c}{} \\
+ 0 & 0 & \multicolumn{3}{|c}{}
+ \end{bmatrix}
+\]
+As you might expect, we can break a space apart into ``indecomposable'' parts.
+\begin{proposition}
+ [Invariant subspace decomposition]
+ Let $V$ be a finite-dimensional vector space.
+ Given any map $T \colon V \to V$, we can write
+ \[ V = V_1 \oplus V_2 \oplus \dots \oplus V_m \]
+ where each $V_i$ is $T$-invariant,
+ and for any $i$ the map $T \colon V_i \to V_i$ is indecomposable.
+\end{proposition}
+\begin{proof}
+ Same as the proof that every integer is the product of primes.
+ If $T$ is indecomposable, we are done.
+ Otherwise, by definition write $V = W_1 \oplus W_2$
+ and then repeat on each of $W_1$ and $W_2$;
+ since the dimensions strictly decrease, this process terminates.
+\end{proof}
+
+Incredibly, with just that we're almost done!
+Consider a decomposition as above,
+so that $T \colon V_1 \to V_1$ is an indecomposable map.
+Then $T$ has an eigenvalue $\lambda_1$, so let $S = T - \lambda_1 \id$; hence $\ker S \neq \{0\}$.
+\begin{ques}
+ Show that $V_1$ is also $S$-invariant, so we can consider $S : V_1 \to V_1$.
+\end{ques}
+By \Cref{prob:endomorphism_eventual_lemma}, we have
+\[ V_1 = \ker S^N \oplus \img S^N \]
+for some $N$.
+But we assumed $T$ was indecomposable,
+so this can only happen if $\img S^N = \{0\}$ and $\ker S^N = V_1$
+(since $\ker S^N$ contains our eigenvector).
+Hence $S$ is nilpotent, so it's a collection of staircases.
+In fact, since $T$ is indecomposable, there is only one staircase.
+Hence $V_1$ is a Jordan block, as desired.
+
+\section{(Optional) Proof of nilpotent Jordan}
+The proof is just induction on $\dim V$.
+Assume $\dim V \ge 1$, and let $W = T\im(V)$ be the image of $V$ under $T$.
+Since $T$ is nilpotent, we must have $W \subsetneq V$.
+Moreover, if $W = \{0\}$ (i.e.\ $T$ is the zero map) then we're already done.
+So assume $\{0\} \subsetneq W \subsetneq V$.
+
+By the inductive hypothesis, we can select a good basis of $W$:
+\begin{align*}
+ \mathcal B' =
+ \Big\{ & T(v_1), T(T(v_1)), T(T(T(v_1))), \dots \\
+ & T(v_2), T(T(v_2)), T(T(T(v_2))), \dots \\
+ & \dots, \\
+ & T(v_\ell), T(T(v_\ell)), T(T(T(v_\ell))), \dots \Big\}
+\end{align*}
+for some $T(v_i) \in W$ (here we have taken advantage of the fact that each element of $W$ is itself of the form $T(v)$ for some $v$).
+
+Also, note that there are exactly $\ell$ elements of $\mathcal B'$ which are in $\ker T$
+(namely the last element of each of the $\ell$ staircases).
+We can thus extend these $\ell$ kernel elements to a full basis of $\ker T$
+by adding vectors $v_{\ell+1}, \dots, v_m$ (where $m = \dim \ker T$).
+(In other words, the last element of each staircase plus the $m-\ell$ new ones are a basis for $\ker T$.)
+
+Now consider
+\begin{align*}
+ \mathcal B =
+ \Big\{ & v_1, T(v_1), T(T(v_1)), T(T(T(v_1))), \dots \\
+ & v_2, T(v_2), T(T(v_2)), T(T(T(v_2))), \dots \\
+ & \dots, \\
+ & v_\ell, T(v_\ell), T(T(v_\ell)), T(T(T(v_\ell))), \dots \\
+ & v_{\ell+1}, v_{\ell+2}, \dots, v_m \Big\}.
+\end{align*}
+\begin{ques}
+Check that there are exactly $\ell + \dim W + (\dim \ker T - \ell) = \dim V$ elements.
+\end{ques}
+\begin{exercise}
+ Show that all the elements are linearly independent.
+ (Assume for contradiction there is some linear dependence,
+ then take $T$ of both sides.)
+\end{exercise}
+Hence $\mathcal B$ is a basis of the desired form.
+
+\section{Algebraic and geometric multiplicity}
+\prototype{The matrix $T$ below.}
+This is some convenient notation:
+let's consider the matrix in Jordan form
+\[
+ T =
+ \begin{bmatrix}
+ 7 & 1 \\
+ 0 & 7 \\
+ & & 9 \\
+ & & & 7 & 1 & 0 \\
+ & & & 0 & 7 & 1 \\
+ & & & 0 & 0 & 7
+ \end{bmatrix}.
+\]
+We focus on the eigenvalue $7$,
+which appears multiple times, so it is certainly ``repeated''.
+However, there are two different senses in which you could say it is repeated.
+\begin{itemize}
+ \ii \emph{Algebraic}: You could say it is repeated five times,
+ because it appears five times on the diagonal.
+ \ii \emph{Geometric}: You could say it really only appears two times:
+ because there are only two eigen\emph{vectors}
+ with eigenvalue $7$, namely $e_1$ and $e_4$.
+
+ Indeed, the vector $e_2$ for example has $T(e_2) = 7e_2 + e_1$,
+ so it's not really an eigenvector!
+ If you apply $T - 7\id$ to $e_2$ twice though,
+ you do get zero.
+\end{itemize}
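+To spell out the computation for $e_2$:
+\[ (T - 7\id)(e_2) = e_1, \qquad
+ (T - 7\id)^2(e_2) = (T - 7\id)(e_1) = 0. \]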
+\begin{ques}
+ In this example,
+ how many times do you need to apply $T - 7\id$ to $e_6$ to get zero?
+\end{ques}
+Both these notions are valid,
+so we will name both.
+To preserve generality,
+we first state the ``intrinsic'' definition.
+\begin{definition}
+ Let $T \colon V \to V$ be a linear map and $\lambda$ a scalar.
+ \begin{itemize}
+ \ii The \vocab{geometric multiplicity}
+ of $\lambda$ is the dimension $\dim V_\lambda$
+ of the $\lambda$-eigenspace.
+
+ \ii Define the \vocab{generalized eigenspace}
+ $V^\lambda$ to be the subspace of $V$
+ for which $(T-\lambda \id)^n(v) = 0$ for some $n \ge 1$.
+ The \vocab{algebraic multiplicity} of $\lambda$ is the
+ dimension $\dim V^\lambda$.
+ \end{itemize}
+ (Silly edge case: we allow ``multiplicity zero''
+ if $\lambda$ is not an eigenvalue at all.)
+\end{definition}
+However, in practice you should just count the Jordan blocks.
+\begin{example}
+ [An example of eigenspaces via Jordan form]
+ Retain the matrix $T$ mentioned earlier and let $\lambda = 7$.
+ \begin{itemize}
+ \ii The eigenspace $V_\lambda$ has basis $e_1$ and $e_4$,
+ so the geometric multiplicity is $2$.
+ \ii The generalized eigenspace $V^\lambda$ has basis $e_1$, $e_2$,
+ $e_4$, $e_5$, $e_6$ so the algebraic multiplicity is $5$.
+ \end{itemize}
+\end{example}
+
+To be completely explicit, here is how you think of these in practice:
+\begin{proposition}
+ [Geometric and algebraic multiplicity vs Jordan blocks]
+ Assume $T \colon V \to V$ is a linear map
+ of finite-dimensional vector spaces,
+ written in Jordan form.
+ Let $\lambda$ be a scalar.
+ Then
+ \begin{itemize}
+ \ii The geometric multiplicity of $\lambda$ is the number
+ of Jordan blocks with eigenvalue $\lambda$;
+ the eigenspace has one basis element per Jordan block.
+
+ \ii The algebraic multiplicity of $\lambda$ is the
+ sum of the dimensions of the Jordan blocks
+ with eigenvalue $\lambda$;
+ the generalized eigenspace is the direct sum of the subspaces
+ corresponding to those blocks.
+ \end{itemize}
+\end{proposition}
+
+\begin{ques}
+ Show that the geometric multiplicity
+ is always less than or equal to the algebraic multiplicity.
+\end{ques}
+
+This actually gives us a tentative definition:
+\begin{itemize}
+ \ii The trace is the sum of the eigenvalues,
+ counted with algebraic multiplicity.
+ \ii The determinant is the product of the eigenvalues,
+ counted with algebraic multiplicity.
+\end{itemize}
+This definition is okay,
+but it has the disadvantage of requiring the ground
+field to be algebraically closed.
+It is also not the definition that is easiest
+to work with computationally.
+The next two chapters will give us a better definition.
+
+\section\problemhead
+
+\begin{problem}
+ [Sum of algebraic multiplicities]
+ Given a $2018$-dimensional complex vector space $V$
+ and a map $T \colon V \to V$,
+ what is the sum of the algebraic multiplicities
+ of all eigenvalues of $T$?
+ \begin{sol}
+ It's just $\dim V = 2018$.
+ After all, you are adding the dimensions of the Jordan blocks\dots
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [The word ``diagonalizable'']
+ A linear map $T \colon V \to V$ (where $\dim V$ is finite)
+ is said to be \vocab{diagonalizable}
+ if it has a basis $e_1$, \dots, $e_n$
+ such that each $e_i$ is an eigenvector.
+ \begin{enumerate}[(a)]
+ \ii Explain the name ``diagonalizable''.
+ \ii Suppose we are working over an algebraically closed field.
+ Then show that $T$ is diagonalizable if and only if
+ for any $\lambda$, the geometric multiplicity of $\lambda$
+ equals the algebraic multiplicity of $\lambda$.
+ \end{enumerate}
+ \begin{sol}
+ (a): if you express $T$ as a matrix in such a basis,
+ one gets a diagonal matrix.
+ (b): this is just saying each Jordan block has dimension $1$,
+ which is what we wanted.
+ (We are implicitly using uniqueness of Jordan form here.)
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Switcharoo]
+ Let $V$ be the $\CC$-vector space
+ with basis $e_1$ and $e_2$.
+ The map $T \colon V \to V$ is defined by $T(e_1) = e_2$ and $T(e_2) = e_1$.
+ Determine the eigenspaces of $T$.
+ \begin{sol}
+ The $+1$ eigenspace is spanned by $e_1+e_2$.
+ The $-1$ eigenspace is spanned by $e_1-e_2$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Writing a polynomial backwards]
+ Define the complex vector space $V$
+ of polynomials with degree at most $2$,
+ say $V = \left\{ ax^2 + bx + c \mid a,b,c \in \CC \right\}$.
+ Define $T \colon V \to V$ by
+ \[ T(ax^2+bx+c) = cx^2+bx+a. \]
+ Determine the eigenspaces of $T$.
+ \begin{sol}
+ The $+1$ eigenspace is spanned by $1+x^2$ and $x$.
+ The $-1$ eigenspace is spanned by $1-x^2$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Differentiation of polynomials]
+ Let $V = \RR[x]$ be the real vector space of all real polynomials.
+ Note that $\frac{d}{dx} \colon V \to V$ is a linear map
+ (for example it sends $x^3$ to $3x^2$).
+ Which real numbers are eigenvalues of this map?
+ \begin{hint}
+ Only $0$ is.
+ Look at degree.
+ \end{hint}
+ \begin{sol}
+ Constant functions differentiate to zero,
+ and these are the only $0$-eigenvectors.
+ There can be no other eigenvectors,
+ since if $\deg p > 0$ then $\deg p' = \deg p - 1$,
+ so if $p'$ is a constant real multiple of $p$
+ we must have $p' = 0$, ergo $p$ is constant.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Differentiation of functions]
+ Let $V$ be the real vector space of all
+ infinitely differentiable functions $\RR \to \RR$.
+ Note that $\frac{d}{dx} \colon V \to V$ is a linear map
+ (for example it sends $\cos x$ to $-\sin x$).
+ Which real numbers are eigenvalues of this map?
+ \begin{hint}
+ All of them are!
+ \end{hint}
+ \begin{sol}
+ $e^{cx}$ is an example of a $c$-eigenvector for every $c$.
+ If you know differential equations,
+ these generate all examples!
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/excision.tex b/books/napkin/excision.tex
new file mode 100644
index 0000000000000000000000000000000000000000..06113d72006f81ba7add8c395042af3b4209b030
--- /dev/null
+++ b/books/napkin/excision.tex
@@ -0,0 +1,389 @@
+\chapter{Excision and relative homology}
+We have already seen how to use the Mayer-Vietoris sequence:
+we started with a sequence
+\[ \dots \to H_n(U \cap V) \to H_n(U) \oplus H_n(V) \to H_n(U+V) \to H_{n-1}(U \cap V) \to \dots \]
+and its reduced version,
+then appealed to the geometric fact that $H_n(U+V) \cong H_n(X)$.
+This allowed us to algebraically make computations on $H_n(X)$.
+
+In this chapter, we turn our attention to the long exact
+sequence associated to the chain complex
+\[ 0 \to C_n(A) \injto C_n(X) \surjto C_n(X,A) \to 0. \]
+The setup will look a lot like the previous two chapters,
+except in addition to $H_n : \catname{hTop} \to \catname{Grp}$
+we will have a functor $H_n : \catname{hPairTop} \to \catname{Grp}$
+which takes a pair $(X,A)$ to $H_n(X,A)$.
+Then, we state (again without proof) the key geometric result,
+and use this to make deductions.
+
+\section{The long exact sequences}
+Recall \Cref{thm:long_exact_rel}, which says that the sequences
+\[ \dots \to H_n(A) \to H_n(X) \to H_n(X,A) \to H_{n-1}(A) \to \dots \]
+and
+\[ \dots \to \wt H_n(A) \to \wt H_n(X) \to H_n(X,A) \to \wt H_{n-1}(A) \to \dots \]
+are long exact.
+By \Cref{prob:triple_long_exact} we even have a long exact sequence
+\[
+ \dots
+ \to H_n(B,A)
+ \to H_n(X,A)
+ \to H_n(X,B)
+ \to H_{n-1}(B,A)
+ \to \dots
+\]
+for $A \subseteq B \subseteq X$.
+An application of the second long exact sequence above gives:
+\begin{lemma}
+ [Homology relative to contractible spaces]
+ \label{lem:rel_contractible}
+ Let $X$ be a topological space,
+ and let $A \subseteq X$ be contractible.
+ For all $n$, \[ H_n(X, A) \cong \wt H_n(X). \]
+\end{lemma}
+\begin{proof}
+ Since $A$ is contractible, we have $\wt H_n(A) = 0$ for every $n$.
+ For each $n$ there's a segment of the long exact sequence given by
+ \[ \dots \to \underbrace{\wt H_n(A)}_{=0} \to \wt H_n(X) \to H_n(X,A)
+ \to \underbrace{\wt H_{n-1}(A)}_{=0} \to \dots. \]
+ So since $0 \to \wt H_n(X) \to H_n(X,A) \to 0$ is exact,
+ this means $H_n(X,A) \cong \wt H_n(X)$.
+\end{proof}
+
+In particular, the lemma applies if $A$ is a single point.
+The case $A = \varnothing$ is also worth noting.
+We compile these results into a lemma:
+\begin{lemma}
+ [Relative homology generalizes absolute homology]
+ Let $X$ be any space, and $\ast \in X$ a point. Then for all $n$,
+ \[
+ H_n(X, \{\ast\}) \cong \wt H_n(X)
+ \qquad\text{and}\qquad
+ H_n(X, \varnothing) = H_n(X).
+ \]
+\end{lemma}
+
+\section{The category of pairs}
+Since we now have an $H_n(X,A)$ instead of just $H_n(X)$,
+a natural next step is to create a suitable category of \emph{pairs}
+and give ourselves the same functorial setup as before.
+
+\begin{definition}
+ Let $\varnothing \neq A \subseteq X$ and $\varnothing \neq B \subseteq Y$
+ be subspaces, and consider a map $f : X \to Y$.
+ If $f\im(A) \subseteq B$ we write
+ \[ f : (X,A) \to (Y,B). \]
+ We say $f$ is a \vocab{map of pairs},
+ between the pairs $(X,A)$ and $(Y,B)$.
+\end{definition}
+\begin{definition}
+ We say that $f,g : (X,A) \to (Y,B)$ are \vocab{pair-homotopic} if they
+ are ``homotopic through maps of pairs''.
+
+ More formally, a \vocab{pair-homotopy} between
+ $f, g : (X,A) \to (Y,B)$ is a map $F : [0,1] \times X \to Y$,
+ which we'll write as a family of maps $F_t : X \to Y$, such that
+ $F$ is a homotopy of the maps $f,g : X \to Y$
+ and each $F_t$ is itself a map of pairs $(X,A) \to (Y,B)$.
+\end{definition}
+Thus, we naturally arrive at two categories:
+\begin{itemize}
+ \ii $\catname{PairTop}$, the category of \emph{pairs} of
+ topological spaces, and
+ \ii $\catname{hPairTop}$, the same category except
+ with maps only equivalent up to homotopy.
+\end{itemize}
+\begin{definition}
+ As before, we say pairs $(X,A)$ and $(Y,B)$ are
+ \vocab{pair-homotopy equivalent}
+ if they are isomorphic in $\catname{hPairTop}$.
+ An isomorphism of $\catname{hPairTop}$ is a
+ \vocab{pair-homotopy equivalence}.
+\end{definition}
+
+We can do the same song and dance as before with the prism operator to obtain:
+\begin{lemma}[Induced maps of relative homology]
+ We have a functor
+ \[ H_n : \catname{hPairTop} \to \catname{Grp}. \]
+\end{lemma}
+That is, if $f : (X,A) \to (Y,B)$ then we obtain an induced map
+\[ f_\ast : H_n(X,A) \to H_n(Y,B) \]
+and if two such $f$ and $g$ are pair-homotopic
+then $f_\ast = g_\ast$.
+
+Now, we want an analog of contractible spaces for our pairs:
+i.e.\ pairs of spaces $(X,A)$ such that $H_n(X,A) = 0$.
+The correct definition is:
+\begin{definition}
+ Let $A \subseteq X$.
+ We say that $A$ is a \vocab{deformation retract} of $X$
+ if there is a map of pairs $r : (X, A) \to (A, A)$
+ which is a pair homotopy equivalence.
+\end{definition}
+\begin{example}
+ [Examples of deformation retracts]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii If a single point $p$ is a deformation retract of a space $X$,
+ then $X$ is contractible, since the retraction $r : X \to \{p\}$
+ (when viewed as a map $X \to X$)
+ is homotopic to the identity map $\id_X : X \to X$.
+ \ii The punctured disk $D^2 \setminus \{0\}$
+ deformation retracts onto its boundary $S^1$.
+ \ii More generally, $D^{n} \setminus \{0\}$
+ deformation retracts onto its boundary $S^{n-1}$.
+ \ii Similarly, $\RR^n \setminus \{0\}$
+ deformation retracts onto a sphere $S^{n-1}$.
+ \end{enumerate}
+\end{example}
+Of course in this situation we have that
+\[ H_n(X,A) \cong H_n(A,A) = 0. \]
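+(The vanishing $H_n(A,A) = 0$ can also be seen directly from the definition:
+the relative chain groups are $C_n(A,A) = C_n(A) / C_n(A) = 0$,
+so the whole relative chain complex is zero.)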
+
+\begin{exercise}
+ Show that if $A \subseteq V \subseteq X$,
+ and $A$ is a deformation retract of $V$,
+ then $H_n(X,A) \cong H_n(X,V)$ for all $n$.
+ (Use \Cref{prob:triple_long_exact}. Solution in next section.)
+\end{exercise}
+
+\section{Excision}
+Now for the key geometric result, which is the analog of
+\Cref{thm:open_cover_homology} for our relative homology groups.
+\begin{theorem}
+ [Excision]
+ Let $Z \subseteq A \subseteq X$ be subspaces such that
+ the closure of $Z$ is contained in the interior of $A$.
+ Then the inclusion $\iota \colon (X \setminus Z, A \setminus Z) \injto (X,A)$
+ (viewed as a map of pairs) induces an isomorphism of
+ relative homology groups
+ \[ H_n(X \setminus Z, A \setminus Z) \cong H_n(X,A). \]
+\end{theorem}
+This means we can \emph{excise} (delete) a subset $Z$ of $A$ in computing
+the relative homology groups $H_n(X,A)$.
+This should intuitively make sense:
+since we are ``modding out by points in $A$'',
+the points in the interior of $A$ should not matter so much.
+
+The main application of excision is to decide
+when $H_n(X,A) \cong \wt H_n(X/A)$.
+Answer:
+
+\begin{theorem}
+ [Relative homology $\implies$ quotient space]
+ \label{thm:good_pair}
+ Let $X$ be a space and $A$ be a subspace such that
+ $A$ is a deformation retract of some open set $V \subseteq X$.
+ Then the quotient map $q : X \to X/A$ induces an isomorphism
+ \[ H_n(X,A) \cong H_n(X/A, A/A) \cong \wt H_n(X/A). \]
+\end{theorem}
+\begin{proof}
+ By hypothesis, we can consider the following maps of pairs:
+ \begin{align*}
+ r & : (V,A) \to (A,A) \\
+ q & : (X,A) \to (X/A, A/A) \\
+ \widehat q &: (X-A, V-A) \to (X/A-A/A, V/A-A/A).
+ \end{align*}
+ Moreover, $r$ is a pair-homotopy equivalence.
+ Considering the long exact sequence of a triple
+ (which was \Cref{prob:triple_long_exact})
+ we have a diagram
+ \begin{center}
+ \begin{tikzcd}[row sep=huge]
+ H_n(V,A) \ar[r] \ar[d, "\cong"', "r"]
+ & H_n(X,A) \ar["f", r]
+ & H_n(X, V) \ar[r]
+ & H_{n-1}(V,A) \ar[d, "\cong"', "r"] \\
+ \underbrace{H_n(A,A)}_{=0} & & & \underbrace{H_{n-1}(A,A)}_{=0}
+ \end{tikzcd}
+ \end{center}
+ where the isomorphisms arise since $r$ is a pair-homotopy equivalence.
+ So $f$ is an isomorphism.
+ Similarly the map
+ \[ g : H_n(X/A, A/A) \to H_n(X/A, V/A) \]
+ is an isomorphism.
+
+ Now, consider the commutative diagram
+ \begin{center}
+ \begin{tikzcd}[sep=huge]
+ H_n(X,A) \ar[r, "f"] \ar[d, "q_\ast"']
+ & H_n(X,V)
+ & H_n(X-A, V-A) \ar[l, "\text{Excise}"'] \ar[d, "\widehat{q}_\ast", "\cong"']
+ \\
+ H_n(X/A,A/A) \ar[r, "g"']
+ & H_n(X/A,V/A)
+ & \ar["\text{Excise}"', l] H_n(X/A-A/A, V/A-A/A)
+ \end{tikzcd}
+ \end{center}
+ and observe that the rightmost arrow $\widehat{q}_\ast$ is an isomorphism,
+ because outside of $A$ the map $\widehat q$ is the identity.
+ We know $f$ and $g$ are isomorphisms,
+ as are the two arrows marked with ``Excise'' (by excision).
+ From this we conclude that $q_\ast$ is an isomorphism.
+ Finally, homology relative to a point is just reduced homology
+ (this is the important case of \Cref{lem:rel_contractible}),
+ giving the last isomorphism $H_n(X/A, A/A) \cong \wt H_n(X/A)$.
+\end{proof}
+
+\section{Some applications}
+One nice application of excision is to compute $\wt H_n(X \vee Y)$.
+\begin{theorem}[Homology of wedge sums]
+ Let $X$ and $Y$ be spaces with basepoints $x_0 \in X$ and $y_0 \in Y$,
+ such that each basepoint is a deformation retract of some open neighborhood.
+ Then for every $n$ we have
+ \[
+ \wt H_n(X \vee Y)
+ = \wt H_n(X) \oplus \wt H_n(Y).
+ \]
+\end{theorem}
+\begin{proof}
+ Apply \Cref{thm:good_pair} with the subset $\{x_0, y_0\}$ of $X \amalg Y$:
+ \begin{align*}
+ \wt H_n (X \vee Y)
+ \cong \wt H_n( (X \amalg Y) / \{x_0, y_0\} )
+ &\cong H_n(X \amalg Y, \{x_0,y_0\}) \\
+ &\cong H_n(X, \{x_0\}) \oplus H_n(Y, \{y_0\}) \\
+ &\cong\wt H_n(X) \oplus \wt H_n(Y). \qedhere
+ \end{align*}
+\end{proof}
+
+Another application is to give a second method
+of computing $H_n(S^m)$.
+To do this, we will prove that
+\[ \wt H_n(S^m) \cong \wt H_{n-1}(S^{m-1}) \]
+for any $n, m \ge 1$.
+However,
+\begin{itemize}
+ \ii $\wt H_0(S^n)$ is $\ZZ$ for $n=0$ and $0$ otherwise.
+ \ii $\wt H_n(S^0)$ is $\ZZ$ for $n=0$ and $0$ otherwise.
+\end{itemize}
+So by induction on $\min \{m,n\}$ we directly obtain that
+\[
+ \wt H_n(S^m) \cong
+ \begin{cases}
+ \ZZ & m=n \\
+ 0 & \text{otherwise}
+ \end{cases}
+\]
+which is what we wanted.
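+For example, unwinding the induction for $m = 2$ gives
+\[ \wt H_2(S^2) \cong \wt H_1(S^1) \cong \wt H_0(S^0) \cong \ZZ. \]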
+
+To prove the claim, let's consider the exact sequence
+formed by the pair $X = D^2$ and $A = S^1$.
+\begin{example}[The long exact sequence for $(X,A) = (D^2, S^1)$]
+ Consider $D^2$ (which is contractible) with boundary $S^1$.
+ Clearly $S^1$ is a deformation retract of $D^2 \setminus \{0\}$,
+ and if we fuse all points on the boundary together we get $D^2 / S^1 \cong S^2$.
+ So we have a long exact sequence
+ \begin{center}
+ \begin{tikzcd}
+ \wt H_2(S^1) \ar[r] & \underbrace{\wt H_2(D^2)}_{=0} \ar[r] & \wt H_2(S^2) \ar[lld] \\
+ \wt H_1(S^1) \ar[r] & \underbrace{\wt H_1(D^2)}_{=0} \ar[r] & \wt H_1(S^2) \ar[lld] \\
+ \wt H_0(S^1) \ar[r] & \underbrace{\wt H_0(D^2)}_{=0} \ar[r] & \underbrace{\wt H_0(S^2)}_{=0}
+ \end{tikzcd}
+ \end{center}
+ From this diagram we read that
+ \[
+ \dots, \quad
+ \wt H_3(S^2) = \wt H_2(S^1), \quad
+ \wt H_2(S^2) = \wt H_1(S^1), \quad
+ \wt H_1(S^2) = \wt H_0(S^1).
+ \]
+\end{example}
+More generally, the exact sequence for the pair $(X,A) = (D^m, S^{m-1})$
+shows that $\wt H_n(S^m) \cong \wt H_{n-1}(S^{m-1})$,
+which is the desired conclusion.
+
+\section{Invariance of dimension}
+Here is one last example of an application of excision.
+\begin{definition}
+ Let $X$ be a space and $p \in X$ a point.
+ The $k$th \vocab{local homology group} of $p$ at $X$ is defined as
+ \[ H_k(X, X \setminus \{p\}). \]
+\end{definition}
+Note that for any open neighborhood $U$ of $p$, we have by excision that
+\[ H_k(X, X \setminus \{p\}) \cong H_k(U, U \setminus \{p\}). \]
+Thus this local homology group only depends on the space near $p$.
+
+\begin{theorem}
+ [Invariance of dimension, Brouwer 1910]
+ Let $U \subseteq \RR^n$ and $V \subseteq \RR^m$ be nonempty open sets.
+ If $U$ and $V$ are homeomorphic, then $m = n$.
+\end{theorem}
+\begin{proof}
+ Consider a point $x \in U$ and its local homology groups. By excision,
+ \[ H_k(\RR^n, \RR^n \setminus \{x\}) \cong
+ H_k(U, U \setminus \{x\}). \]
+ But since $\RR^n \setminus \{x\}$ is homotopy equivalent to $S^{n-1}$,
+ the long exact sequence of \Cref{thm:long_exact_rel} tells us
+ that
+ \[
+ H_k(\RR^n, \RR^n \setminus \{x\})
+ \cong
+ \begin{cases}
+ \ZZ & k = n \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ Analogously, given $y \in V$ we have
+ \[ H_k(\RR^m, \RR^m \setminus\{y\}) \cong H_k(V, V\setminus\{y\}). \]
+ If $U \cong V$, we thus
+ deduce that
+ \[ H_k(\RR^n, \RR^n\setminus\{x\}) \cong H_k(\RR^m, \RR^m\setminus\{y\}) \]
+ for all $k$. This of course can only happen if $m=n$.
+\end{proof}
+
+\section\problemhead
+\begin{problem}
+ Let $X = S^1 \times S^1$ and $Y = S^1 \vee S^1 \vee S^2$.
+ Show that \[ H_n(X) \cong H_n(Y) \] for every integer $n$.
+\end{problem}
+
+\begin{problem}[Hatcher \S2.1 exercise 18]
+ Consider $\QQ \subset \RR$.
+ Compute $\wt H_1(\RR, \QQ)$.
+ \begin{hint}
+ Use \Cref{thm:long_exact_rel}.
+ \end{hint}
+ \begin{sol}
+ We have an exact sequence
+ \[
+ \underbrace{\wt H_1(\RR)}_{=0}
+ \to \wt H_1(\RR, \QQ) \to \wt H_0(\QQ) \to
+ \underbrace{\wt H_0(\RR)}_{=0}.
+ \]
+ Now, since $\QQ$ is totally path-disconnected
+ (i.e.\ no two of its points are joined by a path in $\QQ$),
+ it follows that $\wt H_0(\QQ)$ is a direct sum of
+ countably infinitely many copies of $\ZZ$.
+ \end{sol}
+\end{problem}
+
+\begin{sproblem}
+ What are the local homology groups of a topological $n$-manifold?
+\end{sproblem}
+
+\begin{problem}
+ Let \[ X = \{(x,y) \mid x \ge 0\} \subseteq \RR^2 \]
+ denote the half-plane.
+ What are the local homology groups of points in $X$?
+ % http://math.stackexchange.com/questions/350667/local-homology-group-a-homeomorphism-takes-the-boundary-to-the-boundary
+\end{problem}
+
+\begin{problem}
+ [Brouwer-Jordan separation theorem,
+ generalizing Jordan curve theorem]
+ \yod
+ Let $X \subseteq \RR^n$ be a subset
+ which is homeomorphic to $S^{n-1}$.
+ Prove that $\RR^n \setminus X$
+ has exactly two path-connected components.
+ \begin{hint}
+ For any $n$, prove by induction for $k=1,\dots,n-1$ that
+ (a) if $X$ is a subset of $S^n$ homeomorphic to $D^k$
+ then $\wt H_i(S^n \setminus X) = 0$;
+ (b) if $X$ is a subset of $S^n$ homeomorphic to $S^k$
+ then $\wt H_i(S^n \setminus X) = \ZZ$ for $i=n-k-1$
+ and $0$ otherwise.
+ \end{hint}
+ \begin{sol}
+ This is shown in detail in Section 2.B of Hatcher.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/finite-field.tex b/books/napkin/finite-field.tex
new file mode 100644
index 0000000000000000000000000000000000000000..095e506cc33b5b73e62f644cbcab60d6e410d839
--- /dev/null
+++ b/books/napkin/finite-field.tex
@@ -0,0 +1,307 @@
+\chapter{Finite fields}
+In this short chapter, we classify all fields with finitely many elements
+and compute the Galois groups.
+Nothing in here is very hard, and so most of the proofs are just sketches;
+if you like, you should check the details yourself.
+
+The whole point of this chapter is to prove:
+\begin{itemize}
+ \ii A finite field $F$ must have order $p^n$, with $p$ prime and $n$ a positive integer.
+ \ii In this case, $F$ has characteristic $p$.
+ \ii All such fields are isomorphic,
+ so it's customary to use the notation $\FF_{p^n}$
+ for ``the'' finite field of order $p^n$ if we only care up to isomorphism.
+ \ii The extension $F/\FF_p$ is Galois, and $\Gal(F/\FF_p)$ is a cyclic group of order $n$.
+ The generator is the automorphism \[ \sigma : F \to F \quad\text{by}\quad x \mapsto x^p. \]
+\end{itemize}
+If you're in a hurry you can just remember these results and skip to the next chapter.
+
+\section{Example of a finite field}
+Before diving in, we give some examples.
+
+Recall that the \emph{characteristic} of a field $F$
+is the smallest positive integer $p$ such that
+\[ \underbrace{1_F + \dots + 1_F}_{\text{$p$ times}} = 0 \]
+or $0$ if no such integer $p$ exists.
+
+\begin{example}[Base field]
+ Let $\FF_p$ denote the field of integers modulo $p$.
+ This is a field with $p$ elements, with characteristic $p$.
+\end{example}
+
+\begin{example}[The finite field of nine elements]
+ Let
+ \[ F \cong \FF_3[X]/(X^2+1) \cong \ZZ[i] / (3). \]
+ We can think of its elements as \[ \left\{ a + bi \mid 0 \le a,b \le 2 \right\}. \]
+ Since $(3)$ is prime in $\ZZ[i]$, the ring of integers of $\QQ(i)$,
+ we see that $F$ is a field with $3^2 = 9$ elements.
+ Note that, although this field has $9$ elements, every element $x$ has the property that
+ \[ 3x = \underbrace{x + \dots + x}_{\text{$3$ times}} = 0. \]
+ In particular, $F$ has characteristic $3$.
+\end{example}
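As a quick sanity check on this example, here is a minimal sketch (the pair representation and helper names are my own, not from the text) modeling $\FF_9$ by pairs $(a,b) \leftrightarrow a + bi$ with arithmetic mod $3$, verifying it really is a field of characteristic $3$:

```python
# Model F_9 = F_3[i]/(i^2 + 1): the element a + b*i is the pair (a, b).
def add(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def mul(x, y):
    # (a+bi)(c+di) = (ac - bd) + (ad + bc)i, using i^2 = -1
    a, b = x
    c, d = y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

elements = [(a, b) for a in range(3) for b in range(3)]

# Every nonzero element has a multiplicative inverse, so F_9 is a field:
for x in elements:
    if x != (0, 0):
        assert any(mul(x, y) == (1, 0) for y in elements)

# Characteristic 3: x + x + x = 0 for every x.
for x in elements:
    assert add(add(x, x), x) == (0, 0)
```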
+
+\section{Finite fields have prime power order}
+\begin{lemma}
+ If the characteristic of a field $F$ isn't zero,
+ it must be a prime number.
+\end{lemma}
+\begin{proof}
+ Assume not, so the characteristic is composite, say $n = ab$ for integers $1 < a, b < n$.
+ Then let
+ \[ A = \underbrace{1_F + \dots + 1_F}_{\text{$a$ times}} \neq 0 \]
+ and
+ \[ B = \underbrace{1_F + \dots + 1_F}_{\text{$b$ times}} \neq 0. \]
+ Then $AB = 0$, contradicting the fact that $F$ is a field.
+\end{proof}
+
+We like fields of characteristic zero, but unfortunately for finite fields
+we are doomed to have nonzero characteristic.
+
+\begin{lemma}
+ [Finite fields have prime power orders]
+ Let $F$ be a finite field.
+ Then
+ \begin{enumerate}[(a)]
+ \ii Its characteristic is nonzero, and hence some prime $p$.
+ \ii The field $F$ is a finite extension of $\FF_p$,
+ and in particular it is an $\FF_p$-vector space.
+ \ii We have $\left\lvert F \right\rvert = p^n$ for some prime $p$ and positive integer $n$.
+ \end{enumerate}
+\end{lemma}
+\begin{proof}
+ Very briefly, since this is easy:
+ \begin{enumerate}[(a)]
+ \ii Apply Lagrange's theorem (or the pigeonhole principle!)
+ to $(F, +)$ to see that the characteristic is nonzero.
+ \ii The additive subgroup of $(F,+)$ generated
+ by $1_F$ is an isomorphic copy of $\FF_p$.
+ \ii Since it's a field extension,
+ $F$ is a finite-dimensional vector space over $\FF_p$,
+ with some basis $e_1, \dots, e_n$.
+ It follows that there are $p^n$ elements of $F$. \qedhere
+ \end{enumerate}
+\end{proof}
+\begin{remark}
+ An amusing alternate proof of (c) by contradiction:
+ if a prime $q \neq p$ divides $\left\lvert F \right\rvert$, then
+ by Cauchy's theorem (\Cref{thm:cauchy_group}) on $(F, +)$
+ there's a (nonzero) element $x$ of order $q$.
+ Evidently \[ x \cdot ( \underbrace{1_F + \dots + 1_F}_{\text{$q$ times}} ) = 0 \]
+ then, but $x \neq 0$, and hence the characteristic of $F$ also divides $q$,
+ which is impossible.
+\end{remark}
+
+An important point in the above proof is that
+\begin{lemma}[Finite fields are field extensions of $\FF_p$]
+ If $\left\lvert F \right\rvert = p^n$ is a finite field,
+ then there is an isomorphic copy of $\FF_p$ sitting inside $F$.
+ Thus $F$ is a field extension of $\FF_p$.
+\end{lemma}
+
+We want to refer a lot to this copy of $\FF_p$, so in what follows:
+\begin{abuse}
+ Every integer $n$ can be identified with an element of $F$, namely
+ \[ n \defeq \underbrace{1_F + \dots + 1_F}_{\text{$n$ times}}. \]
+ Note that (as expected) this depends only on $n \pmod p$.
+\end{abuse}
+
+This notation makes it easier to think about statements like the following.
+\begin{theorem}
+ [Freshman's dream]
+ For any $a,b \in F$ we have
+ \[ (a+b)^p = a^p + b^p. \]
+\end{theorem}
+\begin{proof}
+ Use the Binomial theorem, and the fact that $\binom pi$ is divisible by $p$ for $0 < i < p$.
+\end{proof}
+\begin{exercise}
+ Convince yourself that this proof works.
+\end{exercise}
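If you'd like to see the key divisibility fact in action, here is a throwaway numerical check (not part of the text): $p \mid \binom{p}{i}$ for $0 < i < p$, and consequently $(a+b)^p \equiv a^p + b^p \pmod p$.

```python
from math import comb

# p divides C(p, i) for 0 < i < p whenever p is prime:
for p in (2, 3, 5, 7, 11, 13):
    assert all(comb(p, i) % p == 0 for i in range(1, p))

# Hence the Freshman's dream holds mod p:
p = 7
for a in range(p):
    for b in range(p):
        assert pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
```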
+
+\section{All finite fields are isomorphic}
+We next proceed to prove ``Fermat's little theorem'':
+\begin{theorem}
+ [Fermat's little theorem in finite fields]
+ Let $F$ be a finite field of order $p^n$.
+ Then every element $x \in F$ satisfies
+ \[ x^{p^n} - x = 0. \]
+\end{theorem}
+\begin{proof}
+ If $x = 0$ it's true; otherwise, use Lagrange's theorem
+ on the multiplicative group $F^\times$, of order $p^n - 1$, to get $x^{p^n-1} = 1_F$.
+\end{proof}
+
+We can now prove the following result,
+which is the ``main surprise'' about finite fields:
+that there is a unique one up to isomorphism for each size.
+\begin{theorem}[Complete classification of finite fields]
+ A field $F$ is a finite field with $p^n$ elements if and only if
+ it is a splitting field of $x^{p^n}-x$ over $\FF_p$.
+\end{theorem}
+\begin{proof}
+ By ``Fermat's little theorem'', all the elements of $F$ satisfy this polynomial.
+ So we just have to show that the roots of this polynomial are distinct
+ (i.e.\ that it is separable).
+
+ To do this, we use the derivative trick again: the derivative of this polynomial is
+ \[ p^n \cdot x^{p^n-1} - 1 = -1 \]
+ which has no roots at all, so the polynomial cannot have any double roots. \qedhere
+\end{proof}
+
+\begin{definition}
+ For this reason, it's customary to denote \emph{the}
+ field with $p^n$ elements by $\FF_{p^n}$.
+\end{definition}
+
+Note that the polynomial $x^{p^n}-x \pmod p$ is far from irreducible, but
+the computation above shows that it's separable.
+\begin{example}[The finite field of order nine again]
+ The polynomial $x^9-x$ is separable modulo $3$ and has factorization
+ \[ x(x+1)(x+2)(x^2+1)(x^2+x+2)(x^2+2x+2) \pmod 3. \]
+
+ So if $F$ has order $9$, then we intuitively expect it to be the field
+ generated by adjoining all the roots: $0$, $1$, $2$, as well as
+ $\pm i$, $1 \pm i$, $2 \pm i$.
+ Indeed, that's the example we had at the beginning of this chapter.
+
+ (Here $i$ denotes \emph{an} element of $\FF_9$ satisfying $i^2=-1$.
+ The notation is deliberately similar to the usual imaginary unit.)
+\end{example}
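The displayed factorization can be verified by brute force. Below is a minimal sketch (the helper name \texttt{polymul} is made up) that multiplies the six factors back together mod $3$ and recovers $x^9 - x \equiv x^9 + 2x$:

```python
from functools import reduce

def polymul(f, g, p=3):
    """Multiply polynomials (coefficient lists, constant term first) mod p."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

# The six factors of x^9 - x mod 3, coefficients from the constant term up:
factors = [
    [0, 1],      # x
    [1, 1],      # x + 1
    [2, 1],      # x + 2
    [1, 0, 1],   # x^2 + 1
    [2, 1, 1],   # x^2 + x + 2
    [2, 2, 1],   # x^2 + 2x + 2
]
product = reduce(polymul, factors)

# x^9 - x = x^9 + 2x (mod 3):
assert product == [0, 2, 0, 0, 0, 0, 0, 0, 0, 1]
```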
+
+\section{The Galois theory of finite fields}
+Retain the notation $\FF_{p^n}$ now (instead of $F$ like before).
+By the above theorem, it's the splitting field of a separable polynomial,
+hence we know that $\FF_{p^n} /\FF_p$ is a Galois extension.
+We would like to find the Galois group.
+
+In fact, we are very lucky: it is cyclic.
+First, we exhibit one such element $\sigma_p \in \Gal(\FF_{p^n} /\FF_p)$:
+
+\begin{theorem}[The $p$th power automorphism]
+ The map $\sigma_p : \FF_{p^n} \to \FF_{p^n}$ defined by
+ \[ \sigma_p(x) = x^p \]
+ is an automorphism, and moreover fixes $\FF_p$.
+\end{theorem}
+\begin{proof}
+ It's a homomorphism since it fixes $1$,
+ respects multiplication,
+ and respects addition.
+ \begin{ques}
+ Why does it respect addition?
+ \end{ques}
+ Next, we claim that it is injective. To see this, note that
+ \[ x^p = y^p
+ \iff x^p - y^p = 0
+ \iff (x-y)^p = 0
+ \iff x=y.
+ \]
+ Here we have again used the Freshman's Dream.
+ Since $\FF_{p^n}$ is finite, this injective map is automatically bijective.
+ The fact that it fixes $\FF_p$ is Fermat's little theorem.
+\end{proof}
+
+Now we're done:
+\begin{theorem}
+ [Galois group of the extension $\FF_{p^n}/\FF_p$]
+ We have $\Gal(\FF_{p^n}/\FF_p) \cong \Zc n$ with generator $\sigma_p$.
+\end{theorem}
+\begin{proof}
+ Since $[\FF_{p^n}:\FF_p] = n$, the Galois group $G$ has order $n$.
+ So we just need to show $\sigma_p \in G$ has order $n$.
+
+ Note that $\sigma_p$ applied $k$ times gives $x \mapsto x^{p^k}$.
+ Hence, $\sigma_p$ applied $n$ times is the identity,
+ as all elements of $\FF_{p^n}$ satisfy $x^{p^n}=x$.
+ But if $k < n$, then $\sigma_p$ applied $k$ times
+ cannot be the identity or $x^{p^k}-x$ would have too many roots.
+\end{proof}
+
+We can see an example of this again with the finite field of order $9$.
+\begin{example}
+ [Galois group of finite field of order $9$]
+ Let $\FF_9$ be the finite field of order $9$,
+ and represent it concretely by $\FF_9 = \ZZ[i]/(3)$.
+ Let $\sigma_3 : \FF_9 \to \FF_9$ be $x \mapsto x^3$.
+ We can witness the fate of all nine elements:
+ \begin{center}
+ \begin{tikzcd}
+ 0 & 1 & 2
+ & i \ar[d, leftrightarrow, "\sigma_3"]
+ & 1+i \ar[d, leftrightarrow, "\sigma_3"]
+ & 2+i \ar[d, leftrightarrow, "\sigma_3"] \\
+ &&& -i & 1-i & 2-i
+ \end{tikzcd}
+ \end{center}
+ (As claimed, $0$, $1$, $2$ are the fixed points,
+ so I haven't drawn arrows for them.)
+ As predicted, the Galois group has order two:
+ \[ \Gal(\FF_9/\FF_3) = \left\{ \id, \sigma_3 \right\} \cong \Zc 2. \]
+\end{example}
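This picture is easy to verify by direct computation. Below is a small sketch (the pair representation and names are my own) that applies $\sigma_3 \colon x \mapsto x^3$ to all nine elements of $\FF_9 = \{a+bi\}$ and checks the fixed points and the order:

```python
# F_9 = F_3[i]/(i^2 + 1): the element a + b*i is the pair (a, b).
def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def sigma(x):
    return mul(mul(x, x), x)  # sigma_3(x) = x^3

elements = [(a, b) for a in range(3) for b in range(3)]

# The fixed points of sigma_3 are exactly 0, 1, 2 (the copy of F_3):
assert [x for x in elements if sigma(x) == x] == [(0, 0), (1, 0), (2, 0)]

# sigma_3 has order 2: applying it twice is the identity.
assert all(sigma(sigma(x)) == x for x in elements)
```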
+
+This concludes the proof of all results stated at the beginning of this chapter.
+
+\section\problemhead
+\begin{dproblem}[HMMT 2017]
+ \gim
+ What is the period of the Fibonacci sequence modulo $127$?
+ \begin{hint}
+ The Fibonacci sequence is given by
+ $F_n = \frac{\alpha^n - \beta^n}{\alpha - \beta}$
+ where $\alpha = \frac{1 + \sqrt5}{2}$ and $\beta = \frac{1 - \sqrt5}{2}$
+ are the two roots of $P(X) \defeq X^2-X-1$.
+ Show the polynomial $P(X)$ is irreducible modulo $127$;
+ then work in the splitting field of $P$, namely $\FF_{p^2}$.
+
+ Show that $F_p=-1$, $F_{p+1}=0$, $F_{2p+1}=1$, $F_{2p+2}=0$.
+ (Look at the action of $\Gal(\FF_{p^2}/\FF_p)$
+ on the roots of $P$.)
+ \end{hint}
+ \begin{sol}
+ Recall that the Fibonacci sequence is given by
+ \[ F_n = \frac{\alpha^n - \beta^n}{\alpha - \beta} \]
+ where $\alpha = \frac{1 + \sqrt5}{2}$ and $\beta = \frac{1 - \sqrt5}{2}$
+ are the two roots of $P(X) \defeq X^2-X-1$.
+
+ Let $p = 127$ and work modulo $p$.
+ As \[ \left( \frac5p \right) = \left( \frac p5 \right) = \left( \frac 25
+ \right) = -1 \]
+ we see $5$ is not a quadratic residue mod $127$.
+ Thus the polynomial $P(X)$,
+ viewed as a polynomial in $\FF_p[X]$, is irreducible
+ (intuitively, $\alpha$ and $\beta$ are not elements of $\FF_p$).
+ Accordingly we will work in the finite field $\FF_{p^2}$,
+ which is the $\FF_p$-splitting field of $P(X)$.
+ In other words we interpret $\alpha$ and $\beta$ as elements
+ of $\FF_{p^2}$ which do not lie in $\FF_p$.
+
+ Let $\sigma \colon \FF_{p^2} \to \FF_{p^2}$ by $t \mapsto t^p$
+ be the nontrivial element of
+ $\opname{Gal}\left( \FF_{p^2} / \FF_p \right)$;
+ in other words, $\sigma$ is the non-identity automorphism of $\FF_{p^2}$.
+ Since the fixed points of $\sigma$ are the elements of $\FF_p$,
+ this means $\sigma$ does not fix either root of $P$; thus we must have
+ \begin{align*}
+ \alpha^p = \sigma(\alpha) &= \beta \\
+ \beta^p = \sigma(\beta) &= \alpha.
+ \end{align*}
+ Now, compute
+ \begin{align*}
+ F_{p} &= \frac{\alpha^{p} - \beta^{p}}{\alpha-\beta} =
+ \frac{\beta-\alpha}{\alpha-\beta} = -1. \\
+ F_{p+1} &= \frac{\alpha^{p+1} - \beta^{p+1}}{\alpha-\beta} =
+ \frac{\alpha\beta-\beta\alpha}{\alpha-\beta} = 0. \\
+ F_{2p+1} &= \frac{\alpha^{2p+1} - \beta^{2p+1}}{\alpha-\beta}
+ = \frac{\beta^2\alpha-\alpha^2\beta}{\alpha-\beta}
+ = - \alpha \beta = 1. \\
+ F_{2p+2} &= \frac{\alpha^{2p+2} - \beta^{2p+2}}{\alpha-\beta}
+ = \frac{\beta^2\alpha^2-\alpha^2\beta^2}{\alpha-\beta} = 0.
+ \end{align*}
+ Consequently, the period must divide $2p+2$ but not $p+1$.
+
+ We now use for the first time the exact numerical value $p=127$
+ to see the period divides $2p+2 = 256 = 2^8$,
+ but not $p+1 = 128 = 2^7$.
+ (Previously we only used the fact that $(5/p)=-1$.)
+ Thus the period must be exactly $256$.
+ \end{sol}
+\end{dproblem}
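The claimed answer can be double-checked by brute force, independently of the Galois-theoretic argument. Here is a sketch (the function name is made up) that computes the Pisano period directly:

```python
def pisano(m):
    """Period of the Fibonacci sequence modulo m: the least n > 0
    with (F_n, F_{n+1}) = (0, 1) mod m."""
    a, b, n = 0, 1, 0
    while True:
        a, b = b, (a + b) % m
        n += 1
        if (a, b) == (0, 1):
            return n

assert pisano(127) == 256
```

(As a sanity check, the classic value `pisano(10) == 60` also comes out.)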
diff --git a/books/napkin/forcing.tex b/books/napkin/forcing.tex
new file mode 100644
index 0000000000000000000000000000000000000000..befb6be5da8ae51d18dac6e58273d2d22eabee63
--- /dev/null
+++ b/books/napkin/forcing.tex
@@ -0,0 +1,683 @@
+\chapter{Forcing}
+We are now going to introduce Paul Cohen's technique of \vocab{forcing},
+which we then use to break the Continuum Hypothesis.
+
+Here is how it works.
+Given a transitive model $M$ and a poset $\Po$ inside it,
+we can consider a ``generic'' subset $G \subseteq \Po$, where $G$ is not in $M$.
+Then, we are going to construct a bigger universe $M[G]$ which contains both $M$ and $G$.
+(This notation is deliberately the same as $\ZZ[\sqrt2]$, for example -- in the algebra case,
+we are taking $\ZZ$ and adding in a new element $\sqrt 2$, plus everything that can be generated from it.)
+By choosing $\Po$ well, we can cause $M[G]$ to have desirable properties.
+
+Picture:
+
+\begin{center}
+ \begin{asy}
+ size(14cm);
+ pair A = (12,30);
+ pair B = -conj(A);
+ pair M = midpoint(A--B);
+ pair O = origin;
+ MP("V", A, dir(10));
+ draw(A--O--B);
+
+ fill(A--O--B--cycle, opacity(0.3)+palecyan);
+
+ MP("V_0 = \varnothing", origin, dir(-20));
+ MP("V_1 = \{\varnothing\}", 0.05*A, dir(0));
+ MP("V_2 = \{\varnothing, \{\varnothing\} \}", 0.10*A, dir(0));
+
+ pair A1 = 0.4*A;
+ pair B1 = 0.4*B;
+ draw(MP("V_\omega", A1, dir(0))--B1);
+ draw(MP("V_{\omega+1} = \mathcal P(V_\omega)", 0.45*A, dir(0))--0.45*B);
+ Drawing("\omega", 0.45*M, dir(45));
+
+ filldraw(O--A1--(A1+0.30*M)..(0.85*M)..(B1+0.30*M)--B1--cycle,
+ opacity(0.3)+lightgreen, heavygreen+1);
+ draw(O--0.85*M, heavygreen+1);
+ filldraw(O--(1.3*A1)..(0.85*M)..(1.3*B1)--cycle,
+ opacity(0.1)+lightred, heavyred+1);
+
+ Drawing("\aleph_1^V", 0.95*M, dir(0));
+
+ pair F = 0.55*B+0.10*A;
+ Drawing("f", F, dir(90));
+ draw(F--0.55*M, dotted, EndArrow, Margins);
+ draw(F--0.45*M, dotted, EndArrow, Margins);
+
+ Drawing("\aleph_1^M", 0.55*M, dir(45));
+ Drawing("\aleph_1^{M[G]}", 0.65*M, dir(45));
+
+ draw(0.85*M--M);
+ MP("\mathrm{On}^V", M, dir(90));
+ MP("\mathrm{On}^M = \mathrm{On}^{M[G]}", 0.85*M, dir(135));
+
+ MP("M \subseteq M[G]", 0.85*M, 3*dir(0)+dir(45));
+
+ Drawing("\mathbb P", 0.3*M+0.3*A, dir(135));
+ Drawing("G", 0.55*A+0.1*B, dir(45));
+ \end{asy}
+\end{center}
+
+The model $M$ is drawn in green, and its extension $M[G]$ is drawn in red.
+
+The models $M$ and $M[G]$ will share the same ordinals, which is represented here
+by $M$ and $M[G]$ having the same height.
+But one issue with this is that forcing may introduce some new bijections between cardinals of $M$
+that were not there originally; this leads to the phenomenon called \vocab{cardinal collapse}:
+quite literally, some cardinals in $M$ will no longer be cardinals in $M[G]$, but merely ordinals.
+This is because in the process of adjoining $G$, we may accidentally pick up some bijections which were not in the earlier universe.
+In the diagram drawn, this is the function $f$ mapping $\omega$ to $\aleph_1^M$.
+Essentially, the difficulty is that ``$\kappa$ is a cardinal'' is a $\Pi_1$ statement.
+
+In the case of the Continuum Hypothesis, we'll introduce a $\Po$ such that
+any generic subset $G$ will ``encode'' $\aleph_2^M$ real numbers.
+We'll then show cardinal collapse does not occur, meaning $\aleph_2^{M[G]} = \aleph_2^M$.
+Thus $M[G]$ will have $\aleph_2^{M[G]}$ real numbers, as desired.
+
+\section{Setting up posets}
+\prototype{Infinite Binary Tree}
+Let $M$ be a transitive model of $\ZFC$.
+Let $\Po = (\Po, \le) \in M$ be a poset with a maximal element $1_\Po$.
+The elements of $\Po$ are called \vocab{conditions},
+because they will force things to be true in $M[G]$.
+
+\begin{definition}
+ A subset $D \subseteq \Po$ is \vocab{dense} if for all $p \in \Po$,
+ there exists a $q \in D$ such that $q \le p$.
+\end{definition}
+Examples of dense subsets include the entire $\Po$ as well
+as any downwards ``slice''.
+
+\begin{definition}
+ For $p,q \in \Po$ we write $p \parallel q$,
+ saying ``$p$ is \vocab{compatible} with $q$'',
+ if there exists $r \in \Po$ with $r \le p$ and $r \le q$.
+ Otherwise, we say $p$ and $q$ are \vocab{incompatible}
+ and write $p \perp q$.
+\end{definition}
+\begin{example}[Infinite binary tree]
+ Let $\Po = 2^{<\omega}$ be the \vocab{infinite binary tree} shown below,
+ extended to infinity in the obvious way:
+ \begin{center}
+ \begin{asy}
+ size(8cm);
+ pair P = Drawing("\varnothing", (0,4), dir(90));
+ pair P0 = Drawing("0", (-5,2), 1.5*dir(90));
+ pair P1 = Drawing("1", (5,2), 1.5*dir(90));
+ pair P00 = Drawing("00", (-7,0), 1.4*dir(120));
+ pair P01 = Drawing("01", (-3,0), 1.4*dir(60));
+ pair P10 = Drawing("10", (3,0), 1.4*dir(120));
+ pair P11 = Drawing("11", (7,0), 1.4*dir(60));
+
+ pair P000 = Drawing("000", (-8,-3));
+ pair P001 = Drawing("001", (-6,-3));
+ pair P010 = Drawing("010", (-4,-3));
+ pair P011 = Drawing("011", (-2,-3));
+
+ pair P100 = Drawing("100", (2,-3));
+ pair P101 = Drawing("101", (4,-3));
+ pair P110 = Drawing("110", (6,-3));
+ pair P111 = Drawing("111", (8,-3));
+
+ label("$\vdots$", (-7,-3), dir(-90));
+ label("$\vdots$", (-3,-3), dir(-90));
+ label("$\vdots$", (3,-3), dir(-90));
+ label("$\vdots$", (7,-3), dir(-90));
+
+ draw(P01--P0--P00);
+ draw(P11--P1--P10);
+ draw(P0--P--P1);
+ draw(P000--P00--P001);
+ draw(P100--P10--P101);
+ draw(P010--P01--P011);
+ draw(P110--P11--P111);
+ \end{asy}
+ \end{center}
+
+ \begin{enumerate}[(a)]
+ \ii The maximal element $1_\Po$ is the empty string $\varnothing$.
+ \ii $D = \{\text{all strings ending in $001$}\}$ is an example of a dense set.
+ \ii No two elements of $\Po$ are compatible unless they are comparable.
+ \end{enumerate}
+\end{example}
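A small sketch may help make these definitions concrete. Below (names are my own) we model conditions of $2^{<\omega}$ as Python strings, where $q \le p$ means the string $q$ extends $p$, and check the assertions (b) and (c) of the example:

```python
def leq(q, p):
    """q <= p in 2^{<omega}: q is stronger, i.e. q extends the string p."""
    return q.startswith(p)

def compatible(p, q):
    """p || q iff some r lies below both; for strings, iff they are comparable."""
    return leq(p, q) or leq(q, p)

assert compatible("01", "010")      # comparable, hence compatible
assert not compatible("01", "10")   # neither extends the other

def extend_into_D(p):
    """D = {strings ending in '001'} is dense: p + '001' is in D and below p."""
    q = p + "001"
    assert q.endswith("001") and leq(q, p)
    return q

assert extend_into_D("0110") == "0110001"
```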
+
+
+Now, I can specify what it means to be ``generic''.
+\begin{definition}
+ A nonempty set $G \subseteq \Po$ is a \vocab{filter} if
+ \begin{enumerate}[(a)]
+ \ii The set $G$ is upwards-closed:
+ $\forall p \in G (\forall q \ge p) (q \in G)$.
+ \ii Any pair of elements in $G$ is compatible.
+ \end{enumerate}
+ We say $G$ is \vocab{$M$-generic} if for all $D$ which are \emph{in the model $M$},
+ if $D$ is dense then $G \cap D \neq \varnothing$.
+\end{definition}
+\begin{ques}
+ Show that if $G$ is a filter then $1_\Po \in G$.
+\end{ques}
+\begin{example}[Generic filters on the infinite binary tree]
+ Let $\Po = 2^{<\omega}$.
+ The generic filters on $\Po$ are sets of the form
+ \[ \left\{ \varnothing,\; b_1,\; b_1b_2,\; b_1b_2b_3,\; \dots \right\}. \]
+ So every generic filter on $\Po$ corresponds
+ to a binary number $b = 0.b_1b_2b_3\dots$.
+
+ It is harder to describe which reals correspond to generic filters,
+ but they should really ``look random''.
+ For example, the set of strings ending in $011$ is dense,
+ so one should expect ``$011$'' to appear inside $b$,
+ and more generally that $b$ should contain every binary string.
+ So one would expect the binary expansion of $\pi-3$ might correspond to a generic,
+ but not something like $0.010101\dots$.
+ That's why we call them ``generic''.
+\end{example}
+
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ pair P = Drawing("\varnothing", (0,4), dir(90), red);
+ pair P0 = Drawing("0", (-5,2), 1.5*dir(90), red);
+ pair P1 = Drawing("1", (5,2), 1.5*dir(90));
+ pair P00 = Drawing("00", (-7,0), 1.4*dir(120));
+ pair P01 = Drawing("01", (-3,0), 1.4*dir(60), red);
+ pair P10 = Drawing("10", (3,0), 1.4*dir(120));
+ pair P11 = Drawing("11", (7,0), 1.4*dir(60));
+
+ pair P000 = Drawing("000", (-8,-3));
+ pair P001 = Drawing("001", (-6,-3));
+ pair P010 = Drawing("010", (-4,-3), red);
+ pair P011 = Drawing("011", (-2,-3));
+
+ pair P100 = Drawing("100", (2,-3));
+ pair P101 = Drawing("101", (4,-3));
+ pair P110 = Drawing("110", (6,-3));
+ pair P111 = Drawing("111", (8,-3));
+
+ draw(P01--P0--P00);
+ draw(P11--P1--P10);
+ draw(P0--P--P1);
+ draw(P000--P00--P001);
+ draw(P100--P10--P101);
+ draw(P010--P01--P011);
+ draw(P110--P11--P111);
+
+ draw(P--P0--P01--P010--(P010+2*dir(-90)), red+1.4);
+ MP("G", P010+2*dir(-90), dir(-90), red);
+ \end{asy}
+\end{center}
+
+\begin{exercise}
+ Verify that every generic filter on $2^{<\omega}$ has the form above.
+ Show that conversely, a binary number gives a filter, but it need not be generic.
+\end{exercise}
+
+Notice that if $p \ge q$, then the sentence $q \in G$ tells us more information than the sentence $p \in G$.
+In that sense $q$ is a \emph{stronger} condition.
+In another sense $1_\Po$ is the weakest possible condition,
+because it tells us nothing about $G$; we always have $1_\Po \in G$
+since $G$ is upwards closed.
+
+\section{More properties of posets}
+We had better make sure that generic filters exist.
+In fact this is kind of tricky, but for countable models it works:
+\begin{lemma}[Rasiowa-Sikorski lemma]
+ Suppose $M$ is a \emph{countable} transitive model of $\ZFC$
+ and $\Po$ is a partial order.
+ Then there exists an $M$-generic filter $G$.
+\end{lemma}
+\begin{proof}
+ Essentially, hit them one by one.
+ \Cref{prob:rslemma}.
+\end{proof}
+
+Fortunately, for breaking $\CH$ we would want $M$ to be countable anyways.
+% This is really just the proof of the Baire category theorem.
+
+The other thing we want to do to make sure we're on the right track is guarantee
+that a generic set $G$ is not actually in $M$.
+(Analogy: $\ZZ[3]$ is a really stupid extension.)
+The condition that guarantees this is:
+
+\begin{definition}
+ A partial order $\Po$ is \vocab{splitting} if
+ for all $p \in \Po$, there exists $q,r \le p$
+ such that $q \perp r$.
+\end{definition}
+\begin{example}[Infinite binary tree is (very) splitting]
+ The infinite binary tree is about as splitting as you can get.
+ Given $p \in 2^{<\omega}$, just consider the two elements right under it.
+\end{example}
+
+\begin{lemma}[Splitting posets omit generic sets]
+ Suppose $\Po$ is splitting. Then if $F \subseteq \Po$ is a filter
+ such that $F \in M$, then $\Po \setminus F$ is dense.
+ In particular, if $G \subseteq \Po$ is generic, then $G \notin M$.
+\end{lemma}
+\begin{proof}
+ Let $p \in \Po$ be given; we seek some $q \le p$ with $q \in \Po \setminus F$.
+ If $p \notin F$ we can take $q = p$, so assume $p \in F$.
+ Since $\Po$ is splitting, there exist $q, r \le p$ which are incompatible.
+ Since $F$ is a filter it cannot contain both;
+ we must have one of them outside $F$, say $q \notin F$.
+ That's enough to prove $\Po \setminus F$ is dense.
+ \begin{ques}
+ Deduce the last assertion of the lemma about generic $G$. \qedhere
+ \end{ques}
+\end{proof}
+
+\section{Names, and the generic extension}
+We now define the \emph{names} associated to a poset $\Po$.
+
+\begin{definition}
+ Suppose $M$ is a transitive model of $\ZFC$, $\Po = (\Po, \le) \in M$ is a partial order.
+ We define the hierarchy of \vocab{$\Po$-names} recursively by
+ \begin{align*}
+ \Name_0 &= \varnothing \\
+ \Name_{\alpha+1} &= \PP(\Name_\alpha \times \Po) \\
+ \Name_{\lambda} &= \bigcup_{\alpha < \lambda} \Name_\alpha.
+ \end{align*}
+ Finally, let $\Name = \bigcup_\alpha \Name_\alpha$ denote the class of all $\Po$-names.
+ % For $\tau \in \Name$, let $\nrank(\tau)$ be the least $\alpha$ such that $\tau \in \Name_\alpha$.
+\end{definition}
+(These $\Name_\alpha$'s are the analog of the $V_\alpha$'s:
+each $\Name_\alpha$ is just the set of all names with rank $\le \alpha$.)
+
+\begin{definition}
+ For a filter $G$, we define the \vocab{interpretation} of $\tau$ by $G$,
+ denoted $\tau^G$, using the transfinite recursion
+ \[ \tau^G
+ = \left\{ \sigma^G
+ \mid \left<\sigma, p\right> \in \tau
+ \text{ and } p \in G\right\}. \]
+ We then define the model
+ \[ M[G] = \left\{ \tau^G \mid \tau \in \Name^M \right\}. \]
+ In words, $M[G]$ is the interpretation of all the possible $\Po$-names
+ (as computed by $M$).
+\end{definition}
+
+\textbf{You should think of a $\Po$-name as a ``fuzzy set''.}
+Here's the idea.
+Ordinary sets are collections of ordinary sets,
+so fuzzy sets should be collections of fuzzy sets.
+These fuzzy sets can be thought of like the Ghosts of Christmases yet to come:
+they represent things that might be, rather than things that are certain.
+In other words, they represent the possible futures of $M[G]$ for various choices of $G$.
+
+Every fuzzy set has an element $p \in \Po$ pinned to it.
+When it comes time to pass judgment,
+we pick a generic $G$ and filter through the universe of $\Po$-names.
+The fuzzy sets with an element of $G$ attached to it materialize into the real world,
+while the fuzzy sets with elements outside of $G$ fade from existence.
+The result is $M[G]$.
+
+\begin{example}[First few levels of the name hierarchy]
+ Let us compute
+ \begin{align*}
+ \Name_0 &= \varnothing \\
+ \Name_1 &= \PP(\varnothing \times \Po) \\
+ &= \{\varnothing\} \\
+ \Name_2 &= \PP(\{\varnothing\} \times \Po) \\
+ &= \PP\left( \left\{
+ \left<\varnothing, p\right>
+ \mid p \in \Po
+ \right\} \right).
+ \end{align*}
+\end{example}
+Compare the corresponding levels of the von Neumann universe:
+\[ V_0 = \varnothing, \; V_1 = \{\varnothing\}, \;
+V_2 = \left\{ \varnothing, \left\{ \varnothing \right\} \right\}. \]
+
+\begin{example}[Example of an interpretation]
+ As we said earlier, $\Name_1 = \{\varnothing\}$.
+ Now suppose
+ \[ \tau =
+ \left\{
+ \left<\varnothing, p_1\right>,
+ \left<\varnothing, p_2\right>,
+ \dots,
+ \left<\varnothing, p_n\right>
+ \right\}
+ \in \Name_2. \]
+ Then
+ \[
+ \tau^G
+ = \left\{ \varnothing \mid
+ \left<\varnothing, p\right> \in \tau \text{ and } p \in G\right\}
+ =
+ \begin{cases}
+ \{\varnothing\} & \text{if some } p_i \in G \\
+ \varnothing & \text{otherwise}.
+ \end{cases}
+ \]
+ In particular, remembering that $G$ is nonempty we see that
+ \[ \left\{ \tau^G \mid \tau \in \Name_2 \right\} = V_2. \]
+ In fact, this holds for any natural number $n$, not just $2$.
+\end{example}
+So, $M[G]$ and $M$ agree on finite sets.
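The recursion defining $\tau^G$ is short enough to transcribe directly. Here is a toy sketch (my own modeling, not from the text: names are frozensets of pairs, conditions are arbitrary string labels) that reproduces the computation above:

```python
def interp(tau, G):
    """The interpretation tau^G, by the transfinite recursion in the text:
    keep sigma^G for each pair <sigma, p> in tau whose condition p lies in G."""
    return frozenset(interp(sigma, G) for (sigma, p) in tau if p in G)

empty = frozenset()  # the unique name in Name_1; interprets to the empty set
tau = frozenset({(empty, "p1"), (empty, "p2")})  # a name in Name_2

# If G contains some p_i then tau^G = {emptyset}, and otherwise emptyset:
assert interp(tau, {"1", "p1"}) == frozenset({frozenset()})
assert interp(tau, {"1"}) == frozenset()
```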
+
+Now, we want to make sure $M[G]$ contains the elements of $M$.
+To do this, we take advantage of the fact that $1_\Po$ must be in $G$, and define
+for every $x \in M$ the set
+\[ \check x = \left\{ \left<\check y, 1_\Po\right> \mid y \in x \right\} \]
+by transfinite recursion.
+Basically, $\check x$ is just a copy of $x$ where we add check marks and tag every element with $1_\Po$.
+
+\begin{example}
+ Compute $\check 0 = 0$ and $\check 1 = \left\{ \left<\check 0, 1_\Po\right> \right\}$.
+ Thus \[ (\check 0)^G = 0 \quad\text{and}\quad (\check 1)^G = 1. \]
+\end{example}
+\begin{ques}
+ Show that in general, $(\check x)^G = x$.
+ (Rank induction.)
+\end{ques}
+
+However, we'd also like to cause $G$ to be in $M[G]$.
+In fact, we can write down the name exactly: we define
+\[ \dot \Po \defeq \left\{ \left<\check p, p\right> \mid p \in \Po \right\}. \]
+\begin{ques}
+ Show that $(\dot \Po)^G = G$.
+\end{ques}
+\begin{ques}
+ Verify that $M[G]$ is transitive:
+ that is, if $\sigma^G \in \tau^G \in M[G]$, show that $\sigma^G \in M[G]$.
+ (This is offensively easy.)
+\end{ques}
+
+In summary,
+\begin{moral}
+ $M[G]$ is a transitive model extending $M$ (it contains $G$).
+\end{moral}
+
+Moreover, it is reasonably well-behaved even if $G$ is just a filter.
+Let's see what we can get off the bat.
+\begin{lemma}[Properties obtained from filters]
+ Let $M$ be a transitive model of $\ZFC$.
+ If $G$ is a filter, then $M[G]$ is transitive
+ and satisfies $\Extensionality$, $\Foundation$,
+ $\EmptySet$, $\Infinity$, $\Pairing$, and $\Union$.
+\end{lemma}
+
+This leaves $\PowerSet$, $\Replacement$, and Choice.
+\begin{proof}
+ Since $M[G]$ is transitive, we get $\Extensionality$ and $\Foundation$ for free.
+ Then $\Infinity$ and $\EmptySet$ follow from $M \subseteq M[G]$.
+
+ For $\Pairing$, suppose $\sigma_1^G, \sigma_2^G \in M[G]$.
+ Then
+ \[ \sigma =
+ \left\{ \left<\sigma_1, 1_\Po\right>, \left<\sigma_2, 1_\Po\right> \right\}
+ \]
+ satisfies $\sigma^G = \{\sigma_1^G, \sigma_2^G\}$.
+ (Note that we used $M \vDash \Pairing$.)
+ $\Union$ is left as a problem, which you are encouraged to try now.
+\end{proof}
+Up to here, we don't need to know anything about when a sentence is true in $M[G]$;
+all we had to do was contrive some names like $\check x$ or
+$\left\{ \left<\sigma_1, 1_\Po\right>, \left<\sigma_2, 1_\Po\right> \right\}$
+to get the facts we wanted.
+But for the remaining axioms, we \emph{are} going to need to understand which sentences
+are true in $M[G]$.
+For this, we have to introduce the fundamental theorem of forcing.
+
+\section{Fundamental theorem of forcing}
+The model $M$ unfortunately has no idea what $G$ might be,
+only that it is some generic filter.\footnote{You might
+ say this is a good thing; here's why.
+ We're trying to show that $\neg \CH$ is consistent with $\ZFC$,
+ and we've started with a model $M$ of the real universe $V$.
+ But for all we know $\CH$ might be true in $V$ (what if $V=L$?),
+ in which case it would also be true of $M$.
+
+ Nonetheless, we boldly construct an extension $M[G]$ of the model $M$.
+ In order for it to behave differently from $M$, it has to be out of reach of $M$.
+ Conversely, if $M$ could compute everything about $M[G]$,
+ then $M[G]$ would have to conform to $M$'s beliefs.
+
+ That's why we worked so hard to make sure $G \in M[G]$ but $G \notin M$.}
+Nonetheless, we are going to define a relation $\Vdash$,
+called the \emph{forcing} relation.
+Roughly, we are going to write
+\[ p \Vdash \varphi(\sigma_1, \dots, \sigma_n) \]
+where $p \in \Po$, $\sigma_1, \dots, \sigma_n \in \Name^M$, if and only if:
+\begin{quote}
+ For \emph{any} generic $G$,
+ if $p \in G$,
+ then $M[G] \vDash \varphi(\sigma_1^G, \dots, \sigma_n^G)$.
+\end{quote}
+Note that $\Vdash$ is defined without reference to $G$:
+it is something that $M$ can see.
+We say $p$ \vocab{forces} the sentence $\varphi(\sigma_1, \dots, \sigma_n)$.
+And miraculously, we can define this relation in such a way that the converse is true:
+\emph{a sentence holds if and only if some $p$ forces it}.
+
+
+\begin{theorem}
+ [Fundamental theorem of forcing]
+ Suppose $M$ is a transitive model of ZF.
+ Let $\Po \in M$ be a poset, and let $G \subseteq \Po$ be an $M$-generic filter.
+ Then,
+ \begin{enumerate}[(1)]
+ \ii Consider $\sigma_1, \dots, \sigma_n \in \Name^M$.
+ Then
+ \[ M[G] \vDash \varphi[\sigma_1^G, \dots, \sigma_n^G] \]
+ if and only if there exists a condition $p \in G$
+ such that $p$ \emph{forces} the sentence $\varphi(\sigma_1, \dots, \sigma_n)$.
+ We denote this by $p \Vdash \varphi(\sigma_1, \dots, \sigma_n)$.
+ \ii This forcing relation is (uniformly) definable in $M$.
+ \end{enumerate}
+\end{theorem}
+
+I'll tell you how the definition works in the next section.
+
+\section{(Optional) Defining the relation}
+Here's how we're going to go.
+We'll define the most generous condition possible such that
+the forcing works in one direction ($p \Vdash \varphi(\sigma_1, \dots, \sigma_n)$ means
+$M[G] \vDash \varphi[\sigma_1^G, \dots, \sigma_n^G]$).
+We will then cross our fingers that the converse also works.
+
+We proceed by induction on the formula complexity.
+It turns out in this case that the atomic formulas (the base cases)
+are hardest and themselves require induction on ranks.
+
+For some motivation, let's consider how we should define
+$p \Vdash \tau_1 \in \tau_2$ given that we've
+already defined $p \Vdash \tau_1 = \tau_2$.
+We need to ensure this holds iff
+\[ \forall \text{$M$-generic $G$ with $p \in G$}:
+ \ M[G] \vDash \tau_1^G \in \tau_2^G. \]
+So it suffices to ensure that any generic $G \ni p$ hits a condition $q$ which forces $\tau_1^G$ to \emph{equal} a member $\tau^G$ of $\tau_2^G$.
+In other words, we want to choose the definition of $p \Vdash \tau_1 \in \tau_2$ to hold if and only if
+\[
+ \left\{ q \in \Po
+ \mid \exists \left<\tau, r\right> \in \tau_2
+ \left( q \le r \land q \Vdash(\tau=\tau_1) \right) \right\}
+\]
+is dense below $p$.
+In other words, if the set is dense below $p$, then the generic $G$ must hit some such $q$; since $q \le r$ and $G$ is upwards closed, we get $r \in G$ as well, meaning that $\left<\tau, r\right> \in \tau_2$ will get interpreted such that $\tau^G \in \tau_2^G$, and moreover the $q \in G$ will force $\tau_1 = \tau$.
+
+Now let's write down the definition\dots
+In what follows, the $\Vdash$ omits the $M$ and $\Po$.
+\begin{definition}
+ Let $M$ be a countable transitive model of ZFC.
+ Let $\Po \in M$ be a partial order.
+ For $p \in \Po$ and $\varphi(\sigma_1, \dots, \sigma_n)$
+ a formula in the language of set theory,
+ we write $p \Vdash \varphi(\sigma_1, \dots, \sigma_n)$
+ to mean the following, defined by induction on formula complexity plus rank.
+ \begin{enumerate}[(1)]
+ \ii $p \Vdash \tau_1 = \tau_2$ means
+ \begin{enumerate}[(i)]
+ \ii For all $\left<\sigma_1, q_1\right> \in \tau_1$ the set
+ \[ D_{\sigma_1, q_1}
+ \defeq
+ \left\{ r \mid
+ 		r \le q_1 \lthen \exists \left<\sigma_2, q_2\right> \in \tau_2 \left( r \le q_2 \land r \Vdash (\sigma_1 = \sigma_2) \right)\right\}
+ 	\]
+ 	is dense below $p$.
+ (This encodes ``$\tau_1 \subseteq \tau_2$''.)
+ \ii For all $\left<\sigma_2, q_2\right> \in \tau_2$,
+ the set $D_{\sigma_2, q_2}$ defined similarly is dense below $p$.
+ \end{enumerate}
+ \ii $p \Vdash \tau_1 \in \tau_2$ means
+ \[
+ \left\{ q \in \Po
+ \mid \exists \left<\tau, r\right> \in \tau_2
+ \left( q \le r \land q \Vdash(\tau=\tau_1) \right)
+ \right\} \]
+ is dense below $p$.
+ \ii $p \Vdash \varphi \land \psi$ means $p \Vdash \varphi$ and $p \Vdash \psi$.
+ \ii $p \Vdash \neg \varphi$ means $\forall q \le p$, $q \not\Vdash \varphi$.
+ \ii $p \Vdash \exists x \varphi(x, \sigma_1, \dots, \sigma_n)$ means that the set
+ \[
+ \left\{ q \mid \exists \tau \left( q \Vdash
+ \varphi(\tau, \sigma_1, \dots, \sigma_n ) \right)
+ \right\}
+ \]
+ is dense below $p$.
+ \end{enumerate}
+\end{definition}
+This is definable in $M$!
+All we've referred to is $\Po$ and names, which are in $M$.
+(Note that being dense is definable.)
+Actually, in parts (3) through (5) of the definition above,
+we use induction on formula complexity.
+But in the atomic cases (1) and (2) we are doing induction on the ranks of the names.
+
+So, the construction above gives us one direction (I've omitted tons of details, but\dots).
+
+Now, how do we get the converse: that a sentence is true if and only if something forces it?
+Well, by induction, we can actually show:
+\begin{lemma}[Consistency and Persistence]
+ We have
+ \begin{enumerate}[(1)]
+ \ii (Consistency) If $p \Vdash \varphi$ and $q \le p$ then $q \Vdash \varphi$.
+ \ii (Persistence) If $\left\{ q \mid q \Vdash \varphi \right\}$
+ is dense below $p$ then $p \Vdash \varphi$.
+ \end{enumerate}
+\end{lemma}
+You can prove both of these by induction on formula complexity.
+From this we get:
+\begin{corollary}[Completeness]
+ The set $\left\{ p \mid p \Vdash \varphi \text{ or } p \Vdash \neg\varphi \right\}$
+ is dense.
+\end{corollary}
+\begin{proof}
+ We claim that whenever $p \not\Vdash \varphi$ then
+ for some $\ol p \le p$ we have $\ol p \Vdash \neg\varphi$;
+ this will establish the corollary.
+
+ By the contrapositive of the previous lemma,
+ $\{q \mid q \Vdash \varphi\}$ is not dense below $p$,
+ meaning for some $\ol p \le p$, every $q \le \ol p$ gives $q \not\Vdash \varphi$.
+ By the definition of $p \Vdash \neg\varphi$,
+ we have $\ol p \Vdash \neg\varphi$.
+\end{proof}
+And this gives the converse: the $M$-generic $G$ has to hit some condition
+that passes judgment, one way or the other.
+This completes the proof of the fundamental theorem.
+
+\section{The remaining axioms}
+\begin{theorem}[The generic extension satisfies $\ZFC$]
+ Suppose $M$ is a transitive model of $\ZFC$.
+ Let $\Po \in M$ be a poset, and let $G \subseteq \Po$ be an $M$-generic filter.
+ Then \[ M[G] \vDash \ZFC. \]
+\end{theorem}
+\begin{proof}
+ We'll just do $\Comprehension$, as the other remaining axioms are similar.
+
+ Suppose $\sigma^G, \sigma_1^G, \dots, \sigma_n^G \in M[G]$
+ are a set and parameters, and
+ $\varphi(x,x_1, \dots, x_n)$ is a formula
+ in the language of set theory.
+ We want to show that the set
+ \[ A = \left\{
+ x \in \sigma^G \mid M[G] \vDash \varphi[x, \sigma_1^G, \dots, \sigma_n^G]
+ \right\} \]
+ is in $M[G]$; i.e.\ it is the interpretation of some name.
+
+ Note that every element of $\sigma^G$ is of the form $\rho^G$
+ for some $\rho \in \dom(\sigma)$ (a bit of abuse of notation here,
+ $\sigma$ is a bunch of pairs of names and $p$'s,
+ and the domain $\dom(\sigma)$ is just the set of names).
+ So by the fundamental theorem of forcing, we may write
+ \[ A =
+ \left\{ \rho^G \mid \rho \in \dom(\sigma)
+ \text{ and }
+ \exists p \in G
+ \left( p \Vdash \rho \in \sigma
+ \land \varphi(\rho, \sigma_1, \dots, \sigma_n)
+ \right)
+ \right\}.
+ \]
+ To show $A \in M[G]$ we have to write down a name $\tau$
+ such that the interpretation $\tau^G$ coincides with $A$.
+ We claim that
+ \[
+ \tau
+ =
+ \left\{ \left<\rho, p\right>
+ \in \dom(\sigma) \times \Po \mid
+ p \Vdash \rho \in \sigma
+ \land \varphi(\rho, \sigma_1, \dots, \sigma_n)
+ \right\}
+ \]
+ is the correct choice.
+ It's actually clear that $\tau^G = A$ by construction;
+ the ``content'' is showing that $\tau$ is actually a name in $M$,
+ which follows from $M \vDash \Comprehension$.
+
+ So really, the point of the fundamental theorem of forcing
+ is just to let us write down this $\tau$;
+ it lets us show that $\tau$ is in $\Name^M$
+ without actually referencing $G$.
+\end{proof}
+
+
+\section\problemhead
+\begin{problem}
+ For a filter $G$ and $M$ a transitive model of $\ZFC$,
+ show that $M[G] \vDash \Union$.
+\end{problem}
+
+\begin{problem}[Rasiowa-Sikorski lemma]
+ \label{prob:rslemma}
+ Show that in a countable transitive model $M$ of $\ZFC$,
+ one can find an $M$-generic filter on any partial order.
+ \begin{hint}
+ Let $D_1$, $D_2$, \dots be the dense sets (there are countably many of them).
+ \end{hint}
+ \begin{sol}
+ Since $M$ is countable, there are only countably many dense sets (they live in $M$!),
+ say \[ D_1, D_2, \ldots, D_n, \ldots \in M. \]
+ Using Choice,
+ let $p_1 \in D_1$, and then let $p_2 \le p_1$ such that $p_2 \in D_2$
+ (this is possible since $D_2$ is dense), and so on.
+ In this way we can inductively exhibit a chain
+ \[ p_1 \ge p_2 \ge p_3 \ge \dots \]
+ with $p_i \in D_i$ for every $i$.
+
+ Now we want to generate a filter from the $\{p_i\}$.
+ Just take the upwards closure -- let $G$ be the set of $q \in \Po$ such that $q \ge p_n$ for some $n$.
+ By construction, $G$ is a filter (this is actually trivial).
+ Moreover, $G$ intersects all the dense sets by construction.
+ \end{sol}
+\end{problem}
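The construction in this solution is concrete enough to run on a toy example. The following sketch is only an illustration under invented assumptions (a \emph{finite} poset of binary strings ordered by reverse extension, with finitely many dense sets), not an actual countable transitive model; it mimics the chain-then-upward-closure construction.

```python
# Toy illustration of the Rasiowa-Sikorski construction.
# Conditions: binary strings of length <= N; p <= q iff p extends q.
from itertools import product

N = 4
P = [''.join(bits) for k in range(N + 1) for bits in product('01', repeat=k)]
leq = lambda p, q: p.startswith(q)  # p is stronger than q iff p extends q

# "Dense" sets D_i = {strings of length >= i}: any condition can be
# strengthened into each D_i, mimicking the countably many dense sets in M.
D = [[p for p in P if len(p) >= i] for i in range(1, N + 1)]

# Build the descending chain p_1 >= p_2 >= ... with p_i in D_i.
chain = []
cur = ''  # the top element 1_P
for Di in D:
    cur = next(p for p in Di if leq(p, cur))  # strengthen cur into D_i
    chain.append(cur)

# G is the upward closure of the chain: a filter meeting every D_i.
G = {q for q in P if any(leq(p, q) for p in chain)}
assert all(any(p in G for p in Di) for Di in D)           # G meets every D_i
assert all(leq(p, q) or leq(q, p) for p in G for q in G)  # G is a chain here
```

Of course, in the actual proof the poset is infinite and lives inside $M$; the point is only that the enumerate-and-strengthen loop is exactly the one above.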
+
+%\begin{exercise}
+% Show that $\rank \sigma^G \le \nrank(\sigma)$ for any $\sigma \in \Name^M$.
+%\end{exercise}
+
+%\begin{exercise}
+% Check that
+% \begin{enumerate}[(1)]
+% \ii $(\check x)^G = x$.
+% \ii $(\dot G)^G = G$.
+% \end{enumerate}
+%\end{exercise}
diff --git a/books/napkin/forms.tex b/books/napkin/forms.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c44c266e1b4c04d1d4727628c3505a9af0d97d74
--- /dev/null
+++ b/books/napkin/forms.tex
@@ -0,0 +1,506 @@
+\chapter{Differential forms}
+In this chapter, all vector spaces are finite-dimensional
+real inner product spaces.
+We first start by (non-rigorously) drawing pictures
+of all the things that we will define in this chapter.
+Then we re-do everything again in its proper algebraic context.
+
+\section{Pictures of differential forms}
+Before defining a differential form,
+we first draw some pictures.
+The key thing to keep in mind is
+\begin{moral}
+ ``The definition of a differential form is:
+ something you can integrate.'' \\ --- Joe Harris
+\end{moral}
+
+We'll assume that all functions are \vocab{smooth},
+i.e.\ infinitely differentiable.
+
+Let $U \subseteq V$ be an open set of a vector space $V$.
+Suppose that we have a function $f : U \to \RR$, i.e.\
+we assign a value to every point of $U$.
+\begin{center}
+ \begin{asy}
+ bigblob("$U$");
+ dot("$3$", (-2,1), red);
+ dot("$\sqrt2$", (-1,-1), red);
+ dot("$-1$", (2,2), red);
+ dot("$0$", (-3,-3), red);
+ \end{asy}
+\end{center}
+\begin{definition}
+ A \vocab{$0$-form} $f$ on $U$ is just a smooth function $f : U \to \RR$.
+\end{definition}
+Thus, if we specify a finite set $S$ of points in $U$
+we can ``integrate'' over $S$ by just adding up the values
+of the points:
+\[ 0 + \sqrt 2 + 3 + (-1) = 2 + \sqrt2. \]
+So, \textbf{a $0$-form $f$ lets us integrate over $0$-dimensional ``cells''}.
+
+But this is quite boring, because as we know we like
+to integrate over things like curves, not single points.
+So, by analogy, we want a $1$-form to let us integrate
+over $1$-dimensional cells: i.e.\ over curves.
+What information would we need to do that?
+To answer this, let's draw a picture of a curve $c$,
+which can be thought of as a function $c : [0,1] \to U$.
+\begin{center}
+ \begin{asy}
+ bigblob("$U$");
+ pair a = (-2,-2);
+ pair b = (3,0);
+ pair p = (0,1);
+ pair q = (2,0);
+ label("$c$", q, dir(45), heavygreen);
+ dot(a, heavygreen);
+ dot(b, heavygreen);
+ draw(a..p..q..b, heavygreen);
+ dot("$p$", p, dir(90), blue);
+ pair v = p+1.5*dir(-10);
+ label("$v$", v, dir(50), blue);
+ draw(p--v, blue, EndArrow);
+ \end{asy}
+\end{center}
+We might think that we could get away
+with just specifying a number on every point of $U$
+(i.e.\ a $0$-form $f$), and then somehow ``add up''
+all the values of $f$ along the curve.
+We'll use this idea in a moment, but we can in fact do something more general.
+Notice how when we walk along a smooth curve, at every point $p$
+we also have some extra information: a \emph{tangent vector} $v$.
+So, we can define a $1$-form $\alpha$ as follows.
+A $0$-form just took a point and gave a real number,
+but \textbf{a $1$-form will take both a point \emph{and} a tangent
+vector at that point, and spit out a real number.}
+So a $1$-form $\alpha$ is a smooth function which takes a pair $(p,v)$,
+where $v$ is a tangent vector at $p$, and returns a real number. Hence
+\[ \alpha : U \times V \to \RR. \]
+
+Actually, for any point $p$, we will require that $\alpha(p,-)$
+is a linear function in terms of the vectors:
+i.e.\ we want for example that $\alpha(p,2v) = 2\alpha(p,v)$.
+So it is more customary to think of $\alpha$ as:
+\begin{definition}
+ A \vocab{$1$-form} $\alpha$ is a smooth function
+ \[ \alpha : U \to V^\vee. \]
+\end{definition}
+Like with $Df$, we'll use $\alpha_p$ instead of $\alpha(p)$.
+So, at every point $p$, $\alpha_p$ is some linear functional
+that eats tangent vectors at $p$, and spits out a real number.
+Thus, we think of $\alpha_p$ as an element of $V^\vee$;
+\[ \alpha_p \in V^\vee. \]
+
+Next, we draw pictures of $2$-forms.
+This should, for example, let us integrate over a blob
+(a so-called $2$-cell) of the form
+\[ c : [0,1] \times [0,1] \to U \]
+i.e.\ for example, a square in $U$.
+In the previous example with $1$-forms,
+we looked at tangent vectors to the curve $c$.
+This time, at points we will look at \emph{pairs} of tangent vectors
+in $U$: in the same sense that lots of tangent vectors
+approximate the entire curve, lots of tiny squares
+will approximate the big square in $U$.
+\begin{center}
+ \begin{asy}
+ bigblob("$U$");
+ filldraw( (-2,-2)--(-2,2)--(2,2)--(2,-2)--cycle,
+ opacity(0.4) + orange, red);
+ label("$c$", (2,2), dir(45), red);
+ for (real t = -1; t < 2; t += 1) {
+ draw( (-2,t)--(2,t), red );
+ draw( (t,-2)--(t,2), red );
+ }
+ pair p = (-1, -1);
+ dot("$p$", p, dir(225), blue);
+ pair v = p + dir(90);
+ pair w = p + dir(0);
+ draw(p--v, blue, EndArrow);
+ draw(p--w, blue, EndArrow);
+ label("$v$", v, dir(135), blue);
+ label("$w$", w, dir(-45), blue);
+ \end{asy}
+\end{center}
+So what should a $2$-form $\beta$ be?
+As before, it should start by taking a point $p \in U$,
+so $\beta_p$ is now a linear functional:
+but this time, it should be a linear map on two vectors $v$ and $w$.
+Here $v$ and $w$ are not tangent so much as their span cuts out
+a small parallelogram. So, the right thing to do is in fact consider
+\[ \beta_p \in V^\vee \wedge V^\vee. \]
+That is, to use the wedge product to get a handle on
+the idea that $v$ and $w$ span a parallelogram.
+Another valid choice would have been $(V \wedge V)^\vee$;
+in fact, the two are isomorphic, but it will be more convenient
+to use the former.
+
+\section{Pictures of exterior derivatives}
+Next question:
+\begin{moral}
+ How can we build a $1$-form from a $0$-form?
+\end{moral}
+Let $f$ be a $0$-form on $U$; thus, we have a function $f : U \to \RR$.
+Then in fact there is a very natural $1$-form on $U$ arising
+from $f$, appropriately called $df$.
+Namely, given a point $p$ and a tangent vector $v$,
+the differential form $(df)_p$ returns the \emph{change in $f$ along $v$}.
+In other words, it's just the total derivative $(Df)_p(v)$.
+\begin{center}
+ \begin{asy}
+ bigblob("$U$");
+ dot("$3$", (-2,1), red);
+ dot("$\sqrt2$", (-1,-1), red);
+ dot("$-1$", (2,2), red);
+ dot("$0$", (-3,-3), red);
+ draw((-1,-1)--(0,1), blue, EndArrow);
+ label("$\sqrt2 + \varepsilon$", (0,1), dir(90), blue);
+ label("$v$", (-0.5, 0), dir(0), blue);
+ \end{asy}
+\end{center}
+Thus, $df$ measures ``the change in $f$''.
+
+Now, even if I haven't defined integration yet,
+given a curve $c$ from a point $a$ to $b$, what do you think
+\[ \int_c df \]
+should be equal to?
+Remember that $df$ is the $1$-form that measures
+``infinitesimal change in $f$''.
+So if we add up all the change in $f$ along a path from $a$ to $b$,
+then the answer we get should just be
+\[ \int_c df = f(b) - f(a). \]
+This is the first case of something we call Stokes' theorem.
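Even before integration is formally defined, this can be sanity-checked numerically. The sketch below is a numpy aside (the particular $f$ and curve are invented for illustration): it approximates $\int_c df$ for $f(x,y) = x^2 + y$ along a quarter circle by a Riemann sum and compares against $f(b) - f(a)$.

```python
# Riemann-sum check that integrating df along a curve gives f(b) - f(a).
import numpy as np

f = lambda x, y: x**2 + y          # a 0-form on R^2 (invented example)
t = np.linspace(0, np.pi / 2, 100_001)
x, y = np.cos(t), np.sin(t)        # quarter circle from (1, 0) to (0, 1)

# df = 2x dx + dy, summed over the small displacement vectors of the curve
integral = np.sum(2 * x[:-1] * np.diff(x) + np.diff(y))

assert np.isclose(integral, f(x[-1], y[-1]) - f(x[0], y[0]), atol=1e-4)
```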
+
+Generalizing, how should we get from a $1$-form to a $2$-form?
+At each point $p$, the $2$-form $\beta$ gives a $\beta_p$
+which takes in a ``parallelogram'' and returns a real number.
+Now suppose we have a $1$-form $\alpha$.
+Then along each of the edges of a parallelogram,
+with an appropriate sign convention the $1$-form $\alpha$ gives
+us a real number.
+So, given a $1$-form $\alpha$, we define $d\alpha$
+to be the $2$-form that takes in a parallelogram
+spanned by $v$ and $w$,
+and returns \textbf{the measure of $\alpha$ along the boundary}.
+
+Now, what happens if you integrate $d\alpha$ along the entire square $c$?
+The right picture is that, if we think of each little square
+as making up the big square, then the adjacent boundaries cancel out,
+and all we are left with is the main boundary.
+This is again just a case of the so-called Stokes' theorem.
+
+\begin{center}
+ \begin{asy}
+ bigblob("$U$");
+ filldraw( (-2,-2)--(-2,2)--(2,2)--(2,-2)--cycle,
+ opacity(0.4) + orange, red);
+ label("$c$", (2,2), dir(45), red);
+ for (real t = -1; t < 2; t += 1) {
+ draw( (-2,t)--(2,t), red );
+ draw( (t,-2)--(t,2), red );
+ }
+ pair p = (-1, -1);
+ dot("$p$", p, dir(225), blue);
+ pair v = p + dir(90);
+ pair w = p + dir(0);
+ pair x = w + v - p;
+ draw(p--w, blue, EndArrow);
+ draw(w--x, blue, EndArrow);
+ draw(x--v, blue, EndArrow);
+ draw(v--p, blue, EndArrow);
+ \end{asy}
+ \hspace{4em}
+ \begin{minipage}[t]{6.2cm}
+ \includegraphics[width=6cm]{media/stokes-patch.png}
+ \\ \scriptsize Image from \cite{img:stokes}
+ \end{minipage}
+\end{center}
+
+
+\section{Differential forms}
+\prototype{Algebraically,
+ something that looks like $f \ee_1^\vee \wedge \ee_2^\vee + \dots$,
+ and geometrically, see the previous section.}
+
+Let's now get a handle on what $dx$ means.
+Fix a real vector space $V$ of dimension $n$,
+and let $\ee_1$, \dots, $\ee_n$ be a standard basis.
+Let $U$ be an open set.
+
+\begin{definition}
+ We define a \vocab{differential $k$-form} $\alpha$ on $U$
+ to be a smooth (infinitely differentiable) map
+ $\alpha : U \to \Lambda^k(V^\vee)$.
+ (Here $\Lambda^k(V^\vee)$ is the $k$th wedge power of $V^\vee$.)
+\end{definition}
+
+Like with $Df$, we'll use $\alpha_p$ instead of $\alpha(p)$.
+
+\begin{example}
+ [$k$-forms for $k=0,1$]
+ \listhack
+ \begin{enumerate}[(a)]
+ \item A $0$-form is just a function $U \to \RR$.
+ \item A $1$-form is a function $U \to V^\vee$.
+ For example,
+ the total derivative $Df$ of a function $f \colon U \to \RR$ is a $1$-form.
+ \item Let $V = \RR^3$ with standard basis $\ee_1$, $\ee_2$, $\ee_3$.
+ Then a typical $2$-form is given by
+ \[
+ \alpha_p
+ =
+ f(p) \cdot \ee_1^\vee \wedge \ee_2^\vee
+ + g(p) \cdot \ee_1^\vee \wedge \ee_3^\vee
+ + h(p) \cdot \ee_2^\vee \wedge \ee_3^\vee
+ \in \Lambda^2(V^\vee)
+ \]
+ where $f,g,h : V \to \RR$ are smooth functions.
+ \end{enumerate}
+\end{example}
+
+Now, by the projection principle (\Cref{thm:project_principle}) we only have to specify
+a function on each of $\binom nk$ basis elements of $\Lambda^k(V^\vee)$.
+So, take any basis $\{e_i\}$ of $V$, and
+take the usual basis for $\Lambda^k(V^\vee)$ of elements
+\[ e_{i_1}^\vee \wedge e_{i_2}^\vee \wedge \dots \wedge e_{i_k}^\vee. \]
+Thus, a general $k$-form takes the shape
+\[ \alpha_p = \sum_{1 \le i_1 < \dots < i_k \le n}
+ f_{i_1, \dots, i_k}(p) \cdot
+ e_{i_1}^\vee \wedge e_{i_2}^\vee \wedge \dots \wedge e_{i_k}^\vee. \]
+Since this is a huge nuisance to write, we will abbreviate this to just
+\[ \alpha = \sum_I f_I \cdot de_I \]
+where we understand the sum runs over $I = (i_1, \dots, i_k)$,
+and $de_I$ represents $e_{i_1}^\vee \wedge \dots \wedge e_{i_k}^\vee$.
+
+Now that we have an element of $\Lambda^k(V^\vee)$, what can it do?
+Well, first let me get the definition on the table, then tell you what it's doing.
+\begin{definition}
+ For linear functions $\xi_1, \dots, \xi_k \in V^\vee$
+ and vectors $v_1, \dots, v_k \in V$, set
+ \[
+ (\xi_1 \wedge \dots \wedge \xi_k)(v_1, \dots, v_k)
+ \defeq
+ \det
+ \begin{bmatrix}
+ \xi_1(v_1) & \dots & \xi_1(v_k) \\
+ \vdots & \ddots & \vdots \\
+ \xi_k(v_1) & \dots & \xi_k(v_k)
+ \end{bmatrix}.
+ \]
+ You can check that this is well-defined
+ under e.g.\ $\xi_1 \wedge \xi_2 = -\xi_2 \wedge \xi_1$ and so on.
+\end{definition}
+
+\begin{example}
+ [Evaluation of a differential form]
+ Set $V = \RR^3$.
+ Suppose that at some point $p$, the $2$-form $\alpha$ returns
+ \[ \alpha_p = 2 \ee_1^\vee \wedge \ee_2^\vee + \ee_1^\vee \wedge \ee_3^\vee. \]
+ Let $v_1 = 3\ee_1 + \ee_2 + 4\ee_3$ and $v_2 = 8\ee_1 + 9\ee_2 + 5\ee_3$.
+ Then
+ \[
+ \alpha_p(v_1, v_2)
+ =
+ 2\det \begin{bmatrix}
+ 3 & 8 \\ 1 & 9 \end{bmatrix}
+ +
+ \det \begin{bmatrix}
+ 3 & 8 \\ 4 & 5 \end{bmatrix}
+ = 21.
+ \]
+\end{example}
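The arithmetic in this example is easy to verify mechanically; here is a short check in plain Python (the helper `wedge2` is ad hoc, just encoding the determinant definition above).

```python
# Evaluate alpha_p = 2 e1* ^ e2* + e1* ^ e3* on v1, v2 via determinants.
def wedge2(i, j, v, w):
    # (e_i* ^ e_j*)(v, w) = det [[v_i, w_i], [v_j, w_j]]
    return v[i] * w[j] - w[i] * v[j]

v1 = (3, 1, 4)  # 3 e1 + e2 + 4 e3
v2 = (8, 9, 5)  # 8 e1 + 9 e2 + 5 e3

value = 2 * wedge2(0, 1, v1, v2) + wedge2(0, 2, v1, v2)
assert value == 21  # matches 2 * 19 + (-17)
```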
+
+What does this definition mean?
+One way to say it is that
+\begin{moral}
+ If I walk to a point $p \in U$,
+ a $k$-form $\alpha$ will take in $k$ vectors $v_1, \dots, v_k$
+ and spit out a number, which is to be interpreted as a (signed) volume.
+\end{moral}
+
+Picture:
+\begin{center}
+ \begin{asy}
+ bigblob("$U$");
+ pair p = (-2,-2);
+ dot("$p$", p, dir(225), red);
+ pair p1 = p + 1.4*dir(120);
+ pair p2 = p + 1.7*dir(10);
+ draw(p--p1, red, EndArrow);
+ draw(p--p2, red, EndArrow);
+ label("$v_1$", p1, dir(p1-p), red);
+ label("$v_2$", p2, dir(p2-p), red);
+ label("$\alpha_p(v_1, v_2) \in \mathbb R$", p+dir(45)*3);
+ \end{asy}
+\end{center}
+
+In other words, at every point $p$, we get a function $\alpha_p$.
+Then I can feed in $k$ vectors to $\alpha_p$ and get a number,
+which I interpret as a signed volume of the parallelepiped spanned by the $\{v_i\}$'s
+in some way (e.g.\ the flux of a force field).
+That's why $\alpha_p$ as a ``function'' is contrived to lie in the wedge product:
+this ensures that the notion of ``volume'' makes sense, so that for example,
+the equality $\alpha_p(v_1, v_2) = -\alpha_p(v_2, v_1)$ holds.
+
+This is what makes differential forms so fit for integration.
+
+
+\section{Exterior derivatives}
+\prototype{Possibly $dx_1 = \ee_1^\vee$.}
+We now define the exterior derivative $df$ that we gave
+pictures of earlier in this chapter.
+It turns out that the exterior derivative is easy to compute
+given explicit coordinates to work with.
+
+First, given a function $f : U \to \RR$,
+we define
+\[ df \defeq Df = \sum_i \frac{\partial f}{\partial e_i} e_i^\vee. \]
+In particular, suppose $V = \RR^n$ and $f(x_1, \dots, x_n) = x_1$
+(i.e.\ $f = \ee_1^\vee$). Then:
+\begin{ques}
+ Show that for any $p \in U$, \[ \left( d(\ee_1^\vee) \right)_p = \ee_1^\vee. \]
+\end{ques}
+
+\begin{abuse}
+ Unfortunately, someone somewhere decided
+ it would be a good idea to use ``$x_1$'' to denote $\ee_1^\vee$
+ (because \emph{obviously}\footnote{Sarcasm.} $x_1$ means
+ ``the function that takes $(x_1, \dots, x_n) \in \RR^n$ to $x_1$'')
+ and then decided that \[ dx_1 \defeq \ee_1^\vee. \]
+ This notation is so entrenched that I have no choice
+ but to grudgingly accept it.
+ Note that it's not even right,
+ since technically it's $(dx_1)_p = \ee_1^\vee$; $dx_1$ is a $1$-form.
+ \label{abuse:dx}
+\end{abuse}
+\begin{remark}
+ This is the reason why we use the notation $\frac{df}{dx}$ in calculus now:
+ given, say, $f : \RR \to \RR$ by $f(x) = x^2$, it is indeed true that
+ \[ df = 2x \cdot \ee_1^\vee = 2x \cdot dx \]
+ and so by (more) abuse of notation we write $df/dx = 2x$.
+\end{remark}
+
+More generally, we can define the \vocab{exterior derivative}
+in terms of our basis $e_1$, \dots, $e_n$ as follows:
+if $\alpha = \sum_I f_I de_I$ then we set
+\[ d\alpha \defeq \sum_I df_I \wedge de_I
+ = \sum_I \sum_j \fpartial{f_I}{e_j} de_j \wedge de_I. \]
+This doesn't depend on the choice of basis.
+
+\begin{example}[Computing some exterior derivatives]
+ Let $V = \RR^3$ with standard basis $\ee_1$, $\ee_2$, $\ee_3$.
+ Let $f(x,y,z) = x^4 + y^3 + 2xz$.
+ Then we compute
+ \[ df = Df = (4x^3+2z) \; dx + 3y^2 \; dy + 2x \; dz. \]
+ Next, we can evaluate $d(df)$ as prescribed: it is
+ \begin{align*}
+ d^2f &= (12x^2 \; dx + 2 dz) \wedge dx + (6y \; dy) \wedge dy
+ + 2(dx \wedge dz) \\
+ &= 12x^2 (dx \wedge dx) + 2(dz \wedge dx) + 6y (dy \wedge dy) + 2(dx \wedge dz) \\
+ &= 2(dz \wedge dx) + 2(dx \wedge dz) \\
+ &= 0.
+ \end{align*}
+ So surprisingly, $d^2f$ is the zero map.
+ Here, we have exploited \Cref{abuse:dx} for the first time,
+ in writing $dx$, $dy$, $dz$.
+\end{example}
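The cancellation in this computation is just the symmetry of mixed partials, which a computer algebra system can confirm. Here is a sketch using sympy (an outside tool, not part of the text): each coefficient of $dx_i \wedge dx_j$ in $d^2 f$ is a difference of mixed partials.

```python
# Check d(df) = 0 for f = x^4 + y^3 + 2xz.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**4 + y**3 + 2*x*z
coords = (x, y, z)

# Coefficient of dx_i ^ dx_j in d^2 f, for each i < j.
coeffs = [sp.diff(f, coords[j], coords[i]) - sp.diff(f, coords[i], coords[j])
          for i in range(3) for j in range(i + 1, 3)]
assert all(c == 0 for c in coeffs)  # so d^2 f = 0
```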
+And in fact, this is always true in general:
+\begin{theorem}[Exterior derivative vanishes]
+ \label{thm:dd_zero}
+ Let $\alpha$ be any $k$-form.
+ Then $d^2(\alpha) = 0$.
+ Even more succinctly, \[ d^2 = 0. \]
+\end{theorem}
+The proof is left as \Cref{prob:dd_zero}.
+\begin{exercise}
+ Compare the statement $d^2 = 0$ to the geometric
+ picture of a $2$-form given at the beginning of this chapter.
+ Why does this intuitively make sense?
+\end{exercise}
+
+Here are some other properties of $d$:
+\begin{itemize}
+ \ii As we just saw, $d^2 = 0$.
+ \ii For a $k$-form $\alpha$ and $\ell$-form $\beta$, one can show that
+ \[ d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^k (\alpha \wedge d\beta). \]
+ \ii If $f \colon U \to \RR$ is smooth, then $df = Df$.
+\end{itemize}
+In fact, one can show that $d$ as defined above is
+the \emph{unique} map sending $k$-forms to $(k+1)$-forms
+with these properties.
+So, one way to define $d$ is to take as axioms
+the bulleted properties above
+and then declare $d$ to be the unique solution to this functional equation.
+In any case, this tells us that our definition of $d$
+does not depend on the basis chosen.
+
+Recall that $d\alpha$ measures $\alpha$ along the boundary.
+In that sense, $d^2 = 0$ is saying something like
+``the boundary of the boundary is empty''.
+We'll make this precise when we see Stokes' theorem in the next chapter.
+
+\section{Closed and exact forms}
+Let $\alpha$ be a $k$-form.
+\begin{definition}
+ We say $\alpha$ is \vocab{closed} if $d\alpha = 0$.
+\end{definition}
+\begin{definition}
+ We say $\alpha$ is \vocab{exact} if for some $(k-1)$-form $\beta$,
+ $d\beta = \alpha$. If $k = 0$, $\alpha$ is exact only when $\alpha = 0$.
+\end{definition}
+\begin{ques}
+ Show that exact forms are closed.
+\end{ques}
+
+A natural question arises: are there closed forms
+which are not exact?
+Surprisingly, the answer to this question is tied to topology.
+Here is one important example.
+
+\begin{example}
+ [The angle form]
+ \label{ex:angle_form}
+ Let $U = \RR^2 \setminus \{0\}$,
+ and let $\theta(p)$ be the angle formed by the $x$-axis
+ and the line from the origin to $p$.
+
+ The $1$-form $\alpha : U \to (\RR^2)^\vee$ defined by
+ \[ \alpha = \frac{-y \; dx + x \; dy}{x^2+y^2} \]
+ is called the \vocab{angle form}:
+ given $p \in U$ it measures the change in angle $\theta(p)$
+ along a tangent vector.
+ So intuitively, ``$\alpha = d\theta$''.
+ Indeed, one can check directly that the angle form is closed.
+
+ However, $\alpha$ is not exact: there is no global smooth
+ function $\theta : U \to \RR$ having $\alpha$ as a derivative.
+ This reflects the fact that one can actually perform
+ a full $2\pi$ rotation around the origin, i.e.\ $\theta$
+ only makes sense mod $2\pi$.
+ Thus existence of the angle form $\alpha$ reflects
+ the possibility of ``winding'' around the origin.
+\end{example}
+
+So the key idea is that the failure of a closed form to be exact
+corresponds quite well with ``holes'' in the space:
+the same information that homotopy and homology groups are trying to capture.
+To draw another analogy, in complex analysis Cauchy-Goursat
+only works when $U$ is simply connected.
+The ``hole'' in $U$ is being detected by the existence of a form $\alpha$.
+The so-called de Rham cohomology will make this relation explicit.
+
+\section\problemhead
+\begin{problem}
+ Show directly that the angle form
+ \[ \alpha = \frac{-y \; dx + x \; dy}{x^2+y^2} \]
+ is closed.
+\end{problem}
+
+\begin{problem}
+ \label{prob:dd_zero}
+ Establish \Cref{thm:dd_zero}, which states that $d^2 = 0$.
+ \begin{hint}
+ This is just a summation.
+ You will need the fact that mixed partials are symmetric.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/fourier.tex b/books/napkin/fourier.tex
new file mode 100644
index 0000000000000000000000000000000000000000..375600ceb8eb77f3b6af3274bbaa2cdad321ab21
--- /dev/null
+++ b/books/napkin/fourier.tex
@@ -0,0 +1,625 @@
+\chapter{Bonus: Fourier analysis}
+\label{ch:fourier}
+Now that we've worked hard to define abstract inner product spaces,
+I want to give an (optional) application:
+how to set up Fourier analysis correctly, using this language.
+
+For fun, I also prove a form of Arrow's Impossibility Theorem
+using binary Fourier analysis.
+
+In what follows, we let $\TT = \RR/\ZZ$ denote the ``circle group'',
+thought of as the additive group of ``real numbers modulo $1$''.
+There is a canonical map $e \colon \TT \to \CC$ sending $\TT$ to the
+complex unit circle, given by
+\[ e(\theta) = \exp(2\pi i \theta). \]
+
+\section{Synopsis}
+Suppose we have a domain $Z$ and are interested in functions $f \colon Z \to \CC$.
+Naturally, the set of such functions forms a complex vector space.
+We like to equip the set of such functions
+with a positive definite \emph{inner product}.
+% usually something like $\left< f,g \right> = \EE_{x \in Z} f(x) \ol{g(x)}$.
+%which makes the set of such functions into a normed vector space.
+%(Here $\EE$ is an ``average'', which is a finite sum if $|Z| < \infty$
+%but otherwise is usually an integral.)
+%In particular, $\left< f,f\right>$ is the average of $|f(x)|^2$;
+%and thus gives a positive definite inner form.
+
+The idea of Fourier analysis is to then select an \emph{orthonormal basis}
+for this set of functions, say $(e_\xi)_{\xi}$,
+which we call the \vocab{characters};
+the indices $\xi$ are called \vocab{frequencies}.
+In that case, since we have a basis, every function $f : Z \to \CC$
+becomes a sum
+\[ f(x) = \sum_{\xi} \wh f(\xi) e_\xi \]
+where $\wh f(\xi)$ are complex coefficients of the basis;
+appropriately we call $\wh f$ the \vocab{Fourier coefficients}.
+The variable $x \in Z$ is referred to as the \vocab{physical} variable.
+This is generally good because the characters are deliberately chosen
+to be nice ``symmetric'' functions,
+like sine or cosine waves or other periodic functions.
+Thus we decompose an arbitrarily complicated function into a sum of nice ones.
+
+\section{A reminder on Hilbert spaces}
+For convenience, we record a few facts about orthonormal bases.
+\begin{proposition}
+ [Facts about orthonormal bases]
+ \label{prop:orthonormal}
+ Let $V$ be a complex Hilbert space
+ with inner form $\left< -,-\right>$
+ and suppose $x = \sum_\xi a_\xi e_\xi$ and $y = \sum_\xi b_\xi e_\xi$
+ where $e_\xi$ are an orthonormal basis.
+ Then
+ \begin{align*}
+ \left< x,x \right> &= \sum_\xi |a_\xi|^2 \\
+ a_\xi &= \left< x, e_\xi \right> \\
+ \left< x,y \right> &= \sum_\xi a_\xi \ol{b_\xi}.
+ \end{align*}
+\end{proposition}
+\begin{exercise}
+ Prove all of these.
+ (You don't need any of the preceding section,
+ it's only there to motivate the notation with lots of scary $\xi$'s.)
+\end{exercise}
+
+In what follows,
+most of the examples will be of finite-dimensional inner product spaces
+(which are thus Hilbert spaces),
+but the example of ``square-integrable functions''
+will actually be an infinite-dimensional example.
+Fortunately, as I alluded to earlier,
+this is no cause for alarm
+and you can mostly close your eyes and not worry about infinity.
+
+\section{Common examples}
+\subsection{Binary Fourier analysis on $\{\pm1\}^n$}
+Let $Z = \{\pm 1\}^n$ for some positive integer $n$,
+so we are considering functions $f(x_1, \dots, x_n)$ whose $n$ arguments are each $\pm 1$.
+Then the functions $Z \to \CC$ form a $2^n$-dimensional vector space $\CC^Z$,
+and we endow it with the inner form
+\[ \left< f,g \right> = \frac{1}{2^n} \sum_{x \in Z} f(x) \ol{g(x)}. \]
+In particular,
+\[ \left< f,f \right>
+ = \frac{1}{2^n} \sum_{x \in Z} \left\lvert f(x) \right\rvert^2 \]
+is the average of the squares;
+this establishes also that $\left< -,-\right>$ is positive definite.
+
+In that case, the \vocab{multilinear polynomials} form a basis of $\CC^Z$,
+namely the polynomials
+\[ \chi_S(x_1, \dots, x_n) = \prod_{s \in S} x_s. \]
+\begin{exercise}
+ Show that they're actually orthonormal under $\left< -,-\right>$.
+ This proves they form a basis, since there are $2^n$ of them.
+\end{exercise}
+Thus our frequency set is actually the subsets $S \subseteq \{1, \dots, n\}$,
+and we have a decomposition
+\[ f = \sum_{S \subseteq \{1, \dots, n\}} \wh f(S) \chi_S. \]
+\begin{example}
+ [An example of binary Fourier analysis]
+ Let $n = 2$.
+ Then binary functions $\{ \pm 1\}^2 \to \CC$ have a basis
+ given by the four polynomials
+ \[ 1, \quad x_1, \quad x_2, \quad x_1x_2. \]
+ For example, consider the function $f$
+ which is $1$ at $(1,1)$ and $0$ elsewhere.
+ Then we can put
+ \[ f(x_1, x_2) = \frac{x_1+1}{2} \cdot \frac{x_2+1}{2}
+ = \frac14 \left( 1 + x_1 + x_2 + x_1x_2 \right). \]
+ So the Fourier coefficients are $\wh f(S) = \frac 14$
+ for each of the four $S$'s.
+\end{example}
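If you want to see this decomposition happen by hand, here is a small numerical sketch (the helper names are mine, not part of the text) which recovers the four coefficients above from $\wh f(S) = \left< f, \chi_S \right>$:

```python
from itertools import product

def chi(S, x):
    """The multilinear character chi_S(x) = prod_{s in S} x_s."""
    out = 1
    for s in S:
        out *= x[s]
    return out

def fourier_coeffs(f, n):
    """hat f(S) = <f, chi_S> = 2^{-n} sum_x f(x) chi_S(x) for every S.

    (chi_S is real-valued, so no conjugation is needed.)
    """
    cube = list(product([1, -1], repeat=n))
    subsets = [tuple(i for i in range(n) if mask >> i & 1)
               for mask in range(1 << n)]
    return {S: sum(f(x) * chi(S, x) for x in cube) / 2 ** n
            for S in subsets}

# The function from the example: 1 at (1, 1) and 0 elsewhere.
f = lambda x: 1 if x == (1, 1) else 0
coeffs = fourier_coeffs(f, 2)   # every coefficient comes out to 1/4
```

Running this reproduces $\wh f(S) = \frac14$ for all four subsets $S$.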
+This notion is useful in particular for
+binary functions $f : \{\pm1\}^n \to \{\pm1\}$;
+for these functions (and products thereof),
+we always have $\left< f,f \right> = 1$.
+
+It is worth noting that the frequency $\varnothing$ plays a special role:
+\begin{exercise}
+ Show that
+ \[ \wh f(\varnothing) = \frac{1}{|Z|} \sum_{x \in Z} f(x). \]
+\end{exercise}
+
+\subsection{Fourier analysis on finite groups $Z$}
+This time, suppose we have a finite abelian group $Z$,
+and consider functions $Z \to \CC$;
+this is a $|Z|$-dimensional vector space.
+The inner product is the same as before:
+\[ \left< f,g \right> = \frac{1}{|Z|} \sum_{x \in Z} f(x) \ol{g(x)}. \]
+
+To proceed, we'll need to be able to multiply two elements of $Z$.
+This is a bit of a nuisance since it actually won't really
+matter what map I pick, so I'll move briskly;
+feel free to skip most or all of the remaining paragraph.
+\begin{definition}
+We select a \emph{symmetric non-degenerate bilinear form}
+\[ \cdot \colon Z \times Z \to \TT \]
+satisfying the following properties:
+\begin{itemize}
+ \ii $\xi \cdot (x_1 + x_2) = \xi \cdot x_1 + \xi \cdot x_2$
+ and $(\xi_1 + \xi_2) \cdot x = \xi_1 \cdot x + \xi_2 \cdot x$
+ (this is the word ``bilinear'')
+ \ii $\cdot$ is symmetric,
+ \ii For any $\xi \neq 0$, there is an $x$ with $\xi \cdot x \neq 0$
+ (this is the word ``nondegenerate'').
+\end{itemize}
+\end{definition}
+\begin{example}
+ [The form on $\Zc n$]
+ If $Z = \Zc n$ then $\xi \cdot x = (\xi x)/n$ satisfies the above.
+\end{example}
+In general, it turns out finite abelian groups
+decompose as the sum of cyclic groups (see \Cref{sec:FTFGAG}),
+which makes it relatively easy to find such a $\cdot$;
+but as I said the choice won't matter, so let's move on.
+
+Now for the fun part: defining the characters.
+\begin{proposition}
+ [$e_\xi$ are orthonormal]
+ For each $\xi \in Z$ we define the character
+ \[ e_\xi(x) = e(\xi \cdot x). \]
+ The $|Z|$ characters form an orthonormal basis of the
+ space of functions $Z \to \CC$.
+\end{proposition}
+\begin{proof}
+ I recommend skipping this one, but it is:
+ \begin{align*}
+ \left< e_{\xi}, e_{\xi'} \right>
+ &= \frac{1}{|Z|} \sum_{x \in Z} e(\xi \cdot x) \ol{e(\xi' \cdot x)} \\
+ &= \frac{1}{|Z|} \sum_{x \in Z} e(\xi \cdot x) e(-\xi' \cdot x) \\
+ &= \frac{1}{|Z|} \sum_{x \in Z} e\left( (\xi-\xi') \cdot x \right).
+ \end{align*}
+ If $\xi = \xi'$, every summand is $1$ and the expression equals $1$.
+ Otherwise, by nondegeneracy we can pick $x_0$ with
+ $(\xi-\xi') \cdot x_0 \neq 0$; replacing $x$ by $x+x_0$
+ multiplies the sum by $e\left( (\xi-\xi') \cdot x_0 \right) \neq 1$,
+ so the sum is $0$.
+\end{proof}
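If the cancellation feels slippery, the following short script (helper names are mine) checks numerically that the six characters of $Z = \ZZ/6$ are orthonormal:

```python
import cmath

def e(theta):
    """The canonical map e(theta) = exp(2 pi i theta)."""
    return cmath.exp(2j * cmath.pi * theta)

def inner(f, g, n):
    """<f, g> = (1/|Z|) sum_x f(x) conj(g(x)) on Z = Z/n."""
    return sum(f(x) * g(x).conjugate() for x in range(n)) / n

n = 6
# e_xi(x) = e(xi * x / n), using the form xi . x = (xi x)/n on Z/n.
chars = [lambda x, xi=xi: e(xi * x / n) for xi in range(n)]
gram = [[inner(chars[a], chars[b], n) for b in range(n)]
        for a in range(n)]
# gram is the 6x6 identity matrix, up to floating point error
```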
+
+In this way, the set of frequencies is also $Z$,
+but the $\xi \in Z$ play very different roles from the ``physical'' $x \in Z$.
+Here is an example which might be enlightening.
+\begin{example}
+ [Cube roots of unity filter]
+ Suppose $Z = \Zc3$, with the inner form given by $\xi \cdot x = (\xi x)/3$.
+ Let $\omega = \exp(\frac 23 \pi i)$ be a primitive cube root of unity.
+ Note that
+ \[ e_\xi(x) = \begin{cases}
+ 1 & \xi = 0 \\
+ \omega^x & \xi = 1 \\
+ \omega^{2x} & \xi = 2.
+ \end{cases} \]
+ Then given $f \colon Z \to \CC$ with $f(0) = a$, $f(1) = b$, $f(2) = c$,
+ we obtain
+ \[ f(x) = \frac{a+b+c}{3} \cdot 1
+ + \frac{a + \omega^2 b + \omega c}{3} \cdot \omega^x
+ + \frac{a + \omega b + \omega^2 c}{3} \cdot \omega^{2x}. \]
+ In this way we derive that the transforms are
+ \begin{align*}
+ \wh f(0) &= \frac{a+b+c}{3} \\
+ \wh f(1) &= \frac{a+\omega^2 b+ \omega c}{3} \\
+ \wh f(2) &= \frac{a+\omega b+\omega^2c}{3}.
+ \end{align*}
+\end{example}
+\begin{exercise}
+ Show that in analogy to $\wh f(\varnothing)$
+ for binary Fourier analysis, we now have
+ \[ \wh f(0) = \frac{1}{|Z|} \sum_{x \in Z} f(x). \]
+\end{exercise}
+Olympiad contestants may recognize the previous example
+as a ``roots of unity filter'', which is exactly the point.
+For concreteness, suppose one wants to compute
+\[ \binom{1000}{0} + \binom{1000}{3} + \dots + \binom{1000}{999}. \]
+In that case, we can consider the function
+\[ w : \ZZ/3 \to \CC \]
+such that $w(0) = 1$ but $w(1) = w(2) = 0$.
+By abuse of notation we will also think of $w$
+as a function $w : \ZZ \surjto \ZZ/3 \to \CC$.
+Then the sum in question is
+\begin{align*}
+ \sum_n \binom{1000}{n} w(n)
+ &= \sum_n \binom{1000}{n} \sum_{k=0,1,2} \wh w(k) \omega^{kn} \\
+ &= \sum_{k=0,1,2} \wh w(k) \sum_n \binom{1000}{n} \omega^{kn} \\
+ &= \sum_{k=0,1,2} \wh w(k) (1+\omega^k)^{1000}.
+\end{align*}
+In our situation, we have $\wh w(0) = \wh w(1) = \wh w(2) = \frac13$,
+so the desired sum equals $\frac13 \left( 2^{1000} + (1+\omega)^{1000} + (1+\omega^2)^{1000} \right)$.
+More generally, we can take any periodic weight $w$
+and use Fourier analysis in order to interchange the order of summation.
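As a sanity check on this computation, note that $1+\omega = -\omega^2$ and $1+\omega^2 = -\omega$, so the filtered sum collapses to the exact value $\frac13\left(2^{1000}-1\right)$; this is quick to confirm with exact integer arithmetic:

```python
from math import comb

# Direct evaluation of binom(1000,0) + binom(1000,3) + ... + binom(1000,999).
direct = sum(comb(1000, n) for n in range(0, 1001, 3))

# Closed form from the filter: (1+omega)^1000 = omega^2000 = omega^2 and
# (1+omega^2)^1000 = omega^1000 = omega, and omega + omega^2 = -1.
closed_form = (2 ** 1000 - 1) // 3
```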
+
+\begin{example}
+ [Binary Fourier analysis]
+ Suppose $Z = \{\pm 1\}^n$, viewed as an abelian group
+ under pointwise multiplication
+ hence isomorphic to $(\ZZ/2\ZZ)^{\oplus n}$.
+ Assume we pick the bilinear form defined by
+ \[ \xi \cdot x
+ = \half \sum_i \frac{\xi_i-1}{2} \cdot \frac{x_i-1}{2} \]
+ where $\xi = (\xi_1, \dots, \xi_n)$ and $x = (x_1, \dots, x_n)$.
+
+ We claim this coincides with the first example we gave.
+ Indeed, let $S \subseteq \{1, \dots, n\}$
+ and let $\xi \in \{\pm1\}^n$ be the vector which is $-1$ at positions in $S$,
+ and $+1$ at positions not in $S$.
+ Then the character $\chi_S$ from the previous example
+ coincides with the character $e_\xi$ in the new notation.
+ In particular, $\wh f(S) = \wh f(\xi)$.
+
+ Thus Fourier analysis on a finite group $Z$ subsumes
+ binary Fourier analysis.
+\end{example}
+
+\subsection{Fourier series for functions in $L^2([-\pi, \pi])$}
+This is the most famous one, and hence the one you've heard of.
+\begin{definition}
+ The space $L^2([-\pi, \pi])$ consists of all functions
+ $f \colon [-\pi, \pi] \to \CC$ such that
+ the integral
+ $\int_{[-\pi, \pi]} \left\lvert f(x) \right\rvert^2 \; dx$
+ exists and is finite,
+ modulo the relation that a function which is zero ``almost everywhere''
+ is considered to equal zero.\footnote{We won't define this, yet,
+ as it won't matter to us for now.
+ But we will elaborate more on this in the parts on measure theory.
+
+ There is one point at which this is relevant.
+ Often we require that the function $f$ satisfies $f(-\pi) = f(\pi)$,
+ so that $f$ becomes a periodic function,
+ and we can think of it as $f \colon \TT \to \CC$.
+ This makes no essential difference
+ since we merely change the value at one point.}
+
+ It is made into an inner product space according to
+ \[ \left< f,g \right>
+ = \frac{1}{2\pi} \int_{[-\pi, \pi]} f(x) \ol{g(x)} \; dx. \]
+\end{definition}
+It turns out (we won't prove) that this is an
+(infinite-dimensional) Hilbert space!
+
+Now, the beauty of Fourier analysis is that
+\textbf{this space has a great basis}:
+\begin{theorem}
+ [The classical Fourier basis]
+ For each integer $n$, define
+ \[ e_n(x) = \exp(inx). \]
+ Then $e_n$ form an orthonormal basis
+ of the Hilbert space $L^2([-\pi, \pi])$.
+\end{theorem}
+Thus this time the frequency set $\ZZ$ is infinite, and we have
+\[ f(x) = \sum_n \wh f(n) \exp(inx)
+ \quad\text{almost everywhere} \]
+for coefficients $\wh f(n)$
+with $\sum_n \left\lvert \wh f(n) \right\rvert^2 < \infty$.
+Since the frequency set is indexed by $\ZZ$,
+we call this a \vocab{Fourier series}.
+\begin{exercise}
+ Show once again
+ \[ \wh f(0) = \frac{1}{2\pi} \int_{[-\pi, \pi]} f(x) \; dx. \]
+\end{exercise}
+
+\section{Summary, and another teaser}
+We summarize our various flavors of Fourier analysis in the following table.
+\[
+ \begin{array}{llll}
+ \hline
+ \text{Type} & \text{Physical var} & \text{Frequency var}
+ & \text{Basis functions} \\ \hline
+ \text{Binary} & \{\pm1\}^n
+ & \text{Subsets } S \subseteq \left\{ 1, \dots, n \right\}
+ & \prod_{s \in S} x_s \\
+ \text{Finite group} & Z & \xi \in Z, \text{ choice of } \cdot
+ & e(\xi \cdot x) \\
+ \text{Fourier series} & \TT \text{ or } [-\pi, \pi] & n \in \ZZ
+ & \exp(inx) \\
+ \text{Discrete} & \Zc n
+ & \xi \in \Zc n
+ & e(\xi x / n) \\
+ \end{array}
+\]
+I snuck in a fourth row with $Z = \Zc n$,
+but it's a special case of the second row, so no cause for alarm.
+
+Alluding to the future, I want to hint at how \Cref{ch:pontryagin} starts.
+Each one of these is really a statement
+about how functions from $G \to \CC$
+can be expressed in terms of functions $\wh G \to \CC$,
+for some ``dual'' $\wh G$.
+In that sense, we could rewrite the above table as:
+\[
+ \begin{array}{llll}
+ \hline
+ \text{Name} & \text{Domain }G & \text{Dual }\wh G
+ & \text{Characters} \\ \hline
+ \text{Binary} & \{\pm1\}^n
+ & S \subseteq \left\{ 1, \dots, n \right\}
+ & \prod_{s \in S} x_s \\
+ \text{Finite group} & Z
+ & \xi \in \wh Z \cong Z & e( \xi \cdot x) \\
+ \text{Fourier series} & \TT \cong [-\pi, \pi] & n \in \ZZ
+ & \exp(inx) \\
+ \text{Discrete} & \ZZ/n\ZZ & \xi \in \ZZ/n\ZZ
+ & e(\xi x / n) \\
+ \end{array}
+\]
+It will turn out that in general
+we can say something about many different domains $G$,
+once we know what it means to integrate a measure.
+This is the so-called \emph{Pontryagin duality};
+and it is discussed as a follow-up bonus in \Cref{ch:pontryagin}.
+
+\section{Parseval and friends}
+Here is a fun section in which you get to learn a lot of big names quickly.
+Basically, we can take each of the three results
+from Proposition~\ref{prop:orthonormal},
+translate it into the context of our Fourier analysis
+(for which we have an orthonormal basis of the Hilbert space),
+and get a big-name result.
+
+\begin{corollary}
+ [Parseval theorem]
+ Let $f \colon Z \to \CC$, where $Z$ is a finite abelian group.
+ Then \[ \sum_\xi |\wh f(\xi)|^2 = \frac{1}{|Z|} \sum_{x \in Z} |f(x)|^2. \]
+ Similarly, if $f \colon [-\pi, \pi] \to \CC$ is square-integrable then
+ its Fourier series satisfies
+ \[ \sum_n |\wh f(n)|^2 = \frac{1}{2\pi} \int_{[-\pi, \pi]} |f(x)|^2 \; dx. \]
+\end{corollary}
+\begin{proof}
+Recall that $\left< f,f\right>$ equals the sum
+$\sum_\xi |a_\xi|^2$ of the squared magnitudes of the coefficients.
+\end{proof}
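Here is a quick numerical confirmation of the finite-group case, taking $Z = \ZZ/7$ and a random $f$ (the helper names are mine):

```python
import cmath
import random

def e(t):
    """The canonical map e(t) = exp(2 pi i t)."""
    return cmath.exp(2j * cmath.pi * t)

def hat(f, n):
    """hat f(xi) = <f, e_xi> = (1/n) sum_x f(x) e(-xi x / n) on Z/n."""
    return [sum(f[x] * e(-xi * x / n) for x in range(n)) / n
            for xi in range(n)]

random.seed(0)
n = 7
f = [complex(random.random(), random.random()) for _ in range(n)]

lhs = sum(abs(c) ** 2 for c in hat(f, n))   # sum of |hat f(xi)|^2
rhs = sum(abs(v) ** 2 for v in f) / n       # average of |f(x)|^2
# lhs and rhs agree up to floating point error, as Parseval predicts
```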
+
+\begin{corollary}
+ [Fourier inversion formula]
+ Let $f : Z \to \CC$, where $Z$ is a finite abelian group.
+ Then \[ \wh f(\xi) = \frac{1}{|Z|} \sum_{x \in Z} f(x) \ol{e_\xi(x)}. \]
+ Similarly, if $f : [-\pi, \pi] \to \CC$ is square-integrable then
+ its Fourier series is given by
+ \[ \wh f(n) = \frac{1}{2\pi} \int_{[-\pi, \pi]} f(x) \exp(-inx) \; dx. \]
+\end{corollary}
+\begin{proof}
+Recall that in an orthonormal basis $(e_\xi)_\xi$,
+the coefficient of $e_\xi$ in $f$ is $\left< f, e_\xi\right>$.
+\end{proof}
+\begin{ques}
+ What happens when $\xi = 0$ above?
+\end{ques}
+
+\begin{corollary}
+ [Plancherel theorem]
+ Let $f, g \colon Z \to \CC$, where $Z$ is a finite abelian group.
+ Then \[ \left< f,g \right> = \sum_{\xi \in Z} \wh f(\xi) \ol{\wh g(\xi)}. \]
+ Similarly, if $f, g \colon [-\pi, \pi] \to \CC$ are square-integrable then
+ \[ \left< f,g \right> = \sum_n \wh f(n) \ol{\wh g(n)}. \]
+\end{corollary}
+\begin{ques}
+ Prove this one in one line (like before).
+\end{ques}
+
+\section{Application: Basel problem}
+One cute application of Fourier analysis on $L^2([-\pi, \pi])$
+is that you can get some otherwise hard-to-compute sums,
+as long as you are willing to use a little calculus.
+
+Here is the classical one:
+\begin{theorem}
+ [Basel problem]
+ We have
+ \[ \sum_{n \ge 1} \frac{1}{n^2} = \frac{\pi^2}{6}. \]
+\end{theorem}
+The proof is to consider the identity function $f(x) = x$,
+which is certainly square-integrable.
+Then by Parseval, we have
+\[
+ \sum_{n \in \ZZ} \left\lvert \wh f(n) \right\rvert^2
+ = \left< f,f\right>
+ = \frac{1}{2\pi} \int_{[-\pi, \pi]} \left\lvert f(x) \right\rvert^2 \; dx.
+\]
+A calculus computation gives
+\[ \frac{1}{2\pi} \int_{[-\pi, \pi]} x^2 \; dx = \frac{\pi^2}{3}. \]
+On the other hand, we will now compute all Fourier coefficients.
+We have already that
+\[ \wh f(0) = \frac{1}{2\pi} \int_{[-\pi, \pi]} f(x) \; dx
+ = \frac{1}{2\pi} \int_{[-\pi, \pi]} x \; dx = 0. \]
+For $n \neq 0$, we have by definition
+(or ``Fourier inversion formula'', if you want to use big words)
+the formula
+\begin{align*}
+ \wh f(n) &= \left< f, \exp(inx) \right> \\
+ &= \frac{1}{2\pi} \int_{[-\pi, \pi]} x \cdot \ol{\exp(inx)} \; dx \\
+ &= \frac{1}{2\pi} \int_{[-\pi, \pi]} x \exp(-inx) \; dx.
+\end{align*}
+An antiderivative of $x\exp(-inx)$ is
+$\frac{1}{n^2} \exp(-inx) (1+inx)$;
+evaluating it at the endpoints $\pm\pi$ gives
+\[ \wh f(n) = \frac{(-1)^n}{n} i. \]
+So
+\[ \sum_n \left\lvert \wh f(n) \right\rvert^2
+ = 2 \sum_{n \ge 1} \frac{1}{n^2} \]
+implying the result.
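All of the above is easy to corroborate numerically; the sketch below (midpoint Riemann sums; helper names are mine) checks both the coefficients $\wh f(n) = \frac{(-1)^n}{n} i$ and the limit $\pi^2/6$:

```python
import cmath
import math

def fhat(n, N=20000):
    """Midpoint-rule approximation of (1/2pi) int_{-pi}^{pi} x e^{-inx} dx."""
    h = 2 * math.pi / N
    xs = (-math.pi + (k + 0.5) * h for k in range(N))
    return sum(x * cmath.exp(-1j * n * x) for x in xs) * h / (2 * math.pi)

# hat f(n) should be close to (-1)^n * i / n for n != 0.
approx = {n: fhat(n) for n in (1, 2, 3, -1)}
exact = {n: ((-1) ** n) * 1j / n for n in (1, 2, 3, -1)}

# Partial sums of 1/n^2 approach pi^2 / 6.
partial = sum(1 / k ** 2 for k in range(1, 100000))
```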
+
+\section{Application: Arrow's Impossibility Theorem}
+As an application of binary Fourier analysis,
+we now prove a form of
+\href{https://en.wikipedia.org/wiki/Arrow's_impossibility_theorem}{Arrow's theorem}.
+
+Consider $n$ voters voting among $3$ candidates $A$, $B$, $C$.
+Each voter specifies a tuple $v_i = (x_i, y_i, z_i) \in \{\pm1\}^3$ as follows:
+\begin{itemize}
+ \ii $x_i = 1$ if person $i$ ranks $A$ ahead of $B$, and $x_i = -1$ otherwise.
+ \ii $y_i = 1$ if person $i$ ranks $B$ ahead of $C$, and $y_i = -1$ otherwise.
+ \ii $z_i = 1$ if person $i$ ranks $C$ ahead of $A$, and $z_i = -1$ otherwise.
+\end{itemize}
+Tacitly, we only consider $3! = 6$ possibilities for $v_i$:
+we forbid ``paradoxical'' votes of the form $x_i = y_i = z_i$
+by assuming that people's votes are consistent
+(meaning the preferences are transitive).
+
+
+For brevity, let $x_\bullet = (x_1, \dots, x_n)$
+and define $y_\bullet$ and $z_\bullet$ similarly.
+Then, we can consider a voting mechanism
+\begin{align*}
+ f \colon \{\pm1\}^n &\to \{\pm1\} \\
+ g \colon \{\pm1\}^n &\to \{\pm1\} \\
+ h \colon \{\pm1\}^n &\to \{\pm1\}
+\end{align*}
+such that
+\begin{itemize}
+ \ii $f(x_\bullet)$ is the global preference of $A$ vs.\ $B$,
+ \ii $g(y_\bullet)$ is the global preference of $B$ vs.\ $C$,
+ \ii and $h(z_\bullet)$ is the global preference of $C$ vs.\ $A$.
+\end{itemize}
+We'd like to avoid situations where the global preference
+$(f(x_\bullet), g(y_\bullet), h(z_\bullet))$ is itself paradoxical.
+
+Let $\EE f$ denote the average value of $f$ across all $2^n$ inputs.
+Define $\EE g$ and $\EE h$ similarly.
+We'll add an assumption that $\EE f = \EE g = \EE h = 0$,
+which provides symmetry
+(and e.g.\ excludes the possibility that $f$, $g$, $h$
+are constant functions which ignore voter input).
+With that we will prove the following result:
+\begin{theorem}
+ [Arrow Impossibility Theorem]
+ Assume that $(f,g,h)$ always avoids paradoxical outcomes,
+ and assume $\EE f = \EE g = \EE h = 0$.
+ Then $(f,g,h)$ is either a dictatorship or anti-dictatorship:
+ there exists a ``dictator'' $k$ such that
+ \[ f(x_\bullet) = \pm x_k, \qquad g(y_\bullet) = \pm y_k,
+ \qquad h(z_\bullet) = \pm z_k \]
+ where all three signs coincide.
+\end{theorem}
+Unlike the usual Arrow theorem, we do \emph{not} assume
+that $f(+1, \dots, +1) = +1$ (hence possibility of anti-dictatorship).
+
+\begin{proof}
+ Suppose the voters each randomly select one of the $3!=6$
+ possible consistent votes.
+ In \Cref{prob:arrow_lemma} it is shown
+ that the probability of a paradoxical outcome
+ for any functions $f$, $g$, $h$ is given exactly by
+ \[ \frac14 + \frac14 \sum_{S \subseteq \{1, \dots, n\}}
+ \left( -\frac13 \right)^{\left\lvert S \right\rvert}
+ \left( \wh f(S) \wh g(S) + \wh g(S) \wh h(S) + \wh h(S) \wh f(S) \right).
+ \]
+ Assume that this probability (of a paradoxical outcome) equals $0$.
+ Then, we derive
+ \[ 1 = \sum_{S \subseteq \{1, \dots, n\}}
+ -\left( -\frac13 \right)^{\left\lvert S \right\rvert}
+ \left( \wh f(S) \wh g(S) + \wh g(S) \wh h(S) + \wh h(S) \wh f(S) \right). \]
+ But now we can just use weak inequalities.
+ We have $\wh f(\varnothing) = \EE f = 0$ and similarly for $\wh g$ and $\wh h$,
+ so we restrict attention to $|S| \ge 1$.
+ We then apply the well-known inequality $|ab+bc+ca| \le a^2+b^2+c^2$
+ (valid for all real $a$, $b$, $c$) to deduce that
+ \begin{align*}
+ 1 &= \sum_{S \subseteq \{1, \dots, n\}}
+ -\left( -\frac13 \right)^{\left\lvert S \right\rvert}
+ \left( \wh f(S) \wh g(S) + \wh g(S) \wh h(S) + \wh h(S) \wh f(S) \right) \\
+ &\le \sum_{\varnothing \neq S \subseteq \{1, \dots, n\}}
+ \left( \frac13 \right)^{\left\lvert S \right\rvert}
+ \left( \wh f(S)^2 + \wh g(S)^2 + \wh h(S)^2 \right) \\
+ &\le \frac13 \sum_{\varnothing \neq S \subseteq \{1, \dots, n\}}
+ \left( \wh f(S)^2 + \wh g(S)^2 + \wh h(S)^2 \right) \\
+ &= \frac13 (1+1+1) = 1,
+ \end{align*}
+ with the last step by Parseval
+ (as $f$, $g$, $h$ are $\pm1$-valued, $\left< f,f \right> = \left< g,g \right> = \left< h,h \right> = 1$).
+ So all inequalities must be sharp, and in particular $\wh f$, $\wh g$, $\wh h$
+ are supported on one-element sets, i.e.\ they are linear in inputs.
+ As $f$, $g$, $h$ are $\pm 1$ valued, each $f$, $g$, $h$ is itself
+ either a dictator or anti-dictator function.
+ Since $(f,g,h)$ is always consistent, this implies the final result.
+\end{proof}
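For a small number of voters, the paradox-avoidance hypothesis can be checked by brute force. The sketch below (helper names are mine) confirms that a dictatorship never yields a paradoxical outcome, while simple majority with $n = 3$ voters does:

```python
from itertools import product

# The 6 consistent single-voter votes: (x, y, z) in {±1}^3, not all equal.
CONSISTENT = [v for v in product([1, -1], repeat=3)
              if not (v[0] == v[1] == v[2])]

def paradox_free(f, g, h, n):
    """True iff (f, g, h) never yield a paradoxical global outcome."""
    for profile in product(CONSISTENT, repeat=n):
        xs = tuple(v[0] for v in profile)
        ys = tuple(v[1] for v in profile)
        zs = tuple(v[2] for v in profile)
        if f(xs) == g(ys) == h(zs):   # all equal = paradoxical
            return False
    return True

dictator = lambda t: t[0]                      # copy voter 0
majority = lambda t: 1 if sum(t) > 0 else -1   # n odd, so no ties
```

With the Condorcet profile $A>B>C$, $B>C>A$, $C>A>B$, majority rule produces the cycle that the theorem forbids.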
+
+
+\section{\problemhead}
+
+\begin{problem}
+ [For calculus fans]
+ Prove that
+ \[ \sum_{n \ge 1} \frac{1}{n^4} = \frac{\pi^4}{90}. \]
+ \begin{hint}
+ Use Parseval again, but this time on $f(x) = x^2$.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ \gim
+ \label{prob:arrow_lemma}
+ Let $f,g,h \colon \{\pm1\}^n \to \{\pm1\}$
+ be any three functions.
+ For each $i$, we randomly select $(x_i, y_i, z_i) \in \{\pm1\}^3$
+ subject to the constraint that not all are equal
+ (hence, choosing among $2^3-2=6$ possibilities).
+ Prove that the probability that
+ \[ f(x_1, \dots, x_n) = g(y_1, \dots, y_n) = h(z_1, \dots, z_n) \]
+ is given by the formula
+ \[ \frac14 + \frac14 \sum_{S \subseteq \{1, \dots, n\}}
+ \left( -\frac13 \right)^{\left\lvert S \right\rvert}
+ \left( \wh f(S) \wh g(S) + \wh g(S) \wh h(S) + \wh h(S) \wh f(S) \right).
+ \]
+ \begin{hint}
+ Define the Boolean function $D : \{\pm 1\}^3 \to \RR$ by
+ $D(a,b,c) = ab+bc+ca$.
+ Write out the value of $D(a,b,c)$ for each $(a,b,c)$.
+ Then, evaluate its expected value.
+ \end{hint}
+ \begin{sol}
+ Define the Boolean function $D : \{\pm 1\}^3 \to \RR$ by
+ \[ D(a,b,c) = ab + bc + ca
+ = \begin{cases}
+ 3 & a,b,c \text{ all equal} \\
+ -1 & a,b,c \text{ not all equal}.
+ \end{cases}
+ \]
+ Thus paradoxical outcomes arise when
+ $D(f(x_\bullet), g(y_\bullet), h(z_\bullet)) = 3$.
+ Now, we compute that for randomly selected
+ $x_\bullet$, $y_\bullet$, $z_\bullet$ that
+ \begin{align*}
+ \EE D(f(x_\bullet), g(y_\bullet), h(z_\bullet))
+ &= \EE \sum_S \sum_T
+ \left( \wh f(S) \wh g(T) + \wh g(S) \wh h(T) + \wh h(S) \wh f(T) \right)
+ \left( \chi_S(x_\bullet)\chi_T(y_\bullet) \right) \\
+ &= \sum_S \sum_T
+ \left( \wh f(S) \wh g(T) + \wh g(S) \wh h(T) + \wh h(S) \wh f(T) \right)
+ \EE\left( \chi_S(x_\bullet)\chi_T(y_\bullet) \right).
+ \end{align*}
+ Now we observe that:
+ \begin{itemize}
+ \ii If $S \neq T$, then $\EE \chi_S(x_\bullet) \chi_T(y_\bullet) = 0$:
+ if say $s \in S$ but $s \notin T$, then $x_s$ is $\pm 1$
+ with equal probability, flips the sign of the product accordingly,
+ and is independent of every other variable in the product.
+ \ii On the other hand, suppose $S = T$.
+ Then
+ \[ \chi_S(x_\bullet) \chi_T(y_\bullet)
+ = \prod_{s \in S} x_sy_s. \]
+ Note that $x_sy_s$ is equal to $1$ with probability $\frac13$
+ and $-1$ with probability $\frac23$
+ (since $(x_s, y_s, z_s)$ is uniform from $3!=6$ choices,
+ which we can enumerate).
+ From this an inductive calculation on $|S|$ gives that
+ \[
+ \prod_{s \in S} x_sy_s
+ =
+ \begin{cases}
+ +1 & \text{ with probability } \half(1+(-1/3)^{|S|}) \\
+ -1 & \text{ with probability } \half(1-(-1/3)^{|S|}).
+ \end{cases}
+ \]
+ Thus
+ \[ \EE \left( \prod_{s \in S} x_sy_s \right) = \left( -\frac13 \right)^{|S|}. \]
+ \end{itemize}
+ Piecing this all together, we now have that
+ \[
+ \EE D(f(x_\bullet), g(y_\bullet), h(z_\bullet))
+ = \sum_S
+ \left( \wh f(S) \wh g(S) + \wh g(S) \wh h(S) + \wh h(S) \wh f(S) \right)
+ \left( -\frac13 \right)^{|S|}.
+ \]
+ Then, we obtain that
+ \begin{align*}
+ &\EE \frac14 \left( 1 + D(f(x_\bullet), g(y_\bullet), h(z_\bullet)) \right) \\
+ =& \frac14 + \frac14\sum_S
+ \left( \wh f(S) \wh g(S) + \wh g(S) \wh h(S) + \wh h(S) \wh f(S) \right)
+ \left( -\frac13 \right)^{|S|}.
+ \end{align*}
+ Comparing this with the definition of $D$ gives the desired result.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/frobenius.tex b/books/napkin/frobenius.tex
new file mode 100644
index 0000000000000000000000000000000000000000..75fcd5baae34c6afa38df0572e54730a64f49256
--- /dev/null
+++ b/books/napkin/frobenius.tex
@@ -0,0 +1,833 @@
+\chapter{The Frobenius element}
+Throughout this chapter $K/\QQ$ is a Galois extension with Galois group $G$,
+$p$ is an \emph{unramified} rational prime in $K$, and $\kp$ is a prime above it.
+Picture:
+\begin{center}
+\begin{tikzcd}
+ K \ar[d, dash]
+ & \supset
+ & \OO_K \ar[d, dash]
+ & \kp \ar[d, dash]
+ & \OO_K/\kp \cong \FF_{p^f} \ar[d, dash] \\
+ \QQ & \supset & \ZZ & (p) & \FF_p
+\end{tikzcd}
+\end{center}
+If $p$ is unramified, then one can show there
+is a unique $\sigma \in \Gal(K/\QQ)$ such that
+$\sigma(\alpha) \equiv \alpha^p \pmod{\kp}$ for every $\alpha \in \OO_K$.
+
+\section{Frobenius elements}
+\prototype{$\Frob_\kp$ in $\ZZ[i]$ depends on $p \pmod 4$.}
+
+Here is the theorem statement again:
+\begin{theorem}[The Frobenius element]
+ Assume $K/\QQ$ is Galois with Galois group $G$.
+ Let $p$ be a rational prime unramified in $K$, and $\kp$ a prime above it.
+ There is a \emph{unique} element $\Frob_\kp \in G$
+ with the property that
+ \[ \Frob_\kp(\alpha) \equiv \alpha^{p} \pmod{\kp}
+ \quad \text{for all } \alpha \in \OO_K. \]
+ It is called the \vocab{Frobenius element} at $\kp$, and has order $f$.
+\end{theorem}
+The \emph{uniqueness} part is pretty important:
+it allows us to show that a given $\sigma \in \Gal(K/\QQ)$
+is the Frobenius element by just observing that it satisfies
+the above functional equation.
+
+Let's see an example of this:
+\begin{example}[Frobenius elements of the Gaussian integers]
+ Let's actually compute some Frobenius elements for $K = \QQ(i)$,
+ which has $\OO_K = \ZZ[i]$.
+ This is a Galois extension, with $G = \Zc2$,
+ corresponding to the identity and complex conjugation.
+
+ If $p$ is an odd prime with $\kp$ above it,
+ then $\Frob_\kp$ is the unique element such that
+ \[ (a+bi)^p \equiv \Frob_\kp(a+bi) \pmod{\kp} \]
+ in $\ZZ[i]$. In particular,
+ \[ \Frob_\kp(i) = i^p =
+ \begin{cases}
+ i & p \equiv 1 \pmod 4 \\
+ -i & p \equiv 3 \pmod 4.
+ \end{cases}
+ \]
+ From this we see that $\Frob_\kp$ is the identity when $p \equiv 1 \pmod 4$
+ and $\Frob_\kp$ is complex conjugation when $p \equiv 3 \pmod 4$.
+\end{example}
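This prediction is easy to test by brute force in $\ZZ[i]/(p)$. (When $p \equiv 3 \pmod 4$ the ideal $(p)$ is inert, so working modulo $p$ is the same as working modulo $\kp$; when $p \equiv 1 \pmod 4$ the identity congruence holds modulo both primes above $p$, hence modulo $p$.) The helper names below are mine:

```python
def gmul(u, v, p):
    """Multiply Gaussian integers u = (a, b) ~ a + bi, reducing mod p."""
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def gpow(u, e, p):
    """Compute u^e in Z[i]/(p) by repeated squaring."""
    r = (1, 0)
    while e:
        if e & 1:
            r = gmul(r, u, p)
        u = gmul(u, u, p)
        e >>= 1
    return r

def frob_is_identity(p):
    """True iff (a+bi)^p == a+bi (mod p) for all residues a, b."""
    return all(gpow((a, b), p, p) == (a, b)
               for a in range(p) for b in range(p))
```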
+Note that we really only needed to compute $\Frob_\kp$ on $i$.
+If this seems too good to be true,
+a philosophical reason is ``freshman's dream''
+where $(x+y)^p \equiv x^p + y^p \pmod{p}$ (and hence mod $\kp$).
+So if $\sigma$ satisfies the functional equation on generators,
+it satisfies the functional equation everywhere.
+
+We also have an important lemma:
+\begin{lemma}
+ [Order of the Frobenius element]
+ Let $\Frob_\kp$ be a Frobenius element from an extension $K/\QQ$.
+ Then the order of $\Frob_\kp$ is equal to the inertial degree $f_\kp$.
+ In particular, $(p)$ splits completely in $\OO_K$
+ if and only if $\Frob_\kp = \id$.
+\end{lemma}
+\begin{exercise}
+ Prove this lemma by using the fact that $\OO_K / \kp$
+ is the finite field of order $p^{f_\kp}$,
+ and the Frobenius element is just $x \mapsto x^p$ on this field.
+\end{exercise}
+
+Let us now prove the main theorem.
+This will only make sense in the context of decomposition groups,
+so readers who skipped that part should omit this proof.
+\begin{proof}
+ [Proof of existence of Frobenius element]
+ The entire theorem is just a rephrasing of the fact
+ that the map $\theta$ defined in the last section
+ is an isomorphism when $p$ is unramified.
+ Picture:
+ \begin{center}
+ \begin{asy}
+ size(12cm);
+ filldraw( (-4,-2)--(-4,2)--(1.5,2)--(1.5,-2)--cycle, lightblue+opacity(0.2), black);
+ label("$G = \operatorname{Gal}(K/\mathbb Q)$", (-1,2), dir(90));
+ dot( (-1.1,-1.2) );
+ dot( (-1.4,0.9) );
+ dot( (-2,1.4) );
+ dot( (-2.7,-0.4) );
+ dot( (-3.1,0.2) );
+ dot( (-3.4,-1.6) );
+
+ filldraw(scale(0.8,1.8)*unitcircle, lightcyan+opacity(0.4), black);
+ label("$D_{\mathfrak p}$", (0.8,2), dir(-90));
+ for (real y=-1.5; y<2; ++y) { dot( (0,y) ); }
+ label("$\operatorname{Frob}_{\mathfrak p}$", (0,-1.5), dir(90));
+
+ filldraw(shift(5,0)*scale(0.8,1.8)*unitcircle, lightcyan+opacity(0.4), black);
+ for (real y=0.5; y<2; ++y) { dot( (5,y) ); }
+ dot("$T$", (5,-1.5), dir(45));
+ dot("$T^2$", (5,-0.5), dir(45));
+ draw( (1,0)--(4,0), Arrows );
+ label("$\langle T \rangle$", (5,1.8), dir(90));
+
+ draw( (0.2,-1.5)--(4.8,-1.5), dashed, EndArrow);
+ label("$\theta(\operatorname{Frob}_{\mathfrak p}) = T$", (2.8,-1.5), dir(-90));
+ label("$\theta$", (2.5,0), dir(90));
+ label("$\cong$", (2.5,0), dir(-90));
+ \end{asy}
+ \end{center}
+ In here we can restrict our attention to $D_\kp$
+ since we need to have $\sigma(\alpha) \equiv 0 \pmod \kp$
+ when $\alpha \equiv 0 \pmod \kp$.
+ Thus we have the isomorphism
+ \[ D_\kp \taking\theta \Gal\left( (\OO_K/\kp) / \FF_p \right). \]
+ But we already know $\Gal\left( (\OO_K/\kp)/\FF_p \right)$,
+ according to the string of isomorphisms
+ \[
+ \Gal\left( (\OO_K/\kp) / \FF_p \right)
+ \cong \Gal\left( \FF_{p^f} / \FF_p \right)
+ \cong \left< T = x \mapsto x^p \right>
+ \cong \Zc{f} .
+ \]
+ So the unique such element is the pre-image of $T$ under $\theta$.
+\end{proof}
+
+
+\section{Conjugacy classes}
+Now suppose $\kp_1$ and $\kp_2$ are \emph{two} primes above an unramified rational prime $p$.
+Then we can define $\Frob_{\kp_1}$ and $\Frob_{\kp_2}$.
+Since the Galois group acts transitively,
+we can select $\sigma \in \Gal(K/\QQ)$ such that
+\[ \sigma(\kp_1) = \kp_2. \]
+We claim that
+\[
+ \Frob_{\kp_2} = \sigma \circ \Frob_{\kp_1} \circ \sigma\inv.
+\]
+Note that this is an equation in $G$.
+\begin{ques}
+ Prove this.
+\end{ques}
+
+More generally, for a given unramified rational prime $p$, we obtain:
+\begin{theorem}
+ [Conjugacy classes in Galois groups]
+ The set
+ \[ \left\{ \Frob_\kp \mid \kp \text{ above } p \right\} \]
+ is one of the conjugacy classes of $G$.
+\end{theorem}
+\begin{proof}
+ We've used the fact that $G = \Gal(K/\QQ)$ acts transitively
+ on the primes above $p$
+ to show that $\Frob_{\kp_1}$ and $\Frob_{\kp_2}$ are conjugate
+ whenever $\kp_1$ and $\kp_2$ both lie above $p$;
+ hence the set is \emph{contained} in some conjugacy class.
+ So it remains to check that for any $\kp$, $\sigma$,
+ we have $\sigma \circ \Frob_\kp \circ \sigma\inv = \Frob_{\kp'}$
+ for some $\kp'$. For this, just take $\kp' = \sigma\kp$.
+ Hence the set is indeed a conjugacy class.
+\end{proof}
+%We denote the conjugacy class by the \vocab{Frobenius symbol}
+%\[ \left( \frac{K/\QQ}{p} \right). \]
+
+In summary,
+\begin{moral}
+ $\Frob_{\kp}$ is determined up to conjugation by the prime $p$
+ from which $\kp$ arises.
+\end{moral}
+So even though the Gothic letters look scary, the content of $\Frob_{\kp}$
+really just comes from the more friendly-looking rational prime $p$.
+
+
+\begin{example}
+ [Frobenius elements in $\QQ(\cbrt2,\omega)$]
+ With those remarks, here is a more involved example of a Frobenius map.
+ Let $K = \QQ(\cbrt2, \omega)$ be the splitting field of
+ \[ t^3-2 = (t-\cbrt2)(t-\omega\cbrt2)(t-\omega^2\cbrt2). \]
+ Thus $K/\QQ$ is Galois.
+ We've seen in an earlier example that
+ \[ \OO_K = \ZZ[\eps] \quad\text{where}\quad \eps \text { is a root of } t^6+3t^5-5t^3+3t+1. \]
+
+ Let's consider the prime $5$ which factors (trust me here) as
+ \[ (5) = (5, \eps^2+\eps+2)(5, \eps^2+3\eps+3)(5, \eps^2+4\eps+1)
+ = \kp_1 \kp_2 \kp_3. \]
+ Note that all the prime ideals have inertial degree $2$.
+ Thus $\Frob_{\kp_i}$ will have order $2$ for each $i$.
+
+ Note that
+ \[ \Gal(K/\QQ) =
+ \text{permutations of } \{\cbrt2,\omega\cbrt2,\omega^2\cbrt2\}
+ \cong S_3. \]
+ In this $S_3$ there are $3$ elements of order two:
+ fixing one root and swapping the other two.
+ These correspond to each of $\Frob_{\kp_1}$, $\Frob_{\kp_2}$, $\Frob_{\kp_3}$.
+
+ In conclusion, the conjugacy class
+ $\left\{ \Frob_{\kp_1}, \Frob_{\kp_2}, \Frob_{\kp_3} \right\}$
+ associated to $(5)$ is the
+ cycle type $(\bullet)(\bullet \; \bullet)$ in $S_3$.
+\end{example}
+
+
+\section{Chebotarev density theorem}
+Natural question: can we represent every conjugacy class in this way?
+In other words, is every element of $G$ equal to $\Frob_\kp$ for some $\kp$?
+
+Miraculously, not only is the answer ``yes'', but in fact it is ``yes'' in the nicest way possible:
+the $\Frob_\kp$'s are ``equally distributed'' when we pick a random $\kp$.
+\begin{theorem}
+ [Chebotarev density theorem over $\QQ$]
+ Let $C$ be a conjugacy class of $G = \Gal(K/\QQ)$.
+ The density of (unramified) primes $p$ such that $\{ \Frob_\kp \mid \kp \text{ above } p \} = C$
+ %\[ \left( \frac{K/\QQ}{p} \right) = C \]
+ is exactly $\left\lvert C \right\rvert / \left\lvert G \right\rvert$.
+ In particular, for any $\sigma \in G$ there are infinitely many rational primes $p$
+ with $\kp$ above $p$ so that $\Frob_{\kp} = \sigma$.
+\end{theorem}
+
+By density, I mean that the proportion of primes $p \le x$ that work
+approaches $\frac{\left\lvert C \right\rvert}{\left\lvert G \right\rvert}$ as $x \to \infty$.
+Note that I'm throwing out the primes that ramify in $K$.
+This is no issue, since the only primes that ramify are those dividing $\Delta_K$,
+of which there are only finitely many.
+
+In other words, if I pick a random prime $p$ and look at the resulting conjugacy class,
+it's a lot like throwing a dart at $G$:
+the probability of hitting any conjugacy class depends just on the size of the class.
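
To make the dart-throwing picture concrete, here is a small numerical
experiment (an illustration only, not part of any proof) for the earlier example
$K = \QQ(\cbrt2, \omega)$ with $G \cong S_3$: the conjugacy class attached to an
unramified $p$ is read off from the number of roots of $x^3 - 2$ modulo $p$,
and the observed proportions should approach $\lvert C\rvert/\lvert G \rvert$.

```python
# Empirical Chebotarev for the splitting field of x^3 - 2:
#   0 roots mod p <-> 3-cycles       (density 2/6),
#   1 root  mod p <-> transpositions (density 3/6),
#   3 roots mod p <-> identity       (density 1/6).
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

counts = {0: 0, 1: 0, 3: 0}
total = 0
for p in range(5, 5000):          # skip the ramified primes 2 and 3
    if not is_prime(p):
        continue
    roots = sum(1 for x in range(p) if (x*x*x - 2) % p == 0)
    counts[roots] += 1
    total += 1

assert abs(counts[0] / total - 2/6) < 0.05
assert abs(counts[1] / total - 3/6) < 0.05
assert abs(counts[3] / total - 1/6) < 0.05
```

(A cubic with exactly two roots in $\FF_p$ would force a third, so only the
patterns $3$, $2+1$, $1+1+1$ occur, matching the cycle types of $S_3$.)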
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ bigbox("$G$");
+ pen b = lightcyan + opacity(0.4);
+ pen k = black;
+ filldraw( (-2.6,2.5)--(0.6,2.5)--(0.6,0.5)--(-2.6,0.5)--cycle, b, k);
+ filldraw( (-2.6,-2.5)--(0.6,-2.5)--(0.6,-0.5)--(-2.6,-0.5)--cycle, b, k);
+ filldraw( (2,0)--(3.5,0)--(3.5,2.5)--(2,2.5)--cycle, b, k);
+ filldraw( (2,-1)--(3.5,-1)--(3.5,-2)--(2,-2)--cycle, b, k);
+ for (real x = -2; x < 1; ++x) {
+ dot( (x, 1.9) );
+ dot( (x, 1.1) );
+ dot( (x, -1.9) );
+ dot( (x, -1.1) );
+ }
+ label("$37.5\%$", (-2.6, 0.5), dir(140));
+ label("$37.5\%$", (-2.6,-2.5), dir(140));
+ label("$C_1$", (-2.6, 2.5), dir(225));
+ label("$C_2$", (-2.6, -.5), dir(225));
+ dot( (2.75, 2.0) );
+ dot( (2.75, 1.25) );
+ dot( (2.75, 0.50) );
+ dot( (2.75, -1.50) );
+ label("$C_3$", (2, 0), dir(-90));
+ label("$18.75\%$", (3, 0), dir(-75));
+ label("$C_4$", (2, -2), dir(-90));
+ label("$6.25\%$", (3, -2), dir(-75));
+ \end{asy}
+\end{center}
+
+\begin{remark}
+Happily, this theorem (and preceding discussion)
+also works if we replace $K/\QQ$ with any Galois extension $K/F$;
+in that case we replace ``$\kp$ over $p$'' with ``$\kP$ over $\kp$''.
+In that case, we use $\Norm(\kp) \le x$ rather than $p \le x$ as the way
+to define density.
+\end{remark}
+
+\section{Example: Frobenius elements of cyclotomic fields}
+Let $q$ be a prime, and consider $L = \QQ(\zeta_q)$,
+with $\zeta_q$ a primitive $q$th root of unity.
+You should recall from various starred problems that
+\begin{itemize}
+ \ii $\Delta_L = \pm q^{q-2}$,
+ \ii $\OO_L = \ZZ[\zeta_q]$, and
+ \ii The map \[ \sigma_n : L \to L \quad\text{by}\quad \zeta_q \mapsto \zeta_q^n \]
+ is an automorphism of $L$ whenever $\gcd(n,q)=1$,
+ and depends only on $n \pmod q$.
+ In other words, the automorphisms of $L/\QQ$ just shuffle around the $q$th roots of unity.
+ In fact the Galois group consists exactly of the elements $\{\sigma_n\}$, namely
+ \[ \Gal(L/\QQ) = \{ \sigma_n \mid n \not\equiv 0 \pmod q \}. \]
+ As a group, \[ \Gal(L/\QQ) = \Zm q \cong \Zcc{q-1}. \]
+\end{itemize}
+This is surprisingly nice,
+because \textbf{elements of $\Gal(L/\QQ)$ look a lot
+like Frobenius elements already}.
+Specifically:
+
+\begin{lemma}[Cyclotomic Frobenius elements]
+ \label{lem:cyclo_frob}
+ In the cyclotomic setting $L = \QQ(\zeta_q)$,
+ let $p$ be a rational unramified prime
+ and $\kp$ above it. Then \[ \Frob_\kp = \sigma_p. \]
+\end{lemma}
+\begin{proof}
+ Observe that $\sigma_p$ satisfies the functional equation
+ $\sigma_p(\alpha) \equiv \alpha^p \pmod{\kp}$ for all $\alpha \in \OO_L$:
+ since $x \mapsto x^p$ is a ring homomorphism modulo $\kp$
+ (freshman's dream, because $p \in \kp$),
+ it suffices to check this on the generator $\zeta_q$, where it is clear.
+ Done by uniqueness of the Frobenius element.
+% We know $\Frob_\kp(\alpha) \equiv \alpha^p \pmod{\kp}$ by definition,
+% but also that $\Frob_\kp = \sigma_n$ for some $n$
+% We want $n=p$; since $\sigma_n(\zeta_q)^n = \zeta_q^n$ by definition
+% it would be very weird if this wasn't true!
+%
+% Given $\zeta_q^n \equiv \zeta_q^p \pmod{\kp}$, it suffices to
+% prove that the $q$th roots of unity are distinct mod $\kp$.
+% Look at the polynomial $F(x) = x^q-1$ in $\ZZ[\zeta_p]/\kp \cong \FF_{p^f}$.
+% Its derivative is \[ F'(x) = qx^{q-1} \not\equiv 0 \pmod{\kp} \]
+% (since $\FF_{p^f}$ has characteristic $p \nmid q$).
+% The only root of $F'$ is zero, hence $F$ has no double roots mod $\kp$.
+\end{proof}
+
+\begin{ques}
+ Conclude that a rational prime $p$
+ splits completely in $\OO_L$ if and only if $p \equiv 1 \pmod q$.
+\end{ques}
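
For a numerical illustration of this Question (not a proof), take $q = 7$:
splitting completely means the minimal polynomial
$\Phi_q(x) = 1 + x + \dots + x^{q-1}$ of $\zeta_q$
acquires all $q-1$ of its roots modulo $p$, and one can watch this
happen exactly when $p \equiv 1 \pmod 7$.

```python
# Check for q = 7: p splits completely in Z[zeta_7] iff Phi_7 has
# 6 roots mod p, which should happen exactly when p = 1 (mod 7),
# i.e. exactly when sigma_p = Frob is the identity.
q = 7

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def splits_completely(p):
    roots = sum(1 for x in range(p)
                if sum(pow(x, k, p) for k in range(q)) % p == 0)
    return roots == q - 1

for p in range(2, 500):
    if is_prime(p) and p != q:
        assert splits_completely(p) == (p % q == 1)
```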
+
+\section{Frobenius elements behave well with restriction}
+Let $L/\QQ$ and $K/\QQ$ be Galois extensions, and consider the setup
+\begin{center}
+\begin{tikzcd}
+ L \ar[d, dash] & \supset
+ & \kP \ar[d, dash] \ar[r, dotted]
+ & \Frob_{\kP} \in \Gal(L/\QQ)\\
+ K \ar[d, dash] & \supset
+ & \kp \ar[d, dash] \ar[r, dotted]
+ & \Frob_\kp \in \Gal(K/\QQ) \\
+ \QQ & \supset
+ & (p)
+ &
+\end{tikzcd}
+\end{center}
+Here $\kp$ is above $(p)$ and $\kP$ is above $\kp$.
+We may define
+\[ \Frob_\kp \colon K \to K
+ \quad\text{and}\quad
+ \Frob_{\kP} \colon L \to L \]
+and want to know how these are related.
+
+\begin{theorem}
+ [Restrictions of Frobenius elements]
+ Assume $L/\QQ$ and $K/\QQ$ are both Galois.
+ Let $\kP$ and $\kp$ be unramified as above.
+ Then $\Frob_{\kP} \restrict{K} = \Frob_{\kp}$,
+ i.e.\ for every $\alpha \in K$,
+ \[ \Frob_\kp(\alpha) = \Frob_{\kP}(\alpha). \]
+\end{theorem}
+%\begin{proof}
+% We know
+% \[ \Frob_{\kP}(\alpha) \equiv \alpha^p \pmod{\kP}
+% \quad \forall \alpha \in \OO_L \]
+% from the definition.
+% \begin{ques}
+% Deduce that
+% \[ \Frob_{\kP}(\alpha) \equiv \alpha^p \pmod{\kp}
+% \quad \forall \alpha \in \OO_K. \]
+% (This is weaker than the previous statement in two ways!)
+% \end{ques}
+% Thus $\Frob_{\kP}$ restricted to $\OO_K$ satisfies the
+% characterizing property of $\Frob_\kp$.
+%\end{proof}
+\begin{proof}
+ Since $K/\QQ$ is Galois, the restriction $\Frob_{\kP} \restrict{K}$
+ is an automorphism of $K$.
+ By definition,
+ \[ \Frob_{\kP}(\alpha) \equiv \alpha^p \pmod{\kP}
+ \quad \forall \alpha \in \OO_L. \]
+ In particular this holds for every $\alpha \in \OO_K$;
+ in that case both $\Frob_{\kP}(\alpha)$ and $\alpha^p$ lie in $\OO_K$,
+ so the congruence holds modulo $\kP \cap \OO_K = \kp$.
+ Thus $\Frob_{\kP}$ restricted to $\OO_K$ satisfies the
+ characterizing property of $\Frob_\kp$, and we're done by uniqueness.
+\end{proof}
+In short, the point of this section is that
+\begin{moral}
+ Frobenius elements upstairs restrict to Frobenius elements downstairs.
+\end{moral}
+
+\section{Application: Quadratic reciprocity}
+We now aim to prove:
+\begin{theorem}
+ [Quadratic reciprocity]
+ Let $p$ and $q$ be distinct odd primes.
+ Then
+ \[ \left( \frac pq \right)\left( \frac qp \right)
+ = (-1)^{\frac{p-1}{2} \cdot \frac{q-1}{2}}. \]
+\end{theorem}
+(See, e.g. \cite{ref:holden} for an exposition on quadratic reciprocity,
+if you're not familiar with it.)
+
+\subsection{Step 1: Setup}
+For this proof, we first define
+\[ L = \QQ(\zeta_q) \]
+where $\zeta_q$ is a primitive $q$th root of unity.
+Then $L/\QQ$ is Galois, with Galois group $G$.
+\begin{ques}
+ Show that $G$ has a unique subgroup $H$ of index two.
+\end{ques}
+In fact, we can describe it exactly: viewing $G \cong \Zm q$, we have
+\[ H = \left\{ \sigma_n \mid \text{$n$ quadratic residue mod $q$} \right\}. \]
+By the fundamental theorem of Galois theory, there ought to be a degree $2$
+extension of $\QQ$ inside $\QQ(\zeta_q)$ (that is, a quadratic field).
+Call it $\QQ(\sqrt{q^\ast})$, for $q^\ast$ squarefree:
+\begin{center}
+\begin{tikzcd}
+ L = \QQ(\zeta_q)
+ \ar[d, "\frac{q-1}{2}"', dash]
+ \ar[r, leftrightarrow]
+ & \{1\} \ar[d, dash] \\
+ K = \QQ(\sqrt{q^\ast})
+ \ar[d, "2"', dash]
+ \ar[r, leftrightarrow]
+ & H \ar[d, dash] \\
+ \QQ \ar[r, leftrightarrow]
+ & G
+\end{tikzcd}
+\end{center}
+\begin{exercise}
+ Note that if a rational prime $\ell$ ramifies in $K$,
+ then it ramifies in $L$.
+ Use this to show that
+ \[ q^\ast = \pm q \text{ and } q^\ast \equiv 1 \pmod 4. \]
+ Together these determine the value of $q^\ast$.
+\end{exercise}
+(Actually, it is true in general that
+$\Delta_K$ divides $\Delta_L$ in a tower $L/K/\QQ$.)
+
+\subsection{Step 2: Reformulation}
+Now we are going to prove:
+\begin{theorem}
+ [Quadratic reciprocity, equivalent formulation]
+ For distinct odd primes $p$, $q$ we have
+ \[ \left( \frac pq \right) = \left( \frac{q^\ast}{p} \right). \]
+\end{theorem}
+\begin{exercise}
+ Using the fact that $\left( \frac{-1}{p} \right) = (-1)^{\frac{p-1}{2}}$,
+ show that this is equivalent to quadratic reciprocity as we know it.
+\end{exercise}
+
+We look at the rational prime $p$ in $\ZZ$.
+Either it splits into two in $K$ or is inert; either way let $\kp$ be a prime factor
+in the resulting decomposition (so $\kp$ is either $p \cdot \OO_K$ in the inert case,
+or one of the primes in the split case).
+Then let $\kP$ be a prime above $\kp$ in $L$.
+(The prime $\kp$ could split further in $L$.) The picture looks like
+\begin{center}
+\begin{tikzcd}
+ \OO_L = \ZZ[\zeta_q] & \supset
+ & {\kP} \ar[r, dotted] & \ZZ[\zeta_q]/{\kP} \cong \FF_{p^f} \\
+ \OO_K = \ZZ[\frac{1+\sqrt{q^\ast}}{2}] & \supset
+ & \kp \ar[r, dotted] & \FF_p \text{ or } \FF_{p^2} \\
+ \ZZ & \supset & (p) \ar[r, dotted]
+ & \FF_p
+\end{tikzcd}
+\end{center}
+\begin{ques}
+ Why is $p$ not ramified in either $K$ or $L$?
+\end{ques}
+
+\subsection{Step 3: Introducing the Frobenius}
+Now, we take the Frobenius
+\[ \sigma_p = \Frob_{\kP} \in \Gal(L/\QQ). \]
+We claim that
+\[ \Frob_{\kP} \in H \iff \text{$p$ splits in $K$}. \]
+To see this, note that $\Frob_{\kP}$ is in $H$ if and only if it acts
+as the identity on $K$.
+But $\Frob_{\kP} \restrict{K}$ is $\Frob_\kp$!
+So \[ \Frob_{\kP} \in H \iff \Frob_\kp = \id_K. \]
+Finally note that $\Frob_\kp$ has order $1$ if $p$ splits
+($\kp$ has inertial degree $1$)
+and order $2$ if $p$ is inert.
+This completes the proof of the claim.
+
+\subsection{Finishing up}
+We already know by \Cref{lem:cyclo_frob} that $\Frob_{\kP} = \sigma_p \in H$
+if and only if $p$ is a quadratic residue.
+On the other hand,
+\begin{exercise}
+ Show that $p$ splits in $\OO_K = \ZZ[\frac12(1+\sqrt{q^\ast})]$
+ if and only if $\left( \frac{q^\ast}{p} \right) = 1$.
+ (Use the factoring algorithm. You need the fact that $p \neq 2$ here.)
+\end{exercise}
+In other words
+\[ \left( \frac pq \right) = 1
+ \iff \sigma_p \in H \iff \text{$p$ splits in $\ZZ\left[ \tfrac12(1+\sqrt{q^\ast}) \right]$}
+ \iff \left( \frac{q^\ast}{p} \right) = 1.
+\]
+This completes the proof.
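
Having proven it, we can also verify the reformulation numerically
(an illustrative sketch, computing Legendre symbols by Euler's criterion):

```python
# Check (p/q) = (q*/p) for small distinct odd primes, where q* = +-q is
# chosen with q* = 1 (mod 4); also check the classical product form.
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion, p an odd prime."""
    t = pow(a % p, (p - 1) // 2, p)
    return 0 if t == 0 else (1 if t == 1 else -1)

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
for p in odd_primes:
    for q in odd_primes:
        if p == q:
            continue
        qstar = q if q % 4 == 1 else -q
        assert legendre(p, q) == legendre(qstar, p)
        assert legendre(p, q) * legendre(q, p) == (-1) ** ((p-1)//2 * (q-1)//2)
```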
+
+
+\section{Frobenius elements control factorization}
+\prototype{$\Frob_\kp$ controlled the splitting of $p$ in the proof of quadratic reciprocity;
+the same holds in general.}
+In the proof of quadratic reciprocity, we used the fact that Frobenius elements behaved
+well with restriction in order to relate the splitting of $p$ with properties of $\Frob_\kp$.
+
+In fact, there is a much stronger statement for
+any intermediate field $\QQ \subseteq E \subseteq K$
+which works even if $E/\QQ$ is not Galois.
+It relies on the notion of a \emph{factorization pattern}.
+Here is how it goes.
+
+Set $n = [E:\QQ]$, and let $p$ be a rational prime unramified in $K$.
+Then $p$ can be broken up in $E$ as
+\[ p \cdot \OO_E = \kp_1 \kp_2 \dots \kp_g \]
+with inertial degrees $f_1$, \dots, $f_g$
+(these inertial degrees might be different, since $E/\QQ$ need not be Galois).
+The numbers $f_1$, \dots, $f_g$ satisfy $f_1 + \dots + f_g = n$,
+so they form a partition of the number $n$.
+For example, in the quadratic reciprocity proof we had $n = 2$,
+with possible partitions $1 + 1$ (if $p$ split) and $2$ (if $p$ was inert).
+We call this the \vocab{factorization pattern} of $p$ in $E$.
+
+Next, we introduce a Frobenius $\Frob_{\kP}$ above $(p)$, all the way in $K$;
+this is an element of $G = \Gal(K/\QQ)$.
+Then let $H$ be the group corresponding to the field $E$.
+Diagram:
+\begin{center}
+\begin{tikzcd}
+ K \ar[r, leftrightarrow] \ar[d, dash] & \{1\} \ar[d, dash]
+ & \Frob_{\kP} \\
+ E \ar[d, dash, "n"'] \ar[r, leftrightarrow] & H \ar[d, dash, "n"]
+ & \kp_1 \dots \kp_g \ar[d, dash] & f_1 + \dots + f_g = n \\
+ \QQ \ar[r, leftrightarrow] & G & (p)
+\end{tikzcd}
+\end{center}
+Then $\Frob_{\kP}$ induces a \emph{permutation}
+of the $n$ left cosets $gH$ by left multiplication
+(after all, $\Frob_{\kP}$ is an element of $G$ too!).
+Just as with any permutation, we may look at the resulting cycle decomposition,
+which has a natural ``cycle structure'': a partition of $n$.
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ pen tg = heavyred; // "times g"
+
+ pen pointpen = lightblue;
+ pair A = Drawing("g_1H", dir(80), dir(80), pointpen);
+ pair B = Drawing("g_2H", A*dir(120), A*dir(120), pointpen);
+ pair C = Drawing("g_3H", A*dir(240), A*dir(240), pointpen);
+ draw(A--B, dashed + pointpen, EndArrow, Margin(2,2));
+ draw(B--C, dashed + pointpen, EndArrow, Margin(2,2));
+ draw(C--A, dashed + pointpen, EndArrow, Margin(2,2));
+ label("$\times g$", midpoint(A--B), A+B, tg);
+ label("$\times g$", midpoint(B--C), B+C, tg);
+ label("$\times g$", midpoint(C--A), C+A, tg);
+ label("$3$", origin, origin, pointpen);
+ add(shift( (-3.2,0.1) ) * CC());
+
+ label("$g = \operatorname{Frob}_{\mathfrak P}$", (-1.7,1.7), origin, tg);
+
+ pointpen = heavygreen;
+ pair W = Drawing("g_4H", dir(50), dir(50), pointpen);
+ pair X = Drawing("g_5H", W*dir(90), W*dir(90), pointpen);
+ pair Y = Drawing("g_6H", W*dir(180), W*dir(180), pointpen);
+ pair Z = Drawing("g_7H", W*dir(270), W*dir(270), pointpen);
+ draw(W--X, dashed + pointpen, EndArrow, Margin(2,2));
+ draw(X--Y, dashed + pointpen, EndArrow, Margin(2,2));
+ draw(Y--Z, dashed + pointpen, EndArrow, Margin(2,2));
+ draw(Z--W, dashed + pointpen, EndArrow, Margin(2,2));
+ defaultpen(red);
+ label("$\times g$", W--X, W+X, tg);
+ label("$\times g$", X--Y, X+Y, tg);
+ label("$\times g$", Y--Z, Y+Z, tg);
+ label("$\times g$", Z--W, Z+W, tg);
+ label("$4$", origin, origin, pointpen);
+
+ label("$\boxed{n = 7 = 3+4}$", (-2,-1.8), origin, black);
+ \end{asy}
+\end{center}
+
+The theorem is that these coincide:
+\begin{theorem}
+ [Frobenius elements control decomposition]
+ \label{thm:frob_control_decomp}
+ Let $\QQ \subseteq E \subseteq K$ be an extension of number fields
+ and assume $K/\QQ$ is Galois (though $E/\QQ$ need not be).
+ Pick an unramified rational prime $p$; let $G = \Gal(K/\QQ)$
+ and $H$ the corresponding intermediate subgroup.
+ Finally, let $\kP$ be a prime above $p$ in $K$.
+
+ Then the \emph{factorization pattern} of $p$ in $E$ is given by
+ the \emph{cycle structure} of $\Frob_{\kP}$ acting on the left cosets of $H$.
+\end{theorem}
+Often, we take $E = K$, in which case this is just asserting
+that the decomposition of the prime $p$ is controlled by a Frobenius element over it.
+
+An important special case is when $E = \QQ(\alpha)$,
+because, as we will see, it lets us determine how the minimal
+polynomial of $\alpha$ factors modulo $p$.
+To motivate this, let's go back a few chapters
+and think about the Factoring Algorithm.
+
+Let $\alpha$ be an algebraic integer and $f$ its minimal polynomial (of degree $n$).
+Set $E = \QQ(\alpha)$ (which has degree $n$ over $\QQ$).
+Suppose we're lucky enough that $\OO_E = \ZZ[\alpha]$,
+i.e.\ that $E$ is monogenic.
+Then we know by the Factoring Algorithm,
+to factor any $p$ in $E$, all we have to do is factor $f$ modulo $p$,
+since if $f = f_1^{e_1} \dots f_g^{e_g} \pmod p$ then we have
+\[ (p) = \prod_i \kp_i^{e_i} = \prod_i (f_i(\alpha), p)^{e_i}. \]
+This gives us complete information about the ramification indices and inertial degrees;
+the $e_i$ are the ramification indices, and $\deg f_i$ are the inertial degrees
+(since $\OO_E / \kp_i \cong \FF_p[X] / (f_i(X))$).
+
+In particular, if $p$ is unramified then all the $e_i$ are equal to $1$, and we get
+\[ n = \deg f = \deg f_1 + \deg f_2 + \dots + \deg f_g. \]
+Once again we have a partition of $n$;
+we call this the \vocab{factorization pattern} of $f$ modulo $p$.
+So, to see the factorization pattern of an unramified $p$ in $\OO_E$,
+we just have to know the factorization pattern of $f \pmod p$.
+
+Turning this on its head, if we want to know the factorization pattern of $f \pmod p$,
+we just need to know how $p$ decomposes.
+And it turns out these coincide even without the assumption that $E$ is monogenic.
+
+\begin{theorem}[Frobenius controls polynomial factorization]
+ \label{thm:factor_poly_frob}
+ Let $\alpha$ be an algebraic integer with minimal polynomial $f$,
+ and let $E = \QQ(\alpha)$.
+ Then for any prime $p$ unramified in the splitting field $K$ of $f$,
+ the following coincide:
+ \begin{enumerate}[(i)]
+ \ii The factorization pattern of $p$ in $E$.
+ \ii The factorization pattern of $f \pmod p$.
+ \ii The cycle structure associated to the action
+ of $\Frob_{\kP} \in \Gal(K/\QQ)$ on the roots of $f$,
+ where $\kP$ is above $p$ in $K$.
+ \end{enumerate}
+\end{theorem}
+\begin{example}[Factoring $x^3-2 \pmod 5$]
+ Let $\alpha = \cbrt2$ and $f = x^3-2$, so $E = \QQ(\cbrt2)$.
+ Set $p=5$ and finally, let $K = \QQ(\cbrt2, \omega)$ be the splitting field.
+ Setup:
+ \begin{center}
+ \begin{tikzcd}
+ K = \QQ(\cbrt2, \omega) \ar[d, dash, "2"']
+ & \kP \ar[d, dash]
+ & x^3-2 = (x-\cbrt2)(x-\cbrt2\omega)(x-\cbrt2\omega^2) \\
+ E = \QQ(\sqrt[3]{2}) \ar[d, dash, "3"']
+ & \kp \ar[d, dash]
+ & x^3-2 = (x-\cbrt2)(x^2+\cbrt2x+\cbrt4) \\
+ \QQ & (5) & x^3-2 \text{ irreducible over } \QQ
+ \end{tikzcd}
+ \end{center}
+ The three claimed objects now all have shape $2+1$:
+ \begin{enumerate}[(i)]
+ \ii By the Factoring Algorithm, we have
+ $(5) = (5, \cbrt2-3)(5, 9+3\cbrt2+\cbrt4)$.
+ \ii We have $x^3-2 \equiv (x-3)(x^2+3x+9) \pmod 5$.
+ \ii We saw before that $\Frob_{\kP} = (\bullet)(\bullet \; \bullet)$.
+ \end{enumerate}
+\end{example}
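
The arithmetic in items (i) and (ii) is easy to confirm by machine;
here is a quick illustrative check that the claimed factorization of
$x^3-2$ modulo $5$ is correct and that the quadratic factor is irreducible,
matching the $2+1$ pattern:

```python
# Verify: (x - 3)(x^2 + 3x + 9) = x^3 - 2 (mod 5), and x^2 + 3x + 9
# has no roots in F_5 (hence is irreducible over F_5).
def polymul(a, b, p):
    """Multiply polynomials (coefficient lists, highest degree first) mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

p = 5
prod = polymul([1, -3], [1, 3, 9], p)
assert prod == [c % p for c in [1, 0, 0, -2]]           # x^3 - 2 mod 5
assert all((x*x + 3*x + 9) % p != 0 for x in range(p))  # no roots
```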
+
+\begin{proof}[Sketch of Proof]
+ Let $n = \deg f$.
+ Let $H$ be the subgroup of $G = \Gal(K/\QQ)$ corresponding to $E$, so $[G:H] = n$.
+ Pictorially, we have
+ \begin{center}
+ \begin{tikzcd}
+ K \ar[d, dash] & \{1\} \ar[d, dash] & \kP \ar[d, dash] \\
+ E = \QQ(\alpha) \ar[d, dash] & H \ar[d, dash] & \kp \ar[d, dash] \\
+ \QQ & G & (p)
+ \end{tikzcd}
+ \end{center}
+ We claim that (i), (ii), (iii) are all equivalent to
+ \begin{center}
+ (iv) The cycle structure of the action of $\Frob_{\kP}$ on the cosets $G/H$.
+ \end{center}
+ In other words we claim the cosets correspond to the $n$ roots of $f$ in $K$.
+ Indeed $H$ is just the set of $\tau \in G$ such that $\tau(\alpha)=\alpha$,
+ so there's a bijection between the roots and the cosets $G/H$
+ by $\tau H \mapsto \tau(\alpha)$.
+ Think of it this way: if $G = S_n$, and $H = \{\tau : \tau(1) = 1\}$,
+ then $G/H$ has order $n! / (n-1)! = n$ and corresponds to the elements $\{1, \dots, n\}$.
+ So there is a natural bijection from (iii) to (iv).
+
+ The fact that (i) is in bijection to (iv) was the previous theorem,
+ \Cref{thm:frob_control_decomp}.
+ The correspondence (i) $\iff$ (ii) is a fact of Galois theory,
+ so we omit the proof here.
+\end{proof}
+
+All this can be done in general with $\QQ$ replaced by $F$;
+for example, in \cite{ref:lenstra_chebotarev}.
+
+\section{Example application: IMO 2003 problem 6}
+As an example of the power we now have at our disposal, let's prove:
+
+\begin{center}
+ \begin{minipage}{4.5cm}
+ \includegraphics[width=4cm]{media/IMO-2003-logo.png}
+ \end{minipage}%
+ \begin{minipage}{10cm}
+ \textbf{Problem 6}.
+ Let $p$ be a prime number.
+ Prove that there exists a prime number $q$ such that for every integer $n$,
+ the number $n^p-p$ is not divisible by $q$.
+ \end{minipage}
+\end{center}
+We will show, much more strongly, that there exist infinitely many primes $q$
+such that $X^p-p$ is irreducible modulo $q$.
+
+\begin{proof}[Solution]
+ Okay! First, we draw the tower of fields
+ \[ \QQ \subseteq \QQ(\sqrt[p]{p}) \subseteq K \]
+ where $K$ is the splitting field of $f(x) = x^p-p$.
+ Let $E = \QQ(\sqrt[p]{p})$ for brevity and note it has degree $[E:\QQ] = p$.
+ Let $G = \Gal(K/\QQ)$.
+ \begin{ques}
+ Show that $p$ divides the order of $G$. (Look at $E$.)
+ \end{ques}
+ Hence by Cauchy's theorem (\Cref{thm:cauchy_group}, which is a purely group-theoretic fact)
+ we can find a $\sigma \in G$ of order $p$.
+ By Chebotarev, there exist infinitely many rational (unramified) primes $q \neq p$
+ and primes $\kQ \subseteq \OO_K$ above $q$
+ such that $\Frob_\kQ = \sigma$.
+ (Yes, that's an uppercase Gothic $Q$. Sorry.)
+
+ We claim that all these $q$ work.
+
+ By \Cref{thm:factor_poly_frob}, the factorization of $f \pmod q$ is
+ controlled by the action of $\sigma = \Frob_\kQ$ on the roots of $f$.
+ But $\sigma$ has prime order $p$ in $G$!
+ So all the lengths in the cycle structure have to divide $p$.
+ Thus the possible factorization patterns of $f$ are
+ \[ p = \underbrace{1 + 1 + \dots + 1}_{\text{$p$ times}}
+ \quad\text{or}\quad p = p. \]
+ So we just need to rule out the $p = 1 + \dots + 1$ case now:
+ this only happens if $f$ breaks into linear factors mod $q$.
+ Intuitively this edge case seems highly unlikely (are we really so unlucky
+ that $f$ factors into \emph{linear} factors when we want it to be irreducible?).
+ And indeed this is easy to see: this means that $\sigma$ fixes all
+ of the roots of $f$ in $K$, but that means $\sigma$ fixes $K$ altogether,
+ and hence is the identity of $G$, contradiction.
+\end{proof}
+\begin{remark}
+ In fact $K = \QQ(\sqrt[p]{p}, \zeta_p)$, and $\left\lvert G \right\rvert = p(p-1)$.
+ With a little more group theory, we can show that in fact the density of
+ primes $q$ that work is $\frac 1p$.
+\end{remark}
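
For $p = 5$ one can watch all of this happen numerically
(an illustrative experiment only): among primes $q$, the proportion for which
$n^5 - 5$ is never divisible by $q$ (equivalently, by the argument above,
for which $x^5 - 5$ is irreducible mod $q$) should approach $\frac15$.

```python
# Empirical check for p = 5: density of primes q such that x^5 - 5
# has no root mod q is about 1/5, as in the remark.
P = 5

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

good = total = 0
for q in range(2, 5000):
    if not is_prime(q) or q == P:
        continue
    total += 1
    if all(pow(n, P, q) != P % q for n in range(q)):
        good += 1

assert good > 0                        # such q exist: the IMO statement
assert abs(good / total - 1/5) < 0.05  # density roughly 1/5
```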
+
+\section\problemhead
+
+\begin{problem}
+ Show that for an odd prime $p$, \[ \left( \frac 2p \right) = (-1)^{\frac 18(p^2-1)}. \]
+ \begin{hint}
+ Modify the end of the proof of quadratic reciprocity.
+ \end{hint}
+ \begin{sol}
+ It is still true that
+ \[ \left( \frac 2q \right) = 1
+ \iff \sigma_2 \in H \iff \text{$2$ splits in $\ZZ\left[ \tfrac12(1+\sqrt{q^\ast}) \right]$}. \]
+ Now, $2$ splits in the ring if and only if $t^2 - t + \tfrac14(1-q^\ast)$
+ factors mod $2$. This happens if and only if $q^\ast \equiv 1 \pmod 8$.
+ One can check this happens exactly when $q \equiv \pm 1 \pmod 8$, which gives the conclusion.
+ \end{sol}
+\end{problem}
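
As with quadratic reciprocity itself, the supplementary law is easy to
test numerically (an illustrative check via Euler's criterion):

```python
# Check (2/p) = (-1)^((p^2 - 1)/8) for all odd primes p below 1000.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion, p an odd prime."""
    t = pow(a % p, (p - 1) // 2, p)
    return 0 if t == 0 else (1 if t == 1 else -1)

for p in range(3, 1000, 2):
    if is_prime(p):
        assert legendre(2, p) == (-1) ** ((p * p - 1) // 8)
```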
+
+\begin{problem}
+ Let $f$ be a nonconstant polynomial with integer coefficients.
+ Suppose $f \pmod p$ splits completely into linear factors
+ for all sufficiently large primes $p$.
+ Show that $f$ splits completely into linear factors.
+\end{problem}
+
+\begin{dproblem}
+ [Dirichlet's theorem on arithmetic progressions]
+ Let $a$ and $m$ be relatively prime positive integers.
+ Show that the density of primes $p \equiv a \pmod m$ is exactly $\frac{1}{\phi(m)}$.
+ \begin{hint}
+ Chebotarev Density on $\QQ(\zeta_m)$.
+ \end{hint}
+ \begin{sol}
+ Let $K = \QQ(\zeta_m)$.
+ One can show that $\Gal(K/\QQ) \cong \Zm m$ exactly as before.
+ In particular, $\Gal(K/\QQ)$ is abelian and therefore its conjugacy classes
+ are singleton sets; there are $\phi(m)$ of them.
+
+ As long as $p$ is sufficiently large, it is unramified
+ and $\sigma_p = \Frob_\kp$ for any $\kp$ above $p$
+ (as $m$th roots of unity will be distinct modulo $p$;
+ differentiate $x^m-1$ mod $p$ again).
+ Hence by Chebotarev, the density of primes $p$ with
+ $\sigma_p = \sigma_a$, i.e.\ with $p \equiv a \pmod m$,
+ is exactly $\frac{1}{\phi(m)}$.
+\end{dproblem}
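
One can see this equidistribution numerically; here is an illustrative
experiment for $m = 10$, where the $\phi(10) = 4$ eligible classes are
$1$, $3$, $7$, $9 \pmod{10}$:

```python
# Empirical Dirichlet for m = 10: primes below 20000 should fall into the
# four residue classes coprime to 10 in roughly equal proportions (1/4 each).
from math import gcd

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

m = 10
counts = {a: 0 for a in range(m) if gcd(a, m) == 1}
total = 0
for p in range(2, 20000):
    if is_prime(p) and gcd(p, m) == 1:
        counts[p % m] += 1
        total += 1

for c in counts.values():
    assert abs(c / total - 1 / len(counts)) < 0.03
```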
+
+\begin{problem}
+ Let $n$ be an odd integer which is not a prime power.
+ Show that the $n$th cyclotomic polynomial is not
+ irreducible modulo \emph{any} rational prime.
+ % http://mathoverflow.net/questions/12366/how-many-primes-stay-inert-in-a-finite-non-cyclic-extension-of-number-fields
+\end{problem}
+
+\begin{problem}
+ [Putnam 2012 B6]
+ \yod
+ Let $p$ be an odd prime such that $p \equiv 2 \pmod 3$.
+ Let $\pi$ be the permutation of $\FF_p$ given by $\pi(x) = x^3 \pmod p$.
+ Show that $\pi$ is even if and only if $p \equiv 3 \pmod 4$.
+ \begin{hint}
+ By primitive roots, it's the same as the action of $\times 3$ on $\Zcc{p-1}$.
+ Let $\zeta$ be a $(p-1)$st root of unity.
+ Take $d = \prod_{i < j} (\zeta^i - \zeta^j)$, think about $\QQ(d)$,
+ and figure out how to act on it by $x \mapsto x^3$.
+ \end{hint}
+ \begin{sol}
+ This solution is by David Corwin.
+ By primitive roots, it's the same as the action of $\times 3$ on $\Zcc{p-1}$.
+ Let $\zeta$ be a $(p-1)$st root of unity.
+
+ Consider
+ \[ d = \prod_{0 \le i < j < p-1} (\zeta^i - \zeta^j). \]
+ This is the square root of the discriminant of
+ the polynomial $X^{p-1}-1$; in other words $d^2 \in \ZZ$.
+ In fact, by elementary methods one can compute
+ \[ (-1)^{\binom{p-1}{2}} d^2 = -(p-1)^{p-1}. \]
+
+ Now take the extension $K = \QQ(d)$, noting that
+ \begin{itemize}
+ \ii If $p \equiv 3 \pmod 4$, then $d = (p-1)^{\half(p-1)}$, so $K = \QQ$.
+ \ii If $p \equiv 1 \pmod 4$, then $d = i(p-1)^{\half(p-1)}$, so $K = \QQ(i)$.
+ \end{itemize}
+ Either way, in $\OO_K$, let $\kp$ be a prime ideal above $(3) \subseteq \OO_K$.
+ Let $\sigma = \Frob_\kp$ then be the unique element such that
+ $\sigma(x) = x^3 \pmod{\kp}$ for all $x$.
+ Then, we observe that
+ \[
+ \sigma(d) \equiv \prod_{0 \le i < j < p-1} (\zeta^{3i} - \zeta^{3j})
+ \equiv \begin{cases}
+ +d & \text{if $\pi$ is even} \\
+ -d & \text{if $\pi$ is odd}
+ \end{cases} \pmod{\kp}.
+ \]
+ Now if $K = \QQ$, then $\sigma$ is the identity, thus $\pi$ is even.
+ Conversely, if $K = \QQ(i)$, then $3$ does not split, so $\sigma(d) = -d$
+ (in fact $\sigma$ is complex conjugation), thus $\pi$ is odd.
+
+ Note the condition that $p \equiv 2 \pmod 3$ is used only
+ to guarantee that $\pi$ is actually a permutation (and thus $d \neq 0$);
+ it does not play any substantial role in the solution.
+ \end{sol}
+\end{problem}
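
The statement of this problem is easy to check directly for small primes
(an illustrative brute-force computation of the parity of $x \mapsto x^3$
via its cycle decomposition):

```python
# For primes p = 2 (mod 3), the map x -> x^3 permutes F_p; compute its sign
# from the cycle lengths and compare with the claimed criterion p = 3 (mod 4).
def parity_of_cubing(p):
    seen = [False] * p
    sign = 1
    for start in range(p):
        if seen[start]:
            continue
        length, x = 0, start
        while not seen[x]:
            seen[x] = True
            x = pow(x, 3, p)
            length += 1
        if length % 2 == 0:      # each even-length cycle flips the sign
            sign = -sign
    return sign

for p in [5, 11, 17, 23, 29, 41, 47, 53, 59, 71, 83, 89, 101]:
    assert p % 3 == 2
    assert (parity_of_cubing(p) == 1) == (p % 4 == 3)
```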
diff --git a/books/napkin/functors.tex b/books/napkin/functors.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f454294533b226990a92ff5efc3baf90945965bb
--- /dev/null
+++ b/books/napkin/functors.tex
@@ -0,0 +1,688 @@
+\chapter{Functors and natural transformations}
+\label{ch:functors}
+Functors are maps between categories; natural transformations are maps between functors.
+
+\section{Many examples of functors}
+\prototype{Forgetful functors; fundamental groups; $\bullet^\vee$.}
+Here's the point of a functor:
+\begin{moral}
+ Pretty much any time you make an object out of another object,
+ you get a functor.
+\end{moral}
+Before I give you a formal definition, let me list (informally) some examples.
+(You'll notice some of them have opposite categories $\AA\op$ appearing in places.
+Don't worry about those for now; you'll see why in a moment.)
+\begin{itemize}
+ \ii Given a group $G$ (or vector space, field, \dots), we can take its underlying set $S$;
+ this is a functor from $\catname{Grp} \to \catname{Set}$.
+ \ii Given a set $S$ we can consider a vector space with basis $S$;
+ this is a functor from $\catname{Set} \to \catname{Vect}$.
+
+ \ii Given a vector space $V$ we can consider its dual space $V^\vee$.
+ This is a functor $\catname{Vect}_k\op \to \catname{Vect}_k$.
+ \ii Tensor products give a functor from $\catname{Vect}_k \times \catname{Vect}_k \to \catname{Vect}_k$.
+ \ii Given a set $S$, we can build its power set, giving a functor $\catname{Set} \to \catname{Set}$.
+ \ii In algebraic topology, we take a topological space $X$ and build several groups $H_1(X)$, $\pi_1(X)$,
+ etc.\ associated to it. All these group constructions are functors $\catname{Top} \to \catname{Grp}$.
+ \ii Sets of homomorphisms: let $\AA$ be a category.
+ \begin{itemize}
+ \ii Given two vector spaces $V_1$ and $V_2$ over $k$,
+ we construct the abelian group of linear maps $V_1 \to V_2$.
+ This is a functor from $\catname{Vect}_k\op \times \catname{Vect}_k \to \catname{AbGrp}$.
+ \ii More generally for any category $\AA$
+ we can take pairs $(A_1, A_2)$ of objects and
+ obtain a set $\Hom_{\AA}(A_1, A_2)$.
+ This turns out to be a functor $\AA\op \times \AA \to \catname{Set}$.
+ \ii The above operation has two ``slots''.
+ If we ``pre-fill'' the first slots, then we get a functor $\AA \to \catname{Set}$.
+ That is, by fixing $A \in \AA$, we obtain a functor (called $H^A$)
+ from $\AA \to \catname{Set}$ by sending $A' \in \AA$ to $\Hom_{\AA} (A, A')$.
+ This is called the covariant Yoneda functor (explained later).
+ \ii As we saw above,
+ for every $A \in \AA$ we obtain a functor $H^A : \AA \to \catname{Set}$.
+ It turns out we can construct a category $[\AA, \catname{Set}]$
+ whose elements are functors $\AA \to \catname{Set}$;
+ in that case, we now have a functor $\AA\op \to [\AA, \catname{Set}]$.
+ \end{itemize}
+\end{itemize}
+
+\section{Covariant functors}
+\prototype{Forgetful/free functors, \dots}
+Category theorists are always asking ``what are the maps?'',
+and so we can now think about maps between categories.
+
+\begin{definition}
+ Let $\AA$ and $\BB$ be categories.
+ Of course, a \vocab{functor} $F$ takes every object of $\AA$ to an object of $\BB$.
+ In addition, though, it must take every arrow $A_1 \taking{f} A_2$
+ to an arrow $F(A_1) \taking{F(f)} F(A_2)$.
+ You can picture this as follows.
+ \begin{center}
+ \begin{tikzcd}
+ & A_1 \ar[dd, "f"'] && B_1 = F(A_1) \ar[dd, "F(f)"] \\
+ \AA \ni & \quad \ar[rr, "F", dashed] & & \quad & \in \BB \\
+ & A_2 && B_2 = F(A_2)
+ \end{tikzcd}
+ \end{center}
+ (I'll try to use dotted arrows for functors, which cross different categories, for emphasis.)
+ It needs to satisfy the ``naturality'' requirements:
+ \begin{itemize}
+ \ii Identity arrows get sent to identity arrows:
+ for each identity arrow $\id_A$, we have $F(\id_A) = \id_{F(A)}$.
+ \ii The functor respects composition:
+ if $A_1 \taking f A_2 \taking g A_3$ are arrows in $\AA$,
+ then $F(g \circ f) = F(g) \circ F(f)$.
+ \end{itemize}
+\end{definition}
+
+So the idea is:
+\begin{moral}
+Whenever we naturally make an object $A \in \AA$ into an object $B \in \BB$,
+there should usually be a natural way to transform a map $A_1 \to A_2$ into a map $B_1 \to B_2$.
+\end{moral}
+Let's see some examples of this.
+
+\begin{example}
+ [Free and forgetful functors]
+ Note that these are both informal terms,
+ and don't have a rigid definition.
+ \begin{enumerate}[(a)]
+ \ii We talked about a \vocab{forgetful functor} earlier,
+ which takes the underlying set of a category like $\catname{Vect}_k$.
+ Let's call it $U : \catname{Vect}_k \to \catname{Set}$.
+
+ Now, given a map $T : V_1 \to V_2$ in $\catname{Vect}_k$,
+ there is an obvious $U(T) : U(V_1) \to U(V_2)$ which is just
+ the set-theoretic map corresponding to $T$.
+
+ Similarly there are forgetful functors from
+ $\catname{Grp}$, $\catname{CRing}$, etc., to $\catname{Set}$.
+ There is even a forgetful functor $\catname{CRing} \to \catname{Grp}$:
+ send a ring $R$ to the abelian group $(R,+)$.
+ The common theme is that we are ``forgetting'' structure
+ from the original category.
+
+ \ii We also talked about a \vocab{free functor} in the example.
+ A free functor $F : \catname{Set} \to \catname{Vect}_k$ can be taken by considering
+ $F(S)$ to be the vector space with basis $S$.
+ Now, given a map $f : S \to T$, what is the obvious map $F(S) \to F(T)$?
+ Simple: take each basis element $s \in S$ to the basis element $f(s) \in T$.
+
+ Similarly, we can define $F : \catname{Set} \to \catname{Grp}$
+ by taking the free group generated by a set $S$.
+ \end{enumerate}
+\end{example}
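
Here is a tiny illustrative sketch (a toy model, not a formal construction)
of what the free functor does on maps: representing a vector in $F(S)$
as a finitely supported dictionary of coefficients, $F(f)$ sends each basis
element $s$ to the basis element $f(s)$, extended linearly.

```python
# Toy free functor F: Set -> Vect_Q on arrows. A vector in F(S) is a dict
# {basis element: coefficient}; F(f) pushes coefficients forward along f.
def F_on_map(f):
    def Ff(vec):
        out = {}
        for s, c in vec.items():
            t = f(s)
            out[t] = out.get(t, 0) + c
        return out
    return Ff

f = {'a': 'x', 'b': 'x', 'c': 'y'}.get  # a map of sets {a,b,c} -> {x,y}
v = {'a': 2, 'b': 3, 'c': 5}            # the vector 2a + 3b + 5c in F(S)
assert F_on_map(f)(v) == {'x': 5, 'y': 5}
```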
+
+\begin{remark}
+ There is also a notion of ``injective'' and ``surjective''
+ for functors (on arrows) as follows.
+ A functor $F \colon \AA \to \BB$ is \vocab{faithful}
+ (resp.\ \vocab{full}) if for any $A_1, A_2$,
+ $F \colon \Hom_\AA(A_1, A_2) \to \Hom_\BB(FA_1, FA_2)$
+ is injective (resp.\ surjective).\footnote{Again,
+ experts might object that $\Hom_\AA(A_1, A_2)$
+ or $\Hom_\BB(FA_1, FA_2)$ may be proper classes instead of sets,
+ but I am assuming everything is locally small.}
+
+ We can use this to give an exact definition of concrete category:
+ it's a category with a faithful (forgetful) functor
+ $U \colon \AA \to \catname{Set}$.
+\end{remark}
+
+\begin{example}
+ [Functors from $\mathcal G$]
+ Let $G$ be a group and $\mathcal G = \{\ast\}$ be the associated one-object category.
+ \begin{enumerate}[(a)]
+ \ii Consider a functor $F : \mathcal G \to \catname{Set}$, and let $S = F(\ast)$.
+ Then the data of $F$ corresponds to putting a \emph{group action} of $G$ on $S$.
+ \ii Consider a functor $F : \mathcal G \to \catname{FDVect}_k$, and let $V = F(\ast)$ have dimension $n$.
+ Then the data of $F$ corresponds to a homomorphism from $G$
+ to the group of invertible $n \times n$ matrices
+ (i.e.\ the linear automorphisms $V \to V$).
+ This is one way groups historically arose; the theory of viewing groups as matrices
+ forms the field of representation theory.
+ \ii Let $H$ be a group and construct $\mathcal H$ the same way.
+ Then functors $\mathcal G \to \mathcal H$ correspond to homomorphisms $G \to H$.
+ \end{enumerate}
+\end{example}
+\begin{exercise}
+ Check the above group-based functors work as advertised.
+\end{exercise}
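
As a concrete (toy, illustrative) instance of part (a): a functor
$\mathcal G \to \catname{Set}$ from the one-object category of
$G = \ZZ/3$ is exactly a group action, and the two functoriality
requirements become the two group-action axioms.

```python
# A functor from the one-object category of G = Z/3 to Set: F(*) = S, and
# F(g) acts by rotating within the blocks {0,1,2} and {3,4,5} of S.
G = range(3)                      # Z/3, with operation addition mod 3
S = [0, 1, 2, 3, 4, 5]            # F(*) = S

def act(g, s):                    # F(g): S -> S
    return (s + g) % 3 + 3 * (s // 3)

# Functoriality check 1: F(id) = id.
assert all(act(0, s) == s for s in S)
# Functoriality check 2: F(g . h) = F(g) composed with F(h).
for g in G:
    for h in G:
        assert all(act((g + h) % 3, s) == act(g, act(h, s)) for s in S)
```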
+
+Here's a more involved example.
+If you find it confusing,
+skip it and come back after reading about its contravariant version.
+\begin{example}
+ [Covariant Yoneda functor]
+ \label{ex:covariant_yoneda}
+ Fix an $A \in \AA$.
+ For a category $\AA$, define the
+ \vocab{covariant Yoneda functor} $H^A \colon \AA \to \catname{Set}$
+ by defining \[ H^A(A_1) \defeq \Hom_\AA (A, A_1) \in \catname{Set}. \]
+ Hence each $A_1$ is sent to the \emph{arrows from $A$ to $A_1$};
+ so \textbf{$H^A$ describes how $A$ sees the world}.
+
+ Now we want to specify how $H^A$ behaves on arrows.
+ For each arrow $A_1 \taking{f} A_2$, we need
+ to specify a $\catname{Set}$-map $\Hom_\AA (A, A_1) \to \Hom_\AA(A, A_2)$;
+ in other words, we need to send an arrow $A \taking{p} A_1$ to an arrow $A \to A_2$.
+ There's only one reasonable way to do this: take the composition
+ \[ A \taking{p} A_1 \taking{f} A_2. \]
+ In other words, $H^A(f)$ is $p \mapsto f \circ p$.
+ In still other words, $H^A(f) = f \circ -$;
+ the $-$ is a slot for the input to go into.
+\end{example}
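If it helps, the action of $H^A$ on arrows is literally post-composition, which a few lines of Python can mimic (a sketch with Python functions standing in for arrows of $\catname{Set}$; my own illustration, not part of the text):

```python
# H^A sends an arrow f : A1 -> A2 to the map "f o -" from
# Hom(A, A1) to Hom(A, A2): each p : A -> A1 goes to f o p : A -> A2.

def H_A_on_arrow(f):
    return lambda p: (lambda a: f(p(a)))

# Toy example with A = A1 = A2 = int:
p = lambda a: a + 1        # an arrow p : A -> A1
f = lambda x: 2 * x        # an arrow f : A1 -> A2
q = H_A_on_arrow(f)(p)     # q = f o p : A -> A2
print(q(3))  # 2 * (3 + 1) = 8
```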
+
+As another example:
+\begin{ques}
+ If $\mathcal P$ and $\mathcal Q$ are posets interpreted as categories,
+ what does a functor from $\mathcal P$ to $\mathcal Q$ represent?
+\end{ques}
+
+Now, let me explain why we might care.
+Consider the following ``obvious'' fact:
+if $G$ and $H$ are isomorphic groups, then they have the same size.
+We can formalize it by saying: if $G \cong H$ in $\catname{Grp}$
+and $U \colon \catname{Grp} \to \catname{Set}$ is the forgetful functor
+(mapping each group to its underlying set), then $U(G) \cong U(H)$.
+The beauty of category theory shows itself:
+this in fact works \emph{for any functors and categories},
+and the proof is done solely through arrows:
+
+\begin{theorem}
+ [Functors preserve isomorphism]
+ \label{thm:functor_isom}
+ If $A_1 \cong A_2$ are isomorphic objects in $\AA$
+ and $F : \AA \to \BB$ is a functor
+ then $F(A_1) \cong F(A_2)$.
+\end{theorem}
+\begin{proof}
+ Try it yourself! The picture is:
+ \begin{center}
+ \begin{tikzcd}
+ & A_1 \ar[dd, "f"', shift right, near start] && B_1 = F(A_1) \ar[dd, "F(f)"', near start, shift right] \\
+ \AA \ni & \quad \ar[rr, "F", dashed] & & \quad & \in \BB \\
+ & A_2 \ar[uu, "g"', shift right, near start] && B_2 = F(A_2) \ar[uu, "F(g)"', near start, shift right]
+ \end{tikzcd}
+ \end{center}
+ You'll need to use both key properties of functors:
+ they preserve composition and the identity map.
+\end{proof}
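For a concrete sanity check (my own illustration, using the ``lists over'' functor on $\catname{Set}$): if $f$ and $g$ are mutually inverse, functoriality forces $F(f)$ and $F(g)$ to be mutually inverse as well.

```python
# F is the "lists over" functor Set -> Set: F(X) = lists with entries in X,
# and F(f) maps f over a list.  If f o g = id and g o f = id, then
# F(f) o F(g) = F(id) = id, and likewise in the other order.

def F(f):
    return lambda xs: [f(x) for x in xs]

f = lambda n: n + 1    # an isomorphism (bijection) on the integers
g = lambda n: n - 1    # its inverse

xs = [1, 2, 3]
print(F(g)(F(f)(xs)) == xs and F(f)(F(g)(xs)) == xs)  # True
```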
+
+This will give us a great intuition in the future, because
+\begin{enumerate}[(i)]
+ \ii Almost every operation we do in our lifetime will be a functor, and
+ \ii We now know that functors take isomorphic objects to isomorphic objects.
+\end{enumerate}
+Thus, we now automatically know that basically any ``reasonable'' operation
+we do will preserve isomorphism (where ``reasonable'' means that it's a functor).
+This is super convenient in algebraic topology, for example;
+see \Cref{thm:fundgrp_functor}, where we get for free that homotopic
+spaces have isomorphic fundamental groups.
+
+\begin{remark}
+ This lets us construct a category $\catname{Cat}$
+ whose objects are categories and arrows are functors.
+\end{remark}
+% wef why is this here?
+%While I'm here:
+%\begin{ques}
+% Verify this works; figure out the identity and compositions.
+%\end{ques}
+
+\section{Contravariant functors}
+\prototype{Dual spaces, contravariant Yoneda functor, etc.}
+
+Now I have to explain what the opposite categories were doing earlier.
+In all the previous examples, we took an arrow $A_1 \to A_2$,
+and it became an arrow $F(A_1) \to F(A_2)$.
+Sometimes, however, the arrow in fact goes the other way:
+we get an arrow $F(A_2) \to F(A_1)$ instead.
+In other words, instead of just getting a functor $\AA \to \BB$
+we ended up with a functor $\AA\op \to \BB$.
+
+These functors have a name:
+\begin{definition}
+ A \vocab{contravariant functor} from $\AA$ to $\BB$
+ is a functor $F : \AA\op \to \BB$.
+ (Note that we do \emph{not} write ``contravariant functor $F: \AA \to \BB$'',
+ since that would be confusing; the function notation will always
+ use the correct domain and codomain.)
+\end{definition}
+Pictorially:
+\begin{center}
+\begin{tikzcd}
+ & A_1 \ar[dd, "f"'] && B_1 = F(A_1) \\
+ \AA \ni & \quad \ar[rr, "F", dashed] & & \quad & \in \BB \\
+ & A_2 && B_2 = F(A_2) \ar[uu, "F(f)"']
+\end{tikzcd}
+\end{center}
+For emphasis, a usual functor is often called a \vocab{covariant functor}.
+(The word ``functor'' with no adjective always refers to a covariant one.)
+
+Let's see why this might happen.
+\begin{example}[$V \mapsto V^\vee$ is contravariant]
+ Consider the functor $\catname{Vect}_k \to \catname{Vect}_k$ by $V \mapsto V^\vee$.
+
+ If we were trying to specify a covariant functor,
+ we would need, for every linear map $T \colon V_1 \to V_2$,
+ a linear map $T^\vee \colon V_1^\vee \to V_2^\vee$.
+ But recall that $V_1^\vee = \Hom(V_1, k)$ and $V_2^\vee = \Hom(V_2, k)$:
+ there's no easy way to get an obvious map from left to right.
+
+ However, there \emph{is} an obvious map from right to left:
+ given $\xi_2 \colon V_2 \to k$, we can easily give a map from $V_1 \to k$:
+ just compose with $T$!
+ In other words, there is a very natural map $V_2^\vee \to V_1^\vee$
+ according to the composition
+ \begin{center}
+ \begin{tikzcd}
+ V_1 \ar[r, "T"] & V_2 \ar[r, "\xi_2"] & k
+ \end{tikzcd}
+ \end{center}
+ In summary, a map $T : V_1 \to V_2$ induces naturally a map
+ $T^\vee \colon V_2^\vee \to V_1^\vee$ in the opposite direction.
+ So the contravariant functor looks like:
+ \begin{center}
+ \begin{tikzcd}
+ V_1 \ar[dd, "T"'] && V_1^\vee \\
+ \quad \ar[rr, "\bullet^\vee", dashed] & & \quad \\
+ V_2 && V_2^\vee \ar[uu, "T^\vee"']
+ \end{tikzcd}
+ \end{center}
+\end{example}
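The order reversal can even be checked mechanically. In this sketch (mine, with one-dimensional ``vector spaces'' so that linear maps are just scalings), the dual map is precomposition, and applying it to a composite flips the order:

```python
# The dual map T^v sends a functional xi to xi o T (precomposition).
# Contravariance: (S o T)^v = T^v o S^v.  We verify this pointwise.

def compose(f, g):
    return lambda x: f(g(x))

def dual(T):
    return lambda xi: (lambda v: xi(T(v)))

T = lambda v: 3 * v      # a linear map V1 -> V2
S = lambda v: 5 * v      # a linear map V2 -> V3
xi = lambda v: 2 * v     # a functional V3 -> k

lhs = dual(compose(S, T))(xi)          # (S o T)^v (xi)
rhs = compose(dual(T), dual(S))(xi)    # (T^v o S^v)(xi)
print(all(lhs(v) == rhs(v) for v in range(-5, 6)))  # True
```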
+
+%Contravariant functors come up in a lot in geometric applications.
+%Here's why.
+%If $X$ is a geometric object, we'll often consider
+%the \emph{set of functions} $X \taking\psi A$ for some particular $A$.
+%For example, if $V$ was a vector space, we could consider the functions $V \to k$,
+%giving the dual module $V^\vee$.
+%Or if $X$ was a space, we might consider the continuous
+%real functions $X \taking{p} \RR$.
+%As a non-geometric example: for a set $S$,
+%a function $S \to \{x,y\}$ corresponds to a subset of $S$.
+
+We can generalize the example above in any category by
+replacing the field $k$ with any chosen object $A \in \AA$.
+
+\begin{example}[Contravariant Yoneda functor]
+ The \vocab{contravariant Yoneda functor} on $\AA$,
+ denoted $H_A : \AA\op \to \catname{Set}$,
+ is used to describe how objects of $\AA$ see $A$.
+ For each $X \in \AA$ it puts \[ H_A(X) \defeq \Hom_{\AA}(X, A) \in \catname{Set}. \]
+ For $X \taking{f} Y$ in $\AA$,
+ the map $H_A(f)$ sends each arrow $Y \taking{p} A \in \Hom_\AA(Y,A)$ to
+ \[ X \taking{f} Y \taking{p} A \quad \in \Hom_\AA(X,A) \]
+ as we did above.
+ Thus $H_A(f)$ is an arrow $\Hom_\AA(Y,A) \to \Hom_\AA(X,A)$.
+ (Note the flipping!)
+\end{example}
+
+\begin{exercise}
+ Check now the claim that $\AA\op \times \AA \to \catname{Set}$
+ by $(A_1, A_2) \mapsto \Hom(A_1, A_2)$ is in fact a functor.
+\end{exercise}
+
+\section{Equivalence of categories}
+\todo{fully faithful and essentially surjective}
+
+\section{(Optional) Natural transformations}
+We made categories to keep track of objects and maps, then went a little crazy and asked
+``what are the maps between categories?'' to get functors.
+Now we'll ask ``what are the maps between functors?'' to get natural transformations.
+
+It might sound terrifying that we're drawing arrows between functors, but this is actually an old idea.
+Recall that given two paths $\alpha, \beta : [0,1] \to X$,
+we built a path-homotopy by ``continuously deforming'' the path $\alpha$ to $\beta$;
+this could be viewed as a function $[0,1] \times [0,1] \to X$.
+The definition of a natural transformation is similar: we want to pull $F$ to $G$
+along a series of arrows in the target space $\BB$.
+
+\begin{definition}
+ Let $F, G : \AA \to \BB$ be two functors.
+ A \vocab{natural transformation} $\alpha$ from $F$ to $G$, denoted
+ \[ \nattfm{\AA}{F}{\alpha}{G}{\BB} \]
+ consists of, for each $A \in \AA$ an arrow $\alpha_A \in \Hom_\BB(F(A), G(A))$, which is
+ called the \vocab{component} of $\alpha$ at $A$.
+ Pictorially, it looks like this:
+ \begin{center}
+ \begin{tikzcd}
+ & F(A) \in \BB \ar[dd, "\alpha_A"] \\
+ \AA \ni A \ar[ru, dashed, "F"] \ar[rd, dashed, "G"'] & \\
+ & G(A) \in \BB
+ \end{tikzcd}
+ \end{center}
+ These $\alpha_A$ are subject to the ``naturality'' requirement that for any $A_1 \taking{f} A_2$,
+ the diagram
+ \begin{center}
+ \begin{tikzcd}
+ F(A_1) \ar[r, "F(f)"] \ar[d, "\alpha_{A_1}"'] & F(A_2) \ar[d, "\alpha_{A_2}"]\\
+ G(A_1) \ar[r, "G(f)"'] & G(A_2)
+ \end{tikzcd}
+ \end{center}
+ commutes.
+\end{definition}
+The arrow $\alpha_A$ represents the path that $F(A)$ takes to get to $G(A)$
+(just as in a path-homotopy from $\alpha$ to $\beta$
+each \emph{point} $\alpha(t)$ gets deformed to the \emph{point} $\beta(t)$ continuously).
+A picture might help: consider
+\begin{center}
+ \begin{asy}
+ size(14cm);
+ dotfactor *= 1.4;
+
+ path sparrow(pair X, pair Y) {
+ // Short for "spaced arrow"
+ return (0.9*X+0.1*Y)--(0.1*X+0.9*Y);
+ }
+
+ pair A1 = Drawing("A_1", dir(210), dir(225));
+ pair A2 = Drawing("A_2", origin, dir(90));
+ pair A3 = Drawing("A_3", dir(-30), dir(-45));
+ path f = Drawing(sparrow(A2, A1), EndArrow);
+ label("$f$", f, dir(90));
+ path g = Drawing(sparrow(A2, A3), EndArrow);
+ label("$g$", g, dir(90));
+ label("$\mathcal A$", 0.6*(A1+A3));
+
+ pen p = blue;
+ transform FF = shift( (3.5, 0.7) );
+ dot("$F(A_1)$", FF*A1, dir(225), p);
+ dot("$F(A_2)$", FF*A2, dir(90), p);
+ dot("$F(A_3)$", FF*A3, dir(-45), p);
+ draw(FF*f, p, EndArrow);
+ draw(FF*g, p, EndArrow);
+ label("$F(f)$", FF*f, dir(110), p);
+ label("$F(g)$", FF*g, dir(70), p);
+ draw(FF*f, p+1.4);
+ draw(FF*g, p+1.4);
+
+ p = deepcyan;
+ transform GG = shift( (3.5, -0.7) );
+ dot("$G(A_1)$", GG*A1, dir(225), p);
+ dot("$G(A_2)$", GG*A2, 3*dir(-90), p);
+ dot("$G(A_3)$", GG*A3, dir(-45), p);
+ label("$G(f)$", Drawing(GG*f, p, EndArrow), dir(110), p);
+ label("$G(g)$", Drawing(GG*g, p, EndArrow), dir(70), p);
+ draw(GG*f, p+1.4);
+ draw(GG*g, p+1.4);
+
+ p = lightred;
+ label("$\alpha_{A_1}$", Drawing(sparrow(FF*A1, GG*A1), p, EndArrow), dir(180), p);
+ label("$\alpha_{A_2}$", Drawing(sparrow(FF*A2, GG*A2), p, EndArrow), dir(180), p);
+ label("$\alpha_{A_3}$", Drawing(sparrow(FF*A3, GG*A3), p, EndArrow), dir(0), p);
+
+ p = magenta + dotted + 0.7;
+ path Fa = (0.5,0)--FF*(-1,-0.2);
+ path Ga = (0.5,-0.6)--GG*(-1,-0.4);
+ label("$F$", Drawing(Fa, p, EndArrow), dir(135), p);
+ label("$G$", Drawing(Ga, p, EndArrow), dir(225), p);
+
+ p = lightred + 0.7;
+ label("$\alpha$", Drawing(sparrow(midpoint(Fa), midpoint(Ga)), p, EndArrow), dir(180), p);
+
+ p = grey + dashed;
+ pair B1 = Drawing(midpoint(FF*A2--GG*A1), p);
+ pair B2 = Drawing(0.6 * (FF*A3) + 0.4 * (GG*A2), p);
+ draw(sparrow(FF*A1, B1), p, EndArrow);
+ draw(sparrow(GG*A2, B1), p, EndArrow);
+ draw(sparrow(FF*A3, B2), p, EndArrow);
+ pair B3 = Drawing(FF*A3 + 0.7*dir(100), p);
+ draw(sparrow(B3, FF*A3), p, EndArrow);
+ label("$\mathcal B$", GG*(0.6*(A1+A3)));
+ draw(sparrow(FF*A2, GG*A3), p, EndArrow);
+ pair B4 = Drawing(FF*A1 + 0.5*dir(90), p);
+ draw(sparrow(FF*A1, B4), p, EndArrow);
+ \end{asy}
+\end{center}
+Here $\AA$ is the small category with three objects and two non-identity arrows $f$, $g$
+(I've omitted the identity arrows for simplicity).
+The images of $\AA$ under $F$ and $G$ are the blue and green ``subcategories'' of $\BB$.
+Note that $\BB$ could potentially have many more objects and arrows in it (grey).
+The natural transformation $\alpha$ (red) selects an arrow of $\BB$ from each $F(A)$
+to the corresponding $G(A)$, dragging the entire image of $F$ to the image of $G$.
+Finally, we require that any diagram formed by the blue, red, and green arrows is commutative (naturality),
+so the natural transformation is really ``natural''.
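For a hands-on instance of the naturality square (a standard example I am adding, not tied to the picture above): let $F = G$ be the ``lists over'' functor on $\catname{Set}$ and let each component $\alpha_A$ reverse a list. Naturality says that mapping a function over a list and reversing it can be done in either order:

```python
# F = G : Set -> Set takes a set A to lists over A; F(f) maps f over a list.
# alpha_A : F(A) -> G(A) reverses a list.  Naturality requires
#     G(f) o alpha_A == alpha_B o F(f)   for every f : A -> B.

def F(f):
    return lambda xs: [f(x) for x in xs]

def alpha(xs):
    return list(reversed(xs))

f = lambda n: n * n
xs = [1, 2, 3, 4]
print(F(f)(alpha(xs)) == alpha(F(f)(xs)))  # True: both give [16, 9, 4, 1]
```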
+
+There is a second equivalent definition that looks much more like the homotopy.
+\begin{definition}
+ Let $\mathbf 2$ denote the category generated by a poset with two elements $0 \le 1$, that is,
+ \begin{center}
+ \begin{tikzpicture}[scale=2]
+ \SetVertexMath
+ \Vertices{circle}{1,0}
+ \Edge[style={->}, label={$0 \le 1$}](0)(1)
+ \Loop[dist=12, dir=NO, label={$\id_0$}, labelstyle={above=1pt}](0)
+ \Loop[dist=12, dir=NO, label={$\id_1$}, labelstyle={above=1pt}](1)
+ \end{tikzpicture}
+ \end{center}
+ Then a \emph{natural transformation}
+ $ \nattfm{\AA}{F}{\alpha}{G}{\BB} $
+ is just a functor $\alpha : \AA \times \mathbf 2 \to \BB$ satisfying
+ \[ \alpha(A,0) = F(A), \;\; \alpha(f,0) = F(f)
+ \quad\text{and}\quad
+ \alpha(A,1) = G(A), \;\; \alpha(f,1) = G(f). \]
+ More succinctly, $\alpha(-,0) = F$, $\alpha(-,1) = G$.
+\end{definition}
+The proof that these are equivalent is left as a practice problem.
+
+Naturally, two natural transformations $\alpha : F \to G$ and $\beta : G \to H$ can be composed.
+\begin{center}
+\begin{tikzcd}
+ & F(A) \ar[d, "\alpha_A"] \\
+ \AA \ni A \ar[ru, dashed, "F"] \ar[r, dashed, "G"] \ar[rd, dashed, "H"'] & G(A) \ar[d, "\beta_A"] \\
+ & H(A)
+\end{tikzcd}
+\end{center}
+Now suppose $\alpha$ is a natural transformation such that $\alpha_A$ is an isomorphism for each $A$.
+In that case, for each $A$ we can take the inverse arrow $\beta_A = \alpha_A^{-1}$.
+\begin{center}
+\begin{tikzcd}
+ & F(A) \in \BB \ar[dd, "\alpha_A"', shift right] \\
+ \AA \ni A \ar[ru, dashed, "F"] \ar[rd, dashed, "G"'] & \\
+ & G(A) \in \BB \ar[uu, "\beta_A"', shift right]
+\end{tikzcd}
+\end{center}
+In this case, we say $\alpha$ is a \vocab{natural isomorphism}.
+We can then say that $F(A) \cong G(A)$ \vocab{naturally} in $A$.
+(The $\beta_A$ assemble into a natural transformation $\beta$, which is a natural isomorphism too!)
+This means that the functors $F$ and $G$ are ``really the same'':
+not only are they isomorphic on the level of objects,
+but these isomorphisms are ``natural''.
+As a result of this, we also write $F \cong G$ to mean
+that the functors are naturally isomorphic.
+
+This is what it really means when we say that
+``there is a natural / canonical isomorphism''.
+For example, I claimed earlier (in \Cref{prob:double_dual})
+that there was a canonical isomorphism $(V^\vee)^\vee \cong V$,
+and mumbled something about ``not having to pick a basis'' and ``God-given''.
+Category theory, amazingly, lets us formalize this:
+it just says that $(V^\vee)^\vee \cong V$ naturally in $V \in \catname{FDVect}_k$.
+Really, we have a natural transformation
+\[ \nattfm{\catname{FDVect}_k}{\id}{\eps}{(\bullet^\vee)^\vee}{\catname{FDVect}_k}. \]
+where the component $\eps_V$ is given by $v \mapsto \opname{ev}_v$
+(as discussed earlier,
+the fact that it is an isomorphism follows from the fact that $V$ and $(V^\vee)^\vee$
+have equal dimensions and $\eps_V$ is injective).
+
+\section{(Optional) The Yoneda lemma}
+Now that I have natural transformations, I can define:
+\begin{definition}
+ The \vocab{functor category} of two categories $\AA$ and $\BB$,
+ denoted $[\AA, \BB]$, is defined as follows:
+ \begin{itemize}
+ \ii The objects of $[\AA, \BB]$ are (covariant) functors $F : \AA \to \BB$, and
+ \ii The morphisms are natural transformations $\alpha : F \to G$.
+ \end{itemize}
+\end{definition}
+\begin{ques}
+ When are two objects in the functor category isomorphic?
+\end{ques}
+
+With this, I can make good on the last example I mentioned at the beginning:
+\begin{exercise}
+ Construct the following functors:
+ \begin{itemize}
+ \ii $\AA \to [\AA\op, \catname{Set}]$ by $A \mapsto H_A$, which we call $H_\bullet$.
+ \ii $\AA\op \to [\AA, \catname{Set}]$ by $A \mapsto H^A$, which we call $H^\bullet$.
+ \end{itemize}
+\end{exercise}
+Notice that we have opposite categories either way; even if you like $H^A$ because it is covariant,
+the map $H^\bullet$ is contravariant.
+So for what follows, we'll prefer to use $H_\bullet$.
+
+The main observation now is that given a category $\AA$, $H_\bullet$ provides some \emph{special}
+functors $\AA\op \to \catname{Set}$ which are already ``built into'' the category $\AA$.
+In light of this, we define:
+\begin{definition}
+ A \vocab{presheaf} $X$ on $\AA$ is just a functor $\AA\op \to \catname{Set}$,
+ i.e.\ a contravariant functor on $\AA$.
+ It is called \vocab{representable} if $X \cong H_A$ for some $A$.
+\end{definition}
+In other words, when we think about representable, the question we're asking is:
+\begin{quote}
+ \itshape
+ What kind of presheaves are already ``built in'' to the category $\AA$?
+\end{quote}
+One way to get at this question is: given a presheaf $X$ and a particular $H_A$,
+we can look at the \emph{set} of natural transformations $\alpha : H_A \implies X$,
+and see if we can learn anything about it.
+In fact, this set can be written explicitly:
+
+\begin{theorem}
+ [Yoneda lemma]
+ \label{thm:yoneda}
+ Let $\AA$ be a category,
+ pick $A \in \AA$, and let $H_A$ be the contravariant Yoneda functor.
+ Let $X : \AA\op \to \catname{Set}$ be a functor (that is, a presheaf on $\AA$).
+ Then the map
+ \[ \left\{ \text{Natural transformations }
+ \nattfm{\AA\op}{H_A}{\alpha}{X}{\catname{Set}} \right\}
+ \to X(A) \]
+ defined by $\alpha \mapsto \alpha_A(\id_A) \in X(A)$
+ is an isomorphism in $\catname{Set}$ (i.e.\ a bijection).
+ Moreover, if we view both sides of this bijection as functors
+ \[ \AA\op \times [\AA\op, \catname{Set}] \to \catname{Set} \]
+ then this isomorphism is natural.
+\end{theorem}
+
+This might be startling at first sight.
+Here's an unsatisfying explanation why this might not be too crazy:
+in category theory, a rule of thumb is that ``two objects of the same type
+that are built naturally are probably the same''.
+You can see this theme when we defined functors and natural transformations,
+and even just compositions.
+Now to look at the set of natural transformations, we took a pair of elements $A \in \AA$
+and $X \in [\AA\op, \catname{Set}]$
+and constructed a \emph{set} of natural transformations.
+Is there another way we can get a set from these two pieces of information?
+Yes: just look at $X(A)$.
+The Yoneda lemma is telling us that our heuristic still holds true here.
+
+Some consequences of the Yoneda lemma are recorded in \cite{ref:msci}.
+Since this chapter is already a bit too long, I'll just write down the statements,
+and refer you to \cite{ref:msci} for the proofs.
+
+\begin{enumerate}
+ \ii As we mentioned before, $H^\bullet$ provides a functor
+ \[ \AA \to [\AA\op, \catname{Set}]. \]
+ It turns out this functor is in fact \emph{fully faithful};
+ it quite literally embeds the category $\AA$ into the functor category on the right
+ (much like Cayley's theorem embeds every group into a permutation group).
+
+ \ii If $X, Y \in \AA$ then
+ \[ H_X \cong H_Y \iff X \cong Y \iff H^X \cong H^Y. \]
+ To see why this is expected, consider $\AA = \catname{Grp}$ for concreteness.
+ Suppose $X$ and $Y$ are groups such that $H_X(A) \cong H_Y(A)$ for every group $A$.
+ For example,
+ \begin{itemize}
+ \ii If $A = \ZZ$, then $\left\lvert X \right\rvert = \left\lvert Y \right\rvert$.
+ \ii If $A = \ZZ / 2\ZZ$, then $X$ and $Y$ have the same number of elements of order $2$.
+ \ii \dots
+ \end{itemize}
+ Each $A$ gives us some information on how $X$ and $Y$ are similar,
+ but the whole natural isomorphism is strong enough to imply $X \cong Y$.
+
+ \ii Consider the forgetful functor $U : \catname{Grp} \to \catname{Set}$.
+ It can be represented by $H^\ZZ$, in the sense that
+ \[ \Hom_{\catname{Grp}}(\ZZ, G) \cong U(G)
+ \qquad\text{ by }\qquad \phi \mapsto \phi(1). \]
+ That is, elements of $G$ are in bijection with maps $\ZZ \to G$,
+ determined by the image of $+1$ (or $-1$ if you prefer).
+ So a representation of $U$ was determined by looking at $\ZZ$ and picking $+1 \in U(\ZZ)$.
+
+ The generalization of this is as follows: let $\AA$ be a category
+ and $X : \AA \to \catname{Set}$ a covariant functor.
+ Then a representation $H^A \cong X$ consists of an object $A \in \AA$ and
+ an element $u \in X(A)$ satisfying a certain condition.
+ You can read this off the condition\footnote{%
+ Just for completeness, the condition is:
+ For all $A' \in \AA$ and $x \in X(A')$, there's a unique $f : A \to A'$ with $(Xf)(u) = x$.
+ } if you know what the inverse map is in \Cref{thm:yoneda}.
+ In the above situation, $X = U$, $A = \ZZ$ and $u = \pm 1$.
+\end{enumerate}
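The third item can be poked at computationally. In this sketch (mine, with $G = \ZZ/4$ under addition), a homomorphism $\phi : \ZZ \to G$ is forced to be $k \mapsto k \cdot g$ where $g = \phi(1)$, and every $g$ arises this way:

```python
# Hom(Z, G) <-> U(G) for G = Z/4 under addition mod 4.
# phi(1) = g determines phi completely: phi(k) = k * g mod 4.

n = 4

def phi(g):
    """The homomorphism Z -> Z/n sending 1 to g."""
    return lambda k: (k * g) % n

# Each phi(g) is a homomorphism (spot-checked on a window of Z) ...
hom_ok = all(
    phi(g)(a + b) == (phi(g)(a) + phi(g)(b)) % n
    for g in range(n) for a in range(-6, 7) for b in range(-6, 7)
)
# ... and phi -> phi(1) recovers g, so the correspondence is a bijection.
bijection_ok = all(phi(g)(1) == g for g in range(n))
print(hom_ok, bijection_ok)  # True True
```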
+
+\section\problemhead
+
+\begin{problem}
+ Show that the two definitions of natural transformation
+ (one in terms of $\AA \times \mathbf 2 \to \BB$
+ and one in terms of arrows $F(A) \taking{\alpha_A} G(A)$)
+ are equivalent.
+ \begin{hint}
+ The category $\AA \times \mathbf 2$ has ``redundant arrows''.
+ \end{hint}
+ \begin{sol}
+ The main observation is that in $\AA \times \mathbf 2$,
+ you have the arrows in $\AA$ (of the form $(f, \id_{\mathbf 2})$),
+ and then the arrows crossing the two copies of $\AA$ (of the form $(\id_A, 0 \le 1)$).
+ But there are some more arrows $(f, 0 \le 1)$: nonetheless, they can be thought of as compositions
+ \[ (f, 0 \le 1) = (f, \id_{\mathbf 2}) \circ (\id_A, 0 \le 1) = (\id_A, 0 \le 1) \circ (f, \id_{\mathbf 2}). \]
+ Now if we want to specify a functor $\alpha : \AA \times \mathbf 2 \to \BB$, we only have to specify
+ where each of these two more basic kinds of arrows goes.
+ The conditions on $\alpha$ already tells us that $(f, \id_{\mathbf 2})$ should be mapped to $F(f)$ or $G(f)$
+ (depending on whether the arrow above is in $\AA \times \{0\}$ or $\AA \times \{1\}$),
+ and specifying the arrow $(\id_A, 0 \le 1)$ amounts to specifying the $A$th component.
+ Where does naturality come in?
+
+ The above discussion transfers to products of categories in general:
+ you really only have to think about $(f, \id)$ and $(\id, g)$ arrows
+ to get the general arrow $(f,g) = (f, \id) \circ (\id, g) = (\id, g) \circ (f, \id)$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Let $\AA$ be the category of finite sets whose arrows are bijections between sets.
+ For $A \in \AA$,
+ let $F(A)$ be the set of \emph{permutations} of $A$ and
+ let $G(A)$ be the set of \emph{orderings} on $A$.\footnote{
+ A permutation is a bijection $A \to A$,
+ and an ordering is a bijection $\{1, \dots, n\} \to A$,
+ where $n$ is the size of $A$.}
+ \begin{enumerate}[(a)]
+ \ii Extend $F$ and $G$ to functors $\AA \to \catname{Set}$.
+ \ii Show that $F(A) \cong G(A)$ for every $A$, but this isomorphism is \emph{not} natural.
+ \end{enumerate}
+\end{problem}
+
+
+
+\begin{problem}
+ [Proving the Yoneda lemma]
+ In the context of \Cref{thm:yoneda}:
+ \begin{enumerate}[(a)]
+ \ii Prove that the map described is in fact a bijection.
+ (To do this, you will probably have to explicitly write down the inverse map.)
+
+ \ii \yod Prove that the bijection is indeed natural.
+ (This is long-winded, but not difficult; from start to finish,
+ there is only one thing you can possibly do.)
+ \end{enumerate}
+\end{problem}
+
+% The bijection is defined as follows:
+% \begin{itemize}
+% \ii For $\alpha$ on the right-hand side, we take the element $\alpha_A(\id_A) \in X(A)$.
+% \ii For $x \in X(A)$, its image in the right-hand side is the $\alpha$
+% with $\alpha_{A'} : H_A(A') \to X(A')$ by $(A' \taking f A) \mapsto (Xf)(x)$.
+% \end{itemize}
diff --git a/books/napkin/fundamental-group.tex b/books/napkin/fundamental-group.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5fd0012765a0e107f50fa918ce8d90c2fe55525d
--- /dev/null
+++ b/books/napkin/fundamental-group.tex
@@ -0,0 +1,686 @@
+\chapter{Fundamental groups}
+Topologists can't tell the difference between a coffee cup and a doughnut.
+So how do you tell \emph{anything} apart?
+
+This is a very hard question to answer, but one way we can
+try to answer it is to find some \emph{invariants} of the space.
+To draw on the group analogy, two groups are clearly not isomorphic if,
+say, they have different orders, or if one is simple and the other isn't, etc.
+We'd like to find some similar properties for topological spaces
+so that we can actually tell them apart.
+
+Two such invariants for a space $X$ are
+\begin{itemize}
+ \ii The homology groups $H_1(X)$, $H_2(X)$, \dots
+ \ii The homotopy groups $\pi_1(X)$, $\pi_2(X)$, \dots
+\end{itemize}
+Homology groups are hard to define, but in general easier to compute.
+Homotopy groups are easier to define but harder to compute.
+
+This chapter is about the fundamental group $\pi_1$.
+
+
+\section{Fusing paths together}
+Recall that a \emph{path} in a space $X$ is a function $[0,1] \to X$.
+Suppose we have paths $\gamma_1$ and $\gamma_2$
+such that $\gamma_1(1) = \gamma_2(0)$.
+We'd like to fuse\footnote{%
+ Almost everyone else in the world uses ``gluing'' to describe this
+ and other types of constructs.
+ But I was traumatized by Elmer's glue when I was in high school
+ because I hated the stupid ``make a poster'' projects and hated
+ having to use glue on them.
+ So I refuse to talk about ``gluing'' paths together, referring
+ instead to ``fusing'' them together, which sounds cooler anyways.
+} them together to get a path $\gamma_1 \ast \gamma_2$. Easy, right?
+
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ bigblob("$X$");
+ pair A = Drawing("\gamma_1(0)", (-3,-1));
+ pair B = Drawing("\gamma_1(1) = \gamma_2(0)", (1,1), dir(90));
+ pair C = Drawing("\gamma_2(1)", (2,-2), dir(-90));
+ path p = A..(-2,0)..(0,0.5)..B;
+ path q = B..(1.8,-0.5)..C;
+ draw(p, red, EndArrow);
+ draw(q, blue, EndArrow);
+ MP("\gamma_1", midpoint(p), dir(90));
+ MP("\gamma_2", midpoint(q), dir(0));
+ \end{asy}
+\end{center}
+
+We unfortunately do have to hack the definition a tiny bit.
+In an ideal world, we'd have a path $\gamma_1 : [0,1] \to X$ and $\gamma_2 : [1,2] \to X$
+and we could just merge them together to get $\gamma_1 \ast \gamma_2 : [0,2] \to X$.
+But the domain $[0,2]$ is wrong here: a path needs to be defined on $[0,1]$.
+The solution is that we allocate $[0, \half]$ for the first path
+and $[\half, 1]$ for the second path; we run ``twice as fast''.
+
+\begin{definition}
+ Given two paths $\gamma_1, \gamma_2 : [0,1] \to X$
+ such that $\gamma_1(1) = \gamma_2(0)$, we define
+ a path $\gamma_1 \ast \gamma_2 : [0,1] \to X$ by
+ \[
+ (\gamma_1 \ast \gamma_2)(t)
+ =
+ \begin{cases}
+ \gamma_1(2t) & 0 \le t \le \half \\
+ \gamma_2(2t-1) & \half \le t \le 1.
+ \end{cases}
+ \]
+\end{definition}
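The definition translates directly into code; here is a sketch (mine, with paths as Python functions on $[0,1]$ and $X = \RR$):

```python
# Concatenation runs gamma1 on [0, 1/2] and gamma2 on [1/2, 1],
# each at double speed.

def concat(gamma1, gamma2):
    def path(t):
        if t <= 0.5:
            return gamma1(2 * t)
        return gamma2(2 * t - 1)
    return path

# Toy example in X = R: a path from 0 to 1, then a path from 1 to 3.
g1 = lambda t: t           # g1(0) = 0,         g1(1) = 1
g2 = lambda t: 1 + 2 * t   # g2(0) = 1 = g1(1), g2(1) = 3
g = concat(g1, g2)
print(g(0), g(0.5), g(1))  # starts at 0, hits 1 halfway, ends at 3
```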
+
+This hack unfortunately reveals a second shortcoming: this ``product'' is not associative.
+If we take $(\gamma_1 \ast \gamma_2) \ast \gamma_3$ for some suitable paths,
+then $[0, \frac14]$, $[\frac14, \frac12]$ and $[\frac12, 1]$
+are the times allocated for $\gamma_1$, $\gamma_2$, $\gamma_3$.
+\begin{ques}
+ What are the times allocated
+ for $\gamma_1 \ast (\gamma_2 \ast \gamma_3)$?
+\end{ques}
+But I hope you'll agree that even though this operation isn't associative,
+the reason it fails to be associative is kind of stupid.
+It's just a matter of how fast we run in certain parts.
+
+\begin{center}
+ \begin{asy}
+ unitsize(6cm);
+ Drawing( unitsquare);
+ MP("0", (0,0), S);
+ MP("1", (1,0), S);
+ MP("\frac{1}{4}", (1/4, 0), S);
+ MP("\frac{1}{2}", (1/2, 0), S);
+ MP("0", (0,1), N);
+ MP("1", (1,1), N);
+ MP("\frac{3}{4}", (3/4, 1), N);
+ MP("\frac{1}{2}", (1/2, 1), N);
+ MP("\gamma_1", (1/8, 0), N);
+ MP("\gamma_2", (3/8, 0), N);
+ MP("\gamma_3", (3/4, 0), N);
+ MP("\gamma_1", (1/4, 1), S);
+ MP("\gamma_2", (5/8, 1), S);
+ MP("\gamma_3", (7/8, 1), S);
+
+ MP("\boxed{\gamma_1 \ast \left( \gamma_2 \ast \gamma_3 \right)}", (0.5,1.2), origin);
+ MP("\boxed{\left( \gamma_1 \ast \gamma_2 \right) \ast \gamma_3}", (0.5,-0.2), origin);
+
+ Drawing((1/4,0)--(1/2,1), blue);
+ Drawing((1/2,0)--(3/4,1), blue);
+ Drawing( (1/2,0)--(1/2,1), dotted);
+ \end{asy}
+\end{center}
+
+So as long as we're fusing paths together,
+we probably don't want to think of $[0,1]$ itself too seriously.
+And so we only consider everything up to (path) homotopy.
+(Recall that two paths $\alpha$ and $\beta$ are homotopic if
+there's a path homotopy $F : [0,1]^2 \to X$ between them,
+which is a continuous deformation from $\alpha$ to $\beta$.)
+It is definitely true that
+\[
+ \left( \gamma_1 \ast \gamma_2 \right) \ast \gamma_3
+ \simeq
+ \gamma_1 \ast \left( \gamma_2 \ast \gamma_3 \right) . \]
+It is also true that if $\alpha_1 \simeq \alpha_2$ and $\beta_1 \simeq \beta_2$
+then $\alpha_1 \ast \beta_1 \simeq \alpha_2 \ast \beta_2$.
+
+Naturally, homotopy is an equivalence relation,
+so each path $\gamma$ lives in some ``homotopy class'',
+its equivalence class under $\simeq$. We'll denote this $[\gamma]$.
+Then it makes sense to talk about $[\alpha] \ast [\beta]$.
+Thus, \textbf{we can think of $\ast$ as an operation on homotopy classes}.
+
+
+\section{Fundamental groups}
+\prototype{$\pi_1(\RR^2)$ is trivial and $\pi_1(S^1) \cong \ZZ$.}
+
+At this point I'm a little annoyed at keeping track of endpoints,
+so now I'm going to specialize to a certain type of path.
+\begin{definition}
+ A \vocab{loop} is a path with $\gamma(0) = \gamma(1)$.
+\end{definition}
+\begin{center}
+ \begin{asy}
+ bigblob("$X$");
+ pair A = Drawing("x_0", (-1,0), dir(100));
+ path p = A..(1,1)..(2,0)..(0.5,-1)..(-1.5,-0.5)..cycle;
+ draw(p, blue, EndArrow);
+ MP("\gamma", midpoint(p), dir(-20));
+ \end{asy}
+\end{center}
+
+Hence if we restrict our attention to paths starting at a single point $x_0$,
+then we can stop caring about endpoints and start-points, since
+everything starts and stops at $x_0$.
+We even have a very canonical loop: the ``do-nothing'' loop\footnote{Fatty.} given by standing at $x_0$ the whole time.
+
+\begin{definition}
+ Denote the trivial ``do-nothing loop'' by $1$.
+ A loop $\gamma$ is \vocab{nulhomotopic} if it is homotopic to $1$; i.e.\ $\gamma \simeq 1$.
+\end{definition}
+
+For homotopy of loops, you might visualize ``reeling in'' the loop, contracting it to a single point.
+
+\begin{example}[Loops in $S^2$ are nulhomotopic]
+ As the following picture should convince you, every loop in
+ the simply connected space $S^2$ is nulhomotopic.
+ \begin{center}
+ \includegraphics[width=6cm]{media/S2-simply.png}
+ \end{center}
+ (Starting with the purple loop, we contract to the red-brown point.)
+\end{example}
+
+Hence to show that spaces are simply connected it suffices to understand
+the loops of that space.
+We are now ready to provide:
+\begin{definition}
+ The \vocab{fundamental group} of $X$ with basepoint $x_0$,
+ denoted $\pi_1(X, x_0)$, is the set of homotopy classes
+ \[ \left\{ [\gamma] \mid \gamma \text{ a loop at $x_0$} \right\} \]
+ equipped with $\ast$ as a group operation.
+\end{definition}
+
+It might come as a surprise that this has a group structure.
+For example, what is the inverse?
+Let's define it now.
+\begin{definition}
+ Given a path $\alpha : [0,1] \to X$ we can define a path $\ol\alpha$ by
+ \[ \ol\alpha (t) = \alpha(1-t). \]
+ In effect, this ``runs $\alpha$ backwards''.
+ Note that $\ol\alpha$ starts at the endpoint of $\alpha$
+ and ends at the starting point of $\alpha$.
+\end{definition}
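In the same paths-as-functions spirit, running a path backwards is one line (again just an illustration of mine):

```python
# alpha_bar(t) = alpha(1 - t): the reversed path swaps start and end.

def backwards(alpha):
    return lambda t: alpha(1 - t)

a = lambda t: (t, t * t)   # a toy path in X = R^2
a_bar = backwards(a)
print(a_bar(0) == a(1), a_bar(1) == a(0))  # True True
```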
+\begin{exercise}
+ Show that for any path $\alpha$,
+ $\alpha \ast \ol\alpha$ is homotopic
+ to the ``do-nothing'' loop at $\alpha(0)$.
+ (Draw a picture.)
+\end{exercise}
+
+Let's check it.
+\begin{proof}
+ [Proof that this is a group structure]
+ Clearly $\ast$ takes two loops at $x_0$ and spits out a loop at $x_0$.
+ We also already took the time to show that $\ast$ is associative.
+ So we only have to check that (i) there's an identity, and (ii)
+ there's an inverse.
+ \begin{itemize}
+ \ii We claim that the identity is the ``do-nothing'' loop $1$
+ we described above. The reader can check that for any $\gamma$,
+ \[ \gamma \simeq \gamma \ast 1 \simeq 1 \ast \gamma. \]
+ \ii For a loop $\gamma$, recall again we define its ``backwards'' loop $\ol\gamma$ by
+ \[ \ol\gamma(t) = \gamma(1-t). \]
+ Then we have $\gamma \ast \ol\gamma \simeq \ol\gamma \ast \gamma \simeq 1$.
+ \end{itemize}
+ Hence $\pi_1(X,x_0)$ is actually a group.
+\end{proof}
+
+Before going any further I had better give some examples.
+\begin{example}
+ [Examples of fundamental groups]
+ Note that proving the following results is not at all trivial.
+ For now, just try to see intuitively why the claimed answer ``should'' be correct.
+ \begin{enumerate}[(a)]
+ \ii The fundamental group of $\CC$ is the
+ trivial group: in the plane, every loop is nulhomotopic.
+ (Proof: imagine it's a piece of rope and reel it in.)
+ \ii On the other hand, the fundamental group of $\CC - \{0\}$
+ (meteor example from earlier) with any base point is actually $\ZZ$!
+ We won't be able to prove this for a while,
+ but essentially a loop is determined by the number of times
+ that it winds around the origin -- these are so-called
+ \emph{winding numbers}. Think about it!
+ \ii Similarly, we will soon show that the fundamental group of $S^1$
+ (the boundary of the unit disk) is $\ZZ$.
+ \end{enumerate}
+ Officially, I also have to tell you what the base point is, but
+ by symmetry in these examples, it doesn't matter.
+\end{example}
+Here is the picture for $\CC \setminus \{0\}$, with the hole exaggerated
+as the meteor from \Cref{sec:meteor}.
+\begin{center}
+ \begin{asy}
+ size(6cm);
+ bigbox("$\mathbb C \setminus \{0\}$");
+ filldraw(scale(0.5)*unitcircle, grey, black);
+ dot("$x_0$", (1.4,0), dir(0));
+ draw( (1.4,0)..(0,1.4)..(-1.4,0)..(1.4*dir(-30))..cycle, blue, EndArrow );
+ \end{asy}
+\end{center}
+
+\begin{ques}
+ Convince yourself that the fundamental group of $S^1$ is $\ZZ$,
+ and understand why we call these ``winding numbers''.
+ (This will be the most important example of a fundamental group
+ in later chapters, so it's crucial you figure it out now.)
+\end{ques}
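To make the winding-number intuition concrete, here is a quick numerical sketch (a throwaway helper of my own, not part of the formal development): sample a loop in $\CC \setminus \{0\}$ finely, add up the signed angles between consecutive samples, and divide by $2\pi$.

```python
import cmath

def winding_number(samples):
    """Winding number of a finely sampled loop around 0, assuming
    consecutive samples differ in angle by less than pi."""
    total = 0.0
    for z0, z1 in zip(samples, samples[1:] + samples[:1]):
        total += cmath.phase(z1 / z0)  # signed angle from z0 to z1
    return round(total / (2 * cmath.pi))

# A loop that wraps around the origin twice:
double_loop = [cmath.exp(2j * cmath.pi * 2 * k / 100) for k in range(100)]
print(winding_number(double_loop))  # 2
```

Reversing the loop negates the answer, matching the inverse $\ol\gamma$ in $\pi_1$.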
+
+\begin{example}
+ [The figure eight]
+ \label{ex:figure8}
+ Consider a figure eight $S^1 \vee S^1$, and let $x_0$
+ be the center.
+ Then \[\pi_1(S^1 \vee S^1, x_0) \cong \left\langle a, b \right\rangle \]
+ is the \emph{free group} generated on two letters.
+ The idea is that one loop of the eight is $a$,
+ and the other loop is $b$, so we expect $\pi_1$
+ to be generated by these loops $a$ and $b$ (and their inverses
+ $\ol a$ and $\ol b$).
+ These loops don't talk to each other.
+ \begin{center}
+ \begin{asy}
+ draw( shift( (1,0) ) * unitcircle, grey + 5 );
+ draw( shift( (-1,0) ) * unitcircle, grey + 5 );
+ dot(origin);
+ path g = dir(20)..dir(100)..dir(180)..dir(260)..dir(340);
+ draw( shift( (1,0) ) * scale(0.8) * reflect(dir(90),dir(-90)) * g, blue, EndArrow );
+ draw( shift( (-1,0) ) * scale(0.8) * g, red, EndArrow );
+ label("$a$", (-1.6,0), dir(0));
+ label("$b$", (1.6,0), dir(180));
+ \end{asy}
+ \end{center}
+\end{example}
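To get a feel for the free group $\left\langle a, b \right\rangle$, here is a small sketch in code (the helper and its conventions are my own): represent a word as a string in which uppercase letters denote inverses, and freely reduce by cancelling adjacent inverse pairs. Two elements are equal in the free group exactly when their reduced words agree, so in particular $a$ and $b$ really ``don't talk to each other''.

```python
def reduce_word(word):
    """Freely reduce a word in the free group: 'A' means a^{-1},
    so adjacent pairs like 'aA' or 'Bb' cancel."""
    out = []
    for ch in word:
        if out and out[-1] == ch.swapcase():
            out.pop()  # cancel an adjacent inverse pair
        else:
            out.append(ch)
    return "".join(out)

print(reduce_word("abBA"))                     # '' -- the identity loop
print(reduce_word("ab") == reduce_word("ba"))  # False: a and b don't commute
```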
+
+Recall that in graph theory, we usually assume our graphs are connected,
+since otherwise we can just consider every connected component separately.
+Likewise, we generally want to restrict our attention to path-connected spaces, since if a space isn't path-connected then it can be broken into a bunch of ``path-connected components''.
+(Can you guess how to define this?)
+Indeed, you could imagine a space $X$ that consists of the objects on my desk (but not the desk itself): $\pi_1$ of my phone has nothing to do with $\pi_1$ of my mug. They are just totally disconnected, both figuratively and literally.
+
+But on the other hand we claim that in a path-connected space,
+the groups are very related!
+\begin{theorem}[Fundamental groups don't depend on basepoint]
+ Let $X$ be a path-connected space.
+ Then for any $x_1 \in X$ and $x_2 \in X$, we have
+ \[ \pi_1(X, x_1) \cong \pi_1(X, x_2). \]
+\end{theorem}
+Before you read the proof, see if you can guess the isomorphism based just on the picture below.
+\begin{center}
+ \begin{asy}
+ size(7cm);
+ bigblob("$X$");
+ pair A = Drawing("x_1", (-1.5,0), dir(180));
+ pair B = Drawing("x_2", (1.5,0), dir(0));
+ draw(A..(-2.5,-0.8)..(-2.5,0.9)..cycle, blue);
+ draw(B..(2.2,-1.1)..(2.4,0.8)..cycle, blue);
+ draw(A--B, red+dashed, Arrows);
+ label("$\alpha$/$\overline{\alpha}$", origin, dir(90));
+ \end{asy}
+\end{center}
+\begin{proof}
+ Let $\alpha$ be any path from $x_1$ to $x_2$ (possible by path-connectedness), and let $\ol\alpha$ be its reverse.
+ Then we can construct a map
+ \[
+ \pi_1(X,x_1) \to \pi_1(X,x_2)
+ \text{ by }
+ [\gamma] \mapsto [\ol\alpha \ast \gamma \ast \alpha]. \]
+ In other words, given a loop $\gamma$ at $x_1$,
+ we can start at $x_2$, follow $\ol\alpha$ to $x_1$,
+ run $\gamma$, then run along $\alpha$ home to $x_2$.
+ Hence this map builds an element of $\pi_1(X, x_2)$
+ from every loop at $x_1$.
+ It is a \emph{homomorphism} of the groups just because
+ \[ \left( \ol \alpha \ast \gamma_1 \ast \alpha \right)
+ \ast \left( \ol\alpha \ast \gamma_2 \ast \alpha \right)
+ \simeq \ol\alpha \ast \gamma_1 \ast \gamma_2 \ast \alpha \]
+ as $\alpha \ast \ol\alpha$ is nulhomotopic.
+
+ Similarly, there is a homomorphism
+ \[
+ \pi_1(X,x_2) \to \pi_1(X,x_1)
+ \text{ by }
+ [\gamma] \mapsto [\alpha \ast \gamma \ast \ol\alpha]. \]
+ As these maps are mutual inverses, it follows
+ they must be isomorphisms. End of story.
+\end{proof}
+This is a bigger reason why we usually only care about path-connected spaces.
+
+\begin{abuse}
+ For a path-connected space $X$ we will often abbreviate $\pi_1(X, x_0)$
+ to just $\pi_1(X)$, since it doesn't matter which $x_0 \in X$
+ we pick.
+\end{abuse}
+
+Finally, recall that we originally defined ``simply connected'' as saying
+that any two paths with matching endpoints were homotopic.
+It's possible to weaken this condition and then rephrase it using
+fundamental groups.
+\begin{exercise}
+ Let $X$ be a path-connected space.
+ Prove that $X$ is \vocab{simply connected} if and only if
+ $\pi_1(X)$ is the trivial group.
+ (One direction is easy; the other is a little trickier.)
+\end{exercise}
+This is the ``usual'' definition of simply connected.
+
+
+\section{Fundamental groups are functorial}
+One quick shorthand I will introduce to clean up the discussion:
+\begin{definition}
+ By $f : (X, x_0) \to (Y, y_0)$, we will mean that
+ $f : X \to Y$ is a continuous function of spaces
+ which also sends the point $x_0$ to $y_0$.
+\end{definition}
+
+Let $X$ and $Y$ be topological spaces and $f : (X, x_0) \to (Y, y_0)$.
+We now want to relate the fundamental groups of $X$ and $Y$.
+
+Recall that a loop $\gamma$ in $(X, x_0)$ is a map $\gamma : [0,1] \to X$
+with $\gamma(0) = \gamma(1) = x_0$.
+Then if we consider the composition
+\[ [0,1] \taking\gamma (X, x_0) \taking f (Y, y_0) \]
+then we get straight-away a loop in $Y$ at $y_0$!
+Let's call this loop $f_\sharp \gamma$.
+\begin{lemma}[$f_\sharp$ is homotopy invariant]
+ \label{lem:fsharp_homotopy_invariant}
+ If $\gamma_1 \simeq \gamma_2$ are path-homotopic,
+ then in fact
+ \[ f_\sharp \gamma_1 \simeq f_\sharp \gamma_2. \]
+\end{lemma}
+\begin{proof}
+ Just take the homotopy $h$ taking $\gamma_1$ to $\gamma_2$
+ and consider $f \circ h$.
+\end{proof}
+
+It's worth noting at this point that if $X$ and $Y$ are homeomorphic,
+then their fundamental groups are all isomorphic.
+Indeed, let $f : X \to Y$ and $g : Y \to X$ be mutually inverse continuous maps.
+Then one can check that $f_\sharp : \pi_1(X, x_0) \to \pi_1(Y, y_0)$
+and $g_\sharp : \pi_1(Y, y_0) \to \pi_1(X, x_0)$ are inverse maps
+between the groups (assuming $f(x_0) = y_0$ and $g(y_0) = x_0$).
+
+%Now we want to show that by taking the map $f_\sharp$, we get a \emph{functor}
+%\begin{diagram}
+% & (X, x_0) & & \pi_1(X, x_0) & \\
+% \catname{Top}_\ast \ni & \dTo^f & \rDotted & \dTo_{f_\sharp} & \in \catname{Grp} \\
+% & (Y, y_0) & & \pi_1(Y, y_0) &
+%\end{diagram}
+%\begin{ques}
+% Check this -- we need that $(f \circ g)_\sharp = f_\sharp \circ g_\sharp$
+% and that $(\id_X)_\sharp = \id_{\pi_1(X)}$.
+% Both are totally obvious once you can tell what they're asking.
+%\end{ques}
+%Thus in this way we've constructed a functor
+%\[ \catname{Top}_\ast \to \catname{Grp}. \]
+%In particular, by the fact that functors preserve isomorphism (\Cref{thm:functor_isom}), we have
+%\begin{moral}
+% Homeomorophic topological spaces have isomorphic fundamental groups.
+% Category theory gives this to us for free.
+%\end{moral}
+%
+%\begin{remark}
+% Note the similarity between this and the construction
+% of the covariant Yoneda functor (\Cref{ex:covariant_yoneda}).
+%\end{remark}
+
+\section{Higher homotopy groups}
+Why the notation $\pi_1$ for the fundamental group?
+And what are $\pi_2$, \dots?
+The answer lies in the following rephrasing:
+\begin{ques}
+ Convince yourself that a loop is the same thing
+ as a continuous function $S^1 \to X$.
+\end{ques}
+It turns out we can define homotopy for things other than paths.
+Two functions $f, g : Y \to X$ are \vocab{homotopic} if there exists a continuous
+function $Y \times [0,1] \to X$ which continuously deforms $f$ to $g$.
+So everything we did above was just the special case $Y = S^1$.
+
+For general $n$, the group $\pi_n(X)$ is defined as the homotopy classes
+of the maps $S^n \to X$.
+The group operation is a little harder to specify.
+You have to show that $S^n$ is homeomorphic to $[0,1]^n$ with
+some endpoints fused together; for example $S^1$ is $[0,1]$ with $0$ fused to $1$.
+Once you have these cubes, you can merge them together on a face.
+(Again, I'm being terribly imprecise, deliberately.)
+
+For $n \neq 1$, $\pi_n$ behaves somewhat differently than $\pi_1$.
+(You might not be surprised, as $S^n$ is simply connected for all $n \ge 2$ but not when $n=1$.)
+In particular, it turns out that $\pi_n(X)$ is an abelian group for all $n \ge 2$.
+
+Let's see some examples.
+\begin{example}[$\pi_n(S^n) \cong \mathbb Z$]
+ As we saw, $\pi_1(S^1) \cong \ZZ$; given the base circle $S^1$,
+ we can wrap a second circle around it as many times as we want.
+ In general, it's true that $\pi_n(S^n) \cong \ZZ$.
+\end{example}
+\begin{example}[$\pi_n(S^m) \cong \{1\}$ when $n < m$]
+ We saw that $\pi_1(S^2) \cong \{1\}$, because
+ a circle in $S^2$ can just be reeled in to a point.
+ It turns out that similarly, any smaller $n$-dimensional sphere
+ can be reeled in on the surface of a bigger $m$-dimensional sphere.
+ So in general, $\pi_n(S^m)$ is trivial for $n < m$.
+\end{example}
+However, beyond these observations, the groups behave quite weirdly.
+Here is a table of $\pi_n(S^m)$ for $1 \le m \le 8$ and $2 \le n \le 10$,
+so you can see what I'm talking about.
+(Taken from Wikipedia.)
+
+\bgroup
+\footnotesize
+\[
+ \begin{array}{r|ccccccccc}
+ \pi_n(S^m) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline
+ m = 1 & \{1\} & \{1\} & \{1\} & \{1\} & \{1\} & \{1\} & \{1\} & \{1\} & \{1\} \\
+ 2 & \ZZ & \ZZ & \Zc2 & \Zc2 & \Zc{12} & \Zc2 & \Zc2 & \Zc3 & \Zc{15}\\
+ 3 & & \ZZ & \Zc2 & \Zc2 & \Zc{12} & \Zc2 & \Zc2 & \Zc3 & \Zc{15}\\
+ 4 & & & \ZZ & \Zc2 & \Zc2 & \ZZ \times \Zc{12} & (\Zc2)^2 & \Zc2 \times \Zc2 & \Zc{24} \times \Zc3 \\
+ 5 & & & & \ZZ & \Zc2 & \Zc2 & \Zc{24} & \Zc2 & \Zc2 \\
+ 6 & & & & & \ZZ & \Zc2 & \Zc2 & \Zc{24} & \{1\} \\
+ 7 & & & & & & \ZZ & \Zc2 & \Zc2 & \Zc{24} \\
+ 8 & & & & & & & \ZZ & \Zc2 & \Zc2
+ \end{array}
+\]
+\egroup
+
+Actually, it turns out that if you can compute $\pi_n(S^m)$
+for every $m$ and $n$,
+then you can essentially compute \emph{any} homotopy classes.
+Thus, computing $\pi_n(S^m)$ is sort of a lost cause in general,
+and the mixture of chaos and pattern in the above table is a testament to this.
+
+\section{Homotopy equivalent spaces}
+\prototype{A disk is homotopy equivalent to a point, an annulus is homotopy equivalent to $S^1$.}
+Up to now I've abused notation and referred to ``path homotopy'' as just ``homotopy'' for two paths.
+I will unfortunately continue to do so (and so any time I say two paths are homotopic, you should assume I mean ``path-homotopic'').
+But let me tell you what the general definition of homotopy is first.
+\begin{definition}
+ Let $f,g : X \to Y$ be continuous functions.
+ A \vocab{homotopy} is a continuous function $F : X \times [0,1] \to Y$,
+ which we'll write $F_s(x)$ for $s \in [0,1]$, $x \in X$, such that
+ \[ F_0(x) = f(x) \text{ and } F_1(x) = g(x) \text{ for all $x \in X$.} \]
+ If such a function exists, then $f$ and $g$ are \vocab{homotopic}.
+\end{definition}
+Intuitively this is once again ``deforming $f$ to $g$''.
+You might notice this is almost exactly the same definition as path-homotopy,
+except that $f$ and $g$ are any functions instead of paths, and hence
+there's no restriction on keeping some ``endpoints'' fixed through the deformation.
+
+This homotopy can be quite dramatic:
+\begin{example}
+ The zero function $z \mapsto 0$ and the identity function $z \mapsto z$
+ are homotopic as functions $\CC \to \CC$.
+ The necessary deformation is
+ \[ [0,1] \times \CC \to \CC \text{ by } (t,z) \mapsto tz. \]
+\end{example}
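As an informal numerical check (nothing deep here), the deformation $F_t(z) = tz$ really does interpolate between the two maps:

```python
def F(t, z):
    """Straight-line homotopy from the zero map (t=0) to the identity (t=1)."""
    return t * z

z = 3 + 4j
print(F(0, z))    # 0j: at time 0, the zero map
print(F(1, z))    # (3+4j): at time 1, the identity
print(F(0.5, z))  # (1.5+2j): halfway through the deformation
```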
+
+I bring this up because I want to define:
+\begin{definition}
+ Let $X$ and $Y$ be topological spaces.
+ They are \vocab{homotopy equivalent} if there exist
+ functions $f : X \to Y$ and $g : Y \to X$ such that
+ \begin{enumerate}[(i)]
+ \ii $g \circ f : X \to X$ is homotopic to the identity map on $X$, and
+ \ii $f \circ g : Y \to Y$ is homotopic to the identity map on $Y$.
+ \end{enumerate}
+ If a topological space is homotopy equivalent to a point,
+ then it is said to be \vocab{contractible}.
+\end{definition}
+\begin{ques}
+ Why are two homeomorphic spaces also homotopy equivalent?
+\end{ques}
+
+Intuitively, you can think of this as a more generous form of stretching
+and bending than homeomorphism: we are allowed to compress huge spaces into single points.
+
+\begin{example}[$\CC$ is contractible]
+ Consider the topological spaces $\CC$
+ and the space consisting of the single point $\{0\}$.
+ We claim these spaces are homotopy equivalent (can you guess what $f$ and $g$ are?).
+ Indeed, the two things to check are
+ \begin{enumerate}[(i)]
+ \ii $\CC \to \{0\} \hookrightarrow \CC$ by $z \mapsto 0 \mapsto 0$
+ is homotopic to the identity on $\CC$, which we just saw, and
+ \ii $\{0\} \hookrightarrow \CC \to \{0\}$ by $0 \mapsto 0 \mapsto 0$, which \emph{is} the identity on $\{0\}$.
+ \end{enumerate}
+ Here by $\hookrightarrow$ I just mean $\to$ in the special case
+ that the function is just an ``inclusion''.
+\end{example}
+\begin{remark}
+ $\CC$ cannot be \emph{homeomorphic} to a point
+ because there is no bijection of sets between them.
+\end{remark}
+
+\begin{example}[$\CC \setminus \{0\}$ is homotopy equivalent to $S^1$]
+ Consider the topological spaces $\CC \setminus \{0\}$,
+ the \vocab{punctured plane}, and the circle $S^1$ viewed as a subset of $\CC$.
+ We claim these spaces are actually homotopy equivalent!
+ The necessary functions are the inclusion
+ \[ S^1 \hookrightarrow \CC \setminus \{0\} \]
+ and the function
+ \[ \CC \setminus \{0\} \to S^1
+ \quad\text{by}\quad
+ z \mapsto \frac{z}{\left\lvert z \right\rvert}. \]
+ You can check that these satisfy the required condition.
+\end{example}
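One can also check this claim numerically; in this sketch (function names are mine) the key point is that the straight-line homotopy from $z \mapsto z/|z|$ to the identity stays inside $\CC \setminus \{0\}$, because the segment from $z/|z|$ to $z$ consists of positive multiples of $z$.

```python
def r(z):
    """Retraction of the punctured plane onto the unit circle."""
    return z / abs(z)

def H(t, z):
    """Straight-line homotopy: H(0, z) = r(z) and H(1, z) = z."""
    return (1 - t) * r(z) + t * z

z = -2 + 1j
print(abs(r(z)))  # approximately 1.0: the image lies on the unit circle
# The homotopy never passes through the deleted point 0:
assert all(H(k / 10, z) != 0 for k in range(11))
```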
+\begin{remark}
+ On the other hand, $\CC \setminus \{0\}$ cannot be \emph{homeomorphic} to $S^1$.
+ One can make $S^1$ disconnected by deleting two points;
+ the same is not true for $\CC \setminus \{0\}$.
+\end{remark}
+\begin{example}
+ [$\text{Disk} = \text{Point}$, $\text{Annulus} = \text{Circle}$.]
+ By the same token, a disk is homotopy equivalent to a point;
+ an annulus is homotopy equivalent to a circle.
+ (These might be a little easier to visualize, since they are bounded.)
+\end{example}
+
+I bring these up because it turns out that
+\begin{moral}
+ Algebraic topology can't distinguish between homotopy equivalent spaces.
+\end{moral}
+More precisely,
+\begin{theorem}[Homotopy equivalent spaces have isomorphic fundamental groups]
+ \label{thm:fundgrp_homotopy_invariant}
+ Let $X$ and $Y$ be path-connected, homotopy-equivalent spaces.
+ Then $\pi_n(X) \cong \pi_n(Y)$ for every positive integer $n$.
+\end{theorem}
+\begin{proof}
+ [Proof sketch, for $n = 1$]
+ Let $f : X \to Y$ and $g : Y \to X$ be maps witnessing that $X$ and $Y$ are homotopy equivalent
+ (meaning $f \circ g$ and $g \circ f$ are each homotopic to the identity).
+ Given a loop $\gamma : [0,1] \to X$, the composition
+ \[ [0,1] \taking\gamma X \taking f Y \]
+ is a loop in $Y$, and hence $f$ induces a natural homomorphism $\pi_1(X) \to \pi_1(Y)$.
+ Similarly $g$ induces a natural homomorphism $\pi_1(Y) \to \pi_1(X)$.
+ The conditions on $f$ and $g$ essentially say that these two homomorphisms
+ are mutually inverse (up to the change-of-basepoint isomorphism from before),
+ so they are isomorphisms.
+ The same argument works for $\pi_n$ with $n \ge 2$.
+\end{proof}
+In particular,
+\begin{ques}
+ What are the fundamental groups of contractible spaces?
+\end{ques}
+
+That means, for example, that algebraic topology can't tell
+ the following homotopy equivalent subspaces of $\RR^2$ apart.
+\begin{center}
+ {\color{red} \Huge \venus}
+ \qquad
+ {\color{blue} \Huge \mars}
+\end{center}
+
+\section{The pointed homotopy category}
+This section is meant to be read by those who know some basic category theory.
+Those of you that don't should come back after reading \Cref{ch:cats,ch:functors}.
+Those of you that do will enjoy how succinctly we can summarize
+the content of this chapter using categorical notions.
+
+\begin{definition}
+ The \vocab{pointed homotopy category} $\catname{hTop}_\ast$ is defined as follows.
+ \begin{itemize}
+ \ii Objects: pairs $(X, x_0)$ of spaces with a distinguished basepoint, and
+ \ii Morphisms: \emph{homotopy classes} of continuous functions $(X, x_0) \to (Y, y_0)$.
+ \end{itemize}
+\end{definition}
+In particular, two path-connected spaces are isomorphic in this category exactly
+when they are homotopy equivalent.
+Then we can summarize many of the preceding results as follows:
+\begin{theorem}[Functorial interpretation of fundamental groups]
+ \label{thm:fundgrp_functor}
+ There is a functor
+ \[ \pi_1 \colon \catname{hTop}_\ast \to \catname{Grp} \]
+ sending
+ \begin{center}
+ \begin{tikzcd}
+ (X,x_0) \ar[d, "f"'] \ar[r, dashed] & \pi_1(X, x_0) \ar[d, "f_\sharp"] \\
+ (Y,y_0) \ar[r, dashed] & \pi_1(Y, y_0)
+ \end{tikzcd}
+ \end{center}
+\end{theorem}
+This implies several things, like
+\begin{itemize}
+ \ii The functor bundles the information of $f_\sharp$,
+ including the fact that it respects composition.
+ In the categorical language, $f_\sharp$ is $\pi_1(f)$.
+ \ii Homotopic spaces have isomorphic fundamental group
+ (since the spaces are isomorphic in $\catname{hTop}_\ast$,
+ and functors preserve isomorphism by \Cref{thm:functor_isom}).
+ In fact, you'll notice that the proofs
+ of \Cref{thm:functor_isom} and \Cref{thm:fundgrp_homotopy_invariant}
+ are secretly identical to each other.
+ \ii If maps $f, g : (X, x_0) \to (Y, y_0)$ are homotopic,
+ then $f_\sharp = g_\sharp$. This is basically \Cref{lem:fsharp_homotopy_invariant}.
+\end{itemize}
+
+\begin{remark}
+ In fact, $\pi_1(X, x_0)$ is the set of arrows $(S^1, 1) \to (X, x_0)$ in $\catname{hTop}_\ast$,
+ so this is actually a covariant Yoneda functor (\Cref{ex:covariant_yoneda}),
+ except with target $\catname{Grp}$ instead of $\catname{Set}$.
+\end{remark}
+
+
+\section\problemhead
+
+\begin{problem}[Harmonic fan]
+ Exhibit a subspace $X$ of the metric space $\RR^2$ which is
+ path-connected but for which a point $p$ can be found such that
+ any $r$-neighborhood of $p$ with $r < 1$ is not path-connected.
+ % harmonic fan
+\end{problem}
+
+\begin{dproblem}
+ [Special case of Seifert--van Kampen] \gim
+ Let $X$ be a topological space.
+ Suppose $U$ and $V$ are connected open subsets of $X$, with $X = U \cup V$,
+ such that $U \cap V$ is nonempty and path-connected.
+
+ Prove that if $\pi_1(U) = \pi_1(V) = \{1\}$ then $\pi_1(X) = \{1\}$.
+\end{dproblem}
+\begin{remark}
+ The \vocab{Seifert--van Kampen theorem} generalizes this
+ for $\pi_1(U)$ and $\pi_1(V)$ any groups; it gives a formula for calculating $\pi_1(X)$
+ in terms of $\pi_1(U)$, $\pi_1(V)$, $\pi_1(U \cap V)$.
+ The proof is much the same.
+
+ Unfortunately, this does not give us a way to calculate $\pi_1(S^1)$,
+ because one cannot write $S^1 = U \cup V$ as above with $U \cap V$ \emph{connected}.
+\end{remark}
+
+\begin{problem}
+ [RMM 2013] \yod
+ Let $n \ge 2$ be a positive integer.
+ A stone is placed at each vertex of a regular $2n$-gon.
+ A move consists of selecting an edge of the $2n$-gon and swapping the two stones at the endpoints of the edge.
+ Prove that if a sequence of moves swaps every pair of stones exactly once, then there is some edge never used in any move.
+\end{problem}
+(This last problem doesn't technically have anything to do with the chapter,
+but the ``gut feeling'' which motivates the solution is very similar.)
diff --git a/books/napkin/galois.tex b/books/napkin/galois.tex
new file mode 100644
index 0000000000000000000000000000000000000000..98177387878cc7ef33dbe03aad35ced331e8642f
--- /dev/null
+++ b/books/napkin/galois.tex
@@ -0,0 +1,720 @@
+\chapter{Things Galois}
+%This chapter is mostly optional.
+%Read the first two sections and then decide
+%whether you want to read the rest of this chapter.
+
+\section{Motivation}
+\prototype{$\QQ(\sqrt2)$ and $\QQ(\cbrt{2})$.}
+The key idea in Galois theory is that of \emph{embeddings},
+which give us another way to get at the idea of the ``conjugate'' we described earlier.
+
+Let $K$ be a number field.
+An \vocab{embedding} $\sigma \colon K \injto \CC$
+is an \emph{injective field homomorphism}:
+it needs to preserve addition and multiplication,
+and in particular it should fix $1$.
+\begin{ques}
+ Show that in this context, $\sigma(q) = q$ for any rational number $q$.
+\end{ques}
+
+\begin{example}
+ [Examples of embeddings]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii If $K = \QQ(i)$, the two embeddings of $K$ into $\CC$ are
+ $z \mapsto z$ (the identity) and $z \mapsto \ol z$ (complex conjugation).
+ \ii If $K = \QQ(\sqrt 2)$, the two embeddings of $K$ into $\CC$ are
+ $a+b\sqrt 2 \mapsto a+b\sqrt 2$ (the identity) and $a+b\sqrt 2 \mapsto a-b\sqrt 2$ (conjugation).
+ \ii If $K = \QQ(\cbrt 2)$, there are three embeddings:
+ \begin{itemize}
+ \ii The identity embedding, which sends $1 \mapsto 1$ and $\cbrt 2 \mapsto \cbrt 2$.
+ \ii An embedding which sends $1 \mapsto 1$ and $\cbrt 2 \mapsto \omega \cbrt 2$,
+ where $\omega$ is a primitive cube root of unity.
+ Note that this is enough to determine the rest of the embedding.
+ \ii An embedding which sends $1 \mapsto 1$ and $\cbrt 2 \mapsto \omega^2 \cbrt 2$.
+ \end{itemize}
+ \end{enumerate}
+\end{example}
+
+I want to make several observations about these embeddings,
+which will form the core ideas of Galois theory.
+Pay attention here!
+
+\begin{itemize}
+\ii
+First, you'll notice some duality between roots: in the first example, $i$ gets sent to $\pm i$,
+$\sqrt 2$ gets sent to $\pm \sqrt 2$, and $\cbrt 2$ gets sent to the other roots of $x^3-2$.
+This is no coincidence, and one can show this occurs in general.
+Specifically, suppose $\alpha$ has minimal polynomial
+\[ 0 = c_n \alpha^n + c_{n-1} \alpha^{n-1} + \dots + c_1\alpha + c_0 \]
+where the $c_i$ are rational.
+Then applying any embedding $\sigma$ to both sides gives
+\begin{align*}
+ 0 &= \sigma(c_n \alpha^n + c_{n-1} \alpha^{n-1} + \dots + c_1\alpha + c_0) \\
+ % &= \sigma(c_n \alpha^n) + \sigma(c_{n-1} \alpha^{n-1})
+ % + \dots + \sigma(c_1\alpha) + \sigma(c_0) \\
+ &= \sigma(c_n) \sigma(\alpha)^n + \sigma(c_{n-1}) \sigma(\alpha)^{n-1}
+ + \dots + \sigma(c_1)\sigma(\alpha) + \sigma(c_0) \\
+ &= c_n \sigma(\alpha)^n + c_{n-1} \sigma(\alpha)^{n-1} + \dots + c_1\sigma(\alpha) + c_0
+\end{align*}
+where in the last step we have used the fact that $c_i \in \QQ$, so they are fixed by $\sigma$.
+\emph{So, roots of minimal polynomials go to other roots of that polynomial.}
+
+\ii
+Next, I want to draw out a contrast between the second and third examples.
+Specifically, consider example (b), where we look at embeddings of
+$K = \QQ(\sqrt 2)$ into $\CC$.
+The images of these embeddings land entirely in $K$: that is, we
+could just as well have looked at maps $K \to K$ rather than maps $K \to \CC$.
+However, this is not true in (c): indeed $\QQ(\cbrt 2) \subset \RR$,
+but the non-identity embeddings have complex outputs!
+
+The key difference is to again think about conjugates.
+Key observation:
+\begin{moral}
+ The field $K = \QQ(\cbrt 2)$ is ``deficient'' because the minimal polynomial $x^3-2$
+ has two other roots $\omega \cbrt{2}$ and $\omega^2 \cbrt{2}$ not contained in $K$.
+\end{moral}
+On the other hand $K = \QQ(\sqrt 2)$ is just fine because both roots of $x^2-2$ are contained inside $K$.
+Finally, one can actually fix the deficiency in $K = \QQ(\cbrt 2)$ by completing it to a field $\QQ(\cbrt 2, \omega)$.
+Fields like $\QQ(i)$ or $\QQ(\sqrt 2)$ which are ``self-contained'' are called
+\emph{Galois extensions}, as we'll explain shortly.
+
+\ii
+Finally, you'll notice that in the examples above, \emph{the number of embeddings from $K$ to $\CC$
+happens to be the degree of $K$}.
+This is an important theorem, \Cref{thm:n_embeddings}.
+\end{itemize}
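The first observation above is easy to verify numerically. A throwaway sketch (floating-point arithmetic, so we test against a tolerance): every embedding of $\QQ(\cbrt 2)$ sends $\cbrt 2$ to one of the three complex roots of its minimal polynomial $x^3 - 2$.

```python
import cmath

cbrt2 = 2 ** (1 / 3)
omega = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity

def f(x):
    return x ** 3 - 2  # minimal polynomial of cbrt(2) over Q

# The three possible images of cbrt(2) under an embedding into C:
images = [cbrt2, omega * cbrt2, omega ** 2 * cbrt2]
for beta in images:
    assert abs(f(beta)) < 1e-9  # each image is again a root of x^3 - 2
print("all three images are roots of x^3 - 2")
```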
+
+In this chapter we'll develop these ideas in full generality, for base fields other than just $\QQ$.
+
+\section{Field extensions, algebraic closures, and splitting fields}
+\prototype{$\QQ(\cbrt 2)/\QQ$ is an extension, $\CC$ is an algebraic closure of any number field.}
+
+First, we define a notion of one field sitting inside another,
+in order to generalize the notion of a number field.
+\begin{definition}
+ Let $K$ and $F$ be fields.
+ If $F \subseteq K$, we write $K/F$ and say $K$ is a
+ \vocab{field extension} of $F$.
+
+ Thus $K$ is automatically an $F$-vector space
+ (just like $\QQ(\sqrt 2)$ is automatically a $\QQ$-vector space).
+ The \vocab{degree} is the dimension of this space, denoted $[K:F]$.
+ If $[K:F]$ is finite, we say $K/F$ is a \vocab{finite (field) extension}.
+\end{definition}
+That's really all. There's nothing tricky at all.
+
+\begin{ques}
+ What do you call a finite extension of $\QQ$?
+\end{ques}
+
+Degrees of finite extensions are multiplicative.
+\begin{theorem}[Field extensions have multiplicative degree]
+ Let $F \subseteq K \subseteq L$ be fields with $L/K$, $K/F$ finite. Then
+ \[ [L:K][K:F] = [L:F]. \]
+\end{theorem}
+\begin{proof}
+ Basis bash: take a basis of $L$ over $K$ and a basis of $K$ over $F$;
+ the pairwise products form a basis of $L$ over $F$.
+ (Diligent readers can fill in the details.)
+\end{proof}
+
+Next, given a field (like $\QQ(\cbrt2)$)
+we want something to embed it into (in our case $\CC$).
+So we just want a field that contains all the roots of all the polynomials:
+\begin{theorem}[Algebraic closures]
+ Let $F$ be a field.
+ Then there exists a field extension $\ol F$ containing $F$, called an \vocab{algebraic closure},
+ such that all polynomials in $\ol F[x]$ factor completely.
+\end{theorem}
+\begin{example}
+ [$\CC$]
+ $\CC$ is an algebraic closure of $\QQ$, $\RR$ and even itself.
+\end{example}
+\begin{abuse}
+ Some authors also require the algebraic closure to be \emph{minimal by inclusion}:
+ for example, given $\QQ$ they would want only $\ol\QQ$ (the algebraic numbers).
+ It's a theorem that such a minimal algebraic closure is unique,
+ and so these authors will refer to \emph{the} algebraic closure of $K$.
+
+ I like $\CC$, so I'll use the looser definition.
+\end{abuse}
+
+\section{Embeddings into algebraic closures for number fields}
+Now that I've defined all these ingredients, I can prove:
+\begin{theorem}[The $n$ embeddings of a number field]
+ \label{thm:n_embeddings}
+ Let $K$ be a number field of degree $n$.
+ Then there are exactly $n$ field homomorphisms $K \injto \CC$,
+ say $\sigma_1, \dots, \sigma_n$, which fix $\QQ$.
+\end{theorem}
+\begin{remark}
+ Note that a nontrivial homomorphism of fields is necessarily injective
+ (the kernel is an ideal).
+ This justifies the use of ``$\injto$'', and we call each $\sigma_i$ an
+ \vocab{embedding} of $K$ into $\CC$.
+\end{remark}
+\begin{proof}
+ This is actually kind of fun!
+ Recall that any irreducible polynomial over $\QQ$ has distinct roots (\Cref{lem:irred_complex}).
+ We'll adjoin elements $\alpha_1, \alpha_2, \dots, \alpha_m$ one at a time to $\QQ$,
+ until we eventually get all of $K$, that is,
+ \[ K = \QQ(\alpha_1, \dots, \alpha_m). \]
+ Diagrammatically, this is
+ \begin{center}
+ \begin{tikzcd}
+ \QQ \ar[r, hook] \ar[d, hook, "\id"']
+ & \QQ(\alpha_1) \ar[r, hook] \ar[d, hook, "\tau_1"']
+ & \QQ(\alpha_1, \alpha_2) \ar[r, hook] \ar[d, hook, "\tau_2"']
+ & \dots \ar[r, hook]
+ & K \ar[d, hook, "\tau_m=\sigma"] \\
+ \CC \ar[r]
+ & \CC \ar[r]
+ & \CC \ar[r]
+ & \dots \ar[r]
+ & \CC
+ \end{tikzcd}
+ \end{center}
+ First, we claim there are exactly \[ [\QQ(\alpha_1) : \QQ] \] ways to pick $\tau_1$.
+ Observe that $\tau_1$ is determined by where it sends $\alpha_1$ (since it has to fix $\QQ$).
+ Letting $p_1$ be the minimal polynomial of $\alpha_1$, we see that there are $\deg p_1$ choices for $\tau_1$,
+ one for each (distinct) root of $p_1$. That proves the claim.
+
+ Similarly, given a choice of $\tau_1$, there are
+ \[ [\QQ(\alpha_1, \alpha_2) : \QQ(\alpha_1)] \]
+ ways to pick $\tau_2$.
+ (It's a little different: $\tau_1$ need not be the identity.
+ But it's still true that $\tau_2$ is determined by where it sends $\alpha_2$,
+ and as before there are $[\QQ(\alpha_1, \alpha_2) : \QQ(\alpha_1)]$ possible ways.)
+
+ Multiplying these all together gives the desired $[K:\QQ]$.
+\end{proof}
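For a concrete instance of the count (an illustration of mine, not from the text): take $K = \QQ(\sqrt 2, \sqrt 3)$, adjoining $\alpha_1 = \sqrt 2$ and then $\alpha_2 = \sqrt 3$. There are two choices at each step, matching $[K : \QQ] = 2 \cdot 2 = 4$.

```python
from itertools import product

r2, r3 = 2 ** 0.5, 3 ** 0.5
# An embedding is pinned down by where it sends sqrt(2) and sqrt(3);
# each must go to a root of x^2 - 2 resp. x^2 - 3, i.e. a sign choice:
embeddings = [(s2 * r2, s3 * r3) for s2, s3 in product((1, -1), repeat=2)]
print(len(embeddings))  # 4, the degree [Q(sqrt2, sqrt3) : Q]
```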
+\begin{remark}
+ The primitive element theorem actually implies that $m = 1$
+ is sufficient; we don't need to build a whole tower.
+ This simplifies the proof somewhat.
+\end{remark}
+
+It's common to see expressions like ``let $K$ be a number field of degree $n$,
+and $\sigma_1, \dots, \sigma_n$ its $n$ embeddings'' without further explanation.
+The relation between these embeddings and the Galois conjugates is given as follows.
+\begin{theorem}[Embeddings are evenly distributed over conjugates]
+ \label{thm:conj_distrb}
+ Let $K$ be a number field of degree $n$
+ with $n$ embeddings $\sigma_1$, \dots, $\sigma_n$,
+ and let $\alpha \in K$ have $m$ Galois conjugates over $\QQ$.
+
+ Then $\sigma_j(\alpha)$ is ``evenly distributed''
+ over each of these $m$ conjugates: for any Galois conjugate $\beta$,
+ exactly $\frac nm$ of the embeddings send $\alpha$ to $\beta$.
+\end{theorem}
+\begin{proof}
+ In the previous proof, adjoin $\alpha_1 = \alpha$ first.
+\end{proof}
+
+So, now we can define the trace and norm over $\QQ$ in a nice way:
+given a number field $K$, we set
+\[
+ \Tr_{K/\QQ}(\alpha) = \sum_{i=1}^n \sigma_i(\alpha)
+ \quad\text{and}\quad
+ \Norm_{K/\QQ}(\alpha) = \prod_{i=1}^n \sigma_i(\alpha)
+\]
+where $\sigma_i$ are the $n$ embeddings of $K$ into $\CC$.
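For example, with $K = \QQ(\sqrt 2)$ the two embeddings are $a + b\sqrt 2 \mapsto a \pm b\sqrt 2$, so the sum and product collapse to $2a$ and $a^2 - 2b^2$. A tiny exact-arithmetic sketch (the representation and names are mine):

```python
def trace_norm(a, b):
    """Trace and norm of a + b*sqrt(2) over Q, computed from the two
    embeddings a + b*sqrt(2) |-> a + b*sqrt(2) and a - b*sqrt(2)."""
    tr = a + a              # (a + b sqrt2) + (a - b sqrt2) = 2a
    nm = a * a - 2 * b * b  # (a + b sqrt2)(a - b sqrt2) = a^2 - 2b^2
    return tr, nm

print(trace_norm(3, 1))  # (6, 7)
```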
+
+\section{Everyone hates characteristic 2: separable vs irreducible}
+\prototype{$\QQ$ has characteristic zero, hence irreducible polynomials are separable.}
+Now, we want a version of the above theorem for any field $F$.
+If you read the proof, you'll see that the only thing that ever uses anything about the field $\QQ$
+is \Cref{lem:irred_complex}, where we use the fact that
+\begin{quote}
+ \itshape Irreducible polynomials over $F$ have no double roots.
+\end{quote}
+
+Let's call a polynomial with no double roots \vocab{separable};
+thus we want irreducible polynomials to be separable.
+We did this for $\QQ$ in the last chapter by taking derivatives.
+Should work for any field, right?
+
+Nope.
+Suppose we took the derivative of some polynomial like $2x^3 + 24x + 9$,
+namely $6x^2 + 24$.
+In $\CC$ it's obvious that the derivative $f'$ of a nonconstant polynomial $f$ isn't zero.
+But suppose we considered the above as a polynomial in $\FF_3$, i.e.\ modulo $3$.
+Then the derivative is zero.
+Oh, no!
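Working with coefficient lists makes the failure explicit. In this quick sketch (the conventions are mine), `coeffs[i]` is the coefficient of $x^i$, reduced mod $p$:

```python
def formal_derivative(coeffs, p):
    """Formal derivative of a polynomial over F_p;
    coeffs[i] is the coefficient of x^i."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

# 2x^3 + 24x + 9 over F_3: the derivative 6x^2 + 24 vanishes mod 3.
print(formal_derivative([9, 24, 0, 2], 3))  # [0, 0, 0]
# More generally x^p has zero derivative over F_p, e.g. x^5 over F_5:
print(formal_derivative([0, 0, 0, 0, 0, 1], 5))  # [0, 0, 0, 0, 0]
```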
+
+We have to impose a condition that prevents something like this from happening.
+\begin{definition}
+ For a field $F$, the \vocab{characteristic} of $F$ is the smallest
+ positive integer $p$ such that
+ \[ \underbrace{1_F + \dots + 1_F}_{\text{$p$ times}} = 0 \]
+ or zero if no such integer $p$ exists.
+\end{definition}
+\begin{example}[Field characteristics]
+ Old friends $\RR$, $\QQ$, $\CC$ all have characteristic zero.
+ But $\FF_p$, the integers modulo $p$, is a field of characteristic $p$.
+\end{example}
+\begin{exercise}
+ Let $F$ be a field of characteristic $p$.
+ Show that if $p > 0$ then $p$ is a prime number.
+ (A proof is given next chapter.)
+\end{exercise}
+With the assumption of characteristic zero, our earlier proof works.
+\begin{lemma}[Separability in characteristic zero]
+ Any irreducible polynomial in a characteristic zero field is separable.
+\end{lemma}
+Unfortunately, this lemma is false if the ``characteristic zero'' condition is dropped.
+
+\begin{remark}
+ The reason it's called \emph{separable} is (I think) this picture:
+ I have a polynomial and I want to break it into irreducible parts.
+ Normally, if I have a double root in a polynomial, that means it's not irreducible.
+ But in characteristic $p > 0$ this fails.
+ So inseparable polynomials are strange when you think about them: somehow
+ you have double roots that can't be separated from each other.
+\end{remark}
+
+We can get this to work for any field extension in which separability is not an issue.
+\begin{definition}
+ A \vocab{separable extension} $K/F$ is one in which the minimal polynomial
+ over $F$ of every element of $K$ is separable
+ (this is automatic, for example, if $F$ has characteristic zero).
+ A field $F$ is \vocab{perfect} if any finite field extension $K/F$ is separable.
+\end{definition}
+In fact, as we see in the next chapter:
+\begin{theorem}
+ [Finite fields are perfect]
+ Suppose $F$ is a field with finitely many elements. Then it is perfect.
+\end{theorem}
+Thus, we will almost never have to worry about separability
+since every field we see in the Napkin is either finite or characteristic $0$.
+So the inclusion of the word ``separable'' is mostly a formality.
+
+Proceeding onwards, we obtain
+\begin{theorem}[The $n$ embeddings of any separable extension]
+ Let $K/F$ be a separable extension of degree $n$ and let $\ol F$ be an algebraic closure of $F$.
+ Then there are exactly $n$ field homomorphisms $K \injto \ol F$,
+ say $\sigma_1, \dots, \sigma_n$, which fix $F$.
+\end{theorem}
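+\begin{example}
+ [The three embeddings of $\QQ(\cbrt 2)$]
+ For example, take $K = \QQ(\cbrt 2)$ and $F = \QQ$.
+ As before, an embedding must send $\cbrt 2$ to one of the roots of $x^3 - 2$,
+ so the $n = 3$ embeddings $K \injto \ol \QQ$ are determined by
+ \[ \cbrt 2 \mapsto \cbrt 2, \qquad
+ \cbrt 2 \mapsto \omega \cbrt 2, \qquad
+ \cbrt 2 \mapsto \omega^2 \cbrt 2. \]
+\end{example}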
+
+In any case, this lets us define the trace and norm for \emph{any} separable extension.
+\begin{definition}
+Let $K/F$ be a separable extension of degree $n$, and let $\sigma_1$, \dots, $\sigma_n$
+be the $n$ embeddings into an algebraic closure of $F$. Then we define
+\[
+ \Tr_{K/F}(\alpha) = \sum_{i=1}^n \sigma_i(\alpha)
+ \quad\text{and}\quad
+ \Norm_{K/F}(\alpha) = \prod_{i=1}^n \sigma_i(\alpha).
+\]
+When $F = \QQ$ and the algebraic closure is $\CC$, this coincides with our earlier definition!
+\end{definition}
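+
+\begin{example}
+ [Trace and norm of $\QQ(\sqrt 2)$]
+ As a sanity check, take $K = \QQ(\sqrt 2)$,
+ whose two embeddings are $a + b\sqrt2 \mapsto a \pm b\sqrt2$.
+ Then
+ \[ \Tr_{K/\QQ}(a+b\sqrt2) = 2a
+ \quad\text{and}\quad
+ \Norm_{K/\QQ}(a+b\sqrt2) = (a+b\sqrt2)(a-b\sqrt2) = a^2 - 2b^2. \]
+\end{example}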
+
+
+\section{Automorphism groups and Galois extensions}
+\prototype{$\QQ(\sqrt 2)$ is Galois but $\QQ(\cbrt 2)$ is not.}
+We now want to get back at the idea we stated at the beginning of
+this section that $\QQ(\cbrt 2)$ is deficient in a way that $\QQ(\sqrt 2)$ is not.
+
+First, we define the ``internal'' automorphisms.
+\begin{definition}
+ Suppose $K/F$ is a finite extension.
+ Then $\Aut(K/F)$ is the set of \emph{field isomorphisms} $\sigma : K \to K$ which fix $F$.
+ In symbols
+ \[ \Aut(K/F) =
+ \left\{
+ \sigma : K \to K \mid
+ \text{$\sigma$ is identity on $F$}
+ \right\}.
+ \]
+ This is a group under function composition!
+\end{definition}
+Note that this time, we have a condition that $F$ is fixed by $\sigma$.
+(This was not there before when we considered $F = \QQ$, because we got it for free.)
+
+\begin{example}[Old examples of automorphism groups]
+ Reprising the example at the beginning of the chapter in the new notation, we have:
+ \begin{enumerate}[(a)]
+ \ii $\Aut(\QQ(i) / \QQ) \cong \Zc 2$, with elements $z \mapsto z$ and $z \mapsto \ol z$.
+ \ii $\Aut(\QQ(\sqrt 2) / \QQ) \cong \Zc 2$ in the same way.
+ \ii $\Aut(\QQ(\cbrt 2) / \QQ)$ is the trivial group, with only the identity embedding!
+ \end{enumerate}
+\end{example}
+
+\begin{example}[Automorphism group of $\QQ(\sqrt2,\sqrt3)$]
+ Here's a new example: let $K = \QQ(\sqrt2, \sqrt3)$.
+ It turns out that $\Aut(K/\QQ) = \{1, \sigma, \tau, \sigma\tau\}$, where
+ \[
+ \sigma :
+ \begin{cases}
+ \sqrt2 &\mapsto -\sqrt2 \\
+ \sqrt3 &\mapsto \sqrt3
+ \end{cases}
+ \quad\text{and}\quad
+ \tau :
+ \begin{cases}
+ \sqrt2 &\mapsto \sqrt2 \\
+ \sqrt3 &\mapsto -\sqrt3.
+ \end{cases}
+ \]
+ In other words, $\Aut(K/\QQ)$ is the Klein Four Group.
+\end{example}
+
+First, let's repeat the proof of the observation that these embeddings shuffle around roots
+(akin to the first observation in the introduction):
+\begin{lemma}
+ [Root shuffling in $\Aut(K/F)$]
+ Let $f \in F[x]$, suppose $K/F$ is a finite extension,
+ and assume $\alpha \in K$ is a root of $f$.
+ Then for any $\sigma \in \Aut(K/F)$, $\sigma(\alpha)$ is also a root of $f$.
+ \label{lem:root_shuffle}
+\end{lemma}
+\begin{proof}
+ Let $f(x) = c_n x^n + c_{n-1}x^{n-1} + \dots + c_0$,
+ where $c_i \in F$. Thus,
+ \[ 0 = \sigma(f(\alpha)) = \sigma\left( c_n\alpha^n + \dots + c_0 \right)
+ = c_n\sigma(\alpha)^n + \dots + c_0 = f(\sigma(\alpha)), \]
+ where we expanded using the fact that $\sigma$ is a field homomorphism
+ fixing each coefficient $c_i \in F$.
+\end{proof}
+In particular, taking $f$ to be the minimal polynomial of $\alpha$ we deduce
+\begin{moral}
+ An embedding $\sigma \in \Aut(K/F)$ sends an $\alpha \in K$
+ to one of its various Galois conjugates (over $F$).
+\end{moral}
+
+Next, let's look again at the ``deficiency'' of certain fields.
+Look at $K = \QQ(\cbrt 2)$.
+So, again $K / \QQ$ is deficient for two reasons.
+First, while there are three maps $\QQ(\cbrt 2) \injto \CC$,
+only one of them lives in $\Aut(K/\QQ)$, namely the identity.
+In other words, $\left\lvert \Aut(K/\QQ) \right\rvert$ is \emph{too small}.
+Secondly, $K$ is missing some Galois conjugates ($\omega \cbrt 2$ and $\omega^2 \cbrt 2$).
+
+The way to capture the fact that there are missing Galois conjugates
+is the notion of a splitting field.
+\begin{definition}
+ Let $F$ be a field and $p(x) \in F[x]$ a polynomial of degree $n$.
+ Then $p(x)$ has roots $\alpha_1, \dots, \alpha_n$ in an algebraic closure of $F$.
+ The \vocab{splitting field} of $p(x)$ over $F$ is defined as $F(\alpha_1, \dots, \alpha_n)$.
+\end{definition}
+In other words, the splitting field is the smallest field extension of $F$
+in which $p(x)$ splits into linear factors.
+\begin{example}[Examples of splitting fields]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The splitting field of $x^2 - 5$ over $\QQ$ is $\QQ(\sqrt 5)$.
+ This is a degree $2$ extension.
+ \ii The splitting field of $x^2+x+1$ over $\QQ$ is $\QQ(\omega)$,
+ where $\omega$ is a cube root of unity.
+ This is a degree $2$ extension.
+ % In particular, the splitting field of $x^3-2$ is a degree \emph{six} extension.
+ \ii The splitting field of $x^2+3x+2 = (x+1)(x+2)$ is just $\QQ$!
+ There's nothing to do.
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Splitting fields: a cautionary tale]
+ The splitting field of $x^3 - 2$ over $\QQ$ is in fact
+ \[ \QQ( \cbrt 2, \omega ) \]
+ and not just $\QQ(\cbrt 2)$!
+ One must really adjoin \emph{all} the roots, and it's not necessarily the case that
+ these roots will generate each other.
+
+ To be clear:
+ \begin{itemize}
+ \ii For $x^2-5$, we adjoin $\sqrt 5$ and this will automatically include $-\sqrt 5$.
+ \ii For $x^2+x+1$, we adjoin $\omega$ and get the other root $\omega^2$ for free.
+ \ii But for $x^3-2$, if we adjoin $\cbrt 2$,
+ we do NOT get $\omega\cbrt2$ and $\omega^2\cbrt2$ for free.
+ Indeed, $\QQ(\cbrt 2) \subset \RR$!
+ \end{itemize}
+ Note that in particular, the splitting field of
+ $x^3-2$ over $\QQ$ is \emph{degree six}, not just degree three.
+\end{example}
+
+In general,
+\textbf{the splitting field of a polynomial can be an extension of degree up to $n!$}.
+The reason is that if $p(x)$ has $n$ roots and none of them are ``related'' to each other,
+then any permutation of the roots will work.
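+
+\begin{example}
+ [A splitting field of degree $3! = 6$]
+ For instance, one can check that $x^3 - x - 1$ is irreducible over $\QQ$
+ (it has no rational roots), and it turns out its three roots satisfy no
+ extra algebraic relations over $\QQ$;
+ consequently its splitting field has degree exactly $6$ over $\QQ$.
+\end{example}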
+
+Now, we obtain:
+\begin{theorem}[Galois extensions are splitting]
+ For finite extensions $K/F$,
+ $\left\lvert \Aut(K/F) \right\rvert$ divides $[K:F]$,
+ with equality if and only if $K$ is the \emph{splitting field}
+ of some separable polynomial with coefficients in $F$.
+ \label{thm:Galois_splitting}
+\end{theorem}
+The proof of this is deferred to an optional section at the end of the chapter.
+If $K/F$ is a finite extension and $\left\lvert \Aut(K/F) \right\rvert = [K:F]$,
+we say the extension $K/F$ is \vocab{Galois}.
+In that case, we denote $\Aut(K/F)$ by $\Gal(K/F)$ instead
+and call this the \vocab{Galois group} of $K/F$.
+
+\begin{example}
+ [Examples and non-examples of Galois extensions]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The extension $\QQ(\sqrt2) / \QQ$ is Galois,
+ since it's the splitting field of $x^2-2$ over $\QQ$.
+ The Galois group has order two, $\sqrt 2 \mapsto \pm \sqrt 2$.
+ \ii The extension $\QQ(\sqrt2, \sqrt 3) / \QQ$ is Galois,
+ since it's the splitting field of $(x^2-2)(x^2-3)$ over $\QQ$.
+ As discussed before, the Galois group is $\Zc 2 \times \Zc 2$.
+ \ii The extension $\QQ(\cbrt{2}) / \QQ$ is \emph{not} Galois.
+ \end{enumerate}
+\end{example}
+
+%Here is some more intuition on what $[K:F]$ actually measures: suppose $K$ is a splitting field
+%of some $(x-\alpha_1) \dots (x-\alpha_n)$, meaning $K = F(\alpha_1, \dots, \alpha_n)$.
+%Then a permutation $\sigma \in \Aut(K/F)$ is determined by where it sends each $\alpha_i$.
+%The dimension of $[K:F]$ measures how much ``redundancy'' there is among the $\alpha_i$.
+%For example, in the case of \[ (x-\sqrt5)(x+\sqrt5) = x^2-5 \]the $\sqrt 5$ and $-\sqrt 5$ were redundant,
+%in the sense that knowing $\sigma(\sqrt 5)$ tells you $\sigma(-\sqrt 5) = -\sigma(\sqrt 5)$.
+%But in the $x^3-2$ case, knowing $\sigma(\cbrt{2})$ does \emph{not} tell you where
+%$\omega\cbrt{2}$ should go; this is reflected in the fact that $[K:F]$ and $\Aut(K/F)$
+%are both six rather than three.
+
+To explore $\QQ(\cbrt 2)$ one last time:
+\begin{example}
+ [Galois closures, and the automorphism group of $\QQ(\cbrt2, \omega)$]
+ Let's return to the field $K = \QQ(\cbrt 2, \omega)$,
+ which is a field with $[K:\QQ] = 6$.
+ Consider the two automorphisms:
+ \[
+ \sigma:
+ \begin{cases}
+ \cbrt 2 &\mapsto \omega \cbrt 2 \\
+ \omega &\mapsto \omega
+ \end{cases}
+ \quad\text{and}\quad
+ \tau:
+ \begin{cases}
+ \cbrt 2 &\mapsto \cbrt 2 \\
+ \omega &\mapsto \omega^2.
+ \end{cases}
+ \]
+ Notice that $\sigma^3 = \tau^2 = \id$.
+ From this one can see that the automorphism group of $K$ must have order $6$:
+ it has order dividing $[K:\QQ] = 6$ by \Cref{thm:Galois_splitting},
+ while Lagrange's theorem applied to $\sigma$ and $\tau$
+ shows the order is divisible by both $3$ and $2$.
+ So, $K/\QQ$ is Galois! Actually one can check explicitly that
+ \[ \Gal(K/\QQ) \cong S_3 \]
+ is the symmetric group on $3$ elements, with order $3! = 6$.
+\end{example}
+This example illustrates the fact that
+given a non-Galois field extension,
+one can ``add in'' missing conjugates to make it Galois.
+This is called taking a \vocab{Galois closure}.
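+
+\begin{example}
+ [Seeing $S_3$ concretely]
+ To see why $\Gal(K/\QQ) \cong S_3$ above:
+ each automorphism permutes the three roots
+ $\cbrt 2$, $\omega \cbrt 2$, $\omega^2 \cbrt 2$ of $x^3 - 2$,
+ and is determined by that permutation, since the roots generate $K$.
+ For instance $\sigma$ acts as the $3$-cycle
+ $\cbrt 2 \mapsto \omega \cbrt 2 \mapsto \omega^2 \cbrt 2 \mapsto \cbrt 2$,
+ while $\tau$ fixes $\cbrt 2$ and swaps $\omega \cbrt 2 \leftrightarrow \omega^2 \cbrt 2$.
+\end{example}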
+
+\section{Fundamental theorem of Galois theory}
+After all this stuff about Galois Theory, I might as well tell you the fundamental theorem,
+though I won't prove it.
+Basically, it says that if $K/F$ is Galois with Galois group $G$, then:
+\begin{moral}
+ Subgroups of $G$ correspond exactly to fields $E$ with $F \subseteq E \subseteq K$.
+\end{moral}
+
+To tell you how the bijection goes, I have to define a fixed field.
+\begin{definition}
+ Let $K$ be a field and $H$ a subgroup of $\Aut(K/F)$.
+ We define the \vocab{fixed field} of $H$, denoted $K^H$, as
+ \[ K^H \defeq \left\{ x \in K : \sigma(x)=x \; \forall \sigma \in H \right\}. \]
+\end{definition}
+\begin{ques}
+ Verify quickly that $K^H$ is actually a field.
+\end{ques}
+
+Now let's look at examples again.
+Consider $K = \QQ(\sqrt2, \sqrt3)$,
+where \[ G = \Gal(K/\QQ) = \{\id, \sigma, \tau, \sigma\tau\} \]
+is the Klein four group
+(where $\sigma(\sqrt2) = -\sqrt 2$ but $\sigma(\sqrt 3) = \sqrt 3$; $\tau$ goes the other way).
+\begin{ques}
+ Let $H = \{\id, \sigma\}$. What is $K^H$?
+\end{ques}
+In that case, the diagram of fields between $\QQ$ and $K$
+matches exactly with the subgroups of $G$, as follows:
+\begin{center}
+\begin{tikzcd}
+ & \QQ(\sqrt2, \sqrt3) \ar[ld, dash] \ar[d, dash] \ar[rd, dash] & \\
+ \QQ(\sqrt2) \ar[rd, dash] & \QQ(\sqrt6) \ar[d, dash] & \QQ(\sqrt3) \ar[ld, dash] \\
+ & \QQ &
+\end{tikzcd}
+\begin{tikzcd}
+ & \{\id\} \ar[ld, dash] \ar[d, dash] \ar[rd, dash] & \\
+ \{\id,\tau\} \ar[rd, dash] & \{\id,\sigma\tau\} \ar[d, dash] & \{\id,\sigma\} \ar[ld, dash] \\
+ & G &
+\end{tikzcd}
+\end{center}
+We see that subgroups correspond to fixed fields.
+That, and much more, holds in general.
+
+\begin{theorem}[Fundamental theorem of Galois theory]
+ Let $K/F$ be a Galois extension with Galois group $G = \Gal(K/F)$.
+ \begin{enumerate}[(a)]
+ \ii There is a bijection between field towers $F \subseteq E \subseteq K$ and subgroups $H \subseteq G$:
+ \[
+ \left\{
+ \begin{array}{c}
+ K \\ \mid \\ E \\ \mid \\ F
+ \end{array}
+ \right\}
+ \iff
+ \left\{
+ \begin{array}{c}
+ 1 \\ \mid \\ H \\ \mid \\ G
+ \end{array}
+ \right\}
+ \]
+ The bijection sends $H$ to its fixed field $K^H$, and hence is inclusion reversing.
+ \ii Under this bijection, we have $[K:E] = \left\lvert H \right\rvert$ and $[E:F] = [G:H]$.
+ \ii $K/E$ is always Galois, and its Galois group is $\Gal(K/E) = H$.
+ \ii $E/F$ is Galois if and only if $H$ is normal in $G$. If so, $\Gal(E/F) = G/H$.
+ \end{enumerate}
+\end{theorem}
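+
+\begin{example}
+ [Sanity check with $\QQ(\sqrt2,\sqrt3)$]
+ In the earlier picture $K = \QQ(\sqrt2,\sqrt3)$,
+ take $H = \{\id, \tau\}$, whose fixed field is $E = K^H = \QQ(\sqrt 2)$.
+ As the theorem promises, $[K:E] = 2 = \left\lvert H \right\rvert$
+ and $[E:\QQ] = 2 = [G:H]$.
+ Since $G$ is abelian, every subgroup is normal,
+ matching the fact that each of $\QQ(\sqrt2)$, $\QQ(\sqrt3)$, $\QQ(\sqrt6)$
+ is itself Galois over $\QQ$.
+\end{example}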
+
+\begin{exercise}
+ Suppose we apply this theorem for
+ \[ K = \QQ(\cbrt2, \omega). \]
+ Verify that the fact $E = \QQ(\cbrt 2)$ is not Galois
+ corresponds to the fact that $S_3$ does not have normal subgroups of order $2$.
+\end{exercise}
+
+
+\section{\problemhead}
+\begin{sproblem}[Galois group of the cyclotomic field]
+ Let $p$ be an odd rational prime and $\zeta_p$ a primitive $p$th root of unity.
+ Let $K = \QQ(\zeta_p)$.
+ Show that \[ \Gal(K/\QQ) \cong (\ZZ/p\ZZ)^\ast. \]
+ \begin{hint}
+ Look at the image of $\zeta_p$.
+ \end{hint}
+ \begin{sol}
+ Each automorphism must send $\zeta_p$ to one (any) of the $p-1$
+ primitive $p$th roots of unity, i.e.\ to $\zeta_p^a$ for some
+ $a \in (\ZZ/p\ZZ)^\ast$; sending the automorphism to $a$
+ gives the desired isomorphism.
+ \end{sol}
+\end{sproblem}
+
+\begin{problem}[Greek constructions]
+ Prove that the three Greek constructions
+ \begin{enumerate}[(a)]
+ \ii doubling the cube,
+ \ii squaring the circle, and
+ \ii trisecting an angle
+ \end{enumerate}
+ are all impossible.
+ (Assume $\pi$ is transcendental.)
+ \begin{hint}
+ Each quadratic extension has degree $2$, so towers of them
+ can only produce extensions whose degree is a power of two.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [China Hong Kong Math Olympiad]
+ \yod
+ Prove that there are no rational numbers $p$, $q$, $r$ satisfying
+ \[ \cos\left( \frac{2\pi}{7} \right)
+ = p + \sqrt{q} + \sqrt[3]{r}. \]
+ \begin{sol}
+ A similar (but not identical) problem is solved here:
+ \url{https://aops.com/community/c6h149153p842956}.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Show that the only automorphism of $\RR$ is the identity.
+ Hence $\Aut(\RR)$ is the trivial group.
+ \begin{hint}
+ Note $\sigma(x^2) = \sigma(x)^2 \ge 0$, then use Cauchy's Functional Equation.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[Artin's primitive element theorem]
+ \yod
+ Let $K$ be a number field.
+ Show that $K \cong \QQ(\gamma)$ for some $\gamma$.
+ \label{prob:artin_primitive_elm}
+ \begin{hint}
+ By induction, suffices to show $\QQ(\alpha, \beta) = \QQ(\gamma)$
+ for some $\gamma$ in terms of $\alpha$ and $\beta$.
+ For all but finitely many rational $\lambda$,
+ the choice $\gamma = \alpha + \lambda \beta$ will work.
+ \end{hint}
+ \begin{sol}
+ \url{http://www.math.cornell.edu/~kbrown/6310/primitive.pdf}
+ \end{sol}
+\end{problem}
+
+\section{(Optional) Proof that Galois extensions are splitting}
+We prove \Cref{thm:Galois_splitting}.
+First, we extract a useful fragment from the fundamental theorem.
+\begin{theorem}[Fixed field theorem]
+ \label{thm:fixed_field_theorem}
+ Let $K$ be a field and $G$ a finite subgroup of $\Aut(K)$.
+ Then $[K:K^G] = \left\lvert G \right\rvert$.
+\end{theorem}
+
+The inequality itself is not difficult:
+\begin{exercise}
+ Show that $[K:F] \ge |\Aut(K/F)|$,
+ and that equality holds if and only if
+ the set of elements fixed by all $\sigma \in \Aut(K/F)$
+ is exactly $F$.
+ (Use \Cref{thm:fixed_field_theorem}.)
+\end{exercise}
+The equality case is trickier.
+
+The easier direction is when $K$ is a splitting field.
+Assume $K = F(\alpha_1, \dots, \alpha_n)$ is the splitting field of some separable polynomial $p \in F[x]$
+with $n$ distinct roots $\alpha_1, \dots, \alpha_n$.
+Adjoin them one by one:
+\begin{center}
+\begin{tikzcd}
+ F \ar[r, hook] \ar[d, "\id"']
+ & F(\alpha_1) \ar[r, hook] \ar[d, "\tau_1"']
+ & F(\alpha_1, \alpha_2) \ar[r, hook] \ar[d, "\tau_2"']
+ & \dots \ar[r, hook]
+ & K \ar[d, "\tau_n=\sigma"] \\
+ F \ar[r, hook]
+ & F(\alpha_1) \ar[r, hook]
+ & F(\alpha_1, \alpha_2) \ar[r, hook]
+ & \dots \ar[r, hook]
+ & K
+\end{tikzcd}
+\end{center}
+(Does this diagram look familiar?)
+Every map $K \to K$ which fixes $F$ corresponds to a commutative diagram of the above shape.
+As before, there are exactly $[F(\alpha_1) : F]$ ways to pick $\tau_1$.
+ (You need the fact that the minimal polynomial $p_1$ of $\alpha_1$ is separable here:
+ it guarantees exactly $\deg p_1 = [F(\alpha_1) : F]$ distinct roots
+ for $\tau_1$ to send $\alpha_1$ to.)
+Similarly, given a choice of $\tau_1$, there are $[F(\alpha_1, \alpha_2) : F(\alpha_1)]$ ways to pick $\tau_2$.
+Multiplying these all together gives the desired $[K:F]$.
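+
+\begin{example}
+ [Counting the maps for $\QQ(\sqrt2,\sqrt3)$]
+ For instance, $K = \QQ(\sqrt2,\sqrt3)$
+ is the splitting field of $(x^2-2)(x^2-3)$ over $\QQ$.
+ There are $[\QQ(\sqrt2) : \QQ] = 2$ choices for $\tau_1$
+ (namely $\sqrt2 \mapsto \pm\sqrt2$),
+ and then $[K : \QQ(\sqrt2)] = 2$ choices for $\tau_2$
+ ($\sqrt3 \mapsto \pm\sqrt3$),
+ for a total of $2 \cdot 2 = 4 = [K:\QQ]$ automorphisms, as found before.
+\end{example}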
+
+\bigskip
+
+Now assume $K/F$ is Galois.
+First, we state:
+\begin{lemma}
+ Let $K/F$ be Galois, and $p \in F[x]$ irreducible.
+ If any root of $p$ (in $\ol F$) lies in $K$, then all of them do,
+ and in fact $p$ is separable.
+\end{lemma}
+\begin{proof}
+ Let $\alpha \in K$ be the prescribed root.
+ Consider the set
+ \[ S = \left\{ \sigma(\alpha) \mid \sigma \in \Gal(K/F) \right\}. \]
+ (Note that $\alpha \in S$ since $\Gal(K/F) \ni \id$.)
+ By construction, any $\tau \in \Gal(K/F)$ permutes the elements of $S$,
+ and hence fixes $S$ as a set.
+ So if we construct
+ \[ \tilde p(x) = \prod_{\beta \in S} (x - \beta), \]
+ then by Vieta's formulas, we find that all the coefficients of $\tilde p$
+ are fixed by every $\tau \in \Gal(K/F)$.
+ By the \emph{equality case} we specified in the exercise, it follows that $\tilde p$ has coefficients in $F$!
+ (This is where we use the condition.)
+ Also, by \Cref{lem:root_shuffle}, $\tilde p$ divides $p$.
+
+ Yet $p$ was irreducible, so it is the minimal polynomial of $\alpha$ in $F[x]$,
+ and therefore we must have that $p$ divides $\tilde p$.
+ Hence $p = \tilde p$. Since $\tilde p$ was built to be separable, so is $p$.
+\end{proof}
+Now we're basically done -- pick a basis $\omega_1$, \dots, $\omega_n$ of $K/F$,
+and let $p_i$ be their minimal polynomials; by the above, we don't get any roots outside $K$.
+Consider $P = p_1 \dots p_n$, removing any repeated factors.
+The roots of $P$ are $\omega_1$, \dots, $\omega_n$ and some other guys in $K$.
+So $K$ is the splitting field of $P$.
diff --git a/books/napkin/genscheme.tex b/books/napkin/genscheme.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1c63f1b8bdcee44c0df1917e4cc1e0180a00e059
--- /dev/null
+++ b/books/napkin/genscheme.tex
@@ -0,0 +1 @@
+\chapter{Working with general schemes}
diff --git a/books/napkin/grp-intro.tex b/books/napkin/grp-intro.tex
new file mode 100644
index 0000000000000000000000000000000000000000..82183ec6155556d078f25353a68d3db37a2c938f
--- /dev/null
+++ b/books/napkin/grp-intro.tex
@@ -0,0 +1,827 @@
+\chapter{Groups}
+A group is one of the most basic structures in higher mathematics.
+In this chapter I will tell you only the bare minimum:
+what a group is, and when two groups are the same.
+
+\section{Definition and examples of groups}
+\prototype{The additive group of integers $(\ZZ,+)$ and the cyclic group $\Zc m$.
+Just don't let yourself forget that most groups are non-commutative.}
+
+A group consists of two pieces of data: a set $G$,
+and an associative binary operation $\star$ with some properties.
+Before I write down the definition of a group, let me give two examples.
+
+\begin{example}[Additive integers]
+ The pair $(\ZZ, +)$ is a group:
+ $\ZZ = \left\{ \dots,-2,-1,0,1,2,\dots \right\}$ is the set
+ and the associative operation is \emph{addition}.
+ Note that
+ \begin{itemize}
+ \ii The element $0 \in \ZZ$ is an \emph{identity}:
+ $a+0=0+a = a$ for any $a$.
+ \ii Every element $a \in \ZZ$ has an additive \emph{inverse}: $a + (-a) = (-a) + a = 0$.
+ \end{itemize}
+ We call this group $\ZZ$.
+\end{example}
+\begin{example}[Nonzero rationals]
+ Let $\QQ^\times$ be the set of \emph{nonzero rational numbers}.
+ The pair $(\QQ^\times, \cdot)$ is a group:
+ the set is $\QQ^\times$
+ and the associative operation is \emph{multiplication}.
+
+ Again we see the same two nice properties.
+ \begin{itemize}
+ \ii The element $1 \in \QQ^\times$ is an \emph{identity}:
+ for any rational number, $a \cdot 1 = 1 \cdot a = a$.
+ \ii For any rational number $x \in \QQ^\times$,
+ we have an inverse $x\inv$, such that
+ \[ x \cdot x\inv = x\inv \cdot x = 1. \]
+ \end{itemize}
+\end{example}
+
+From this you might already have a guess what the definition of a group is.
+\begin{definition}
+ A \vocab{group} is a pair $G = (G, \star)$
+ consisting of a set of elements $G$, and a binary operation $\star$ on $G$, such that:
+ \begin{itemize}
+ \ii $G$ has an \vocab{identity element}, usually denoted $1_G$
+ or just $1$, with the property that
+ \[ 1_G \star g = g \star 1_G = g \text{ for all $g \in G$}. \]
+ \ii The operation is \vocab{associative}, meaning
+ $(a \star b) \star c = a \star (b \star c)$
+ for any $a,b,c \in G$.
+ Consequently we generally don't write the parentheses.
+ \ii Each element $g \in G$ has an \vocab{inverse}, that is, an element $h \in G$ such that \[ g \star h = h \star g = 1_G. \]
+ \end{itemize}
+ \label{def:group}
+\end{definition}
+\begin{remark}
+ [Unimportant pedantic point]
+ Some authors like to add a ``closure'' axiom,
+ i.e.\ to say explicitly that $g \star h \in G$.
+ This is implied already by the fact that $\star$
+ is a binary operation on $G$,
+ but is worth keeping in mind for the examples below.
+\end{remark}
+
+\begin{remark}
+ It is not required that $\star$ is commutative ($a \star b = b \star a$).
+ So we say that a group is \vocab{abelian} if the operation is
+ commutative and \vocab{non-abelian} otherwise.
+\end{remark}
+
+% Now that I've made clear what the criteria of a group are,
+% let us write down some non-examples of groups.
+
+\begin{example}[Non-Examples of groups]
+ \listhack
+ \begin{itemize}
+ \ii The pair $(\QQ, \cdot)$ is NOT a group.
+ (Here $\QQ$ denotes the set of rational numbers.)
+ While there is an identity element, the element $0 \in \QQ$
+ does not have an inverse.
+ \ii The pair $(\ZZ, \cdot)$ is also NOT a group. (Why?)
+ \ii Let $\Mat_{2 \times 2}(\RR)$ be the set of $2 \times 2$ real matrices.
+ Then $(\Mat_{2 \times 2}(\RR), \cdot)$
+ (where $\cdot$ is matrix multiplication) is NOT a group.
+ Indeed, even though we have an identity matrix
+ \[
+ \begin{bmatrix}
+ 1 & 0 \\ 0 & 1
+ \end{bmatrix}
+ \]
+ we still run into the same issue as before:
+ the zero matrix does not have a multiplicative inverse.
+
+ (Even if we delete the zero matrix from the set,
+ the resulting structure is still not a group:
+ those of you that know some linear algebra
+ might recall that any matrix with determinant zero
+ cannot have an inverse.)
+ \end{itemize}
+\end{example}
+
+Let's resume writing down examples.
+Here are some more \textbf{abelian examples} of groups:
+\begin{example}
+ [Complex unit circle]
+ Let $S^1$ denote the set of complex numbers $z$ with absolute value one; that is
+ \[ S^1 \defeq \left\{ z \in \CC \mid \left\lvert z \right\rvert = 1 \right\}. \]
+ Then $(S^1, \times)$ is a group because
+ \begin{itemize}
+ \ii The complex number $1 \in S^1$ serves as the identity, and
+ \ii Each complex number $z \in S^1$ has an inverse $\frac 1z$ which is also in $S^1$, since $\left\lvert z\inv \right\rvert = \left\lvert z \right\rvert\inv = 1$.
+ \end{itemize}
+ There is one thing I ought to also check: that $z_1 \times z_2$ is actually still in $S^1$.
+ But this follows from the fact that $\left\lvert z_1z_2 \right\rvert = \left\lvert z_1 \right\rvert \left\lvert z_2 \right\rvert = 1$.
+\end{example}
+
+\begin{example}
+ [Addition mod $n$]
+ Here is an example from number theory:
+ Let $n > 1$ be an integer,
+ and consider the residues (remainders) modulo $n$.
+ These form a group under addition.
+ We call this the \vocab{cyclic group of order $n$},
+ and denote it as $\Zc n$, with elements $\ol 0, \ol 1, \dots$.
+ The identity is $\ol 0$.
+ \label{def:cyclic_group}
+\end{example}
+\begin{example}
+ [Multiplication mod $p$]
+ Let $p$ be a prime.
+ Consider the \emph{nonzero residues modulo $p$},
+ which we denote by $\Zm p$.
+ Then $\left( \Zm p, \times \right)$ is a group.
+ \label{def:mult_mod_p}
+\end{example}
+\begin{ques}
+ Why do we need the fact that $p$ is prime?
+\end{ques}
+(Digression: the notation $\Zc n$ and $\Zm p$ may seem strange
+but will make sense when we talk about rings and ideals.
+Set aside your worry for now.)
+
+
+Here are some \textbf{non-abelian examples}:
+\begin{example}
+ [General linear group]
+ Let $n$ be a positive integer.
+ Then $\GL_n(\RR)$ is defined as the set of $n \times n$ real matrices
+ which have nonzero determinant.
+ It turns out that with this condition,
+ every matrix does indeed have an inverse,
+ so $(\GL_n(\RR), \times)$ is a group, called the
+ \vocab{general linear group}.
+
+ (The fact that $\GL_n(\RR)$ is closed under $\times$ follows
+ from the linear algebra fact that $\det (AB) = \det A \det B$,
+ proved in later chapters.)
+\end{example}
+\begin{example}
+ [Special linear group]
+ Following the example above, let $\SL_n(\RR)$ denote
+ the set of $n \times n$ matrices whose determinant is actually $1$.
+ Again, for linear algebra reasons
+ it turns out that $(\SL_n(\RR), \times)$ is also a group,
+ called the \vocab{special linear group}.
+\end{example}
+
+\begin{example}
+ [Symmetric groups]
+ Let $S_n$ be the set of permutations of $\left\{ 1,\dots,n \right\}$.
+ By viewing these permutations as functions from $\left\{ 1,\dots,n \right\}$ to itself, we can consider \emph{compositions} of permutations.
+ Then the pair $(S_n, \circ)$ (here $\circ$ is function composition)
+ is also a group, because
+ \begin{itemize}
+ \ii There is an identity permutation, and
+ \ii Each permutation has an inverse.
+ \end{itemize}
+ The group $S_n$ is called the \vocab{symmetric group} on $n$ elements.
+\end{example}
+\begin{example}
+ [Dihedral group]
+ The \vocab{dihedral group of order $2n$}, denoted $D_{2n}$,
+ is the group of symmetries of a regular $n$-gon $A_1A_2 \dots A_n$,
+ which includes rotations and reflections.
+ It consists of the $2n$ elements
+ \[ \left\{ 1, r, r^2, \dots, r^{n-1}, s, sr, sr^2, \dots, sr^{n-1} \right\}. \]
+ The element $r$ corresponds to rotating the $n$-gon by $\frac{2\pi}{n}$,
+ while $s$ corresponds to reflecting it across the line $OA_1$
+ (here $O$ is the center of the polygon).
+ So $rs$ means ``reflect then rotate''
+ (like with function composition, we read from right to left).
+
+ In particular, $r^n = s^2 = 1$. You can also see that $r^k s = sr^{-k}$.
+\end{example}
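+
+The relation $r^k s = s r^{-k}$ can be derived rather than memorized:
+geometrically, conjugating a rotation by a reflection reverses it,
+i.e.\ $srs = r\inv$. Combined with $s^2 = 1$ this gives
+\[ r^k s = s \left( s r^k s \right) = s (srs)^k = s r^{-k}. \]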
+
+Here is a picture of some elements of $D_{10}$.
+\begin{center}
+ \begin{asy}
+ size(12cm);
+ picture aoeu(string a, string b, string c, string d, string e,
+ string x) {
+ draw(dir(0)--dir(72)--dir(144)--dir(216)--dir(288)--cycle);
+ MP(a, dir(0), dir(0));
+ MP(b, dir(72), dir(72));
+ MP(c, dir(144), dir(144));
+ MP(d, dir(216), dir(216));
+ MP(e, dir(288), dir(288));
+ MP(x, origin, origin);
+ return CC();
+ }
+ picture one = aoeu("1", "2", "3", "4", "5", "1");
+ picture r = aoeu("5", "1", "2", "3", "4", "r");
+ picture s = aoeu("1", "5", "4", "3", "2", "s");
+ picture sr = aoeu("5", "4", "3", "2", "1", "sr");
+ picture rs = aoeu("2", "1", "5", "4", "3", "rs");
+ add(shift( (0,0) ) * one);
+ add(shift( (3,0) ) * r);
+ add(shift( (6,0) ) * s);
+ add(shift( (9,0) ) * sr);
+ add(shift( (12,0) ) * rs);
+ \end{asy}
+\end{center}
+Trivia: the dihedral group $D_{12}$ is my favorite example of a non-abelian group,
+and is the first group I try for any exam question of the form ``find an example\dots''.
+
+More examples:
+\begin{example}
+ [Products of groups]
+ Let $(G, \star)$ and $(H, \ast)$ be groups.
+ We can define a \vocab{product group} $(G \times H, {\cdot})$, as follows.
+ The elements of the group will be ordered pairs $(g,h) \in G \times H$.
+ Then
+ \[ (g_1, h_1) \cdot (g_2, h_2) = (g_1 \star g_2, h_1 \ast h_2) \in G \times H
+ \]
+ is the group operation.
+ \label{def:product_group}
+\end{example}
+\begin{ques}
+ What are the identity and inverses of the product group?
+\end{ques}
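+
+\begin{example}
+ [A small product group]
+ For example, $\Zc2 \times \Zc2$ has the four elements
+ $(\ol0,\ol0)$, $(\ol0,\ol1)$, $(\ol1,\ol0)$, $(\ol1,\ol1)$,
+ added coordinate-wise.
+ Every element is its own inverse;
+ this group is called the \emph{Klein four group}.
+\end{example}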
+
+\begin{example}
+ [Trivial group]
+ The \vocab{trivial group}, often denoted $0$ or $1$,
+ is the group with only an identity element.
+ I will use the notation $\{1\}$.
+\end{example}
+
+\begin{exercise}
+ Which of these are groups?
+ \begin{enumerate}[(a)]
+ \ii Rational numbers with odd denominators (in simplest form), where the operation is addition.
+ (This includes integers, written as $n/1$, and $0 = 0/1$).
+ \ii The set of rational numbers with denominator at most $2$, where the operation is addition.
+ \ii The set of rational numbers with denominator at most $2$, where the operation is multiplication.
+ \ii The set of nonnegative integers, where the operation is addition.
+ \end{enumerate}
+\end{exercise}
+
+
+\section{Properties of groups}
+\prototype{$\Zm p$ is possibly best.}
+\begin{abuse}
+ From now on, we'll often refer to a group $(G, \star)$ by just $G$.
+ Moreover, we'll abbreviate $a \star b$ to just $ab$.
+ Also, because the operation $\star$ is associative,
+ we will omit unnecessary parentheses: $(ab)c = a(bc) = abc$.
+\end{abuse}
+\begin{abuse}
+ From now on, for any $g \in G$ and $n \in \NN$ we abbreviate
+ \[ g^n
+ =
+ \underbrace{g \star \dots \star g}_{\text{$n$ times}}.\]
+ Moreover, we let $g\inv$ denote the inverse of $g$,
+ and $g^{-n} = (g\inv)^n$.
+\end{abuse}
+
+In mathematics, a common theme is to require
+that objects satisfy certain minimalistic properties,
+with certain examples in mind,
+but then ignore the examples on paper
+and try to deduce as much as you can just from the properties alone.
+(Math olympiad veterans are likely familiar with
+``functional equations''
+in which knowing a single property about a function
+is enough to determine the entire function.)
+Let's try to do this here,
+and see what we can conclude just from knowing \Cref{def:group}.
+
+It is a law in Guam and 37 other states that
+I now state the following proposition.
+\begin{fact}
+ Let $G$ be a group.
+ \begin{enumerate}[(a)]
+ \ii The identity of a group is unique.
+ \ii The inverse of any element is unique.
+ \ii For any $g \in G$, $(g\inv)\inv = g$.
+ \end{enumerate}
+\end{fact}
+\begin{proof}
+ This is mostly just some formal manipulations,
+ and you needn't feel bad skipping it on a first read.
+ \begin{enumerate}[(a)]
+ \ii If $1$ and $1'$ are identities, then $1 = 1 \star 1' = 1'$.
+ \ii If $h$ and $h'$ are inverses to $g$, then $1_G = g \star h
+ \implies h' = (h' \star g) \star h = 1_G \star h = h$.
+ \ii Trivial; omitted. \qedhere
+ \end{enumerate}
+\end{proof}
+
+Now we state a slightly more useful proposition.
+\begin{proposition}[Inverse of products]
+ Let $G$ be a group, and $a,b \in G$.
+ Then $(ab)\inv = b\inv a\inv$.
+\end{proposition}
+\begin{proof}
+ Direct computation. We have
+ \[ (ab)(b\inv a\inv)
+ = a (bb\inv) a\inv = aa\inv = 1_G. \]
+ Similarly, $(b\inv a\inv)(ab) = 1_G$ as well.
+ Hence $(ab)\inv = b\inv a\inv$.
+\end{proof}
+
+Finally, we state a very important lemma about groups,
+which highlights why having an inverse is so valuable.
+\begin{lemma}[Left multiplication is a bijection]
+ Let $G$ be a group, and pick a $g \in G$.
+ Then the map $G \to G$ given by $x \mapsto gx$ is a bijection.
+ \label{lem:group_mult_biject}
+\end{lemma}
+\begin{exercise}
+ Check this by showing injectivity and surjectivity directly.
+ (If you don't know what these words mean,
+ consult \Cref{ch:sets_functions}.)
+\end{exercise}
+\begin{example}
+ Let $G = \Zm 7$ (as in \Cref{def:mult_mod_p}) and pick $g=3$.
+ The above lemma states that the map $x \mapsto 3 \cdot x$ is a bijection, and we can see this explicitly:
+ \begin{align*}
+ 1 &\overset{\times 3}{\longmapsto} 3 \pmod 7 \\
+ 2 &\overset{\times 3}{\longmapsto} 6 \pmod 7 \\
+ 3 &\overset{\times 3}{\longmapsto} 2 \pmod 7 \\
+ 4 &\overset{\times 3}{\longmapsto} 5 \pmod 7 \\
+ 5 &\overset{\times 3}{\longmapsto} 1 \pmod 7 \\
+ 6 &\overset{\times 3}{\longmapsto} 4 \pmod 7.
+ \end{align*}
+\end{example}
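The table above is easy to reproduce by machine. Purely as a sanity check (this code is my addition, not part of the text), one can confirm in Python that multiplication by $3$ permutes the elements of $\Zm 7$:

```python
# Left multiplication by g = 3 in (Z/7Z)^x is a bijection:
# the image of the group is the group itself.
G = [1, 2, 3, 4, 5, 6]   # elements of (Z/7Z)^x
g = 3
image = sorted((g * x) % 7 for x in G)
assert image == G        # x -> g*x hits every element exactly once
```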
+The fact that the map is injective is often called the \vocab{cancellation law}.
+(Why do you think so?)
+
+\begin{abuse}
+ [Later on, sometimes the identity is denoted $0$ instead of $1$]
+ You don't need to worry about this for a few chapters,
+ but I'll bring it up now anyways.
+ In most of our examples up until now the operation $\star$
+ was thought of like multiplication of some sort,
+ which is why $1 = 1_G$ was a natural notation for the identity element.
+
+ But there are groups like $\ZZ = (\ZZ,+)$
+ where the operation $\star$ is thought of as addition,
+ in which case the notation $0 = 0_G$ might make more sense instead.
+ (In general, whenever an operation is denoted $+$,
+ the operation is almost certainly commutative.)
+ We will eventually start doing so too
+ when we discuss rings and linear algebra.
+\end{abuse}
+
+\section{Isomorphisms}
+\prototype{$\ZZ \cong 10\ZZ$.}
+First, let me talk about what it means for groups to be isomorphic.
+Consider the two groups
+\begin{itemize}
+ \ii $\ZZ = (\left\{ \dots,-2,-1,0,1,2,\dots \right\}, +)$.
+ \ii $10\ZZ = (\left\{ \dots, -20, -10, 0, 10, 20, \dots \right\}, +)$.
+\end{itemize}
+These groups are ``different'', but only superficially so -- you might even say they only differ in the names of the elements.
+Think about what this might mean formally for a moment.
+
+Specifically the map
+\[ \phi : \ZZ \to 10 \ZZ \text{ by } x \mapsto 10 x \]
+is a bijection of the underlying sets which respects the group action.
+In symbols,
+\[ \phi(x + y) = \phi(x) + \phi(y). \]
+In other words, $\phi$ is a way of re-assigning names of the elements
+without changing the structure of the group.
+That's all just formalism for
+capturing the obvious fact that $(\ZZ,+)$
+and $(10 \ZZ, +)$ are the same thing.
+
+Now, let's do the general definition.
+\begin{definition}
+ Let $G = (G, \star)$ and $H = (H, \ast)$ be groups.
+ A bijection $\phi : G \to H$ is called an \vocab{isomorphism} if
+ \[ \phi(g_1 \star g_2) = \phi(g_1) \ast \phi(g_2) \quad
+ \text{for all $g_1, g_2 \in G$}. \]
+ If there exists an isomorphism from $G$ to $H$,
+ then we say $G$ and $H$ are \vocab{isomorphic} and write $G \cong H$.
+\end{definition}
+Note that in this definition, the left-hand side
+$\phi(g_1 \star g_2)$ uses the operation of $G$
+while the right-hand side $\phi(g_1) \ast \phi(g_2)$
+uses the operation of $H$.
+
+\begin{example}
+ [Examples of isomorphisms]
+ Let $G$ and $H$ be groups. We have the following isomorphisms.
+ \begin{enumerate}[(a)]
+ \ii $\ZZ \cong 10 \ZZ$, as above.
+ \ii There is an isomorphism
+ \[ G \times H \cong H \times G\]
+ by the map $(g,h) \mapsto (h,g)$.
+ \ii The identity map $\id : G \to G$
+ is an isomorphism, hence $G \cong G$.
+ \ii There is another isomorphism of $\ZZ$ to itself: send every $x$ to $-x$.
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Primitive roots modulo $7$]
+ As a nontrivial example, we claim that $\Zc 6 \cong \Zm 7$.
+ The bijection is
+ \[ \phi(\text{$a$ mod $6$}) = \text{$3^a$ mod $7$}. \]
+ \begin{itemize}
+ \ii This map is a bijection by explicit calculation:
+ \[ (3^0, 3^1, 3^2, 3^3, 3^4, 3^5)
+ \equiv (1,3,2,6,4,5) \pmod 7. \]
+ (Technically, I should more properly write $3^{0 \bmod 6} = 1$ and so on to be pedantic.)
+ \ii Finally, we need to verify that this map respects the group action.
+ In other words, we want to see that
+ $\phi(a+b) = \phi(a) \phi(b)$
+ since the operation of $\Zc 6$ is addition
+ while the operation of $\Zm 7$ is multiplication.
+ That's just saying that $3^{a+b \bmod 6} \equiv 3^{a \bmod 6} 3^{b \bmod 6} \pmod 7$,
+ which is true.
+ \end{itemize}
+\end{example}
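Both bullet points can be verified mechanically. Here is a hedged Python sketch (my addition, purely illustrative) checking that $\phi$ is a bijection and that it converts addition in $\Zc 6$ into multiplication in $\Zm 7$:

```python
# phi(a mod 6) = 3^a mod 7, a map from (Z/6Z, +) to ((Z/7Z)^x, *).
phi = {a: pow(3, a, 7) for a in range(6)}

# Bijectivity: the six values are exactly {1, ..., 6}.
assert sorted(phi.values()) == [1, 2, 3, 4, 5, 6]

# Respecting the operations: phi(a + b) = phi(a) * phi(b).
for a in range(6):
    for b in range(6):
        assert phi[(a + b) % 6] == (phi[a] * phi[b]) % 7
```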
+\begin{example}
+ [Primitive roots]
+ More generally, for any prime $p$, there exists
+ an element $g \in \Zm p$ called a \vocab{primitive root} modulo $p$
+ such that $1, g, g^2, \dots, g^{p-2}$ are all different modulo $p$.
+ One can show by copying the above proof that
+ \[ \Zcc{p-1} \cong \Zm p \text{ for all primes $p$}. \]
+ The example above was the special case $p=7$ and $g=3$.
+\end{example}
+\begin{exercise}
+ Assuming the existence of primitive roots,
+ establish the isomorphism $\Zcc{p-1} \cong \Zm p$ as above.
+\end{exercise}
+
+It's not hard to see that $\cong$ is an equivalence relation (why?).
+Moreover, because we really only care about the structure of groups,
+we'll usually consider two groups to be the same when they are isomorphic.
+So phrases such as ``find all groups'' really mean
+``find all groups up to isomorphism''.
+
+\section{Orders of groups, and Lagrange's theorem}
+\prototype{$\Zm p$.}
+
+As is typical in math, we use the word ``order'' for way too many things.
+In groups, there are two notions of order.
+\begin{definition}
+ The \vocab{order of a group} $G$ is the number of elements of $G$.
+ We denote this by $\left\lvert G \right\rvert$.
+ Note that the order may not be finite, as in $\ZZ$.
+ We say $G$ is a \vocab{finite group} just to mean that $\left\lvert G \right\rvert$ is finite.
+\end{definition}
+\begin{example}[Orders of groups]
+ For a prime $p$, $\left\lvert \Zm p \right\rvert = p-1$.
+ In other words, the order of $\Zm p$ is $p-1$.
+ As another example,
+ the order of the symmetric group $S_n$ is $n!$
+ and the order of the dihedral group $D_{2n}$ is $2n$.
+\end{example}
+
+\begin{definition}
+ The \vocab{order of an element} $g \in G$ is the smallest positive integer $n$
+ such that $g^n = 1_G$, or $\infty$ if no such $n$ exists.
+ We denote this by $\ord g$.
+\end{definition}
+\begin{example}[Examples of orders]
+ The order of $-1$ in $\QQ^\times$ is $2$,
+ while the order of $1$ in $\ZZ$ is infinite.
+\end{example}
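In a concrete group like $\Zm n$, the order of an element can be computed straight from the definition by taking successive powers. A minimal Python sketch (the helper is my own, not from the text):

```python
def order_mod(g, n):
    """Order of g in (Z/nZ)^x: smallest k >= 1 with g^k = 1 (mod n).
    Assumes gcd(g, n) = 1, so such a k exists."""
    k, power = 1, g % n
    while power != 1:
        power = (power * g) % n
        k += 1
    return k

assert order_mod(3, 7) == 6   # 3 is a primitive root mod 7
assert order_mod(2, 7) == 3   # 2^3 = 8 = 1 (mod 7)
assert order_mod(6, 7) == 2   # 6 = -1 (mod 7)
```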
+\begin{ques}
+ Find the order of each of the six elements of $\Zc 6$,
+ the cyclic group on six elements.
+ (See \Cref{def:cyclic_group} if you've forgotten what $\Zc 6$ means.)
+\end{ques}
+\begin{example}[Primitive roots]
+ If you know olympiad number theory, this coincides with the definition of an order of a residue mod $p$.
+ That's why we use the term ``order'' there as well.
+ In particular, a primitive root is precisely an element $g \in \Zm p$
+ such that $\ord g = p-1$.
+\end{example}
+You might also know that if $x^n \equiv 1 \pmod p$,
+then the order of $x \pmod p$ must divide $n$.
+The same is true in a general group for exactly the same reason.
+\begin{fact}
+ If $g^n = 1_G$ then $\ord g$ divides $n$.
+\end{fact}
+Also, you can show that any element
+of a finite group has a finite order.
+The proof is just an olympiad-style pigeonhole argument.
+Consider the infinite sequence $1_G, g, g^2, \dots$,
+and find two elements that are the same.
+\begin{fact}
+ Let $G$ be a finite group.
+ For any $g \in G$, $\ord g$ is finite.
+\end{fact}
+
+What's the last property of $\Zm p$ that you know from olympiad math?
+We have Fermat's little theorem: for any $a \in \Zm p$,
+we have $a^{p-1} \equiv 1 \pmod p$.
+This is no coincidence:
+exactly the same thing is true in a more general setting.
+
+\begin{theorem}
+ [Lagrange's theorem for orders]
+ Let $G$ be any finite group.
+ Then $x^{\left\lvert G \right\rvert} = 1_G$ for any $x \in G$.
+\end{theorem}
+Keep this result in mind! We'll prove it later in
+the generality of \Cref{thm:lagrange_grp}.
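Although the proof is deferred, the statement itself is easy to spot-check by computer. Here is a hedged Python sketch (my addition) verifying it for the groups of units modulo $n$, whose order is Euler's $\phi(n)$:

```python
from math import gcd

# Lagrange's theorem for orders in (Z/nZ)^x: every unit x satisfies
# x^|G| = 1, where |G| is the number of units mod n.
for n in range(2, 40):
    G = [x for x in range(1, n) if gcd(x, n) == 1]   # units mod n
    for x in G:
        assert pow(x, len(G), n) == 1                # x^|G| = 1_G
```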
+
+\section{Subgroups}
+\prototype{$\SL_n(\RR)$ is a subgroup of $\GL_n(\RR)$.}
+Earlier we saw that $\GL_n(\RR)$, the $n \times n$ matrices with nonzero determinant, formed a group under matrix multiplication.
+But we also saw that a subset of $\GL_n(\RR)$, namely $\SL_n(\RR)$, also formed a group with the same operation.
+For that reason we say that $\SL_n(\RR)$ is a subgroup of $\GL_n(\RR)$.
+And this definition generalizes in exactly the way you expect.
+
+\begin{definition}
+ Let $G = (G, \star)$ be a group.
+ A \vocab{subgroup} of $G$ is exactly what you would expect it to be:
+ a group $H = (H, \star)$ where $H$ is a subset of $G$.
+ It's a \vocab{proper subgroup} if $H \neq G$.
+\end{definition}
+
+\begin{remark}
+ To specify a group $G$, I needed to tell you both what the set $G$ was and the operation $\star$ was.
+ But to specify a subgroup $H$ of a given group $G$, I only need to tell you who its elements are: the operation of $H$ is just inherited from the operation of $G$.
+\end{remark}
+
+\begin{example}
+ [Examples of subgroups]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $2\ZZ$ is a subgroup of $\ZZ$, which is isomorphic to $\ZZ$ itself!
+ \ii Consider again $S_n$, the symmetric group on $n$ elements.
+ Let $T$ be the set of permutations $\tau : \{1, \dots, n\} \to \{1, \dots, n\}$
+ for which $\tau(n) = n$. Then $T$ is a subgroup of $S_n$;
+ in fact, it is isomorphic to $S_{n-1}$.
+ \ii Consider the group $G \times H$ (\Cref{def:product_group})
+ and the elements $ \left\{ (g, 1_H) \mid g \in G \right\} $.
+ This is a subgroup of $G \times H$ (why?).
+ In fact, it is isomorphic to $G$
+ by the isomorphism $(g,1_H) \mapsto g$.
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Stupid examples of subgroups]
+ For any group $G$, the trivial group $\{1_G\}$
+ and the entire group $G$ are subgroups of $G$.
+\end{example}
+%\begin{example}
+% [Center of a Group]
+% Let $G$ be a group.
+% Its \vocab{center}, denoted $Z(G)$, is the set $x \in G$ such that
+% $gx = xg$ for every $g \in G$; in other words, it is the set of
+% $x \in G$ which commute with every element of $G$.
+%\end{example}
+%You can check the center is indeed a group (some boring details\dots).
+
+Next is an especially important example that we'll talk about more in later chapters.
+\begin{example}[Subgroup generated by an element]
+ Let $x$ be an element of a group $G$.
+ Consider the set
+ \[ \left\langle x \right\rangle = \left\{ \dots, x^{-2}, x^{-1}, 1, x, x^2, \dots \right\}. \]
+ This is also a subgroup of $G$, called the subgroup generated by $x$.
+\end{example}
+\begin{exercise}
+ If $\ord x = 2015$, what is the above subgroup equal to?
+ What if $\ord x = \infty$?
+\end{exercise}
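In a finite group, this subgroup can be computed by taking successive powers of $x$ until they repeat. A short Python sketch along those lines (my addition; the helper name is hypothetical):

```python
def generated_subgroup(x, op, identity):
    """Subgroup generated by x in a finite group with operation op."""
    H, cur = {identity}, x
    while cur not in H:   # in a finite group the powers must cycle back
        H.add(cur)
        cur = op(cur, x)
    return H

mult7 = lambda a, b: (a * b) % 7   # operation of (Z/7Z)^x

# 3 is a primitive root mod 7, so it generates the whole group,
# while 2 generates only the proper subgroup {1, 2, 4}.
assert generated_subgroup(3, mult7, 1) == {1, 2, 3, 4, 5, 6}
assert generated_subgroup(2, mult7, 1) == {1, 2, 4}
```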
+
+Finally, we present some non-examples of subgroups.
+\begin{example}[Non-examples of subgroups]
+ Consider the group $\ZZ = (\ZZ, +)$.
+ \begin{enumerate}[(a)]
+ \ii The set $\left\{ 0,1,2,\dots \right\}$ is
+ not a subgroup of $\ZZ$ because it does not contain inverses.
+ \ii The set $\{ n^3 \mid n \in \ZZ \}
+ = \{ \dots, -8, -1, 0, 1, 8, \dots \}$ is not a subgroup
+ because it is not closed under addition;
+ the sum of two cubes is not in general a cube.
+ \ii The empty set $\varnothing$ is not a subgroup
+ of $\ZZ$ because it lacks an identity element.
+ \end{enumerate}
+\end{example}
+
+\section{Groups of small orders}
+Just for fun, here is a list of all groups of order less than or equal to ten
+(up to isomorphism, of course).
+\begin{enumerate}
+ \ii The only group of order $1$ is the trivial group.
+ \ii The only group of order $2$ is $\Zc 2$.
+ \ii The only group of order $3$ is $\Zc 3$.
+ \ii The only groups of order $4$ are
+ \begin{itemize}
+ \ii $\Zc 4$, the cyclic group on four elements,
+ \ii $\Zc 2 \times \Zc 2$, called the Klein Four Group.
+ \end{itemize}
+ \ii The only group of order $5$ is $\Zc 5$.
+ \ii The groups of order six are
+ \begin{itemize}
+ \ii $\Zc 6$, the cyclic group on six elements.
+ \ii $S_3$, the permutation group of three elements.
+ This is the first non-abelian group.
+ \end{itemize}
+ Some of you might wonder where $\Zc 2 \times \Zc 3$ is.
+ All I have to say is: Chinese remainder theorem!
+
+ You might wonder where $D_6$ is in this list.
+ It's actually isomorphic to $S_3$.
+ \ii The only group of order $7$ is $\Zc 7$.
+ \ii The groups of order eight are more numerous.
+ \begin{itemize}
+ \ii $\Zc 8$, the cyclic group on eight elements.
+ \ii $\Zc 4 \times \Zc 2$.
+ \ii $\Zc 2 \times \Zc 2 \times \Zc 2$.
+ \ii $D_8$, the dihedral group with eight elements, which is not abelian.
+ \ii A non-abelian group $Q_8$, called the \emph{quaternion group}.
+ It consists of eight elements $\pm 1$, $\pm i$, $\pm j$, $\pm k$
+ with $i^2=j^2=k^2=ijk=-1$.
+ \end{itemize}
+ \ii The groups of order nine are
+ \begin{itemize}
+ \ii $\Zc 9$, the cyclic group on nine elements.
+ \ii $\Zc 3 \times \Zc 3$.
+ \end{itemize}
+ \ii The groups of order $10$ are
+ \begin{itemize}
+ \ii $\Zc{10} \cong \Zc5 \times \Zc2$ (again Chinese remainder theorem).
+ \ii $D_{10}$, the dihedral group with $10$ elements.
+ This group is non-abelian.
+ \end{itemize}
+\end{enumerate}
+
+\section{Unimportant long digression}
+A common question is: why these axioms?
+For example, why associative but not commutative?
+This answer will likely not make sense until later,
+but here are some comments that may help.
+
+One general heuristic is:
+Whenever you define a new type of general object,
+there's always a balancing act going on.
+On the one hand, you want to include enough constraints that your
+objects are ``nice''.
+On the other hand, if you include too many constraints,
+then your definition applies to too few objects.
+
+So, for example, we include ``associative''
+because that makes our lives easier
+and most operations we run into are associative.
+In particular, associativity is required for the inverse
+of an element to necessarily be unique.
+However we don't include ``commutative'', because examples below
+show that there are lots of non-abelian groups we care about.
+(But we introduce another name ``abelian''
+because we still want to keep track of it.)
+
+Another comment: a good motivation for the inverse axioms
+is that you get a large amount of \emph{symmetry}.
+The set of positive integers with addition is not a group,
+for example, because you can't subtract $6$ from $3$:
+some elements are ``larger'' than others.
+By requiring an inverse element to exist, you get rid of this issue.
+(You also need identity for this;
+it's hard to define inverses without it.)
+
+Even more abstruse comment:
+\Cref{thm:cayley_theorem} shows that groups are actually shadows of
+the so-called symmetric groups (defined later, also called permutation groups).
+This makes rigorous the notion that ``groups are very symmetric''.
+
+\section{\problemhead}
+
+\begin{problem}
+ What is the joke in the following figure? (Source: \cite{img:snsd}.)
+ \begin{center}
+ \includegraphics[height=8cm]{media/love-proper-isomorphic-subgroup.jpg}
+ %\caption{$\heartsuit$ is a group, $G \subsetneq \heartsuit$ a subgroup and $G \cong \heartsuit$.}
+ \end{center}
+ \begin{hint}
+ Orders.
+ \end{hint}
+ \begin{sol}
+ The point is that $\heartsuit$ is a group, $G \subsetneq \heartsuit$ a subgroup and $G \cong \heartsuit$.
+ This can only occur if $\left\lvert \heartsuit \right\rvert = \infty$;
+ otherwise, a proper subgroup would have strictly smaller size than the original.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Prove Lagrange's theorem for orders in the special case
+ that $G$ is a finite abelian group.
+ \begin{hint}
+ Copy the proof of Fermat's little theorem, using
+ \Cref{lem:group_mult_biject}.
+ \end{hint}
+ \begin{sol}
+ Let $\{g_1, g_2, \dots, g_n\}$ denote the elements of $G$.
+ For any $g \in G$, this is the same as the set $\{gg_1, \dots, gg_n\}$.
+ Taking the entire product and exploiting commutativity gives
+ $g^n \cdot g_1g_2 \dots g_n = g_1g_2 \dots g_n$, hence $g^n=1$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Show that $D_6 \cong S_3$ but $D_{24} \not\cong S_4$.
+ \begin{hint}
+ For the former,
+ decide where the isomorphism should send $r$ and $s$,
+ and the rest will follow through.
+ For the latter, look at orders.
+ \end{hint}
+ \begin{sol}
+ One can check manually that $D_6 \cong S_3$,
+ using the map $r \mapsto (1 \; 2 \; 3)$ and $s \mapsto (1 \; 2)$.
+ (The right-hand sides are in ``cycle notation'',
+ as mentioned in \Cref{subsec:cycle_notation}.)
+ On the other hand $D_{24}$ contains an element of order $12$
+ while $S_4$ does not.
+ \end{sol}
+\end{problem}
+
+\begin{sproblem}
+ Let $p$ be a prime.
+ Show that the only group of order $p$ is $\Zc p$.
+ \begin{hint}
+ Generated groups.
+ \end{hint}
+ \begin{sol}
+ Let $G$ be a group of order $p$, and $1 \neq g \in G$.
+ Look at the group $H$ generated by $g$ and use Lagrange's theorem.
+ \end{sol}
+\end{sproblem}
+
+\begin{problem}
+ [A hint for Cayley's theorem]
+ Find a subgroup $H$ of $S_8$
+ which is isomorphic to $D_8$,
+ and write the isomorphism explicitly.
+\end{problem}
+
+\begin{dproblem}
+ \gim
+ Let $G$ be a finite group.\footnote{In other words,
+ permutation groups can be arbitrarily weird.
+ I remember being highly unsettled
+ by this theorem when I first heard of it,
+ but in hindsight it is not so surprising.}
+ Show that there exists a positive integer $n$ such that
+ \begin{enumerate}[(a)]
+ \ii (Cayley's theorem) $G$ is isomorphic to some subgroup of the symmetric group $S_n$.
+ \ii (Representation Theory) $G$ is isomorphic to some subgroup of
+ the general linear group $\GL_n(\RR)$.
+ (This is the group of invertible $n \times n$ matrices.)
+ \end{enumerate}
+ \label{thm:cayley_theorem}
+ \begin{hint}
+ Use $n = \left\lvert G \right\rvert$.
+ \end{hint}
+ \begin{sol}
+ The idea is that each element $g \in G$ can be thought of as a permutation
+ $G \to G$ by $x \mapsto gx$.
+ \end{sol}
+\end{dproblem}
+
+\begin{problem}
+ [IMO SL 2005 C5] \gim
+ There are $n$ markers, each with one side white and the other side black.
+ In the beginning, these $n$ markers are aligned in a row so that their white sides are all up.
+ In each step, if possible, we choose a marker whose white side is up
+ (but not one of the outermost markers),
+ remove it, and reverse the closest marker to the left of it
+ and also reverse the closest marker to the right of it.
+
+ Prove that if $n \equiv 1 \pmod 3$ it's impossible to reach a state
+ with only two markers remaining.
+ (In fact the converse is true as well.)
+ % http://www.artofproblemsolving.com/Forum/viewtopic.php?f=41&t=90046&p=3573800
+ \begin{hint}
+ Draw inspiration from $D_6$.
+ \end{hint}
+ \begin{sol}
+ We have $www = bb$, $bww = wb$, $wwb = bw$, $bwb = ww$.
+ Interpret these as elements of $D_6$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ \gim
+ Let $p$ be a prime and $F_1 = F_2 = 1$, $F_{n+2} = F_{n+1} + F_n$
+ be the Fibonacci sequence.
+ Show that $F_{2p(p^2-1)}$ is divisible by $p$.
+ \begin{hint}
+ Look at the group of $2 \times 2$ matrices mod $p$
+ with determinant $\pm 1$.
+ \end{hint}
+ \begin{sol}
+ Look at the group $G$ of $2 \times 2$ matrices mod $p$
+ with determinant $\pm 1$ (whose entries are the integers mod $p$).
+ Let $g = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$
+ and then use $g^{\left\lvert G \right\rvert} = 1_G$.
+ \end{sol}
+\end{problem}
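For small primes, the divisibility claim can also be spot-checked numerically. Here is a hedged Python sketch (my addition) computing Fibonacci numbers modulo $p$ throughout:

```python
def fib_mod(n, m):
    """F_n modulo m, with F_0 = 0 and F_1 = F_2 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

# p divides F_{2p(p^2-1)} for the first few primes.
for p in [2, 3, 5, 7, 11, 13]:
    assert fib_mod(2 * p * (p**2 - 1), p) == 0
```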
+
+%\begin{problem}[Hard]
+% Exhibit two groups $G$ and $H$ which are not isomorphic with the property that
+% for every positive integer $n$,
+% the number of elements $g \in G$ with $\ord g = n$
+% equals the number of elements $h \in H$ with $\ord h = n$.
+%\end{problem}
diff --git a/books/napkin/hintsol.tex b/books/napkin/hintsol.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c380e8dcea92a133c77208aa53c99d771e4559d1
--- /dev/null
+++ b/books/napkin/hintsol.tex
@@ -0,0 +1,11 @@
+\chapter{Hints to selected problems}
+\label{app:hints}
+\begin{enumerate}
+ \input{tex/backmatter/all-hints.out}
+\end{enumerate}
+
+\chapter{Sketches of selected solutions}
+\label{app:sol}
+\begin{enumerate}
+ \input{tex/backmatter/all-solns.out}
+\end{enumerate}
diff --git a/books/napkin/holomorphic.tex b/books/napkin/holomorphic.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b7ffb5fca7a6024b06e66e96ed7049a774b345dd
--- /dev/null
+++ b/books/napkin/holomorphic.tex
@@ -0,0 +1,711 @@
+\chapter{Holomorphic functions}
+Throughout this chapter, we denote by $U$ an open subset of the complex plane,
+and by $\Omega$ an open subset which is also simply connected.
+The main references for this chapter were \cite{ref:dartmouth,ref:bak_ca}.
+
+\section{The nicest functions on earth}
+In high school you were told how to differentiate and integrate real-valued functions.
+In this chapter on complex analysis,
+we'll extend it to differentiation and integration of complex-valued functions.
+
+Big deal, you say. Calculus was boring enough. Why do I care about complex calculus?
+
+Perhaps it's easiest to motivate things if I compare real analysis to complex analysis.
+In real analysis, your input lives inside the real line $\RR$.
+This line is not terribly discerning -- you can construct a lot of unfortunate functions.
+Here are some examples.
+\begin{example}
+ [Optional: evil real functions]
+ You can skim over these very quickly: they're only here to make a point.
+ \begin{enumerate}[(a)]
+ \ii The \vocab{Devil's Staircase} (or Cantor function)
+ is a continuous function $H : [0,1] \to [0,1]$
+ which has derivative zero ``almost everywhere'',
+ yet $H(0) = 0$ and $H(1) = 1$.
+ \ii The \vocab{Weierstra\ss\ function}
+ \[ x \mapsto \sum_{n=0}^\infty \left( \half \right)^n \cos \left( 2015^n \pi x \right) \]
+ is continuous \emph{everywhere} but differentiable \emph{nowhere}.
+ \ii The function
+ \[
+ x \mapsto
+ \begin{cases}
+ x^{100} & x \ge 0 \\
+ -x^{100} & x < 0
+ \end{cases}
+ \]
+ has the first $99$ derivatives but not the $100$th one.
+ \ii
+ If a function has all derivatives (we call these \vocab{smooth} functions),
+ then it has a Taylor series.
+ But for real functions that Taylor series might still be wrong. The function
+ \[ x \mapsto
+ \begin{cases}
+ e^{-1/x} & x > 0 \\
+ 0 & x \le 0
+ \end{cases}
+ \]
+ has derivatives at every point.
+ But if you expand the Taylor series at $x=0$, you get $0 + 0x + 0x^2 + \dots$,
+ which is wrong for \emph{any} $x > 0$ (even $x=0.0001$).
+ \end{enumerate}
+\end{example}
+\begin{figure}[h]
+ \centering
+ \includegraphics[width=0.8\textwidth]{media/weierstrass-pubdomain.png}
+ \caption{The Weierstra\ss\ Function (image from \cite{img:weierstrass}).}
+\end{figure}
+
+Let's even put aside the pathology.
+If I tell you the value of a real smooth function on the interval $[-1, 1]$,
+that still doesn't tell you anything about the function as a whole.
+It could be literally anything, because it's somehow possible to ``fuse together'' smooth functions.
+
+So what about complex functions?
+If you consider them as functions $\RR^2 \to \RR^2$, you now have the interesting property
+that you can integrate along things that are not line segments: you can write integrals
+across curves in the plane.
+But $\CC$ has something more: it is a \emph{field}, so you can \emph{multiply} and \emph{divide} two complex numbers.
+
+So we restrict our attention to differentiable functions called \emph{holomorphic functions}.
+It turns out that the multiplication on $\CC$ makes all the difference.
+The primary theme in what follows is that holomorphic functions are \emph{really, really nice},
+and that knowing tiny amounts of data about the function can determine all its values.
+%In particular, they are highly \emph{rigid} and \emph{regular}.
+
+The two main highlights of this chapter,
+from which all other results are more or less corollaries:
+\begin{itemize}
+ \ii Contour integrals of loops are always zero.
+ \ii A holomorphic function is essentially given by its Taylor series;
+ in particular, single-differentiable implies infinitely differentiable.
+ Thus, holomorphic functions behave quite like polynomials.
+\end{itemize}
+Some of the resulting corollaries:
+\begin{itemize}
+ \ii It'll turn out that knowing the values of a holomorphic function
+ on the boundary of the unit circle will tell you the values in its interior.
+ \ii Knowing the values of the function at $1$, $\half$, $\frac13$, \dots
+ is enough to determine the whole function!
+ \ii Bounded holomorphic functions $\CC \to \CC$ must be constant.
+ \ii And more\dots
+\end{itemize}
+
+As \cite{ref:pugh} writes: ``Complex analysis is the good twin and real analysis is the evil one: beautiful formulas and elegant theorems seem to blossom spontaneously in the complex domain, while toil and pathology rule the reals''.
+
+
+\section{Complex differentiation}
+\prototype{Polynomials are holomorphic; $\ol z$ is not.}
+Let $f : U \to \CC$ be a complex function.
+Then for some $z_0 \in U$, we define the \vocab{derivative} at $z_0$ to be
+\[
+ \lim_{h \to 0} \frac{f(z_0+h) - f(z_0)}{h}.
+\]
+Note that this limit may not exist;
+when it does we say $f$ is \vocab{differentiable} at $z_0$.
+
+What do I mean by a ``complex'' limit $h \to 0$?
+It's what you might expect: for every $\eps > 0$ there should be a $\delta > 0$
+such that
+\[ 0 < \left\lvert h \right\rvert < \delta
+ \implies
+ \left\lvert \frac{f(z_0+h)-f(z_0)}{h} - L \right\rvert < \eps. \]
+If you like topology, you are encouraged to think of this in terms of
+open neighborhoods in the complex plane.
+(This is why we require $U$ to be open:
+it makes it possible to take $\delta$-neighborhoods in it.)
+
+But note that having a complex derivative is actually much stronger
+than a real function having a derivative.
+In the real line, $h$ can only approach zero from below and above,
+and for the limit to exist we need the ``left limit'' to equal the ``right limit''.
+But the complex numbers form a \emph{plane}: $h$ can approach zero
+from many directions, and we need all the limits to be equal.
+
+\begin{example}
+ [Important: conjugation is \emph{not} holomorphic]
+ Let $f(z) = \ol z$ be complex conjugation, $f : \CC \to \CC$.
+ This function, despite its simple nature, is not holomorphic!
+ Indeed, at $z=0$ we have,
+ \[ \frac{f(h)-f(0)}{h} = \frac{\ol h}{h}. \]
+ This does not have a limit as $h \to 0$, because depending
+ on ``which direction'' we approach zero from we have different values.
+ \begin{center}
+ \begin{asy}
+ size(7cm);
+ dot("$0$", origin, dir(225));
+ void meow(string s, real theta, real eps = 45, pen p) {
+ draw( (dir(theta) * 0.8) -- (dir(theta) * 0.2), p+1);
+ draw( (dir(theta) * 0.8) -- (dir(theta) * 0.2), p, Arrow);
+ label(s, dir(theta)*0.5, dir(eps), p);
+ }
+ meow("$1$", 0, 90, blue);
+ meow("$1$", 180, 90, blue);
+ meow("$i$", -45, 45, heavygreen);
+ meow("$-1$", 90, 0, red);
+ label("$f(z) = \overline z$", dir(135));
+ label("$\dfrac{f(0+h)-f(0)}{h}$", dir(135)-0.35*dir(90));
+
+ import graph;
+ graph.xaxis("Re", -1, 1, grey, NoTicks, Arrows);
+ graph.yaxis("Im", -1, 1, grey, NoTicks, Arrows);
+ \end{asy}
+ \end{center}
+\end{example}
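The directional behavior in the figure can be confirmed numerically. Below is a hedged Python sketch (my addition): the difference quotient $\ol h / h$ at $z = 0$ takes the values $1$, $-1$, and $i$ along three different directions, so no single limit exists:

```python
# The quotient (f(0+h) - f(0))/h for f(z) = conj(z) is conj(h)/h.
quotient = lambda h: h.conjugate() / h

assert abs(quotient(1e-9 + 0j) - 1) < 1e-12         # real direction: 1
assert abs(quotient(1e-9j) + 1) < 1e-12             # imaginary direction: -1
assert abs(quotient((1 - 1j) * 1e-9) - 1j) < 1e-12  # 45 degrees below: i
```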
+
+If a function $f : U \to \CC$ is complex differentiable
+at all the points in its domain it is called \vocab{holomorphic}.
+In the special case of a holomorphic function with domain $U = \CC$,
+we call the function \vocab{entire}.\footnote{Sorry, I know the word ``holomorphic'' sounds so much cooler. I'll try to do things more generally for that sole reason.}
+
+\begin{example}
+ [Examples of holomorphic functions]
+ In all the examples below, the derivative of the function
+ is the same as in their real analogues
+ (e.g.\ the derivative of $e^z$ is $e^z$).
+ \begin{enumerate}[(a)]
+ \ii Any polynomial $z \mapsto z^n + c_{n-1} z^{n-1} + \dots + c_0$ is holomorphic.
+ \ii The complex exponential $\exp : x+yi \mapsto e^x (\cos y + i \sin y)$
+ can be shown to be holomorphic.
+ \ii $\sin$ and $\cos$ are holomorphic when extended
+ to the complex plane by $\cos z = \frac{e^{iz}+e^{-iz}}{2}$
+ and $\sin z = \frac{e^{iz}-e^{-iz}}{2i}$.
+ \ii As usual, the sum, product, chain rules and so on apply,
+ and hence \textbf{sums, products, nonzero quotients,
+ and compositions of holomorphic functions are also holomorphic}.
+ \end{enumerate}
+\end{example}
+You are welcome to try and prove these results, but I won't bother to do so.
+
+\section{Contour integrals}
+\prototype{$\oint_\gamma z^m \; dz$ around the unit circle.}
+In the real line we knew how to integrate a function across a line segment $[a,b]$:
+essentially, we'd ``follow along'' the line segment adding up the values of $f$ we see
+to get some area.
+Unlike in the real line, in the complex plane we have the power to integrate
+over arbitrary paths: for example, we might compute an integral around a unit circle.
+A contour integral lets us formalize this.
+
+First of all, if $f : \RR \to \CC$ and $f(t) = u(t) + iv(t)$ for $u,v \in \RR$,
+we can define an integral $\int_a^b$ by just adding the real and imaginary parts:
+\[ \int_a^b f(t) \; dt
+ = \left( \int_a^b u(t) \; dt \right)
+ + i \left( \int_a^b v(t) \; dt \right). \]
+Now let $\alpha : [a,b] \to \CC$ be a path, thought of as
+a complex differentiable\footnote{This isn't entirely correct here:
+ you want the path $\alpha$ to be continuous and mostly differentiable,
+ but you allow a finite number of points to have ``sharp bends''; in other words,
+ you can consider paths which are combinations of $n$ smooth pieces.
+ But for this we also require that $\alpha$ has ``bounded length''.} function.
+Such a path is called a \vocab{contour},
+and we define its \vocab{contour integral} by
+\[
+ \oint_\alpha f(z) \; dz
+ = \int_a^b f(\alpha(t)) \cdot \alpha'(t) \; dt.
+\]
+You can almost think of this as a $u$-substitution (which is where the $\alpha'$ comes from).
+In particular, it turns out this integral does not depend on how $\alpha$ is ``parametrized'':
+a circle given by \[ [0,2\pi] \to \CC : t \mapsto e^{it} \]
+and another circle given by \[ [0,1] \to \CC : t \mapsto e^{2\pi i t} \]
+and yet another circle given by \[ [0,1] \to \CC : t \mapsto e^{2 \pi i t^5} \]
+will all give the same contour integral, because the paths they represent have the same
+geometric description: ``run around the unit circle once''.
+
+In what follows I try to use $\alpha$ for general contours and $\gamma$ in the special case of loops.
+
+Let's see an example of a contour integral.
+\begin{theorem}
+ \label{thm:central_cauchy_computation}
+ Take $\gamma : [0,2\pi] \to \CC$ to be the unit circle specified by
+ \[ t \mapsto e^{it}. \]
+ Then for any integer $m$, we have
+ \[ \oint_\gamma z^{m} \; dz
+ =
+ \begin{cases}
+ 2\pi i & m = -1 \\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+\end{theorem}
+\begin{proof}
+ The derivative of $e^{it}$ is $i e^{it}$.
+ So, by definition the answer is the value of
+ \begin{align*}
+ \int_0^{2\pi} (e^{it})^m \cdot (ie^{it}) \; dt
+ &= \int_0^{2\pi} i(e^{it})^{1+m} \; dt \\
+ &= i \int_0^{2\pi} \cos [(1+m)t] + i \sin [(1+m)t] \; dt \\
+ &= - \int_0^{2\pi} \sin [(1+m)t] \; dt + i \int_0^{2\pi} \cos [(1+m)t] \; dt.
+ \end{align*}
+ This is now an elementary calculus question.
+ One can see that this equals $2\pi i$ if $m=-1$ and
+ otherwise the integrals vanish.
+\end{proof}
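The theorem is also pleasant to confirm numerically. The following hedged Python sketch (my addition) approximates the contour integral by a Riemann sum over the parametrization $t \mapsto e^{it}$:

```python
import cmath

def circle_integral(m, steps=4096):
    """Riemann sum for the integral of z^m around the unit circle."""
    total, dt = 0j, 2 * cmath.pi / steps
    for k in range(steps):
        z = cmath.exp(1j * k * dt)       # gamma(t) = e^{it}
        total += z**m * (1j * z) * dt    # f(gamma(t)) * gamma'(t) dt
    return total

assert abs(circle_integral(-1) - 2j * cmath.pi) < 1e-6   # m = -1: 2*pi*i
for m in (-2, 0, 1, 2):
    assert abs(circle_integral(m)) < 1e-6                # otherwise: 0
```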
+Let me try to explain why this intuitively ought to be true for $m=0$.
+In that case we have $\oint_\gamma 1 \; dz$.
+So as the integral walks around the unit circle, it ``sums up'' all the tangent vectors
+at every point (that's the direction it's walking in), multiplied by $1$.
+And given the nice symmetry of the circle, it should come as no surprise that everything cancels out.
+The theorem says that even if we multiply by $z^m$ for $m \neq -1$, we get the same cancellation.
+
+\begin{center}
+ \begin{asy}
+ size(5cm);
+ draw(unitcircle, dashed);
+ void arrow(real theta) {
+ pair P = dir(theta);
+ dot(P);
+ pair delta = 0.4*P*dir(90);
+ draw( P--(P+delta), EndArrow );
+ }
+ arrow(0);
+ arrow(50);
+ arrow(140);
+ arrow(210);
+ arrow(300);
+ \end{asy}
+\end{center}
+
+\begin{definition}
+ Given $\alpha : [0,1] \to \CC$,
+ we denote by $\ol\alpha$ the ``backwards'' contour
+ $\ol\alpha(t) = \alpha(1-t)$.
+\end{definition}
+
+\begin{ques}
+ What's the relation between $\oint_\alpha f \; dz$ and $\oint_{\ol\alpha} f \; dz$?
+ Prove it.
+\end{ques}
+
+This might seem a little boring.
+Things will get really cool really soon, I promise.
+
+\section{Cauchy-Goursat theorem}
+\prototype{$\oint_\gamma z^m \; dz = 0$ for $m \ge 0$. But if $m < 0$, Cauchy's theorem does not apply.}
+Let $\Omega \subseteq \CC$ be simply connected (for example, $\Omega = \CC$),
+and consider two paths $\alpha$, $\beta$ with the same start and end points.
+
+\begin{center}
+ \begin{asy}
+ unitsize(0.8cm);
+ bigbox("$\Omega$");
+ pair A = Drawing((-3,0));
+ pair B = Drawing((3,0));
+ draw(A..(-2,0.5)..MP("\alpha", (0,2), dir(90))..(1,1.2)..B, red, EndArrow);
+ draw(A----MP("\beta", (A+B)/2, dir(-90))--B, blue, EndArrow);
+ \end{asy}
+\end{center}
+
+What's the relation between $\oint_\alpha f(z) \; dz$ and $\oint_\beta f(z) \; dz$?
+You might expect there to be some relation between them, considering that the space $\Omega$ is simply connected.
+But you probably wouldn't expect there to be \emph{much} of a relation.
+
+As a concrete example, let $\Psi : \CC \to \CC$ be the function $z \mapsto z - \Re[z]$
(for example, $\Psi(2015+3i) = 3i$). Let's consider two paths from $-1$ to $1$:
a path $\beta$ which walks along the real axis, and a path $\alpha$ which follows the upper semicircle.
+
+\begin{center}
+ \begin{asy}
+ pair A = Drawing("-1", dir(180), dir(-90));
+ pair B = Drawing("1", dir(0), dir(-90));
+ draw(arc(origin, 1, 180, 0), red, EndArrow);
+ MP("\alpha", dir(90), dir(90));
+ draw(A--B, blue, EndArrow);
+ MP("\beta", 0, dir(-90));
+ \end{asy}
+\end{center}
+
+Obviously $\oint_\beta \Psi(z) \; dz = 0$.
+But heaven knows what $\oint_\alpha \Psi(z) \; dz$ is supposed to equal.
+We can compute it now just out of non-laziness.
+If you like, you are welcome to compute it yourself (it's a little annoying but not hard).
+If I myself didn't mess up, it is
+\[ \oint_\alpha \Psi(z) \; dz = - \oint_{\ol\alpha} \Psi(z) \; dz
+= - \int_0^\pi (i \sin(t)) \cdot ie^{it} \; dt = \half\pi i \]
+which in particular is not zero.
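If you don't trust my arithmetic, here is a quick numerical check in Python (an illustration I've added; parametrizing $z = e^{it}$ with $t$ running from $\pi$ down to $0$ traces $\alpha$ from $-1$ to $1$):

```python
import cmath

def integral_of_psi_along_alpha(n=200000):
    """Midpoint-rule approximation of the contour integral of
    Psi(z) = z - Re(z) along the upper semicircle from -1 to 1,
    i.e. z = e^{it} with t running from pi down to 0."""
    total = 0
    dt = cmath.pi / n
    for k in range(n):
        t = cmath.pi - (k + 0.5) * dt
        z = cmath.exp(1j * t)
        dz = -1j * z * dt  # t is decreasing, so dz = -i e^{it} dt
        total += (z - z.real) * dz
    return total

print(integral_of_psi_along_alpha())  # close to (pi/2) i = 1.5707...i
```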
+
+But somehow $\Psi$ is not a really natural function.
+It's not respecting any of the nice, multiplicative structure of $\CC$ since
+it just rudely lops off the real part of its inputs.
+More precisely,
+\begin{ques}
+ Show that $\Psi(z) = z - \Re[z]$ is not holomorphic.
+ (Hint: $\ol z$ is not holomorphic.)
+\end{ques}
+
+Now here's a miracle: for holomorphic functions, the two integrals are \emph{always equal}.
Equivalently (by considering $\alpha$ followed by $\ol\beta$), contour integrals of loops are always zero.
+This is the celebrated Cauchy-Goursat theorem
+(also called the Cauchy integral theorem,
+but later we'll have a ``Cauchy Integral Formula'' so blah).
+
+\begin{theorem}
+ [Cauchy-Goursat theorem]
+ Let $\gamma$ be a loop, and $f : \Omega \to \CC$ a holomorphic function
+ where $\Omega$ is open in $\CC$ and simply connected.
+ Then
+ \[ \oint_\gamma f(z) \; dz = 0. \]
+\end{theorem}
+\begin{remark}[Sanity check]
+ This might look surprising considering that we saw $\oint_\gamma z^{-1} \; dz = 2 \pi i$ earlier.
+ The subtlety is that $z^{-1}$ is not even defined at $z = 0$.
+ On the other hand, the function $\CC \setminus \{0\} \to \CC$ by $z \mapsto \frac 1z$ \emph{is} holomorphic!
+ The defect now is that $\Omega = \CC \setminus \{0\}$ is not simply connected.
+ So the theorem passes our sanity checks, albeit barely.
+\end{remark}
+
+The typical proof of Cauchy's Theorem assumes additionally
+that the partial derivatives of $f$ are continuous
+and then applies the so-called Green's theorem.
But it was Goursat who successfully proved the fully general theorem we've stated above,
which assumes only that $f$ is holomorphic.
+I'll only outline the proof, and very briefly.
+You can show that if $f : \Omega \to \CC$ has an antiderivative $F : \Omega \to \CC$ which is also holomorphic,
+and moreover $\Omega$ is simply connected, then you get a ``fundamental theorem of calculus'', a la
+\[ \oint_\alpha f(z) \; dz = F(\alpha(b)) - F(\alpha(a)) \]
+where $\alpha : [a,b] \to \CC$ is some path.
+So to prove Cauchy-Goursat, you only have to show this antiderivative $F$ exists.
+Goursat works very hard to prove the result in the special case that $\gamma$ is a triangle,
+and hence by induction for any polygon.
Once he has the result for a triangle, he uses this special case to construct the function $F$ explicitly.
+Goursat then shows that $F$ is holomorphic, completing the proof.
+
+Anyways, the theorem implies that $\oint_\gamma z^m \; dz = 0$ when $m \ge 0$.
+So much for all our hard work earlier.
+But so far we've only played with circles.
+This theorem holds for \emph{any} contour which is a loop.
+So what else can we do?
+
\section{Cauchy's integral formula}
+We now present a stunning application of Cauchy-Goursat, a ``representation theorem'':
+essentially, it says that values of $f$ inside a disk
+are determined by just the values on the boundary!
+In fact, we even write down the exact formula.
+As \cite{ref:dartmouth} says,
+``any time a certain type of function satisfies some sort of representation theorem,
+it is likely that many more deep theorems will follow.''
+Let's pull back the curtain:
+
+\begin{theorem}[Cauchy's integral formula]
+ Let $\gamma : [0,2\pi] \to \CC$ be a circle in the plane given by $t \mapsto Re^{it}$,
+ which bounds a disk $D$.
+ Suppose $f : U \to \CC$ is holomorphic such that $U$ contains the circle and its interior.
+ Then for any point $a$ in the interior of $D$, we have
+ \[
+ f(a)
+ =
+ \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z-a} \; dz.
+ \]
+\end{theorem}
+Note that we don't require $U$ to be simply connected, but the reason is pretty silly:
we're only ever going to integrate $f$ over $D$, which is an open disk, and hence the disk
+is simply connected anyways.
+
+The presence of $2\pi i$, which you saw earlier in the form $\oint_{\text{circle}} z^{-1} \; dz$,
+is no accident.
+In fact, that's the central result we're going to use to prove the result.
+
+\begin{proof}
+ There are several proofs out there, but I want to give the one that really
+ draws out the power of Cauchy's theorem. Here's the picture we have:
+ there's a point $a$ sitting inside a circle $\gamma$,
+ and we want to get our hands on the value $f(a)$.
+ \begin{center}
+ \begin{asy}
+ size(3cm);
+ draw(unitcircle, dashed, MidArrow);
+ MP("\gamma", dir(-45), dir(-45));
+ pair a = 0.1 * dir(60);
+ dot("$a$", a, dir(a));
+ \end{asy}
+ \end{center}
+ We're going to do a trick: construct a \vocab{keyhole contour} $\Gamma_{\delta, \eps}$
+ which has an outer circle $\gamma$, plus an inner circle $\ol{\gamma_\eps}$, which is a circle centered
+ at $a$ with radius $\eps$, running clockwise (so that $\gamma_\eps$ runs counterclockwise).
+ The ``width'' of the corridor is $\delta$. See picture:
+ \begin{center}
+ \begin{asy}
+ size(4cm);
+ MP("\gamma", dir(-45), dir(-45));
+ pair a = 0.1 * dir(60);
+ dot("$a$", a, dir(a));
+ real delta_outer = 6;
+ real delta_inner = 20;
+ pair P = dir(60+delta_outer);
+ pair Q = dir(60-delta_outer);
+ draw(arc(origin, 1, 60+delta_outer, 360+60-delta_outer), MidArrow);
+ draw(arc(a, 0.3, 60-delta_inner, -360+60+delta_inner), MidArrow);
+ draw(dir(60-delta_outer)--(a+0.3*dir(60-delta_inner)), MidArrow);
+ draw((a+0.3*dir(60+delta_inner))--dir(60+delta_outer), MidArrow);
+ MP("\overline{\gamma_\varepsilon}", a+0.3*dir(225), dir(225));
+ \end{asy}
+ \end{center}
+ Hence $\Gamma_{\delta,\eps}$ consists of four smooth curves.
+ \begin{ques}
+ Draw a \emph{simply connected} open set $\Omega$ which contains the entire
+ $\Gamma_{\delta,\eps}$ but does not contain the point $a$.
+ \end{ques}
+ The function $\frac{f(z)}{z-a}$ manages to be holomorphic on all of $\Omega$.
+ Thus Cauchy's theorem applies and tells us that
+ \[
+ 0 = \oint_{\Gamma_{\delta,\eps}} \frac{f(z)}{z-a} \; dz.
+ \]
+ As we let $\delta \to 0$, the two walls of the keyhole will cancel each other (because $f$ is continuous,
+ and the walls run in opposite directions).
+ So taking the limit as $\delta \to 0$, we are left with just $\gamma$ and $\gamma_\eps$,
+ which (taking again orientation into account) gives
+ \[
+ \oint_{\gamma} \frac{f(z)}{z-a} \; dz
+ = - \oint_{\ol{\gamma_\eps}} \frac{f(z)}{z-a} \; dz
+ = \oint_{\gamma_\eps} \frac{f(z)}{z-a} \; dz.
+ \]
+ Thus \textbf{we've managed to replace $\gamma$ with a much smaller circle $\gamma_\eps$ centered around $a$},
+ and the rest is algebra.
+
+ To compute the last quantity, write
+ \begin{align*}
+ \oint_{\gamma_\eps} \frac{f(z)}{z-a} \; dz
+ &=
+ \oint_{\gamma_\eps} \frac{f(z)-f(a)}{z-a} \; dz
+ +
+ f(a) \cdot \oint_{\gamma_\eps} \frac{1}{z-a} \; dz \\
+ &=
+ \oint_{\gamma_\eps} \frac{f(z)-f(a)}{z-a} \; dz
+ +
+ 2\pi i f(a).
+ \end{align*}
 where we've used \Cref{thm:central_cauchy_computation}.
+ Thus, all we have to do is show that
+ \[ \oint_{\gamma_\eps} \frac{f(z)-f(a)}{z-a} \; dz = 0. \]
+ For this we can basically use the weakest bound possible, the so-called $ML$ lemma
+ which I'll cite without proof:
+ it says ``bound the function everywhere by its maximum''.
+ \begin{lemma}
+ [$ML$ estimation lemma]
+ Let $f$ be a holomorphic function and $\alpha$ a path.
+ Suppose $M = \max_{z \text{ on } \alpha} \left\lvert f(z) \right\rvert$, and
+ let $L$ be the length of $\alpha$.
+ Then
+ \[ \left\lvert \oint_\alpha f(z) \; dz \right\rvert \le ML. \]
+ \end{lemma}
+ (This is straightforward to prove if you know the definition of length:
+ $L = \int_a^b |\alpha'(t)| \; dt$, where $\alpha : [a,b] \to \CC$.)
+
+ Anyways, as $\eps \to 0$, the quantity $\frac{f(z)-f(a)}{z-a}$ approaches $f'(a)$,
+ and so for small enough $\eps$ (i.e.\ $z$ close to $a$) there's some upper bound $M$.
+ Yet the length of $\gamma_\eps$ is the circumference $2\pi \eps$.
+ So the $ML$ lemma says that
 \[ \left\lvert \oint_{\gamma_\eps} \frac{f(z)-f(a)}{z-a} \; dz \right\rvert
+ \le 2\pi\eps \cdot M \to 0
+ \]
+ as desired.
+\end{proof}
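Before moving on, here is a numerical sanity check of the formula in Python (my own illustration; I take $f = \exp$, a circle of radius $2$, and an arbitrarily chosen sample point $a = 0.3 + 0.4i$):

```python
import cmath

def cauchy_formula_rhs(f, a, R=2.0, n=100000):
    """Approximates (1 / (2 pi i)) * oint f(z)/(z - a) dz over the
    circle of radius R centered at the origin; by Cauchy's integral
    formula this should recover f(a) whenever |a| < R."""
    total = 0
    dt = 2 * cmath.pi / n
    for k in range(n):
        z = R * cmath.exp(1j * k * dt)
        dz = 1j * z * dt
        total += f(z) / (z - a) * dz
    return total / (2j * cmath.pi)

a = 0.3 + 0.4j
print(abs(cauchy_formula_rhs(cmath.exp, a) - cmath.exp(a)) < 1e-6)  # True
```

The values of $f$ on the boundary circle really do determine $f(a)$ inside, to within numerical error.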
+
+\section{Holomorphic functions are analytic}
+\prototype{Imagine a formal series $\sum_k c_k x^k$!}
In the setup of the previous section, we have a circle $\gamma : [0,2\pi] \to \CC$
and a holomorphic function $f \colon U \to \CC$, where $U$ contains the circle and the disk $D$ it bounds.
+We can write
+\begin{align*}
+ f(a) &= \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z-a} \; dz \\
+ &= \frac{1}{2\pi i} \oint_\gamma \frac{f(z)/z}{1 - \frac az} \; dz \\
+ &= \frac{1}{2\pi i} \oint_\gamma f(z)/z \cdot \sum_{k \ge 0} \left( \frac az \right)^k \; dz \\
+ \intertext{You can prove (using the so-called Weierstrass M-test) that the summation order can be switched:}
+ f(a) &= \frac{1}{2\pi i} \sum_{k \ge 0} \oint_\gamma \frac{f(z)}{z} \cdot \left( \frac az \right)^k \; dz \\
+ &= \frac{1}{2\pi i} \sum_{k \ge 0} \oint_\gamma a^k \cdot \frac{f(z)}{z^{k+1}} \; dz \\
+ &= \sum_{k \ge 0} \left( \frac{1}{2\pi i}\oint_\gamma \frac{f(z)}{z^{k+1}} \; dz \right) a^k. \\
+ \intertext{Letting
+ $c_k = \frac{1}{2\pi i}\oint_\gamma \frac{f(z)}{z^{k+1}} \; dz$,
+ and noting this is independent of $a$, this is}
+ f(a) &= \sum_{k \ge 0} c_k a^k
+\end{align*}
+and that's the miracle: holomorphic functions
+are given by a \vocab{Taylor series}!
+This is one of the biggest results in complex analysis.
+Moreover, if one is willing to believe that we can
+take the derivative $k$ times, we obtain
+\[ c_k = \frac{f^{(k)}(0)}{k!} \]
+and this gives us $f^{(k)}(0) = k! \cdot c_k$.
+
+Naturally, we can do this with any circle (not just one centered at zero).
+So let's state the full result below, with arbitrary center $p$.
+
+\begin{theorem}
+ [Cauchy's differentiation formula]
+ Let $f : U \to \CC$ be a holomorphic function and let $D$ be a disk centered at point $p$
+ bounded by a circle $\gamma$. Suppose $D$ is contained inside $U$.
+ Then $f$ is given everywhere in $D$ by a Taylor series
+ \[
+ f(z) = c_0 + c_1(z-p) + c_2(z-p)^2 + \dots
+ \]
+ where
+ \[
 c_k = \frac{f^{(k)}(p)}{k!} = \frac{1}{2\pi i} \oint_\gamma \frac{f(w)}{(w-p)^{k+1}} \; dw.
 \]
 In particular,
 \[ f^{(k)}(p) = k! c_k = \frac{k!}{2\pi i} \oint_\gamma \frac{f(w)}{(w-p)^{k+1}} \; dw. \]
+\end{theorem}
+
+Most importantly,
+\begin{moral}
+ Over any disk,
+ a holomorphic function is given
+ exactly by a Taylor series.
+\end{moral}
+This establishes a result we stated at the beginning of the chapter:
+that a function being complex differentiable once means it is not only infinitely differentiable,
+but in fact equal to its Taylor series.
+
+I should maybe emphasize a small subtlety of the result:
+the Taylor series centered at $p$ is only valid in a disk centered at $p$ which lies entirely in the domain $U$.
+If $U = \CC$ this is no issue, since you can make the disk big enough to accommodate any point you want.
+It's more subtle in the case that $U$ is, for example, a square; you can't cover the entire square
+with a disk centered at some point without going outside the square.
+However, since $U$ is open we can at any rate at least find some
+open neighborhood for which the Taylor
+series is correct -- in stark contrast to the real case.
+Indeed, as you'll see in the problems,
+the existence of a Taylor series is incredibly powerful.
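As a sanity check on the coefficient formula $c_k = \frac{1}{2\pi i} \oint_\gamma \frac{f(w)}{(w-p)^{k+1}} \; dw$, here is a short Python sketch (my own illustration): for $f = \exp$ and $p = 0$ we expect $c_k = 1/k!$.

```python
import cmath, math

def taylor_coefficient(f, k, p=0, R=1.0, n=50000):
    """Approximates c_k = (1/(2 pi i)) * oint f(w) / (w - p)^(k+1) dw
    over the circle of radius R centered at p."""
    total = 0
    dt = 2 * cmath.pi / n
    for j in range(n):
        w = p + R * cmath.exp(1j * j * dt)
        dw = 1j * (w - p) * dt  # dw = i R e^{it} dt
        total += f(w) / (w - p)**(k + 1) * dw
    return total / (2j * cmath.pi)

# For f = exp and p = 0, each c_k should be 1/k!:
for k in range(5):
    print(k, abs(taylor_coefficient(cmath.exp, k) - 1 / math.factorial(k)) < 1e-9)
```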
+
+\section\problemhead
+These aren't olympiad problems, but I think they're especially nice!
+In the next complex analysis chapter we'll see some more nice applications.
+
+The first few results are the most important.
+
+\begin{sproblem}
+ [Liouville's theorem]
+ \gim
+ Let $f : \CC \to \CC$ be an entire function.
+ Suppose that $\left\lvert f(z) \right\rvert < 1000$ for all complex numbers $z$.
+ Prove that $f$ is a constant function.
+ \begin{hint}
+ Look at the Taylor series of $f$,
+ and use Cauchy's differentiation formula to
+ show that each of the larger coefficients must be zero.
+ \end{hint}
+% \footnote{%
+% It's true more generally that if
+% $\left\lvert f(z) \right\rvert < A+B\left\lvert z \right\rvert^n$
+% for some constants $A$ and $B$,
+% then $f$ is a polynomial of degree at most $n$.
+% The proof is induction on $n$ with the case $n=0$ being the theorem.}
+\end{sproblem}
+
+\begin{sproblem}[Zeros are isolated]
+ An \vocab{isolated set} in the complex plane is a
+ set of points $S$ such that around each point in $S$,
+ one can draw an open neighborhood not intersecting any other point of $S$.
+
+ Show that the zero set of any nonzero holomorphic
+ function $f : U \to \CC$ is an isolated set,
+ unless there exists a nonempty open subset of $U$
+ on which $f$ is identically zero.
+ \begin{hint}
+ Proceed by contradiction,
+ meaning there exists a sequence $z_1, z_2, \dots \to z$
+ where $0 = f(z_1) = f(z_2) = \dots$ all distinct.
+ Prove that $f = 0$ on an open neighborhood of $z$
+ by looking at the Taylor series of $f$ and
+ pulling out factors of $z$.
+ \end{hint}
+ \begin{sol}
+ Proceed by contradiction, meaning there exists a sequence $z_1, z_2, \dots \to z$
+ where $0 = f(z_1) = f(z_2) = \dots$ all distinct.
+ WLOG set $z=0$.
+ Look at the Taylor series of $f$ around $z=0$.
+ Since it isn't uniformly zero by assumption,
+ write it as $a_N z^N + a_{N+1}z^{N+1} + \dots$, $a_N \neq 0$.
 But by continuity of $h(z) = a_N + a_{N+1}z + \dots$ there is some
 open neighborhood of zero where $h(z) \neq 0$.
 This is a contradiction: the $z_i$ eventually lie in this neighborhood,
 yet $0 = f(z_i) = z_i^N h(z_i)$ and $z_i \neq 0$ force $h(z_i) = 0$.
 \end{sol}
+\end{sproblem}
+
+\begin{sproblem}
+ [Identity theorem]
+ \gim
+ Let $f, g : U \to \CC$ be holomorphic, and assume that $U$ is connected.
+ Prove that if $f$ and $g$ agree on some open neighborhood,
+ then $f = g$.
+ \begin{hint}
+ Take the interior of the agreeing points;
+ show that this set is closed, which implies the conclusion.
+ \end{hint}
+ \begin{sol}
+ Let $S$ be the interior of the points satisfying $f=g$.
+ By definition $S$ is open.
+ By the previous part, $S$ is closed: if $z_i \to z$ and $z_i \in S$,
+ then $f=g$ in some open neighborhood of $z$, hence $z \in S$.
+ Since $S$ is clopen and nonempty, $S = U$.
+ \end{sol}
+\end{sproblem}
+
+%\begin{dproblem}
+% [Mean Value Property]
+% Let $f : U \to \CC$ be holomorphic.
+% Assume that $z_0 \in U$ and the disk centered at $z_0$
+% with radius $r > 0$ is contained inside $U$. Show that
+% \[ f(z_0) = \frac{1}{2\pi} \int_0^{2\pi} f(z_0+re^{it}) \; dt. \]
+% In other words, $f(z_0)$ is the average of $f$ along the circle.
+% \begin{hint}
+% Evaluate $\oint_\gamma \frac{f(w)}{w-z_0} \; dw$ over the circle.
+% \end{hint}
+%\end{dproblem}
+
+\begin{dproblem}[Maximums Occur On Boundaries]
+ Let $f : U \to \CC$ be holomorphic, let $Y \subseteq U$ be compact,
 and let $\partial Y$ be the boundary\footnote{
 The boundary $\partial Y$ is the set of points $p \in Y$
 such that no open neighborhood of $p$ is contained in $Y$.
+ It is also a compact set if $Y$ is compact.
+ } of $Y$.
+ Show that
+ \[ \max_{z \in Y} \left\lvert f(z) \right\rvert
+ = \max_{z \in \partial Y} \left\lvert f(z) \right\rvert. \]
+ In other words, the maximum values of $\left\lvert f \right\rvert$ occur
+ on the boundary. (Such maximums exist by compactness.)
+\end{dproblem}
+
+\begin{problem}
+ [Harvard quals]
+ Let $f : \CC \to \CC$ be a nonconstant entire function.
+ Prove that $f\im(\CC)$ is dense in $\CC$.
+ (In fact, a much stronger result is true:
+ Little Picard's theorem says that the image of a nonconstant
+ entire function omits at most one point.)
+ \begin{hint}
+ Liouville. Look at $\frac{1}{f(z)-w}$.
+ \end{hint}
+ \begin{sol}
+ Suppose we want to show that there's a point
 in the image within $\eps$ of a given point $w \in \CC$.
+ Look at $\frac{1}{f(z) - w}$ and use Liouville's theorem.
+ \end{sol}
+\end{problem}
+
+%\begin{dproblem}
+% [Liouiville's theorem extended]
+% Let $f : \CC \to \CC$ be entire.
+% \begin{enumerate}[(a)]
+% \ii Show that if $\left\lvert f(z) \right\rvert < C \left\lvert z \right\rvert^{1000}$
+% for some constant $C$, then $f$ is a polynomial of degree at most $1000$.
+% \ii Show that the image $f\im(\CC)$ is dense in $\CC$,
+% unless $f$ is constant.
+% \end{enumerate}
+% \begin{hint}
+% Part (a) is the same proof of the original Louiville's theorem.
+%
+% For part (b), assume it's not dense and misses a circle at $w$
+% with radius $\eps$. Look at $\frac{1}{f(z)-w}$ and show it's bounded.
+% \end{hint}
+%\end{dproblem}
+
+%\begin{problem}
+% Show that a nonzero entire function can have at most countably many zeros,
+% and give an example where equality occurs.
+% \begin{hint}
+% Assume there are uncountably many zeros
+% and do a pigeonhole style argument to force them to
+% accumulate at some point.
+% Then apply the identity theorem.
+% Equality occurs at $\sin(z)$.
+% \end{hint}
+%\end{problem}
diff --git a/books/napkin/ideals.tex b/books/napkin/ideals.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9e1c349bf5ad161427000443935d08a40624ae21
--- /dev/null
+++ b/books/napkin/ideals.tex
@@ -0,0 +1,827 @@
+\chapter{Rings and ideals}
+\section{Some motivational metaphors about rings vs groups}
+In this chapter we'll introduce the notion of
+a \textbf{commutative ring} $R$.
+It is a larger structure than a group:
+it will have two operations addition and multiplication,
+rather than just one.
+We will then immediately define a \textbf{ring homomorphism}
+$R \to S$ between pairs of rings.
+
+This time, instead of having normal subgroups $H \normalin G$,
+rings will instead have subsets $I \subseteq R$ called \textbf{ideals},
+which are not themselves rings but satisfy some niceness conditions.
We will then show how to define $R/I$,
+in analogy to $G/H$ as before.
+Finally, like with groups, we will talk a bit about how to generate ideals.
+
+Here is a possibly helpful table of analogies to help you keep track:
+\begin{center}
+ \begin{tabular}[h]{lcc}
+ & Group & Ring \\
+ \hline
+ Notation & $G$ & $R$ \\
+ Operations & $\cdot$ & $+$, $\times$ \\
+ Commutativity & only if abelian & for us, always \\
+ Sub-structure & subgroup & (not discussed) \\
+ Homomorphism & grp hom.\ $G \to H$ & ring hom.\ $R \to S$ \\
+ Kernel & normal subgroup & ideal \\
+ Quotient & $G/H$ & $R/I$ \\
+ \end{tabular}
+\end{center}
+
+\section{(Optional) Pedagogical notes on motivation}
+I wrote most of these examples with a number theoretic eye in mind;
+thus if you liked elementary number theory,
+a lot of your intuition will carry over.
+Basically, we'll try to generalize properties of the ring $\ZZ$ to
+any abelian structure in which we can also multiply.
+That's why, for example, you can talk about
+``irreducible polynomials in $\QQ[x]$'' in the same
+way you can talk about ``primes in $\ZZ$'',
+or about ``factoring polynomials modulo $p$''
in the same way we can talk about ``unique factorization in $\ZZ$''.
+Even if you only care about $\ZZ$
+(say, you're a number theorist), this has a lot of value:
+I assure you that trying to solve $x^n+y^n = z^n$ (for $n > 2$)
+requires going into a ring other than $\ZZ$!
+
+Thus for all the sections that follow, keep $\ZZ$ in mind as your prototype.
+
+I mention this here because
+commutative algebra is \emph{also} closely tied to algebraic geometry.
+Lots of the ideas in commutative algebra have nice
+``geometric'' interpretations that motivate the definitions,
+and these connections are explored in the corresponding part later.
+So, I want to admit outright that this is not
+the only good way (perhaps not even the most natural one)
+of motivating what is to follow.
+
+\section{Definition and examples of rings}
+\prototype{$\ZZ$ all the way! Also $R[x]$ and various fields (next section).}
+
+Well, I guess I'll define a ring\footnote{Or,
+ according to some authors, a ``ring with identity'';
+ some authors don't require rings to have multiplicative identity.
+ For us, ``ring'' always means ``ring with $1$''.}.
+
+\begin{definition}
+ A \vocab{ring} is a triple $(R, +, \times)$,
+ the two operations usually called addition and multiplication, such that
+ \begin{enumerate}[(i)]
+ \ii $(R,+)$ is an abelian group, with identity $0_R$, or just $0$.
+ \ii $\times$ is an associative, binary operation on $R$ with some
+ identity, written $1_R$ or just $1$.
+ \ii Multiplication distributes over addition.
+ \end{enumerate}
+ The ring $R$ is \vocab{commutative} if $\times$ is commutative.
+\end{definition}
+\begin{abuse}
+ As usual, we will abbreviate $(R, +, \times)$ to $R$.
+\end{abuse}
+\begin{abuse}
+ For simplicity, assume all rings are commutative
+ for the rest of this chapter.
+ We'll run into some noncommutative rings eventually,
+ but for such rings we won't need the full theory of this chapter anyways.
+\end{abuse}
+
+These definitions are just here for completeness.
+The examples are much more important.
+\begin{example}[Typical rings]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The sets $\ZZ$, $\QQ$, $\RR$ and $\CC$ are all rings
+ with the usual addition and multiplication.
+ \ii The integers modulo $n$ are also a ring
+ with the usual addition and multiplication.
+ We also denote it by $\Zc n$.
+ \end{enumerate}
+\end{example}
+
+Here is also a trivial example.
+\begin{definition}
+ The \vocab{zero ring} is the ring $R$ with a single element.
+ We denote the zero ring by $0$.
+ A ring is \vocab{nontrivial} if it is not the zero ring.
+\end{definition}
+\begin{exercise}
+ [Comedic]
+ Show that a ring is nontrivial if and only if $0_R \ne 1_R$.
+\end{exercise}
+
+Since I've defined this structure, I may as well state the obligatory facts about it.
+\begin{fact}
+ For any ring $R$ and $r \in R$, $r \cdot 0_R = 0_R$.
+ Moreover, $r \cdot (-1_R) = -r$.
+\end{fact}
+
+Here are some more examples of rings.
+\begin{example}
+ [Product ring]
+ \label{ex:product_ring}
+ Given two rings $R$ and $S$ the \vocab{product ring},
+ denoted $R \times S$, is defined as ordered pairs $(r,s)$
+ with both operations done component-wise.
+ For example, the Chinese remainder theorem says
+ that \[ \Zc{15} \cong \Zc3 \times \Zc5 \]
+ with the isomorphism $n \bmod{15} \mapsto (n \bmod 3, n \bmod 5)$.
+\end{example}
+\begin{remark}
 Equivalently, we can define $R \times S$ as the abelian group $R \oplus S$,
 and endow it with the multiplication determined by the multiplications
 of $R$ and $S$ together with the rule $r \cdot s = 0$
 for $r \in R$, $s \in S$ (viewed as elements of the direct sum).
+\end{remark}
+\begin{ques}
+ Which $(r,s)$ is the identity element of the product ring $R \times S$?
+\end{ques}
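The Chinese remainder theorem example above can be verified by brute force. Here is a short Python sketch (my own illustration, with the helper names `crt_map` and `crt_is_ring_isomorphism` invented for the occasion):

```python
def crt_map(n):
    """The candidate isomorphism Z/15 -> Z/3 x Z/5 from the example."""
    return (n % 3, n % 5)

def crt_is_ring_isomorphism():
    # Bijective: the 15 residues hit all 15 pairs.
    if len({crt_map(n) for n in range(15)}) != 15:
        return False
    # Respects addition and multiplication (done component-wise).
    for a in range(15):
        for b in range(15):
            pa, pb = crt_map(a), crt_map(b)
            if crt_map((a + b) % 15) != ((pa[0] + pb[0]) % 3, (pa[1] + pb[1]) % 5):
                return False
            if crt_map((a * b) % 15) != ((pa[0] * pb[0]) % 3, (pa[1] * pb[1]) % 5):
                return False
    return True

print(crt_is_ring_isomorphism())  # True
```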
+
+\begin{example}[Polynomial ring]
+ Given any ring $R$,
+ the \vocab{polynomial ring} $R[x]$ is defined as the set of polynomials
+ with coefficients in $R$:
+ \[ R[x] = \left\{ a_n x^n+a_{n-1}x^{n-1}+\dots+a_0
+ \mid a_0, \dots, a_n \in R \right\}. \]
+ This is pronounced ``$R$ adjoin $x$''.
+ Addition and multiplication are done exactly in the way you would expect.
+\end{example}
+\begin{remark}
+ [Digression on division]
+ Happily, polynomial division also does what we expect:
+ if $p \in R[x]$ is a polynomial, and $p(a) = 0$,
+ then $(x-a)q(x) = p(x)$ for some polynomial $q$.
+ Proof: do polynomial long division.
+
+ With that, note the caveat that
+ \[ x^2-1 \equiv (x-1)(x+1) \pmod 8 \]
+ has \emph{four} roots $1$, $3$, $5$, $7$ in $\Zc8$.
+
+ The problem is that $2 \cdot 4 = 0$ even though $2$ and $4$ are not zero;
+ we call $2$ and $4$ \emph{zero divisors} for that reason.
+ In an \emph{integral domain} (a ring without zero divisors),
+ this pathology goes away,
+ and just about everything you know about polynomials carries over.
+ (I'll say this all again next section.)
+\end{remark}
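Both claims in the remark are easy to confirm by brute force; here is a small Python sketch (my own illustration):

```python
# Roots of x^2 - 1 in Z/8: there are four of them, not two.
roots = [x for x in range(8) if (x * x - 1) % 8 == 0]
print(roots)  # [1, 3, 5, 7]

# The culprit: Z/8 has zero divisors (nonzero x with xy = 0 for nonzero y).
zero_divisors = [x for x in range(1, 8)
                 if any((x * y) % 8 == 0 for y in range(1, 8))]
print(zero_divisors)  # [2, 4, 6]
```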
+\begin{example}
+ [Multi-variable polynomial ring]
+ We can consider polynomials in $n$ variables with coefficients in $R$,
+ denoted $R[x_1, \dots, x_n]$.
+ (We can even adjoin infinitely many $x$'s if we like!)
+\end{example}
+
+\begin{example}
+ [Gaussian integers are a ring]
+ The \vocab{Gaussian integers} are the set of complex numbers
+ with integer real and imaginary parts, that is
+ \[ \ZZ[i] = \left\{ a+bi \mid a,b \in \ZZ \right\}. \]
+\end{example}
+\begin{abuse}
+ [Liberal use of adjoinment]
+ Careful readers will detect some abuse in notation here.
+ $\ZZ[i]$ should officially be
+ ``integer-coefficient polynomials in a variable $i$''.
+ However, it is understood from context that $i^2=-1$;
+ and a polynomial in $i = \sqrt{-1}$ ``is'' a Gaussian integer.
+\end{abuse}
+\begin{example}
+ [Cube root of $2$]
+ As another example (using the same abuse of notation):
+ \[ \ZZ[\sqrt[3]{2}] = \left\{ a + b\sqrt[3]{2} + c\sqrt[3]4
+ \mid a,b,c \in \ZZ \right\}. \]
+\end{example}
+
+\section{Fields}
+\prototype{$\QQ$ is a field, but $\ZZ$ is not.}
+
Although we won't need to know what a field is until the next chapter,
they're so convenient for examples that I will go ahead and introduce them now.
+
+As you might already know, if the multiplication is invertible,
+then we call the ring a field.
+To be explicit, let me write the relevant definitions.
+
+\begin{definition}
+ \label{def:unit}
+ A \vocab{unit} of a ring $R$
+ is an element $u \in R$ which is invertible:
+ for some $x \in R$ we have $ux = 1_R$.
+\end{definition}
+
+\begin{example}
+ [Examples of units]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The units of $\ZZ$ are $\pm 1$,
+ because these are the only things which ``divide $1$''
+ (which is the reason for the name ``unit'').
+ \ii On the other hand, in $\QQ$ everything is a unit (except $0$).
+ For example, $\frac 35$ is a unit since
+ $\frac 35 \cdot \frac 53 = 1$.
+ \ii The Gaussian integers $\ZZ[i]$ have four units:
+ $\pm 1$ and $\pm i$.
+ \end{enumerate}
+\end{example}
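In a finite ring like $\Zc n$ one can simply search for units by brute force. Here is a small Python sketch (my own illustration; the choice $n = 12$ is mine, and the coprimality characterization is a standard fact from elementary number theory):

```python
from math import gcd

def units(n):
    """The units of Z/n: residues u with u*x = 1 (mod n) for some x."""
    return [u for u in range(n) if any((u * x) % n == 1 for x in range(n))]

print(units(12))  # [1, 5, 7, 11], exactly the residues coprime to 12
print(units(12) == [u for u in range(1, 12) if gcd(u, 12) == 1])  # True
```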
+
+\begin{definition}
+ A nontrivial (commutative) ring is a \vocab{field}
+ when all its nonzero elements are units.
+\end{definition}
+
+Colloquially, we say that
+\begin{moral}
+ A field is a structure where you can add, subtract, multiply, and divide.
+\end{moral}
Depending on context, they are often denoted
$k$, $K$, or $F$.
+
+\begin{example}
+ [First examples of fields]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\QQ$, $\RR$, $\CC$ are fields,
+ since the notion $\frac 1c$ makes sense in them.
 \ii If $p$ is a prime, then $\Zc p$ is a field,
 which we will usually denote by $\FF_p$.
+ \end{enumerate}
+ The trivial ring $0$ is \emph{not} considered a field,
+ since we require fields to be nontrivial.
+\end{example}
+
+%\begin{remark}
+% You might say at this point that ``fields are nicer than rings'',
+% but as you'll see in this chapter, the conditions for
+% being a field are somehow ``too strong''.
+% To give an example of what I mean:
+% if you try to think about the concept of ``divisibility''
+% in $\ZZ$, you've stepped into the vast and bizarre realm of
+% number theory. Try to do the same thing in $\QQ$ and you get nothing:
+% any nonzero $a$ ``divides'' any nonzero $b$
+% because $b = a \cdot \frac ba$.
+%
+% I know at least one person who instead
+% thinks of this as an argument for why people
+% shouldn't care about number theory
+% (studying chaos rather than order).
+%\end{remark}
+
+\section{Homomorphisms}
+\prototype{$\ZZ \to \Zc5$ by modding out by $5$.}
+This section is going to go briskly --
+it's the obvious generalization of all the stuff
+we did with quotient groups.\footnote{I once found an
+ abstract algebra textbook which teaches rings before groups.
+ At the time I didn't understand why,
+ but now I think I get it -- modding out by things in
+ commutative rings is far more natural, and you can start talking
+ about all the various flavors of rings and fields.
+ You also have (in my opinion) more vivid first examples
+ for rings than for groups.
+ I actually sympathize a lot with this approach --- maybe I'll convert
+ Napkin to follow it one day.}
+
+First, we define a homomorphism and isomorphism.
+\begin{definition}
+ Let $R = (R, +_R, \times_R)$ and $S = (S, +_S, \times_S)$ be rings.
+ A \vocab{ring homomorphism} is a map $\phi : R \to S$
+ such that
+ \begin{enumerate}[(i)]
+ \ii $\phi(x +_R y) = \phi(x) +_S \phi(y)$ for each $x,y \in R$.
+ \ii $\phi(x \times_R y) = \phi(x) \times_S \phi(y)$ for each $x,y \in R$.
+ \ii $\phi(1_R) = 1_S$.
+ \end{enumerate}
+ If $\phi$ is a bijection then $\phi$ is an \vocab{isomorphism}
+ and we say that rings $R$ and $S$ are \vocab{isomorphic}.
+\end{definition}
+Just what you would expect.
+The only surprise is that we also demand $\phi(1_R)$ to go to $1_S$.
+This condition is not extraneous:
+consider the map $\ZZ \to \ZZ$ called ``multiply by zero''.
+\begin{example}
+ [Examples of homomorphisms]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The identity map, as always.
+ \ii The map $\ZZ \to \Zc5$ modding out by $5$.
+ \ii The map $\RR[x] \to \RR$ by $p(x) \mapsto p(0)$
+ by taking the constant term.
+ \ii For any ring $R$, there is a trivial ring homomorphism $R \to 0$.
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Non-examples of homomorphisms]
 Because we require $1_R$ to map to $1_S$, some maps that you
+ might have thought were homomorphisms will fail.
+ \begin{enumerate}[(a)]
+ \ii The map $\ZZ \to \ZZ$ by $x \mapsto 2x$
+ is not a ring homomorphism.
+ Aside from the fact it sends $1$ to $2$,
+ it also does not preserve multiplication.
+
+ \ii If $S$ is a nontrivial ring,
+ the map $R \to S$ by $x \mapsto 0$ is not
+ a ring homomorphism, even though it preserves multiplication.
+
+ \ii There is no ring homomorphism $\Zc{2016} \to \ZZ$ at all.
+ \end{enumerate}
+ In particular, whereas for groups $G$ and $H$
+ there was always a trivial group homomorphism sending
+ everything in $G$ to $1_H$, this is not the case for rings.
+\end{example}
+
+\section{Ideals}
+\prototype{The multiples of $5$ are an ideal of $\ZZ$.}
+Now, just like we were able to mod out by groups,
+we'd also like to define quotient rings.
+So once again,
+\begin{definition}
+ The \vocab{kernel} of a ring homomorphism $\phi \colon R \to S$,
+ denoted $\ker \phi$, is the set of $r \in R$ such that $\phi(r) = 0$.
+\end{definition}
+
+In group theory, we were able to characterize the ``normal'' subgroups by a few
+obviously necessary conditions (namely, $gHg\inv = H$).
+We can do the same thing for rings, and it's in fact easier because our operations are commutative.
+
+First, note two obvious facts:
+\begin{itemize}
+ \ii If $\phi(x) = \phi(y) = 0$, then $\phi(x+y) = 0$ as well.
+ So $\ker \phi$ should be closed under addition.
+ \ii If $\phi(x) = 0$, then for any $r \in R$ we have
+ $\phi(rx) = \phi(r)\phi(x) = 0$ too.
+ So for $x \in \ker \phi$ and \emph{any} $r \in R$,
+ we have $rx \in \ker\phi$.
+\end{itemize}
+
+A (nonempty) subset $I \subseteq R$ is called
+an ideal if it satisfies these properties.
+That is,
+\begin{definition}
+ A nonempty subset $I \subseteq R$ is an \vocab{ideal}
+ if it is closed under addition, and for each $x \in I$,
+ $rx \in I$ for all $r \in R$.
+ It is \vocab{proper} if $I \neq R$.
+\end{definition}
+
+Note that in the second condition, $r$ need not be in $I$!
+So this is stronger than merely saying $I$ is closed under multiplication.
+\begin{remark}
+ If $R$ is not commutative, we also need the condition $xr \in I$.
+ That is, the ideal is \emph{two-sided}: it absorbs multiplication
+ from both the left and the right.
+ But since rings in Napkin are commutative
+ we needn't worry about this distinction.
+\end{remark}
+
+\begin{example}
+ [Prototypical example of an ideal]
+ Consider the set $I = 5\ZZ = \{\dots,-10,-5,0,5,10,\dots\}$ as an ideal in $\ZZ$.
+ We indeed see $I$ is the kernel of the ``take mod $5$'' homomorphism:
+ \[ \ZZ \surjto \ZZ/5\ZZ. \]
+ It's clearly closed under addition,
+ but it absorbs multiplication from \emph{all} elements of $\ZZ$:
+ given $15 \in I$, $999 \in \ZZ$, we get $15 \cdot 999 \in I$.
+\end{example}
+
+\begin{exercise}
+ [Mandatory: fields have two ideals]
+ If $K$ is a field, show that $K$ has exactly two ideals.
+ What are they?
+ \label{exer:field_ideal}
+\end{exercise}
+
+Now we claim that these conditions are sufficient.
+More explicitly,
+\begin{theorem}
+ [Ring analog of normal subgroups]
+ Let $R$ be a ring and $I \subsetneq R$.
+ Then $I$ is the kernel of some homomorphism if and only if it's an ideal.
+\end{theorem}
+\begin{proof}
+ It's quite similar to the proof for the normal subgroup thing,
+ and you might try it yourself as an exercise.
+
+ Obviously the conditions are necessary.
+ To see they're sufficient, we \emph{define} a ring by ``cosets''
+ \[ S = \left\{ r + I \mid r \in R \right\}. \]
+ These are the equivalence classes under $r_1 \sim r_2$ if and only if $r_1 - r_2 \in I$
+ (think of this as taking ``mod $I$'').
+ To see that these form a ring, we have to check that the addition
+ and multiplication we put on them is well-defined.
+ Specifically, we want to check that if $r_1 \sim s_1$ and $r_2 \sim s_2$,
+ then $r_1 + r_2 \sim s_1 + s_2$ and $r_1r_2 \sim s_1s_2$.
+ We actually already did the first part
+ -- just think of $R$ and $S$ as abelian
+ groups, forgetting for the moment that we can multiply.
+ The multiplication is more interesting.
+ \begin{exercise}
+ [Recommended]
+ Show that if $r_1 \sim s_1$ and $r_2 \sim s_2$, then $r_1r_2 \sim s_1s_2$.
+ You will need to use the fact that $I$ absorbs multiplication
+ from \emph{any} elements of $R$, not just those in $I$.
+ \end{exercise}
+ Anyways, since this addition and multiplication is well-defined there
+ is now a surjective homomorphism $R \to S$ with kernel exactly $I$.
+\end{proof}
+
+\begin{definition}
+ Given an ideal $I$, we define as above the \vocab{quotient ring}
+ \[ R/I \defeq \left\{ r+I \mid r \in R \right\}. \]
+ It's the ring of these equivalence classes.
+ This ring is pronounced ``$R$ mod $I$''.
+\end{definition}
+\begin{example}[$\ZZ/5\ZZ$]
+ The integers modulo $5$ formed by ``modding out additively by $5$''
+ are the $\Zc 5$ we have already met.
+\end{example}
+But here's an important point:
+just as we don't actually think of $\ZZ/5\ZZ$ as consisting of
+$k + 5\ZZ$ for $k=0,\dots,4$,
+we also don't really want to think about $R/I$ as elements $r+I$.
+The better way to think about it is
+\begin{moral}
+ $R/I$ is the result when we declare that elements of $I$ are all zero;
+ that is, we ``mod out by elements of $I$''.
+\end{moral}
+For example, modding out by $5\ZZ$ means that we consider
+all elements in $\ZZ$ divisible by $5$ to be zero.
+This gives you the usual modular arithmetic!
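To see the ``declare elements of $I$ to be zero'' picture concretely, here is a small Python sketch (illustrative only, not part of the text) verifying that the coset operations on $\ZZ/5\ZZ$ are well-defined: changing a representative by an element of $5\ZZ$ changes neither the sum nor the product.

```python
# Verify well-definedness of addition and multiplication on Z/5Z:
# if r1 ~ s1 and r2 ~ s2 (differing by multiples of 5), then
# r1 + r2 ~ s1 + s2 and r1 * r2 ~ s1 * s2.

def equiv(a, b):
    """a ~ b iff a - b lies in the ideal 5Z."""
    return (a - b) % 5 == 0

for r1 in range(-10, 11):
    for r2 in range(-10, 11):
        s1, s2 = r1 + 5 * 3, r2 - 5 * 7   # other representatives
        assert equiv(r1 + r2, s1 + s2)
        assert equiv(r1 * r2, s1 * s2)
```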
+
+\begin{exercise}
+ Earlier, we wrote $\ZZ[i]$ for the Gaussian integers,
+ which was a slight abuse of notation.
+ Convince yourself that this ring
+ could instead be written as $\ZZ[x] / (x^2+1)$,
+ if we wanted to be perfectly formal.
+ (We will stick with $\ZZ[i]$ though --- it's more natural.)
+
+ Figure out the analogous formalization of $\ZZ[\sqrt[3]{2}]$.
+\end{exercise}
+
+\section{Generating ideals}
+\prototype{In $\ZZ$, the ideals are all of the form $(n)$.}
+
+Let's give you some practice with ideals.
+
+An important piece of intuition is that once an ideal
+contains a unit, it contains $1$, and
+thus must contain the entire ring.
+That's why the notion of ``proper ideal''
+is useful language.
+To expand on that:
+\begin{proposition}
+ [Proper ideal $\iff$ no units]
+ Let $R$ be a ring and $I \subseteq R$ an ideal.
+ Then $I$ is proper (i.e.\ $I \ne R$)
+ if and only if it contains no units of $R$.
+\end{proposition}
+\begin{proof}
+ Suppose $I$ contains a unit $u$, i.e.\ an element $u$
+ with an inverse $u\inv$.
+ Then it contains $u \cdot u\inv = 1$, and thus $I = R$.
+ Conversely, if $I$ contains no units, it is obviously proper.
+\end{proof}
+As a consequence, if $K$ is a field,
+then its only ideals are $(0)$ and $K$
+(this was \Cref{exer:field_ideal}).
+So for our practice purposes, we'll be working with rings that aren't fields.
+
+First practice: $\ZZ$.
+\begin{exercise}
+ Show that the only ideals of $\ZZ$ are precisely those
+ sets of the form $n\ZZ$, where $n$ is a nonnegative integer.
+\end{exercise}
+
+Thus, while ideals of fields are not terribly interesting,
+ideals of $\ZZ$ look eerily like elements of $\ZZ$.
+Let's make this more precise.
+\begin{definition}
+ Let $R$ be a ring.
+ The \vocab{ideal generated} by a set of elements $x_1, \dots, x_n \in R$
+ is denoted by $I = (x_1, x_2, \dots, x_n)$
+ and given by
+ \[ I = \left\{ r_1 x_1 + \dots + r_n x_n \mid r_i \in R \right\}. \]
+ One can think of this as ``the smallest ideal containing all the $x_i$''.
+\end{definition}
+
+The analogy of putting the $\{x_i\}$ in a sealed box and shaking vigorously
+kind of works here too.
+\begin{remark}
+ [Linear algebra digression]
+ If you know linear algebra,
+ you can summarize this as: an ideal is an $R$-submodule of $R$.
+ The ideal $(x_1, \dots, x_n)$ is the submodule spanned by $x_1, \dots, x_n$.
+\end{remark}
+
+In particular, if $I = (x)$ then $I$ consists of exactly the
+``multiples of $x$'', i.e.\ numbers of the form $rx$ for $r \in R$.
+\begin{remark}
+ We can also apply this definition to infinite generating sets,
+ as long as only finitely many of the $r_i$ are not zero
+ (since infinite sums don't make sense in general).
+\end{remark}
+
+\begin{example}[Examples of generated ideals]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii As $(n) = n\ZZ$ for all $n \in \ZZ$,
+ every ideal in $\ZZ$ is of the form $(n)$.
+ \ii In $\ZZ[i]$, we have
+ $(5) = \left\{ 5a + 5b i \mid a,b \in \ZZ \right\}$.
+ \ii In $\ZZ[x]$, the ideal $(x)$ consists of polynomials
+ with zero constant terms.
+ \ii In $\ZZ[x,y]$, the ideal $(x,y)$ again consists
+ of polynomials with zero constant terms.
+ \ii In $\ZZ[x]$, the ideal $(x,5)$ consists of polynomials
+ whose constant term is divisible by $5$.
+ \end{enumerate}
+\end{example}
+\begin{ques}
+ Please check that the set
+ $I = \left\{ r_1 x_1 + \dots + r_n x_n \mid r_i \in R \right\}$
+ is indeed always an ideal (closed under addition,
+ and absorbs multiplication).
+\end{ques}
+Now suppose $I = (x_1, \dots, x_n)$.
+What does $R/I$ look like?
+According to what I said at the end of the last section,
+it's what happens when we ``mod out'' by each of the elements $x_i$.
+For example\dots
+\begin{example}
+ [Modding out by generated ideals]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Let $R = \ZZ$ and $I = (5)$. Then $R/I$ is literally
+ $\ZZ/5\ZZ$, or the ``integers modulo $5$'':
+ it is the result of declaring $5 = 0$.
+ \ii Let $R = \ZZ[x]$ and $I = (x)$.
+ Then $R/I$ means we send $x$ to zero; hence $R/I \cong \ZZ$
+ as given any polynomial $p(x) \in R$,
+ we simply get its constant term.
+ \ii Let $R = \ZZ[x]$ again and now let $I = (x-3)$.
+ Then $R/I$ should be thought of as the quotient when $x-3 \equiv 0$,
+ that is, $x \equiv 3$.
+ So given a polynomial $p(x)$ its image after
+ we mod out should be thought of as $p(3)$.
+ Again $R/I \cong \ZZ$, but in a different way.
+ \ii Finally, let $I = (x-3,5)$.
+ Then $R/I$ not only sends $x$ to three, but also $5$ to zero.
+ So given $p \in R$, we get $p(3) \pmod 5$.
+ Then $R/I \cong \ZZ/5\ZZ$.
+ \end{enumerate}
+\end{example}
+\begin{remark}
+ [Mod notation]
+ By the way, given an ideal $I$ of a ring $R$, it's totally legit to write
+ \[ x \equiv y \pmod I \]
+ to mean that $x-y \in I$.
+ Everything you learned about modular arithmetic carries over.
+\end{remark}
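The computation in part (d) above can be checked mechanically. Below is a small Python sketch (not from the text; the polynomial helpers are ad hoc) verifying that ``send $x \mapsto 3$, then reduce mod $5$'' respects addition and multiplication, and kills both generators of $I = (x-3, 5)$:

```python
# In Z[x]/(x - 3, 5), a polynomial p is identified with p(3) mod 5.
# Check that this reduction map respects the ring operations.

def poly_eval(coeffs, x):
    """Evaluate a polynomial given by coefficients [a0, a1, ...] at x."""
    return sum(a * x**i for i, a in enumerate(coeffs))

def reduce_mod_ideal(coeffs):
    """Image of the polynomial in Z[x]/(x - 3, 5): send x -> 3, then mod 5."""
    return poly_eval(coeffs, 3) % 5

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [1, 2, 0, 4]   # 1 + 2x + 4x^3
q = [7, 0, 1]      # 7 + x^2
assert reduce_mod_ideal(poly_add(p, q)) == (reduce_mod_ideal(p) + reduce_mod_ideal(q)) % 5
assert reduce_mod_ideal(poly_mul(p, q)) == (reduce_mod_ideal(p) * reduce_mod_ideal(q)) % 5
# Both generators of the ideal map to zero, as they should:
assert reduce_mod_ideal([-3, 1]) == 0   # x - 3
assert reduce_mod_ideal([5]) == 0       # 5
```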
+
+\section{Principal ideal domains}
+\prototype{$\ZZ$ is a PID, $\ZZ[x]$ is not.
+$\CC[x]$ is a PID, $\CC[x,y]$ is not.}
+
+What happens if we put multiple generators in an ideal,
+like $(10,15) \subseteq \ZZ$?
+Well, we have by definition that $(10,15)$ is given as a set by
+\[ (10,15) \defeq \left\{ 10x + 15y \mid x,y \in \ZZ \right\}. \]
+If you're good at number theory you'll instantly
+recognize this as $5\ZZ = (5)$.
+Surprise! In $\ZZ$, the ideal $(a,b)$ is exactly $\gcd(a,b) \ZZ$.
+And that's exactly the reason you often see the GCD of two numbers denoted $(a,b)$.
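As a sanity check, here is a short Python sketch (illustrative only) confirming that the $\ZZ$-linear combinations of $10$ and $15$ are exactly the multiples of $\gcd(10,15) = 5$, at least on a window of integers, along with an explicit B\'ezout certificate:

```python
from math import gcd

# In Z, the ideal (a, b) = {ax + by : x, y in Z} equals gcd(a, b) * Z.
# Spot-check this for (10, 15) on a window of integers.
a, b = 10, 15
g = gcd(a, b)                                   # 5
combos = {a * x + b * y for x in range(-30, 31) for y in range(-30, 31)}
window = set(range(-100, 101))
assert combos & window == {n for n in window if n % g == 0}

# A Bezout certificate exhibiting 5 itself as an element of (10, 15):
assert 10 * (-1) + 15 * 1 == 5
```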
+
+We call such an ideal (one generated by a single element) a \vocab{principal ideal}.
+So, in $\ZZ$, every ideal is principal.
+But the same is not true in more general rings.
+\begin{example}
+ [A non-principal ideal]
+ In $\ZZ[x]$, $I = (x,2015)$ is \emph{not} a principal ideal.
+
+ For if $I = (f)$ for some polynomial $f \in I$
+ then $f$ divides $x$ and $2015$.
+ This can only occur if $f = \pm 1$,
+ but then $I$ contains $\pm1$, which it does not.
+\end{example}
+A ring with the property that all its ideals
+are principal is called a \vocab{principal ideal ring}.
+We like this property because they effectively
+let us take the ``greatest common factor''
+in a similar way as the GCD in $\ZZ$.
+
+In practice, we actually usually care about
+so-called \textbf{principal ideal domains (PID's)}.
+But we haven't defined what a domain is yet.
+Nonetheless, all the examples below are actually PID's,
+so we will go ahead and use this word for now,
+and tell you what the additional condition is in the next chapter.
+
+\begin{example}
+ [Examples of PID's]
+ To reiterate, for now you should just verify
+ that these are principal ideal rings,
+ even though we are using the word PID.
+ \begin{enumerate}[(a)]
+ \ii As we saw, $\ZZ$ is a PID.
+
+ \ii As we also saw, $\ZZ[x]$ is not a PID,
+ since $I = (x,2015)$ for example is not principal.
+
+ \ii It turns out that for a field $k$
+ the ring $k[x]$ is always a PID.
+ For example, $\QQ[x]$, $\RR[x]$, $\CC[x]$ are PID's.
+
+ If you want to try and prove this,
+ first prove an analog of B\'{e}zout's lemma,
+ which implies the result.
+
+ \ii $\CC[x,y]$ is not a PID, because $(x,y)$
+ is not principal.
+ \end{enumerate}
+\end{example}
+
+\section{Noetherian rings}
+\prototype{$\ZZ[x_1, x_2, \dots]$ is not Noetherian,
+ but most reasonable rings are.
+ In particular polynomial rings are.
+ (Equivalently, only weirdos care about non-Noetherian rings).}
+
+If it's too much to ask that an ideal is generated by \emph{one} element,
+perhaps we can at least ask that our ideals
+are generated by \emph{finitely many} elements.
+Unfortunately, in certain weird rings this is also not the case.
+\begin{example}
+ [Non-Noetherian ring]
+ Consider the ring $R = \ZZ[x_1, x_2, x_3, \dots]$
+ which has \emph{infinitely} many free variables.
+ Then the ideal $I = (x_1, x_2, \dots) \subseteq R$
+ cannot be written with a finite generating set.
+\end{example}
+Nonetheless, most ``sane'' rings we work in
+\emph{do} have the property that their ideals are finitely generated.
+We now name such rings and give two equivalent definitions:
+\begin{proposition}[The equivalent definitions of a Noetherian ring]
+ For a ring $R$, the following are equivalent:
+ \begin{enumerate}[(a)]
+ \ii Every ideal $I$ of $R$ is finitely generated
+ (i.e.\ can be written with a finite generating set).
+ \ii There does \emph{not} exist an
+ infinite ascending chain of ideals
+ \[ I_1 \subsetneq I_2 \subsetneq I_3 \subsetneq \dots. \]
+ The absence of such chains is often
+ called the \vocab{ascending chain condition}.
+ \end{enumerate}
+ Such rings are called \vocab{Noetherian}.
+\end{proposition}
+
+\begin{example}
+ [Non-Noetherian ring breaks ACC]
+ In the ring $R = \ZZ[x_1, x_2, x_3, \dots]$ we have
+ an infinite ascending chain
+ \[ (x_1) \subsetneq (x_1, x_2) \subsetneq (x_1,x_2,x_3) \subsetneq \dots. \]
+\end{example}
+From the example, you can kind of see why the proposition is true:
+from an infinitely generated ideal you can extract an ascending chain
+by throwing elements in one at a time.
+I'll leave the proof to you if you want to
+do it.\footnote{On the other hand, every undergraduate
+ class in this topic I've seen makes you do it as homework.
+ Admittedly I haven't gone to that many such classes.}
+
+\begin{ques}
+ Why are fields Noetherian?
+ Why are PID's (such as $\ZZ$) Noetherian?
+\end{ques}
+
+This leaves the question:
+is our prototypical non-example of a PID,
+$\ZZ[x]$, a Noetherian ring?
+The answer is a glorious yes,
+according to the celebrated Hilbert basis theorem.
+\begin{theorem}[Hilbert basis theorem]
+ Given a Noetherian ring $R$,
+ the ring $R[x]$ is also Noetherian.
+ Thus by induction, $R[x_1, x_2, \dots, x_n]$ is Noetherian
+ for any positive integer $n$.
+ \label{thm:hilbert_basis}
+\end{theorem}
+The proof of this theorem is really olympiad flavored,
+so I couldn't possibly spoil it -- I've
+left it as a problem at the end of this chapter.
+
+Noetherian rings really shine in algebraic geometry,
+and it's a bit hard for me to motivate them right now,
+other than to say
+``most rings you'll encounter are Noetherian''.
+Please bear with me!
+
+\section{\problemhead}
+
+\begin{problem}
+ The ring $R = \RR[x] / (x^2+1)$ is one that you've seen before.
+ What is its name?
+ \begin{hint}
+ $R = \RR[i]$.
+ \end{hint}
+ \begin{sol}
+ This is just $\RR[i] = \CC$.
+ The isomorphism is given by $x \mapsto i$,
+ which has kernel $(x^2+1)$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Show that $\CC[x] / (x^2-x) \cong \CC \times \CC$.
+ \begin{hint}
+ The isomorphism is given by $x \mapsto (1,0)$
+ and $1-x \mapsto (0,1)$.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ In the ring $\ZZ$, let $I = (2016)$ and $J = (30)$.
+ Show that $I \cap J$ is an ideal of $\ZZ$ and compute its elements.
+\end{problem}
+
+\begin{sproblem}
+ \label{prob:inclusion_preserving}
+ Let $R$ be a ring and $I$ an ideal.
+ Find an inclusion-preserving bijection between
+ \begin{itemize}
+ \ii ideals of $R/I$, and
+ \ii ideals of $R$ which contain $I$.
+ \end{itemize}
+\end{sproblem}
+
+\begin{problem}
+ Let $R$ be a ring.
+ \begin{enumerate}[(a)]
+ \ii Prove that there is exactly one ring homomorphism $\ZZ \to R$.
+ \ii Prove that the number of ring homomorphisms
+ $\ZZ[x] \to R$ is equal to the number of elements of $R$.
+ \end{enumerate}
+ \begin{hint}
+ For (b), a homomorphism is uniquely determined by the choice of $\psi(x) \in R$.
+ \end{hint}
+\end{problem}
+
+%\begin{problem}
+% [$\phi(1_R) = 1_S$ is really necessary]
+% Find two rings $R$ and $S$ and a \emph{nonzero} function
+% $f \colon R \to S$ such that
+% \begin{itemize}
+% \ii $f(x+y) = f(x) + f(y)$ for $x,y \in R$,
+% \ii $f(xy) = f(x) f(y)$ for $x,y \in R$,
+% \ii $f(1_R) \ne 1_S$ (i.e.\ $f$ is not a ring homomorphism).
+% \end{itemize}
+% \begin{hint}
+% Take $S = R \times R$ with $R$ nonzero.
+% \end{hint}
+% \begin{sol}
+% The map $R \to R \times R$ by $x \mapsto (x,0)$ will always work,
+% with $R$ nonzero.
+% \end{sol}
+%\end{problem}
+
+\begin{problem}
+ \gim
+ Prove the Hilbert basis theorem, \Cref{thm:hilbert_basis}.
+\end{problem}
+
+\begin{problem}
+ [USA Team Selection Test 2016]
+ Let $\FF_p$ denote the integers modulo a fixed prime number $p$.
+ Define $\Psi \colon \FF_p[x] \to \FF_p[x]$ by
+ \[ \Psi\left( \sum_{i=0}^n a_i x^i \right) = \sum_{i=0}^n a_i x^{p^i}. \]
+ Let $S$ denote the image of $\Psi$.
+ \begin{enumerate}[(a)]
+ \ii Show that $S$ is a ring with addition
+ given by polynomial addition,
+ and multiplication given by \emph{function composition}.
+ \ii Prove that $\Psi \colon \FF_p[x] \to S$
+ is then a ring isomorphism.
+ \end{enumerate}
+\end{problem}
+
+\begin{problem} % from Brian Chen
+ \yod
+ Let $A \subseteq B \subseteq C$ be rings.
+ Suppose $C$ is a finitely generated $A$-module.
+ Does it follow that $B$ is a finitely generated $A$-module?
+ % Assume $A$ is Noetherian. Show that $B$ is finitely generated as an $A$-module.
+ % Find a counterexample where $A$ is not Noetherian.
+ \begin{hint}
+ I think the result is true if you add the assumption $A$ is Noetherian,
+ so look for trouble by picking $A$ not Noetherian.
+ \end{hint}
+ \begin{sol}
+ Nope! Pick
+ \begin{align*}
+ A &= \ZZ[x_1, x_2, \dots] \\
+ B &= \ZZ[x_1, x_2, \dots, \eps x_1, \eps x_2, \dots] \\
+ C &= \ZZ[x_1, x_2, \dots, \eps].
+ \end{align*}
+ where $\eps \neq 0$ but $\eps^2 = 0$.
+ I think the result is true if you add the assumption $A$ is Noetherian.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/inner-form.tex b/books/napkin/inner-form.tex
new file mode 100644
index 0000000000000000000000000000000000000000..986815563de8d191bf52d0ca8836a70554543118
--- /dev/null
+++ b/books/napkin/inner-form.tex
@@ -0,0 +1,538 @@
+\chapter{Inner product spaces}
+%%fakesection Cheerleading
+It will often turn out that our vector spaces which look more like $\RR^n$
+not only have the notion of addition, but also a notion of \emph{orthogonality}
+and the notion of \emph{distance}.
+All this is achieved by endowing the vector space with a so-called \textbf{inner form},
+which you likely already know as the ``dot product'' for $\RR^n$.
+Indeed, in $\RR^n$ you already know that
+\begin{itemize}
+ \ii $v \cdot w = 0$ if and only if $v$ and $w$ are perpendicular, and
+ \ii $|v|^2 = v \cdot v$.
+\end{itemize}
+The purpose is to quickly set up this structure in full generality.
+Some highlights of the chapter:
+\begin{itemize}
+ \ii We'll see that the high school ``dot product''
+ formulation is actually very natural:
+ it falls out from the two axioms we listed above.
+ If you ever wondered why $\sum a_i b_i$ behaves as nicely as it does,
+ now you'll know.
+ \ii We show how the inner form can be used to make $V$ into a \emph{metric space},
+ giving it more geometric structure.
+ \ii A few chapters later,
+ we'll identify $V \cong V^\vee$ in a way that wasn't possible before,
+ and as a corollary deduce the nice result that
+ symmetric matrices with real entries
+ always have real eigenvalues.
+\end{itemize}
+
+Throughout this chapter, \emph{all vector spaces are over $\CC$ or $\RR$},
+unless otherwise specified.
+We'll generally prefer working over $\CC$ instead of $\RR$ since
+$\CC$ is algebraically closed
+(so, e.g.\ we have Jordan forms).
+Every real matrix can be thought of as a matrix
+with complex entries anyways.
+
+\section{The inner product}
+\prototype{Dot product in $\RR^n$.}
+\subsection{For real numbers: bilinear forms}
+
+First, let's define the inner form for real spaces.
+Rather than the notation $v \cdot w$ it is most customary
+to use $\left< v,w \right>$ for general vector spaces.
+\begin{definition}
+ Let $V$ be a real vector space.
+ A \vocab{real inner form}\footnote{Other
+ names include ``inner product'', ``dot product'',
+ ``positive definite nondegenerate symmetric bilinear form'', \dots}
+ is a function
+ \[ \left< \bullet, \bullet \right> : V \times V \to \RR \]
+ which satisfies the following properties:
+ \begin{itemize}
+ \ii The form is \vocab{symmetric}: for any $v,w \in V$ we have
+ \[ \left< v,w \right> = \left< w,v\right>. \]
+ Of course, one would expect this property from a product.
+
+ \ii The form is \vocab{bilinear}, or \textbf{linear in both arguments},
+ meaning that $\left< -, v\right>$
+ and $\left< v, -\right>$ are linear functions for any fixed $v$.
+ Spelled explicitly this means that
+ \begin{align*}
+ \left< cx, v \right> &= c \left< x,v \right> \\
+ \left< x+y, v \right> &= \left< x,v \right> + \left< y,v \right>.
+ \end{align*}
+ and similarly if $v$ was on the left.
+ This is often summarized by the single equation
+ $\left< cx+y, z \right> = c \left< x,z \right> + \left< y,z \right>$.
+
+ \ii The form is \vocab{positive definite}, meaning $\left< v,v \right>$
+ is a nonnegative real number, and equality $\left< v,v \right> = 0$ takes place only if $v = 0_V$.
+ \end{itemize}
+\end{definition}
+\begin{exercise}
+ Show that linearity in the first argument plus symmetry
+ already gives you linearity in the second argument,
+ so we could edit the above definition
+ by only requiring $\left< -, v\right>$ to be linear.
+\end{exercise}
+
+\begin{example}
+ [$\RR^n$]
+ As we already know, one can define the inner form on $\RR^n$ as follows.
+ Let $e_1 = (1, 0, \dots, 0)$, $e_2 = (0, 1, \dots, 0)$,
+ \dots, $e_n = (0, \dots, 0, 1)$ be the usual basis.
+ Then we let
+ \[
+ \left< a_1 e_1 + \dots + a_n e_n, b_1 e_1 + \dots + b_n e_n \right>
+ \defeq a_1 b_1 + \dots + a_n b_n.
+ \]
+ It's easy to see this is bilinear
+ (symmetric and linear in both arguments).
+ To see it is positive definite,
+ note that if $a_i = b_i$
+ then the dot product is $a_1^2 + \dots + a_n^2$,
+ which is zero exactly when all $a_i$ are zero.
+\end{example}
+
+\subsection{For complex numbers: sesquilinear forms}
+The definition for a complex product space is similar, but has one difference:
+rather than symmetry we instead have \emph{conjugate symmetry}
+meaning $\left< v, w \right> = \ol{ \left< w,v \right> }$.
+Thus, while we still have linearity in the first argument,
+we actually have a different linearity for the second argument.
+To be explicit:
+\begin{definition}
+ Let $V$ be a complex vector space.
+ A \vocab{complex inner product} is a function
+ \[ \left< \bullet, \bullet \right> : V \times V \to \CC \]
+ which satisfies the following properties:
+ \begin{itemize}
+ \ii The form has \vocab{conjugate symmetry}, which means that
+ for any $v,w \in V$ we have
+ \[ \left< v,w \right> = \ol{\left< w,v\right>}. \]
+
+ \ii The form is \vocab{sesquilinear}
+ (the name means ``one-and-a-half linear'').
+ This means that:
+ \begin{itemize}
+ \ii The form is \textbf{linear in the first argument}, so again we have
+ \begin{align*}
+ \left< x+y, v \right> &= \left< x,v \right> + \left< y,v \right> \\
+ \left< cx, v \right> &= c\left< x,v \right>.
+ \end{align*}
+ Again this is often abbreviated to the single line
+ $\left< cx+y, v \right> = c \left< x,v \right> + \left< y,v \right>$
+ in the literature.
+
+ \ii However, it is now \textbf{\vocab{anti-linear}
+ in the second argument}:
+ for any complex number $c$ and vectors $x$ and $y$ we have
+ \begin{align*}
+ \left< v, x+y\right> &= \left< v, x\right> + \left< v,y \right> \\
+ \left< v, cx \right> &= \ol c \left< v, x\right>.
+ \end{align*}
+ Note the appearance of the complex conjugate $\ol c$,
+ which is new!
+ Again, we can abbreviate this to just
+ $\left< v, cx+y \right> = \ol c \left< v,x \right> + \left< v,y\right>$
+ if we only want to write one equation.
+ \end{itemize}
+
+ \ii The form is \vocab{positive definite},
+ meaning $\left< v,v \right>$ is a nonnegative real number,
+ and equals zero exactly when $v = 0_V$.
+ \end{itemize}
+\end{definition}
+\begin{exercise}
+ Show that anti-linearity follows
+ from conjugate symmetry plus linearity in the first argument.
+\end{exercise}
+
+\begin{example}
+ [$\CC^n$]
+ The dot product in $\CC^n$ is defined as follows:
+ let $\ee_1$, $\ee_2$, \dots, $\ee_n$ be the standard basis.
+ For complex numbers $w_i$, $z_i$ we set
+ \[
+ \left< w_1 \ee_1 + \dots + w_n \ee_n, z_1 \ee_1 + \dots + z_n \ee_n \right>
+ \defeq w_1\ol{z_1} + \dots + w_n \ol{z_n}.
+ \]
+\end{example}
+\begin{ques}
+ Check that the above is in fact a complex inner form.
+\end{ques}
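Here is a quick numeric sanity check (a Python sketch, not part of the text) of conjugate symmetry and positive definiteness for this dot product, together with the failure of the naive unconjugated sum:

```python
# The complex dot product <w, z> = sum of w_i * conj(z_i).

def inner(w, z):
    return sum(wi * zi.conjugate() for wi, zi in zip(w, z))

w = [1 + 2j, 3 - 1j]
z = [2 - 1j, 1j]

# Conjugate symmetry: <w, z> = conj(<z, w>).
assert inner(w, z) == inner(z, w).conjugate()

# <v, v> is a nonnegative real number.
v = [3 - 4j, 1 + 1j]
assert inner(v, v).imag == 0 and inner(v, v).real > 0

# The naive sum w_i * z_i (no conjugate) fails: <v, v> need not be real.
naive = sum(vi * vi for vi in v)
assert naive.imag != 0
```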
+
+\subsection{Inner product space}
+It'll be useful to treat both types of spaces simultaneously:
+\begin{definition}
+ An \vocab{inner product space} is either a real vector space
+ equipped with a real inner form,
+ or a complex vector space equipped with a complex inner form.
+
+ A linear map between inner product spaces
+ is a map between the underlying vector spaces
+ (we do \emph{not} require any compatibility with the inner form).
+\end{definition}
+
+\begin{remark}
+ [Why sesquilinear?]
+ The above example explains one reason why we want
+ to satisfy conjugate symmetry rather than just symmetry.
+ If we had tried to define the dot product as $\sum w_i z_i$,
+ then we would have lost the condition of being positive definite,
+ because there is no guarantee that
+ $\left< v,v \right> = \sum z_i^2$ will even be a real number at all.
+ On the other hand, with conjugate symmetry
+ we actually enforce $\left< v,v \right> = \ol{\left< v,v \right>}$,
+ i.e.\ $\left< v,v \right> \in \RR$ for every $v$.
+
+ Let's make this point a bit more forcefully.
+ Suppose we tried to put a bilinear form $\left< -, -\right>$,
+ on a \emph{complex} vector space $V$.
+ Let $e$ be any vector with $\left< e, e \right> = 1$ (a unit vector).
+ Then we would instead get
+ $\left< ie, ie \right> = - \left< e,e \right> = -1$;
+ this is a vector with length $\sqrt{-1}$, which is not okay!
+ That's why it is important that,
+ when we have a complex inner product space,
+ our form is sesquilinear, not bilinear.
+\end{remark}
+
+Now that we have a dot product,
+we can talk both about the norm and orthogonality.
+
+\section{Norms}
+\prototype{$\RR^n$ becomes its usual Euclidean space with the vector norm.}
+
+The inner form equips our vector space with a notion of distance, which we call the norm.
+\begin{definition}
+ Let $V$ be an inner product space.
+ The \vocab{norm} of $v \in V$ is defined by
+ \[ \norm{v} = \sqrt{\left< v,v \right>}. \]
+ This definition makes sense because
+ we assumed our form to be positive definite,
+ so $\left< v,v\right>$ is a nonnegative real number.
+\end{definition}
+
+\begin{example}[$\RR^n$ and $\CC^n$ are normed vector spaces]
+ When $V = \RR^n$ or $V = \CC^n$ with the standard dot product norm,
+ then the norm of $v$ corresponds to the absolute value that we are used to.
+\end{example}
+
+Our goal now is to prove that
+\begin{moral}
+ With the metric $d(v,w) = \norm{v-w}$, $V$ becomes a metric space.
+\end{moral}
+\begin{ques}
+ Verify that $d(v,w) = 0$ if and only if $v = w$.
+\end{ques}
+So we just have to establish the triangle inequality.
+Let's now prove something we all know and love,
+which will be a stepping stone later:
+\begin{lemma}
+ [Cauchy-Schwarz]
+ Let $V$ be an inner product space.
+ For any $v,w \in V$ we have
+ \[ \left\lvert \left< v,w\right> \right\rvert
+ \le \norm{v} \norm{w} \]
+ with equality if and only if $v$ and $w$ are linearly dependent.
+\end{lemma}
+\begin{proof}
+ The theorem is immediate if $\left< v,w\right> = 0$.
+ It is also immediate if $\norm{v} \norm{w} = 0$,
+ since then one of $v$ or $w$ is the zero vector.
+ So henceforth we assume all these quantities are nonzero
+ (as we need to divide by them later).
+
+ The key to the proof is to think about the equality case:
+ we'll use the inequality $\left< cv-w, cv-w\right> \ge 0$.
+ Deferring the choice of $c$ until later, we compute
+ \begin{align*}
+ 0 &\le \left< cv-w, cv-w \right> \\
+ &= \left< cv, cv\right> - \left< cv, w\right> - \left< w, cv\right> + \left< w,w \right> \\
+ &= |c|^2 \left< v,v \right> - c \left< v,w \right> - \ol c \left< w,v \right> + \left< w,w \right> \\
+ &= |c|^2 \norm{v}^2 + \norm{w}^2 - c \left< v,w \right> - \ol{c \left< v,w\right> } \\
+ 2 \Re \left[ c \left< v,w \right> \right] &\le |c|^2 \norm{v}^2 + \norm{w}^2 \\
+ \intertext{At this point, a good choice of $c$ is}
+ c &= \frac{ \norm w}{\norm v} \cdot \frac{|\left< v,w\right>|}{\left< v,w\right>} \\
+ \intertext{since then}
+ c \left< v,w \right> &= \frac{\norm w}{\norm v} \left\lvert \left< v,w\right> \right\rvert \in \RR \\
+ |c| &= \frac{\norm w}{\norm v} \\
+ \intertext{whence the inequality becomes}
+ 2\frac{\norm w}{\norm v} \left\lvert \left< v,w\right> \right\rvert &\le 2 \norm{w}^2 \\
+ \left\lvert \left< v,w\right> \right\rvert &\le \norm v \norm w. \qedhere
+ \end{align*}
+\end{proof}
+Thus:
+\begin{theorem}
+ [Triangle inequality]
+ We always have
+ \[ \norm v + \norm w \ge \norm{v+w} \]
+ with equality if and only if $v$ and $w$ are linearly dependent
+ and point in the same direction.
+\end{theorem}
+\begin{exercise}
+ Prove this by squaring both sides, and applying Cauchy-Schwarz.
+\end{exercise}
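Both inequalities are easy to spot-check numerically. Here is a throwaway Python sketch (random vectors in $\RR^4$, with small tolerances for floating point):

```python
import math
import random

# Spot-check Cauchy-Schwarz and the triangle inequality for the
# standard dot product on R^n.

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

random.seed(0)
for _ in range(1000):
    v = [random.uniform(-5, 5) for _ in range(4)]
    w = [random.uniform(-5, 5) for _ in range(4)]
    assert abs(dot(v, w)) <= norm(v) * norm(w) + 1e-9
    assert norm([a + b for a, b in zip(v, w)]) <= norm(v) + norm(w) + 1e-9

# Equality in Cauchy-Schwarz when v and w are linearly dependent:
v = [1.0, 2.0, -3.0]
w = [2.0 * x for x in v]
assert abs(abs(dot(v, w)) - norm(v) * norm(w)) < 1e-9
```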
+
+In this way, our vector space now has the topological structure of a metric space.
+
+\section{Orthogonality}
+\prototype{Still $\RR^n$!}
+Our next goal is to give the geometric notion of ``perpendicular''.
+The definition is easy enough:
+\begin{definition}
+ Two nonzero vectors $v$ and $w$ in an inner product space
+ are \vocab{orthogonal} if $\left< v,w \right> = 0$.
+\end{definition}
+
+As we expect from our geometric intuition in $\RR^n$,
+this implies independence:
+\begin{lemma}[Orthogonal vectors are independent]
+ Any set of pairwise orthogonal vectors $v_1$, $v_2$, \dots, $v_n$,
+ with $\norm{v_i} \ne 0$ for each $i$,
+ is linearly independent.
+\end{lemma}
+\begin{proof}
+ Consider a dependence
+ \[ a_1 v_1 + \dots + a_n v_n = 0 \]
+ for $a_i$ in $\RR$ or $\CC$.
+ Then \[ 0 = \left< v_1, \sum a_i v_i \right> = \ol{a_1} \norm{v_1}^2. \]
+ Hence $a_1 = 0$, since we assumed $\norm{v_1} \neq 0$.
+ Similarly $a_2 = \dots = a_n = 0$.
+\end{proof}
+
+In light of this, we can now consider a stronger condition on our bases:
+\begin{definition}
+ An \vocab{orthonormal} basis of a
+ \emph{finite-dimensional} inner product space $V$
+ is a basis $e_1$, \dots, $e_n$ such that
+ $\norm{e_i} = 1$ for every $i$ and
+ $\left< e_i, e_j \right> = 0$ for any $i \neq j$.
+\end{definition}
+\begin{example}[$\RR^n$ and $\CC^n$ have standard bases]
+ In $\RR^n$ and $\CC^n$ equipped with the standard dot product,
+ the standard basis $\ee_1$, \dots, $\ee_n$ is also orthonormal.
+\end{example}
+This is no loss of generality:
+\begin{theorem}[Gram-Schmidt]
+ Let $V$ be a finite-dimensional inner product space.
+ Then it has an orthonormal basis.
+\end{theorem}
+\begin{proof}[Sketch of Proof]
+ One constructs the orthonormal basis explicitly from any basis
+ $e_1$, \dots, $e_n$ of $V$.
+ Define $\opname{proj}_u(v) = \frac{\left< v,u\right>}{\left< u,u\right>} u$.
+ Then recursively define
+ \begin{align*}
+ u_1 &= e_1 \\
+ u_2 &= e_2 - \opname{proj}_{u_1}(e_2) \\
+ u_3 &= e_3 - \opname{proj}_{u_1}(e_3) - \opname{proj}_{u_2}(e_3) \\
+ &\vdotswithin{=} \\
+ u_n &= e_n - \opname{proj}_{u_1}(e_n) - \dots - \opname{proj}_{u_{n-1}}(e_n).
+ \end{align*}
+ One can show the $u_i$ are pairwise orthogonal and not zero.
+\end{proof}
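The recursion above translates directly into code. Here is a minimal Python sketch (for $\RR^n$ with the standard dot product; the function names are mine) that runs the Gram-Schmidt recursion, then normalizes and checks that the output is orthonormal:

```python
import math

# Gram-Schmidt for R^n with the standard dot product, as in the proof
# sketch: u_k = e_k - proj_{u_1}(e_k) - ... - proj_{u_{k-1}}(e_k).

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def proj(u, v):
    """proj_u(v) = (<v,u> / <u,u>) u"""
    c = dot(v, u) / dot(u, u)
    return [c * x for x in u]

def gram_schmidt(basis):
    us = []
    for e in basis:
        u = list(e)
        for prev in us:
            p = proj(prev, e)                       # subtract each projection of e
            u = [a - b for a, b in zip(u, p)]
        us.append(u)
    # Normalize each u_i to obtain an orthonormal basis.
    return [[x / math.sqrt(dot(u, u)) for x in u] for u in us]

ons = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(ons[i], ons[j]) - expected) < 1e-9
```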
+Thus, we can generally assume our bases are orthonormal.
+
+Worth remarking:
+\begin{example}[The dot product is the ``only'' inner form]
+ Let $V$ be a finite-dimensional inner product space,
+ and consider \emph{any} orthonormal basis $e_1, \dots, e_n$.
+ Then we have that
+ \[ \left< a_1 e_1 + \dots + a_n e_n,
+ b_1 e_1 + \dots + b_n e_n \right>
+ = \sum_{i,j=1}^n a_i\ol{b_j} \left< e_i, e_j \right>
+ = \sum_{i=1}^n a_i \ol{b_i} \]
+ owing to the fact that the $\{e_i\}$ are orthonormal.
+\end{example}
+And now you know why the dot product expression is so ubiquitous.
+
+\section{Hilbert spaces}
+In algebra we are usually scared of infinity,
+and so when we defined a basis of a vanilla vector space many chapters ago,
+we only allowed finite linear combinations.
+However, if we have an inner product space,
+then it is a metric space and we \emph{can}
+sometimes actually talk about convergence.
+
+Here is how it goes:
+\begin{definition}
+ A \vocab{Hilbert space} is an inner product space $V$,
+ such that the corresponding metric space is complete.
+\end{definition}
+
+In that case, it will now often make sense to take infinite linear combinations,
+because we can look at the sequence of partial sums and let it converge.
+Here is how we might do it.
+Let's suppose we have $e_1$, $e_2$, \dots an infinite sequence
+of vectors with norm $1$ and which are pairwise orthogonal.
+Suppose $c_1$, $c_2$, \dots, is a sequence of real or complex numbers.
+Then consider the sequence
+\begin{align*}
+ v_1 &= c_1 e_1 \\
+ v_2 &= c_1 e_1 + c_2 e_2 \\
+ v_3 &= c_1 e_1 + c_2 e_2 + c_3 e_3 \\
+ &\vdotswithin=
+\end{align*}
+
+\begin{proposition}
+ [Convergence criteria in a Hilbert space]
+ The sequence $(v_i)$ defined above
+ converges if and only if $\sum \left\lvert c_i \right\rvert^2 < \infty$.
+\end{proposition}
+\begin{proof}
+ This will make more sense if you read \Cref{ch:calc_limits},
+ so you could skip this proof if you haven't read the chapter.
+ The sequence $v_i$ converges if and only if it is Cauchy,
+ meaning that when $i < j$,
+ \[ \norm{v_j - v_i}^2 = |c_{i+1}|^2 + \dots + |c_j|^2 \]
+ tends to zero as $i$ and $j$ get large.
+ This is equivalent to the sequence
+ $s_n = |c_1|^2 + \dots + |c_n|^2$ being Cauchy.
+
+ Since $\RR$ is complete, $s_n$ is Cauchy
+ if and only if it converges.
+ Since $s_n$ consists of nonnegative real numbers,
+ convergence holds if and only if $s_n$ is bounded,
+ or equivalently if $\sum \left\lvert c_i \right\rvert^2 < \infty$.
+\end{proof}
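To make the criterion concrete, compare $c_k = 1/k$ (square-summable) against $c_k = 1/\sqrt k$ (not square-summable). Since $\norm{v_j - v_i}^2 = |c_{i+1}|^2 + \dots + |c_j|^2$, we can watch the tails directly. This numerical sketch is my own illustration, not part of the proof:

```python
def tail_norm_sq(c, i, j):
    """||v_j - v_i||^2 = |c_{i+1}|^2 + ... + |c_j|^2
    for coefficients given by the function c(k)."""
    return sum(c(k) ** 2 for k in range(i + 1, j + 1))

# c_k = 1/k: sum of squares converges, so (v_n) is Cauchy.
summable = lambda k: 1.0 / k
# c_k = 1/sqrt(k): the squared tail from n to 2n stays near log 2.
not_summable = lambda k: k ** -0.5
```

Here `tail_norm_sq(summable, n, 2*n)` shrinks to $0$, while `tail_norm_sq(not_summable, n, 2*n)` hovers near $\log 2 \approx 0.693$ forever, so that sequence of partial sums is not Cauchy.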
+
+Thus, when we have a Hilbert space, we change our definition slightly:
+\begin{definition}
+ An \vocab{orthonormal basis} for a Hilbert space $V$
+ is a (possibly infinite) sequence $e_1$, $e_2$, \dots,
+ of vectors such that
+ \begin{itemize}
+ \ii $\left< e_i, e_i \right> = 1$ for all $i$,
+ \ii $\left< e_i, e_j \right> = 0$ for $i \ne j$,
+ i.e.\ the vectors are pairwise orthogonal
+ \ii every element of $V$ can be expressed uniquely as an
+ infinite linear combination
+ \[ \sum_i c_i e_i \]
+ where $\sum_i \left\lvert c_i \right\rvert^2 < \infty$,
+ as described above.
+ \end{itemize}
+\end{definition}
+That's the official definition, anyways.
+(Note that if $\dim V < \infty$, this agrees with our usual definition,
+since then there are only finitely many $e_i$.)
+But for our purposes you can mostly not worry about it and instead think:
+\begin{moral}
+ A Hilbert space is an inner product space
+ whose basis requires infinite linear combinations,
+ not just finite ones.
+\end{moral}
+The technical condition $\sum \left\lvert c_i \right\rvert^2 < \infty$
+is exactly the one which ensures the infinite sum makes sense.
+
+\section{\problemhead}
+
+\begin{problem}
+ [Pythagorean theorem]
+ Show that if $\left< v,w \right> = 0$ in an inner product space,
+ then $\norm{v}^2 + \norm{w}^2 = \norm{v+w}^2$.
+\end{problem}
+
+\begin{sproblem}
+ [Finite-dimensional $\implies$ Hilbert]
+ Show that a finite-dimensional inner product space
+ is a Hilbert space.
+ \begin{hint}
+ Fix an orthonormal basis $e_1$, \dots, $e_n$.
+ Use the fact that $\RR^n$ is complete.
+ \end{hint}
+\end{sproblem}
+
+%\begin{problem}
+% Let $V$ be a complex inner product space
+% with orthonormal basis $e_1$, \dots, $e_n$.
+% Suppose $x = a_1 e_1 + \dots + a_n e_n \in V$
+% and $y = b_1 e_1 + \dots + b_n e_n \in V$.
+% Show:
+% \begin{align*}
+% \left< x,x \right> &= \sum_\xi |a_i|^2 \\
+% a_1 &= \left< x, e_1 \right> \\
+% \left< x,y \right> &= \sum_i a_i \ol{b_i}.
+% \end{align*}
+%\end{problem}
+
+\begin{problem}[Taiwan IMO camp]
+ \gim
+ In a town there are $n$ people and $k$ clubs.
+ Each club has an odd number of members,
+ and any two clubs have an even number of common members.
+ Prove that $k \le n$.
+ \begin{hint}
+ Dot products in $\FF_2$.
+ \end{hint}
+ \begin{sol}
+ Interpret clubs as vectors in the vector space $\mathbb F_2^n$.
+ Consider a ``dot product''
+ to show that all $k$ vectors are linearly independent:
+ any two different club-vectors have dot product $0$ (even intersection),
+ while each club-vector has dot product $1$ with itself (odd size);
+ so if $\sum_i c_i v_i = 0$, taking the dot product with $v_j$ gives $c_j = 0$.
+ Thus $k \le \dim \mathbb F_2^n = n$.
+ \end{sol}
+\end{problem}
+
+\begin{sproblem}[Inner product structure of tensors]
+ \label{prob:inner_prod_tensor}
+ Let $V$ and $W$ be finite-dimensional inner product spaces over $k$,
+ where $k$ is either $\RR$ or $\CC$.
+ \begin{enumerate}[(a)]
+ \ii Find a canonical way to make $V \otimes_k W$ into an inner product space too.
+ \ii Let $e_1$, \dots, $e_n$ be an orthonormal basis of $V$
+ and $f_1$, \dots, $f_m$ be an orthonormal basis of $W$.
+ What's an orthonormal basis of $V \otimes W$?
+ \end{enumerate}
+ \begin{hint}
+ Define it on simple tensors then extend linearly.
+ \end{hint}
+ \begin{sol}
+ The inner form given by
+ \[ \left< v_1 \otimes w_1 , v_2 \otimes w_2 \right>_{V \otimes W}
+ = \left< v_1,v_2 \right>_V \left< w_1,w_2\right>_W \]
+ on pure tensors, then extending linearly.
+ For (b) take $e_i \otimes f_j$ for $1 \le i \le n$, $1 \le j \le m$.
+ \end{sol}
+\end{sproblem}
+
+\begin{problem}[Putnam 2014]
+ \gim
+ Let $n$ be a positive integer.
+ What is the largest $k$ for which there exist
+ $n\times n$ matrices $M_1,\dots,M_k$ and $N_1,\dots,N_k$
+ with real entries such that for all $i$ and $j$,
+ the matrix product $M_i N_j$ has a zero entry somewhere
+ on its diagonal if and only if $i \ne j?$
+ \begin{hint}
+ $k = n^n$.
+ Endow tensor products with an inner form.
+ Note that ``zero entry somewhere on its diagonal''
+ is equivalent to the product of those entries being zero.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [Sequence space]
+ Consider the space $\ell^2$ of infinite sequences of real numbers
+ $a = (a_1, a_2, \dots)$ satisfying $\sum_i a_i^2 < \infty$.
+ We equip it with the dot product
+ \[ \left< a, b\right> = \sum_i a_i b_i. \]
+ Is this a Hilbert space?
+ If so, identify a Hilbert basis.
+\end{problem}
+
+\begin{problem}
+ [Kuratowski embedding]
+ A \vocab{Banach space} is a normed vector space $V$,
+ such that the corresponding metric space is complete.
+ (So a Hilbert space is a special case of a Banach space.)
+
+ Let $(M,d)$ be any metric space.
+ Prove that there exists a Banach space $X$
+ and an injective function $f \colon M \injto X$
+ such that $d(x,y) = \norm{f(x)-f(y)}$ for any $x$ and $y$.
+\end{problem}
diff --git a/books/napkin/integrate.tex b/books/napkin/integrate.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6dfd7086cc91873a86cd957eaadce7eea50c45f8
--- /dev/null
+++ b/books/napkin/integrate.tex
@@ -0,0 +1,539 @@
+\chapter{Riemann integrals}
+\begin{quote}
+ \footnotesize
+ ``Trying to Riemann integrate discontinuous
+ functions is kind of outdated.'' \\
+ --- Dennis Gaitsgory, \cite{ref:55b}
+\end{quote}
+
+We will go ahead and define the Riemann integral,
+but we won't do very much with it.
+The reason is that the Lebesgue integral is basically better,
+so we will define it, check the fundamental theorem of calculus
+(or rather, leave it as a problem at the end of the chapter),
+and then always use Lebesgue integrals forever after.
+
+\section{Uniform continuity}
+\prototype{$f(x) = x^2$ is not uniformly continuous on $\RR$,
+but functions on compact sets are always uniformly continuous.}
+\begin{definition}
+ Let $f \colon M \to N$ be a continuous
+ map between two metric spaces.
+ We say that $f$ is \vocab{uniformly continuous} if
+ for all $\eps > 0$ there exists a $\delta > 0$ such that
+ \[ d_M(p,q) < \delta \implies d_N(f(p), f(q)) < \eps. \]
+\end{definition}
+The difference is that given an $\eps > 0$, we must specify a $\delta > 0$
+which works for \emph{every} choice of inputs $p$ and $q$;
+whereas usually $\delta$ is allowed to depend on $p$ and $q$.
+(Also, this definition can't be ported
+to a general topological space.)
+
+\begin{example}
+ [Uniform continuity failure]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The function $f \colon \RR \to \RR$ by $x \mapsto x^2$
+ is not uniformly continuous.
+ Suppose we take $\eps = 0.1$ for example.
+ There is no $\delta$ such that if $|x-y| < \delta$
+ then $|x^2-y^2| < 0.1$,
+ since as $x$ and $y$ get large,
+ the function $f$ becomes increasingly sensitive
+ to small changes.
+
+ \ii The function $(0,1) \to \RR$ by $x \mapsto x\inv$
+ is not uniformly continuous.
+
+ \ii The function $\RR_{>0} \to \RR$ by $x \mapsto \sqrt x$
+ does turn out to be uniformly continuous
+ (despite having unbounded derivatives!).
+ Indeed, you can check that the assertion
+ \[ \left\lvert x-y \right\rvert < \eps^2
+ \implies \left\lvert \sqrt x - \sqrt y \right\rvert
+ < \eps \]
+ holds for any $x, y, \eps > 0$.
+ \end{enumerate}
+\end{example}
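One way to see the failure for $x \mapsto x^2$ numerically: fix a candidate $\delta$ and measure the worst gap $|f(x) - f(y)|$ over pairs with $|x - y| < \delta$ as the inputs grow. This sketch (my own, not from the text) contrasts it with $\sqrt x$:

```python
def worst_gap(f, delta, xs):
    """Largest |f(x + delta/2) - f(x)| over the sample points xs.
    Each such input pair is within delta of each other, so if this
    quantity is unbounded in xs, no single delta works for a fixed eps."""
    return max(abs(f(x + delta / 2) - f(x)) for x in xs)
```

With $\delta = 0.01$, the gap for $x^2$ stays below $\eps = 0.1$ while $x \le 9$ but blows past it once $x$ is in the hundreds; for $\sqrt x$ the gap stays tiny no matter how far out we sample.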
+
+The good news is that in the compact case all is well.
+\begin{theorem}
+ [Uniform continuity free for compact spaces]
+ Let $M$ be a compact metric space.
+ Then any continuous map $f \colon M \to N$ is
+ also uniformly continuous.
+\end{theorem}
+\begin{proof}
+ Assume for contradiction there is some bad $\eps > 0$.
+ Then taking $\delta = 1/n$,
+ we find that for each integer $n$
+ there exists points $p_n$ and $q_n$
+ which are within $1/n$ of each other,
+ but are mapped more than $\eps$ away from each other by $f$.
+ In symbols, $d_M(p_n, q_n) < 1/n$ but $d_N(f(p_n), f(q_n)) > \eps$.
+
+ By compactness of $M$,
+ we can find a convergent subsequence
+ $p_{i_1}$, $p_{i_2}$, \dots\ converging to some $x \in M$.
+ Since each $q_{i_n}$ is within $1/i_n$ of $p_{i_n}$,
+ it ought to converge as well, to the same point $x \in M$.
+ Then the sequences $f(p_{i_n})$ and $f(q_{i_n})$
+ should both converge to $f(x) \in N$,
+ but this is impossible as they are always $\eps$
+ away from each other.
+\end{proof}
+This means for example that $x^2$ viewed
+as a continuous function $[0,1] \to \RR$ is automatically
+uniformly continuous.
+Man, isn't compactness great?
+
+\section{Dense sets and extension}
+\prototype{Functions from $\QQ \to N$ extend to $\RR \to N$
+if they're uniformly continuous and $N$ is complete.
+See also counterexamples below.}
+
+\begin{definition}
+ Let $S$ be a subset (or subspace) of a topological space $X$.
+ Then we say that $S$ is \vocab{dense}
+ if every open subset of $X$ contains a point of $S$.
+\end{definition}
+
+\begin{example}
+ [Dense sets]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\QQ$ is dense in $\RR$.
+ \ii In general, any metric space $M$ is dense
+ in its completion $\ol M$.
+ \end{enumerate}
+\end{example}
+
+Dense sets lend themselves to having functions extended.
+The idea is that if I have a continuous function
+$f \colon \QQ \to N$, for some metric space $N$,
+then there should be \emph{at most} one way to extend it to a function
+$\wt f \colon \RR \to N$.
+For we can approximate each real number by rational numbers:
+if I know $f(1)$, $f(1.4)$, $f(1.41)$, \dots, then
+$\wt f(\sqrt2)$ had better be the limit of this sequence.
+So it is certainly unique.
+
+However, there are two ways this could go wrong:
+\begin{example}
+ [Non-existence of extension]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii It could be that $N$ is not complete,
+ so the limit may not even exist in $N$.
+ For example if $N = \QQ$,
+ then certainly there is no way to
+ extend even the identity function $f \colon \QQ \to N$
+ to a function $\wt f \colon \RR \to N$.
+
+ \ii Even if $N$ was complete, we might run into issues
+ where $f$ explodes.
+ For example, let $N = \RR$ and define
+ \[ f(x) = \frac{1}{x-\sqrt2} \qquad f \colon \QQ \to \RR. \]
+ There is also no way to extend this
+ due to the explosion of $f$ near $\sqrt2 \notin \QQ$,
+ which would cause $\wt f(\sqrt2)$ to be undefined.
+ \end{enumerate}
+\end{example}
+However, the way to fix this is to require $f$ to be uniformly continuous,
+and in that case we do get a unique extension.
+
+\begin{theorem}
+ [Extending uniformly continuous functions]
+ Let $M$ be a metric space, $N$ a \emph{complete} metric space,
+ and $S$ a dense subspace of $M$.
+ Suppose $\psi \colon S \to N$ is a \emph{uniformly} continuous function.
+ Then there exists a unique continuous function $\wt \psi \colon M \to N$
+ such that the diagram
+ \begin{center}
+ \begin{tikzcd}
+ M \ar[r, "\wt \psi"] & N \\
+ S \ar[u, hook] \ar[ru, "\psi"'] &
+ \end{tikzcd}
+ \end{center}
+ commutes.
+\end{theorem}
+\begin{proof}
+ [Outline of proof]
+ As mentioned in the discussion,
+ each $x \in M$ can be approximated by a
+ sequence $x_1$, $x_2$, \dots\ in $S$ with $x_i \to x$.
+ The two main hypotheses, completeness and uniform continuity,
+ are now used:
+ \begin{exercise}
+ Prove that $\psi(x_1)$, $\psi(x_2)$, \dots\ converges in $N$
+ by using uniform continuity to show that it is Cauchy,
+ and then appealing to completeness of $N$.
+ \end{exercise}
+ Hence we define $\wt \psi(x)$ to be the limit of that sequence;
+ this doesn't depend on the choice of sequence,
+ and one can use sequential continuity to show $\wt \psi$ is continuous.
+\end{proof}
+
+\section{Defining the Riemann integral}
+Extensions will allow us to define the Riemann integral.
+I need to introduce a bit of notation so bear with me.
+
+\begin{definition}
+ Let $[a,b]$ be a closed interval.
+ \begin{itemize}
+ \ii We let $C^0([a,b])$ denote the set of
+ continuous functions on $[a,b] \to \RR$.
+ \ii We let $R([a,b])$ denote the set of
+ \textbf{rectangle functions} on $[a,b] \to \RR$.
+ These are the functions which are constant on the intervals
+ $[t_0,t_1)$, $[t_1, t_2)$, $[t_2, t_3)$, \dots, $[t_{n-2}, t_{n-1})$, %chktex 9
+ and also $[t_{n-1}, t_n]$,
+ for some $a = t_0 < t_1 < t_2 < \dots < t_n = b$.
+ \ii We let $M([a,b]) = C^0([a,b]) \cup R([a,b])$.
+ \end{itemize}
+\end{definition}
+Warning: only $C^0([a,b])$ is common notation,
+and the other two are made up.
+
+See the picture below for a typical rectangle function.
+(It is irritating that we have to officially assign a single
+value to each $t_i$,
+even though there are naturally two values we want to use,
+and so we use the convention of letting
+the left endpoint be closed).
+\begin{center}
+\begin{asy}
+ import graph;
+ graph.xaxis();
+ graph.yaxis();
+ label("$a=t_0$", (-3,0), dir(-90), red);
+ label("$b=t_4$", ( 3,0), dir(-90), red);
+ pen b = blue + 1.2;
+ draw( (-3,3)--(-1.2,3), b );
+ draw( (-1.2,1)--(0.8,1), b );
+ draw( (0.8,4)--(2,4), b );
+ draw( (2,2.5)--(3,2.5), b );
+
+ draw( (-3,3)--(-3,0), red );
+ draw( (-1.2,3)--(-1.2,0), dotted+heavycyan );
+ draw( (0.8,4)--(0.8,0), dotted+heavycyan );
+ draw( (2,4)--(2,0), dotted+heavycyan );
+ draw( (3,2.5)--(3,0), red );
+ label("$t_1$", (-1.2,0), dir(-90), heavycyan);
+ label("$t_2$", (0.8,0), dir(-90), heavycyan);
+ label("$t_3$", (2,0), dir(-90), heavycyan);
+
+ dotfactor *= 2;
+ dot( (-3,3), blue ); // x = a
+ opendot( (-1.2, 3), blue );
+ opendot( (0.8, 1), blue );
+ opendot( (2, 4), blue );
+ dot( (-1.2,1), blue );
+ dot( (0.8,4), blue );
+ dot( (2,2.5), blue );
+ dot( (3, 2.5), blue ); // x = b
+\end{asy}
+\end{center}
+
+\begin{definition}
+ We can impose a metric on $M([a,b])$
+ by defining
+ \[ d(f,g) = \sup_{x \in [a,b]} \left\lvert f(x) - g(x) \right\rvert. \]
+\end{definition}
+
+Now, there is a natural notion of integral
+for rectangle functions: just sum up the obvious rectangles!
+Officially, this is the expression
+\[ f(a)(t_1-a) + f(t_1)(t_2-t_1)
+ + f(t_2) \left( t_3 - t_2 \right)
+ + \dots + f(t_{n-1}) \left( b - t_{n-1} \right). \]
+We denote this function by
+\[ \Sigma : R([a,b]) \to \RR. \]
+
+\begin{theorem}
+ [The Riemann integral]
+ \label{thm:gaitsgory_riemann}
+ There exists a unique continuous map
+ \[ \textstyle{\int_a^b} \colon M([a,b]) \to \RR \]
+ such that the diagram
+ \begin{center}
+ \begin{tikzcd}
+ M([a,b]) \ar[r, "\int_a^b"] & \RR \\
+ R([a,b]) \ar[u, hook] \ar[ru, "\Sigma"'] &
+ \end{tikzcd}
+ \end{center}
+ commutes.
+\end{theorem}
+\begin{proof}
+ We want to apply the extension theorem,
+ so we just have to check a few things:
+ \begin{itemize}
+ \ii We claim $R([a,b])$ is a dense subset of $M([a,b])$.
+ In other words, for any continuous $f \colon [a,b] \to \RR$
+ and $\eps > 0$,
+ we want there to exist a rectangle function
+ that approximates $f$ within $\eps$.
+
+ This follows by uniform continuity.
+ We know there exists a $\delta > 0$ such
+ that whenever $|x-y| < \delta$ we have $|f(x)-f(y)| < \eps$.
+ So as long as we select a rectangle function
+ whose rectangles have width less than $\delta$,
+ and such that the upper-left corner of each rectangle
+ lies on the graph of $f$, then we are all set.
+
+ \begin{center}
+ \begin{asy}
+ import graph;
+ real f(real x) { return 3 + (x-2)*(x-2)*(x+5) / 35; }
+ draw( (5, f(5))--(5, 0), red+dotted );
+ for (int i=0; i<10; ++i) {
+ filldraw(box( (i-5, 0), (i-4, f(i-5)) ),
+ opacity(0.1)+lightgreen, deepgreen);
+ dot( (i-5, f(i-5)), deepgreen);
+ }
+ draw(graph(f,-5.2,5.2,operator ..), blue, Arrows(TeXHead));
+ graph.xaxis();
+ graph.yaxis();
+ label("$a$", (-5,0), dir(-90), blue);
+ label("$b$", (5,0), dir(-90), blue);
+ \end{asy}
+ \end{center}
+
+
+ \ii The ``add-the-rectangles'' map $\Sigma \colon R([a,b]) \to \RR$
+ is \emph{uniformly} continuous.
+ Actually this is pretty obvious:
+ if two rectangle functions $f$ and $g$
+ have $d(f,g) < \eps$,
+ then $d(\Sigma f, \Sigma g) < \eps(b-a)$.
+
+ \ii $\RR$ is complete.\qedhere
+ \end{itemize}
+\end{proof}
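The proof is constructive enough to run: choose rectangles of width less than $\delta$ whose upper-left corners sit on the graph of $f$, then apply $\Sigma$. A Python sketch (names are my own):

```python
def sigma_of_rectangle_approx(f, a, b, n):
    """Apply the add-the-rectangles map Sigma to the rectangle function
    agreeing with f at the left endpoint of each of n equal pieces.
    As n grows, that rectangle function approaches f in the sup metric,
    so these values approach the Riemann integral of f."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))
```

For $f(x) = x^2$ on $[0,1]$, the values approach $\frac13$.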
+
+\section{Meshes}
+The above definition might seem fantastical, overcomplicated,
+hilarious, or terrible, depending on your taste.
+But if you unravel it, it's really the picture you are used to.
+What we have done is take every continuous function $f \colon [a,b] \to \RR$
+and show that it can be approximated by a rectangle function
+(which we phrased as a dense inclusion).
+Then we added the area of the rectangles.
+Nonetheless, we will give a definition that's
+more like what you're used to seeing in other places.
+\begin{definition}
+ A \emph{tagged partition} $P$ of $[a,b]$
+ consists of a partition of $[a,b]$ into $n$ intervals,
+ with a point $\xi_i$ in the $i$th interval, denoted
+ \[ a = t_0 < t_1 < t_2 < \dots < t_n = b
+ \qquad\text{and}\qquad \xi_i \in [t_{i-1}, t_i]
+ \quad \forall \; 1 \le i \le n. \]
+ The \emph{mesh} of $P$ is the width
+ of the longest interval, i.e.\ $\max_i(t_i - t_{i-1})$.
+\end{definition}
+
+Of course the point of this definition
+is that we again add up rectangles,
+with the $\xi_i$ serving as the sample points.
+
+\begin{center}
+\begin{asy}
+ import graph;
+ real f(real x) { return 3 + (x-2)*(x-2)*(x+5) / 35; }
+ real[] t = {-4.4, -3.3, -0.2, 1.3, 3.5, 4.5};
+ filldraw(box((-5,0), (-4.1,f(t[0]))), opacity(0.1)+lightcyan, heavycyan );
+ filldraw(box((-4.1,0), (-2,f(t[1]))), opacity(0.1)+lightcyan, heavycyan );
+ filldraw(box((-2,0), (0.6,f(t[2]))), opacity(0.1)+lightcyan, heavycyan );
+ filldraw(box((0.6,0), (1.9,f(t[3]))), opacity(0.1)+lightcyan, heavycyan );
+ filldraw(box((1.9,0), (3.7,f(t[4]))), opacity(0.1)+lightcyan, heavycyan );
+ filldraw(box((3.7,0), (5,f(t[5]))), opacity(0.1)+lightcyan, heavycyan );
+ for (int i=0; i<=5; ++i) { dot( (t[i], f(t[i])), blue ); }
+ draw(graph(f,-5.2,5.2,operator ..), blue, Arrows(TeXHead));
+ label("$\xi$", (t[1], f(t[1])), dir(90), blue);
+ graph.xaxis();
+ graph.yaxis();
+ label("$a$", (-5,0), dir(-90), blue);
+ label("$b$", (5,0), dir(-90), blue);
+\end{asy}
+\end{center}
+
+\begin{theorem}
+ [Riemann integral]
+ Let $f \colon [a,b] \to \RR$ be continuous.
+ Then
+ \[ \int_a^b f(x) \; dx
+ = \lim_{\substack{\text{$P$ tagged partition}
+ \\ \opname{mesh} P \to 0}}
+ \left( \sum_{i=1}^n f(\xi_i) (t_i - t_{i-1}) \right). \]
+ Here the limit means that we can take any sequence
+ of partitions whose mesh approaches zero.
+\end{theorem}
+\begin{proof}
+ The right-hand side corresponds to the areas
+ of some rectangle functions $g_1$, $g_2$, \dots
+ with increasingly narrow rectangles.
+ As in the proof of \Cref{thm:gaitsgory_riemann},
+ as the meshes of those rectangles approach zero,
+ by uniform continuity, we have $d(f, g_n) \to 0$ as well.
+ Thus by continuity in the diagram of \Cref{thm:gaitsgory_riemann},
+ we get $\lim_n \Sigma(g_n) = \int(f)$ as needed.
+\end{proof}
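The theorem in particular says the choice of tags does not matter once the mesh is small. A quick sketch (mine) comparing left-endpoint, right-endpoint, and midpoint tags on the same partition:

```python
def tagged_riemann_sum(f, ts, xis):
    """sum_i f(xi_i) * (t_i - t_{i-1}) for a tagged partition:
    ts = [t_0, ..., t_n] and xis = [xi_1, ..., xi_n],
    with each xi_i in [t_{i-1}, t_i]."""
    return sum(f(xi) * (t1 - t0)
               for t0, t1, xi in zip(ts, ts[1:], xis))
```

On $[0,1]$ with $f(x) = x^2$, all three tag choices land near $\frac13$ once the mesh is small.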
+
+Combined with the mean value theorem,
+this can be used to give a short proof of
+the fundamental theorem of calculus
+for functions $f$ with a continuous derivative.
+The idea is that for any choice of partition
+$a = t_0 < t_1 < t_2 < \dots < t_n = b$,
+using the Mean Value Theorem it should be possible
+to pick $\xi_i$ in each interval to match
+with the slope of the secant:
+at which point the areas sum to the total change in $f$.
+We illustrate this situation with three points,
+and invite the reader to fill
+in the details as \Cref{thm:FTC}.
+
+\begin{center}
+\begin{asy}
+ size(9cm);
+ import graph;
+ graph.xaxis("$x$");
+ graph.yaxis("$y$");
+ real f(real x) { return x*x/2+0.4; }
+ pair P(real x) { return (x, f(x)); }
+ draw(graph(f,-2,2,operator ..), blue, Arrows(TeXHead));
+ label("$f$", (2, f(2)), dir(-45), blue);
+ for (real i=-1.3; i<=2; ++i) {
+ if (i < 1) { draw(P(i)--P(i+1), deepgreen, EndArrow(TeXHead), Margins); }
+ dot(P(i), deepgreen);
+ }
+ real m1 = f(-0.3) - f(-1.3);
+ real m2 = f(0.7) - f(-0.3);
+ real m3 = f(1.7) - f(0.7);
+ real r = 0.4;
+ void draw_tangent(real m) {
+ dot(P(m), red);
+ draw((m-r, f(m)-r*m)--(m+r, f(m)+r*m), red);
+ }
+ draw_tangent(m1);
+ draw_tangent(m2);
+ draw_tangent(m3);
+ draw(P(-1.3)--(-1.3,0), deepgreen+dashed);
+ draw(P(-0.3)--(-0.3,0), deepgreen+dashed);
+ draw(P( 0.7)--( 0.7,0), deepgreen+dashed);
+ draw(P( 1.7)--( 1.7,0), deepgreen+dashed);
+ label("$t_0$", (-1.3,0), dir(-90), deepgreen);
+ label("$t_1$", (-0.3,0), dir(-90), deepgreen);
+ label("$t_2$", ( 0.7,0), dir(-90), deepgreen);
+ label("$t_3$", ( 1.7,0), dir(-90), deepgreen);
+ draw(P(m1)--(m1,0), red+dashed);
+ draw(P(m2)--(m2,0), red+dashed);
+ draw(P(m3)--(m3,0), red+dashed);
+ label("$\xi_1$", (m1,0), dir(-90), red+fontsize(8pt));
+ label("$\xi_2$", (m2,0), dir(-90), red+fontsize(8pt));
+ label("$\xi_3$", (m3,0), dir(-90), red+fontsize(8pt));
+ draw(P(-1.3)--P(1.7), blue+dashed, EndArrow, Margins);
+ label("Net change", 0.3*P(-1.3)+0.7*P(1.7), dir(90), blue);
+\end{asy}
+\end{center}
+
+One quick note is that although I've only defined
+the Riemann integral for continuous functions,
+there ought to be other functions for which it exists
+(including ``piecewise continuous functions'' for example,
+or functions ``continuous almost everywhere'').
+The relevant definition is:
+\begin{definition}
+ If $f \colon [a,b] \to \RR$ is a function
+ which is not necessarily continuous,
+ but for which the limit
+ \[ \lim_{\substack{\text{$P$ tagged partition}
+ \\ \opname{mesh} P \to 0}}
+ \left( \sum_{i=1}^n f(\xi_i) (t_i - t_{i-1}) \right) \]
+ exists anyways,
+ then we say $f$ is \vocab{Riemann integrable} on $[a,b]$
+ and define its value to be that limit $\int_a^b f(x) \; dx$.
+\end{definition}
+We won't really use this definition much,
+because we will see that every Riemann integrable function
+is Lebesgue integrable, and the Lebesgue integral is better.
+
+\begin{example}
+ [Your AP calculus returns]
+ We had better mention that \Cref{thm:FTC}
+ implies that we can compute Riemann integrals in practice,
+ although most of you may already know this
+ from high-school calculus.
+ For example, on the interval $(1,4)$,
+ the derivative of the function $F(x) = \frac13 x^3$ is $F'(x) = x^2$.
+ As $f(x) = x^2$ is a continuous function $f \colon [1,4] \to \RR$, we get
+ \[ \int_1^4 x^2 \; dx =
+ F(4) - F(1) = \frac{64}{3} - \frac13 = 21. \]
+ Note that we could also have picked $F(x) = \frac13x^3 + 2019$;
+ the function $F$ is unique up to shifting,
+ and this constant cancels out when we subtract.
+ This is why it's common in high school to (really) abuse notation
+ and write $\int x^2 \; dx = \frac13x^3+C$.
+\end{example}
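As a sanity check, a rectangle sum for $\int_1^4 x^2 \; dx$ does indeed creep up on $F(4) - F(1) = 21$; here is a throwaway Python sketch (mine):

```python
def left_riemann_sum(f, a, b, n):
    """Left-endpoint rectangle sum with n equal-width pieces;
    converges to the Riemann integral for continuous f."""
    w = (b - a) / n
    return sum(f(a + i * w) * w for i in range(n))
```

With $n = 10^5$ pieces the sum agrees with $21$ to within $10^{-3}$.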
+
+\section{\problemhead}
+\begin{problem}
+ Let $f \colon (a,b) \to \RR$ be differentiable
+ and assume $f'$ is bounded.
+ Show that $f$ is uniformly continuous.
+ \begin{hint}
+ Contradiction and mean value theorem (again!).
+ \end{hint}
+\end{problem}
+
+\begin{sproblem}
+ [Fundamental theorem of calculus]
+ \label{thm:FTC}
+ Let $f \colon [a,b] \to \RR$ be continuous,
+ differentiable on $(a,b)$,
+ and assume the derivative $f'$ extends to a
+ continuous function $f' \colon [a,b] \to \RR$.
+ Prove that
+ \[ \int_a^b f'(x) \; dx = f(b) - f(a). \]
+ \begin{hint}
+ For every positive integer $n$,
+ take a partition where every rectangle has width $w = \frac{b-a}{n}$.
+ Use the mean value theorem to construct a tagged partition
+ such that the first rectangle has area $f(a+w)-f(a)$,
+ the second rectangle has area $f(a+2w) - f(a+w)$, and so on;
+ thus the total area is $f(b) - f(a)$.
+ \end{hint}
+\end{sproblem}
+
+\begin{sproblem}
+ [Improper integrals]
+ \label{prob:improper}
+ For each real number $r > 0$,
+ evaluate the limit\footnote{If you are not
+ familiar with the notation $\eps \to 0^+$,
+ you can replace $\eps$ with $1/M$ for $M > 0$,
+ and let $M \to \infty$ instead.}
+ \[ \lim_{\eps \to 0^+} \int_\eps^1 \frac{1}{x^r} \; dx \]
+ or show it does not exist.
+
+ This can intuitively be thought of as
+ the ``improper'' integral $\int_0^1 x^{-r} \; dx$;
+ it doesn't make sense in our original definition since
+ we did not (and cannot) define the integral
+ over the non-compact $(0,1]$ %chktex 9
+ but we can still consider the integral over $[\eps,1]$
+ for any $\eps > 0$.
+\end{sproblem}
+
+\begin{problem}
+ Show that
+ \[ \lim_{n \to \infty}
+ \left( \frac{1}{n+1} + \frac{1}{n+2} + \dots + \frac{1}{2n} \right)
+ = \log 2. \]
+ \begin{hint}
+ Write this as $\frac1n \sum_{k=1}^n \frac{1}{1+\frac kn}$.
+ Then you can interpret it as a rectangle sum
+ of a certain Riemann integral.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/large-laws.tex b/books/napkin/large-laws.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2f0506322acbb4ee2ae84015c87e76bf7d4675f8
--- /dev/null
+++ b/books/napkin/large-laws.tex
@@ -0,0 +1,114 @@
+\chapter{Large number laws (TO DO)}
+\todo{write chapter}
+\section{Notions of convergence}
+\subsection{Almost sure convergence}
+\begin{definition}
+ Let $X$, $X_n$ be random variables on a probability space $\Omega$.
+ We say $X_n$ \vocab{converges almost surely} to $X$ if
+ \[ \mu \left( \omega \in \Omega :
+ \lim_n X_n(\omega) = X(\omega) \right) = 1. \]
+\end{definition}
+This is a very strong notion of convergence:
+it says in almost every \emph{world},
+the values of $X_n$ converge to $X$.
+In fact, it is almost better for me to give a \emph{non-example}.
+\begin{example}
+ [Non-example of almost sure convergence]
+ Imagine an immortal skeleton archer is practicing shots,
+ and on the $n$th shot, he scores a bulls-eye with probability
+ $1 - \frac 1n$
+ (which tends to $1$ because the archer improves over time).
+ Let $X_n \in \{0, 1, \dots, 10\}$ be the score of the $n$th shot.
+
+ Although the skeleton is gradually approaching perfection,
+ there are \emph{almost no worlds} in which the archer
+ misses only finitely many shots: that is
+ \[ \mu \left( \omega \in \Omega :
+ \lim_n X_n(\omega) = 10 \right) = 0. \]
+\end{example}
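Here is a simulation supporting that claim (my own sketch; it assumes the shots are independent, which the example implicitly suggests). The probability of no miss between shots $N$ and $2N$ telescopes to $\frac{N-1}{2N} \approx \frac12$, so no matter how late we look, about half the simulated worlds still contain a miss in that window:

```python
import random

def fraction_with_miss(num_worlds, first, last, seed=0):
    """Fraction of simulated worlds with at least one miss between
    shots `first` and `last`, where shot n misses with probability
    1/n, independently of the other shots (an assumption here)."""
    rng = random.Random(seed)  # seeded for reproducibility
    count = 0
    for _ in range(num_worlds):
        if any(rng.random() < 1.0 / n for n in range(first, last + 1)):
            count += 1
    return count / num_worlds
```

For example `fraction_with_miss(2000, 1000, 2000)` comes out near $0.5$; since the same holds for every window $[N, 2N]$, almost every world sees infinitely many misses.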
+
+\subsection{Convergence in probability}
+Therefore, for many purposes we need a weaker notion of convergence.
+\begin{definition}
+ Let $X$, $X_n$ be random variables on a probability space $\Omega$.
+ We say $X_n$ \vocab{converges in probability} to $X$
+ if for every $\eps > 0$ and $\delta > 0$, we have
+ \[ \mu \left( \omega \in \Omega :
+ \left\lvert X_n(\omega) - X(\omega) \right\rvert < \eps
+ \right) \ge 1 - \delta \]
+ for $n$ large enough (in terms of $\eps$ and $\delta$).
+\end{definition}
+In this sense, our skeleton archer does succeed:
+for any $\delta > 0$, if $n > \delta\inv$
+then the skeleton archer does hit a bulls-eye
+in a $1-\delta$ fraction of the worlds.
+In general, you can think of this as saying that for any $\delta > 0$,
+the chance of an $\eps$-anomaly event at the $n$th stage
+eventually drops below $\delta$.
+
+\begin{remark}
+ To mask $\delta$ from the definition,
+ this is sometimes written instead as:
+ for all $\eps$
+ \[ \lim_{n \to \infty} \mu \left( \omega \in \Omega :
+ \left\lvert X_n(\omega) - X(\omega) \right\rvert < \eps
+ \right) = 1. \]
+ I suppose it doesn't make much difference,
+ though I personally don't like the asymmetry.
+\end{remark}
+
+\subsection{Convergence in law}
+
+\section{\problemhead}
+\begin{problem}
+ [Quantifier hell]
+ \gim
+ In the definition of convergence in probability
+ suppose we allowed $\delta = 0$
+ (rather than $\delta > 0$).
+ Show that the modified definition is
+ equivalent to almost sure convergence.
+ \begin{hint}
+ This is actually trickier than it appears,
+ you cannot just push quantifiers (contrary to the name),
+ but have to focus on $\eps = 1/m$ for $m = 1, 2, \dots$.
+
+ The problem is saying for each $\eps > 0$,
+ if $n > N_\eps$, we have
+ $\mu(\omega : |X(\omega)-X_n(\omega)| \le \eps) = 1$.
+ For each $m$ there are some measure zero ``bad worlds'';
+ take the union.
+ \end{hint}
+ \begin{sol}
+ For each positive integer $m$,
+ consider what happens when $\eps = 1/m$.
+ Then, by hypothesis, there is a threshold $N_m$
+ such that the \emph{anomaly set}
+ \[ A_m \defeq \left\{ \omega :
+ |X(\omega)-X_n(\omega)| \ge \frac 1m
+ \text{ for some } n > N_m \right\} \]
+ has measure $\mu(A_m) = 0$.
+ Hence, the countable union $A = \bigcup_{m \ge 1} A_m$ has measure zero too.
+
+ So the complement of $A$ has measure $1$.
+ For any world $\omega \notin A$,
+ we then have
+ \[ \lim_n \left\lvert X(\omega) - X_n(\omega) \right\rvert = 0 \]
+ because when $n > N_m$ that absolute value
+ is always at most $1/m$ (as $\omega \notin A_m$).
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Almost sure convergence is not topologizable]
+ Consider the space of all random variables on $\Omega = [0,1]$.
+ Prove that it's impossible to impose a metric on this space
+ which makes the following statement true:
+ \begin{quote}
+ A sequence $X_1$, $X_2$, \dots\ of random variables converges almost surely to $X$
+ if and only if $X_i$ converge to $X$ in the metric.
+ \end{quote}
+ \begin{sol}
+ \url{https://math.stackexchange.com/a/2201906/229197}
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/lebesgue-int.tex b/books/napkin/lebesgue-int.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9129a24aaaa5114af1dc4ec6981d9552f6db2423
--- /dev/null
+++ b/books/napkin/lebesgue-int.tex
@@ -0,0 +1,238 @@
+\chapter{Lebesgue integration}
+On any measure space $(\Omega, \SA, \mu)$ we can,
+for a function $f \colon \Omega \to [0,\infty]$,
+define an integral
+\[ \int_\Omega f \; d\mu. \]
+This integral may be $+\infty$ (even if $f$ is finite).
+As the details of the construction won't matter for us later on,
+we will state the relevant definitions,
+skip all the proofs,
+and also state all the properties that we actually care about.
+Consequently, this chapter will be quite short.
+
+\section{The definition}
+The construction is done in four steps.
+\begin{definition}
+ If $A$ is a measurable set of $\Omega$,
+ then the \vocab{indicator function}
+ $\mathbf{1}_A \colon \Omega \to \RR$ is defined by
+ \[ \mathbf{1}_A(\omega) = \begin{cases}
+ 1 & \omega \in A \\
+ 0 & \omega \notin A.
+ \end{cases} \]
+\end{definition}
+
+\begin{step}
+ [Indicator functions]
+ For an indicator function, we require
+ \[ \int_\Omega \mathbf{1}_A \; d\mu \defeq \mu(A) \]
+ (which may be infinite).
+\end{step}
+We extend this linearly now for nonnegative functions
+which are sums of indicators:
+these functions are called \vocab{simple functions}.
+\begin{step}
+ [Simple functions]
+ Let $A_1$, \dots, $A_n$ be a finite collection of measurable sets.
+ Let $c_1$, \dots, $c_n$ be either nonnegative real numbers or $+\infty$.
+ Then we define
+ \[ \int_\Omega \left( \sum_{i=1}^n c_i \mathbf{1}_{A_i} \right) \; d\mu
+ \defeq \sum_{i=1}^n c_i \mu(A_i). \]
+ If $c_i = \infty$ and $\mu(A_i) = 0$, we treat $c_i \mu(A_i) = 0$.
+\end{step}
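For Lebesgue measure on $\RR$, this recipe is nearly a one-liner. A sketch (mine), representing each $A_i$ as a finite disjoint union of intervals:

```python
def integrate_simple(terms):
    """Integral of the simple function sum_i c_i * 1_{A_i}, where each
    A_i is given as a list of pairwise disjoint intervals (lo, hi) and
    mu is Lebesgue measure: the answer is sum_i c_i * mu(A_i)."""
    return sum(c * sum(hi - lo for lo, hi in intervals)
               for c, intervals in terms)
```

For $2 \cdot \mathbf{1}_{[0,1]} + 3 \cdot \mathbf{1}_{[1,2] \cup [4,5]}$ this gives $2 \cdot 1 + 3 \cdot 2 = 8$.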
+One can check the resulting sum does not depend
+on the representation of the simple function as $\sum c_i \mathbf{1}_{A_i}$.
+In particular, it is compatible with the previous step.
+
+Conveniently, this is already enough to define the integral
+for $f \colon \Omega \to [0, +\infty]$.
+Note that $[0,+\infty]$ can be thought of as a topological space
+where we add new open sets $(a,+\infty]$ %chktex 9
+for each real number $a$ to our usual basis of open intervals.
+Thus we can equip it with the Borel sigma-algebra.\footnote{We
+ \emph{could} also try to define a measure on it,
+  but we will not: it is good enough for us
+ that it is a measurable space.}
+\begin{step}
+ [Nonnegative functions]
+ For each measurable function $f \colon \Omega \to [0, +\infty]$, let
+ \[ \int_\Omega f \; d\mu \defeq
+ \sup_{0 \le s \le f} \left( \int_\Omega s \; d\mu \right) \]
+ where the supremum is taken over all \emph{simple} $s$
+ such that $0 \le s \le f$.
+ As before, this integral may be $+\infty$.
+\end{step}
+One can check this is compatible with the previous definitions.
+At this point, we introduce an important term.
+\begin{definition}
+ A measurable (nonnegative) function $f \colon \Omega \to [0, +\infty]$
+ is \vocab{absolutely integrable}
+ or just \vocab{integrable} if $\int_\Omega f \; d\mu < \infty$.
+\end{definition}
+Warning: I find ``integrable'' to be \emph{really} confusing terminology.
+Indeed, \emph{every} measurable function from $\Omega$ to $[0,+\infty]$
+can be assigned a Lebesgue integral, it's just that
+this integral may be $+\infty$.
+So the definition is far more stringent than the name suggests.
+Even constant functions can fail to be integrable:
+\begin{example}
+ [We really should call it ``finitely integrable'']
+ The constant function $1$ is \emph{not} integrable on $\RR$,
+ since $\int_\RR 1 \; d\mu = \mu(\RR) = +\infty$.
+\end{example}
+For this reason, I will usually prefer the term ``absolutely integrable''.
+(If it were up to me, I would call it ``finitely integrable'',
+and usually do so privately.)
+
+Finally, this lets us integrate general functions.
+\begin{definition}
+ In general, a measurable function $f \colon \Omega \to [-\infty, \infty]$
+ is \vocab{absolutely integrable} or just \vocab{integrable} if $|f|$ is.
+\end{definition}
+Since we'll be using the first word, this is easy to remember:
+``absolutely integrable'' requires taking absolute values.
+
+\begin{step}
+ [Absolutely integrable functions]
+ If $f \colon \Omega \to [-\infty, \infty]$ is absolutely integrable,
+ then we define
+ \begin{align*}
+ f^+(x) &= \max\left\{ f(x), 0 \right\} \\
+  f^-(x) &= \min\left\{ f(x), 0 \right\}
+ \end{align*}
+ and set
+ \[ \int_\Omega f \; d\mu = \int_\Omega |f^+| \; d\mu
+ - \int_\Omega |f^-| \; d\mu \]
+ which in particular is finite.
+\end{step}
+You may already start to see that we really like nonnegative functions:
+with the theory of measures, it is possible to integrate them,
+and it's even okay to throw in $+\infty$'s everywhere.
+But once we start dealing with functions that can be either positive or negative,
+we have to start adding finiteness restrictions ---
+actually essentially what we're doing is splitting
+the function into its positive and negative part,
+requiring both are finite, and then integrating.
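As a quick numerical illustration of this splitting (a sketch of my own, not from the text): take $f(x) = x$ on $[-1,2]$ with the usual length measure, integrate $f^+$ and $|f^-|$ separately by midpoint Riemann sums, and subtract.

```python
def midpoint_integral(f, a, b, n=3000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: x
f_plus = lambda x: max(f(x), 0.0)   # f^+ = max(f, 0)
f_minus = lambda x: min(f(x), 0.0)  # f^- = min(f, 0)

pos = midpoint_integral(f_plus, -1, 2)                  # integral of |f^+|
neg = midpoint_integral(lambda x: -f_minus(x), -1, 2)   # integral of |f^-|
signed = pos - neg

assert abs(pos - 2.0) < 1e-9
assert abs(neg - 0.5) < 1e-9
assert abs(signed - 1.5) < 1e-9  # matches the direct integral of x on [-1,2]
```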
+
+
+To finish this section, we state for completeness
+some results that you probably could have guessed were true.
+Fix $\Omega = (\Omega, \SA, \mu)$, and
+let $f$ and $g$ be measurable real-valued functions
+such that $f(x) = g(x)$ almost everywhere.
+\begin{itemize}
+ \ii (Almost-everywhere preservation)
+ The function $f$ is absolutely integrable if and only if $g$ is,
+ and if so, their Lebesgue integrals match.
+ \ii (Additivity)
+ If $f$ and $g$ are absolutely integrable then
+  \[ \int_\Omega (f+g) \; d\mu
+ = \int_\Omega f \; d\mu
+ + \int_\Omega g \; d\mu. \]
+ The ``absolutely integrable'' hypothesis can be dropped
+ if $f$ and $g$ are nonnegative.
+ \ii (Scaling) If $f$ is absolutely integrable and $c \in \RR$
+ then $cf$ is absolutely integrable and
+ \[ \int_\Omega cf \; d\mu = c \int_\Omega f \; d\mu. \]
+ The ``absolutely integrable'' hypothesis can be dropped
+ if $f$ is nonnegative and $c > 0$.
+  \ii (Monotonicity)
+ If $f$ and $g$ are absolutely integrable and $f \le g$, then
+ \[ \int_\Omega f \; d\mu \le \int_\Omega g \; d\mu. \]
+ The ``absolutely integrable'' hypothesis can be dropped
+ if $f$ and $g$ are nonnegative.
+\end{itemize}
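On a finite set with counting measure, every function is simple and the integral is just a finite sum, so these properties can be sanity-checked directly. A toy Python check (my own, with made-up data):

```python
# Omega = {0, ..., 9} with counting measure; the integral is a plain sum.
omega = range(10)
integral = lambda f: sum(f(w) for w in omega)

f = lambda w: w          # f(w) = w
g = lambda w: w * w      # g(w) = w^2

# Additivity
assert integral(lambda w: f(w) + g(w)) == integral(f) + integral(g)
# Scaling
assert integral(lambda w: 3 * f(w)) == 3 * integral(f)
# Monotonicity (here f <= g pointwise, since w <= w^2 for integers w >= 0)
assert all(f(w) <= g(w) for w in omega)
assert integral(f) <= integral(g)
```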
+There are more famous results like monotone/dominated convergence
+that are also true, but we won't state them here
+as we won't really have a use for them in the context of probability.
+(They appear later on in a bonus chapter.)
+
+\section{Relation to Riemann integrals (or: actually computing Lebesgue integrals)}
+For closed intervals, this actually just works out of the box.
+\begin{theorem}
+ [Lebesgue integral generalizes Riemann integral]
+ Let $f \colon [a,b] \to \RR$ be a Riemann integrable function
+ (where $[a,b]$ is equipped with the Borel measure).
+ Then $f$ is also Lebesgue integrable and the integrals agree:
+ \[ \int_a^b f(x) \; dx = \int_{[a,b]} f \; d\mu. \]
+\end{theorem}
+
+Thus in practice, we do all theory with Lebesgue integrals (they're nicer),
+but when we actually need to compute $\int_{[1,4]} x^2 \; d\mu$
+we just revert back to our usual antics with the
+Fundamental Theorem of Calculus.
+\begin{example}
+ [Integrating $x^2$ over {$[1,4]$}]
+ Reprising our old example:
+ \[ \int_{[1,4]} x^2 \; d\mu
+ = \int_1^4 x^2 \; dx
+ = \frac13 \cdot 4^3 - \frac13 \cdot 1^3 = 21. \]
+\end{example}
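If you want a numerical sanity check (a throwaway Python sketch, not part of the text), a midpoint Riemann sum approaches the same value:

```python
def midpoint_integral(f, a, b, n=3000):
    """Midpoint Riemann sum for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

approx = midpoint_integral(lambda x: x * x, 1, 4)
assert abs(approx - 21) < 1e-4  # agrees with (1/3)(4^3 - 1^3) = 21
```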
+
+This even works for \emph{improper} integrals,
+if the functions are nonnegative.
+The statement is a bit cumbersome to write down, but here it is.
+\begin{theorem}
+ [Improper integrals are nice Lebesgue ones]
+ Let $f \ge 0$ be a \emph{nonnegative}
+ continuous function defined on $(a,b) \subseteq \RR$,
+ possibly allowing $a = -\infty$ or $b = \infty$.
+ Then
+ \[ \int_{(a,b)} f \; d\mu
+ = \lim_{\substack{a' \to a^+ \\ b' \to b^-}}
+ \int_{a'}^{b'} f(x) \; dx \]
+ where we allow both sides to be $+\infty$
+ if $f$ is not absolutely integrable.
+\end{theorem}
+The right-hand side makes sense since $[a',b'] \subsetneq (a,b)$
+is a compact interval on which $f$ is continuous.
+This means that improper Riemann integrals of nonnegative
+functions can just be regarded as Lebesgue ones
+over the corresponding open intervals.
+
+It's probably better to just look at an example though.
+\begin{example}
+ [Integrating $1/\sqrt{x}$ on $(0,1)$]
+ For example, you might be familiar with improper integrals like
+ \[ \int_0^1 \frac{1}{\sqrt x} \; dx
+ \defeq \lim_{\eps \to 0^+}
+ \int_\eps^1 \frac{1}{\sqrt x} \; dx
+ = \lim_{\eps \to 0^+} \left( 2\sqrt{1} - 2\sqrt{\eps} \right) = 2.
+ \]
+ (Note this appeared before as \Cref{prob:improper}.)
+ In the Riemann integration situation, we needed the limit as $\eps \to 0^+$
+ since otherwise $\frac{1}{\sqrt x}$ is not defined as a function $[0,1] \to \RR$.
+ However, it is a \emph{measurable nonnegative}
+ function $(0,1) \to [0,+\infty]$, and hence
+ \[ \int_{(0,1)} \frac{1}{\sqrt x} \; d\mu = 2. \]
+\end{example}
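Numerically, the truncated integrals do increase to $2$. Here is a quick Python sketch (my own, not from the text) comparing midpoint Riemann sums on $[\eps, 1]$ against the closed form $2 - 2\sqrt{\eps}$:

```python
import math

def midpoint_integral(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

for eps in (0.1, 0.01, 0.001):
    approx = midpoint_integral(lambda x: 1 / math.sqrt(x), eps, 1)
    exact = 2 - 2 * math.sqrt(eps)   # from the antiderivative 2*sqrt(x)
    assert abs(approx - exact) < 1e-5

# The truncated integrals increase toward 2, the Lebesgue integral on (0,1).
assert 2 - 2 * math.sqrt(0.001) > 1.93
```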
+
+If $f$ is not nonnegative, then all bets are off.
+Indeed \Cref{prob:sin_improper} is the famous counterexample.
+
+\section{\problemhead}
+
+\begin{sproblem}
+ [The indicator of the rationals]
+ \label{prob:1QQ}
+ Take the indicator function
+ $\mathbf 1_{\QQ} \colon \RR \to \{0,1\} \subseteq \RR$
+ for the rational numbers.
+ \begin{enumerate}[(a)]
+ \ii Prove that $\mathbf{1}_\QQ$ is not Riemann integrable.
+  \ii Show that $\int_\RR \mathbf{1}_\QQ \; d\mu$ exists
+ and determine its value --- the one you expect!
+ \end{enumerate}
+\end{sproblem}
+
+\begin{dproblem}
+ [An improper Riemann integral with sign changes]
+ \label{prob:sin_improper}
+ Define $f \colon (1,\infty) \to \RR$ by $f(x) = \frac{\sin(x)}{x}$.
+ Show that $f$ is not absolutely integrable,
+ but that the improper Riemann integral
+ \[ \int_1^\infty f(x) \; dx \defeq
+ \lim_{b \to \infty}
+  \int_1^b f(x) \; dx \]
+ nonetheless exists.
+\end{dproblem}
diff --git a/books/napkin/limits.tex b/books/napkin/limits.tex
new file mode 100644
index 0000000000000000000000000000000000000000..25320bb35c1b1675db7d14ec9a1ad3e0c15cc41c
--- /dev/null
+++ b/books/napkin/limits.tex
@@ -0,0 +1,110 @@
+\chapter{Limits in categories (TO DO)}
+We saw near the start of our category theory chapter
+the nice construction of products by drawing
+a bunch of arrows.
+It turns out that this concept can be generalized immensely,
+and I want to give you a taste of that here.
+
+In this chapter, we follow the approach of \cite{ref:msci}.
+\todo{write introduction}
+
+\section{Equalizers}
+\prototype{The equalizer of $f,g : X \to Y$ is the set of points with $f(x) = g(x)$.}
+Given two sets $X$ and $Y$, and maps $X \taking{f,g} Y$, we define their \vocab{equalizer} to be
+\[ \left\{ x \in X \mid f(x) = g(x) \right\}. \]
+We would like a categorical way of defining this, too.
+
+Consider two objects $X$ and $Y$ with two maps $f$ and $g$ between them.
+Stealing a page from \cite{ref:msci}, we call this a \vocab{fork}:
+\begin{center}
+\begin{tikzcd}
+ X \ar[r, "f", shift left] \ar[r, "g"', shift right] & Y
+\end{tikzcd}
+\end{center}
+A cone over this fork is an object $A$ and arrows over $X$ and $Y$
+which make the diagram commute, like so.
+\begin{center}
+\begin{tikzcd}
+ A \ar[d, "q"'] \ar[rd, "f \circ q = g \circ q"] & \\
+ X \ar[r, "f", shift left, near start] \ar[r, "g"', shift right, near start] & Y
+\end{tikzcd}
+\end{center}
+Effectively, the arrow over $Y$ is just forcing $f \circ q = g \circ q$.
+In any case, the \vocab{equalizer} of $f$ and $g$ is a ``universal cone'' over this fork:
+it is an object $E$ and a map $E \taking{e} X$ such that
+for each $A \taking q X$ the diagram
+\begin{center}
+\begin{tikzcd}
+ & A \ar[dd, "\exists! h"] \ar[lddd, "q"'] \ar[rddd, dashed, bend left] \\
+ \\
+ & E \ar[ld, "e"', near start] \ar[rd, dashed] & \\
+ X \ar[rr, "f", shift left] \ar[rr, "g"', shift right] && Y
+\end{tikzcd}
+\end{center}
+commutes for a unique $A \taking h E$.
+In other words, any map $A \taking{q} X$ as above
+must factor uniquely through $E$.
+Again, the dotted arrows can be omitted,
+and as before equalizers may not exist.
+But when they do exist:
+\begin{exercise}
+ If $E \taking{e} X$ and $E' \taking{e'} X$ are equalizers,
+ show that $E \cong E'$.
+\end{exercise}
+
+\begin{example}
+ [Examples of equalizers]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii In $\catname{Set}$, given $X \taking{f,g} Y$
+ the equalizer $E$ can be realized as $E = \{x \mid f(x) = g(x)\}$,
+ with the inclusion $e : E \injto X$ as the morphism.
+ As usual, by abuse we'll often just refer to $E$ as the equalizer.
+
+ \ii Ditto in $\catname{Top}$, $\catname{Grp}$.
+ One has to check that the appropriate structures are preserved
+  (e.g.\ one should check that $\left\{ g \in G \mid \phi(g) = \psi(g) \right\}$ is a group).
+
+ \ii In particular, given a homomorphism $\phi : G \to H$, the inclusion
+ $ \ker\phi \injto G $
+ is an equalizer for the fork $G \to H$ by $\phi$ and the trivial homomorphism.
+ \end{enumerate}
+\end{example}
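In $\catname{Set}$ the equalizer is concrete enough to compute directly. A small Python sketch (my own illustration, with made-up maps $f$ and $g$):

```python
# Equalizer of f, g : X -> Y in Set: the subset of X where f and g agree,
# together with the inclusion map into X.
X = range(-3, 4)
f = lambda x: x * x      # f(x) = x^2
g = lambda x: x + 2      # g(x) = x + 2

E = [x for x in X if f(x) == g(x)]   # x^2 = x + 2 holds iff x in {-1, 2}
assert E == [-1, 2]

# The fork commutes after restriction: f composed with the inclusion
# agrees with g composed with the inclusion.
assert all(f(x) == g(x) for x in E)
```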
+
+According to (c), equalizers let us get at the concept of a kernel
+if there is a distinguished
+``trivial map'', like the trivial homomorphism in $\catname{Grp}$.
+We'll flesh this idea out in the chapter on abelian categories.
+
+\section{Pullback squares (TO DO)}
+\todo{write me}
+Great example: differentiable functions on $(-3,1)$ and $(-1,3)$
+
+\begin{example}
+ \label{ex:diff_pullback}
+\end{example}
+
+\section{Limits}
+We've defined cones over discrete sets of $X_i$ and over forks.
+It turns out you can also define a cone over any general \vocab{diagram} of objects and arrows;
+we specify a projection from $A$ to each object and
+require that the projections from $A$ commute with the arrows in the diagram.
+(For example, a fork is a diagram with two objects and two parallel arrows.)
+If you then demand the cone be universal,
+you have the extremely general definition of a \vocab{limit}.
+As always, these are unique up to unique isomorphism.
+We can also define the dual notion of a \vocab{colimit} in the same way.
+
+
+\section{\problemhead}
+\begin{sproblem}[Equalizers are monic]
+ Show that the equalizer of any fork is monic.
+ \label{prob:equalizer_monic}
+\end{sproblem}
+
+\todo{pushout square gives tensor product; $p$-adic; relative Chinese remainder theorem!!}
diff --git a/books/napkin/localization.tex b/books/napkin/localization.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e6ab01c0ca7bb87c2087a4ce8d425ec03f071581
--- /dev/null
+++ b/books/napkin/localization.tex
@@ -0,0 +1,593 @@
+\chapter{Localization}
+Before we proceed on to defining an affine scheme,
+we will take the time to properly cover one more algebraic construction:
+that of a \emph{localization}.
+This is mandatory because when we define a scheme,
+we will find that all the sections and stalks
+are actually obtained using this construction.
+
+One silly slogan might be:
+\begin{moral}
+ Localization is the art of adding denominators.
+\end{moral}
+You may remember that when we were working with affine varieties,
+there were constantly expressions of the form
+$\left\{ \frac{f}{g} \mid g(p) \ne 0 \right\}$
+and the like.
+The point is that we introduced a lot of denominators.
+Localization will give us a concise way of doing this in general.
+
+\emph{Notational note}:
+moving forward we'll prefer to denote rings by $A$, $B$, \dots,
+rather than $R$, $S$, \dots.
+
+\section{Spoilers}
+Here is a preview of things to come,
+so that you know what you are expecting.
+Some things here won't make sense,
+but that's okay, it is just foreshadowing.
+
+Let $V \subseteq \Aff^n$, and for brevity let $R = \CC[V]$ be its coordinate ring.
+We saw in previous sections how to compute $\OO_V(D(g))$
+and $\OO_{V, p}$ for $p \in V$ a point.
+For example, if we take $\Aff^1$ and consider a point $p$, then
+$\OO_{\Aff^1}(D(x-p)) = \left\{ \frac{f(x)}{(x-p)^n} \right\}$
+and $\OO_{\Aff^1, p} = \left\{ \frac{f(x)}{g(x)} \mid g(p) \ne 0 \right\}$.
+More generally, we had
+\begin{align*}
+ \OO_{V}(D(g)) &= \left\{ \frac{f}{g^n} \mid f \in R \right\}
+ \quad\text{by \Cref{thm:reg_func_distinguish_open}} \\
+ \OO_{V,p} &= \left\{ \frac{f}{g} \mid f,g \in R, g(p) \ne 0 \right\}
+ \quad\text{by \Cref{thm:stalks_affine_var}}.
+\end{align*}
+
+We will soon define something called a localization,
+which will give us a nice way of expressing the above:
+if $R = \CC[V]$ is the coordinate ring, then
+the above will become abbreviated to just
+\begin{align*}
+ \OO_{V}(D(g)) &= R_{g} \\
+ \OO_{V, p} &= R_{\km} \quad \text{where } \{p\} = \VV(\km).
+\end{align*}
+The former will be pronounced
+``$R$ localized away from $g$''
+while the latter will be pronounced
+``$R$ localized at $\km$''.
+
+Even more generally,
+in the next chapter we will throw out the coordinate ring $R$
+altogether and replace it with a general commutative ring $A$
+(whose elements are still viewed as functions).
+We will construct a ringed space called $X = \Spec A$,
+whose elements are \emph{prime ideals} of $A$
+and is equipped with the Zariski topology and a sheaf $\OO_X$.
+It will turn out that, in analogy to what we had before,
+\begin{align*}
+ \OO_X(D(f)) &= A[f\inv] \\
+ \OO_{X,\kp} &= A_\kp
+\end{align*}
+for any element $f \in A$ and prime ideal $\kp \in \Spec A$.
+Thus just as with complex affine varieties,
+localizations will give us a way to more or less
+describe the sheaf $\OO_X$ completely.
+
+\section{The definition}
+\begin{definition}
+ A subset $S \subseteq A$ is a \vocab{multiplicative set}
+ if $1 \in S$ and $S$ is closed under multiplication.
+\end{definition}
+\begin{definition}
+ Let $A$ be a ring and $S \subset A$ a multiplicative set.
+ Then the \vocab{localization of $A$ at $S$}, denoted $S\inv A$,
+ is defined as the set of fractions
+ \[ \left\{ a/s \mid a \in A, s \in S \right\} \]
+ where we declare two fractions $a_1 / s_1 = a_2 / s_2$
+ to be equal if $s(a_1s_2 - a_2s_1) = 0$ for some $s \in S$.
+ Addition and multiplication in this ring
+ are defined in the obvious way.
+\end{definition}
+In particular, if $0 \in S$ then $S\inv A$ is the zero ring.
+So we usually restrict attention to situations where $0 \notin S$.
+
+We give in brief now two examples which will be
+motivating forces for the construction of the affine scheme.
+\begin{example}
+ [Localizations of {$\CC[x]$}]
+ Let $A = \CC[x]$.
+ \begin{enumerate}[(a)]
+ \ii Suppose we let $S = \left\{ 1, x, x^2, x^3, \dots \right\}$
+ be the powers of $x$.
+ Then
+ \[ S\inv A = \left\{ \frac{f(x)}{x^n}
+ \mid f \in \CC[x], n \in \ZZ_{\ge 0} \right\}. \]
+ In other words, we get the Laurent polynomials in $x$.
+
+ You might recognize this as
+ \[ \OO_V(U) \text{ where } V = \Aff^1, \; U = V \setminus \{0\}. \]
+ i.e.\ the sections of the punctured line.
+ In line with the ``hyperbola effect'',
+ this is also expressible as $\CC[x,y] / (xy-1)$.
+
+ \ii Let $p \in \CC$.
+ Suppose we let $S = \left\{ g(x) \mid g(p) \ne 0 \right\}$,
+ i.e.\ we allow any denominators where $g(p) \ne 0$.
+ Then
+ \[ S\inv A = \left\{ \frac{f(x)}{g(x)}
+ \mid f,g \in \CC[x], g(p) \ne 0 \right\}. \]
+    You might recognize this as the stalk $\OO_{\Aff^1, p}$.
+ This will be important later on.
+ \end{enumerate}
+\end{example}
+
+\begin{remark}
+ [Why the extra $s$?]
+ We cannot use the simpler $a_1s_2 - a_2s_1 = 0$ since
+ otherwise the equivalence relation may fail to be transitive.
+ Here is a counterexample: take
+  \[ A = \Zc{12} \qquad S = \{ 1, 2, 4, 8 \}. \]
+ Then we have for example $\frac01 = \frac02 = \frac{12}{2} = \frac61$.
+ So we need to have $\frac01=\frac61$ which is only true
+ with the first definition.
+ Of course, if $A$ is an integral domain (and $0\notin S$)
+ then this is a moot point.
+\end{remark}
+
+\begin{example}
+ [Field of fractions]
+ Let $A$ be an integral domain and $S = A \setminus \{0\}$.
+ Then $S\inv A = \Frac(A)$.
+\end{example}
+
+
+\section{Localization away from an element}
+\prototype{$\ZZ$ localized away from $6$ has fractions $\frac{m}{2^x 3^y}$.}
+We now focus on the two special cases of localization we will need the most;
+one in this section, the other in the next section.
+\begin{definition}
+ For $f \in A$, we define the \vocab{localization of $A$ away from $f$},
+ denoted $A[1/f]$ or $A[f\inv]$,
+ to be $\{1, f, f^2, f^3, \dots\}\inv A$.
+ (Note that $\left\{ 1, f, f^2, \dots \right\}$ is multiplicative.)
+\end{definition}
+\begin{remark}
+ In the literature it is more common to
+ see the notation $A_f$ instead of $A[1/f]$.
+ This is confusing, because in the next section
+ we define $A_\kp$ which is almost the opposite.
+ So I prefer this more suggestive (but longer) notation.
+\end{remark}
+
+\begin{example}
+ [Some arithmetic examples of localizations]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii We localize $\ZZ$ away from $6$:
+ \[ \ZZ[1/6] = \left\{ \frac{m}{6^n} \mid m \in \ZZ,
+ n \in \ZZ_{\ge 0} \right\}. \]
+    So $\ZZ[1/6]$ consists of those rational numbers whose
+    denominators are products of powers of $2$ and $3$.
+ For example, it contains $\frac{5}{12} = \frac{15}{36}$.
+
+ \ii Here is a more confusing example:
+ if we localize $\Zc{60}$ away from the element $5$,
+ we get $(\Zc{60})[1/5] \cong \Zc{12}$.
+ You should try to think about why this is the case.
+ We will see a ``geometric'' reason later.
+ \end{enumerate}
+\end{example}
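For (b), you can also verify the claim by brute force, straight from the fraction definition of the localization. The following Python sketch (my own, not from the text) builds all formal fractions $a/s$ with $s \in \{1, 5, 25\}$ (the powers of $5$ mod $60$) and counts equivalence classes under the relation $a_1/s_1 = a_2/s_2$ iff $s(a_1 s_2 - a_2 s_1) = 0$ for some $s \in S$:

```python
n = 60
S = {1, 5, 25}   # powers of 5 mod 60, since 5^3 = 125 = 5 (mod 60)

def equivalent(a1, s1, a2, s2):
    # a1/s1 ~ a2/s2 iff s*(a1*s2 - a2*s1) == 0 in Z/60 for some s in S
    return any(s * (a1 * s2 - a2 * s1) % n == 0 for s in S)

fractions = [(a, s) for a in range(n) for s in S]
classes = {
    frozenset(q for q in fractions if equivalent(*p, *q))
    for p in fractions
}
# The localization has 12 elements, consistent with (Z/60)[1/5] = Z/12.
assert len(classes) == 12
```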
+
+
+\begin{example}
+ [Localization at an element, algebraic geometry flavored]
+ We saw that if $A$ is the coordinate ring of a variety,
+ then $A[1/g]$ is interpreted geometrically as $\OO_V(D(g))$.
+ Here are some special cases:
+ \begin{enumerate}[(a)]
+ \ii As we saw, if $A = \CC[x]$,
+ then $A[1/x] = \left\{ \frac{f(x)}{x^n} \right\}$
+ consists of Laurent polynomials.
+
+ \ii Let $A = \CC[x,y,z]$.
+ Then \[ A[1/x] = \left\{ \frac{f(x,y,z)}{x^n} \mid
+ f \in \CC[x,y,z], \; n \ge 0 \right\} \]
+    consists of rational functions whose denominators are powers of $x$.
+
+ \ii Let $A = \CC[x,y]$.
+ If we localize away from $y-x^2$ we get
+ \[ A[(y-x^2)\inv] = \left\{ \frac{f(x,y)}{(y-x^2)^n} \mid
+    f \in \CC[x,y], \; n \ge 0 \right\}. \]
+ By now you should recognize this as $\OO_{\Aff^2}(D(y-x^2))$.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [An example with zero-divisors]
+ Let $A = \CC[x,y] / (xy)$
+ (which intuitively is the coordinate ring of two axes).
+ Suppose we localize at $x$:
+  Suppose we localize away from $x$:
+  equivalently, we allow denominators which are powers of $x$.
+  Since $xy = 0$ in $A$, we now have $0 = x\inv (xy) = y$ in $A[1/x]$,
+  so $y = 0$ there, and thus $y$ just goes away completely.
+ \[ A[1/x] \cong \CC[x,1/x].\]
+\end{example}
+
+\section{Localization at a prime ideal}
+\prototype{$\ZZ$ localized at $(5)$ has fractions $\frac{m}{n}$ with $5 \nmid n$.}
+\label{sec:localize_prime_ideal}
+
+\begin{definition}
+ If $A$ is a ring and $\kp$ is a prime ideal, then we define
+ \[ A_\kp \defeq \left( A \setminus \kp \right)^{-1} A. \]
+ This is called the \vocab{localization at $\kp$}.
+\end{definition}
+\begin{ques}
+ Why is $S = A \setminus \kp$ multiplicative
+ in the above definition?
+\end{ques}
+
+%Warning: this notation sort of conflicts with the previous one.
+%The ring $A_\kp$ is the localization with multiplicative set $A \setminus \kp$,
+%while $A_f$ is the localization with multiplicative set $\{1,f,f^2,\dots\}$;
+%these two are quite different beasts!
+%In $A_\kp$ the subscript denotes the \emph{forbidden} denominators;
+%in $A_f$ the subscript denotes the \emph{allowed} denominators.
+
+This special case is important because we will see that
+stalks of schemes will all be of this shape.
+In fact, the same was true for affine varieties too.
+\begin{example}
+ [Relation to affine varieties]
+ Let $V \subseteq \Aff^n$, let $A = \CC[V]$
+ and let $p = (a_1, \dots, a_n)$ be a point.
+ Consider the maximal (hence prime) ideal
+ \[ \km = (x_1 - a_1, \dots, x_n - a_n). \]
+ Observe that a function $f \in A$ vanishes at $p$
+ if and only if $f \pmod{\km} = 0$, equivalently $f \in \km$.
+ Thus, by \Cref{thm:stalks_affine_var} we can write
+ \begin{align*}
+ \OO_{V,p} &= \left\{ \frac{f}{g} \mid f,g \in A, g(p) \ne 0 \right\} \\
+ &= \left\{ \frac{f}{g} \mid f \in A, g \in A \setminus \km \right\} \\
+ &= \left( A \setminus \km \right)\inv A = A_\km.
+ \end{align*}
+ So, we can also express $\OO_{V,p}$ concisely as a localization.
+\end{example}
+Consequently, we give several examples in this vein.
+
+\begin{example}
+ [Geometric examples of localizing at a prime]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii We let $\km$ be the maximal ideal $(x)$ of $A = \CC[x]$.
+ Then \[ A_\km = \left\{ \frac{f(x)}{g(x)} \mid g(0) \ne 0 \right\} \]
+    consists of rational functions whose denominators do not vanish at $0$;
+    this is exactly the stalk $\OO_{\Aff^1, 0}$ we computed before.
+
+ \ii We let $\km$ be the maximal ideal $(x,y)$ of $A = \CC[x,y]$.
+ Then \[ A_\km = \left\{ \frac{f(x,y)}{g(x,y)} \mid g(0,0) \ne 0 \right\}. \]
+
+ \ii Let $\kp$ be the prime ideal $(y-x^2)$ of $A = \CC[x,y]$.
+ Then
+ \[ A_\kp = \left\{ \frac{f(x,y)}{g(x,y)} \mid g \notin (y-x^2) \right\}. \]
+ This is a bit different from what we've seen before:
+ the polynomials in the denominator are allowed to vanish
+ at a point like $(1,1)$, as long as they don't vanish on
+ \emph{every} point on the parabola.
+ This doesn't correspond to any stalk we're familiar with right now,
+ but it will later
+ (it will be the ``stalk at the generic point of the parabola'').
+
+ \ii Let $A = \CC[x]$ and localize at the prime ideal $(0)$.
+    This gives \[ A_{(0)} = \left\{ \frac{f(x)}{g(x)} \mid g \ne 0 \right\}. \]
+ This is all rational functions, period.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Arithmetic examples]
+ We localize $\ZZ$ at a few different primes.
+ \begin{enumerate}[(a)]
+ \ii If we localize $\ZZ$ at $(0)$:
+ \[ \ZZ_{(0)} = \left\{ \frac mn \mid n \ne 0 \right\}
+ \cong \QQ. \]
+ \ii If we localize $\ZZ$ at $(3)$, we get
+ \[ \ZZ_{(3)} = \left\{ \frac mn \mid \gcd(n,3) = 1 \right\} \]
+ which is the ring of rational numbers
+ whose denominators are relatively prime to $3$.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Field of fractions]
+ If $A$ is an integral domain,
+ the localization $A_{(0)}$
+ is the field of fractions of $A$.
+\end{example}
+
+
+\section{Prime ideals of localizations}
+\prototype{The examples with $A = \ZZ$.}
+We take the time now to mention how you can
+think about prime ideals of localized rings.
+\begin{proposition}
+ [The prime ideals of $S\inv A$]
+ Let $A$ be a ring and $S \subseteq A$ a multiplicative set.
+ Then there is a natural inclusion-preserving bijection between:
+ \begin{itemize}
+ \ii The set of prime ideals of $S\inv A$, and
+ \ii The set of prime ideals of $A$ not intersecting $S$.
+ \end{itemize}
+\end{proposition}
+\begin{proof}
+ Consider the homomorphism $\iota \colon A \to S\inv A$.
+ For any prime ideal $\kq \subseteq S\inv A$,
+ its pre-image $\iota\pre(\kq)$ is a prime ideal of $A$
+ (by \Cref{prob:prime_preimage}).
+ Conversely, for any prime ideal $\kp \subseteq A$
+ not meeting $S$,
+ $S\inv \kp = \left\{ \frac{a}{s} \mid a \in \kp, s \in S \right\}$
+ is a prime ideal of $S\inv A$.
+ An annoying check shows that this produces the required bijection.
+\end{proof}
+In practice, we will almost always use the corollary
+where $S$ is one of the two special cases we discussed at length:
+\begin{corollary}
+ [Spectrums of localizations]
+ Let $A$ be a ring.
+ \begin{enumerate}[(a)]
+    \ii If $f$ is an element of $A$,
+    then the prime ideals of $A[1/f]$ are naturally
+    in bijection with prime ideals of $A$
+    which \textbf{do not contain the element} $f$.
+
+ \ii If $\kp$ is a prime ideal of $A$,
+ then the prime ideals of $A_\kp$ are naturally
+ in bijection with prime ideals of $A$
+ which are \textbf{subsets of} $\kp$.
+ \end{enumerate}
+\end{corollary}
+\begin{proof}
+ Part (b) is immediate; a prime ideal doesn't meet $A \setminus \kp$
+ exactly if it is contained in $\kp$.
+ For part (a), we want prime ideals of $A$ not containing
+ any \emph{power} of $f$.
+ But if the ideal is prime and contains $f^n$,
+  then it contains either $f$ or $f^{n-1}$,
+  and hence by induction it contains $f$;
+  so at least for prime ideals the two conditions are equivalent.
+\end{proof}
+Notice again how the notation is a bit of a nuisance.
+Anyways, here are some examples, to help cement the picture.
+\begin{example}
+ [Prime ideals of {$\ZZ[1/6]$}]
+ Suppose we localize $\ZZ$ away from the element $6$,
+ i.e.\ consider $\ZZ[1/6]$.
+ As we saw,
+ \[ \ZZ[1/6] = \left\{ \frac{n}{2^x 3^y} \mid n \in \ZZ,
+    x,y \in \ZZ_{\ge 0} \right\} \]
+  consists of those rational numbers whose
+ denominators have only powers of $2$ and $3$.
+ Note that $(5) \subset \ZZ[1/6]$ is a prime ideal:
+ those elements of $\ZZ[1/6]$ with $5$ dividing the numerator.
+ Similarly, $(7)$, $(11)$, $(13)$, \dots\
+ and even $(0)$ give prime ideals of $\ZZ[1/6]$.
+
+ But $(2)$ and $(3)$ no longer correspond to
+  prime ideals; in fact in $\ZZ[1/6]$ we have $(2) = (3) = (1)$,
+ the whole ring.
+\end{example}
+
+\begin{example}
+  [Prime ideals of {$\ZZ_{(5)}$}]
+ Suppose we localize $\ZZ$ at the prime $(5)$.
+ As we saw,
+ \[ \ZZ_{(5)} = \left\{ \frac{m}{n} \mid m,n \in \ZZ,
+    5 \nmid n \right\} \]
+  consists of those rational numbers whose
+ denominators are not divisible by $5$.
+ This is an integral domain, so $(0)$ is still a prime ideal.
+ There is one other prime ideal: $(5)$,
+ i.e.\ those elements whose numerators are divisible by $5$.
+
+ There are no other prime ideals:
+ if $p \ne 5$ is a rational prime,
+ then $(p) = (1)$, the whole ring, again.
+\end{example}
+
+\section{Prime ideals of quotients}
+While we are here, we mention that
+the prime ideals of quotients $A/I$
+can be interpreted in terms of those of $A$
+(as in the previous section for localization).
+You may remember this from \Cref{prob:inclusion_preserving}
+a long time ago, if you did that problem;
+but for our purposes we actually only care about the prime ideals.
+\begin{proposition}
+ [The prime ideals of $A/I$]
+ \label{prop:prime_quotient}
+ If $A$ is a ring and $I$ is any ideal (not necessarily prime)
+ then the prime (resp.\ maximal) ideals of $A/I$
+ are in bijection with prime (resp.\ maximal) ideals of $A$
+ which are \textbf{supersets of} $I$.
+ This bijection is inclusion-preserving.
+\end{proposition}
+\begin{proof}
+ Consider the quotient homomorphism $\psi \colon A \surjto A/I$.
+ For any prime ideal $\kq \subseteq A/I$,
+ its pre-image $\psi\pre(\kq)$ is a prime ideal
+ (by \Cref{prob:prime_preimage}).
+ Conversely, for any prime ideal $\kp$
+ with $I \subseteq \kp \subseteq A$,
+ we get a prime ideal of $A/I$ by looking at $\kp \pmod I$.
+ An annoying check shows that this produces the required bijection.
+ It is also inclusion-preserving --- from which
+ the same statement holds for maximal ideals.
+\end{proof}
+\begin{example}
+ [Prime ideals of $\Zc{60}$]
+ The ring $\Zc{60}$ has three prime ideals:
+ \begin{align*}
+ (2) &= \left\{ 0, 2, 4, \dots, 58 \right\} \\
+ (3) &= \left\{ 0, 3, 6, \dots, 57 \right\} \\
+ (5) &= \left\{ 0, 5, 10, \dots, 55 \right\}.
+ \end{align*}
+ Back in $\ZZ$, these correspond to the three prime ideals
+ which are supersets of
+ $60\ZZ = \left\{ \dots, -60, 0, 60, 120, \dots \right\}$.
+\end{example}
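This can be double-checked by brute force: every ideal of $\Zc{60}$ is $(d)$ for a divisor $d$ of $60$, and $(d)$ is prime exactly when the quotient $\Zc{60}/(d) \cong \Zc{d}$ is a nonzero ring without zero divisors. A Python sketch (my own check):

```python
n = 60
divisors = [d for d in range(1, n + 1) if n % d == 0]

def is_prime_ideal(d):
    # (d) is prime in Z/60 iff Z/d is a nonzero integral domain.
    # (Note d = 60 gives the zero ideal, handled by the check below.)
    if d <= 1:
        return False  # d = 1 gives the whole ring, which is not prime
    return all((a * b) % d != 0
               for a in range(1, d) for b in range(1, d))

primes = [d for d in divisors if is_prime_ideal(d)]
assert primes == [2, 3, 5]   # exactly the prime ideals (2), (3), (5)
```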
+
+\section{Localization commutes with quotients}
+\prototype{$(\CC[x,y]/(xy))[1/x] \cong \CC[x,x\inv]$.}
+While we are here, we mention a useful result from
+commutative algebra which lets us compute localizations in quotient rings,
+which are surprisingly unintuitive.
+You will \emph{not} have a reason to care about this
+until we reach \Cref{sec:convenient_square},
+and so this is only placed earlier to emphasize that it's
+a purely algebraic fact that we can (and do) state this early,
+even though we will not need it anytime soon.
+
+Let's say we have a quotient ring like
+\[ A/I = \CC[x,y] / (xy) \]
+and want to compute the localization of this ring
+away from the element $x$.
+(To be pedantic, we are actually localizing away from $x \pmod{xy}$,
+the element of the quotient ring, but we will just call it $x$.)
+You will quickly find that even the notation becomes clumsy: it is
+\begin{equation}
+ \left( \CC[x,y] / (xy) \right)[1/x]
+ \label{eq:quotient_localization_before}
+\end{equation}
+which is hard to think about,
+because the elements in play are part of the \emph{quotient}:
+how are we supposed to think about
+\[ \frac{1 \pmod{xy}}{x \pmod{xy}} \]
+for example?
+The zero-divisors in play may already make you feel uneasy.
+
+However, it turns out that we can actually do the localization
+\emph{first}, meaning the answer is just
+\begin{equation}
+ \CC[x,y,1/x] / (xy)
+ \label{eq:quotient_localization_after}
+\end{equation}
+which then becomes $\CC[x, x\inv, y] / (y) \cong \CC[x,x\inv]$.
+
+This might look like it should be trivial,
+but it's not as obvious as you might expect.
+There is a sleight of hand present here with the notation:
+\begin{itemize}
+ \ii In \eqref{eq:quotient_localization_before},
+ the notation $(xy)$ stands for an ideal of $\CC[x,y]$
+ --- that is, the set $xy \CC[x,y]$.
+
+ \ii In \eqref{eq:quotient_localization_after}
+ the notation $(xy)$ now stands for an ideal of $\CC[x,x\inv,y]$
+ --- that is, the set $xy \CC[x,x\inv,y]$.
+\end{itemize}
+So even writing down the \emph{statement} of the theorem
+is actually going to look terrible.
+
+In general, what we want to say is that if we have our ring $A$
+with ideal $I$ and $S$ is some multiplicative subset of $A$,
+then \[ \text{Colloquially: ``}S\inv (A/I) = (S\inv A)/I\text{''}. \]
+But there are two things wrong with this:
+\begin{itemize}
+\ii The main one is that $I$ is not an ideal of $S\inv A$, as we saw above.
+This is remedied by instead using $S\inv I$,
+which consists of those elements $\frac xs$
+for $x \in I$ and $s \in S$.
+As we saw this distinction is usually masked in practice,
+because we will usually write $I = (a_1, \dots, a_n) \subseteq A$
+in which case the new ideal $S\inv I \subseteq S\inv A$ can be denoted
+in exactly the same way: $(a_1, \dots, a_n)$,
+just regarded as a subset of $S\inv A$ now.
+
+\ii The second is that $S$ is not, strictly speaking,
+a subset of $A/I$, either.
+But this is easily remedied by instead using the image of $S$
+under the quotient map $A \surjto A/I$.
+
+We actually already saw this in the previous example:
+when trying to localize $\CC[x,y]/(xy)$,
+we were really localizing at the element $x \pmod{xy}$,
+but (as always) we just denoted it by $x$ anyways.
+\end{itemize}
+
+And so after all those words, words, words, we have the hideous:
+\begin{theorem}
+ [Localization commutes with quotients]
+ \label{thm:localization_commute_quotient}
+ Let $S$ be a multiplicative set of a ring $A$,
+ and $I$ an ideal of $A$.
+ Let $\ol S$ be the image of $S$
+ under the projection map $A \surjto A/I$.
+ Then
+ \[ {\ol S}\inv (A/I) \cong S\inv A / S\inv I \]
+ where $S\inv I = \left\{ \frac{x}{s} \mid x \in I, s \in S \right\}$.
+\end{theorem}
+\begin{proof}
 Omitted; Atiyah-Macdonald is the right reference for this
 sort of thing, in the event that you do care.
+\end{proof}
+The notation is a hot mess.
+But when we do calculations in practice, we instead write
+\[ \left( \CC[x,y,z]/(x^2 + y^2 - z^2) \right)[1/x]
+ \cong \CC[x,y,z,1/x] / (x^2 + y^2 - z^2) \]
+or (for an example where we localize at a prime ideal)
+\[ \left( \ZZ[x,y,z]/ (x^2 + yz) \right)_{(x,y)}
+ \cong \ZZ[x,y,z]_{(x,y)} / (x^2 + yz) \]
+and so on --- the pragmatism of our ``real-life'' notation
+which hides some details actually guides our intuition
+(rather than misleading us).
+So maybe the moral of this section is that whenever
+you compute the localization of the quotient ring,
+if you just suspend belief for a bit,
+then you will probably get the right answer.
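As a sanity check with honest numbers, here is one instance of
\Cref{thm:localization_commute_quotient} that you can verify by hand.
\begin{example}
	[Sanity check over $\ZZ$]
	Take $A = \ZZ$, $I = (4)$, and $S = \{1, 3, 3^2, \dots\}$.
	Then $S\inv A = \ZZ[1/3]$ and $S\inv I = 4 \ZZ[1/3]$,
	so the right-hand side is $\ZZ[1/3] / (4)$,
	which is just $\Zc 4$ because $3 \cdot 3 \equiv 1 \pmod 4$.
	On the other hand, $\ol S = \{1, 3\} \subseteq \Zc 4$
	consists only of units, so localizing at $\ol S$ does nothing:
	the left-hand side ${\ol S}\inv \left( \Zc 4 \right)$ is also $\Zc 4$.
\end{example}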
+
+We will later see geometric interpretations of these facts
+when we work with $\Spec A/I$,
+at which point they will become more natural.
+
+\section{\problemhead}
+\begin{problem}
+ Let $A = \Zc{2016}$, and consider the element $60 \in A$.
+ Compute $A[1/60]$, the localization of $A$ away from $60$.
+ \begin{sol}
+ One should get $A[1/60] = \Zc{7}$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Injectivity of localizations]
+ Let $A$ be a ring and $S \subseteq A$ a multiplicative set.
+ Find necessary and sufficient conditions
+ for the map $A \to S\inv A$ to be injective.
+ \begin{hint}
+ Consider zero divisors.
+ \end{hint}
+ \begin{sol}
 If and only if $S$ contains no zero divisors of $A$.
+ \end{sol}
+\end{problem}
+
+\begin{sproblem}
+ [Alluding to local rings]
+ Let $A$ be a ring, and $\kp$ a prime ideal.
+ How many maximal ideals does $A_\kp$ have?
+ \begin{hint}
+ Only one!
+ A proof will be given a few chapters later.
+ \end{hint}
+\end{sproblem}
+
+
+\begin{problem}
+ Let $A$ be a ring such that $A_\kp$ is an integral domain
+ for every prime ideal $\kp$ of $A$.
+ Must $A$ be an integral domain?
+ \begin{hint}
+ No. Imagine two axes.
+ \end{hint}
+ \begin{sol}
+ Take $A = \CC[x,y] / (xy)$.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/log.tex b/books/napkin/log.tex
new file mode 100644
index 0000000000000000000000000000000000000000..63255c6eb3648683cb02bf93cad9a17e986f811f
--- /dev/null
+++ b/books/napkin/log.tex
@@ -0,0 +1,363 @@
+\chapter{Holomorphic square roots and logarithms}
+\label{ch:complex_log}
+In this chapter we'll make sense of a holomorphic square root and logarithm.
+The main results are \Cref{thm:nth_root}, \Cref{thm:holomorphic_log},
+\Cref{cor:nonvanishing}, and \Cref{cor:principal}.
+If you like, you can read just these four results, and skip the discussion of how they came to be.
+
+Let $f : U \to \CC$ be a holomorphic function.
+A \vocab{holomorphic $n$th root} of $f$ is a function $g : U \to \CC$
+such that $f(z) = g(z)^n$ for all $z \in U$.
+A \vocab{logarithm} of $f$ is a function $g : U \to \CC$
+such that $f(z) = e^{g(z)}$ for all $z \in U$.
+The main question we'll try to figure out is: when do these exist?
+In particular, what if $f = \id$?
+
+\section{Motivation: square root of a complex number}
+To start us off, can we define $\sqrt z$ for any complex number $z$?
+
+The first obvious problem that comes up is that for any $z$, there are
+\emph{two} numbers $w$ such that $w^2 = z$.
+How can we pick one to use?
+For our ordinary square root function, we had a notion of ``positive'',
+and so we simply took the positive root.
+
+Let's expand on this: given
+$ z = r \left( \cos\theta + i \sin\theta \right) $
+(here $r \ge 0$) we should take the root to be
\[ w = \sqrt{r} \left( \cos \alpha + i \sin \alpha \right) \]
+such that $2\alpha \equiv \theta \pmod{2\pi}$;
+there are two choices for $\alpha \pmod{2\pi}$, differing by $\pi$.
+
+For complex numbers, we don't have an obvious way to pick $\alpha$.
+Nonetheless, perhaps we can also get away with an arbitrary distinction:
let's see what happens if we just choose the $\alpha$ with $-\half\pi \le \alpha < \half\pi$.
+
+Pictured below are some points (in red)
+and their images (in blue) under this ``upper-half'' square root.
The condition on $\alpha$ means we are forcing the blue points to lie in the right half-plane.
+
+\begin{center}
+ %% First Diagram
+ \begin{asy}
+ size(7cm);
+ cplane();
+ draw(origin--(4.2*dir(180)), grey+2);
+ pair O = (-2.4, 1.1);
+ pair T;
+ pair[] A = new pair[8];
+ real r = 1.44;
+ real[] dirs = {45,45,45,135,175,255,-90,-75};
+ for (int i=0; i<8; ++i) {
+ A[i] = O + r*dir(i*45);
+ dot("$z_{" + (string) i + "}$", A[i], dir(dirs[i]), red);
+ }
+ draw(A[0]..A[1]..A[2]..A[3]..A[4]..A[5]..A[6]..A[7]..cycle, red);
+
+ real[] dirs = {-45, -15, 15, 45, 75, 150, 210, 230};
+ for (int i=0; i<8; ++i) {
+ A[i] = sqrt(abs(A[i]))*dir(degrees(A[i])/2);
+ if (degrees(A[i]) > 90) { A[i] *= -1; }
+ dot("$w_{" + (string) i + "}$", A[i], dir(dirs[i]), blue);
+ }
+ draw(A[7]..A[0]..A[1]..A[2]..A[3]..A[4]..A[5], blue);
+ \end{asy}
+\end{center}
+
+
+Here, $w_i^2 = z_i$ for each $i$, and we are constraining the $w_i$
+to lie in the right half of the complex plane.
+We see there is an obvious issue: there is a big discontinuity near
+the points $w_5$ and $w_7$!
+The nearby point $w_6$ has been mapped very far away.
+This discontinuity occurs since the points on the negative real axis are
+at the ``boundary''.
+For example, given $-4$, we send it to $-2i$, but we have hit the boundary:
+in our interval $-\half\pi \le \alpha < \half\pi$, we are at the very left edge.
+
+
+The negative real axis that we must not touch
+is what we will later call a \emph{branch cut},
+but for now I call it a \textbf{ray of death}.
+It is a warning to the red points: if you cross this line, you will die!
+However, if we move the red circle just a little upwards
+(so that it misses the negative real axis) this issue is avoided entirely,
+and we get what seems to be a ``nice'' square root.
+
+\begin{center}
+ %% Second Diagram
+ \begin{asy}
+ size(7cm);
+ cplane();
+ draw(origin--(4.2*dir(180)), grey+2);
+ pair O = (-2.8, 1.9);
+ pair T;
+ pair[] A = new pair[8];
+ real r = 1.44;
+ real[] dirs = {45,45,45,135,175,225,-45,-45};
+ for (int i=0; i<8; ++i) {
+ A[i] = O + r*dir(i*45);
+ dot("$z_{" + (string) i + "}$", A[i], dir(dirs[i]), red);
+ }
+ draw(A[0]..A[1]..A[2]..A[3]..A[4]..A[5]..A[6]..A[7]..cycle, red);
+
+ real[] dirs = {-45, -15, 15, 75, 105, 150, 210, 260};
+ for (int i=0; i<8; ++i) {
+ A[i] = sqrt(abs(A[i]))*dir(degrees(A[i])/2);
+ if (degrees(A[i]) > 90) { A[i] *= -1; }
+ dot("$w_{" + (string) i + "}$", A[i], dir(dirs[i]), blue);
+ }
+ draw(A[0]..A[1]..A[2]..A[3]..A[4]..A[5]..A[6]..A[7]..cycle, blue);
+ \end{asy}
+\end{center}
+
+In fact, the ray of death is fairly arbitrary:
+it is the set of ``boundary issues''
that arose when we picked $-\half\pi \le \alpha < \half\pi$.
+Suppose we instead insisted on the interval $0 \le \alpha < \pi$;
+then the ray of death would be the \emph{positive} real axis instead.
+The earlier circle we had now works just fine.
+
+\begin{center}
+ %% Third Diagram
+ \begin{asy}
+ size(7cm);
+ cplane();
+ draw(origin--(4.2*dir(0)), grey+2);
+ pair O = (-2.4, 1.1);
+ pair T;
+ pair[] A = new pair[8];
+ real r = 1.44;
+ real[] dirs = {-5,45,45,135,175,255,-90,-75};
+ for (int i=0; i<8; ++i) {
+ A[i] = O + r*dir(i*45);
+ dot("$z_{" + (string) i + "}$", A[i], dir(dirs[i]), red);
+ }
+ draw(A[0]..A[1]..A[2]..A[3]..A[4]..A[5]..A[6]..A[7]..cycle, red);
+
+ real[] dirs = {-45, -15, 15, 45, 75, 150, 180, 230};
+ for (int i=0; i<8; ++i) {
+ A[i] = sqrt(abs(A[i]))*dir(degrees(A[i])/2);
+ dot("$w_{" + (string) i + "}$", A[i], dir(dirs[i]), blue);
+ }
+ draw(A[0]..A[1]..A[2]..A[3]..A[4]..A[5]..A[6]..A[7]..cycle, blue);
+ \end{asy}
+\end{center}
+
+What we see is that picking a particular $\alpha$-interval
+leads to a different set of edge cases, and hence a different ray of death.
+The only thing these rays have in common is their starting point of zero.
+In other words, given a red circle and a restriction of $\alpha$,
+I can make a nice ``square rooted'' blue circle as long as the
+ray of death misses it.
+
+So, what exactly is going on?
+
+\section{Square roots of holomorphic functions}
+To get a picture of what's happening, we would like to consider a more
+general problem: let $f: U \to \CC$ be holomorphic.
+Then we want to decide whether there is a $g : U \to \CC$
+such that \[ f(z) = g(z)^2. \]
+Our previous discussion with $f = \id$
+tells us we cannot hope to achieve this for $U = \CC$;
+there is a ``half-ray'' which causes problems.
+However, there are certainly functions $f : \CC \to \CC$ such that a $g$ exists.
As the simplest example, $f(z) = z^2$ should definitely have a square root!
+
+Now let's see if we can fudge together a square root.
+Earlier, what we did was try to specify a rule to force one of the two choices at each point.
+This is unnecessarily strict.
+Perhaps we can do something like:
start at a point $z_0 \in U$, pick a square root $w_0$ of $f(z_0)$,
+and then try to ``fudge'' from there the square roots of the other points.
+What do I mean by fudge?
+Well, suppose $z_1$ is a point very close to $z_0$,
+and we want to pick a square root $w_1$ of $f(z_1)$.
While there are two choices, we would also expect $w_1$ to be close to $w_0$.
+Unless we are highly unlucky, this should tell us which choice of $w_1$ to pick.
+(Stupid concrete example: if I have taken the square root $-4.12i$ of $-17$
+and then ask you to continue this square root to $-16$,
+which sign should you pick for $\pm 4i$?)
+
+There are two possible ways we could get unlucky in the scheme above:
+first, if $w_0 = 0$, then we're sunk.
+But even if we avoid that, we have to worry that
+if we run a full loop in the complex plane,
+we might end up in a different place from where we started.
+For concreteness, consider the following situation, again with $f = \id$:
+
+\begin{center}
+ %% Fourth Diagram
+ \begin{asy}
+ size(7cm);
+ cplane();
+ pair O = origin;
+ pair T;
+ pair[] A = new pair[8];
+ real r = 3.24;
+ real[] dirs = {45,45,45,135,135,255,-45,-45};
+ for (int i=0; i<8; ++i) {
+ A[i] = O + r*dir(i*45);
+ dot("$z_{" + (string) i + "}$", A[i], dir(dirs[i]), red);
+ }
+ draw(A[0]..A[1]..A[2]..A[3]..A[4]..A[5]..A[6]..A[7]..cycle, red);
+
+ real[] dirs = {-65, -15, 15, 45, 45, 135, 143, 135};
+ for (int i=0; i<8; ++i) {
+ A[i] = sqrt(abs(A[i]))*dir(degrees(A[i])/2);
+ dot("$w_{" + (string) i + "}$", A[i], dir(dirs[i]), blue);
+ }
+ draw(A[0]..A[1]..A[2]..A[3]..A[4]..A[5]..A[6]..A[7]..(-A[0]), blue);
+ \end{asy}
+\end{center}
+
+We started at the point $z_0$, with one of its square roots as $w_0$.
+We then wound a full red circle around the origin, only to find that at the end of it,
the blue arc ends at a different place from where it started!
+
+The interval construction from earlier doesn't work either:
+no matter how we pick the interval for $\alpha$,
+any ray of death must hit our red circle.
+The problem somehow lies with the fact that we have enclosed the
+very special point $0$.
+
+Nevertheless, we know that if we take $f(z) = z^2$, then we don't run into any problems
+with our ``make it up as you go'' procedure.
+So, what exactly is going on?
+
+\section{Covering projections}
+By now, if you have read the part on algebraic topology,
+this should all seem quite familiar.
+The ``fudging'' procedure exactly describes the idea of a lifting.
+
+More precisely, recall that there is a covering projection
+\[ (-)^2 : \CC \setminus \{0\}
+ \to \CC \setminus \{0\}. \]
+Let $V = \left\{ z \in U \mid f(z) \neq 0 \right\}$.
+For $z \in U \setminus V$, we already have the square root $g(z) = \sqrt{f(z)} = \sqrt 0 = 0$.
So the burden is to construct $g$ on $V$.
+
+Then essentially, what we are trying to do is construct a lifting $g$ in the diagram
+\begin{center}
+ \begin{tikzcd}[sep=large]
+ & E = \CC \setminus \{0\} \ar[d, "p=\bullet^2"] \\
+ V \ar[ru, "g"] \ar[r, "f"'] & B = \CC \setminus \{0\}.
+ \end{tikzcd}
+\end{center}
+Our map $p$ can be described as ``winding around twice''.
Our \Cref{thm:lifting} now tells us that this lifting exists if and only if
\[ f_\ast\im(\pi_1(V)) \subseteq p_\ast\im(\pi_1(E)), \]
that is, the image of $\pi_1(V)$ under $f_\ast$ is contained
in the image of $\pi_1(E)$ under $p_\ast$.
Since $B$ and $E$ are both punctured planes,
they are homotopy equivalent to $S^1$.
+\begin{ques}
	Show that the image of $\pi_1(E)$ under $p_\ast$ is
	exactly $2\ZZ$ once we identify $\pi_1(B) = \ZZ$.
+\end{ques}
+
+That means that for any loop $\gamma$ in $V$, we need $f \circ \gamma$ to have an
+\emph{even} winding number around $0 \in B$.
+This amounts to
+\[
	\frac{1}{2\pi i} \oint_\gamma \frac{f'}{f} \; dz \in 2\ZZ
+\]
+since $f$ has no poles.
+
+Replacing $2$ with $n$ and carrying over the discussion gives the first main result.
+\begin{theorem}
+ [Existence of holomorphic $n$th roots]
+ \label{thm:nth_root}
+ Let $f : U \to \CC$ be holomorphic.
+ Then $f$ has a holomorphic $n$th root if and only if
+ \[ \frac{1}{2\pi i}\oint_\gamma \frac{f'}{f} \; dz \in n\ZZ \]
+ for every contour $\gamma$ in $U$.
+\end{theorem}
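As a sanity check, let's see how \Cref{thm:nth_root} recovers
the phenomena from the pictures at the start of the chapter.
\begin{example}
	[The identity function, revisited]
	Take $f = \id$ and $n = 2$.
	If $U = \CC \setminus \{0\}$ and $\gamma$ is the unit circle, then
	\[ \frac{1}{2\pi i} \oint_\gamma \frac{dz}{z} = 1 \notin 2\ZZ \]
	so no holomorphic square root exists on $U$:
	this is exactly the loop around the origin that caused trouble earlier.
	But if $U$ is the plane minus a ray from the origin,
	then no contour in $U$ can wind around $0$,
	every such integral vanishes, and a square root does exist.
\end{example}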
+
+
+\section{Complex logarithms}
+The multivalued nature of the complex logarithm comes from the fact that
+\[ \exp(z+2\pi i) = \exp(z). \]
So if $e^w = z$, then $w + 2\pi i k$ is also a solution for every integer $k$.
+
+We can handle this in the same way as before:
+it amounts to a lifting of the following diagram.
+\begin{center}
+\begin{tikzcd}[sep=large]
+ & E = \CC \ar[d, "p=\exp"] \\
+ U \ar[ru, "g"] \ar[r, "f"'] & B = \CC \setminus \{0\}
+\end{tikzcd}
+\end{center}
+There is no longer a need to work with a separate $V$ since:
+\begin{ques}
+ Show that if $f$ has any zeros then $g$ can't possibly exist.
+\end{ques}
+In fact, the map $\exp : \CC \to \CC\setminus\{0\}$ is a universal cover,
+since $\CC$ is simply connected.
Thus, $p_\ast\im(\pi_1(\CC))$ is \emph{trivial}.
So in addition to being zero-free, $f \circ \gamma$ must have winding number zero around $0 \in B$ for every loop $\gamma$.
+In other words:
+\begin{theorem}
+ [Existence of logarithms]
+ \label{thm:holomorphic_log}
+ Let $f : U \to \CC$ be holomorphic.
+ Then $f$ has a logarithm if and only if
+ \[ \frac{1}{2\pi i}\oint_\gamma \frac{f'}{f} \; dz = 0 \]
+ for every contour $\gamma$ in $U$.
+\end{theorem}
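As a quick contrast between the two theorems:
\begin{example}
	[A square root without a logarithm]
	On $U = \CC \setminus \{0\}$, the function $f(z) = z^2$
	has the holomorphic square root $g(z) = z$.
	But it has no logarithm: taking $\gamma$ to be the unit circle,
	\[ \frac{1}{2\pi i} \oint_\gamma \frac{2z}{z^2} \; dz = 2 \neq 0. \]
\end{example}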
+
+\section{Some special cases}
+The most common special case is
+\begin{corollary}
+ [Nonvanishing functions from simply connected domains]
+ \label{cor:nonvanishing}
	Let $f : \Omega \to \CC$ be holomorphic, where $\Omega$ is simply connected.
	If $f(z) \neq 0$ for every $z \in \Omega$,
	then $f$ has both a logarithm and a holomorphic $n$th root for every $n$.
+\end{corollary}
+
+Finally, let's return to the question of $f = \id$ from the very beginning.
+What's the best domain $U$ such that \[ \sqrt{-} : U \to \CC \]
+is well-defined? Clearly $U = \CC$ cannot be made to work,
+but we can do almost as well.
+For note that the only zero of $f = \id$ is at the origin.
+Thus if we want to make a logarithm exist,
+all we have to do is make an incision in the complex plane
+that renders it impossible to make a loop around the origin.
The usual choice is to delete the negative half of the real axis, our very first ray of death;
+we call this a \vocab{branch cut}, with \vocab{branch point} at $0 \in \CC$
+(the point which we cannot circle around).
+This gives
+\begin{theorem}[Branch cut functions]
+ \label{cor:principal}
+ There exist holomorphic functions
+ \begin{align*}
+ \log &: \CC \setminus (-\infty, 0] \to \CC \\ % chktex 9
+ \sqrt[n]{-} &: \CC \setminus (-\infty, 0] \to \CC % chktex 9
+ \end{align*}
	satisfying the obvious properties, namely $e^{\log z} = z$
	and $\left( \sqrt[n]{z} \right)^n = z$.
+\end{theorem}
+There are many possible choices of such functions ($n$ choices for the $n$th root
+and infinitely many for $\log$);
+a choice of such a function is called a \vocab{branch}.
+So this is what is meant by a ``branch'' of a logarithm.
+
+The \vocab{principal branch} is the ``canonical'' branch, analogous
+to the way we arbitrarily pick the positive branch
+to define $\sqrt{-} : \RR_{\ge 0} \to \RR_{\ge 0}$.
+For $\log$, we take the $w$ such that $e^w = z$
+and the imaginary part of $w$ lies in $(-\pi, \pi]$ % chktex 9
+(since we can shift by integer multiples of $2\pi i$).
+Often, authors will write $\opname{Log} z$ to emphasize this choice.
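For concreteness, here are a couple of values of the principal branch:
$\opname{Log}(i) = \frac{\pi}{2} i$ and
$\opname{Log}(1+i) = \frac12 \log 2 + \frac{\pi}{4} i$.
One warning: the familiar rule $\log(zw) = \log z + \log w$
can fail for the principal branch.
For example, if $z = w = e^{2\pi i/3}$ then
\[ \opname{Log} z + \opname{Log} w = \frac{4\pi}{3} i
	\neq -\frac{2\pi}{3} i = \opname{Log}(zw) \]
since the imaginary part must be brought back into $(-\pi, \pi]$. % chktex 9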
+
+
\section{\problemhead}
+\begin{problem}
+ Show that a holomorphic function $f : U \to \CC$
+ has a holomorphic logarithm if and only if it
+ has a holomorphic $n$th root for every integer $n$.
+\end{problem}
+
+\begin{problem}
+ Show that the function $f : U \to \CC$
	given by $z \mapsto z(z-1)$ has a holomorphic square root,
+ where $U$ is the entire complex plane minus the closed interval $[0,1]$.
+\end{problem}
diff --git a/books/napkin/long-exact.tex b/books/napkin/long-exact.tex
new file mode 100644
index 0000000000000000000000000000000000000000..dc8b51686c3285c1bbfff438decb52484d191b3d
--- /dev/null
+++ b/books/napkin/long-exact.tex
@@ -0,0 +1,697 @@
+\chapter{The long exact sequence}
+In this chapter we introduce the key fact about chain complexes that will allow us to compute
+the homology groups of any space: the so-called ``long exact sequence''.
+
+For those that haven't read about abelian categories:
+a sequence of morphisms of abelian groups
+\[ \dots \to G_{n+1} \to G_n \to G_{n-1} \to \dots \]
+is \vocab{exact} if the image of any arrow is equal to the kernel of the next arrow.
+In particular,
+\begin{itemize}
	\ii The sequence $0 \to A \to B$ is exact if and only if $A \to B$ is injective.
	\ii The sequence $A \to B \to 0$ is exact if and only if $A \to B$ is surjective.
+\end{itemize}
+(On that note: what do you call a chain complex whose homology groups are all trivial?)
+A short exact sequence is one of the form $0 \to A \injto B \surjto C \to 0$.
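If short exact sequences are new to you,
the simplest example worth keeping in mind is
\[ 0 \to \ZZ \taking{\times 2} \ZZ \surjto \Zc 2 \to 0 \]
where the first map is multiplication by $2$ (which is injective)
and the second is reduction mod $2$ (which is surjective);
the image of the first map, the even integers,
is exactly the kernel of the second.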
+
+\section{Short exact sequences and four examples}
+\prototype{Relative sequence and Mayer-Vietoris sequence.}
+Let $\AA = \catname{AbGrp}$.
+Recall that we defined a morphism of chain complexes in $\AA$ already.
+\begin{definition}
+Suppose we have a map of chain complexes
\[ 0 \to A_\bullet \taking f B_\bullet \taking g C_\bullet \to 0. \]
+It is said to be \vocab{short exact} if \emph{each row} of the diagram below is short exact.
+\begin{center}
+\begin{tikzcd}[column sep=huge]
+ & \vdots \ar[d, "\partial_A"] & \vdots \ar[d, "\partial_B"] & \vdots \ar[d, "\partial_C"] & \\
+ 0 \ar[r]
+ & A_{n+1} \ar[hook, r, "f_{n+1}"] \ar[d, "\partial_A"]
+ & B_{n+1} \ar[r, two heads, "g_{n+1}"] \ar[d, "\partial_B"]
+ & C_{n+1} \ar[r] \ar[d, "\partial_C"]
+ & 0 \\
+ 0 \ar[r]
+ & A_n \ar[hook, r, "f_n"] \ar[d, "\partial_A"]
+ & B_n \ar[r, two heads, "g_n"] \ar[d, "\partial_B"]
+ & C_n \ar[r] \ar[d, "\partial_C"]
+ & 0 \\
+ 0 \ar[r]
+ & A_{n-1} \ar[hook, r, "f_{n-1}"] \ar[d, "\partial_A"]
+ & B_{n-1} \ar[r, two heads, "g_{n-1}"] \ar[d, "\partial_B"]
+ & C_{n-1} \ar[r] \ar[d, "\partial_C"]
+ & 0 \\
+ & \vdots & \vdots & \vdots
+\end{tikzcd}
+\end{center}
+\end{definition}
+
+\begin{example}
+ [Mayer-Vietoris short exact sequence and its augmentation]
+ \label{ex:mayer_short_exact}
+ Let $X = U \cup V$ be an open cover.
+ For each $n$ consider
+ \begin{center}
+ \begin{tikzcd}[row sep=tiny]
+ C_n(U \cap V) \ar[r, hook] & C_n(U) \oplus C_n(V) \ar[r, two heads] & C_n(U + V) \\
+ c \ar[r, mapsto] & (c, -c) \\
+ & (c, d) \ar[r, mapsto] & c + d
+ \end{tikzcd}
+ \end{center}
+ One can easily see (by taking a suitable basis)
+ that the kernel of the latter map is exactly
+ the image of the first map.
+ This generates a short exact sequence
+ \[ 0 \to C_\bullet(U \cap V) \injto C_\bullet(U) \oplus C_\bullet(V)
+ \surjto C_\bullet(U + V) \to 0. \]
+\end{example}
+\begin{example}
+ [Augmented Mayer-Vietoris sequence]
+ We can \emph{augment} each of the chain complexes in the Mayer-Vietoris
+ sequence as well, by appending
+ \begin{center}
+ \begin{tikzcd}[row sep=large]
+ 0 \ar[r]
+ & C_0(U \cap V) \ar[r, hook] \ar[d, "\eps"', two heads]
+ & C_0(U) \oplus C_0(V) \ar[r, two heads] \ar[d, "\eps \oplus \eps"', two heads]
+ & C_0(U+V) \ar[r] \ar[d, "\eps"']
+ & 0 \\
+ 0 \ar[r]
+ & \ZZ \ar[r]
+ & \ZZ \oplus \ZZ \ar[r]
+ & \ZZ \ar[r]
+ & 0
+ \end{tikzcd}
+ \end{center}
+ to the bottom of the diagram.
+ In other words we modify the above into
+ \[ 0 \to \wt C_\bullet(U \cap V)
+ \injto \wt C_\bullet(U) \oplus \wt C_\bullet(V)
+ \surjto \wt C_\bullet(U + V) \to 0 \]
+ where $\wt C_\bullet$ is the chain complex defined in \Cref{def:augment}.
+\end{example}
+
+\begin{example}
+ [Relative chain short exact sequence]
+ \label{ex:rel_short_exact}
+ Since $C_n(X,A) \defeq C_n(X) / C_n(A)$, we have a short exact sequence
+ \[ 0 \to C_\bullet(A) \injto C_\bullet(X) \surjto C_\bullet(X,A) \to 0 \]
+ for every space $X$ and subspace $A$.
+ %The maps for each $n$ are the obvious ones:
+ %$C_n(A) \injto C_n(X)$ inclusion and $C_n(X) \surjto C_n(X,A)$ projection.
+ This can be augmented: we get
+ \[ 0 \to \wt C_\bullet(A) \injto \wt C_\bullet(X)
+ \surjto C_\bullet(X,A) \to 0 \]
+ by adding the final row
+ \begin{center}
+ \begin{tikzcd}
+ 0 \ar[r]
+ & C_0(A) \ar[r, hook] \ar[d, "\eps", two heads]
+ & C_0(X) \ar[r, two heads] \ar[d, "\eps", two heads]
+ & C_0(X,A) \ar[r]
+ & 0 \\
+ 0 \ar[r] & \ZZ \ar[r, "\id"'] & \ZZ \ar[r] & 0 \ar[r] & 0.
+ \end{tikzcd}
+ \end{center}
+\end{example}
+
+\section{The long exact sequence of homology groups}
+Consider a short exact sequence $0 \to A_\bullet \taking f B_\bullet \taking g C_\bullet \to 0$.
+Now, we know that we get induced maps of homology groups, i.e.\ we have
+\begin{center}
+\begin{tikzcd}
+ \vdots & \vdots & \vdots \\
+ H_{n+1}(A_\bullet) \ar[r, "f_\ast"] & H_{n+1}(B_\bullet) \ar[r, "g_\ast"] & H_{n+1}(C_\bullet) \\
+ H_{n}(A_\bullet) \ar[r, "f_\ast"] & H_{n}(B_\bullet) \ar[r, "g_\ast"] & H_{n}(C_\bullet) \\
+ H_{n-1}(A_\bullet) \ar[r, "f_\ast"] & H_{n-1}(B_\bullet) \ar[r, "g_\ast"] & H_{n-1}(C_\bullet) \\
+ \vdots & \vdots & \vdots \\
+\end{tikzcd}
+\end{center}
+But the theorem is that we can string these all together,
+taking each $H_{n+1}(C_\bullet)$ to $H_n(A_\bullet)$.
+
+\begin{theorem}[Short exact $\implies$ long exact]
+ \label{thm:long_exact}
+ Let $0 \to A_\bullet \taking f B_\bullet \taking g C_\bullet \to 0$
+ be \emph{any} short exact sequence of chain complexes we like.
+ Then there is an \emph{exact} sequence
+ \begin{center}
+ \begin{tikzcd}
+ & \dots \ar[r] & H_{n+2}(C_\bullet) \ar[lld, "\partial"'] \\
+ H_{n+1}(A_\bullet) \ar[r, "f_\ast"']
+ & H_{n+1}(B_\bullet) \ar[r, "g_\ast"]
+ & H_{n+1}(C_\bullet) \ar[lld, "\partial"'] \\
+ H_{n}(A_\bullet) \ar[r, "f_\ast"']
+ & H_{n}(B_\bullet) \ar[r, "g_\ast"]
+ & H_{n}(C_\bullet) \ar[lld, "\partial"'] \\
+ H_{n-1}(A_\bullet) \ar[r, "f_\ast"']
+ & H_{n-1}(B_\bullet) \ar[r, "g_\ast"]
+ & H_{n-1}(C_\bullet) \ar[lld, "\partial"'] \\
+ H_{n-2}(A_\bullet) \ar[r] & \dots
+ \end{tikzcd}
+ \end{center}
+ This is called a \vocab{long exact sequence} of homology groups.
+\end{theorem}
+\begin{proof}
+ A very long diagram chase, valid over any abelian category.
+ (Alternatively, it's actually possible to use the snake lemma twice.)
+\end{proof}
+
+\begin{remark}
+ \label{rem:leftdownleft}
+ The map $\partial : H_n(C_\bullet) \to H_{n-1}(A_\bullet)$ can be written explicitly as follows.
+ Recall that $H_n$ is ``cycles modulo boundaries'', and consider the sub-diagram
+ \begin{center}
+ \begin{tikzcd}
+ & B_n \ar[r, "g_n", two heads] \ar[d, "\partial_B"'] & C_n \ar[d, "\partial_C"] \\
+ A_{n-1} \ar[r, "f_{n-1}"', hook] & B_{n-1} \ar[r, "g_{n-1}"', two heads] & C_{n-1}
+ \end{tikzcd}
+ \end{center}
+ We need to take every cycle in $C_n$ to a cycle in $A_{n-1}$.
+ (Then we need to check a ton of ``well-defined'' issues,
+ but let's put that aside for now.)
+
+ Suppose $c \in C_n$ is a cycle (so $\partial_C(c) = 0$).
+ By surjectivity, there is a $b \in B_n$ with $g_n(b) = c$,
+ which maps down to $\partial_B(b)$.
+ Now, the image of $\partial_B(b)$ under $g_{n-1}$ is zero by commutativity of the square,
+ and so we can pull back under $f_{n-1}$ to get a unique element of $A_{n-1}$
+ (by exactness at $B_{n-1}$).
+
+ In summary: we go ``\emph{left, down, left}'' to go from $c$ to $a$:
+ \begin{center}
+ \begin{tikzcd}
+ & b \ar[r, mapsto, "g_n"] \ar[d, "\partial_B"', mapsto]
+ & \boxed{c} \ar[d, "\partial_C", mapsto] \\
+ \boxed{a} \ar[r, mapsto, "f_{n-1}"']
+ & \partial_B(b) \ar[r, "g_{n-1}"', mapsto]
+ & 0
+ \end{tikzcd}
+ \end{center}
+\end{remark}
+\begin{exercise}
+ Check quickly that the recovered $a$ is actually a cycle,
+ meaning $\partial_A(a) = 0$.
+ (You'll need another row, and the fact that $\partial_B^2 = 0$.)
+\end{exercise}
+
+The final word is that:
+\begin{moral}
+ Short exact sequences of chain complexes give
+ long exact sequences of homology groups.
+\end{moral}
+In particular, let us take the four examples given earlier.
+\begin{example}[Mayer-Vietoris long exact sequence, provisional version]
+ The Mayer-Vietoris ones give, for $X = U \cup V$ an open cover,
+ \[ \dots \to H_n(U \cap V) \to H_n(U) \oplus H_n(V) \to H_n(U+V) \to H_{n-1}(U \cap V) \to \dots. \]
+ and its reduced version
+ \[ \dots \to \wt H_n(U \cap V) \to \wt H_n(U) \oplus \wt H_n(V)
+ \to \wt H_n(U+V) \to \wt H_{n-1}(U \cap V) \to \dots. \]
+\end{example}
+This version is ``provisional'' because in the next section
+we will replace $H_n(U+V)$ and $\wt H_n(U+V)$ with something better.
+As for the relative homology sequences, we have:
+\begin{theorem}[Long exact sequence for relative homology]
+ \label{thm:long_exact_rel}
+ Let $X$ be a space, and let $A \subseteq X$ be a subspace.
+ There are long exact sequences
+ \[ \dots \to H_n(A) \to H_n(X) \to H_n(X,A) \to H_{n-1}(A) \to \dots. \]
+ and
+ \[ \dots \to \wt H_n(A) \to \wt H_n(X) \to H_n(X,A) \to \wt H_{n-1}(A) \to \dots. \]
+\end{theorem}
+The exactness of these sequences will give \textbf{tons of information}
+about $H_n(X)$ if only we knew something about what $H_n(U+V)$
+or $H_n(X,A)$ looked like. This is the purpose of the next chapter.
+
+\section{The Mayer-Vietoris sequence}
+\prototype{The computation of $H_n(S^m)$ by splitting $S^m$ into two hemispheres.}
+
+Now that we have done so much algebra, we need to invoke some geometry.
+There are two major geometric results in the Napkin.
+One is the excision theorem, which we discuss next chapter.
+The other we present here, which will let us take advantage of the
+Mayer-Vietoris sequence.
+The proofs are somewhat involved and are thus omitted;
+see \cite{ref:hatcher} for details.
+
+The first theorem is that the notation $H_n(U+V)$ that we have kept until now
+is redundant, and can be replaced with just $H_n(X)$:
+\begin{theorem}[Open cover homology theorem]
+ \label{thm:open_cover_homology}
+ Consider the inclusion $\iota : C_\bullet(U+V) \injto C_\bullet(X)$.
+ Then $\iota$ induces an isomorphism
+ \[ H_n(U+V) \cong H_n(X). \]
+ % Then there exists a $\rho : C_\bullet(X) \to C_\bullet(U+V)$ such that
+ % $\rho\iota$ and $\iota\rho$ are chain homotopic to the identities.
+ % Thus $\iota$ induces an isomorphism
+\end{theorem}
+\begin{remark}
+ In fact, this is true for any open cover (even uncountable),
	not just covers $X = U \cup V$ by two open sets.
+ But we only state the special case with two open sets,
+ because this is what is needed for \Cref{ex:mayer_short_exact}.
+\end{remark}
+So, \Cref{ex:mayer_short_exact} together with the above theorem implies,
+after replacing all the $H_n(U+V)$'s with $H_n(X)$'s:
+\begin{theorem}[Mayer-Vietoris long exact sequence]
+ If $X = U \cup V$ is an open cover, then we have long exact sequences
+ \[ \dots \to H_n(U \cap V) \to H_n(U) \oplus H_n(V)
+ \to H_n(X) \to H_{n-1}(U \cap V) \to \dots. \]
+ and
+ \[ \dots \to \wt H_n(U \cap V) \to \wt H_n(U) \oplus \wt H_n(V) \to
+ \wt H_n(X) \to \wt H_{n-1}(U \cap V) \to \dots. \]
+\end{theorem}
+
+At long last, we can compute the homology groups of the spheres.
+\begin{theorem}[The homology groups of $S^m$]
+ \label{thm:reduced_homology_sphere}
+ For integers $m$ and $n$,
+ \[ \wt H_n(S^m) \cong
+ \begin{cases}
+ \ZZ & n=m \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
	The generator of $\wt H_n(S^n)$ is an $n$-cell which covers $S^n$
+ exactly once (for example, the generator for $\wt H_1(S^1)$
+ is a loop which wraps around $S^1$ once).
+\end{theorem}
+\begin{proof}
+ This one's fun, so I'll only spoil the case $m=1$, and leave the rest to you.
+ Decompose the circle $S^1$ into two arcs $U$ and $V$, as shown:
+ \begin{center}
+ \begin{asy}
+ size(4cm);
+ draw(unitcircle);
+ label("$S^1$", dir(45), dir(45));
+ real R = 0.1;
+ draw(arc(origin,1-R,-100,100), red+1);
+ label("$V$", (1-R)*dir(0), dir(180), red);
+ draw(arc(origin,1+R,80,280), blue+1);
+ label("$U$", (1+R)*dir(180), dir(180), blue);
+ \end{asy}
+ \end{center}
+ Each of $U$ and $V$ is contractible, so all their reduced homology groups vanish.
+ Moreover, $U \cap V$ is homotopy equivalent to two points,
+ hence
+ \[ \wt H_n(U \cap V) \cong
+ \begin{cases}
+ \ZZ & n = 0 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
	Now consider again the segment of the long exact sequence
+ \[
+ \dots \to
+ \underbrace{\wt H_n(U) \oplus \wt H_n(V)}_{= 0} \to
+ \wt H_n(S^1) \taking{\partial} \wt H_{n-1}(U \cap V) \to
+ \underbrace{\wt H_{n-1}(U) \oplus \wt H_{n-1}(V)}_{=0} \to \dots.
+ \]
+ From this we derive that $\wt H_n(S^1)$ is $\ZZ$ for $n=1$ and $0$ elsewhere.
+
+ It remains to analyze the generators of $\wt H_1(S^1)$.
+ Note that the isomorphism was given by the connecting homomorphism $\partial$,
+ which is given by a ``left, down, left'' procedure (\Cref{rem:leftdownleft})
+ in the diagram
+ \begin{center}
+ \begin{tikzcd}
+ & C_1(U) \oplus C_1(V) \ar[r] \ar[d, "\partial \oplus \partial"] & C_1(U+V) \\
+ C_0(U \cap V) \ar[r] & C_0(U) \oplus C_0(V)
+ \end{tikzcd}
+ \end{center}
+ Mark the points $a$ and $b$ as shown in the two disjoint paths of $U \cap V$.
+ \begin{center}
+ \begin{asy}
+ size(3cm);
+ label("$S^1$", dir(45), dir(45));
+ real R = 0.1;
+ /*
+ draw(arc(origin,1-R,-100,100), red+1);
+ label("$V$", (1-R)*dir(0), dir(180), red);
+ draw(arc(origin,1+R,80,280), blue+1);
+ label("$U$", (1+R)*dir(180), dir(180), blue);
+ */
+ dot("$a$", dir(90), dir(90));
+ dot("$b$", dir(-90), dir(-90));
+ draw(arc(origin,1,90,270), EndArrow, Margins);
+ draw(arc(origin,1,90,-90), EndArrow, Margins);
+ label("$c$", dir(180), dir(180));
+ label("$d$", dir(0), dir(0));
+ \end{asy}
+ \end{center}
	Then $a-b$ is a cycle which represents a generator of $\wt H_0(U \cap V)$.
	We can find a pre-image under $\partial$ as follows:
	letting $c$ and $d$ be the paths from $b$ to $a$, with $c$ contained
	in $U$, and $d$ contained in $V$, the diagram completes as
+ \begin{center}
+ \begin{tikzcd}
			& (c,-d) \ar[r, mapsto] \ar[d, mapsto] & c-d \\
			a-b \ar[r, mapsto] & (a-b, b-a)
+ \end{tikzcd}
+ \end{center}
	In other words $\partial(c-d) = a-b$, so $c-d$ is a generator for $\wt H_1(S^1)$.
+
	Thus we wish to show that $c-d$ is (in $\wt H_1(S^1)$) equivalent to the loop $\gamma$
+ wrapping around $S^1$ once, counterclockwise.
+ This was illustrated in \Cref{ex:S1_c_minus_d}.
+\end{proof}
+
+Thus, the key idea in Mayer-Vietoris is that
+\begin{moral}
+ Mayer-Vietoris lets us compute $H_n(X)$
+ by splitting $X$ into two open sets.
+\end{moral}
+
+Here are some more examples.
+\begin{proposition}[The homology groups of the figure eight]
+ Let $X = S^1 \vee S^1$ be the figure eight.
+ Then
+ \[
+ \wt H_n(X) \cong
+ \begin{cases}
+ \ZZ^{\oplus 2} & n = 1 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ The generators for $\wt H_1(X)$ are the two loops of the figure eight.
+\end{proposition}
+\begin{proof}
+ Again, for simplicity we work with reduced homology groups.
+ Let $U$ be the ``left'' half of the figure eight plus a little bit of the right,
+ as shown below.
+ \begin{center}
+ \begin{asy}
+ size(4cm);
+ draw(unitcircle);
+ draw(CR(2*dir(180),1), blue+2);
+ draw(arc(origin,1,135,225), blue+2);
+ label("$U$", 2*dir(180)+dir(135), dir(135), blue);
+ label("$S^1 \vee S^1$", dir(15), dir(15));
+ \end{asy}
+ \end{center}
+ The set $V$ is defined symmetrically.
+ In this case $U \cap V$ is contractible, while each of $U$ and $V$
+ is homotopic to $S^1$.
+
+ Thus, we can read a segment of the long exact sequence as
+ \[
+ \dots \to
+ \underbrace{\wt H_n(U \cap V)}_{=0}
+ \to \wt H_n(U) \oplus \wt H_n(V) \to \wt H_n(X) \to
+ \underbrace{\wt H_{n-1}(U \cap V)}_{=0} \to \dots.
+ \]
+ So we get that $\wt H_n(X) \cong \wt H_n(S^1) \oplus \wt H_n(S^1)$.
+ The claim about the generators follows from the fact that,
+ according to the isomorphism above,
+ the generators of $\wt H_n(X)$ are the generators of $\wt H_n(U)$
+ and $\wt H_n(V)$, which we described geometrically
+ in the last theorem.
+\end{proof}
+
+Up until now, we have been very fortunate that we have always been able to make
+certain parts of the space contractible.
+This is not always the case, and in the next example we will have to
+actually understand the maps in question to complete the solution.
+
+\begin{proposition}
+ [Homology groups of the torus]
+ Let $X = S^1 \times S^1$ be the torus.
+ Then
+ \[
+ \wt H_n(X)
+ =
+ \begin{cases}
+ \ZZ^{\oplus 2} & n = 1 \\
+ \ZZ & n = 2 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+\end{proposition}
+\begin{proof}
+ To make our diagram look good on 2D paper,
+ we'll represent the torus as a square with its edges identified,
+ though three-dimensionally the picture makes sense as well.
+ Consider $U$ (shaded light orange) and $V$ (shaded green) as shown.
+ (Note that $V$ is connected due to the identification of the left and right (blue) edges,
+ even if it doesn't look connected in the picture.)
+ \begin{center}
+ \begin{asy}
+ pair A = (0,0);
+ pair B = (1,0);
+ pair C = (1,1);
+ pair D = (0,1);
+ draw(A--B, red+1.5, MidArrow);
+ draw(B--C, blue+1.5, MidArrow);
+ draw(D--C, red+1.5, MidArrow);
+ draw(A--D, blue+1.5, MidArrow);
+ fill(box((0.2,0),(0.8,1)), orange+opacity(0.2));
+ fill(box(A,(0.3,1)), heavygreen+opacity(0.2));
+ fill(box((0.7,0),C), heavygreen+opacity(0.2));
+ draw( (0.3,0)--(0.3,1), heavygreen+dashed+1.2);
+ draw( (0.7,0)--(0.7,1), heavygreen+dashed+1.2);
+ draw( (0.2,0)--(0.2,1), orange+dashed+1.2);
+ draw( (0.8,0)--(0.8,1), orange+dashed+1.2);
+
+ label("$U$", (0.5, 0.5));
+ label("$V$", (0.1, 0.8));
+ label("$V$", (0.9, 0.8));
+ \end{asy}
+ \end{center}
+ In the three-dimensional picture, $U$ and $V$ are two cylinders which together give the torus.
+ This time, $U$ and $V$ are each homotopic to $S^1$, and the intersection $U \cap V$
+ is the disjoint union of two circles: thus $\wt H_1(U \cap V) \cong \ZZ \oplus \ZZ$,
+ and $H_0(U \cap V) \cong \ZZ^{\oplus 2} \implies \wt H_0(U \cap V) \cong \ZZ$.
+
+ For $n \ge 3$, we have
+ \[
+ \dots \to
+ \underbrace{\wt H_n(U \cap V)}_{=0}
+ \to \wt H_n(U) \oplus \wt H_n(V) \to \wt H_n(X) \to
+ \underbrace{\wt H_{n-1}(U \cap V)}_{=0} \to \dots.
+ \]
+ and so $H_n(X) \cong 0$ for $n \ge 3$.
+ Also, we have $H_0(X) \cong \ZZ$ since $X$ is path-connected.
+ So it remains to compute $H_2(X)$ and $H_1(X)$.
+
+ Let's find $H_2(X)$ first.
+ We first consider the segment
+ \[
+ \dots \to
+ \underbrace{\wt H_2(U) \oplus \wt H_2(V)}_{=0} \to \wt H_2(X) \xhookrightarrow{\delta}
+ \underbrace{\wt H_1(U \cap V)}_{\cong \ZZ \oplus \ZZ} \xrightarrow{\phi}
+ \underbrace{\wt H_1(U) \oplus \wt H_1(V)}_{\cong \ZZ \oplus \ZZ} \to \dots
+ \]
+ Unfortunately, this time it's not immediately clear what $\wt H_2(X)$ is, because
+ we only have one zero at the left.
+ To compute it, we have to actually figure out what the maps $\delta$ and $\phi$ look like.
+ Note that, as we'll see, $\phi$ isn't an isomorphism even though the groups are isomorphic.
+
+ Because of that zero term, the connecting map $\delta$ is injective.
+ So $\wt H_2(X)$ is isomorphic to the image of $\delta$, which by exactness is
+ exactly the kernel of the next map $\phi$.
+ To figure out what $\ker \phi$ is, we have to think back to how the map
+ $C_\bullet(U \cap V) \to C_\bullet(U) \oplus C_\bullet(V)$ was constructed:
+ it was $c \mapsto (c, -c)$.
+ So the induced map on homology groups is actually what you would guess:
+ a $1$-cycle $z$ in $\wt H_1(U \cap V)$ gets sent to $(z, -z)$ in $\wt H_1(U) \oplus \wt H_1(V)$.
+
+ In particular, consider the two generators $z_1$ and $z_2$ of
+ $\wt H_1(U \cap V) = \ZZ \oplus \ZZ$,
+ i.e.\ one cycle in each connected component of $U \cap V$.
+ (To clarify: $U \cap V$ consists of two ``wristbands'';
+ $z_i$ wraps around the $i$th one once.)
+ Moreover, let $\alpha_U$ denote a generator of $\wt H_1(U) \cong \ZZ$,
+ and $\alpha_V$ a generator of $\wt H_1(V) \cong \ZZ$.
+ Then we have that
+ \[ z_1 \mapsto (\alpha_U, -\alpha_V) \qquad\text{and}\qquad z_2 \mapsto (\alpha_U, -\alpha_V). \]
+ (The signs may differ on which direction you pick for the generators;
+ note that $\ZZ$ has two possible generators.)
+ We can even format this as a matrix:
+ \[ \phi = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}. \]
+ And we observe $\phi(z_1 - z_2) = 0$, meaning this map has nontrivial kernel!
+ That is, \[ \ker\phi = \left< z_1 - z_2 \right> \cong \ZZ. \]
+ Thus, $\wt H_2(X) \cong \img \delta \cong \ker \phi \cong \ZZ$.
+ We'll also note that $\img \phi$ is the subgroup generated by $(\alpha_U, -\alpha_V)$;
+ in particular $\img\phi \cong \ZZ$, and the quotient of $\wt H_1(U) \oplus \wt H_1(V)$ by $\img\phi$ is $\ZZ$ too.
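+
+ To spell out the claim about the quotient (a quick sketch, identifying
+ $\wt H_1(U) \oplus \wt H_1(V)$ with $\ZZ \oplus \ZZ$ via $\alpha_U$ and $\alpha_V$,
+ so that $\img\phi = \left< (1,-1) \right>$):
+ \[ \left( \ZZ \oplus \ZZ \right) / \left< (1,-1) \right> \cong \ZZ
+ \quad\text{via}\quad (x,y) \mapsto x+y, \]
+ since $(x,y) - x \cdot (1,-1) = (0, x+y)$ shows every coset has a representative
+ of the form $(0,t)$, and $x+y = 0$ exactly when $(x,y)$ is a multiple of $(1,-1)$.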
+
+ The situation is similar with $\wt H_1(X)$: this time, we have
+ \[
+ \dots
+ %\to \underbrace{\wt H_1(U \cap V)}_{\cong \ZZ \oplus \ZZ}
+ \xrightarrow{\phi} \underbrace{\wt H_1(U) \oplus \wt H_1(V)}_{\cong \ZZ \oplus \ZZ}
+ \overset{\psi}{\to} \wt H_1(X) \overset\partial\surjto
+ \underbrace{\wt H_0(U \cap V)}_{\cong \ZZ}
+ \to \underbrace{\wt H_0(U) \oplus \wt H_0(V)}_{=0} \to \dots
+ \]
+ and so we know that the connecting map $\partial$ is surjective,
+ hence $\img \partial \cong \ZZ$.
+ Now, we also have
+ \begin{align*}
+ \ker \partial \cong \img \psi &\cong \left( \wt H_1(U) \oplus \wt H_1(V) \right) / \ker \psi \\
+ &\cong \left( \wt H_1(U) \oplus \wt H_1(V) \right) / \img \phi
+ \cong \ZZ
+ \end{align*}
+ by what we knew about $\img \phi$ already.
+ To finish off we need some algebraic tricks. The first is \Cref{prop:break_exact},
+ which gives us a short exact sequence
+ \[
+ 0 \to \underbrace{\ker\partial}_{\cong \img\psi \cong \ZZ}
+ \injto \wt H_1(X)
+ \surjto \underbrace{\img\partial}_{\cong \ZZ} \to 0.
+ \]
+ You should satisfy yourself that $\wt H_1(X) \cong \ZZ \oplus \ZZ$ is the
+ only possibility, but we'll prove this rigorously with \Cref{lem:split_exact}.
+\end{proof}
+
+Note that this example has a different flavor from the earlier ones,
+because we had to figure out what the maps in the long exact sequence actually were
+to even compute the groups.
+In principle, you could also figure out all the isomorphisms in the previous proof
+and explicitly compute the generators of $\wt H_1(S^1 \times S^1)$,
+but to avoid getting bogged down in detail I won't do so here.
+
+Finally, to fully justify the last step, we present:
+\begin{lemma}[Splitting lemma]
+ \label{lem:split_exact}
+ For a short exact sequence $0 \to A \taking f B \taking g C \to 0$
+ of abelian groups, the following are equivalent:
+ \begin{enumerate}[(a)]
+ \ii There exists $p : B \to A$ such that $A \taking f B \taking p A$ is the identity.
+ \ii There exists $s : C \to B$ such that $C \taking s B \taking g C$ is the identity.
+ \ii There is an isomorphism from $B$ to $A \oplus C$ such that the diagram
+ \begin{center}
+ \begin{tikzcd}
+ && B \ar[rd, "g", two heads] \ar[dd, leftrightarrow, "\cong"] \\
+ 0 \ar[r] & A \ar[ru, hook, "f"] \ar[rd, hook] && C \ar[r] & 0 \\
+ && A \oplus C \ar[ru, two heads]
+ \end{tikzcd}
+ \end{center}
+ commutes. (The maps attached to $A \oplus C$ are the obvious ones.)
+ \end{enumerate}
+ In particular, (b) holds anytime $C$ is free.
+\end{lemma}
+In these cases we say the short exact sequence \vocab{splits}. The point is that
+\begin{moral}
+ An exact sequence which splits lets us obtain $B$ given $A$ and $C$.
+\end{moral}
+In particular, for $C = \ZZ$ or any free abelian group,
+condition (b) is necessarily true.
+So, once we obtained the short exact sequence $0 \to \ZZ \to \wt H_1(X) \to \ZZ \to 0$,
+we were done.
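+To sketch why freeness of $C$ guarantees condition (b):
+since $g$ is surjective, each basis element $c_i$ of $C$ has some
+preimage $b_i \in B$ with $g(b_i) = c_i$; because $C$ is free, setting
+\[ s(c_i) \defeq b_i \]
+and extending linearly gives a well-defined homomorphism $s \colon C \to B$,
+and $g(s(c_i)) = c_i$ on the basis forces $g \circ s = \id_C$.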
+\begin{remark}
+ Unfortunately, not all exact sequences split:
+ An example of a short exact sequence which doesn't split is
+ \[ 0 \to \Zc 2 \xhookrightarrow{\times 2} \Zc 4 \surjto \Zc 2 \to 0 \]
+ since it is not true that $\Zc 4 \cong \Zc2 \oplus \Zc 2$.
+\end{remark}
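+One quick way to see $\Zc 4 \not\cong \Zc2 \oplus \Zc2$ is to compare orders of elements:
+\[ 2 \cdot (a,b) = (0,0) \quad \text{for every } (a,b) \in \Zc2 \oplus \Zc2,
+ \qquad\text{while } 1 \in \Zc 4 \text{ has order } 4. \]
+Equivalently, condition (b) fails directly: a section $s \colon \Zc2 \to \Zc4$
+would need $s(1)$ odd, but both odd elements of $\Zc 4$ have order $4$,
+while $s(1)$ must have order dividing $2$.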
+\begin{remark}
+ The splitting lemma is true in any abelian category.
+ The ``direct sum'' is the colimit of the two objects $A$ and $C$.
+\end{remark}
+
+\section\problemhead
+\begin{problem}
+ Complete the proof of \Cref{thm:reduced_homology_sphere},
+ i.e.\ compute $H_n(S^m)$ for all $m$ and $n$.
+ (Try doing $m=2$ first, and you'll see how to proceed.)
+ \begin{hint}
+ Induction on $m$, using hemispheres.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Compute the reduced homology groups
+ of $\RR^n$ with $p \ge 1$ points removed.
+ \begin{hint}
+ One strategy is induction on $p$, with base case $p=1$.
+ Another strategy is to let $U$ be the desired space and let $V$
+ be the union of $p$ non-intersecting balls.
+ \end{hint}
+ \begin{sol}
+ The answer is $\wt H_{n-1}(X) \cong \ZZ^{\oplus p}$,
+ with all other groups vanishing.
+ For $p=1$, $\RR^n - \{\ast\}$ is homotopy equivalent to $S^{n-1}$, so we're done.
+ For all other $p$, draw a hyperplane dividing the $p$ points into two halves
+ with $a$ points on one side and $b$ points on the other (so $a+b=p$).
+ Set $U$ and $V$ and use induction.
+
+ Alternatively, let $U$ be the desired space and let $V$
+ be the union of $p$ disjoint balls, one around every point.
+ Then $U \cup V = \RR^n$ has all reduced homology groups trivial.
+ From the Mayer-Vietoris sequence we can read $\wt H_k(U \cap V) \cong \wt H_k(U) \oplus \wt H_k(V)$.
+ Then $U \cap V$ is $p$ punctured balls, which are each the same as $S^{n-1}$.
+ One can read the conclusion from here.
+ \end{sol}
+\end{problem}
+
+\begin{sproblem}
+ Let $n \ge 1$ and $k \ge 0$ be integers.
+ Compute $H_k(\RR^n, \RR^n \setminus \{0\})$.
+ \begin{hint}
+ Use \Cref{thm:long_exact_rel}.
+ Note that $\RR^n \setminus \{0\}$ is homotopy
+ equivalent to $S^{n-1}$.
+ \end{hint}
+ \begin{sol}
+ It is $\ZZ$ for $k=n$ and $0$ otherwise.
+ \end{sol}
+\end{sproblem}
+
+\begin{problem}
+ [Nine lemma]
+ Consider a commutative diagram
+ \begin{center}
+ \begin{tikzcd}
+ & 0 \ar[d] & 0 \ar[d] & 0 \ar[d] \\
+ 0 \ar[r] & A_1 \ar[r] \ar[d] & B_1 \ar[r] \ar[d] & C_1 \ar[r] \ar[d] & 0 \\
+ 0 \ar[r] & A_2 \ar[r] \ar[d] & B_2 \ar[r] \ar[d] & C_2 \ar[r] \ar[d] & 0 \\
+ 0 \ar[r] & A_3 \ar[r] \ar[d] & B_3 \ar[r] \ar[d] & C_3 \ar[r] \ar[d] & 0 \\
+ & 0 & 0 & 0 &
+ \end{tikzcd}
+ \end{center}
+ and assume that all rows are exact,
+ and two of the columns are exact.
+ Show that the third column is exact as well.
+ \begin{hint}
+ $0 \to A_\bullet \to B_\bullet \to C_\bullet \to 0$
+ is a short exact sequence of chain complexes.
+ Write out the corresponding long exact sequence.
+ Nearly all terms will vanish.
+ \end{hint}
+\end{problem}
+
+\begin{sproblem}[Klein bottle]
+ \gim
+ Show that the reduced homology groups of the Klein bottle $K$ are given by
+ \[
+ \wt H_n(K) =
+ \begin{cases}
+ \ZZ \oplus \Zc 2 & n = 1 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ \begin{hint}
+ It's possible to use two cylinders for $U$ and $V$, as with the torus.
+ This time the matrix is $\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$
+ or some variant; in particular, it's injective, so $\wt H_2(K) = 0$.
+ \end{hint}
+\end{sproblem}
+
+\begin{sproblem}
+ [Triple long exact sequence]
+ \label{prob:triple_long_exact}
+ Let $A \subseteq B \subseteq X$ be subspaces.
+ Show that there is a long exact sequence
+ \[
+ \dots \to H_n(B,A) \to H_n(X,A)
+ \to H_n(X,B) \to H_{n-1}(B,A) \to \dots.
+ \]
+ \begin{hint}
+ Find a new short exact sequence
+ to apply \Cref{thm:long_exact} to.
+ \end{hint}
+ \begin{sol}
+ Use the short exact sequence
+ \[ 0 \to C_\bullet(B,A) \to C_\bullet(X,A) \to C_\bullet(X,B) \to 0 \]
+ of chain complexes.
+ \end{sol}
+\end{sproblem}
diff --git a/books/napkin/manifolds.tex b/books/napkin/manifolds.tex
new file mode 100644
index 0000000000000000000000000000000000000000..a625e4cd4ff89d4109bad631accd9fd83b9644ed
--- /dev/null
+++ b/books/napkin/manifolds.tex
@@ -0,0 +1,572 @@
+\chapter{A bit of manifolds}
+Last chapter, we stated Stokes' theorem for cells.
+It turns out there is a much larger class of spaces,
+the so-called \emph{smooth manifolds}, for which this makes sense.
+
+Unfortunately, the definition of a smooth manifold is \emph{complete garbage},
+and so by the time I am done defining differential forms and orientations,
+I will be too lazy to actually define what the integral on it is,
+and just wave my hands and state Stokes' theorem.
+
+\section{Topological manifolds}
+\prototype{$S^2$: ``the Earth looks flat''.}
+
+Long ago, people thought the Earth was flat,
+i.e.\ homeomorphic to a plane, and in particular they thought that
+$\pi_2(\text{Earth}) = 0$.
+But in fact, as most of us know, the Earth is actually a sphere,
+which is not contractible and in particular $\pi_2(\text{Earth}) \cong \ZZ$.
+This observation underlies the definition of a manifold:
+\begin{moral}
+ An $n$-manifold is a space which locally looks like $\RR^n$.
+\end{moral}
+Actually there are two ways to think about a topological manifold $M$:
+\begin{itemize}
+ \ii ``Locally'': at every point $p \in M$,
+ some open neighborhood of $p$ looks like an open set of $\RR^n$.
+ For example, to someone standing on the surface of the Earth,
+ the Earth looks much like $\RR^2$.
+
+ \ii ``Globally'': there exists an open cover of $M$
+ by open sets $\{U_i\}_i$ (possibly infinite) such that each $U_i$
+ is homeomorphic to some open subset of $\RR^n$.
+ For example, from outer space, the Earth can be covered
+ by two hemispherical pancakes.
+\end{itemize}
+\begin{ques}
+ Check that these are equivalent.
+\end{ques}
+While the first one is the best motivation for examples,
+the second one is easier to use formally.
+
+\begin{definition}
+ A \vocab{topological $n$-manifold} $M$ is a Hausdorff space
+ with an open cover $\{U_i\}$ of sets
+ homeomorphic to subsets of $\RR^n$,
+ say by homeomorphisms
+ \[ \phi_i : U_i \taking\cong E_i \subseteq \RR^n \]
+ where each $E_i$ is an open subset of $\RR^n$.
+ Each $\phi_i : U_i \to E_i$ is called a \vocab{chart},
+ and together they form a so-called \vocab{atlas}.
+\end{definition}
+\begin{remark}
+ Here ``$E$'' stands for ``Euclidean''.
+ I think this notation is not standard; usually
+ people just write $\phi_i(U_i)$ instead.
+\end{remark}
+\begin{remark}
+ This definition is nice because it doesn't depend on embeddings:
+ a manifold is an \emph{intrinsic} space $M$,
+ rather than a subset of $\RR^N$ for some $N$.
+ Analogy: an abstract group $G$ is an intrinsic object
+ rather than a subgroup of $S_n$.
+\end{remark}
+
+\begin{example}[An atlas on $S^1$]
+Here is a picture of an atlas for $S^1$, with two open sets.
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ draw(unitcircle, black+2);
+ label("$S^1$", dir(45), dir(45));
+ real R = 0.1;
+ draw(arc(origin,1-R,-100,100), red);
+ label("$U_2$", (1-R)*dir(0), dir(180), red);
+ draw(arc(origin,1+R,80,280), blue);
+ label("$U_1$", (1+R)*dir(180), dir(180), blue);
+ dotfactor *= 2;
+ pair A = opendot( (-3, -2), blue );
+ pair B = opendot( (-1, -2), blue );
+ label("$E_1$", midpoint(A--B), dir(-90), blue);
+ draw(A--B, blue, Margins);
+ draw( (-1.25, -0.2)--(-2,-2), blue, EndArrow, Margins );
+ label("$\phi_1$", (-1.675, -1.1), dir(180), blue);
+
+ pair C = opendot( (1, -2), red );
+ pair D = opendot( (3, -2), red );
+ label("$E_2$", midpoint(C--D), dir(-90), red);
+ draw(C--D, red, Margins);
+ draw( (1.25, -0.2)--(2,-2), red, EndArrow, Margins );
+ label("$\phi_2$", (1.672, -1.1), dir(0), red);
+ \end{asy}
+\end{center}
+\end{example}
+
+\begin{ques}
+ Where do you think the words ``chart'' and ``atlas'' come from?
+\end{ques}
+
+\begin{example}
+ [Some examples of topological manifolds]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii As discussed at length,
+ the sphere $S^2$ is a $2$-manifold: every point in the sphere has a
+ small open neighborhood that looks like an open disk in $\RR^2$.
+ One can cover the Earth with just two hemispheres,
+ and each hemisphere is homeomorphic to a disk.
+
+ \ii The circle $S^1$ is a $1$-manifold; every point has an
+ open neighborhood that looks like an open interval.
+
+ \ii The torus, Klein bottle, $\RP^2$ are all $2$-manifolds.
+
+ \ii $\RR^n$ is trivially a manifold, as are its open sets.
+ \end{enumerate}
+ All these spaces are compact except $\RR^n$.
+
+ A non-example of a manifold is $D^n$, because it has a \emph{boundary};
+ points on the boundary do not have open neighborhoods
+ that look Euclidean.
+\end{example}
+
+\section{Smooth manifolds}
+\prototype{All the topological manifolds.}
+
+Let $M$ be a topological $n$-manifold with atlas
+$\{U_i \taking{\phi_i} E_i\}$.
+\begin{definition}
+ For any $i$, $j$ such that $U_i \cap U_j \neq \varnothing$,
+ the \vocab{transition map} $\phi_{ij}$ is the composed map
+ \[
+ \phi_{ij} : E_i \cap \phi_i\im(U_i \cap U_j)
+ \taking{\phi_i\inv}
+ U_i \cap U_j
+ \taking{\phi_j} E_j \cap \phi_j\im(U_i \cap U_j).
+ \]
+\end{definition}
+Sorry for the dense notation; let me explain.
+The intersections with the images $\phi_i\im(U_i \cap U_j)$
+and $\phi_j\im(U_i \cap U_j)$ are a notational annoyance
+needed to make the map well-defined and a homeomorphism.
+The transition map is just the natural way to go from $E_i \to E_j$,
+restricted to overlaps.
+Picture below, where the intersections are just the green portions
+of each $E_1$ and $E_2$:
+
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ draw(unitcircle, black);
+ draw(arc(origin, 1, 80, 100), heavygreen+2);
+ draw(arc(origin, 1, -100, -80), heavygreen+2);
+ label("$S^1$", dir(45), dir(45));
+ real R = 0.1;
+ draw(arc(origin,1-R,-100,100), red);
+ label("$U_2$", (1-R)*dir(0), dir(180), red);
+ draw(arc(origin,1+R,80,280), blue);
+ label("$U_1$", (1+R)*dir(180), dir(180), blue);
+ dotfactor *= 2;
+ pair A = opendot( (-3, -2), blue );
+ pair B = opendot( (-1, -2), blue );
+ label("$E_1$", midpoint(A--B), dir(-90), blue);
+ draw(A--B, blue, Margins);
+ draw( (-1.25, -0.2)--(-2,-2), blue, EndArrow, Margins );
+ label("$\phi_1$", (-1.675, -1.1), dir(180), blue);
+
+ pair C = opendot( (1, -2), red );
+ pair D = opendot( (3, -2), red );
+ label("$E_2$", midpoint(C--D), dir(-90), red);
+ draw(C--D, red, Margins);
+ draw( (1.25, -0.2)--(2,-2), red, EndArrow, Margins );
+ label("$\phi_2$", (1.672, -1.1), dir(0), red);
+
+ draw(A--(0.7*A+0.3*B), heavygreen+2, Margins);
+ draw(B--(0.7*B+0.3*A), heavygreen+2, Margins);
+ draw(C--(0.7*C+0.3*D), heavygreen+2, Margins);
+ draw(D--(0.7*D+0.3*C), heavygreen+2, Margins);
+ draw(B--C, heavygreen, EndArrow, Margin(4,4));
+ label("$\phi_{12}$", B--C, dir(90), heavygreen);
+ \end{asy}
+\end{center}
+
+
+We want to add enough structure so that we can use differential forms.
+
+\begin{definition}
+ We say $M$ is a \vocab{smooth manifold}
+ if all its transition maps are smooth.
+\end{definition}
+
+This definition makes sense, because we know what it means
+for a map between two open sets of $\RR^n$ to be differentiable.
+
+With smooth manifolds we can try to port over definitions that
+we built for $\RR^n$ onto our manifolds.
+So in general, all definitions involving smooth manifolds will reduce to
+something on each of the coordinate charts, with a compatibility condition.
+
+As an example, here is the definition of a ``smooth map'':
+\begin{definition}
+ \begin{enumerate}[(a)]
+ \ii Let $M$ be a smooth manifold.
+ A continuous function $f \colon M \to \RR$ is called \vocab{smooth}
+ if the composition
+ \[ E_i \taking{\phi_i\inv} U_i \injto M \taking f \RR \]
+ is smooth as a function $E_i \to \RR$.
+ \ii Let $M$ and $N$ be smooth
+ with atlases $\{ U_i^M \taking{\phi_i} E_i^M \}_i$
+ and $\{ U_j^N \taking{\phi_j} E_j^N \}_j$.
+ A map $f \colon M \to N$ is \vocab{smooth} if for every $i$ and $j$,
+ the composed map
+ \[ E_i \taking{\phi_i\inv} U_i \injto M
+ \taking f N \surjto U_j \taking{\phi_j} E_j \]
+ is smooth, as a function $E_i \to E_j$.
+ \end{enumerate}
+\end{definition}
+
+\section{Regular value theorem}
+\prototype{$x^2+y^2=1$ is a circle!}
+Despite all that I've written about general manifolds,
+it would be sort of mean if I left you here
+because I have not really told you how to actually construct
+manifolds in practice, even though we know the circle
+$x^2+y^2=1$ is a great example of a one-dimensional
+manifold embedded in $\RR^2$.
+
+\begin{theorem}
+ [Regular value theorem]
+ Let $V$ be an $n$-dimensional real normed vector
+ space, let $U \subseteq V$ be open
+ and let $f_1, \dots, f_m \colon U \to \RR$
+ be smooth functions.
+ Let $M$ be the set of points $p \in U$
+ such that $f_1(p) = \dots = f_m(p) = 0$.
+
+ Assume $M$ is nonempty and that the map
+ \[ V \to \RR^m \quad\text{by}\quad
+ v \mapsto \left( (Df_1)_p(v), \dots, (Df_m)_p(v) \right) \]
+ has rank $m$, for every point $p \in M$.
+ Then $M$ is a manifold of dimension $n-m$.
+\end{theorem}
+For a proof, see \cite[Theorem 6.3]{ref:manifolds}.
+
+One very common special case is to take $m = 1$ above.
+\begin{corollary}
+ [Level hypersurfaces]
+ Let $V$ be an $n$-dimensional real normed vector
+ space, let $U \subseteq V$ be open
+ and let $f \colon U \to \RR$ be smooth.
+ Let $M$ be the set of points $p \in U$
+ such that $f(p) = 0$.
+ If $M \ne \varnothing$ and
+ $(Df)_p$ is not the zero map for any $p \in M$,
+ then $M$ is a manifold of dimension $n-1$.
+\end{corollary}
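+For instance (a quick extra illustration, with $n = 3$), the corollary recovers
+the unit sphere: take $f(x,y,z) = x^2+y^2+z^2-1$ on $\RR^3$, so that
+\[ Df = 2x \cdot dx + 2y \cdot dy + 2z \cdot dz \]
+vanishes only at the origin, which does not lie on $M$.
+Hence $(Df)_p \neq 0$ for every $p \in M$,
+and so $M = S^2$ is a manifold of dimension $3 - 1 = 2$.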
+
+\begin{example}
+ [The circle $x^2+y^2-c=0$]
+ Let $f(x,y) = x^2+y^2 - c$, $f \colon \RR^2 \to \RR$,
+ where $c$ is a nonnegative real number.
+ Note that
+ \[ Df = 2x \cdot dx + 2y \cdot dy \]
+ which in particular is nonzero
+ as long as $(x,y) \ne (0,0)$; on the curve $M$, this is exactly when $c \ne 0$.
+ Thus:
+ \begin{itemize}
+ \ii When $c > 0$, the resulting curve ---
+ a circle with radius $\sqrt c$ ---
+ is a one-dimensional manifold, as we knew.
+ \ii When $c = 0$, the result fails.
+ Indeed, $M$ is a single point,
+ which is actually a zero-dimensional manifold!
+ \end{itemize}
+\end{example}
+
+We won't give further examples
+since I'm only mentioning this in passing
+in order to increase your capacity to write real concrete examples.
+(But \cite[Chapter 6.2]{ref:manifolds} has some more examples,
+beautifully illustrated.)
+
+\section{Differential forms on manifolds}
+We already know what a differential form is on an open set $U \subseteq \RR^n$.
+So, we naturally try to port over the definition by specifying a
+differential form on each chart, plus a compatibility condition.
+
+Let $M$ be a smooth manifold with atlas $\{ U_i \taking{\phi_i} E_i \}_i$.
+
+\begin{definition}
+ A \vocab{differential $k$-form} $\alpha$ on a smooth manifold $M$
+ is a collection $\{\alpha_i\}_i$ of differential $k$-forms on each $E_i$,
+ such that for any $j$ and $i$ we have that
+ \[ \alpha_i = \phi_{ij}^\ast(\alpha_j). \]
+\end{definition}
+In English: we specify a differential form on each chart,
+which is compatible under pullbacks of the transition maps.
+
+\section{Orientations}
+\prototype{Left versus right, clockwise vs.\ counterclockwise.}
+
+This still isn't enough to integrate on manifolds.
+We need one more definition: that of an orientation.
+
+The main issue is the observation from standard calculus that
+\[ \int_a^b f(x) \; dx = - \int_b^a f(x) \; dx. \]
+Consider then a space $M$ which is homeomorphic to an interval.
+If we have a $1$-form $\alpha$, how do we integrate it over $M$?
+Since $M$ is just a topological space (rather than a subset of $\RR$),
+there is no default ``left'' or ``right'' that we can pick.
+As another example, if $M = S^1$ is a circle, there is
+no default ``clockwise'' or ``counterclockwise'' unless we decide
+to embed $M$ into $\RR^2$.
+
+To work around this, we actually have to
+make additional assumptions about our manifold.
+\begin{definition}
+ A smooth $n$-manifold is \vocab{orientable} if
+ there exists a differential $n$-form $\omega$ on $M$
+ such that for every $p \in M$,
+ \[ \omega_p \neq 0. \]
+\end{definition}
+Recall here that $\omega_p$ is an element of $\Lambda^n(V^\vee)$.
+In that case we say $\omega$ is a \vocab{volume form} of $M$.
+
+How do we picture this definition?
+Recall that a differential form is supposed to take
+tangent vectors of $M$ and return real numbers.
+To this end, we can think of each point $p \in M$ as
+having a \vocab{tangent plane} $T_p(M)$ which is $n$-dimensional.
+Now since the volume form $\omega$ is $n$-dimensional,
+it takes an entire basis of $T_p(M)$ and gives a real number.
+So a manifold is orientable if there exists a consistent choice of
+sign for the basis of tangent vectors at every point of the manifold.
+
+For ``embedded manifolds'', this just amounts to being able
+to pick a nonzero field of normal vectors at each point $p \in M$.
+For example, $S^1$ is orientable in this way.
+\begin{center}
+ \begin{asy}
+ size(5cm);
+ draw(unitcircle, blue+1);
+ label("$S^1$", dir(100), dir(100), blue);
+ void arrow(real theta) {
+ pair P = dir(theta);
+ dot(P);
+ pair delta = 0.5*P;
+ draw( P--(P+delta), EndArrow );
+ }
+ arrow(0);
+ arrow(50);
+ arrow(140);
+ arrow(210);
+ arrow(300);
+ \end{asy}
+\end{center}
+Similarly, one can orient a sphere $S^2$ by having
+a field of vectors pointing away (or towards) the center.
+This is all non-rigorous,
+because I haven't defined the tangent plane $T_p(M)$;
+since $M$ is in general an intrinsic object one has to be
+quite roundabout to define $T_p(M)$ (although I do so in an optional section later).
+In any event, the point is that guesses about the orientability
+of spaces are likely to be correct.
+
+\begin{example}
+ [Orientable surfaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Spheres $S^n$, planes, and the torus $S^1 \times S^1$ are orientable.
+ \ii The M\"obius strip and Klein bottle are \emph{not} orientable:
+ they are ``one-sided''.
+ \ii $\CP^n$ is orientable for any $n$.
+ \ii $\RP^n$ is orientable only for odd $n$.
+ \end{enumerate}
+\end{example}
+
+
+\section{Stokes' theorem for manifolds}
+Stokes' theorem in the general case is based on the idea
+of a \vocab{manifold with boundary} $M$, which I won't define,
+other than to say its boundary $\partial M$ is an $(n-1)$-dimensional manifold,
+and that it is oriented if $M$ is oriented.
+An example is $M = D^2$, which has boundary $\partial M = S^1$.
+
+Next,
+\begin{definition}
+ The \vocab{support} of a differential form $\alpha$ on $M$
+ is the closure of the set
+ \[ \left\{ p \in M \mid \alpha_p \neq 0 \right\}. \]
+ If this support is compact as a topological space,
+ we say $\alpha$ is \vocab{compactly supported}.
+\end{definition}
+\begin{remark}
+ For example, volume forms are supported on all of $M$.
+\end{remark}
+
+Now, one can define integration on oriented manifolds,
+but I won't define this because the definition is truly awful.
+Then Stokes' theorem says
+\begin{theorem}
+ [Stokes' theorem for manifolds]
+ Let $M$ be a smooth oriented $n$-manifold with boundary
+ and let $\alpha$ be a compactly supported $(n-1)$-form.
+ Then
+ \[ \int_M d\alpha = \int_{\partial M} \alpha. \]
+\end{theorem}
+All the omitted details are developed in full in \cite{ref:manifolds}.
+
+\section{(Optional) The tangent and cotangent space}
+\prototype{Draw a line tangent to a circle, or a plane tangent to a sphere.}
+
+Let $M$ be a smooth manifold and $p \in M$ a point.
+I omitted the definition of $T_p(M)$ earlier,
+but want to actually define it now.
+
+As I said, geometrically we know what this \emph{should}
+look like for our usual examples.
+For example, if $M = S^1$ is a circle embedded in $\RR^2$,
+then the tangent vector at a point $p$
+should just look like a vector running off tangent to the circle.
+Similarly, given a sphere $M = S^2$,
+the tangent space at a point $p$ along the sphere
+would look like a plane tangent to $M$ at $p$.
+
+\begin{center}
+ \begin{asy}
+ size(5cm);
+ draw(unitcircle);
+ label("$S^1$", dir(140), dir(140));
+ pair p = dir(0);
+ draw( (1,-1.4)--(1,1.4), mediumblue, Arrows);
+ label("$T_p(M)$", (1, 1.4), dir(-45), mediumblue);
+ draw(p--(1,0.7), red, EndArrow);
+ label("$\vec v \in T_p(M)$", (1,0.7), dir(-15), red);
+ dot("$p$", p, p, blue);
+ \end{asy}
+\end{center}
+
+However, one of the points of all this manifold stuff
+is that we really want to see the manifold
+as an \emph{intrinsic object}, in its own right,
+rather than as embedded in $\RR^n$.\footnote{This
+ can be thought of as analogous to the way
+ that we think of a group as an abstract object in its own right,
+ even though Cayley's Theorem tells us that any group is a subgroup
+ of the permutation group.
+
+ Note this wasn't always the case!
+ During the 19th century, a group was literally defined
+ as a subset of $\text{GL}(n)$ or of $S_n$.
+  In fact Sylow developed his theorems without the word ``group''.
+  Only much later was the abstract definition of a group given:
+  an abstract set $G$ independent of any \emph{embedding} into $S_n$,
+  an object in its own right.}
+So, we would like our notion of a tangent vector to not refer to an ambient space,
+but only to intrinsic properties of the manifold $M$ in question.
+
+\subsection{Tangent space}
+To motivate this construction, let us start
+with an embedded case for which we know the answer already:
+a sphere.
+
+Suppose $f \colon S^2 \to \RR$ is a
+function on a sphere, and take a point $p$.
+Near the point $p$, $f$ looks like a function
+on some open neighborhood of the origin.
+Thus we can think of taking a \emph{directional derivative}
+along a vector $\vec v$ in the imagined tangent plane
+(i.e.\ some partial derivative).
+For a fixed $\vec v$ this partial derivative is a linear map
+\[ D_{\vec v} \colon C^\infty(M) \to \RR. \]
+
+It turns out this goes the other way:
+if you know what $D_{\vec v}$ does to every smooth function,
+then you can recover $\vec v$.
+This is the trick we use in order to create the tangent space.
+Rather than trying to specify a vector $\vec v$ directly
+(which we can't do because we don't have an ambient space),
+\begin{moral}
+ The vectors \emph{are} partial-derivative-like maps.
+\end{moral}
+More formally, we have the following.
+\begin{definition}
+ A \vocab{derivation} $D$ at $p$ is a linear map
+ $D \colon C^\infty(M) \to \RR$
+ (i.e.\ assigning a real number to every smooth $f$)
+ satisfying the following Leibniz rule:
+ for any $f$, $g$ we have the equality
+ \[ D(fg) = f(p) \cdot D(g) + g(p) \cdot D(f) \in \RR. \]
+\end{definition}
+This is just a ``product rule''.
+Then the tangent space is easy to define:
+\begin{definition}
+ A \vocab{tangent vector} is just a derivation at $p$, and
+ the \vocab{tangent space} $T_p(M)$ is simply
+ the set of all these tangent vectors.
+\end{definition}
+In this way we have constructed the tangent space.
+
+\subsection{The cotangent space}
+In fact, one can show that the product rule
+for $D$ is equivalent to the following three conditions:
+\begin{enumerate}
+ \ii $D$ is linear, meaning $D(af+bg) = a D(f) + b D(g)$.
+ \ii $D(1_M) = 0$, where $1_M$ is the constant function $1$ on $M$.
+ \ii $D(fg) = 0$ whenever $f(p) = g(p) = 0$.
+ Intuitively, this means that if a function $h = fg$
+ vanishes to second order at $p$,
+ then its derivative along $D$ should be zero.
+\end{enumerate}
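+
+For instance, the second condition already follows from the Leibniz rule:
+taking $f = g = 1_M$ gives
+\[ D(1_M) = D(1_M \cdot 1_M)
+	= 1 \cdot D(1_M) + 1 \cdot D(1_M) = 2 D(1_M) \]
+forcing $D(1_M) = 0$;
+combined with linearity, $D$ kills every constant function.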
+
+This suggests a third equivalent definition:
+suppose we define
+\[ \km_p \defeq \left\{ f \in C^\infty(M) \mid f(p) = 0 \right\} \]
+to be the set of functions which vanish at $p$
+(this is called the \emph{maximal ideal} at $p$).
+In that case,
+\[ \km_p^2 = \left\{ \sum_i f_i \cdot g_i
+ \mid f_i(p) = g_i(p) = 0 \right\} \]
+is the set of functions vanishing to second order at $p$.
+Thus, a tangent vector is really just a linear map
+\[ \km_p / \km_p^2 \to \RR. \]
+In other words, the tangent space is actually the
+dual space of $\km_p / \km_p^2$;
+for this reason, the space $\km_p / \km_p^2$ is defined as the
+\vocab{cotangent space} (the dual of the tangent space).
+This definition is even more abstract than the one with derivations above,
+but has some nice properties:
+\begin{itemize}
+ \ii it is coordinate-free, and
+ \ii it's defined only in terms of the smooth functions $M \to \RR$,
+ which will be really helpful later on in algebraic geometry
+ when we have varieties or schemes and can repeat this definition.
+\end{itemize}
+
+\subsection{Sanity check}
+With all these equivalent definitions, the last thing I should do is check that
+this definition of tangent space actually gives a vector space of dimension $n$.
+To do this it suffices to verify it for open subsets of $\RR^n$,
+which will imply the result for general manifolds $M$
+(which are locally open subsets of $\RR^n$).
+Using some real analysis, one can prove the following result:
+\begin{theorem}
+ Suppose $M \subset \RR^n$ is open and $0 \in M$.
+ Then
+ \[
+ \begin{aligned}
+ \km_0 &= \{ \text{smooth functions } f : f(0) = 0 \} \\
+ \km_0^2 &= \{ \text{smooth functions } f : f(0) = 0, (\nabla f)_0 = 0 \}.
+ \end{aligned}
+ \]
+	In other words, $\km_0^2$ is the set of functions $f$ which vanish at $0$
+	and whose first partial derivatives all vanish at $0$ as well.
+\end{theorem}
+Thus, it follows that there is an isomorphism
+\[ \km_0 / \km_0^2 \cong \RR^n
+ \quad\text{by}\quad
+ f \mapsto
+ \left[ \frac{\partial f}{\partial x_1}(0),
+ \dots, \frac{\partial f}{\partial x_n}(0) \right] \]
+and so the cotangent space, hence tangent space,
+indeed has dimension $n$.
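+
+Concretely, under this isomorphism
+the classes of the coordinate functions $x_1$, \dots, $x_n$
+form a basis of the cotangent space $\km_0 / \km_0^2$,
+and the dual basis of the tangent space is given by the derivations
+$\left. \frac{\partial}{\partial x_1} \right|_0$, \dots,
+$\left. \frac{\partial}{\partial x_n} \right|_0$.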
+
+%\subsection{So what does this have to do with orientations?}
+%\todo{beats me}
+
+\section{\problemhead}
+\begin{problem}
+ Show that a differential $0$-form on a smooth manifold $M$
+ is the same thing as a smooth function $M \to \RR$.
+\end{problem}
+\todo{some applications of regular value theorem here}
diff --git a/books/napkin/martingale.tex b/books/napkin/martingale.tex
new file mode 100644
index 0000000000000000000000000000000000000000..803f3d276557ab1dd5f988befd9fd5a74e4ddf53
--- /dev/null
+++ b/books/napkin/martingale.tex
@@ -0,0 +1,624 @@
+\chapter{Stopped martingales (TO DO)}
+\section{How to make money almost surely}
+We now take our newfound knowledge of measure theory to a casino.
+
+Here's the most classical example that shows up:
+a casino lets us play a game where we can bet any amount of money
+on a fair coin flip, but with bad odds:
+we win $\$n$ if the coin is heads,
+but lose $\$2n$ if the coin is tails,
+for a value of $n$ of our choice.
+This seems like a game that no one in their right mind would want to play.
+
+Well, if we have unbounded time and money,
+we actually can almost surely make a profit.
+\begin{example}
+ [Being even greedier than 18th century France]
+ In the game above, we start by betting $\$1$.
+ \begin{itemize}
+ \ii If we win, we leave having made $\$1$.
+ \ii If we lose, we then bet $\$10$ instead, and
+ \begin{itemize}
+ \ii If we win, then we leave having made $\$10-\$2=\$8$, and
+ \ii If we lose then we bet $\$100$ instead, and
+ \begin{itemize}
+			\ii If we win, we leave having made $\$100-\$20-\$2=\$78$, and
+ \ii If we lose then we bet $\$1000$ instead, and so on\dots
+ \end{itemize}
+ \end{itemize}
+ \end{itemize}
+ Since the coin will almost surely show heads eventually,
+ we make money whenever that happens.
+ In fact, the expected amount of time until a coin shows heads
+ is only $2$ flips! What could go wrong?
+\end{example}
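+
+To see the pattern in general:
+if the first heads occurs on flip $k$
+(which happens with probability $2^{-k}$),
+then we have bet $\$1$, $\$10$, \dots, $\$10^{k-1}$ along the way,
+so we walk away with
+\[ 10^{k-1} - 2\left( 1 + 10 + \dots + 10^{k-2} \right)
+	= \frac{7 \cdot 10^{k-1} + 2}{9} > 0 \]
+dollars --- a guaranteed profit whenever the coin eventually shows heads.
+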
+This chapter will show that under sane conditions
+such as ``finite time'' or ``finite money'',
+one cannot actually make money in this way --- the \emph{optional stopping theorem}.
+This will give us an excuse to define conditional probabilities,
+and then talk about martingales (which generalize the fair casino).
+
+Once we realize that trying to extract money from Las Vegas is a lost cause,
+we will stop gambling and then return to solving math problems,
+by showing some tricky surprises,
+where problems that look like they have nothing to do with gambling
+can be solved by considering a suitable martingale.
+
+In everything that follows, $\Omega = (\Omega, \SA, \mu)$ is a probability space.
+
+\section{Sub-$\sigma$-algebras and filtrations}
+\prototype{$\sigma$-algebra generated by a random variable, and coin flip filtration.}
+We considered our $\Omega$ as a space of worlds,
+equipped with a $\sigma$-algebra $\SA$ that lets us integrate over $\Omega$.
+However, it is a sad fact of life that at any given time,
+you only know partial information about the world.
+For example, at the time of writing,
+we know that the world did not end in 2012
+(see \url{https://en.wikipedia.org/wiki/2012_phenomenon}),
+but the fate of humanity in future years remains slightly uncertain.
+
+Let's write this measure-theoretically: we could consider
+\begin{align*}
+ \Omega &= A \sqcup B \\
+ A &= \left\{ \omega \text{ for which world ends in $2012$} \right\} \\
+ B &= \left\{ \omega \text{ for which world does not end in $2012$} \right\}.
+\end{align*}
+We will assume that $A$ and $B$ are measurable sets,
+that is, $A,B \in \SA$.
+That means we could have good fun arguing about what the values
+of $\mu(A)$ and $\mu(B)$ should be
+(``a priori probability that the world ends in $2012$''),
+but let's move on to a different silly example.
+
+
+We will now introduce a new notion that
+we will need when we define conditional probabilities later.
+\begin{definition}
+ Let $\Omega = (\Omega, \SA, \mu)$ be a probability space.
+ A \vocab{sub-$\sigma$-algebra} $\SF$
+ on $\Omega$ is exactly what it sounds like:
+ a $\sigma$-algebra $\SF$ on the set $\Omega$
+ such that each $A \in \SF$ is measurable
+ (i.e.,\ $\SF \subseteq \SA$).
+\end{definition}
+
+The motivation is that $\SF$ is the $\sigma$-algebra of sets
+which let us ask questions about some piece of information.
+For example, in the 2012 example we gave above,
+we might take $\SF = \{\varnothing, A, B, \Omega\}$,
+which are the sets we care about if we are thinking only about 2012.
+
+Here are some more serious examples.
+\begin{example}
+ [Examples of sub-$\sigma$-algebras]
+ \listhack
+ \begin{enumerate}[(a)]
+	\ii Let $X \colon \Omega \to \{1,2,3\}$ be a random
+ variable taking on one of three values.
+ If we're interested in $X$ then we could define
+ \begin{align*}
+ A &= \{\omega \colon X(\omega) = 1\} \\
+ B &= \{\omega \colon X(\omega) = 2\} \\
+ C &= \{\omega \colon X(\omega) = 3\}
+ \end{align*}
+ then we could write
+ \[ \SF = \left\{ \varnothing, \; A, \; B, \; C, \;
+ A \cup B, \; B \cup C, \; C \cup A, \; \Omega \right\}. \]
+	This is a sub-$\sigma$-algebra on $\Omega$
+ that lets us ask questions about $X$
+ like ``what is the probability $X \ne 3$'', say.
+
+ \ii Now suppose $Y \colon \Omega \to [0,1]$ is another random variable.
+ If we are interested in $Y$,
+ the $\SF$ that captures our curiosity is
+ \[ \SF = \left\{ Y\pre(B) \mid B \subseteq [0,1]
+ \text{ is measurable } \right\}. \]
+ \end{enumerate}
+\end{example}
+
+You might notice a trend here which we formalize now:
+\begin{definition}
+ Let $X \colon \Omega \to \RR$ be a random variable.
+ The \vocab{sub-$\sigma$-algebra generated by $X$} is defined by
+ \[ \sigma(X) \defeq \left\{ X\pre(B) \mid B \subseteq \RR
+ \text{ is measurable } \right\}. \]
+ If $X_1$, \dots is a sequence (finite or infinite) of random variables,
+ the sub-$\sigma$-algebra generated by them
+ is the smallest $\sigma$-algebra which contains $\sigma(X_i)$ for each $i$.
+\end{definition}
+
+Finally, we can put a lot of these together ---
+since we're talking about time, we learn more as we grow older,
+and this can be formalized.
+\begin{definition}
+ A \vocab{filtration} on $\Omega = (\Omega, \SA, \mu)$
+ is a nested sequence\footnote{For convenience,
+ we will restrict ourselves to $\ZZ_{\ge0}$-indexed
+ filtrations, though really any index set is okay.}
+ \[ \SF_0 \subseteq \SF_1 \subseteq \SF_2 \subseteq \dots \]
+ of sub-$\sigma$-algebras on $\Omega$.
+\end{definition}
+
+\begin{example}
+ [Filtration]
+ Suppose you're bored in an infinitely long class
+ and start flipping a fair coin to pass the time.
+ (Accordingly, we could let $\Omega = \{H,T\}^\infty$
+ consist of infinite sequences of heads $H$ and tails $T$.)
+ We could let $\SF_n$ denote the sub-$\sigma$-algebra
+ generated by the values of the first $n$ coin flips.
+ So:
+ \begin{itemize}
+ \ii $\SF_0 = \{\varnothing, \Omega\}$,
+ \ii $\SF_1 = \{\varnothing, \text{first flip $H$}, \text{first flip $T$}, \Omega\}$,
+ \ii $\SF_2 = \{\varnothing, \text{first flips $HH$}, \text{second flip $T$}, \Omega, \text{first flip and second flip differ}, \dots\}$.
+ \ii and so on, with $\SF_n$ being the measurable sets
+ ``determined'' only by the first $n$ coin flips.
+ \end{itemize}
+\end{example}
+
+\begin{exercise}
+ In the previous example, compute the cardinality $|\SF_n|$ for each integer $n$.
+\end{exercise}
+
+\section{Conditional expectation}
+\prototype{$\EE(X \mid X+Y)$ for $X$ and $Y$ distributed over $[0,1]$.}
+
+We'll need the definition of conditional probability to define a martingale,
+but this turns out to be surprisingly tricky.
+Let's consider the following simple example to see why.
+\begin{example}
+ [Why high-school methods aren't enough here]
+ Suppose we have two independent random variables $X$, $Y$ distributed
+ uniformly over $[0,1]$ (so we may as well take $\Omega = [0,1]^2$).
+ We might try to ask the question:
+ \begin{quote}
+ \itshape
+ ``what is the expected value of $X$
+ given that $X+Y = 0.6$''?
+ \end{quote}
+ Intuitively, we know the answer has to be $0.3$.
+ However, if we try to write down a definition, we quickly run into trouble.
+ Ideally we want to say something like
+ \[ \EE[X \text{ given } X+Y=0.6]
+ = \frac{\int_{S} X}{\int_{S} 1}
+ \text{ where }
+ S = \left\{ \omega \in \Omega \mid X(\omega)+Y(\omega)=0.6 \right\}. \]
+ The problem is that $S$ is a set of measure zero,
+ so we quickly run into $\frac 00$, meaning a definition
+ of this shape will not work out.
+\end{example}
+
+The way that this is typically handled in measure theory
+is to use the notion of sub-$\sigma$-algebra that we defined.
+Let $\SF$ be a sub-$\sigma$-algebra capturing the information
+we wish to condition on.
+The idea is to create a function assigning a
+``conditional expectation'' to \emph{every} point $\omega \in \Omega$,
+in a way which is measurable with respect to $\SF$.
+\todo{give the example}
+
+\begin{proposition}
+ [Conditional expectation definition]
+ \label{prop:conditional_exp}
+ Let $X \colon \Omega \to \RR$ be an \emph{absolutely integrable}
+ random variable (meaning $\EE[|X|] < \infty$)
+ over a probability space $\Omega$,
+ and let $\SF$ be a sub-$\sigma$-algebra on it.
+
+ Then there exists a function $\eta \colon \Omega \to \RR$
+ satisfying the following two properties:
+ \begin{itemize}
+ \ii $\eta$ is $\SF$-measurable (that is,
+ measurable as a function $(\Omega, \SF, \mu) \to \RR$); and
+ \ii for any set $A \in \SF$ we have
+ $\EE[\eta \cdot \mathbf{1}_A] = \EE[X \cdot \mathbf{1}_A]$.
+ \end{itemize}
+	Moreover, this random variable is unique up to almost-sure equality.
+\end{proposition}
+\begin{proof}
+	Omitted, but the relevant buzzword is ``Radon-Nikodym derivative''.
+\end{proof}
+
+\begin{definition}
+ Let $\eta$ be as in the previous proposition.
+ \begin{itemize}
+ \ii We denote $\eta$ by $\EE(X \mid \SF)$
+ and call it the \vocab{conditional expectation} of $X$ with respect to $\SF$.
+ \ii If $Y$ is a random variable then
+ $\EE(X \mid Y)$ denotes $\EE(X \mid \sigma(Y))$,
+ i.e.\ the conditional expectation of $X$
+ with respect to the $\sigma$-algebra generated by $Y$.
+ \end{itemize}
+\end{definition}
+
+More fine print:
+\begin{remark}
+ [This notation is terrible]
+ The notation $\EE(X \mid \SF)$ is admittedly confusing,
+ since it is actually an entire function $\Omega \to \RR$,
+ rather than just a real number like $\EE[X]$.
+ For this reason I try to be careful to remember
+ to use parentheses rather than square brackets
+ for conditional expectations; not everyone does this.
+\end{remark}
+
+\begin{abuse}
+ In addition, when we write $Y = \EE(X \mid \SF)$,
+ there is some abuse of notation happening here
+ since $\EE(X \mid \SF)$ is defined only up to some reasonable uniqueness
+ (i.e.\ up to measure zero changes).
+ So this really means that
+ ``$Y$ satisfies the hypothesis of \Cref{prop:conditional_exp}'',
+ but this is so pedantic that no one bothers.
+\end{abuse}
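+
+As an example, we can now make sense of the prototype $\EE(X \mid X+Y)$:
+for the independent uniform variables $X$, $Y$ from before, we claim
+\[ \EE(X \mid X+Y) = \frac{X+Y}{2}. \]
+Indeed, the right-hand side is a function of $X+Y$,
+hence measurable with respect to $\sigma(X+Y)$;
+and for any $A \in \sigma(X+Y)$ we have
+$\EE[X \cdot \mathbf 1_A] = \EE[Y \cdot \mathbf 1_A]$
+by symmetry between $X$ and $Y$, whence
+\[ \EE[X \cdot \mathbf 1_A]
+	= \half \EE[(X+Y) \cdot \mathbf 1_A]
+	= \EE\left[ \frac{X+Y}{2} \cdot \mathbf 1_A \right]. \]
+This verifies both conditions of \Cref{prop:conditional_exp}.
+In particular, on the event $X+Y = 0.6$ this function takes the value $0.3$,
+matching our earlier intuition.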
+
+\todo{properties}
+
+\section{Supermartingales}
+\prototype{Visiting a casino is a supermartingale, assuming house odds.}
+
+\begin{definition}
+ Let $X_0$, $X_1$, \dots be a sequence of random variables
+ on a probability space $\Omega$,
+ and let $\SF_0 \subseteq \SF_1 \subseteq \cdots$ be a filtration.
+
+ Then $(X_n)_{n \ge 0}$ is a \vocab{supermartingale}
+ with respect to $(\SF_n)_{n \ge 0}$ if the following conditions hold:
+ \begin{itemize}
+ \ii $X_n$ is absolutely integrable for every $n$;
+ \ii $X_n$ is measurable with respect to $\SF_n$; and
+ \ii for each $n = 1, 2, \dots$ the inequality
+ \[ \EE(X_n \mid \SF_{n-1}) \le X_{n-1} \]
+	holds almost everywhere.
+ \end{itemize}
+
+ In a \vocab{submartingale} the inequality $\le$ is replaced with $\ge$,
+ and in a \vocab{martingale} it is replaced by $=$.
+\end{definition}
+
+\begin{abuse}
+ [No one uses that filtration thing anyways]
+ We will always take $\SF_n$ to be the $\sigma$-algebra
+	generated by the variables $X_0$, $X_1$, \dots, $X_n$ seen so far,
+ and do so without further comment.
+ Nonetheless, all the results that follow hold in the more general setting
+ of a supermartingale with respect to some filtration.
+\end{abuse}
+
+We will prove all our theorems for supermartingales;
+the analogous versions for submartingales can be obtained
+by replacing $\le$ with $\ge$ everywhere
+(since $X_n$ is a submartingale iff $-X_n$ is a supermartingale)
+and for martingales by replacing $\le$ with $=$ everywhere
+(since $X_n$ is a martingale iff it is both a supermartingale
+and a submartingale).
+
+Let's give examples.
+\begin{example}
+ [Supermartingales]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii \textbf{Random walks}:
+ an ant starts at the position $0$ on the number line.
+ Every minute, it flips a fair coin and either
+ walks one step left or one step right.
+ If $X_t$ is the position at the $t$th time,
+ then $X_t$ is a martingale, because
+ \[ \EE(X_t \mid X_0, \dots, X_{t-1})
+ = \frac{(X_{t-1}+1) + (X_{t-1}-1)}{2} = X_{t-1}. \]
+
+ \ii \textbf{Casino game}:
+ Consider a gambler using the strategy described
+ at the beginning of the chapter.
+ This is a martingale, since every bet the gambler makes
+ has expected value $0$.
+
+ \ii \textbf{Multiplying independent variables}:
+ Let $X_1$, $X_2$, \dots, be independent (not necessarily
+ identically distributed) integrable random variables with mean $1$.
+	Then the sequence $Y_1$, $Y_2$, \dots\ defined by
+	\[ Y_n \defeq X_1 X_2 \dots X_n \]
+	is a martingale, as
+	$\EE(Y_n \mid Y_1, \dots, Y_{n-1}) = \EE[X_n] \cdot Y_{n-1} = Y_{n-1}$.
+
+ \ii \textbf{Iterated blackjack}:
+ Suppose one shows up to a casino and plays
+ infinitely many games of blackjack.
+ If $X_t$ is their wealth at time $t$, then $X_t$ is a supermartingale.
+ This is because each game has negative expected value (house edge).
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+	[Frivolous/inflammatory example --- real life is a supermartingale]
+
+ Let $X_t$ be your happiness on day $t$ of your life.
+ Life has its ups and downs,
+ so it is not the case that $X_t \le X_{t-1}$ for every $t$.
+ For example, you might win the lottery one day.
+
+ However, on any given day, many things can go wrong (e.g.\ zombie apocalypse),
+ and by Murphy's Law this is more likely than things going well.
+ Also, as you get older, you have an increasing number of responsibilities
+ and your health gradually begins to deteriorate.
+
+ Thus it seems that
+ \[ \EE(X_t \mid X_0, \dots, X_{t-1}) \le X_{t-1} \]
+ is a reasonable description of the future ---
+ \emph{in expectation}, each successive day is
+ slightly worse than the previous one.
+ (In particular, if we set $X_t = -\infty$ on death,
+ then as long as you have a positive probability of dying,
+ the displayed inequality is obviously true.)
+\end{example}
+
+Before going on, we will state without proof one useful result:
+if a supermartingale is bounded in expected absolute value,
+then it almost surely converges.
+
+\begin{theorem}
+ [Doob's martingale convergence theorem]
+ \label{thm:doob_martingale_converge}
+ Let $X_0$, \dots be a supermartingale on a
+ probability space $\Omega$ such that
+ \[ \sup_{n \in \ZZ_{\ge 0}}
+ \EE \left[ \left\lvert X_n \right\rvert \right] < \infty. \]
+ Then, there exists a random variable $X_\infty \colon \Omega \to \RR$
+ such that
+ \[ X_n \asto X_\infty. \]
+\end{theorem}
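+
+The boundedness hypothesis really is needed:
+the unbiased random walk from the earlier example is a martingale
+which almost surely does \emph{not} converge
+(it crosses $0$ infinitely often),
+and correspondingly $\EE\left[ |X_n| \right] \to \infty$ as $n \to \infty$.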
+
+\section{Optional stopping}
+\prototype{Las Vegas.}
+
+In the first section we described how to make money almost surely.
+The key advantage the gambler had was the ability to quit whenever he wanted
+(equivalently, an ability to control the size of the bets;
+betting \$0 forever is the same as quitting.)
+Let's formalize a notion of stopping time.
+
+The idea is we want to define a function
+$\tau \colon \Omega \to \{0, 1, \dots, \infty\}$ such that
+\begin{itemize}
+ \ii $\tau(\omega)$ specifies the index
+ after which we \emph{stop} the martingale.
+ Note that the decisions to stop after time $n$
+ must be made with only the information available at that time ---
+ i.e., with respect to $\SF_n$ of the filtration.
+
+	\ii $X_{\tau \wedge n}$ is the random variable representing the
+	value at time $n$ of the stopped martingale,
+	where if $n$ is \emph{after} the stopping time,
+	we just take it to be the value we had when we stopped.
+
+ So for example in a world $\omega$ where we stopped at time $3$, then
+ $X_{\tau \wedge 0}(\omega) = X_0(\omega)$,
+ $X_{\tau \wedge 1}(\omega) = X_1(\omega)$,
+ $X_{\tau \wedge 2}(\omega) = X_2(\omega)$,
+ $X_{\tau \wedge 3}(\omega) = X_3(\omega)$, but then
+ \[ X_3(\omega)
+ = X_{\tau \wedge 4}(\omega)
+ = X_{\tau \wedge 5}(\omega)
+ = X_{\tau \wedge 6}(\omega)
+ = \dots
+ \]
+ since we have stopped --- the value stops changing.
+
+ \ii $X_{\tau}$ denotes the eventual value after we stop
+ (or the limit $X_\infty$ if we never stop).
+\end{itemize}
+
+Here's the compiled machine code.
+\begin{definition}
+ Let $\SF_0 \subseteq \SF_1 \subseteq \cdots$ be a filtration
+ on a probability space $\Omega$.
+ \begin{itemize}
+ \ii A \vocab{stopping time} is a function
+ \[ \tau \colon \Omega \to \{0, 1, 2, \dots\} \cup \{\infty\} \]
+ with the property that for each integer $n$, the set
+ \[ \left\{ \omega \in \Omega \mid \tau(\omega) = n \right\} \]
+ is $\SF_n$-measurable (i.e., is in $\SF_n$).
+
+ \ii For each $n \ge 0$ we define
+ $X_{\tau \wedge n} \colon \Omega \to \RR$ by
+ \[ X_{\tau \wedge n}(\omega)
+	= X_{\min \left\{ \tau(\omega), n \right\}}(\omega). \]
+
+ \ii Finally, we let the eventual outcome be denoted by
+ \[ X_\tau(\omega)
+ = \begin{cases}
+ X_{\tau(\omega)}(\omega) & \tau(\omega) \ne \infty \\
+ \lim_{n \to \infty} X_n(\omega) & \tau(\omega) = \infty
+ \text{ and } \lim_{n \to \infty} X_n(\omega) \text{ exists } \\
+ \text{undefined} & \text{otherwise}.
+ \end{cases}
+ \]
+ We require that the ``undefined'' case occurs
+ only for a set of measure zero
+ (for example, if \Cref{thm:doob_martingale_converge} applies).
+ Otherwise we don't allow $X_\tau$ to be defined.
+ \end{itemize}
+\end{definition}
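+
+For example, in the random walk setting,
+``the first time $n$ for which $|X_n| = 100$'' is a stopping time:
+whether $\tau = n$ is determined by $X_0$, \dots, $X_n$ alone,
+so $\{\tau = n\} \in \SF_n$.
+On the other hand, ``the \emph{last} time the walk visits $0$''
+is not a stopping time,
+since deciding it requires peeking into the future.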
+
+
+\begin{proposition}
+ [Stopped supermartingales are still supermartingales]
+	Let $X_0$, $X_1$, \dots be a supermartingale
+	with respect to a filtration $(\SF_n)_{n \ge 0}$,
+	and let $\tau$ be a stopping time for that filtration.
+ Then the sequence
+ \[ X_{\tau \wedge 0}, \; X_{\tau \wedge 1}, \; \dots \]
+ is itself a supermartingale.
+\end{proposition}
+\begin{proof}
+	First observe that
+	\[ X_{\tau \wedge n} = X_{\tau \wedge (n-1)}
+		+ \mathbf 1_{\tau \ge n} \cdot (X_n - X_{n-1}), \]
+	and that $\mathbf 1_{\tau \ge n}$ and $X_{\tau \wedge (n-1)}$
+	are both $\SF_{n-1}$-measurable
+	(the former because $\{\tau \ge n\}$ is the complement of
+	$\{\tau = 0\} \cup \dots \cup \{\tau = n-1\} \in \SF_{n-1}$).
+	Hence we have almost everywhere the relations
+	\begin{align*}
+		\EE\left( X_{\tau \wedge n} \mid \SF_{n-1} \right)
+		&= \EE \left( X_{\tau \wedge (n-1)} + \mathbf 1_{\tau \ge n} (X_n - X_{n-1}) \mid \SF_{n-1} \right) \\
+		&= \EE \left( X_{\tau \wedge (n-1)} \mid \SF_{n-1} \right)
+			+ \EE \left( \mathbf 1_{\tau \ge n} \cdot (X_n - X_{n-1}) \mid \SF_{n-1} \right) \\
+		&= X_{\tau \wedge (n-1)} + \mathbf 1_{\tau \ge n}
+			\cdot\EE \left( X_n - X_{n-1} \mid \SF_{n-1} \right)
+		\le X_{\tau \wedge (n-1)}
+	\end{align*}
+	as functions from $\Omega \to \RR$.
+\end{proof}
+
+\begin{theorem}
+ [Doob's optional stopping theorem]
+ Let $X_0$, $X_1$, \dots be a supermartingale on a probability space $\Omega$,
+ with respect to a filtration $\SF_0 \subseteq \SF_1 \subseteq \cdots$.
+ Let $\tau$ be a stopping time with respect to this filtration.
+ Suppose that \emph{any} of the following hypotheses are true,
+ for some constant $C$:
+ \begin{enumerate}[(a)]
+ \ii \textbf{Finite time}: $\tau(\omega) \le C$ for almost all $\omega$.
+ \ii \textbf{Finite money}: for each $n \ge 1$,
+ $\left\lvert X_{\tau \wedge n}(\omega) \right\rvert \le C$
+ for almost all $\omega$.
+ \ii \textbf{Finite bets}: we have $\mathbb E[\tau] < \infty$,
+ and for each $n \ge 1$, the conditional expectation
+ \[ \EE\left( \left\lvert X_n-X_{n-1} \right\rvert
+		\mid \SF_{n-1} \right) \]
+ takes on values at most $C$ for almost all $\omega \in \Omega$
+ satisfying $\tau(\omega) \ge n$.
+ \end{enumerate}
+ Then $X_\tau$ is well-defined almost everywhere,
+ and more importantly, \[ \EE[X_\tau] \le \EE[X_0]. \]
+\end{theorem}
+The last inequality can be cheekily expressed as
+``the only winning move is not to play''.
+
+\begin{proof}
+ \todo{do later tonight}
+\end{proof}
+
+\begin{exercise}
+ Conclude that going to Las Vegas with the strategy
+ described in the first section is a really bad idea.
+ What goes wrong?
+\end{exercise}
+
+\section{Fun applications of optional stopping (TO DO)}
+%% for 18.A34 we can do the random walk example
+We now give three problems which showcase some of the power of
+the results we have developed so far.
+
+\subsection{The ballot problem}
+Suppose Alice and Bob are competing in an election;
+Alice received $a$ votes total while Bob received $b$ votes total, and $a > b$.
+If the votes are chosen in random order,
+one could ask: what is the probability that Alice never falls behind
+Bob in the election?
+
+\missingfigure{path}
+
+\begin{proposition}
+ [Ballot problem]
+ This occurs with probability $\frac{a-b}{a+b}$.
+\end{proposition}
+
+
+\subsection{ABRACADABRA}
+
+
+\subsection{USA TST 2018}
+
+\section{\problemhead}
+
+\begin{problem}
+ [Examples of martingales]
+ We give some more examples of martingales.
+ \begin{enumerate}[(a)]
+ \ii \textbf{(Simple random walk)}
+ Let $X_1$, $X_2$, \dots be i.i.d.\ random variables
+ which equal $+1$ with probability $1/2$,
+ and $-1$ with probability $1/2$.
+ Prove that
+ \[ Y_n = \left( X_1 + \dots + X_n \right)^2 - n \]
+ is a martingale.
+
+ \ii \textbf{(de Moivre's martingale)}
+ Fix real numbers $p$ and $q$ such that $p,q > 0$ and $p+q=1$.
+ Let $X_1$, $X_2$, \dots be i.i.d.\ random variables
+ which equal $+1$ with probability $p$,
+ and $-1$ with probability $q$.
+ Show that
+ \[ Y_n = \left(qp^{-1}\right)^{X_1 + X_2 + \dots + X_n} \]
+ is a martingale.
+
+ \ii \textbf{(P\'{o}lya's urn)}
+ An urn contains one red and one blue marble initially.
+ Every minute, a marble is randomly removed from the urn,
+ and two more marbles of the same color are added to the urn.
+ Thus after $n$ minutes, the urn will have $n+2$ marbles.
+
+ Let $r_n$ denote the fraction of marbles which are red.
+ Show that $r_n$ is a martingale.
+ \end{enumerate}
+\end{problem}
+
+\begin{problem}
+ A deck has $52$ cards; of them $26$ are red and $26$ are black.
+ The cards are drawn and revealed one at a time.
+ At any point, if there is at least one card remaining in the deck,
+ you may stop the dealer;
+ you win if (and only if) the next card in the deck is red.
+ If all cards are dealt, then you lose.
+ Across all possible strategies,
+ determine the maximal probability of winning.
+ \begin{hint}
+ There is a cute elementary solution.
+ For the martingale-based solution,
+		show that the fraction of red cards remaining in the deck
+		at time $n$ is a martingale.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [Wald's identity]
+ Let $\mu$ be a real number.
+ Let $X_1$, $X_2$, \dots be independent random variables
+ on a probability space $\Omega$ with mean $\mu$.
+ Finally let $\tau \colon \Omega \to \{1, 2, \dots\}$
+ be a stopping time such that $\mathbb E[\tau] < \infty$,
+ such that the event $\tau = n$ depends only on $X_1$, \dots, $X_n$.
+
+ Prove that
+ \[ \EE[X_1 + X_2 + \dots + X_\tau] = \mu \EE[\tau]. \]
+\end{problem}
+
+\begin{problem}
+ [Unbiased drunkard's walk]
+ An ant starts at $0$ on a number line,
+	and walks left or right one unit, each with probability $1/2$.
+ It stops once it reaches either $-17$ or $+8$.
+ \begin{enumerate}[(a)]
+ \ii Find the probability it reaches $+8$ before $-17$.
+
+ \ii Find the expected value of the amount of time
+ it takes to reach either endpoint.
+ \end{enumerate}
+\end{problem}
+
+\begin{problem}
+ [Biased drunkard's walk]
+ Let $0 < p < 1$ be a real number.
+ An ant starts at $0$ on a number line,
+	and walks right one unit with probability $p$, or left one unit otherwise.
+ It stops once it reaches either $-17$ or $+8$.
+ Find the probability it reaches $+8$ first.
+\end{problem}
+
+\begin{problem}
+ The number $1$ is written on a blackboard.
+ Every minute, if the number $a$ is written on the board,
+ it's erased and replaced by a real number
+ in the interval $[0, 2.01a]$ selected uniformly at random.
+ What is the probability that the resulting sequence of numbers approaches $0$?
+ \begin{hint}
+ It occurs with probability $1$.
+ If $X_n$ is the number on the board at step $n$,
+ and $\mu = \frac{1}{2.01} \int_0^{2.01} \log t \; dt$,
+ show that $\log(X_n) - n \mu$ is a martingale.
+ (Incidentally, using the law of large numbers could work too.)
+ \end{hint}
+\end{problem}
+
diff --git a/books/napkin/measure-space.tex b/books/napkin/measure-space.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7d5359bdc4cfdabc4a2aad5e4c8bf60325632aba
--- /dev/null
+++ b/books/napkin/measure-space.tex
@@ -0,0 +1,434 @@
+\chapter{Measure spaces}
+Here is an outline of where we are going next.
+Our \emph{goal} over the next few chapters
+is to develop the machinery to state (and in some cases prove)
+the law of large numbers and the central limit theorem.
+For these purposes, the scant amount of work we did in Calculus 101
+is going to be awfully insufficient:
+integration over $\RR$ (or even $\RR^n$) is just not going to cut it.
+
+This chapter will develop the theory of ``measure spaces'',
+which you can think of as ``spaces equipped with a notion of size''.
+We will then be able to integrate over these
+with the so-called Lebesgue integral
+(which in some senses is almost strictly better than the Riemann one).
+
+\section*{Letter connotations}
+There are a lot of ``types'' of objects moving forward,
+so here are the letter connotations we'll use
+throughout the next several chapters.
+This makes it easier to tell what the ``type'' of each object
+is just by which letter is used.
+\begin{itemize}
+ \ii Measure spaces denoted by $\Omega$,
+ their elements denoted by $\omega$.
+ \ii Algebras and $\sigma$-algebras denoted by script $\SA$, $\SB$, \dots.
+ Sets in them denoted by early capital Roman $A$, $B$, $C$, $D$, $E$, \dots.
+	\ii Measures (i.e.\ functions assigning real numbers to sets)
+ denoted usually by $\mu$ or $\rho$.
+ \ii Random variables (functions sending worlds to reals)
+ denoted usually by late capital Roman $X$, $Y$, $Z$, \dots.
+ \ii Functions from $\RR \to \RR$
+ by Roman letters like $f$ and $g$ for pdf's
+ and $F$ and $G$ for cdf's.
+ \ii Real numbers denoted by lower Roman letters like $x$, $y$, $z$.
+\end{itemize}
+
+
+\section{Motivating measure spaces via random variables}
+To motivate \emph{why} we want to construct measure spaces,
+I want to talk about a (real) \vocab{random variable},
+which you might think of as
+\begin{itemize}
+ \ii the result of a coin flip,
+ \ii the high temperature in Boston on Saturday,
+ \ii the possibility of rain on your 18.725 date next weekend.
+\end{itemize}
+
+Why does this need a long theory to develop well?
+For a simple coin flip one intuitively just thinks
+``50\% heads, 50\% tails'' and is done with it.
+The situation is a little trickier with temperature
+since it is continuous rather than discrete,
+but if all you care about is that one temperature,
+calculus seems like it might be enough to deal with this.
+
+But it gets more slippery once the variables start to ``talk to'' each other:
+the high temperature tells you a little bit about whether it will rain,
+because e.g.\ if the temperature is very high it's quite likely to be sunny.
+Suddenly we find ourselves wishing we could talk about conditional probability,
+but this is a whole can of worms --- the relations
+between these sorts of things can get very complicated very quickly.
+
+The big idea to getting a formalism for this is that:
+\begin{moral}
+ Our measure spaces $\Omega$ will be thought of as a space of entire worlds,
+ with each $\omega \in \Omega$ representing a world.
+ Random variables are functions from worlds to $\RR$.
+\end{moral}
+This way, the space of ``worlds'' takes care of all the messy interdependence.
+
+Then, we can assign ``measures'' to sets of worlds:
+for example, to be a fair coin means that if you are only interested in
+that one coin flip, the ``fraction'' of worlds in
+which that coin showed heads should be $\half$.
+This is in some ways backwards from what you were told in high-school:
+officially, we start with the space of worlds,
+rather than starting with the probabilities.
+
+It will soon be clear that there is no way we can assign
+a well-defined measure to every single subset in $2^\Omega$.
+Fortunately, in practice, we won't need to,
+and the notion of a $\sigma$-algebra will capture the idea
+of ``enough measur-\emph{able} sets for us to get by''.
+
+\begin{remark}
+ [Random seeds]
+ Another analogy if you do some programming:
+ each $\omega \in \Omega$ is a \emph{random seed},
+ and everything is determined from there.
+\end{remark}
+
+\section{Motivating measure spaces geometrically}
+So, we have a set $\Omega$ of possible points
+(which in the context of the previous discussion
+can be thought of as the set of worlds),
+and we want to assign a \emph{measure} (think volume)
+to subsets of points in $\Omega$.
+We will now describe some of the obstacles that we will face,
+in order to motivate \emph{how} measure spaces are defined
+(as the previous section only motivated \emph{why} we want such things).
+
+If you try to do this na\"{\i}vely,
+you basically immediately run into set-theoretic issues.
+A good example to think about why this might happen
+is if $\Omega = \RR^2$ with the measure corresponding to area.
+You can define the area of a triangle as in high school,
+and you can then try and define the area of a circle,
+maybe by approximating it with polygons.
+But what area would you assign to the subset $\QQ^2$, for example?
+(It turns out ``zero'' is actually a working answer.)
+Or, a unit disk is composed of infinitely many points;
+each of the points better have measure zero,
+but why does their union have measure $\pi$ then?
+And so on; the difficulties pile up quickly.
+
+We'll say more about this later, but
+you might have already heard of the \textbf{Banach-Tarski paradox}
+which essentially shows there is no good way that you can assign a
+measure to every single subset of $\RR^3$
+and still satisfy basic sanity checks.
+There are just too many possible subsets of Euclidean space.
+
+However, the good news is that most of these sets are not ones
+that we will ever care about,
+and it's enough to define measures for certain
+``sufficiently nice sets''.
+The adjective we will use is \emph{measurable},
+and it will turn out that this will be way, way more than good enough
+for any practical purposes.
+
+We will generally use $A$, $B$, \dots for measurable sets
+and denote the entire family of measurable sets by curly $\SA$.
+
+\section{$\sigma$-algebras and measurable spaces}
+Here's the machine code.
+\begin{definition}
+ A \vocab{measurable space} consists of a space $\Omega$ of points,
+ and a \vocab{$\sigma$-algebra} $\SA$ of subsets of $\Omega$
+ (the ``measurable sets'' of $\Omega$).
+ The set $\SA$ is required to satisfy the following axioms:
+ \begin{itemize}
+ \ii $\SA$ contains $\varnothing$ and $\Omega$.
+ \ii $\SA$ should be closed under complements and
+ \emph{countable} unions/intersections.
+ (Hint on nomenclature: $\sigma$ usually indicates
+ some sort of countability condition.)
+ \end{itemize}
+\end{definition}
+(Complaint: this terminology is phonetically confusing,
+because it can be confused with ``measure space'' later.
+The way to think about it is that
+``measur\emph{able} spaces have a $\sigma$-algebra, so we \emph{could}
+try to put a measure on it, but we \emph{haven't}, yet.'')
+
+Though this definition is how we actually think about it in a few select cases,
+for the most part we will instantiate $\SA$ in practice
+in a different way:
+\begin{definition}
+ Let $\Omega$ be a set, and consider some family of subsets $\SF$ of $\Omega$.
+ Then the \vocab{$\sigma$-algebra generated by $\SF$}
+ is the smallest $\sigma$-algebra $\SA$ which contains $\SF$.
+\end{definition}
+As is commonplace in math, when we see ``generated'',
+this means we sort of let the definition ``take care of itself''.
+So, if $\Omega = \RR$, maybe I want $\SA$ to contain all open sets.
+Well, then the definition means it should contain all complements too,
+so it contains all the closed sets.
+Then it has to contain all the half-open intervals too, and then\dots.
+Rather than try to reason out what exactly the final shape $\SA$ looks like
+(which basically turns out to be impossible),
+we just give up and say ``$\SA$ is all the sets you can get if you start
+with the open sets and apply repeatedly union/complement operations''.
+Or even more bluntly: ``start with open sets, shake vigorously''.
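+To see generation in a case where the final shape \emph{is} describable,
+consider generating by a single subset:
+\begin{example}
+ [The $\sigma$-algebra generated by one set]
+ If $\SF = \{A\}$ for a single subset $A \subseteq \Omega$, then
+ \[ \SA = \left\{ \varnothing, \; A, \; \Omega \setminus A, \; \Omega \right\} \]
+ is the $\sigma$-algebra generated by $\SF$:
+ it contains $A$, it is closed under complements and countable unions,
+ and no smaller family does both.
+\end{example}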
+
+I've gone on too long with no examples.
+\begin{example}
+ [Examples of measurable spaces]
+ The first two examples actually say what $\SA$ is;
+ the third example (most important) will use generation.
+ \begin{enumerate}[(a)]
+ \ii If $\Omega$ is any set,
+ then the power set $\SA = 2^{\Omega}$ is obviously a $\sigma$-algebra.
+ This will be used if $\Omega$ is finite or countably infinite,
+ but it won't be very helpful if $\Omega$ is huge.
+ \ii If $\Omega$ is an uncountable set,
+ then we can declare $\SA$ to be all subsets of $\Omega$
+ which are either countable,
+ or which have countable complement.
+ (You should check this satisfies the definitions.)
+ This is a very ``coarse'' algebra.
+ \ii If $\Omega$ is a topological space,
+ the \vocab{Borel $\sigma$-algebra}
+ is defined as the $\sigma$-algebra generated by all the open sets of $\Omega$.
+ We denote it by $\SB(\Omega)$,
+ and call the space a \vocab{Borel space}.
+ As warned earlier, it is basically impossible to describe
+ what it looks like,
+ and instead you should think of it as saying
+ ``we can measure the open sets''.
+ \end{enumerate}
+\end{example}
+\begin{ques}
+ Show that the closed sets are in $\SB(\Omega)$ for
+ any topological space $\Omega$.
+ Show that $[0,1)$ %chktex 9
+ is also in $\SB(\RR)$.
+\end{ques}
+
+\section{Measure spaces}
+\begin{definition}
+ Measurable spaces $(\Omega, \SA)$ are then equipped
+ with a function $\mu \colon \SA \to [0, +\infty]$
+ called the \vocab{measure}, which is required to satisfy
+ the following axioms:
+ \begin{itemize}
+ \ii $\mu(\varnothing) = 0$
+ \ii \textbf{Countable additivity}:
+ If $A_1$, $A_2$, \dots are disjoint sets in $\SA$,
+ then \[ \mu\left( \bigsqcup_n A_n \right) = \sum_n \mu(A_n). \]
+ \end{itemize}
+ The triple $(\Omega, \SA, \mu)$ is called a \vocab{measure space}.
+ It's called a \vocab{probability space} if $\mu(\Omega) = 1$.
+\end{definition}
+\begin{exercise}
+ [Weaker equivalent definitions]
+ I chose to give axioms for $\SA$ and $\mu$
+ that capture how people think of them in practice,
+ which means there is some redundancy:
+ for example, being closed under complements and unions
+ is enough to get intersections, by de Morgan's law.
+ Here are more minimal definitions,
+ which are useful if you are trying to prove something satisfies them
+ to reduce the amount of work you have to do:
+ \begin{enumerate}[(a)]
+ \ii The axioms on $\SA$ can be weakened
+ to (i) $\varnothing \in \SA$ and (ii) $\SA$ is closed under
+ complements and countable unions.
+ \ii The axioms on $\mu$ can be weakened to
+ (i) $\mu(\varnothing) = 0$,
+ (ii) $\mu(A \sqcup B) = \mu(A) + \mu(B)$, and
+ (iii) for $A_1 \supseteq A_2 \supseteq \cdots$ with $\mu(A_1) < \infty$,
+ we have $\mu\left( \bigcap_n A_n \right) = \lim_n \mu(A_n)$.
+ \end{enumerate}
+\end{exercise}
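+For instance, the de Morgan reduction mentioned in (a) is a one-liner:
+closure under complements and countable unions already gives
+countable intersections, since
+\[ \bigcap_n A_n = \Omega \setminus \bigcup_n \left( \Omega \setminus A_n \right) \in \SA. \]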
+
+
+\begin{remark}
+Here are some immediate remarks on these definitions.
+\begin{itemize}
+ \ii If $A \subseteq B$ are measurable,
+ then $\mu(A) \le \mu(B)$ since $\mu(B) = \mu(A) + \mu(B \setminus A)$.
+ \ii In particular, in a probability space all measures are in $[0,1]$.
+ On the other hand, for general measure spaces we'll allow $+\infty$
+ as a possible measure
+ (hence the choice of $[0,+\infty]$ as codomain for $\mu$).
+ \ii We want to allow at least countable unions / additivity
+ because with finite unions it's too hard to make progress:
+ it's too hard to estimate the area of a circle
+ without being able to talk about limits of countably infinitely many triangles.
+ \end{itemize}
+\end{remark}
+We \emph{don't} want to allow uncountable unions and additivity,
+because uncountable sums basically never work out.
+In particular, there is a nice elementary exercise as follows:
+\begin{exercise}
+ [Tricky]
+ Let $S$ be an uncountable set of positive real numbers.
+ Show that some finite subset $T \subseteq S$ has sum greater than $10^{2019}$.
+ Colloquially, ``uncountably many positive reals
+ cannot have finite sum''.
+\end{exercise}
+So countable sums are as far as we'll let the infinite sums go.
+This is the reason why we considered $\sigma$-algebras in the first place.
+
+
+\begin{example}
+ [Measures]
+ We now discuss measures on each of the spaces
+ in our previous examples.
+ \begin{enumerate}[(a)]
+ \ii If $\SA = 2^\Omega$ (or for that matter any $\SA$)
+ we may declare $\mu(A) = |A|$ for each $A \in \SA$
+ (even if $|A| = \infty$).
+ This is called the \vocab{counting measure},
+ simply counting the number of elements.
+
+ This is useful if $\Omega$ is countably infinite,
+ and optimal if $\Omega$ is finite (and nonempty).
+ In the latter case, we will often normalize by
+ $\mu(A) = \frac{|A|}{|\Omega|}$
+ so that $\Omega$ becomes a probability space.
+
+ \ii Suppose $\Omega$ was uncountable
+ and we took $\SA$ to be the countable sets and their complements.
+ Then
+ \[
+ \mu(A) = \begin{cases}
+ 0 & \text{$A$ is countable} \\
+ 1 & \text{$\Omega \setminus A$ is countable}
+ \end{cases}
+ \]
+ is a measure. (Check this.)
+
+ \ii Elephant in the room:
+ defining a measure on $\SB(\Omega)$ is hard even for $\Omega = \RR$,
+ and is done in the next chapter.
+ So you will have to hold your breath.
+ Right now, all you know is that by declaring my \emph{intent}
+ to define a measure on $\SB(\Omega)$,
+ I am hoping that at least every open set will have a volume.
+ \end{enumerate}
+\end{example}
+
+\section{A hint of Banach-Tarski}
+I will now try to convince you that $\SB(\Omega)$
+is a necessary concession,
+and for general topological spaces like $\Omega = \RR^n$,
+there is no hope of assigning a measure to $2^{\Omega}$.
+(In the literature, this example is called a Vitali set.)
+\begin{example}
+ [A geometric example why $\SA = 2^\Omega$ is unsuitable]
+ Let $\Omega$ denote the unit circle in $\RR^2$
+ and $\SA = 2^\Omega$.
+ We will show that any measure $\mu$ on $\Omega$
+ with $\mu(\Omega) = 1$ will have undesirable properties.
+
+ Let $\sim$ denote an equivalence relation on $\Omega$
+ defined as follows: two points are equivalent
+ if they differ by a rotation around the origin by a rational multiple of $\pi$.
+ We may pick a representative from each equivalence class,
+ letting $X$ denote the set of representatives.
+ Then
+ \[ \Omega = \bigsqcup_{\substack{q \in \QQ \\ 0 \le q < 2}}
+ \left( X \text{ rotated by $q\pi$ radians} \right). \]
+ Since rotating a set should not change its measure
+ (the basic sanity check for any notion of ``length'' on the circle),
+ each of the rotated copies of $X$ should have the same measure $m$.
+ But $\mu(\Omega) = 1$,
+ and no value of $m$ is consistent with countable additivity:
+ if $m = 0$ we get $\mu(\Omega) = 0$,
+ and if $m > 0$ we get $\mu(\Omega) = \infty$.
+\end{example}
+\begin{remark}
+ [Choice]
+ Experts may recognize that picking a representative
+ (i.e.\ creating set $X$)
+ technically requires the Axiom of Choice.
+ That is why, when people talk about Banach-Tarski issues,
+ the Axiom of Choice almost always gets honorable mention as well.
+\end{remark}
+
+Stay tuned to actually see a construction for $\SB(\RR^n)$
+in the next chapter.
+
+\section{Measurable functions}
+In the past, when we had topological spaces,
+we considered continuous functions.
+The analog here is:
+\begin{definition}
+ Let $(X, \SA)$ and $(Y, \SB)$ be measurable spaces
+ (or measure spaces).
+ A function $f \colon X \to Y$ is \vocab{measurable}
+ if for any measurable set $S \subseteq Y$ (i.e.\ $S \in \SB$)
+ the preimage $f\pre(S)$ is measurable (i.e.\ $f\pre(S) \in \SA$).
+\end{definition}
+
+In practice, most functions you encounter will be continuous anyways,
+and in that case we are fine.
+\begin{proposition}
+ [Continuous implies Borel measurable]
+ Suppose $X$ and $Y$ are topological spaces
+ and we pick the Borel $\sigma$-algebras on both.
+ A function $f \colon X \to Y$
+ which is continuous as a map of topological spaces
+ is also measurable.
+\end{proposition}
+\begin{proof}
+ The family of subsets $S \subseteq Y$ for which $f\pre(S) \in \SB(X)$
+ is a $\sigma$-algebra, since taking preimages
+ commutes with complements and countable unions.
+ Since $f$ is continuous, this family contains every open set of $Y$,
+ and hence the $\sigma$-algebra they generate, namely $\SB(Y)$.
+\end{proof}
+
+\section{On the word ``almost''}
+In later chapters the phrases ``almost everywhere''
+and ``almost surely'' will start to come up,
+and it seems prudent to take the time to talk about them now.
+
+\begin{definition}
+ We say that property $P$ occurs
+ \vocab{almost everywhere} or \vocab{almost surely} if the set
+ \[ \left\{ \omega \in \Omega \mid \text{$P$ does not hold for $\omega$} \right\} \]
+ has measure zero.
+\end{definition}
+
+For example, if we say ``$f = g$ almost everywhere''
+for some functions $f$ and $g$ defined on a measure space $\Omega$,
+then we mean that $f(\omega) = g(\omega)$ for all $\omega \in \Omega$
+other than a measure-zero set.
+
+There, that's the definition.
+The main thing to now update your instincts on is that
+\begin{moral}
+ In measure theory,
+ we basically only care about things up to almost-everywhere.
+\end{moral}
+Here are some examples:
+\begin{itemize}
+ \ii If $f=g$ almost everywhere,
+ then measure theory will basically not tell these functions apart.
+ For example, $\int_\Omega f \; d\omega = \int_\Omega g \; d\omega$
+ will hold for two functions agreeing almost everywhere.
+ \ii As another example,
+ if we prove ``there exists a unique function $f$ such that so-and-so'',
+ the uniqueness is usually going to be up to measure-zero sets.
+\end{itemize}
+You can think of this sort of like group isomorphism,
+where two groups are considered ``basically the same'' when they are isomorphic,
+except this one might take a little while to get used to.\footnote{In fact,
+ some people will even define functions on measure spaces
+ as \emph{equivalence classes} of maps,
+ modded out by agreement outside a measure zero set.}
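+For a concrete instance, recall the earlier example where $\Omega$ is uncountable,
+$\SA$ consists of the countable sets and their complements,
+and $\mu$ is $0$ on the former and $1$ on the latter.
+In that measure space, the function which is $1$ on some countable set $C$
+and $0$ elsewhere is equal to the zero function almost everywhere,
+since it differs from zero exactly on $C$, which has measure zero.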
+
+\section{\problemhead}
+\begin{dproblem}
+ Let $(\Omega, \SA, \mu)$ be a probability space.
+ Show that the intersection of countably many sets of measure $1$
+ also has measure $1$.
+\end{dproblem}
+
+\begin{problem}
+ [On countable $\sigma$-algebras]
+ \gim
+ Let $\SA$ be a $\sigma$-algebra on a set $\Omega$.
+ Suppose that $\SA$ has countable cardinality.
+ Prove that $|\SA|$ is finite and equals a power of $2$.
+\end{problem}
diff --git a/books/napkin/meromorphic.tex b/books/napkin/meromorphic.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2ac8e7e67f8e2666bc6e0af4a2168814098f7a43
--- /dev/null
+++ b/books/napkin/meromorphic.tex
@@ -0,0 +1,489 @@
+\chapter{Meromorphic functions}
+\section{The second nicest functions on earth}
+If holomorphic functions are like polynomials,
+then \emph{meromorphic} functions are like rational functions.
+Basically, a meromorphic function is a function of the form
+$ \frac{A(z)}{B(z)} $
+where $A, B \colon U \to \CC$ are holomorphic and $B$ is not identically zero.
+The most important example of a meromorphic function is $\frac 1z$.
+
+We are going to see that meromorphic functions behave
+like ``almost-holomorphic'' functions.
+Specifically, a meromorphic function $A/B$ will be holomorphic at all points
+except the zeros of $B$ (called \emph{poles}).
+By the identity theorem, there cannot be too many zeros of $B$!
+So meromorphic functions can be thought of as ``almost holomorphic''
+(like $\frac 1z$, which is holomorphic everywhere but the origin).
+We saw that
+\[ \frac{1}{2\pi i} \oint_{\gamma} \frac 1z \; dz = 1 \]
+for $\gamma(t) = e^{it}$ the unit circle.
+We will extend our results on contours to such situations.
+
+It turns out that, instead of just getting $\oint_\gamma f(z) \; dz = 0$
+like we did in the holomorphic case,
+the contour integrals will actually be used to
+\emph{count the number of poles} inside the loop $\gamma$.
+It's ridiculous, I know.
+
+\section{Meromorphic functions}
+\prototype{$\frac 1z$, with a pole of order $1$ and residue $1$ at $z=0$.}
+
+Let $U$ be an open subset of $\CC$ again.
+\begin{definition}
+ A function $f : U \to \CC$ is \vocab{meromorphic}
+ if there exist holomorphic functions $A, B \colon U \to \CC$
+ with $B$ not identically zero in any open neighborhood,
+ and $f(z) = A(z)/B(z)$ whenever $B(z) \ne 0$.
+\end{definition}
+Let's see how this function $f$ behaves.
+If $z \in U$ has $B(z) \neq 0$,
+then in some small open neighborhood the function $B$ isn't zero
+at all, and thus $A/B$ is in fact \emph{holomorphic};
+thus $f$ is holomorphic at $z$.
+(Concrete example: $\frac 1z$ is holomorphic
+in any disk not containing $0$.)
+
+On the other hand, suppose $p \in U$ has $B(p) = 0$: without loss of generality, $p=0$
+to ease notation.
+By using the Taylor series at $p=0$ we can put
+\[ B(z) = c_k z^k + c_{k+1} z^{k+1} + \dots \]
+with $c_k \neq 0$
+(certainly some coefficient is nonzero since $B$ is not identically zero!).
+Then we can write
+\[ \frac{1}{B(z)} = \frac{1}{z^k} \cdot \frac{1}{c_k + c_{k+1}z + \dots}. \]
+But the fraction on the right is a
+holomorphic function in this open neighborhood!
+So all that's happened is that we have an extra $z^{-k}$ kicking around.
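+For a concrete instance of this factoring, take $B(z) = z^2 - z^3 = z^2(1-z)$,
+which has a double zero at the origin (so $k = 2$). Then near $z = 0$,
+\[ \frac{1}{B(z)} = \frac{1}{z^2} \cdot \frac{1}{1-z}
+ = \frac{1}{z^2} \left( 1 + z + z^2 + \dots \right) \]
+and indeed $1/B$ is just $z^{-2}$ times a perfectly fine holomorphic function.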
+
+%We want to consider functions $f$ defined on all points in $U$
+%except for a set of ``isolated'' singularities;
+%for example, something like \[ \frac{1}{z(z+1)(z^2+1)} \] which is defined
+%for all $z$ other than $z=0$, $z=-1$ and $z=i$.
+%Or even \[ \frac{1}{\sin(2\pi z)}, \] which is defined at every $z$ which is \emph{not} an integer.
+%% Even though there's infinitely many points, they are not really that close together.
+
+This gives us an equivalent way of viewing meromorphic functions:
+
+\begin{definition}
+ Let $f : U \to \CC$ as usual.
+ A \vocab{meromorphic} function is a
+ function which is holomorphic on $U$
+ except at an isolated set $S$ of points
+ (meaning it is holomorphic as a function $U \setminus S \to \CC$).
+ For each $p \in S$, called a \vocab{pole} of $f$,
+ the function $f$ must admit a \vocab{Laurent series},
+ meaning that
+ \[
+ f(z) =
+ \frac{c_{-m}}{(z-p)^m}
+ + \frac{c_{-m+1}}{(z-p)^{m-1}}
+ + \dots
+ + \frac{c_{-1}}{z-p} + c_0 + c_1 (z-p) + \dots
+ \]
+ for all $z$ in some open neighborhood of $p$,
+ other than $z = p$.
+ Here $m$ is a positive integer (and $c_{-m} \neq 0$).
+\end{definition}
+Note that the principal part (the negative powers) \emph{must} terminate:
+only finitely many negative exponents appear.
+By ``isolated set'', I mean that we can draw
+open neighborhoods around each pole in $S$,
+in such a way that no two open neighborhoods intersect.
+
+\begin{example}
+ [Example of a meromorphic function]
+ Consider the function \[ \frac{z+1}{\sin z}. \]
+ It is meromorphic, because it is holomorphic everywhere except at the zeros of $\sin z$.
+ At each of these points we can put a Laurent series: for example at $z=0$ we have
+ \begin{align*}
+ \frac{z+1}{\sin z}
+ &= (z+1) \cdot \frac{1}{z - \frac{z^3}{3!} + \frac{z^5}{5!} - \dots} \\
+ &= \frac 1z \cdot \frac{z+1}{1 - \left(%
+ \frac{z^2}{3!} - \frac{z^4}{5!} + \frac{z^6}{7!} - \dots \right)} \\
+ &= \frac 1z \cdot (z+1) \sum_{k \ge 0} \left( %
+ \frac{z^2}{3!}-\frac{z^4}{5!}+\frac{z^6}{7!}-\dots \right)^k.
+ \end{align*}
+ If we expand out the horrible sum (which I won't do),
+ then you get $\frac 1z$ times a perfectly
+ fine Taylor series, i.e.\ a Laurent series.
+\end{example}
+
+\begin{abuse}
+ We'll often say something like
+ ``consider the function $f : \CC \to \CC$
+ by $z \mapsto \frac 1z$''.
+ Of course this isn't completely correct,
+ because $f$ doesn't have a value at $z=0$.
+ If I was going to be completely rigorous
+ I would just set $f(0) = 2015$ or something and move on
+ with life, but for all intents and purposes
+ let's just think of it as ``undefined at $z=0$''.
+
+ Why don't I just write $g : \CC \setminus \{0\} \to \CC$?
+ The reason I have to do this is that it's still important
+ for $f$ to remember it's ``trying'' to be holomorphic on $\CC$,
+ even if it isn't assigned a value at $z=0$.
+ As a function $\CC \setminus \{0\} \to \CC$ the function $\frac 1z$ is actually holomorphic.
+\end{abuse}
+
+\begin{remark}
+ I have shown that any function $A(z)/B(z)$
+ has this characterization with poles,
+ but an important result is
+ that the converse is true too:
+ if $f : U \setminus S \to \CC$ is holomorphic for some isolated set $S$,
+ and moreover $f$ admits a Laurent series at each point in $S$,
+ then $f$ can be written as a quotient of two holomorphic functions.
+ I won't prove this here, but it is good to be aware of.
+\end{remark}
+
+\begin{definition}
+ Let $p$ be a pole of a meromorphic function $f$, with Laurent series
+ \[
+ f(z) =
+ \frac{c_{-m}}{(z-p)^m}
+ + \frac{c_{-m+1}}{(z-p)^{m-1}}
+ + \dots
+ + \frac{c_{-1}}{z-p} + c_0 + c_1 (z-p) + \dots.
+ \]
+ The integer $m$ is called the \vocab{order} of the pole.
+ A pole of order $1$ is called a \vocab{simple pole}.
+
+ We also give the coefficient $c_{-1}$ a name, the \vocab{residue} of $f$ at $p$,
+ which we write $\Res(f;p)$.
+\end{definition}
+
+The order of a pole tells you how ``bad'' the pole is.
+The order of a pole is the ``opposite'' concept of the \vocab{multiplicity} of a \vocab{zero}.
+If $f$ has a pole at zero, then its Taylor series near $z=0$ might look something like
+\[ f(z) = \frac{1}{z^5} + \frac{8}{z^3} - \frac{2}{z^2} + \frac{4}{z} + 9 - 3z + 8z^2 + \dots \]
+and so $f$ has a pole of order five.
+By analogy, if $g$ has a zero at $z=0$, it might look something like
+\[ g(z) = 3z^3 + 2z^4 + 9z^5 + \dots \]
+and so $g$ has a zero of multiplicity three.
+These orders are additive: $f(z) g(z)$ still has a pole of order $5-3=2$,
+while $f(z)g(z)^2$ is completely patched, and in fact has a \vocab{simple zero}
+(that is, a zero of multiplicity $1$).
+
+\begin{exercise}
+ Convince yourself that orders are additive as described above.
+ (This is obvious once you understand that you
+ are multiplying Taylor/Laurent series.)
+\end{exercise}
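+Concretely, with the $f$ and $g$ displayed above,
+multiplying the leading terms gives
+\[ f(z) g(z) = \frac{3}{z^2} + \dots
+ \qquad\text{and}\qquad
+ f(z) g(z)^2 = 9z + \dots \]
+matching the promised pole of order $2$ and simple zero.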
+
+Metaphorically, poles can be thought of as ``negative zeros''.
+
+
+We can now give many more examples.
+\begin{example}
+ [Examples of meromorphic functions]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Any holomorphic function is a meromorphic function which happens to have no poles.
+ Stupid, yes.
+ \ii The function $\CC \to \CC$ by $z \mapsto 100z\inv$ for $z \neq 0$
+ but undefined at zero is a meromorphic function.
+ Its only pole is at zero, which has order $1$ and residue $100$.
+ \ii The function $\CC \to \CC$ by $z \mapsto z^{-3} + z^2 + z^9$ is also a meromorphic function.
+ Its only pole is at zero, and it has order $3$, and residue $0$.
+ \ii The function $\CC \to \CC$ by $z \mapsto \frac{e^z}{z^2}$ is meromorphic,
+ with the Laurent series at $z=0$ given by
+ \[
+ \frac{e^z}{z^2}
+ = \frac{1}{z^2} + \frac{1}{z} + \frac{1}{2} + \frac{z}{6} + \frac{z^2}{24} + \frac{z^3}{120}
+ + \dots.
+ \]
+ Hence the pole $z=0$ has order $2$ and residue $1$.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [A rational meromorphic function]
+ Consider the function $\CC \to \CC$ given by
+ \begin{align*}
+ z &\mapsto \frac{z^4+1}{z^2-1} = z^2 + 1 + \frac{2}{(z-1)(z+1)} \\
+ &= z^2 + 1 + \frac1{z-1} \cdot \frac{1}{1+\frac{z-1}{2}} \\
+ &= \frac{1}{z-1} + \frac32 + \frac94(z-1) + \frac{7}{8}(z-1)^2 - \dots
+ \end{align*}
+ It has a pole of order $1$ and residue $1$ at $z=1$.
+ (It also has a pole of order $1$ at $z=-1$; you are invited to compute the residue.)
+\end{example}
+\begin{example}
+ [Function with infinitely many poles]
+ The function $\CC \to \CC$ by \[ z \mapsto \frac{1}{\sin(z)} \]
+ has infinitely many poles: the numbers $z = k\pi$, where $k$ is an integer.
+ Let's compute the Laurent series at just $z=0$:
+ \begin{align*}
+ \frac{1}{\sin(z)}
+ &= \frac{1}{\frac{z}{1!} - \frac{z^3}{3!} + \frac{z^5}{5!} - \dots} \\
+ % &= \frac{1/z}{\frac{1}{1!} - \frac{z^2}{3!} + \frac{z^4}{5!} - \dots} \\
+ &= \frac 1z \cdot \frac{1}{1 - \left( \frac{z^2}{3!} - \frac{z^4}{5!} + \dots \right)} \\
+ &= \frac 1z \sum_{k \ge 0} \left( \frac{z^2}{3!} - \frac{z^4}{5!} + \dots \right)^k.
+ \end{align*}
+ This is a Laurent series, though I have no clue what the coefficients are.
+ You can at least see the residue; the constant term of that huge sum is $1$,
+ so the residue is $1$.
+ Also, the pole has order $1$.
+\end{example}
+
+The Laurent series, if it exists, is unique (as you might have guessed),
+and by our result on holomorphic functions it is actually valid for \emph{any}
+disk centered at $p$ (minus the point $p$).
+The part $\frac{c_{-1}}{z-p} + \dots + \frac{c_{-m}}{(z-p)^m}$ is called the \vocab{principal part},
+and the rest of the series $c_0 + c_1(z-p) + \dots$ is called the \vocab{analytic part}.
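+For example, in the Laurent series of $\frac{e^z}{z^2}$ at $p = 0$ computed earlier,
+the principal part is $\frac{1}{z^2} + \frac{1}{z}$
+and the analytic part is $\frac12 + \frac{z}{6} + \frac{z^2}{24} + \dots$.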
+
+
+
+\section{Winding numbers and the residue theorem}
+Recall that for a counterclockwise circle $\gamma$ and a point $p$ inside it, we had
+\[
+ \oint_{\gamma} (z-p)^m \; dz =
+ \begin{cases}
+ 0 & m \neq -1 \\
+ 2\pi i & m = -1
+ \end{cases}
+\]
+where $m$ is an integer.
+One can extend this result to in fact show that $\oint_\gamma (z-p)^m \; dz = 0$
+for \emph{any} loop $\gamma$, where $m \neq -1$.
+So we associate a special name for the nonzero value at $m=-1$.
+\begin{definition}
+ For a point $p \in \CC$ and a loop $\gamma$ not passing through it,
+ we define the \vocab{winding number}, denoted $\Wind(\gamma, p)$, by
+ \[
+ \Wind(\gamma, p) = \frac{1}{2\pi i} \oint_{\gamma} \frac{1}{z-p} \; dz.
+ \]
+\end{definition}
+For example, by our previous results we see that if $\gamma$ is a circle, we have
+\[
+ \Wind(\text{circle}, p)
+ =
+ \begin{cases}
+ 1 & \text{$p$ inside the circle} \\
+ 0 & \text{$p$ outside the circle}.
+ \end{cases}
+\]
+If you've read the chapter on fundamental groups, then the winding number
+is just the class of $\gamma$ in the fundamental group of $\CC \setminus \{p\}$,
+which is isomorphic to $\ZZ$.
+In particular, the winding number is always an integer (the proof of this requires the complex logarithm,
+so we omit it here).
+In the simplest case the winding numbers are either $0$ or $1$.
+\begin{definition}
+ We say a loop $\gamma$ is \vocab{regular} if $\Wind(\gamma, p) = 1$
+ for all points $p$ in the interior of $\gamma$ (for example,
+ if $\gamma$ is a counterclockwise circle).
+\end{definition}
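If you like, the winding number can be sanity-checked numerically.
The following Python sketch (the helper \texttt{winding\_number} is our own invention,
purely illustrative, not anything from the text) approximates the defining integral
over the unit circle by a midpoint Riemann sum:

```python
import cmath

def winding_number(p, n=2000):
    # Approximate (1/(2*pi*i)) * integral over the counterclockwise
    # unit circle of dz / (z - p), as a midpoint Riemann sum.
    total = 0j
    for k in range(n):
        t0 = 2 * cmath.pi * k / n
        t1 = 2 * cmath.pi * (k + 1) / n
        z0 = cmath.exp(1j * t0)             # start of the small arc
        z1 = cmath.exp(1j * t1)             # end of the small arc
        zm = cmath.exp(1j * (t0 + t1) / 2)  # midpoint sample of z
        total += (z1 - z0) / (zm - p)
    return total / (2j * cmath.pi)

print(round(winding_number(0).real))  # p = 0 lies inside: expect 1
print(round(winding_number(3).real))  # p = 3 lies outside: expect 0
```

The sum converges very quickly here because the integrand is smooth and
periodic along the circle.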
+
+With all these ingredients we get a stunning generalization of the Cauchy-Goursat theorem:
+\begin{theorem}
+ [Cauchy's residue theorem]
+ Let $f : \Omega \to \CC$ be meromorphic, where $\Omega$ is simply connected.
+ Then for any loop $\gamma$ not passing through any of its poles, we have
+ \[
+ \frac{1}{2\pi i} \oint_{\gamma} f(z) \; dz
+ = \sum_{\text{pole $p$}} \Wind(\gamma, p) \Res(f; p).
+ \]
+ In particular, if $\gamma$ is regular then the contour integral
+ is the sum of all the residues, in the form
+ \[
+ \frac{1}{2\pi i} \oint_{\gamma} f(z) \; dz
+ = \sum_{\substack{\text{pole $p$} \\ \text{inside $\gamma$}}} \Res(f; p).
+ \]
+\end{theorem}
+\begin{ques}
+ Verify that this result coincides
+ with what you expect when you integrate $\oint_\gamma cz\inv \; dz$
+ for $\gamma$ a counter-clockwise circle.
+\end{ques}
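+For instance, the residue theorem instantly recovers contour integrals
+like the one for $\frac{e^z}{z^2}$ from earlier:
+since $\Res\left( \frac{e^z}{z^2}; 0 \right) = 1$,
+for $\gamma$ the counterclockwise unit circle we get
+\[ \oint_\gamma \frac{e^z}{z^2} \; dz
+ = 2\pi i \cdot \Res\left( \frac{e^z}{z^2}; 0 \right) = 2\pi i. \]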
+
+The proof from here is not really too impressive -- the ``work'' was already
+done in our statements about the winding number.
+\begin{proof}
+ Let the poles with nonzero winding number be $p_1, \dots, p_k$ (the others do not affect the sum).\footnote{
+ To show that there must be finitely many such poles: recall that all our contours $\gamma : [a,b] \to \CC$
+ are in fact bounded, so there is some big closed disk $D$ which contains all of $\gamma$.
+ The poles outside $D$ thus have winding number zero.
+ Now we cannot have infinitely many poles inside the disk $D$, for $D$ is compact and the
+ set of poles is a closed and isolated set!}
+ Then we can write $f$ in the form
+ \[
+ f(z) = g(z) + \sum_{i=1}^k P_i\left( \frac{1}{z-p_i} \right)
+ \]
+ where $P_i\left( \frac{1}{z-p_i} \right)$ is the principal part of the pole $p_i$.
+ (For example, if $f(z) = \frac{z^3-z+1}{z(z+1)}$ we would write $f(z) = (z-1) + \frac1z - \frac1{1+z}$.)
+
+ The point of doing so is that the function $g$ is holomorphic (we've removed all the ``bad'' parts), so
+ \[ \oint_{\gamma} g(z) \; dz = 0 \]
+ by Cauchy-Goursat.
+
+ On the other hand, if $P_i(x) = c_1x + c_2x^2 + \dots + c_d x^d$ then
+ \begin{align*}
+ \oint_{\gamma} P_i\left( \frac{1}{z-p_i} \right) \; dz
+ &=
+ \oint_{\gamma} c_1 \cdot \left( \frac{1}{z-p_i} \right) \; dz
+ + \oint_{\gamma} c_2 \cdot \left( \frac{1}{z-p_i} \right)^2 \; dz
+ + \dots \\
+ &= c_1 \cdot \Wind(\gamma, p_i) + 0 + 0 + \dots \\
+ &= \Wind(\gamma, p_i) \Res(f; p_i).
+ \end{align*}
+ Summing over $i$ gives the conclusion.
+\end{proof}
+
+\section{Argument principle}
+One tricky application is as follows.
+Given a polynomial $P(x) = (x-a_1)^{e_1}(x-a_2)^{e_2}\dots(x-a_n)^{e_n}$, you might know that we have
+\[ \frac{P'(x)}{P(x)} = \frac{e_1}{x-a_1} + \frac{e_2}{x-a_2} + \dots + \frac{e_n}{x-a_n}. \]
+The quantity $P'/P$ is called the \vocab{logarithmic derivative}, as it is the derivative of $\log P$.
+This trick allows us to convert zeros of $P$ into poles of $P'/P$ with order $1$;
+moreover the residues of these poles are the multiplicities of the roots.
+
+In an analogous fashion, we can obtain a similar result for any meromorphic function $f$.
+\begin{proposition}
+ [The logarithmic derivative]
+ Let $f : U \to \CC$ be a meromorphic function.
+ Then the logarithmic derivative $f'/f$ is meromorphic as a function from $U$ to $\CC$;
+ its zeros and poles are:
+ \begin{enumerate}[(i)]
+ \ii A pole at each zero of $f$ whose residue is the multiplicity, and
+ \ii A pole at each pole of $f$ whose residue is the negative of the pole's order.
+ \end{enumerate}
+\end{proposition}
+Again, you can almost think of a pole as a zero of negative multiplicity.
+This spirit is exemplified below.
+\begin{proof}
+ Dead easy with Taylor series.
+ Let $a$ be a zero/pole of $f$, and WLOG set $a=0$ for convenience.
+ We take the Laurent series at zero to get
+ \[ f(z) = c_k z^k + c_{k+1} z^{k+1} + \dots \] % chktex 25
+ where $k < 0$ if $0$ is a pole and $k > 0$ if $0$ is a zero.
+ Taking the derivative gives
+ \[ f'(z) = kc_k z^{k-1} + (k+1)c_{k+1}z^{k} + \dots. \]
+ Now look at $f'/f$; with some computation, it equals
+ \[
+ \frac{f'(z)}{f(z)}
+ = \frac 1z \frac{kc_k + (k+1)c_{k+1}z + \dots}{c_k + c_{k+1}z + \dots}.
+ \]
+ So we get a simple pole at $z=0$, with residue $k$.
+\end{proof}
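+For example, take $f(z) = \frac{z^3}{z-1}$,
+which has a zero of multiplicity $3$ at $z = 0$ and a simple pole at $z = 1$.
+Differentiating $\log f = 3 \log z - \log(z-1)$ gives
+\[ \frac{f'(z)}{f(z)} = \frac{3}{z} - \frac{1}{z-1}, \]
+a simple pole of residue $3$ at the zero
+and a simple pole of residue $-1$ at the pole, as promised.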
+
+Using this trick you can determine the number of zeros and poles inside a regular closed curve,
+using the so-called Argument Principle.
+
+\begin{theorem}
+ [Argument principle]
+ \label{thm:arg_principle}
+ Let $\gamma$ be a regular curve.
+ Suppose $f : U \to \CC$ is meromorphic inside and on $\gamma$, and
+ none of its zeros or poles lie on $\gamma$.
+ Then
+ \[
+ \frac{1}{2\pi i} \oint_\gamma \frac{f'}{f} \; dz
+ = Z - P
+ \]
+ where $Z$ is the number of zeros inside $\gamma$ (counted with multiplicity)
+ and $P$ is the number of poles inside $\gamma$ (again with multiplicity).
+\end{theorem}
+\begin{proof}
+ Immediate by applying Cauchy's residue theorem alongside the preceding proposition.
+ In fact you can generalize to any curve $\gamma$ via the winding number:
+ the integral is
+ \[ \frac{1}{2\pi i} \oint_\gamma \frac{f'}{f} \; dz
+ = \sum_{\text{zero $z$}} \Wind(\gamma,z)
+ - \sum_{\text{pole $p$}} \Wind(\gamma,p) \]
+ where the sums are with multiplicity.
+\end{proof}
+
+Thus the Argument Principle allows one to count zeros and poles inside any region of choice.
+
+Computers can use this to get information on functions whose values can be computed but whose behavior as a whole
+is hard to understand.
+Suppose you have a holomorphic function $f$, and you want to understand where its zeros are.
+Then just start picking various circles $\gamma$.
+Even with machine rounding error, the integral will be close enough to the true integer value that
+we can decide how many zeros are in any given circle.
+Numerical evidence for the Riemann Hypothesis (concerning the zeros of the Riemann zeta function)
+can be obtained in this way.
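The recipe in the previous paragraph can be sketched in a few lines of Python
(the helper \texttt{zeros\_minus\_poles} and the test polynomial are our own,
purely for illustration): it approximates
$\frac{1}{2\pi i} \oint_\gamma \frac{f'}{f} \; dz$ over a circle and rounds
to the nearest integer, exactly as a computer with rounding error would.

```python
import cmath

def zeros_minus_poles(f, fp, radius, n=4000):
    # Approximate (1/(2*pi*i)) * integral of f'(z)/f(z) over the
    # counterclockwise circle |z| = radius (the argument principle),
    # rounding to the nearest integer to absorb numerical error.
    total = 0j
    for k in range(n):
        t0 = 2 * cmath.pi * k / n
        t1 = 2 * cmath.pi * (k + 1) / n
        z0 = radius * cmath.exp(1j * t0)
        z1 = radius * cmath.exp(1j * t1)
        zm = radius * cmath.exp(1j * (t0 + t1) / 2)  # midpoint sample
        total += (z1 - z0) * fp(zm) / f(zm)
    return round((total / (2j * cmath.pi)).real)

f = lambda z: z**3 - 1   # zeros: the three cube roots of unity, all on |z| = 1
fp = lambda z: 3 * z**2  # its derivative

print(zeros_minus_poles(f, fp, 2.0))   # radius 2 encloses all three zeros
print(zeros_minus_poles(f, fp, 0.5))   # radius 1/2 encloses none
```

Since this $f$ is holomorphic, the count is just the number of zeros inside the circle.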
+
+\section{Philosophy: why are holomorphic functions so nice?}
+All the fun we've had with holomorphic and meromorphic functions comes down to the fact that
+complex differentiability is such a strong requirement.
+It's a small miracle that $\CC$, which \emph{a priori} looks only like $\RR^2$,
+is in fact a field.
+Moreover, $\RR^2$ has the nice property that one can draw nontrivial loops
+(it's also true for real functions that $\int_a^a f \; dx = 0$, but this is not so interesting!),
+and this makes the theory much more interesting.
+
+As another piece of intuition from Siu\footnote{Harvard professor.}:
+If you try to get (left) differentiable functions over \emph{quaternions},
+you find yourself with just linear functions.
+
+
+\section{\problemhead}
+% Looman-Menchoff theorem?
+
+\begin{problem}
+ [Fundamental theorem of algebra]
+ Prove that if $f$ is a nonzero polynomial of degree $n$
+ then it has exactly $n$ complex roots, counted with multiplicity.
+\end{problem}
+
+\begin{dproblem}
+ [Rouch\'e's theorem]
+ Let $f, g \colon U \to \CC$ be holomorphic functions,
+ where $U$ contains the unit disk.
+ Suppose that $\left\lvert f(z) \right\rvert > \left\lvert g(z) \right\rvert$
+ for all $z$ on the unit circle.
+ Prove that $f$ and $f+g$ have the same number of zeros
+ which lie strictly inside the unit circle (counting multiplicities).
+\end{dproblem}
+
+\begin{problem}
+ [Wedge contour]
+ \gim
+ For each odd integer $n \ge 3$, evaluate the improper integral
+ \[ \int_0^\infty \frac{1}{1+x^{n}} \; dx. \]
+ \begin{hint}
+ This is called a ``wedge contour''.
+ Try to integrate over a wedge shape
+ consisting of a sector of a circle of radius $r$,
+ with central angle $\frac{2\pi}{n}$.
+ Then take the limit as $r \to \infty$.
+ \end{hint}
+ \begin{sol}
+ See \url{https://math.stackexchange.com/q/242514/229197},
+ which works out the case $n = 3$.
+ \end{sol}
+\end{problem}
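+As a sanity check of the smallest case $n = 3$ (the wedge contour predicts
+the value $\frac{\pi}{n \sin(\pi/n)}$), one can compare against a direct
+numerical integration; the substitution $x = t/(1-t)$ below, which maps
+$[0,1)$ onto $[0,\infty)$, is our own choice.

```python
import math

def simpson(g, a, b, n=1000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

# Case n = 3: substituting x = t/(1-t) turns the integral into
# the integral of (1-t) / ((1-t)^3 + t^3) over [0, 1].
g = lambda t: (1 - t) / ((1 - t)**3 + t**3)
approx = simpson(g, 0.0, 1.0)
exact = math.pi / (3 * math.sin(math.pi / 3))  # what the wedge contour predicts
```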
+
+\begin{problem}
+ [Another contour]
+ \yod
+ Prove that the integral
+ \[ \int_{-\infty}^{\infty} \frac{\cos x}{x^2+1} \; dx \]
+ converges and determine its value.
+ \begin{hint}
+ It's $\lim_{a \to \infty} \int_{-a}^{a} \frac{\cos x}{x^2+1} \; dx$.
+ For each $a$, construct a semicircle.
+ \end{hint}
+ % semicircle integral
+ % \quad \text{ and } \quad
+ % \int_{-\infty}^{\infty} \frac{\sin x}{x^2+1} \; dx
+\end{problem}
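+The semicircle computation gives the value $\pi/e$; as a numerical sanity
+check (ours, not part of the problem), integrate over a large symmetric
+interval, since the integrand decays like $1/x^2$ and the tail beyond
+$\pm 200$ is negligible at this tolerance.

```python
import math

def trapezoid(g, a, b, n):
    # Composite trapezoidal rule on [a, b] with n subintervals.
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        s += g(a + k * h)
    return s * h

g = lambda x: math.cos(x) / (x * x + 1)
approx = trapezoid(g, -200.0, 200.0, 400_000)
exact = math.pi / math.e  # the value the contour method produces
```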
+
+\begin{sproblem}
+ \gim
+ Let $f \colon U \to \CC$ be a nonconstant holomorphic function.
+ \begin{enumerate}[(a)]
+ \ii (Open mapping theorem)
+ Prove that $f\im(U)$ is open in $\CC$.\footnote{Thus
+ the image of \emph{any}
+ open set $V \subseteq U$ is open in $\CC$
+ (by repeating the proof for the restriction of $f$ to $V$).}
+ \ii (Maximum modulus principle)
+ Show that $\left\lvert f \right\rvert$
+ cannot have a maximum over $U$.
+ That is, show that for any $z \in U$,
+ there is some $z' \in U$ such that
+ $\left\lvert f(z) \right\rvert < \left\lvert f(z') \right\rvert$.
+ \end{enumerate}
+ % http://en.wikipedia.org/wiki/Open_mapping_theorem_(complex_analysis)
+\end{sproblem}
diff --git a/books/napkin/metric-prop.tex b/books/napkin/metric-prop.tex
new file mode 100644
index 0000000000000000000000000000000000000000..275ece7e8d514873e5ed0d3a2782d45773f706ed
--- /dev/null
+++ b/books/napkin/metric-prop.tex
@@ -0,0 +1,407 @@
+\chapter{Properties of metric spaces}
+At the end of the last chapter on metric spaces,
+we introduced two adjectives ``open'' and ``closed''.
+These are important because they'll grow up
+to be the \emph{definition} for a general topological space,
+once we graduate from metric spaces.
+
+To move forward, we provide a couple of niceness adjectives
+that apply to \emph{entire metric spaces},
+rather than just a set relative to a parent space.
+They are ``(totally) bounded'' and ``complete''.
+These adjectives are specific to metric spaces,
+but will grow up to become the notion of \emph{compactness},
+which is, in the words of \cite{ref:pugh},
+``the single most important concept in real analysis''.
+At the end of the chapter,
+we will know enough to realize that something is amiss
+with our definition of homeomorphism,
+and this will serve as the starting point for the next chapter,
+when we define fully general topological spaces.
+
+\section{Boundedness}
+\prototype{$[0,1]$ is bounded but $\RR$ is not.}
+Here is one notion of how to prevent a metric space
+from being a bit too large.
+
+\begin{definition}
+ A metric space $M$ is \vocab{bounded}
+ if there is a constant $D$ such that
+ $d(p,q) \le D$ for all $p,q \in M$.
+\end{definition}
+You can change the order of the quantifiers:
+\begin{proposition}
+ [Boundedness with radii instead of diameters]
+ A metric space $M$ is bounded if and only if
+ for every point $p \in M$, there is a radius $R$
+ (possibly depending on $p$) such that $d(p,q) \le R$ for all $q \in M$.
+\end{proposition}
+\begin{exercise}
+ Use the triangle inequality to show these are equivalent.
+ (The names ``radius'' and ``diameter'' are a big hint!)
+\end{exercise}
+
+\begin{example}
+ [Examples of bounded spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Finite intervals like $[0,1]$ and $(a,b)$ are bounded.
+ \ii The unit square $[0,1]^2$ is bounded.
+ \ii $\RR^n$ is not bounded for any $n \ge 1$.
+ \ii A discrete space on an infinite set is bounded.
+ \ii $\NN$ is not bounded, despite being
+ homeomorphic to the discrete space!
+ \end{enumerate}
+\end{example}
+
+The fact that a discrete space on an infinite set
+is ``bounded'' might be upsetting to you, so
+here is a somewhat stronger condition you can use:
+
+\begin{definition}
+ A metric space $M$ is \vocab{totally bounded}
+ if for any $\eps > 0$,
+ we can cover $M$ with finitely many $\eps$-neighborhoods.
+\end{definition}
+For example, if $\eps = 1/2$, you can cover $[0,1]^2$
+by just four $\eps$-neighborhoods:
+\begin{center}
+ \begin{asy}
+ size(4cm);
+ draw(shift( (-0.5,-0.5) )*unitsquare, black+1);
+ real d = 0.4;
+ real r = 0.5;
+ draw(CR(dir( 45)*d, r), dotted);
+ draw(CR(dir(135)*d, r), dotted);
+ draw(CR(dir(225)*d, r), dotted);
+ draw(CR(dir(315)*d, r), dotted);
+ \end{asy}
+\end{center}
+\begin{exercise}
+ Show that ``totally bounded'' implies ``bounded''.
+\end{exercise}
+\begin{example}
+ [Examples of totally bounded spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii A subset of $\RR^n$ is bounded if and only if
+ it is totally bounded.
+
+ This is for Euclidean geometry reasons:
+ for example in $\RR^2$ if I can cover a set
+ by a single disk of radius $2$,
+ then I can certainly cover it by finitely many
+ disks of radius $1/2$.
+ (We won't prove this rigorously.)
+
+ \ii So for example $[0,1]$ or $[0,2] \times [0,3]$
+ is totally bounded.
+
+ \ii In contrast, a discrete space on
+ an infinite set is not totally bounded.
+ \end{enumerate}
+\end{example}
+
+\section{Completeness}
+\prototype{$\RR$ is complete, but $\QQ$ and $(0,1)$ are not.}
+
+So far we can only talk about sequences converging if they have a limit.
+But consider the sequence
+\[ x_1 = 1, \; x_2 = 1.4, \; x_3 = 1.41, \; x_4 = 1.414, \dots. \]
+It converges to $\sqrt 2$ in $\RR$, of course.
+But it fails to converge in $\QQ$;
+there is no \emph{rational} number this sequence converges to.
+And so somehow, if we didn't know about the existence of $\RR$, we would
+have \emph{no idea} that the sequence $(x_n)$ is ``approaching'' something.
+
+That seems to be a shame.
+Let's set up a new definition to describe these sequences whose terms
+\textbf{get close to each other},
+even if they don't approach any particular point in the space.
+Thus, the definition should mention only the terms of the sequence themselves,
+not a proposed limit point.
+
+\begin{definition}
+ Let $x_1, x_2, \dots$ be a sequence which lives in a metric space $M = (M,d_M)$.
+ We say the sequence is \vocab{Cauchy} if for any $\eps > 0$, we have
+ \[ d_M(x_m, x_n) < \eps \]
+ for all sufficiently large $m$ and $n$.
+\end{definition}
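+The $\sqrt 2$ example can even be checked by machine: the sketch below
+(ours) uses exact rational arithmetic to confirm that the decimal
+truncations of $\sqrt 2$ get within $10^{-m}$ of each other, all within $\QQ$.

```python
from fractions import Fraction
from math import isqrt

def x(n):
    # n-digit decimal truncation of sqrt(2), as an exact rational number
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

# Terms x(m), x(n) with n >= m both lie in (sqrt(2) - 10^-m, sqrt(2)],
# so the sequence is Cauchy: d(x(m), x(n)) <= 10^-min(m, n).
for m in range(1, 10):
    for n in range(m, 10):
        assert abs(x(m) - x(n)) <= Fraction(1, 10 ** m)
```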
+
+\begin{ques}
+ Show that a sequence which converges is automatically Cauchy.
+ (Draw a picture.)
+\end{ques}
+
+Now we can define:
+\begin{definition}
+ A metric space $M$ is \vocab{complete} if every
+ Cauchy sequence converges.
+\end{definition}
+
+\begin{example}
+ [Examples of complete spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\RR$ is complete.
+ (Depending on your definition of $\RR$,
+ this either follows by definition, or requires some work.
+ We won't go through this here.)
+ \ii The discrete space is complete,
+ as the only Cauchy sequences are eventually constant.
+ \ii The closed interval $[0,1]$ is complete.
+ \ii $\RR^n$ is complete as well.
+ (You're welcome to prove this by induction on $n$.)
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Non-examples of complete spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The rationals $\QQ$ are not complete.
+ \ii The open interval $(0,1)$ is not complete,
+ as the sequence $0.9$, $0.99$, $0.999$, $0.9999$, \dots
+ is Cauchy but does not converge.
+ \end{enumerate}
+\end{example}
+
+So metric spaces, like $\QQ$, need not be complete.
+But we certainly would like them to be complete,
+and in light of the following theorem this is not an unreasonable wish.
+\begin{theorem}
+ [Completion]
+ Every metric space can be ``completed'',
+ i.e.\ made into a complete space by adding in some points.
+\end{theorem}
+We won't need this construction at all,
+so it's left as \Cref{prob:completion}.
+\begin{example}
+ [$\QQ$ completes to $\RR$]
+ The completion of $\QQ$ is $\RR$.
+\end{example}
+(In fact, by using a modified definition of completion
+not depending on the real numbers,
+other authors often use this as the definition of $\RR$.)
+
+\section{Let the buyer beware}
+There is something suspicious about both these notions:
+neither are preserved under homeomorphism!
+
+\begin{example}
+ [Something fishy is going on here]
+ \label{ex:fishy}
+ Let $M = (0,1)$ and $N = \RR$.
+ As we saw much earlier $M$ and $N$ are homeomorphic.
+ However:
+ \begin{itemize}
+ \ii $(0,1)$ is totally bounded, but not complete.
+ \ii $\RR$ is complete, but not bounded.
+ \end{itemize}
+\end{example}
+
+This is the first hint of something going awry with the metric.
+As we progress further into our study of topology,
+we will see that in fact \emph{open and closed sets}
+(which we motivated by using the metric)
+are the notion that will really shine later on.
+I insist on introducing the metric first so that
+the standard pictures of open and closed sets make sense,
+but eventually it becomes time to remove the training wheels.
+
+%It's a theorem that $\RR$ is complete.
+%To prove this I'd have to define $\RR$ rigorously, which I won't do here.
+%In fact, there are some competing definitions of $\RR$.
+%It is sometimes defined as the completion of the space $\QQ$.
+%Other times it is defined using something called \emph{Dedekind cuts}.
+%For now, let's just accept that $\RR$ behaves as we expect and is complete.
+
+%And thus, I can now tell you exactly what $\RR$ is.
+%You might notice that it's actually not that easy to define:
+%we can define $\QQ = \left\{ a/b : a,b \in \NN \right\}$, but
+%what do we say for $\RR$?
+%Here's the answer:
+%\begin{definition}
+% $\RR$ is the completion of the metric space $\QQ$.
+%\end{definition}
+
+\section{Subspaces, and (inb4) a confusing linguistic point}
+\prototype{A circle is obtained as a subspace of $\RR^2$.}
+
+As we've already been doing implicitly in examples, we'll now say:
+\begin{definition}
+ Every subset $S \subseteq M$ is a metric space in its own right,
+ by re-using the distance function on $M$.
+ We say that $S$ is a \vocab{subspace} of $M$.
+\end{definition}
+For example, we saw that the circle $S^1$
+is just a subspace of $\RR^2$.
+
+It thus becomes important to distinguish between
+\begin{enumerate}[(i)]
+ \ii \textbf{``absolute'' adjectives} like ``complete'' or ``bounded'',
+ which can be applied to both spaces,
+ and hence even to subsets of spaces (by taking a subspace),
+ and
+ \ii \textbf{``relative'' adjectives}
+ like ``open (in $M$)'' and ``closed (in $M$)'',
+ which make sense only relative to a space,
+ even though people are often sloppy and omit them.
+\end{enumerate}
+So ``$[0,1]$ is complete'' makes sense,
+as does ``$[0,1]$ is a complete subset of $\RR$'',
+which we take to mean ``$[0,1]$ is a complete subspace of $\RR$''.
+This is because ``complete'' is an absolute adjective.
+
+But here are some examples of ways in which relative adjectives
+require a little more care:
+\begin{itemize}
+ \ii Consider the sequence $1$, $1.4$, $1.41$, $1.414$, \dots.
+ Viewed as a sequence in $\RR$, it converges to $\sqrt 2$.
+ But if viewed as a sequence in $\QQ$,
+ this sequence does \emph{not} converge!
+ Similarly, the sequence $0.9$, $0.99$, $0.999$, $0.9999$
+ does not converge in the space $(0,1)$,
+ although it does converge in $[0,1]$.
+
+ The fact that these sequences fail to converge
+ even though they ``ought to'' is weird and bad,
+ and was why we defined complete spaces to begin with.
+
+ \ii In general, it makes no sense to ask a question like ``is $[0,1]$ open?''.
+ The questions ``is $[0,1]$ open in $\RR$?''
+ and ``is $[0,1]$ open in $[0,1]$?'' do make sense, however.
+ The answer to the first question is ``no''
+ but the answer to the second question is ``yes'';
+ indeed, every space is open in itself.
+ Similarly, $[0, \half)$ is an open set % chktex 9
+ in the space $M = [0,1]$
+ because it is the ball \emph{in $M$}
+ of radius $\half$ centered at $0$.
+
+ \ii Dually, it doesn't make sense to ask ``is $[0,1]$ closed''?
+ It is closed \emph{in $\RR$} and \emph{in itself}
+ (but every space is closed in itself, anyways).
+\end{itemize}
+
+%Thus to list out all four cases:
+%\begin{itemize}
+% \ii The statement ``$[0,1]$ is complete'' makes sense (and is true);
+% it says that $[0,1]$ is a complete metric space.
+% \ii The statement ``$[0,1]$ is a complete subset of $\RR$'' is valid;
+% it says that the subspace $[0,1]$ of $\RR$ is a complete metric space.
+% \ii The statement ``$[0,1]$ is a closed subset of $\RR$'' makes sense;
+% it says that the set of points $[0,1]$ form a closed subset of a parent space $\RR$.
+% \ii The statement ``$[0,1]$ is closed'' does \emph{not} make sense officially.
+% Closed sets are only defined relative to parent spaces.
+%\end{itemize}
+
+To make sure you understand the above,
+here are two exercises to help you practice relative adjectives.
+
+\begin{exercise}
+ Let $M$ be a complete metric space and let $S \subseteq M$.
+ Prove that $S$ is complete if and only if it is closed in $M$.
+ % (This is obvious once you figure out what the question is asking.)
+ In particular, $[0,1]$ is complete.
+\end{exercise}
+\begin{exercise}
+ Let $M = [0,1] \cup (2,3)$.
+ Show that $[0,1]$ and $(2,3)$ are both open and closed in $M$.
+\end{exercise}
+
+This illustrates a third point:
+a nontrivial set can be both open and closed.\footnote{Which
+ always gets made fun of.}
+As we'll see in \Cref{ch:top_more}, this implies the space is disconnected;
+i.e.\ the only examples look quite like the one we've given above.
+
+\section{\problemhead}
+\begin{dproblem}[Banach fixed point theorem]
+ Let $M = (M,d)$ be a complete metric space.
+ Suppose $T \colon M \to M$ is a continuous map
+ such that for any $p \neq q \in M$,
+ \[ d\left( T(p), T(q) \right) < 0.999 d(p,q). \]
+ (We call $T$ a \vocab{contraction}.)
+ Show that $T$ has a unique fixed point.
+\end{dproblem}
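+To see the theorem in action, here is a small numerical sketch (ours, not
+part of the problem): $T(x) = \cos x$ maps the complete space $[0,1]$ into
+itself, and by the mean value theorem
+$\left\lvert \cos x - \cos y \right\rvert \le \sin(1) \left\lvert x-y \right\rvert$
+with $\sin(1) < 0.999$, so $T$ is a contraction there;
+iterating $T$ homes in on the unique fixed point.

```python
import math

def iterate_to_fixed_point(T, x0, tol=1e-12, max_steps=10_000):
    # Repeatedly apply T; a contraction forces the iterates to be Cauchy,
    # hence convergent in a complete space.
    x = x0
    for _ in range(max_steps):
        nxt = T(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("did not converge")

p = iterate_to_fixed_point(math.cos, 1.0)  # fixed point of cos on [0, 1]
```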
+
+\begin{problem}
+ [Henning Makholm,
+ on \href{https://math.stackexchange.com/a/3051746/229197}{math.SE}]
+ We let $M$ and $N$ denote the metric spaces obtained
+ by equipping $\RR$ with the following two metrics:
+ \begin{align*}
+ d_M(x,y) &= \min \left\{ 1, \left\lvert x-y \right\rvert \right\} \\
+ d_N(x,y) &= \left\lvert e^x - e^y \right\rvert.
+ \end{align*}
+ \begin{enumerate}[(a)]
+ \ii Fill in the following $2 \times 3$ table
+ with ``yes'' or ``no'' for each cell.
+ \begin{center}
+ \begin{tabular}[h]{l|ccc}
+ & Complete? & Bounded? & Totally bounded? \\ \hline
+ $M$ \\
+ $N$ \\
+ \end{tabular}
+ \end{center}
+ \ii Are $M$ and $N$ homeomorphic?
+ \end{enumerate}
+ \begin{hint}
+ (a): $M$ is complete and bounded
+ but not totally bounded. $N$ is all no.
+ For (b) show that $M \cong \RR \cong N$.
+ \end{hint}
+ \begin{sol}
+ Part (a) is essentially by definition.
+ The space $M$ is bounded since no distances exceed $1$,
+ but not totally bounded since we can't cover $M$
+ with finitely many $\half$-neighborhoods.
+ The space $M$ is complete since a sequence of real
+ numbers converges in $M$ if it converges in the usual sense.
+ As for $N$, the sequence $-1$, $-2$, \dots is Cauchy
+ but fails to converge; and it is obviously not bounded.
+
+ To show (b), the identity map (!) is a homeomorphism $M \cong \RR$
+ and $\RR \cong N$, since it is continuous in both directions.
+
+ This illustrates that $M \cong N$ despite the fact
+ that $M$ is both complete and bounded
+ but $N$ is neither complete nor bounded.
+ On the other hand, we will later see that complete
+ and totally bounded implies \emph{compact},
+ which is a very strong property preserved under homeomorphism.
+ \end{sol}
+\end{problem}
+
+
+\begin{dproblem}[Completion of a metric space]
+ \gim
+ \label{prob:completion}
+ Let $M$ be a metric space.
+ Construct a complete metric space $\ol M$
+ such that $M$ is a subspace of $\ol M$,
+ and every open set of $\ol M$ contains a point of $M$
+ (meaning $M$ is \vocab{dense} in $\ol M$).
+ \begin{hint}
+ As a set, we let $\ol M$ be the set of Cauchy
+ sequences $(x_n)$ in $M$, modulo the relation that
+ $(x_n) \sim (y_n)$ if $\lim_n d(x_n, y_n) = 0$.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ Show that a metric space is totally bounded
+ if and only if any sequence has a Cauchy subsequence.
+ \begin{sol}
+ See \url{https://math.stackexchange.com/q/556150/229197}.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ \kurumi
+ Prove that $\QQ$ is not homeomorphic to any complete metric space.
+ \begin{hint}
+ The standard solution seems to be
+ via the so-called ``Baire category theorem''.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/metric-top.tex b/books/napkin/metric-top.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b6e9fe28ce8932862dcd24f3def3dd81fea2ee2e
--- /dev/null
+++ b/books/napkin/metric-top.tex
@@ -0,0 +1,824 @@
+\chapter{Metric spaces}
+\label{ch:metric_space}
+At the time of writing, I'm convinced that metric topology is
+the morally correct way to motivate point-set topology
+as well as to generalize normal calculus.\footnote{Also,
+``metric'' is a fun word to say.}
+So here is my best attempt.
+
+The concept of a metric space is very ``concrete'', and lends itself easily to visualization. Hence throughout this chapter you should draw lots of pictures as you learn about new objects, like convergent sequences, open sets, closed sets, and so on.
+
+\section{Definition and examples of metric spaces}
+\prototype{$\RR^2$, with the Euclidean metric.}
+\begin{definition}
+ A \vocab{metric space} is a pair $(M, d)$ consisting of
+ a set of points $M$
+ and a \vocab{metric} $d \colon M \times M \to \RR_{\ge 0}$.
+ The distance function must obey:
+ \begin{itemize}
+ \ii For any $x,y \in M$, we have $d(x,y) = d(y,x)$; i.e.\ $d$ is symmetric.
+ \ii The function $d$ must be \vocab{positive definite}
+ which means that $d(x,y) \ge 0$ with equality if and only if $x=y$.
+ \ii The function $d$ should satisfy the \vocab{triangle inequality}: for all $x,y,z \in M$,
+ \[ d(x,z) + d(z,y) \ge d(x,y). \]
+ \end{itemize}
+\end{definition}
+\begin{abuse}
+ Just like with groups, we will abbreviate $(M,d)$ as just $M$.
+\end{abuse}
+\begin{example}[Metric spaces of $\RR$]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The real line $\RR$ is a metric space under the metric $d(x,y) = \left\lvert x-y \right\rvert$.
+ \ii The interval $[0,1]$ is also a metric space with the same distance function.
+ \ii In fact, any subset $S$ of $\RR$ can be made into a metric space in this way.
+ \end{enumerate}
+\end{example}
+\begin{example}[Metric spaces of $\RR^2$]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii We can make $\RR^2$ into a metric space by imposing the Euclidean distance function
+ \[ d\left( (x_1, y_1), (x_2, y_2) \right) = \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2}. \]
+ \ii Just like with the first example, any subset of $\RR^2$ also becomes a metric space with the inherited metric.
+ The unit disk, unit circle, and the unit square $[0,1]^2$
+ are special cases.
+ \end{enumerate}
+\end{example}
+\begin{example}[Taxicab on $\RR^2$]
+ It is also possible to place the \vocab{taxicab distance} on $\RR^2$:
+ \[ d\left( (x_1, y_1), (x_2, y_2) \right) =
+ \left\lvert x_1-x_2 \right\rvert + \left\lvert y_1-y_2 \right\rvert.
+ \]
+ For now, we will use the more natural Euclidean metric.
+\end{example}
+
+\begin{example}[Metric spaces of $\RR^n$]
+ We can generalize the above examples easily.
+ Let $n$ be a positive integer.
+ \begin{enumerate}[(a)]
+ \ii We let $\RR^n$ be the metric space whose points are points in $n$-dimensional Euclidean space,
+ and whose metric is the Euclidean metric
+ \[
+ d\left(
+ \left( a_1, \dots, a_n \right), \left( b_1, \dots, b_n \right)
+ \right)
+ = \sqrt{(a_1-b_1)^2 + \dots + (a_n-b_n)^2}.
+ \]
+ This is the $n$-dimensional \vocab{Euclidean space}.
+ \ii The open \vocab{unit ball} $B^{n}$ is the subset of $\RR^n$
+ consisting of those points $\left( x_1, \dots, x_n \right)$
+ such that $x_1^2 + \dots + x_n^2 < 1$.
+ \ii The \vocab{unit sphere} $S^{n-1}$ is the subset of $\RR^n$
+ consisting of those points $\left( x_1, \dots, x_n \right)$
+ such that $x_1^2 + \dots + x_n^2 = 1$, with the inherited metric.
+ (The superscript $n-1$ indicates that $S^{n-1}$ is an $(n-1)$-dimensional space,
+ even though it lives in $n$-dimensional space.)
+ For example, $S^1 \subseteq \RR^2$ is the unit circle,
+ whose distance between two points is the length of the chord joining them.
+ You can also think of it as the ``boundary'' of the unit ball $B^n$.
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Function space]
+ We can let $M$ be the space of
+ continuous functions $f \colon [0,1] \to \RR$ and define the metric
+ by $d(f,g) = \int_0^1 \left\lvert f-g \right\rvert \; dx$.
+ (It admittedly takes some work to check $d(f,g) = 0$ implies $f=g$,
+ but we won't worry about that yet.)
+\end{example}
+
+Here is a slightly more pathological example.
+\begin{example}
+ [Discrete space]
+ Let $S$ be any set of points (either finite or infinite).
+ We can make $S$ into a \vocab{discrete space} by declaring
+ \[
+ d(x,y)
+ =
+ \begin{cases}
+ 1 & \text{if $x \neq y$} \\
+ 0 & \text{if $x = y$}.
+ \end{cases}
+ \]
+ If $\left\lvert S \right\rvert = 4$ you might think of this space
+ as the vertices of a regular tetrahedron, living in $\RR^3$.
+ But for larger $S$ it's not so easy to visualize\dots
+\end{example}
+\begin{example}[Graphs are metric spaces]
+ Any connected simple graph $G$ can be made into a metric space
+ by defining the distance between two vertices to be the
+ graph-theoretic distance between them.
+ (The discrete metric is the special case when $G$ is the complete graph on $S$.)
+\end{example}
+\begin{ques}
+ Check the conditions of a metric space for the metrics on the discrete space
+ and for the connected graph.
+\end{ques}
+
+\begin{abuse}
+ From now on, we will refer to $\RR^n$ with the Euclidean metric
+ by just $\RR^n$.
+ Moreover, if we wish to take the metric space for a subset $S \subseteq \RR^n$
+ with the inherited metric, we will just write $S$.
+\end{abuse}
+
+\section{Convergence in metric spaces}
+\prototype{The sequence $\frac1n$ (for $n=1,2,\dots$) in $\RR$.}
+
+Since we can talk about the distance between two points, we can talk about what it means for a sequence of points to converge.
+This is the same as the typical epsilon-delta definition, with absolute values replaced by the distance function.
+
+\begin{definition}
+ Let $(x_n)_{n \ge 1}$ be a sequence of points in a metric space $M$.
+ We say that $x_n$ \vocab{converges} to $x$ if the following condition holds:
+ for all $\eps > 0$, there is an integer $N$ (depending on $\eps$)
+ such that $d(x_n, x) < \eps$ for each $n \ge N$.
+ This is written \[ x_n \to x \] or more verbosely as \[ \lim_{n \to \infty} x_n = x. \]
+ We say that a sequence converges in $M$ if it converges to a point in $M$.
+\end{definition}
+You should check that this definition coincides with your intuitive notion of ``converges''.
+\begin{abuse}
+ If the parent space $M$ is understood, we will allow ourselves
+ to abbreviate ``converges in $M$'' to just ``converges''.
+ However, keep in mind that convergence is defined relative to the parent space;
+ the ``limit'' of the sequence must actually be a point in $M$ for the sequence to converge.
+\end{abuse}
+
+\begin{center}
+ \begin{asy}
+ size(9cm);
+ Drawing("x_1", (-9,0.1), dir(90));
+ Drawing("x_2", (-6,0.8), dir(90));
+ Drawing("x_3", (-5,-0.3), dir(90));
+ Drawing("x_4", (-2, 0.8), dir(90));
+ Drawing("x_5", (-1.7, -0.7), dir(-90));
+ Drawing("x_6", (-0.6, -0.3), dir(225));
+ Drawing("x_7", (-0.4, 0.3), dir(90));
+ Drawing("x_8", (-0.25, -0.24), dir(-90));
+ Drawing("x_9", (-0.12, 0.1), dir(45));
+ dot("$x$", (0,0), dir(-45), blue);
+ draw(CR(origin, 1.5), blue+dashed);
+ \end{asy}
+\end{center}
+
+\begin{example}
+ Consider the sequence
+ $x_1 = 1$, $x_2 = 1.4$, $x_3 = 1.41$, $x_4 = 1.414$, \dots.
+ \begin{enumerate}[(a)]
+ \ii If we view this as a sequence in $\RR$, it converges to $\sqrt 2$.
+ \ii However, even though each $x_i$ is in $\QQ$,
+ this sequence does NOT converge when we view it as a sequence in $\QQ$!
+ \end{enumerate}
+\end{example}
+
+\begin{ques}
+ What are the convergent sequences in a discrete metric space?
+\end{ques}
+
+\section{Continuous maps}
+In calculus you were also told (or have at least heard)
+of what it means for a function to be continuous.
+Probably something like
+\begin{quote}
+ A function $f \colon \RR \to \RR$
+ is continuous at a point $p \in \RR$
+ if for every $\eps > 0$ there exists a $\delta > 0$ such that
+ $\left\lvert x-p \right\rvert < \delta
+ \implies
+ \left\lvert f(x) - f(p) \right\rvert < \eps$.
+\end{quote}
+\begin{ques}
+ Can you guess what the corresponding definition for metric spaces is?
+\end{ques}
+
+All we have to do is replace the absolute values with the more general distance functions: this gives us a definition of continuity for any function $M \to N$.
+
+\begin{definition}
+ Let $M = (M, d_M)$ and $N = (N, d_N)$ be metric spaces.
+ A function $f \colon M \to N$ is \vocab{continuous}
+ at a point $p \in M$
+ if for every $\eps > 0$ there exists a $\delta > 0$ such that
+ \[ d_M(x,p) < \delta \implies d_N(f(x), f(p)) < \eps. \]
+ Moreover, the entire function $f$ is continuous
+ if it is continuous at every point $p \in M$.
+\end{definition}
+Notice that, just like in our definition of an isomorphism of a group,
+we use the metric of $M$ for one condition
+and the metric of $N$ for the other condition.
+
+This generalization is nice because it tells us immediately how we could carry over continuity arguments in $\RR$ to more general spaces like $\CC$.
+Nonetheless, this definition is kind of cumbersome to work with,
+because it makes extensive use of the real numbers
+(epsilons and deltas).
+Here is an equivalent condition.
+\begin{theorem}[Sequential continuity]
+ \label{thm:seq_cont}
+ A function $f \colon M \to N$ of metric spaces
+ is continuous at a point $p \in M$
+ if and only if the following property holds:
+ if $x_1$, $x_2$, \dots is a sequence in $M$ converging to $p$,
+ then the sequence $f(x_1)$, $f(x_2)$, \dots in $N$ converges to $f(p)$.
+\end{theorem}
+\begin{proof}
+ One direction is not too hard:
+ \begin{exercise}
+ Show that $\eps$-$\delta$ continuity implies sequential continuity
+ at each point.
+ \end{exercise}
+ Conversely, we will prove that if $f$ is not $\eps$-$\delta$ continuous at $p$,
+ then it does not preserve convergence.
+
+ If $f$ is not continuous at $p$, then there is a ``bad'' $\eps > 0$,
+ which we now consider fixed.
+ So for each choice of $\delta = 1/n$,
+ there should be some point $x_n$ which is within $\delta$ of $p$,
+ but which is mapped more than $\eps$ away from $f(p)$.
+ But then the sequence $x_n$ converges to $p$,
+ while $f(x_n)$ stays at least $\eps$ away from $f(p)$,
+ so convergence is not preserved.
+\end{proof}
+
+Example application showcasing the niceness of sequential continuity:
+\begin{proposition}[Composition of continuous functions is continuous]
+ Let $f \colon M \to N$ and $g \colon N \to L$ be continuous maps of metric spaces.
+ Then their composition $g \circ f$ is continuous.
+\end{proposition}
+\begin{proof}
+ Dead simple with sequences:
+ Let $p \in M$ be arbitrary and let $x_n \to p$ in $M$.
+ Then $f(x_n) \to f(p)$ in $N$ and $g(f(x_n)) \to g(f(p))$ in $L$, QED.
+\end{proof}
+
+\begin{ques}
+ Let $M$ be any metric space and $D$ a discrete space.
+ When is a map $f \colon D \to M$ continuous?
+\end{ques}
+
+
+
+\section{Homeomorphisms}
+\prototype{The unit circle $S^1$ is homeomorphic to the boundary of the unit square.}
+
+When do we consider two groups to be the same?
+Answer: if there's a structure-preserving map
+between them which is also a bijection.
+For metric spaces, we do exactly the same thing,
+but replace ``structure-preserving'' with ``continuous''.
+
+\begin{definition}
+ Let $M$ and $N$ be metric spaces.
+ A function $f \colon M \to N$ is a
+ \vocab{homeomorphism} if it is a bijection,
+ and both $f \colon M \to N$
+ and its inverse $f\inv \colon N \to M$ are continuous.
+ We say $M$ and $N$ are \vocab{homeomorphic}.
+\end{definition}
+Needless to say, homeomorphism is an equivalence relation.
+
+You might be surprised that we require $f\inv$ to also be continuous.
+Here's the reason: you can show that if $\phi$ is
+an isomorphism of groups, then $\phi\inv$ also preserves the group operation,
+hence $\phi\inv$ is itself an isomorphism.
+The same is not true for continuous bijections,
+which is why we need the new condition.
+\begin{example}
+ [Homeomorphism $\neq$ continuous bijection]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii There is a continuous bijection
+ from $[0,1)$ to the circle, % chktex 9
+ but it has no continuous inverse.
+ \ii Let $M$ be a discrete space with size $|\RR|$.
+ Then there is a continuous bijection $M \to \RR$
+ which certainly has no continuous inverse.
+ \end{enumerate}
+\end{example}
+
+Note that this is the topologist's definition of ``same'' --
+homeomorphisms are ``continuous deformations''.
+Here are some examples.
+
+\begin{example}[Examples of homeomorphisms]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Any space $M$ is homeomorphic to itself
+ through the identity map.
+ \ii The old saying: a doughnut (torus) is
+ homeomorphic to a coffee cup.
+ (Look this up if you haven't heard of it.)
+ \ii The unit circle $S^1$ is homeomorphic
+ to the boundary of the unit square.
+ Here's one bijection between them, after an appropriate scaling:
+ \begin{center}
+ \begin{asy}
+ size(1.5cm);
+ draw(unitcircle);
+ pair A = (1.4, 1.4);
+ pair B = rotate(90)*A;
+ pair C = rotate(90)*B;
+ pair D = rotate(90)*C;
+ draw(A--B--C--D--cycle);
+ dot(origin);
+ pair P = Drawing(dir(70));
+ pair Q = Drawing(extension(origin, P, A, B));
+ draw(origin--Q, dashed);
+ \end{asy}
+ \end{center}
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Metrics on the unit circle]
+ It may have seemed strange that our metric function on $S^1$
+ was the one inherited from $\RR^2$,
+ meaning the distance between two points
+ on the circle was defined to be the length of the chord.
+ Wouldn't it have made more sense to use the circumference of the
+ smaller arc joining the two points?
+
+ In fact, it doesn't matter:
+ if we consider $S^1$ with the ``chord'' metric
+ and the ``arc'' metric, we get two homeomorphic spaces,
+ as the identity map between them is continuous in both directions.
+
+ The same goes for $S^{n-1}$ for general $n$.
+\end{example}
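+To make this concrete: an arc of length $\theta \in (0, \pi]$ on the unit
+circle subtends a chord of length $2\sin(\theta/2)$, and one can check that
+$2\sin(\theta/2) \le \theta \le \frac{\pi}{2} \cdot 2\sin(\theta/2)$,
+so each metric is small exactly when the other is.
+A numerical spot-check (ours, not in the text):

```python
import math

# Compare the chord metric 2*sin(theta/2) with the arc metric theta
# for arcs theta in (0, pi]; each bounds the other up to a factor of pi/2.
for k in range(1, 1001):
    arc = math.pi * k / 1000
    chord = 2 * math.sin(arc / 2)
    assert chord <= arc <= (math.pi / 2) * chord + 1e-12
```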
+
+\begin{example}
+ [Homeomorphisms really don't preserve size]
+ Surprisingly, the open interval $(-1,1)$
+ is homeomorphic to the real line $\RR$!
+ One bijection is given by
+ \[ x \mapsto \tan(x \pi/2) \]
+ with the inverse being given by $t \mapsto \frac2\pi \arctan(t)$.
+
+ This might come as a surprise,
+ since $(-1,1)$ doesn't look that much like $\RR$;
+ the former is ``bounded'' while the latter is ``unbounded''.
+\end{example}
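+One can verify numerically that these two maps really are mutually inverse;
+the following quick sketch (ours) round-trips several sample points in each
+direction.

```python
import math

f = lambda x: math.tan(x * math.pi / 2)        # (-1, 1) -> R
g = lambda t: (2 / math.pi) * math.atan(t)     # R -> (-1, 1)

# Round-trip a few sample points in each direction.
for x in [-0.999, -0.5, 0.0, 0.3, 0.999]:
    assert abs(g(f(x)) - x) < 1e-12
for t in [-100.0, -1.0, 0.0, 2.5, 100.0]:
    assert abs(f(g(t)) - t) < 1e-6
```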
+
+\section{Extended example/definition: product metric}
+\prototype{$\RR \times \RR$ is $\RR^2$.}
+
+Here is an extended example which will occur later on.
+Let $M = (M, d_M)$ and $N = (N, d_N)$ be metric spaces (say, $M = N = \RR$).
+Our goal is to define a metric space on $M \times N$.
+
+Let $p_i = (x_i,y_i) \in M \times N$ for $i=1,2$.
+Consider the following metrics on the set of points $M \times N$:
+\begin{align*}
+ d_{\text{max}} ( p_1, p_2 )
+ &\defeq \max \left\{ d_M(x_1, x_2), d_N(y_1, y_2) \right\} \\
+ d_{\text{Euclid}} ( p_1, p_2 )
+ &\defeq \sqrt{d_M(x_1,x_2)^2 + d_N(y_1, y_2)^2} \\
+ d_{\text{taxicab}} \left( p_1, p_2 \right)
+ &\defeq d_M(x_1, x_2) + d_N(y_1, y_2).
+\end{align*}
+All of these are good candidates.
+We are about to see it doesn't matter which one we use:
+\begin{exercise}
+Verify that
+\[ d_{\text{max}}(p_1,p_2)
+ \le d_{\text{Euclid}}(p_1, p_2)
+ \le d_{\text{taxicab}}(p_1, p_2)
+ \le 2d_{\text{max}}(p_1, p_2). \]
+Use this to show that the metric spaces we obtain
+by imposing any of the three metrics are homeomorphic,
+with the homeomorphism being just the identity map.
+\end{exercise}
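If you want to convince yourself of the chain of inequalities before proving it, here is a small randomized check for the case $M = N = \RR$ with the absolute-value metric (only a sketch; the function names are ours):

```python
import math
import random

random.seed(0)  # deterministic run

def d_max(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def d_euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

for _ in range(1000):
    p = (random.uniform(-10, 10), random.uniform(-10, 10))
    q = (random.uniform(-10, 10), random.uniform(-10, 10))
    a, b, c = d_max(p, q), d_euclid(p, q), d_taxicab(p, q)
    # d_max <= d_Euclid <= d_taxicab <= 2 d_max
    assert a <= b + 1e-12
    assert b <= c + 1e-12
    assert c <= 2 * a + 1e-12
```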
+
+\begin{definition}
+ Hence we will usually simply refer to
+ \emph{the} metric on $M \times N$,
+ called the \vocab{product metric}.
+ It will not be important which of the three metrics we select.
+\end{definition}
+
+\begin{example}[$\RR^2$]
+ If $M = N = \RR$, we get $\RR^2$, the Euclidean plane.
+ The metric $d_{\text{Euclid}}$ is the one we started with,
+ but using either of the other two metrics works fine as well.
+\end{example}
+
+The product metric plays well with convergence of sequences.
+\begin{proposition}
+ [Convergence in the product metric is by component]
+ We have $(x_n, y_n) \to (x,y)$
+ if and only if $x_n \to x$ and $y_n \to y$.
+\end{proposition}
+\begin{proof}
+ We have $d_{\text{max}} \left( (x,y), (x_n, y_n) \right)
+ = \max\left\{ d_M(x, x_n), d_N(y, y_n) \right\}$
+ and the latter approaches zero as $n \to \infty$
+ if and only if $d_M(x,x_n) \to 0$ and $d_N(y, y_n) \to 0$.
+\end{proof}
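As a concrete illustration, the sequence $(x_n, y_n) = (1/n, 1 - 1/n)$ converges to $(0,1)$ precisely because each coordinate converges. A quick numerical check (names are ours):

```python
def d_max(p, q):
    # the product (max) metric on R x R
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

limit = (0.0, 1.0)
for n in [1, 10, 100, 1000]:
    pn = (1.0 / n, 1.0 - 1.0 / n)  # x_n -> 0 and y_n -> 1
    # both coordinate distances equal 1/n, so d_max(pn, limit) = 1/n -> 0
    assert abs(d_max(pn, limit) - 1.0 / n) < 1e-12
```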
+
+Let's see an application of this:
+\begin{proposition}
+ [Addition and multiplication are continuous]
+ \label{prop:arithmetic_continuous}
+ The addition and multiplication maps
+ are continuous maps $\RR \times \RR \to \RR$.
+\end{proposition}
+\begin{proof}
+ For multiplication: for any $n$ we have
+ \begin{align*}
+ x_n y_n &= \left( x + (x_n-x) \right)
+ \left( y + (y_n-y) \right) \\
+ &= xy + y(x_n-x) + x(y_n-y) + (x_n-x)(y_n-y) \\
+ \implies \left\lvert x_n y_n - xy \right\rvert
+ &\le \left\lvert y \right\rvert
+ \left\lvert x_n - x \right\rvert
+ + \left\lvert x \right\rvert
+ \left\lvert y_n - y \right\rvert
+ + \left\lvert x_n - x \right\rvert
+ \left\lvert y_n - y \right\rvert.
+ \end{align*}
+ As $n \to \infty$, all three terms on the
+ right-hand side tend to zero.
+ The proof that $+ \colon \RR \times \RR \to \RR$ is continuous
+ is similar (and easier): one notes for any $n$ that
+ \[ |(x_n + y_n) - (x+y)| \le |x_n-x| + |y_n-y| \]
+ and both terms on the right-hand side
+ tend to zero as $n \to \infty$.
+\end{proof}
+\Cref{prob:subtract_divide} covers the other two operations,
+subtraction and division.
+The upshot of this is that, since compositions are also continuous,
+most of your naturally arising real-valued functions
+will automatically be continuous as well.
+For example, the function $\frac{3x}{x^2+1}$
+will be a continuous function from $\RR \to \RR$,
+since it can be obtained by composing $+$, $\times$, $\div$.
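One can watch sequential continuity of this particular function happen numerically; of course this illustrates the claim rather than proving it:

```python
def f(x):
    # 3x / (x^2 + 1), built by composing +, *, and /
    return (3 * x) / (x * x + 1)

# sequential continuity at x = 2: as x_n -> 2, we see f(x_n) -> f(2)
x = 2.0
for n in [10, 100, 1000, 10000]:
    xn = x + 1.0 / n
    # the error shrinks proportionally to |x_n - x|
    assert abs(f(xn) - f(x)) < 3.0 / n
```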
+
+\section{Open sets}
+\prototype{The open disk $x^2+y^2<1$ in $\RR^2$.}
+\begin{definition}
+ Let $M = (M,d)$ be a metric space.
+ For each real number $r > 0$ and point $p \in M$, we define
+ \[ M_r(p) \defeq \left\{ x \in M: d(x,p) < r \right\}. \]
+ The set $M_r(p)$ is called an \vocab{$r$-neighborhood} of $p$.
+\end{definition}
+\begin{center}
+ \begin{asy}
+ size(4cm);
+ bigblob("$M$");
+ pair p = Drawing("p", (0.3,0.1), dir(-90));
+ real r = 1.8;
+ draw(CR(p,r), dashed);
+ label("$M_r(p)$", p+r*dir(-65), dir(-65));
+ draw(p--(p+r*dir(130)));
+ label("$r$", midpoint(p--(p+r*dir(130))), dir(40));
+ \end{asy}
+\end{center}
+
+We can rephrase convergence more succinctly in terms of $r$-neighborhoods.
+Specifically, a sequence $(x_n)$ converges to $x$
+if for every $r$-neighborhood of $x$,
+all terms of $x_n$ eventually stay within that $r$-neighborhood.
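For instance, with $x_n = 1/n$ in $\RR$ and any $r$-neighborhood of $0$, all terms beyond some index lie inside it. A finite check can only illustrate, not prove, convergence (the helper name is ours):

```python
def eventually_in_neighborhood(seq, x, r, N):
    """Check that seq(N), seq(N+1), ... (a finite stretch of them)
    all lie in the r-neighborhood of x."""
    return all(abs(seq(n) - x) < r for n in range(N, N + 1000))

# x_n = 1/n converges to 0: for r = 0.01, every term from N = 101 onward
# stays within the r-neighborhood of 0
assert eventually_in_neighborhood(lambda n: 1.0 / n, 0.0, 0.01, 101)
# but the first term x_1 = 1 lies outside that neighborhood
assert abs(1.0 / 1 - 0.0) >= 0.01
```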
+
+Let's try to do the same with functions.
+\begin{ques}
+ In terms of $r$-neighborhoods,
+ what does it mean for a function $f \colon M \to N$
+ to be continuous at a point $p \in M$?
+\end{ques}
+
+Essentially, we require that the pre-image of every $\eps$-neighborhood
+of $f(p)$ contains some $\delta$-neighborhood of $p$.
+This motivates:
+\begin{definition}
+ A set $U \subseteq M$ is \vocab{open} in $M$
+ if for each $p \in U$, some $r$-neighborhood of $p$
+ is contained inside $U$.
+ In other words, there exists $r>0$ such that $M_r(p) \subseteq U$.
+\end{definition}
+\begin{abuse}
+ Note that a set being open is defined
+ \emph{relative to} the parent space $M$.
+ However, if $M$ is understood we can abbreviate
+ ``open in $M$'' to just ``open''.
+\end{abuse}
+
+\begin{figure}[ht]
+ \centering
+ \begin{asy}
+ size(5cm);
+ draw(unitcircle, dashed);
+ pair P = Drawing("p", (0.6,0.2), dir(-90));
+ draw(CR(P, 0.3), dotted);
+ MP("x^2+y^2<1", dir(45), dir(45));
+ \end{asy}
+ \caption{The set of points $x^2+y^2<1$ in $\RR^2$ is open in $\RR^2$.}
+ \label{fig:example_open}
+\end{figure}
+
+\begin{example}[Examples of open sets]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Any $r$-neighborhood of a point is open.
+ \ii Open intervals of $\RR$ are open in $\RR$, hence the name!
+ This is the prototypical example to keep in mind.
+ \ii The open unit ball $B^n$ is open in $\RR^n$ for the same reason.
+ \ii In particular, the open interval $(0,1)$ is open in $\RR$.
+ However, if we embed it in $\RR^2$, it is no longer open!
+ \ii The empty set $\varnothing$ and the whole set of points $M$ are open in $M$.
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Non-examples of open sets]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The closed interval $[0,1]$ is not open in $\RR$.
+ There is no $\eps$-neighborhood of the point $0$
+ which is contained in $[0,1]$.
+ \ii The unit circle $S^1$ is not open in $\RR^2$.
+ \end{enumerate}
+\end{example}
+\begin{ques}
+ What are the open sets of the discrete space?
+\end{ques}
+
+Here are two quite important properties of open sets.
+\begin{proposition}
+ [Intersections and unions of open sets]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The intersection of finitely many open sets is open.
+ \ii The union of open sets is open, even if there are infinitely many.
+ \end{enumerate}
+\end{proposition}
+\begin{ques}
+ Convince yourself this is true.
+\end{ques}
+\begin{exercise}
+ Exhibit an infinite collection of open sets in $\RR$
+ whose intersection is the set $\{0\}$.
+ This shows that an infinite intersection of open sets need not be open.
+\end{exercise}
+
+The whole upshot of this is:
+\begin{theorem}[Open set condition]
+ \label{thm:open_set}
+ A function $f \colon M \to N$ of metric spaces is continuous
+ if and only if the pre-image of every open set in $N$ is open in $M$.
+\end{theorem}
+\begin{proof}
+ I'll just do one direction\dots
+ \begin{exercise}
+ Show that $\delta$-$\eps$ continuity follows from
+ the open set condition.
+ \end{exercise}
+ Now assume $f$ is continuous.
+ First, suppose $V$ is an open subset of the metric space $N$;
+ let $U = f\pre(V)$. Pick $x \in U$, so $y = f(x) \in V$; we want an
+ open neighborhood of $x$ inside $U$.
+
+ \begin{center}
+ \begin{asy}
+ size(12cm);
+ bigblob("$N$");
+ pair Y = Drawing("y", origin, dir(75));
+ real eps = 1.5;
+ draw(CR(Y, eps), dotted);
+ label("$\varepsilon$", Drawing(Y--(Y+eps*dir(255))));
+ label("$V$",
+ Drawing(shift(-0.5,0)*rotate(190)*scale(3.2,2.8)*unitcircle, dashed));
+ add(shift( (13,0) ) * CC());
+ label("$f$", Drawing( (4.5,0)--(8,0), EndArrow));
+
+ bigblob("$M$");
+ real delta = 1.1;
+ pair X = Drawing("x", (-1.5,-0.5), dir(-45));
+ label("$\delta$", Drawing(X--(X+delta*dir(155))));
+ draw(CR(X, delta), dotted);
+ label("$U = f^{\text{pre}}(V)$",
+ Drawing(shift(-1.5,-0.3)*rotate(235)*scale(2.4,1.8)*unitcircle, dashed));
+ \end{asy}
+ \end{center}
+
+ As $V$ is open, there is some small $\eps$-neighborhood around $y$
+ which is contained inside $V$.
+ By continuity of $f$, we can find a $\delta$ such that the $\delta$-neighborhood
+ of $x$ gets mapped by $f$ into the $\eps$-neighborhood in $N$,
+ which in particular lives inside $V$.
+ Thus the $\delta$-neighborhood lives in $U$, as desired.
+\end{proof}
+
+%From this we can get a new definition of homeomorphism
+%which makes it clear why open sets are good things to consider.
+%\begin{theorem}
+% [Open set formulation of homeomorphism]
+% A function $f \colon M \to N$ of metric spaces is a
+% homeomorphism if and only if
+% \begin{enumerate}[(i)]
+% \ii It is a bijection of the underlying points.
+% \ii It induces a bijection of the open sets of $M$ and $N$:
+% for any open set $U \subseteq M$ the set $f\im(U)$ is open,
+% and for any open set $V \subseteq N$ the set $f\pre(V)$ is open.
+% \end{enumerate}
+%\end{theorem}
+
+\section{Closed sets}
+\prototype{The closed unit disk $x^2+y^2 \le r^2$ in $\RR^2$.}
+It would be criminal for me to talk about open sets
+without talking about closed sets.
+The name ``closed'' comes from the definition in a metric space.
+\begin{definition}
+ Let $M$ be a metric space.
+ A subset $S \subseteq M$ is \vocab{closed} in $M$ if the following property holds:
+ let $x_1$, $x_2$, \dots be a sequence of points in $S$
+ and suppose that $x_n$ converges to $x$ in $M$.
+ Then $x \in S$ as well.
+\end{definition}
+\begin{abuse}
+ Same caveat: we abbreviate ``closed in $M$'' to just ``closed''
+ if the parent space $M$ is understood.
+\end{abuse}
+Here's another way to phrase it.
+The \vocab{limit points} of a subset $S \subseteq M$ are defined by
+\[ \lim S \defeq \left\{ p \in M :
+ \exists (x_n) \in S \text{ such that } x_n \to p \right\}. \]
+Thus $S$ is closed if and only if $S = \lim S$.
+
+\begin{exercise}
+ Prove that $\lim S$ is closed even if $S$ isn't closed. (Draw a picture.)
+\end{exercise}
+For this reason, $\lim S$ is also called the
+\vocab{closure} of $S$ in $M$, and denoted $\ol S$.
+It is simply the smallest closed set which contains $S$.
+
+\begin{example}
+ [Examples of closed sets]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The empty set $\varnothing$ is closed in $M$ for vacuous reasons:
+ there are no sequences of points with elements in $\varnothing$.
+ \ii The entire space $M$ is closed in $M$ for tautological reasons.
+ (Verify this!)
+ \ii The closed interval $[0,1]$ in $\RR$ is closed in $\RR$, hence the name. Like with open sets, this is the prototypical example of a closed set to keep in mind!
+ \ii In fact, the closed interval $[0,1]$ is even closed in $\RR^2$.
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Non-examples of closed sets]
+ Let $S=(0,1)$ denote the open interval.
+ Then $S$ is not closed in $\RR$
+ because the sequence of points
+ \[
+ \frac12, \;
+ \frac14, \;
+ \frac18, \;
+ \dots
+ \]
+ converges to $0 \in \RR$, but $0 \notin (0,1)$.
+\end{example}
+
+I should now warn you about a confusing part of this terminology.
+Firstly, \textbf{``most'' sets are neither open nor closed}.
+\begin{example}[A set neither open nor closed]
+ The half-open interval $[0,1)$ %chktex 9
+ is neither open nor closed in $\RR$.
+\end{example}
+Secondly, it's \textbf{also possible for a set to be both open and closed};
+this will be discussed in \Cref{ch:top_more}.
+
+The reason for the opposing terms is the following theorem:
+\begin{theorem}[Closed sets are complements of open sets]
+ Let $M$ be a metric space, and $S \subseteq M$ any subset.
+ Then the following are equivalent:
+ \begin{itemize}
+ \ii The set $S$ is closed in $M$.
+ \ii The complement $M \setminus S$ is open in $M$.
+ \end{itemize}
+\end{theorem}
+\begin{exercise}
+ [Great]
+ Prove this theorem!
+ You'll want to draw a picture to make it clear what's happening:
+ for example, you might take $M = \RR^2$ and $S$ to be the closed unit disk.
+\end{exercise}
+
+\section{\problemhead}
+\begin{problem}
+ Let $M = (M,d)$ be a metric space.
+ Show that \[ d \colon M \times M \to \RR \]
+ is itself a continuous function
+ (where $M \times M$ is equipped with the product metric).
+\end{problem}
+
+\begin{problem}
+ Are $\QQ$ and $\NN$ homeomorphic subspaces of $\RR$?
+ \begin{hint}
+ No. There is not even a continuous injective map from $\QQ$ to $\NN$.
+ \end{hint}
+ \begin{sol}
+ Two possible approaches, one using metric definition
+ and one using open sets.
+
+ Metric approach: I claim there is no injective map from $\QQ$ to $\NN$
+ that is continuous. Indeed, suppose $f$ were such a map and $f(x) = n$.
+ Then, choose $\eps = 1/2$.
+ There should be a $\delta > 0$ such that everything within $\delta$ of $x$
+ in $\QQ$ lands within $\eps$ of $n \in \NN$ --- i.e., is mapped to $n$ itself.
+ Since infinitely many rationals lie within $\delta$ of $x$,
+ this blatantly contradicts injectivity.
+
+ Open set approach: In $\QQ$, no singleton set is open,
+ whereas in $\NN$, they all are (in fact $\NN$ is discrete).
+ As you'll see at the start of \Cref{ch:top_more},
+ with the new and improved definition of ``homeomorphism'',
+ this means the structures of open sets on $\QQ$ and $\NN$ are different,
+ so they are not homeomorphic.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Continuity of arithmetic continued]
+ \label{prob:subtract_divide}
+ Show that subtraction is a continuous map $- \colon \RR \times \RR \to \RR$,
+ and division is a continuous map $\div \colon \RR \times \RR_{>0} \to \RR$.
+ \begin{hint}
+ You can do this with bare hands.
+ You can also use composition.
+ \end{hint}
+ \begin{sol}
+ For subtraction, the map $x \mapsto -x$ is continuous
+ so you can view it as a composed map
+ \begin{center}
+ \begin{tikzcd}
+ \RR \times \RR \ar[r, "{(\id,-x)}"]
+ & \RR \times \RR \ar[r, "+"] & \RR \\[1ex]
+ (a,b) \ar[r, mapsto] & (a,-b) \ar[r, mapsto] & a-b.
+ \end{tikzcd}
+ \end{center}
+ Similarly, if you are willing to believe $x \mapsto 1/x$
+ is a continuous function, then division is composition
+ \begin{center}
+ \begin{tikzcd}
+ \RR \times \RR_{>0} \ar[r, "{(\id, 1/x)}"]
+ & \RR \times \RR_{>0} \ar[r, "\times"] & \RR \\
+ (a,b) \ar[r, mapsto] & (a,1/b) \ar[r, mapsto] & a/b.
+ \end{tikzcd}
+ \end{center}
+ If for some reason you are suspicious that $x \mapsto 1/x$ is continuous,
+ then here is a proof using sequential continuity.
+ Suppose $x_n \to x$ with $x_n > 0$ and $x > 0$
+ (since $x$ needs to be in $\RR_{>0}$ too).
+ Then \[ \left\lvert \frac{1}{x} - \frac 1{x_n} \right\rvert
+ = \frac{\left\lvert x_n-x \right\rvert}{\left\lvert x x_n \right\rvert}.
+ \]
+ If $n$ is large enough, then $\left\lvert x_n \right\rvert > x/2$;
+ so the denominator is at least $x^2/2$,
+ and hence the whole fraction is at most
+ $\frac{2}{x^2} \left\lvert x_n-x \right\rvert$,
+ which tends to zero as $n \to \infty$.
+ \end{sol}
+\end{problem}
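The key estimate in the solution above, $\left\lvert \frac1x - \frac1{x_n} \right\rvert \le \frac{2}{x^2} \left\lvert x_n - x \right\rvert$ once $x_n > x/2$, can be spot-checked numerically (a sketch with our own choice of sequence):

```python
x = 0.5
for n in [10, 100, 1000]:
    xn = x + 1.0 / n**2  # a sequence with x_n -> x and x_n > x/2 throughout
    lhs = abs(1.0 / x - 1.0 / xn)
    rhs = (2.0 / x**2) * abs(xn - x)
    # |1/x - 1/x_n| = |x_n - x| / (x * x_n) <= (2/x^2) |x_n - x|
    assert lhs <= rhs + 1e-15
```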
+
+\begin{problem}
+ Exhibit a function $f \colon \RR \to \RR$ such that
+ $f$ is continuous at $x \in \RR$ if and only if $x=0$.
+ \begin{hint}
+ $\pm x$ for good choices of $\pm$.
+ \end{hint}
+ \begin{sol}
+ Let $f(x) = x$ for $x \in \QQ$ and $f(x) = -x$ for irrational $x$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ \yod
+ Prove that a function $f \colon \RR \to \RR$ which is strictly increasing
+ must be continuous at some point.
+ \begin{hint}
+ Project gaps onto the $y$-axis.
+ Use the fact that uncountably many positive reals cannot have finite sum.
+ \end{hint}
+ \begin{sol}
+ Assume for contradiction it is completely discontinuous;
+ by scaling set $f(0) = 0$, $f(1) = 1$ and focus just on $f : [0,1] \to [0,1]$.
+ Since it's discontinuous everywhere,
+ for every $x \in [0,1]$ there's an $\eps_x > 0$
+ such that the continuity condition fails.
+ Since the function is strictly increasing,
+ that can only happen if the
+ function misses all $y$-values in the interval
+ $(f(x)-\eps_x, f(x))$ or $(f(x), f(x)+\eps_x)$ (or both).
+
+ Projecting these missing intervals to the $y$-axis you find uncountably
+ many intervals (one for each $x \in [0,1]$) all of which are disjoint.
+ In particular, summing the $\eps_x$ you find that a sum of uncountably
+ many positive reals is at most $1$.
+
+ But in general it isn't possible for an uncountable family $\mathcal F$
+ of positive reals to have finite sum.
+ Indeed, classify the elements into buckets $\frac1k \le x < \frac1{k-1}$
+ for $k \ge 2$, together with one bucket for $x \ge 1$.
+ If the sum is finite then each bucket must be finite,
+ so the collection $\mathcal F$ must be countable, contradiction.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/models.tex b/books/napkin/models.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4d8f838b980c098598543496786314b2709fcadf
--- /dev/null
+++ b/books/napkin/models.tex
@@ -0,0 +1,701 @@
+\chapter{Inner model theory}
+Model theory is \emph{really} meta, so you will have to pay attention here.
+
+Roughly, a ``model of $\ZFC$'' is a set with a binary relation that satisfies the $\ZFC$ axioms,
+just as a group is a set with a binary operation that satisfies the group axioms.
+Unfortunately, unlike with groups, it is very hard for me to give interesting examples of models,
+for the simple reason that we are literally trying to model the entire universe.
+
+\section{Models}
+\prototype{$(\omega, \in)$ obeys $\PowerSet$, $V_\kappa$ is a model for $\kappa$ inaccessible (later).}
+\begin{definition}
+ A \vocab{model} $\MM$ consists of a set $M$ and a
+ binary relation $E \subseteq M \times M$.
+ (The $E$ relation is the ``$\in$'' for the model.)
+\end{definition}
+\begin{remark}
+ I'm only considering \emph{set-sized} models where $M$ is a set.
+ Experts may be aware that I can actually play with $M$ being a class,
+ but that would require too much care for now.
+\end{remark}
+If you have a model, you can ask certain things about it,
+such as ``does it satisfy $\EmptySet$?''.
+Let me give you an example of what I mean and then make it rigorous.
+\begin{example}
+ [A stupid model]
+ Let's take $\MM = (M,E) = \left( \omega, \in \right)$.
+ This is not a very good model of $\ZFC$, but let's see if we can
+ make sense of some of the first few axioms.
+ \begin{enumerate}[(a)]
+ \ii $\MM$ satisfies $\Extensionality$, which is the sentence
+ \[ \forall x \forall y \forall a :
+ \left( a \in x \iff a \in y \right) \implies x = y. \]
+ This just follows from the fact that $E$ is actually $\in$.
+
+ \ii $\MM$ satisfies $\EmptySet$, which is the sentence
+ \[ \exists a : \forall x \; \neg (x \in a). \]
+ Namely, take $a = \varnothing \in \omega$.
+
+ \ii $\MM$ does not satisfy $\Pairing$, since $\{1,3\}$ is not in $\omega$,
+ even though $1, 3 \in \omega$.
+
+ \ii Miraculously, $\MM$ satisfies $\Union$, since for any $n \in \omega$,
+ $\cup n$ is $n-1$ (unless $n=0$).
+ The Union axiom states that
+ \[ \forall a \exists U \quad \forall x \;
+ \left[ (x \in U) \iff (\exists y : x \in y \in a) \right]. \]
+ An important thing to notice is that the ``$\forall a$'' ranges only over
+ the sets in the model of the universe, $\MM$.
+ \end{enumerate}
+\end{example}
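The claim that $\cup n = n - 1$ inside $\omega$ can be tested directly by encoding von Neumann naturals as nested `frozenset`s (a sketch; the encoding function is ours):

```python
from functools import reduce

def von_neumann(n):
    """The von Neumann natural n = {0, 1, ..., n-1} as a frozenset."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}  # successor: n + 1 = n united with {n}
    return s

# witness for the Union axiom inside omega: for n >= 1, U(n) = n - 1
for n in range(1, 8):
    union_of_n = reduce(lambda a, b: a | b, von_neumann(n), frozenset())
    assert union_of_n == von_neumann(n - 1)
```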
+\begin{example}
+ [Important: this stupid model satisfies $\PowerSet$]
+ Most incredibly of all: $\MM = (\omega, \in)$ satisfies $\PowerSet$.
+ This is a really important example.
+
+ You might think this is ridiculous. Look at $2 = \{0,1\}$.
+ The power set of this is $\{0, 1, 2, \{1\}\}$ which is not in the model, right?
+
+ Well, let's look more closely at $\PowerSet$. It states that:
+ \[ \forall x \exists a \forall y (y \in a \iff y \subseteq x). \]
+ What happens if we set $x = 2 = \{0,1\}$?
+ Well, actually, we claim that $a = 3 = \{0,1,2\}$ works.
+ The key point is ``for all $y$'' -- this \emph{only ranges over the objects in $\MM$}.
+ In $\MM$, the only subsets of $2$ are $0 = \varnothing$,
+ $1 = \{0\}$ and $2 = \{0,1\}$.
+ The ``set'' $\{1\}$ in the ``real world'' (in $V$) is not a set in the model $\MM$.
+
+ In particular, you might say that in this strange new world,
+ we have $2^n = n+1$, since $n = \{0,1,\dots,n-1\}$ really does
+ have only $n+1$ subsets.
+\end{example}
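We can make the count $2^n = n+1$ concrete by listing, among a finite cut of $\omega$, exactly those members that are subsets of $n$ (a sketch using our own `frozenset` encoding of von Neumann naturals):

```python
def von_neumann(n):
    """The von Neumann natural n = {0, 1, ..., n-1} as a frozenset."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}
    return s

N = 8
omega_N = [von_neumann(k) for k in range(N)]  # a finite cut of omega

n = 2
x = von_neumann(n)
# the subsets of x that exist *in the model*, i.e. are von Neumann naturals;
# for frozensets, <= is the subset relation
subsets_in_model = [m for m in omega_N if m <= x]
assert len(subsets_in_model) == n + 1
assert set(subsets_in_model) == {von_neumann(k) for k in range(n + 1)}
```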
+
+\begin{example}
+ [Sentences with parameters]
+ The sentences we ask of our model are allowed to have ``parameters'' as well.
+ For example, if $\MM = (\omega, \in)$ as before then $\MM$ satisfies the sentence
+ \[ \forall x \in 3 (x \in 5). \]
+\end{example}
+
+\section{Sentences and satisfaction}
+With this intuitive notion, we can define what it means for a model to satisfy a sentence.
+\begin{definition}
+Note that any sentence $\phi$ can be written in one of five forms:
+\begin{itemize}
+ \ii $x \in y$
+ \ii $x = y$
+ \ii $\neg \psi$ (``not $\psi$'') for some shorter sentence $\psi$
+ \ii $\psi_1 \lor \psi_2$ (``$\psi_1$ or $\psi_2$'')
+ for some shorter sentences $\psi_1$, $\psi_2$
+ \ii $\exists x \psi$ (``exists $x$'') for some shorter sentence $\psi$.
+\end{itemize}
+\end{definition}
+\begin{ques}
+ What happened to $\land$ (and) and $\forall$ (for all)?
+ (Hint: use $\neg$.)
+\end{ques}
+Often (almost always, actually) we will proceed by so-called ``induction on formula complexity'',
+meaning that we define or prove something by induction using this.
+Note that we require all formulas to be finite.
+
+Now suppose we have a sentence $\phi$, like $a = b$ or $\exists a \forall x \neg (x \in a)$,
+plus a model $\MM = (M,E)$.
+We want to ask whether $\MM$ satisfies $\phi$.
+
+To give meaning to this, we have to designate certain variables as \vocab{parameters}.
+For example, if I asked you
+\begin{quote}
+ ``Does $a=b$?''
+\end{quote}
+the first question you would ask is what $a$ and $b$ are.
+So $a$, $b$ would be parameters: I have to give them values for this sentence to make sense.
+
+On the other hand, if I asked you
+\begin{quote}
+ ``Does $\exists a \forall x \neg (x \in a)$?''
+\end{quote}
+then you would just say ``yes''.
+In this case, $x$ and $a$ are \emph{not} parameters.
+In general, parameters are those variables whose meaning is not given by some $\forall$ or $\exists$.
+
+In what follows, we will let $\phi(x_1, \dots, x_n)$ denote a formula $\phi$,
+whose parameters are $x_1$, \dots, $x_n$.
+Note that possibly $n=0$, for example all $\ZFC$ axioms have no parameters.
+
+\begin{ques}
+ Try to guess the definition of satisfaction before reading it below.
+ (It's not very hard to guess!)
+\end{ques}
+
+\begin{definition}
+ Let $\MM=(M,E)$ be a model.
+ Let $\phi(x_1, \dots, x_n)$ be a sentence, and let $b_1, \dots, b_n \in M$.
+ We will define a relation
+ \[ \MM \vDash \phi[b_1, \dots, b_n] \]
+ and say $\MM$ \vocab{satisfies} the sentence $\phi$ with parameters $b_1, \dots, b_n$.
+
+ The relationship is defined by induction on formula complexity as follows:
+ \begin{itemize}
+ \ii If $\phi$ is ``$x_1=x_2$'' then $\MM \vDash \phi[b_1, b_2] \iff b_1 = b_2$.
+ \ii If $\phi$ is ``$x_1\in x_2$'' then $\MM \vDash \phi[b_1, b_2] \iff b_1 \; E \; b_2$. \\
+ (This is what we mean by ``$E$ interprets $\in$''.)
+ \ii If $\phi$ is ``$\neg \psi$'' then
+ $\MM \vDash \phi[b_1, \dots, b_n] \iff \MM \not\vDash \psi[b_1, \dots, b_n]$.
+ \ii If $\phi$ is ``$\psi_1 \lor \psi_2$'' then $\MM \vDash \phi[b_1, \dots, b_n]$
+ means $\MM \vDash \psi_i[b_1, \dots, b_n]$ for some $i=1,2$.
+ \ii Most important case: suppose $\phi$ is $\exists x \psi(x,x_1, \dots, x_n)$.
+ Then $\MM \vDash \phi[b_1, \dots, b_n]$ if and only if
+ \[ \exists b \in M \text{ such that } \MM \vDash \psi[b, b_1, \dots, b_n]. \]
+ Note that $\psi$ has one extra parameter.
+ \end{itemize}
+\end{definition}
+Notice where the information of the model actually gets used.
+We only ever use $E$ in interpreting $x_1 \in x_2$, which is unsurprising.
+But we only ever use the set $M$ when we are running over $\exists$ (and hence $\forall$).
+That's well-worth keeping in mind:
+\begin{moral}
+ The behavior of a model essentially comes from $\exists$ and $\forall$,
+ which search through the entire model $M$.
+\end{moral}
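The inductive definition above is short enough to implement directly. Here is a toy satisfaction checker over a finite model (our own tuple encoding of formulas: variables are referred to by position in the parameter list, and each $\exists$ binds one new variable appended at the end). Note that the set $M$ itself is consulted only in the $\exists$ case, exactly as the moral says:

```python
def satisfies(M, E, phi, env):
    """Decide M |= phi[env] by induction on formula complexity.

    M is a finite list of elements, E a set of pairs interpreting "in".
    Formulas: ("in", i, j), ("eq", i, j), ("not", p), ("or", p, q),
    ("exists", p); variable i refers to env[i].
    """
    op = phi[0]
    if op == "eq":
        return env[phi[1]] == env[phi[2]]
    if op == "in":
        return (env[phi[1]], env[phi[2]]) in E
    if op == "not":
        return not satisfies(M, E, phi[1], env)
    if op == "or":
        return satisfies(M, E, phi[1], env) or satisfies(M, E, phi[2], env)
    if op == "exists":
        # the only place M itself is used: search through the entire model
        return any(satisfies(M, E, phi[1], env + [b]) for b in M)
    raise ValueError(op)

M = list(range(5))                            # a finite "toy omega"
E = {(a, b) for a in M for b in M if a < b}   # interpret "in" as < on naturals

# EmptySet: exists a : not exists x (x in a)
empty_set = ("exists", ("not", ("exists", ("in", 1, 0))))
assert satisfies(M, E, empty_set, [])

# with parameter b_1 = 0 there is no x with x "in" 0 ...
assert not satisfies(M, E, ("exists", ("in", 1, 0)), [0])
# ... but with b_1 = 3 there is
assert satisfies(M, E, ("exists", ("in", 1, 0)), [3])
```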
+
+And finally,
+\begin{definition}
+ A \vocab{model of $\ZFC$} is a model $\MM = (M,E)$ satisfying all $\ZFC$ axioms.
+\end{definition}
+
+We are especially interested in models of the form $(M, \in)$, where $M$ is a \emph{transitive} set.
+(We want our universe to be transitive,
+otherwise we would have elements of sets which are not themselves
+in the universe, which is very strange.)
+Such a model is called a \vocab{transitive model}.
+\begin{abuse}
+ If $M$ is a transitive set, the model $(M, \in)$ will be abbreviated to just $M$.
+\end{abuse}
+\begin{definition}
+ An \vocab{inner model} of $\ZFC$ is a transitive model satisfying $\ZFC$.
+\end{definition}
+
+\section{The Levy hierarchy}
+\prototype{$\mathtt{isSubset}(x,y)$ is absolute. The axiom
+$\EmptySet$ is $\Sigma_1$, $\mathtt{isPowerSetOf}(X,x)$ is $\Pi_1$.}
+A key point to remember is that the behavior of a model is largely determined by $\exists$ and $\forall$.
+It turns out we can say even more than this.
+
+Consider a formula such as
+\[ \mathtt{isEmpty}(x) : \neg \exists a (a \in x) \]
+which checks whether a given set $x$ is empty.
+Technically, this has an ``$\exists$'' in it.
+But somehow this $\exists$ does not really search over the entire model,
+because it is \emph{bounded} to search in $x$.
+That is, we might informally rewrite this as
+\[ \neg (\exists a \in x) \]
+which doesn't fit into the strict form,
+but points out that we are only looking over $a \in x$.
+We call such a quantifier a \vocab{bounded quantifier}.
+%and write it
+%in the way we see it in most mathematics, such as
+%\[ (\exists x \in a) \quad\text{or}\quad (\forall x \in a). \]
+%To be painfully explicit,
+%$\exists x \in a \psi$ is short for $\exists x (x \in a \land \psi)$,
+%while $\forall x \in a \psi$ is short for $\forall x (x \in a \implies \psi)$.
+
+We like sentences with bounded quantifiers because they designate
+properties which are \vocab{absolute} over transitive models.
+It doesn't matter how strange your surrounding model $M$ is.
+As long as $M$ is transitive,
+\[ M \vDash \mathtt{isEmpty}(\varnothing) \]
+will always hold.
+Similarly, the sentence
+\[ \mathtt{isSubset}(x,y) : x \subseteq y, \text{ i.e.\ } \forall a \in x \; (a \in y) \]
+is absolute.
+Sentences with this property are called $\Sigma_0$ or $\Pi_0$.
+
+The situation is different with a sentence like
+\[
+ \mathtt{isPowerSetOf}(y,x) :
+ \forall z \left( z \subseteq x \iff z \in y \right)
+\]
+which in English means ``$y$ is the power set of $x$'', or just $y = \PP(x)$.
+The $\forall z$ is \emph{not} bounded here.
+This weirdness is what allows things like
+\[ \omega \vDash \text{``$\{0,1,2\}$ is the power set of $\{0,1\}$''} \]
+and hence
+\[ \omega \vDash \PowerSet \]
+which was our stupid example earlier.
+The sentence $\mathtt{isPowerSetOf}$ consists of an unbounded $\forall$ followed
+by an absolute sentence, so we say it is $\Pi_1$.
+
+More generally, the \vocab{Levy hierarchy} keeps track of how bounded our
+quantifiers are.
+Specifically,
+\begin{itemize}
+ \ii Formulas which have only bounded quantifiers
+ are $\Delta_0 = \Sigma_0 = \Pi_0$.
+ \ii Formulas of the form $\exists x_1 \dots \exists x_k \psi$
+ where $\psi$ is $\Pi_n$ are considered $\Sigma_{n+1}$.
+ \ii Formulas of the form $\forall x_1 \dots \forall x_k \psi$
+ where $\psi$ is $\Sigma_n$ are considered $\Pi_{n+1}$.
+\end{itemize}
+(A formula which is both $\Sigma_n$ and $\Pi_n$ is called $\Delta_n$, but we won't
+use this except for $n=0$.)
+
+\begin{example}
+ [Examples of $\Delta_0$ sentences]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The sentences $\mathtt{isEmpty}(x)$, $x \subseteq y$, as discussed above.
+ \ii The formula ``$x$ is transitive'' can be expanded as a $\Delta_0$ sentence.
+ \ii The formula ``$x$ is an ordinal'' can be expanded as a $\Delta_0$ sentence.
+ \end{enumerate}
+\end{example}
+\begin{exercise}
+ Write out the expansions for ``$x$ is transitive'' and ``$x$ is an ordinal''
+ in a $\Delta_0$ form.
+\end{exercise}
+\begin{example}
+ [More complex formulas]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The axiom $\EmptySet$ is $\Sigma_1$; it is $\exists a (\mathtt{isEmpty}(a))$,
+ and $\mathtt{isEmpty}(a)$ is $\Delta_0$.
+ \ii The formula ``$y = \PP(x)$'' is $\Pi_1$, as discussed above.
+ \ii The formula ``$x$ is countable'' is $\Sigma_1$.
+ One way to phrase it is ``$\exists f$ an injective map $x \injto \omega$'',
+ which necessarily has an unbounded ``$\exists f$''.
+ \ii The axiom $\PowerSet$ is $\Pi_3$:
+ \[ \forall y \exists P \forall x (x\subseteq y \iff x \in P). \]
+ \end{enumerate}
+\end{example}
+
+\section{Substructures, and Tarski-Vaught}
+Let $\MM_1 = (M_1, E_1)$ and $\MM_2 = (M_2, E_2)$ be models.
+\begin{definition}
+ We say that $\MM_1 \subseteq \MM_2$ if $M_1 \subseteq M_2$ and
+ $E_1$ agrees with $E_2$; we say $\MM_1$ is a \vocab{substructure} of $\MM_2$.
+\end{definition}
+
+That's boring. The good part is:
+\begin{definition}
+ We say $\MM_1 \prec \MM_2$, or $\MM_1$ is an \vocab{elementary substructure} of $\MM_2$,
+ if $\MM_1 \subseteq \MM_2$ and for \emph{every} sentence $\phi(x_1, \dots, x_n)$
+ and parameters $b_1, \dots, b_n \in M_1$, we have
+ \[
+ \MM_1 \vDash \phi[b_1, \dots, b_n] \iff
+ \MM_2 \vDash \phi[b_1, \dots, b_n].
+ \]
+\end{definition}
+In other words, $\MM_1$ and $\MM_2$ agree on every sentence possible.
+Note that the $b_i$ have to come from $\MM_1$; if the $b_i$ came from $\MM_2$ then
+asking something of $\MM_1$ wouldn't make sense.
+
+Let's ask now: how would $\MM_1 \prec \MM_2$ fail to be true?
+If we look at the possible sentences, none of the atomic formulas,
+nor the ``$\land$'' and ``$\neg$'', are going to cause issues.
+
+The intuition you should be getting by now is that things go
+wrong once we hit $\forall$ and $\exists$.
+They won't go wrong for bounded quantifiers.
+But unbounded quantifiers search the entire model, and that's where things go wrong.
+
+To give a ``concrete example'':
+imagine $\MM_1$ is MIT, and $\MM_2$ is the state of Massachusetts.
+If $\MM_1$ thinks there exist hackers at MIT,
+certainly there exist hackers in Massachusetts.
+Where things go wrong is something like:
+\[ \MM_2 \vDash \text{``$\exists x$ : $x$ is a course numbered $> 50$''}. \]
+This is true for $\MM_2$ because we can
+take the witness $x = \text{Math 55}$, say.
+But it's false for $\MM_1$,
+because at MIT all courses are numbered $18.701$ or something similar.
+\begin{moral}
+ The issue is that the \emph{witnesses}
+ for statements in $\MM_2$ do not necessarily propagate
+ down to witnesses for $\MM_1$.
+\end{moral}
+
+The Tarski-Vaught test says this is the only impediment:
+if every witness in $\MM_2$ can be replaced by one in $\MM_1$ then $\MM_1 \prec \MM_2$.
+\begin{lemma}
+ [Tarski-Vaught]
+ Let $\MM_1 \subseteq \MM_2$.
+ Then $\MM_1 \prec \MM_2$ if and only if:
+ For every sentence $\phi(x, x_1, \dots, x_n)$ and parameters
+ $b_1, \dots, b_n \in M_1$:
+ if there is a witness $\tilde b \in M_2$
+ to $\MM_2 \vDash \phi[\tilde b, b_1, \dots, b_n]$
+ then there is a witness $b \in M_1$ to $\MM_1 \vDash \phi[b, b_1, \dots, b_n]$.
+\end{lemma}
+\begin{proof}
+ Easy after the above discussion.
+ To formalize it, use induction on formula complexity.
+\end{proof}
+
+\section{Obtaining the axioms of $\ZFC$}
+We now want to write down conditions for $M$ to satisfy $\ZFC$ axioms.
+The idea is that almost all the $\ZFC$ axioms are just $\Sigma_1$
+claims about certain desired sets,
+and so verifying an axiom reduces to checking some appropriate ``closure'' condition:
+that the witness to the axiom is actually in the model.
+
+For example, the $\EmptySet$ axiom is ``$\exists a (\mathtt{isEmpty}(a))$'',
+and so we're happy as long as $\varnothing \in M$, which is of course
+true for any nonempty transitive set $M$.
+
+\begin{lemma}[Transitive sets inheriting $\ZFC$]
+ \label{lem:transitive_ZFC}
+ Let $M$ be a nonempty transitive set. Then
+ \begin{enumerate}[(i)]
+ \ii $M$ satisfies $\Extensionality$, $\Foundation$, $\EmptySet$.
+ \ii $M \vDash \Pairing$ if $x,y \in M \implies \{x,y\} \in M$.
+ \ii $M \vDash \Union$ if $x \in M \implies \cup x \in M$.
+ \ii $M \vDash \PowerSet$ if $x \in M \implies \PP(x) \cap M \in M$.
+ \ii $M \vDash \Replacement$ if for every $x \in M$
+ and every function $F : x \to M$
+ which is $M$-definable with parameters,
+ we have $F``x \in M$ as well.
+ \ii $M \vDash \Infinity$ as long as $\omega \in M$.
+ \end{enumerate}
+\end{lemma}
+Here, a set $X \subseteq M$ is \vocab{$M$-definable with parameters}
+if it can be realized as
+\[ X = \left\{ x \in M \mid \phi[x, b_1, \dots, b_n] \right\} \]
+for some (fixed) choice of parameters $b_1,\dots,b_n \in M$.
+We allow $n=0$, in which case we say $X$ is \vocab{$M$-definable without parameters}.
+Note that $X$ need not itself be in $M$!
+As a trivial example, $X = M$ is $M$-definable without parameters
+(just take $\phi[x]$ to always be true), and certainly we do not have $X \in M$.
+\begin{exercise}
+ Verify (i)-(iv) above.
+\end{exercise}
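For instance, here is one way the verification of (ii) might go (a sketch only; the remaining parts are similar):

```latex
\begin{proof}[Sketch of (ii)]
  Suppose $x, y \in M \implies \{x,y\} \in M$.
  The axiom $\Pairing$ asserts
  $\forall x \forall y \exists z \; (x \in z \wedge y \in z)$.
  Fix $x, y \in M$; we need a witness $z \in M$.
  By hypothesis $z = \{x, y\}$ lies in $M$,
  and ``$x \in z \wedge y \in z$'' is $\Delta_0$, hence absolute,
  so $M \vDash x \in z \wedge y \in z$, as needed.
\end{proof}
```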
+\begin{remark}
+ Converses to the statements of \Cref{lem:transitive_ZFC}
+ are true for all claims other than (vi).
+\end{remark}
+
+\section{Mostowski collapse}
+Up until now I have been only talking about transitive models,
+because they were easier to think about.
+Here's a second, better reason we might only care about transitive models.
+
+\begin{lemma}
+ [Mostowski collapse lemma]
+ Let $X = (X, \in)$ be a model satisfying Extensionality,
+ where $X$ is a set (possibly not transitive).
+ Then there exists an isomorphism $\pi \colon X \to M$ for
+ a transitive model $M = (M,\in)$.
+\end{lemma}
+
+This is also called the \emph{transitive collapse}.
+In fact, both $\pi$ and $M$ are unique.
+
+\begin{proof}
+ The idea behind the proof is very simple.
+ Since $\in$ is well-founded and extensional
+ (satisfies $\Foundation$ and $\Extensionality$, respectively),
+ we can look at the $\in$-minimal element $x_\varnothing$ of $X$.
+ Clearly, we want to send that to $0 = \varnothing$.
+
+ Then we take the next-smallest set under $\in$, and send it to $1 = \{\varnothing\}$.
+ We ``keep doing this''; it's not hard to see this does exactly what we want.
+
+ To formalize, define $\pi$ by transfinite recursion:
+ \[ \pi(x) \defeq \left\{ \pi(y) \mid y \in x \right\}. \]
+ This $\pi$, by construction, does the trick.
+\end{proof}
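To see the recursion in action, here is a toy computational sketch (the encoding is my own, not from the text): a finite well-founded, extensional relation $E$ on a set of labels, collapsed by the same recursion $\pi(x) = \{\pi(y) \mid y \mathrel{E} x\}$.

```python
# Toy version of the Mostowski collapse (illustrative encoding of my
# own): E[x] lists the E-predecessors of x, and E is assumed to be
# well-founded and extensional.
def collapse(X, E):
    """Return pi(x) for each x, where pi(x) = {pi(y) : y E x}.

    Sets are encoded as frozensets so that they can nest.
    """
    memo = {}

    def pi(x):
        if x not in memo:
            memo[x] = frozenset(pi(y) for y in E[x])
        return memo[x]

    return {x: pi(x) for x in X}

# Example: a E b, a E c, b E c.  The collapse sends a, b, c to the
# von Neumann naturals 0, 1, 2 respectively.
E = {"a": [], "b": ["a"], "c": ["a", "b"]}
M = collapse(E.keys(), E)
```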
+
+\begin{remark}
+ [Digression for experts]
+ Earlier versions of Napkin claimed this was true for general models
+ $\mathscr X = (X, E)$ with $\mathscr X \vDash \Foundation + \Extensionality$.
+ This is false; it does not even imply $E$ is well-founded,
+ because there may be infinite descending chains of subsets of $X$
+ which do not live in $X$ itself.
+ Another issue is that $E$ may not be set-like.
+\end{remark}
+
+The picture of this is ``collapsing'' the elements of $X$ down
+to the bottom of $V$, hence the name.
+\missingfigure{Picture of Mostowski collapse}
+
+
+\section{Adding an inaccessible}
+\prototype{$V_\kappa$}
+At this point you might be asking, well, where's my model of $\ZFC$?
+
+I unfortunately have to admit now: $\ZFC$ can never prove that there is a model of $\ZFC$
+(unless $\ZFC$ is inconsistent, but that would be even worse).
+This is a consequence of G\"odel's second incompleteness theorem.
+
+Nonetheless, with some very modest assumptions added,
+we can actually show that a model \emph{does} exist:
+for example, assuming that there exists a strongly inaccessible cardinal $\kappa$ would do the trick:
+$V_\kappa$ will be such a model (\Cref{prob:inaccessible_model}).
+Intuitively you can see why: $\kappa$ is so big that sets of rank lower than it can't escape it,
+even if we take their power sets or apply any other construction that $\ZFC$ permits.
+
+More pessimistically,
+this shows that it's impossible to prove in $\ZFC$ that such a $\kappa$ exists.
+Nonetheless, we now proceed under $\ZFC^+$ for convenience,
+which adds the existence of such a $\kappa$
+as a final axiom.
+So we now have a model $V_\kappa$ to play with. Joy!
+
+Great. Now we do something \emph{really} crazy.
+\begin{theorem}[Countable transitive model]
+ Assume $\ZFC^+$. Then there exists a transitive model $X$ of $\ZFC$
+ such that $X$ is a \emph{countable} set.
+\end{theorem}
+\begin{proof}
+ Fasten your seat belts.
+
+ First, since we assumed $\ZFC^+$,
+ we can take $V_\kappa = (V_\kappa, \in)$ as our model of $\ZFC$.
+ Start with the set $X_0 = \varnothing$.
+ Then for every integer $n$, we do the following to get $X_{n+1}$.
+ \begin{itemize}
+ \ii Start with $X_{n+1}$ containing every element of $X_n$.
+ \ii Consider a formula $\phi(x, x_1, \dots, x_n)$
+ and $b_1, \dots, b_n$ in $X_n$.
+ Suppose that $V_\kappa$ thinks there is a $b \in V_\kappa$ for which
+ \[ V_\kappa \vDash \phi[b, b_1, \dots, b_n]. \]
+ We then add in the element $b$ to $X_{n+1}$.
+ \ii We do this for \emph{every possible formula in the language of set theory}.
+ We also have to put in \emph{every possible set of parameters} from the previous set $X_n$.
+ \end{itemize}
+ At every step $X_n$ is countable.
+ Reason: there are countably many possible finite sets of parameters in $X_n$,
+ and countably many possible formulas, so in total we only ever add in countably many things
+ at each step.
+ This exhibits an infinite nested sequence of countable sets
+ \[ X_0 \subseteq X_1 \subseteq X_2 \subseteq \dots \]
+ None of these is necessarily an elementary substructure of $V_\kappa$,
+ because each $X_n$ may rely on witnesses that only appear in $X_{n+1}$.
+ So we instead \emph{take the union}:
+ \[ X = \bigcup_n X_n. \]
+ This satisfies the Tarski-Vaught test, and is countable.
+
+ There is one minor caveat: $X$ might not be transitive.
+ We don't care, because we just take its Mostowski collapse.
+\end{proof}
+
+Please take a moment to admire how insane this is.
+It hinges irrevocably on the fact that there are countably many sentences we can write down.
+
+\begin{remark}
+ This proof relies heavily on the Axiom of Choice
+ when we add in the element $b$ to $X_{n+1}$.
+ Without Choice, there is no way of making these decisions all at once.
+
+ Usually, the right way to formalize the Axiom of Choice usage is,
+ for every formula $\phi(x, x_1, \dots, x_n)$, to pre-commit (at the very beginning)
+ to a function $f_\phi(x_1, \dots, x_n)$, such that given any $b_1, \dots, b_n$
+ $f_\phi(b_1, \dots, b_n)$ will spit out the suitable value of $b$ (if one exists).
+ Personally, I think this is hiding the spirit of the proof, but it does
+ make it clear how exactly Choice is being used.
+
+ These $f_\phi$'s have a name: \vocab{Skolem functions}.
+\end{remark}
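As an illustrative toy (the structure $(\QQ, +, <)$ and the witness functions below are my own choices, not from the text), one can pre-commit to Skolem-style functions and then close a starting set under them, stage by stage, just as in the proof above:

```python
# Toy illustration of "pre-committed" Skolem functions.  For instance:
#   for "exists b with b + b = x", pre-commit to  f(x) = x / 2;
#   for "exists b with b > x",     pre-commit to  g(x) = x + 1.
from fractions import Fraction

skolem = [lambda x: x / 2, lambda x: x + 1]

def hull(start, rounds):
    """X_0 = start, and X_{n+1} adds every Skolem witness over X_n."""
    X = set(start)
    for _ in range(rounds):
        X |= {f(x) for f in skolem for x in X}
    return X

H = hull({Fraction(1)}, rounds=3)
# Every stage is finite here, so the union over all stages is countable,
# mirroring the countability argument in the construction above.
```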
+
+The trick we used in the proof works in more general settings:
+\begin{theorem}
+ [Downward L\"owenheim-Skolem theorem]
+ Let $\MM = (M,E)$ be a model, and $A \subseteq M$.
+ Then there exists a set $B$ (called the \vocab{Skolem hull} of $A$)
+ with $A \subseteq B \subseteq M$,
+ such that $(B,E) \prec \MM$, and
+ \[ \left\lvert B \right\rvert =
+ \max \left\{ \omega, \left\lvert A \right\rvert \right\}. \]
+\end{theorem}
+In our case, what we did was simply take $A$ to be the empty set.
+\begin{ques}
+ Prove this. (Exactly the same proof as before.)
+\end{ques}
+
+
+\section{FAQ's on countable models}
+The most common one is ``how is this possible?'',
+with runner-up ``what just happened''.
+
+Let me do my best to answer the first question.
+It seems like there are two things running up against each other:
+\begin{enumerate}[(1)]
+ \ii $M$ is a transitive model of $\ZFC$, but its universe is countable.
+ \ii $\ZFC$ tells us there are uncountable sets!
+\end{enumerate}
+(This has confused so many people it has a name, Skolem's paradox.)
+
+The reason this works I actually pointed out earlier:
+\emph{countability is not absolute, it is a $\Sigma_1$ notion}.
+
+Recall that a set $x$ is countable if
+\emph{there exists} an injective map $x \injto \omega$.
+The first statement just says that \emph{in the universe $V$},
+there is an injective map $F: M \injto \omega$.
+In particular, for any $x \in M$ (hence $x \subseteq M$, since $M$ is transitive),
+$x$ is countable \emph{in $V$}.
+This is the content of the first statement.
+
+But for $M$ to be a model of $\ZFC$, $M$ only has to think statements in $\ZFC$ are true.
+More to the point, the fact that $\ZFC$ tells us there are uncountable sets means
+\[ M \vDash \text{$\exists x$ uncountable}. \]
+In other words,
+\[ M \vDash \exists x \forall f
+ \text{ If $f : x \to \omega$ then $f$ isn't injective}. \]
+The key point is that the $\forall f$ ranges only over functions in our tiny model $M$.
+It is true that in the ``real world'' $V$, there are injective functions $f : x \to \omega$.
+But $M$ has no idea they exist!
+It is a brain in a vat: $M$ is oblivious to any information outside it.
+
+So in fact, every ordinal which appears in $M$ is countable in the real world.
+It is just not countable in $M$.
+Since $M \vDash \ZFC$, $M$ is going to think there is some smallest uncountable cardinal,
+say $\aleph_1^M$.
+It will be the smallest (infinite) ordinal in $M$
+with the property that there is no bijection \emph{in the model $M$}
+between $\aleph_1^M$ and $\omega$.
+However, we necessarily know that such a bijection is going to exist in the real world $V$.
+
+Put another way, cardinalities in $M$ can look vastly different from those in the real world,
+because cardinality is measured by bijections, which I guess is inevitable, but leads to chaos.
+
+\section{Picturing inner models}
+Here is a picture of a countable transitive model $M$.
+
+\begin{center}
+ \begin{asy}
+ size(14cm);
+ pair A = (12,30);
+ pair B = -conj(A);
+ pair M = midpoint(A--B);
+ pair O = origin;
+ MP("V", A, dir(10));
+ draw(A--O--B);
+
+ fill(A--O--B--cycle, opacity(0.3)+palecyan);
+ MP("M", 0.7*M, 3*dir(0)+dir(45));
+
+ MP("V_0 = \varnothing", origin, dir(-20));
+ MP("V_1 = \{\varnothing\}", 0.05*A, dir(0));
+ MP("V_2 = \{\varnothing, \{\varnothing\} \}", 0.10*A, dir(0));
+
+ pair A1 = 0.4*A;
+ pair B1 = 0.4*B;
+ draw(MP("V_\omega", A1, dir(0))--B1);
+ draw(MP("V_{\omega+1} = \mathcal P(V_\omega)", 0.45*A, dir(0))--0.45*B);
+ Drawing("\omega", 0.45*M, dir(45));
+
+ filldraw(O--A1--(A1+0.15*M)..(0.7*M)..(B1+0.15*M)--B1--cycle,
+ opacity(0.3)+lightgreen, heavygreen+1);
+ draw(O--0.7*M, heavygreen+1);
+
+ Drawing("\aleph_1^V", 0.80*M, dir(0));
+ Drawing("\aleph_2^V", 0.90*M, dir(0));
+ Drawing("\aleph_1^M", 0.55*M, dir(0));
+ Drawing("\aleph_2^M", 0.60*M, dir(0));
+
+ pair F = 0.6*B+0.15*A;
+ Drawing("f", F, dir(135));
+ draw(F--0.55*M, dotted, EndArrow, Margins);
+ draw(F--0.45*M, dotted, EndArrow, Margins);
+
+ draw(0.7*M--M);
+ MP("\mathrm{On}^V", M, dir(90));
+ MP("\mathrm{On}^M", 0.7*M, dir(135));
+ \end{asy}
+\end{center}
+
+Note that $M$ and $V$ must agree on finite sets,
+since every finite set has a formula that can express it.
+However, past $V_\omega$ the model and the true universe start to diverge.
+
+The entire model $M$ is countable, so it only occupies a small
+portion of the universe, below the first uncountable cardinal $\aleph_1^V$
+(where the superscript means ``of the true universe $V$'').
+The ordinals in $M$ are precisely the ordinals of $V$ which happen to live inside the model,
+because the sentence ``$\alpha$ is an ordinal'' is absolute.
+On the other hand, $M$ has only a portion of these ordinals, since it is only
+a lowly set, and a countable set at that.
+To denote the ordinals of $M$, we write $\On^M$, where the superscript means
+``the ordinals as computed in $M$''.
+Similarly, $\On^V$ will now denote the ``set of true ordinals''.
+
+Nonetheless, the model $M$ has its own version of the first uncountable
+cardinal $\aleph_1^M$.
+In the true universe, $\aleph_1^M$ is countable (below $\aleph_1^V$),
+but the bijection witnessing this cannot live inside $M$.
+That's why $M$ can think $\aleph_1^M$ is uncountable,
+even if it is a countable cardinal in the original universe.
+
+So our model $M$ is a brain in a vat.
+It happens to believe all the axioms of $\ZFC$, and so every
+statement that is true in $M$ could conceivably be true in $V$ as well.
+But $M$ can't see the universe around it; it has no idea that what it believes is
+the uncountable $\aleph_1^M$ is really just an ordinary countable cardinal.
+
+\section\problemhead
+\begin{sproblem}
+ Show that for any transitive model $M$, the set of ordinals in $M$
+ is itself some ordinal.
+\end{sproblem}
+
+\begin{dproblem}
+ Assume $\MM_1 \subseteq \MM_2$. Show that
+ \begin{enumerate}[(a)]
+ \ii If $\phi$ is $\Delta_0$,
+ then $\MM_1 \vDash \phi[b_1, \dots, b_n] \iff \MM_2 \vDash \phi[b_1, \dots, b_n]$.
+ \ii If $\phi$ is $\Sigma_1$,
+ then $\MM_1 \vDash \phi[b_1, \dots, b_n] \implies \MM_2 \vDash \phi[b_1, \dots, b_n]$.
+ \ii If $\phi$ is $\Pi_1$,
+ then $\MM_2 \vDash \phi[b_1, \dots, b_n] \implies \MM_1 \vDash \phi[b_1, \dots, b_n]$.
+ \end{enumerate}
+ (This should be easy if you've understood the chapter.)
+\end{dproblem}
+
+\begin{dproblem}[Reflection]
+ \gim
+ Let $\kappa$ be an inaccessible cardinal such that $|V_\alpha| < \kappa$ for all $\alpha < \kappa$.
+ Prove that for any $\delta < \kappa$ there exists $\delta < \alpha < \kappa$
+ such that $V_\alpha \prec V_\kappa$; in other words,
+ the set of $\alpha$ such that $V_\alpha \prec V_\kappa$ is \emph{unbounded} in $\kappa$.
+ This means that properties of $V_\kappa$ reflect down to properties of $V_\alpha$.
+ \begin{hint}
+ This is very similar to the proof of L\"owenheim-Skolem.
+ For a sentence $\phi$, let $f_\phi$
+ send $\alpha$ to the least $\beta < \kappa$ such that for all $\vec b \in V_\alpha$, if there exists $a \in V_\kappa$ such that $V_\kappa \vDash \phi[a, \vec b]$ then $\exists a \in V_\beta$ such that $V_\kappa \vDash \phi[a, \vec b]$.
+ (To prove this $\beta$ exists, use the fact that $\kappa$ is regular.)
+ Then, take the supremum over the countably many sentences for each $\alpha$.
+ \end{hint}
+ \begin{sol}
+ For a sentence $\phi$ let \[ f_\phi : \kappa \to \kappa \]
+ send $\alpha$ to the least $\beta < \kappa$ such that for all $\vec b \in V_\alpha$, if there exists $a \in V_\kappa$ such that $V_\kappa \vDash \phi[a, \vec b]$ then $\exists a \in V_\beta$ such that $V_\kappa \vDash \phi[a, \vec b]$.
+
+ We claim this is well-defined.
+ There are only $\left\lvert V_\alpha \right\rvert^n$ many possible choices of $\vec b$,
+ and in particular there are fewer than $\kappa$ of these
+ (since we know that $\left\lvert V_\alpha \right\rvert < \kappa$; compare \Cref{prob:strongly_inaccessible}).
+ Otherwise, we could construct a cofinal map from $\left\lvert V_\alpha \right\rvert^n$
+ into $\kappa$ by mapping each vector $\vec b$ to a $\beta$ for which the proposition fails.
+ And that's impossible since $\kappa$ is regular!
+
+ In other words, what we've done is fix $\phi$ and then use Tarski-Vaught on all the $\vec b \in V_\alpha^n$.
+ Now let $g : \kappa \to \kappa$ be defined by
+ \[ \alpha \mapsto \sup_\phi f_\phi(\alpha). \]
+ Since $\kappa$ is regular and there are only countably many formulas, $g(\alpha)$ is well-defined.
+
+ Check that if $\alpha$ has the property that $g$ maps $\alpha$ into itself (in other words, $\alpha$ is closed under $g$), then by the Tarski-Vaught test, we have $V_\alpha \prec V_\kappa$.
+
+ So it suffices to show there are arbitrarily large $\alpha < \kappa$ which are closed under $g$.
+ Fix $\alpha_0$. Let $\alpha_1 = g(\alpha_0)$, et cetera, and define
+ \[ \alpha = \sup_{n < \omega} \alpha_n. \]
+ This $\alpha$ is closed under $g$, and by making $\alpha_0$ arbitrarily large we can make $\alpha$ as large as we like.
+ \end{sol}
+\end{dproblem}
+
+\begin{sproblem}
+ [Inaccessible cardinals produce models]
+ \label{prob:inaccessible_model}
+ \gim
+ Let $\kappa$ be an inaccessible cardinal.
+ Prove that $V_\kappa$ is a model of $\ZFC$.
+\end{sproblem}
diff --git a/books/napkin/mor-scheme.tex b/books/napkin/mor-scheme.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4cad7731fd8242eddbae950b850bf68ef98bc9dc
--- /dev/null
+++ b/books/napkin/mor-scheme.tex
@@ -0,0 +1,907 @@
+\chapter{Morphisms of locally ringed spaces}
+Having set up the definition of a locally ringed space,
+we are almost ready to define morphisms between them.
+Throughout this chapter, you should imagine your ringed spaces
+are the affine schemes we have so painstakingly defined;
+but it will not change anything to work in the generality
+of arbitrary locally ringed spaces.
+
+\section{Morphisms of ringed spaces via sections}
+Let $(X, \OO_X)$ and $(Y, \OO_Y)$ be ringed spaces.
+We want to define what it means
+to have a function $\pi \colon X \to Y$ between them.\footnote{Notational
+ connotations: for ringed spaces, $\pi$ will be used for maps,
+ since $f$ is often used for sections.}
+We start by requiring the map to be continuous,
+but this is not enough: we also have the sheaves to take care of!
+
+Well, you might remember what we did for baby ringed spaces:
+any time we had a function on an open set $U \subseteq Y$,
+we wanted there to be an analogous function on $\pi\pre(U) \subseteq X$.
+For baby ringed spaces, this was done by composition,
+since the elements of the sheaf \emph{were} really complex valued functions:
+\[ \pi^\sharp\phi \text{ was defined as } \phi \circ \pi. \]
+The upshot was that we got a map $\OO_Y(U) \to \OO_X(\pi\pre(U))$
+for every open set $U$.
+\begin{center}
+ \begin{asy}
+ size(13cm);
+ bigblob("$X$");
+ pair p = (0.5,0);
+ filldraw(CR(p, 1), opacity(0.2)+lightgreen, deepgreen+dashed);
+ label("$\pi^{\text{pre}}(U)$", p+dir(135), dir(135), deepgreen);
+
+ transform t = scale(0.8) * shift(14*dir(180));
+ add(t * CC());
+ p = t*p;
+
+ bigblob("$Y$");
+ pair q = (0,0.5);
+ filldraw(CR(q, 1.2), opacity(0.2)+lightred, red+dashed);
+ label("$U$", q+1.2*dir(45), dir(45), red);
+
+ draw( (-9,0.5)--(-3,0.5), EndArrow);
+ label("$\pi$", (-6,0.5), dir(90));
+ label("$\boxed{f \in \mathcal O_Y(U)}$", (0,-5), red);
+ label("$\boxed{\pi^\sharp f \in \mathcal O_X(\pi^{\text{pre}}(U))}$",
+ t*(0,-6), deepgreen);
+ \end{asy}
+\end{center}
+
+Now, for general locally ringed spaces,
+the sections are just random rings,
+which may not be so well-behaved.
+So the solution is that we \emph{include} the data of $\pi^\sharp$
+as part of the definition of a morphism.
+\begin{definition}
+ A \vocab{morphism of ringed spaces}
+ $(X, \OO_X) \to (Y, \OO_Y)$ consists of a pair $(\pi, \pi^\sharp)$
+ where $\pi \colon X \to Y$ is a continuous map
+ (of topological spaces),
+ and $\pi^\sharp$ consists of a choice of ring homomorphism
+ \[ \pi^\sharp_U \colon \OO_Y(U) \to \OO_X(\pi\pre(U)) \]
+ for every open set $U \subseteq Y$, such that the restriction diagram
+ \begin{center}
+ \begin{tikzcd}
+ \OO_Y(U) \ar[r] \ar[d] & \OO_X(\pi\pre(U)) \ar[d] \\
+ \OO_Y(V) \ar[r] & \OO_X(\pi\pre(V))
+ \end{tikzcd}
+ \end{center}
+ commutes for $V \subseteq U$.
+\end{definition}
+\begin{abuse}
+ We will abbreviate $(\pi, \pi^\sharp) \colon (X, \OO_X) \to (Y, \OO_Y)$
+ to just $\pi \colon X \to Y$,
+ despite the fact this notation is exactly
+ the same as that for topological spaces.
+\end{abuse}
+
+There is an obvious identity map,
+and so we can also define isomorphism etc.\ in the categorical way.
+
+\section{Morphisms of ringed spaces via stalks}
+Unsurprisingly, the sections are clumsier to work with
+than the stalks, now that we have grown to love localization.
+So rather than specifying $\pi^\sharp_U$ on every open set $U$,
+it seems better if we could do it by stalks
+(there are fewer stalks than open sets,
+so this saves us a lot of work!).
+
+We start out by observing that we \emph{do}
+get a morphism of stalks.
+\begin{proposition}
+ [Induced stalk morphisms]
+ If $\pi \colon X \to Y$ is a map of ringed spaces
+ sending $\pi(p) = q$, then we get a map
+ \[ \pi_p^\sharp \colon \OO_{Y,q} \to \OO_{X,p} \]
+ whenever $\pi(p) = q$.
+\end{proposition}
+
+This means you can draw a morphism of locally ringed spaces
+as a continuous map on the topological space,
+plus for each $\pi(p) = q$,
+an assignment of each germ at $q$ to a germ at $p$.
+\begin{center}
+\begin{asy}
+ size(10cm);
+ pair p = (-4,0);
+ pair q = ( 4,0);
+ filldraw(ellipse(p, 2, 1 ), opacity(0.2)+lightcyan, black);
+ filldraw(ellipse(q, 2, 1 ), opacity(0.2)+lightcyan, black);
+ label("$X$", p+dir(270), dir(270), blue);
+ label("$Y$", q+dir(270), dir(270), blue);
+ pair pt = (-4, 3.5);
+ pair qt = ( 4, 4.3);
+ draw( p--pt, red );
+ draw( q--qt, red );
+ label("$\mathcal O_{X,p}$", pt, dir(90), red);
+ label("$\mathcal O_{Y,q}$", qt, dir(90), red);
+ dot("$p$", p, dir(-90), blue);
+ dot("$q$", q, dir(-90), blue);
+ pair pg = (-4, 1.6);
+ pair qg = (4, 2.8);
+ dot("$[s]_q$", qg, dir(0), red+6);
+ dot("$[\pi^\sharp(s)]_p$", pg, dir(180), red+6);
+ draw(qg--pg, deepgreen, EndArrow, Margin(4,4));
+ label("$\pi^\sharp_p$", midpoint(qg--pg), dir(90), deepgreen);
+ draw( (-1.5,0)--(1.5,0), deepgreen, EndArrow );
+ label("$\pi$", origin, dir(-90), deepgreen);
+\end{asy}
+\end{center}
+Again, compare this to the pullback picture:
+this is roughly saying that if a function $f$ has some enriched value at $q$,
+then $\pi^\sharp(f)$ should be assigned a corresponding enriched value at $p$.
+The analogy is not perfect since the stalks at $q$ and $p$
+may be different rings in general,
+but there should at least be a ring homomorphism (the assignment).
+\begin{proof}
+ If $(s,U)$ is a germ at $q$,
+ then $(\pi^\sharp(s), \pi\pre(U))$ is a germ at $p$,
+ and this is a well-defined morphism because
+ of compatibility with restrictions.
+\end{proof}
+
+We already obviously have uniqueness in the following sense.
+\begin{proposition}
+ [Uniqueness of morphisms via stalks]
+ Consider a map of ringed spaces
+ $(\pi, \pi^\sharp) \colon (X, \OO_X) \to (Y, \OO_Y)$
+ and the corresponding map $\pi^\sharp_p$ of stalks.
+ Then $\pi^\sharp$ is uniquely determined by $\pi^\sharp_p$.
+\end{proposition}
+\begin{proof}
+ Given a section $s \in \OO_Y(U)$,
+ let
+ \[ t = \pi^\sharp_U(s) \in \OO_X(\pi\pre(U)) \]
+ denote the image under $\pi^\sharp$.
+
+ We know the germ $t_p$ for each $p \in \pi\pre(U)$,
+ since it equals $\pi^\sharp_p(s_{\pi(p)})$ by definition.
+ That is, we know all the germs of $t$.
+ So we know $t$.
+\end{proof}
+
+However, it seems clear that not every choice of
+stalk morphisms will lead to $\pi^\sharp_U$:
+some sort of ``continuity'' or ``compatibility'' is needed.
+You can actually write down the explicit statement:
+each sequence of compatible germs over $U$
+should get mapped to a sequence of compatible germs over $\pi\pre(U)$.
+We avoid putting up a formal statement of this for now,
+because the statement is clumsy,
+and you're about to see it in practice (where it will make more sense).
+
+\begin{remark}
+ [Isomorphisms are determined by stalks]
+ One fact worth mentioning, that we won't prove, but good to know:
+ a map of ringed spaces
+ $(\pi, \pi^\sharp) \colon (X, \OO_X) \to (Y, \OO_Y)$
+ is an isomorphism if and only if $\pi$ is a homeomorphism,
+ and moreover $\pi^\sharp_p$ is an isomorphism for each $p \in X$.
+\end{remark}
+
+\section{Morphisms of locally ringed spaces}
+On the other hand, we've seen that our stalks are local rings,
+which enable us to actually talk about \emph{values}.
+And so we want to add one more compatibility condition
+to ensure that our notion of value is preserved.
+Now the stalks at $p$ and $q$ in the previous picture might be different,
+so $\kappa(p)$ and $\kappa(q)$ might even be different fields.
+\begin{definition}
+ A \vocab{morphism of locally ringed spaces}
+ is a morphism of ringed spaces $\pi \colon X \to Y$
+ with the following additional property:
+ whenever $\pi(p) = q$,
+ the map at the stalks also induces a well-defined ring homomorphism
+ \[ \pi^\sharp_p \colon \kappa(q) \to \kappa(p). \]
+\end{definition}
+So we require that $\pi^\sharp_p$ induce a field homomorphism\footnote{Which
+means it is automatically injective, by \Cref{prob:field_hom}.}
+on the \emph{residue fields}.
+In particular, since $\pi^\sharp(0) = 0$,
+this means something very important:
+\begin{moral}
+ In a morphism of locally ringed spaces,
+ a germ vanishes at $q$ if and only if
+ the corresponding germ vanishes at $p$.
+\end{moral}
+\begin{exercise}
+ [So-called ``local ring homomorphism'']
+ Show that this is equivalent to requiring
+ \[ (\pi^\sharp_p)\im(\km_{Y,q}) \subseteq \km_{X,p} \]
+ or in English, a germ at $q$ has value zero
+ iff the corresponding germ at $p$ has value zero.
+\end{exercise}
+I don't like this formulation
+$(\pi^\sharp)\im(\km_{Y,q}) \subseteq \km_{X,p}$
+as much since it hides the geometric intuition behind a lot of symbols:
+that we want the notion of ``value at a point''
+to be preserved in some way.
+
+At this point, we can state the definition of a scheme,
+and we do so, although we won't really use it for a few more sections.
+\begin{definition}
+ A \vocab{scheme} is a locally ringed space
+ for which every point has an open neighborhood
+ isomorphic to an affine scheme.
+ A morphism of schemes is just a morphism of locally ringed spaces.
+\end{definition}
+In particular, $\Spec A$ is a scheme
+(the open neighborhood being the entire space!).
+And so let's start by looking at those.
+
+\section{A few examples of morphisms between affine schemes}
+Okay, sorry for the lack of examples in the previous few sections.
+Let's make amends now,
+where you can see all the moving parts in action.
+
+\subsection{One-point schemes}
+\begin{example}
+ [$\Spec \RR$ is well-behaved]
+ There is only one map $X = \Spec \RR \to \Spec \RR = Y$.
+ Indeed, these are spaces with one point,
+ and specifying the map $\RR = \OO_Y(Y) \to \OO_X(X) = \RR$
+ can only be done in one way,
+ since there is only one field automorphism of $\RR$ (the identity).
+\end{example}
+\begin{example}
+ [$\Spec \CC$ horror story]
+ There are multiple maps $X = \Spec \CC \to \Spec \CC = Y$,
+ horribly enough!
+ Indeed, these are spaces with one point,
+ so again we're just reduced to specifying
+ a map $\CC = \OO_Y(Y) \to \OO_X(X) = \CC$.
+ However, in addition to the identity map,
+ complex conjugation also works,
+ as well as some so-called ``wild automorphisms'' of $\CC$.
+\end{example}
+
+This behavior is obviously terrible,
+so for illustration reasons,
+some of the examples use $\RR$ instead of $\CC$
+to avoid the horror story we just saw.
+However, there is an easy fix: later we will introduce ``schemes over $\CC$'',
+which will force the ring homomorphisms to fix $\CC$.
+
+\begin{example}
+ [$\Spec k$ and $\Spec k'$]
+ In general, if $k$ and $k'$ are fields,
+ we see that maps $\Spec k \to \Spec k'$
+ are in bijection with field homomorphisms $k' \to k$,
+ since that's all there is left to specify.
+\end{example}
+
+\subsection{Examples of constant maps}
+\begin{example}
+ [Constant map to $(y-3)$]
+ We analyze scheme morphisms
+ \[ X = \Spec \RR[x] \taking{\pi} \Spec \RR[y] = Y \]
+ which send all points of $X$ to $\km = (y-3) \in Y$.
+ Constant maps are continuous no matter how bizarre your topology is,
+ so this lets us just focus our attention on the sections.
+
+ This example is simple enough that we can even do it by sections,
+ as much as I think stalks are simpler.
+ Let $U$ be any open subset of $Y$, then we need to specify a map
+ \[ \pi^\sharp_U \colon \OO_Y(U) \to \OO_X(\pi\pre(U)). \]
+ If $U$ does not contain $(y-3)$, then $\pi\pre(U) = \varnothing$,
+ so $\OO_X(\varnothing) = 0$ is the zero ring and there is nothing to do.
+
+ Conversely, if $U$ does contain $(y-3)$ then $\pi\pre(U) = X$,
+ so this time we want to specify a map
+ \[ \pi^\sharp_U \colon \OO_Y(U) \to \OO_X(X) = \RR[x] \]
+ which satisfies restriction maps.
+ Note that for any such $U$, the element $y$ must map to a unit in $\RR[x]$:
+ indeed, $1/y$ is a section on the open subset of $U$ avoiding $(y)$,
+ which still contains $(y-3)$.
+ In fact, for any real number $c \ne 3$,
+ $y-c$ must map to a unit in $\RR[x]$.
+ This can only happen if $y \mapsto 3 \in \RR[x]$.
+
+ As we have specified a ring homomorphism $\RR[y] \to \RR[x]$ with $y \mapsto 3$,
+ that determines all the ring homomorphisms we needed.
+\end{example}
+But we could have used stalks, too.
+We wanted to specify a morphism
+\[ \RR[y]_{(y-3)} = \OO_{Y, (y-3)} \to \OO_{X, \kp} \]
+for every prime ideal $\kp$,
+sending compatible germs to compatible germs\dots
+but wait, $(y-3)$ is spitting out all the germs.
+So every \emph{individual} germ in $\OO_{\Spec Y, (y-3)}$
+needs to yield a (compatible) germ above every point of $\Spec X$,
+which is the data of an entire global section.
+So we're actually trying to specify
+\[ \RR[y]_{(y-3)} = \OO_{Y, (y-3)} \to \OO_X(X) = \RR[x]. \]
+This requires $y \mapsto 3$, as we saw,
+since $y-c$ is a unit of $\RR[x]$ for any $c \ne 3$.
+
+\begin{example}
+ [Constant map to $(y^2+1)$ does not exist]
+ Let's see if there are constant maps
+ $X = \Spec \RR[x] \to \Spec \RR[y] = Y$
+ which send everything to $(y^2+1)$.
+ Copying the previous example, we see that we want
+ \[ \OO_Y(U) \to \OO_X(X) = \RR[x]. \]
+ We find that $y$ and $1/y$ have nowhere to go:
+ the same argument as last time shows that $y-c$ must map to a unit of $\RR[x]$,
+ this time for \emph{every} real number $c$.
+
+ Like last time, stalks show this too,
+ even with just residue fields.
+ We would for example need a field homomorphism
+ \[ \CC = \kappa( (y^2+1) ) \to \kappa( (x) ) = \RR \]
+ which does not exist.
+\end{example}
+
+You might already notice the following:
+\begin{example}
+ [The generic point repels smaller points]
+ Changing the tune, consider maps $\Spec \CC[x] \to \Spec \CC[y]$.
+ We claim that if $\km$ is a maximal ideal (closed point)
+ of $\CC[x]$, then it can never be mapped to the generic point $(0)$ of $\CC[y]$.
+
+ For otherwise, we would get a local ring homomorphism
+ \[ \CC(y) \cong \OO_{\Spec \CC[y], (0)}
+ \to \OO_{\Spec \CC[x], \km} \cong \CC[x]_\km \]
+ which in particular means we have a map on the residue fields
+ \[ \CC(y) \to \CC[x] / \km \cong \CC \]
+ which is impossible: there is no such field homomorphism at all (why?).
+\end{example}
+The last example gives some nice intuition in general:
+``more generic'' points tend to have bigger stalks
+than ``less generic'' points, hence repel them.
+
+\subsection{The map $t \mapsto t^2$}
+We now consider what we would think
+of as the map $t \mapsto t^2$.
+\begin{example}
+ [The map $t \mapsto t^2$]
+ We consider a map
+ \[ \pi \colon X = \Spec \CC[x] \to \Spec \CC[y] = Y \]
+ defined on points as follows:
+ \begin{align*}
+ \pi\left( (0) \right) &= (0) \\
+ \pi\left( (x-a) \right) &= (y-a^2).
+ \end{align*}
+ You may check if you wish this map is continuous.
+ I claim that, surprisingly,
+ you can actually read off $\pi^\sharp$ from just this
+ behavior at points.
+ The reason is that we imposed the requirement
+ that a section $s$ vanishes at $\kq \in Y$
+ if and only if $\pi^\sharp(s)$ vanishes at $\kp \in X$,
+ where $\pi(\kp) = \kq$.
+ So, now:
+ \begin{itemize}
+ \ii Consider the section $y \in \OO_Y(Y)$,
+ which vanishes only at $(y) \in \Spec \CC[y]$;
+ then its image $\pi^\sharp_Y(y) \in \OO_X(X)$
+ must vanish at exactly $(x) \in \Spec \CC[x]$,
+ so $\pi^\sharp_Y(y) = x^n$ for some integer $n \ge 1$.
+ \ii Consider the section $y-4 \in \OO_Y(Y)$,
+ which vanishes only at $(y-4) \in \Spec \CC[y]$;
+ then its image $\pi^\sharp_Y(y-4) \in \OO_X(X)$
+ must vanish at exactly $(x-2) \in \Spec \CC[x]$
+ and $(x+2) \in \Spec \CC[x]$.
+ So $\pi^\sharp_Y(y)-4$ is divisible by $(x-2)^a(x+2)^b$
+ for some $a \ge 1$ and $b \ge 1$.
+ \end{itemize}
+ Thus $y \mapsto x^2$ in the top level map of sections $\pi^\sharp_Y$:
+ and hence also in all the maps of sections
+ (as well as at all the stalks).
+\end{example}
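As a quick numerical sanity check of the vanishing claims above (purely illustrative, in Python; not part of the text's argument):

```python
# Under y -> x^2, the section y - 4 pulls back to x^2 - 4.  Its zero
# set should be exactly {2, -2}, matching the primes (x-2) and (x+2).
def pullback_y_minus_4(x):
    return x ** 2 - 4        # pi^sharp(y - 4), with y -> x^2

roots = [x for x in range(-10, 11) if pullback_y_minus_4(x) == 0]
assert roots == [-2, 2]

# Likewise pi^sharp(y) = x^2 vanishes only at x = 0, i.e. at (x).
assert [x for x in range(-10, 11) if x ** 2 == 0] == [0]
```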
+The above example works equally well if $t^2$ is replaced
+by some polynomial $f(t)$,
+so that $(x-a)$ maps to $(y-f(a))$.
+The image of $y$ must be a polynomial $g(x)$
+with the property that $g(x)-c$ has the same roots as $f(x)-c$
+for any $c \in \CC$.
+Put another way, $f$ and $g$ have the same values, so $f = g$.
+
+\begin{remark}
+ [Generic point stalk overpowered]
+ I want to also point out that you can read off the polynomial
+ just from the stalk at the generic point:
+ for example, the previous example has
+ \[ \CC(y) \cong \OO_{\Spec \CC[y], (0)}
+ \to \OO_{\Spec \CC[x], (0)} \cong \CC(x)
+ \qquad y \mapsto x^2. \]
+ This is part of the reason why generic points are so powerful.
+ We expect that with polynomials,
+ if you know what happens to a ``generic'' point,
+ you can figure out the entire map.
+ This intuition is true:
+ knowing where each germ at the generic point goes
+ is enough to tell us the whole map.
+\end{remark}
+
+\subsection{An arithmetic example}
+\begin{example}
+ [{$\Spec \ZZ[i] \to \Spec \ZZ$}]
+ We now construct a morphism of schemes
+ $\pi \colon \Spec \ZZ[i] \to \Spec \ZZ$.
+ On points it behaves by
+ \begin{align*}
+ \pi\left( (0) \right) &= (0) \\
+ \pi\left( (p) \right) &= (p) \\
+ \pi\left( (a+bi) \right) &= (a^2+b^2)
+ \end{align*}
+ where $a+bi$ is a Gaussian prime:
+ so for example $\pi( (2+i) ) = (5)$ and $\pi( (1+i) ) = (2)$.
+ We could figure out the induced map on stalks now,
+ much like before, but in a moment we'll have a big theorem
+ that spares us the trouble.
+\end{example}
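Which fibers are which can be checked by brute force. The sketch below is our own illustration (not part of the text): by Fermat's two-squares theorem, a rational prime $p$ lies under a split or ramified Gaussian prime $a+bi$ with $a^2+b^2=p$ exactly when $p = 2$ or $p \equiv 1 \pmod 4$.

```python
# Brute-force illustration (ours, not the text's): which rational primes
# p are of the form a^2 + b^2, i.e. lie under a Gaussian prime a + bi?
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def is_sum_of_two_squares(p):
    """Is p = a^2 + b^2 for some integers a, b >= 1?"""
    return any(
        round((p - a * a) ** 0.5) ** 2 == p - a * a
        for a in range(1, int(p**0.5) + 1)
    )

# Fermat's two-squares theorem: this happens iff p = 2 or p % 4 == 1.
fibers = {
    p: "splits/ramifies" if is_sum_of_two_squares(p) else "inert"
    for p in range(2, 50) if is_prime(p)
}
```

For instance $5 = 2^2 + 1^2$ matches $\pi((2+i)) = (5)$ above, while primes $p \equiv 3 \pmod 4$ stay inert, with $\pi((p)) = (p)$.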
+
+
+\section{The big theorem}
+We did a few examples of $\Spec A \to \Spec B$ by hand,
+specifying the full data of a map of locally ringed spaces.
+It turns out that, in fact, we didn't need to specify that much data,
+and much of the process can be automated:
+\begin{proposition}
+ [Affine reconstruction]
+ \label{prop:affine_reconstruction}
+ Let $\pi \colon \Spec A \to \Spec B$ be a map of schemes.
+ Let $\psi \colon B \to A$ be the ring homomorphism
+ obtained by taking global sections, i.e.
+ \[ \psi= \pi^\sharp_B \colon \OO_{\Spec B}(\Spec B)
+ \to \OO_{\Spec A}(\Spec A). \]
+ Then we can recover $\pi$ given only $\psi$;
+ in fact, $\pi$ is given explicitly by
+ \[ \pi(\kp) = \psi\pre(\kp) \]
+ and
+ \[ \pi^\sharp_\kp \colon \OO_{\Spec B,\pi(\kp)} \to \OO_{\Spec A,\kp}
+ \quad\text{ by }\quad f/g \mapsto \psi(f)/\psi(g). \]
+\end{proposition}
+This is the big miracle of affine schemes.
+Despite the enormous amount of data packaged into the definition,
+we can compress maps between affine schemes
+into just the single ring homomorphism on the top level.
+\begin{proof}
+ This requires two parts.
+ \begin{itemize}
+ \ii We need to check that the maps agree on \emph{points};
+ surprisingly this is the harder half.
+ To see how this works, let $\kq = \pi(\kp)$.
+ The key fact is that a function $f \in B$ vanishes on $\kq$
+ if and only if $\pi^\sharp_B(f)$ vanishes on $\kp$
+ (because $\pi^\sharp_B$ is supposed to be a
+ homomorphism of \emph{local} rings).
+ Therefore,
+ \begin{align*}
+ \pi(\kp) &= \kq = \left\{ f \in B \mid f \in \kq \right\} \\
+ &= \left\{ f \in B \mid f \text{ vanishes on } \kq \right\} \\
+ &= \left\{ f \in B \mid \pi^\sharp_B(f) \text{ vanishes on } \kp \right\} \\
+ &= \left\{ f \in B \mid \pi^\sharp_B(f) \in \kp \right\}
+ = \left\{ f \in B \mid \psi(f) \in \kp \right\} \\
+ &= \psi\pre(\kp).
+ \end{align*}
+
+ \ii We also want to check that the maps on the stalks are the same.
+ Suppose $\kp \in \Spec A$, $\kq \in \Spec B$,
+ and $\kp \mapsto \kq$ (under both of the above).
+
+ In our original $\pi$, consider the map
+ $\pi^\sharp_\kp \colon B_\kq \to A_\kp$.
+ We know it sends the germ at $\kq$ of each global section $f \in B$
+ to the germ at $\kp$ of $\psi(f) \in A$.
+ Thus it must send $f/g$ to $\psi(f) / \psi(g)$,
+ being a ring homomorphism, as needed. \qedhere
+ \end{itemize}
+\end{proof}
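To see the reconstruction formula in action, here is a quick computation (ours, following the proposition) for the very first example $\psi \colon \CC[y] \to \CC[x]$, $y \mapsto x^2$. The point map is forced to be

```latex
\[
  \pi\left( (x-a) \right) = \psi\pre\left( (x-a) \right)
  = \left\{ g \in \CC[y] \mid g(x^2) \in (x-a) \right\}
  = \left\{ g \in \CC[y] \mid g(a^2) = 0 \right\}
  = (y - a^2),
\]
```

recovering $(x-a) \mapsto (y-a^2)$ with no sheaf data specified at all.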
+
+All of this suggests a great idea:
+if $\psi \colon B \to A$ is \emph{any} ring homomorphism,
+we ought to be able to construct a map of schemes
+by using fragments of the proof we just found.
+The only extra work we have to do is verify that
+we get a continuous map in the Zariski topology,
+and that we can get a suitable $\pi^\sharp$.
+
+We thus get the huge important theorem about affine schemes.
+\begin{theorem}
+ [$\Spec A \to \Spec B$ is just $B \to A$]
+ This construction gives a bijection between
+ ring homomorphisms $B \to A$ and scheme morphisms $\Spec A \to \Spec B$.
+\end{theorem}
+\begin{proof}
+ We have seen how to take each
+ $\pi \colon \Spec A \to \Spec B$
+ and get a ring homomorphism $\psi$.
+ \Cref{prop:affine_reconstruction} shows this map is injective.
+ So we just need to check it is surjective --- that
+ every $\psi$ arises from some $\pi$.
+
+ Given $\psi \colon B \to A$, we define
+ $(\pi, \pi^\sharp) \colon \Spec A \to \Spec B$
+ by copying \Cref{prop:affine_reconstruction}
+ and checking that everything is well-defined.
+ The details are:
+ \begin{itemize}
+ \ii For each prime ideal $\kp \in \Spec A$,
+ we let $\pi(\kp) = \psi\pre(\kp) \in \Spec B$
+ (which by \Cref{prob:prime_preimage} is also prime).
+ \begin{exercise}
+ Show that the resulting map $\pi$ is continuous
+ in the Zariski topology.
+ \end{exercise}
+ \ii Now we want to also define maps on the stalks,
+ and so for each $\kq = \pi(\kp)$ we set
+ \[ B_\kq \ni \frac fg \mapsto \frac{\psi(f)}{\psi(g)} \in A_\kp. \]
+ This makes sense since $g \notin \kq \implies \psi(g) \notin \kp$.
+ Since also $f \in \kq \implies \psi(f) \in \kp$,
+ this really is a local ring homomorphism
+ (sending the maximal ideal of $B_\kq$ into the one of $A_\kp$).
+
+ Observe that if $f/g$ is a \emph{section}
+ over an open set $U \subseteq \Spec B$
+ (meaning $g$ does not vanish at the primes in $U$),
+ then $\psi(f)/\psi(g)$ is a section over $\pi\pre(U)$
+ (meaning $\psi(g)$ does not vanish at the primes in $\pi\pre(U)$).
+ Therefore, compatible germs over $\Spec B$
+ get sent to compatible germs over $\Spec A$, as needed.
+ \end{itemize}
+ Finally, the resulting $\pi$ has $\pi^\sharp_B = \psi$ on global sections,
+ completing the proof.
+\end{proof}
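As a sanity check on the recipe $\pi(\kp) = \psi\pre(\kp)$, here is a tiny numerical model (ours; polynomials are represented as plain callables, restricted to real closed points) of $\psi \colon \CC[y] \to \CC[x]$, $y \mapsto x^2$:

```python
# Toy model (ours) of pi(p) = psi^pre(p) for psi : C[y] -> C[x], y |-> x^2,
# with polynomials represented as callables evaluated at real points.
def psi(g):
    """Pull a polynomial function g(y) back along y |-> x^2."""
    return lambda x: g(x * x)

def image_generator(a):
    """Generator y - a^2 of the claimed image ideal of the point (x - a)."""
    return lambda y: y - a * a

# The pullback of y - a^2 is x^2 - a^2, which vanishes exactly at x = +-a,
# so the fiber of pi over (y - a^2) consists of (x - a) and (x + a).
vanishing = [abs(psi(image_generator(a))(s * a))
             for a in (0.0, 1.0, 2.0, 3.5) for s in (1, -1)]
```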
+
+This can be summarized succinctly using category theory:
+\begin{corollary}
+ [Categorical interpretation]
+ The opposite category of rings, $\catname{CRing}\op$,
+ is ``equivalent'' to the category of affine schemes, $\catname{AffSch}$,
+ with $\Spec$ as a functor.
+\end{corollary}
+This means for example that $\Spec A \cong \Spec B$,
+naturally, whenever $A \cong B$.
+
+To make sure you realize that this theorem is important,
+here is an amusing comment I found on MathOverflow
+while reading about algebraic geometry
+references\footnote{From \url{https://mathoverflow.net/q/2446/70654}.}:
+\begin{quote}
+ \small\sffamily\itshape
+ He [Hartshorne] never mentions that the category of affine schemes
+ is dual to the category of rings, as far as I can see.
+ I'd expect to see that in huge letters
+ near the definition of scheme.
+ How could you miss that out?
+\end{quote}
+
+\section{More examples of scheme morphisms}
+Now that we have the big hammer,
+we can talk about examples much more briefly
+than we did a few sections ago.
+Before throwing things around,
+I want to give another definition
+that will eliminate the weird behavior we saw with
+$\CC \to \CC$ having nontrivial field automorphisms:
+\begin{definition}
+ Let $S$ be a scheme.
+ A \vocab{scheme over $S$} or \vocab{$S$-scheme} is a scheme $X$
+ together with a map $X \to S$.
+ A morphism of $S$-schemes is a scheme morphism $X \to Y$
+ such that the diagram
+ \begin{tikzcd}
+ X \ar[r] \ar[rd] & Y \ar[d] \\ & S
+ \end{tikzcd}
+ commutes.
+ Often, if $S = \Spec k$, we will refer to these
+ as schemes over $k$, or $k$-schemes for short.
+\end{definition}
+\begin{example}
+ [{$\Spec k[\dots]$}]
+ If $X = \Spec k[x_1, \dots, x_n] / I$ for some ideal $I$,
+ then $X$ is a $k$-scheme in a natural way,
+ since we have an obvious homomorphism
+ $k \injto k[x_1, \dots, x_n] / I$
+ which gives a map $X \to \Spec k$.
+\end{example}
+
+\begin{example}
+ [{$\Spec \CC[x] \to \Spec \CC[y]$}]
+ As $\CC$-schemes, maps $\Spec \CC[x] \to \Spec \CC[y]$
+ coincide with ring homomorphisms
+ $\psi \colon \CC[y] \to \CC[x]$ such that the diagram
+ \begin{center}
+ \begin{tikzcd}
+ \CC[x] & \CC[y] \ar[l, "\psi"'] \\
+ & \ar[lu, hook] \ar[u, hook] \CC
+ \end{tikzcd}
+ \end{center}
+ commutes.
+ We see that the ``over $\CC$'' condition eliminates
+ the pathology from before:
+ $\psi$ is required to fix $\CC$.
+ So the morphism is determined by the image of $y$,
+ i.e.\ the choice of a polynomial in $\CC[x]$.
+ For example, if $\psi(y) = x^2$
+ we recover the first example we saw.
+ This matches our intuition that these maps should correspond
+ to polynomials.
+\end{example}
+
+\begin{example}
+ [$\Spec \OO_K$]
+ This generalizes $\Spec \ZZ[i]$ from before.
+ If $K$ is a number field with ring of integers $\OO_K$,
+ then there is a natural morphism $\Spec \OO_K \to \Spec \ZZ$
+ from the (unique) ring homomorphism $\ZZ \injto \OO_K$.
+ Above each rational prime $(p) \in \Spec \ZZ$,
+ one obtains the prime ideals of $\OO_K$ into which $p$ splits.
+ (We don't have a way of capturing ramification yet, alas.)
+\end{example}
+
+\section{A little bit on non-affine schemes}
+We can finally state the isomorphism that we wanted for a long time
+(first mentioned in \Cref{subsec:distinguished_open_affine}):
+\begin{theorem}
+ [Distinguished open sets are isomorphic to affine schemes]
+ Let $A$ be a ring and $f$ an element.
+ Then
+ \[ \Spec A[1/f] \cong D(f) \subseteq \Spec A. \]
+\end{theorem}
+\begin{proof}
+ Annoying check, not included yet.
+ (We have already seen the bijection of prime ideals, at the level of points.)
+\end{proof}
+
+\begin{corollary}
+ [Open subsets are schemes]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Any nonempty open subset of an affine scheme is itself a scheme.
+ \ii Any nonempty open subset of any scheme (affine or not) is itself a scheme.
+ \end{enumerate}
+\end{corollary}
+\begin{proof}
+ Part (a) has essentially been done already:
+ \begin{ques}
+ Combine \Cref{thm:distinguished_base}
+ with the previous proposition to deduce (a).
+ \end{ques}
+ Part (b) then follows by noting that if $U$ is an open set,
+ and $p$ is a point in $U$,
+ then we can take an affine open neighborhood $\Spec A$ at $p$,
+ and then cover $U \cap \Spec A$ with distinguished
+ open subsets of $\Spec A$ as in (a).
+\end{proof}
+
+We now reprise \Cref{subsec:punctured_plane}
+(except $\CC$ will be replaced by $k$).
+We have seen the punctured plane is an open subset $U$ of $\Spec k[x,y]$, so it is a scheme.
+\begin{ques}
+ Show that in fact $U$ can be covered by
+ two open sets which are both affine.
+\end{ques}
+However, we show now that you really do need two open sets.
+\begin{proposition}
+ [Famous example: punctured plane isn't affine]
+ The punctured plane $U = (U, \OO_U)$,
+ obtained by deleting $(x,y)$ from $\Spec k[x,y]$,
+ is not isomorphic to any affine scheme $\Spec B$.
+\end{proposition}
+
+The intuition is that $\OO_U(U) = k[x,y]$, but $U$ is not the plane.
+\begin{proof}
+ We already know $\OO_U(U) = k[x,y]$
+ and we have a good handle on it.
+ For example, $y \in \OO_U(U)$ is a global section
+ which vanishes on what looks like the $x$-axis,
+ while $x \in \OO_U(U)$ is a global section
+ which vanishes on what looks like the $y$-axis.
+ In particular, there is no point of $U$ at which both vanish.
+
+ Now assume for contradiction that we have an isomorphism
+ \[ \psi \colon \Spec B \to U. \]
+ By taking the map on global sections (part of the definition),
+ \[ k[x,y] = \OO_U(U) \taking{\psi^\sharp}
+ \OO_{\Spec B}(\Spec B) \cong B. \]
+ The global sections $x$ and $y$ in $\OO_U(U)$
+ should then have images $a$ and $b$ in $B$;
+ and it follows that we have a ring isomorphism $B \cong k[a,b]$.
+
+ \begin{center}
+ \begin{asy}
+ unitsize(0.9cm);
+ draw( (0,-3)--(0,3), red );
+ draw( (-3,0)--(3,0), red );
+ fill(box( (-3,-3), (3,3) ), opacity(0.2)+lightcyan);
+ opendot(origin, blue+1.5);
+ label("$\mathcal V(y)$", (3,0), dir(135), red);
+ label("$\mathcal V(x)$", (0,3), dir(-45), red);
+ label("$U$", (0,3), dir(90), deepcyan);
+ label("$\mathcal O_U(U) = k[x,y]$", (0,-3), dir(-90), deepcyan);
+ add(shift(-8,0)*CC());
+ fill(box( (-3,-3), (3,3) ), opacity(0.2)+lightblue);
+ path a = (-3,-1)..(-1,0.7)..(0.2,0.5)..(2,1)..(3,0.8);
+ path b = (-0.5,-3)..(0.4,-1)..(0.2,0.5)..(-0.4,2)..(-0.7,3);
+ draw(a, red);
+ draw(b, red);
+ label("$\mathcal V(a)$", (3,0.8), dir(225), red);
+ label("$\mathcal V(b)$", (-0.7,3), dir(225), red);
+ dot("$(a,b)$", (0.2,0.5), dir(225), red);
+ label("$\operatorname{Spec} B$", (0,3), dir(90), blue);
+ label("$\mathcal O_{\operatorname{Spec} B}(\operatorname{Spec} B) \cong k[a,b]$", (0,-3), dir(-90), blue);
+ \end{asy}
+ \end{center}
+
+ Now in $\Spec B$, $\VV(a) \cap \VV(b)$ is a closed
+ set containing a single point, the maximal ideal $\km = (a,b)$.
+ Thus in $\Spec B$ there is exactly
+ one point at which both $a$ and $b$ vanish.
+ \emph{Because we required morphisms of schemes to preserve values}
+ (hence the big fuss about locally ringed spaces),
+ that means there should be a single point of $U$
+ at which both $x$ and $y$ vanish.
+ But there isn't --- it was the origin we deleted.
+\end{proof}
+
+\section{Where to go from here}
+% \todo{I think I want to write more, but in a later edition}
+This chapter concludes the long setup for the definition of a scheme.
+For now, this unfortunately is as far as I have time to go.
+So, if you want to actually see how schemes are used in ``real life'',
+you'll have to turn elsewhere.
+
+A good reference I happily recommend is \cite{ref:gathmann};
+a more difficult (and famous) one is \cite{ref:vakil}.
+See \Cref{ch:refs} for further remarks.
+
+\section{\problemhead}
+\begin{problem}
+ Given an affine scheme $X = \Spec R$,
+ show that there is a unique morphism of schemes $X \to \Spec \ZZ$,
+ and describe where it sends points of $X$.
+ \begin{hint}
+ Use the fact that $\catname{AffSch} \simeq \catname{CRing}\op$.
+ \end{hint}
+ \begin{sol}
+ Since $\ZZ$ is the initial object of $\catname{CRing}$,
+ it follows $\Spec \ZZ$ is the final object of $\catname{AffSch}$.
+ The point $\kp$ gets sent to $(p) \in \Spec \ZZ$, where $p \ge 0$ is
+ the characteristic of the residue field $\OO_{X,\kp} / \km_{X,\kp}$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Is the open subset of $\Spec \ZZ[x]$
+ obtained by deleting the point $\km = (2,x)$
+ isomorphic to some affine scheme?
+\end{problem}
+
+\endinput
+
+\section{Projective scheme}
+\prototype{Projective varieties, in the same way.}
+The most important class of schemes which are not affine are
+\emph{projective} schemes.
+They complete the obvious analogy:
+\[
+ \frac{\text{Affine variety}}{\text{Projective variety}}
+ =
+ \frac{\text{Affine scheme}}{\text{Projective scheme}}.
+\]
+Let $S$ be \emph{any} (commutative) graded ring, like $\CC[x_0, \dots, x_n]$.
+\begin{definition}
+ We define $\Proj S$, the \vocab{projective scheme over $S$}:
+ \begin{itemize}
+ \ii As a set, $\Proj S$ consists of \emph{homogeneous prime ideals}
+ $\kp$ which do not contain the irrelevant ideal $S^+$
+ (the ideal of elements of positive degree).
+ \ii If $I \subseteq S$ is homogeneous, then
+ we let $\Vp(I) = \{ \kp \in \Proj S \mid I \subseteq \kp \}$.
+ Then the \vocab{Zariski topology} is imposed by declaring
+ sets of the form $\Vp(I)$ to be closed.
+ \ii We now define a pre-sheaf $\SF$ on $\Proj S$ by
+ \[ \SF(U) =
+ \left\{ \frac{f}{g} \mid
+ g(\kp) \neq 0 \; \forall \kp \in U \text{ and }
+ \deg f = \deg g \right\}.
+ \]
+ In other words, the rational functions are quotients $f/g$
+ where $f$ and $g$ are \emph{homogeneous of the same degree}.
+ Then we let \[ \OO_{\Proj S} = \SF\sh \] be the sheafification.
+ \end{itemize}
+\end{definition}
+\begin{definition}
+ The \vocab{distinguished open sets} $D(f)$ of the $\Proj S$
+ are defined as $\left\{ \kp \in \Proj S : f(\kp) \neq 0 \right\}$,
+ as before; these form a basis for the Zariski topology of $\Proj S$.
+\end{definition}
+Now, we want results analogous to those for the affine structure sheaf.
+So, we define a slightly modified localization:
+\begin{definition}
+ Let $S$ be a graded ring.
+ \begin{enumerate}[(i)]
+ \ii For a prime ideal $\kp$, let
+ \[ S_{(\kp)} = \left\{ \frac fg \mid g(\kp) \neq 0 \text{ and }
+ \deg f = \deg g \right\} \]
+ denote the elements of $S_\kp$ with ``degree zero''.
+ \ii For any homogeneous $g \in S$ of degree $d$, let
+ \[ S_{(g)} = \left\{ \frac{f}{g^r} \mid
+ \deg f = r \deg g \right\} \]
+ denote the elements of $S_g$ with ``degree zero''.
+ \end{enumerate}
+\end{definition}
+
+\begin{theorem}
+ [On the projective structure sheaf]
+ Let $S$ be a graded ring and let $\Proj S$ be the associated projective scheme.
+ \begin{enumerate}[(a)]
+ \ii Let $\kp \in \Proj S$.
+ Then $\OO_{\Proj S, \kp} \cong S_{(\kp)}$.
+ \ii Suppose $g$ is homogeneous with $\deg g > 0$. Then
+ \[ D(g) \cong \Spec S_{(g)} \]
+ as locally ringed spaces.
+ In particular, $\OO_{\Proj S}(D(g)) \cong S_{(g)}$.
+ \end{enumerate}
+\end{theorem}
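As a sanity check (our computation, not from the text), take $S = \CC[x_0, x_1]$ and $g = x_0$. Part (b) then says

```latex
\[
  D(x_0) \cong \Spec S_{(x_0)}, \qquad
  S_{(x_0)} = \left\{ \frac{f}{x_0^r} \,\middle|\,
    f \text{ homogeneous, } \deg f = r \right\}
  = \CC\left[ \frac{x_1}{x_0} \right] \cong \CC[t],
\]
```

so $D(x_0)$ is a copy of the affine line: the standard affine chart of $\CP^1$, just as for projective varieties.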
+\begin{ques}
+ Conclude that $\Proj S$ is a scheme.
+\end{ques}
+
+Of course, the archetypal example is that
+\[ \Proj \CC[x_0, x_1, \dots, x_n] / I \]
+corresponds to the projective subvariety of $\CP^n$
+cut out by $I$ (when $I$ is radical).
+In the general case of an arbitrary ideal $I$, we
+call such schemes \vocab{projective subschemes} of $\CP^n$.
+For example, the ``double point'' in $\CP^1$
+is given by $\Proj \CC[x_0,x_1]/(x_0^2)$.
+
+\begin{remark}
+ No comment yet on what the global sections $\OO_{\Proj S}(\Proj S)$ are.
+ (The theorem above requires $\deg g > 0$, so we cannot just take $g=1$.)
+ One might hope that in general $\OO_{\Proj S}(\Proj S) \cong S^0$
+ in analogy to our complex projective varieties, but
+ one needs some additional assumptions on $S$ for this to hold.
+\end{remark}
+
+
+
+\section{Morphisms of sheaves}
+First, recall that a sheaf is a contravariant functor (pre-sheaf)
+with extra conditions. In light of this, it is not hard to guess
+the definition of a morphism of pre-sheaves:
+\begin{definition}
+ A \vocab{morphism of (pre-)sheaves} $\alpha : \SF \to \SG$ on the same
+ space $X$ is a \textbf{natural transformation} of the underlying functors.
+ Isomorphism of sheaves is defined in the usual way.
+\end{definition}
+\begin{ques}
+ Show that this amounts to: for each $U \subseteq X$ we need to specify
+ a morphism $\alpha_U : \SF(U) \to \SG(U)$ such that the diagram
+ \begin{center}
+ \begin{tikzcd}
+ \SF(U_2) \ar[r, "\alpha_{U_2}"]
+ \ar[d, "\res_{U_1, U_2}"']
+ & \SG(U_2) \ar[d, "\res_{U_1, U_2}"]\\
+ \SF(U_1) \ar[r, "\alpha_{U_1}"'] & \SG(U_1)
+ \end{tikzcd}
+ \end{center}
+ commutes any time that $U_1 \subseteq U_2$.
+\end{ques}
+However, in the sheaf case we like stalks more than sections because
+they are theoretically easier to think about.
+And in fact:
+\begin{proposition}
+ [Morphisms determined by stalks]
+ A morphism of sheaves $\alpha : \SF \to \SG$ induces a morphism of stalks
+ \[ \alpha_p : \SF_p \to \SG_p \]
+ for every point $p \in X$.
+ Moreover, the sequence $(\alpha_p)_{p \in X}$ determines $\alpha$ uniquely.
+\end{proposition}
+\begin{proof}
+ The morphism $\alpha_p$ itself is just
+ $(s, U) \xmapsto{\alpha_p} (\alpha_U(s), U)$.
+ \begin{ques}
+ Show this is well-defined.
+ \end{ques}
+ Now suppose $\alpha , \beta : \SF \to \SG$ satisfy $\alpha_p = \beta_p$
+ for every $p$. We want to show $\alpha_U(s) = \beta_U(s)$
+ for every $s \in \SF(U)$.
+ \begin{ques}
+ Verify this using the description of sections
+ as sequences of germs. \qedhere
+ \end{ques}
+\end{proof}
+Thus a morphism of sheaves can be instead modelled as a morphism
+of all the stalks. We will see later on that this viewpoint is quite useful.
diff --git a/books/napkin/multivar.tex b/books/napkin/multivar.tex
new file mode 100644
index 0000000000000000000000000000000000000000..238119d4e3891d60991772dd8eced2ee54a8f8a9
--- /dev/null
+++ b/books/napkin/multivar.tex
@@ -0,0 +1,380 @@
+\chapter{Multivariable calculus done correctly}
+As I have ranted about before, linear algebra is done wrong
+by the extensive use of matrices to obscure the structure of a linear map.
+Similar problems occur with multivariable calculus, so here I would like to set
+the record straight.
+
+Since we are doing this chapter using morally correct linear algebra,
+it's imperative you're comfortable with linear maps,
+and in particular the dual space $V^\vee$ which we will repeatedly use.
+
+In this chapter, all vector spaces have norms and
+are finite-dimensional over $\RR$.
+So in particular every vector space is also a metric space
+(with metric given by the norm), and we can talk about open sets as usual.
+
+\section{The total derivative}
+\prototype{If $f(x,y) = x^2+y^2$, then $(Df)_{(x,y)} = 2x\ee_1^\vee + 2y\ee_2^\vee$.}
+First, let $f : [a,b] \to \RR$.
+You might recall from high school calculus that for every point $p \in \RR$,
+we defined $f'(p)$ as the derivative at the point $p$ (if it existed), which we interpreted as the \emph{slope} of
+the ``tangent line''.
+
+\begin{center}
+ \begin{asy}
+ import graph;
+ size(150,0);
+
+ real f(real x) {return 3-2/(x+2.5);}
+ graph.xaxis("$x$");
+ graph.yaxis();
+ draw(graph(f,-2,2,operator ..), blue, Arrows);
+
+ real p = -1;
+ real h = 1000 * (f(p+0.001)-f(p));
+ real r = 0.9;
+ draw( (p+r,f(p)+r*h)--(p-r,f(p)-r*h), red);
+ dot( (p, f(p)), red );
+ draw( (p, f(p))--(p,0), dashed);
+ dot("$p$", (p, 0), dir(-90));
+ label("$f'(p)$", (p+r/2, f(p) + h*r/2), dir(115), red);
+ \end{asy}
+\end{center}
+
+That's fine, but I claim that the ``better'' way to interpret
+the derivative at that point is as a \emph{linear map},
+that is, as a \emph{function}.
+If $f'(p) = 1.5$,
+then the derivative tells me that if I move $\eps$ away from $p$
+then I should expect $f$ to change by about $1.5\eps$.
+In other words,
+\begin{moral}
+The derivative of $f$ at $p$ approximates $f$ near $p$ by a \emph{linear function}.
+\end{moral}
+
+What about more generally?
+Suppose I have a function like $f : \RR^2 \to \RR$, say
+\[ f(x,y) = x^2+y^2 \]
+for concreteness or something.
+For a point $p \in \RR^2$, the ``derivative'' of $f$ at $p$ ought to represent a linear map
+that approximates $f$ at that point $p$.
+That means I want a linear map $T : \RR^2 \to \RR$ such that
+\[ f(p + v) \approx f(p) + T(v) \]
+for small displacements $v \in \RR^2$.
+
+Even more generally, if $f : U \to W$ with $U \subseteq V$ open
+(in the $\norm{\bullet}_V$ metric as usual),
+then the derivative at $p \in U$ ought to be so that
+\[ f(p + v) \approx f(p) + T(v) \in W. \]
+(We need $U$ open so that for small enough $v$, $p+v \in U$ as well.)
+In fact this is exactly what we were doing earlier with $f'(p)$ in high school.
+
+\begin{center}
+ \includegraphics{media/tangent.pdf}
+ \\ \scriptsize Image derived from \cite{img:tangentplane}
+\end{center}
+
+The only difference is that, by an unfortunate coincidence,
+a linear map $\RR \to \RR$ can be represented by just its slope.
+And in the unending quest to make everything a number so that it can be AP tested,
+we immediately forgot all about what we were trying to do in the first place
+and just defined the derivative of $f$ to be a \emph{number} instead of a \emph{function}.
+
+\begin{moral}
+ The fundamental idea of Calculus is the local approximation of functions by linear functions.
+ The derivative does exactly this.
+\end{moral}
+Jean Dieudonn\'e as quoted in \cite{ref:pugh} continues:
+\begin{quote}
+ In the classical teaching of Calculus, this idea is immediately obscured
+ by the accidental fact that, on a one-dimensional vector space,
+ there is a one-to-one correspondence between linear forms and numbers,
+ and therefore the derivative at a point is defined as a number instead of a linear form.
+ This \textbf{slavish subservience to the shibboleth of numerical interpretation at any cost}
+ becomes much worse . . .
+\end{quote}
+
+So let's do this right.
+The only thing that we have to do is say what ``$\approx$'' means, and for
+this we use the norm of the vector space.
+\begin{definition}
+ Let $U \subseteq V$ be open.
+ Let $f : U \to W$ be a continuous function, and $p \in U$.
+ Suppose there exists a linear map $T : V \to W$ such that
+ \[
+ \lim_{\norm{v}_V \to 0}
+ \frac{\norm{f(p + v) - f(p) - T(v)}_W}{\norm{v}_V} = 0.
+ \]
+ Then $T$ is the \vocab{total derivative} of $f$ at $p$.
+ We denote this by $(Df)_p$, and say $f$ is \vocab{differentiable at $p$}.
+
+ If $(Df)_p$ exists at every point, we say $f$ is \vocab{differentiable}.
+\end{definition}
+
+\begin{ques}
+ Check that if $V = W = \RR$, this is equivalent to the single-variable definition.
+ (What are the linear maps from $V$ to $W$?)
+\end{ques}
+\begin{example}[Total derivative of $f(x,y) = x^2+y^2$]
+ Let $V = \RR^2$ with standard basis $\ee_1$, $\ee_2$ and let $W = \RR$,
+ and let $f\left( x \ee_1 + y \ee_2 \right) = x^2+y^2$. Let $p = a\ee_1 + b\ee_2$.
+ Then, we claim that \[ (Df)_p : \RR^2 \to \RR \quad\text{by}\quad
+ v \mapsto 2a \cdot \ee_1^\vee(v) + 2b \cdot \ee_2^\vee(v). \]
+\end{example}
+Here, the notation $\ee_1^\vee$ and $\ee_2^\vee$ makes sense,
+because by definition $(Df)_p \in V^\vee$: these are functions from $V$ to $\RR$!
+
+Let's check this manually with the limit definition.
+Set $v = x\ee_1 + y\ee_2$, and note that the norm on $V$ is $\norm{(x,y)}_V = \sqrt{x^2+y^2}$
+while the norm on $W$ is just the absolute value $\norm{c}_W = \left\lvert c \right\rvert$.
+Then we compute
+\begin{align*}
+ \frac{\norm{f(p + v) - f(p) - T(v)}_W}{\norm{v}_V}
+ &= \frac{\left\lvert (a+x)^2 + (b+y)^2 - (a^2+b^2) - (2ax+2by) \right\rvert}{\sqrt{x^2+y^2}} \\
+ &= \frac{x^2+y^2}{\sqrt{x^2+y^2}} = \sqrt{x^2+y^2} \\
+ &\to 0
+\end{align*}
+as $\norm{v} \to 0$.
+Thus, for $p = a\ee_1 + b\ee_2$ we indeed have $(Df)_p = 2a \cdot \ee_1^\vee + 2b \cdot \ee_2^\vee$.
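The same computation can be checked numerically. The sketch below (our illustration, not part of the text) evaluates the ratio from the definition at the sample point $p = (1,2)$ and watches it shrink:

```python
# Numerical check (ours) of the limit definition of (Df)_p for
# f(x, y) = x^2 + y^2 at p = (1, 2), where T(v) = 2*v1 + 4*v2.
import math

def f(x, y):
    return x * x + y * y

a, b = 1.0, 2.0

def ratio(v1, v2):
    """|f(p+v) - f(p) - T(v)| / |v|, as in the definition of (Df)_p."""
    numerator = abs(f(a + v1, b + v2) - f(a, b) - (2 * a * v1 + 2 * b * v2))
    return numerator / math.hypot(v1, v2)

# By the computation above, the ratio equals |v| exactly,
# so it goes to 0 linearly as v shrinks.
ratios = [ratio(h, h) for h in (1e-1, 1e-3, 1e-6)]
```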
+
+\begin{remark}
+ As usual, differentiability implies continuity.
+\end{remark}
+\begin{remark}
+ Although $U \subseteq V$, it might be helpful to think of vectors from $U$ and $V$
+ as different types of objects (in particular, note that it's possible for $0_V \notin U$).
+ The vectors in $U$ are ``inputs'' on our space
+ while the vectors coming from $V$ are ``small displacements''.
+ For this reason, I deliberately try to use $p \in U$ and $v \in V$ when possible.
+\end{remark}
+
+\section{The projection principle}
+Before proceeding I need to say something really important.
+\begin{theorem}[Projection principle]
+ \label{thm:project_principle}
+ Let $U$ be an open subset of the vector space $V$.
+ Let $W$ be an $n$-dimensional real vector space with basis $w_1, \dots, w_n$.
+ Then there is a bijection between continuous functions $f : U \to W$ and
+ $n$-tuples of continuous functions $f_1, f_2, \dots, f_n : U \to \RR$,
+ given by projecting onto the $i$th basis element, i.e.\
+ \[ f(v) = f_1(v)w_1 + \dots + f_n(v)w_n. \]
+\end{theorem}
+\begin{proof}
+ Obvious.
+\end{proof}
+The theorem remains true if one replaces ``continuous'' by ``differentiable'', ``smooth'', ``arbitrary'',
+or most other reasonable words. Translation:
+\begin{moral}
+To think about a function $f : U \to \RR^{n}$,
+it suffices to think about each coordinate separately.
+\end{moral}
+For this reason, we'll most often be interested in functions $f : U \to \RR$.
+That's why the dual space $V^\vee$ is so important.
+
+\section{Total and partial derivatives}
+\prototype{If $f(x,y) = x^2+y^2$, then
+$(Df) : (x,y) \mapsto 2x \cdot \ee_1^\vee + 2y \cdot \ee_2^\vee$, and
+$\fpartial fx = 2x$, $\fpartial fy = 2y$.}
+Let $U \subseteq V$ be open and let $V$ have a basis $e_1$, \dots, $e_n$.
+Suppose $f : U \to \RR$ is a function which is differentiable everywhere,
+meaning $(Df)_p \in V^\vee$ exists for every $p$.
+In that case, one can consider $Df$ as \emph{itself} a function:
+\begin{align*}
+ Df : U &\to V^\vee \\
+ p &\mapsto (Df)_p.
+\end{align*}
+This is a little crazy: to every \emph{point} in $U$
+we associate a \emph{function} in $V^\vee$.
+We say $Df$ is the \vocab{total derivative} of $f$,
+to reflect how much information we're dealing with.
+We say $(Df)_p$ is the total derivative at $p$.
+
+Let's apply the projection principle now to $Df$.
+Since we picked a basis $e_1$, \dots, $e_n$ of $V$,
+there is a corresponding dual basis
+$e_1^\vee$, $e_2^\vee$, \dots, $e_n^\vee$.
+The Projection Principle tells us that $Df$ can thus be thought of as just $n$ functions, so we can write
+\[ Df = \psi_1 e_1^\vee + \dots + \psi_n e_n^\vee. \]
+In fact, we can even describe what the $\psi_i$ are.
+\begin{definition}
+ The \vocab{$i^{\text{th}}$ partial derivative} of $f : U \to \RR$, denoted
+ \[ \fpartial{f}{e_i}: U \to \RR \]
+ is defined by
+ \[
+ \fpartial{f}{e_i} (p)
+ \defeq \lim_{t \to 0} \frac{f(p + te_i) - f(p)}{t}.
+ \]
+\end{definition}
+You can think of it as ``$f'$ along $e_i$''.
+\begin{ques}
+ Check that if $Df$ exists, then \[ (Df)_p(e_i) = \fpartial{f}{e_i}(p). \]
+\end{ques}
+\begin{remark}
+ Of course you can write down a definition of $\fpartial{f}{v}$
+ for any $v$ (rather than just the $e_i$).
+\end{remark}
+
+From the above remarks, we can derive that
+\[
+ \boxed{
+ Df =
+ \frac{\partial f}{\partial e_1} \cdot e_1^\vee
+ + \dots +
+ \frac{\partial f}{\partial e_n} \cdot e_n^\vee .
+ }
+\]
+and so given a basis of $V$, we can think of $Df$ as just
+the $n$ partials.
+\begin{remark}
+Keep in mind that each $\frac{\partial f}{\partial e_i}$ is a function from $U$ to the \emph{reals}.
+That is to say,
+\[
+ (Df)_p =
+ \underbrace{\frac{\partial f}{\partial e_1}(p)}_{\in \RR} \cdot e_1^\vee
+ + \dots +
+ \underbrace{\frac{\partial f}{\partial e_n}(p)}_{\in \RR} \cdot e_n^\vee
+ \in V^\vee.
+\]
+\end{remark}
+
+
+\begin{example}[Partial derivatives of $f(x,y) = x^2+y^2$]
+ Let $f : \RR^2 \to \RR$ by $(x,y) \mapsto x^2+y^2$.
+ Then in our new language,
+ \[ Df : (x,y) \mapsto 2x \cdot \ee_1^\vee + 2y \cdot \ee_2^\vee. \]
+ Thus the partials are
+ \[
+ \frac{\partial f}{\partial x} : (x,y) \mapsto 2x \in \RR
+ \quad\text{and}\quad
+ \frac{\partial f}{\partial y} : (x,y) \mapsto 2y \in \RR
+ \]
+\end{example}
+
+With all that said, I haven't really said much about how to
+find the total derivative itself.
+For example, if I told you
+\[ f(x,y) = x \sin y + x^2y^4 \]
+you might want to be able to compute $Df$ without going through
+that horrible limit definition I told you about earlier.
+
+Fortunately, it turns out you already know how to compute partial derivatives,
+because you had to take AP Calculus at some point in your life.
+It turns out for most reasonable functions, this is all you'll ever need.
+\begin{theorem}[Continuous partials implies differentiable]
+ \label{thm:apcalc_partials}
+ Let $U \subseteq V$ be open and pick any basis $e_1, \dots, e_n$.
+ Let $f : U \to \RR$ and suppose that $\fpartial{f}{e_i}$ is defined
+ for each $i$ and moreover is \emph{continuous}.
+ Then $f$ is differentiable and $Df$ is given by
+ \[ Df = \sum_{i=1}^n \fpartial{f}{e_i} \cdot e_i^\vee. \]
+\end{theorem}
+\begin{proof}
+ Not going to write out the details, but\dots
+ given $v = t_1e_1 + \dots + t_ne_n$,
+ the idea is to just walk from $p$ to $p+t_1e_1$, $p+t_1e_1+t_2e_2$, \dots,
+ up to $p+t_1e_1+t_2e_2+\dots+t_ne_n = p+v$,
+ picking up the partial derivatives on the way.
+ Do some calculation.
+\end{proof}
+
+\begin{remark}
+ The continuous condition cannot be dropped. The function
+ \[
+ f(x,y)
+ = \begin{cases}
+ \frac{xy}{x^2+y^2} & (x,y) \neq (0,0) \\
+ 0 & (x,y) = (0,0).
+ \end{cases}
+ \]
+ is the classic counterexample -- the total derivative $Df$ does not exist at zero,
+ even though both partials do.
+\end{remark}
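One can check this concretely (our sketch, not from the text): both difference quotients at the origin are identically zero, yet $f$ is not even continuous there, since $f(t,t) = \tfrac12$ for every $t \neq 0$ while $f(0,0) = 0$.

```python
# The classic counterexample, checked numerically (our illustration):
# both partials of f at the origin are 0, but f is discontinuous there.
def f(x, y):
    if (x, y) == (0.0, 0.0):
        return 0.0
    return x * y / (x * x + y * y)

h = 1e-8
partial_x = (f(h, 0.0) - f(0.0, 0.0)) / h   # f vanishes on the x-axis
partial_y = (f(0.0, h) - f(0.0, 0.0)) / h   # f vanishes on the y-axis
near_diag = f(1e-12, 1e-12)                  # but f = 1/2 on the diagonal
```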
+
+\begin{example}
+ [Actually computing a total derivative]
+ Let $f(x,y) = x \sin y + x^2y^4$. Then
+ \begin{align*}
+ \fpartial fx (x,y) &= \sin y + y^4 \cdot 2x \\
+ \fpartial fy (x,y) &= x \cos y + x^2 \cdot 4y^3.
+ \end{align*}
+ So \Cref{thm:apcalc_partials} applies,
+ and $Df = \fpartial fx \ee_1^\vee + \fpartial fy \ee_2^\vee$,
+ which I won't bother to write out.
+\end{example}
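If you want reassurance, the partials above can be compared against central finite differences. This is our own sketch (the sample point and tolerances are ad hoc):

```python
# Finite-difference check (ours) of the partials of
# f(x, y) = x sin y + x^2 y^4 at a sample point.
import math

def f(x, y):
    return x * math.sin(y) + x**2 * y**4

x, y, h = 1.3, 0.7, 1e-6
fx_approx = (f(x + h, y) - f(x - h, y)) / (2 * h)  # central difference in x
fy_approx = (f(x, y + h) - f(x, y - h)) / (2 * h)  # central difference in y

# The partials computed in the example:
fx_exact = math.sin(y) + 2 * x * y**4
fy_exact = x * math.cos(y) + 4 * x**2 * y**3
```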
+
+The example $f(x,y) = x^2+y^2$ is the same thing.
+That being said, who cares about $x \sin y + x^2y^4$ anyways?
+
+\section{(Optional) A word on higher derivatives}
+Let $U \subseteq V$ be open, and take $f : U \to W$, so that $Df : U \to \Hom(V,W)$.
+
+Well, $\Hom(V,W)$ can be thought of as a normed vector space in its own right:
+one can define the operator norm on it by setting
+\[ \norm{T} \defeq \sup \left\{ \frac{\norm{T(v)}_W}{\norm{v}_V} \mid v \neq 0_V \right\}. \]
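For concreteness, here is a quick pure-Python sketch (ours) of the operator norm of a diagonal linear map $\RR^2 \to \RR^2$ with Euclidean norms: sampling $\norm{Av}/\norm{v}$ over unit vectors approaches the supremum, which for this map is $3$.

```python
# Operator norm by brute-force sampling (our illustration):
# for A = diag(3, 1) on R^2 with Euclidean norms, sup |Av|/|v| = 3,
# attained along the first coordinate axis.
import math

def apply(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[3.0, 0.0],
     [0.0, 1.0]]

# Sample |Av| over unit vectors v = (cos t, sin t); |v| = 1 already.
brute = max(
    math.hypot(*apply(A, (math.cos(t), math.sin(t))))
    for t in [2 * math.pi * k / 1000 for k in range(1000)]
)
```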
+Thus it makes sense to write
+\[ D(Df) : U \to \Hom(V,\Hom(V,W)) \]
+which we abbreviate as $D^2 f$. Dropping all doubt and plunging on,
+\[ D^3f : U \to \Hom(V, \Hom(V,\Hom(V,W))). \]
+I'm sorry.
+As consolation, we at least know that $\Hom(V,W) \cong V^\vee \otimes W$ in a natural way,
+so we can at least condense this to
+\[ D^kf : U \to (V^\vee)^{\otimes k} \otimes W \]
+rather than writing a bunch of $\Hom$'s.
+\begin{remark}
+ If $k=2$, $W = \RR$, then $D^2f(v) \in (V^\vee)^{\otimes 2}$,
+ so it can be represented as an $n \times n$ matrix,
+ which for some reason is called a \vocab{Hessian}.
+\end{remark}
+The most important property of the second derivative is that
+\begin{theorem}
+ [Symmetry of $D^2 f$]
+ Let $f : U \to W$ with $U \subseteq V$ open.
+ If $(D^2f)_p$ exists at some $p \in U$, then it is symmetric, meaning
+ \[ (D^2f)_p(v_1, v_2) = (D^2f)_p(v_2, v_1). \]
+\end{theorem}
+I'll just quote this without proof (see e.g. \cite[\S5, theorem 16]{ref:pugh}),
+because double derivatives make my head spin.
+An important corollary of this theorem:
+\begin{corollary}
+ [Clairaut's theorem: mixed partials are symmetric]
+ Let $f : U \to \RR$ with $U \subseteq V$ open be twice differentiable.
+ Then for any point $p$ such that the quantities are defined,
+ \[
+ \frac{\partial}{\partial e_i}
+ \frac{\partial}{\partial e_j}
+ f(p)
+ =
+ \frac{\partial}{\partial e_j}
+ \frac{\partial}{\partial e_i}
+ f(p).
+ \]
+\end{corollary}
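For instance, one can check symbolically (a sympy sketch, not part of the text) that the two orders of differentiation agree for the earlier example $x \sin y + x^2 y^4$:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * sp.sin(y) + x**2 * y**4

# Differentiate in the two possible orders
fxy = sp.diff(f, x, y)
fyx = sp.diff(f, y, x)
```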
+
+\section{Towards differential forms}
+This concludes the exposition of what the derivative really is:
+the key idea I want to communicate in this chapter is that $Df$
+should be thought of as a map from $U \to V^\vee$.
+
+The next natural thing to do is talk about \emph{integration}.
+The correct way to do this is through a so-called \emph{differential form}:
+you'll finally know what all those stupid $dx$'s and $dy$'s really mean.
+(They weren't just there for decoration!)
+
+\section{\problemhead}
+\begin{sproblem}[Chain rule]
+ Let $U_1 \taking f U_2 \taking g U_3$ be differentiable maps
+ between open sets of normed vector spaces $V_i$, and let $h = g \circ f$.
+ Prove the Chain Rule: for any point $p \in U_1$, we have
+ \[ (Dh)_p = (Dg)_{f(p)} \circ (Df)_p. \]
+\end{sproblem}
+
+\begin{problem}
+ Let $U \subseteq V$ be open, and $f : U \to \RR$ be differentiable $k$ times.
+ Show that $(D^kf)_p$ is symmetric in its $k$ arguments, meaning for any $v_1, \dots, v_k \in V$
+ and any permutation $\sigma$ on $\left\{ 1, \dots, k \right\}$ we have
+ \[ (D^kf)_p(v_1, \dots, v_k) = (D^kf)_p(v_{\sigma(1)}, \dots, v_{\sigma(k)}). \]
+ \begin{hint}
+ Induct on $k$; the main work was already done in the $k=2$ case.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/norm-trace.tex b/books/napkin/norm-trace.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f2165bcb93991c7b610b77404607b1c40c20c9f2
--- /dev/null
+++ b/books/napkin/norm-trace.tex
@@ -0,0 +1,487 @@
+\chapter{The ring of integers}
+\section{Norms and traces}
+\prototype{$a+b\sqrt2$ as an element of $\QQ(\sqrt2)$ has norm $a^2-2b^2$ and trace $2a$.}
+Remember how in olympiads the quantity $a^2+b^2$ was called the ``norm'' of $a+bi$?
+Cool, let me tell you what's actually happening.
+
+First, let me make precise the notion of a conjugate.
+\begin{definition}
+ Let $\alpha$ be an algebraic number, and let $P(x)$ be its minimal polynomial,
+ of degree $m$.
+ Then the $m$ roots of $P$ are the (Galois) \vocab{conjugates} of $\alpha$.
+\end{definition}
+It's worth showing right away that the conjugates are distinct.
+\begin{lemma}[Irreducible polynomials have distinct roots]
+ An irreducible polynomial in $\QQ[x]$ cannot have a complex double root.
+ \label{lem:irred_complex}
+\end{lemma}
+\begin{proof}
+ Let $f(x) \in \QQ[x]$ be an irreducible polynomial and assume it has a double root $\alpha$.
+ \textbf{Take the derivative $f'(x)$.}
+ This derivative has three interesting properties.
+ \begin{itemize}
+ \ii The degree of $f'$ is one less than the degree of $f$.
+ \ii The polynomials $f$ and $f'$ are not relatively prime
+ because they share a factor $x-\alpha$.
+ \ii The coefficients of $f'$ are also in $\QQ$.
+ \end{itemize}
+ Consider $g = \gcd(f, f')$. We must have $g \in \QQ[x]$ by the Euclidean algorithm.
+ But the first two facts about $f'$ ensure that $g$ is nonconstant
+ and $\deg g < \deg f$.
+ Yet $g$ divides $f$,
+ contradicting the irreducibility of $f$.
+\end{proof}
+Hence $\alpha$ has exactly as many conjugates as the degree of $\alpha$.
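The gcd argument in the proof can be watched in action. Here is a sympy sketch for the irreducible polynomial $x^3 - 2$ (illustrative only):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 2                     # irreducible over QQ
g = sp.gcd(f, sp.diff(f, x))     # constant, since f has no double roots
```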
+
+Now, we would \emph{like} to define the \emph{norm} of an element $\Norm(\alpha)$
+as the product of its conjugates.
+For example, we want $2+i$ to have norm $(2+i)(2-i) = 5$,
+and in general for $a+bi$ to have norm $a^2+b^2$.
+It would be \emph{really cool} if the norm was multiplicative;
+we already know this is true for complex numbers!
+
+Unfortunately, this doesn't quite work: consider
+\[ \Norm(2+i) = 5 \text{ and } \Norm(2-i) = 5. \]
+But $(2+i)(2-i) = 5$, which doesn't have norm $25$ like we want,
+since $5$ is degree $1$ and has no conjugates at all.
+The reason this ``bad'' thing is happening is that we're
+trying to define the norm of an \emph{element},
+when we really ought to be defining the norm of an element
+\emph{with respect to a particular $K$}.
+
+What I'm driving at is that the norm should have
+different meanings depending on which field you're in.
+If we think of $5$ as an element of $\QQ$, then its norm is $5$.
+But thought of as an element of $\QQ(i)$, its norm really ought to be $25$.
+Let's make this happen: for $K$ a number field, we will now define $\Norm_{K/\QQ}(\alpha)$
+to be the norm of $\alpha$ \emph{with respect to $K$} as follows.
+\begin{definition}
+ Let $\alpha \in K$ have degree $n$, so $\QQ(\alpha) \subseteq K$, and set $k = (\deg K) / n$.
+ The \vocab{norm} of $\alpha$ is defined as
+ \[ \Norm_{K/\QQ}(\alpha) \defeq \left( \prod \text{Galois conj of $\alpha$} \right)^k. \]
+ The \vocab{trace} is defined as
+ \[ \Tr_{K/\QQ}(\alpha) \defeq k \cdot \left( \sum \text{Galois conj of $\alpha$} \right). \]
+\end{definition}
+The exponent of $k$ is a ``correction factor'' that makes the norm of $5$ into $5^2=25$
+when we view $5$ as an element of $\QQ(i)$ rather than an element of $\QQ$.
+For a ``generic'' element of $K$, we expect $k = 1$.
+\begin{exercise}
+ Use what you know about nested vector spaces to convince
+ yourself that $k$ is actually an integer.
+\end{exercise}
+\begin{example}[Norm of $a+b\sqrt2$]
+ Let $\alpha = a+b\sqrt2 \in \QQ(\sqrt2) = K$.
+ If $b \neq 0$, then $\alpha$ and $K$ both have degree $2$, so $k = 1$.
+ Thus the only conjugates of $\alpha$ are $a \pm b\sqrt2$, which gives
+ the norm \[ (a+b\sqrt2)(a-b\sqrt2) = a^2-2b^2. \]
+ The trace is $(a-b\sqrt2) + (a+b\sqrt2) = 2a$.
+
+ Nicely, the formulas $a^2-2b^2$ and $2a$ also work when $b=0$, since then $k=2$.
+\end{example}
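If you want to watch the expansion happen, here is a sympy sketch of the norm and trace computation above (illustration only):

```python
import sympy as sp

a, b = sp.symbols('a b')
alpha = a + b * sp.sqrt(2)
conj  = a - b * sp.sqrt(2)

norm  = sp.expand(alpha * conj)   # a**2 - 2*b**2
trace = sp.expand(alpha + conj)   # 2*a
```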
+Of importance is:
+\begin{proposition}[Norms and traces are rational integers]
+ If $\alpha$ is an algebraic integer, its norm and trace
+ are rational integers.
+\end{proposition}
+\begin{ques}
+ Prove it. (Vieta's formulas.)
+\end{ques}
+
+That's great, but it leaves a question unanswered:
+why is the norm multiplicative?
+To do this, I have to give a new definition of norm and trace.
+
+\begin{theorem}[Morally correct definition of norm and trace]
+ Let $K$ be a number field of degree $n$, and let $\alpha \in K$.
+ Let $\mu_\alpha \colon K \to K$ denote the map \[ x \mapsto \alpha x \]
+ viewed as a linear map of $\QQ$-vector spaces.
+ Then,
+ \begin{itemize}
+ \ii the norm of $\alpha$ equals the determinant $\det \mu_\alpha$, and
+ \ii the trace of $\alpha$ equals the trace $\Tr \mu_\alpha$.
+ \end{itemize}
+\end{theorem}
+Since the trace and determinant don't depend on the choice of basis,
+you can pick whatever basis you want
+and use whatever definition you got in high school.
+Fantastic, right?
+
+\begin{example}[Explicit computation of matrices for $a+b\sqrt2$]
+ Let $K = \QQ(\sqrt2)$, and let $1$, $\sqrt 2$ be the basis of $K$.
+ Let \[ \alpha = a + b \sqrt 2 \] (possibly even $b = 0$), and notice that
+ \[ \left( a+b\sqrt2 \right) \left(x+y\sqrt2 \right)
+ = (ax+2yb) + (bx+ay)\sqrt2. \]
+ We can rewrite this in matrix form as
+ \[
+ \begin{bmatrix}
+ a & 2b \\
+ b & a
+ \end{bmatrix}
+ \begin{bmatrix}
+ x \\ y
+ \end{bmatrix}
+ =
+ \begin{bmatrix}
+ ax+2yb \\ bx+ay
+ \end{bmatrix}.
+ \]
+ Consequently, we can interpret $\mu_\alpha$ as the matrix
+ \[ \mu_\alpha =
+ \begin{bmatrix}
+ a & 2b \\ b & a
+ \end{bmatrix}. \]
+ Of course, the matrix will change if we pick a different basis,
+ but the determinant and trace do not: they are always given by
+ \[ \det \mu_\alpha = a^2-2b^2 \text{ and }
+ \Tr \mu_\alpha = 2a. \]
+\end{example}
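A sympy sketch of this matrix computation (illustrative):

```python
import sympy as sp

a, b = sp.symbols('a b')
# The matrix mu_alpha in the basis 1, sqrt(2)
M = sp.Matrix([[a, 2*b],
               [b, a]])

det = sp.expand(M.det())  # a**2 - 2*b**2
tr  = M.trace()           # 2*a
```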
+This interpretation explains why the same formula should work for $a+b\sqrt 2$
+even in the case $b = 0$.
+
+\begin{proof}
+I'll prove the result for just the norm; the trace falls out similarly.
+Set
+\[ n = \deg \alpha, \qquad kn = \deg K. \]
+The proof is split into two parts, depending on whether or not $k=1$.
+\begin{subproof}[Proof if $k=1$]
+ Since $k = 1$, we have $n = \deg \alpha = \deg K$.
+ Thus the norm actually \emph{is} the product of the Galois conjugates.
+ Also, \[ \{1, \alpha, \dots, \alpha^{n-1}\} \]
+ is linearly independent in $K$, and hence a basis (as $\dim K = n$).
+ Let's use this as the basis for $\mu_\alpha$.
+
+ Let \[ x^n+c_{n-1}x^{n-1} + \dots + c_0 \] be the minimal polynomial of $\alpha$.
+ Thus $\mu_\alpha(1) = \alpha$, $\mu_\alpha(\alpha) = \alpha^2$, and so on,
+ but $\mu_\alpha(\alpha^{n-1}) = -c_{n-1}\alpha^{n-1} - \dots - c_0$.
+ Therefore, $\mu_\alpha$ is given by the matrix
+ \[
+ M =
+ \begin{bmatrix}
+ 0 & 0 & 0 & \dots & 0 & -c_0 \\
+ 1 & 0 & 0 & \dots & 0 & -c_1 \\
+ 0 & 1 & 0 & \dots & 0 & -c_2 \\
+ \vdots & \vdots & \vdots & \ddots & 0 & -c_{n-2} \\
+ 0 & 0 & 0 & \dots & 1 & -c_{n-1}
+ \end{bmatrix}.
+ \]
+ Thus \[ \det M = (-1)^n c_0 \] and we're done by Vieta's formulas.
+\end{subproof}
+\begin{subproof}[Proof if $k > 1$]
+ We have nested vector spaces
+ \[ \QQ \subseteq \QQ(\alpha) \subseteq K. \]
+ Let $e_1$, \dots, $e_k$ be a $\QQ(\alpha)$-basis for $K$
+ (meaning: interpret $K$ as a vector space over $\QQ(\alpha)$, and pick that basis).
+ Since $\{1, \alpha, \dots, \alpha^{n-1}\}$ is a $\QQ$ basis for $\QQ(\alpha)$,
+ the elements
+ \[
+ \begin{array}{cccc}
+ e_1, & e_1\alpha, & \dots, & e_1\alpha^{n-1} \\
+ e_2, & e_2\alpha, & \dots, & e_2\alpha^{n-1} \\
+ \vdots & \vdots & \ddots & \vdots \\
+ e_k, & e_k\alpha, & \dots, & e_k\alpha^{n-1}
+ \end{array}
+ \]
+ constitute a $\QQ$-basis of $K$.
+ Using \emph{this} basis, the map $\mu_\alpha$ looks like
+ \[
+ \underbrace{
+ \begin{bmatrix}
+ M & & & \\
+ & M & & \\
+ & & \ddots & \\
+ & & & M
+ \end{bmatrix}
+ }_{\text{$k$ times}}
+ \]
+ where $M$ is the same matrix as above:
+ we just end up with one copy of our old matrix for each $e_i$.
+ Thus $\det \mu_\alpha = (\det M)^k$, as needed. \qedhere
+\end{subproof}
+\begin{ques}
+ Verify the result for traces as well. \qedhere
+\end{ques}
+\end{proof}
+
+From this it follows immediately that
+\[ \Norm_{K/\QQ}(\alpha\beta) = \Norm_{K/\QQ}(\alpha)\Norm_{K/\QQ}(\beta) \]
+because by definition we have
+\[ \mu_{\alpha\beta} = \mu_\alpha \circ \mu_\beta, \]
+and the determinant is multiplicative.
+In the same way, the trace is additive.
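Here is the multiplicativity playing out numerically for $K = \QQ(\sqrt2)$, again via the matrices $\mu_\alpha$ (a sympy sketch, not part of the text):

```python
import sympy as sp

def mu(a, b):
    # Matrix of multiplication by a + b*sqrt(2) in the basis 1, sqrt(2)
    return sp.Matrix([[a, 2*b],
                      [b, a]])

A = mu(3, 1)   # 3 + sqrt(2)
B = mu(1, 2)   # 1 + 2*sqrt(2)
C = mu(7, 7)   # their product: (3+sqrt(2))(1+2*sqrt(2)) = 7 + 7*sqrt(2)
```

Composition of the multiplication maps corresponds to matrix multiplication, so the determinant (norm) is multiplicative.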
+
+\section{The ring of integers}
+\prototype{If $K = \QQ(\sqrt 2)$, then $\OO_K = \ZZ[\sqrt 2]$.
+ But if $K = \QQ(\sqrt 5)$, then $\OO_K = \ZZ[\frac{1+\sqrt5}{2}]$.}
+
+$\ZZ$ makes for better number theory than $\QQ$.
+In the same way, focusing on the \emph{algebraic integers} of $K$
+gives us some really nice structure, and we'll do that here.
+
+\begin{definition}
+ Given a number field $K$, we define
+ \[ \OO_K \defeq K \cap \ol\ZZ \]
+ to be the \vocab{ring of integers} of $K$;
+ in other words $\OO_K$ consists of the algebraic integers of $K$.
+\end{definition}
+
+We do the classical example of a quadratic field now.
+Before proceeding, I need to write a silly number theory fact.
+\begin{exercise}
+ [Annoying but straightforward]
+ Let $a$ and $b$ be rational numbers, and $d$ a squarefree positive integer.
+ \begin{itemize}
+ \ii If $d \equiv 2, 3 \pmod 4$, prove that
+ $2a, a^2-db^2 \in \ZZ$ if and only if $a,b \in \ZZ$.
+ \ii For $d \equiv 1 \pmod 4$, prove that
+ $2a, a^2-db^2 \in \ZZ$ if and only if $a,b \in \ZZ$
+ OR if $a -\half, b-\half \in \ZZ$.
+ \end{itemize}
+ You'll need to take mod $4$.
+\end{exercise}
+
+\begin{example}
+ [Ring of integers of $K = \QQ(\sqrt3)$]
+ Let $K$ be as above.
+ We claim that \[ \OO_K = \ZZ[\sqrt 3] = \left\{ m + n\sqrt 3 \mid m,n \in \ZZ \right\}. \]
+ We set $\alpha = a + b \sqrt 3$.
+ Then $\alpha \in \OO_K$ exactly when its minimal polynomial has integer coefficients.
+
+ If $b = 0$, then the minimal polynomial is $x-\alpha=x-a$,
+ and thus $\alpha$ works if and only if it's an integer.
+ If $b \neq 0$, then the minimal polynomial is
+ \[ (x-a)^2 - 3b^2 = x^2 - 2a \cdot x + (a^2-3b^2). \]
+ From the exercise, this occurs exactly for $a,b \in \ZZ$.
+\end{example}
+\begin{example}
+ [Ring of integers of $K = \QQ(\sqrt 5)$]
+ We claim that in this case
+ \[ \OO_K = \ZZ\left[ \frac{1+\sqrt5}{2} \right]
+ = \left\{ m + n \cdot \frac{1+\sqrt5}{2} \mid m,n \in \ZZ \right\}. \]
+ The proof is exactly the same, except that for $b \neq 0$ the exercise now gives
+ two possibilities: either $a,b \in \ZZ$, or $a,b \in \ZZ - \half$.
+ This reflects the fact that $\frac{1+\sqrt5}{2}$ is the root of $x^2-x-1 = 0$;
+ no such thing is possible with $\sqrt 3$.
+\end{example}
+In general, the ring of integers of $K = \QQ(\sqrt d)$ is
+\[ \OO_K
+ =
+ \begin{cases}
+ \ZZ[\sqrt d] & d\equiv 2,3 \pmod 4 \\[1em]
+ \ZZ\left[ \frac{1+\sqrt d}{2} \right] & d \equiv 1 \pmod 4.
+ \end{cases}
+\]
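The dichotomy can be spot-checked with sympy's `minimal_polynomial` (a sketch, not part of the text): the golden ratio has a monic minimal polynomial with integer coefficients, so it really is an algebraic integer.

```python
import sympy as sp

x = sp.symbols('x')
golden = (1 + sp.sqrt(5)) / 2

# Monic with integer coefficients, so golden is an algebraic integer
p = sp.minimal_polynomial(golden, x)   # x**2 - x - 1
```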
+What we're going to show is that $\OO_K$ behaves in $K$
+a lot like the integers do in $\QQ$.
+First we show $K$ consists of quotients of numbers in $\OO_K$.
+In fact, we can do better:
+\begin{example}[Rationalizing the denominator]
+ Consider $K = \QQ(\sqrt3)$.
+ The number $x = \frac{1}{4+\sqrt3}$ is an element of $K$, but by
+ ``rationalizing the denominator'' we can write
+ \[ \frac{1}{4+\sqrt3} = \frac{4-\sqrt3}{13}. \]
+ So we see that in fact, $x$ is $\frac{1}{13}$ of an integer in $\OO_K$.
+\end{example}
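sympy can replay the rationalization (illustrative sketch):

```python
import sympy as sp

xval = 1 / (4 + sp.sqrt(3))
rationalized = sp.radsimp(xval)   # equal to (4 - sqrt(3))/13
```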
+
+This phenomenon holds in general:
+\begin{theorem}[$K = \QQ \cdot \OO_K$]
+ Let $K$ be a number field, and let $x \in K$ be any element.
+ Then there exists an integer $n$ such that $nx \in \OO_K$;
+ in other words, \[ x = \frac 1n \alpha \] for some $\alpha \in \OO_K$.
+\end{theorem}
+\begin{exercise}
+ Prove this yourself.
+ (Start by using the fact that $x$ has a minimal
+ polynomial with rational coefficients.
+ Alternatively, take the norm.)
+\end{exercise}
+
+Now we are going to show $\OO_K$ is a ring;
+we'll check it is closed under addition and multiplication.
+To do so, the easiest route is:
+\begin{lemma}[$\alpha \in \ol\ZZ$ $\iff$ {$\ZZ[\alpha]$} finitely generated]
+ Let $\alpha \in \ol\QQ$.
+ Then $\alpha$ is an algebraic integer if and only if
+ the abelian group $\ZZ[\alpha]$ is finitely generated.
+\end{lemma}
+\begin{proof}
+ Note that $\alpha$ is an algebraic integer if and only if it's the root
+ of some nonzero, monic polynomial with integer coefficients.
+ Suppose first that
+ \[ \alpha^N = c_{N-1} \alpha^{N-1} + c_{N-2} \alpha^{N-2} + \dots + c_0. \]
+ Then the set $1, \alpha, \dots, \alpha^{N-1}$ generates $\ZZ[\alpha]$,
+ since we can repeatedly replace $\alpha^N$ until all powers of $\alpha$
+ are less than $N$.
+
+ Conversely, suppose that $\ZZ[\alpha]$ is finitely generated
+ by some $b_1, \dots, b_m$.
+ Viewing the $b_i$ as polynomials in $\alpha$, we can select a large integer
+ $N$ (say $N = \deg b_1 + \dots + \deg b_m + 2015$)
+ and express $\alpha^N$ in the $b_i$'s to get
+ \[ \alpha^N = c_1b_1(\alpha) + \dots + c_mb_m(\alpha). \]
+ Rearranging gives a monic polynomial with root $\alpha$,
+ since the choice of $N$ guarantees the leading term $x^N$ is not cancelled.
+ So $\alpha$ is an algebraic integer.
+\end{proof}
+\begin{example}[$\half$ isn't an algebraic integer]
+ We already know $\half$ isn't an algebraic integer.
+ So we expect
+ \[ \ZZ \left[ \half \right] = \left\{ \frac{a}{2^m}
+ \mid a, m \in \ZZ \text{ and } m \ge 0 \right\} \]
+ to not be finitely generated, and this is the case.
+\end{example}
+\begin{ques}
+ To make the last example concrete:
+ name all the elements of $\ZZ[\half]$
+ that cannot be written as an integer combination of
+ \[ \left\{ \frac12, \frac{7}{8}, \frac{13}{64},
+ \frac{2015}{4096}, \frac{1}{1048576} \right\} \]
+\end{ques}
+
+Now we can state the theorem.
+\begin{theorem}[Algebraic integers are closed under $+$ and $\times$]
+ The set $\ol\ZZ$ is closed under addition and multiplication;
+ i.e.\ it is a ring.
+ In particular, $\OO_K$ is also a ring for any number field $K$.
+\end{theorem}
+\begin{proof}
+ Let $\alpha, \beta \in \ol\ZZ$.
+ Then $\ZZ[\alpha]$ and $\ZZ[\beta]$ are finitely generated.
+ Hence so is $\ZZ[\alpha, \beta]$.
+ (Details: if $\ZZ[\alpha]$ has $\ZZ$-basis $a_1, \dots, a_m$ and
+ $\ZZ[\beta]$ has $\ZZ$-basis $b_1, \dots, b_n$,
+ then take the $mn$ elements $a_ib_j$.)
+
+ Now $\ZZ[\alpha \pm \beta]$ and $\ZZ[\alpha \beta]$ are subsets of $\ZZ[\alpha,\beta]$ and so they are also finitely generated.
+ Hence $\alpha \pm \beta$ and $\alpha\beta$ are algebraic integers.
+\end{proof}
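As a concrete instance of the theorem, $\sqrt2 + \sqrt3$ is again an algebraic integer; a sympy sketch (illustration only):

```python
import sympy as sp

x = sp.symbols('x')
beta = sp.sqrt(2) + sp.sqrt(3)

# Monic with integer coefficients: beta is an algebraic integer
p = sp.minimal_polynomial(beta, x)   # x**4 - 10*x**2 + 1
```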
+
+In fact, something even better is true.
+As you saw, for $\QQ(\sqrt 3)$ we had $\OO_K = \ZZ[\sqrt 3]$;
+in other words, $\OO_K$ was generated by $1$ and $\sqrt 3$.
+Something similar was true for $\QQ(\sqrt 5)$.
+We claim that in fact, the general picture looks exactly like this.
+
+\begin{theorem}[$\OO_K$ is a free $\ZZ$-module of rank $n$]
+ Let $K$ be a number field of degree $n$.
+ Then $\OO_K$ is a free $\ZZ$-module of rank $n$,
+ i.e.\ $\OO_K \cong \ZZ^{\oplus n}$ as an abelian group.
+ In other words, $\OO_K$ has a $\ZZ$-basis of $n$ elements as
+ \[ \OO_K = \left\{ c_1\alpha_1 + \dots
+ + c_{n-1}\alpha_{n-1} + c_n\alpha_n \mid c_i \in \ZZ \right\} \]
+ where $\alpha_i$ are algebraic integers in $\OO_K$.
+ \label{thm:OK_free_Z_module}
+\end{theorem}
+
+\begin{proof}
+ TODO: add this in.
+ (Originally, there was an incorrect proof;
+ the mistake was pointed out 2020-02-12 on
+ \url{https://math.stackexchange.com/q/3543641/229197}
+ and I hope to supply a correct one soon.)
+% This is a kind of fun proof, so it may be worth
+% trying to work out yourself before reading it.
+%
+% Pick a $\QQ$-basis of $\alpha_1$, \dots, $\alpha_n$ of $K$ and WLOG
+% the $\alpha_i$ are in $\OO_K$ by scaling.
+%
+% Consider $\alpha \in \OO_K$,
+% and write $\alpha = c_1\alpha_1 + \dots + c_n\alpha_n$.
+% We will try to bound the denominators of $c_i$.
+% Look at $\Norm(\alpha) = \Norm(c_1\alpha_1 + \dots + c_n\alpha_n)$.
+%
+% If we do a giant norm computation, we find that $\Norm(\alpha)$
+% is a polynomial in the $c_i$ with fixed coefficients.
+% (For example, $\Norm(c_1 + c_2\sqrt 2) = c_1^2 - 2c_2^2$, say.)
+% But $\Norm(\alpha)$ is an \emph{integer}, so the denominators of the $c_i$
+% have to be bounded by some very large integer $N$.
+% Thus
+% \[ \bigoplus_i \ZZ \cdot \alpha_i \subseteq \OO_K
+% \subseteq \frac 1N \bigoplus_i \ZZ \cdot \alpha_i. \]
+% The latter inclusion shows that $\OO_K$ is a subgroup
+% of a free group, and hence it is itself free.
+% On the other hand, the first inclusion shows it's rank $n$.
+\end{proof}
+
+This last theorem shows that in many ways $\OO_K$ is a ``lattice'' in $K$.
+That is, for a number field $K$ we can find $\alpha_1$, \dots, $\alpha_n$
+in $\OO_K$ such that
+\begin{align*}
+ \OO_K &\cong \alpha_1\ZZ \oplus \alpha_2\ZZ \oplus \dots \oplus \alpha_n\ZZ \\
+ K &\cong \alpha_1\QQ \oplus \alpha_2\QQ \oplus \dots \oplus \alpha_n\QQ
+\end{align*}
+as abelian groups.
+
+\section{On monogenic extensions}
+Recall that it turned out number fields $K$ could all be
+expressed as $\QQ(\alpha)$ for some $\alpha$.
+We might hope that something similar is true of the ring of integers:
+that we can write \[ \OO_K = \ZZ[\theta] \]
+in which case $\{1, \theta, \dots, \theta^{n-1}\}$
+serves both as a basis of $K$ and as the $\ZZ$-basis for $\OO_K$ (here $n = [K:\QQ]$).
+In other words, we hope that the basis of $\OO_K$ is actually a ``power basis''.
+
+This is true for the most common examples we use:
+\begin{itemize}
+ \ii the quadratic field, and
+ \ii the cyclotomic field in \Cref{prob:ring_int_cyclotomic}.
+\end{itemize}
+Unfortunately, it is not true in general:
+the first counterexample is $\QQ(\alpha)$ for $\alpha$ a root of $X^3-X^2-2X-8$.
+
+We call an extension with this nice property \vocab{monogenic}.
+As we'll later see, monogenic extensions have a really nice factoring algorithm,
+\Cref{thm:factor_alg}.
+
+\section{\problemhead}
+
+\begin{sproblem} % trivial
+ \label{prob:OK_unit_norm}
+ Show that $\alpha$ is a unit of $\OO_K$ (meaning $\alpha\inv \in \OO_K$)
+ if and only if $\Norm_{K/\QQ}(\alpha) = \pm 1$.
+ \begin{hint}
+ The norm is multiplicative and equal to product of Galois conjugates.
+ \end{hint}
+\end{sproblem}
+
+\begin{sproblem}
+ Let $K$ be a number field.
+ What is the field of fractions of $\OO_K$?
+ \begin{hint}
+ It's isomorphic to $K$.
+ \end{hint}
+\end{sproblem}
+
+\begin{problem}
+ [Russian olympiad 1984]
+ Find all integers $m$ and $n$ such that
+ \[ \left( 5+3\sqrt2 \right)^m = \left( 3+5\sqrt2 \right)^n. \]
+ \begin{hint}
+ Taking the standard norm on $\QQ(\sqrt2)$ will destroy it.
+ \end{hint}
+\end{problem}
+
+
+\begin{problem}
+ [USA TST 2012]
+ Decide whether there exist $a,b,c > 2010$ satisfying
+ \[ a^3+2b^3+4c^3=6abc+1. \]
+ \begin{hint}
+ Norm in $\QQ(\sqrt[3]2)$.
+ \end{hint}
+\end{problem}
+
+\begin{dproblem}[Cyclotomic Field]
+ \yod
+ \label{prob:ring_int_cyclotomic}
+ Let $p$ be an odd rational prime and $\zeta_p$ a primitive $p$th root of unity.
+ Let $K = \QQ(\zeta_p)$.
+ Prove that $\OO_K = \ZZ[\zeta_p]$.
+ (In fact, the result is true even if $p$ is not a prime.)
+ \begin{hint}
+ Obviously $\ZZ[\zeta_p] \subseteq \OO_K$, so our goal is to show the reverse inclusion.
+ Show that for any $\alpha \in \OO_K$, the trace of $\alpha(1-\zeta_p)$ is divisible by $p$.
+ Given $x = a_0 + a_1\zeta_p + \dots + a_{p-2}\zeta_p^{p-2} \in \OO_K$ (where $a_i \in \QQ$),
+ consider $(1-\zeta_p)x$.
+ \end{hint}
+\end{dproblem}
diff --git a/books/napkin/notation.tex b/books/napkin/notation.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0b3ffd46c6cc03c3a2869f5b10606674deaae50e
--- /dev/null
+++ b/books/napkin/notation.tex
@@ -0,0 +1,332 @@
+\chapter{Glossary of notations}
+\section{General}
+\begin{itemize}
+ \ii $\forall$: for all
+ \ii $\exists$: there exists
+ \ii $\sign(\sigma)$: sign of permutation $\sigma$
+ \ii $X \implies Y$: $X$ implies $Y$
+\end{itemize}
+\section{Functions and sets}
+\begin{itemize}
+ \ii $f\im(S)$ is the image of $f : X \to Y$ for $S \subseteq X$.
+ \ii $f\inv(y)$ is the inverse for $f : X \to Y$ when $y \in Y$.
+ \ii $f\pre(T)$ is the pre-image for $f : X \to Y$ when $T \subseteq Y$.
+ \ii $f \restrict{S}$ is the restriction of $f : X \to Y$ to $S \subseteq X$.
+ \ii $f^n$ is the function $f$ applied $n$ times
+\end{itemize}
+
+Below are some common sets.
+These may also be thought of as groups,
+rings, fields etc.\ in the obvious way.
+\begin{itemize}
+ \ii $\CC$: set of complex numbers
+ \ii $\RR$: set of real numbers
+ \ii $\NN$: set of positive integers
+ \ii $\QQ$: set of rational numbers
+ \ii $\ZZ$: set of integers
+ \ii $\varnothing$: empty set
+\end{itemize}
+
+Some common notation with sets:
+\begin{itemize}
+ \ii $A \subset B$: $A$ is any subset of $B$
+ \ii $A \subseteq B$: $A$ is any subset of $B$
+ \ii $A \subsetneq B$: $A$ is a \emph{proper} subset of $B$
+ \ii $S \times T$: Cartesian product of sets $S$ and $T$
+ \ii $S \setminus T$: difference of sets $S$ and $T$
+ \ii $S \cup T$: set union of $S$ and $T$
+ \ii $S \cap T$: set intersection of $S$ and $T$
+ \ii $S \sqcup T$: disjoint union of $S$ and $T$
+ \ii $\left\lvert S \right\rvert$: cardinality of $S$
+ \ii $S / {\sim}$: if $\sim$ is an equivalence relation on $S$,
+ this is the set of equivalence classes
+ \ii $x + S$: denotes the set $\{x+s \mid s \in S\}$.
+ \ii $xS$: denotes the set $\{xs \mid s \in S\}$.
+\end{itemize}
+
+\section{Abstract and linear algebra}
+Some common groups/rings/fields:
+\begin{itemize}
+ \ii $\Zc n$: cyclic group of order $n$
+ \ii $\Zm n$: set of units of $\Zc n$.
+ \ii $S_n$: symmetric group on $\{1, \dots, n\}$
+ \ii $D_{2n}$: dihedral group of order $2n$.
+ \ii $0$, $1$: trivial group (depending on context)
+ \ii $\FF_p$: integers modulo $p$
+\end{itemize}
+Notation with groups:
+\begin{itemize}
+ \ii $1_G$: identity element of the group $G$
+ \ii $N \normalin G$: subgroup $N$ is normal in $G$.
+ \ii $G/N$: quotient group of $G$ by the normal subgroup $N$
+ \ii $Z(G)$: center of group $G$
+ \ii $N_G(H)$: normalizer of the subgroup $H$ of $G$
+ \ii $G \times H$: product group of $G$ and $H$
+ \ii $G \oplus H$: also product group,
+ but often used when $G$ and $H$ are abelian
+ (and hence we can think of them as $\ZZ$-modules)
+ \ii $\Stab_G(x)$: the stabilizer of $x \in X$, if $X$ is acted on by $G$
+ \ii $\FixPt g$, the set of fixed points by $g \in G$ (under a group action)
+\end{itemize}
+Notation with rings:
+\begin{itemize}
+ \ii $R/I$: quotient of ring $R$ by ideal $I$
+ \ii $(a_1, \dots, a_n)$: ideal generated by the $a_i$
+ \ii $R^\times$: the group of units of $R$
+ \ii $R[x_1, \dots, x_n]$: polynomial ring in $x_i$,
+ or ring obtained by adjoining the $x_i$ to $R$
+ \ii $F(x_1, \dots, x_n)$: field obtained by adjoining $x_i$ to $F$
+ \ii $R^d$: $d$th graded part of a graded (pseudo)ring $R$
+\end{itemize}
+Linear algebra:
+\begin{itemize}
+ \ii $\id$: the identity matrix
+ \ii $V \oplus W$: direct sum
+ \ii $V^{\oplus n}$: direct sum of $V$, $n$ times
+ \ii $V \otimes W$: tensor product
+ \ii $V^{\otimes n}$: tensor product of $V$, $n$ times
+ \ii $V^\vee$: dual space
+ \ii $T^\vee$: dual map (for $T$ a linear map)
+ \ii $T^\dagger$: conjugate transpose (for $T$ a linear map)
+ \ii $\left< -,-\right>$: a bilinear form
+ \ii $\Mat(V)$: endomorphisms of $V$, i.e.\ $\Hom_k(V,V)$
+ \ii $\ee_1$, \dots, $\ee_n$: the ``standard basis'' of $k^{\oplus n}$
+\end{itemize}
+\section{Quantum computation}
+\begin{itemize}
+ \ii $\ket{\psi}$: a vector in some vector space $H$
+ \ii $\bra{\psi}$: a vector in some vector space $H^\vee$, dual to $\ket{\psi}$.
+ \ii $\braket{\phi|\psi}$: evaluation of an element $\bra{\phi} \in H^\vee$ at $\ket{\psi} \in H$.
+ \ii $\zup$, $\zdown$: spin $z$-up, spin $z$-down
+ \ii $\xup$, $\xdown$: spin $x$-up, spin $x$-down
+ \ii $\yup$, $\ydown$: spin $y$-up, spin $y$-down
+\end{itemize}
+
+\section{Topology and real/complex analysis}
+Common topological spaces:
+\begin{itemize}
+ \ii $S^1$: the unit circle
+ \ii $S^n$: surface of an $n$-sphere (in $\RR^{n+1}$)
+ \ii $D^{n+1}$: closed $(n+1)$-dimensional ball (in $\RR^{n+1}$)
+ \ii $\RP^n$: real projective $n$-space
+ \ii $\CP^n$: complex projective $n$-space
+\end{itemize}
+Some topological notation:
+\begin{itemize}
+ \ii $\partial Y$: boundary of a set $Y$ (in some topological space)
+ \ii $X/S$: quotient topology of $X$ by $S \subseteq X$
+ \ii $X \times Y$: product topology of spaces $X$ and $Y$
+ \ii $X \amalg Y$: disjoint union of spaces $X$ and $Y$
+ \ii $X \vee Y$: wedge sum of (pointed) spaces $X$ and $Y$
+\end{itemize}
+Real analysis (calculus 101):
+\begin{itemize}
+ \ii $\liminf$: limit infimum
+ \ii $\limsup$: limit supremum
+ \ii $\inf$: infimum
+ \ii $\sup$: supremum
+ \ii $\ZZ_p$: $p$-adic integers
+ \ii $\QQ_p$: $p$-adic numbers
+ \ii $f'$: derivative of $f$
+ \ii $\int_a^b f(x) \; dx$: Riemann integral of $f$ on $[a,b]$
+\end{itemize}
+Complex analysis:
+\begin{itemize}
+ \ii $\int_\alpha f \; dz$: contour integral of $f$ along path $\alpha$
+ \ii $\Res(f;p)$: the residue of a meromorphic function $f$ at point $p$
+ \ii $\Wind(\gamma, p)$: winding number of $\gamma$ around $p$.
+\end{itemize}
+\section{Measure theory and probability}
+\begin{itemize}
+ \ii $\SA\cme$: the $\sigma$-algebra of Carath\'eodory-measurable sets
+ \ii $\SB(X)$: the Borel space for $X$
+ \ii $\mu\cme$: the induced measure on $\SA\cme$.
+ \ii $\lambda$: Lebesgue measure
+ \ii $\mathbf{1}_A$: the indicator function for $A$
+ \ii $\int_\Omega f \; d\mu$: the Lebesgue integral of $f$
+ \ii $\lim_{n \to \infty} f_n$: pointwise limit of $f_n$
+ \ii $\wh G$: Pontryagin dual for $G$
+\end{itemize}
+
+\section{Algebraic topology}
+\begin{itemize}
+ \ii $\alpha \simeq \beta$: for paths, this indicates path homotopy
+ \ii $\ast$: path concatenation
+ \ii $\pi_1(X) = \pi_1(X, x_0)$: the fundamental group of (pointed) space $X$
+ \ii $\pi_n(X) = \pi_n(X, x_0)$: the $n$th homotopy group of (pointed) space $X$
+ \ii $f_\sharp$: the induced map $\pi_1(X) \to \pi_1(Y)$ of $f : X \to Y$
+ \ii $\Delta^n$: the standard $n$-simplex
+ \ii $\partial\sigma$: the boundary of a singular $n$-simplex $\sigma$
+ \ii $H_n(A_\bullet)$: the $n$th homology group of the chain complex $A_\bullet$
+ \ii $H_n(X)$: the $n$th homology group of a space $X$
+ \ii $\wt H_n(X)$: the $n$th reduced homology group of $X$
+ \ii $H_n(X, A)$: the $n$th relative homology group of $X$ and $A \subseteq X$
+ \ii $f_\ast$: the induced map on $H_n(A_\bullet) \to H_n(B_\bullet)$
+ of $f : A_\bullet \to B_\bullet$,
+ or $H_n(X) \to H_n(Y)$ for $f : X \to Y$
+ \ii $\chi(X)$: Euler characteristic of a space $X$
+ \ii $H^n(A^\bullet)$: the $n$th cohomology group of a cochain complex $A^\bullet$
+ \ii $H^n(A_\bullet; G)$: the $n$th cohomology group of the cochain complex
+ obtained by applying $\Hom(-,G)$ to $A_\bullet$
+ \ii $H^n(X; G)$: the $n$th cohomology group/ring of $X$ with $G$-coefficients
+ \ii $\wt H^n(X; G)$: the $n$th reduced cohomology group/ring of $X$ with $G$-coefficients
+ \ii $H^n(X,A ; G)$: the $n$th relative cohomology group/ring of $X$ and $A \subset X$ with $G$-coefficients
+ \ii $f^\sharp$: the induced map on $H^n(A^\bullet) \to H^n(B^\bullet)$
+ of $f : A^\bullet \to B^\bullet$,
+ or $H^n(X) \to H^n(Y)$ for $f : X \to Y$
+ \ii $\Ext(-,-)$: the Ext functor
+ \ii $\phi \smile \psi$: cup product of cochains $\phi$ and $\psi$
+\end{itemize}
+
+\section{Category theory}
+Some common categories (in alphabetical order):
+\begin{itemize}
+ \ii $\catname{Grp}$: category of groups
+ \ii $\catname{CRing}$: category of commutative rings
+ \ii $\catname{Top}$: category of topological spaces
+ \ii $\catname{Top}_\ast$: category of pointed topological spaces
+ \ii $\catname{Vect}_k$: category of $k$-vector spaces
+ \ii $\catname{FDVect}_k$: category of finite-dimensional vector spaces
+ \ii $\catname{Set}$: category of sets
+ \ii $\catname{hTop}$: category of topological spaces,
+ whose morphisms are homotopy classes of maps
+ \ii $\catname{hTop}_\ast$: pointed version of $\catname{hTop}$
+ \ii $\catname{hPairTop}$: category of pairs $(X,A)$ with morphisms
+ being pair-homotopy equivalence classes
+ \ii $\Opens(X)$: the category of open sets of $X$, as a poset
+\end{itemize}
+Operations with categories:
+\begin{itemize}
+ \ii $\obj \AA$: objects of the category $\AA$
+ \ii $\AA\op$: opposite category
+ \ii $\AA \times \BB$: product category
+ \ii $[\AA, \BB]$: category of functors from $\AA$ to $\BB$
+ \ii $\ker f : \Ker f \to B$: for $f : A \to B$, categorical kernel
+ \ii $\coker f : A \to \Coker f$: for $f : A \to B$, categorical cokernel
+ \ii $\img f : A \to \Img f$: for $f : A \to B$, categorical image
+\end{itemize}
+
+\section{Differential geometry}
+\begin{itemize}
+ \ii $Df$: total derivative of $f$
+ \ii $(Df)_p$: total derivate of $f$ at point $p$
+ \ii $\fpartial{f}{e_i}$: $i^{\text{th}}$ partial derivative
+ \ii $\alpha_p$: evaluating a $k$-form $\alpha$ at $p$
+ \ii $\int_c \alpha$: integration of the differential form $\alpha$ over a cell $c$
+ \ii $d\alpha$: exterior derivative of a $k$-form $\alpha$
+ \ii $\phi^\ast \alpha$: pullback of $k$-form $\alpha$ by $\phi$
+\end{itemize}
+
+\section{Algebraic number theory}
+\begin{itemize}
+ \ii $\ol \QQ$: ring of algebraic numbers
+ \ii $\ol \ZZ$: ring of algebraic integers
+ \ii $\ol F$: algebraic closure of a field $F$
+ \ii $\NK(\alpha)$: the norm of $\alpha$ in extension $K/\QQ$
+ \ii $\TrK(\alpha)$: the trace of $\alpha$ in extension $K/\QQ$
+ \ii $\OO_K$: ring of integers in $K$
+ \ii $\ka+\kb$: sum of two ideals $\ka$ and $\kb$
+ \ii $\ka\kb$: ideal generated by products of elements in ideals $\ka$ and $\kb$
+ \ii $\ka \mid \kb$: ideal $\ka$ divides ideal $\kb$
+ \ii $\ka\inv$: the inverse of $\ka$ in the ideal group
+ \ii $\Norm(I)$: ideal norm
+ \ii $\Cl_K$: class group of $K$
+ \ii $\Delta_K$: discriminant of number field $K$
+ \ii $\mu(\OO_K)$: set of roots of unity contained in $\OO_K$
+ \ii $[K:F]$: degree of a field extension
+ \ii $\Aut(K/F)$: set of field automorphisms of $K$ fixing $F$
+ \ii $\Gal(K/F)$: Galois group of $K/F$
+ \ii $D_\kp$: decomposition group of prime ideal $\kp$
+ \ii $I_\kp$: inertia group of prime ideal $\kp$
+ \ii $\Frob_\kp$: Frobenius element of $\kp$ (element of $\Gal(K/\QQ)$)
+ \ii $P_K(\km)$: ray of principal ideals of a modulus $\km$
+ \ii $I_K(\km)$: fractional ideals of a modulus $\km$
+ \ii $C_K(\km)$: ray class group of a modulus $\km$
+ \ii $\left( \frac{L/K}{\bullet} \right)$: the Artin symbol
+ \ii $\Ram(L/K)$: primes of $K$ ramifying in $L$
+ \ii $\kf(L/K)$: the conductor of $L/K$
+\end{itemize}
+
+\section{Representation theory}
+\begin{itemize}
+ \ii $k[G]$: group algebra
+ \ii $V \oplus W$: direct sum of representations $V = (V, \rho_V)$
+ and $W = (W, \rho_W)$ of an algebra $A$
+ \ii $V^\vee$: dual representation of a representation $V = (V, \rho_V)$
+ \ii $\Reg(A)$: regular representation of an algebra $A$
+ \ii $\Homrep(V,W)$: algebra of morphisms $V \to W$ of representations
+ \ii $\chi_V$: the character $A \to k$ attached to an $A$-representation $V$
+ \ii $\Classes(G)$: set of conjugacy classes of $G$
+ \ii $\FunCl(G)$: the complex vector space of functions $\Classes(G) \to \CC$
+ \ii $V \otimes W$: tensor product of representations $V = (V, \rho_V)$ and $W = (W, \rho_W)$
+ of a \emph{group} $G$ (rather than an algebra)
+ \ii $\Ctriv$: the trivial representation
+ \ii $\Csign$: the sign representation
+\end{itemize}
+
+\section{Algebraic geometry}
+\begin{itemize}
+ \ii $\VV(-)$: vanishing locus of a set or ideal
+ \ii $\Aff^n$: $n$-dimensional (complex) affine space
+ \ii $\sqrt I$: radical of an ideal $I$
+ \ii $\CC[V]$: coordinate ring of an affine variety $V$
+ \ii $\OO_V(U)$: ring of rational functions on $U$
+ \ii $D(f)$: distinguished open set
+ \ii $\CP^n$: complex projective $n$-space (ambient space for projective varieties)
+ \ii $(x_0 : \dots : x_n)$: coordinates of projective space
+ \ii $U_i$: standard affine charts
+ \ii $\Vp(-)$: projective vanishing locus.
+ \ii $h_I$, $h_V$: Hilbert function of an ideal $I$ or projective variety $V$
+ \ii $\pi^\sharp$ or $\pi^\sharp_U$: the pullback $\OO_Y \to \OO_X(\pi\pre(U))$ obtained from $\pi \colon X \to Y$
+ \ii $\SF_p$: the stalk of a (pre-)sheaf $\SF$ at a point $p$
+ \ii $[s]_p:$ the germ of $s \in \SF(U)$ at the point $p$
+ \ii $\OO_{X,p}$: shorthand for $(\OO_X)_p$.
+ \ii $\SF\sh$: sheafification of pre-sheaf $\SF$
+ \ii $\alpha_p : \SF_p \to \SG_p$: morphism of stalks obtained from $\alpha : \SF \to \SG$
+ \ii $\km_{X,p}$: the maximal ideal of $\OO_{X,p}$
+ \ii $\Spec A$: the spectrum of a ring $A$
+ \ii $S\inv A$: localization of ring $A$ at a set $S$
+ \ii $A[1/f]$: localization of ring $A$ away from element $f$
+ \ii $A_\kp$: localization of ring $A$ at prime ideal $\kp$
+ \ii $f(\kp)$: the value of $f$ at $\kp$, i.e.\ $f \pmod \kp$
+ \ii $\kappa(\kp)$: the residue field of $\Spec A$ at the element $\kp$.
+ % \ii $\Proj R$: the projective scheme of a graded ring $S$
+ \ii $\pi^\sharp_{\kp}$: the induced map of stalks in $\pi^\sharp$.
+\end{itemize}
+
+\section{Set theory}
+\begin{itemize}
+ \ii $\ZFC$: standard theory of ZFC
+ \ii $\ZFC^+$: standard theory of ZFC, plus the sentence
+ ``there exists a strongly inaccessible cardinal''
+ \ii $2^S$ or $\PP(S)$: power set of $S$
+ \ii $A \land B$: $A$ and $B$
+ \ii $A \lor B$: $A$ or $B$
+ \ii $\neg A$: not $A$
+ \ii $V$: class of all sets (von Neumann universe)
+ \ii $\omega$: the first infinite ordinal, also the set of nonnegative integers
+ \ii $V_\alpha$: level of the von Neumann universe
+ \ii $\On$: class of ordinals
+ \ii $\bigcup A$: the union of elements inside $A$
+ \ii $A \approx B$: sets $A$ and $B$ are equinumerous
+ \ii $\aleph_\alpha$: the aleph numbers
+ \ii $\cof \lambda$: the cofinality of $\lambda$
+ \ii $\MM \vDash \phi[b_1, \dots, b_n]$: model $\MM$ satisfies the formula $\phi$
+ with parameters $b_1$, \dots, $b_n$
+ \ii $\Delta_n$, $\Sigma_n$, $\Pi_n$: levels of the Levy hierarchy
+ \ii $\MM_1 \subseteq \MM_2$: $\MM_1$ is a substructure of $\MM_2$
+ \ii $\MM_1 \prec \MM_2$: $\MM_1$ is an elementary substructure of $\MM_2$
+ \ii $p \parallel q$: elements $p$ and $q$ of a poset $\Po$ are compatible
+ \ii $p \perp q$: elements $p$ and $q$ of a poset $\Po$ are incompatible
+ \ii $\Name_\alpha$: the hierarchy of $\Po$-names
+ \ii $\tau^G$: interpretation of a name $\tau$ by filter $G$
+ \ii $M[G]$: the model obtained from a generic filter $G$ on a forcing poset $\Po$
+ \ii $p \Vdash \varphi(\sigma_1, \dots, \sigma_n)$: $p \in \Po$ forces the sentence $\varphi$
+ \ii $\check x$: the name giving an $x \in M$ when interpreted
+ \ii $\dot G$: the name giving $G$ when interpreted
+\end{itemize}
+
+
+% Consider adding:
+% lim (convergence)
+% group presentation
diff --git a/books/napkin/numfield.tex b/books/napkin/numfield.tex
new file mode 100644
index 0000000000000000000000000000000000000000..48d057fe2b5aa9b7a2550f462d4374c1b8f6f6a0
--- /dev/null
+++ b/books/napkin/numfield.tex
@@ -0,0 +1,314 @@
+\chapter{Algebraic integers}
+Here's a first taste of algebraic number theory.
+
+This is really close to the border between olympiads and higher math.
+You've always known that $a+\sqrt2 b$ had a ``norm'' $a^2-2b^2$,
+and that somehow this norm was multiplicative.
+You've also always known that roots come in conjugate pairs.
+You might have heard of minimal polynomials but not know much about them.
+
+This chapter and the next one will make all these vague notions precise.
+It's drawn largely from the first chapter of \cite{ref:oggier_NT}.
+
+\section{Motivation from high school algebra}
+This is adapted from my blog, \emph{Power Overwhelming}\footnote{URL: \url{https://usamo.wordpress.com/2014/10/19/why-do-roots-come-in-conjugate-pairs/}}.
+
+In high school precalculus, you'll often be asked to find the roots of some polynomial with integer coefficients.
+For instance,
+\[ x^3 - x^2 - x - 15 = (x-3)(x^2+2x+5) \]
+has roots $3$, $-1+2i$, $-1-2i$.
+Or as another example,
+\[ x^3 - 3x^2 - 2x + 2 = (x+1)(x^2-4x+2) \]
+has roots $-1$, $2 + \sqrt 2$, $2 - \sqrt 2$.
+You'll notice that the irrational roots, like $-1 \pm 2i$ and $2 \pm \sqrt 2$, are coming up in pairs. In fact, I think precalculus explicitly tells you that the complex roots come in conjugate pairs. More generally, it seems like all the roots of the form $a + b \sqrt c$ come in ``conjugate pairs''. And you can see why.
+
+But a polynomial like
+\[ x^3 - 8x + 4 \]
+has no rational roots.
+(The roots of this are approximately $-3.0514$, $0.51730$, $2.5341$.)
+Or even simpler,
+\[ x^3 - 2 \]
+has only one real root, $\sqrt[3]{2}$.
+These roots, even though they are irrational, have no ``conjugate'' pairs.
+Or do they?
+
+Let's try and figure out exactly what's happening.
+Let $\alpha$ be any complex number.
+We define a \vocab{minimal polynomial} of $\alpha$ over $\QQ$ to be a polynomial such that
+\begin{itemize}
+ \item $P(x)$ has rational coefficients, and leading coefficient $1$,
+ \item $P(\alpha) = 0$.
+ \item The degree of $P$ is as small as possible.
+ We call $\deg P$ the \vocab{degree} of $\alpha$.
+\end{itemize}
+\begin{example}[Examples of minimal polynomials]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\sqrt 2$ has minimal polynomial $x^2-2$.
+ \ii The imaginary unit $i = \sqrt{-1}$ has minimal polynomial $x^2+1$.
+ \ii A primitive $p$th root of unity, $\zeta_p = e^{\frac{2\pi i}{p}}$, has minimal polynomial $x^{p-1} + x^{p-2} + \dots + 1$, where $p$ is a prime.
+ \end{enumerate}
+\end{example}
+Note that $100x^2 - 200$ is also a polynomial of the same degree which has $\sqrt 2$ as a root; that's why we want to require the polynomial to be monic. That's also why we choose to work in the rational numbers; that way, we can divide by leading coefficients without worrying if we get non-integers.
+
+Why do we care? The point is as follows: suppose we have another polynomial $A(x)$ such that $A(\alpha) = 0$.
+Then we claim that $P(x)$ actually divides $A(x)$!
+That means that all the other roots of $P$ will also be roots of $A$.
+
+The proof is by contradiction: if not, by polynomial long division we can find a quotient and remainder $Q(x)$, $R(x)$ such that
+\[ A(x) = Q(x) P(x) + R(x) \]
+and $R(x) \not\equiv 0$.
+Notice that by plugging in $x = \alpha$, we find that $R(\alpha) = 0$.
+But $\deg R < \deg P$, and $P(x)$ was supposed to be the minimal polynomial.
+That's impossible!
+
+It follows from this and the fact that minimal polynomials are monic
+that the minimal polynomial is unique (when it exists):
+two minimal polynomials of $\alpha$ divide each other, hence are equal.
+So actually it is better to refer to \emph{the} minimal polynomial.
+\begin{exercise}
+ Can you find an element in $\CC$
+ that has no minimal polynomial?
+\end{exercise}
+
+Let's look at a more concrete example.
+Consider $A(x) = x^3-3x^2-2x+2$ from the beginning.
+The minimal polynomial of $2 + \sqrt 2$ is $P(x) = x^2 - 4x + 2$ (why?).
+Now we know that if $2 + \sqrt 2$ is a root, then $A(x)$ is divisible by $P(x)$.
+And that's how we know that if $2 + \sqrt 2$ is a root of $A$, then $2 - \sqrt 2$ must be a root too.
+
+As another example, the minimal polynomial of $\sqrt[3]{2}$ is $x^3-2$. So $\sqrt[3]{2}$ actually has \textbf{two} conjugates, namely, $\alpha = \sqrt[3]{2} \left( \cos 120^\circ + i \sin 120^\circ \right)$ and $\beta = \sqrt[3]{2} \left( \cos 240^\circ + i \sin 240^\circ \right)$. Thus any polynomial which vanishes at $\sqrt[3]{2}$ also has $\alpha$ and $\beta$ as roots!
+
+\begin{ques}
+ [Important but tautological:
+ irreducible $\iff$ minimal]
+ Let $\alpha$ be a root of a monic polynomial $P(x)$ with rational coefficients.
+ Show that $P(x)$ is the minimal polynomial of $\alpha$
+ if and only if it is irreducible.
+ % (This is tautological: the point is just to realize that ``minimal polynomials'' and ``irreducible polynomials'' are the same beasts.)
+\end{ques}
+
+\section{Algebraic numbers and algebraic integers}
+\prototype{$\sqrt2$ is an algebraic integer (root of $x^2-2$),
+$\half$ is an algebraic number but not an algebraic integer (root of $x-\half$).}
+
+Let's now work in much vaster generality.
+First, let's give names to the new numbers we've discussed above.
+\begin{definition}
+ An \vocab{algebraic number} is any $\alpha \in \CC$
+ which is the root of \emph{some} polynomial with coefficients in $\QQ$.
+ The set of algebraic numbers is denoted $\ol\QQ$.
+\end{definition}
+\begin{remark}
+ One can equally well say algebraic numbers are those that
+ are roots of some polynomial with coefficients in $\ZZ$ (rather than $\QQ$),
+ since any polynomial in $\QQ[x]$ can be scaled to one in $\ZZ[x]$.
+\end{remark}
+\begin{definition}
+ Consider an algebraic number $\alpha$ and
+ its minimal polynomial $P$
+ (which is monic and has rational coefficients).
+ If it turns out the coefficients of $P$ are integers,
+ then we say $\alpha$ is an \vocab{algebraic integer}.
+
+ The set of algebraic integers is denoted $\ol\ZZ$.
+\end{definition}
+\begin{remark}
+ One can show, using \emph{Gauss's Lemma}, that if $\alpha$ is the root
+ of \emph{any} monic polynomial with integer coefficients,
+ then $\alpha$ is an algebraic integer.
+ So in practice, if I want to prove that $\sqrt 2 + \sqrt 3$ is an algebraic integer,
+ then I only have to say ``the polynomial $(x^2-5)^2-24$ works''
+ without checking that it's minimal.
+\end{remark}
+Sometimes for clarity, we refer to elements of $\ZZ$
+as \vocab{rational integers}.
+\begin{example}[Examples of algebraic integers]
+ The numbers
+ \[ 4, \; i = \sqrt{-1}, \; \sqrt[3]{2}, \; \sqrt2+\sqrt3 \]
+ are all algebraic integers, since they are the roots of the monic polynomials
+ $x-4$, $x^2+1$, $x^3-2$ and $(x^2-5)^2-24$.
+
+ The number $\half$ has minimal polynomial $x - \half$,
+ so it's an algebraic number but not an algebraic integer.
+ (In fact, the rational root theorem also directly implies
+ that no monic integer polynomial has $\half$ as a root!)
+\end{example}
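+To see where the polynomial $(x^2-5)^2-24$ comes from,
+here is the back-of-the-envelope computation
+(any monic integer polynomial found this way suffices, by the earlier remark):
+\begin{align*}
+ x &= \sqrt2+\sqrt3 \\
+ \implies x^2 &= 5 + 2\sqrt6 \\
+ \implies \left( x^2-5 \right)^2 &= 24.
+\end{align*}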
+
+There are two properties I want to give
+for these off the bat,
+because they'll be used extensively in the tricky
+(but nice) problems at the end of the section.
+The first we prove now, since it's very easy:
+\begin{proposition}[Rational algebraic integers are rational integers]
+ An algebraic integer is rational
+ if and only if it is a rational integer.
+ In symbols, \[ \ol\ZZ \cap \QQ = \ZZ. \]
+\end{proposition}
+\begin{proof}
+ Let $\alpha$ be a rational number.
+ If $\alpha$ is an integer, it is the root of $x-\alpha$,
+ hence an algebraic integer too.
+
+ Conversely, if $P$ is a monic polynomial with integer
+ coefficients such that $P(\alpha) = 0$ then
+ (by the rational root theorem, say)
+ it follows that $\alpha$ must be an integer.
+\end{proof}
+The other is that:
+\begin{proposition}
+ [$\ol{\ZZ}$ is a ring and $\ol{\QQ}$ is a field]
+ The algebraic integers $\ol{\ZZ}$ form a ring.
+ The algebraic numbers $\ol{\QQ}$ form a field.
+\end{proposition}
+We could prove this now if we wanted to,
+but the results in the next chapter will more or less
+do it for us, and so we take this on faith temporarily.
+
+\section{Number fields}
+\prototype{$\QQ(\sqrt2)$ is a typical number field.}
+
+Given any algebraic number $\alpha$,
+we're able to consider fields of the form $\QQ(\alpha)$.
+Let us write down the more full version.
+
+\begin{definition}
+ A \vocab{number field} $K$ is a field containing $\QQ$ as a subfield
+ which is a \emph{finite-dimensional} $\QQ$-vector space.
+ The \vocab{degree} of $K$ is its dimension.
+\end{definition}
+\begin{example}[Prototypical example]
+ Consider the field
+ \[ K = \QQ(\sqrt2) = \left\{ a+b\sqrt2 \mid a,b \in \QQ \right\}. \]
+ This is a field extension of $\QQ$,
+ and has degree $2$ (the basis being $1$ and $\sqrt2$).
+\end{example}
+
+You might be confused that I wrote $\QQ(\sqrt2)$
+(which should permit denominators) instead of $\QQ[\sqrt2]$, say.
+But if you read through \Cref{ex:gaussian_rationals},
+you should see that the denominators don't really matter:
+$\frac{1}{3-\sqrt2} = \frac17(3+\sqrt2)$ anyways, for example.
+You can either check this now in general,
+or just ignore the distinction and pretend I wrote square brackets everywhere.
+\begin{exercise}
+ [Unimportant]
+ Show that if $\alpha$ is an algebraic number,
+ then $\QQ(\alpha) \cong \QQ[\alpha]$.
+\end{exercise}
+
+\begin{example}[Adjoining an algebraic number]
+ Let $\alpha$ be the root of some irreducible polynomial $P(x) \in \QQ[x]$.
+ The field $\QQ(\alpha)$ is a field extension as well, and the basis
+ is $1, \alpha, \alpha^2, \dots, \alpha^{m-1}$,
+ where $m$ is the degree of $\alpha$.
+ In particular, the degree of $\QQ(\alpha)$ is just the degree of $\alpha$.
+\end{example}
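+For instance, with $\alpha = \sqrt[3]{2}$ (which has degree $3$)
+the basis is $1$, $\alpha$, $\alpha^2$,
+and even reciprocals can be rewritten in this basis; using $\alpha^3 = 2$:
+\[ \frac{1}{1+\alpha}
+ = \frac{1-\alpha+\alpha^2}{(1+\alpha)\left( 1-\alpha+\alpha^2 \right)}
+ = \frac{1-\alpha+\alpha^2}{1+\alpha^3}
+ = \frac13\left( 1-\alpha+\alpha^2 \right). \]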
+\begin{example}[Non-examples of number fields]
+ $\RR$ and $\CC$ are not number fields since there is no \emph{finite}
+ $\QQ$-basis of them.
+\end{example}
+
+\section{Primitive element theorem, and monogenic extensions}
+\prototype{$\QQ(\sqrt3,\sqrt5) \cong \QQ(\sqrt3+\sqrt5)$. Can you see why?}
+
+I'm only putting this theorem here because I was upset that no one
+told me it was true (it's a very natural conjecture),
+and I hope to not do the same to the reader.
+However, I'm not going to use it in anything that follows.
+
+\begin{theorem}
+ [Artin's primitive element theorem]
+ Every number field $K$ is isomorphic to $\QQ(\alpha)$
+ for some algebraic number $\alpha$.
+ \label{thm:artin_primitive_elm}
+\end{theorem}
+The proof is left as \Cref{prob:artin_primitive_elm}, since to prove it I need to talk
+about field extensions first.
+
+The prototypical example \[ \QQ(\sqrt3,\sqrt5) \cong \QQ(\sqrt3+\sqrt5) \]
+makes it clear why this theorem should not be too surprising.
+%To see why this is true, note that $K$ contains the element
+%\[ \left( \sqrt3+\sqrt5 \right)^2-8 = 2\sqrt15 \]
+%and hence also the element $2(\sqrt15)(\sqrt3+\sqrt5) = 6\sqrt5+10\sqrt3$.
+%Thus from the fact that $6\sqrt5+10\sqrt3$ and $\sqrt3+\sqrt5$ are in $K$,
+%we can extract both $\sqrt 3$ and $\sqrt 5$.
+%Thus $\sqrt3, \sqrt5 \in K$, which is what we wanted.
+
+\section{\problemhead}
+
+\begin{problem}
+ Find a polynomial with integer coefficients
+ which has $\sqrt2+\sqrt[3]{3}$ as a root.
+\end{problem}
+
+\begin{problem}
+ [Brazil 2006]
+ \gim
+ Let $p$ be an irreducible polynomial in $\QQ[x]$
+ of degree larger than $1$.
+ Prove that if $p$ has two roots $r$ and $s$ whose product is $1$
+ then the degree of $p$ is even.
+ \begin{hint}
+ Note that $p(x)$ is a minimal polynomial for $r$,
+ but so is $q(x) = x^{\deg p} p(1/x)$.
+ So $q$ and $p$ must be multiples of each other.
+ \end{hint}
+\end{problem}
+
+\begin{sproblem}
+ \label{prob:rep_lemma}
+ Consider $n$ roots of unity $\eps_1$, \dots, $\eps_n$.
+ Assume the average $\frac1n(\eps_1 + \dots + \eps_n)$ is an algebraic integer.
+ Prove that either the average is zero or $\eps_1 = \dots = \eps_n$.
+ (Used in \Cref{lem:burnside_ant_lemma}.)
+ \begin{hint}
+ $\left\lvert \frac 1n(\eps_1 + \dots + \eps_n) \right\rvert \le 1$.
+ \end{hint}
+\end{sproblem}
+
+\begin{dproblem}
+ \gim
+ Which rational numbers $q$ satisfy $\cos(q\pi) \in \QQ$?
+ \begin{hint}
+ Only the obvious ones.
+ Assume $\cos(q\pi) \in \QQ$.
+ Write $2\cos(q\pi) = \zeta + \zeta^{N-1}$ where $\zeta = e^{iq\pi}$
+ is a root of unity (hence an algebraic integer,
+ as $\zeta^N-1 = 0$ for some $N$);
+ so $2\cos(q\pi)$ is both an algebraic integer and a rational number.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ [MOP 2010]
+ There are $n > 2$ lamps arranged in a circle;
+ initially one is on and the others are off.
+ We may select any regular polygon whose vertices are among the lamps
+ and toggle the states of all the lamps simultaneously.
+ Show it is impossible to turn all lamps off.
+ \begin{hint}
+ View as roots of unity. Note $\half$ isn't an algebraic integer.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[Kronecker's theorem]
+ \yod
+ Let $\alpha$ be an algebraic integer.
+ Suppose all its Galois conjugates have absolute value one.
+ Prove that $\alpha^N=1$ for some positive integer $N$.
+ \begin{hint}
+ Let $\alpha = \alpha_1$, $\alpha_2$, \dots, $\alpha_n$ be its conjugates.
+ Look at the polynomial $(x-\alpha_1^e) \dots (x-\alpha_n^e)$ across $e \in \NN$.
+ Pigeonhole principle on all possible polynomials.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ \yod
+ Is there an algebraic integer with absolute value one
+ which is not a root of unity?
+\end{problem}
+
+\begin{problem}
+ Is the ring of algebraic integers Noetherian?
+\end{problem}
diff --git a/books/napkin/ordinal.tex b/books/napkin/ordinal.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6e06e0ef174d33a948af34482433fc61e4ec1e57
--- /dev/null
+++ b/books/napkin/ordinal.tex
@@ -0,0 +1,518 @@
+\chapter{Ordinals}
+\label{ch:ordinal}
+\section{Counting for preschoolers}
+In preschool, we were told to count as follows.
+We defined a set of symbols $1$, $2$, $3$, $4$, \dots.
+Then the teacher would hold up three apples and say:
+\begin{quote}
+ ``One . . .\ two . . .\ three! There are three apples.''
+\end{quote}
+
+\begin{center}
+ \includegraphics[height=5cm]{media/three-apples.jpg}
+ \\ \scriptsize Image from \cite{img:apples}
+\end{center}
+
+The implicit definition is that the \emph{last} number said is the final answer.
+This raises some obvious problems if we try to count infinite sets,
+but even in the finite world,
+this method of counting fails for the simplest set of all:
+how many apples are in the following picture?
+
+\begin{center}
+ \includegraphics[height=7cm]{media/velociraptor.jpg}
+ \\ \scriptsize Image from \cite{img:velociraptor}
+\end{center}
+
+Answer: $0$. There is nothing to say, and our method of counting has failed
+for the simplest set of all: the empty set.
+
+\section{Counting for set theorists}
+\prototype{$\omega+1 = \{0,1,2,\dots,\omega\}$ might work.}
+Rather than using the \emph{last} number listed, I propose instead
+starting with a list of symbols $0$, $1$, $2$, \dots\ and making
+the final answer the \emph{first} number which was \emph{not} said.
+Thus to count three apples, we would say
+\begin{quote}
+ ``Zero . . .\ one . . .\ two! There are three apples.''
+\end{quote}
+We will call these numbers \emph{ordinal numbers} (rigorous definition later).
+In particular, we'll \emph{define} each ordinal to be the set of things we say:
+\begin{align*}
+ 0 &= \varnothing \\
+ 1 &= \{0\} \\
+ 2 &= \{0,1\} \\
+ 3 &= \{0,1,2\} \\
+ &\vdotswithin=
+\end{align*}
+In this way we can write out the natural numbers.
+You can have some fun with this, by saying things like
+\[
+ 4 \defeq
+ \left\{
+ \left\{ \right\},
+ \left\{ \left\{ \right\} \right\},
+ \left\{ \left\{ \right\}, \left\{ \left\{ \right\} \right\} \right\},
+ \left\{
+ \left\{ \right\},
+ \left\{ \left\{ \right\} \right\},
+ \left\{ \left\{ \right\}, \left\{ \left\{ \right\} \right\} \right\}
+ \right\}
+ \right\}.
+\]
+In this way, we soon write down all the natural numbers.
+The next ordinal, $\omega$,\footnote{As mentioned in the last chapter,
+ it's not immediate that $\omega$ is a set;
+ its existence is generally postulated by the $\Infinity$ axiom.
+} is defined as
+\begin{align*}
+ \omega &= \left\{ 0, 1, 2, \dots \right\} \\
+ \intertext{Then comes}
+ \omega+1 &= \left\{ 0, 1, 2, \dots, \omega \right\} \\
+ \omega+2 &= \left\{ 0, 1, 2, \dots, \omega, \omega+1 \right\} \\
+ \omega+3 &= \left\{ 0, 1, 2, \dots, \omega, \omega+1, \omega+2 \right\} \\
+ &\vdotswithin= \\
+ \intertext{And in this way we define $\omega+n$, and eventually reach}
+ \omega \cdot 2 = \omega+\omega &= \left\{ 0, 1, 2, \dots, \omega, \omega+1, \omega+2, \dots \right\} \\
+ \omega \cdot 2 + 1 &= \left\{ 0, 1, 2, \dots, \omega, \omega+1, \omega+2, \dots, \omega \cdot 2 \right\}.
+\end{align*}
+In this way we obtain
+\begin{align*}
+ 0,\; & 1,\; 2,\; 3,\; \dots,\; \omega \\
+ & \omega+1,\; \omega+2,\; \dots,\; \omega+\omega \\
+ & \omega \cdot 2 +1,\; \omega \cdot 2 +2,\; \dots,\; \omega \cdot 3,\; \\
+ & \vdots \\
+ & \omega^2 + 1,\; \omega^2+2,\; \dots \\
+ & \vdots \\
+ & \omega^3,\; \dots,\; \omega^4,\; \dots,\; \omega^\omega \\
+ & \vdots \\
+ & \omega^{\omega^{\omega^{\dots}}} \\
+\end{align*}
+
+The first several ordinals can be illustrated in a nice spiral.
+\begin{center}
+ \includegraphics[scale=0.60]{media/500px-Omega-exp-omega-labeled.png}
+\end{center}
+
+
+\begin{remark}
+ (Digression)
+ The number $\omega^{\omega^{\omega^{\dots}}}$ has a name, $\eps_0$;
+ it has the property that $\omega^{\eps_0} = \eps_0$.
+ The reason for using ``$\eps$'' (which is usually used to denote small quantities)
+ is that, despite how huge it may appear, it is actually a countable set.
+ More on that later.
+\end{remark}
+
+\section{Definition of an ordinal}
+Our informal description of ordinals gives us a chain
+\[ 0 \in 1 \in 2 \in \dots \in \omega \in \omega+1 \in \dots. \]
+To give the actual definition of an ordinal, I need to define two auxiliary terms first.
+\begin{definition}
+ A set $x$ is \vocab{transitive} if whenever $z \in y \in x$, we have $z \in x$ also.
+\end{definition}
+\begin{example}
+ [$7$ is transitive]
+ The set $7$ is transitive: for example, $2 \in 5 \in 7 \implies 2 \in 7$.
+\end{example}
+\begin{ques}
+ Show that this is equivalent to: whenever $y \in x$, $y \subseteq x$.
+\end{ques}
+Moreover, recall the definition of ``well-ordering'': a strict linear order
+with no infinite descending chains.
+\begin{example}
+ [$\in$ is a well-ordering on $\omega \cdot 3$]
+ In $\omega \cdot 3$, we have an ordering
+ \[ 0 \in 1 \in 2 \in \dots \in \omega \in \omega+1 \in \dots
+ \in \omega \cdot 2 \in \omega \cdot 2 + 1 \in \dots. \]
+ which has no infinite descending chains.
+ Indeed, a typical descending chain might look like
+ \[ \omega \cdot 2 + 6 \ni \omega \cdot 2 \ni
+ \omega + 2015 \ni \omega+3 \ni \omega \ni 1000 \ni 256 \ni 42 \ni 7 \ni 0. \]
+ Even though there are infinitely many elements, there is no way
+ to make an infinite descending chain.
+\end{example}
+\begin{exercise}
+ (Important)
+ Convince yourself there are no infinite
+ descending chains of ordinals at all,
+ without using the $\Foundation$ axiom.
+\end{exercise}
+
+\begin{definition}
+ An \vocab{ordinal} is a transitive set which is well-ordered by $\in$.
+ The class of all ordinals is denoted $\On$.
+\end{definition}
+
+\begin{ques}
+ Satisfy yourself that this definition works.
+\end{ques}
+
+We typically use Greek letters $\alpha$, $\beta$, etc.\ for ordinal numbers.
+\begin{definition}
+ We write
+ \begin{itemize}
+ \ii $\alpha < \beta$ to mean $\alpha \in \beta$,
+ and $\alpha > \beta$ to mean $\alpha \ni \beta$.
+ \ii $\alpha \le \beta$ to mean $\alpha \in \beta$ or $\alpha = \beta$,
+ and $\alpha \ge \beta$ to mean $\alpha \ni \beta$ or $\alpha = \beta$,
+ \end{itemize}
+\end{definition}
+
+\begin{theorem}[Ordinals are strictly ordered]
+ Given any two ordinal numbers $\alpha$ and $\beta$,
+ either $\alpha < \beta$, $\alpha = \beta$ or $\alpha > \beta$.
+\end{theorem}
+\begin{proof}
+ Surprisingly annoying, thus omitted.
+\end{proof}
+\begin{theorem}[Ordinals represent all order types]
+ Suppose $<$ is a well-ordering on a set $X$.
+ Then there exists a unique ordinal $\alpha$
+ such that there is a bijection $\alpha \to X$
+ which is order preserving.
+\end{theorem}
+Thus ordinals represent the possible \emph{equivalence classes} of order types.
+Any time you have a well-ordered set, it is isomorphic to a unique ordinal.
+
+We now formalize the ``$+1$'' operation we were doing:
+\begin{definition}
+ Given an ordinal $\alpha$, we let $\alpha+1 = \alpha \cup \{\alpha\}$.
+ An ordinal of the form $\alpha+1$ is called a \vocab{successor ordinal}.
+\end{definition}
+\begin{definition}
+ If $\lambda$ is an ordinal which is neither zero nor a successor ordinal,
+ then we say $\lambda$ is a \vocab{limit ordinal}.
+\end{definition}
+\begin{example}
+ [Successor and limit ordinals]
+ $7$, $\omega+3$, $\omega\cdot2+2015$ are successor ordinals,
+ but $\omega$ and $\omega \cdot 2$ are limit ordinals.
+\end{example}
+
+\section{Ordinals are ``tall''}
+First, we note that:
+\begin{theorem}
+ [There is no set of all ordinals]
+ $\On$ is a proper class.
+\end{theorem}
+\begin{proof}
+ Assume for contradiction not.
+ Then $\On$ is well-ordered by $\in$ and transitive, so $\On$ is an ordinal,
+ i.e.\ $\On \in \On$, which violates $\Foundation$.
+\end{proof}
+\begin{exercise}
+ [Unimportant] Give a proof without $\Foundation$ by considering $\On+1$.
+\end{exercise}
+
+From this we deduce:
+\begin{theorem}
+ [Sets of ordinals are bounded]
+ Let $A \subseteq \On$.
+ Then there is some ordinal $\alpha$ such that $A \subseteq \alpha$
+ (i.e.\ $A$ must be bounded).
+\end{theorem}
+\begin{proof}
+ Otherwise, look at $\bigcup A$.
+ It is a set.
+ But if $A$ is unbounded it must equal $\On$,
+ which is a contradiction.
+\end{proof}
+In light of this, every set of ordinals has a \vocab{supremum},
+which is the least upper bound. We denote this by $\sup A$.
+
+\begin{ques}
+ Show that
+ \begin{enumerate}[(a)]
+ \ii $\sup (\alpha+1) = \alpha$ for any ordinal $\alpha$.
+ \ii $\sup \lambda = \lambda$ for any limit ordinal $\lambda$.
+ \end{enumerate}
+\end{ques}
+
+The adjective ``tall'' will be explained pictorially in a few sections.
+
+\section{Transfinite induction and recursion}
+The fact that $\in$ has no infinite descending chains means that induction and recursion still work verbatim.
+\begin{theorem}[Transfinite induction]
+ Given a statement $P(-)$, suppose that
+ \begin{itemize}
+ \ii $P(0)$ is true, and
+ \ii If $P(\alpha)$ is true for all $\alpha < \beta$, then $P(\beta)$ is true.
+ \end{itemize}
+ Then $P(\alpha)$ is true for every ordinal $\alpha$.
+\end{theorem}
+\begin{theorem}
+ [Transfinite recursion]
+ To define a sequence $x_\alpha$ for every ordinal $\alpha$,
+ it suffices to
+ \begin{itemize}
+ \ii define $x_0$, then
+ \ii for any $\beta$, define $x_\beta$ in terms of the $x_\alpha$ for $\alpha < \beta$.
+ \end{itemize}
+\end{theorem}
+
+The difference between this and normal induction lies in the \emph{limit ordinals}.
+In real life, we might only do things like ``define $x_{n+1} = \dots$''.
+But this is not enough to define $x_\alpha$ for all $\alpha$,
+because we can't hit $\omega$ this way.
+Similarly, the simple $+1$ doesn't let us hit the ordinal $\omega \cdot 2$,
+even if we already have $\omega+n$ for all $n$.
+In other words, simply incrementing by $1$ cannot get us past limit stages,
+but using transfinite recursion to jump upwards lets us sidestep this issue.
+
+So a transfinite induction is often broken up into three cases.
+In the induction phrasing, it looks like
+\begin{itemize}
+ \ii (Zero Case) First, resolve $P(0)$.
+ \ii (Successor Case) Show that from $P(\alpha)$ we can get $P(\alpha+1)$.
+ \ii (Limit Case) For $\lambda$ a limit ordinal,
+ show that $P(\lambda)$ holds given $P(\alpha)$ for all $\alpha < \lambda$.
+\end{itemize}
+Similarly, transfinite recursion is often split into cases too.
+\begin{itemize}
+ \ii (Zero Case) First, define $x_0$.
+ \ii (Successor Case) Define $x_{\alpha+1}$ from $x_\alpha$.
+ \ii (Limit Case) Define $x_\lambda$ from $x_\alpha$ for all $\alpha < \lambda$,
+ where $\lambda$ is a limit ordinal.
+\end{itemize}
+In both situations, finite induction only does the first two cases,
+but if we're able to do the third case we can climb above the barrier $\omega$.
+
+\section{Ordinal arithmetic}
+\prototype{$1+\omega=\omega \neq \omega+1$.}
+To give an example of transfinite recursion, let's define addition of ordinals.
+Recall that we defined $\alpha+1 = \alpha \cup \{\alpha\}$.
+By transfinite recursion, let
+\begin{align*}
+ \alpha + 0 &= \alpha \\
+ \alpha + (\beta + 1) &= (\alpha + \beta) + 1 \\
+ \alpha + \lambda &= \bigcup_{\beta < \lambda} (\alpha + \beta).
+\end{align*}
+Here $\lambda$ denotes a limit ordinal (in particular $\lambda \neq 0$).
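+For example, unwinding the recursion at a successor and at a limit stage:
+\begin{align*}
+ \omega + 2 &= (\omega + 1) + 1
+ = \left( \omega \cup \{\omega\} \right) \cup \{\omega+1\} \\
+ \omega + \omega &= \bigcup_{n < \omega} (\omega + n)
+ = \left\{ 0, 1, 2, \dots, \omega, \omega+1, \omega+2, \dots \right\}.
+\end{align*}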
+
+We can also do this explicitly:
+The picture is to just line up $\alpha$ after $\beta$.
+That is, we can consider the set
+\[
+ X =
+ \left( \left\{ 0 \right\} \times \alpha \right)
+ \cup
+ \left( \left\{ 1 \right\} \times \beta \right)
+\]
+(i.e.\ we tag each element of $\alpha$ with a $0$, and
+each element of $\beta$ with a $1$).
+We then impose a well-ordering on $X$ by a lexicographic ordering $\llex$
+(sort by first component, then by second).
+This well-ordering is isomorphic to a unique ordinal.
+\begin{example}
+ [$2+3=5$]
+ Under the explicit construction for $\alpha = 2$ and $\beta = 3$, we get the set
+ \[
+ X = \left\{ (0,0) < (0,1) < (1,0) < (1,1) < (1,2) \right\}
+ \]
+ which is isomorphic to $5$.
+\end{example}
+
+\begin{example}[Ordinal arithmetic is not commutative]
+ Note that $1 + \omega = \omega$!
+ Indeed, under the transfinite definition, we have
+ \[ 1 + \omega = \bigcup_{n \in \omega} (1+n) = 2 \cup 3 \cup 4 \cup \dots = \omega. \]
+ With the explicit construction, we have
+ \[ X = \left\{ (0,0) < (1,0) < (1,1) < (1,2) < \dots \right\} \]
+ which is isomorphic to $\omega$.
+\end{example}
+\begin{exercise}
+ Show that $n+\omega = \omega$ for any $n \in \omega$.
+\end{exercise}
+
+\begin{remark}
+ Ordinal addition is not commutative.
+ However, from the explicit construction
+ we can see that it is at least associative.
+\end{remark}
+
+Similarly, we can define multiplication in two ways.
+By transfinite induction:
+\begin{align*}
+ \alpha \cdot 0 &= 0 \\
+ \alpha \cdot (\beta + 1) &= (\alpha \cdot \beta) + \alpha \\
+ \alpha \cdot \lambda &= \bigcup_{\beta < \lambda} \alpha \cdot \beta.
+\end{align*}
+We can also do an explicit construction: $\alpha \cdot \beta$
+is the order type of
+\[ \llex \text{ applied to } \beta \times \alpha. \]
+\begin{example}[Ordinal multiplication is not commutative]
+ We have $\omega \cdot 2 = \omega + \omega$,
+ but $2 \cdot \omega = \omega$.
+\end{example}
+\begin{exercise}
+ Prove this.
+\end{exercise}
+\begin{exercise}
+ Verify that ordinal multiplication
+ (like addition) is associative but not commutative.
+ (Look at $\gamma \times \beta \times \alpha$.)
+\end{exercise}
+
+Exponentiation can be defined similarly, though the explicit construction is less natural.
+\begin{align*}
+ \alpha^0 &= 1 \\
+ \alpha^{\beta+1} &= \alpha^{\beta} \cdot \alpha \\
+ \alpha^{\lambda} &= \bigcup_{\beta < \lambda} \alpha^\beta.
+\end{align*}
+\begin{exercise}
+ Verify that $2^\omega = \omega$.
+\end{exercise}
+
+
+\section{The hierarchy of sets}
+We now define the \vocab{von Neumann Hierarchy} by transfinite recursion.
+\begin{definition}
+ By transfinite recursion, we set
+ \begin{align*}
+ V_0 &= \varnothing \\
+ V_{\alpha + 1} &= \PP(V_\alpha) \\
+ V_\lambda &= \bigcup_{\alpha<\lambda} V_\alpha
+ \end{align*}
+\end{definition}
+By transfinite induction, we see $V_\alpha$ is transitive
+and that $V_\alpha \subseteq V_\beta$ for all $\alpha < \beta$.
+
+\begin{example}[$V_\alpha$ for $\alpha \le 3$]
+ The first few levels of the hierarchy are:
+ \begin{align*}
+ V_0 &= \varnothing \\
+ V_1 &= \left\{ 0 \right\} \\
+ V_2 &= \left\{ 0, 1 \right\} \\
+ V_3 &= \left\{ 0, 1, 2, \left\{ 1 \right\} \right\}.
+ \end{align*}
+ Notice that for each $n$, $V_n$ consists of only finite sets,
+ and each $n$ appears in $V_{n+1}$ for the first time.
+ Observe that
+ \[ V_\omega = \bigcup_{n \in \omega} V_n \]
+ consists only of finite sets; thus $\omega$ appears for the first time
+ in $V_{\omega+1}$.
+\end{example}
+\begin{ques}
+ How many sets are in $V_5$?
+\end{ques}
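Since every element of $V_5$ is hereditarily finite, this question can even be settled by brute force. Here is a short Python sketch (an illustration of the definition, not part of the text; the helper name \texttt{powerset} is mine) that builds $V_0$ through $V_5$ as nested \texttt{frozenset}s:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, each as a frozenset."""
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

V = [set()]                    # V_0 = empty set
for _ in range(5):
    V.append(powerset(V[-1]))  # V_{n+1} = P(V_n)

print([len(level) for level in V])  # [0, 1, 2, 4, 16, 65536]
```

In particular $|V_5| = 2^{16} = 65536$, consistent with $|V_{n+1}| = 2^{|V_n|}$.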
+
+\begin{definition}
+ The \vocab{rank} of a set $y$, denoted $\rank(y)$,
+ is the smallest ordinal $\alpha$ such that $y \in V_{\alpha+1}$.
+\end{definition}
+\begin{example}
+ $\rank(2) = 2$, and actually $\rank(\alpha)=\alpha$
+ for any ordinal $\alpha$ (problem later).
+ This is the reason for the extra ``$+1$''.
+\end{example}
+\begin{ques}
+ Show that $\rank(y)$ is the smallest ordinal $\alpha$
+ such that $y \subseteq V_\alpha$.
+\end{ques}
+
+It's not yet clear that the rank of a set actually exists, so we prove:
+\begin{theorem}[The von Neumann hierarchy is complete]
+ The class $V$ is equal to $\bigcup_{\alpha \in \On} V_\alpha$.
+ In other words, every set appears in some $V_\alpha$.
+\end{theorem}
+\begin{proof}
+ Assume for contradiction this is false.
+ The key is that because $\in$ satisfies $\Foundation$,
+ we can take a $\in$-minimal counterexample $x$.
+ Thus $\rank(y)$ is defined for every $y \in x$,
+ and we can consider (by $\Replacement$) the set
+ \[ \left\{ \rank(y) \mid y \in x \right\}. \]
+ Since it is a set of ordinals, it is bounded:
+ there is some ordinal $\alpha$ exceeding every element of it.
+ Then $y \in V_\alpha$ for all $y \in x$, i.e.\ $x \subseteq V_\alpha$,
+ so $x \in V_{\alpha+1}$, contradicting the choice of $x$.
+\end{proof}
+
+This leads us to a picture of the universe $V$:
+\begin{center}
+ \begin{asy}
+ size(11cm);
+ pair A = (12,30);
+ pair B = -conj(A);
+ pair M = midpoint(A--B);
+ pair O = origin;
+ MP("V", A, dir(10));
+ draw(A--O--B);
+
+ fill(A--O--B--cycle, opacity(0.3)+palecyan);
+
+ MP("V_0 = \varnothing", origin, dir(-20));
+ MP("V_1 = \{\varnothing\}", 0.05*A, dir(0));
+ MP("V_2 = \{\varnothing, \{\varnothing\} \}", 0.10*A, dir(0));
+
+ draw(MP("V_n", 0.3*A, dir(0))--0.3*B);
+ draw(MP("V_{n+1} = \mathcal P(V_n)", 0.35*A, dir(0))--0.35*B);
+ Drawing("n", 0.35*M, dir(45));
+
+ draw(MP("V_\omega = \bigcup V_n", 0.5*A, dir(0))--0.5*B);
+ draw(MP("V_{\omega+1} = \mathcal P(V_{\omega})", 0.55*A, dir(0))--0.55*B);
+ Drawing("\omega", 0.55*M, dir(45));
+ draw(MP("V_{\omega+2} = \mathcal P(V_{\omega+1})", 0.6*A, dir(0))--0.6*B);
+ Drawing("\omega+1", 0.6*M, dir(45));
+
+ draw(MP("V_{\omega+\omega}", 0.8*A, dir(0))--0.8*B);
+
+ draw(origin--M);
+ MP("\mathrm{On}", M, dir(90));
+
+ \end{asy}
+\end{center}
+
+We can imagine the universe $V$ as a triangle,
+built in stages or layers,
+$V_0 \subsetneq V_1 \subsetneq V_2 \subsetneq \dots$.
+The universe doesn't have a top, but each $V_\alpha$ does;
+on the other hand, the universe has a very clear bottom.
+Each stage is substantially wider than the previous one.
+
+In the center of this universe are the ordinals:
+at each successor stage $V_{\alpha+1}$, exactly one new ordinal appears, namely $\alpha$.
+Thus we can picture the class of ordinals as a thin line
+that stretches the entire height of the universe.
+A set has rank $\alpha$ if it appears at the same stage that $\alpha$ does.
+
+
+All of number theory, the study of the integers, lives inside $V_\omega$.
+Real analysis, the study of real numbers, lives inside $V_{\omega+1}$, since a real number
+can be encoded as a subset of $\NN$ (by binary expansion).
+Functional analysis lives one step past that, $V_{\omega+2}$.
+For all intents and purposes, most mathematics does not go beyond $V_{\omega+\omega}$.
+This pales in comparison to the true magnitude of the whole universe.
+
+\section{\problemhead}
+\begin{problem}
+ Prove that $\rank(\alpha) = \alpha$ for any $\alpha$
+ by transfinite induction.
+\end{problem}
+
+\begin{problem}
+ [Online Math Open]
+ Count the number of transitive sets in $V_5$.
+\end{problem}
+
+\begin{problem}
+ [Goodstein]
+ Let $a_2$ be any positive integer.
+ We define the infinite sequence $a_2$, $a_3$, \dots recursively as follows.
+ If $a_{n} = 0$, then $a_{n+1} = 0$.
+ Otherwise, we write $a_n$ in base $n$,
+ then write all exponents in base $n$, and so on until all
+ numbers in the expression are at most $n$.
+ Then we replace every instance of $n$ by $n+1$
+ (including in the exponents!), subtract $1$,
+ and let the result be $a_{n+1}$.
+ For example, if $a_2 = 11$ we have
+ \begin{align*}
+ a_2 &= 2^{3} + 2 + 1 = 2^{2+1} + 2 + 1 \\
+ a_3 &= 3^{3+1}+3+1-1 = 3^{3+1} + 3\\
+ a_4 &= 4^{4+1} + 4 - 1 = 4^{4+1} + 3 \\
+ a_5 &= 5^{5+1} + 3 - 1 = 5^{5+1} + 2
+ \end{align*}
+ and so on. Prove that $a_N = 0$ for some integer $N > 2$.
+\end{problem}
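Though the problem asks for a proof, the first few terms are easy to compute directly. Below is a hedged Python sketch (the helpers \texttt{bump} and \texttt{goodstein} are my own names, not from the text) reproducing the $a_2 = 11$ computation above:

```python
def bump(n, b):
    """Write n in hereditary base b, then replace every b by b+1."""
    if n == 0:
        return 0
    total, k = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            # recurse on the exponent k, which must also be rewritten
            total += digit * (b + 1) ** bump(k, b)
        k += 1
    return total

def goodstein(a2, steps):
    """The terms a_2, a_3, ..., a_{2+steps} of the Goodstein sequence."""
    seq, a, base = [a2], a2, 2
    for _ in range(steps):
        a = bump(a, base) - 1 if a > 0 else 0
        base += 1
        seq.append(a)
    return seq

print(goodstein(11, 3))  # [11, 84, 1027, 15627]
```

The values match $3^{3+1}+3 = 84$, $4^{4+1}+3 = 1027$, and $5^{5+1}+2 = 15627$ from the worked example.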
diff --git a/books/napkin/p-adic.tex b/books/napkin/p-adic.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2533887468cdd77f22d6b4ab9ac5fd0cea6ca91a
--- /dev/null
+++ b/books/napkin/p-adic.tex
@@ -0,0 +1,709 @@
+\chapter[Bonus: A hint of p-adic numbers]{Bonus: A hint of $p$-adic numbers}
+This is a bonus chapter meant for those
+who have also read about \textbf{rings and fields}:
+it's a nice tidbit at the intersection of algebra and analysis.
+
+
+In this chapter, we are going to redo most of the previous chapter
+with the usual absolute value $\left\lvert \bullet \right\rvert$
+replaced by the $p$-adic one.
+This will give us the $p$-adic integers $\ZZ_p$,
+and the $p$-adic numbers $\QQ_p$.
+The one-sentence description is that these are
+``integers/rationals carrying full mod $p^e$ information''
+(and only that information).
+
+In everything that follows $p$ is always assumed to denote a prime.
+The first four sections will cover the founding definitions
+culminating in a short solution to a USA TST problem.
+We will then state (mostly without proof) some more
+surprising results about continuous functions $f \colon \ZZ_p \to \QQ_p$;
+finally we close with the famous proof of the Skolem-Mahler-Lech theorem
+using $p$-adic analysis.
+
+\section{Motivation}
+Before really telling you what $\ZZ_p$ and $\QQ_p$ are,
+let me tell you what you might expect them to do.
+
+In elementary/olympiad number theory,
+we're already familiar with the following two ideas:
+\begin{itemize}
+ \ii Taking modulo a prime $p$ or prime power $p^e$, and
+ \ii Looking at the exponent $\nu_p$.
+\end{itemize}
+
+Let me expand on the first point.
+Suppose we have some Diophantine equation.
+In olympiad contexts, one can take an equation modulo $p$
+to gain something else to work with.
+Unfortunately, taking modulo $p$ loses some information:
+the reduction $\ZZ \surjto \ZZ/p$ is far from injective.
+
+If we want finer control, we could consider instead
+taking modulo $p^2$, rather than taking modulo $p$.
+This can also give some new information (cubes modulo $9$, anyone?),
+but it has the disadvantage that $\ZZ/p^2$ isn't a field,
+so we lose a lot of the nice algebraic properties
+that we had when taking modulo $p$.
+
+One of the goals of $p$-adic numbers is to get around
+these two issues I described.
+The $p$-adic numbers we introduce are going to have the following properties:
+\begin{enumerate}
+ \ii \textbf{You can ``take modulo $p^e$ for all $e$ at once''.}
+ In olympiad contexts, we are used to picking a particular modulus and then
+ seeing what happens if we take that modulus.
+ But with $p$-adic numbers, we won't have to make that choice.
+ An equation of $p$-adic numbers carries enough information
+ to take modulo $p^e$.
+ \ii \textbf{The numbers $\QQ_p$ form a field},
+ the nicest possible algebraic structure:
+ $1/p$ makes sense.
+ Contrast this with $\ZZ/p^2$, which is not even an integral domain.
+ \ii \textbf{It doesn't lose as much information}
+ as taking modulo $p$ does:
+ rather than the surjective $\ZZ \surjto \ZZ/p$ we have an
+ \emph{injective} map $\ZZ \injto \ZZ_p$.
+ \ii \textbf{Despite this, you ``ignore'' some ``irrelevant'' data}.
+ Just like taking modulo $p$, you want to zoom-in on
+ a particular type of algebraic information,
+ and this means necessarily
+ losing sight of other things.\footnote{To draw an analogy: the equation
+ $ a^2 + b^2 + c^2 + d^2 = -1$
+ has no integer solutions, because, well, squares are nonnegative.
+ But you will find that this equation has solutions modulo any prime $p$,
+ because once you take modulo $p$ you stop being able to
+ talk about numbers being nonnegative.
+ The same thing will happen if we work in $p$-adics:
+ the above equation has a solution in $\ZZ_p$ for every prime $p$.}
+\end{enumerate}
+So, you can think of $p$-adic numbers as the right tool to use
+if you only really care about modulo $p^e$ information,
+but normal $\ZZ/p^e$ isn't quite powerful enough.
+
+To be more concrete, I'll give a poster example now:
+\begin{example}
+ [USA TST 2002/2]
+ For a prime $p$, show the value of
+ \[ f_p(x) = \sum_{k=1}^{p-1} \frac{1}{(px+k)^2} \pmod{p^3} \]
+ does not depend on $x$.
+ \label{ex:token}
+\end{example}
+Here is a problem where we \emph{clearly} only care
+about $p^e$-type information.
+Yet it's a nontrivial challenge to do the
+necessary manipulations mod $p^3$ (try it!).
+The basic issue is that there is no good way to deal with
+the denominators modulo $p^3$
+(in part because $\ZZ/p^3$ is not even an integral domain).
+
+However, with $p$-adic analysis we're going to be able
+to overcome these limitations and give a ``straightforward'' proof
+by using the identity
+\[
+ \left( 1 + \frac{px}{k} \right)^{-2}
+ = \sum_{n \ge 0} \binom{-2}{n} \left( \frac{px}{k} \right)^n.
+\]
+Such an identity makes no sense over $\QQ$ or $\RR$
+for convergence reasons,
+but it will work fine over $\QQ_p$, which is all we need.
+
+
+\section{Algebraic perspective}
+\prototype{$-1/2 = 1 + 3 + 3^2 + 3^3 + \dots \in \ZZ_3$.}
+We now construct $\ZZ_p$ and $\QQ_p$.
+I promised earlier that a $p$-adic integer will let you look at
+``all residues modulo $p^e$'' at once.
+This definition will formalize this.
+
+\subsection{Definition of $\ZZ_p$}
+\begin{definition}
+ [Introducing $\ZZ_p$]
+ A \vocab{$p$-adic integer} is a sequence
+ \[ x = (x_1 \bmod p, \; x_2 \bmod{p^2}, \; x_3 \bmod{p^3}, \; \dots) \]
+ of residues $x_e$ modulo $p^e$ for each positive integer $e$,
+ satisfying the compatibility relations
+ $x_i \equiv x_j \pmod{p^i}$ for $i < j$.
+
+ The set $\ZZ_p$ of $p$-adic integers forms a ring
+ under component-wise addition and multiplication.
+\end{definition}
+
+\begin{example}
+ [Some $3$-adic integers]
+ Let $p=3$.
+ Every usual integer $n$ generates
+ a (compatible) sequence of residues modulo $p^e$ for each $e$,
+ so we can view each ordinary integer as a $p$-adic one:
+ \[ 50 = \left( 2 \bmod 3, \; 5 \bmod 9, \;
+ 23 \bmod{27}, \; 50 \bmod{81}, \; 50 \bmod{243}, \; \dots \right). \]
+ On the other hand, there are sequences of residues
+ which do not correspond to any usual integer despite
+ satisfying compatibility relations, such as
+ \[ \left( 1 \bmod 3, \; 4 \bmod 9, \;
+ 13 \bmod{27}, \; 40 \bmod{81}, \; \dots \right) \]
+ which can be thought of as $x = 1 + p + p^2 + \dots$.
+\end{example}
+In this way we get an injective map
+\[ \ZZ \injto \ZZ_p \qquad
+ n \mapsto \left( n \bmod p, n \bmod{p^2}, n \bmod{p^3}, \dots \right) \]
+which is not surjective.
+So there are more $p$-adic integers than usual integers.
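This injection is easy to experiment with: truncating a $p$-adic integer to its first $e$ components is just reduction modulo $p, p^2, \dots, p^e$. A small Python illustration (the helper \texttt{residues} is my own name), recovering the representation of $50$ from the example above:

```python
p = 3

def residues(n, e):
    """First e components of n viewed in Z_p: n mod p, mod p^2, ..., mod p^e."""
    return [n % p**k for k in range(1, e + 1)]

r = residues(50, 5)
print(r)  # [2, 5, 23, 50, 50]

# compatibility relations: x_i == x_j (mod p^i) whenever i < j
assert all(r[j] % p**(i + 1) == r[i] for i in range(5) for j in range(i, 5))
```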
+
+(Remark for experts:
+those of you familiar with category theory might recognize
+that this definition can be written concisely as
+\[ \ZZ_p \defeq \varprojlim \ZZ/p^e \ZZ \]
+where the inverse limit is taken across $e \ge 1$.)
+
+\begin{exercise}
+ Check that $\ZZ_p$ is an integral domain.
+\end{exercise}
+
+\subsection{Base $p$ expansion}
+Here is another way to think about $p$-adic integers using ``base $p$''.
+As in the example earlier, every usual integer can be written in base $p$,
+for example
+\[ 50 = \ol{1212}_3 = 2 \cdot 3^0 + 1 \cdot 3^1 + 2 \cdot 3^2 + 1 \cdot 3^3. \]
+More generally, given any $x = (x_1, \dots) \in \ZZ_p$,
+we can write down a ``base $p$'' expansion in the sense
+that there are exactly $p$ choices of $x_k$ given $x_{k-1}$.
+Continuing the example earlier, we would write
+\begin{align*}
+ \left( 1 \bmod 3, \; 4 \bmod 9, \;
+ 13 \bmod{27}, \; 40 \bmod{81}, \; \dots \right)
+ &= 1 + 3 + 3^2 + \dots \\
+ &= \ol{\dots1111}_3
+\end{align*}
+and in general we can write
+\[ x = \sum_{k \ge 0} a_k p^k = \ol{\dots a_2 a_1 a_0}_p \]
+where $a_k \in \{0, \dots, p-1\}$,
+such that the equation holds modulo $p^e$ for each $e$.
+Note the expansion is infinite to the \emph{left},
+which is different from what you're used to.
+
+(Amusingly, negative integers also have infinite base $p$ expansions:
+$-4 = \ol{\dots222212}_3$, corresponding to
+$(2 \bmod 3, \; 5 \bmod 9, \; 23 \bmod{27}, \; 77 \bmod{81}, \; \dots)$.)
+
+Thus you may often hear the advertisement that a $p$-adic integer
+is a ``possibly infinite base $p$ expansion''.
+This is correct, but later on we'll be thinking of $\ZZ_p$ in
+a more and more ``analytic'' way,
+and so I prefer to think of this as
+\begin{moral}
+ $p$-adic integers are Taylor series with base $p$.
+\end{moral}
+Indeed, much of your intuition from generating functions $K[[X]]$
+(where $K$ is a field) will carry over to $\ZZ_p$.
+
+\subsection{Constructing $\QQ_p$}
+Here is one way in which your intuition from generating functions carries over:
+\begin{proposition}
+ [Non-multiples of $p$ are all invertible]
+ The number $x \in \ZZ_p$ is invertible if and only if $x_1 \ne 0$.
+ In symbols,
+ \[ x \in \ZZ_p^\times \iff x \not\equiv 0 \pmod p. \]
+\end{proposition}
+Contrast this with the corresponding statement for $K[[X]]$:
+a generating function $F \in K[[X]]$ is invertible iff $F(0) \neq 0$.
+
+\begin{proof}
+ If $x \equiv 0 \pmod p$ then $x_1 = 0$,
+ so every multiple of $x$ has first component $0$ too; hence $x$ is not invertible.
+ Otherwise, $x_e \not\equiv 0 \pmod p$ for all $e$,
+ so we can take an inverse $y_e$ modulo $p^e$,
+ with $x_e y_e \equiv 1 \pmod{p^e}$.
+ As the $y_e$ are themselves compatible,
+ the element $(y_1, y_2, \dots)$ is an inverse.
+\end{proof}
+\begin{example}
+ [We have $-\half = \ol{\dots1111}_3 \in \ZZ_3$]
+ We claim the earlier example is actually
+ \begin{align*}
+ -\half = \left( 1 \bmod 3, \; 4 \bmod 9, \;
+ 13 \bmod{27}, \; 40 \bmod{81}, \; \dots \right)
+ &= 1 + 3 + 3^2 + \dots \\
+ &= \ol{\dots1111}_3.
+ \end{align*}
+ Indeed, multiplying it by $-2$ gives
+ \[ \left( -2 \bmod 3, \; -8 \bmod 9, \;
+ -26 \bmod{27}, \; -80 \bmod{81}, \; \dots \right)
+ = 1. \]
+ (Compare this with the ``geometric series''
+ $1 + 3 + 3^2 + \dots = \frac{1}{1-3}$.
+ We'll actually be able to formalize this later, but not yet.)
+\end{example}
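This is easy to double-check numerically: Python's \texttt{pow(-2, -1, m)} computes the residue of $-1/2$ modulo $m$, and it agrees with the partial sums $1 + 3 + \dots + 3^{e-1}$ at every level. A quick sanity check (illustration only):

```python
p = 3
for e in range(1, 8):
    m = p**e
    half_neg = pow(-2, -1, m)                  # residue of -1/2 mod 3^e
    partial = sum(p**k for k in range(e)) % m  # 1 + 3 + ... + 3^(e-1)
    assert half_neg == partial

print([pow(-2, -1, 3**e) for e in range(1, 5)])  # [1, 4, 13, 40]
```

The printed residues are exactly the components $(1 \bmod 3, \; 4 \bmod 9, \; 13 \bmod{27}, \; 40 \bmod{81})$ from the example.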
+\begin{remark}
+ [$\half$ is an integer for $p > 2$]
+ The earlier proposition implies that $\half \in \ZZ_3$
+ (among other things);
+ your intuition about what is an ``integer'' is different here!
+ In olympiad terms, we already knew $\half \pmod 3$ made sense,
+ which is why calling $\half$ an ``integer''
+ in the $3$-adics is correct,
+ even though it doesn't correspond to any element of $\ZZ$.
+\end{remark}
+\begin{exercise}
+ [Unimportant but tricky]
+ Show that rational numbers correspond exactly to
+ eventually periodic base $p$ expansions.
+\end{exercise}
+
+With this observation, here is now the definition of $\QQ_p$.
+\begin{definition}
+ [Introducing $\QQ_p$]
+ Since $\ZZ_p$ is an integral domain,
+ we let $\QQ_p$ denote its field of fractions.
+ These are the \vocab{$p$-adic numbers}.
+\end{definition}
+Continuing our generating functions analogy:
+\[ \ZZ_p \text{ is to } \QQ_p
+ \quad\text{as}\quad
+ K[[X]] \text{ is to } K((X)). \]
+This means
+\begin{moral}
+ $\QQ_p$ can be thought of as Laurent series with base $p$.
+\end{moral}
+and in particular, from the earlier proposition, we deduce:
+\begin{proposition}
+ [$\QQ_p$ looks like formal Laurent series]
+ Every nonzero element of $\QQ_p$ is uniquely of the form
+ \[ p^k u \qquad \text{ where } k \in \ZZ, \; u \in \ZZ_p^\times. \]
+\end{proposition}
+Thus, continuing our base $p$ analogy,
+elements of $\QQ_p$ are in bijection with ``Laurent series''
+\[ \sum_{k \ge -n} a_k p^k
+ = \ol{\dots a_2 a_1 a_0 . a_{-1} a_{-2} \dots a_{-n}}_p \]
+for $a_k \in \left\{ 0, \dots, p-1 \right\}$.
+So the base $p$ representations of elements of $\QQ_p$
+can be thought of as the same as usual,
+but extending infinitely far to the left
+(rather than to the right).
+
+\begin{remark}
+ [Warning]
+ The field $\QQ_p$ has characteristic \emph{zero}, not $p$.
+\end{remark}
+\begin{remark}
+ [Warning on fraction field]
+ This result implies that you shouldn't think about elements of $\QQ_p$
+ as $x/y$ (for $x,y \in \ZZ_p$) in practice,
+ even though this is the official definition
+ (and what you'd expect from the name $\QQ_p$).
+ The only denominators you need are powers of $p$.
+
+ To keep pushing the formal Laurent series analogy,
+ $K((X))$ is usually not thought of as a quotient of generating functions
+ but rather as ``formal series with some negative exponents''.
+ You should apply the same intuition to $\QQ_p$.
+\end{remark}
+
+\begin{remark}
+At this point I want to make a remark about the fact $1/p \in \QQ_p$,
+connecting it to the wish-list of properties I had before.
+In elementary number theory you can take equations modulo $p$,
+but if you do, the quantity $n/p \bmod{p}$ doesn't make sense
+unless you know $n \bmod{p^2}$.
+You can't fix this by just taking modulo $p^2$
+since then you need $n \bmod{p^3}$ to get $n/p \bmod{p^2}$, ad infinitum.
+You can work around issues like this,
+but the nice feature of $\ZZ_p$ and $\QQ_p$
+is that you have modulo $p^e$ information for ``all $e$ at once'':
+the information of $x \in \QQ_p$ packages all the modulo $p^e$
+information simultaneously.
+So you can divide by $p$ with no repercussions.
+\end{remark}
+
+
+\section{Analytic perspective}
+\subsection{Definition}
+Up until now we've been thinking about things mostly algebraically,
+but moving forward it will be helpful to start using the language of analysis.
+Usually, two real numbers are considered ``close'' if
+they are close on the number line,
+but for $p$-adic purposes we only care about modulo $p^e$ information.
+So, we'll instead think of two elements of $\ZZ_p$ or $\QQ_p$
+as ``close'' if their difference is divisible by a large power of $p$.
+
+For this we'll borrow the familiar $\nu_p$ from elementary number theory.
+\begin{definition}
+ [$p$-adic valuation and absolute value]
+ We define the \vocab{$p$-adic valuation} $\nu_p : \QQ_p^\times \to \ZZ$
+ in the following two equivalent ways:
+ \begin{itemize}
+ \ii For $x = (x_1, x_2, \dots) \in \ZZ_p$ we let
+ $\nu_p(x)$ be the largest $e$ such that
+ $x_e \equiv 0 \pmod{p^e}$ (or $e=0$ if $x \in \ZZ_p^\times$).
+ Then extend to all of $\QQ_p^\times$
+ by $\nu_p(xy) = \nu_p(x) + \nu_p(y)$.
+ \ii Each $x \in \QQ_p^\times$ can be written
+ uniquely as $p^k u$ for $u \in \ZZ_p^\times$, $k \in \ZZ$.
+ We let $\nu_p(x) = k$.
+ \end{itemize}
+ By convention we set $\nu_p(0) = +\infty$.
+ Finally, define the \vocab{$p$-adic absolute value}
+ $\left\lvert \bullet \right\rvert_p$ by
+ \[ \left\lvert x \right\rvert_p = p^{-\nu_p(x)}. \]
+ In particular $\left\lvert 0 \right\rvert_p = 0$.
+\end{definition}
+This fulfills the promise that $x$ and $y$ are close
+if they look the same modulo $p^e$ for large $e$;
+in that case $\nu_p(x-y)$ is large
+and accordingly $\left\lvert x-y \right\rvert_p$ is small.
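On nonzero rationals, $\nu_p$ is computed exactly as in olympiad practice: count the factors of $p$ in the numerator minus those in the denominator. A minimal Python sketch (the names \texttt{nu} and \texttt{abs\_p} are mine, not standard library functions):

```python
from fractions import Fraction

def nu(x, p):
    """p-adic valuation of a nonzero rational x."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def abs_p(x, p):
    """p-adic absolute value |x|_p = p^(-nu_p(x)), with |0|_p = 0."""
    return Fraction(p) ** (-nu(x, p)) if x else Fraction(0)

print(nu(Fraction(18, 5), 3), abs_p(Fraction(5, 27), 3))  # 2 27
```

So $\nu_3(18/5) = 2$ while $\left\lvert 5/27 \right\rvert_3 = 27$: numbers highly divisible by $p$ are small, and denominators of $p$ are large.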
+
+\subsection{Ultrametric space}
+In this way, $\QQ_p$ and $\ZZ_p$ become metric spaces
+with metric given by $\left\lvert x-y \right\rvert_p$.
+
+\begin{exercise}
+ \label{exer:alternating}
+ Suppose $f \colon \ZZ_p \to \QQ_p$ is continuous
+ and $f(n) = (-1)^n$ for every $n \in \ZZ_{\ge 0}$.
+ Prove that $p = 2$.
+\end{exercise}
+
+In fact, these spaces satisfy a stronger
+form of the triangle inequality than you are used to from $\RR$.
+\begin{proposition}
+ [$\left\lvert \bullet \right\rvert_p$ is an ultrametric]
+ For any $x,y \in \ZZ_p$, we have the \vocab{strong triangle inequality}
+ \[ \left\lvert x+y \right\rvert_p
+ \le \max \left\{ \left\lvert x \right\rvert_p,
+ \left\lvert y \right\rvert_p \right\}.
+ \]
+ Equality holds if (but not only if)
+ $\left\lvert x \right\rvert_p \neq \left\lvert y \right\rvert_p$.
+\end{proposition}
+
+However, $\QQ_p$ is more than just a metric space:
+it is a field, with its own addition and multiplication.
+This means we can do analysis just like in $\RR$ or $\CC$:
+basically, any notion such as ``continuous function'',
+``convergent series'', et cetera has a $p$-adic analog.
+In particular, we can define what it means for an infinite sum to converge:
+\begin{definition}
+ [Convergence notions]
+ Here are some examples of $p$-adic analogs of ``real-world'' notions.
+ \begin{itemize}
+ \ii A sequence $s_1$, $s_2$, \dots converges to a limit $L$
+ if $\lim_{n \to \infty} \left\lvert s_n - L \right\rvert_p = 0$.
+ \ii The infinite series $\sum_k x_k$ converges
+ if the sequence of partial sums $s_1 = x_1$,
+ $s_2 = x_1 + x_2$, \dots, converges to some limit.
+ \ii \dots et cetera \dots
+ \end{itemize}
+\end{definition}
+With this definition in place,
+the ``base $p$'' discussion we had earlier is now true
+in the analytic sense: if $x = \ol{\dots a_2 a_1 a_0}_p \in \ZZ_p$ then
+\[ \sum_{k=0}^\infty a_k p^k \quad\text{converges to } x. \]
+Indeed, the difference between $x$ and the $n$th partial sum is divisible by $p^n$,
+hence the partial sums approach $x$ as $n \to \infty$.
+
+While the definitions are all the same,
+there are some changes in properties that should be true.
+For example, in $\QQ_p$ convergence of partial sums is simpler:
+\begin{proposition}
+ [$|x_k|_p \to 0$ iff convergence of series]
+ A series $\sum_{k=1}^\infty x_k$ in $\QQ_p$
+ converges to some limit if and only if
+ $\lim_{k \to \infty} |x_k|_p = 0$.
+ \label{noharmonic}
+\end{proposition}
+Contrast this with $\sum \frac1n = \infty$ in $\RR$.
+You can think of this as a consequence of strong triangle inequality.
+\begin{proof}
+ By multiplying by a large enough power of $p$,
+ we may assume $x_k \in \ZZ_p$.
+ (This isn't actually necessary, but makes the notation nicer.)
+
+ Observe that the partial sums modulo $p$ must eventually stabilize:
+ for large enough $n$ we have
+ $\left\lvert x_n \right\rvert_p < 1 \iff \nu_p(x_n) \ge 1$,
+ i.e.\ the terms are eventually divisible by $p$.
+ So let $a_1$ be the eventual value of $\sum_{k=0}^N x_k \pmod p$ for large $N$.
+ In the same way let $a_2$ be the eventual residue modulo $p^2$, and so on.
+ Then one can check the partial sums approach the limit $a = (a_1, a_2, \dots)$.
+ (The reverse implication holds for the same reason as in $\RR$.)
+\end{proof}
+
+\subsection{More fun with geometric series}
+Let's finally state the $p$-adic analog of the geometric series formula.
+
+\begin{proposition}
+ [Geometric series]
+ Let $x \in \ZZ_p$ with $\left\lvert x \right\rvert_p < 1$.
+ Then \[ \frac{1}{1-x} = 1 + x + x^2 + x^3 + \dots. \]
+\end{proposition}
+\begin{proof}
+ Note that the partial sums satisfy
+ $1 + x + x^2 + \dots + x^n = \frac{1-x^n}{1-x}$,
+ and $x^n \to 0$ as $n \to \infty$ since
+ $\left\lvert x \right\rvert_p < 1$.
+\end{proof}
+
+So $1 + 3 + 3^2 + \dots = -\half$ really is a convergent series
+in $\ZZ_3$, in the honest analytic sense.
+And so on.
+
+If you buy the analogy that $\ZZ_p$ is generating functions
+with base $p$, then all the olympiad generating functions
+you might be used to have $p$-adic analogs.
+For example, you can prove more generally that:
+\begin{theorem}
+ [Generalized binomial theorem]
+ If $x \in \ZZ_p$ and $\left\lvert x \right\rvert_p < 1$,
+ then for any $r \in \QQ$ we have the series convergence
+ \[ \sum_{n \ge 0} \binom rn x^n = (1+x)^r. \]
+\end{theorem}
+(I haven't defined $(1+x)^r$, but it has the properties you expect.)
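For $r = -2$, the exponent used in the TST problem, the convergence can be checked concretely: $\binom{-2}{n} = (n+1)(-1)^n$, and multiplying a truncated series by $(1+x)^2$ leaves an error divisible by a high power of $p$. A small numeric sanity check (illustration only, taking $x = p$ so that $\left\lvert x \right\rvert_p < 1$):

```python
# (1+x)^(-2) = sum_{n>=0} C(-2,n) x^n = sum_{n>=0} (n+1)(-x)^n
for p in (5, 7):
    x, N = p, 6
    partial = sum((n + 1) * (-x)**n for n in range(N + 1))
    error = (1 + x)**2 * partial - 1
    assert error % p**(N + 1) == 0  # truncation error vanishes mod p^(N+1)
print("ok")
```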
+
+\subsection{Completeness}
+Note that the definition of $\left\lvert \bullet \right\rvert_p$
+could have been given for $\QQ$ as well;
+we didn't need $\QQ_p$ to introduce it
+(after all, we have $\nu_p$ in olympiads already).
+The big important theorem I must state now is:
+\begin{theorem}
+ [$\QQ_p$ is complete]
+ The space $\QQ_p$ is the completion of $\QQ$
+ with respect to $\left\lvert \bullet \right\rvert_p$.
+\end{theorem}
+This is the definition of $\QQ_p$ you'll see more frequently;
+one then defines $\ZZ_p$ in terms of $\QQ_p$
+(rather than vice-versa) according to
+\[ \ZZ_p = \left\{ x \in \QQ_p :
+ \left\lvert x \right\rvert_p \le 1 \right\}. \]
+
+\subsection{Philosophical notes}
+Let me justify why this definition is philosophically nice.
+Suppose you are an ancient Greek mathematician who is given:
+\begin{quote}
+ \textbf{Problem for Ancient Greeks.}
+ Estimate the value of the sum
+ \[ S = \frac{1}{1^2} + \frac{1}{2^2} + \dots + \frac{1}{10000^2} \]
+ to within $0.001$.
+\end{quote}
+The sum $S$ consists entirely of rational numbers,
+so the problem statement would be fair game for ancient Greece.
+But it turns out that in order to get a good estimate,
+it \emph{really helps} if you know about the real numbers:
+because then you can construct the infinite series
+$\sum_{n \ge 1} n^{-2} = \frac16 \pi^2$,
+and deduce that $S \approx \frac{\pi^2}{6}$,
+up to some small error term from the terms past $\frac{1}{10001^2}$,
+which can be bounded.
+
+Of course, in order to have access to enough theory
+to prove that $S = \pi^2/6$, you need to have the real numbers;
+it's impossible to do calculus in $\QQ$
+(the sequence $1$, $1.4$, $1.41$, $1.414$, \dots
+is considered ``not convergent''!).
+
+Now fast-forward to 2002, and suppose you are given
+\begin{quote}
+ \textbf{Problem from USA TST 2002.}
+ Estimate the sum
+ \[ f_p(x) = \sum_{k=1}^{p-1} \frac{1}{(px+k)^2} \]
+ to within $p^{-3}$ in $\left\lvert \bullet \right\rvert_p$, i.e.\ modulo $p^3$.
+\end{quote}
+Even though $f_p(x)$ is a rational number,
+it still helps to be able to do analysis with infinite sums,
+and then bound the error term (i.e.\ take mod $p^3$).
+But the space $\QQ$ is not complete with respect
+to $\left\lvert \bullet \right\rvert_p$ either,
+and thus it makes sense to work in the completion of $\QQ$
+with respect to $\left\lvert \bullet \right\rvert_p$.
+This is exactly $\QQ_p$.
+
+In any case, let's finally solve \Cref{ex:token}.
+\begin{example}
+[USA TST 2002]
+We will now compute
+\[ f_p(x) = \sum_{k=1}^{p-1} \frac{1}{(px+k)^2} \pmod{p^3}. \]
+Armed with the generalized binomial theorem,
+this becomes straightforward.
+\begin{align*}
+ f_p(x) &= \sum_{k=1}^{p-1} \frac{1}{(px+k)^2}
+ = \sum_{k=1}^{p-1} \frac{1}{k^2}
+ \left( 1 + \frac{px}{k} \right)^{-2} \\
+ &= \sum_{k=1}^{p-1} \frac{1}{k^2} \sum_{n \ge 0}
+ \binom{-2}{n} \left( \frac{px}{k} \right)^{n} \\
+ &= \sum_{n \ge 0} \binom{-2}{n}
+ \sum_{k=1}^{p-1} \frac{1}{k^2} \left( \frac{x}{k} \right)^{n} p^n \\
+ &\equiv \sum_{k=1}^{p-1} \frac{1}{k^2}
+ - 2x \left( \sum_{k=1}^{p-1} \frac{1}{k^3} \right) p
+ + 3x^2 \left( \sum_{k=1}^{p-1} \frac{1}{k^4} \right) p^2 \pmod{p^3}.
+\end{align*}
+Using the elementary facts that
+$p^2 \mid \sum_k k^{-3}$ and $p \mid \sum_k k^{-4}$,
+this solves the problem.
+\end{example}
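The conclusion is also easy to verify numerically for small primes: each term $1/(px+k)^2$ makes sense modulo $p^3$ via modular inverses, since $px+k$ is coprime to $p$. A Python sanity check (illustrative only; the two elementary facts quoted above require $p \ge 7$, so we test such primes):

```python
def f(x, p):
    """sum_{k=1}^{p-1} (px+k)^(-2), reduced modulo p^3."""
    m = p**3
    return sum(pow(p*x + k, -2, m) for k in range(1, p)) % m

for p in (7, 11, 13):
    values = {f(x, p) for x in range(20)}
    assert len(values) == 1  # independent of x, as claimed
print("ok")
```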
+
+
+%\section{Digression on $\CC_p$}
+%Before I go on, I want to mention that $\QQ_p$
+%is not algebraically closed.
+%So, we can take its algebraic closure $\ol{\QQ_p}$ --- but this
+%field is now no longer complete (in the topological sense).
+%However, we can then take the completion of this space
+%to obtain $\CC_p$.
+%In general, completing an algebraically closed field
+%remains algebraically closed,
+%and so there is a larger space $\CC_p$ which
+%is algebraically closed \emph{and} complete.
+%This space is called the $p$-adic complex numbers.
+%
+%We won't need $\CC_p$ at all in what follows,
+%so you can forget everything you just read.
+
+\section{Mahler coefficients}
+One of the big surprises of $p$-adic analysis is that:
+\begin{moral}
+ We can basically describe all continuous functions $\ZZ_p \to \QQ_p$.
+\end{moral}
+They are given by a basis of functions
+\[ \binom xn \defeq \frac{x(x-1) \dots (x-(n-1))}{n!} \]
+in the following way.
+\begin{theorem}
+ [Mahler; see {\cite[Theorem 51.1, Exercise 51.b]{ref:schikof}}]
+ Let $f \colon \ZZ_p \to \QQ_p$ be continuous, and define
+ \begin{equation}
+ a_n = \sum_{k=0}^n \binom nk (-1)^{n-k} f(k).
+ \label{eq:mahler}
+ \end{equation}
+ Then $\lim_n a_n = 0$ and \[ f(x) = \sum_{n \ge 0} a_n \binom xn. \]
+
+ Conversely, if $a_n$ is any sequence converging to zero,
+ then $f(x) = \sum_{n \ge 0} a_n \binom xn$
+ defines a continuous function satisfying \eqref{eq:mahler}.
+ % true for C_p too
+\end{theorem}
+The $a_i$ are called the \emph{Mahler coefficients} of $f$.
+\begin{exercise}
+ In \Cref{exer:alternating} we saw that if $f \colon \ZZ_p \to \QQ_p$ is continuous
+ and $f(n) = (-1)^n$ for every $n \in \ZZ_{\ge 0}$ then $p = 2$.
+ Re-prove this using Mahler's theorem,
+ and this time show conversely that a unique such $f$ exists when $p=2$.
+\end{exercise}
+
+You'll note that these are the same finite differences that one
+uses on polynomials in high school math contests,
+which is why they are also called ``Mahler differences''.
+\begin{align*}
+ a_0 &= f(0) \\
+ a_1 &= f(1) - f(0) \\
+ a_2 &= f(2) - 2f(1) + f(0) \\
+ a_3 &= f(3) - 3f(2) + 3f(1) - f(0).
+\end{align*}
+Thus one can think of $a_n \to 0$ as saying that
+the values of $f(0)$, $f(1)$, \dots behave like a polynomial modulo $p^e$
+for every $e \ge 0$.
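Formula \eqref{eq:mahler} is immediate to implement. As a sanity check, a polynomial such as $f(x) = x^2$ has only finitely many nonzero Mahler coefficients, reflecting the identity $x^2 = \binom x1 + 2\binom x2$. A short Python illustration (the name \texttt{mahler\_coeffs} is mine):

```python
from math import comb

def mahler_coeffs(f, N):
    """a_n = sum_{k=0}^n C(n,k) (-1)^(n-k) f(k), for n = 0, ..., N-1."""
    return [sum(comb(n, k) * (-1)**(n - k) * f(k) for k in range(n + 1))
            for n in range(N)]

a = mahler_coeffs(lambda x: x**2, 6)
print(a)  # [0, 1, 2, 0, 0, 0]

# reconstruct f from its Mahler coefficients: f(x) = sum_n a_n C(x, n)
assert all(sum(a[n] * comb(x, n) for n in range(6)) == x**2
           for x in range(10))
```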
+
+The notion ``analytic'' also has a Mahler interpretation.
+First, the definition.
+\begin{definition}
+We say that a function $f \colon \ZZ_p \to \QQ_p$ is \vocab{analytic}
+if it has a power series expansion
+\[ \sum_{n \ge 0} c_n x^n \quad c_n \in \QQ_p
+ \qquad\text{ converging for } x \in \ZZ_p. \]
+\end{definition}
+\begin{theorem}
+ [{\cite[Theorem 54.4]{ref:schikof}}]
+ The function $f(x) = \sum_{n \ge 0} a_n \binom xn$ is analytic
+ if and only if
+ \[ \lim_{n \to \infty} \frac{a_n}{n!} = 0. \]
+ % true for Q_p too
+\end{theorem}
+Analytic functions also satisfy the following niceness result:
+\begin{theorem}
+ [Strassmann's theorem]
+ Let $f \colon \ZZ_p \to \QQ_p$ be analytic and not identically zero.
+ Then $f$ has finitely many zeros.
+ % should be true for \CC_p? gdi
+\end{theorem}
+
+To give an application of these results,
+we will prove the following result,
+which was interesting even before $p$-adics came along!
+\begin{theorem}
+ [Skolem-Mahler-Lech]
+ Let $(x_i)_{i \ge 0}$ be an integral linear recurrence,
+ meaning $(x_i)_{i \ge 0}$ is a sequence of integers satisfying
+ \[ x_n = c_1 x_{n-1} + c_2 x_{n-2} + \dots + c_k x_{n-k},
+ \qquad n = k, k+1, \dots \]
+ for some choice of integers $c_1$, \dots, $c_k$.
+ Then the set of indices $\{ i \mid x_i = 0 \}$
+ is eventually periodic.
+\end{theorem}
+
+\begin{proof}
+ According to the theory of linear recurrences,
+ there exists a matrix $A$ and integer vectors $u$, $v$
+ such that we can write $x_i$ as a dot product
+ \[ x_i = \left< A^i u, v \right>. \]
+ Let $p$ be a prime not dividing $\det A$.
+ Then $A$ is invertible modulo $p$, and since there are only finitely many
+ invertible matrices modulo $p$, there is an integer $T > 0$
+ such that $A^T \equiv \id \pmod p$
+ (with $\id$ denoting the identity matrix).
+
+ Fix any $0 \le r < T$.
+ We will prove that either all the terms
+ \[ f(n) = x_{nT+r} \qquad n = 0, 1, \dots \]
+ are zero, or at most finitely many of them are.
+ This will conclude the proof.
+
+ Let $A^T = \id + pB$ for some integer matrix $B$.
+ We have
+ \begin{align*}
+ f(n) &= \left< A^{nT+r} u, v \right>
+ = \left< (\id + pB)^n A^r u, v \right> \\
+ &= \sum_{k \ge 0} \binom nk \cdot p^k \left< B^k A^r u, v \right> \\
+ &= \sum_{k \ge 0} a_k \binom nk \qquad \text{ where }
+ a_k = p^k \left< B^k A^r u, v \right> \in p^k \ZZ.
+ \end{align*}
+ Thus we have written $f$ in Mahler form.
+ Initially, we define $f \colon \ZZ_{\ge 0} \to \ZZ$,
+ but by Mahler's theorem (since $\lim_n a_n = 0$)
+ it follows that $f$ extends to a function $f \colon \ZZ_p \to \QQ_p$.
+ Also, we can check that $\lim_n \frac{a_n}{n!} = 0$
+ hence $f$ is even analytic.
+
+ Thus by Strassmann's theorem, $f$ is either identically zero,
+ or else it has finitely many zeros, as desired.
+\end{proof}
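As a sanity check of the statement (illustrative code only; the recurrence here is chosen for convenience, not taken from the text), the recurrence $x_n = x_{n-1} - x_{n-2}$ with $x_0 = 0$, $x_1 = 1$ is periodic with period $6$, so its zero set is literally periodic:

```python
def recurrence(coeffs, init, N):
    """First N terms of x_n = c_1 x_{n-1} + ... + c_k x_{n-k}."""
    x = list(init)
    while len(x) < N:
        x.append(sum(c * x[-i] for i, c in enumerate(coeffs, start=1)))
    return x

# x_n = x_{n-1} - x_{n-2}: the sequence cycles 0, 1, 1, 0, -1, -1, ...
x = recurrence([1, -1], [0, 1], 60)
zeros = [i for i, v in enumerate(x) if v == 0]
print(zeros[:5])  # → [0, 3, 6, 9, 12]: the zero set is the multiples of 3
```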
+
+\section{\problemhead}
+\begin{dproblem}
+ [$\ZZ_p$ is compact]
+ Show that $\QQ_p$ is not compact, but $\ZZ_p$ is.
+ (For the latter, I recommend using sequential compactness.)
+\end{dproblem}
+\begin{dproblem}
+ [Totally disconnected]
+ Show that both $\ZZ_p$ and $\QQ_p$ are \emph{totally disconnected}:
+ there are no connected sets other than the empty set and singleton sets.
+\end{dproblem}
+
+\begin{problem}
+ [Mentioned in \href{https://mathoverflow.net/a/81377/70654}{MathOverflow}]
+ Let $p$ be a prime.
+ Find a sequence $q_1$, $q_2$, \dots of rational numbers such that:
+ \begin{itemize}
+ \ii the sequence $q_n$ converges to $0$ in the real sense;
+ \ii the sequence $q_n$ converges to $2021$ in the $p$-adic sense.
+ \end{itemize}
+\end{problem}
+
+\begin{problem}
+ [USA TST 2011]
+ Let $p$ be a prime.
+ We say that a sequence of integers $\{z_n\}_{n=0}^\infty$
+ is a \emph{$p$-pod} if for each $e \geq 0$,
+ there is an $N \geq 0$ such that whenever $m \geq N$,
+ $p^e$ divides the sum
+ \[ \sum_{k=0}^m (-1)^k \binom mk z_k. \]
+ Prove that if both sequences $\{x_n\}_{n=0}^\infty$
+ and $\{y_n\}_{n=0}^\infty$ are $p$-pods,
+ then the sequence $\{x_n y_n\}_{n=0}^\infty$ is a $p$-pod.
+\end{problem}
diff --git a/books/napkin/pell.tex b/books/napkin/pell.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c3596540fae1b35457edc0c92012cb4a7e002ac8
--- /dev/null
+++ b/books/napkin/pell.tex
@@ -0,0 +1,254 @@
+\chapter{Bonus: Let's solve Pell's equation!}
+This is an optional aside, and can be safely ignored.
+(On the other hand, it's pretty short.)
+
+\section{Units}
+\prototype{$\pm 1$, roots of unity, $3-2\sqrt2$ and its powers.}
+Recall from \Cref{prob:OK_unit_norm} that $\alpha \in \OO_K$ is invertible
+if and only if \[ \NK(\alpha) = \pm 1. \]
+We let $\OO_K^\times$ denote the set of units of $\OO_K$.
+
+\begin{ques}
+ Show that $\OO_K^\times$ is a group under multiplication.
+ Hence we name it the \vocab{unit group} of $\OO_K$.
+\end{ques}
+
+What are some examples of units?
+\begin{example}
+ [Examples of units in a number field]
+ \listhack
+ \begin{enumerate}
+ \ii $\pm 1$ are certainly units, present in any number field.
+
+ \ii If $\OO_K$ contains a root of unity $\omega$ (i.e.\ $\omega^n=1$),
+ then $\omega$ is a unit.
+ (In fact, $\pm 1$ are special cases of this.)
+
+ \ii Of course, not all units of $\OO_K$ are roots of unity.
+ For example, if $\OO_K = \ZZ[\sqrt3]$ (from $K = \QQ(\sqrt3)$) then
+ the number $2+\sqrt3$ is a unit, as its norm is
+ \[ \NK(2+\sqrt3) = 2^2 - 3 \cdot 1^2 = 1. \]
+ Alternatively, just note that the inverse $2-\sqrt3 \in \OO_K$ as well:
+ \[ \left( 2-\sqrt3 \right)\left( 2+\sqrt3 \right) = 1. \]
+ Either way, $2-\sqrt3$ is a unit.
+
+ \ii Given any unit $u \in \OO_K^\times$, all its powers are also units.
+ So for example, $(3-2\sqrt2)^n$ is always a unit of $\ZZ[\sqrt2]$, for any $n$.
+ If $u$ is not a root of unity, then this generates infinitely many new units in $\OO_K^\times$.
+ \end{enumerate}
+\end{example}
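The last claim is easy to check numerically. This sketch (with made-up helper names \texttt{mult} and \texttt{norm}) multiplies out powers of $3 - 2\sqrt2$ in $\ZZ[\sqrt2]$, representing $p + q\sqrt2$ as the pair $(p,q)$, and confirms that every power has norm $1$, hence is a unit:

```python
def mult(a, b, d):
    """(p + q sqrt(d)) * (r + s sqrt(d)) in Z[sqrt(d)], as coefficient pairs."""
    (p, q), (r, s) = a, b
    return (p * r + d * q * s, p * s + q * r)

def norm(a, d):
    """N(p + q sqrt(d)) = p^2 - d q^2."""
    p, q = a
    return p * p - d * q * q

u = (3, -2)  # the unit 3 - 2*sqrt(2)
x = (1, 0)   # start from 1
for n in range(1, 6):
    x = mult(x, u, 2)
    assert norm(x, 2) == 1  # every power is again a unit
print(x)  # → (3363, -2378), i.e. u^5 = 3363 - 2378*sqrt(2)
```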
+
+\begin{ques}
+ Verify the claims above that
+ \begin{enumerate}[(a)]
+ \ii Roots of unity are units, and
+ \ii Powers of units are units.
+ \end{enumerate}
+ One can either proceed from the definition
+ or use the characterization $\NK(\alpha) = \pm 1$.
+ If one definition seems more natural to you, use the other.
+\end{ques}
+
+\section{Dirichlet's unit theorem}
+\prototype{The units of $\ZZ[\sqrt3]$ are $\pm(2+\sqrt3)^n$.}
+
+\begin{definition}
+ Let $\mu(\OO_K)$ denote the set of roots of unity
+ contained in a number field $K$ (equivalently, in $\OO_K$).
+\end{definition}
+\begin{example}[Examples of $\mu(\OO_K)$]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii If $K = \QQ(i)$, then $\OO_K = \ZZ[i]$. So
+ \[ \mu(\OO_K) = \{\pm1, \pm i\} \quad\text{where } K = \QQ(i). \]
+ \ii If $K = \QQ(\sqrt3)$, then $\OO_K = \ZZ[\sqrt 3]$. So
+ \[ \mu(\OO_K) = \{\pm 1\} \quad\text{where } K = \QQ(\sqrt 3). \]
+ \ii If $K = \QQ(\sqrt{-3})$, then $\OO_K = \ZZ[\half(1+\sqrt{-3})]$.
+ So
+ \[ \mu(\OO_K)
+ = \left\{ \pm 1, \frac{\pm 1 \pm \sqrt{-3}}{2} \right\}
+ \quad\text{where } K = \QQ(\sqrt{-3})
+ \]
+ where the two $\pm$ signs in the second term may be chosen independently;
+ in other words $\mu(\OO_K) = \left\{ z \mid z^6=1 \right\}$.
+ \end{enumerate}
+\end{example}
+\begin{exercise}
+ Show that we always have that $\mu(\OO_K)$
+ comprises the roots of $x^n-1$ for some integer $n$.
+ (First, show it is a finite group under multiplication.)
+\end{exercise}
+
+We now quote, without proof, the so-called Dirichlet's unit theorem,
+which gives us a much more complete picture of what the units in $\OO_K$ are.
+Legend says that Dirichlet found the proof of this theorem
+during an Easter concert in the Sistine Chapel.
+\begin{theorem}
+ [Dirichlet's unit theorem]
+ Let $K$ be a number field with signature $(r_1, r_2)$ and set
+ \[ s = r_1 + r_2 - 1. \]
+ Then there exist units $u_1$, \dots, $u_s$ such that every unit
+ $\alpha \in \OO_K^\times$ can be written \emph{uniquely} in the form
+ \[ \alpha = \omega \cdot u_1^{n_1} \dots u_s^{n_s} \]
+ where $\omega \in \mu(\OO_K)$ is a root of unity,
+ and $n_1, \dots, n_s \in \ZZ$.
+\end{theorem}
+More succinctly:
+\begin{moral}
+We have $\OO_K^\times \cong \ZZ^{r_1+r_2-1} \times \mu(\OO_K)$.
+\end{moral}
+A choice of $u_1$, \dots, $u_s$ is called a choice of \vocab{fundamental units}.
+
+Here are some example applications.
+\begin{example}
+ [Some unit groups]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Let $K = \QQ(i)$ with signature $(0,1)$.
+ Then we obtain $s = 0$, so Dirichlet's unit theorem says that there are no
+ units other than the roots of unity.
+ Thus
+ \[ \OO_K^\times = \{\pm 1, \pm i\} \quad\text{where } K = \QQ(i). \]
+ This is not surprising,
+ since $a+bi \in \ZZ[i]$ is a unit if and only if $a^2+b^2 = 1$.
+
+ \ii Let $K = \QQ(\sqrt 3)$, which has signature $(2,0)$.
+ Then $s=1$, so we expect exactly one fundamental unit.
+ A fundamental unit is $2+\sqrt3$ (or $2-\sqrt3$, its inverse) with norm $1$, and so we find
+ \[ \OO_K^\times = \left\{ \pm (2+\sqrt3)^n \mid n \in \ZZ \right\}. \]
+
+ \ii Let $K = \QQ(\sqrt[3]{2})$ with signature $(1,1)$.
+ Then $s=1$, so we expect exactly one fundamental unit.
+ One valid choice of fundamental unit is $1 + \sqrt[3]{2} + \sqrt[3]{4}$. So
+ \[ \OO_K^\times
+ = \left\{ \pm \left( 1+\sqrt[3]{2}+\sqrt[3]{4} \right)^n \mid n \in \ZZ \right\}. \]
+ \end{enumerate}
+\end{example}
+
+I haven't actually shown you that these are fundamental units,
+and indeed computing fundamental units is in general hard.
+
+\section{Finding fundamental units}
+Here is a table with some fundamental units.
+\[
+ \begin{array}{rl}
+ d & \text{Unit} \\ \hline
+ d=2 & 1+\sqrt 2 \\
+ d=3 & 2+\sqrt3 \\
+ d=5 & \half(1+\sqrt5) \\
+ d=6 & 5+2\sqrt6 \\
+ d=7 & 8+3\sqrt7 \\
+ d=10 & 3+\sqrt{10} \\
+ d=11 & 10+3\sqrt{11}
+ \end{array}
+\]
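For $d \equiv 2, 3 \pmod 4$ the table entries can be reproduced by brute force, searching for the smallest $y$ making $dy^2 \pm 1$ a perfect square. (A sketch; the function name \texttt{fundamental\_solution} is invented for this illustration, and $d = 5$ is excluded since it needs the half-integer convention.)

```python
from math import isqrt

def fundamental_solution(d):
    """Smallest y >= 1 with d*y^2 +- 1 a perfect square x^2 (d = 2,3 mod 4)."""
    y = 1
    while True:
        for t in (d * y * y + 1, d * y * y - 1):
            x = isqrt(t)
            if x * x == t:
                return x, y  # x + y*sqrt(d) is the fundamental unit
        y += 1

for d in (2, 3, 6, 7, 10, 11):
    print(d, fundamental_solution(d))
# → (1,1), (2,1), (5,2), (8,3), (3,1), (10,3), matching the table above
```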
+
+In general, determining fundamental units is computationally hard.
+
+However, once I tell you what the fundamental unit is, it's not too bad
+(at least in the case $s=1$) to verify it.
+For example,
+suppose we want to show that $10 + 3\sqrt{11}$ is a fundamental unit of $K = \QQ(\sqrt{11})$,
+which has ring of integers $\ZZ[\sqrt{11}]$.
+If not, then for some $n > 1$, we would have to have
+\[ 10 + 3 \sqrt{11} = \pm \left( x+y\sqrt{11} \right)^n. \]
+For this to happen, at the very least we would need $\left\lvert y \right\rvert < 3$.
+We would also have $x^2-11y^2 = \pm 1$.
+So one can just verify (using $y= 1,2$) that this fails.
+
+The point is that since $(10,3)$ is the \emph{smallest}
+(in the sense of $\left\lvert y \right\rvert$)
+integer solution to $x^2-11y^2 = \pm 1$, it must be the fundamental unit.
+This holds more generally, although in the case that $d \equiv 1 \pmod 4$
+a modification must be made as $x$, $y$ might be half-integers (like $\half(1+\sqrt5)$).
+\begin{theorem}
+ [Fundamental units of Pell equations]
+ Assume $d$ is a squarefree integer.
+ \begin{enumerate}[(a)]
+ \ii If $d \equiv 2,3 \pmod 4$,
+ and $(x,y)$ is a minimal integer solution to $x^2-dy^2 = \pm 1$,
+ then $x + y \sqrt d$ is a fundamental unit.
+ \ii If $d \equiv 1 \pmod 4$,
+ and $(x,y)$ is a minimal \emph{half-integer} solution to $x^2-dy^2 = \pm 1$,
+ then $x + y \sqrt d$ is a fundamental unit.
+ (Equivalently, the minimal integer solution to $a^2 - db^2 = \pm 4$
+ gives $\half (a + b \sqrt d)$.)
+ \end{enumerate}
+ (Any reasonable definition of ``minimal'' will work, such as sorting by $\left\lvert y \right\rvert$.)
+\end{theorem}
+
+\section{Pell's equation}
+This class of results completely eradicates Pell's Equation.
+After all, solving
+\[ a^2 - d \cdot b^2 = \pm 1 \]
+amounts to finding elements of $\ZZ[\sqrt d]$ with norm $\pm 1$.
+It's a bit weirder in the $d \equiv 1 \pmod 4$ case, since in that case $K = \QQ(\sqrt d)$
+gives $\OO_K = \ZZ[\half(1+\sqrt d)]$, and so the fundamental unit may not actually be a solution.
+(For example, when $d = 5$, we get the solution $(\half, \half)$.)
+Nonetheless, all \emph{integer} solutions are eventually generated.
+
+To make this all concrete, here's a simple example.
+\begin{example}[$x^2-5y^2 = \pm 1$]
+ Set $K = \QQ(\sqrt 5)$, so $\OO_K = \ZZ[\half(1+\sqrt 5)]$.
+ By Dirichlet's unit theorem, $\OO_K^\times$ is generated by a single element $u$.
+ The choice
+ \[ u = \frac 12 + \frac 12 \sqrt 5 \]
+ serves as a fundamental unit,
+ as there are no smaller integer solutions to $a^2-5b^2=\pm 4$.
+
+ The first several powers of $u$ are
+ \[
+ \renewcommand{\arraystretch}{1.4}
+ \begin{array}{rrr}
+ n & \multicolumn{1}{c}{u^n} & \text{Norm} \\ \hline
+ -2 & \half(3-\sqrt5) & 1 \\
+ -1 & \half(-1+\sqrt5) & -1 \\
+ 0 & 1 & 1 \\
+ 1 & \half(1+\sqrt5) & -1 \\
+ 2 & \half(3+\sqrt5) & 1 \\
+ 3 & 2 + \sqrt 5 & -1 \\
+ 4 & \half(7+3\sqrt5) & 1 \\
+ 5 & \half(11+5\sqrt5) & -1 \\
+ 6 & 9 + 4\sqrt 5 & 1
+ \end{array}
+ \]
+ One can see that the first integer solution is $(2,1)$, which gives $-1$.
+ The first solution with $+1$ is $(9,4)$.
+ Continuing the pattern, we find that every third power of $u$ gives an integer solution
+ (see also \Cref{prob:unit_cubed}),
+ with the odd ones giving a solution to $x^2-5y^2=-1$ and
+ the even ones a solution to $x^2-5y^2=+1$.
+ All solutions are generated this way, up to $\pm$ signs
+ (by considering $\pm u^{\pm n}$).
+\end{example}
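The table of powers can be verified mechanically by working with half-integer coordinates: represent $\frac{a + b\sqrt5}{2}$ as the pair $(a, b)$ with $a \equiv b \pmod 2$. (A sketch with invented helper names \texttt{mult5} and \texttt{norm5}.)

```python
def mult5(u, v):
    """Multiply (a + b*sqrt5)/2 by (c + d*sqrt5)/2, staying in that form."""
    (a, b), (c, d) = u, v
    # Since a = b and c = d mod 2, both numerators below are even.
    return ((a * c + 5 * b * d) // 2, (a * d + b * c) // 2)

def norm5(u):
    """Norm of (a + b*sqrt5)/2, namely (a^2 - 5 b^2)/4."""
    a, b = u
    return (a * a - 5 * b * b) // 4

u = (1, 1)              # encodes (1 + sqrt5)/2
x, powers = (2, 0), []  # (2, 0) encodes 1
for n in range(1, 7):
    x = mult5(x, u)
    powers.append((x, norm5(x)))
# u^3 = (4 + 2*sqrt5)/2 = 2 + sqrt5 and u^6 = 9 + 4*sqrt5 are the integer
# points, matching the "every third power" pattern; norms alternate -1, 1.
print(powers)
```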
+
+\section\problemhead
+\begin{problem}[Fictitious account of the battle of Hastings]
+ Determine the number of soldiers in the following battle:
+ \begin{quote}
+ The men of Harold stood well together,
+ as their wont was, and formed thirteen squares,
+ with a like number of men in every square thereof, and woe
+ to the hardy Norman who ventured to enter their redoubts;
+ for a single blow of Saxon war-hatchet would break his lance
+ and cut through his coat of mail . . .
+ when Harold threw himself into the fray the Saxons
+ were one mighty square of men, shouting the battle-cries,
+ ``Ut!'', ``Olicrosse!'', ``Godemite!''
+ \end{quote}
+ % The answer can be found by hand.
+ % You may assume that the army has size at least $1$ and
+ % at most one billion.
+\end{problem}
+\begin{problem}
+ \label{prob:unit_cubed}
+ Let $d > 0$ be a squarefree integer,
+ and let $u$ denote the fundamental unit of $\QQ(\sqrt d)$.
+ Show that either $u \in \ZZ[\sqrt d]$,
+ or $u^n \in \ZZ[\sqrt d] \iff 3 \mid n$.
+\end{problem}
+\begin{problem}
+ Show that there are no integer solutions to
+ \[ x^2 - 34y^2 = -1 \]
+ despite the fact that $-1$ is a quadratic residue mod $34$.
+\end{problem}
diff --git a/books/napkin/pontryagin.tex b/books/napkin/pontryagin.tex
new file mode 100644
index 0000000000000000000000000000000000000000..a0ef1231491b15d5634b9e3385092f55a2f28305
--- /dev/null
+++ b/books/napkin/pontryagin.tex
@@ -0,0 +1,331 @@
+\chapter{Bonus: A hint of Pontryagin duality}
+\label{ch:pontryagin}
+
+In this short chapter we will give statements
+about how to generalize our Fourier analysis
+(the bonus \Cref{ch:fourier})
+to a much wider class of groups $G$.
+
+\section{LCA groups}
+\prototype{$\TT$, $\RR$.}
+Earlier we played with $\RR$,
+which is nice because in addition to being a topological space,
+it is also an abelian group under addition.
+These sorts of objects which are both groups and spaces have a name.
+\begin{definition}
+ A \vocab{topological group} $G$
+ is a Hausdorff\footnote{Some authors omit the Hausdorff condition.}
+ topological space equipped also with a group operation $(G, \cdot)$,
+ such that both maps
+ \begin{align*}
+ G \times G &\to G \quad\text{ by }\quad (x,y) \mapsto xy \\
+ G &\to G \quad\text{ by }\quad x \mapsto x\inv
+ \end{align*}
+ are continuous.
+\end{definition}
+
+For our Fourier analysis, we need some additional conditions.
+\begin{definition}
+ A \vocab{locally compact abelian (LCA) group} $G$
+ is one for which the group operation is abelian,
+ and moreover the topology is \emph{locally compact}:
+ for every point $p$ of $G$,
+ there exists a compact subset $K$ of $G$
+ such that $K \ni p$, and $K$ contains some open neighborhood of $p$.
+\end{definition}
+
+Our previous examples all fall into this category:
+\begin{example}
+ [Examples of locally compact abelian groups]
+ \listhack
+ \begin{itemize}
+ \ii Any finite abelian group $Z$ with the discrete topology is LCA.
+ \ii The circle group $\TT$ is LCA and also in fact compact.
+ \ii The real numbers $\RR$ are an example of an LCA group
+ which is \emph{not} compact.
+ \end{itemize}
+\end{example}
+
+These conditions turn out to be enough
+for us to define a measure on the space $G$.
+The relevant theorem, which we will just quote:
+\begin{theorem}
+ [Haar measure]
+ Let $G$ be a locally compact abelian group.
+ We regard it as a measurable space
+ using its Borel $\sigma$-algebra $\SB(G)$.
+ There exists a measure $\mu \colon \SB(G) \to [0,\infty]$,
+ called the \vocab{Haar measure},
+ satisfying the following properties:
+ \begin{itemize}
+ \ii $\mu(gS) = \mu(S)$ for every $g \in G$ and measurable $S$.
+ That means that $\mu$ is ``translation-invariant''
+ under translation by $G$.
+ \ii $\mu(K)$ is finite for any compact set $K$.
+ \ii if $S$ is measurable, then $\mu(S) = \inf\left\{ \mu(U) \colon U \supseteq S \text{ open} \right\}$.
+ \ii if $U$ is open, then $\mu(U) = \sup\left\{ \mu(K) \colon K \subseteq U \text{ compact} \right\}$.
+ \end{itemize}
+ Moreover, it is unique up to scaling by a positive constant.
+\end{theorem}
+
+\begin{remark}
+ Note that if $G$ is compact, then $\mu(G)$ is finite (and positive).
+ For this reason the Haar measure on a compact LCA group $G$
+ is usually normalized so that $\mu(G) = 1$.
+\end{remark}
+
+For this chapter, we will only use the first two properties;
+the other two are just mentioned for completeness.
+Note that this actually generalizes the chapter where
+we constructed a measure on $\SB(\RR^n)$,
+since $\RR^n$ is an LCA group!
+
+So, in short: if we have an LCA group,
+we have a measure $\mu$ on it.
+
+\section{The Pontryagin dual}
+Now the key definition is:
+\begin{definition}
+ Let $G$ be an LCA group.
+ Then its \vocab{Pontryagin dual} is the abelian group
+ \[ \wh G \defeq \left\{ \text{continuous group homomorphisms }
+ \xi : G \to \TT \right\}. \]
+ The maps $\xi$ are called \vocab{characters}.
+ It can itself be made into an LCA group.\footnote{If you must
+ know the topology, it is the \vocab{compact-open topology}:
+ for any compact set $K \subseteq G$
+ and open set $U \subseteq \TT$,
+ we declare the set of all $\xi$ with $\xi\im(K) \subseteq U$ to be open,
+ and then take the smallest topology
+ containing all such sets. We won't use this at all.}
+\end{definition}
+\begin{example}
+ [Examples of Pontryagin duals]
+ \listhack
+ \begin{itemize}
+ \ii $\wh{\ZZ} \cong \TT$,
+ since group homomorphisms $\ZZ \to \TT$ are determined by the image of $1$.
+ \ii $\wh{\TT} \cong \ZZ$.
+ The characters are given by $\theta \mapsto n\theta$ for $n \in \ZZ$.
+ \ii $\wh{\RR} \cong \RR$.
+ This is because a nonzero continuous homomorphism
+ $\RR \to S^1$ is determined by the fiber above $1 \in S^1$.
+ (Algebraic topologists might see covering projections here.)
+ \ii $\wh{\ZZ/n\ZZ} \cong \ZZ/n\ZZ$,
+ characters $\xi$ being determined by the image $\xi(1) \in \TT$.
+ \ii $\wh{G \times H} \cong \wh G \times \wh H$.
+ \end{itemize}
+\end{example}
+\begin{exercise}
+ [$\wh Z \cong Z$, for those who read \Cref{sec:FTFGAG}]
+ If $Z$ is a finite abelian group, show that $\wh Z \cong Z$,
+ using the results of the previous example.
+ You may now recognize that the bilinear form
+ $\cdot \colon Z \times Z \to \TT$
+ is exactly a choice of isomorphism $Z \to \wh Z$.
+ It is not ``canonical''.
+\end{exercise}
+
+True to its name as the dual,
+and in analogy with $(V^\vee)^\vee \cong V$ for vector spaces $V$, we have:
+\begin{theorem}
+ [Pontryagin duality theorem]
+ For any LCA group $G$, there is an isomorphism
+ \[ G \cong \wh{\wh G} \qquad \text{by} \qquad
+ x \mapsto \left( \xi \mapsto \xi(x) \right). \]
+\end{theorem}
+
+The compact case is especially nice.
+\begin{proposition}
+ [$G$ compact $\iff$ $\wh G$ discrete]
+ Let $G$ be an LCA group.
+ Then $G$ is compact if and only if $\wh G$ is discrete.
+\end{proposition}
+\begin{proof}
+ \Cref{prob:LCA_compact}.
+\end{proof}
+
+\section{The orthonormal basis in the compact case}
+Let $G$ be a compact LCA group,
+and work with its Haar measure.
+We may now let $L^2(G)$ be the space of
+square-integrable functions to $\CC$, i.e.
+\[ L^2(G) = \left\{ f \colon G \to \CC
+ \quad\text{such that}\quad \int_G |f|^2 < \infty \right\}. \]
+Thus we can equip it with the inner form
+\[ \left< f,g \right> = \int_G f \cdot \ol{g}. \]
+In that case, we get all the results we wanted before:
+\begin{theorem}
+ [Characters of $G$ form an orthonormal basis]
+ \label{thm:god}
+ Assume $G$ is LCA and compact (so $\wh G$ is discrete).
+ Then the characters
+ \[ (e_\xi)_{\xi \in \wh G}
+ \qquad\text{by}\qquad e_\xi(x) = e(\xi(x)) = \exp(2\pi i \xi(x)) \]
+ form an orthonormal basis of $L^2(G)$.
+ Thus for each $f \in L^2(G)$ we have
+ \[ f = \sum_{\xi \in \wh G} \wh f(\xi) e_\xi \]
+ where
+ \[ \wh f(\xi) = \left< f, e_\xi \right>
+ = \int_G f(x) \exp(-2\pi i \xi(x)) \; d\mu. \]
+\end{theorem}
+The sum $\sum_{\xi \in \wh G}$ makes sense since $\wh G$ is discrete.
+In particular,
+\begin{itemize}
+ \ii Letting $G = Z$ for a finite abelian group $Z$
+ gives the ``Fourier transform on finite groups''.
+ \ii The special case $G = \ZZ/n\ZZ$ has its
+ \href{https://en.wikipedia.org/wiki/Discrete_Fourier_transform#Definition}%
+ {own Wikipedia page}: the ``discrete Fourier transform''.
+ \ii Letting $G = \TT$ recovers the ``Fourier series'' from earlier.
+\end{itemize}
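In the finite case, the orthonormality in \Cref{thm:god} can be verified directly. Here is a quick numeric check on $G = \ZZ/6\ZZ$, where a character $\xi$ acts by $x \mapsto e^{2\pi i \xi x / 6}$ and the normalized Haar measure is counting measure divided by $6$ (illustrative code only; the helper names are my own):

```python
import cmath

n = 6

def e_xi(xi, x):
    """The character e_xi(x) = exp(2*pi*i*xi*x/n) on Z/nZ."""
    return cmath.exp(2j * cmath.pi * xi * x / n)

def inner(xi, eta):
    # Inner product <e_xi, e_eta> against the normalized Haar measure.
    return sum(e_xi(xi, x) * e_xi(eta, x).conjugate() for x in range(n)) / n

assert abs(inner(2, 2) - 1) < 1e-9  # each character has unit norm
assert abs(inner(2, 5)) < 1e-9      # distinct characters are orthogonal
print("orthonormal")
```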
+
+\section{The Fourier transform of the non-compact case}
+If $G$ is LCA but not compact, then Theorem~\ref{thm:god} becomes false.
+On the other hand, it's still possible to define $\wh G$.
+We can then try to write the Fourier coefficients anyways:
+let \[ \wh f(\xi) = \int_G f \cdot \ol{e_\xi} \; d\mu \]
+for $\xi \in \wh G$ and $f \colon G \to \CC$.
+The results are less fun in this case, but we still have, for example:
+\begin{theorem}
+ [Fourier inversion formula in the non-compact case]
+ Let $\mu$ be a Haar measure on $G$.
+ Then there exists a unique Haar measure $\nu$ on $\wh G$
+ (called the \vocab{dual measure}) such that:
+ whenever $f \in L^1(G)$ and $\wh f \in L^1(\wh G)$, we have
+ \[ f(x) = \int_{\wh G} \wh f(\xi) \xi(x) \; d\nu \]
+ for almost all $x \in G$ (with respect to $\mu$).
+ If $f$ is continuous, this holds for all $x$.
+\end{theorem}
+So while we don't have the niceness of a full inner product from before,
+we can still in some situations at least write $f$ as an integral
+in much the same way as before.
+
+In particular, these transforms have special names for a few special $G$:
+\begin{itemize}
+ \ii If $G = \RR$, then $\wh G = \RR$,
+ yielding the
+ ``\href{https://en.wikipedia.org/wiki/Fourier_transform}{(continuous) Fourier transform}''.
+ \ii If $G = \ZZ$, then $\wh G = \TT$,
+ yielding the
+ ``\href{https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform}{discrete time Fourier transform}''.
+\end{itemize}
+
+\section{Summary}
+We summarize our various flavors of Fourier analysis
+from the previous sections in the following table.
+In the first part $G$ is compact,
+in the second half $G$ is not.
+\[
+ \begin{array}{llll}
+ \hline
+ \text{Name} & \text{Domain }G & \text{Dual }\wh G
+ & \text{Characters} \\ \hline
+ \text{Binary Fourier analysis} & \{\pm1\}^n
+ & S \subseteq \left\{ 1, \dots, n \right\}
+ & \prod_{s \in S} x_s \\
+ \text{Fourier transform on finite groups} & Z
+ & \xi \in \wh Z \cong Z & e(\xi \cdot x) \\
+ \text{Discrete Fourier transform} & \ZZ/n\ZZ & \xi \in \ZZ/n\ZZ
+ & e(\xi x / n) \\
+ \text{Fourier series} & \TT \cong [-\pi, \pi] & n \in \ZZ
+ & \exp(inx) \\ \hline
+ \text{Continuous Fourier transform} & \RR & \xi \in \RR
+ & e(\xi x) \\
+ \text{Discrete time Fourier transform} & \ZZ & \xi \in \TT \cong [-\pi, \pi]
+ & \exp(i \xi n) \\
+ \end{array}
+\]
+You might notice that the \textbf{various names are awful}.
+This is part of the reason I got confused as a high school student:
+every type of Fourier series above has its own Wikipedia article.
+If it were up to me, we would just use the term ``$G$-Fourier transform'',
+and that would make everyone's lives a lot easier.
+
+
+
+\section{\problemhead}
+\begin{problem}
+ If $G$ is compact, so $\wh G$ is discrete,
+ describe the dual measure $\nu$.
+ \begin{hint}
+ You can read it off \Cref{thm:god}.
+ \end{hint}
+ \begin{sol}
+ It is the counting measure.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ \label{prob:LCA_compact}
+ Show that an LCA group $G$ is compact
+ if and only if $\wh G$ is discrete.
+ (You will need the compact-open topology for this.)
+ \begin{hint}
+ After Pontryagin duality,
+ we need to show $G$ compact implies $\wh G$ discrete
+ and $G$ discrete implies $\wh G$ compact.
+ Both do not need anything fancy:
+ they are topological facts.
+ \end{hint}
+\end{problem}
+
+\endinput
+\section{Peter-Weyl}
+In fact, if $G$ is a Lie group, even if $G$ is not abelian
+we can still give an orthonormal basis of $L^2(G)$
+(the square-integrable functions on $G$).
+It turns out in this case the characters are attached to complex
+irreducible representations of $G$
+(and in what follows all representations are complex).
+
+The result is given by the Peter-Weyl theorem.
+First, we need the following result:
+\begin{lemma}
+ [Compact Lie groups have unitary reps]
+ Any finite-dimensional (complex) representation $V$ of a compact Lie group $G$
+ is unitary, meaning it can be equipped with a $G$-invariant inner form.
+ Consequently, $V$ is completely reducible:
+ it splits into the direct sum of irreducible representations of $G$.
+\end{lemma}
+\begin{proof}
+ Suppose $B : V \times V \to \CC$ is any inner product.
+ Equip $G$ with a right-invariant Haar measure $dg$.
+ Then we can equip it with an ``averaged'' inner form
+ \[ \wt B(v,w) = \int_G B(gv, gw) \; dg. \]
+ Then $\wt B$ is the desired $G$-invariant inner form.
+ Now, the fact that $V$ is completely reducible follows from the fact
+ that given a subrepresentation of $V$, its orthogonal complement
+ is also a subrepresentation.
+\end{proof}
+
+The Peter-Weyl theorem then asserts that the finite-dimensional irreducible
+unitary representations essentially give an orthonormal basis for $L^2(G)$,
+in the following sense. Let $V = (V, \rho)$ be such a representation of $G$,
+and fix an orthonormal basis $e_1$, \dots, $e_d$ of $V$ (where $d = \dim V$).
+The $(i,j)$th \vocab{matrix coefficient} for $V$ is then given by
+\[ G \taking{\rho} \GL(V) \taking{\pi_{ij}} \CC \]
+where $\pi_{ij}$ is the projection onto the $(i,j)$th entry of the matrix.
+We abbreviate $\pi_{ij} \circ \rho$ to $\rho_{ij}$.
+Then the theorem is:
+\begin{theorem}
+ [Peter-Weyl]
+ Let $G$ be a compact Lie group.
+ Let $\Sigma$ denote the (pairwise non-isomorphic) irreducible finite-dimensional
+ unitary representations of $G$.
+ Then
+ \[ \left\{ \sqrt{\dim V} \rho_{ij}
+ \; \Big\vert \; (V, \rho) \in \Sigma,
+ \text{ and } 1 \le i,j \le \dim V \right\} \]
+ is an orthonormal basis of $L^2(G)$.
+\end{theorem}
+Strictly, I should say $\Sigma$ is a set of representatives of
+the isomorphism classes of irreducible unitary representations,
+one for each isomorphism class.
+
+In the special case $G$ is abelian,
+all irreducible representations are one-dimensional.
+A one-dimensional representation of $G$ is a map
+$G \injto \GL(\CC) \cong \CC^\times$,
+but the unitary condition implies it is actually a map $G \injto S^1 \cong \TT$,
+i.e.\ it is an element of $\wh G$.
diff --git a/books/napkin/preface.tex b/books/napkin/preface.tex
new file mode 100644
index 0000000000000000000000000000000000000000..afb312690049e8b3cd6d1441d081efbd53ce5134
--- /dev/null
+++ b/books/napkin/preface.tex
@@ -0,0 +1,230 @@
+\chapter{Preface}
+The origin of the name ``Napkin''
+comes from the following quote of mine.
+\begin{quote}
+\small
+I'll be eating a quick lunch with some friends of mine
+who are still in high school.
+They'll ask me what I've been up to the last few weeks,
+and I'll tell them that I've been learning category theory.
+They'll ask me what category theory is about.
+I'll tell them it's about abstracting things by looking at just the
+structure-preserving morphisms between them, rather than the objects themselves.
+I'll try to give them the standard example $\catname{Grp}$,
+but then I'll realize that they don't know what a homomorphism is.
+So then I'll start trying to explain what a homomorphism is,
+but then I'll remember that they haven't learned what a group is.
+So then I'll start trying to explain what a group is,
+but by the time I finish writing the group axioms on my napkin,
+they've already forgotten why I was talking about groups in the first place.
+And then it's 1PM, people need to go places, and I can't help but think: \\[0.5ex]
+\emph{``Man, if I had forty hours instead of forty minutes,
+I bet I could actually have explained this all''.}
+\end{quote}
+This book was initially my attempt at those forty hours,
+but has grown considerably since then.
+
+\section*{About this book}
+The \emph{Infinitely Large Napkin} is a light
+but mostly self-contained introduction to a large
+amount of higher math.
+
+I should say at once that this book is not intended
+as a replacement for dedicated books or courses;
+the amount of depth is not comparable.
+On the flip side, the benefit of this ``light'' approach
+is that it becomes accessible to a larger audience,
+since the goal is merely to give the reader a feeling for
+any particular topic rather than to emulate a full semester of lectures.
+
+I initially wrote this book with talented high-school students in mind,
+particularly those with math-olympiad type backgrounds.
+Some remnants of that cultural bias can still be felt throughout the book,
+particularly in assorted challenge problems which are
+taken from mathematical competitions.
+However, in general I think this would be a good reference
+for anyone with some amount of mathematical maturity and curiosity.
+Examples include, but are certainly not limited to:
+math undergraduate majors, physics/CS majors,
+math PhD students who want to hear a little bit
+about fields other than their own,
+advanced high schoolers who like math but not math contests,
+and unusually intelligent kittens fluent in English.
+
+\section*{Source code}
+The project is hosted on GitHub at
+\url{https://github.com/vEnhance/napkin}.
+Pull requests are quite welcome!
+I am also happy to receive suggestions and corrections by email.
+
+\section*{Philosophy behind the Napkin approach}
+As far as I can tell, higher math for high-school students
+comes in two flavors:
+\begin{itemize}
+ \ii Someone tells you about the hairy ball theorem in the form
+ ``you can't comb the hair on a spherical cat''
+ then doesn't tell you anything about why it should be true,
+ what it means to actually ``comb the hair'',
+ or any of the underlying theory,
+ leaving you with just some vague notion in your head.
+
+ \ii You take a class and prove every result in full detail,
+ and at some point you stop caring about what the professor is saying.
+\end{itemize}
+Presumably you already know how unsatisfying the first approach is.
+So the second approach seems to be the default,
+but I really think there should be some sort of middle ground here.
+
+%I was talking to a friend of mine one day who described briefly
+%what the Israel IMO training looked like.
+%It turns out that rather than actually preparing for the IMO,
+%the students would, say, get taught a semester's worth of
+%undergraduate algebra in the couple weeks.
+%This might seem absurd, but I think if your goal is to just show the students
+%what this algebra thing they keeping hearing about is,
+%and your students have substantially more mathematical maturity
+%relative to their knowledge (e.g.\ are IMO medalists),
+%it seems like you really could make a lot of headway.
+
+%For example: often classes like to prove things for completeness.
+%I personally find that many proofs don't really teach anything,
+%and that it is often better to say
+%``you could work this out if you wanted to,
+%but it's not worth your time''.
+Unlike university, it is \emph{not} the purpose of this book to
+train you to solve exercises or write proofs\footnote{Which is
+ not to say problem-solving isn't valuable;
+ I myself am a math olympiad coach at heart.
+ It's just not the point of this book.},
+or prepare you for research in the field.
+Instead I just want to show you some interesting math.
+The things that are presented should be memorable and worth caring about.
+For that reason, proofs that would be included for completeness
+in any ordinary textbook are often omitted here,
+unless there is some idea in the proof which I think is worth seeing.
+In particular, I place a strong emphasis on explaining
+why a theorem \emph{should} be true rather than writing down its proof.
+This is a recurrent theme of this book:
+\begin{moral}
+ Natural explanations supersede proofs.
+\end{moral}
+
+My hope is that after reading any particular chapter in Napkin,
+one might get the following out of it:
+\begin{itemize}
+ \ii Knowing what the precise definitions are of the main characters,
+
+ \ii Being acquainted with the few really major examples,
+
+ \ii Knowing the precise statements of famous theorems,
+ and having a sense of why they \emph{should} be true.
+\end{itemize}
+Understanding ``why'' something is true can have many forms.
+This is sometimes accomplished with a complete rigorous proof;
+in other cases, it is given by the idea of the proof;
+in still other cases, it is just a few key examples
+with extensive cheerleading.
+
+Obviously this is nowhere near enough if you want to e.g.\ do research in a field;
+but if you are just a curious outsider,
+I hope that it's more satisfying than the elevator pitch or Wikipedia articles.
+Even if you do want to learn a topic with serious depth,
+I hope that it can be a good zoomed-out overview before you really dive in,
+because in many senses the choice of material is
+``what I wish someone had told me before I started''.
+
+%In terms of learning math, I suspect that
+%there is some sense in which reading a textbook
+%is like watching a movie.
+%The first time you watch it, you spend most of your bandwidth
+%getting to know the characters,
+%keeping up with what's going on,
+%trying to predict where it's headed next.
+%But if you like the movie enough and watch it a second
+%or third, or $n$th time, you'll pick up a lot of things
+%that you didn't notice the first time through
+%(e.g.\ all the hints that Z was the main villain all along).
+
+\section*{More pedagogical comments and references}
+The preface would become too long if I talked about
+some of my pedagogical decisions chapter by chapter,
+so \Cref{ch:refs} contains those comments instead.
+
+In particular, I often name specific references,
+and the end of that appendix has more references.
+So this is a good place to look if you want further reading.
+
+\section*{Historical and personal notes}
+I began writing this book in December of 2014,
+after having finished my first semester of undergraduate at Harvard.
+It became my main focus for about 18 months after that,
+as I became immersed in higher math.
+I essentially took only math classes
+(gleefully ignoring all my other graduation requirements)
+and merged as much of it as I could
+(as well as lots of other math I learned on my own time)
+into the Napkin.
+
+Towards August of 2016, though, I finally lost steam.
+The first public drafts went online then, and I decided to step back.
+Having burnt out slightly,
+I then took a break from higher math,
+and spent the remaining two undergraduate years\footnote{Alternatively:
+ ``\dots and spent the next two years forgetting everything
+ I had painstakingly learned''.
+ Which made me grateful for all the past notes in the Napkin!}
+working extensively as a coach for the American math olympiad team,
+and trying to spend as much time with my friends as I could
+before they graduated and went their own ways.
+% It was also at this time I started to get into Korean pop dance,
+% which was the first (and only) extracurricular activity
+% I did during my entire undergraduate stay.
+
+During those two years, readers sent me many kind words of gratitude,
+many reports of errors, and many suggestions for additions.
+So in November of 2018,
+some weeks into my first semester as a math PhD student,
+I decided I should finish what I had started.
+Some months later, here is what I have.
+
+\section*{Acknowledgements}
+\todo{add more acknowledgments}
+I am indebted to countless people for this work.
+Here is a partial (surely incomplete) list.
+
+\begin{itemize}
+\ii Thanks to all my teachers and professors for teaching me much of the
+material covered in these notes,
+as well as the authors of all the references I have cited here.
+A special call-out to \cite{ref:55a}, \cite{ref:msci},
+\cite{ref:manifolds}, \cite{ref:gathmann}, \cite{ref:18-435},
+\cite{ref:etingof}, \cite{ref:145a}, \cite{ref:vakil},
+\cite{ref:pugh}, \cite{ref:gorin},
+which were especially influential.
+
+\ii Thanks also to dozens of friends and strangers
+who read through preview copies of my draft,
+and pointed out errors and gave other suggestions.
+Special mention to Andrej Vukovi\'c and Alexander Chua
+for together catching over a thousand errors.
+Thanks also to Brian Gu and Tom Tseng for many corrections.
+(If you find mistakes or have suggestions yourself,
+I would love to hear them!)
+
+\ii I'd also like to express my gratitude for
+many, many kind words I received
+during the development of this project.
+These generous comments led me to keep working on this,
+and were largely responsible for my decision in November 2018
+to begin updating the Napkin again.
+\end{itemize}
+
+Finally, a huge thanks to the math olympiad community,
+from which the Napkin (and I) have our roots.
+All the enthusiasm, encouragement, and thank-you notes I have received
+over the years led me to begin writing this in the first place.
+I otherwise would never have had the arrogance to dream that a project like this
+was at all possible.
+And of course I would be nowhere near where I am today were it not for the
+life-changing journey I took in chasing my dreams to the IMO.
+Forever TWN2!
diff --git a/books/napkin/proj-var.tex b/books/napkin/proj-var.tex
new file mode 100644
index 0000000000000000000000000000000000000000..82425225f7f61a467616c6d4857119da22923435
--- /dev/null
+++ b/books/napkin/proj-var.tex
@@ -0,0 +1,400 @@
+\chapter{Projective varieties}
+Having studied affine varieties in $\Aff^n$, we now consider $\CP^n$.
+We will also make it into a baby ringed space
+in the same way as with $\Aff^n$.
+
+\section{Graded rings}
+\prototype{$\CC[x_0, \dots, x_n]$ is a graded ring.}
+We first take the time to state what a graded ring is,
+just so that we have this language to use (now and later).
+
+\begin{definition}
+ A \vocab{graded ring} $R$ is a ring with the following additional structure:
+ as an abelian group, it decomposes as
+ \[ R = \bigoplus_{d \ge 0} R^d \]
+ where $R^0$, $R^1$, \dots, are abelian groups.
+ The ring multiplication has the property that
+ if $r \in R^d$ and $s \in R^e$, we have $rs \in R^{d+e}$.
+ Elements of an $R^d$ are called \vocab{homogeneous elements};
+ we write ``$d = \deg r$'' to mean ``$r \in R^d$''.
+
+ We denote by $R^+ = \bigoplus_{d \ge 1} R^d$ the ideal generated by
+ the homogeneous elements of positive degree,
+ and call it the \vocab{irrelevant ideal}.
+\end{definition}
+\begin{remark}
+ For experts: all our graded rings are commutative with $1$.
+\end{remark}
+\begin{example}
+ [Examples of graded rings]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The ring $\CC[x]$ is graded by degree: as abelian groups,
+ $\CC[x] \cong \CC \oplus x\CC \oplus x^2\CC \oplus \dots$.
+ \ii More generally, the polynomial ring $\CC[x_0, \dots, x_n]$
+ is graded by degree.
+ \end{enumerate}
+\end{example}
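+For concreteness, the rule $rs \in R^{d+e}$ is just the familiar fact
+that degrees add when multiplying homogeneous polynomials:
+in $\CC[x,y]$ we have, for instance,
+\[ \underbrace{(x^2+xy)}_{\deg 2} \cdot \underbrace{(x-y)}_{\deg 1}
+ = x^3 - xy^2 \in R^3. \]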
+\begin{abuse}
+ The notation $\deg r$ is abusive in the case $r = 0$;
+ note that $0 \in R^d$ for every $d$.
+ So it makes sense to talk about ``the'' degree of $r$
+ except when $r = 0$.
+\end{abuse}
+
+We will frequently refer to homogeneous ideals:
+\begin{definition}
+ An ideal $I \subseteq \CC[x_0, \dots, x_n]$ is \vocab{homogeneous}
+ if it can be written as $I = (f_1, \dots, f_m)$
+ where each $f_i$ is a homogeneous polynomial.
+\end{definition}
+\begin{remark}
+ If $I$ and $J$ are homogeneous,
+ then so are $I+J$, $IJ$, $I \cap J$, $\sqrt I$.
+\end{remark}
+\begin{lemma}[Graded quotients are graded too]
+ Let $I$ be a homogeneous ideal of a graded ring $R$.
+ Then
+ \[ R/I = \bigoplus_{d \ge 0} R^d / (R^d \cap I) \]
+ realizes $R/I$ as a graded ring.
+\end{lemma}
+Since these assertions are just algebra,
+we omit their proofs here.
+\begin{example}
+ [Example of a graded quotient ring]
+ Let $R = \CC[x,y]$ and set $I = (x^3, y^2)$.
+ Let $S = R/I$. Then
+ \begin{align*}
+ S^0 &= \CC \\
+ S^1 &= \CC x \oplus \CC y \\
+ S^2 &= \CC x^2 \oplus \CC xy \\
+ S^3 &= \CC x^2y \\
+ S^d &= 0 \qquad \forall d \ge 4.
+ \end{align*}
+ So in fact $S = R/I$ is graded,
+ and is a six-dimensional $\CC$-vector space.
+\end{example}
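+To spell out one of these computations:
+the degree $3$ part of $R$ is spanned by $x^3$, $x^2y$, $xy^2$, $y^3$,
+and modulo $I = (x^3, y^2)$ we have
+\[ x^3 \equiv 0, \qquad xy^2 = x \cdot y^2 \equiv 0, \qquad
+ y^3 = y \cdot y^2 \equiv 0, \]
+so only $x^2y$ survives, giving $S^3 = \CC x^2y$.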
+
+
+\section{The ambient space}
+\prototype{Perhaps $\Vp(x^2+y^2-z^2)$.}
+The set of points we choose to work with is $\CP^n$ this time,
+which for us can be thought of as the set of $(n+1)$-tuples
+\[ \left( x_0 : x_1 : \dots : x_n \right) \]
+not all zero, up to scaling.
+Equivalently, it is the set of lines through the origin in $\CC^{n+1}$.
+Projective space is defined in full in \Cref{sec:top_spaces},
+and you should refer there if you aren't familiar with projective space.
+
+The right way to think about it is ``$\Aff^n$ plus points at infinity'':
+\begin{definition}
+ We define the set
+ \[ U_i = \left\{ (x_0 : \dots : x_n) \mid x_i \neq 0 \right\}
+ \subseteq \CP^n. \]
+ These are called the \vocab{standard affine charts}.
+\end{definition}
+The name comes from:
+\begin{exercise}
+ [Mandatory]
+ Give a natural bijection from $U_i$ to $\Aff^n$.
+ Thus we can think of $\CP^n$ as the affine set $U_i$
+ plus ``points at infinity''.
+\end{exercise}
+\begin{remark}
+ In fact, these charts $U_i$ make $\CP^n$ with its usual topology
+ into a complex manifold with holomorphic transition functions.
+\end{remark}
+
+\begin{example}
+ [Colloquially, $\CP^1 = \Aff^1 \cup \{\infty\}$]
+ The space $\CP^1$ consists of pairs $(s:t)$,
+ which you can think of as representing the complex number $s/t$.
+ In particular $U_1 = \{ (z:1) \}$
+ is basically another copy of $\Aff^1$.
+ There is only one new point, $(1:0)$.
+\end{example}
+
+However, like before we want to impose a Zariski topology on it.
+For concreteness, let's consider $\CP^2 = \left\{ (x_0 : x_1 : x_2) \right\}$.
+We wish to consider zero loci in $\CP^2$, just like we did in affine space,
+and hence obtain a notion of a projective variety.
+
+But this isn't so easy: for example,
+the function ``$x_0$'' is not a well-defined function on points in $\CP^2$
+because $(x_0 : x_1 : x_2) = (5x_0 : 5x_1 : 5x_2)$!
+So we'd love to consider these ``pseudo-functions''
+that still have zero loci. These are just the homogeneous polynomials $f$,
+because $f$ is homogeneous of degree $d$ if and only if
+\[
+ f(\lambda x_0, \dots, \lambda x_n)
+ = \lambda^d f(x_0, \dots, x_n).
+\]
+In particular, the relation ``$f(x_0, \dots, x_n) = 0$'' is
+well-defined if $f$ is homogeneous. Thus, we can say:
+\begin{definition}
+ If $f$ is homogeneous, we can then define its \vocab{vanishing locus} as
+ \[
+ \Vp(f)
+ = \left\{ (x_0 : \dots : x_n) \mid f(x_0, \dots, x_n) = 0 \right\}.
+ \]
+\end{definition}
+
+The homogeneous condition is really necessary.
+For example, to require ``$x_0 - 1 = 0$'' makes no sense,
+since the points $(1:1:1)$ and $(2015:2015:2015)$ are the same.
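+In contrast, for a homogeneous $f$ the vanishing locus really is well-defined.
+For instance, with $f = x^2+y^2-z^2$ we compute
+\[ f(\lambda x, \lambda y, \lambda z)
+ = \lambda^2 x^2 + \lambda^2 y^2 - \lambda^2 z^2
+ = \lambda^2 f(x,y,z), \]
+so $f$ vanishes at $(x:y:z)$ if and only if it vanishes at
+$(\lambda x : \lambda y : \lambda z)$ for any $\lambda \neq 0$.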
+
+It's trivial to verify that homogeneous polynomials do exactly what we want;
+hence we can now define:
+\begin{definition}
+ A \vocab{projective variety} in $\CP^n$
+ is the common zero locus of an arbitrary
+ collection of homogeneous polynomials in $n+1$ variables.
+\end{definition}
+
+\begin{example}[A conic in $\CP^2$, or a cone in $\CC^3$]
+ Let's try to picture the variety
+ \[ \Vp(x^2+y^2-z^2) \subseteq \CP^2 \]
+ which consists of the points $(x:y:z)$ such that $x^2+y^2=z^2$.
+ If we view this as a subspace of $\CC^3$
+ (i.e.\ by thinking of $\CP^2$ as the set of lines through the origin),
+ then we get a ``cone'':
+ \begin{center}
+ \includegraphics{media/cone.pdf}
+ \end{center}
+
+ If we take the standard affine charts now, we obtain:
+ \begin{itemize}
+ \ii At $x=1$, we get a hyperbola $\VV(1+y^2-z^2)$.
+ \ii At $y=1$, we get a hyperbola $\VV(1+x^2-z^2)$.
+ \ii At $z=1$, we get a circle $\VV(x^2+y^2-1)$.
+ \end{itemize}
+ That said, over $\CC$ a hyperbola and circle
+ are the same thing; I'm cheating a little by drawing $\CC$
+ as one-dimensional, just like last chapter.
+\end{example}
+\begin{ques}
+ Draw the intersection of the cone above
+ with the $z=1$ plane, and check that you do in fact get a circle.
+ (This geometric picture will be crucial later.)
+\end{ques}
+
+\section{Homogeneous ideals}
+Now, the next thing we want to do is define $\Vp(I)$ for an ideal $I$.
+Of course, we again run into an issue with things like $x_0-1$ not
+making sense.
+
+The way out of this is to use only \emph{homogeneous} ideals.
+\begin{definition}
+ If $I$ is a homogeneous ideal, we define
+ \[ \Vp(I) = \{ x \mid f(x) = 0 \; \forall f \in I\}. \]
+\end{definition}
+\begin{exercise}
+ Show that the notion ``$f(x) = 0 \; \forall f \in I$''
+ is well-defined for a homogeneous ideal $I$.
+\end{exercise}
+So, we would hope for a Nullstellensatz-like theorem
+which bijects the homogeneous radical ideals with the projective varieties.
+Unfortunately:
+\begin{example}
+ [Irrelevant ideal]
+ To crush some dreams and hopes, consider the ideal
+ \[ I = (x_0, x_1, \dots, x_n). \]
+ This is called the \vocab{irrelevant ideal};
+ it is homogeneous and radical, yet $\Vp(I) = \varnothing$.
+\end{example}
+
+However, other than the irrelevant ideal:
+\begin{theorem}
+ [Homogeneous Nullstellensatz]
+ Let $I$ and $J$ be homogeneous ideals.
+ \begin{enumerate}[(a)]
+ \ii If $\Vp(I) = \Vp(J) \neq \varnothing$ then $\sqrt I = \sqrt J$.
+ \ii If $\Vp(I) = \varnothing$, then either $I = (1)$
+ or $\sqrt I = (x_0, x_1, \dots, x_n)$.
+ \end{enumerate}
+ Thus there is a natural bijection between:
+ \begin{itemize}
+ \ii projective varieties in $\CP^n$, and
+ \ii homogeneous radical ideals of $\CC[x_0, \dots, x_n]$
+ except for the irrelevant ideal.
+ \end{itemize}
+\end{theorem}
+\begin{proof}
+ For the first part, let $V = \Vp(I)$ and $W = \Vp(J)$
+ be projective varieties in $\CP^n$.
+ We can consider them as \emph{affine varieties} in $\Aff^{n+1}$
+ by using the interpretation of $\CP^n$
+ as lines through the origin in $\CC^{n+1}$.
+
+ Algebraically, this is done by taking the homogeneous ideals
+ $I, J \subseteq \CC[x_0, \dots, x_n]$
+ and using the same ideals to cut out \emph{affine} varieties
+ $V_{\text{aff}} = \VV(I)$ and $W_{\text{aff}} = \VV(J)$ in $\Aff^{n+1}$.
+ For example, the cone $x^2+y^2-z^2=0$ is a conic (a one-dimensional curve)
+ in $\CP^2$, but can also be thought of as a cone
+ (which is a two-dimensional surface) in $\Aff^3$.
+
+ Then for (a): since $V = W \neq \varnothing$, the affine cones agree,
+ i.e.\ $V_{\text{aff}} = W_{\text{aff}}$,
+ so the affine Nullstellensatz gives $\sqrt I = \sqrt J$.
+
+ For (b), either $V_{\text{aff}}$ is empty
+ or it is just the origin of $\Aff^{n+1}$,
+ so the Nullstellensatz implies either $I = (1)$
+ or $\sqrt I = (x_0, \dots, x_n)$ as desired.
+\end{proof}
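+As a sanity check of this bijection: in $\CP^2$ the homogeneous ideals
+$(x_0)$ and $(x_0^2)$ cut out the same line $\Vp(x_0)$, and indeed
+\[ \sqrt{(x_0^2)} = (x_0), \]
+so only the radical ideal $(x_0)$ appears in the correspondence.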
+Projective analogues of \Cref{thm:many_aff_variety}
+(on intersections and unions of varieties) hold verbatim
+for projective varieties as well.
+
+
+\section{As ringed spaces}
+\prototype{The regular functions on $\CP^1$ minus a point
+are exactly those of the form $P(s/t)$.}
+Now, let us make every projective variety $V$ into a baby ringed space.
+We already have the set of points, a subset of $\CP^n$.
+
+The topology is defined as follows.
+\begin{definition}
+ We endow $\CP^n$ with the \vocab{Zariski topology}
+ by declaring the sets of the form $\Vp(I)$,
+ where $I$ is a homogeneous ideal, to be the closed sets.
+
+ Every projective variety $V$ then inherits the Zariski
+ topology from its parent $\CP^n$.
+ The \vocab{distinguished open sets} $D(f)$ are $V \setminus \Vp(f)$.
+\end{definition}
+
+Thus every projective variety $V$ is now a topological space.
+It remains to endow it with a sheaf of regular functions $\OO_V$.
+To do this we have to be a little careful.
+In the affine case we had a nice little ring of functions,
+the coordinate ring $\CC[x_1,\dots,x_n] / I$,
+that we could use to provide the numerators and denominators.
+So, it seems natural to then define:
+\begin{definition}
+ The \vocab{homogeneous coordinate ring} of a projective variety
+ $V = \Vp(I) \subseteq \CP^n$, where $I$ is homogeneous radical,
+ is defined as the ring
+ \[ \CC[V] = \CC[x_0, \dots, x_n] / I. \]
+\end{definition}
+However, when we define a rational function we must impose
+a new requirement that the numerator and denominator are the same degree.
+\begin{definition}
+ Let $U \subseteq V$ be an open set of a projective variety $V$.
+ A \vocab{rational function} $\phi$ on $U$
+ is a quotient $f/g$, where $f,g \in \CC[V]$,
+ $f$ and $g$ are homogeneous of the same degree,
+ and $\Vp(g) \cap U = \varnothing$.
+ In this way we obtain a function $\phi : U \to \CC$.
+\end{definition}
+\begin{example}
+ [Examples of rational functions]
+ Let $V = \CP^1$ have coordinates $(s:t)$.
+ \begin{enumerate}[(a)]
+ \ii If $U = V$, then constant functions $c/1$
+ are the only rational functions on $U$.
+ \ii Now let $U_1 = V \setminus \{(1:0)\}$.
+ Then, an example of a regular function is
+ \[ \frac{s^2+9t^2}{t^2} = \left( \frac st \right)^2 + 9. \]
+ If we think of $U_1$ as $\CC$
+ (i.e.\ $\CP^1$ minus an infinity point, hence like $\Aff^1$)
+ then really this is just the function $x^2+9$.
+ \end{enumerate}
+\end{example}
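+Note that the equal-degree condition is exactly what makes such a quotient
+well-defined on points of $\CP^1$: scaling $(s:t)$ by $\lambda \neq 0$ gives
+\[ \frac{(\lambda s)^2 + 9(\lambda t)^2}{(\lambda t)^2}
+ = \frac{\lambda^2(s^2+9t^2)}{\lambda^2 t^2}
+ = \frac{s^2+9t^2}{t^2}, \]
+so the value does not depend on the representative chosen.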
+Then we can repeat the same definition as before:
+\begin{definition}
+ Let $U \subseteq V$ be an open set of a projective variety $V$.
+ We say a function $\phi : U \to \CC$ is a \vocab{regular function} if
+ for every point $p$, we can find an open set $U_p$ containing $p$
+ and a rational function $f_p/g_p$ on $U_p$ such that
+ \[ \phi(x) = \frac{f_p(x)}{g_p(x)} \qquad \forall x \in U_p. \]
+ In particular, we require $U_p \cap \Vp(g_p) = \varnothing$.
+ We denote the set of all regular functions on $U$ by $\OO_V(U)$.
+\end{definition}
+Of course, the rational functions from the previous example
+are examples of regular functions as well.
+This completes the definition of a projective variety $V$
+as a baby ringed space.
+
+\section{Examples of regular functions}
+Naturally, I ought to tell you what the regular functions
+on distinguished open sets are; this is an analog to
+\Cref{thm:reg_func_distinguish_open} from last time.
+\begin{theorem}
+ [Regular functions on distinguished open sets for projective varieties]
+ \label{thm:proj_reg_func_dist_open}
+ Let $V$ be a projective variety, and let $g \in \CC[V]$ be homogeneous
+ of \emph{positive degree} (thus $g$ is nonconstant).
+ Then
+ \[
+ \OO_V(D(g))
+ = \left\{ \frac{f}{g^r} \mid
+ f \in \CC[V] \text{ homogeneous of degree $r\deg g$}
+ \right\}.
+ \]
+\end{theorem}
+What about the case $g = 1$?
+A similar result holds, but we need the assumption that $V$ is irreducible.
+\begin{definition}
+ A projective variety $V$ is \vocab{irreducible}
+ if it can't be written as the union of two proper (projective) sub-varieties.
+\end{definition}
+\begin{theorem}
+ [Only constant regular functions on projective space]
+ Let $V$ be an \emph{irreducible} projective variety.
+ Then the only regular functions on $V$ are constant,
+ thus we have \[ \OO_V(V) \cong \CC. \]
+ This relies on the fact that $\CC$ is algebraically closed.
+\end{theorem}
+Proofs of these are omitted for now.
+\begin{example}
+ [Irreducibility is needed above]
+ The reason we need $V$ irreducible is otherwise
+ we could, for example, take $V$ to be the union of two points;
+ in this case $\OO_V(V) \cong \CC^{\oplus 2}$.
+\end{example}
+
+\begin{remark}
+ It might seem strange that $\OO_V(D(g))$ behaves so differently
+ when $g = 1$. One vague explanation is that in a projective variety,
+ a distinguished open $D(g)$ looks much like an affine variety if $\deg g > 0$.
+ For example, in $\CP^1$ we have $\CP^1 \setminus \{0\} \cong \Aff^1$
+ (where $\cong$ is used in a sense that I haven't made precise).
+ Thus the claim becomes related to the corresponding affine result.
+ But if $\deg g = 0$ and $g \neq 0$, then $D(g)$ is the entire projective variety,
+ which does not look affine, and thus the analogy breaks down.
+\end{remark}
+
+\begin{example}[Regular functions on $\CP^1$]
+ Let $V = \CP^1$, with coordinates $(s:t)$.
+ \begin{enumerate}[(a)]
+ \ii By \Cref{thm:proj_reg_func_dist_open},
+ if $U_1$ is the standard affine chart
+ omitting the point $(1:0)$, we have
+ $ \OO_V(U_1) = \left\{ \frac{f}{t^n} \mid \deg f = n \right\} $.
+ One can write this as
+ \[ \OO_V(U_1) \cong \left\{ P(s/t) \mid P \in \CC[x] \right\}
+ \cong \OO_{\Aff^1} (\Aff^1). \]
+ This conforms with our knowledge that $U_1$
+ ``looks very much like $\Aff^1$''.
+ \ii As $V$ is irreducible, $\OO_V(V) = \CC$:
+ there are no nonconstant functions on $\CP^1$.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Regular functions on $\CP^2$]
+ Let $\CP^2$ have coordinates $(x:y:z)$ and
+ let $U_0 = \left\{ (x:y:1) \in \CP^2 \right\}$
+ be the distinguished open set $D(z)$.
+ Then in the same vein,
+ \[
+ \OO_{\CP^2}(U_0)
+ = \left\{ \frac{P(x,y)}{z^n} \mid \deg P = n \right\}
+ \cong \left\{ P(x/z, y/z) \mid P \in \CC[x,y] \right\}.
+ \]
+\end{example}
+
+\section{\problemhead}
+\todo{Problems:}
+% should be easy to come up with some explicit examples to play with
diff --git a/books/napkin/proj.tex b/books/napkin/proj.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3876d3e1f8312fb38c52d986cb59e7f1cc84e076
--- /dev/null
+++ b/books/napkin/proj.tex
@@ -0,0 +1 @@
+\chapter{Projective schemes}
diff --git a/books/napkin/quasi-proj.tex b/books/napkin/quasi-proj.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c55281f97a692465879372c68481a8347320d02d
--- /dev/null
+++ b/books/napkin/quasi-proj.tex
@@ -0,0 +1,480 @@
+\chapter{Morphisms of varieties}
+In preparation for our work with schemes,
+we will finish this part by talking about \emph{morphisms}
+between affine and projective varieties,
+given that we have taken the time to define them.
+
+Idea: we know both affine and projective varieties are special
+cases of baby ringed spaces, so in fact we will just define a morphism
+between \emph{any} two baby ringed spaces.
+
+\section{Defining morphisms of baby ringed spaces}
+\prototype{See next section.}
+Let $(X, \OO_X)$ and $(Y, \OO_Y)$ be baby ringed spaces, and think
+about how to define a morphism between them.
+
+The guiding principle in algebra is that we want morphisms
+to be functions on underlying structure, but also respect
+the enriched additional data on top.
+To give some examples from the very beginning of time:
+\begin{example}
+ [How to define a morphism]
+ \listhack
+ \begin{itemize}
+ \ii Consider groups.
+ A group $G$ has an underlying set (of elements),
+ which we then enrich with a multiplication operation.
+ So a homomorphism is a map of the underlying sets,
+ plus it has to respect the group multiplication.
+ \ii Consider $R$-modules.
+ Each $R$-module has an underlying abelian group,
+ which we then enrich with scalar multiplication.
+ So we require that a linear map respects the scalar multiplication as well,
+ in addition to being a homomorphism of abelian groups.
+ \ii Consider topological spaces.
+ A space $X$ has an underlying set (of points),
+ which we then enrich with a topology of open sets.
+ So we consider maps of the set of points
+ which respect the topology (pre-images of open sets are open).
+ \end{itemize}
+\end{example}
+This time, the ringed spaces $(X, \OO_X)$ have an underlying
+\emph{topological space}, which we have enriched with a structure sheaf.
+So, we want a continuous map $f \colon X \to Y$ of these topological spaces,
+which we then need to respect the sheaf of regular functions.
+
+How might we do this? Well, if we let $\psi \colon Y \to \CC$
+be a regular function, then composition gives us
+a natural map $X \to Y \to \CC$.
+We then want to require that this is also a regular function.
+
+More generally, we can take any regular function on $Y$
+and obtain some function on $X$, which we call a pullback.
+We then require that all the pullbacks are regular on $X$.
+\begin{definition}
+ Let $(X, \OO_X)$ and $(Y, \OO_Y)$ be baby ringed spaces.
+ Given a map $f \colon X \to Y$ and a regular function $\phi \in \OO_Y(U)$,
+ we define the \vocab{pullback} of $\phi$, denoted $f^\sharp\phi$,
+ to be the composed function
+ \[ f\pre(U) \taking{f} U \taking{\phi} \CC. \]
+\end{definition}
+The use of the word ``pullback'' is the same as in our study
+of differential forms.
+
+\begin{definition}
+ Let $(X, \OO_X)$ and $(Y, \OO_Y)$ be baby ringed spaces.
+ A continuous map of topological spaces $f \colon X \to Y$
+ is a \vocab{morphism} if every pullback of a regular function on $Y$
+ is a regular function on $X$.
+
+ Two baby ringed spaces are \vocab{isomorphic}
+ if there are mutually inverse morphisms between them,
+ which we then call \vocab{isomorphisms}.
+\end{definition}
+
+In particular, the pullback gives us a (reversed) \emph{ring homomorphism}
+\[ f^\sharp \colon \OO_Y(U) \to \OO_X(f\pre(U)) \] for \emph{every} $U$;
+thus our morphisms package a lot of information.
+Here's a picture of a morphism $f$,
+and the pullback of $\phi \colon U \to \CC$ (where $U \subseteq Y$).
+\begin{center}
+ \begin{asy}
+ size(13cm);
+ bigblob("$X$");
+ pair p = (0.5,0);
+ filldraw(CR(p, 1), opacity(0.2)+lightgreen, deepgreen+dashed);
+ label("$f^{\text{pre}}(U)$", p+dir(135), dir(135), deepgreen);
+
+ transform t = scale(0.8) * shift(14*dir(180));
+ add(t * CC());
+ p = t*p;
+
+ bigblob("$Y$");
+ pair q = (0,0.5);
+ filldraw(CR(q, 1.2), opacity(0.2)+lightred, red+dashed);
+ label("$U$", q+1.2*dir(45), dir(45), red);
+
+ draw( (-9,0.5)--(-3,0.5), EndArrow);
+ label("$f$", (-6,0.5), dir(90));
+
+ draw(q--(0,-8), red, EndArrow);
+ label("$\phi \in \mathcal O_Y(U)$", (0,-6), dir(0), red);
+ label(scale(2.718)*"$\mathbb C$", (0,-9), black);
+
+ draw( p--(-0.5,-8), deepgreen, EndArrow);
+ label("$f^\sharp \phi \in \mathcal O_X(f^{\text{pre}}(U))$",
+ (p+(-0.5,-8))/2, dir(225), deepgreen);
+ \end{asy}
+\end{center}
+
+\begin{example}
+ [The pullback of $\frac{1}{y-25}$ under $t \mapsto t^2$]
+ The map
+ \[ f \colon X = \Aff^1 \to Y = \Aff^1 \quad\text{by}\quad t \mapsto t^2 \]
+ is a morphism of varieties.
+ For example, consider the regular function $\varphi = \frac{1}{y-25}$ on
+ the open set $Y \setminus \{25\} \subseteq Y$.
+ The $f$-inverse image is $X \setminus \{\pm5\}$.
+ Thus the pullback is
+ \begin{align*}
+ f^\sharp\varphi \colon X \setminus \{\pm5\} &\to \CC \\
+ \quad\text{by}\quad x &\mapsto \frac{1}{x^2-25}
+ \end{align*}
+ which is regular on $X \setminus \{\pm5\}$.
+\end{example}
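+As an even simpler instance of the same definition,
+the pullback of the coordinate function $y \in \OO_Y(Y)$
+under the same map $f$ is
+\[ (f^\sharp y)(x) = y(f(x)) = x^2, \]
+which is indeed regular on all of $X = \Aff^1$.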
+
+\section{Classifying the simplest examples}
+\prototype{\Cref{thm:affine_global_polynomial};
+they're just polynomials.}
+
+On a philosophical point, we like the earlier definition because
+it adheres to our philosophy of treating our
+varieties as intrinsic objects, rather than embedded ones.
+However, it is somewhat of a nuisance to actually verify it.
+
+So in this section, we will
+\begin{itemize}
+ \ii classify all the morphisms from $\Aff^m \to \Aff^n$, and
+ \ii classify all the morphisms from $\CP^m \to \CP^n$.
+\end{itemize}
+In what follows, I will wave my hands a lot in claiming
+that something is a morphism, since doing so is mostly detail checking.
+The theorems which follow will give us alternative definitions
+of morphism which are more coordinate-based
+and easier to use for actual computations.
+
+\subsection{Affine classification}
+Earlier we saw how $t \mapsto t^2$ gives us a map.
+More generally, given any polynomial $P(t)$,
+the map $t \mapsto P(t)$ will work.
+And in fact, that's all:
+\begin{exercise}
+ Let $X = \Aff^1$, $Y = \Aff^1$.
+ By considering the pullback of $\id \in \OO_Y(Y)$,
+ show that no other morphisms exist.
+\end{exercise}
+
+In fact, let's generalize the previous exercise:
+\begin{theorem}
+ [Regular maps of affine varieties are globally polynomials]
+ Let $X \subseteq \Aff^m$ and $Y \subseteq \Aff^n$ be affine varieties.
+ Every morphism $f \colon X \to Y$ of varieties is given by
+ \[
+ x = \left( x_1, \dots, x_m \right)
+ \xmapsto{f} \left( P_1(x), \dots, P_n(x) \right)
+ \]
+ where $P_1$, \dots, $P_n$ are polynomials.
+ \label{thm:affine_global_polynomial}
+\end{theorem}
+\begin{proof}
+ It's not too hard to see that all such functions work,
+ so let's go the other way.
+ Let $f \colon X \to Y$ be a morphism.
+
+ First, remark that $f\pre(Y) = X$.
+ Now consider the regular function $\pi_1 \in \OO_Y(Y)$,
+ given by the projection $(y_1, \dots, y_n) \mapsto y_1$.
+ Thus we need the pullback $\pi_1 \circ f$ to be regular on $X$.
+
+ But for affine varieties $\OO_X(X)$ is just the coordinate ring $\CC[X]$
+ and so we know there is a polynomial $P_1$ such that $\pi_1 \circ f = P_1$.
+ Similarly for the other coordinates.
+\end{proof}
+
+\subsection{Projective classification}
+Unfortunately, the situation is a little weirder in the projective setting.
+If $X \subseteq \CP^m$ and $Y \subseteq \CP^n$ are projective varieties,
+then every function
+\[
+ x = \left( x_0 : x_1 : \dots : x_m \right)
+ \mapsto \left( P_0(x) : P_1(x) : \dots : P_n(x) \right)
+\]
+is a valid morphism, provided the $P_i$ are homogeneous
+of the same degree and don't all vanish simultaneously.
+However if we try to repeat the proof for affine varieties
+we run into an issue: there is no $\pi_1$ morphism.
+(Would we send $(1:1) = (2:2)$ to $1$ or $2$?)
+
+And unfortunately, there is no way to repair this.
+Counterexample:
+\begin{example}
+ [Projective map which is not globally polynomial]
+ Let $V = \Vp(xy-z^2) \subseteq \CP^2$.
+ Then the map
+ \[
+ V \to \CP^1
+ \quad\text{by}\quad
+ (x:y:z)
+ \mapsto
+ \begin{cases}
+ (x:z) & x \neq 0 \\
+ (z:y) & y \neq 0
+ \end{cases}
+ \]
+ turns out to be a morphism of projective varieties.
+ This is well defined just because $(x:z) = (z:y)$ if $x,y \neq 0$;
+ this should feel reminiscent of the definition of regular function.
+\end{example}
+The good news is that ``local'' issues are the only limiting factor.
+\begin{theorem}
+ [Regular maps of projective varieties are locally polynomials]
+ Let $X \subseteq \CP^m$ and $Y \subseteq \CP^n$ be projective varieties
+ and let $f \colon X \to Y$ be a morphism.
+ Then at every point $p \in X$ there exists
+ an open neighborhood $U_p \ni p$
+ and polynomials $P_0$, $P_1$, \dots, $P_n$ (which depend on $U$) so that
+ \[ f(x) = \left( P_0(x) : P_1(x) : \dots : P_n(x) \right)
+ \quad \forall x = (x_0 : \dots : x_n) \in U_p. \]
+ Of course the polynomials $P_i$ must be homogeneous of the same degree
+ and cannot vanish simultaneously on any point of $U_p$.
+\end{theorem}
+\begin{example}
+ [Example of an isomorphism]
+ In fact, the map $V = \Vp(xy-z^2) \to \CP^1$ is an isomorphism.
+ The inverse map $\CP^1 \to V$ is given by
+ \[ (s:t) \mapsto (s^2:t^2:st). \]
+ Thus actually $V \cong \CP^1$.
+\end{example}
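+One can verify directly that these maps are mutually inverse
+using the relation $xy = z^2$ on $V$: for example, when $x \neq 0$,
+\[ (x:y:z) \mapsto (x:z) \mapsto (x^2 : z^2 : xz)
+ = (x^2 : xy : xz) = (x:y:z), \]
+and in the other direction
+$(s:t) \mapsto (s^2:t^2:st) \mapsto (s^2:st) = (s:t)$ when $s \neq 0$.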
+
+\section{Some more applications and examples}
+\prototype{$\Aff^1 \injto \CP^1$ is a good one.}
+The previous section completely settles morphisms
+from affine varieties to affine varieties,
+and from projective varieties to projective varieties.
+However, the definition we gave at the start of the chapter
+works for \emph{any} baby ringed spaces,
+and therefore there is still a lot of room to explore.
+
+For example, \textbf{we can have affine spaces talk to projective ones}.
+Why not?
+The power of our pullback-based definition
+is that it lets any two baby ringed spaces communicate,
+even if they live in different places.
+\begin{example}
+ [Embedding $\Aff^1 \injto \CP^1$]
+ Consider a morphism
+ \[ f \colon \Aff^1 \injto \CP^1 \quad\text{by}\quad t \mapsto (t:1). \]
+ This is also a morphism of varieties.
+ (Can you see what the pullbacks look like?)
+ This reflects the fact that $\CP^1$ is ``$\Aff^1$ plus a point at infinity''.
+\end{example}
+
+Here is another way you can generate more baby ringed spaces.
+Given any projective variety,
+you can take an open subset of it,
+and that will itself be a baby ringed space.
+We give this a name:
+\begin{definition}
+ A \vocab{quasi-projective variety} is
+ an open set $X$ of a projective variety $V$.
+ It is a baby ringed space $(X, \OO_X)$ too,
+ because for any open set $U \subseteq X$
+ we simply define $\OO_X(U) = \OO_V(U)$.
+\end{definition}
+
+We chose to take open subsets of projective varieties
+because this will subsume the affine ones, for example:
+\begin{example}
+ [The parabola is quasi-projective]
+ Consider the parabola $V = \VV(y-x^2) \subset \Aff^2$.
+ We take the projective variety $W = \Vp(zy-x^2)$
+ and look at the standard affine chart $D(z)$.
+ Then there is an isomorphism
+ \begin{align*}
+ V &\to D(z) \subseteq W \\
+ (x,y) &\mapsto (x:y:1) \\
+ (x/z, y/z) &\mapsfrom (x:y:z).
+ \end{align*}
+ Consequently, $V$ is (isomorphic to)
+ an open subset of $W$, thus we regard it as quasi-projective.
+\end{example}
+
+\missingfigure{parabola}
+
+In general this proof can be readily adapted:
+\begin{proposition}
+ [Affine $\subseteq$ quasi-projective]
+ Every affine variety is isomorphic to a quasi-projective one
+ (i.e.\ every affine variety is an open subset of a projective variety).
+\end{proposition}
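+Here is the idea of the general proof, mimicking the parabola:
+given $V = \VV(f_1, \dots, f_m) \subseteq \Aff^n$,
+embed $\Aff^n$ into $\CP^n$ by
+\[ (x_1, \dots, x_n) \mapsto (x_1 : \dots : x_n : 1), \]
+let $W = \Vp(\widetilde f_1, \dots, \widetilde f_m)$
+where $\widetilde f_i$ denotes the homogenization of $f_i$
+with respect to the extra variable $x_{n+1}$
+(to be careful, one should homogenize generators of the full ideal of $V$),
+and check that $V$ is isomorphic to the open subset $W \cap D(x_{n+1})$ of $W$.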
+So quasi-projective varieties generalize both types of varieties we have seen.
+
+%\begin{remark}
+%Note also that if $X \cong Y$ as quasi-projective varieties,
+%then we get a lot of information.
+%For one thing, we know that $X$ and $Y$ are homeomorphic topological spaces.
+%But moreover, we get bijections between all the rings $\OO_X(f\pre(U))$
+%and $\OO_Y(U)$; in particular, we have $\OO_X(X) \cong \OO_Y(Y)$.
+%\end{remark}
+%
+%In summary, we have three different types of varieties:
+%\begin{itemize}
+% \ii Affine varieties,
+% \ii Projective varieties, and
+% \ii Quasi-projective varieties.
+%\end{itemize}
+%These are baby ringed spaces, and so the notion of isomorphism
+%shows that the last type subsumes the first two.
+%
+%The reason quasi-projective varieties are interesting is that
+%almost all varieties which occur in real life are quasi-projective.
+%In fact, it is tricky even to exhibit a single example
+%of a variety which is not quasi-projective.
+%I certainly don't know one.
+
+\section{The hyperbola effect}
+\prototype{$\Aff^1 \setminus \{0\}$ is even affine}
+
+So here is a natural question: are there quasi-projective varieties
+which are neither affine nor projective?
+The answer is yes, but for the sake of narrative
+I'm going to play dumb and find a \emph{non-example},
+with the actual example being given in the problems.
+
+Our first guess might be to take the simplest projective variety,
+say $\CP^1$, and delete a point (to get an open set).
+This is quasi-projective, but it's isomorphic to $\Aff^1$.
+So instead we start with the simplest affine variety,
+say $\Aff^1$, and try to delete a point.
+
+Surprisingly, this doesn't work.
+\begin{example}
+ [Crucial example: punctured line is isomorphic to hyperbola]
+ Let $X = \Aff^1 \setminus \{0\}$ be a quasi-projective variety.
+ We claim that in fact we have an isomorphism
+ \[ X \cong V = \VV(xy-1) \subseteq \Aff^2 \]
+ which shows that $X$ is still isomorphic to an affine variety.
+ The maps are
+ \begin{align*}
+ X & \leftrightarrow V \\
+ t &\mapsto (t, 1/t) \\
+ x &\mapsfrom (x, y).
+ \end{align*}
+\end{example}
+Intuitively, the ``hyperbola $y=1/x$'' in $\Aff^2$ can be projected
+onto the $x$-axis.
+Here is the relevant picture.
+\begin{center}
+ \begin{asy}
+ import graph;
+ graph.xaxis("$x$", -4, 4);
+ graph.yaxis("$y$", -4, 4);
+
+ real f (real x) { return 1/x; }
+ draw(graph(f,-4,-0.25,operator ..), blue);
+ draw(graph(f,0.25,4,operator ..), blue);
+ label("$\mathcal V(xy-1)$", (1,1), dir(45), blue);
+
+ draw( (0,-5)--(0,-8), heavygreen, EndArrow );
+
+ pair A = (-4,-9); pair B = (4,-9);
+ draw(A--B, red, Arrows);
+ // label("$\mathbb V()$", (0,0), 2*dir(-90));
+ opendot((0,-9), red+1.5);
+ label("$X$", B, dir(-90), red);
+ \end{asy}
+\end{center}
+
+Actually, deleting any finite number of points from $\Aff^1$ fails.
+If we delete $\{1,2,3\}$, the resulting open set
+is isomorphic as a baby ringed space to $\VV(y(x-1)(x-2)(x-3)-1)$,
+which colloquially might be called $y = \frac{1}{(x-1)(x-2)(x-3)}$.
+
+The truth is more general.
+\begin{moral}
+ Distinguished open sets of affine varieties are affine.
+\end{moral}
+Here is the exact isomorphism.
+\begin{theorem}
+ [Distinguished open subsets of affines are affine]
+ Consider $X = D(f) \subseteq V = \VV(f_1, \dots, f_m) \subseteq \Aff^n$,
+ where $V$ is an affine variety,
+ and the distinguished open set $X$ is thought of as
+ a quasi-projective variety.
+ Define
+ \[ W = \VV(f_1, \dots, f_m, y \cdot f-1) \subseteq \Aff^{n+1} \]
+ where $y$ is the $(n+1)$st coordinate of $\Aff^{n+1}$.
+
+ Then $X \cong W$.
+\end{theorem}
+For lack of a better name, I will dub this the
+\vocab{hyperbola effect}, and it will play a significant role later on.
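+To spell out the maps, in the same spirit as the hyperbola:
+\begin{align*}
+ X &\leftrightarrow W \\
+ (x_1, \dots, x_n) &\mapsto \left( x_1, \dots, x_n, \frac{1}{f(x_1, \dots, x_n)} \right) \\
+ (x_1, \dots, x_n) &\mapsfrom (x_1, \dots, x_n, y).
+\end{align*}
+The point is that $1/f$ is a regular function on $X = D(f)$,
+so both maps are morphisms.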
+
+Therefore, if we wish to find an example of a quasi-projective
+variety which is not affine,
+one good place to look would be an open set of an affine space
+which is not distinguished open.
+If you are ambitious now,
+you can try to prove the punctured plane
+(that is, $\Aff^2$ minus the origin) works.
+We will see that example once again later in the next chapter,
+so you will have a second chance to do so.
+
+\section\problemhead
+\begin{problem}
+ Consider the map
+ \[ \Aff^1 \to \VV(y^2-x^3) \subseteq \Aff^2
+ \quad\text{by}\quad t \mapsto (t^2, t^3). \]
+ Show that it is a morphism of varieties,
+ but it is not an isomorphism.
+\end{problem}
+
+\begin{dproblem}
+ Show that every point of a projective variety has an open neighborhood
+ which is isomorphic to an affine variety.
+ In this way, ``projective varieties are locally affine''.
+ \begin{hint}
+ Use the standard affine charts.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ Let $V$ be an affine variety
+ and let $W$ be an irreducible projective variety.
+ Prove that $V \cong W$ if and only
+ if $V$ and $W$ are a single point.
+ \begin{hint}
+ Examine the global regular functions.
+ \end{hint}
+ \begin{sol}
+ If they were isomorphic, we would have $\OO_V(V) \cong \OO_W(W)$.
+ For irreducible projective varieties, $\OO_W(W) \cong \CC$,
+ while for affine varieties $\OO_V(V) \cong \CC[V]$.
+ Thus we conclude $V$ must be a single point.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Punctured plane is not affine]
+ \gim
+ Let $X = \Aff^2 \setminus \{ (0,0) \}$ be an open set of $\Aff^2$.
+ Let $V$ be any affine variety and let $f \colon X \to V$ be a morphism.
+ Show that $f$ is not an isomorphism.
+ \begin{hint}
+ Assume $f$ was an isomorphism.
+ Then it gives an isomorphism $f^\sharp \colon \OO_V(V) \to \OO_X(X) = \CC[x,y]$.
+ Thus we may write $\OO_V(V) = \CC[a,b]$,
+ where $f^\sharp(a) = x$ and $f^\sharp(b) = y$.
+ Let $f(p) = q$ where $\VV(a,b) = \{q\}$.
+ Use the definition of pullback to prove $p \in \VV(x,y)$, contradiction.
+ \end{hint}
+ \begin{sol}
+ Assume for contradiction there is an affine variety $V$
+ and an isomorphism
+ \[ f \colon X \to V. \]
+ Then taking the pullback we get a ring isomorphism
+ \[ f^\sharp \colon \OO_V(V) \to \OO_X(X) = \CC[x,y]. \]
+ Now let $\OO_V(V) = \CC[a,b]$ where $f^\sharp(a) = x$, $f^\sharp(b) = y$.
+ In particular, we actually have to have $V \cong \Aff^2$.
+
+ Now in the \emph{affine} variety $V$ we can take $\VV(a)$ and $\VV(b)$;
+ these have nonempty intersection since $(a,b)$
+ is a maximal ideal in $\OO_V(V)$.
+ Call this point $q$, and let $p$ be a point with $f(p) = q$.
+
+ Then
+ \[ 0 = a(q) = (f^\sharp a)(p) = x(p) \]
+ and so $p \in \VV(x) \subseteq X$.
+ Similarly, $p \in \VV(y) \subseteq X$,
+ but this is a contradiction since $\VV(x,y) = \varnothing$.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/quasicoh.tex b/books/napkin/quasicoh.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c4f94877a9384d3c106a763bc1892fe4d5ca681f
--- /dev/null
+++ b/books/napkin/quasicoh.tex
@@ -0,0 +1 @@
+\chapter{Quasicoherent sheaves}
diff --git a/books/napkin/quotient.tex b/books/napkin/quotient.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d941223e629e5318f27fbd3301f64839b3546b24
--- /dev/null
+++ b/books/napkin/quotient.tex
@@ -0,0 +1,755 @@
+\chapter{Homomorphisms and quotient groups}
+\section{Generators and group presentations}
+\prototype{$D_{2n} = \left< r,s \mid r^n=s^2=1\right>$}
+
+Let $G$ be a group.
+Recall that for some element $x \in G$,
+we could consider the subgroup
+\[ \left\{ \dots, x^{-2}, x^{-1}, 1, x, x^2, \dots \right\} \]
+of $G$.
+Here's a more pictorial version of what we did:
+\textbf{put $x$ in a box, seal it tightly, and shake vigorously}.
+Using just the element $x$,
+we get a pretty explosion that produces the subgroup above.
+
+What happens if we put two elements $x$, $y$ in the box?
+Among the elements that get produced are things like
+\[ xyxyx, \quad x^2y^9x^{-5}y^3, \quad y^{-2015}, \quad \dots\]
+Essentially, I can create any finite product of $x$, $y$, $x\inv$, $y\inv$.
+This leads us to define:
+\begin{definition}
+ Let $S$ be a subset of $G$.
+ The subgroup \vocab{generated} by $S$,
+ denoted $\left< S \right>$,
+ is the set of elements which can be written as a
+ finite product of elements in $S$ (and their inverses).
+ If $\left< S \right> = G$ then we say $S$ is a set of
+ \vocab{generators} for $G$,
+ as the elements of $S$ together create all of $G$.
+\end{definition}
+\begin{exercise}
+ Why is the condition ``and their inverses''
+ not necessary if $G$ is a finite group?
+ (As usual, assume Lagrange's theorem.)
+\end{exercise}
+
+\begin{example}[$\ZZ$ is the infinite cyclic group]
+ Consider $1$ as an element of $\ZZ = (\ZZ, +)$.
+ We see $\left<1\right> = \ZZ$, meaning $\{1\}$ generates $\ZZ$.
+ It's important that $-1$, the inverse of $1$, is also allowed:
+ we need it to write the negative integers as sums of copies of $-1$.
+\end{example}
+
+This gives us an idea for a way to try and express groups compactly.
+Why not just write down a list of generators for the groups?
+For example, we could write
+\[ \ZZ \cong \left< a \right> \]
+meaning that $\ZZ$ is just the group generated by one element.
+
+There's one issue: the generators usually satisfy certain properties.
+For example, consider $\Zc{100}$.
+It's also generated by a single element $x$,
+but this $x$ has the additional property that $x^{100} = 1$.
+This motivates us to write
+\[ \Zc{100} = \left< x \mid x^{100} = 1 \right>. \]
+I'm sure you can see where this is going.
+All we have to do is specify a set of generators and
+\vocab{relations} between the generators,
+and say that two elements are equal if and only if
+you can get from one to the other using relations.
+Such an expression is appropriately called a \vocab{group presentation}.
+
+\begin{example}[Dihedral group]
+ The dihedral group of order $2n$ has a presentation
+ \[ D_{2n} = \left< r, s
+ \mid r^n = s^2 = 1, rs = sr\inv \right>. \]
+ Thus each element of $D_{2n}$ can be written uniquely in the form $r^\alpha$
+ or $sr^\alpha$, where $\alpha = 0, 1, \dots, n-1$.
+\end{example}
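+As a sanity check, here is how the relation $rs = sr\inv$
+pushes an arbitrary word into that normal form, say:
+\[ r^2 s r^3 = r(rs)r^3 = r(sr\inv)r^3 = (rs)r^2 = (sr\inv)r^2 = sr. \]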
+
+\begin{example}[Klein four group]
+ The \vocab{Klein four group},
+ isomorphic to $\Zc2 \times \Zc2$, is given by the presentation
+ \[ \left< a,b \mid a^2=b^2=1, ab=ba \right>. \]
+\end{example}
+
+\begin{example}
+ [Free group]
+ The \vocab{free group on $n$ elements} is the group
+ whose presentation has $n$ generators and no relations at all.
+ It is denoted $F_n$, so
+ \[
+ F_n = \left< x_1, x_2, \dots, x_n \right>.
+ \]
+ In other words, $F_2 = \left< a,b \right>$ is the set of strings
+ formed by appending finitely many copies of $a$, $b$, $a\inv$, $b\inv$ together.
+\end{example}
+\begin{ques}
+ Convince yourself that $F_1 \cong \ZZ$.
+\end{ques}
+\begin{abuse}
+ One might unfortunately notice that ``subgroup generated by $a$ and $b$''
+ has exactly the same notation as the free group $\left< a,b \right>$.
+ We'll try to be clear based on context which one we mean.
+\end{abuse}
+
+Presentations are nice because they provide a compact way to write down groups.
+They do have some shortcomings, though.\footnote{%
+Actually, determining whether two words in a presentation represent the same element is undecidable in general.
+In fact, it is even undecidable to determine whether a group is finite from its presentation.}
+
+\begin{example}
+ [Presentations can look very different]
+ The same group can have very different presentations.
+ For instance consider
+ \[ D_{2n} = \left< x,y \mid x^2=y^2=1, (xy)^n=1 \right>. \]
+ (To see why this is equivalent, set $x=s$, $y=rs$.)
+\end{example}
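+Indeed, substituting $x=s$ and $y=rs$
+and using $rs = sr\inv$ (equivalently $sr = r\inv s$),
+the new relations can be checked directly:
+\[ x^2 = s^2 = 1, \qquad
+ y^2 = (rs)(rs) = r(sr)s = r(r\inv s)s = 1, \qquad
+ xy = s(rs) = (sr)s = (r\inv s)s = r\inv, \]
+so $(xy)^n = r^{-n} = 1$.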
+
+\section{Homomorphisms}
+\prototype{The ``mod out by $100$'' map, $\ZZ \to \Zc{100}$.}
+
+How can groups talk to each other?
+
+Two groups are ``the same'' if we can write an isomorphism between them.
+And as we saw, two metric spaces are ``the same''
+if we can write a homeomorphism between them.
+But what's the group analogy of a continuous map?
+We simply drop the ``bijection'' condition.
+
+\begin{definition}
+ Let $G = (G, \star)$ and $H = (H, \ast)$ be groups.
+ A \vocab{group homomorphism} is a map $\phi : G \to H$
+ such that for any $g_1, g_2 \in G$ we have
+ \[ \phi(g_1 \star g_2) = \phi(g_1) \ast \phi(g_2). \]
+\end{definition}
+(Not to be confused with ``homeomorphism'' from
+last chapter: note the spelling.)
+
+\begin{example}
+ [Examples of homomorphisms]
+ Let $G$ and $H$ be groups.
+ \begin{enumerate}[(a)]
+ \ii Any isomorphism $G \to H$ is a homomorphism.
+ In particular, the identity map $G \to G$ is a homomorphism.
+ \ii The \vocab{trivial homomorphism} $G \to H$ sends
+ everything to $1_H$.
+ \ii There is a homomorphism from $\ZZ$ to $\Zc{100}$ by
+ sending each integer to its residue modulo $100$.
+ \ii There is a homomorphism from $\ZZ$ to itself by $x \mapsto 10x$
+ which is injective but not surjective.
+ \ii There is a homomorphism from $S_n$ to $S_{n+1}$ by ``embedding'':
+ every permutation on $\{1,\dots,n\}$ can be thought of as a permutation
+ on $\{1,\dots,n+1\}$ if we simply let $n+1$ be a fixed point.
+ \ii A homomorphism $\phi: D_{12} \to D_6$
+ is given by $s_{12} \mapsto s_6$
+ and $r_{12} \mapsto r_6$.
+ \ii Specifying a homomorphism $\ZZ \to G$ is the same as
+ specifying just the image of the element $1 \in \ZZ$. Why?
+ \end{enumerate}
+\end{example}
+The last two examples illustrate something: suppose we have a presentation of $G$.
+To specify a homomorphism $G \to H$, we only have to specify where each generator of $G$ goes, in such a way that the relations are all satisfied.
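+For instance, in the $D_{12} \to D_6$ example,
+using the presentation
+$D_{12} = \left< r_{12}, s_{12} \mid r_{12}^6 = s_{12}^2 = 1,
+r_{12}s_{12} = s_{12}r_{12}\inv \right>$,
+one only needs to check that the images satisfy the same relations:
+\[ \phi(r_{12})^6 = r_6^6 = 1, \qquad
+ \phi(s_{12})^2 = s_6^2 = 1, \qquad
+ \phi(r_{12})\phi(s_{12}) = r_6s_6 = s_6r_6\inv = \phi(s_{12})\phi(r_{12})\inv. \]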
+
+Important remark:
+the right way to think about an isomorphism is as a ``bijective homomorphism''.
+To be explicit,
+\begin{exercise}
+ Show that $G \cong H$ if and only if there exist
+ homomorphisms $\phi \colon G \to H$ and $\psi \colon H \to G$
+ such that $\phi \circ \psi = \id_H$ and $\psi \circ \phi = \id_G$.
+\end{exercise}
+So the definitions of homeomorphism of metric spaces
+and isomorphism of groups are not too different.
+
+Some obvious properties of homomorphisms follow.
+\begin{fact}
+ Let $\phi \colon G \to H$ be a homomorphism.
+ Then $\phi(1_G) = 1_H$ and $\phi(g\inv) = \phi(g)\inv$.
+\end{fact}
+\begin{proof}
+ Boring, and I'm sure you could do it yourself if you wanted to.
+\end{proof}
+
+Now let me define a very important property of a homomorphism.
+\begin{definition}
+ The \vocab{kernel} of a homomorphism $\phi \colon G \to H$ is defined by
+ \[ \ker \phi \defeq
+ \left\{ g \in G : \phi(g) = 1_H \right\}.
+ \]
+ It is a \emph{subgroup} of $G$
+ (in particular, $1_G \in \ker \phi$ for obvious reasons).
+\end{definition}
+\begin{ques}
+ Verify that $\ker\phi$ is in fact a subgroup of $G$.
+\end{ques}
+We also have the following important fact, which we also encourage the reader to verify.
+\begin{proposition}
+ [Kernel determines injectivity]
+ The map $\phi$ is injective if and only if $\ker\phi = \{1_G\}$.
+\end{proposition}
+
+To make this concrete, let's compute the kernel of each of our examples.
+\begin{example}
+ [Examples of kernels]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The kernel of any isomorphism $G \to H$ is trivial,
+ since an isomorphism is injective.
+ In particular, the kernel of the identity map $G \to G$ is $\{1_G\}$.
+ \ii The kernel of the trivial homomorphism $G \to H$
+ (by $g \mapsto 1_H$) is all of $G$.
+ \ii The kernel of the homomorphism $\ZZ \to \Zc{100}$ by $n \mapsto \ol n$
+ is precisely \[ 100\ZZ = \{\dots, -200, -100, 0, 100, 200, \dots \}. \]
+ \ii The kernel of the map $\ZZ \to \ZZ$ by $x \mapsto 10x$ is trivial: $\{0\}$.
+ \ii There is a homomorphism from $S_n$ to $S_{n+1}$ by ``embedding'',
+ but it also has trivial kernel because it is injective.
+ \ii A homomorphism $\phi\colon D_{12} \to D_6$
+ is given by $s_{12} \mapsto s_6$ and $r_{12} \mapsto r_6$.
+ You can check that
+ \[ \ker \phi = \left\{ 1, r_{12}^{3} \right\} \cong \Zc2. \]
+ \ii Exercise below.
+ \end{enumerate}
+\end{example}
+\begin{exercise}
+ Fix any $g \in G$.
+ Suppose we have a homomorphism $\ZZ \to G$ by $n \mapsto g^n$.
+ What is the kernel?
+\end{exercise}
+
+\begin{ques}
+ Show that for any homomorphism $\phi: G \to H$,
+ the image $\phi\im(G)$ is a subgroup of $H$.
+ Hence, we'll be especially interested in the case where $\phi$ is surjective.
+\end{ques}
+
+\section{Cosets and modding out}
+\prototype{Modding out by $n$: $\ZZ / (n \cdot \ZZ) \cong \Zc n$.}
+\emph{The next few sections are a bit dense.
+If this exposition doesn't work for you, try \cite{ref:gowers}.}
+
+Let $G$ and $Q$ be groups, and suppose there exists
+a \emph{surjective} homomorphism \[ \phi : G \surjto Q. \]
+In other words, if $\phi$ is injective then $\phi : G \to Q$ is a bijection,
+and hence an isomorphism.
+But suppose we're not so lucky and $\ker\phi$ is bigger than just $\{1_G\}$.
+What is the correct interpretation of a more general homomorphism?
+
+Let's look at the special case where $\phi : \ZZ \to \Zc{100}$ is ``modding out by $100$''.
+We already saw that the kernel of this map is
+\[
+ \ker \phi = 100\ZZ = \left\{ \dots, -200, -100, 0, 100, 200, \dots \right\}.
+\]
+Recall now that $\ker \phi$ is a subgroup of $G$.
+What this means is that \textbf{$\phi$ is indifferent to the subgroup $100\ZZ$ of $\ZZ$}:
+\[ \phi(15) = \phi(2000 + 15) = \phi(-300 + 15) = \phi(700 + 15) = \dots. \]
+So $\Zc{100}$ is what we get when we ``mod out by $100$''. Cool.
+
+In other words, let $G$ be a group and $\phi : G \surjto Q$
+be a surjective homomorphism with kernel $N \subseteq G$.
+\begin{moral}
+ We claim that $Q$ should be thought of as the quotient of $G$ by $N$.
+\end{moral}
+To formalize this, we will define a so-called
+\vocab{quotient group} $G/N$
+in terms of $G$ and $N$ only (without referencing $Q$)
+which will be naturally isomorphic to $Q$.
+
+For motivation, let's give a concrete description of $Q$ using just $\phi$ and $G$.
+Continuing our previous example, let $N = 100\ZZ$ be our subgroup of $G$.
+Consider the sets
+\begin{align*}
+ N &= \left\{ \dots, -200, -100, 0, 100, 200, \dots \right\} \\
+ 1+N &= \left\{ \dots, -199, -99, 1, 101, 201, \dots \right\} \\
+ 2+N &= \left\{ \dots, -198, -98, 2, 102, 202, \dots \right\} \\
+ &\vdots \\
+ 99+N &= \left\{ \dots, -101, -1, 99, 199, 299, \dots \right\}.
+\end{align*}
+The elements of each set all have the same image when we apply $\phi$,
+and moreover any two elements in different sets have different images.
+Then the main idea is to notice that
+\begin{moral}
+ We can think of $Q$ as the group
+ whose \emph{elements} are the \emph{sets} above.
+\end{moral}
+
+Thus, given $\phi$ we define an equivalence relation $\sim_N$
+on $G$ by saying $x \sim_N y$ if $\phi(x) = \phi(y)$.
+This $\sim_N$ divides $G$ into several equivalence classes
+which are in obvious bijection with $Q$, as above.
+Now we claim that we can write these equivalence classes very explicitly.
+
+% Also, for each $g \in G$ define $\ol g$ to be the equivalence class of $g$ under $\sim_\phi$.
+\begin{exercise}
+ Show that $x \sim_N y$ if and only if $x = yn$ for some $n \in N$
+ (in the mod $100$ example, this means they ``differ by some multiple of $100$'').
+ Thus for any $g \in G$, the equivalence class of $\sim_N$ which contains $g$
+ is given explicitly by \[ gN \defeq \left\{ gn \mid n \in N \right\}. \]
+\end{exercise}
+%Note that
+%\[ \phi(x) = \phi(y)
+% \implies 1_Q = \phi(x)\phi(y)\inv = \phi(xy\inv)
+% \implies xy\inv \in \ker\phi. \]
+%Hence another way to describe $\ol x$ is
+%\[ \ol g = gN = \left\{ gn \mid n \in N \right\}. \]
+
+Here's the word that describes the types of sets we're running into now.
+\begin{definition}
+ Let $H$ be any subgroup of $G$ (not necessarily the kernel of some homomorphism).
+ A set of the form $gH$ is called a \vocab{left coset} of $H$.
+\end{definition}
+\begin{remark}
+ Although the notation might not suggest it,
+ keep in mind that $g_1N$ is often equal to $g_2N$ even if $g_1 \neq g_2$.
+ In the ``mod $100$'' example, $3+N = 103+N$.
+ In other words, these cosets are \emph{sets}.
+
+ This means that if I write ``let $gH$ be a coset'' without telling you what $g$ is,
+ you can't figure out which $g$ I chose from just the coset itself.
+ If you don't believe me, here's an example of what I mean:
+ \[ x+100\ZZ = \left\{ \dots, -97, 3, 103, 203, \dots \right\} \implies x = {?}. \]
+ There's no reason to think I picked $x=3$. (I actually picked $x=-13597$.)
+ \label{remark:coset_warning}
+\end{remark}
+\begin{remark}
+ Given cosets $g_1H$ and $g_2H$,
+ you can check that the map $x \mapsto g_2g_1\inv x$ is a bijection between them.
+ So actually, all cosets have the same cardinality.
+\end{remark}
+
+So, long story short,
+\begin{moral}
+ Elements of the group $Q$ are naturally identified with left cosets of $N$.
+\end{moral}
+In practice, people often still prefer to picture elements of $Q$ as single points
+(for example it's easier to think of $\Zc2$ as $\{0,1\}$
+rather than $\big\{ \{\dots,-2,0,2,\dots\}, \{\dots,-1,1,3,\dots\} \big\}$).
+If you like this picture,
+then you might then draw $G$ as a bunch of equally tall fibers (the cosets),
+which are then ``collapsed'' onto $Q$.
+
+\begin{center}
+ \begin{asy}
+ size(9cm);
+ for (int i=1; i<=6; ++i) {
+ draw( (i,0)--(i,3.4) );
+ dot( (i,0.6) );
+ dot( (i,1) );
+ dot( (i,1.7) );
+ dot( (i,2.4) );
+ dot( (i,2.8) );
+ dot( (i,3) );
+ label(rotate(90)*scale(1.4)*"$\Longleftarrow$", (i+0.02,-0.9), dir(90));
+ dot( (i,-1) );
+ }
+ label("$G$", (0.5, 3));
+ label("$Q$", (0.5,-1));
+ \end{asy}
+\end{center}
+
+Now that we've done this, we can give an \emph{intrinsic}
+definition for the quotient group we alluded to earlier.
+\begin{definition}
+ A subgroup $N$ of $G$ is called \vocab{normal} if it is the
+ kernel of some homomorphism.
+ We write this as $N \normalin G$.
+\end{definition}
+\begin{definition}
+ Let $N \normalin G$.
+ Then the \vocab{quotient group}, denoted $G/N$
+ (and read ``$G$ mod $N$''),
+ is the group defined as follows.
+ \begin{itemize}
+ \ii The elements of $G/N$ will be the left cosets of $N$.
+ \ii We want to define the product of two cosets $C_1$ and $C_2$ in $G/N$.
+ Recall that the cosets are in bijection with elements of $Q$.
+ So let $q_1$ be the value associated to the coset $C_1$,
+ and $q_2$ the one for $C_2$.
+ Then we can take the product to be the coset corresponding to $q_1q_2$.
+
+ Quite importantly,
+ \textbf{we can also do this in terms of representatives of the cosets}.
+ Let $g_1 \in C_1$ and $g_2 \in C_2$,
+ so $C_1 = g_1N$ and $C_2 = g_2N$.
+ Then $C_1 \cdot C_2$ should be the coset which contains $g_1g_2$.
+ This is the same as the above definition since
+ $\phi(g_1g_2) = \phi(g_1)\phi(g_2) = q_1q_2$;
+ all we've done is define the product in terms of elements of $G$,
+ rather than values in $Q$.
+
+ Using the $gN$ notation,
+ and with \Cref{remark:coset_warning} in mind,
+ we can write this even more succinctly:
+ \[ (g_1N) \cdot (g_2N) \defeq (g_1g_2)N. \]
+ \end{itemize}
+\end{definition}
+And now you know why the integers modulo $n$ are often written $\ZZ/n\ZZ$!
+\begin{ques}
+ Take a moment to digest the above definition.
+\end{ques}
+By the way we've built it, the resulting group $G/N$ is isomorphic to $Q$.
+In a sense we think of $G/N$ as ``$G$ modulo the condition that $n=1$
+for all $n \in N$''.
+
+\section{(Optional) Proof of Lagrange's theorem}
+As an aside, with the language of cosets
+we can now show Lagrange's theorem in the general case.
+\begin{theorem}
+ [Lagrange's theorem]
+ \label{thm:lagrange_grp}
+ Let $G$ be a finite group, and let $H$ be any subgroup.
+ Then $\left\lvert H \right\rvert$ divides $\left\lvert G \right\rvert$.
+\end{theorem}
+
+The proof is very simple: note that the cosets of $H$
+all have the same size and form a partition of $G$
+(even when $H$ is not necessarily normal).
+Hence if $n$ is the number of cosets,
+then $n \cdot \left\lvert H \right\rvert = \left\lvert G \right\rvert$.
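+For example, take $G = \Zc 6$ and $H = \{0, 3\}$:
+the cosets are
+\[ \{0,3\}, \qquad \{1,4\}, \qquad \{2,5\} \]
+and indeed $3 \cdot 2 = 6$.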
+
+\begin{ques}
+ Conclude that $x^{\left\lvert G \right\rvert}=1$
+ by taking $H = \left< x \right> \subseteq G$.
+\end{ques}
+
+\begin{remark}
+ It should be mentioned at this point that
+ in general, if $G$ is a finite group and $N$ is normal,
+ then $|G/N| = |G| / |N|$.
+\end{remark}
+
+\section{Eliminating the homomorphism}
+\prototype{Again $\ZZ/n\ZZ \cong \Zc n$.}
+Let's look at the last definition of $G/N$ we provided.
+The short version is:
+\begin{itemize}
+ \ii The elements of $G/N$ are cosets $gN$, which you can think
+ of as equivalence classes of a relation $\sim_N$
+ (where $g_1 \sim_N g_2$ if $g_1 = g_2n$ for some $n \in N$).
+ \ii Given cosets $g_1N$ and $g_2N$ the group operation is
+ \[ g_1N \cdot g_2N \defeq (g_1g_2)N. \]
+\end{itemize}
+Question: where do we actually use the fact that $N$ is normal?
+We don't talk about $\phi$ or $Q$ anywhere in this definition.
+
+The answer is in \Cref{remark:coset_warning}.
+The group operation takes in two cosets,
+so it doesn't know what $g_1$ and $g_2$ are.
+But behind the scenes,
+\textbf{the normal condition guarantees that the group operation can pick
+any $g_1$ and $g_2$ it wants and still end up with the same coset.}
+If we didn't have this property, then it would be hard to define the
+product of two cosets $C_1$ and $C_2$ because it might make a difference
+which $g_1 \in C_1$ and $g_2 \in C_2$ we picked.
+The fact that $N$ came from a homomorphism meant we could pick any representatives
+$g_1$ and $g_2$ of the cosets we wanted, because they all had the same $\phi$-value.
+
+We want some conditions which force this to be true without referencing $\phi$ at all.
+Suppose $\phi \colon G \to K$ is a homomorphism of groups with $H = \ker\phi$.
+Aside from the fact $H$ is a group, we can get an ``obvious'' property:
+\begin{ques}
+ Show that if $h \in H$, $g \in G$,
+ then $ghg\inv \in H$.
+ (Check $\phi(ghg\inv) = 1_K$.)
+\end{ques}
+\begin{example}[Example of a non-normal subgroup]
+ \label{ex:dihedral_normal_subgroup}
+ Let $D_{12} = \left< r,s \mid r^6 = s^2 = 1, rs = sr\inv \right>$.
+ Consider the subgroup of order two $H = \{1,s\}$
+ and notice that \[ rsr\inv = r(sr\inv)= r(rs) = r^2s \notin H. \]
+ Hence $H$ is not normal, and cannot be the kernel of any homomorphism.
+\end{example}
+Well, duh -- so what?
+Amazingly, it turns out that this is the \emph{sufficient} condition we want.
+Specifically, it makes the nice ``coset multiplication'' we wanted work out.
+\begin{remark}
+ [For math contest enthusiasts]
+ This coincidence is really a lot like functional equations at the IMO.
+ We all know that normal subgroups $H$ satisfy $ghg\inv \in H$;
+ the surprise is that from this seemingly weaker condition,
+ we can deduce $H$ is normal.
+\end{remark}
+
+Thus we have a new criterion for ``normal'' subgroups which does not
+make any external references to $\phi$.
+\begin{theorem}[Algebraic condition for normal subgroups]
+ Let $H$ be a subgroup of $G$.
+ Then the following are equivalent:
+ \begin{itemize}
+ \ii $H \normalin G$.
+ \ii For every $g \in G$ and $h \in H$, $ghg\inv \in H$.
+ \end{itemize}
+\end{theorem}
+\begin{proof}
+ We already showed one direction.
+
+ For the other direction, we need to build a homomorphism with kernel $H$.
+ So we simply \emph{define} the group $G/H$ as the cosets.
+ To put a group operation, we need to verify:
+ \begin{claim}
+ If $g_1' \sim_H g_1$ and $g_2' \sim_H g_2$ then $g_1'g_2' \sim_H g_1g_2$.
+ \end{claim}
+ \begin{subproof}
+ Boring algebraic manipulation (again functional equation style).
+ Let $g_1' = g_1h_1$ and $g_2' = g_2h_2$, so we want to show that
+ $g_1h_1g_2h_2 \sim_H g_1g_2$.
+ Since $H$ has the property, $g_2\inv h_1g_2$ is some element of $H$, say $h_3$.
+ Thus $h_1 g_2 = g_2 h_3$, and the left-hand side becomes $g_1g_2(h_3h_2)$,
+ which is fine since $h_3h_2 \in H$.
+ \end{subproof}
+ With that settled we can just \emph{define} the
+ product of two cosets (of normal subgroups) by \[ (g_1H) \cdot (g_2H) = (g_1g_2)H. \]
+
+ Thus the claim above shows that this multiplication is well-defined
+ (this verification is the ``content'' of the theorem).
+ So $G/H$ is indeed a group!
+ Moreover there is an obvious ``projection'' homomorphism
+ $G \to G/H$ (with kernel $H$), by $g \mapsto gH$.
+\end{proof}
+%Another way to write the condition is
+%\[ H = gHg\inv \defeq \left\{ ghg\inv \mid h \in H \right\}. \]
+%You should take a moment to check that these definitions are equivalent.
+
+\begin{example}[Modding out in the product group]
+ Consider again the product group $G \times H$.
+ Earlier we identified a subgroup
+ \[ G' = \left\{ (g, 1_H) \mid g \in G \right\} \cong G. \]
+ You can easily check that $G' \normalin G \times H$.
+
+ Moreover, you can check that
+ \[ (G \times H) / (G') \cong H. \]
+ Indeed, we have $(g, h) \sim_{G'} (1_G, h)$ for all $g \in G$ and $h \in H$.
+\end{example}
+\begin{example}[Quotients and products don't necessarily cancel]
+ It is not necessarily true that $(G/H) \times H \cong G$.
+ For example, consider $G = \ZZ/4\ZZ$
+ and the normal subgroup $H = \{0,2\} \cong \ZZ/2\ZZ$.
+ Then $G/H \cong \ZZ/2\ZZ$,
+ but $\ZZ/4\ZZ \not\cong \ZZ/2\ZZ \times \ZZ/2\ZZ$,
+ since the former has an element of order $4$ and the latter does not.\footnote{The
+ precise condition for this kind of ``canceling'' is given by the Schur-Zassenhaus lemma.}
+\end{example}
+\begin{example}[Another explicit computation]
+ Let $\phi: D_8 \to \Zc 4$ be defined by \[ r \mapsto \ol 2, \quad s \mapsto \ol 2. \]
+ The kernel of this map is $N = \{1,r^2,sr,sr^3\}$.
+
+ We can do a quick computation of all the elements of $D_8$ to get
+ \[ \phi(1) = \phi(r^2) = \phi(sr) = \phi(sr^3) = \ol 0
+ \text{ and }
+ \phi(r) = \phi(r^3) = \phi(s) = \phi(sr^2) = \ol 2. \]
+ The two relevant fibers are \[ \phi\pre(\ol 0) = 1N = r^2N = srN = sr^3N = \{1,r^2,sr,sr^3\} \] and
+ \[ \phi\pre(\ol 2) = rN = r^3N = sN = sr^2N = \{r,r^3,s,sr^2\}. \]
+ So we see that $D_8/N$ is a group of order two, i.e.\ $D_8/N \cong \Zc 2$.
+ Indeed, the image of $\phi$ is \[ \left\{ \ol 0, \ol 2 \right\} \cong \Zc 2. \]
+\end{example}
+
+\begin{ques}
+ Suppose $G$ is abelian.
+ Why does it follow that any subgroup of $G$ is normal?
+\end{ques}
+
+
+Finally here's some food for thought:
+suppose one has a group presentation for a group $G$
+that uses $n$ generators.
+Can you write it as a quotient of the form $F_n / N$,
+where $N$ is a normal subgroup of $F_n$?
+
+\section{(Digression) The first isomorphism theorem}
+One quick word about what other sources usually say.
+
+Most textbooks actually \emph{define} normal using the $ghg\inv \in H$ property.
+Then they define $G/H$ for normal $H$ in the way I did above,
+using the coset definition
+\[ (g_1H) \cdot (g_2H) = g_1g_2H. \]
+Using purely algebraic manipulations (like I did) this is well-defined,
+and so now you have this group $G/H$ or something.
+The underlying homomorphism isn't mentioned at all,
+or is just mentioned in passing.
+
+I think this is incredibly dumb.
+The normal condition looks like it gets pulled out of thin air
+and no one has any clue what's going on,
+because no one has any clue what a normal subgroup actually should look like.
+
+Other sources like to also write the so-called first isomorphism theorem.\footnote{
+ There is a second and third isomorphism theorem.
+ But four years after learning about them,
+ I \emph{still} don't know what they are.
+ So I'm guessing they weren't very important.}
+It goes like this.
+\begin{theorem}
+ [First isomorphism theorem]
+ Let $\phi : G \to H$ be a homomorphism.
+ Then $G / \ker \phi$ is isomorphic to $\phi\im(G)$.
+\end{theorem}
+To me, this is just a clumsier way of stating the same idea.
+
+About the only merit this claim has is that if $\phi$ is injective,
+then the image $\phi\im(G)$ is an \emph{isomorphic copy}
+of $G$ inside the group $H$.
+(Try to see this directly!)
+This is a pattern we'll often see in other branches of mathematics:
+whenever we have an \emph{injective structure-preserving map},
+often the image of this map will be some ``copy'' of $G$.
+(Here ``structure'' refers to the group multiplication,
+but we'll see more examples of ``types of objects'' later!)
+
+In that sense an injective homomorphism $\phi : G \injto H$
+is an \emph{embedding} of $G$ into $H$.
+
+\section\problemhead
+\begin{problem}
+ [18.701 at MIT]
+ Determine all groups $G$ for which the map $\phi : G \to G$ defined by
+ \[ \phi(g) = g^2 \] is a homomorphism.
+ \begin{hint}
+ Write it out: $\phi(ab) = \phi(a)\phi(b)$.
+ \end{hint}
+ \begin{sol}
+ Abelian groups: $abab =a^2b^2 \iff ab= ba$.
+ \end{sol}
+\end{problem}
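A brute-force sanity check (my own illustration, not part of the problem; the helper names are made up): squaring fails to be a homomorphism on the nonabelian $S_3$, but works on the abelian $\Zc 4$.

```python
from itertools import permutations

# Model S_3 as all permutations of (0, 1, 2); composition is the group law.
def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))

def is_squaring_hom(elems, op):
    # phi(g) = g*g is a homomorphism iff phi(ab) = phi(a)phi(b) for all a, b
    sq = lambda g: op(g, g)
    return all(sq(op(a, b)) == op(sq(a), sq(b)) for a in elems for b in elems)

def is_abelian(elems, op):
    return all(op(a, b) == op(b, a) for a in elems for b in elems)

# Nonabelian S_3: squaring is not a homomorphism.
print(is_abelian(S3, compose), is_squaring_hom(S3, compose))  # False False

# Abelian Z/4 (written additively, so "squaring" is doubling): it works.
Z4 = list(range(4))
add4 = lambda a, b: (a + b) % 4
print(is_abelian(Z4, add4), is_squaring_hom(Z4, add4))        # True True
```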
+
+\begin{problem}
+ Consider the dihedral group $G = D_{10}$.
+ \begin{enumerate}[(a)]
+ \ii Is $H = \left< r \right>$ a normal subgroup of $G$?
+ If so, compute $G/H$ up to isomorphism.
+ \ii Is $H = \left< s \right>$ a normal subgroup of $G$?
+ If so, compute $G/H$ up to isomorphism.
+ \end{enumerate}
+ \begin{hint}
+ Yes, no.
+ \end{hint}
+ \begin{sol}
+ Yes to (a): you can check this directly
+ from the $ghg\inv$ definition;
+ it is enough to compute $(r^a s) r^n (r^a s)\inv = r^{-n} \in H$.
+ The quotient group is $\Zc2$.
+
+ The answer is no for (b) by following
+ \Cref{ex:dihedral_normal_subgroup}.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Does $S_4$ have a normal subgroup of order $3$?
+ \begin{hint}
+ No.
+ \end{hint}
+ \begin{sol}
+ A subgroup of order $3$ must be generated by
+ an element of order $3$, since $3$ is prime.
+ So we may assume WLOG that $H = \left< (1\; 2 \; 3) \right>$
+ (by renaming elements appropriately).
+ But then let $g = (3 \; 4)$; one can check $gHg\inv \ne H$.
+ \end{sol}
+\end{problem}
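This is small enough to check exhaustively (my own sketch, not part of the solution; `is_normal` is a made-up helper):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S4 = list(permutations(range(4)))
e = (0, 1, 2, 3)

# Every subgroup of order 3 is cyclic, generated by an element of order 3.
order3 = [g for g in S4 if g != e and compose(g, compose(g, g)) == e]
subgroups = {frozenset([e, g, compose(g, g)]) for g in order3}

def is_normal(H):
    return all(frozenset(compose(g, compose(h, inverse(g))) for h in H) == H
               for g in S4)

print(len(subgroups), any(is_normal(H) for H in subgroups))  # 4 False
```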
+
+\begin{problem}
+ Let $G$ and $H$ be finite groups, where $\left\lvert G \right\rvert = 1000$
+ and $\left\lvert H \right\rvert = 999$.
+ Show that a homomorphism $G \to H$ must be trivial.
+ \begin{hint}
+ $\gcd(1000,999)=1$.
+ \end{hint}
+ \begin{sol}
+ If $\phi \colon G \to H$ is such a homomorphism,
+ then $G/\ker \phi$ is isomorphic to a subgroup of $H$.
+ The order of the former divides $1000$;
+ the order of the latter divides $999$.
+ This can only occur if $G / \ker \phi = \{1\}$,
+ so $\ker \phi = G$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Let $\CC^\times$ denote the nonzero complex numbers under multiplication.
+ Show that there are five homomorphisms $\Zc5 \to \CC^\times$
+ but only two homomorphisms $D_{10} \to \CC^\times$,
+ even though $\Zc5$ is a subgroup of $D_{10}$.
+\end{problem}
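One way to see the counts concretely (my own sketch, not part of the problem): the image of a finite group in $\CC^\times$ consists of roots of unity, so homomorphisms can be enumerated as exponents modulo $10$.

```python
# Images of a homomorphism from a group of order dividing 10 into C^x are
# 10th roots of unity; record exp(2*pi*i*k/10) as the exponent k mod 10.

# Z/5 = <r | r^5 = 1>: a homomorphism is determined by the exponent a of the
# image of r, subject only to 5a = 0 (mod 10).
homs_Z5 = [a for a in range(10) if 5 * a % 10 == 0]

# D_10 = <r, s | r^5 = s^2 = 1, s r s^{-1} = r^{-1}>: now we need a pair
# (a, b) of exponents; the last relation forces a = -a (mod 10),
# since C^x is abelian.
homs_D10 = [(a, b) for a in range(10) for b in range(10)
            if 5 * a % 10 == 0 and 2 * b % 10 == 0 and 2 * a % 10 == 0]

print(len(homs_Z5), len(homs_D10))  # 5 2
```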
+
+\begin{problem}
+ \gim
+ Find a non-abelian group $G$
+ such that every subgroup of $G$ is normal.
+ (These groups are called \vocab{Hamiltonian}.)
+ \begin{hint}
+ Find an example of order $8$.
+ \end{hint}
+ \begin{sol}
+ Quaternion group.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [PRIMES entrance exam, 2017]
+ \gim
+ Let $G$ be a group with presentation given by
+ \[ G = \left< a,b,c \mid
+ ab = c^2a^4, \; bc = ca^6, \; ac = ca^8, \;
+ c^{2018} = b^{2019}
+ \right>. \]
+ Determine the order of $G$.
+ \begin{hint}
+ Try to show $G$ is the dihedral group of order $18$.
+ There is not much group theory content here --- just manipulation.
+ \end{hint}
+ \begin{sol}
+ The answer is $|G| = 18$.
+
+ First, observe that by induction we have
+ \[ a^n c = ca^{8n} \]
+ for all $n \ge 1$.
+ We then note that
+ \begin{align*}
+ a(bc) &= (ab)c \\
+ a \cdot ca^6 &= c^2 a^4 \cdot c \\
+ c a^8 \cdot a^6 &= c^2 a^4 \cdot c \\
+ a^{14} &= c(a^4c) = c^2 a^{32}.
+ \end{align*}
+ Hence we conclude $c^2 = a^{-18}$.
+ Then $ab = c^2a^4 \implies b = a^{-15}$.
+
+ In that case, if $c^{2018} = b^{2019}$,
+ then since $c^{2018} = (c^2)^{1009} = a^{-18 \cdot 1009}$
+ and $b^{2019} = a^{-15 \cdot 2019}$,
+ we conclude $1 = a^{2019 \cdot 15 - 1009 \cdot 18} = a^{12123}$.
+ Finally,
+ \begin{align*}
+ bc &= ca^6 \\
+ a^{-15} c &= ca^6 \\
+ a^{-15} c^2 &= c(a^6c) = c^2 a^{48} \\
+ a^{-33} &= a^{30} \\
+ \implies a^{63} &= 1.
+ \end{align*}
+ Since $\gcd(12123, 63) = 9$, we find $a^9 = 1$, hence finally $c^2 = 1$.
+ So the presentation above simplifies to
+ \[ G = \left< a,c \mid a^9=c^2=1, \; ac = ca^{-1} \right> \]
+ which is the presentation of the dihedral group of order $18$.
+ % a = r, r^9 = 1
+ % b = r^3
+ % c = s, s^2 = 2
+ This completes the proof.
+ \end{sol}
+\end{problem}
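One can sanity-check the intended answer by machine (a sketch of mine, with `mul` and `power` as made-up helpers): take the dihedral group of order $18$ with $a = r$, $b = a^{-15} = r^3$, $c = s$, and verify all four relations hold.

```python
# Elements of the dihedral group of order 18 are pairs (k, f) meaning
# r^k s^f, with r^9 = s^2 = 1 and s r = r^{-1} s.

def mul(x, y):
    (k1, f1), (k2, f2) = x, y
    # r^{k1} s^{f1} r^{k2} s^{f2} = r^{k1 + (-1)^{f1} k2} s^{f1 + f2}
    return ((k1 + (-1) ** f1 * k2) % 9, (f1 + f2) % 2)

def power(x, n):
    # n may be negative; rotations invert to (-k, 0), reflections are
    # involutions (their own inverses)
    if n < 0:
        k, f = x
        x = (-k % 9, f) if f == 0 else x
        n = -n
    result = (0, 0)
    for _ in range(n):
        result = mul(result, x)
    return result

a, c = (1, 0), (0, 1)
b = power(a, -15)  # equals r^3

checks = [
    mul(a, b) == mul(power(c, 2), power(a, 4)),  # ab = c^2 a^4
    mul(b, c) == mul(c, power(a, 6)),            # bc = c a^6
    mul(a, c) == mul(c, power(a, 8)),            # ac = c a^8
    power(c, 2018) == power(b, 2019),            # c^2018 = b^2019
]
print(all(checks))  # True
```

This shows the dihedral group of order $18$ really is a quotient of $G$; the manipulations in the solution give the matching upper bound.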
+
+\begin{problem}
+ [Homophony group]
+ \gim
+ The homophony group (of English) is the group
+ with $26$ generators $a$, $b$, \dots, $z$
+ and one relation for every pair of English words
+ which sound the same.
+ For example $knight = night$ (and hence $k=1$).
+ Prove that the group is trivial.
+ \begin{hint}
+ Get yourself a list of English homophones, I guess.
+ Don't try too hard.
+ Letter $v$ is the worst; maybe $felt = veldt$?
+ \end{hint}
+ \begin{sol}
+ You can find many solutions by searching ``homophone group'';
+ one is \url{https://math.stackexchange.com/q/843966/229197}.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/ramification.tex b/books/napkin/ramification.tex
new file mode 100644
index 0000000000000000000000000000000000000000..764438f49d5ff652aaa5046dc80bc15f956d3af5
--- /dev/null
+++ b/books/napkin/ramification.tex
@@ -0,0 +1,387 @@
+\chapter{Ramification theory}
+We're very interested in how rational primes $p$ factor in a bigger number field $K$.
+Some examples of this behavior: in $\ZZ[i]$ (which is a UFD!), we have factorizations
+\begin{align*}
+ (2) &= (1+i)^2 \\
+ (3) &= (3) \\
+ (5) &= (2+i)(2-i).
+\end{align*}
+In this chapter we'll learn more about how primes break down when they're thrown into bigger number fields.
+Using weapons from Galois theory, this will culminate in a proof of quadratic reciprocity.
+
+\section{Ramified / inert / split primes}
+\prototype{In $\ZZ[i]$, $2$ is ramified, $3$ is inert, and $5$ splits.}
+
+Let $p$ be a rational prime, and toss it into $\OO_K$.
+Thus we get a factorization into prime ideals
+\[ p \cdot \OO_K = \kp_1^{e_1} \dots \kp_g^{e_g}. \]
+We say that each $\kp_i$ is \vocab{above} $(p)$.\footnote{%
+ Reminder that $p \cdot \OO_K$ and $(p)$ mean the same thing, and I'll use both interchangeably.}
+Pictorially, you might draw this as follows:
+\begin{center}
+\begin{tikzcd}
+ K \ar[d, dash] & \supset & \OO_K \ar[d, dash] & \kp_i \ar[d, dash] \\
+ \QQ & \supset & \ZZ & (p)
+\end{tikzcd}
+\end{center}
+Some names for various behavior that can happen:
+\begin{itemize}
+ \ii We say $p$ is \vocab{ramified} if $e_i > 1$ for some $i$.
+ For example $2$ is ramified in $\ZZ[i]$.
+ \ii We say $p$ is \vocab{inert} if $g=1$ and $e_1=1$; i.e.\ $(p)$ remains prime.
+ For example $3$ is inert in $\ZZ[i]$.
+ \ii We say $p$ is \vocab{split} if $g > 1$.
+ For example $5$ is split in $\ZZ[i]$.
+\end{itemize}
+\begin{ques}
+ More generally, for a prime $p$ in $\ZZ[i]$:
+ \begin{itemize}
+ \ii $p$ is ramified exactly when $p = 2$.
+ \ii $p$ is inert exactly when $p \equiv 3 \pmod 4$.
+ \ii $p$ is split exactly when $p \equiv 1 \pmod 4$.
+ \end{itemize}
+ Prove this.
+\end{ques}
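This trichotomy is easy to verify numerically for small primes. The sketch below (mine, not part of the text) uses the criterion explained in the next section: the behavior of $p$ in $\ZZ[i]$ mirrors how $x^2 + 1$ factors mod $p$.

```python
# Splitting of a rational prime p in Z[i] is governed by x^2 + 1 (mod p):
# a double root means ramified, two distinct roots split, no root inert.

def behavior(p):
    roots = [x for x in range(p) if (x * x + 1) % p == 0]
    if len(roots) == 1:  # a double root; happens only at p = 2
        return "ramified"
    return "split" if roots else "inert"

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in [q for q in range(2, 30) if is_prime(q)]:
    print(p, behavior(p), p % 4)
```

The output matches the classification above: $2$ is ramified, primes $\equiv 1 \pmod 4$ split, primes $\equiv 3 \pmod 4$ stay inert.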
+
+\section{Primes ramify if and only if they divide $\Delta_K$}
+The most unusual case is ramification:
+just like we don't expect a randomly selected polynomial to have a double root,
+we don't expect a randomly selected prime to be ramified.
+In fact, the key to understanding ramification is the discriminant.
+
+For the sake of discussion, let's suppose that $K$ is monogenic,
+$\OO_K = \ZZ[\theta]$, where $\theta$ has minimal polynomial $f$.
+Let $p$ be a rational prime we'd like to factor.
+If $f$ factors as $f_1^{e_1} \dots f_g^{e_g}$, then we know that
+the prime factorization of $(p)$ is given by
+\[ p \cdot \OO_K = \prod_i \left( p, f_i(\theta) \right)^{e_i}. \]
+In particular, $p$ ramifies exactly when \emph{$f$ has a double root mod $p$}!
+To detect whether this happens, we look at the polynomial discriminant of $f$,
+namely
+\[ \Delta(f) = \prod_{i < j} (z_i - z_j)^2 \]
+where $z_1$, \dots, $z_n$ are the roots of $f$;
+it vanishes modulo $p$ exactly when $f$ has a repeated root modulo $p$.
+Since $\OO_K/\kp$ is a finite field of order $p^f$,
+its Galois group over $\FF_p$ is cyclic of order $f$
+(generated by the Frobenius $x \mapsto x^p$):
+\[
+	\Gal\left( (\OO_K/\kp) : \FF_p \right)
+	\cong \Zc f.
+\]
+So \[ D_\kp \cong \Zc f \] as well.
+
+Let's now go back to
+\[ D_\kp \taking\theta \Gal\left( (\OO_K/\kp) : \FF_p \right). \]
+The kernel of $\theta$ is called the \vocab{inertia group}
+and denoted $I_\kp \subseteq D_\kp$; it has order $e$.
+
+This gives us a pretty cool sequence of subgroups
+$\{1\} \subseteq I \subseteq D \subseteq G$
+where $G$ is the Galois group (I'm dropping the $\kp$-subscripts now).
+Let's look at the corresponding \emph{fixed fields} via the Fundamental theorem of Galois theory.
+Picture:
+\begin{center}
+\begin{tikzcd}
+ \kp \subset \OO_K \subset
+ & K \ar[r, leftrightarrow] \ar[d, "\text{Ramify}"']
+ & \{1\} \ar[d, "e"] \\
+ \; & K^I \ar[d, "\text{Inert}"'] & I \ar[d, "f"] \\
+ \; & K^D \ar[d, "\text{Split}"'] & D \ar[d, "g"] \\
+ (p) \subset \ZZ \subset
+ & \QQ \ar[r, leftrightarrow]
+ & G
+\end{tikzcd}
+\end{center}
+Something curious happens:
+\begin{itemize}
+ \ii When $(p)$ is lifted into $K^D$ it splits completely into $g$ unramified primes.
+ Each of these has inertial degree $1$.
+ \ii When the primes in $K^D$ are lifted to $K^I$, they remain inert, and now have
+ inertial degree $f$.
+ \ii When then lifted to $K$, they ramify with exponent $e$ (but don't split at all).
+\end{itemize}
+In other words, the process of going from $1$ to $efg$
+can be very nicely broken into the three steps above.
+To draw this in the picture, we get
+\begin{center}
+\begin{tikzcd}
+ (p) \ar[r]
+ & \kp_1 \dots \kp_g \ar[r]
+ & \kp_1 \dots \kp_g \ar[r]
+ & (\kp_1 \dots \kp_g)^e \\
+ \{f_i\}: & 1,\dots,1 & f,\dots,f & f,\dots,f \\
+ \QQ \ar[r, dash, "\text{Split}"]
+ & K^D \ar[r, dash, "\text{Inert}"]
+ & K^I \ar[r, dash, "\text{Ramify}"]
+ & K
+\end{tikzcd}
+\end{center}
+In any case, in the ``typical'' case that there is no ramification,
+we just have $K^I = K$.
+
+\section{Tangential remark: more general Galois extensions}
+All the discussion about Galois extensions
+carries over if we replace $K/\QQ$ by some different Galois extension $K/F$.
+Instead of a rational prime $p$ breaking down in $\OO_K$,
+we would have a prime ideal $\kp$ of $F$ breaking down as
+\[ \kp \cdot \OO_K = (\kP_1 \dots \kP_g)^e \]
+in $\OO_K$, and then all results hold verbatim.
+(The $\kP_i$ are primes in $K$ above $\kp$.)
+Instead of $\FF_p$ we would have $\OO_F/\kp$.
+
+The reason I choose to work with $F = \QQ$ is that capital Gothic $P$'s ($\kP$)
+look \emph{really} terrifying.
+
+\section\problemhead
+\todo{more problems}
+% Cyclic Galois groups?
+
+\begin{dproblem}
+ Prove that no rational prime $p$ can remain inert in
+ $K = \QQ(\cbrt2, \omega)$, the splitting field of $x^3-2$.
+ How does this generalize?
+ \begin{hint}
+ Show that no rational prime $p$ can remain
+ inert if $\Gal(K/\QQ)$ is not cyclic.
+ Indeed, if $p$ is inert then $D_p \cong \Gal(K/\QQ)$.
+ \end{hint}
+\end{dproblem}
diff --git a/books/napkin/randvar.tex b/books/napkin/randvar.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ec038af7fd7d6db792c2fcb1b99068598da279a9
--- /dev/null
+++ b/books/napkin/randvar.tex
@@ -0,0 +1,81 @@
+\chapter{Random variables (TO DO)}
+\todo{write chapter}
+Having properly developed the Lebesgue measure
+and the integral on it,
+we can now proceed to develop random variables.
+
+\section{Random variables}
+With all this set-up, random variables are going to be really quick to define.
+\begin{definition}
+ A (real) \vocab{random variable} $X$ on a probability space
+ $\Omega = (\Omega, \SA, \mu)$
+ is a measurable function $X \colon \Omega \to \RR$,
+ where $\RR$ is equipped with the Borel $\sigma$-algebra.
+\end{definition}
+In particular, addition of random variables and the like
+all makes sense, as we can just add the underlying functions.
+Also, we can integrate $X$ over $\Omega$, by the previous chapter.
+
+\begin{definition}
+ [First properties of random variables]
+ Given a random variable $X$,
+ the \vocab{expected value} of $X$ is defined by
+ the Lebesgue integral
+ \[ \EE[X] = \int_{\Omega} X(\omega) \; d\mu. \]
+ Confusingly, the letter $\mu$ is often used for expected values too,
+ even though we are already using it for the measure.
+
+ The \vocab{$k$th moment} of $X$ is defined as $\EE[X^k]$,
+ for each integer $k \ge 1$.
+ The \vocab{variance} of $X$ is then defined as
+ \[ \Var(X) = \EE\left[ (X-\EE[X])^2 \right]. \]
+\end{definition}
+\begin{ques}
+ Show that $\mathbf{1}_A$ is a random variable
+ (just check that it is Borel measurable),
+ and its expected value is $\mu(A)$.
+\end{ques}
+
+An important property of expected value you probably already know:
+\begin{theorem}
+ [Linearity of expectation]
+ If $X$ and $Y$ are random variables on $\Omega$ then
+ \[ \EE[X+Y] = \EE[X] + \EE[Y]. \]
+\end{theorem}
+\begin{proof}
+ $\EE[X+Y] = \int_\Omega X(\omega) + Y(\omega) \; d\mu
+ = \int_\Omega X(\omega) \; d\mu + \int_\Omega Y(\omega) \; d\mu
+ = \EE[X] + \EE[Y]$.
+\end{proof}
+Note that $X$ and $Y$ do not have to be ``independent'' here:
+a notion we will define shortly.
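To see linearity in action on a finite probability space (a toy illustration of mine, not from the text), take a fair die and two very dependent random variables:

```python
from fractions import Fraction

# Finite probability space: a fair die, Omega = {1, ..., 6}, uniform measure.
Omega = range(1, 7)
mu = Fraction(1, 6)  # measure of each singleton

def E(X):
    # expected value = integral of X against the uniform measure
    return sum(X(w) * mu for w in Omega)

X = lambda w: w       # the roll itself
Y = lambda w: w * w   # its square -- certainly not independent of X

lhs = E(lambda w: X(w) + Y(w))
rhs = E(X) + E(Y)
print(lhs, rhs, lhs == rhs)  # 56/3 56/3 True
```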
+
+\section{Distribution functions}
+
+\section{Examples of random variables}
+
+\section{Characteristic functions}
+
+\section{Independent random variables}
+
+\section{\problemhead}
+\begin{problem}
+ [Equidistribution]
+ Let $X_1$, $X_2$, \dots be i.i.d.\ uniform random variables on $[0,1]$.
+ Show that almost surely the $X_i$ are equidistributed,
+ meaning that
+ \[ \lim_{N \to \infty} \frac{ \# \{1 \le i \le N \mid a \le X_i(\omega) \le b \}}{N}
+ = b-a \qquad \forall 0 \le a < b \le 1 \]
+ holds for almost all choices of $\omega$.
+\end{problem}
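The almost-sure statement is what requires proof, but a quick Monte Carlo run (my own illustration; the sample size and seed are arbitrary) shows the empirical frequencies settling near $b - a$:

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility
N = 100_000
xs = [random.random() for _ in range(N)]  # i.i.d. uniform samples on [0, 1]

for a, b in [(0.0, 0.5), (0.25, 0.75), (0.1, 0.2)]:
    freq = sum(a <= x <= b for x in xs) / N
    print(f"[{a}, {b}]: frequency {freq:.3f}, target {b - a:.3f}")
```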
+
+\begin{problem}
+ [Side length of triangle independent from median]
+ Let $X_1$, $Y_1$, $X_2$, $Y_2$, $X_3$, $Y_3$
+ be six independent standard Gaussians.
+ Define triangle $ABC$ in the Cartesian plane
+ by $A = (X_1,Y_1)$, $B = (X_2,Y_2)$, $C = (X_3,Y_3)$.
+ Prove that the length of side $BC$
+ is independent from the length of the $A$-median.
+\end{problem}
+
+% 18.175 has some other good pset problems that could go here
diff --git a/books/napkin/references.tex b/books/napkin/references.tex
new file mode 100644
index 0000000000000000000000000000000000000000..dab2889c7cc99765a67f1e388faa4148a0a61962
--- /dev/null
+++ b/books/napkin/references.tex
@@ -0,0 +1,227 @@
+\chapter{Pedagogical comments and references}
+\label{ch:refs}
+Here are some higher-level comments on the way specific topics were presented,
+as well as pointers to further reading.
+
+\section{Basic algebra and topology}
+\subsection{Linear algebra and multivariable calculus}
+Following the comments in \Cref{sec:basis_evil},
+I dislike most presentations of linear algebra and multivariable calculus
+since they miss the two key ideas, namely:
+\begin{itemize}
+ \ii In linear algebra, we study \emph{linear maps} between spaces.
+ \ii In calculus, we \emph{approximate functions at points by linear functions}.
+\end{itemize}
+Thus, I believe linear algebra should
+\emph{always} be taught before multivariable calculus.
+In particular, I do not recommend most linear algebra or
+multivariable calculus books.
+
+For linear algebra, I've heard that \cite{ref:axler} follows this approach,
+hence the appropriate name ``Linear Algebra Done Right''.
+I followed the proceedings of Math 55a with heavy modifications;
+see \cite{ref:55a}.
+
+For multivariable calculus and differential geometry,
+I found the notes \cite{ref:manifolds} to be unusually well-written.
+I referred to it frequently while I was enrolled in Math 55b \cite{ref:55b}.
+
+\subsection{General topology}
+My personal view on spaces is that every space I ever work with
+is either metrizable or carries the Zariski topology.
+
+I adopted the approach of \cite{ref:pugh}, using metric topology first.
+I find that metric spaces are far more intuitive, and are a much better
+way to get a picture of what open / closed / compact etc.\ sets look like.
+This is the approach history took;
+general topology grew out of metric topology.
+
+I personally dislike starting any general topology class
+by defining what a general topological space is,
+because that definition doesn't communicate a good picture
+of what open and closed sets look like.
+
+\subsection{Groups and commutative algebra}
+I teach groups before commutative rings but might convert later.
+Rings have better examples, don't have the confusion of multiplicative
+notation for additive groups, and modding out by ideals is more intuitive.
+
+There's a specific thing I have a qualm with in group theory:
+the way that the concept of a normal subgroup is introduced.
+Only \cite{ref:gowers} does something similar to what I do.
+Most other people simply \emph{define} a normal subgroup $N$
+as one with $gNg\inv = N$ and then proceed to define modding out,
+without taking the time to explain where this definition comes from.
+I remember distinctly this concept as the first time in learning math
+where I didn't understand what was going on.
+Only in hindsight do I see where this definition came from;
+I tried hard to make sure my own presentation didn't have this issue.
+
+I deliberately don't include a chapter on just commutative algebra,
+other than the chapter on rings and ideals.
+The reason is that I always found it easier to learn
+commutative algebra theorems on the fly,
+in the context of something like algebraic number theory or algebraic geometry.
+For example, I finally understand why radicals and the Nullstellensatz were important
+when I saw how they were used in algebraic geometry.
+Before then, I never understood why I cared about them.
+
+\subsection{Calculus}
+I do real analysis by using metric and general topology
+as the main ingredient, since I think it's the most useful
+later on and the most enlightening.
+In some senses, I am still following \cite{ref:pugh}.
+
+\section{Second-year topics}
+\subsection{Measure theory and probability}
+The main inspiration for these lectures is
+Vadim Gorin's 18.175 at MIT;
+\cite{ref:gorin} has really nice lecture notes taken by Tony Zhang.
+I go into a bit more detail on the measure theory,
+and (for now) less into the probability.
+But I think probability is a great way to motivate measure theory anyways,
+and conversely, it's the right setting in which to state things like
+the central limit theorem.
+
+I also found \cite{ref:lebesgue} quite helpful,
+as another possible reference.
+
+\subsection{Complex analysis}
+I picked the approach of presenting the Cauchy-Goursat theorem as given
+(rather than proving a weaker version by Stokes' theorem, or whatever),
+and then deriving the key result that holomorphic functions are analytic
+from it.
+I think this most closely mirrors the ``real-life'' use of complex
+analysis, i.e.\ the computation of contour integrals.
+
+The main reference for this chapter was \cite{ref:dartmouth}, which I recommend.
+
+\subsection{Category theory}
+I enthusiastically recommend \cite{ref:msci},
+from which my chapters are based,
+and which contains much more than I had time to cover.
+
+You might try reading chapters 2--4 in reverse order though:
+I found that limits were much more intuitive than adjoints.
+But your mileage may vary.
+
+The category theory will make more sense as you learn
+more examples of structures: it will help to have read,
+say, the chapters on groups, rings, and modules.
+
+\subsection{Quantum algorithms}
+The exposition given here is based off a full semester
+at MIT taught by Seth Lloyd, in 18.435J \cite{ref:18-435}.
+It is written from a far more mathematical perspective.
+
+I only deal with finite-dimensional Hilbert spaces,
+because that is all that is needed for Shor's algorithm,
+which is the point of this chapter.
+This is not an exposition intended for someone who wishes to seriously
+study quantum mechanics (though it might be a reasonable first read):
+the main purpose is to give students a little appreciation for
+what this ``Shor's algorithm'' that everyone keeps talking about is.
+
+\subsection{Representation theory}
+I staunchly support teaching the representation theory of algebras first,
+and then specializing to the case of groups by looking at $k[G]$.
+The primary influence for the chapters here is \cite{ref:etingof},
+and you might think of what I have here as just some selections
+from the first four chapters of this source.
+
+\subsection{Set theory}
+Set theory is far off the beaten path.
+The notes I have written are based off the class
+I took at Harvard College, Math 145a \cite{ref:145a}.
+
+My general impression is that the way I present set theory
+(trying to remain intuitive and informal in a logical minefield)
+is not standard. Possible other reference: \cite{ref:miquel}.
+
+\section{Advanced topics}
+\subsection{Algebraic topology}
+I cover the fundamental group $\pi_1$ first, because I think the subject is more
+intuitive this way. A possible reference in this topic is \cite{ref:munkres}.
+Only later do I do the much more involved homology groups.
+The famous standard reference for algebraic topology is \cite{ref:hatcher},
+which is what almost everyone uses these days.
+But I also found \cite{ref:maxim752} to be very helpful,
+particularly in the part about cohomology rings.
+
+I don't actually do very much algebraic topology.
+In particular, I think the main reason to learn algebraic topology
+is to see the construction of the homology and cohomology groups
+from the chain complex, and watch the long exact sequence in action.
+The concept of a (co)chain complex comes up often in other contexts as well,
+like the cohomology of sheaves or Galois cohomology.
+Algebraic topology is by far the most natural place to first see them.
+
+I use category theory extensively, being a category-lover.
+
+\subsection{Algebraic number theory}
+I learned from \cite{ref:oggier_NT},
+using \cite{ref:lenstra_chebotarev}
+for the part about the Chebotarev density theorem.
+
+When possible I try to keep the algebraic number theory chapter close at
+heart to an ``olympiad spirit''.
+Factoring in rings like $\ZZ[i]$ and $\ZZ[\sqrt{-5}]$
+is very much an olympiad-flavored topic at heart:
+one is led naturally to the idea of factoring in general rings of integers,
+around which the presentation is built.
+As a reward for the entire buildup, the exposition finishes
+with the application of the Chebotarev density theorem to IMO 2003, Problem 6.
+
+\subsection{Algebraic geometry}
+My preferred introduction to algebraic geometry is \cite{ref:gathmann}
+for a first read and \cite{ref:vakil} for the serious version.
+Both sets of lecture notes are essentially self-contained.
+
+I would like to confess now that I know relatively little algebraic geometry,
+and in my personal opinion the parts on algebraic geometry
+are the weakest part of the Napkin.
+This is reflected in my work here:
+in the entire set of notes I only barely finish defining a scheme,
+the first central definition of the subject.
+
+Nonetheless, I will foolishly still make some remarks about my own studies.
+I think there are three main approaches to beginning the study of schemes:
+\begin{itemize}
+ \ii Only looking at affine and projective varieties,
+ as part of an ``introductory'' class,
+ typically an undergraduate course.
+ \ii Studying affine and projective varieties closely
+ and using them as the motivating example of a \emph{scheme},
+ and then developing algebraic geometry from there.
+ \ii Jumping straight into the definition of a scheme,
+ as in the well-respected and challenging \cite{ref:vakil}.
+\end{itemize}
+I have gone with the second approach,
+since I think that if you don't know what a scheme is,
+then you haven't learned algebraic geometry.
+But on the other hand I think the definition of a scheme is
+difficult to digest without having a good handle first on varieties.
+
+These opinions are based on my personal experience of having
+tried to learn the subject through all
+three approaches over a period of a year.
+Your mileage may vary.
+
+I made the decision to, at least for the second part,
+focus mostly on \emph{affine} schemes.
+These already generalize varieties in several ways,
+and I think the jump is too much
+if one then starts gluing schemes together.
+I would rather that the student first feel like
+they really understand how an affine scheme works,
+before going on into the world where they now have a general scheme $X$
+which is locally affine (but probably not itself affine).
+The entire chapter dedicated to a gazillion examples
+of affine schemes is a hint of this.
+
+\section{Topics not in Napkin}
+\subsection{Analytic number theory}
+I never had time to write up notes in Napkin for these.
+If you're interested though, I recommend \cite{ref:analytic_NT}.
+They are highly accessible and delightful to read.
+The only real prerequisite is a good handle on Cauchy's residue formula.
diff --git a/books/napkin/rep-alg.tex b/books/napkin/rep-alg.tex
new file mode 100644
index 0000000000000000000000000000000000000000..24f5d0004ab58c5516be8c20082c2a60daaa8537
--- /dev/null
+++ b/books/napkin/rep-alg.tex
@@ -0,0 +1,641 @@
+\chapter{Representations of algebras}
+In the 19th century, the word ``group'' hadn't been invented yet;
+all work was done with subsets of $\GL(n)$ or $S_n$.
+Only much later was the abstract definition of a group given:
+an abstract set $G$ which was an object in its own right.
+
+While this abstraction is good for some reasons,
+it is often also useful to work with concrete representations.
+This is the subject of representation theory.
+Linear algebra is easier than abstract algebra,
+so if we can take a group $G$ and represent it concretely
+as a set of matrices in $\GL(n)$,
+this makes the group easier to study.
+This is the \emph{representation theory of groups}:
+how can we take a group and represent its elements as matrices?
+
+\section{Algebras}
+\prototype{$k[x_1, \dots, x_n]$ and $k[G]$.}
+Rather than working directly with groups from the beginning,
+it will be more convenient to deal with so-called $k$-algebras.
+This setting is more natural and general than that of groups,
+so once we develop the theory of algebras well enough,
+it will be fairly painless to specialize to the case of groups.
+
+Colloquially,
+\begin{moral}
+ An associative $k$-algebra is
+ a possibly noncommutative ring with a copy of $k$ inside it.
+ It is thus a $k$-vector space.
+\end{moral}
+% In particular this makes such an algebra into a $k$-vector space.
+I'll present examples before the definition:
+\begin{example}
+ [Examples of $k$-algebras]
+ Let $k$ be any field. The following are examples of $k$-algebras:
+ \begin{enumerate}[(a)]
+ \ii The field $k$ itself.
+ \ii The polynomial ring $k[x_1, \dots, x_n]$.
+ \ii The set of $n \times n$ matrices with entries in $k$,
+ which we denote by $\Mat_n(k)$.
+ Note the multiplication here is not commutative.
+ \ii The set $\Mat(V)$ of linear operators $T : V \to V$,
+ with multiplication given by the composition of operators.
+ (Here $V$ is some vector space over $k$.)
+ This is really the same as the previous example.
+ \end{enumerate}
+\end{example}
+\begin{definition}
+ Let $k$ be a field.
+ A \vocab{$k$-algebra} $A$ is a \emph{possibly noncommutative} ring,
+ equipped with an injective ring homomorphism $k \injto A$
+ (whose image is the ``copy of $k$'').
+ In particular, $1_k \mapsto 1_A$.
+
+ Thus we can consider $k$ as a subset of $A$, and
+ we then additionally require $\lambda \cdot a = a \cdot \lambda$
+ for each $\lambda \in k$ and $a \in A$.
+
+ If the multiplication operation is also commutative,
+ then we say $A$ is a \vocab{commutative algebra}.
+\end{definition}
+\begin{definition}
+ Equivalently, a \vocab{$k$-algebra} $A$ is a
+ $k$-\emph{vector space} which also has an associative,
+ bilinear multiplication operation (with an identity $1_A$).
+ The ``copy of $k$'' is obtained by considering elements
+ $\lambda 1_A$ for each $\lambda \in k$
+ (i.e.\ scaling the identity by the elements of $k$,
+ taking advantage of the vector space structure).
+\end{definition}
+
+\begin{abuse}
+ Some other authors don't require $A$ to be associative or to have
+ an identity, so to them what we have just defined is an
+ ``associative algebra with $1$''.
+ However, this is needlessly wordy for our purposes.
+\end{abuse}
+
+\begin{example}
+ [Group algebra]
+ The \vocab{group algebra} $k[G]$ is the $k$-vector space
+ whose \emph{basis elements} are the elements of a group $G$,
+ and where the product of two basis elements is the group multiplication.
+ For example, suppose $G = \Zc 2 = \{1_G, x\}$.
+ Then
+ \[ k[G] = \left\{ a1_G + bx \mid a,b \in k \right\} \]
+ with multiplication given by
+ \[ (a1_G + bx)(c1_G+dx) = (ac+bd)1_G + (bc+ad)x. \]
+\end{example}
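If you like, you can model this multiplication rule directly (a toy sketch of mine, with $k = \ZZ$ for simplicity, and a pair $(a, b)$ standing for $a1_G + bx$):

```python
# Elements of Z[G] for G = Z/2 = {1_G, x}, encoded as pairs (a, b).
def mul(u, v):
    (a, b), (c, d) = u, v
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd x^2, and x^2 = 1_G
    return (a * c + b * d, b * c + a * d)

print(mul((1, 2), (3, 4)))  # (1 + 2x)(3 + 4x) = 11 + 10x, i.e. (11, 10)
```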
+\begin{ques}
+ When is $k[G]$ commutative?
+\end{ques}
+The example $k[G]$ is very important,
+because (as we will soon see) a representation of the algebra $k[G]$
+amounts to a representation of the group $G$ itself.
+
+It is worth mentioning at this point that:
+\begin{definition}
+ A \vocab{homomorphism} of $k$-algebras $A$, $B$ is a
+ linear map $T : A \to B$ which respects multiplication
+ (i.e.\ $T(xy) = T(x)T(y)$) and which sends $1_A$ to $1_B$.
+ In other words, $T$ is both a homomorphism as a ring and as a vector space.
+\end{definition}
+\begin{definition}
+ Given $k$-algebras $A$ and $B$, the \vocab{direct sum} $A \oplus B$
+ is defined as pairs $a + b$, where addition is done in the obvious way,
+ but we declare $ab = 0$ for any $a \in A$ and $b \in B$.
+\end{definition}
+\begin{ques}
+ Show that $1_A + 1_B$ is the multiplicative identity of $A \oplus B$.
+\end{ques}
+
+\section{Representations}
+\prototype{$k[S_3]$ acting on $k^{\oplus 3}$ is my favorite.}
+
+\begin{definition}
+ A \vocab{representation} of a $k$-algebra $A$
+ (also a \vocab{left $A$-module}) is:
+ \begin{enumerate}[(i)]
+ \ii A $k$-vector space $V$, and
+ \ii An \emph{action} $\cdot$ of $A$ on $V$: thus, for every $a \in A$
+ we can take $v \in V$ and act on it to get $a \cdot v$.
+ This satisfies the usual axioms:
+ \begin{itemize}
+ \ii $(a+b) \cdot v = a \cdot v + b \cdot v$,
+ $a \cdot (v+w) = a \cdot v + a \cdot w$,
+ and $(ab) \cdot v = a \cdot (b \cdot v)$.
+ \ii $\lambda \cdot v = \lambda v$ for $\lambda \in k$.
+ In particular, $1_A \cdot v = v$.
+ \end{itemize}
+ \end{enumerate}
+\end{definition}
+
+\begin{definition}
+ The action of $A$ can be more succinctly described as saying
+ that there is a \emph{$k$-algebra homomorphism} $\rho : A \to \Mat(V)$.
+ (So $a \cdot v = \rho(a)(v)$.)
+ Thus we can also define a \vocab{representation} of $A$ as a pair
+ \[ \left( V, \rho : A \to \Mat(V) \right). \]
+\end{definition}
+This is completely analogous to how a group action $G$ on a set $X$
+with $n$ elements just amounts to a group homomorphism $G \to S_n$.
+From this perspective, what we are really trying to do is:
+\begin{moral}
+ If $A$ is an algebra,
+ we are trying to \emph{represent}
+ the elements of $A$ as matrices.
+\end{moral}
+
+\begin{abuse}
+ While a representation is a pair $(V, \rho)$
+ of \emph{both} the vector space $V$ and the action $\rho$,
+ we frequently will just abbreviate it to ``$V$''.
+ This is probably one of the worst abuses I will commit,
+ but everyone else does it and I fear the mob.
+\end{abuse}
+\begin{abuse}
+ Rather than $\rho(a)(v)$ we will just write $\rho(a)v$.
+\end{abuse}
+
+\begin{example}
+ [Representations of $\Mat(V)$]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Let $A = \Mat_2(\RR)$.
+ Then there is a representation $(\RR^{\oplus 2}, \rho)$
+ where a matrix $a \in A$ just acts by $a \cdot v = \rho(a)(v) = a(v)$.
+
+ \ii More generally, given a vector space $V$ over any field $k$,
+ there is an obvious representation of $A = \Mat(V)$
+ by $a \cdot v = \rho(a)(v) = a(v)$ (since $a \in \Mat(V)$).
+
+ From the matrix perspective: if $A = \Mat(V)$,
+ then we can just represent $A$ as matrices over $V$.
+
+ \ii There are other representations of $A = \Mat_2(\RR)$.
+ A silly example is the representation $(\RR^{\oplus 4}, \rho)$ given by
+ \[
+ \rho :
+ \begin{bmatrix} a & b \\ c & d \end{bmatrix}
+ \mapsto
+ \begin{bmatrix} a & b & 0 & 0 \\ c & d & 0 & 0 \\
+ 0 & 0 & a & b \\ 0 & 0 & c & d \end{bmatrix} .
+ \]
+ More abstractly, viewing $\RR^{\oplus 4}$ as
+ $(\RR^{\oplus 2}) \oplus (\RR^{\oplus 2})$,
+ this is $a \cdot (v_1,v_2) = (a \cdot v_1, a \cdot v_2)$.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Representations of polynomial algebras]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Let $A = k$.
+ Then a representation of $k$ is just any $k$-vector space $V$.
+
+ \ii If $A = k[x]$,
+ then a representation $(V, \rho)$ of $A$
+ amounts to a vector space $V$ plus the choice of
+ a linear operator $T \in \Mat(V)$ (by $T = \rho(x)$).
+
+ \ii If $A = k[x] / (x^2)$
+ then a representation $(V, \rho)$ of $A$
+ amounts to a vector space $V$ plus the choice of
+ a linear operator $T \in \Mat(V)$ satisfying $T^2 = 0$.
+
+ \ii We can create arbitrary ``functional equations'' with this pattern.
+ For example, if $A = k[x,y] / (x^2 - x+y, y^4)$
+ then representing $A$ by $V$ amounts to finding commuting operators
+ $S, T \in \Mat(V)$ satisfying $S^2 = S-T$ and $T^4 = 0$.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Representations of groups]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Let $A = \RR[S_3]$.
+ Then let
+ \[ V = \RR^{\oplus 3} = \{ (x,y,z) \mid x,y,z \in \RR \}. \]
+ We can let $A$ act on $V$ as follows:
+ given a permutation $\pi \in S_3$, we permute the corresponding
+ coordinates in $V$.
+ So for example,
+ \[ \text{if } \pi = (1 \; 2)
+ \text{ then } \pi \cdot (x,y,z) = (y,x,z). \]
+ This extends linearly to let $A$ act on $V$,
+ by permuting the coordinates.
+
+ From the matrix perspective, what we are doing
+ is representing the permutations in $S_3$
+ as permutation matrices on $k^{\oplus 3}$, like
+ \[ (1 \; 2)
+ \mapsto \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&1 \end{bmatrix}. \]
+
+ \ii More generally, let $A = k[G]$.
+ Then a representation $(V, \rho)$ of $A$
+ amounts to a group homomorphism $\psi : G \to \GL(V)$.
+ (In particular, $\rho(1_G) = \id_V$.)
+ We call this a \vocab{group representation} of $G$.
+ \end{enumerate}
+\end{example}
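Since the coordinate-permuting action is so concrete, we can sanity-check the homomorphism property numerically. The following Python snippet (an illustration of mine, not part of the text) verifies that acting by a composition $\sigma\tau$ is the same as acting by $\tau$ and then by $\sigma$.

```python
from itertools import permutations

def act(p, v):
    """Permute coordinates of v: entry in slot i moves to slot p[i]."""
    w = [None] * len(v)
    for i, x in enumerate(v):
        w[p[i]] = x
    return w

def compose(p, q):
    """Composition of permutations: (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

v = [10, 20, 30]
swap12 = (1, 0, 2)                   # the transposition (1 2), zero-indexed
assert act(swap12, v) == [20, 10, 30]  # matches (1 2).(x,y,z) = (y,x,z)

# homomorphism check: acting by p o q equals acting by q, then by p
for p in permutations(range(3)):
    for q in permutations(range(3)):
        assert act(compose(p, q), v) == act(p, act(q, v))
```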
+\begin{example}[Regular representation]
+ Any $k$-algebra $A$ is a representation $(A, \rho)$ over itself,
+ with $a \cdot b = \rho(a)(b) = ab$ (i.e.\ multiplication given by $A$).
+ This is called the \vocab{regular representation}, denoted $\Reg(A)$.
+\end{example}
+
+\section{Direct sums}
+\prototype{The example with $\RR[S_3]$ seems best.}
+\begin{definition}
+ Let $A$ be $k$-algebra and let $V = (V, \rho_V)$ and $W = (W, \rho_W)$
+ be two representations of $A$.
+ Then $V \oplus W$ is a representation, with action $\rho$ given by
+ \[ a \cdot (v,w) = (a \cdot v, a \cdot w). \]
+ This representation is called the \vocab{direct sum} of $V$ and $W$.
+\end{definition}
+\begin{example}
+ Earlier we let $\Mat_2(\RR)$ act on $\RR^{\oplus 4}$ by
+ \[
+ \rho :
+ \begin{bmatrix} a & b \\ c & d \end{bmatrix}
+ \mapsto
+ \begin{bmatrix} a & b & 0 & 0 \\ c & d & 0 & 0 \\
+ 0 & 0 & a & b \\ 0 & 0 & c & d \end{bmatrix} .
+ \]
+ So this is just a direct sum of two two-dimensional representations.
+\end{example}
+More generally, given representations $(V, \rho_V)$ and $(W, \rho_W)$
+the representation $\rho$ of $V \oplus W$ looks like
+\[ \rho(a) =
+ \begin{bmatrix}
+ \rho_V(a) & 0 \\ 0 & \rho_W(a)
+ \end{bmatrix}.
+\]
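Here is a small Python sketch of the block-diagonal picture (my own illustration, not from the text): the direct sum matrix acts on $(v, w)$ exactly as $\rho_V(a)$ acts on $v$ and $\rho_W(a)$ acts on $w$.

```python
def matvec(M, v):
    """Matrix-vector product over plain Python lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def block_diag(A, B):
    """Block-diagonal matrix [[A, 0], [0, B]]."""
    n, m = len(A), len(B)
    top = [row + [0] * m for row in A]
    bot = [[0] * n + row for row in B]
    return top + bot

A = [[1, 2], [3, 4]]   # plays the role of rho_V(a)
B = [[5, 6], [7, 8]]   # plays the role of rho_W(a)
v, w = [1, 1], [2, -1]

# the direct-sum action on (v, w) is componentwise
assert matvec(block_diag(A, B), v + w) == matvec(A, v) + matvec(B, w)
```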
+\begin{example}[Representation of $S_n$ decomposes]
+ Let $A = \RR[S_3]$ again,
+ acting via permutation of coordinates on
+ \[ V = \RR^{\oplus 3} = \{ (x,y,z) \mid x,y,z \in \RR \}. \]
+ Consider the two subspaces
+ \begin{align*}
+ W_1 &= \left\{ (t,t,t) \mid t \in \RR \right\} \\
+ W_2 &= \left\{ (x,y,z) \mid x+y+z = 0 \right\}.
+ \end{align*}
+ Note $V = W_1 \oplus W_2$ as vector spaces.
+ But each of $W_1$ and $W_2$ is a subrepresentation
+ (since the action of $A$ keeps each $W_i$ in place),
+ so $V = W_1 \oplus W_2$ as representations too.
+\end{example}
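One can check the invariance claim numerically; this Python snippet (my own sanity check) confirms that permuting coordinates fixes $W_1$ pointwise and preserves the zero-sum condition defining $W_2$.

```python
from itertools import permutations

def act(p, v):
    """Permute coordinates of v: entry in slot i moves to slot p[i]."""
    w = [None] * 3
    for i, x in enumerate(v):
        w[p[i]] = x
    return w

for p in permutations(range(3)):
    # W_1: constant vectors are fixed by every permutation
    assert act(p, [7, 7, 7]) == [7, 7, 7]
    # W_2: zero-sum vectors stay zero-sum
    x, y = 3, -5
    assert sum(act(p, [x, y, -x - y])) == 0
```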
+
+Direct sums also come up when we play with algebras.
+\begin{proposition}[Representations of $A \oplus B$ are $V_A \oplus V_B$]
+ \label{prop:rep_direct_sum}
+ Let $A$ and $B$ be $k$-algebras.
+ Then every representation of $A \oplus B$ is of the form
+ \[ V_A \oplus V_B \]
+ where $V_A$ and $V_B$ are representations of $A$ and $B$, respectively.
+\end{proposition}
+\begin{proof}[Sketch of Proof]
+ Let $(V, \rho)$ be a representation of $A \oplus B$.
+ Since $1_{A \oplus B} = 1_A + 1_B$, every $v \in V$ satisfies
+ $v = \rho(1_A)v + \rho(1_B)v$.
+ So if we set $V_A = \{ \rho(1_A)v \mid v \in V \}$
+ and $V_B = \{ \rho(1_B)v \mid v \in V \}$, then $V = V_A + V_B$.
+ These intersect trivially: if $w = \rho(1_A) v = \rho(1_B) v'$,
+ then $w = \rho(1_A)w = \rho(1_A 1_B) v' = \rho(0) v' = 0_V$.
+\end{proof}
+
+\section{Irreducible and indecomposable representations}
+\prototype{$k[S_3]$ decomposes as the sum of two spaces.}
+
+One of the goals of representation theory will be to classify
+all possible representations of an algebra $A$.
+If we want to have a hope of doing this,
+then we want to discard ``silly'' representations such as
+\[
+ \rho :
+ \begin{bmatrix} a & b \\ c & d \end{bmatrix}
+ \mapsto
+ \begin{bmatrix} a & b & 0 & 0 \\ c & d & 0 & 0 \\
+ 0 & 0 & a & b \\ 0 & 0 & c & d \end{bmatrix}
+\]
+and focus our attention instead on ``irreducible'' representations.
+This motivates:
+\begin{definition}
+ Let $V$ be a representation of $A$.
+ A \vocab{subrepresentation} $W \subseteq V$ is a subspace $W$
+ with the property that for any $a \in A$ and $w \in W$,
+ $a \cdot w \in W$.
+ In other words, this subspace is invariant under actions by $A$.
+\end{definition}
+Thus for example if $V = W_1 \oplus W_2$ for representations $W_1$, $W_2$
+then $W_1$ and $W_2$ are subrepresentations of $V$.
+
+\begin{definition}
+ If $V$ has no proper nonzero subrepresentations then it is \vocab{irreducible}.
+ If there is no pair of proper subrepresentations $W_1$, $W_2$ such that $V = W_1 \oplus W_2$,
+ then we say $V$ is \vocab{indecomposable}.
+\end{definition}
+\begin{definition}
+ For brevity, an \vocab{irrep} of an algebra/group is a
+ \emph{finite-dimensional} irreducible representation.
+\end{definition}
+
+\begin{example}[Representation of $S_n$ decomposes]
+ Let $A = \RR[S_3]$ again, acting via permutation of coordinates on
+ \[ V = \RR^{\oplus 3} = \{ (x,y,z) \mid x,y,z \in \RR \}. \]
+ Consider again the two subspaces
+ \begin{align*}
+ W_1 &= \left\{ (t,t,t) \mid t \in \RR \right\} \\
+ W_2 &= \left\{ (x,y,z) \mid x+y+z = 0 \right\}.
+ \end{align*}
+ As we've seen, $V = W_1 \oplus W_2$, and thus $V$ is not irreducible.
+ But one can show that $W_1$ and $W_2$ are irreducible
+ (and hence indecomposable) as follows.
+ \begin{itemize}
+ \ii For $W_1$ it's obvious, since $W_1$ is one-dimensional.
+ \ii For $W_2$, consider any vector $w = (a,b,c)$
+ with $a+b+c=0$ and not all zero. Then WLOG we can assume $a \neq b$
+ (since not all three coordinates are equal).
+ In that case, $(1 \; 2)$ sends $w$ to $w' = (b,a,c)$,
+ so $w - w' = (a-b, b-a, 0)$ is a nonzero multiple of $(1,-1,0)$.
+ Applying $(2 \; 3)$ to it gives $(1,0,-1)$ as well,
+ and these two vectors span $W_2$.
+ Hence any nonzero subrepresentation of $W_2$ is all of $W_2$.
+ \end{itemize}
+ Thus $V$ breaks down completely into irreps.
+\end{example}
+
+Unfortunately, if $W$ is a subrepresentation of $V$,
+then it is not necessarily the case that we can find a
+complementary \emph{subrepresentation} $W'$ such that $V = W \oplus W'$.
+Put another way, if $V$ is reducible, we know that it has a subrepresentation,
+but a decomposition requires \emph{two} subrepresentations.
+Here is a standard counterexample:
+\begin{exercise}
+ \label{exer:irred_not_indecomp}
+ Let $A = \RR[x]$, and $V = \RR^{\oplus 2}$ be the representation with action
+ \[ \rho(x) = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}. \]
+ Show that the only nonzero proper subrepresentation is $W = \{ (t,0) \mid t \in \RR \}$.
+ So $V$ is not irreducible, but it is indecomposable.
+\end{exercise}
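If you want to experiment before proving it, here is a quick numeric check (mine, not part of the exercise) that the line $W$ is invariant while the complementary line $\{(0,t)\}$ is not.

```python
def matvec(M, v):
    """Matrix-vector product for 2x2 matrices."""
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

rho_x = [[1, 1], [0, 1]]  # the action of x from the exercise

# W = span{(1,0)} is invariant: rho(x) fixes (1,0)
assert matvec(rho_x, [1, 0]) == [1, 0]

# but span{(0,1)} is not invariant: (0,1) maps to (1,1),
# which lies outside that line, so no complementary subrepresentation there
assert matvec(rho_x, [0, 1]) == [1, 1]
```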
+
+Here is a slightly more optimistic example,
+and the ``prototypical example'' that you should keep in mind.
+
+\begin{exercise}
+ Let $A = \Mat_d(k)$ and consider the obvious representation $k^{\oplus d}$
+ of $A$ that we described earlier. Show that it is irreducible.
+ (This is obvious if you understand the definitions well enough.)
+\end{exercise}
+
+\section{Morphisms of representations}
+We now proceed to define the morphisms between representations.
+
+\begin{definition}
+ Let $(V, \rho_V)$ and $(W, \rho_W)$ be representations of $A$.
+ An \vocab{intertwining operator}, or \vocab{morphism}, is a
+ linear map $T : V \to W$ such that
+ \[ T(a \cdot v) = a \cdot T(v) \]
+ for any $a \in A$, $v \in V$.
+ (Note that the first $\cdot$ is the action of $\rho_V$
+ and the second $\cdot$ is the action of $\rho_W$.)
+ This is exactly what you expect if you think that $V$ and $W$
+ are ``left $A$-modules''.
+ If $T$ is invertible, then it is an \vocab{isomorphism} of representations
+ and we say $V \cong W$.
+\end{definition}
+\begin{remark}
+ [For commutative diagram lovers]
+ The condition $T(a \cdot v) = a \cdot T(v)$ can be read as saying that
+ \begin{center}
+ \begin{tikzcd}
+ V \ar[r, "\rho_V(a)"] \ar[d, "T"'] & V \ar[d, "T"] \\
+ W \ar[r, "\rho_W(a)"'] & W
+ \end{tikzcd}
+ \end{center}
+ commutes for any $a \in A$.
+\end{remark}
+
+\begin{remark}
+ [For category lovers]
+ A representation is just a $k$-linear functor from the
+ one-object category $\{\ast\}$ (so $\Hom(\ast, \ast) \cong A$)
+ to the abelian category $\catname{Vect}_k$.
+ Then an intertwining operator is just a \emph{natural transformation}.
+\end{remark}
+
+Here are some examples of intertwining operators.
+\begin{example}[Intertwining operators]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii For any $\lambda \in k$, the scalar map $T(v) = \lambda v$
+ is intertwining.
+ \ii If $W \subseteq V$ is a subrepresentation,
+ then the inclusion $W \injto V$ is an intertwining operator.
+ \ii The projection map $V_1 \oplus V_2 \surjto V_1$
+ is an intertwining operator.
+ \ii Let $V = \RR^{\oplus 2}$
+ and represent $A = k[x]$ by $(V, \rho)$ where
+ \[ \rho(x) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}. \]
+ Thus $\rho(x)$ is rotation by $90\dg$ around the origin (clockwise, say).
+ Let $T$ be rotation by $30\dg$.
+ Then $T : V \to V$ is intertwining (the rotations commute).
+ \end{enumerate}
+\end{example}
+
+\begin{exercise}[Kernel and image are subrepresentations]
+ Let $T : V \to W$ be an intertwining operator.
+ \begin{enumerate}[(a)]
+ \ii Show that $\ker T \subseteq V$ is a subrepresentation of $V$.
+ \ii Show that $\img T \subseteq W$ is a subrepresentation of $W$.
+ \end{enumerate}
+\end{exercise}
+
+The previous exercise gives us the famous Schur's lemma.
+\begin{theorem}
+ [Schur's lemma]
+ Let $V$ and $W$ be representations of a $k$-algebra $A$.
+ Let $T : V \to W$ be a \emph{nonzero} intertwining operator.
+ Then
+ \begin{enumerate}[(a)]
+ \ii If $V$ is irreducible, then $T$ is injective.
+ \ii If $W$ is irreducible, then $T$ is surjective.
+ \end{enumerate}
+ In particular if both $V$ and $W$ are irreducible then $T$
+ is an isomorphism.
+\end{theorem}
+An important special case is if $k$ is algebraically closed:
+then the only intertwining operators $T \colon V \to V$
+are multiplication by a constant.
+\begin{theorem}
+ [Schur's lemma for algebraically closed fields]
+ Let $k$ be an algebraically closed field.
+ Let $V$ be an irrep of a $k$-algebra $A$.
+ Then any intertwining operator $T \colon V \to V$ is multiplication by a scalar.
+ \label{thm:schur_algclosed}
+\end{theorem}
+\begin{exercise}
+ Use the fact that $T$ has an eigenvalue $\lambda$ to
+ deduce this from Schur's lemma.
+ (Consider $T - \lambda \cdot \id_V$, and use Schur to deduce it's zero.)
+\end{exercise}
+We have already seen the counterexample of rotation by $90\dg$ for $k = \RR$;
+this was the same counterexample we gave to the assertion that all linear maps
+have eigenvalues.
+
+\section{The representations of $\Mat_d(k)$}
+To give an example of the kind of progress already possible, we prove:
+\begin{theorem}
+ [Representations of $\Mat_d(k)$]
+ \label{thm:rep_1mat}
+ Let $k$ be any field, $d$ be a positive integer and
+ let $W = k^{\oplus d}$ be the obvious representation of $A = \Mat_d(k)$.
+ Then the only finite-dimensional representations
+ of $\Mat_d(k)$ are $W^{\oplus n}$
+ for some positive integer $n$ (up to isomorphism).
+ In particular, $W^{\oplus n}$ is irreducible if and only if $n=1$.
+\end{theorem}
+For concreteness, I'll just sketch the case $d=2$,
+since the same proof applies verbatim to other situations.
+This shows that the examples of representations of $\Mat_2(\RR)$
+we gave earlier are the only ones.
+
+As we've said this is essentially a functional equation.
+The algebra $A = \Mat_2(k)$ has basis given by four matrices
+\[
+ E_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix},
+ \qquad
+ E_2 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix},
+ \qquad
+ E_3 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},
+ \qquad
+ E_4 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}
+\]
+satisfying relations like $E_1 + E_2 = \id_A$, $E_i^2 = E_i$, $E_1E_2 = 0$, etc.
+So let $V$ be a representation of $A$, and let $M_i = \rho(E_i)$ for each $i$;
+we want to classify the possible matrices $M_i$ on $V$
+satisfying the same functional equations.
+This is because, for example,
+\[ \id_V = \rho(\id_A) = \rho(E_1+E_2) = M_1 + M_2. \]
+By the same token $M_1M_3 = M_3$.
+Proceeding in a similar way, we can obtain the following multiplication table:
+\[
+ \begin{array}{r|llll}
+ \times & M_1 & M_2 & M_3 & M_4 \\ \hline
+ M_1 & M_1 & 0 & M_3 & 0 \\
+ M_2 & 0 & M_2 & 0 & M_4 \\
+ M_3 & 0 & M_3 & 0 & M_1 \\
+ M_4 & M_4 & 0 & M_2 & 0
+ \end{array}
+ \qquad \text{and} \qquad
+ M_1 + M_2 = \id_V
+\]
+Note that each $M_i$ is a linear operator $V \to V$;
+for all we know, it could have hundreds of entries.
+Nonetheless, given the multiplication table of the basis $E_i$
+we get the corresponding table for the $M_i$.
+
+So, in short, the problem is as follows:
+\begin{moral}
+ Find all vector spaces $V$ and quadruples of matrices $M_i$
+ satisfying the multiplication table above.
+\end{moral}
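Since the multiplication table is finite data, we can verify it mechanically. The Python snippet below (a sanity check of mine, not part of the text) recomputes all sixteen products of the $E_i$ and compares against the table, with rows multiplied against columns.

```python
def matmul(A, B):
    """Product of two 2x2 matrices as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E1 = [[1, 0], [0, 0]]
E2 = [[0, 0], [0, 1]]
E3 = [[0, 1], [0, 0]]
E4 = [[0, 0], [1, 0]]
Z  = [[0, 0], [0, 0]]
I  = [[1, 0], [0, 1]]

# E1 + E2 = id
assert [[E1[i][j] + E2[i][j] for j in range(2)] for i in range(2)] == I

# the multiplication table from the text: entry (i, j) is E_i * E_j
table = {
    (1, 1): E1, (1, 2): Z,  (1, 3): E3, (1, 4): Z,
    (2, 1): Z,  (2, 2): E2, (2, 3): Z,  (2, 4): E4,
    (3, 1): Z,  (3, 2): E3, (3, 3): Z,  (3, 4): E1,
    (4, 1): E4, (4, 2): Z,  (4, 3): E2, (4, 4): Z,
}
E = {1: E1, 2: E2, 3: E3, 4: E4}
for (i, j), expected in table.items():
    assert matmul(E[i], E[j]) == expected
```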
+
+Let $W_1 = M_1(V)$ and $W_2 = M_2(V)$ be the images of $M_1$ and $M_2$.
+\begin{claim}
+ $V = W_1 \oplus W_2$.
+\end{claim}
+\begin{proof}
+ First, note that for any $v \in V$ we have
+ \[ v = \rho(\id)(v) = (M_1+M_2)v = M_1v + M_2v. \]
+ Moreover, we have that $W_1 \cap W_2 = \{0\}$, because if
+ $M_1v_1 = M_2v_2$ then $M_1v_1 = M_1(M_1v_1) = M_1(M_2v_2) = 0$.
+\end{proof}
+\begin{claim}
+ $W_1 \cong W_2$.
+\end{claim}
+\begin{proof}
+ Check that the maps
+ \[ W_1 \taking{\times M_4} W_2
+ \quad\text{and}\quad
+ W_2 \taking{\times M_3} W_1 \]
+ are well-defined and mutually inverse.
+\end{proof}
+Now, let $e_1, \dots, e_n$ be basis elements of $W_1$;
+thus $M_4e_1$, \dots, $M_4e_n$ are basis elements of $W_2$.
+However, each $\{e_j, M_4e_j\}$ forms a basis of a subrepresentation
+isomorphic to $W = k^{\oplus 2}$ (what's the isomorphism?).
+
+This finally implies that all representations of $A$
+are of the form $W^{\oplus n}$.
+In particular, $W$ is irreducible because there are no nonzero representations
+of smaller dimension at all!
+
+\section{\problemhead}
+\begin{dproblem}
+ \label{prob:one_dim}
+ Suppose we have \emph{one-dimensional} representations
+ $V_1 = (V_1, \rho_1)$ and $V_2 = (V_2, \rho_2)$ of $A$.
+ Show that $V_1 \cong V_2$ if and only if
+ $\rho_1(a)$ and $\rho_2(a)$ are multiplication
+ by the same constant for every $a \in A$.
+\end{dproblem}
+
+\begin{dproblem}
+ [Schur's lemma for commutative algebras]
+ Let $A$ be a \emph{commutative} algebra
+ over an algebraically closed field $k$.
+ Prove that any irrep of $A$ is one-dimensional.
+ \begin{hint}
+ For any $a \in A$, the map $v \mapsto a \cdot v$ is intertwining.
+ \end{hint}
+\end{dproblem}
+
+\begin{sproblem}
+ \label{prob:reg_mat}
+ Let $(V, \rho)$ be a representation of $A$.
+ Then $\Mat(V)$ is a representation of $A$
+ with action given by
+ \[ a \cdot T = \rho(a) \circ T \]
+ for $T \in \Mat(V)$.
+ \begin{enumerate}[(a)]
+ \ii Show that $\rho : \Reg(A) \to \Mat(V)$ is an intertwining operator.
+ \ii If $V$ is $d$-dimensional, show that $\Mat(V) \cong V^{\oplus d}$
+ as representations of $A$.
+ \end{enumerate}
+ \begin{hint}
+ For part (b), pick a basis and do $T \mapsto (T(e_1), \dots, T(e_n))$.
+ \end{hint}
+\end{sproblem}
+
+\begin{sproblem}
+ \label{prob:regA_intertwine}
+ Fix an algebra $A$.
+ Find all intertwining operators
+ \[ T : \Reg(A) \to \Reg(A). \]
+ \begin{hint}
+ Right multiplication.
+ \end{hint}
+ \begin{sol}
+ The operators are those of the form $T(a) = ab$
+ for some fixed $b \in A$.
+ One can check these work, since for $c \in A$
+ we have $T(c \cdot a) = cab = c \cdot T(a)$.
+ To see they are the only ones, note that
+ $T(a) = T(a \cdot 1_A) = a \cdot T(1_A)$ for any $a \in A$.
+ \end{sol}
+\end{sproblem}
+
+\begin{problem}
+ \gim
+ Let $(V, \rho)$ be an \emph{indecomposable}
+ (not irreducible) representation of an algebra $A$.
+ Prove that any intertwining operator $T : V \to V$
+ is either nilpotent or an isomorphism.
+
+ (Note that \Cref{thm:schur_algclosed} doesn't apply,
+ since the field $k$ may not be algebraically closed.)
+ \begin{hint}
+ Apply \Cref{prob:endomorphism_eventual_lemma}.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/rings.tex b/books/napkin/rings.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ded43abdf8b85a9227dbcccd0f810cf213529ab2
--- /dev/null
+++ b/books/napkin/rings.tex
@@ -0,0 +1,501 @@
+\chapter{Flavors of rings}
+We continue our exploration of rings by considering
+some nice-ness properties that rings or ideals can satisfy,
+which will be valuable later on.
+As before, number theory is interlaced as motivation.
+I guess I can tell you at the outset what the completed table
+is going to look like, so you know what to expect.
+
+\begin{center}
+ \begin{tabular}{lll}
+ Ring noun & Ideal adjective & Relation \\ \hline
+ PID & principal &
+ $R$ is a PID $\iff$ $R$ is an integral domain, \\
+ && \qquad and every $I$ is principal \\
+ Noetherian ring & finitely generated &
+ $R$ is Noetherian $\iff$ every $I$ is fin.\ gen. \\
+ field & maximal & $R/I$ is a field $\iff$ $I$ is maximal \\
+ integral domain & prime & $R/I$ is an integral domain
+ $\iff$ $I$ is prime \\
+ \end{tabular}
+\end{center}
+
+\section{Fields}
+\prototype{$\QQ$ is a field, but $\ZZ$ is not.}
+
+We already saw this definition last chapter:
+a field $K$ is a nontrivial ring for which every nonzero element is a unit.
+
+In particular, there are only two ideals in a field:
+the ideal $(0)$, which is maximal, and the entire field $K$.
+
+\section{Integral domains}
+\prototype{$\ZZ$ is an integral domain.}
+
+In practice, we are often not so lucky that we have a full-fledged field.
+Now it would be nice if we could still conclude the zero product property:
+if $ab = 0$ then either $a = 0$ or $b = 0$.
+If our ring is a field, this is true: if $b \neq 0$,
+then we can multiply by $b\inv$ to get $a = 0$.
+But many other rings we consider like $\ZZ$ and $\ZZ[x]$ also have this property,
+despite not having division.
+
+Not all rings though: in $\Zc{15}$,
+\[ 3 \cdot 5 \equiv 0 \pmod{15}. \]
+If $a, b \neq 0$ but $ab=0$ then we say $a$ and $b$ are \vocab{zero divisors}
+of the ring $R$.
+So we give a name to such rings.
+\begin{definition}
+ A nontrivial ring with no zero divisors
+ is called an \vocab{integral domain}.\footnote{Some
+ authors abbreviate this to ``domain'', notably Artin.}
+\end{definition}
+\begin{ques}
+ Show that a field is an integral domain.
+\end{ques}
+\begin{exercise}
+ [Cancellation in integral domains]
+ Suppose $ac = bc$ in an integral domain, and $c \neq 0$.
+ Show that $a = b$.
+ (There is no $c\inv$ to multiply by,
+ so you have to use the definition.)
+\end{exercise}
+
+\begin{example}
+ [Examples of integral domains]
+ Every field is an integral domain,
+ so all the previous examples apply.
+ In addition:
+ \begin{enumerate}[(a)]
+ \ii $\ZZ$ is an integral domain, but it is not a field.
+ \ii $\RR[x]$ is not a field,
+ since there is no polynomial $P(x)$ with $xP(x) = 1$.
+ However, $\RR[x]$ is an integral domain:
+ if $P, Q \neq 0$, then the leading coefficient of $P(x) Q(x)$
+ is the product of the leading coefficients of $P$ and $Q$,
+ so $P(x) Q(x) \neq 0$.
+ \ii $\ZZ[x]$ is also an example of an integral domain.
+ In fact, $R[x]$ is an integral domain for any integral domain $R$ (why?).
+ \ii $\Zc n$ is a field (hence integral domain)
+ exactly when $n$ is prime.
+ When $n$ is not prime, it is a ring but not an integral domain.
+ \end{enumerate}
+ The trivial ring $0$ is \emph{not} considered an integral domain.
+\end{example}
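For finite rings $\Zc n$, the zero-divisor condition can be checked by brute force; here is a small Python check of mine contrasting $\Zc{15}$ with the field $\Zc 7$.

```python
def zero_divisors(n):
    """Pairs of nonzero residues a, b mod n with a*b = 0 (mod n)."""
    return [(a, b) for a in range(1, n) for b in range(1, n)
            if (a * b) % n == 0]

assert (3, 5) in zero_divisors(15)   # 3 * 5 = 15 = 0 (mod 15)
assert zero_divisors(7) == []        # Z/7 is an integral domain (a field)
```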
+
+At this point, we go ahead and say:
+\begin{definition}
+ An integral domain where all ideals are principal
+ is called a \vocab{principal ideal domain (PID)}.
+\end{definition}
+The ring $\ZZ/6\ZZ$ is an example of a ring
+which is a principal ideal ring, but not an integral domain.
+As we alluded to earlier, we will never really use ``principal ideal ring''
+in any real way: we typically will want to strengthen it to PID.
+
+\section{Prime ideals}
+\prototype{$(5)$ is a prime ideal of $\ZZ$.}
+
+We know that every integer can be factored (up to sign)
+as a unique product of primes; for example $15 = 3 \cdot 5$
+and $-10 = -2 \cdot 5$.
+You might remember the proof involves the so-called B\'ezout's lemma,
+which essentially says that $(a,b) = (\gcd(a,b))$;
+in other words we've carefully used the fact that $\ZZ$ is a PID.
+
+It turns out that for general rings, the situation is not as nice
+as factoring elements because most rings are not PID's.
+The classic example of something going wrong is
+\[ 6 = 2 \cdot 3 = \left( 1-\sqrt{-5} \right)\left( 1+\sqrt{-5} \right) \]
+in $\ZZ[\sqrt{-5}]$.
+Nonetheless, we can sidestep the issue
+and talk about factoring \emph{ideals}:
+somehow the example $10 = 2 \cdot 5$ should be $(10) = (2) \cdot (5)$,
+which says ``every multiple of $10$ is the product of a
+multiple of $2$ and a multiple of $5$''.
+I'd have to tell you then how to multiply two ideals, which I do
+in the chapter on unique factorization.
+
+Let's at least figure out what primes are.
+In $\ZZ$, we have that $p \neq 0, \pm 1$ is prime if whenever $p \mid xy$,
+either $p \mid x$ or $p \mid y$.
+We port over this definition to our world of ideals.
+\begin{definition}
+ \label{def:prime_ideal}
+ A \emph{proper} ideal $I \subsetneq R$ is a \vocab{prime ideal}
+ if whenever $xy \in I$, either $x \in I$ or $y \in I$.
+\end{definition}
+The condition that $I$ is proper is analogous to the
+fact that we don't consider $1$ to be a prime number.
+
+\begin{example}[Examples and non-examples of prime ideals]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The ideal $(7)$ of $\ZZ$ is prime.
+ \ii The ideal $(8)$ of $\ZZ$ is not prime,
+ since $2 \cdot 4 = 8$.
+
+ \ii The ideal $(x)$ of $\ZZ[x]$ is prime.
+ \ii The ideal $(x^2)$ of $\ZZ[x]$ is not prime,
+ since $x \cdot x = x^2$.
+
+ \ii The ideal $(3,x)$ of $\ZZ[x]$ is prime.
+ This is actually easiest to see
+ using \Cref{thm:prime_ideal_quotient} below.
+
+ \ii The ideal $(5) = 5\ZZ + 5i\ZZ$ of $\ZZ[i]$
+ is not prime, since the elements
+ $3+i$ and $3-i$ have product $10 \in (5)$,
+ yet neither is itself in $(5)$.
+ \end{enumerate}
+\end{example}
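Example (f) is concrete enough to verify by machine. The following Python snippet (an illustration of mine, with Gaussian integers stored as pairs $(a,b) = a+bi$) checks that $(3+i)(3-i) = 10$ lies in $(5)$ while neither factor does.

```python
def gmul(z, w):
    """Multiply Gaussian integers stored as pairs (a, b) = a + bi."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def in_five(z):
    """Membership in (5) = 5Z[i]: both coordinates divisible by 5."""
    return z[0] % 5 == 0 and z[1] % 5 == 0

p, q = (3, 1), (3, -1)
assert gmul(p, q) == (10, 0)              # (3+i)(3-i) = 10
assert in_five(gmul(p, q))                # 10 lies in (5)
assert not in_five(p) and not in_five(q)  # but neither factor does
```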
+\begin{remark}
+ \label{rem:unit_sign_issue}
+ Ideals have the nice property that they get rid of ``sign issues''.
+ For example, in $\ZZ$, do we consider $-3$ to be a prime?
+ When phrased with ideals, this annoyance goes away: $(-3) = (3)$.
+ More generally, for a ring $R$, talking about ideals
+ lets us ignore multiplication by a unit.
+ (Note that $-1$ is a unit in $\ZZ$.)
+\end{remark}
+
+\begin{exercise}
+ What do you call a ring $R$ for which the zero ideal $(0)$ is prime?
+\end{exercise}
+
+We also have:
+\begin{theorem}[Prime ideal $\iff$ quotient is integral domain]
+ \label{thm:prime_ideal_quotient}
+ An ideal $I$ is prime if and only if $R/I$ is an integral domain.
+\end{theorem}
+\begin{exercise}
+ [Mandatory]
+ Convince yourself the theorem is true;
+ it is just definition chasing.
+ (A possible start is to consider $R = \ZZ$ and $I = (15)$.)
+\end{exercise}
+
+I now must regrettably inform you that unique factorization is still
+not true even with the notion of a ``prime'' ideal
+(though again I haven't told you how to multiply two ideals yet).
+But it will become true with some additional assumptions
+that will arise in algebraic number theory
+(relevant buzzword: Dedekind domain).
+
+\section{Maximal ideals}
+\prototype{The ideal $(x,5)$ is maximal in $\ZZ[x]$, by taking the quotient.}
+
+Here's another flavor of an ideal.
+\begin{definition}
+ A proper ideal $I$ of a ring $R$ is \vocab{maximal} if
+ it is not contained in any other proper ideal.
+\end{definition}
+
+\begin{example}
+ [Examples of maximal ideals]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The ideal $I = (7)$ of $\ZZ$ is maximal, because
+ if an ideal $J$ contains $7$
+ and an element $n$ not in $I$
+ it must contain $\gcd(7,n) = 1$, and hence $J = \ZZ$.
+ \ii The ideal $(x)$ is \emph{not} maximal in $\ZZ[x]$,
+ because it's contained in $(x,5)$ (among others).
+ \ii On the other hand, $(x,5)$ is indeed maximal in $\ZZ[x]$.
+ This is actually easiest to verify using
+ \Cref{thm:max_ideal_quotient} below.
+ \ii Also, $(x)$ is maximal in $\CC[x]$,
+ again appealing to \Cref{thm:max_ideal_quotient} below.
+ \end{enumerate}
+\end{example}
+
+\begin{exercise}
+ What do you call a ring $R$ for which the zero ideal $(0)$ is maximal?
+\end{exercise}
+
+There's an analogous theorem to the one for prime ideals.
+\begin{theorem}
+ [$I$ maximal $\iff$ $R/I$ field]
+ \label{thm:max_ideal_quotient}
+ An ideal $I$ is maximal if and only if $R/I$ is a field.
+\end{theorem}
+\begin{proof}
+ A nontrivial ring is a field if and only if its only proper ideal is $(0)$.
+ Since ideals of $R/I$ correspond to ideals of $R$ containing $I$,
+ this follows by \Cref{prob:inclusion_preserving}.
+\end{proof}
+
+\begin{corollary}
+ [Maximal ideals are prime]
+ If $I$ is a maximal ideal of a ring $R$, then $I$ is prime.
+\end{corollary}
+\begin{proof}
+ If $I$ is maximal, then $R/I$ is a field,
+ hence an integral domain, so $I$ is prime.
+\end{proof}
+
+In practice, because modding out by generated ideals is pretty convenient,
+this is a very efficient way to check whether an ideal is maximal.
+\begin{example}
+ [Modding out in $\ZZ{[x]}$]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii This instantly implies that $(x,5)$ is a maximal ideal
+ in $\ZZ[x]$, because if we mod out by $x$ and $5$ in $\ZZ[x]$,
+ we just get $\FF_5$, which is a field.
+ \ii On the other hand, modding out by just $x$ gives $\ZZ$,
+ which is an integral domain but not a field; that's why $(x)$ is
+ prime but not maximal.
+ \end{enumerate}
+\end{example}
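For the quotient $\ZZ[x]/(x,5) \cong \FF_5$, being a field is a finite check: every nonzero residue must have an inverse. A brute-force Python check of mine:

```python
def units(n):
    """Residues mod n that have a multiplicative inverse."""
    return [a for a in range(1, n)
            if any((a * b) % n == 1 for b in range(1, n))]

# F_5 = Z[x]/(x,5): every nonzero residue is a unit, so it is a field
assert units(5) == [1, 2, 3, 4]
# by contrast Z/6 is not a field: only 1 and 5 are units
assert units(6) == [1, 5]
```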
+
+As we saw, any maximal ideal is prime.
+But now note that $\ZZ$ has the special property that
+all of its nonzero prime ideals are also maximal.
+It's with this condition and a few other minor conditions
+that you get a so-called \emph{Dedekind domain}
+where prime factorization of ideals \emph{does} work.
+More on that later.
+
+\section{Field of fractions}
+\prototype{$\Frac(\ZZ) = \QQ$.}
+As long as we are here, we take the time to introduce a useful
+construction that turns any integral domain into a field.
+
+\begin{definition}
+ Given an integral domain $R$,
+ we define its \vocab{field of fractions} or \vocab{fraction field}
+ $\Frac(R)$ as follows:
+ it consists of elements $a / b$, where $a,b \in R$ and $b \neq 0$.
+ We set $a / b \sim c / d$ if and only if $bc = ad$.
+ Addition and multiplication are defined by
+ \begin{align*}
+ \frac ab + \frac cd &= \frac{ad+bc}{bd} \\
+ \frac ab \cdot \frac cd &= \frac{ac}{bd}.
+ \end{align*}
+\end{definition}
+In fact everything you know about $\QQ$ basically carries over by analogy.
+You can prove if you want that this is indeed a field, but
+considering how comfortable we are that $\QQ$ is well-defined,
+I wouldn't worry about it\dots
+
+\begin{definition}
+ Let $k$ be a field.
+ We define $k(x) = \Frac(k[x])$
+ (read ``$k$ of $x$''),
+ and call it the \vocab{field of rational functions}.
+\end{definition}
+
+\begin{example}
+ [Examples of fraction fields]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii By \emph{definition}, $\Frac(\ZZ) = \QQ$.
+ \ii The field $\RR(x)$ consists of rational functions in $x$:
+ \[ \RR(x) = \left\{ \frac{f(x)}{g(x)} \mid f,g \in \RR[x] \right\}. \]
+ For example, $\frac{2x}{x^2-3}$ might be a typical element.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Gaussian rationals]
+ \label{ex:gaussian_rationals}
+ Just like we defined $\ZZ[i]$ by abusing notation,
+ we can also write $\QQ(i) = \Frac(\ZZ[i])$.
+ Officially, it should consist of
+ \[ \QQ(i) = \left\{ \frac{f(i)}{g(i)} \mid g(i) \ne 0 \right\} \]
+ for polynomials $f$ and $g$ with rational coefficients.
+ But since $i^2=-1$ this just leads to
+ \[ \QQ(i) = \left\{ \frac{a+bi}{c+di} \mid a,b,c,d \in \QQ,
+ (c,d) \ne (0,0) \right\}. \]
+ And since $\frac{1}{c+di} = \frac{c-di}{c^2+d^2}$ we end up with
+ \[ \QQ(i) = \left\{ a+bi \mid a,b \in \QQ \right\}. \]
+\end{example}
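The rationalization step can be tested with exact arithmetic. This Python snippet (my own, using `fractions.Fraction` for the rational coordinates) confirms that $\frac{1}{c+di} = \frac{c-di}{c^2+d^2}$ really is the inverse.

```python
from fractions import Fraction as F

def qi_mul(z, w):
    """Multiply elements of Q(i), stored as pairs (a, b) = a + bi."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def qi_inv(z):
    """1/(c+di) = (c - di)/(c^2 + d^2), as in the text."""
    c, d = z
    n = c * c + d * d
    return (c / n, -d / n)

z = (F(2), F(3))                         # the element 2 + 3i
assert qi_mul(z, qi_inv(z)) == (F(1), F(0))
```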
+
+
+\section{Unique factorization domains (UFD's)}
+\prototype{$\ZZ$ and polynomial rings in general.}
+
+Here is one stray definition that will be important
+for those with a number-theoretic inclination.
+Over the positive integers, we have a fundamental theorem of arithmetic,
+stating that every integer is uniquely the product of prime numbers.
+
+We can even make an analogous statement in $\ZZ$ or $\ZZ[i]$,
+if we allow representations like $6 = (-2)(-3)$ and so on.
+The trick is that we only consider everything \emph{up to units};
+so $6 = (-2)(-3) = 2 \cdot 3$ are considered the same.
+
+The general definition goes as follows.
+\begin{definition}
+ A nonzero non-unit of an integral domain $R$ is
+ \vocab{irreducible} if it cannot be written as the product of two non-units.
+
+ An integral domain $R$ is a \vocab{unique factorization domain}
+ if every nonzero non-unit of $R$ can be written
+ as the product of irreducible elements,
+ which is unique up to multiplication by units.
+\end{definition}
+\begin{ques}
+ Verify that $\ZZ$ is a UFD.
+\end{ques}
+
+\begin{example}
+ [Examples of UFD's]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Fields are a ``degenerate'' example of UFD's:
+ every nonzero element is a unit,
+ so there is nothing to check.
+
+ \ii $\ZZ$ is a UFD.
+ The irreducible elements are $\pm p$ for primes $p$,
+ for example $5$ or $-17$.
+
+ \ii $\QQ[x]$ is a UFD:
+ polynomials with rational coefficients
+ can be uniquely factored, up to scaling by constants
+ (as the units of $\QQ[x]$ are just the rational numbers).
+
+ \ii $\ZZ[x]$ is a UFD.
+
+ \ii The Gaussian integers $\ZZ[i]$ turn out to be a UFD too
+ (and this will be proved in the chapters on algebraic number theory).
+
+ \ii $\ZZ[\sqrt{-5}]$ is the classic non-example of a UFD:
+ one may write
+ \[ 6 = 2 \cdot 3 = \left( 1-\sqrt{-5} \right)
+ \left( 1+\sqrt{-5} \right) \]
+ but each of $2$, $3$, $1 \pm \sqrt{-5}$ is irreducible.
+ (It turns out the right way to fix this is
+ by considering prime \emph{ideals} instead,
+ and this is one big motivation for \Cref{part:algnt1}.)
+
+ \ii Theorem we won't prove: every PID is a UFD.
+ \ii Theorem we won't prove: if $R$ is a UFD,
+ so is $R[x]$ (and hence by induction so is
+ $R[x,y]$, $R[x,y,z]$, \dots).
+ \end{enumerate}
+\end{example}
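The standard way to see that $2$, $3$, $1 \pm \sqrt{-5}$ are irreducible is via the multiplicative norm $N(a+b\sqrt{-5}) = a^2+5b^2$: a factorization of an element of norm $4$, $6$ or $9$ into non-units would require a factor of norm $2$ or $3$, and no such element exists. Here is a small Python sketch of mine of that argument (the search range suffices, since norm at most $3$ forces $|a| \le 1$ and $b = 0$):

```python
def norm(a, b):
    """N(a + b sqrt(-5)) = a^2 + 5 b^2, which is multiplicative."""
    return a * a + 5 * b * b

# no element of Z[sqrt(-5)] has norm 2 or 3
small_norms = {norm(a, b) for a in range(-3, 4) for b in range(-2, 3)}
assert 2 not in small_norms and 3 not in small_norms

# norms of 2, 3, and 1 +/- sqrt(-5): any proper factorization
# would need a factor of norm 2 or 3
assert norm(2, 0) == 4 and norm(3, 0) == 9
assert norm(1, 1) == 6 and norm(1, -1) == 6
```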
+
+
+
+\section{\problemhead}
+Not olympiad problems, but again the spirit is very close
+to what you might see in an olympiad.
+
+\begin{problem}
+ Consider the ring
+ \[ \QQ[\sqrt2] = \left\{ a + b\sqrt2 \mid a,b \in \QQ \right\}. \]
+ Is it a field?
+ \begin{hint}
+ Yes.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [Homomorphisms from fields are injective]
+ \label{prob:field_hom}
+ Let $K$ be a field and $R$ a ring.
+ Prove that any homomorphism $\psi \colon K \to R$
+ is injective.\footnote{Note that $\psi$
+ cannot be the zero map for us,
+ since we require $\psi(1_K) = 1_R$.
+ You sometimes find different statements in the literature.}
+ \begin{hint}
+ The kernel is an ideal of $K$!
+ \end{hint}
+\end{problem}
+
+\begin{sproblem}
+ [Pre-image of prime ideals]
+ \label{prob:prime_preimage}
+ Suppose $\phi \colon R \to S$ is a ring homomorphism,
+ and $I \subseteq S$ is a prime ideal.
+ Prove that $\phi\pre(I)$ is prime as well.
+ \begin{hint}
+ This is just a definition chase.
+ \end{hint}
+ \begin{sol}
+ Consider $ab \in \phi\pre(I)$,
+ meaning $\phi(ab) = \phi(a) \phi(b) \in I$.
+ Since $I$ is prime, either $\phi(a) \in I$ or $\phi(b) \in I$.
+ In the former case we get $a \in \phi\pre(I)$ as needed;
+ the latter case we get $b \in \phi\pre(I)$.
+ \end{sol}
+\end{sproblem}
+
+\begin{sproblem}
+ \gim
+ Let $R$ be an integral domain with finitely many elements.
+ Prove that $R$ is a field.
+ \label{prob:finite_domain_field}
+ \begin{hint}
+ Fermat's little theorem type argument;
+ cancellation holds in integral domains.
+ \end{hint}
+ \begin{sol}
+ Let $x \in R$ with $x \neq 0$.
+ Look at the powers $x$, $x^2$, \dots.
+ By pigeonhole, eventually two of them coincide.
+  So assume $x^m = x^n$ where $m < n$, or equivalently
+  \[ 0 = x^m \left( x^{n-m} - 1 \right). \]
+  Since $x \ne 0$ and $R$ is an integral domain,
+  we may cancel each factor of $x$ to get $x^{n-m} - 1 = 0$,
+  or $x^{n-m} = 1$.
+  So $x^{n-m-1}$ is an inverse for $x$.
+
+ This means every nonzero element has an inverse,
+ ergo $R$ is a field.
+ \end{sol}
+\end{sproblem}
+
+\begin{sproblem}
+ [Krull's theorem]
+ \label{prob:krull_max_ideal}
+ Let $R$ be a ring and $J$ a proper ideal.
+ \begin{enumerate}[(a)]
+ \ii Prove that if $R$ is Noetherian,
+ then $J$ is contained in a maximal ideal $I$.
+ \ii Use Zorn's lemma (\Cref{ch:zorn})
+ to prove the result even if $R$ isn't Noetherian.
+ \end{enumerate}
+ \begin{hint}
+ Just keep on adding in elements to get an ascending chain.
+ \end{hint}
+ \begin{sol}
+ For part (b), look at the poset of \emph{proper} ideals.
+ Apply Zorn's lemma (again using a union trick to verify the condition;
+ be sure to verify that the union is proper!).
+ In part (a) we are given no ascending infinite chains,
+ so no need to use Zorn's lemma.
+ \end{sol}
+\end{sproblem}
+
+\begin{problem}
+ [{$\Spec k[x]$}]
+ Describe the prime ideals of $\CC[x]$ and $\RR[x]$.
+ \begin{hint}
+ Use the fact that both are PID's.
+ \end{hint}
+ \begin{sol}
+ The ideal $(0)$ is of course prime in both.
+ Also, both rings are PID's.
+
+ For $\CC[x]$ we get a prime ideal $(x-z)$ for each $z \in \CC$.
+
+  For $\RR[x]$ we get a prime ideal $(x-a)$ for each $a \in \RR$
+ and a prime ideal $(x^2 - ax + b)$ for each quadratic
+ with two conjugate non-real roots.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ Prove that any nonzero prime ideal of $\ZZ[\sqrt 2]$ is also a maximal ideal.
+ %(This is starred since the main idea of the solution will get used extensively in
+ %the algebraic NT chapters, even though the problem itself won't.)
+ \label{prob:dedekind_sample}
+ \begin{hint}
+ Show that the quotient $\ZZ[\sqrt2]/I$ has finitely many elements
+ for any nonzero prime ideal $I$.
+  Since the quotient is then a finite integral domain, it is also a field
+  (\Cref{prob:finite_domain_field}), and thus $I$ was a maximal ideal.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/salespitch.tex b/books/napkin/salespitch.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f9ba7619e35abff5f9548a8dbcb3ac71df875d0b
--- /dev/null
+++ b/books/napkin/salespitch.tex
@@ -0,0 +1,388 @@
+\chapter{Sales pitches}
+\label{ch:sales}
+\newcommand{\pitch}[1]{\ii[\textsf{\color{blue}\ref{#1}}.] \textsf{\color{blue} \textbf{\nameref{#1}.}} \\[1ex]} % for now. . .
+\newcommand{\buzzword}[1]{\textbf{\color{green!40!black} #1}}
+
+This chapter contains a pitch for each part,
+to help you decide what you want to read
+and to elaborate more on how they are interconnected.
+
+For convenience, here is again the dependency plot
+that appeared in the frontmatter.
+\input{tex/frontmatter/digraph}
+
+\section{The basics}
+\begin{itemize}
+\pitch{part:startout}
+I made a design decision that the first part
+should have a little bit both of algebra and topology:
+so this first chapter begins by defining a \buzzword{group},
+while the second chapter begins by defining a \buzzword{metric space}.
+The intention is so that newcomers get to see two different
+examples of ``sets with additional structure''
+in somewhat different contexts,
+and to have a minimal amount of literacy as these sorts
+of definitions appear over and over.\footnote{In particular,
+ I think it's easier to learn
+ what a homeomorphism is after seeing group isomorphism,
+ and what a homomorphism is after seeing continuous map.}
+
+\pitch{part:absalg}
+The algebraically inclined can then delve into
+further types of algebraic structures:
+some more details of \buzzword{groups},
+and then \buzzword{rings} and \buzzword{fields} ---
+which will let you generalize $\ZZ$, $\QQ$, $\RR$, $\CC$.
+So you'll learn to become familiar with all sorts of other nouns
+that appear in algebra, unlocking a whole host of objects
+that one couldn't talk about before.
+
+We'll also come into \buzzword{ideals},
+which generalize the GCD in $\ZZ$ that you might know of.
+For example, you know in $\ZZ$ that any integer
+can be written in the form $3a+5b$ for $a,b \in \ZZ$,
+since $\gcd(3,5)=1$.
+We'll see that this statement is really
+a statement about ideals: ``$(3,5)=(1)$ in $\ZZ$'',
+and thus we'll understand in what situations
+it can be generalized, e.g.\ to polynomials.
+
+\pitch{part:basictop}
+The more analytically inclined can instead move into topology,
+learning more about spaces.
+We'll find out that ``metric spaces'' are actually too specific,
+and that it's better to work with \buzzword{topological spaces},
+which are based on the so-called \buzzword{open sets}.
+You'll then get to see the budding of some geometric ideas,
+ending with the really great notion of \buzzword{compactness},
+a powerful notion that makes real analysis tick.
+
+One example of an application of compactness to tempt you now:
+a continuous function $f \colon [0,1] \to \RR$
+always achieves a \emph{maximum} value.
+(In contrast, $f \colon (0,1) \to \RR$ by $x \mapsto 1/x$ does not.)
+We'll see the reason is that $[0,1]$ is compact.
+\end{itemize}
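+
+As a concrete teaser for the ideal notation above
+(defined properly in \Cref{part:absalg}):
+since $1 = 2 \cdot 3 - 1 \cdot 5$, every $n \in \ZZ$ can be written as
+\[ n = (2n) \cdot 3 + (-n) \cdot 5, \]
+so the set $\left\{ 3a+5b \mid a,b \in \ZZ \right\}$ is all of $\ZZ$;
+in ideal language, $(3,5) = (1)$.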
+
+\section{Abstract algebra}
+\begin{itemize}
+\pitch{part:linalg}
+In high school, linear algebra is often really unsatisfying.
+You are given these arrays of numbers,
+and they're manipulated in some ways that don't really make sense.
+For example, the determinant is defined as this
+funny-looking sum with a bunch of products that seems
+to come out of thin air. Where does it come from?
+Why does $\det(AB) = \det A \det B$ with such a bizarre formula?
+
+Well, it turns out that you \emph{can} explain all of these things!
+The trick is to not think of linear algebra
+as the study of matrices,
+but instead as the study of \emph{linear maps}.
+In earlier chapters we saw that we got great generalizations
+by speaking of ``sets with enriched structure'' and ``maps between them''.
+This time, our sets are \buzzword{vector spaces}
+and our maps are \buzzword{linear maps}.
+We'll find out that a matrix is actually just
+a way of writing down a linear map as an array of numbers,
+but using the ``intrinsic'' definitions
+we'll de-mystify all the strange formulas from high school
+and show you where they all come from.
+
+In particular, we'll see \emph{easy} proofs
+that column rank equals row rank,
+that the determinant is multiplicative,
+and that the trace is the sum of the diagonal entries.
+We'll see how the dot product works,
+and learn all the words starting with ``eigen-''.
+We'll even have a bonus chapter for Fourier analysis
+showing that you can also explain all the big buzz-words
+by just being comfortable with vector spaces.
+
+\pitch{part:groups}
+Some of you might be interested in more about groups,
+and this chapter will give you a way to play further.
+It starts with an exploration of \buzzword{group actions},
+then goes into a bit on \buzzword{Sylow theorems},
+which are the tools that let us try to \emph{classify all groups}.
+
+\pitch{part:repth}
+If $G$ is a group, we can try to understand
+it by implementing it as a \emph{matrix},
+i.e.\ considering embeddings $G \injto \GL_n(\CC)$.
+These are called \buzzword{representations} of $G$;
+it turns out that they can be decomposed into \buzzword{irreducible} ones.
+Astonishingly we will find that we can
+\emph{basically characterize all of them}:
+the results turn out to be short and completely unexpected.
+
+For example, we will find out that there are finitely
+many irreducible representations of a given finite group $G$;
+if we label them $V_1$, $V_2$, \dots, $V_r$,
+then we will find that $r$ is the number
+of conjugacy classes of $G$, and moreover that
+\[ |G| = (\dim V_1)^2 + \dots + (\dim V_r)^2 \]
+which comes out of nowhere!
+
+The last chapter of this part will show you some
+unexpected corollaries.
+Here is one of them:
+let $G$ be a finite group and create variables $x_g$
+for each $g \in G$.
+A $|G| \times |G|$ matrix $M$ is defined by setting
+the $(g,h)$th entry to be the variable $x_{g \cdot h}$.
+Then this determinant will turn out to \emph{factor},
+and the factors will correspond to the $V_i$ we described above:
+there will be an irreducible factor of degree $\dim V_i$
+appearing $\dim V_i$ times.
+This result, called the \buzzword{Frobenius determinant},
+is said to have given birth to representation theory.
+
+\pitch{part:quantum}
+If you ever wondered what \buzzword{Shor's algorithm} is,
+this chapter will use the built-up linear algebra to tell you!
+\end{itemize}
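+
+To make the sum-of-squares formula above concrete,
+take $G = S_3$ (which gets worked out in \Cref{part:repth}):
+there are three conjugacy classes, hence three irreducible
+representations, and indeed
+\[ |S_3| = 6 = 1^2 + 1^2 + 2^2. \]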
+
+\section{Real and complex analysis}
+\begin{itemize}
+\pitch{part:calc}
+In this part, we'll use our built-up knowledge of
+metric and topological spaces to give short, rigorous definitions
+and theorems typical of high school calculus.
+That is, we'll really define and prove most everything you've seen about
+\buzzword{limits}, \buzzword{series}, \buzzword{derivatives}, and \buzzword{integrals}.
+
+Although this might seem intimidating,
+it turns out that actually, by the time we start this chapter,
+\emph{the hard work has already been done}:
+the notion of limits, open sets, and compactness
+will make short work of what was swept under the rug in AP calculus.
+Most of the proofs will thus actually be quite short.
+We sit back and watch all the pieces slowly come together
+as a reward for our careful study of topology beforehand.
+
+That said, if you are willing to suspend disbelief,
+you can actually read most of the other parts
+without knowing the exact details of all the calculus here,
+so in some sense this part is ``optional''.
+
+\pitch{part:cmplxana}
+It turns out that \buzzword{holomorphic functions}
+(complex-differentiable functions)
+are close to the nicest things ever:
+they turn out to be given by a Taylor series
+(i.e.\ are basically polynomials).
+This means we'll be able to prove unreasonably nice results
+about holomorphic functions $\CC \to \CC$, like
+\begin{itemize}
+ \ii they are determined by just a few inputs,
+ \ii their contour integrals are all zero,
+ \ii they can't be bounded unless they are constant,
+ \ii \dots.
+\end{itemize}
+We then introduce \buzzword{meromorphic functions},
+which are like quotients of holomorphic functions,
+and find that we can detect their zeros by simply drawing
+loops in the plane and integrating over them:
+the famous \buzzword{residue theorem} appears.
+(In the practice problems, you will see this even gives
+us a way to evaluate real integrals that can't be evaluated otherwise.)
+
+\pitch{part:measure}
+Measure theory is the upgraded version of integration.
+The Riemann integral is for a lot of purposes not really sufficient;
+for example, if $f$ is the function that equals $1$ at rational numbers
+but $0$ at irrational numbers,
+we would hope that $\int_0^1 f(x) \; dx = 0$,
+but the Riemann integral is not capable of handling this function $f$.
+
+The \buzzword{Lebesgue integral} will remedy these shortcomings
+by assigning a \emph{measure} to a generic space $\Omega$,
+making it into a \buzzword{measure space}.
+This will let us develop a richer theory of integration
+where the above integral \emph{does} work out to zero
+because the ``rational numbers have measure zero''.
+Even the development of the measure will be an achievement,
+because it means we've developed a rigorous, complete way
+of talking about what notions like area and volume mean ---
+on any space, not just $\RR^n$!
+So for example the Lebesgue integral will let us
+integrate functions over any \buzzword{measure space}.
+
+\pitch{part:prob}
+Using the tools of measure theory, we'll be able to start
+giving rigorous definitions of \buzzword{probability}, too.
+We'll see that a \buzzword{random variable} is actually
+a function from a measure space of worlds to $\RR$,
+giving us a rigorous way to talk about its probabilities.
+We can then start actually stating results like
+the \buzzword{law of large numbers} and \buzzword{central limit theorem}
+in ways that make them both easy to state and straightforward to prove.
+
+\pitch{part:diffgeo}
+Multivariable calculus is often confusing
+because of all the partial derivatives.
+But we'll find out that, armed with our good understanding
+of linear algebra, we're really looking at a \buzzword{total derivative}:
+to every point of a function $f \colon \RR^n \to \RR$
+we can associate a \emph{linear map} $Df$ which
+captures in one object the notion of partial derivatives.
+Set up this way, we'll get to see versions of \buzzword{differential forms}
+and \buzzword{Stokes' theorem},
+and we finally will know what the notation $dx$ really means.
+In the end, we'll say a little bit about manifolds in general.
+\end{itemize}
+
+\section{Algebraic number theory}
+\begin{itemize}
+\pitch{part:algnt1}
+Why is $3+\sqrt5$ the conjugate of $3-\sqrt5$?
+How come the norm $\norm{a+b\sqrt5} = a^2-5b^2$ used in Pell equations
+just happens to be multiplicative?
+Why is it we can do factoring into primes in $\ZZ[i]$
+but not in $\ZZ[\sqrt{-5}]$?
+All these questions and more will be answered in this part,
+when we learn about \buzzword{number fields},
+a generalization of $\QQ$ and $\ZZ$ to things like $\QQ(\sqrt5)$
+and $\ZZ[\sqrt{5}]$.
+We'll find out that we have unique factorization into prime ideals,
+that there is a real \emph{multiplicative norm} in play here,
+and so on.
+We'll also see that Pell's equation falls out of this theory.
+
+\pitch{part:algnt2}
+All the big buzz-words come out now:
+\buzzword{Galois groups}, the \buzzword{Frobenius}, and friends.
+We'll see quadratic reciprocity is just a shadow of
+the behavior of the Frobenius element,
+and meet the \buzzword{Chebotarev density theorem},
+which greatly generalizes Dirichlet's theorem on the infinitude
+of primes congruent to $a \pmod n$.
+Towards the end, we'll also state \buzzword{Artin reciprocity},
+one of the great results of \buzzword{class field theory},
+and how it generalizes quadratic reciprocity and cubic reciprocity.
+\end{itemize}
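+
+The multiplicativity of the Pell norm mentioned above can already be
+checked by hand; the point of the theory is to explain \emph{why}
+such an identity has to exist:
+\[ \left( a^2 - 5b^2 \right) \left( c^2 - 5d^2 \right)
+ = (ac+5bd)^2 - 5(ad+bc)^2, \]
+which is exactly $\norm{\alpha} \norm{\beta} = \norm{\alpha\beta}$
+for $\alpha = a+b\sqrt5$ and $\beta = c+d\sqrt5$.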
+
+\section{Algebraic topology}
+\begin{itemize}
+\pitch{part:algtop1}
+What's the difference between an annulus and a disk?
+Well, one of them has a ``hole'' in it,
+but if we are just given intrinsic topological spaces
+it's hard to make this notion precise.
+The \buzzword{fundamental group} $\pi_1(X)$,
+and more generally the \buzzword{homotopy groups},
+will make this precise --- we'll find a way to define a group
+$\pi_1(X)$ for every topological space $X$ which captures the idea
+that there is a hole in the space, by throwing lassos into the space
+and seeing if we can reel them in.
+
+Amazingly, the fundamental group $\pi_1(X)$ will, under mild conditions,
+tell you about ways to cover $X$ with a so-called
+\buzzword{covering projection}.
+One picture is that one can wrap a real line $\RR$ into a helix shape
+and then project it down into the circle $S^1$.
+This will turn out to correspond to the fact that $\pi_1(S^1) = \ZZ$,
+with the helix corresponding to the trivial subgroup of $\ZZ$.
+More generally the subgroups of $\pi_1(X)$ will be in
+bijection with ways to cover the space $X$!
+
+\pitch{part:cats}
+What do fields, groups, manifolds, metric spaces, measure spaces,
+modules, representations, rings, topological spaces, vector spaces,
+all have in common?
+Answer: they are all ``objects with additional structure'',
+with maps between them.
+
+The notion of \buzzword{category} will appropriately generalize all of them.
+We'll see that all sorts of constructions and ideas
+can be abstracted into the framework of a category,
+in which we \emph{only} think about objects and arrows between them,
+without probing too hard into the details of what those objects are.
+This results in drawing many \buzzword{commutative diagrams}.
+
+For example, any way of taking an object in one category
+and getting another one (for example $\pi_1$ as above,
+from the category of spaces into the category of groups)
+will probably be a \buzzword{functor}.
+We'll unify $G \times H$, $X \times Y$, $R \times S$,
+and anything with the $\times$ symbol into the notion of a product,
+and then even more generally into a \buzzword{limit}.
+Towards the end, we talk about \buzzword{abelian categories}
+and talk about the famous
+\buzzword{snake lemma}, \buzzword{five lemma}, and so on.
+
+\pitch{part:algtop2}
+Using the language of category theory,
+we then resume our adventures in algebraic topology,
+in which we define the \buzzword{homology groups}
+which give a different way of noticing holes in a space,
+in a way that is longer to define but easier to compute in practice.
+We'll then reverse the construction to get so-called
+\buzzword{cohomology rings} instead,
+which give us an even finer invariant for telling spaces apart.
+\end{itemize}
+
+\section{Algebraic geometry}
+\begin{itemize}
+\pitch{part:ag1}
+We begin with a study of classical \buzzword{complex varieties}:
+intersections of polynomial equations over $\CC$.
+This will naturally lead us into the geometry of rings,
+giving ways to draw pictures of ideals,
+and motivating \buzzword{Hilbert's nullstellensatz}.
+The \buzzword{Zariski topology} will show its face,
+and then we'll play with \buzzword{projective varieties}
+and \buzzword{quasi-projective varieties},
+with a bonus detour into \buzzword{B\'{e}zout's theorem}.
+All this prepares us for our journey into schemes.
+
+\pitch{part:ag2}
+We now get serious and delve into Grothendieck's definition of
+an \buzzword{affine scheme}:
+a generalization of our classical varieties
+that lets us start with any ring $A$
+and construct a space $\Spec A$ on it.
+We'll equip it with its own Zariski topology
+and then a sheaf of functions on it,
+making it into a \buzzword{locally ringed space};
+we will find that the sheaf can be understood
+effectively in terms of \buzzword{localization} on it.
+We'll find that the language of commutative algebra provides
+elegant generalizations of what's going on geometrically:
+prime ideals correspond to irreducible closed subsets,
+radical ideals correspond to closed subsets,
+maximal ideals correspond to closed points, and so on.
+We'll draw lots of pictures of spaces and examples to accompany this.
+
+\pitch{part:ag3}
+Not yet written! Wait for v2.
+\end{itemize}
+
+\section{Set theory}
+\begin{itemize}
+\pitch{part:st1}
+Why is \buzzword{Russell's paradox} such a big deal
+and how is it resolved?
+What is this \buzzword{Zorn's lemma}
+that everyone keeps talking about?
+In this part we'll learn the answers to these questions
+by giving a real description of the \buzzword{Zermelo-Fraenkel}
+axioms, and the \buzzword{axiom of choice},
+delving into the details of how math is built axiomatically
+at the very bottom foundations.
+We'll meet the \buzzword{ordinal numbers} and \buzzword{cardinal numbers}
+and learn how to do \buzzword{transfinite induction} with them.
+
+\pitch{part:st2}
+The \buzzword{continuum hypothesis}
+states that there are no cardinalities
+between the size of the natural numbers and the size of the real numbers.
+It was shown to be \emph{independent} of the axioms ---
+one cannot prove or disprove it.
+How could a result like that possibly be proved?
+Using our understanding of the ZF axioms,
+we'll develop a bit of \buzzword{model theory}
+and then use \buzzword{forcing} in order to show
+how to construct entire models of the universe
+in which the continuum hypothesis is true or false.
+\end{itemize}
diff --git a/books/napkin/semisimple.tex b/books/napkin/semisimple.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0b1d04f9d39f33c11853076b2bc089647844f46d
--- /dev/null
+++ b/books/napkin/semisimple.tex
@@ -0,0 +1,442 @@
+\chapter{Semisimple algebras}
+In what follows, \textbf{assume the field $k$ is algebraically closed}.
+
+Fix an algebra $A$ and suppose
+you want to study its representations.
+We have a ``direct sum'' operation already.
+So, much like we pay special attention to prime numbers,
+we're motivated to study irreducible representations
+and then build all the representations of $A$ from there.
+
+Unfortunately, we have seen (\Cref{exer:irred_not_indecomp})
+that there exists a representation which is not irreducible,
+and yet cannot be broken down as a direct sum (indecomposable).
+This is \emph{weird and bad}, so we want to give a name
+to representations which are more well-behaved.
+We say that a representation is \vocab{completely reducible}
+if it doesn't exhibit this bad behavior.
+
+Even better, we say a finite-dimensional algebra $A$
+is \vocab{semisimple} if all its finite-dimensional
+representations are completely reducible.
+So when we study finite-dimensional representations of
+semisimple algebras $A$,
+we just have to figure out what the irreps are,
+and then piecing them together will give all
+the representations of $A$.
+
+In fact, semisimple algebras $A$ have even nicer properties.
+The culminating point of the chapter is when we prove that
+$A$ is semisimple if and only if $A \cong \bigoplus_i \Mat(V_i)$,
+where the $V_i$ are the irreps of $A$
+(yes, there are only finitely many!).
+
+\section{Schur's lemma continued}
+\prototype{For $V$ irreducible,
+ $\Homrep(V^{\oplus 2}, V^{\oplus 2}) \cong k^{\oplus 4}$.}
+\begin{definition}
+ For an algebra $A$ and representations $V$ and $W$,
+ we let $\Homrep(V,W)$ be the set of intertwining operators between them.
+ (It is also a $k$-algebra.)
+\end{definition}
+
+By Schur's lemma (since $k$ is algebraically closed,
+which again, we are taking as a standing assumption),
+we already know that if $V$ and $W$ are irreps, then
+\[
+ \Homrep(V,W) \cong
+ \begin{cases}
+ k & \text{if $V \cong W$} \\
+ 0 & \text{if $V \not\cong W$}.
+ \end{cases}
+\]
+Can we say anything more?
+For example, it also tells us that
+\[ \Homrep(V, V^{\oplus 2}) = k^{\oplus 2}. \]
+The possible maps are $v \mapsto (c_1 v, c_2 v)$ for some choice of $c_1, c_2 \in k$.
+
+More generally, suppose $V$ is an irrep and consider
+$\Homrep(V^{\oplus m}, V^{\oplus n})$.
+Intertwining operators
+$T \colon V^{\oplus m} \to V^{\oplus n}$
+are determined completely
+by the $mn$ choices of compositions
+\begin{center}
+\begin{tikzcd}
+ V \ar[r, hook] & V^{\oplus m} \ar[r, "T"] & V^{\oplus n} \ar[r, two heads] & V
+\end{tikzcd}
+\end{center}
+where the first arrow is the inclusion of the $i$th component of $V^{\oplus m}$
+(for $1 \le i \le m$) and the second arrow is the projection onto the $j$th
+component of $V^{\oplus n}$ (for $1 \le j \le n$).
+However, by Schur's lemma on each of these compositions,
+we know they must be constant.
+
+Thus, $\Homrep(V^{\oplus m}, V^{\oplus n})$ consists of $n \times m$ ``matrices''
+of constants, and the map is provided by
+\[
+  \begin{bmatrix}
+    c_{11} & c_{12} & \dots & c_{1(m-1)} & c_{1m} \\
+    c_{21} & c_{22} & \dots & c_{2(m-1)} & c_{2m} \\
+    \vdots & \vdots & \ddots & \vdots & \vdots \\
+    c_{n1} & c_{n2} & \dots & c_{n(m-1)} & c_{nm}
+  \end{bmatrix}
+  \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{bmatrix}
+  \in V^{\oplus n}
+\]
+where the $c_{ij} \in k$ but $v_i \in V$; note the type mismatch!
+This is \emph{not} just a linear map $V^{\oplus m} \to V^{\oplus n}$;
+rather, the outputs are $n$ \emph{linear combinations} of the inputs.
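+
+For instance, the prototype
+$\Homrep(V^{\oplus 2}, V^{\oplus 2}) \cong k^{\oplus 4}$
+is just the case $m = n = 2$: every intertwining operator has the form
+\[ T(v_1, v_2) = \left( c_{11} v_1 + c_{12} v_2,
+ \; c_{21} v_1 + c_{22} v_2 \right) \]
+for some four constants $c_{11}$, $c_{12}$, $c_{21}$, $c_{22} \in k$.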
+
+More generally, we have:
+\begin{theorem}
+ [Schur's lemma for completely reducible representations]
+ \label{thm:compred_schur}
+ Let $V$ and $W$ be completely reducible representations,
+ and set $V = \bigoplus V_i^{\oplus n_i}$, $W = \bigoplus V_i^{\oplus m_i}$
+ for integers $n_i, m_i \ge 0$, where each $V_i$ is an irrep.
+ Then
+  \[ \Homrep(V, W)
+  \cong \bigoplus_i \Mat_{m_i \times n_i}(k) \]
+  meaning that an intertwining operator $T \colon V \to W$
+  amounts to, for each $i$, an $m_i \times n_i$ matrix of constants
+  which gives a map $V_i^{\oplus n_i} \to V_i^{\oplus m_i}$.
+\end{theorem}
+
+\begin{corollary}
+ [Subrepresentations of completely reducible representations]
+ \label{cor:subrep_schur}
+ Let $V = \bigoplus V_i^{\oplus n_i}$ be completely reducible.
+ Then any subrepresentation $W$ of $V$ is isomorphic
+ to $\bigoplus V_i^{\oplus m_i}$ where $m_i \le n_i$ for each $i$,
+ and the inclusion $W \injto V$ is given
+ by the direct sum of inclusion $V_i^{\oplus m_i} \injto V_i^{\oplus n_i}$,
+ which are $n_i \times m_i$ matrices.
+\end{corollary}
+\begin{proof}
+ Apply Schur's lemma to the inclusion $W \injto V$.
+\end{proof}
+
+
+
+\section{Density theorem}
+We are going to take advantage of the previous result to prove that
+finite-dimensional algebras have finitely many irreps.
+
+\begin{theorem}
+ [Jacobson density theorem]
+ Let $(V_1, \rho_1)$, \dots, $(V_r, \rho_r)$ be pairwise nonisomorphic
+ finite-dimensional representations of $A$.
+ Then there is a surjective map of vector spaces
+ \[ \bigoplus_{i=1}^r \rho_i : A \surjto \bigoplus_{i=1}^r \Mat(V_i). \]
+\end{theorem}
+The right way to think about this theorem is that
+\begin{moral}
+ Density is the ``Chinese remainder theorem''
+ for irreps of $A$.
+\end{moral}
+Recall that in number theory, the Chinese remainder theorem tells us
+that given lots of ``unrelated'' congruences, we can find a single $N$
+which simultaneously satisfies them all.
+Similarly, given lots of different nonisomorphic representations of $A$,
+this means that we can select a single $a \in A$ which induces any tuple
+$(\rho_1(a), \dots, \rho_r(a))$ of actions we want --- a surprising result,
+since even the $r=1$ case is not obvious at all!
+\begin{center}
+ \begin{tikzcd}[column sep=huge, row sep= tiny]
+ & \rho_1(a) = M_1 \in \Mat(V_1) \\
+ & \rho_2(a) = M_2 \in \Mat(V_2) \\
+ \boxed{a \in A} \ar[ruu, end anchor=west] \ar[ru, end anchor=west] \ar[rd, end anchor=west] & \vdots \\
+ & \rho_r(a) = M_r \in \Mat(V_r) \\
+ \end{tikzcd}
+\end{center}
+This also gives us the non-obvious corollary:
+\begin{corollary}
+ [Finiteness of number of representations]
+ Any finite-dimensional algebra $A$ has at most $\dim A$ irreps.
+ \label{cor:finiteness}
+\end{corollary}
+\begin{proof}
+ If $V_i$ are such irreps then
+ $A \surjto \bigoplus_i V_i^{\oplus \dim V_i}$,
+ hence we have the inequality $\sum (\dim V_i)^2 \le \dim A$.
+\end{proof}
+
+\begin{proof}[Proof of density theorem]
+ Let $V = V_1 \oplus \dots \oplus V_r$, so $A$
+ acts on $V = (V, \rho)$ by $\rho = \bigoplus_i \rho_i$.
+ Thus by \Cref{prob:reg_mat}, we can instead consider $\rho$
+ as an \emph{intertwining operator}
+  \[ \rho : \Reg(A) \to \bigoplus_{i=1}^r \Mat(V_i)
+  \cong \bigoplus_{i=1}^r V_i^{\oplus d_i}, \qquad d_i = \dim V_i. \]
+ We will use this instead as it will be easier to work with.
+
+ First, we handle the case $r = 1$.
+ Fix a basis $e_1$, \dots, $e_n$ of $V = V_1$.
+  Assume for contradiction that the map is not surjective.
+ Then there is a map of representations (by $\rho$ and the isomorphism)
+ $\Reg(A) \to V^{\oplus n}$ given by $a \mapsto (a \cdot e_1, \dots, a \cdot e_n)$.
+  By hypothesis it is not surjective:
+ its image is a \emph{proper} subrepresentation of $V^{\oplus n}$.
+ Assume its image is isomorphic to $V^{\oplus m}$ for $m < n$,
+ so by \Cref{thm:compred_schur} there is a matrix of constants $X$ with
+ \begin{center}
+ \begin{tikzcd}[row sep=tiny]
+    \Reg(A) \ar[r] & V^{\oplus n} & \ar[l, "X \cdot -", hook'] V^{\oplus m} \\
+ a \ar[r, mapsto] & (a \cdot e_1, \dots, a \cdot e_n) \\
+ 1_A \ar[r, mapsto] & (e_1, \dots, e_n) & \ar[l, mapsto] (v_1, \dots, v_m)
+ \end{tikzcd}
+ \end{center}
+ where the two arrows in the top row have the same image;
+ hence the pre-image $(v_1, \dots, v_m)$ of $(e_1, \dots, e_n)$ can be found.
+  But since $m < n$ we can find constants $c_1, \dots, c_n$ not all zero
+  such that the row vector $\begin{bmatrix} c_1 & \dots & c_n \end{bmatrix}$
+  annihilates $X$ from the left:
+ \[
+ \sum_{i=1}^n c_ie_i
+ =
+ \begin{bmatrix} c_1 & \dots & c_n \end{bmatrix}
+ \begin{bmatrix} e_1 \\ \vdots \\ e_n \end{bmatrix}
+ =
+ \begin{bmatrix} c_1 & \dots & c_n \end{bmatrix}
+ X
+ \begin{bmatrix} v_1 \\ \vdots \\ v_m \end{bmatrix}
+ = 0
+ \]
+ contradicting the fact that $e_i$ are linearly independent.
+ Hence we conclude the theorem for $r=1$.
+
+ As for $r \ge 2$, the image $\rho\im(A)$ is necessarily of the form
+ $\bigoplus_i V_i^{\oplus r_i}$ (by \Cref{cor:subrep_schur})
+ and by the above $r_i = \dim V_i$ for each $i$.
+\end{proof}
+
+\section{Semisimple algebras}
+
+\begin{definition}
+  A finite-dimensional algebra $A$ is \vocab{semisimple}
+ if every finite-dimensional representation of $A$ is completely reducible.
+\end{definition}
+
+\begin{theorem}
+ [Semisimple algebras]
+ Let $A$ be a finite-dimensional algebra.
+ Then the following are equivalent:
+ \begin{enumerate}[(i)]
+ \ii $A \cong \bigoplus_i \Mat_{d_i}(k)$ for some $d_i$.
+ \ii $A$ is semisimple.
+ \ii $\Reg(A)$ is completely reducible.
+ \end{enumerate}
+\end{theorem}
+\begin{proof}
+ (i) $\implies$ (ii) follows
+ from \Cref{thm:rep_1mat} and \Cref{prop:rep_direct_sum}.
+ (ii) $\implies$ (iii) is tautological.
+
+ To see (iii) $\implies$ (i), we use the following clever trick.
+ Consider
+ \[ \Homrep(\Reg(A), \Reg(A)). \]
+ On one hand, by \Cref{prob:regA_intertwine},
+ it is isomorphic to $A\op$ ($A$ with opposite multiplication),
+ because the only intertwining operators $\Reg(A) \to \Reg(A)$
+ are those of the form $- \cdot a$.
+ On the other hand, suppose that we have set
+ $ \Reg(A) = \bigoplus_i V_i^{\oplus n_i} $.
+ By \Cref{thm:compred_schur}, we have
+ \[ A\op \cong \Homrep(\Reg(A), \Reg(A))
+ = \bigoplus_i \Mat_{n_i \times n_i}(k). \]
+ But $\Mat_n(k)\op \cong \Mat_n(k)$ (just by transposing),
+ so we recover the desired conclusion.
+\end{proof}
+
+In fact, if we combine the above result with
+the density theorem (and \Cref{cor:finiteness}), we obtain:
+\begin{theorem}
+ [Sum of squares formula]
+ For a finite-dimensional algebra $A$ we have
+ \[ \sum_{i} \dim(V_i)^2 \le \dim A \]
+ where the $V_i$ are the irreps of $A$;
+ equality holds exactly when $A$ is semisimple,
+ in which case
+ \[ \Reg(A) \cong \bigoplus_i \Mat(V_i)
+  \cong \bigoplus_i V_i^{\oplus \dim V_i}. \]
+\end{theorem}
+\begin{proof}
+ The inequality was already mentioned in \Cref{cor:finiteness}.
+  Equality holds if and only if the surjective map
+  $\rho : A \to \bigoplus_i \Mat(V_i)$ from the density theorem
+  is also injective, hence an isomorphism.
+\end{proof}
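+
+A quick example showing the inequality can be strict
+(a standard check, not needed later): take $A = k[x]/(x^2)$,
+so $\dim A = 2$.
+In any irrep $V$, the image $x \cdot V$ is a subrepresentation,
+and it is proper: if $x \cdot V = V$ then $V = x^2 \cdot V = 0$.
+Hence $x \cdot V = 0$, so the only irrep is $V_1 = k$
+with $x$ acting by zero, and
+\[ \sum_i \dim(V_i)^2 = 1 < 2 = \dim A, \]
+consistent with $A$ not being semisimple.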
+
+\begin{remark}
+ [Digression]
+ For any finite-dimensional $A$, the kernel of the map
+ $\rho : A \to \bigoplus_i \Mat(V_i)$ is denoted $\opname{Rad}(A)$
+ and is the so-called \vocab{Jacobson radical} of $A$;
+ it's the set of all $a \in A$ which act by zero in all irreps of $A$.
+ The usual definition of ``semisimple'' given in books is that
+ this Jacobson radical is trivial.
+\end{remark}
+
+\section{Maschke's theorem}
+We now prove that the representation theory of groups is as nice as possible.
+\begin{theorem}
+ [Maschke's theorem]
+ Let $G$ be a finite group, and $k$ an algebraically closed
+ field whose characteristic does not divide $|G|$.
+ Then $k[G]$ is semisimple.
+\end{theorem}
+This tells us that when studying representations of groups,
+all representations are completely reducible.
+\begin{proof}
+ Consider any finite-dimensional representation $(V, \rho)$ of $k[G]$.
+ Given a proper subrepresentation $W \subseteq V$,
+ our goal is to construct a supplementary $G$-invariant subspace $W'$
+ which satisfies \[ V = W \oplus W'. \]
+ This will show that indecomposable $\iff$ irreducible,
+ which is enough to show $k[G]$ is semisimple.
+
+ Let $\pi : V \to W$ be any projection of $V$ onto $W$,
+ meaning $\pi(v) = v \iff v \in W$.
+ We consider the \emph{averaging} map $P : V \to V$ by
+ \[
+ P(v) = \frac{1}{\left\lvert G \right\rvert}
+ \sum_{g \in G} \rho(g\inv) \circ \pi \circ \rho(g).
+ \]
+ We'll use the following properties of the map:
+ \begin{exercise}
+ Show that the map $P$ satisfies:
+ \begin{itemize}
+ \ii For any $w \in W$, $P(w) = w$.
+ \ii For any $v \in V$, $P(v) \in W$.
+ \ii The map $P : V \to V$ is an intertwining operator.
+ \end{itemize}
+ \end{exercise}
+ Thus $P$ is idempotent (it is the identity on its image $W$),
+ so by \Cref{prob:idempotent} we have $V = \ker P \oplus \img P$;
+ since $P$ is an intertwining operator, both $\ker P$ and $\img P = W$
+ are subrepresentations, so $W' = \ker P$ is as desired.
+\end{proof}
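+
+To see the averaging map in action,
+here is a small worked example (with one particular choice of $\pi$).
+\begin{example}
+ [Averaging over $S_2$]
+ Let $G = S_2 = \{\id, \sigma\}$ act on $V = \CC^{\oplus 2}$
+ by permuting the two coordinates,
+ and take the subrepresentation $W = \left\{ (x,x) \mid x \in \CC \right\}$.
+ Choose the projection $\pi(x,y) = (x,x)$,
+ which is \emph{not} an intertwining operator.
+ Averaging over the two elements of $G$ gives
+ \[
+ P(x,y) = \half \left[ \pi(x,y) + \rho(\sigma) \pi \rho(\sigma)(x,y) \right]
+ = \left( \frac{x+y}{2}, \frac{x+y}{2} \right)
+ \]
+ which really is a $G$-equivariant projection onto $W$.
+ Its kernel is $W' = \left\{ (x,-x) \mid x \in \CC \right\}$,
+ giving the promised decomposition $V = W \oplus W'$.
+\end{example}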
+\begin{remark}
+ In the case where $k = \CC$, there is a shorter proof.
+ Suppose $B : V \times V \to \CC$ is an arbitrary inner product
+ (a positive definite Hermitian form).
+ Then we can ``average'' it to obtain a new inner product
+ \[ \left< v,w \right> \defeq \frac{1}{|G|} \sum_{g \in G} B(g \cdot v, g \cdot w). \]
+ The averaged form $\left< -,- \right>$ is $G$-invariant,
+ in the sense that $\left< v,w \right> = \left< g \cdot v, g \cdot w\right>$
+ for every $g \in G$.
+ Then, one sees that if $W \subseteq V$ is a subrepresentation,
+ so is its orthogonal complement $W^\perp$.
+ This implies the result.
+\end{remark}
+
+\section{Example: the representations of $\CC[S_3]$}
+We compute all irreps of $\CC[S_3]$.
+I'll take for granted for now that there are exactly three such representations
+(which will be immediate from the first theorem in the next chapter:
+we'll in fact see that the number of irreps of $G$
+is exactly equal to the number of conjugacy classes of $G$).
+
+Given that, if the three representations have dimensions $d_1$, $d_2$, $d_3$,
+then we ought to have
+\[ d_1^2 + d_2^2 + d_3^2 = |G| = 6. \]
+From this, combined with some deep arithmetic,
+we deduce that we should have $d_1 = d_2 = 1$ and $d_3 = 2$
+or some permutation.
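+To spell out the ``deep arithmetic'': the only perfect squares
+at most $6$ are $1$ and $4$, and
+\[ 1^2 + 1^2 + 2^2 = 6 \]
+is the only way to choose three of them with sum $6$.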
+
+In fact, we can describe these representations explicitly.
+First, we define:
+\begin{definition}
+ Let $G$ be a group.
+ The complex \vocab{trivial group representation} of $G$
+ is the one-dimensional representation $\Ctriv = (\CC, \rho)$
+ where $g \cdot v = v$ for all $g \in G$ and $v \in \CC$
+ (i.e.\ $\rho(g) = \id$ for all $g \in G$).
+\end{definition}
+\begin{remark}
+ [Warning] The trivial representation of an \emph{algebra} $A$
+ doesn't make sense for us:
+ we might want to set $a \cdot v = v$ but this isn't linear in $A$.
+ (You \emph{could} try to force it to work by
+ deleting the condition $1_A \cdot v = v$ from our definition;
+ then one can just set $a \cdot v = 0$.
+ But even then $\Ctriv$ would not be the trivial representation of $k[G]$.)
+\end{remark}
+
+Then the representations are:
+\begin{itemize}
+ \ii The one-dimensional $\Ctriv$;
+ each $\sigma \in S_3$ acts by the identity.
+
+ \ii There is a nontrivial one-dimensional representation
+ $\Csign$ where the map $S_3 \to \CC^\times$ is given
+ by sending $\sigma$ to the sign of $\sigma$.
+ Thus in $\Csign$ every $\sigma \in S_3$ acts as $\pm 1$.
+ Of course, $\Ctriv$ and $\Csign$ are not isomorphic
+ (as one-dimensional representations are isomorphic
+ only if each $a$ acts by the same constant in both,
+ as we saw in \Cref{prob:one_dim}).
+
+ \ii Finally, we have already seen the two-dimensional representation,
+ but now we give it a name.
+ Define $\refl_0$ to be the representation whose vector space is
+ $\{ (x,y,z) \mid x+y+z = 0 \}$,
+ and whose action of $S_3$ on it is permutation of coordinates.
+ \begin{exercise}
+ Show that $\refl_0$ is irreducible, for example by showing directly
+ that no subspace is invariant under the action of $S_3$.
+ \end{exercise}
+ Thus $\refl_0$ is also not isomorphic to the previous two representations.
+\end{itemize}
+This implies that these are all the irreps of $S_3$.
+Note that, if we take the representation $V$ of $S_3$ on $\CC^{\oplus 3}$,
+we just get that $V = \refl_0 \oplus \Ctriv$.
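+
+Explicitly, the decomposition is given by averaging coordinates:
+writing $\bar x = \frac13 (x+y+z)$, every vector splits as
+\[
+ (x,y,z)
+ = \underbrace{\left( \bar x, \bar x, \bar x \right)}_{\in \Ctriv}
+ + \underbrace{\left( x - \bar x, y - \bar x, z - \bar x \right)}_{\in \refl_0}
+\]
+where the second summand has coordinate sum zero,
+and both subspaces are preserved by permuting the coordinates.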
+
+\section\problemhead
+
+\begin{problem}
+ Find all the irreps of $\CC[\Zc n]$.
+ \begin{hint}
+ They are all one-dimensional, $n$ of them.
+ What are the homomorphisms $\Zc n \to \CC^\times$?
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [Maschke requires $|G|$ finite]
+ Consider the representation of the group $\RR$ (under addition)
+ on $\CC^{\oplus 2}$ given by the homomorphism
+ \[
+ \RR \to \Mat_2(\CC)
+ \quad\text{by}\quad
+ t \mapsto
+ \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}.
+ \]
+ Show that this representation is not irreducible,
+ but it is indecomposable.
+ \begin{hint}
+ The span of $(1,0)$ is a subrepresentation.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Prove that all irreducible representations
+ of a finite group are finite-dimensional.
+ \begin{hint}
+ This is actually easy.
+ \end{hint}
+ \begin{sol}
+ Pick any nonzero $v \in V$; then the subspace
+ spanned by the elements $g \cdot v$ for $g \in G$
+ is nonzero and $G$-invariant,
+ so by irreducibility it must equal all of $V$.
+ But this subspace is spanned by finitely many elements,
+ so $V$ is finite-dimensional.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ \gim
+ Determine all the complex irreps of $D_{10}$.
+ \begin{hint}
+ There are only two one-dimensional ones
+ (corresponding to the only two
+ homomorphisms $D_{10} \to \CC^\times$).
+ So the remaining ones are two-dimensional.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/sets-functions.tex b/books/napkin/sets-functions.tex
new file mode 100644
index 0000000000000000000000000000000000000000..045539bc9e0944483aacde0f2211a53ce1c7a89a
--- /dev/null
+++ b/books/napkin/sets-functions.tex
@@ -0,0 +1,260 @@
+\chapter{Terminology on sets and functions}
+\label{ch:sets_functions}
+This appendix will cover some notions on sets and functions
+such as ``bijections'', ``equivalence classes'', and so on.
+
+Remark for experts: I am not dealing with foundational issues in this chapter.
+See \Cref{ch:zfc} (and onwards) if that's what you're interested in.
+Consequently I will not prove most assertions.
+
+\section{Sets}
+A \vocab{set} for us will just be a collection of elements (whatever they may be).
+For example, the set $\NN = \{1, 2, 3, 4, \dots\}$ is the positive integers,
+and $\ZZ = \{ \dots, -2, -1, 0, 1, 2, \dots\}$ is the set of all integers.
+As another example, we have a set of humans:
+\[ H = \left\{ x \mid \text{$x$ is a featherless biped} \right\}. \]
+(Here the ``$\mid$'' means ``such that''.)
+
+There's also a set with no elements, which we call the \vocab{empty set}.
+It's denoted by $\varnothing$.
+
+It's conventional to use capital letters for sets (like $H$),
+and lowercase letters for elements of sets (like $x$).
+
+\begin{definition}
+We write $x \in S$ to mean ``$x$ is in $S$'', for example $3 \in \NN$.
+\end{definition}
+
+\begin{definition}
+ If every element of a set $A$ is also in a set $B$,
+ then we say $A$ is a \vocab{subset} of $B$,
+ and denote this by $A \subseteq B$.
+ If moreover $A \neq B$, we say $A$ is a \vocab{proper subset}
+ and write $A \subsetneq B$.
+ (This is analogous to $\le$ and $<$.)
+
+ Given a set $A$, the set of all subsets is denoted $2^A$
+ or $\PP(A)$ and called the \vocab{power set} of $A$.
+\end{definition}
+\begin{example}
+ [Examples of subsets]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\{1,2,3\} \subseteq \NN \subseteq \ZZ$.
+ \ii $\varnothing \subseteq A$ for any set $A$. (Why?)
+ \ii $A \subseteq A$ for any set $A$.
+ \ii If $A = \{1,2\}$ then $2^A =
+ \left\{ \varnothing, \{1\}, \{2\}, \{1,2\} \right\}$.
+ \end{enumerate}
+\end{example}
+
+\begin{definition}
+ We write
+ \begin{itemize}
+ \ii $A \cup B$ for the set of elements in
+ \emph{either} $A$ or $B$ (possibly both),
+ called the \vocab{union} of $A$ and $B$.
+ \ii $A \cap B$ for the set of elements in \emph{both} $A$ and $B$, and
+ called the \vocab{intersection} of $A$ and $B$.
+ \ii $A \setminus B$ for the set of elements in $A$ but \emph{not} in $B$.
+ \end{itemize}
+\end{definition}
+
+\begin{example}
+ [Examples of set operations]
+ Let $A = \{1,2,3\}$ and $B = \{3,4,5\}$. Then
+ \begin{align*}
+ A \cup B &= \{1,2,3,4,5\} \\
+ A \cap B &= \{3\} \\
+ A \setminus B &= \{1,2\}.
+ \end{align*}
+\end{example}
+
+\begin{exercise}
+ Convince yourself: for any sets $A$ and $B$,
+ we have $A \cap B \subseteq A \subseteq A \cup B$.
+\end{exercise}
+
+Here are some commonly recurring sets:
+\begin{itemize}
+ \ii $\CC$ is the set of complex numbers, like $3.2 + \sqrt 2 i$.
+ \ii $\RR$ is the set of real numbers, like $\sqrt 2$ or $\pi$.
+ \ii $\NN$ is the set of positive integers, like $5$ or $9$.
+ \ii $\QQ$ is the set of rational numbers, like $7/3$.
+ \ii $\ZZ$ is the set of integers, like $-2$ or $8$.
+\end{itemize}
+(These are pronounced in the way you would expect:
+``see'', ``are'', ``en'', ``cue'', ``zed''.)
+
+\section{Functions}
+Given two sets $A$ and $B$, a \vocab{function} $f$ from $A$ to $B$
+is a mapping of every element of $A$ to some element of $B$.
+
+We call $A$ the \vocab{domain} of $f$, and $B$ the \vocab{codomain}.
+We write this as $f \colon A \to B$ or $A \taking f B$.
+\begin{abuse}
+ If the name $f$ is not important, we will often just write $A \to B$.
+\end{abuse}
+We write $f(a) = b$ or $a \mapsto b$ to signal that $f$ takes $a$ to $b$.
+
+If $B$ has $0$ as an element and $f(a) = 0$,
+we often say $a$ is a \vocab{root} or \vocab{zero} of $f$,
+and that $f$ \vocab{vanishes} at $a$.
+
+\subsection{Injective / surjective / bijective functions}
+
+\begin{definition}
+ A function $f \colon A \to B$ is \vocab{injective}
+ if it is ``one-to-one'' in the following sense:
+ if $f(a) = f(a')$ then $a = a'$.
+ In other words, for any $b \in B$,
+ there is \emph{at most} one $a \in A$ such that $f(a) = b$.
+
+ Often, we will write $f \colon A \injto B$ to emphasize this.
+\end{definition}
+\begin{definition}
+ A function $f \colon A \to B$ is \vocab{surjective}
+ if it is ``onto'' in the following sense:
+ for any $b \in B$ there is \emph{at least} one $a \in A$
+ such that $f(a) = b$.
+
+ Often, we will write $f \colon A \surjto B$ to emphasize this.
+\end{definition}
+\begin{definition}
+ A function $f \colon A \to B$ is \vocab{bijective}
+ if it is both injective and surjective.
+ In other words, for each $b \in B$,
+ there is \emph{exactly} one $a \in A$ such that $f(a) = b$.
+\end{definition}
+
+\begin{example}
+ [Examples of functions]
+ By ``human'' I mean ``living featherless biped''.
+ \begin{enumerate}[(a)]
+ \ii There's a function taking every human to their
+ age in years (rounded to the nearest integer).
+ This function is \textbf{not injective},
+ because for example there are many people with age $20$.
+ This function is also \textbf{not surjective}: no one has age $10000$.
+
+ \ii There's a function taking every
+ USA citizen to their social security number.
+ This is also \textbf{not surjective} (no one has SSN equal to $3$),
+ but at least it \textbf{is injective} (no two people have the same SSN).
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Examples of bijections]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Let $A = \{1,2,3,4,5\}$ and $B = \{6,7,8,9,10\}$.
+ Then the function $f \colon A \to B$ by $a \mapsto a+5$ is a bijection.
+ \ii In a classroom with $30$ seats,
+ there is exactly one student in every seat.
+ Thus the function taking each student to the seat they're in
+ is a bijection; in particular, there are exactly $30$ students.
+ \end{enumerate}
+\end{example}
+
+\begin{remark}
+ Assume for convenience that $A$ and $B$ are finite sets.
+ Then:
+ \begin{itemize}
+ \ii If $f \colon A \injto B$ is injective,
+ then the size of $A$ is at most the size of $B$.
+ \ii If $f \colon A \surjto B$ is surjective,
+ then the size of $A$ is at least the size of $B$.
+ \ii If $f \colon A \to B$ is a bijection,
+ then the size of $A$ equals the size of $B$.
+ \end{itemize}
+\end{remark}
+
+Now, notice that if $f \colon A \to B$ is a bijection,
+then we can ``apply $f$ backwards''
+(for example, rather than mapping each student to the seat they're in,
+we map each seat to the student sitting in it).
+This is called an \vocab{inverse function};
+we denote it $f\inv \colon B \to A$.
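+For example, the inverse of the bijection
+$a \mapsto a + 5$ from $A = \{1,2,3,4,5\}$ to $B = \{6,7,8,9,10\}$
+seen earlier is the function $f\inv \colon B \to A$
+given by $b \mapsto b - 5$.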
+
+\subsection{Images and pre-images}
+Let $X \taking f Y$ be a function.
+
+\begin{definition}
+ Suppose $T \subseteq Y$.
+ The \vocab{pre-image} $f\pre(T)$ is the set of all
+ $x \in X$ such that $f(x) \in T$.
+ Thus, $f\pre(T)$ is a subset of $X$.
+\end{definition}
+\begin{example}
+ [Examples of pre-image]
+ Let $f \colon H \to \ZZ$ be the age function from earlier.
+ Then
+ \begin{enumerate}[(a)]
+ \ii $f\pre(\{13, 14, 15, 16, 17, 18, 19\})$ is the set of teenagers.
+ \ii $f\pre(\{0\})$ is the set of newborns.
+ \ii $f\pre(\{1000, 1001, 1002, \dots \}) = \varnothing$,
+ as I don't think anyone is that old.
+ \end{enumerate}
+\end{example}
+
+\begin{abuse}
+ By abuse of notation, we may abbreviate $f\pre(\{y\})$ to $f\pre(y)$.
+ So for example, $f\pre(\{0\})$ above becomes shortened to $f\pre(0)$.
+\end{abuse}
+
+The dual notion is:
+\begin{definition}
+ Suppose $S \subseteq X$.
+ The \vocab{image} $f\im(S)$ is the set of
+ all things of the form $f(s)$, where $s \in S$.
+ Thus, $f\im(S)$ is a subset of $Y$.
+\end{definition}
+\begin{example}
+ [Examples of images]
+ Let $A = \{1,2,3,4,5\}$ and $B = \ZZ$.
+ Consider a function $f \colon A \to B$ given by
+ \[
+ f(1) = 17 \quad
+ f(2) = 17 \quad
+ f(3) = 19 \quad
+ f(4) = 30 \quad
+ f(5) = 234.
+ \]
+ \begin{enumerate}[(a)]
+ \ii The image $f\im(\{1,2,3\})$ is the set $\{17, 19\}$.
+ \ii The image $f\im(A)$ is the set $\{17, 19, 30, 234\}$.
+ \end{enumerate}
+\end{example}
+\begin{ques}
+ Suppose $f \colon A \surjto B$ is surjective.
+ What is $f\im(A)$?
+\end{ques}
+
+\section{Equivalence relations}
+Now let $X$ be a fixed set.
+A binary relation $\sim$ on $X$ assigns a truth value ``true''
+or ``false'' to $x \sim y$ for each pair of elements $x, y \in X$.
+Now an \vocab{equivalence relation} $\sim$ on $X$ is a binary relation
+which satisfies the following axioms:
+\begin{itemize}
+ \ii Reflexive: we have $x \sim x$ for every $x$.
+ \ii Symmetric: if $x \sim y$ then $y \sim x$.
+ \ii Transitive: if $x \sim y$ and $y \sim z$ then $x \sim z$.
+\end{itemize}
+An \vocab{equivalence class} is then a maximal set
+of elements which are all equivalent to each other.
+One can show that $X$ becomes partitioned by these equivalence classes:
+
+\begin{example}
+ [Example of an equivalence relation]
+ Let $\NN$ denote the set of positive integers.
+ Then suppose we declare $a \sim b$ if $a$ and $b$ have the same last digit,
+ for example $131 \sim 211$, $45 \sim 125$, and so on.
+
+ Then $\sim$ is an equivalence relation.
+ It partitions $\NN$ into ten equivalence classes,
+ one for each trailing digit.
+\end{example}
+
+Often, the set of equivalence classes will be denoted $X/{\sim}$
+(pronounced ``$X$ mod sim'').
diff --git a/books/napkin/sheafmod.tex b/books/napkin/sheafmod.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e8c4367e25b5ab5b921a2a614ba11237576fffdb
--- /dev/null
+++ b/books/napkin/sheafmod.tex
@@ -0,0 +1 @@
+\chapter{Sheaves of $\OO_X$-modules}
diff --git a/books/napkin/sheaves.tex b/books/napkin/sheaves.tex
new file mode 100644
index 0000000000000000000000000000000000000000..89d0812e59aed1de839eb8af09d0918934ca40aa
--- /dev/null
+++ b/books/napkin/sheaves.tex
@@ -0,0 +1,809 @@
+\chapter{Sheaves and ringed spaces}
+Most of the complexity of the affine variety $V$ earlier comes from $\OO_V$.
+This is a type of object called a ``sheaf''.
+The purpose of this chapter is to completely define what this sheaf is,
+and just what it is doing.
+
+\section{Motivation and warnings}
+The typical example to keep in mind is a sheaf of
+``functions with property $P$'' on a topological space $X$:
+for every open set $U$, $\SF(U)$ gives us the ring of functions on $U$.
+However, we will work very abstractly and only assume $\SF(U)$
+is a ring, without an interpretation as ``functions''.
+
+Throughout this chapter, I will not only be using algebraic geometry
+examples, but also those with $X$ a topological space
+and $\SF$ being a sheaf of differentiable/analytic/etc functions.
+One of the nice things about sheaves is that the same abstraction works fine,
+so you can train your intuition with both algebraic and analytic examples.
+In particular, we can keep drawing open sets $U$ as ovals,
+even though in the Zariski topology that's not what they look like.
+
+The payoff for this abstraction is that it will
+allow us to define an arbitrary scheme in the next chapter.
+Varieties use $\CC[x_1, x_2, \dots, x_n] / I$ as their ``ring of functions'',
+and by using the fully general sheaf we replace this
+with \emph{any} commutative ring.
+In particular, we could choose $\CC[x] / (x^2)$
+and this will give the ``multiplicity''
+behavior that we sought all along.
+
+
+\section{Pre-sheaves}
+\prototype{The sheaf of holomorphic (or regular, continuous,
+differentiable, constant, whatever) functions.}
+
+The proper generalization of our $\OO_V$ is a so-called sheaf of rings.
+Recall that $\OO_V$ took \emph{open sets of $V$} to \emph{rings},
+with the interpretation that $\OO_V(U)$ was a ``ring of functions''.
+
+\subsection{Usual definition}
+So here is the official definition of a pre-sheaf.
+\begin{definition}
+ For a topological space $X$ let $\Opens(X)$ denote
+ the set of open sets of $X$.
+\end{definition}
+\begin{definition}
+ A \vocab{pre-sheaf} of rings on a space $X$ is a function
+ \[ \SF : \Opens(X) \to \catname{Rings} \]
+ meaning each open set gets associated with a ring $\SF(U)$.
+ Each individual element of $\SF(U)$ is called a \vocab{section}.
+
+ It is also equipped with a \vocab{restriction map}
+ for any $U_1 \subseteq U_2$; this is a map
+ \[ \res_{U_1, U_2}
+ \colon \SF(U_2) \to \SF(U_1). \]
+ The map satisfies two axioms:
+ \begin{itemize}
+ \ii The map $\res_{U,U}$ is the identity, and
+ \ii Whenever we have nested subsets
+ \[ U_{\text{small}} \subseteq U_{\text{med}} \subseteq U_{\text{big}} \]
+ the diagram
+ \begin{center}
+ \begin{tikzcd}
+ \SF(U_{\text{big}})
+ \ar[r, "\res"] \ar[rd, "\res"']
+ & \SF(U_{\text{med}}) \ar[d, "\res"] \\
+ & \SF(U_{\text{small}})
+ \end{tikzcd}
+ \end{center}
+ commutes.
+ \end{itemize}
+\end{definition}
+
+\begin{definition}
+ An element of $\SF(X)$ is called a \vocab{global section}.
+\end{definition}
+
+\begin{abuse}
+ If $s \in \mathscr F(U_2)$ is some section and $U_1 \subseteq U_2$,
+ then rather than write $\res_{U_1,U_2}(s)$
+ I will write $s\restrict{U_1}$ instead:
+ ``$s$ restricted to $U_1$''.
+ This is abuse of notation because the section $s$ is just
+ an element of some ring, and in the most abstract of cases
+ may not have a natural interpretation as function.
+\end{abuse}
+
+
+Here is a way you can picture sections.
+In all our usual examples, sheaves return functions on an open set $U$.
+So, we draw a space $X$, and an open set $U$,
+and then we want to draw a ``function on $U$'' to represent a section $s$.
+Crudely, we will illustrate $s$ by drawing an $xy$-plot of a curve,
+since that is how we were told to draw functions in grade school.
+
+\begin{center}
+\begin{asy}
+ filldraw(ellipse( (0,0), 8, 1.5 ), opacity(0.2)+lightblue, black );
+ label("$X$", (8,-1.5));
+ filldraw(ellipse( (-1,0), 5, 1 ), opacity(0.2)+lightcyan, heavycyan);
+ label("$U$", (4,-1), dir(45), heavycyan);
+ path curve = (-6,3)..(-4,5)..(-2,8)..(-1,7)..(0,5)..(2,7)..(4,9);
+ draw(curve, red);
+ label("$s \in \mathcal F(U)$", (-2,8), dir(90), red);
+ draw( (-6,3)..(-6,0), dashed+red );
+ draw( (4,9)..(4,0), dashed+red );
+\end{asy}
+\end{center}
+Then, the restriction corresponds to, well, taking just a chunk of the section.
+\begin{center}
+\begin{asy}
+ filldraw(ellipse( (0,0), 8, 1.5 ), opacity(0.2)+lightblue, black );
+ label("$X$", (8,-1.5));
+ filldraw(ellipse( (-1,0), 5, 1 ), opacity(0.2)+lightcyan, heavycyan);
+ label("$U$", (4,-1), dir(45), heavycyan);
+ path curve = (-6,3)..(-4,5)..(-2,8)..(-1,7)..(0,5)..(2,7)..(4,9);
+ draw(curve, red);
+ draw(subpath(curve, 1, 5), deepgreen+1.5); // from x=-4 to x=2
+ filldraw(ellipse( (-1,0), 3, 0.8 ), opacity(0.2)+deepgreen, deepgreen);
+ label("$V$", (2,-0.8), dir(45), deepgreen);
+ label("$\operatorname{res} s \in \mathcal F(V)$", (-2,8), dir(90), deepgreen);
+ draw( (-4,5)..(-4,0), dashed+deepgreen);
+ draw( (2,7)..(2,0), dashed+deepgreen);
+ draw( (-6,3)..(-6,0), dashed+red );
+ draw( (4,9)..(4,0), dashed+red );
+\end{asy}
+\end{center}
+All of this is still a dream, since $s$ in reality is an element of a ring.
+However, by the end of this chapter we will be able to make
+our dream into a reality.
+
+\begin{example}[Examples of pre-sheaves]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii For an affine variety $V$, $\OO_V$ is of course a sheaf,
+ with $\OO_V(U)$ being the ring of regular functions on $U$.
+ The restriction map just says that if $U_1 \subseteq U_2$,
+ then a function $s \in \OO_V(U_2)$ can also be thought of as
+ a function $s \restrict{U_1} \in \OO_V(U_1)$,
+ hence the name ``restriction''.
+ The commutativity of the diagram then follows.
+
+ \ii Let $X \subseteq \RR^n$ be an open set.
+ Then there is a sheaf of smooth/differentiable/etc.\ functions on $X$.
+ In fact, one can do the same construction for any manifold $M$.
+
+ \ii Similarly, if $X \subseteq \CC$ is open,
+ we can construct a sheaf of holomorphic functions on $X$.
+ \end{enumerate}
+ In all these examples, the sections $s \in \SF(U)$
+ are really functions on the space, but in general they need not be.
+\end{example}
+
+In practice, thinking about the restriction maps
+might be more confusing than helpful; it is better to say:
+\begin{moral}
+ Pre-sheaves should be thought of as
+ ``returning the ring of functions with a property $P$''.
+\end{moral}
+
+\subsection{Categorical definition}
+If you really like category theory,
+we can give a second equivalent and shorter definition.
+Despite being a category lover myself,
+I find this definition less intuitive,
+but its brevity helps with remembering the first one.
+\begin{abuse}
+ By abuse of notation, $\Opens(X)$ will also be thought of as a
+ posetal category by inclusion. Thus $\varnothing$ is an initial object
+ and the entire space $X$ is a terminal object.
+\end{abuse}
+\begin{definition}
+ A \vocab{pre-sheaf} of rings on $X$ is a contravariant functor
+ \[ \SF : \Opens(X)\op \to \catname{Rings}. \]
+\end{definition}
+\begin{exercise}
+ Check that these definitions are equivalent.
+\end{exercise}
+In particular, it is possible to replace $\catname{Rings}$ with any category we want.
+We will not need to do so any time soon, but it's worth mentioning.
+
+
+\section{Stalks and germs}
+\prototype{Germs of real smooth functions tell you the derivatives,
+but germs of holomorphic functions determine the entire function.}
+
+As we mentioned, the helpful pictures from the previous section
+are still just metaphors, because there is no notion of ``value''.
+With the addition of the words ``stalk'' and ``germ'',
+we can actually change that.
+
+\begin{definition}
+ Let $\SF$ be a pre-sheaf (of rings).
+ For every point $p$ we define the \vocab{stalk} $\SF_p$ to be the set
+ \[ \left\{ (s, U) \mid s \in \SF(U), p \in U \right\} \]
+ modulo the equivalence relation $\sim$ that
+ \[ (s_1,U_1) \sim (s_2, U_2)
+ \quad\text{if}\quad
+ s_1 \restrict{V} = s_2 \restrict{V} \]
+ for some open set $V$ with $V \ni p$
+ and $V \subseteq U_1 \cap U_2$.
+ The equivalence classes themselves are called \vocab{germs}.
+\end{definition}
+
+\begin{definition}
+ The germ of a given $s \in \SF(U)$ at a point $p$
+ is the equivalence class for $(s,U) \in \SF_p$.
+ We denote this by $[s]_p$.
+\end{definition}
+
+It is rarely useful to think of a germ as an ordered pair,
+since the set $U$ can get arbitrarily small.
+Instead, one should think of a germ as a
+``shred'' of some section near $p$.
+A nice summary for the right mindset might be:
+\begin{moral}
+ A germ is an ``enriched value'';
+ the stalk is the set of possible germs.
+\end{moral}
+
+Let's add this to our picture from before.
+If we insist on continuing to draw our sections as $xy$-plots,
+then above each point $p$ a good metaphor would be a vertical line out from $p$.
+The germ would then be the ``enriched value of $s$ at $p$''.
+We just draw that as a big dot in our plot.
+The main difference is that the germ is enriched in the sense that
+the germ carries information in a small region around $p$ as well,
+rather than literally just the point $p$ itself.
+So accordingly we draw a large dot for $[s]_p$,
+rather than a small dot at $p$.
+
+\begin{center}
+\begin{asy}
+ filldraw(ellipse( (0,0), 8, 1.5 ), opacity(0.2)+lightblue, black );
+ label("$X$", (8,-1.5));
+ filldraw(ellipse( (-1,0), 5, 1 ), opacity(0.2)+lightcyan, heavycyan);
+ label("$U$", (4,-1), dir(45), heavycyan);
+ path curve = (-6,3)..(-4,5)..(-2,8)..(-1,7)..(0,5)..(2,7)..(4,9);
+ draw(curve, red);
+ label("$s \in \mathcal F(U)$", (-2,8), dir(120), red);
+ draw( (-6,3)..(-6,0), dashed+red );
+ draw( (4,9)..(4,0), dashed+red );
+ draw( (0,0)--(0,12), heavygreen);
+ label("$\mathcal F_p$", (0,12), dir(90), heavygreen);
+ dot("$[s]_p$", (0,5), dir(-45), heavygreen+6);
+ dot("$p$", (0,0), dir(0), blue);
+\end{asy}
+\end{center}
+
+Before going on,
+we might as well note that the stalks are themselves rings,
+not just sets: we can certainly add or subtract enriched values.
+\begin{definition}
+ The stalk $\SF_p$ can itself be regarded as a ring:
+ for example, addition is done by
+ \[
+ \left( s_1, U_1 \right) + \left( s_2, U_2 \right)
+ = \left( s_1 \restrict{U_1 \cap U_2} + s_2 \restrict{U_1 \cap U_2},
+ U_1 \cap U_2 \right).
+ \]
+\end{definition}
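+
+As usual, there is something to check here:
+\begin{exercise}
+ Verify that this addition is well-defined:
+ the germ of the sum does not depend on the
+ choice of representatives $(s_1, U_1)$ and $(s_2, U_2)$.
+\end{exercise}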
+
+\begin{example}
+ [Germs of real smooth functions]
+ Let $X = \RR$ and let $\SF$ be the pre-sheaf on $X$ of smooth functions
+ (i.e.\ $\SF(U)$ is the set of smooth real-valued functions on $U$).
+
+ Consider a global section, $s : \RR \to \RR$ (thus $s \in \SF(X)$)
+ and its germ at $0$.
+ \begin{enumerate}[(a)]
+ \ii From the germ we can read off $s(0)$, obviously.
+ \ii We can also find $s'(0)$, because the germ carries enough
+ information to compute the limit $\lim_{h \to 0} \frac1h[s(h)-s(0)]$.
+ \ii Similarly, we can compute the second derivative and so on.
+ \ii However, we can't read off, say, $s(3)$ from the germ.
+ For example, consider the function from \Cref{ex:nonanalytic},
+ \[
+ s(x) = \begin{cases}
+ e^{-\frac{1}{x-1}} & x > 1 \\
+ 0 & x \le 1.
+ \end{cases}
+ \]
+ Note $s(3) = e^{-\half}$, but $[\text{zero function}]_0 = [s]_0$.
+ So germs can't distinguish between the zero function and $s$.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Germs of holomorphic functions]
+ Holomorphic functions are surprising in this respect.
+ Consider the sheaf $\SF$ on $\CC$ of \emph{holomorphic} functions.
+
+ Take $s : \CC \to \CC$ a global section.
+ Given the germ of $s$ at $0$, we can read off $s(0)$, $s'(0)$, et cetera.
+ The miracle of complex analysis is that just knowing
+ the derivatives of $s$ at zero is enough to reconstruct all of $s$:
+ we can compute the Taylor series of $s$ now.
+ \textbf{Thus germs of holomorphic functions determine the entire function};
+ they ``carry more information'' than their real counterparts.
+
+ In particular, we can concretely describe the stalks of the pre-sheaf:
+ \[
+ \SF_p = \left\{
+ \sum_{k \ge 0} c_k (z-p)^k
+ \text{ convergent near $p$}
+ \right\}.
+ \]
+ For example, this includes germs of meromorphic functions,
+ so long as there is no pole at $p$ itself.
+\end{example}
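+
+For instance, the germ at $0$ of the meromorphic function
+$z \mapsto \frac{1}{1-z}$, which is holomorphic near $0$
+despite the pole at $1$, is represented by the geometric series
+\[ \frac{1}{1-z} = \sum_{k \ge 0} z^k \]
+which converges for $\left\lvert z \right\rvert < 1$.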
+
+And of course, our algebraic geometry example.
+This example will matter a lot later,
+so we do it carefully now.
+\begin{abuse}
+ Rather than writing $(\OO_X)_p$ we will write $\OO_{X,p}$.
+\end{abuse}
+\begin{theorem}
+ [Stalks of $\OO_V$]
+ \label{thm:stalks_affine_var}
+ Let $V \subseteq \Aff^n$ be a variety,
+ and assume $p \in V$ is a point.
+ Then \[ \OO_{V,p} \cong
+ \left\{ \frac fg \mid f,g \in \CC[V], \; g(p) \ne 0 \right\}. \]
+\end{theorem}
+\begin{proof}
+% It is possible to skip this proof since we will subsume it later,
+% but I want to include it to show that you \emph{could} easily prove it,
+% if you wanted to.
+ A regular function $\varphi$ on $U \subseteq V$
+ is supposed to be a function on $U$ that ``locally'' is a quotient
+ of two functions in $\CC[V]$.
+ Since we are looking at the stalk at $p$, though,
+ a germ only remembers the behavior near $p$,
+ and so we can go ahead and write
+ \[
+ \OO_{V,p} =
+ \left\{ \left( \tfrac fg , U \right) \mid
+ U \ni p, \; f,g \in \CC[V], \;
+ \text{$g \neq 0$ on $U$} \right\}
+ \]
+ modulo the same relation.
+
+ Now we claim that the map
+ \[ \OO_{V,p} \to \text{desired RHS}
+ \qquad\text{by}\qquad \left( \frac fg, U \right) \mapsto \frac fg \]
+ is an isomorphism.
+ \begin{itemize}
+ \ii Injectivity: We are working with complex polynomials,
+ so we know that a rational function is determined by its
+ behavior on any open neighborhood of $p$;
+ thus two germ representatives $(\frac{f_1}{g_1}, U_1)$
+ and $(\frac{f_2}{g_2}, U_2)$ agree on $U_1 \cap U_2$
+ if and only if they are actually the same quotient.
+ \ii Surjectivity: take $U = D(g)$. \qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{example}
+ [Stalks of your favorite varieties at the origin]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Let $V = \Aff^1$; then the stalk of $\OO_V$
+ at each point $p \in V$ is
+ \[ \OO_{V,p}
+ = \left\{ \frac{f(x)}{g(x)} \mid g(p) \ne 0 \right\}. \]
+ Examples of elements are $x^2+5$, $\frac{1}{x-1}$ if $p \ne 1$,
+ $\frac{x+7}{x^2-9}$ if $p \ne \pm 3$, and so on.
+
+ \ii Let $V = \Aff^2$;
+ then the stalk of $\OO_V$ at the origin is
+ \[ \OO_{V, (0,0)}
+ = \left\{ \frac{f(x,y)}{g(x,y)} \mid g(0,0) \ne 0 \right\}. \]
+ Examples of elements are $x^2+y^2$,
+ $\frac{x^3}{xy+1}$, $\frac{13x+37y}{x^2+8y+2}$.
+
+ \ii Let $V = \VV(y-x^2) \subseteq \Aff^2$;
+ then the stalk of $\OO_V$ at the origin is
+ \[ \OO_{V, (0,0)}
+ = \left\{ \frac{f(x,y)}{g(x,y)} \mid f,g \in \CC[x,y] / (y-x^2),
+ \; g(0,0) \ne 0 \right\}. \]
+ For example, $\frac{y}{1+x} = \frac{x^2}{1+x}$
+ denote the same element in the stalk.
+ Actually, you could give a canonical choice of representative
+ by replacing $y$ with $x^2$ everywhere, so it would also be
+ correct to write
+ \[ \OO_{V, (0,0)}
+ = \left\{ \frac{f(x)}{g(x)} \mid \; g(0) \ne 0 \right\} \]
+ which is the same as the first example.
+ \end{enumerate}
+\end{example}
+
+\begin{remark}
+ [Aside for category lovers]
+ You may notice that $\SF_p$ seems to be
+ ``all the $\SF(U)$ coming together'', where $p \in U$.
+ And in fact, $\SF_p$ is the categorical \emph{colimit}
+ of the diagram formed by all the $\SF(U)$ such that $p \in U$.
+ This is often written
+ \[ \SF_p = \varinjlim_{U \ni p} \SF(U). \]
+ Thus we can define stalks in any category with colimits,
+ though to be able to talk about germs the category needs
+ to be concrete.
+\end{remark}
+
+\section{Sheaves}
+\prototype{Constant functions aren't sheaves, but locally constant ones are.}
+
+Since we care so much about stalks, which study local behavior,
+we will impose additional local conditions on our pre-sheaves.
+One way to think about this is:
+\begin{moral}
+ Sheaves are pre-sheaves for which $P$ is a \emph{local} property.
+\end{moral}
+
+The formal definition doesn't illuminate this
+as much as the examples do,
+but sadly I have to give the definition first
+for the examples to make sense.
+\begin{definition}
+ A \vocab{sheaf} $\mathscr F$ is a pre-sheaf obeying two additional axioms:
+ Suppose $U$ is covered by open sets $U_\alpha \subseteq U$. Then:
+ \begin{enumerate}
+ \ii (Identity) If $s, t \in \mathscr F(U)$ are sections,
+ and $s\restrict{U_\alpha} = t\restrict{U_\alpha}$
+ for all $\alpha$, then $s = t$.
+ \ii (Gluing) Consider sections
+ $s_\alpha \in \mathscr F(U_\alpha)$ for each $\alpha$.
+ Suppose that
+ \[ s_\alpha \restrict{U_\alpha \cap U_\beta}
+ = s_\beta \restrict{U_\alpha \cap U_\beta} \]
+ for each $U_\alpha$ and $U_\beta$.
+ Then we can find $s \in \SF(U)$ such that
+ $s \restrict{U_\alpha} = s_\alpha$.
+ \end{enumerate}
+\end{definition}
+\begin{remark}
+ [For keepers of the empty set]
+ The above axioms imply $\SF(\varnothing) = 0$ (the zero ring).
+ This is not worth worrying about until you actually need it,
+ so you can forget I said that.
+\end{remark}
+This is best illustrated by picture in the case of just two open sets:
+consider two open sets $U$ and $V$.
+Then the sheaf axioms are saying something about
+$\SF(U \cup V)$, $\SF(U \cap V)$, $\SF(U)$ and $\SF(V)$.
+\begin{center}
+ \begin{asy}
+ size(4cm);
+ filldraw(shift(-0.5,0)*unitcircle, lightred+opacity(0.3), red);
+ filldraw(shift( 0.5,0)*unitcircle, lightblue+opacity(0.3), blue);
+ label("$U$", (-0.5,0)+dir(135), dir(135), red);
+ label("$V$", ( 0.5,0)+dir(45), dir(45), blue);
+ \end{asy}
+\end{center}
+Then for a sheaf of functions, the axioms are saying that:
+\begin{itemize}
+ \ii If $s$ and $t$ are functions (with property $P$)
+ on $U \cup V$
+ and $s \restrict{U} = t \restrict{U}$,
+ $s \restrict{V} = t \restrict{V}$,
+ then $s = t$ on the entire union.
+ This is clear.
+
+ \ii If $s_1$ is a function with property $P$ on $U$
+ and $s_2$ is a function with property $P$ on $V$,
+ and the two functions agree on the overlap,
+ then one can glue them to obtain a function $s$
+ on the whole space:
+ this is obvious, but
+ \textbf{the catch is that the collated function
+ needs to have property $P$ as well}
+ (i.e.\ needs to be an element of $\SF(U \cup V)$).
+ That's why it matters that $P$ is local.
+\end{itemize}
+So you can summarize both of these as saying:
+any two functions on $U$ and $V$
+which agree on the overlap
+glue to a \emph{unique} function on $U \cup V$.
+If you like category theory,
+you might remember we alluded to this in \Cref{ex:diff_pullback}.
+\begin{exercise}
+ [For the categorically inclined]
+ Show that the diagram
+ \begin{center}
+ \begin{tikzcd}
+ \SF(U \cup V) \ar[r] \ar[d] & \SF(U) \ar[d] \\
+ \SF(V) \ar[r] & \SF(U \cap V)
+ \end{tikzcd}
+ \end{center}
+ is a pullback square.
+\end{exercise}
+
+Now for the examples.
+\begin{example}
+ [Examples and non-examples of sheaves]
+ Note that every example of a stalk we computed
+ in the previous section was of a sheaf.
+ Here are more details:
+ \begin{enumerate}[(a)]
+ \ii Pre-sheaves of arbitrary / continuous / differentiable / smooth
+ / holomorphic functions are still sheaves.
+ This is because to verify a function is continuous,
+ one only needs to look at a small open neighborhood of each point.
+
+ \ii For a complex variety $V$, $\OO_V$ is a sheaf,
+ precisely because our definition was \emph{locally} quotients
+ of polynomials.
+
+ \ii The pre-sheaf of \emph{constant} real functions on a space $X$
+ is \emph{not} a sheaf in general, because it fails the gluing axiom.
+ Namely, suppose that $U_1 \cap U_2 = \varnothing$
+ are disjoint open sets of $X$.
+ Then if $s_1$ is the constant function $1$ on $U_1$
+ while $s_2$ is the constant function $2$ on $U_2$,
+ then we cannot glue these to a constant function on $U_1 \cup U_2$.
+
+ \ii On the other hand, \emph{locally constant} functions
+ do produce a sheaf. (A function is locally constant
+ if for every point it is constant on some open neighborhood.)
+ \end{enumerate}
+ In fact, the sheaf in (d) is what is called a \emph{sheafification}
+ of the pre-sheaf of constant functions, which we define momentarily.
+\end{example}
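The failure of gluing in (c), and its repair in (d), can be checked concretely. Below is a minimal Python sketch, using the two-point discrete space; the encoding of sections as dictionaries and the helper names are my own, not part of the text.

```python
# Model the discrete two-point space X = {p, q}; a section over an open set U
# is a function U -> R, encoded as a dict from points to values.
def is_constant(f):
    return len(set(f.values())) <= 1

def glue(f1, f2):
    """Collate two sections that agree on the (here empty) overlap."""
    return {**f1, **f2}

s1 = {"p": 1}              # the constant function 1 on U1 = {p}
s2 = {"q": 2}              # the constant function 2 on U2 = {q}
s = glue(s1, s2)           # a perfectly good function on U1 ∪ U2 = X ...
assert not is_constant(s)  # ... but NOT a constant one: gluing fails.
# Every function on a discrete space is locally constant, however,
# so the glued s does lie in the sheaf of locally constant functions.
```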
+
+\section{For sheaves, sections ``are'' sequences of germs}
+\prototype{A real function on $U$ is a sequence of
+ real numbers $f(p)$ for each $p \in U$ satisfying some local condition.
+ Analogously, a section $s \in \SF(U)$ is a sequence of germs
+ satisfying some local compatibility condition.}
+
+Once we impose the sheaf axioms,
+our metaphorical picture will actually be more or less complete.
+Just as a function was supposed to be a choice of value at each point,
+a section will be a choice of germ at each stalk.
+
+\begin{example}[Real functions vs.\ germs]
+ Let $X$ be a space and let $\SF$ be the sheaf of smooth functions.
+ Take a section $f \in \SF(U)$.
+ \begin{itemize}
+ \ii As a function, $f$ is just a choice of value $f(p) \in \RR$ at
+ every point $p$, subject to a local ``smooth'' condition.
+
+ \ii Let's now think of $f$ as a sequence of germs.
+ At every point $p$ the germ $[f]_p \in \SF_p$ gives us the value $f(p)$
+ as we described above. The germ packages even more data than this:
+ from the germ $[f]_p$ alone we can for example compute $f'(p)$.
+ Nonetheless we stretch the analogy and think of $f$
+ as a choice of germ $[f]_p \in \SF_p$ at each point $p$.
+ \end{itemize}
+ Thus we can replace the notion of the value $f(p)$ with germ $[f]_p$.
+ This is useful because in a general sheaf $\SF$, the notion $s(p)$
+ is not defined while the notion $[s]_p$ is.
+\end{example}
+
+
+From the above example it should be believable that knowing each germ $[s]_p$
+lets us reconstruct the entire section $s$.
+Let's check this from the sheaf axioms:
+\begin{exercise}
+ [Sections are determined by stalks]
+ Let $\SF$ be a sheaf.
+ Consider the natural map
+ \[ \SF(U) \to \prod_{p \in U} \SF_p \]
+ described above.
+ Show that this map is injective, i.e.\
+ the germs of $s$ at every point $p \in U$ determine the section $s$.
+ (You will need the ``identity'' sheaf axiom, but not ``gluing''.)
+\end{exercise}
+
+However, this map is clearly not surjective!
+%\begin{ques}
+% Come up with a counterexample to break surjectivity.
+% (This is like asking ``come up with a non-smooth function''.)
+%\end{ques}
+Nonetheless we can describe the image:
+we want a sequence of germs $(g_p)_{p \in U}$ such that near every point $p$,
+the nearby germs $g_q$ are ``compatible'' with $g_p$.
+We make this precise:
+\begin{definition}
+ Let $\SF$ be a pre-sheaf and let $U$ be an open set.
+ A sequence $(g_p)_{p \in U}$ of germs
+ (with $g_p \in \SF_p$ for each $p$)
+ is said to be \vocab{compatible} if
+ they can be ``locally collated'':
+ \begin{quote}
+ For any $p \in U$ there exists an open neighborhood $U_p \ni p$
+ and a section $s \in \SF(U_p)$ on it
+ such that $[s]_q = g_q$ for each $q \in U_p$.
+ \end{quote}
+ Intuitively, the germs should ``collate together'' to some section near
+ each \emph{individual} point $q$
+ (but not necessarily to a section on all of $U$).
+\end{definition}
+
+We let the reader check this definition is what we want:
+\begin{exercise}
+ Prove that any choice of compatible germs over $U$
+ collates together to a section over $U$.
+ (You will need the ``gluing'' sheaf axiom, but not ``identity''.)
+\end{exercise}
+
+Putting together the previous two exercises gives:
+\begin{theorem}
+ [Sections ``are'' just compatible germs]
+ Let $\SF$ be a sheaf.
+ There is a natural bijection between
+ \begin{itemize}
+ \ii sections of $\SF(U)$, and
+ \ii sequences of compatible germs over $U$.
+ \end{itemize}
+\end{theorem}
+
+
+We draw this in a picture below
+by drawing several stalks, rather than just one,
+with the germs above.
+The stalks at different points need not be related to each other,
+so I have drawn the stalks with different heights to signal this.
+The one caveat the picture cannot show is that germs at points
+``close to each other'' should be ``compatible'' with each other.
+
+\begin{center}
+\begin{asy}
+ filldraw(ellipse( (0,0), 8, 1.5 ), opacity(0.2)+lightblue, black );
+ label("$X$", (8,-1.5));
+ filldraw(ellipse( (-1,0), 5, 1 ), opacity(0.2)+lightcyan, heavycyan);
+ label("$U$", (4,-1), dir(45), heavycyan);
+ path curve = (-6,3)..(-4,5)..(-2,8)..(-1,7)..(0,5)..(2,7)..(4,9);
+ draw(curve, red);
+ draw( (-6,3)..(-6,0), dashed+red );
+ draw( (4,9)..(4,0), dashed+red );
+
+ draw( (-4,0)--(-4,13), heavygreen);
+ dot((-4,5), heavygreen+6);
+ dot((-4,0), blue);
+
+ draw( (-2,0)--(-2,12), heavygreen);
+ dot((-2,8), heavygreen+6);
+ dot((-2,0), blue);
+
+ draw( (-1,0)--(-1,11), heavygreen);
+ dot((-1,7), heavygreen+6);
+ dot((-1,0), blue);
+
+ draw( (0,0)--(0,12), heavygreen);
+ dot((0,5), heavygreen+6);
+ dot((0,0), blue);
+
+ draw( (2,0)--(2,11), heavygreen);
+ dot((2,7), heavygreen+6);
+ dot((2,0), blue);
+\end{asy}
+\end{center}
+
+
+
+This is in exact analogy to the way that e.g.\
+a smooth real-valued function on $U$ is a choice
+of real number $f(p) \in \RR$ at each point $p \in U$
+satisfying a local smoothness condition.
+
+Thus the notion of stalks is what lets us recover the viewpoint
+that sections are ``functions''. Therefore for theoretical purposes,
+\begin{moral}
+ With sheaf axioms, sections are sequences of compatible germs.
+\end{moral}
+In particular, this makes restriction morphisms easy to deal with:
+just truncate the sequence of germs!
+
+\section{Sheafification (optional)}
+\prototype{The pre-sheaf of constant functions
+ becomes the sheaf of locally constant functions.}
+
+The idea is that if $\SF$ is the pre-sheaf of ``functions with property $P$''
+then we want to associate to it a sheaf $\SF\sh$ of
+``functions which are locally $P$''; imposing $P$ only locally
+is exactly what makes $\SF\sh$ a sheaf.
+We have already seen two examples of this:
+\begin{example}
+ [Sheafification]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii If $X$ is a topological space,
+ and $\SF$ is the pre-sheaf of constant functions on open sets of $X$,
+ then $\SF\sh$ is the sheaf of locally constant functions.
+
+ \ii If $V$ is an affine variety,
+ and $\SF$ is the pre-sheaf of rational functions,
+ then $\SF\sh$ is the sheaf of regular functions
+ (which are locally rational).
+ \end{enumerate}
+\end{example}
+
+The procedure is based on stalks and germs.
+We saw that for a sheaf, sections correspond to sequences of compatible germs.
+For a pre-sheaf, we can still define stalks and germs,
+but their properties will be less nice.
+But given our initial pre-sheaf $\SF$,
+we \emph{define} the sections of $\SF\sh$ to be sequences
+of compatible $\SF$-germs.
+\begin{definition}
+ The \vocab{sheafification} $\SF\sh$ of a pre-sheaf $\SF$ is defined by
+ \[ \SF\sh(U) =
+ \left\{ \text{sequences of compatible
+ $\SF$-germs $(g_p)_{p \in U}$} \right\}. \]
+\end{definition}
+\begin{ques}
+ Complete the definition by describing
+ the restriction morphisms of $\SF\sh$.
+\end{ques}
+\begin{abuse}
+ I'll usually be equally sloppy in the future:
+ when defining a sheaf $\SF$, I'll only say what $\SF(U)$ is,
+ with the restriction morphisms $\SF(U_2) \to \SF(U_1)$ being implicit.
+\end{abuse}
+The construction is contrived so that given a section
+$(g_p)_{p \in U} \in \SF\sh(U)$ the germ at a point $p$ is $g_p$:
+\begin{lemma}
+ [Stalks preserved by sheafification]
+ \label{lem:pre_sheaf_stalk}
+ Let $\SF$ be a pre-sheaf and $\SF\sh$ its sheafification.
+ Then for any point $q$, there is an isomorphism
+ \[ (\SF\sh)_q \cong \SF_q. \]
+\end{lemma}
+\begin{proof}
+ A germ in $(\SF\sh)_q$ looks like
+ $\left( (g_p)_{p \in U}, U \right)$,
+ where $g_p = (s_p, U_p)$ are themselves germs of $\SF_p$,
+ and $q \in U$.
+ Then the isomorphism is given by
+ \[ \left( (g_p)_{p \in U}, U \right) \mapsto g_q \in \SF_q. \]
+ The inverse map is given, for each $g = (s,U) \in \SF_q$, by
+ \[ g \mapsto \left( (g)_{p \in U}, U \right) \in (\SF\sh)_q \]
+ i.e.\ the sequence of germs is the constant sequence.
+\end{proof}
+
+We will use sheafification in the future to economically construct sheaves.
+However, in practice, the details of the construction will often not matter.
+
+\section\problemhead
+\begin{problem}
+ Prove that if $\SF$ is already a sheaf, then $\SF(U) \cong \SF\sh(U)$
+ for every open set $U$.
+ \begin{sol}
+ Because the stalks are preserved by sheafification,
+ there is essentially nothing to prove:
+ both sides correspond to sequences of compatible $\SF$-germs over $U$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ \label{prob:finite_sheaf}
+ Let $X$ be a space with two points $\{p,q\}$
+ and let $\SF$ be a sheaf on it.
+ Suppose $\SF_p = \Zc{5}$ and $\SF_q = \ZZ$.
+ Describe $\SF(U)$ for each open set $U$ of $X$, where
+ \begin{enumerate}[(a)]
+ \ii $X$ is equipped with the discrete topology.
+ \ii $X$ is equipped with $\varnothing$, $\{p\}$, $\{p,q\}$
+ as the only open sets.
+ \end{enumerate}
+\end{problem}
+
+\begin{problem}
+ [Skyscraper sheaf]
+ Let $Y$ be a topological space.
+ Fix $p \in Y$ a point, and $R$ a ring.
+ The \vocab{skyscraper sheaf} is defined by
+ \[
+ \SF(U) = \begin{cases}
+ R & p \in U \\
+ 0 & \text{otherwise}
+ \end{cases}
+ \]
+ with restriction maps in the obvious manner.
+ Compute all the stalks of $\SF$.
+
+ (Possible suggestion: first do the case where $Y$ is Hausdorff,
+ where your intuition will give the right answer.
+ Then do the pathological case where every open set of $Y$ contains $p$.
+ Then try to work out the general answer.)
+ \begin{hint}
+ The stalk is $R$ at points in the closure of $\{p\}$, and $0$ elsewhere.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Let $\SF$ be a sheaf on a space $X$ and let $s \in \SF(X)$
+ be a global section.
+ Define the \vocab{support} of $s$ as
+ \[ Z = \left\{ p \in X \mid [s]_p \neq 0 \in \SF_p \right\}. \]
+ Show that $Z$ is a closed set of $X$.
+ \begin{hint}
+ Show that the complement $\{ p \mid [s]_p = 0 \}$ is open.
+ \end{hint}
+\end{problem}
+
+%\begin{dproblem}
+% [Pushforward sheaf]
+% Suppose $f : X \to Y$ is a continuous map of spaces
+% and $\SF$ is a sheaf on $X$.
+% Define a sheaf $f_\ast \SF$ on $Y$ from $\SF$;
+% we call this the pushforward of $\SF$ onto $Y$.
+%\end{dproblem}
+%
+%\begin{problem}
+% Interpret the skyscraper sheaf as the pushforward
+% of a constant sheaf on a one-point space.
+%\end{problem}
diff --git a/books/napkin/shor.tex b/books/napkin/shor.tex
new file mode 100644
index 0000000000000000000000000000000000000000..542625c9604e359eec3168eac4e66d481027ba5b
--- /dev/null
+++ b/books/napkin/shor.tex
@@ -0,0 +1,302 @@
+\chapter{Shor's algorithm}
+OK, now for Shor's Algorithm:
+how to factor $M = pq$ in $O\left( (\log M)^2 \right)$ time.
+
+\section{The classical (inverse) Fourier transform}
+The ``crux move'' in Shor's algorithm is the so-called
+quantum Fourier transform.
+The Fourier transform is used to extract \emph{periodicity} in data,
+and it turns out the quantum analogue is a lot faster than the classical one.
+
+Let me throw the definition at you first.
+Let $N$ be a positive integer, and let $\omega_N = \exp\left( \frac{2\pi i}{N} \right)$.
+\begin{definition}
+ Given a tuple of complex numbers
+ \[ \left( x_0, x_1, \dots, x_{N-1} \right) \]
+ its \vocab{discrete inverse Fourier transform} is
+ the sequence $(y_0, y_1, \dots, y_{N-1})$ defined by
+ \[ y_k = \frac1N \sum_{j=0}^{N-1} \omega_N^{jk} x_j. \]
+ Equivalently, one is applying the matrix
+ \[
+ \frac 1N
+ \begin{bmatrix}
+ 1 & 1 & 1 & \dots & 1 \\
+ 1 & \omega_N & \omega_N^2 & \dots & \omega_N^{N-1} \\
+ 1 & \omega_N^2 & \omega_N^4 & \dots & \omega_N^{2(N-1)} \\
+ \vdots & \vdots & \vdots & \ddots & \vdots \\
+ 1 & \omega_N^{N-1} & \omega_N^{2(N-1)} & \dots & \omega_N^{(N-1)^2}
+ \end{bmatrix}
+ \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{bmatrix}
+ =
+ \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_{N-1} \end{bmatrix}.
+ \]
+\end{definition}
+The reason this operation is important is because it lets
+us detect if the $x_i$ are periodic:
+\begin{example}
+ [Example of discrete inverse Fourier transform]
+ Let $N = 6$, $\omega = \omega_6 = \exp(\frac{2\pi i}{6})$
+ and suppose $(x_0,x_1,x_2,x_3,x_4,x_5)=(0,1,0,1,0,1)$
+ (hence $x_i$ is periodic modulo $2$).
+ Thus,
+ \begin{align*}
+ y_0 &= \tfrac16\left(\omega^0 + \omega^0+ \omega^0\right) = 1/2 \\
+ y_1 &= \tfrac16\left(\omega^1 + \omega^3 + \omega^5\right) = 0 \\
+ y_2 &= \tfrac16\left( \omega^2 + \omega^{6} + \omega^{10} \right) = 0 \\
+ y_3 &= \tfrac16\left( \omega^3 + \omega^9 + \omega^{15} \right) = -1/2 \\
+ y_4 &= \tfrac16\left( \omega^4 + \omega^{12} + \omega^{20} \right) = 0 \\
+ y_5 &= \tfrac16\left( \omega^5 + \omega^{15} + \omega^{25} \right) = 0.
+ \end{align*}
+ Thus, in the inverse transform the ``amplitudes''
+ are all concentrated at multiples of $3$;
+ this reveals that the original
+ sequence has period $\frac N3 = 2$.
+\end{example}
+More generally, given a sequence of $1$'s appearing with period $r$,
+the amplitudes will peak at inputs which are divisible by $\frac{N}{\gcd(N,r)}$.
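As a sanity check on the worked example above, here is a direct $O(N^2)$ implementation of the definition in Python (the function name is mine):

```python
import cmath

def inv_dft(xs):
    """Discrete inverse Fourier transform: y_k = (1/N) * sum_j w^(jk) x_j."""
    N = len(xs)
    w = cmath.exp(2j * cmath.pi / N)
    return [sum(w ** (j * k) * xs[j] for j in range(N)) / N for k in range(N)]

ys = inv_dft([0, 1, 0, 1, 0, 1])
# ys is approximately [1/2, 0, 0, -1/2, 0, 0]: the amplitudes concentrate
# at multiples of 3, exposing the period N/3 = 2 of the input.
```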
+\begin{remark}
+ The fact that this operation is called the ``inverse''
+ Fourier transform is mostly a historical accident
+ (as my understanding goes).
+ Confusingly, the corresponding quantum operation is the
+ (not-inverted) Fourier transform.
+\end{remark}
+If we apply the definition as written, computing the transform takes $O(N^2)$ time.
+It turns out that by an algorithm called the \vocab{fast Fourier transform}
+(whose details we won't discuss), one can reduce this to $O(N \log N)$ time.
+However, for Shor's algorithm this is also insufficient;
+we need something like $O\left( (\log N)^2 \right)$ instead.
+This is where the quantum Fourier transform comes in.
+
+\section{The quantum Fourier transform}
+Note that to compute a Fourier transform, we need to multiply an $N \times N$ matrix
+with an $N$-vector, so this takes $O(N^2)$ multiplications.
+However, we are about to show that with a quantum computer,
+one can do this using $O( (\log N)^2 )$ quantum gates when $N = 2^n$,
+on a system with $n$ qubits.
+
+First, some more notation:
+\begin{abuse}
+ In what follows, $\ket{x}$ will refer to
+ $\ket{x_n} \otimes \ket{x_{n-1}} \otimes \dots \otimes \ket{x_1}$
+ where $x = x_n x_{n-1} \dots x_1$ in binary.
+ For example, if $n = 3$
+ then $\ket{6}$ really means $\ket1 \otimes \ket1 \otimes \ket 0$.
+\end{abuse}
+Observe that the $n$-qubit space now has an
+orthonormal basis $\ket0$, $\ket1$, \dots, $\ket{N-1}$.
+
+\begin{definition}
+ Consider an $n$-qubit state
+ \[ \ket\psi = \sum_{k=0}^{N-1} x_k \ket{k}. \]
+ The \vocab{quantum Fourier transform} is defined by
+ \[
+ \UQFT(\ket\psi) = \frac{1}{\sqrt N}\sum_{j=0}^{N-1}
+ \left( \sum_{k=0}^{N-1} \omega_N^{jk} x_k \right) \ket{j}.
+ \]
+ In other words, using the basis $\ket0$, \dots, $\ket{N-1}$,
+ $\UQFT$ is given by the matrix
+ \[
+ \UQFT = \frac{1}{\sqrt N}
+ \begin{bmatrix}
+ 1 & 1 & 1 & \dots & 1 \\
+ 1 & \omega_N & \omega_N^2 & \dots & \omega_N^{N-1} \\
+ 1 & \omega_N^2 & \omega_N^4 & \dots & \omega_N^{2(N-1)} \\
+ \vdots & \vdots & \vdots & \ddots & \vdots \\
+ 1 & \omega_N^{N-1} & \omega_N^{2(N-1)} & \dots & \omega_N^{(N-1)^2}
+ \end{bmatrix}
+ \]
+\end{definition}
+This is exactly the same definition as before,
+except we have a $\sqrt N$ factor added so that $\UQFT$ is unitary.
+But the trick is that in the quantum setup, the matrix can be rewritten:
+\begin{proposition}
+ [Tensor representation]
+ Let $\ket x = \ket{x_n x_{n-1} \dots x_1}$.
+ Then
+ \begin{align*}
+ \UQFT( \ket{x_n x_{n-1} \dots x_1} )
+ = \frac{1}{\sqrt N} &
+ \left( \ket0 +\exp(2\pi i \cdot 0.x_1) \ket 1 \right) \\
+ &\otimes \left( \ket0 +\exp(2\pi i \cdot 0.x_2x_1) \ket 1 \right) \\
+ &\otimes \dots \\
+ &\otimes \left( \ket0 +\exp(2\pi i \cdot 0.x_n\dots x_1) \ket 1 \right)
+ \end{align*}
+\end{proposition}
+\begin{proof}
+ Direct (and quite annoying) computation.
+ In short, expand everything.
+\end{proof}
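If you would rather not expand everything by hand, the factorization can at least be verified numerically. The following Python sketch (helper names are mine) checks the tensor formula against the matrix definition for every basis state with $n = 3$:

```python
import cmath

def qft_column(n, x):
    """Column x of the QFT matrix: entries w^(jx)/sqrt(N) for j = 0..N-1."""
    N = 2 ** n
    w = cmath.exp(2j * cmath.pi / N)
    return [w ** (j * x) / N ** 0.5 for j in range(N)]

def kron(a, b):
    """Kronecker product of two state vectors (leftmost factor on top)."""
    return [u * v for u in a for v in b]

n = 3
for x in range(2 ** n):
    bits = [(x >> (m - 1)) & 1 for m in range(1, n + 1)]  # bits[m-1] = x_m
    tensor = [1]
    for m in range(1, n + 1):
        # binary fraction 0.x_m x_{m-1} ... x_1
        frac = sum(bits[m - t] / 2 ** t for t in range(1, m + 1))
        phase = cmath.exp(2j * cmath.pi * frac)
        tensor = kron(tensor, [1 / 2 ** 0.5, phase / 2 ** 0.5])
    direct = qft_column(n, x)  # QFT applied to the basis state |x>
    assert max(abs(u - v) for u, v in zip(direct, tensor)) < 1e-9
```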
+
+So on a quantum computer, we can deal with the quantum Fourier transform
+using this ``factor into a tensor product'' trick that isn't possible classically.
+
+Now, without further ado, here's the circuit.
+Define the rotation matrices
+\[ R_k = \begin{bmatrix} 1 & 0 \\ 0 & \exp(2\pi i/2^k) \end{bmatrix}. \]
+Then, for $n=3$ the circuit is given by using controlled $R_k$'s as follows:
+\[
+ \Qcircuit @C=1em @R=.7em {
+ \lstick{\ket{x_3}} & \gate{H} & \gate{R_2} & \qw & \gate{R_3} & \qw & \qw & \rstick{\ket{y_1}} \\
+ \lstick{\ket{x_2}} & \qw & \ctrl{-1} & \gate{H} & \qw & \gate{R_2} & \qw & \rstick{\ket{y_2}} \\
+ \lstick{\ket{x_1}} & \qw & \qw & \qw & \ctrl{-2} & \ctrl{-1} & \gate{H} & \rstick{\ket{y_3}} \\
+ }
+\]
+\begin{exercise}
+ Show that in this circuit, the image of $\ket{x_3x_2x_1}$
+ (for binary $x_i$) is
+ \[
+ \Big(\ket0+\exp(2\pi i \cdot 0.x_1) \ket1\Big)
+ \otimes \Big(\ket0+\exp(2\pi i \cdot 0.x_2x_1) \ket1\Big)
+ \otimes \Big(\ket0+\exp(2\pi i \cdot 0.x_3x_2x_1) \ket1\Big)
+ \]
+ as claimed.
+\end{exercise}
+
+For general $n$, we can write this inductively as
+\[
+ \Qcircuit @C=1em @R=.7em {
+ \lstick{\ket{x_n}} & \multigate{5}{\text{QFT}_{n-1}} & \gate{R_n} & \qw & \qw & \cdots & & \qw & \qw & \cdots & & \qw & \qw & \rstick{\ket{y_1}} \qw \\
+ \lstick{\ket{x_{n-1}}} & \ghost{\text{QFT}_{n-1}} & \qw & \gate{R_{n-1}} & \qw & \cdots & & \qw & \qw & \cdots & & \qw & \qw & \rstick{\ket{y_2}} \qw \\
+ \lstick{\vdots\ \ } & \pureghost{\text{QFT}_{n-1}} & & & & & & & & & & & & \rstick{\ \ \vdots} \\
+ \lstick{\ket{x_i}} & \ghost{\text{QFT}_{n-1}} & \qw & \qw & \qw & \cdots & & \gate{R_i} & \qw & \cdots & & \qw & \qw & \rstick{\ket{y_{n-i+1}}} \qw \\
+ \lstick{\vdots\ \ } & \pureghost{\text{QFT}_{n-1}} & & & & & & & & & & & & \rstick{\ \ \vdots} \\
+ \lstick{\ket{x_2}} & \ghost{\text{QFT}_{n-1}} & \qw & \qw & \qw & \cdots & & \qw & \qw & \cdots & & \gate{R_2} & \qw & \rstick{\ket{y_{n-1}}} \qw \\
+ \lstick{\ket{x_1}} & \qw & \ctrl{-6} & \ctrl{-5} & \qw & \cdots & & \ctrl{-3} & \qw & \cdots & & \ctrl{-1} & \gate{H} & \rstick{\ket{y_n}} \qw
+ }
+\]
+\begin{ques}
+ Convince yourself that when $n=3$ the two circuits displayed are equivalent.
+\end{ques}
+
+Thus, the quantum Fourier transform is achievable with $O(n^2)$ gates,
+which is enormously better than the $O(N \log N)$ operations achieved by
+the classical fast Fourier transform (where $N=2^n$).
+
+\section{Shor's algorithm}
+The quantum Fourier transform is the key piece of Shor's algorithm.
+Now that we have it, we can solve the factoring problem.
+
+Let $p,q > 3$ be odd primes, and assume $p \neq q$.
+The main idea is to turn factoring an integer $M = pq$ into a problem
+about finding the order of $x \pmod M$; the latter is a ``periodicity''
+problem that the quantum Fourier transform will let us solve.
+Specifically, say that an $x \pmod M$ is \emph{good} if
+\begin{enumerate}[(i)]
+ \ii $\gcd(x,M) = 1$,
+ \ii The order $r$ of $x \pmod M$ is even, and
+ \ii In the factorization $0 \equiv (x^{r/2}-1)(x^{r/2}+1) \pmod M$,
+ neither of the two factors is $\equiv 0 \pmod M$.
+ Thus one of them is divisible by $p$, and the other
+ is divisible by $q$.
+\end{enumerate}
+\begin{exercise}
+ [For contest number theory practice]
+ Show that for $M = pq$ at least half of the residues
+ in $\Zm M$ are good.
+\end{exercise}
+
+So if we can find the order of an arbitrary $x \in \Zm M$,
+then we just keep picking $x$ until we pick a good one
+(this happens more than half the time);
+once we do, we compute $\gcd(x^{r/2}-1,M)$ using the Euclidean
+algorithm to extract one of the prime factors of $M$, and we're home free.
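The classical outer loop just described can be sketched in a few lines of Python, with the quantum order-finding subroutine replaced by brute force (so this is only illustrative, not fast):

```python
from math import gcd

def order(x, M):
    """Multiplicative order of x mod M, by brute force; this is exactly
    the step that the quantum Fourier transform speeds up."""
    k, y = 1, x % M
    while y != 1:
        y = y * x % M
        k += 1
    return k

M, x = 77, 2
assert gcd(x, M) == 1
r = order(x, M)                      # r = 30 here, which is even
assert r % 2 == 0
p = gcd(pow(x, r // 2, M) - 1, M)    # gcd(2^15 - 1, 77) = 7
q = gcd(pow(x, r // 2, M) + 1, M)    # gcd(2^15 + 1, 77) = 11
assert p * q == M                    # x = 2 is good, and 77 = 7 * 11
```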
+
+Now how do we do this? The idea is not so difficult:
+first we generate a sequence which is periodic modulo $r$.
+\begin{example}[Factoring $77$: generating the periodic state]
+ Let's say we're trying to factor $M = 77$,
+ and we randomly select $x = 2$, and want to find its order $r$.
+ Let $n = 13$ and $N = 2^{13}$, and start by initializing the state
+ \[ \ket\psi = \frac{1}{\sqrt N} \sum_{k=0}^{N-1} \ket k. \]
+ Now, build a circuit $U_x$ (depending on $x=2$!)
+ which takes $\ket k \ket 0$ to $\ket k \ket{x^k \bmod M}$.
+ Applying this to $\ket\psi \otimes \ket0$ gives
+ \[ U_x(\ket\psi\ket0) =
+ \frac{1}{\sqrt N} \sum_{k=0}^{N-1} \ket k \otimes \ket{x^k \bmod M}. \]
+ Now suppose we measure the second register, and get the state $\ket{51}$
+ (since $2^7 = 128 \equiv 51 \pmod{77}$).
+ That tells us that the collapsed state now, up to scaling, is
+ \[ (\ket{7} + \ket{7+r} + \ket{7+2r} + \dots) \otimes \ket{51}. \]
+\end{example}
+The bottleneck is actually the circuit $U_x$;
+one can compute $x^k \pmod M$ by using repeated squaring,
+but it's still the clumsy part of the whole operation.
+
+In general, the operation is:
+\begin{itemize}
+ \ii Pick a sufficiently large $N = 2^n$ (say, $N \ge M^2$).
+ \ii Generate $\ket\psi = \sum_{k=0}^{2^n-1} \ket{k}$.
+ \ii Build a circuit $U_x$ which computes $\ket{x^k \bmod M}$.
+ \ii Apply it to get a state
+ $\frac{1}{\sqrt N} \sum_{k=0}^{2^n-1} \ket k \otimes \ket{x^k \bmod M}$.
+ \ii Measure the second register to cause the first register to
+ collapse to something which is periodic modulo $r$.
+ Let $\ket\phi$ denote the state of the first register.
+\end{itemize}
+
+Suppose we apply the quantum Fourier transform to the first register $\ket\phi$ now:
+since $\ket\phi$ is periodic modulo $r$, we expect the transform
+will tell us what $r$ is.
+Unfortunately, this doesn't quite work out, since $N$ is a power of two,
+but we don't expect $r$ to be.
+
+Nevertheless, consider a state
+\[ \ket\phi = \ket{k_0} + \ket{k_0+r} + \dots \]
+so for example previously we had $k_0=7$ in the $x=2$ example.
+Applying the quantum Fourier transform, we see that the
+coefficient of $\ket j$ in the transformed image is equal to
+\[
+ \omega_N^{k_0j} \cdot
+ \left( \omega_N^{0} + \omega_N^{jr} + \omega_N^{2jr}
+ + \omega_N^{3jr} + \dots \right)
+\]
+As this is a sum of roots of unity, we realize we have
+destructive interference unless $\omega_N^{jr} = 1$ (since $N$ is large).
+In other words, we approximately have
+\[
+ \UQFT(\ket\phi)
+ \approx
+ \sum_{\substack{0 \le j < N \\ jr/N \in \ZZ}} \ket j
+\]
+up to scaling as usual.
+The bottom line is that
+\begin{moral}
+ If we measure $\UQFT\ket\phi$ we obtain a $\ket j$ such that
+ $\frac{jr}{N}$ is close to an $s \in \ZZ$.
+\end{moral}
+And thus given sufficient luck we can use continued fractions
+to extract the value of $r$.
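To see this interference numerically, we can compute the coefficients directly in Python, using the values of the running example: $N = 2^{13}$, $k_0 = 7$, and the true order $r = 30$ of $2 \pmod{77}$ (which the algorithm of course doesn't know yet; we use it here only to build the collapsed state $\ket\phi$). The peak nearest $17 \cdot N/r$ sits at $j = 4642$:

```python
import cmath

N, r, k0 = 2 ** 13, 30, 7
ks = range(k0, N, r)          # the collapsed state |7> + |37> + |67> + ...

def amplitude(j):
    """Magnitude of the coefficient of |j> in QFT|phi>, up to normalization."""
    return abs(sum(cmath.exp(2j * cmath.pi * j * k / N) for k in ks))

# j = 4642 makes jr/N = 16.9995... nearly an integer: constructive interference.
# A generic j such as 4600 gives jr/N = 16.84...: destructive interference.
assert amplitude(4642) > 100
assert amplitude(4600) < 10
```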
+
+\begin{example}
+ [Finishing the factoring of $M = 77$]
+ As before, we measured the second register,
+ and thus the first register collapsed to the state
+ $\ket\phi = \ket7 + \ket{7+r} + \dots$.
+ Now we make a measurement and obtain $j = 4642$, which means that
+ for some integer $s$ we have
+ \[ \frac{4642r}{2^{13}} \approx s. \]
+ Now, we analyze the continued fraction of $\frac{4642}{2^{13}}$;
+ we find the first few convergents are
+ \[
+ 0, \;
+ 1, \;
+ \half, \;
+ \frac{4}{7}, \;
+ \frac{13}{23}, \;
+ \frac{17}{30}, \;
+ \frac{1152}{2033}, \;
+ \dots
+ \]
+ So $\frac{17}{30}$ is a very good approximation
+ (and the last convergent whose denominator is less than $M = 77$,
+ as the order must satisfy $r < M$),
+ hence we deduce $s = 17$ and $r = 30$ as candidates.
+ And indeed, one can check that $r = 30$ is the desired order.
+\end{example}
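The continued fraction computation itself is mechanical. Here is a Python sketch using exact rational arithmetic; we keep only convergents with denominator less than $M = 77$, since the order satisfies $r < M$:

```python
from fractions import Fraction

def convergents(x, max_den):
    """Continued-fraction convergents p/q of x with q <= max_den."""
    p0, q0, p1, q1 = 0, 1, 1, 0
    out = []
    while True:
        a = int(x)                    # floor, since x stays nonnegative
        p0, p1 = p1, a * p1 + p0      # standard convergent recurrences
        q0, q1 = q1, a * q1 + q0
        if q1 > max_den:
            return out
        out.append(Fraction(p1, q1))
        if x == a:
            return out
        x = 1 / (x - a)

cands = convergents(Fraction(4642, 2 ** 13), 76)
# cands == [0, 1, 1/2, 4/7, 13/23, 17/30]
r = cands[-1].denominator             # candidate order r = 30
assert pow(2, r, 77) == 1             # and indeed 2^30 = 1 (mod 77)
```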
+
+This won't work all the time (for example, we could get unlucky and
+measure $j=0$, i.e.\ $s=0$, which would tell us no information at all).
+
+But one can show that we succeed any time that \[ \gcd(s,r) = 1. \]
+This happens at least $\frac{1}{\log r}$ of the time,
+and since $r < M$ this means that given sufficiently many trials,
+we will eventually extract the correct order $r$.
+This is Shor's algorithm.
diff --git a/books/napkin/singular.tex b/books/napkin/singular.tex
new file mode 100644
index 0000000000000000000000000000000000000000..03f00dec3a39cdb05f9cbc19d177ddeb68483713
--- /dev/null
+++ b/books/napkin/singular.tex
@@ -0,0 +1,731 @@
+\chapter{Singular homology}
+Now that we've defined $\pi_1(X)$,
+we turn our attention to a second way of capturing the same idea, $H_1(X)$.
+We'll then define $H_n(X)$ for $n \ge 2$.
+The good thing about the $H_n$ groups is that, unlike the $\pi_n$ groups,
+they are much easier to compute in practice.
+The downside is that their definition will require quite a bit of setup,
+and the ``algebraic'' part of ``algebraic topology'' will become a lot more technical.
+
+\section{Simplices and boundaries}
+\prototype{$\partial[v_0, v_1, v_2] = [v_0,v_1]-[v_0,v_2]+[v_1,v_2]$.}
+First things first:
+\begin{definition}
+ The \vocab{standard $n$-simplex}, denoted $\Delta^n$, is defined as
+ \[ \left\{ (x_0, x_1, \dots, x_n) \mid x_i \ge 0, x_0+\dots+x_n=1 \right\}. \]
+ Hence it's the convex hull of some vertices $[v_0, \dots, v_n]$.
+ Note that we keep track of the order $v_0$, \dots, $v_n$ of the vertices,
+ for reasons that will soon become clear.
+
+ Given a topological space $X$, a \vocab{singular $n$-simplex} is a map $\sigma : \Delta^n \to X$.
+\end{definition}
+\begin{example}
+ [Singular simplices]
+ \listhack
+ \label{ex:simplex}
+ \begin{enumerate}[(a)]
+ \ii Since $\Delta^0 = [v_0]$ is just a point,
+ a singular $0$-simplex in $X$ is just a point of $X$.
+ \ii Since $\Delta^1 = [v_0, v_1]$ is an interval,
+ a singular $1$-simplex in $X$ is just a path in $X$.
+ \ii Since $\Delta^2 = [v_0, v_1, v_2]$ is an equilateral triangle,
+ a singular $2$-simplex in $X$ looks like a ``disk'' in $X$.
+ \end{enumerate}
+ Here is a picture of all three in a space $X$:
+ \begin{center}
+ \begin{asy}
+ size(8cm);
+ bigblob("$X$");
+ dot("$\sigma^0$", (-3.7,2.5), dir(-90));
+
+ pen hg = heavygreen;
+ pair A0 = (-2.7, 1.1);
+ pair A1 = (-2.9, -2.3);
+ dot("$v_0$", A0, dir(90), hg);
+ dot("$v_1$", A1, dir(-90), hg);
+ path s1 = A0..(-3.1,0.7)..A1;
+ draw(s1, hg, EndArrow);
+ label("$\sigma^1$", s1, dir(10), hg);
+
+ pen rr = red;
+ pair B0 = (1, 1.8);
+ pair B1 = (-0.7, -1.1);
+ pair B2 = (1.9, -1.9);
+ dot("$v_0$", B0, dir(90), rr);
+ dot("$v_1$", B1, dir(210), rr);
+ dot("$v_2$", B2, dir(330), rr);
+ pair b0 = (0,0);
+ pair b1 = (0,-2);
+ pair b2 = (1.6,0.3);
+ draw(B0..b0..B1, rr, EndArrow);
+ draw(B1..b1..B2, rr, EndArrow);
+ draw(B2..b2..B0, rr, EndArrow);
+ fill(B0..b0..B1--B1..b1..B2--B2..b2..B0--cycle, lightred+opacity(0.2));
+ label("$\sigma^2$", (B0+B1+B2)/3, rr);
+ \end{asy}
+ \end{center}
+ The arrows aren't strictly necessary, but I've included them
+ to help keep track of the ``order'' of the vertices;
+ this will be useful in just a moment.
+\end{example}
+
+Now we're going to do something much like
+when we were talking about Stokes' theorem:
+we'll put a boundary $\partial$ operator on the singular $n$-simplices.
+This will give us formal linear sums of $n$-simplices $\sum_k a_k \sigma_k$,
+which we call \vocab{$n$-chains}.
+
+Here is the definition:
+\begin{definition}
+ Given a singular $n$-simplex $\sigma$ with vertices $[v_0, \dots, v_n]$,
+ note that for every $i$ we have an $(n-1)$-simplex $[v_0, \dots, v_{i-1}, v_{i+1}, \dots, v_n]$.
+ The \vocab{boundary operator} $\partial$ is then defined by
+ \[ \partial(\sigma) \defeq \sum_i (-1)^i
+ \left[ v_0, \dots, v_{i-1}, v_{i+1}, \dots, v_n \right]. \]
+ The boundary operator then extends linearly to $n$-chains:
+ \[ \partial\left( \sum_k a_k \sigma_k \right) \defeq \sum a_k \partial(\sigma_k). \]
+ By convention, a $0$-chain has empty boundary.
+\end{definition}
+\begin{example}
+ [Boundary operator]
+ Consider the chains depicted in \Cref{ex:simplex}. Then
+ \begin{enumerate}[(a)]
+ \ii $\partial\sigma^0 = 0$.
+ \ii $\partial(\sigma^1) = [v_1] - [v_0]$:
+ it's the ``difference'' of the $0$-chain corresponding to point $v_1$
+ and the $0$-chain corresponding to point $v_0$.
+ \ii $\partial(\sigma^2) = [v_0,v_1] - [v_0,v_2] + [v_1, v_2]$;
+ i.e.\ one can think of it as the sum of the three oriented arrows
+ which make up the ``sides'' of $\sigma^2$.
+ \ii Notice that if we take the boundary again, we get
+ \begin{align*}
+ \partial(\partial(\sigma^2))
+ &= \partial([v_0,v_1]) - \partial([v_0,v_2]) + \partial([v_1,v_2]) \\
+ &= \left( [v_1]-[v_0] \right) - \left( [v_2]-[v_0] \right) + \left( [v_2]-[v_1] \right) \\
+ &= 0.
+ \end{align*}
+ \end{enumerate}
+\end{example}
+
+The fact that $\partial^2 = 0$ is of course not a coincidence.
+\begin{theorem}
+ [$\partial^2=0$]
+ For any chain $c$, $\partial(\partial(c)) = 0$.
+\end{theorem}
+\begin{proof}
+ Essentially identical to \Cref{prob:partial_zero}:
+ this is just a matter of writing down a bunch of $\sum$ signs.
+ Diligent readers are welcome to try the computation.
+\end{proof}
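For readers who would rather let a machine write down the $\sum$ signs, here is a small Python sketch: chains are encoded as dictionaries from vertex tuples to integer coefficients (an encoding chosen for this illustration), and $\partial^2 = 0$ can be checked directly:

```python
def boundary(chain):
    """Boundary of a formal chain, encoded as {vertex tuple: coefficient}."""
    out = {}
    for simplex, coeff in chain.items():
        if len(simplex) == 1:          # 0-chains have empty boundary
            continue
        for i in range(len(simplex)):  # delete vertex i, with sign (-1)^i
            face = simplex[:i] + simplex[i + 1:]
            out[face] = out.get(face, 0) + (-1) ** i * coeff
    return {f: c for f, c in out.items() if c != 0}

sigma2 = {("v0", "v1", "v2"): 1}
d = boundary(sigma2)
assert d == {("v1", "v2"): 1, ("v0", "v2"): -1, ("v0", "v1"): 1}
assert boundary(d) == {}               # boundary of boundary vanishes
```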
+\begin{remark}
+ The eerie similarity between the chains used to integrate differential forms
+ and the chains in homology is not a coincidence.
+ The de Rham cohomology, discussed much later, will make the relation explicit.
+\end{remark}
+
+\section{The singular homology groups}
+\prototype{Probably $H_n(S^m)$, especially the case $m = n =1$.}
+Let $X$ be a topological space, and let $C_n(X)$ be the free abelian group
+of $n$-chains of $X$ that we defined earlier.
+Our work above gives us a boundary operator $\partial$, so we have a sequence of maps
+\[ \dots \taking\partial C_3(X) \taking\partial C_2(X)
+ \taking\partial C_1(X) \taking\partial C_0(X) \taking\partial 0 \]
+(here I'm using $0$ for the trivial group, which is standard notation for abelian groups).
+We'll call this the \vocab{singular chain complex}.
+
+Now, how does this let us detect holes in the space?
+To see why, let's consider an annulus, with a $1$-chain $c$ drawn in red:
+\begin{center}
+ \begin{asy}
+ size(6cm);
+ pair O = origin;
+ filldraw(CR(O, 6), lightblue+opacity(0.2), blue);
+ filldraw(CR(O, 1), white, blue);
+ label("$X$", 6*dir(45), dir(45));
+ pen rr = red;
+ pair v0 = 5*dir(100);
+ pair v1 = 4*dir(220);
+ pair v2 = 2.2*dir(330);
+ dot("$v_0$", v0, dir(v0), rr);
+ dot("$v_1$", v1, dir(v1), rr);
+ dot("$v_2$", v2, dir(v2), rr);
+ draw(v0..(3.2*dir(110))..v1, rr, EndArrow);
+ draw(v1..(2.1*dir(270))..v2, rr, EndArrow);
+ draw(v2..(3.6*dir(45))..v0, rr, EndArrow);
+ \end{asy}
+\end{center}
+Notice that
+\[ \partial c = ([v_1]-[v_0]) - ([v_2]-[v_0]) + ([v_2]-[v_1]) = 0 \]
+and so we can say this $1$-chain $c$ is a ``cycle'',
+because it has trivial boundary.
+However, $c$ is not itself the boundary of any $2$-chain,
+because of the hole in the center of the space
+--- it's impossible to ``fill in'' the interior of $c$!
+So, we have detected the hole by the algebraic fact that
+\[ c \in \ker\left( C_1(X) \taking{\partial} C_0(X) \right)
+ \qquad\text{but}\qquad
+ c \notin \img \left( C_2(X) \taking{\partial} C_1(X) \right). \]
+Indeed, if the hole was not present then this statement would be false.
+
+We can capture this idea in any dimension, as follows.
+\begin{definition}
+ Let
+ \[ \dots \taking\partial C_2(X) \taking\partial C_1(X) \taking\partial C_0(X) \taking\partial 0 \]
+ as above.
+ We say that $c \in C_n(X)$ is:
+ \begin{itemize}
+ \ii a \vocab{cycle} if $c \in \ker\left( C_{n}(X) \taking\partial C_{n-1}(X) \right)$,
+ and
+ \ii a \vocab{boundary} if $c \in \img \left( C_{n+1}(X) \taking\partial C_n(X) \right)$.
+ \end{itemize}
+ Denote the cycles and boundaries by $Z_n(X), B_n(X) \subseteq C_n(X)$, respectively.
+\end{definition}
+
+\begin{ques}
+ Just to get you used to the notation:
+ check that $B_n$ and $Z_n$ are themselves abelian groups,
+ and that $B_n(X) \subseteq Z_n(X) \subseteq C_n(X)$.
+\end{ques}
+
+The key point is that we can now define:
+\begin{definition}
+ The \vocab{$n$th homology group} $H_n(X)$ is
+ defined as \[ H_n(X) \defeq Z_n(X) / B_n(X). \]
+\end{definition}
+\begin{example}
+ [The zeroth homology group]
+ Let's compute $H_0(X)$ for a topological space $X$.
+ We take $C_0(X)$, which is just formal linear sums of points of $X$.
+
+ First, we consider the kernel of $\partial : C_0(X) \to 0$.
+ Since this map is identically zero, its kernel is the entire space $C_0(X)$:
+ that is, every $0$-chain is a ``cycle''.
+
+ Now, which $0$-chains are boundaries?
+ The main idea is that $[b] - [a]$ is a boundary if and only if
+ there's a $1$-chain which connects $a$ to $b$, i.e.\
+ there is a path from $a$ to $b$.
+ In particular,
+ \[ \text{$X$ path connected} \implies H_0(X) \cong \ZZ. \]
+\end{example}
+More generally, we have
+\begin{proposition}[Homology groups split into path-connected components]
+ If $X = \bigcup_\alpha X_\alpha$ is a decomposition into path-connected components,
+ then we have \[ H_n(X) \cong \bigoplus_\alpha H_n(X_\alpha). \]
+ In particular, if $X$ has $r$ path-connected components, then $H_0(X) \cong \ZZ^{\oplus r}$.
+\end{proposition}
+(If it's surprising to see $\ZZ^{\oplus r}$, remember that
+an abelian group is the same thing as a $\ZZ$-module,
+so the notation $G \oplus H$ is customary in place of $G \times H$
+when $G$, $H$ are abelian.)
+
+Now let's investigate the first homology group.
+\begin{theorem}[Hurewicz theorem]
+ Let $X$ be path-connected.
+ Then $H_1(X)$ is the \emph{abelianization} of $\pi_1(X, x_0)$.
+\end{theorem}
+We won't prove this but you can see it roughly from the example.
+The group $H_1(X)$ captures the same information
+as $\pi_1(X, x_0)$: a cycle (in $Z_1(X)$) corresponds to the same thing
+as the loops we studied in $\pi_1(X, x_0)$,
+and the boundaries (in $B_1(X)$, i.e.\ the things we mod out by)
+are exactly the nullhomotopic loops in $\pi_1(X, x_0)$.
+The difference is that $H_1(X)$ allows loops to commute,
+whereas $\pi_1(X, x_0)$ does not.
+\begin{example}
+ [The first homology group of the annulus]
+ To give a concrete example, consider the annulus $X$ above.
+ We found a chain $c$ that wrapped once around the hole of $X$.
+ The point is that in fact,
+ \[ H_1(X) = \left< c\right> \cong \ZZ \]
+ which is to say the chains $c$, $2c$, \dots\ are all distinct in $H_1(X)$,
+ but that any other $1$-chain is equivalent to one of these.
+ This captures the fact that $X$ is really just $S^1$.
+\end{example}
+\begin{example}
+ [An explicit boundary in $S^1$]
+ \label{ex:S1_c_minus_d}
+ In $X = S^1$, let $a$ be the uppermost point and $b$ the lowermost point.
+ Let $c$ be the simplex from $a$ to $b$ along the left half of the circle,
+ and $d$ the simplex from $a$ to $b$ along the right half.
+ Finally, let $\gamma$ be the simplex which represents a loop $\gamma$ from $a$
+ to itself, wrapping once counterclockwise around $S^1$.
+ We claim that in $H_1(S^1)$ we have
+ \[ \gamma = c - d \]
+ which geometrically means that $c-d$ represents wrapping once around
+ the circle (which is of course what we expect).
+
+ \begin{center}
+ \begin{asy}
+ size(5cm);
+ real r = 0.8;
+ pair a = dir(90);
+ pair b = dir(-90);
+
+ fill(unitcircle, lightblue+opacity(0.2));
+ unfill(CR( -(1-r)/2*a, (1+r)/2 ));
+
+ draw(arc(origin, a, -a), red);
+ draw(arc((r-1)*a/2, -a, r*a), red, EndArrow);
+
+ dot("$v_0=a$", r*a, dir(-90));
+ dot("$v_1=a$", a, dir(90));
+ dot("$v_2=b$", b, dir(-90));
+ label("$\gamma$", dir(135), dir(135), red);
+ draw(arc((r-1)*a/2, r*a, -a), blue, EndArrow);
+ draw(arc(origin, 1, 90, -90), heavygreen, EndArrow);
+ label("$c$", (1+r)/2*dir(180), dir(0), blue);
+ label("$d$", dir(0), dir(0), heavygreen);
+ \end{asy}
+ \end{center}
+
+ Indeed this can be seen from the picture above, where we have drawn
+ a $2$-simplex whose boundary is exactly $\gamma - c + d$.
+ The picture is somewhat metaphorical: in reality $v_0 = v_1 = a$,
+ and the entire $2$-simplex is embedded in $S^1$.
+ This is why singular homology is so-called: the images of the simplex
+ can sometimes look quite ``singular''.
+\end{example}
+
+\begin{example}
+ [The first homology group of the figure eight]
+ Consider $X_8$ (see \Cref{ex:figure8}).
+ Both homology and homotopy see the two loops in $X_8$, call them $a$ and $b$.
+ The difference is that in $\pi_1(X_8, x_0)$,
+ these two loops do not commute: $ab \neq ba$,
+ because the group operation in $\pi_1$ is ``concatenate paths''.
+ But in the homology group $H_1(X_8)$ the way we add $a$ and $b$
+ is to add them formally, to get the $1$-chain $a+b$.
+ So \[ H_1(X_8) \cong \ZZ^{\oplus 2} \quad\text{while}\quad \pi_1(X_8, x_0) = \left< a,b\right>. \]
+\end{example}
+
+\begin{example}
+ [The homology groups of $S^2$]
+ Consider $S^2$, the two-dimensional sphere.
+ Since it's path connected, we have $H_0(S^2) = \ZZ$.
+ We also have $H_1(S^2) = 0$, for the same reason that $\pi_1(S^2)$ is trivial as well.
+ On the other hand we claim that \[ H_2(S^2) \cong \ZZ. \]
+ The elements of $H_2(S^2)$ correspond to wrapping $S^2$ in a tetrahedral bag
+ (or two bags, or three bags, etc.).
+ Thus, the second homology group lets us detect the spherical cavity of $S^2$.
+\end{example}
+Actually, more generally it turns out that we will have
+\[
+ H_n(S^m) \cong
+ \begin{cases}
+ \ZZ & n=m \text{ or } n=0 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+\]
+
+\begin{example}
+ [Contractible spaces]
+ Given any contractible space $X$, it turns out that
+ \[
+ H_n(X)
+ \cong
+ \begin{cases}
+ \ZZ & n = 0 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ The reason is that, like homotopy groups,
+ homology groups turn out to be homotopy invariant.
+ (We'll prove this next section.)
+ So the homology groups of contractible $X$ are the same
+ as those of a one-point space, which are those above.
+\end{example}
+
+\begin{example}
+ [Homology groups of the torus]
+ While we won't be able to prove it for a while, it turns out that
+ \[
+ H_n(S^1 \times S^1)
+ \cong
+ \begin{cases}
+ \ZZ & n = 0,2 \\
+ \ZZ^{\oplus 2} & n = 1 \\
+ 0 & \text{otherwise}.
+ \end{cases}
+ \]
+ The homology group at $1$ corresponds to our knowledge that $\pi_1(S^1 \times S^1) \cong \ZZ^2$
+ and the homology group at $2$ detects the ``cavity'' of the torus.
+\end{example}
+
+
+This is fantastic and all, but how does one go about actually computing any homology groups?
+This will be a rather long story, and we'll have to do a significant amount of both algebra and geometry
+before we're really able to compute any homology groups.
+In what follows, it will often be helpful to keep track of which things are purely algebraic
+(work for any chain complex), and which parts are actually stating something which is geometrically true.
+
+\section{The homology functor and chain complexes}
+As I mentioned before, the homology groups are homotopy invariant.
+This will be a similar song and dance as the work we did to
+create a functor $\pi_1 : \catname{hTop}_\ast \to \catname{Grp}$.
+Rather than working slowly and pulling away the curtain to reveal the category theory at the end,
+we'll instead start with the category theory right from the start just to save some time.
+
+\begin{definition}
+ The category $\catname{hTop}$ is defined as follows:
+ \begin{itemize}
+ \ii Objects: topological spaces.
+ \ii Morphisms: \emph{homotopy classes} of morphisms $X \to Y$.
+ \end{itemize}
+ In particular, $X$ and $Y$ are isomorphic in $\catname{hTop}$
+ if and only if they are homotopic.
+\end{definition}
+You'll notice this is the same as $\catname{hTop}_\ast$,
+except without the basepoints.
+
+\begin{theorem}[Homology is a functor $\catname{hTop} \to \catname{Grp}$]
+ \label{thm:Hn_functor}
+ For any particular $n$, $H_n$ is a functor $\catname{hTop} \to \catname{Grp}$.
+ In particular,
+ \begin{itemize}
+ \ii Given any map $f : X \to Y$, we get an induced map $f_\ast : H_n(X) \to H_n(Y)$.
+ \ii For two homotopic maps $f, g : X \to Y$, $f_\ast = g_\ast$.
+ \ii Two homotopy equivalent spaces $X$ and $Y$ have isomorphic homology groups:
+ if $f : X \to Y$ is a homotopy equivalence, then $f_\ast : H_n(X) \to H_n(Y)$ is an isomorphism.
+ \ii (Insert your favorite result about functors here.)
+ \end{itemize}
+\end{theorem}
+
+In order to do this, we have to describe how to take a map $f : X \to Y$
+and obtain a map $H_n(f) : H_n(X) \to H_n(Y)$.
+Then we have to show that homotopic maps $f$ and $g$ induce the same map.
+(This is the analog of the work we did with $f_\sharp$ before.)
+It turns out that this time around, proving this is much more tricky,
+and we will have to go back to the chain complex
+$C_\bullet(X)$ that we built at the beginning.
+
+\subsection{Algebra of chain complexes}
+Let's start with the algebra.
+First, I'll define the following abstraction of the singular chain complex to any sequence of abelian groups.
+Actually, though, it works in any abelian category (not just $\catname{AbGrp}$).
+The strategy is as follows: we'll define everything that we need completely abstractly,
+then show that the geometry concepts we want correspond to this setting.
+
+\begin{definition}
+ A \vocab{chain complex} is a sequence of abelian groups $A_n$ and maps
+ \[ \dots \taking{\partial} A_{n+1} \taking{\partial}
+ A_n \taking{\partial} A_{n-1} \taking{\partial} \dots \]
+ such that the composition of any two adjacent maps is the zero morphism.
+ We usually denote this by $A_\bullet$.
+
+ The $n$th homology group $H_n(A_\bullet)$ is defined
+ as $\ker(A_n \to A_{n-1}) / \img(A_{n+1} \to A_n)$.
+ Cycles and boundaries are defined in the same way as before.
+\end{definition}
+Obviously, this is just an algebraic generalization of the structure we previously looked at,
+rid of all its original geometric context.
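As a toy instance of the defining condition (adjacent maps compose to zero), take the chain complex of a filled triangle, with $C_2 \cong \ZZ$, $C_1 \cong \ZZ^3$, $C_0 \cong \ZZ^3$. The Python check below is my own illustration (the matrices just encode the boundary formulas from earlier):

```python
# Filled triangle: vertices 0,1,2; edges [0,1],[0,2],[1,2]; one face [0,1,2].
# d2 sends the face to [1,2] - [0,2] + [0,1];
# d1 sends each edge [a,b] to [b] - [a].
d1 = [[-1, -1,  0],
      [ 1,  0, -1],
      [ 0,  1,  1]]
d2 = [[ 1],
      [-1],
      [ 1]]
# Matrix product d1 * d2: it must vanish for this to be a chain complex.
product = [[sum(d1[i][k] * d2[k][j] for k in range(3)) for j in range(1)]
           for i in range(3)]
assert product == [[0], [0], [0]]   # adjacent maps compose to the zero morphism
```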
+
+\begin{definition}
+ A \vocab{morphism of chain complexes} $f : A_\bullet \to B_\bullet$ is
+ a sequence of maps $f_n$ for every $n$ such that the diagram
+ \begin{center}
+ \begin{tikzcd}
+ \dots \ar[r, "\partial_A"] & A_{n+1} \ar[r, "\partial_A"] \ar[d, "f_{n+1}"] &
+ A_n \ar[r, "\partial_A"] \ar[d, "f_n"] & A_{n-1} \ar[r, "\partial_A"] \ar[d, "f_{n-1}"] & \dots \\
+ \dots \ar[r, "\partial_B"] & B_{n+1} \ar[r, "\partial_B"] &
+ B_n \ar[r, "\partial_B"] & B_{n-1} \ar[r, "\partial_B"] & \dots
+ \end{tikzcd}
+ \end{center}
+ commutes.
+ Under this definition, the set of chain complexes becomes a category,
+ which we denote $\catname{Cmplx}$.
+\end{definition}
+
+Note that given a morphism of chain complexes $f : A_\bullet \to B_\bullet$,
+every cycle in $A_n$ gets sent to a cycle in $B_n$, since the square
+\begin{center}
+\begin{tikzcd}
+ A_n \ar[r, "\partial_A"] \ar[d, "f_n"'] & A_{n-1} \ar[d, "f_{n-1}"] \\
+ B_n \ar[r, "\partial_B"'] & B_{n-1}
+\end{tikzcd}
+\end{center}
+commutes.
+Similarly, every boundary in $A_n$ gets sent to a boundary in $B_n$. Thus,
+\begin{moral}
+ Every map $f : A_\bullet \to B_\bullet$ gives a
+ map $f_\ast : H_n(A) \to H_n(B)$ for every $n$.
+\end{moral}
+\begin{exercise}
+ Interpret $H_n$ as a functor $\catname{Cmplx} \to \catname{Grp}$.
+\end{exercise}
+
+Next, we want to define what it means for two maps $f$ and $g$ to be homotopic.
+Here's the answer:
+\begin{definition}
+ Let $f, g : A_\bullet \to B_\bullet$.
+ Suppose that one can find a map $P_n : A_n \to B_{n+1}$ for every $n$ such that
+ \[ g_n - f_n = \partial_B \circ P_n + P_{n-1} \circ \partial_A. \]
+ Then $P$ is a \vocab{chain homotopy} from $f$ to $g$
+ and $f$ and $g$ are \vocab{chain homotopic}.
+\end{definition}
+
+We can draw a picture to illustrate this (warning: the diagonal dotted arrows do NOT commute
+with all the other arrows):
+\begin{center}
+\begin{tikzcd}[sep=large]
+ \dots \ar[r, "\partial_A"]
+ & A_{n+1} \ar[r, "\partial_A"] \ar[d, "g-f"']
+ & A_n \ar[r, "\partial_A"] \ar[d, "g-f"'] \ar[ld, dotted, "P_n"']
+ & A_{n-1} \ar[r, "\partial_A"] \ar[d, "g-f"'] \ar[ld, dotted, "P_{n-1}"']
+ & \dots \\
+ \dots \ar[r, "\partial_B"]
+ & B_{n+1} \ar[r, "\partial_B"]
+ & B_n \ar[r, "\partial_B"]
+ & B_{n-1} \ar[r, "\partial_B"]
+ & \dots
+\end{tikzcd}
+\end{center}
+The definition is that in each slanted ``parallelogram'', the $g-f$ arrow is the sum of the two
+compositions along the sides.
+
+\begin{remark}
+ This equation should look terribly unmotivated right now,
+ aside from the fact that we are about to show it does the right algebraic thing.
+ Its derivation comes from the geometric context that we have deferred
+ until the next section, where ``homotopy'' will naturally give ``chain homotopy''.
+\end{remark}
+
+Now, the point of this definition is that
+\begin{proposition}[Chain homotopic maps induce the same map on homology groups]
+ Let $f, g: A_\bullet \to B_\bullet$ be chain homotopic maps $A_\bullet \to B_\bullet$.
+ Then the induced maps $f_\ast, g_\ast : H_n(A_\bullet) \to H_n(B_\bullet)$ coincide for each $n$.
+\end{proposition}
+\begin{proof}
+ It's equivalent to show that $g-f$ gives the zero map on homology groups.
+ In other words, we need to check that every cycle of $A_n$ becomes
+ a boundary of $B_n$ under $g-f$.
+ \begin{ques}
+ Verify that this is true. \qedhere
+ \end{ques}
+\end{proof}
+
+
+\subsection{Geometry of chain complexes}
+Now let's fill in the geometric details of the picture above.
+First:
+\begin{lemma}[Map of space $\implies$ map of singular chain complexes]
+ Each $f : X \to Y$ induces a map $C_n(X) \to C_n(Y)$.
+\end{lemma}
+\begin{proof}
+ Take the composition
+ \[ \Delta^n \taking{\sigma} X \taking{f} Y. \]
+ In other words, a path in $X$ becomes a path in $Y$, et cetera.
+ (It's not hard to see that the squares involving $\partial$ commute;
+ check it if you like.)
+\end{proof}
+
+Now, what we need is to show that if $f , g : X \to Y$ are homotopic,
+then they are chain homotopic.
+To produce a chain homotopy, we need to take every $n$-simplex of $X$
+to an $(n+1)$-chain in $Y$, thus defining the map $P_n$.
+
+Let's think about how we might do this. Let's take the $n$-simplex $\sigma : \Delta^n \to X$
+and feed it through $f$ and $g$; pictured below is a $1$-simplex $\sigma$ (i.e.\ a path in $X$)
+which has been mapped into the space $Y$.
+Homotopy means the existence of a map $F : X \times [0,1] \to Y$
+such that $F(-,0) = f$ and $F(-,1) = g$, parts of which I've illustrated below with grey arrows
+in the image for $Y$.
+
+\begin{center}
+ \begin{asy}
+ size(14cm);
+ bigblob("$Y$");
+ pair A = (-3.5,2);
+ pair B = (-2.2,-2);
+ pair C = (2.2,1.6);
+ pair D = (2.0, -1.1);
+
+ pen rr = red;
+ dot("$v_0$", A, dir(90), rr); dot("$v_1$", B, dir(-090), rr);
+ path v = A..(-2,0)..B;
+ draw(v, rr, EndArrow);
+ label("$f(\sigma)$", v, dir(180), rr);
+
+ pen hg = heavygreen;
+ dot("$w_0$", C, dir(90), hg); dot("$w_1$", D, dir(-90), hg);
+ path w = C..(2.3,0)..D;
+ draw(w, hg, EndArrow);
+ label("$g(\sigma)$", w, dir(0), hg);
+
+ pen gr = grey + dashed;
+ draw(A--C, gr, Margins);
+ path F = A--D;
+ draw(F, gr, Margins);
+ label("$F$", F, dir(90), gr);
+ draw(B--D, gr, Margins);
+
+ add(shift(12.5,0)*CC());
+ pair W = 1.5*dir(135);
+ pair X = 1.5*dir(225);
+ pair Y = 1.5*dir(45);
+ pair Z = 1.5*dir(-45);
+ dot("$v_0$", W, dir(W), rr);
+ dot("$v_1$", X, dir(X), rr);
+ dot("$w_0$", Y, dir(Y), hg);
+ dot("$w_1$", Z, dir(Z), hg);
+
+ label("$\Delta^1 \times [0,1]$", 2*dir(90), dir(90));
+ pair A1 = (1.5,0);
+ pair A2 = (3.5,0);
+ pair A3 = (6.25,0);
+ pair A4 = (7.5,0);
+ draw(A1--A2, EndArrow);
+ draw(A3--A4, EndArrow);
+ label("$\boxed{X \times [0,1]}$", midpoint(A2--A3), blue);
+ label("$F$", A3--A4, dir(90));
+ label("$\sigma \times \operatorname{id}$", A1--A2, dir(90));
+
+ draw(W--X, rr, EndArrow);
+ draw(Y--Z, hg, EndArrow);
+ draw(W--Y, gr);
+ draw(X--Z, gr);
+ draw(W--Z, gr);
+ \end{asy}
+\end{center}
+
+This picture suggests how we might proceed:
+we want to create a $2$-chain on $Y$ given the $1$-chains we've drawn.
+The homotopy $F$ provides us with a ``square'' structure on $Y$,
+i.e.\ the square bounded by $v_0$, $v_1$, $w_1$, $w_0$.
+We split this up into two triangles; and that's our $2$-chain.
+
+We can make this formal by taking $\Delta^1 \times [0,1]$ (which \emph{is} a square)
+and splitting it into two triangles.
+Then, if we apply $\sigma \times \id$, we'll get a $2$-chain in $X \times [0,1]$,
+and then finally applying $F$ will map everything into our space $Y$.
+In our example, the final image is the $2$-chain, consisting of two triangles,
+which in our picture can be written as $[v_0, w_0, w_1] - [v_0, v_1, w_1]$;
+its boundary is given by the red, green, and grey edges.
+
+More generally, for an $n$-simplex $\phi = [x_0, \dots, x_n]$
+we define the so-called \emph{prism operator} $P_n$
+as follows. Set $v_i = f(x_i)$ and $w_i = g(x_i)$ for each $i$.
+Then, we let
+\[ P_n(\phi) \defeq \sum_{i=0}^n (-1)^i (F \circ (\phi\times\id))
+ \left[ v_0, \dots, v_i, w_i, \dots, w_n \right]. \]
+This is just the generalization of the construction above to dimensions $n > 1$;
+we split $\Delta^n \times [0,1]$ into $n+1$ simplices,
+map it into $X \times [0,1]$ by $\phi \times \id$ and then push the whole thing into $Y$.
+The $(-1)^i$ makes sure that the ``diagonal'' faces all cancel off with each other.
+
+We now claim that for every $\sigma$,
+\[ \partial_Y(P_n(\sigma)) = g(\sigma) - f(\sigma) - P_{n-1}(\partial_X\sigma). \]
+In the picture, $\partial_Y \circ P_n$ is the boundary of the entire prism
+(in the figure, this becomes the red, green, and grey lines, not including diagonal grey,
+which is cancelled out).
+The $g-f$ is the green minus the red,
+and the $P_{n-1} \circ \partial_X$ represents the grey edges of the prism
+(not including the diagonal line from $v_0$ to $w_1$).
+Indeed, one can check (just by writing down several $\sum$ signs)
+that the above identity holds.
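The identity can also be checked by computer, at least formally. The Python sketch below (representation and names are mine, not from the text) works with formal chains of vertex tuples: `relabel` plays the role of $f$ ($x_j \mapsto v_j$) and $g$ ($x_j \mapsto w_j$), `prism` is the operator $P_n$ above, and the loop verifies $\partial P(\phi) + P(\partial\phi) = g(\phi) - f(\phi)$ on the standard $n$-simplex for $n = 1, 2, 3$.

```python
from collections import defaultdict

def add(*chains):
    """Sum of formal chains (dicts: simplex tuple -> integer coefficient)."""
    out = defaultdict(int)
    for ch in chains:
        for s, c in ch.items():
            out[s] += c
    return {s: c for s, c in out.items() if c}

def neg(chain):
    return {s: -c for s, c in chain.items()}

def bnd(chain):
    """Boundary: alternating sum of faces; 0-chains map to zero."""
    out = defaultdict(int)
    for s, c in chain.items():
        if len(s) == 1:
            continue
        for k in range(len(s)):
            out[s[:k] + s[k + 1:]] += (-1) ** k * c
    return {s: c for s, c in out.items() if c}

def relabel(chain, letter):
    """f and g: replace each index j by the labelled vertex (letter, j)."""
    return {tuple((letter, j) for j in s): c for s, c in chain.items()}

def prism(chain):
    """P: [x_0, ..., x_n] -> sum_i (-1)^i [v_0..v_i, w_i..w_n]."""
    out = defaultdict(int)
    for s, c in chain.items():
        for i in range(len(s)):
            top = tuple(('v', j) for j in s[:i + 1])
            bot = tuple(('w', j) for j in s[i:])
            out[top + bot] += (-1) ** i * c
    return {s: c for s, c in out.items() if c}

for n in (1, 2, 3):
    phi = {tuple(range(n + 1)): 1}
    lhs = add(bnd(prism(phi)), prism(bnd(phi)))
    rhs = add(relabel(phi, 'w'), neg(relabel(phi, 'v')))
    assert lhs == rhs   # dP + Pd = g - f, as claimed
```

For $n = 1$ the cancellation is exactly the picture: the diagonal faces cancel inside $\partial P$, and the vertical grey edges of $\partial P$ cancel against $P(\partial\phi)$, leaving only $g - f$.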
+
+So that gives the chain homotopy from $f$ to $g$, completing the proof of \Cref{thm:Hn_functor}.
+
+\section{More examples of chain complexes}
+We now end this chapter by providing some more examples of chain complexes,
+which we'll use in the next chapter to finally compute topological homology groups.
+
+\begin{example}
+ [Reduced homology groups]
+ Suppose $X$ is a (nonempty) topological space.
+ One can augment the standard singular complex as follows:
+ do the same thing as before, but augment the end by adding a $\ZZ$,
+ as shown:
+ \[ \dots \to C_1(X) \to C_0(X) \taking{\eps} \ZZ \to 0 \]
+ Here $\eps$ is defined by $\eps(\sum n_ip_i) = \sum n_i$ for points $p_i \in X$.
+ (Recall that a $0$-chain is just a formal sum of points!)
+ We denote this \vocab{augmented singular chain complex} by $\wt C_\bullet(X)$.
+\end{example}
+\begin{ques}
+ What's the homology of the above chain at $\ZZ$?
+ (Hint: you need $X$ nonempty.)
+\end{ques}
+\begin{definition}
+ \label{def:augment}
+ The homology groups of the augmented chain complex are called the
+ \vocab{reduced homology groups} $\wt H_n(X)$ of the space $X$.
+
+ Obviously $\wt H_n(X) \cong H_n(X)$ for $n > 0$.
+ But when $n=0$, the map $H_0(X) \to \ZZ$ by $\eps$ has kernel $\wt H_0(X)$,
+ thus $H_0(X) \cong \wt H_0(X) \oplus \ZZ$.
+\end{definition}
+This is usually just an added convenience.
+For example, it means that if $X$ is contractible,
+then all its reduced homology groups vanish,
+and thus we won't have to keep fussing with the special $n=0$ case.
+\begin{ques}
+ Given the claim earlier about $H_n(S^m)$, what should $\wt H_n(S^m)$ be?
+\end{ques}
+
+\begin{example}
+ [Relative chain groups]
+ Suppose $X$ is a topological space, and $A \subseteq X$ a subspace.
+ We can ``mod out'' by $A$ by defining
+ \[ C_n(X,A) \defeq C_n(X) / C_n(A) \]
+ for every $n$. Thus chains contained entirely in $A$ are trivial.
+
+ Then, the usual $\partial$ on $C_n(X)$ generates a new chain complex
+ \[ \dots \taking\partial C_{n+1}(X,A) \taking\partial C_n(X,A)
+ \taking\partial C_{n-1}(X,A) \taking\partial \dots. \]
+
+ This is well-defined since $\partial$ takes $C_n(A)$ into $C_{n-1}(A)$.
+\end{example}
+\begin{definition}
+ The homology groups of the relative chain complex are the
+ \vocab{relative homology groups} and denoted $H_n(X,A)$.
+\end{definition}
+
+One na\"{\i}ve guess is that this might equal $H_n(X) / H_n(A)$.
+This is not true and in general doesn't even make sense;
+if we take $X$ to be $\RR^2$ and $A = S^1$ a circle inside it,
+we have $H_1(X) = H_1(\RR^2) = 0$ and $H_1(S^1) = \ZZ$.
+
+Another guess is that $H_n(X,A)$ might just be $\wt H_n(X/A)$.
+This will turn out to be true for most reasonable spaces $X$ and $A$,
+and we will discuss this when we reach the excision theorem.
+
+\begin{example}
+ [Mayer-Vietoris sequence]
+ Suppose a space $X$ is covered by two open sets $U$ and $V$.
+ We can define $C_n(U+V)$ as follows:
+ it consists of chains such that each simplex is either entirely contained in $U$,
+ or entirely contained in $V$.
+
+ Of course, $\partial$ then defines another chain complex
+ \[ \dots \taking\partial C_{n+1}(U+V) \taking\partial C_n(U+V)
+ \taking\partial C_{n-1}(U+V) \taking\partial \dots. \]
+\end{example}
+So once again, we can define homology groups for this complex;
+we denote them by $H_n(U+V)$.
+Miraculously, it will turn out that $H_n(U+V) \cong H_n(X)$.
+
+\section\problemhead
+
+\begin{problem}
+ For $n \ge 1$ show that the composition
+ \[ S^{n-1} \injto D^{n} \taking{F} S^{n-1} \]
+ cannot be the identity map on $S^{n-1}$ for any $F$.
+ \begin{hint}
+ Take the $(n-1)$st homology groups.
+ \end{hint}
+ \begin{sol}
+ Applying the functor $H_{n-1}$ we get
+ that the composition $\ZZ \to 0 \to \ZZ$ is the identity
+ which is clearly not possible.
+ \end{sol}
+\end{problem}
+\begin{problem}
+ [Brouwer fixed point theorem]
+ Use the previous problem to prove that any
+ continuous function $f : D^n \to D^n$ has a fixed point.
+ \begin{hint}
+ Build $F$ as follows:
+ draw the ray from $x$ through $f(x)$
+ and intersect it with the boundary $S^{n-1}$.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/spec-examples.tex b/books/napkin/spec-examples.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ba1f1ff8b987a35eb6ef2a643957399723371a44
--- /dev/null
+++ b/books/napkin/spec-examples.tex
@@ -0,0 +1,908 @@
+\chapter{Interlude: eighteen examples of affine schemes}
+\label{ch:spec_examples}
+To cement in the previous two chapters,
+we now give an enormous list of examples.
+Each example gets its own section,
+rather than having page-long orange boxes.
+
+One common theme you will find as you wade through
+the examples is that your geometric intuition may
+be better than your algebraic one.
+For example, while studying $k[x,y] / (xy)$ you will say
+``geometrically, I expect so-and-so to look like some other thing'',
+but when you write down the algebraic statements
+you find two expressions that don't look equal to you.
+However, if you then do some calculation you will
+find that they were isomorphic after all.
+So in that sense, in this chapter you will learn to begin drawing
+pictures of algebraic statements --- which is great!
+
+As another example, all the lemmas about
+prime ideals from our study of localizations
+will begin to now take concrete forms:
+you will see many examples that
+\begin{itemize}
+ \ii $\Spec A/I$ looks like $\VV(I)$ of $\Spec A$,
+ \ii $\Spec A[1/f]$ looks like $D(f)$ of $\Spec A$,
+ \ii $\Spec A_\kp$ looks like $\OO_{\Spec A, \kp}$ of $\Spec A$.
+\end{itemize}
+
+In everything that follows, $k$ is any field.
+We will also use the following color connotations:
+\begin{itemize}
+ \ii The closed points of the scheme are drawn in blue.
+ \ii Non-closed points are drawn in red,
+ with their ``trails'' dotted red.
+ \ii Stalks are drawn in green, when they appear.
+\end{itemize}
+
+
+\section{Example: $\Spec k$, a single point}
+This one is easy: for any field $k$,
+$X = \Spec k$ has a single point,
+corresponding to the only proper ideal $(0)$.
+There is only one way to put a topology on it.
+
+As for the sheaf,
+\[ \OO_X(X) = \OO_{X,(0)} = k. \]
+So the space is remembering what field it wants to be over.
+If we are complex analysts,
+the set of functions on a single point is $\CC$;
+if we are number theorists,
+maybe the set of functions on a single point is $\QQ$.
+
+%%fakechapter One-dimensional
+\section{$\Spec \CC[x]$, a one-dimensional line}
+The scheme $X = \Spec \CC[x]$ is our beloved one-dimensional line.
+It consists of two types of points:
+\begin{itemize}
+ \ii The closed points $(x-a)$, corresponding to each
+ complex number $a \in \CC$, and
+ \ii The \emph{generic} point $(0)$.
+\end{itemize}
+As for the Zariski topology, every open set contains $(0)$,
+which captures the idea it is close to everywhere:
+no matter where you stand, you can still hear the buzzing of the fly!
+True to the irreducibility of this space,
+the open sets are huge:
+the proper \emph{closed sets} consist of finitely many closed points.
+
+Here is a picture:
+for lack of a better place to put it,
+the point $(0)$ is floating around just above the line in red.
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ pair A = (-8,0); pair B = (8,0);
+ draw(A--B, blue, Arrows);
+ dot("$(x)$", (0,0), dir(-90), blue);
+ dot("$(x-3)$", (3,0), dir(-90), blue);
+ dot("$(x+i)$", (-4,0), dir(-90), blue);
+ dot("$(0)$", (0,1), dir(90), red);
+ draw( (-6,0)..(-3,0.5)..(-1,1)--(1,1)--(3,0.5)--(6,0), dotted+red );
+ label("$\operatorname{Spec} \mathbb C[x]$", A, dir(-90), blue);
+ \end{asy}
+\end{center}
+
+The notion of ``value at $\kp$'' works as expected.
+For example, $f = x^2+5$ is a global section of $\CC[x]$.
+If we evaluate it at $\kp = x-3$,
+we find $f(\kp) = f \pmod \kp = x^2+5 \pmod{x-3} = 14 \pmod{x-3}$.
+Indeed, \[ \kappa(\kp) \cong \CC \]
+meaning the stalks all have residue field $\CC$.
+As \[ \CC[x] / \kp \cong \CC \qquad \text{by $x \mapsto 3$} \]
+we see we are just plugging in $x=3$.
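The statement ``evaluation at $(x-3)$ is plugging in $x=3$'' is just the polynomial remainder theorem, which is easy to sanity-check in a few lines of Python (the function name and coefficient convention are my own):

```python
def value_at(coeffs, a):
    """Remainder of f modulo (x - a), i.e. f(a), computed by Horner's rule.

    coeffs lists the coefficients of f from highest degree down.
    """
    r = 0
    for c in coeffs:
        r = r * a + c
    return r

assert value_at([1, 0, 5], 3) == 14    # x^2 + 5 at the point (x - 3)
assert value_at([1, 0, 5], -4) == 21   # and at (x + 4), say
```

Since Python complex numbers work too, `value_at([1, 0, 5], 1j)` would likewise evaluate $x^2+5$ at the point $(x - i)$.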
+
+Of course, the stalk at $(x-3)$ carries more information.
+In this case it is $\CC[x]_{(x-3)}$.
+Which means that if we stand near the point $(x-3)$,
+rational functions are all fine as long as no $x-3$
+appears in the denominator.
+So, $\frac{x^2+8}{(x-1)(x-5)}$ is a fine example of a germ near $x=3$.
+
+Things get more interesting if we
+consider the generic point $\eta = (0)$.
+
+What is the stalk $\OO_{X, \eta}$?
+Well, it should be $\CC[x]_{(0)} = \CC(x)$,
+which is again the set of \emph{rational} functions.
+And that's what you expect.
+For example, $\frac{x^2+8}{(x-1)(x-5)}$
+certainly describes a rational function on ``most'' complex numbers.
+
+What happens if we evaluate the global section
+$f = x^2+5$ at $\eta$?
+Well, we just get $f(\eta) = x^2+5$ ---
+taking modulo $0$ doesn't do much.
+Fittingly, this means that if you want to be able to evaluate
+a polynomial $f$ at a general complex number,
+you actually just need the whole polynomial
+(or rational function).
+We can think of this in terms of the residue field being $\CC(x)$:
+\[ \kappa( (0) ) = \Frac \left( \CC[x] / (0) \right)
+ \cong \Frac \CC[x] = \CC(x). \]
+
+\section{$\Spec \RR[x]$, a one-dimensional line
+with complex conjugates glued (no fear nullstellensatz)}
+
+Despite appearances, this actually
+looks almost exactly like $\Spec \CC[x]$,
+even more than you expect.
+The main thing to keep in mind is that now $(x^2+1)$
+is a point, which you can loosely think of as $\pm i$.
+So it almost didn't matter that $\RR$ is not algebraically closed;
+the $\CC$ is showing through anyways.
+But this time, because we only consider
+real coefficient polynomials,
+we do not distinguish between ``conjugate'' $+i$ and $-i$.
+Put another way, we have folded $a+bi$ and $a-bi$ into a single point:
+$(x+i)$ and $(x-i)$ merge to form $(x^2+1)$.
+
+To be explicit, there are three types of points:
+\begin{itemize}
+ \ii $(x-a)$ for each real number $a$
+ \ii $(x^2-ax+b)$ if $a^2 < 4b$, and
+ \ii the generic point $(0)$, again.
+\end{itemize}
+The ideals $(x-a)$ and $(x^2-ax+b)$
+are each closed points:
+the quotients of $\RR[x]$ by them are both fields
+($\RR$ and $\CC$, respectively).
+
+We have been drawing $\Spec \CC[x]$ as a one-dimensional line,
+so $\Spec \RR[x]$ will be drawn the same way.
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ pair A = (-8,0); pair B = (8,0);
+ draw(A--B, blue, Arrows);
+ dot("$(x)$", (0,0), dir(-90), blue);
+ dot("$(x-3)$", (3,0), dir(-90), blue);
+ dot("$(x^2+1)$", (-4,0), dir(-90), blue);
+ dot("$(0)$", (0,1), dir(90), red);
+ draw( (-6,0)..(-3,0.5)..(-1,1)--(1,1)--(3,0.5)--(6,0), dotted+red );
+ label("$\operatorname{Spec} \mathbb R[x]$", A, dir(-90), blue);
+ \end{asy}
+\end{center}
+
+One nice thing about this is that the nullstellensatz
+is less scary than it was with classical varieties.
+The short version is that the function $x^2+1$
+vanishes at a point of $\Spec \RR[x]$, namely $(x^2+1)$ itself!
+(So in some ways we're sort of automatically working
+with the algebraic closure.)
+
+You might remember a long time ago we made a big fuss about
+the weak nullstellensatz, for example in
+\Cref{prob:complex_variety_nonempty}:
+if $I$ was a proper ideal in $\CC[x_1, \dots, x_n]$
+there was \emph{some} point $(a_1, \dots, a_n) \in \CC^n$
+such that $f(a_1, \dots, a_n) = 0$ for all $f \in I$.
+With schemes, it doesn't matter anymore:
+if $I$ is a proper ideal of a ring $A$,
+then some maximal ideal contains it,
+and so $\VV(I)$ is nonempty in $\Spec A$.
+
+We had better mention that the residue fields this time look different from before.
+Here are some examples:
+\begin{align*}
+ \kappa\left( (x^2+1) \right) &\cong \RR[x]/(x^2+1) \cong \CC \\
+ \kappa \left( (x-3) \right) &\cong \RR[x] / (x-3) \cong \RR \\
+ \kappa \left( (0) \right) &\cong \Frac(\RR[x] / (0)) \cong \RR(x)
+\end{align*}
+Notice the residue fields above the ``complex''
+points are bigger: functions on them take values in $\CC$.
+
+
+\section{$\Spec k[x]$, over any ground field}
+In general, if $\ol k$ is the algebraic closure of $k$,
+then $\Spec k[x]$ looks like $\Spec \ol k[x]$
+with all the Galois conjugates glued together.
+So we will almost never need ``algebraically closed''
+hypotheses anymore: we're working with polynomial ideals,
+so all the elements are implicitly there, anyways.
+
+\section{$\Spec \ZZ$, a one-dimensional scheme}
+The great thing about $\Spec \ZZ$ is that
+it basically looks like $\Spec k[x]$, too,
+being a one-dimensional scheme.
+It has two types of prime ideals:
+\begin{itemize}
+ \ii $(p)$, for every rational prime $p$,
+ \ii and the generic point $(0)$.
+\end{itemize}
+So the picture almost does not change.
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ pair A = (-8,0); pair B = (8,0);
+ draw(A--B, blue, Arrows);
+ dot("$(7)$", (0,0), dir(-90), blue);
+ dot("$(3)$", (3,0), dir(-90), blue);
+ dot("$(19)$", (-4,0), dir(-90), blue);
+ dot("$(0)$", (0,1), dir(90), red);
+ draw( (-6,0)..(-3,0.5)..(-1,1)--(1,1)--(3,0.5)--(6,0), dotted+red );
+ label("$\operatorname{Spec} \mathbb Z$", A, dir(-90), blue);
+ \end{asy}
+\end{center}
+This time $\eta = (0)$ has stalk $\ZZ_{(0)} = \QQ$,
+so a ``rational function'' is literally a rational number!
+Thus, $\frac{20}{19}$ is a function
+with a double root at $(2)$, a root at $(5)$,
+and a simple pole at $(19)$.
+If we evaluate it at $\kp = (7)$, we get
+$20 \cdot 19^{-1} \equiv 6 \cdot 5^{-1} \equiv 6 \cdot 3 \equiv 4 \pmod 7$.
+In general, the residue fields are what you'd guess:
+\[ \kappa\left( (p) \right) = \ZZ/(p) \cong \FF_p \]
+for each prime $p$, and
+$\kappa\left( (0) \right) \cong \QQ$.
+
+The stalk is bigger than the residue field at the closed points:
+for example
+\[ \OO_{\Spec \ZZ, (3)}
+ \cong \left\{ \frac mn \mid 3 \nmid n \right\} \]
+consists of rational numbers with no pole at $3$.
+The stalk at the generic point is $\ZZ_{(0)} \cong \Frac \ZZ = \QQ$.
+
+%%fakechapter 0D from 1D
+\section{$\Spec k[x] / (x^2-x)$, two points}
+If we were working with affine varieties,
+you would already know what the answer is:
+$x^2-x = 0$ has solutions $x=0$ and $x=1$,
+so this should be a scheme with two points.
+
+To see this come true, we use \Cref{prop:prime_quotient}:
+the points of $\Spec k[x]/(x^2-x)$
+should correspond to prime ideals of $k[x]$ containing $(x^2-x)$.
+As $k[x]$ is a PID, there are only two, $(x-1)$ and $(x)$.
+They are each maximal,
+since the quotient of $k[x]$ by each is a field (namely $k$),
+so as promised $\Spec k[x] / (x^2-x)$ has just two closed points.
+
+Each point has a stalk above it isomorphic to $k$.
+A section on the whole space $X$ is just a choice
+of two values, one at $(x)$ and one at $(x-1)$.
+\begin{center}
+\begin{asy}
+ size(3cm);
+ draw("$k$", (-2,0)--(-2,5), dir(180), heavygreen);
+ draw("$k$", (2,0)--(2,5), heavygreen);
+ dot("$(x)$", (-2,0), dir(-90), blue);
+ dot("$(x-1)$", (2,0), dir(-90), blue);
+ label("$\operatorname{Spec} k[x]/(x^2-x)$", (0,5), 2*dir(90));
+\end{asy}
+\end{center}
+So actually, this is a geometric way of thinking about the
+ring-theoretic fact that
+\[ k[x] / \left( x^2-x \right) \cong k \times k
+ \quad\text{by } f \mapsto \left( f(0), f(1) \right). \]
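+Explicitly, the inverse map can be described using the idempotents
+$1-x$ and $x$:
+\[ (a, b) \mapsto a(1-x) + bx \pmod{x^2-x}, \]
+which works since $x^2 \equiv x$ makes these orthogonal idempotents:
+$x(1-x) \equiv 0$, while $x \cdot x \equiv x$ and $(1-x)^2 \equiv 1-x$.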
+
+Also, this is the first example of a reducible space in this chapter:
+in fact $X$ is even disconnected.
+Accordingly there is no generic point floating around:
+as the space is discrete, all points are closed.
+
+\section{$\Spec k[x]/(x^2)$, the double point}
+We can now elaborate on the ``double point'' scheme
+\[ X_2 = \Spec k[x] / (x^2) \]
+since it is such an important motivating example.
+How does it differ from the ``one-point'' scheme
+$X_1 = \Spec k[x] / (x) = \Spec k$?
+Both $X_2$ and $X_1$ have exactly one point,
+and so obviously the topologies are the same too.
+
+The difference is that the stalk
+(equivalently, the section, since we have only one point)
+is larger:
+\[ \OO_{X_2, (x)} = \OO_{X_2}(X_2) = k[x]/(x^2). \]
+So to specify a function on a double point,
+you need to specify two parameters, not just one:
+if we take a polynomial
+\[ f = a_0 + a_1 x + a_2 x^2 + \dots \in k[x] \]
+then evaluating it at the double point means
+taking its image $a_0 + a_1 x$ in $k[x]/(x^2)$,
+which remembers both $a_0$ and the ``first derivative'' $a_1$.
+
+I should mention that if you drop all the way to the residue fields,
+you can't tell the difference between
+the double point and the single point anymore.
+For the residue field of $\Spec k[x] / (x^2)$ at $(x)$ is
+\[ \Frac\left( (k[x]/(x^2)) \big/ (x) \right) = \Frac k = k. \]
+Thus the set of \emph{values} is still just $k$
+(leading to the ``nilpotent'' discussion at the end of last chapter);
+but the stalk, having ``enriched'' values,
+can tell the difference.
+
+\section{$\Spec k[x]/(x^3-x^2)$, a double point and a single point}
+There is no problem putting the previous two examples side by side:
+the scheme $X = \Spec k[x] / (x^3-x^2)$
+consists of a double point next to a single point.
+Note that the stalks are different:
+the one above the double point is larger.
+\begin{center}
+\begin{asy}
+ size(5cm);
+ draw("$k[x] / (x^2)$", (-2,0)--(-2,7), dir(180), heavygreen);
+ draw("$k$", (2,0)--(2,5), heavygreen);
+ dot("$(x)$", (-2,0), dir(-90), blue+5);
+ dot("$(x-1)$", (2,0), dir(-90), blue);
+ label("$\operatorname{Spec} k[x]/(x^3-x^2)$", (0,7), 2*dir(90));
+\end{asy}
+\end{center}
+This time, we implicitly have the ring isomorphism
+\[ k[x] / (x^3-x^2) \cong k[x] / (x^2) \times k \]
+by $f \mapsto \left( f(0) + f'(0) x, f(1) \right)$.
+The derivative is meant formally here!
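+This is again the Chinese remainder theorem at work:
+$x^3 - x^2 = x^2 (x-1)$, and the ideals $(x^2)$ and $(x-1)$
+are coprime in $k[x]$, so
+\[ k[x] / (x^3-x^2) \cong k[x]/(x^2) \times k[x]/(x-1)
+ \cong k[x]/(x^2) \times k. \]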
+
+\section{$\Spec \Zc{60}$, a scheme with three points}
+We've been seeing geometric examples of ring products coming up,
+but actually the Chinese remainder theorem you are used to
+with integers is no different.
+(This example $X = \Spec \Zc{60}$
+is taken from \cite[\S4.4.11]{ref:vakil}.)
+
+By \Cref{prop:prime_quotient},
+the prime ideals of $\Zc{60}$ are $(2)$, $(3)$, $(5)$.
+But you can think of this also as coming out of $\Spec \ZZ$:
+as $60$ was a function with a double root at $(2)$,
+and single roots at $(3)$ and $(5)$.
+\begin{center}
+\begin{asy}
+ size(6cm);
+ draw("$\mathbb Z / 4 \mathbb Z$", (-2,0)--(-2,7), dir(180), heavygreen);
+ draw("$\mathbb Z / 3 \mathbb Z$", (0,0)--(0,5), heavygreen);
+ draw("$\mathbb Z / 5 \mathbb Z$", (2,0)--(2,6), heavygreen);
+ dot("$(2)$", (-2,0), dir(-90), blue+5);
+ dot("$(3)$", ( 0,0), dir(-90), blue);
+ dot("$(5)$", ( 2,0), dir(-90), blue);
+ label("$\operatorname{Spec} \mathbb Z / 60 \mathbb Z$", (0,7), 2*dir(90));
+\end{asy}
+\end{center}
+Actually, although I have been claiming the ring isomorphisms,
+the sheaves genuinely give us a full proof.
+Let me phrase it in terms of global sections:
+\begin{align*}
+ \Zc{60} &= \OO_X(X) \\
+ &= \OO_X( \{(2)\} ) \times \OO_X( \{(3)\} ) \times \OO_X( \{(5)\} ) \\
+ &= \OO_{X,(2)} \times \OO_{X,(3)} \times \OO_{X,(5)} \\
+ &= \Zc4 \times \Zc3 \times \Zc5.
+\end{align*}
+So the theorem that $\OO_X(X) = A$ for $X = \Spec A$
+is doing the ``work'' here;
+the sheaf axioms then give us the Chinese remainder theorem.
+
+On that note, this gives us a way of thinking about
+the earlier example that
+\[ (\Zc{60})[1/5] \cong \Zc{12}. \]
+Indeed, $\Spec \Zc{60}[1/5]$ is supposed to look
+like the distinguished open set $D(5)$:
+which means we delete the point $(5)$ from the picture above.
+That leaves us with $\Spec \Zc{12}$.
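+Algebraically, the deletion is the Chinese remainder theorem again:
+inverting $5$ does nothing to the factors in which $5$ is already a unit,
+while it collapses $\Zc 5$ (where $5 \equiv 0$) to the zero ring:
+\[ (\Zc{60})[1/5] \cong \left( \Zc4 \times \Zc3 \times \Zc5 \right)[1/5]
+ \cong \Zc4 \times \Zc3 \times 0 \cong \Zc{12}. \]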
+
+%%fakechapter Two-dimensional
+\section{$\Spec k[x,y]$, the two-dimensional plane}
+We have seen this scheme already: it is visualized as a plane.
+There are three types of points:
+\begin{itemize}
+ \ii The closed points $(x-a, y-b)$,
+ which are the individual points of the plane.
+ \ii A non-closed point $(f(x,y))$ for any irreducible
+ polynomial $f$, which floats along some irreducible curve.
+ We illustrate this by drawing the dotted curve along
+ which the point is floating.
+ \ii The generic point $(0)$, floating along the entire plane.
+ I don't know a good place to put it in the picture,
+ so I'll just put it somewhere and draw a dotted circle around it.
+\end{itemize}
+
+Here is an illustration of all three types of points.
+\begin{center}
+\begin{asy}
+ real f(real x) { return x*x; }
+ graph.xaxis("$x$");
+ graph.yaxis("$y$");
+ draw(graph(f,-2,2,operator ..), red+dotted, Arrows(TeXHead));
+ dot("$(y-x^2)$", (1.3, f(1.3)), dir(-45), red);
+ dot("$(x-1,y+2)$", (1,-2), dir(-45), blue);
+ pair O = (-3,3);
+ dot("$(0)$", O, dir(225), red);
+ filldraw(CR(O, 0.8), opacity(0.2)+lightred, dotted+red);
+\end{asy}
+\end{center}
+
+We also go ahead and compute the stalks above each point.
+\begin{itemize}
+ \ii The stalk above $(x-1, y+2)$ is
+ the set of rational functions $\frac{f(x,y)}{g(x,y)}$
+ such that $g(1,-2) \ne 0$.
+ \ii The stalk above the non-closed point $(y-x^2)$ is
+ the set of rational functions $\frac{f(x,y)}{g(x,y)}$
+ such that $g(t, t^2) \ne 0$ as a polynomial in $t$.
+ For example the function $\frac{xy}{x+y-2}$ is still fine:
+ despite the fact that its denominator vanishes
+ at the points $(1,1)$ and $(-2,4)$ on the parabola,
+ it is a function at a ``generic point'' (crudely, ``most points'') of the parabola.
+ \ii The stalk above $(0)$ is the entire fraction field
+ $k(x,y)$ of rational functions.
+\end{itemize}
+
+Let's consider the global section $f = x^2 + y^2$
+and also take the value at each of the above points.
+\begin{itemize}
+ \ii $f \pmod{x-1,y+2} = 5$, so $f$ has value $5$ at $(x-1, y+2)$.
+ \ii The new bit is that we can think of evaluating
+ $f$ along the parabola too --- it is given a particular value
+ in the quotient $k[x,y] / (y-x^2)$.
+ We can think of it as $f = x^2+y^2 \equiv x^2+x^4 \pmod{y-x^2}$ for example.
+ Note that if we know the value of $f$ at the generic point of the parabola,
+ we can therefore also evaluate it at any closed point on the parabola.
+ \ii At the generic point $(0)$, $f \pmod{0} = f$.
+ So ``evaluating at the generic point'' does nothing, as in any other scheme.
+\end{itemize}
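+As a consistency check, take the closed point $(x-2, y-4)$ lying on the parabola:
+the generic value $x^2 + x^4$ evaluates there to $2^2 + 2^4 = 20$,
+which matches evaluating $f$ at the point directly,
+\[ f(2,4) = 2^2 + 4^2 = 20. \]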
+
+
+\section{$\Spec \ZZ[x]$, a two-dimensional scheme, and Mumford's picture}
+We saw $\Spec \ZZ$ looked a lot like $\Spec k[x]$,
+and we will now see that $\Spec \ZZ[x]$ looks a lot like $\Spec k[x,y]$.
+
+There is a famous picture of this scheme in Mumford's ``red book'',
+which I will produce here for culture-preservation reasons,
+even though there are some discrepancies between
+the pictures that we previously drew.
+\begin{center}
+ \includegraphics[width=0.9\textwidth]{media/mumforddrawing.jpg}
+\end{center}
+Mumford uses $[\kp]$ to denote the point $\kp$,
+which we don't, so you can ignore the square brackets that appear everywhere.
+The non-closed points are illustrated as balls of fuzz.
+
+As before, there are three types of prime ideals,
+but they will look somewhat different:
+\begin{itemize}
+ \ii The closed points are now pairs $(p, f(x))$
+ where $p$ is a prime and $f$ is an irreducible polynomial modulo $p$.
+ Indeed, these are the maximal ideals:
+ the quotient $\ZZ[x] / (p,f)$ becomes some finite extension of $\FF_p$.
+ \ii There are now two different ``one-dimensional'' non-closed points:
+ \begin{itemize}
+ \ii Each rational prime gives a point $(p)$ and
+ \ii Each irreducible polynomial $f$ gives a point $(f)$.
+ \end{itemize}
+ Indeed, note that the quotients of $\ZZ[x]$ by each are integral domains.
+ \ii $\ZZ[x]$ is an integral domain,
+ so as always $(0)$ is our generic point for the entire space.
+\end{itemize}
+There is one bit that I would do differently:
+in $\VV(3)$ and $\VV(7)$, there ought to be points such as $(3,x^2+1)$,
+and such a point is not drawn as a closed point in the picture,
+but rather as a dashed oval.
+This is not right in the topological sense:
+since $\km = (3, x^2+1)$ is a maximal ideal,
+it really is one closed point in the scheme.
+But the reason it might be thought of as ``doubled''
+is that $\ZZ[x] / (3,x^2+1)$,
+the residue field at $\km$,
+is a two-dimensional $\FF_3$ vector space.
+
+%%fakechapter 1D cut out from 2D
+\section{$\Spec k[x,y]/(y-x^2)$, the parabola}
+\label{sec:parabola}
+By \Cref{prop:prime_quotient},
+the prime ideals of $k[x,y] / (y-x^2)$
+correspond to the prime ideals of $k[x,y]$ which are supersets of $(y-x^2)$,
+or equivalently the points of $\Spec k[x,y]$ contained
+inside the closed set $\VV(y-x^2)$.
+Moreover, the subspace topology on $\VV(y-x^2)$
+coincides with the topology on $\Spec k[x,y] / (y-x^2)$.
+
+\begin{center}
+\begin{asy}
+ real f(real x) { return x*x; }
+ draw(graph(f,-2,2,operator ..), blue, Arrows(TeXHead));
+ label("$\operatorname{Spec} k[x,y]/(y-x^2)$", (0, f(0)), dir(-90), blue);
+\end{asy}
+\end{center}
+
+This holds much more generally:
+\begin{exercise}
+ [Boring check]
+ Show that if $I$ is an ideal of a ring $A$,
+ then $\Spec A/I$ is homeomorphic as a topological space
+ to the closed subset $\VV(I)$ of $\Spec A$.
+\end{exercise}
+So this is the notion of ``closed embedding'':
+the parabola, which was a closed subset of $\Spec k[x,y]$,
+is itself a scheme.
+It will be possible to say more about this,
+once we actually define the notion of a morphism.
+
+The sheaf on this scheme only remembers the functions
+on the parabola, though: the stalks are not ``inherited'', so to speak.
+To see this, let's compute the stalk at the origin:
+\Cref{thm:localization_commute_quotient} tells us it is
+\[ k[x,y]_{(x,y)} / (y-x^2)
+ \cong k[x,x^2]_{(x,x^2)}
+ \cong k[x]_{(x)} \]
+which is the same as the stalk
+of the affine line $\Spec k[x]$ at the origin.
+Intuitively, this is not surprising:
+if one looks at any point of the parabola near the origin,
+it looks essentially like a line,
+as do the functions on it.
+
+The stalk above the generic point is $\Frac(k[x,y] / (y-x^2))$:
+so rational functions, with the identification that $y = x^2$.
+Also unsurprising.
+
+Finally, we expect that the parabola is actually isomorphic to $\Spec k[x]$,
+since there is an isomorphism $k[x,y] / (y-x^2) \cong k[x]$
+by sending $y \mapsto x^2$.
+Pictorially, this looks like ``un-bending'' the parabola.
+In general, we would hope that when two rings $A$ and $B$ are isomorphic,
+then $\Spec A$ and $\Spec B$ should be ``the same''
+(otherwise we would be horrified),
+and we'll see later this is indeed the case.
+
+\section{$\Spec \ZZ[i]$, the Gaussian integers (one-dimensional)}
+You can play on this idea some more in the integer case.
+Note that \[ \ZZ[i] \cong \ZZ[x] / (x^2+1) \]
+which means this is a ``dimension-one'' closed set within $\Spec \ZZ[x]$.
+In this way, we get a scheme whose elements are \emph{Gaussian primes}.
+
+You can tell which closed points are ``bigger'' than others
+by looking at the residue fields.
+For example the residue field of the point $(2+i)$ is
+\[ \kappa\left( (2+i) \right) = \ZZ[i] / (2+i) \cong \FF_5 \]
+but the residue field of the point $(3)$ is
+\[ \kappa\left( (3) \right) \cong \ZZ[i] / (3) \cong \FF_9 \]
+which is a degree two $\FF_3$-extension.
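+Both computations can be made concrete:
+modulo $3$ the polynomial $x^2+1$ remains irreducible
+(as $-1$ is not a square mod $3$), while $5 = (2+i)(2-i)$ splits. Hence
+\begin{align*}
+ \ZZ[i] / (3) &\cong \FF_3[x] / (x^2+1) \cong \FF_9, \\
+ \ZZ[i] / (2+i) &\cong \FF_5 \quad\text{via } i \mapsto 3.
+\end{align*}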
+
+\section{Long example: $\Spec k[x,y]/(xy)$, two axes}
+This is going to be our first example of a non-irreducible scheme.
+
+\subsection{Picture}
+Like before, topologically it looks like the closed set $\VV(xy)$
+of $\Spec k[x,y]$.
+Here is a picture:
+\begin{center}
+\begin{asy}
+ draw( (-4,0)--(4,0), blue, Arrows);
+ draw( (0,-4)--(0,4), blue, Arrows);
+ dot("$(x)$", (0.3,3), dir(-45), lightred);
+ dot("$(y)$", (3, 0.3), dir(135), lightred);
+ draw( (0.3,3.6)--(0.3,2.4), lightred+dotted );
+ draw( (2.4,0.3)--(3.6,0.3), lightred+dotted );
+ dot("$(x+2)$", (-2,0), dir(-90), blue);
+ dot("$(y+3)$", (0,-3), dir(0), blue);
+\end{asy}
+\end{center}
+To make sure things are making sense:
+\begin{ques}
+ [Sanity check]
+ Verify that $(y+3)$ is really a maximal ideal of the ring $k[x,y] / (xy)$
+ lying in $\VV(x)$.
+\end{ques}
+
+The ideal $(0)$ is no longer prime,
+so it is not a point of this space.
+Rather, there are two non-closed points this time:
+the ideals $(x)$ and $(y)$,
+which can be visualized as floating around each of the two axes.
+This space is reducible,
+since it can be written as the union
+of two proper closed sets, $\VV(x) \cup \VV(y)$.
+(It is still \emph{connected}, as a topological space.)
+
+\subsection{Throwing out the $y$-axis}
+Consider the distinguished open set $U = D(x)$.
+This corresponds to deleting $\VV(x)$, the $y$-axis.
+Therefore we expect that $D(x)$
+``is'' just $\Spec k[x]$ with the origin deleted,
+and in particular that we should get $k[x, x\inv]$
+for the sections.
+Indeed,
+\begin{align*}
+ \OO_{\Spec k[x,y]/(xy)} (D(x))
+ &\cong (k[x,y]/(xy))[1/x] \\
+ &\cong k[x, x\inv, y] / (xy) \cong k[x, x\inv, y] / (y) \cong k[x, x\inv].
+\end{align*}
+where $(xy) = (y)$ follows from $x$ being a unit.
+Everything as planned.
+
+\subsection{Stalks above some points}
+Let's compute the stalk above the point $\km = (x+2)$,
+which we think of as the point $(-2,0)$ on the $x$-axis.
+(If it makes you more comfortable, note that $\km \ni y(x+2) = 2y$
+and hence $y \in \km$, so we could also write $\km = (x+2,y)$.)
+The stalk is
+\[ \OO_{\Spec k[x,y] / (xy), \km} = (k[x,y]/ (xy))_{(x+2)}. \]
+But I claim that $y$ becomes the zero element with this localization.
+Indeed, we have $\frac y1 = \frac 0x = 0$.
+Hence the entire thing collapses to just
+\[ \OO_{\Spec k[x,y] / (xy), \km} \cong k[x]_{(x+2)} \]
+which anyways is the stalk of $(x+2)$ in $\Spec k[x]$.
+That's expected.
+If we have a space with two lines but we're
+standing away from the origin,
+then the stalk is not going to pick up the weird behavior
+at that far-away point;
+it only cares about what happens near $\km$,
+and so it looks just like an affine line there.
+
+\begin{remark}
+ Note that $(k[x,y]/ (xy))_{(x+2)}$
+ is \emph{not} the same as $k[x,y]_{(x+2)} / (xy)$;
+ the order matters here.
+ In fact, the latter is the zero ring,
+ since both $x$ and $y$, and hence $xy$, are units.
+\end{remark}
+
+The generic point $(y)$ (which floats around the $x$-axis)
+will tell a similar tale:
+if we look at the stalk above it,
+we ought to find that it doesn't recognize the presence of the $y$-axis,
+because ``nearly all'' points don't recognize it either.
+To actually compute the stalk:
+\[ \OO_{\Spec k[x,y] / (xy), (y)} = (k[x,y]/ (xy))_{(y)}. \]
+Again $\frac y1 = \frac 0x = 0$, so this is just
+\[ \OO_{\Spec k[x,y] / (xy), (y)} \cong k[x]_{(0)} \cong k(x). \]
+which is what we expected
+(it is the same as the stalk above $(0)$ in $\Spec k[x]$).
+
+\subsection{Stalk above the origin (tricky)}
+The stalk above the origin $(x,y)$ is interesting,
+and has some ideas in it we won't be able to explore fully
+without talking about localizations of modules.
+The localization is given by
+\[ (k[x,y] / (xy))_{(x,y)} \]
+and hence the elements should be
+\[ \frac{c + (a_1 x + a_2 x^2 + \dots)
+ + (b_1 y + b_2 y^2 + \dots)}
+ {c' + (a_1' x + a_2' x^2 + \dots)
+ + (b_1' y + b_2' y^2 + \dots)}
+\]
+where $c' \ne 0$.
+
+You might feel unsatisfied with this characterization.
+Here is some geometric intuition.
+Any element of the global section ring can be written as
+\[ c + (a_1 x + a_2 x^2 + \dots)
+ + (b_1 y + b_2 y^2 + \dots) \]
+meaning any global section is the sum of a constant,
+an $x$-polynomial, and a $y$-polynomial.
+This is \emph{not} just the ring product $k[x] \times k[y]$, though;
+the constant term is shared.
+So it's better thought of as pairs of polynomials in $k[x]$
+and $k[y]$ which agree on the constant term.
+
+If you like category theory, it is thus a fibered product
+\[ k[x,y] / (xy) \cong k[x] \times_k k[y] \]
+with morphism $k[x] \to k$ and $k[y] \to k$ by sending $x$ and $y$ to zero.
+In that way, we can mostly decompose $k[x,y] / (xy)$
+into its two components.
+
+We really ought to be able to do the same for the stalk:
+we wish to say that
+\[ \OO_{\Spec k[x,y] / (xy), (x,y)}
+ \cong k[x]_{(x)} \times_k k[y]_{(y)}. \]
+English translation: a ``typical'' germ
+ought to look like $\frac{3+x}{x^2+7} + \frac{4+y^3}{y^2+y+7}$,
+with the $x$ and $y$ parts decoupled.
+Equivalently, the stalk should consist of
+pairs of $x$-germs and $y$-germs that agree at the origin.
+
+In fact, this is true!
+This might come as a surprise, but let's see why we expect this.
+Suppose we take the germ
+\[ \frac{1}{1-(x+y)}. \]
+If we hold our breath, we could imagine expanding it as
+a geometric series: $1 + (x+y) + (x+y)^2 + \dots$.
+As $xy =0 $, this just becomes $1+x+x^2+x^3 + \dots + y+y^2+y^3+\dots$.
+This is nonsense (as written), but nonetheless it suggests the conjecture
+\[ \frac{1}{1-(x+y)} = \frac{1}{1-x} + \frac{1}{1-y} - 1 \]
+which you can actually verify is true.
+\begin{ques}
+ Check this identity holds.
+\end{ques}
+
+Of course, this is a lot of computation just for one simple example.
+Is there a way to make it general?
+Yes: the key claim is that ``localization commutes with \emph{limits}''.
+You can try to work out the statement now if you want, but we won't do so.
+% \todo{but if we ever do sheaves of modules, then we will}
+
+%%fakechapter 1D with holes
+\section{$\Spec k[x,x\inv]$, the punctured line (or hyperbola)}
+This is supposed to look like $D(x)$ of $\Spec k[x]$,
+or the line with the origin deleted.
+Alternatively, we could also write
+\[ k[x,x\inv] \cong k[x,y] / (xy-1) \]
+so that the scheme could also be drawn as a hyperbola.
+
+First, here's the 1D illustration.
+\begin{center}
+ \begin{asy}
+ size(8cm);
+ pair A = (-8,0); pair B = (8,0);
+ draw(A--B, blue, Arrows);
+ dot("$(x-3)$", (3,0), dir(-90), blue);
+ dot("$(x+2)$", (-2,0), dir(-90), blue);
+ dot("$(0)$", (0,1), dir(90), red);
+ opendot(origin, blue+1.5);
+ draw( (-6,0)..(-3,0.5)..(-1,1)--(1,1)--(3,0.5)--(6,0), dotted+red );
+ label("$\operatorname{Spec} k[x,x^{-1}]$", A, dir(-90), blue);
+ \end{asy}
+\end{center}
+We actually saw this scheme already when we took $\Spec k[x,y] / (xy)$
+and looked at $D(x)$, too.
+Anyways, let us compute the stalk at $(x-3)$ now; it is
+\[ \OO_{\Spec k[x,x\inv], (x-3)}
+ \cong k[x,x\inv]_{(x-3)}
+ \cong k[x]_{(x-3)} \]
+since $x\inv$ is in $k[x]_{(x-3)}$ anyways.
+So again, we see that the deletion of the origin
+doesn't affect the stalk at the farther away point $(x-3)$.
+
+As mentioned, since $k[x,x\inv]$ is isomorphic to $k[x,y] / (xy-1)$,
+another way to visualize the same curve
+would be to draw the hyperbola
+(which you can imagine flattening to give the punctured line).
+There is one generic point $(0)$ since $k[x,y]/(xy-1)$
+really is an integral domain,
+as well as points like $(x+2, y+1/2) = (x+2) = (y+1/2)$.
+\begin{center}
+ \begin{asy}
+ size(7cm);
+ import graph;
+ graph.xaxis("$x$", -4, 4);
+ graph.yaxis("$y$", -4, 4);
+
+ real f (real x) { return 1/x; }
+ draw(graph(f,-4,-0.25,operator ..), blue);
+ draw(graph(f,0.25,4,operator ..), blue);
+ label("$\operatorname{Spec} k[x,y] / (xy-1)$", (1,1), dir(45), blue);
+
+ dot("$(0)$", (0.4,2), dir(225), red);
+ filldraw(CR( (0.4,2), 0.3 ), opacity(0.1)+lightred, red+dotted);
+ dot("$(x+2)$", (-2, f(-2)), dir(10), blue);
+ \end{asy}
+\end{center}
+
+%%fakechapter 0D --- stalks above 1D
+\section{$\Spec k[x]_{(x)}$, zooming in to the origin of the line}
+We know already that $\OO_{\Spec A, \kp} \cong A_\kp$:
+so $A_\kp$ should be the stalk at $\kp$.
+In this example we will see that $\Spec A_\kp$
+should be drawn sort of as this stalk, too.
+
+We saw earlier how to draw a picture of $\Spec k[x]$.
+You can also draw a picture of the stalk above the origin $(x)$,
+which you might visualize as a blade of grass or other plant
+growing above $(x)$ if you like agriculture.
+In that case, $\Spec k[x]_{(x)}$ might look like what happens
+if you pluck out that stalk from the affine line.
+
+\begin{center}
+ \begin{asy}
+ unitsize(0.7cm);
+ pair A = (-5,0); pair B = (5,0);
+ draw(A--B, blue, Arrows);
+ draw( (-4,0)..(-2,0.5)..(-1,1)--(1,1)--(2,0.5)--(4,0), dotted+red );
+ label("$\operatorname{Spec} k[x]$", A, dir(-90), blue);
+ draw( (0,0)--(0,3), heavygreen);
+ label("$\mathcal O_{\operatorname{Spec} k[x], (x)}$", (0,3), dir(90), heavygreen);
+ dot("$(x)$", (0,0), dir(-90), blue);
+ dot("$(0)$", (0.2,1), dir(45), red);
+ add(shift(-9,0)*CC());
+ draw("pluck", (-4.5, 1.5)--(-1.5,1.5), dir(90), black, EndArrow(TeXHead));
+ // now the stalk
+ draw( (0,0)--(0,3), heavygreen);
+ label("$\operatorname{Spec} k[x]_{(x)}$", (0,3), dir(90), blue);
+ draw( (-1,1)--(1,1), dotted+red );
+ dot("$(0)$", (0.2,1), dir(45), red);
+ dot("$(x)$", (0,0), dir(-90), blue);
+ \end{asy}
+\end{center}
+
+Since $k[x]_{(x)}$ is a local ring
+(it is the localization at a prime ideal),
+this scheme has only one closed point: the maximal ideal $(x)$.
+However, surprisingly, it has one more point:
+a ``generic'' point $(0)$.
+So $\Spec k[x]_{(x)}$ is a \emph{two-point space},
+but it does not have the discrete topology:
+$(x)$ is a closed point, but $(0)$ is not.
+(This makes it a nice counter-example for exercises of various sorts.)
+
+So, topologically what's happening is that when we zoom in to $(x)$,
+the generic point $(0)$ (which was ``close to every point'') remains,
+floating above the point $(x)$.
+
+Note that the stalk above our single closed point $(x)$
+is the same as it was before:
+\[ \left( k[x]_{(x)} \right)_{(x)} \cong k[x]_{(x)}. \]
+Indeed, in general if $R$ is a local ring with maximal ideal $\km$,
+then $R_\km \cong R$,
+since every element $x \notin \km$ was invertible anyways.
+Thus in the picture, the stalk is drawn the same.
+
+Similarly, the stalk above $(0)$ is the same as it was
+before we plucked it out:
+\[ \left( k[x]_{(x)} \right)_{(0)}
+ = \Frac k[x]_{(x)} = k(x). \]
+More generally:
+\begin{exercise}
+ Let $A$ be a ring, and $\kq \subseteq \kp$ prime ideals.
+ Check that $A_\kq \cong (A_\kp)_\kq$,
+ where we view $\kq$ as a prime ideal of $A_\kp$.
+\end{exercise}
+So when we zoom in like this, all the stalks stay the same,
+even above the non-closed points.
+
+
+\section{$\Spec k[x,y]_{(x,y)}$, zooming in to the origin of the plane}
+The situation is more surprising if we pluck the stalk
+above the origin of $\Spec k[x,y]$,
+the two-dimensional plane.
+The points of $\Spec k[x,y]_{(x,y)}$ are supposed to be
+the prime ideals of $k[x,y]$ which are contained in $(x,y)$;
+geometrically these are $(x,y)$ and the generic points of curves passing through the origin.
+For example, there will be a generic point for the parabola $(y-x^2)$
+contained in $\Spec k[x,y]_{(x,y)}$,
+and another one $(y-x)$ corresponding to a straight line, etc.
+
+So we have the single closed point $(x,y)$ sitting at the bottom,
+and all sorts of ``one-dimensional'' generic points floating above it:
+lines, parabolas, you name it.
+Finally, we have $(0)$, a generic point floating in two dimensions,
+whose closure equals the entire space.
+\begin{center}
+ \begin{asy}
+ size(6cm);
+ draw( (0,0)--(0,3), heavygreen);
+ label("$\operatorname{Spec} k[x,y]_{(x,y)}$", (0,3), dir(90), blue);
+ draw( (-1,0.1)--(1,1.1), dotted+red );
+ dot("$(3x-2y)$", (0.4,0.8), dir(-45), red);
+ path p1 = (-0.5,2.5)..(0,2)..(0.5,2.5);
+ draw(p1, dotted+red);
+ dot("$(y-x^2)$", relpoint(p1,0.3), dir(200), red);
+ path p2 = (-1,1.3)..(-0.5,1.8)..(0,1.6)..(0.5,1.4)..(1,1.9);
+ draw(p2, dotted+red);
+ dot("$(2y-x^3)$", relpoint(p2,0.7), dir(-45), red);
+ dot("$(x,y)$", (0,0), dir(-90), blue);
+ dot("$(0)$", (1.5,2.5), dir(225), red);
+ filldraw(CR( (1.5,2.5), 0.5 ), opacity(0.1)+lightred, red+dotted);
+ \end{asy}
+\end{center}
+
+\section{$\Spec k[x,y]_{(0)} = \Spec k(x,y)$, the stalk above the generic point}
+The generic point of the plane just has stalk $k(x,y)$,
+so this scheme is the spectrum of a field,
+hence a single point.
+The stalk remains intact as compared to when planted in $\Spec k[x,y]$;
+the functions are exactly rational functions in $x$ and $y$.
+
+% \section{$\Spec k[x,y]_{(y-x^2)}$, the stalk above the parabola}
+% \section{$\Spec \ZZ_{(0)} = \Spec \QQ$}
+% \section{$\Spec k[x,y]_{(0)}$, the stalk above the generic point}
+
+%%fakechapter Problems
+\section{\problemhead}
+\begin{problem}
+ Draw a picture of $\Spec \ZZ[1/55]$,
+ describe the topology, and compute the stalk at each point.
+\end{problem}
+
+\begin{problem}
+ Draw a picture of $\Spec \ZZ_{(5)}$,
+ describe the topology, and compute the stalk at each point.
+\end{problem}
+
+\begin{problem}
+ Let $A = (k[x,y]/(xy))[(x+y)\inv]$.
+ Draw a picture of $\Spec A$.
+ Show that it is not connected as a topological space.
+\end{problem}
+
+\begin{problem}
+ Let $A = k[x,y]_{(y-x^2)}$.
+ Draw a picture of $\Spec A$.
+\end{problem}
diff --git a/books/napkin/spec-sheaf.tex b/books/napkin/spec-sheaf.tex
new file mode 100644
index 0000000000000000000000000000000000000000..263b2207fa185f1cf3c10cac82da7f2fb66718f5
--- /dev/null
+++ b/books/napkin/spec-sheaf.tex
@@ -0,0 +1,600 @@
+\chapter{Affine schemes: the sheaf}
+\label{ch:spec_sheaf}
+
+We now complete our definition of $X = \Spec A$ by
+defining the sheaf $\OO_X$ on it, making it into a ringed space.
+This is done quickly in the first section.
+
+However, we will then spend the next several chapters
+trying to convince the reader to \emph{forget}
+the definition we gave, in practice.
+This is because practically,
+the sections of the sheaves are best computed by not using
+the definition directly, but by using some other results.
+
+Along the way we'll develop some related theory:
+in computing the stalks we'll find out the definition of a local ring,
+and in computing the sections we'll find out about distinguished open sets.
+
+A reminder once again: \Cref{ch:spec_examples} has
+many more concrete examples.
+It's not a bad idea to look through there for more examples
+if anything in this chapter trips you up.
+
+\section{A useless definition of the structure sheaf}
+\prototype{Still $\CC[x_1, \dots, x_n] / I$.}
+
+We have now endowed $\Spec A$ with the Zariski topology,
+and so all that remains is to put a sheaf $\OO_{\Spec A}$ on it.
+To do this we want a notion of ``regular functions'' as before.
+
+This is easy to do since we have localizations on hand.
+\begin{definition}
+ First, let $\SF$ be the pre-sheaf of ``globally rational'' functions:
+ i.e.\ we define $\SF(U)$ to be the localization
+ \[
+ \SF(U) = \left\{
+ \frac fg \mid f, g \in A
+ \text{ and } g(\kp) \neq 0 \; \forall \kp \in U
+ \right\}
+ = \left(A \setminus \bigcup_{\kp \in U} \kp \right)\inv A.
+ \]
+ We now define the structure sheaf on $\Spec A$.
+ It is
+ \[ \OO_{\Spec A} = \SF\sh \]
+ i.e.\ the sheafification of the $\SF$ we just defined.
+\end{definition}
+\begin{exercise}
+ Compare this with the definition for $\OO_V$
+ with $V$ a complex variety, and check that they essentially match.
+\end{exercise}
+And thus, we have completed the transition to adulthood,
+with a complete definition of the affine scheme.
+
+If you really like compatible germs,
+you can write out the definition:
+\begin{definition}
+ Let $A$ be a ring.
+ Then $\Spec A$ is made into a ringed space by setting
+ \[ \OO_{\Spec A}(U)
+ = \left\{ (f_\kp \in A_\kp)_{\kp \in U}
+ \text{ which are locally quotients} \right\}. \]
+ That is, it consists of families $(f_\kp)_{\kp \in U}$, with
+ each $f_\kp \in A_\kp$, such that for every point $\kp$ there
+ is an open neighborhood $U_\kp$ and elements $f,g \in A$ such that
+ $f_\kq = \frac fg \in A_\kq$ for all $\kq \in U_\kp$.
+\end{definition}
+
+We will now \textbf{basically forget about this definition},
+because we will never use it in practice.
+In the next two sections, we will show you:
+\begin{itemize}
+ \ii that the stalks $\OO_{\Spec A, \kp}$ are just $A_\kp$, and
+ \ii that the sections $\OO_{\Spec A}(U)$
+ can be computed, for any open set $U$,
+ by focusing only on the special case where $U = D(f)$
+ is a distinguished open set.
+\end{itemize}
+These two results will be good enough for all of our purposes,
+so we will be able to not use this definition.
+(Hence the lack of examples in this section.)
+
+\section{The value of distinguished open sets (or: how to actually compute sections)}
+\prototype{$D(x)$ in $\Spec \CC[x]$ is the punctured line.}
+
+We will now really hammer in the importance of
+the distinguished open sets.
+The definition is analogous to before:
+\begin{definition}
+ Let $f \in A$.
+ Then $D(f)$ is the set of $\kp$ such that $f(\kp) \neq 0$,
+ a \vocab{distinguished open set}.
+\end{definition}
+Distinguished open sets will have three absolutely crucial properties,
+which build on each other.
+
+\subsection{A basis of the Zariski topology}
+The first is a topological observation:
+\begin{theorem}
+ [Distinguished open sets form a base]
+ \label{thm:distinguished_base}
+ The distinguished open sets $D(f)$
+ form a basis for the Zariski topology:
+ any open set $U$ is a union of distinguished open sets.
+\end{theorem}
+\begin{proof}
+ Let $U$ be an open set;
+ suppose it is the complement of closed set $V(I)$.
+ Then verify that \[ U = \bigcup_{f \in I} D(f). \qedhere \]
+\end{proof}
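The union identity in this proof can be sanity-checked numerically.
The sketch below is a toy finite model of our own (not from the text):
it works in $\Spec \ZZ/60$, whose points correspond to the primes
$2$, $3$, $5$ dividing $60$, with ``$f(\kp) = 0$'' meaning $\kp$ divides $f$.

```python
# Toy model of X = Spec Z/60: points correspond to the primes 2, 3, 5.
points = [2, 3, 5]

def V(gens):
    """Vanishing locus of the ideal generated by gens."""
    return {p for p in points if all(g % p == 0 for g in gens)}

def D(f):
    """Distinguished open set of f."""
    return {p for p in points if f % p != 0}

# Take U to be the open complement of V((6)):
U = set(points) - V([6])
# The theorem says U is the union of D(f) over all f in the ideal (6):
ideal_six = [6 * k for k in range(10)]   # representatives of (6) in Z/60
assert U == set().union(*(D(f) for f in ideal_six)) == {5}
```

Here the theorem predicts (and the check confirms) that the open set
$U = \Spec \ZZ/60 \setminus \VV((6))$, a single point, is covered by
the distinguished open sets of the elements of $(6)$.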
+
+\subsection{Sections are computable}
+The second critical fact is that the sections
+on distinguished open sets can be computed explicitly.
+\begin{theorem}
+ [Sections of $D(f)$ are localizations away from $f$]
+ Let $A$ be a ring and $f \in A$.
+ Then \[ \OO_{\Spec A}(D(f)) \cong A[1/f]. \]
+\end{theorem}
+\begin{proof}
+ Omitted, but similar to
+ \Cref{thm:reg_func_distinguish_open}.
+\end{proof}
+
+\begin{example}
+ [The punctured line is isomorphic to a hyperbola]
+ The ``hyperbola effect'' appears again:
+ \[ \OO_{\Spec \CC[x]} (D(x))
+ = \CC[x, x\inv]
+ \cong \CC[x,y] / (xy-1). \]
+\end{example}
+
+On a tangential note,
+we had better also note somewhere that $\Spec A = D(1)$
+is itself distinguished open, so the global sections can be recovered.
+\begin{corollary}
+ [$A$ is the ring of global sections]
+ The ring of global sections of $\Spec A$ is $A$.
+\end{corollary}
+\begin{proof}
+ By the previous theorem, $\OO_{\Spec A}(\Spec A)
+ = \OO_{\Spec A}(D(1)) = A[1/1] = A$.
+\end{proof}
+
+\subsection{They are affine}
+\label{subsec:distinguished_open_affine}
+We know $\OO_X(D(f)) = A[1/f]$.
+In fact, if you draw $\Spec A[1/f]$,
+you will find that it looks exactly like $D(f)$.
+So the third final important fact is that
+$D(f)$ will actually be \emph{isomorphic} to $\Spec A[1/f]$
+(just like the line minus the origin is isomorphic to the hyperbola).
+We can't make this precise yet,
+because we have not yet discussed morphisms of schemes,
+but it will be handy later (though not right away).
+%However, you can already see this at the level of topological spaces;
+%see \Cref{prob:homeomorphism}.
+%Since distinguished open sets form a base,
+%though, this means that open sets of affine schemes
+%are, at least locally, themselves affine schemes:
+%given any open set $U \subseteq \Spec A$, and point $\kp \in U$,
+%there is some open neighborhood $V \ni p$ contained in $U$
+%which is itself affine.
+
+\subsection{Classic example: the punctured plane}
+\label{subsec:punctured_plane}
+We now give the classical example of a computation which shows
+how you can forget about sheafification,
+if you never liked it.\footnote{This perspective is
+ so useful that some sources, like Vakil \cite[\S4.1]{ref:vakil}
+ will \emph{define} $\OO_{\Spec A}$
+ by requiring $\OO_{\Spec A}(D(f)) = A[1/f]$,
+ rather than use sheafification as we did.}
+The idea is that:
+\begin{moral}
+ We can compute any section $\OO_X(U)$ in practice
+ by using distinguished open sets and sheaf axioms.
+\end{moral}
+
+Let $X = \Spec \CC[x,y]$,
+and consider the origin, i.e.\ the point $\km = (x,y)$.
+This ideal is maximal, so it corresponds to a closed point,
+and we can consider the open set $U$
+consisting of all the points other than $\km$.
+We wish to compute $\OO_X(U)$.
+
+\begin{center}
+\begin{asy}
+ graph.xaxis("$\mathcal{V}(y)$", red);
+ graph.yaxis("$\mathcal{V}(x)$", red);
+ fill(box( (-3,-3), (3,3) ), opacity(0.2)+lightcyan);
+ opendot(origin, blue+1.5);
+ label("$\mathfrak m = (x,y)$", origin, dir(45), blue);
+\end{asy}
+\end{center}
+
+Unfortunately, $U$ is not distinguished open.
+But, we can compute it anyways by writing $U = D(x) \cup D(y)$:
+conveniently, $D(x) \cap D(y) = D(xy)$.
+By the sheaf axioms,
+we have a pullback square
+\begin{center}
+\begin{tikzcd}
+ \OO_X(U) \ar[r] \ar[d] & \OO_X(D(x)) = \CC[x,y,x\inv] \ar[d] \\
+ \OO_X(D(y)) = \CC[x,y,y\inv] \ar[r] & \OO_X(D(xy)) = \CC[x,y,x\inv, y\inv].
+\end{tikzcd}
+\end{center}
+In other words, $\OO_X(U)$ consists of pairs
+\begin{align*}
+ f &\in \CC[x,y,x\inv] \\
+ g &\in \CC[x,y,y\inv]
+\end{align*}
+which agree on the overlap:
+$f = g$ on $D(x) \cap D(y)$.
+Well, we can describe
+$f$ as a polynomial with some $x$'s in the denominator, and
+$g$ as a polynomial with some $y$'s in the denominator.
+If they are to match, the denominator must actually be constant.
+Put crudely,
+\[ \CC[x,y,x\inv] \cap \CC[x,y,y\inv] = \CC[x,y]. \]
+In conclusion,
+\[ \OO_X(U) = \CC[x,y]. \]
+That is, we get no additional functions.
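The ``matching denominators'' step can be checked mechanically.
The sketch below is our own illustration using SymPy:
it tests membership in $\CC[x,y,x\inv]$ by looking at which variables
survive in the reduced denominator.

```python
from sympy import symbols, cancel, fraction

x, y = symbols('x y')

# A rational function lies in C[x,y,1/x] iff its reduced denominator
# involves only x; in C[x,y,1/y] iff only y; in both iff the reduced
# denominator is constant.  (Rough membership test, our own sketch.)
def denominator_vars(expr):
    return fraction(cancel(expr))[1].free_symbols

f = (x**2 * y + x) / x     # cancels to x*y + 1: a genuine polynomial
g = (y + 1) / x            # needs 1/x, so it lives only on D(x)

assert denominator_vars(f) == set()   # in C[x,y]: no denominator left
assert denominator_vars(g) == {x}     # stuck with a power of x below
```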
+
+
+
+\section{The stalks of the structure sheaf}
+\prototype{The stalk of $\Spec \CC[x,y]$ at $\km = (x,y)$
+consists of rational functions defined at the origin.}
+
+Don't worry, this one is easier than last section.
+
+\subsection{They are localizations}
+\begin{theorem}
+ [Stalks of $\Spec A$ are $A_\kp$]
+ Let $A$ be a ring and let $\kp \in \Spec A$.
+ Then \[ \OO_{\Spec A, \kp} \cong A_\kp. \]
+ In particular $X$ is a locally ringed space.
+\end{theorem}
+\begin{proof}
+ Since sheafification preserves stalks,
+ it's enough to check it for $\SF$ the pre-sheaf
+ of globally rational functions in our definition.
+ The proof is basically the same as \Cref{thm:stalks_affine_var}:
+ there is an obvious map $\SF_\kp \to A_\kp$ on germs by
+ \[ \left(U, f/g \in \SF(U) \right)
+ \mapsto f/g \in A_\kp . \]
+ (Note the $f/g$ on the left lives in $\SF(U)$
+ but the one on the right lives in $A_\kp$).
+ We show injectivity and surjectivity:
+ \begin{itemize}
+ \ii Injective: suppose $(U_1, f_1 / g_1)$ and $(U_2, f_2 / g_2)$
+ are two germs with $f_1/g_1 = f_2/g_2 \in A_\kp$.
+ This means $h(f_1 g_2 - f_2 g_1) = 0$ in $A$, for some $h \notin \kp$.
+ Then both germs identify with
+ the germ $(U_1 \cap U_2 \cap D(h), f_1 / g_1)$.
+ \ii Surjective: given $\frac fg \in A_\kp$, where $g \notin \kp$,
+ it is the image of the germ $\left( D(g), f/g \right)$. \qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{example}
+ [Denominators not divisible by $x$]
+ We have seen this example so many times
+ that I will only write it in the new notation,
+ and make no further comment:
+ if $X = \Spec \CC[x]$ then
+ \[ \OO_{X, (x)} = \CC[x]_{(x)}
+ = \left\{ \frac fg \mid g(0) \ne 0 \right\}. \]
+\end{example}
+\begin{example}
+ [Denominators not vanishing at the origin]
+ Let $X = \Spec \CC[x,y]$ and let $\km = (x,y)$ be the origin.
+ Then
+ \[ \CC[x,y]_{(x,y)}
+ = \left\{ \frac{f(x,y)}{g(x,y)} \mid g(0,0) \ne 0 \right\}. \]
+\end{example}
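The membership criterion in these examples is easy to test by computer;
here is a small SymPy sketch of our own, purely for illustration:

```python
from sympy import symbols, cancel, fraction

x, y = symbols('x y')

# Membership test for the stalk C[x,y]_((x,y)): after cancelling,
# the denominator must not vanish at the origin.  (Our own sketch.)
def in_stalk_at_origin(expr):
    den = fraction(cancel(expr))[1]
    return den.subs({x: 0, y: 0}) != 0

assert in_stalk_at_origin(1 / (x*y + 4))    # denominator is 4 at origin
assert not in_stalk_at_origin(1 / x)        # denominator vanishes there
```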
+
+If you want more examples,
+take any of the ones from \Cref{sec:localize_prime_ideal},
+and try to think about what they mean geometrically.
+
+\subsection{Motivating local rings: germs should package values}
+Let's return to our well-worn example $X = \Spec \CC[x,y]$
+and consider $\km = (x,y)$ the origin.
+The stalk was
+\[ \OO_{X, \km} = \CC[x,y]_{(x,y)}
+ = \left\{ \frac{f(x,y)}{g(x,y)} \mid g(0,0) \ne 0 \right\}. \]
+So let's take some section like $f = \frac{1}{xy + 4}$,
+which is a section of $U = D(xy+4)$ (or some smaller open set,
+but we'll just use this one for simplicity).
+We also have $U \ni \km$, and so $f$ gives a germ at $\km$.
+
+On the other hand, $f$ also has a value at $\km$:
+it is $f \pmod{\km} = \frac 14$.
+And in general, the ring of possible values of a section
+at the origin $\km$ is $\CC[x,y] / \km \cong \CC$.
+
+Now, you might recall that I pressed the point of view
+that a germ might be thought of as an ``enriched value''.
+Then it makes sense that if you know the germ of a section $f$ at
+a point $\km$ --- i.e., you know the ``enriched value'' ---
+then you should be able to recover the plain value as well.
+What this means is that we ought to have some map
+\[ A_\km \to A/\km \]
+sending germs to their associated values.
+
+Indeed you can, and this leads us to\dots
+
+\section{Local rings and residue fields:
+linking germs to values}
+\prototype{The residue field of $\Spec \CC[x,y]$ at $\km = (x,y)$
+is $\CC$.}
+
+\subsection{Localizations give local rings}
+This notation is about to get really terrible, but bear with me.
+\begin{theorem}
+ [Stalks are local rings]
+ \label{thm:stalks_local_ring}
+ Let $A$ be a ring and $\kp$ any prime ideal.
+ Then the localization $A_\kp$ has exactly one maximal ideal,
+ given explicitly by
+ \[ \kp A_\kp =
+ \left\{ \frac fg \mid f \in \kp, \; g \notin \kp \right\}. \]
+\end{theorem}
+The ideal $\kp A_\kp$ thus captures the idea
+of ``germs vanishing at $\kp$''.\footnote{The notation $\kp A_\kp$ really means
+the set of $f \cdot h$ where $f \in \kp$
+(viewed as a subset of $A_\kp$ by $f \mapsto \frac f1$) and $h \in A_{\kp}$.
+I personally find this is more confusing than helpful,
+so I'm footnoting it.}
+
+Proof in a moment;
+for now let's introduce some words so we can
+give our examples in the proper language.
+\begin{definition}
+ A ring $R$ with exactly one maximal ideal $\km$
+ will be called a \vocab{local ring}.
+ The \vocab{residue field} is the quotient $R / \km$.
+\end{definition}
+\begin{ques}
+ Are fields local rings?
+\end{ques}
+
+Thus what we find is that:
+\begin{moral}
+ The stalks consist of the possible enriched values (germs);
+ the residue field is the set of (un-enriched) values.
+\end{moral}
+
+\begin{example}
+ [The stalk at the origin of {$\Spec \CC[x,y]$}]
+ Again set $A = \CC[x,y]$, $X = \Spec A$ and $\kp = (x,y)$
+ so that $\OO_{X,\kp} = A_\kp$.
+ (I switched to $\kp$ for the origin,
+ to avoid confusion with the maximal ideal $\kp A_{\kp}$
+ of the local ring $A_\kp$.)
+ As we said many times already,
+ $A_{\kp}$ consists of rational functions not vanishing at the origin,
+ such as $f = \frac{1}{xy+4}$.
+
+ What is the unique maximal ideal $\kp A_\kp$?
+ Answer: it consists of the rational functions
+ which \emph{vanish} at the origin:
+ for example, $\frac{x}{x^2+3}$, or $\frac{3x+5y}{2}$,
+ or $\frac{-xy}{4(xy+4)}$.
+ If we allow ourselves to mod out by such functions,
+ we get the residue field $\CC$,
+ and $f$ will have the value $\frac14$, since
+ \[ \frac{1}{xy+4} -
+ {\underbrace{\frac{-xy}{4(xy+4)}}_{\text{vanishes at origin}}}
+ = \frac14. \]
+
+ More generally, suppose $f$ is any section of some open
+ set containing $\kp$.
+ Let $c \in \CC$ be the value $f(\kp)$, that is, $f \pmod \kp$.
+ Then $f - c$ is going to be another section
+ which vanishes at the origin $\kp$,
+ so as promised, $f \equiv c \pmod{\kp A_{\kp}}$.
+\end{example}
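The arithmetic identity at the heart of this example is easily
verified symbolically; a quick SymPy check of our own:

```python
from sympy import symbols, simplify, Rational

x, y = symbols('x y')

f = 1 / (x*y + 4)
vanishing = -x*y / (4*(x*y + 4))   # lies in the maximal ideal p A_p

# f differs from the constant 1/4 by a germ vanishing at the origin:
assert simplify(f - vanishing - Rational(1, 4)) == 0
# Consistent with simply evaluating f at the origin:
assert f.subs({x: 0, y: 0}) == Rational(1, 4)
```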
+
+Okay, we can write down a proof of the theorem now.
+\begin{proof}
+ [Proof of \Cref{thm:stalks_local_ring}]
+ One may check that the set $I = \kp A_\kp$ is an ideal of $A_\kp$.
+ Moreover, $1 \notin I$, so $I$ is proper.
+
+ To prove it is maximal and unique,
+ it suffices to prove that any $f \in A_\kp$ with $f \notin I$
+ is a \emph{unit} of $A_\kp$.
+ This will imply $I$ is maximal: there are no more non-units to add.
+ It will also imply $I$ is the only maximal ideal:
+ because any proper ideal can't contain units, so is contained in $I$.
+
+ This is actually easy.
+ An element of $A_\kp$ not in $I$ must be of the form $x = \frac fg$
+ with $f,g \in A$ and $f,g \notin \kp$.
+ For such an element, $\frac gf$ is also an element of $A_\kp$,
+ and it is an inverse to $x$. So $x$ is a unit.
+\end{proof}
+
+Even more generally:
+\begin{moral}
+ If a sheaf $\SF$ consists of ``field-valued functions'',
+ the stalk $\SF_p$ probably has a maximal ideal
+ consisting of the germs vanishing at $p$.
+\end{moral}
+
+\begin{example}[Local rings in non-algebraic geometry sheaves]
+Let's go back to the example of $X = \RR$ and $\SF(U)$ the smooth functions,
+and consider the stalk $\SF_{p}$, where $p \in X$.
+Define the ideal $\km_p$ to be the set of germs $(s,U)$ for which $s(p) = 0$.
+
+Then $\km_p$ is maximal: we have an exact sequence
+\[ 0 \to \km_p \to \SF_p \taking{(s,U) \mapsto s(p)} \RR \to 0 \]
+and so $\SF_p / \km_p \cong \RR$, which is a field.
+
+It remains to check there are no other maximal ideals.
+Now note that if $s \notin \km_p$,
+then $s$ is nonzero in some open neighborhood of $p$,
+and one can construct the function $1/s$ on it.
+So \textbf{every element of $\SF_p \setminus \km_p$ is a unit};
+and again $\km_p$ is in fact the only maximal ideal!
+
+Thus the stalks of each of the following types of sheaves
+are local rings, too.
+\begin{itemize}
+ \ii Sheaves of continuous real/complex functions on a topological space
+ \ii Sheaves of smooth functions on any manifold
+ \ii etc.
+\end{itemize}
+\end{example}
+
+\subsection{Computing values: a convenient square}
+\label{sec:convenient_square}
+Very careful readers might have noticed something
+a little uncomfortable in our extended example with $\Spec A$
+with $A = \CC[x,y]$ and $\kp = (x,y)$ the origin.
+Let's consider $f = \frac{1}{xy+4}$.
+We took $f \pmod{x,y}$ in the original ring $A$ in order
+to decide the value ``should'' be $\frac14$.
+However, all our calculations actually
+took place not in the ring $A$, but instead in the ring $A_\kp$.
+Does this cause issues?
+
+Thankfully, no, nothing goes wrong, even in a general ring $A$.
+
+\begin{definition}
+ We let the quotient $A_\kp / \kp A_\kp$,
+ i.e.\ the \vocab{residue field} of the stalk of $\Spec A$ at $\kp$,
+ be denoted by $\kappa(\kp)$.
+\end{definition}
+
+Then the following is a special case of
+\Cref{thm:localization_commute_quotient}
+(localization commutes with quotients):
+\begin{theorem}
+ [The germ-to-value square]
+ Let $A$ be a ring and $\kp$ a prime ideal.
+ The following diagram commutes:
+ \begin{center}
+ \begin{tikzcd}
+ A \ar[r, "\text{localize}"] \ar[d, "\bmod \kp"']
+ & A_\kp \ar[d, "\bmod \kp"] \\
+ A/\kp \ar[r, "\Frac(-)"] & \kappa (\kp)
+ \end{tikzcd}
+ \end{center}
+ In particular, $\kappa(\kp)$
+ can also be described as $\Frac(A/\kp)$.
+\end{theorem}
+So for example, if $A = \CC[x,y]$ and $\kp = (x,y)$,
+then $A/\kp \cong \CC$ and $\kappa(\kp) = \Frac(\CC) = \CC$, as we expected.
+In practice, $\Frac(A/\kp)$ is probably the easier way
+to compute $\kappa(\kp)$ for any prime ideal $\kp$.
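For instance, take $A = \ZZ$: at $\kp = (5)$ the residue field is
$\Frac(\ZZ/5) = \ZZ/5$, which is already a field,
while at $\kp = (0)$ it is $\Frac(\ZZ) = \QQ$.
A small numeric check of our own:

```python
# kappa((5)) = Frac(Z/5) = Z/5: check every nonzero residue mod 5
# is already a unit, so no fractions need to be added.
p = 5
units = {a for a in range(1, p) if any(a * b % p == 1 for b in range(1, p))}
assert units == set(range(1, p))

# kappa((0)) = Frac(Z) = Q: fractions of integers form a field.
from fractions import Fraction
assert Fraction(3, 7) * Fraction(7, 3) == 1
```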
+
+
+
+\section{Recap}
+To recap the last two chapters, let $A$ be a ring.
+\begin{itemize}
+ \ii We define $X = \Spec A$ to be the set of prime ideals of $A$.
+ \begin{itemize}
+ \ii The maximal ideals are the ``closed points'' we are used to,
+ but the prime ideals are ``generic points''.
+ \end{itemize}
+
+ \ii We equip $\Spec A$ with the Zariski topology by declaring
+ $\VV(I)$ to be the closed sets, for ideals $I \subseteq A$.
+ \begin{itemize}
+ \ii The distinguished open sets $D(f)$
+ form a topological basis.
+ \ii The irreducible closed sets are exactly the closures of points.
+ \end{itemize}
+
+ \ii Finally, we defined a sheaf $\OO_X$.
+ We set up the definition such that
+ \begin{itemize}
+ \ii $\OO_{X}(D(f)) = A[1/f]$:
+ at distinguished open sets $D(f)$,
+ we get localizations too.
+ \ii $\OO_{X,\kp} = A_\kp$:
+ the stalks are localizations at a prime.
+ \end{itemize}
+ Since $D(f)$ is a basis,
+ these two properties let us explicitly compute $\OO_X(U)$
+ for any open set $U$,
+ so we don't have to resort to the definition using sheafification.
+\end{itemize}
+
+\section{Functions are determined by germs, not values}
+\prototype{The functions $0$ and $x$ on $\Spec \CC[x]/(x^2)$.}
+
+We close the chapter with a word of warning.
+In any ringed space, a section is determined by its germs;
+so that on $\Spec A$ a function $f \in A$ is determined
+by its germ in each stalk $A_\kp$.
+However, we now will mention that an $f \in A$ is \emph{not}
+determined by its value $f(\kp) = f \pmod \kp$ at each point.
+
+The famous example is:
+\begin{example}
+ [On the double point, all multiples of $x$ are zero at all points]
+ The space $\Spec \CC[x] / (x^2)$ has only one point, $(x)$.
+ The functions $0$ and $x$ (and for that matter $2x$, $3x$, \dots)
+ all vanish on it.
+ This shows that functions are not determined uniquely
+ by values in general.
+\end{example}
+
+Fortunately, we can explicitly characterize
+when this sort of ``bad'' behavior happens.
+Indeed, we want to see when $f(\kp) = g(\kp)$ for every $\kp$,
+or equivalently, $h = f-g$ vanishes on every prime ideal $\kp$.
+This is equivalent to having
+\[ h \in \bigcap_{\kp} \kp = \sqrt{(0)} \]
+the radical of the \emph{zero} ideal.
+Thus in the prototype, the failure was caused by the fact that $x^n = 0$
+for some large $n$.
+
+\begin{definition}
+ For a ring $A$, the radical of the zero ideal, $\sqrt{(0)}$,
+ is called the \vocab{nilradical} of $A$.
+ Elements of the nilradical are called \vocab{nilpotents}.
+ We say $A$ is \vocab{reduced} if $0$ is the only nilpotent,
+ i.e.\ $\sqrt{(0)} = (0)$.
+\end{definition}
+\begin{ques}
+ Are integral domains reduced?
+\end{ques}
+
+Then our above discussion gives:
+\begin{theorem}
+ [Nilpotents are the only issue]
+ Two functions $f$ and $g$ have the same value
+ on all points of $\Spec A$ if and only if $f-g$ is nilpotent.
+\end{theorem}
+In particular, when $A$ is a reduced ring,
+even the values $f(\kp)$ as $\kp \in \Spec A$
+are enough to determine $f \in A$.
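As a concrete check (our own), take $A = \ZZ/12$: its prime ideals are
$(2)$ and $(3)$, its nilradical is $(6)$, and two functions differing
by the nilpotent $6$ take the same value at both points of $\Spec \ZZ/12$.

```python
# Nilpotents of Z/12 are exactly the multiples of 6,
# matching the intersection of the primes (2) and (3).
def nilpotent(a, n=12):
    return any(pow(a, k, n) == 0 for k in range(1, n + 1))

assert {a for a in range(12) if nilpotent(a)} == {0, 6}

# Functions differing by the nilpotent 6 agree at every point:
f, g = 5, 11
assert f % 2 == g % 2 and f % 3 == g % 3
```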
+
+\section{\problemhead}
+\Cref{ch:spec_examples} contains many
+examples of affine schemes to train your intuition;
+it's likely to be worth reading even before attempting these problems.
+
+\begin{dproblem}
+ [Spectrums are quasicompact]
+ \gim
+ Show that $\Spec A$ is quasicompact for any ring $A$.
+\end{dproblem}
+
+\begin{problem}
+ [Punctured gyrotop, communicated by Aaron Pixton]
+ The gyrotop is the scheme $X = \Spec \CC[x,y,z] / (xz,yz)$.
+ We let $U$ denote the open subset obtained
+ by deleting the closed point $\km = (x,y,z)$.
+ Compute $\OO_X(U)$.
+ \begin{hint}
+ $\CC[x,y] \times \CC[z,z\inv]$.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Show that a ring $R$ is a local ring
+ if and only if the following property is true:
+ for any $x \in R$,
+ either $x$ or $1-x$ is a unit.
+\end{problem}
+
+\begin{problem}
+ Let $R$ be a local ring, and $\km$ be its maximal ideal.
+ Describe $R_\km$.
+ \begin{hint}
+ It's isomorphic to $R$!
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ Let $A$ be a ring, and $\km$ a maximal ideal.
+ Consider $\km$ as a point of $\Spec A$.
+ Show that $\kappa(\km) \cong A/\km$.
+\end{problem}
diff --git a/books/napkin/spec-zariski.tex b/books/napkin/spec-zariski.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1e873f5de29e2cfd21f80c16d19eceb011f7e08e
--- /dev/null
+++ b/books/napkin/spec-zariski.tex
@@ -0,0 +1,505 @@
+\chapter{Affine schemes: the Zariski topology}
+\label{ch:spec_zariski}
+Now that we understand sheaves well,
+we can define an affine scheme.
+It will be a ringed space, so we need to define
+\begin{itemize}
+ \ii The set of points,
+ \ii The topology on it, and
+ \ii The structure sheaf on it.
+\end{itemize}
+In this chapter, we handle the first two parts;
+\Cref{ch:spec_sheaf} does the last one.
+
+Quick note: \Cref{ch:spec_examples}
+contains a long list of examples of affine schemes.
+So if something written in this chapter is not making sense,
+one thing worth trying is skimming through the next chapter
+to see if any of the examples there are more helpful.
+
+\section{Some more advertising}
+Let me describe what the construction of $\Spec A$ is going to do.
+
+In the case of $\Aff^n$, we used $\CC^n$ as the set of points
+and $\CC[x_1, \dots, x_n]$ as the ring of functions
+but then remarked that the set of points
+of $\CC^n$ corresponded to the maximal ideals of $\CC[x_1, \dots, x_n]$.
+In an \emph{affine scheme}, we will take an \emph{arbitrary} ring $A$,
+and generate the entire structure from just $A$ itself.
+The final result is called $\Spec A$, the \vocab{spectrum} of $A$.
+The affine varieties $\VV(I)$ we met earlier will just be
+$\Spec \CC[x_1, \dots, x_n] / I$, but now we will be able to take
+\emph{any} ideal $I$, thus finally completing the table at the end
+of the ``affine variety'' chapter.
+
+The construction of the affine scheme in this way
+will have three big generalizations:
+\begin{enumerate}
+ \ii We no longer have to work over an algebraically
+ closed field $\CC$, or even a field at all.
+ This will be the most painless generalization:
+ you won't have to adjust your current picture much for this to work.
+
+ \ii We allow non-radical ideals:
+ $\Spec \CC[x] / (x^2)$ will be the double point
+ we sought for so long.
+ This will let us formalize the notion of a ``fat'' or ``fuzzy'' point.
+
+ \ii Our affine schemes will have so-called \emph{non-closed points}:
+ points which you can visualize as floating around,
+ somewhere in the space but nowhere in particular.
+ (They'll correspond to prime non-maximal ideals.)
+ These will take the longest to get used to,
+ but as we progress we will begin to see that these non-closed points
+ actually make life \emph{easier},
+ once you get a sense of what they look like.
+\end{enumerate}
+
+\section{The set of points}
+\prototype{$\Spec \CC[x_1, \dots, x_n] / I$.}
+
+First surprise, for a ring $A$:
+\begin{definition}
+ The set $\Spec A$ is defined as the set of prime ideals of $A$.
+\end{definition}
+
+This might be a little surprising, since we might have guessed
+that $\Spec A$ should just have the maximal ideals.
+What do the remaining ideals correspond to?
+The answer is that they will be so-called \emph{non-closed points}
+or \emph{generic points} which are ``somewhere'' in the space,
+but nowhere in particular.
+(The name ``non-closed'' is explained later in this chapter.)
+
+\begin{remark}
+ As usual $A$ itself is not a prime ideal, but $(0)$
+ is prime if and only if $A$ is an integral domain.
+\end{remark}
+
+\begin{example}
+ [Examples of spectrums]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\Spec \CC[x]$ consists of a point $(x-a)$ for every $a \in \CC$,
+ which correspond to what we geometrically think of as $\Aff^1$.
+ It additionally consists of a point $(0)$,
+ which we think of as a ``non-closed point'', nowhere in particular.
+
+ \ii $\Spec \CC[x,y]$ consists of points $(x-a,y-b)$
+ (which are the maximal ideals) as well as $(0)$ again,
+ a non-closed point that is thought of as ``somewhere in $\CC^2$,
+ but nowhere in particular''.
+ It also consists of non-closed points corresponding to irreducible
+ polynomials $f(x,y)$, for example $(y-x^2)$,
+ which is a ``generic point on the parabola''.
+
+ \ii If $k$ is a field, $\Spec k$ is a single point,
+ since the only prime ideal of $k$ is $(0)$.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [Complex affine varieties]
+ Let $I \subseteq \CC[x_1, \dots, x_n]$ be an ideal.
+ By \Cref{prop:prime_quotient},
+ the set \[ \Spec \CC[x_1, \dots, x_n] /I \]
+ consists of those prime ideals of $\CC[x_1, \dots, x_n]$
+ which contain $I$: in other words, it has a
+ point for every closed irreducible subvariety of $\VV(I)$.
+ So in addition to the ``geometric points''
+ (corresponding to the maximal ideals $(x_1-a_1, \dots, x_n-a_n)$),
+ we have non-closed points along each of the irreducible subvarieties.
+\end{example}
+
+The non-closed points are the ones you are not used to:
+there is one for each non-maximal prime ideal
+(visualized as ``irreducible subvariety'').
+I like to visualize them in my head like a fly:
+you can hear it, so you know it is floating \emph{somewhere} in the room,
+but as it is always moving, you never know exactly where.
+So the generic point of $\Spec \CC[x,y]$ corresponding to the prime
+ideal $(0)$ is floating everywhere in the plane,
+the one for the ideal $(y-x^2)$ floats along the parabola, etc.
+\begin{center}
+ \includegraphics[scale=0.4]{media/calvin-hobbes-fly.png} \\
+ \footnotesize Image from \cite{img:calvin_hobbes_fly}.
+\end{center}
+
+\begin{example}
+ [More examples of spectrums]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\Spec \ZZ$ consists of a point for every prime $p$,
+ plus a generic point that is somewhere, but nowhere in particular.
+
+ \ii $\Spec \CC[x] / (x^2)$ has only $(x)$ as a prime ideal.
+ The ideal $(0)$ is not prime since $0 = x \cdot x$.
+ Thus as a \emph{topological space},
+ $\Spec \CC[x] / (x^2)$ is a single point.
+
+ \ii $\Spec \Zc{60}$ consists of three points.
+ What are they?
+ \end{enumerate}
+\end{example}
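For part (c), the points of $\Spec \Zc{60}$ correspond to prime ideals
of $\ZZ$ containing $(60)$, i.e.\ to the prime divisors of $60$.
A one-line check of our own:

```python
from sympy import factorint

# The three points of Spec Z/60 come from the prime divisors of 60:
assert sorted(factorint(60)) == [2, 3, 5]
```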
+
+\section{The Zariski topology on the spectrum}
+\prototype{Still $\Spec \CC[x_1, \dots, x_n] / I$.}
+
+Now, we endow a topology on $\Spec A$.
+Since the points of $\Spec A$ are the prime ideals, we continue
+the analogy by thinking of the elements $f \in A$ as functions on $\Spec A$. That is:
+\begin{definition}
+ Let $f \in A$ and $\kp \in \Spec A$.
+ Then the \vocab{value} of $f$ at $\kp$ is defined to be $f \pmod{\kp}$,
+ an element of $A/\kp$.
+ We denote it $f(\kp)$.
+\end{definition}
+\begin{example}
+ [Vanishing loci in $\Aff^n$]
+ Suppose $A = \CC[x_1, \dots, x_n]$,
+ and $\km = (x_1-a_1, x_2-a_2, \dots, x_n-a_n)$ is a maximal ideal of $A$.
+ Then for a polynomial $f \in \CC[x_1, \dots, x_n]$,
+ \[ f \pmod \km = f(a_1, \dots, a_n) \]
+ with the identification that $A/\km \cong \CC$.
+\end{example}
+\begin{example}
+ [Functions on $\Spec \ZZ$]
+ Consider $A = \ZZ$.
+ Then $2019 \in A$ is a function on $\Spec \ZZ$.
+ Its value at the point $(5)$ is $4 \pmod 5$;
+ its value at the point $(7)$ is $3 \pmod 7$.
+\end{example}
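These evaluations are, of course, just modular arithmetic:

```python
# The "function" 2019 on Spec Z, evaluated at the points (5) and (7):
assert 2019 % 5 == 4
assert 2019 % 7 == 3
```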
+
+Indeed if you replace $A$ with $\CC[x_1, \dots, x_n]$
+and $\Spec A$ with $\Aff^n$ in everything that follows,
+then everything will become quite familiar.
+
+\begin{definition}
+ Let $f \in A$. We define the \vocab{vanishing locus} of $f$ to be
+ \[ \VV(f) = \left\{ \kp \in \Spec A \mid f(\kp) = 0 \right\}
+ = \left\{ \kp \in \Spec A \mid f \in \kp \right\}. \]
+ More generally, just as in the affine case,
+ we define the vanishing locus for an ideal $I$ as
+ \begin{align*}
+ \VV(I) &= \left\{ \kp \in \Spec A \mid f(\kp)=0 \; \forall f \in I \right\} \\
+ &= \left\{ \kp \in \Spec A \mid f \in \kp \; \forall f \in I \right\} \\
+ &= \left\{ \kp \in \Spec A \mid I \subseteq \kp \right\}.
+ \end{align*}
+ Finally, we define the \vocab{Zariski topology} on $\Spec A$
+ by declaring that the sets of the form $\VV(I)$ are closed.
+\end{definition}
+
+We now define a few useful topological notions:
+\begin{definition}
+ Let $X$ be a topological space.
+ A point $p \in X$ is a \vocab{closed point}
+ if the set $\{p\}$ is closed.
+\end{definition}
+\begin{ques}
+ [Mandatory]
+ Show that a point (i.e.\ prime ideal)
+ $\km \in \Spec A$ is a closed point
+ if and only if $\km$ is a maximal ideal.
+\end{ques}
+Recall also in \Cref{def:closure} we denote by $\ol S$
+the closure of a set $S$ (i.e.\ the smallest closed set containing $S$);
+so you can think of a closed point $p$ also
+as one whose closure is just $\{p\}$.
+Therefore the Zariski topology lets us refer back to the old
+``geometric points'' as just the closed points.
+
+\begin{example}
+ [Non-closed points, continued]
+ Let $A = \CC[x,y]$ and let $\kp = (y-x^2) \in \Spec A$;
+ this is the ``generic point'' on a parabola.
+ It is not closed, but we can compute its closure:
+ \[
+ \ol{\{\kp\}}
+ = \VV(\kp) = \left\{ \kq \in \Spec A \mid \kq \supseteq \kp \right\}.
+ \]
+ This closure contains the point $\kp$ as well
+ as several maximal ideals $\kq$, such as $(x-2,y-4)$ and $(x-3,y-9)$.
+ In other words, the closure of the ``generic point'' of the parabola
+ is literally the set of all points that are actually on the parabola
+ (including generic points).
+
+ That means the way to picture $\kp$ is a point that
+ is floating ``somewhere on the parabola'', but nowhere in particular.
+ It makes sense then that if we take the closure,
+ we get the entire parabola,
+ since $\kp$ ``could have been'' any of those points.
+\end{example}
+
+\begin{center}
+\begin{asy}
+ graph.xaxis();
+ graph.yaxis();
+ real f(real x) { return x*x; }
+ graph.xaxis("$x$");
+ graph.yaxis("$y$");
+ draw(graph(f,-2,2,operator ..), red+dotted, Arrows(TeXHead));
+ dot("$(y-x^2)$", (1.4, f(1.4)), dir(-45), red);
+ dot("$(x+1,y-1)$", (-1,1), dir(225), blue);
+\end{asy}
+\end{center}
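Concretely, the containment $(y-x^2) \subseteq (x-2, y-4)$ used in the
example above comes from the identity $y - x^2 = (y-4) - (x-2)(x+2)$,
which a computer algebra system confirms:

```python
from sympy import symbols, expand

x, y = symbols('x y')

# The maximal ideal (x-2, y-4) contains y - x^2, witnessing that the
# closed point (2,4) lies in the closure of the generic point of the
# parabola:
assert expand((y - 4) - (x - 2) * (x + 2)) == y - x**2
```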
+
+\begin{example}
+ [The generic point of the $y$-axis isn't on the $x$-axis]
+ Let $A = \CC[x,y]$ again.
+ Consider $\VV(y)$, which is the $x$-axis of $\Spec A$.
+ Then consider $\kp = (x)$, which is the generic point on the $y$-axis.
+ Observe that
+ \[ \kp \notin \VV(y). \]
+ The geometric way of saying this is that a \emph{generic point}
+ on the $y$-axis does not lie on the $x$-axis.
+\end{example}
+
+We now also introduce one more word:
+\begin{definition}
+ A topological space $X$ is \vocab{irreducible}
+ if either of the following two conditions hold:
+ \begin{itemize}
+ \ii The space $X$ cannot be written as the
+ union of two proper closed subsets.
+ \ii Any two nonempty open sets of $X$ intersect.
+ \end{itemize}
+ A subset $Z$ of $X$ (usually closed) is irreducible
+ if it is irreducible as a subspace.
+\end{definition}
+\begin{exercise}
+ Show that the two conditions above are indeed equivalent.
+ Also, show that the closure of a point is always irreducible.
+\end{exercise}
+
+This is the analog of the ``irreducible''
+we defined for affine varieties,
+but it is now a topological definition,
+although in practice this definition is only
+useful for spaces with the Zariski topology.
+Indeed, if any two nonempty open sets intersect
+(and there is more than one point),
+the space is certainly not Hausdorff!
+As with our old affine varieties,
+the intuition is that $\VV(xy)$ (the union of two lines)
+should not be irreducible.
+
+\begin{example}
+ [Reducible and irreducible spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The closed set $\VV(xy) = \VV(x) \cup \VV(y)$ is reducible.
+ \ii The entire plane $\Spec \CC[x,y]$ is irreducible.
+ There is actually a simple (but counter-intuitive,
+ since you are just getting used to generic points)
+ reason why this is true:
+ the generic point $(0)$ is in \emph{every} open set,
+ ergo, any two open sets intersect.
+ \end{enumerate}
+\end{example}
+
+So actually, the generic points
+kind of let us cheat our way through the following bit:
+\begin{proposition}
+ [Spectrums of integral domains are irreducible]
+ If $A$ is an integral domain,
+ then $\Spec A$ is irreducible.
+\end{proposition}
+\begin{proof}
+ Just note $(0)$ is a prime ideal,
+ and in every open set.
+\end{proof}
+You should compare this with our old classical result that
+$\CC[x_1, \dots, x_n]/I$
+was irreducible as an affine variety exactly when $I$ was prime.
+This time, the generic point actually takes care
+of the work for us:
+the fact that it is \emph{allowed} to float
+anywhere in the plane lets us capture the idea that
+$\Aff^2$ should be irreducible
+without having to expend any additional effort.
+\begin{remark}
+ Surprisingly, the converse of this proposition is false:
+ we have seen $\Spec \CC[x]/(x^2)$ has only one point,
+ so is certainly irreducible.
+ But $A = \CC[x]/(x^2)$ is not an integral domain.
+ So this is one weird-ness introduced by allowing ``non-radical'' behavior.
+\end{remark}
+
+At this point you might notice something:
+\begin{theorem}
+ [Points are in bijection with irreducible closed sets]
+ Consider $X = \Spec A$.
+ For every irreducible closed set $Z$,
+ there is exactly one point $\kp$ such that $Z = \ol{\{\kp\}}$.
+ (In particular, points of $X$ are in bijection
+ with irreducible closed subsets of $X$.)
+\end{theorem}
+\begin{proof}
+ [Idea of proof]
+ The point $\kp$ corresponds to the closed set $\VV(\kp)$,
+ which one can show is irreducible.
+ % Maybe I really should prove this here,
+ % but I don't really want to draw to much attention to radicals yet;
+ % there's too much going on already.
+\end{proof}
+This gives you a better way to draw non-closed points:
+they are the generic points lying along any irreducible closed set
+(consisting of more than just one point).
+
+At this point\footnote{Pun not intended},
+I may as well give you the real definition of generic point.
+\begin{definition}
+ Given a topological space $X$,
+ a \vocab{generic point} $\eta$
+ is a point whose closure is the entire space $X$.
+\end{definition}
+So for us, when $A$ is an integral domain,
+$\Spec A$ has generic point $(0)$.
+\begin{abuse}
+ Very careful readers might note I am being a little careless
+ with referring to $(y-x^2)$ as
+ ``the generic point along the parabola''
+ in $\Spec \CC[x,y]$.
+ What's happening is that $\VV(y-x^2)$ is a closed set,
+ and as a topological subspace, it has generic point $(y-x^2)$.
+\end{abuse}
+
+\section{On radicals}
+Back when we studied classical algebraic geometry in $\CC^n$,
+we saw Hilbert's Nullstellensatz (\Cref{thm:hilbert_null}) show up
+to give bijections between radical ideals and affine varieties;
+we omitted the proof, because it was nontrivial.
+
+However, for a \emph{scheme}, where the points \emph{are} prime ideals
+(rather than tuples in $\CC^n$),
+the corresponding results will actually be \emph{easy}:
+even in the case where $A = \CC[x_1, \dots, x_n]$,
+the addition of prime ideals (instead of just maximal ideals)
+will actually \emph{simplify} the proof,
+because radicals play well with prime ideals.
+
+We still have the following result.
+\begin{proposition}
+ [$\VV(\sqrt I) = \VV(I)$]
+ For any ideal $I$ of a ring $A$
+ we have $\VV(\sqrt I) = \VV(I)$.
+\end{proposition}
+\begin{proof}
+ We have $\sqrt I \supseteq I$.
+ Hence automatically $\VV(\sqrt I) \subseteq \VV(I)$.
+
+ Conversely, if $\kp \in \VV(I)$, then $I \subseteq \kp$,
+ so $\sqrt I \subseteq \sqrt{\kp} = \kp$
+ (by \Cref{prop:radical}).
+\end{proof}
+
We hinted at the key result in an earlier remark,
+and we now prove it.
+\begin{theorem}
+ [Radical is intersection of primes]
+ \label{thm:radical_intersect_prime}
+ Let $I$ be an ideal of a ring $A$.
+ Then \[ \sqrt I = \bigcap_{\kp \supseteq I} \kp. \]
+\end{theorem}
+\begin{proof}
+ This is a famous statement from commutative algebra,
+ and we prove it here only for completeness.
+ It is ``doing most of the work''.
+
+ Note that if $I \subseteq \kp$,
+ then $\sqrt I \subseteq \sqrt{\kp} = \kp$;
+ thus $\sqrt I \subseteq \bigcap_{\kp \supseteq I} \kp$.
+
 Conversely, suppose $x \notin \sqrt I$,
 meaning $x, x^2, x^3, \dots \notin I$.
 Then, consider the localization $A[1/x]$.
 Since no power of $x$ lies in $I$,
 the ideal $I \cdot A[1/x]$ is proper,
 so it is contained in some maximal ideal of $A[1/x]$ (Krull's theorem).
 Our usual bijection between prime ideals of $A[1/x]$
 and prime ideals of $A$ not containing $x$
 then gives some prime ideal $\kp$ of $A$ containing $I$ but not containing $x$.
 Thus $x \notin \bigcap_{\kp \supseteq I} \kp$,
 as desired.
+\end{proof}
\begin{remark}
 [A variant of Krull's theorem]
 The longer direction of this proof is essentially
 saying that for any non-nilpotent $x \in A$,
 there is a prime ideal of $A$ not containing $x$.
 The ``short'' proof is to use Krull's theorem on $A[1/x]$ as above,
 but one can also still prove it directly using Zorn's lemma
 (by copying the proof of the original Krull's theorem,
 taking an ideal maximal among those avoiding the powers of $x$).
\end{remark}
+
+\begin{example}
+ [$\sqrt{(2016)} = (42)$ in $\ZZ$]
 In the ring $\ZZ$, we see that $\sqrt{(2016)} = (42)$:
 since $2016 = 2^5 \cdot 3^2 \cdot 7$,
 the distinct prime ideals containing $(2016)$
 are $(2)$, $(3)$, $(7)$, and $(2) \cap (3) \cap (7) = (42)$.
+\end{example}
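In the same spirit:
\begin{example}
 [$\sqrt{(x^2)} = (x)$ in $\CC[x]$]
 In the ring $\CC[x]$, the only prime ideal containing $(x^2)$ is $(x)$,
 so $\sqrt{(x^2)} = (x)$.
 This is the algebraic counterpart to the earlier remark
 that $\Spec \CC[x]/(x^2)$ has just the one point $(x)$.
\end{example}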
+
+Geometrically, this gives us a good way to describe $\sqrt I$:
+it is the \emph{set of all functions vanishing on all of $\VV(I)$}.
+Indeed, we may write
\[ \sqrt I = \bigcap_{\kp \supseteq I} \kp
+ = \bigcap_{\kp \in \VV(I)} \kp
+ = \bigcap_{\kp \in \VV(I)} \left\{ f \in A \mid f(\kp) = 0 \right\}. \]
+
+We can now state:
+\begin{theorem}
+ [Radical ideals correspond to closed sets]
 Let $I$ and $J$ be ideals of $A$,
 and consider the space $\Spec A$.
+ Then
+ \[ \VV(I) = \VV(J) \iff \sqrt I = \sqrt J. \]
+ In particular, radical ideals exactly
+ correspond to closed subsets of $\Spec A$.
+\end{theorem}
+\begin{proof}
+ If $\VV(I) = \VV(J)$,
+ then $\sqrt I = \bigcap_{\kp \in \VV(I)} \kp =
+ \bigcap_{\kp \in \VV(J)} \kp = \sqrt J$ as needed.
+
+ Conversely, suppose $\sqrt I = \sqrt J$.
+ Then $\VV(I) = \VV(\sqrt I) = \VV(\sqrt J) = \VV(J)$.
+\end{proof}
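Here is a concrete instance:
\begin{example}
 [$\VV(2016) = \VV(42)$ in $\Spec \ZZ$]
 Since $\sqrt{(2016)} = (42)$ in $\ZZ$,
 the theorem says the ideals $(2016)$ and $(42)$ cut out the same closed set;
 indeed
 \[ \VV(2016) = \VV(42) = \left\{ (2), (3), (7) \right\}. \]
\end{example}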
+
+Compare this to the theorem we had earlier
+that the \emph{irreducible} closed subsets correspond to \emph{prime} ideals!
+
+\section{\problemhead}
+As \Cref{ch:spec_examples} contains many
+examples of affine schemes to train your intuition,
+it's possibly worth reading even before attempting these problems,
+even though there will be some parts that won't make sense yet.
+
+\begin{problem}
+ [{$\Spec \QQ[x]$}]
+ Describe the points and topology of $\Spec \QQ[x]$.
+ \begin{hint}
+ Galois conjugates.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ [Product rings]
 Describe the points and topology of $\Spec (A \times B)$
+ in terms of $\Spec A$ and $\Spec B$.
+\end{problem}
+
+\endinput
+
+\begin{problem}
+ [From Andrew Critch]
+ \gim
+ Let $A$ be a Noetherian ring.
 Show that $A$ is an integral domain if and only if
 it has no idempotents other than $0$ and $1$,
 and $A_\kp$ is an integral domain for every prime $\kp$.
+ \begin{hint}
+ Show that if $\Spec A$ is connected and its stalks are irreducible,
+ then $\Spec A$ is itself irreducible.
 Consider the nilradical $N = \sqrt{(0)}$.
+ \end{hint}
+ \begin{sol}
+ This is the proposition on the second page of
+ \url{http://www.acritch.com/media/math/Stalk-local_detection_of_irreducibility.pdf}
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/stokes.tex b/books/napkin/stokes.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bf129e0bedd0985a28e065a9bedacf02d6256d97
--- /dev/null
+++ b/books/napkin/stokes.tex
@@ -0,0 +1,484 @@
+\chapter{Integrating differential forms}
+We now show how to integrate differential forms over cells,
+and state Stokes' theorem in this context.
+In this chapter, all vector spaces are finite-dimensional and real.
+
+\section{Motivation: line integrals}
+Given a function $g : [a,b] \to \RR$,
+we know by the fundamental theorem of calculus that
+\[
+ \int_{[a,b]} g(t) \; dt = f(b) - f(a)
+\]
+where $f$ is a function such that $g = df/dt$.
+%(You might recognize this more readily
+%as the more customary $\int_a^b g(t) \; dt$.)
+Equivalently, for $f : [a,b] \to \RR$,
+\[ \int_{[a,b]} g \; dt = \int_{[a,b]} df = f(b) - f(a) \]
+where $df$ is the exterior derivative we defined earlier.
+
+Cool, so we can integrate over $[a,b]$.
Now suppose more generally that we have an open subset $U$ of our real vector space $V$
+and a $1$-form $\alpha : U \to V^\vee$.
+We consider a \vocab{parametrized curve}, which is a smooth function $c : [a,b] \to U$.
+Picture:
+\begin{center}
+ \begin{asy}
+ size(8.5cm);
+ pair A = (-13,0);
+ pair B = (-9,0);
+ draw(A--B, grey);
+ dot(A, grey); dot(B, grey);
+ label("$[a,b]$", A--B, dir(90), grey);
+ dot("$t$", 0.3*A+0.7*B, dir(-90), grey);
+
+ draw( (-8,0) -- (-6,0) , EndArrow);
+ label("$c$", (-7,0), dir(90));
+
+ bigblob("$U$");
+ pair a = (-2,-2);
+ pair b = (3,0);
+ pair p = (0,1);
+ pair q = (2,0);
+ label("$c$", q, dir(45), red);
+ draw(a..p..q..b, red);
+ dot("$p = c(t)$", p, dir(90), blue);
+ draw(p--(p+1.5*dir(-10)), blue, EndArrow);
+
+ draw( (0,-4)--(0,-8), EndArrow );
+ label("$\alpha$", (0,-6), dir(0));
+ label("$\alpha_p(v) \in \mathbb R$", (0,-9), heavygreen);
+
+ draw( (-10,-1)--(-1,-8), EndArrow);
+ label("$c^\ast \alpha$", (-5.5,-4.5), dir(225));
+ \end{asy}
+\end{center}
+
+We want to define an $\int_c \alpha$ such that:
+\begin{moral}
+ The integral $\int_c \alpha$ should add up all the $\alpha$ along the curve $c$.
+\end{moral}
+Our differential form $\alpha$ first takes in a point $p$ to get $\alpha_p \in V^\vee$.
+Then, it eats a tangent vector $v \in V$ to the curve $c$ to finally give a real number $\alpha_p(v) \in \RR$.
+We would like to ``add all these numbers up'',
+using only the notion of an integral over $[a,b]$.
+
+\begin{exercise}
+ Try to guess what the definition of the integral should be.
+ (By type-checking, there's only one reasonable answer.)
+\end{exercise}
+
+So, the definition we give is
+\[ \int_c \alpha \defeq
+ \int_{[a,b]} \alpha_{c(t)} \left( c'(t) \right) \; dt. \]
Here, $c'(t)$ is shorthand for $(Dc)_{t}(1)$.
+It represents the \emph{tangent vector} to the curve $c$ at the point $p=c(t)$,
+at time $t$.
+(Here we are taking advantage of the fact that $[a,b]$ is one-dimensional.)
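To make sure the definition sinks in, here is one small example worked by hand
(with $\alpha$ and $c$ chosen arbitrarily for illustration):
\begin{example}
 [A line integral]
 Let $U = V = \RR^2$ and take the $1$-form $\alpha$
 given by $\alpha_p = x \cdot \ee_2^\vee$ at the point $p = (x,y)$.
 Consider the curve $c : [0, 2\pi] \to \RR^2$ by $c(t) = (\cos t, \sin t)$,
 so that $c'(t) = (-\sin t, \cos t)$.
 Then
 \[ \int_c \alpha
 = \int_0^{2\pi} \alpha_{c(t)}\left( c'(t) \right) \; dt
 = \int_0^{2\pi} \cos t \cdot \cos t \; dt = \pi. \]
\end{example}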
+
+Now that definition was a pain to write, so we will define a differential
+$1$-form $c^\ast \alpha$ on $[a,b]$ to swallow that entire thing:
+specifically, in this case we define $c^\ast\alpha$ to be
\[ \left( c^\ast \alpha \right)_t (\eps) = \alpha_{c(t)} \left( (Dc)_{t} (\eps) \right) \]
+(here $\eps$ is some displacement in time).
+Thus, we can more succinctly write
+\[ \int_c \alpha \defeq \int_{[a,b]} c^\ast \alpha. \]
+This is a special case of a \emph{pullback}:
+roughly, if $\phi : U \to U'$ (where $U \subseteq V$, $U' \subseteq V'$),
+we can change any differential $k$-form $\alpha$ on $U'$
+to a $k$-form on $U$.
+In particular, if $U = [a,b]$,\footnote{OK,
+ so $[a,b]$ isn't actually open, sorry.
+ I ought to write $(a-\eps, b+\eps)$, or something.}
+we can resort to our old definition of an integral.
+Let's now do this in full generality.
+
+\section{Pullbacks}
+Let $V$ and $V'$ be finite dimensional real vector spaces (possibly different dimensions)
+and suppose $U$ and $U'$ are open subsets of each;
+next, consider a $k$-form $\alpha$ on $U'$.
+
+Given a map $\phi : U \to U'$ we now want to define a pullback in
+much the same way as before.
+Picture:
+\begin{center}
+ \begin{asy}
+ size(13cm);
+ bigblob("$U$");
+ pair p = (-1,0);
+ dot("$p$", p, dir(90), red);
+ pair p1 = p + 1.4*dir(150);
+ pair p2 = p + 1.7*dir(-50);
+ draw(p--p1, red, EndArrow);
+ draw(p--p2, red, EndArrow);
+
+ add(scale(0.8)*shift(14*dir(180))*CC());
+ bigblob("$U'$");
+ pair q = (-0.5,0.5);
+ dot("$q = \phi(p)$", q, dir(90), blue);
+ pair q1 = q + 1.8*dir(-100);
+ pair q2 = q + 2.3*dir(-10);
+ draw(q--q1, blue, EndArrow);
+ draw(q--q2, blue, EndArrow);
+
+ draw((-9,0)--(-3,0), EndArrow);
+ label("$\phi$", (-6,0), dir(90));
+
+ draw( (0,-4)--(0,-8), EndArrow );
+ label("$\alpha$", (0,-6), dir(0));
+ label("$\alpha_q(\dots) \in \mathbb R$", (0,-9), heavygreen);
+
+ draw( (-11,-3)--(-1,-8), EndArrow);
+ label("$\phi^\ast \alpha$", (-6,-6), dir(225));
+ \end{asy}
+\end{center}
+
+Well, there's a total of about one thing we can do.
+Specifically: $\alpha$ accepts a point in $U'$ and $k$ tangent vectors in $V'$,
+and returns a real number.
+We want $\phi^\ast \alpha$ to accept a point in $p \in U$
+and $k$ tangent vectors $v_1, \dots, v_k$ in $V$,
+and feed the corresponding information to $\alpha$.
+
+Clearly we give the point $q = \phi(p)$.
+As for the tangent vectors, since we are interested in volume, we take the
derivative of $\phi$ at $p$, namely $(D\phi)_p$, which sends each of our vectors $v_i$
to some vector in the target $V'$.
+To cut a long story short:
+\begin{definition}
+ Given $\phi : U \to U'$ and $\alpha$ a $k$-form, we define the \vocab{pullback}
+ \[
+ (\phi^\ast \alpha)_p(v_1, \dots, v_k)
+ \defeq \alpha_{\phi(p)}
+ \left( (D\phi)_p(v_1), \dots, (D\phi)_p(v_k) \right).
+ \]
+\end{definition}
+
+There is a more concrete way to define the pullback using bases.
+Suppose $w_1, \dots, w_n$ is a basis of $V'$
+and $e_1, \dots, e_m$ is a basis of $V$.
+Thus, by the projection principle (\Cref{thm:project_principle})
+the map $\phi : V \to V'$ can be thought of as
\[ \phi(v) = \phi_1(v) w_1 + \dots + \phi_n(v) w_n \]
+where each $\phi_i$ takes in a $v \in V$ and returns a real number.
We know also that $\alpha$ can be written concretely as
\[ \alpha = \sum_{\substack{J \subseteq \{1, \dots, n\} \\ |J| = k}} f_J w_J. \]
Then, we define
\[
 \phi^\ast\alpha
 = \sum_{\substack{J \subseteq \{1, \dots, n\} \\ |J| = k}}
 (f_J \circ \phi) (D\phi_{j_1} \wedge \dots \wedge D\phi_{j_k})
\]
where $J = \{j_1 < \dots < j_k\}$ in each summand.
+A diligent reader can check these definitions are equivalent.
+\begin{example}
+ [Computation of a pullback]
+ Let $V = \RR^2$ with basis $\ee_1$ and $\ee_2$,
+ and suppose $\phi : V \to V'$ is given by sending
+ \[ \phi(a\ee_1 + b\ee_2) = (a^2+b^2)w_1 + \log(a^2+1) w_2 + b^3 w_3 \]
+ where $w_1$, $w_2$, $w_3$ is a basis for $V'$.
+ Consider the form $\alpha_q = f(q) w_1 \wedge w_3$, where $f : V' \to \RR$.
+ Then
+ \[ (\phi^\ast\alpha)_p = f(\phi(p)) \cdot (2a \ee_1^\vee + 2b\ee_2^\vee) \wedge (3b^2 \ee_2^\vee)
+ = f(\phi(p)) \cdot 6ab^2 \cdot \ee_1^\vee \wedge \ee_2^\vee. \]
+\end{example}
+
It turns out that the pullback behaves about as nicely as possible, e.g.\
+\begin{itemize}
+ \ii $\phi^\ast(c\alpha + \beta) = c\phi^\ast \alpha + \phi^\ast\beta$ (linearity)
+ \ii $\phi^\ast(\alpha\wedge\beta)
+ = (\phi^\ast \alpha)\wedge(\phi^\ast \beta)$
+ \ii $\phi_1^\ast(\phi_2^\ast(\alpha))
+ = (\phi_2 \circ \phi_1)^\ast(\alpha)$ (naturality)
+\end{itemize}
+but I won't take the time to check these here
+(one can verify them all by expanding with a basis).
+
+\section{Cells}
+\prototype{A disk in $\RR^2$ can be thought of as the cell
+ $[0,R]\times[0,2\pi] \to \RR^2$ by
+ $(r,\theta) \mapsto (r\cos\theta)\ee_1 + (r\sin\theta)\ee_2$.}
+Now that we have the notion of a pullback,
+we can define the notion of an integral for more general spaces.
+Specifically, to generalize the notion of integrals we had before:
+\begin{definition}
 A \vocab{$k$-cell} is a smooth function $c : [a_1, b_1] \times [a_2,b_2] \times \dots \times [a_k, b_k] \to V$.
+\end{definition}
+\begin{example}
+ [Examples of cells]
+ Let $V = \RR^2$ for convenience.
+ \begin{enumerate}[(a)]
+ \ii A $0$-cell consists of a single point.
+ \ii As we saw, a $1$-cell is an arbitrary curve.
+ \ii A $2$-cell corresponds to a $2$-dimensional surface.
+ For example, the map $c : [0,R] \times [0,2\pi] \to V$ by
+ \[ c : (r,\theta) \mapsto (r\cos\theta, r\sin\theta) \]
+ can be thought of as a disk of radius $R$.
+ \end{enumerate}
+\end{example}
Then, to define an integral
\[ \int_c \alpha \]
for a differential $k$-form $\alpha$ and a $k$-cell $c : [0,1]^k \to V$,
we simply take the pullback
\[ \int_{[0,1]^k} c^\ast \alpha. \]
Since $c^\ast \alpha$ is a $k$-form on the $k$-dimensional unit box,
it can be written as $f(x_1, \dots, x_k) \; dx_1 \wedge \dots \wedge dx_k$,
so the above integral can be written as
\[ \int_0^1 \dots \int_0^1 f(x_1, \dots, x_k) \; dx_1 \wedge \dots \wedge dx_k. \]
+
+\begin{example}[Area of a circle]
+ Consider $V = \RR^2$ and let $c : (r,\theta) \mapsto (r\cos\theta)\ee_1 + (r\sin\theta)\ee_2$
+ on $[0,R] \times [0,2\pi]$ as before.
+ Take the $2$-form $\alpha$ which gives $\alpha_p = \ee_1^\vee \wedge \ee_2^\vee$ at every point $p$.
+ Then
+ \begin{align*}
+ c^\ast\alpha &=
+ \left( \cos\theta dr - r\sin\theta d\theta \right)
+ \wedge
+ \left( \sin\theta dr + r\cos\theta d\theta \right) \\
+ &= r(\cos^2\theta+\sin^2\theta) (dr \wedge d\theta) \\
+ &= r \; dr \wedge d\theta
+ \end{align*}
 Thus,
 \[ \int_c \alpha
 = \int_0^{2\pi} \int_0^R r \; dr \; d\theta
 = \pi R^2 \]
 which is the area of a disk of radius $R$.
+\end{example}
+
+Here's some geometric intuition for what's happening.
+Given a $k$-cell in $V$, a differential $k$-form $\alpha$ accepts a point $p$ and some tangent vectors $v_1$, \dots, $v_k$
+and spits out a number $\alpha_p(v_1, \dots, v_k)$,
+which as before we view as a signed hypervolume.
+Then the integral \emph{adds up all these infinitesimals across the entire cell}.
In particular, if $V = \RR^k$ and we take the form $\alpha : p \mapsto \ee_1^\vee \wedge \dots \wedge \ee_k^\vee$,
then integrating $\alpha$ gives the $k$-dimensional hypervolume of the cell.
For this reason, this $\alpha$ is called the \vocab{volume form} on $\RR^k$.
+
+You'll notice I'm starting to play loose with the term ``cell'':
while the cell $c : [0,R] \times [0,2\pi] \to \RR^2$ is supposed to be a function,
I have been telling you to think of it as a disk of radius $R$ (i.e.\ in terms of its image).
+In the same vein, a curve $[0,1] \to V$ should be thought of as a curve in space,
+rather than a function on time.
+
+This error turns out to be benign.
+Let $\alpha$ be a $k$-form on $U$ and $c : [a_1, b_1] \times \dots \times [a_k, b_k] \to U$ a $k$-cell.
Suppose $\phi : [a_1', b_1'] \times \dots \times [a_k', b_k'] \to [a_1, b_1] \times \dots \times [a_k, b_k]$;
+it is a \vocab{reparametrization} if $\phi$ is bijective and $(D\phi)_p$ is always invertible
+(think ``change of variables'');
+thus
+\[ c \circ \phi : [a_1', b_1'] \times \dots \times [a_k',b_k'] \to U \]
+is a $k$-cell as well.
+Then it is said to \vocab{preserve orientation} if $\det(D\phi)_p > 0$ for all $p$
+and \vocab{reverse orientation} if $\det(D\phi)_p < 0$ for all $p$.
+\begin{exercise}
+ Why is it that exactly one of these cases must occur?
+\end{exercise}
+
+\begin{theorem}
+ [Changing variables doesn't affect integrals]
+ Let $c$ be a $k$-cell, $\alpha$ a $k$-form, and $\phi$ a reparametrization.
+ Then
+ \[ \int_{c \circ \phi} \alpha
+ =
+ \begin{cases}
+ \int_c \alpha & \phi \text{ preserves orientation} \\
+ - \int_c \alpha & \phi \text{ reverses orientation}.
+ \end{cases}
+ \]
+\end{theorem}
+\begin{proof}
+ Use naturality of the pullback to reduce it to the corresponding
+ theorem in normal calculus.
+\end{proof}
+
So for example, if we had parametrized the disk of radius $R$ as $[0,1] \times [0,1] \to \RR^2$
by $(r,t) \mapsto (rR\cos(2\pi t)) \ee_1 + (rR\sin(2\pi t)) \ee_2$, we would have arrived at the same result.
+So we really can think of a $k$-cell just in terms of the points it specifies.
+
+\section{Boundaries}
+\prototype{The boundary of $[a,b]$ is $\{b\}-\{a\}$. The boundary of a square goes around its edge counterclockwise.}
+First, I introduce a technical term that lets us consider multiple cells at once.
+\begin{definition}
 A \vocab{$k$-chain} on $U$ is a formal
 linear combination of $k$-cells over $U$,
 i.e.\ a sum of the form
 \[ c = a_1 c_1 + \dots + a_m c_m \]
 where each $a_i \in \RR$ and each $c_i$ is a $k$-cell.
 We define $\int_c \alpha = \sum_i a_i \int_{c_i} \alpha$.
+\end{definition}
+In particular, a $0$-chain consists of several points, each with a given weight.
+
+Now, how do we define the boundary?
+For a $1$-cell $[a,b] \to U$, as I hinted earlier we want the answer to be the $0$-chain $\{c(b)\}-\{c(a)\}$.
+Here's how we do it in general.
+\begin{definition}
+ Suppose $c : [0,1]^k \to U$ is a $k$-cell.
 Then the \vocab{boundary} of $c$, denoted $\partial c$,
 is the $(k-1)$-chain of cells $[0,1]^{k-1} \to U$ defined as follows.
+ For each $i = 1,\dots,k$ define
 \begin{align*}
 c_i^{\text{start}}(t_1, \dots, t_{k-1}) &
 = c(t_1, \dots, t_{i-1}, 0, t_i, \dots, t_{k-1}) \\
 c_i^{\text{stop}}(t_1, \dots, t_{k-1}) &
 = c(t_1, \dots, t_{i-1}, 1, t_i, \dots, t_{k-1}).
 \end{align*}
+ Then
+ \[ \partial c \defeq
+ \sum_{i=1}^k (-1)^{i+1} \left( c_i^{\text{stop}} - c_i^{\text{start}} \right). \]
+ Finally, the boundary of a chain is the sum of the boundaries of each cell (with the appropriate weights).
+ That is, $\partial(\sum a_ic_i) = \sum a_i \partial c_i$.
+\end{definition}
+\begin{ques}
+ Satisfy yourself that one can extend this definition to
+ a $k$-cell $c$ defined on $c : [a_1, b_1] \times \dots \times [a_k, b_k] \to V$
+ (rather than from $[0,1]^k \to V$).
+\end{ques}
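As a sanity check, we can verify the case $k=1$ agrees with the promise above:
\begin{example}
 [Boundary of a $1$-cell]
 Let $c : [0,1] \to U$ be a $1$-cell.
 Then there is only the term $i = 1$, with sign $(-1)^{1+1} = +1$,
 and $c_1^{\text{start}}$, $c_1^{\text{stop}}$
 are the $0$-cells $c(0)$ and $c(1)$.
 Hence
 \[ \partial c = \{c(1)\} - \{c(0)\} \]
 as claimed earlier.
\end{example}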
+
+\begin{example}
+ [Examples of boundaries]
+ Consider the $2$-cell $c : [0,1]^2 \to \RR^2$ shown below.
+ \begin{center}
+ \begin{asy}
+ size(7cm);
+ pen e1 = heavyred;
+ pen e2 = orange;
+ pen e3 = olive;
+ pen e4 = heavymagenta;
+ draw((0,0)--(2,0), e1, EndArrow);
+ draw((2,0)--(2,2), e2, EndArrow);
+ draw((2,2)--(0,2), e3, EndArrow);
+ draw((0,2)--(0,0), e4, EndArrow);
+ label(scale(0.8)*"$[0,1]^2$", (1,1));
+ draw( (3,1)--(6,1), EndArrow);
+ label("$c$", (4.5,1), dir(90));
+ pair p1 = (7,-1);
+ pair p2 = (12,-2);
+ pair p3 = (11,3);
+ pair p4 = (8,2);
+ fill(p1--p2--p3--p4--cycle, palecyan);
+ draw(p1--p2, e1, EndArrow, Margins);
+ draw(p2--p3, e2, EndArrow, Margins);
+ draw(p3--p4, e3, EndArrow, Margins);
+ draw(p4--p1, e4, EndArrow, Margins);
+ dot("$p_1$", p1, dir(225), blue+4);
+ dot("$p_2$", p2, dir(315), blue+4);
+ dot("$p_3$", p3, dir( 45), blue+4);
+ dot("$p_4$", p4, dir(135), blue+4);
+ label("$c$", (p1+p2+p3+p4)/4);
+ \end{asy}
+ \end{center}
 Here $p_1$, $p_2$, $p_3$, $p_4$ are the images of $(0,0)$, $(1,0)$, $(1,1)$, $(0,1)$, respectively.
+ Then we can think of $\partial c$ as
+ \[ \partial c = [p_1,p_2] + [p_2,p_3] + [p_3,p_4] + [p_4,p_1] \]
+where each ``interval'' represents the $1$-cell shown by the reddish arrows on the right.
 We can take the boundary of this as well, and obtain an empty chain as
 \[ \partial(\partial c) = \sum_{i=1}^4 \partial([p_i, p_{i+1}]) = \sum_{i=1}^4 \left( \{p_{i+1}\}-\{p_i\} \right) = 0 \]
 where indices are taken modulo $4$, so $p_5 = p_1$.
+\end{example}
+
+\begin{example}
+ [Boundary of a unit disk]
 Consider the unit disk given by
 \[ c : [0,1] \times [0,2\pi] \to \RR^2 \quad\text{by}\quad
 (r,\theta) \mapsto r\cos(\theta)\ee_1 + r\sin(\theta)\ee_2. \]
+ The four parts of the boundary are shown in the picture below:
+ \begin{center}
+ \begin{asy}
+ size(7cm);
+ pen e1 = heavyred;
+ pen e2 = orange;
+ pen e3 = olive;
+ pen e4 = heavymagenta;
+ draw((0,0)--(2,0), e1, EndArrow);
+ draw((2,0)--(2,2), e2, EndArrow);
+ draw((2,2)--(0,2), e3, EndArrow);
+ draw((0,2)--(0,0), e4, EndArrow);
+ label("$r$", (2,0), dir(-45));
+ label("$\theta$", (0,2), dir(135));
+
+ label(scale(0.8)*"$[0,1]^2$", (1,1));
+ draw( (3,1)--(6,1), EndArrow);
+ label("$c$", (4.5,1), dir(90));
+
+ pair O = (9,1);
+ pair P = O + 2*dir(0);
+ fill(CP(O,P), palecyan);
+ real eps = 0.3;
+ draw(shift(0,eps) * (O--P), e1, EndArrow, Margins);
+ draw(shift(0,-eps) * (P--O), e3, EndArrow, Margins);
+ draw(CP(O,P), e2, EndArrow, Margins);
+ dot(O, e4+4);
+ \end{asy}
+ \end{center}
+ Note that two of the arrows more or less cancel each other out when they are integrated.
+ Moreover, we interestingly have a \emph{degenerate} $1$-cell at the center of the circle;
+ it is a constant function $[0,1] \to \RR^2$ which always gives the origin.
+\end{example}
+
+Obligatory theorem, analogous to $d^2=0$ and left as a problem.
+\begin{theorem}[The boundary of the boundary is empty]
+ $\partial^2 = 0$, in the sense that for any $k$-chain $c$ we have $\partial^2(c) = 0$.
+\end{theorem}
+
+\section{Stokes' theorem}
+\prototype{$\int_{[a,b]} dg = g(b) - g(a)$.}
+
+We now have all the ingredients to state Stokes' theorem for cells.
+\begin{theorem}
+ [Stokes' theorem for cells]
+ Take $U \subseteq V$ as usual, let $c : [0,1]^k \to U$ be a $k$-cell
+ and let $\alpha : U \to \Lambda^{k-1}(V^\vee)$ be a $(k-1)$-form.
+ Then
+ \[ \int_c d\alpha = \int_{\partial c} \alpha. \]
 In particular, if $d\alpha = 0$ then $\int_{\partial c} \alpha = 0$.
+\end{theorem}
+For example, if $c$ is the interval $[a,b]$ then $\partial c = \{b\} - \{a\}$,
+and thus we obtain the fundamental theorem of calculus.
+
\section{\problemhead}
+\begin{dproblem}[Green's theorem]
+ Let $f,g : \RR^2 \to \RR$ be smooth functions.
+ Prove that
+ \[ \int_c \left( \fpartial gx - \fpartial fy \right) \; dx \wedge dy
+ = \int_{\partial c} (f \; dx + g \; dy). \]
+ \begin{hint}
+ Direct application of Stokes' theorem to $\alpha = f \; dx + g \; dy$.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ Show that $\partial^2 = 0$.
+ \label{prob:partial_zero}
+ \begin{hint}
 This is just an exercise in sigma notation.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[Pullback and $d$ commute]
+ Let $U$ and $U'$ be open sets of vector spaces $V$ and $V'$
+ and let $\phi : U \to U'$ be a smooth map between them.
+ Prove that for any differential form $\alpha$ on $U'$ we have
+ \[ \phi^\ast(d\alpha) = d(\phi^\ast\alpha). \]
+ \begin{hint}
+ This is a straightforward (but annoying) computation.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[Arc length isn't a form]
+ Show that there does \emph{not} exist a $1$-form $\alpha$ on $\RR^2$ such that
+ for a curve $c : [0,1] \to \RR^2$,
+ the integral $\int_c \alpha$ gives the arc length of $c$.
+ \begin{hint}
+ We would want $\alpha_p(v) = \norm{v}$.
+ \end{hint}
+\end{problem}
+
+\begin{problem}
+ An \vocab{exact} $k$-form $\alpha$ is one satisfying $\alpha = d\beta$ for some $\beta$.
+ Prove that
+ \[ \int_{C_1} \alpha = \int_{C_2} \alpha \]
+ where $C_1$ and $C_2$ are any concentric circles in the plane
+ and $\alpha$ is some exact $1$-form.
+ \begin{hint}
+ Show that $d^2=0$ implies $\int_{\partial c} \alpha = 0$ for exact $\alpha$.
+ Draw an annulus.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/structure.tex b/books/napkin/structure.tex
new file mode 100644
index 0000000000000000000000000000000000000000..77c965f44a20a1d746e7ca3ade94e584583ed380
--- /dev/null
+++ b/books/napkin/structure.tex
@@ -0,0 +1,505 @@
+\chapter{The PID structure theorem}
+The main point of this chapter is to discuss a classification
+theorem for finitely generated abelian groups.
+This won't take long to do, and if you like, you can read
+just the first section and then move on.
+
+However, since I'm here, I will go ahead and state the result as a
+special case of the much more general \emph{structure theorem}.
+Its corollaries include
+\begin{itemize}
 \ii All finite-dimensional vector spaces over a field $k$ are isomorphic to $k^{\oplus n}$,
+ \ii The classification theorem for finitely generated abelian groups,
+ \ii The Jordan decomposition of a matrix from before,
+ \ii Another canonical form for a matrix: ``Frobenius normal form''.
+\end{itemize}
+
+\section{Finitely generated abelian groups}
+\label{sec:FTFGAG}
+\begin{remark}
+ We talk about abelian groups in what follows, but really the
+ morally correct way to think about these structures is as $\ZZ$-modules.
+\end{remark}
+\begin{definition}
+ An abelian group $G = (G,+)$ is \vocab{finitely generated}
+ if it is finitely generated as a $\ZZ$-module.
+ (That is, there exists a finite collection $b_1, \dots, b_m \in G$,
+ such that every $x \in G$ can be written in the form
+ $c_1 b_1 + \dots + c_m b_m$ for some $c_1, \dots, c_m \in \ZZ$.)
+\end{definition}
+\begin{example}
+ [Examples of finitely generated abelian groups]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\ZZ$ is finitely generated (by $1$).
+ \ii $\Zc n$ is finitely generated (by $1$).
+ \ii $\ZZ^{\oplus 2}$ is finitely generated (by two elements $(1,0)$ and $(0,1)$).
+ \ii $\ZZ^{\oplus 3} \oplus \Zc9 \oplus \Zc{2016}$ is
+ finitely generated by five elements.
+ \ii $\Zc3\oplus\Zc5$ is finitely generated by two elements.
+ \end{enumerate}
+\end{example}
+\begin{exercise}
+ In fact $\Zc3\oplus\Zc5$ is generated by \emph{one} element.
+ What is it?
+\end{exercise}
+You might notice that these examples are not very diverse.
+That's because they are actually the only examples:
+\begin{theorem}
+ [Fundamental theorem of finitely generated abelian groups]
+ Let $G$ be a finitely generated abelian group.
 Then there exists an integer $r \ge 0$ and
 prime powers $q_1$, \dots, $q_m$ (not necessarily distinct) such that
+ \[
+ G \cong \ZZ^{\oplus r} \oplus \Zc{q_1} \oplus \Zc{q_2} \oplus
+ \dots \oplus \Zc{q_m}. \]
+ This decomposition is unique up to permutation of the $\Zc{q_i}$.
+\end{theorem}
+\begin{definition}
+ The \vocab{rank} of a finitely generated abelian group $G$ is the integer $r$ above.
+\end{definition}
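To illustrate the theorem and the definition of rank:
\begin{example}
 [Rank of a finitely generated abelian group]
 Consider $G = \ZZ^{\oplus 3} \oplus \Zc9 \oplus \Zc{2016}$ from before.
 Since $2016 = 2^5 \cdot 3^2 \cdot 7$, we can decompose
 \[ G \cong \ZZ^{\oplus 3} \oplus \Zc{9} \oplus \Zc{2^5} \oplus \Zc{3^2} \oplus \Zc{7} \]
 which is of the promised shape, and $G$ has rank $3$.
\end{example}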
+
+Now, we could prove this theorem, but it is more interesting to go for the gold
+and state and prove the entire structure theorem.
+
+\section{Some ring theory prerequisites}
+\prototype{$R = \ZZ$.}
+Before I can state the main theorem, I need to define a few terms for UFD's,
+which behave much like $\ZZ$:
+\begin{moral}
+ Our intuition from the case $R = \ZZ$ basically carries over verbatim.
+\end{moral}
+We don't even need to deal with prime ideals and can factor elements instead.
+
+\begin{definition}
+ If $R$ is a UFD, then $p \in R$ is a \vocab{prime element}
+ if $(p)$ is a prime ideal and $p \neq 0$.
 For UFD's this is equivalent to:
 $p$ is not a unit, and if $p = xy$ then either $x$ or $y$ is a unit.
+\end{definition}
+So for example in $\ZZ$ the set of prime elements is $\{\pm2, \pm3, \pm5, \dots\}$.
Now, since $R$ is a UFD, every nonzero element $r$ factors into a product of prime elements
\[ r = u p_1^{e_1} p_2^{e_2} \dots p_m^{e_m} \]
where $u$ is a unit and the $p_i$ are distinct prime elements.
+
+\begin{definition}
+ We say $r$ \vocab{divides} $s$ if $s = r'r$
+ for some $r' \in R$. This is written $r \mid s$.
+\end{definition}
+\begin{example}
+ [Divisibility in $\ZZ$]
+ The number $0$ is divisible by every element of $\ZZ$.
 All other divisibility is as expected.
+\end{example}
+\begin{ques}
+ Show that $r \mid s$ if and only if the exponent of each prime in $r$ is
+ less than or equal to the corresponding exponent in $s$.
+\end{ques}
+
+Now, the case of interest is the even stronger case when $R$ is a PID:
+\begin{proposition}
+ [PID's are Noetherian UFD's]
+ If $R$ is a PID, then it is Noetherian and also a UFD.
+\end{proposition}
+\begin{proof}
 The fact that $R$ is Noetherian is obvious:
 every ideal is generated by a single element, hence finitely generated.
+ For $R$ to be a UFD we essentially repeat the proof for $\ZZ$,
+ using the fact that $(a,b)$ is principal in order to extract
+ $\gcd(a,b)$.
+\end{proof}
+
+In this case, we have a Chinese remainder theorem for elements.
+\begin{theorem}
+ [Chinese remainder theorem for rings]
+ Let $m$ and $n$ be relatively prime elements, meaning $(m) + (n) = (1)$.
+ Then \[ R / (mn) \cong R/(m) \times R/(n). \]
+\end{theorem}
+Here the ring product is as defined in \Cref{ex:product_ring}.
+\begin{proof}
+ This is the same as the proof of the usual Chinese remainder theorem.
+ First, since $(m,n)=(1)$ we have $am+bn=1$ for some $a$ and $b$.
+ Then we have a map
+ \[ R/(m) \times R/(n) \to R/(mn) \quad\text{by}\quad
+ (r,s) \mapsto r \cdot bn + s \cdot am. \]
+ One can check that this map is well-defined and an isomorphism of rings.
+ (Diligent readers invited to do so.)
+\end{proof}
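For example, in $R = \ZZ$ with $m = 2$ and $n = 3$:
\begin{example}
 [Chinese remainder theorem in $\ZZ$]
 Here $(-1) \cdot 2 + 1 \cdot 3 = 1$, so we may take $a = -1$ and $b = 1$.
 The map above is then
 \[ \ZZ/(2) \times \ZZ/(3) \to \ZZ/(6) \quad\text{by}\quad (r,s) \mapsto 3r - 2s. \]
 For instance $(1,0) \mapsto 3$, which is indeed $1 \pmod 2$ and $0 \pmod 3$,
 while $(0,1) \mapsto -2 \equiv 4$, which is $0 \pmod 2$ and $1 \pmod 3$.
\end{example}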
+
+Finally, we need to introduce the concept of a Noetherian $R$-module.
+\begin{definition}
+ An $R$-module $M$ is \vocab{Noetherian}
+ if it satisfies one of the two equivalent conditions:
+ \begin{itemize}
+ \ii Its submodules obey the ascending chain condition:
 there is no infinite ascending chain of submodules
+ $M_1 \subsetneq M_2 \subsetneq \dots$.
+ \ii All submodules of $M$ (including $M$ itself) are finitely generated.
+ \end{itemize}
+\end{definition}
+This generalizes the notion of a Noetherian ring:
+a Noetherian ring $R$ is one for which $R$ is Noetherian as an $R$-module.
+\begin{ques}
+ Check these two conditions are equivalent. (Copy the proof for rings.)
+\end{ques}
+
+\section{The structure theorem}
+Our structure theorem takes two forms:
+\begin{theorem}
+ [Structure theorem, invariant form]
+ Let $R$ be a PID and let $M$ be any finitely generated $R$-module. Then
+ \[ M \cong \bigoplus_{i=1}^m R/(s_i) \]
+ for some $s_i$ (possibly zero)
+ satisfying $s_1 \mid s_2 \mid \dots \mid s_m$.
+ % These $s_i$ are unique up to multiplication by units.
+\end{theorem}
+\begin{corollary}
+ [Structure theorem, primary form]
+ Let $R$ be a PID and let $M$ be any finitely generated $R$-module. Then
+ \[ M \cong R^{\oplus r}
+ \oplus R/(q_1) \oplus R/(q_2) \oplus \dots \oplus R/(q_m) \]
+ where $q_i = p_i^{e_i}$ for some prime element $p_i$ and integer $e_i \ge 1$.
+ % The numbers $r$ and $q_i$ are unique up to permutation and multiplication by units.
+\end{corollary}
+\begin{proof}
+ [Proof of corollary]
+ Factor each $s_i$ into prime factors (since $R$ is a UFD),
+ then use the Chinese remainder theorem.
+\end{proof}
+\begin{remark}
+ In both theorems the decomposition is unique up to
+ permutations of the summands; good to know, but
+ I won't prove this.
+\end{remark}
+
+\section{Reduction to maps of free $R$-modules}
+\begin{definition}
+ A \vocab{free $R$-module} is a module of the form $R^{\oplus n}$
+ (or more generally, $\bigoplus_I R$ for some indexing set $I$,
+ just to allow an infinite basis).
+\end{definition}
+The proof of the structure theorem proceeds in two main steps.
+First, we reduce the problem to a \emph{linear algebra} problem
+involving free $R$-modules $R^{\oplus d}$.
+Once that's done, we just have to play with matrices;
+this is done in the next section.
+
+Suppose $M$ is finitely generated by $d$ elements.
+Then there is a surjective map of $R$-modules
+\[ R^{\oplus d} \surjto M \]
sending the basis elements of $R^{\oplus d}$ to the generators of $M$.
+Let $K$ denote the kernel.
+
+We claim that $K$ is finitely generated as well.
To this end we prove the following lemma.
+\begin{lemma}[Direct sum of Noetherian modules is Noetherian]
+ Let $M$ and $N$ be two Noetherian $R$-modules.
+ Then the direct sum $M \oplus N$ is also a Noetherian $R$-module.
+\end{lemma}
+\begin{proof}
+ It suffices to show that any submodule $L \subseteq M \oplus N$
+ is finitely generated.
+ One guess is that $L = P \oplus Q$,
+ where $P$ and $Q$ are the projections of $L$ onto $M$ and $N$.
+ Unfortunately this is false
+ (take $M = N = \ZZ$ and $L = \{(n,n) \mid n \in \ZZ\}$)
+ so we will have to be more careful.
+
+ Consider the submodules
+ \begin{align*}
+ A &= \left\{ x \in M \mid (x,0) \in L \right\} \subseteq M \\
+ B &= \left\{ y \in N \mid \exists x \in M : (x,y) \in L \right\}
+ \subseteq N.
+ \end{align*}
+ (Note the asymmetry for $A$ and $B$: the proof doesn't work otherwise.)
+ Then $A$ is finitely generated by $a_1$, \dots, $a_k$,
+ and $B$ is finitely generated by $b_1$, \dots, $b_\ell$.
+ Let $x_i = (a_i, 0)$ and let $y_j = (\ast, b_j)$ be elements of $L$
+ (where the $\ast$'s are arbitrary things we don't care about).
+ Then the $x_i$ and $y_j$ together generate $L$:
+ given any $(x,y) \in L$, we may subtract a suitable combination
+ of the $y_j$'s to reduce to the case $y = 0$,
+ and then $x \in A$ by definition.
+\end{proof}
+\begin{ques}
+ Deduce that for $R$ a PID, $R^{\oplus d}$ is Noetherian.
+\end{ques}
+Hence $K \subseteq R^{\oplus d}$ is finitely generated as claimed.
+So we can find another surjective map $R^{\oplus f} \surjto K$.
+Consequently, we have a composition
+\begin{center}
+\begin{tikzcd}
+ & K \ar[rd, hook] \\
+ R^{\oplus f} \ar[ru, two heads] \ar[rr, "T"'] && R^{\oplus d} \ar[r, two heads] & M
+\end{tikzcd}
+\end{center}
+Observe that $M$ is the \emph{cokernel} of the linear map $T$,
+i.e.\ we have that
+\[ M \cong R^{\oplus d} / \img(T). \]
+So it suffices to understand the map $T$ well.
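+Here is the simplest possible instance of this setup, over $R = \ZZ$.
+\begin{example}
+ [Cokernel of a diagonal map]
+ Let $T \colon \ZZ^{\oplus 2} \to \ZZ^{\oplus 2}$ be given by the matrix
+ $\begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$,
+ so that $\img(T) = 2\ZZ \oplus 3\ZZ$. Then
+ \[ \ZZ^{\oplus 2} / \img(T) \cong \ZZ/(2) \oplus \ZZ/(3) \cong \ZZ/(6). \]
+\end{example}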
+
+\section{Smith normal form}
+The idea is now that we have reduced our problem to studying
+linear maps $T : R^{\oplus m} \to R^{\oplus n}$,
+which can be thought of as a generic matrix
+\[ T = \begin{bmatrix}
+ a_{11} & \dots & a_{1m} \\
+ \vdots & \ddots & \vdots \\
+ a_{n1} & \dots & a_{nm}
+ \end{bmatrix} \]
+for a basis $e_1$, \dots, $e_m$ of $R^{\oplus m}$
+and a basis $f_1$, \dots, $f_n$ of $R^{\oplus n}$.
+
+Of course, as you might expect, it ought to be possible to change
+the given bases so that $T$ has a nicer matrix form.
+We already saw this in \emph{Jordan form},
+where we had a map $T : V \to V$ and changed the basis
+so that $T$ was ``almost diagonal''.
+This time, we have \emph{two} sets of bases we can change,
+so we would hope to get a diagonal basis, or even better.
+
+Before proceeding let's think about how we might edit the matrix:
+what operations are permitted? Here are some examples:
+\begin{itemize}
+ \ii Swapping two rows or two columns, which just corresponds
+ to re-ordering a basis.
+ \ii Adding a multiple of a column to another column.
+ For example, if we add $3$ times the first column to the second column,
+ this is equivalent to replacing the basis
+ \[ (e_1, e_2, e_3, \dots, e_m) \mapsto (e_1, e_2+3e_1, e_3, \dots, e_m). \]
+ \ii Adding a multiple of a row to another row.
+ One can see that adding $3$ times the first row to the second row
+ is equivalent to replacing the basis
+ \[ (f_1, f_2, f_3, \dots, f_n) \mapsto (f_1-3f_2, f_2, f_3, \dots, f_n). \]
+\end{itemize}
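+In matrix terms, the latter two operations are right-multiplication
+and left-multiplication by invertible \emph{elementary matrices}.
+For instance, with $m = n = 2$, adding $3$ times the first column
+to the second column is
+\[ T \mapsto T \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}, \]
+while adding $3$ times the first row to the second row is
+\[ T \mapsto \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} T; \]
+both matrices have determinant $1$, hence are invertible over $R$.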
+More generally,
+\begin{moral}
+ If $A$ is an invertible $n \times n$ matrix we can
+ replace $T$ with $AT$.
+\end{moral}
+This corresponds to replacing
+\[ (f_1, \dots, f_n) \mapsto (A\inv(f_1), \dots, A\inv(f_n)) \]
+(the ``invertible'' condition just guarantees the latter is a basis);
+one can check this against the row operation example above.
+Of course similarly we can replace $T$ with $TB$
+where $B$ is an invertible $m \times m$ matrix;
+this corresponds to
+\[ (e_1, \dots, e_m) \mapsto (B(e_1), \dots, B(e_m)), \]
+matching the column operation example above.
+Armed with this knowledge, we can now approach:
+\begin{theorem}
+ [Smith normal form]
+ Let $R$ be a PID.
+ Let $M = R^{\oplus m}$ and $N = R^{\oplus n}$ be free $R$-modules
+ and let $T \colon M \to N$ be a linear map.
+ Set $k = \min\{m,n\}$.
+
+ Then we can select a pair of new bases for $M$ and $N$ such that
+ $T$ has only diagonal entries $s_1$, $s_2$, \dots, $s_k$
+ and $s_1 \mid s_2 \mid \dots \mid s_k$.
+\end{theorem}
+So if $m > n$, the matrix should take the form
+\[
+ \begin{bmatrix}
+ s_1 & 0 & 0 & 0 & \dots & 0 \\
+ 0 & s_2 & 0 & 0 & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots & \dots & \vdots \\
+ 0 & 0 & 0 & s_n & \dots & 0
+ \end{bmatrix}.
+\]
+and similarly when $m \le n$.
+\begin{ques}
+ Show that Smith normal form implies the structure theorem.
+\end{ques}
+
+\begin{remark}
+ Note that this is not a generalization of Jordan form.
+ \begin{itemize}
+ \ii In Jordan form we consider maps $T : V \to V$;
+ note that the source and target space are the \emph{same},
+ and we are considering one basis for the space $V$.
+ \ii In Smith form the maps $T : M \to N$ are between
+ \emph{different} modules, and we pick \emph{two} sets of bases
+ (one for $M$ and one for $N$).
+ \end{itemize}
+\end{remark}
+
+\begin{example}
+ [Example of Smith normal form]
+ To give a flavor of the idea of the proof,
+ let's work through a concrete example with the $\ZZ$-matrix
+ \[ \begin{bmatrix}
+ 18 & 38 & 48 \\
+ 14 & 30 & 32
+ \end{bmatrix}. \]
+ The GCD of all the entries is $2$, and so motivated by this,
+ we perform the \textbf{Euclidean algorithm on the left column}:
+ subtract the second row from the first row,
+ then three times the first row from the second:
+ \[
+ \begin{bmatrix}
+ 18 & 38 & 48 \\
+ 14 & 30 & 32
+ \end{bmatrix}
+ \mapsto
+ \begin{bmatrix}
+ 4 & 8 & 16 \\
+ 14 & 30 & 32
+ \end{bmatrix}
+ \mapsto
+ \begin{bmatrix}
+ 4 & 8 & 16 \\
+ 2 & 6 & -16
+ \end{bmatrix}.
+ \]
+ Now that the GCD $2$ is present, we move it to the upper-left
+ by switching the two rows,
+ and then kill off all the entries in the same row/column;
+ since $2$ was the GCD all along, we isolate $2$ completely:
+ \[
+ \begin{bmatrix}
+ 4 & 8 & 16 \\
+ 2 & 6 & -16
+ \end{bmatrix}
+ \mapsto
+ \begin{bmatrix}
+ 2 & 6 & -16 \\
+ 4 & 8 & 16
+ \end{bmatrix}
+ \mapsto
+ \begin{bmatrix}
+ 2 & 6 & -16 \\
+ 0 & -4 & 48 \\
+ \end{bmatrix}
+ \mapsto
+ \begin{bmatrix}
+ 2 & 0 & 0 \\
+ 0 & -4 & 48
+ \end{bmatrix}.
+ \]
+ This reduces the problem to the $1 \times 2$ matrix
+ $\begin{bmatrix} -4 & 48 \end{bmatrix}$ in the lower right.
+ So we just apply the Euclidean algorithm again there:
+ adding $12$ times the second column to the third kills the $48$,
+ and negating the second column then gives
+ \[
+ \begin{bmatrix}
+ 2 & 0 & 0 \\
+ 0 & -4 & 48
+ \end{bmatrix}
+ \mapsto
+ \begin{bmatrix}
+ 2 & 0 & 0 \\
+ 0 & -4 & 0
+ \end{bmatrix}
+ \mapsto
+ \begin{bmatrix}
+ 2 & 0 & 0 \\
+ 0 & 4 & 0
+ \end{bmatrix}.
+ \]
+\end{example}
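+Tracing back through the reduction from the previous section,
+this computes the cokernel of the original matrix as a $\ZZ$-module:
+\[ \ZZ^{\oplus 2} \Big/ \img \begin{bmatrix}
+ 18 & 38 & 48 \\
+ 14 & 30 & 32
+ \end{bmatrix}
+ \cong \ZZ/(2) \oplus \ZZ/(4). \]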
+
+Now all we have to do is generalize this proof to work
+with any PID. It's intuitively clear how to do this:
+the PID condition more or less lets you perform a Euclidean algorithm.
+
+\begin{proof}
+ [Proof of Smith normal form]
+ Begin with a generic matrix
+ \[ T = \begin{bmatrix}
+ a_{11} & \dots & a_{1m} \\
+ \vdots & \ddots & \vdots \\
+ a_{n1} & \dots & a_{nm}
+ \end{bmatrix} \]
+ We want to show, by a series of operations (gradually changing the given basis)
+ that we can rearrange the matrix into Smith normal form.
+
+ Define $\gcd(x,y)$ to be any generator of the principal ideal $(x,y)$.
+ \begin{claim}[``Euclidean algorithm'']
+ If $a$ and $b$ are entries in the same row or column,
+ we can change bases to replace $a$ with $\gcd(a,b)$
+ and $b$ with something else.
+ \end{claim}
+ \begin{subproof}
+ We do just the case of columns.
+ By hypothesis, $\gcd(a,b) = xa+yb$ for some $x,y \in R$.
+ We must have $(x,y) = (1)$:
+ indeed, writing $a = \gcd(a,b) \cdot a'$ and $b = \gcd(a,b) \cdot b'$
+ and cancelling $\gcd(a,b)$ gives $xa' + yb' = 1$.
+ So there are $u$ and $v$ such that $xu + yv = 1$.
+ Then
+ \[
+ \begin{bmatrix} x & y \\ -v & u \end{bmatrix}
+ \begin{bmatrix} a \\ b \end{bmatrix}
+ = \begin{bmatrix} \gcd(a,b) \\ \text{something} \end{bmatrix}
+ \]
+ and the first matrix is invertible (check this!), as desired.
+ \end{subproof}
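+ For instance, over $R = \ZZ$ with $a = 18$ and $b = 14$,
+ we have $\gcd(18,14) = 2 = (-3) \cdot 18 + 4 \cdot 14$,
+ and we may take $u = v = 1$ since $(-3) \cdot 1 + 4 \cdot 1 = 1$:
+ \[
+ \begin{bmatrix} -3 & 4 \\ -1 & 1 \end{bmatrix}
+ \begin{bmatrix} 18 \\ 14 \end{bmatrix}
+ = \begin{bmatrix} 2 \\ -4 \end{bmatrix},
+ \]
+ with the left matrix having determinant $1$.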
+ Let $s_1$ be the GCD of all the entries $a_{ij}$.
+ Now by repeatedly applying this algorithm,
+ we can cause $s_1$ to appear in the upper left hand corner.
+ Then, we use it to kill off all the entries in the first
+ row and the first column, thus arriving at a matrix
+ \[ \begin{bmatrix}
+ s_1 & 0 & 0 & \dots & 0 \\
+ 0 & a_{22}' & a_{23}' & \dots & a_{2m}' \\
+ 0 & a_{32}' & a_{33}' & \dots & a_{3m}' \\
+ \vdots&\vdots&\vdots&\ddots&\vdots \\
+ 0 & a_{n2}' & a_{n3}' & \dots & a_{nm}' \\
+ \end{bmatrix}. \]
+ Every entry of the new matrix is an $R$-linear combination
+ of the old ones, so $s_1$ divides every $a_{ij}'$;
+ this is what gives the divisibility chain $s_1 \mid s_2 \mid \dots$.
+ Now we repeat the same procedure with this lower-right
+ $(n-1) \times (m-1)$ matrix, and so on.
+ This gives the Smith normal form.
+\end{proof}
+
+With the Smith normal form, we have in the original situation that
+\[ M \cong R^{\oplus d} / \img T \]
+and applying the theorem to $T$ completes the proof of the structure theorem.
+
+\section\problemhead
+Now, we can apply our structure theorem!
+\begin{dproblem}
+ [Finite-dimensional vector spaces are all isomorphic]
+ A vector space $V$ over a field $k$ has a finite spanning set of vectors.
+ Show that $V \cong k^{\oplus n}$ for some $n$.
+ \begin{hint}
+ In the structure theorem, $k / (s_i) \in \{0,k\}$.
+ \end{hint}
+\end{dproblem}
+
+\begin{dproblem}
+ [Frobenius normal form]
+ Let $T : V \to V$ where $V$ is a finite-dimensional vector space
+ over an arbitrary field $k$ (not necessarily algebraically closed).
+ Show that one can write $T$ as a block-diagonal matrix whose blocks
+ are all of the form
+ \[
+ \begin{bmatrix}
+ 0 & 0 & 0 & \dots & 0 & \ast \\
+ 1 & 0 & 0 & \dots & 0 & \ast \\
+ 0 & 1 & 0 & \dots & 0 & \ast \\
+ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots \\
+ 0 & 0 & 0 & \dots & 1 & \ast \\
+ \end{bmatrix}.
+ \]
+ (View $V$ as a $k[x]$-module with action $x \cdot v = T(v)$.)
+ \begin{hint}
+ By theorem $V \cong \bigoplus_i k[x] / (s_i)$ for some polynomials $s_i$.
+ Write each block in the form described.
+ \end{hint}
+\end{dproblem}
+
+\begin{dproblem}
+ [Jordan normal form]
+ Let $T : V \to V$ where $V$ is a finite-dimensional vector space
+ over an arbitrary field $k$ which is algebraically closed.
+ Prove that $T$ can be written in Jordan form.
+ \begin{hint}
+ Copy the previous proof, except using the other form of the structure theorem.
+ Since $k$ is algebraically closed, each prime $p_i \in k[x]$ is linear.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ \gim
+ Find two abelian groups $G$ and $H$ which are not isomorphic,
+ but for which there are injective homomorphisms
+ $G \injto H$ and $H \injto G$.
+ \begin{hint}
+ The structure theorem is an anti-result here:
+ it more or less implies that finitely generated abelian groups won't work.
+ So, look for an infinitely generated example.
+ \end{hint}
+ \begin{sol}
+ Take $G = \Zc3 \oplus \Zc9 \oplus \Zc9 \oplus \Zc9 \oplus \dots$
+ and $H = \Zc9 \oplus \Zc9 \oplus \Zc9 \oplus \Zc9 \oplus \dots$.
+ Then there are maps $G \injto H$ and $H \injto G$,
+ but the groups are not isomorphic since e.g.\
+ $G$ has an element $g \in G$ of order $3$
+ for which there's no $g' \in G$ with $g = 3g'$.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/swapsum.tex b/books/napkin/swapsum.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d59459070d2c14530f3bf99b1f0edd1c60c2a6d6
--- /dev/null
+++ b/books/napkin/swapsum.tex
@@ -0,0 +1,215 @@
+\chapter{Swapping order with Lebesgue integrals}
+\section{Motivating limit interchange}
+\prototype{$\mathbf{1}_\QQ$ is good!}
+
+One of the issues with the Riemann integral is
+that it behaves badly with respect to convergence of functions,
+and the Lebesgue integral deals with this.
+This is therefore often given as a poster child
+for why the Lebesgue integral has better behavior than the Riemann one.
+
+We technically have already seen this:
+consider the indicator function $\mathbf{1}_\QQ$,
+which is not Riemann integrable by \Cref{prob:1QQ}.
+But we can readily compute its Lebesgue integral over $[0,1]$, as
+\[ \int_{[0,1]} \mathbf{1}_\QQ \; d\mu
+ = \mu\left( [0,1] \cap \QQ \right) = 0 \]
+since $[0,1] \cap \QQ$ is countable.
+
+This \emph{could} be thought of as a failure of convergence
+for the Riemann integral.
+\begin{example}
+ [$\mathbf{1}_\QQ$ is a limit of finitely supported functions]
+ \label{ex:1QQindicator}
+ We can define the sequence of functions $g_1$, $g_2$, \dots\ by
+ \[ g_n(x) = \begin{cases}
+ 1 & (n!)x \text{ is an integer} \\
+ 0 & \text{else}.
+ \end{cases} \]
+ Then each $g_n$ is piecewise continuous
+ and hence Riemann integrable on $[0,1]$ (with integral zero),
+ but $\lim_{n \to \infty} g_n = \mathbf{1}_\QQ$ is not.
+\end{example}
+
+The limit here is defined in the following sense:
+\begin{definition}
+ Let $f$ and $f_1, f_2, \dots \colon \Omega \to \RR$ be a sequence of functions.
+ Suppose that for each $\omega \in \Omega$, the sequence
+ \[ f_1(\omega), \; f_2(\omega), \; f_3(\omega), \; \dots \]
+ converges to $f(\omega)$.
+ Then we say $(f_n)_n$ \vocab{converges pointwise}
+ to the limit $f$, written $\lim_{n \to \infty} f_n = f$.
+
+ We can define $\liminf_{n \to \infty} f_n$
+ and $\limsup_{n \to \infty} f_n$ similarly.
+\end{definition}
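+Here is a quick instance of the definition in action.
+\begin{example}
+ [Pointwise convergence of $x^n$]
+ On $\Omega = [0,1]$, the functions $f_n(x) = x^n$ converge pointwise to
+ \[ f(x) = \begin{cases}
+ 1 & x = 1 \\
+ 0 & 0 \le x < 1
+ \end{cases} \]
+ even though each $f_n$ is continuous and $f$ is not.
+\end{example}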
+This is actually a fairly weak notion of convergence, for example:
+\begin{exercise}
+ [Witch's hat]
+ Find a sequence of continuous functions $[-1,1] \to \RR$
+ which converges pointwise to the function $f$ given by
+ \[ f(x) = \begin{cases}
+ 1 & x = 0 \\
+ 0 & \text{otherwise}.
+ \end{cases} \]
+\end{exercise}
+This is why when thinking about the Riemann integral
+it is commonplace to work with stronger conditions like
+``uniformly convergent'' and the like.
+However, with the Lebesgue integral, we can mostly avoid thinking about these!
+
+\section{Overview}
+The three big-name results for exchanging
+pointwise limits with Lebesgue integrals are:
+\begin{itemize}
+ \ii Fatou's lemma: the most general statement possible,
+ for any nonnegative measurable functions.
+ \ii Monotone convergence: ``increasing limits'' just work.
+ \ii Dominated convergence (actually Fatou-Lebesgue):
+ limits that are not too big
+ (bounded by some absolutely integrable function) just work.
+\end{itemize}
+
+\section{Fatou's lemma}
+Without further ado:
+\begin{lemma}
+ [Fatou's lemma]
+ Let $f_1, f_2, \dots \colon \Omega \to [0,+\infty]$
+ be a sequence of \emph{nonnegative} measurable functions.
+ Then $\liminf_{n \to \infty} f_n \colon \Omega \to [0,+\infty]$ is measurable and
+ \[ \int_\Omega \left( \liminf_{n \to \infty} f_n \right) \; d\mu
+ \le \liminf_{n \to \infty} \left( \int_\Omega f_n \; d\mu \right). \]
+ Here we allow either side to be $+\infty$.
+\end{lemma}
+Notice that there are \emph{no extra hypotheses}
+on the $f_n$ other than nonnegativity, which makes this surprisingly versatile
+if you are ever trying to prove some general result.
+
+\section{Everything else}
+The big surprise is how quickly all the ``big-name''
+theorems follow from Fatou's lemma.
+Here is the so-called ``monotone convergence theorem''.
+\begin{corollary}
+ [Monotone convergence theorem]
+ Let $f$ and $f_1, f_2, \dots \colon \Omega \to [0,+\infty]$
+ be a sequence of \emph{nonnegative}
+ measurable functions such that $\lim_n f_n = f$
+ and $f_n(\omega) \le f(\omega)$ for each $n$.
+ Then $f$ is measurable and
+ \[ \lim_{n \to \infty} \left( \int_\Omega f_n \; d\mu \right)
+ = \int_\Omega f \; d\mu. \]
+ Here we allow either side to be $+\infty$.
+\end{corollary}
+\begin{proof}
+ We have
+ \begin{align*}
+ \int_\Omega f \; d\mu
+ &= \int_\Omega \left( \liminf_{n \to \infty} f_n \right) \; d\mu \\
+ &\le \liminf_{n \to \infty} \int_\Omega f_n \; d\mu \\
+ &\le \limsup_{n \to \infty} \int_\Omega f_n \; d\mu \\
+ &\le \int_\Omega f \; d\mu
+ \end{align*}
+ where the first $\le$ is by Fatou's lemma,
+ the second is just $\liminf \le \limsup$,
+ and the third follows from the fact that
+ $\int_\Omega f_n \; d\mu \le \int_\Omega f \; d\mu$ for every $n$.
+ This implies all the inequalities are equalities and we are done.
+\end{proof}
+\begin{remark}
+ [The monotone convergence theorem does not require monotonicity!]
+ In the literature it is much more common
+ to see the hypothesis $f_1(\omega) \le f_2(\omega) \le \dots \le f(\omega)$
+ rather than just $f_n(\omega) \le f(\omega)$ for all $n$,
+ which is where the theorem gets its name.
+ However as we have shown this hypothesis is superfluous!
+ This is pointed out in \url{https://mathoverflow.net/a/296540/70654},
+ as a response to a question entitled
+ ``Do you know of any very important theorems that remain unknown?''.
+\end{remark}
+
+\begin{example}
+ [Monotone convergence gives $\mathbf{1}_\QQ$]
+ This already implies \Cref{ex:1QQindicator}.
+ Letting $g_n$ be the indicator function for $\frac1{n!}\ZZ$
+ as described in that example, we have $g_n \le \mathbf{1}_\QQ$
+ and $\lim_{n \to \infty} g_n(x) = \mathbf{1}_\QQ(x)$,
+ for each individual $x$.
+ So since $\int_{[0,1]} g_n \; d\mu = 0$ for each $n$,
+ this gives $\int_{[0,1]} \mathbf{1}_\QQ = 0$ as we already knew.
+\end{example}
+
+The most famous result, though, is the following.
+\begin{corollary}
+ [Fatou–Lebesgue theorem]
+ Let $f$ and $f_1, f_2, \dots \colon \Omega \to \RR$
+ be a sequence of measurable functions.
+ Assume that $g \colon \Omega \to \RR$ is an
+ \emph{absolutely integrable} function for which
+ $|f_n(\omega)| \le |g(\omega)|$ for all $\omega \in \Omega$.
+ Then we have the chain of inequalities
+ \begin{align*}
+ \int_\Omega \left( \liminf_{n \to \infty} f_n \right) \; d\mu
+ &\le \liminf_{n \to \infty} \left( \int_\Omega f_n \; d\mu \right) \\
+ &\le \limsup_{n \to \infty} \left( \int_\Omega f_n \; d\mu \right)
+ \le \int_\Omega \left( \limsup_{n \to \infty} f_n \right) \; d\mu.
+ \end{align*}
+\end{corollary}
+\begin{proof}
+ There are three inequalities:
+ \begin{itemize}
+ \ii The first inequality follows by Fatou applied to $|g| + f_n$,
+ which is nonnegative since $f_n \ge -|g|$.
+ \ii The second inequality is just $\liminf \le \limsup$.
+ (This makes the theorem statement easy to remember!)
+ \ii The third inequality follows by Fatou applied to $|g| - f_n$,
+ which is nonnegative since $f_n \le |g|$.
+ \qedhere
+ \end{itemize}
+\end{proof}
+
+\begin{exercise}
+ Where is the fact that $g$ is absolutely integrable used in this proof?
+\end{exercise}
+
+\begin{corollary}
+ [Dominated convergence theorem]
+ Let $f_1, f_2, \dots \colon \Omega \to \RR$
+ be a sequence of measurable functions
+ such that $f = \lim_{n \to \infty} f_n$ exists.
+ Assume that $g \colon \Omega \to \RR$ is an
+ \emph{absolutely integrable} function for which
+ $|f_n(\omega)| \le |g(\omega)|$ for all $\omega \in \Omega$.
+ Then
+ \[ \int_\Omega f \; d \mu
+ = \lim_{n \to \infty} \left( \int_\Omega f_n \; d\mu \right). \]
+\end{corollary}
+\begin{proof}
+ If $f(\omega) = \lim_{n \to \infty} f_n(\omega)$,
+ then $f(\omega) = \liminf_{n \to \infty} f_n(\omega)
+ = \limsup_{n \to \infty} f_n(\omega)$.
+ So all the inequalities in the Fatou-Lebesgue theorem
+ become equalities, since the leftmost and rightmost sides are equal.
+\end{proof}
+Note this gives yet another way to verify \Cref{ex:1QQindicator}.
+In general, the dominated convergence theorem
+is a favorite clich\'{e} for undergraduate exams,
+because it is easy to create questions for it.
+Here is one example showing how they all look.
+\begin{example}
+ [The usual Lebesgue dominated convergence examples]
+ Suppose one wishes to compute
+ \[ \lim_{n \to \infty}
+ \int_{(0,1)} \frac{n\sin(n\inv x)}{\sqrt x} \; dx \]
+ then one starts by observing that
+ the inner term is bounded by the absolutely integrable function $x^{-1/2}$.
+ Therefore it equals
+ \begin{align*}
+ \int_{(0,1)} \lim_{n \to \infty}
+ \left( \frac{n\sin(n\inv x)}{\sqrt x} \right) \; dx
+ &= \int_{(0,1)} \frac{x}{\sqrt x} \; dx \\
+ &= \int_{(0,1)} \sqrt{x} \; dx = \frac23.
+ \end{align*}
+\end{example}
+
+\section{Fubini and Tonelli}
+\todo{TO BE WRITTEN}
+
+\section{\problemhead}
+\todo{problems}
diff --git a/books/napkin/sylow.tex b/books/napkin/sylow.tex
new file mode 100644
index 0000000000000000000000000000000000000000..173133a0c08fc31ef3810e04a9926a0359b1e63c
--- /dev/null
+++ b/books/napkin/sylow.tex
@@ -0,0 +1,381 @@
+\chapter{Find all groups}
+\label{chapter:sylow}
+The following problem will hopefully never be proposed at the IMO.
+\begin{quote}
+ Let $n$ be a positive integer and let $S = \left\{ 1,\dots,n \right\}$.
+ Find all functions $f : S \times S \to S$ such that
+ \begin{enumerate}[(a)]
+ \ii $f(x,1)=f(1,x)=x$ for all $x \in S$.
+ \ii $f(f(x,y),z)=f(x,f(y,z))$ for all $x,y,z \in S$.
+ \ii For every $x \in S$ there exists a $y \in S$ such that $f(x,y)=f(y,x)=1$.
+ \end{enumerate}
+\end{quote}
+Nonetheless, it's remarkable how much progress we've made on this ``problem''.
+In this chapter I'll try to talk about some things we have accomplished.
+
+\section{Sylow theorems}
+Here we present the famous Sylow theorems, some of the most
+general results we have about finite groups.
+
+\begin{theorem}[The Sylow theorems]
+ Let $G$ be a group of order $p^n m$,
+ where $\gcd(p,m)=1$ and $p$ is a prime.
+ A \vocab{Sylow $p$-subgroup} is a subgroup of order $p^n$.
+ Let $n_p$ be the number of Sylow $p$-subgroups of $G$.
+ Then
+ \begin{enumerate}[(a)]
+ \ii $n_p \equiv 1 \pmod p$. In particular, $n_p \neq 0$ and
+ a Sylow $p$-subgroup exists.
+ \ii $n_p$ divides $m$.
+ \ii Any two Sylow $p$-subgroups are conjugate subgroups (hence isomorphic).
+ \end{enumerate}
+\end{theorem}
+
+%\begin{figure}[ht]
+% \centering
+% \includegraphics[scale=0.6]{media/starcraft-nuclear-silo.png}
+% \caption{``Nuclear Sylow''.}
+%\end{figure}
+
+Sylow's theorem is really huge for classifying groups;
+in particular, the conditions $n_p \equiv 1 \pmod p$ and $n_p \mid m$
+can often pin down the value of $n_p$ to just a few values.
+Here are some results which follow from the Sylow theorems.
+\begin{itemize}
+ \ii A Sylow $p$-subgroup is normal if and only if $n_p = 1$.
+ \ii Any group $G$ of order $pq$, where $p < q$ are primes,
+ must have $n_q = 1$, since $n_q \equiv 1 \pmod q$ yet $n_q \mid p$.
+ Thus $G$ has a normal subgroup of order $q$.
+ \ii Since any abelian group has all subgroups normal,
+ it follows that any abelian group has exactly one Sylow $p$-subgroup
+ for every $p$ dividing its order.
+ \ii If $p \neq q$, the intersection of a Sylow $p$-subgroup and a Sylow $q$-subgroup is just $\{1_G\}$.
+ That's because the intersection of any two subgroups is also a subgroup,
+ and Lagrange's theorem tells us that its order must divide both a power of $p$
+ and a power of $q$; this can only happen if the subgroup is trivial.
+\end{itemize}
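+Here is a classic first use of these constraints.
+\begin{example}
+ [Groups of order $15$]
+ Suppose $\left\lvert G \right\rvert = 15 = 3 \cdot 5$.
+ Then $n_3 \equiv 1 \pmod 3$ and $n_3 \mid 5$ force $n_3 = 1$,
+ while $n_5 \equiv 1 \pmod 5$ and $n_5 \mid 3$ force $n_5 = 1$.
+ So $G$ has normal subgroups of orders $3$ and $5$
+ intersecting trivially, and one can deduce from this that
+ $G \cong \Zc3 \times \Zc5 \cong \Zc{15}$.
+\end{example}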
+
+Here's an example of another ``practical'' application.
+\begin{proposition}[Triple product of primes]
+ If $\left\lvert G \right\rvert = pqr$ is the product of distinct primes,
+ then $G$ must have a normal Sylow subgroup.
+\end{proposition}
+\begin{proof}
+ WLOG, assume $p < q < r$.
+ Suppose for contradiction that none of $n_p$, $n_q$, $n_r$ equals $1$.
+
+ Since $n_r \mid pq$ but $n_r \ge 1 + r > p, q$,
+ we must have $n_r = pq$. Also, $n_p \ge 1+p$ and $n_q \ge 1+q$.
+ So we must have at least $1+p$ Sylow $p$-subgroups,
+ at least $1+q$ Sylow $q$-subgroups, and at least $pq$ Sylow $r$-subgroups.
+
+ But these groups are pretty exclusive.
+ \begin{ques}
+ Take the $n_p+n_q+n_r$ Sylow subgroups and consider two of them,
+ say $H_1$ and $H_2$.
+ Show that $\left\lvert H_1 \cap H_2 \right\rvert = 1$
+ as follows:
+ check that $H_1 \cap H_2$ is a subgroup of both $H_1$ and $H_2$,
+ and then use Lagrange's theorem.
+ \end{ques}
+
+ We claim that there are too many elements now.
+ Indeed, if we count the non-identity elements contributed by these subgroups, we get
+ \[ n_p(p-1) + n_q(q-1) + n_r(r-1)
+ \ge (1+p)(p-1) + (1+q)(q-1) + pq(r-1) > pqr
+ \]
+ which is more elements than $G$ has!
+\end{proof}
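+For example, when $(p,q,r) = (2,3,5)$ this estimate reads
+\[ 3 \cdot 1 + 4 \cdot 2 + 6 \cdot 4 = 35 > 30 = pqr, \]
+so every group of order $30$ has a normal Sylow subgroup.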
+
+
+\section{(Optional) Proving Sylow's theorem}
+The proof of Sylow's theorem is somewhat involved,
+and in fact many proofs exist. I'll present one below here.
+It makes extensive use of group actions,
+so I want to recall a few facts first.
+If $G$ acts on $X$, then
+\begin{itemize}
+ \ii The orbits of the action form a partition of $X$.
+ \ii If $\OO$ is any orbit, then the orbit-stabilizer theorem says that
+ \[ \left\lvert \OO \right\rvert = \left\lvert G \right\rvert / \left\lvert \Stab_G(x) \right\rvert \]
+ for any $x \in \OO$.
+ \ii In particular: suppose in the above that $G$ is a \vocab{$p$-group},
+ meaning $|G| = p^t$ for some $t$.
+ Then either $\left\lvert \OO \right\rvert = 1$ or $p$ divides $\left\lvert \OO \right\rvert$.
+ In the case $\OO = \{x\}$, by definition $x$ is a \vocab{fixed point} of every element of $G$: we have $g \cdot x = x$ for every $g$.
+\end{itemize}
+Note that when I say $x$ is a fixed point, I mean it is fixed by \textbf{every} element of the group, i.e.\ the orbit really has size one. Hence that's a really strong condition.
+
+\subsection{Definitions}
+\prototype{Conjugacy in $S_n$.}
+I've defined conjugacy of elements previously,
+but I now need to define it for groups:
+\begin{definition}
+ Let $G$ be a group, and let $X$ denote the set of subgroups of $G$.
+ Then \vocab{conjugation} is the action of $G$ on $X$ that sends
+ \[ H \mapsto gHg\inv = \left\{ ghg\inv \mid h \in H \right\}. \]
+ If $H$ and $K$ are subgroups of $G$ such that $H = gKg\inv$ for some $g \in G$
+ (in other words, they are in the same orbit under this action),
+ then we say they are \vocab{conjugate} subgroups.
+\end{definition}
+
+Because we somehow don't think of conjugate elements as
+``that different'' (for example, in permutation groups),
+the following shouldn't be surprising:
+\begin{ques}
+ Show that for any subgroup $H$ of a group $G$, the map $H \to gHg\inv$ by
+ $h \mapsto ghg\inv$ is in fact an isomorphism.
+ This implies that any two conjugate subgroups are isomorphic.
+\end{ques}
+
+\begin{definition}
+ For any subgroup $H$ of $G$ the \vocab{normalizer} of $H$ is defined as
+ \[ N_G(H) \defeq \left\{ g \in G \mid gHg\inv = H \right\}. \]
+ In other words, it is the stabilizer of $H$ under the conjugation action.
+\end{definition}
+
+We are now ready to present the proof.
+
+\subsection{Step 1: Prove that a Sylow $p$-subgroup exists}
+What follows is something like the probabilistic method.
+By considering the set $X$ of ALL subsets of $G$ of size $p^n$ at once, we can exploit the ``deep number theoretic fact'' that
+\[ \left\lvert X \right\rvert = \binom{p^n m}{p^n} \not\equiv 0 \pmod p. \]
+(It's not actually deep: use Lucas' theorem.)
+
+Here is the proof.
+\begin{itemize}
+ \ii Let $G$ act on $X$ by $g \cdot S \defeq \left\{ gs \mid s \in S \right\}$ for each $S \in X$.
+ \ii Take an orbit $\OO$ with size not divisible by $p$.
+ (This is possible because of our deep number theoretic fact.
+ Since $\left\lvert X \right\rvert$ is nonzero mod $p$ and the orbits partition $X$,
+ the claimed orbit must exist.)
+ \ii Let $S \in \OO$, $H = \Stab_G(S)$.
+ Then $p^n$ divides $\left\lvert H \right\rvert$, by the orbit-stabilizer theorem.
+ \ii Consider a second action: let $H$ act on $S$ by
+ $h \cdot s \defeq hs$ (we know $hs \in S$ since $H = \Stab_G(S)$).
+ \ii
+ Observe that $\Stab_H(s) = \{1_H\}$.
+ Then all orbits of the second action must have size $\left\lvert H \right\rvert$. Thus $\left\lvert H \right\rvert$ divides $\left\lvert S \right\rvert = p^n$.
+ \ii This implies $\left\lvert H \right\rvert = p^n$, and we're done.
+\end{itemize}
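+As a sanity check on the number-theoretic fact,
+take $\left\lvert G \right\rvert = 12$ and $p = 2$,
+so that $p^n = 4$ and $m = 3$; then indeed
+\[ \left\lvert X \right\rvert = \binom{12}{4} = 495 \equiv 1 \pmod 2. \]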
+
+
+\subsection{Step 2: Any two Sylow $p$-subgroups are conjugate}
+Let $P$ be a Sylow $p$-subgroup (which exists by the previous step).
+We now prove that any $p$-subgroup $Q$ of $G$ satisfies $Q \subseteq gPg\inv$ for some $g \in G$.
+Note that if $Q$ is also a Sylow $p$-subgroup, then $Q = gPg\inv$ for size reasons;
+this implies that any two Sylow subgroups are indeed conjugate.
+
+\textbf{Let $Q$ act on the set of left cosets of $P$ by left multiplication.}
+Note that
+\begin{itemize}
+ \ii $Q$ is a $p$-group, so any orbit has size divisible by $p$ unless it has size $1$.
+ \ii But the number of left cosets is $m$, which isn't divisible by $p$.
+\end{itemize}
+\textbf{Hence some coset $gP$ is a fixed point for every $q$}, meaning
+$qgP = gP$ for all $q$.
+Equivalently, $qg \in gP$ for all $q \in Q$, so $Q \subseteq gPg\inv$ as desired.
+
+
+\subsection{Step 3: Showing $n_p \equiv 1 \pmod p$}
+Let $\mathcal S$ denote the set of all the Sylow $p$-subgroups.
+Let $P \in \mathcal S$ be arbitrary.
+\begin{ques}
+ Why does $\left\lvert \mathcal S \right\rvert$ equal $n_p$?
+ (In other words, are you awake?)
+\end{ques}
+Now we can proceed with the proof.
+Let $P$ act on $\mathcal S$ by conjugation.
+Then:
+\begin{itemize}
+ \ii Because $P$ is a $p$-group, $n_p$ is congruent mod $p$
+ to the number of fixed points of this action.
+ Now we claim $P$ is the only fixed point of this action.
+ \ii Let $Q$ be any other fixed point, meaning $xQx\inv = Q$ for any $x \in P$.
+ \ii Define the normalizer $N_G(Q) = \left\{ g \in G \mid gQg\inv = Q \right\}$. It contains both $P$ and $Q$.
+ \ii Now for the crazy part: apply Step 2 to $N_G(Q)$.
+ Since $P$ and $Q$ are Sylow $p$-subgroups of it, they must be conjugate.
+ \ii Hence $P=Q$, as desired.
+\end{itemize}
+
+\subsection{Step 4: $n_p$ divides $m$}
+Since $n_p \equiv 1 \pmod p$, it suffices to show $n_p$ divides $\left\lvert G \right\rvert$.
+Let $G$ act on the set of all Sylow $p$-subgroups by conjugation.
+Step 2 says this action has only one orbit, so the orbit-stabilizer theorem
+implies $n_p$ divides $\left\lvert G \right\rvert$.
+
+
+\section{(Optional) Simple groups and Jordan-H\"older}
+\prototype{Decomposition of $\Zc{12}$ is $1 \normalin \Zc 2 \normalin \Zc 4 \normalin \Zc{12}$.}
+% Fortunately, there is at least a way to ``break down'' $G$.
+Just like every integer breaks down as the product of primes,
+we can try to break every group down as a product of ``basic'' groups.
+Armed with our idea of quotient groups, the right notion is this.
+
+\begin{definition}
+ A \vocab{simple group} is a group with no normal subgroups
+ other than itself and the trivial group.
+\end{definition}
+\begin{ques}
+ For which $n$ is $\Zc n$ simple? (Hint: remember that $\Zc n$ is abelian.)
+\end{ques}
+
+Then we can try to define what it means to ``break down a group''.
+\begin{definition}
+ A \vocab{composition series} of a group $G$ is a sequence of subgroups
+ $H_0$, $H_1$, \dots, $H_n$ such that
+ \[ \{1\} = H_0 \normalin H_1 \normalin H_2 \normalin \dots
+ \normalin H_n = G \]
+ of maximal length (i.e.\ $n$ is as large as possible,
+ but all $H_i$ are of course distinct).
+ The \vocab{composition factors} are the groups
+ $H_1/H_0$, $H_2/H_1$, \dots, $H_n/H_{n-1}$.
+\end{definition}
+You can show that the ``maximality'' condition implies that the composition factors are all simple groups.
+
+Let's say two composition series are equivalent if they have the same composition factors (up to permutation); in particular they have the same length.
+Then it turns out that the following theorem \emph{is} true.
+\begin{theorem}
+ [Jordan-H\"older]
+ Every finite group $G$ admits a unique composition series up to equivalence.
+\end{theorem}
+
+\begin{example}
+ [Fundamental theorem of arithmetic when $n=12$]
+ Let's consider the group $\Zc{12}$.
+ It's not hard to check that the possible composition series are
+ \begin{align*}
+ \{1\} \normalin \Zc2 \normalin \Zc4 \normalin \Zc{12}
+ & \text{ with factors $\Zc2$, $\Zc2$, $\Zc3$} \\
+ \{1\} \normalin \Zc2 \normalin \Zc6 \normalin \Zc{12}
+ & \text{ with factors $\Zc2$, $\Zc3$, $\Zc2$} \\
+ \{1\} \normalin \Zc3 \normalin \Zc6 \normalin \Zc{12}
+ & \text{ with factors $\Zc3$, $\Zc2$, $\Zc2$}.
+ \end{align*}
+ These correspond to the factorization $12 = 2^2 \cdot 3$.
+\end{example}
+
+This suggests that classifying all finite simple groups would be great progress, since every finite group is somehow a ``product'' of simple groups;
+the only issue is that there are multiple ways of building a group from constituents.
+
+Amazingly, we actually \emph{have} a full list of simple groups,
+but the list is really bizarre.
+Every finite simple group falls in one of the following categories:
+\begin{itemize}
+ \ii $\Zc p$ for $p$ a prime,
+ \ii For $n \ge 5$, the subgroup of $S_n$ consisting of ``even'' permutations,
+ \ii A simple group of Lie type (which I won't explain), and
+ \ii Twenty-six ``sporadic'' groups which do not fit into any nice family.
+\end{itemize}
+The two largest of the sporadic groups have cute names.
+The \vocab{baby monster group} has order
+\[ 2^{41} \cdot 3^{13} \cdot 5^6 \cdot 7^2 \cdot 11 \cdot 13 \cdot 17
+ \cdot 19 \cdot 23 \cdot 31 \cdot 47 \approx 4 \cdot 10^{33} \]
+and the \vocab{monster group} (also ``\vocab{friendly giant}'') has order
+\[
+ 2^{46} \cdot 3^{20} \cdot 5^9 \cdot 7^6 \cdot 11^2 \cdot 13^3
+ \cdot 17 \cdot 19 \cdot 23 \cdot 29 \cdot 31 \cdot 41 \cdot 47
+ \cdot 59 \cdot 71 \approx 8 \cdot 10^{53}. \]
+It contains twenty of the sporadic groups as subquotients (including itself),
+and these twenty groups are called the ``\vocab{happy family}''.
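The two orders quoted above are easy to spot-check from their prime factorizations; a one-off Python computation:

```python
# Orders of the two largest sporadic simple groups,
# computed from the prime factorizations quoted above.
baby_monster = (2**41 * 3**13 * 5**6 * 7**2 * 11 * 13 * 17
                * 19 * 23 * 31 * 47)
monster = (2**46 * 3**20 * 5**9 * 7**6 * 11**2 * 13**3
           * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71)

print(f"{baby_monster:.2e}")  # roughly 4 * 10^33
print(f"{monster:.2e}")       # roughly 8 * 10^53
```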
+
+Math is weird.
+
+\begin{ques}
+ Show that ``finite simple group of order $2$'' is redundant
+ in the sense that any group of order $2$ is both finite and simple.
+\end{ques}
+
+\section\problemhead
+
+\begin{sproblem}
+ [Cauchy's theorem]
+ Let $G$ be a group and let $p$ be a prime dividing $\left\lvert G \right\rvert$.
+ Prove\footnote{Cauchy's theorem can be proved without the Sylow theorems,
+ and in fact can often be used to give alternate proofs of Sylow.}
+ that $G$ has an element of order $p$.
+ \label{thm:cauchy_group}
+\end{sproblem}
+
+\begin{problem}
+ Let $G$ be a finite simple group.
+ Show that $\left\lvert G \right\rvert \neq 56$.
+ \begin{hint}
+	Count the Sylow $2$-subgroups and Sylow $7$-subgroups and let them intersect.
+ \end{hint}
+ \begin{sol}
+ Suppose $|G|=56$ and $G$ is simple.
+	Consider the Sylow $7$-subgroups;
+	if there are $n_7$ of them, then $n_7 \mid 8$ and $n_7 \equiv 1 \pmod 7$,
+	while $n_7 > 1$ (since $G$ is simple); hence $n_7 = 8$.
+ That means there are $(7-1) \cdot 8 = 48$ elements of order $7$ in $G$.
+
+ But consider the Sylow $2$-subgroups.
+	These have $8$ elements each, but only $56 - 48 = 8$ elements of $G$ remain,
+	so there is exactly one Sylow $2$-subgroup.
+ That subgroup is normal, contradiction.
+ \end{sol}
+\end{problem}
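The counting in the solution above can be replayed mechanically; a small Python sketch of the same argument:

```python
# Counting argument for |G| = 56 = 2^3 * 7, assuming G is simple.
# Sylow: n_7 divides 8 and n_7 = 1 (mod 7); simplicity rules out n_7 = 1.
n7_candidates = [d for d in (1, 2, 4, 8) if d % 7 == 1]
assert n7_candidates == [1, 8]
n7 = 8

# Distinct Sylow 7-subgroups meet only at the identity,
# so each contributes 6 fresh elements of order 7.
order_7_elements = n7 * (7 - 1)    # 48
remaining = 56 - order_7_elements  # 8 elements left for 2-power orders
print(order_7_elements, remaining)
```

Those $8$ remaining elements exactly fill a single Sylow $2$-subgroup, which is therefore normal.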
+
+\begin{problem}
+ [Engel's PSS?]
+ \gim
+ Consider the set of all words consisting of the letters $a$ and $b$.
+ Given such a word, we can change the word either
+ by inserting a word of the form $www$,
+ where $w$ is a word, anywhere in the given word,
+ or by deleting such a sequence from the word.
+ Can we turn the word $ab$ into the word $ba$?
+ % https://aops.com/community/c340h456603p2572053
+ % lol Alex Zhu op
+ \begin{hint}
+ Construct a non-abelian group such that all elements have order three.
+ \end{hint}
+ \begin{sol}
+ One example is the group of $3 \times 3$
+ matrices with entries in $\FF_3$ that are of the form
+ $\begin{bmatrix}
+ 1 & x & y \\
+ & 1 & z \\
+ & & 1
+ \end{bmatrix}$.
+ \end{sol}
+\end{problem}
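It's easy to brute-force the claim that this group works. A short Python check, encoding the matrix $\begin{bmatrix}1&x&y\\&1&z\\&&1\end{bmatrix}$ as the triple $(x,y,z)$:

```python
from itertools import product

def mul(A, B, p=3):
    """Multiply upper unitriangular 3x3 matrices over F_p.

    (x, y, z) encodes the matrix [[1, x, y], [0, 1, z], [0, 0, 1]].
    """
    x1, y1, z1 = A
    x2, y2, z2 = B
    return ((x1 + x2) % p, (y1 + y2 + x1 * z2) % p, (z1 + z2) % p)

elements = list(product(range(3), repeat=3))
identity = (0, 0, 0)

# Every element has order dividing 3 (so order 1 or 3)...
assert all(mul(mul(g, g), g) == identity for g in elements)
# ...yet the group is non-abelian:
assert any(mul(g, h) != mul(h, g) for g in elements for h in elements)
print(len(elements))  # 27 elements
```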
+
+\begin{problem}
+ \yod
+ Let $p$ be a prime.
+ Show that the only simple group with order $p^n$
+ (for some positive integer $n$)
+ is the cyclic group $\Zc{p^n}$.
+
+ % class equation
+ \begin{hint}
+	First, if $G$ is abelian it's trivial.
+ Otherwise, let $Z(G)$ be the center of the group,
+ which is always a normal subgroup of $G$.
+ Do a mod $p$ argument via conjugation (or use the class equation).
+ \end{hint}
+ \begin{sol}
+ Let $G$ be said group.
+ If $G$ is abelian then all subgroups are normal,
+	and since $G$ is simple, $G$ can't have any proper nontrivial subgroups at all.
+ We can clearly find an element of order $p$, hence $G$ has a subgroup
+ of order $p$, which can only happen if $n=1$, $G \cong \Zc p$.
+
+ Thus it suffices to show $G$ can't be abelian.
+ For this, we can use the class equation, but let's avoid that and do it directly:
+
+ Assume not and let $Z(G) = \{ g \in G \mid xg = gx \; \forall x \in G \}$
+ be the center of the group.
+ Since $Z(G)$ is normal in $G$, and $G$ is simple, we see $Z(G) = \{1_G\}$.
+ But then let $G$ act on itself by conjugation: $g \cdot x = gxg\inv$.
+ This breaks $G$ into a bunch of orbits $\OO_0 = \{1_G\}$, $\OO_1$, $\OO_2$, \dots,
+ and since $1_G$ is the only fixed point by definition, all other orbits
+ have size greater than $1$.
+	The orbit-stabilizer theorem says that each orbit has size dividing $p^n$,
+	so every orbit other than $\OO_0$ has size divisible by $p$.
+
+	But then summing across all orbits (which partition $G$),
+	we obtain $\left\lvert G \right\rvert \equiv 1 \pmod p$,
+	contradicting $p \mid \left\lvert G \right\rvert$.
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/taylor.tex b/books/napkin/taylor.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3b4aca2a881db28108c3e563e9999af2dcc530a4
--- /dev/null
+++ b/books/napkin/taylor.tex
@@ -0,0 +1,461 @@
+\chapter{Power series and Taylor series}
+Polynomials are very well-behaved functions,
+and are studied extensively for that reason.
+From an analytic perspective, for example, they are smooth,
+and their derivatives are easy to compute.
+
+In this chapter we will study \emph{power series},
+which are literally ``infinite polynomials'' $\sum_n a_n x^n$.
+Armed with our understanding of series and differentiation,
+we will see three great things:
+\begin{itemize}
+ \ii Many of the functions we see in nature
+ actually \emph{are} given by power series.
+ Among them are $e^x$, $\log x$, $\sin x$.
+
+	\ii Their convergence properties are actually quite well behaved:
+	from the sequence of coefficients,
+	we can figure out for which $x$ the series converges.
+
+ \ii The derivative of $\sum_n a_n x^n$
+ is actually just $\sum_n n a_n x^{n-1}$.
+\end{itemize}
+
+\section{Motivation}
+To get the ball rolling, let's start with
+one infinite polynomial you'll recognize:
+for any fixed number $-1 < x < 1$ we have the series convergence
+\[ \frac{1}{1-x} = 1 + x + x^2 + \dots \]
+by the geometric series formula.
+
+Let's pretend we didn't see this already in
+\Cref{prob:geometric}.
+So, we instead have a smooth function $f \colon (-1,1) \to \RR$ by
+\[ f(x) = \frac{1}{1-x}. \]
+Suppose we wanted to pretend that it was equal to
+an ``infinite polynomial'' near the origin, that is
+\[ (1-x)\inv = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + \dots. \]
+How could we find that polynomial,
+if we didn't already know?
+
+Well, for starters we can first note that by plugging in $x = 0$
+we obviously want $a_0 =1$.
+
+We have derivatives, so actually,
+we can then differentiate both sides to obtain that
+\[ (1-x)^{-2} = a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + \dots. \]
+If we now set $x = 0$, we get $a_1 = 1$.
+In fact, let's keep taking derivatives and see what we get.
+\begin{alignat*}{8}
+ (1-x)^{-1} &{}={}& a_0 &{}+{}& a_1x &{}+{}& a_2x^2 &{}+{}& a_3x^3 &{}+{}& a_4x^4 &{}+{}& a_5x^5 &{}+{}& \dots \\
+ (1-x)^{-2} &{}={}& && a_1 &{}+{}& 2a_2 x &{}+{}& 3a_3 x^2 &{}+{}& 4a_4 x^3 &{}+{}& 5a_5 x^4 &{}+{}&\dots \\
+ 2(1-x)^{-3} &{}={}& && && 2a_2 &{}+{}& 6a_3 x &{}+{}& 12 a_4 x^2 &{}+{}& 20 a_5 x^3 &{}+{}& \dots \\
+ 6(1-x)^{-4} &{}={}& && &&&& 6a_3 &{}+{}& 24 a_4 x &{}+{}& 60 a_5 x^2 &{}+{}& \dots \\
+ 24(1-x)^{-5} &{}={}& && && && && 24 a_4 &{}+{}& 120 a_5 x &{}+{}& \dots \\
+ &{}\vdotswithin={}.
+\end{alignat*}
+If we set $x=0$ in each equation, we find $a_0 = a_1 = a_2 = \dots = 1$,
+which is what we expect from
+the geometric series $\frac{1}{1-x} = 1 + x + x^2 + \dots$.
+So taking derivatives was enough to recover the right coefficients!
+
+\section{Power series}
+\prototype{$\frac{1}{1-z} = 1 + z + z^2 + \dots$, which converges on $(-1,1)$.}
+Of course this is not rigorous,
+since we haven't described what the right-hand side is,
+much less shown that it can be differentiated term by term.
+So we define the main character now.
+
+\begin{definition}
+ A \vocab{power series} is a sum of the form
+ \[ \sum_{n = 0}^\infty a_n z^n
+ = a_0 + a_1 z + a_2 z^2 + \dots \]
+ where $a_0$, $a_1$, \dots\ are real numbers,
+ and $z$ is a variable.
+\end{definition}
+\begin{abuse}
+ [$0^0=1$]
+ If you are very careful, you might notice
+ that when $z=0$ and $n=0$ we find $0^0$ terms appearing.
+ For this chapter the convention is that
+ they are all equal to one.
+\end{abuse}
+
+Now, if I plug in a \emph{particular} real number $h$,
+then I get a series of real numbers $\sum_{n = 0}^{\infty} a_n h^n$.
+So I can ask, when does this series converge?
+It turns out there is a precise answer for this.
+
+\begin{definition}
+	Given a power series $\sum_{n=0}^{\infty} a_n z^n$,
+ the \vocab{radius of convergence} $R$ is defined
+ by the formula
+	\[ \frac 1R = \limsup_{n \to \infty} \left\lvert a_n \right\rvert^{1/n}, \]
+	with the convention that $R = 0$ if the right-hand side is $\infty$,
+ and $R = \infty$ if the right-hand side is zero.
+\end{definition}
+\begin{theorem}
+ [Cauchy-Hadamard theorem]
+ Let $\sum_{n=0}^{\infty} a_n z^n$
+ be a power series with radius of convergence $R$.
+ Let $h$ be a real number, and consider the infinite series
+ \[ \sum_{n=0}^\infty a_n h^n \]
+ of real numbers.
+ Then:
+ \begin{itemize}
+ \ii The series converges absolutely if $|h| < R$.
+ \ii The series diverges if $|h| > R$.
+ \end{itemize}
+\end{theorem}
+\begin{proof}
+	This is not actually hard,
+	but it won't be essential, so we won't include it.
+\end{proof}
+\begin{remark}
+ In the case $|h| = R$, it could go either way.
+\end{remark}
+\begin{example}
+ [$\sum z^n$ has radius $1$]
+ Consider the geometric series $\sum_{n} z^n = 1 + z + z^2 + \dots$.
+ Since $a_n = 1$ for every $n$, we get $R = 1$,
+ which is what we expected.
+\end{example}
+
+Therefore, if $\sum_n a_n z^n$ is a power
+series with a nonzero radius of convergence $R > 0$,
+then it can \emph{also} be thought of as a function
+\[ (-R, R) \to \RR
+ \quad\text{ by }\quad
+ h \mapsto \sum_{n \ge 0} a_n h^n. \]
+This is great.
+Note also that if $R = \infty$,
+this means we get a function $\RR \to \RR$.
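The $\limsup$ in the definition is also easy to eyeball numerically. A hedged Python sketch (evaluating $|a_n|^{1/n}$ at one large $n$ as a crude proxy for the $\limsup$, which is fine when the limit exists):

```python
import math

# Estimate 1/R = limsup |a_n|^(1/n) by evaluating at one large n.
def nth_root_of_coeff(log_abs_a_n, n):
    """Return |a_n|^(1/n), computed via logarithms to avoid under/overflow."""
    return math.exp(log_abs_a_n / n)

n = 500
geom_root = nth_root_of_coeff(0.0, n)                 # a_n = 1
exp_root = nth_root_of_coeff(-math.lgamma(n + 1), n)  # a_n = 1/n!

print(geom_root)  # 1.0, so R = 1 for the geometric series
print(exp_root)   # small and shrinking with n, so R = infinity for exp
```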
+
+\begin{abuse}
+ [Power series vs.\ functions]
+	There is some subtlety going on with ``types'' of objects again.
+ Analogies with polynomials can help.
+
+ Consider $P(x) = x^3 + 7x + 9$, a polynomial.
+ You \emph{can}, for any real number $h$,
+ plug in $P(h)$ to get a real number.
+ However, in the polynomial \emph{itself},
+ the symbol $x$ is supposed to be a \emph{variable} ---
+ which sometimes we will plug in a real number for,
+ but that happens only after the polynomial is defined.
+
+	Despite this, ``the polynomial $P(x) = x^3+7x+9$''
+ (which can be thought of as the coefficients)
+ and ``the real-valued function $x \mapsto x^3+7x+9$''
+ are often used interchangeably.
+ The same is about to happen with power series:
+ while they were initially thought of as a sequence of
+ coefficients, the Cauchy-Hadamard theorem
+ lets us think of them as functions too,
+ and thus we blur the distinction between them.
+% Pedants will sometimes \emph{define} a polynomial
+% to be the sequence of coefficients, say $(9,7,0,1)$,
+% with $x^3+7x+9$ being ``intuition only''.
+% Similarly they would define a power series
+% to be the sequence $(a_0, a_1, \dots)$.
+% We will not go quite this far, but they have a point.
+%
+% I will be careful to use ``power series''
+% for the one with a variable,
+% and ``infinite series'' for the sums of real numbers from before.
+\end{abuse}
+
+\section{Differentiating them}
+\prototype{We saw earlier $1+x+x^2+x^3+\dots$ has derivative $1+2x+3x^2+\dots$.}
+As promised, differentiation works exactly as you want.
+
+\begin{theorem}
+ [Differentiation works term by term]
+ Let $\sum_{n \ge 0} a_n z^n$ be a power series
+ with radius of convergence $R > 0$,
+ and consider the corresponding function
+ \[ f \colon (-R,R) \to \RR \quad\text{by}\quad
+ f(x) = \sum_{n \ge 0} a_n x^n. \]
+ Then all the derivatives of $f$ exist and
+ are given by power series
+ \begin{align*}
+ f'(x) &= \sum_{n \ge 1} n a_n x^{n-1} \\
+ f''(x) &= \sum_{n \ge 2} n(n-1) a_n x^{n-2} \\
+ &\vdotswithin=
+ \end{align*}
+ which also converge for any $x \in (-R, R)$.
+ In particular, $f$ is smooth.
+\end{theorem}
+\begin{proof}
+ Also omitted.
+ The right way to prove it is to define the notion
+ ``converges uniformly'',
+ and strengthen Cauchy-Hadamard to have
+	this as a conclusion as well.
+ However, we won't use this later.
+\end{proof}
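As a numeric spot check (an illustration, not a proof), we can compare the term-by-term derivative of the prototype $\frac{1}{1-x} = \sum x^n$ against $\frac{1}{(1-x)^2}$:

```python
# For f(x) = 1/(1-x) = sum x^n, the theorem predicts
# f'(x) = sum n x^(n-1) = 1/(1-x)^2 on (-1, 1).
def f_series(x, N=200):
    return sum(x**n for n in range(N))

def fprime_series(x, N=200):
    return sum(n * x**(n - 1) for n in range(1, N))

x = 0.5
print(f_series(x), 1 / (1 - x))          # both approximately 2.0
print(fprime_series(x), 1 / (1 - x)**2)  # both approximately 4.0
```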
+
+\begin{corollary}
+ [A description of power series coefficients]
+ Let $\sum_{n \ge 0} a_n z^n$ be a power series
+ with radius of convergence $R > 0$,
+ and consider the corresponding function $f(x)$ as above.
+ Then
+	\[ a_n = \frac{f^{(n)}(0)}{n!}. \]
+\end{corollary}
+\begin{proof}
+ Take the $n$th derivative and plug in $x=0$.
+\end{proof}
+
+\section{Analytic functions}
+\prototype{The piecewise $e^{-1/x}$ or $0$ function is not analytic,
+ but is smooth.}
+With all these nice results about power series,
+we now have a way to do this process the other way:
+suppose that $f \colon U \to \RR$ is a function.
+Can we express it as a power series?
+
+Functions for which this \emph{is} true
+are called analytic.
+\begin{definition}
+ A function $f \colon U \to \RR$ is \vocab{analytic} at
+ the point $p \in U$
+ if there exists an open neighborhood $V$ of $p$ (inside $U$)
+ and a power series $\sum_n a_n z^n$ such that
+ \[ f(x) = \sum_{n \ge 0} a_n (x-p)^n \]
+ for any $x \in V$.
+ As usual, the whole function is analytic
+ if it is analytic at each point.
+\end{definition}
+\begin{ques}
+ Show that if $f$ is analytic, then it's smooth.
+\end{ques}
+Moreover, if $f$ is analytic,
+then by the corollary above its coefficients are
+actually described exactly by
+\[ f(x) = \sum_{n \ge 0} \frac{f^{(n)}(p)}{n!} (x-p)^n. \]
+Even if $f$ is smooth but not analytic,
+we can at least write down the power series;
+we give this a name.
+\begin{definition}
+ For smooth $f$,
+ the power series $\sum_{n \ge 0} \frac{f^{(n)}(p)}{n!} z^n$
+ is called the \vocab{Taylor series} of $f$ at $p$.
+\end{definition}
+
+\begin{example}
+ [Examples of analytic functions]
+ \listhack
+ \label{ex:nonanalytic}
+ \begin{enumerate}[(a)]
+ \ii Polynomials, $\sin$, $\cos$, $e^x$, $\log$
+ all turn out to be analytic.
+
+ \ii The smooth function from before defined by
+ \[ f(x) = \begin{cases}
+ \exp(-1/x) & x > 0 \\
+ 0 & x \le 0 \\
+ \end{cases}
+ \]
+ is \emph{not} analytic.
+ Indeed, suppose for contradiction it was.
+ As all the derivatives are zero,
+ its Taylor series would be $0 + 0x + 0x^2 + \dots$.
+ This Taylor series does \emph{converge}, but not to the right value ---
+ as $f(\eps) > 0$ for any $\eps > 0$, contradiction.
+ \end{enumerate}
+\end{example}
+
+\begin{theorem}
+	[Analytic iff Taylor series converges locally]
+	Let $f \colon U \to \RR$ be a smooth function.
+	Then $f$ is analytic if and only if for any point $p \in U$,
+	its Taylor series at $p$ converges to $f$
+	on some open neighborhood of $p$.
+\end{theorem}
+
+\begin{example}
+ It now follows that $f(x) = \sin(x)$ is analytic.
+ To see that, we can compute
+ \begin{align*}
+ f(0) &= \sin 0 = 0 \\
+ f'(0) &= \cos 0 = 1 \\
+ f''(0) &= -\sin 0 = 0 \\
+ f^{(3)}(0) &= -\cos 0 = -1 \\
+ f^{(4)}(0) &= \sin 0 = 0 \\
+ f^{(5)}(0) &= \cos 0 = 1 \\
+ f^{(6)}(0) &= -\sin 0 = 0 \\
+ &\vdotswithin=
+ \end{align*}
+	and so by continuing the pattern
+	(which repeats every four derivatives) we find the Taylor series is
+	\[ z - \frac{z^3}{3!} + \frac{z^5}{5!} - \frac{z^7}{7!} + \dots \]
+	which has radius of convergence $R = \infty$;
+	moreover, since every derivative of $\sin$ is bounded by $1$,
+	one can check the series actually converges to $\sin x$ everywhere.
+\end{example}
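The partial sums of this Taylor series really do approach $\sin$; a short numeric check (an illustration, not a proof):

```python
import math

def sin_series(x, N=20):
    """Partial sum of x - x^3/3! + x^5/5! - ... (first N nonzero terms)."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(N))

for x in (0.5, 1.0, math.pi / 2, 3.0):
    print(x, sin_series(x), math.sin(x))  # the last two columns agree
```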
+
+Like with differentiable functions:
+\begin{proposition}
+ [All your usual closure properties for analytic functions]
+ The sums, products, compositions, nonzero quotients
+ of analytic functions are analytic.
+\end{proposition}
+The upshot of this is that most of your usual
+functions that occur in nature,
+or even artificial ones like $f(x) = e^x + x \sin(x^2)$,
+will be analytic, hence describable locally by Taylor series.
+
+\section{A definition of Euler's number and exponentiation}
+We can actually give a definition of $e^x$ using the tools we have now.
+
+\begin{definition}
+ We define the map $\exp \colon \RR \to \RR$ by
+ using the following power series,
+ which has infinite radius of convergence:
+ \[ \exp(x) = \sum_{n \ge 0} \frac{x^n}{n!}. \]
+	We then define Euler's number as $e = \exp(1)$.
+\end{definition}
+\begin{ques}
+ Show that under this definition, $\exp' = \exp$.
+\end{ques}
+
+We are then settled with:
+\begin{proposition}
+ [$\exp$ is multiplicative]
+ Under this definition,
+ \[ \exp(x+y) = \exp(x) \exp(y). \]
+\end{proposition}
+\begin{proof}
+ [Idea of proof.]
+ There is some subtlety here with switching
+ the order of summation that we won't address.
+ Modulo that:
+ \begin{align*}
+ \exp(x) \exp(y)
+ &= \sum_{n \ge 0} \frac{x^n}{n!}
+ \sum_{m \ge 0} \frac{y^m}{m!}
+ = \sum_{n \ge 0} \sum_{m \ge 0}
+ \frac{x^n}{n!} \frac{y^m}{m!} \\
+ &= \sum_{k \ge 0} \sum_{\substack{m+n = k \\ m,n \ge 0}}
+ \frac{x^n y^m}{n! m!}
+ = \sum_{k \ge 0} \sum_{\substack{m+n = k \\ m,n \ge 0}}
+ \binom{k}{n} \frac{x^n y^m}{k!} \\
+ &= \sum_{k \ge 0} \frac{(x+y)^k}{k!} = \exp(x+y). \qedhere
+ \end{align*}
+\end{proof}
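Modulo floating point, the identity can be spot-checked directly from the series definition:

```python
import math

def exp_series(x, N=60):
    """Partial sum of sum x^n / n! -- the definition of exp in this section."""
    return sum(x**n / math.factorial(n) for n in range(N))

x, y = 1.3, -0.7
print(exp_series(x + y))              # e^0.6
print(exp_series(x) * exp_series(y))  # agrees to floating-point accuracy
```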
+
+\begin{corollary}
+ [$\exp$ is positive]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii We have $\exp(x) > 0$ for any real number $x$.
+ \ii The function $\exp$ is strictly increasing.
+ \end{enumerate}
+\end{corollary}
+\begin{proof}
+ First \[ \exp(x) = \exp(x/2)^2 \ge 0 \]
+ which shows $\exp$ is nonnegative.
+ Also, $1 = \exp(0) = \exp(x) \exp(-x)$ implies $\exp(x) \ne 0$
+ for any $x$, proving (a).
+
+ (b) is just since $\exp'$ is strictly positive (racetrack principle).
+\end{proof}
+
+The log function then comes after.
+\begin{definition}
+ We may define $\log \colon \RR_{>0} \to \RR$
+ to be the inverse function of $\exp$.
+\end{definition}
+Since its derivative is $1/x$ it is smooth;
+and then one may compute its coefficients to show it is analytic.
+
+Note that this actually gives us a rigorous way to define
+$a^r$ for any $a > 0$ and any real number $r$, namely
+\[ a^r \defeq \exp\left( r \log a \right). \]
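As a quick check that this definition matches ordinary exponentiation (using the standard library's `exp` and `log` as stand-ins for the functions just defined):

```python
import math

def power(a, r):
    """a^r defined as exp(r * log a), for a > 0 and real r."""
    return math.exp(r * math.log(a))

print(power(2, 10))   # ~1024
print(power(2, 0.5))  # ~1.41421..., i.e. sqrt(2)
```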
+
+\section{This all works over complex numbers as well,
+except also complex analysis is heaven}
+We now mention that every theorem we referred to above
+holds equally well if we work over $\CC$,
+with essentially no modifications.
+\begin{itemize}
+ \ii Power series are defined by $\sum_n a_n z^n$ with $a_n \in \CC$,
+ rather than $a_n \in \RR$.
+ \ii The definition of radius of convergence $R$ is unchanged!
+ The series will converge if $|z| < R$.
+ \ii Differentiation still works great.
+ (The definition of the derivative is unchanged.)
+ \ii Analytic still works great for functions
+ $f \colon U \to \CC$, with $U \subseteq \CC$ open.
+\end{itemize}
+In particular, we can now even define complex exponentials,
+giving us a function \[ \exp \colon \CC \to \CC \]
+since the power series still has $R = \infty$.
+More generally if $a > 0$ and $z \in \CC$
+we may still define \[ a^z \defeq \exp(z \log a). \]
+(We still require the base $a$ to be a positive real
+so that $\log a$ is defined;
+so the issue of making sense of $i^i$ remains.)
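The same series really does converge for complex inputs; here is a numeric illustration, foreshadowing Euler's formula from the problems (a spot check, not a proof):

```python
import math

def exp_series(z, N=60):
    """exp(z) via its power series; the code works verbatim for complex z."""
    total, term = 0 + 0j, 1 + 0j
    for n in range(1, N + 1):
        total += term       # add z^(n-1) / (n-1)!
        term *= z / n       # next term of the series
    return total

theta = 2.0
print(exp_series(1j * theta))
print(complex(math.cos(theta), math.sin(theta)))  # same value numerically
```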
+
+However, if one tries to study calculus for complex functions
+as we did for the real case,
+in addition to most results carrying over,
+we run into a huge surprise:
+\begin{quote}
+ \itshape
+ If $f \colon \CC \to \CC$ is differentiable,
+ it is analytic.
+\end{quote}
+And this is just the beginning of the nearly unbelievable
+results that hold for complex analytic functions.
+But this is the part on real analysis,
+so you will have to read about this later!
+
+\section{\problemhead}
+
+\begin{problem}
+ Find the Taylor series of $\log(1-x)$.
+\end{problem}
+
+\begin{dproblem}
+ [Euler formula]
+ Show that \[ \exp(i \theta) = \cos \theta + i \sin \theta \]
+ for any real number $\theta$.
+ \begin{hint}
+ Because you know all derivatives of $\sin$ and $\cos$,
+ you can compute their Taylor series,
+ which converge everywhere on $\RR$.
+ At the same time, $\exp$ was defined as a Taylor series,
+ so you can also compute it.
+ Write them all out and compare.
+ \end{hint}
+\end{dproblem}
+
+\begin{dproblem}
+ [Taylor's theorem, Lagrange form]
+ Let $f \colon [a,b] \to \RR$ be continuous
+ and $n+1$ times differentiable on $(a,b)$.
+ Define
+	\[ P_n = \sum_{k=0}^n \frac{f^{(k)}(a)}{k!} \cdot (b-a)^k. \]
+ Prove that there exists $\xi \in (a,b)$ such that
+	\[ f^{(n+1)}(\xi) = (n+1)! \cdot \frac{f(b) - P_n}{(b-a)^{n+1}}. \]
+ This generalizes the mean value theorem
+ (which is the special case $n = 0$, where $P_0 = f(a)$).
+ \begin{hint}
+ Use repeated Rolle's theorem.
+ You don't need any of the theory in this chapter to solve this,
+ so it could have been stated much earlier;
+ but then it would be quite unmotivated.
+ \end{hint}
+\end{dproblem}
+
+\begin{problem}
+ [Putnam 2018 A5]
+ \yod
+ Let $f \colon \RR \to \RR$ be smooth,
+ and assume that $f(0) = 0$, $f(1) = 1$, and $f(x) \ge 0$
+ for every real number $x$.
+ Prove that $f^{(n)}(x) < 0$ for some positive integer $n$
+ and real number $x$.
+ \begin{hint}
+ Use Taylor's theorem.
+ \end{hint}
+\end{problem}
diff --git a/books/napkin/title-embellishments.tex b/books/napkin/title-embellishments.tex
new file mode 100644
index 0000000000000000000000000000000000000000..16024a764f0ebed587bed891347f5bd220b8e0a0
--- /dev/null
+++ b/books/napkin/title-embellishments.tex
@@ -0,0 +1,41 @@
+\begin{titlepage}
+ \vspace*{3cm}
+ \begin{flushright}
+ \large\itshape
+ When introduced to a new idea, always ask why you should care. \\[0.2cm]
+ Do not expect an answer right away, but demand one eventually. \\[0.8cm]
+ --- Ravi Vakil \cite{ref:vakil}
+ \end{flushright}
+
+ \vspace*{8em}
+ \hrulebar
+
+ \begin{center}
+ \begin{minipage}{50ex}
+ \centering
+ If you like this book and want to support me,
+ please consider buying me a coffee! \\[2ex]
+ \href{http://ko-fi.com/evanchen}{\includegraphics[width=32ex]{media/kofi4.png}} \\
+ \url{http://ko-fi.com/evanchen/}
+ \end{minipage}
+ \end{center}
+
+ \vfill
+ {
+ \small
+ \noindent \emph{For Brian and Lisa, who finally got me to write it}. \\[0.4cm]
+ \noindent {\copyright} 2022 Evan Chen. \\
+ Text licensed under
+ \href{https://creativecommons.org/licenses/by-sa/4.0/}{CC-by-SA-4.0}.
+ Source files licensed under
+ \href{https://choosealicense.com/licenses/gpl-3.0/}{GNU GPL v3}.
+ \\[0.4cm]
+ This is (still!) an \textbf{incomplete draft}.
+ Please send corrections, comments, pictures of kittens,
+ etc.\ to \mailto{evan@evanchen.cc},
+ or pull-request at \url{https://github.com/vEnhance/napkin}.
+ \\[0.4cm]
+ \noindent Last updated \today.
+ \vspace*{1cm}
+ }
+\end{titlepage}
diff --git a/books/napkin/top-more.tex b/books/napkin/top-more.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f3f045686595ecbddadfa214040519a5763913fa
--- /dev/null
+++ b/books/napkin/top-more.tex
@@ -0,0 +1,696 @@
+\chapter{Topological spaces}
+\label{ch:top_more}
+
+In \Cref{ch:metric_space} we introduced the notion
+of space by describing metrics on them.
+This gives you a lot of examples, and nice intuition,
+and tells you how you should draw pictures of open and closed sets.
+
+However, moving forward, it will be useful to begin
+thinking about topological spaces in terms of just their \emph{open sets}.
+(One motivation is that our fishy \Cref{ex:fishy} shows that
+in some ways the notion of homeomorphism really wants
+to be phrased in terms of open sets, not in terms of the metric.)
+As we are going to see, the open sets manage to actually retain
+nearly all the information we need, but are simpler.\footnote{The
+ reason I adamantly introduce metric spaces first
+ is because I think otherwise the examples make much less sense.}
+This will be done in just a few sections,
+and after that we will start describing more adjectives
+that we can apply to topological (and hence metric) spaces.
+
+The most important topological notion is missing from this chapter:
+that of a \emph{compact} space.
+It is so important that I have dedicated a separate chapter just for it.
+
+Quick note for those who care:
+the adjectives ``Hausdorff'', ``connected'',
+and later ``compact'' are all absolute adjectives.
+
+%Note that in contrast to the warning on open/closed sets I gave earlier,
+%\begin{moral}
+% The adjectives in this chapter will be used to describe \emph{spaces}.
+%\end{moral}
+
+%As I alluded to earlier, sequences in metric spaces are super nice,
+%but sequences in general topological spaces \emph{suck} (to the point where
+%I didn't bother to define convergence of general sequences).
+
+\section{Forgetting the metric}
+Recall \Cref{thm:open_set}:
+\begin{quote}
+ A function $f \colon M \to N$ of metric spaces is continuous
+ if and only if the pre-image of every open set in $N$ is open in $M$.
+\end{quote}
+Despite us having defined this in the context of metric spaces,
+this nicely doesn't refer to the metric at all,
+only the open sets.
+As alluded to at the start of this chapter,
+this is a great motivation for how we can forget
+about the fact that we had a metric to begin with,
+and rather \emph{start} with the open sets instead.
+
+\begin{definition}
+ A \vocab{topological space} is a pair $(X, \mathcal T)$,
+ where $X$ is a set of points,
+ and $\mathcal T$ is the \vocab{topology},
+ which consists of several subsets of $X$, called the \vocab{open sets} of $X$.
+ The topology must obey the following axioms.
+ \begin{itemize}
+ \ii $\varnothing$ and $X$ are both in $\mathcal T$.
+ \ii Finite intersections of open sets are also in $\mathcal T$.
+ \ii Arbitrary unions (possibly infinite) of open sets are also in $\mathcal T$.
+ \end{itemize}
+\end{definition}
+So this time, the open sets are \emph{given}.
+Rather than defining a metric and getting open sets from the metric,
+we instead start from just the open sets.
+\begin{abuse}
+ We abbreviate $(X, \mathcal T)$ by just $X$,
+ leaving the topology $\mathcal T$ implicit.
+ (Do you see a pattern here?)
+\end{abuse}
+
+\begin{example}[Examples of topologies]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Given a metric space $M$, we can let $\mathcal T$ be
+ the open sets in the metric sense.
+ The point is that the axioms are satisfied.
+	\ii In particular, a \vocab{discrete space}
+	is a topological space in which every set is open. (Why?)
+ \ii Given $X$, we can let $\mathcal T = \left\{ \varnothing, X \right\}$,
+ the opposite extreme of the discrete space.
+ \end{enumerate}
+\end{example}
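For a finite set, the axioms can be checked mechanically; note that for a finite collection of sets, closure under pairwise unions and intersections already gives closure under arbitrary ones. A small Python sketch:

```python
from itertools import chain, combinations

def is_topology(X, T):
    """Check the topology axioms for a candidate collection T on a finite X.

    For finite T, closure under pairwise unions/intersections implies
    closure under arbitrary ones.
    """
    T = {frozenset(s) for s in T}
    X = frozenset(X)
    if frozenset() not in T or X not in T:
        return False
    return all(U & V in T and U | V in T for U in T for V in T)

X = {1, 2, 3}
trivial = [set(), X]
discrete = [set(s) for s in
            chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

print(is_topology(X, trivial), is_topology(X, discrete))  # True True
print(is_topology(X, [set(), {1}, {2}, X]))  # False: {1} | {2} is missing
```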
+
+Now we can port over our metric definitions.
+\begin{definition}
+ An \vocab{open neighborhood}\footnote{In literature,
+ a ``neighborhood'' refers to a set
+ which contains some open set around $x$.
+ We will not use this term,
+ and exclusively refer to ``open neighborhoods''.}
+ of a point $x \in X$ is an
+ open set $U$ which contains $x$ (see figure).
+\end{definition}
+\begin{center}
+ \begin{asy}
+ size(4cm);
+ bigblob("$X$");
+ pair p = Drawing("x", (0.3,0.1), dir(-90));
+ real r = 1.55;
+ draw(shift(p) * scale(1.6,1.2)*unitcircle, dashed);
+ label("$U$", p+r*dir(45), dir(45));
+ \end{asy}
+\end{center}
+
+\begin{abuse}
+ Just to be perfectly clear:
+ by an ``open neighborhood'' I mean \emph{any} open set containing $x$.
+ But by an ``$r$-neighborhood'' I always mean the
+ points with distance less than $r$ from $x$,
+ and so I can only use this term if my space is a metric space.
+\end{abuse}
+
+\section{Re-definitions}
+Now that we've defined a topological space,
+for nearly all of our metric notions we can take as the \emph{definition}
+the formulation that required only open sets
+(which will of course agree with our old definitions
+when we have a metric space).
+
+\subsection{Continuity}
+Here was our motivating example, continuity:
+\begin{definition}
+	We say a function $f \colon X \to Y$ of topological spaces
+ is \vocab{continuous} at a point $p \in X$ if the pre-image of any
+ open neighborhood of $f(p)$ is an open neighborhood of $p$.
+ The function is continuous if it is continuous at every point.
+\end{definition}
+
+Thus the notion of homeomorphism carries over:
+a bijection which is continuous in both directions.
+\begin{definition}
+ A \vocab{homeomorphism} of topological spaces
+ $(X, \tau_X)$ and $(Y, \tau_Y)$
+ is a bijection $f \colon X \to Y$
+ which induces a bijection from $\tau_X$ to $\tau_Y$:
+ i.e.\ the bijection preserves open sets.
+\end{definition}
+\begin{ques}
+ Show that this is equivalent to $f$ and its inverse
+ both being continuous.
+\end{ques}
+Therefore, any property defined only in
+terms of open sets is preserved by homeomorphism.
+Such a property is called a \vocab{topological property}.
+The adjectives we define later
+(``connected'', ``Hausdorff'', ``compact'') will all be defined
+only in terms of the open sets, so they are all topological properties.
+
+
+
+%\begin{definition}
+% A sequence $(x_n)$ of points in a topological space $X$ is said to \vocab{converge to} $x \in X$ if for every open neighborhood of $x$,
+% eventually all terms of the sequence lie in that open neighborhood.
+%\end{definition}
+%\begin{remark}
+% Unfortunately, for general topological spaces we no longer have the nice property
+% that any function which preserves sequential limits is automatically continuous.
+%\end{remark}
+%There's one other property of open sets that we have in a metric space that isn't implied by the above: for any two points of $X$, we can find an open set containing one but not the other.
+%A space which also has this property is called a \vocab{Kolmogorov space}.
+%This property is a good property to have, because if $x,y \in X$ are in the same open sets, the topology can't tell them apart.
+
+\subsection{Closed sets}
+We saw last time there were two equivalent definitions
+for closed sets, but one of them relies only on open sets, and we use it:
+\begin{definition}
+ \label{def:closure}
+ In a general topological space $X$, we say that $S \subseteq X$ is
+ \vocab{closed} in $X$ if the complement $X \setminus S$ is open in $X$.
+
+ If $S \subseteq X$ is any set, the \vocab{closure} of $S$,
+ denoted $\ol S$,
+ is defined as the smallest closed set containing $S$.
+\end{definition}
+Thus for general topological spaces,
+open and closed sets carry the same information,
+and it is entirely a matter of taste whether we define everything in terms
+of open sets or closed sets.
+In particular, you can translate axioms and properties of open sets to closed ones:
+\begin{ques}
+ Show that the (possibly infinite) intersection of closed sets is closed
+ while the union of finitely many closed sets is closed.
+ (Look at complements.)
+\end{ques}
+\begin{exercise}
+ Show that a function is continuous if and only if the pre-image
+ of every closed set is closed.
+\end{exercise}
+Mathematicians seem to have agreed that they like open sets better.
+
+
+\subsection{Properties that don't carry over}
+Not everything works:
+
+\begin{remark}
+ [Complete and (totally) bounded are metric properties]
+ The two metric properties we have seen,
+ ``complete'' and ``(totally) bounded'',
+ are not topological properties.
+ They rely on a metric,
+ so as written we cannot apply them to topological spaces.
+ One might hope that maybe,
+ there is some alternate definition (like we saw for ``continuous function'')
+ that is just open-set based.
+ But \Cref{ex:fishy} showing $(0,1) \cong \RR$
+ tells us that it is hopeless.
+\end{remark}
+
+\begin{remark}
+ [Sequences don't work well]
+ You could also try to port over the notion
+ of sequences and convergent sequences.
+ However, this turns out to break a lot of desirable properties.
+ Therefore I won't bother to do so,
+ and thus if we are discussing sequences you should
+ assume that we are working with a metric space.
+\end{remark}
+
+\section{Hausdorff spaces}
+\prototype{Every space that's not the Zariski topology (defined much later).}
+
+As you might have guessed,
+there exist topological spaces which cannot be realized
+as metric spaces (in other words, are not \vocab{metrizable}).
+One example is just to take $X = \{a,b,c\}$ and the topology
+$\tau_X = \left\{ \varnothing, \{a,b,c\} \right\}$.
+This topology is fairly ``stupid'':
+it can't tell apart any of the points $a$, $b$, $c$!
+But any metric space can tell its points apart (because $d(x,y) > 0$ when $x \neq y$).
+
+We'll see less trivial examples later,
+but for now we want to add a little more sanity condition onto our spaces.
+There is a whole hierarchy of such axioms, labelled $T_n$ for
+integers $n$ (with $n=0$ being the weakest and $n=6$ the strongest);
+these axioms are called \vocab{separation axioms}.
+
+By far the most common hypothesis is the $T_2$ axiom,
+which bears a special name.
+\begin{definition}
+ A topological space $X$ is \vocab{Hausdorff} if
+ for any two distinct points $p$ and $q$ in $X$,
+ there exists an open neighborhood $U$ of $p$
+ and an open neighborhood $V$ of $q$ such that
+ \[ U \cap V = \varnothing. \]
+\end{definition}
+
+In other words, around any two distinct points we should be
+able to draw disjoint open neighborhoods.
+Here's a picture to go with this definition,
+though there is not much going on.
+\begin{center}
+\begin{asy}
+ size(5cm);
+ pair p = (-1.5,0);
+ pair q = ( 1.5,0);
+ filldraw(CR(p,1), opacity(0.1)+lightcyan, blue+dashed);
+ filldraw(CR(q,1), opacity(0.1)+lightred, red+dashed);
+ dot("$p$", p, dir(-90), blue);
+ dot("$q$", q, dir(-90), red);
+\end{asy}
+\end{center}
+
+\begin{ques}
+ Show that all metric spaces are Hausdorff.
+\end{ques}
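
As a numeric sanity check of the standard construction (not a proof, and not part of the text): around distinct points $p$ and $q$, the open balls of radius $\half d(p,q)$ are disjoint by the triangle inequality. The sketch below, with a grid in $\RR^2$ and the Euclidean metric as an illustrative test space, just confirms this on one chosen pair of points.

```python
# Hedged sketch: verify disjointness of the balls of radius d(p,q)/2
# around two distinct points, using the Euclidean metric on a finite
# grid in R^2 (the grid and the points p, q are arbitrary choices).

def dist(x, y):
    return ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) ** 0.5

points = [(i / 4, j / 4) for i in range(-8, 9) for j in range(-8, 9)]

p, q = (-1.0, 0.0), (1.5, 0.25)
r = dist(p, q) / 2

ball_p = {x for x in points if dist(x, p) < r}
ball_q = {x for x in points if dist(x, q) < r}
assert ball_p.isdisjoint(ball_q)  # disjoint neighborhoods, as Hausdorff requires
```

The same radius works for any metric space: if some $x$ were in both balls, the triangle inequality would give $d(p,q) \le d(p,x) + d(x,q) < d(p,q)$.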
+
+I just want to define this here so that I can use this word later.
+In any case, basically any space we will encounter other than
+the Zariski topology is Hausdorff.
+
+
+\section{Subspaces}
+\prototype{$S^1$ is a subspace of $\RR^2$.}
+
+One can also take subspaces of general topological spaces.
+\begin{definition}
+ Given a topological space $X$,
+ and a subset $S \subseteq X$,
+ we can make $S$ into a topological space
+ by declaring that the open subsets of $S$ are $U \cap S$
+ for open $U \subseteq X$.
+ This is called the \vocab{subspace topology}.
+\end{definition}
+So for example, if we view $S^1$ as a subspace of $\RR^2$,
+then any open arc is an open set,
+because you can view it as the intersection of an open disk with $S^1$.
+\begin{center}
+ \begin{asy}
+ size(3cm);
+ draw(unitcircle, black+1);
+ MP("S^1", dir(60), dir(60));
+ MP("\mathbb R^2", dir(-45)*1.2, dir(-45));
+ pair A = dir(-30);
+ pair B = dir(50);
+ draw(CP(dir(10), A), dotted);
+ draw(arc(origin,A,B), blue+2);
+ dotfactor *= 2;
+ opendot(A, blue);
+ opendot(B, blue);
+ \end{asy}
+\end{center}
+
+Needless to say, for metric spaces it doesn't matter
+whether I take the subspace topology as above
+or just restrict the metric to $S$: both give the same open sets.
+(Proving this turns out to be surprisingly annoying,
+so I won't do so.)
+
+\section{Connected spaces}
+\prototype{$[0,1] \cup [2,3]$ is disconnected.}
+Even in metric spaces, it is possible for a set
+to be both open and closed.
+\begin{definition}
+ A subset $S$ of a topological space $X$
+ is \vocab{clopen} if it is both closed and open in $X$.
+ (Equivalently, both $S$ and its complement are open.)
+\end{definition}
+For example, $\varnothing$ and the entire space are always clopen.
+In fact, the presence of a nontrivial clopen set other
+than these two leads to a so-called \emph{disconnected} space.
+
+\begin{ques}
+ Show that a space $X$ has a nontrivial clopen set
+ (one other than $\varnothing$ and $X$)
+ if and only if $X$ can be written as a disjoint union
+ of two nonempty open sets.
+\end{ques}
+
+We say $X$ is \vocab{disconnected}
+if there are nontrivial clopen sets,
+and \vocab{connected} otherwise.
+To see why this should be a reasonable definition,
+it might help to solve \Cref{prob:disconnected_better_def}.
+
+\begin{example}[Disconnected and connected spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The metric space
+ \[ \{ (x,y) \mid x^2+y^2\le 1 \}
+ \cup \{ (x,y) \mid (x-4)^2+y^2\le1\} \subseteq \RR^2 \]
+ is disconnected (it consists of two disks).
+
+ \ii The space $[0,1] \cup [2,3]$ is disconnected:
+ it consists of two segments,
+ each of which is a clopen set.
+
+ \ii A discrete space on more than one point is disconnected,
+ since \emph{every} set is clopen in the discrete space.
+
+ \ii Convince yourself that the set
+ \[ \left\{ x \in \QQ : x^2 < 2014 \right\} \]
+ is a clopen subset of $\QQ$.
+ Hence $\QQ$ is disconnected too -- it has \emph{gaps}.
+
+ \ii $[0,1]$ is connected.
+ \end{enumerate}
+\end{example}
+
+%\begin{remark}
+% For general topological spaces $X$, it is still true that if $S$ is closed
+% (meaning $X \setminus S$ is open), then it contains the limits of all its sequences.
+% But the converse of this statement no longer holds.
+%\end{remark}
+
+
+\section{Path-connected spaces}
+\prototype{Walking around in $\CC$.}
+A stronger and perhaps more intuitive notion
+of a connected space is a \emph{path-connected} space.
+The short description: ``walk around in the space''.
+% We have general topological spaces, but so far they've just been sitting there.
+% Let's start walking in them.
+
+% Throughout this section, we let $I = [0,1]$ with the usual topology
+% inherited from $\RR$.
+%Also, recall that $S^n$ is the surface of the sphere living
+%in $n+1$ dimensional space; hence $S^1$ is a unit circle, say.
+
+\begin{definition}
+ A \vocab{path} in the space $X$ is a continuous function
+ \[ \gamma : [0,1] \to X. \]
+ Its \vocab{endpoints} are the two points $\gamma(0)$ and $\gamma(1)$.
+\end{definition}
+
+You can think of $[0,1]$ as measuring ``time'', and so we'll often write $\gamma(t)$
+for $t \in [0,1]$ (with $t$ standing for ``time'').
+Here's a picture of a path.
+\begin{center}
+ \begin{asy}
+ bigblob("$X$");
+ pair A = Drawing("\gamma(0)", (-3,-1));
+ pair B = Drawing("\gamma(1)", (2,1), dir(90));
+ path p = A..(-2,0)..(0,2)..(1,0)..B;
+ draw(p, EndArrow);
+ MP("\gamma", midpoint(p), dir(90));
+ \end{asy}
+\end{center}
+\begin{ques}
+ Why does this agree with your intuitive notion of what a ``path'' is?
+\end{ques}
+
+\begin{definition}
+ A space $X$ is \vocab{path-connected} if
+ any two points in it are connected by some path.
+\end{definition}
+
+\begin{exercise}
+ [Path-connected implies connected]
+ Let $X = U \sqcup V$ be a disconnected space.
+ Show that there is no path
+ from a point of $U$ to a point of $V$.
+ (If $\gamma \colon [0,1] \to X$ were such a path, then we would get
+ $[0,1] = \gamma\pre(U) \sqcup \gamma\pre(V)$,
+ but $[0,1]$ is connected.)
+\end{exercise}
+
+\begin{example}[Examples of path-connected spaces]
+ \listhack
+ \begin{itemize}
+ \ii $\RR^2$ is path-connected,
+ since we can ``connect'' any two points with a straight line.
+ \ii The unit circle $S^1$ is path-connected, since
+ we can just draw the major or minor arc to connect two points.
+ \end{itemize}
+\end{example}
+
+% Easy, right?
+
+\section{Homotopy and simply connected spaces}
+\prototype{$\CC$ and $\CC \setminus \{0\}$.}
+\label{sec:meteor}
+
+Now let's motivate the idea of homotopy.
+Consider the example of the complex plane $\CC$ (which you can
+think of just as $\RR^2$) with two points $p$ and $q$.
+There's a whole bunch of paths from $p$ to $q$ but somehow
+they're not very different from one another.
+If I told you ``walk from $p$ to $q$'' you wouldn't have too many questions.
+
+\begin{center}
+ \begin{asy}
+ unitsize(0.8cm);
+ bigbox("$\mathbb C$");
+ pair A = Drawing("p", (-3,0), dir(180));
+ pair B = Drawing("q", (3,0), dir(0));
+ draw(A..(-2,0.5)..(0,2)..(1,1.2)..B, red, EndArrow(TeXHead));
+ draw(A--B, mediumgreen, EndArrow(TeXHead));
+ draw(A--(-1,-1)--(2,-1)--B, blue, EndArrow(TeXHead));
+ // draw(A..(1.5,-2)..(-1.5,-2)..B, orange, EndArrow(TeXHead));
+ \end{asy}
+\end{center}
+
+So we're living happily in $\CC$ until a meteor strikes the origin,
+blowing it out of existence.
+Then suddenly to get from $p$ to $q$, people might tell you two different things:
+``go left around the meteor'' or ``go right around the meteor''.
+
+\begin{center}
+ \begin{asy}
+ unitsize(0.8cm);
+ bigbox("$\mathbb C \setminus \{0\}$");
+ pair A = Drawing("p", (-3,0), dir(180));
+ pair B = Drawing("q", (3,0), dir(0));
+ draw(A..(-2,0.5)..(0,2)..(1,1.2)..B, red, EndArrow(TeXHead));
+ draw(A--(-1,-1)--(2,-1)--B, blue, EndArrow(TeXHead));
+ filldraw(scale(0.5)*unitcircle, grey, black);
+ \end{asy}
+\end{center}
+
+So what's happening?
+In the first picture, the red, green, and blue paths somehow all looked
+the same: if you imagine them as pieces of elastic string pinned down
+at $p$ and $q$, you can stretch each one to any other one.
+
+But in the second picture,
+you can't move the red string to match with the blue string:
+there's a meteor in the way.
+The paths are actually different.\footnote{If you know about winding numbers,
+ you might feel this is familiar.
+ We'll talk more about this in the chapter on the fundamental group.}
+
+The formal notion we'll use to capture this is \emph{homotopy equivalence}.
+We want to write a definition such that in the first picture,
+the three paths are all \emph{homotopic}, but the two paths in the
+second picture are somehow not homotopic.
+And the idea is just continuous deformation.
+
+\begin{definition}
+ Let $\alpha$ and $\beta$ be paths in $X$ whose endpoints coincide.
+ A (path) \vocab{homotopy} from $\alpha$ to $\beta$ is a continuous function
+ $F : [0,1]^2 \to X$, which we'll write $F_s(t)$ for $s,t \in [0,1]$,
+ such that
+ \[ F_0(t) = \alpha(t) \text{ and } F_1(t) = \beta(t)
+ \text{ for all $t \in [0,1]$} \]
+ and moreover
+ \[ \alpha(0) = \beta(0) = F_s(0)
+ \text{ and }
+ \alpha(1) = \beta(1) = F_s(1)
+ \text{ for all $s \in [0,1]$}. \]
+ If a path homotopy exists, we say $\alpha$ and $\beta$
+ are path \vocab{homotopic} and write $\alpha \simeq \beta$.
+\end{definition}
+\begin{abuse}
+ While I strictly should say ``path homotopy'' to describe this relation
+ between two paths, I will shorten this to just ``homotopy'' instead.
+ Similarly I will shorten ``path homotopic'' to ``homotopic''.
+\end{abuse}
+Animated picture: \url{https://commons.wikimedia.org/wiki/File:HomotopySmall.gif}.
+Needless to say, $\simeq$ is an equivalence relation.
+
+What this definition is doing is taking $\alpha$ and ``continuously deforming'' it to $\beta$, while keeping the endpoints fixed.
+Note that for each particular $s$, $F_s$ is itself a function.
+So $s$ represents time as we deform $\alpha$ to $\beta$:
+it goes from $0$ to $1$, starting at $\alpha$ and ending at $\beta$.
+
+\begin{center}
+ \begin{asy}
+ size(9cm);
+ bigbox("$\mathbb C$");
+ pair A = Drawing("p", (-3,0), dir(180));
+ pair B = Drawing("q", (3,0), dir(0));
+ draw(A..MP("F_{0} = \alpha", (0,2), dir(45))..B, heavygreen);
+ draw(A..MP("F_{0.25}", (0,1), dir(45))..B, mediumgreen);
+ draw(A..MP("F_{0.5}", (0,0), dir(90))..B, palecyan);
+ draw(A..MP("F_{0.75}", (0,-1), dir(-45))..B, mediumcyan);
+ draw(A..MP("F_{1} = \beta", (0,-2), dir(-45))..B, heavycyan);
+ // draw(A..(1.5,-2)..(-1.5,-2)..B, orange, EndArrow);
+ \end{asy}
+\end{center}
+
+\begin{ques}
+ Convince yourself the above definition is right.
+ What goes wrong when the meteor strikes?
+\end{ques}
+
+So now I can tell you what makes $\CC$ special:
+\begin{definition}
+ A space $X$ is \vocab{simply connected} if it's path-connected and
+ for any points $p$ and $q$, all paths from $p$ to $q$ are homotopic.
+\end{definition}
+That's why you don't ask questions when walking from $p$ to $q$ in $\CC$:
+there's really only one way to walk. Hence the term ``simply'' connected.
+
+\begin{ques}
+ Convince yourself that $\RR^n$ is simply connected for all $n$.
+\end{ques}
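
The key construction for the question above is the \emph{straight-line homotopy} $F_s(t) = (1-s)\alpha(t) + s\beta(t)$, which makes sense in $\RR^n$ because we can take convex combinations of points. As a hedged numeric illustration (one chosen pair of paths, not a proof), the sketch below checks the defining conditions of a path homotopy for a hypothetical curved path and the straight segment with the same endpoints:

```python
# Illustrative sketch: straight-line homotopy in R^2 between a chosen
# parabolic arc alpha and the straight segment beta, both from (0,0)
# to (1,0).  The specific paths are assumptions for the demo.

def alpha(t):
    # a curved path from (0, 0) to (1, 0)
    return (t, 4 * t * (1 - t))

def beta(t):
    # the straight segment between the same endpoints
    return (t, 0.0)

def F(s, t):
    # F_s(t) = (1 - s) * alpha(t) + s * beta(t)
    (ax, ay), (bx, by) = alpha(t), beta(t)
    return ((1 - s) * ax + s * bx, (1 - s) * ay + s * by)

def close(u, v, eps=1e-12):
    return abs(u[0] - v[0]) < eps and abs(u[1] - v[1]) < eps

ts = [i / 100 for i in range(101)]
ss = [i / 10 for i in range(11)]

# F_0 = alpha and F_1 = beta, and the endpoints stay pinned for all s:
assert all(close(F(0, t), alpha(t)) and close(F(1, t), beta(t)) for t in ts)
assert all(close(F(s, 0), (0.0, 0.0)) and close(F(s, 1), (1.0, 0.0)) for s in ss)
```

Continuity of $F$ is clear since it is built from continuous operations, and nothing about the formula is special to $n = 2$; this is exactly why $\RR^n$ is simply connected, and why the argument breaks when a point (the meteor) is removed: the straight line between $\alpha(t)$ and $\beta(t)$ might pass through the missing point.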
+
+\section{Bases of spaces}
+\prototype{$\RR$ has a basis of open intervals, and $\RR^2$ has a basis of open disks.}
+
+You might have noticed that the open sets of $\RR$ are a little annoying to describe:
+the prototypical example of an open set is $(0,1)$,
+but there are other open sets like
+\[
+ (0,1)
+ \cup \left( 1, \frac32 \right)
+ \cup \left( 2, \frac 73 \right)
+ \cup (2014, 2015). \]
+\begin{ques}
+ Check this is an open set.
+\end{ques}
+
+But okay, this isn't \emph{that} different.
+All I've done is take a bunch of my prototypes and throw some $\cup$ signs at them.
+And that's the idea behind a basis.
+
+\begin{definition}
+ A \vocab{basis} for a topological space $X$
+ is a subset $\mathcal B$ of the open sets
+ such that every open set in $X$
+ is a union of (possibly infinitely many) elements of
+ $\mathcal B$.
+\end{definition}
+
+And all we're doing is saying
+\begin{example}[Basis of $\RR$]
+ The open intervals form a basis of $\RR$.
+\end{example}
+In fact, more generally we have:
+\begin{theorem}[Basis of metric spaces]
+ The $r$-neighborhoods form a basis of any metric space $M$.
+\end{theorem}
+\begin{proof}
+ Kind of silly -- given an open set $U$,
+ draw around each point $p \in U$ an $r_p$-neighborhood $U_p$ of $p$
+ contained entirely inside $U$.
+ Then $\bigcup_p U_p$ is contained in $U$ and covers
+ every point inside it.
+\end{proof}
+
+Hence, an open set in $\RR^2$ is nothing more than a union
+of a bunch of open disks, and so on.
+The point is that in a metric space, the only open sets you really
+ever have to worry too much about are the $r$-neighborhoods.
+
+
+\section{\problemhead}
+
+\begin{dproblem}
+ Let $X$ be a topological space.
+ Show that there exists a nonconstant continuous function $X \to \{0,1\}$ if and
+ only if $X$ is disconnected (here $\{0,1\}$ is given the discrete topology).
+ \label{prob:disconnected_better_def}
+\end{dproblem}
+
+\begin{sproblem}
+ Let $X$ and $Y$ be topological spaces
+ and let $f \colon X \to Y$ be a continuous function.
+ \begin{enumerate}[(a)]
+ \ii Show that if $X$ is connected then so is $f\im(X)$.
+ \ii Show that if $X$ is path-connected then so is $f\im(X)$.
+ \end{enumerate}
+\end{sproblem}
+
+\begin{problem}
+ [Hausdorff implies $T_1$ axiom]
+ Let $X$ be a Hausdorff topological space.
+ Prove that for any point $p \in X$
+ the set $\{p\}$ is closed.
+\end{problem}
+
+\begin{problem}
+ [\cite{ref:pugh}, Exercise 2.56]
+ Let $M$ be a metric space with more than one point
+ but at most countably infinitely many points.
+ Show that $M$ is disconnected.
+ \begin{hint}
+ Let $p$ be any point.
+ If there is a real number $r$
+ such that $d(p,q) \ne r$ for any $q \in M$,
+ then the $r$-neighborhood of $p$ is clopen.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[Furstenberg]
+ We declare a subset of $\ZZ$ to be open
+ if it's the union (possibly empty or infinite)
+ of arithmetic sequences $\left\{ a + nd \mid n \in \ZZ \right\}$,
+ where $a$ and $d$ are positive integers.
+ \begin{enumerate}[(a)]
+ \ii Verify this forms a topology on $\ZZ$,
+ called the \vocab{evenly spaced integer topology}.
+ \ii Prove there are infinitely many primes by considering $\bigcup_p p\ZZ$
+ for primes $p$.
+ \end{enumerate}
+ \begin{hint}
+ Note that $p\ZZ$ is closed for each $p$.
+ If there were finitely many primes, then
+ $\bigcup p\ZZ = \ZZ \setminus \{-1,1\}$ would have to be closed;
+ i.e.\ $\{-1,1\}$ would be open, but all open sets here are infinite.
+ \end{hint}
+\end{problem}
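
For part (a), the only nonobvious axiom is closure under finite intersections, and the point is that the intersection of two arithmetic progressions is either empty or again a progression with common difference $\lcm(d_1, d_2)$ (by the Chinese remainder theorem). As a hedged finite spot-check of that fact (the window $[-500, 500]$ and the test cases are arbitrary choices, and this is of course no substitute for the CRT argument):

```python
from math import gcd

# Sketch: within a finite window, the intersection of the progressions
# a1 + d1*Z and a2 + d2*Z is either empty or contained in a single
# progression with common difference lcm(d1, d2).

def progression(a, d, lo=-500, hi=500):
    return {a + n * d for n in range((lo - a) // d, (hi - a) // d + 1)}

def check(a1, d1, a2, d2):
    inter = progression(a1, d1) & progression(a2, d2)
    if not inter:
        return True
    L = d1 * d2 // gcd(d1, d2)      # lcm of the two differences
    a = min(inter, key=abs)         # any representative of the class mod L
    return inter <= progression(a, L)

assert check(1, 4, 3, 6)   # nonempty: both describe x = 9 (mod 12)
assert check(0, 4, 2, 6)   # nonempty: both describe x = 8 (mod 12)
assert check(1, 4, 0, 2)   # empty: odd vs even
```

The hint for part (b) then works because each $p\ZZ$ is also \emph{closed} (its complement is a union of the other residue classes mod $p$), while every nonempty open set is infinite.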
+
+\begin{problem}
+ \gim
+ Prove that the evenly spaced integer topology on $\ZZ$ is metrizable.
+ In other words, show that one can impose a metric $d : \ZZ^2 \to \RR$
+ which makes $\ZZ$ into a metric space whose open sets are those described above.
+ % https://teratologicmuseum.wordpress.com/2009/05/05/a-metric-for-the-evenly-spaced-integer-topology/
+ \begin{hint}
+ The balls at $0$ should be of the form $n! \cdot \ZZ$.
+ \end{hint}
+ \begin{sol}
+ Let $d(x,y) = 2017^{-n}$,
+ where $n$ is the largest integer
+ such that $n!$ divides $\left\lvert x-y \right\rvert$.
+ \end{sol}
+\end{problem}
+
+%\begin{problem}
+% A topological space $X$ is called \vocab{locally path-connected}
+% if for every point $x \in X$ and open neighborhood $U$ of $x$,
+% some open neighborhood $V$ of $x$ contained in $U$ is path-connected.
+% Prove that $X$ is path-connected if and only if it is connected
+% and locally path-connected.
+% \label{prob:local_path_connected}
+%\end{problem}
+
+\begin{problem}
+ \gim
+ We know that any open set $U \subseteq \RR$
+ is a union of open intervals (allowing $\pm\infty$ as endpoints).
+ One can show that it's actually possible to write $U$ as the
+ union of \emph{pairwise disjoint} open intervals.\footnote{You are invited to try
+ and prove this, but I personally found the proof quite boring.}
+ Prove that there exists such a disjoint union with at most \emph{countably many}
+ intervals in it.
+ \begin{hint}
+ Appeal to $\QQ$.
+ \end{hint}
+ \begin{sol}
+ You can pick a rational number in each interval and
+ there are only countably many rational numbers. Done!
+ \end{sol}
+\end{problem}
diff --git a/books/napkin/transpose.tex b/books/napkin/transpose.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c5d9a0833b079d2b4974856946f47f40616165e9
--- /dev/null
+++ b/books/napkin/transpose.tex
@@ -0,0 +1,535 @@
+\chapter{Duals, adjoint, and transposes}
+This chapter is dedicated to the basis-free interpretation
+of the transpose and conjugate transpose of a matrix.
+
+Poster corollary: we will see that
+symmetric matrices with real coefficients
+are diagonalizable and have real eigenvalues.
+
+\section{Dual of a map}
+\prototype{The example below.}
+We go ahead and now define a notion
+that will grow up to be the transpose of a matrix.
+
+\begin{definition}
+ Let $V$ and $W$ be vector spaces.
+ Suppose $T \colon V \to W$ is a linear map.
+ Then we actually get a map
+ \begin{align*}
+ T^\vee \colon W^\vee &\to V^\vee \\
+ f &\mapsto f \circ T.
+ \end{align*}
+ This map is called the \vocab{dual map}.
+\end{definition}
+
+\begin{example}
+ [Example of a dual map]
+ Work over $\RR$.
+ Let's consider $V$ with basis $e_1$, $e_2$, $e_3$
+ and $W$ with basis $f_1$, $f_2$.
+ Suppose that
+ \begin{align*}
+ T(e_1) &= f_1 + 2f_2 \\
+ T(e_2) &= 3f_1 + 4f_2 \\
+ T(e_3) &= 5f_1 + 6f_2.
+ \end{align*}
+ Now consider $V^\vee$ with its dual basis $e_1^\vee$,
+ $e_2^\vee$, $e_3^\vee$
+ and $W^\vee$ with its dual basis $f_1^\vee$, $f_2^\vee$.
+ Let's compute $T^\vee(f_1^\vee) = f_1^\vee \circ T$:
+ it is given by
+ \begin{align*}
+ f_1^\vee \left( T(ae_1 + be_2 + ce_3) \right)
+ &= f_1^\vee\left( (a+3b+5c) f_1 + (2a+4b+6c) f_2 \right) \\
+ &= a + 3b + 5c.
+ \end{align*}
+ So accordingly we can write
+ \[ T^\vee(f_1^\vee) = e_1^\vee + 3e_2^\vee + 5e_3^\vee \]
+ Similarly,
+ \[ T^\vee(f_2^\vee) = 2e_1^\vee + 4e_2^\vee + 6e_3^\vee. \]
+ This determines $T^\vee$ completely.
+\end{example}
+If we write the matrices for $T$ and $T^\vee$ in terms of our basis,
+we now see that
+\[ T = \begin{bmatrix}
+ 1 & 3 & 5 \\
+ 2 & 4 & 6
+ \end{bmatrix}
+ \quad\text{and}\quad
+ T^\vee = \begin{bmatrix}
+ 1 & 2 \\
+ 3 & 4 \\
+ 5 & 6
+ \end{bmatrix}.
+\]
+So in our selected basis,
+we find that the matrices are \vocab{transposes}:
+mirror images of each other over the diagonal.
+
+Of course, this should work in general.
+\begin{theorem}
+ [Transpose interpretation of $T^\vee$]
+ Let $V$ and $W$ be finite-dimensional $k$-vector spaces.
+ Then, for any $T \colon V \to W$,
+ the following two matrices are transposes:
+ \begin{itemize}
+ \ii The matrix for $T \colon V \to W$
+ expressed in the basis $(e_i)$, $(f_j)$.
+ \ii The matrix for $T^\vee \colon W^\vee \to V^\vee$
+ expressed in the basis $(f_j^\vee)$, $(e_i^\vee)$.
+ \end{itemize}
+\end{theorem}
+\begin{proof}
+ The $(i,j)$th entry of the matrix $T$
+ corresponds to the coefficient of $f_i$ in $T(e_j)$,
+ which corresponds to the coefficient of $e_j^\vee$
+ in $f_i^\vee \circ T$.
+\end{proof}
+The nice part of this is that the definition of $T^\vee$ is basis-free.
+So it means that if we start with any linear map $T$,
+and then pick whichever basis we feel like,
+then $T$ and $T^\vee$ will still be transposes.
+
+\section{Identifying with the dual space}
+For the rest of this chapter, though,
+we'll now bring inner products into the picture.
+
+Earlier I complained that there was no natural isomorphism $V \cong V^\vee$.
+But in fact, given an inner form
+we can actually make such an identification:
+that is, we can naturally associate every linear map
+$\xi \colon V \to k$ with a vector $v \in V$.
+
+To see how we might do this, suppose $V = \RR^3$
+for now with an orthonormal basis $e_1$, $e_2$, $e_3$.
+How might we use the inner product to
+represent a map from $V \to \RR$?
+For example, take $\xi \in V^\vee$ by
+$\xi(e_1) = 3$, $\xi(e_2) = 4$ and $\xi(e_3) = 5$.
+Actually, I claim that
+\[ \xi(v) = \left< v, 3e_1 + 4e_2 + 5e_3 \right> \]
+for every $v$.
+\begin{ques}
+ Check this.
+\end{ques}
+
+And this works beautifully in the real case.
+\begin{theorem}[$V \cong V^\vee$ for real inner form]
+ \label{thm:real_dual_isomorphic}
+ Let $V$ be a finite-dimensional \emph{real}
+ inner product space and $V^\vee$ its dual.
+ Then the map $V \to V^\vee$ by
+ \[ v \mapsto \left< -, v \right> \in V^\vee \]
+ is an isomorphism of real vector spaces.
+\end{theorem}
+\begin{proof}
+ It suffices to show that the map is injective and surjective.
+ \begin{itemize}
+ \ii Injective: suppose $\left< v_1, v \right> = \left< v_2, v \right>$
+ for every vector $v \in V$.
+ This means $\left< v_1 - v_2, v \right> = 0$ for every vector $v \in V$.
+ This can only happen if $v_1 - v_2 = 0$; for example, take $v = v_1 - v_2$
+ and use positive definiteness.
+ \ii Surjective: take an orthonormal basis $e_1$, \dots $e_n$
+ and let $e_1^\vee$, \dots, $e_n^\vee$ be the dual basis on $V^\vee$.
+ Then $e_1$ maps to $e_1^\vee$, et cetera.
+ \qedhere
+ \end{itemize}
+\end{proof}
+Actually, since we already know $\dim V = \dim V^\vee$
+we only had to prove one of the above.
+As a matter of personal taste, I find the proof of injectivity more elegant,
+and the proof of surjectivity more enlightening,
+so I included both.
+Thus
+\begin{moral}
+ If a real inner product space $V$ is given an inner form,
+ then $V$ and $V^\vee$ are canonically isomorphic.
+\end{moral}
+
+Unfortunately, things go awry if $V$ is complex.
+Here is the result:
+\begin{theorem}[$V$ versus $V^\vee$ for complex inner forms]
+ Let $V$ be a finite-dimensional \emph{complex}
+ inner product space and $V^\vee$ its dual.
+ Then the map $V \to V^\vee$ by
+ \[ v \mapsto \left< -, v \right> \in V^\vee \]
+ is a bijection of sets.
+\end{theorem}
+Wait, what? Well, the proof above shows that it is both injective
+and surjective, but why is it not an isomorphism?
+The answer is that it is not a linear map:
+since the form is sesquilinear we have for example
+\[ iv \mapsto \left< -, iv\right> = -i \left< -, v\right> \]
+which has introduced a minus sign!
+In fact, it is an \emph{anti-linear} map, in the sense we defined before.
+
+Eager readers might try to fix this by defining
+the isomorphism $v \mapsto \left< v, - \right>$ instead.
+However, this also fails, because the right-hand side
+is not even an element of $V^\vee$:
+it is anti-linear, not linear.
+
+And so we are stuck.
+Fortunately, we will only need the ``bijection'' result
+for what follows, so we can continue on anyways.
+(If you want to fix this, \Cref{prob:complex_conj_space}
+gives a way to do so.)
+
+
+\section{The adjoint (conjugate transpose)}
+We will see that, as a result of the flipping above,
+the \emph{conjugate transpose} is actually the better concept
+for inner product spaces: it can be defined using only the inner product,
+without any mention of dual spaces at all.
+\begin{definition}
+ Let $V$ and $W$ be finite-dimensional inner product spaces,
+ and let $T \colon V \to W$.
+ The \vocab{adjoint} (or \vocab{conjugate transpose})
+ of $T$, denoted $T^\dagger \colon W \to V$,
+ is defined as follows: for every vector $w \in W$,
+ we let $T^\dagger(w) \in V$ be the unique vector with
+ \[ \left< v, T^\dagger(w) \right>_V = \left< T(v), w \right>_W \]
+ for every $v \in V$.
+\end{definition}
+
+Some immediate remarks about this definition:
+\begin{itemize}
+\ii Our $T^\dagger$ is well-defined,
+because $\left< T(-), w \right>_W$ is some function in $V^\vee$,
+and hence by the bijection earlier
+it should be uniquely of the form $\left< -, v \right>$ for some $v \in V$.
+\ii This map $T^\dagger$ is indeed a linear map (why?).
+\ii The niceness of this definition is that it doesn't
+make reference to any basis or even $V^\vee$,
+so it is the ``right'' definition for an inner product space.
+\ii By symmetry, of course, we also have
+$\left< T^\dagger(v), w \right> = \left< v, T(w)\right>$.
+\end{itemize}
+
+\begin{example}
+ [Example of an adjoint map]
+ We'll work over $\CC$, so the conjugates are more visible.
+ Let's consider $V$ with orthonormal basis $e_1$, $e_2$, $e_3$
+ and $W$ with orthonormal basis $f_1$, $f_2$.
+ We put
+ \begin{align*}
+ T(e_1) &= if_1 + 2f_2 \\
+ T(e_2) &= 3f_1 + 4f_2 \\
+ T(e_3) &= 5f_1 + 6if_2.
+ \end{align*}
+ We compute $T^\dagger(f_1)$.
+ It is the unique vector $x \in V$ such that
+ \[ \left< v, x \right>_V = \left< T(v), f_1 \right>_W \]
+ for any $v \in V$.
+ If we expand $v = ae_1 + be_2 + ce_3$ the above equality becomes
+ \begin{align*}
+ \left< ae_1 + be_2 + ce_3, x \right>_V
+ &= \left< T(ae_1 + be_2 + ce_3), f_1 \right>_W \\
+ &= ia + 3b + 5c.
+ \end{align*}
+ However, since $x$ is in the second argument,
+ this means we actually want to take
+ \[ T^\dagger(f_1) = -ie_1 + 3e_2 + 5e_3 \]
+ so that the sesquilinearity will conjugate the $i$.
+\end{example}
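
A hedged numeric check of this example (the inner product convention matches the text: linear in the first argument, conjugate-linear in the second):

```python
import numpy as np

# Sketch: with orthonormal bases, the matrix of T^dagger is the
# conjugate transpose of the matrix of T.  Entries from the example.

def inner(x, y):
    # <x, y>: linear in x, conjugate-linear in y, as in the text
    return x @ np.conj(y)

A = np.array([[1j, 3, 5],
              [2, 4, 6j]])      # columns are T(e_1), T(e_2), T(e_3)
A_dag = A.conj().T              # candidate matrix for T^dagger

# Defining property <v, T^dagger(w)> = <T(v), w> on a few random vectors:
rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.normal(size=3) + 1j * rng.normal(size=3)
    w = rng.normal(size=2) + 1j * rng.normal(size=2)
    assert np.isclose(inner(v, A_dag @ w), inner(A @ v, w))

# In particular T^dagger(f_1) = -i e_1 + 3 e_2 + 5 e_3:
assert np.array_equal(A_dag[:, 0], np.array([-1j, 3, 5]))
```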
+
+
+The pattern continues, though we remind the reader that
+we need the basis to be orthonormal to proceed.
+\begin{theorem}
+ [Adjoints are conjugate transposes]
+ Fix an \emph{orthonormal} basis of a finite-dimensional inner product space $V$.
+ Let $T \colon V \to V$ be a linear map.
+ If we write $T$ as a matrix in this basis,
+ then the matrix $T^\dagger$ (in the same basis)
+ is the \emph{conjugate transpose} of the matrix of $T$;
+ that is, the $(i,j)$th entry of the matrix of $T^\dagger$
+ is the complex conjugate of the $(j,i)$th entry of $T$.
+\end{theorem}
+\begin{proof}
+ One-line version: take $v$ and $w$ to be basis elements,
+ and this falls right out.
+
+ Full proof: let
+ \[ T = \begin{bmatrix}
+ a_{11} & \dots & a_{1n} \\
+ \vdots & \ddots & \vdots \\
+ a_{n1} & \dots & a_{nn}
+ \end{bmatrix} \]
+ in this basis $e_1$, \dots, $e_n$.
+ Then, letting $w = e_i$ and $v = e_j$ we deduce that
+ \[ \left< e_i, T^\dagger(e_j) \right> = \left< T(e_i) , e_j\right> = a_{ji}
+ \implies
+ \left< T^\dagger(e_j), e_i \right> = \ol{a_{ji}} \]
+ for any $i$, which is enough to deduce the result.
+\end{proof}
+
+\section{Eigenvalues of normal maps}
+We now come to the advertised theorem.
+Restrict to the situation where $T \colon V \to V$.
+You see, the world would be a very beautiful place if it turned out
+that we could pick a basis of eigenvectors that was also \emph{orthonormal}.
+This is of course far too much to hope for;
+even without the orthonormal condition,
+we saw that Jordan form could still have $1$'s off the diagonal.
+
+However, it turns out that there is
+a complete characterization of exactly when our overzealous dream is true.
+
+\begin{definition}
+ We say a linear map $T$
+ (from a finite-dimensional inner product space to itself)
+ is \vocab{normal} if $TT^\dagger = T^\dagger T$.
+
+ We say a complex $T$ is \vocab{self-adjoint} or \vocab{Hermitian} if $T = T^\dagger$;
+ i.e.\ as a matrix in any orthonormal basis, $T$ is its own conjugate transpose.
+ For real $T$ we say ``self-adjoint'', ``Hermitian'' or \vocab{symmetric}.
+\end{definition}
+\begin{theorem}
+ [Normal $\iff$ diagonalizable with orthonormal basis]
+ Let $V$ be a finite-dimensional complex inner product space.
+ A linear map $T \colon V \to V$ is normal
+ if and only if one can pick an orthonormal basis of eigenvectors.
+\end{theorem}
+\begin{exercise}
+ Show that if there exists such an orthonormal basis
+ then $T \colon V \to V$ is normal,
+ by writing $T$ as a diagonal matrix in that basis.
+\end{exercise}
+\begin{proof}
+ This is long, and maybe should be omitted on a first reading.
+ If $T$ has an orthonormal basis of eigenvectors,
+ this result is immediate.
+
+ Now assume $T$ is normal.
+ We first prove $T$ is diagonalizable; this is the hard part.
+ \begin{claim}
+ If $T$ is normal, then $\ker T = \ker T^r = \ker T^\dagger$ for $r \ge 1$.
+ (Here $T^r$ is $T$ applied $r$ times.)
+ \end{claim}
+ \begin{subproof}
+ [Proof of Claim]
+ Let $S = T^\dagger \circ T$.
+ We first note that $S$ is Hermitian and $\ker S = \ker T$.
+ To see it's Hermitian, note
+ $\left< S(v), w \right> = \left< T(v), T(w) \right> = \left< v, S(w) \right>$.
+ Taking $v = w$ also implies $\ker S \subseteq \ker T$
+ (and hence equality since obviously $\ker T \subseteq \ker S$).
+
+ Next, since we have $\left< S^r(v), S^{r-2}(v) \right>
+ = \left< S^{r-1}(v), S^{r-1}(v)\right>$,
+ an induction shows that $\ker S = \ker S^r$ for $r \ge 1$.
+ Now, since $T$ is normal, we have $S^r = (T^\dagger)^r \circ T^r$,
+ and thus we have the inclusion
+ \[ \ker T \subseteq \ker T^r \subseteq \ker S^r = \ker S = \ker T \]
+ where the last equality was established in the previous paragraph.
+ Thus in fact $\ker T = \ker T^r$.
+
+ Finally, to show equality with $\ker T^\dagger$, we compute
+ \begin{align*}
+ \left< Tv, Tv\right> &= \left< v, T^\dagger T v\right> \\
+ &= \left< v, T T^\dagger v \right> \\
+ &= \left< T^\dagger v, T^\dagger v \right>
+ \end{align*}
+ so $T(v) = 0$ if and only if $T^\dagger(v) = 0$. \qedhere
+ \end{subproof}
+
+ Now consider the given $T$, and any $\lambda$.
+ \begin{ques}
+ Show that $(T - \lambda\id)^\dagger = T^\dagger - \ol\lambda \id$.
+ Thus if $T$ is normal, so is $T - \lambda \id$.
+ \end{ques}
+ In particular, for any eigenvalue $\lambda$ of $T$,
+ we find that $\ker (T-\lambda\id) = \ker(T-\lambda\id)^r$.
+ This implies that all the Jordan blocks of $T$ have size $1$;
+ i.e.\ that $T$ is in fact diagonalizable.
+ Finally, applying the claim to the normal map $T - \lambda\id$
+ gives $\ker(T - \lambda\id) = \ker(T^\dagger - \ol\lambda\id)$,
+ so the eigenvectors of $T$ and $T^\dagger$ match,
+ with complex conjugate eigenvalues.
+
+ So, diagonalize $T$.
+ We just need to show that if $v$ and $w$ are eigenvectors of $T$
+ with distinct eigenvalues, then they are orthogonal.
+ (Within the eigenspace of any repeated eigenvalue,
+ we can use Gram-Schmidt to make the chosen eigenvectors orthonormal.)
+ To do this, suppose $T(v) = \lambda v$ and $T(w) = \mu w$
+ (thus $T^\dagger(w) = \ol\mu w $).
+ Then
+ \[ \lambda \left< v,w\right>
+ = \left< \lambda v, w\right>
+ = \left< Tv, w\right>
+ = \left< v, T^\dagger(w)\right>
+ = \left< v, \ol \mu w \right>
+ = \mu \left< v, w\right>. \]
+ Since $\lambda \neq \mu$, we conclude $\left< v,w \right> = 0$.
+\end{proof}
+This means that not only can we write
+\[ T = \begin{bmatrix}
+ \lambda_1 & \dots & \dots & 0 \\
+ 0 & \lambda_2 & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \dots & \lambda_n
+ \end{bmatrix}
+\]
+but moreover that the basis associated with this matrix consists of orthonormal vectors.
+
+As a corollary:
+\begin{theorem}[Hermitian matrices have real eigenvalues]
+ A Hermitian matrix $T$ is diagonalizable,
+ and all its eigenvalues are real.
+\end{theorem}
+\begin{proof}
+ Obviously Hermitian $\implies$ normal,
+ so write it in the orthonormal basis of eigenvectors.
+ To see that the eigenvalues are real, note that $T = T^\dagger$
+ means $\lambda_i = \ol{\lambda_i}$ for every $i$.
+\end{proof}
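
As a hedged numeric illustration of this corollary (one chosen Hermitian matrix; `numpy.linalg.eigh` is designed for exactly this situation):

```python
import numpy as np

# Sketch: a Hermitian matrix has real eigenvalues and an orthonormal
# basis of eigenvectors.  The matrix A is an arbitrary example.

A = np.array([[2, 1 - 1j, 0],
              [1 + 1j, 3, 2j],
              [0, -2j, 1]])
assert np.array_equal(A, A.conj().T)           # A is Hermitian

eigvals, U = np.linalg.eigh(A)                 # columns of U are eigenvectors
assert np.all(np.isreal(eigvals))              # real eigenvalues
assert np.allclose(U.conj().T @ U, np.eye(3))  # orthonormal eigenbasis
assert np.allclose(A @ U, U @ np.diag(eigvals))
```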
+
+\section{\problemhead}
+\begin{sproblem}
+ [Double dual]
+ \gim
+ \label{prob:double_dual}
+ Let $V$ be a finite-dimensional vector space.
+ Prove that
+ \begin{align*}
+ V &\to (V^\vee)^\vee \\
+ v &\mapsto \left( \xi \mapsto \xi(v) \right)
+ \end{align*}
+ gives an isomorphism.
+ (This is significant because the isomorphism is \emph{canonical},
+ and in particular does not depend on the choice of basis.
+ So this is more impressive.)
+ \begin{hint}
+ You can \emph{prove} the result just by taking a basis
+ $e_1$, \dots, $e_n$ of $V$
+ and showing that the map is linear and sends each $e_i$
+ to the basis element $(e_i^\vee)^\vee$.
+ \end{hint}
+\end{sproblem}
+
+\begin{problem}
+ [Fundamental theorem of linear algebra]
+ Let $T \colon V \to W$ be a map of finite-dimensional $k$-vector spaces.
+ Prove that
+ \[ \dim \img T = \dim \img T^\vee
+ = \dim V - \dim \ker T = \dim W - \dim \ker T^\vee. \]
+ \begin{hint}
+ Use \Cref{thm:linear_map_basis} and it will be immediate
+ (the four quantities equal the $k$ in the theorem).
+ \end{hint}
+ \begin{sol}
+ By \Cref{thm:linear_map_basis},
+ we may select $e_1$, \dots, $e_n$ a basis of $V$
+ and $f_1$, \dots, $f_m$ a basis of $W$
+ such that $T(e_i) = f_i$ for $i \le k$
+ and $T(e_i) = 0$ for $i > k$.
+ Then $T^\vee(f_i^\vee) = e_i^\vee$ for $i \le k$
+ and $T^\vee(f_i^\vee) = 0$ for $i > k$.
+ All four quantities above are then equal to $k$.
+ \end{sol}
+\end{problem}
+
+\begin{dproblem}
+ [Row rank is column rank]
+ A $m \times n$ matrix $M$ of real numbers is given.
+ The \emph{column rank} of $M$ is the dimension of the span in $\RR^m$
+ of its $n$ column vectors.
+ The \emph{row rank} of $M$ is the dimension of the span in $\RR^n$
+ of its $m$ row vectors.
+ Prove that the row rank and column rank are equal.
+ \begin{hint}
+ This actually is just the previous problem in disguise!
+ The row rank is $\dim \img T^\vee$
+ and the column rank is $\dim \img T$.
+ \end{hint}
+\end{dproblem}
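+As a concrete sanity check, with a matrix of my own choosing:

```latex
+\begin{example}
+ [Row rank equals column rank, concretely]
+ Consider the $2 \times 3$ matrix
+ \[ M = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{bmatrix}. \]
+ The three columns $(1,2)$, $(2,4)$, $(3,6)$ are all multiples of $(1,2)$,
+ so the column rank is $1$.
+ Likewise the second row is twice the first,
+ so the row rank is also $1$.
+\end{example}
```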
+
+\begin{problem}
+ [The complex conjugate spaces]
+ \label{prob:complex_conj_space}
+ Let $V = (V, +, \cdot)$ be a complex vector space.
+ Define the \vocab{complex conjugate vector space},
+ denoted $\ol V = (V, +, \ast)$
+ by changing just the multiplication:
+ \[ c \ast v = \ol c \cdot v. \]
+ Show that if $V$ is a finite-dimensional inner product space, then
+ \begin{align*}
+ \ol V & \to V^\vee \\
+ v &\mapsto \left< -, v \right>
+ \end{align*}
+ is an isomorphism of complex vector spaces.
+\end{problem}
+
+\begin{problem}
+ [$T^\dagger$ vs $T^\vee$]
+ Let $V$ and $W$ be real inner product spaces
+ and let $T \colon V \to W$ be a linear map.
+ Show that the following diagram commutes:
+ \begin{center}
+ \begin{tikzcd}
+ W \ar[r, "T^\dagger"] \ar[d, "\cong"']
+ & V \ar[d, "\cong"] \\
+ W^\vee \ar[r, "T^\vee"'] & V^\vee
+ \end{tikzcd}
+ \end{center}
+ Here the isomorphisms are $v \mapsto \left< -, v\right>$.
+ Thus, for real inner product spaces,
+ $T^\dagger$ is just $T^\vee$ with the duals eliminated
+ (by \Cref{thm:real_dual_isomorphic}).
+\end{problem}
+
+\begin{problem}
+ [Polynomial criteria for normality]
+ Let $V$ be a complex inner product space
+ and let $T \colon V \to V$ be a linear map.
+ Show that $T$ is normal if and only if
+ there is a polynomial\footnote{Here,
+ $p(T)$ is meant in the same composition
+ sense as in Cayley-Hamilton.}
+ $p \in \CC[t]$ such that \[ T^\dagger = p(T). \]
+ \begin{hint}
+ If there is a polynomial, check $TT^\dagger = T^\dagger T$ directly.
+ If $T$ is normal, diagonalize it.
+ \end{hint}
+ \begin{sol}
+ First, suppose $T^\ast = p(T)$.
+ Then $T^\ast T = p(T) \cdot T = T \cdot p(T) = T T^\ast$ and we're done.
+
+ Conversely, suppose $T$ is normal.
+ Then $T$ is diagonalizable with respect to
+ some orthonormal basis of $V$
+ (OK since $V$ is finite-dimensional).
+ In that basis, $T$ consists of its eigenvalues on the main diagonal
+ and zeros elsewhere, say
+ \[ T = \left(
+ \begin{array}{cccc}
+ \lambda_1 & 0 & \dots & 0 \\
+ 0 & \lambda_2 & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \dots & \lambda_n
+ \end{array}
+ \right). \]
+ In that case, we find that for any polynomial $q$ we have
+ \[ q(T) = \left(
+ \begin{array}{cccc}
+ q(\lambda_1) & 0 & \dots & 0 \\
+ 0 & q(\lambda_2) & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \dots & q(\lambda_n)
+ \end{array}
+ \right) \]
+ and
+ \[ T^\ast = \left(
+ \begin{array}{cccc}
+ \ol{\lambda_1} & 0 & \dots & 0 \\
+ 0 & \ol{\lambda_2} & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \dots & \ol{\lambda_n}
+ \end{array}
+ \right). \]
+ So we simply require a polynomial $q$
+ such that $q(\lambda_i) = \ol{\lambda_i}$ for every $i$.
+ Since there are finitely many $\lambda_i$,
+ we can construct such a polynomial using Lagrange interpolation.
+ \end{sol}
+\end{problem}
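+Here is a tiny instance of the interpolation step at the end of
+the solution, with a matrix of my own choosing.

```latex
+\begin{example}
+ [Interpolating $T^\dagger$ from $T$]
+ Let $T$ be the normal map whose matrix in some orthonormal basis is
+ \[ T = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}. \]
+ We seek $p \in \CC[t]$ with $p(i) = \ol{i} = -i$ and $p(-i) = \ol{-i} = i$;
+ Lagrange interpolation gives
+ \[ p(t) = -i \cdot \frac{t-(-i)}{i-(-i)} + i \cdot \frac{t-i}{-i-i} = -t. \]
+ Indeed $T^\dagger = -T = p(T)$ here.
+\end{example}
```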
diff --git a/books/napkin/vector-space.tex b/books/napkin/vector-space.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8256951f23e8ab8a6daae878b2a5d118aefe746a
--- /dev/null
+++ b/books/napkin/vector-space.tex
@@ -0,0 +1,1102 @@
+\chapter{Vector spaces}
+This is a pretty light chapter.
+The point of it is to define what a vector space and a basis are.
+These are intuitive concepts that you may already know.
+
+\section{The definitions of a ring and field}
+\prototype{$\ZZ$, $\RR$, and $\CC$ are rings; the latter two are fields.}
+
+I'll very informally define a ring/field here,
+in case you skipped the earlier chapter.
+\begin{itemize}
+ \ii A \textbf{ring} is a structure with a \emph{commutative}
+ addition and multiplication, as well as subtraction, like $\ZZ$.
+ It also has an additive identity $0$ and multiplicative identity $1$.
+
+ \ii If the multiplication is invertible, like in $\RR$ or $\CC$
+ (meaning $\frac 1x$ makes sense for any $x \neq 0$),
+ then the ring is called a \textbf{field}.
+\end{itemize}
+In fact, if you replace ``field'' by ``$\RR$'' everywhere in what follows,
+you probably won't lose much.
+It's customary to use the letter $R$ for rings, and $k$ or $K$ for fields.
+
+Finally, in case you skipped the chapter on groups, I should also mention:
+\begin{itemize}
+ \ii An \textbf{additive abelian group} is a structure
+ with a commutative addition, as well as subtraction,
+ plus an additive identity $0$.
+ It doesn't have to have multiplication.
+ A good example is $\RR^3$ (with addition componentwise).
+\end{itemize}
+
+\section{Modules and vector spaces}
+\prototype{Polynomials of degree at most $n$.}
+You intuitively know already that $\RR^n$ is a ``vector space'':
+its elements can be added together,
+and there's some scaling by real numbers.
+Let's develop this more generally.
+
+Fix a commutative ring $R$.
+Then informally,
+\begin{moral}
+ An $R$-module is any structure where you can add two elements
+ and scale by elements of $R$.
+\end{moral}
+% You can think of the $R$-module as consisting of soldiers
+% being commanded by the ring $R$.
+Moreover, a \vocab{vector space} is just a module whose commanding ring
+is actually a field.
+I'll give you the full definition in a moment,
+but first, examples\dots
+
+\begin{example}
+ [Quadratic polynomials, aka my favorite example]
+ My favorite example of an $\RR$-vector space is the
+ set of polynomials of degree at most two, namely
+ \[ \left\{ ax^2+bx+c \mid a,b,c \in \RR \right\}. \]
+ Indeed, you can add any two quadratics, and multiply by constants.
+ You can't multiply two quadratics to get a quadratic,
+ but that's irrelevant -- in a vector space there need not
+ be a notion of multiplying two vectors together.
+
+ In a sense we'll define later, this vector space
+ has dimension $3$ (as expected!).
+ % But I hope you can see why this is kind of true!
+ \label{example:quadratic_vector_space}
+\end{example}
+\begin{example}[All polynomials]
+ The set of \emph{all} polynomials with real coefficients is an
+ $\RR$-vector space, because you can \emph{add any two polynomials}
+ and \emph{scale by constants}.
+\end{example}
+
+\begin{example}
+ [Euclidean space]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The complex numbers
+ \[ \left\{ a+bi \mid a,b \in \RR \right\} \]
+ form a real vector space. As we'll see later,
+ it has ``dimension $2$''.
+ \ii The real numbers $\RR$ form a real vector space of dimension $1$.
+ \ii The set of 3D vectors
+ \[ \left\{ (x,y,z) \mid x,y,z \in \RR \right\} \]
+ forms a real vector space, because you can add any two triples
+ component-wise. Again, we'll later explain
+ why it has ``dimension $3$''.
+ \end{enumerate}
+\end{example}
+
+\begin{example}
+ [More examples of vector spaces]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The set \[ \QQ[\sqrt 2] = \left\{ a + b \sqrt 2 \mid a, b \in \QQ \right\} \]
+ has a structure of a $\QQ$-vector space in the obvious fashion:
+ one can add any two elements, and scale by rational numbers.
+ (It is not a real vector space -- why?)
+ \ii The set \[ \left\{ (x,y,z) \mid x+y+z = 0 \text{ and } x,y,z \in \RR \right\} \]
+ is a $2$-dimensional real vector space.
+ \ii The set of all functions $f : \RR \to \RR$ is also a real vector space
+ (since the notions $f+g$ and $c \cdot f$ both make sense for $c \in \RR$).
+ \end{enumerate}
+\end{example}
+
+Now let me write the actual rules for how this multiplication behaves.
+\begin{definition}
+ Let $R$ be a commutative ring.
+ An $R$-\vocab{module} starts with an additive abelian group $M = (M,+)$
+ whose identity is denoted $0 = 0_M$.
+ We additionally specify a left multiplication by elements of $R$.
+ This multiplication must satisfy the following properties
+ for $r, r_1, r_2 \in R$ and $m, m_1, m_2 \in M$:
+ \begin{enumerate}[(i)]
+ \ii $r_1 \cdot (r_2 \cdot m) = (r_1r_2) \cdot m$.
+ \ii Multiplication is distributive, meaning
+ \[ (r_1+r_2) \cdot m = r_1 \cdot m + r_2 \cdot m
+ \text{ and }
+ r \cdot (m_1 + m_2) = r \cdot m_1 + r \cdot m_2. \]
+ \ii $1_R \cdot m = m$.
+ \ii $0_R \cdot m = 0_M$.
+ (This is actually extraneous;
+ one can deduce it from the first three.)
+ \end{enumerate}
+ If $R$ is a field we say $M$ is an $R$-\vocab{vector space};
+ its elements are called \vocab{vectors}
+ and the members of $R$ are called \vocab{scalars}.
+\end{definition}
+
+\begin{abuse}
+ In the above, we're using the same symbol $+$ for the addition of $M$
+ and the addition of $R$.
+ Sorry about that, but it's kind of hard to avoid, and the point
+ of the axioms is that these additions should be related.
+ I'll try to remember to put $r \cdot m$ for the multiplication of the module
+ and $r_1r_2$ for the multiplication of $R$.
+\end{abuse}
+
+\begin{ques}
+ In \Cref{example:quadratic_vector_space},
+ I was careful to say ``degree at most $2$'' instead of ``degree $2$''.
+ What's the reason for this?
+ In other words, why is
+ \[ \left\{ ax^2 + bx + c \mid a,b,c \in \RR, a \neq 0 \right\} \]
+ not an $\RR$-vector space?
+\end{ques}
+
+A couple less intuitive but somewhat important examples\dots
+\begin{example}[Abelian groups are $\ZZ$-modules]
+ (Skip this example if you're not comfortable with groups.)
+ \begin{enumerate}[(a)]
+ \ii The example of real polynomials
+ \[ \left\{ ax^2+bx+c \mid a,b,c \in \RR \right\} \]
+ is also a $\ZZ$-module!
+ Indeed, we can add any two such polynomials,
+ and we can scale them by integers.
+ \ii The set of integers modulo $100$, say $\ZZ/100\ZZ$,
+ is a $\ZZ$-module as well. Can you see how?
+ \ii In fact, \emph{any} abelian group $G = (G,+)$ is a $\ZZ$-module.
+ The multiplication can be defined by
+ \[ n \cdot g = \underbrace{g+\dots+g}_{\text{$n$ times}}
+ \qquad (-n) \cdot g = n \cdot (-g)\]
+ for $n \ge 0$. (Here $-g$ is the additive inverse of $g$.)
+ \end{enumerate}
+\end{example}
+\begin{example}
+ [Every ring is its own module]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii $\RR$ can be thought of as an $\RR$-vector space over itself.
+ Can you see why?
+
+ \ii By the same reasoning,
+ we see that \emph{any} commutative ring $R$ can be thought of
+ as an $R$-module over itself.
+ \end{enumerate}
+\end{example}
+
+\section{Direct sums}
+\prototype{$\{ax^2+bx+c\} = \RR \oplus x\RR \oplus x^2\RR$, and
+$\RR^3$ is the sum of its axes.}
+Let's return to \Cref{example:quadratic_vector_space}, and consider
+\[ V = \left\{ ax^2+bx+c \mid a,b,c \in \RR \right\}. \]
+Even though I haven't told you what a dimension is,
+you can probably see that this vector space ``should have'' dimension $3$.
+We'll get to that in a moment.
+
+The other thing you may have noticed is that somehow
+the $x^2$, $x$ and $1$ terms don't ``talk to each other''.
+They're totally unrelated.
+In other words, we can consider the three sets
+\begin{align*}
+ x^2\RR &\defeq \left\{ ax^2 \mid a \in \RR \right\} \\
+ x\RR &\defeq \left\{ bx \mid b \in \RR \right\} \\
+ \RR &\defeq \left\{ c \mid c \in \RR \right\}.
+\end{align*}
+In an obvious way, each of these can be thought of as a ``copy'' of $\RR$.
+
+Then $V$ quite literally consists of the ``sums of these sets''.
+Specifically, every element of $V$ can be written \emph{uniquely}
+as the sum of one element from each of these sets.
+This motivates us to write
+\[ V = x^2\RR \oplus x\RR \oplus \RR. \]
+The notion which captures this formally is the \vocab{direct sum}.
+
+\begin{definition}
+ Let $M$ be an $R$-module.
+ Let $M_1$ and $M_2$ be subsets of $M$ which are themselves $R$-modules.
+ Then we write $M = M_1 \oplus M_2$ and say $M$ is a \vocab{direct sum}
+ of $M_1$ and $M_2$
+ if every element from $M$ can be written uniquely as the sum
+ of an element from $M_1$ and $M_2$.
+\end{definition}
+\begin{example}[Euclidean plane]
+ Take the vector space $\RR^2 = \left\{ (x,y) \mid x \in \RR, y \in \RR \right\}$.
+ We can consider it as a direct sum of its $x$-axis and $y$-axis:
+ \[ X = \left\{ (x,0) \mid x \in \RR \right\}
+ \text{ and }
+ Y = \left\{ (0,y) \mid y \in \RR \right\}. \]
+ Then $\RR^2 = X \oplus Y$.
+\end{example}
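+To see the uniqueness requirement in action:

```latex
+\begin{example}
+ [Uniqueness in the plane]
+ In $\RR^2 = X \oplus Y$, the vector $(3,5)$ decomposes as
+ \[ (3,5) = (3,0) + (0,5), \]
+ and this is the only way: if $(3,5) = (a,0) + (0,b)$,
+ then comparing coordinates forces $a = 3$ and $b = 5$.
+\end{example}
```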
+
+This gives us a ``top-down'' way to break down modules
+into some disconnected components.
+
+By applying this idea in reverse, we can also construct
+new vector spaces as follows.
+In a very unfortunate accident, the two names and notations for technically
+distinct things are exactly the same.
+\begin{definition}
+ Let $M$ and $N$ be $R$-modules.
+ We define the \vocab{direct sum} $M \oplus N$
+ to be the $R$-module whose elements are pairs $(m,n) \in M \times N$.
+ The operations are given by
+ \[ (m_1, n_1) + (m_2, n_2) = (m_1+m_2, n_1+n_2). \]
+ and
+ \[ r \cdot (m, n) = (r \cdot m, r \cdot n). \]
+\end{definition}
+
+For example, while we technically wrote $\RR^2 = X \oplus Y$,
+since each of $X$ and $Y$ is a copy of $\RR$,
+we might as well have written $\RR^2 \cong \RR \oplus \RR$.
+
+\begin{abuse}
+ The above illustrates an abuse of notation in the way we write a direct sum. The symbol $\oplus$ has two meanings.
+ \begin{itemize}
+ \ii If $V$ is a \emph{given} space and $W_1$ and $W_2$ are subspaces, then $V = W_1 \oplus W_2$ means that ``$V$ \emph{splits} as a direct sum $W_1 \oplus W_2$'' in the way we defined above.
+ \ii If $W_1$ and $W_2$ are two \emph{unrelated} spaces, then $W_1 \oplus W_2$ is \emph{defined} as the vector space whose \emph{elements} are pairs $(w_1, w_2) \in W_1 \times W_2$.
+ \end{itemize}
+ You can see that these definitions ``kind of'' coincide.
+\end{abuse}
+
+In this way, you can see that $V$ should be isomorphic
+to $\RR \oplus \RR \oplus \RR$;
+we had $V = x^2\RR \oplus x\RR \oplus \RR$,
+but the $1$, $x$, $x^2$ don't really talk to each other
+and each of the summands is really just a copy of $\RR$ at heart.
+
+\begin{definition}
+ We can also define, for every positive integer $n$, the module
+ \[ M^{\oplus n}
+ \defeq \underbrace{M \oplus M \oplus \dots \oplus M}_{\text{$n$ times}}. \]
+\end{definition}
+
+\section{Linear independence, spans, and basis}
+\prototype{%
+ $\left\{ 1,x,x^2 \right\}$ is a basis of
+ $\left\{ ax^2 + bx + c \mid a,b,c \in \RR \right\}$.}
+
+The idea of a basis, the topic of this section,
+gives us another way to capture the notion that
+\[ V = \left\{ ax^2+bx+c \mid a,b,c \in \RR \right\} \]
+is sums of copies of $\{1,x,x^2\}$.
+This section should be very intuitive, if technical.
+If you can't see why the theorems here ``should'' be true,
+you're doing it wrong.
+
+Let $M$ be an $R$-module now.
+We define three very classical notions that you likely are already familiar with.
+If not, fall back on your notion of Euclidean space or $V$ above.
+\begin{definition}
+ A \vocab{linear combination} of some vectors $v_1, \dots, v_n$
+ is a sum of the form $r_1 v_1 + \dots + r_n v_n$,
+ where $r_1, \dots, r_n \in R$.
+ The linear combination is called \vocab{trivial}
+ if $r_1 = r_2 = \dots = r_n = 0_R$, and \vocab{nontrivial} otherwise.
+\end{definition}
+\begin{definition}
+ Consider a finite set of vectors $v_1, \dots, v_n$ in a module $M$.
+ \begin{itemize}
+ \ii It is called \vocab{linearly independent} if there
+ is no nontrivial linear combination with value $0_M$.
+ (Observe that $0_M = 0 \cdot v_1 + 0 \cdot v_2 + \dots + 0 \cdot v_n$
+ is always true -- the assertion is that there is no other
+ way to express $0_M$ in this form.)
+ \ii It is called a \vocab{generating set} if every $v \in M$ can be written as
+ a linear combination of the $\{v_i\}$.
+ If $M$ is a vector space we say it is \vocab{spanning} instead.
+ \ii It is called a \vocab{basis} (plural \vocab{bases})
+ if every $v \in M$ can be written
+ \emph{uniquely} as a linear combination of the $\{v_i\}$.
+ \end{itemize}
+ The same definitions apply for an infinite set, with the proviso
+ that all sums must be finite.
+
+\end{definition}
+So by definition, $\left\{ 1,x,x^2 \right\}$ is a basis for $V$.
+It's not the only one: $\{2, x, x^2\}$ and $\{x+4, x-2, x^2+x\}$
+are other examples of bases, though not as natural.
+However, the set $S = \{3+x^2, x+1, 5+2x+x^2\}$ is not a basis;
+it fails for two reasons:
+\begin{itemize}
+ \ii Note that
+ $0 = (3+x^2) + 2(x+1) - (5+2x+x^2)$.
+ So the set $S$ is not linearly independent.
+ \ii It's not possible to write $x^2$ as a linear combination of elements of $S$.
+ So $S$ fails to be spanning.
+\end{itemize}
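+The second failure can be verified by hand:

```latex
+\begin{example}
+ [Why $x^2$ is not in the span of $S$]
+ Suppose $x^2 = a(3+x^2) + b(x+1) + c(5+2x+x^2)$ for some $a,b,c \in \RR$.
+ Comparing coefficients of $x^2$, $x$, $1$ gives
+ \[ a + c = 1, \qquad b + 2c = 0, \qquad 3a + b + 5c = 0. \]
+ Substituting $a = 1-c$ and $b = -2c$ into the last equation gives
+ $3(1-c) - 2c + 5c = 3 \neq 0$, contradiction.
+\end{example}
```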
+With these new terms, we can say a basis is a linearly independent and spanning set.
+
+\begin{example}[More examples of bases]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Regard $\QQ[\sqrt2]
+ = \left\{ a + b \sqrt 2 \mid a, b \in \QQ \right\}$
+ as a $\QQ$-vector space.
+ Then $\{1, \sqrt 2\}$ is a basis.
+ \ii If $V$ is the set of all real polynomials,
+ there is an infinite basis $\{1, x, x^2, \dots\}$.
+ The condition that we only use finitely many terms just says
+ that the polynomials must have finite degree (which is good).
+ \ii Let $V = \{ (x,y,z) \mid x+y+z=0 \text{ and } x,y,z \in \RR\}$.
+ Then we expect there to be a basis of size $2$, but unlike previous examples
+ there is no immediately ``obvious'' choice.
+ Some working examples include:
+ \begin{itemize}
+ \ii $(1,-1,0)$ and $(1,0,-1)$,
+ \ii $(0,1,-1)$ and $(1,0,-1)$,
+ \ii $(5,3,-8)$ and $(2,-1,-1)$.
+ \end{itemize}
+ \end{enumerate}
+\end{example}
+
+\begin{exercise}
+ Show that a set of vectors is a basis if and only if
+ it is linearly independent and spanning.
+ (Think about the polynomial example if you get stuck.)
+\end{exercise}
+
+Now we state a few results which assert
+that bases in vector spaces behave as nicely as possible.
+\begin{theorem}[Maximality and minimality of bases]
+ \label{thm:vector_best}
+ Let $V$ be a vector space over some field $k$
+ and take $e_1, \dots, e_n \in V$. The following are equivalent:
+ \begin{enumerate}[(a)]
+ \ii The $e_i$ form a basis.
+ \ii The $e_i$ are spanning, but no proper subset is spanning.
+ \ii The $e_i$ are linearly independent, but adding any other
+ element of $V$ makes them not linearly independent.
+ \end{enumerate}
+\end{theorem}
+\begin{remark}
+ If we replace $V$ by a general module $M$ over a commutative ring $R$,
+ then (a) $\implies$ (b) and (a) $\implies$ (c) but not conversely.
+\end{remark}
+\begin{proof}
+ Straightforward, do it yourself if you like.
+ The key point to notice is that you need to divide by scalars for the converse direction,
+ hence $V$ is required to be a vector space instead of just a module
+ for the implications (b) $\implies$ (a) and (c) $\implies$ (a).
+\end{proof}
+
+\begin{theorem}
+ [Dimension theorem for vector spaces]
+ If a vector space $V$ has a finite basis,
+ then every other basis has the same number of elements.
+\end{theorem}
+\begin{proof}
+ We prove something stronger:
+ Assume $v_1, \dots, v_n$ is a spanning set
+ while $w_1, \dots, w_m$ is linearly independent. We claim that $n \ge m$.
+ \begin{ques}
+ Show that this claim is enough to imply the theorem.
+ \end{ques}
+
+ Let $A_0 = \{v_1, \dots, v_n\}$ be the spanning set.
+ Throw in $w_1$: by the spanning condition,
+ $w_1 = c_1 v_1 + \dots + c_n v_n$.
+ There's some nonzero coefficient, say $c_n$.
+ Thus \[ v_n = \frac{1}{c_n} w_1 - \frac{c_1}{c_n}v_1 - \frac{c_2}{c_n}v_2 - \dots. \]
+ Thus $A_1 = \{v_1, \dots, v_{n-1}, w_1\}$ is spanning.
+ Now do the same thing, throwing in $w_2$,
+ and deleting some element of the $v_i$ as before to get $A_2$;
+ the condition that the $w_i$ are linearly independent
+ ensures that some $v_i$ coefficient
+ must always not be zero.
+ Since we can eventually get to $A_m$, we have $n \ge m$.
+\end{proof}
+\begin{remark*}
+ [Generalizations]
+ \hfill
+ \begin{itemize}
+ \ii The theorem is true for an infinite basis as well
+ if we interpret ``the number of elements'' as ``cardinality''.
+ This is confusing on a first read through, so we won't elaborate.
+ \ii In fact, this is true for modules over any commutative ring.
+ Interestingly, the proof for the general case proceeds by reducing
+ to the case of a vector space.
+ \end{itemize}
+\end{remark*}
+
+The dimension theorem, true to its name,
+lets us define the \vocab{dimension} of
+a vector space as the size of any finite basis, if one exists.
+When it does exist we say $V$ is \vocab{finite-dimensional}.
+So for example,
+\[ V = \left\{ ax^2 + bx + c \mid a,b,c \in \RR \right\} \]
+has dimension three, because $\left\{ 1,x,x^2 \right\}$ is a basis.
+That's not the only basis: we could as well have written
+\[ \left\{ a(x^2-4x) + b(x+2) + c \mid a,b,c \in \RR \right\} \]
+and gotten the exact same vector space.
+But the beauty of the theorem is that no matter how we try
+to contrive the basis, it will always have exactly three elements.
+That's why it makes sense to say $V$ has dimension three.
+
+On the other hand, the set of all polynomials
+$\RR[x]$ is \emph{infinite-dimensional}
+(which should be intuitively clear).
+
+A basis $e_1, \dots, e_n$ of $V$ is really cool
+because it means that to specify $v \in V$,
+I only have to specify $a_1, \dots, a_n \in k$,
+and then let $v = a_1 e_1 + \dots + a_n e_n$.
+You can even think of $v$ as $\left( a_1, \dots, a_n \right)$.
+% In a way I'll make precise in a moment, $V$ is actually isomorphic to just $k^{\oplus n}$.
+To put it another way, if $V$ is a $k$-vector space we always have
+\[ V = e_1 k \oplus e_2 k \oplus \dots \oplus e_n k. \]
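+For instance, in my favorite example:

```latex
+\begin{example}
+ [Coordinates depend on the basis]
+ Take $V = \left\{ ax^2+bx+c \mid a,b,c \in \RR \right\}$
+ with the basis $e_1 = 1$, $e_2 = x$, $e_3 = x^2$.
+ Then the vector $v = 7x^2 - x + 4$ is encoded as $(4, -1, 7)$.
+ If we instead use the basis $2$, $x$, $x^2$,
+ the same $v$ is encoded as $(2, -1, 7)$:
+ the coordinates of a vector depend on the chosen basis.
+\end{example}
```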
+
+\section{Linear maps}
+\prototype{Evaluation of $\{ax^2+bx+c\}$ at $x=3$.}
+We've seen homomorphisms and continuous maps.
+Now we're about to see linear maps, the structure preserving maps
+between vector spaces. Can you guess the definition?
+
+\begin{definition}
+ Let $V$ and $W$ be vector spaces over the same field $k$.
+ A \vocab{linear map} is a map $T \colon V \to W$ such that:
+ \begin{enumerate}[(i)]
+ \ii We have $T(v_1 + v_2) = T(v_1) + T(v_2)$
+ for any $v_1, v_2 \in V$.\footnote{In group language,
+ $T$ is a homomorphism $(V,+) \to (W,+)$.}
+ \ii For any $a \in k$ and $v \in V$, $T(a \cdot v) = a \cdot T(v)$.
+ \end{enumerate}
+ If this map is a bijection (equivalently, if it has an inverse),
+ it is an \vocab{isomorphism}.
+ We then say $V$ and $W$ are \vocab{isomorphic}
+ vector spaces and write $V \cong W$.
+\end{definition}
+
+\begin{example}[Examples of linear maps]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii For any vector spaces $V$ and $W$ there is a trivial linear map sending everything to $0_W \in W$.
+ \ii For any vector space $V$, there is the identity isomorphism $\id : V \to V$.
+ \ii The map $\RR^3 \to \RR$ by $(a,b,c) \mapsto 4a+2b+c$ is a linear map.
+ \ii Let $V$ be the set of real polynomials of degree at most $2$.
+ The map $\RR^3 \to V$ by $(a,b,c) \mapsto ax^2+bx+c$ is an \emph{isomorphism}.
+ \ii Let $V$ be the set of real polynomials of degree at most $2$.
+ The map $V \to \RR$ by $ax^2+bx+c \mapsto 9a + 3b + c$
+ is a linear map, which can be described as ``evaluation at $3$''.
+ \ii Let $W$ be the set of functions $\RR \to \RR$.
+ The evaluation map $W \to \RR$ by $f \mapsto f(0)$ is a linear map.
+ \ii There is a map of $\QQ$-vector spaces $\QQ[\sqrt2] \to \QQ[\sqrt2]$
+ called ``multiply by $\sqrt2$''; this map sends $a+b\sqrt2 \mapsto 2b + a\sqrt2$.
+ This map is an isomorphism, because it has an inverse ``multiply by $1/\sqrt2$''.
+ \end{enumerate}
+\end{example}
+
+In the expression $T(a \cdot v) = a \cdot T(v)$, note that the first $\cdot$ is the multiplication of $V$ and the second $\cdot$ is the multiplication of $W$.
+Note that this notion of isomorphism really only cares about the size of the basis:
+\begin{proposition}[$n$-dimensional vector spaces are isomorphic]
+ If $V$ is an $n$-dimensional vector space, then
+ $V \cong k^{\oplus n}$.
+\end{proposition}
+\begin{ques}
+ Let $e_1$, \dots, $e_n$ be a basis for $V$.
+ What is the isomorphism?
+ (Your first guess is probably right.)
+\end{ques}
+\begin{remark}
+ You could technically say that all finite-dimensional vector
+ spaces are just $k^{\oplus n}$ and that no other space is worth
+ caring about.
+ But this seems kind of rude.
+ Spaces often are more than just triples: $ax^2+bx+c$ is a polynomial,
+ and so it has some ``essence'' to it that you'd lose if you
+ compressed it into $(a,b,c)$.
+
+ Moreover, a lot of spaces, like the set of vectors $(x,y,z)$ with $x+y+z=0$,
+ do not have an obvious choice of basis.
+ Thus to cast such a space into $k^{\oplus n}$
+ would require you to make arbitrary decisions.
+\label{rem:vector_spaces_have_essence}
+\end{remark}
+
+\section{What is a matrix?}
+Now I get to tell you what a matrix is:
+it's a way of writing a linear map in terms of bases.
+
+Suppose we have a finite-dimensional
+vector space $V$ with basis $e_1, \dots, e_m$
+and a vector space $W$ with basis $w_1, \dots, w_n$.
+I also have a map $T \colon V \to W$ and I want to tell you what $T$ is.
+It would be awfully inconsiderate of me to try and tell you what $T(v)$
+is at every point $v$.
+In fact, I only have to tell you what $T(e_1)$, \dots, $T(e_m)$ are,
+because from there you can work out
+$T(a_1 e_1 + \dots + a_m e_m)$ for yourself:
+\[ T(a_1 e_1 + \dots + a_m e_m) = a_1 T(e_1) + \dots + a_m T(e_m). \]
+Since the $e_i$ are a basis, that tells you all you need to know about $T$.
+\begin{example}
+ [Extending linear maps]
+ Let $V = \left\{ ax^2+bx+c \mid a,b,c \in \RR \right\}$.
+ Then $T(ax^2+bx+c) = aT(x^2) + bT(x) + cT(1)$.
+\end{example}
+
+Now I can even be more concrete.
+I could tell you what $T(e_1)$ is, but seeing as I have a basis of $W$,
+I can actually just tell you what $T(e_1)$ is in terms of this basis.
+Specifically, there are unique $a_{11}, a_{21}, \dots, a_{n1} \in k$ such that
+\[ T(e_1) = a_{11} w_1 + a_{21} w_2 + \dots + a_{n1} w_n. \]
+So rather than telling you the value of $T(e_1)$ in some abstract space $W$,
+I could just tell you what $a_{11}, a_{21}, \dots, a_{n1}$ were.
+Then I'd repeat this for $T(e_2)$, $T(e_3)$, all the way up to $T(e_m)$,
+and that would tell you everything you need to know about $T$.
+
+That's where the matrix $T$ comes from!
+It's a concise way of writing down all $mn$ numbers
+I need to tell you.
+To be explicit, the matrix for $T$ is defined as the array
+\begin{align*}
+ T &= \underbrace{%
+ \begin{bmatrix}
+ \mid & \mid & & \mid \\
+ T(e_1) & T(e_2) & \dots & T(e_{m}) \\
+ \mid & \mid & & \mid \\
+ \end{bmatrix}
+ }_{\text{$m$ columns}} \Bigg\} \text{$n$ rows} \\
+ &=
+ \begin{bmatrix}
+ a_{11} & a_{12} & \dots & a_{1m} \\
+ a_{21} & a_{22} & \dots & a_{2m} \\
+ \vdots & \vdots & \ddots & \vdots \\
+ a_{n1} & a_{n2} & \dots & a_{nm}
+ \end{bmatrix}.
+\end{align*}
+
+\begin{example}
+ [An example of a matrix]
+ Here is a concrete example in terms of a basis.
+ Let $V = \RR^3$ with basis $e_1$, $e_2$, $e_3$
+ and let $W = \RR^2$ with basis $w_1$, $w_2$.
+ If I have a map $T \colon V \to W$,
+ then it is uniquely determined by the three values
+ $T(e_1)$, $T(e_2)$, $T(e_3)$; for example:
+ \begin{align*}
+ T(e_1) &= 4w_1 + 7w_2 \\
+ T(e_2) &= 2w_1 + 3w_2 \\
+ T(e_3) &= w_1
+ \end{align*}
+ The columns then correspond to $T(e_1)$, $T(e_2)$, $T(e_3)$:
+ \[
+ T =
+ \begin{bmatrix}
+ 4 & 2 & 1 \\
+ 7 & 3 & 0
+ \end{bmatrix}
+ \]
+\end{example}
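+Once the matrix is written down, computing $T$ at any vector is mechanical.
+Continuing the example above:

```latex
+\begin{example}
+ [Reading off $T(v)$ from the matrix]
+ With $T$ as above, take $v = e_1 - e_2 + 2e_3$.
+ By linearity,
+ \[ T(v) = T(e_1) - T(e_2) + 2T(e_3)
+ = (4w_1+7w_2) - (2w_1+3w_2) + 2w_1 = 4w_1 + 4w_2, \]
+ which is exactly the matrix-vector product
+ \[ \begin{bmatrix} 4 & 2 & 1 \\ 7 & 3 & 0 \end{bmatrix}
+ \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}
+ = \begin{bmatrix} 4 \\ 4 \end{bmatrix}. \]
+\end{example}
```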
+
+\begin{example}
+ [An example of a matrix after choosing a basis]
+ We again let $V = \left\{ ax^2 + bx + c \right\}$
+ be the vector space of polynomials of degree at most $2$.
+ We fix the basis $1$, $x$, $x^2$ for it.
+
+ Consider the ``evaluation at $3$'' map,
+ a map $V \to \RR$.
+ We pick $1$ as the basis element of the RHS;
+ then we can write it as a $1 \times 3$ matrix
+ \[ \begin{bmatrix}
+ 1 & 3 & 9
+ \end{bmatrix} \]
+ with the columns corresponding to $T(1)$, $T(x)$, $T(x^2)$.
+\end{example}
+
+
+From here you can actually work out for yourself
+what it means to multiply two matrices.
+Suppose we have picked a basis for three spaces $U$, $V$, $W$.
+Given maps $T \colon U \to V$ and $S \colon V \to W$,
+we can consider their composition $S \circ T$, i.e.
+\[ U \taking{T} V \taking{S} W. \]
+Matrix multiplication is defined exactly so that the matrix $ST$
+is the same thing we get from interpreting the composed function $S \circ T$ as a matrix.
+\begin{exercise}
+ Check this for yourself!
+ For a concrete example let $\RR^2 \taking{T} \RR^2 \taking{S} \RR^2$
+ by $T(e_1) = 2e_1+3e_2$ and $T(e_2) = 4e_1+5e_2$,
+ $S(e_1) = 6e_1+7e_2$ and $S(e_2) = 8e_1+9e_2$.
+ Compute $S(T(e_1))$ and $S(T(e_2))$ and see how it compares
+ to multiplying the matrices associated to $S$ and $T$.
+\end{exercise}
+In particular, since function composition is associative,
+it follows that matrix multiplication is as well.
+To drive this point home,
+\begin{moral}
+ A matrix is the laziest possible way to specify
+ a linear map from $V$ to $W$.
+\end{moral}
+
+This means you can define concepts like the determinant or the trace of a matrix
+both in terms of an ``intrinsic'' map $T \colon V \to W$
+and in terms of the entries of the matrix.
+Since the map $T$ itself doesn't refer to any basis,
+the abstract definition will imply that the numerical definition
+doesn't depend on the choice of a basis.
+
+\section{Subspaces and picking convenient bases}
+\prototype{Any two linearly independent vectors in $\RR^3$.}
+% A submodule is exactly what you think it is.
+\begin{definition}
+ Let $M$ be a left $R$-module.
+ A \vocab{submodule} $N$ of $M$ is a module $N$
+ such that every element of $N$ is also an element of $M$.
+ If $M$ is a vector space then $N$ is called a \vocab{subspace}.
+\end{definition}
+
+\begin{example}[Kernels]
+ The \vocab{kernel} of a map $T \colon V \to W$ (written $\ker T$) is
+ the set of $v \in V$ such that $T(v) = 0_W$.
+ It is a subspace of $V$, since it's closed under addition and scaling (why?).
+\end{example}
+\begin{example}[Spans]
+ Let $V$ be a vector space and $v_1, \dots, v_m$ be any vectors of $V$.
+ The \vocab{span} of these vectors is defined as the set
+ \[ \left\{ a_1 v_1 + \dots + a_m v_m \mid a_1, \dots, a_m \in k \right\}. \]
+ Note that it is a subspace of $V$ as well!
+\end{example}
+\begin{ques}
+ Why is $0_V$ an element of each of the above examples?
+ In general, why must any subspace contain $0_V$?
+\end{ques}
+
+Subspaces behave nicely with respect to bases.
+\begin{theorem}[Basis completion]
+ \label{thm:basis_completion}
+ Let $V$ be an $n$-dimensional space, and $V'$ a subspace of $V$.
+ Then
+ \begin{enumerate}[(a)]
+ \ii $V'$ is also finite-dimensional.
+ \ii If $e_1, \dots, e_m$ is a basis of $V'$, then there exist
+ $e_{m+1}, \dots, e_n$ in $V$ such that
+ $e_1, \dots, e_n$ is a basis of $V$.
+ \end{enumerate}
+\end{theorem}
+\begin{proof}
+ Omitted, since it is intuitive and the proof is not that enlightening.
+ (However, we will use this result repeatedly later on,
+ so do take the time to internalize it now.)
+\end{proof}
+
+A very common use case is picking a convenient basis for a map $T$.
+\begin{theorem}[Picking a basis for linear maps]
+ \label{thm:linear_map_basis}
+ Let $T \colon V \to W$ be a map of finite-dimensional vector spaces,
+ with $n = \dim V$, $m = \dim W$.
+ Then there exists a basis $v_1, \dots, v_n$ of $V$
+ and a basis $w_1, \dots, w_m$ of $W$,
+ as well as a nonnegative integer $k$, such that
+ \[
+ T(v_i) =
+ \begin{cases}
+ w_i & \text{if $i \le k$} \\
+ 0_W & \text{if $i > k$}.
+ \end{cases}
+ \]
+ Moreover $\dim \ker T = n-k$ and $\dim \img T = k$.
+\end{theorem}
+\begin{proof}[Sketch of Proof]
+ You might like to try this one yourself before reading on:
+ it's a repeated application of \Cref{thm:basis_completion}.
+
+ Let $\ker T$ have dimension $n-k$.
+ We can pick $v_{k+1}, \dots, v_{n}$ a basis of $\ker T$.
+ Then extend it to a basis $v_1, \dots, v_n$ of $V$.
+ The map $T$ is injective over the span of $v_1, \dots, v_k$
+ (since only $0_V$ is in the kernel) so its images in $W$ are linearly independent.
+ Setting $w_i = T(v_i)$ for each $i$,
+ we get some linearly independent set in $W$.
+ Then extend it again to a basis of $W$.
+\end{proof}
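+Here is a small instance of the theorem, with a map of my own choosing:

```latex
+\begin{example}
+ [Picking a basis for a concrete map]
+ Let $T \colon \RR^3 \to \RR^2$ be given by $T(x,y,z) = (x+y, \; y+z)$.
+ Its kernel is $\left\{ (t,-t,t) \mid t \in \RR \right\}$,
+ which has dimension $1$, so $k = 3 - 1 = 2$.
+ Following the proof, pick $v_3 = (1,-1,1)$ as a basis of $\ker T$
+ and extend it to a basis of $\RR^3$ by $v_1 = (1,0,0)$ and $v_2 = (0,1,0)$.
+ Then $w_1 = T(v_1) = (1,0)$ and $w_2 = T(v_2) = (1,1)$
+ form a basis of $\RR^2$, and indeed
+ \[ T(v_1) = w_1, \qquad T(v_2) = w_2, \qquad T(v_3) = 0. \]
+\end{example}
```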
+
+This theorem is super important,
+not only because of applications but also
+because it will give you the right picture in your head
+of how a linear map is supposed to look.
+I'll even draw a cartoon of it to make sure you remember:
+
+\begin{center}
+\begin{asy}
+ unitsize(0.7cm);
+ real d = 3;
+
+ filldraw(box( (-3*d/2,3.5), (-d/2,-4.5) ), opacity(0.1)+lightcyan, blue);
+ filldraw(box( (d/2,3.5), (3*d/2,-5.5) ), opacity(0.1)+lightred, red);
+
+ label(scale(1.5)*"$V$", (-d,4), blue);
+ dot("$e_1$", (-d,3), dir(180), blue);
+ dot("$e_2$", (-d,2), dir(180), blue);
+ label("$\vdots$", (-d,1), blue);
+ dot("$e_k$", (-d,0), dir(180), blue);
+ dot("$e_{k+1}$", (-d,-1), dir(180), blue);
+ dot("$e_{k+2}$", (-d,-2), dir(180), blue);
+ label("$\vdots$", (-d,-3), dir(180), blue);
+ dot("$e_n$", (-d,-4), dir(180), blue);
+
+ label(scale(1.5)*"$W$", (d,4), red);
+ dot("$f_1$", (d,3), dir(0), red);
+ dot("$f_2$", (d,2), dir(0), red);
+ label("$\vdots$", (d,1), red);
+ dot("$f_k$", (d,0), dir(0), red);
+ dot("$f_{k+1}$", (d,-1), dir(0), red);
+ dot("$f_{k+2}$", (d,-2), dir(0), red);
+ dot("$f_{k+3}$", (d,-3), dir(0), red);
+ label("$\vdots$", (d,-4), dir(0), red);
+ dot("$f_m$", (d,-5), dir(0), red);
+
+ label("$T$", (0,3), dir(90));
+ draw( (-d,3)--(d,3), EndArrow, Margin(3,3) );
+ draw( (-d,2)--(d,2), EndArrow, Margin(3,3) );
+ draw( (-d,0)--(d,0), EndArrow, Margin(3,3) );
+ draw( (-d,-1)--(0,-1), EndArrow, Margin(3,3) );
+ draw( (-d,-2)--(0,-2), EndArrow, Margin(3,3) );
+ draw( (-d,-4)--(0,-4), EndArrow, Margin(3,3) );
+ label("$0$", (0,-1));
+ label("$0$", (0,-2));
+ label("$0$", (0,-4));
+
+ draw( (5.5,3)--(6,3)--(6,0)--(5.5,0));
+ label("$\operatorname{im} T$", (6, 1.5), dir(0));
+ draw( (-5.5,-1)--(-6,-1)--(-6,-4)--(-5.5,-4) );
+ label("$\ker T$", (-6, -2.5), dir(180));
+\end{asy}
+\end{center}
+
+In particular, for $T \colon V \to W$,
+one can write $V = \ker T \oplus V'$,
+so that $T$ annihilates its kernel while sending $V'$
+to an isomorphic copy in $W$.
+
+A corollary of this (which you should have expected anyways)
+is the so-called rank-nullity theorem,
+which is the analog of the first isomorphism theorem.
+\begin{theorem}
+ [Rank-nullity theorem]
+ \label{thm:rank_nullity}
+ Let $V$ and $W$ be finite-dimensional vector spaces.
+ If $T \colon V \to W$, then
+ \[ \dim V = \dim \ker T + \dim \img T. \]
+\end{theorem}
+\begin{ques}
+ Conclude the rank-nullity theorem from \Cref{thm:linear_map_basis}.
+\end{ques}
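Rank-nullity is easy to sanity-check numerically. Here is a small sketch (not from the text; it uses numpy and a made-up matrix) computing both sides for a sample linear map from a $5$-dimensional space to a $3$-dimensional one:

```python
import numpy as np

# A made-up matrix T representing a linear map R^5 -> R^3.
T = np.array([
    [1, 2, 3, 4, 5],
    [0, 1, 0, 1, 0],
    [1, 3, 3, 5, 5],  # row 1 + row 2, so dim im T = 2
])

n = T.shape[1]                    # dim V = 5
rank = np.linalg.matrix_rank(T)   # dim im T
nullity = n - rank                # dim ker T, by rank-nullity

assert rank + nullity == n
print(rank, nullity)  # 2 3
```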
+
+\section{A cute application: Lagrange interpolation}
+Here's a cute application\footnote{Source: Communicated to me
+by Joe Harris at the first Harvard-MIT Undergraduate Math Symposium.}
+of linear algebra to a theorem from high school.
+\begin{theorem}
+ [Lagrange interpolation]
+ Let $x_1, \dots, x_{n+1}$ be distinct real numbers
+ and $y_1, \dots, y_{n+1}$ any real numbers.
+ Then there exists a \emph{unique}
+ polynomial $P$ of degree at most $n$
+ such that \[ P(x_i) = y_i \] for every $i$.
+\end{theorem}
+When $n = 1$ for example, this loosely
+says there is a unique line joining two points.
+\begin{proof}
+ The idea is to consider the vector space $V$
+ of polynomials with degree at most $n$,
+ as well as the vector space $W = \RR^{n+1}$.
+ \begin{ques}
+ Check that $\dim V = n + 1 = \dim W$.
+ This is easiest to do if you pick a basis for $V$,
+ but you can then immediately forget about the basis
+ once you finish this exercise.
+ \end{ques}
+ Then consider the linear map $T : V \to W$ given by
+ \[ P \mapsto \left( P(x_1), \dots, P(x_{n+1}) \right). \]
+ This is indeed a linear map because,
+ well, $T(P+Q) = T(P)+T(Q)$ and $T(cP) = cT(P)$.
+ It also happens to be injective: if $P \in \ker T$,
+ then $P(x_1) = \dots = P(x_{n+1}) = 0$,
+ but $\deg P \le n$ and so $P$ can only be the zero polynomial.
+
+ So $T$ is an injective map between vector spaces of the same dimension.
+ Thus it is actually a bijection, which is exactly what we wanted.
+\end{proof}
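The map $T$ in the proof, written in the monomial basis of $V$, is just a Vandermonde matrix, so one can compute the interpolating polynomial by solving a linear system. A sketch (the sample points and values below are made up for illustration; numpy is assumed):

```python
import numpy as np

# Three distinct points (so n = 2) and arbitrary target values.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 7.0])

# Column j of V holds x^j, so V @ coeffs evaluates a polynomial of
# degree <= n at each x_i.  This matrix is exactly the map T.
V = np.vander(xs, N=len(xs), increasing=True)
coeffs = np.linalg.solve(V, ys)   # T is bijective, so this is unique

# The recovered polynomial passes through all the given points.
assert np.allclose(np.polyval(coeffs[::-1], xs), ys)
print(coeffs)  # [1. 1. 1.], i.e. P(x) = 1 + x + x^2
```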
+
+\section{(Digression) Arrays of numbers are evil}
+\label{sec:basis_evil}
+As I'll stress repeatedly, a matrix represents a
+\emph{linear map between two vector spaces}.
+Writing it in the form of an $m \times n$ matrix
+is merely a very convenient way to see the map concretely.
+But it obfuscates the fact that this map is,
+well, a map, not an array of numbers.
+
+If you took high school precalculus, you'll have seen everything done in terms of matrices.
+To any typical high school student, a matrix is an array of numbers.
+No one is sure what exactly these numbers represent,
+but they're told how to magically multiply these arrays to get more arrays.
+They're told that the matrix
+\[ \begin{bmatrix}
+ 1 & 0 & \dots & 0 \\
+ 0 & 1 & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \dots & 1 \\
+ \end{bmatrix} \]
+is an ``identity matrix'', because when you multiply
+by another matrix it doesn't change.
+Then they're told that the determinant is some magical combination of these
+numbers formed by this weird multiplication rule.
+No one knows what this determinant does,
+other than the fact that $\det(AB) = \det A \det B$,
+and something about areas and row operations and Cramer's rule.
+
+Then you go into linear algebra in college, and you do more magic
+with these arrays of numbers.
+You're told that two matrices $T_1$ and $T_2$ are similar if
+\[ T_2 = ST_1S\inv \] for some invertible matrix $S$.
+You're told that the trace of a matrix $\Tr T$ is the sum of the diagonal entries.
+Somehow this doesn't change if you look at a similar matrix,
+but you're not sure why.
+Then you define the characteristic polynomial as
+\[ p_T(X) = \det (XI - T). \]
+Somehow this also doesn't change if you take a similar matrix,
+but now you really don't know why.
+And then you have the Cayley-Hamilton theorem in all its black magic:
+$p_T(T)$ is the zero map. Out of curiosity you Google the proof,
+and you find some ad-hoc procedure which still leaves you
+with no idea why it's true.
+
+This is terrible. What's so special about $T_2 = ST_1S\inv$?
+Only if you know that the matrices are linear maps does this make sense:
+$T_2$ is just $T_1$ rewritten with a different choice of basis.
+
+I really want to push the opposite view.
+Linear algebra is the study of \emph{linear maps},
+but it is taught as the study of \emph{arrays of numbers},
+and no one knows what these numbers mean.
+And for a good reason: the numbers are meaningless.
+They are a highly convenient way of encoding the matrix,
+but they are not the main objects of study,
+any more than the dates of events are the main objects of study in history.
+
+The other huge downside is that people get the impression
+that the only (real) vector space in existence is $\RR^{\oplus n}$.
+As explained in \Cref{rem:vector_spaces_have_essence},
+while you \emph{can} work this way if you're a soulless robot,
+it's very unnatural for humans to do so.
+
+When I took Math 55a as a freshman at Harvard,
+I got the exact opposite treatment:
+we did all of linear algebra without writing down a single matrix.
+During all this time I was quite confused.
+What's wrong with a basis?
+I didn't appreciate until later that this approach was the
+morally correct way to treat the subject: it made it clear what was happening.
+
+Throughout the Napkin, I've tried to strike a balance between these
+two approaches, using matrices when appropriate to illustrate
+the maps and to simplify proofs, but ultimately writing
+theorems and definitions in their \emph{morally correct} form.
+I hope that this has both the advantage of giving the ``right'' definitions
+while being concrete enough to be digested.
+But I would like to say for the record that,
+if I had to pick between the high school approach and the 55a approach,
+I would pick 55a in a heartbeat.
+
+\section{A word on general modules}
+\prototype{$\ZZ[\sqrt2]$ is a $\ZZ$-module of rank two.}
+I focused mostly on vector spaces (aka modules over a field) in this chapter
+for simplicity, so I want to make a few remarks about
+modules over a general commutative ring $R$ before concluding.
+
+Firstly, recall that for general modules,
+we say ``generating set'' instead of ``spanning set''.
+Shrug.
+
+The main issue with rings is that our key theorem \Cref{thm:vector_best}
+fails in spectacular ways.
+For example, consider $\ZZ$ as a $\ZZ$-module over itself.
+Then $\{2\}$ is linearly independent, but it cannot be extended to a basis.
+Similarly, $\{2,3\}$ is spanning, but one cannot cut it down to a basis.
+You can see why defining dimension is going to be difficult.
+
+Nonetheless, there are still analogs of some of the definitions above.
+\begin{definition}
+ An $R$-module $M$ is called \vocab{finitely generated} if it has a finite generating set.
+\end{definition}
+\begin{definition}
+ An $R$-module $M$ is called \vocab{free} if it has a basis.
+ As said before, the analogue of the dimension theorem holds,
+ and we use the word \vocab{rank} to denote the size of the basis.
+ As before, there's an isomorphism $M \cong R^{\oplus n}$ where $n$ is the rank.
+\end{definition}
+\begin{example}[An example of a $\ZZ$-module]
+ The $\ZZ$-module
+ \[ \ZZ[\sqrt2] = \left\{ a + b\sqrt 2 \mid a,b \in \ZZ \right\} \]
+ has a basis $\{1, \sqrt 2\}$, so we say it is
+ a free $\ZZ$-module of rank $2$.
+\end{example}
+
+\begin{abuse}
+ [Notation for groups]
+ Recall that an abelian group
+ can be viewed as a $\ZZ$-module (and in fact vice versa!),
+ so we can (and will) apply these words to abelian groups.
+ We'll use the notation $G \oplus H$ for two abelian groups $G$ and $H$
+ for their Cartesian product, emphasizing the fact that $G$ and $H$ are abelian.
+ This will happen when we study algebraic number theory and homology groups.
+\end{abuse}
+
+\section{\problemhead}
+General hint:
+\Cref{thm:linear_map_basis} will be your best friend
+for many of these problems.
+
+\begin{dproblem}
+ Let $V$ and $W$ be finite-dimensional vector spaces
+ with nonzero dimension, and consider linear maps $T \colon V \to W$.
+ Complete the following table by writing
+ ``sometimes'', ``always'', or ``never'' for each entry.
+ \begin{center}
+ \begin{tabular}[h]{c|ccc}
+ & $T$ injective & $T$ surjective & $T$ isomorphism \\ \hline
+ If $\dim V > \dim W$\dots \\
+ If $\dim V = \dim W$\dots \\
+ If $\dim V < \dim W$\dots \\
+ \end{tabular}
+ \end{center}
+ \begin{hint}
+ Use the rank-nullity theorem.
+ Also consider the zero map.
+ \end{hint}
+ \begin{sol}
+ \begin{center}
+ \begin{tabular}[h]{c|ccc}
+ & $T$ injective & $T$ surjective & $T$ isomorphism \\ \hline
+ If $\dim V > \dim W$\dots & never & sometimes & never\\
+ If $\dim V = \dim W$\dots & sometimes & sometimes & sometimes \\
+ If $\dim V < \dim W$\dots & sometimes & never & never \\
+ \end{tabular}
+ \end{center}
+ Each ``never'' is by the rank-nullity theorem.
+ Each counterexample is obtained by the zero map
+ sending every element of $V$ to zero;
+ this map is certainly neither injective nor surjective.
+ \end{sol}
+\end{dproblem}
+
+\begin{dproblem}
+ [Equal-dimension vector spaces are usually isomorphic]
+ \label{prob:equal_dimension}
+ Let $V$ and $W$ be finite-dimensional vector
+ spaces with $\dim V = \dim W$.
+ Prove that for a map $T \colon V \to W$,
+ the following are equivalent:
+ \begin{itemize}
+ \ii $T$ is injective,
+ \ii $T$ is surjective,
+ \ii $T$ is bijective.
+ \end{itemize}
+ \begin{sol}
+ It essentially follows by \Cref{thm:linear_map_basis}.
+ \end{sol}
+\end{dproblem}
+
+\begin{problem}
+ [Multiplication by $\sqrt5$]
+ Let $V = \QQ[\sqrt5] = \left\{ a + b \sqrt 5 \mid a, b \in \QQ \right\}$
+ be a two-dimensional $\QQ$-vector space,
+ and fix the basis $\{1, \sqrt 5\}$ for it.
+ Write down the $2 \times 2$ matrix with rational coefficients
+ that corresponds to multiplication by $\sqrt 5$.
+ \begin{hint}
+ $a + b \sqrt 5 \mapsto 5b + a \sqrt 5$.
+ \end{hint}
+ \begin{sol}
+ Since $1 \mapsto \sqrt5$ and $\sqrt5 \mapsto 5$,
+ the matrix is
+ $\begin{bmatrix}
+ 0 & 5 \\
+ 1 & 0
+ \end{bmatrix}$.
+ \end{sol}
+\end{problem}
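One way to check a matrix like this: multiplying by $\sqrt5$ twice should be the same as multiplying by $5$. A quick numerical sketch (numpy assumed; not part of the problem):

```python
import numpy as np

# Multiplication by sqrt5 on Q[sqrt5] in the basis {1, sqrt5}.
# Columns are images of basis vectors: 1 -> sqrt5 = (0,1), sqrt5 -> 5 = (5,0).
M = np.array([
    [0, 5],
    [1, 0],
])

# Multiplying by sqrt5 twice is multiplying by 5, so M^2 = 5I.
assert np.array_equal(M @ M, 5 * np.eye(2))

# (a, b) encodes a + b*sqrt5; e.g. 2 + 3*sqrt5 maps to 15 + 2*sqrt5.
v = np.array([2, 3])
print(M @ v)  # [15  2]
```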
+
+\begin{problem}
+ [Multivariable Lagrange interpolation]
+ Let $S \subset {\mathbb Z}^2$ be a set of $n$ lattice points.
+ Prove that there exists a nonzero two-variable polynomial $p$
+ with real coefficients, of degree at most $\sqrt{2n}$,
+ such that $p(x,y) = 0$ for every $(x,y) \in S$.
+\end{problem}
+
+\begin{problem}
+ [Putnam 2003]
+ Do there exist polynomials $a(x)$, $b(x)$, $c(y)$, $d(y)$ such that
+ \[ 1 + xy + (xy)^2 = a(x)c(y) + b(x)d(y) \]
+ holds identically?
+ \begin{hint}
+ Plug in $y = -1, 0, 1$. Use dimensions of $\RR[x]$.
+ \end{hint}
+\end{problem}
+
+\begin{problem}[TSTST 2014]
+ \gim
+ Let $P(x)$ and $Q(x)$ be arbitrary polynomials with real coefficients,
+ and let $d$ be the degree of $P(x)$.
+ Assume that $P(x)$ is not the zero polynomial.
+ Prove that there exist polynomials $A(x)$ and $B(x)$ such that
+ \begin{enumerate}[(i)]
+ \ii Both $A$ and $B$ have degree at most $d/2$,
+ \ii At most one of $A$ and $B$ is the zero polynomial,
+ \ii $P$ divides $A+Q \cdot B$.
+ \end{enumerate}
+ \begin{hint}
+ Interpret as $V \oplus V \to W$ for suitable $V$, $W$.
+ \end{hint}
+ \begin{sol}
+ Let $V$ be the space of real polynomials with degree at most $d/2$
+ (which has dimension $1 + \left\lfloor d/2 \right\rfloor$),
+ and $W$ be the space of real polynomials modulo $P$ (which has dimension $d$).
+ Then $\dim (V \oplus V) > \dim W$.
+ So the linear map $V \oplus V \to W$ by $(A,B) \mapsto A + Q \cdot B$
+ has a kernel of positive dimension
+ (by rank-nullity, for example).
+ \end{sol}
+\end{problem}
+
+\begin{sproblem}
+ [Idempotents are projection maps]
+ \label{prob:idempotent}
+ Let $P \colon V \to V$ be a linear map,
+ where $V$ is a vector space
+ (not necessarily finite-dimensional).
+ Suppose $P$ is \vocab{idempotent},
+ meaning $P(P(v)) = P(v)$ for each $v \in V$,
+ or equivalently $P$ is the identity on its image.
+ Prove that \[ V = \ker P \oplus \img P. \]
+ Thus we can think of $P$ as \emph{projection}
+ onto the subspace $\img P$.
+\end{sproblem}
+
+\begin{sproblem}
+ \label{prob:endomorphism_eventual_lemma}
+ \gim
+ Let $V$ be a finite dimensional vector space.
+ Let $T \colon V \to V$ be a linear map,
+ and let $T^n \colon V \to V$ denote $T$ applied $n$ times.
+ Prove that there exists an integer $N$ such that
+ \[ V = \ker T^N \oplus \img T^N. \]
+ \begin{hint}
+ Use the fact that the infinite chain of subspaces
+ \[ \ker T \subseteq \ker T^2 \subseteq \ker T^3 \subseteq \dots \]
+ and the similar chain for $\img T$ must eventually stabilize
+ (for dimension reasons).
+ \end{hint}
+ \begin{sol}
+ Consider
+ \[
+ \{0\} \subseteq \ker T \subseteq \ker T^2 \subseteq \ker T^3 \subseteq \dots
+ \text{ and }
+ V \supseteq \img T \supseteq \img T^2 \supseteq \img T^3 \supseteq \dots.
+ \]
+ For dimension reasons, these subspaces must eventually stabilize:
+ for some large integer $N$,
+ $\ker T^N = \ker T^{N+1} = \dots$
+ and $\img T^N = \img T^{N+1} = \img T^{N+2} = \dots$.
+ When this happens, $\ker T^N \cap \img T^N = \{0\}$,
+ since $T^N$ is an automorphism of $\img T^N$.
+ On the other hand, by rank-nullity we also have
+ $\dim \ker T^N + \dim \img T^N = \dim V$.
+ Thus for dimension reasons, $V = \ker T^N \oplus \img T^N$.
+ \end{sol}
+\end{sproblem}
+
+%Now consider the sequences
diff --git a/books/napkin/vectors.tex b/books/napkin/vectors.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5ac944d3b0a22bf027c92dd9894b24ea930309c3
--- /dev/null
+++ b/books/napkin/vectors.tex
@@ -0,0 +1,487 @@
+\chapter{Quantum states and measurements}
+In this chapter we'll explain how to set up quantum states using
+linear algebra. This will allow me to talk about quantum \emph{circuits}
+in the next chapter, which will set the stage for Shor's algorithm.
+
+I won't do very much physics (read: none at all).
+That is, I'll only state what the physical reality is in terms
+of linear algebras, and defer the philosophy of why this is true
+to your neighborhood ``Philosophy of Quantum Mechanics'' class
+(which is a ``social science'' class at MIT!).
+
+\section{Bra-ket notation}
+Physicists have their own notation for vectors:
+whereas I previously used something like $v$, $e_1$, and so on,
+in this chapter you'll see the infamous \vocab{bra-ket} notation:
+a vector will be denoted by $\ket\bullet$, where $\bullet$ is some
+variable name; unlike in math or Python, this can include
+numbers, symbols, Unicode characters, whatever you like.
+This is called a ``ket''.
+To pay homage to physicists everywhere,
+we'll use this notation in this chapter too.
+
+\begin{abuse}
+ [For this part, $\dim H < \infty$]
+ In this part on quantum computation,
+ we'll use the word ``Hilbert space'' as defined earlier,
+ but in fact all our Hilbert spaces will be finite-dimensional.
+\end{abuse}
+
+If $\dim H = n$, then its orthonormal basis elements are often denoted
+\[ \ket0, \ket1, \dots, \ket{n-1} \]
+(instead of $e_i$)
+and a generic element of $H$ denoted by
+\[ \ket\psi, \ket\phi, \dots \]
+and various other Greek letters.
+
+Now for any $\ket\psi \in H$,
+we can consider the canonical dual element in $H^\vee$
+(since $H$ has an inner form), which we denote by $\bra\psi$ (a ``bra'').
+For example, if $\dim H = 2$ then we can write
+\[ \ket\psi = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \]
+in an orthonormal basis, in which case
+\[ \bra\psi = \begin{bmatrix} \ol\alpha & \ol\beta \end{bmatrix}. \]
+We even can write dot products succinctly in this notation:
+if $\ket\phi = \begin{bmatrix} \gamma \\ \delta \end{bmatrix}$,
+then the dot product of $\ket\phi$ and $\ket\psi$ is given by
+\[
+ \braket{\psi|\phi}
+ = \begin{bmatrix} \ol\alpha & \ol\beta \end{bmatrix}
+ \begin{bmatrix} \gamma \\ \delta \end{bmatrix}
+ = \ol\alpha\gamma + \ol\beta \delta.
+\]
+So we will use the notation $\braket{\psi|\phi}$
+instead of the more mathematical $\left< \ket\psi, \ket\phi \right>$.
+In particular, the squared norm of $\ket\psi$ is just $\braket{\psi|\psi}$.
+Concretely, for $\dim H = 2$ we have
+$\braket{\psi|\psi} = |\alpha|^2 + |\beta|^2$.
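Numerically, the bra-ket inner product is "conjugate the first vector, then dot". A sketch (numpy assumed; the kets below are made up), noting that `np.vdot` conjugates its first argument, which matches the convention above:

```python
import numpy as np

# Made-up kets for illustration.
psi = np.array([1 + 1j, 2 - 1j])
phi = np.array([3 + 0j, 1 + 2j])

# <psi|phi> = conj(psi) . phi; vdot conjugates its first argument.
braket = np.vdot(psi, phi)
assert braket == np.conj(psi) @ phi

# <psi|psi> is the squared norm: always real and nonnegative.
norm_sq = np.vdot(psi, psi)
assert np.isclose(norm_sq.imag, 0)
print(norm_sq.real)  # 7.0
```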
+
+\section{The state space}
+If you think that's weird, well, it gets worse.
+
+In classical computation, a bit is either $0$ or $1$.
+More generally, we can think of a classical space of $n$
+possible states $0$, \dots, $n-1$.
+Thus in the classical situation, the space of possible states
+is just a discrete set with $n$ elements.
+
+In quantum computation, a \vocab{qubit} is instead
+any \emph{complex linear combination} of $\ket0$ and $\ket1$.
+To be precise, consider the normed complex vector space
+\[ H = \CC^{\oplus 2} \]
+and denote the orthonormal basis elements by $\ket0$ and $\ket1$.
+Then a \emph{qubit} is a nonzero element $\ket\psi \in H$,
+so that it can be written in the form
+\[ \ket\psi = \alpha \ket 0 + \beta \ket 1 \]
+where $\alpha$ and $\beta$ are not both zero.
+Typically, we normalize so that $\ket\psi$ has norm $1$:
+\[ \braket{\psi|\psi} = 1 \iff |\alpha|^2 + |\beta|^2 = 1. \]
+In particular, we can recover the ``classical'' situation
+with $\ket 0 \in H$ and $\ket 1 \in H$,
+but now we have some ``intermediate'' states,
+such as \[ \frac{1}{\sqrt2} \left(\ket 0 + \ket 1 \right). \]
+Philosophically, what has happened is that:
+\begin{moral}
+ Instead of allowing just the states $\ket 0$ and $\ket 1$,
+ we allow any complex linear combination of them.
+\end{moral}
+More generally, if $\dim H = n$,
+then the possible states are nonzero elements
+\[ c_0\ket0 + c_1\ket1 + \dots + c_{n-1}\ket{n-1} \]
+which we usually normalize so that
+$|c_0|^2 + |c_1|^2 + \dots + |c_{n-1}|^2 = 1$.
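In code, normalization is just division by the norm. A sketch (numpy assumed; the coefficients are made up):

```python
import numpy as np

# An arbitrary nonzero state: 3|0> + 4i|1>, which has norm 5.
raw = np.array([3 + 0j, 4j])
psi = raw / np.sqrt(np.vdot(raw, raw).real)   # divide by the norm

# After normalization, <psi|psi> = 1 ...
assert np.isclose(np.vdot(psi, psi).real, 1.0)

# ... and the |c_i|^2 sum to 1 (these will be measurement probabilities).
print(np.abs(psi) ** 2)  # [0.36 0.64]
```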
+
+\section{Observations}
+\prototype{$\id$ corresponds to not making a measurement
+ since all its eigenvalues are equal,
+ but any operator with distinct eigenvalues will cause collapse.}
+If you think that's weird, well, it gets worse.
+First, some linear algebra:
+\begin{definition}
+ Let $V$ be a finite-dimensional inner product space.
+ For a map $T: V \to V$, the following conditions are equivalent:
+ \begin{itemize}
+ \ii $\left< Tx, y\right> = \left< x, Ty \right>$
+ for any $x,y \in V$.
+ \ii $T = T^\dagger$.
+ \end{itemize}
+ A map $T$ satisfying these conditions is called \vocab{Hermitian}.
+\end{definition}
+\begin{ques}
+ Show that $T$ is normal.
+\end{ques}
+Thus, we know that $T$ is diagonalizable
+with respect to the inner form, so for a suitable basis we
+can write it in an orthonormal basis as
+\[
+ T = \begin{bmatrix}
+ \lambda_0 & 0 & \dots & 0 \\
+ 0 & \lambda_1 & \dots & 0 \\
+ \vdots & \vdots & \ddots & \vdots \\
+ 0 & 0 & \dots & \lambda_{n-1}
+ \end{bmatrix}.
+\]
+As we've said, this is fantastic:
+not only do we have a basis of eigenvectors,
+but the eigenvectors are pairwise orthogonal,
+and so they form an orthonormal basis of $V$.
+\begin{ques}
+ Show that all eigenvalues of $T$ are real.
+ (Use the fact that $T = T^\dagger$.)
+\end{ques}
+
+Back to quantum computation.
+Suppose we have a state $\ket\psi \in H$, where $\dim H = 2$;
+we haven't distinguished a particular basis yet,
+so we just have a nonzero vector.
+Then the way observations work (and this is physics, so you'll have to
+take my word for it) is as follows:
+\begin{moral}
+ Pick a Hermitian operator $T : H \to H$;
+ then observations of $T$ return eigenvalues of $T$.
+\end{moral}
+To be precise:
+\begin{itemize}
+ \ii Pick a Hermitian operator $T : H \to H$,
+ which is called the \vocab{observable}.
+ \ii Consider its eigenvalues $\lambda_0$, \dots, $\lambda_{n-1}$
+ and corresponding eigenvectors $\ket{0}_T$, \dots, $\ket{n-1}_T$.
+ Tacitly we may assume that $\ket{0}_T$, \dots, $\ket{n-1}_T$ form
+ an orthonormal basis of $H$.
+ (The subscript $T$ is here to distinguish the eigenvectors of $T$
+ from the basis elements of $H$.)
+ \ii Write $\ket\psi$ in the orthonormal basis as
+ \[ c_0\ket0_T + c_1\ket1_T + \dots + c_{n-1}\ket{n-1}_T. \]
+ \ii Then the probability of observing $\lambda_i$ is
+ \[ \frac{|c_i|^2}{|c_0|^2 + \dots + |c_{n-1}|^2}. \]
+ This is called making an \vocab{observation along $T$}.
+\end{itemize}
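The recipe above can be sketched in a few lines of numpy (not from the text; the observable and state are made up). Here `np.linalg.eigh` returns real eigenvalues and an orthonormal basis of eigenvectors, exactly what a Hermitian operator guarantees:

```python
import numpy as np

# Made-up Hermitian observable and normalized state.
T = np.array([[0, 7], [7, 0]], dtype=complex)
psi = np.array([1j / np.sqrt(5), 2 / np.sqrt(5)])

# Eigenvalues (real) and orthonormal eigenvectors (columns).
eigenvalues, eigenvectors = np.linalg.eigh(T)

# Write psi in the eigenbasis: c_i = <i_T|psi>.
c = eigenvectors.conj().T @ psi

# Probability of observing lambda_i (psi is normalized, so no denominator).
probs = np.abs(c) ** 2
assert np.isclose(probs.sum(), 1.0)
print(np.round(probs, 3))  # [0.5 0.5]
```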
+Note that in particular, for any nonzero constant $c$,
+$\ket\psi$ and $c\ket\psi$ are indistinguishable,
+which is why we like to normalize $\ket\psi$.
+But the queerest thing of all is what happens to $\ket\psi$:
+by measuring it, we actually destroy information.
+This behavior is called \vocab{quantum collapse}.
+\begin{itemize}
+ \ii Suppose for simplicity that we observe $\ket\psi$
+ with $T$ and obtain an eigenvalue $\lambda$,
+ and that $\ket{i}_T$ is the only eigenvector with this eigenvalue.
+ Then, the state $\ket\psi$ \emph{collapses} to just the state
+ $c_i \ket{i}_T$: all the other information is destroyed.
+ (In fact, we may as well say it collapses to $\ket{i}_T$,
+ since again constant factors are not relevant.)
+
+ \ii More generally, if we observe $\lambda$,
+ consider the eigenspace $H_\lambda$
+ (i.e.\ the span of the eigenvectors with eigenvalue $\lambda$).
+ Then the physical state $\ket\psi$ has been changed as well:
+ it has now been projected onto the eigenspace $H_\lambda$.
+ In still other words, after observation, the state collapses to
+ \[
+ \sum_{\substack{0 \le i \le n-1 \\ \lambda_i = \lambda}}
+ c_i \ket{i}_T.
+ \]
+\end{itemize}
+In other words,
+\begin{moral}
+ When we make a measurement,
+ the coefficients from different eigenspaces are destroyed.
+\end{moral}
+Why does this happen? Beats me\dots physics (and hence real life) is weird.
+But anyways, an example.
+\begin{example}
+ [Quantum measurement of a state $\ket\psi$]
+ Let $H = \CC^{\oplus 2}$ with orthonormal basis $\ket0$ and $\ket1$
+ and consider the state
+ \[
+ \ket\psi
+ = \frac{i}{\sqrt5} \ket 0
+ + \frac{2}{\sqrt5} \ket 1
+ = \pair{i/\sqrt5}{2/\sqrt5} \in H.
+ \]
+ \begin{enumerate}[(a)]
+ \ii Let \[ T = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}. \]
+ This has eigenvectors $\ket0 = \ket0_T$ and $\ket1 = \ket1_T$,
+ with eigenvalues $+1$ and $-1$. So if we measure $\ket\psi$ along $T$,
+ we get $+1$ with probability $1/5$ and $-1$ with probability $4/5$.
+ After this measurement, the original state collapses to
+ $\ket0$ if we measured $+1$, and $\ket1$ if we measured $-1$.
+ So we never learn the original probabilities.
+
+ \ii Now consider $T = \id$, and arbitrarily
+ pick two orthonormal eigenvectors $\ket0_T$, $\ket1_T$;
+ thus $\ket\psi = c_0\ket0_T + c_1\ket1_T$.
+ Since all eigenvalues of $T$ are $+1$,
+ our measurement will always be $+1$ no matter what we do.
+ But there is also no collapsing,
+ because none of the coefficients get destroyed.
+
+ \ii Now consider
+ \[ T = \begin{bmatrix} 0 & 7 \\ 7 & 0 \end{bmatrix}. \]
+ The two normalized eigenvectors are
+ \[ \ket0_T = \frac{1}{\sqrt2}\pair11
+ \qquad \ket1_T = \frac{1}{\sqrt2}\pair1{-1} \]
+ with eigenvalues $+7$ and $-7$ respectively. In this basis, we have
+ \[
+ \ket\psi = \frac{2+i}{\sqrt{10}}\ket0_T
+ + \frac{-2+i}{\sqrt{10}}\ket1_T. \]
+ So we get $+7$ with probability $\half$ and $-7$
+ with probability $\half$, and after the measurement,
+ $\ket\psi$ collapses to one of $\ket0_T$ and $\ket1_T$.
+ \end{enumerate}
+\end{example}
+\begin{ques}
+ Suppose we measure $\ket\psi$ with $T$ and get $\lambda$.
+ What happens if we measure with $T$ again?
+\end{ques}
+
+For $H = \CC^{\oplus 2}$ we can come up with more classes of examples using
+the so-called \vocab{Pauli matrices}.
+These are the three Hermitian matrices
+\[
+ \sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}
+ \qquad
+ \sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
+ \qquad
+ \sigma_y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}.
+\]
+These matrices are important because:
+\begin{ques}
+ Show that these three matrices, plus the identity matrix,
+ form a basis for the set of Hermitian $2 \times 2$ matrices.
+\end{ques}
+So the Pauli matrices are a natural choice of basis.
+
+Their normalized eigenvectors are
+\[ \zup = \ket0 = \pair10 \qquad \zdown = \ket1 = \pair01 \]
+\[ \xup = \frac{1}{\sqrt2}\pair11
+ \qquad \xdown = \frac{1}{\sqrt2}\pair1{-1} \]
+\[ \yup = \frac{1}{\sqrt2}\pair1i
+ \qquad \ydown = \frac{1}{\sqrt2}\pair1{-i} \]
+which we call ``$z$-up'', ``$z$-down'',
+``$x$-up'', ``$x$-down'', ``$y$-up'', ``$y$-down''.
+(The eigenvalues are $+1$ for ``up'' and $-1$ for ``down''.)
+So, given a state $\ket\psi \in \CC^{\oplus 2}$
+we can make a measurement with respect to any of these three bases
+by using the corresponding Pauli matrix.
+
+In light of this, the previous examples were
+(a) measuring along $\sigma_z$,
+(b) measuring along $\id$,
+and (c) measuring along $7\sigma_x$.
+
+Notice that if we are given a state $\ket\psi$,
+and are told in advance that it is either $\xup$ or $\xdown$
+(or any other orthogonal states)
+then we are in what is more or less a classical situation.
+Specifically, if we make a measurement along $\sigma_x$,
+then we find out which state $\ket\psi$ was in (with 100\% certainty),
+and the state does not undergo any collapse.
+Thus, orthogonal states are reliably distinguishable.
+
+\section{Entanglement}
+\prototype{Singlet state: spooky action at a distance.}
+If you think that's weird, well, it gets worse.
+
+Qubits don't just act independently:
+they can talk to each other by means of a \emph{tensor product}.
+Explicitly, consider \[ H = \CC^{\oplus 2} \otimes \CC^{\oplus 2} \]
+endowed with the norm described in \Cref{prob:inner_prod_tensor}.
+One should think of this as a qubit $A$ in a space $H_A$
+along with a second qubit $B$ in a different space $H_B$,
+which have been allowed to interact in some way,
+and $H = H_A \otimes H_B$ is the set of possible states of \emph{both} qubits.
+Thus
+\[
+ \ket{0}_A \otimes \ket{0}_B, \quad
+ \ket{0}_A \otimes \ket{1}_B, \quad
+ \ket{1}_A \otimes \ket{0}_B, \quad
+ \ket{1}_A \otimes \ket{1}_B
+\]
+is an orthonormal basis of $H$;
+here $\ket{i}_A$ is the basis of the first $\CC^{\oplus 2}$
+while $\ket{i}_B$ is the basis of the second $\CC^{\oplus 2}$,
+so these vectors should be thought of as ``unrelated''
+just as with any tensor product.
+The pure tensors mean exactly what you want:
+for example $\ket0_A \otimes \ket1_B$ means
+``$0$ for qubit $A$ and $1$ for qubit $B$''.
+
+As before, a measurement of a state in $H$ requires
+a Hermitian map $H \to H$.
+In particular, if we only want to measure the qubit $B$ along $M_B$,
+we can use the operator \[ \id_A \otimes M_B. \]
+The eigenvalues of this operator coincide with the ones for $M_B$,
+and the eigenspace for $\lambda$ will be $H_A \otimes (H_B)_\lambda$,
+so when we take the projection the $A$ qubit will be unaffected.
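Concretely, the operator $\id_A \otimes M_B$ is a Kronecker product of matrices. A sketch with $M_B = \sigma_z$ (numpy assumed; this is an illustration, not the text's notation):

```python
import numpy as np

# Measure only qubit B along sigma_z: the operator id_A (x) sigma_z.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
M = np.kron(np.eye(2), sigma_z)

# Its eigenvalues are those of sigma_z, each with multiplicity two:
# the lambda-eigenspace is H_A (x) (H_B)_lambda.
assert np.allclose(np.linalg.eigvalsh(M), [-1, -1, 1, 1])

# In the basis |00>, |01>, |10>, |11> (B's bit varies fastest), M is diagonal.
print(np.real(np.diag(M)))  # [ 1. -1.  1. -1.]
```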
+
+This does what you would hope for pure tensors in $H$:
+\begin{example}[Two non-entangled qubits]
+ Suppose we have qubit $A$ in the state
+ $\frac{i}{\sqrt5}\ket0_A + \frac{2}{\sqrt5}\ket1_A$
+ and qubit $B$ in the state
+ $\frac{1}{\sqrt2} \ket0_B + \frac{1}{\sqrt2}\ket1_B$.
+ So, the two qubits in tandem are represented by the pure tensor
+ \[
+ \ket\psi
+ =
+ \left( \frac{i}{\sqrt5}\ket0_A + \frac{2}{\sqrt5}\ket1_A \right)
+ \otimes
+ \left( \frac{1}{\sqrt2} \ket0_B + \frac{1}{\sqrt2}\ket1_B \right).
+ \]
+ Suppose we measure $\ket\psi$ along
+ \[ M = \id_A \otimes \sigma_z^B. \]
+ The eigenspace decomposition is
+ \begin{itemize}
+ \ii $+1$ for the span of $\ket0_A \otimes \ket0_B$ and
+ $\ket1_A \otimes \ket0_B$, and
+ \ii $-1$ for the span of $\ket0_A \otimes \ket1_B$ and
+ $\ket1_A \otimes \ket1_B$.
+ \end{itemize}
+ (We could have used other bases, like $\xup_A \otimes \ket0_B$ and
+ $\xdown_A \otimes \ket0_B$ for the first eigenspace, but it doesn't matter.)
+ Expanding $\ket\psi$ in the four-element basis, we find that
+ we'll get the first eigenspace with probability
+ \[ \left|\frac{i}{\sqrt{10}}\right|^2
+ + \left|\frac{2}{\sqrt{10}}\right|^2 = \half, \]
+ and the second eigenspace with probability $\half$ as well.
+ (Note how the coefficients for $A$ don't do anything!)
+ After the measurement, we destroy the coefficients of the other eigenspace;
+ thus (after re-normalization) we obtain the collapsed state
+ \[ \left( \frac{i}{\sqrt5}\ket0_A + \frac{2}{\sqrt5}\ket1_A \right)
+ \otimes \ket0_B
+ \qquad\text{or}\qquad
+ \left( \frac{i}{\sqrt5}\ket0_A + \frac{2}{\sqrt5}\ket1_A \right)
+ \otimes \ket1_B
+ \]
+ again with 50\% probability each.
+\end{example}
+So this model lets us more or less work with the two qubits independently:
+when we make the measurement, we just make sure to not touch the other qubit
+(which corresponds to the identity operator).
+
+\begin{exercise}
+ Show that if $\id_A \otimes \sigma_x^B$ is applied to the $\ket\psi$
+ in this example, there is no collapse at all.
+ What's the result of this measurement?
+\end{exercise}
+
+Since the $\otimes$ is getting cumbersome to write, we say:
+\begin{abuse}
+ From now on $\ket 0_A \otimes \ket 0_B$ will be abbreviated
+ to just $\ket{00}$, and similarly for $\ket{01}$, $\ket{10}$, $\ket{11}$.
+\end{abuse}
+
+\begin{example}
+ [Simultaneously measuring a general $2$-qubit state]
+ \label{ex:simult_measurement}
+ Consider a normalized state $\ket\psi$ in
+ $H = \CC^{\oplus 2} \otimes \CC^{\oplus 2}$, say
+ \[ \ket\psi = \alpha\ket{00} + \beta\ket{01}
+ + \gamma\ket{10} + \delta\ket{11}. \]
+ We can make a measurement along the diagonal matrix
+ $T : H \to H$ with
+ \[ T(\ket{00}) = 0\ket{00}, \quad
+ T(\ket{01}) = 1\ket{01}, \quad
+ T(\ket{10}) = 2\ket{10}, \quad
+ T(\ket{11}) = 3\ket{11}. \]
+ Thus we get each of the eigenvalues $0$, $1$, $2$, $3$
+ with probability $|\alpha|^2$, $|\beta|^2$, $|\gamma|^2$, $|\delta|^2$.
+ So if we like we can make ``simultaneous'' measurements on two qubits
+ in the same way that we make measurements on one qubit.
+\end{example}
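Since this $T$ is diagonal with distinct eigenvalues, the probabilities really are just the squared magnitudes of the coefficients. A sketch with made-up coefficients (numpy assumed):

```python
import numpy as np

# A made-up normalized 2-qubit state in the basis |00>, |01>, |10>, |11>.
alpha, beta, gamma, delta = 0.5, 0.5, 0.5, 0.5j
psi = np.array([alpha, beta, gamma, delta])
assert np.isclose(np.vdot(psi, psi).real, 1.0)

# Diagonal observable with distinct eigenvalues 0, 1, 2, 3, as in the
# example, so each basis vector is its own eigenspace.
T = np.diag([0, 1, 2, 3]).astype(complex)

# Probability of reading off eigenvalue i is |c_i|^2.
probs = np.abs(psi) ** 2
print(probs)  # [0.25 0.25 0.25 0.25]
```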
+
+However, some states behave very weirdly.
+\begin{example}[The singlet state]
+ Consider the state
+ \[
+ \ket{\Psi_-}
+ =
+ \frac{1}{\sqrt2} \ket{01}
+ - \frac{1}{\sqrt2} \ket{10}
+ \]
+ which is called the \vocab{singlet state}.
+ One can see that $\ket{\Psi_-}$ is not a simple tensor,
+ which means that it doesn't just consist of two qubits side by side:
+ the qubits in $H_A$ and $H_B$ have become \emph{entangled}.
+
+ Now, what happens if we measure just the qubit $A$?
+ This corresponds to making the measurement
+ \[ T = \sigma_z^A \otimes \id_B. \]
+ The eigenspace decomposition of $T$ can be described as:
+ \begin{itemize}
+ \ii The span of $\ket{00}$ and $\ket{01}$, with eigenvalue $+1$.
+ \ii The span of $\ket{10}$ and $\ket{11}$, with eigenvalue $-1$.
+ \end{itemize}
+ So one of two things will happen:
+ \begin{itemize}
+ \ii With probability $\half$, we measure $+1$
+ and the collapsed state is $\ket{01}$.
+ \ii With probability $\half$, we measure $-1$
+ and the collapsed state is $\ket{10}$.
+ \end{itemize}
+ But now we see that the measurement along $A$ has completely
+ determined the state of qubit $B$!
+\end{example}
+By looking only at measurements on $A$, we learn the state of $B$;
+this paradox is called \emph{spooky action at a distance},
+or in Einstein's tongue, \vocab{spukhafte Fernwirkung}.
+Thus,
+\begin{moral}
+ In tensor products of Hilbert spaces,
+ states which are not pure tensors
+ correspond to ``entangled'' states.
+\end{moral}
+
+What this really means is that the qubits cannot be described independently;
+the state of the system must be given as a whole.
+That's what entangled states mean: the qubits somehow depend on each other.
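The collapse computation for the singlet state can also be replayed numerically. The following plain-Python sketch is my own (not part of the text), with the basis ordered $\ket{00}, \ket{01}, \ket{10}, \ket{11}$:

```python
import math

# Basis order: |00>, |01>, |10>, |11>.  The singlet state:
s = 1 / math.sqrt(2)
psi = [0.0, s, -s, 0.0]

# sigma_z^A (x) id_B has eigenvalue +1 on span{|00>, |01>}
# and eigenvalue -1 on span{|10>, |11>}; project onto each eigenspace.
proj_plus = [psi[0], psi[1], 0.0, 0.0]
proj_minus = [0.0, 0.0, psi[2], psi[3]]

def norm2(v):
    # Squared norm of the projection = probability of that outcome.
    return sum(abs(a) ** 2 for a in v)

p_plus, p_minus = norm2(proj_plus), norm2(proj_minus)
assert math.isclose(p_plus, 0.5) and math.isclose(p_minus, 0.5)

# Outcome +1 collapses the state to |01>, so qubit B is then exactly |1>.
collapsed = [a / math.sqrt(p_plus) for a in proj_plus]
assert math.isclose(collapsed[1], 1.0)
```

The two projections carry probability $\half$ each, and the collapsed state after outcome $+1$ has all its weight on $\ket{01}$, matching the discussion above.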
+
+\section\problemhead
+
+\begin{problem}
+ We measure $\ket{\Psi_-}$ by $\sigma_x^A \otimes \id_B$,
+ and hence obtain either $+1$ or $-1$.
+ Determine the state of qubit $B$ from this measurement.
+ \begin{hint}
+ Rewrite $\ket{\Psi_-} = -\frac{1}{\sqrt2}
+ \left( \xup_A\otimes\xdown_B - \xdown_A\otimes\xup_B \right)$.
+ \end{hint}
+ \begin{sol}
+ By a straightforward computation,
+ we have $\ket{\Psi_-} = -\frac{1}{\sqrt2}
+ \left( \xup_A\otimes\xdown_B - \xdown_A\otimes\xup_B \right)$.
+ Now, $\xup_A \otimes \xup_B$, $\xup_A \otimes \xdown_B$
+ span one eigenspace of $\sigma_x^A \otimes \id_B$,
+ and $\xdown_A \otimes \xup_B$, $\xdown_A \otimes \xdown_B$
+ span the other. So this is the same as before:
+ $+1$ gives $\xdown_B$ and $-1$ gives $\xup_B$.
+ \end{sol}
+\end{problem}
+
+\begin{problem}
+ [Greenberger-Horne-Zeilinger paradox]
+ Consider the state in $(\CC^{\oplus2})^{\otimes 3}$
+ \[
+ \ket{\Psi}_{\text{GHZ}}
+ =
+ \frac{1}{\sqrt2}
+ \left( \ket0_A \ket0_B \ket0_C
+ - \ket1_A \ket1_B \ket1_C \right).
+ \]
+ Find the value of the measurements along each of
+ \[ \sigma_y^A \otimes \sigma_y^B \otimes \sigma_x^C , \quad
+ \sigma_y^A \otimes \sigma_x^B \otimes \sigma_y^C, \quad
+ \sigma_x^A \otimes \sigma_y^B \otimes \sigma_y^C, \quad
+ \sigma_x^A \otimes \sigma_x^B \otimes \sigma_x^C.
+ \]
+ As for the paradox: what happens if you multiply all these measurements together?
+ \begin{hint}
+ $1$, $1$, $1$, $-1$.
+ When we multiply them all together,
+ we get that $\id^A \otimes \id^B \otimes \id^C$
+ has measurement $-1$, which is the paradox.
+ What this means is that the values of the measurements
+ are created when we make the observation,
+ and not prepared in advance.
+ \end{hint}
+\end{problem}
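For readers who want to check the GHZ measurements by machine: below is a plain-Python sketch (my own scaffolding, not from the text) that builds the three-qubit operators as Kronecker products of the standard Pauli matrices and confirms that $\ket{\Psi}_{\text{GHZ}}$ is an eigenvector of each, recovering the four eigenvalues.

```python
import math

# Standard Pauli matrices as 2x2 lists (complex entries).
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]

def kron(A, B):
    """Kronecker (tensor) product of square matrices A and B."""
    m = len(B)
    dim = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(dim)] for i in range(dim)]

def expect(M, v):
    """<v|M|v> for a state vector v with real entries."""
    Mv = [sum(row[j] * v[j] for j in range(len(v))) for row in M]
    return sum(v[i] * Mv[i] for i in range(len(v))).real

s = 1 / math.sqrt(2)
# |GHZ> = (|000> - |111>)/sqrt(2) in the basis |000>, |001>, ..., |111>.
ghz = [s, 0, 0, 0, 0, 0, 0, -s]

vals = [expect(kron(kron(A, B), C), ghz)
        for (A, B, C) in [(Y, Y, X), (Y, X, Y), (X, Y, Y), (X, X, X)]]
# vals lists the eigenvalue of ghz under each of the four operators.
print(vals)
```

Multiplying the four printed values reproduces the paradoxical product of $-1$ discussed in the hint.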
diff --git a/books/napkin/zariski.tex b/books/napkin/zariski.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b3c86f594f4beb365e510fbc2e17a021ab0b2724
--- /dev/null
+++ b/books/napkin/zariski.tex
@@ -0,0 +1,516 @@
+\chapter{Affine varieties as ringed spaces}
+As in the previous chapter, we are working only over affine varieties in $\CC$ for simplicity.
+
+\section{Synopsis}
+Group theory was a strange creature in the early 19th century.
+Back then, a group was literally defined
+as a subgroup of $\GL(n)$ or of $S_n$.
+Indeed, the abstract word ``group'' hadn't been invented yet.
+This may sound ludicrous, but it was true -- Sylow developed his theorems without this notion.
+Only much later was the abstract definition of a group given,
+an abstract set $G$ which was \emph{independent} of any embedding into $S_n$,
+and an object in its own right.
+
+We are about to make the same type of change for our affine varieties.
+Rather than thinking of them as objects locked into an ambient space $\Aff^n$,
+we are instead going to try to make them into objects in their own right.
+Specifically, for us an affine variety will become a \emph{topological space}
+equipped with a \emph{ring of functions} for each of its open sets:
+this is why we call it a \textbf{ringed space}.
+
+The bit about the topological space is not too drastic.
+The key insight is the addition of the ring of functions.
+For example, consider the double point from last chapter.
+
+\begin{center}
+ \begin{asy}
+ import graph;
+ size(5cm);
+
+ real f(real x) { return x*x; }
+ graph.xaxis("$x$", red);
+ graph.yaxis("$y$");
+ draw(graph(f,-2,2,operator ..), blue, Arrows);
+ label("$\mathcal V(y-x^2)$", (0.8, f(0.8)), dir(-45), blue);
+ label("$\mathbb A^2$", (2,3), dir(45));
+ dotfactor *= 1.5;
+ dot(origin, heavygreen);
+ \end{asy}
+\end{center}
+
+As a set, it is a single point,
+and thus it can have only one possible topology.
+But the addition of the function ring will let us tell it apart
+from just a single point.
+
+This construction is quite involved, so we'll proceed as follows:
+we'll define the structure bit by bit onto our existing affine varieties in $\Aff^n$,
+until we have all the data of a ringed space.
+In later chapters, these ideas will grow up to
+become the core of modern algebraic geometry: the \emph{scheme}.
+
+\section{The Zariski topology on $\Aff^n$}
+\prototype{In $\Aff^1$, closed sets are finite collections of points.
+In $\Aff^2$, a nonempty open set is the whole space minus some finite collection of curves/points.}
+
+We begin by endowing a topological structure on every variety $V$.
+Since our affine varieties (for now) all live in $\Aff^n$, all we have to do
+is put a suitable topology on $\Aff^n$, and then just view $V$ as a subspace.
+
+However, rather than putting the standard Euclidean topology on $\Aff^n$,
+we put a much more bizarre topology.
+\begin{definition}
+ In the \vocab{Zariski topology} on $\Aff^n$,
+ the \emph{closed sets} are those of the form
+ \[ \VV(I) \qquad\text{where}\quad I \subseteq \CC[x_1, \dots, x_n]. \]
+ Of course, the open sets are complements of such sets.
+\end{definition}
+
+\begin{example}
+ [Zariski topology on $\Aff^1$]
+ Let us determine the open sets of $\Aff^1$,
+ which as usual we picture as a straight line
+ (ignoring the fact that $\CC$ is two-dimensional).
+
+ Since $\CC[x]$ is a principal ideal domain, rather than looking at $\VV(I)$
+ for every $I \subseteq \CC[x]$, we just have to look at $\VV(f)$ for a single $f$.
+ There are a few flavors of polynomials $f$:
+ \begin{itemize}
+ \ii The zero polynomial $0$ which vanishes everywhere:
+ this implies that the entire space $\Aff^1$ is a closed set.
+ \ii The constant polynomial $1$ which vanishes nowhere.
+ This implies that $\varnothing$ is a closed set.
+ \ii A nonzero polynomial $c(x-t_1)(x-t_2)\dots(x-t_n)$ of degree $n$.
+ Its roots are $t_1$, \dots, $t_n$, and so $\{t_1, \dots, t_n\}$ is a closed set.
+ \end{itemize}
+ Hence the closed sets of $\Aff^1$ are exactly all of $\Aff^1$
+ and finite sets of points (including $\varnothing$).
+ Consequently, the \emph{open} sets of $\Aff^1$ are
+ \begin{itemize}
+ \ii $\varnothing$, and
+ \ii $\Aff^1$ minus a finite collection (possibly empty) of points.
+ \end{itemize}
+\end{example}
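This dichotomy, where a Zariski-closed subset of $\Aff^1$ is either everything or the finite root set of a polynomial, is easy to play with computationally. A plain-Python sketch (the encoding of polynomials as coefficient lists is my own):

```python
# A polynomial is a coefficient list [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...
def evaluate(f, x):
    return sum(c * x ** k for k, c in enumerate(f))

def in_closed_set(f, p):
    """The point p lies in V(f) iff f vanishes at p."""
    return evaluate(f, p) == 0

# f = (x - 1)(x - 2) = 2 - 3x + x^2, so V(f) = {1, 2}: a finite set of points.
f = [2, -3, 1]
assert in_closed_set(f, 1) and in_closed_set(f, 2)
assert not in_closed_set(f, 5)   # 5 lies in the (huge) open complement

# The zero polynomial cuts out all of A^1; the constant 1 cuts out nothing.
assert in_closed_set([0], 7) and not in_closed_set([1], 7)
```

Every point outside the finitely many roots of $f$ lands in the open complement, which is the ``everything minus a few marked points'' picture drawn below.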
+
+Thus, the picture of a ``typical'' open set of $\Aff^1$ might be
+\begin{center}
+ \begin{asy}
+ size(6cm);
+ pair A = (-9,0); pair B = (9,0);
+ pen bloo = blue+1.5;
+ draw(A--B, blue, Arrows);
+ draw(A--B, bloo);
+ // label("$\mathbb V()$", (0,0), 2*dir(-90));
+ opendot((-3,0), bloo);
+ opendot((-1,0), bloo);
+ opendot((4,0), bloo);
+ label("$\mathbb A^1$", B-(2,0), dir(90));
+ \end{asy}
+\end{center}
+It's everything except a few marked points!
+
+\begin{example}[Zariski topology on $\Aff^2$]
+ Similarly, in $\Aff^2$,
+ the interesting closed sets are going to consist
+ of finite unions (possibly empty) of
+ \begin{itemize}
+ \ii Closed curves,
+ like $\VV(y-x^2)$ (which is a parabola), and
+ \ii Single points, like $\VV(x-3,y-4)$
+ (which is the point $(3,4)$).
+ \end{itemize}
+ Of course, the entire space $\Aff^2 = \VV(0)$ and the empty set $\varnothing = \VV(1)$
+ are closed sets.
+
+ Thus the nonempty open sets in $\Aff^2$ consist of the \emph{entire} plane,
+ minus a finite collection of points and one-dimensional curves.
+\end{example}
+\begin{ques}
+ Draw a picture (to the best of your artistic ability)
+ of a ``typical'' open set in $\Aff^2$.
+\end{ques}
+
+All this is to say
+\begin{moral}
+ The nonempty Zariski open sets are \emph{huge}.
+\end{moral}
+This is an important difference from what you're used to in topology.
+To be very clear:
+\begin{itemize}
+ \ii In the past, if I said something like
+ ``has so-and-so property in an open neighborhood of point $p$'',
+ one thought of this as saying
+ ``is true in a small region around $p$''.
+ \ii In the Zariski topology,
+ ``has so-and-so property in an open neighborhood of point $p$''
+ should be thought of as saying ``is true for virtually all points,
+ other than those on certain curves''.
+\end{itemize}
+Indeed, ``open neighborhood'' is no longer really an accurate description.
+Nonetheless, in many pictures to follow,
+it will still be helpful to draw open neighborhoods as circles.
+
+It remains to verify that as I've stated it, the closed sets actually form a topology.
+That is, I need to verify briefly that
+\begin{itemize}
+ \ii $\varnothing$ and $\Aff^n$ are both closed.
+ \ii Intersections of closed sets (even infinite) are still closed.
+ \ii Finite unions of closed sets are still closed.
+\end{itemize}
+Well, closed sets are the same as affine varieties,
+so we already know this!
+
+\section{The Zariski topology on affine varieties}
+\prototype{If $V = \VV(y-x^2)$ is a parabola,
+ then $V$ minus $(1,1)$ is open in $V$.
+ Also, the plane minus the origin is $D(x) \cup D(y)$.}
+
+As we said before, by considering a variety $V$ as a subspace of $\Aff^n$
+it inherits the Zariski topology.
+One should think of an open subset of $V$ as
+``$V$ minus a few Zariski-closed sets''.
+For example:
+\begin{example}[Open set of a variety]
+ Let $V = \VV(y-x^2) \subseteq \Aff^2$ be a parabola,
+ and let $U = V \setminus \{(1,1)\}$. We claim $U$ is open in $V$.
+ \begin{center}
+ \begin{asy}
+ import graph;
+ size(5cm);
+
+ real f(real x) { return x*x; }
+ graph.xaxis("$x$");
+ graph.yaxis("$y$");
+ draw(graph(f,-2,2,operator ..), blue, Arrows);
+ label("$\mathcal V(y-x^2)$", (0.8, f(0.8)), dir(-45));
+
+ opendot( (1,1), blue+1);
+ \end{asy}
+ \end{center}
+ Indeed, $\tilde U = \Aff^2 \setminus \{(1,1)\}$ is open in $\Aff^2$
+ (since it is the complement of the closed set $\VV(x-1,y-1)$),
+ so $U = \tilde U \cap V$ is open in $V$.
+ Note that on the other hand the set $U$ is \emph{not} open in $\Aff^2$.
+\end{example}
+
+We will go ahead and introduce now a definition
+that will be very useful later.
+\begin{definition}
+ Given $V \subseteq \Aff^n$ an affine variety and $f \in \CC[x_1, \dots, x_n]$,
+ we define the \vocab{distinguished open set}
+ $D(f)$ to be the open set in $V$
+ of points not vanishing on $f$:
+ \[ D(f) = \left\{ p \in V \mid f(p) \neq 0 \right\} = V \setminus \VV(f). \]
+\end{definition}
+In \cite{ref:vakil}, Vakil suggests remembering the
+notation $D(f)$ as ``doesn't-vanish set''.
+\begin{example}
+ [Examples of (unions of) distinguished open sets]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii If $V = \Aff^1$ then $D(x)$ corresponds to a line minus a point.
+ \ii If $V = \VV(y-x^2) \subseteq \Aff^2$,
+ then $D(x-1)$ corresponds to the parabola minus $(1,1)$.
+ \ii If $V = \Aff^2$, then
+ $D(x) \cup D(y) = \Aff^2 \setminus \{ (0,0) \}$
+ is the punctured plane.
+ You can show that this set is \emph{not} distinguished open.
+ \end{enumerate}
+\end{example}
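Part (b) above can be rendered as a quick computation, assuming only the definition $D(f) = V \setminus \VV(f)$ (the helper names here are mine):

```python
# Points of the parabola V = V(y - x^2), encoded as pairs (x, y).
def on_V(p):
    x, y = p
    return y - x * x == 0

def in_D(f, p):
    """p lies in the distinguished open set D(f) iff p is on V and f(p) != 0."""
    return on_V(p) and f(p) != 0

g = lambda p: p[0] - 1          # the polynomial x - 1
assert not in_D(g, (1, 1))      # (1,1) is removed: x - 1 vanishes there
assert in_D(g, (2, 4))          # every other point of the parabola stays
assert not in_D(g, (2, 5))      # (2,5) is not even on V
```

So $D(x-1)$ is exactly the parabola minus the point $(1,1)$, as claimed.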
+
+% There used to be an exercise here about D(f)
+% being a distinguished base
+
+%\begin{ques}
+% Give an example of an open set of $\Aff^2$
+% which is not a distinguished open set.
+% (There was one above already.)
+%\end{ques}
+
+
+\section{Coordinate rings}
+\prototype{If $V = \VV(y-x^2)$ then $\CC[V] = \CC[x,y]/(y-x^2)$.}
+
+The next thing we do is consider the functions from $V$ to the base field $\CC$.
+We restrict our attention to algebraic (polynomial) functions on a variety $V$:
+they should take every point $(a_1, \dots, a_n)$ on $V$ to some complex number $P(a_1, \dots, a_n) \in \CC$.
+For example, a valid function on a three-dimensional affine variety might be $(a,b,c) \mapsto a$;
+we just call this projection ``$x$''.
+Similarly we have a canonical projection $y$ and $z$,
+and we can create polynomials by combining them,
+say $x^2y + 2xyz$.
+
+\begin{definition}
+ The \vocab{coordinate ring} $\CC[V]$ of a variety $V$
+ is the ring of polynomial functions on $V$.
+ (Notation explained next section.)
+\end{definition}
+
+At first glance, we might think this is just $\CC[x_1, \dots, x_n]$.
+But on closer inspection we realize that \emph{on a given variety},
+some of these functions are the same.
+For example, consider in $\Aff^2$ the parabola $V = \VV(y-x^2)$.
+Then the two functions
+\begin{align*}
+ V & \to \CC \\
+ (x,y) & \mapsto x^2 \\
+ (x,y) & \mapsto y
+\end{align*}
+are actually the same function!
+We have to ``mod out'' by the ideal $I$ which cuts out $V$.
+This leads us naturally to:
+\begin{theorem}[Coordinate rings correspond to ideals]
+ Let $I$ be a radical ideal, and $V = \VV(I) \subseteq \Aff^n$.
+ Then \[ \CC[V] \cong \CC[x_1, \dots, x_n] / I. \]
+\end{theorem}
+\begin{proof}
+ There's a natural surjection as above
+ \[ \CC[x_1, \dots, x_n] \surjto \CC[V] \]
+ whose kernel consists of the polynomials vanishing identically on $V$;
+ by the Nullstellensatz this kernel is exactly $I$,
+ since $I$ is radical.
+\end{proof}
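To make the parabola example concrete, one can sample points of $V = \VV(y-x^2)$ and watch the functions $x^2$ and $y$ coincide there, even though they differ on $\Aff^2$. A throwaway Python check (not from the text):

```python
# Sample points of the parabola V(y - x^2): pairs (t, t^2).
points_on_V = [(t, t * t) for t in range(-5, 6)]

f = lambda p: p[0] ** 2   # the function x^2
g = lambda p: p[1]        # the function y

# On V the two polynomials induce the same function, so they are
# identified in C[V] = C[x, y]/(y - x^2) ...
assert all(f(p) == g(p) for p in points_on_V)
# ... but off the parabola they disagree, so they differ in C[x, y].
assert f((1, 3)) != g((1, 3))
```

This is exactly the identification performed by modding out by $(y - x^2)$.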
+Thus properties of a variety $V$ correspond to properties of the ring $\CC[V]$.
+
+\section{The sheaf of regular functions}
+\prototype{Let $V = \Aff^1$, $U = V \setminus \{0\}$. Then $1/x \in \OO_V(U)$ is regular on $U$.}
+
+Let $V$ be an affine variety and let $\CC[V]$ be its coordinate ring.
+We want to define a notion of $\OO_V(U)$ for any open set $U$:
+the ``nice'' functions on any open subset.
+Obviously, any function in $\CC[V]$ restricts to a function on $U$, and so should lie in $\OO_V(U)$.
+However, to capture more of the structure we want to
+loosen our definition of ``nice'' function slightly
+by allowing \emph{rational} functions.
+
+The chief example is that $1/x$ should be a regular function
+on $\Aff^1 \setminus \{0\}$.
+The first natural guess is:
+\begin{definition}
+ Let $U \subseteq V$ be an open set of the variety $V$.
+ A \vocab{rational function} on $U$
+ is a quotient $f(x) / g(x)$ of two elements $f$ and $g$ in $\CC[V]$,
+ where we require that $g(x) \neq 0$ for all $x \in U$.
+\end{definition}
+However, the definition is slightly too restrictive;
+we have to allow for multiple representations:
+\begin{definition}
+ Let $U \subseteq V$ be open.
+ We say a function $\phi : U \to \CC$ is a \vocab{regular function} if for
+ every point $p \in U$, we can find an open set $U_p \subseteq U$ containing $p$
+ and a rational function $f_p/g_p$ on $U_p$ such that
+ \[ \phi(x) = \frac{f_p(x)}{g_p(x)} \qquad \forall x \in U_p. \]
+ In particular, we require $g_p(x) \neq 0$ on the set $U_p$.
+ We denote the set of all regular functions on $U$ by $\OO_V(U)$.
+\end{definition}
+
+Thus,
+\begin{moral}
+ $\phi$ is regular on $U$ if it is locally a rational function.
+\end{moral}
+
+This definition is misleadingly complicated,
+and the examples should illuminate it significantly.
+Firstly, in practice, most of the time we will be able to find
+a ``global'' representation of a regular function as a quotient,
+and we will not need to fuss with the $p$'s.
+For example:
+\begin{example}
+ [Regular functions]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii Any function in $f \in \CC[V]$ is clearly regular,
+ since we can take $g_p = 1$, $f_p = f$ for every $p$.
+ So $\CC[V] \subseteq \OO_V(U)$ for any open set $U$.
+ \ii Let $V = \Aff^1$, $U_0 = V \setminus \{0\}$.
+ Then $1/x \in \OO_V(U_0)$ is regular on $U_0$.
+ \ii Let $V = \Aff^1$, $U_{12} = V \setminus \{1,2\}$. Then
+ \[ \frac{1}{(x-1)(x-2)} \in \OO_V(U_{12}) \]
+ is regular on $U_{12}$.
+ \end{enumerate}
+\end{example}
+The ``local'' clause with $p$'s is still necessary, though.
+\begin{example}
+ [Requiring local representations]
+ \label{ex:local_rep}
+ Consider the variety
+ \[ V = \VV(ab-cd) \subseteq \Aff^4 \]
+ and the open set $U = V \setminus \VV(b,d)$.
+ There is a regular function on $U$ given by
+ \[
+ (a,b,c,d)
+ \mapsto
+ \begin{cases}
+ a/d & d \neq 0 \\
+ c/b & b \neq 0.
+ \end{cases}
+ \]
+ Clearly these are the ``same function'' (since $ab=cd$),
+ but we cannot write ``$a/d$'' or ``$c/b$''
+ to express it because we run into divide-by-zero issues.
+ That's why in the definition of a regular function,
+ we have to allow multiple representations.
+\end{example}
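Here is an illustrative plain-Python sketch of this regular function (my own encoding, using exact rational arithmetic and integer sample points): on the overlap where $b$ and $d$ are both nonzero the two formulas agree because $ab = cd$, and each formula covers the points where the other breaks down.

```python
from fractions import Fraction as F

def phi(p):
    """The regular function on U = V(ab - cd) minus V(b, d):
    a/d where d != 0, and c/b where b != 0."""
    a, b, c, d = p
    assert a * b == c * d          # p must lie on the variety
    assert b != 0 or d != 0        # p must lie in U
    return F(a, d) if d != 0 else F(c, b)

# On the overlap (b != 0 and d != 0) the two formulas agree, since ab = cd.
p = (2, 3, 2, 3)                   # ab = 6 = cd, both b and d nonzero
assert F(p[0], p[3]) == F(p[2], p[1]) == phi(p)

# Where one formula divides by zero, the other still makes sense:
assert phi((0, 5, 3, 0)) == F(3, 5)   # d = 0, so fall back to c/b
assert phi((3, 0, 0, 5)) == F(3, 5)   # b = 0, so a/d applies
```

The point of the example survives the translation: no single quotient works everywhere, but the local representations patch together into one well-defined function.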
+
+In fact, we will see later on that the definition
+of a regular function is a special case of a more
+general construction called \emph{sheafification},
+in which ``presheaves of functions which are $P$'' are transformed
+into ``sheaves of functions which are \emph{locally} $P$''.
+
+\section{Regular functions on distinguished open sets}
+\prototype{Regular functions on $\Aff^1 \setminus \{0\}$ are $P(x) / x^n$.}
+As one would expect, the prohibition on dividing by zero
+means we gain essentially nothing on the entire space $V$:
+there are no regular functions in $\OO_V(V)$
+that were not already in $\CC[V]$.
+Actually, we have a more general result which computes the
+regular functions on distinguished open sets:
+\begin{theorem}
+ [Regular functions on distinguished open sets]
+ \label{thm:reg_func_distinguish_open}
+ Let $V \subseteq \Aff^n$ be an affine variety
+ and $D(g)$ a distinguished open subset of it.
+ Then
+ \[ \OO_V( D(g) ) = \left\{ \frac{f}{g^k}
+ \mid f \in \CC[V] \text{ and } k \in \ZZ \right\}. \]
+ In particular, $\OO_V(V) = \OO_V(D(1)) \cong \CC[V]$.
+\end{theorem}
+The proof of this theorem requires the Nullstellensatz,
+so it relies on $\CC$ being algebraically closed.
+In fact, a counterexample is easy to find if we replace $\CC$ by $\RR$:
+the function $\frac{1}{x^2+1}$ is regular on all of $\RR$, but it is not a polynomial.
+\begin{proof}
+ Obviously, every function of the form $f/g^n$ works,
+ so we want the reverse direction.
+ This is long, and perhaps should be omitted on a first reading.
+
+ Here's the situation.
+ Let $U = D(g)$.
+ We're given a regular function $\phi$, meaning at every point $p \in D(g)$,
+ there is an open neighborhood $U_p$ on which $\phi$ can be expressed
+ as $f_p / g_p$ (where $f_p, g_p \in \CC[V]$).
+ Then, we want to construct an $f \in \CC[V]$ and an integer $n$
+ such that $\phi = f/g^n$.
+
+ First, look at a particular $U_p$ and $f_p / g_p$.
+ Shrink $U_p$ to a distinguished open set $D(h_p)$.
+ Then, let $\wt f_p = f_p h_p$ and $\wt g_p = g_p h_p$.
+ Thus we have that
+ \[ \frac{\wt f_p}{\wt g_p} \text{ is correct on }
+ D(h_p) \subseteq U \subseteq V. \]
+ The upshot of using the modified $f_p$ and $g_p$ is that:
+ \[ \wt f_p \wt g_q = \wt f_q \wt g_p \qquad \forall p,q \in U. \]
+ Indeed, it is correct on $D(h_p) \cap D(h_q)$ by definition,
+ and outside this set both the left-hand side and right-hand side are zero.
+
+ Now, we know that $D(g) = \bigcup_{p \in U} D(\wt g_p)$, i.e.\
+ \[ \VV(g) = \bigcap_{p \in U} \VV(\wt g_p). \]
+ So by the Nullstellensatz we know that
+ \[ g \in \sqrt{(\wt g_p : p \in U)}
+ \implies \exists n : g^n \in (\wt g_p : p \in U). \]
+ In other words, for some $n$ and $k_p \in \CC[V]$ we have
+ \[ g^n = \sum_p k_p \wt g_p \]
+ where only finitely many $k_p$ are not zero.
+ Now, we claim that
+ \[ f \defeq \sum_p k_p \wt f_p \]
+ works.
+ This follows by noting that for any $q \in U$, we have
+ \[
+ f \wt g_q - g^n \wt f_q
+ = \sum_p k_p(\wt f_p \wt g_q - \wt g_p \wt f_q)
+ = 0. \qedhere
+ \]
+\end{proof}
+This means that the \emph{global} regular functions
+are just the same as those in the coordinate ring:
+you don't gain anything new by allowing it to be locally a quotient.
+(The same goes for distinguished open sets.)
+
+\begin{example}[Regular functions on distinguished open sets]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii As said already,
+ taking $g=1$ we recover $\OO_V(V) \cong \CC[V]$
+ for any affine variety $V$.
+ \ii Let $V = \Aff^1$, $U_0 = V \setminus \{0\}$. Then
+ \[ \OO_V(U_0)
+ = \left\{ \frac{P(x)}{x^n} \mid P \in \CC[x],
+ \quad n \in \ZZ \right\}. \]
+ So more examples are $1/x$ and $(x+1)/x^3$.
+ \end{enumerate}
+\end{example}
+
+\begin{ques}
+ Why doesn't our theorem on regular functions apply to \Cref{ex:local_rep}?
+\end{ques}
+
+The regular functions will become of crucial importance
+once we define a scheme in the next chapter.
+
+\section{Baby ringed spaces}
+In summary, given an affine variety $V$ we have:
+\begin{itemize}
+ \ii A structure of a set of points,
+ \ii A structure of a topological space $V$ on these points, and
+ \ii For every open set $U \subseteq V$, a ring $\OO_V(U)$.
+ Elements of the rings are functions $U \to \CC$.
+\end{itemize}
+Let us agree that:
+\begin{definition}
+ A \vocab{baby ringed space} is a topological space $X$
+ equipped with a ring $\OO_X(U)$ for every open set $U$.
+ It is required that elements of the ring $\OO_X(U)$
+ are functions $f : U \to \CC$;
+ we call these the \emph{regular functions} of $X$ on $U$.
+\end{definition}
+Therefore, affine varieties are baby ringed spaces.
+\begin{remark}
+ This is not a standard definition. Hehe.
+\end{remark}
+
+The reason this is called a ``baby ringed space''
+is that in a \emph{ringed space},
+the rings $\OO_V(U)$ can actually be \emph{any rings},
+but they have to satisfy a set of fairly technical conditions.
+When this happens, it's the $\OO_V$ that does all the work;
+we think of $\OO_V$ as a type of functor called a \emph{sheaf}.
+
+Since we are only studying affine/projective/quasi-projective varieties
+for the next chapters, we will just refer to these as baby ringed spaces
+so that we don't have to deal with the entire definition.
+The key concept is that we want to think of these varieties
+as \emph{intrinsic objects}, free of any embedding.
+A baby ringed space is the philosophically correct way to do this.
+
+Anyways, affine varieties are baby ringed spaces $(V, \OO_V)$.
+In the next chapter we'll meet projective and quasi-projective
+varieties, which give more such examples of (baby) ringed spaces.
+With these examples in mind, we will finally lay down
+the complete definition of a ringed space,
+and use this to define a scheme.
+
+\section\problemhead
+
+\begin{dproblem}
+ Show that for any $n \ge 1$ the Zariski topology of $\Aff^n$
+ is \emph{not} Hausdorff.
+\end{dproblem}
+
+\begin{dproblem}
+ Let $V$ be an affine variety,
+ and consider its Zariski topology.
+ \begin{enumerate}[(a)]
+ \ii Show that the Zariski topology is \vocab{Noetherian},
+ meaning there is no infinite descending chain
+ $Z_1 \supsetneq Z_2 \supsetneq Z_3 \supsetneq \dots$ of closed subsets.
+ \ii Prove that a Noetherian topological space is compact.
+ Hence varieties are topologically compact.
+ \end{enumerate}
+\end{dproblem}
+
+\begin{sproblem}
+ [Punctured Plane]
+ \label{prob:punctured_plane}
+ Let $V = \Aff^2$ and let $X = \Aff^2 \setminus \{(0,0)\}$ be the punctured plane
+ (which is an open set of $V$).
+ Compute $\OO_V(X)$.
+\end{sproblem}
diff --git a/books/napkin/zfc.tex b/books/napkin/zfc.tex
new file mode 100644
index 0000000000000000000000000000000000000000..40535c261edd0efaea8387fe8ec7c304c57854da
--- /dev/null
+++ b/books/napkin/zfc.tex
@@ -0,0 +1,429 @@
+\chapter{Zermelo-Fraenkel with choice}
+\label{ch:zfc}
+Chapter 3.1 of \cite{ref:msci} has a nice description of this.
+
+\section{The ultimate functional equation}
+In abstract mathematics, we often define structures by what \emph{properties}
+they should have; for example, a group is a set and a binary operation
+satisfying so-and-so axioms, while a metric space is a set and a distance function
+satisfying so-and-so axioms.
+
+Nevertheless, these definitions rely on previous definitions.
+The colorful illustration of \cite{ref:msci} on this:
+\begin{itemize}
+ \ii A \emph{vector space} is an abelian group with\dots
+ \ii An \emph{abelian group} has a binary operation such that\dots
+ \ii A \emph{binary operation} on a set is\dots
+ \ii A \emph{set} is \dots
+\end{itemize}
+and so on.
+
+We have to stop at some point, because infinite lists of definitions are bad.
+The stopping point turns out to be the notion of a \emph{set}, ``defined'' only by its properties.
+The trick is that we never actually define what a set is,
+but nonetheless postulate that these sets satisfy certain properties:
+these are the $\ZFC$ axioms.
+Loosely, $\ZFC$ can be thought of as the \emph{ultimate functional equation}.
+
+Before talking about what these axioms are, I should talk about the caveats.
+
+\section{Cantor's paradox}
+Intuitively, a set is an unordered collection of elements.
+Two sets are equal if they share the same elements:
+\[
+ \left\{ x \mid x \text{ is a featherless biped} \right\}
+ =
+ \left\{ x \mid x \text{ is human} \right\}
+\]
+(let's put aside the issue of dinosaurs).
+
+As another example, we have our empty set $\varnothing$ that contains no objects.
+We can have a set $\{1, 2, 3\}$, or maybe the set of natural numbers $\mathbb N = \{0, 1, 2, \dots \}$.
+(For the purposes of set theory, $0$ is usually considered a natural number.)
+Sets can even contain other sets, like $\left\{ \mathbb Z, \mathbb Q, \mathbb N \right\}$. Fine and dandy, right?
+
+The trouble is that this definition actually isn't good enough, and here's why.
+If we just say ``a set is any collection of objects'',
+then we can consider a really big set $V$, the set of all sets.
+So far no problem, right?
+We would have the oddity that $V \in V$,
+but oh well, no big deal.
+
+Unfortunately, the existence of this $V$ leads immediately to a paradox.
+The classical one is Russell's Paradox.
+I will instead present a somewhat simpler one:
+not only does $V$ contain itself, \emph{every subset $S \subseteq V$}
+is itself an element of $V$ (i.e. $S \in V$).
+If we let $\PP(V)$ denote the \vocab{power set} of $V$
+(i.e.\ all the subsets of $V$), then we have an inclusion
+\[ \PP(V) \injto V. \]
+This is bad, since:
+\begin{lemma}[Cantor's diagonal argument]
+ \label{lem:cantor_diag}
+ For \emph{any} set $X$, it's impossible to construct an injective
+ map $\iota : \PP(X) \injto X$.
+\end{lemma}
+\begin{proof}
+ Assume for contradiction $\iota$ exists.
+ \begin{exercise}
+ Show that if $\iota$ exists,
+ then there exists a surjective map $j : X \surjto \PP(X)$.
+ (This is easier than it appears; just ``invert $\iota$''.)
+ \end{exercise}
+ We now claim that $j$ can't exist.
+
+ Let me draw a picture for $j$ to give the idea first:
+ \[
+ \begin{array}{cccccccc}
+ && x_1 & x_2 & x_3 & x_4 & x_5 & \dots \\ \cline{3-8}
+ x_1 & \xmapsto{j} & \mathbf0 & 1 & 1 & 0 & 1 & \dots \\
+ x_2 & \xmapsto{j} & 1 & \mathbf1 & 0 & 1 & 1 & \dots \\
+ x_3 & \xmapsto{j} & 0 & 1 & \mathbf0 & 0 & 1 & \dots \\
+ x_4 & \xmapsto{j} & 1 & 0 & 0 & \mathbf1 & 0 & \dots \\
+ x_5 & \xmapsto{j} & 0 & 1 & 1 & 1 & \mathbf1 & \dots \\
+ \vdots && \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
+ \end{array}
+ \]
+ Here, for each $j(x) \subseteq X$, I'm writing ``$1$'' to mean that
+ the element is inside $j(x)$, and ``$0$'' otherwise.
+ So $j(x_1) = \{x_2, x_3, x_5, \dots\}$.
+ (Here the indices are ordinals rather than integers
+ as $X$ may be uncountable.
+ Experts may notice I've tacitly assumed a well-ordering of $X$;
+ but this picture is for motivation only so I won't dwell on the point.)
+ Then we can read off the diagonal to get a new set.
+ In our example, the diagonal specifies a set
+ $A = \{x_2, x_4, x_5, \dots\}$.
+ Then we ``invert'' it to get a set $B = \{x_1, x_3, \dots\}$.
+
+ Back to the formal proof. As motivated above, we define
+ \[ B = \left\{ x \mid x \notin j(x) \right\}. \]
+ By construction, $B \subseteq X$ is not in the image of $j$,
+ which is a contradiction since $j$ was supposed to be surjective.
+\end{proof}
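Cantor's diagonal argument can even be brute-forced for a small finite set, which may help build intuition. In the plain-Python sketch below (all scaffolding is my own), we take $X = \{0,1,2\}$, enumerate all $8^3 = 512$ maps $j : X \to \PP(X)$, and verify that the diagonal set $B$ is never in the image:

```python
from itertools import product

# Finite illustration of Cantor's diagonal argument: for X = {0, 1, 2},
# no map j : X -> P(X) is surjective.  We brute-force all 8^3 = 512 maps.
X = [0, 1, 2]
subsets = [frozenset(x for x in X if (mask >> x) & 1) for mask in range(8)]

for images in product(subsets, repeat=len(X)):
    j = dict(zip(X, images))
    B = frozenset(x for x in X if x not in j[x])   # the diagonal set
    assert B not in images                          # B is never hit by j
print("no surjection X -> P(X) exists")
```

Of course, the real content of the lemma is that this holds for \emph{every} set $X$, including infinite ones where no brute force is possible.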
+
+Now if you're not a set theorist, you could probably just brush this off,
+saying ``oh well, I guess you can't look at certain sets''.
+But if you're a set theorist, this worries you,
+because you realize it means that you can't
+just define a set as ``a collection of objects'',
+because then everything would blow up.
+Something more is necessary.
+
+\section{The language of set theory}
+We need a way to refer to sets other
+than the informal description of ``collection of objects''.
+
+So here's what we're going to do.
+We'll start by defining a formal \emph{language of set theory},
+a way of writing logical statements.
+First of all we can throw in our usual logical operators:
+\begin{itemize}
+ \ii $\forall$ means ``for all''
+ \ii $\exists$ means ``exists''
+ \ii $=$ means ``equal''
+ \ii $X \implies Y$ means ``if $X$ then $Y$''
+ \ii $A \land B$ means ``$A$ and $B$''
+ \ii $A \lor B$ means ``$A$ or $B$''
+ \ii $\neg A$ means ``not $A$''.
+\end{itemize}
+
+Since we're doing set theory,
+there's only one more operator we add in: the inclusion $\in$.
+And that's all we're going to use (for now).
+
+So how do we express something like ``the set $\{1, 2\}$''?
+The trick is that we're not going to actually ``construct'' any sets,
+but rather refer to them indirectly, like so:
+\[ \exists S : x \in S \iff \left( (x=1) \lor (x=2) \right). \]
+This reads: ``there exists an $S$ such that $x$ is in $S$ if
+and only if either $x=1$ or $x=2$''.
+We don't have to refer to sets as objects in and of
+themselves anymore --- we now have a way to ``create'' our sets,
+by writing formulas for exactly what they contain.
+This is something a machine can parse.
+
+Well, what are we going to do with things like $1$ and $2$, which are not sets?
+Answer:
+\begin{moral}
+ Elements of sets are themselves sets.
+\end{moral}
+We're going to make \textbf{everything} into a set.
+Natural numbers will be sets. Ordered pairs will be sets. Functions will be sets.
+Later, I'll tell you exactly how we manage to do something like encode $1$ as a set.
+For now, all you need to know is that sets don't just hold objects;
+they hold other sets.
+
+So now it makes sense to talk about whether something is a set or not:
+$\exists x$ means ``$x$ is a set'', while $\nexists x$ means ``$x$ is not a set''.
+In other words, we've rephrased the problem of deciding whether something
+is a set to whether it exists,
+which makes it easier to deal with in our formal language.
+That means that our axiom system had better find some way to let us
+show a lot of things exist, without letting us prove
+\[ \exists S \forall x : x \in S. \]
+For if we prove this formula,
+then we have our ``bad'' set that caused us to go down the
+rabbit hole in the first place.
+
+\section{The axioms of $\ZFC$}
+I don't especially want to get into details about these axioms;
+if you're interested, read:
+\begin{itemize}
+ \ii \footnotesize \url{https://usamo.wordpress.com/2014/11/13/set-theory-an-intro-to-zfc-part-1/}
+ \ii \footnotesize \url{https://usamo.wordpress.com/2014/11/18/set-theory-part-2-constructing-the-ordinals/}
+\end{itemize}
+Here is a much terser description of the axioms,
+which also includes the corresponding sentence in the language of set theory.
+It is worth the time to get some practice parsing $\forall$, $\exists$, etc.\
+and you can do so by comparing the formal sentences with the natural statement of the axiom.
+
+First, the two easiest axioms:
+\begin{itemize}
+ \ii $\Extensionality$ is the sentence
+ $\forall x \forall y
+ \left( \left( \forall a \left( a \in x \iff a \in y \right) \right)
+ \implies x = y \right)$,
+ which says that if two sets $x$ and $y$ have the same elements,
+ then $x = y$.
+
+ \ii $\EmptySet$ is the sentence $\exists a : \forall x \; \neg (x \in a)$;
+ it says there exists a set with no elements.
+ By $\Extensionality$ this set is unique, so we denote it $\varnothing$.
+\end{itemize}
+
+The next two axioms give us basic ways of building new sets.
+\begin{itemize}
+ \ii Given two elements $x$ and $y$, there exists a set $a$ containing only those two elements.
+ In machine code, this is the sentence $\Pairing$, written
+ \[ \forall x \forall y \exists a \quad \forall z,
+ \; z \in a \iff \left( (z=x) \lor (z=y) \right). \]
+ By $\Extensionality$ this set $a$ is unique, so we write $a = \{x,y\}$.
+
+ \ii Given a set $a$, we can create the union of the elements of $a$.
+ For example, if $a = \{ \{1,2\}, \{3,4\} \}$, then $U = \{1,2,3,4\}$ is a set.
+ Formally, this is the sentence $\Union$:
+ \[ \forall a \exists U \quad \forall x \;
+ \left[ (x \in U) \iff (\exists y : x \in y \in a) \right]. \]
+ Since $U$ is unique by $\Extensionality$, we denote it $\cup a$.
+
+ \ii
+ We can construct the \vocab{power set} $\mathcal P(x)$.
+ Formally, the sentence $\PowerSet$ says that
+ \[ \forall x \exists P \forall y (y \in P \iff y \subseteq x) \]
+ where $y \subseteq x$ is short for $\forall z (z \in y \implies z \in x)$.
+ As $\Extensionality$ gives us uniqueness of $P$,
+ we denote it $\mathcal P(x)$.
+
+ \ii $\Foundation$ says there are no infinite descending chains
+ \[ x_0 \ni x_1 \ni x_2 \ni \dots. \]
+ This is important, because it lets us induct.
+ In particular, \textbf{no set contains itself}.
+
+ \ii $\Infinity$ implies that $\omega = \{0,1,\dots\}$ is a set.
+\end{itemize}
+These are all things you are already used to, so keep your intuition there.
+The next one is less intuitive:
+\begin{itemize}
+ \ii The \vocab{schema of restricted comprehension} says:
+ if we are \emph{given a set $X$}, and some formula $\phi(x)$
+ then we can \emph{filter} through the elements of $X$ to get a subset
+ \[ Y = \left\{ x \in X \mid \phi(x) \right\}. \]
+ % (For example $\phi(x)$ might be ``$x$ is not the empty set, i.e. $\exists a(a\in x)$'')
+ Formally, given a formula $\phi$:
+ \[
+ \forall X \quad \exists Y \quad
+ \forall y (y \in Y \iff y \in X \land \phi(y)).
+ \]
+\end{itemize}
+Notice that we may \emph{only} do this filtering over an already given set.
+So it is not valid to create
+$ \left\{ x \mid x \text{ is a set} \right\} $.
+We are thankful for this, because this lets us evade Cantor's paradox.
+
+\begin{abuse}
+ Note that technically, there are infinitely many sentences,
+ a $\Comprehension_\phi$ for every possible formula $\phi$.
+ By abuse of notation, we let $\Comprehension$ abbreviate
+ the infinitely many axioms $\Comprehension_\phi$ for every $\phi$.
+\end{abuse}
+
+There is one last schema called $\Replacement_\phi$.
+Suppose $X$ is a set and $\phi(x,y)$ is some formula
+such that for every $x \in X$, there is a \emph{unique} $y$ in the universe
+such that $\phi(x,y)$ is true: for example ``$y = x \cup \{x\}$'' works.
+(In effect, $\phi$ is defining a function $f$ on $X$.)
+Then there exists a set $Y$ consisting exactly of these images
+(i.e.\ $f\im(X)$ is a set).
+\begin{abuse}
+ By abuse of notation, we let $\Replacement$ abbreviate
+ the infinitely many axioms $\Replacement_\phi$ for every $\phi$.
+\end{abuse}
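+
+For concreteness, here is a tiny worked instance
+(using the encoding of the natural numbers described in the next section):
+take $X = \omega$ and let $\phi(x,y)$ be the formula ``$y = x \cup \{x\}$''.
+Then $\Replacement_\phi$ gives us the set of images
+\[ Y = \left\{ x \cup \{x\} \mid x \in \omega \right\}
+ = \left\{ 1, 2, 3, \dots \right\}. \]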
+
+We postpone discussion of the Axiom of Choice momentarily.
+
+\section{Encoding}
+Now that we have this rickety universe of sets, we can start re-building math.
+You'll get to see this more in the next chapter on ordinal numbers.
+
+\begin{definition}
+ An \vocab{ordered pair} $(x,y)$
+ is a set of the form
+ \[ (x,y) \defeq
+ \left\{ \left\{ x \right\}, \left\{ x,y \right\} \right\}. \]
+\end{definition}
+Note that $(x,y) = (a,b)$ if and only if $x=a$ and $y=b$.
+Ordered $k$-tuples can be defined recursively: a three-tuple $(a,b,c)$ means $(a,(b,c))$.
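+
+As a quick sanity check, here is a sketch of the nontrivial direction
+in the case $x \neq y$ (the degenerate case $x = y$ takes slightly more care):
+if $\left\{ \left\{ x \right\}, \left\{ x,y \right\} \right\}
+= \left\{ \left\{ a \right\}, \left\{ a,b \right\} \right\}$,
+then the unique one-element member of each side must be the same,
+so $\{x\} = \{a\}$ and hence $x = a$;
+now $\{x,y\} = \{a,b\} = \{x,b\}$ forces $y = b$.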
+
+\begin{definition}
+ A \vocab{function} $f : X \to Y$
+ is defined as a collection of ordered pairs such that
+ \begin{itemize}
+ \ii If $(x,y) \in f$, then $x \in X$ and $y \in Y$.
+ \ii For every $x \in X$, there is a unique $y \in Y$
+ such that $(x,y) \in f$. We denote this $y$ by $f(x)$.
+ \end{itemize}
+\end{definition}
+
+\begin{definition}
+ The \vocab{natural numbers} are defined inductively as
+ \begin{align*}
+ 0 &= \varnothing \\
+ 1 &= \{0\} \\
+ 2 &= \{0,1\} \\
+ 3 &= \{0,1,2\} \\
+ &\vdotswithin=
+ \end{align*}
+ The set of all natural numbers is denoted $\omega$.
+\end{definition}
+\begin{abuse}
+ Yes, I'm sorry, in set theory $0$ is considered a natural number.
+ For this reason I'm using $\omega$ and not $\NN$
+ since I explicitly have $0\notin\NN$ in all other parts of this book.
+\end{abuse}
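+
+Unwinding the definition, each natural number is built from the previous one
+by the successor operation $x \mapsto x \cup \{x\}$
+(the same formula we used when discussing $\Replacement$). For example,
+\[ 3 = \{0,1,2\} = \{0,1\} \cup \left\{ \{0,1\} \right\} = 2 \cup \{2\}. \]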
+
+Et cetera, et cetera.
+
+\section{Choice and well-ordering}
+The Axiom of Choice states that given a collection $Y$ of nonempty sets,
+there is a function $g : Y \to \cup Y$ which ``picks'' an element of each member of $Y$.
+That means $g(y) \in y$ for every $y \in Y$.
+(The typical illustration is that $Y$ contains infinitely many drawers,
+and each drawer (a $y$) has some sock in it.)
+
+Formally, it is the sentence
+\[ \forall Y \left(\varnothing \notin Y
+ \implies
+ \exists g : Y \to \cup Y
+ \text{ such that }
+ \forall y \in Y \left( g(y) \in y \right).
+ \right)
+\]
+The tricky part is not that we can conceive of such a function $g$,
+but that in fact this function $g$ is \emph{actually a set}.
+
+There is an equivalent formulation which is often useful.
+\begin{definition}
+ A \vocab{well-ordering} $<$ of $X$ is a strict, total order on $X$
+ which has no infinite descending chains.
+\end{definition}
+Well-orderings on a set are very nice, because we can pick minimal elements:
+this lets us do induction, for example.
+(And the Foundation axiom tells us $\in$ has no infinite descending chains,
+i.e.\ the relation $\in$ is \emph{well-founded}.)
+
+\begin{example}[Examples and non-examples of well-orderings]
+ \listhack
+ \begin{enumerate}[(a)]
+ \ii The natural numbers $\omega = \{0,1,2,\dots\}$
+ are well-ordered by $<$.
+ \ii The integers $\ZZ = \{\dots,-2,-1,0,1,2,\dots\}$ are not well-ordered by $<$,
+ because there are infinite descending chains (take $-1 > -2 > -3 > \dots$).
+ \ii The positive real numbers are not well-ordered by $<$,
+ again because of the descending chain $\frac11>\frac12>\frac13>\dots$.
 	\ii The positive integers are not well-ordered by the divisibility relation $\mid$.
+ While there are no descending chains, there are elements which cannot be compared
+ (for example $3 \nmid 5$, $5 \nmid 3$ and $3 \neq 5$).
+ \end{enumerate}
+\end{example}
+
+\begin{theorem}
+ [Well-ordering theorem]
+ Assuming Choice, for every set we can place some well-ordering on it.
+\end{theorem}
+In fact, the well-ordering theorem is actually equivalent to the axiom of choice.
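+
+For intuition, here is what a well-ordering of a familiar set can look like:
+the usual order on $\ZZ$ is not a well-ordering,
+but we can well-order $\ZZ$ by hand (no Choice needed for this particular set) via
+\[ 0 \prec 1 \prec -1 \prec 2 \prec -2 \prec 3 \prec -3 \prec \dots \]
+which has no infinite descending chains.
+The content of the theorem is that \emph{every} set admits such a rearrangement.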
+
+\section{Sets vs classes}
+\prototype{The set of all sets is the standard example of a proper class.}
+We close the discussion of $\ZFC$ by mentioning ``classes''.
+
+Roughly, the ``bad thing'' that happened was that we considered a set $S$, the
+``set of all sets'', and it was \emph{too big}.
+That is,
+\[ \left\{ x \mid x \text{ is a set} \right\} \]
+is not good.
+Similarly, we cannot construct a set
+\[ \left\{ x \mid x \text{ is an ordered pair} \right\}. \]
+The lesson of Cantor's Paradox is that we cannot create any sets we want;
+we have to be more careful than that.
+
+Nonetheless, if we are given a set
+we can still tell whether or not it is an ordered pair.
+So for convenience, we will define a \vocab{class} to be
+a ``concept'' like the ``class of all ordered pairs''.
+Formally, a class is defined by some formula $\phi$:
+it consists of the sets which satisfy the formula.
+
+In particular:
+\begin{definition}
+ The class of all sets is denoted $V$, defined by $V = \left\{ x \mid x=x \right\}$.
+ It is called the \vocab{von Neumann universe}.
+\end{definition}
+
+A class is a \vocab{proper class} if it is not a set,
+so for example we have:
+\begin{theorem}[There is no set of all sets]
+ $V$ is a proper class.
+\end{theorem}
+\begin{proof}
 	Assume for contradiction that $V$ is a set. Then $V \in V$,
+ which violates $\Foundation$.
+ (In fact, $V$ cannot be a set even without $\Foundation$,
+ as we saw earlier).
+\end{proof}
+
+\begin{abuse}
+ Given a class $C$, we will write $x \in C$ to mean
+ that $x$ has the defining property of $C$.
+ For example, $x \in V$ means ``$x$ is a set''.
+
+ It does not mean $x$ is an element of $V$
+ -- this doesn't make sense as $V$ is not a set.
+\end{abuse}
+
+\section\problemhead
+\begin{problem}
+ Let $A$ and $B$ be sets.
+ Show that $A \cap B$ and $A \times B$ are sets.
+\end{problem}
+
+\begin{problem}
+ Show that the class of all groups is a proper class.
+ (You can take the definition of a group as a pair $(G, \cdot)$
+ where $\cdot$ is a function $G \times G \to G$.)
+\end{problem}
+\begin{problem}
+ Show that the axiom of choice follows from the well-ordering theorem.
+\end{problem}
+
+\begin{dproblem}
+ Prove that actually, $\Replacement \implies \Comprehension$.
+\end{dproblem}
+
+\begin{problem}
+ [From Taiwan IMO training camp]
+ Consider infinitely many people each wearing a hat,
+ which is either red, green, or blue.
+ Each person can see the hat color of everyone except themselves.
+ Simultaneously each person guesses the color of their hat.
+ Show that they can form a strategy such that at most finitely many people guess their color incorrectly.
+\end{problem}
diff --git a/books/napkin/zorn-lemma.tex b/books/napkin/zorn-lemma.tex
new file mode 100644
index 0000000000000000000000000000000000000000..dade81da4aedc631420fcdd0582a2aa0a30320db
--- /dev/null
+++ b/books/napkin/zorn-lemma.tex
@@ -0,0 +1,329 @@
+\chapter{Interlude: Cauchy's functional equation and Zorn's lemma}
+\label{ch:zorn}
+\emph{This is an informal chapter on Zorn's lemma,
+which will give an overview of what's going to come in the last parts of the Napkin.
+It can be omitted without loss of continuity.}
+
+\medskip
+
+In the world of olympiad math, there's a famous functional equation that goes as follows:
+\[ f : \RR \to \RR \qquad f(x+y) = f(x) + f(y). \]
+Everyone knows what its solutions are!
+There's an obvious family of solutions $f(x) = cx$.
+Then there's also this family of\dots\ uh\dots\ noncontinuous solutions (mumble grumble) pathological
+(mumble mumble) Axiom of Choice (grumble).
+
+There's also this thing called Zorn's lemma. It sounds terrifying,
+because it's equivalent to the Axiom of Choice, which is also terrifying because why not.
+
+In this chapter I will try to de-terrify these things,
+because they're really not as terrifying as they sound.
+
+\section{Let's construct a monster}
+Let us just see if we can try and construct a ``bad'' $f$ and see what happens.
+
+By scaling, let's assume WLOG that $f(1) = 1$.
+Thus $f(n) = n$ for every integer $n$, and you can easily show from here that
+\[ f\left( \frac mn \right) = \frac mn. \]
+So $f$ is determined for all rationals. And then you get stuck.
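+
+(Here is the omitted computation, in case you haven't seen it:
+additivity gives $f(nx) = nf(x)$ for every positive integer $n$, so
+\[ n \cdot f\left( \frac mn \right) = f\left( n \cdot \frac mn \right) = f(m) = m, \]
+and dividing by $n$ gives $f\left( \frac mn \right) = \frac mn$.)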
+
+None of this is useful for determining, say, $f(\sqrt 2)$.
+You could add and subtract rational numbers all day
+and, say, $\sqrt 2$ isn't going to show up at all.
+
+Well, we're trying to set things on fire anyways, so let's set
+\[ f(\sqrt 2) = 2015 \]
+because why not?
+By the same induction, we get $f(n\sqrt2) = 2015n$, and then that
+\[ f\left( a + b \sqrt 2 \right) = a + 2015b. \]
+Here $a$ and $b$ are rationals.
+Well, so far so good -- as written, this is a perfectly good solution,
+other than the fact that we've only defined $f$ on a tiny portion of the real numbers.
+
+Well, we can do this all day:
+\[ f\left( a + b \sqrt 2 + c \sqrt 3 + d \pi \right) = a + 2015b + 1337c - 999d. \]
+Perfectly consistent.
+
+You can kind of see how we should keep going now.
+Just keep throwing in new real numbers which are ``independent''
+to the previous few, assigning them to whatever junk we want.
+It feels like it \emph{should} be workable\dots
+
+In a moment I'll explain what ``independent'' means (though you
+might be able to guess already), but at the moment there's a bigger issue:
+no matter how many numbers we throw, it seems like we'll never finish.
+Let's address the second issue first.
+
+\section{Review of finite induction}
+When you do induction, you get to count off $1$, $2$, $3$, \dots and so on.
+So for example, suppose we had a ``problem'' such as:
+\begin{quote}
+ Prove that the intersection of $n$ open intervals is either $\varnothing$
+ or an open interval.
+\end{quote}
+You can do this by induction easily: it's true for $n = 2$, and
+for the larger cases it's similarly easy.
+
+But you can't conclude from this that \emph{infinitely} many open intervals intersect
+at some open interval. Indeed, this is false: consider the intervals
+\[
+ \left( -1, 1 \right), \quad
+ \left( -\frac12, \frac12 \right), \quad
+ \left( -\frac13, \frac13 \right), \quad
+ \left( -\frac14, \frac14 \right), \quad
+ \dots
+\]
+This \emph{infinite} set of intervals intersects at a single point $\{0\}$!
+
+The moral of the story is that induction doesn't let us reach infinity.
+Too bad, because we'd have loved to use induction to help us construct a monster.
+That's what we're doing, after all -- adding things in one by one.
+
+\section{Transfinite induction}
+Well, it turns out we can, but we need a new notion of number,
+the so-called \emph{ordinal number}.
+I define these in their full glory in the first two sections of \Cref{ch:ordinal}
+(and curious readers are even invited to jump ahead to those two sections),
+but for this chapter I won't need that full definition yet.
+
+Here's what I want to say: after all the natural numbers
+\[ 0, \; 1, \; \dots, \]
+I'll put a \emph{new number} called $\omega$,
+the first ordinal greater than all the natural numbers.
+After that there's more numbers called
+\[\omega+1, \; \omega+2, \; \dots \]
+and eventually a number called $2\omega$.
+
+The list goes on:
+\[
+\begin{aligned}
+ 0, & 1, 2, 3, \dots, \omega \\
+ & \omega+1, \omega+2, \dots, \omega+\omega \\
+ & 2\omega+1, 2\omega+2, \dots, 3\omega \\
+ & \vdots \\
+ & \omega^2 + 1, \omega^2+2, \dots \\
+ & \vdots \\
+ & \omega^3, \dots, \omega^4, \dots, \omega^\omega
+ \dots, \omega^{\omega^{\omega^{\dots}}} \\
+\end{aligned}
+\]
+Pictorially, it kind of looks like this:
+\begin{center}
+ \includegraphics[scale=0.70]{media/500px-Omega-exp-omega-labeled.png}
+ \\ \scriptsize Image from \cite{img:omega500}
+\end{center}
+(Note that the diagram only shows an initial segment;
+there are still larger ordinals like $\omega^{\omega^{\omega}}+1000$ and so on).
+
+Anyways, in the same way that natural numbers ``dominate'' all finite sets,
+the ordinals dominate \emph{all the sets}, in the following sense.
+Essentially, assuming the Axiom of Choice,
+it follows that for every set $S$ there's some ordinal $\alpha$
+which is larger than $S$ (in a sense I won't make precise until later chapters).
+
+But it turns out (and you can intuitively see) that as large as the ordinals grow,
+there is no \emph{infinite descending chain}.
+Meaning: if I start at an ordinal (like $2 \omega + 4$) and jump down, I can only
+take finitely many jumps before I hit $0$.
+(To see this, try writing down a chain starting at $2 \omega + 4$ yourself.)
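+(One possible answer, to check your work against:
+\[ 2\omega + 4 \;>\; 2\omega \;>\; \omega + 2015 \;>\; \omega \;>\; 1000 \;>\; 3 \;>\; 0. \]
+Every jump below a limit ordinal such as $2\omega$ or $\omega$ must land on some
+strictly smaller ordinal, and once we land among the natural numbers
+only finitely many steps remain.)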
+Hence, induction and recursion still work verbatim:
+\begin{theorem}[Transfinite induction]
+ Given a statement $P(-)$, suppose that
+ \begin{itemize}
+ \ii $P(0)$ is true, and
+ \ii If $P(\alpha)$ is true for all $\alpha < \beta$, then $P(\beta)$ is true.
+ \end{itemize}
 	Then $P(\beta)$ is true for every ordinal $\beta$.
+ \label{thm:transfinite}
+\end{theorem}
+Similarly, you're allowed to do recursion to define $x_\beta$ if you know the
+value of $x_\alpha$ for all $\alpha < \beta$.
+
+The difference from normal induction or recursion is that we'll often
+only do things like ``define $x_{n+1} = \dots$''.
+But this is not enough to define $x_\alpha$ for all $\alpha$.
+To see this, try using our normal induction and see how far we can climb up the ladder.
+
+Answer: you can't get $\omega$!
+It's not of the form $n+1$ for any of our natural numbers $n$ -- our finite induction only lets us
+get up to the ordinals less than $\omega$.
+Similarly, the simple $+1$ doesn't let us hit the ordinal $2\omega$,
+even if we already have $\omega+n$ for all $n$.
+Such ordinals are called \vocab{limit ordinals}.
+The ordinals that \emph{are} of the form $\alpha+1$ are called \vocab{successor ordinals}.
+
+So a transfinite induction or recursion is very often broken up into three cases.
+In the induction phrasing, it looks like
+\begin{itemize}
+ \ii (Zero Case) First, resolve $P(0)$.
+ \ii (Successor Case) Show that from $P(\alpha)$ we can get $P(\alpha+1)$.
+ \ii (Limit Case) Show that $P(\lambda)$ holds given $P(\alpha)$ for all $\alpha < \lambda$,
+ where $\lambda$ is a limit ordinal.
+\end{itemize}
+Similarly, transfinite recursion often is split into cases too.
+\begin{itemize}
+ \ii (Zero Case) First, define $x_0$.
+ \ii (Successor Case) Define $x_{\alpha+1}$ from $x_\alpha$.
+ \ii (Limit Case) Define $x_\lambda$ from $x_\alpha$ for all $\alpha < \lambda$,
+ where $\lambda$ is a limit ordinal.
+\end{itemize}
+In both situations, finite induction only does the first two cases,
+but if we're able to do the third case we can climb far above the barrier $\omega$.
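+
+A standard worked instance of the recursion template is the definition of
+ordinal addition $\alpha + \beta$, by transfinite recursion on $\beta$
+(ordinal arithmetic is developed properly in \Cref{ch:ordinal};
+this is just an illustration):
+\[ \alpha + 0 = \alpha, \qquad
+	\alpha + (\beta+1) = (\alpha+\beta) + 1, \qquad
+	\alpha + \lambda = \sup_{\beta < \lambda} \left( \alpha + \beta \right) \]
+for limit ordinals $\lambda$;
+the three clauses are exactly the zero, successor, and limit cases.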
+
+\section{Wrapping up functional equations}
+Let's return to solving our problem.
+
+Let $S_n$ denote the set of ``base'' numbers we have at the $n$th step.
+In our example, we might have
+\[
+ S_1 = \left\{ 1 \right\}, \quad
+ S_2 = \left\{ 1, \sqrt 2 \right\}, \quad
+ S_3 = \left\{ 1, \sqrt 2, \sqrt 3 \right\}, \quad
+ S_4 = \left\{ 1, \sqrt 2, \sqrt 3, \pi \right\}, \quad
+ \dots
+\]
+and we'd like to keep building up $S_i$ until we can express all real numbers.
+For completeness, let me declare $S_0 = \varnothing$.
+
+First, I need to be more precise about ``independent''.
+Intuitively, this construction is working because
+\[ a + b \sqrt 2 + c \sqrt 3 + d \pi \]
+is never going to equal zero for rational numbers $a$, $b$, $c$, $d$ (other than all zeros).
+In general, a set $X$ of numbers is ``independent'' if the combination
+\[ c_1 x_1 + c_2 x_2 + \dots + c_m x_m = 0 \]
+never occurs for rational numbers $c_i \in \QQ$ unless $c_1 = c_2 = \dots = c_m = 0$.
+Here $x_i \in X$ are distinct. Note that even if $X$ is infinite,
+I can only take finite sums!
+(This notion has a name: we want $X$ to be \textbf{linearly independent} over $\QQ$;
+see the chapter on vector spaces for more on this!)
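+
+To make the definition concrete, here is the two-element case worked out:
+the set $X = \{1, \sqrt 2\}$ is independent,
+because if $a \cdot 1 + b \cdot \sqrt 2 = 0$ for rational $a$, $b$ not both zero,
+then necessarily $b \neq 0$ (otherwise $a = 0$ too),
+and so $\sqrt 2 = -\frac ab$ would be rational, which is absurd.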
+
+When do we stop?
+We'd like to stop when we have a set $S_{\text{something}}$ that's so big,
+every real number can be written in terms of the independent numbers.
+(This notion also has a name: it's called a $\QQ$-basis.)
+Let's call such a set \textbf{spanning};
+we stop once we hit a spanning set.
+
+The idea that we can induct still seems okay:
+suppose $S_\alpha$ isn't spanning.
+Then there's some number that is independent of $S_\alpha$, say $\sqrt{2015}\pi$ or something.
+Then we just add it to get $S_{\alpha+1}$.
+And we keep going.
+
+Unfortunately, as I said before it's not enough to be able to go from $S_\alpha$ to $S_{\alpha+1}$
+(successor case); we need to handle the limit case as well.
+But it turns out there's a trick we can do.
+Suppose we've constructed \emph{all} the sets $S_0$, $S_1$, $S_2$, \dots, one for each positive integer $n$,
+and none of them are spanning.
+The next thing I want to construct is $S_\omega$; somehow I have to ``jump''.
+To do this, I now take the infinite union
+\[ S_\omega \defeq S_0 \cup S_1 \cup S_2 \cup \dots. \]
+The elements of this set are also independent (why?).
+
+Ta-da!
+With the simple trick of ``union all the existing sets'',
+we've just jumped the hurdle to the first limit ordinal $\omega$.
+Then we can construct $S_{\omega+1}$, $S_{\omega+2}$, \dots, once again --
+just keep throwing in elements.
+Then when we need to jump the next hurdle to $S_{2 \omega}$,
+we just do the same trick of ``union-ing'' all the previous sets.
+
+So we can formalize the process as follows:
+\begin{enumerate}
+ \ii Let $S_0 = \varnothing$.
+ \ii For a successor stage $S_{\alpha+1}$, add any element to $S_\alpha$ to obtain $S_{\alpha+1}$.
+ \ii For a limit stage $S_{\lambda}$, take the union $\bigcup_{\gamma < \lambda} S_\gamma$.
+\end{enumerate}
+How do we know that we'll stop eventually?
+Well, the thing is that this process consumes a lot of real numbers.
+In particular, the ordinals get larger than the size of $\RR$ (assuming Choice).
+Hence if we don't stop we will quite literally reach a point where we have used up every single real number.
+Clearly that's impossible, because by then the elements can't possibly be independent!
+
+So by transfinite recursion, we eventually hit some $S_\gamma$ which is spanning:
+the elements are all independent, but every real number can be expressed using it.
+Done!
+
+\section{Zorn's lemma}
+Now I can tell you what Zorn's lemma is:
+it lets us do the same thing in any poset.
+
+We can think of the above example as follows:
+consider all sets of independent elements.
+These form a partially ordered set by inclusion, and what we did
+was quite literally climb up a chain
+\[ S_0 \subsetneq S_1 \subsetneq S_2 \subsetneq \dots. \]
+It's not quite climbing since we weren't just going one step at a time:
+we had to do ``jumps'' to get up to $S_\omega$ and resume climbing.
+But the main idea is to climb up a poset until we're at the very top;
+in the previous case, when we reached the spanning set.
+
+The same thing works verbatim with any \href{http://en.wikipedia.org/wiki/Partially_ordered_set}{partially ordered set}
+$\mathcal P$.
+Let's define some terminology.
+A \vocab{local maximum} of the entire poset $\mathcal P$ is an element
+which has no other elements strictly greater than it.
+(Most authors refer to this as ``maximal element'', but I think
+``local maximum'' is a more accurate term.)
+
+Now a \vocab{chain of length $\gamma$} is a set of elements $p_\alpha$ for every $\alpha < \gamma$
+such that $p_0 < p_1 < p_2 < \dots$.
+(Observe that a chain has a last element if and only if $\gamma$ is a successor ordinal, like $\omega+3$.)
+An \vocab{upper bound} to a chain is an element $\tilde p$ which is greater than or equal
+to all elements of the chain.
+In particular, if $\gamma$ is a successor ordinal, then just taking the last element of the chain works.
+
+In this language, Zorn's lemma states that
+\begin{theorem}
+ [Zorn's lemma]
+ Let $\mathcal P$ be a nonempty partially ordered set.
+ If every chain has an upper bound,
+ then $\mathcal P$ has a local maximum.
+\end{theorem}
+
+Chains with length equal to a successor ordinal always have upper bounds,
+but this is not true in the limit case.
+So the hypothesis of Zorn's lemma is exactly what
+lets us ``jump'' up to define $p_\omega$ and other limit ordinals.
+And the proof of Zorn's lemma is straightforward: keep climbing up the poset at successor stages,
+using Zorn's condition to jump up at limit stages, and thus building a really long chain.
+But we have to eventually stop, or we literally run out of elements of $\mathcal P$.
+And the only possible stopping point is a local maximum.
+
+If we want to phrase our previous solution in terms of Zorn's lemma, we'd say:
+\begin{proof}
+ Look at the poset whose elements are sets of independent real numbers.
+ Every chain $S_0 \subsetneq S_1 \subsetneq \dots$ has an upper bound $\bigcup S_\alpha$
+ (which you have to check is actually an element of the poset).
+ Thus by Zorn, there is a local maximum $S$.
+ Then $S$ must be spanning, because otherwise we could add an element to it.
+\end{proof}
+So really, Zorn's lemma is encoding all of the work of climbing that I argued earlier.
+It's a neat little package that captures all the boilerplate, and tells
+you exactly what you need to check.
+
+\begin{center}
+ \includegraphics[scale=0.5]{media/zornaholic.png}
+ \\ \scriptsize Image from \cite{img:zornaholic}
+\end{center}
+
+One last thing you might ask:
+where is the Axiom of Choice used?
+Well, the idea is that for any chain there could be lots of $\tilde p$'s,
+and you need to pick one of them.
+Since you are making arbitrary choices infinitely many times, you need the Axiom of Choice.
+(Actually, you also need Choice to talk about cardinalities as in
+\Cref{thm:transfinite}.)
+But really, it's nothing special.
+
+\section{\problemhead}
+\begin{problem}
+ [Tukey's lemma]
+ Let $\mathcal F$ be a nonempty family of sets.
+ Assume that for any set $A$,
+ the set $A$ is in $\mathcal F$
+ if and only if all its finite subsets are in $\mathcal F$.
+
+ Prove that there exists a maximal set $Y \in \mathcal F$
+ (i.e.\ $Y$ not contained in any other set of $\mathcal F$).
+\end{problem}
diff --git a/books/stacks/adequate.tex b/books/stacks/adequate.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5db7e82e2400425e2546b63594eefcdb7c2ccc00
--- /dev/null
+++ b/books/stacks/adequate.tex
@@ -0,0 +1,2990 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Adequate Modules}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+For any scheme $X$ the category $\QCoh(\mathcal{O}_X)$
+of quasi-coherent modules is abelian and a weak Serre subcategory
+of the abelian category of all $\mathcal{O}_X$-modules. The same
+thing works for the category of quasi-coherent modules on
+an algebraic space $X$ viewed as a subcategory of the category
+of all $\mathcal{O}_X$-modules on the small \'etale site of $X$.
+Moreover, for a quasi-compact and quasi-separated morphism
+$f : X \to Y$ the pushforward $f_*$ and higher direct images
+preserve quasi-coherence.
+
+\medskip\noindent
+Next, let $X$ be a scheme and let $\mathcal{O}$ be the structure
+sheaf on one of the big sites of $X$, say, the big fppf site.
+The category of quasi-coherent $\mathcal{O}$-modules is abelian
+(in fact it is equivalent to the category of usual quasi-coherent
+$\mathcal{O}_X$-modules on the scheme $X$ we mentioned above)
+but its imbedding into $\textit{Mod}(\mathcal{O})$ is not exact.
+An example is the map of quasi-coherent modules
+$$
+\mathcal{O}_{\mathbf{A}^1_k}
+\longrightarrow
+\mathcal{O}_{\mathbf{A}^1_k}
+$$
+on $\mathbf{A}^1_k = \Spec(k[x])$ given by multiplication by $x$.
+In the abelian category of quasi-coherent sheaves this map is injective,
+whereas in the abelian category of all $\mathcal{O}$-modules on the
+big site of $\mathbf{A}^1_k$ this map has a nontrivial kernel as we
+see by evaluating on sections over $\Spec(k[x]/(x)) = \Spec(k)$.
+Moreover, for a quasi-compact and quasi-separated morphism
+$f : X \to Y$ the functor $f_{big, *}$ does not preserve quasi-coherence.
+
+\medskip\noindent
+In this chapter we introduce the category of what we will call
+adequate modules, closely related to quasi-coherent modules, which
+``fixes'' the two problems mentioned above. Another solution,
+which we will implement when we talk about quasi-coherent modules
+on algebraic stacks, is to consider $\mathcal{O}$-modules which
+are locally quasi-coherent and satisfy the flat base change property.
+See Cohomology of Stacks, Section
+\ref{stacks-cohomology-section-loc-qcoh-flat-base-change},
+Cohomology of Stacks, Remark
+\ref{stacks-cohomology-remark-bousfield-colocalization}, and
+Derived Categories of Stacks, Section
+\ref{stacks-perfect-section-derived}.
+
+
+
+
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+In this chapter we fix
+$\tau \in \{Zar, \etale, smooth, syntomic, fppf\}$
+and we fix a big $\tau$-site $\Sch_\tau$ as in
+Topologies, Section \ref{topologies-section-procedure}.
+All schemes will be objects of $\Sch_\tau$.
+In particular, given a scheme $S$ we obtain sites
+$(\textit{Aff}/S)_\tau \subset (\Sch/S)_\tau$.
+The structure sheaf $\mathcal{O}$ on these sites is defined by
+the rule $\mathcal{O}(T) = \Gamma(T, \mathcal{O}_T)$.
+
+\medskip\noindent
+All rings $A$ will be such that $\Spec(A)$ is (isomorphic to) an
+object of $\Sch_\tau$. Given a ring $A$ we denote
+$\textit{Alg}_A$ the category of $A$-algebras whose objects are the
+$A$-algebras $B$ of the form $B = \Gamma(U, \mathcal{O}_U)$
+where $U$ is an affine object of $\Sch_\tau$. Thus given an
+affine scheme $S = \Spec(A)$ the functor
+$$
+(\textit{Aff}/S)_\tau \longrightarrow \textit{Alg}_A,
+\quad
+U \longmapsto \mathcal{O}(U)
+$$
+is an equivalence.
+
+
+
+
+
+\section{Adequate functors}
+\label{section-quasi-coherent}
+
+\noindent
+In this section we discuss a topic closely related to
+direct images of quasi-coherent sheaves. Most of this material
+was taken from the paper \cite{Jaffe}.
+
+\begin{definition}
+\label{definition-module-valued-functor}
+Let $A$ be a ring. A {\it module-valued functor} is a functor
+$F : \textit{Alg}_A \to \textit{Ab}$ such that
+\begin{enumerate}
+\item for every object $B$ of $\textit{Alg}_A$ the group
+$F(B)$ is endowed with the structure of a $B$-module, and
+\item for any morphism $B \to B'$ of $\textit{Alg}_A$ the map
+$F(B) \to F(B')$ is $B$-linear.
+\end{enumerate}
+A {\it morphism of module-valued functors} is a transformation of
+functors $\varphi : F \to G$ such that $F(B) \to G(B)$ is $B$-linear
+for all $B \in \Ob(\textit{Alg}_A)$.
+\end{definition}
+
+\noindent
+Let $S = \Spec(A)$ be an affine scheme.
+The category of module-valued functors on $\textit{Alg}_A$ is
+equivalent to the category
+$\textit{PMod}((\textit{Aff}/S)_\tau, \mathcal{O})$
+of presheaves of $\mathcal{O}$-modules. The equivalence is given
+by the rule which assigns to the module-valued functor $F$ the
+presheaf $\mathcal{F}$ defined by the rule
+$\mathcal{F}(U) = F(\mathcal{O}(U))$.
+This is clear from the equivalence
+$(\textit{Aff}/S)_\tau \to \textit{Alg}_A$, $U \mapsto \mathcal{O}(U)$
+given in Section \ref{section-conventions}.
+The quasi-inverse sets $F(B) = \mathcal{F}(\Spec(B))$.
+
+\medskip\noindent
+An important special case of a module-valued functor comes about as follows.
+Let $M$ be an $A$-module. Then we will denote $\underline{M}$ the
+module-valued functor $B \mapsto M \otimes_A B$ (with obvious $B$-module
+structure). Note that if $M \to N$ is a map of $A$-modules then there is an
+associated morphism $\underline{M} \to \underline{N}$ of module-valued
+functors. Conversely, any morphism of module-valued functors
+$\underline{M} \to \underline{N}$ comes from an $A$-module map $M \to N$
+as the reader can see by evaluating on $B = A$. In other words
+$\text{Mod}_A$ is a full
+subcategory of the category of module-valued functors on $\textit{Alg}_A$.
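+
+\medskip\noindent
+To spell out the evaluation argument of the previous paragraph:
+given a morphism of module-valued functors
+$\varphi : \underline{M} \to \underline{N}$,
+set $f = \varphi_A : M \to N$ using the identifications
+$M \otimes_A A = M$ and $N \otimes_A A = N$.
+For any $B$ in $\textit{Alg}_A$, compatibility of $\varphi$
+with the structure map $A \to B$ gives
+$\varphi_B(m \otimes 1) = f(m) \otimes 1$, and then $B$-linearity gives
+$$
+\varphi_B(m \otimes b) = b \cdot \varphi_B(m \otimes 1) = f(m) \otimes b.
+$$
+Hence $\varphi$ is the morphism associated to $f$.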
+
+\medskip\noindent
+Given an $A$-module map $\varphi : M \to N$ we have
+$\Coker(\underline{M} \to \underline{N}) =
+\underline{Q}$ where $Q = \Coker(M \to N)$ because $\otimes$
+is right exact. But this isn't the case
+for the kernel in general: for example an injective map of
+$A$-modules need not be injective after base change. Thus the following
+definition makes sense.
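+
+\medskip\noindent
+A standard instance of this failure, matching the example in the introduction:
+for $A = k[x]$ multiplication by $x$ is an injective map of $A$-modules
+$A \to A$, but applying $- \otimes_A A/(x)$ turns it into the zero map
+$k \to k$, which is not injective.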
+
+\begin{definition}
+\label{definition-adequate-functor}
+Let $A$ be a ring. A module-valued functor $F$ on $\textit{Alg}_A$ is
+called
+\begin{enumerate}
+\item {\it adequate} if there exists a
+map of $A$-modules $M \to N$ such that $F$ is isomorphic to
+$\Ker(\underline{M} \to \underline{N})$.
+\item {\it linearly adequate} if $F$ is isomorphic to the
+kernel of a map $\underline{A^{\oplus n}} \to \underline{A^{\oplus m}}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that $F$ is adequate if and only if there exists an
+exact sequence $0 \to F \to \underline{M} \to \underline{N}$ and
+$F$ is linearly adequate if and only if there exists an exact sequence
+$0 \to F \to \underline{A^{\oplus n}} \to \underline{A^{\oplus m}}$.
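+
+\medskip\noindent
+Here is an example connecting this back to the introduction.
+Take $A = k[x]$ and let $F$ be the kernel of multiplication by $x$
+on $\underline{A}$, i.e., $F(B) = \Ker(x : B \to B)$.
+Then $F$ is linearly adequate by construction.
+However, $F$ is not of the form $\underline{M}$ for any $A$-module $M$:
+evaluating at $B = A$ gives $F(A) = 0$ since $A$ is a domain,
+which would force $M = 0$ and hence $F = 0$,
+yet $F(k[x]/(x)) = k[x]/(x) \neq 0$.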
+
+\medskip\noindent
+Let $A$ be a ring. In this section we will show the category of adequate
+functors on $\textit{Alg}_A$ is abelian
+(Lemmas \ref{lemma-cokernel-adequate} and \ref{lemma-kernel-adequate})
+and has a set of generators
+(Lemma \ref{lemma-adequate-surjection-from-linear}).
+We will also see that it is a weak Serre subcategory of the category
+of all module-valued functors on $\textit{Alg}_A$
+(Lemma \ref{lemma-extension-adequate})
+and that it has arbitrary colimits
+(Lemma \ref{lemma-colimit-adequate}).
+
+\begin{lemma}
+\label{lemma-adequate-finite-presentation}
+Let $A$ be a ring.
+Let $F$ be an adequate functor on $\textit{Alg}_A$.
+If $B = \colim B_i$ is a filtered
+colimit of $A$-algebras, then $F(B) = \colim F(B_i)$.
+\end{lemma}
+
+\begin{proof}
+This holds because for any $A$-module $M$ we have
+$M \otimes_A B = \colim M \otimes_A B_i$ (see
+Algebra, Lemma \ref{algebra-lemma-tensor-products-commute-with-limits})
+and because filtered colimits commute with exact sequences, see
+Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}.
+\end{proof}
+
+\begin{remark}
+\label{remark-settheoretic}
+Consider the category $\textit{Alg}_{fp, A}$ whose objects are $A$-algebras
+$B$ of the form $B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$ and whose
+morphisms are $A$-algebra maps. Every $A$-algebra $B$ is a filtered colimit
of finitely presented $A$-algebras, i.e., a filtered colimit of objects of
+$\textit{Alg}_{fp, A}$. By
+Lemma \ref{lemma-adequate-finite-presentation}
+we conclude every adequate functor $F$ is determined by its restriction to
+$\textit{Alg}_{fp, A}$. For some questions we can therefore restrict to
+functors on $\textit{Alg}_{fp, A}$. For example, the category of adequate
+functors does not depend on the choice of the big $\tau$-site
+chosen in
+Section \ref{section-conventions}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-adequate-flat}
+Let $A$ be a ring.
+Let $F$ be an adequate functor on $\textit{Alg}_A$.
+If $B \to B'$ is flat, then $F(B) \otimes_B B' \to F(B')$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Choose an exact sequence $0 \to F \to \underline{M} \to \underline{N}$.
+This gives the diagram
+$$
+\xymatrix{
+0 \ar[r] & F(B) \otimes_B B' \ar[r] \ar[d] &
+(M \otimes_A B)\otimes_B B' \ar[r] \ar[d] &
+(N \otimes_A B)\otimes_B B' \ar[d] \\
+0 \ar[r] & F(B') \ar[r] &
+M \otimes_A B' \ar[r] &
+N \otimes_A B'
+}
+$$
+where the rows are exact (the top one because $B \to B'$ is flat).
+Since the right two vertical arrows are isomorphisms, so is the
+left one.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-adequate-surjection-from-linear}
+Let $A$ be a ring.
+Let $F$ be an adequate functor on $\textit{Alg}_A$. Then there exists a
+surjection $L \to F$ with $L$ a direct sum of linearly adequate functors.
+\end{lemma}
+
+\begin{proof}
+Choose an exact sequence $0 \to F \to \underline{M} \to \underline{N}$
+where $\underline{M} \to \underline{N}$ is given by
+$\varphi : M \to N$. By
+Lemma \ref{lemma-adequate-finite-presentation}
+it suffices to construct $L \to F$ such that $L(B) \to F(B)$ is surjective
+for every finitely presented $A$-algebra $B$. Hence it suffices to construct,
+given a finitely presented $A$-algebra $B$ and an element $\xi \in F(B)$
+a map $L \to F$ with $L$ linearly adequate such that $\xi$ is in the image
+of $L(B) \to F(B)$.
(This suffices because there is only a set's worth of such pairs
$(B, \xi)$ up to isomorphism.)
+
+\medskip\noindent
To do this, write $\sum_{i = 1, \ldots, n} m_i \otimes b_i$ for the image of
+$\xi$ in $\underline{M}(B) = M \otimes_A B$. We know that
+$\sum \varphi(m_i) \otimes b_i = 0$ in $N \otimes_A B$.
+As $N$ is a filtered colimit of finitely presented $A$-modules, we can
+find a finitely presented $A$-module $N'$, a commutative diagram
+of $A$-modules
+$$
+\xymatrix{
+A^{\oplus n} \ar[r] \ar[d]_{m_1, \ldots, m_n} & N' \ar[d] \\
+M \ar[r] & N
+}
+$$
+such that $(b_1, \ldots, b_n)$ maps to zero in $N' \otimes_A B$.
+Choose a presentation $A^{\oplus l} \to A^{\oplus k} \to N' \to 0$.
+Choose a lift $A^{\oplus n} \to A^{\oplus k}$ of the map
+$A^{\oplus n} \to N'$ of the diagram. Then we see that there exist
+$(c_1, \ldots, c_l) \in B^{\oplus l}$ such that
+$(b_1, \ldots, b_n, c_1, \ldots, c_l)$ maps to zero in $B^{\oplus k}$
+under the map $B^{\oplus n} \oplus B^{\oplus l} \to B^{\oplus k}$.
+Consider the commutative diagram
+$$
+\xymatrix{
+A^{\oplus n} \oplus A^{\oplus l} \ar[r] \ar[d] & A^{\oplus k} \ar[d] \\
+M \ar[r] & N
+}
+$$
+where the left vertical arrow is zero on the summand $A^{\oplus l}$.
+Then we see that $L$ equal to the kernel of $\underline{A^{\oplus n + l}}
+\to \underline{A^{\oplus k}}$ works because the element
+$(b_1, \ldots, b_n, c_1, \ldots, c_l) \in L(B)$ maps to $\xi$.
+\end{proof}
+
+\noindent
+Consider a graded $A$-algebra $B = \bigoplus_{d \geq 0} B_d$. Then there are
+two $A$-algebra maps $p, a : B \to B[t, t^{-1}]$, namely $p : b \mapsto b$ and
+$a : b \mapsto t^{\deg(b)} b$ where $b$ is homogeneous. If $F$ is a
+module-valued functor on $\textit{Alg}_A$, then we define
+\begin{equation}
+\label{equation-weight-k}
+F(B)^{(k)} = \{\xi \in F(B) \mid t^k F(p)(\xi) = F(a)(\xi)\}.
+\end{equation}
+For functors which behave well with respect to flat ring extensions
+this gives a direct sum decomposition. This amounts to the fact that
+representations of $\mathbf{G}_m$ are completely reducible.
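
\medskip\noindent
To illustrate (\ref{equation-weight-k}), take $F = \underline{M}$ and let
$B$ be a graded $A$-algebra. For $\xi = \sum m_j \otimes b_j$ with each
$b_j \in B_k$ homogeneous of degree $k$ we get
$$
F(a)(\xi) = \sum m_j \otimes t^k b_j = t^k F(p)(\xi),
$$
so $M \otimes_A B_k \subset \underline{M}(B)^{(k)}$. We will see below
that this inclusion is an equality.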
+
+\begin{lemma}
+\label{lemma-flat-functor-split}
+Let $A$ be a ring.
+Let $F$ be a module-valued functor on $\textit{Alg}_A$.
+Assume that for $B \to B'$ flat the map
+$F(B) \otimes_B B' \to F(B')$ is an isomorphism.
+Let $B$ be a graded $A$-algebra. Then
+\begin{enumerate}
+\item $F(B) = \bigoplus_{k \in \mathbf{Z}} F(B)^{(k)}$, and
\item the map $B \to B_0 \to B$ induces a map $F(B) \to F(B)$
whose image is contained in $F(B)^{(0)}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Let $x \in F(B)$. The ring map $p : B \to B[t, t^{-1}]$ is free,
hence we know that
+$$
+F(B[t, t^{-1}]) =
+\bigoplus\nolimits_{k \in \mathbf{Z}} F(p)(F(B)) \cdot t^k =
+\bigoplus\nolimits_{k \in \mathbf{Z}} F(B) \cdot t^k
+$$
As indicated, we drop $F(p)$ from the notation in the rest of the proof.
+Write $F(a)(x) = \sum t^k x_k$ for some $x_k \in F(B)$.
+Denote $\epsilon : B[t, t^{-1}] \to B$
+the $B$-algebra map $t \mapsto 1$. Note that the compositions
+$\epsilon \circ p, \epsilon \circ a : B \to B[t, t^{-1}] \to B$ are
+the identity. Hence we see that
+$$
+x = F(\epsilon)(F(a)(x)) = F(\epsilon)(\sum t^k x_k) = \sum x_k.
+$$
+On the other hand, we claim that $x_k \in F(B)^{(k)}$. Namely, consider
+the commutative diagram
+$$
+\xymatrix{
+B \ar[r]_a \ar[d]_{a'} &
+B[t, t^{-1}] \ar[d]^f \\
+B[s, s^{-1}] \ar[r]^-g &
+B[t, s, t^{-1}, s^{-1}]
+}
+$$
+where $a'(b) = s^{\deg(b)}b$, $f(b) = b$, $f(t) = st$ and
+$g(b) = t^{\deg(b)}b$ and $g(s) = s$. Then
+$$
+F(g)(F(a'))(x) = F(g)(\sum s^k x_k) =
+\sum s^k F(a)(x_k)
+$$
+and going the other way we see
+$$
+F(f)(F(a))(x) = F(f)(\sum t^k x_k) = \sum (st)^k x_k.
+$$
+Since $B \to B[s, t, s^{-1}, t^{-1}]$ is free we see that
+$F(B[t, s, t^{-1}, s^{-1}]) =
+\bigoplus_{k, l \in \mathbf{Z}} F(B) \cdot t^ks^l$ and
+comparing coefficients in the expressions above we find
+$F(a)(x_k) = t^k x_k$ as desired.
+
+\medskip\noindent
+Finally, the image of $F(B_0) \to F(B)$ is contained in $F(B)^{(0)}$
+because $B_0 \to B \xrightarrow{a} B[t, t^{-1}]$ is equal to
+$B_0 \to B \xrightarrow{p} B[t, t^{-1}]$.
+\end{proof}
+
+\noindent
+As a particular case of
+Lemma \ref{lemma-flat-functor-split}
+note that
+$$
+\underline{M}(B)^{(k)} = M \otimes_A B_k
+$$
+where $B_k$ is the degree $k$ part of the graded $A$-algebra $B$.
+
+\begin{lemma}
+\label{lemma-lift-map}
+Let $A$ be a ring. Given a solid diagram
+$$
+\xymatrix{
+0 \ar[r] &
+L \ar[d]_\varphi \ar[r] &
+\underline{A^{\oplus n}} \ar[r] \ar@{..>}[ld] &
+\underline{A^{\oplus m}} \\
+& \underline{M}
+}
+$$
+of module-valued functors on $\textit{Alg}_A$
+with exact row there exists a dotted arrow making the diagram commute.
+\end{lemma}
+
+\begin{proof}
+Suppose that the map $A^{\oplus n} \to A^{\oplus m}$ is given by the
+$m \times n$-matrix $(a_{ij})$. Consider the ring
+$B = A[x_1, \ldots, x_n]/(\sum a_{ij}x_j)$. The element
+$(x_1, \ldots, x_n) \in \underline{A^{\oplus n}}(B)$ maps to zero in
+$\underline{A^{\oplus m}}(B)$ hence is the image of a unique element
+$\xi \in L(B)$. Note that $\xi$ has the following universal property:
+for any $A$-algebra $C$ and any $\xi' \in L(C)$ there exists an $A$-algebra
+map $B \to C$ such that $\xi$ maps to $\xi'$ via the map $L(B) \to L(C)$.
+
+\medskip\noindent
+Note that $B$ is a graded $A$-algebra, hence we can use
+Lemmas \ref{lemma-flat-functor-split} and \ref{lemma-adequate-flat}
+to decompose the values of our functors on $B$ into graded pieces.
+Note that $\xi \in L(B)^{(1)}$ as $(x_1, \ldots, x_n)$ is an element
+of degree one in $\underline{A^{\oplus n}}(B)$. Hence we see that
+$\varphi(\xi) \in \underline{M}(B)^{(1)} = M \otimes_A B_1$.
+Since $B_1$ is generated by $x_1, \ldots, x_n$ as an $A$-module we
+can write $\varphi(\xi) = \sum m_i \otimes x_i$. Consider the map
+$A^{\oplus n} \to M$ which maps the $i$th basis vector to $m_i$.
+By construction the associated map
+$\underline{A^{\oplus n}} \to \underline{M}$
+maps the element $\xi$ to $\varphi(\xi)$. It follows from the
+universal property mentioned above that the diagram commutes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cokernel-into-module}
+Let $A$ be a ring.
+Let $\varphi : F \to \underline{M}$ be a map of module-valued functors
+on $\textit{Alg}_A$ with $F$ adequate.
+Then $\Coker(\varphi)$ is adequate.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-adequate-surjection-from-linear}
+we may assume that $F = \bigoplus L_i$ is a direct sum of linearly adequate
+functors. Choose exact sequences
+$0 \to L_i \to \underline{A^{\oplus n_i}} \to \underline{A^{\oplus m_i}}$.
+For each $i$ choose a map $A^{\oplus n_i} \to M$ as in
+Lemma \ref{lemma-lift-map}.
+Consider the diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\bigoplus L_i \ar[r] \ar[d] &
+\bigoplus \underline{A^{\oplus n_i}} \ar[r] \ar[ld] &
+\bigoplus \underline{A^{\oplus m_i}} \\
+& \underline{M}
+}
+$$
+Consider the $A$-modules
+$$
+Q =
+\Coker(\bigoplus A^{\oplus n_i} \to M \oplus \bigoplus A^{\oplus m_i})
+\quad\text{and}\quad
+P = \Coker(\bigoplus A^{\oplus n_i} \to \bigoplus A^{\oplus m_i}).
+$$
+Then we see that $\Coker(\varphi)$ is isomorphic to the
+kernel of $\underline{Q} \to \underline{P}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cokernel-adequate}
+\begin{slogan}
+The cokernel of a map of adequate functors on the category of algebras
+over a ring is adequate.
+\end{slogan}
+Let $A$ be a ring.
+Let $\varphi : F \to G$ be a map of adequate functors on $\textit{Alg}_A$.
+Then $\Coker(\varphi)$ is adequate.
+\end{lemma}
+
+\begin{proof}
+Choose an injection $G \to \underline{M}$.
+Then we have an injection $G/F \to \underline{M}/F$. By
+Lemma \ref{lemma-cokernel-into-module}
+we see that $\underline{M}/F$ is adequate, hence we can find an injection
+$\underline{M}/F \to \underline{N}$.
+Composing we obtain an injection $G/F \to \underline{N}$. By
+Lemma \ref{lemma-cokernel-into-module}
+the cokernel of the induced map $G \to \underline{N}$ is adequate
+hence we can find an injection $\underline{N}/G \to \underline{K}$.
+Then $0 \to G/F \to \underline{N} \to \underline{K}$ is exact and
+we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kernel-adequate}
+Let $A$ be a ring.
+Let $\varphi : F \to G$ be a map of adequate functors on $\textit{Alg}_A$.
+Then $\Ker(\varphi)$ is adequate.
+\end{lemma}
+
+\begin{proof}
+Choose an injection $F \to \underline{M}$ and an injection
+$G \to \underline{N}$. Denote $F \to \underline{M \oplus N}$
+the diagonal map so that
+$$
+\xymatrix{
+F \ar[d] \ar[r] & G \ar[d] \\
+\underline{M \oplus N} \ar[r] & \underline{N}
+}
+$$
+commutes. By
+Lemma \ref{lemma-cokernel-adequate}
+we can find a module map $M \oplus N \to K$ such that
+$F$ is the kernel of $\underline{M \oplus N} \to \underline{K}$.
+Then $\Ker(\varphi)$ is the kernel of
+$\underline{M \oplus N} \to \underline{K \oplus N}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-adequate}
+Let $A$ be a ring.
+An arbitrary direct sum of adequate functors on $\textit{Alg}_A$
+is adequate. A colimit of adequate functors is adequate.
+\end{lemma}
+
+\begin{proof}
+The statement on direct sums is immediate.
+A general colimit can be written as a kernel of a map between
+direct sums, see
+Categories, Lemma \ref{categories-lemma-colimits-coproducts-coequalizers}.
+Hence this follows from
+Lemma \ref{lemma-kernel-adequate}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-linear-functor}
+Let $A$ be a ring.
+Let $F, G$ be module-valued functors on $\textit{Alg}_A$.
+Let $\varphi : F \to G$ be a transformation of functors. Assume
+\begin{enumerate}
+\item $\varphi$ is additive,
+\item for every $A$-algebra $B$ and $\xi \in F(B)$ and unit
+$u \in B^*$ we have $\varphi(u\xi) = u\varphi(\xi)$ in $G(B)$, and
+\item for any flat ring map $B \to B'$ we have
+$G(B) \otimes_B B' = G(B')$.
+\end{enumerate}
+Then $\varphi$ is a morphism of module-valued functors.
+\end{lemma}
+
+\begin{proof}
+Let $B$ be an $A$-algebra, $\xi \in F(B)$, and $b \in B$. We have to show
+that $\varphi(b \xi) = b \varphi(\xi)$. Consider the ring map
+$$
+B \to B' = B[x, y, x^{-1}, y^{-1}]/(x + y - b).
+$$
+This ring map is faithfully flat, hence $G(B) \subset G(B')$. On the
+other hand
+$$
+\varphi(b\xi) = \varphi((x + y)\xi) =
+\varphi(x\xi) + \varphi(y\xi) = x\varphi(\xi) + y\varphi(\xi)
+= (x + y)\varphi(\xi) = b\varphi(\xi)
+$$
+because $x, y$ are units in $B'$. Hence we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extension-adequate-key}
+Let $A$ be a ring.
+Let $0 \to \underline{M} \to G \to L \to 0$ be a short exact sequence
+of module-valued functors on $\textit{Alg}_A$ with $L$ linearly adequate.
+Then $G$ is adequate.
+\end{lemma}
+
+\begin{proof}
+We first point out that for any flat $A$-algebra map
+$B \to B'$ the map $G(B) \otimes_B B' \to G(B')$ is an isomorphism.
+Namely, this holds for $\underline{M}$ and $L$, see
+Lemma \ref{lemma-adequate-flat}
+and hence follows for $G$ by the five lemma. In particular, by
+Lemma \ref{lemma-flat-functor-split}
+we see that $G(B) = \bigoplus_{k \in \mathbf{Z}} G(B)^{(k)}$
+for any graded $A$-algebra $B$.
+
+\medskip\noindent
+Choose an exact sequence
+$0 \to L \to \underline{A^{\oplus n}} \to \underline{A^{\oplus m}}$.
+Suppose that the map $A^{\oplus n} \to A^{\oplus m}$ is given by the
+$m \times n$-matrix $(a_{ij})$. Consider the graded $A$-algebra
+$B = A[x_1, \ldots, x_n]/(\sum a_{ij}x_j)$. The element
+$(x_1, \ldots, x_n) \in \underline{A^{\oplus n}}(B)$ maps to zero in
+$\underline{A^{\oplus m}}(B)$ hence is the image of a unique element
+$\xi \in L(B)$. Observe that $\xi \in L(B)^{(1)}$. The map
+$$
+\Hom_A(B, C) \longrightarrow L(C), \quad
+f \longmapsto L(f)(\xi)
+$$
+defines an isomorphism of functors. The reason is that $f$ is
+determined by the images $c_i = f(x_i) \in C$ which have to
+satisfy the relations $\sum a_{ij}c_j = 0$. And $L(C)$ is the
+set of $n$-tuples $(c_1, \ldots, c_n)$ satisfying the relations
+$\sum a_{ij} c_j = 0$.
+
+\medskip\noindent
+Since the value of each of the functors $\underline{M}$, $G$, $L$
+on $B$ is a direct sum of its weight spaces (by the lemma mentioned
+above) exactness of $0 \to \underline{M} \to G \to L \to 0$ implies
+the sequence $0 \to \underline{M}(B)^{(1)} \to G(B)^{(1)} \to L(B)^{(1)} \to 0$
+is exact. Thus we may choose an element $\theta \in G(B)^{(1)}$ mapping
+to $\xi$.
+
+\medskip\noindent
+Consider the graded $A$-algebra
+$$
+C = A[x_1, \ldots, x_n, y_1, \ldots, y_n]/
+(\sum a_{ij}x_j, \sum a_{ij}y_j)
+$$
+There are three graded $A$-algebra homomorphisms $p_1, p_2, m : B \to C$
+defined by the rules
+$$
+p_1(x_i) = x_i, \quad
p_2(x_i) = y_i, \quad
+m(x_i) = x_i + y_i.
+$$
+We will show that the element
+$$
+\tau = G(m)(\theta) - G(p_1)(\theta) - G(p_2)(\theta) \in G(C)
+$$
+is zero. First, $\tau$ maps to zero in $L(C)$ by a direct calculation.
+Hence $\tau$ is an element of $\underline{M}(C)$.
+Moreover, since $m$, $p_1$, $p_2$ are graded algebra maps we see
+that $\tau \in G(C)^{(1)}$ and since $\underline{M} \subset G$
+we conclude
+$$
+\tau \in \underline{M}(C)^{(1)} = M \otimes_A C_1.
+$$
+We may write uniquely
+$\tau = \underline{M}(p_1)(\tau_1) + \underline{M}(p_2)(\tau_2)$ with
+$\tau_i \in M \otimes_A B_1 = \underline{M}(B)^{(1)}$ because
+$C_1 = p_1(B_1) \oplus p_2(B_1)$.
+Consider the ring map $q_1 : C \to B$ defined by $x_i \mapsto x_i$ and
+$y_i \mapsto 0$. Then
+$\underline{M}(q_1)(\tau) =
+\underline{M}(q_1)(\underline{M}(p_1)(\tau_1) + \underline{M}(p_2)(\tau_2)) =
+\tau_1$.
+On the other hand, because
+$q_1 \circ m = q_1 \circ p_1$ we see that
+$G(q_1)(\tau) = - G(q_1 \circ p_2)(\tau)$. Since $q_1 \circ p_2$ factors as
+$B \to A \to B$ we see that $G(q_1 \circ p_2)(\tau)$ is in
+$G(B)^{(0)}$, see
+Lemma \ref{lemma-flat-functor-split}.
+Hence $\tau_1 = 0$ because it is in
+$G(B)^{(0)} \cap \underline{M}(B)^{(1)} \subset
+G(B)^{(0)} \cap G(B)^{(1)} = 0$.
+Similarly $\tau_2 = 0$, whence $\tau = 0$.
+
+\medskip\noindent
+Since $\theta \in G(B)$ we obtain a transformation of functors
+$$
+\psi : L(-) = \Hom_A(B, - ) \longrightarrow G(-)
+$$
+by mapping $f : B \to C$ to $G(f)(\theta)$. Since $\theta$ is a lift of
+$\xi$ the map $\psi$ is a right inverse of $G \to L$. In terms of
+$\psi$ the statements proved above have the following meaning:
+$\tau = 0$ means that $\psi$ is additive and
+$\theta \in G(B)^{(1)}$ implies that for any $A$-algebra $D$ we have
+$\psi(ul) = u\psi(l)$ in $G(D)$ for $l \in L(D)$ and $u \in D^*$ a unit.
+This implies that $\psi$ is a morphism of module-valued functors, see
+Lemma \ref{lemma-flat-linear-functor}.
+Clearly this implies that $G \cong \underline{M} \oplus L$ and we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-linearly-adequate}
+Let $A$ be a ring.
+The proof of
+Lemma \ref{lemma-extension-adequate-key}
+shows that any extension $0 \to \underline{M} \to E \to L \to 0$
+of module-valued functors on $\textit{Alg}_A$
+with $L$ linearly adequate splits. It uses only the following properties
+of the module-valued functor $F = \underline{M}$:
+\begin{enumerate}
+\item $F(B) \otimes_B B' \to F(B')$ is an isomorphism
+for a flat ring map $B \to B'$, and
+\item
+$F(C)^{(1)} = F(p_1)(F(B)^{(1)}) \oplus F(p_2)(F(B)^{(1)})$
+where $B = A[x_1, \ldots, x_n]/(\sum a_{ij}x_j)$ and
+$C = A[x_1, \ldots, x_n, y_1, \ldots, y_n]/
+(\sum a_{ij}x_j, \sum a_{ij}y_j)$.
+\end{enumerate}
+These two properties hold for any adequate functor $F$; details omitted.
+Hence we see that $L$ is a projective object of the abelian category of
+adequate functors.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-extension-adequate}
+Let $A$ be a ring.
+Let $0 \to F \to G \to H \to 0$ be a short exact sequence of
+module-valued functors on $\textit{Alg}_A$.
+If $F$ and $H$ are adequate, so is $G$.
+\end{lemma}
+
+\begin{proof}
+Choose an exact sequence $0 \to F \to \underline{M} \to \underline{N}$.
+If we can show that $(\underline{M} \oplus G)/F$ is adequate, then
+$G$ is the kernel of the map of adequate functors
+$(\underline{M} \oplus G)/F \to \underline{N}$, hence
+adequate by
+Lemma \ref{lemma-kernel-adequate}.
+Thus we may assume $F = \underline{M}$.
+
+\medskip\noindent
+We can choose a surjection $L \to H$ where $L$ is a direct sum of
+linearly adequate functors, see
+Lemma \ref{lemma-adequate-surjection-from-linear}.
+If we can show that the pullback $G \times_H L$ is adequate, then
+$G$ is the cokernel of the map $\Ker(L \to H) \to G \times_H L$
+hence adequate by
+Lemma \ref{lemma-cokernel-adequate}.
+Thus we may assume that $H = \bigoplus L_i$ is a direct sum of
+linearly adequate functors. By
+Lemma \ref{lemma-extension-adequate-key}
+each of the pullbacks $G \times_H L_i$ is adequate. By
+Lemma \ref{lemma-colimit-adequate}
+we see that $\bigoplus G \times_H L_i$ is adequate.
+Then $G$ is the cokernel of
+$$
+\bigoplus\nolimits_{i \not = i'} F \longrightarrow
+\bigoplus G \times_H L_i
+$$
+where $\xi$ in the summand $(i, i')$ maps to
+$(0, \ldots, 0, \xi, 0, \ldots, 0, -\xi, 0, \ldots, 0)$
+with nonzero entries in the summands $i$ and $i'$.
+Thus $G$ is adequate by
+Lemma \ref{lemma-cokernel-adequate}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-adequate}
+Let $A \to A'$ be a ring map. If $F$ is an adequate functor on
+$\textit{Alg}_A$, then its restriction $F'$ to
+$\textit{Alg}_{A'}$ is adequate too.
+\end{lemma}
+
+\begin{proof}
+Choose an exact sequence $0 \to F \to \underline{M} \to \underline{N}$.
+Then $F'(B') = F(B') = \Ker(M \otimes_A B' \to N \otimes_A B')$.
+Since $M \otimes_A B' = M \otimes_A A' \otimes_{A'} B'$ and similarly
+for $N$ we see that $F'$ is the kernel of
+$\underline{M \otimes_A A'} \to \underline{N \otimes_A A'}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-adequate}
+Let $A \to A'$ be a ring map. If $F'$ is an adequate functor on
+$\textit{Alg}_{A'}$, then the module-valued functor
+$F : B \mapsto F'(A' \otimes_A B)$ on $\textit{Alg}_A$ is adequate too.
+\end{lemma}
+
+\begin{proof}
+Choose an exact sequence $0 \to F' \to \underline{M'} \to \underline{N'}$.
+Then
+\begin{align*}
+F(B) & = F'(A' \otimes_A B) \\
+& = \Ker(M' \otimes_{A'} (
+A' \otimes_A B) \to N' \otimes_{A'} (A' \otimes_A B)) \\
+& = \Ker(M' \otimes_A B \to N' \otimes_A B)
+\end{align*}
+Thus $F$ is the kernel of
+$\underline{M} \to \underline{N}$
+where $M = M'$ and $N = N'$ viewed as $A$-modules.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-adequate-product}
+Let $A = A_1 \times \ldots \times A_n$ be a product of rings.
+An adequate functor over $A$ is the same thing as a sequence
+$F_1, \ldots, F_n$ of adequate functors $F_i$ over $A_i$.
+\end{lemma}
+
+\begin{proof}
+This is true because an $A$-algebra $B$ is canonically a product
+$B_1 \times \ldots \times B_n$ and the same thing holds for $A$-modules.
Setting $F(B) = \prod F_i(B_i)$ gives the correspondence.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-adequate-descent}
+Let $A \to A'$ be a ring map and let $F$ be a module-valued functor on
+$\textit{Alg}_A$ such that
+\begin{enumerate}
+\item the restriction $F'$ of $F$ to the category of $A'$-algebras is
+adequate, and
+\item for any $A$-algebra $B$ the sequence
+$$
+0 \to F(B) \to F(B \otimes_A A') \to F(B \otimes_A A' \otimes_A A')
+$$
+is exact.
+\end{enumerate}
+Then $F$ is adequate.
+\end{lemma}
+
+\begin{proof}
The functors $B \mapsto F(B \otimes_A A')$ and
$B \mapsto F(B \otimes_A A' \otimes_A A')$ are adequate, see
+Lemmas \ref{lemma-pushforward-adequate} and
+\ref{lemma-base-change-adequate}.
+Hence $F$ as a kernel of a map of adequate functors is adequate, see
+Lemma \ref{lemma-kernel-adequate}.
+\end{proof}
+
+
+
+
+
+
+\section{Higher exts of adequate functors}
+\label{section-higher-ext}
+
+\noindent
+Let $A$ be a ring. In
+Lemma \ref{lemma-extension-adequate}
+we have seen that any extension of adequate functors in the category
+of module-valued functors on $\textit{Alg}_A$ is adequate. In this
+section we show that the same remains true for higher ext groups.
+
+\begin{lemma}
+\label{lemma-adjoint}
+Let $A$ be a ring.
+For every module-valued functor $F$ on $\textit{Alg}_A$
+there exists a morphism $Q(F) \to F$ of module-valued functors on
+$\textit{Alg}_A$ such that (1) $Q(F)$ is adequate and (2) for every
+adequate functor $G$ the map $\Hom(G, Q(F)) \to \Hom(G, F)$
+is a bijection.
+\end{lemma}
+
+\begin{proof}
+Choose a set $\{L_i\}_{i \in I}$ of linearly adequate functors such that
+every linearly adequate functor is isomorphic to one of the $L_i$.
This is possible. Suppose that we can find $Q(F) \to F$ satisfying (1) and
the following condition (2)': for every $i \in I$ the map
$\Hom(L_i, Q(F)) \to \Hom(L_i, F)$
is a bijection. Then (2) holds. Namely, combining
+Lemmas \ref{lemma-adequate-surjection-from-linear} and
+\ref{lemma-kernel-adequate}
+we see that every adequate functor $G$ sits in an exact sequence
+$$
+K \to L \to G \to 0
+$$
+with $K$ and $L$ direct sums of linearly adequate functors. Hence (2)'
+implies that
+$\Hom(L, Q(F)) \to \Hom(L, F)$
+and
+$\Hom(K, Q(F)) \to \Hom(K, F)$
+are bijections, whence the same thing for $G$.
+
+\medskip\noindent
+Consider the category $\mathcal{I}$ whose objects are pairs
+$(i, \varphi)$ where $i \in I$ and $\varphi : L_i \to F$ is a morphism.
+A morphism $(i, \varphi) \to (i', \varphi')$ is a map
+$\psi : L_i \to L_{i'}$ such that $\varphi' \circ \psi = \varphi$.
+Set
+$$
+Q(F) = \colim_{(i, \varphi) \in \Ob(\mathcal{I})} L_i
+$$
There is a natural map $Q(F) \to F$. By
Lemma \ref{lemma-colimit-adequate}
the functor $Q(F)$ is adequate, and by construction the map has property (2)'.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-enough-injectives}
+Let $A$ be a ring. Denote $\mathcal{P}$ the category of module-valued
+functors on $\textit{Alg}_A$ and $\mathcal{A}$ the category of adequate
+functors on $\textit{Alg}_A$. Denote $i : \mathcal{A} \to \mathcal{P}$
+the inclusion functor. Denote $Q : \mathcal{P} \to \mathcal{A}$
+the construction of Lemma \ref{lemma-adjoint}.
+Then
+\begin{enumerate}
+\item $i$ is fully faithful, exact, and its image is a weak Serre subcategory,
+\item $\mathcal{P}$ has enough injectives,
+\item the functor $Q$ is a right adjoint to $i$ hence left exact,
+\item $Q$ transforms injectives into injectives,
+\item $\mathcal{A}$ has enough injectives.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
This lemma collects some facts we have already seen.
+Part (1) is clear from the definitions, the characterization of
+weak Serre subcategories (see
+Homology, Lemma \ref{homology-lemma-characterize-weak-serre-subcategory}),
+and
+Lemmas \ref{lemma-cokernel-adequate}, \ref{lemma-kernel-adequate},
+and \ref{lemma-extension-adequate}.
+Recall that $\mathcal{P}$ is equivalent to the category
+$\textit{PMod}((\textit{Aff}/\Spec(A))_\tau, \mathcal{O})$.
+Hence (2) by
+Injectives, Proposition \ref{injectives-proposition-presheaves-modules}.
+Part (3) follows from
+Lemma \ref{lemma-adjoint}
+and
+Categories, Lemma \ref{categories-lemma-adjoint-exact}.
+Parts (4) and (5) follow from
+Homology, Lemmas \ref{homology-lemma-adjoint-preserve-injectives} and
+\ref{homology-lemma-adjoint-enough-injectives}.
+\end{proof}
+
+\noindent
+Let $A$ be a ring. As in
+Formal Deformation Theory, Section
+\ref{formal-defos-section-tangent-spaces-functors}
given an $A$-algebra $B$ and a $B$-module $N$ we set $B[N]$ equal to
the $B$-algebra with underlying $B$-module $B \oplus N$ and multiplication
given by $(b, m)(b', m') = (bb', bm' + b'm)$. Note that this construction
is functorial in the pair $(B, N)$, where a morphism $(B, N) \to (B', N')$
is given by an $A$-algebra map $B \to B'$ and a $B$-module map
$N \to N'$. In some sense the functor $TF$ of pairs defined in the following
+lemma is the tangent space of $F$.
+Below we will only consider pairs $(B, N)$ such that
+$B[N]$ is an object of $\textit{Alg}_A$.
+
+\begin{lemma}
+\label{lemma-tangent-functor}
Let $A$ be a ring. Let $F$ be a module-valued functor on $\textit{Alg}_A$.
+For every $B \in \Ob(\textit{Alg}_A)$ and $B$-module $N$
+there is a canonical decomposition
+$$
+F(B[N]) = F(B) \oplus TF(B, N)
+$$
+characterized by the following properties
+\begin{enumerate}
+\item $TF(B, N) = \Ker(F(B[N]) \to F(B))$,
\item there is a $B$-module structure on $TF(B, N)$
compatible with the $B[N]$-module structure on $F(B[N])$,
+\item $TF$ is a functor from the category of pairs $(B, N)$,
+\item
+\label{item-mult-map}
+there are canonical maps $N \otimes_B F(B) \to TF(B, N)$
+inducing a transformation between functors defined on the category
+of pairs $(B, N)$,
+\item $TF(B, 0) = 0$ and the map $TF(B, N) \to TF(B, N')$ is
+zero when $N \to N'$ is the zero map.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Since the composition $B \to B[N] \to B$ is the identity we see that
$F(B) \to F(B[N])$ identifies $F(B)$ with a direct summand of $F(B[N])$
whose complement is $TF(B, N)$ as defined in (1).
+This construction is functorial in the pair $(B, N)$ simply because
+given a morphism of pairs $(B, N) \to (B', N')$ we obtain a commutative
+diagram
+$$
+\xymatrix{
+B' \ar[r] & B'[N'] \ar[r] & B' \\
+B \ar[r] \ar[u] & B[N] \ar[r] \ar[u] & B \ar[u]
+}
+$$
+in $\textit{Alg}_A$. The $B$-module structure comes from the $B[N]$-module
+structure and the ring map $B \to B[N]$. The map in (4) is the
+composition
+$$
+N \otimes_B F(B) \longrightarrow
+B[N] \otimes_{B[N]} F(B[N]) \longrightarrow F(B[N])
+$$
+whose image is contained in $TF(B, N)$. (The first arrow uses the inclusions
+$N \to B[N]$ and $F(B) \to F(B[N])$ and the second arrow is the multiplication
+map.) If $N = 0$, then $B = B[N]$
+hence $TF(B, 0) = 0$. If $N \to N'$ is zero then it factors as
+$N \to 0 \to N'$ hence the induced map is zero since $TF(B, 0) = 0$.
+\end{proof}
+
+\noindent
+Let $A$ be a ring. Let $M$ be an $A$-module. Then the module-valued functor
+$\underline{M}$ has tangent space $T\underline{M}$ given by the rule
+$T\underline{M}(B, N) = N \otimes_A M$. In particular, for $B$ given, the
+functor $N \mapsto T\underline{M}(B, N)$ is additive and right exact. It turns
+out this also holds for injective module-valued functors.
+
+\begin{lemma}
+\label{lemma-tangent-injective}
+Let $A$ be a ring. Let $I$ be an injective object of the category
+of module-valued functors. Then for any $B \in \Ob(\textit{Alg}_A)$
+and short exact sequence
+$0 \to N_1 \to N \to N_2 \to 0$
+of $B$-modules the sequence
+$$
+TI(B, N_1) \to TI(B, N) \to TI(B, N_2) \to 0
+$$
+is exact.
+\end{lemma}
+
+\begin{proof}
+We will use the results of
+Lemma \ref{lemma-tangent-functor}
+without further mention.
+Denote $h : \textit{Alg}_A \to \textit{Sets}$ the functor given by
+$h(C) = \Mor_A(B[N], C)$. Similarly for $h_1$ and $h_2$.
+The map $B[N] \to B[N_2]$ corresponding to the surjection $N \to N_2$
+is surjective. It corresponds to a map $h_2 \to h$ such that
+$h_2(C) \to h(C)$ is injective for all $A$-algebras $C$. On the other
+hand, there are two maps $p, q : h \to h_1$, corresponding to the
+zero map $N_1 \to N$ and the injection $N_1 \to N$. Note that
+$$
+\xymatrix{
+h_2 \ar[r] & h \ar@<1ex>[r] \ar@<-1ex>[r] & h_1
+}
+$$
+is an equalizer diagram. Denote $\mathcal{O}_h$ the module-valued functor
+$C \mapsto \bigoplus_{h(C)} C$. Similarly for $\mathcal{O}_{h_1}$ and
+$\mathcal{O}_{h_2}$. Note that
+$$
+\Hom_\mathcal{P}(\mathcal{O}_h, F) = F(B[N])
+$$
+where $\mathcal{P}$ is the category of module-valued functors on
+$\textit{Alg}_A$. We claim there is an equalizer diagram
+$$
+\xymatrix{
+\mathcal{O}_{h_2} \ar[r] &
+\mathcal{O}_h \ar@<1ex>[r] \ar@<-1ex>[r] &
+\mathcal{O}_{h_1}
+}
+$$
+in $\mathcal{P}$. Namely, suppose that $C \in \Ob(\textit{Alg}_A)$
+and $\xi = \sum_{i = 1, \ldots, n} c_i \cdot f_i$ where $c_i \in C$ and
+$f_i : B[N] \to C$ is an element of
+$\mathcal{O}_h(C)$. If $p(\xi) = q(\xi)$, then
+we see that
+$$
+\sum c_i \cdot f_i \circ z = \sum c_i \cdot f_i \circ y
+$$
+where $z, y : B[N_1] \to B[N]$ are the maps $z : (b, m_1) \mapsto (b, 0)$
+and $y : (b, m_1) \mapsto (b, m_1)$. This means that for every $i$
+there exists a $j$ such that $f_j \circ z = f_i \circ y$. Clearly, this
+implies that $f_i(N_1) = 0$, i.e., $f_i$ factors through a unique map
+$\overline{f}_i : B[N_2] \to C$. Hence $\xi$ is the image of
+$\overline{\xi} = \sum c_i \cdot \overline{f}_i$.
+Since $I$ is injective, it transforms this equalizer diagram
+into a coequalizer diagram
+$$
+\xymatrix{
+I(B[N_1]) \ar@<1ex>[r] \ar@<-1ex>[r] &
+I(B[N]) \ar[r] &
+I(B[N_2])
+}
+$$
+This diagram is compatible with the direct sum decompositions
+$I(B[N]) = I(B) \oplus TI(B, N)$ and $I(B[N_i]) = I(B) \oplus TI(B, N_i)$.
The zero map $N_1 \to N$ induces the zero map $TI(B, N_1) \to TI(B, N)$.
+Thus we see that the coequalizer property
+above means we have an exact sequence
+$TI(B, N_1) \to TI(B, N) \to TI(B, N_2) \to 0$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exactness-implies}
+Let $A$ be a ring. Let $F$ be a module-valued functor
+such that for any $B \in \Ob(\textit{Alg}_A)$ the
+functor $TF(B, -)$ on $B$-modules transforms a short exact sequence
+of $B$-modules into a right exact sequence. Then
+\begin{enumerate}
+\item $TF(B, N_1 \oplus N_2) = TF(B, N_1) \oplus TF(B, N_2)$,
+\item there is a second functorial $B$-module structure on $TF(B, N)$
+defined by setting $x \cdot b = TF(B, b\cdot 1_N)(x)$ for $x \in TF(B, N)$
+and $b \in B$,
+\item
+\label{item-mult-map-linear}
+the canonical map $N \otimes_B F(B) \to TF(B, N)$ of
+Lemma \ref{lemma-tangent-functor}
+is $B$-linear also with respect to the second $B$-module structure,
+\item
+\label{item-tangent-right-exact}
+given a finitely presented $B$-module $N$ there is a canonical
+isomorphism $TF(B, B) \otimes_B N \to TF(B, N)$ where the tensor
+product uses the second $B$-module structure on $TF(B, B)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use the results of
+Lemma \ref{lemma-tangent-functor}
+without further mention.
+The maps $N_1 \to N_1 \oplus N_2$ and $N_2 \to N_1 \oplus N_2$ give
+a map $TF(B, N_1) \oplus TF(B, N_2) \to TF(B, N_1 \oplus N_2)$
+which is injective since the maps $N_1 \oplus N_2 \to N_1$ and
+$N_1 \oplus N_2 \to N_2$ induce an inverse.
+Since $TF$ is right exact we see that
+$TF(B, N_1) \to TF(B, N_1 \oplus N_2) \to TF(B, N_2) \to 0$ is exact.
+Hence $TF(B, N_1) \oplus TF(B, N_2) \to TF(B, N_1 \oplus N_2)$ is an
+isomorphism. This proves (1).
+
+\medskip\noindent
+To see (2) the only thing we need to show is that
+$x \cdot (b_1 + b_2) = x \cdot b_1 + x \cdot b_2$.
+(Associativity and additivity are clear.) To see this consider
+$$
+N \xrightarrow{(b_1, b_2)} N \oplus N \xrightarrow{+} N
+$$
+and apply $TF(B, -)$.
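In more detail: the composition displayed above equals
$(b_1 + b_2) \cdot 1_N$, so applying $TF(B, -)$ and using the
decomposition $TF(B, N \oplus N) = TF(B, N) \oplus TF(B, N)$
of part (1) we obtain
$$
x \longmapsto (x \cdot b_1, x \cdot b_2) \longmapsto
x \cdot b_1 + x \cdot b_2
$$
which by functoriality equals
$TF(B, (b_1 + b_2) \cdot 1_N)(x) = x \cdot (b_1 + b_2)$.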
+
+\medskip\noindent
+Part (3) follows immediately from the fact that
+$N \otimes_B F(B) \to TF(B, N)$ is functorial in the pair $(B, N)$.
+
+\medskip\noindent
+Suppose $N$ is a finitely presented $B$-module. Choose a presentation
+$B^{\oplus m} \to B^{\oplus n} \to N \to 0$. This gives an exact
+sequence
+$$
+TF(B, B^{\oplus m}) \to TF(B, B^{\oplus n}) \to TF(B, N) \to 0
+$$
+by right exactness of $TF(B, -)$. By part (1) we can write
+$TF(B, B^{\oplus m}) = TF(B, B)^{\oplus m}$ and
+$TF(B, B^{\oplus n}) = TF(B, B)^{\oplus n}$. Next, suppose that
+$B^{\oplus m} \to B^{\oplus n}$ is given by the matrix $T = (b_{ij})$.
+Then the induced map $TF(B, B)^{\oplus m} \to TF(B, B)^{\oplus n}$
+is given by the matrix with entries $TF(B, b_{ij} \cdot 1_B)$.
+This combined with right exactness of $\otimes$ proves (4).
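Spelling out the last step: under the identifications of part (1), and
with the second $B$-module structure on $TF(B, B)$, this gives a
commutative comparison of right exact sequences
$$
\xymatrix{
TF(B, B)^{\oplus m} \ar[r] \ar@{=}[d] &
TF(B, B)^{\oplus n} \ar[r] \ar@{=}[d] &
TF(B, N) \ar[r] & 0 \\
TF(B, B) \otimes_B B^{\oplus m} \ar[r] &
TF(B, B) \otimes_B B^{\oplus n} \ar[r] &
TF(B, B) \otimes_B N \ar[r] & 0
}
$$
whence the desired isomorphism
$TF(B, B) \otimes_B N \to TF(B, N)$ of cokernels.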
+\end{proof}
+
+\begin{example}
+\label{example-module-structure-different}
+Let $F$ be a module-valued functor as in
+Lemma \ref{lemma-exactness-implies}.
+It is not always the case that the two module structures on
+$TF(B, N)$ agree. Here is an example. Suppose $A = \mathbf{F}_p$
+where $p$ is a prime. Set $F(B) = B$ but with $B$-module structure
+given by $b \cdot x = b^px$. Then $TF(B, N) = N$ with $B$-module structure
+given by $b \cdot x = b^px$ for $x \in N$. However, the second $B$-module
+structure is given by $x \cdot b = bx$. Note that in this case the canonical
+map $N \otimes_B F(B) \to TF(B, N)$ is zero as raising an element
$n \in N \subset B[N]$ to the $p$th power is zero.
+\end{example}
+
+\noindent
+In the following lemma we will frequently use the observation that
+if $0 \to F \to G \to H \to 0$ is an exact sequence of module-valued
+functors on $\textit{Alg}_A$, then for any pair $(B, N)$ the
+sequence $0 \to TF(B, N) \to TG(B, N) \to TH(B, N) \to 0$ is exact.
+This follows from the fact that $0 \to F(B[N]) \to G(B[N]) \to H(B[N]) \to 0$
+is exact.
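Indeed, using the decomposition $F(B[N]) = F(B) \oplus TF(B, N)$
(and similarly for $G$ and $H$) coming from the splitting
$B \to B[N] \to B$, the exact sequence evaluated at $B[N]$
decomposes as a direct sum
$$
0 \to F(B) \oplus TF(B, N) \to G(B) \oplus TG(B, N) \to
H(B) \oplus TH(B, N) \to 0
$$
and a direct summand of a short exact sequence is short exact.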
+
+\begin{lemma}
+\label{lemma-exactness-permanence}
+Let $A$ be a ring. For $F$ a module-valued functor on $\textit{Alg}_A$
+say $(*)$ holds if for all $B \in \Ob(\textit{Alg}_A)$ the
+functor $TF(B, -)$ on $B$-modules transforms a short exact sequence
+of $B$-modules into a right exact sequence. Let
+$0 \to F \to G \to H \to 0$ be a short exact sequence of
+module-valued functors on $\textit{Alg}_A$.
+\begin{enumerate}
+\item If $(*)$ holds for $F, G$ then $(*)$ holds for $H$.
+\item If $(*)$ holds for $F, H$ then $(*)$ holds for $G$.
\item If $H' \to H$ is a morphism of module-valued functors on $\textit{Alg}_A$
+and $(*)$ holds for $F$, $G$, $H$, and $H'$, then $(*)$ holds for
+$G \times_H H'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $B$ be given. Let $0 \to N_1 \to N_2 \to N_3 \to 0$ be a short exact
+sequence of $B$-modules. Part (1) follows from a diagram chase in
+the diagram
+$$
+\xymatrix{
+0 \ar[r] &
+TF(B, N_1) \ar[r] \ar[d] &
+TG(B, N_1) \ar[r] \ar[d] &
+TH(B, N_1) \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+TF(B, N_2) \ar[r] \ar[d] &
+TG(B, N_2) \ar[r] \ar[d] &
+TH(B, N_2) \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+TF(B, N_3) \ar[r] \ar[d] &
+TG(B, N_3) \ar[r] \ar[d] &
+TH(B, N_3) \ar[r] & 0 \\
+& 0 & 0
+}
+$$
+with exact horizontal rows and exact columns involving $TF$ and $TG$.
+To prove part (2) we do a diagram chase in the diagram
+$$
+\xymatrix{
+0 \ar[r] &
+TF(B, N_1) \ar[r] \ar[d] &
+TG(B, N_1) \ar[r] \ar[d] &
+TH(B, N_1) \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+TF(B, N_2) \ar[r] \ar[d] &
+TG(B, N_2) \ar[r] \ar[d] &
+TH(B, N_2) \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+TF(B, N_3) \ar[r] \ar[d] &
+TG(B, N_3) \ar[r] &
+TH(B, N_3) \ar[r] \ar[d] & 0 \\
+& 0 & & 0
+}
+$$
+with exact horizontal rows and exact columns involving $TF$ and $TH$.
+Part (3) follows from part (2) as $G \times_H H'$ sits in the exact
+sequence $0 \to F \to G \times_H H' \to H' \to 0$.
+\end{proof}
+
+\noindent
+Most of the work in this section was done in order to prove the
+following key vanishing result.
+
+\begin{lemma}
+\label{lemma-ext-group-zero-key}
+Let $A$ be a ring. Let $M$, $P$ be $A$-modules with $P$ of finite
+presentation. Then
+$\Ext^i_\mathcal{P}(\underline{P}, \underline{M}) = 0$
+for $i > 0$ where $\mathcal{P}$ is the category of module-valued
+functors on $\textit{Alg}_A$.
+\end{lemma}
+
+\begin{proof}
+Choose an injective resolution $\underline{M} \to I^\bullet$ in
+$\mathcal{P}$, see
+Lemma \ref{lemma-enough-injectives}.
+By
+Derived Categories, Lemma \ref{derived-lemma-compute-ext-resolutions}
+any element of $\Ext^i_\mathcal{P}(\underline{P}, \underline{M})$
+comes from a morphism $\varphi : \underline{P} \to I^i$ with
+$d^i \circ \varphi = 0$. We will prove that the
+Yoneda extension
+$$
+E : 0 \to \underline{M} \to I^0 \to \ldots \to
+I^{i - 1} \times_{\Ker(d^i)} \underline{P} \to \underline{P} \to 0
+$$
+of $\underline{P}$ by $\underline{M}$
+associated to $\varphi$ is trivial, which will prove the lemma by
+Derived Categories, Lemma \ref{derived-lemma-yoneda-extension}.
+
+\medskip\noindent
+For $F$ a module-valued functor on $\textit{Alg}_A$
+say $(*)$ holds if for all $B \in \Ob(\textit{Alg}_A)$ the
+functor $TF(B, -)$ on $B$-modules transforms a short exact sequence
+of $B$-modules into a right exact sequence.
+Recall that the module-valued functors $\underline{M}, I^n, \underline{P}$
+each have property $(*)$, see
+Lemma \ref{lemma-tangent-injective}
+and the remarks preceding it.
+By splitting $0 \to \underline{M} \to I^\bullet$ into short
+exact sequences we find that each of the functors
+$\Im(d^{n - 1}) = \Ker(d^n) \subset I^n$ has property $(*)$ by
+Lemma \ref{lemma-exactness-permanence}
+and also that $I^{i - 1} \times_{\Ker(d^i)} \underline{P}$ has property
+$(*)$.
+
+\medskip\noindent
+Thus we may assume the Yoneda extension is given as
+$$
+E : 0 \to \underline{M} \to F_{i - 1} \to \ldots \to
+F_0 \to \underline{P} \to 0
+$$
+where each of the module-valued functors $F_j$ has property $(*)$.
+Set $G_j(B) = TF_j(B, B)$ viewed as a $B$-module via the {\it second}
+$B$-module structure defined in
+Lemma \ref{lemma-exactness-implies}.
+Since $TF_j$ is a functor on pairs we see that $G_j$ is a module-valued
+functor on $\textit{Alg}_A$. Moreover, since $E$ is an exact sequence
+the sequence $G_{j + 1} \to G_j \to G_{j - 1}$ is exact (see remark
+preceding
+Lemma \ref{lemma-exactness-permanence}).
+Observe that $T\underline{M}(B, B) = M \otimes_A B = \underline{M}(B)$
+and that the two $B$-module structures agree on this.
+Thus we obtain a Yoneda extension
+$$
+E' : 0 \to \underline{M} \to G_{i - 1} \to \ldots \to
+G_0 \to \underline{P} \to 0
+$$
+Moreover, the canonical maps
+$$
+F_j(B) = B \otimes_B F_j(B) \longrightarrow TF_j(B, B) = G_j(B)
+$$
+of
+Lemma \ref{lemma-tangent-functor} (\ref{item-mult-map})
+are $B$-linear by
+Lemma \ref{lemma-exactness-implies} (\ref{item-mult-map-linear})
and functorial in $B$. Hence we obtain a map
+$$
+\xymatrix{
+0 \ar[r] &
+\underline{M} \ar[r] \ar[d]^1 &
+F_{i - 1} \ar[r] \ar[d] &
+\ldots \ar[r] &
+F_0 \ar[r] \ar[d] &
+\underline{P} \ar[r] \ar[d]^1 & 0 \\
+0 \ar[r] &
+\underline{M} \ar[r] &
+G_{i - 1} \ar[r] &
+\ldots \ar[r] &
+G_0 \ar[r] &
+\underline{P} \ar[r] & 0
+}
+$$
+of Yoneda extensions. In particular we see that $E$ and $E'$ have the
+same class in $\Ext^i_\mathcal{P}(\underline{P}, \underline{M})$
by the lemma on Yoneda Exts mentioned above. Finally, let $N$ be an
$A$-module of finite presentation. Then we see that
+$$
+0 \to T\underline{M}(A, N) \to TF_{i - 1}(A, N) \to \ldots \to
+TF_0(A, N) \to T\underline{P}(A, N) \to 0
+$$
+is exact. By
+Lemma \ref{lemma-exactness-implies} (\ref{item-tangent-right-exact})
+with $B = A$ this translates into the exactness of the sequence of
+$A$-modules
+$$
+0 \to M \otimes_A N \to G_{i - 1}(A) \otimes_A N \to \ldots \to
+G_0(A) \otimes_A N \to P \otimes_A N \to 0
+$$
+Hence the sequence of $A$-modules
+$0 \to M \to G_{i - 1}(A) \to \ldots \to G_0(A) \to P \to 0$
+is universally exact, in the sense that it remains exact on tensoring
+with any finitely presented $A$-module $N$. Let
+$K = \Ker(G_0(A) \to P)$ so that we have exact sequences
+$$
+0 \to K \to G_0(A) \to P \to 0
+\quad\text{and}\quad
+G_2(A) \to G_1(A) \to K \to 0
+$$
+Tensoring the second sequence with $N$ we obtain that
+$K \otimes_A N = \Coker(G_2(A) \otimes_A N \to G_1(A) \otimes_A N)$.
+Exactness of $G_2(A) \otimes_A N \to G_1(A) \otimes_A N \to G_0(A) \otimes_A N$
+then implies that $K \otimes_A N \to G_0(A) \otimes_A N$ is injective.
+By
+Algebra, Theorem \ref{algebra-theorem-universally-exact-criteria}
this means that the $A$-module extension $0 \to K \to G_0(A) \to P \to 0$
is universally exact, and because $P$ is assumed of finite presentation this means
+the sequence is split, see
+Algebra, Lemma \ref{algebra-lemma-universally-exact-split}.
+Any splitting $P \to G_0(A)$ defines a map $\underline{P} \to G_0$
+which splits the surjection $G_0 \to \underline{P}$. Thus the
+Yoneda extension $E'$ is equivalent to the trivial Yoneda extension
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ext-group-zero}
+Let $A$ be a ring. Let $M$ be an $A$-module. Let $L$ be a linearly
+adequate functor on $\textit{Alg}_A$. Then
+$\Ext^i_\mathcal{P}(L, \underline{M}) = 0$
+for $i > 0$ where $\mathcal{P}$ is the category of module-valued
+functors on $\textit{Alg}_A$.
+\end{lemma}
+
+\begin{proof}
+Since $L$ is linearly adequate there exists an exact sequence
+$$
+0 \to L \to \underline{A^{\oplus m}} \to \underline{A^{\oplus n}} \to
+\underline{P} \to 0
+$$
+Here $P = \Coker(A^{\oplus m} \to A^{\oplus n})$ is the cokernel
+of the map of finite free $A$-modules which is given by the definition
+of linearly adequate functors. By
+Lemma \ref{lemma-ext-group-zero-key}
+we have the vanishing of
+$\Ext^i_\mathcal{P}(\underline{P}, \underline{M})$
+and
+$\Ext^i_\mathcal{P}(\underline{A}, \underline{M})$
+for $i > 0$.
+Let $K = \Ker(\underline{A^{\oplus n}} \to \underline{P})$.
+By the long exact sequence of Ext groups associated to the exact sequence
+$0 \to K \to \underline{A^{\oplus n}} \to \underline{P} \to 0$
+we conclude that
+$\Ext^i_\mathcal{P}(K, \underline{M}) = 0$ for $i > 0$.
+Repeating with the sequence
+$0 \to L \to \underline{A^{\oplus m}} \to K \to 0$
+we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RQ-zero}
+With notation as in
+Lemma \ref{lemma-enough-injectives}
+we have $R^pQ(F) = 0$ for all $p > 0$ and any adequate functor $F$.
+\end{lemma}
+
+\begin{proof}
+Choose an exact sequence $0 \to F \to \underline{M^0} \to \underline{M^1}$.
+Set $M^2 = \Coker(M^0 \to M^1)$ so that
+$0 \to F \to \underline{M^0} \to \underline{M^1}
+\to \underline{M^2} \to 0$ is a resolution. By
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}
+we obtain a spectral sequence
+$$
+R^pQ(\underline{M^q}) \Rightarrow R^{p + q}Q(F)
+$$
+Since $Q(\underline{M^q}) = \underline{M^q}$ it suffices to prove
+$R^pQ(\underline{M}) = 0$, $p > 0$ for any $A$-module $M$.
+
+\medskip\noindent
+Choose an injective resolution $\underline{M} \to I^\bullet$ in
+the category $\mathcal{P}$. Suppose that $R^iQ(\underline{M})$ is nonzero.
+Then $\Ker(Q(I^i) \to Q(I^{i + 1}))$ is strictly bigger
+than the image of $Q(I^{i - 1}) \to Q(I^i)$. Hence by
+Lemma \ref{lemma-adequate-surjection-from-linear}
+there exists a linearly adequate functor $L$ and a map
+$\varphi : L \to Q(I^i)$ mapping into the kernel of $Q(I^i) \to Q(I^{i + 1})$
+which does not factor through the image of $Q(I^{i - 1}) \to Q(I^i)$.
+Because $Q$ is a left adjoint to the inclusion functor the map
+$\varphi$ corresponds to a map $\varphi' : L \to I^i$ with the same properties.
+Thus $\varphi'$ gives a nonzero element of
+$\Ext^i_\mathcal{P}(L, \underline{M})$ contradicting
+Lemma \ref{lemma-ext-group-zero}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Adequate modules}
+\label{section-adequate}
+
+\noindent
+In
+Descent, Section \ref{descent-section-quasi-coherent-sheaves}
+we have seen that quasi-coherent modules on a scheme $S$
+are the same as quasi-coherent modules on any of the big
+sites $(\Sch/S)_\tau$ associated to $S$. We have seen that there
+are two issues with this identification:
+\begin{enumerate}
+\item $\QCoh(\mathcal{O}_S) \to
+\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$,
+$\mathcal{F} \mapsto \mathcal{F}^a$ is not exact in general
+(Descent, Lemma \ref{descent-lemma-equivalence-quasi-coherent-limits}), and
+\item given a quasi-compact and quasi-separated morphism $f : X \to S$
+the functor $f_*$ does not preserve quasi-coherent sheaves on the
+big sites in general (Descent, Proposition
+\ref{descent-proposition-equivalence-quasi-coherent-functorial}).
+\end{enumerate}
+Part (1) means that we cannot define a triangulated subcategory
+of $D(\mathcal{O})$ consisting of complexes whose cohomology sheaves
are quasi-coherent. Part (2) means that $Rf_*\mathcal{F}$ need not be a
complex with quasi-coherent cohomology sheaves even when $\mathcal{F}$
+is quasi-coherent and $f$ is quasi-compact and quasi-separated.
+Moreover, the examples given in the proofs of
+Descent, Lemma
+\ref{descent-lemma-equivalence-quasi-coherent-limits}
+and
+Descent, Proposition
+\ref{descent-proposition-equivalence-quasi-coherent-functorial}
+are not of a pathological nature.
+
+\medskip\noindent
+In this section we discuss a slightly larger category
of $\mathcal{O}$-modules on $(\Sch/S)_\tau$ which contains the
+quasi-coherent modules, is abelian, and is preserved under $f_*$ when
+$f$ is quasi-compact and quasi-separated.
+To do this, suppose that $S$ is a scheme. Let $\mathcal{F}$ be a presheaf
+of $\mathcal{O}$-modules on $(\Sch/S)_\tau$.
+For any affine object $U = \Spec(A)$ of $(\Sch/S)_\tau$
+we can restrict $\mathcal{F}$ to $(\textit{Aff}/U)_\tau$ to get
+a presheaf of $\mathcal{O}$-modules on this site. The corresponding
+module-valued functor, see
+Section \ref{section-quasi-coherent},
+will be denoted
+$$
+F = F_{\mathcal{F}, A} :
+\textit{Alg}_A \longrightarrow \textit{Ab},
+\quad
+B \longmapsto \mathcal{F}(\Spec(B))
+$$
+The assignment $\mathcal{F} \mapsto F_{\mathcal{F}, A}$ is an exact
+functor of abelian categories.
+
+\begin{definition}
+\label{definition-adequate}
+A sheaf of $\mathcal{O}$-modules $\mathcal{F}$ on $(\Sch/S)_\tau$ is
+{\it adequate} if there exists a $\tau$-covering
+$\{\Spec(A_i) \to S\}_{i \in I}$ such that $F_{\mathcal{F}, A_i}$ is
+adequate for all $i \in I$.
+\end{definition}
+
+\noindent
+We will see below that the category of adequate $\mathcal{O}$-modules
+is independent of the chosen topology $\tau$.
+
+\begin{lemma}
+\label{lemma-adequate-local}
+Let $S$ be a scheme. Let $\mathcal{F}$ be an adequate $\mathcal{O}$-module on
+$(\Sch/S)_\tau$. For any affine scheme $\Spec(A)$ over $S$
+the functor $F_{\mathcal{F}, A}$ is adequate.
+\end{lemma}
+
+\begin{proof}
+Let $\{\Spec(A_i) \to S\}_{i \in I}$ be a $\tau$-covering
+such that $F_{\mathcal{F}, A_i}$ is adequate for all $i \in I$.
+We can find a standard affine $\tau$-covering
+$\{\Spec(A'_j) \to \Spec(A)\}_{j = 1, \ldots, m}$
+such that $\Spec(A'_j) \to \Spec(A) \to S$ factors
+through $\Spec(A_{i(j)})$ for some $i(j) \in I$. Then we see that
+$F_{\mathcal{F}, A'_j}$ is the restriction of
+$F_{\mathcal{F}, A_{i(j)}}$ to the category of $A'_j$-algebras.
+Hence $F_{\mathcal{F}, A'_j}$ is adequate by
+Lemma \ref{lemma-base-change-adequate}.
+By
+Lemma \ref{lemma-adequate-product}
the functors
$F_{\mathcal{F}, A'_j}$, $j = 1, \ldots, m$, correspond to an adequate
``product'' functor
$F'$ over $A' = A'_1 \times \ldots \times A'_m$. As $\mathcal{F}$ is a
sheaf (for the Zariski topology) this product functor $F'$ is equal
to $F_{\mathcal{F}, A'}$, i.e., is the restriction of
$F_{\mathcal{F}, A}$ to $A'$-algebras.
+Finally, $\{\Spec(A') \to \Spec(A)\}$ is a $\tau$-covering.
+It follows from
+Lemma \ref{lemma-adequate-descent}
+that $F_{\mathcal{F}, A}$ is adequate.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-adequate-affine}
+Let $S = \Spec(A)$ be an affine scheme. The category of adequate
+$\mathcal{O}$-modules on $(\Sch/S)_\tau$ is equivalent to the
+category of adequate module-valued functors on $\textit{Alg}_A$.
+\end{lemma}
+
+\begin{proof}
+Given an adequate module $\mathcal{F}$ the functor $F_{\mathcal{F}, A}$
+is adequate by Lemma \ref{lemma-adequate-local}.
+Given an adequate functor $F$ we choose an exact sequence
+$0 \to F \to \underline{M} \to \underline{N}$ and we consider
+the $\mathcal{O}$-module $\mathcal{F} = \Ker(M^a \to N^a)$ where
+$M^a$ denotes the quasi-coherent $\mathcal{O}$-module on
+$(\Sch/S)_\tau$ associated to the quasi-coherent sheaf
+$\widetilde{M}$ on $S$. Note that $F = F_{\mathcal{F}, A}$, in particular
+the module $\mathcal{F}$ is adequate by definition.
+We omit the proof that the constructions define mutually inverse
+equivalences of categories.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-adequate}
+Let $f : T \to S$ be a morphism of schemes.
+The pullback $f^*\mathcal{F}$ of an adequate $\mathcal{O}$-module
+$\mathcal{F}$ on $(\Sch/S)_\tau$ is an adequate
+$\mathcal{O}$-module on $(\Sch/T)_\tau$.
+\end{lemma}
+
+\begin{proof}
+The pullback map
+$f^* : \textit{Mod}((\Sch/S)_\tau, \mathcal{O}) \to
+\textit{Mod}((\Sch/T)_\tau, \mathcal{O})$
+is given by restriction, i.e., $f^*\mathcal{F}(V) = \mathcal{F}(V)$
+for any scheme $V$ over $T$. Hence this lemma follows immediately from
+Lemma \ref{lemma-adequate-local}
+and the definition.
+\end{proof}
+
+\noindent
+Here is a characterization of the category of adequate $\mathcal{O}$-modules.
+To understand the significance, consider a map $\mathcal{G} \to \mathcal{H}$
+of quasi-coherent $\mathcal{O}_S$-modules on a scheme $S$.
+The cokernel of the associated map $\mathcal{G}^a \to \mathcal{H}^a$
+of $\mathcal{O}$-modules is quasi-coherent because it is equal to
+$(\mathcal{H}/\mathcal{G})^a$. But the kernel of
+$\mathcal{G}^a \to \mathcal{H}^a$ in general isn't
+quasi-coherent. However, it is adequate.
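For instance (a simple illustration), let $k$ be a field, let
$S = \Spec(k[x])$, and consider multiplication by $x$ on
$\mathcal{G} = \mathcal{H} = \mathcal{O}_S$. The kernel of
$x : \mathcal{O}_S^a \to \mathcal{O}_S^a$ assigns to an affine scheme
$\Spec(B)$ over $S$ the $x$-torsion module
$$
\Ker(x : B \to B) = \{b \in B \mid xb = 0\}
$$
This vanishes on all affine opens of $S$ because $k[x]$ is a domain,
yet it is nonzero for $B = k[x]/(x)$. Hence this kernel is not of the
form $\mathcal{F}^a$ for any quasi-coherent $\mathcal{O}_S$-module
$\mathcal{F}$, although it is adequate.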
+
+\begin{lemma}
+\label{lemma-adequate-characterize}
+Let $S$ be a scheme. Let $\mathcal{F}$ be an $\mathcal{O}$-module on
+$(\Sch/S)_\tau$. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is adequate,
+\item there exists an affine open covering $S = \bigcup S_i$ and
+maps of quasi-coherent $\mathcal{O}_{S_i}$-modules
+$\mathcal{G}_i \to \mathcal{H}_i$
+such that $\mathcal{F}|_{(\Sch/S_i)_\tau}$ is the
kernel of $\mathcal{G}_i^a \to \mathcal{H}_i^a$,
+\item there exists a $\tau$-covering $\{S_i \to S\}_{i \in I}$ and
+maps of $\mathcal{O}_{S_i}$-quasi-coherent modules
+$\mathcal{G}_i \to \mathcal{H}_i$
+such that $\mathcal{F}|_{(\Sch/S_i)_\tau}$ is the
+kernel of $\mathcal{G}_i^a \to \mathcal{H}_i^a$,
+\item there exists a $\tau$-covering $\{f_i : S_i \to S\}_{i \in I}$
+such that each $f_i^*\mathcal{F}$ is adequate,
+\item for any affine scheme $U$ over $S$ the restriction
+$\mathcal{F}|_{(\Sch/U)_\tau}$ is the kernel
+of a map $\mathcal{G}^a \to \mathcal{H}^a$ of quasi-coherent
+$\mathcal{O}_U$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $U = \Spec(A)$ be an affine scheme over $S$.
+Set $F = F_{\mathcal{F}, A}$. By definition, the functor
+$F$ is adequate if and only if there exists a map of $A$-modules
+$M \to N$ such that $F = \Ker(\underline{M} \to \underline{N})$.
+Combining with
+Lemmas \ref{lemma-adequate-local} and
+\ref{lemma-adequate-affine}
+we see that (1) and (5) are equivalent.
+
+\medskip\noindent
+It is clear that (5) implies (2) and (2) implies (3).
+If (3) holds then we can refine the covering
+$\{S_i \to S\}$ such that each $S_i = \Spec(A_i)$ is affine.
+Then we see, by the preliminary remarks of the proof, that
+$F_{\mathcal{F}, A_i}$ is adequate. Thus $\mathcal{F}$
+is adequate by definition. Hence (3) implies (1).
+
+\medskip\noindent
+Finally, (4) is equivalent to (1) using
+Lemma \ref{lemma-pullback-adequate}
+for one direction and that
+a composition of $\tau$-coverings is a $\tau$-covering for the other.
+\end{proof}
+
+\noindent
Just as for quasi-coherent sheaves, the category of
adequate modules is independent of the topology.
+
+\begin{lemma}
+\label{lemma-adequate-fpqc}
+Let $\mathcal{F}$ be an adequate $\mathcal{O}$-module on
+$(\Sch/S)_\tau$. For any surjective flat morphism
+$\Spec(B) \to \Spec(A)$ of affines over $S$
+the extended {\v C}ech complex
+$$
+0 \to \mathcal{F}(\Spec(A)) \to
+\mathcal{F}(\Spec(B)) \to
+\mathcal{F}(\Spec(B \otimes_A B)) \to \ldots
+$$
+is exact. In particular $\mathcal{F}$ satisfies the sheaf condition
+for fpqc coverings, and is a sheaf of $\mathcal{O}$-modules
+on $(\Sch/S)_{fppf}$.
+\end{lemma}
+
+\begin{proof}
+With $A \to B$ as in the lemma let $F = F_{\mathcal{F}, A}$. This functor
+is adequate by
+Lemma \ref{lemma-adequate-local}.
+By
+Lemma \ref{lemma-adequate-flat}
+since $A \to B$, $A \to B \otimes_A B$, etc are flat we see that
+$F(B) = F(A) \otimes_A B$,
+$F(B \otimes_A B) = F(A) \otimes_A B \otimes_A B$, etc.
+Exactness follows from
+Descent, Lemma \ref{descent-lemma-ff-exact}.
+
+\medskip\noindent
+Thus $\mathcal{F}$ satisfies the sheaf condition for
+$\tau$-coverings (in particular Zariski coverings) and any faithfully
+flat covering of an affine by an affine. Arguing as in the proofs of
+Descent, Lemma \ref{descent-lemma-standard-fpqc-covering}
+and
+Descent, Proposition \ref{descent-proposition-fpqc-descent-quasi-coherent}
+we conclude that $\mathcal{F}$ satisfies the sheaf condition for all
+fpqc coverings (made out of objects of $(\Sch/S)_\tau$).
+Details omitted.
+\end{proof}
+
+\noindent
+Lemma \ref{lemma-adequate-fpqc} shows in particular that
+for any pair of topologies $\tau, \tau'$ the collection
+of adequate modules for the $\tau$-topology and the $\tau'$-topology
+are identical (as presheaves of modules on the underlying category $\Sch/S$).
+
+\begin{definition}
+\label{definition-category-adequate-modules}
+Let $S$ be a scheme. The category of adequate $\mathcal{O}$-modules on
+$(\Sch/S)_\tau$ is denoted {\it $\textit{Adeq}(\mathcal{O})$} or
+{\it $\textit{Adeq}((\Sch/S)_\tau, \mathcal{O})$}. If we want to think just
+about the abelian category of adequate modules without choosing a
+topology we simply write {\it $\textit{Adeq}(S)$}.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-same-cohomology-adequate}
+Let $S$ be a scheme. Let $\mathcal{F}$ be an adequate
+$\mathcal{O}$-module on $(\Sch/S)_\tau$.
+\begin{enumerate}
+\item The restriction $\mathcal{F}|_{S_{Zar}}$ is a quasi-coherent
+$\mathcal{O}_S$-module on the scheme $S$.
+\item The restriction $\mathcal{F}|_{S_\etale}$ is the
+quasi-coherent module associated to $\mathcal{F}|_{S_{Zar}}$.
+\item For any affine scheme $U$ over $S$ we have $H^q(U, \mathcal{F}) = 0$
+for all $q > 0$.
+\item There is a canonical isomorphism
+$$
+H^q(S, \mathcal{F}|_{S_{Zar}}) =
+H^q((\Sch/S)_\tau, \mathcal{F}).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-adequate-flat}
+and
+Lemma \ref{lemma-adequate-local}
+we see that for any flat morphism of affines $U \to V$ over $S$
+we have
+$\mathcal{F}(U) = \mathcal{F}(V) \otimes_{\mathcal{O}(V)} \mathcal{O}(U)$.
+This works in particular if $U \subset V \subset S$ are affine opens of
+$S$, hence $\mathcal{F}|_{S_{Zar}}$ is quasi-coherent.
+Thus (1) holds.
+
+\medskip\noindent
+Let $S' \to S$ be an \'etale morphism of schemes.
+Then for $U \subset S'$ affine open mapping into an affine open
+$V \subset S$ we see that
+$\mathcal{F}(U) = \mathcal{F}(V) \otimes_{\mathcal{O}(V)} \mathcal{O}(U)$
+because $U \to V$ is \'etale, hence flat. Therefore
+$\mathcal{F}|_{S'_{Zar}}$ is the pullback of $\mathcal{F}|_{S_{Zar}}$.
+This proves (2).
+
+\medskip\noindent
+We are going to apply
+Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-cech-vanish-collection}
+to the site $(\Sch/S)_\tau$ with
+$\mathcal{B}$ the set of affine schemes over $S$ and
+$\text{Cov}$ the set of standard affine $\tau$-coverings.
+Assumption (3) of the lemma is satisfied by
+Descent, Lemma \ref{descent-lemma-standard-covering-Cech}
+and
+Lemma \ref{lemma-adequate-fpqc}
+for the case of a covering by a single affine.
+Hence we conclude that $H^p(U, \mathcal{F}) = 0$ for every
+affine scheme $U$ over $S$. This proves (3).
+In exactly the same way as in the proof of
+Descent, Proposition \ref{descent-proposition-same-cohomology-quasi-coherent}
+this implies the equality of cohomologies (4).
+\end{proof}
+
+\begin{remark}
+\label{remark-compare}
+Let $S$ be a scheme. We have functors
+$u : \QCoh(\mathcal{O}_S) \to \textit{Adeq}(\mathcal{O})$
+and
+$v : \textit{Adeq}(\mathcal{O}) \to \QCoh(\mathcal{O}_S)$.
+Namely, the functor $u : \mathcal{F} \mapsto \mathcal{F}^a$
+comes from taking the associated $\mathcal{O}$-module which is
+adequate by
+Lemma \ref{lemma-adequate-characterize}.
+Conversely, the functor $v$ comes from restriction
+$v : \mathcal{G} \mapsto \mathcal{G}|_{S_{Zar}}$, see
+Lemma \ref{lemma-same-cohomology-adequate}.
+Since $\mathcal{F}^a$ can be described as the pullback of
+$\mathcal{F}$ under a morphism of ringed topoi
+$((\Sch/S)_\tau, \mathcal{O}) \to (S_{Zar}, \mathcal{O}_S)$, see
+Descent, Remark \ref{descent-remark-change-topologies-ringed-sites}
+and since restriction is the pushforward we see that $u$ and $v$
+are adjoint as follows
+$$
+\SheafHom_{\mathcal{O}_S}(\mathcal{F}, v\mathcal{G})
+=
+\SheafHom_\mathcal{O}(u\mathcal{F}, \mathcal{G})
+$$
+where $\mathcal{O}$ denotes the structure sheaf on the big site.
+It is immediate from the description that the adjunction mapping
+$\mathcal{F} \to vu\mathcal{F}$ is an isomorphism for all quasi-coherent
+sheaves.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-sheafification-adequate}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a presheaf of $\mathcal{O}$-modules
+on $(\Sch/S)_\tau$. If for every affine scheme
+$\Spec(A)$ over $S$ the functor $F_{\mathcal{F}, A}$ is
+adequate, then the sheafification of $\mathcal{F}$ is an adequate
+$\mathcal{O}$-module.
+\end{lemma}
+
+\begin{proof}
+Let $U = \Spec(A)$ be an affine scheme over $S$.
+Set $F = F_{\mathcal{F}, A}$.
+The sheafification $\mathcal{F}^\# = (\mathcal{F}^+)^+$, see
+Sites, Section \ref{sites-section-sheafification}.
+By construction
+$$
+(\mathcal{F})^+(U) =
+\colim_\mathcal{U} \check{H}^0(\mathcal{U}, \mathcal{F})
+$$
+where the colimit is over coverings in the site $(\Sch/S)_\tau$.
Since $U$ is affine it suffices to take the colimit over standard
+affine $\tau$-coverings
+$\mathcal{U} = \{U_i \to U\}_{i \in I} =
+\{\Spec(A_i) \to \Spec(A)\}_{i \in I}$ of $U$.
+Since each $A \to A_i$ and $A \to A_i \otimes_A A_j$ is flat we see that
+$$
+\check{H}^0(\mathcal{U}, \mathcal{F}) =
+\Ker(\prod F(A) \otimes_A A_i \to \prod F(A) \otimes_A A_i \otimes_A A_j)
+$$
+by
+Lemma \ref{lemma-adequate-flat}.
Since $A \to \prod A_i$ is faithfully flat we see that this is always
canonically isomorphic to $F(A)$ by
+Descent, Lemma \ref{descent-lemma-ff-exact}.
+Thus the presheaf $(\mathcal{F})^+$ has the same value as
+$\mathcal{F}$ on all affine schemes over $S$. Repeating the argument
+once more we deduce the same thing for $\mathcal{F}^\# = ((\mathcal{F})^+)^+$.
+Thus $F_{\mathcal{F}, A} = F_{\mathcal{F}^\#, A}$ and we conclude
+that $\mathcal{F}^\#$ is adequate.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-abelian-adequate}
+Let $S$ be a scheme.
+\begin{enumerate}
+\item The category $\textit{Adeq}(\mathcal{O})$ is abelian.
+\item The functor
+$\textit{Adeq}(\mathcal{O}) \to
+\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$
+is exact.
+\item If $0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
+is a short exact sequence of $\mathcal{O}$-modules and
+$\mathcal{F}_1$ and $\mathcal{F}_3$ are adequate, then
+$\mathcal{F}_2$ is adequate.
+\item The category $\textit{Adeq}(\mathcal{O})$ has colimits and
+$\textit{Adeq}(\mathcal{O}) \to
+\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$
+commutes with them.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be a map of adequate
+$\mathcal{O}$-modules. To prove (1) and (2) it suffices to show that
+$\mathcal{K} = \Ker(\varphi)$ and
+$\mathcal{Q} = \Coker(\varphi)$ computed in
+$\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$ are adequate.
+Let $U = \Spec(A)$ be an affine scheme over $S$.
+Let $F = F_{\mathcal{F}, A}$ and $G = F_{\mathcal{G}, A}$.
+By
+Lemmas \ref{lemma-kernel-adequate} and
+\ref{lemma-cokernel-adequate}
+the kernel $K$ and cokernel $Q$ of the induced map
+$F \to G$ are adequate functors.
+Because the kernel is computed on the level of presheaves, we see
+that $K = F_{\mathcal{K}, A}$ and we conclude $\mathcal{K}$ is adequate.
+To prove the result for the cokernel, denote $\mathcal{Q}'$ the presheaf
+cokernel of $\varphi$. Then $Q = F_{\mathcal{Q}', A}$ and
+$\mathcal{Q} = (\mathcal{Q}')^\#$. Hence $\mathcal{Q}$
+is adequate by
+Lemma \ref{lemma-sheafification-adequate}.
+
+\medskip\noindent
Let $0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
be a short exact sequence of $\mathcal{O}$-modules with
$\mathcal{F}_1$ and $\mathcal{F}_3$ adequate.
+Let $U = \Spec(A)$ be an affine scheme over $S$.
+Let $F_i = F_{\mathcal{F}_i, A}$. The sequence of functors
+$$
+0 \to F_1 \to F_2 \to F_3 \to 0
+$$
+is exact, because for $V = \Spec(B)$ affine over $U$ we have
+$H^1(V, \mathcal{F}_1) = 0$ by
+Lemma \ref{lemma-same-cohomology-adequate}.
+Since $F_1$ and $F_3$ are adequate functors by
+Lemma \ref{lemma-adequate-local}
+we see that $F_2$ is adequate by
+Lemma \ref{lemma-extension-adequate}.
+Thus $\mathcal{F}_2$ is adequate.
+
+\medskip\noindent
+Let $\mathcal{I} \to \textit{Adeq}(\mathcal{O})$, $i \mapsto \mathcal{F}_i$
+be a diagram. Denote $\mathcal{F} = \colim_i \mathcal{F}_i$
+the colimit computed in
+$\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$.
+To prove (4) it suffices to show that $\mathcal{F}$ is adequate.
+Let $\mathcal{F}' = \colim_i \mathcal{F}_i$ be the colimit computed
+in presheaves of $\mathcal{O}$-modules. Then
+$\mathcal{F} = (\mathcal{F}')^\#$.
+Let $U = \Spec(A)$ be an affine scheme over $S$.
+Let $F_i = F_{\mathcal{F}_i, A}$. By
+Lemma \ref{lemma-colimit-adequate}
+the functor $\colim_i F_i = F_{\mathcal{F}', A}$ is adequate.
+Lemma \ref{lemma-sheafification-adequate}
+shows that $\mathcal{F}$ is adequate.
+\end{proof}
+
+\noindent
+The following lemma tells us that the total direct image
+$Rf_*\mathcal{F}$ of an adequate module under a quasi-compact and
+quasi-separated morphism is a complex whose cohomology sheaves
+are adequate.
+
+\begin{lemma}
+\label{lemma-direct-image-adequate}
+Let $f : T \to S$ be a quasi-compact and quasi-separated morphism
of schemes. For any adequate $\mathcal{O}_T$-module $\mathcal{F}$ on
+$(\Sch/T)_\tau$ the pushforward
+$f_*\mathcal{F}$ and the higher direct images $R^if_*\mathcal{F}$
+are adequate $\mathcal{O}_S$-modules on $(\Sch/S)_\tau$.
+\end{lemma}
+
+\begin{proof}
+First we explain how to compute the higher direct images.
+Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$.
+Then $R^if_*\mathcal{F}$ is the $i$th cohomology sheaf of the
+complex $f_*\mathcal{I}^\bullet$.
+Hence $R^if_*\mathcal{F}$ is the sheaf associated to the presheaf
+which associates to an object $U/S$ of $(\Sch/S)_\tau$
+the module
+\begin{align*}
+\frac{\Ker(f_*\mathcal{I}^i(U) \to f_*\mathcal{I}^{i + 1}(U))}
+{\Im(f_*\mathcal{I}^{i - 1}(U) \to f_*\mathcal{I}^i(U))}
+& =
+\frac{\Ker(\mathcal{I}^i(U \times_S T) \to
+\mathcal{I}^{i + 1}(U \times_S T))}
+{\Im(\mathcal{I}^{i - 1}(U \times_S T) \to \mathcal{I}^i(U \times_S T))}
+\\
+& =
+H^i(U \times_S T, \mathcal{F}) \\
+& = H^i((\Sch/U \times_S T)_\tau,
+\mathcal{F}|_{(\Sch/U \times_S T)_\tau}) \\
+& = H^i(U \times_S T, \mathcal{F}|_{(U \times_S T)_{Zar}})
+\end{align*}
The first equality holds by
+Topologies, Lemma \ref{topologies-lemma-morphism-big-fppf}
+(and its analogues for other topologies),
+the second equality by definition of cohomology of $\mathcal{F}$
+over an object of $(\Sch/T)_\tau$,
+the third equality by
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cohomology-of-open},
+and the last equality by
+Lemma \ref{lemma-same-cohomology-adequate}.
+Thus by
+Lemma \ref{lemma-sheafification-adequate}
+it suffices to prove the claim stated in the following paragraph.
+
+\medskip\noindent
+Let $A$ be a ring. Let $T$ be a scheme quasi-compact and quasi-separated
+over $A$. Let $\mathcal{F}$ be an adequate $\mathcal{O}_T$-module on
+$(\Sch/T)_\tau$. For an $A$-algebra $B$ set
+$T_B = T \times_{\Spec(A)} \Spec(B)$ and denote
+$\mathcal{F}_B = \mathcal{F}|_{(T_B)_{Zar}}$ the restriction of
+$\mathcal{F}$ to the small Zariski site of $T_B$.
+(Recall that this is a ``usual'' quasi-coherent sheaf on the scheme
+$T_B$, see
+Lemma \ref{lemma-same-cohomology-adequate}.)
+Claim: The functor
+$$
+B \longmapsto H^q(T_B, \mathcal{F}_B)
+$$
is adequate. We will prove this claim by the usual
procedure of cutting $T$ into pieces.
+
+\medskip\noindent
+Case I: $T$ is affine. In this case the schemes $T_B$ are all affine
+and $H^q(T_B, \mathcal{F}_B) = 0$ for all $q \geq 1$.
+The functor $B \mapsto H^0(T_B, \mathcal{F}_B)$ is adequate by
+Lemma \ref{lemma-pushforward-adequate}.
+
+\medskip\noindent
+Case II: $T$ is separated. Let $n$ be the minimal number of affines needed
+to cover $T$. We argue by induction on $n$. The base case is Case I.
+Choose an affine open covering $T = V_1 \cup \ldots \cup V_n$.
+Set $V = V_1 \cup \ldots \cup V_{n - 1}$ and $U = V_n$. Observe that
+$$
+U \cap V = (V_1 \cap V_n) \cup \ldots \cup (V_{n - 1} \cap V_n)
+$$
+is also a union of $n - 1$ affine opens as $T$ is separated, see
+Schemes, Lemma \ref{schemes-lemma-characterize-separated}.
+Note that for each $B$ the base changes $U_B$, $V_B$ and
+$(U \cap V)_B = U_B \cap V_B$ behave in the same way. Hence we see that
+for each $B$ we have a long exact sequence
+$$
+0 \to
+H^0(T_B, \mathcal{F}_B) \to
+H^0(U_B, \mathcal{F}_B) \oplus H^0(V_B, \mathcal{F}_B) \to
+H^0((U \cap V)_B, \mathcal{F}_B) \to
+H^1(T_B, \mathcal{F}_B) \to \ldots
+$$
+functorial in $B$, see
+Cohomology, Lemma \ref{cohomology-lemma-mayer-vietoris}.
+By induction hypothesis the functors
+$B \mapsto H^q(U_B, \mathcal{F}_B)$,
+$B \mapsto H^q(V_B, \mathcal{F}_B)$, and
+$B \mapsto H^q((U \cap V)_B, \mathcal{F}_B)$
+are adequate. Using
+Lemmas \ref{lemma-kernel-adequate} and
+\ref{lemma-cokernel-adequate}
+we see that our functor $B \mapsto H^q(T_B, \mathcal{F}_B)$ sits in the
+middle of a short exact sequence whose outer terms are adequate.
+Thus the claim follows from
+Lemma \ref{lemma-extension-adequate}.
+
+\medskip\noindent
+Case III: General quasi-compact and quasi-separated case.
+The proof is again by induction on the number $n$ of affines needed to
+cover $T$. The base case $n = 1$ is Case I.
+Choose an affine open covering $T = V_1 \cup \ldots \cup V_n$.
+Set $V = V_1 \cup \ldots \cup V_{n - 1}$ and $U = V_n$. Note that
+since $T$ is quasi-separated $U \cap V$ is a quasi-compact open of an
+affine scheme, hence Case II applies to it. The rest of the argument
+proceeds in exactly the same manner as in the paragraph above and is
+omitted.
+\end{proof}
+
+
+
+
+
+
+
+\section{Parasitic adequate modules}
+\label{section-parasitic}
+
+\noindent
+In this section we start comparing adequate modules and quasi-coherent
+modules on a scheme $S$. Recall that there are functors
+$u : \QCoh(\mathcal{O}_S) \to \textit{Adeq}(\mathcal{O})$
+and
+$v : \textit{Adeq}(\mathcal{O}) \to \QCoh(\mathcal{O}_S)$
+satisfying the adjunction
+$$
+\SheafHom_{\QCoh(\mathcal{O}_S)}(\mathcal{F}, v\mathcal{G})
+=
+\SheafHom_{\textit{Adeq}(\mathcal{O})}(u\mathcal{F}, \mathcal{G})
+$$
+and such that $\mathcal{F} \to vu\mathcal{F}$ is an isomorphism for
+every quasi-coherent sheaf $\mathcal{F}$, see
+Remark \ref{remark-compare}.
+Hence $u$ is a fully faithful embedding and we can identify
+$\QCoh(\mathcal{O}_S)$ with a full subcategory of
+$\textit{Adeq}(\mathcal{O})$.
+The functor $v$ is exact but $u$ is not left exact in general.
+The kernel of $v$ is the subcategory of parasitic adequate modules.
+
+\medskip\noindent
+In Descent, Definition \ref{descent-definition-parasitic}
+we give the definition of a parasitic module.
+For adequate modules the notion does not depend
+on the chosen topology.
+
+\begin{lemma}
+\label{lemma-parasitic-adequate}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be an adequate $\mathcal{O}$-module on
+$(\Sch/S)_\tau$. The following are equivalent:
+\begin{enumerate}
+\item $v\mathcal{F} = 0$,
+\item $\mathcal{F}$ is parasitic,
+\item $\mathcal{F}$ is parasitic for the $\tau$-topology,
+\item $\mathcal{F}(U) = 0$ for all $U \subset S$ open, and
+\item there exists an affine open covering $S = \bigcup U_i$
+such that $\mathcal{F}(U_i) = 0$ for all $i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implications (2) $\Rightarrow$ (3) $\Rightarrow$ (4) $\Rightarrow$ (5)
+are immediate from the definitions. Assume (5). Suppose that
+$S = \bigcup U_i$ is an affine open covering such that $\mathcal{F}(U_i) = 0$
+for all $i$. Let $V \to S$ be a flat morphism. There exists an affine
+open covering $V = \bigcup V_j$ such that each $V_j$ maps into some
+$U_i$. As the morphism $V_j \to S$ is flat, also $V_j \to U_i$ is flat.
+Hence the corresponding ring map
+$A_i = \mathcal{O}(U_i) \to \mathcal{O}(V_j) = B_j$ is flat. Thus by
+Lemma \ref{lemma-adequate-local}
+and
+Lemma \ref{lemma-adequate-flat}
+we see that $\mathcal{F}(U_i) \otimes_{A_i} B_j \to \mathcal{F}(V_j)$
+is an isomorphism. Hence $\mathcal{F}(V_j) = 0$. Since $\mathcal{F}$ is
+a sheaf for the Zariski topology we conclude that $\mathcal{F}(V) = 0$.
+In this way we see that (5) implies (2).
+
+\medskip\noindent
+This proves the equivalence of (2), (3), (4), and (5).
+As (1) is equivalent to (3) (see
+Remark \ref{remark-compare})
+we conclude that all five conditions are equivalent.
+\end{proof}
+
+\noindent
+Let $S$ be a scheme.
+The subcategory of parasitic adequate modules is a Serre subcategory of
+$\textit{Adeq}(\mathcal{O})$. The quotient is the category of
+quasi-coherent modules.
+
+\begin{lemma}
+\label{lemma-adequate-by-parasitic}
+Let $S$ be a scheme. The subcategory
+$\mathcal{C} \subset \textit{Adeq}(\mathcal{O})$ of parasitic adequate
+modules is a Serre subcategory. Moreover, the functor $v$ induces
+an equivalence of categories
+$$
+\textit{Adeq}(\mathcal{O}) / \mathcal{C} = \QCoh(\mathcal{O}_S).
+$$
+\end{lemma}
+
+\begin{proof}
+The category $\mathcal{C}$ is the kernel of the exact functor
+$v : \textit{Adeq}(\mathcal{O}) \to \QCoh(\mathcal{O}_S)$, see
+Lemma \ref{lemma-parasitic-adequate}.
+Hence it is a Serre subcategory by
+Homology, Lemma \ref{homology-lemma-kernel-exact-functor}.
+By
+Homology, Lemma \ref{homology-lemma-serre-subcategory-is-kernel}
+we obtain an induced exact functor
+$\overline{v} :
+\textit{Adeq}(\mathcal{O}) / \mathcal{C}
+\to
+\QCoh(\mathcal{O}_S)$.
+Because $u$ is a right inverse to $v$ we see right away that
+$\overline{v}$ is essentially surjective.
+We see that $\overline{v}$ is faithful by
+Homology, Lemma \ref{homology-lemma-quotient-by-kernel-exact-functor}.
+Because $u$ is a right inverse to $v$ we finally conclude that
+$\overline{v}$ is fully faithful.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-image-parasitic-adequate}
+Let $f : T \to S$ be a quasi-compact and quasi-separated morphism
+of schemes. For any parasitic adequate $\mathcal{O}_T$-module on
+$(\Sch/T)_\tau$ the pushforward
+$f_*\mathcal{F}$ and the higher direct images $R^if_*\mathcal{F}$
+are parasitic adequate $\mathcal{O}_S$-modules on $(\Sch/S)_\tau$.
+\end{lemma}
+
+\begin{proof}
+We have already seen in
+Lemma \ref{lemma-direct-image-adequate}
+that these higher direct images are adequate.
Hence it suffices to show that
$(R^if_*\mathcal{F})(U) = 0$ for every open $U \subset S$, see
Lemma \ref{lemma-parasitic-adequate}. And $R^if_*\mathcal{F}$
+is parasitic by
+Descent, Lemma \ref{descent-lemma-direct-image-parasitic}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Derived categories of adequate modules, I}
+\label{section-comparison}
+
+
+\noindent
+Let $S$ be a scheme. We continue the discussion started in
+Section \ref{section-parasitic}.
+The exact functor $v$ induces a functor
+$$
+D(\textit{Adeq}(\mathcal{O}))
+\longrightarrow
+D(\QCoh(\mathcal{O}_S))
+$$
+and similarly for bounded versions.
+
+\begin{lemma}
+\label{lemma-quotient-easy}
+Let $S$ be a scheme. Let
+$\mathcal{C} \subset \textit{Adeq}(\mathcal{O})$ denote the
+full subcategory consisting of parasitic adequate modules.
+Then
+$$
+D(\textit{Adeq}(\mathcal{O}))/D_\mathcal{C}(\textit{Adeq}(\mathcal{O}))
+= D(\QCoh(\mathcal{O}_S))
+$$
+and similarly for the bounded versions.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+Derived Categories, Lemma \ref{derived-lemma-quotient-by-serre-easy}.
+\end{proof}
+
+\noindent
Next, we look for a description in the other direction by considering
the functors
+$$
+K^+(\QCoh(\mathcal{O}_S))
+\longrightarrow
+K^+(\textit{Adeq}(\mathcal{O}))
+\longrightarrow
+D^+(\textit{Adeq}(\mathcal{O}))
+\longrightarrow
+D^+(\QCoh(\mathcal{O}_S)).
+$$
+In some cases the derived category of adequate modules is a localization
+of the homotopy category of complexes of quasi-coherent modules at
+universal quasi-isomorphisms. Let $S$ be a scheme. A map of complexes
+$\varphi : \mathcal{F}^\bullet \to \mathcal{G}^\bullet$
+of quasi-coherent $\mathcal{O}_S$-modules is said to be a
+{\it universal quasi-isomorphism} if for every morphism of schemes
+$f : T \to S$ the pullback $f^*\varphi$ is a quasi-isomorphism.
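\medskip\noindent
For instance, let $S = \Spec(\mathbf{Z})$ and consider the map of
complexes of quasi-coherent $\mathcal{O}_S$-modules
$$
\Big(\mathcal{O}_S \xrightarrow{2} \mathcal{O}_S\Big)
\longrightarrow
\Big(0 \to \widetilde{\mathbf{Z}/2\mathbf{Z}}\Big)
$$
placed in degrees $-1$ and $0$, given by $0$ in degree $-1$ and the
quotient map in degree $0$. This is a quasi-isomorphism, but its pullback
along $\Spec(\mathbf{F}_2) \to \Spec(\mathbf{Z})$ is the map
$(\mathcal{O} \xrightarrow{0} \mathcal{O}) \to (0 \to \mathcal{O})$,
which fails to be a quasi-isomorphism in degree $-1$. Hence the displayed
map is not a universal quasi-isomorphism.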
+
+\begin{lemma}
+\label{lemma-describe-Dplus-adequate}
+Let $U = \Spec(A)$ be an affine scheme.
+The bounded below derived category
+$D^+(\textit{Adeq}(\mathcal{O}))$ is the localization
+of $K^+(\QCoh(\mathcal{O}_U))$ at the multiplicative subset of
+universal quasi-isomorphisms.
+\end{lemma}
+
+\begin{proof}
+If $\varphi : \mathcal{F}^\bullet \to \mathcal{G}^\bullet$
+is a morphism of complexes of quasi-coherent
+$\mathcal{O}_U$-modules, then $u\varphi : u\mathcal{F}^\bullet \to
+u\mathcal{G}^\bullet$ is a quasi-isomorphism if and only if $\varphi$ is
+a universal quasi-isomorphism. Hence the collection $S$
+of universal quasi-isomorphisms is a saturated multiplicative
+system compatible with the triangulated structure by
+Derived Categories, Lemma \ref{derived-lemma-triangle-functor-localize}.
+Hence $S^{-1}K^+(\QCoh(\mathcal{O}_U))$ exists and is a
+triangulated category, see
+Derived Categories, Proposition
+\ref{derived-proposition-construct-localization}.
+We obtain a canonical functor
+$can : S^{-1}K^+(\QCoh(\mathcal{O}_U)) \to
+D^{+}(\textit{Adeq}(\mathcal{O}))$ by
+Derived Categories, Lemma \ref{derived-lemma-universal-property-localization}.
+
+\medskip\noindent
+Note that, almost by definition, every adequate module on $U$ has an
+embedding into a quasi-coherent sheaf, see
+Lemma \ref{lemma-adequate-characterize}. Hence by
+Derived Categories, Lemma \ref{derived-lemma-subcategory-right-resolution}
+given $\mathcal{F}^\bullet \in \Ob(K^+(\textit{Adeq}(\mathcal{O})))$
+there exists a quasi-isomorphism
+$\mathcal{F}^\bullet \to u\mathcal{G}^\bullet$
+where $\mathcal{G}^\bullet \in \Ob(K^+(\QCoh(\mathcal{O}_U)))$.
+This proves that $can$ is essentially surjective.
+
+\medskip\noindent
+Similarly, suppose that $\mathcal{F}^\bullet$ and $\mathcal{G}^\bullet$
+are bounded below complexes of quasi-coherent $\mathcal{O}_U$-modules.
+A morphism in $D^+(\textit{Adeq}(\mathcal{O}))$ between these
+consists of a pair $f : u\mathcal{F}^\bullet \to \mathcal{H}^\bullet$
+and $s : u\mathcal{G}^\bullet \to \mathcal{H}^\bullet$ where $s$
+is a quasi-isomorphism. Pick a quasi-isomorphism
+$s' : \mathcal{H}^\bullet \to u\mathcal{E}^\bullet$. Then we see that
$s' \circ f : \mathcal{F}^\bullet \to \mathcal{E}^\bullet$ and the
+universal quasi-isomorphism
+$s' \circ s : \mathcal{G}^\bullet \to \mathcal{E}^\bullet$ give
+a morphism in $S^{-1}K^{+}(\QCoh(\mathcal{O}_U))$ mapping
to the given morphism. This proves the ``fully'' part of full faithfulness.
+Faithfulness is proved similarly.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-adjoint-adequate}
+Let $U = \Spec(A)$ be an affine scheme.
+The inclusion functor
+$$
+\textit{Adeq}(\mathcal{O}) \to
+\textit{Mod}((\Sch/U)_\tau, \mathcal{O})
+$$
+has a right adjoint $A$\footnote{This is the ``adequator''.}.
+Moreover, the adjunction mapping
+$A(\mathcal{F}) \to \mathcal{F}$ is an isomorphism for every
+adequate module $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+By
+Topologies, Lemma \ref{topologies-lemma-affine-big-site-fppf}
+(and similarly for the other topologies)
+we may work with $\mathcal{O}$-modules on $(\textit{Aff}/U)_\tau$.
+Denote $\mathcal{P}$ the category of module-valued
+functors on $\textit{Alg}_A$ and $\mathcal{A}$ the category of adequate
+functors on $\textit{Alg}_A$. Denote $i : \mathcal{A} \to \mathcal{P}$
+the inclusion functor. Denote $Q : \mathcal{P} \to \mathcal{A}$
+the construction of Lemma \ref{lemma-adjoint}.
+We have the commutative diagram
+\begin{equation}
+\label{equation-categories}
+\vcenter{
+\xymatrix{
+\textit{Adeq}(\mathcal{O}) \ar[r]_-k \ar@{=}[d] &
+\textit{Mod}((\textit{Aff}/U)_\tau, \mathcal{O}) \ar[r]_-j &
+\textit{PMod}((\textit{Aff}/U)_\tau, \mathcal{O}) \ar@{=}[d] \\
+\mathcal{A} \ar[rr]^-i & & \mathcal{P}
+}
+}
+\end{equation}
+The left vertical equality is
+Lemma \ref{lemma-adequate-affine}
+and the right vertical equality was explained in
+Section \ref{section-quasi-coherent}.
+Define $A(\mathcal{F}) = Q(j(\mathcal{F}))$.
+Since $j$ is fully faithful it follows immediately that $A$
+is a right adjoint of the inclusion functor $k$. Also, since
+$k$ is fully faithful too, the final assertion follows formally.
+\end{proof}
+
+\noindent
+The functor $A$ is a right adjoint hence left exact. Since the inclusion
+functor is exact, see
+Lemma \ref{lemma-abelian-adequate}
+we conclude that $A$ transforms injectives into injectives, and that
+the category $\textit{Adeq}(\mathcal{O})$ has enough injectives, see
+Homology, Lemma \ref{homology-lemma-adjoint-enough-injectives}
+and
+Injectives, Theorem \ref{injectives-theorem-sheaves-modules-injectives}.
+This also follows from the equivalence in
+(\ref{equation-categories})
+and
+Lemma \ref{lemma-enough-injectives}.
+
+\begin{lemma}
+\label{lemma-RA-zero}
+Let $U = \Spec(A)$ be an affine scheme.
+For any object $\mathcal{F}$ of $\textit{Adeq}(\mathcal{O})$
+we have $R^pA(\mathcal{F}) = 0$ for all $p > 0$ where $A$ is
+as in
+Lemma \ref{lemma-right-adjoint-adequate}.
+\end{lemma}
+
+\begin{proof}
+With notation as in the proof of
+Lemma \ref{lemma-right-adjoint-adequate}
+choose an injective resolution $k(\mathcal{F}) \to \mathcal{I}^\bullet$
+in the category of $\mathcal{O}$-modules on $(\textit{Aff}/U)_\tau$.
+By
Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-include-modules}
+and
+Lemma \ref{lemma-same-cohomology-adequate}
+the complex $j(\mathcal{I}^\bullet)$ is exact.
+On the other hand, each $j(\mathcal{I}^n)$ is an injective object
+of the category of presheaves of modules by
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-injective-module-injective-presheaf}.
+It follows that $R^pA(\mathcal{F}) = R^pQ(j(k(\mathcal{F})))$.
+Hence the result now follows from
+Lemma \ref{lemma-RQ-zero}.
+\end{proof}
+
+\noindent
+Let $S$ be a scheme. By the discussion in
+Section \ref{section-adequate}
+the embedding
+$\textit{Adeq}(\mathcal{O}) \subset
+\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$
+exhibits $\textit{Adeq}(\mathcal{O})$ as a weak Serre subcategory of
+the category of all $\mathcal{O}$-modules. Denote
+$$
+D_{\textit{Adeq}}(\mathcal{O}) \subset
+D(\mathcal{O}) = D(\textit{Mod}((\Sch/S)_\tau, \mathcal{O}))
+$$
+the triangulated subcategory of complexes whose cohomology sheaves
+are adequate, see
+Derived Categories, Section \ref{derived-section-triangulated-sub}.
+We obtain a canonical functor
+$$
+D(\textit{Adeq}(\mathcal{O}))
+\longrightarrow
+D_{\textit{Adeq}}(\mathcal{O})
+$$
+see
+Derived Categories, Equation (\ref{derived-equation-compare}).
+
+\begin{lemma}
+\label{lemma-bounded-below}
+If $U = \Spec(A)$ is an affine scheme, then the bounded
+below version
+\begin{equation}
+\label{equation-compare-bounded-adequate}
+D^+(\textit{Adeq}(\mathcal{O}))
+\longrightarrow
+D^+_{\textit{Adeq}}(\mathcal{O})
+\end{equation}
+of the functor above is an equivalence.
+\end{lemma}
+
+\begin{proof}
+Let $A : \textit{Mod}(\mathcal{O}) \to \textit{Adeq}(\mathcal{O})$
+be the right adjoint to the inclusion functor constructed in
+Lemma \ref{lemma-right-adjoint-adequate}.
+Since $A$ is left exact and since $\textit{Mod}(\mathcal{O})$
+has enough injectives, $A$ has a right derived functor
+$RA : D^+_{\textit{Adeq}}(\mathcal{O}) \to D^+(\textit{Adeq}(\mathcal{O}))$.
+We claim that $RA$ is a quasi-inverse to
+(\ref{equation-compare-bounded-adequate}).
+To see this the key fact is that if $\mathcal{F}$ is an adequate module, then
+the adjunction map $\mathcal{F} \to RA(\mathcal{F})$ is a
+quasi-isomorphism by Lemma \ref{lemma-RA-zero}.
+
+\medskip\noindent
+Namely, to prove the lemma in full it suffices to show:
+\begin{enumerate}
+\item Given $\mathcal{F}^\bullet \in K^+(\textit{Adeq}(\mathcal{O}))$
+the canonical map $\mathcal{F}^\bullet \to RA(\mathcal{F}^\bullet)$
+is a quasi-isomorphism, and
+\item given $\mathcal{G}^\bullet \in K^+(\textit{Mod}(\mathcal{O}))$
+the canonical map $RA(\mathcal{G}^\bullet) \to \mathcal{G}^\bullet$
+is a quasi-isomorphism.
+\end{enumerate}
+Both (1) and (2) follow from the key fact via a spectral sequence
+argument using one of the spectral sequences of
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ext-adequate}
+Let $U = \Spec(A)$ be an affine scheme.
+Let $\mathcal{F}$ and $\mathcal{G}$ be adequate $\mathcal{O}$-modules.
+For any $i \geq 0$ the natural map
+$$
+\Ext^i_{\textit{Adeq}(\mathcal{O})}(\mathcal{F}, \mathcal{G})
+\longrightarrow
+\Ext^i_{\textit{Mod}(\mathcal{O})}(\mathcal{F}, \mathcal{G})
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+By definition these ext groups are computed as hom sets in the
+derived category. Hence this follows immediately from
+Lemma \ref{lemma-bounded-below}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Pure extensions}
+\label{section-pure}
+
+\noindent
+We want to characterize extensions of quasi-coherent sheaves on
the big site of an affine scheme in terms of algebra. To do this
+we introduce the following notion.
+
+\begin{definition}
+\label{definition-pure}
+Let $A$ be a ring.
+\begin{enumerate}
+\item An $A$-module $P$ is said to be {\it pure projective}
+if for every universally exact sequence
$0 \to K \to M \to N \to 0$ of $A$-modules the sequence
+$0 \to \Hom_A(P, K) \to \Hom_A(P, M) \to \Hom_A(P, N) \to 0$
+is exact.
+\item An $A$-module $I$ is said to be {\it pure injective}
+if for every universally exact sequence
$0 \to K \to M \to N \to 0$ of $A$-modules the sequence
+$0 \to \Hom_A(N, I) \to \Hom_A(M, I) \to \Hom_A(K, I) \to 0$
+is exact.
+\end{enumerate}
+\end{definition}
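\medskip\noindent
Recall that a sequence of $A$-modules is universally exact if it remains
exact after tensoring with every $A$-module. For instance, every split
short exact sequence is universally exact, whereas the exact sequence
$$
0 \to \mathbf{Z} \xrightarrow{2} \mathbf{Z} \to
\mathbf{Z}/2\mathbf{Z} \to 0
$$
of $\mathbf{Z}$-modules is not: tensoring with $\mathbf{Z}/2\mathbf{Z}$
gives
$\mathbf{Z}/2\mathbf{Z} \xrightarrow{0} \mathbf{Z}/2\mathbf{Z} \to
\mathbf{Z}/2\mathbf{Z} \to 0$,
which is not exact on the left. Since there are fewer universally exact
sequences to test against, every projective module is pure projective and
every injective module is pure injective.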
+
+\noindent
+Let's characterize pure projectives.
+
+\begin{lemma}
+\label{lemma-pure-projective}
+Let $A$ be a ring.
+\begin{enumerate}
+\item A module is pure projective if and only if
+it is a direct summand of a direct sum of finitely presented $A$-modules.
+\item For any module $M$ there exists a universally exact sequence
+$0 \to N \to P \to M \to 0$ with $P$ pure projective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First note that a finitely presented $A$-module is pure projective by
+Algebra, Theorem \ref{algebra-theorem-universally-exact-criteria}.
+Hence a direct summand of a direct sum of finitely presented $A$-modules
+is indeed pure projective. Let $M$ be any $A$-module.
+Write $M = \colim_{i \in I} P_i$ as a filtered colimit of
+finitely presented $A$-modules. Consider the sequence
+$$
+0 \to N \to \bigoplus P_i \to M \to 0.
+$$
+For any finitely presented $A$-module $P$ the map
+$\Hom_A(P, \bigoplus P_i) \to \Hom_A(P, M)$
+is surjective, as any map $P \to M$ factors through some $P_i$.
+Hence by
+Algebra, Theorem \ref{algebra-theorem-universally-exact-criteria}
+this sequence is universally exact. This proves (2).
+If now $M$ is pure projective, then the sequence is split and
+we see that $M$ is a direct summand of $\bigoplus P_i$.
+\end{proof}
+
+\noindent
+Let's characterize pure injectives.
+
+\begin{lemma}
+\label{lemma-pure-injective}
+Let $A$ be a ring. For any $A$-module $M$ set
+$M^\vee = \Hom_\mathbf{Z}(M, \mathbf{Q}/\mathbf{Z})$.
+\begin{enumerate}
+\item For any $A$-module $M$ the $A$-module $M^\vee$ is pure injective.
+\item An $A$-module $I$ is pure injective if and only if the map
+$I \to (I^\vee)^\vee$ splits.
+\item For any module $M$ there exists a universally exact sequence
+$0 \to M \to I \to N \to 0$ with $I$ pure injective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use the properties of the functor $M \mapsto M^\vee$ found in
+More on Algebra, Section \ref{more-algebra-section-injectives-modules}
+without further mention. Part (1) holds because
+$\Hom_A(N, M^\vee) =
+\Hom_\mathbf{Z}(N \otimes_A M, \mathbf{Q}/\mathbf{Z})$
+and because $\mathbf{Q}/\mathbf{Z}$ is injective in the category of
+abelian groups. Hence if $I \to (I^\vee)^\vee$ is split, then
+$I$ is pure injective. We claim that for any $A$-module $M$ the
+evaluation map $ev : M \to (M^\vee)^\vee$ is universally injective.
+To see this note that $ev^\vee : ((M^\vee)^\vee)^\vee \to M^\vee$
+has a right inverse, namely $ev' : M^\vee \to ((M^\vee)^\vee)^\vee$.
+Then for any $A$-module $N$ applying the exact faithful functor
+${}^\vee$ to the map $N \otimes_A M \to N \otimes_A (M^\vee)^\vee$
+gives
+$$
+\Hom_A(N, ((M^\vee)^\vee)^\vee) =
+\Big(N \otimes_A (M^\vee)^\vee\Big)^\vee
+\to
+\Big(N \otimes_A M\Big)^\vee =
+\Hom_A(N, M^\vee)
+$$
+which is surjective by the existence of the right inverse. The claim follows.
+The claim implies (3) and the necessity of the condition in (2).
+\end{proof}
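\medskip\noindent
For instance, taking $A = M = \mathbf{Z}$ in part (1) we see that
$\mathbf{Z}^\vee = \mathbf{Q}/\mathbf{Z}$ is a pure injective
$\mathbf{Z}$-module; in this case it is even injective, as
$\mathbf{Q}/\mathbf{Z}$ is a divisible abelian group.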
+
+\noindent
+Before we continue we make the following observation which we will
+use frequently in the rest of this section.
+
+\begin{lemma}
+\label{lemma-split-universally-exact-sequence}
+Let $A$ be a ring.
+\begin{enumerate}
+\item Let $L \to M \to N$ be a universally exact sequence
+of $A$-modules. Let $K = \Im(M \to N)$.
+Then $K \to N$ is universally injective.
+\item Any universally exact complex
+can be split into universally exact short exact sequences.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). For any $A$-module $T$ the sequence
+$L \otimes_A T \to M \otimes_A T \to K \otimes_A T \to 0$ is exact
+by right exactness of $\otimes$. By assumption the sequence
+$L \otimes_A T \to M \otimes_A T \to N \otimes_A T$ is exact.
+Combined this shows that $K \otimes_A T \to N \otimes_A T$ is injective.
+
+\medskip\noindent
+Part (2) means the following: Suppose that $M^\bullet$ is a universally
+exact complex of $A$-modules. Set $K^i = \Ker(d^i) \subset M^i$.
+Then the short exact sequences $0 \to K^i \to M^i \to K^{i + 1} \to 0$
+are universally exact. This follows immediately from part (1).
+\end{proof}
+
+\begin{definition}
+\label{definition-pure-resolution}
+Let $A$ be a ring. Let $M$ be an $A$-module.
+\begin{enumerate}
+\item A {\it pure projective resolution} $P_\bullet \to M$
+is a universally exact sequence
+$$
+\ldots \to P_1 \to P_0 \to M \to 0
+$$
+with each $P_i$ pure projective.
+\item A {\it pure injective resolution} $M \to I^\bullet$ is a universally
+exact sequence
+$$
+0 \to M \to I^0 \to I^1 \to \ldots
+$$
+with each $I^i$ pure injective.
+\end{enumerate}
+\end{definition}
+
+\noindent
+These resolutions satisfy the usual uniqueness properties among the class
+of all universally exact left or right resolutions.
+
+\begin{lemma}
+\label{lemma-pure-projective-resolutions}
+Let $A$ be a ring.
+\begin{enumerate}
+\item Any $A$-module has a pure projective resolution.
+\end{enumerate}
+Let $M \to N$ be a map of $A$-modules.
+Let $P_\bullet \to M$ be a pure projective resolution and
+let $N_\bullet \to N$ be a universally exact resolution.
+\begin{enumerate}
+\item[(2)] There exists a map of complexes $P_\bullet \to N_\bullet$
+inducing the given map
+$$
+M = \Coker(P_1 \to P_0) \to \Coker(N_1 \to N_0) = N
+$$
+\item[(3)] two maps $\alpha, \beta : P_\bullet \to N_\bullet$
+inducing the same map $M \to N$ are homotopic.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows immediately from
+Lemma \ref{lemma-pure-projective}.
+Before we prove (2) and (3) note that by
+Lemma \ref{lemma-split-universally-exact-sequence}
+we can split the universally exact complex $N_\bullet \to N \to 0$
+into universally exact short exact sequences $0 \to K_0 \to N_0 \to N \to 0$
+and $0 \to K_i \to N_i \to K_{i - 1} \to 0$.
+
+\medskip\noindent
+Proof of (2). Because $P_0$ is pure projective
+we can find a map $P_0 \to N_0$ lifting the map $P_0 \to M \to N$.
We obtain an induced map $P_1 \to P_0 \to N_0$ which ends up in $K_0$.
+Since $P_1$ is pure projective we may lift this
+to a map $P_1 \to N_1$. This in turn induces a map
$P_2 \to P_1 \to N_1$ which maps to zero in
$N_0$, i.e., into $K_1$. Hence we may lift it to get a map
+$P_2 \to N_2$. Repeat.
+
+\medskip\noindent
+Proof of (3). To show that $\alpha, \beta$ are homotopic it suffices
+to show the difference $\gamma = \alpha - \beta$ is homotopic
+to zero. Note that the image of $\gamma_0 : P_0 \to N_0$
+is contained in $K_0$. Hence we may lift
+$\gamma_0$ to a map $h_0 : P_0 \to N_1$. Consider the map
+$\gamma_1' = \gamma_1 - h_0 \circ d_{P, 1} : P_1 \to N_1$.
+By our choice of $h_0$ we see that the image of $\gamma_1'$
is contained in $K_1$. Since $P_1$ is pure projective we may lift
+$\gamma_1'$ to a map $h_1 : P_1 \to N_2$. At this point we have
$\gamma_1 = h_0 \circ d_{P, 1} + d_{N, 2} \circ h_1$. Repeat.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pure-injective-resolutions}
+Let $A$ be a ring.
+\begin{enumerate}
+\item Any $A$-module has a pure injective resolution.
+\end{enumerate}
+Let $M \to N$ be a map of $A$-modules.
+Let $M \to M^\bullet$ be a universally exact resolution and
+let $N \to I^\bullet$ be a pure injective resolution.
+\begin{enumerate}
+\item[(2)] There exists a map of complexes $M^\bullet \to I^\bullet$
+inducing the given map
+$$
+M = \Ker(M^0 \to M^1) \to \Ker(I^0 \to I^1) = N
+$$
+\item[(3)] two maps $\alpha, \beta : M^\bullet \to I^\bullet$
+inducing the same map $M \to N$ are homotopic.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma is dual to
+Lemma \ref{lemma-pure-projective-resolutions}.
+The proof is identical, except one has to reverse all the arrows.
+\end{proof}
+
+\noindent
+Using the material above we can define pure extension groups as
+follows. Let $A$ be a ring and let $M$, $N$ be $A$-modules.
+Choose a pure injective resolution $N \to I^\bullet$. By
+Lemma \ref{lemma-pure-injective-resolutions}
+the complex
+$$
+\Hom_A(M, I^\bullet)
+$$
+is well defined up to homotopy. Hence its $i$th cohomology module
+is a well defined invariant of $M$ and $N$.
+
+\begin{definition}
+\label{definition-pure-ext}
+Let $A$ be a ring and let $M$, $N$ be $A$-modules.
+The $i$th {\it pure extension module} $\text{Pext}^i_A(M, N)$
+is the $i$th cohomology module of the complex
+$\Hom_A(M, I^\bullet)$ where $I^\bullet$ is a pure injective
+resolution of $N$.
+\end{definition}
+
+\noindent
+Warning: It is not true that an exact sequence of $A$-modules gives
rise to a long exact sequence of pure extension groups. (You need
+a universally exact sequence for this.)
+We collect some facts which are obvious from the material above.
+
+\begin{lemma}
+\label{lemma-facts-pext}
+Let $A$ be a ring.
+\begin{enumerate}
+\item $\text{Pext}^i_A(M, N) = 0$ for $i > 0$ whenever $N$ is pure injective,
+\item $\text{Pext}^i_A(M, N) = 0$ for $i > 0$ whenever $M$ is pure projective,
+in particular if $M$ is an $A$-module of finite presentation,
+\item $\text{Pext}^i_A(M, N)$ is also the $i$th cohomology module
+of the complex $\Hom_A(P_\bullet, N)$ where $P_\bullet$
+is a pure projective resolution of $M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To see (3) consider the double complex
+$$
+A^{\bullet, \bullet} = \Hom_A(P_\bullet, I^\bullet)
+$$
+Each of its rows is exact except in degree $0$ where its cohomology
+is $\Hom_A(M, I^q)$. Each of its columns is exact except in degree $0$
+where its cohomology is $\Hom_A(P_p, N)$. Hence the two spectral
+sequences associated to this complex in
+Homology, Section \ref{homology-section-double-complex}
+degenerate, giving the equality.
+\end{proof}
+
+
+
+
+
+\section{Higher exts of quasi-coherent sheaves on the big site}
+\label{section-big}
+
+\noindent
+It turns out that the module-valued functor $\underline{I}$ associated to
+a pure injective module $I$ gives rise to an injective object in the
+category of adequate functors on $\textit{Alg}_A$.
+Warning: It is not true that a pure projective module gives rise to
+a projective object in the category of adequate functors. We do have
+plenty of projective objects, namely, the linearly adequate functors.
+
+\begin{lemma}
+\label{lemma-pure-injective-injective-adequate}
+Let $A$ be a ring.
+Let $\mathcal{A}$ be the category of adequate functors on $\textit{Alg}_A$.
+The injective objects of $\mathcal{A}$ are exactly the functors
+$\underline{I}$ where $I$ is a pure injective $A$-module.
+\end{lemma}
+
+\begin{proof}
+Let $I$ be an injective object of $\mathcal{A}$.
+Choose an embedding $I \to \underline{M}$ for some $A$-module $M$.
+As $I$ is injective we see that $\underline{M} = I \oplus F$ for some
+module-valued functor $F$. Then $M = I(A) \oplus F(A)$ and it follows
+that $I = \underline{I(A)}$. Thus we see that any injective object
+is of the form $\underline{I}$ for some $A$-module $I$.
+It is clear that the module $I$ has to be pure injective
+since any universally exact sequence $0 \to M \to N \to L \to 0$
+gives rise to an exact sequence
+$0 \to \underline{M} \to \underline{N} \to \underline{L} \to 0$
+of $\mathcal{A}$.
+
+\medskip\noindent
+Finally, suppose that $I$ is a pure injective
+$A$-module. Choose an embedding $\underline{I} \to J$
+into an injective object of $\mathcal{A}$ (see
+Lemma \ref{lemma-enough-injectives}).
+We have seen above that $J = \underline{I'}$
+for some $A$-module $I'$ which is pure injective. As
+$\underline{I} \to \underline{I'}$ is injective
+the map $I \to I'$ is universally injective. By assumption on $I$
+it splits. Hence $\underline{I}$ is a summand of $J = \underline{I'}$
+whence an injective object of the category $\mathcal{A}$.
+\end{proof}
+
+\noindent
+Let $U = \Spec(A)$ be an affine scheme. Let $M$ be an $A$-module.
+We will use the notation $M^a$ to denote the quasi-coherent sheaf
+of $\mathcal{O}$-modules on $(\Sch/U)_\tau$ associated to
+the quasi-coherent sheaf $\widetilde{M}$ on $U$.
+Now we have all the notation in place to formulate the following lemma.
+
+\begin{lemma}
+\label{lemma-big-ext}
+Let $U = \Spec(A)$ be an affine scheme. Let $M$, $N$ be $A$-modules.
+For all $i$ we have a canonical isomorphism
+$$
+\Ext^i_{\textit{Mod}(\mathcal{O})}(M^a, N^a) = \text{Pext}^i_A(M, N)
+$$
+functorial in $M$ and $N$.
+\end{lemma}
+
+\begin{proof}
+Let us construct a canonical arrow from right to left. Namely, if
+$N \to I^\bullet$ is a pure injective resolution, then
+$N^a \to (I^\bullet)^a$ is a quasi-isomorphism into a complex of
+(adequate) $\mathcal{O}$-modules. Hence any element of
+$\text{Pext}^i_A(M, N)$ gives rise to a map $M^a \to N^a[i]$
+in $D(\mathcal{O})$, i.e., an element of the group on the left.
+
+\medskip\noindent
+To prove this map is an isomorphism, note that we may replace
+$\Ext^i_{\textit{Mod}(\mathcal{O})}(M^a, N^a)$ by
+$\Ext^i_{\textit{Adeq}(\mathcal{O})}(M^a, N^a)$, see
+Lemma \ref{lemma-ext-adequate}.
+Let $\mathcal{A}$ be the category of adequate functors
+on $\textit{Alg}_A$. We have seen that $\mathcal{A}$ is
+equivalent to $\textit{Adeq}(\mathcal{O})$, see
+Lemma \ref{lemma-adequate-affine}; see also the proof of
+Lemma \ref{lemma-right-adjoint-adequate}.
+Hence now it suffices to prove that
+$$
+\Ext^i_\mathcal{A}(\underline{M}, \underline{N}) =
+\text{Pext}^i_A(M, N)
+$$
+However, this is clear from
+Lemma \ref{lemma-pure-injective-injective-adequate}
+as a pure injective resolution $N \to I^\bullet$ exactly corresponds
+to an injective resolution of $\underline{N}$ in $\mathcal{A}$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Derived categories of adequate modules, II}
+\label{section-derived-categories}
+
+\noindent
+Let $S$ be a scheme. Denote $\mathcal{O}_S$ the structure sheaf of $S$
+and $\mathcal{O}$ the structure sheaf of the big site $(\Sch/S)_\tau$.
+In
+Descent, Remark \ref{descent-remark-change-topologies-ringed}
+we constructed a morphism of ringed sites
+\begin{equation}
+\label{equation-compare-big-small}
+f :
+((\Sch/S)_\tau, \mathcal{O})
+\longrightarrow
+(S_{Zar}, \mathcal{O}_S).
+\end{equation}
+In the previous sections we have seen that the functor
+$f_* : \textit{Mod}(\mathcal{O}) \to \textit{Mod}(\mathcal{O}_S)$
+transforms adequate sheaves into quasi-coherent sheaves, and
+induces an exact functor
+$v : \textit{Adeq}(\mathcal{O}) \to \QCoh(\mathcal{O}_S)$, and
+in fact that $f_* = v$ induces an equivalence
+$\textit{Adeq}(\mathcal{O})/\mathcal{C} \to \QCoh(\mathcal{O}_S)$
+where $\mathcal{C}$ is the subcategory of parasitic adequate modules.
+Moreover, the functor $f^*$ transforms quasi-coherent modules
+into adequate modules, and induces a functor
+$u : \QCoh(\mathcal{O}_S) \to \textit{Adeq}(\mathcal{O})$
+which is a left adjoint to $v$.
+
+\medskip\noindent
+There is a very similar relationship between
+$D_{\textit{Adeq}}(\mathcal{O})$ and $D_\QCoh(S)$.
+First we explain why the category $D_{\textit{Adeq}}(\mathcal{O})$
+is independent of the chosen topology.
+
+\begin{remark}
+\label{remark-D-adeq-independence-topology}
+Let $S$ be a scheme.
+Let $\tau, \tau' \in \{Zar, \etale, smooth, syntomic, fppf\}$.
+Denote $\mathcal{O}_\tau$, resp.\ $\mathcal{O}_{\tau'}$
+the structure sheaf $\mathcal{O}$ viewed as a sheaf on
+$(\Sch/S)_\tau$, resp.\ $(\Sch/S)_{\tau'}$.
+Then $D_{\textit{Adeq}}(\mathcal{O}_\tau)$ and
+$D_{\textit{Adeq}}(\mathcal{O}_{\tau'})$ are canonically isomorphic.
+This follows from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compare-topologies-derived-adequate-modules}.
+Namely, assume $\tau$ is stronger than the topology $\tau'$, let
+$\mathcal{C} = (\Sch/S)_{fppf}$, and let $\mathcal{B}$ be the collection
+of affine schemes over $S$. We have checked assumptions (1) and (2) above.
+Assumption (3) is clear and assumption (4) follows from
+Lemma \ref{lemma-same-cohomology-adequate}.
+\end{remark}
+
+\begin{remark}
+\label{remark-D-adeq-and-D-QCoh}
+Let $S$ be a scheme. The morphism $f$ of
+(\ref{equation-compare-big-small}) induces
+adjoint functors
+$Rf_* : D_{\textit{Adeq}}(\mathcal{O}) \to D_\QCoh(S)$
+and
+$Lf^* : D_\QCoh(S) \to D_{\textit{Adeq}}(\mathcal{O})$.
+Moreover $Rf_* Lf^* \cong \text{id}_{D_\QCoh(S)}$.
+
+\medskip\noindent
+We sketch the proof. By
+Remark \ref{remark-D-adeq-independence-topology}
+we may assume the topology $\tau$ is the Zariski topology.
+We will use the existence of the unbounded total derived
+functors $Lf^*$ and $Rf_*$ on $\mathcal{O}$-modules and their
+adjointness, see
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-adjoint}.
+In this case $f_*$ is just the restriction to the subcategory
+$S_{Zar}$ of $(\Sch/S)_{Zar}$. Hence it is clear that
+$Rf_* = f_*$ induces
+$Rf_* : D_{\textit{Adeq}}(\mathcal{O}) \to D_\QCoh(S)$.
+Suppose that $\mathcal{G}^\bullet$ is an object of
+$D_\QCoh(S)$. We may choose a system
+$\mathcal{K}_1^\bullet \to \mathcal{K}_2^\bullet \to \ldots$
+of bounded above complexes of flat $\mathcal{O}_S$-modules whose
+transition maps are termwise split injections and a diagram
+$$
+\xymatrix{
+\mathcal{K}_1^\bullet \ar[d] \ar[r] &
+\mathcal{K}_2^\bullet \ar[d] \ar[r] & \ldots \\
+\tau_{\leq 1}\mathcal{G}^\bullet \ar[r] &
+\tau_{\leq 2}\mathcal{G}^\bullet \ar[r] & \ldots
+}
+$$
+with the properties (1), (2), (3) listed in
+Derived Categories, Lemma \ref{derived-lemma-special-direct-system}
+where $\mathcal{P}$ is the collection of flat $\mathcal{O}_S$-modules.
+Then $Lf^*\mathcal{G}^\bullet$ is computed by
+$\colim f^*\mathcal{K}_n^\bullet$, see
+Cohomology on Sites, Lemmas \ref{sites-cohomology-lemma-pullback-K-flat} and
+\ref{sites-cohomology-lemma-derived-base-change}
+(note that our sites have enough points by
+\'Etale Cohomology, Lemma \ref{etale-cohomology-lemma-points-fppf}).
+We have to see that $H^i(Lf^*\mathcal{G}^\bullet) =
+\colim H^i(f^*\mathcal{K}_n^\bullet)$ is adequate for each $i$. By
+Lemma \ref{lemma-abelian-adequate}
+we conclude that it suffices to show that
+each $H^i(f^*\mathcal{K}_n^\bullet)$ is adequate.
+
+\medskip\noindent
+The adequacy of $H^i(f^*\mathcal{K}_n^\bullet)$ is local on $S$, hence
+we may assume that $S = \Spec(A)$ is affine. Because $S$ is affine
+$D_\QCoh(S) = D(\QCoh(\mathcal{O}_S))$, see
+the discussion in
+Derived Categories of Schemes, Section
+\ref{perfect-section-derived-quasi-coherent}.
+Hence there exists a quasi-isomorphism
+$\mathcal{F}^\bullet \to \mathcal{K}_n^\bullet$
+where $\mathcal{F}^\bullet$ is a bounded above complex of flat
+quasi-coherent modules.
+Then $f^*\mathcal{F}^\bullet \to f^*\mathcal{K}_n^\bullet$ is a
+quasi-isomorphism, and the cohomology sheaves of
+$f^*\mathcal{F}^\bullet$ are adequate.
+
+\medskip\noindent
+The final assertion
+$Rf_* Lf^* \cong \text{id}_{D_\QCoh(S)}$
+follows from the explicit description of the functors above.
+(In plain English: if $\mathcal{F}$ is quasi-coherent and $p > 0$, then
+$L_pf^*\mathcal{F}$ is a parasitic adequate module.)
+\end{remark}
+
+
+\begin{remark}
+\label{remark-conclusion}
+Remark \ref{remark-D-adeq-and-D-QCoh}
+above implies we have an equivalence of derived categories
+$$
+D_{\textit{Adeq}}(\mathcal{O})/D_\mathcal{C}(\mathcal{O})
+\longrightarrow
+D_\QCoh(S)
+$$
+where $\mathcal{C}$ is the category of parasitic adequate modules.
+Namely, it is clear that $D_\mathcal{C}(\mathcal{O})$ is the kernel
+of $Rf_*$, hence a functor as indicated. For any object $X$ of
+$D_{\textit{Adeq}}(\mathcal{O})$ the map $Lf^*Rf_*X \to X$ is mapped
+by $Rf_*$ to an isomorphism in $D_\QCoh(S)$, hence
+$Lf^*Rf_*X \to X$ is an isomorphism in
+$D_{\textit{Adeq}}(\mathcal{O})/D_\mathcal{C}(\mathcal{O})$.
+Finally, for $X, Y$ objects of $D_{\textit{Adeq}}(\mathcal{O})$
+the map
+$$
+Rf_* :
+\Hom_{D_{\textit{Adeq}}(\mathcal{O})/D_\mathcal{C}(\mathcal{O})}(X, Y)
+\to
+\Hom_{D_\QCoh(S)}(Rf_*X, Rf_*Y)
+$$
+is bijective as $Lf^*$ gives an inverse (by the remarks above).
+\end{remark}
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/algebra.tex b/books/stacks/algebra.tex
new file mode 100644
index 0000000000000000000000000000000000000000..eebd0d0c61220acc46accdfa42988a3b9df8943a
--- /dev/null
+++ b/books/stacks/algebra.tex
@@ -0,0 +1,47177 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Commutative Algebra}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+
+
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+Basic commutative algebra will be explained in this document.
+A reference is \cite{MatCA}.
+
+
+
+
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+A ring is commutative with $1$. The zero ring is a ring. In fact it is
+the only ring that does not have a prime ideal. The Kronecker
+symbol $\delta_{ij}$ will be used. If $R \to S$ is a ring map and
+$\mathfrak q$ a prime of $S$, then we use the notation
+``$\mathfrak p = R \cap \mathfrak q$''
+to indicate the prime which is the inverse image of $\mathfrak q$ under
+$R \to S$ even if $R$ is not a subring of $S$ and even if $R \to S$
+is not injective.
+
+
+
+
+
+
+\section{Basic notions}
+\label{section-rings-basic}
+
+\noindent
+The following is a list of basic notions in commutative algebra. Some of these
+notions are discussed in more detail in the text that follows and some are
+defined in the list, but others are considered basic and will not be defined.
+If you are not familiar with most of the italicized concepts, then we suggest
+looking at an introductory text on algebra before continuing.
+
+\begin{enumerate}
+\item $R$ is a {\it ring},
+\label{item-ring}
+\item $x\in R$ is {\it nilpotent},
+\label{item-ring-element-nilpotent}
+\item $x\in R$ is a {\it zerodivisor},
+\label{item-ring-element-zerodivisor}
+\item $x\in R$ is a {\it unit},
+\label{item-ring-element-unit}
+\item $e \in R$ is an {\it idempotent},
+\label{item-ring-element-idempotent}
+\item an idempotent $e \in R$ is called {\it trivial} if $e = 1$ or $e = 0$,
+\label{item-idempotent-trivial}
+\item $\varphi : R_1 \to R_2$ is a {\it ring homomorphism},
+\label{item-ring-homomorphism}
+\item
+\label{item-ring-homomorphism-finite-presentation}
+$\varphi : R_1 \to R_2$ is {\it of finite presentation}, or
+{\it $R_2$ is a finitely presented $R_1$-algebra},
+see Definition \ref{definition-finite-type},
+\item
+\label{item-ring-homomorphism-finite-type}
+$\varphi : R_1 \to R_2$ is {\it of finite type}, or
+{\it $R_2$ is a finite type $R_1$-algebra},
+see Definition \ref{definition-finite-type},
+\item
+\label{item-ring-homomorphism-finite}
+$\varphi : R_1 \to R_2$ is {\it finite}, or
+{\it $R_2$ is a finite $R_1$-algebra},
+\item $R$ is a {\it domain}, or an {\it integral domain},
+\label{item-ring-domain}
+\item $R$ is {\it reduced},
+\label{item-ring-reduced}
+\item $R$ is {\it Noetherian},
+\label{item-ring-Noetherian}
+\item $R$ is a {\it principal ideal domain} or a {\it PID},
+\label{item-ring-PID}
+\item $R$ is a {\it Euclidean domain},
+\label{item-ring-Euclidean}
+\item $R$ is a {\it unique factorization domain} or a {\it UFD},
+\label{item-ring-UFD}
+\item $R$ is a {\it discrete valuation ring} or a {\it dvr},
+\label{item-ring-dvr}
+\item $K$ is a {\it field},
+\label{item-field}
+\item $L/K$ is a {\it field extension},
+\label{item-field-extension}
+\item $L/K$ is an {\it algebraic field extension},
+\label{item-field-extension-algebraic}
+\item $\{t_i\}_{i\in I}$ is a {\it transcendence basis} for $L$ over $K$,
+\label{item-transcendence-basis}
+\item the {\it transcendence degree} $\text{trdeg}(L/K)$ of $L$ over $K$,
+\label{item-transcendence-degree}
+\item the field $k$ is {\it algebraically closed},
+\label{item-algebraically-closed}
+\item
+\label{item-extend-into-algebraically-closed}
+if $L/K$ is algebraic, and $\Omega/K$ an extension with $\Omega$
+algebraically closed,
+then there exists a ring map $L \to \Omega$ extending the map on $K$,
+\item $I \subset R$ is an {\it ideal},
+\label{item-ideal}
+\item $I \subset R$ is {\it radical},
+\label{item-ideal-radical}
+\item if $I$ is an ideal then we have its {\it radical} $\sqrt{I}$,
+\label{item-radical-ideal}
+\item
+\label{item-ideal-nilpotent}
+$I \subset R$ is {\it nilpotent} means that $I^n = 0$ for
+some $n \in \mathbf{N}$,
+\item
+\label{item-ideal-locally-nilpotent}
+$I \subset R$ is {\it locally nilpotent} means that every
+element of $I$ is nilpotent,
+\item $\mathfrak p \subset R$ is a {\it prime ideal},
+\label{item-prime-ideal}
+\item
+\label{item-prime-product-ideals}
+if $\mathfrak p \subset R$ is prime and if $I, J \subset R$
+are ideals with $IJ \subset \mathfrak p$, then
+$I \subset \mathfrak p$ or $J \subset \mathfrak p$,
+\item $\mathfrak m \subset R$ is a {\it maximal ideal},
+\label{item-maximal-ideal}
+\item any nonzero ring has a maximal ideal,
+\label{item-exists-maximal-ideal}
+\item
+\label{item-jacobson-radical}
+the {\it Jacobson radical} of $R$ is $\text{rad}(R) =
+\bigcap_{\mathfrak m \subset R} \mathfrak m$ the intersection
+of all the maximal ideals of $R$,
+\item the ideal $(T)$ {\it generated} by a subset $T \subset R$,
+\label{item-ideal-generated-by}
+\item the {\it quotient ring} $R/I$,
+\label{item-quotient-ring}
+\item an ideal $I$ in the ring $R$ is prime if and only if $R/I$ is a domain,
+\label{item-characterize-prime-ideal}
+\item
+\label{item-characterize-maximal-ideal}
+an ideal $I$ in the ring $R$ is maximal if and only if the
+ring $R/I$ is a field,
+\item
+\label{item-inverse-image-ideal}
+if $\varphi : R_1 \to R_2$ is a ring homomorphism, and if
+$I \subset R_2$ is an ideal, then $\varphi^{-1}(I)$ is an
+ideal of $R_1$,
+\item
+\label{item-image-ideal}
+if $\varphi : R_1 \to R_2$ is a ring homomorphism, and if
+$I \subset R_1$ is an ideal, then $\varphi(I) \cdot R_2$ (sometimes
+denoted $I \cdot R_2$, or $IR_2$) is the ideal of $R_2$ generated
+by $\varphi(I)$,
+\item
+\label{item-inverse-image-prime}
+if $\varphi : R_1 \to R_2$ is a ring homomorphism, and if
+$\mathfrak p \subset R_2$ is a prime ideal, then
+$\varphi^{-1}(\mathfrak p)$ is a prime ideal of $R_1$,
+\item $M$ is an {\it $R$-module},
+\label{item-module}
+\item
+\label{item-annihilator}
+for $m \in M$ the {\it annihilator}
+$I = \{f \in R \mid fm = 0\}$ of $m$ in $R$,
+\item $N \subset M$ is an {\it $R$-submodule},
+\label{item-submodule}
+\item $M$ is a {\it Noetherian $R$-module},
+\label{item-Noetherian-module}
+\item $M$ is a {\it finite $R$-module},
+\label{item-finite-module}
+\item $M$ is a {\it finitely generated $R$-module},
+\label{item-finitely-generated-module}
+\item $M$ is a {\it finitely presented $R$-module},
+\label{item-finitely-presented-module}
+\item $M$ is a {\it free $R$-module},
+\label{item-free-module}
+\item
+\label{item-extension-free}
+if $0 \to K \to L \to M \to 0$ is a short exact sequence
+of $R$-modules and $K$, $M$ are free, then $L$ is free,
+\item if $N \subset M \subset L$ are $R$-modules, then $L/M = (L/N)/(M/N)$,
+\label{item-isomorphism-theorem}
+\item $S$ is a {\it multiplicative subset of $R$},
+\label{item-multiplicative-subset}
+\item the {\it localization} $R \to S^{-1}R$ of $R$,
+\label{item-localization-ring}
+\item
+\label{item-localization-zero}
+if $R$ is a ring and $S$ is a multiplicative subset
+of $R$ then $S^{-1}R$ is the zero ring if and only if $S$ contains $0$,
+\item
+\label{item-localize-nonzerodivisors}
+if $R$ is a ring and if the multiplicative subset $S$
+consists completely of nonzerodivisors, then $R \to S^{-1}R$
+is injective,
+\item if $\varphi : R_1 \to R_2$ is a ring homomorphism, and
+$S$ is a multiplicative subset of $R_1$, then $\varphi(S)$ is
+a multiplicative subset of $R_2$,
+\item
+\label{item-products-multiplicative-subsets}
+if $S$, $S'$ are multiplicative subsets of $R$,
+and if $SS'$ denotes the set of products $SS' =
+\{r \in R \mid \exists s\in S, \exists s' \in S', r = ss'\}$
+then $SS'$ is a multiplicative subset of $R$,
+\item
+\label{item-localization-localization}
+if $S$, $S'$ are multiplicative subsets of $R$,
+and if $\overline{S}$ denotes the image of $S$ in $(S')^{-1}R$,
+then $(SS')^{-1}R = \overline{S}^{-1}((S')^{-1}R)$,
+\item the {\it localization} $S^{-1}M$ of the $R$-module $M$,
+\label{item-localization-module}
+\item
+\label{item-localization-exact}
+the functor $M \mapsto S^{-1}M$ preserves injective maps,
+surjective maps, and exactness,
+\item
+\label{item-localization-localization-module}
+if $S$, $S'$ are multiplicative subsets of $R$,
+and if $M$ is an $R$-module, then
+$(SS')^{-1}M = S^{-1}((S')^{-1}M)$,
+\item
+\label{item-localize-ideal}
+if $R$ is a ring, $I$ an ideal of $R$ and $S$ a multiplicative
+subset of $R$, then $S^{-1}I$ is an ideal of $S^{-1}R$, and we have
+$S^{-1}R/S^{-1}I = \overline{S}^{-1}(R/I)$, where $\overline{S}$
+is the image of $S$ in $R/I$,
+\item
+\label{item-ideal-in-localization}
+if $R$ is a ring, and $S$ a multiplicative
+subset of $R$, then any ideal $I'$ of $S^{-1}R$ is
+of the form $S^{-1}I$, where one can take $I$ to be
+the inverse image of $I'$ in $R$,
+\item
+\label{item-submodule-in-localization}
+if $R$ is a ring, $M$ an $R$-module, and $S$ a multiplicative
+subset of $R$, then any submodule $N'$ of $S^{-1}M$ is of the form
+$S^{-1}N$ for some submodule $N \subset M$, where
+one can take $N$ to be the inverse image of $N'$ in $M$,
+\item if $S = \{1, f, f^2, \ldots\}$ then $R_f = S^{-1}R$ and $M_f = S^{-1}M$,
+\label{item-localize-f}
+\item
+\label{item-localize-p}
+if $S = R \setminus \mathfrak p = \{x\in R \mid x\not\in \mathfrak p\}$
+for some prime ideal $\mathfrak p$,
+then it is customary to denote $R_{\mathfrak p} = S^{-1}R$
+and $M_{\mathfrak p} = S^{-1}M$,
+\item a {\it local ring} is a ring with exactly one maximal ideal,
+\label{item-local-ring}
+\item a {\it semi-local ring} is a ring with finitely many maximal ideals,
+\label{item-semi-local-ring}
+\item
+\label{item-localize-p-local-ring}
+if $\mathfrak p$ is a prime in $R$, then $R_{\mathfrak p}$ is
+a local ring with maximal ideal $\mathfrak p R_{\mathfrak p}$,
+\item
+\label{item-residue-field}
+the {\it residue field}, denoted $\kappa(\mathfrak p)$,
+of the prime $\mathfrak p$ in the ring $R$ is the
+field of fractions of the domain $R/\mathfrak p$;
+it is equal to $R_\mathfrak p/\mathfrak pR_\mathfrak p
+= (R \setminus \mathfrak p)^{-1}R/\mathfrak p$,
+\item given $R$ and $M_1$, $M_2$ the {\it tensor product} $M_1 \otimes_R M_2$,
+\label{item-tensor-product}
+\item
+\label{item-cauchy-binet}
+given matrices $A$ and $B$ with coefficients in a ring $R$ of sizes
+$m \times n$ and
+$n \times m$ we have $\det(AB) = \sum \det(A_S)\det({}_SB)$ in $R$ where
+the sum is over subsets $S \subset \{1, \ldots, n\}$ of size $m$
+and $A_S$ is the $m \times m$ submatrix of $A$ with columns
+corresponding to $S$ and ${}_SB$ is the $m \times m$ submatrix of $B$
+with rows corresponding to $S$,
+\item etc.
+\end{enumerate}
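+
+\noindent
+To illustrate the Cauchy-Binet formula in (\ref{item-cauchy-binet}):
+if $m = n$ there is only one subset $S$ and the formula reduces to the
+multiplicativity of determinants $\det(AB) = \det(A)\det(B)$, and if
+$m > n$ the sum is empty, so $\det(AB) = 0$; for instance the product
+of an $m \times 1$ column vector and a $1 \times m$ row vector has
+vanishing determinant as soon as $m \geq 2$.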
+
+
+
+
+\section{Snake lemma}
+\label{section-snake}
+
+\noindent
+The snake lemma and its variants are discussed in the setting of
+abelian categories in
+Homology, Section \ref{homology-section-abelian-categories}.
+
+\begin{lemma}
+\label{lemma-snake}
+\begin{reference}
+\cite[III, Lemma 3.3]{Cartan-Eilenberg}
+\end{reference}
+Given a commutative diagram
+$$
+\xymatrix{
+& X \ar[r] \ar[d]^\alpha &
+Y \ar[r] \ar[d]^\beta &
+Z \ar[r] \ar[d]^\gamma &
+0 \\
+0 \ar[r] & U \ar[r] & V \ar[r] & W
+}
+$$
+of abelian groups with exact rows, there is a canonical exact sequence
+$$
+\Ker(\alpha) \to \Ker(\beta) \to \Ker(\gamma)
+\to
+\Coker(\alpha) \to \Coker(\beta) \to \Coker(\gamma)
+$$
+Moreover: if $X \to Y$ is injective, then the first map is
+injective; if $V \to W$ is surjective, then the last
+map is surjective.
+\end{lemma}
+
+\begin{proof}
+The map $\partial : \Ker(\gamma) \to \Coker(\alpha)$ is defined
+as follows. Take $z \in \Ker(\gamma)$. Choose $y \in Y$ mapping to $z$.
+Then $\beta(y) \in V$ maps to zero in $W$. Hence $\beta(y)$ is the image
+of some $u \in U$, and this $u$ is unique since $U \to V$ is injective.
+Set $\partial z = \overline{u}$, the class of $u$ in the cokernel of
+$\alpha$. A different choice of $y$ differs by an element in the image
+of $X \to Y$, which changes $u$ by an element in the image of $\alpha$;
+hence $\partial$ is well defined. Proof of exactness is omitted.
+\end{proof}
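+
+\noindent
+A standard example illustrating the boundary map: consider the diagram
+$$
+\xymatrix{
+& \mathbf{Z} \ar[r] \ar[d]^n &
+\mathbf{Q} \ar[r] \ar[d]^n &
+\mathbf{Q}/\mathbf{Z} \ar[r] \ar[d]^n &
+0 \\
+0 \ar[r] & \mathbf{Z} \ar[r] & \mathbf{Q} \ar[r] & \mathbf{Q}/\mathbf{Z}
+}
+$$
+with vertical maps given by multiplication by an integer $n > 0$. The
+kernels are $0$, $0$, $\frac{1}{n}\mathbf{Z}/\mathbf{Z}$ and the
+cokernels are $\mathbf{Z}/n\mathbf{Z}$, $0$, $0$, so the exact sequence
+of the lemma shows that
+$\partial : \frac{1}{n}\mathbf{Z}/\mathbf{Z} \to \mathbf{Z}/n\mathbf{Z}$
+is an isomorphism. Following the construction in the proof, the class
+of $\frac{1}{n}$ lifts to $\frac{1}{n} \in \mathbf{Q}$, multiplication
+by $n$ gives $1 \in \mathbf{Q}$, which is the image of $1 \in \mathbf{Z}$;
+hence $\partial$ sends the class of $\frac{1}{n}$ to the class of $1$.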
+
+
+
+
+
+
+
+\section{Finite modules and finitely presented modules}
+\label{section-module-finite-type}
+
+\noindent
+Just some basic notation and lemmas.
+
+\begin{definition}
+\label{definition-module-finite-type}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+\begin{enumerate}
+\item We say $M$ is a {\it finite $R$-module}, or a {\it finitely generated
+$R$-module} if there exist $n \in \mathbf{N}$ and $x_1, \ldots, x_n \in M$
+such that every element of $M$ is an $R$-linear combination of the $x_i$.
+Equivalently, this means there exists a surjection
+$R^{\oplus n} \to M$ for some $n \in \mathbf{N}$.
+\item We say $M$ is a {\it finitely presented $R$-module} or an
+{\it $R$-module of finite presentation} if there exist integers
+$n, m \in \mathbf{N}$ and an exact sequence
+$$
+R^{\oplus m} \longrightarrow R^{\oplus n} \longrightarrow M \longrightarrow 0
+$$
+\end{enumerate}
+\end{definition}
+
+\noindent
+Informally, $M$ is a finitely presented $R$-module if and only if
+it is finitely generated and the module of relations among these
+generators is finitely generated as well.
+A choice of an exact sequence as in the definition is called a
+{\it presentation} of $M$.
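+
+\noindent
+For example, if $I = (f_1, \ldots, f_m)$ is a finitely generated ideal
+of $R$, then $R/I$ is a finitely presented $R$-module: a presentation
+is given by
+$$
+R^{\oplus m} \longrightarrow R \longrightarrow R/I \longrightarrow 0
+$$
+where the first map sends $(a_1, \ldots, a_m)$ to $\sum a_i f_i$.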
+
+\begin{lemma}
+\label{lemma-lift-map}
+Let $R$ be a ring. Let $\alpha : R^{\oplus n} \to M$ and $\beta : N \to M$ be
+module maps. If $\Im(\alpha) \subset \Im(\beta)$, then there
+exists an $R$-module map $\gamma : R^{\oplus n} \to N$ such that
+$\alpha = \beta \circ \gamma$.
+\end{lemma}
+
+\begin{proof}
+Let $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)$ be the $i$th basis vector
+of $R^{\oplus n}$. Let $x_i \in N$ be an element with
+$\alpha(e_i) = \beta(x_i)$ which exists by assumption. Set
+$\gamma(a_1, \ldots, a_n) = \sum a_i x_i$. By construction
+$\alpha = \beta \circ \gamma$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extension}
+Let $R$ be a ring.
+Let
+$$
+0 \to M_1 \to M_2 \to M_3 \to 0
+$$
+be a short exact sequence of $R$-modules.
+\begin{enumerate}
+\item If $M_1$ and $M_3$ are finite $R$-modules, then $M_2$ is a finite
+$R$-module.
+\item If $M_1$ and $M_3$ are finitely presented $R$-modules, then $M_2$
+is a finitely presented $R$-module.
+\item If $M_2$ is a finite $R$-module, then $M_3$ is a finite $R$-module.
+\item If $M_2$ is a finitely presented $R$-module and $M_1$ is a
+finite $R$-module, then $M_3$ is a finitely presented $R$-module.
+\item If $M_3$ is a finitely presented $R$-module and $M_2$ is a finite
+$R$-module, then $M_1$ is a finite $R$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). If $x_1, \ldots, x_n$ are generators of $M_1$ and
+$y_1, \ldots, y_m \in M_2$ are elements whose images in $M_3$ are
+generators of $M_3$, then $x_1, \ldots, x_n, y_1, \ldots, y_m$
+generate $M_2$.
+
+\medskip\noindent
+Part (3) is immediate from the definition.
+
+\medskip\noindent
+Proof of (5). Assume $M_3$ is finitely presented and $M_2$ finite.
+Choose a presentation
+$$
+R^{\oplus m} \to R^{\oplus n} \to M_3 \to 0
+$$
+By Lemma \ref{lemma-lift-map} there exists a map
+$R^{\oplus n} \to M_2$ such that
+the solid diagram
+$$
+\xymatrix{
+& R^{\oplus m} \ar[r] \ar@{..>}[d] & R^{\oplus n} \ar[r] \ar[d] &
+M_3 \ar[r] \ar[d]^{\text{id}} & 0 \\
+0 \ar[r] & M_1 \ar[r] & M_2 \ar[r] & M_3 \ar[r] & 0
+}
+$$
+commutes. This produces the dotted arrow. By the snake lemma
+(Lemma \ref{lemma-snake}) we see that we get an isomorphism
+$$
+\Coker(R^{\oplus m} \to M_1)
+\cong
+\Coker(R^{\oplus n} \to M_2)
+$$
+In particular we conclude that $\Coker(R^{\oplus m} \to M_1)$
+is a finite $R$-module. Since $\Im(R^{\oplus m} \to M_1)$
+is finite by (3), we see that $M_1$ is finite by part (1).
+
+\medskip\noindent
+Proof of (4). Assume $M_2$ is finitely presented and $M_1$ is finite.
+Choose a presentation $R^{\oplus m} \to R^{\oplus n} \to M_2 \to 0$.
+Choose a surjection $R^{\oplus k} \to M_1$. By Lemma \ref{lemma-lift-map}
+there exists a factorization $R^{\oplus k} \to R^{\oplus n} \to M_2$
+of the composition $R^{\oplus k} \to M_1 \to M_2$. Then
+$R^{\oplus k + m} \to R^{\oplus n} \to M_3 \to 0$
+is a presentation.
+
+\medskip\noindent
+Proof of (2). Assume that $M_1$ and $M_3$ are finitely presented.
+The argument in the proof of part (1) produces a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] & R^{\oplus n} \ar[d] \ar[r] & R^{\oplus n + m} \ar[d] \ar[r] &
+R^{\oplus m} \ar[d] \ar[r] & 0 \\
+0 \ar[r] & M_1 \ar[r] & M_2 \ar[r] & M_3 \ar[r] & 0
+}
+$$
+with surjective vertical arrows. By the snake lemma we obtain a short
+exact sequence
+$$
+0 \to \Ker(R^{\oplus n} \to M_1) \to
+\Ker(R^{\oplus n + m} \to M_2) \to
+\Ker(R^{\oplus m} \to M_3) \to 0
+$$
+By part (5) we see that the outer two modules are finite. Hence the
+middle one is finite too. By (4) we see that $M_2$ is of finite presentation.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-trivial-filter-finite-module}
+\begin{slogan}
+Finite modules have filtrations such that successive quotients are
+cyclic modules.
+\end{slogan}
+Let $R$ be a ring, and let $M$ be a finite $R$-module.
+There exists a filtration by $R$-submodules
+$$
+0 = M_0 \subset M_1 \subset \ldots \subset M_n = M
+$$
+such that each quotient $M_i/M_{i-1}$ is isomorphic
+to $R/I_i$ for some ideal $I_i$ of $R$.
+\end{lemma}
+
+\begin{proof}
+By induction on the number of generators of $M$. Let
+$x_1, \ldots, x_r \in M$ be a minimal number of generators.
+Let $M' = Rx_1 \subset M$. Then $M/M'$ has $r - 1$ generators
+and the induction hypothesis applies. And clearly $M' \cong R/I_1$
+with $I_1 = \{f \in R \mid fx_1 = 0\}$.
+\end{proof}
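+
+\noindent
+For example, the finite $\mathbf{Z}$-module
+$M = \mathbf{Z}/6\mathbf{Z} \oplus \mathbf{Z}$ has the filtration
+$$
+0 \subset \mathbf{Z}/6\mathbf{Z} \oplus 0 \subset M
+$$
+with successive quotients $\mathbf{Z}/I_1$ and $\mathbf{Z}/I_2$ where
+$I_1 = (6)$ and $I_2 = (0)$. Note that the ideals $I_i$ occurring in
+such a filtration need not be prime.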
+
+\begin{lemma}
+\label{lemma-finite-over-subring}
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+If $M$ is finite as an $R$-module, then $M$ is finite as an $S$-module.
+\end{lemma}
+
+\begin{proof}
+In fact, any $R$-generating set of $M$ is also an $S$-generating set of
+$M$, since the $R$-module structure is induced by the image of $R$ in $S$.
+\end{proof}
+
+
+
+\section{Ring maps of finite type and of finite presentation}
+\label{section-finite-type}
+
+\begin{definition}
+\label{definition-finite-type}
+Let $R \to S$ be a ring map.
+\begin{enumerate}
+\item We say $R \to S$ is of {\it finite type}, or that {\it $S$ is a finite
+type $R$-algebra} if there exist an $n \in \mathbf{N}$ and a surjection
+of $R$-algebras $R[x_1, \ldots, x_n] \to S$.
+\item We say $R \to S$ is of {\it finite presentation} if there
+exist integers $n, m \in \mathbf{N}$ and polynomials
+$f_1, \ldots, f_m \in R[x_1, \ldots, x_n]$
+and an isomorphism of $R$-algebras
+$R[x_1, \ldots, x_n]/(f_1, \ldots, f_m) \cong S$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Informally, $R \to S$ is of finite presentation if and only if
+$S$ is finitely generated as an $R$-algebra
+and the ideal of relations among the generators is finitely generated.
+A choice of a surjection $R[x_1, \ldots, x_n] \to S$ as in the definition
+is sometimes called a {\it presentation} of $S$.
+
+\begin{lemma}
+\label{lemma-compose-finite-type}
+The notions finite type and finite presentation have the following
+permanence properties.
+\begin{enumerate}
+\item A composition of ring maps of finite type is of finite type.
+\item A composition of ring maps of finite presentation is of finite
+presentation.
+\item Given $R \to S' \to S$ with $R \to S$ of finite type,
+then $S' \to S$ is of finite type.
+\item Given $R \to S' \to S$, with $R \to S$ of finite presentation,
+and $R \to S'$ of finite type, then $S' \to S$ is of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We only prove the last assertion.
+Write $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$
+and $S' = R[y_1, \ldots, y_a]/I$. Say that the class
+$\bar y_i$ of $y_i$ maps
+to $h_i \bmod (f_1, \ldots, f_m)$ in $S$.
+Then it is clear that
+$S = S'[x_1, \ldots, x_n]/(f_1, \ldots, f_m,
+h_1 - \bar y_1, \ldots, h_a - \bar y_a)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-presentation-independent}
+Let $R \to S$ be a ring map of finite presentation.
+For any surjection $\alpha : R[x_1, \ldots, x_n] \to S$ the
+kernel of $\alpha$ is a finitely generated ideal in $R[x_1, \ldots, x_n]$.
+\end{lemma}
+
+\begin{proof}
+Write $S = R[y_1, \ldots, y_m]/(f_1, \ldots, f_k)$.
+Choose $g_i \in R[y_1, \ldots, y_m]$ which are lifts
+of $\alpha(x_i)$. Then we see that $S = R[x_i, y_j]/(f_l, x_i - g_i)$.
+Choose $h_j \in R[x_1, \ldots, x_n]$ such that $\alpha(h_j)$
+corresponds to $y_j \bmod (f_1, \ldots, f_k)$. Consider
+the map $\psi : R[x_i, y_j] \to R[x_i]$, $x_i \mapsto x_i$,
+$y_j \mapsto h_j$. Then the kernel of $\alpha$
+is the image of $(f_l, x_i - g_i)$ under $\psi$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finitely-presented-over-subring}
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Assume $R \to S$ is of finite type and
+$M$ is finitely presented as an $R$-module.
+Then $M$ is finitely presented as an $S$-module.
+\end{lemma}
+
+\begin{proof}
+This is similar to the proof of part (4) of
+Lemma \ref{lemma-compose-finite-type}.
+We may assume $S = R[x_1, \ldots, x_n]/J$.
+Choose $y_1, \ldots, y_m \in M$ which generate $M$ as an $R$-module
+and choose relations $\sum a_{ij} y_j = 0$, $i = 1, \ldots, t$ which
+generate the kernel of $R^{\oplus m} \to M$. For any
+$i = 1, \ldots, n$ and $j = 1, \ldots, m$ write
+$$
+x_i y_j = \sum a_{ijk} y_k
+$$
+for some $a_{ijk} \in R$. Consider the $S$-module $N$ generated by
+$y_1, \ldots, y_m$ subject to the relations
+$\sum a_{ij} y_j = 0$, $i = 1, \ldots, t$ and
+$x_i y_j = \sum a_{ijk} y_k$, $i = 1, \ldots, n$ and $j = 1, \ldots, m$.
+Then $N$ has a presentation
+$$
+S^{\oplus nm + t} \longrightarrow S^{\oplus m} \longrightarrow N
+\longrightarrow 0
+$$
+By construction there is a surjective map $\varphi : N \to M$.
+To finish the proof we show $\varphi$ is injective.
+Suppose $z = \sum b_j y_j \in N$ for some $b_j \in S$.
+We may think of $b_j$ as a polynomial in $x_1, \ldots, x_n$
+with coefficients in $R$.
+By applying the relations of the form $x_i y_j = \sum a_{ijk} y_k$
+we can inductively lower the degree of the polynomials.
+Hence we see that $z = \sum c_j y_j$ for some $c_j \in R$.
+Hence if $\varphi(z) = 0$ then the vector $(c_1, \ldots, c_m)$
+is an $R$-linear combination of the vectors $(a_{i1}, \ldots, a_{im})$
+and we conclude that $z = 0$ as desired.
+\end{proof}
+
+
+
+
+
+\section{Finite ring maps}
+\label{section-finite}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-finite-ring-map}
+Let $\varphi : R \to S$ be a ring map. We say $\varphi : R \to S$ is
+{\it finite} if $S$ is finite as an $R$-module.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-finite-module-over-finite-extension}
+Let $R \to S$ be a finite ring map.
+Let $M$ be an $S$-module.
+Then $M$ is finite as an $R$-module if and only if $M$ is finite
+as an $S$-module.
+\end{lemma}
+
+\begin{proof}
+One of the implications follows from
+Lemma \ref{lemma-finite-over-subring}.
+To see the other assume that $M$ is finite as an $S$-module.
+Pick $x_1, \ldots, x_n \in S$ which generate $S$ as an $R$-module.
+Pick $y_1, \ldots, y_m \in M$ which generate $M$ as an $S$-module.
+Then $x_i y_j$ generate $M$ as an $R$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-transitive}
+Suppose that $R \to S$ and $S \to T$ are finite ring maps.
+Then $R \to T$ is finite.
+\end{lemma}
+
+\begin{proof}
+If $t_i$ generate $T$ as an $S$-module and $s_j$ generate $S$ as an
+$R$-module, then $t_i s_j$ generate $T$ as an $R$-module.
+(Also follows from
+Lemma \ref{lemma-finite-module-over-finite-extension}.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-finite-type}
+Let $\varphi : R \to S$ be a ring map.
+\begin{enumerate}
+\item If $\varphi$ is finite, then $\varphi$ is of finite type.
+\item If $S$ is of finite presentation as an $R$-module, then
+$\varphi$ is of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For (1) if $x_1, \ldots, x_n \in S$ generate $S$ as an $R$-module,
+then $x_1, \ldots, x_n$ generate $S$ as an $R$-algebra. For (2),
+suppose that $\sum r_j^ix_i = 0$, $j = 1, \ldots, m$ is a set
+of generators of the relations among the $x_i$ when viewed as
+$R$-module generators of $S$. Furthermore, write
+$1 = \sum r_ix_i$ for some $r_i \in R$ and
+$x_ix_j = \sum r_{ij}^k x_k$ for some $r_{ij}^k \in R$.
+Then
+$$
+S = R[t_1, \ldots, t_n]/
+(\sum r_j^it_i,\ 1 - \sum r_it_i,\ t_it_j - \sum r_{ij}^k t_k)
+$$
+as an $R$-algebra which proves (2).
+\end{proof}
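\noindent
As an illustration of the presentation constructed in the proof, take
$R = \mathbf{Z}$ and $S = \mathbf{Z}[i]$ with module generators
$x_1 = 1$ and $x_2 = i$. There are no relations among $x_1, x_2$, we have
$1 = x_1$, and the products are $x_1x_1 = x_1$, $x_1x_2 = x_2$,
$x_2x_2 = -x_1$. The construction gives
$$
\mathbf{Z}[i] =
\mathbf{Z}[t_1, t_2]/(1 - t_1,\ t_1t_1 - t_1,\ t_1t_2 - t_2,\ t_2t_2 + t_1)
\cong \mathbf{Z}[t_2]/(t_2^2 + 1).
$$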
+
+\noindent
+For more information on finite ring maps, please see
+Section \ref{section-finite-ring-extensions}.
+
+
+\section{Colimits}
+\label{section-colimits}
+
+\noindent
+Some of the material in this section overlaps with the general
+discussion on colimits in
+Categories, Sections \ref{categories-section-limits} --
+\ref{categories-section-posets-limits}.
+The notion of a preordered set is defined in
+Categories, Definition \ref{categories-definition-directed-set}.
+It is a slightly weaker notion than a partially ordered set.
+
+\begin{definition}
+\label{definition-directed-system}
+Let $(I, \leq)$ be a preordered set.
+A {\it system $(M_i, \mu_{ij})$ of $R$-modules over $I$}
+consists of a family of $R$-modules $\{M_i\}_{i\in I}$ indexed
+by $I$ and a family of $R$-module maps $\{\mu_{ij} : M_i \to M_j\}_{i \leq j}$
+such that for all $i \leq j \leq k$
+$$
\mu_{ii} = \text{id}_{M_i}
\quad\text{and}\quad
\mu_{ik} = \mu_{jk}\circ \mu_{ij}.
+$$
+We say $(M_i, \mu_{ij})$ is a {\it directed system} if $I$ is a directed set.
+\end{definition}
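\noindent
For example, take $I = \mathbf{N}$ with its usual ordering, let
$M_n = \mathbf{Z}$ for all $n$, and let $\mu_{nm} : \mathbf{Z} \to \mathbf{Z}$
for $n \leq m$ be multiplication by $2^{m - n}$. Then
$\mu_{nn} = \text{id}$ and $\mu_{nk} = \mu_{mk} \circ \mu_{nm}$, so this
is a directed system of $\mathbf{Z}$-modules; its colimit (see below) can
be identified with $\mathbf{Z}[1/2]$ via $M_n \ni x \mapsto x/2^n$.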
+
+\noindent
+This is the same as the notion defined in Categories,
+Definition \ref{categories-definition-system-over-poset}
+and Section \ref{categories-section-posets-limits}.
+We refer to Categories, Definition \ref{categories-definition-colimit}
+for the definition of a colimit of a diagram/system in any
+category.
+
+\begin{lemma}
+\label{lemma-colimit}
+Let $(M_i, \mu_{ij})$ be a system of $R$-modules over the preordered set $I$.
+The colimit of the system $(M_i, \mu_{ij})$ is the quotient $R$-module
+$(\bigoplus_{i\in I} M_i) /Q$ where $Q$ is the
+$R$-submodule generated by all elements
+$$
+\iota_i(x_i) - \iota_j(\mu_{ij}(x_i))
+$$
+where $\iota_i : M_i \to \bigoplus_{i\in I} M_i$
+is the natural inclusion. We denote the colimit
+$M = \colim_i M_i$. We denote
+$\pi : \bigoplus_{i\in I} M_i \to M$ the
+projection map and
+$\phi_i = \pi \circ \iota_i : M_i \to M$.
+\end{lemma}
+
+\begin{proof}
+This lemma is a special case of
+Categories, Lemma \ref{categories-lemma-colimits-coproducts-coequalizers}
+but we will also prove it directly in this case.
+Namely, note that $\phi_i = \phi_j\circ \mu_{ij}$ in the above
+construction. To show the pair $(M, \phi_i)$ is the colimit we have
+to show it satisfies the universal property: for any other such pair
+$(Y, \psi_i)$ with $\psi_i : M_i \to
+Y$, $\psi_i = \psi_j\circ \mu_{ij}$, there is a unique $R$-module
+homomorphism $g : M \to Y$ such that the
+following diagram commutes:
+$$
+\xymatrix{
+M_i \ar[rr]^{\mu_{ij}} \ar[dr]^{\phi_i} \ar[ddr]_{\psi_i} & &
+M_j\ar[dl]_{\phi_j} \ar[ddl]^{\psi_j} \\
+& M \ar[d]^{g}\\
+& Y
+}
+$$
+And this is clear because we can define $g$ by taking the
+map $\psi_i$ on the summand $M_i$ in the direct sum
+$\bigoplus M_i$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-directed-colimit}
+Let $(M_i, \mu_{ij})$ be a system of $R$-modules over the
+preordered set $I$. Assume that $I$ is directed.
+The colimit of the system $(M_i, \mu_{ij})$ is canonically
+isomorphic to the module $M$ defined as follows:
+\begin{enumerate}
+\item as a set let
+$$
+M = \left(\coprod\nolimits_{i \in I} M_i\right)/\sim
+$$
+where for $m \in M_i$ and $m' \in M_{i'}$ we have
+$$
+m \sim m' \Leftrightarrow
+\mu_{ij}(m) = \mu_{i'j}(m')\text{ for some }j \geq i, i'
+$$
+\item as an abelian group for $m \in M_i$ and $m' \in M_{i'}$
+we define the sum of the classes of $m$ and $m'$ in $M$
+to be the class of $\mu_{ij}(m) + \mu_{i'j}(m')$ where
+$j \in I$ is any index with $i \leq j$ and $i' \leq j$, and
+\item as an $R$-module define for $m \in M_i$ and $x \in R$
+the product of $x$ and the class of $m$ in $M$ to be the
+class of $xm$ in $M$.
+\end{enumerate}
+The canonical maps $\phi_i : M_i \to M$ are induced by the canonical
+maps $M_i \to \coprod_{i \in I} M_i$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Compare with
+Categories, Section \ref{categories-section-directed-colimits}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-zero-directed-limit}
+Let $(M_i, \mu_{ij})$ be a directed system.
+Let $M = \colim M_i$ with $\mu_i : M_i \to M$.
+Then, $\mu_i(x_i) = 0$ for $x_i \in M_i$ if and only if
+there exists $j \geq i$ such that $\mu_{ij}(x_i) = 0$.
+\end{lemma}
+
+\begin{proof}
+This is clear from the description of the directed colimit
+in Lemma \ref{lemma-directed-colimit}.
+\end{proof}
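\noindent
As an example, fix $f \in R$ and consider the system
$M \xrightarrow{f} M \xrightarrow{f} M \xrightarrow{f} \cdots$
over $(\mathbf{N}, \leq)$, where each transition map is multiplication
by $f$. The lemma says that an element $x$ of the first copy of $M$
maps to zero in the colimit if and only if $f^n x = 0$ for some
$n \geq 0$, which is exactly the condition for $x/1$ to be zero in the
localization $M_f$ of Section \ref{section-localization}.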
+
+\begin{example}
+\label{example-zero-colimit-different}
+Consider the partially ordered set $I = \{a, b, c\}$ with
+$a < b$ and $a < c$ and no other strict inequalities.
+A system $(M_a, M_b, M_c, \mu_{ab}, \mu_{ac})$
+over $I$ consists of three $R$-modules $M_a, M_b, M_c$
+and two $R$-module homomorphisms $\mu_{ab} : M_a \to M_b$ and
+$\mu_{ac} : M_a \to M_c$.
+The colimit of the system is just
+$$
+M := \colim_{i \in I} M_i = \Coker(M_a \to M_b \oplus M_c)
+$$
+where the map is $\mu_{ab} \oplus -\mu_{ac}$. Thus the kernel of the
+canonical map $M_a \to M$ is $\Ker(\mu_{ab}) + \Ker(\mu_{ac})$.
+And the kernel of the canonical map $M_b \to M$ is the image of
+$\Ker(\mu_{ac})$ under the map $\mu_{ab}$. Hence clearly
+the result of Lemma \ref{lemma-zero-directed-limit} is false for
+general systems.
+\end{example}
+
+\begin{definition}
+\label{definition-homomorphism-directed-systems}
+Let $(M_i, \mu_{ij})$, $(N_i, \nu_{ij})$ be
+systems of $R$-modules over the same preordered set $I$.
+A {\it homomorphism of systems} $\Phi$ from $(M_i, \mu_{ij})$ to
+$(N_i, \nu_{ij})$ is by definition a family of $R$-module homomorphisms
+$\phi_i : M_i \to N_i$
+such that $\phi_j \circ \mu_{ij} = \nu_{ij} \circ \phi_i$
+for all $i \leq j$.
+\end{definition}
+
+\noindent
+This is the same notion as a transformation of functors
+between the associated diagrams $M : I \to \text{Mod}_R$
+and $N : I \to \text{Mod}_R$, in the language of
+categories.
+The following lemma is a special case of
+Categories, Lemma \ref{categories-lemma-functorial-colimit}.
+
+\begin{lemma}
+\label{lemma-homomorphism-limit}
+Let $(M_i, \mu_{ij})$, $(N_i, \nu_{ij})$ be
+systems of $R$-modules over the same preordered set.
+A morphism of systems $\Phi = (\phi_i)$ from $(M_i, \mu_{ij})$ to
+$(N_i, \nu_{ij})$ induces a unique homomorphism
+$$
+\colim \phi_i : \colim M_i \longrightarrow \colim N_i
+$$
+such that
+$$
+\xymatrix{
+M_i \ar[r] \ar[d]_{\phi_i} & \colim M_i \ar[d]^{\colim \phi_i} \\
+N_i \ar[r] & \colim N_i
+}
+$$
+commutes for all $i \in I$.
+\end{lemma}
+
+\begin{proof}
+Write $M = \colim M_i$ and $N = \colim N_i$ and $\phi = \colim \phi_i$
+(as yet to be constructed). We will use the explicit description of $M$
+and $N$ in Lemma \ref{lemma-colimit} without further mention.
+The condition of the lemma is equivalent to the condition that
+$$
+\xymatrix{
+\bigoplus_{i\in I} M_i \ar[r] \ar[d]_{\bigoplus\phi_i} & M \ar[d]^\phi \\
+\bigoplus_{i\in I} N_i \ar[r] & N
+}
+$$
+commutes. Hence it is clear that if $\phi$ exists, then it is unique.
+To see that $\phi$ exists, it suffices to show that the kernel of the
+upper horizontal arrow is mapped by $\bigoplus \phi_i$ to the kernel
+of the lower horizontal arrow. To see this, let $j \leq k$ and
+$x_j \in M_j$. Then
+$$
+(\bigoplus \phi_i)(x_j - \mu_{jk}(x_j))
+=
+\phi_j(x_j) - \phi_k(\mu_{jk}(x_j))
+=
+\phi_j(x_j) - \nu_{jk}(\phi_j(x_j))
+$$
+which is in the kernel of the lower horizontal arrow as required.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-directed-colimit-exact}
+\begin{slogan}
+Filtered colimits are exact. Directed colimits are exact.
+\end{slogan}
+Let $I$ be a directed set.
+Let $(L_i, \lambda_{ij})$, $(M_i, \mu_{ij})$, and
+$(N_i, \nu_{ij})$ be systems of $R$-modules over $I$.
+Let $\varphi_i : L_i \to M_i$ and $\psi_i : M_i \to N_i$ be
+morphisms of systems over $I$. Assume that for all $i \in I$ the
+sequence of $R$-modules
+$$
+\xymatrix{
+L_i \ar[r]^{\varphi_i} &
+M_i \ar[r]^{\psi_i} &
+N_i
+}
+$$
+is a complex with homology $H_i$.
+Then the $R$-modules $H_i$ form a system over $I$,
+the sequence of $R$-modules
+$$
+\xymatrix{
+\colim_i L_i \ar[r]^\varphi &
+\colim_i M_i \ar[r]^\psi &
+\colim_i N_i
+}
+$$
+is a complex as well, and denoting $H$ its homology we have
+$$
+H = \colim_i H_i.
+$$
+\end{lemma}
+
+\begin{proof}
+It is clear that
+$
+\xymatrix{
+\colim_i L_i \ar[r]^\varphi &
+\colim_i M_i \ar[r]^\psi &
+\colim_i N_i
+}
+$
+is a complex. For each $i \in I$, there is a canonical
+$R$-module morphism $H_i \to H$ (sending each
+$[m] \in H_i = \Ker(\psi_i) / \Im(\varphi_i)$ to the
+residue class in $H = \Ker(\psi) / \Im(\varphi)$
+of the image of $m$ in $\colim_i M_i$). These give rise
+to a morphism $\colim_i H_i \to H$. It remains to
+show that this morphism is surjective and injective.
+
+\medskip\noindent
+We are going to repeatedly use the description of colimits over $I$
+as in Lemma \ref{lemma-directed-colimit} without further mention.
+Let $h \in H$.
+Since $H = \Ker(\psi)/\Im(\varphi)$ we see that
+$h$ is the class mod $\Im(\varphi)$ of an element $[m]$
+in $\Ker(\psi) \subset \colim_i M_i$. Choose an
+$i$ such that $[m]$ comes from an element $m \in M_i$. Choose
+a $j \geq i$ such that $\nu_{ij}(\psi_i(m)) = 0$ which is possible
+since $[m] \in \Ker(\psi)$. After replacing $i$ by $j$ and
+$m$ by $\mu_{ij}(m)$ we see that we may assume $m \in \Ker(\psi_i)$.
+This shows that the map $\colim_i H_i \to H$ is surjective.
+
+\medskip\noindent
+Suppose that $h_i \in H_i$ has image zero on $H$. Since
+$H_i = \Ker(\psi_i)/\Im(\varphi_i)$ we may represent
+$h_i$ by an element $m \in \Ker(\psi_i) \subset M_i$.
+The assumption on the vanishing of $h_i$ in $H$ means that
+the class of $m$ in $\colim_i M_i$ lies in the image
+of $\varphi$. Hence there exists a $j \geq i$ and an $l \in L_j$
+such that $\varphi_j(l) = \mu_{ij}(m)$. Clearly this shows that
+the image of $h_i$ in $H_j$ is zero. This proves the
+injectivity of $\colim_i H_i \to H$.
+\end{proof}
+
+\begin{example}
+\label{example-colimit-not-exact}
+Taking colimits is not exact in general.
+Consider the partially ordered set $I = \{a, b, c\}$ with
+$a < b$ and $a < c$ and no other strict inequalities,
+as in Example \ref{example-zero-colimit-different}.
+Consider the map of systems
+$(0, \mathbf{Z}, \mathbf{Z}, 0, 0) \to
+(\mathbf{Z}, \mathbf{Z}, \mathbf{Z}, 1, 1)$.
+From the description of the colimit in
+Example \ref{example-zero-colimit-different}
+we see that the associated map of colimits is not injective,
+even though the map of systems is injective on each object.
+Hence the result of Lemma \ref{lemma-directed-colimit-exact}
+is false for general systems.
+\end{example}
+
+\begin{lemma}
+\label{lemma-almost-directed-colimit-exact}
+Let $\mathcal{I}$ be an index category satisfying the assumptions of
+Categories, Lemma \ref{categories-lemma-split-into-directed}.
+Then taking colimits of diagrams of abelian groups over $\mathcal{I}$
+is exact (i.e., the analogue of
+Lemma \ref{lemma-directed-colimit-exact}
+holds in this situation).
+\end{lemma}
+
+\begin{proof}
+By
+Categories, Lemma \ref{categories-lemma-split-into-directed}
+we may write $\mathcal{I} = \coprod_{j \in J} \mathcal{I}_j$ with each
+$\mathcal{I}_j$ a filtered category, and $J$ possibly empty. By
+Categories, Lemma \ref{categories-lemma-directed-category-system}
+taking colimits over the index categories $\mathcal{I}_j$ is
+the same as taking the colimit over some directed set. Hence
+Lemma \ref{lemma-directed-colimit-exact}
+applies to these colimits. This reduces the problem to showing that
+coproducts in the category of $R$-modules over the set $J$ are exact.
In other words, given exact sequences
$L_j \to M_j \to N_j$ of $R$-modules we have to show that
+$$
+\bigoplus\nolimits_{j \in J} L_j
+\longrightarrow
+\bigoplus\nolimits_{j \in J} M_j
+\longrightarrow
+\bigoplus\nolimits_{j \in J} N_j
+$$
+is exact. This can be verified by hand, and holds even if $J$ is empty.
+\end{proof}
+
+
+
+
+
+
+\section{Localization}
+\label{section-localization}
+
+\begin{definition}
+\label{definition-multiplicative-subset}
+Let $R$ be a ring, $S$ a subset of $R$.
+We say $S$ is a {\it multiplicative subset of $R$} if
+$1\in S$ and $S$ is closed
+under multiplication, i.e., $s, s' \in S \Rightarrow ss' \in S$.
+\end{definition}
+
+\noindent
+Given a ring $A$ and a multiplicative subset $S$, we
+define a relation on $A \times S$ as follows:
+$$
+(x, s) \sim (y, t)
+\Leftrightarrow
+\exists u \in S \text{ such that } (xt-ys)u = 0
+$$
+It is easily checked that this is an equivalence relation.
+Let $x/s$ (or $\frac{x}{s}$) be the equivalence class of $(x, s)$ and
+$S^{-1}A$ be the set of all equivalence classes. Define addition
+and multiplication in $S^{-1}A$ as follows:
+$$
+x/s + y/t = (xt + ys)/st, \quad
+x/s \cdot y/t = xy/st
+$$
+One can check that $S^{-1}A$ becomes a ring under these operations.
+
+\begin{definition}
+\label{definition-localization}
+This ring is called the {\it localization of $A$ with respect to $S$}.
+\end{definition}
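\noindent
For example, if $A = \mathbf{Z}$ and $S = \{1, 2, 4, 8, \ldots\}$, then
$S^{-1}A$ is the subring $\mathbf{Z}[1/2] \subset \mathbf{Q}$ of fractions
with denominator a power of $2$. The element $u$ in the equivalence
relation matters when $A$ has zerodivisors: for
$A = \mathbf{Z}/6\mathbf{Z}$ and $S = \{1, 2, 4\}$ we have $3/1 = 0/1$
in $S^{-1}A$, since $u = 2$ satisfies $(3 \cdot 1 - 0 \cdot 1) \cdot 2 = 0$;
in fact $S^{-1}A \cong \mathbf{Z}/3\mathbf{Z}$.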
+
+\noindent
+We have a natural ring map from $A$ to its localization $S^{-1}A$,
+$$
+A \longrightarrow S^{-1}A, \quad x \longmapsto x/1
+$$
which is sometimes called the {\it localization map}. In general the
localization map is not injective; it is injective if $S$ contains no
zerodivisors. Indeed, if $x/1 = 0$, then there is a $u\in S$ such that
$xu = 0$ in $A$ and hence $x = 0$ since there are no zerodivisors in $S$.
+The localization of a ring has the following universal property.
+
+\begin{proposition}
+\label{proposition-universal-property-localization}
+Let $f : A \to B$ be a ring map that sends every element in $S$ to a unit
+of $B$. Then there is a unique homomorphism $g : S^{-1}A \to B$ such
+that the following diagram commutes.
+$$
+\xymatrix{
+A \ar[rr]^{f} \ar[dr] & & B \\
+& S^{-1}A \ar[ur]_g
+}
+$$
+\end{proposition}
+
+\begin{proof}
+Existence. We define a map $g$ as follows. For $x/s\in S^{-1}A$, let
+$g(x/s) = f(x)f(s)^{-1}\in B$. It is easily checked from the definition
+that this is a well-defined ring map. And it is also clear that
+this makes the diagram commutative.
+
+\medskip\noindent
Uniqueness. We now show that if $g' : S^{-1}A \to B$
satisfies $g'(x/1) = f(x)$ for all $x \in A$, then $g = g'$. First,
$f(s) = g'(s/1)$ for $s \in S$ by assumption. Then
$g'(1/s)f(s) = g'(1/s)g'(s/1) = g'(s/s) = 1$
in $B$, which implies that $g'(1/s) = f(s)^{-1}$ and hence
$g'(x/s) = g'(x/1)g'(1/s) = f(x)f(s)^{-1} = g(x/s)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization-zero}
+The localization $S^{-1}A$ is the zero ring if and only if $0\in S$.
+\end{lemma}
+
+\begin{proof}
If $0 \in S$, then $(a, s) \sim (0, 1)$ for every pair $(a, s)$, since
$u = 0$ works in the definition of the equivalence relation.
If $0\not \in S$, then clearly $1/1 \neq 0/1$ in $S^{-1}A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization-and-modules}
+Let $R$ be a ring. Let $S \subset R$ be a multiplicative subset.
+The category of $S^{-1}R$-modules is equivalent to the category
+of $R$-modules $N$ with the property that every $s \in S$ acts as
+an automorphism on $N$.
+\end{lemma}
+
+\begin{proof}
+The functor which defines the equivalence associates to an $S^{-1}R$-module
+$M$ the same module but now viewed as an $R$-module via the localization
+map $R \to S^{-1}R$. Conversely, if $N$ is an $R$-module, such that every
+$s \in S$ acts via an automorphism $s_N$, then we can think of $N$ as an
+$S^{-1}R$-module by letting $x/s$ act via $x_N \circ s_N^{-1}$.
+We omit the verification that these two functors are quasi-inverse to
+each other.
+\end{proof}
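\noindent
For example, with $R = \mathbf{Z}$ and $S = \{1, 2, 4, \ldots\}$ the lemma
says that a $\mathbf{Z}[1/2]$-module is the same thing as an abelian group
on which multiplication by $2$ is bijective. Thus
$\mathbf{Z}/3\mathbf{Z}$ is naturally a $\mathbf{Z}[1/2]$-module
(multiplication by $2$ has inverse multiplication by $2$, as
$4 \equiv 1 \bmod 3$), whereas $\mathbf{Z}/2\mathbf{Z}$ is not.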
+
+\noindent
+The notion of localization of a ring can be generalized to the
+localization of a module. Let $A$ be a ring, $S$ a multiplicative
+subset of $A$ and $M$ an $A$-module. We define a relation on
+$M \times S$ as follows
+$$
+(m, s) \sim (n, t)
+\Leftrightarrow
+\exists u\in S \text{ such that } (mt-ns)u = 0
+$$
This is clearly an equivalence relation. Denote by $m/s$ (or
$\frac{m}{s}$) the equivalence class of $(m, s)$ and by $S^{-1}M$
the set of all equivalence classes. Define addition and scalar
multiplication as follows
+$$
m/s + n/t = (mt + ns)/st,\quad
x/t\cdot m/s = xm/ts\ \text{ for }x \in A
+$$
+It is clear that this makes $S^{-1}M$ an $S^{-1}A$-module.
+
+\begin{definition}
+\label{definition-localization-module}
+The $S^{-1}A$-module $S^{-1}M$ is called the {\it localization} of $M$ at $S$.
+\end{definition}
+
+\noindent
+Note that there is an $A$-module map $M \to S^{-1}M$, $m \mapsto m/1$
+which is sometimes
+called the {\it localization map}. It satisfies the following universal
+property.
+
+\begin{lemma}
+\label{lemma-universal-property-localization-module}
+Let $R$ be a ring. Let $S \subset R$ a multiplicative subset. Let $M$, $N$
+be $R$-modules. Assume all the elements of $S$ act as automorphisms on $N$.
+Then the canonical map
+$$
+\Hom_R(S^{-1}M, N) \longrightarrow \Hom_R(M, N)
+$$
+induced by the localization map, is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+It is clear that the map is well-defined and $R$-linear.
+Injectivity: Let $\alpha \in \Hom_R(S^{-1}M, N)$ and take an arbitrary
+element $m/s \in S^{-1}M$. Then, since $s \cdot \alpha(m/s) = \alpha(m/1)$,
+we have $ \alpha(m/s) =s^{-1}(\alpha (m/1))$, so $\alpha$ is completely
+determined by what it does on the image of $M$ in $S^{-1}M$.
Surjectivity: Let $\beta : M \rightarrow N$ be a given $R$-linear map.
We need to show that it can be ``extended'' to $S^{-1}M$. Define a map of
+sets
+$$
+M \times S \rightarrow N,\quad
+(m,s) \mapsto s^{-1}\beta(m)
+$$
+Clearly, this map respects the equivalence relation from above, so it
+descends to a well-defined map $\alpha : S^{-1}M \rightarrow N$.
+It remains to show that this map is $R$-linear, so take
+$r, r' \in R$ as well as $s, s' \in S$ and
+$m, m' \in M$. Then
+\begin{align*}
+\alpha(r \cdot m/s + r' \cdot m' /s')
+& =
+\alpha((r \cdot s' \cdot m + r' \cdot s \cdot m') /(ss')) \\
+& =
+(ss')^{-1}\beta(r \cdot s' \cdot m + r' \cdot s \cdot m') \\
+& =
+(ss')^{-1} (r \cdot s' \beta (m) + r' \cdot s \beta (m')) \\
+& =
+r \alpha (m/s) + r' \alpha (m' /s')
+\end{align*}
+and we win.
+\end{proof}
+
+\begin{example}
+\label{example-localize-at-prime}
+Let $A$ be a ring and let $M$ be an $A$-module.
+Here are some important examples of localizations.
+\begin{enumerate}
+\item Given $\mathfrak p$ a prime ideal of $A$ consider
+$S = A\setminus\mathfrak p$. It is
+immediately checked that $S$ is a multiplicative set. In this case
+we denote $A_\mathfrak p$ and $M_\mathfrak p$ the localization of
+$A$ and $M$ with respect to $S$ respectively. These are
+called the {\it localization of $A$, resp.\ $M$ at $\mathfrak p$}.
+\item Let $f\in A$. Consider $S = \{1, f, f^2, \ldots\}$.
+This is clearly a multiplicative subset of $A$.
+In this case we denote $A_f$
+(resp. $M_f$) the localization $S^{-1}A$ (resp. $S^{-1}M$).
+This is called the {\it localization of $A$, resp.\ $M$ with
+respect to $f$}.
+Note that $A_f = 0$ if and only if $f$ is nilpotent in $A$.
+\item Let $S = \{f \in A \mid f \text{ is not a zerodivisor in }A\}$.
+This is a multiplicative subset of $A$. In this case the
+ring $Q(A) = S^{-1}A$ is called either the
+{\it total quotient ring}, or the {\it total ring of fractions}
+of $A$.
+\item If $A$ is a domain, then the total quotient ring $Q(A)$ is
+the field of fractions of $A$. Please see
+Fields, Example \ref{fields-example-quotient-field}.
+\end{enumerate}
+\end{example}
+
+\begin{lemma}
+\label{lemma-localization-colimit}
+Let $R$ be a ring.
+Let $S \subset R$ be a multiplicative subset.
+Let $M$ be an $R$-module.
+Then
+$$
+S^{-1}M = \colim_{f \in S} M_f
+$$
+where the preorder on $S$ is given by
+$f \geq f' \Leftrightarrow f = f'f''$ for some $f'' \in R$
+in which case the map $M_{f'} \to M_f$ is given
+by $m/(f')^e \mapsto m(f'')^e/f^e$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Use the universal property of
+Lemma \ref{lemma-universal-property-localization-module}.
+\end{proof}
+
+\noindent
+In the following paragraph,
+let $A$ denote a ring,
+and $M, N$ denote modules over $A$.
+
+\medskip\noindent
+If $S$ and $S'$ are multiplicative sets of $A$, then it is
+clear that
+$$
+SS' = \{ss' : s\in S, \ s'\in S'\}
+$$
+is also a multiplicative set of $A$. Then the following holds.
+
+\begin{proposition}
+\label{proposition-localize-twice}
Let $\overline{S}$ be the image of $S$ in $S'^{-1}A$. Then
$(SS')^{-1}A$ is isomorphic to $\overline{S}^{-1}(S'^{-1}A)$.
+\end{proposition}
+
+\begin{proof}
The map sending $x\in A$ to $x/1\in (SS')^{-1}A$ sends every element of
$S'$ to a unit, hence induces a map sending $x/s'\in S'^{-1}A$ to
$x/s' \in (SS')^{-1}A$ by the universal property. The elements of
$\overline{S}$ map to invertible elements of $(SS')^{-1}A$. By the
universal property again we get a map
$f : \overline{S}^{-1}(S'^{-1}A) \to (SS')^{-1}A$ which maps
$(x/s')/(s/1)$ to $x/ss'$.
+
+\medskip\noindent
+On the other hand, the map from $A$ to $\overline{S}^{-1}(S'^{-1}A)$
+sending $x\in A$ to $(x/1)/(1/1)$ also induces a map
+$g : (SS')^{-1}A \to \overline{S}^{-1}(S'^{-1}A)$ which sends $x/ss'$
+to $(x/s')/(s/1)$, by the universal property again. It is
+immediately checked that $f$ and $g$ are inverse to each other,
+hence they are both isomorphisms.
+\end{proof}
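\noindent
A concrete instance: take $A = \mathbf{Z}$,
$S = \{2^a : a \geq 0\}$ and $S' = \{3^b : b \geq 0\}$. Then
$SS' = \{2^a 3^b\}$ and $(SS')^{-1}A = \mathbf{Z}[1/6]$, while
$\overline{S}^{-1}(S'^{-1}A)$ is the localization of $\mathbf{Z}[1/3]$
at the powers of $2$. The proposition says that inverting $2$ and $3$
one at a time gives the same ring as inverting them simultaneously.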
+
+\noindent
+For the module $M$ we have
+
+\begin{proposition}
+\label{proposition-localize-twice-module}
View $S'^{-1}M$ as an $A$-module. Then $S^{-1}(S'^{-1}M)$ is
isomorphic to $(SS')^{-1}M$.
+\end{proposition}
+
+\begin{proof}
Note that given an $A$-module $M$, we have not proved any
+universal property for $S^{-1}M$. Hence we cannot reason
+as in the preceding proof; we have to construct the isomorphism explicitly.
+
+\medskip\noindent
+We define the maps as follows
+\begin{align*}
+& f : S^{-1}(S'^{-1}M) \longrightarrow (SS')^{-1}M, \quad \frac{x/s'}{s}\mapsto
+x/ss'\\
+& g : (SS')^{-1}M \longrightarrow S^{-1}(S'^{-1}M), \quad x/t\mapsto
+\frac{x/s'}{s}\ \text{for some }s\in S, s'\in S', \text{ and }
+t = ss'
+\end{align*}
We have to check that these homomorphisms are well-defined, that is,
independent of the choice of the fraction. This is easily checked and it
is also straightforward to show that they are inverse to each other.
+\end{proof}
+
+\noindent
If $u : M \to N$ is an $A$-module homomorphism, then localization
induces a well-defined $S^{-1}A$-module homomorphism
$S^{-1}u : S^{-1}M \to S^{-1}N$ which sends $x/s$ to $u(x)/s$.
It is immediately checked that
+this construction is functorial, so that $S^{-1}$
+is actually a functor from the category of $A$-modules to the
+category of $S^{-1}A$-modules. Moreover this functor is exact,
+as we show in the following proposition.
+
+\begin{proposition}
+\label{proposition-localization-exact}
+\begin{slogan}
+Localization is exact.
+\end{slogan}
+Let $L\xrightarrow{u} M\xrightarrow{v} N$ be an exact sequence
+of $R$-modules. Then
+$S^{-1}L \to S^{-1}M \to S^{-1}N$ is also exact.
+\end{proposition}
+
+\begin{proof}
+First it is clear that $S^{-1}L \to S^{-1}M \to S^{-1}N$ is a complex
+since localization is a functor. Next suppose that $x/s$ maps to zero
in $S^{-1}N$ for some $x/s \in S^{-1}M$. Then by definition there is a
$t\in S$ such that $v(xt) = v(x)t = 0$ in $N$, which means
$xt \in \Ker(v)$. By the exactness of $L \to M \to N$ we have
+$xt = u(y)$ for some $y$ in $L$. Then $x/s$ is the image of $y/st$.
+This proves the exactness.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-quotient-modules}
+Localization respects quotients, i.e. if $N$ is a submodule of
+$M$, then $S^{-1}(M/N)\simeq (S^{-1}M)/(S^{-1}N)$.
+\end{lemma}
+
+\begin{proof}
+From the exact sequence
+$$
+0 \longrightarrow N \longrightarrow M \longrightarrow M/N \longrightarrow 0
+$$
we obtain, by the exactness of localization
(Proposition \ref{proposition-localization-exact}), the exact sequence
+$$
+0 \longrightarrow S^{-1}N \longrightarrow S^{-1}M
+\longrightarrow S^{-1}(M/N) \longrightarrow 0
+$$
The lemma follows.
+\end{proof}
+
+\noindent
If, in the preceding lemma, we take $N = I$ and $M = A$ for an ideal $I$ of
$A$, we see that $S^{-1}A/S^{-1}I \simeq S^{-1}(A/I)$ as $A$-modules. The next
+proposition shows that they are isomorphic as rings.
+
+\begin{proposition}
+\label{proposition-localize-quotient}
+Let $I$ be an ideal of $A$, $S$ a multiplicative set of $A$. Then
+$S^{-1}I$ is an ideal of $S^{-1}A$ and $\overline{S}^{-1}(A/I)$ is
+isomorphic to $S^{-1}A/S^{-1}I$, where $\overline{S}$ is
+the image of $S$ in $A/I$.
+\end{proposition}
+
+\begin{proof}
+The fact that $S^{-1}I$ is an ideal is clear since $I$ itself is an
+ideal. Define
+$$
+f : S^{-1}A\longrightarrow \overline{S}^{-1}(A/I), \quad x/s\mapsto
+\overline{x}/\overline{s}
+$$
+where $\overline{x}$ and $\overline{s}$ are the images of $x$ and
+$s$ in $A/I$. We shall keep similar notations in this proof.
+This map is well-defined by the universal property of
+$S^{-1}A$, and $S^{-1}I$ is contained in the kernel of it,
+therefore it induces a map
+$$
+\overline{f} : S^{-1}A/S^{-1}I \longrightarrow \overline{S}^{-1}(A/I), \quad
+\overline{x/s}\mapsto \overline{x}/\overline{s}
+$$
+
+\medskip\noindent
+On the other hand, the map $A \to S^{-1}A/S^{-1}I$ sending $x$ to
+$\overline{x/1}$ induces a map $A/I \to S^{-1}A/S^{-1}I$ sending
+$\overline{x}$ to $\overline{x/1}$. The image of $\overline{S}$ is
+invertible in $S^{-1}A/S^{-1}I$, thus induces a map
+$$
+g : \overline{S}^{-1}(A/I) \longrightarrow S^{-1}A/S^{-1}I, \quad
+\frac{\overline{x}}{\overline{s}}\mapsto \overline{x/s}
+$$
+by the universal property. It is then clear that $\overline{f}$ and $g$
+are inverse to each other, hence are both isomorphisms.
+\end{proof}
+
+\noindent
+We now consider how submodules behave in localization.
+
+\begin{lemma}
+\label{lemma-submodule-localization}
+Any submodule $N'$ of $S^{-1}M$ is of the form $S^{-1}N$ for some
+$N\subset M$. Indeed one can take $N$ to be the inverse image of
+$N'$ in $M$.
+\end{lemma}
+
+\begin{proof}
+Let $N$ be the inverse image of $N'$ in $M$. Then one can see that
+$S^{-1}N\supset N'$. To show they are equal, take $x/s$ in
+$S^{-1}N$, where $s\in S$ and $x\in N$. This yields that $x/1\in
N'$. Since $N'$ is an $S^{-1}A$-submodule we have
+$x/s = x/1\cdot 1/s\in N'$. This finishes the proof.
+\end{proof}
+
+\noindent
+Taking $M = A$ and $N = I$ an ideal of $A$, we have the following
+corollary, which can be viewed as a converse of the first part of
+Proposition \ref{proposition-localize-quotient}.
+
+\begin{lemma}
+\label{lemma-ideal-in-localization}
+\begin{slogan}
+Ideals in the localization of a ring are localizations of ideals.
+\end{slogan}
+Each ideal $I'$ of $S^{-1}A$ takes the form $S^{-1}I$, where one can
+take $I$ to be the inverse image of $I'$ in $A$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-submodule-localization}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Internal Hom}
+\label{section-hom}
+
+\noindent
+If $R$ is a ring, and $M$, $N$ are $R$-modules, then
+$$
+\Hom_R(M, N) = \{ \varphi : M \to N\}
+$$
+is the set of $R$-linear maps from $M$ to $N$. This set comes with
+the structure of an abelian group by setting
+$(\varphi + \psi)(m) = \varphi(m) + \psi(m)$, as usual.
+In fact, $\Hom_R(M, N)$ is also an $R$-module via the rule
+$(x \varphi)(m) = x \varphi(m) = \varphi(xm)$.
+
+\medskip\noindent
+Given maps $a : M \to M'$ and $b : N \to N'$ of $R$-modules, we can
+pre-compose and post-compose homomorphisms by $a$ and $b$. This leads
+to the following commutative diagram
+$$
+\xymatrix{
+\Hom_R(M', N) \ar[d]_{- \circ a} \ar[r]_{b \circ -} &
+\Hom_R(M', N') \ar[d]^{- \circ a} \\
+\Hom_R(M, N) \ar[r]^{b \circ -} &
+\Hom_R(M, N')
+}
+$$
+In fact, the maps in this diagram are $R$-module maps.
+Thus $\Hom_R$ defines an additive functor
+$$
+\text{Mod}_R^{opp} \times \text{Mod}_R \longrightarrow \text{Mod}_R, \quad
+(M, N) \longmapsto \Hom_R(M, N)
+$$
+
+\begin{lemma}
+\label{lemma-hom-exact}
+Exactness and $\Hom_R$. Let $R$ be a ring.
+\begin{enumerate}
+\item Let $M_1 \to M_2 \to M_3 \to 0$ be a complex of $R$-modules.
+Then $M_1 \to M_2 \to M_3 \to 0$ is exact if and only if
+$0 \to \Hom_R(M_3, N) \to \Hom_R(M_2, N) \to \Hom_R(M_1, N)$
+is exact for all $R$-modules $N$.
+\item Let $0 \to M_1 \to M_2 \to M_3$ be a complex of $R$-modules.
+Then $0 \to M_1 \to M_2 \to M_3$ is exact if and only if
+$0 \to \Hom_R(N, M_1) \to \Hom_R(N, M_2) \to \Hom_R(N, M_3)$
+is exact for all $R$-modules $N$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-from-finitely-presented}
+Let $R$ be a ring. Let $M$ be a finitely presented $R$-module.
+Let $N$ be an $R$-module.
+\begin{enumerate}
+\item For $f \in R$ we have
+$\Hom_R(M, N)_f = \Hom_{R_f}(M_f, N_f) = \Hom_R(M_f, N_f)$,
+\item for a multiplicative subset $S$ of $R$ we have
+$$
+S^{-1}\Hom_R(M, N) = \Hom_{S^{-1}R}(S^{-1}M, S^{-1}N) =
+\Hom_R(S^{-1}M, S^{-1}N).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is a special case of part (2).
+The second equality in (2) follows from
+Lemma \ref{lemma-universal-property-localization-module}.
+Choose a presentation
+$$
+\bigoplus\nolimits_{j = 1, \ldots, m} R
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} R
+\to M \to 0.
+$$
+By
+Lemma \ref{lemma-hom-exact}
+this gives an exact sequence
+$$
+0 \to
+\Hom_R(M, N) \to
+\bigoplus\nolimits_{i = 1, \ldots, n} N
+\longrightarrow
+\bigoplus\nolimits_{j = 1, \ldots, m} N.
+$$
+Inverting $S$ and using Proposition \ref{proposition-localization-exact}
+we get an exact sequence
+$$
+0 \to
+S^{-1}\Hom_R(M, N) \to
+\bigoplus\nolimits_{i = 1, \ldots, n} S^{-1}N
+\longrightarrow
+\bigoplus\nolimits_{j = 1, \ldots, m} S^{-1}N
+$$
+and the result follows since $S^{-1}M$ sits in
+an exact sequence
+$$
+\bigoplus\nolimits_{j = 1, \ldots, m} S^{-1}R
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} S^{-1}R \to S^{-1}M \to 0
+$$
+which induces (by Lemma \ref{lemma-hom-exact})
+the exact sequence
+$$
+0 \to
+\Hom_{S^{-1}R}(S^{-1}M, S^{-1}N) \to
+\bigoplus\nolimits_{i = 1, \ldots, n} S^{-1}N
+\longrightarrow
+\bigoplus\nolimits_{j = 1, \ldots, m} S^{-1}N
+$$
+which is the same as the one above.
+\end{proof}
+
+
+
+
+\section{Characterizing finite and finitely presented modules}
+\label{section-colim-and-hom}
+
+\noindent
+Given a module $N$ over a ring $R$, you can characterize whether or not
+$N$ is a finite module or a finitely presented module
+in terms of the functor $\Hom_R(N, -)$.
+
+\begin{lemma}
+\label{lemma-characterize-finite-module-hom}
+Let $R$ be a ring. Let $N$ be an $R$-module. The following are equivalent
+\begin{enumerate}
+\item $N$ is a finite $R$-module,
+\item for any filtered colimit $M = \colim M_i$ of $R$-modules the map
+$\colim \Hom_R(N, M_i) \to \Hom_R(N, M)$ is injective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1) and choose generators $x_1, \ldots, x_m$ for $N$.
If $\varphi : N \to M_i$ is a module map such that the composition
$N \to M_i \to M$ is zero, then because $M = \colim_{i' \geq i} M_{i'}$,
for each $j \in \{1, \ldots, m\}$ we can find an $i' \geq i$ such that
the image of $x_j$ under $N \to M_i \to M_{i'}$ is zero. Since there are
finitely many $x_j$ we can find a single $i'$ which works for all of them.
Then the composition $N \to M_i \to M_{i'}$ is zero and we conclude
the map is injective, i.e., part (2) holds.
+
+\medskip\noindent
+Assume (2). For a finite subset $E \subset N$ denote $N_E \subset N$
+the $R$-submodule generated by the elements of $E$. Then
+$0 = \colim N/N_E$ is a filtered colimit. Hence we see that
+$\text{id} : N \to N$ maps into $N_E$ for some $E$, i.e., $N$
+is finitely generated.
+\end{proof}
+
+\noindent
+For purposes of reference, we define what it means to have a relation
+between elements of a module.
+
+\begin{definition}
+\label{definition-relation}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $n \geq 0$ and $x_i \in M$ for $i = 1, \ldots, n$.
+A {\it relation} between $x_1, \ldots, x_n$ in $M$ is a
+sequence of elements $f_1, \ldots, f_n \in R$ such that
+$\sum_{i = 1, \ldots, n} f_i x_i = 0$.
+\end{definition}
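
\noindent
For instance, take $R = \mathbf{Z}$ and $M = \mathbf{Z}/6$ with
$x_1 = \bar 2$ and $x_2 = \bar 3$. Then $f_1 = 3$, $f_2 = 2$ is a
relation between $x_1, x_2$ in $M$, since
$$
3 \cdot \bar 2 + 2 \cdot \bar 3 = \bar 6 + \bar 6 = \bar 0
$$
in $\mathbf{Z}/6$.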
+
+\begin{lemma}
+\label{lemma-module-colimit-fp}
+Let $R$ be a ring and let $M$ be an $R$-module.
+Then $M$ is the colimit of a directed system
+$(M_i, \mu_{ij})$ of $R$-modules
+with all $M_i$ finitely presented $R$-modules.
+\end{lemma}
+
+\begin{proof}
+Consider any finite subset $S \subset M$ and any finite
+collection of relations $E$ among the elements
+of $S$. So each $s \in S$ corresponds to $x_s \in M$ and
+each $e \in E$ consists of a vector
+of elements $f_{e, s} \in R$ such that $\sum f_{e, s} x_s = 0$.
+Let $M_{S, E}$ be the cokernel of the map
+$$
+R^{\# E} \longrightarrow R^{\# S}, \quad
+(g_e)_{e\in E} \longmapsto (\sum g_e f_{e, s})_{s\in S}.
+$$
+There are canonical maps $M_{S, E} \to M$.
+If $S \subset S'$ and if the elements of
+$E$ correspond, via this map, to relations
+in $E'$, then there is an obvious map
+$M_{S, E} \to M_{S', E'}$ commuting with the
+maps to $M$. Let $I$ be the set of pairs
+$(S, E)$ with ordering by inclusion as above.
+It is clear that the colimit of this directed system is $M$.
+\end{proof}
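
\noindent
For example, $\mathbf{Q}$ is not a finitely presented
$\mathbf{Z}$-module, but it is the colimit of the directed system
of its submodules
$$
\mathbf{Z} \subset \tfrac{1}{2!}\mathbf{Z} \subset
\tfrac{1}{3!}\mathbf{Z} \subset \ldots
$$
each of which is free of rank $1$ and in particular finitely presented.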
+
+\begin{lemma}
+\label{lemma-characterize-finitely-presented-module-hom}
+Let $R$ be a ring. Let $N$ be an $R$-module. The following are equivalent
+\begin{enumerate}
+\item $N$ is a finitely presented $R$-module,
+\item for any filtered colimit $M = \colim M_i$ of $R$-modules the map
+$\colim \Hom_R(N, M_i) \to \Hom_R(N, M)$ is bijective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1) and choose an exact sequence $F_{-1} \to F_0 \to N \to 0$
+with $F_i$ finite free. Then we have an exact sequence
+$$
+0 \to \Hom_R(N, M) \to \Hom_R(F_0, M) \to \Hom_R(F_{-1}, M)
+$$
+functorial in the $R$-module $M$. The functors $\Hom_R(F_i, M)$ commute
+with filtered colimits as $\Hom_R(R^{\oplus n}, M) = M^{\oplus n}$.
+Since filtered colimits are exact
+(Lemma \ref{lemma-directed-colimit-exact})
+we see that (2) holds.
+
+\medskip\noindent
Assume (2). By Lemma \ref{lemma-module-colimit-fp}
we can write $N = \colim N_i$ as a filtered
colimit such that $N_i$ is of finite presentation for all $i$.
By (2) the identity map $\text{id}_N$ lies in the image of
$\colim \Hom_R(N, N_i) \to \Hom_R(N, N)$, i.e., it factors through
$N_i$ for some $i$.
This means that $N$ is a direct summand of a finitely
presented $R$-module (namely $N_i$) and hence finitely presented.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Tensor products}
+\label{section-tensor-product}
+
+\begin{definition}
+\label{definition-bilinear}
+Let $R$ be a ring, $M, N, P$ be three $R$-modules.
A mapping $f : M \times N \to P$ (where $M \times N$
is viewed merely as a set, not as an $R$-module) is said to be
+{\it $R$-bilinear} if for each $x \in M$
+the mapping $y\mapsto f(x, y)$ of $N$ into $P$ is $R$-linear, and for each
+$y\in N$ the mapping $x\mapsto f(x, y)$ is also $R$-linear.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-tensor-product}
+Let $M, N$ be $R$-modules. Then there exists a pair $(T, g)$
+where $T$ is an $R$-module, and
+$g : M \times N \to T$ an $R$-bilinear
+mapping, with the following universal property:
+For any $R$-module $P$ and any $R$-bilinear mapping
+$f : M \times N \to P$, there
+exists a unique $R$-linear
+mapping $\tilde{f} : T \to P$ such that $f = \tilde{f} \circ g$.
+In other words, the following diagram commutes:
+$$
+\xymatrix{
+M \times N \ar[rr]^f \ar[dr]_g & & P\\
+& T \ar[ur]_{\tilde f}
+}
+$$
+Moreover, if $(T, g)$ and $(T', g')$
+are two pairs with this property, then there
+exists a unique isomorphism
+$j : T \to T'$ such that $j\circ g = g'$.
+\end{lemma}
+
+\noindent
+The $R$-module $T$ which satisfies the above universal property is called
+the {\it tensor product} of $R$-modules $M$ and $N$, denoted as
+$M \otimes_R N$.
+
+\begin{proof}
We first prove the existence of such an $R$-module $T$.
Let $M, N$ be $R$-modules.
Let $T$ be the quotient module $F/Q$, where $F$ is the free $R$-module
$R^{(M \times N)}$ on the basis elements $(x, y)$, $x \in M$, $y \in N$,
and $Q$ is the submodule of $F$ generated by all elements of
the following types (where $x, x' \in M$, $y, y' \in N$, $a \in R$):
\begin{align*}
(x + x', y) - (x, y) - (x', y), \\
(x, y + y') - (x, y) - (x, y'), \\
(ax, y) - a(x, y), \\
(x, ay) - a(x, y)
\end{align*}
Let $\pi : M \times N \to T$ denote the composition of
$M \times N \to F$, $(x, y) \mapsto (x, y)$, with the quotient map
$F \to T$. This map is $R$-bilinear: the generators of $Q$ are
precisely the bilinearity conditions. Denote the image
$\pi(x, y) = x \otimes y$; these elements generate $T$.
Now let $f : M \times N \to P$ be an $R$-bilinear map. The $R$-linear
extension of $f$ to $F$ kills $Q$ by bilinearity of $f$, hence
descends to a well-defined $R$-linear map
$f' : T \to P$ with $f'(x \otimes y) = f(x, y)$.
Clearly $f = f'\circ \pi$. Moreover, $f'$ is
uniquely determined by its values on the
generating set $\{x \otimes y : x\in M, y\in N\}$.
Suppose there is another pair $(T', g')$ satisfying the same properties.
Then there is a unique $j : T \to T'$ with $g' = j\circ g$ and
a unique $j' : T' \to T$ with $g = j'\circ g'$.
Both the maps $(j'\circ j) \circ g$ and $g$
satisfy the universal property of $(T, g)$, so by uniqueness they are
equal, and hence $j'\circ j$ is the identity on $T$.
Similarly $(j\circ j') \circ g' = g'$ and $j\circ j'$ is the identity
on $T'$. So $j$ is an isomorphism.
+\end{proof}
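
\noindent
As an illustration of how the defining relations can make a tensor
product collapse, consider
$\mathbf{Z}/2 \otimes_{\mathbf{Z}} \mathbf{Z}/3$.
For any $x \in \mathbf{Z}/2$ and $y \in \mathbf{Z}/3$ we have
$$
x \otimes y = 3(x \otimes y) - 2(x \otimes y)
= x \otimes 3y - 2x \otimes y = x \otimes 0 - 0 \otimes y = 0
$$
Hence $\mathbf{Z}/2 \otimes_{\mathbf{Z}} \mathbf{Z}/3 = 0$.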
+
+\begin{lemma}
+\label{lemma-flip-tensor-product}
+Let $M, N, P$ be $R$-modules, then the bilinear maps
+\begin{align*}
+(x, y) & \mapsto y \otimes x\\
+(x + y, z) & \mapsto x \otimes z + y \otimes z\\
+(r, x) & \mapsto rx
+\end{align*}
+induce unique isomorphisms
+\begin{align*}
+M \otimes_R N & \to N \otimes_R M, \\
+(M\oplus N)\otimes_R P & \to (M \otimes_R P)\oplus(N \otimes_R P), \\
+R \otimes_R M & \to M
+\end{align*}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
We may generalize the tensor product of two $R$-modules to finitely many
$R$-modules, and set up a
correspondence between multi-tensor products and multilinear mappings.
Using almost the same construction
one can prove:
+
+\begin{lemma}
+\label{lemma-multilinear}
+Let $M_1, \ldots, M_r$ be $R$-modules. Then there exists a pair $(T, g)$
consisting of an $R$-module $T$ and an $R$-multilinear mapping
$g : M_1\times \ldots \times M_r \to T$ with the following universal
property: For any $R$-multilinear mapping
$f : M_1\times \ldots \times M_r \to P$ there exists a unique $R$-module
homomorphism $f' : T \to P$ such that $f'\circ g = f$.
Such a module $T$ is unique up to unique isomorphism. We denote it
$M_1\otimes_R \ldots \otimes_R M_r$ and we denote the universal
multilinear map by $(m_1, \ldots, m_r) \mapsto m_1 \otimes \ldots \otimes m_r$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-transitive}
+The homomorphisms
+$$
+(M \otimes_R N)\otimes_R P \to
+M \otimes_R N \otimes_R P \to
+M \otimes_R (N \otimes_R P)
+$$
+such that
+$f((x \otimes y)\otimes z) = x \otimes y \otimes z$
+and $g(x \otimes y \otimes z) = x \otimes (y \otimes z)$,
+$x\in M, y\in N, z\in P$ are well-defined and are isomorphisms.
+\end{lemma}
+
+\begin{proof}
We shall prove that $f$ is well-defined and an isomorphism; the
argument for $g$ is analogous. Fix any
$z\in P$. The mapping $(x, y)\mapsto x \otimes y \otimes z$,
$x\in M, y\in N$, is $R$-bilinear,
and hence induces a homomorphism $f_z : M \otimes N \to M \otimes N \otimes P$
with
$f_z(x \otimes y) = x \otimes y \otimes z$.
Then consider the map $(M \otimes N)\times P \to M \otimes N \otimes P$
given by $(w, z)\mapsto f_z(w)$. This map is
$R$-bilinear and thus induces
$f : (M \otimes_R N)\otimes_R P \to M \otimes_R N \otimes_R P$
with $f((x \otimes y)\otimes z) = x \otimes y \otimes z$.
To construct the inverse, we note that the map
$M \times N \times P \to (M \otimes N)\otimes P$,
$(x, y, z) \mapsto (x \otimes y) \otimes z$, is
$R$-trilinear.
Therefore, by the universal property of Lemma \ref{lemma-multilinear},
it induces an $R$-linear map
$h : M \otimes N \otimes P \to (M \otimes N)\otimes P$ with
$h(x \otimes y \otimes z) = (x \otimes y)\otimes z$.
+From the explicit expression of $f$ and $h$, $f\circ h$ and $h\circ f$ are
+identity maps of $M \otimes N \otimes
+P$ and $(M \otimes N)\otimes P$ respectively, hence $f$ is our desired
+isomorphism.
+\end{proof}
+
+\noindent
+Doing induction we see that this extends to multi-tensor products. Combined
+with Lemma \ref{lemma-flip-tensor-product} we see that
+the tensor product operation on the category of $R$-modules is associative,
+commutative and distributive.
+
+\begin{definition}
+\label{definition-bimodule}
An abelian group $N$ is called an {\it $(A, B)$-bimodule} if it is both an
$A$-module and a $B$-module, and
the actions $A \to \text{End}(N)$ and $B \to \text{End}(N)$
are compatible in the sense that $(ax)b = a(xb)$ for all
$a\in A, b\in B, x\in N$. Usually we denote it as $_AN_B$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-tensor-with-bimodule}
For an $A$-module $M$, a $B$-module $P$ and an $(A, B)$-bimodule $N$,
the modules
+$(M \otimes_A N)\otimes_B P$ and $M \otimes_A(N \otimes_B P)$ can both be
+given $(A, B)$-bimodule structure,
+and moreover
+$$
+(M \otimes_A N)\otimes_B P \cong M \otimes_A(N \otimes_B P).
+$$
+\end{lemma}
+
+\begin{proof}
+A priori $M \otimes_A N$ is an $A$-module, but we can give it a
+$B$-module structure by letting
+$$
+(x \otimes y)b = x \otimes yb, \quad x\in M, y\in N, b\in B
+$$
+Thus $M \otimes_A N$ becomes an $(A, B)$-bimodule. Similarly for
+$N \otimes_B P$, and thus for
+$(M \otimes_A N)\otimes_B P$ and $M \otimes_A(N \otimes_B P)$. By
+Lemma \ref{lemma-transitive}, these two
modules are isomorphic, both as $A$-modules and as $B$-modules, via the same
mapping.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-from-tensor-product}
+For any three $R$-modules $M, N, P$,
+$$
+\Hom_R(M \otimes_R N, P) \cong \Hom_R(M, \Hom_R(N, P))
+$$
+\end{lemma}
+
+\begin{proof}
+An $R$-linear map $\hat{f}\in \Hom_R(M \otimes_R N, P)$ corresponds to an
+$R$-bilinear map $f : M \times N \to P$. For
each $x\in M$ the mapping $y\mapsto f(x, y)$ is $R$-linear because $f$
is bilinear. Thus $f$ corresponds to a
+map $\phi_f : M \to \Hom_R(N, P)$. This map is $R$-linear since
+$$
+\phi_f(ax + y)(z) =
+f(ax + y, z) = af(x, z)+f(y, z) =
+(a\phi_f(x)+\phi_f(y))(z),
+$$
+for all $a \in R$, $x \in M$, $y \in M$ and
+$z \in N$. Conversely, any
+$f \in \Hom_R(M, \Hom_R(N, P))$ defines an $R$-bilinear
+map $M \times N \to P$, namely $(x, y)\mapsto f(x)(y)$.
These two constructions are mutually inverse and $R$-linear, so they
define a natural isomorphism between the two modules
$\Hom_R(M \otimes_R N, P)$ and $\Hom_R(M, \Hom_R(N, P))$.
+\end{proof}
+
+\begin{lemma}[Tensor products commute with colimits]
+\label{lemma-tensor-products-commute-with-limits}
+Let $(M_i, \mu_{ij})$ be a system over the preordered set $I$.
+Let $N$ be an $R$-module. Then
+$$
+\colim (M_i \otimes N) \cong (\colim M_i)\otimes N.
+$$
+Moreover, the isomorphism is induced by the homomorphisms
+$\mu_i \otimes 1: M_i \otimes N \to M \otimes N$
+where $M = \colim_i M_i$ with natural maps $\mu_i : M_i \to M$.
+\end{lemma}
+
+\begin{proof}
+First proof. The functor $M' \mapsto M' \otimes_R N$ is left adjoint
+to the functor $N' \mapsto \Hom_R(N, N')$ by
+Lemma \ref{lemma-hom-from-tensor-product}. Thus $M' \mapsto M' \otimes_R N$
+commutes with all colimits, see
+Categories, Lemma \ref{categories-lemma-adjoint-exact}.
+
+\medskip\noindent
+Second direct proof. Let $P = \colim (M_i \otimes N)$ with coprojections
+$\lambda_i : M_i \otimes N \to P$. Let $M = \colim M_i$ with coprojections
+$\mu_i : M_i \to M$. Then for all $i\leq j$, the following diagram commutes:
+$$
+\xymatrix{
+M_i \otimes N \ar[r]_{\mu_i \otimes 1} \ar[d]_{\mu_{ij} \otimes 1} &
+M \otimes N \ar[d]^{\text{id}} \\
+M_j \otimes N \ar[r]^{\mu_j \otimes 1} &
+M \otimes N
+}
+$$
+By Lemma \ref{lemma-homomorphism-limit} these maps induce a unique homomorphism
+$\psi : P \to M \otimes N$ such that
+$\lambda_i = \psi \circ (\mu_i \otimes 1)$.
+
+\medskip\noindent
+To construct the inverse map, for each $i\in I$, there is the canonical
+$R$-bilinear mapping $g_i : M_i \times N \to
+M_i \otimes N$. This induces a unique mapping
+$\widehat{\phi} : M \times N \to P$
+such that $\widehat{\phi} \circ (\mu_i \times 1) = \lambda_i \circ g_i$.
+It is $R$-bilinear. Thus it induces an
+$R$-linear mapping $\phi : M \otimes N \to P$.
+From the commutative diagram below:
+$$
+\xymatrix{
+M_i \times N \ar[r]^{g_i} \ar[d]^{\mu_i \times \text{id}} &
+M_i \otimes N\ar[r]_{\text{id}} \ar[d]_{\lambda_i} &
+M_i \otimes N \ar[d]_{\mu_i \otimes \text{id}} \ar[rd]^{\lambda_i} \\
+M \times N \ar[r]^{\widehat{\phi}} &
+P \ar[r]^{\psi} & M \otimes N \ar[r]^{\phi} & P
+}
+$$
+we see that $\psi\circ\widehat{\phi} = g$, the canonical $R$-bilinear mapping
+$g : M \times N \to M \otimes N$. So
+$\psi\circ\phi$ is identity on $M \otimes N$. From the right-hand square and
+triangle, $\phi\circ\psi$ is also
+identity on $P$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-product-exact}
+Let
+\begin{align*}
+M_1\xrightarrow{f} M_2\xrightarrow{g} M_3 \to 0
+\end{align*}
+be an exact sequence of $R$-modules and homomorphisms, and let $N$ be any
+$R$-module. Then the sequence
+\begin{equation}
+\label{equation-2ndex}
+M_1\otimes N\xrightarrow{f \otimes 1} M_2\otimes N \xrightarrow{g \otimes 1}
+M_3\otimes N \to 0
+\end{equation}
is exact. In other words, the functor $- \otimes_R N$ is
{\it right exact}: it transforms right exact sequences into
right exact sequences.
+\end{lemma}
+
+\begin{proof}
Let $P$ be an arbitrary $R$-module. We apply the functor
$\Hom(-, \Hom(N, P))$ to the given exact
sequence and obtain, by Lemma \ref{lemma-hom-exact},
$$
0 \to
\Hom(M_3, \Hom(N, P)) \to
\Hom(M_2, \Hom(N, P)) \to
\Hom(M_1, \Hom(N, P))
$$
By Lemma \ref{lemma-hom-from-tensor-product}, this sequence
identifies with
$$
0 \to \Hom(M_3 \otimes N, P) \to
\Hom(M_2 \otimes N, P) \to \Hom(M_1 \otimes N, P)
$$
As this is exact for every $R$-module $P$, the sequence
(\ref{equation-2ndex}) is exact by Lemma \ref{lemma-hom-exact}.
+\end{proof}
+
+\begin{remark}
+\label{remark-tensor-product-not-exact}
However, the tensor product does not preserve exact sequences in general.
+In other words, if $M_1 \to M_2 \to M_3$ is
+exact, then it is not necessarily true that
+$M_1 \otimes N \to M_2 \otimes N \to M_3 \otimes N$
+is exact for arbitrary $R$-module $N$.
+\end{remark}
+
+\begin{example}
+\label{example-tensor-product-not-exact}
Consider the injective map $2 : \mathbf{Z}\to \mathbf{Z}$,
multiplication by $2$,
viewed as a map of $\mathbf{Z}$-modules.
Let $N = \mathbf{Z}/2$. Then the induced map
$\mathbf{Z} \otimes \mathbf{Z}/2 \to \mathbf{Z} \otimes \mathbf{Z}/2$
is not injective: for any
$x \otimes y\in \mathbf{Z} \otimes \mathbf{Z}/2$,
+$$
+(2 \otimes 1)(x \otimes y) = 2x \otimes y = x \otimes 2y = x \otimes 0 = 0
+$$
Therefore the induced map is the zero map while
$\mathbf{Z} \otimes N \cong \mathbf{Z}/2 \neq 0$.
+\end{example}
+
+\begin{remark}
+\label{remark-flat-module}
Let $N$ be an $R$-module. If the
functor $-\otimes_R N$ is exact, i.e., tensoring
with $N$ preserves all exact
sequences, then $N$ is said to be a {\it flat} $R$-module.
+We will discuss this later in Section \ref{section-flat}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-tensor-finiteness}
+Let $R$ be a ring. Let $M$ and $N$ be $R$-modules.
+\begin{enumerate}
+\item If $N$ and $M$ are finite, then so is $M \otimes_R N$.
+\item If $N$ and $M$ are finitely presented, then so is $M \otimes_R N$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Suppose $M$ is finite. Then choose a short exact sequence
$0 \to K \to R^{\oplus n} \to M \to 0$. This gives an exact sequence
+$K \otimes_R N \to N^{\oplus n} \to M \otimes_R N \to 0$ by
+Lemma \ref{lemma-tensor-product-exact}.
+We conclude that if $N$ is finite too then $M \otimes_R N$
+is a quotient of a finite module, hence finite, see
+Lemma \ref{lemma-extension}.
+Similarly, if both $N$ and $M$ are finitely presented, then
+we see that $K$ is finite and that $M \otimes_R N$
+is a quotient of the finitely presented module $N^{\oplus n}$ by
+a finite module, namely $K \otimes_R N$, and hence finitely presented, see
+Lemma \ref{lemma-extension}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-localization}
Let $R$ be a ring, let $S \subset R$ be a multiplicative subset and let
$M$ be an $R$-module. Then the $S^{-1}R$-modules $S^{-1}M$
and $S^{-1}R \otimes_R M$ are canonically isomorphic, and the
canonical isomorphism $f : S^{-1}R \otimes_R M \to S^{-1}M$
is given by
$$
f((a/s) \otimes m) = am/s \quad \text{for all } a \in R, m \in M, s \in S
$$
+\end{lemma}
+
\begin{proof}
The map
$f' : S^{-1}R \times M \to S^{-1}M$ given by $f'((a/s, m)) = am/s$ is
well defined and $R$-bilinear, and thus by the
universal property induces a unique $S^{-1}R$-module homomorphism
$f : S^{-1}R \otimes_R M \to S^{-1}M$ as in the statement of the lemma.
Every element in $S^{-1}M$ is of the form $m/s$, $m\in M, s\in S$, and
every element in
$S^{-1}R \otimes_R M$ is of the form $1/s \otimes m$. To see the latter fact,
write an element in
$S^{-1}R \otimes_R M$ as
$$
\sum_k \frac{a_k}{s_k} \otimes m_k =
\sum_k \frac{a_k t_k}{s} \otimes m_k =
\frac{1}{s} \otimes \sum_k {a_k t_k}m_k = \frac{1}{s} \otimes m
$$
where $s = \prod_k s_k$, $t_k = \prod_{l \neq k} s_l$ and
$m = \sum_k {a_k t_k}m_k$. It follows that $f$ is surjective.
If $f(\frac{1}{s} \otimes m) = m/s = 0$, then there exists $t\in S$ with
$tm = 0$ in $M$. Then we have
$$
\frac{1}{s} \otimes m = \frac{1}{st} \otimes tm = \frac{1}{st} \otimes 0 = 0
$$
Therefore $f$ is injective.
\end{proof}
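
\noindent
As a concrete instance of Lemma \ref{lemma-tensor-localization}, take
$R = \mathbf{Z}$ and $S = \mathbf{Z} \setminus \{0\}$, so that
$S^{-1}R = \mathbf{Q}$, and let $M = \mathbf{Z}/n$ for some $n \geq 1$.
Every element $m/s$ of $S^{-1}M$ is zero because $nm = 0$ and
$m/s = nm/ns$. Correspondingly
$\mathbf{Q} \otimes_{\mathbf{Z}} \mathbf{Z}/n = 0$.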
+
+\begin{lemma}
+\label{lemma-tensor-product-localization}
Let $R$ be a ring, let $S \subset R$ be a multiplicative subset and let
$M, N$ be $R$-modules. Then there is a canonical
$S^{-1}R$-module isomorphism
$f : S^{-1}M \otimes_{S^{-1}R}S^{-1}N \to S^{-1}(M \otimes_R N)$,
+given by
+$$
+f((m/s)\otimes(n/t)) = (m \otimes n)/st
+$$
+\end{lemma}
+
+\begin{proof}
+We may use Lemma \ref{lemma-tensor-with-bimodule}
+and Lemma \ref{lemma-tensor-localization} repeatedly to
+see that these two
+$S^{-1}R$-modules are isomorphic, noting that $S^{-1}R$ is an
+$(R, S^{-1}R)$-bimodule:
+\begin{align*}
+S^{-1}(M \otimes_R N) & \cong S^{-1}R \otimes_R (M \otimes_R N)\\
+ & \cong S^{-1}M \otimes_R N\\
+ & \cong (S^{-1}M \otimes_{S^{-1}R}S^{-1}R)\otimes_R N\\
+ & \cong S^{-1}M \otimes_{S^{-1}R}(S^{-1}R \otimes_R N)\\
+ & \cong S^{-1}M \otimes_{S^{-1}R}S^{-1}N
+\end{align*}
+This isomorphism is easily seen to be the one stated in the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Tensor algebra}
+\label{section-tensor-algebra}
+
+\noindent
+Let $R$ be a ring. Let $M$ be an $R$-module.
+We define the {\it tensor algebra of $M$ over $R$} to
+be the noncommutative $R$-algebra
+$$
+\text{T}(M) = \text{T}_R(M) =
+\bigoplus\nolimits_{n \geq 0} \text{T}^n(M)
+$$
+with
+$\text{T}^0(M) = R$,
+$\text{T}^1(M) = M$,
+$\text{T}^2(M) = M \otimes_R M$,
+$\text{T}^3(M) = M \otimes_R M \otimes_R M$, and so on.
+Multiplication is defined by the rule that on pure tensors we have
+$$
+(x_1 \otimes x_2 \otimes \ldots \otimes x_n)
+\cdot
+(y_1 \otimes y_2 \otimes \ldots \otimes y_m)
+=
+x_1 \otimes x_2 \otimes \ldots \otimes x_n \otimes
+y_1 \otimes y_2 \otimes \ldots \otimes y_m
+$$
+and we extend this by linearity.
+
+\medskip\noindent
+We define the {\it exterior algebra $\wedge(M)$ of $M$ over $R$}
to be the quotient of $\text{T}(M)$ by the two-sided
+ideal generated by the elements $x \otimes x \in \text{T}^2(M)$.
+The image of a pure tensor $x_1 \otimes \ldots \otimes x_n$
+in $\wedge^n(M)$ is denoted $x_1 \wedge \ldots \wedge x_n$.
+These elements generate $\wedge^n(M)$, they are $R$-linear
+in each $x_i$ and they are zero when two of the $x_i$ are equal
+(i.e., they are alternating as functions of
$x_1, x_2, \ldots, x_n$). The multiplication on $\wedge(M)$ is
graded commutative; in particular, any $x \in M$ and $y \in M$
satisfy $x \wedge y = - y \wedge x$, as one sees by expanding
$0 = (x + y) \wedge (x + y)$.
+
+\medskip\noindent
+An example of this is when $M = Rx_1 \oplus \ldots \oplus Rx_n$
+is a finite free module. In this case $\wedge(M)$ is free over
+$R$ with basis the elements
+$$
+x_{i_1} \wedge \ldots \wedge x_{i_r}
+$$
+with $0 \leq r \leq n$ and $1 \leq i_1 < i_2 < \ldots < i_r \leq n$.
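
\noindent
Concretely, for $n = 3$ this basis consists of
$1$; $x_1, x_2, x_3$; $x_1 \wedge x_2, x_1 \wedge x_3, x_2 \wedge x_3$;
and $x_1 \wedge x_2 \wedge x_3$. In general $\wedge^r(M)$ is free of
rank ${n \choose r}$ and $\wedge(M)$ is free of rank $2^n$.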
+
+\medskip\noindent
+We define the {\it symmetric algebra $\text{Sym}(M)$ of $M$ over $R$}
to be the quotient of $\text{T}(M)$ by the two-sided
+ideal generated by the elements $x \otimes y - y \otimes x \in \text{T}^2(M)$.
+The image of a pure tensor $x_1 \otimes \ldots \otimes x_n$
+in $\text{Sym}^n(M)$ is denoted just $x_1 \ldots x_n$.
+These elements generate $\text{Sym}^n(M)$, these are $R$-linear
+in each $x_i$ and $x_1 \ldots x_n = x_1' \ldots x_n'$ if the
+sequence of elements $x_1, \ldots, x_n$ is a permutation of the
+sequence $x_1', \ldots, x_n'$. Thus we see that $\text{Sym}(M)$
+is commutative.
+
+\medskip\noindent
+An example of this is when $M = Rx_1 \oplus \ldots \oplus Rx_n$
+is a finite free module. In this case
+$\text{Sym}(M) = R[x_1, \ldots, x_n]$ is a polynomial algebra.
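
\noindent
In this case $\text{Sym}^d(M)$ is free with basis the monomials of
degree $d$ in $x_1, \ldots, x_n$, hence of rank ${n + d - 1 \choose d}$;
for example $\text{Sym}^2(Rx_1 \oplus Rx_2)$ has basis
$x_1^2, x_1x_2, x_2^2$.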
+
+\begin{lemma}
+\label{lemma-free-tensor-algebra}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+If $M$ is a free $R$-module, so is each symmetric and exterior power.
+\end{lemma}
+
+\begin{proof}
+Omitted, but see above for the finite free case.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-presentation-sym-exterior}
+Let $R$ be a ring.
+Let $M_2 \to M_1 \to M \to 0$ be an exact sequence of $R$-modules.
+There are exact sequences
+$$
+M_2 \otimes_R \text{Sym}^{n - 1}(M_1)
+\to
+\text{Sym}^n(M_1)
+\to
+\text{Sym}^n(M)
+\to
+0
+$$
+and similarly
+$$
+M_2 \otimes_R \wedge^{n - 1}(M_1)
+\to
+\wedge^n(M_1)
+\to
+\wedge^n(M)
+\to
+0
+$$
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-present-sym-wedge}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+Let $x_i$, $i \in I$ be a given system of generators of
+$M$ as an $R$-module. Let $n \geq 2$.
+There exists a canonical exact sequence
+$$
+\bigoplus_{1 \leq j_1 < j_2 \leq n}
+\bigoplus_{i_1, i_2 \in I}
+\text{T}^{n - 2}(M)
+\oplus
+\bigoplus_{1 \leq j_1 < j_2 \leq n}
+\bigoplus_{i \in I}
+\text{T}^{n - 2}(M)
+\to
+\text{T}^n(M)
+\to
+\wedge^n(M)
+\to
+0
+$$
+where the pure tensor $m_1 \otimes \ldots \otimes m_{n - 2}$ in the first
+summand maps to
+\begin{align*}
+\underbrace{
+m_1 \otimes \ldots \otimes x_{i_1} \otimes \ldots
+\otimes x_{i_2} \otimes \ldots \otimes m_{n - 2}
+}_{\text{with } x_{i_1} \text{ and } x_{i_2}
+\text{ occupying slots } j_1 \text{ and } j_2
+\text{ in the tensor}} \\
++
+\underbrace{
+m_1 \otimes \ldots \otimes x_{i_2} \otimes \ldots
+\otimes x_{i_1} \otimes \ldots \otimes m_{n - 2}
+}_{\text{with } x_{i_2} \text{ and } x_{i_1}
+\text{ occupying slots } j_1 \text{ and } j_2
+\text{ in the tensor}}
+\end{align*}
+and $m_1 \otimes \ldots \otimes m_{n - 2}$ in the second
+summand maps to
+$$
+\underbrace{
+m_1 \otimes \ldots \otimes x_i \otimes \ldots
+\otimes x_i \otimes \ldots \otimes m_{n - 2}
+}_{\text{with } x_{i} \text{ and } x_{i}
+\text{ occupying slots } j_1 \text{ and } j_2
+\text{ in the tensor}}
+$$
+There is also a canonical exact sequence
+$$
+\bigoplus_{1 \leq j_1 < j_2 \leq n}
+\bigoplus_{i_1, i_2 \in I}
+\text{T}^{n - 2}(M)
+\to
+\text{T}^n(M)
+\to
+\text{Sym}^n(M)
+\to
+0
+$$
+where the pure tensor $m_1 \otimes \ldots \otimes m_{n - 2}$ maps to
+\begin{align*}
+\underbrace{
+m_1 \otimes \ldots \otimes x_{i_1} \otimes \ldots
+\otimes x_{i_2} \otimes \ldots \otimes m_{n - 2}
+}_{\text{with } x_{i_1} \text{ and } x_{i_2}
+\text{ occupying slots } j_1 \text{ and } j_2
+\text{ in the tensor}} \\
+-
+\underbrace{
+m_1 \otimes \ldots \otimes x_{i_2} \otimes \ldots
+\otimes x_{i_1} \otimes \ldots \otimes m_{n - 2}
+}_{\text{with } x_{i_2} \text{ and } x_{i_1}
+\text{ occupying slots } j_1 \text{ and } j_2
+\text{ in the tensor}}
+\end{align*}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-tensor-algebra}
+\begin{slogan}
+Taking tensor algebras commutes with filtered colimits.
+\end{slogan}
+Let $R$ be a ring. Let $M_i$ be a directed system of
+$R$-modules. Then
+$\colim_i \text{T}(M_i) = \text{T}(\colim_i M_i)$
+and similarly for the symmetric and exterior algebras.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Apply Lemma \ref{lemma-tensor-products-commute-with-limits}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-algebra-localization}
+Let $R$ be a ring and let $S \subset R$ be a multiplicative subset.
Then $S^{-1}\text{T}_R(M) = \text{T}_{S^{-1}R}(S^{-1}M)$
for any $R$-module $M$.
Similarly for the symmetric and exterior algebras.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Apply Lemma \ref{lemma-tensor-product-localization}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Base change}
+\label{section-base-change}
+
+\noindent
+We formally introduce base change in algebra as follows.
+
+\begin{definition}
+\label{definition-base-change}
+Let $\varphi : R \to S$ be a ring map. Let $M$ be an $S$-module.
+Let $R \to R'$ be any ring map. The {\it base change} of $\varphi$
+by $R \to R'$ is the ring map $R' \to S \otimes_R R'$. In this situation
+we often write $S' = S \otimes_R R'$.
+The {\it base change} of the $S$-module $M$ is the $S'$-module
+$M \otimes_R R'$.
+\end{definition}
+
+\noindent
+If $S = R[x_i]/(f_j)$ for some collection of variables $x_i$, $i \in I$
+and some collection of polynomials $f_j \in R[x_i]$, $j \in J$, then
+$S \otimes_R R' = R'[x_i]/(f'_j)$, where $f'_j \in R'[x_i]$ is the image
+of $f_j$ under the map $R[x_i] \to R'[x_i]$ induced by $R \to R'$.
+This simple remark is the key to understanding base change.
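
\noindent
For example, let $R = \mathbf{Z}$ and $S = \mathbf{Z}[x]/(x^2 + 1)$.
Base changing along $\mathbf{Z} \to \mathbf{F}_5$ gives
$$
S \otimes_{\mathbf{Z}} \mathbf{F}_5 = \mathbf{F}_5[x]/(x^2 + 1)
\cong \mathbf{F}_5 \times \mathbf{F}_5
$$
since $x^2 + 1 = (x - 2)(x - 3)$ in $\mathbf{F}_5[x]$.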
+
+\begin{lemma}
+\label{lemma-base-change-finiteness}
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module.
+Let $R \to R'$ be a ring map and let $S' = S \otimes_R R'$ and
+$M' = M \otimes_R R'$ be the base changes.
+\begin{enumerate}
+\item If $M$ is a finite $S$-module, then the base change
+$M'$ is a finite $S'$-module.
+\item If $M$ is an $S$-module of finite presentation, then
+the base change $M'$ is an $S'$-module of finite presentation.
+\item If $R \to S$ is of finite type, then the base change
+$R' \to S'$ is of finite type.
+\item If $R \to S$ is of finite presentation, then
+the base change $R' \to S'$ is of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Proof of (1). Take a surjective, $S$-linear map
$S^{\oplus n} \to M \to 0$.
By Lemmas \ref{lemma-flip-tensor-product} and \ref{lemma-tensor-product-exact}
the result after tensoring with $R'$ is a surjection
${S'}^{\oplus n} \to M' \to 0$,
so $M'$ is a finitely generated $S'$-module.
+Proof of (2). Take a presentation
+$S^{\oplus m} \to S^{\oplus n} \to M \to 0$.
By Lemmas \ref{lemma-flip-tensor-product} and \ref{lemma-tensor-product-exact}
the result after tensoring with $R'$ gives a finite presentation
${S'}^{\oplus m} \to {S'}^{\oplus n} \to M' \to 0$ of
the $S'$-module $M'$. Proof of (3). This follows by the remark
+preceding the lemma as we can take $I$ to be finite by assumption.
+Proof of (4). This follows by the remark preceding the lemma
+as we can take $I$ and $J$ to be finite by assumption.
+\end{proof}
+
+\noindent
+Let $\varphi : R \to S$ be a ring map. Given an $S$-module $N$ we
+obtain an $R$-module $N_R$ by the rule $r \cdot n = \varphi(r)n$.
+This is sometimes called the {\it restriction} of $N$ to $R$.
+
+\begin{lemma}
+\label{lemma-adjoint-tensor-restrict}
+Let $R \to S$ be a ring map. The functors
+$\text{Mod}_S \to \text{Mod}_R$, $N \mapsto N_R$ (restriction)
+and $\text{Mod}_R \to \text{Mod}_S$, $M \mapsto M \otimes_R S$
+(base change) are adjoint functors. In a formula
+$$
+\Hom_R(M, N_R) = \Hom_S(M \otimes_R S, N)
+$$
+\end{lemma}
+
+\begin{proof}
+If $\alpha : M \to N_R$ is an $R$-module map, then we define
+$\alpha' : M \otimes_R S \to N$ by the rule
+$\alpha'(m \otimes s) = s\alpha(m)$. If $\beta : M \otimes_R S \to N$
+is an $S$-module map, we define $\beta' : M \to N_R$ by the rule
+$\beta'(m) = \beta(m \otimes 1)$.
+We omit the verification that these constructions are mutually inverse.
+\end{proof}
+
+\noindent
+The lemma above tells us that restriction has a left adjoint, namely
+base change. It also has a right adjoint.
+
+\begin{lemma}
+\label{lemma-adjoint-hom-restrict}
+Let $R \to S$ be a ring map. The functors
+$\text{Mod}_S \to \text{Mod}_R$, $N \mapsto N_R$ (restriction)
+and $\text{Mod}_R \to \text{Mod}_S$, $M \mapsto \Hom_R(S, M)$
+are adjoint functors. In a formula
+$$
+\Hom_R(N_R, M) = \Hom_S(N, \Hom_R(S, M))
+$$
+\end{lemma}
+
+\begin{proof}
+If $\alpha : N_R \to M$ is an $R$-module map, then we define
+$\alpha' : N \to \Hom_R(S, M)$ by the rule
+$\alpha'(n) = (s \mapsto \alpha(sn))$. If $\beta : N \to \Hom_R(S, M)$
+is an $S$-module map, we define $\beta' : N_R \to M$ by the rule
+$\beta'(n) = \beta(n)(1)$.
+We omit the verification that these constructions are mutually inverse.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-from-tensor-product-variant}
+Let $R \to S$ be a ring map. Given $S$-modules $M, N$ and an $R$-module $P$
+we have
+$$
+\Hom_R(M \otimes_S N, P) = \Hom_S(M, \Hom_R(N, P))
+$$
+\end{lemma}
+
+\begin{proof}
+This can be proved directly, but it is also a consequence of
+Lemmas \ref{lemma-adjoint-hom-restrict} and \ref{lemma-hom-from-tensor-product}.
+Namely, we have
+\begin{align*}
+\Hom_R(M \otimes_S N, P)
+& =
+\Hom_S(M \otimes_S N, \Hom_R(S, P)) \\
+& =
+\Hom_S(M, \Hom_S(N, \Hom_R(S, P))) \\
+& =
+\Hom_S(M, \Hom_R(N, P))
+\end{align*}
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Miscellany}
+\label{section-miscellany}
+
+\noindent
+The proofs in this section should not refer to any results except
+those from the section on basic notions, Section \ref{section-rings-basic}.
+
+\begin{lemma}
+\label{lemma-product-ideals-in-prime}
+Let $R$ be a ring, $I$ and $J$ two ideals and $\mathfrak p$ a prime ideal
+containing the product $IJ$. Then $\mathfrak{p}$ contains $I$ or $J$.
+\end{lemma}
+
+\begin{proof}
+Assume the contrary and take $x \in I \setminus \mathfrak p$ and
+$y \in J \setminus \mathfrak p$. Their product is an element of
+$IJ \subset \mathfrak p$, which contradicts the assumption that
+$\mathfrak p$ was prime.
+\end{proof}
+
+\begin{lemma}[Prime avoidance]
+\label{lemma-silly}
+\begin{slogan}
+1. In an affine scheme if a finite number of points are contained in an
+open subset then they are contained in a smaller principal open subset.
+2. Affine opens are cofinal among the neighborhoods of a given finite set
of an affine scheme.
+\end{slogan}
+Let $R$ be a ring. Let $I_i \subset R$, $i = 1, \ldots, r$,
+and $J \subset R$ be ideals. Assume
+\begin{enumerate}
+\item $J \not\subset I_i$ for $i = 1, \ldots, r$, and
\item all but two of the $I_i$ are prime ideals.
+\end{enumerate}
+Then there exists an $x \in J$, $x\not\in I_i$ for all $i$.
+\end{lemma}
+
+\begin{proof}
+The result is true for $r = 1$. If $r = 2$, then let $x, y \in J$ with
$x \not \in I_1$ and $y \not \in I_2$. We are done unless $x \in I_2$
and $y \in I_1$. Then the element $x + y$ cannot be in $I_1$ (since then
$x = (x + y) - y \in I_1$) and by symmetry it also cannot be in $I_2$.
+
+\medskip\noindent
+For $r \geq 3$, assume the result holds for $r - 1$. After renumbering
+we may assume that $I_r$ is prime. We may also assume there are no
+inclusions among the $I_i$. Pick $x \in J$, $x \not \in I_i$ for all
+$i = 1, \ldots, r - 1$. If $x \not\in I_r$ we are done. So assume
+$x \in I_r$. If $J I_1 \ldots I_{r - 1} \subset I_r$ then
+$J \subset I_r$ (by Lemma \ref{lemma-product-ideals-in-prime}) a contradiction.
Pick $y \in J I_1 \ldots I_{r - 1}$, $y \not \in I_r$. Then $x + y$ works:
for $i < r$ we have $y \in I_i$ and $x \not \in I_i$, whereas
$x \in I_r$ and $y \not \in I_r$.
+\end{proof}
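
\noindent
The hypothesis that all but two of the $I_i$ are prime cannot be
dropped. For example, let $R = \mathbf{F}_2[x, y]/(x^2, xy, y^2)$
and consider the ideals $I_1 = (x)$, $I_2 = (y)$, $I_3 = (x + y)$
and $J = (x, y)$. Then $J = \{0, x, y, x + y\}$, so
$J \not\subset I_i$ for each $i$, yet every element of $J$ lies
in some $I_i$.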
+
+\begin{lemma}
+\label{lemma-silly-silly}
+Let $R$ be a ring. Let $x \in R$, $I \subset R$ an ideal, and
+$\mathfrak p_i$, $i = 1, \ldots, r$ be prime ideals.
+Suppose that $x + I \not \subset \mathfrak p_i$ for
+$i = 1, \ldots, r$. Then there exists a $y \in I$
+such that $x + y \not \in \mathfrak p_i$ for all $i$.
+\end{lemma}
+
+\begin{proof}
+We may assume there are no inclusions among the $\mathfrak p_i$.
+After reordering we may assume $x \not \in \mathfrak p_i$ for $i < s$
+and $x \in \mathfrak p_i$ for $i \geq s$. If $s = r + 1$ then we are done.
+If not, then we can find $y \in I$ with $y \not \in \mathfrak p_s$.
+Choose $f \in \bigcap_{i < s} \mathfrak p_i$ with $f \not \in \mathfrak p_s$;
+this is possible since otherwise $\mathfrak p_i \subset \mathfrak p_s$
+for some $i < s$, as $\mathfrak p_s$ is prime.
+Then $x + fy$ is not contained in $\mathfrak p_1, \ldots, \mathfrak p_s$.
+Thus we win by induction on $s$.
+\end{proof}
+
+\begin{lemma}[Chinese remainder]
+\label{lemma-chinese-remainder}
+Let $R$ be a ring.
+\begin{enumerate}
+\item If $I_1, \ldots, I_r$ are ideals such that $I_a + I_b = R$
+when $a \not = b$, then $I_1 \cap \ldots \cap I_r =
+I_1I_2\ldots I_r$ and $R/(I_1I_2\ldots I_r)
+\cong R/I_1 \times \ldots \times R/I_r$.
+\item If $\mathfrak m_1, \ldots, \mathfrak m_r$ are pairwise distinct maximal
+ideals then $\mathfrak m_a + \mathfrak m_b = R$ for $a \not = b$ and the
+above applies.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let us first prove $I_1 \cap \ldots \cap I_r = I_1 \ldots I_r$
+as this will also imply the injectivity of the induced ring
+homomorphism $R/(I_1 \ldots I_r) \rightarrow R/I_1 \times \ldots \times R/I_r$.
+The inclusion $I_1 \cap \ldots \cap I_r \supset I_1 \ldots I_r$ is always
+fulfilled since ideals are closed under multiplication with arbitrary ring
+elements. To prove the other inclusion, we claim that the ideals
+$$
+I_1 \ldots \hat I_i \ldots I_r,\quad i = 1, \ldots, r
+$$
+generate the ring $R$. We prove this by induction on $r$. It holds when
+$r = 2$. If $r > 2$, then we see that $R$ is the sum of the ideals
+$I_1 \ldots \hat I_i \ldots I_{r - 1}$, $i = 1, \ldots, r - 1$.
+Hence $I_r$ is the sum of the ideals
+$I_1 \ldots \hat I_i \ldots I_r$, $i = 1, \ldots, r - 1$.
+Applying the same argument with the reverse ordering on the ideals
+we see that $I_1$ is the sum of the ideals
+$I_1 \ldots \hat I_i \ldots I_r$, $i = 2, \ldots, r$.
+Since $R = I_1 + I_r$ by assumption we see that $R$ is the sum of the
+ideals displayed above. Therefore we can find elements
+$a_i \in I_1 \ldots \hat I_i \ldots I_r$
+such that their sum is one. Multiplying this equation by an element
+of $I_1 \cap \ldots \cap I_r$ gives the other inclusion.
+It remains to show that the canonical map
+$R/(I_1 \ldots I_r) \rightarrow R/I_1 \times \ldots \times R/I_r$
+is surjective. For this, consider its action on the equation
+$1 = \sum_{i=1}^r a_i$ we derived above. On the one hand, a
+ring morphism sends 1 to 1 and on the other hand, the image of any
+$a_i$ is zero in $R/I_j$ for $j \neq i$. Therefore, the image of $a_i$
+in $R/I_i$ is the identity. So given any element
+$(\bar{b_1}, \ldots, \bar{b_r}) \in R/I_1 \times \ldots \times R/I_r$,
+the element $\sum_{i=1}^r a_i \cdot b_i$ is an inverse image in $R$.
+
+\medskip\noindent
+To see (2), by the very definition of being distinct maximal ideals, we have
+$\mathfrak{m}_a + \mathfrak{m}_b = R$ for $a \neq b$ and so the above applies.
+\end{proof}
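\noindent
For $R = \mathbf{Z}$ the elements $a_i$ of the proof can be produced by
the extended Euclidean algorithm. A sketch with the illustrative ideals
$I_1 = (3)$ and $I_2 = (5)$:

```python
# Chinese remainder for R = Z, I1 = (3), I2 = (5): find a1 in I2 and
# a2 in I1 with a1 + a2 = 1, then solve a pair of congruences via
# x = a1*b1 + a2*b2 as in the surjectivity part of the proof.

def egcd(a, b):
    """Return (g, u, v) with u*a + v*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = egcd(b, a % b)
    return g, v, u - (a // b) * v

m1, m2 = 3, 5
g, u, v = egcd(m1, m2)
assert g == 1                 # I1 + I2 = R
a1, a2 = v * m2, u * m1       # a1 in I2, a2 in I1
assert a1 + a2 == 1

# Solve x = 2 mod 3 and x = 4 mod 5.
b1, b2 = 2, 4
x = a1 * b1 + a2 * b2
assert x % m1 == b1 and x % m2 == b2
```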
+
+\begin{lemma}
+\label{lemma-matrix-left-inverse}
+Let $R$ be a ring. Let $n \geq m$. Let $A$ be an
+$n \times m$ matrix with coefficients in $R$. Let $J \subset R$
+be the ideal generated by the $m \times m$ minors of $A$.
+\begin{enumerate}
+\item For any $f \in J$ there exists a $m \times n$ matrix $B$
+such that $BA = f 1_{m \times m}$.
+\item If $f \in R$ and $BA = f 1_{m \times m}$ for some $m \times n$ matrix
+$B$, then $f^m \in J$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For $I \subset \{1, \ldots, n\}$ with $|I| = m$, we denote
+by $E_I$ the $m \times n$ matrix of the projection
+$$
+R^{\oplus n} = \bigoplus\nolimits_{i \in \{1, \ldots, n\}} R
+\longrightarrow \bigoplus\nolimits_{i \in I} R
+$$
+and set $A_I = E_I A$, i.e., $A_I$ is the $m \times m$ matrix
+whose rows are the rows of $A$ with indices in $I$.
+Let $B_I$ be the adjugate (transpose of
+cofactor) matrix to $A_I$, i.e., such that
+$A_I B_I = B_I A_I = \det(A_I) 1_{m \times m}$.
+The $m \times m$ minors of $A$ are the determinants $\det A_I$
+for all the $I \subset \{1, \ldots, n\}$ with $|I| = m$.
+If $f \in J$ then we can write $f = \sum c_I \det(A_I)$ for some
+$c_I \in R$. Set $B = \sum c_I B_I E_I$ to see that (1) holds.
+
+\medskip\noindent
+If $f 1_{m \times m} = BA$ then by the
+Cauchy-Binet formula (\ref{item-cauchy-binet}) we
+have $f^m = \sum b_I \det(A_I)$ where $b_I$ is the determinant
+of the $m \times m$ matrix whose columns are the columns of $B$ with
+indices in $I$.
+\end{proof}
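\noindent
Part (1) of the construction, $B = \sum c_I B_I E_I$, can be checked
with a computer algebra system. A sketch using SymPy for a concrete
$3 \times 2$ matrix of our own choosing, taking $f = \det(A_I)$ for a
single subset $I$:

```python
from sympy import Matrix, eye

# A 3x2 matrix over Z; take f = det(A_I) for I = {rows 1, 2} and
# build B = B_I * E_I as in the proof, so that B*A = f * identity.
A = Matrix([[1, 2], [3, 5], [7, 11]])
E_I = Matrix([[1, 0, 0], [0, 1, 0]])   # projection onto rows 1, 2
A_I = E_I * A
B_I = A_I.adjugate()                   # B_I * A_I = det(A_I) * identity
f = A_I.det()
B = B_I * E_I
assert B * A == f * eye(2)
```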
+
+\begin{lemma}
+\label{lemma-matrix-right-inverse}
+Let $R$ be a ring. Let $n \geq m$. Let $A = (a_{ij})$ be an
+$n \times m$ matrix with coefficients in $R$, written in block form
+as
+$$
+A =
+\left(
+\begin{matrix}
+A_1 \\
+A_2
+\end{matrix}
+\right)
+$$
+where $A_1$ has size $m \times m$. Let $B$ be the adjugate (transpose of
+cofactor) matrix to $A_1$. Then
+$$
+AB =
+\left(
+\begin{matrix}
+f 1_{m \times m} \\
+C
+\end{matrix}
+\right)
+$$
+where $f = \det(A_1)$ and $c_{ij}$ is (up to sign) the determinant of the
+$m \times m$ minor of $A$ corresponding to the rows
+$1, \ldots, \hat j, \ldots, m, i$.
+\end{lemma}
+
+\begin{proof}
+Since the adjugate has the property $A_1B = B A_1 = f 1_{m \times m}$,
+the first block of the expression for $AB$ is correct. Note that
+$$
+c_{ij} = \sum\nolimits_k a_{ik}b_{kj} = \sum (-1)^{j + k}a_{ik} \det(A_1^{jk})
+$$
+where $A_1^{jk}$ means $A_1$ with the $j$th row and $k$th column removed.
+This last expression is the row expansion of the determinant of the matrix
+in the statement of the lemma.
+\end{proof}
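\noindent
The block structure of $AB$ can likewise be checked in SymPy; the
$3 \times 2$ matrix below is an illustrative choice, and the two
bottom entries are compared against the corresponding $2 \times 2$
minors of $A$ up to sign.

```python
from sympy import Matrix, eye

# Check the block structure of A*B for a 3x2 matrix A: the top block is
# det(A1) times the identity, and the bottom row consists (up to sign)
# of 2x2 minors of A on the rows described in the lemma.
A = Matrix([[1, 2], [3, 5], [7, 11]])
A1 = A[:2, :]                 # top m x m block
B = A1.adjugate()
f = A1.det()
AB = A * B
assert AB[:2, :] == f * eye(2)

minor_23 = Matrix.vstack(A.row(1), A.row(2)).det()  # rows 2, 3
minor_13 = Matrix.vstack(A.row(0), A.row(2)).det()  # rows 1, 3
assert AB[2, 0] == -minor_23 and AB[2, 1] == minor_13
```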
+
+\begin{lemma}
+\label{lemma-map-cannot-be-injective}
+\begin{slogan}
+A map of finite free modules cannot be injective if the source has
+rank bigger than the target.
+\end{slogan}
+Let $R$ be a nonzero ring. Let $n \geq 1$. Let $M$ be an $R$-module generated
+by $< n$ elements. Then any $R$-module map $f : R^{\oplus n} \to M$ has a
+nonzero kernel.
+\end{lemma}
+
+\begin{proof}
+Choose a surjection $R^{\oplus n - 1} \to M$.
+We may lift the map $f$ to a map $f' : R^{\oplus n} \to R^{\oplus n - 1}$
+(Lemma \ref{lemma-lift-map}).
+It suffices to prove $f'$ has a nonzero kernel.
+The map $f' : R^{\oplus n} \to R^{\oplus n - 1}$ is given by a
+matrix $A = (a_{ij})$. If one of the $a_{ij}$ is not nilpotent, say
+$a = a_{ij}$, then we can replace $R$ by the localization $R_a$ and
+assume $a_{ij}$ is a unit. This is allowed: if we find a nonzero kernel
+after localization, then there was a nonzero kernel to start with, as
+localization is exact, see Proposition \ref{proposition-localization-exact}.
+In this case we can do a base change on both $R^{\oplus n}$
+and $R^{\oplus n - 1}$ and reduce to the case where
+$$
+A =
+\left(
+\begin{matrix}
+1 & 0 & 0 & \ldots \\
+0 & a_{22} & a_{23} & \ldots \\
+0 & a_{32} & \ldots \\
+\ldots & \ldots
+\end{matrix}
+\right)
+$$
+Hence in this case we win by induction on $n$. If not then each
+$a_{ij}$ is nilpotent. Set $I = (a_{ij}) \subset R$. Note that
+$I^{m + 1} = 0$ for some $m \geq 0$. Let $m$ be the largest integer
+such that $I^m \not = 0$. Then we see that $(I^m)^{\oplus n}$ is
+contained in the kernel of the map and we win.
+\end{proof}
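\noindent
The nilpotent case at the end of the proof can be made concrete over
$R = \mathbf{Z}/8$ (an illustrative choice), where a matrix with all
entries equal to $2$ has nilpotent entries and $I = (2)$ satisfies
$I^2 \neq 0$, $I^3 = 0$.

```python
# Over R = Z/8 the matrix A = (2 2) of a map R^2 -> R^1 has nilpotent
# entries.  With I = (2), the largest m with I^m != 0 is m = 2, and
# every vector with entries in I^2 = {0, 4} lies in the kernel, as at
# the end of the proof.
MOD = 8
A = [[2, 2]]

def apply_matrix(A, v):
    return [sum(a * x for a, x in zip(row, v)) % MOD for row in A]

assert pow(2, 3, MOD) == 0            # 2 is nilpotent in Z/8
assert (4 % MOD) != 0                 # I^2 = (4) is nonzero
for v in [[4, 0], [0, 4], [4, 4]]:    # vectors with entries in I^2
    assert apply_matrix(A, v) == [0]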
+
+\begin{lemma}
+\label{lemma-rank}
+\begin{slogan}
+The rank of a finite free module is well defined.
+\end{slogan}
+Let $R$ be a nonzero ring. Let $n, m \geq 0$ be integers.
+If $R^{\oplus n}$ is isomorphic to $R^{\oplus m}$ as
+$R$-modules, then $n = m$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-map-cannot-be-injective}.
+\end{proof}
+
+
+
+
+
+\section{Cayley-Hamilton}
+\label{section-cayley-hamilton}
+
+\begin{lemma}
+\label{lemma-charpoly}
+Let $R$ be a ring. Let $A = (a_{ij})$ be an $n \times n$
+matrix with coefficients in $R$. Let $P(x) \in R[x]$
+be the characteristic polynomial of $A$ (defined
+as $\det(x\text{id}_{n \times n} - A)$).
+Then $P(A) = 0$ in $\text{Mat}(n \times n, R)$.
+\end{lemma}
+
+\begin{proof}
+We reduce the question to the well-known Cayley-Hamilton
+theorem from linear algebra in several steps:
+\begin{enumerate}
+\item If $\phi :S \rightarrow R$ is a ring morphism and $b_{ij}$
+are inverse images of the $a_{ij}$ under this map, then it suffices
+to show the statement for $S$ and $(b_{ij})$ since $\phi$ is a ring morphism.
+\item If $\psi :R \hookrightarrow S$ is an injective ring morphism, it
+clearly suffices to show the result for $S$ and the $a_{ij}$ considered as
+elements of $S$.
+\item Thus we may first reduce to the case $R = \mathbf{Z}[X_{ij}]$,
+$a_{ij} = X_{ij}$ of a polynomial ring and then further to
+the case $R = \mathbf{Q}(X_{ij})$ where we may finally apply Cayley-Hamilton.
+\end{enumerate}
+\end{proof}
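\noindent
The reduction to the polynomial ring $\mathbf{Z}[X_{ij}]$ can be tested
mechanically: the sketch below (using SymPy, our choice of tool)
verifies $P(A) = 0$ for a $2 \times 2$ matrix of independent
indeterminates, evaluating the characteristic polynomial at $A$ by
Horner's scheme.

```python
from sympy import Matrix, symbols, zeros, eye, expand

# Cayley-Hamilton over the polynomial ring Z[a, b, c, d].
a, b, c, d, x = symbols('a b c d x')
A = Matrix([[a, b], [c, d]])
p = A.charpoly(x)                 # det(x*id - A) as a PurePoly in x
coeffs = p.all_coeffs()           # leading coefficient first

P_of_A = zeros(2, 2)
for coef in coeffs:               # Horner: ((1*A + c1)*A + c2)*...
    P_of_A = P_of_A * A + coef * eye(2)

assert P_of_A.applyfunc(expand) == zeros(2, 2)
```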
+
+\begin{lemma}
+\label{lemma-charpoly-module}
+Let $R$ be a ring.
+Let $M$ be a finite $R$-module.
+Let $\varphi : M \to M$ be an endomorphism.
+Then there exists a monic polynomial $P \in R[T]$ such that
+$P(\varphi) = 0$ as an endomorphism of $M$.
+\end{lemma}
+
+\begin{proof}
+Choose a surjective $R$-module map $R^{\oplus n} \to M$, given by
+$(a_1, \ldots, a_n) \mapsto \sum a_ix_i$ for some generators $x_i \in M$.
+Choose $(a_{i1}, \ldots, a_{in}) \in R^{\oplus n}$ such that
+$\varphi(x_i) = \sum a_{ij} x_j$. In other words the diagram
+$$
+\xymatrix{
+R^{\oplus n} \ar[d]_A \ar[r] & M \ar[d]^\varphi \\
+R^{\oplus n} \ar[r] & M
+}
+$$
+is commutative where $A = (a_{ij})$. By
+Lemma \ref{lemma-charpoly}
+there exists a monic polynomial $P$ such that $P(A) = 0$.
+Then it follows that $P(\varphi) = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-charpoly-module-ideal}
+Let $R$ be a ring. Let $I \subset R$ be an ideal.
+Let $M$ be a finite $R$-module.
+Let $\varphi : M \to M$ be an endomorphism such
+that $\varphi(M) \subset IM$.
+Then there exists a monic polynomial
+$P = t^n + a_1 t^{n - 1} + \ldots + a_n \in R[t]$
+such that $a_j \in I^j$ and $P(\varphi) = 0$ as an endomorphism of $M$.
+\end{lemma}
+
+\begin{proof}
+Choose a surjective $R$-module map $R^{\oplus n} \to M$, given by
+$(a_1, \ldots, a_n) \mapsto \sum a_ix_i$ for some generators $x_i \in M$.
+Choose $(a_{i1}, \ldots, a_{in}) \in I^{\oplus n}$ such that
+$\varphi(x_i) = \sum a_{ij} x_j$. In other words the diagram
+$$
+\xymatrix{
+R^{\oplus n} \ar[d]_A \ar[r] & M \ar[d]^\varphi \\
+I^{\oplus n} \ar[r] & M
+}
+$$
+is commutative where $A = (a_{ij})$. By
+Lemma \ref{lemma-charpoly}
+the polynomial
+$P(t) = \det(t\text{id}_{n \times n} - A)$
+has all the desired properties.
+\end{proof}
+
+\noindent
+As a fun example application we prove the following surprising lemma.
+
+\begin{lemma}
+\label{lemma-fun}
+Let $R$ be a ring.
+Let $M$ be a finite $R$-module.
+Let $\varphi : M \to M$ be a surjective $R$-module map.
+Then $\varphi$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}[First proof]
+Write $R' = R[x]$ and think of $M$ as a finite $R'$-module with
+$x$ acting via $\varphi$. Set $I = (x) \subset R'$. By our assumption that
+$\varphi$ is surjective we have $IM = M$. Hence we may apply
+Lemma \ref{lemma-charpoly-module-ideal}
+to $M$ as an $R'$-module, the ideal $I$ and the endomorphism $\text{id}_M$.
+We conclude that
+$(1 + a_1 + \ldots + a_n)\text{id}_M = 0$ with $a_j \in I$.
+Write $a_j = b_j(x)x$ for some $b_j(x) \in R[x]$.
+Translating back into $\varphi$ we see that
+$\text{id}_M = -(\sum_{j = 1, \ldots, n} b_j(\varphi)) \varphi$, and hence
+$\varphi$ is invertible.
+\end{proof}
+
+\begin{proof}[Second proof]
+We perform induction on the number of generators of $M$ over $R$. If
+$M$ is generated by one element, then $M \cong R/I$ for some ideal
+$I \subset R$. In this case we may replace $R$ by $R/I$ so that $M = R$.
+In this case $\varphi : R \to R$ is given by multiplication on $M$ by an
+element $r \in R$. The surjectivity of $\varphi$ forces $r$ to be
+invertible, since $\varphi$ must hit $1$, and hence $\varphi$ is
+invertible.
+
+\medskip\noindent
+Now assume that we have proven the lemma in the case of modules
+generated by $n - 1$ elements, and are examining a module $M$ generated
+by $n$ elements. Let $A$ mean the ring $R[t]$, and regard the module
+$M$ as an $A$-module by letting $t$ act via $\varphi$; since $M$ is
+finite over $R$, it is finite over $R[t]$ as well, and since we're
+trying to prove $\varphi$ injective, a set-theoretic property, we might
+as well prove the endomorphism $t : M \to M$ over $A$ injective. We have
+reduced our problem to the case our endomorphism is multiplication by
+an element of the ground ring. Let $M' \subset M$ denote the
+sub-$A$-module generated by the first $n - 1$ of the generators of $M$,
+and consider the diagram
+$$
+\xymatrix{
+0 \ar[r] & M' \ar[r]\ar[d]^{\varphi\mid_{M'}} & M\ar[d]^\varphi \ar[r] &
+M/M' \ar[d]^{\varphi \bmod M'} \ar[r] & 0 \\
+0 \ar[r] & M' \ar[r] & M \ar[r]
+ & M/M' \ar[r] & 0,
+}
+$$
+where the restriction of $\varphi$ to $M'$ and the map induced by $\varphi$
+on the quotient $M/M'$ are well-defined since $\varphi$ is multiplication
+by an element in the base, and $M'$ and $M/M'$ are $A$-modules in
+their own right. By the case $n = 1$ the map $M/M' \to M/M'$ is an
+isomorphism. A diagram chase implies that $\varphi|_{M'}$ is surjective
+hence by induction $\varphi|_{M'}$ is an isomorphism. This forces the
+middle column to be an isomorphism by the snake lemma.
+\end{proof}
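\noindent
For a finite abelian group viewed as a $\mathbf{Z}$-module the lemma
can be verified by brute force. A sketch with the illustrative choice
$M = \mathbf{Z}/12$ and $\varphi$ given by multiplication by $5$:

```python
# Lemma check for the finite Z-module M = Z/12 with phi = multiplication
# by 5 (a unit mod 12, so phi is surjective).  The lemma predicts phi is
# then automatically injective, i.e. an isomorphism.
M = list(range(12))

def phi(m):
    return (5 * m) % 12

image = {phi(m) for m in M}
assert image == set(M)              # phi is surjective ...
kernel = [m for m in M if phi(m) == 0]
assert kernel == [0]                # ... and indeed injective
```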
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{The spectrum of a ring}
+\label{section-spectrum-ring}
+
+\noindent
+We arbitrarily decide that the spectrum of a ring as a topological space
+is part of the algebra chapter, whereas an affine scheme is part of the
+chapter on schemes.
+
+\begin{definition}
+\label{definition-spectrum-ring}
+Let $R$ be a ring.
+\begin{enumerate}
+\item The {\it spectrum} of $R$ is the set of prime ideals of $R$.
+It is usually denoted $\Spec(R)$.
+\item Given a subset $T \subset R$ we let $V(T) \subset \Spec(R)$
+be the set of primes containing $T$, i.e., $V(T) = \{ \mathfrak p \in
+\Spec(R) \mid \forall f\in T, f\in \mathfrak p\}$.
+\item Given an element $f \in R$ we let $D(f) \subset \Spec(R)$
+be the set of primes not containing $f$.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-Zariski-topology}
+Let $R$ be a ring.
+\begin{enumerate}
+\item The spectrum of a ring $R$ is empty if and only if $R$
+is the zero ring.
+\item Every nonzero ring has a maximal ideal.
+\item Every nonzero ring has a minimal prime ideal.
+\item Given an ideal $I \subset R$ and a prime ideal
+$I \subset \mathfrak p$ there exists a prime
+$I \subset \mathfrak q \subset \mathfrak p$ such
+that $\mathfrak q$ is minimal over $I$.
+\item If $T \subset R$, and if $(T)$ is the ideal generated by
+$T$ in $R$, then $V((T)) = V(T)$.
+\item If $I$ is an ideal and $\sqrt{I}$ is its radical,
+see basic notion (\ref{item-radical-ideal}), then $V(I) = V(\sqrt{I})$.
+\item Given an ideal $I$ of $R$ we have $\sqrt{I} =
+\bigcap_{I \subset \mathfrak p} \mathfrak p$.
+\item If $I$ is an ideal then $V(I) = \emptyset$ if and only
+if $I$ is the unit ideal.
+\item If $I$, $J$ are ideals of $R$ then $V(I) \cup V(J) =
+V(I \cap J)$.
+\item If $(I_a)_{a\in A}$ is a set of ideals of $R$ then
+$\bigcap_{a\in A} V(I_a) = V(\bigcup_{a\in A} I_a)$.
+\item If $f \in R$, then $D(f) \amalg V(f) = \Spec(R)$.
+\item If $f \in R$ then $D(f) = \emptyset$ if and only if $f$
+is nilpotent.
+\item If $f = u f'$ for some unit $u \in R$, then $D(f) = D(f')$.
+\item If $I \subset R$ is an ideal, and $\mathfrak p$ is a prime of
+$R$ with $\mathfrak p \not\in V(I)$, then there exists an $f \in R$
+such that $\mathfrak p \in D(f)$, and $D(f) \cap V(I) = \emptyset$.
+\item If $f, g \in R$, then $D(fg) = D(f) \cap D(g)$.
+\item If $f_i \in R$ for $i \in I$, then
+$\bigcup_{i\in I} D(f_i)$ is the complement of $V(\{f_i \}_{i\in I})$
+in $\Spec(R)$.
+\item If $f \in R$ and $D(f) = \Spec(R)$, then $f$ is a unit.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We address each part in the corresponding item below.
+\begin{enumerate}
+\item This is a direct consequence of (2) or (3).
+\item Let $\mathfrak{A}$ be the set of all proper ideals of $R$. This set is
+ordered by inclusion and is non-empty, since $(0) \in \mathfrak{A}$ is a proper
+ideal. Let $A$ be a totally ordered subset of $\mathfrak A$.
+Then $\bigcup_{I \in A} I$ is in
+fact an ideal. Since $1 \notin I$ for all $I \in A$, the union does not
+contain $1$ and thus is proper. Hence $\bigcup_{I \in A} I$ is in
+$\mathfrak{A}$ and is
+an upper bound for the set $A$. Thus by Zorn's lemma $\mathfrak{A}$ has a
+maximal element, which is the sought-after maximal ideal.
+\item Since $R$ is nonzero, it contains a maximal ideal which is a prime ideal.
+Thus the set $\mathfrak{A}$ of all prime ideals of $R$ is nonempty.
+$\mathfrak{A}$ is ordered by reverse-inclusion. Let $A$ be a totally ordered
+subset of $\mathfrak{A}$. It is clear that $J = \bigcap_{I \in A} I$ is
+in fact an ideal. It is less clear that it is prime. Let $xy \in J$.
+Then $xy \in I$ for all $I \in A$. Now let $B = \{I \in A \mid y \in I\}$.
+Let $K
+= \bigcap_{I \in B} I$. Since $A$ is totally ordered, either $K = J$ (and we're
+done, since then $y \in J$) or $K \supset J$ and for all $I \in A$ such that
+$I$ is properly contained in $K$, we have $y \notin I$. But that means
+that $x \in I$ for all those $I$, since they are prime. Hence $x \in J$.
+In either case,
+$J$ is prime as desired. Hence by Zorn's lemma we get a maximal element which
+in this case is a minimal prime ideal.
+\item This is exactly the same argument as in (3), except that one only
+considers prime ideals contained in $\mathfrak{p}$ and containing $I$.
+\item $(T)$ is the smallest ideal containing $T$. Hence if $T \subset I$, some
+ideal, then $(T) \subset I$ as well. Hence if $I \in V(T)$, then $I \in V((T))$
+as well. The other inclusion is obvious.
+\item Since $I \subset \sqrt{I}$, we have $V(\sqrt{I}) \subset V(I)$.
+Now let $\mathfrak{p} \in V(I)$. Let $x \in \sqrt{I}$. Then $x^n \in I$
+for some $n$. Hence $x^n \in \mathfrak{p}$. But since $\mathfrak{p}$ is
+prime, an easy induction argument shows that $x \in \mathfrak{p}$.
+Hence $\sqrt{I} \subset
+\mathfrak{p}$ and $\mathfrak{p} \in V(\sqrt{I})$.
+\item Let $f \in R \setminus \sqrt{I}$. Then $f^n \notin I$ for all $n$. Hence
+$S = \{1, f, f^2, \ldots\}$ is a multiplicative subset, not containing $0$.
+Take a
+prime ideal $\bar{\mathfrak{p}} \subset S^{-1}R$ containing $S^{-1}I$. Then the
+pull-back $\mathfrak{p}$ in $R$ of $\bar{\mathfrak{p}}$ is a prime ideal
+containing $I$ that does not intersect $S$. This shows that $\bigcap_{I \subset
+\mathfrak p} \mathfrak p \subset \sqrt{I}$. Now if $a \in \sqrt{I}$, then $a^n
+\in I$ for some $n$. Hence if $I \subset \mathfrak{p}$, then $a^n \in
+\mathfrak{p}$. But since $\mathfrak{p}$ is prime, we have $a \in \mathfrak{p}$.
+Thus the equality is shown.
+\item $I$ is not the unit ideal if and only if $I$
+is contained in some maximal ideal (to
+see this, apply (2) to the ring $R/I$) which is therefore prime.
+\item If $\mathfrak{p} \in V(I) \cup V(J)$, then $I \subset \mathfrak{p}$ or $J
+\subset \mathfrak{p}$ which means that $I \cap J \subset \mathfrak{p}$. Now if
+$I \cap J \subset \mathfrak{p}$, then $IJ \subset \mathfrak{p}$ and hence
+either $I$ or $J$ is contained in $\mathfrak{p}$, since $\mathfrak{p}$
+is prime.
+\item $\mathfrak{p} \in \bigcap_{a \in A} V(I_a) \Leftrightarrow
+I_a \subset \mathfrak{p}, \forall a \in A \Leftrightarrow
+\mathfrak{p} \in V(\bigcup_{a\in A} I_a)$
+\item If $\mathfrak{p}$ is a prime ideal and $f \in R$, then exactly
+one of $f \in \mathfrak{p}$ and $f \notin \mathfrak{p}$ holds, which is
+what the disjoint union says.
+\item If $a \in R$ is nilpotent, then $a^n = 0$ for some $n$. Hence $a^n \in
+\mathfrak{p}$ for any prime ideal. Thus $a \in \mathfrak{p}$ as can be shown by
+induction and $D(a) = \emptyset$. Now, as shown in (7), if $a \in R$ is not
+nilpotent, then there is a prime ideal that does not contain it.
+\item $f \in \mathfrak{p} \Leftrightarrow uf \in \mathfrak{p}$, since $u$ is
+invertible.
+\item If $\mathfrak{p} \notin V(I)$, then there exists an
+$f \in I \setminus \mathfrak{p}$, so that $\mathfrak{p} \in D(f)$.
+Moreover, if $\mathfrak{q} \in D(f)$, then $f \notin \mathfrak{q}$ and
+thus $I$ is not contained in $\mathfrak{q}$.
+Thus $D(f) \cap V(I) = \emptyset$.
+\item If $fg \in \mathfrak{p}$, then $f \in \mathfrak{p}$ or $g \in
+\mathfrak{p}$. Hence if $f \notin \mathfrak{p}$ and $g \notin \mathfrak{p}$,
+then $fg \notin \mathfrak{p}$. Since $\mathfrak{p}$ is an ideal, if $fg \notin
+\mathfrak{p}$, then $f \notin \mathfrak{p}$ and $g \notin \mathfrak{p}$.
+\item $\mathfrak{p} \in \bigcup_{i \in I} D(f_i) \Leftrightarrow \exists i \in
+I, f_i \notin \mathfrak{p} \Leftrightarrow \mathfrak{p} \in \Spec(R)
+\setminus V(\{f_i\}_{i \in I})$
+\item If $D(f) = \Spec(R)$, then $V(f) = \emptyset$ and
+hence $fR = R$, so $f$ is a unit.
+\end{enumerate}
+\end{proof}
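\noindent
For $R = \mathbf{Z}$ and $n \neq 0$ the set $V(n)$ is the finite set of
primes dividing $n$, which makes several parts of the lemma directly
checkable. A sketch using SymPy's \texttt{primefactors}, with $f = 12$
and $g = 45$ as illustrative choices:

```python
from math import lcm
from sympy import primefactors

# For R = Z and nonzero n, V(n) = {primes p dividing n}.  Taking
# complements, V(fg) = V(f) ∪ V(g) restates part (15),
# D(fg) = D(f) ∩ D(g); and since (f) ∩ (g) = (lcm(f, g)), the identity
# V(lcm(f, g)) = V(f) ∪ V(g) restates part (9).

def V(n):
    return set(primefactors(n))

f, g = 12, 45
assert V(f * g) == V(f) | V(g)
assert V(lcm(f, g)) == V(f) | V(g)
```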
+
+\noindent
+The lemma implies that the subsets $V(T)$ from
+Definition \ref{definition-spectrum-ring} form the closed
+subsets of a topology on $\Spec(R)$. And it also shows that
+the sets $D(f)$ are open and form a basis for this
+topology.
+
+\begin{definition}
+\label{definition-Zariski-topology}
+Let $R$ be a ring.
+The topology on $\Spec(R)$ whose closed sets are the
+sets $V(T)$ is called the {\it Zariski} topology. The open
+subsets $D(f)$ are called the {\it standard opens} of $\Spec(R)$.
+\end{definition}
+
+\noindent
+It should be clear from context whether we consider $\Spec(R)$
+just as a set or as a topological space.
+
+\begin{lemma}
+\label{lemma-spec-functorial}
+\begin{slogan}
+Functoriality of the spectrum
+\end{slogan}
+Suppose that $\varphi : R \to R'$ is a ring homomorphism.
+The induced map
+$$
+\Spec(\varphi) : \Spec(R') \longrightarrow \Spec(R),
+\quad
+\mathfrak p' \longmapsto \varphi^{-1}(\mathfrak p')
+$$
+is continuous for the Zariski topologies. In fact, for any
+element $f \in R$ we have
+$\Spec(\varphi)^{-1}(D(f)) = D(\varphi(f))$.
+\end{lemma}
+
+\begin{proof}
+It is basic notion (\ref{item-inverse-image-prime}) that
+$\mathfrak p := \varphi^{-1}(\mathfrak p')$
+is indeed a prime ideal of $R$. The last assertion
+of the lemma follows directly from the definitions,
+and implies the first.
+\end{proof}
+
+\noindent
+If $\varphi' : R' \to R''$ is a second ring homomorphism
+then the composition
+$$
+\Spec(R'')
+\longrightarrow
+\Spec(R')
+\longrightarrow
+\Spec(R)
+$$
+equals $\Spec(\varphi' \circ \varphi)$. In other
+words, $\Spec$ is a contravariant functor from the
+category of rings to the category of topological spaces.
+
+\begin{lemma}
+\label{lemma-spec-localization}
+Let $R$ be a ring. Let $S \subset R$ be a multiplicative subset.
+The map $R \to S^{-1}R$ induces via the functoriality of $\Spec$
+a homeomorphism
+$$
+\Spec(S^{-1}R)
+\longrightarrow
+\{\mathfrak p \in \Spec(R) \mid S \cap \mathfrak p = \emptyset \}
+$$
+where the topology on the right hand side is that induced from the
+Zariski topology on $\Spec(R)$. The inverse map is given
+by $\mathfrak p \mapsto S^{-1}\mathfrak p$.
+\end{lemma}
+
+\begin{proof}
+Denote the right hand side of the arrow of the lemma by $D$.
+Choose a prime $\mathfrak p' \subset S^{-1}R$ and let $\mathfrak p$ be
+the inverse image of $\mathfrak p'$ in $R$. Since $\mathfrak p'$
+does not contain $1$ we see that $\mathfrak p$ does not contain
+any element of $S$. Hence $\mathfrak p \in D$ and we see that
+the image is contained in $D$. Let $\mathfrak p \in D$.
+By assumption the image $\overline{S}$ does not contain $0$.
+By basic notion (\ref{item-localization-zero})
+$\overline{S}^{-1}(R/\mathfrak p)$ is not the zero ring.
+By basic notion (\ref{item-localize-ideal}) we see
+$S^{-1}R / S^{-1}\mathfrak p = \overline{S}^{-1}(R/\mathfrak p)$
+is a domain, and hence $S^{-1}\mathfrak p$ is a prime.
+The equality of rings also shows that the inverse image of
+$S^{-1}\mathfrak p$ in $R$ is equal to $\mathfrak p$,
+because $R/\mathfrak p \to \overline{S}^{-1}(R/\mathfrak p)$
+is injective by basic notion (\ref{item-localize-nonzerodivisors}).
+This proves that the map $\Spec(S^{-1}R) \to \Spec(R)$
+is bijective onto $D$ with inverse as given.
+It is continuous by Lemma \ref{lemma-spec-functorial}.
+Finally, let $D(g) \subset \Spec(S^{-1}R)$ be a standard
+open. Write $g = h/s$ for some $h\in R$ and $s\in S$.
+Since $g$ and $h/1$ differ by a unit we have $D(g) =
+D(h/1)$ in $\Spec(S^{-1}R)$.
+Hence by Lemma \ref{lemma-spec-functorial} and the bijectivity
+above the image of $D(g) = D(h/1)$ is $D \cap D(h)$.
+This proves the map is open as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-standard-open}
+Let $R$ be a ring. Let $f \in R$.
+The map $R \to R_f$ induces via the functoriality of
+$\Spec$ a homeomorphism
+$$
+\Spec(R_f) \longrightarrow D(f) \subset \Spec(R).
+$$
+The inverse is given by $\mathfrak p \mapsto \mathfrak p \cdot R_f$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-spec-localization}.
+\end{proof}
+
+\noindent
+It is not the case that every ``affine open'' of a
+spectrum is a standard open. See
+Example \ref{example-affine-open-not-standard}.
+
+\begin{lemma}
+\label{lemma-spec-closed}
+Let $R$ be a ring. Let $I \subset R$ be an ideal.
+The map $R \to R/I$ induces via the functoriality of
+$\Spec$ a homeomorphism
+$$
+\Spec(R/I) \longrightarrow V(I) \subset \Spec(R).
+$$
+The inverse is given by $\mathfrak p \mapsto \mathfrak p / I$.
+\end{lemma}
+
+\begin{proof}
+It is immediate that the image is contained in $V(I)$.
+On the other hand, if $\mathfrak p \in V(I)$
+then $\mathfrak p \supset I$ and we may consider
+the ideal $\mathfrak p /I \subset R/I$. Using
+basic notion (\ref{item-isomorphism-theorem}) we see that
+$(R/I)/(\mathfrak p/I) = R/\mathfrak p$ is a domain
+and hence $\mathfrak p/I$ is a prime ideal. From this
+it is immediately clear that the image of $D(f + I)$
+is $D(f) \cap V(I)$, and hence the map is a homeomorphism.
+\end{proof}
+
+
+
+\begin{remark}
+\label{remark-fundamental-diagram}
+A fundamental commutative diagram associated to a ring map
+$\varphi : R \to S$, a prime $\mathfrak q \subset S$ and
+the corresponding prime $\mathfrak p = \varphi^{-1}(\mathfrak q)$
+of $R$ is the following
+$$
+\xymatrix{
+\kappa(\mathfrak q) = S_{\mathfrak q}/{\mathfrak q}S_{\mathfrak q}
+&
+S_{\mathfrak q} \ar[l]
+&
+S \ar[r] \ar[l]
+&
+S/\mathfrak q \ar[r]
+&
+\kappa(\mathfrak q)
+\\
+\kappa(\mathfrak p) \otimes_R S =
+S_{\mathfrak p}/{\mathfrak p}S_{\mathfrak p} \ar[u]
+&
+S_{\mathfrak p} \ar[u] \ar[l]
+&
+S \ar[u] \ar[r] \ar[l]
+&
+S/\mathfrak pS \ar[u] \ar[r]
+&
+(R \setminus \mathfrak p)^{-1}S/\mathfrak pS \ar[u]
+\\
+\kappa(\mathfrak p) =
+R_{\mathfrak p}/{\mathfrak p}R_{\mathfrak p} \ar[u]
+&
+R_{\mathfrak p} \ar[u] \ar[l]
+&
+R \ar[u] \ar[r] \ar[l]
+&
+R/\mathfrak p \ar[u] \ar[r]
+&
+\kappa(\mathfrak p) \ar[u]
+}
+$$
+In this diagram the arrows in the outer left and outer right columns
+are identical. On the associated spectra, the horizontal maps always
+induce homeomorphisms onto their images. The lower two rows
+of the diagram make sense without assuming $\mathfrak q$ exists.
+The lower squares induce fibre squares of topological spaces.
+This diagram shows that $\mathfrak p$ is in the image
+of the map on Spec if and only if $S \otimes_R \kappa(\mathfrak p)$
+is not the zero ring.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-in-image}
+Let $\varphi : R \to S$ be a ring map. Let $\mathfrak p$
+be a prime of $R$. The following are equivalent
+\begin{enumerate}
+\item $\mathfrak p$ is in the image of
+$\Spec(S) \to \Spec(R)$,
+\item $S \otimes_R \kappa(\mathfrak p) \not = 0$,
+\item $S_{\mathfrak p}/\mathfrak p S_{\mathfrak p} \not = 0$,
+\item $(S/\mathfrak pS)_{\mathfrak p} \not = 0$, and
+\item $\mathfrak p = \varphi^{-1}(\mathfrak pS)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have already seen the equivalence of the first two
+in Remark \ref{remark-fundamental-diagram}. The others
+are just reformulations of this.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-compact}
+\begin{slogan}
+The spectrum of a ring is quasi-compact
+\end{slogan}
+Let $R$ be a ring. The space $\Spec(R)$ is quasi-compact.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove that any covering of $\Spec(R)$
+by standard opens can be refined by a finite covering.
+Thus suppose that $\Spec(R) = \cup D(f_i)$
+for a set of elements $\{f_i\}_{i\in I}$ of $R$. This means that
+$\cap V(f_i) = \emptyset$. According to Lemma
+\ref{lemma-Zariski-topology} this means that
+$V(\{f_i \}) = \emptyset$. According to the
+same lemma this means that the ideal generated
+by the $f_i$ is the unit ideal of $R$. This means
+that we can write $1$ as a {\it finite} sum:
+$1 = \sum_{i \in J} r_i f_i$ with $J \subset I$ finite.
+And then it follows that $\Spec(R)
+= \cup_{i \in J} D(f_i)$.
+\end{proof}
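\noindent
The key step, writing $1$ as a finite combination of the $f_i$, is
effective for $R = \mathbf{Z}$. A sketch with the illustrative cover
$\Spec(\mathbf{Z}) = D(4) \cup D(9)$:

```python
from math import gcd

# Spec(Z) = D(4) ∪ D(9): no prime divides both 4 and 9, i.e. the ideal
# (4, 9) is the unit ideal, and the proof's finite sum
# 1 = r1*f1 + r2*f2 can be written down explicitly.
f1, f2 = 4, 9
assert gcd(f1, f2) == 1
r1, r2 = -2, 1
assert r1 * f1 + r2 * f2 == 1
```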
+
+\begin{lemma}
+\label{lemma-topology-spec}
+Let $R$ be a ring.
+The topology on $X = \Spec(R)$ has the following properties:
+\begin{enumerate}
+\item $X$ is quasi-compact,
+\item $X$ has a basis for the topology consisting of quasi-compact opens, and
+\item the intersection of any two quasi-compact opens is quasi-compact.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The spectrum of a ring is quasi-compact, see
+Lemma \ref{lemma-quasi-compact}.
+It has a basis for the topology consisting of the standard opens
+$D(f) = \Spec(R_f)$
+(Lemma \ref{lemma-standard-open})
+which are quasi-compact by the first remark.
+The intersection of two standard opens is quasi-compact
+as $D(f) \cap D(g) = D(fg)$. Given any two quasi-compact opens
+$U, V \subset X$ we may write $U = D(f_1) \cup \ldots \cup D(f_n)$
+and $V = D(g_1) \cup \ldots \cup D(g_m)$. Then
+$U \cap V = \bigcup D(f_ig_j)$ which is quasi-compact.
+\end{proof}
+
+
+
+
+
+
+\section{Local rings}
+\label{section-local-rings}
+
+\noindent
+Local rings are the bread and butter of algebraic geometry.
+
+\begin{definition}
+\label{definition-local-ring}
+A {\it local ring} is a ring with exactly one maximal ideal.
+The maximal ideal is often denoted $\mathfrak m_R$ in this case.
+We often say ``let $(R, \mathfrak m, \kappa)$ be a local ring''
+to indicate that $R$ is local, $\mathfrak m$ is its unique maximal
+ideal and $\kappa = R/\mathfrak m$ is its residue field.
+A {\it local homomorphism of local rings} is a ring map
+$\varphi : R \to S$ such that $R$ and $S$ are local rings and such
+that $\varphi(\mathfrak m_R) \subset \mathfrak m_S$.
+If it is given that $R$ and $S$ are local rings, then the phrase
+``{\it local ring map $\varphi : R \to S$}'' means that $\varphi$
+is a local homomorphism of local rings.
+\end{definition}
+
+\noindent
+A field is a local ring. Any ring map between fields is a local homomorphism
+of local rings.
+
+\begin{lemma}
+\label{lemma-characterize-local-ring}
+Let $R$ be a ring. The following are equivalent:
+\begin{enumerate}
+\item $R$ is a local ring,
+\item $\Spec(R)$ has exactly one closed point,
+\item $R$ has a maximal ideal $\mathfrak m$
+and every element of $R \setminus \mathfrak m$
+is a unit, and
+\item $R$ is not the zero ring and for every $x \in R$ either $x$
+or $1 - x$ is invertible or both.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a ring, and $\mathfrak m$ a maximal ideal.
+If $x \in R \setminus \mathfrak m$, and $x$ is not a unit
+then there is a maximal ideal $\mathfrak m'$ containing $x$.
+Hence $R$ has at least two maximal ideals. Conversely,
+if $\mathfrak m'$ is another maximal ideal, then choose
+$x \in \mathfrak m'$, $x \not \in \mathfrak m$. Clearly
+$x$ is not a unit. This proves the equivalence of (1) and (3).
+The equivalence (1) and (2) is tautological.
+If $R$ is local then (4) holds: if $x \in \mathfrak m$, then
+$1 - x \not \in \mathfrak m$ is a unit by (3), and if
+$x \not \in \mathfrak m$, then $x$ itself is a unit by (3).
+If (4) holds, and $\mathfrak m$, $\mathfrak m'$ are distinct
+maximal ideals then we may choose $x \in R$ such that
+$x \bmod \mathfrak m' = 0$ and $x \bmod \mathfrak m = 1$
+by the Chinese remainder theorem
+(Lemma \ref{lemma-chinese-remainder}).
+This element $x$ is not invertible and neither is $1 - x$ which is
+a contradiction. Thus (4) and (1) are equivalent.
+\end{proof}
+
+\noindent
+The localization $R_\mathfrak p$ of a ring $R$ at a prime $\mathfrak p$
+is a local ring with maximal ideal $\mathfrak p R_\mathfrak p$. Namely,
+the quotient $R_\mathfrak p/\mathfrak pR_\mathfrak p$ is the fraction
+field of the domain $R/\mathfrak p$ and every element of $R_\mathfrak p$
+which is not contained in $\mathfrak pR_\mathfrak p$ is invertible.
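
\noindent
For example, localizing $\mathbf{Z}$ at the prime $(p)$ gives the local
ring $\mathbf{Z}_{(p)}$ of fractions $a/b$ with $b$ not divisible by $p$;
its maximal ideal is $p\mathbf{Z}_{(p)}$ and its residue field is
$\mathbf{F}_p$, the fraction field of the domain $\mathbf{Z}/(p)$.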
+
+\begin{lemma}
+\label{lemma-characterize-local-ring-map}
+Let $\varphi : R \to S$ be a ring map. Assume $R$ and $S$ are local rings.
+The following are equivalent:
+\begin{enumerate}
+\item $\varphi$ is a local ring map,
\item $\varphi(\mathfrak m_R) \subset \mathfrak m_S$,
\item $\varphi^{-1}(\mathfrak m_S) = \mathfrak m_R$, and
\item For any $x \in R$, if $\varphi(x)$ is invertible in $S$, then $x$
+is invertible in $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Conditions (1) and (2) are equivalent by definition.
+If (3) holds then (2) holds. Conversely, if (2) holds, then
+$\varphi^{-1}(\mathfrak m_S)$ is a prime ideal containing
+the maximal ideal $\mathfrak m_R$, hence
+$\varphi^{-1}(\mathfrak m_S) = \mathfrak m_R$. Finally, (4) is the
+contrapositive of (2) by Lemma \ref{lemma-characterize-local-ring}.
+\end{proof}
+
+\noindent
+Let $\varphi : R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime
+and set $\mathfrak p = \varphi^{-1}(\mathfrak q)$. Then the induced ring
+map $R_\mathfrak p \to S_\mathfrak q$ is a local ring map.
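
\noindent
For example, the inclusion $\mathbf{Z} \to \mathbf{Z}[i]$ with
$\mathfrak q = (1 + i)$ and $\mathfrak p = \varphi^{-1}(\mathfrak q) = (2)$
induces a local ring map $\mathbf{Z}_{(2)} \to \mathbf{Z}[i]_{(1 + i)}$.
By contrast, the inclusion $\mathbf{Z}_{(2)} \to \mathbf{Q}$ is a ring
map between local rings which is not a local ring map, since it sends
the maximal ideal $(2)$ into the units of $\mathbf{Q}$.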
+
+
+
+\section{The Jacobson radical of a ring}
+\label{section-radical}
+
+\noindent
+We recall that the {\it Jacobson radical} $\text{rad}(R)$ of a ring $R$
+is the intersection of all maximal ideals of $R$. If $R$ is local
+then $\text{rad}(R)$ is the maximal ideal of $R$.
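
\noindent
For example, $\text{rad}(\mathbf{Z}) = \bigcap\nolimits_p (p) = (0)$,
whereas for a field $k$ the power series ring $k[[t]]$ is local with
$\text{rad}(k[[t]]) = (t)$.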
+
+\begin{lemma}
+\label{lemma-contained-in-radical}
+Let $R$ be a ring with Jacobson radical $\text{rad}(R)$.
+Let $I \subset R$ be an ideal. The following are
+equivalent
+\begin{enumerate}
+\item $I \subset \text{rad}(R)$, and
+\item every element of $1 + I$ is a unit in $R$.
+\end{enumerate}
+In this case every element of $R$ which maps to a unit of $R/I$ is a unit.
+\end{lemma}
+
+\begin{proof}
+If $f \in \text{rad}(R)$, then $f \in \mathfrak m$ for all
+maximal ideals $\mathfrak m$ of $R$. Hence $1 + f \not \in \mathfrak m$
+for all maximal ideals $\mathfrak m$ of $R$. Thus the closed
+subset $V(1 + f)$ of $\Spec(R)$ is empty. This implies
+that $1 + f$ is a unit, see Lemma \ref{lemma-Zariski-topology}.
+
+\medskip\noindent
+Conversely, assume that $1 + f$ is a unit for all $f \in I$.
+If $\mathfrak m$ is a maximal ideal and $I \not \subset \mathfrak m$,
+then $I + \mathfrak m = R$. Hence $1 = f + g$ for some $g \in \mathfrak m$
and $f \in I$. Then $g = 1 + (-f) \in 1 + I$ lies in $\mathfrak m$ and
hence is not a unit, a contradiction.
+
+\medskip\noindent
+For the final statement let $f \in R$ map to a unit in $R/I$.
+Then we can find $g \in R$ mapping to the multiplicative inverse
+of $f \bmod I$. Then $fg = 1 \bmod I$. Hence $fg$ is a unit of $R$
+by (2) which implies that $f$ is a unit.
+\end{proof}
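
\noindent
For example, in $\mathbf{Z}/4\mathbf{Z}$ we have
$\text{rad}(\mathbf{Z}/4\mathbf{Z}) = (2)$ and every element of
$1 + (2) = \{1, 3\}$ is a unit. By contrast, the ideal $(2)$ of
$\mathbf{Z}$ is not contained in $\text{rad}(\mathbf{Z}) = (0)$, and
correspondingly $3 \in 1 + (2)$ is not a unit of $\mathbf{Z}$.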
+
+\begin{lemma}
+\label{lemma-surjective-on-spec-units}
+Let $\varphi : R \to S$ be a ring map such that the induced map
+$\Spec(S) \to \Spec(R)$ is surjective. Then an element $x \in R$
+is a unit if and only if $\varphi(x) \in S$ is a unit.
+\end{lemma}
+
+\begin{proof}
+If $x$ is a unit, then so is $\varphi(x)$. Conversely, if $\varphi(x)$
+is a unit, then $\varphi(x) \not \in \mathfrak q$ for all
+$\mathfrak q \in \Spec(S)$. Hence
+$x \not \in \varphi^{-1}(\mathfrak q) = \Spec(\varphi)(\mathfrak q)$
+for all $\mathfrak q \in \Spec(S)$. Since $\Spec(\varphi)$ is surjective
+we conclude that $x$ is a unit by
+part (17) of Lemma \ref{lemma-Zariski-topology}.
+\end{proof}
+
+
+
+
+\section{Nakayama's lemma}
+\label{section-nakayama}
+
+
+\begin{lemma}[Nakayama's lemma]
+\label{lemma-NAK}
+\begin{reference}
+\cite[1.M Lemma (NAK) page 11]{MatCA}
+\end{reference}
+\begin{history}
+We quote from \cite{MatCA}: ``This simple but
+important lemma is due to T.~Nakayama, G.~Azumaya and W.~Krull. Priority
+is obscure, and although it is usually called the Lemma of Nakayama, late
+Prof.~Nakayama did not like the name.''
+\end{history}
+Let $R$ be a ring with Jacobson radical $\text{rad}(R)$.
+Let $M$ be an $R$-module. Let $I \subset R$
+be an ideal.
+\begin{enumerate}
+\item
+\label{item-nakayama}
+If $IM = M$ and $M$ is finite, then there exists an $f \in 1 + I$ such that
+$fM = 0$.
+\item If $IM = M$, $M$ is finite, and $I \subset \text{rad}(R)$, then $M = 0$.
+\item If $N, N' \subset M$, $M = N + IN'$, and $N'$ is finite,
+then there exists an $f \in 1 + I$ such that $fM \subset N$ and $M_f = N_f$.
+\item If $N, N' \subset M$, $M = N + IN'$, $N'$ is finite, and
+$I \subset \text{rad}(R)$, then $M = N$.
+\item If $N \to M$ is a module map, $N/IN \to M/IM$ is
+surjective, and $M$ is finite, then there exists an $f \in 1 + I$
+such that $N_f \to M_f$ is surjective.
+\item If $N \to M$ is a module map, $N/IN \to M/IM$ is
+surjective, $M$ is finite, and $I \subset \text{rad}(R)$,
+then $N \to M$ is surjective.
+\item If $x_1, \ldots, x_n \in M$ generate $M/IM$ and $M$ is finite,
+then there exists an $f \in 1 + I$ such that $x_1, \ldots, x_n$
+generate $M_f$ over $R_f$.
+\item If $x_1, \ldots, x_n \in M$ generate $M/IM$, $M$ is finite, and
+$I \subset \text{rad}(R)$, then $M$ is generated by $x_1, \ldots, x_n$.
+\item If $IM = M$, $I$ is nilpotent, then $M = 0$.
+\item If $N, N' \subset M$, $M = N + IN'$, and $I$ is nilpotent then $M = N$.
+\item If $N \to M$ is a module map, $I$ is nilpotent, and $N/IN \to M/IM$
+is surjective, then $N \to M$ is surjective.
+\item If $\{x_\alpha\}_{\alpha \in A}$ is a set of elements of $M$
+which generate $M/IM$ and $I$ is nilpotent, then $M$ is generated
+by the $x_\alpha$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (\ref{item-nakayama}). Choose generators $y_1, \ldots, y_m$ of $M$
+over $R$. For each $i$ we can write $y_i = \sum z_{ij} y_j$ with
+$z_{ij} \in I$ (since $M = IM$).
+In other words $\sum_j (\delta_{ij} - z_{ij})y_j = 0$.
+Let $f$ be the determinant of the $m \times m$ matrix
+$A = (\delta_{ij} - z_{ij})$. Note that $f \in 1 + I$
+(since the matrix $A$ is entrywise congruent to the
+$m \times m$ identity matrix modulo $I$).
+By Lemma \ref{lemma-matrix-left-inverse} (1),
+there exists an $m \times m$
+matrix $B$ such that $BA = f 1_{m \times m}$. Writing out we see that
+$\sum_{i} b_{hi} a_{ij} = f \delta_{hj}$ for all
+$h$ and $j$; hence, $\sum_{i, j} b_{hi} a_{ij} y_j
+= \sum_{j} f \delta_{hj} y_j = f y_h$ for every $h$.
+In other words, $0 = f y_h$ for every $h$ (since each
+$i$ satisfies $\sum_j a_{ij} y_j = 0$).
+This implies that $f$ annihilates $M$.
+
+\medskip\noindent
+By Lemma \ref{lemma-contained-in-radical} an element of $1 + \text{rad}(R)$ is
an invertible element of $R$. Hence we see that (\ref{item-nakayama}) implies
+(2). We obtain (3) by applying (1) to $M/N$ which is finite as $N'$ is finite.
+We obtain (4) by applying (2) to $M/N$ which is finite as $N'$ is finite.
+We obtain (5) by applying (3) to $M$ and the submodules $\Im(N \to M)$
+and $M$. We obtain (6) by applying (4) to $M$ and the submodules
+$\Im(N \to M)$ and $M$.
+We obtain (7) by applying (5) to the map $R^{\oplus n} \to M$,
+$(a_1, \ldots, a_n) \mapsto a_1x_1 + \ldots + a_nx_n$.
+We obtain (8) by applying (6) to the map $R^{\oplus n} \to M$,
+$(a_1, \ldots, a_n) \mapsto a_1x_1 + \ldots + a_nx_n$.
+
+\medskip\noindent
+Part (9) holds because if $M = IM$ then $M = I^nM$ for all $n \geq 0$
+and $I$ being nilpotent means $I^n = 0$ for some $n \gg 0$. Parts
+(10), (11), and (12) follow from (9) by the arguments used above.
+\end{proof}
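
\noindent
The finiteness hypotheses in Lemma \ref{lemma-NAK} cannot be dropped.
For example, take $R = \mathbf{Z}_{(p)}$, $I = \mathfrak m = (p)$, and
$M = \mathbf{Q}$. Then $I \subset \text{rad}(R)$ and $IM = M$, but
$M \not = 0$; of course $\mathbf{Q}$ is not a finite
$\mathbf{Z}_{(p)}$-module.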
+
+\begin{lemma}
+\label{lemma-NAK-localization}
+Let $R$ be a ring, let $S \subset R$ be a multiplicative subset,
+let $I \subset R$ be an ideal, and let $M$ be a finite $R$-module.
+If $x_1, \ldots, x_r \in M$ generate $S^{-1}(M/IM)$
+as an $S^{-1}(R/I)$-module, then there exists an $f \in S + I$
+such that $x_1, \ldots, x_r$ generate $M_f$ as an
+$R_f$-module.\footnote{Special cases: (I) $I = 0$. The lemma says
+if $x_1, \ldots, x_r$ generate $S^{-1}M$, then $x_1, \ldots, x_r$
+generate $M_f$ for some $f \in S$. (II) $I = \mathfrak p$ is
+a prime ideal and $S = R \setminus \mathfrak p$. The lemma says if
+$x_1, \ldots, x_r$ generate $M \otimes_R \kappa(\mathfrak p)$
+then $x_1, \ldots, x_r$ generate $M_f$ for some
+$f \in R$, $f \not \in \mathfrak p$.}
+\end{lemma}
+
+\begin{proof}
+Special case $I = 0$. Let $y_1, \ldots, y_s$ be generators for $M$ over $R$.
+Since $S^{-1}M$ is generated by $x_1, \ldots, x_r$, for each $i$
+we can write $y_i = \sum (a_{ij}/s_{ij})x_j$ for some $a_{ij} \in R$
+and $s_{ij} \in S$. Let $s \in S$ be the product
+of all of the $s_{ij}$. Then we see that $y_i$ is contained in the
+$R_s$-submodule of $M_s$ generated by $x_1, \ldots, x_r$.
+Hence $x_1, \ldots, x_r$ generates $M_s$.
+
+\medskip\noindent
+General case. By the special case, we can find an $s \in S$
+such that $x_1, \ldots, x_r$ generate $(M/IM)_s$ over $(R/I)_s$.
+By Lemma \ref{lemma-NAK} we can find a $g \in 1 + I_s \subset R_s$
+such that $x_1, \ldots, x_r$ generate $(M_s)_g$ over $(R_s)_g$.
+Write $g = 1 + i/s'$. Then $f = ss' + is$ works; details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-surjective-local}
+Let $A \to B$ be a local homomorphism of local rings.
+Assume
+\begin{enumerate}
+\item $B$ is finite as an $A$-module,
+\item $\mathfrak m_B$ is a finitely generated ideal,
+\item $A \to B$ induces an isomorphism on residue fields, and
+\item $\mathfrak m_A/\mathfrak m_A^2 \to \mathfrak m_B/\mathfrak m_B^2$
+is surjective.
+\end{enumerate}
+Then $A \to B$ is surjective.
+\end{lemma}
+
+\begin{proof}
+To show that $A \to B$ is surjective, we view it as a map of $A$-modules
+and apply Lemma \ref{lemma-NAK} (6). We conclude it suffices
+to show that $A/\mathfrak m_A \to B/\mathfrak m_AB$ is surjective.
+As $A/\mathfrak m_A = B/\mathfrak m_B$ it suffices to show that
+$\mathfrak m_AB \to \mathfrak m_B$ is surjective. View
+$\mathfrak m_AB \to \mathfrak m_B$ as a map of $B$-modules and apply
+Lemma \ref{lemma-NAK} (6). We conclude it suffices to see that
+$\mathfrak m_AB/\mathfrak m_A\mathfrak m_B \to \mathfrak m_B/\mathfrak m_B^2$
+is surjective. This follows from assumption (4).
+\end{proof}
+
+
+
+
+
+
+
+\section{Open and closed subsets of spectra}
+\label{section-open-and-closed}
+
+\noindent
+It turns out that open and closed subsets of a spectrum correspond to
+idempotents of the ring.
+
+\begin{lemma}
+\label{lemma-idempotent-spec}
+Let $R$ be a ring. Let $e \in R$ be an idempotent.
Then
+$$
+\Spec(R) = D(e) \amalg D(1-e).
+$$
+\end{lemma}
+
+\begin{proof}
+Note that an idempotent $e$ of a domain is either $1$ or $0$.
+Hence we see that
+\begin{eqnarray*}
+D(e)
+& = &
+\{ \mathfrak p \in \Spec(R)
+\mid
+e \not\in \mathfrak p \} \\
+& = &
+\{ \mathfrak p \in \Spec(R)
+\mid
+e \not = 0\text{ in }\kappa(\mathfrak p) \} \\
+& = &
+\{ \mathfrak p \in \Spec(R)
+\mid
+e = 1\text{ in }\kappa(\mathfrak p) \}
+\end{eqnarray*}
+Similarly we have
+\begin{eqnarray*}
+D(1-e)
+& = &
+\{ \mathfrak p \in \Spec(R)
+\mid
+1 - e \not\in \mathfrak p \} \\
+& = &
+\{ \mathfrak p \in \Spec(R)
+\mid
+e \not = 1\text{ in }\kappa(\mathfrak p) \} \\
+& = &
+\{ \mathfrak p \in \Spec(R)
+\mid
+e = 0\text{ in }\kappa(\mathfrak p) \}
+\end{eqnarray*}
Since the image of $e$ in any residue field is either $1$ or $0$
we deduce that $D(e)$ and $D(1-e)$ are disjoint and together cover
all of $\Spec(R)$.
+\end{proof}
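
\noindent
For example, $e = 3$ is an idempotent of $\mathbf{Z}/6\mathbf{Z}$ with
$1 - e = 4$, and the decomposition of the lemma reads
$\Spec(\mathbf{Z}/6\mathbf{Z}) = D(3) \amalg D(4) = \{(2)\} \amalg \{(3)\}$,
matching the isomorphism
$\mathbf{Z}/6\mathbf{Z} \cong \mathbf{Z}/2\mathbf{Z} \times \mathbf{Z}/3\mathbf{Z}$.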
+
+\begin{lemma}
+\label{lemma-spec-product}
+Let $R_1$ and $R_2$ be rings.
+Let $R = R_1 \times R_2$.
+The maps $R \to R_1$, $(x, y) \mapsto x$ and $R \to R_2$,
+$(x, y) \mapsto y$
+induce continuous maps $\Spec(R_1) \to \Spec(R)$ and
+$\Spec(R_2) \to \Spec(R)$.
+The induced map
+$$
+\Spec(R_1) \amalg \Spec(R_2)
+\longrightarrow
+\Spec(R)
+$$
+is a homeomorphism. In other words,
+the spectrum of $R = R_1\times R_2$ is the
+disjoint union of the spectrum of $R_1$ and the
+spectrum of $R_2$.
+\end{lemma}
+
+\begin{proof}
+Write $1 = e_1 + e_2$ with $e_1 = (1, 0)$ and $e_2 = (0, 1)$.
+Note that $e_1$ and $e_2 = 1 - e_1$ are idempotents.
+We leave it to the reader to show that
+$R_1 = R_{e_1}$ is the localization of $R$ at $e_1$.
+Similarly for $e_2$.
+Thus the statement of the lemma follows from Lemma
+\ref{lemma-idempotent-spec} combined with Lemma
+\ref{lemma-standard-open}.
+\end{proof}
+
+\noindent
+We reprove the following lemma later after introducing
+a glueing lemma for functions. See Section
+\ref{section-tilde-module-sheaf}.
+
+\begin{lemma}
+\label{lemma-disjoint-decomposition}
+Let $R$ be a ring. For each $U \subset \Spec(R)$
+which is open and closed
+there exists a unique idempotent $e \in R$ such that
+$U = D(e)$. This induces a 1-1 correspondence between
+open and closed subsets $U \subset \Spec(R)$ and
+idempotents $e \in R$.
+\end{lemma}
+
+\begin{proof}
+Let $U \subset \Spec(R)$ be open and closed.
+Since $U$ is closed it is quasi-compact by
+Lemma \ref{lemma-quasi-compact}, and similarly for
+its complement.
+Write $U = \bigcup_{i = 1}^n D(f_i)$ as a finite union of standard opens.
+Similarly, write $\Spec(R) \setminus U = \bigcup_{j = 1}^m D(g_j)$
+as a finite union of standard opens. Since $\emptyset =
+D(f_i) \cap D(g_j) = D(f_i g_j)$ we see that $f_i g_j$ is
+nilpotent by Lemma \ref{lemma-Zariski-topology}.
+Let $I = (f_1, \ldots, f_n) \subset R$ and let
+$J = (g_1, \ldots, g_m) \subset R$.
+Note that $V(J)$ equals $U$, that $V(I)$
+equals the complement of $U$, so $\Spec(R) = V(I) \amalg V(J)$.
+By the remark on nilpotency above,
+we see that $(IJ)^N = (0)$ for some sufficiently large integer $N$.
+Since $\bigcup D(f_i) \cup \bigcup D(g_j) = \Spec(R)$
+we see that $I + J = R$, see Lemma \ref{lemma-Zariski-topology}.
+By raising this equation to the $2N$th power we conclude that
+$I^N + J^N = R$. Write $1 = x + y$ with $x \in I^N$ and $y \in J^N$.
+Then $0 = xy = x(1 - x)$ as $I^N J^N = (0)$. Thus $x = x^2$
+is idempotent and contained
+in $I^N \subset I$. The idempotent $y = 1 - x$ is contained in $J^N \subset J$.
This shows that the idempotent $x$ maps to $1$ in every residue field
$\kappa(\mathfrak p)$ for $\mathfrak p \in V(J)$ and that $x$ maps to $0$
in $\kappa(\mathfrak p)$ for every $\mathfrak p \in V(I)$.
In other words, $D(x) = V(J) = U$ as desired.
+
+\medskip\noindent
+To see uniqueness suppose that $e_1, e_2$ are
+distinct idempotents in $R$. We have to show there
+exists a prime $\mathfrak p$ such that $e_1 \in \mathfrak p$
+and $e_2 \not \in \mathfrak p$, or conversely.
+Write $e_i' = 1 - e_i$. If $e_1 \not = e_2$, then
+$0 \not = e_1 - e_2 = e_1(e_2 + e_2') - (e_1 + e_1')e_2
+= e_1 e_2' - e_1' e_2$. Hence either the idempotent
+$e_1 e_2' \not = 0$ or $e_1' e_2 \not = 0$. An idempotent
+is not nilpotent, and hence we find a prime
+$\mathfrak p$ such that either $e_1e_2' \not \in \mathfrak p$
+or $e_1'e_2 \not \in \mathfrak p$, by Lemma \ref{lemma-Zariski-topology}.
+It is easy to see this gives the desired prime.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-spec-connected}
+Let $R$ be a nonzero ring. Then $\Spec(R)$ is
+connected if and only if $R$ has no nontrivial
+idempotents.
+\end{lemma}
+
+\begin{proof}
+Obvious from Lemma \ref{lemma-disjoint-decomposition}
+and the definition of a connected topological space.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ideal-is-squared-union-connected}
+Let $I \subset R$ be a finitely generated ideal of a ring $R$
+such that $I = I^2$. Then
+\begin{enumerate}
+\item there exists an idempotent $e \in R$ such that $I = (e)$,
+\item $R/I \cong R_{e'}$ for the idempotent $e' = 1 - e \in R$, and
+\item $V(I)$ is open and closed in $\Spec(R)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Nakayama's Lemma \ref{lemma-NAK} there exists an element
+$f = 1 + i$, $i \in I$ such that $fI = 0$. Then $f^2 = f + fi = f$
+is an idempotent. Consider the idempotent $e = 1 - f = -i \in I$.
+For $j \in I$ we have $ej = j - fj = j$ hence $I = (e)$.
+This proves (1).
+
+\medskip\noindent
+Parts (2) and (3) follow from (1). Namely, we have
+$V(I) = V(e) = \Spec(R) \setminus D(e)$ which is open and
+closed by either Lemma \ref{lemma-idempotent-spec} or
+Lemma \ref{lemma-disjoint-decomposition}. This proves (3).
+For (2) observe that the map $R \to R_{e'}$ is surjective
+since $x/(e')^n = x/e' = xe'/(e')^2 = xe'/e' = x/1$ in $R_{e'}$.
+The kernel of the map $R \to R_{e'}$ is the set of elements of
+$R$ annihilated by a positive power of $e'$. Since $e'$ is
+idempotent this is the ideal of elements annihilated by $e'$
which is the ideal $I = (e)$ as $e$ and $e' = 1 - e$ are
orthogonal idempotents. This proves (2).
+\end{proof}
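
\noindent
To illustrate the proof, take $R = \mathbf{Z}/6\mathbf{Z}$ and
$I = (2) = \{0, 2, 4\}$, so that $I = I^2$. The element
$f = 1 + 2 = 3$ satisfies $fI = 0$, hence $e = 1 - f = 4$ is an
idempotent with $I = (e)$, and
$R/I \cong \mathbf{Z}/2\mathbf{Z} \cong R_{e'}$ for $e' = 3$.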
+
+
+
+
+
+
+\section{Connected components of spectra}
+\label{section-connected-components}
+
+\noindent
+Connected components of spectra are not as easy to understand as one
+may think at first. This is because we are used to the topology of
+locally connected spaces, but the spectrum of a ring is in general
+not locally connected.
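
\noindent
For example, the spectrum of the product ring
$\prod_{n \in \mathbf{N}} \mathbf{F}_2$ is an infinite, quasi-compact,
totally disconnected space; its connected components are single points,
most of which are not open, so the space is not locally connected.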
+
+\begin{lemma}
+\label{lemma-closed-union-connected-components}
+Let $R$ be a ring. Let $T \subset \Spec(R)$ be a subset of the spectrum.
+The following are equivalent
+\begin{enumerate}
+\item $T$ is closed and is a union of connected components of
+$\Spec(R)$,
+\item $T$ is an intersection of open and closed subsets of
+$\Spec(R)$, and
+\item $T = V(I)$ where $I \subset R$ is an ideal generated by idempotents.
+\end{enumerate}
Moreover, the ideal in (3), if it exists, is unique.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-topology-spec}
+and
+Topology, Lemma \ref{topology-lemma-closed-union-connected-components}
+we see that (1) and (2) are equivalent.
+Assume (2) and write $T = \bigcap U_\alpha$ with
+$U_\alpha \subset \Spec(R)$ open and closed.
+Then $U_\alpha = D(e_\alpha)$ for some idempotent $e_\alpha \in R$ by
+Lemma \ref{lemma-disjoint-decomposition}.
+Then setting $I = (1 - e_\alpha)$ we see that $T = V(I)$, i.e., (3) holds.
+Finally, assume (3). Write $T = V(I)$ and $I = (e_\alpha)$ for some
collection of idempotents $e_\alpha$. Then it is clear that
$T = \bigcap V(e_\alpha) = \bigcap D(1 - e_\alpha)$ is an intersection
of open and closed subsets, i.e., (2) holds.
+
+\medskip\noindent
+Suppose that $I$ is an ideal generated by idempotents.
+Let $e \in R$ be an idempotent such that $V(I) \subset V(e)$. Then by
+Lemma \ref{lemma-Zariski-topology}
+we see that $e^n \in I$ for some $n \geq 1$. As $e$ is an idempotent
+this means that $e \in I$. Hence we see that $I$ is generated by
+exactly those idempotents $e$ such that $T \subset V(e)$.
+In other words, the ideal $I$ is completely determined by the
+closed subset $T$ which proves uniqueness.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-connected-component}
+Let $R$ be a ring.
+A connected component of
+$\Spec(R)$ is of the form $V(I)$,
+where $I$ is an ideal generated by idempotents
+such that every idempotent of $R$ either maps to
+$0$ or $1$ in $R/I$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p$ be a prime of $R$. By
+Lemma \ref{lemma-topology-spec}
we see that the hypotheses of
+Topology, Lemma \ref{topology-lemma-connected-component-intersection}
+are satisfied for the topological space $\Spec(R)$.
+Hence the connected component of $\mathfrak p$ in $\Spec(R)$
+is the intersection of open and closed subsets of $\Spec(R)$
+containing $\mathfrak p$. Hence it equals $V(I)$ where
+$I$ is generated by the idempotents $e \in R$ such that $e$ maps to $0$
+in $\kappa(\mathfrak p)$, see
+Lemma \ref{lemma-disjoint-decomposition}.
+Any idempotent $e$ which is not in this collection clearly maps to $1$
+in $R/I$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Glueing properties}
+\label{section-more-glueing}
+
+\noindent
In this section we prove a number of standard results of the
form: if something is true for all members of a standard open
covering, then it is true globally. In fact, it often suffices to check
things on the level of local rings as in the following lemma.
+
+\begin{lemma}
+\label{lemma-characterize-zero-local}
+Let $R$ be a ring.
+\begin{enumerate}
+\item For an element $x$ of an $R$-module $M$ the following are equivalent
+\begin{enumerate}
+\item $x = 0$,
+\item $x$ maps to zero in $M_\mathfrak p$ for all $\mathfrak p \in \Spec(R)$,
+\item $x$ maps to zero in $M_{\mathfrak m}$ for all maximal ideals
+$\mathfrak m$ of $R$.
+\end{enumerate}
+In other words, the map $M \to \prod_{\mathfrak m} M_{\mathfrak m}$
+is injective.
+\item Given an $R$-module $M$ the following are equivalent
+\begin{enumerate}
+\item $M$ is zero,
+\item $M_{\mathfrak p}$ is zero for all $\mathfrak p \in \Spec(R)$,
+\item $M_{\mathfrak m}$ is zero for all maximal ideals $\mathfrak m$ of $R$.
+\end{enumerate}
+\item Given a complex $M_1 \to M_2 \to M_3$
+of $R$-modules the following are equivalent
+\begin{enumerate}
+\item $M_1 \to M_2 \to M_3$ is exact,
+\item for every prime $\mathfrak p$ of $R$ the localization
+$M_{1, \mathfrak p} \to M_{2, \mathfrak p} \to M_{3, \mathfrak p}$
+is exact,
+\item for every maximal ideal $\mathfrak m$ of $R$ the localization
+$M_{1, \mathfrak m} \to M_{2, \mathfrak m} \to M_{3, \mathfrak m}$
+is exact.
+\end{enumerate}
+\item Given a map $f : M \to M'$ of $R$-modules the following are equivalent
+\begin{enumerate}
+\item $f$ is injective,
+\item $f_{\mathfrak p} : M_\mathfrak p \to M'_\mathfrak p$ is injective
+for all primes $\mathfrak p$ of $R$,
+\item $f_{\mathfrak m} : M_\mathfrak m \to M'_\mathfrak m$ is injective
+for all maximal ideals $\mathfrak m$ of $R$.
+\end{enumerate}
+\item Given a map $f : M \to M'$ of $R$-modules the following are equivalent
+\begin{enumerate}
+\item $f$ is surjective,
+\item $f_{\mathfrak p} : M_\mathfrak p \to M'_\mathfrak p$ is surjective
+for all primes $\mathfrak p$ of $R$,
+\item $f_{\mathfrak m} : M_\mathfrak m \to M'_\mathfrak m$ is surjective
+for all maximal ideals $\mathfrak m$ of $R$.
+\end{enumerate}
+\item Given a map $f : M \to M'$ of $R$-modules the following are equivalent
+\begin{enumerate}
+\item $f$ is bijective,
+\item $f_{\mathfrak p} : M_\mathfrak p \to M'_\mathfrak p$ is bijective
+for all primes $\mathfrak p$ of $R$,
+\item $f_{\mathfrak m} : M_\mathfrak m \to M'_\mathfrak m$ is bijective
+for all maximal ideals $\mathfrak m$ of $R$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $x \in M$ as in (1). Let $I = \{f \in R \mid fx = 0\}$.
+It is easy to see that $I$ is an ideal (it is the
+annihilator of $x$). Condition (1)(c) means that for
+all maximal ideals $\mathfrak m$ there exists an
+$f \in R \setminus \mathfrak m$ such that $fx =0$.
+In other words, $V(I)$ does not contain a closed point.
+By Lemma \ref{lemma-Zariski-topology} we see $I$ is the unit ideal.
+Hence $x$ is zero, i.e., (1)(a) holds. This proves (1).
+
+\medskip\noindent
+Part (2) follows by applying (1) to all elements of $M$ simultaneously.
+
+\medskip\noindent
+Proof of (3). Let $H$ be the homology of the sequence, i.e.,
+$H = \Ker(M_2 \to M_3)/\Im(M_1 \to M_2)$. By
+Proposition \ref{proposition-localization-exact}
+we have that $H_\mathfrak p$ is the homology of the sequence
+$M_{1, \mathfrak p} \to M_{2, \mathfrak p} \to M_{3, \mathfrak p}$.
+Hence (3) is a consequence of (2).
+
+\medskip\noindent
+Parts (4) and (5) are special cases of (3). Part (6) follows
+formally on combining (4) and (5).
+\end{proof}
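
\noindent
For example, the map
$\mathbf{Z}/6\mathbf{Z} \to \mathbf{Z}/2\mathbf{Z} \times \mathbf{Z}/3\mathbf{Z}$
is bijective because it becomes bijective after localizing at the two
maximal ideals $(2)$ and $(3)$: at $(2)$ both sides become
$\mathbf{Z}/2\mathbf{Z}$ and at $(3)$ both sides become
$\mathbf{Z}/3\mathbf{Z}$.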
+
+\begin{lemma}
+\label{lemma-cover}
+\begin{slogan}
+Zariski-local properties of modules and algebras
+\end{slogan}
+Let $R$ be a ring. Let $M$ be an $R$-module. Let $S$ be an $R$-algebra.
+Suppose that $f_1, \ldots, f_n$ is a finite list of
+elements of $R$ such that $\bigcup D(f_i) = \Spec(R)$,
+in other words $(f_1, \ldots, f_n) = R$.
+\begin{enumerate}
+\item If each $M_{f_i} = 0$ then $M = 0$.
+\item If each $M_{f_i}$ is a finite $R_{f_i}$-module,
+then $M$ is a finite $R$-module.
+\item If each $M_{f_i}$ is a finitely presented $R_{f_i}$-module,
+then $M$ is a finitely presented $R$-module.
+\item Let $M \to N$ be a map of $R$-modules. If $M_{f_i} \to N_{f_i}$
+is an isomorphism for each $i$ then $M \to N$ is an isomorphism.
+\item Let $0 \to M'' \to M \to M' \to 0$ be a complex of $R$-modules.
+If $0 \to M''_{f_i} \to M_{f_i} \to M'_{f_i} \to 0$ is exact for each $i$,
+then $0 \to M'' \to M \to M' \to 0$ is exact.
+\item If each $R_{f_i}$ is Noetherian, then $R$ is Noetherian.
+\item If each $S_{f_i}$ is a finite type $R$-algebra, so is $S$.
+\item If each $S_{f_i}$ is of finite presentation over $R$, so is $S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We prove each of the parts in turn.
+\begin{enumerate}
+\item By Proposition \ref{proposition-localize-twice}
+this implies $M_\mathfrak p = 0$ for all $\mathfrak p \in \Spec(R)$,
+so we conclude by Lemma \ref{lemma-characterize-zero-local}.
+\item For each $i$ take a finite generating set $X_i$ of $M_{f_i}$.
+Without loss of generality, we may assume that the elements of $X_i$ are
+in the image of the localization map $M \rightarrow M_{f_i}$, so we take
+a finite set $Y_i$ of preimages of the elements of $X_i$ in $M$. Let $Y$
+be the union of these sets. This is still a finite set.
+Consider the obvious $R$-linear map $R^Y \rightarrow M$ sending the basis
element $e_y$ to $y$. By construction this map is surjective after
localizing at each $f_i$, hence by
Proposition \ref{proposition-localize-twice} after localizing at an
arbitrary prime ideal $\mathfrak p$ of $R$, so it is surjective by
Lemma \ref{lemma-characterize-zero-local}
and $M$ is finitely generated.
+\item By (2) we have a short exact sequence
+$$
+0 \rightarrow K \rightarrow R^n \rightarrow M \rightarrow 0
+$$
+Since localization is an exact functor and $M_{f_i}$ is finitely
+presented we see that $K_{f_i}$ is finitely generated for all
+$1 \leq i \leq n$ by Lemma \ref{lemma-extension}.
+By (2) this implies that $K$ is a finite $R$-module and therefore
+$M$ is finitely presented.
+\item By Proposition \ref{proposition-localize-twice}
+the assumption implies that the induced morphism
+on localizations at all prime ideals is an isomorphism, so we conclude
+by Lemma \ref{lemma-characterize-zero-local}.
+\item By Proposition \ref{proposition-localize-twice} the assumption
+implies that the induced
+sequence of localizations at all prime ideals is short exact, so we
+conclude by Lemma \ref{lemma-characterize-zero-local}.
+\item We will show that every ideal of $R$ has a finite generating set:
+For this, let $I \subset R$ be an arbitrary ideal. By
+Proposition \ref{proposition-localization-exact}
+each $I_{f_i} \subset R_{f_i}$ is an ideal. These are all
+finitely generated by assumption, so we conclude by (2).
+\item For each $i$ take a finite generating set $X_i$ of $S_{f_i}$.
+Without loss of generality, we may assume that the elements of $X_i$
+are in the image of the localization map $S \rightarrow S_{f_i}$, so
+we take a finite set $Y_i$ of preimages of the elements of $X_i$ in
+$S$. Let $Y$ be the union of these sets. This is still a finite set.
Consider the algebra homomorphism $R[X_y ; y \in Y] \rightarrow S$
+induced by $Y$. Since it is an algebra homomorphism, the image $T$
+is an $R$-submodule of the $R$-module $S$, so we can consider the
+quotient module $S/T$. By assumption, this is zero if we localize
+at the $f_i$, so it is zero by (1) and therefore $S$ is an
+$R$-algebra of finite type.
+\item By the previous item, there exists a surjective $R$-algebra
+homomorphism $R[X_1, \ldots, X_n] \rightarrow S$. Let $K$ be the kernel
+of this map. This is an ideal in $R[X_1, \ldots, X_n]$, finitely generated
+in each localization at $f_i$. Since the $f_i$ generate the unit ideal
+in $R$, they also generate the unit ideal in $R[X_1, \ldots, X_n]$, so an
+application of (2) finishes the proof.
+\end{enumerate}
+\end{proof}
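
\noindent
The assumption that the $D(f_i)$ cover $\Spec(R)$ is needed even for
part (1). For example, if $R = \mathbf{Z}$ and $f = 2$, then the module
$M = \mathbf{Z}[1/2]/\mathbf{Z}$ satisfies $M_f = 0$ although
$M \not = 0$; here $D(2)$ is not all of $\Spec(\mathbf{Z})$.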
+
+\begin{lemma}
+\label{lemma-cover-upstairs}
+Let $R \to S$ be a ring map.
+Suppose that $g_1, \ldots, g_n$ is a finite list of
elements of $S$ such that $\bigcup D(g_i) = \Spec(S)$,
in other words $(g_1, \ldots, g_n) = S$.
+\begin{enumerate}
+\item If each $S_{g_i}$ is of finite type over $R$, then $S$ is
+of finite type over $R$.
+\item If each $S_{g_i}$ is of finite presentation over $R$,
+then $S$ is of finite presentation over $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose $h_1, \ldots, h_n \in S$ such that $\sum h_i g_i = 1$.
+
+\medskip\noindent
+Proof of (1). For each $i$ choose a finite list of elements
+$x_{i, j} \in S_{g_i}$, $j = 1, \ldots, m_i$
+which generate $S_{g_i}$ as an $R$-algebra.
+Write $x_{i, j} = y_{i, j}/g_i^{n_{i, j}}$ for some $y_{i, j} \in S$ and
+some $n_{i, j} \ge 0$. Consider the $R$-subalgebra $S' \subset S$
+generated by $g_1, \ldots, g_n$, $h_1, \ldots, h_n$ and
+$y_{i, j}$, $i = 1, \ldots, n$, $j = 1, \ldots, m_i$.
+Since localization is exact (Proposition \ref{proposition-localization-exact}),
+we see that $S'_{g_i} \to S_{g_i}$ is injective.
+On the other hand, it is surjective by our choice of $y_{i, j}$.
+The elements $g_1, \ldots, g_n$ generate the unit ideal in $S'$
+as $h_1, \ldots, h_n \in S'$.
+Thus $S' \to S$ viewed as an $S'$-module map is an isomorphism
+by Lemma \ref{lemma-cover}.
+
+\medskip\noindent
+Proof of (2). We already know that $S$ is of finite type.
+Write $S = R[x_1, \ldots, x_m]/J$ for some ideal $J$.
+For each $i$ choose a lift $g'_i \in R[x_1, \ldots, x_m]$ of $g_i$
+and we choose a lift $h'_i \in R[x_1, \ldots, x_m]$ of $h_i$.
+Then we see that
+$$
+S_{g_i} = R[x_1, \ldots, x_m, y_i]/(J_i + (1 - y_ig'_i))
+$$
+where $J_i$ is the ideal of $R[x_1, \ldots, x_m, y_i]$
+generated by $J$. Small detail omitted. By
+Lemma \ref{lemma-finite-presentation-independent}
+we may choose a finite list of elements
+$f_{i, j} \in J$, $j = 1, \ldots, m_i$
+such that the images of $f_{i, j}$ in $J_i$ and $1 - y_ig'_i$
+generate the ideal $J_i + (1 - y_ig'_i)$.
+Set
+$$
+S' = R[x_1, \ldots, x_m]/\left(\sum h'_ig'_i - 1, f_{i, j};
+i = 1, \ldots, n, j = 1, \ldots, m_i\right)
+$$
+There is a surjective $R$-algebra map $S' \to S$.
+The classes of the elements $g'_1, \ldots, g'_n$ in $S'$
+generate the unit ideal and by construction the maps
+$S'_{g'_i} \to S_{g_i}$ are injective.
+Thus we conclude as in part (1).
+\end{proof}
+
+
+
+
+
+\section{Glueing functions}
+\label{section-tilde-module-sheaf}
+
+\noindent
+In this section we show that given an open covering
+$$
+\Spec(R) = \bigcup\nolimits_{i = 1}^n D(f_i)
+$$
+by standard opens, and given an element $h_i \in R_{f_i}$
+for each $i$ such that $h_i = h_j$ as elements of $R_{f_i f_j}$
+then there exists a unique $h \in R$ such that the image of
+$h$ in $R_{f_i}$ is $h_i$. This result can be interpreted
+in two ways:
+\begin{enumerate}
+\item The rule $D(f) \mapsto R_f$ is a sheaf of rings
+on the standard opens, see Sheaves, Section \ref{sheaves-section-bases}.
+\item If we think of elements of $R_f$ as the ``algebraic''
+or ``regular'' functions on $D(f)$, then these glue
+as would continuous, resp.\ differentiable functions
+on a topological, resp.\ differentiable manifold.
+\end{enumerate}
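
\noindent
A concrete instance: cover $\Spec(\mathbf{Z})$ by $D(2)$ and $D(3)$.
Elements $h_1 \in \mathbf{Z}[1/2]$ and $h_2 \in \mathbf{Z}[1/3]$ which
agree in $\mathbf{Z}[1/6]$ lie in
$\mathbf{Z}[1/2] \cap \mathbf{Z}[1/3] = \mathbf{Z}$ inside $\mathbf{Q}$,
so they come from a unique integer $h$.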
+
+\begin{lemma}
+\label{lemma-cover-module}
+Let $R$ be a ring. Let $f_1, \ldots, f_n$ be elements of $R$
+generating the unit ideal. Let $M$ be an $R$-module.
+The sequence
+$$
+0 \to
+M \xrightarrow{\alpha}
+\bigoplus\nolimits_{i = 1}^n M_{f_i} \xrightarrow{\beta}
+\bigoplus\nolimits_{i, j = 1}^n M_{f_i f_j}
+$$
+is exact, where $\alpha(m) = (m/1, \ldots, m/1)$
+and $\beta(m_1/f_1^{e_1}, \ldots, m_n/f_n^{e_n})
+= (m_i/f_i^{e_i} - m_j/f_j^{e_j})_{(i, j)}$.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that the localization of the sequence at
+any maximal ideal $\mathfrak m$ is exact, see
+Lemma \ref{lemma-characterize-zero-local}.
+Since $f_1, \ldots, f_n$ generate the unit ideal,
+there is an $i$ such that $f_i \not \in \mathfrak m$.
+After renumbering we may assume $i = 1$.
+Note that $(M_{f_i})_\mathfrak m = (M_\mathfrak m)_{f_i}$
+and $(M_{f_if_j})_\mathfrak m = (M_\mathfrak m)_{f_if_j}$, see
+Proposition \ref{proposition-localize-twice-module}.
+In particular $(M_{f_1})_\mathfrak m = M_\mathfrak m$ and
+$(M_{f_1 f_i})_\mathfrak m = (M_\mathfrak m)_{f_i}$, because
+$f_1$ is a unit.
+Note that the maps in the sequence are the canonical ones
+coming from
+Lemma \ref{lemma-universal-property-localization-module}
+and the identity map on $M$.
+Having said all of this, after replacing $R$ by $R_\mathfrak m$,
+$M$ by $M_\mathfrak m$, and $f_i$ by their image in $R_\mathfrak m$,
+and $f_1$ by $1 \in R_\mathfrak m$,
+we reduce to the case where $f_1 = 1$.
+
+\medskip\noindent
+Assume $f_1 = 1$. Injectivity of $\alpha$ is now trivial. Let
+$m = (m_i) \in \bigoplus_{i = 1}^n M_{f_i}$ be in the kernel of $\beta$.
+Then $m_1 \in M_{f_1} = M$. Moreover, $\beta(m) = 0$
+implies that $m_1$ and $m_i$ map to the same element of
+$M_{f_1f_i} = M_{f_i}$. Thus $\alpha(m_1) = m$ and the
+proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-standard-covering}
Let $R$ be a ring, and let $f_1, f_2, \ldots, f_n \in R$ generate
+the unit ideal in $R$.
+Then the following sequence is exact:
+$$
+0 \longrightarrow
+R \longrightarrow
+\bigoplus\nolimits_i R_{f_i} \longrightarrow
+\bigoplus\nolimits_{i, j}R_{f_if_j}
+$$
+where the maps $\alpha : R \longrightarrow \bigoplus_i R_{f_i}$
+and $\beta : \bigoplus_i R_{f_i} \longrightarrow \bigoplus_{i, j} R_{f_if_j}$
+are defined as
+$$
+\alpha(x) = \left(\frac{x}{1}, \ldots, \frac{x}{1}\right)
+\text{ and }
+\beta\left(\frac{x_1}{f_1^{r_1}}, \ldots, \frac{x_n}{f_n^{r_n}}\right)
+=
+\left(\frac{x_i}{f_i^{r_i}}-\frac{x_j}{f_j^{r_j}}~\text{in}~R_{f_if_j}\right).
+$$
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-cover-module}.
+\end{proof}
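\noindent
To illustrate Lemma \ref{lemma-standard-covering} in a simple case, take
$R = \mathbf{Z}$ and $f_1 = 2$, $f_2 = 3$, which generate the unit ideal
because $(-1) \cdot 2 + 1 \cdot 3 = 1$. Exactness of
$$
0 \to \mathbf{Z} \to \mathbf{Z}[1/2] \oplus \mathbf{Z}[1/3]
\to \mathbf{Z}[1/6]
$$
says that a pair $(a/2^n, b/3^m)$ with $a/2^n = b/3^m$ in $\mathbf{Z}[1/6]$
comes from a unique integer. Indeed, the equality means
$3^m a = 2^n b$ in $\mathbf{Z}$, so $2^n$ divides $a$ and $3^m$ divides
$b$, and $h = a/2^n = b/3^m$ is the desired element of $\mathbf{Z}$.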
+
+\noindent
We have already seen the following above, but we state it explicitly
here for convenience.
+
+\begin{lemma}
+\label{lemma-disjoint-implies-product}
+Let $R$ be a ring.
+If $\Spec(R) = U \amalg V$ with both $U$ and $V$ open
+then $R \cong R_1 \times R_2$ with $U \cong \Spec(R_1)$
+and $V \cong \Spec(R_2)$ via the maps in Lemma \ref{lemma-spec-product}.
+Moreover, both $R_1$ and $R_2$ are localizations as well as quotients
+of the ring $R$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-disjoint-decomposition} we have
+$U = D(e)$ and $V = D(1-e)$ for some idempotent $e$.
+By Lemma \ref{lemma-standard-covering} we see that
+$R \cong R_e \times R_{1 - e}$ (since clearly $R_{e(1-e)} = 0$
+so the glueing condition is trivial; of course it is
+trivial to prove the product decomposition directly in this
+case). The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-injective-covering}
+Let $R$ be a ring.
+Let $f_1, \ldots, f_n \in R$.
+Let $M$ be an $R$-module.
+Then $M \to \bigoplus M_{f_i}$ is injective if and only if
+$$
+M \longrightarrow \bigoplus\nolimits_{i = 1, \ldots, n} M, \quad
+m \longmapsto (f_1m, \ldots, f_nm)
+$$
+is injective.
+\end{lemma}
+
+\begin{proof}
+The map $M \to \bigoplus M_{f_i}$ is injective if and only if
+for all $m \in M$ and $e_1, \ldots, e_n \geq 1$ such that
+$f_i^{e_i}m = 0$, $i = 1, \ldots, n$ we have $m = 0$.
+This clearly implies the displayed map is injective.
+Conversely, suppose the displayed map is injective and
+$m \in M$ and $e_1, \ldots, e_n \geq 1$ are such that
+$f_i^{e_i}m = 0$, $i = 1, \ldots, n$. If $e_i = 1$ for all $i$,
+then we immediately conclude that $m = 0$ from the injectivity of
+the displayed map. Next, we prove this holds for any such data
+by induction on $e = \sum e_i$. The base case is $e = n$, and we have
just dealt with this. If some $e_i > 1$, then set $m' = f_im$. Then
$f_i^{e_i - 1} m' = 0$ and $f_j^{e_j} m' = 0$ for $j \neq i$, so by
induction we see that $m' = 0$. Hence $f_i m = 0$,
i.e., we may take $e_i = 1$ which decreases $e$ and we win.
+\end{proof}
+
+\noindent
+The following lemma is better stated and proved in the more general
+context of flat descent. However, it makes sense to state it here
+since it fits well with the above.
+
+\begin{lemma}
+\label{lemma-glue-modules}
+Let $R$ be a ring. Let $f_1, \ldots, f_n \in R$. Suppose we are given
+the following data:
+\begin{enumerate}
+\item For each $i$ an $R_{f_i}$-module $M_i$.
+\item For each pair $i, j$ an $R_{f_if_j}$-module isomorphism
+$\psi_{ij} : (M_i)_{f_j} \to (M_j)_{f_i}$.
+\end{enumerate}
+which satisfy the ``cocycle condition'' that all the diagrams
+$$
+\xymatrix{
+(M_i)_{f_jf_k}
+\ar[rd]_{\psi_{ij}}
+\ar[rr]^{\psi_{ik}}
+& &
+(M_k)_{f_if_j} \\
+&
+(M_j)_{f_if_k} \ar[ru]_{\psi_{jk}}
+}
+$$
+commute (for all triples $i, j, k$). Given this data define
+$$
+M = \Ker\left(
+\bigoplus\nolimits_{1 \leq i \leq n} M_i
+\longrightarrow
+\bigoplus\nolimits_{1 \leq i, j \leq n} (M_i)_{f_j}
+\right)
+$$
+where $(m_1, \ldots, m_n)$ maps to the element whose
+$(i, j)$th entry is $m_i/1 - \psi_{ji}(m_j/1)$.
+Then the natural map $M \to M_i$ induces an isomorphism
+$M_{f_i} \to M_i$. Moreover $\psi_{ij}(m/1) = m/1$
+for all $m \in M$ (with obvious notation).
+\end{lemma}
+
+\begin{proof}
+To show that $M_{f_1} \to M_1$ is an isomorphism, it suffices
+to show that its localization at every prime $\mathfrak p'$
+of $R_{f_1}$ is an isomorphism, see
+Lemma \ref{lemma-characterize-zero-local}.
+Write $\mathfrak p' = \mathfrak p R_{f_1}$
+for some prime $\mathfrak p \subset R$, $f_1 \not \in \mathfrak p$, see
+Lemma \ref{lemma-standard-open}.
+Since localization is exact
+(Proposition \ref{proposition-localization-exact}),
+we see that
+\begin{align*}
+(M_{f_1})_{\mathfrak p'} & =
+M_\mathfrak p \\
+& =
+\Ker\left(
+\bigoplus\nolimits_{1 \leq i \leq n} M_{i, \mathfrak p}
+\longrightarrow
+\bigoplus\nolimits_{1 \leq i, j \leq n} ((M_i)_{f_j})_\mathfrak p
+\right) \\
+& =
+\Ker\left(
+\bigoplus\nolimits_{1 \leq i \leq n} M_{i, \mathfrak p}
+\longrightarrow
+\bigoplus\nolimits_{1 \leq i, j \leq n} (M_{i, \mathfrak p})_{f_j}
+\right)
+\end{align*}
+Here we also used Proposition \ref{proposition-localize-twice-module}.
+Since $f_1$ is a unit in $R_\mathfrak p$, this reduces us to the case
+where $f_1 = 1$ by replacing $R$ by $R_\mathfrak p$, $f_i$ by the
+image of $f_i$ in $R_\mathfrak p$, $M$ by $M_\mathfrak p$, and
+$f_1$ by $1$.
+
+\medskip\noindent
+Assume $f_1 = 1$. Then $\psi_{1j} : (M_1)_{f_j} \to M_j$
+is an isomorphism for $j = 2, \ldots, n$. If we use these
+isomorphisms to identify $M_j = (M_1)_{f_j}$, then we see
+that $\psi_{ij} : (M_1)_{f_if_j} \to (M_1)_{f_if_j}$ is
+the canonical identification. Thus the complex
+$$
+0 \to M_1 \to
+\bigoplus\nolimits_{1 \leq i \leq n} (M_1)_{f_i}
+\longrightarrow
+\bigoplus\nolimits_{1 \leq i, j \leq n}
+(M_1)_{f_if_j}
+$$
+is exact by Lemma \ref{lemma-cover-module}.
+Thus the first map identifies $M_1$ with $M$ in this case
+and everything is clear.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Zerodivisors and total rings of fractions}
+\label{section-total-quotient-ring}
+
+\noindent
+The local ring at a minimal prime has the following properties.
+
+\begin{lemma}
+\label{lemma-minimal-prime-reduced-ring}
+Let $\mathfrak p$ be a minimal prime of a ring $R$.
+Every element of the maximal ideal of $R_{\mathfrak p}$
+is nilpotent. If $R$ is reduced then $R_{\mathfrak p}$
+is a field.
+\end{lemma}
+
+\begin{proof}
+If some element $x$ of ${\mathfrak p}R_{\mathfrak p}$
+is not nilpotent, then $D(x) \not = \emptyset$, see
+Lemma \ref{lemma-Zariski-topology}. This contradicts
+the minimality of $\mathfrak p$. If $R$ is reduced,
+then ${\mathfrak p}R_{\mathfrak p} = 0$ and
+hence it is a field.
+\end{proof}
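\noindent
For example, if $R = \mathbf{Z}/8\mathbf{Z}$, then $\mathfrak p = (2)$
is the unique, hence minimal, prime, the localization $R_{\mathfrak p}$
equals $R$, and every element of the maximal ideal $(2)$ is indeed
nilpotent since $2^3 = 0$.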
+
+\begin{lemma}
+\label{lemma-reduced-ring-sub-product-fields}
+Let $R$ be a reduced ring. Then
+\begin{enumerate}
+\item $R$ is a subring of a product of fields,
+\item $R \to \prod_{\mathfrak p\text{ minimal}} R_{\mathfrak p}$
+is an embedding into a product of fields,
+\item $\bigcup_{\mathfrak p\text{ minimal}} \mathfrak p$ is the set
+of zerodivisors of $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-minimal-prime-reduced-ring} each of the rings
+$R_\mathfrak p$ is a field. In particular, the kernel of the ring
+map $R \to R_\mathfrak p$ is $\mathfrak p$.
+By Lemma \ref{lemma-Zariski-topology}
+we have $\bigcap_{\mathfrak p} \mathfrak p = (0)$.
+Hence (2) and (1) are true. If $x y = 0$ and $y \not = 0$, then
+$y \not \in \mathfrak p$ for some minimal prime $\mathfrak p$.
+Hence $x \in \mathfrak p$. Thus every zerodivisor of $R$ is contained
+in $\bigcup_{\mathfrak p\text{ minimal}} \mathfrak p$.
+Conversely, suppose that $x \in \mathfrak p$ for some minimal
+prime $\mathfrak p$. Then $x$ maps to zero in $R_\mathfrak p$,
+hence there exists $y \in R$, $y \not \in \mathfrak p$ such that
+$xy = 0$. In other words, $x$ is a zerodivisor. This finishes the
+proof of (3) and the lemma.
+\end{proof}
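\noindent
For example, let $R = k[x, y]/(xy)$ for a field $k$. This ring is
reduced with minimal primes $(x)$ and $(y)$, the map of part (2) is the
embedding $R \to k(y) \times k(x)$ sending the class of $f(x, y)$ to
$(f(0, y), f(x, 0))$, and the zerodivisors of $R$ are exactly the
elements of $(x) \cup (y)$, in accordance with part (3).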
+
+\noindent
+The total ring of fractions $Q(R)$ of a ring $R$ was introduced in
+Example \ref{example-localize-at-prime}.
+
+\begin{lemma}
+\label{lemma-total-ring-fractions}
+Let $R$ be a ring.
+Let $S \subset R$ be a multiplicative subset consisting of nonzerodivisors.
+Then $Q(R) \cong Q(S^{-1}R)$.
+In particular $Q(R) \cong Q(Q(R))$.
+\end{lemma}
+
+\begin{proof}
+If $x \in S^{-1}R$ is a nonzerodivisor, and
+$x = r/f$ for some $r \in R$, $f \in S$, then
+$r$ is a nonzerodivisor in $R$. Whence the lemma.
+\end{proof}
+
+\noindent
+We can apply glueing results to prove something about
+total rings of fractions $Q(R)$ which we introduced in
+Example \ref{example-localize-at-prime}.
+
+\begin{lemma}
+\label{lemma-total-ring-fractions-no-embedded-points}
+Let $R$ be a ring.
+Assume that $R$ has finitely many minimal primes
+$\mathfrak q_1, \ldots, \mathfrak q_t$, and that
+$\mathfrak q_1 \cup \ldots \cup \mathfrak q_t$ is the set
+of zerodivisors of $R$.
+Then the total ring of fractions $Q(R)$ is equal to
+$R_{\mathfrak q_1} \times \ldots \times R_{\mathfrak q_t}$.
+\end{lemma}
+
+\begin{proof}
+There are natural maps $Q(R) \to R_{\mathfrak q_i}$ since
+any nonzerodivisor is contained in $R \setminus \mathfrak q_i$.
+Hence a natural map
+$Q(R) \to R_{\mathfrak q_1} \times \ldots \times R_{\mathfrak q_t}$.
+For any nonminimal prime $\mathfrak p \subset R$ we see that
+$\mathfrak p \not \subset \mathfrak q_1 \cup \ldots \cup \mathfrak q_t$
+by Lemma \ref{lemma-silly}. Hence
+$\Spec(Q(R)) = \{\mathfrak q_1, \ldots, \mathfrak q_t\}$
+(as subsets of $\Spec(R)$, see Lemma \ref{lemma-spec-localization}).
+Therefore $\Spec(Q(R))$ is a finite discrete set and
it follows that $Q(R) = A_1 \times \ldots \times A_t$
with $\Spec(A_i) = \{\mathfrak q_i\}$, see
+Lemma \ref{lemma-disjoint-implies-product}.
+Moreover $A_i$ is a local ring, which is a localization
+of $R$. Hence $A_i \cong R_{\mathfrak q_i}$.
+\end{proof}
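\noindent
For example, $R = \mathbf{Z}/6\mathbf{Z}$ has minimal primes
$\mathfrak q_1 = (2)$ and $\mathfrak q_2 = (3)$, whose union
$\{0, 2, 3, 4\}$ is exactly the set of zerodivisors of $R$. The
nonzerodivisors $1$ and $5$ are already units, so $Q(R) = R$, and indeed
$$
R \cong R_{(2)} \times R_{(3)} \cong
\mathbf{Z}/2\mathbf{Z} \times \mathbf{Z}/3\mathbf{Z}
$$
as the lemma predicts.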
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Irreducible components of spectra}
+\label{section-irreducible}
+
+\noindent
+We show that irreducible components of
+the spectrum of a ring correspond to the
+minimal primes in the ring.
+
+\begin{lemma}
+\label{lemma-irreducible}
+Let $R$ be a ring.
+\begin{enumerate}
+\item For a prime $\mathfrak p \subset R$ the closure
+of $\{\mathfrak p\}$ in the Zariski topology is $V(\mathfrak p)$.
+In a formula $\overline{\{\mathfrak p\}} = V(\mathfrak p)$.
+\item The irreducible closed subsets of $\Spec(R)$ are
+exactly the subsets $V(\mathfrak p)$, with $\mathfrak p \subset R$
+a prime.
+\item The irreducible components (see Topology,
+Definition \ref{topology-definition-irreducible-components})
+of $\Spec(R)$ are exactly the subsets $V(\mathfrak p)$,
+with $\mathfrak p \subset R$ a minimal prime.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that if $ \mathfrak p \in V(I)$, then
+$I \subset \mathfrak p$. Hence,
+clearly $\overline{\{\mathfrak p\}} = V(\mathfrak p)$.
+In particular $V(\mathfrak p)$ is the closure of
+a singleton and hence irreducible.
+The second assertion implies the third.
+To show the second, let
+$V(I) \subset \Spec(R)$ with $I$ a radical ideal.
+If $I$ is not prime, then choose $a, b\in R$, $a, b\not \in I$
+with $ab\in I$. In this case $V(I, a) \cup V(I, b) = V(I)$,
+but neither $V(I, b) = V(I)$ nor $V(I, a) = V(I)$, by
+Lemma \ref{lemma-Zariski-topology}. Hence $V(I)$ is not
+irreducible.
+\end{proof}
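\noindent
For example, take $R = k[x, y, z]/(xz, yz)$ for a field $k$. The minimal
primes of $R$ are $(z)$ and $(x, y)$, so $\Spec(R)$ has exactly two
irreducible components $V(z)$ and $V(x, y)$: geometrically, the union of
the plane $z = 0$ and the line $x = y = 0$. Note that the irreducible
components need not have the same dimension.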
+
+\noindent
+In other words, this lemma shows that every irreducible closed
+subset of $\Spec(R)$ is of the form $V(\mathfrak p)$ for
+some prime $\mathfrak p$. Since $V(\mathfrak p) = \overline{\{\mathfrak p\}}$
+we see that each irreducible closed subset has a unique generic point,
+see Topology, Definition \ref{topology-definition-generic-point}.
+In particular, $\Spec(R)$ is a sober topological space.
+We record this fact in the following lemma.
+
+\begin{lemma}
+\label{lemma-spec-spectral}
+The spectrum of a ring is a spectral space, see Topology, Definition
+\ref{topology-definition-spectral-space}.
+\end{lemma}
+
+\begin{proof}
+Formally this follows from Lemma \ref{lemma-irreducible} and
+Lemma \ref{lemma-topology-spec}. See also discussion above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-irreducible-components-containing-x}
+Let $R$ be a ring. Let $\mathfrak p \subset R$ be a prime.
+\begin{enumerate}
+\item the set of irreducible closed subsets of $\Spec(R)$
+passing through $\mathfrak p$ is in one-to-one correspondence with
+primes $\mathfrak q \subset R_{\mathfrak p}$.
+\item The set of irreducible components of $\Spec(R)$ passing through
+$\mathfrak p$ is in one-to-one correspondence with minimal
+primes $\mathfrak q \subset R_{\mathfrak p}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-irreducible}
+and the description of $\Spec(R_\mathfrak p)$ in
+Lemma \ref{lemma-spec-localization} which shows that
+$\Spec(R_\mathfrak p)$ corresponds to primes $\mathfrak q$ in $R$
+with $\mathfrak q \subset \mathfrak p$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-standard-open-containing-maximal-point}
+Let $R$ be a ring.
+Let $\mathfrak p$ be a minimal prime of $R$.
+Let $W \subset \Spec(R)$ be a quasi-compact open
+not containing the point $\mathfrak p$. Then there
+exists an $f \in R$, $f \not \in \mathfrak p$ such
+that $D(f) \cap W = \emptyset$.
+\end{lemma}
+
+\begin{proof}
+Since $W$ is quasi-compact we may write it as a finite union
+of standard affine opens $D(g_i)$, $i = 1, \ldots, n$.
+Since $\mathfrak p \not \in W$ we have $g_i \in \mathfrak p$ for
+all $i$. By Lemma \ref{lemma-minimal-prime-reduced-ring}
+each $g_i$ is nilpotent in $R_{\mathfrak p}$. Hence we can find
+an $f \in R$, $f \not \in \mathfrak p$ such that for all $i$ we have
+$f g_i^{n_i} = 0$ for some $n_i > 0$. Then $D(f)$ works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ring-with-only-minimal-primes}
+Let $R$ be a ring. Let $X = \Spec(R)$ as a topological space.
+The following are equivalent
+\begin{enumerate}
+\item $X$ is profinite,
+\item $X$ is Hausdorff,
+\item $X$ is totally disconnected.
+\item every quasi-compact open of $X$ is closed,
+\item there are no nontrivial inclusions between its prime ideals,
+\item every prime ideal is a maximal ideal,
+\item every prime ideal is minimal,
+\item every standard open $D(f) \subset X$ is closed, and
+\item add more here.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First proof. It is clear that (5), (6), and (7) are equivalent.
+It is clear that (4) and (8) are equivalent as every quasi-compact
+open is a finite union of standard opens.
+The implication (7) $\Rightarrow$ (4) follows from
+Lemma \ref{lemma-standard-open-containing-maximal-point}.
+Assume (4) holds. Let $\mathfrak p, \mathfrak p'$ be distinct
+primes of $R$. Choose an $f \in \mathfrak p'$, $f \not \in \mathfrak p$
+(if needed switch $\mathfrak p$ with $\mathfrak p'$).
+Then $\mathfrak p' \not \in D(f)$ and $\mathfrak p \in D(f)$.
+By (4) the open $D(f)$ is also closed.
+Hence $\mathfrak p$ and $\mathfrak p'$ are in disjoint open
+neighbourhoods whose union is $X$. Thus $X$ is Hausdorff and totally
+disconnected. Thus (4) $\Rightarrow$ (2) and (3).
+If (3) holds then there cannot be any specializations
+between points of $\Spec(R)$ and we see that (5) holds.
+If $X$ is Hausdorff then every point is closed, so (2) implies (6).
+Thus (2), (3), (4), (5), (6), (7) and (8) are equivalent.
+Any profinite space is Hausdorff, so (1) implies (2).
+If $X$ satisfies (2) and (3), then $X$ (being quasi-compact by
+Lemma \ref{lemma-quasi-compact}) is profinite by
+Topology, Lemma \ref{topology-lemma-profinite}.
+
+\medskip\noindent
+Second proof. Besides the equivalence of (4) and (8) this follows
+from Lemma \ref{lemma-spec-spectral} and purely topological facts, see
+Topology, Lemma \ref{topology-lemma-characterize-profinite-spectral}.
+\end{proof}
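\noindent
For example, the equivalent conditions of the lemma hold for any finite
product of fields, in which case $X$ is a finite discrete space. They
also hold for any Boolean ring $R$, i.e., a ring in which $x^2 = x$ for
all $x \in R$: every prime of such a ring is maximal with residue field
$\mathbf{F}_2$, and $X$ is the profinite (Stone) space of the Boolean
algebra corresponding to $R$.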
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Examples of spectra of rings}
+\label{section-examples-spectra}
+
\noindent
In this section we give some examples of spectra of rings.
+
+\begin{example}
+\label{example-spec-Zxmodx2minus4}
+In this example we describe $X = \Spec(\mathbf{Z}[x]/(x^2 - 4))$.
+Let $\mathfrak{p}$ be an arbitrary prime in $X$.
+Let $\phi : \mathbf{Z} \to \mathbf{Z}[x]/(x^2 - 4)$ be the natural ring map.
+Then, $ \phi^{-1}(\mathfrak p)$ is a prime in $\mathbf{Z}$.
+If $ \phi^{-1}(\mathfrak p) = (2)$, then since $\mathfrak p$ contains $2$,
+it corresponds to a prime ideal in
+$\mathbf{Z}[x]/(x^2 - 4, 2) \cong (\mathbf{Z}/2\mathbf{Z})[x]/(x^2)$
+via the map $ \mathbf{Z}[x]/(x^2 - 4) \to \mathbf{Z}[x]/(x^2 - 4, 2)$.
+Any prime in $(\mathbf{Z}/2\mathbf{Z})[x]/(x^2)$ corresponds to a prime
+in $(\mathbf{Z}/2\mathbf{Z})[x]$ containing $(x^2)$. Such primes will
+then contain $x$. Since
+$(\mathbf{Z}/2\mathbf{Z}) \cong (\mathbf{Z}/2\mathbf{Z})[x]/(x)$ is a field,
+$(x)$ is a maximal ideal. Since any prime contains $(x)$ and $(x)$ is
+maximal, the ring contains only one prime $(x)$. Thus, in this case,
+$\mathfrak p = (2, x)$. Now, if $ \phi^{-1}(\mathfrak p) = (q)$ for
+$q > 2$, then since $\mathfrak p$ contains $q$, it corresponds to a
+prime ideal in
+$\mathbf{Z}[x]/(x^2 - 4, q) \cong (\mathbf{Z}/q\mathbf{Z})[x]/(x^2 - 4)$
+via the map $ \mathbf{Z}[x]/(x^2 - 4) \to \mathbf{Z}[x]/(x^2 - 4, q)$.
+Any prime in $(\mathbf{Z}/q\mathbf{Z})[x]/(x^2 - 4)$ corresponds to a
+prime in $(\mathbf{Z}/q\mathbf{Z})[x]$ containing $(x^2 - 4) = (x -2)(x + 2)$.
+Hence, these primes must contain either $x -2$ or $x + 2$. Since
+$(\mathbf{Z}/q\mathbf{Z})[x]$ is a PID, all nonzero
+primes are maximal, and so there
+are precisely 2 primes in $(\mathbf{Z}/q\mathbf{Z})[x]$ containing
+$(x-2)(x + 2)$, namely $(x-2)$ and $(x + 2)$. In conclusion, there exist two
primes $(q, x-2)$ and $(q, x + 2)$, since $2 \neq -2$ in $\mathbf{Z}/(q)$.
+Finally, we treat the case where $\phi^{-1}(\mathfrak p) = (0)$. Notice
+that $\mathfrak p$ corresponds to a prime ideal in $\mathbf{Z}[x]$ that
+contains $(x^2 - 4) = (x -2)(x + 2)$. Hence, $\mathfrak p$ contains either
+$(x-2)$ or $(x + 2)$. Hence, $\mathfrak p$ corresponds to a prime in
+$\mathbf{Z}[x]/(x - 2)$ or one in $\mathbf{Z}[x]/(x + 2)$ that intersects
+$\mathbf{Z}$ only at $0$, by assumption. Since
+$\mathbf{Z}[x]/(x - 2) \cong \mathbf{Z}$ and
+$\mathbf{Z}[x]/(x + 2) \cong \mathbf{Z}$, this means that $\mathfrak p$
+must correspond to $0$ in one of these rings. Thus,
+$\mathfrak p = (x - 2)$ or $\mathfrak p = (x + 2)$ in the original ring.
+\end{example}
+
+\begin{example}
+\label{example-spec-Zx}
+In this example we describe $X = \Spec(\mathbf{Z}[x])$.
+Fix $\mathfrak p \in X$.
+Let $\phi : \mathbf{Z} \to \mathbf{Z}[x]$ and notice
+that $\phi^{-1}(\mathfrak p) \in \Spec(\mathbf{Z})$.
If $\phi^{-1}(\mathfrak p) = (q)$ for some prime number $q$,
then $\mathfrak p$ corresponds to a prime in $(\mathbf{Z}/(q))[x]$,
which is either the zero ideal, in which case $\mathfrak p = (q)$, or is
generated by a polynomial that is irreducible in
$(\mathbf{Z}/(q))[x]$. If we choose a representative of this polynomial
with minimal degree, then it will also be irreducible in $\mathbf{Z}[x]$.
Hence, in this case $\mathfrak p = (q)$ or $\mathfrak p = (q, f_q)$, where
$f_q$ is a polynomial in $\mathbf{Z}[x]$ that is irreducible when viewed
in $(\mathbf{Z}/(q))[x]$. Now, assume that $\phi^{-1}(\mathfrak p) = (0)$.
If $\mathfrak p = (0)$ we are done, so assume $\mathfrak p \neq (0)$.
Then $\mathfrak p$ is generated by nonconstant polynomials
which, since $\mathfrak p$ is prime, may be assumed to be irreducible in
+$\mathbf{Z}[x]$. By Gauss' lemma, these polynomials are also irreducible
+in $\mathbf{Q}[x]$. Since $\mathbf{Q}[x]$ is a Euclidean domain, if there
+are at least two distinct irreducibles $f, g$ generating $\mathfrak p$,
+then $1 = af + bg$ for $a, b \in \mathbf{Q}[x]$. Multiplying through by
+a common denominator, we see that $m = \bar{a}f + \bar{b} g$ for
+$\bar{a}, \bar{b} \in \mathbf{Z}[x]$ and nonzero $m \in \mathbf{Z}$.
+This is a contradiction. Hence, $\mathfrak p$ is generated by one
+irreducible polynomial in $\mathbf{Z}[x]$.
+\end{example}
+
+\begin{example}
+\label{example-spec-kxy}
+In this example we describe $X = \Spec(k[x, y])$
+when $k$ is an arbitrary field.
+Clearly $(0)$ is prime, and any principal ideal generated by an
+irreducible polynomial will also be a prime since $k[x, y]$ is a
+unique factorization domain. Now assume $\mathfrak p$ is an
+element of $X$ that is not principal. Since $k[x, y]$ is a
+Noetherian UFD, the prime ideal $\mathfrak p$ can be generated
+by a finite number of irreducible polynomials $(f_1, \ldots, f_n)$.
+Now, I claim that if $f, g$ are irreducible polynomials in $k[x, y]$
+that are not associates, then $(f, g) \cap k[x] \neq 0$. To do this,
+it is enough to show that $f$ and $g$ are relatively prime when
+viewed in $k(x)[y]$. In this case, $k(x)[y]$ is a Euclidean domain,
+so by applying the Euclidean algorithm and clearing denominators, we
+obtain $p = af + bg$ for $p, a, b \in k[x]$. Thus, assume this is not
+the case, that is, that some nonunit $h \in k(x)[y]$ divides both
$f$ and $g$. Then, by Gauss's lemma, for some $a, b \in k(x)$ we
have $ah \mid f$ and $bh \mid g$ with $ah, bh \in k[x, y]$. By
+irreducibility, $ah = f$ and
+$bh = g$ (since $h \notin k(x)$). So, back in $k(x)[y]$, $f, g $
+are associates, as $\frac{a}{b} g = f$. Since
+$k(x)$ is the fraction field of $k[x]$, we can write $g = \frac{r}{s} f $
+for elements $r , s \in k[x]$ sharing no common factors. This
+implies that $sg = rf$ in $k[x, y]$ and so $s$ must divide $f$
+since $k[x, y]$ is a UFD. Hence, $s = 1$ or $s = f$. If $s = f$,
+then $r = g$, implying $f, g \in k[x]$ and thus must be units in
+$k(x)$ and relatively prime in $k(x)[y]$, contradicting our
+hypothesis. If $s = 1$, then $g = rf$, another contradiction.
+Thus, we must have $f, g$ relatively prime in $k(x)[y]$, a
+Euclidean domain. Thus, we have reduced to the case $\mathfrak p$
+contains some irreducible polynomial $p \in k[x] \subset k[x, y]$.
+By the above, $\mathfrak p$ corresponds to a prime in the ring
$k[x, y]/(p) \cong k(\alpha)[y]$, where $\alpha$ is an element
algebraic over $k$ with minimal polynomial $p$. This is a
+PID, and so any prime ideal corresponds to $(0)$ or an
+irreducible polynomial in $k(\alpha)[y]$. Thus, $\mathfrak p$
+is of the form $(p)$ or $(p, f)$ where $f$ is a
+polynomial in $k[x, y]$ that is irreducible in the quotient
+$k[x, y]/(p)$.
+\end{example}
+
+\begin{example}
+\label{example-affine-open-not-standard}
+Consider the ring
+$$
+R = \{ f \in \mathbf{Q}[z]\text{ with }f(0) = f(1) \}.
+$$
+Consider the map
+$$
+\varphi : \mathbf{Q}[A, B] \to R
+$$
+defined by $\varphi(A) = z^2-z$ and $\varphi(B) = z^3-z^2$. It is
+easily checked that $(A^3 - B^2 + AB) \subset \Ker(\varphi)$ and that
+$A^3 - B^2 + AB$ is irreducible. Assume that $\varphi$ is surjective;
+then since $R$ is an integral domain (it is a subring of an integral
+domain), $\Ker(\varphi)$ must be a prime ideal of $\mathbf{Q}[A, B]$.
+The prime ideals which contain $(A^3-B^2 + AB)$ are $(A^3-B^2 + AB)$
+itself and any maximal ideal $(f, g)$ with $f, g\in\mathbf{Q}[A, B]$
+such that $f$ is irreducible mod $g$. But $R$ is not a field, so the
+kernel must be $(A^3-B^2 + AB)$; hence $\varphi$ gives an isomorphism
+$R \to \mathbf{Q}[A, B]/(A^3-B^2 + AB)$.
+
+\medskip\noindent
+To see that $\varphi$ is surjective, we must express any
+$f\in R$ as a $\mathbf{Q}$-coefficient polynomial in $A(z) = z^2-z$
+and $B(z) = z^3-z^2$. Note the relation $zA(z) = B(z)$. Let
+$a = f(0) = f(1)$. Then $z(z-1)$ must divide $f(z)-a$, so we can write
$f(z) = z(z-1)g(z)+a = A(z)g(z)+a$. If $\deg(g) < 2$, then
$g(z) = c_1z + c_0$ and $f(z) = A(z)(c_1z + c_0)+a = c_1B(z)+c_0A(z)+a$, so we
+are done. If $\deg(g)\geq 2$, then by the polynomial division
+algorithm, we can write $g(z) = A(z)h(z)+b_1z + b_0$
($\deg(h)\leq\deg(g)-2$), so $f(z) = A(z)^2h(z)+b_1B(z)+b_0A(z)+a$.
+Applying division to $h(z)$ and iterating, we obtain an expression
+for $f(z)$ as a polynomial in $A(z)$ and $B(z)$; hence $\varphi$ is
+surjective.
+
+\medskip\noindent
+Now let $a \in \mathbf{Q}$, $a \neq 0, \frac{1}{2}, 1$ and
+consider
+$$
+R_a = \{ f \in \mathbf{Q}[z, \frac{1}{z-a}]\text{ with }f(0) = f(1)
+\}.
+$$
+This is a finitely generated $\mathbf{Q}$-algebra as well: it is
+easy to check that the functions $z^2-z$, $z^3-z$, and
$\frac{a^2-a}{z-a}+z$ generate $R_a$ as a $\mathbf{Q}$-algebra. We
+have the following inclusions:
+$$
+R\subset R_a\subset\mathbf{Q}[z, \frac{1}{z-a}], \quad
+R\subset\mathbf{Q}[z]\subset\mathbf{Q}[z, \frac{1}{z-a}].
+$$
Recall (Lemma \ref{lemma-spec-localization}) that for a ring $T$ and a
+multiplicative subset $S\subset T$, the ring map $T \to S^{-1}T$
+induces a map on spectra $\Spec(S^{-1}T) \to \Spec(T)$
+which is a homeomorphism onto the subset
+$$
+\{\mathfrak p \in \Spec(T) \mid S \cap \mathfrak p = \emptyset\}
+\subset \Spec(T).
+$$
+When $S = \{ 1, f, f^2, \ldots\}$ for some $f\in T$, this is
the open set $D(f)\subset \Spec(T)$. We now verify a corresponding
+property for the ring map $R \to R_a$: we will show that the map
+$\theta : \Spec(R_a) \to \Spec(R)$ induced by inclusion
+$R\subset R_a$ is a homeomorphism onto an open subset of
+$\Spec(R)$ by verifying that $\theta$ is an injective local
+homeomorphism. We do so with respect to an open cover of
+$\Spec(R_a)$ by two distinguished opens, as we now describe.
+For any $r\in\mathbf{Q}$, let $\text{ev}_r : R \to \mathbf{Q}$ be the
+homomorphism given by evaluation at $r$. Note that for $r = 0$ and
+$r = 1-a$, this can be extended to a homomorphism
+$\text{ev}_r' : R_a \to \mathbf{Q}$ (the latter because $\frac{1}{z-a}$
+is well-defined at $z = 1-a$, since $a\neq\frac{1}{2}$). However,
+$\text{ev}_a$ does not extend to $R_a$. Write
+$\mathfrak{m}_r = \Ker(\text{ev}_r)$. We have
+$$
+\mathfrak{m}_0 = (z^2-z, z^3-z),
+$$
+$$
+\mathfrak{m}_a = ((z-1 + a)(z-a), (z^2-1 + a)(z-a)), \text{ and}
+$$
+$$
+\mathfrak{m}_{1-a} = ((z-1 + a)(z-a), (z-1 + a)(z^2-a)).
+$$
+To verify this, note that the right-hand sides are clearly contained in
+the left-hand sides. Then check that the right-hand sides are
+maximal ideals by writing the generators in terms of $A$ and $B$,
+and viewing $R$ as $\mathbf{Q}[A, B]/(A^3-B^2 + AB)$. Note that
+$\mathfrak{m}_a$ is not in the image of $\theta$: we have
+$$
+(z^2 - z)^2(z - a)\left(\frac{a^2 - a}{z - a} + z\right) =
+(z^2 - z)^2(a^2 - a) + (z^2 - z)^2(z - a)z
+$$
+The left hand side is in $\mathfrak m_a R_a$ because
+$(z^2 - z)(z - a)$ is in $\mathfrak m_a$ and because
+$(z^2 - z)(\frac{a^2 - a}{z - a} + z)$ is in $R_a$. Similarly
+the element $(z^2 - z)^2(z - a)z$ is in $\mathfrak m_a R_a$
+because $(z^2 - z)$ is in $R_a$ and $(z^2 - z)(z - a)$ is in $\mathfrak m_a$.
+As $a \not \in \{0, 1\}$ we conclude that
+$(z^2 - z)^2 \in \mathfrak m_a R_a$. Hence
+no ideal $I$ of $R_a$ can satisfy $I \cap R = \mathfrak m_a$, as such
+an $I$ would have to contain $(z^2 - z)^2$, which is in $R$ but not in
+$\mathfrak m_a$. The distinguished open set
+$D((z-1 + a)(z-a))\subset\Spec(R)$ is equal to the complement of
+the closed set $\{\mathfrak{m}_a, \mathfrak{m}_{1-a}\}$.
Then check that $R_{(z-1 + a)(z-a)} = (R_a)_{(z-1 + a)(z-a)}$; calling
this localized ring $R'$, it follows that the map $R \to R'$
factors as $R \to R_a \to R'$. By Lemma
\ref{lemma-spec-localization}, these maps express
+$\Spec(R') \subset \Spec(R_a)$ and
+$\Spec(R') \subset \Spec(R)$ as open subsets; hence
+$\theta : \Spec(R_a) \to \Spec(R)$, when restricted to
+$D((z-1 + a)(z-a))$, is a homeomorphism onto an open subset.
+Similarly, $\theta$ restricted to
+$D((z^2 + z + 2a-2)(z-a)) \subset \Spec(R_a)$ is a homeomorphism
+onto the open subset $D((z^2 + z + 2a-2)(z-a)) \subset \Spec(R)$.
Depending on whether $z^2 + z + 2a-2$ is irreducible or not over
$\mathbf{Q}$, the distinguished open set
$D((z^2 + z + 2a-2)(z-a)) \subset \Spec(R)$ has complement
equal to one or two closed points along with the closed point
$\mathfrak{m}_a$. Furthermore, the ideal in $R_a$ generated by the
elements $(z^2 + z + 2a-2)(z-a)$ and $(z-1 + a)(z-a)$ is all of $R_a$, so
+these two distinguished open sets cover $\Spec(R_a)$. Hence in
+order to show that $\theta$ is a homeomorphism onto
+$\Spec(R)-\{\mathfrak{m}_a\}$, it suffices to show
+that these one or two points can never equal $\mathfrak{m}_{1-a}$.
+And this is indeed the case, since $1-a$ is a root of $z^2 + z + 2a-2$
+if and only if $a = 0$ or $a = 1$, both of which do not occur.
+
+\medskip\noindent
Despite this homeomorphism, which mimics the behavior of a
localization at an element of $R$, and although
$\mathbf{Q}[z, \frac{1}{z-a}]$ is the localization of $\mathbf{Q}[z]$
at the element $z-a$, the ring $R_a$ is {\it not} a
localization of $R$: any localization $S^{-1}R$ at a multiplicative
subset $S$ containing a nonunit has strictly more units than $R$
(note that $R$ is a domain, so $R \subset S^{-1}R$). The units of $R$ are
$\mathbf{Q}^*$, the units of $\mathbf{Q}$. In fact, it is
easy to see that the units of $R_a$ are also $\mathbf{Q}^*$.
+Namely, the units of $\mathbf{Q}[z, \frac{1}{z - a}]$ are
+$c (z - a)^n$ for $c \in \mathbf{Q}^*$ and $n \in \mathbf{Z}$
+and it is clear that these are in $R_a$ only if $n = 0$.
+Hence $R_a$ has no more units than
+$R$ does, and thus cannot be a localization of $R$.
+
+\medskip\noindent
+We used the fact that $a\neq 0, 1$ to ensure that
+$\frac{1}{z-a}$ makes sense at $z = 0, 1$. We used the fact that
+$a\neq 1/2$ in a few places: (1) In order to be able to talk about
+the kernel of $\text{ev}_{1-a}$ on $R_a$, which ensures that
+$\mathfrak{m}_{1-a}$ is a point of $R_a$ (i.e., that $R_a$ is
+missing just one point of $R$). (2) At the end in order to conclude
that $(z-a)^n$ can only be in $R$ for $n = 0$; indeed, if
$a = 1/2$, then this is in $R$ whenever $n$ is even. Hence
+there would indeed be more units in $R_a$ than in $R$, and $R_a$
+could possibly be a localization of $R$.
+\end{example}
+
+
+
+
+
+\section{A meta-observation about prime ideals}
+\label{section-oka-families}
+
+\noindent
+This section is taken from the CRing project. Let $R$ be a ring and
+let $S \subset R$ be a multiplicative subset.
+A consequence of
+Lemma \ref{lemma-spec-localization}
+is that an ideal $I \subset R$ maximal with respect to the property
+of not intersecting $S$ is prime. The reason is that $I = R \cap \mathfrak m$
+for some maximal ideal $\mathfrak m$ of the ring $S^{-1}R$.
+It turns out that for many properties of ideals, the maximal ones
+are prime. A general method of seeing this was developed in \cite{Lam-Reyes}.
+In this section, we digress to explain this phenomenon.
+
+\medskip\noindent
+Let $R$ be a ring. If $I$ is an ideal of $R$ and $a \in R$, we
+define
+$$
+(I : a) = \left\{ x \in R \mid xa \in I\right\}.
+$$
+More generally, if $J \subset R$ is an ideal, we define
+$$
+(I : J) = \left\{ x \in R \mid xJ \subset I\right\}.
+$$
+
+\begin{lemma}
+\label{lemma-colon}
+Let $R$ be a ring. For a principal ideal $J \subset R$, and for any ideal
+$I \subset J$ we have $I = J (I : J)$.
+\end{lemma}
+
+\begin{proof}
+Say $J = (a)$. Then $(I : J) = (I : a)$.
+Since $I \subset J$ we see that any $y \in I$ is of the form
+$y = xa$ for some $x \in (I : a)$. Hence $I \subset J (I : J)$.
+Conversely, if $x \in (I : a)$, then $xJ = (xa) \subset I$, which
+proves the other inclusion.
+\end{proof}
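\noindent
For example, take $R = \mathbf{Z}$, $J = (4)$, and $I = (12) \subset J$.
Then
$$
(I : J) = (I : 4) = \{ x \in \mathbf{Z} \mid 4x \in 12\mathbf{Z}\} = (3)
$$
and indeed $J(I : J) = (4)(3) = (12) = I$.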
+
+\noindent
+Let $\mathcal{F}$ be a collection of ideals of $R$. We are interested in
+conditions that will guarantee that the maximal elements in the complement
+of $\mathcal{F}$ are prime.
+
+\begin{definition}
+\label{definition-oka-family}
+Let $R$ be a ring. Let $\mathcal{F}$ be a set of ideals of $R$. We say
+$\mathcal{F}$ is an {\it Oka family} if $R \in \mathcal{F}$ and
+whenever $I \subset R$ is an ideal and $(I : a), (I, a) \in \mathcal{F}$
+for some $a \in R$, then $I \in \mathcal{F}$.
+\end{definition}
+
+\noindent
+Let us give some examples of Oka families. The first example is the basic
+example discussed in the introduction to this section.
+
+\begin{example}
+\label{example-oka-family-not-meet-multiplicative-set}
+Let $R$ be a ring and let $S$ be a multiplicative subset of $R$.
+We claim that $\mathcal{F} = \{I \subset R \mid I \cap S \not = \emptyset\}$
+is an Oka family. Namely, suppose that $(I : a), (I, a) \in \mathcal{F}$
+for some $a \in R$. Then pick $s \in (I, a) \cap S$ and
+$s' \in (I : a) \cap S$. Then $ss' \in I \cap S$ and hence
+$I \in \mathcal{F}$. Thus $\mathcal{F}$ is an Oka family.
+\end{example}
+
+\begin{example}
+\label{example-oka-family-finitely-generated}
+Let $R$ be a ring, $I \subset R$ an ideal, and $a \in R$.
+If $(I : a)$ is generated by $a_1, \ldots, a_n$ and $(I, a)$
+is generated by $a, b_1, \ldots, b_m$ with
+$b_1, \ldots, b_m \in I$, then $I$ is generated by
+$aa_1, \ldots, aa_n, b_1, \ldots, b_m$.
+To see this, note that if $x \in I$, then $x \in (I, a)$
+is a linear combination of $a, b_1, \ldots, b_m$, but the
+coefficient of $a$ must lie in $(I:a)$.
+As a result, we deduce that
+the family of finitely generated ideals is an Oka family.
+\end{example}
+
+\begin{example}
+\label{example-oka-family-principal}
+Let us show that the family of principal ideals of a ring $R$ is an Oka family.
+Indeed, suppose $I \subset R$ is an ideal, $a \in R$, and $(I, a)$ and
+$(I : a)$ are principal. Note that $(I : a) = (I : (I, a))$.
+Setting $J = (I, a)$, we find that $J$ is principal and $(I : J)$ is too. By
+Lemma \ref{lemma-colon}
+we have $I = J (I : J)$.
+Thus we find in our situation that since $J = (I, a)$ and $(I : J)$
+are principal, $I$ is principal.
+\end{example}
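+
+\noindent
+As a concrete check of the argument, take $R = \mathbf{Z}$, $I = (4)$
+and $a = 2$. Then
+$$
+J = (I, a) = (2), \quad (I : J) = (I : a) = (2), \quad
+J(I : J) = (2)(2) = (4) = I,
+$$
+as predicted by Lemma \ref{lemma-colon}.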
+
+\begin{example}
+\label{example-oka-family-bound-cardinality}
+Let $R$ be a ring.
+Let $\kappa$ be an infinite cardinal.
+The family of ideals which can be generated by at most $\kappa$ elements
+is an Oka family. The argument is analogous to the argument in
+Example \ref{example-oka-family-finitely-generated}
+and is omitted.
+\end{example}
+
+\begin{example}
+\label{example-oka-family-property-modules}
+Let $A$ be a ring, $I \subset A$ an ideal, and $a \in A$ an element.
+There is a short exact sequence $0 \to A/(I : a) \to A/I \to A/(I, a) \to 0$
+where the first arrow is given by multiplication by $a$. Thus if $P$
+is a property of $A$-modules that is stable under extensions and holds
+for $0$, then the family of ideals $I$ such that $A/I$ has $P$
+is an Oka family.
+\end{example}
+
+\begin{proposition}
+\label{proposition-oka}
+If $\mathcal{F}$ is an Oka family of ideals, then any maximal element of
+the complement of $\mathcal{F}$ is prime.
+\end{proposition}
+
+\begin{proof}
+Suppose $I \not \in \mathcal{F}$ is maximal with respect to not being in
+$\mathcal{F}$ but $I$ is not prime. Note that $I \not = R$ because
+$R \in \mathcal{F}$. Since $I$ is not prime we can find $a, b \in R - I$
+with $ab \in I$. It follows that $(I, a) \neq I$ and $(I : a)$ contains
+$b \not \in I$ so also $(I : a) \neq I$. Thus $(I : a), (I, a)$ both
+strictly contain $I$, so they must belong to $\mathcal{F}$.
+By the Oka condition, we have $I \in \mathcal{F}$, a contradiction.
+\end{proof}
+
+\noindent
+At this point we are able to turn most of the examples above into
+a lemma about prime ideals in a ring.
+
+\begin{lemma}
+\label{lemma-simple}
+Let $R$ be a ring. Let $S$ be a multiplicative subset of $R$.
+An ideal $I \subset R$ which is maximal with respect to the property
+that $I \cap S = \emptyset$ is prime.
+\end{lemma}
+
+\begin{proof}
+This is the example discussed in the introduction to this section.
+For an alternative proof, combine
+Example \ref{example-oka-family-not-meet-multiplicative-set}
+with
+Proposition \ref{proposition-oka}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohen}
+Let $R$ be a ring.
+\begin{enumerate}
+\item An ideal $I \subset R$ maximal with respect to not being
+finitely generated is prime.
+\item If every prime ideal of $R$ is
+finitely generated, then
+every ideal of $R$ is finitely generated\footnote{Later we will say
+that $R$ is Noetherian.}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first assertion is an immediate consequence of
+Example \ref{example-oka-family-finitely-generated} and
+Proposition \ref{proposition-oka}. For the second,
+suppose that there exists an ideal $I \subset R$ which is not finitely
+generated. The union of a totally ordered chain $\left\{I_\alpha\right\}$
+of ideals that are not finitely generated is not finitely generated;
+indeed, if $I = \bigcup I_\alpha$ were generated by
+$a_1, \ldots, a_n$, then all the generators would belong to some
+$I_\alpha $ and would consequently generate it.
+By Zorn's lemma, there is an ideal maximal with respect to being not finitely
+generated. By the first part this ideal is prime.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-primes-principal}
+Let $R$ be a ring.
+\begin{enumerate}
+\item An ideal $I \subset R$ maximal with respect to not being
+principal is prime.
+\item If every prime ideal of $R$ is principal, then
+every ideal of $R$ is principal.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first part follows from
+Example \ref{example-oka-family-principal} and
+Proposition \ref{proposition-oka}.
+For the second, suppose that there exists an ideal $I \subset R$
+which is not principal. The union of a totally ordered chain
+$\left\{I_\alpha\right\}$ of ideals that are not principal is not principal;
+indeed, if $I = \bigcup I_\alpha$ were generated by
+$a$, then $a$ would belong to some $I_\alpha $ and $a$ would generate it.
+By Zorn's lemma, there is an ideal maximal with respect to not being
+principal. This ideal is necessarily prime by the first part.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-domain}
+Let $R$ be a ring.
+\begin{enumerate}
+\item An ideal maximal among the ideals which do not contain a
+nonzerodivisor is prime.
+\item If $R$ is nonzero and every nonzero prime ideal in $R$
+contains a nonzerodivisor, then $R$ is a domain.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Consider the set $S$ of nonzerodivisors. It is a multiplicative
+subset of $R$. Hence any ideal maximal with respect to not intersecting
+$S$ is prime, see
+Lemma \ref{lemma-simple}.
+Thus, if every nonzero prime ideal contains a nonzerodivisor, then
+$(0)$ is prime, i.e., $R$ is a domain.
+\end{proof}
+
+\begin{remark}
+\label{remark-cohen-bound-cardinality}
+Let $R$ be a ring. Let $\kappa$ be an infinite cardinal.
+By applying
+Example \ref{example-oka-family-bound-cardinality} and
+Proposition \ref{proposition-oka}
+we see that any ideal maximal with respect to the property of not being
+generated by $\kappa$ elements is prime. This result is not so
+useful because there exists a ring $R$ for which every prime ideal
+of $R$ can be generated by $\aleph_0$ elements, but some
+ideal cannot. Namely, let $k$ be a field, let $T$ be a set whose
+cardinality is greater than $\aleph_0$ and let
+$$
+R = k[\{x_n\}_{n \geq 1}, \{z_{t, n}\}_{t \in T, n \geq 0}]/
+(x_n^2, z_{t, n}^2, x_n z_{t, n} - z_{t, n - 1})
+$$
+This is a local ring with unique prime ideal
+$\mathfrak m = (x_n)$. But the ideal $(z_{t, n})$ cannot
+be generated by countably many elements.
+\end{remark}
+
+\begin{example}
+\label{example-noetherian-topology-on-spec}
+\begin{reference}
+Comment by Lukas Heger of November 12, 2020.
+\end{reference}
+Let $R$ be a ring and $X = \Spec(R)$. Since closed subsets of $X$ correspond to
+radical ideals of $R$ (Lemma \ref{lemma-Zariski-topology}) we see that $X$ is a
+Noetherian topological space if and only if we have ACC for radical ideals.
+This holds if and only if every radical ideal is the radical of a finitely
+generated ideal (details omitted). Let
+$$
+\mathcal{F} = \{I \subset R \mid
+\sqrt{I} = \sqrt{(f_1, \ldots, f_n)}\text{ for some }n
+\text{ and }f_1, \ldots, f_n \in R\}.
+$$
+The reader can show that $\mathcal{F}$ is an Oka family by using the identity
+$$
+\sqrt{I} = \sqrt{(I, a)(I : a)}
+$$
+which holds for any ideal $I \subset R$ and any element $a \in R$.
+On the other hand, if we have a totally ordered chain of ideals
+$\{I_\alpha\}$ none of which are in $\mathcal{F}$, then the union
+$I = \bigcup I_\alpha$ cannot be in $\mathcal{F}$ either. Namely, if
+$\sqrt{I} = \sqrt{(f_1, \ldots, f_n)}$, then $f_i^e \in I$ for some $e$,
+hence $f_i^e \in I_\alpha$ for some $\alpha$ independent of $i$, hence
+$\sqrt{I_\alpha} = \sqrt{(f_1, \ldots, f_n)}$, a contradiction.
+Thus if the set of ideals not in $\mathcal{F}$ is nonempty, then it
+has maximal elements and
+exactly as in Lemma \ref{lemma-cohen} we conclude that $X$ is a
+Noetherian topological space if and only if every prime ideal of $R$
+is equal to $\sqrt{(f_1, \ldots, f_n)}$ for some $f_1, \ldots, f_n \in R$.
+If we ever need this result we will carefully state and prove it here.
+\end{example}
+
+
+
+
+
+
+
+
+\section{Images of ring maps of finite presentation}
+\label{section-images-finite-presentation}
+
+\noindent
+In this section we prove some results on the
+topology of maps $\Spec(S) \to \Spec(R)$
+induced by ring maps $R \to S$, mainly Chevalley's Theorem.
+In order to do this we will use the notions of constructible sets,
+quasi-compact sets, retrocompact sets, and so on
+which are defined in Topology, Section \ref{topology-section-constructible}.
+
+\begin{lemma}
+\label{lemma-qc-open}
+Let $U \subset \Spec(R)$ be open. The following
+are equivalent:
+\begin{enumerate}
+\item $U$ is retrocompact in $\Spec(R)$,
+\item $U$ is quasi-compact,
+\item $U$ is a finite union of standard opens, and
+\item there exists a finitely generated ideal $I \subset R$ such
+that $X \setminus V(I) = U$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have (1) $\Rightarrow$ (2) because $\Spec(R)$ is quasi-compact, see
+Lemma \ref{lemma-quasi-compact}. We have (2) $\Rightarrow$ (3) because
+standard opens form a basis for the topology. Proof of (3) $\Rightarrow$ (1).
+Let $U = \bigcup_{i = 1\ldots n} D(f_i)$. To show that $U$ is retrocompact
+in $\Spec(R)$ it suffices to show that $U \cap V$ is quasi-compact for any
+quasi-compact open $V$ of $\Spec(R)$. Write
+$V = \bigcup_{j = 1\ldots m} D(g_j)$ which is possible by (2) $\Rightarrow$
+(3). Each standard open is homeomorphic to the spectrum of a ring and hence
+quasi-compact, see Lemmas \ref{lemma-standard-open} and
+\ref{lemma-quasi-compact}. Thus
+$U \cap V =
+(\bigcup_{i = 1\ldots n} D(f_i)) \cap (\bigcup_{j = 1\ldots m} D(g_j))
+= \bigcup_{i, j} D(f_i g_j)$ is a finite union of quasi-compact opens
+hence quasi-compact. To finish the proof note
+that (4) is equivalent to (3) by
+Lemma \ref{lemma-Zariski-topology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-affine-map-quasi-compact}
+Let $\varphi : R \to S$ be a ring map.
+The induced continuous map $f : \Spec(S) \to \Spec(R)$
+is quasi-compact. For any constructible set $E \subset \Spec(R)$
+the inverse image $f^{-1}(E)$ is constructible in $\Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+We first show that the inverse image of any quasi-compact
+open $U \subset \Spec(R)$ is quasi-compact. By
+Lemma \ref{lemma-qc-open} we may write $U$ as a finite
+union of standard opens. Thus by Lemma \ref{lemma-spec-functorial}
+we see that $f^{-1}(U)$ is a finite union of standard opens.
+Hence $f^{-1}(U)$ is quasi-compact by Lemma \ref{lemma-qc-open} again.
+The second assertion now follows from Topology, Lemma
+\ref{topology-lemma-inverse-images-constructibles}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-constructible}
+Let $R$ be a ring. A subset of $\Spec(R)$ is constructible if and only
+if it can be written as a finite union of subsets of the form
+$D(f) \cap V(g_1, \ldots, g_m)$ for $f, g_1, \ldots, g_m \in R$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-qc-open} the subset $D(f)$ and the complement of
+$V(g_1, \ldots, g_m)$ are retrocompact open. Hence
+$D(f) \cap V(g_1, \ldots, g_m)$ is a constructible subset and so is
+any finite union of such. Conversely, let $T \subset \Spec(R)$ be
+constructible. By Topology, Definition \ref{topology-definition-constructible},
+we may assume that $T = U \cap V^c$, where $U, V \subset \Spec(R)$
+are retrocompact open. By Lemma \ref{lemma-qc-open} we may write
+$U = \bigcup_{i = 1, \ldots, n} D(f_i)$ and
+$V = \bigcup_{j = 1, \ldots, m} D(g_j)$. Then
+$T = \bigcup_{i = 1, \ldots, n} \big(D(f_i) \cap V(g_1, \ldots, g_m)\big)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-constructible-is-image}
+Let $R$ be a ring and let $T \subset \Spec(R)$
+be constructible. Then there exists a ring map $R \to S$ of
+finite presentation such that $T$ is the image of
+$\Spec(S)$ in $\Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+The spectrum of a finite product of rings
+is the disjoint union of the spectra, see
+Lemma \ref{lemma-spec-product}. Hence if $T = T_1 \cup T_2$
+and the result holds for $T_1$ and $T_2$, then the
+result holds for $T$.
+By Lemma \ref{lemma-constructible} we may assume
+that $T = D(f) \cap V(g_1, \ldots, g_m)$.
+In this case $T$ is the image of the map
+$\Spec((R/(g_1, \ldots, g_m))_f) \to \Spec(R)$, see Lemmas
+\ref{lemma-standard-open} and \ref{lemma-spec-closed}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open-fp}
+Let $R$ be a ring.
+Let $f$ be an element of $R$.
+Let $S = R_f$.
+Then the image of a constructible subset of $\Spec(S)$
+is constructible in $\Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+We repeatedly use Lemma \ref{lemma-qc-open} without mention.
+Let $U, V$ be quasi-compact open in $\Spec(S)$.
+We will show that the image of $U \cap V^c$ is constructible.
+Under the identification
+$\Spec(S) = D(f)$ of Lemma \ref{lemma-standard-open}
+the sets $U, V$ correspond to quasi-compact opens
+$U', V'$ of $\Spec(R)$.
+Hence it suffices to show that $U' \cap (V')^c$
+is constructible in $\Spec(R)$ which is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-fp}
+Let $R$ be a ring.
+Let $I$ be a finitely generated ideal of $R$.
+Let $S = R/I$.
+Then the image of a constructible subset of $\Spec(S)$
+is constructible in $\Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+If $I = (f_1, \ldots, f_m)$, then we see that
+$V(I)$ is the complement of $\bigcup D(f_i)$,
+see Lemma \ref{lemma-Zariski-topology}.
+Hence it is constructible, by Lemma \ref{lemma-qc-open}.
+Denote the map $R \to S$ by $f \mapsto \overline{f}$.
+We have to show that if $\overline{U}, \overline{V}$
+are retrocompact opens of $\Spec(S)$, then the
+image of $\overline{U} \cap \overline{V}^c$
+in $\Spec(R)$ is constructible.
+By Lemma \ref{lemma-qc-open} we may write
+$\overline{U} = \bigcup D(\overline{g_i})$.
+Setting $U = \bigcup D(g_i)$ we see $\overline{U}$
+has image $U \cap V(I)$ which is constructible in
+$\Spec(R)$. Similarly the image of $\overline{V}$ equals
+$V \cap V(I)$ for some retrocompact open $V$ of $\Spec(R)$.
+Hence the image of $\overline{U} \cap \overline{V}^c$
+equals $U \cap V(I) \cap V^c$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-affineline-open}
+Let $R$ be a ring. The map $\Spec(R[x]) \to \Spec(R)$
+is open, and the image of any standard open is a quasi-compact
+open.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that the image of a standard open
+$D(f)$, $f\in R[x]$ is quasi-compact open.
+The image of $D(f)$ is the image of
+$\Spec(R[x]_f) \to \Spec(R)$.
+Let $\mathfrak p \subset R$ be a prime ideal.
+Let $\overline{f}$ be the image of $f$ in
+$\kappa(\mathfrak p)[x]$.
+Recall, see Lemma \ref{lemma-in-image},
+that $\mathfrak p$ is in the image
+if and only if $R[x]_f \otimes_R \kappa(\mathfrak p) =
+\kappa(\mathfrak p)[x]_{\overline{f}}$ is not the
+zero ring. This is exactly the condition that $f$ does not map
+to zero in $\kappa(\mathfrak p)[x]$, in other words, that
+some coefficient of $f$ is not in $\mathfrak p$.
+Hence we see: if $f = a_d x^d + \ldots + a_0$, then
+the image of $D(f)$ is $D(a_d) \cup \ldots \cup D(a_0)$.
+\end{proof}
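+
+\noindent
+For instance, if $R = \mathbf{Z}$ and $f = 2x + 4 \in \mathbf{Z}[x]$,
+then the image of $D(f)$ is $D(2) \cup D(4) = D(2)$. Indeed, $f$ maps
+to zero in $\mathbf{F}_2[x]$ and to a nonzero polynomial in
+$\kappa(\mathfrak p)[x]$ for every prime $\mathfrak p$ with
+$2 \not \in \mathfrak p$.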
+
+\noindent
+We prove a property of characteristic polynomials which
+will be used below.
+
+\begin{lemma}
+\label{lemma-characteristic-polynomial-prime}
+Let $R \to A$ be a ring homomorphism.
+Assume $A \cong R^{\oplus n}$ as an $R$-module.
+Let $f \in A$. The multiplication map $m_f: A
+\to A$ is $R$-linear and hence
+has a characteristic polynomial
+$P(T) = T^n + r_{n-1}T^{n-1} + \ldots + r_0 \in R[T]$.
+For any prime
+$\mathfrak{p} \in \Spec(R)$, $f$ acts nilpotently on $A
+\otimes_R \kappa(\mathfrak{p})$ if and only if $\mathfrak p \in
+V(r_0, \ldots, r_{n-1})$.
+\end{lemma}
+
+\begin{proof}
+This follows quite easily once we prove that the characteristic
+polynomial $\bar P(T) \in \kappa(\mathfrak p)[T]$ of the
+multiplication map $m_{\bar f}: A \otimes_R \kappa(\mathfrak p) \to
+A \otimes_R \kappa(\mathfrak p)$ which multiplies elements of $A
+\otimes_R \kappa(\mathfrak p)$ by $\bar f$, the image of $f$ in
+$A \otimes_R \kappa(\mathfrak p)$, is just the image of $P(T)$ in
+$\kappa(\mathfrak p)[T]$. Let $(a_{ij})$ be the matrix of the map
+$m_f$ with entries in $R$, using a basis $e_1, \ldots, e_n$
+of $A$ as an $R$-module.
+Then, $A \otimes_R \kappa(\mathfrak p) \cong (R \otimes_R
+\kappa(\mathfrak p))^{\oplus n} = \kappa(\mathfrak p)^n$, which is
+an $n$-dimensional vector space over $\kappa(\mathfrak p)$ with
+basis $e_1 \otimes 1, \ldots, e_n \otimes 1$. The image $\bar f = f
+\otimes 1$, and so the multiplication map $m_{\bar f}$ has matrix
+$(a_{ij} \otimes 1)$. Thus, the characteristic polynomial is
+precisely the image of $P(T)$.
+
+\medskip\noindent
+From linear algebra, we know that a linear transformation acts
+nilpotently on an $n$-dimensional vector space if and only if the
+characteristic polynomial is $T^n$ (since the characteristic
+polynomial divides some power of the minimal polynomial). Hence,
+$f$ acts nilpotently on $A \otimes_R \kappa(\mathfrak p)$ if and
+only if $\bar P(T) = T^n$. This occurs if and only if $r_i \in
+\mathfrak p$ for all $0 \leq i \leq n - 1$, that is when $\mathfrak p \in
+V(r_0, \ldots, r_{n - 1}).$
+\end{proof}
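+
+\noindent
+As an illustration, let $R = \mathbf{Z}$ and
+$A = \mathbf{Z}[x]/(x^2 - 2)$, free with basis $1, x$, and take
+$f = x$. Multiplication by $x$ sends $1 \mapsto x$ and
+$x \mapsto x^2 = 2$, so
+$$
+m_f =
+\left(
+\begin{matrix}
+0 & 2 \\
+1 & 0
+\end{matrix}
+\right),
+\qquad
+P(T) = T^2 - 2,
+$$
+i.e., $r_1 = 0$ and $r_0 = -2$. Thus $x$ acts nilpotently on
+$A \otimes_{\mathbf{Z}} \kappa(\mathfrak p)$ if and only if
+$\mathfrak p \in V(-2, 0) = V(2)$; indeed
+$A \otimes_{\mathbf{Z}} \mathbf{F}_2 = \mathbf{F}_2[x]/(x^2)$.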
+
+\begin{lemma}
+\label{lemma-affineline-special}
+Let $R$ be a ring. Let $f, g \in R[x]$ be polynomials.
+Assume the leading coefficient of $g$ is a unit of $R$.
+There exist elements $r_i \in R$, $i = 1, \ldots, n$, such that
+the image of $D(f) \cap V(g)$ in $\Spec(R)$ is
+$\bigcup_{i = 1, \ldots, n} D(r_i)$.
+\end{lemma}
+
+\begin{proof}
+Write $g = ux^d + a_{d-1}x^{d-1} + \ldots + a_0$, where
+$d$ is the degree of $g$, and hence $u \in R^*$.
+Consider the ring $A = R[x]/(g)$.
+It is, as an $R$-module, finite free with basis the images
+of $1, x, \ldots, x^{d-1}$. Consider multiplication
+by (the image of) $f$ on $A$. This is an $R$-module map.
+Hence we can let $P(T) \in R[T]$ be the characteristic polynomial
+of this map. Write $P(T) = T^d + r_{d-1} T^{d-1} + \ldots + r_0$.
+We claim that $r_0, \ldots, r_{d-1}$ have the desired property.
+We will use below the property of characteristic polynomials
+that
+$$
+\mathfrak p \in V(r_0, \ldots, r_{d-1})
+\Leftrightarrow
+\text{multiplication by }f\text{ is nilpotent on }
+A \otimes_R \kappa(\mathfrak p).
+$$
+This was proved in Lemma \ref{lemma-characteristic-polynomial-prime}.
+
+\medskip\noindent
+Suppose $\mathfrak q\in D(f) \cap V(g)$, and let
+$\mathfrak p = \mathfrak q \cap R$. Then there is a nonzero map
+$A \otimes_R \kappa(\mathfrak p) \to \kappa(\mathfrak q)$ which
+is compatible with multiplication by $f$.
+And $f$ acts as a unit on $\kappa(\mathfrak q)$.
+Thus we conclude $\mathfrak p \not \in V(r_0, \ldots, r_{d-1})$.
+
+\medskip\noindent
+On the other hand, suppose that $r_i \not\in \mathfrak p$ for some
+prime $\mathfrak p$ of $R$ and some $0 \leq i \leq d - 1$.
+Then multiplication by $f$ is not nilpotent on the algebra
+$A \otimes_R \kappa(\mathfrak p)$.
+Hence there exists a prime ideal $\overline{\mathfrak q} \subset
+A \otimes_R \kappa(\mathfrak p)$ not containing the image of $f$.
+The inverse image of $\overline{\mathfrak q}$ in $R[x]$
+is an element of $D(f) \cap V(g)$ mapping to $\mathfrak p$.
+\end{proof}
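+
+\noindent
+For example, take $R = \mathbf{Z}$, $g = x^2 - 2$ and $f = x$.
+The characteristic polynomial of multiplication by $x$ on
+$A = \mathbf{Z}[x]/(x^2 - 2)$ is $P(T) = T^2 - 2$, so $r_0 = -2$,
+$r_1 = 0$, and the image of $D(x) \cap V(x^2 - 2)$ in
+$\Spec(\mathbf{Z})$ is $D(-2) \cup D(0) = D(2)$. This agrees with a
+direct computation: the fiber over $(2)$ is
+$\Spec(\mathbf{F}_2[x]/(x^2))$, on which $x$ is nilpotent, while for
+any other prime $\mathfrak p$ the image of $x$ in
+$\kappa(\mathfrak p)[x]/(x^2 - 2)$ is a unit because $x^2 = 2$ is.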
+
+\begin{theorem}[Chevalley's Theorem]
+\label{theorem-chevalley}
+Suppose that $R \to S$ is of finite presentation.
+The image of a constructible subset of
+$\Spec(S)$ in $\Spec(R)$ is constructible.
+\end{theorem}
+
+\begin{proof}
+Write $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$.
+We may factor $R \to S$ as $R \to R[x_1] \to R[x_1, x_2]
+\to \ldots \to R[x_1, \ldots, x_{n-1}] \to S$. Hence
+we may assume that $S = R[x]/(f_1, \ldots, f_m)$.
+In this case we factor the map as $R \to R[x] \to S$,
+and by Lemma \ref{lemma-closed-fp} we reduce to
+the case $S = R[x]$. By Lemma \ref{lemma-qc-open} it suffices
+to show that if
+$T = (\bigcup_{i = 1, \ldots, n} D(f_i)) \cap V(g_1, \ldots, g_m)$
+for $f_i, g_j \in R[x]$, then the image in $\Spec(R)$ is
+constructible. Since finite unions of constructible sets
+are constructible, it suffices to deal with the case $n = 1$,
+i.e., when $T = D(f) \cap V(g_1, \ldots, g_m)$.
+
+\medskip\noindent
+Note that if $c \in R$, then we have
+$$
+\Spec(R) =
+V(c) \amalg D(c) =
+\Spec(R/(c)) \amalg \Spec(R_c),
+$$
+and correspondingly $\Spec(R[x]) =
+V(c) \amalg D(c) = \Spec(R/(c)[x]) \amalg
+\Spec(R_c[x])$. The intersection of $T = D(f) \cap V(g_1, \ldots, g_m)$
+with each part still has the same shape, with $f$, $g_i$ replaced
+by their images in $R/(c)[x]$, respectively $R_c[x]$.
+Note that the image of $T$
+in $\Spec(R)$ is the union of the image of
+$T \cap V(c)$ and $T \cap D(c)$. Using Lemmas \ref{lemma-open-fp}
+and \ref{lemma-closed-fp} it suffices to prove the images of both
+parts are constructible in $\Spec(R/(c))$, respectively
+$\Spec(R_c)$.
+
+\medskip\noindent
+Let us assume we have $T = D(f) \cap V(g_1, \ldots, g_m)$
+as above, with $\deg(g_1) \leq \deg(g_2) \leq \ldots \leq \deg(g_m)$.
+We are going to use induction on $m$, and on the
+degrees of the $g_i$. Let $d_1 = \deg(g_1)$, i.e., $g_1 = c x^{d_1} + \text{l.o.t.}$
+with $c \in R$ not zero. Cutting $R$ up into the pieces
+$R/(c)$ and $R_c$ we either lower the degree of $g_1$ (and this
+is covered by induction)
+or we reduce to the case where $c$ is invertible.
+If $c$ is invertible, and $m > 1$, then write
+$g_2 = c' x^{d_2} + \text{l.o.t.}$ In this case consider
+$g_2' = g_2 - (c'/c) x^{d_2 - d_1} g_1$. Since the ideals
+$(g_1, g_2, \ldots, g_m)$ and $(g_1, g_2', g_3, \ldots, g_m)$
+are equal we see that $T = D(f) \cap V(g_1, g_2', g_3, \ldots, g_m)$.
+But here the degree of $g_2'$ is strictly less than the degree
+of $g_2$ and hence this case is covered by induction.
+
+\medskip\noindent
+The base cases for the induction above are the cases
+(a) $T = D(f) \cap V(g)$ where the leading coefficient
+of $g$ is invertible, and (b) $T = D(f)$. These two cases
+are dealt with in Lemmas \ref{lemma-affineline-special}
+and \ref{lemma-affineline-open}.
+\end{proof}
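+
+\noindent
+The image in Chevalley's theorem is in general neither open nor
+closed. The standard example is the ring map
+$k[a, b] \to k[x, y]$, $a \mapsto x$, $b \mapsto xy$, over a field
+$k$. If $\mathfrak p \subset k[a, b]$ is a prime and
+$\alpha, \beta \in \kappa(\mathfrak p)$ are the images of $a, b$, then
+by Lemma \ref{lemma-in-image} the prime $\mathfrak p$ is in the image
+of $\Spec(k[x, y]) \to \Spec(k[a, b])$ if and only if
+$$
+k[x, y] \otimes_{k[a, b]} \kappa(\mathfrak p) =
+\kappa(\mathfrak p)[y]/(\alpha y - \beta)
+$$
+is nonzero, i.e., if and only if $\alpha \not = 0$ or $\beta = 0$.
+Hence the image is $D(a) \cup V(a, b)$, which is constructible but
+neither open nor closed.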
+
+
+
+
+
+
+
+
+
+
+
+\section{More on images}
+\label{section-more-images}
+
+\noindent
+In this section we collect a few additional lemmas concerning the image
+on $\Spec$ for ring maps. See also Section \ref{section-going-up}
+for example.
+
+\begin{lemma}
+\label{lemma-generic-finite-presentation}
+Let $R \subset S$ be an inclusion of domains.
+Assume that $R \to S$ is of finite type.
+There exists a nonzero $f \in R$, and a nonzero $g \in S$
+such that $R_f \to S_{fg}$ is of finite presentation.
+\end{lemma}
+
+\begin{proof}
+By induction on the number of generators of $S$ over $R$.
+During the proof we may replace $R$ by $R_f$ and $S$ by $S_f$
+for some nonzero $f \in R$.
+
+\medskip\noindent
+Suppose that $S$ is generated by a single element
+over $R$. Then $S = R[x]/\mathfrak q$ for some
+prime ideal $\mathfrak q \subset R[x]$. If $\mathfrak q = (0)$
+there is nothing to prove. If $\mathfrak q \not = (0)$,
+then let $h \in \mathfrak q$ be a nonzero element with minimal
+degree in $x$. Write $h = f x^d + a_{d - 1} x^{d - 1} + \ldots + a_0$
+with $a_i \in R$ and $f \not = 0$. After inverting $f$
+in $R$ and $S$ we may assume that $h$ is monic. We obtain
+a surjective $R$-algebra map $R[x]/(h) \to S$.
+We have $R[x]/(h) = R \oplus Rx \oplus \ldots \oplus Rx^{d - 1}$
+as an $R$-module and by minimality of $d$ we see that
+$R[x]/(h)$ maps injectively into $S$. Thus $R[x]/(h) \cong S$
+is finitely presented over $R$.
+
+\medskip\noindent
+Suppose that $S$ is generated by $n > 1$ elements over $R$.
+Say $x_1, \ldots, x_n \in S$ generate $S$. Denote $S' \subset S$
+the subring generated by $x_1, \ldots, x_{n-1}$. By induction
+hypothesis we see that there exist $f\in R$ and $g \in S'$
+nonzero such that $R_f \to S'_{fg}$ is of finite presentation.
+Next we apply the induction hypothesis to $S'_{fg} \to S_{fg}$
+to see that there exist $f' \in S'_{fg}$ and
+$g' \in S_{fg}$ such that $S'_{fgf'} \to S_{fgf'g'}$
+is of finite presentation. We leave it to the reader to conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-image-finite-type}
+Let $R \to S$ be a finite type ring map.
+Denote $X = \Spec(R)$ and $Y = \Spec(S)$.
+Write $f : Y \to X$ the induced
+map of spectra. Let $E \subset Y = \Spec(S)$ be a
+constructible set.
+If a point $\xi \in X$ is in $f(E)$, then
+$\overline{\{\xi\}} \cap f(E)$ contains an open
+dense subset of $\overline{\{\xi\}}$.
+\end{lemma}
+
+\begin{proof}
+Let $\xi \in X$ be a point of $f(E)$. Choose a point $\eta \in E$
+mapping to $\xi$. Let $\mathfrak p \subset R$ be the prime
+corresponding to $\xi$ and let $\mathfrak q \subset S$ be the
+prime corresponding to $\eta$. Consider the diagram
+$$
+\xymatrix{
+\eta \ar[r] \ar@{|->}[d] & E \cap Y' \ar[r] \ar[d] &
+Y' = \Spec(S/\mathfrak q) \ar[r] \ar[d] &
+Y \ar[d] \\
+\xi \ar[r] & f(E) \cap X' \ar[r] &
+X' = \Spec(R/\mathfrak p) \ar[r] &
+X
+}
+$$
+By Lemma \ref{lemma-affine-map-quasi-compact} the set $E \cap Y'$
+is constructible in $Y'$.
+It follows that we may replace $X$ by $X'$ and
+$Y$ by $Y'$. Hence we may assume that $R \subset S$ is an
+inclusion of domains, $\xi$ is the generic
+point of $X$, and $\eta$ is the generic point of $Y$.
+By Lemma \ref{lemma-generic-finite-presentation}
+combined with Chevalley's theorem
+(Theorem \ref{theorem-chevalley})
+we see that there exist dense opens $U \subset X$,
+$V \subset Y$ such that $f(V) \subset U$ and
+such that $f : V \to U$ maps constructible sets
+to constructible sets. Note that $E \cap V$ is
+constructible in $V$, see Topology,
+Lemma \ref{topology-lemma-open-immersion-constructible-inverse-image}.
+Hence $f(E \cap V)$ is constructible in $U$ and contains $\xi$.
+By Topology, Lemma \ref{topology-lemma-generic-point-in-constructible}
+we see that $f(E \cap V)$ contains a dense open $U' \subset U$.
+\end{proof}
+
+\noindent
+At the end of this section we present a few more results on
+images of maps on Spectra that have nothing to do with constructible
+sets.
+
+\begin{lemma}
+\label{lemma-surjective-spec-radical-ideal}
+Let $\varphi : R \to S$ be a ring map.
+The following are equivalent:
+\begin{enumerate}
+\item The map $\Spec(S) \to \Spec(R)$ is surjective.
+\item For any ideal $I \subset R$
+the inverse image of $\sqrt{IS}$ in $R$ is equal to $\sqrt{I}$.
+\item For any radical ideal $I \subset R$ the inverse image
+of $IS$ in $R$ is equal to $I$.
+\item For every prime $\mathfrak p$ of $R$ the inverse
+image of $\mathfrak p S$ in $R$ is $\mathfrak p$.
+\end{enumerate}
+In this case the same is true after any base change: Given a ring map
+$R \to R'$ the ring map $R' \to R' \otimes_R S$ has the equivalent
+properties (1), (2), (3), and (4) as well.
+\end{lemma}
+
+\begin{proof}
+If $J \subset S$ is an ideal, then
+$\sqrt{\varphi^{-1}(J)} = \varphi^{-1}(\sqrt{J})$. This shows that (2)
+and (3) are equivalent.
+The implication (3) $\Rightarrow$ (4) is immediate.
+If $I \subset R$ is a radical ideal, then
+Lemma \ref{lemma-Zariski-topology}
+guarantees that $I = \bigcap_{I \subset \mathfrak p} \mathfrak p$.
+Hence (4) $\Rightarrow$ (2). By
+Lemma \ref{lemma-in-image}
+we have $\mathfrak p = \varphi^{-1}(\mathfrak p S)$ if and only if
+$\mathfrak p$ is in the image. Hence (1) $\Leftrightarrow$ (4).
+Thus (1), (2), (3), and (4) are equivalent.
+
+\medskip\noindent
+Assume (1) holds. Let $R \to R'$ be a ring map. Let
+$\mathfrak p' \subset R'$ be a prime ideal lying over the prime
+$\mathfrak p$ of $R$. To see that $\mathfrak p'$ is in the image
+of $\Spec(R' \otimes_R S) \to \Spec(R')$ we have to show
+that $(R' \otimes_R S) \otimes_{R'} \kappa(\mathfrak p')$ is not zero, see
+Lemma \ref{lemma-in-image}.
+But we have
+$$
+(R' \otimes_R S) \otimes_{R'} \kappa(\mathfrak p') =
+S \otimes_R \kappa(\mathfrak p)
+\otimes_{\kappa(\mathfrak p)} \kappa(\mathfrak p')
+$$
+which is not zero as $S \otimes_R \kappa(\mathfrak p)$ is not zero
+by assumption and $\kappa(\mathfrak p) \to \kappa(\mathfrak p')$ is
+an extension of fields.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-domain-image-dense-set-points-generic-point}
+Let $R$ be a domain. Let $\varphi : R \to S$ be a ring map.
+The following are equivalent:
+\begin{enumerate}
+\item The ring map $R \to S$ is injective.
+\item The image $\Spec(S) \to \Spec(R)$
+contains a dense set of points.
+\item There exists a prime ideal $\mathfrak q \subset S$
+whose inverse image in $R$ is $(0)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $K$ be the field of fractions of the domain $R$.
+Assume that $R \to S$ is injective. Since localization
+is exact we see that $K \to S \otimes_R K$ is injective.
+Hence there is a prime mapping to $(0)$ by
+Lemma \ref{lemma-in-image}.
+
+\medskip\noindent
+Note that $(0)$ is dense in $\Spec(R)$, so that the
+last condition implies the second.
+
+\medskip\noindent
+Suppose the second condition holds. Let $f \in R$,
+$f \not = 0$. As $R$ is a domain we see that $V(f)$
+is a proper closed subset of $\Spec(R)$. By assumption
+there exists a prime $\mathfrak q$
+of $S$ such that $\varphi(f) \not \in \mathfrak q$.
+Hence $\varphi(f) \not = 0$.
+Hence $R \to S$ is injective.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-minimal-primes-in-image}
+Let $R \subset S$ be an injective ring map.
+Then $\Spec(S) \to \Spec(R)$
+hits all the minimal primes.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p \subset R$ be a minimal prime.
+In this case $R_{\mathfrak p}$ has a unique prime ideal.
+Hence it suffices to show that $S_{\mathfrak p}$ is not zero.
+And this follows from the fact that localization is exact,
+see Proposition \ref{proposition-localization-exact}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-image-dense-generic-points}
+Let $R \to S$ be a ring map. The following are equivalent:
+\begin{enumerate}
+\item The kernel of $R \to S$ consists of nilpotent elements.
+\item The minimal primes of $R$ are in the image of
+$\Spec(S) \to \Spec(R)$.
+\item The image of $\Spec(S) \to \Spec(R)$ is dense
+in $\Spec(R)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $I = \Ker(R \to S)$. Note that
+$\sqrt{(0)} = \bigcap_{\mathfrak q \subset S} \mathfrak q$, see
+Lemma \ref{lemma-Zariski-topology}.
+Hence $\sqrt{I} = \bigcap_{\mathfrak q \subset S} R \cap \mathfrak q$.
+Thus $V(I) = V(\sqrt{I})$ is the closure of the image of
+$\Spec(S) \to \Spec(R)$.
+This shows that (1) is equivalent to (3). It is clear that
+(2) implies (3). Finally, assume (1). We may replace
+$R$ by $R/I$ and $S$ by $S/IS$ without affecting the topology
+of the spectra and the map. Hence the implication (1) $\Rightarrow$ (2)
+follows from Lemma \ref{lemma-injective-minimal-primes-in-image}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-minimal-prime-image-minimal-prime}
+Let $R \to S$ be a ring map. If a minimal prime $\mathfrak p \subset R$
+is in the image of $\Spec(S) \to \Spec(R)$, then it is the image
+of a minimal prime.
+\end{lemma}
+
+\begin{proof}
+Say $\mathfrak p = \mathfrak q \cap R$. Then choose a minimal
+prime $\mathfrak r \subset S$ with $\mathfrak r \subset \mathfrak q$, see
+Lemma \ref{lemma-Zariski-topology}.
+By minimality of $\mathfrak p$ we see that
+$\mathfrak p = \mathfrak r \cap R$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Noetherian rings}
+\label{section-Noetherian}
+
+\noindent
A ring $R$ is {\it Noetherian} if every ideal of $R$ is finitely generated.
+This is clearly equivalent to the ascending chain condition for ideals of $R$.
+By
+Lemma \ref{lemma-cohen}
+it suffices to check that every prime ideal of $R$ is finitely generated.
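\medskip\noindent
For example, a principal ideal domain, such as $\mathbf{Z}$ or $k[x]$
for a field $k$, is Noetherian. On the other hand, the polynomial ring
$k[x_1, x_2, x_3, \ldots]$ in infinitely many variables is not
Noetherian, as the chain of ideals
$$
(x_1) \subset (x_1, x_2) \subset (x_1, x_2, x_3) \subset \ldots
$$
does not stabilize.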
+
+\begin{lemma}
+\label{lemma-Noetherian-permanence}
+\begin{slogan}
The Noetherian property is stable under passage to finite type
extensions and localizations.
+\end{slogan}
+Any finitely generated ring over a Noetherian ring
+is Noetherian. Any localization of a Noetherian ring
+is Noetherian.
+\end{lemma}
+
+\begin{proof}
+The statement on localizations follows from the fact
+that any ideal $J \subset S^{-1}R$ is of the form
+$I \cdot S^{-1}R$. Any quotient $R/I$ of a Noetherian
+ring $R$ is Noetherian because any ideal $\overline{J} \subset R/I$
+is of the form $J/I$ for some ideal $I \subset J \subset R$.
+Thus it suffices to show that if $R$ is Noetherian so
+is $R[X]$. Suppose $J_1 \subset J_2 \subset \ldots$ is an
+ascending chain of ideals in $R[X]$. Consider the ideals $I_{i, d}$
+defined as the ideal of elements of $R$ which occur as leading
+coefficients of degree $d$ polynomials in $J_i$.
+Clearly $I_{i, d} \subset I_{i', d'}$ whenever
+$i \leq i'$ and $d \leq d'$. By the ascending chain condition
+in $R$ there are at most finitely many distinct ideals among all of
+the $I_{i, d}$.
+(Hint: Any infinite set of elements of
+$\mathbf{N} \times \mathbf{N}$ contains an increasing
+infinite sequence.)
+Take $i_0$ so large that $I_{i, d} = I_{i_0, d}$
+for all $i \geq i_0$ and all $d$. Suppose $f \in J_i$ for some $i \geq i_0$.
+By induction on the degree $d = \deg(f)$ we show that $f \in J_{i_0}$.
+Namely, there exists a $g\in J_{i_0}$ whose degree is $d$ and which
+has the same leading coefficient as $f$. By induction
+$f - g \in J_{i_0}$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-power-series}
+If $R$ is a Noetherian ring, then so is the formal power
+series ring $R[[x_1, \ldots, x_n]]$.
+\end{lemma}
+
+\begin{proof}
+Since $R[[x_1, \ldots, x_{n + 1}]] \cong R[[x_1, \ldots, x_n]][[x_{n + 1}]]$
+it suffices to prove the statement that $R[[x]]$ is Noetherian if
$R$ is Noetherian. Let $I \subset R[[x]]$ be an ideal.
+We have to show that $I$ is a finitely generated ideal.
+For each integer
+$d$ denote $I_d = \{a \in R \mid ax^d + \text{h.o.t.} \in I\}$.
+Then we see that $I_0 \subset I_1 \subset \ldots$ stabilizes as $R$
+is Noetherian. Choose $d_0$ such that $I_{d_0} = I_{d_0 + 1} = \ldots$.
+For each $d \leq d_0$ choose elements $f_{d, j} \in I \cap (x^d)$,
+$j = 1, \ldots, n_d$ such that if we write
$f_{d, j} = a_{d, j}x^d + \text{h.o.t.}$ then $I_d = (a_{d, j})$.
+Denote $I' = (\{f_{d, j}\}_{d = 0, \ldots, d_0, j = 1, \ldots, n_d})$.
+Then it is clear that $I' \subset I$. Pick $f \in I$.
+First we may choose $c_{d, i} \in R$ such that
+$$
+f - \sum c_{d, i} f_{d, i} \in (x^{d_0 + 1}) \cap I.
+$$
+Next, we can choose $c_{i, 1} \in R$, $i = 1, \ldots, n_{d_0}$ such that
+$$
+f - \sum c_{d, i} f_{d, i} - \sum c_{i, 1}xf_{d_0, i} \in (x^{d_0 + 2}) \cap I.
+$$
+Next, we can choose $c_{i, 2} \in R$, $i = 1, \ldots, n_{d_0}$ such that
+$$
+f - \sum c_{d, i} f_{d, i} - \sum c_{i, 1}xf_{d_0, i}
+- \sum c_{i, 2}x^2f_{d_0, i}
+\in (x^{d_0 + 3}) \cap I.
+$$
+And so on. In the end we see that
+$$
+f = \sum c_{d, i} f_{d, i} +
+\sum\nolimits_i (\sum\nolimits_e c_{i, e} x^e)f_{d_0, i}
+$$
+is contained in $I'$ as desired.
+\end{proof}
+
+\noindent
+The following lemma, although easy, is useful because
+finite type $\mathbf{Z}$-algebras come up quite often in
+a technique called ``absolute Noetherian reduction''.
+
+\begin{lemma}
+\label{lemma-obvious-Noetherian}
+Any finite type algebra over a field is Noetherian.
+Any finite type algebra over $\mathbf{Z}$ is Noetherian.
+\end{lemma}
+
+\begin{proof}
+This is immediate from Lemma \ref{lemma-Noetherian-permanence}
and the fact that fields are Noetherian rings and that
$\mathbf{Z}$ is a Noetherian ring (because it is a
principal ideal domain).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-finite-type-is-finite-presentation}
+Let $R$ be a Noetherian ring.
+\begin{enumerate}
+\item Any finite $R$-module is of finite presentation.
+\item Any submodule of a finite $R$-module is finite.
+\item Any finite type $R$-algebra is of finite presentation over $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $M$ be a finite $R$-module. By
+Lemma \ref{lemma-trivial-filter-finite-module}
+we can find a finite filtration of $M$ whose successive quotients are
+of the form $R/I$. Since any ideal is finitely generated, each of
+the quotients $R/I$ is finitely presented. Hence $M$ is finitely
+presented by
+Lemma \ref{lemma-extension}.
+This proves (1).
+
+\medskip\noindent
+Let $N \subset M$ be a submodule. As $M$ is finite, the quotient
+$M/N$ is finite. Thus $M/N$ is of finite presentation by part (1).
+Thus we see that $N$ is finite by Lemma \ref{lemma-extension} part (5).
+This proves part (2).
+
+\medskip\noindent
+To see (3) note that any ideal of
+$R[x_1, \ldots, x_n]$ is finitely generated by
+Lemma \ref{lemma-Noetherian-permanence}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-topology}
+If $R$ is a Noetherian ring then $\Spec(R)$
+is a Noetherian topological space, see Topology,
+Definition \ref{topology-definition-noetherian}.
+\end{lemma}
+
+\begin{proof}
+This is because any closed subset of $\Spec(R)$
+is uniquely of the form $V(I)$ with $I$ a radical ideal,
+see Lemma \ref{lemma-Zariski-topology}.
+And this correspondence is inclusion reversing.
+Thus the result follows from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-irreducible-components}
+\begin{slogan}
+A Noetherian affine scheme has finitely many generic points.
+\end{slogan}
+If $R$ is a Noetherian ring then $\Spec(R)$
+has finitely many irreducible components. In other words
+$R$ has finitely many minimal primes.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-Noetherian-topology} and
+Topology, Lemma \ref{topology-lemma-Noetherian}
+we see there are finitely many irreducible components.
+By Lemma \ref{lemma-irreducible} these correspond to
+minimal primes of $R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-base-change-finite-type}
+Let $R \to S$ be a ring map. Let $R \to R'$ be of finite type.
+If $S$ is Noetherian, then the base change $S' = R' \otimes_R S$
+is Noetherian.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-base-change-finiteness} finite type is stable under
+base change. Thus $S \to S'$ is of finite type. Since $S$ is Noetherian we
+can apply Lemma \ref{lemma-Noetherian-permanence}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-field-extension}
+Let $k$ be a field and let $R$ be a Noetherian $k$-algebra.
+If $K/k$ is a finitely generated field extension then
+$K \otimes_k R$ is Noetherian.
+\end{lemma}
+
+\begin{proof}
+Since $K/k$ is a finitely generated field extension, there exists
+a finitely generated $k$-algebra $B \subset K$ such that $K$ is
+the fraction field of $B$. In other words, $K = S^{-1}B$
+with $S = B \setminus \{0\}$. Then $K \otimes_k R = S^{-1}(B \otimes_k R)$.
+Then $B \otimes_k R$ is Noetherian by
+Lemma \ref{lemma-Noetherian-base-change-finite-type}.
+Finally, $K \otimes_k R = S^{-1}(B \otimes_k R)$ is Noetherian by
+Lemma \ref{lemma-Noetherian-permanence}.
+\end{proof}
+
+\noindent
+Here are some fun lemmas that are sometimes useful.
+
+\begin{lemma}
+\label{lemma-subring-of-local-ring}
+Let $R$ be a ring and $\mathfrak p \subset R$ be a prime.
+There exists an $f \in R$, $f \not \in \mathfrak p$ such
+that $R_f \to R_\mathfrak p$ is injective in each of the
+following cases
+\begin{enumerate}
+\item $R$ is a domain,
+\item $R$ is Noetherian, or
+\item $R$ is reduced and has finitely many minimal primes.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $R$ is a domain, then $R \subset R_\mathfrak p$, hence $f = 1$ works.
+If $R$ is Noetherian, then the kernel $I$ of $R \to R_\mathfrak p$
+is a finitely generated ideal and we can find
+$f \in R$, $f \not \in \mathfrak p$ such that $IR_f = 0$.
+For this $f$ the map $R_f \to R_\mathfrak p$ is injective
+and $f$ works. If $R$ is reduced with finitely
+many minimal primes $\mathfrak p_1, \ldots, \mathfrak p_n$,
+then we can choose
+$f \in \bigcap_{\mathfrak p_i \not \subset \mathfrak p} \mathfrak p_i$,
+$f \not \in \mathfrak p$. Indeed, if $\mathfrak{p}_i\not\subset
+\mathfrak{p}$ then there exist $f_i \in \mathfrak{p}_i$,
+$f_i \not\in \mathfrak{p}$ and $f = \prod f_i$ works.
+For this $f$ we have $R_f \subset R_\mathfrak p$ because the minimal
+primes of $R_f$ correspond to minimal primes of $R_\mathfrak p$
+and we can apply Lemma \ref{lemma-reduced-ring-sub-product-fields}
+(some details omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-endo-noetherian-ring-is-iso}
+Any surjective endomorphism of a Noetherian ring is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+If $f : R \to R$ were such an endomorphism but not injective, then
+$$
+\Ker(f) \subset \Ker(f \circ f) \subset
+\Ker(f \circ f \circ f) \subset \ldots
+$$
would be a strictly increasing chain of ideals, contradicting
the ascending chain condition. Namely, choose $x \not = 0$
with $f(x) = 0$. By surjectivity we can write $x = f^n(y_n)$
for every $n \geq 1$, where $f^n$ denotes the $n$-fold composition;
then $y_n \in \Ker(f^{n + 1})$ but $y_n \not \in \Ker(f^n)$.
+\end{proof}
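\medskip\noindent
The Noetherian hypothesis in
Lemma \ref{lemma-surjective-endo-noetherian-ring-is-iso}
cannot be dropped. For example, let $R = k[x_1, x_2, x_3, \ldots]$
be a polynomial ring in infinitely many variables over a field $k$.
Then the $k$-algebra map $f : R \to R$ determined by $f(x_1) = 0$
and $f(x_i) = x_{i - 1}$ for $i \geq 2$ is surjective but
not injective.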
+
+
+
+
+
+
+
+\section{Locally nilpotent ideals}
+\label{section-locally-nilpotent}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-locally-nilpotent-ideal}
+Let $R$ be a ring. Let $I \subset R$ be an ideal.
+We say $I$ is {\it locally nilpotent} if for every
+$x \in I$ there exists an $n \in \mathbf{N}$ such
+that $x^n = 0$. We say $I$ is {\it nilpotent} if
+there exists an $n \in \mathbf{N}$ such that $I^n = 0$.
+\end{definition}
+
+\begin{example}
+\label{example-locally-nilpotent-not-nilpotent}
+Let $R = k[x_n | n \in \mathbf{N}]$ be the polynomial ring in infinitely
+many variables over a field $k$. Let $I$ be the ideal generated by
+the elements $x_n^n$ for $n \in \mathbf{N}$ and $S = R/I$. Then the ideal
+$J \subset S$ generated by the images of $x_n$, $n \in \mathbf{N}$
+is locally nilpotent, but not nilpotent. Indeed, since $S$-linear
+combinations of nilpotents are nilpotent, to prove that $J$ is locally
+nilpotent it is enough to observe that all its generators are nilpotent
+(which they obviously are). On the other hand, for each $n \in \mathbf{N}$
+it holds that $x_{n + 1}^n \not \in I$, so that $J^n \not = 0$.
+It follows that $J$ is not nilpotent.
+\end{example}
+
+\begin{lemma}
+\label{lemma-locally-nilpotent}
+Let $R \to R'$ be a ring map and let $I \subset R$ be a locally nilpotent
+ideal. Then $IR'$ is a locally nilpotent ideal of $R'$.
+\end{lemma}
+
+\begin{proof}
This follows from the fact that $IR'$ is generated by nilpotent
elements and that $R'$-linear combinations of nilpotents are
nilpotent. Indeed, if $x^n = 0$, then $(rx)^n = r^n x^n = 0$ for
$r \in R'$, and if moreover $y^m = 0$, then
$(x + y)^{n + m - 1} = 0$: in the binomial expansion every term
contains $x^i y^j$ with $i + j = n + m - 1$, hence with
$i \geq n$ or $j \geq m$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-nilpotent-unit}
+Let $R$ be a ring and let $I \subset R$ be a locally nilpotent
+ideal.
+An element $x$ of $R$ is a unit if and only if the image of $x$
+in $R/I$ is a unit.
+\end{lemma}
+
+\begin{proof}
+If $x$ is a unit in $R$, then its image is clearly a unit in $R/I$.
+It remains to prove the converse.
+Assume the image of $y \in R$ in $R/I$ is the inverse of the image of $x$.
+Then $xy = 1 - z$ for some $z \in I$.
+This means that $1\equiv z$ modulo $xR$.
+Since $z$ lies in the locally nilpotent ideal
+$I$, we have $z^N = 0$ for some sufficiently large $N$.
+It follows that $1 = 1^N \equiv z^N = 0$ modulo $xR$.
+In other words, $x$ divides $1$ and is hence a unit.
+\end{proof}
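\medskip\noindent
In fact, the proof of Lemma \ref{lemma-locally-nilpotent-unit}
produces an explicit inverse: with notation as above we have
$xy = 1 - z$ and $z^N = 0$, hence
$$
x \cdot y(1 + z + z^2 + \ldots + z^{N - 1}) =
(1 - z)(1 + z + \ldots + z^{N - 1}) = 1 - z^N = 1
$$
so that $x^{-1} = y(1 + z + \ldots + z^{N - 1})$.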
+
+\begin{lemma}
+\label{lemma-Noetherian-power}
+\begin{slogan}
+An ideal in a Noetherian ring is nilpotent if each element
+of the ideal is nilpotent.
+\end{slogan}
+Let $R$ be a Noetherian ring. Let $I, J$ be ideals of $R$.
+Suppose $J \subset \sqrt{I}$. Then $J^n \subset I$ for some $n$.
+In particular, in a Noetherian ring the notions of
+``locally nilpotent ideal''
+and ``nilpotent ideal'' coincide.
+\end{lemma}
+
+\begin{proof}
Say $J = (f_1, \ldots, f_s)$.
By assumption $f_i^{d_i} \in I$ for some $d_i \geq 1$.
Take $n = d_1 + d_2 + \ldots + d_s + 1$.
Then $J^n$ is generated by the monomials
$f_1^{e_1} \ldots f_s^{e_s}$ with $e_1 + \ldots + e_s = n$,
and by the choice of $n$ each such monomial has $e_i \geq d_i$
for some $i$, hence lies in $I$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-idempotents}
+Let $R$ be a ring. Let $I \subset R$ be a locally nilpotent ideal.
+Then $R \to R/I$ induces a bijection on idempotents.
+\end{lemma}
+
+\begin{proof}[First proof of Lemma \ref{lemma-lift-idempotents}]
+As $I$ is locally nilpotent it is contained in every prime ideal.
+Hence $\Spec(R/I) = V(I) = \Spec(R)$. Hence the
+lemma follows from Lemma \ref{lemma-disjoint-decomposition}.
+\end{proof}
+
+\begin{proof}[Second proof of Lemma \ref{lemma-lift-idempotents}]
+Suppose $\overline{e} \in R/I$ is an idempotent.
+We have to lift $\overline{e}$ to an idempotent of $R$.
+
+\medskip\noindent
+First, choose any lift $f \in R$ of $\overline{e}$, and set
+$x = f^2 - f$. Then, $x \in I$, so $x$ is nilpotent (since $I$
+is locally nilpotent). Let now $J$ be the ideal of $R$ generated
+by $x$. Then, $J$ is nilpotent (not just locally nilpotent),
+since it is generated by the nilpotent $x$.
+
+\medskip\noindent
+Now, assume that we have found a lift $e \in R$ of $\overline{e}$
+such that $e^2 - e \in J^k$ for some $k \geq 1$.
+Let $e' = e - (2e - 1)(e^2 - e) = 3e^2 - 2e^3$, which is another
+lift of $\overline{e}$ (since the idempotency of $\overline{e}$
+yields $e^2 - e \in I$). Then
+$$
+(e')^2 - e' = (4e^2 - 4e - 3)(e^2 - e)^2 \in J^{2k}
+$$
+by a simple computation.
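\medskip\noindent
Here is the computation behind the last display: set $u = e^2 - e$,
so that $e' = e - (2e - 1)u$ and $(2e - 1)^2 = 4u + 1$. Then
$$
(e')^2 - e' = \left(1 - (2e - 1)^2\right)u + (2e - 1)^2 u^2
= -4u^2 + (4u + 1)u^2 = (4u - 3)u^2
$$
which is $(4e^2 - 4e - 3)(e^2 - e)^2$ as claimed.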
+
+\medskip\noindent
+We thus have started with a lift $e$ of $\overline{e}$ such
+that $e^2 - e \in J^k$, and obtained a lift $e'$ of
+$\overline{e}$ such that $(e')^2 - e' \in J^{2k}$.
+This way we can successively improve the approximation
+(starting with $e = f$, which fits the bill for $k = 1$).
+Eventually, we reach a stage where $J^k = 0$, and at that
+stage we have a lift $e$ of $\overline{e}$ such that
+$e^2 - e \in J^k = 0$, that is, this $e$ is idempotent.
+
+\medskip\noindent
+We thus have seen that if $\overline{e} \in R/I$ is any
+idempotent, then there exists a lift of $\overline{e}$
+which is an idempotent of $R$.
+It remains to prove that this lift is unique. Indeed, let
+$e_1$ and $e_2$ be two such lifts. We
+need to show that $e_1 = e_2$.
+
+\medskip\noindent
+By definition of $e_1$ and $e_2$, we have $e_1 \equiv e_2
+\mod I$, and both $e_1$ and $e_2$ are idempotent. From
+$e_1 \equiv e_2 \mod I$, we see that $e_1 - e_2 \in I$,
+so that $e_1 - e_2$ is nilpotent (since $I$ is locally nilpotent).
+A straightforward
+computation (using the idempotency of $e_1$ and $e_2$)
+reveals that $(e_1 - e_2)^3 = e_1 - e_2$. Using this and
+induction, we obtain $(e_1 - e_2)^k = e_1 - e_2$ for any
+positive odd integer $k$. Since all high enough $k$ satisfy
+$(e_1 - e_2)^k = 0$ (since $e_1 - e_2$ is nilpotent),
+this shows $e_1 - e_2 = 0$, so that $e_1 = e_2$, which
+completes our proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-idempotents-noncommutative}
+Let $A$ be a possibly noncommutative algebra.
+Let $e \in A$ be an element such that $x = e^2 - e$ is nilpotent.
+Then there exists an idempotent of the form
+$e' = e + x(\sum a_{i, j}e^ix^j) \in A$
+with $a_{i, j} \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Consider the ring $R_n = \mathbf{Z}[e]/((e^2 - e)^n)$. It is clear that
+if we can prove the result for each $R_n$ then the lemma follows.
+In $R_n$ consider the ideal $I = (e^2 - e)$ and apply
+Lemma \ref{lemma-lift-idempotents}.
+\end{proof}
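\medskip\noindent
For instance, if $x^2 = 0$, then
$e' = e + x(1 - 2e) = 3e^2 - 2e^3$ is an idempotent: by the
computation in the second proof of
Lemma \ref{lemma-lift-idempotents} we have
$(e')^2 - e' = (4e^2 - 4e - 3)x^2 = 0$.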
+
+\begin{lemma}
+\label{lemma-lift-nth-roots}
+Let $R$ be a ring. Let $I \subset R$ be a locally nilpotent ideal.
+Let $n \geq 1$ be an integer which is invertible in $R/I$. Then
+\begin{enumerate}
+\item the $n$th power map $1 + I \to 1 + I$, $1 + x \mapsto (1 + x)^n$
+is a bijection,
\item a unit of $R$ is an $n$th power if and only if its image in $R/I$
+is an $n$th power.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $a \in R$ be a unit whose image in $R/I$ is the same as the image
+of $b^n$ with $b \in R$. Then $b$ is a unit
+(Lemma \ref{lemma-locally-nilpotent-unit}) and
$ab^{-n} = 1 + x$ for some $x \in I$. Hence $ab^{-n} = c^n$
for some $c \in 1 + I$ by part (1), i.e., $a = (bc)^n$ is an
$n$th power. Thus (2) follows from (1).
+
+\medskip\noindent
+Proof of (1). This is true because there is an inverse to the
+map $1 + x \mapsto (1 + x)^n$. Namely, we can consider the map
+which sends $1 + x$ to
+\begin{align*}
+(1 + x)^{1/n}
+& =
+1 + {1/n \choose 1}x +
+{1/n \choose 2}x^2 +
+{1/n \choose 3}x^3 + \ldots \\
+& =
+1 + \frac{1}{n} x + \frac{1 - n}{2n^2}x^2 +
+\frac{(1 - n)(1 - 2n)}{6n^3}x^3 + \ldots
+\end{align*}
+as in elementary calculus. This makes sense because the series is finite
+as $x^k = 0$ for all $k \gg 0$ and each coefficient
+${1/n \choose k} \in \mathbf{Z}[1/n]$ (details omitted; observe that
+$n$ is invertible in $R$ by Lemma \ref{lemma-locally-nilpotent-unit}).
+\end{proof}
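\medskip\noindent
For example, take $R = \mathbf{Z}/9\mathbf{Z}$, $I = (3)$, and
$n = 2$, which is invertible in $R/I = \mathbf{Z}/3\mathbf{Z}$.
The unit $7 = 1 + 6$ reduces to a square in $R/I$, and the series
above (with $x = 6$, $x^2 = 0$, and $\frac{1}{2} = 5$ in $R$) gives
$(1 + 6)^{1/2} = 1 + 5 \cdot 6 = 4$ in $R$.
Indeed $4^2 = 16 = 7$ in $R$.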
+
+
+
+
+
+
+\section{Curiosity}
+\label{section-curiosity}
+
+\noindent
+Lemma \ref{lemma-disjoint-implies-product}
+explains what happens if $V(I)$ is open for some ideal $I \subset R$.
+But what if $\Spec(S^{-1}R)$ is closed in $\Spec(R)$?
+The next two lemmas give a partial answer. For more information see
+Section \ref{section-pure-ideals}.
+
+\begin{lemma}
+\label{lemma-invert-closed-quotient}
+Let $R$ be a ring. Let $S \subset R$ be a multiplicative subset.
+Assume the image of the map $\Spec(S^{-1}R) \to \Spec(R)$
+is closed. Then $S^{-1}R \cong R/I$ for some ideal $I \subset R$.
+\end{lemma}
+
+\begin{proof}
+Let $I = \Ker(R \to S^{-1}R)$ so that $V(I)$ contains the image.
+Say the image is the closed subset $V(I') \subset \Spec(R)$ for
+some ideal $I' \subset R$. So $V(I') \subset V(I)$.
+For $f \in I'$ we see that $f/1 \in S^{-1}R$
+is contained in every prime ideal. Hence $f^n$ maps to zero in $S^{-1}R$
+for some $n \geq 1$ (Lemma \ref{lemma-Zariski-topology}).
Hence $V(I') = V(I)$ is exactly the image of
$\Spec(S^{-1}R) \to \Spec(R)$. As every prime in this image
is disjoint from $S$, every $g \in S$ maps to an element of
$R/I$ lying in no prime ideal, in other words every
$g \in S$ is invertible mod $I$.
+Hence we get ring maps $R/I \to S^{-1}R$ and $S^{-1}R \to R/I$.
+The first map is injective by choice of $I$.
+The second is the map $S^{-1}R \to S^{-1}(R/I) = R/I$ which
+has kernel $S^{-1}I$ because localization is exact.
+Since $S^{-1}I = 0$ we see also the second map is injective.
+Hence $S^{-1}R \cong R/I$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-invert-closed-split}
+Let $R$ be a ring. Let $S \subset R$ be a multiplicative subset.
+Assume the image of the map $\Spec(S^{-1}R) \to \Spec(R)$
+is closed. If $R$ is Noetherian, or $\Spec(R)$ is a
+Noetherian topological space, or $S$ is finitely generated as a monoid,
+then $R \cong S^{-1}R \times R'$ for some ring $R'$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-invert-closed-quotient} we have $S^{-1}R \cong R/I$
+for some ideal $I \subset R$. By Lemma \ref{lemma-disjoint-implies-product}
+it suffices to show that $V(I)$ is open.
+If $R$ is Noetherian then $\Spec(R)$ is a Noetherian
+topological space, see Lemma \ref{lemma-Noetherian-topology}.
+If $\Spec(R)$ is a Noetherian topological space,
+then the complement $\Spec(R) \setminus V(I)$ is quasi-compact, see
+Topology, Lemma \ref{topology-lemma-Noetherian-quasi-compact}.
+Hence there exist finitely many $f_1, \ldots, f_n \in I$ such
+that $V(I) = V(f_1, \ldots, f_n)$.
+Since each $f_i$ maps to zero in $S^{-1}R$
+there exists a $g \in S$ such that $gf_i = 0$ for
+$i = 1, \ldots, n$. Hence $D(g) = V(I)$ as desired.
+In case $S$ is finitely generated as a monoid, say $S$ is generated
+by $g_1, \ldots, g_m$, then $S^{-1}R \cong R_{g_1 \ldots g_m}$
+and we conclude that $V(I) = D(g_1 \ldots g_m)$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Hilbert Nullstellensatz}
+\label{section-nullstellensatz}
+
+\begin{theorem}[Hilbert Nullstellensatz]
+\label{theorem-nullstellensatz}
+Let $k$ be a field.
+\begin{enumerate}
+\item
+\label{item-finite-kappa}
+For any maximal ideal $\mathfrak m \subset k[x_1, \ldots, x_n]$
+the field extension $\kappa(\mathfrak m)/k$ is finite.
+\item
+\label{item-polynomial-ring-Jacobson}
+Any radical ideal $I \subset k[x_1, \ldots, x_n]$
+is the intersection of maximal ideals containing it.
+\end{enumerate}
+The same is true in any finite type $k$-algebra.
+\end{theorem}
+
+\begin{proof}
+It is enough to prove part (\ref{item-finite-kappa}) of
+the theorem for the case of a polynomial
+algebra $k[x_1, \ldots, x_n]$, because any finitely generated
+$k$-algebra is a quotient of such a polynomial algebra.
+We prove this by induction on $n$. The case $n = 0$ is clear.
+Suppose that $\mathfrak m$ is a maximal ideal in $k[x_1, \ldots, x_n]$.
+Let $\mathfrak p \subset k[x_n]$ be the intersection
+of $\mathfrak m$ with $k[x_n]$.
+
+\medskip\noindent
+If $\mathfrak p \not = (0)$,
+then $\mathfrak p$ is maximal and generated by an irreducible
+monic polynomial $P$ (because of the Euclidean algorithm
+in $k[x_n]$). Then
+$k' = k[x_n]/\mathfrak p$ is a finite field extension of $k$
+and contained in $\kappa(\mathfrak m)$. In this case
+we get a surjection
+$$
+k'[x_1, \ldots, x_{n-1}]
+\to
+k'[x_1, \ldots, x_n] =
+k' \otimes_k k[x_1, \ldots, x_n]
+\longrightarrow
+\kappa(\mathfrak m)
+$$
and hence we see that $\kappa(\mathfrak m)$ is a finite
extension of $k'$ by the induction hypothesis. Thus $\kappa(\mathfrak m)$
+is finite over $k$ as well.
+
+\medskip\noindent
+If $\mathfrak p = (0)$ we consider the ring
+extension $k[x_n] \subset k[x_1, \ldots, x_n]/\mathfrak m$.
+This is a finitely generated ring extension, hence
+of finite presentation by
+Lemmas \ref{lemma-obvious-Noetherian} and
+\ref{lemma-Noetherian-finite-type-is-finite-presentation}.
+Thus the image of $\Spec(k[x_1, \ldots, x_n]/\mathfrak m)$
+in $\Spec(k[x_n])$ is constructible by
+Theorem \ref{theorem-chevalley}. Since the image
+contains $(0)$ we conclude that it contains a standard
+open $D(f)$ for some $f\in k[x_n]$ nonzero. Since clearly
+$D(f)$ is infinite we get a contradiction with the
+assumption that $k[x_1, \ldots, x_n]/\mathfrak m$ is
+a field (and hence has a spectrum consisting of one point).
+
+\medskip\noindent
+Proof of (\ref{item-polynomial-ring-Jacobson}). Let
+$I \subset R$ be a radical ideal, with $R$ of finite type over $k$.
+Let $f \in R$, $f \not \in I$. We have to find a maximal ideal
+$\mathfrak m \subset R$ with $I \subset \mathfrak m$ and
+$f \not \in \mathfrak m$. The ring $(R/I)_f$ is nonzero, since
+$1 = 0$ in this ring would mean $f^n \in I$ and since $I$ is
+radical this would mean $f \in I$ contrary to our assumption on $f$.
+Thus we may choose a maximal ideal $\mathfrak m'$
+in $(R/I)_f$, see Lemma \ref{lemma-Zariski-topology}.
+Let $\mathfrak m \subset R$
+be the inverse image of $\mathfrak m'$ in $R$. We see that
+$I \subset \mathfrak m$
+and $f \not \in \mathfrak m$. If we show that $\mathfrak m$ is a maximal
+ideal of $R$, then we are done. We clearly have
+$$
+k \subset R/\mathfrak m \subset \kappa(\mathfrak m').
+$$
+By part (\ref{item-finite-kappa}) the field extension
+$\kappa(\mathfrak m')/k$ is finite. Hence
+$R/\mathfrak m$ is a field by Fields, Lemma
+\ref{fields-lemma-subalgebra-algebraic-extension-field}.
+Thus $\mathfrak m$ is maximal and the proof is complete.
+\end{proof}
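\medskip\noindent
To illustrate part (\ref{item-finite-kappa}), take $k = \mathbf{R}$
and $n = 1$. The maximal ideals of $\mathbf{R}[x]$ are the ideals
$(x - a)$ with $a \in \mathbf{R}$ and the ideals $(x^2 + bx + c)$
with $b^2 - 4c < 0$; the residue fields are $\mathbf{R}$ and a field
isomorphic to $\mathbf{C}$ respectively, both finite over
$\mathbf{R}$ as the theorem predicts.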
+
+\begin{lemma}
+\label{lemma-field-finite-type-over-domain}
+Let $R$ be a ring. Let $K$ be a field.
+If $R \subset K$ and $K$ is of finite type over $R$,
+then there exists an $f \in R$ such that $R_f$ is a field,
+and $K/R_f$ is a finite field extension.
+\end{lemma}
+
+\begin{proof}
By Lemma \ref{lemma-characterize-image-finite-type} there
exists a nonempty open $U \subset \Spec(R)$
contained in the image $\{(0)\}$ of $\Spec(K) \to \Spec(R)$.
+Choose $f \in R$, $f \not = 0$ such that $D(f) \subset U$, i.e.,
+$D(f) = \{(0)\}$. Then $R_f$ is a domain whose spectrum has exactly one
+point and $R_f$ is a field. Then $K$ is a finitely generated algebra
+over the field $R_f$ and hence a finite field extension of
+$R_f$ by the Hilbert Nullstellensatz (Theorem \ref{theorem-nullstellensatz}).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Jacobson rings}
+\label{section-ring-jacobson}
+
+\noindent
+Let $R$ be a ring. The closed points of $\Spec(R)$ are the
maximal ideals of $R$. Often rings which occur naturally in algebraic
geometry have lots of maximal ideals, for example finite type algebras
over a field or over $\mathbf{Z}$. We will show that these
are examples of Jacobson rings.
+
+\begin{definition}
+\label{definition-ring-jacobson}
+Let $R$ be a ring. We say that $R$ is a
+{\it Jacobson ring} if every radical
+ideal $I$ is the intersection of the
+maximal ideals containing it.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-finite-type-field-Jacobson}
+Any algebra of finite type over a field is Jacobson.
+\end{lemma}
+
+\begin{proof}
+This follows from Theorem \ref{theorem-nullstellensatz}
+and Definition \ref{definition-ring-jacobson}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-jacobson-prime}
+Let $R$ be a ring. If every prime ideal of $R$ is the
+intersection of the maximal ideals containing it,
+then $R$ is Jacobson.
+\end{lemma}
+
+\begin{proof}
+This is immediately clear from the fact that
+every radical ideal $I \subset R$ is the
+intersection of the primes containing it.
+See Lemma \ref{lemma-Zariski-topology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-jacobson}
+A ring $R$ is Jacobson if and only if $\Spec(R)$
+is Jacobson, see Topology,
+Definition \ref{topology-definition-space-jacobson}.
+\end{lemma}
+
+\begin{proof}
+Suppose $R$ is Jacobson. Let $Z \subset \Spec(R)$
+be a closed subset. We have to show that the set of closed
+points in $Z$ is dense in $Z$. Let $U \subset \Spec(R)$
+be an open such that $U \cap Z$ is nonempty.
+We have to show $Z \cap U$ contains a closed point
+of $\Spec(R)$. We may
+assume $U = D(f)$ as standard opens form a basis for the
+topology on $\Spec(R)$. According to
+Lemma \ref{lemma-Zariski-topology} we may assume that
+$Z = V(I)$, where $I$ is a radical ideal. We see also
+that $f \not \in I$. By assumption, there exists a
+maximal ideal $\mathfrak m \subset R$ such that
+$I \subset \mathfrak m$ but $f \not\in \mathfrak m$.
+Hence $\mathfrak m \in D(f) \cap V(I) = U \cap Z$ as desired.
+
+\medskip\noindent
+Conversely, suppose that $\Spec(R)$ is Jacobson.
+Let $I \subset R$ be a radical ideal. Let
$J = \bigcap_{I \subset \mathfrak m} \mathfrak m$
+be the intersection of the maximal ideals containing $I$.
+Clearly $J$ is a radical ideal, $V(J) \subset V(I)$, and
+$V(J)$ is the smallest closed subset of $V(I)$ containing
+all the closed points of $V(I)$. By assumption we see that
+$V(J) = V(I)$. But Lemma \ref{lemma-Zariski-topology}
+shows there is a bijection between Zariski closed
+sets and radical ideals, hence $I = J$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-jacobson}
Let $R$ be a ring. If $R$ is not Jacobson there exist
a prime $\mathfrak p \subset R$ and an element $f \in R$
such that the following hold:
+\begin{enumerate}
+\item $\mathfrak p$ is not a maximal ideal,
+\item $f \not \in \mathfrak p$,
+\item $V(\mathfrak p) \cap D(f) = \{\mathfrak p\}$, and
+\item $(R/\mathfrak p)_f$ is a field.
+\end{enumerate}
+On the other hand, if $R$ is Jacobson, then for any pair $(\mathfrak p, f)$
+such that (1) and (2) hold the set $V(\mathfrak p) \cap D(f)$ is
+infinite.
+\end{lemma}
+
+\begin{proof}
+Assume $R$ is not Jacobson.
By Lemma \ref{lemma-jacobson} this means there exists a
closed subset $T \subset \Spec(R)$
+whose set $T_0 \subset T$ of closed points is not dense in $T$.
+Choose an $f \in R$ such that $T_0 \subset V(f)$ but
+$T \not \subset V(f)$. Note that $T \cap D(f)$
+is homeomorphic to $\Spec((R/I)_f)$ if $T = V(I)$, see
+Lemmas \ref{lemma-spec-closed} and \ref{lemma-standard-open}.
+As any ring has a maximal ideal
(Lemma \ref{lemma-Zariski-topology}) we can choose a closed point $t$
of the space $T \cap D(f)$. Then $t$ corresponds to a prime ideal
+$\mathfrak p \subset R$ which is not maximal (as $t \not \in T_0$).
+Thus (1) holds. By construction $f \not \in \mathfrak p$, hence (2).
+As $t$ is a closed point of $T \cap D(f)$ we see that
+$V(\mathfrak p) \cap D(f) = \{\mathfrak p\}$, i.e., (3) holds. Hence we
+conclude that $(R/\mathfrak p)_f$ is a domain whose
+spectrum has one point, hence (4) holds
+(for example combine Lemmas \ref{lemma-characterize-local-ring} and
+\ref{lemma-minimal-prime-reduced-ring}).
+
+\medskip\noindent
+Conversely, suppose that $R$ is Jacobson and $(\mathfrak p, f)$
+satisfy (1) and (2). If
+$V(\mathfrak p) \cap D(f) =
+\{\mathfrak p, \mathfrak q_1, \ldots, \mathfrak q_t\}$
+then $\mathfrak p \not = \mathfrak q_i$
+implies there exists an element $g \in R$ such that $g \not \in \mathfrak p$
+but $g \in \mathfrak q_i$ for all $i$. Hence
$V(\mathfrak p) \cap D(fg) = \{\mathfrak p\}$ which
is impossible: since $\Spec(R)$ is a Jacobson topological space,
each nonempty locally closed subset contains a closed point, so
$\mathfrak p$ would be a closed point, i.e., a maximal ideal,
contradicting (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pid-jacobson}
+The ring $\mathbf{Z}$ is a Jacobson ring.
+More generally, let $R$ be a ring such that
+\begin{enumerate}
+\item $R$ is a domain,
+\item $R$ is Noetherian,
+\item any nonzero prime ideal is a maximal ideal, and
+\item $R$ has infinitely many maximal ideals.
+\end{enumerate}
+Then $R$ is a Jacobson ring.
+\end{lemma}
+
+\begin{proof}
Let $R$ satisfy (1), (2), (3) and (4). By
Lemma \ref{lemma-jacobson-prime} it suffices to show that every
prime ideal is an intersection of maximal ideals. Every nonzero
prime is maximal by (3), so it remains to prove that
$(0) = \bigcap_{\mathfrak m \subset R} \mathfrak m$.
+Since $R$ has infinitely many maximal ideals it suffices to
+show that any nonzero $x \in R$ is contained in at most
+finitely many maximal ideals, in other words that $V(x)$ is finite.
+By Lemma \ref{lemma-spec-closed}
+we see that $V(x)$ is homeomorphic to $\Spec(R/xR)$.
+By assumption (3) every prime of $R/xR$ is minimal and hence
+corresponds to an irreducible component of $\Spec(R/xR)$
+(Lemma \ref{lemma-irreducible}).
+As $R/xR$ is Noetherian, the topological space $\Spec(R/xR)$
+is Noetherian (Lemma \ref{lemma-Noetherian-topology})
+and has finitely many irreducible components
+(Topology, Lemma \ref{topology-lemma-Noetherian}).
+Thus $V(x)$ is finite as desired.
+\end{proof}
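\medskip\noindent
Besides $\mathbf{Z}$, Lemma \ref{lemma-pid-jacobson} applies for
example to the polynomial ring $k[t]$ over a field $k$: it is a
Noetherian domain in which every nonzero prime ideal is maximal,
and it has infinitely many maximal ideals because there are
infinitely many monic irreducible polynomials in $k[t]$ (by
Euclid's argument: given monic irreducibles $P_1, \ldots, P_r$,
any irreducible factor of $P_1 \cdots P_r + 1$ is a new one).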
+
+\begin{example}
+\label{example-infinite-product-fields-jacobson}
+Let $A$ be an infinite set.
+For each $\alpha \in A$, let $k_\alpha$ be a field.
+We claim that $R = \prod_{\alpha\in A} k_\alpha$ is Jacobson.
+First, note that any element $f \in R$ has the form
+$f = ue$, with $u \in R$ a unit and $e\in R$ an idempotent
+(left to the reader).
+Hence $D(f) = D(e)$, and $R_f = R_e = R/(1-e)$ is a quotient of $R$.
+Actually, any ring with this property is Jacobson.
+Namely, say $\mathfrak p \subset R$ is a prime ideal
+and $f \in R$, $f \not \in \mathfrak p$. We have to find
+a maximal ideal $\mathfrak m$ of $R$ such that
+$\mathfrak p \subset \mathfrak m$ and $f \not\in \mathfrak m$.
+Because $R_f$ is a quotient of $R$ we see that any maximal
+ideal of $R_f$ corresponds to a maximal ideal of $R$
+not containing $f$. Hence the result follows
+by choosing a maximal ideal of $R_f$ containing $\mathfrak p R_f$.
+\end{example}
+
+\begin{example}
+\label{example-not-jacobson}
+A domain $R$ with finitely many maximal ideals
+$\mathfrak m_i$, $i = 1, \ldots, n$ is not a
+Jacobson ring, except when it is a field.
+Namely, in this case $(0)$ is not the intersection
+of the maximal ideals $(0) \not =
+\mathfrak m_1 \cap \mathfrak m_2 \cap \ldots \cap \mathfrak m_n
+\supset \mathfrak m_1 \cdot \mathfrak m_2 \cdot \ldots
+\cdot \mathfrak m_n \not = 0$.
+In particular a discrete valuation ring, or any local ring with
+at least two prime ideals is not a Jacobson
+ring.
+\end{example}
+
+\begin{lemma}
+\label{lemma-finite-residue-extension-closed}
+Let $R \to S$ be a ring map.
+Let $\mathfrak m \subset R$ be a maximal ideal.
+Let $\mathfrak q \subset S$ be a prime ideal
+lying over $\mathfrak m$ such that $\kappa(\mathfrak q)/\kappa(\mathfrak m)$
+is an algebraic field extension.
+Then $\mathfrak q$ is a maximal ideal of $S$.
+\end{lemma}
+
+\begin{proof}
+Consider the diagram
+$$
+\xymatrix{
+S \ar[r] & S/\mathfrak q \ar[r] & \kappa(\mathfrak q) \\
+R \ar[r] \ar[u] & R/\mathfrak m \ar[u]
+}
+$$
+We see that $\kappa(\mathfrak m) \subset S/\mathfrak q \subset
+\kappa(\mathfrak q)$. Because the field extension
+$\kappa(\mathfrak m) \subset \kappa(\mathfrak q)$
+is algebraic, any ring between $\kappa(\mathfrak m)$
+and $\kappa(\mathfrak q)$ is a field
+(Fields, Lemma \ref{fields-lemma-subalgebra-algebraic-extension-field}).
+Thus $S/\mathfrak q$ is a field, and a posteriori equal
+to $\kappa(\mathfrak q)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension}
+Suppose that $k$ is a field and suppose that $V$ is a nonzero vector
+space over $k$. Assume the dimension of $V$ (which is a cardinal number)
+is smaller than the cardinality of $k$. Then for any linear operator
+$T : V \to V$ there exists some monic polynomial $P(t) \in k[t]$ such that
+$P(T)$ is not invertible.
+\end{lemma}
+
+\begin{proof}
If not, then $V$ inherits the structure of a vector space over
the field $k(t)$, so the dimension of $V$ over $k$ is at least
the dimension of $k(t)$ over $k$. But the latter is at least the
cardinality of $k$, because the elements $\frac{1}{t - \lambda}$,
$\lambda \in k$, are $k$-linearly independent: if
$\sum_j c_j \frac{1}{t - \lambda_j} = 0$ with the $\lambda_j$ distinct,
then multiplying by $\prod_j (t - \lambda_j)$ and evaluating at
$t = \lambda_i$ gives $c_i \prod_{j \neq i} (\lambda_i - \lambda_j) = 0$,
whence $c_i = 0$. This contradicts the assumption on the dimension of $V$.
\end{proof}
+
+\noindent
+Here is another version of Hilbert's Nullstellensatz.
+
+\begin{theorem}
+\label{theorem-uncountable-nullstellensatz}
+Let $k$ be a field. Let $S$ be a $k$-algebra generated over $k$
+by the elements $\{x_i\}_{i \in I}$. Assume the cardinality of $I$
+is smaller than the cardinality of $k$. Then
+\begin{enumerate}
+\item for all maximal ideals $\mathfrak m \subset S$ the field
+extension $\kappa(\mathfrak m)/k$
+is algebraic, and
+\item $S$ is a Jacobson ring.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+If $I$ is finite then the result follows from the Hilbert Nullstellensatz,
+Theorem \ref{theorem-nullstellensatz}. In the rest of the proof we assume
+$I$ is infinite. It suffices to prove the result for
+$\mathfrak m \subset k[\{x_i\}_{i \in I}]$ maximal in the polynomial
+ring on variables $x_i$, since $S$ is a quotient of this.
As $I$ is infinite, the set
of monomials $x_{i_1}^{e_1} \ldots x_{i_r}^{e_r}$, $i_1, \ldots, i_r \in I$
and $e_1, \ldots, e_r \geq 0$, has cardinality at most equal to the
cardinality of $I$. Namely, for each $n$ the cardinality of
$I \times \ldots \times I$ ($n$ factors) equals the cardinality of $I$,
and hence $\bigcup_{n \geq 0} I^n$ has the same cardinality as well.
(If $I$ is finite this is not true, and
in that case the proof only works if $k$ is uncountable.)
+
+\medskip\noindent
To arrive at a contradiction, suppose there exists an element
$T \in \kappa(\mathfrak m)$ transcendental
over $k$. Note that the $k$-linear map
$T : \kappa(\mathfrak m) \to \kappa(\mathfrak m)$
given by multiplication by $T$ has the property that $P(T)$ is invertible
for all monic polynomials $P(t) \in k[t]$.
+Also, $\kappa(\mathfrak m)$ has dimension at most the cardinality of $I$
+over $k$ since it is a quotient of the vector space
+$k[\{x_i\}_{i \in I}]$ over $k$ (whose dimension is $\# I$ as we saw above).
+This is impossible by Lemma \ref{lemma-dimension}.
+
+\medskip\noindent
+To show that $S$ is Jacobson we argue as follows. If not then
+there exists a prime $\mathfrak q \subset S$ and an element $f \in S$,
+$f \not \in \mathfrak q$ such that $\mathfrak q$ is not maximal
+and $(S/\mathfrak q)_f$ is a field, see
+Lemma \ref{lemma-characterize-jacobson}.
+But note that $(S/\mathfrak q)_f$ is generated by at most $\# I + 1$ elements.
+Hence the field extension $(S/\mathfrak q)_f/k$ is algebraic
+(by the first part of the proof).
+This implies that $\kappa(\mathfrak q)$ is an algebraic extension of $k$
+hence $\mathfrak q$ is maximal by
+Lemma \ref{lemma-finite-residue-extension-closed}. This contradiction
+finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-Jacobson}
+Let $k$ be a field. Let $S$ be a $k$-algebra.
+For any field extension $K/k$ whose cardinality is larger
+than the cardinality of $S$ we have
+\begin{enumerate}
+\item for every maximal ideal $\mathfrak m$ of $S_K$ the field
+$\kappa(\mathfrak m)$ is algebraic over $K$, and
+\item $S_K$ is a Jacobson ring.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
By assumption the cardinality of $K$ is greater
than the cardinality of $S$. Since the elements of $S$ generate
the $K$-algebra $S_K$ we see that
Theorem \ref{theorem-uncountable-nullstellensatz}
applies.
+\end{proof}
+
+\begin{example}
+\label{example-countable-trick-does-not-work}
+The trick in the proof of
+Theorem \ref{theorem-uncountable-nullstellensatz}
+really does not work if $k$ is a countable field and $I$ is countable too.
+Let $k$ be a countable field. Let $x$ be a variable,
+and let $k(x)$ be the field of rational functions in $x$.
+Consider the polynomial algebra $R = k[x, \{x_f\}_{f \in k[x]-\{0\}}]$.
+Let $I = (\{fx_f - 1\}_{f\in k[x] - \{0\}})$. Note that
+$I$ is a proper ideal in $R$.
+Choose a maximal ideal $I \subset \mathfrak m$.
+Then $k \subset R/\mathfrak m$ is isomorphic to
+$k(x)$, and is not algebraic over $k$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-Jacobson-invert-element}
+Let $R$ be a Jacobson ring. Let $f \in R$. The ring $R_f$ is Jacobson and
+maximal ideals of $R_f$ correspond to maximal ideals of $R$ not containing $f$.
+\end{lemma}
+
+\begin{proof}
+By Topology, Lemma \ref{topology-lemma-jacobson-inherited}
+we see that $D(f) = \Spec(R_f)$ is Jacobson and
+that closed points of $D(f)$
+correspond to closed points in $\Spec(R)$
+which happen to lie in $D(f)$. Thus $R_f$ is Jacobson by
+Lemma \ref{lemma-jacobson}.
+\end{proof}
+
+\begin{example}
+\label{example-localize-not-preserve-closed-points}
+Here is a simple example that shows Lemma \ref{lemma-Jacobson-invert-element}
+to be false if $R$ is not Jacobson.
+Consider the ring $R = \mathbf{Z}_{(2)}$, i.e., the localization
+of $\mathbf{Z}$ at the prime $(2)$. The localization of $R$ at
+the element $2$ is isomorphic to $\mathbf{Q}$, in a formula:
$R_2 \cong \mathbf{Q}$. Clearly the induced map
$\Spec(\mathbf{Q}) \to \Spec(R)$ maps the unique (closed) point
of $\Spec(\mathbf{Q})$ to the generic point of $\Spec(R)$.
+\end{example}
+
+\begin{example}
+\label{example-infinite-localize-not-preserve-closed-points}
+Here is a simple example that shows
+Lemma \ref{lemma-Jacobson-invert-element}
+is false if $R$ is Jacobson but we localize at infinitely
+many elements.
+Namely, let $R = \mathbf{Z}$ and consider the localization
+$(R \setminus \{0\})^{-1}R \cong \mathbf{Q}$
+of $R$ at the set of all nonzero elements.
Clearly the induced map $\Spec(\mathbf{Q}) \to \Spec(\mathbf{Z})$ maps the
unique (closed) point of $\Spec(\mathbf{Q})$ to the generic point
of $\Spec(\mathbf{Z})$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-Jacobson-mod-ideal}
+Let $R$ be a Jacobson ring. Let $I \subset R$ be an ideal.
+The ring $R/I$ is Jacobson and maximal ideals
+of $R/I$ correspond to maximal ideals of $R$ containing $I$.
+\end{lemma}
+
+\begin{proof}
+The proof is the same as the proof of
+Lemma \ref{lemma-Jacobson-invert-element}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-jacobson}
Let $R$ be a Jacobson ring. Let $K$ be a field. Assume $R \subset K$ and
that $K$ is of finite type over $R$. Then $R$ is a field and $K/R$
is a finite field extension.
+\end{lemma}
+
+\begin{proof}
+First note that $R$ is a domain.
+By Lemma \ref{lemma-field-finite-type-over-domain}
+we see that $R_f$ is a field and $K/R_f$ is a finite field extension
+for some nonzero $f \in R$. Hence $(0)$ is a maximal ideal of $R_f$
+and by
+Lemma \ref{lemma-Jacobson-invert-element}
+we conclude $(0)$ is a maximal ideal of $R$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-Jacobson-permanence}
+Let $R$ be a Jacobson ring. Let $R \to S$ be a
+ring map of finite type. Then
+\begin{enumerate}
+\item The ring $S$ is Jacobson.
+\item The map $\Spec(S) \to \Spec(R)$ transforms
+closed points to closed points.
+\item For $\mathfrak m' \subset S$ maximal lying over $\mathfrak m \subset R$
+the field extension $\kappa(\mathfrak m')/\kappa(\mathfrak m)$
+is finite.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Let $\mathfrak m' \subset S$ be a maximal ideal and
+$R \cap \mathfrak m' = \mathfrak m$.
+Then $R/\mathfrak m \to S/\mathfrak m'$ satisfies
+the conditions of Lemma \ref{lemma-silly-jacobson}
+by Lemma \ref{lemma-Jacobson-mod-ideal}.
+Hence $R/\mathfrak m$ is a field and
+$\mathfrak m$ a maximal ideal and the induced
+residue field extension is finite. This proves (2) and (3).
+
+\medskip\noindent
If $S$ is not Jacobson, then by Lemma \ref{lemma-characterize-jacobson} there
exists a non-maximal prime ideal $\mathfrak q$ of $S$ and a
$g \in S$, $g \not\in \mathfrak q$ such that $(S/\mathfrak q)_g$ is a field.
+To arrive at a contradiction we show that $\mathfrak q$ is a maximal ideal.
+Let $\mathfrak p = \mathfrak q \cap R$. Then
+$R/\mathfrak p \to (S/\mathfrak q)_g$ satisfies the conditions of
+Lemma \ref{lemma-silly-jacobson} by
+Lemma \ref{lemma-Jacobson-mod-ideal}.
+Hence $R/\mathfrak p$ is a field and the field extension
+$\kappa(\mathfrak p) \to (S/\mathfrak q)_g = \kappa(\mathfrak q)$ is
+finite, thus algebraic. Then $\mathfrak q$ is a maximal ideal of $S$ by
+Lemma \ref{lemma-finite-residue-extension-closed}. Contradiction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-corollary-jacobson}
+Any finite type algebra over $\mathbf{Z}$ is Jacobson.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-pid-jacobson} and
+Proposition \ref{proposition-Jacobson-permanence}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-image-finite-type-map-Jacobson-rings}
+Let $R \to S$ be a finite type ring map of Jacobson rings.
+Denote $X = \Spec(R)$ and $Y = \Spec(S)$.
Write $f : Y \to X$ for the induced
map of spectra. Let $E \subset Y = \Spec(S)$ be a
+constructible set. Denote with a subscript ${}_0$ the set
+of closed points of a topological space.
+\begin{enumerate}
+\item We have $f(E)_0 = f(E_0) = X_0 \cap f(E)$.
+\item A point $\xi \in X$ is in $f(E)$ if and only if
+$\overline{\{\xi\}} \cap f(E_0)$ is dense in $\overline{\{\xi\}}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have a commutative diagram of continuous maps
+$$
+\xymatrix{
+E \ar[r] \ar[d] & Y \ar[d] \\
+f(E) \ar[r] & X
+}
+$$
+Suppose $x \in f(E)$ is closed in $f(E)$. Then $f^{-1}(\{x\})\cap E$
+is nonempty and closed in $E$. Applying
+Topology, Lemma \ref{topology-lemma-jacobson-inherited}
+to both inclusions
+$$
+f^{-1}(\{x\}) \cap E \subset E \subset Y
+$$
we find there exists a point $y \in f^{-1}(\{x\}) \cap E$ which is
closed in $Y$. In other words, there exists a point
$y \in E_0 \cap Y_0$ mapping to $x$. Hence $x \in f(E_0)$.
+This proves that $f(E)_0 \subset f(E_0)$.
+Proposition \ref{proposition-Jacobson-permanence} implies that
+$f(E_0) \subset X_0 \cap f(E)$. The inclusion
+$X_0 \cap f(E) \subset f(E)_0$ is trivial. This proves the
+first assertion.
+
+\medskip\noindent
+Suppose that $\xi \in f(E)$. According to
+Lemma \ref{lemma-characterize-image-finite-type}
+the set $f(E) \cap \overline{\{\xi\}}$ contains a dense
+open subset of $\overline{\{\xi\}}$. Since $X$ is Jacobson
+we conclude that $f(E) \cap \overline{\{\xi\}}$ contains a
+dense set of closed points, see Topology,
+Lemma \ref{topology-lemma-jacobson-inherited}.
+We conclude by part (1) of the lemma.
+
+\medskip\noindent
+On the other hand, suppose that $\overline{\{\xi\}} \cap f(E_0)$
+is dense in $\overline{\{\xi\}}$. By
+Lemma \ref{lemma-constructible-is-image}
+there exists a ring map $S \to S'$ of finite presentation
+such that $E$ is the image of $Y' := \Spec(S') \to Y$.
+Then $E_0$ is the image of $Y'_0$ by the first part of the
+lemma applied to the ring map $S \to S'$. Thus we may assume that
+$E = Y$ by replacing $S$ by $S'$. Suppose $\xi$ corresponds
+to $\mathfrak p \subset R$. Consider the diagram
+$$
+\xymatrix{
+S \ar[r] & S/\mathfrak p S \\
+R \ar[r] \ar[u] & R/\mathfrak p \ar[u]
+}
+$$
+This diagram and the density of $f(Y_0) \cap V(\mathfrak p)$
+in $V(\mathfrak p)$
+shows that the morphism $R/\mathfrak p \to S/\mathfrak p S$
+satisfies condition (2) of
+Lemma \ref{lemma-domain-image-dense-set-points-generic-point}.
+Hence we conclude
+there exists a prime $\overline{\mathfrak q} \subset S/\mathfrak pS$
+mapping to $(0)$. In other words the inverse image $\mathfrak q$
+of $\overline{\mathfrak q}$ in $S$ maps to $\mathfrak p$ as desired.
+\end{proof}
+
+\noindent
The conclusion of the lemma above is that we can read off
the image of $f$ from the set of closed points of the image.
This is a little nicer in case the map is of finite presentation,
because then we know that images of constructible sets are constructible.
Before we state the result we introduce some notation.
Denote by $\text{Constr}(X)$ the set of constructible subsets of $X$.
Let $R \to S$ be a ring map.
Denote $X = \Spec(R)$ and $Y = \Spec(S)$.
Write $f : Y \to X$ for the induced map of spectra.
Denote with a subscript ${}_0$ the set
of closed points of a topological space.
+
+
+\begin{lemma}
+\label{lemma-conclude-jacobson-Noetherian}
+With notation as above. Assume that $R$ is a Noetherian Jacobson ring.
+Further assume $R \to S$ is of finite type.
+There is a commutative diagram
+$$
+\xymatrix{
+\text{Constr}(Y) \ar[r]^{E \mapsto E_0} \ar[d]^{E \mapsto f(E)} &
+\text{Constr}(Y_0) \ar[d]^{E \mapsto f(E)} \\
+\text{Constr}(X) \ar[r]^{E \mapsto E_0} &
+\text{Constr}(X_0)
+}
+$$
+where the horizontal arrows are the bijections from
+Topology, Lemma \ref{topology-lemma-jacobson-equivalent-constructible}.
+\end{lemma}
+
+\begin{proof}
+Since $R \to S$ is of finite type, it is of finite presentation,
+see Lemma \ref{lemma-Noetherian-finite-type-is-finite-presentation}.
Thus the image of a constructible subset of $Y$ is constructible
in $X$ by Chevalley's theorem
(Theorem \ref{theorem-chevalley}).
+Combined with
+Lemma \ref{lemma-image-finite-type-map-Jacobson-rings}
+the lemma follows.
+\end{proof}
+
+\noindent
+To illustrate the use of Jacobson rings, we give the following two examples.
+
+\begin{example}
+\label{example-product-matrices-zero}
+Let $k$ be a field. The space $\Spec(k[x, y]/(xy))$
+has two irreducible components: namely the $x$-axis and the
+$y$-axis. As a generalization, let
+$$
+R = k[x_{11}, x_{12}, x_{21}, x_{22}, y_{11}, y_{12}, y_{21}, y_{22}]/
+\mathfrak a,
+$$
+where $\mathfrak a$ is the ideal in
+$k[x_{11}, x_{12}, x_{21}, x_{22}, y_{11}, y_{12}, y_{21}, y_{22}]$
+generated by the entries of the $2 \times 2$ product matrix
+$$
+\left(
+\begin{matrix}
+x_{11} & x_{12}\\
+x_{21} & x_{22}
+\end{matrix}
+\right)
+ \left(
+\begin{matrix}
+y_{11} & y_{12}\\
+y_{21} & y_{22}
+\end{matrix}
+\right).
+$$
+In this example we will describe $\Spec(R)$.
+
+\medskip\noindent
To prove the statement about $\Spec(k[x, y]/(xy))$ we argue as follows.
If $\mathfrak p \subset k[x, y]$ is any prime ideal containing $xy$, then
$x$ or $y$ is contained in $\mathfrak p$. Hence the minimal such
prime ideals are exactly $(x)$ and $(y)$. In case $k$ is
algebraically closed, the $\text{max-Spec}$ of these components
can then be visualized as the point sets of the $y$-axis and the $x$-axis.
+
+\medskip\noindent
+For the generalization, note that we may identify the closed
+points of the spectrum of
$k[x_{11}, x_{12}, x_{21}, x_{22}, y_{11}, y_{12}, y_{21}, y_{22}]$
+with the space of matrices
+$$
+\left\{ (X, Y) \in \text{Mat}(2, k)\times \text{Mat}(2, k) \mid
+X = \left(
+\begin{matrix}
+x_{11} & x_{12}\\
+x_{21} & x_{22}
+\end{matrix}
+\right),
+Y= \left(
+\begin{matrix}
+y_{11} & y_{12}\\
+y_{21} & y_{22}
+\end{matrix}
+\right)
+\right\}
+$$
+at least if $k$ is algebraically closed.
+Now define a group action of
+$\text{GL}(2, k)\times \text{GL}(2, k)\times \text{GL}(2, k)$
+on the space of matrices $\{(X, Y)\}$ by
+$$
(g_1, g_2, g_3) \times (X, Y) \mapsto (g_1Xg_2^{-1}, g_2Yg_3^{-1}).
+$$
+Here, also observe that the algebraic set
+$$
+\text{GL}(2, k)\times \text{GL}(2, k)\times \text{GL}(2, k) \subset
+\text{Mat}(2, k)\times \text{Mat}(2, k) \times \text{Mat}(2, k)
+$$
+is irreducible since it is the max spectrum of the domain
+$$
+k[x_{11}, x_{12}, \ldots, z_{21}, z_{22}, (x_{11}x_{22}-x_{12}x_{21})^{-1}
+, (y_{11}y_{22}-y_{12}y_{21})^{-1}, (z_{11}z_{22}-z_{12}z_{21})^{-1}].
+$$
Since the image of an irreducible algebraic set under a morphism is still
irreducible, it suffices to classify the orbits of the set
$\{(X, Y) \in \text{Mat}(2, k)\times \text{Mat}(2, k) \mid XY = 0\}$
and take their closures. From standard linear algebra, we are reduced
to the following three cases:
+\begin{enumerate}
+\item $\exists (g_1, g_2)$ such that $g_1Xg_2^{-1} = I_{2\times 2}$.
+Then $Y$ is necessarily $0$, which as an algebraic set is
+invariant under the group action. It follows that this orbit is
+contained in the irreducible algebraic set defined by the prime
+ideal $(y_{11}, y_{12}, y_{21}, y_{22})$. Taking the closure, we see
+that $(y_{11}, y_{12}, y_{21}, y_{22})$ is actually a component.
+\item $\exists (g_1, g_2)$ such that
+$$
+g_1Xg_2^{-1} = \left(
+\begin{matrix}
+1 & 0 \\
+0 & 0
+\end{matrix}
+\right).
+$$
+This case occurs if and only if $X$ is a rank 1 matrix,
+and furthermore, $Y$ is killed by such an $X$ if and only if
+$$
+x_{11}y_{11}+x_{12}y_{21} = 0; \quad x_{11}y_{12}+x_{12}y_{22} = 0;
+$$
+$$
+x_{21}y_{11}+x_{22}y_{21} = 0; \quad x_{21}y_{12}+x_{22}y_{22} = 0.
+$$
Fix a rank $1$ matrix $X$. The nonzero $Y$'s satisfying the above
equations form an irreducible algebraic set for the following
reason (the case $Y = 0$ is contained in the previous case):
+$0 = g_1Xg_2^{-1}g_2Y$ implies that
+$$
+g_2Y = \left(
+\begin{matrix}
+0 & 0 \\
+y_{21}' & y_{22}'
+\end{matrix}
+\right).
+$$
+With a further $\text{GL}(2, k)$-action on the right by $g_3$,
+$g_2Y$ can be brought into
+$$
+g_2Yg_3^{-1} = \left(
+\begin{matrix}
+0 & 0 \\
+0 & 1
+\end{matrix}
+\right),
+$$
+and thus such $Y$'s form an irreducible algebraic set
+isomorphic to the image of $\text{GL}(2, k)$ under this action. Finally,
notice that the ``rank $1$'' condition on $X$ cuts out an open dense
subset of the irreducible algebraic set
$\det X = x_{11}x_{22} - x_{12}x_{21} = 0$.
It now follows that the five equations define an irreducible component
$(x_{11}y_{11}+x_{12}y_{21}, x_{11}y_{12}+x_{12}y_{22}, x_{21}y_{11}
+x_{22}y_{21}, x_{21}y_{12}+x_{22}y_{22}, x_{11}x_{22}-x_{12}x_{21})$
in the open subset of the space of pairs of nonzero matrices.
+It can be shown that the pair of equations
+$\det X = 0$, $\det Y = 0$ cuts $\Spec(R)$ in an irreducible component
+with the above locus an open dense subset.
+\item $\exists (g_1, g_2)$ such that $g_1Xg_2^{-1} = 0$, or
+equivalently, $X = 0$. Then $Y$ can be arbitrary and this component
+is thus defined by $(x_{11}, x_{12}, x_{21}, x_{22})$.
+\end{enumerate}
+\end{example}
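The trichotomy above can be sanity-checked by brute force over a small
field. Here is a hedged Python sketch over $\mathbf{F}_2$ (chosen only to
keep the enumeration small), verifying that every pair with $XY = 0$
satisfies $Y = 0$, or $X = 0$, or $\det X = \det Y = 0$:

```python
from itertools import product

p = 2  # work over the field F_2

def matmul(X, Y):
    # 2x2 matrix product with entries reduced mod p
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def det(X):
    return (X[0][0] * X[1][1] - X[0][1] * X[1][0]) % p

mats = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)]
zero = ((0, 0), (0, 0))
for X, Y in product(mats, repeat=2):
    if matmul(X, Y) == zero:
        # every solution falls into one of the three cases of the example
        assert Y == zero or X == zero or (det(X) == 0 and det(Y) == 0)
```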
+
+
+\begin{example}
+\label{example-idempotent-matrices}
+For another example, consider
+$R = k[\{t_{ij}\}_{i, j = 1}^{n}]/\mathfrak a$, where $\mathfrak a$ is
the ideal generated by the entries of the matrix $T^2-T$,
$T = (t_{ij})$. From linear algebra, we know that under the
$\text{GL}(n, k)$-action defined by $(g, T) \mapsto gTg^{-1}$, an idempotent
$T$ is classified by its rank, and each such $T$ is conjugate to some
$\text{diag}(1, \ldots, 1, 0, \ldots, 0)$ with $r$ ones and $n-r$ zeros.
Thus each orbit of such a $\text{diag}(1, \ldots, 1, 0, \ldots, 0)$ under the
group action forms an irreducible component and every idempotent
matrix is contained in one such orbit. Next we will show that any
two different orbits are necessarily disjoint. For this purpose we
only need to exhibit polynomial functions that take different
values on different orbits. In characteristic $0$, such a
function can be taken to be
$f(t_{ij}) = \text{trace}(T) = \sum_{i = 1}^n t_{ii}$. In positive
characteristic, things are slightly trickier since we
might have $\text{trace}(T) = 0$ even if $T \neq 0$. For instance, in
characteristic $3$ we have
+$$
\text{trace}\left(
\begin{matrix}
1 & & \\
& 1 & \\
& & 1
\end{matrix}
\right) = 3 = 0.
$$
Nevertheless, these components can be separated using other functions.
For instance, in the characteristic $3$ case, $\text{trace}(\wedge^3 T)$ takes
the value $1$ on the component corresponding to $\text{diag}(1, 1, 1)$
and $0$ on the other components.
+\end{example}
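A quick numerical check of the characteristic $3$ discussion, using the
diagonal representatives and the identification
$\text{trace}(\wedge^3 T) = \det T$ for $3 \times 3$ matrices; this toy
Python computation only illustrates the statement above:

```python
# Diagonal idempotent representatives diag(1,...,1,0,...,0) of rank r,
# viewed in characteristic 3: the trace cannot tell rank 3 from rank 0,
# but the determinant (= trace of wedge^3 for 3x3 matrices) separates rank 3.

def trace_mod(diag, p):
    return sum(diag) % p

def det_mod(diag, p):
    result = 1
    for entry in diag:
        result = (result * entry) % p
    return result

p = 3
reps = {r: [1] * r + [0] * (3 - r) for r in range(4)}   # rank r representative
traces = {r: trace_mod(d, p) for r, d in reps.items()}
dets = {r: det_mod(d, p) for r, d in reps.items()}

assert traces[3] == traces[0] == 0          # trace fails: rank 3 looks like rank 0
assert dets == {0: 0, 1: 0, 2: 0, 3: 1}    # det separates the rank 3 component
```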
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Finite and integral ring extensions}
+\label{section-finite-ring-extensions}
+
+\noindent
+Trivial lemmas concerning finite and integral ring maps.
+We recall the definition.
+
+\begin{definition}
+\label{definition-integral-ring-map}
+Let $\varphi : R \to S$ be a ring map.
+\begin{enumerate}
+\item An element $s \in S$
+is {\it integral over $R$} if there exists a monic
+polynomial $P(x) \in R[x]$ such that
+$P^\varphi(s) = 0$, where $P^\varphi(x) \in S[x]$
+is the image of $P$ under $\varphi : R[x] \to S[x]$.
+\item The ring map $\varphi$ is {\it integral}
+if every $s \in S$ is integral over $R$.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-characterize-integral-element}
+Let $\varphi : R \to S$ be a ring map. Let $y \in S$. If there exists a
+finite $R$-submodule $M$ of $S$ such that $1 \in M$ and $yM \subset M$,
+then $y$ is integral over $R$.
+\end{lemma}
+
+\begin{proof}
+Consider the map $\varphi : M \to M$, $x \mapsto y \cdot x$.
+By Lemma \ref{lemma-charpoly-module} there exists a monic polynomial
+$P \in R[T]$ with $P(\varphi) = 0$. In the ring $S$ we get
+$P(y) = P(y) \cdot 1 = P(\varphi)(1) = 0$.
+\end{proof}
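As a concrete instance of this proof, take $R = \mathbf{Z}$,
$S = \mathbf{R}$, $y = \sqrt{2}$, and $M = \mathbf{Z} + \mathbf{Z}\sqrt{2}$:
the characteristic polynomial of the multiplication-by-$y$ matrix is the
required monic polynomial. A small Python sketch (the matrix is written
down by hand and only verified here):

```python
# Multiplication by y = sqrt(2) on the Z-module M with basis (1, sqrt(2)):
#   y * 1       = 0*1 + 1*sqrt(2)
#   y * sqrt(2) = 2*1 + 0*sqrt(2)
A = [[0, 2],
     [1, 0]]

# Characteristic polynomial of a 2x2 matrix: t^2 - trace*t + det.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# Here this gives the monic integral equation t^2 - 2 = 0 for y.
assert (tr, det) == (0, -2)

y = 2 ** 0.5
assert abs(y**2 - tr * y + det) < 1e-9   # P(y) = 0, checked numerically
```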
+
+\begin{lemma}
+\label{lemma-finite-is-integral}
+A finite ring extension is integral.
+\end{lemma}
+
+\begin{proof}
+Let $R \to S$ be finite. Let $y \in S$. Apply
+Lemma \ref{lemma-characterize-integral-element}
+to $M = S$ to see that $y$ is integral over $R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-integral}
+Let $\varphi : R \to S$ be a ring map. Let $s_1, \ldots, s_n$
+be a finite set of elements of $S$.
+In this case $s_i$ is integral over $R$ for all $i = 1, \ldots, n$
+if and only if
+there exists an $R$-subalgebra $S' \subset S$ finite over $R$
+containing all of the $s_i$.
+\end{lemma}
+
+\begin{proof}
+If each $s_i$ is integral, then the subalgebra
+generated by $\varphi(R)$ and the $s_i$ is finite
+over $R$. Namely, if $s_i$ satisfies a monic equation
+of degree $d_i$ over $R$, then this subalgebra is generated as an
+$R$-module by the elements $s_1^{e_1} \ldots s_n^{e_n}$
+with $0 \leq e_i \leq d_i - 1$.
+Conversely, suppose given a finite $R$-subalgebra
+$S'$ containing all the $s_i$. Then all of the
+$s_i$ are integral by Lemma \ref{lemma-finite-is-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-finite-in-terms-of-integral}
+Let $R \to S$ be a ring map. The following are equivalent
+\begin{enumerate}
+\item $R \to S$ is finite,
+\item $R \to S$ is integral and of finite type, and
+\item there exist $x_1, \ldots, x_n \in S$ which generate $S$ as an
+algebra over $R$ such that each $x_i$ is integral over $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Clear from Lemma \ref{lemma-characterize-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-transitive}
+\begin{slogan}
+A composition of integral ring maps is integral
+\end{slogan}
+Suppose that $R \to S$ and $S \to T$ are integral
+ring maps. Then $R \to T$ is integral.
+\end{lemma}
+
+\begin{proof}
+Let $t \in T$. Let $P(x) \in S[x]$ be a
+monic polynomial such that $P(t) = 0$.
+Apply Lemma \ref{lemma-characterize-integral}
+to the finite set of coefficients of $P$.
+Hence $t$ is integral over some subalgebra
+$S' \subset S$ finite over $R$. Apply Lemma
+\ref{lemma-characterize-integral} again to find
+a subalgebra $T' \subset T$ finite over $S'$ and
+containing $t$. Lemma \ref{lemma-finite-transitive}
+applied to $R \to S' \to T'$ shows that $T'$ is finite
+over $R$. The integrality of $t$ over $R$
+now follows from Lemma \ref{lemma-finite-is-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-closure-is-ring}
+Let $R \to S$ be a ring homomorphism.
+The set
+$$
+S' = \{s \in S \mid s\text{ is integral over }R\}
+$$
+is an $R$-subalgebra of $S$.
+\end{lemma}
+
+\begin{proof}
+This is clear from Lemmas \ref{lemma-characterize-integral}
+and \ref{lemma-finite-is-integral}.
+\end{proof}
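For instance, $\sqrt{2}$ and $\sqrt{3}$ are integral over $\mathbf{Z}$,
and the lemma predicts that their sum and product are integral as well:
indeed $\sqrt{2} + \sqrt{3}$ satisfies the monic polynomial
$t^4 - 10t^2 + 1$ (found using the module with basis
$1, \sqrt{2}, \sqrt{3}, \sqrt{6}$). A numerical sanity check in Python:

```python
# sqrt(2) + sqrt(3) is a root of the monic integer polynomial t^4 - 10 t^2 + 1,
# and sqrt(2) * sqrt(3) is a root of t^2 - 6; we verify both numerically.
a = 2 ** 0.5 + 3 ** 0.5
b = 2 ** 0.5 * 3 ** 0.5
assert abs(a**4 - 10 * a**2 + 1) < 1e-9
assert abs(b**2 - 6) < 1e-9
```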
+
+\begin{lemma}
+\label{lemma-finite-product-integral}
Let $R_i \to S_i$, $i = 1, \ldots, n$, be ring maps.
+Let $R$ and $S$ denote the product of the $R_i$ and $S_i$ respectively.
+Then an element $s = (s_1, \ldots, s_n) \in S$ is integral over $R$
+if and only if each $s_i$ is integral over $R_i$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{definition}
+\label{definition-integral-closure}
+Let $R \to S$ be a ring map.
+The ring $S' \subset S$ of elements integral over
+$R$, see Lemma \ref{lemma-integral-closure-is-ring},
+is called the {\it integral closure} of $R$
+in $S$. If $R \subset S$ we say that $R$ is
+{\it integrally closed} in $S$ if $R = S'$.
+\end{definition}
+
+\noindent
+In particular, we see that $R \to S$ is integral if and only
+if the integral closure of $R$ in $S$ is all of $S$.
+
+\begin{lemma}
+\label{lemma-finite-product-integral-closure}
Let $R_i \to S_i$, $i = 1, \ldots, n$, be ring maps.
+Denote the integral closure of $R_i$ in $S_i$ by $S'_i$.
+Further let $R$ and $S$ denote the product of the $R_i$ and $S_i$ respectively.
+Then the integral closure of $R$ in $S$
is the product of the $S'_i$. In particular, $R$ is integrally closed in
$S$ if and only if each $R_i$ is integrally closed in $S_i$.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from Lemma \ref{lemma-finite-product-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-closure-localize}
+Integral closure commutes with localization: If $A \to B$ is a ring
+map, and $S \subset A$ is a multiplicative subset, then the integral
+closure of $S^{-1}A$ in $S^{-1}B$ is $S^{-1}B'$, where $B' \subset B$
+is the integral closure of $A$ in $B$.
+\end{lemma}
+
+\begin{proof}
+Since localization is exact we see that $S^{-1}B' \subset S^{-1}B$.
+Suppose $x \in B'$ and $f \in S$. Then
+$x^d + \sum_{i = 1, \ldots, d} a_i x^{d - i} = 0$
+in $B$ for some $a_i \in A$. Hence also
+$$
(x/f)^d + \sum\nolimits_{i = 1, \ldots, d} (a_i/f^i) (x/f)^{d - i} = 0
+$$
+in $S^{-1}B$. In this way we see that $S^{-1}B'$ is contained in
+the integral closure of $S^{-1}A$ in $S^{-1}B$. Conversely, suppose
+that $x/f \in S^{-1}B$ is integral over $S^{-1}A$. Then we have
+$$
+(x/f)^d + \sum\nolimits_{i = 1, \ldots, d} (a_i/f_i) (x/f)^{d - i} = 0
+$$
+in $S^{-1}B$ for some $a_i \in A$ and $f_i \in S$. This means that
+$$
+(f'f_1 \ldots f_d x)^d +
+\sum\nolimits_{i = 1, \ldots, d}
+f^i(f')^if_1^i \ldots f_i^{i - 1} \ldots f_d^i a_i
+(f'f_1 \ldots f_dx)^{d - i} = 0
+$$
+for a suitable $f' \in S$. Hence $f'f_1\ldots f_dx \in B'$ and thus
+$x/f \in S^{-1}B'$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-closure-stalks}
+\begin{slogan}
+An element of an algebra over a ring is integral over the ring
+if and only if it is locally integral at every prime ideal of the ring.
+\end{slogan}
+Let $\varphi : R \to S$ be a ring map.
+Let $x \in S$. The following are equivalent:
+\begin{enumerate}
+\item $x$ is integral over $R$, and
+\item for every prime ideal $\mathfrak p \subset R$ the element
+$x \in S_{\mathfrak p}$ is integral over $R_{\mathfrak p}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (1) implies (2). Assume (2). Consider the $R$-algebra
+$S' \subset S$ generated by $\varphi(R)$ and $x$. Let $\mathfrak p$ be
+a prime ideal of $R$. Then we know that
+$x^d + \sum_{i = 1, \ldots, d} \varphi(a_i) x^{d - i} = 0$
+in $S_{\mathfrak p}$ for some $a_i \in R_{\mathfrak p}$. Hence we see,
+by looking at which denominators occur, that
+for some $f \in R$, $f \not \in \mathfrak p$ we have
+$a_i \in R_f$ and
+$x^d + \sum_{i = 1, \ldots, d} \varphi(a_i) x^{d - i} = 0$
+in $S_f$. This implies that $S'_f$ is finite over $R_f$.
+Since $\mathfrak p$ was arbitrary and $\Spec(R)$ is quasi-compact
+(Lemma \ref{lemma-quasi-compact}) we can find finitely many elements
+$f_1, \ldots, f_n \in R$
+which generate the unit ideal of $R$ such that $S'_{f_i}$ is finite
+over $R_{f_i}$. Hence we conclude from Lemma \ref{lemma-cover} that
+$S'$ is finite over $R$. Hence $x$ is integral over $R$ by
+Lemma \ref{lemma-characterize-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-integral}
+\begin{slogan}
+Integrality and finiteness are preserved under base change.
+\end{slogan}
+Let $R \to S$ and $R \to R'$ be ring maps.
+Set $S' = R' \otimes_R S$.
+\begin{enumerate}
+\item If $R \to S$ is integral so is $R' \to S'$.
+\item If $R \to S$ is finite so is $R' \to S'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
We prove (1).
Let $s_i \in S$ be generators for $S$ over $R$.
Each of these satisfies a monic polynomial equation $P_i$
over $R$. Hence the elements $1 \otimes s_i \in S'$ generate
$S'$ over $R'$ and satisfy the corresponding monic polynomials
$P_i'$ over $R'$, where $P_i'$ is the image of $P_i$ in $R'[x]$.
Thus $S'$ is generated over $R'$ by elements integral over $R'$,
and hence $S'$ is integral over $R'$ by
Lemma \ref{lemma-integral-closure-is-ring}.
Proof of (2) omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-local}
+Let $R \to S$ be a ring map.
+Let $f_1, \ldots, f_n \in R$ generate the unit ideal.
+\begin{enumerate}
+\item If each $R_{f_i} \to S_{f_i}$ is integral, so is $R \to S$.
+\item If each $R_{f_i} \to S_{f_i}$ is finite, so is $R \to S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1).
+Let $s \in S$. Consider the ideal $I \subset R[x]$ of
+polynomials $P$ such that $P(s) = 0$. Let $J \subset R$
+denote the ideal (!) of leading coefficients of elements of $I$.
+By assumption and clearing denominators
+we see that $f_i^{n_i} \in J$ for all $i$
+and certain $n_i \geq 0$. Hence $J$ contains $1$ and we see
+$s$ is integral over $R$. Proof of (2) omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-permanence}
+Let $A \to B \to C$ be ring maps.
+\begin{enumerate}
+\item If $A \to C$ is integral so is $B \to C$.
+\item If $A \to C$ is finite so is $B \to C$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-closure-transitive}
+Let $A \to B \to C$ be ring maps.
+Let $B'$ be the integral closure of $A$ in $B$,
+let $C'$ be the integral closure of $B'$ in $C$. Then
+$C'$ is the integral closure of $A$ in $C$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-overring-surjective}
+Suppose that $R \to S$ is an integral
+ring extension with $R \subset S$.
+Then $\varphi : \Spec(S) \to \Spec(R)$
+is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p \subset R$ be a prime ideal.
+We have to show $\mathfrak pS_{\mathfrak p} \not = S_{\mathfrak p}$, see
+Lemma \ref{lemma-in-image}.
+The localization $R_{\mathfrak p} \to S_{\mathfrak p}$ is injective
+(as localization is exact) and integral by
+Lemma \ref{lemma-integral-closure-localize} or
+\ref{lemma-base-change-integral}.
+Hence we may replace $R$, $S$ by $R_{\mathfrak p}$, $S_{\mathfrak p}$ and
+we may assume $R$ is local with maximal ideal $\mathfrak m$ and
+it suffices to show that $\mathfrak mS \not = S$.
+Suppose $1 = \sum f_i s_i$ with $f_i \in \mathfrak m$
+and $s_i \in S$ in order to get a contradiction.
+Let $R \subset S' \subset S$
+be such that $R \to S'$ is finite and $s_i \in S'$, see
+Lemma \ref{lemma-characterize-integral}.
+The equation $1 = \sum f_i s_i$ implies that
+the finite $R$-module $S'$ satisfies $S' = \mathfrak m S'$. Hence by
+Nakayama's Lemma \ref{lemma-NAK}
+we see $S' = 0$. Contradiction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-under-field}
+Let $R$ be a ring. Let $K$ be a field.
+If $R \subset K$ and $K$ is integral over $R$,
+then $R$ is a field and $K$ is an algebraic extension.
+If $R \subset K$ and $K$ is finite over $R$,
+then $R$ is a field and $K$ is a finite algebraic extension.
+\end{lemma}
+
+\begin{proof}
+Assume that $R \subset K$ is integral.
+By Lemma \ref{lemma-integral-overring-surjective} we see that
+$\Spec(R)$ has $1$ point. Since clearly $R$ is a domain we see
+that $R = R_{(0)}$ is a field (Lemma \ref{lemma-minimal-prime-reduced-ring}).
+The other assertions are immediate from this.
+\end{proof}
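
\noindent
For instance, the extension $\mathbf{Z} \subset \mathbf{Q}$ is not
integral, in accordance with the lemma, since $\mathbf{Z}$ is not a field.
Concretely, a relation
$$
(1/2)^d + a_{d - 1} (1/2)^{d - 1} + \ldots + a_0 = 0,
\quad a_j \in \mathbf{Z}
$$
would give $1 = - \sum_{j < d} a_j 2^{d - j}$ after multiplying through
by $2^d$, which is impossible as the right hand side is even.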
+
+\begin{lemma}
+\label{lemma-integral-over-field}
Let $k$ be a field. Let $S$ be a $k$-algebra.
+\begin{enumerate}
+\item If $S$ is a domain and finite dimensional over $k$,
+then $S$ is a field.
+\item If $S$ is integral over $k$ and a domain,
+then $S$ is a field.
+\item If $S$ is integral over $k$ then every prime of
+$S$ is a maximal ideal (see
+Lemma \ref{lemma-ring-with-only-minimal-primes}
+for more consequences).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
The statement on primes follows from the statement
``integral $+$ domain $\Rightarrow$ field'': namely, if
$\mathfrak q \subset S$ is a prime, then $S/\mathfrak q$ is a domain
integral over $k$, hence a field, so $\mathfrak q$ is maximal.
Let $S$ be integral over $k$ and assume $S$ is a domain.
Take $s \in S$. By Lemma
+\ref{lemma-characterize-integral} we may find a
+finite dimensional $k$-subalgebra $k \subset S' \subset S$
+containing $s$. Hence $S$ is a field if we can prove the
+first statement. Assume $S$ finite dimensional
over $k$ and a domain. Pick a nonzero $s \in S$.
Since $S$ is a domain, multiplication by $s$ is an injective
$k$-linear map $S \to S$, hence surjective because $S$ is
finite dimensional over $k$. Hence there exists an element $s_1 \in S$
+such that $ss_1 = 1$. So $S$ is a field.
+\end{proof}
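
\noindent
The hypothesis that $S$ be a domain is needed in parts (1) and (2):
for instance $S = k[x]/(x^2)$ is finite dimensional over $k$ but is
not a field. Part (3) still applies to it: its only prime is $(x)$,
which is maximal.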
+
+\begin{lemma}
+\label{lemma-integral-no-inclusion}
+Suppose $R \to S$ is integral.
+Let $\mathfrak q, \mathfrak q' \in \Spec(S)$
+be distinct primes
+having the same image in $\Spec(R)$.
+Then neither $\mathfrak q \subset \mathfrak q'$
+nor $\mathfrak q' \subset \mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p \subset R$ be the image.
+By Remark \ref{remark-fundamental-diagram}
+the primes $\mathfrak q, \mathfrak q'$
+correspond to ideals in
+$S \otimes_R \kappa(\mathfrak p)$.
+Thus the lemma follows from Lemma \ref{lemma-integral-over-field}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-finite-fibres}
+Suppose $R \to S$ is finite.
+Then the fibres of $\Spec(S) \to \Spec(R)$ are finite.
+\end{lemma}
+
+\begin{proof}
+By the discussion in
+Remark \ref{remark-fundamental-diagram}
+the fibres are the spectra of the rings $S \otimes_R \kappa(\mathfrak p)$.
+As $R \to S$ is finite, these fibre rings are finite over
+$\kappa(\mathfrak p)$ hence Noetherian by
+Lemma \ref{lemma-Noetherian-permanence}.
+By
+Lemma \ref{lemma-integral-no-inclusion}
+every prime of $S \otimes_R \kappa(\mathfrak p)$ is a minimal
+prime. Hence by
+Lemma \ref{lemma-Noetherian-irreducible-components}
+there are at most finitely many.
+\end{proof}
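
\noindent
For example, the finite ring map $\mathbf{Z} \to \mathbf{Z}[i]$ has
fibres with at most two points: the fibre ring over a prime $(p)$ is
$\mathbf{Z}[i] \otimes_\mathbf{Z} \mathbf{F}_p = \mathbf{F}_p[x]/(x^2 + 1)$,
whose spectrum has two points if $x^2 + 1$ factors into distinct
linear factors modulo $p$ (e.g.\ $p = 5$) and one point otherwise
(e.g.\ $p = 2$ or $p = 3$).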
+
+\begin{lemma}
+\label{lemma-integral-going-up}
+Let $R \to S$ be a ring map such that
+$S$ is integral over $R$.
+Let $\mathfrak p \subset \mathfrak p' \subset R$
+be primes. Let $\mathfrak q$ be a prime of $S$ mapping
+to $\mathfrak p$. Then there exists a prime $\mathfrak q'$
+with $\mathfrak q \subset \mathfrak q'$
+mapping to $\mathfrak p'$.
+\end{lemma}
+
+\begin{proof}
+We may replace $R$ by $R/\mathfrak p$ and $S$ by $S/\mathfrak q$.
+This reduces us to the situation of having an integral
+extension of domains $R \subset S$ and a prime $\mathfrak p' \subset R$.
+By Lemma \ref{lemma-integral-overring-surjective} we win.
+\end{proof}
+
+\noindent
+The property expressed in the lemma above is called
+the ``going up property'' for the ring map $R \to S$,
+see Definition \ref{definition-going-up-down}.
+
+\begin{lemma}
+\label{lemma-finite-finitely-presented-extension}
+Let $R \to S$ be a finite and finitely presented ring map.
+Let $M$ be an $S$-module.
+Then $M$ is finitely presented as an $R$-module if and only if
+$M$ is finitely presented as an $S$-module.
+\end{lemma}
+
+\begin{proof}
+One of the implications follows from
+Lemma \ref{lemma-finitely-presented-over-subring}.
+To see the other assume that $M$ is finitely presented as an $S$-module.
+Pick a presentation
+$$
+S^{\oplus m} \longrightarrow
+S^{\oplus n} \longrightarrow
+M \longrightarrow 0
+$$
+As $S$ is finite as an $R$-module, the kernel of
+$S^{\oplus n} \to M$ is a finite $R$-module. Thus from
+Lemma \ref{lemma-extension}
+we see that it suffices to prove that $S$ is finitely presented as an
+$R$-module.
+
+\medskip\noindent
+Pick $y_1, \ldots, y_n \in S$ such that $y_1, \ldots, y_n$ generate $S$
+as an $R$-module. By Lemma \ref{lemma-characterize-integral-element}
+each $y_i$ is integral over $R$. Choose monic polynomials
+$P_i(x) \in R[x]$ with $P_i(y_i) = 0$. Consider the ring
+$$
+S' = R[x_1, \ldots, x_n]/(P_1(x_1), \ldots, P_n(x_n))
+$$
+Then we see that $S$ is of finite presentation as an $S'$-algebra
+by Lemma \ref{lemma-compose-finite-type}. Since $S' \to S$ is surjective,
+the kernel $J = \Ker(S' \to S)$ is finitely generated as an ideal by
+Lemma \ref{lemma-finite-presentation-independent}. Hence $J$ is a finite
+$S'$-module (immediate from the definitions).
+Thus $S = \Coker(J \to S')$ is of finite presentation as an $S'$-module
+by Lemma \ref{lemma-extension}.
+Hence, arguing as in the first paragraph, it suffices to show that $S'$ is
+of finite presentation as an $R$-module. Actually, $S'$ is free as an
+$R$-module with basis the monomials $x_1^{e_1} \ldots x_n^{e_n}$
+for $0 \leq e_i < \deg(P_i)$. Namely, write $R \to S'$ as the composition
+$$
+R \to R[x_1]/(P_1(x_1)) \to R[x_1, x_2]/(P_1(x_1), P_2(x_2)) \to
+\ldots \to S'
+$$
+This shows that the $i$th ring in this sequence is free as a module over the
+$(i - 1)$st one with basis $1, x_i, \ldots, x_i^{\deg(P_i) - 1}$. The result
+follows easily from this by induction. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-normal}
+Let $R$ be a ring. Let $x, y \in R$ be nonzerodivisors.
+Let $R[x/y] \subset R_{xy}$ be the $R$-subalgebra generated
+by $x/y$, and similarly for the subalgebras $R[y/x]$ and $R[x/y, y/x]$.
+If $R$ is integrally closed in $R_x$ or $R_y$, then the sequence
+$$
+0 \to R \xrightarrow{(-1, 1)} R[x/y] \oplus R[y/x] \xrightarrow{(1, 1)}
+R[x/y, y/x] \to 0
+$$
+is a short exact sequence of $R$-modules.
+\end{lemma}
+
+\begin{proof}
+Since $x/y \cdot y/x = 1$ it is clear that the map
+$R[x/y] \oplus R[y/x] \to R[x/y, y/x]$ is surjective.
+Let $\alpha \in R[x/y] \cap R[y/x]$. To show exactness in the middle
+we have to prove that $\alpha \in R$. By assumption we may write
+$$
+\alpha = a_0 + a_1 x/y + \ldots + a_n (x/y)^n =
+b_0 + b_1 y/x + \ldots + b_m(y/x)^m
+$$
+for some $n, m \geq 0$ and $a_i, b_j \in R$.
+Pick some $N > \max(n, m)$.
+Consider the finite $R$-submodule $M$ of $R_{xy}$ generated by the elements
+$$
+(x/y)^N, (x/y)^{N - 1}, \ldots, x/y, 1, y/x, \ldots, (y/x)^{N - 1}, (y/x)^N
+$$
+We claim that $\alpha M \subset M$. Namely, it is clear that
+$(x/y)^i (b_0 + b_1 y/x + \ldots + b_m(y/x)^m) \in M$ for
+$0 \leq i \leq N$ and that
+$(y/x)^i (a_0 + a_1 x/y + \ldots + a_n(x/y)^n) \in M$ for
+$0 \leq i \leq N$. Hence $\alpha$ is integral over $R$ by
+Lemma \ref{lemma-characterize-integral-element}. Note that
+$\alpha \in R_x$, so if $R$ is integrally closed in $R_x$
+then $\alpha \in R$ as desired.
+\end{proof}
+
+
+
+
+
+
+
+\section{Normal rings}
+\label{section-normal-rings}
+
+\noindent
+We first introduce the notion of a normal domain, and then we
+introduce the (very general) notion of a normal ring.
+
+\begin{definition}
+\label{definition-domain-normal}
+A domain $R$ is called {\it normal} if it is integrally
+closed in its field of fractions.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-integral-closure-in-normal}
+Let $R \to S$ be a ring map.
+If $S$ is a normal domain, then the integral closure of $R$
+in $S$ is a normal domain.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
+The following notion is occasionally useful when
+studying normality.
+
+\begin{definition}
+\label{definition-almost-integral}
+Let $R$ be a domain.
+\begin{enumerate}
+\item An element $g$ of the fraction
+field of $R$ is called {\it almost integral over $R$}
+if there exists an element $r \in R$, $r\not = 0$
+such that $rg^n \in R$ for all $n \geq 0$.
+\item The domain $R$ is called {\it completely normal} if every
+almost integral element of the fraction field of $R$ is
+contained in $R$.
+\end{enumerate}
+\end{definition}
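
\noindent
An almost integral element need not be integral. For instance, let
$R = k[x, xy, xy^2, xy^3, \ldots] \subset k[x, y]$ where $k$ is a field.
The fraction field of $R$ contains $y = xy/x$, and $xy^n \in R$ for all
$n \geq 0$, so $y$ is almost integral over $R$. However, $y$ is not
integral over $R$: in a relation $y^d = \sum_{j < d} a_j y^j$ with
$a_j \in R$ the right hand side contains no monomial $y^d$, since every
monomial occurring in every $a_j$ either is constant or is divisible
by $x$. Of course $R$ is not Noetherian.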
+
+\noindent
+The following lemma shows that a Noetherian domain is normal
+if and only if it is completely normal.
+
+\begin{lemma}
+\label{lemma-almost-integral}
+Let $R$ be a domain with fraction field $K$.
+If $u, v \in K$ are almost integral over $R$, then so are
+$u + v$ and $uv$. Any element $g \in K$ which is integral over $R$
+is almost integral over $R$. If $R$ is Noetherian
+then the converse holds as well.
+\end{lemma}
+
+\begin{proof}
+If $ru^n \in R$ for all $n \geq 0$ and
+$v^nr' \in R$ for all $n \geq 0$, then
+$(uv)^nrr'$ and $(u + v)^nrr'$ are in $R$ for
+all $n \geq 0$. Hence the first assertion.
+Suppose $g \in K$ is integral over $R$.
In this case there exists a $d > 0$ such that
+the ring $R[g]$ is generated by $1, g, \ldots, g^d$ as an $R$-module.
+Let $r \in R$ be a common denominator of the elements
$1, g, \ldots, g^d \in K$. It follows that $rR[g] \subset R$,
+and hence $g$ is almost integral over $R$.
+
+\medskip\noindent
+Suppose $R$ is Noetherian and $g \in K$ is almost integral over $R$.
+Let $r \in R$, $r\not = 0$ be as in the definition.
+Then $R[g] \subset \frac{1}{r}R$ as an $R$-module.
+Since $R$ is Noetherian this implies that $R[g]$ is
+finite over $R$. Hence $g$ is integral over $R$, see
+Lemma \ref{lemma-finite-is-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-normal-domain}
+Any localization of a normal domain is normal.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a normal domain, and let $S \subset R$ be
+a multiplicative subset. Suppose $g$ is an element
+of the fraction field of $R$ which is integral over $S^{-1}R$.
+Let $P = x^d + \sum_{j < d} a_j x^j$ be a polynomial
with $a_j \in S^{-1}R$ such that $P(g) = 0$.
Choose $s \in S$ such that $sa_j \in R$ for all $j$.
+Then $sg$ satisfies the monic polynomial
+$x^d + \sum_{j < d} s^{d-j}a_j x^j$ which has coefficients
+$s^{d-j}a_j$ in $R$. Hence $sg \in R$ because $R$ is normal.
+Hence $g \in S^{-1}R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-PID-normal}
+A principal ideal domain is normal.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a principal ideal domain.
+Let $g = a/b$ be an element of the fraction field
+of $R$ integral over $R$. Because $R$ is a principal ideal domain
+we may divide out a common factor of $a$ and $b$
+and assume $(a, b) = R$. In this case, any equation
+$(a/b)^n + r_{n-1} (a/b)^{n-1} + \ldots + r_0 = 0$
+with $r_i \in R$ would imply $a^n \in (b)$. This
+contradicts $(a, b) = R$ unless $b$ is a unit in $R$.
+\end{proof}
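
\noindent
By contrast, a domain such as $\mathbf{Z}[\sqrt{5}]$ is not normal:
the element $\alpha = (1 + \sqrt{5})/2$ lies in the fraction field
$\mathbf{Q}(\sqrt{5})$ and satisfies the monic equation
$$
\alpha^2 - \alpha - 1 = 0,
$$
yet $\alpha \not \in \mathbf{Z}[\sqrt{5}]$.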
+
+\begin{lemma}
+\label{lemma-prepare-polynomial-ring-normal}
+Let $R$ be a domain with fraction field $K$.
+Suppose $f = \sum \alpha_i x^i$ is an
+element of $K[x]$.
+\begin{enumerate}
+\item If $f$ is integral over $R[x]$
+then all $\alpha_i$ are integral over $R$, and
+\item If $f$ is almost integral over $R[x]$
+then all $\alpha_i$ are almost integral over $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We first prove the second statement.
+Write $f = \alpha_0 + \alpha_1 x + \ldots + \alpha_r x^r$
+with $\alpha_r \not = 0$.
+By assumption there exists $h = b_0 + b_1 x + \ldots + b_s x^s \in R[x]$,
+$b_s \not = 0$ such that $f^n h \in R[x]$ for all
+$n \geq 0$. This implies that $b_s \alpha_r^n \in R$
+for all $n \geq 0$. Hence $\alpha_r$ is almost
+integral over $R$. Since the set of almost integral
elements forms a subring (Lemma \ref{lemma-almost-integral}) we deduce that
+$f - \alpha_r x^r = \alpha_0 + \alpha_1 x + \ldots + \alpha_{r - 1} x^{r - 1}$
+is almost integral over $R[x]$. By induction on $r$ we win.
+
+\medskip\noindent
+In order to prove the first statement we will use absolute Noetherian
+reduction. Namely, write $\alpha_i = a_i / b_i$ and
+let $P(t) = t^d + \sum_{j < d} f_j t^j$ be a polynomial
+with coefficients $f_j \in R[x]$ such that $P(f) = 0$.
+Let $f_j = \sum f_{ji}x^i$. Consider the subring
+$R_0 \subset R$ generated by the finite list of elements
+$a_i, b_i, f_{ji}$ of $R$. It is a domain; let
+$K_0$ be its field of fractions. Since $R_0$ is a finite type
+$\mathbf{Z}$-algebra it is Noetherian, see
+Lemma \ref{lemma-obvious-Noetherian}. It is still
+the case that $f \in K_0[x]$ is integral over $R_0[x]$,
+because all the identities in $R$
+among the elements $a_i, b_i, f_{ji}$ also hold in $R_0$.
+By Lemma \ref{lemma-almost-integral} the element
+$f$ is almost integral over $R_0[x]$. By the second statement of
+the lemma, the elements $\alpha_i$ are almost integral
+over $R_0$. And since $R_0$ is Noetherian, they are
+integral over $R_0$, see Lemma \ref{lemma-almost-integral}.
+Of course, then they are integral over $R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-polynomial-domain-normal}
+Let $R$ be a normal domain.
+Then $R[x]$ is a normal domain.
+\end{lemma}
+
+\begin{proof}
+The result is true if $R$ is a field $K$ because
+$K[x]$ is a euclidean domain and hence a principal ideal
+domain and hence normal by Lemma \ref{lemma-PID-normal}.
+Let $g$ be an element of the fraction field of
+$R[x]$ which is integral over $R[x]$. Because $g$
+is integral over $K[x]$ where $K$ is the fraction
+field of $R$ we may write $g = \alpha_d x^d + \alpha_{d-1}x^{d-1} +
+\ldots + \alpha_0$ with $\alpha_i \in K$.
+By Lemma \ref{lemma-prepare-polynomial-ring-normal}
+the elements $\alpha_i$ are integral over $R$ and
+hence are in $R$.
+\end{proof}
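
\noindent
Iterating the lemma we see for instance that
$\mathbf{Z}[x_1, \ldots, x_n]$ and $k[x_1, \ldots, x_n]$, for $k$ a
field, are normal domains (the base cases hold by
Lemma \ref{lemma-PID-normal}).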
+
+\begin{lemma}
+\label{lemma-power-series-over-Noetherian-normal-domain}
+Let $R$ be a Noetherian normal domain. Then $R[[x]]$ is
+a Noetherian normal domain.
+\end{lemma}
+
+\begin{proof}
+The power series ring is Noetherian by
+Lemma \ref{lemma-Noetherian-power-series}.
+Let $f, g \in R[[x]]$ be nonzero elements such that
+$w = f/g$ is integral over $R[[x]]$.
+Let $K$ be the fraction field of $R$. Since the ring of Laurent series
+$K((x)) = K[[x]][1/x]$ is a field, we can write
$w = a_n x^n + a_{n + 1} x^{n + 1} + \ldots$
+for some $n \in \mathbf{Z}$, $a_i \in K$, and $a_n \not = 0$.
+By Lemma \ref{lemma-almost-integral} we see there exists a
+nonzero element $h = b_m x^m + b_{m + 1} x^{m + 1} + \ldots$
+in $R[[x]]$ with $b_m \not = 0$ such that
+$w^e h \in R[[x]]$ for all $e \geq 1$. We conclude that $n \geq 0$ and that
+$b_m a_n^e \in R$ for all $e \geq 1$.
+Since $R$ is Noetherian this implies that $a_n \in R$ by
+the same lemma. Now, if $a_n, a_{n + 1}, \ldots, a_{N - 1} \in R$,
+then we can apply the same argument to
+$w - a_n x^n - \ldots - a_{N - 1} x^{N - 1} = a_N x^N + \ldots$.
+In this way we see that all $a_i \in R$ and the lemma is proved.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-normality-is-local}
+Let $R$ be a domain. The following are equivalent:
+\begin{enumerate}
+\item The domain $R$ is a normal domain,
+\item for every prime $\mathfrak p \subset R$ the local ring
+$R_{\mathfrak p}$ is a normal domain, and
+\item for every maximal ideal $\mathfrak m$ the ring $R_{\mathfrak m}$
+is a normal domain.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows easily from the fact that for any domain $R$ we have
+$$
+R = \bigcap\nolimits_{\mathfrak m} R_{\mathfrak m}
+$$
+inside the fraction field of $R$. Namely, if $g$ is an element of
+the right hand side then the ideal $I = \{x \in R \mid xg \in R\}$
+is not contained in any maximal ideal $\mathfrak m$, whence $I = R$.
+\end{proof}
+
+\noindent
+Lemma \ref{lemma-normality-is-local} shows that the following definition
+is compatible with Definition \ref{definition-domain-normal}. (It is the
+definition from EGA -- see \cite[IV, 5.13.5 and 0, 4.1.4]{EGA}.)
+
+\begin{definition}
+\label{definition-ring-normal}
+A ring $R$ is called {\it normal} if for every prime
+$\mathfrak p \subset R$ the localization $R_{\mathfrak p}$ is
+a normal domain (see Definition \ref{definition-domain-normal}).
+\end{definition}
+
+\noindent
+Note that a normal ring is a reduced ring, as $R$ is a subring of the product
+of its localizations at all primes (see for example
+Lemma \ref{lemma-characterize-zero-local}).
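
\noindent
For example, the product $k \times k$ of two copies of a field $k$ is
a normal ring which is not a domain: its localizations at the two
primes $k \times 0$ and $0 \times k$ are both equal to $k$.
On the other hand, $R = k[x, y]/(xy)$ is reduced but not normal: in
the localization of $R$ at the maximal ideal $(x, y)$ the images of
$x$ and $y$ are nonzero while their product is zero, so this local
ring is not a domain.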
+
+\begin{lemma}
+\label{lemma-normal-ring-integrally-closed}
+A normal ring is integrally closed in its total ring of fractions.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a normal ring. Let $x \in Q(R)$ be an element of the total ring
of fractions of $R$ integral over $R$. Set $I = \{f \in R \mid fx \in R\}$. Let
+$\mathfrak p \subset R$ be a prime. As $R \subset R_{\mathfrak p}$ is
+flat we see that $R_{\mathfrak p} \subset Q(R) \otimes_R R_{\mathfrak p}$. As
+$R_{\mathfrak p}$ is a normal domain we see that $x \otimes 1$ is an element of
+$R_{\mathfrak p}$. Hence we can find $a, f \in R$, $f \not \in \mathfrak p$
+such that $x \otimes 1 = a \otimes 1/f$. This means that $fx - a$ maps to
+zero in $Q(R) \otimes_R R_{\mathfrak p} = Q(R)_{\mathfrak p}$, which
+in turn means that there exists an $f' \in R$, $f' \not \in \mathfrak p$
+such that $f'fx = f'a$ in $R$. In other words, $ff' \in I$. Thus $I$
+is an ideal which isn't contained in any of the prime ideals of $R$, i.e.,
+$I = R$ and $x \in R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization-normal-ring}
+A localization of a normal ring is a normal ring.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-polynomial-ring-normal}
+Let $R$ be a normal ring. Then $R[x]$ is a normal ring.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q$ be a prime of $R[x]$. Set $\mathfrak p = R \cap \mathfrak q$.
+Then we see that $R_{\mathfrak p}[x]$ is a normal domain by
+Lemma \ref{lemma-polynomial-domain-normal}.
+Hence $(R[x])_{\mathfrak q}$ is a normal domain by
+Lemma \ref{lemma-localize-normal-domain}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-product-normal}
+A finite product of normal rings is normal.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that the product of two normal rings, say $R$ and $S$, is
+normal. By Lemma \ref{lemma-disjoint-decomposition} the prime ideals of
+$R\times S$ are of the form $\mathfrak{p}\times S$ and $R\times
+\mathfrak{q}$, where $\mathfrak{p}$ and $\mathfrak{q}$ are primes of $R$
+and $S$ respectively. Localization yields
+$(R\times S)_{\mathfrak{p}\times S}=R_{\mathfrak{p}}$ which is a normal domain
+by assumption. Similarly for $S$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-reduced-ring-normal}
+Let $R$ be a ring. Assume $R$ is reduced and has finitely many
+minimal primes. Then the following are equivalent:
+\begin{enumerate}
+\item $R$ is a normal ring,
+\item $R$ is integrally closed in its total ring of fractions, and
+\item $R$ is a finite product of normal domains.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implications (1) $\Rightarrow$ (2) and
+(3) $\Rightarrow$ (1) hold in general,
+see Lemmas \ref{lemma-normal-ring-integrally-closed} and
+\ref{lemma-finite-product-normal}.
+
+\medskip\noindent
+Let $\mathfrak p_1, \ldots, \mathfrak p_n$ be the minimal primes of $R$.
+By Lemmas \ref{lemma-reduced-ring-sub-product-fields} and
+\ref{lemma-total-ring-fractions-no-embedded-points} we have
+$Q(R) = R_{\mathfrak p_1} \times \ldots \times R_{\mathfrak p_n}$, and
+by Lemma \ref{lemma-minimal-prime-reduced-ring} each factor is a field.
+Denote $e_i = (0, \ldots, 0, 1, 0, \ldots, 0)$ the $i$th idempotent
+of $Q(R)$.
+
+\medskip\noindent
+If $R$ is integrally closed in $Q(R)$, then it contains in particular
+the idempotents $e_i$, and we see that $R$ is a product of $n$
+domains (see Sections \ref{section-connected-components} and
+\ref{section-tilde-module-sheaf}). Each factor is of the form
+$R/\mathfrak p_i$ with field of fractions $R_{\mathfrak p_i}$.
+By Lemma \ref{lemma-finite-product-integral-closure} each map
+$R/\mathfrak p_i \to R_{\mathfrak p_i}$ is integrally closed.
+Hence $R$ is a finite product of normal domains.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-normal-ring}
+Let $(R_i, \varphi_{ii'})$ be a directed system
+(Categories, Definition \ref{definition-directed-system})
+of rings. If each $R_i$ is a normal ring so is
+$R = \colim_i R_i$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p \subset R$ be a prime ideal.
+Set $\mathfrak p_i = R_i \cap \mathfrak p$ (usual abuse of notation).
+Then we see that
+$R_{\mathfrak p} = \colim_i (R_i)_{\mathfrak p_i}$.
+Since each $(R_i)_{\mathfrak p_i}$ is a normal domain we
+reduce to proving the statement of the lemma for normal
+domains. If $a, b \in R$ and $a/b$ satisfies a monic polynomial
+$P(T) \in R[T]$, then we can find a (sufficiently large) $i \in I$
such that $a, b$ are the images of elements $a_i, b_i \in R_i$ with
$b_i \not = 0$, such that $P$ comes from a
monic polynomial $P_i \in R_i[T]$, and such that $P_i(a_i/b_i) = 0$.
Since $R_i$ is normal we
+see $a_i/b_i \in R_i$ and hence also $a/b \in R$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Going down for integral over normal}
+\label{section-going-down-integral-over-normal}
+
+\noindent
+We first play around a little bit with the notion of elements
+integral over an ideal, and then we prove the theorem referred
+to in the section title.
+
+\begin{definition}
+\label{definition-integral-over-ideal}
+Let $\varphi : R \to S$ be a ring map.
+Let $I \subset R$ be an ideal.
+We say an element $g \in S$ is
+{\it integral over $I$} if
+there exists a monic
+polynomial $P = x^d + \sum_{j < d} a_j x^j$
+with coefficients $a_j \in I^{d-j}$ such
+that $P^\varphi(g) = 0$ in $S$.
+\end{definition}
+
+\noindent
+This is mostly used when $\varphi = \text{id}_R : R \to R$.
+In this case the set $I'$ of elements integral over $I$ is called
+the {\it integral closure of $I$}. We will see that $I'$ is
+an ideal of $R$ (and of course $I \subset I'$).
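
\noindent
For example, let $R = k[x, y]$ and $I = (x^2, y^2)$. The element $xy$
is integral over $I$ because it satisfies the monic equation
$$
(xy)^2 - x^2y^2 = 0
$$
whose constant term $-x^2y^2$ lies in $I^2$. On the other hand
$xy \not \in I$, so the integral closure of $I$ is strictly bigger
than $I$.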
+
+\begin{lemma}
+\label{lemma-characterize-integral-ideal}
+Let $\varphi : R \to S$ be a ring map.
+Let $I \subset R$ be an ideal.
+Let $A = \sum I^nt^n \subset R[t]$ be the
+subring of the polynomial ring
+generated by $R \oplus It \subset R[t]$.
+An element $s \in S$ is integral over $I$ if
+and only if the element $st \in S[t]$
+is integral over $A$.
+\end{lemma}
+
+\begin{proof}
+Suppose $st$ is integral over $A$.
+Let $P = x^d + \sum_{j < d} a_j x^j$
+be a monic polynomial with coefficients in $A$
+such that $P^\varphi(st) = 0$. Let $a_j' \in A$
be the degree $d-j$ part of $a_j$, in other
+words $a_j' = a_j'' t^{d-j}$ with $a_j'' \in I^{d-j}$.
+For degree reasons we still have
+$(st)^d + \sum_{j < d} \varphi(a_j'') t^{d-j} (st)^j = 0$.
+Hence we see that $s$ is integral over $I$.
+
+\medskip\noindent
+Suppose that $s$ is integral over $I$.
+Say $P = x^d + \sum_{j < d} a_j x^j$
+with $a_j \in I^{d-j}$. Then we immediately find a
+polynomial $Q = x^d + \sum_{j < d} (a_j t^{d-j}) x^j$
+with coefficients in $A$ which proves that
+$st$ is integral over $A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-over-ideal-is-submodule}
+Let $\varphi : R \to S$ be a ring map.
+Let $I \subset R$ be an ideal.
+The set of elements of $S$ which are integral
+over $I$ form a $R$-submodule of $S$.
+Furthermore, if $s \in S$ is integral over
+$R$, and $s'$ is integral over $I$, then
+$ss'$ is integral over $I$.
+\end{lemma}
+
+\begin{proof}
+Closure under addition is clear from the
+characterization of Lemma \ref{lemma-characterize-integral-ideal}.
+Any element $s \in S$ which is integral over
+$R$ corresponds to the degree $0$ element $s$ of $S[x]$
+which is integral over $A$ (because $R \subset A$).
+Hence we see that multiplication by $s$ on $S[x]$
+preserves the property of being integral over $A$,
+by Lemma \ref{lemma-integral-closure-is-ring}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-integral-over-ideal}
+Suppose $\varphi : R \to S$ is integral.
+Suppose $I \subset R$ is an ideal.
+Then every element of $IS$ is integral over $I$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-integral-over-ideal-is-submodule}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-polynomials-divide}
+Let $K$ be a field. Let $n, m \in \mathbf{N}$ and
+$a_0, \ldots, a_{n - 1}, b_0, \ldots, b_{m - 1} \in K$.
+If the polynomial $x^n + a_{n - 1}x^{n - 1} + \ldots + a_0$
+divides the polynomial $x^m + b_{m - 1} x^{m - 1} + \ldots + b_0$
+in $K[x]$ then
+\begin{enumerate}
+\item $a_0, \ldots, a_{n - 1}$ are integral over any subring
+$R_0$ of $K$ containing the elements $b_0, \ldots, b_{m - 1}$, and
+\item each $a_i$ lies in $\sqrt{(b_0, \ldots, b_{m-1})R}$
+for any subring $R \subset K$ containing the elements
+$a_0, \ldots, a_{n - 1}, b_0, \ldots, b_{m - 1}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $L/K$ be a field extension such that we can write
+$x^m + b_{m - 1} x^{m - 1} + \ldots + b_0 =
+\prod_{i = 1}^m (x - \beta_i)$ with $\beta_i \in L$.
+See Fields, Section \ref{fields-section-splitting-fieds}.
+Each $\beta_i$ is integral over $R_0$.
Since the roots of $x^n + a_{n - 1}x^{n - 1} + \ldots + a_0$ in $L$
are among $\beta_1, \ldots, \beta_m$, each $a_i$ is a polynomial with
integer coefficients in the $\beta_j$, hence integral over $R_0$ as well
(use Lemma \ref{lemma-integral-closure-is-ring}).
+
+\medskip\noindent
+Choose $c_0, \ldots, c_{m - n - 1} \in K$ such that
+$$
+\begin{matrix}
+x^m + b_{m - 1} x^{m - 1} + \ldots + b_0 = \\
+(x^n + a_{n - 1}x^{n - 1} + \ldots + a_0)
+(x^{m - n} + c_{m - n - 1}x^{m - n - 1}+ \ldots + c_0).
+\end{matrix}
+$$
+By part (1) the elements $c_i$ are integral over $R$. Consider
+the integral extension
+$$
+R \subset R' = R[c_0, \ldots, c_{m - n - 1}] \subset K
+$$
+By Lemmas \ref{lemma-integral-overring-surjective}
+and \ref{lemma-surjective-spec-radical-ideal}
+we see that $R \cap \sqrt{(b_0, \ldots, b_{m - 1})R'}
+= \sqrt{(b_0, \ldots, b_{m - 1})R}$. Thus we may replace
+$R$ by $R'$ and assume $c_i \in R$.
+Dividing out the radical $\sqrt{(b_0, \ldots, b_{m - 1})}$
+we get a reduced ring $\overline{R}$.
+We have to show that the images $\overline{a}_i \in \overline{R}$
+are zero. And in
+$\overline{R}[x]$ we have the relation
+$$
+\begin{matrix}
+x^m = x^m + \overline{b}_{m - 1} x^{m - 1} + \ldots + \overline{b}_0 = \\
+(x^n + \overline{a}_{n - 1}x^{n - 1} + \ldots + \overline{a}_0)
+(x^{m - n} + \overline{c}_{m - n - 1}x^{m - n - 1}+ \ldots + \overline{c}_0).
+\end{matrix}
+$$
+It is easy to see that this implies $\overline{a}_i = 0$ for all $i$. Indeed
+by Lemma \ref{lemma-minimal-prime-reduced-ring} the localization of
+$\overline{R}$ at a minimal prime $\mathfrak{p}$ is a field and
+$\overline{R}_{\mathfrak p}[x]$ a UFD. Thus
+$f = x^n + \sum \overline{a}_i x^i$
+is associated to $x^n$ and since $f$ is monic $f = x^n$
+in $\overline{R}_{\mathfrak p}[x]$.
+Then there exists an $s \in \overline{R}$, $s \not\in \mathfrak p$
+such that $s(f - x^n) = 0$. Therefore all $\overline{a}_i$ lie
+in $\mathfrak p$ and we conclude by
+Lemma \ref{lemma-reduced-ring-sub-product-fields}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-minimal-polynomial-normal-domain}
+Let $R \subset S$ be an inclusion of domains.
+Assume $R$ is normal. Let $g \in S$ be integral
+over $R$. Then the minimal polynomial of $g$
+has coefficients in $R$.
+\end{lemma}
+
+\begin{proof}
+Let $P = x^m + b_{m-1} x^{m-1} + \ldots + b_0$
+be a polynomial with coefficients in $R$
+such that $P(g) = 0$. Let $Q = x^n + a_{n-1}x^{n-1} + \ldots + a_0$
+be the minimal polynomial for $g$ over the fraction field
+$K$ of $R$. Then $Q$ divides $P$ in $K[x]$. By Lemma
+\ref{lemma-polynomials-divide} we see the $a_i$ are
+integral over $R$. Since $R$ is normal this
+means they are in $R$.
+\end{proof}
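
\noindent
Normality of $R$ is needed here. For instance, take
$R = \mathbf{Z}[2i] \subset S = \mathbf{Z}[i]$. The element $g = i$
is integral over $R$ as $g^2 + 1 = 0$, but its minimal polynomial over
the fraction field $\mathbf{Q}(i)$ of $R$ is $x - i$, whose constant
coefficient $-i$ does not lie in $R$.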
+
+\begin{proposition}
+\label{proposition-going-down-normal-integral}
+Let $R \subset S$ be an inclusion of domains.
+Assume $R$ is normal and $S$ integral over $R$.
+Let $\mathfrak p \subset \mathfrak p' \subset R$
+be primes. Let $\mathfrak q'$ be a prime of $S$
+with $\mathfrak p' = R \cap \mathfrak q'$.
+Then there exists a prime $\mathfrak q$
+with $\mathfrak q \subset \mathfrak q'$
+such that $\mathfrak p = R \cap \mathfrak q$. In other words:
+the going down property holds for $R \to S$, see
+Definition \ref{definition-going-up-down}.
+\end{proposition}
+
+\begin{proof}
+Let $\mathfrak p$, $\mathfrak p'$ and $\mathfrak q'$
+be as in the statement. We have to show there is a prime
+$\mathfrak q$, with $\mathfrak q \subset \mathfrak q'$ and
+$R \cap \mathfrak q = \mathfrak p$. This is the same
+as finding a prime of
+$S_{\mathfrak q'}$ mapping to $\mathfrak p$.
+According to Lemma \ref{lemma-in-image} we have to show
+that $\mathfrak p S_{\mathfrak q'} \cap R
+= \mathfrak p$. Pick $z \in \mathfrak p S_{\mathfrak q'} \cap R$.
+We may write $z = y/g$ with $y \in \mathfrak pS$ and
+$g \in S$, $g \not\in \mathfrak q'$. Written
+differently we have $zg = y$.
+
+\medskip\noindent
+By Lemma \ref{lemma-integral-integral-over-ideal}
+there exists a monic polynomial
+$P = x^m + b_{m-1} x^{m-1} + \ldots + b_0$
+with $b_i \in \mathfrak p$ such that $P(y) = 0$.
+
+\medskip\noindent
+By Lemma \ref{lemma-minimal-polynomial-normal-domain}
+the minimal polynomial of $g$ over $K$ has coefficients
+in $R$. Write it as $Q = x^n + a_{n-1} x^{n-1} + \ldots
++ a_0$. Note that not all $a_i$, $i = n-1, \ldots, 0$
+are in $\mathfrak p$ since that would imply
+$g^n = \sum_{j < n} a_j g^j \in \mathfrak pS
+\subset \mathfrak p'S
+\subset \mathfrak q'$
+which is a contradiction.
+
+\medskip\noindent
+Since $y = zg$ we see immediately from the above
+that $Q' = x^n + za_{n-1} x^{n-1} + \ldots + z^{n}a_0$
+is the minimal polynomial for $y$. Hence
+$Q'$ divides $P$ and by Lemma \ref{lemma-polynomials-divide}
+we see that $z^ja_{n - j} \in \sqrt{(b_0, \ldots, b_{m-1})}
+\subset \mathfrak p$, $j = 1, \ldots, n$.
+Because not all $a_i$, $i = n-1, \ldots, 0$
+are in $\mathfrak p$ we conclude $z \in \mathfrak p$
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Flat modules and flat ring maps}
+\label{section-flat}
+
+\noindent
+One often used result is that if $M = \colim_{i\in \mathcal{I}} M_i$
+is a colimit of $R$-modules and if $N$ is an $R$-module then
+$$
+M \otimes N
+=
+\colim_{i\in \mathcal{I}} M_i \otimes_R N,
+$$
+see Lemma \ref{lemma-tensor-products-commute-with-limits}.
+This property is usually expressed by saying
+that {\it $\otimes$ commutes with colimits}.
+Another often used result is that if $0 \to N_1 \to N_2 \to N_3 \to 0$
+is an exact sequence and if $M$ is any $R$-module, then
+$$
+M \otimes_R N_1
+\to
+M \otimes_R N_2
+\to
+M \otimes_R N_3
+\to
+0
+$$
+is still exact, see Lemma \ref{lemma-tensor-product-exact}.
+Both of these properties tell us that the functor
+$N \mapsto M \otimes_R N$ {\it is right exact}.
+See Categories, Section \ref{categories-section-exact-functor}
+and Homology, Section \ref{homology-section-functors}.
+An $R$-module $M$ is flat if $N \mapsto N \otimes_R M$ is also left exact,
+i.e., if it is exact. Here is the precise definition.
+
+\begin{definition}
+\label{definition-flat}
+Let $R$ be a ring.
+\begin{enumerate}
+\item An $R$-module $M$ is called {\it flat} if whenever
+$N_1 \to N_2 \to N_3$ is an exact sequence of $R$-modules
+the sequence $M \otimes_R N_1 \to M \otimes_R N_2 \to M \otimes_R N_3$
+is exact as well.
+\item An $R$-module $M$ is called {\it faithfully flat} if the
+complex of $R$-modules
+$N_1 \to N_2 \to N_3$ is exact if and only if
+the sequence $M \otimes_R N_1 \to M \otimes_R N_2 \to M \otimes_R N_3$
+is exact.
+\item A ring map $R \to S$ is called {\it flat} if
+$S$ is flat as an $R$-module.
+\item A ring map $R \to S$ is called {\it faithfully flat} if
+$S$ is faithfully flat as an $R$-module.
+\end{enumerate}
+\end{definition}
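
\noindent
For example, $\mathbf{Z}/2\mathbf{Z}$ is not flat as a
$\mathbf{Z}$-module: tensoring the exact sequence
$0 \to \mathbf{Z} \xrightarrow{2} \mathbf{Z}$ with
$\mathbf{Z}/2\mathbf{Z}$ yields the zero map
$\mathbf{Z}/2\mathbf{Z} \to \mathbf{Z}/2\mathbf{Z}$,
which is not injective. On the other hand, any localization
$R \to S^{-1}R$ is flat, as localization is exact.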
+
+\noindent
+Here is an example of how you can use the flatness condition.
+
+\begin{lemma}
+\label{lemma-flat-intersect-ideals}
+Let $R$ be a ring. Let $I, J \subset R$ be ideals. Let $M$ be a flat
+$R$-module. Then $IM \cap JM = (I \cap J)M$.
+\end{lemma}
+
+\begin{proof}
+Consider the exact sequence $0 \to I \cap J \to R \to R/I \oplus R/J$.
+Tensoring with the flat module $M$ we obtain an exact sequence
+$$
+0 \to (I \cap J) \otimes_R M \to M \to M/IM \oplus M/JM
+$$
+Since the kernel of $M \to M/IM \oplus M/JM$ is equal to
+$IM \cap JM$ we conclude.
+\end{proof}
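\noindent
The flatness hypothesis in the lemma cannot be dropped, as the following standard example (an illustration only) shows.

```latex
\begin{example}
Let $R = k[x, y]$ for a field $k$, let $I = (x)$ and $J = (y)$, and let
$M = R/(x - y)$. Then $M \cong k[t]$ with both $x$ and $y$ acting as
multiplication by $t$; note that $M$ is not flat over $R$, being a
torsion module over a domain. Here $IM \cap JM = tk[t]$, while
$(I \cap J)M = (xy)M = t^2 k[t]$, so $IM \cap JM \not = (I \cap J)M$.
\end{example}
```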
+
+\begin{lemma}
+\label{lemma-colimit-flat}
+Let $R$ be a ring. Let $\{M_i, \varphi_{ii'}\}$ be a directed system of
+flat $R$-modules. Then $\colim_i M_i$ is a flat $R$-module.
+\end{lemma}
+
+\begin{proof}
+This follows as $\otimes$ commutes with colimits and because
+directed colimits are exact, see
+Lemma \ref{lemma-directed-colimit-exact}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-flat}
+A composition of (faithfully) flat ring maps is
+(faithfully) flat.
+If $R \to R'$ is (faithfully) flat, and $M'$ is a
+(faithfully) flat $R'$-module, then $M'$ is a
+(faithfully) flat $R$-module.
+\end{lemma}
+
+\begin{proof}
+The first statement of the lemma is a particular case of the
+second, so it is clearly enough to prove the latter. Let
+$R \to R'$ be a flat ring map, and $M'$ a flat $R'$-module.
+We need to prove that $M'$ is a flat $R$-module. Let
+$N_1 \to N_2 \to N_3$ be an exact complex of $R$-modules.
+Then, the complex $R' \otimes_R N_1 \to
+R' \otimes_R N_2 \to R' \otimes_R N_3$ is exact (since $R'$
+is flat as an $R$-module), and so the complex
+$M' \otimes_{R'} \left(R' \otimes_R N_1\right)
+\to M' \otimes_{R'} \left(R' \otimes_R N_2\right)
+\to M' \otimes_{R'} \left(R' \otimes_R N_3\right)$ is
+exact (since $M'$ is a flat $R'$-module). Since
+$M' \otimes_{R'} \left(R' \otimes_R N\right)
+\cong \left(M' \otimes_{R'} R'\right) \otimes_R N
+\cong M' \otimes_R N$ for any $R$-module $N$ functorially
+(by Lemmas \ref{lemma-tensor-with-bimodule} and
+\ref{lemma-flip-tensor-product}), this complex is isomorphic
+to the complex
+$M' \otimes_R N_1 \to M' \otimes_R N_2 \to M' \otimes_R N_3$,
+which is therefore also exact. This shows that $M'$ is a flat
+$R$-module. Tracing this argument backwards, we can show
+that if $R \to R'$ is faithfully flat, and if $M'$ is
+faithfully flat as an $R'$-module, then $M'$ is faithfully
+flat as an $R$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat}
+Let $M$ be an $R$-module. The following are equivalent:
+\begin{enumerate}
+\item
+\label{item-flat}
+$M$ is flat over $R$.
+\item
+\label{item-injective}
+for every injection of $R$-modules $N \subset N'$
+the map $N \otimes_R M \to N'\otimes_R M$ is injective.
+\item
+\label{item-f-ideal}
+for every ideal $I \subset R$ the map
+$I \otimes_R M \to R \otimes_R M = M$ is injective.
+\item
+\label{item-ffg-ideal}
+for every finitely generated ideal $I \subset R$
+the map $I \otimes_R M \to R \otimes_R M = M$ is injective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implications (\ref{item-flat}) implies (\ref{item-injective})
+implies (\ref{item-f-ideal}) implies (\ref{item-ffg-ideal}) are all
+trivial. Thus we prove (\ref{item-ffg-ideal}) implies (\ref{item-flat}).
+Suppose that $N_1 \to N_2 \to N_3$ is exact.
+Let $K = \Ker(N_2 \to N_3)$ and $Q = \Im(N_2 \to N_3)$.
+Then we get maps
+$$
+N_1 \otimes_R M \to
+K \otimes_R M \to
+N_2 \otimes_R M \to
+Q \otimes_R M \to
+N_3 \otimes_R M
+$$
+Observe that the first and third arrows are surjective. Thus if we show
+that the second and fourth arrows are injective, then we are
+done\footnote{Here is the argument in more detail:
+Assume that we know that the second and fourth arrows are
+injective. Lemma \ref{lemma-tensor-product-exact} (applied
+to the exact sequence $K \to N_2 \to Q \to 0$) yields that
+the sequence $K \otimes_R M \to N_2 \otimes_R M \to
+Q \otimes_R M \to 0$ is exact. Hence,
+$\Ker \left(N_2 \otimes_R M \to Q \otimes_R M\right)
+= \Im \left(K \otimes_R M \to N_2 \otimes_R M\right)$.
+Since
+$\Im \left(K \otimes_R M \to N_2 \otimes_R M\right)
+= \Im \left(N_1 \otimes_R M \to N_2 \otimes_R M\right)$
+(due to the surjectivity of $N_1 \otimes_R M \to
+K \otimes_R M$) and
+$\Ker \left(N_2 \otimes_R M \to Q \otimes_R M\right)
+= \Ker \left(N_2 \otimes_R M \to N_3 \otimes_R M\right)$
+(due to the injectivity of $Q \otimes_R M \to
+N_3 \otimes_R M$), this becomes
+$\Ker \left(N_2 \otimes_R M \to N_3 \otimes_R M\right)
+= \Im \left(N_1 \otimes_R M \to N_2 \otimes_R M\right)$,
+which shows that the functor $- \otimes_R M$ is exact,
+whence $M$ is flat.}.
+Hence it suffices to show that $- \otimes_R M$ transforms
+injective $R$-module maps into injective $R$-module maps.
+
+\medskip\noindent
+Assume $K \to N$ is an injective $R$-module map and
+let $x \in \Ker(K \otimes_R M \to N \otimes_R M)$.
+We have to show that $x$ is zero.
+The $R$-module $K$ is the union of its finite
+$R$-submodules; hence, $K \otimes_R M$ is
+the colimit of $R$-modules of the form
+$K_i \otimes_R M$ where $K_i$ runs over all finite
+$R$-submodules of $K$
+(because tensor product commutes with colimits).
+Thus, for some $i$ our $x$ comes from an element
+$x_i \in K_i \otimes_R M$. Thus we may assume that $K$
+is a finite $R$-module. Assume this. We regard the
+injection $K \to N$ as an inclusion, so that
+$K \subset N$.
+
+\medskip\noindent
+The $R$-module $N$ is the union of its finite
+$R$-submodules that contain $K$. Hence, $N \otimes_R M$
+is the colimit of $R$-modules of the form
+$N_i \otimes_R M$ where $N_i$ runs over all finite
+$R$-submodules of $N$ that contain $K$
+(again since tensor product commutes with colimits).
+Notice that this is a colimit over a directed system
+(since the sum of two finite submodules of $N$ is
+again finite).
+Hence, (by Lemma \ref{lemma-zero-directed-limit})
+the element $x \in K \otimes_R M$ maps to
+zero in at least one of these $R$-modules
+$N_i \otimes_R M$ (since $x$ maps to zero
+in $N \otimes_R M$).
+Thus we may assume $N$ is a finite $R$-module.
+
+\medskip\noindent
+Assume $N$ is a finite $R$-module. Write $N = R^{\oplus n}/L$ and $K = L'/L$
+for some $L \subset L' \subset R^{\oplus n}$.
+For any $R$-submodule $G \subset R^{\oplus n}$,
+we have a canonical map $G \otimes_R M \to M^{\oplus n}$
+obtained by composing
$G \otimes_R M \to R^{\oplus n} \otimes_R M = M^{\oplus n}$.
+It suffices to prove that $L \otimes_R M \to M^{\oplus n}$
+and $L' \otimes_R M \to M^{\oplus n}$ are injective.
+Namely, if so, then we see that
$K \otimes_R M = (L' \otimes_R M)/(L \otimes_R M) \to M^{\oplus n}/(L \otimes_R M)$
+is injective too\footnote{This becomes obvious if we
+identify $L' \otimes_R M$ and $L \otimes_R M$ with
+submodules of $M^{\oplus n}$ (which is legitimate since
+the maps $L \otimes_R M \to M^{\oplus n}$
+and $L' \otimes_R M \to M^{\oplus n}$ are injective and
commute with the obvious map $L \otimes_R M \to L' \otimes_R M$).}.
+
+\medskip\noindent
+Thus it suffices to show that $L \otimes_R M \to M^{\oplus n}$
+is injective when $L \subset R^{\oplus n}$ is an $R$-submodule.
We do this by induction on $n$; the base case $n = 1$ is handled below.
+For the induction step assume $n > 1$ and set
+$L' = L \cap R \oplus 0^{\oplus n - 1}$. Then $L'' = L/L'$ is a submodule
+of $R^{\oplus n - 1}$. We obtain a diagram
+$$
+\xymatrix{
+&
+L' \otimes_R M \ar[r] \ar[d] &
+L \otimes_R M \ar[r] \ar[d] &
+L'' \otimes_R M \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+M \ar[r] &
+M^{\oplus n} \ar[r] &
+M^{\oplus n - 1} \ar[r] & 0
+}
+$$
+By induction hypothesis and the base case the left and right vertical
+arrows are injective. The rows are exact. It follows that the middle vertical
+arrow is injective too.
+
+\medskip\noindent
+The base case of the induction above is when $L \subset R$ is an ideal.
+In other words, we have to show that $I \otimes_R M \to M$ is injective
+for any ideal $I$ of $R$. We know this is true when $I$ is finitely
+generated. However, $I = \bigcup I_\alpha$ is the union of the
+finitely generated ideals $I_\alpha$ contained in it. In other words,
+$I = \colim I_\alpha$. Since $\otimes$ commutes with colimits we see that
+$I \otimes_R M = \colim I_\alpha \otimes_R M$ and since all
+the morphisms $I_\alpha \otimes_R M \to M$ are injective by
+assumption, the same is true for $I \otimes_R M \to M$.
+\end{proof}
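\noindent
Criterion (\ref{item-ffg-ideal}) is often the easiest to verify. As an illustration (not used in the sequel), consider modules over $\mathbf{Z}$.

```latex
\begin{example}
Let $R = \mathbf{Z}$ and let $M$ be a $\mathbf{Z}$-module. Every
nonzero ideal has the form $I = (n)$ with $n \geq 1$, and
$I \cong \mathbf{Z}$ as a $\mathbf{Z}$-module, so the map
$I \otimes_{\mathbf{Z}} M \to M$ of Lemma \ref{lemma-flat} identifies
with multiplication by $n$ on $M$. Hence $M$ is flat over $\mathbf{Z}$
if and only if multiplication by $n$ is injective on $M$ for every
$n \geq 1$, i.e., if and only if $M$ is torsion free.
\end{example}
```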
+
+\begin{lemma}
+\label{lemma-colimit-rings-flat}
+Let $\{R_i, \varphi_{ii'}\}$ be a system of rings over the directed set $I$.
+Let $R = \colim_i R_i$.
+\begin{enumerate}
+\item If $M$ is an $R$-module such that $M$ is flat as an $R_i$-module
+for all $i$, then $M$ is flat as an $R$-module.
+\item For $i \in I$ let $M_i$ be a flat $R_i$-module and
+for $i' \geq i$ let $f_{ii'} : M_i \to M_{i'}$ be a $\varphi_{ii'}$-linear
+map such that $f_{i' i''} \circ f_{i i'} = f_{i i''}$. Then
+$M = \colim_{i \in I} M_i$ is a flat $R$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is a special case of part (2) with $M_i = M$ for all $i$
+and $f_{i i'} = \text{id}_M$. Proof of (2).
+Let $\mathfrak a \subset R$ be a finitely generated ideal. By
+Lemma \ref{lemma-flat}
+it suffices to show that $\mathfrak a \otimes_R M \to M$ is
+injective. We can find an $i \in I$ and a finitely generated ideal
+$\mathfrak a' \subset R_i$ such that $\mathfrak a = \mathfrak a'R$.
+Then $\mathfrak a = \colim_{i' \geq i} \mathfrak a'R_{i'}$.
+Since $\otimes$ commutes with colimits the map $\mathfrak a \otimes_R M \to M$
+is the colimit of the maps
+$$
+\mathfrak a'R_{i'} \otimes_{R_{i'}} M_{i'} \longrightarrow M_{i'}
+$$
+These maps are all injective by assumption. Since colimits over $I$
+are exact by Lemma \ref{lemma-directed-colimit-exact} we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-base-change}
+Suppose that $M$ is (faithfully) flat over $R$, and that $R \to R'$
+is a ring map. Then $M \otimes_R R'$ is (faithfully) flat over $R'$.
+\end{lemma}
+
+\begin{proof}
+For any $R'$-module $N$ we have a canonical
+isomorphism $N \otimes_{R'} (R'\otimes_R M)
+= N \otimes_R M$. Hence the desired exactness properties of the functor
+$-\otimes_{R'}(R'\otimes_R M)$ follow from
+the corresponding exactness properties of the functor $-\otimes_R M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flatness-descends}
+Let $R \to R'$ be a faithfully flat ring map.
+Let $M$ be a module over $R$, and set $M' = R' \otimes_R M$.
+Then $M$ is flat over $R$ if and only if $M'$ is flat over $R'$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-flat-base-change} we see that if $M$ is flat
+then $M'$ is flat. For the converse, suppose that $M'$ is flat.
+Let $N_1 \to N_2 \to N_3$ be an exact sequence of $R$-modules.
+We want to show that $N_1 \otimes_R M \to N_2 \otimes_R M \to N_3 \otimes_R M$
+is exact. We know that
+$N_1 \otimes_R R' \to N_2 \otimes_R R' \to N_3 \otimes_R R'$ is
+exact, because $R \to R'$ is flat. Flatness of $M'$ implies that
+$N_1 \otimes_R R' \otimes_{R'} M'
+\to N_2 \otimes_R R' \otimes_{R'} M'
+\to N_3 \otimes_R R' \otimes_{R'} M'$ is exact.
+We may write this as
+$N_1 \otimes_R M \otimes_R R'
+\to N_2 \otimes_R M \otimes_R R'
+\to N_3 \otimes_R M \otimes_R R'$.
+Finally, faithful flatness implies that
+$N_1 \otimes_R M \to N_2 \otimes_R M \to N_3 \otimes_R M$
+is exact.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flatness-descends-more-general}
+Let $R$ be a ring. Let $S \to S'$ be a flat map of $R$-algebras.
+Let $M$ be a module over $S$, and set $M' = S' \otimes_S M$.
+\begin{enumerate}
+\item If $M$ is flat over $R$, then $M'$ is flat over $R$.
+\item If $S \to S'$ is faithfully flat, then $M$ is flat
+over $R$ if and only if $M'$ is flat over $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $N \to N'$ be an injection of $R$-modules. By the flatness
+of $S \to S'$ we have
+$$
+\Ker(N \otimes_R M \to N' \otimes_R M) \otimes_S S'
+=
+\Ker(N \otimes_R M' \to N' \otimes_R M')
+$$
+If $M$ is flat over $R$, then the left hand side is zero and
+we find that $M'$ is flat over $R$ by the second characterization
+of flatness in Lemma \ref{lemma-flat}.
+If $M'$ is flat over $R$ then we have the vanishing of the right hand side
+and if in addition $S \to S'$ is faithfully flat, this implies that
+$\Ker(N \otimes_R M \to N' \otimes_R M)$ is zero which in turn
+shows that $M$ is flat over $R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-permanence}
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module.
+If $M$ is flat as an $R$-module and faithfully flat as an $S$-module,
+then $R \to S$ is flat.
+\end{lemma}
+
+\begin{proof}
+Let $N_1 \to N_2 \to N_3$ be an exact sequence of $R$-modules.
+By assumption $N_1 \otimes_R M \to N_2 \otimes_R M \to N_3 \otimes_R M$
+is exact. We may write this as
+$$
+N_1 \otimes_R S \otimes_S M
+\to
+N_2 \otimes_R S \otimes_S M
+\to
+N_3 \otimes_R S \otimes_S M.
+$$
+By faithful flatness of $M$ over $S$ we conclude that
+$N_1 \otimes_R S \to N_2 \otimes_R S \to N_3 \otimes_R S$ is exact.
+Hence $R \to S$ is flat.
+\end{proof}
+
+\noindent
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
Let $\sum_{i = 1, \ldots, n} f_i x_i = 0$ be a relation in $M$,
where $f_i \in R$ and $x_i \in M$.
We say the relation $\sum f_i x_i = 0$
+is {\it trivial} if there exist an integer $m \geq 0$,
+elements $y_j \in M$, $j = 1, \ldots, m$, and elements $a_{ij} \in R$,
+$i = 1, \ldots, n$, $j = 1, \ldots, m$ such that
+$$
+x_i = \sum\nolimits_j a_{ij} y_j, \forall i,
+\quad\text{and}\quad
+0 = \sum\nolimits_i f_ia_{ij}, \forall j.
+$$
+
+\begin{lemma}[Equational criterion of flatness]
+\label{lemma-flat-eq}
+A module $M$ over $R$ is flat if and only if
+every relation in $M$ is trivial.
+\end{lemma}
+
+\begin{proof}
+Assume $M$ is flat and let $\sum f_i x_i = 0$ be a relation in $M$.
+Let $I = (f_1, \ldots, f_n)$, and let
+$K = \Ker(R^n \to I, (a_1, \ldots, a_n) \mapsto \sum_i a_i f_i)$.
+So we have the short exact sequence
+$0 \to K \to R^n \to I \to 0$. Then $\sum f_i \otimes x_i$
+is an element of $I \otimes_R M$ which maps
+to zero in $R \otimes_R M = M$. By flatness
+$\sum f_i \otimes x_i$ is zero in $I \otimes_R M$.
+Thus there exists an element of $K \otimes_R M$ mapping
+to $\sum e_i \otimes x_i \in R^n \otimes_R M$ where $e_i$
+is the $i$th basis element of $R^n$.
+Write this element as $\sum k_j \otimes y_j$
+and then write the image of $k_j$ in $R^n$ as
+$\sum a_{ij} e_i$ to get the result.
+
+\medskip\noindent
+Assume every relation is trivial, let $I$
+be a finitely generated ideal, and let $x = \sum f_i \otimes x_i$
+be an element of $I \otimes_R M$ mapping to zero in $R \otimes_R M = M$.
+This just means exactly that $\sum f_i x_i$ is a relation in
+$M$. And the fact that it is trivial implies easily that
+$x$ is zero, because
+$$
+x
+=
+\sum f_i \otimes x_i
+=
+\sum f_i \otimes \left(\sum a_{ij}y_j\right)
+=
+\sum \left(\sum f_i a_{ij}\right) \otimes y_j
+=
+0
+$$
+\end{proof}
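\noindent
Here is how the equational criterion plays out in the simplest nonflat example.

```latex
\begin{example}
Let $R = \mathbf{Z}$ and $M = \mathbf{Z}/2\mathbf{Z}$. The relation
$2 \cdot \overline{1} = 0$ (so $n = 1$, $f_1 = 2$,
$x_1 = \overline{1}$) is not trivial: triviality would give
$y_j \in M$ and $a_{1j} \in \mathbf{Z}$ with
$\overline{1} = \sum_j a_{1j} y_j$ and $2 a_{1j} = 0$ for all $j$,
forcing $a_{1j} = 0$ and hence $\overline{1} = 0$, a contradiction.
By Lemma \ref{lemma-flat-eq} this reproves that
$\mathbf{Z}/2\mathbf{Z}$ is not flat over $\mathbf{Z}$.
\end{example}
```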
+
+\begin{lemma}
+\label{lemma-flat-tor-zero}
+Suppose that $R$ is a ring, $0 \to M'' \to M' \to M \to 0$
+a short exact sequence, and $N$ an $R$-module. If $M$ is flat
+then $N \otimes_R M'' \to N \otimes_R M'$ is injective, i.e., the
+sequence
+$$
+0 \to N \otimes_R M'' \to N \otimes_R M' \to N \otimes_R M \to 0
+$$
+is a short exact sequence.
+\end{lemma}
+
+\begin{proof}
+Let $R^{(I)} \to N$ be a surjection from a free module
+onto $N$ with kernel $K$. The result follows
+from the snake lemma applied to the following diagram
+$$
+\begin{matrix}
+ & & 0 & & 0 & & 0 & & \\
+ & & \uparrow & & \uparrow & & \uparrow & & \\
+ & & M''\otimes_R N & \to & M' \otimes_R N & \to & M \otimes_R N & \to & 0 \\
+ & & \uparrow & & \uparrow & & \uparrow & & \\
+0 & \to & (M'')^{(I)} & \to & (M')^{(I)} & \to & M^{(I)} & \to & 0 \\
+ & & \uparrow & & \uparrow & & \uparrow & & \\
+ & & M''\otimes_R K & \to & M' \otimes_R K & \to & M \otimes_R K & \to & 0 \\
+ & & & & & & \uparrow & & \\
+ & & & & & & 0 & &
+\end{matrix}
+$$
+with exact rows and columns. The middle row is exact because tensoring
+with the free module $R^{(I)}$ is exact.
+\end{proof}
+
+
+\begin{lemma}
+\label{lemma-flat-ses}
+Suppose that $0 \to M' \to M \to M'' \to 0$ is
+a short exact sequence of $R$-modules.
+If $M'$ and $M''$ are flat so is $M$.
+If $M$ and $M''$ are flat so is $M'$.
+\end{lemma}
+
+\begin{proof}
+We will use the criterion that a module $N$ is flat if for
+every ideal $I \subset R$ the map $N \otimes_R I \to N$ is injective,
+see Lemma \ref{lemma-flat}.
+Consider an ideal $I \subset R$.
+Consider the diagram
+$$
+\begin{matrix}
+0 & \to & M' & \to & M & \to & M'' & \to & 0 \\
+& & \uparrow & & \uparrow & & \uparrow & & \\
+& & M'\otimes_R I & \to & M \otimes_R I & \to & M''\otimes_R I & \to & 0
+\end{matrix}
+$$
+with exact rows. This immediately proves the first assertion.
+The second follows because if $M''$ is flat then the lower left
+horizontal arrow is injective by Lemma \ref{lemma-flat-tor-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-easy-ff}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+The following are equivalent
+\begin{enumerate}
+\item $M$ is faithfully flat, and
+\item $M$ is flat and for all $R$-module homomorphisms $\alpha : N \to N'$
+we have $\alpha = 0$ if and only if $\alpha \otimes \text{id}_M = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $M$ is faithfully flat, then
+$0 \to \Ker(\alpha) \to N \to N'$ is exact if and only if the same holds
+after tensoring with $M$. This proves (1) implies (2).
+For the other, assume (2). Let $N_1 \to N_2 \to N_3$
+be a complex, and assume the complex
+$N_1 \otimes_R M \to N_2 \otimes_R M \to N_3\otimes_R M$
+is exact. Take $x \in \Ker(N_2 \to N_3)$,
+and consider the map $\alpha : R \to N_2/\Im(N_1)$,
+$r \mapsto rx + \Im(N_1)$. By the exactness
+of the complex $-\otimes_R M$ we see that $\alpha \otimes
+\text{id}_M$ is zero. By assumption we get that $\alpha$ is
+zero. Hence $x $ is in the image of $N_1 \to N_2$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ff}
+\begin{slogan}
+A flat module is faithfully flat if and only if it has nonzero fibers.
+\end{slogan}
+Let $M$ be a flat $R$-module.
+The following are equivalent:
+\begin{enumerate}
+\item $M$ is faithfully flat,
\item for every nonzero $R$-module $N$ the tensor product $M \otimes_R N$
+is nonzero,
+\item for all $\mathfrak p \in \Spec(R)$
+the tensor product $M \otimes_R \kappa(\mathfrak p)$ is nonzero, and
+\item for all maximal ideals $\mathfrak m$ of $R$
+the tensor product $M \otimes_R \kappa(\mathfrak m) = M/{\mathfrak m}M$
+is nonzero.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $M$ faithfully flat and $N \not = 0$. By Lemma \ref{lemma-easy-ff}
+the nonzero map $1 : N \to N$ induces a nonzero map
+$M \otimes_R N \to M \otimes_R N$, so $M \otimes_R N \not = 0$.
Thus (1) implies (2). The implications (2) $\Rightarrow$ (3) $\Rightarrow$ (4)
+are immediate.
+
+\medskip\noindent
+Assume (4). Suppose that $N_1 \to N_2 \to N_3$ is a complex and
+suppose that $N_1 \otimes_R M \to N_2\otimes_R M \to
+N_3\otimes_R M$ is exact. Let $H$ be the cohomology of the complex,
+so $H = \Ker(N_2 \to N_3)/\Im(N_1 \to N_2)$. To finish the proof
+we will show $H = 0$. By flatness we see that $H \otimes_R M = 0$.
+Take $x \in H$ and let $I = \{f \in R \mid fx = 0 \}$
+be its annihilator. Since $R/I \subset H$ we get
+$M/IM \subset H \otimes_R M = 0$ by flatness of $M$.
If $I \not = R$ we may choose
a maximal ideal $I \subset \mathfrak m \subset R$. Then $M/\mathfrak m M$
is a quotient of $M/IM = 0$, contradicting (4). Hence $I = R$, that is,
$x = 0$, and $H = 0$ as desired.
+\end{proof}
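\noindent
A standard example separating flatness from faithful flatness:

```latex
\begin{example}
The $\mathbf{Z}$-module $\mathbf{Q}$ is flat, for instance because it
is torsion free, or because it is a localization of $\mathbf{Z}$
(Lemma \ref{lemma-flat-localization} below). It is not faithfully
flat: for every maximal ideal $(p) \subset \mathbf{Z}$ we have
$\mathbf{Q} \otimes_{\mathbf{Z}} \mathbf{Z}/p\mathbf{Z} = 0$, so
condition (4) of Lemma \ref{lemma-ff} fails.
\end{example}
```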
+
+\begin{lemma}
+\label{lemma-ff-rings}
+Let $R \to S$ be a flat ring map.
+The following are equivalent:
+\begin{enumerate}
+\item $R \to S$ is faithfully flat,
+\item the induced map on $\Spec$ is surjective, and
+\item any closed point $x \in \Spec(R)$ is
+in the image of the map $\Spec(S) \to \Spec(R)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows quickly from Lemma \ref{lemma-ff}, because we
+saw in Remark \ref{remark-fundamental-diagram}
+that $\mathfrak p$ is in the image
+if and only if the ring $S \otimes_R \kappa(\mathfrak p)$
+is nonzero.
+\end{proof}
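\noindent
Concretely, the criterion in terms of $\Spec$ can fail as follows.

```latex
\begin{example}
The ring map $\mathbf{Z} \to \mathbf{Z}[1/2]$ is flat, being a
localization, but it is not faithfully flat: $2$ is invertible in
$\mathbf{Z}[1/2]$, so no prime of $\mathbf{Z}[1/2]$ lies over $(2)$.
Thus the closed point $(2) \in \Spec(\mathbf{Z})$ is not in the image
of $\Spec(\mathbf{Z}[1/2]) \to \Spec(\mathbf{Z})$, and condition (3)
of Lemma \ref{lemma-ff-rings} fails.
\end{example}
```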
+
+\begin{lemma}
+\label{lemma-local-flat-ff}
+A flat local ring homomorphism of local rings is faithfully flat.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-ff-rings}.
+\end{proof}
+
+\noindent
+Flatness meshes well with localization.
+
+\begin{lemma}
+\label{lemma-flat-localization}
+Let $R$ be a ring. Let $S \subset R$ be a multiplicative subset.
+\begin{enumerate}
+\item The localization $S^{-1}R$ is a flat $R$-algebra.
+\item If $M$ is an $S^{-1}R$-module, then $M$ is a flat $R$-module
+if and only if $M$ is a flat $S^{-1}R$-module.
+\item Suppose $M$ is an $R$-module. Then
+$M$ is a flat $R$-module if and only if $M_{\mathfrak p}$ is a flat
+$R_{\mathfrak p}$-module for all primes $\mathfrak p$ of $R$.
+\item Suppose $M$ is an $R$-module. Then $M$ is a flat $R$-module if
+and only if $M_{\mathfrak m}$ is a flat
+$R_{\mathfrak m}$-module for all maximal ideals $\mathfrak m$ of $R$.
+\item Suppose $R \to A$ is a ring map, $M$ is an $A$-module,
+and $g_1, \ldots, g_m \in A$ are elements generating the unit
+ideal of $A$. Then $M$ is flat over $R$ if and only if each localization
+$M_{g_i}$ is flat over $R$.
+\item Suppose $R \to A$ is a ring map, and $M$ is an $A$-module.
+Then $M$ is a flat $R$-module if and only if the localization
+$M_{\mathfrak q}$ is a flat $R_{\mathfrak p}$-module
+(with $\mathfrak p$ the prime of $R$ lying under $\mathfrak q$)
+for all primes $\mathfrak q$ of $A$.
+\item Suppose $R \to A$ is a ring map, and $M$ is an $A$-module.
+Then $M$ is a flat $R$-module if and only if the localization
+$M_{\mathfrak m}$ is a flat $R_{\mathfrak p}$-module
+(with $\mathfrak p = R \cap \mathfrak m$)
+for all maximal ideals $\mathfrak m$ of $A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let us prove the last statement of the lemma.
+In the proof we will use repeatedly that localization is exact
+and commutes with tensor product, see Sections \ref{section-localization}
+and \ref{section-tensor-product}.
+
+\medskip\noindent
+Suppose $R \to A$ is a ring map, and $M$ is an $A$-module.
+Assume that $M_{\mathfrak m}$ is a flat $R_{\mathfrak p}$-module
+for all maximal ideals $\mathfrak m$ of $A$ (with
+$\mathfrak p = R \cap \mathfrak m$). Let $I \subset R$ be an ideal.
+We have to show the map $I \otimes_R M \to M$ is injective.
+We can think of this as a map of $A$-modules.
+By assumption the localization
+$(I \otimes_R M)_{\mathfrak m} \to M_{\mathfrak m}$ is injective
+because
+$(I \otimes_R M)_{\mathfrak m} =
+I_{\mathfrak p} \otimes_{R_{\mathfrak p}} M_{\mathfrak m}$.
+Hence the kernel of $I \otimes_R M \to M$ is zero by
+Lemma \ref{lemma-characterize-zero-local}.
+Hence $M$ is flat over $R$.
+
+\medskip\noindent
+Conversely, assume $M$ is flat over $R$. Pick a prime $\mathfrak q$
+of $A$ lying over the prime $\mathfrak p$ of $R$. Suppose that
+$I \subset R_{\mathfrak p}$ is an ideal. We have to show that
+$I \otimes_{R_{\mathfrak p}} M_{\mathfrak q} \to M_{\mathfrak q}$
+is injective. We can write $I = J_{\mathfrak p}$ for some
+ideal $J \subset R$. Then the map
+$I \otimes_{R_{\mathfrak p}} M_{\mathfrak q} \to M_{\mathfrak q}$
+is just the localization (at $\mathfrak q$) of the map
+$J \otimes_R M \to M$ which is injective. Since localization is exact
+we see that $M_{\mathfrak q}$ is a flat $R_{\mathfrak p}$-module.
+
+\medskip\noindent
+This proves (7) and (6). The other statements follow in a straightforward
+way from the last statement (proofs omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-going-down}
+Let $R \to S$ be flat. Let $\mathfrak p \subset \mathfrak p'$
+be primes of $R$. Let $\mathfrak q' \subset S$ be a prime of $S$
+mapping to $\mathfrak p'$. Then there exists a prime
+$\mathfrak q \subset \mathfrak q'$ mapping to $\mathfrak p$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-flat-localization} the local ring map
+$R_{\mathfrak p'} \to S_{\mathfrak q'}$ is flat.
+By Lemma \ref{lemma-local-flat-ff} this local ring map is faithfully
+flat. By Lemma \ref{lemma-ff-rings} there is a prime mapping to
+$\mathfrak p R_{\mathfrak p'}$. The inverse image of this
+prime in $S$ does the job.
+\end{proof}
+
+\noindent
+The property of $R \to S$ described in the lemma is called the
+``going down property''. See Definition \ref{definition-going-up-down}.
+
+\begin{lemma}
+\label{lemma-colimit-faithfully-flat}
+Let $R$ be a ring. Let $\{S_i, \varphi_{ii'}\}$ be a directed system of
+faithfully flat $R$-algebras. Then $S = \colim_i S_i$ is a faithfully flat
+$R$-algebra.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-colimit-flat} we see that $S$ is flat.
+Let $\mathfrak m \subset R$ be a maximal ideal. By
+Lemma \ref{lemma-ff-rings}
+none of the rings $S_i/\mathfrak m S_i$ is zero.
Hence $S/\mathfrak mS = \colim S_i/\mathfrak mS_i$ is nonzero
as well: if $1$ were equal to $0$ in the colimit, then $1 = 0$
in some $S_i/\mathfrak mS_i$, which is not the case. Thus the image of
+$\Spec(S) \to \Spec(R)$ contains $\mathfrak m$ and we see that $R \to S$
+is faithfully flat by Lemma \ref{lemma-ff-rings}.
+\end{proof}
+
+
+
+
+
+\section{Supports and annihilators}
+\label{section-supp-and-ann}
+
+\noindent
+Some very basic definitions and lemmas.
+
+\begin{definition}
+\label{definition-support-module}
+Let $R$ be a ring and let $M$ be an $R$-module.
+The {\it support of $M$} is the set
+$$
+\text{Supp}(M)
+=
+\{
+\mathfrak p \in \Spec(R)
+\mid
+M_{\mathfrak p} \not = 0
+\}
+$$
+\end{definition}
+
+\begin{lemma}
+\label{lemma-support-zero}
+\begin{slogan}
+A module over a ring has empty support if and only if it is the trivial module.
+\end{slogan}
+Let $R$ be a ring. Let $M$ be an $R$-module. Then
+$$
+M = (0) \Leftrightarrow \text{Supp}(M) = \emptyset.
+$$
+\end{lemma}
+
+\begin{proof}
+Actually,
+Lemma \ref{lemma-characterize-zero-local}
+even shows that $\text{Supp}(M)$ always contains a maximal ideal
+if $M$ is not zero.
+\end{proof}
+
+\begin{definition}
+\label{definition-annihilator}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+\begin{enumerate}
+\item Given an element $m \in M$ the {\it annihilator of $m$}
+is the ideal
+$$
+\text{Ann}_R(m) = \text{Ann}(m) = \{f \in R \mid fm = 0\}.
+$$
+\item The {\it annihilator of $M$}
+is the ideal
+$$
+\text{Ann}_R(M) = \text{Ann}(M) = \{f \in R \mid fm = 0\ \forall m \in M\}.
+$$
+\end{enumerate}
+\end{definition}
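\noindent
A simple example illustrating both notions:

```latex
\begin{example}
Let $n \geq 2$ be an integer. Then
$\text{Ann}_{\mathbf{Z}}(\mathbf{Z}/n\mathbf{Z}) = n\mathbf{Z}$ and
$$
\text{Supp}(\mathbf{Z}/n\mathbf{Z})
=
\{ (p) \mid p \text{ a prime number dividing } n \}
=
V(n\mathbf{Z}).
$$
Indeed, $(\mathbf{Z}/n\mathbf{Z})_{(p)} = 0$ if and only if some
integer prime to $p$ annihilates $\mathbf{Z}/n\mathbf{Z}$, i.e., if
and only if $p$ does not divide $n$; also
$(\mathbf{Z}/n\mathbf{Z})_{(0)} = 0$. This is a special case of
Lemma \ref{lemma-support-closed} below.
\end{example}
```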
+
+\begin{lemma}
+\label{lemma-annihilator-flat-base-change}
+Let $R \to S$ be a flat ring map. Let $M$ be an $R$-module and
+$m \in M$. Then $\text{Ann}_R(m) S = \text{Ann}_S(m \otimes 1)$.
+If $M$ is a finite $R$-module, then
+$\text{Ann}_R(M) S = \text{Ann}_S(M \otimes_R S)$.
+\end{lemma}
+
+\begin{proof}
+Set $I = \text{Ann}_R(m)$. By definition there is an exact sequence
+$0 \to I \to R \to M$ where the map $R \to M$ sends $f$ to $fm$. Using
+flatness we obtain an exact sequence
+$0 \to I \otimes_R S \to S \to M \otimes_R S$ which proves the first
+assertion. If $m_1, \ldots, m_n$ is a set of generators of $M$
+then $\text{Ann}_R(M) = \bigcap \text{Ann}_R(m_i)$. Similarly
+$\text{Ann}_S(M \otimes_R S) = \bigcap \text{Ann}_S(m_i \otimes 1)$.
+Set $I_i = \text{Ann}_R(m_i)$. Then it suffices to show that
+$\bigcap_{i = 1, \ldots, n} (I_i S) = (\bigcap_{i = 1, \ldots, n} I_i)S$.
+This is Lemma \ref{lemma-flat-intersect-ideals}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-closed}
+Let $R$ be a ring and let $M$ be an $R$-module.
+If $M$ is finite, then $\text{Supp}(M)$ is closed.
+More precisely, if $I = \text{Ann}(M)$ is the annihilator of $M$, then
+$V(I) = \text{Supp}(M)$.
+\end{lemma}
+
+\begin{proof}
+We will show that $V(I) = \text{Supp}(M)$.
+
+\medskip\noindent
+Suppose $\mathfrak p \in \text{Supp}(M)$. Then $M_{\mathfrak p} \not = 0$.
+Choose an element $m \in M$ whose image in $M_\mathfrak p$ is nonzero.
+Then the annihilator of $m$ is contained in $\mathfrak p$ by construction
+of the localization $M_\mathfrak p$. Hence
+a fortiori $I = \text{Ann}(M)$ must be contained in $\mathfrak p$.
+
+\medskip\noindent
+Conversely, suppose that $\mathfrak p \not \in \text{Supp}(M)$.
+Then $M_{\mathfrak p} = 0$.
+Let $x_1, \ldots, x_r \in M$ be generators.
By Lemma \ref{lemma-localization-colimit} there exists
an $f \in R$, $f\not\in \mathfrak p$ such that
$x_i/1 = 0$ in $M_f$ for $i = 1, \ldots, r$.
Hence $f^{n_i} x_i = 0$ for some $n_i \geq 1$.
Hence $f^nM = 0$ for $n = \max\{n_i\}$. Then $f^n \in I$ and
$f^n \not\in \mathfrak p$, so $\mathfrak p \not\in V(I)$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-base-change}
+Let $R \to R'$ be a ring map and let $M$ be a finite $R$-module.
+Then $\text{Supp}(M \otimes_R R')$ is the inverse image of
+$\text{Supp}(M)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p \in \text{Supp}(M)$. By Nakayama's lemma
+(Lemma \ref{lemma-NAK}) we see that
+$$
+M \otimes_R \kappa(\mathfrak p) = M_\mathfrak p/\mathfrak p M_\mathfrak p
+$$
+is a nonzero $\kappa(\mathfrak p)$ vector space.
+Hence for every prime $\mathfrak p' \subset R'$ lying
+over $\mathfrak p$ we see that
+$$
+(M \otimes_R R')_{\mathfrak p'}/\mathfrak p' (M \otimes_R R')_{\mathfrak p'} =
+(M \otimes_R R') \otimes_{R'} \kappa(\mathfrak p') =
+M \otimes_R \kappa(\mathfrak p) \otimes_{\kappa(\mathfrak p)}
+\kappa(\mathfrak p')
+$$
+is nonzero. This implies $\mathfrak p' \in \text{Supp}(M \otimes_R R')$.
+For the converse, if $\mathfrak p' \subset R'$ is a prime lying
+over an arbitrary prime $\mathfrak p \subset R$, then
+$$
+(M \otimes_R R')_{\mathfrak p'} =
+M_\mathfrak p \otimes_{R_\mathfrak p} R'_{\mathfrak p'}.
+$$
+Hence if $\mathfrak p' \in \text{Supp}(M \otimes_R R')$
+lies over the prime $\mathfrak p \subset R$, then
+$\mathfrak p \in \text{Supp}(M)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-element}
+Let $R$ be a ring, let $M$ be an $R$-module, and let $m \in M$.
+Then $\mathfrak p \in V(\text{Ann}(m))$ if and only if
+$m$ does not map to zero in $M_\mathfrak p$.
+\end{lemma}
+
+\begin{proof}
+We may replace $M$ by $Rm \subset M$. Then (1) $\text{Ann}(m) = \text{Ann}(M)$
+and (2) $m$ does not map to zero in $M_\mathfrak p$ if and only if
+$\mathfrak p \in \text{Supp}(M)$.
+The result now follows from Lemma \ref{lemma-support-closed}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-finite-presentation-constructible}
+Let $R$ be a ring and let $M$ be an $R$-module.
+If $M$ is a finitely presented $R$-module, then $\text{Supp}(M)$ is a
+closed subset of $\Spec(R)$ whose complement is quasi-compact.
+\end{lemma}
+
+\begin{proof}
+Choose a presentation
+$$
R^{\oplus m} \longrightarrow R^{\oplus n} \longrightarrow M \longrightarrow 0
+$$
+Let $A \in \text{Mat}(n \times m, R)$ be the matrix of the first
+map. By Nakayama's Lemma \ref{lemma-NAK} we see that
+$$
+M_{\mathfrak p} \not = 0 \Leftrightarrow
+M \otimes \kappa(\mathfrak p) \not = 0 \Leftrightarrow
+\text{rank}(A \bmod \mathfrak p) < n.
+$$
+Hence, if $I$ is the ideal of $R$ generated by the $n \times n$ minors
+of $A$, then $\text{Supp}(M) = V(I)$. Since $I$
+is finitely generated, say $I = (f_1, \ldots, f_t)$,
+we see that $\Spec(R) \setminus V(I)$ is
+a finite union of the standard opens $D(f_i)$, hence quasi-compact.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-quotient}
+Let $R$ be a ring and let $M$ be an $R$-module.
+\begin{enumerate}
+\item If $M$ is finite then the support
+of $M/IM$ is $\text{Supp}(M) \cap V(I)$.
+\item If $N \subset M$, then $\text{Supp}(N) \subset
+\text{Supp}(M)$.
+\item If $Q$ is a quotient module of $M$ then $\text{Supp}(Q) \subset
+\text{Supp}(M)$.
+\item If $0 \to N \to M \to Q \to 0$ is a short exact sequence
+then $\text{Supp}(M) = \text{Supp}(Q) \cup \text{Supp}(N)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The functors $M \mapsto M_{\mathfrak p}$ are exact. This immediately
+implies all but the first assertion. For the first assertion
+we need to show that $M_\mathfrak p \not = 0$ and
+$I \subset \mathfrak p$ implies $(M/IM)_{\mathfrak p}
+= M_\mathfrak p/IM_\mathfrak p \not = 0$. This follows
+from Nakayama's Lemma \ref{lemma-NAK}.
+\end{proof}
+
+
+
+
+
+\section{Going up and going down}
+\label{section-going-up}
+
+\noindent
+Suppose $\mathfrak p$, $\mathfrak p'$ are primes
+of the ring $R$. Let $X = \Spec(R)$ with the Zariski
+topology. Denote $x \in X$ the point corresponding
+to $\mathfrak p$ and $x' \in X$ the point corresponding
+to $\mathfrak p'$. Then we have:
+$$
+x' \leadsto x \Leftrightarrow \mathfrak p' \subset \mathfrak p.
+$$
+In words: $x$ is a specialization of $x'$ if and
+only if $\mathfrak p' \subset \mathfrak p$.
+See Topology, Section \ref{topology-section-specialization}
+for terminology and notation.
+
+\begin{definition}
+\label{definition-going-up-down}
+Let $\varphi : R \to S$ be a ring map.
+\begin{enumerate}
+\item We say $\varphi : R \to S$ satisfies {\it going up} if
+given primes $\mathfrak p \subset \mathfrak p'$ in $R$
+and a prime $\mathfrak q$ in $S$ lying over $\mathfrak p$
+there exists a prime $\mathfrak q'$ of $S$ such that
+(a) $\mathfrak q \subset \mathfrak q'$, and (b)
+$\mathfrak q'$ lies over $\mathfrak p'$.
+\item We say $\varphi : R \to S$ satisfies {\it going down} if
+given primes $\mathfrak p \subset \mathfrak p'$ in $R$
+and a prime $\mathfrak q'$ in $S$ lying over $\mathfrak p'$
+there exists a prime $\mathfrak q$ of $S$ such that
+(a) $\mathfrak q \subset \mathfrak q'$, and (b)
+$\mathfrak q$ lies over $\mathfrak p$.
+\end{enumerate}
+\end{definition}
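+
+\noindent
+Going up does not imply going down. For example, the quotient map
+$k[x] \to k[x]/(x) = k$ satisfies going up (being integral) but not
+going down: the unique prime of $k$ lies over $(x)$, and for the
+inclusion $(0) \subset (x)$ of primes of $k[x]$ there is no prime
+of $k$ lying over $(0)$ at all.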
+
+\noindent
+So far we have seen the following cases of this:
+\begin{enumerate}
+\item An integral ring map satisfies going up, see
+Lemma \ref{lemma-integral-going-up}.
+\item As a special case finite ring maps satisfy going up.
+\item As a special case quotient maps $R \to R/I$ satisfy going up.
+\item A flat ring map satisfies going down, see
+Lemma \ref{lemma-flat-going-down}.
+\item As a special case any localization satisfies going down.
+\item An extension $R \subset S$ of domains, with $R$ normal
+and $S$ integral over $R$ satisfies going down, see
+Proposition \ref{proposition-going-down-normal-integral}.
+\end{enumerate}
+Here is another case where going down holds.
+
+\begin{lemma}
+\label{lemma-open-going-down}
+Let $R \to S$ be a ring map. If the induced map
+$\varphi : \Spec(S) \to \Spec(R)$ is open, then
+$R \to S$ satisfies going down.
+\end{lemma}
+
+\begin{proof}
+Suppose that $\mathfrak p \subset \mathfrak p' \subset R$ and
+$\mathfrak q' \subset S$ lies over $\mathfrak p'$. As $\varphi$ is open,
+for every $g \in S$, $g \not \in \mathfrak q'$ we see that $\mathfrak p$
+is in the image of $D(g) \subset \Spec(S)$. In other words
+$S_g \otimes_R \kappa(\mathfrak p)$ is not zero. Since $S_{\mathfrak q'}$
+is the directed colimit of these $S_g$ this implies
+that $S_{\mathfrak q'} \otimes_R \kappa(\mathfrak p)$ is not
+zero, see
+Lemmas \ref{lemma-localization-colimit} and
+\ref{lemma-tensor-products-commute-with-limits}.
+Hence $\mathfrak p$ is in the image of
+$\Spec(S_{\mathfrak q'}) \to \Spec(R)$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-going-up-down-specialization}
+Let $R \to S$ be a ring map.
+\begin{enumerate}
+\item $R \to S$ satisfies going down if and only if
+generalizations lift along the map $\Spec(S) \to \Spec(R)$,
+see Topology, Definition \ref{topology-definition-lift-specializations}.
+\item $R \to S$ satisfies going up if and only if
+specializations lift along the map $\Spec(S) \to \Spec(R)$,
+see Topology, Definition \ref{topology-definition-lift-specializations}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-going-up-down-composition}
+Suppose $R \to S$ and $S \to T$ are ring maps satisfying
+going down. Then so does $R \to T$. Similarly for going up.
+\end{lemma}
+
+\begin{proof}
+According to Lemma \ref{lemma-going-up-down-specialization}
+this follows from
+Topology, Lemma \ref{topology-lemma-lift-specialization-composition}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-image-stable-specialization-closed}
+Let $R \to S$ be a ring map. Let $T \subset \Spec(R)$
+be the image of $\Spec(S)$. If $T$ is stable under specialization,
+then $T$ is closed.
+\end{lemma}
+
+\begin{proof}
+We give two proofs.
+
+\medskip\noindent
+First proof. Let $\mathfrak p \subset R$ be a prime ideal such that
+the corresponding point of $\Spec(R)$ is in the closure
+of $T$. This means that for every $f \in R$, $f \not \in \mathfrak p$
+we have $D(f) \cap T \not = \emptyset$. Note that $D(f) \cap T$
+is the image of $\Spec(S_f)$ in $\Spec(R)$. Hence
+we conclude that $S_f \not = 0$. In other words, $1 \not = 0$ in
+the ring $S_f$. Since $S_{\mathfrak p}$ is the directed colimit
+of the rings $S_f$ we conclude that $1 \not = 0$ in
+$S_{\mathfrak p}$. In other words, $S_{\mathfrak p} \not = 0$ and
+considering the image of $\Spec(S_{\mathfrak p})
+\to \Spec(S) \to \Spec(R)$ we see there exists
+a $\mathfrak p' \in T$ with $\mathfrak p' \subset \mathfrak p$.
+As we assumed $T$ stable under specialization we conclude that $\mathfrak p$
+is a point of $T$ as desired.
+
+\medskip\noindent
+Second proof. Let $I = \Ker(R \to S)$. We may replace $R$ by $R/I$.
+In this case the ring map $R \to S$ is injective.
+By Lemma \ref{lemma-injective-minimal-primes-in-image}
+all the minimal primes of $R$ are contained in the image $T$. Hence
+if $T$ is stable under specialization then it contains all primes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-going-up-closed}
+Let $R \to S$ be a ring map. The following are equivalent:
+\begin{enumerate}
+\item Going up holds for $R \to S$, and
+\item the map $\Spec(S) \to \Spec(R)$ is closed.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is a general fact that specializations lift along a
+closed map of topological spaces, see
+Topology, Lemma \ref{topology-lemma-closed-open-map-specialization}.
+Hence the second condition implies the first.
+
+\medskip\noindent
+Assume that going up holds for $R \to S$.
+Let $V(I) \subset \Spec(S)$ be a closed set.
+We want to show that the image of $V(I)$ in $\Spec(R)$ is closed.
+The ring map $S \to S/I$ obviously satisfies going up.
+Hence $R \to S \to S/I$ satisfies going up,
+by Lemma \ref{lemma-going-up-down-composition}.
+Replacing $S$ by $S/I$ it suffices to show the image $T$
+of $\Spec(S)$ in $\Spec(R)$ is closed.
+By Topology, Lemmas \ref{topology-lemma-open-closed-specialization}
+and \ref{topology-lemma-lift-specializations-images} this
+image is stable under specialization. Thus the result follows
+from Lemma \ref{lemma-image-stable-specialization-closed}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-constructible-stable-specialization-closed}
+Let $R$ be a ring. Let $E \subset \Spec(R)$ be a constructible subset.
+\begin{enumerate}
+\item If $E$ is stable under specialization, then $E$ is closed.
+\item If $E$ is stable under generalization, then $E$ is open.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First proof. The first assertion
+follows from Lemma \ref{lemma-image-stable-specialization-closed}
+combined with Lemma \ref{lemma-constructible-is-image}.
+The second follows because the complement of a constructible
+set is constructible
+(see Topology, Lemma \ref{topology-lemma-constructible}),
+the first part of the lemma and Topology,
+Lemma \ref{topology-lemma-open-closed-specialization}.
+
+\medskip\noindent
+Second proof. Since $\Spec(R)$ is a spectral space by
+Lemma \ref{lemma-spec-spectral} this is a special case of
+Topology, Lemma
+\ref{topology-lemma-constructible-stable-specialization-closed}.
+\end{proof}
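+
+\noindent
+The assumption that $E$ is constructible cannot be omitted. For
+example, the subset $\{(0)\} \subset \Spec(\mathbf{Z})$ is stable
+under generalization but not open, since every nonempty open of
+$\Spec(\mathbf{Z})$ contains all but finitely many closed points.
+Of course $\{(0)\}$ is not constructible: a constructible set
+containing the generic point contains a nonempty open.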
+
+\begin{proposition}
+\label{proposition-fppf-open}
+Let $R \to S$ be flat and of finite presentation.
+Then $\Spec(S) \to \Spec(R)$ is open.
+More generally this holds for any ring map $R \to S$ of
+finite presentation which satisfies going down.
+\end{proposition}
+
+\begin{proof}
+If $R \to S$ is flat, then $R \to S$ satisfies going down by
+Lemma \ref{lemma-flat-going-down}. Thus to prove the proposition we may
+assume that $R \to S$ has finite presentation and satisfies going down.
+
+\medskip\noindent
+Since the standard opens $D(g) \subset \Spec(S)$, $g \in S$
+form a basis for the topology, it suffices to prove that the
+image of $D(g)$ is open. Recall that $\Spec(S_g) \to \Spec(S)$
+is a homeomorphism of $\Spec(S_g)$ onto $D(g)$
+(Lemma \ref{lemma-standard-open}).
+Since $S \to S_g$ satisfies going down (see above), we see that
+$R \to S_g$ satisfies going down by
+Lemma \ref{lemma-going-up-down-composition}. Thus after replacing
+$S$ by $S_g$ we see it suffices to prove the image is open.
+By Chevalley's theorem (Theorem \ref{theorem-chevalley})
+the image is a constructible set $E$. And $E$ is stable
+under generalization because $R \to S$ satisfies going down,
+see Topology, Lemmas \ref{topology-lemma-open-closed-specialization}
+and \ref{topology-lemma-lift-specializations-images}.
+Hence $E$ is open by
+Lemma \ref{lemma-constructible-stable-specialization-closed}.
+\end{proof}
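+
+\noindent
+The finite presentation hypothesis in
+Proposition \ref{proposition-fppf-open} cannot be dropped.
+For example, the localization $\mathbf{Z} \to \mathbf{Z}_{(p)}$
+is flat, but the image of
+$\Spec(\mathbf{Z}_{(p)}) \to \Spec(\mathbf{Z})$
+is the two point set $\{(0), (p)\}$, which is not open.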
+
+\begin{lemma}
+\label{lemma-same-image}
+Let $k$ be a field, and let $R$, $S$ be $k$-algebras.
+Let $S' \subset S$ be a sub $k$-algebra, and let $f \in S' \otimes_k R$.
+In the commutative diagram
+$$
+\xymatrix{
+\Spec((S \otimes_k R)_f) \ar[rd] \ar[rr] & &
+\Spec((S' \otimes_k R)_f) \ar[ld] \\
+& \Spec(R) &
+}
+$$
+the images of the diagonal arrows are the same.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p \subset R$ be in the image of the south-west
+arrow. This means (Lemma \ref{lemma-in-image}) that
+$$
+(S' \otimes_k R)_f \otimes_R \kappa(\mathfrak p)
+=
+(S' \otimes_k \kappa(\mathfrak p))_f
+$$
+is not the zero ring, i.e., $S' \otimes_k \kappa(\mathfrak p)$
+is not the zero ring and the image of $f$ in it is not nilpotent.
+The ring map
+$S' \otimes_k \kappa(\mathfrak p) \to S \otimes_k \kappa(\mathfrak p)$
+is injective. Hence also $S \otimes_k \kappa(\mathfrak p)$
+is not the zero ring and the image of $f$ in it is not nilpotent.
+Hence $(S \otimes_k R)_f \otimes_R \kappa(\mathfrak p)$
+is not the zero ring. Thus (Lemma \ref{lemma-in-image})
+we see that $\mathfrak p$ is in the image of the south-east arrow
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-into-tensor-algebra-open}
+Let $k$ be a field.
+Let $R$ and $S$ be $k$-algebras.
+The map $\Spec(S \otimes_k R) \to \Spec(R)$
+is open.
+\end{lemma}
+
+\begin{proof}
+Let $f \in S \otimes_k R$.
+It suffices to prove that the image of the standard open $D(f)$ is open.
+Let $S' \subset S$ be a finite type $k$-subalgebra such that
+$f \in S' \otimes_k R$. The map $R \to S' \otimes_k R$ is flat
+and of finite presentation, hence the image $U$ of
+$\Spec((S' \otimes_k R)_f) \to \Spec(R)$ is open
+by Proposition \ref{proposition-fppf-open}.
+By Lemma \ref{lemma-same-image} this is also the image of $D(f)$ and we win.
+\end{proof}
+
+\noindent
+Here is a tricky lemma that is sometimes useful.
+
+\begin{lemma}
+\label{lemma-unique-prime-over-localize-below}
+Let $R \to S$ be a ring map.
+Let $\mathfrak p \subset R$ be a prime.
+Assume that
+\begin{enumerate}
+\item there exists a unique prime $\mathfrak q \subset S$ lying over
+$\mathfrak p$, and
+\item either
+\begin{enumerate}
+\item going up holds for $R \to S$, or
+\item going down holds for $R \to S$ and there is at most one prime
+of $S$ above every prime of $R$.
+\end{enumerate}
+\end{enumerate}
+Then $S_{\mathfrak p} = S_{\mathfrak q}$.
+\end{lemma}
+
+\begin{proof}
+Consider any prime $\mathfrak q' \subset S$ which corresponds to
+a point of $\Spec(S_{\mathfrak p})$. This means that
+$\mathfrak p' = R \cap \mathfrak q'$ is contained in $\mathfrak p$.
+Here is a picture
+$$
+\xymatrix{
+\mathfrak q' \ar@{-}[d] \ar@{-}[r] & ? \ar@{-}[r] \ar@{-}[d] & S \ar@{-}[d] \\
+\mathfrak p' \ar@{-}[r] & \mathfrak p \ar@{-}[r] & R
+}
+$$
+Assume (1) and (2)(a).
+By going up there exists a prime $\mathfrak q'' \subset S$
+with $\mathfrak q' \subset \mathfrak q''$ and $\mathfrak q''$
+lying over $\mathfrak p$. By the uniqueness of $\mathfrak q$ we
+conclude that $\mathfrak q'' = \mathfrak q$. In other words
+$\mathfrak q'$ defines a point of $\Spec(S_{\mathfrak q})$.
+
+\medskip\noindent
+Assume (1) and (2)(b).
+By going down there exists a prime $\mathfrak q'' \subset \mathfrak q$
+lying over $\mathfrak p'$. By the uniqueness of primes lying over
+$\mathfrak p'$ we see that $\mathfrak q' = \mathfrak q''$. In other words
+$\mathfrak q'$ defines a point of $\Spec(S_{\mathfrak q})$.
+
+\medskip\noindent
+In both cases we conclude that the map
+$\Spec(S_{\mathfrak q}) \to \Spec(S_{\mathfrak p})$
+is bijective. Clearly this means that all elements of
+$S \setminus \mathfrak q$ are invertible in $S_{\mathfrak p}$,
+in other words $S_{\mathfrak p} = S_{\mathfrak q}$.
+\end{proof}
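+
+\noindent
+For example, take $R = \mathbf{Z}$, $S = \mathbf{Z}[i]$, and
+$\mathfrak p = (2)$. The unique prime of $S$ lying over $(2)$ is
+$\mathfrak q = (1 + i)$, and going up holds as $S$ is integral over $R$.
+The conclusion $S_{\mathfrak p} = S_{\mathfrak q}$ says that inverting
+the odd integers in $\mathbf{Z}[i]$ already inverts every element
+outside of $(1 + i)$; indeed, if $a + bi \not \in (1 + i)$ then the
+norm $a^2 + b^2$ is an odd integer.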
+
+\noindent
+The following lemma is a generalization of going down for
+flat ring maps.
+
+\begin{lemma}
+\label{lemma-going-down-flat-module}
+Let $R \to S$ be a ring map. Let $N$ be a finite $S$-module flat over $R$.
+Endow $\text{Supp}(N) \subset \Spec(S)$ with the induced topology.
+Then generalizations lift along $\text{Supp}(N) \to \Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+The meaning of the statement is as follows. Let
+$\mathfrak p \subset \mathfrak p' \subset R$ be primes. Let
+$\mathfrak q' \subset S$ be a prime lying over $\mathfrak p'$ with
+$\mathfrak q' \in \text{Supp}(N)$. We have to show there exists a prime
+$\mathfrak q \subset \mathfrak q'$, $\mathfrak q \in \text{Supp}(N)$
+lying over $\mathfrak p$.
+As $N$ is flat over $R$ we see that $N_{\mathfrak q'}$ is flat
+over $R_{\mathfrak p'}$, see Lemma \ref{lemma-flat-localization}.
+As $N_{\mathfrak q'}$ is finite over $S_{\mathfrak q'}$
+and not zero since $\mathfrak q' \in \text{Supp}(N)$ we see
+that $N_{\mathfrak q'} \otimes_{S_{\mathfrak q'}} \kappa(\mathfrak q')$
+is nonzero by Nakayama's Lemma \ref{lemma-NAK}.
+Thus $N_{\mathfrak q'} \otimes_{R_{\mathfrak p'}} \kappa(\mathfrak p')$
+is also not zero. We conclude from Lemma \ref{lemma-ff}
+that $N_{\mathfrak q'} \otimes_{R_{\mathfrak p'}} \kappa(\mathfrak p)$
+is nonzero. Let
+$J \subset S_{\mathfrak q'} \otimes_{R_{\mathfrak p'}} \kappa(\mathfrak p)$
+be the annihilator of the finite nonzero module
+$N_{\mathfrak q'} \otimes_{R_{\mathfrak p'}} \kappa(\mathfrak p)$.
+Since $J$ is a proper ideal we can choose a prime $\mathfrak q \subset S$
+which corresponds to a prime of
+$S_{\mathfrak q'} \otimes_{R_{\mathfrak p'}} \kappa(\mathfrak p)/J$.
+This prime is in the support of $N$, lies over $\mathfrak p$, and
+is contained in $\mathfrak q'$ as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Separable extensions}
+\label{section-separability}
+
+\noindent
+In this section we talk about separability for nonalgebraic field extensions.
+This is closely related to the concept of geometrically reduced algebras, see
+Definition \ref{definition-geometrically-reduced}.
+
+\begin{definition}
+\label{definition-separable-field-extension}
+Let $K/k$ be a field extension.
+\begin{enumerate}
+\item We say $K$ is {\it separably generated over $k$} if there exists
+a transcendence basis $\{x_i; i \in I\}$ of $K/k$ such that the extension
+$K/k(x_i; i \in I)$ is a separable algebraic extension.
+\item We say $K$ is {\it separable over $k$} if for every subextension
+$k \subset K' \subset K$ with $K'$ finitely generated
+over $k$, the extension $K'/k$ is separably generated.
+\end{enumerate}
+\end{definition}
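+
+\noindent
+For example, in characteristic $0$ every field extension is separable,
+since every algebraic extension in characteristic $0$ is separable.
+In characteristic $p$ the field $K = \mathbf{F}_p(t^{1/p})$ is
+separably generated over $\mathbf{F}_p$ (take $t^{1/p}$ as
+transcendence basis), but $K$ is not separable over the subfield
+$k = \mathbf{F}_p(t)$ as $K/k$ is purely inseparable of degree $p$.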
+
+\noindent
+With this awkward definition it is not clear that
+a separably generated field extension is itself separable.
+It will turn out that this is the case, see
+Lemma \ref{lemma-separably-generated-separable}.
+
+\begin{lemma}
+\label{lemma-subextensions-are-separable}
+Let $K/k$ be a separable field extension.
+For any subextension $K/K'/k$ the field
+extension $K'/k$ is separable.
+\end{lemma}
+
+\begin{proof}
+This is direct from the definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generating-finitely-generated-separable-field-extensions}
+Let $K/k$ be a separably generated, and finitely generated
+field extension.
+Set $r = \text{trdeg}_k(K)$. Then there exist elements
+$x_1, \ldots, x_{r + 1}$ of $K$ such that
+\begin{enumerate}
+\item $x_1, \ldots, x_r$ is a transcendence basis of $K$ over $k$,
+\item $K = k(x_1, \ldots, x_{r + 1})$, and
+\item $x_{r + 1}$ is separable over $k(x_1, \ldots, x_r)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Combine the definition with Fields, Lemma \ref{fields-lemma-primitive-element}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-make-separably-generated}
+Let $K/k$ be a finitely generated field extension.
+There exists a diagram
+$$
+\xymatrix{
+K \ar[r] & K' \\
+k \ar[u] \ar[r] & k' \ar[u]
+}
+$$
+where $k'/k$, $K'/K$ are finite purely inseparable field
+extensions such that $K'/k'$ is a separably generated field extension.
+\end{lemma}
+
+\begin{proof}
+This lemma is only interesting when the characteristic of $k$ is $p > 0$.
+Choose $x_1, \ldots, x_r$ a transcendence basis of $K$ over $k$.
+As $K$ is finitely generated over $k$ the extension
+$k(x_1, \ldots, x_r) \subset K$ is finite.
+Let $K/K_{sep}/k(x_1, \ldots, x_r)$ be the subextension
+found in
+Fields, Lemma \ref{fields-lemma-separable-first}.
+If $K = K_{sep}$ then we are done.
+We will use induction on $d = [K : K_{sep}]$.
+
+\medskip\noindent
+Assume that $d > 1$. Choose a $\beta \in K$ with
+$\alpha = \beta^p \in K_{sep}$ and $\beta \not \in K_{sep}$.
+Let $P = T^n + a_1T^{n - 1} + \ldots + a_n$
+be the minimal polynomial of $\alpha$ over $k(x_1, \ldots, x_r)$.
+Let $k'/k$ be a finite purely inseparable extension
+obtained by adjoining $p$th roots such that each $a_i$ is a
+$p$th power in $k'(x_1^{1/p}, \ldots, x_r^{1/p})$.
+Such an extension exists; details omitted.
+Let $L$ be a field fitting into the diagram
+$$
+\xymatrix{
+K \ar[r] & L \\
+k(x_1, \ldots, x_r) \ar[u] \ar[r] & k'(x_1^{1/p}, \ldots, x_r^{1/p}) \ar[u]
+}
+$$
+We may and do assume $L$ is the compositum of $K$ and
+$k'(x_1^{1/p}, \ldots, x_r^{1/p})$. Let
+$L/L_{sep}/k'(x_1^{1/p}, \ldots, x_r^{1/p})$
+be the subextension found in
+Fields, Lemma \ref{fields-lemma-separable-first}.
+Then $L_{sep}$ is the compositum of
+$K_{sep}$ and $k'(x_1^{1/p}, \ldots, x_r^{1/p})$.
+The element $\alpha \in L_{sep}$ is a zero of the polynomial
+$P$ all of whose coefficients are $p$th powers in
+$k'(x_1^{1/p}, \ldots, x_r^{1/p})$ and whose roots are
+pairwise distinct. By
+Fields, Lemma \ref{fields-lemma-pth-root}
+we see that $\alpha = (\alpha')^p$ for some $\alpha' \in L_{sep}$.
+Clearly, this means that $\beta$ maps to $\alpha' \in L_{sep}$.
+In other words, we get the tower of fields
+$$
+\xymatrix{
+K \ar[r] & L \\
+K_{sep}(\beta) \ar[r] \ar[u] & L_{sep} \ar[u] \\
+K_{sep} \ar[r] \ar[u] & L_{sep} \ar@{=}[u] \\
+k(x_1, \ldots, x_r) \ar[u] \ar[r] & k'(x_1^{1/p}, \ldots, x_r^{1/p}) \ar[u] \\
+k \ar[r] \ar[u] & k' \ar[u]
+}
+$$
+Thus this construction leads to a new situation with
+$[L : L_{sep}] < [K : K_{sep}]$. By induction we can find
+$k' \subset k''$ and $L \subset L'$ as in the lemma for the
+extension $L/k'$. Then the extensions $k''/k$ and
+$L'/K$ work for the extension $K/k$.
+This proves the lemma.
+\end{proof}
+
+
+
+
+\section{Geometrically reduced algebras}
+\label{section-geometrically-reduced}
+
+\noindent
+The main result on geometrically reduced algebras is
+Lemma \ref{lemma-geometrically-reduced-finite-purely-inseparable-extension}.
+We suggest the reader skip to the lemma after reading the definition.
+
+\begin{definition}
+\label{definition-geometrically-reduced}
+Let $k$ be a field. Let $S$ be a $k$-algebra.
+We say $S$ is {\it geometrically reduced over $k$}
+if for every field extension $K/k$ the
+$K$-algebra $K \otimes_k S$ is reduced.
+\end{definition}
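+
+\noindent
+A reduced $k$-algebra need not be geometrically reduced. For example,
+let $k = \mathbf{F}_p(t)$ and $S = k[x]/(x^p - t)$. Then $S$ is a
+field, hence reduced, but for the finite purely inseparable extension
+$k' = k(t^{1/p})$ the ring
+$$
+k' \otimes_k S = k'[x]/(x^p - t) = k'[x]/((x - t^{1/p})^p)
+$$
+is not reduced.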
+
+\noindent
+Let $k$ be a field and let $S$ be a reduced $k$-algebra.
+To check that $S$ is geometrically reduced it will suffice
+to check that $\overline{k} \otimes_k S$ is reduced (where
+$\overline{k}$ denotes the algebraic closure of $k$).
+In fact it is enough to check this for finite purely inseparable
+field extensions $k'/k$. See
+Lemma \ref{lemma-geometrically-reduced-finite-purely-inseparable-extension}.
+
+\begin{lemma}
+\label{lemma-subalgebra-separable}
+Elementary properties of geometrically reduced algebras.
+Let $k$ be a field. Let $S$ be a $k$-algebra.
+\begin{enumerate}
+\item If $S$ is geometrically reduced over $k$ so is every
+$k$-subalgebra.
+\item If all finitely generated $k$-subalgebras of $S$ are
+geometrically reduced, then $S$ is geometrically reduced.
+\item A directed colimit of geometrically reduced $k$-algebras
+is geometrically reduced.
+\item If $S$ is geometrically reduced over $k$, then any localization
+of $S$ is geometrically reduced over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. The second and third properties follow from the fact that
+tensor product commutes with colimits.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-reduced-permanence}
+Let $k$ be a field.
+If $R$ is geometrically reduced over $k$,
+and $S \subset R$ is a multiplicative subset, then the localization
+$S^{-1}R$ is geometrically reduced over $k$.
+If $R$ is geometrically reduced over $k$, then $R[x]$ is geometrically
+reduced over $k$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hints: A localization of a reduced ring is reduced, and
+localization commutes with tensor products.
+\end{proof}
+
+\noindent
+In the proofs of the following lemmas we will repeatedly use
+the following observation: Suppose that $R' \subset R$ and
+$S' \subset S$ are inclusions of $k$-algebras.
+Then the map $R' \otimes_k S' \to R \otimes_k S$
+is injective.
+
+\begin{lemma}
+\label{lemma-limit-argument}
+Let $k$ be a field. Let $R$, $S$ be $k$-algebras.
+\begin{enumerate}
+\item If $R \otimes_k S$ is nonreduced, then there exist
+finitely generated subalgebras $R' \subset R$,
+$S' \subset S$ such that $R' \otimes_k S'$ is not reduced.
+\item If $R \otimes_k S$ contains a nonzero zerodivisor, then there exist
+finitely generated subalgebras $R' \subset R$,
+$S' \subset S$ such that $R' \otimes_k S'$ contains a nonzero zerodivisor.
+\item If $R \otimes_k S$ contains a nontrivial idempotent, then there exist
+finitely generated subalgebras $R' \subset R$,
+$S' \subset S$ such that $R' \otimes_k S'$ contains a nontrivial idempotent.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Suppose $z \in R \otimes_k S$ is a nonzero nilpotent. We may write
+$z = \sum_{i = 1, \ldots, n} x_i \otimes y_i$.
+Take $R'$ the $k$-subalgebra of $R$ generated by
+the $x_i$ and $S'$ the $k$-subalgebra of $S$ generated by the $y_i$.
+Then $z \in R' \otimes_k S'$ and $z$ is still a nonzero nilpotent
+there, since the map $R' \otimes_k S' \to R \otimes_k S$ is injective
+by the observation preceding the lemma.
+The second and third statements are proved in the same way.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-reduced-any-reduced-base-change}
+Let $k$ be a field.
+Let $S$ be a geometrically reduced $k$-algebra.
+Let $R$ be any reduced $k$-algebra.
+Then $R \otimes_k S$ is reduced.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-limit-argument}
+we may assume that $R$ is of finite type over $k$.
+Then $R$, as a reduced Noetherian ring, embeds into a finite
+product of fields
+(see Lemmas \ref{lemma-total-ring-fractions-no-embedded-points},
+\ref{lemma-Noetherian-irreducible-components}, and
+\ref{lemma-minimal-prime-reduced-ring}).
+Hence we may assume $R$ is a finite product of
+fields. In this case it follows from
+Definition \ref{definition-geometrically-reduced}
+that $R \otimes_k S$ is reduced.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-separable-extension-preserves-reducedness}
+Let $k$ be a field.
+Let $S$ be a reduced $k$-algebra.
+Let $K/k$ be either a separable field extension,
+or a separably generated field extension.
+Then $K \otimes_k S$ is reduced.
+\end{lemma}
+
+\begin{proof}
+Assume $k \subset K$ is separable.
+By Lemma \ref{lemma-limit-argument}
+we may assume that $S$ is of finite type over $k$
+and $K$ is finitely generated over $k$.
+Then $S$ embeds into a finite product of fields,
+namely its total ring of fractions (see
+Lemmas \ref{lemma-minimal-prime-reduced-ring} and
+\ref{lemma-total-ring-fractions-no-embedded-points}).
+Hence we may actually assume that $S$ is a domain.
+We choose $x_1, \ldots, x_{r + 1} \in K$ as in
+Lemma \ref{lemma-generating-finitely-generated-separable-field-extensions}.
+Let $P \in k(x_1, \ldots, x_r)[T]$
+be the minimal polynomial of $x_{r + 1}$. It is a separable polynomial.
+It is easy to see that
+$k[x_1, \ldots, x_r] \otimes_k S = S[x_1, \ldots, x_r]$ is a domain.
+This implies $k(x_1, \ldots, x_r) \otimes_k S$ is a domain
+as it is a localization of $S[x_1, \ldots, x_r]$.
+The ring extension $k(x_1, \ldots, x_r) \otimes_k S \subset K \otimes_k S$
+is generated by a single element $x_{r + 1}$ with a single
+equation, namely $P$. Hence $K \otimes_k S$ embeds into
+$F[T]/(P)$ where $F$ is the fraction field of $k(x_1, \ldots, x_r) \otimes_k S$.
+Since $P$ is separable this is a finite product of fields and we win.
+
+\medskip\noindent
+At this point we do not yet know that a separably generated field
+extension is separable, so we have to prove the lemma in this case also.
+To do this suppose that $\{x_i\}_{i \in I}$ is a separating
+transcendence basis for $K$ over $k$. For any finite set of elements
+$\lambda_j \in K$ there exists a finite subset $T \subset I$ such
+that $k(\{x_i\}_{i\in T}) \subset k(\{x_i\}_{i \in T} \cup \{\lambda_j\})$
+is finite separable. Hence we see that $K$ is a directed colimit of
+finitely generated and separably generated extensions of $k$. Thus
+the argument of the preceding paragraph applies to this case as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generic-points-geometrically-reduced}
+Let $k$ be a field and let $S$ be a $k$-algebra. Assume that
+$S$ is reduced and that $S_{\mathfrak p}$ is geometrically
+reduced for every minimal prime $\mathfrak p$ of $S$.
+Then $S$ is geometrically reduced.
+\end{lemma}
+
+\begin{proof}
+Since $S$ is reduced the map
+$S \to \prod_{\mathfrak p\text{ minimal}} S_{\mathfrak p}$
+is injective, see
+Lemma \ref{lemma-reduced-ring-sub-product-fields}.
+If $K/k$ is a field extension, then the maps
+$$
+S \otimes_k K \to (\prod S_\mathfrak p) \otimes_k K \to
+\prod S_\mathfrak p \otimes_k K
+$$
+are injective: the first as $k \to K$ is flat and the second by inspection
+because $K$ is a free $k$-module. As $S_\mathfrak p$ is geometrically
+reduced the ring on the right is reduced. Thus we see that $S \otimes_k K$
+is reduced as a subring of a reduced ring.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-separable-algebraic-diagonal}
+Let $k'/k$ be a separable algebraic extension.
+Then there exists a multiplicative subset $S \subset k' \otimes_k k'$
+such that the multiplication map $k' \otimes_k k' \to k'$
+is identified with $k' \otimes_k k' \to S^{-1}(k' \otimes_k k')$.
+\end{lemma}
+
+\begin{proof}
+First assume $k'/k$ is finite separable. Then $k' = k(\alpha)$,
+see Fields, Lemma \ref{fields-lemma-primitive-element}.
+Let $P \in k[x]$ be the minimal polynomial of $\alpha$ over $k$.
+Then $P$ is an irreducible, separable, monic polynomial, see
+Fields, Section \ref{fields-section-separable-extensions}.
+Then $k'[x]/(P) \to k' \otimes_k k'$,
+$\sum \alpha_i x^i \mapsto \sum \alpha_i \otimes \alpha^i$ is an isomorphism.
+We can factor $P = (x - \alpha) Q$ in $k'[x]$ and since $P$
+is separable we see that $Q(\alpha) \not = 0$.
+Then it is clear that the multiplicative set $S'$ generated by
+$Q$ in $k'[x]/(P)$ works, i.e., that $k' = (S')^{-1}(k'[x]/(P))$.
+By transport of structure the image $S$ of $S'$ in $k' \otimes_k k'$
+works.
+
+\medskip\noindent
+In the general case we write $k' = \bigcup k_i$ as the union
+of its finite subfield extensions over $k$. For each $i$ there
+is a multiplicative subset $S_i \subset k_i \otimes_k k_i$
+such that $k_i = S_i^{-1}(k_i \otimes_k k_i)$. Then
+$S = \bigcup S_i \subset k' \otimes_k k'$ works.
+\end{proof}
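+
+\noindent
+For example, for the extension $\mathbf{C}/\mathbf{R}$ we have
+$\mathbf{C} \otimes_\mathbf{R} \mathbf{C} \cong
+\mathbf{C}[x]/(x^2 + 1) \cong \mathbf{C} \times \mathbf{C}$
+and the multiplication map
+$\mathbf{C} \otimes_\mathbf{R} \mathbf{C} \to \mathbf{C}$
+is the projection onto one of the factors, i.e., the localization
+at the corresponding idempotent.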
+
+\begin{lemma}
+\label{lemma-geometrically-reduced-over-separable-algebraic}
+Let $k'/k$ be a separable algebraic field extension.
+Let $A$ be an algebra over $k'$. Then $A$ is geometrically
+reduced over $k$ if and only if it is geometrically reduced over $k'$.
+\end{lemma}
+
+\begin{proof}
+Assume $A$ is geometrically reduced over $k'$.
+Let $K/k$ be a field extension. Then $K \otimes_k k'$ is
+a reduced ring by
+Lemma \ref{lemma-separable-extension-preserves-reducedness}.
+Hence by Lemma \ref{lemma-geometrically-reduced-any-reduced-base-change}
+we find that $K \otimes_k A = (K \otimes_k k') \otimes_{k'} A$ is reduced.
+
+\medskip\noindent
+Assume $A$ is geometrically reduced over $k$. Let $K/k'$ be a field
+extension. Then
+$$
+K \otimes_{k'} A = (K \otimes_k A) \otimes_{(k' \otimes_k k')} k'
+$$
+Since $k' \otimes_k k' \to k'$ is a localization by
+Lemma \ref{lemma-separable-algebraic-diagonal},
+we see that $K \otimes_{k'} A$
+is a localization of a reduced algebra, hence reduced.
+\end{proof}
+
+
+
+
+
+\section{Separable extensions, continued}
+\label{section-separability-continued}
+
+\noindent
+In this section we continue the discussion started in
+Section \ref{section-separability}.
+Let $p$ be a prime number and let $k$ be a field of characteristic $p$.
+In this case we write $k^{1/p}$ for the extension of $k$ gotten by
+adjoining $p$th roots of all the elements of $k$ to $k$.
+(In other words it is the subfield of an algebraic closure of
+$k$ generated by the $p$th roots of elements of $k$.)
+
+\begin{lemma}
+\label{lemma-characterize-separable-field-extensions}
+Let $k$ be a field of characteristic $p > 0$.
+Let $K/k$ be a field extension.
+The following are equivalent:
+\begin{enumerate}
+\item $K$ is separable over $k$,
+\item the ring $K \otimes_k k^{1/p}$ is reduced, and
+\item $K$ is geometrically reduced over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (3) follows from
+Lemma \ref{lemma-separable-extension-preserves-reducedness}.
+The implication (3) $\Rightarrow$ (2) is immediate.
+
+\medskip\noindent
+Assume (2). Let $K/L/k$ be a subextension such that
+$L$ is a finitely generated field extension of $k$.
+We have to show that we can find a separating transcendence basis of $L$.
+The assumption implies that $L \otimes_k k^{1/p}$ is reduced.
+Let $x_1, \ldots, x_r$ be a transcendence basis of $L$ over $k$ such
+that the degree of inseparability of the finite extension
+$k(x_1, \ldots, x_r) \subset L$ is minimal.
+If $L$ is separable over $k(x_1, \ldots, x_r)$ then we win.
+Assume this is not the case to get a contradiction.
+Then there exists an element $\alpha \in L$ which is not
+separable over $k(x_1, \ldots, x_r)$. Let $P(T) \in k(x_1, \ldots, x_r)[T]$
+be the minimal polynomial of $\alpha$ over $k(x_1, \ldots, x_r)$.
+After replacing $\alpha$ by $f \alpha$ for some nonzero
+$f \in k[x_1, \ldots, x_r]$
+we may and do assume that $P$ lies in $k[x_1, \ldots, x_r, T]$.
+Because $\alpha$ is not separable $P$ is a polynomial in $T^p$, see
+Fields, Lemma \ref{fields-lemma-irreducible-polynomials}.
+Let $dp$ be the degree of $P$ as a polynomial in $T$.
+Since $P$ is the minimal polynomial of $\alpha$ the monomials
+$$
+x_1^{e_1} \ldots x_r^{e_r} \alpha^e
+$$
+for $e < dp$ are linearly independent over $k$ in $L$. We claim that
+the element $\partial P/\partial x_i \in k[x_1, \ldots, x_r, T]$ is not zero
+for at least one $i$.
+Namely, if this were not the case, then $P$ would actually be a polynomial in
+$x_1^p, \ldots, x_r^p, T^p$. In that case we can consider
+$P^{1/p} \in k^{1/p}[x_1, \ldots, x_r, T]$. This would map to
+$P^{1/p}(x_1, \ldots, x_r, \alpha)$ which is a nilpotent element of
+$k^{1/p} \otimes_k L$ and hence zero. On the other hand,
+$P^{1/p}(x_1, \ldots, x_r, \alpha)$ is a $k^{1/p}$-linear combination
+of the monomials listed above, hence nonzero in $k^{1/p} \otimes_k L$.
+This is a contradiction which proves our claim.
+
+\medskip\noindent
+Thus, after renumbering, we may assume that $\partial P/\partial x_1$
+is not zero. As $P$ is an irreducible polynomial in $T$ over
+$k(x_1, \ldots, x_r)$ it is irreducible as a polynomial in
+$x_1, \ldots, x_r, T$, hence by Gauss's lemma it is irreducible
+as a polynomial in $x_1$ over $k(x_2, \ldots, x_r, T)$.
+Since the transcendence degree of $L$ is $r$ we see that
+$x_2, \ldots, x_r, \alpha$ are algebraically independent.
+Hence $P(X, x_2, \ldots, x_r, \alpha) \in k(x_2, \ldots, x_r, \alpha)[X]$
+is irreducible. It follows that $x_1$ is separably algebraic over
+$k(x_2, \ldots, x_r, \alpha)$. This means that
+the degree of inseparability of the finite extension
+$k(x_2, \ldots, x_r, \alpha) \subset L$ is less than the
+degree of inseparability of the finite extension
+$k(x_1, \ldots, x_r) \subset L$, which is a contradiction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-separably-generated-separable}
+A separably generated field extension is separable.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-separable-extension-preserves-reducedness}
+with Lemma \ref{lemma-characterize-separable-field-extensions}.
+\end{proof}
+
+\noindent
+In the following lemma we will use the notion of the perfect closure
+which is defined in
+Definition \ref{definition-perfection}.
+
+\begin{lemma}
+\label{lemma-geometrically-reduced-finite-purely-inseparable-extension}
+Let $k$ be a field. Let $S$ be a $k$-algebra.
+The following are equivalent:
+\begin{enumerate}
+\item $k' \otimes_k S$ is reduced for every finite
+purely inseparable extension $k'$ of $k$,
+\item $k^{1/p} \otimes_k S$ is reduced,
+\item $k^{perf} \otimes_k S$ is reduced, where $k^{perf}$ is the
+perfect closure of $k$,
+\item $\overline{k} \otimes_k S$ is reduced, where $\overline{k}$ is the
+algebraic closure of $k$, and
+\item $S$ is geometrically reduced over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that any finite purely inseparable extension $k'/k$ embeds
+in $k^{perf}$. Moreover, $k^{1/p}$ embeds into $k^{perf}$ which embeds
+into $\overline{k}$. Thus it is
+clear that (5) $\Rightarrow$ (4) $\Rightarrow$ (3) $\Rightarrow$ (2)
+and that (3) $\Rightarrow$ (1).
+
+\medskip\noindent
+We prove that (1) $\Rightarrow$ (5).
+Assume $k' \otimes_k S$ is reduced for every finite
+purely inseparable extension $k'$ of $k$. Let $K/k$ be
+an extension of fields. We have to show that $K \otimes_k S$
+is reduced. By Lemma \ref{lemma-limit-argument} we reduce to the case where
+$K/k$ is a finitely generated field extension. Choose a diagram
+$$
+\xymatrix{
+K \ar[r] & K' \\
+k \ar[u] \ar[r] & k' \ar[u]
+}
+$$
+as in Lemma \ref{lemma-make-separably-generated}.
+By assumption $k' \otimes_k S$ is reduced.
+By Lemma \ref{lemma-separable-extension-preserves-reducedness}
+it follows that $K' \otimes_k S$ is reduced.
+Hence we conclude that $K \otimes_k S$ is reduced as desired.
+
+\medskip\noindent
+Finally we prove that (2) $\Rightarrow$ (5).
+Assume $k^{1/p} \otimes_k S$ is reduced. Then $S$ is reduced.
+Moreover, for each localization $S_{\mathfrak p}$ at a minimal
+prime $\mathfrak p$, the ring $k^{1/p}\otimes_k S_{\mathfrak p}$
+is a localization of $k^{1/p} \otimes_k S$ hence is reduced.
+But $S_{\mathfrak p}$ is a field by
+Lemma \ref{lemma-minimal-prime-reduced-ring},
+hence $S_{\mathfrak p}$ is geometrically reduced by
+Lemma \ref{lemma-characterize-separable-field-extensions}.
+It follows from Lemma \ref{lemma-generic-points-geometrically-reduced}
+that $S$ is geometrically reduced.
+\end{proof}
+
+
+
+\section{Perfect fields}
+\label{section-perfect-fields}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-perfect}
+Let $k$ be a field. We say $k$ is {\it perfect}
+if every field extension of $k$ is separable over $k$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-perfect}
+A field $k$ is perfect if and only if it is a field of characteristic $0$
+or a field of characteristic $p > 0$ such that every element has a $p$th
+root.
+\end{lemma}
+
+\begin{proof}
+The characteristic zero case is clear.
+Assume the characteristic of $k$ is $p > 0$.
+If $k$ is perfect, then all the field extensions where we adjoin
+a $p$th root of an element of $k$ have to be trivial, hence every
+element of $k$ has a $p$th root. Conversely if every element has a $p$th
+root, then $k = k^{1/p}$ and every field extension of $k$ is
+separable by
+Lemma \ref{lemma-characterize-separable-field-extensions}.
+\end{proof}
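+
+\medskip\noindent
+As a quick illustration of Lemma \ref{lemma-perfect}: every finite field $k$
+is perfect, since the Frobenius map $x \mapsto x^p$ is injective and hence
+surjective on a finite set. By contrast, $k = \mathbf{F}_p(t)$ is not
+perfect, because $t$ has no $p$th root in $k$: if $t = (f/g)^p$ with
+$f, g \in \mathbf{F}_p[t]$, $g \not = 0$, then
+$$
+p \deg(f) = \deg(f^p) = \deg(t g^p) = 1 + p \deg(g)
+$$
+which is impossible as the left hand side is divisible by $p$ and the
+right hand side is not.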
+
+\begin{lemma}
+\label{lemma-make-separable}
+Let $K/k$ be a finitely generated field extension.
+There exists a diagram
+$$
+\xymatrix{
+K \ar[r] & K' \\
+k \ar[u] \ar[r] & k' \ar[u]
+}
+$$
+where $k'/k$, $K'/K$ are finite purely inseparable field
+extensions such that $K'/k'$ is a separable field extension.
+In this situation we can assume that $K' = k'K$ is the compositum,
+and also that $K' = (k' \otimes_k K)_{red}$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-make-separably-generated}
+we can find such a diagram with $K'/k'$ separably generated.
+By
+Lemma \ref{lemma-separably-generated-separable}
+this implies that $K'$ is separable over $k'$.
+The compositum $k'K$ is a subextension of $K'/k'$ and hence
+$k' \subset k'K$ is separable by
+Lemma \ref{lemma-subextensions-are-separable}.
+The ring $(k' \otimes_k K)_{red}$ is a domain as for some
+$n \gg 0$ the map $x \mapsto x^{p^n}$ maps it into $K$.
+Hence it is a field by
+Lemma \ref{lemma-integral-over-field}.
+Thus the map $(k' \otimes_k K)_{red} \to K'$ is an isomorphism onto $k'K$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfection}
+\begin{slogan}
+Every field has a unique perfect closure.
+\end{slogan}
+For every field $k$ there exists a purely inseparable extension
+$k'/k$ such that $k'$ is perfect. The field extension
+$k'/k$ is unique up to unique isomorphism.
+\end{lemma}
+
+\begin{proof}
+If the characteristic of $k$ is zero, then $k' = k$ is the
+unique choice. Assume the characteristic of $k$ is $p > 0$.
+For every $n > 0$ there exists a unique algebraic extension
+$k \subset k^{1/p^n}$ such that (a) every element $\lambda \in k$
+has a $p^n$th root in $k^{1/p^n}$ and (b) for every element
+$\mu \in k^{1/p^n}$ we have $\mu^{p^n} \in k$.
+Namely, consider the ring map $k \to k^{1/p^n} = k$, $x \mapsto x^{p^n}$.
+This is injective and satisfies (a) and (b). It is clear that
+$k^{1/p^n} \subset k^{1/p^{n + 1}}$ as extensions of $k$ via
+the map $y \mapsto y^p$. Then we can take $k' = \bigcup k^{1/p^n}$.
+Some details omitted.
+\end{proof}
+
+\begin{definition}
+\label{definition-perfection}
+Let $k$ be a field. The field extension $k'/k$ of Lemma \ref{lemma-perfection}
+is called the {\it perfect closure} of $k$. Notation $k^{perf}/k$.
+\end{definition}
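+
+\medskip\noindent
+Concretely, if $k$ has characteristic $p > 0$, then the construction in
+the proof of Lemma \ref{lemma-perfection} exhibits the perfect closure as
+$$
+k^{perf} = \bigcup\nolimits_{n \geq 0} k^{1/p^n}
+$$
+For example, the perfect closure of $\mathbf{F}_p(t)$ is
+$\bigcup_{n \geq 0} \mathbf{F}_p(t^{1/p^n})$.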
+
+\noindent
+Note that if $k'/k$ is any algebraic purely inseparable extension, then
+$k'$ is a subextension of $k^{perf}$, i.e., $k^{perf}/k'/k$. Namely,
+$(k')^{perf}$ is isomorphic to $k^{perf}$ by the uniqueness of
+Lemma \ref{lemma-perfection}.
+
+\begin{lemma}
+\label{lemma-perfect-reduced}
+Let $k$ be a perfect field.
+Any reduced $k$-algebra is geometrically reduced over $k$.
+Let $R$, $S$ be $k$-algebras.
+Assume both $R$ and $S$ are reduced.
+Then the $k$-algebra $R \otimes_k S$ is reduced.
+\end{lemma}
+
+\begin{proof}
+The first statement follows from
+Lemma \ref{lemma-geometrically-reduced-finite-purely-inseparable-extension}.
+For the second statement use the first statement and
+Lemma \ref{lemma-geometrically-reduced-any-reduced-base-change}.
+\end{proof}
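+
+\medskip\noindent
+The assumption that $k$ be perfect cannot be dropped in
+Lemma \ref{lemma-perfect-reduced}. For example, let $k = \mathbf{F}_p(t)$
+and let $R = S = k(t^{1/p})$, which is a field and in particular reduced.
+Then
+$$
+R \otimes_k S \cong S[x]/(x^p - t) = S[x]/(x - t^{1/p})^p
+$$
+contains the nonzero nilpotent element given by the class of $x - t^{1/p}$.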
+
+
+
+
+
+
+
+
+
+\section{Universal homeomorphisms}
+\label{section-universal-homeomorphism}
+
+\noindent
+Let $k'/k$ be an algebraic purely inseparable field
+extension. Then for any $k$-algebra $R$ the ring map
+$R \to k' \otimes_k R$ induces a homeomorphism of spectra.
+The reason for this is the slightly more general
+Lemma \ref{lemma-p-ring-map} below.
+
+\begin{lemma}
+\label{lemma-surjective-locally-nilpotent-kernel}
+Let $\varphi : R \to S$ be a surjective map with locally nilpotent kernel.
+Then $\varphi$ induces a homeomorphism of spectra and isomorphisms
+on residue fields. For any ring map $R \to R'$ the ring map
+$R' \to R' \otimes_R S$ is surjective with locally nilpotent kernel.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-spec-closed} the map $\Spec(S) \to \Spec(R)$ is
+a homeomorphism onto the closed subset $V(\Ker(\varphi))$. Of course
+$V(\Ker(\varphi)) = \Spec(R)$ because every prime ideal of $R$ contains
+every nilpotent element of $R$. This also implies the statement on
+residue fields. By right exactness of tensor product we see that
+$\Ker(\varphi)R'$ is the kernel of the surjective map $R' \to R' \otimes_R S$.
+Hence the final statement by Lemma \ref{lemma-locally-nilpotent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-powers-field}
+\begin{reference}
+\cite[Lemma 3.1.6]{Alper-adequate}
+\end{reference}
+Let $k'/k$ be a field extension. The following are equivalent
+\begin{enumerate}
+\item for each $x \in k'$ there exists an $n > 0$ such that $x^n \in k$, and
+\item $k' = k$ or $k$ and $k'$ have characteristic $p > 0$ and
+either $k'/k$ is a purely inseparable extension or
+$k$ and $k'$ are algebraic extensions of $\mathbf{F}_p$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Observe that each of the possibilities listed in (2) satisfies (1).
+Thus we assume $k'/k$ satisfies (1) and we prove that we are in
+one of the cases of (2). Discarding the case $k = k'$ we may assume
+$k' \not = k$. It is clear that $k'/k$ is algebraic.
+Hence we may assume that $k'/k$ is a nontrivial finite extension.
+Let $k'/k'_{sep}/k$ be the separable subextension
+found in Fields, Lemma \ref{fields-lemma-separable-first}.
+We have to show that $k = k'_{sep}$ or that $k$ is algebraic over
+$\mathbf{F}_p$. Thus we may assume that $k'/k$
+is a nontrivial finite separable extension and we have to show
+$k$ is algebraic over $\mathbf{F}_p$.
+
+\medskip\noindent
+Pick $x \in k'$, $x \not \in k$. Pick $n, m > 0$ such that
+$x^n \in k$ and $(x + 1)^m \in k$. Let $\overline{k}$ be an
+algebraic closure of $k$. We can choose embeddings
+$\sigma, \tau : k' \to \overline{k}$ with $\sigma(x) \not = \tau(x)$.
+This follows from the discussion in
+Fields, Section \ref{fields-section-separable-extensions}
+(more precisely, after replacing $k'$ by the $k$-extension
+generated by $x$ it follows from
+Fields, Lemma \ref{fields-lemma-count-embeddings}).
+Then we see that $\sigma(x) = \zeta \tau(x)$ for some
+$n$th root of unity $\zeta$ in $\overline{k}$.
+Similarly, we see that $\sigma(x + 1) = \zeta' \tau(x + 1)$
+for some $m$th root of unity $\zeta' \in \overline{k}$.
+Since $\sigma(x + 1) \not = \tau(x + 1)$ we see $\zeta' \not = 1$.
+Then
+$$
+\zeta' (\tau(x) + 1) =
+\zeta' \tau(x + 1) =
+\sigma(x + 1) =
+\sigma(x) + 1 =
+\zeta \tau(x) + 1
+$$
+implies that
+$$
+\tau(x) (\zeta' - \zeta) = 1 - \zeta'
+$$
+hence $\zeta' \not = \zeta$ and
+$$
+\tau(x) = (1 - \zeta')/(\zeta' - \zeta)
+$$
+Hence every element of $k'$ which is not in $k$ is algebraic over the prime
+subfield. Since $k'$ is generated over the prime subfield by the elements
+of $k'$ which are not in $k$, we conclude that $k'$ (and hence $k$)
+is algebraic over the prime subfield.
+
+\medskip\noindent
+Finally, if the characteristic of $k$ is $0$, the above leads to a
+contradiction as follows (we encourage the reader to find their own proof).
+For every rational number $y$ we similarly get a root of unity
+$\zeta_y$ such that $\sigma(x + y) = \zeta_y\tau(x + y)$.
+Then we find
+$$
+\zeta \tau(x) + y = \zeta_y(\tau(x) + y)
+$$
+and by our formula for $\tau(x)$ above we conclude
+$\zeta_y \in \mathbf{Q}(\zeta, \zeta')$. Since the number field
+$\mathbf{Q}(\zeta, \zeta')$ contains only a finite number of roots of
+unity we find two distinct rational numbers $y, y'$ with
+$\zeta_y = \zeta_{y'}$. Then we conclude that
+$$
+y - y' =
+\sigma(x + y) - \sigma(x + y') =
+\zeta_y\tau(x + y) - \zeta_{y'}\tau(x + y') = \zeta_y(y - y')
+$$
+which implies $\zeta_y = 1$ a contradiction.
+\end{proof}
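+
+\medskip\noindent
+The last possibility in part (2) of Lemma \ref{lemma-powers-field} does
+occur with $k'/k$ separable and nontrivial. For example, take
+$k = \mathbf{F}_p$ and $k' = \overline{\mathbf{F}}_p$. Every nonzero
+$x \in k'$ lies in a finite subfield $\mathbf{F}_{p^m}$ and hence satisfies
+$$
+x^{p^m - 1} = 1 \in k
+$$
+so that condition (1) of the lemma holds.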
+
+\begin{lemma}
+\label{lemma-powers}
+Let $\varphi : R \to S$ be a ring map. If
+\begin{enumerate}
+\item for any $x \in S$ there exists $n > 0$ such that
+$x^n$ is in the image of $\varphi$, and
+\item $\Ker(\varphi)$ is locally nilpotent,
+\end{enumerate}
+then $\varphi$ induces a homeomorphism on spectra and induces residue
+field extensions satisfying the equivalent conditions of
+Lemma \ref{lemma-powers-field}.
+\end{lemma}
+
+\begin{proof}
+Assume (1) and (2). Let $\mathfrak q, \mathfrak q'$ be primes of $S$
+lying over the same prime ideal $\mathfrak p$ of $R$. Suppose $x \in S$ with
+$x \in \mathfrak q$, $x \not \in \mathfrak q'$. Then $x^n \in \mathfrak q$
+and $x^n \not \in \mathfrak q'$ for all $n > 0$. If $x^n = \varphi(y)$ with
+$y \in R$ for some $n > 0$ then
+$$
+x^n \in \mathfrak q \Rightarrow y \in \mathfrak p \Rightarrow
+x^n \in \mathfrak q'
+$$
+which is a contradiction. Hence there does not exist an $x$ as above and
+we conclude that $\mathfrak q = \mathfrak q'$, i.e., the map on spectra
+is injective. By assumption (2) the kernel $I = \Ker(\varphi)$ is
+contained in every prime, hence $\Spec(R) = \Spec(R/I)$ as
+topological spaces. As the induced map $R/I \to S$ is integral by
+assumption (1)
+Lemma \ref{lemma-integral-overring-surjective}
+shows that $\Spec(S) \to \Spec(R/I)$ is surjective. Combining
+the above we see that $\Spec(S) \to \Spec(R)$ is bijective.
+If $x \in S$ is arbitrary, and we pick $y \in R$ such that
+$\varphi(y) = x^n$ for some $n > 0$, then we see that the open
+$D(x) \subset \Spec(S)$ corresponds to the open
+$D(y) \subset \Spec(R)$ via the bijection above. Hence we see that
+the map $\Spec(S) \to \Spec(R)$ is a homeomorphism.
+
+\medskip\noindent
+To see the statement on residue fields, let $\mathfrak q \subset S$
+be a prime lying over a prime ideal $\mathfrak p \subset R$. Let
+$x \in \kappa(\mathfrak q)$. If we think of $\kappa(\mathfrak q)$
+as the residue field of the local ring $S_\mathfrak q$, then we
+see that $x$ is the image of some $y/z \in S_\mathfrak q$
+with $y \in S$, $z \in S$, $z \not \in \mathfrak q$.
+Choose $n, m > 0$ such that $y^n, z^m$ are in the image of $\varphi$.
+Then $x^{nm}$ is the residue of $(y/z)^{nm} = (y^n)^m/(z^m)^n$
+which is in the image of $R_\mathfrak p \to S_\mathfrak q$.
+Hence $x^{nm}$ is in the image of
+$\kappa(\mathfrak p) \to \kappa(\mathfrak q)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-2-3-ring-map}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item[(a)] $S$ is generated as an $R$-algebra by elements $x$ such
+that $x^2, x^3 \in \varphi(R)$, and
+\item[(b)] $\Ker(\varphi)$ is locally nilpotent.
+\end{enumerate}
+Then $\varphi$ induces isomorphisms on residue fields and
+a homeomorphism of spectra. For any ring map $R \to R'$
+the ring map $R' \to R' \otimes_R S$ also satisfies (a) and (b).
+\end{lemma}
+
+\begin{proof}
+Assume (a) and (b). The map on spectra is closed as $S$ is integral
+over $R$, see Lemmas \ref{lemma-going-up-closed} and
+\ref{lemma-integral-going-up}. The image is dense by
+Lemma \ref{lemma-image-dense-generic-points}. Thus $\Spec(S) \to \Spec(R)$
+is surjective. If $\mathfrak q \subset S$ is a prime lying over
+$\mathfrak p \subset R$ then the field extension
+$\kappa(\mathfrak q)/\kappa(\mathfrak p)$ is generated by elements
+$\alpha \in \kappa(\mathfrak q)$ whose square and cube are
+in $\kappa(\mathfrak p)$. Thus clearly $\alpha \in \kappa(\mathfrak p)$
+and we find that $\kappa(\mathfrak q) = \kappa(\mathfrak p)$.
+If $\mathfrak q, \mathfrak q'$ were two distinct primes lying over
+$\mathfrak p$, then at least one of the generators $x$ of $S$ as
+in (a) would have distinct images in
+$\kappa(\mathfrak q) = \kappa(\mathfrak p)$ and
+$\kappa(\mathfrak q') = \kappa(\mathfrak p)$.
+This would contradict the fact that both $x^2$ and $x^3$
+do have the same image. This proves that $\Spec(S) \to \Spec(R)$
+is injective hence a homeomorphism (by what was already shown).
+
+\medskip\noindent
+Since $\varphi$ induces a homeomorphism on spectra, it is in particular
+surjective on spectra which is a property preserved under any base change, see
+Lemma \ref{lemma-surjective-spec-radical-ideal}.
+Therefore for any $R \to R'$ the kernel of the ring map
+$R' \to R' \otimes_R S$ consists of nilpotent elements, see
+Lemma \ref{lemma-image-dense-generic-points},
+in other words (b) holds for $R' \to R' \otimes_R S$.
+It is clear that (a) is preserved under base change.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-help-with-powers}
+Let $p$ be a prime number. Let $n, m > 0$ be two integers. There exists
+an integer $a$ such that
+$(x + y)^{p^a}, p^a(x + y) \in \mathbf{Z}[x^{p^n}, p^nx, y^{p^m}, p^my]$.
+\end{lemma}
+
+\begin{proof}
+This is clear for $p^a(x + y)$ as soon as $a \geq n, m$.
+In fact, pick $a \gg n, m$. Write
+$$
+(x + y)^{p^a} = \sum\nolimits_{i, j \geq 0, i + j = p^a}
+{p^a \choose i, j} x^iy^j
+$$
+For every $i, j \geq 0$ with $i + j = p^a$ write
+$i = q p^n + r$ with $r \in \{0, \ldots, p^n - 1\}$ and
+$j = q' p^m + r'$ with $r' \in \{0, \ldots, p^m - 1\}$.
+The condition $(x + y)^{p^a} \in \mathbf{Z}[x^{p^n}, p^nx, y^{p^m}, p^my]$
+holds if
+$$
+p^{nr + mr'} \text{ divides } {p^a \choose i, j}
+$$
+If $r = r' = 0$ then the divisibility holds. If $r \not = 0$, then
+we write
+$$
+{p^a \choose i, j} = \frac{p^a}{i} {p^a - 1 \choose i - 1, j}
+$$
+Since $r \not = 0$ the rational number $p^a/i$ has $p$-adic
+valuation at least $a - (n - 1)$ (because $i$ is not divisible by $p^n$).
+Thus ${p^a \choose i, j}$ is divisible by $p^{a - n + 1}$ in this case.
+Similarly, we see that if $r' \not = 0$, then ${p^a \choose i, j}$ is
+divisible by $p^{a - m + 1}$. Picking $a = np^n + mp^m + n + m$ will work.
+\end{proof}
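+
+\medskip\noindent
+A small case of Lemma \ref{lemma-help-with-powers} may clarify the
+statement: take $p = 2$ and $n = m = 1$. Then $a = 1$ does not work,
+since the term $2xy$ of $(x + y)^2$ does not lie in
+$\mathbf{Z}[x^2, 2x, y^2, 2y]$, but $a = 2$ does, as the identity
+$$
+(x + y)^4 = x^4 + y^4 + 6x^2y^2 + (2x)(2y)(x^2 + y^2)
+$$
+exhibits $(x + y)^4$ as an element of $\mathbf{Z}[x^2, 2x, y^2, 2y]$.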
+
+\begin{lemma}
+\label{lemma-p-ring-map-field}
+Let $k'/k$ be a field extension. Let $p$ be a prime number.
+The following are equivalent
+\begin{enumerate}
+\item $k'$ is generated as a field extension of $k$ by elements
+$x$ such that there exists an $n > 0$ with $x^{p^n} \in k$ and
+$p^nx \in k$, and
+\item $k = k'$ or the characteristic of $k$
+and $k'$ is $p$ and $k'/k$ is purely inseparable.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $x \in k'$. If there exists an $n > 0$ with $x^{p^n} \in k$ and
+$p^nx \in k$ and if the characteristic is not $p$, then $x \in k$.
+If the characteristic is $p$, then we find $x^{p^n} \in k$
+and hence $x$ is purely inseparable over $k$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-p-ring-map}
+Let $\varphi : R \to S$ be a ring map. Let $p$ be a prime number. Assume
+\begin{enumerate}
+\item[(a)] $S$ is generated as an $R$-algebra by elements $x$ such
+that there exists an $n > 0$ with $x^{p^n} \in \varphi(R)$ and
+$p^nx \in \varphi(R)$, and
+\item[(b)] $\Ker(\varphi)$ is locally nilpotent.
+\end{enumerate}
+Then $\varphi$ induces a homeomorphism of spectra and induces
+residue field extensions satisfying the equivalent conditions
+of Lemma \ref{lemma-p-ring-map-field}. For any ring map $R \to R'$
+the ring map $R' \to R' \otimes_R S$ also satisfies (a) and (b).
+\end{lemma}
+
+\begin{proof}
+Assume (a) and (b). Note that (b) is equivalent to condition (2)
+of Lemma \ref{lemma-powers}. Let $T \subset S$ be the set of
+elements $x \in S$ such that there exists an
+integer $n > 0$ such that $x^{p^n} , p^n x \in \varphi(R)$.
+We claim that $T = S$. This will prove that condition (1) of
+Lemma \ref{lemma-powers} holds and hence $\varphi$ induces
+a homeomorphism on spectra.
+By assumption (a) it suffices to show that $T \subset S$ is an $R$-subalgebra.
+If $x \in T$ and $y \in R$, then it is clear that $yx \in T$.
+Suppose $x, y \in T$ and $n, m > 0$ such that
+$x^{p^n}, y^{p^m}, p^n x, p^m y \in \varphi(R)$.
+Then $(xy)^{p^{n + m}}, p^{n + m}xy \in \varphi(R)$
+hence $xy \in T$. We have $x + y \in T$ by Lemma \ref{lemma-help-with-powers}
+and the claim is proved.
+
+\medskip\noindent
+Since $\varphi$ induces a homeomorphism on spectra, it is in particular
+surjective on spectra which is a property preserved under any base change, see
+Lemma \ref{lemma-surjective-spec-radical-ideal}.
+Therefore for any $R \to R'$ the kernel of the ring map
+$R' \to R' \otimes_R S$ consists of nilpotent elements, see
+Lemma \ref{lemma-image-dense-generic-points},
+in other words (b) holds for $R' \to R' \otimes_R S$.
+It is clear that (a) is preserved under base change.
+Finally, the condition on residue fields follows from (a)
+as generators for $S$ as an $R$-algebra map to generators for
+the residue field extensions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-radicial}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $\varphi$ induces an injective map of spectra,
+\item $\varphi$ induces purely inseparable residue field extensions.
+\end{enumerate}
+Then for any ring map $R \to R'$ properties (1) and (2) are true for
+$R' \to R' \otimes_R S$.
+\end{lemma}
+
+\begin{proof}
+Set $S' = R' \otimes_R S$ so that we have a commutative diagram
+of continuous maps of spectra of rings
+$$
+\xymatrix{
+\Spec(S') \ar[r] \ar[d] & \Spec(S) \ar[d] \\
+\Spec(R') \ar[r] & \Spec(R)
+}
+$$
+Let $\mathfrak p' \subset R'$ be a prime ideal lying over
+$\mathfrak p \subset R$. If there is no prime ideal of $S$
+lying over $\mathfrak p$, then there is no prime ideal of
+$S'$ lying over $\mathfrak p'$. Otherwise, by
+Remark \ref{remark-fundamental-diagram} there is a unique
+prime ideal $\mathfrak r$ of $F = S \otimes_R \kappa(\mathfrak p)$
+whose residue field is purely inseparable over $\kappa(\mathfrak p)$.
+Consider the ring maps
+$$
+\kappa(\mathfrak p) \to F \to \kappa(\mathfrak r)
+$$
+By Lemma \ref{lemma-minimal-prime-reduced-ring} the ideal
+$\mathfrak r \subset F$ is locally nilpotent, hence
+we may apply Lemma \ref{lemma-surjective-locally-nilpotent-kernel}
+to the ring map $F \to \kappa(\mathfrak r)$.
+We may apply Lemma \ref{lemma-p-ring-map}
+to the ring map $\kappa(\mathfrak p) \to \kappa(\mathfrak r)$.
+Hence the composition and the second arrow in the maps
+$$
+\kappa(\mathfrak p') \to
+\kappa(\mathfrak p') \otimes_{\kappa(\mathfrak p)} F \to
+\kappa(\mathfrak p') \otimes_{\kappa(\mathfrak p)} \kappa(\mathfrak r)
+$$
+induces bijections on spectra and purely inseparable residue
+field extensions. This implies the same thing for the first
+map. Since
+$$
+\kappa(\mathfrak p') \otimes_{\kappa(\mathfrak p)} F =
+\kappa(\mathfrak p') \otimes_{\kappa(\mathfrak p)}
+\kappa(\mathfrak p) \otimes_R S =
+\kappa(\mathfrak p') \otimes_R S =
+\kappa(\mathfrak p') \otimes_{R'} R' \otimes_R S
+$$
+we conclude by the discussion in Remark \ref{remark-fundamental-diagram}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-radicial-integral}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $\varphi$ is integral,
+\item $\varphi$ induces an injective map of spectra,
+\item $\varphi$ induces purely inseparable residue field extensions.
+\end{enumerate}
+Then $\varphi$ induces a homeomorphism from $\Spec(S)$ onto a closed
+subset of $\Spec(R)$ and for any ring map
+$R \to R'$ properties (1), (2), (3) are true for $R' \to R' \otimes_R S$.
+\end{lemma}
+
+\begin{proof}
+The map on spectra is closed by
+Lemmas \ref{lemma-going-up-closed} and \ref{lemma-integral-going-up}.
+The properties are preserved under base change by
+Lemmas \ref{lemma-radicial} and \ref{lemma-base-change-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-radicial-integral-bijective}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $\varphi$ is integral,
+\item $\varphi$ induces a bijective map of spectra,
+\item $\varphi$ induces purely inseparable residue field extensions.
+\end{enumerate}
+Then $\varphi$ induces a homeomorphism on spectra and for any ring map
+$R \to R'$ properties (1), (2), (3) are true for $R' \to R' \otimes_R S$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemmas \ref{lemma-radicial-integral} and
+\ref{lemma-surjective-spec-radical-ideal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-bijective}
+Let $\varphi : R \to S$ be a ring map such that
+\begin{enumerate}
+\item the kernel of $\varphi$ is locally nilpotent, and
+\item $S$ is generated as an $R$-algebra by elements $x$
+such that there exist $n > 0$ and a polynomial $P(T) \in R[T]$
+whose image in $S[T]$ is $(T - x)^n$.
+\end{enumerate}
+Then $\Spec(S) \to \Spec(R)$ is a homeomorphism and $R \to S$
+induces purely inseparable extensions of residue fields.
+Moreover, conditions (1) and (2) remain true on arbitrary base change.
+\end{lemma}
+
+\begin{proof}
+We may replace $R$ by $R/\Ker(\varphi)$, see
+Lemma \ref{lemma-surjective-locally-nilpotent-kernel}.
+Assumption (2) implies $S$ is generated over $R$ by
+elements which are integral over $R$.
+Hence $R \subset S$ is integral
+(Lemma \ref{lemma-integral-closure-is-ring}).
+In particular $\Spec(S) \to \Spec(R)$ is surjective and closed
+(Lemmas \ref{lemma-integral-overring-surjective},
+\ref{lemma-going-up-closed}, and
+\ref{lemma-integral-going-up}).
+
+\medskip\noindent
+Let $x \in S$ be one of the generators in (2), i.e., there exists an
+$n > 0$ such that $(T - x)^n \in R[T]$.
+Let $\mathfrak p \subset R$ be a prime.
+The ring $\kappa(\mathfrak p) \otimes_R S$ is nonzero by
+the above and Lemma \ref{lemma-in-image}.
+If the characteristic of $\kappa(\mathfrak p)$ is zero,
+then looking at the coefficient of $T^{n - 1}$ in $(T - x)^n \in R[T]$
+we see that $nx \in R$. As $n$ is invertible in $\kappa(\mathfrak p)$,
+it follows that $1 \otimes x$ is in the image
+of $\kappa(\mathfrak p) \to \kappa(\mathfrak p) \otimes_R S$.
+Hence $\kappa(\mathfrak p) \to \kappa(\mathfrak p) \otimes_R S$
+is an isomorphism.
+If the characteristic of $\kappa(\mathfrak p)$ is $p > 0$,
+then write $n = p^k m$ with $m$ prime to $p$.
+In $\kappa(\mathfrak p) \otimes_R S[T]$ we have
+$$
+(T - 1 \otimes x)^n = ((T - 1 \otimes x)^{p^k})^m =
+(T^{p^k} - 1 \otimes x^{p^k})^m
+$$
+and looking at the coefficient of $T^{p^k(m - 1)}$ we see that
+$mx^{p^k} \in R$. As $m$ is invertible in $\kappa(\mathfrak p)$, this
+implies that $1 \otimes x^{p^k}$ is in the image of
+$\kappa(\mathfrak p) \to \kappa(\mathfrak p) \otimes_R S$.
+Hence Lemma \ref{lemma-p-ring-map} applies to
+$\kappa(\mathfrak p) \to \kappa(\mathfrak p) \otimes_R S$.
+In both cases we conclude that $\kappa(\mathfrak p) \otimes_R S$
+has a unique prime ideal with residue field purely inseparable
+over $\kappa(\mathfrak p)$. By Remark \ref{remark-fundamental-diagram}
+we conclude that $\varphi$ is bijective on spectra.
+
+\medskip\noindent
+The statement on base change is immediate.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Geometrically irreducible algebras}
+\label{section-algebras-over-fields}
+
+\noindent
+An algebra $S$ over a field $k$ is geometrically irreducible if
+the algebra $S \otimes_k k'$ has a unique minimal prime for
+every field extension $k'/k$. In this section we develop a bit
+of theory relevant to this notion.
+
+\begin{lemma}
+\label{lemma-flat-fibres-irreducible}
+Let $R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item[(a)] $\Spec(R)$ is irreducible,
+\item[(b)] $R \to S$ is flat,
+\item[(c)] $R \to S$ is of finite presentation,
+\item[(d)] the fibre rings $S \otimes_R \kappa(\mathfrak p)$
+have irreducible spectra for a dense collection of primes $\mathfrak p$ of $R$.
+\end{enumerate}
+Then $\Spec(S)$ is irreducible.
+This is true more generally with (b) $+$ (c)
+replaced by ``the map $\Spec(S) \to \Spec(R)$ is open''.
+\end{lemma}
+
+\begin{proof}
+The assumptions (b) and (c) imply that the map on spectra is open,
+see
+Proposition \ref{proposition-fppf-open}.
+Hence the lemma follows from
+Topology, Lemma \ref{topology-lemma-irreducible-on-top}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-separably-closed-irreducible}
+Let $k$ be a separably closed field.
+Let $R$, $S$ be $k$-algebras. If $R$, $S$ have a unique
+minimal prime, so does $R \otimes_k S$.
+\end{lemma}
+
+\begin{proof}
+Let $k \subset \overline{k}$ be a perfect closure, see
+Definition \ref{definition-perfection}.
+By assumption $\overline{k}$ is algebraically closed.
+The ring maps $R \to R \otimes_k \overline{k}$ and
+$S \to S \otimes_k \overline{k}$ and
+$R \otimes_k S \to (R \otimes_k S) \otimes_k \overline{k}
+= (R \otimes_k \overline{k}) \otimes_{\overline{k}} (S \otimes_k \overline{k})$
+satisfy the assumptions of Lemma \ref{lemma-p-ring-map}.
+Hence we may assume $k$ is algebraically closed.
+
+\medskip\noindent
+We may replace $R$ and $S$ by their reductions.
+Hence we may assume that $R$ and $S$ are domains.
+By Lemma \ref{lemma-perfect-reduced} we see that $R \otimes_k S$ is
+reduced. Hence its spectrum is reducible if and only if it contains a nonzero
+zerodivisor. By Lemma \ref{lemma-limit-argument} we reduce to the case where
+$R$ and $S$ are domains of finite type over $k$ algebraically closed.
+
+\medskip\noindent
+Note that the ring map $R \to R \otimes_k S$ is of finite
+presentation and flat. Moreover, for every maximal ideal
+$\mathfrak m$ of $R$ we have
+$(R \otimes_k S) \otimes_R R/\mathfrak m \cong S$ because
+$k \cong R/\mathfrak m$ by the Hilbert Nullstellensatz Theorem
+\ref{theorem-nullstellensatz}. Moreover, the set of
+maximal ideals is dense in the spectrum of $R$ since
+$\Spec(R)$ is Jacobson, see Lemma \ref{lemma-finite-type-field-Jacobson}.
+Hence we see that Lemma \ref{lemma-flat-fibres-irreducible} applies
+to the ring map $R \to R \otimes_k S$ and we conclude that
+the spectrum of $R \otimes_k S$ is irreducible as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-irreducible}
+Let $k$ be a field.
+Let $R$ be a $k$-algebra.
+The following are equivalent
+\begin{enumerate}
+\item for every field extension $k'/k$ the
+spectrum of $R \otimes_k k'$ is irreducible,
+\item for every finite separable field extension $k'/k$ the
+spectrum of $R \otimes_k k'$ is irreducible,
+\item the spectrum of $R \otimes_k \overline{k}$ is irreducible
+where $\overline{k}$ is the separable algebraic closure of $k$, and
+\item the spectrum of $R \otimes_k \overline{k}$ is irreducible
+where $\overline{k}$ is the algebraic closure of $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (1) implies (2).
+
+\medskip\noindent
+Assume (2) and let $\overline{k}$ be the separable algebraic closure of $k$.
+Suppose $\mathfrak q_i \subset R \otimes_k \overline{k}$, $i = 1, 2$
+are two minimal prime ideals. For every finite subextension
+$\overline{k}/k'/k$ the extension $k'/k$ is separable and
+the ring map $R \otimes_k k' \to R \otimes_k \overline{k}$
+is flat. Hence $\mathfrak p_i = (R \otimes_k k') \cap \mathfrak q_i$
+are minimal prime ideals (as we have going down for flat ring maps
+by Lemma \ref{lemma-flat-going-down}). Thus we see that
+$\mathfrak p_1 = \mathfrak p_2$
+by assumption (2). Since $\overline{k} = \bigcup k'$ we conclude
+$\mathfrak q_1 = \mathfrak q_2$. Hence $\Spec(R \otimes_k \overline{k})$
+is irreducible.
+
+\medskip\noindent
+Assume (3) and let $\overline{k}$ be the algebraic closure of $k$.
+Let $\overline{k}' \subset \overline{k}$ be the separable algebraic
+closure of $k$ inside $\overline{k}$, so that $\overline{k}/\overline{k}'/k$.
+Then the extension $\overline{k}/\overline{k}'$
+is purely inseparable (in positive characteristic) or trivial.
+Hence $R \otimes_k \overline{k}' \to R \otimes_k \overline{k}$
+induces a homeomorphism on spectra, for example by
+Lemma \ref{lemma-p-ring-map}. Thus we have (4).
+
+\medskip\noindent
+Assume (4). Let $k'/k$ be an arbitrary field extension and let
+$\overline{k}$ be the algebraic closure of $k$. We may choose a
+field $F$ such that both $k'$ and $\overline{k}$ are isomorphic
+to subfields of $F$. Then
+$$
+R \otimes_k F = (R \otimes_k \overline{k}) \otimes_{\overline{k}} F
+$$
+and hence we see from Lemma \ref{lemma-separably-closed-irreducible}
+that $R \otimes_k F$ has a unique minimal prime. Finally, the
+ring map $R \otimes_k k' \to R \otimes_k F$ is flat and injective
+and hence any minimal prime of $R \otimes_k k'$ is the image of
+a minimal prime of $R \otimes_k F$ (by
+Lemma \ref{lemma-injective-minimal-primes-in-image}
+and going down). We conclude that there is only one
+such minimal prime and the proof is complete.
+\end{proof}
+
+\begin{definition}
+\label{definition-geometrically-irreducible}
+Let $k$ be a field.
+Let $S$ be a $k$-algebra.
+We say $S$ is {\it geometrically irreducible over $k$}
+if for every field extension $k'/k$ the spectrum of
+$S \otimes_k k'$ is irreducible\footnote{An irreducible space is nonempty.}.
+\end{definition}
+
+\noindent
+By Lemma \ref{lemma-geometrically-irreducible} it suffices
+to check this for finite separable field extensions $k'/k$
+or for $k'$ equal to the separable algebraic closure of $k$.
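+
+\medskip\noindent
+A standard example of an algebra which is irreducible but not
+geometrically irreducible is $S = \mathbf{C}$ viewed as an
+$\mathbf{R}$-algebra: its spectrum is a single point, but
+$$
+\mathbf{C} \otimes_{\mathbf{R}} \mathbf{C} \cong
+\mathbf{C}[x]/(x^2 + 1) \cong
+\mathbf{C} \times \mathbf{C}
+$$
+has two minimal primes, so its spectrum is not irreducible.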
+
+\begin{lemma}
+\label{lemma-separably-closed-irreducible-implies-geometric}
+Let $k$ be a field.
+Let $R$ be a $k$-algebra.
+If $k$ is separably algebraically closed then $R$ is
+geometrically irreducible over $k$ if and only if the
+spectrum of $R$ is irreducible.
+\end{lemma}
+
+\begin{proof}
+Immediate from the remark following
+Definition \ref{definition-geometrically-irreducible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-subalgebra-geometrically-irreducible}
+Let $k$ be a field. Let $S$ be a $k$-algebra.
+\begin{enumerate}
+\item If $S$ is geometrically irreducible over $k$ so is every
+$k$-subalgebra.
+\item If all finitely generated $k$-subalgebras of $S$ are
+geometrically irreducible, then $S$ is geometrically irreducible.
+\item A directed colimit of geometrically irreducible $k$-algebras
+is geometrically irreducible.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Let $S' \subset S$ be a subalgebra. Then for any extension $k'/k$
the ring map $S' \otimes_k k' \to S \otimes_k k'$ is injective as well.
+Hence (1) follows from Lemma \ref{lemma-injective-minimal-primes-in-image}
+(and the fact that the image of an irreducible space under a continuous
map is irreducible). The second and third properties follow from the fact
that tensor products commute with colimits.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-irreducible-any-base-change}
+Let $k$ be a field.
+Let $S$ be a geometrically irreducible $k$-algebra.
+Let $R$ be any $k$-algebra.
+The map
+$$
+\Spec(R \otimes_k S) \longrightarrow \Spec(R)
+$$
+induces a bijection on irreducible components.
+\end{lemma}
+
+\begin{proof}
+Recall that irreducible components correspond to minimal primes
+(Lemma \ref{lemma-irreducible}).
+As $R \to R \otimes_k S$ is flat we see by going down
+(Lemma \ref{lemma-flat-going-down}) that
+any minimal prime of $R \otimes_k S$ lies over a minimal prime of $R$.
+Conversely, if $\mathfrak p \subset R$ is a (minimal) prime then
+$$
+R \otimes_k S/\mathfrak p(R \otimes_k S)
+=
+(R/\mathfrak p) \otimes_k S
+\subset
+\kappa(\mathfrak p) \otimes_k S
+$$
+by flatness of $R \to R \otimes_k S$. The ring
+$\kappa(\mathfrak p) \otimes_k S$ has irreducible spectrum
+by assumption. It follows that
+$R \otimes_k S/\mathfrak p(R \otimes_k S)$ has a single minimal
+prime (Lemma \ref{lemma-injective-minimal-primes-in-image}).
+In other words, the inverse image of the irreducible set
+$V(\mathfrak p)$ is irreducible.
+Hence the lemma follows.
+\end{proof}
+
+\noindent
+Let us make some remarks on the notion of geometrically irreducible
+field extensions.
+
+\begin{lemma}
+\label{lemma-field-extension-geometrically-irreducible}
+Let $K/k$ be a field extension. If $k$ is algebraically closed in $K$, then
+$K$ is geometrically irreducible over $k$.
+\end{lemma}
+
+\begin{proof}
+Assume $k$ is algebraically closed in $K$. By
+Definition \ref{definition-geometrically-irreducible}
+and Lemma \ref{lemma-geometrically-irreducible} it suffices to show
+that the spectrum of $K \otimes_k k'$ is irreducible for every
+finite separable extension $k'/k$. Say $k'$ is generated by $\alpha \in k'$
+over $k$, see
+Fields, Lemma \ref{fields-lemma-primitive-element}. Let
+$P = T^d + a_1 T^{d - 1} + \ldots + a_d \in k[T]$ be the minimal
+polynomial of $\alpha$. Then $K \otimes_k k' \cong K[T]/(P)$.
+The only way the spectrum of $K[T]/(P)$ can be reducible
is if $P$ is reducible in $K[T]$. To get a contradiction, assume
$P = P_1 P_2$ is a nontrivial factorization in $K[T]$.
+By Lemma \ref{lemma-polynomials-divide} we see that
+the coefficients of $P_1$ and $P_2$ are algebraic over $k$.
+Our assumption implies the coefficients of $P_1$ and $P_2$
+are in $k$ which contradicts the fact that $P$ is irreducible
+over $k$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-irreducible-transitive}
+Let $K/k$ be a geometrically irreducible field extension.
+Let $S$ be a geometrically irreducible $K$-algebra.
+Then $S$ is geometrically irreducible over $k$.
+\end{lemma}
+
+\begin{proof}
+By Definition \ref{definition-geometrically-irreducible}
+and Lemma \ref{lemma-geometrically-irreducible} it suffices to show
+that the spectrum of $S \otimes_k k'$ is irreducible for every
+finite separable extension $k'/k$. Since $K$ is geometrically irreducible
+over $k$ we see that $K' = K \otimes_k k'$
+is a finite, separable field extension of $K$.
+Hence the spectrum of $S \otimes_k k' = S \otimes_K K'$ is
+irreducible as $S$ is assumed geometrically irreducible over $K$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-irreducible-base-change-transcendental}
+Let $K/k$ be a field extension. The following are equivalent
+\begin{enumerate}
+\item $K$ is geometrically irreducible over $k$, and
+\item the induced extension $K(t)/k(t)$ of purely transcendental extensions
+is geometrically irreducible.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Denote $\Omega$ an algebraic closure of $k(t)$.
+By Definition \ref{definition-geometrically-irreducible}
+we find that the spectrum of
+$$
+K \otimes_k \Omega = K \otimes_k k(t) \otimes_{k(t)} \Omega
+$$
is irreducible. Since $K(t)$ is a localization of $K \otimes_k k(t)$
+we conclude that the spectrum of $K(t) \otimes_{k(t)} \Omega$
+is irreducible. Thus by Lemma \ref{lemma-geometrically-irreducible}
+we find that $K(t)/k(t)$ is geometrically irreducible.
+
+\medskip\noindent
+Assume (2). Let $k'/k$ be a field extension.
+We have to show that $K \otimes_k k'$ has a unique minimal prime.
+We know that the spectrum of
+$$
+K(t) \otimes_{k(t)} k'(t)
+$$
+is irreducible, i.e., has a unique minimal prime.
+Since there is an injective map
+$K \otimes_k k' \to K(t) \otimes_{k(t)} k'(t)$ (details omitted)
+we conclude by
+Lemmas \ref{lemma-injective-minimal-primes-in-image} and
+\ref{lemma-minimal-prime-image-minimal-prime}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-irreducible-add-transcendental}
+Let $K/L/M$ be a tower of fields with $L/M$ geometrically irreducible.
+Let $x \in K$ be transcendental over $L$. Then $L(x)/M(x)$ is geometrically
+irreducible.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Lemma \ref{lemma-geometrically-irreducible-base-change-transcendental}
+because the fields $L(x)$ and $M(x)$ are purely transcendental
+extensions of $L$ and $M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-irreducible-separable-elements}
+Let $K/k$ be a field extension. The following are equivalent
+\begin{enumerate}
+\item $K/k$ is geometrically irreducible, and
+\item every element $\alpha \in K$ separably algebraic over $k$ is
+in $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1) and let $\alpha \in K$ be separably algebraic over $k$.
+Then $k' = k(\alpha)$ is a finite separable extension of $k$ contained
+in $K$. By Lemma \ref{lemma-subalgebra-geometrically-irreducible}
+the extension $k'/k$ is geometrically irreducible.
+In particular, we see that the spectrum of $k' \otimes_k \overline{k}$
+is irreducible (and hence if it is a product of fields, then there
+is exactly one factor).
+By Fields, Lemma \ref{fields-lemma-finite-separable-tensor-alg-closed}
+it follows that $\Hom_k(k', \overline{k})$ has one element which in turn
+implies that $k' = k$ by
+Fields, Lemma \ref{fields-lemma-separable-equality}.
+Thus (2) holds.
+
+\medskip\noindent
+Assume (2). Let $k' \subset K$ be the subfield consisting of elements
+algebraic over $k$. By
+Lemma \ref{lemma-field-extension-geometrically-irreducible}
+the extension $K/k'$ is geometrically irreducible.
+By assumption $k'/k$ is a purely inseparable extension.
+By Lemma \ref{lemma-p-ring-map} the extension
+$k'/k$ is geometrically irreducible. Hence by
+Lemma \ref{lemma-geometrically-irreducible-transitive}
+we see that $K/k$ is geometrically irreducible.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-make-geometrically-irreducible}
+Let $K/k$ be a field extension. Consider the subextension $K/k'/k$ consisting
+of elements separably algebraic over $k$. Then $K$ is geometrically irreducible
+over $k'$. If $K/k$ is a finitely generated field extension, then
+$[k' : k] < \infty$.
+\end{lemma}
+
+\begin{proof}
+The first statement is immediate from
+Lemma \ref{lemma-geometrically-irreducible-separable-elements}
+and the fact that elements separably algebraic over $k'$
+are in $k'$ by the transitivity of separable algebraic
+extensions, see Fields, Lemma \ref{fields-lemma-separable-permanence}.
+If $K/k$ is finitely generated, then $k'$ is finite over $k$ by
+Fields, Lemma \ref{fields-lemma-algebraic-closure-in-finitely-generated}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Galois-orbit}
+Let $K/k$ be an extension of fields.
+Let $\overline{k}/k$ be a separable algebraic closure.
+Then $\text{Gal}(\overline{k}/k)$ acts transitively on the
+primes of $\overline{k} \otimes_k K$.
+\end{lemma}
+
+\begin{proof}
+Let $K/k'/k$ be the subextension found in
+Lemma \ref{lemma-make-geometrically-irreducible}.
+Note that as $k \subset \overline{k}$ is integral all the prime ideals
+of $\overline{k} \otimes_k K$ and $\overline{k} \otimes_k k'$ are maximal, see
+Lemma \ref{lemma-integral-no-inclusion}.
+By Lemma \ref{lemma-geometrically-irreducible-any-base-change}
+the map
+$$
+\Spec(\overline{k} \otimes_k K) \to \Spec(\overline{k} \otimes_k k')
+$$
+is bijective because (1) all primes are minimal primes, (2)
+$\overline{k} \otimes_k K = (\overline{k} \otimes_k k') \otimes_{k'} K$,
+and (3) $K$ is geometrically irreducible over $k'$.
+Hence it suffices to prove the lemma for the action of
+$\text{Gal}(\overline{k}/k)$ on the primes of $\overline{k} \otimes_k k'$.
+
+\medskip\noindent
+As every prime of $\overline{k} \otimes_k k'$ is maximal, the residue fields
+are isomorphic to $\overline{k}$. Hence the prime ideals of
+$\overline{k} \otimes_k k'$ correspond one to one to elements of
+$\Hom_k(k', \overline{k})$ with $\sigma \in \Hom_k(k', \overline{k})$
+corresponding to the kernel $\mathfrak p_\sigma$ of
+$1 \otimes \sigma : \overline{k} \otimes_k k' \to \overline{k}$.
+In particular $\text{Gal}(\overline{k}/k)$ acts transitively on
+this set as desired.
+\end{proof}
+
+
+
+
+\section{Geometrically connected algebras}
+\label{section-geometrically-connected}
+
+\begin{lemma}
+\label{lemma-separably-closed-connected}
+Let $k$ be a separably algebraically closed field.
Let $R$, $S$ be $k$-algebras. If $\Spec(R)$ and
$\Spec(S)$ are connected, then so is
+$\Spec(R \otimes_k S)$.
+\end{lemma}
+
+\begin{proof}
+Recall that $\Spec(R)$ is connected if and only if
+$R$ has no nontrivial idempotents, see
+Lemma \ref{lemma-characterize-spec-connected}.
+Hence, by Lemma \ref{lemma-limit-argument} we may assume $R$ and $S$ are of
+finite type over $k$.
+In this case $R$ and $S$ are Noetherian,
+and have finitely many minimal primes, see
+Lemma \ref{lemma-Noetherian-irreducible-components}.
+Thus we may argue by induction on $n + m$ where $n$, resp.\ $m$
+is the number of irreducible components of $\Spec(R)$,
+resp.\ $\Spec(S)$. Of course the case where either $n$ or
+$m$ is zero is trivial. If $n = m = 1$, i.e.,
+$\Spec(R)$ and $\Spec(S)$ both have one irreducible component,
+then the result holds by Lemma \ref{lemma-separably-closed-irreducible}.
+Suppose that $n > 1$. Let $\mathfrak p \subset R$ be a minimal prime
+corresponding to the irreducible closed subset $T \subset \Spec(R)$.
+Let $T' \subset \Spec(R)$ be the union of the other $n - 1$
+irreducible components. Choose an ideal $I \subset R$
+such that $T' = V(I) = \Spec(R/I)$ (Lemma \ref{lemma-spec-closed}).
+By choosing our minimal prime carefully
+we may in addition arrange it so that $T'$ is connected, see
+Topology, Lemma \ref{topology-lemma-remove-irreducible-connected}.
+Then $T \cup T' = \Spec(R)$ and
+$T \cap T' = V(\mathfrak p + I) = \Spec(R/(\mathfrak p + I))$
+is not empty as $\Spec(R)$ is assumed connected.
+The inverse image of $T$ in $\Spec(R \otimes_k S)$
is $\Spec(R/\mathfrak p \otimes_k S)$, and the inverse image
of $T'$ in $\Spec(R \otimes_k S)$
+is $\Spec(R/I \otimes_k S)$. By induction these are both
+connected. The inverse image of $T \cap T'$ is
+$\Spec(R/(\mathfrak p + I) \otimes_k S)$ which is nonempty.
+Hence $\Spec(R \otimes_k S)$ is connected.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-connected}
+Let $k$ be a field.
+Let $R$ be a $k$-algebra.
+The following are equivalent
+\begin{enumerate}
+\item for every field extension $k'/k$ the
+spectrum of $R \otimes_k k'$ is connected, and
+\item for every finite separable field extension $k'/k$ the
+spectrum of $R \otimes_k k'$ is connected.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For any extension of fields $k'/k$ the connectivity
+of the spectrum of $R \otimes_k k'$ is equivalent to $R \otimes_k k'$
+having no nontrivial idempotents, see
+Lemma \ref{lemma-characterize-spec-connected}. Assume (2).
+Let $k \subset \overline{k}$ be a separable algebraic closure of $k$.
+Using Lemma \ref{lemma-limit-argument}
+we see that (2) is equivalent to $R \otimes_k \overline{k}$
+having no nontrivial idempotents.
+For any field extension $k'/k$, there exists a field
+extension $\overline{k}'/\overline{k}$ with
+$k' \subset \overline{k}'$. By Lemma \ref{lemma-separably-closed-connected}
+we see that $R \otimes_k \overline{k}'$ has no nontrivial idempotents.
If $R \otimes_k k'$ had a nontrivial idempotent,
then so would $R \otimes_k \overline{k}'$, a contradiction.
+\end{proof}
+
+\begin{definition}
+\label{definition-geometrically-connected}
+Let $k$ be a field.
+Let $S$ be a $k$-algebra.
+We say $S$ is {\it geometrically connected over $k$}
+if for every field extension $k'/k$ the spectrum
+of $S \otimes_k k'$ is connected.
+\end{definition}
+
+\noindent
+By Lemma \ref{lemma-geometrically-connected} it suffices
+to check this for finite separable field extensions $k'/k$.
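
\noindent
For example, let $k$ be a field and $S = k[x, y]/(xy)$. For every field
extension $k'/k$ the ring $S \otimes_k k' = k'[x, y]/(xy)$ has spectrum
the union of the two coordinate lines in the affine plane over $k'$.
This spectrum is connected, as the two lines meet in the origin, but it
is not irreducible. Hence $S$ is geometrically connected, but not
geometrically irreducible, over $k$.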
+
+\begin{lemma}
+\label{lemma-separably-closed-connected-implies-geometric}
+Let $k$ be a field.
+Let $R$ be a $k$-algebra.
+If $k$ is separably algebraically closed then $R$ is
+geometrically connected over $k$ if and only if the
+spectrum of $R$ is connected.
+\end{lemma}
+
+\begin{proof}
+Immediate from the remark following
+Definition \ref{definition-geometrically-connected}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-subalgebra-geometrically-connected}
+Let $k$ be a field. Let $S$ be a $k$-algebra.
+\begin{enumerate}
+\item If $S$ is geometrically connected over $k$ so is every
+$k$-subalgebra.
+\item If all finitely generated $k$-subalgebras of $S$ are
+geometrically connected, then $S$ is geometrically connected.
+\item A directed colimit of geometrically connected $k$-algebras
+is geometrically connected.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from the characterization of connectedness in terms of the
nonexistence of nontrivial idempotents. The second and third properties
follow from the fact that tensor products commute with colimits.
+\end{proof}
+
+\noindent
+The following lemma will be superseded by the more general
+Varieties, Lemma \ref{varieties-lemma-bijection-connected-components}.
+
+\begin{lemma}
+\label{lemma-geometrically-connected-any-base-change}
+Let $k$ be a field.
+Let $S$ be a geometrically connected $k$-algebra.
+Let $R$ be any $k$-algebra.
+The map
+$$
+R \longrightarrow R \otimes_k S
+$$
+induces a bijection on idempotents, and the map
+$$
+\Spec(R \otimes_k S) \longrightarrow \Spec(R)
+$$
+induces a bijection on connected components.
+\end{lemma}
+
+\begin{proof}
+The second assertion follows from the first combined with
+Lemma \ref{lemma-connected-component}.
+By Lemmas \ref{lemma-subalgebra-geometrically-connected}
+and \ref{lemma-limit-argument} we may assume that $R$ and $S$
+are of finite type over $k$. Then we see that also
+$R \otimes_k S$ is of finite type over $k$. Note that in this
+case all the rings are Noetherian and hence their spectra
+have finitely many connected components (since they have
+finitely many irreducible components, see
+Lemma \ref{lemma-Noetherian-irreducible-components}).
+In particular, all connected components in question are open!
+Hence via Lemma \ref{lemma-disjoint-implies-product}
+we see that the first statement of the
+lemma in this case is equivalent to the second. Let's prove this.
+As the algebra $S$ is geometrically connected
+and nonzero we see that all fibres of $X = \Spec(R \otimes_k S)
+\to \Spec(R) = Y$ are connected and nonempty. Also, as
+$R \to R \otimes_k S$ is flat of finite presentation the map
+$X \to Y$ is open
+(Proposition \ref{proposition-fppf-open}).
+Topology, Lemma \ref{topology-lemma-connected-fibres-connected-components}
shows that $X \to Y$ induces a bijection on connected components.
+\end{proof}
+
+
+
+
+\section{Geometrically integral algebras}
+\label{section-geometrically-integral}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-geometrically-integral}
+Let $k$ be a field.
+Let $S$ be a $k$-algebra.
+We say $S$ is {\it geometrically integral over $k$}
if for every field extension $k'/k$ the ring
$S \otimes_k k'$ is a domain.
+\end{definition}
+
+\noindent
Any question about geometrically integral algebras can be translated
into a question about geometrically reduced and geometrically
irreducible algebras.
+
+\begin{lemma}
+\label{lemma-geometrically-integral}
+Let $k$ be a field.
+Let $S$ be a $k$-algebra.
Then $S$ is geometrically integral over $k$ if and only if
$S$ is both geometrically irreducible and geometrically reduced over $k$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
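
\noindent
For example, let $p$ be a prime number, $k = \mathbf{F}_p(t)$, and
$S = k[x]/(x^p - t)$. Then $S$ is a field, purely inseparable over $k$,
hence geometrically irreducible over $k$ (for instance by
Lemma \ref{lemma-geometrically-irreducible-separable-elements}).
However, with $k' = S$ we obtain
$$
S \otimes_k k' \cong k'[x]/(x^p - t) = k'[x]/(x - t^{1/p})^p
$$
which has nonzero nilpotents. Thus $S$ is not geometrically reduced,
hence not geometrically integral, over $k$.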
+
+\begin{lemma}
+\label{lemma-characterize-geometrically-integral}
+Let $k$ be a field. Let $S$ be a $k$-algebra.
+The following are equivalent
+\begin{enumerate}
+\item $S$ is geometrically integral over $k$,
+\item for every finite extension $k'/k$ of fields
+the ring $S \otimes_k k'$ is a domain,
+\item $S \otimes_k \overline{k}$ is a domain
+where $\overline{k}$ is the algebraic closure of $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows from Lemmas \ref{lemma-geometrically-integral},
+\ref{lemma-geometrically-reduced-finite-purely-inseparable-extension}, and
+\ref{lemma-geometrically-irreducible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-integral-any-integral-base-change}
+Let $k$ be a field. Let $S$ be a geometrically integral $k$-algebra.
+Let $R$ be a $k$-algebra and an integral domain. Then $R \otimes_k S$
+is an integral domain.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-geometrically-reduced-any-reduced-base-change}
+the ring $R \otimes_k S$ is reduced and by
+Lemma \ref{lemma-geometrically-irreducible-any-base-change}
the spectrum of $R \otimes_k S$ is irreducible (it has exactly one
irreducible component, since $\Spec(R)$ is irreducible). A reduced
ring whose spectrum is irreducible is an integral domain, hence
$R \otimes_k S$ is an integral domain.
+\end{proof}
+
+
+
+
+
+\section{Valuation rings}
+\label{section-valuation-rings}
+
+\noindent
+Here are some definitions.
+
+\begin{definition}
+\label{definition-valuation-ring}
+Valuation rings.
+\begin{enumerate}
+\item Let $K$ be a field. Let $A$, $B$ be local rings contained
+in $K$. We say that $B$ {\it dominates} $A$ if $A \subset B$
+and $\mathfrak m_A = A \cap \mathfrak m_B$.
+\item Let $A$ be a ring. We say $A$ is a {\it valuation ring}
+if $A$ is a local domain and if $A$ is maximal
+for the relation of domination among local rings contained in
+the fraction field of $A$.
+\item Let $A$ be a valuation ring with fraction field $K$.
+If $R \subset K$ is a subring of $K$, then we say $A$
+is {\it centered} on $R$ if $R \subset A$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+With this definition a field is a valuation ring.
+
+\begin{lemma}
+\label{lemma-dominate}
+Let $K$ be a field. Let $A \subset K$ be a local subring.
+Then there exists a valuation ring with fraction field $K$
+dominating $A$.
+\end{lemma}
+
+\begin{proof}
+We consider the collection of local subrings
+of $K$ as a partially ordered set using the relation of domination.
+Suppose that $\{A_i\}_{i \in I}$ is a totally ordered
collection of local subrings of $K$. Then $B = \bigcup A_i$
is a local subring, with maximal ideal $\bigcup \mathfrak m_{A_i}$,
which dominates all of the $A_i$.
+Hence by Zorn's Lemma, it suffices to show that if $A \subset K$
+is a local ring whose fraction field is not $K$, then there
+exists a local ring $B \subset K$, $B \not = A$ dominating $A$.
+
+\medskip\noindent
+Pick $t \in K$ which is not in the fraction field of $A$.
+If $t$ is transcendental over $A$, then $A[t] \subset K$
+and hence $A[t]_{(t, \mathfrak m)} \subset K$ is a local ring
+distinct from $A$ dominating $A$. Suppose $t$ is algebraic over $A$.
+Then for some $a \in A$ the element $at$ is integral over $A$.
+In this case the subring $A' \subset K$ generated by $A$ and
+$ta$ is finite over $A$.
+By Lemma \ref{lemma-integral-overring-surjective} there exists
+a prime ideal $\mathfrak m' \subset A'$ lying over
+$\mathfrak m$. Then $A'_{\mathfrak m'}$ dominates
+$A$. If $A = A'_{\mathfrak m'}$, then $t$
+is in the fraction field of $A$ which we assumed not to be the case.
+Thus $A \not = A'_{\mathfrak m'}$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-valuation-ring-x-or-x-inverse}
+Let $A$ be a valuation ring with maximal ideal $\mathfrak m$ and
+fraction field $K$.
+Let $x \in K$. Then either $x \in A$ or $x^{-1} \in A$ or both.
+\end{lemma}
+
+\begin{proof}
+Assume that $x$ is not in $A$.
+Let $A'$ denote the subring of $K$ generated by $A$ and $x$.
+Since $A$ is a valuation ring we see that there is no prime
+of $A'$ lying over $\mathfrak m$. Since $\mathfrak m$ is maximal
+we see that $V(\mathfrak m A') = \emptyset$. Then $\mathfrak m A' = A'$
+by Lemma \ref{lemma-Zariski-topology}.
+Hence we can write
+$1 = \sum_{i = 0}^d t_i x^i$ with $t_i \in \mathfrak m$.
This implies that
$(1 - t_0) (x^{-1})^d - \sum\nolimits_{i = 1}^d t_i (x^{-1})^{d - i} = 0$.
Since $t_0 \in \mathfrak m$ the element $1 - t_0$ is a unit in $A$,
and we see that $x^{-1}$ is integral over $A$.
+Thus the subring $A''$ of $K$ generated by $A$ and $x^{-1}$ is
+finite over $A$ and we see there exists a prime ideal
+$\mathfrak m'' \subset A''$ lying over $\mathfrak m$ by
+Lemma \ref{lemma-integral-overring-surjective}. Since $A$
+is a valuation ring we conclude that $A = (A'')_{\mathfrak m''}$
+and hence $x^{-1} \in A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-x-or-x-inverse-valuation-ring}
+Let $A \subset K$ be a subring of a field $K$ such that for all
+$x \in K$ either $x \in A$ or $x^{-1} \in A$ or both.
+Then $A$ is a valuation ring with fraction field $K$.
+\end{lemma}
+
+\begin{proof}
Note that the fraction field of $A$ is $K$, since for nonzero $x \in K$
either $x \in A$ or $x = (x^{-1})^{-1}$ with $x^{-1} \in A$.
If $A$ is not $K$, then $A$ is not a field (if $A$ were a field, then
for $x \in K$, $x \not \in A$ we would get $x^{-1} \in A$ and hence
$x = (x^{-1})^{-1} \in A$, a contradiction) and there is a nonzero
maximal ideal $\mathfrak m$.
+If $\mathfrak m'$ is a second maximal ideal, then choose $x, y \in A$
+with $x \in \mathfrak m$, $y \not \in \mathfrak m$,
+$x \not \in \mathfrak m'$, and $y \in \mathfrak m'$ (see
+Lemma \ref{lemma-silly}). Then neither $x/y \in A$ nor $y/x \in A$
+contradicting the assumption of the lemma. Thus we see that $A$ is
+a local ring. Suppose that $A'$ is a local ring contained in $K$ which
+dominates $A$. Let $x \in A'$. We have to show that $x \in A$. If not, then
+$x^{-1} \in A$, and of course $x^{-1} \in \mathfrak m_A$. But then
+$x^{-1} \in \mathfrak m_{A'}$ which contradicts $x \in A'$.
+\end{proof}
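
\noindent
The criterion of Lemma \ref{lemma-x-or-x-inverse-valuation-ring} makes
it easy to exhibit examples. Let $p$ be a prime number and consider the
local ring $\mathbf{Z}_{(p)} \subset \mathbf{Q}$. Any nonzero
$x \in \mathbf{Q}$ can be written as $x = p^n a/b$ with
$n \in \mathbf{Z}$ and $a, b \in \mathbf{Z}$ prime to $p$. If
$n \geq 0$, then $x \in \mathbf{Z}_{(p)}$, and if $n < 0$, then
$x^{-1} = p^{-n} b/a \in \mathbf{Z}_{(p)}$. Hence $\mathbf{Z}_{(p)}$
is a valuation ring with fraction field $\mathbf{Q}$.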
+
+\begin{lemma}
+\label{lemma-colimit-valuation-rings}
+\begin{slogan}
+Valuation rings are stable under filtered direct limits
+\end{slogan}
+Let $I$ be a directed set. Let $(A_i, \varphi_{ij})$
+be a system of valuation rings over $I$.
+Then $A = \colim A_i$ is a valuation ring.
+\end{lemma}
+
+\begin{proof}
+It is clear that $A$ is a domain. Let $a, b \in A$.
+Lemma \ref{lemma-x-or-x-inverse-valuation-ring} tells us we have
+to show that either $a | b$ or $b | a$ in $A$. Choose $i$
+so large that there exist $a_i, b_i \in A_i$ mapping to $a, b$.
+Then Lemma \ref{lemma-valuation-ring-x-or-x-inverse}
+applied to $a_i, b_i$ in $A_i$ implies the result for $a, b$ in $A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-valuation-ring-cap-field}
+Let $L/K$ be an extension of fields. If $B \subset L$
+is a valuation ring, then $A = K \cap B$ is a valuation ring.
+\end{lemma}
+
+\begin{proof}
+We can replace $L$ by the fraction field $F$ of $B$ and $K$ by
+$K \cap F$. Then the lemma follows from a combination of
+Lemmas \ref{lemma-valuation-ring-x-or-x-inverse} and
+\ref{lemma-x-or-x-inverse-valuation-ring}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-valuation-ring-cap-field-finite}
+Let $L/K$ be an algebraic extension of fields. If $B \subset L$
+is a valuation ring with fraction field $L$ and not a field, then
+$A = K \cap B$ is a valuation ring and not a field.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-valuation-ring-cap-field} the ring $A$ is a valuation
+ring. If $A$ is a field, then $A = K$. Then $A = K \subset B$ is an integral
+extension, hence there are no proper inclusions among the primes of $B$
+(Lemma \ref{lemma-integral-no-inclusion}).
+This contradicts the assumption that $B$ is a local domain and not a field.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-make-valuation-rings}
+Let $A$ be a valuation ring. For any prime ideal $\mathfrak p \subset A$ the
+quotient $A/\mathfrak p$ is a valuation ring. The same is true for the
+localization $A_\mathfrak p$ and in fact any localization of $A$.
+\end{lemma}
+
+\begin{proof}
+Use the characterization of valuation rings given
+in Lemma \ref{lemma-x-or-x-inverse-valuation-ring}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-stack-valuation-rings}
+Let $A'$ be a valuation ring with residue field $K$.
+Let $A$ be a valuation ring with fraction field $K$.
+Then
+$C = \{\lambda \in A' \mid \lambda \bmod \mathfrak m_{A'} \in A\}$
+is a valuation ring.
+\end{lemma}
+
+\begin{proof}
+Note that $\mathfrak m_{A'} \subset C$ and $C/\mathfrak m_{A'} = A$.
+In particular, the fraction field of $C$ is equal to the fraction field
+of $A'$. We will use the criterion of
+Lemma \ref{lemma-x-or-x-inverse-valuation-ring} to prove the lemma.
+Let $x$ be an element of the fraction field of $C$.
By Lemma \ref{lemma-valuation-ring-x-or-x-inverse} applied to $A'$
we may assume $x \in A'$. If $x \in \mathfrak m_{A'}$,
then we see $x \in C$. If not, then $x$ is a unit of $A'$ and we
also have $x^{-1} \in A'$. Applying
Lemma \ref{lemma-valuation-ring-x-or-x-inverse} to $A$ we see that
either $x$ or $x^{-1}$ maps to an element of $A$, i.e., either
$x$ or $x^{-1}$ lies in $C$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-valuation-ring-normal}
+Let $A$ be a valuation ring.
+Then $A$ is a normal domain.
+\end{lemma}
+
+\begin{proof}
+Suppose $x$ is in the field of fractions of $A$ and integral over $A$,
+say $x^{d + 1} + \sum_{i \leq d} a_i x^i = 0$. By
+Lemma \ref{lemma-x-or-x-inverse-valuation-ring}
+either $x \in A$ (and we're done) or $x^{-1} \in A$. In the second case
+we see that $x = - \sum a_i x^{i - d} \in A$ as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-find-valuation-rings}
+Let $A$ be a normal domain with fraction field $K$.
+\begin{enumerate}
+\item For every $x \in K$, $x \not \in A$ there exists a valuation ring
+$A \subset V \subset K$ with fraction field $K$ such that $x \not \in V$.
+\item If $A$ is local, we can moreover choose $V$ which dominates $A$.
+\end{enumerate}
+In other words, $A$ is the intersection of all valuation rings in $K$
+containing $A$ and if $A$ is local, then $A$ is the intersection of
+all valuation rings in $K$ dominating $A$.
+\end{lemma}
+
+\begin{proof}
+Suppose $x \in K$, $x \not \in A$. Consider $B = A[x^{-1}]$.
+Then $x \not \in B$. Namely, if $x = a_0 + a_1x^{-1} + \ldots + a_d x^{-d}$
+then $x^{d + 1} - a_0x^d - \ldots - a_d = 0$ and $x$ is integral
+over $A$ in contradiction with the fact that $A$ is normal.
+Thus $x^{-1}$ is not a unit in $B$. Thus $V(x^{-1}) \subset \Spec(B)$
+is not empty (Lemma \ref{lemma-Zariski-topology}), and we can choose a prime
+$\mathfrak p \subset B$ with $x^{-1} \in \mathfrak p$.
+Choose a valuation ring $V \subset K$ dominating $B_\mathfrak p$
+(Lemma \ref{lemma-dominate}).
+Then $x \not \in V$ as $x^{-1} \in \mathfrak m_V$.
+
+\medskip\noindent
+If $A$ is local, then we claim that $x^{-1} B + \mathfrak m_A B \not = B$.
+Namely, if $1 = (a_0 + a_1x^{-1} + \ldots + a_d x^{-d})x^{-1} +
+a'_0 + \ldots + a'_d x^{-d}$ with $a_i \in A$ and $a'_i \in \mathfrak m_A$,
+then we'd get
+$$
+(1 - a'_0) x^{d + 1} - (a_0 + a'_1) x^d - \ldots - a_d = 0
+$$
+Since $a'_0 \in \mathfrak m_A$ we see that $1 - a'_0$ is a unit in $A$
+and we conclude that $x$ would be integral over $A$, a contradiction as
before. Then choosing a prime
$\mathfrak p \supset x^{-1} B + \mathfrak m_A B$
in the argument of the first paragraph we find a valuation ring $V$
dominating $B_\mathfrak p$, hence dominating $A$, with $x \not \in V$.
+\end{proof}
+
+\noindent
A {\it totally ordered abelian group} is a pair $(\Gamma, \geq)$
+consisting of an abelian group $\Gamma$ endowed with a total
+ordering $\geq$ such that $\gamma \geq \gamma' \Rightarrow
+\gamma + \gamma'' \geq \gamma' + \gamma''$ for all
+$\gamma, \gamma', \gamma'' \in \Gamma$.
+
+\begin{lemma}
+\label{lemma-valuation-group}
+Let $A$ be a valuation ring with field of fractions $K$.
+Set $\Gamma = K^*/A^*$ (with group law written additively).
+For $\gamma, \gamma' \in \Gamma$
+define $\gamma \geq \gamma'$ if and only if
+$\gamma - \gamma'$ is in the image of $A - \{0\} \to \Gamma$.
+Then $(\Gamma, \geq)$ is a totally ordered abelian group.
+\end{lemma}
+
+\begin{proof}
+Omitted, but follows easily from
+Lemma \ref{lemma-valuation-ring-x-or-x-inverse}.
+Note that in case $A = K$ we obtain the zero group $\Gamma = \{0\}$
+endowed with its unique total ordering.
+\end{proof}
+
+\begin{definition}
+\label{definition-value-group}
+Let $A$ be a valuation ring.
+\begin{enumerate}
+\item The totally ordered abelian group $(\Gamma, \geq)$ of
+Lemma \ref{lemma-valuation-group} is called the
+{\it value group} of the valuation ring $A$.
\item The maps $v : A - \{0\} \to \Gamma$ and $v : K^* \to \Gamma$ are
both called the {\it valuation} associated to $A$.
+\item The valuation ring $A$ is called a {\it discrete valuation ring}
+if $\Gamma \cong \mathbf{Z}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that if $\Gamma \cong \mathbf{Z}$ then there is a unique such
+isomorphism such that $1 \geq 0$. If the isomorphism is chosen in this
+way, then the ordering becomes the usual ordering of the integers.
+
+\begin{lemma}
+\label{lemma-properties-valuation}
+Let $A$ be a valuation ring. The valuation $v : A -\{0\} \to \Gamma_{\geq 0}$
+has the following properties:
+\begin{enumerate}
+\item $v(a) = 0 \Leftrightarrow a \in A^*$,
+\item $v(ab) = v(a) + v(b)$,
+\item $v(a + b) \geq \min(v(a), v(b))$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
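
\noindent
For example, for $A = \mathbf{Z}_{(p)} \subset \mathbf{Q}$ the value
group is $\Gamma = \mathbf{Q}^*/\mathbf{Z}_{(p)}^* \cong \mathbf{Z}$,
the valuation being $v(p^n a/b) = n$ for $a, b \in \mathbf{Z}$ prime
to $p$; in particular $\mathbf{Z}_{(p)}$ is a discrete valuation ring.
The inequality in part (3) of Lemma \ref{lemma-properties-valuation}
can be strict: with $p = 3$ we have $v(6) = v(3) = 1$ but
$v(6 + 3) = v(9) = 2$.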
+
+\begin{lemma}
+\label{lemma-characterize-valuation-ring}
+Let $A$ be a ring. The following are equivalent
+\begin{enumerate}
+\item $A$ is a valuation ring,
+\item $A$ is a local domain and every finitely generated
+ideal of $A$ is principal.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $A$ is a valuation ring and let $f_1, \ldots, f_n \in A$.
+Choose $i$ such that $v(f_i)$ is minimal among $v(f_j)$.
+Then $(f_i) = (f_1, \ldots, f_n)$. Conversely, assume $A$ is
+a local domain and every finitely generated ideal of $A$ is principal.
+Pick $f, g \in A$ and write $(f, g) = (h)$. Then $f = ah$ and $g = bh$
+and $h = cf + dg$ for some $a, b, c, d \in A$. Thus $ac + bd = 1$
+and we see that either $a$ or $b$ is a unit, i.e., either
+$g/f$ or $f/g$ is an element of $A$. This shows $A$ is a valuation ring
+by Lemma \ref{lemma-x-or-x-inverse-valuation-ring}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-valuation-valuation-ring}
+Let $(\Gamma, \geq)$ be a totally ordered abelian group.
+Let $K$ be a field. Let $v : K^* \to \Gamma$ be a homomorphism
+of abelian groups such that $v(a + b) \geq \min(v(a), v(b))$ for
+$a, b \in K$ with $a, b, a + b$ not zero. Then
+$$
+A =
+\{
+x \in K \mid x = 0 \text{ or } v(x) \geq 0
+\}
+$$
+is a valuation ring with value group $\Im(v) \subset \Gamma$,
+with maximal ideal
+$$
+\mathfrak m =
+\{
+x \in K \mid x = 0 \text{ or } v(x) > 0
+\}
+$$
+and with group of units
+$$
+A^* =
+\{
+x \in K^* \mid v(x) = 0
+\}.
+$$
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
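
\noindent
Lemma \ref{lemma-valuation-valuation-ring} produces valuation rings
which are not discrete. For example, let $k$ be a field, let
$K = k(x, y)$, and let $\Gamma = \mathbf{Z} \times \mathbf{Z}$ ordered
lexicographically. Define $v$ on nonzero polynomials by letting $v(f)$
be the lexicographically smallest $(a, b)$ such that the monomial
$x^a y^b$ occurs in $f$ with nonzero coefficient, and set
$v(f/g) = v(f) - v(g)$. Since the ordering is compatible with addition,
the minimal monomial of a product is the product of the minimal
monomials; hence $v$ is well defined and multiplicative, and clearly
$v(f + g) \geq \min(v(f), v(g))$. The resulting valuation ring
$A \subset K$ has value group $\mathbf{Z} \times \mathbf{Z}$ and hence
is not Noetherian by
Lemma \ref{lemma-valuation-ring-Noetherian-discrete} below.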
+
+\noindent
+Let $(\Gamma, \geq)$ be a totally ordered abelian group.
+An {\it ideal of $\Gamma$} is a subset $I \subset \Gamma$ such
+that all elements of $I$ are $\geq 0$ and $\gamma \in I$,
+$\gamma' \geq \gamma$ implies $\gamma' \in I$. We say that such
+an ideal is {\it prime} if $\gamma + \gamma' \in I, \gamma, \gamma' \geq 0
+\Rightarrow \gamma \in I \text{ or } \gamma' \in I$.
+
+\begin{lemma}
+\label{lemma-ideals-valuation-ring}
+Let $A$ be a valuation ring.
+Ideals in $A$ correspond $1 - 1$ with ideals of $\Gamma$.
+This bijection is inclusion preserving, and maps prime
+ideals to prime ideals.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-valuation-ring-Noetherian-discrete}
+A valuation ring is Noetherian if and only if it is
+a discrete valuation ring or a field.
+\end{lemma}
+
+\begin{proof}
+Suppose $A$ is a discrete valuation ring
+with valuation $v : A \setminus \{0\} \to \mathbf{Z}$
+normalized so that $\Im(v) = \mathbf{Z}_{\geq 0}$.
+By Lemma \ref{lemma-ideals-valuation-ring} the ideals of $A$ are the subsets
+$I_n = \{0\} \cup v^{-1}(\mathbf{Z}_{\geq n})$. It is clear
+that any element $x \in A$ with $v(x) = n$ generates $I_n$.
+Hence $A$ is a PID so certainly Noetherian.
+
+\medskip\noindent
+Suppose $A$ is a Noetherian valuation ring with value group $\Gamma$.
+By Lemma \ref{lemma-ideals-valuation-ring} we see the ascending chain
+condition holds for ideals in $\Gamma$.
+We may assume $A$ is not a field, i.e., there is a $\gamma \in \Gamma$
+with $\gamma > 0$. Applying the ascending chain condition to the subsets
+$\gamma + \Gamma_{\geq 0}$ with $\gamma > 0$ we see
+there exists a smallest element $\gamma_0$ which is bigger than $0$.
+Let $\gamma \in \Gamma$ be an element $\gamma > 0$. Consider the sequence
+of elements $\gamma$, $\gamma - \gamma_0$, $\gamma - 2\gamma_0$,
+etc. By the ascending chain condition these cannot all be $> 0$.
+Let $\gamma - n \gamma_0$ be the last one $\geq 0$. By minimality
+of $\gamma_0$ we see that $0 = \gamma - n \gamma_0$. Hence $\Gamma$
+is a cyclic group as desired.
+\end{proof}
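
\noindent
In particular non-Noetherian valuation rings exist.

\begin{example}
\label{example-valuation-ring-any-value-group}
Every totally ordered abelian group $\Gamma$ occurs as the value group
of a valuation ring. Namely, let $k$ be a field and let $k[\Gamma]$ be
the group algebra with $k$-basis $\{t^\gamma\}_{\gamma \in \Gamma}$.
Because $\Gamma$ is totally ordered, the lowest term of a product is
the product of the lowest terms; hence $k[\Gamma]$ is a domain and
$v(\sum a_\gamma t^\gamma) = \min\{\gamma \mid a_\gamma \not = 0\}$
extends to a valuation on its fraction field satisfying the hypotheses
of Lemma \ref{lemma-valuation-valuation-ring}. Taking
$\Gamma = \mathbf{Q}$ we obtain, by
Lemma \ref{lemma-valuation-ring-Noetherian-discrete}, a valuation ring
which is not Noetherian.
\end{example}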
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{More Noetherian rings}
+\label{section-Noetherian-again}
+
+
+\begin{lemma}
+\label{lemma-Noetherian-basic}
+Let $R$ be a Noetherian ring.
+Any finite $R$-module is of finite presentation.
+Any submodule of a finite $R$-module is finite.
+The ascending chain condition holds for $R$-submodules
+of a finite $R$-module.
+\end{lemma}
+
+\begin{proof}
+We first show that any submodule $N$ of a finite $R$-module
+$M$ is finite. We do this by induction on the number of
+generators of $M$. If this number is $1$, then $N = J/I \subset
+M = R/I$ for some ideals $I \subset J \subset R$. Thus the definition
+of Noetherian implies the result. If the number of generators of
+$M$ is greater than $1$, then we can find a short exact sequence
+$0 \to M' \to M \to M'' \to 0$ where $M'$ and $M''$ have fewer
+generators. Note that setting $N' = M' \cap N$ and $N'' = \Im(N \to
+M'')$ gives a similar short exact sequence for $N$. Hence the result
+follows from the induction hypothesis
+since the number of generators of $N$ is at most the number of
+generators of $N'$ plus the number of generators of $N''$.
+
+\medskip\noindent
+To show that $M$ is finitely presented just apply the previous result
+to the kernel of a presentation $R^n \to M$.
+
+\medskip\noindent
+It is well known and easy to prove that the ascending chain condition for
+$R$-submodules of $M$ is equivalent to the condition that every submodule
+of $M$ is a finite $R$-module. We omit the proof.
+\end{proof}
+
+\begin{lemma}[Artin-Rees]
+\label{lemma-Artin-Rees}
+Suppose that $R$ is Noetherian, $I \subset R$ an ideal.
+Let $N \subset M$ be finite $R$-modules.
+There exists a constant $c > 0$ such that
+$I^n M \cap N = I^{n-c}(I^cM \cap N)$ for all $n \geq c$.
+\end{lemma}
+
+\begin{proof}
+Consider the ring $S = R \oplus I \oplus I^2 \oplus \ldots
+= \bigoplus_{n \geq 0} I^n$. Convention: $I^0 = R$.
+Multiplication maps $I^n \times I^m$
+into $I^{n + m}$ by multiplication in $R$.
+Note that if $I = (f_1, \ldots, f_t)$
+then $S$ is a quotient of the Noetherian ring $R[X_1, \ldots, X_t]$.
+The map just sends the monomial $X_1^{e_1}\ldots X_t^{e_t}$
+to $f_1^{e_1}\ldots f_t^{e_t}$. Thus $S$ is Noetherian.
+Similarly, consider the module $M \oplus IM \oplus I^2M \oplus \ldots
+= \bigoplus_{n \geq 0} I^nM$. This is a finitely generated $S$-module.
+Namely, if $x_1, \ldots, x_r$ generate $M$ over $R$, then they also generate
+$\bigoplus_{n \geq 0} I^nM$ over $S$. Next, consider the
+submodule $\bigoplus_{n \geq 0} I^nM \cap N$.
+This is an $S$-submodule, as is easily verified. By
+Lemma \ref{lemma-Noetherian-basic} it is finitely generated as
+an $S$-module,
+say by $\xi_j \in \bigoplus_{n \geq 0} I^nM \cap N$, $j = 1, \ldots, s$.
+We may assume by decomposing each $\xi_j$ into its homogeneous
+pieces that each $\xi_j \in I^{d_j}M \cap N$ for some $d_j$.
+Set $c = \max\{d_j\}$. Then for all $n \geq c$ every element
+in $I^nM \cap N$ is of the form $\sum h_j \xi_j$ with
+$h_j \in I^{n - d_j}$. The lemma now follows from this and the trivial
+observation that $I^{n-d_j}(I^{d_j}M \cap N) \subset I^{n-c}(I^cM \cap N)$.
+\end{proof}
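
\noindent
Here is a trivial instance of the Artin-Rees lemma.

\begin{example}
\label{example-Artin-Rees}
Let $R = M = k[x]$, $I = (x)$, and $N = (x^d)$ for some $d \geq 1$.
Then $I^nM \cap N = (x^{\max(n, d)})$, so for $n \geq d$ we get
$$
I^nM \cap N = (x^n) = (x)^{n - d} \cdot (x^d) = I^{n - d}(I^dM \cap N)
$$
and the conclusion of Lemma \ref{lemma-Artin-Rees} holds with $c = d$.
The content of the lemma is that such a constant $c$ exists in general.
\end{example}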
+
+\begin{lemma}
+\label{lemma-map-AR}
+Suppose that $0 \to K \to M \xrightarrow{f} N$ is an
+exact sequence of finitely generated modules
+over a Noetherian ring $R$. Let $I \subset R$ be an ideal.
+Then there exists a $c$ such that
+$$
+f^{-1}(I^nN) = K + I^{n-c}f^{-1}(I^cN)
+\quad\text{and}\quad
+f(M) \cap I^nN \subset f(I^{n - c}M)
+$$
+for all $n \geq c$.
+\end{lemma}
+
+\begin{proof}
+Apply Lemma \ref{lemma-Artin-Rees} to
+$\Im(f) \subset N$ and note that
+$f : I^{n-c}M \to I^{n-c}f(M)$ is surjective.
+\end{proof}
+
+\begin{lemma}[Krull's intersection theorem]
+\label{lemma-intersect-powers-ideal-module-zero}
+Let $R$ be a Noetherian local ring. Let $I \subset R$ be
+a proper ideal. Let $M$ be a finite $R$-module.
+Then $\bigcap_{n \geq 0} I^nM = 0$.
+\end{lemma}
+
+\begin{proof}
+Let $N = \bigcap_{n \geq 0} I^nM$.
+Then $N = I^nM \cap N$ for all $n \geq 0$.
+By the Artin-Rees Lemma \ref{lemma-Artin-Rees}
+we see that $N = I^nM \cap N \subset IN$ for
+some suitably large $n$. By Nakayama's Lemma \ref{lemma-NAK}
+we see that $N = 0$.
+\end{proof}
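
\noindent
In the simplest case the theorem can be checked by hand.

\begin{example}
\label{example-Krull-intersection}
Let $R = \mathbf{Z}_{(p)}$, $I = (p)$, and $M = R$. The lemma asserts
$\bigcap_{n \geq 0} p^n\mathbf{Z}_{(p)} = 0$, which is clear directly:
any nonzero element of $\mathbf{Z}_{(p)}$ can be written as $p^m u$
with $u$ a unit and $m \geq 0$, and such an element does not lie in
$(p^{m + 1})$.
\end{example}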
+
+\begin{lemma}
+\label{lemma-intersection-powers-ideal-module}
+Let $R$ be a Noetherian ring. Let $I \subset R$ be an ideal.
+Let $M$ be a finite $R$-module. Let $N = \bigcap_n I^n M$.
+\begin{enumerate}
+\item For every prime $\mathfrak p$, $I \subset \mathfrak p$ there
exists an $f \in R$, $f \not \in \mathfrak p$ such that $N_f = 0$.
+\item If $I$ is contained in the Jacobson radical
+of $R$, then $N = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $x_1, \ldots, x_n$ be generators for the module $N$,
+see Lemma \ref{lemma-Noetherian-basic}. For every prime
+$\mathfrak p$, $I \subset \mathfrak p$ we see that
+the image of $N$ in the localization $M_{\mathfrak p}$
+is zero, by Lemma \ref{lemma-intersect-powers-ideal-module-zero}.
+Hence we can find $g_i \in R$, $g_i \not \in \mathfrak p$
+such that $x_i$ maps to zero in $N_{g_i}$. Thus
+$N_{g_1g_2\ldots g_n} = 0$.
+
+\medskip\noindent
+Part (2) follows from (1) and Lemma \ref{lemma-characterize-zero-local}.
+\end{proof}
+
+\begin{remark}
+\label{remark-intersection-powers-ideal}
+Lemma \ref{lemma-intersect-powers-ideal-module-zero} in particular implies
+that $\bigcap_n I^n = (0)$ when $I \subset R$ is a non-unit ideal in a
+Noetherian local ring $R$. More generally, let $R$ be a Noetherian ring and
+$I \subset R$ an ideal. Suppose that $f \in \bigcap_{n \in \mathbf{N}} I^n$.
+Then Lemma \ref{lemma-intersection-powers-ideal-module}
+says that for every prime ideal $I \subset \mathfrak p$
+there exists a $g \in R$, $g \not \in \mathfrak p$
+such that $f$ maps to zero in $R_g$. In algebraic geometry we
+express this by saying that ``$f$ is zero in an open neighbourhood
+of the closed set $V(I)$ of $\Spec(R)$''.
+\end{remark}
+
+\begin{lemma}[Artin-Tate]
+\label{lemma-Artin-Tate}
+Let $R$ be a Noetherian ring. Let $S$ be a finitely
+generated $R$-algebra. If $T \subset S$ is an $R$-subalgebra such
+that $S$ is finitely generated as a $T$-module, then $T$ is of
+finite type over $R$.
+\end{lemma}
+
+\begin{proof}
+Choose elements $x_1, \ldots, x_n \in S$ which generate $S$ as an $R$-algebra.
+Choose $y_1, \ldots, y_m$ in $S$ which generate $S$ as a $T$-module.
+Thus there exist $a_{ij} \in T$ such that
+$x_i = \sum a_{ij} y_j$. There also exist $b_{ijk} \in T$ such
+that $y_i y_j = \sum b_{ijk} y_k$. Let $T' \subset T$ be the
+sub $R$-algebra generated by $a_{ij}$ and $b_{ijk}$. This is a finitely
+generated $R$-algebra, hence Noetherian. Consider the algebra
+$$
+S' = T'[Y_1, \ldots, Y_m]/(Y_i Y_j - \sum b_{ijk} Y_k).
+$$
+Note that $S'$ is finite over $T'$, namely as a $T'$-module it is
+generated by the classes of $1, Y_1, \ldots, Y_m$.
+Consider the $T'$-algebra homomorphism $S' \to S$ which maps
+$Y_i$ to $y_i$. Because $a_{ij} \in T'$ we see that $x_j$ is
+in the image of this map. Thus $S' \to S$ is surjective.
Therefore $S$ is finite over $T'$ as well. Since $T'$ is Noetherian
we conclude that the submodule $T \subset S$ is finite over $T'$
(Lemma \ref{lemma-Noetherian-basic}). Hence $T$ is of finite type
over $R$ and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Length}
+\label{section-length}
+
+\begin{definition}
+\label{definition-length}
+Let $R$ be a ring. For any $R$-module $M$
+we define the {\it length} of $M$ over $R$ by the
+formula
+$$
+\text{length}_R(M)
+=
+\sup
+\{
+n
+\mid
+\exists\ 0 = M_0 \subset M_1 \subset \ldots \subset M_n = M,
+\text{ }M_i \not = M_{i + 1}
+\}.
+$$
+\end{definition}
+
+\noindent
+In other words it is the supremum of the lengths of chains
+of submodules. There is an obvious notion of when a chain
+of submodules is a refinement of another. This gives a
+partial ordering on the collection of all chains of submodules,
+with the smallest chain having the shape $0 = M_0 \subset M_1 = M$
+if $M$ is not zero.
We note the obvious fact that if the length of
$M$ is finite, then every chain can be refined to a
maximal chain. It is less obvious that all maximal
chains have the same length; we will see later that this is the case.
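
\noindent
For example, consider modules over $R = \mathbf{Z}$.

\begin{example}
\label{example-length-Z-modules}
For a prime $p$ we have
$\text{length}_{\mathbf{Z}}(\mathbf{Z}/p^n\mathbf{Z}) = n$, as witnessed
by the chain
$$
0 \subset p^{n - 1}\mathbf{Z}/p^n\mathbf{Z} \subset \ldots \subset
p\mathbf{Z}/p^n\mathbf{Z} \subset \mathbf{Z}/p^n\mathbf{Z}
$$
whose successive quotients are isomorphic to $\mathbf{Z}/p\mathbf{Z}$.
On the other hand $\text{length}_{\mathbf{Z}}(\mathbf{Z}) = \infty$
because of the chains
$2^n\mathbf{Z} \subset 2^{n - 1}\mathbf{Z} \subset \ldots \subset
\mathbf{Z}$.
\end{example}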
+
+\begin{lemma}
+\label{lemma-finite-length-finite}
+\begin{slogan}
+Modules of finite length are finite.
+\end{slogan}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+If $\text{length}_R(M) < \infty$ then $M$ is a finite $R$-module.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-length-additive}
+\begin{slogan}
+Length is additive in short exact sequences.
+\end{slogan}
+If $0 \to M' \to M \to M'' \to 0$
+is a short exact sequence of modules over $R$ then
+the length of $M$ is the sum of the
+lengths of $M'$ and $M''$.
+\end{lemma}
+
+\begin{proof}
+Given filtrations of $M'$ and $M''$ of lengths $n', n''$
+it is easy to make a corresponding filtration of $M$
+of length $n' + n''$. Thus we see that $\text{length}_R M
+\geq \text{length}_R M' + \text{length}_R M''$.
+Conversely, given a filtration
+$M_0 \subset M_1 \subset \ldots \subset M_n$ of
+$M$ consider the induced filtrations
+$M_i' = M_i \cap M'$ and $M_i'' = \Im(M_i \to M'')$.
+Let $n'$ (resp.\ $n''$) be the number of steps in the filtration
+$\{M'_i\}$ (resp.\ $\{M''_i\}$).
+If $M_i' = M_{i + 1}'$ and $M_i'' = M_{i + 1}''$ then
+$M_i = M_{i + 1}$. Hence we conclude that $n' + n'' \geq n$.
+Combined with the earlier result we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-length-infinite}
+Let $R$ be a local ring with maximal ideal $\mathfrak m$.
+Let $M$ be an $R$-module.
+\begin{enumerate}
+\item If $M$ is a finite module and
+$\mathfrak m^n M \not = 0$ for all $n\geq 0$,
+then $\text{length}_R(M) = \infty$.
+\item If $M$ has finite length then $\mathfrak m^nM = 0$
+for some $n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $\mathfrak m^n M \not = 0$ for all $n\geq 0$.
+Choose $x \in M$ and $f_1, \ldots, f_n \in \mathfrak m$
+such that $f_1f_2 \ldots f_n x \not = 0$.
+By Nakayama's Lemma \ref{lemma-NAK} the first $n$ steps in the filtration
+$$
+0 \subset R f_1 \ldots f_n x \subset R f_1 \ldots f_{n - 1} x
+\subset \ldots \subset R x \subset M
+$$
+are distinct. This can also be seen directly. For example, if
$R f_1 x = R f_1 f_2 x$, then $f_1 x = g f_1 f_2 x$ for some $g \in R$,
hence $(1 - gf_2) f_1 x = 0$. As $gf_2 \in \mathfrak m$ the element
$1 - gf_2$ is a unit, so $f_1 x = 0$, contradicting the choice of
$x$ and $f_1, \ldots, f_n$.
+Hence the length is infinite, i.e., (1) holds.
+Combine (1) and Lemma \ref{lemma-finite-length-finite} to see (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-length-independent}
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module.
+We always have $\text{length}_R(M) \geq \text{length}_S(M)$.
+If $R \to S$ is surjective then equality holds.
+\end{lemma}
+
+\begin{proof}
A filtration of $M$ by $S$-submodules gives rise to a filtration
+of $M$ by $R$-submodules. This proves the inequality.
+And if $R \to S$ is surjective, then any $R$-submodule
+of $M$ is automatically an $S$-submodule. Hence equality
+in this case.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-is-length}
+Let $R$ be a ring with maximal ideal $\mathfrak m$.
+Suppose that $M$ is an $R$-module with
+$\mathfrak m M = 0$. Then the length of $M$ as
+an $R$-module agrees with the dimension of $M$ as
an $R/\mathfrak m$-vector space.
+The length is finite if and only if $M$ is a finite $R$-module.
+\end{lemma}
+
+\begin{proof}
+The first part is a special case of Lemma \ref{lemma-length-independent}.
+Thus the length is finite if and only if $M$ has a finite basis
as an $R/\mathfrak m$-vector space if and only if $M$ has a finite
+set of generators as an $R$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-length-localize}
+Let $R$ be a ring. Let $M$ be an $R$-module. Let $S \subset R$ be
+a multiplicative subset. Then
+$\text{length}_R(M) \geq \text{length}_{S^{-1}R}(S^{-1}M)$.
+\end{lemma}
+
+\begin{proof}
+Any submodule $N' \subset S^{-1}M$ is of the form
+$S^{-1}N$ for some $R$-submodule $N \subset M$, by Lemma
+\ref{lemma-submodule-localization}. The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-length-finite}
+Let $R$ be a ring with finitely generated
+maximal ideal $\mathfrak m$. (For example $R$ Noetherian.)
+Suppose that $M$ is a finite $R$-module with
+$\mathfrak m^n M = 0$ for some $n$.
+Then $\text{length}_R(M) < \infty$.
+\end{lemma}
+
+\begin{proof}
+Consider the filtration
+$0 = \mathfrak m^n M \subset
+\mathfrak m^{n-1} M \subset
+\ldots \subset \mathfrak m M \subset M$.
+All of the subquotients are finitely generated $R$-modules
+to which Lemma \ref{lemma-dimension-is-length} applies. We conclude
+by additivity, see Lemma \ref{lemma-length-additive}.
+\end{proof}
+
+\begin{definition}
+\label{definition-simple-module}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+We say $M$ is {\it simple} if $M \not = 0$ and
+every submodule of $M$ is either equal to $M$ or
+to $0$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-characterize-length-1}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+The following are equivalent:
+\begin{enumerate}
+\item $M$ is simple,
+\item $\text{length}_R(M) = 1$, and
+\item $M \cong R/\mathfrak m$ for some maximal ideal
+$\mathfrak m \subset R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak m$ be a maximal ideal of $R$.
+By Lemma \ref{lemma-dimension-is-length} the module
+$R/\mathfrak m$ has length $1$. The equivalence of
+the first two assertions is tautological.
+Suppose that $M$ is simple. Choose $x \in M$, $x \not = 0$.
+As $M$ is simple we have $M = R \cdot x$.
+Let $I \subset R$ be the annihilator of $x$, i.e.,
+$I = \{f \in R \mid fx = 0\}$. The map $R/I \to M$,
+$f \bmod I \mapsto fx$ is an isomorphism, hence
+$R/I$ is a simple $R$-module. Since $R/I \not = 0$ we see $I \not = R$.
Choose a maximal ideal $\mathfrak m$ of $R$ containing $I$.
+If $I \not = \mathfrak m$, then $\mathfrak m /I \subset R/I$
+is a nontrivial submodule contradicting the simplicity
+of $R/I$. Hence we see $I = \mathfrak m$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-simple-pieces}
+Let $R$ be a ring. Let $M$ be a finite length $R$-module.
+Choose any maximal chain of submodules
+$$
+0 = M_0 \subset M_1 \subset M_2 \subset \ldots \subset M_n = M
+$$
+with $M_i \not = M_{i-1}$, $i = 1, \ldots, n$. Then
+\begin{enumerate}
+\item $n = \text{length}_R(M)$,
+\item each $M_i/M_{i-1}$ is simple,
+\item each $M_i/M_{i-1}$ is of the form
+$R/\mathfrak m_i$ for some maximal ideal $\mathfrak m_i$,
+\item given a maximal ideal $\mathfrak m \subset R$
+we have
+$$
+\# \{i \mid \mathfrak m_i = \mathfrak m\}
+=
+\text{length}_{R_{\mathfrak m}} (M_{\mathfrak m}).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $M_i/M_{i-1}$ is not simple then we can refine the filtration
+and the filtration is not maximal. Thus we see that $M_i/M_{i-1}$
+is simple. By Lemma \ref{lemma-characterize-length-1} the modules
+$M_i/M_{i-1}$ have length $1$ and are of the form $R/\mathfrak m_i$
+for some maximal ideals $\mathfrak m_i$. By additivity of length,
+Lemma \ref{lemma-length-additive}, we see $n = \text{length}_R(M)$.
+Since localization is exact, we see that
+$$
+0 = (M_0)_{\mathfrak m}
+\subset (M_1)_{\mathfrak m}
+\subset (M_2)_{\mathfrak m}
+\subset \ldots
+\subset (M_n)_{\mathfrak m} = M_{\mathfrak m}
+$$
+is a filtration of $M_{\mathfrak m}$ with successive quotients
+$(M_i/M_{i-1})_{\mathfrak m}$. Thus the last statement follows
+directly from the fact that given maximal ideals $\mathfrak m$,
+$\mathfrak m'$ of $R$ we have
+$$
+(R/\mathfrak m')_{\mathfrak m}
+\cong
+\left\{
+\begin{matrix}
+0 &
+\text{if } \mathfrak m \not = \mathfrak m', \\
+R_{\mathfrak m}/\mathfrak m R_{\mathfrak m} &
+\text{if } \mathfrak m = \mathfrak m'
+\end{matrix}
+\right.
+$$
+This we leave to the reader.
+\end{proof}
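
\noindent
We illustrate the last part of the lemma with a $\mathbf{Z}$-module.

\begin{example}
\label{example-simple-pieces}
Let $R = \mathbf{Z}$ and $M = \mathbf{Z}/12\mathbf{Z}$. The maximal
chain
$$
0 \subset 6\mathbf{Z}/12\mathbf{Z} \subset 2\mathbf{Z}/12\mathbf{Z}
\subset \mathbf{Z}/12\mathbf{Z}
$$
has simple quotients $\mathbf{Z}/(2)$, $\mathbf{Z}/(3)$,
$\mathbf{Z}/(2)$, so $n = 3$ and the maximal ideals occurring are
$(2)$ twice and $(3)$ once. Correspondingly
$M_{(2)} \cong \mathbf{Z}/4\mathbf{Z}$ has length $2$ over
$\mathbf{Z}_{(2)}$ and $M_{(3)} \cong \mathbf{Z}/3\mathbf{Z}$ has
length $1$ over $\mathbf{Z}_{(3)}$.
\end{example}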
+
+\begin{lemma}
+\label{lemma-pushdown-module}
+Let $A$ be a local ring with maximal ideal $\mathfrak m$.
+Let $B$ be a semi-local ring with maximal ideals $\mathfrak m_i$,
+$i = 1, \ldots, n$.
+Suppose that $A \to B$ is a homomorphism such that each $\mathfrak m_i$
+lies over $\mathfrak m$ and such that
+$$
+[\kappa(\mathfrak m_i) : \kappa(\mathfrak m)] < \infty.
+$$
+Let $M$ be a $B$-module of finite length.
+Then
+$$
+\text{length}_A(M) = \sum\nolimits_{i = 1, \ldots, n}
+[\kappa(\mathfrak m_i) : \kappa(\mathfrak m)]
+\text{length}_{B_{\mathfrak m_i}}(M_{\mathfrak m_i}),
+$$
+in particular $\text{length}_A(M) < \infty$.
+\end{lemma}
+
+\begin{proof}
+Choose a maximal chain
+$$
+0 = M_0
+\subset M_1
+\subset M_2
+\subset \ldots
+\subset M_m = M
+$$
+by $B$-submodules as in Lemma \ref{lemma-simple-pieces}.
+Then each quotient $M_j/M_{j - 1}$ is isomorphic to
+$\kappa(\mathfrak m_{i(j)})$ for some $i(j) \in \{1, \ldots, n\}$.
+Moreover
+$\text{length}_A(\kappa(\mathfrak m_i)) =
+[\kappa(\mathfrak m_i) : \kappa(\mathfrak m)]$ by
+Lemma \ref{lemma-dimension-is-length}. The lemma follows
+by additivity of lengths (Lemma \ref{lemma-length-additive}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-module}
+Let $A \to B$ be a flat local homomorphism of local rings.
+Then for any $A$-module $M$ we have
+$$
+\text{length}_A(M) \text{length}_B(B/\mathfrak m_AB)
+=
+\text{length}_B(M \otimes_A B).
+$$
+In particular, if $\text{length}_B(B/\mathfrak m_AB) < \infty$
+then $M$ has finite length if and only if $M \otimes_A B$ has finite length.
+\end{lemma}
+
+\begin{proof}
+The ring map $A \to B$ is faithfully flat by
+Lemma \ref{lemma-local-flat-ff}.
+Hence if $0 = M_0 \subset M_1 \subset \ldots \subset M_n = M$
+is a chain of length $n$ in $M$, then the corresponding chain
+$0 = M_0 \otimes_A B \subset M_1 \otimes_A B \subset
+\ldots \subset M_n \otimes_A B = M \otimes_A B$ has length $n$
+also. This proves
+$\text{length}_A(M) = \infty \Rightarrow
+\text{length}_B(M \otimes_A B) = \infty$.
+Next, assume $\text{length}_A(M) < \infty$. In this case we see
+that $M$ has a filtration of length $\ell = \text{length}_A(M)$
+whose quotients are $A/\mathfrak m_A$. Arguing as above
+we see that $M \otimes_A B$ has a filtration of length $\ell$
+whose quotients are isomorphic to
+$B \otimes_A A/\mathfrak m_A = B/\mathfrak m_AB$.
+Thus the lemma follows.
+\end{proof}
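
\noindent
A simple example of the multiplicativity in the lemma is given by
dual numbers.

\begin{example}
\label{example-pullback-length-dual-numbers}
Let $A$ be a local ring with residue field $\kappa$ and set
$B = A[y]/(y^2)$. Then $B$ is local with maximal ideal
$\mathfrak m_A B + (y)$, and $A \to B$ is a flat local homomorphism
because $B$ is free with basis $1, y$ as an $A$-module. Here
$B/\mathfrak m_A B \cong \kappa[y]/(y^2)$ has length $2$ over $B$,
so Lemma \ref{lemma-pullback-module} says tensoring with $B$ doubles
lengths; for instance $\kappa \otimes_A B \cong \kappa[y]/(y^2)$
indeed has length $2$.
\end{example}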
+
+\begin{lemma}
+\label{lemma-pullback-transitive}
+Let $A \to B \to C$ be flat local homomorphisms of local rings. Then
+$$
+\text{length}_B(B/\mathfrak m_A B)
+\text{length}_C(C/\mathfrak m_B C)
+=
+\text{length}_C(C/\mathfrak m_A C)
+$$
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-pullback-module} applied to the ring map
+$B \to C$ and the $B$-module $M = B/\mathfrak m_A B$
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Artinian rings}
+\label{section-artinian}
+
+\noindent
+Artinian rings, and especially local Artinian rings,
+play an important role in algebraic geometry, for example
+in deformation theory.
+
+\begin{definition}
+\label{definition-artinian}
+A ring $R$ is {\it Artinian} if it satisfies the
+descending chain condition for ideals.
+\end{definition}
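
\noindent
Here are some examples and non-examples.

\begin{example}
\label{example-artinian-rings}
The ring $\mathbf{Z}/n\mathbf{Z}$ is Artinian, since it has only
finitely many ideals, as is $k[x]/(x^n)$ for any field $k$ by
Lemma \ref{lemma-finite-dimensional-algebra} below. On the other hand
$\mathbf{Z}$ and $k[x]$ are Noetherian but not Artinian: for example
$(2) \supset (4) \supset (8) \supset \ldots$ is a strictly descending
chain of ideals of $\mathbf{Z}$ which does not stabilize.
\end{example}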
+
+\begin{lemma}
+\label{lemma-finite-dimensional-algebra}
+Suppose $R$ is a finite dimensional algebra over a field.
+Then $R$ is Artinian.
+\end{lemma}
+
+\begin{proof}
The descending chain condition for ideals holds because an ideal of
$R$ is in particular a sub vector space of the finite dimensional
vector space $R$, and the dimensions along a strictly descending chain
of sub vector spaces strictly decrease.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-artinian-finite-nr-max}
+If $R$ is Artinian then $R$ has only finitely many maximal ideals.
+\end{lemma}
+
+\begin{proof}
+Suppose that $\mathfrak m_i$, $i = 1, 2, 3, \ldots$ are
+pairwise distinct maximal ideals.
+Then $\mathfrak m_1 \supset \mathfrak m_1\cap \mathfrak m_2
+\supset \mathfrak m_1 \cap \mathfrak m_2 \cap \mathfrak m_3 \supset \ldots$
+is an infinite descending sequence (because by the Chinese
+remainder theorem all the maps $R \to \oplus_{i = 1}^n R/\mathfrak m_i$
+are surjective).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-artinian-radical-nilpotent}
+Let $R$ be Artinian. The Jacobson radical of $R$ is a nilpotent ideal.
+\end{lemma}
+
+\begin{proof}
+Let $I \subset R$ be the Jacobson radical.
+Note that $I \supset I^2 \supset I^3 \supset \ldots$ is a descending
+sequence. Thus $I^n = I^{n + 1}$ for some $n$.
+Set $J = \{ x\in R \mid xI^n = 0\}$. We have to show $J = R$.
+If not, choose an ideal $J' \not = J$, $J \subset J'$ minimal (possible
+by the Artinian property). Then $J' = J + Rx$ for some $x \in R$.
+By NAK, Lemma \ref{lemma-NAK}, we have $IJ' \subset J$.
+Hence $xI^{n + 1} \subset xI \cdot I^n \subset J \cdot I^n = 0$.
+Since $I^{n + 1} = I^n$ we conclude $x\in J$. Contradiction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-local}
+Any ring with finitely many maximal ideals and
+locally nilpotent Jacobson radical is the product of its localizations
+at its maximal ideals. Also, all primes are maximal.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a ring with finitely many maximal ideals
+$\mathfrak m_1, \ldots, \mathfrak m_n$.
+Let $I = \bigcap_{i = 1}^n \mathfrak m_i$
+be the Jacobson radical of $R$. Assume $I$ is locally nilpotent.
+Let $\mathfrak p$ be a prime ideal of $R$.
+Since every prime contains every nilpotent
+element of $R$ we see
+$ \mathfrak p \supset \mathfrak m_1 \cap \ldots \cap \mathfrak m_n$.
+Since $\mathfrak m_1 \cap \ldots \cap \mathfrak m_n \supset
+\mathfrak m_1 \ldots \mathfrak m_n$
+we conclude $\mathfrak p \supset \mathfrak m_1 \ldots \mathfrak m_n$.
+Hence $\mathfrak p \supset \mathfrak m_i$ for some $i$, and so
+$\mathfrak p = \mathfrak m_i$. By the Chinese remainder theorem
+(Lemma \ref{lemma-chinese-remainder})
+we have $R/I \cong \bigoplus R/\mathfrak m_i$
+which is a product of fields.
+Hence by Lemma \ref{lemma-lift-idempotents}
+there are idempotents $e_i$, $i = 1, \ldots, n$
+with $e_i \bmod \mathfrak m_j = \delta_{ij}$.
+Hence $R = \prod Re_i$, and each $Re_i$ is a
+ring with exactly one maximal ideal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-artinian-finite-length}
+A ring $R$ is Artinian if and only if it has finite length
+as a module over itself. Any such ring $R$ is both Artinian and
+Noetherian, any prime ideal of $R$ is a maximal ideal, and $R$ is equal
+to the (finite) product of its localizations at its maximal ideals.
+\end{lemma}
+
+\begin{proof}
+If $R$ has finite length over itself then it satisfies both
+the ascending chain condition and the descending chain
+condition for ideals. Hence it is both Noetherian and Artinian.
Any Artinian ring is equal to the product of its localizations
+at maximal ideals by Lemmas \ref{lemma-artinian-finite-nr-max},
+\ref{lemma-artinian-radical-nilpotent}, and \ref{lemma-product-local}.
+
+\medskip\noindent
+Suppose that $R$ is Artinian. We will show $R$ has finite
+length over itself. It suffices to exhibit a chain of
+submodules whose successive quotients have finite length.
+By what we said above
+we may assume that $R$ is local, with maximal ideal $\mathfrak m$.
+By Lemma \ref{lemma-artinian-radical-nilpotent} we have
+$\mathfrak m^n =0$ for some $n$.
+Consider the sequence
+$0 = \mathfrak m^n \subset \mathfrak m^{n-1} \subset
+\ldots \subset \mathfrak m \subset R$. By Lemma
+\ref{lemma-dimension-is-length} the length of each subquotient
+$\mathfrak m^j/\mathfrak m^{j + 1}$ is the dimension of this
+as a vector space over $\kappa(\mathfrak m)$. This has to be
+finite since otherwise we would have an infinite descending
+chain of sub vector spaces which would correspond to an
+infinite descending chain of ideals in $R$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Homomorphisms essentially of finite type}
+\label{section-essentially-of-finite-type}
+
+\noindent
+Some simple remarks on localizations of finite type ring maps.
+
+\begin{definition}
+\label{definition-essentially-finite-p-t}
+Let $R \to S$ be a ring map.
+\begin{enumerate}
+\item We say that $R \to S$ is {\it essentially of finite type} if
+$S$ is the localization of an $R$-algebra of finite type.
+\item We say that $R \to S$ is {\it essentially of finite presentation} if
+$S$ is the localization of an $R$-algebra of finite presentation.
+\end{enumerate}
+\end{definition}
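
\noindent
The simplest examples are localizations.

\begin{example}
\label{example-essentially-of-finite-type}
For any ring $R$ and any prime $\mathfrak p \subset R$ the map
$R \to R_{\mathfrak p}$ is essentially of finite type, and even
essentially of finite presentation, as $R_{\mathfrak p}$ is a
localization of $R$ itself. Similarly, for a field $k$ the extension
$k \subset k(x_1, \ldots, x_n)$ is essentially of finite type because
$k(x_1, \ldots, x_n)$ is the localization of $k[x_1, \ldots, x_n]$
at the prime ideal $(0)$.
\end{example}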
+
+\begin{lemma}
+\label{lemma-composition-essentially-of-finite-type}
+The class of ring maps which are essentially of finite type is
+preserved under composition. Similarly for essentially of finite
+presentation.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-essentially-of-finite-type}
+The class of ring maps which are essentially of finite type is
+preserved by base change. Similarly for essentially of finite
+presentation.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-essentially-of-finite-type-into-artinian-local}
+Let $R \to S$ be a ring map. Assume $S$ is an Artinian local ring with
+maximal ideal $\mathfrak m$. Then
+\begin{enumerate}
+\item $R \to S$ is finite if and only if $R \to S/\mathfrak m$ is finite,
+\item $R \to S$ is of finite type if and only if $R \to S/\mathfrak m$
is of finite type, and
+\item $R \to S$ is essentially of finite type if and
+only if the composition $R \to S/\mathfrak m$ is essentially
+of finite type.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $R \to S$ is finite, then $R \to S/\mathfrak m$
+is finite by Lemma \ref{lemma-finite-transitive}.
+Conversely, assume $R \to S/\mathfrak m$ is finite.
+As $S$ has finite length over itself
+(Lemma \ref{lemma-artinian-finite-length})
+we can choose a filtration
+$$
+0 \subset I_1 \subset \ldots \subset I_n = S
+$$
+by ideals such that $I_i/I_{i - 1} \cong S/\mathfrak m$ as $S$-modules.
+Thus $S$ has a filtration by $R$-submodules $I_i$ such that each
+successive quotient is a finite $R$-module. Thus $S$ is a finite
+$R$-module by Lemma \ref{lemma-extension}.
+
+\medskip\noindent
+If $R \to S$ is of finite type, then $R \to S/\mathfrak m$
+is of finite type by Lemma \ref{lemma-compose-finite-type}.
+Conversely, assume that $R \to S/\mathfrak m$ is of finite type.
+Choose $f_1, \ldots, f_n \in S$ which map to generators of $S/\mathfrak m$.
+Then $A = R[x_1, \ldots, x_n] \to S$, $x_i \mapsto f_i$ is a ring map such
+that $A \to S/\mathfrak m$ is surjective (in particular finite).
+Hence $A \to S$ is finite by part (1) and we see that $R \to S$
+is of finite type by Lemma \ref{lemma-compose-finite-type}.
+
+\medskip\noindent
+If $R \to S$ is essentially of finite type, then $R \to S/\mathfrak m$
+is essentially of finite type by
+Lemma \ref{lemma-composition-essentially-of-finite-type}.
+Conversely, assume that $R \to S/\mathfrak m$ is essentially
of finite type. Suppose $S/\mathfrak m$ is a localization
+of $R[x_1, \ldots, x_n]/I$. Choose $f_1, \ldots, f_n \in S$
+whose congruence classes modulo $\mathfrak m$ correspond to
+the congruence classes of $x_1, \ldots, x_n$ modulo $I$.
+Consider the map $R[x_1, \ldots, x_n] \to S$, $x_i \mapsto f_i$
+with kernel $J$. Set $A = R[x_1, \ldots, x_n]/J \subset S$
+and $\mathfrak p = A \cap \mathfrak m$. Note that
+$A/\mathfrak p \subset S/\mathfrak m$ is equal to the image
+of $R[x_1, \ldots, x_n]/I$ in $S/\mathfrak m$. Hence
+$\kappa(\mathfrak p) = S/\mathfrak m$. Thus $A_\mathfrak p \to S$
+is finite by part (1). We conclude that $S$ is essentially of finite
+type by Lemma \ref{lemma-composition-essentially-of-finite-type}.
+\end{proof}
+
+\noindent
+The following lemma can be proven using properness of projective
+space instead of the algebraic argument we give here.
+
+\begin{lemma}
+\label{lemma-localization-at-closed-point-special-fibre}
+Let $\varphi : R \to S$ be essentially of finite type with $R$ and $S$
+local (but not necessarily $\varphi$ local). Then there exists
+an $n$ and a maximal ideal $\mathfrak m \subset R[x_1, \ldots, x_n]$
+lying over $\mathfrak m_R$ such that $S$ is a localization of a
+quotient of $R[x_1, \ldots, x_n]_\mathfrak m$.
+\end{lemma}
+
+\begin{proof}
+We can write $S$ as a localization of a quotient of $R[x_1, \ldots, x_n]$.
+Hence it suffices to prove the lemma in case
+$S = R[x_1, \ldots, x_n]_\mathfrak q$ for some prime
+$\mathfrak q \subset R[x_1, \ldots, x_n]$.
+If $\mathfrak q + \mathfrak m_R R[x_1, \ldots, x_n] \not = R[x_1, \ldots, x_n]$
+then we can find a maximal ideal $\mathfrak m$ as in the statement
+of the lemma with $\mathfrak q \subset \mathfrak m$ and the result is clear.
+
+\medskip\noindent
+Choose a valuation ring $A \subset \kappa(\mathfrak q)$
+which dominates the image of $R \to \kappa(\mathfrak q)$
+(Lemma \ref{lemma-dominate}). If the image $\lambda_i \in \kappa(\mathfrak q)$
+of $x_i$ is contained in $A$, then $\mathfrak q$ is contained in the inverse
+image of $\mathfrak m_A$ via $R[x_1, \ldots, x_n] \to A$
+which means we are back in the preceding case.
+Hence there exists an $i$ such that $\lambda_i^{-1} \in A$
+and such that $\lambda_j/\lambda_i \in A$ for all $j = 1, \ldots, n$
+(because the value group of $A$ is totally ordered, see
+Lemma \ref{lemma-valuation-group}). Then we consider the map
+$$
+R[y_0, y_1, \ldots, \hat{y_i}, \ldots, y_n]
+\to R[x_1, \ldots, x_n]_\mathfrak q,\quad
+y_0 \mapsto 1/x_i,\quad y_j \mapsto x_j/x_i
+$$
+Let $\mathfrak q' \subset R[y_0, \ldots, \hat{y_i}, \ldots, y_n]$
+be the inverse image
+of $\mathfrak q$. Since $y_0 \not \in \mathfrak q'$ it is easy to see
+that the displayed arrow defines an isomorphism on localizations.
+On the other hand, the result of the first paragraph applies to
+$R[y_0, \ldots, \hat{y_i}, \ldots, y_n]$
+because $y_j$ maps to an element of $A$.
+This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{K-groups}
+\label{section-K-groups}
+
+\noindent
+Let $R$ be a ring. We will introduce two abelian groups associated
+to $R$. The first of the two is denoted $K'_0(R)$ and has the following
+properties\footnote{The definition makes sense for any ring but is rarely
+used unless $R$ is Noetherian.}:
+\begin{enumerate}
+\item For every finite $R$-module $M$ there is given an element $[M]$ in
+$K'_0(R)$,
+\item for every short exact sequence $0 \to M' \to M \to M'' \to 0$
+of finite $R$-modules we have the relation $[M] = [M'] + [M'']$,
+\item the group $K'_0(R)$ is generated by the elements $[M]$, and
+\item all relations in $K'_0(R)$ among the generators $[M]$
+are $\mathbf{Z}$-linear combinations
+of the relations coming from exact sequences as above.
+\end{enumerate}
+The actual construction is a bit more annoying since one has to take care
+that the collection of all finitely generated $R$-modules is a proper class.
+However, this problem can be overcome by taking as set of generators
of the group $K'_0(R)$ the elements $[R^n/K]$ where $n$ ranges over
all nonnegative integers and $K$ ranges over all submodules $K \subset R^n$.
+The generators for the subgroup of relations imposed on these elements
+will be the relations coming from short exact sequences whose terms
+are of the form $R^n/K$. The element $[M]$ is defined by
+choosing $n$ and $K$ such that $M \cong R^n/K$ and putting
+$[M] = [R^n/K]$. Details left to the reader.
+
+\begin{lemma}
+\label{lemma-length-K0}
+If $R$ is an Artinian local ring then the length function
+defines a natural abelian group homomorphism
+$\text{length}_R : K'_0(R) \to \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+The length of any finite $R$-module is finite,
+because it is the quotient of $R^n$ which has finite length by
+Lemma \ref{lemma-artinian-finite-length}. And the length function
+is additive, see Lemma \ref{lemma-length-additive}.
+\end{proof}
+
+\noindent
+The second of the two is denoted $K_0(R)$ and has the following
+properties:
+\begin{enumerate}
+\item For every finite projective $R$-module $M$ there
+is given an element $[M]$ in $K_0(R)$,
+\item for every short exact sequence $0 \to M' \to M \to M'' \to 0$
+of finite projective $R$-modules we have the relation $[M] = [M'] + [M'']$,
+\item the group $K_0(R)$ is generated by the elements $[M]$, and
+\item all relations in $K_0(R)$ are $\mathbf{Z}$-linear combinations
+of the relations coming from exact sequences as above.
+\end{enumerate}
+The construction of this group is done as above.
+
+\medskip\noindent
+We note that there is an obvious map $K_0(R) \to K'_0(R)$
+which is not an isomorphism in general.
+
+\begin{example}
+\label{example-K0-field}
+Note that if $R = k$ is a field then we clearly have
+$K_0(k) = K'_0(k) \cong \mathbf{Z}$ with the isomorphism
+given by the dimension function (which is also the length function).
+\end{example}
+
+\begin{example}
+\label{example-K0-PID}
+Let $R$ be a PID. We claim $K_0(R) = K'_0(R) = \mathbf{Z}$.
+Namely, any finite projective $R$-module is finite free.
+A finite free module has a well defined rank by Lemma \ref{lemma-rank}.
+Given a short exact sequence of finite free modules
+$$
+0 \to M' \to M \to M'' \to 0
+$$
+we have $\text{rank}(M) = \text{rank}(M') + \text{rank}(M'')$
because we have $M \cong M' \oplus M''$ in this case (for example
+we have a splitting by Lemma \ref{lemma-lift-map}).
+We conclude $K_0(R) = \mathbf{Z}$.
+
+\medskip\noindent
The structure theorem for modules over a PID says that
+any finitely generated $R$-module is of the form
+$M = R^{\oplus r} \oplus R/(d_1) \oplus \ldots \oplus R/(d_k)$.
+Consider the short exact sequence
+$$
+0 \to (d_i) \to R \to R/(d_i) \to 0
+$$
+Since the ideal $(d_i)$ is isomorphic to $R$ as a module
+(it is free with generator $d_i$), in $K'_0(R)$ we have
$[(d_i)] = [R]$. Then $[R/(d_i)] = [R] - [(d_i)] = 0$. From this it
+follows that a torsion module has zero class in $K'_0(R)$.
+Using the rank of the free part gives an identification $K'_0(R) = \mathbf{Z}$
+and the canonical homomorphism from $K_0(R) \to K'_0(R)$ is an isomorphism.
+\end{example}
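
\noindent
For instance (a direct illustration of the example above), take
$R = \mathbf{Z}$ and $M = \mathbf{Z}^{\oplus 2} \oplus \mathbf{Z}/6\mathbf{Z}$.
The torsion part has zero class, so
$$
[M] = 2[\mathbf{Z}] + [\mathbf{Z}/6\mathbf{Z}] = 2[\mathbf{Z}]
$$
in $K'_0(\mathbf{Z})$, i.e., the class of $M$ corresponds to
$2 \in \mathbf{Z}$ under the identification $K'_0(\mathbf{Z}) = \mathbf{Z}$
by the rank of the free part.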
+
+\begin{example}
+\label{example-K0-polynomial-ring}
+Let $k$ be a field. Then $K_0(k[x]) = K'_0(k[x]) = \mathbf{Z}$.
+This follows from Example \ref{example-K0-PID} as
+$R = k[x]$ is a PID.
+\end{example}
+
+\begin{example}
+\label{example-K0-node}
+Let $k$ be a field. Let $R = \{f \in k[x] \mid f(0) = f(1)\}$,
+compare Example \ref{example-affine-open-not-standard}.
+In this case $K_0(R) \cong k^* \oplus \mathbf{Z}$, but
+$K'_0(R) = \mathbf{Z}$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-K0-product}
+Let $R = R_1 \times R_2$. Then $K_0(R) = K_0(R_1) \times K_0(R_2)$
and $K'_0(R) = K'_0(R_1) \times K'_0(R_2)$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-K0prime-Artinian}
+Let $R$ be an Artinian local ring.
+The map $\text{length}_R : K'_0(R) \to \mathbf{Z}$
+of Lemma \ref{lemma-length-K0} is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-K0-local}
+Let $(R, \mathfrak m)$ be a local ring. Every finite projective $R$-module
+is finite free. The map $\text{rank}_R : K_0(R) \to \mathbf{Z}$
defined by $[M] \mapsto \text{rank}_R(M)$ is well defined
+and an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Let $P$ be a finite projective $R$-module. Choose elements
+$x_1, \ldots, x_n \in P$ which map to a basis of $P/\mathfrak m P$.
+By Nakayama's Lemma \ref{lemma-NAK} these elements generate $P$.
+The corresponding surjection $u : R^{\oplus n} \to P$ has a splitting
+as $P$ is projective. Hence $R^{\oplus n} = P \oplus Q$ with $Q = \Ker(u)$.
+It follows that $Q/\mathfrak m Q = 0$, hence $Q$ is zero by
+Nakayama's lemma. In this way we see that every finite projective
+$R$-module is finite free.
+A finite free module has a well defined rank by Lemma \ref{lemma-rank}.
+Given a short exact sequence of finite free $R$-modules
+$$
+0 \to M' \to M \to M'' \to 0
+$$
+we have $\text{rank}(M) = \text{rank}(M') + \text{rank}(M'')$
because we have $M \cong M' \oplus M''$ in this case (for example
+we have a splitting by Lemma \ref{lemma-lift-map}).
+We conclude $K_0(R) = \mathbf{Z}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-K0-and-K0prime-Artinian-local}
+Let $R$ be a local Artinian ring. There is a commutative
+diagram
+$$
+\xymatrix{
+K_0(R) \ar[rr] \ar[d]_{\text{rank}_R} & &
+K'_0(R) \ar[d]^{\text{length}_R} \\
+\mathbf{Z} \ar[rr]^{\text{length}_R(R)} & &
+\mathbf{Z}
+}
+$$
+where the vertical maps are isomorphisms by
+Lemmas \ref{lemma-K0prime-Artinian} and \ref{lemma-K0-local}.
+\end{lemma}
+
+\begin{proof}
+Let $P$ be a finite projective $R$-module. We have to show that
+$\text{length}_R(P) = \text{rank}_R(P) \text{length}_R(R)$.
+By Lemma \ref{lemma-K0-local} the module $P$ is finite free. So
+$P \cong R^{\oplus n}$ for some $n \geq 0$. Then $\text{rank}_R(P) = n$ and
+$\text{length}_R(R^{\oplus n}) = n \text{length}_R(R)$
by additivity of lengths (Lemma \ref{lemma-length-additive}).
+Thus the result holds.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Graded rings}
+\label{section-graded}
+
+\noindent
+A {\it graded ring} will be for us a ring $S$ endowed
+with a direct sum decomposition $S = \bigoplus_{d \geq 0} S_d$
+of the underlying abelian group such that $S_d \cdot S_e \subset S_{d + e}$.
+Note that we do not allow nonzero elements in negative degrees.
+The {\it irrelevant ideal} is the ideal $S_{+} = \bigoplus_{d > 0} S_d$.
+A {\it graded module}
+will be an $S$-module $M$ endowed with a direct sum decomposition
+$M = \bigoplus_{n\in \mathbf{Z}} M_n$ of the underlying abelian group
+such that $S_d \cdot M_e \subset M_{d + e}$. Note that for modules we do allow
+nonzero elements in negative degrees.
+We think of $S$ as a graded $S$-module by setting $S_{-k} = (0)$
+for $k > 0$. An element $x$ (resp.\ $f$) of $M$ (resp.\ $S$) is called
+{\it homogeneous}
+if $x \in M_d$ (resp.\ $f \in S_d$) for some $d$.
+A {\it map of graded $S$-modules} is a map of $S$-modules
+$\varphi : M \to M'$ such that $\varphi(M_d) \subset M'_d$.
+We do not allow maps to shift degrees. Let us denote
+$\text{GrHom}_0(M, N)$ the $S_0$-module of homomorphisms
+of graded modules from $M$ to $N$.
+
+\medskip\noindent
+At this point there are the notions of graded ideal,
+graded quotient ring, graded submodule, graded quotient module,
+graded tensor product, etc. We leave it to the reader to find the
+relevant definitions, and lemmas. For example: A short exact sequence
+of graded modules is short exact in every degree.
+
+\medskip\noindent
+Given a graded ring $S$, a graded $S$-module $M$ and $n \in \mathbf{Z}$
+we denote $M(n)$ the graded $S$-module with $M(n)_d = M_{n + d}$.
+This is called the {\it twist of $M$ by $n$}. In particular we get
+modules $S(n)$, $n \in \mathbf{Z}$ which will play an important
+role in the study of projective schemes. There are some obvious
+functorial isomorphisms such as
+$(M \oplus N)(n) = M(n) \oplus N(n)$,
+$(M \otimes_S N)(n) = M \otimes_S N(n) = M(n) \otimes_S N$.
+In addition we can define a graded $S$-module structure on
+the $S_0$-module
+$$
+\text{GrHom}(M, N) =
+\bigoplus\nolimits_{n \in \mathbf{Z}} \text{GrHom}_n(M, N),
+\quad
+\text{GrHom}_n(M, N) = \text{GrHom}_0(M, N(n)).
+$$
+We omit the definition of the multiplication.
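
\noindent
As an illustration of twisting, if $S = k[x]$ with $\deg(x) = 1$, then
$S(-1)$ is free of rank $1$ with generator in degree $1$, since
$S(-1)_d = S_{d - 1}$. More generally, for any graded ring $S$ a map of
graded $S$-modules $S(-d) \to M$ is the same thing as a homogeneous
element of $M_d$, namely the image of the generator $1 \in S(-d)_d$.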
+
+\begin{lemma}
+\label{lemma-graded-NAK}
+Let $S$ be a graded ring. Let $M$ be a graded $S$-module.
+\begin{enumerate}
+\item If $S_+M = M$ and $M$ is finite, then $M = 0$.
+\item If $N, N' \subset M$ are graded submodules,
+$M = N + S_+N'$, and $N'$ is finite, then $M = N$.
+\item If $N \to M$ is a map of graded modules, $N/S_+N \to M/S_+M$
+is surjective, and $M$ is finite, then $N \to M$ is surjective.
+\item If $x_1, \ldots, x_n \in M$ are homogeneous and generate $M/S_+M$
+and $M$ is finite, then $x_1, \ldots, x_n$ generate $M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Choose generators $y_1, \ldots, y_r$ of $M$ over $S$.
+We may assume that $y_i$ is homogeneous of degree $d_i$. After
+renumbering we may assume $d_r = \min(d_i)$. Then the condition that
+$S_+M = M$ implies $y_r = 0$. Hence $M = 0$ by induction on $r$.
+Part (2) follows by applying (1) to $M/N$. Part (3) follows by
+applying (2) to the submodules $\Im(N \to M)$ and $M$.
+Part (4) follows by applying (3) to the module map
+$\bigoplus S(-d_i) \to M$, $(s_1, \ldots, s_n) \mapsto \sum s_i x_i$.
+\end{proof}
+
+\noindent
+Let $S$ be a graded ring. Let $d \geq 1$ be an integer.
+We set $S^{(d)} = \bigoplus_{n \geq 0} S_{nd}$. We think of
+$S^{(d)}$ as a graded ring with degree $n$ summand
+$(S^{(d)})_n = S_{nd}$. Given a graded $S$-module $M$ we
+can similarly consider $M^{(d)} = \bigoplus_{n \in \mathbf{Z}} M_{nd}$
+which is a graded $S^{(d)}$-module.
+
+\begin{lemma}
+\label{lemma-uple-generated-degree-1}
+Let $S$ be a graded ring, which is finitely generated over $S_0$.
+Then for all sufficiently divisible $d$ the algebra
+$S^{(d)}$ is generated in degree $1$ over $S_0$.
+\end{lemma}
+
+\begin{proof}
+Say $S$ is generated by $f_1, \ldots, f_r \in S$ over $S_0$.
+After replacing $f_i$ by their homogeneous parts, we may assume
+$f_i$ is homogeneous of degree $d_i > 0$. Then any element of
+$S_n$ is a linear combination with coefficients in $S_0$ of monomials
+$f_1^{e_1} \ldots f_r^{e_r}$ with $\sum e_i d_i = n$.
+Let $m$ be a multiple of $\text{lcm}(d_i)$. For any $N \geq r$ if
+$$
+\sum e_i d_i = N m
+$$
then for some $i$ we have $e_i \geq m/d_i$: indeed, if $e_i < m/d_i$
for all $i$, then $\sum e_i d_i < rm \leq Nm$.
+Hence every monomial of degree $N m$ is a product of a monomial
+of degree $m$, namely $f_i^{m/d_i}$, and a monomial of degree $(N - 1)m$.
+It follows that any monomial of degree $nrm$ with $n \geq 2$
+is a product of monomials of degree $rm$. Thus $S^{(rm)}$ is generated
+in degree $1$ over $S_0$.
+\end{proof}
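
\noindent
For instance, suppose $S$ is generated over $S_0$ by homogeneous elements
$f_1, f_2$ of degrees $d_1 = 2$ and $d_2 = 3$. Then $m = 6$ works and the
proof shows that $S^{(12)}$ is generated in degree $1$: if
$2e_1 + 3e_2 = 6N$ with $N \geq 2$, then $e_1 \geq 3$ or $e_2 \geq 2$,
so the monomial $f_1^{e_1} f_2^{e_2}$ has $f_1^3$ or $f_2^2$ (each of
degree $6$) as a factor.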
+
+\begin{lemma}
+\label{lemma-integral-closure-graded}
+Let $R \to S$ be a homomorphism of graded rings.
+Let $S' \subset S$ be the integral closure of $R$ in $S$.
+Then
+$$
+S' = \bigoplus\nolimits_{d \geq 0} S' \cap S_d,
+$$
+i.e., $S'$ is a graded $R$-subalgebra of $S$.
+\end{lemma}
+
+\begin{proof}
+We have to show the following: If
+$s = s_n + s_{n + 1} + \ldots + s_m \in S'$, then each homogeneous
+part $s_j \in S'$. We will prove this by induction on $m - n$ over
+all homomorphisms $R \to S$ of graded rings. First note that it
+is immediate that $s_0$ is integral over $R_0$ (hence over $R$) as
+there is a ring map $S \to S_0$ compatible with the ring map $R \to R_0$.
+Thus, after replacing $s$ by $s - s_0$, we may assume $n > 0$. Consider the
+extension of graded rings $R[t, t^{-1}] \to S[t, t^{-1}]$ where
+$t$ has degree $0$. There is a commutative diagram
+$$
+\xymatrix{
+S[t, t^{-1}] \ar[rr]_{s \mapsto t^{\deg(s)}s} & & S[t, t^{-1}] \\
+R[t, t^{-1}] \ar[u] \ar[rr]^{r \mapsto t^{\deg(r)}r} & & R[t, t^{-1}] \ar[u]
+}
+$$
+where the horizontal maps are ring automorphisms. Hence the integral
+closure $C$ of $S[t, t^{-1}]$ over $R[t, t^{-1}]$ maps into itself.
+Thus we see that
+$$
+t^m(s_n + s_{n + 1} + \ldots + s_m) -
+(t^ns_n + t^{n + 1}s_{n + 1} + \ldots + t^ms_m) \in C
+$$
+which implies by induction hypothesis that each $(t^m - t^i)s_i \in C$
+for $i = n, \ldots, m - 1$. Note that for any ring $A$ and $m > i \geq n > 0$
+we have $A[t, t^{-1}]/(t^m - t^i - 1) \cong A[t]/(t^m - t^i - 1) \supset A$
+because $t(t^{m - 1} - t^{i - 1}) = 1$ in $A[t]/(t^m - t^i - 1)$.
+Since $t^m - t^i$ maps to $1$ we see the image of $s_i$ in the ring
+$S[t]/(t^m - t^i - 1)$ is integral over $R[t]/(t^m - t^i - 1)$ for
+$i = n, \ldots, m - 1$. Since $R \to R[t]/(t^m - t^i - 1)$ is finite
+we see that $s_i$ is integral over $R$ by transitivity, see
+Lemma \ref{lemma-integral-transitive}.
+Finally, we also conclude that $s_m = s - \sum_{i = n, \ldots, m - 1} s_i$
+is integral over $R$.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Proj of a graded ring}
+\label{section-proj}
+
+\noindent
+Let $S$ be a graded ring.
+A {\it homogeneous ideal} is simply an ideal
+$I \subset S$ which is also a graded submodule of $S$.
+Equivalently, it is an ideal generated by homogeneous elements.
+Equivalently, if $f \in I$ and
+$$
+f = f_0 + f_1 + \ldots + f_n
+$$
+is the decomposition of $f$ into homogeneous parts in $S$ then $f_i \in I$
+for each $i$. To check that a homogeneous ideal $\mathfrak p$
+is prime it suffices to check that if $ab \in \mathfrak p$
+with $a, b$ homogeneous then either $a \in \mathfrak p$ or
+$b \in \mathfrak p$.
+
+\begin{definition}
+\label{definition-proj}
+Let $S$ be a graded ring.
+We define $\text{Proj}(S)$ to be the set of homogeneous
+prime ideals $\mathfrak p$ of $S$ such that
+$S_{+} \not \subset \mathfrak p$.
+The set $\text{Proj}(S)$ is a subset of $\Spec(S)$
+and we endow it with the induced topology.
+The topological space $\text{Proj}(S)$ is called the
+{\it homogeneous spectrum} of the graded ring $S$.
+\end{definition}
+
+\noindent
+Note that by construction there is a continuous map
+$$
+\text{Proj}(S) \longrightarrow \Spec(S_0).
+$$
+
+\medskip\noindent
+Let $S = \oplus_{d \geq 0} S_d$ be a graded ring.
+Let $f\in S_d$ and assume that $d \geq 1$.
+We define $S_{(f)}$ to be the subring of $S_f$
+consisting of elements of the form $r/f^n$ with $r$ homogeneous and
+$\deg(r) = nd$. If $M$ is a graded $S$-module,
+then we define the $S_{(f)}$-module $M_{(f)}$ as the
submodule of $M_f$ consisting of elements of
+the form $x/f^n$ with $x$ homogeneous of degree $nd$.
+
+\begin{lemma}
+\label{lemma-Z-graded}
+Let $S$ be a $\mathbf{Z}$-graded ring containing a homogeneous
+invertible element of positive degree. Then the set
+$G \subset \Spec(S)$ of $\mathbf{Z}$-graded primes of $S$
+(with induced topology) maps homeomorphically to $\Spec(S_0)$.
+\end{lemma}
+
+\begin{proof}
+First we show that the map is a bijection by constructing an inverse.
+Let $f \in S_d$, $d > 0$ be invertible in $S$.
+If $\mathfrak p_0$ is a prime of $S_0$, then $\mathfrak p_0S$
+is a $\mathbf{Z}$-graded ideal of $S$ such that
+$\mathfrak p_0S \cap S_0 = \mathfrak p_0$. And if $ab \in \mathfrak p_0S$
+with $a$, $b$ homogeneous, then
+$a^db^d/f^{\deg(a) + \deg(b)} \in \mathfrak p_0$.
+Thus either $a^d/f^{\deg(a)} \in \mathfrak p_0$ or
+$b^d/f^{\deg(b)} \in \mathfrak p_0$, in other words either
+$a^d \in \mathfrak p_0S$ or $b^d \in \mathfrak p_0S$.
+It follows that $\sqrt{\mathfrak p_0S}$ is a $\mathbf{Z}$-graded
+prime ideal of $S$ whose intersection with $S_0$ is $\mathfrak p_0$.
+
+\medskip\noindent
+To show that the map is a homeomorphism we show that
+the image of $G \cap D(g)$ is open. If $g = \sum g_i$
+with $g_i \in S_i$, then by the above $G \cap D(g)$
+maps onto the set $\bigcup D(g_i^d/f^i)$ which is open.
+\end{proof}
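
\noindent
For instance, if $S = R[t, t^{-1}]$ with $\deg(t) = 1$, then $t$ is a
homogeneous invertible element of positive degree and the lemma
identifies the $\mathbf{Z}$-graded primes of $R[t, t^{-1}]$ with the
primes of $R$; concretely, the graded prime attached to
$\mathfrak p_0 \subset R$ is $\mathfrak p_0[t, t^{-1}]$, which is prime
because the quotient is $(R/\mathfrak p_0)[t, t^{-1}]$.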
+
+\noindent
+For $f \in S$ homogeneous of degree $> 0$ we define
+$$
+D_{+}(f) = \{ \mathfrak p \in \text{Proj}(S) \mid f \not\in \mathfrak p \}.
+$$
+Finally, for a homogeneous ideal $I \subset S$ we define
+$$
+V_{+}(I) = \{ \mathfrak p \in \text{Proj}(S) \mid I \subset \mathfrak p \}.
+$$
+We will use more generally the notation $V_{+}(E)$ for any
+set $E$ of homogeneous elements $E \subset S$.
+
+\begin{lemma}[Topology on Proj]
+\label{lemma-topology-proj}
+Let $S = \oplus_{d \geq 0} S_d$ be a graded ring.
+\begin{enumerate}
+\item The sets $D_{+}(f)$ are open in $\text{Proj}(S)$.
+\item We have $D_{+}(ff') = D_{+}(f) \cap D_{+}(f')$.
+\item Let $g = g_0 + \ldots + g_m$ be an element
+of $S$ with $g_i \in S_i$. Then
+$$
+D(g) \cap \text{Proj}(S) =
+(D(g_0) \cap \text{Proj}(S))
+\cup
+\bigcup\nolimits_{i \geq 1} D_{+}(g_i).
+$$
+\item
+Let $g_0\in S_0$ be a homogeneous element of degree $0$. Then
+$$
+D(g_0) \cap \text{Proj}(S)
+=
+\bigcup\nolimits_{f \in S_d, \ d\geq 1} D_{+}(g_0 f).
+$$
+\item The open sets $D_{+}(f)$ form a
+basis for the topology of $\text{Proj}(S)$.
+\item Let $f \in S$ be homogeneous of positive degree.
+The ring $S_f$ has a natural $\mathbf{Z}$-grading.
+The ring maps $S \to S_f \leftarrow S_{(f)}$ induce
+homeomorphisms
+$$
+D_{+}(f)
+\leftarrow
+\{\mathbf{Z}\text{-graded primes of }S_f\}
+\to
+\Spec(S_{(f)}).
+$$
+\item There exists an $S$ such that $\text{Proj}(S)$ is not
+quasi-compact.
+\item The sets $V_{+}(I)$ are closed.
+\item Any closed subset $T \subset \text{Proj}(S)$ is of
+the form $V_{+}(I)$ for some homogeneous ideal $I \subset S$.
+\item For any graded ideal $I \subset S$ we have
+$V_{+}(I) = \emptyset$ if and only if $S_{+} \subset \sqrt{I}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $D_{+}(f) = \text{Proj}(S) \cap D(f)$, these sets are open.
+This proves (1). Also (2) follows as $D(ff') = D(f) \cap D(f')$.
+Similarly the sets $V_{+}(I) = \text{Proj}(S) \cap V(I)$
+are closed. This proves (8).
+
+\medskip\noindent
+Suppose that $T \subset \text{Proj}(S)$ is closed.
+Then we can write $T = \text{Proj}(S) \cap V(J)$ for some
+ideal $J \subset S$. By definition of a homogeneous ideal
+if $g \in J$, $g = g_0 + \ldots + g_m$
+with $g_d \in S_d$ then $g_d \in \mathfrak p$ for all
+$\mathfrak p \in T$. Thus, letting $I \subset S$
+be the ideal generated by the homogeneous parts of the elements
+of $J$ we have $T = V_{+}(I)$. This proves (9).
+
+\medskip\noindent
+The formula for $\text{Proj}(S) \cap D(g)$, with $g \in S$ is direct
+from the definitions. This proves (3).
+Consider the formula for $\text{Proj}(S) \cap D(g_0)$.
+The inclusion of the right hand side in the left hand side is
+obvious. For the other inclusion, suppose $g_0 \not \in \mathfrak p$
with $\mathfrak p \in \text{Proj}(S)$. If $g_0 f \in \mathfrak p$
+for all homogeneous $f$ of positive degree, then we see that
+$S_{+} \subset \mathfrak p$ which is a contradiction. This gives
+the other inclusion. This proves (4).
+
+\medskip\noindent
+The collection of opens $D(g) \cap \text{Proj}(S)$
+forms a basis for the topology since the standard opens
+$D(g) \subset \Spec(S)$ form a basis for the topology on
+$\Spec(S)$. By the formulas above we can express
+$D(g) \cap \text{Proj}(S)$ as a union of opens $D_{+}(f)$.
+Hence the collection of opens $D_{+}(f)$ forms a basis for the topology
+also. This proves (5).
+
+\medskip\noindent
+Proof of (6). First we note that $D_{+}(f)$ may be identified
+with a subset (with induced topology) of $D(f) = \Spec(S_f)$
+via Lemma \ref{lemma-standard-open}. Note that the ring
+$S_f$ has a $\mathbf{Z}$-grading. The homogeneous elements are
+of the form $r/f^n$ with $r \in S$ homogeneous and have
+degree $\deg(r/f^n) = \deg(r) - n\deg(f)$. The subset
+$D_{+}(f)$ corresponds exactly to those prime ideals
+$\mathfrak p \subset S_f$ which are $\mathbf{Z}$-graded ideals
+(i.e., generated by homogeneous elements). Hence we have to show that
+the set of $\mathbf{Z}$-graded prime ideals of $S_f$ maps homeomorphically
+to $\Spec(S_{(f)})$. This follows from Lemma \ref{lemma-Z-graded}.
+
+\medskip\noindent
+Let $S = \mathbf{Z}[X_1, X_2, X_3, \ldots]$ with grading such that
+each $X_i$ has degree $1$. Then it is easy to see that
+$$
+\text{Proj}(S) = \bigcup\nolimits_{i = 1}^\infty D_{+}(X_i)
+$$
has no finite subcover. This proves (7).
+
+\medskip\noindent
+Let $I \subset S$ be a graded ideal.
+If $\sqrt{I} \supset S_{+}$ then $V_{+}(I) = \emptyset$ since
+every prime $\mathfrak p \in \text{Proj}(S)$ does not contain
+$S_{+}$ by definition. Conversely, suppose that
+$S_{+} \not \subset \sqrt{I}$. Then we can find an element
+$f \in S_{+}$ such that $f$ is not nilpotent modulo $I$.
+Clearly this means that one of the homogeneous parts of $f$
+is not nilpotent modulo $I$, in other words we may (and do)
+assume that $f$ is homogeneous. This implies that
+$I S_f \not = S_f$, in other words that $(S/I)_f$ is not
+zero. Hence $(S/I)_{(f)} \not = 0$ since it is a ring
+which maps into $(S/I)_f$. Pick a prime
+$\mathfrak q \subset (S/I)_{(f)}$. This corresponds to
+a graded prime of $S/I$, not containing the irrelevant ideal
+$(S/I)_{+}$. And this in turn corresponds to a graded prime
+ideal $\mathfrak p$ of $S$, containing $I$ but not containing $S_{+}$
+as desired. This proves (10) and finishes the proof.
+\end{proof}
+
+\begin{example}
+\label{example-proj-polynomial-ring-1-variable}
+Let $R$ be a ring. If $S = R[X]$ with $\deg(X) = 1$, then the natural map
+$\text{Proj}(S) \to \Spec(R)$ is a bijection and in fact a homeomorphism.
+Namely, suppose $\mathfrak p \in \text{Proj}(S)$. Since
+$S_{+} \not \subset \mathfrak p$ we see that $X \not \in \mathfrak p$.
+Thus if $aX^n \in \mathfrak p$ with $a \in R$ and $n > 0$, then
+$a \in \mathfrak p$. It follows that $\mathfrak p = \mathfrak p_0S$
+with $\mathfrak p_0 = \mathfrak p \cap R$.
+\end{example}
+
+\noindent
+If $\mathfrak p \in \text{Proj}(S)$, then we
+define $S_{(\mathfrak p)}$ to be the ring whose
+elements are fractions $r/f$ where $r, f \in S$ are homogeneous
+elements of the same degree such that $f \not\in \mathfrak p$.
+As usual we say $r/f = r'/f'$ if and only if there exists
+some $f'' \in S$ homogeneous, $f'' \not \in \mathfrak p$ such
+that $f''(rf' - r'f) = 0$.
+Given a graded $S$-module $M$ we let
+$M_{(\mathfrak p)}$ be the $S_{(\mathfrak p)}$-module
+whose elements are fractions $x/f$ with $x \in M$
+and $f \in S$ homogeneous of the same degree such that
+$f \not \in \mathfrak p$. We say $x/f = x'/f'$
+if and only if there exists some $f'' \in S$ homogeneous,
+$f'' \not \in \mathfrak p$ such that $f''(xf' - x'f) = 0$.
+
+\begin{lemma}
+\label{lemma-proj-prime}
+Let $S$ be a graded ring. Let $M$ be a graded $S$-module.
+Let $\mathfrak p$ be an element of $\text{Proj}(S)$.
+Let $f \in S$ be a homogeneous element of positive degree
+such that $f \not \in \mathfrak p$, i.e., $\mathfrak p \in D_{+}(f)$.
+Let $\mathfrak p' \subset S_{(f)}$ be the element of
+$\Spec(S_{(f)})$ corresponding to $\mathfrak p$ as in
+Lemma \ref{lemma-topology-proj}. Then
+$S_{(\mathfrak p)} = (S_{(f)})_{\mathfrak p'}$
+and compatibly
+$M_{(\mathfrak p)} = (M_{(f)})_{\mathfrak p'}$.
+\end{lemma}
+
+\begin{proof}
+We define a map $\psi : M_{(\mathfrak p)} \to (M_{(f)})_{\mathfrak p'}$.
+Let $x/g \in M_{(\mathfrak p)}$. We set
+$$
+\psi(x/g) = (x g^{\deg(f) - 1}/f^{\deg(x)})/(g^{\deg(f)}/f^{\deg(g)}).
+$$
+This makes sense since $\deg(x) = \deg(g)$ and since
+$g^{\deg(f)}/f^{\deg(g)} \not \in \mathfrak p'$.
+We omit the verification that $\psi$ is well defined, a module map
+and an isomorphism. Hint: the inverse sends $(x/f^n)/(g/f^m)$ to
+$(xf^m)/(g f^n)$.
+\end{proof}
+
+\noindent
+Here is a graded variant of Lemma \ref{lemma-silly}.
+
+\begin{lemma}
+\label{lemma-graded-silly}
+Suppose $S$ is a graded ring, $\mathfrak p_i$, $i = 1, \ldots, r$
+homogeneous prime ideals and $I \subset S_{+}$ a graded ideal.
+Assume $I \not\subset \mathfrak p_i$ for all $i$. Then there
+exists a homogeneous element $x\in I$ of positive degree such
+that $x\not\in \mathfrak p_i$ for all $i$.
+\end{lemma}
+
+\begin{proof}
+We may assume there are no inclusions among the $\mathfrak p_i$.
+The result is true for $r = 1$. Suppose the result holds for $r - 1$.
+Pick $x \in I$ homogeneous of positive degree such that
+$x \not \in \mathfrak p_i$ for all $i = 1, \ldots, r - 1$.
+If $x \not\in \mathfrak p_r$ we are done. So assume $x \in \mathfrak p_r$.
+If $I \mathfrak p_1 \ldots \mathfrak p_{r-1} \subset \mathfrak p_r$
+then $I \subset \mathfrak p_r$ a contradiction.
+Pick $y \in I\mathfrak p_1 \ldots \mathfrak p_{r-1}$ homogeneous
and $y \not \in \mathfrak p_r$. Then $x^{\deg(y)} + y^{\deg(x)}$ works:
it is homogeneous of degree $\deg(x)\deg(y)$, lies in $I$, is not in
$\mathfrak p_i$ for $i < r$ (as $x \not \in \mathfrak p_i$ and
$y \in \mathfrak p_i$), and is not in $\mathfrak p_r$
(as $y \not \in \mathfrak p_r$ and $x \in \mathfrak p_r$).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smear-out}
+Let $S$ be a graded ring.
+Let $\mathfrak p \subset S$ be a prime.
+Let $\mathfrak q$ be the homogeneous ideal of $S$ generated by the
+homogeneous elements of $\mathfrak p$. Then $\mathfrak q$ is a
+prime ideal of $S$.
+\end{lemma}
+
+\begin{proof}
+Suppose $f, g \in S$ are such that $fg \in \mathfrak q$.
+Let $f_d$ (resp.\ $g_e$) be the homogeneous part of
+$f$ (resp.\ $g$) of degree $d$ (resp.\ $e$). Assume $d, e$ are
maximal such that $f_d \not = 0$ and $g_e \not = 0$.
+By assumption we can write $fg = \sum a_i f_i$ with
+$f_i \in \mathfrak p$ homogeneous. Say $\deg(f_i) = d_i$.
Then $f_d g_e = \sum a_i' f_i$ where $a_i'$ is the homogeneous
part of $a_i$ of degree $d + e - d_i$ (or $0$ if $d + e - d_i < 0$).
In particular $f_d g_e \in \mathfrak p$.
+Hence $f_d \in \mathfrak p$ or $g_e \in \mathfrak p$. Hence
+$f_d \in \mathfrak q$ or $g_e \in \mathfrak q$. In the first
+case replace $f$ by $f - f_d$, in the second case replace
+$g$ by $g - g_e$. Then still $fg \in \mathfrak q$ but the discrete
+invariant $d + e$ has been decreased. Thus we may continue in this
+fashion until either $f$ or $g$ is zero. This clearly shows that
+$fg \in \mathfrak q$ implies either $f \in \mathfrak q$ or $g \in \mathfrak q$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-graded-ring-minimal-prime}
+Let $S$ be a graded ring.
+\begin{enumerate}
+\item Any minimal prime of $S$ is a homogeneous ideal of $S$.
+\item Given a homogeneous ideal $I \subset S$ any minimal
+prime over $I$ is homogeneous.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first assertion holds because the prime $\mathfrak q$ constructed in
+Lemma \ref{lemma-smear-out} satisfies $\mathfrak q \subset \mathfrak p$.
+The second because we may consider $S/I$ and apply the first part.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dehomogenize-finite-type}
+Let $R$ be a ring. Let $S$ be a graded $R$-algebra. Let $f \in S_{+}$
+be homogeneous. Assume that $S$ is of finite type over $R$. Then
+\begin{enumerate}
+\item the ring $S_{(f)}$ is of finite type over $R$, and
+\item for any finite graded $S$-module $M$ the module $M_{(f)}$
+is a finite $S_{(f)}$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose $f_1, \ldots, f_n \in S$ which generate $S$ as an $R$-algebra.
+We may assume that each $f_i$ is homogeneous (by decomposing each $f_i$
+into its homogeneous components). An element of $S_{(f)}$ is a sum
+of the form
+$$
+\sum\nolimits_{e\deg(f) =
+\sum e_i\deg(f_i)} \lambda_{e_1 \ldots e_n} f_1^{e_1} \ldots f_n^{e_n}/f^e
+$$
+with $\lambda_{e_1 \ldots e_n} \in R$. Thus $S_{(f)}$ is generated
+as an $R$-algebra by the $f_1^{e_1} \ldots f_n^{e_n} /f^e$ with the
+property that $e\deg(f) = \sum e_i\deg(f_i)$. If $e_i \geq \deg(f)$
+then we can write this as
+$$
+f_1^{e_1} \ldots f_n^{e_n}/f^e =
+f_i^{\deg(f)}/f^{\deg(f_i)} \cdot
+f_1^{e_1} \ldots f_i^{e_i - \deg(f)} \ldots f_n^{e_n}/f^{e - \deg(f_i)}
+$$
+Thus we only need the elements $f_i^{\deg(f)}/f^{\deg(f_i)}$ as well
+as the elements $f_1^{e_1} \ldots f_n^{e_n} /f^e$ with
+$e \deg(f) = \sum e_i \deg(f_i)$ and $e_i < \deg(f)$.
+This is a finite list and we see that (1) is true.
+
+\medskip\noindent
+To see (2) suppose that $M$ is generated by homogeneous elements
+$x_1, \ldots, x_m$. Then arguing
+as above we find that $M_{(f)}$ is generated as an $S_{(f)}$-module
+by the finite list of elements of the form
+$f_1^{e_1} \ldots f_n^{e_n} x_j /f^e$
+with $e \deg(f) = \sum e_i \deg(f_i) + \deg(x_j)$ and
+$e_i < \deg(f)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homogenize}
+Let $R$ be a ring.
+Let $R'$ be a finite type $R$-algebra, and let $M$ be a finite $R'$-module.
+There exists a graded $R$-algebra $S$, a graded $S$-module $N$ and
+an element $f \in S$ homogeneous of degree $1$ such that
+\begin{enumerate}
+\item $R' \cong S_{(f)}$ and $M \cong N_{(f)}$ (as modules),
+\item $S_0 = R$ and $S$ is generated by finitely many elements
+of degree $1$ over $R$, and
+\item $N$ is a finite $S$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We may write $R' = R[x_1, \ldots, x_n]/I$ for some ideal $I$.
+For an element $g \in R[x_1, \ldots, x_n]$ denote
+$\tilde g \in R[X_0, \ldots, X_n]$ the element homogeneous of minimal
+degree such that $g = \tilde g(1, x_1, \ldots, x_n)$.
Let $\tilde I \subset R[X_0, \ldots, X_n]$ be the ideal generated by all
+elements $\tilde g$, $g \in I$.
+Set $S = R[X_0, \ldots, X_n]/\tilde I$ and denote $f$ the image
+of $X_0$ in $S$. By construction we have an isomorphism
+$$
+S_{(f)} \longrightarrow R', \quad
+X_i/X_0 \longmapsto x_i.
+$$
+To do the same thing with the module $M$ we choose a presentation
+$$
+M = (R')^{\oplus r}/\sum\nolimits_{j \in J} R'k_j
+$$
+with $k_j = (k_{1j}, \ldots, k_{rj})$. Let $d_{ij} = \deg(\tilde k_{ij})$.
+Set $d_j = \max\{d_{ij}\}$. Set $K_{ij} = X_0^{d_j - d_{ij}}\tilde k_{ij}$
+which is homogeneous of degree $d_j$. With this notation we set
+$$
+N = \Coker\Big(
+\bigoplus\nolimits_{j \in J} S(-d_j) \xrightarrow{(K_{ij})} S^{\oplus r}
+\Big)
+$$
+which works. Some details omitted.
+\end{proof}
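
\noindent
For instance (a sketch of the construction in the simplest case),
if $R' = R[x]/(x^2 - x)$, then $g = x^2 - x$ has homogenization
$\tilde g = X_1^2 - X_0X_1$, so $S = R[X_0, X_1]/(X_1^2 - X_0X_1)$
with $f$ the image of $X_0$. Writing $y = X_1/X_0$ we get
$$
S_{(f)} = R[y]/(y^2 - y) \cong R'
$$
since $(X_1^2 - X_0X_1)/X_0^2 = y^2 - y$.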
+
+
+
+
+\section{Noetherian graded rings}
+\label{section-noetherian-graded}
+
+\noindent
+A bit of theory on Noetherian graded rings including some material on
+Hilbert polynomials.
+
+\begin{lemma}
+\label{lemma-S-plus-generated}
+Let $S$ be a graded ring. A set of homogeneous elements
+$f_i \in S_{+}$ generates $S$ as an algebra over $S_0$ if
+and only if they generate $S_{+}$ as an ideal of $S$.
+\end{lemma}
+
+\begin{proof}
+If the $f_i$ generate $S$ as an algebra over $S_0$ then every element
+in $S_{+}$ is a polynomial without constant term in the $f_i$ and hence
+$S_{+}$ is generated by the $f_i$ as an ideal. Conversely, suppose that
+$S_{+} = \sum Sf_i$. We will prove that any element $f$ of $S$ can be written
+as a polynomial in the $f_i$ with coefficients in $S_0$. It suffices
+to do this for homogeneous elements. Say $f$ has degree $d$. Then we may
+perform induction on $d$. The case $d = 0$ is immediate. If $d > 0$
+then $f \in S_{+}$ hence we can write $f = \sum g_i f_i$
+for some $g_i \in S$. As $S$ is graded we can replace $g_i$ by its
+homogeneous component of degree $d - \deg(f_i)$. By induction we
+see that each $g_i$ is a polynomial in the $f_i$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-graded-Noetherian}
+A graded ring $S$ is Noetherian if and only if $S_0$ is
+Noetherian and $S_{+}$ is finitely generated as an ideal of $S$.
+\end{lemma}
+
+\begin{proof}
+It is clear that if $S$ is Noetherian then $S_0 = S/S_{+}$ is Noetherian
+and $S_{+}$ is finitely generated. Conversely, assume $S_0$ is Noetherian
+and $S_{+}$ finitely generated as an ideal of $S$. Pick generators
+$S_{+} = (f_1, \ldots, f_n)$. By decomposing the $f_i$ into homogeneous
+pieces we may assume each $f_i$ is homogeneous. By
+Lemma \ref{lemma-S-plus-generated}
we see that $S_0[X_1, \ldots, X_n] \to S$ sending $X_i$ to $f_i$
+is surjective. Thus $S$ is Noetherian by
+Lemma \ref{lemma-Noetherian-permanence}.
+\end{proof}
+
+\begin{definition}
+\label{definition-numerical-polynomial}
+Let $A$ be an abelian group.
We say that a function $f : n \mapsto f(n) \in A$
defined for all sufficiently large integers $n$ is a
+{\it numerical polynomial} if there exists $r \geq 0$,
+elements $a_0, \ldots, a_r\in A$ such that
+$$
+f(n) = \sum\nolimits_{i = 0}^r \binom{n}{i} a_i
+$$
+for all $n \gg 0$.
+\end{definition}
+
+\noindent
+The reason for using the binomial coefficients is the
+elementary fact that any polynomial $P \in \mathbf{Q}[T]$
+all of whose values at integer points are integers, is
+equal to a sum $P(T) = \sum a_i \binom{T}{i}$ with
+$a_i \in \mathbf{Z}$. Note that in particular the
+expressions $\binom{T + 1}{i + 1}$ are of this form.
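
\noindent
For instance, the integer-valued polynomial $T^2$ may be written as
$$
T^2 = 2\binom{T}{2} + \binom{T}{1},
$$
so here $a_2 = 2$, $a_1 = 1$ and $a_0 = 0$.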
+
+\begin{lemma}
+\label{lemma-numerical-polynomial-functorial}
If $A \to A'$ is a homomorphism of abelian groups and if
$f : n \mapsto f(n) \in A$ is a numerical polynomial,
then so is the composition of $f$ with the map $A \to A'$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-numerical-polynomial}
+Suppose that $f: n \mapsto f(n) \in A$
+is defined for all $n$ sufficiently large
+and suppose that $n \mapsto f(n) - f(n-1)$
+is a numerical polynomial. Then $f$ is a
+numerical polynomial.
+\end{lemma}
+
+\begin{proof}
+Let $f(n) - f(n-1) = \sum\nolimits_{i = 0}^r \binom{n}{i} a_i$
+for all $n \gg 0$. Set
+$g(n) = f(n) - \sum\nolimits_{i = 0}^r \binom{n + 1}{i + 1} a_i$.
+Then $g(n) - g(n-1) = 0$ for all $n \gg 0$. Hence $g$ is
+eventually constant, say equal to $a_{-1}$. We leave it
+to the reader to show that
+$a_{-1} + \sum\nolimits_{i = 0}^r \binom{n + 1}{i + 1} a_i$
+has the required shape (see remark above the lemma).
+\end{proof}
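
\noindent
The computation left to the reader in the proof above comes down to
Pascal's rule $\binom{n + 1}{i + 1} = \binom{n}{i + 1} + \binom{n}{i}$.
Indeed, for $n \gg 0$ we get
$$
a_{-1} + \sum\nolimits_{i = 0}^r \binom{n + 1}{i + 1} a_i
= \sum\nolimits_{j = 0}^{r + 1} \binom{n}{j} b_j,
\quad
b_j = a_{j - 1} + a_j
$$
with the conventions that $a_{-1}$ is as in the proof and $a_{r + 1} = 0$.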
+
+\begin{lemma}
+\label{lemma-graded-module-fg}
+If $M$ is a finitely generated graded $S$-module,
+and if $S$ is finitely generated over $S_0$, then
+each $M_n$ is a finite $S_0$-module.
+\end{lemma}
+
+\begin{proof}
+Suppose the generators of $M$ are $m_i$ and the generators
+of $S$ are $f_i$. By taking homogeneous components we may
+assume that the $m_i$ and the $f_i$ are homogeneous
+and we may assume $f_i \in S_{+}$. In this case it is
+clear that each $M_n$ is generated over $S_0$
+by the ``monomials'' $\prod f_i^{e_i} m_j$ whose
+degree is $n$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-graded-hilbert-polynomial}
+Suppose that $S$ is a Noetherian graded ring
+and $M$ a finite graded $S$-module. Consider the
+function
+$$
+\mathbf{Z} \longrightarrow K'_0(S_0), \quad
+n \longmapsto [M_n]
+$$
+see Lemma \ref{lemma-graded-module-fg}.
+If $S_{+}$ is generated by elements of degree $1$,
+then this function is a numerical polynomial.
+\end{proposition}
+
+\begin{proof}
+We prove this by induction on the minimal number of
+generators of $S_1$. If this number is $0$, then
+$M_n = 0$ for all $n \gg 0$ and the result holds.
+To prove the induction step, let $x\in S_1$
+be one of a minimal set of generators, such that
+the induction hypothesis applies to the
+graded ring $S/(x)$.
+
+\medskip\noindent
+First we show the result holds if $x$ is nilpotent on $M$.
+This we do by induction on the minimal integer $r$ such that
+$x^r M = 0$. If $r = 1$, then $M$ is a module over $S/xS$
+and the result holds (by the other induction hypothesis).
+If $r > 1$, then we can find a short exact sequence
+$0 \to M' \to M \to M'' \to 0$ such that the integers
+$r', r''$ are strictly smaller than $r$. Thus we know
+the result for $M''$ and $M'$. Hence
+we get the result for $M$ because of the relation
+$
+[M_d] = [M'_d] + [M''_d]
+$
+in $K'_0(S_0)$.
+
+\medskip\noindent
+If $x$ is not nilpotent on $M$, let $M' \subset M$ be
+the largest submodule on which $x$ is nilpotent.
+Consider the exact sequence $0 \to M' \to M \to M/M' \to 0$
+we see again it suffices to prove the result for $M/M'$. In other
+words we may assume that multiplication by $x$ is injective.
+
+\medskip\noindent
+Let $\overline{M} = M/xM$. Note that the map $x : M \to M$
+is {\it not} a map of graded $S$-modules, since it does
+not map $M_d$ into $M_d$. Namely, for each $d$ we have the
+following short exact sequence
+$$
+0 \to M_d \xrightarrow{x} M_{d + 1} \to \overline{M}_{d + 1} \to 0
+$$
+This proves that $[M_{d + 1}] - [M_d] = [\overline{M}_{d + 1}]$.
+Hence we win by Lemma \ref{lemma-numerical-polynomial}.
+\end{proof}
+
+\begin{remark}
+\label{remark-period-polynomial}
If $S$ is still Noetherian but $S_{+}$ is not generated by elements of degree $1$,
+then the function associated to a graded $S$-module is a periodic
+polynomial (i.e., it is a numerical polynomial on the
+congruence classes of integers modulo $n$ for some $n$).
+\end{remark}
+
+\begin{example}
+\label{example-hilbert-function}
+Suppose that $S = k[X_1, \ldots, X_d]$.
+By Example \ref{example-K0-field} we may identify
+$K_0(k) = K'_0(k) = \mathbf{Z}$. Hence any finitely
+generated graded $k[X_1, \ldots, X_d]$-module
+gives rise to a numerical polynomial
+$n \mapsto \dim_k(M_n)$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-quotient-smaller-d}
+Let $k$ be a field. Suppose that $I \subset k[X_1, \ldots, X_d]$
+is a nonzero graded ideal. Let $M = k[X_1, \ldots, X_d]/I$.
+Then the numerical polynomial $n \mapsto \dim_k(M_n)$ (see
+Example \ref{example-hilbert-function})
+has degree $ < d - 1$ (or is zero if $d = 1$).
+\end{lemma}
+
+\begin{proof}
+The numerical polynomial associated to the graded module
+$k[X_1, \ldots, X_d]$ is $n \mapsto \binom{n - 1 + d}{d - 1}$.
+For any nonzero homogeneous $f \in I$ of degree $e$
and any degree $n \gg e$ we have $I_n \supset f \cdot k[X_1, \ldots, X_d]_{n-e}$
+and hence $\dim_k(I_n) \geq \binom{n - e - 1 + d}{d - 1}$. Hence
+$\dim_k(M_n) \leq \binom{n - 1 + d}{d - 1} - \binom{n - e - 1 + d}{d - 1}$.
+We win because the last expression
+has degree $ < d - 1$ (or is zero if $d = 1$).
+\end{proof}
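
\noindent
As a sanity check, take $d = 2$ and $I = (X_1)$, so that
$M = k[X_1, X_2]/(X_1) \cong k[X_2]$. Then $\dim_k(M_n) = 1$ for all
$n \geq 0$, a numerical polynomial of degree $0 < d - 1 = 1$, as the
lemma predicts.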
+
+
+
+
+
+
+
+
+\section{Noetherian local rings}
+\label{section-Noetherian-local}
+
+\noindent
+In all of this section $(R, \mathfrak m, \kappa)$ is a Noetherian local ring.
+We develop some theory on Hilbert functions of modules in this section.
+Let $M$ be a finite $R$-module. We define the {\it Hilbert function}
+of $M$ to be the function
+$$
+\varphi_M : n
+\longmapsto
+\text{length}_R(\mathfrak m^nM/{\mathfrak m}^{n + 1}M)
+$$
+defined for all integers $n \geq 0$. Another important invariant is the
+function
+$$
+\chi_M : n
+\longmapsto
+\text{length}_R(M/{\mathfrak m}^{n + 1}M)
+$$
+defined for all integers $n \geq 0$.
+Note that we have by Lemma \ref{lemma-length-additive}
+that
+$$
+\chi_M(n) = \sum\nolimits_{i = 0}^n \varphi_M(i).
+$$
+There is a variant of this construction which uses an ideal of definition.
+
+\begin{definition}
+\label{definition-ideal-definition}
+Let $(R, \mathfrak m)$ be a local Noetherian ring.
+An ideal $I \subset R$ such that $\sqrt{I} = \mathfrak m$ is called
+{\it an ideal of definition of $R$}.
+\end{definition}
+
+\noindent
+Let $I \subset R$ be an ideal of definition.
+Because $R$ is Noetherian this means that
+$\mathfrak m^r \subset I$ for some $r$, see Lemma
+\ref{lemma-Noetherian-power}. Hence any finite $R$-module
+annihilated by a power of $I$ has a finite length, see Lemma
+\ref{lemma-length-finite}.
+Thus it makes sense to define
+$$
+\varphi_{I, M}(n) = \text{length}_R(I^nM/I^{n + 1}M)
+\quad\text{and}\quad
+\chi_{I, M}(n) = \text{length}_R(M/I^{n + 1}M)
+$$
+for all $n \geq 0$. Again we have that
+$$
+\chi_{I, M}(n) = \sum\nolimits_{i = 0}^n \varphi_{I, M}(i).
+$$
+
+\begin{lemma}
+\label{lemma-differ-finite}
+Suppose that $M' \subset M$ are finite $R$-modules
+with finite length quotient. Then there exists a
+constants $c_1, c_2$ such that for all $n \geq c_2$ we have
+$$
+c_1 + \chi_{I, M'}(n - c_2) \leq \chi_{I, M}(n) \leq
+c_1 + \chi_{I, M'}(n)
+$$
+\end{lemma}
+
+\begin{proof}
+Since $M/M'$ has finite length there is a $c_2 \geq 0$ such that
+$I^{c_2}M \subset M'$. Let $c_1 = \text{length}_R(M/M')$.
+For $n \geq c_2$ we have
+\begin{eqnarray*}
+\chi_{I, M}(n)
+& = &
+\text{length}_R(M/I^{n + 1}M) \\
+& = &
+c_1 + \text{length}_R(M'/I^{n + 1}M) \\
+& \leq &
+c_1 + \text{length}_R(M'/I^{n + 1}M') \\
+& = &
+c_1 + \chi_{I, M'}(n)
+\end{eqnarray*}
+On the other hand, since $I^{c_2}M \subset M'$,
+we have $I^nM \subset I^{n - c_2}M'$ for $n \geq c_2$.
+Thus for $n \geq c_2$ we get
+\begin{eqnarray*}
+\chi_{I, M}(n)
+& = &
+\text{length}_R(M/I^{n + 1}M) \\
+& = &
+c_1 + \text{length}_R(M'/I^{n + 1}M) \\
+& \geq &
+c_1 + \text{length}_R(M'/I^{n + 1 - c_2}M') \\
+& = &
+c_1 + \chi_{I, M'}(n - c_2)
+\end{eqnarray*}
+which finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hilbert-ses}
+Suppose that $0 \to M' \to M \to M'' \to 0$
+is a short exact sequence of finite $R$-modules.
+Then there exists a submodule $N \subset M'$ with
+finite colength $l$ and $c \geq 0$ such that
+$$
+\chi_{I, M}(n) = \chi_{I, M''}(n) + \chi_{I, N}(n - c) + l
+$$
+and
+$$
+\varphi_{I, M}(n) = \varphi_{I, M''}(n) + \varphi_{I, N}(n - c)
+$$
+for all $n \geq c$.
+\end{lemma}
+
+\begin{proof}
+Note that $M/I^nM \to M''/I^nM''$ is surjective
+with kernel $M' / M' \cap I^nM$. By the Artin-Rees
+Lemma \ref{lemma-Artin-Rees} there exists a
+constant $c$ such that $M' \cap I^nM =
+I^{n - c}(M' \cap I^cM)$. Denote $N = M' \cap I^cM$.
+Note that $I^c M' \subset N \subset M'$.
+Hence $\text{length}_R(M' / M' \cap I^nM)
+= \text{length}_R(M'/N) + \text{length}_R(N/I^{n - c}N)$ for $n \geq c$.
+From the short exact sequence
+$$
+0 \to M' / M' \cap I^nM \to M/I^nM \to M''/I^nM'' \to 0
+$$
+and additivity of lengths (Lemma \ref{lemma-length-additive})
+we obtain the equality
+$$
+\chi_{I, M}(n - 1)
+=
+\chi_{I, M''}(n - 1)
++
+\chi_{I, N}(n - c - 1)
++
+\text{length}_R(M'/N)
+$$
+for $n \geq c$. We have
+$\varphi_{I, M}(n) = \chi_{I, M}(n) - \chi_{I, M}(n - 1)$
+and similarly for the modules $M''$ and $N$. Hence
+we get $\varphi_{I, M}(n) = \varphi_{I, M''}(n) + \varphi_{I, N}(n-c)$ for
+$n \geq c$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hilbert-change-I}
+Suppose that $I$, $I'$ are two ideals of definition
+for the Noetherian local ring $R$. Let $M$ be a
+finite $R$-module. There exists a constant $a$ such that
+$\chi_{I, M}(n) \leq \chi_{I', M}(an)$ for $n \geq 1$.
+\end{lemma}
+
+\begin{proof}
+There exists an integer $c \geq 1$ such that $(I')^c \subset I$.
+Hence we get a surjection $M/(I')^{c(n + 1)}M \to M/I^{n + 1}M$.
+Whence the result with $a = 2c - 1$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-hilbert-function-polynomial}
+Let $R$ be a Noetherian local ring. Let $M$ be a finite $R$-module.
+Let $I \subset R$ be an ideal of definition.
+The Hilbert function $\varphi_{I, M}$ and the function
+$\chi_{I, M}$ are numerical polynomials.
+\end{proposition}
+
+\begin{proof}
+Consider the graded ring $S = R/I \oplus I/I^2 \oplus I^2/I^3 \oplus
+\ldots = \bigoplus_{d \geq 0} I^d/I^{d + 1}$. Consider the graded
+$S$-module $N = M/IM \oplus IM/I^2M \oplus \ldots =
+\bigoplus_{d \geq 0} I^dM/I^{d + 1}M$. This pair $(S, N)$ satisfies
+the hypotheses of Proposition \ref{proposition-graded-hilbert-polynomial}.
+Hence the result for $\varphi_{I, M}$ follows from that proposition and
+Lemma \ref{lemma-length-K0}. The result for $\chi_{I, M}$ follows
+from this and Lemma \ref{lemma-numerical-polynomial}.
+\end{proof}
+
+\begin{definition}
+\label{definition-hilbert-polynomial}
+Let $R$ be a Noetherian local ring. Let $M$ be a finite $R$-module.
+The {\it Hilbert polynomial} of $M$ over $R$ is the element
+$P(t) \in \mathbf{Q}[t]$ such that $P(n) = \varphi_M(n)$ for $n \gg 0$.
+\end{definition}
+
+\noindent
+By Proposition \ref{proposition-hilbert-function-polynomial}
+we see that the Hilbert polynomial exists.
+
+\begin{lemma}
+\label{lemma-d-independent}
+Let $R$ be a Noetherian local ring. Let $M$ be a finite $R$-module.
+\begin{enumerate}
+\item The degree of the numerical polynomial $\varphi_{I, M}$ is independent
+of the ideal of definition $I$.
+\item The degree of the numerical polynomial $\chi_{I, M}$ is independent
+of the ideal of definition $I$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (2) follows immediately from Lemma \ref{lemma-hilbert-change-I}.
+Part (1) follows from (2) because
+$\varphi_{I, M}(n) = \chi_{I, M}(n) - \chi_{I, M}(n - 1)$
+for $n \geq 1$.
+\end{proof}
+
+\begin{definition}
+\label{definition-d}
+Let $R$ be a local Noetherian ring and $M$ a finite $R$-module.
+We denote {\it $d(M)$} the element of $\{-\infty, 0, 1, 2, \ldots \}$
+defined as follows:
+\begin{enumerate}
+\item If $M = 0$ we set $d(M) = -\infty$,
+\item if $M \not = 0$ then $d(M)$ is the degree of the numerical
+polynomial $\chi_M$.
+\end{enumerate}
+\end{definition}
+
+\noindent
If $\mathfrak m^nM \not = 0$ for all $n$, then
$d(M)$ equals the degree of the Hilbert polynomial of $M$ plus $1$.
+
+\begin{lemma}
+\label{lemma-differ-finite-chi}
+Let $R$ be a Noetherian local ring. Let $I \subset R$ be an ideal
+of definition. Let $M$ be a finite $R$-module
+which does not have finite length. If $M' \subset M$ is a submodule
with finite colength, then $\chi_{I, M} - \chi_{I, M'}$
is a numerical polynomial of degree strictly smaller than the common degree of
$\chi_{I, M}$ and $\chi_{I, M'}$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-differ-finite} by elementary calculus.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hilbert-ses-chi}
+Let $R$ be a Noetherian local ring. Let $I \subset R$ be an ideal of
+definition. Let $0 \to M' \to M \to M'' \to 0$ be a short exact sequence
+of finite $R$-modules. Then
+\begin{enumerate}
+\item if $M'$ does not have finite length, then
+$\chi_{I, M} - \chi_{I, M''} - \chi_{I, M'}$
+is a numerical polynomial of degree $<$ the degree of
+$\chi_{I, M'}$,
+\item $\max\{ \deg(\chi_{I, M'}), \deg(\chi_{I, M''}) \} = \deg(\chi_{I, M})$,
+and
+\item $\max\{d(M'), d(M'')\} = d(M)$,
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We first prove (1). Let $N \subset M'$ be as in Lemma \ref{lemma-hilbert-ses}.
+By Lemma \ref{lemma-differ-finite-chi} the numerical polynomial
+$\chi_{I, M'} - \chi_{I, N}$ has degree $<$ the common degree of
+$\chi_{I, M'}$ and $\chi_{I, N}$. By Lemma \ref{lemma-hilbert-ses}
+the difference
+$$
+\chi_{I, M}(n) - \chi_{I, M''}(n) - \chi_{I, N}(n - c)
+$$
+is constant for $n \gg 0$. By elementary calculus the difference
+$\chi_{I, N}(n) - \chi_{I, N}(n - c)$ has degree $<$ the degree of
+$\chi_{I, N}$ which is bigger than zero (see above). Putting everything
+together we obtain (1).
+
+\medskip\noindent
+Note that the leading coefficients of $\chi_{I, M'}$ and $\chi_{I, M''}$ are
+nonnegative. Thus the degree of $\chi_{I, M'} + \chi_{I, M''}$ is equal
+to the maximum of the degrees. Thus if $M'$ does not have finite
+length, then (2) follows from (1). If $M'$ does have finite length, then
+$I^nM \to I^nM''$ is an isomorphism for all $n \gg 0$ by Artin-Rees
+(Lemma \ref{lemma-Artin-Rees}). Thus $M/I^nM \to M''/I^nM''$ is a
+surjection with kernel $M'$ for $n \gg 0$ and we see that
+$\chi_{I, M}(n) - \chi_{I, M''}(n) = \text{length}(M')$
+for all $n \gg 0$. Thus (2) holds in this case also.
+
+\medskip\noindent
+Proof of (3). This follows from (2) except if one of $M$, $M'$, or $M''$
+is zero. We omit the proof in these special cases.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Dimension}
+\label{section-dimension}
+
+\noindent
+Please compare with
+Topology, Section \ref{topology-section-krull-dimension}.
+
+\begin{definition}
+\label{definition-chain-primes}
+Let $R$ be a ring. A {\it chain of prime ideals} is a sequence
+$\mathfrak p_0 \subset \mathfrak p_1 \subset \ldots \subset \mathfrak p_n$
+of prime ideals of $R$ such that $\mathfrak p_i \not = \mathfrak p_{i + 1}$
+for $i = 0, \ldots, n - 1$. The {\it length} of this chain of prime
+ideals is $n$.
+\end{definition}
+
+\noindent
+Recall that we have an inclusion reversing bijection between prime
+ideals of a ring $R$ and irreducible closed subsets of $\Spec(R)$,
+see Lemma \ref{lemma-irreducible}.
+
+\begin{definition}
+\label{definition-Krull}
+The {\it Krull dimension} of the ring $R$ is the
+Krull dimension of the topological space $\Spec(R)$, see
+Topology, Definition \ref{topology-definition-Krull}.
+In other words it is the supremum of the integers $n\geq 0$
+such that $R$ has a chain of prime ideals
+$$
+\mathfrak p_0
+\subset
+\mathfrak p_1
+\subset
+\ldots
+\subset
\mathfrak p_n, \quad
\mathfrak p_i \not = \mathfrak p_{i + 1}
$$
of length $n$.
+\end{definition}
+
+\begin{definition}
+\label{definition-height}
+The {\it height} of a prime ideal $\mathfrak p$ of
+a ring $R$ is the dimension of the local ring $R_{\mathfrak p}$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-dimension-height}
+The Krull dimension of $R$ is the supremum of the
+heights of its (maximal) primes.
+\end{lemma}
+
+\begin{proof}
+This is so because we can always add a maximal ideal at the end of a chain
+of prime ideals.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-dimension-0}
+A Noetherian ring of dimension $0$ is Artinian.
+Conversely, any Artinian ring is Noetherian of dimension zero.
+\end{lemma}
+
+\begin{proof}
+Assume $R$ is a Noetherian ring of dimension $0$.
+By Lemma \ref{lemma-Noetherian-topology} the space $\Spec(R)$
+is Noetherian. By Topology, Lemma \ref{topology-lemma-Noetherian} we see
+that $\Spec(R)$ has finitely many irreducible
+components, say $\Spec(R) = Z_1 \cup \ldots \cup Z_r$.
+According to Lemma \ref{lemma-irreducible} each $Z_i = V(\mathfrak p_i)$
with $\mathfrak p_i$ a minimal prime. Since the dimension is $0$
+these $\mathfrak p_i$ are also maximal. Thus $\Spec(R)$
+is the discrete topological space with elements $\mathfrak p_i$.
+All elements $f$ of the Jacobson radical $\bigcap \mathfrak p_i$
+are nilpotent since otherwise $R_f$ would not be the zero ring
+and we would have another prime.
+By Lemma \ref{lemma-product-local} $R$ is equal to
+$\prod R_{\mathfrak p_i}$. Since $R_{\mathfrak p_i}$
+is also Noetherian and dimension $0$, the previous arguments
+show that its radical $\mathfrak p_iR_{\mathfrak p_i}$ is locally nilpotent.
+Lemma \ref{lemma-Noetherian-power} gives
+$\mathfrak p_i^nR_{\mathfrak p_i} = 0$ for some $n \geq 1$.
+By Lemma \ref{lemma-length-finite} we conclude that $R_{\mathfrak p_i}$
+has finite length over $R$. Hence we conclude that $R$
+is Artinian by Lemma \ref{lemma-artinian-finite-length}.
+
+\medskip\noindent
+If $R$ is an Artinian ring then by Lemma \ref{lemma-artinian-finite-length}
+it is Noetherian. All of its primes are maximal by a combination
+of Lemmas \ref{lemma-artinian-finite-nr-max},
+\ref{lemma-artinian-radical-nilpotent} and \ref{lemma-product-local}.
+\end{proof}
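
\noindent
For example, $R = k[x]/(x^n)$ is Artinian local: its only prime is
$(x)$, and the chain $0 \subset (x^{n - 1}) \subset \ldots \subset (x)
\subset R$ shows it has length $n$ as a module over itself.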
+
+\noindent
+In the following we will use the invariant $d(-)$ defined
+in Definition \ref{definition-d}. Here is a warm up lemma.
+
+\begin{lemma}
+\label{lemma-dimension-0-d-0}
+Let $R$ be a Noetherian local ring.
+Then $\dim(R) = 0 \Leftrightarrow d(R) = 0$.
+\end{lemma}
+
+\begin{proof}
+This is because $d(R) = 0$ if and only if $R$ has finite
+length as an $R$-module. See Lemma \ref{lemma-artinian-finite-length}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-dimension-zero-ring}
+Let $R$ be a ring. The following are equivalent:
+\begin{enumerate}
+\item $R$ is Artinian,
+\item $R$ is Noetherian and $\dim(R) = 0$,
+\item $R$ has finite length as a module over itself,
+\item $R$ is a finite product of Artinian local rings,
+\item $R$ is Noetherian and $\Spec(R)$ is a
+finite discrete topological space,
+\item $R$ is a finite product of Noetherian local rings
+of dimension $0$,
+\item $R$ is a finite product of Noetherian local rings
+$R_i$ with $d(R_i) = 0$,
+\item $R$ is a finite product of Noetherian local rings
+$R_i$ whose maximal ideals are nilpotent,
+\item $R$ is Noetherian, has finitely many maximal
+ideals and its Jacobson radical ideal is nilpotent, and
+\item $R$ is Noetherian and there are no strict inclusions
+among its primes.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+This is a combination of Lemmas
+\ref{lemma-product-local},
+\ref{lemma-artinian-finite-length},
+\ref{lemma-Noetherian-dimension-0}, and
+\ref{lemma-dimension-0-d-0}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-height-1}
+Let $R$ be a local Noetherian ring.
+The following are equivalent:
+\begin{enumerate}
+\item
+\label{item-dim-1}
+$\dim(R) = 1$,
+\item
+\label{item-d-1}
+$d(R) = 1$,
+\item
+\label{item-Vx}
+there exists an $x \in \mathfrak m$, $x$ not nilpotent
+such that $V(x) = \{\mathfrak m\}$,
+\item
+\label{item-x}
+there exists an $x \in \mathfrak m$, $x$ not nilpotent
+such that $\mathfrak m = \sqrt{(x)}$, and
+\item
+\label{item-ideal-1}
+there exists an ideal of definition generated by $1$ element,
+and no ideal of definition is generated by $0$ elements.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First, assume that $\dim(R) = 1$.
+Let $\mathfrak p_i$ be the minimal primes of $R$.
+Because the dimension is $1$ the only other prime of $R$
+is $\mathfrak m$.
+According to Lemma \ref{lemma-Noetherian-irreducible-components}
+there are finitely many. Hence we can find $x \in \mathfrak m$,
+$x \not \in \mathfrak p_i$, see Lemma \ref{lemma-silly}.
+Thus the only prime containing $x$ is $\mathfrak m$ and
+hence (\ref{item-Vx}).
+
+\medskip\noindent
+If (\ref{item-Vx}) then $\mathfrak m = \sqrt{(x)}$ by
+Lemma \ref{lemma-Zariski-topology}, and hence (\ref{item-x}).
+The converse is clear as well.
The equivalence of (\ref{item-x}) and (\ref{item-ideal-1}) follows
directly from the definitions.
+
+\medskip\noindent
+Assume (\ref{item-ideal-1}).
+Let $I = (x)$ be an ideal of definition.
+Note that $I^n/I^{n + 1}$ is a quotient of $R/I$ via multiplication
+by $x^n$ and hence $\text{length}_R(I^n/I^{n + 1})$ is bounded.
+Thus $d(R) = 0$ or $d(R) = 1$, but $d(R) = 0$ is excluded
+by the assumption that $0$ is not an ideal of definition.
+
+\medskip\noindent
+Assume (\ref{item-d-1}). To get a contradiction, assume there
+exist primes $\mathfrak p \subset \mathfrak q \subset \mathfrak m$,
+with both inclusions strict. Pick some ideal of definition $I \subset R$.
+We will repeatedly use
+Lemma \ref{lemma-hilbert-ses-chi}. First of all
+it implies, via the exact sequence
+$0 \to \mathfrak p \to R \to R/\mathfrak p \to 0$,
+that $d(R/\mathfrak p) \leq 1$. But it clearly cannot
+be zero. Pick $x\in \mathfrak q$, $x\not \in \mathfrak p$.
+Consider the short exact sequence
+$$
+0 \to R/\mathfrak p \to R/\mathfrak p \to R/(xR + \mathfrak p) \to 0.
+$$
+This implies that $\chi_{I, R/\mathfrak p} - \chi_{I, R/\mathfrak p}
+- \chi_{I, R/(xR + \mathfrak p)} = - \chi_{I, R/(xR + \mathfrak p)}$
+has degree $ < 1$. In other words, $d(R/(xR + \mathfrak p)) = 0$,
+and hence $\dim(R/(xR + \mathfrak p)) = 0$, by
+Lemma \ref{lemma-dimension-0-d-0}. But $R/(xR + \mathfrak p)$
+has the distinct primes $\mathfrak q/(xR + \mathfrak p)$ and
+$\mathfrak m/(xR + \mathfrak p)$ which gives the desired contradiction.
+\end{proof}
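
\noindent
A basic example is $R = \mathbf{Z}_p$, the ring of $p$-adic integers,
with $x = p$: here $p$ is not nilpotent, $V(p) = \{p\mathbf{Z}_p\}$ and
$\mathfrak m = (p) = \sqrt{(p)}$, so $\dim(\mathbf{Z}_p) = 1$.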
+
+\begin{proposition}
+\label{proposition-dimension}
+Let $R$ be a local Noetherian ring. Let $d \geq 0$ be an integer.
+The following are equivalent:
+\begin{enumerate}
+\item
+\label{item-dim-d}
+$\dim(R) = d$,
+\item
+\label{item-d-d}
+$d(R) = d$,
+\item
+\label{item-ideal-d}
+there exists an ideal of definition generated by $d$ elements,
+and no ideal of definition is generated by fewer than $d$ elements.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+This proof is really just the same as the proof of Lemma
+\ref{lemma-height-1}. We will prove the proposition by induction
+on $d$. By Lemmas \ref{lemma-dimension-0-d-0} and \ref{lemma-height-1}
+we may assume that $d > 1$. Denote the minimal number of
+generators for an ideal of definition of $R$ by $d'(R)$.
+We will prove the inequalities
+$\dim(R) \geq d'(R) \geq d(R) \geq \dim(R)$,
+and hence they are all equal.
+
+\medskip\noindent
+First, assume that $\dim(R) = d$.
+Let $\mathfrak p_i$ be the minimal primes of $R$.
+According to Lemma \ref{lemma-Noetherian-irreducible-components}
+there are finitely many. Hence we can find $x \in \mathfrak m$,
+$x \not \in \mathfrak p_i$, see Lemma \ref{lemma-silly}.
+Note that every maximal chain of primes starts with some $\mathfrak p_i$,
+hence the dimension of $R/xR$ is at most $d-1$. By induction
+there are $x_2, \ldots, x_d$ which generate an ideal of definition
+in $R/xR$. Hence $R$ has an ideal of definition generated
+by (at most) $d$ elements.
+
+\medskip\noindent
+Assume $d'(R) = d$. Let $I = (x_1, \ldots, x_d)$ be an ideal
+of definition. Note that $I^n/I^{n + 1}$ is a quotient of a direct
sum of $\binom{d + n - 1}{d - 1}$ copies of $R/I$ via multiplication
+by all degree $n$ monomials in $x_1, \ldots, x_d$.
+Hence $\text{length}_R(I^n/I^{n + 1})$ is bounded by a polynomial
+of degree $d-1$. Thus $d(R) \leq d$.
+
+\medskip\noindent
+Assume $d(R) = d$. Consider a chain of primes
+$\mathfrak p \subset \mathfrak q \subset
+\mathfrak q_2 \subset \ldots \subset \mathfrak q_e = \mathfrak m$,
+with all inclusions strict, and $e \geq 2$.
+Pick some ideal of definition $I \subset R$.
+We will repeatedly use
+Lemma \ref{lemma-hilbert-ses-chi}. First of all
+it implies, via the exact sequence
+$0 \to \mathfrak p \to R \to R/\mathfrak p \to 0$,
+that $d(R/\mathfrak p) \leq d$. But it clearly cannot
+be zero. Pick $x\in \mathfrak q$, $x\not \in \mathfrak p$.
+Consider the short exact sequence
+$$
+0 \to R/\mathfrak p \to R/\mathfrak p \to R/(xR + \mathfrak p) \to 0.
+$$
+This implies that $\chi_{I, R/\mathfrak p} - \chi_{I, R/\mathfrak p}
+- \chi_{I, R/(xR + \mathfrak p)} = - \chi_{I, R/(xR + \mathfrak p)}$
+has degree $ < d$. In other words, $d(R/(xR + \mathfrak p)) \leq d - 1$,
+and hence $\dim(R/(xR + \mathfrak p)) \leq d - 1$, by
+induction. Now $R/(xR + \mathfrak p)$ has the chain of prime ideals
+$\mathfrak q/(xR + \mathfrak p) \subset \mathfrak q_2/(xR + \mathfrak p)
+\subset \ldots \subset \mathfrak q_e/(xR + \mathfrak p)$ which gives
+$e - 1 \leq d - 1$. Since we started with an arbitrary chain of
+primes this proves that $\dim(R) \leq d(R)$.
+
+\medskip\noindent
Reading back through the argument, we see that we have proved the
circular chain of inequalities, as desired.
+\end{proof}
+
+\noindent
+Let $(R, \mathfrak m)$ be a Noetherian local ring.
+From the above it is clear that $\mathfrak m$ cannot be
generated by fewer than $\dim(R)$ elements.
+By Nakayama's Lemma \ref{lemma-NAK} the minimal number
+of generators of $\mathfrak m$ equals $\dim_{\kappa(\mathfrak m)}
+\mathfrak m/\mathfrak m^2$. Hence we have the following
+fundamental inequality
+$$
+\dim(R) \leq \dim_{\kappa(\mathfrak m)} \mathfrak m/\mathfrak m^2.
+$$
+It turns out that the rings where equality holds
+have a lot of good properties. They are called
+regular local rings.
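
\noindent
For example, $R = k[[x_1, \ldots, x_d]]$ satisfies $\dim(R) = d =
\dim_{\kappa(\mathfrak m)} \mathfrak m/\mathfrak m^2$, whereas
$R = k[[x, y]]/(x^2)$ has dimension $1$ but
$\dim_{\kappa(\mathfrak m)} \mathfrak m/\mathfrak m^2 = 2$, so the
inequality can be strict.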
+
+\begin{definition}
+\label{definition-regular-local}
+Let $(R, \mathfrak m)$ be a Noetherian local ring of dimension $d$.
+\begin{enumerate}
+\item A {\it system of parameters of $R$} is a sequence of elements
+$x_1, \ldots, x_d \in \mathfrak m$ which generates an ideal of
+definition of $R$,
+\item if there exist $x_1, \ldots, x_d \in \mathfrak m$
+such that $\mathfrak m = (x_1, \ldots, x_d)$ then we call
+$R$ a {\it regular local ring} and $x_1, \ldots, x_d$ a {\it regular
+system of parameters}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+The following lemmas are clear from the proofs of the
+lemmas and proposition above, but we spell them out so we have
+convenient references.
+
+\begin{lemma}
+\label{lemma-minimal-over-1}
+Let $R$ be a Noetherian ring. Let $x \in R$.
+\begin{enumerate}
+\item If $\mathfrak p$ is minimal over $(x)$
+then the height of $\mathfrak p$ is $0$ or $1$.
+\item If $\mathfrak p, \mathfrak q \in \Spec(R)$ and $\mathfrak q$
+is minimal over $(\mathfrak p, x)$, then there is no prime strictly
+between $\mathfrak p$ and $\mathfrak q$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). If $\mathfrak p$ is minimal over $x$, then the only
+prime ideal of $R_\mathfrak p$ containing $x$ is the maximal ideal
+$\mathfrak p R_\mathfrak p$. This is true because the primes of
+$R_\mathfrak p$ correspond $1$-to-$1$ with the primes of $R$ contained
+in $\mathfrak p$, see Lemma \ref{lemma-spec-localization}.
+Hence Lemma \ref{lemma-height-1} shows $\dim(R_\mathfrak p) = 1$
+if $x$ is not nilpotent in $R_\mathfrak p$. Of course, if
+$x$ is nilpotent in $R_\mathfrak p$ the argument gives that
+$\mathfrak pR_\mathfrak p$ is the only prime ideal and we see
+that the height is $0$.
+
+\medskip\noindent
+Proof of (2). By part (1) we see that $\mathfrak q/\mathfrak p$
+is a prime of height $1$ or $0$ in $R/\mathfrak p$. This immediately
+implies there cannot be a prime strictly between $\mathfrak p$
+and $\mathfrak q$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-minimal-over-r}
+Let $R$ be a Noetherian ring. Let $f_1, \ldots, f_r \in R$.
+\begin{enumerate}
+\item If $\mathfrak p$ is minimal over $(f_1, \ldots, f_r)$
+then the height of $\mathfrak p$ is $\leq r$.
+\item If $\mathfrak p, \mathfrak q \in \Spec(R)$ and
+$\mathfrak q$ is minimal over $(\mathfrak p, f_1, \ldots, f_r)$,
+then every chain of primes between $\mathfrak p$ and $\mathfrak q$
+has length at most $r$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). If $\mathfrak p$ is minimal over $f_1, \ldots, f_r$,
+then the only prime ideal of $R_\mathfrak p$ containing $f_1, \ldots, f_r$
+is the maximal ideal $\mathfrak p R_\mathfrak p$. This is true because
+the primes of $R_\mathfrak p$ correspond $1$-to-$1$ with the primes of
+$R$ contained in $\mathfrak p$, see Lemma \ref{lemma-spec-localization}.
+Hence Proposition \ref{proposition-dimension} shows
+$\dim(R_\mathfrak p) \leq r$.
+
+\medskip\noindent
+Proof of (2). By part (1) we see that $\mathfrak q/\mathfrak p$
+is a prime of height $\leq r$. This immediately
+implies the statement about chains of primes between $\mathfrak p$
+and $\mathfrak q$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-one-equation}
+Suppose that $R$ is a Noetherian local ring and $x\in \mathfrak m$ an
+element of its maximal ideal. Then $\dim R \leq \dim R/xR + 1$.
+If $x$ is not contained in any of the minimal primes of $R$
+then equality holds. (For example if $x$ is a nonzerodivisor.)
+\end{lemma}
+
+\begin{proof}
+If $x_1, \ldots, x_{\dim R/xR} \in R$ map to elements of $R/xR$ which
+generate an ideal of definition for $R/xR$, then $x, x_1, \ldots,
+x_{\dim R/xR}$ generate an ideal of definition for $R$. Hence
+the inequality by Proposition \ref{proposition-dimension}.
On the other hand, if $x$ is not contained in any minimal
prime of $R$, then every chain of primes in $R/xR$ gives rise
to a chain in $R$ whose smallest member contains $x$ and hence is
not a minimal prime; extending such a chain downward by a minimal
prime shows $\dim(R) \geq \dim(R/xR) + 1$.
+\end{proof}
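
\noindent
Both cases of the lemma occur. In $R = k[[x, y]]$ the element $x$ is a
nonzerodivisor and $\dim(R/xR) = \dim(k[[y]]) = 1 = \dim(R) - 1$. In
$R = k[[x, y]]/(xy)$ the image of $x$ lies in the minimal prime $(x)$
and $\dim(R/xR) = \dim(k[[y]]) = 1 = \dim(R)$, so equality fails.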
+
+\begin{lemma}
+\label{lemma-elements-generate-ideal-definition}
+Let $(R, \mathfrak m)$ be a Noetherian local ring.
+Suppose $x_1, \ldots, x_d \in \mathfrak m$ generate an
+ideal of definition and $d = \dim(R)$. Then
+$\dim(R/(x_1, \ldots, x_i)) = d - i$ for all $i = 1, \ldots, d$.
+\end{lemma}
+
+\begin{proof}
+Follows either from the proof of Proposition \ref{proposition-dimension},
+or by using induction on $d$ and Lemma \ref{lemma-one-equation}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Applications of dimension theory}
+\label{section-applications-dimension-theory}
+
+\noindent
+We can use the results on dimension to prove certain rings
+have infinite spectra and to produce more Jacobson rings.
+
+\begin{lemma}
+\label{lemma-Noetherian-local-domain-dim-2-infinite-opens}
+Let $R$ be a Noetherian local domain of dimension $\geq 2$.
+A nonempty open subset $U \subset \Spec(R)$ is
+infinite.
+\end{lemma}
+
+\begin{proof}
+To get a contradiction, assume that $U \subset \Spec(R)$ is finite.
+In this case $(0) \in U$ and $\{(0)\}$ is an open subset of $U$ (because
+the complement of $\{(0)\}$ is the union of the closures of the other points).
+Thus we may assume $U = \{(0)\}$.
+Let $\mathfrak m \subset R$ be the maximal ideal.
We can find an $x \in \mathfrak m$, $x \not = 0$ such that
$V(x) \cup U = \Spec(R)$: the complement of $U$ is a closed set
$V(I)$ for some ideal $I$ with $I \not = 0$ (as $(0) \in U$), and any
nonzero $x \in I$ works. In other words we see that
$D(x) = \{(0)\}$. In particular we see
+that $\dim(R/xR) = \dim(R) - 1 \geq 1$, see Lemma \ref{lemma-one-equation}.
+Let $\overline{y}_2, \ldots, \overline{y}_{\dim(R)} \in R/xR$ generate
+an ideal of definition of $R/xR$, see Proposition \ref{proposition-dimension}.
+Choose lifts $y_2, \ldots, y_{\dim(R)} \in R$, so that
+$x, y_2, \ldots, y_{\dim(R)}$ generate an ideal of definition in $R$.
+This implies that $\dim(R/(y_2)) = \dim(R) - 1$ and
+$\dim(R/(y_2, x)) = \dim(R) - 2$, see
+Lemma \ref{lemma-elements-generate-ideal-definition}.
+Hence there exists a prime
+$\mathfrak p$ containing $y_2$ but not $x$. This contradicts
+the fact that $D(x) = \{(0)\}$.
+\end{proof}
+
+\noindent
The ring $k[[t]]$, where $k$ is a field, and the ring $\mathbf{Z}_p$ of
$p$-adic integers are Noetherian rings of dimension $1$ with finitely
many primes. The following lemma shows that $1$ is the largest dimension
for which this can happen.
+
+\begin{lemma}
+\label{lemma-Noetherian-finite-nr-primes}
+A Noetherian ring with finitely many primes has dimension $\leq 1$.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a Noetherian ring with finitely many primes.
+If $R$ is a local domain, then the lemma follows from
+Lemma \ref{lemma-Noetherian-local-domain-dim-2-infinite-opens}.
+If $R$ is a domain, then $R_\mathfrak m$ has dimension $\leq 1$
+for all maximal ideals $\mathfrak m$ by the local case.
+Hence $\dim(R) \leq 1$ by Lemma \ref{lemma-dimension-height}.
+If $R$ is general, then $\dim(R/\mathfrak q) \leq 1$
+for every minimal prime $\mathfrak q$ of $R$.
+Since every prime contains a minimal prime
+(Lemma \ref{lemma-Zariski-topology}), this implies $\dim(R) \leq 1$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-type-algebra-finite-nr-primes}
+Let $S$ be a nonzero finite type algebra over a field $k$.
+Then $\dim(S) = 0$ if and only if $S$ has
+finitely many primes.
+\end{lemma}
+
+\begin{proof}
+Recall that $\Spec(S)$ is sober, Noetherian, and Jacobson, see
+Lemmas \ref{lemma-spec-spectral}, \ref{lemma-Noetherian-topology},
+\ref{lemma-finite-type-field-Jacobson}, and \ref{lemma-jacobson}.
+If it has dimension $0$, then every point defines an
+irreducible component and there are only a finite number
+of irreducible components (Topology, Lemma \ref{topology-lemma-Noetherian}).
+Conversely, if $\Spec(S)$ is finite, then it is discrete
+by Topology, Lemma \ref{topology-lemma-finite-jacobson}
+and hence the dimension is $0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noetherian-dim-1-Jacobson}
+Noetherian Jacobson rings.
+\begin{enumerate}
+\item Any Noetherian domain $R$ of dimension $1$
+with infinitely many primes is Jacobson.
+\item Any Noetherian ring such that every prime
+$\mathfrak p$ is either maximal or contained in
+infinitely many prime ideals is Jacobson.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is a reformulation of Lemma \ref{lemma-pid-jacobson}.
+
+\medskip\noindent
+Let $R$ be a Noetherian ring such that
+every non-maximal prime $\mathfrak p$ is contained
+in infinitely many prime ideals.
+Assume $\Spec(R)$ is not Jacobson to get
+a contradiction.
+By Lemmas \ref{lemma-irreducible}
+and \ref{lemma-Noetherian-topology}
+we see that $\Spec(R)$ is a sober, Noetherian topological space.
+By Topology, Lemma \ref{topology-lemma-non-jacobson-Noetherian-characterize}
+we see that there exists a non-maximal ideal $\mathfrak p \subset R$
+such that $\{\mathfrak p\}$ is a locally closed subset of
+$\Spec(R)$. In other words, $\mathfrak p$ is not maximal
+and $\{\mathfrak p\}$ is an open subset of $V(\mathfrak p)$.
+Consider a prime $\mathfrak q \subset R$ with
+$\mathfrak p \subset \mathfrak q$. Recall that the topology on the spectrum of
+$(R/\mathfrak p)_{\mathfrak q} = R_{\mathfrak q}/\mathfrak pR_{\mathfrak q}$
+is induced from that of $\Spec(R)$, see Lemmas
+\ref{lemma-spec-localization} and \ref{lemma-spec-closed}.
+Hence we see that $\{(0)\}$ is a locally closed subset of
+$\Spec((R/\mathfrak p)_{\mathfrak q})$. By
+Lemma \ref{lemma-Noetherian-local-domain-dim-2-infinite-opens}
+we conclude that $\dim((R/\mathfrak p)_{\mathfrak q}) = 1$.
+Since this holds for every $\mathfrak q \supset \mathfrak p$
+we conclude that $\dim(R/\mathfrak p) = 1$. At this point we use
+the assumption that $\mathfrak p$ is contained in infinitely many
+primes to see that $\Spec(R/\mathfrak p)$ is infinite.
+Hence by part (1) of the lemma we see that
+$V(\mathfrak p) \cong \Spec(R/\mathfrak p)$
+is the closure of its closed points.
+This is the desired contradiction since it means that
+$\{\mathfrak p\} \subset V(\mathfrak p)$ cannot be open.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Support and dimension of modules}
+\label{section-support}
+
+\noindent
+Some basic results on the support and dimension of modules.
+
+\begin{lemma}
+\label{lemma-filter-Noetherian-module}
+Let $R$ be a Noetherian ring, and let $M$ be a finite $R$-module.
+There exists a filtration by $R$-submodules
+$$
+0 = M_0 \subset M_1 \subset \ldots \subset M_n = M
+$$
+such that each quotient $M_i/M_{i-1}$ is isomorphic
+to $R/\mathfrak p_i$ for some prime ideal $\mathfrak p_i$
+of $R$.
+\end{lemma}
+
+\begin{proof}[First proof]
+By Lemma \ref{lemma-trivial-filter-finite-module}
+it suffices to do the case $M = R/I$ for some ideal $I$.
+Consider the set $S$ of ideals $J$ such that the lemma
+does not hold for the module $R/J$, and order it by
+inclusion. To arrive at a
+contradiction, assume that $S$ is not empty. Because
+$R$ is Noetherian, $S$ has a maximal element $J$.
+By definition of $S$, the ideal $J$ cannot be prime.
+Pick $a, b\in R$ such that $ab \in J$, but neither
+$a \in J$ nor $b\in J$. Consider the filtration
+$0 \subset aR/(J \cap aR) \subset R/J$.
+Note that both the submodule $aR/(J \cap aR)$ and the quotient
+module $(R/J)/(aR/(J \cap aR))$ are cyclic modules; write
+them as $R/J'$ and $R/J''$ so we have a short exact sequence
+$0 \to R/J' \to R/J \to R/J'' \to 0$.
+The inclusion $J \subset J'$ is strict
+as $b \in J'$ and the inclusion $J \subset J''$ is strict as $a \in J''$.
+Hence by maximality of $J$, both $R/J'$ and $R/J''$ have a filtration as
+above and hence so does $R/J$. Contradiction.
+\end{proof}
+
+\begin{proof}[Second proof]
+For an $R$-module $M$ we say $P(M)$ holds if there exists a filtration
+as in the statement of the lemma. Observe that $P$ is stable under
+extensions and holds for $0$. By Lemma \ref{lemma-trivial-filter-finite-module}
+it suffices to prove $P(R/I)$ holds for every ideal $I$.
+If not then because $R$ is Noetherian, there is a maximal
+counter example $J$. By Example \ref{example-oka-family-property-modules} and
+Proposition \ref{proposition-oka}
+the ideal $J$ is prime which is a contradiction.
+\end{proof}
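To illustrate the lemma in a small case (an added example, not part of the original development):

```latex
\begin{example}
\label{example-filter-Z-mod-four}
Take $R = \mathbf{Z}$ and $M = \mathbf{Z}/4\mathbf{Z}$. The filtration
$$
0 \subset 2\mathbf{Z}/4\mathbf{Z} \subset \mathbf{Z}/4\mathbf{Z}
$$
has subquotients $2\mathbf{Z}/4\mathbf{Z} \cong \mathbf{Z}/2\mathbf{Z}$
and $(\mathbf{Z}/4\mathbf{Z})/(2\mathbf{Z}/4\mathbf{Z}) \cong
\mathbf{Z}/2\mathbf{Z}$, so $\mathfrak p_1 = \mathfrak p_2 = (2)$.
Note that the same prime may occur several times; here the prime
$(2)$ is minimal in $\text{Supp}(M)$ and occurs
$\text{length}_{\mathbf{Z}_{(2)}} M_{(2)} = 2$ times, in accordance
with Lemma \ref{lemma-filter-minimal-primes-in-support}.
\end{example}
```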
+
+\begin{lemma}
+\label{lemma-filter-primes-in-support}
Let $R$, $M$, $M_i$, $\mathfrak p_i$ be as in
+Lemma \ref{lemma-filter-Noetherian-module}.
+Then $\text{Supp}(M) = \bigcup V(\mathfrak p_i)$
+and in particular $\mathfrak p_i \in \text{Supp}(M)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-support-closed} and
+\ref{lemma-support-quotient}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-point}
+Suppose that $R$ is a Noetherian local ring with
+maximal ideal $\mathfrak m$. Let $M$ be a nonzero finite
+$R$-module. Then $\text{Supp}(M) = \{ \mathfrak m\}$
+if and only if $M$ has finite length over $R$.
+\end{lemma}
+
+\begin{proof}
+Assume that $\text{Supp}(M) = \{ \mathfrak m\}$.
+It suffices to show that all the primes $\mathfrak p_i$
+in the filtration of Lemma \ref{lemma-filter-Noetherian-module}
+are the maximal ideal. This is clear by
+Lemma \ref{lemma-filter-primes-in-support}.
+
+\medskip\noindent
+Suppose that $M$ has finite length over $R$.
+Then $\mathfrak m^n M = 0$ by Lemma \ref{lemma-length-infinite}.
+Since some element of $\mathfrak m$ maps to a unit
+in $R_{\mathfrak p}$ for any prime
+$\mathfrak p \not = \mathfrak m$ in $R$ we see $M_{\mathfrak p} = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-power-ideal-kills-module}
+Let $R$ be a Noetherian ring.
+Let $I \subset R$ be an ideal.
+Let $M$ be a finite $R$-module.
+Then $I^nM = 0$ for some $n \geq 0$ if and only if
+$\text{Supp}(M) \subset V(I)$.
+\end{lemma}
+
+\begin{proof}
+Indeed, $I^nM = 0$ is equivalent to $I^n \subset \text{Ann}(M)$.
+Since $R$ is Noetherian, this is equivalent to
+$I \subset \sqrt{\text{Ann}(M)}$, see
+Lemma \ref{lemma-Noetherian-power}.
+This in turn is equivalent to $V(I) \supset V(\text{Ann}(M))$, see
+Lemma \ref{lemma-Zariski-topology}.
+By Lemma \ref{lemma-support-closed}
+this is equivalent to $V(I) \supset \text{Supp}(M)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-filter-minimal-primes-in-support}
Let $R$, $M$, $M_i$, $\mathfrak p_i$ be as in
+Lemma \ref{lemma-filter-Noetherian-module}.
+The minimal elements of the set $\{\mathfrak p_i\}$
+are the minimal elements of $\text{Supp}(M)$.
+The number of times a minimal prime $\mathfrak p$
+occurs is
+$$
+\#\{i \mid \mathfrak p_i = \mathfrak p\}
+=
+\text{length}_{R_\mathfrak p} M_{\mathfrak p}.
+$$
+\end{lemma}
+
+\begin{proof}
+The first statement follows because
+$\text{Supp}(M) = \bigcup V(\mathfrak p_i)$, see
+Lemma \ref{lemma-filter-primes-in-support}.
+Let $\mathfrak p \in \text{Supp}(M)$ be minimal.
+The support of $M_{\mathfrak p}$ is the set
+consisting of the maximal ideal $\mathfrak p R_{\mathfrak p}$.
+Hence by Lemma \ref{lemma-support-point} the length
+of $M_{\mathfrak p}$ is finite and $> 0$. Next we
+note that $M_{\mathfrak p}$ has a filtration with subquotients
+$
+(R/\mathfrak p_i)_{\mathfrak p}
+=
+R_{\mathfrak p}/{\mathfrak p_i}R_{\mathfrak p}
+$.
+These are zero if $\mathfrak p_i \not \subset \mathfrak p$
+and equal to $\kappa(\mathfrak p)$ if $\mathfrak p_i \subset
+\mathfrak p$ because by minimality of $\mathfrak p$
+we have $\mathfrak p_i = \mathfrak p$ in this case.
+The result follows since $\kappa(\mathfrak p)$ has length $1$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-dimension-d}
+Let $R$ be a Noetherian local ring.
+Let $M$ be a finite $R$-module.
+Then $d(M) = \dim(\text{Supp}(M))$ where $d(M)$ is as in
+Definition \ref{definition-d}.
+\end{lemma}
+
+\begin{proof}
+Let $M_i, \mathfrak p_i$ be as in Lemma \ref{lemma-filter-Noetherian-module}.
+By Lemma \ref{lemma-hilbert-ses-chi} we obtain the equality
+$d(M) = \max \{ d(R/\mathfrak p_i) \}$. By
+Proposition \ref{proposition-dimension} we have
+$d(R/\mathfrak p_i) = \dim(R/\mathfrak p_i)$.
+Trivially $\dim(R/\mathfrak p_i) = \dim V(\mathfrak p_i)$.
+Since all minimal primes of $\text{Supp}(M)$ occur among
+the $\mathfrak p_i$ (Lemma \ref{lemma-filter-minimal-primes-in-support}) we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ses-dimension}
+Let $R$ be a Noetherian ring. Let $0 \to M' \to M \to M'' \to 0$
+be a short exact sequence of finite $R$-modules. Then
+$\max\{\dim(\text{Supp}(M')), \dim(\text{Supp}(M''))\} =
+\dim(\text{Supp}(M))$.
+\end{lemma}
+
+\begin{proof}
+If $R$ is local, this follows immediately from
+Lemmas \ref{lemma-support-dimension-d} and \ref{lemma-hilbert-ses-chi}.
+A more elementary argument, which works also if $R$ is not local,
+is to use that $\text{Supp}(M')$, $\text{Supp}(M'')$, and
+$\text{Supp}(M)$ are closed (Lemma \ref{lemma-support-closed})
+and that $\text{Supp}(M) = \text{Supp}(M') \cup \text{Supp}(M'')$
+(Lemma \ref{lemma-support-quotient}).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Associated primes}
+\label{section-ass}
+
+\noindent
+Here is the standard definition. For non-Noetherian rings and non-finite
+modules it may be more appropriate to use the definition in
+Section \ref{section-weakly-ass}.
+
+\begin{definition}
+\label{definition-associated}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+A prime $\mathfrak p$ of $R$ is {\it associated} to $M$
+if there exists an element $m \in M$ whose annihilator
+is $\mathfrak p$.
+The set of all such primes is denoted $\text{Ass}_R(M)$
+or $\text{Ass}(M)$.
+\end{definition}
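As a concrete illustration, here is a standard example (added; it is not part of the original text):

```latex
\begin{example}
\label{example-associated-primes-plane}
Let $R = k[x, y]$ and $M = R/(x^2, xy)$. The class of $y$ has
annihilator $(x)$, and the class of $x$ has annihilator $(x, y)$, so
$(x), (x, y) \in \text{Ass}(M)$; in fact
$\text{Ass}(M) = \{(x), (x, y)\}$. Note that $(x, y)$ is not a minimal
prime of $\text{Supp}(M) = V(x)$.
\end{example}
```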
+
+\begin{lemma}
+\label{lemma-ass-support}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Then $\text{Ass}(M) \subset \text{Supp}(M)$.
+\end{lemma}
+
+\begin{proof}
+If $m \in M$ has annihilator $\mathfrak p$, then in particular
+no element of $R \setminus \mathfrak p$ annihilates $m$.
+Hence $m$ is a nonzero element of $M_{\mathfrak p}$, i.e.,
+$\mathfrak p \in \text{Supp}(M)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass}
+Let $R$ be a ring. Let $0 \to M' \to M \to M'' \to 0$ be a short exact sequence
+of $R$-modules. Then $\text{Ass}(M') \subset \text{Ass}(M)$ and
+$\text{Ass}(M) \subset \text{Ass}(M') \cup \text{Ass}(M'')$.
+Also $\text{Ass}(M' \oplus M'') = \text{Ass}(M') \cup \text{Ass}(M'')$.
+\end{lemma}
+
+\begin{proof}
+If $m' \in M'$, then the annihilator of $m'$ viewed as an element of $M'$
+is the same as the annihilator of $m'$ viewed as an element of $M$. Hence
+the inclusion $\text{Ass}(M') \subset \text{Ass}(M)$. Let $m \in M$
+be an element whose annihilator is a prime ideal $\mathfrak p$. If there
+exists a $g \in R$, $g \not \in \mathfrak p$ such that $m' = gm \in M'$
+then the annihilator of $m'$ is $\mathfrak p$. If there does not
+exist a $g \in R$, $g \not \in \mathfrak p$ such that $gm \in M'$,
then the annihilator of the image $m'' \in M''$ of $m$ is $\mathfrak p$.
+This proves the inclusion
+$\text{Ass}(M) \subset \text{Ass}(M') \cup \text{Ass}(M'')$.
+We omit the proof of the final statement.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-filter}
+Let $R$ be a ring, and $M$ an $R$-module.
+Suppose there exists a filtration by $R$-submodules
+$$
+0 = M_0 \subset M_1 \subset \ldots \subset M_n = M
+$$
+such that each quotient $M_i/M_{i-1}$ is isomorphic to $R/\mathfrak p_i$
+for some prime ideal $\mathfrak p_i$ of $R$.
+Then $\text{Ass}(M) \subset \{\mathfrak p_1, \ldots, \mathfrak p_n\}$.
+\end{lemma}
+
+\begin{proof}
+By induction on the length $n$ of the filtration $\{ M_i \}$.
+Pick $m \in M$ whose annihilator is a prime $\mathfrak p$.
+If $m \in M_{n-1}$ we are done by induction. If not,
+then $m$ maps to a nonzero element of $M/M_{n-1} \cong
+R/\mathfrak p_n$. Hence we have $\mathfrak p \subset \mathfrak p_n$.
+If equality does not hold, then we can find $f \in \mathfrak p_n$,
+$f \not\in \mathfrak p$. In this case the annihilator of $fm$ is still
+$\mathfrak p$ and $fm \in M_{n-1}$. Thus we win by induction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-ass}
+Let $R$ be a Noetherian ring.
+Let $M$ be a finite $R$-module.
+Then $\text{Ass}(M)$ is finite.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-ass-filter} and
+Lemma \ref{lemma-filter-Noetherian-module}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-minimal-primes-associated-primes}
+Let $R$ be a Noetherian ring.
+Let $M$ be a finite $R$-module.
+The following sets of primes are the same:
+\begin{enumerate}
+\item The minimal primes in the support of $M$.
+\item The minimal primes in $\text{Ass}(M)$.
+\item For any filtration $0 = M_0 \subset M_1 \subset \ldots
+\subset M_{n-1} \subset M_n = M$ with $M_i/M_{i-1} \cong R/\mathfrak p_i$
+the minimal primes of the set $\{\mathfrak p_i\}$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Choose a filtration as in (3).
+In Lemma \ref{lemma-filter-minimal-primes-in-support}
+we have seen that the sets in (1) and (3) are equal.
+
+\medskip\noindent
+Let $\mathfrak p$ be a minimal element of the set $\{\mathfrak p_i\}$.
+Let $i$ be minimal such that $\mathfrak p = \mathfrak p_i$.
+Pick $m \in M_i$, $m \not \in M_{i-1}$. The annihilator of $m$
+is contained in $\mathfrak p_i = \mathfrak p$ and contains
+$\mathfrak p_1 \mathfrak p_2 \ldots \mathfrak p_i$. By our choice of
+$i$ and $\mathfrak p$ we have $\mathfrak p_j \not \subset \mathfrak p$
+for $j < i$ and hence we have
+$\mathfrak p_1 \mathfrak p_2 \ldots \mathfrak p_{i - 1}
+\not \subset \mathfrak p_i$. Pick
+$f \in \mathfrak p_1 \mathfrak p_2 \ldots \mathfrak p_{i - 1}$,
+$f \not \in \mathfrak p$. Then $fm$ has annihilator $\mathfrak p$.
+In this way we see that $\mathfrak p$ is an associated prime of $M$.
+By Lemma \ref{lemma-ass-support} we have $\text{Ass}(M) \subset \text{Supp}(M)$
+and hence $\mathfrak p$ is minimal in $\text{Ass}(M)$.
+Thus the set of primes in (1) is contained in the set of primes of (2).
+
+\medskip\noindent
+Let $\mathfrak p$ be a minimal element of $\text{Ass}(M)$.
+Since $\text{Ass}(M) \subset \text{Supp}(M)$ there is a minimal
+element $\mathfrak q$ of $\text{Supp}(M)$ with
+$\mathfrak q \subset \mathfrak p$. We have just shown that
+$\mathfrak q \in \text{Ass}(M)$. Hence $\mathfrak q = \mathfrak p$
+by minimality of $\mathfrak p$. Thus the set of primes in (2) is
+contained in the set of primes of (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-zero}
+\begin{slogan}
+Over a Noetherian ring each nonzero module has an associated prime.
+\end{slogan}
+Let $R$ be a Noetherian ring. Let $M$ be an $R$-module.
+Then
+$$
+M = (0) \Leftrightarrow \text{Ass}(M) = \emptyset.
+$$
+\end{lemma}
+
+\begin{proof}
+If $M = (0)$, then $\text{Ass}(M) = \emptyset$ by definition.
+If $M \not = 0$, pick any nonzero finitely generated submodule
+$M' \subset M$, for example a submodule generated by a single nonzero
+element. By
+Lemma \ref{lemma-support-zero}
+we see that $\text{Supp}(M')$ is nonempty. By
+Proposition \ref{proposition-minimal-primes-associated-primes}
+this implies that $\text{Ass}(M')$ is nonempty.
+By
+Lemma \ref{lemma-ass}
+this implies $\text{Ass}(M) \not = \emptyset$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-minimal-prime-support}
+Let $R$ be a Noetherian ring.
+Let $M$ be an $R$-module.
+Any $\mathfrak p \in \text{Supp}(M)$ which is minimal among the elements
+of $\text{Supp}(M)$ is an element of $\text{Ass}(M)$.
+\end{lemma}
+
+\begin{proof}
+If $M$ is a finite $R$-module, then this is a consequence of
+Proposition \ref{proposition-minimal-primes-associated-primes}.
+In general write $M = \bigcup M_\lambda$ as the union of its
+finite submodules, and use that
+$\text{Supp}(M) = \bigcup \text{Supp}(M_\lambda)$
+and
+$\text{Ass}(M) = \bigcup \text{Ass}(M_\lambda)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-zero-divisors}
+Let $R$ be a Noetherian ring.
+Let $M$ be an $R$-module.
+The union $\bigcup_{\mathfrak q \in \text{Ass}(M)} \mathfrak q$
+is the set of elements of $R$ which are zerodivisors on $M$.
+\end{lemma}
+
+\begin{proof}
+Any element in any associated prime clearly is a zerodivisor on $M$.
+Conversely, suppose $x \in R$ is a zerodivisor on $M$.
+Consider the submodule $N = \{m \in M \mid xm = 0\}$.
+Since $N$ is not zero it has an associated prime $\mathfrak q$ by
+Lemma \ref{lemma-ass-zero}.
+Then $x \in \mathfrak q$ and $\mathfrak q$
+is an associated prime of $M$ by
+Lemma \ref{lemma-ass}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-one-equation-module}
Let $R$ be a Noetherian local ring, $M$ a finite $R$-module, and
+$f \in \mathfrak m$ an element of the maximal ideal of $R$. Then
+$$
+\dim(\text{Supp}(M/fM)) \leq
+\dim(\text{Supp}(M)) \leq
+\dim(\text{Supp}(M/fM)) + 1
+$$
+If $f$ is not in any of the minimal primes of the support of $M$
+(for example if $f$ is a nonzerodivisor on $M$), then equality
+holds for the right inequality.
+\end{lemma}
+
+\begin{proof}
+(The parenthetical statement follows from Lemma \ref{lemma-ass-zero-divisors}.)
+The first inequality follows from $\text{Supp}(M/fM) \subset \text{Supp}(M)$,
+see Lemma \ref{lemma-support-quotient}. For the second inequality, note
+that $\text{Supp}(M/fM) = \text{Supp}(M) \cap V(f)$, see
+Lemma \ref{lemma-support-quotient}. It follows, for example by
+Lemma \ref{lemma-filter-primes-in-support} and elementary properties
+of dimension, that it suffices to show
+$\dim V(\mathfrak p) \leq \dim (V(\mathfrak p) \cap V(f)) + 1$
+for primes $\mathfrak p$ of $R$. This is a consequence of
+Lemma \ref{lemma-one-equation}.
+Finally, if $f$ is not contained in any minimal
+prime of the support of $M$, then the chains of primes in
+$\text{Supp}(M/fM)$ all give
+rise to chains in $\text{Supp}(M)$ which are at least one step away
+from being maximal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-functorial}
+Let $\varphi : R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Then $\Spec(\varphi)(\text{Ass}_S(M)) \subset \text{Ass}_R(M)$.
+\end{lemma}
+
+\begin{proof}
+If $\mathfrak q \in \text{Ass}_S(M)$, then there exists an $m$ in $M$
+such that the annihilator of $m$ in $S$ is $\mathfrak q$. Then the annihilator
+of $m$ in $R$ is $\mathfrak q \cap R$.
+\end{proof}
+
+\begin{remark}
+\label{remark-ass-reverse-functorial}
+Let $\varphi : R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Then it is not always the case that
+$\Spec(\varphi)(\text{Ass}_S(M)) \supset \text{Ass}_R(M)$.
For example, consider the ring map
$R = k \to S = k[x_1, x_2, x_3, \ldots]/(x_i^2)$ and $M = S$.
Then $\text{Ass}_R(M) = \{(0)\}$ is not empty, but $\text{Ass}_S(S)$
is empty: the only prime of $S$ is
$\mathfrak m = (x_1, x_2, x_3, \ldots)$, and no element of $S$ has
annihilator $\mathfrak m$, since any given element of $S$ involves
only finitely many of the variables $x_i$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-ass-functorial-Noetherian}
+Let $\varphi : R \to S$ be a ring map.
+Let $M$ be an $S$-module. If $S$ is Noetherian, then
+$\Spec(\varphi)(\text{Ass}_S(M)) = \text{Ass}_R(M)$.
+\end{lemma}
+
+\begin{proof}
+We have already seen in
+Lemma \ref{lemma-ass-functorial}
+that
+$\Spec(\varphi)(\text{Ass}_S(M)) \subset \text{Ass}_R(M)$.
+For the converse, choose a prime $\mathfrak p \in \text{Ass}_R(M)$.
+Let $m \in M$ be an element such that the annihilator of $m$ in $R$
+is $\mathfrak p$. Let $I = \{g \in S \mid gm = 0\}$ be the annihilator
of $m$ in $S$. Then the map $R/\mathfrak p \to S/I$ is injective.
+Combining Lemmas \ref{lemma-injective-minimal-primes-in-image} and
+\ref{lemma-minimal-prime-image-minimal-prime} we see that
+there is a prime $\mathfrak q \subset S$ minimal over $I$
+mapping to $\mathfrak p$. By
+Proposition \ref{proposition-minimal-primes-associated-primes}
+we see that $\mathfrak q$ is an associated prime of $S/I$, hence
+$\mathfrak q$ is an associated prime of $M$ by
+Lemma \ref{lemma-ass}
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-quotient-ring}
+Let $R$ be a ring.
+Let $I$ be an ideal.
+Let $M$ be an $R/I$-module.
+Via the canonical injection
+$\Spec(R/I) \to \Spec(R)$
+we have $\text{Ass}_{R/I}(M) = \text{Ass}_R(M)$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-associated-primes-localize}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+Let $\mathfrak p \subset R$ be a prime.
+\begin{enumerate}
+\item If $\mathfrak p \in \text{Ass}(M)$ then
+$\mathfrak pR_{\mathfrak p} \in \text{Ass}(M_{\mathfrak p})$.
+\item If $\mathfrak p$ is finitely generated then the converse holds
+as well.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $\mathfrak p \in \text{Ass}(M)$ there exists an element $m \in M$
+whose annihilator is $\mathfrak p$. As localization is exact
+(Proposition \ref{proposition-localization-exact})
+we see that the annihilator of $m/1$ in
+$M_{\mathfrak p}$ is $\mathfrak pR_{\mathfrak p}$ hence (1) holds.
+Assume $\mathfrak pR_{\mathfrak p} \in \text{Ass}(M_{\mathfrak p})$
+and $\mathfrak p = (f_1, \ldots, f_n)$. Let $m/g$ be an element of
+$M_{\mathfrak p}$ whose annihilator is $\mathfrak pR_{\mathfrak p}$.
+This implies that the annihilator of $m$ is contained in $\mathfrak p$.
+As $f_i m/g = 0$ in $M_{\mathfrak p}$ we see there exists a
+$g_i \in R$, $g_i \not \in \mathfrak p$ such that $g_i f_i m = 0$ in $M$.
+Combined we see the annihilator of $g_1\ldots g_nm$ is $\mathfrak p$. Hence
+$\mathfrak p \in \text{Ass}(M)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-ass}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $S \subset R$ be a multiplicative subset.
+Via the canonical injection $\Spec(S^{-1}R) \to \Spec(R)$
+we have
+\begin{enumerate}
+\item $\text{Ass}_R(S^{-1}M) = \text{Ass}_{S^{-1}R}(S^{-1}M)$,
+\item
+$\text{Ass}_R(M) \cap \Spec(S^{-1}R) \subset \text{Ass}_R(S^{-1}M)$, and
+\item if $R$ is Noetherian this inclusion is an equality.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first equality follows, since if $m \in S^{-1}M$, then the annihilator
+of $m$ in $R$ is the intersection of the annihilator of $m$ in $S^{-1}R$
+with $R$.
+The displayed inclusion and equality in the Noetherian case follows from
+Lemma \ref{lemma-associated-primes-localize}
since for a prime $\mathfrak p \subset R$ with
$S \cap \mathfrak p = \emptyset$ we have
+$M_{\mathfrak p} = (S^{-1}M)_{S^{-1}\mathfrak p}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-ass-nonzero-divisors}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $S \subset R$ be a multiplicative subset.
+Assume that every $s \in S$ is a nonzerodivisor on $M$.
+Then
+$$
+\text{Ass}_R(M) = \text{Ass}_R(S^{-1}M).
+$$
+\end{lemma}
+
+\begin{proof}
+As $M \subset S^{-1}M$ by assumption we get the inclusion
+$\text{Ass}(M) \subset \text{Ass}(S^{-1}M)$ from
+Lemma \ref{lemma-ass}.
+Conversely, suppose that $n/s \in S^{-1}M$ is an element whose
+annihilator is a prime ideal $\mathfrak p$. Then the annihilator
+of $n \in M$ is also $\mathfrak p$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ideal-nonzerodivisor}
+Let $R$ be a Noetherian local ring with
+maximal ideal $\mathfrak m$. Let $I \subset \mathfrak m$
+be an ideal. Let $M$ be a finite $R$-module.
+The following are equivalent:
+\begin{enumerate}
+\item There exists an $x \in I$ which is not a zerodivisor on $M$.
+\item We have $I \not \subset \mathfrak q$ for all
+$\mathfrak q \in \text{Ass}(M)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If there exists a nonzerodivisor $x$ in $I$,
+then $x$ clearly cannot be in any associated
+prime of $M$. Conversely, suppose $I \not \subset \mathfrak q$
+for all $\mathfrak q \in \text{Ass}(M)$. In this case we can
+choose $x \in I$, $x \not \in \mathfrak q$ for all
+$\mathfrak q \in \text{Ass}(M)$ by Lemmas
+\ref{lemma-finite-ass} and \ref{lemma-silly}.
+By Lemma \ref{lemma-ass-zero-divisors} the element $x$
+is not a zerodivisor on $M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-zero-at-ass-zero}
+Let $R$ be a ring. Let $M$ be an $R$-module. If $R$ is Noetherian
+the map
+$$
+M
+\longrightarrow
+\prod\nolimits_{\mathfrak p \in \text{Ass}(M)} M_{\mathfrak p}
+$$
+is injective.
+\end{lemma}
+
+\begin{proof}
+Let $x \in M$ be an element of the kernel of the map.
+Then if $\mathfrak p$ is an associated prime of $Rx \subset M$
+we see on the one hand that $\mathfrak p \in \text{Ass}(M)$
+(Lemma \ref{lemma-ass}) and
+on the other hand that $(Rx)_{\mathfrak p} \subset M_{\mathfrak p}$
+is not zero. This contradiction shows that $\text{Ass}(Rx) = \emptyset$.
+Hence $Rx = 0$ by
+Lemma \ref{lemma-ass-zero}.
+\end{proof}
+
+
+\section{Symbolic powers}
+\label{section-symbolic-power}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-symbolic-power}
+Let $R$ be a ring. Let $\mathfrak p$ be a prime ideal. For $n \geq 0$ the
+$n$th {\it symbolic power} of $\mathfrak p$ is the ideal
+$\mathfrak p^{(n)} = \Ker(R \to R_\mathfrak p/\mathfrak p^nR_\mathfrak p)$.
+\end{definition}
+
+\noindent
+Note that $\mathfrak p^n \subset \mathfrak p^{(n)}$ but equality does
+not always hold.
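A classical example where the inclusion is strict (added for illustration):

```latex
\begin{example}
\label{example-symbolic-power-strict}
Let $R = k[x, y, z]/(xy - z^2)$ and $\mathfrak p = (x, z)$. Then
$\mathfrak p$ is prime as $R/\mathfrak p \cong k[y]$. Since
$xy = z^2 \in \mathfrak p^2$ and $y \not\in \mathfrak p$, the element
$x$ lies in the kernel of
$R \to R_\mathfrak p/\mathfrak p^2R_\mathfrak p$, i.e.,
$x \in \mathfrak p^{(2)}$. On the other hand
$x \not\in \mathfrak p^2$, as $\mathfrak p^2$ is generated in degrees
$\geq 2$ for the standard grading on $R$. Hence
$\mathfrak p^2 \not = \mathfrak p^{(2)}$.
\end{example}
```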
+
+\begin{lemma}
+\label{lemma-symbolic-power-associated}
+Let $R$ be a Noetherian ring.
+Let $\mathfrak p$ be a prime ideal.
+Let $n > 0$. Then $\text{Ass}(R/\mathfrak p^{(n)}) = \{\mathfrak p\}$.
+\end{lemma}
+
+\begin{proof}
+If $\mathfrak q$ is an associated prime of $R/\mathfrak p^{(n)}$
+then clearly $\mathfrak p \subset \mathfrak q$.
+On the other hand, any element $x \in R$, $x \not \in \mathfrak p$
+is a nonzerodivisor on $R/\mathfrak p^{(n)}$.
+Namely, if $y \in R$ and
+$xy \in \mathfrak p^{(n)} = R \cap \mathfrak p^nR_{\mathfrak p}$
+then $y \in \mathfrak p^nR_{\mathfrak p}$, hence $y \in \mathfrak p^{(n)}$.
+Hence the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-symbolic-power-flat-extension}
Let $R \to S$ be a flat ring map. Let $\mathfrak p \subset R$ be a prime
+such that $\mathfrak q = \mathfrak p S$ is a prime of $S$.
+Then $\mathfrak p^{(n)} S = \mathfrak q^{(n)}$.
+\end{lemma}
+
+\begin{proof}
+Since
+$\mathfrak p^{(n)} = \Ker(R \to R_\mathfrak p/\mathfrak p^nR_\mathfrak p)$
+we see using flatness that $\mathfrak p^{(n)} S$ is the kernel of the map
+$S \to S_\mathfrak p/\mathfrak p^nS_\mathfrak p$. On the other hand
+$\mathfrak q^{(n)}$ is the kernel of the map
+$S \to S_\mathfrak q/\mathfrak q^nS_\mathfrak q =
+S_\mathfrak q/\mathfrak p^nS_\mathfrak q$. Hence it suffices
+to show that
+$$
+S_\mathfrak p/\mathfrak p^nS_\mathfrak p
+\longrightarrow
+S_\mathfrak q/\mathfrak p^nS_\mathfrak q
+$$
+is injective. Observe that the right hand module is the localization
+of the left hand module by elements $f \in S$, $f \not \in \mathfrak q$.
+Thus it suffices to show these elements are nonzerodivisors on
+$S_\mathfrak p/\mathfrak p^nS_\mathfrak p$. By flatness, the module
+$S_\mathfrak p/\mathfrak p^nS_\mathfrak p$ has a finite filtration whose
+subquotients are
+$$
+\mathfrak p^iS_\mathfrak p/\mathfrak p^{i + 1}S_\mathfrak p
+\cong \mathfrak p^iR_\mathfrak p/\mathfrak p^{i + 1}R_\mathfrak p
+\otimes_{R_\mathfrak p} S_\mathfrak p \cong
+V \otimes_{\kappa(\mathfrak p)} (S/\mathfrak q)_\mathfrak p
+$$
where $V$ is a $\kappa(\mathfrak p)$ vector space. As
$f \not\in \mathfrak q$ and $S/\mathfrak q$ is a domain, $f$ acts as a
nonzerodivisor on each of these subquotients, as desired.
+\end{proof}
+
+
+
+\section{Relative assassin}
+\label{section-relative-assassin}
+
+\noindent
+Discussion of relative assassins. Let $R \to S$ be a ring map.
+Let $N$ be an $S$-module. In this situation we can introduce the following
+sets of primes $\mathfrak q$ of $S$:
+\begin{enumerate}
+\item $A$: with $\mathfrak p = R \cap \mathfrak q$ we have that
+$\mathfrak q \in \text{Ass}_S(N \otimes_R \kappa(\mathfrak p))$,
+\item $A'$: with $\mathfrak p = R \cap \mathfrak q$ we have that
+$\mathfrak q$ is in the image of
+$\text{Ass}_{S \otimes \kappa(\mathfrak p)}(N \otimes_R \kappa(\mathfrak p))$
+under the canonical map
+$\Spec(S \otimes_R \kappa(\mathfrak p)) \to \Spec(S)$,
+\item $A_{fin}$: with $\mathfrak p = R \cap \mathfrak q$ we have that
+$\mathfrak q \in \text{Ass}_S(N/\mathfrak pN)$,
+\item $A'_{fin}$: for some prime $\mathfrak p' \subset R$ we have
+$\mathfrak q \in \text{Ass}_S(N/\mathfrak p'N)$,
+\item $B$: for some $R$-module $M$ we have
+$\mathfrak q \in \text{Ass}_S(N \otimes_R M)$, and
+\item $B_{fin}$: for some finite $R$-module $M$ we have
+$\mathfrak q \in \text{Ass}_S(N \otimes_R M)$.
+\end{enumerate}
Let us determine some of the relations between these sets.
+
+\begin{lemma}
+\label{lemma-compare-relative-assassins}
+Let $R \to S$ be a ring map. Let $N$ be an $S$-module.
Let $A$, $A'$, $A_{fin}$, $A'_{fin}$, $B$, and $B_{fin}$ be the
subsets of $\Spec(S)$ introduced above.
+\begin{enumerate}
+\item We always have $A = A'$.
+\item We always have $A_{fin} \subset A$,
+$B_{fin} \subset B$, $A_{fin} \subset A'_{fin} \subset B_{fin}$
+and $A \subset B$.
+\item If $S$ is Noetherian, then $A = A_{fin}$ and $B = B_{fin}$.
+\item If $N$ is flat over $R$, then $A = A_{fin} = A'_{fin}$ and $B = B_{fin}$.
+\item If $R$ is Noetherian and $N$ is flat over $R$, then all of the sets
+are equal, i.e., $A = A' = A_{fin} = A'_{fin} = B = B_{fin}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Some of the arguments in the proof will be repeated in the proofs of
+later lemmas which are more precise than this one (because they deal
+with a given module $M$ or a given prime $\mathfrak p$ and not with
+the collection of all of them).
+
+\medskip\noindent
+Proof of (1). Let $\mathfrak p$ be a prime of $R$. Then we have
+$$
+\text{Ass}_S(N \otimes_R \kappa(\mathfrak p)) =
+\text{Ass}_{S/\mathfrak pS}(N \otimes_R \kappa(\mathfrak p)) =
+\text{Ass}_{S \otimes_R \kappa(\mathfrak p)}(N \otimes_R \kappa(\mathfrak p))
+$$
+the first equality by
+Lemma \ref{lemma-ass-quotient-ring}
+and the second by
Lemma \ref{lemma-localize-ass} part (1). This proves that $A = A'$.
+The inclusion $A_{fin} \subset A'_{fin}$ is clear.
+
+\medskip\noindent
+Proof of (2). Each of the inclusions is immediate from the definitions
+except perhaps $A_{fin} \subset A$ which follows from
+Lemma \ref{lemma-localize-ass}
+and the fact that we require $\mathfrak p = R \cap \mathfrak q$ in
+the formulation of $A_{fin}$.
+
+\medskip\noindent
+Proof of (3). The equality $A = A_{fin}$ follows from
+Lemma \ref{lemma-localize-ass} part (3)
+if $S$ is Noetherian. Let $\mathfrak q = (g_1, \ldots, g_m)$ be a finitely
+generated prime ideal of $S$.
+Say $z \in N \otimes_R M$ is an element whose annihilator is $\mathfrak q$.
+We may pick a finite submodule $M' \subset M$ such that $z$ is the
+image of $z' \in N \otimes_R M'$. Then
+$\text{Ann}_S(z') \subset \mathfrak q = \text{Ann}_S(z)$.
+Since $N \otimes_R -$ commutes with colimits and since $M$ is the
+directed colimit of finite $R$-modules we can find $M' \subset M'' \subset M$
+such that the image $z'' \in N \otimes_R M''$ is annihilated by
+$g_1, \ldots, g_m$. Hence $\text{Ann}_S(z'') = \mathfrak q$. This proves
+that $B = B_{fin}$ if $S$ is Noetherian.
+
+\medskip\noindent
+Proof of (4). If $N$ is flat, then the functor $N \otimes_R -$ is exact.
+In particular, if $M' \subset M$, then $N \otimes_R M' \subset N \otimes_R M$.
+Hence if $z \in N \otimes_R M$ is an element whose annihilator
+$\mathfrak q = \text{Ann}_S(z)$ is a prime, then we can pick any
+finite $R$-submodule $M' \subset M$ such that $z \in N \otimes_R M'$
+and we see that the annihilator of $z$ as an element of $N \otimes_R M'$
+is equal to $\mathfrak q$. Hence $B = B_{fin}$. Let $\mathfrak p'$ be a
+prime of $R$ and let $\mathfrak q$ be a prime of $S$ which is
+an associated prime of $N/\mathfrak p'N$. This implies that
+$\mathfrak p'S \subset \mathfrak q$. As $N$ is flat over $R$ we
+see that $N/\mathfrak p'N$ is flat over the integral domain $R/\mathfrak p'$.
+Hence every nonzero element of $R/\mathfrak p'$ is a nonzerodivisor on
$N/\mathfrak p'N$. Hence none of these elements can map to an element of
+$\mathfrak q$ and we conclude that $\mathfrak p' = R \cap \mathfrak q$.
+Hence $A_{fin} = A'_{fin}$. Finally, by
+Lemma \ref{lemma-localize-ass-nonzero-divisors}
+we see that
+$\text{Ass}_S(N/\mathfrak p'N) =
+\text{Ass}_S(N \otimes_R \kappa(\mathfrak p'))$, i.e., $A'_{fin} = A$.
+
+\medskip\noindent
+Proof of (5). We only need to prove $A'_{fin} = B_{fin}$ as the other
+equalities have been proved in (4). To see this let $M$ be a finite
+$R$-module. By
+Lemma \ref{lemma-filter-Noetherian-module}
+there exists a filtration by $R$-submodules
+$$
+0 = M_0 \subset M_1 \subset \ldots \subset M_n = M
+$$
+such that each quotient $M_i/M_{i-1}$ is isomorphic
+to $R/\mathfrak p_i$ for some prime ideal $\mathfrak p_i$
+of $R$. Since $N$ is flat we obtain a filtration by $S$-submodules
+$$
+0 = N \otimes_R M_0 \subset N \otimes_R M_1 \subset \ldots \subset
+N \otimes_R M_n = N \otimes_R M
+$$
+such that each subquotient is isomorphic to $N/\mathfrak p_iN$. By
+Lemma \ref{lemma-ass}
+we conclude that
+$\text{Ass}_S(N \otimes_R M) \subset \bigcup \text{Ass}_S(N/\mathfrak p_iN)$.
+Hence we see that $B_{fin} \subset A'_{fin}$. Since the other inclusion is part
+of (2) we win.
+\end{proof}
+
+\noindent
+We define the relative assassin of $N$ over $S/R$ to be the
+set $A = A'$ above. As a motivation we point out that it depends
+only on the fibre modules $N \otimes_R \kappa(\mathfrak p)$
+over the fibre rings. As in the case of the assassin of a module we
+warn the reader that this notion makes most sense when the fibre
+rings $S \otimes_R \kappa(\mathfrak p)$ are Noetherian, for example
+if $R \to S$ is of finite type.
+
+\begin{definition}
+\label{definition-relative-assassin}
+Let $R \to S$ be a ring map. Let $N$ be an $S$-module.
+The {\it relative assassin of $N$ over $S/R$} is the set
+$$
+\text{Ass}_{S/R}(N)
+=
+\{ \mathfrak q \subset S \mid
+\mathfrak q \in \text{Ass}_S(N \otimes_R \kappa(\mathfrak p))
+\text{ with }\mathfrak p = R \cap \mathfrak q\}.
+$$
+This is the set named $A$ in
+Lemma \ref{lemma-compare-relative-assassins}.
+\end{definition}
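\noindent
As an easy sanity check for this definition (the following example and its
label are added here for illustration; the verification is immediate from
the definitions), consider the case of a field base.

\begin{example}
\label{example-relative-assassin-over-field}
Let $k$ be a field, let $k \to S$ be a ring map, and let $N$ be an
$S$-module. The only prime of $k$ is $\mathfrak p = (0)$ and
$\kappa(\mathfrak p) = k$, hence
$$
\text{Ass}_{S/k}(N) = \text{Ass}_S(N \otimes_k k) = \text{Ass}_S(N).
$$
In other words, over a field the relative assassin is the usual assassin.
\end{example}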
+
+\noindent
+The spirit of the next few results is that they are about the relative
+assassin, even though this may not be apparent.
+
+\begin{lemma}
+\label{lemma-bourbaki}
+Let $R \to S$ be a ring map.
+Let $M$ be an $R$-module, and let $N$ be an $S$-module.
+If $N$ is flat as $R$-module, then
+$$
+\text{Ass}_S(M \otimes_R N)
+\supset
+\bigcup\nolimits_{\mathfrak p \in \text{Ass}_R(M)} \text{Ass}_S(N/\mathfrak pN)
+$$
+and if $R$ is Noetherian then we have equality.
+\end{lemma}
+
+\begin{proof}
+If $\mathfrak p \in \text{Ass}_R(M)$ then there exists an injection
+$R/\mathfrak p \to M$. As $N$ is flat over $R$ we obtain an injection
+$R/\mathfrak p \otimes_R N \to M \otimes_R N$. Since
+$R/\mathfrak p \otimes_R N = N/\mathfrak pN$ we conclude that
+$\text{Ass}_S(N/\mathfrak pN) \subset \text{Ass}_S(M \otimes_R N)$, see
+Lemma \ref{lemma-ass}. Hence the right hand side is
+contained in the left hand side.
+
+\medskip\noindent
+Write $M = \bigcup M_\lambda$ as the union of its finitely generated
+$R$-submodules. Then also $N \otimes_R M = \bigcup N \otimes_R M_\lambda$
+(as $N$ is $R$-flat). By definition of associated primes we see that
+$\text{Ass}_S(N \otimes_R M) = \bigcup \text{Ass}_S(N \otimes_R M_\lambda)$
+and $\text{Ass}_R(M) = \bigcup \text{Ass}(M_\lambda)$. Hence we may assume
+$M$ is finitely generated.
+
+\medskip\noindent
+Let $\mathfrak q \in \text{Ass}_S(M \otimes_R N)$, and assume $R$ is
+Noetherian and $M$
+is a finite $R$-module. To finish the proof we have to show that
+$\mathfrak q$ is an element of the right hand side. First we observe that
+$\mathfrak qS_{\mathfrak q} \in
+\text{Ass}_{S_{\mathfrak q}}((M \otimes_R N)_{\mathfrak q})$,
+see Lemma \ref{lemma-associated-primes-localize}.
+Let $\mathfrak p$ be the corresponding prime of $R$.
+Note that
+$$
+(M \otimes_R N)_{\mathfrak q} = M \otimes_R N_{\mathfrak q}
= M_{\mathfrak p} \otimes_{R_{\mathfrak p}} N_{\mathfrak q}.
+$$
+If
+$\mathfrak pR_{\mathfrak p} \not \in
+\text{Ass}_{R_{\mathfrak p}}(M_{\mathfrak p})$
+then there exists an element $x \in \mathfrak pR_{\mathfrak p}$ which
is a nonzerodivisor on $M_{\mathfrak p}$ (see
+Lemma \ref{lemma-ideal-nonzerodivisor}). Since
+$N_{\mathfrak q}$ is flat over $R_{\mathfrak p}$ we see that
+the image of $x$ in $\mathfrak qS_{\mathfrak q}$ is a nonzerodivisor on
+$(M \otimes_R N)_{\mathfrak q}$. This is a contradiction
+with the assumption that
$\mathfrak qS_{\mathfrak q} \in
\text{Ass}_{S_{\mathfrak q}}((M \otimes_R N)_{\mathfrak q})$.
+Hence we conclude that $\mathfrak p$ is one of the associated
+primes of $M$.
+
+\medskip\noindent
+Continuing the argument we choose a filtration
+$$
+0 = M_0 \subset M_1 \subset \ldots \subset M_n = M
+$$
+such that each quotient $M_i/M_{i-1}$ is isomorphic
+to $R/\mathfrak p_i$ for some prime ideal $\mathfrak p_i$
+of $R$, see Lemma \ref{lemma-filter-Noetherian-module}.
+(By Lemma \ref{lemma-ass-filter} we have $\mathfrak p_i = \mathfrak p$ for
+at least one $i$.) This gives a filtration
+$$
+0 = M_0 \otimes_R N \subset M_1 \otimes_R N \subset \ldots
+\subset M_n \otimes_R N = M \otimes_R N
+$$
+with subquotients isomorphic to $N/\mathfrak p_iN$. If
+$\mathfrak p_i \not = \mathfrak p$ then $\mathfrak q$ cannot be
+associated to the module $N/\mathfrak p_iN$ by the result of the
+preceding paragraph (as $\text{Ass}_R(R/\mathfrak p_i) = \{\mathfrak p_i\}$).
+Hence we conclude that $\mathfrak q$ is associated to
+$N/\mathfrak pN$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-post-bourbaki}
+Let $R \to S$ be a ring map.
+Let $N$ be an $S$-module.
+Assume $N$ is flat as an $R$-module and
+$R$ is a domain with fraction field $K$.
+Then
+$$
+\text{Ass}_S(N) =
+\text{Ass}_S(N \otimes_R K) =
+\text{Ass}_{S \otimes_R K}(N \otimes_R K)
+$$
+via the canonical inclusion
+$\Spec(S \otimes_R K) \subset \Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+Note that $S \otimes_R K = (R \setminus \{0\})^{-1}S$ and
+$N \otimes_R K = (R \setminus \{0\})^{-1}N$.
+For any nonzero $x \in R$ multiplication by $x$ on $N$ is injective as
+$N$ is flat over $R$. Hence the lemma follows from
+Lemma \ref{lemma-localize-ass-nonzero-divisors}
+combined with
+Lemma \ref{lemma-localize-ass} part (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bourbaki-fibres}
+Let $R \to S$ be a ring map.
+Let $M$ be an $R$-module, and let $N$ be an $S$-module.
+Assume $N$ is flat as $R$-module. Then
+$$
+\text{Ass}_S(M \otimes_R N)
+\supset
+\bigcup\nolimits_{\mathfrak p \in \text{Ass}_R(M)}
+\text{Ass}_{S \otimes_R \kappa(\mathfrak p)}(N \otimes_R \kappa(\mathfrak p))
+$$
+where we use
+Remark \ref{remark-fundamental-diagram}
+to think of the spectra of fibre rings as subsets of $\Spec(S)$.
+If $R$ is Noetherian then this inclusion is an equality.
+\end{lemma}
+
+\begin{proof}
+This is equivalent to
+Lemma \ref{lemma-bourbaki}
+by
+Lemmas \ref{lemma-ass-quotient-ring},
+\ref{lemma-flat-base-change}, and
+\ref{lemma-post-bourbaki}.
+\end{proof}
+
+\begin{remark}
+\label{remark-bourbaki}
+Let $R \to S$ be a ring map. Let $N$ be an $S$-module.
+Let $\mathfrak p$ be a prime of $R$. Then
+$$
+\text{Ass}_S(N \otimes_R \kappa(\mathfrak p)) =
+\text{Ass}_{S/\mathfrak pS}(N \otimes_R \kappa(\mathfrak p)) =
+\text{Ass}_{S \otimes_R \kappa(\mathfrak p)}(N \otimes_R \kappa(\mathfrak p)).
+$$
+The first equality by
+Lemma \ref{lemma-ass-quotient-ring}
+and the second by
+Lemma \ref{lemma-localize-ass} part (1).
+\end{remark}
+
+
+
+
+
+
+\section{Weakly associated primes}
+\label{section-weakly-ass}
+
+\noindent
+This is a variant on the notion of an associated prime that is useful
for non-Noetherian rings and non-finite modules.
+
+\begin{definition}
+\label{definition-weakly-associated}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+A prime $\mathfrak p$ of $R$ is {\it weakly associated} to $M$
+if there exists an element $m \in M$ such that $\mathfrak p$ is minimal
+among the prime ideals containing the annihilator
+$\text{Ann}(m) = \{f \in R \mid fm = 0\}$.
+The set of all such primes is denoted $\text{WeakAss}_R(M)$
+or $\text{WeakAss}(M)$.
+\end{definition}
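\noindent
The following standard example, added here for illustration (details
are only sketched), shows that a weakly associated prime need not be an
associated prime.

\begin{example}
\label{example-weakly-associated-not-associated}
Let $R$ be a valuation ring with value group $\mathbf{Q}$, valuation $v$,
and maximal ideal $\mathfrak m = \{x \in R \mid v(x) > 0\}$. Set
$I = \{x \in R \mid v(x) \geq 1\}$ and $M = R/I$. The annihilator of
$1 \in M$ is $I$ and $\sqrt{I} = \mathfrak m$ (if $v(x) > 0$, then
$v(x^n) \geq 1$ for $n$ large), hence $\mathfrak m$ is minimal over $I$
and $\mathfrak m \in \text{WeakAss}(M)$. On the other hand, the
annihilator of a nonzero element $\overline{x} \in M$ with
$v(x) = a < 1$ is $\{y \in R \mid v(y) \geq 1 - a\}$, which is not a
prime ideal because the value group is dense: a product of two elements
of value $(1 - a)/2$ lies in it while the factors do not. Hence
$\text{Ass}(M) = \emptyset$.
\end{example}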
+
+\noindent
+Thus an associated prime is a weakly associated prime.
+Here is a characterization in terms of the localization at the prime.
+
+\begin{lemma}
+\label{lemma-weakly-ass-local}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $\mathfrak p$ be a prime of $R$.
+The following are equivalent:
+\begin{enumerate}
+\item $\mathfrak p$ is weakly associated to $M$,
+\item $\mathfrak pR_{\mathfrak p}$ is weakly associated to $M_{\mathfrak p}$,
+and
+\item $M_{\mathfrak p}$ contains an element whose
+annihilator has radical equal to $\mathfrak pR_{\mathfrak p}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Then there exists an element $m \in M$ such that
+$\mathfrak p$ is minimal among the primes containing the annihilator
+$I = \{x \in R \mid xm = 0\}$ of $m$. As localization is exact, the
+annihilator of $m$ in $M_{\mathfrak p}$ is $I_{\mathfrak p}$.
Hence $\mathfrak pR_{\mathfrak p}$ is a minimal prime of
$R_{\mathfrak p}$ containing the annihilator $I_{\mathfrak p}$
of $m$ in $M_{\mathfrak p}$. Since every prime of $R_{\mathfrak p}$ is
contained in $\mathfrak pR_{\mathfrak p}$, this minimality shows that
$\mathfrak pR_{\mathfrak p}$ is the only prime of $R_{\mathfrak p}$
containing $I_{\mathfrak p}$, i.e.,
$\sqrt{I_{\mathfrak p}} = \mathfrak pR_{\mathfrak p}$.
This implies (2) holds, and also (3).
+
+\medskip\noindent
+Applying the implication (1) $\Rightarrow$ (3) to $M_{\mathfrak p}$
+over $R_{\mathfrak p}$ we see that (2) $\Rightarrow$ (3).
+
+\medskip\noindent
+Finally, assume (3). This means there exists an element
+$m/f \in M_{\mathfrak p}$ whose annihilator has radical equal
+to $\mathfrak pR_{\mathfrak p}$. Then the annihilator
+$I = \{x \in R \mid xm = 0\}$ of $m$ in $M$ is such that
+$\sqrt{I_{\mathfrak p}} = \mathfrak pR_{\mathfrak p}$. Clearly
+this means that $\mathfrak p$ contains $I$ and is minimal among the
+primes containing $I$, i.e., (1) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduced-weakly-ass-minimal}
+For a reduced ring the weakly associated primes of the ring are
+the minimal primes.
+\end{lemma}
+
+\begin{proof}
+Let $(R, \mathfrak m)$ be a reduced local ring.
Suppose $x \in R$ is an element whose annihilator
has radical $\mathfrak m$. If $\mathfrak m \not = 0$, then $x$
cannot be a unit, so $x \in \mathfrak m$. As $\mathfrak m$ is the radical
of the annihilator of $x$ we get $x^n x = x^{n + 1} = 0$ for some
$n \geq 1$. Since $R$ is reduced this gives $x = 0$, which contradicts
the fact that the annihilator of $x$ is contained in $\mathfrak m$.
Thus we see that $\mathfrak m = 0$, i.e., $R$ is a field.
+By Lemma \ref{lemma-weakly-ass-local} this
+implies the statement of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass}
+Let $R$ be a ring.
+Let $0 \to M' \to M \to M'' \to 0$ be a short exact sequence
+of $R$-modules.
+Then $\text{WeakAss}(M') \subset \text{WeakAss}(M)$ and
+$\text{WeakAss}(M) \subset \text{WeakAss}(M') \cup \text{WeakAss}(M'')$.
+\end{lemma}
+
+\begin{proof}
+We will use the characterization of weakly associated primes of
+Lemma \ref{lemma-weakly-ass-local}.
+Let $\mathfrak p$ be a prime of $R$. As localization is exact we obtain
+the short exact sequence
+$0 \to M'_{\mathfrak p} \to M_{\mathfrak p} \to M''_{\mathfrak p} \to 0$.
+Suppose that $m \in M_{\mathfrak p}$ is an element whose annihilator
+has radical $\mathfrak pR_{\mathfrak p}$. Then either the image $\overline{m}$
+of $m$ in $M''_{\mathfrak p}$ is zero and $m \in M'_{\mathfrak p}$, or the
+radical of the annihilator of $\overline{m}$ is $\mathfrak pR_{\mathfrak p}$.
+This proves that
+$\text{WeakAss}(M) \subset \text{WeakAss}(M') \cup \text{WeakAss}(M'')$.
+The inclusion $\text{WeakAss}(M') \subset \text{WeakAss}(M)$ is immediate
+from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-zero}
+\begin{slogan}
+Every nonzero module has a weakly associated prime.
+\end{slogan}
+Let $R$ be a ring. Let $M$ be an $R$-module. Then
+$$
+M = (0) \Leftrightarrow \text{WeakAss}(M) = \emptyset
+$$
+\end{lemma}
+
+\begin{proof}
+If $M = (0)$ then $\text{WeakAss}(M) = \emptyset$ by definition.
+Conversely, suppose that $M \not = 0$. Pick a nonzero element $m \in M$.
+Write $I = \{x \in R \mid xm = 0\}$ the annihilator of $m$.
+Then $R/I \subset M$. Hence $\text{WeakAss}(R/I) \subset \text{WeakAss}(M)$ by
+Lemma \ref{lemma-weakly-ass}.
But as $I \not = R$ the nonempty closed subset $V(I) = \Spec(R/I)$
contains a minimal prime, see
+Lemmas \ref{lemma-Zariski-topology} and
+\ref{lemma-spec-closed},
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-support}
+Let $R$ be a ring. Let $M$ be an $R$-module. Then
+$$
+\text{Ass}(M) \subset \text{WeakAss}(M) \subset \text{Supp}(M).
+$$
+\end{lemma}
+
+\begin{proof}
+The first inclusion is immediate from the definitions.
+If $\mathfrak p \in \text{WeakAss}(M)$, then by
+Lemma \ref{lemma-weakly-ass-local}
+we have $M_{\mathfrak p} \not = 0$, hence $\mathfrak p \in \text{Supp}(M)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-zero-divisors}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+The union $\bigcup_{\mathfrak q \in \text{WeakAss}(M)} \mathfrak q$
+is the set of elements of $R$ which are zerodivisors on $M$.
+\end{lemma}
+
+\begin{proof}
+Suppose $f \in \mathfrak q \in \text{WeakAss}(M)$.
+Then there exists an element $m \in M$ such that
+$\mathfrak q$ is minimal over $I = \{x \in R \mid xm = 0\}$.
+Hence there exists a $g \in R$, $g \not \in \mathfrak q$ and $n > 0$
+such that $f^ngm = 0$. Note that $gm \not = 0$ as $g \not \in I$.
+If we take $n$ minimal as above, then $f (f^{n - 1}gm) = 0$
+and $f^{n - 1}gm \not = 0$, so $f$ is a zerodivisor on $M$.
+Conversely, suppose $f \in R$ is a zerodivisor on $M$.
+Consider the submodule $N = \{m \in M \mid fm = 0\}$.
+Since $N$ is not zero it has a weakly associated prime $\mathfrak q$ by
+Lemma \ref{lemma-weakly-ass-zero}.
+Clearly $f \in \mathfrak q$ and by
+Lemma \ref{lemma-weakly-ass}
+$\mathfrak q$ is a weakly associated prime of $M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-minimal-prime-support}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+Any $\mathfrak p \in \text{Supp}(M)$ which is minimal among the elements
+of $\text{Supp}(M)$ is an element of $\text{WeakAss}(M)$.
+\end{lemma}
+
+\begin{proof}
+Note that $\text{Supp}(M_{\mathfrak p}) = \{\mathfrak pR_{\mathfrak p}\}$
+in $\Spec(R_{\mathfrak p})$. In particular $M_{\mathfrak p}$
+is nonzero, and hence $\text{WeakAss}(M_{\mathfrak p}) \not = \emptyset$ by
+Lemma \ref{lemma-weakly-ass-zero}.
+Since $\text{WeakAss}(M_{\mathfrak p}) \subset \text{Supp}(M_{\mathfrak p})$
+by
+Lemma \ref{lemma-weakly-ass-support}
+we conclude that
+$\text{WeakAss}(M_{\mathfrak p}) = \{\mathfrak pR_{\mathfrak p}\}$,
+whence $\mathfrak p \in \text{WeakAss}(M)$ by
+Lemma \ref{lemma-weakly-ass-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-weakly-ass}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $\mathfrak p$ be a prime ideal of $R$ which is finitely generated.
+Then
+$$
+\mathfrak p \in \text{Ass}(M) \Leftrightarrow
+\mathfrak p \in \text{WeakAss}(M).
+$$
+In particular, if $R$ is Noetherian, then $\text{Ass}(M) = \text{WeakAss}(M)$.
+\end{lemma}
+
+\begin{proof}
+Write $\mathfrak p = (g_1, \ldots, g_n)$ for some $g_i \in R$.
It is enough to prove the implication ``$\Leftarrow$'' as the other
+implication holds in general, see
+Lemma \ref{lemma-weakly-ass-support}.
+Assume $\mathfrak p \in \text{WeakAss}(M)$.
+By
+Lemma \ref{lemma-weakly-ass-local}
+there exists an element $m \in M_{\mathfrak p}$ such that
+$I = \{x \in R_{\mathfrak p} \mid xm = 0\}$ has radical
+$\mathfrak pR_{\mathfrak p}$. Hence for each $i$ there exists
+a smallest $e_i > 0$ such that $g_i^{e_i}m = 0$ in $M_{\mathfrak p}$.
+If $e_i > 1$ for some $i$, then we can replace $m$ by
+$g_i^{e_i - 1} m \not = 0$ and decrease $\sum e_i$.
+Hence we may assume that the annihilator of $m \in M_{\mathfrak p}$ is
+$(g_1, \ldots, g_n)R_{\mathfrak p} = \mathfrak p R_{\mathfrak p}$. By
+Lemma \ref{lemma-associated-primes-localize}
+we see that $\mathfrak p \in \text{Ass}(M)$.
+\end{proof}
+
+\begin{remark}
+\label{remark-weakly-ass-not-functorial}
+Let $\varphi : R \to S$ be a ring map. Let $M$ be an $S$-module.
+Then it is not always the case that
+$\Spec(\varphi)(\text{WeakAss}_S(M)) \subset \text{WeakAss}_R(M)$
+contrary to the case of associated primes (see
+Lemma \ref{lemma-ass-functorial}).
+An example is to consider the ring map
+$$
+R = k[x_1, x_2, x_3, \ldots] \to
+S = k[x_1, x_2, x_3, \ldots, y_1, y_2, y_3, \ldots]/
+(x_1y_1, x_2y_2, x_3y_3, \ldots)
+$$
+and $M = S$. In this case $\mathfrak q = \sum x_iS$ is a minimal prime of
+$S$, hence a weakly associated prime of $M = S$ (see
+Lemma \ref{lemma-weakly-ass-minimal-prime-support}).
+But on the other hand, for any nonzero element of $S$ the annihilator
+in $R$ is finitely generated, and hence does not
+have radical equal to $R \cap \mathfrak q = (x_1, x_2, x_3, \ldots)$
+(details omitted).
+\end{remark}
+
+\begin{lemma}
+\label{lemma-weakly-ass-reverse-functorial}
+Let $\varphi : R \to S$ be a ring map. Let $M$ be an $S$-module.
+Then we have
+$\Spec(\varphi)(\text{WeakAss}_S(M)) \supset \text{WeakAss}_R(M)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p$ be an element of $\text{WeakAss}_R(M)$.
+Then there exists an $m \in M_{\mathfrak p}$ whose annihilator
+$I = \{x \in R_{\mathfrak p} \mid xm = 0\}$ has radical
+$\mathfrak pR_{\mathfrak p}$. Consider the annihilator
+$J = \{x \in S_{\mathfrak p} \mid xm = 0 \}$ of $m$ in $S_{\mathfrak p}$.
+As $IS_{\mathfrak p} \subset J$ we see that any minimal prime
+$\mathfrak q \subset S_{\mathfrak p}$ over $J$ lies over $\mathfrak p$.
+Moreover such a $\mathfrak q$ corresponds to a weakly associated prime
+of $M$ for example by
+Lemma \ref{lemma-weakly-ass-local}.
+\end{proof}
+
+\begin{remark}
+\label{remark-ass-functorial}
+Let $\varphi : R \to S$ be a ring map. Let $M$ be an $S$-module.
+Denote $f : \Spec(S) \to \Spec(R)$ the associated map on spectra.
+Then we have
+$$
+f(\text{Ass}_S(M)) \subset
+\text{Ass}_R(M) \subset
+\text{WeakAss}_R(M) \subset
+f(\text{WeakAss}_S(M))
+$$
+see
+Lemmas \ref{lemma-ass-functorial},
+\ref{lemma-weakly-ass-reverse-functorial}, and
+\ref{lemma-weakly-ass-support}.
+In general all of the inclusions may be strict, see
+Remarks \ref{remark-ass-reverse-functorial} and
+\ref{remark-weakly-ass-not-functorial}.
+If $S$ is Noetherian, then all the inclusions are equalities as
+the outer two are equal by
+Lemma \ref{lemma-ass-weakly-ass}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-weakly-ass-finite-ring-map}
+Let $\varphi : R \to S$ be a ring map. Let $M$ be an $S$-module.
+Denote $f : \Spec(S) \to \Spec(R)$ the associated map on spectra.
+If $\varphi$ is a finite ring map, then
+$$
+\text{WeakAss}_R(M) = f(\text{WeakAss}_S(M)).
+$$
+\end{lemma}
+
+\begin{proof}
+One of the inclusions has already been proved, see
+Remark \ref{remark-ass-functorial}.
+To prove the other assume $\mathfrak q \in \text{WeakAss}_S(M)$
+and let $\mathfrak p$ be the corresponding prime of $R$. Let $m \in M$
+be an element such that $\mathfrak q$ is a minimal prime over
+$J = \{g \in S \mid gm = 0\}$. Thus the radical of
+$JS_{\mathfrak q}$ is $\mathfrak qS_{\mathfrak q}$.
+As $R \to S$ is finite there are
+finitely many primes
+$\mathfrak q = \mathfrak q_1, \mathfrak q_2, \ldots, \mathfrak q_l$
+over $\mathfrak p$, see
+Lemma \ref{lemma-finite-finite-fibres}.
+Pick $x \in \mathfrak q$ with $x \not \in \mathfrak q_i$ for $i > 1$, see
+Lemma \ref{lemma-silly}.
+By the above there exists an element $y \in S$, $y \not \in \mathfrak q$
+and an integer $t > 0$ such that $y x^t m = 0$. Thus the element
+$ym \in M$ is annihilated by $x^t$, hence $ym$ maps to zero in
$M_{\mathfrak q_i}$, $i = 2, \ldots, l$. To be sure, $ym$ does not
map to zero in $M_{\mathfrak q}$.
+
+\medskip\noindent
+The ring $S_{\mathfrak p}$ is semi-local with maximal ideals
+$\mathfrak q_i S_{\mathfrak p}$ by going up for finite ring maps, see
+Lemma \ref{lemma-integral-going-up}.
+If $f \in \mathfrak pR_{\mathfrak p}$ then some power of $f$ ends
+up in $JS_{\mathfrak q}$ hence for some $n > 0$ we see that
$f^n ym$ maps to zero in $M_{\mathfrak q}$. As $ym$ vanishes at the
other maximal ideals of $S_{\mathfrak p}$ we conclude that $f^n ym$ is zero
+in $M_{\mathfrak p}$, see
+Lemma \ref{lemma-characterize-zero-local}.
+In this way we see that $\mathfrak p$ is a minimal prime over
+the annihilator of $ym$ in $R$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-quotient-ring}
+Let $R$ be a ring.
+Let $I$ be an ideal.
+Let $M$ be an $R/I$-module.
+Via the canonical injection
+$\Spec(R/I) \to \Spec(R)$
+we have $\text{WeakAss}_{R/I}(M) = \text{WeakAss}_R(M)$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-weakly-ass-finite-ring-map}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-weakly-ass}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $S \subset R$ be a multiplicative subset.
+Via the canonical injection $\Spec(S^{-1}R) \to \Spec(R)$
+we have $\text{WeakAss}_R(S^{-1}M) = \text{WeakAss}_{S^{-1}R}(S^{-1}M)$
+and
+$$
+\text{WeakAss}(M) \cap \Spec(S^{-1}R) = \text{WeakAss}(S^{-1}M).
+$$
+\end{lemma}
+
+\begin{proof}
+Suppose that $m \in S^{-1}M$. Let $I = \{x \in R \mid xm = 0\}$
+and $I' = \{x' \in S^{-1}R \mid x'm = 0\}$. Then $I' = S^{-1}I$
+and $I \cap S = \emptyset$ unless $I = R$ (verifications omitted).
+Thus primes in $S^{-1}R$ minimal over $I'$ correspond bijectively
+to primes in $R$ minimal over $I$ and avoiding $S$. This proves the
+equality $\text{WeakAss}_R(S^{-1}M) = \text{WeakAss}_{S^{-1}R}(S^{-1}M)$.
+The second equality follows from
+Lemma \ref{lemma-weakly-ass-local}
since for a prime $\mathfrak p \subset R$ with $S \cap \mathfrak p = \emptyset$
we have
+$M_{\mathfrak p} = (S^{-1}M)_{S^{-1}\mathfrak p}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-weakly-ass-nonzero-divisors}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $S \subset R$ be a multiplicative subset.
+Assume that every $s \in S$ is a nonzerodivisor on $M$.
+Then
+$$
+\text{WeakAss}(M) = \text{WeakAss}(S^{-1}M).
+$$
+\end{lemma}
+
+\begin{proof}
+As $M \subset S^{-1}M$ by assumption we obtain
+$\text{WeakAss}(M) \subset \text{WeakAss}(S^{-1}M)$ from
+Lemma \ref{lemma-weakly-ass}.
+Conversely, suppose that $n/s \in S^{-1}M$ is an element with annihilator
+$I$ and $\mathfrak p$ a prime which is minimal over $I$.
+Then the annihilator of $n \in M$ is $I$ and $\mathfrak p$ is a prime
+minimal over $I$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-zero-at-weakly-ass-zero}
+Let $R$ be a ring. Let $M$ be an $R$-module. The map
+$$
+M
+\longrightarrow
+\prod\nolimits_{\mathfrak p \in \text{WeakAss}(M)} M_{\mathfrak p}
+$$
+is injective.
+\end{lemma}
+
+\begin{proof}
+Let $x \in M$ be an element of the kernel of the map. Set
+$N = Rx \subset M$. If $\mathfrak p$ is a weakly associated prime of $N$
+we see on the one hand that $\mathfrak p \in \text{WeakAss}(M)$
+(Lemma \ref{lemma-weakly-ass})
+and on the other hand that $N_{\mathfrak p} \subset M_{\mathfrak p}$
+is not zero. This contradiction shows that $\text{WeakAss}(N) = \emptyset$.
+Hence $N = 0$, i.e., $x = 0$ by
+Lemma \ref{lemma-weakly-ass-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weak-post-bourbaki}
+Let $R \to S$ be a ring map.
+Let $N$ be an $S$-module.
+Assume $N$ is flat as an $R$-module and
+$R$ is a domain with fraction field $K$.
+Then
+$$
+\text{WeakAss}_S(N) = \text{WeakAss}_{S \otimes_R K}(N \otimes_R K)
+$$
+via the canonical inclusion
+$\Spec(S \otimes_R K) \subset \Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+Note that $S \otimes_R K = (R \setminus \{0\})^{-1}S$ and
+$N \otimes_R K = (R \setminus \{0\})^{-1}N$.
+For any nonzero $x \in R$ multiplication by $x$ on $N$ is injective as
+$N$ is flat over $R$. Hence the lemma follows from
+Lemma \ref{lemma-localize-weakly-ass-nonzero-divisors}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-change-fields}
+Let $K/k$ be a field extension. Let $R$ be a $k$-algebra.
+Let $M$ be an $R$-module. Let $\mathfrak q \subset R \otimes_k K$
+be a prime lying over $\mathfrak p \subset R$. If
+$\mathfrak q$ is weakly associated to $M \otimes_k K$,
+then $\mathfrak p$ is weakly associated to $M$.
+\end{lemma}
+
+\begin{proof}
+Let $z \in M \otimes_k K$ be an element such that $\mathfrak q$
+is minimal over the annihilator $J \subset R \otimes_k K$ of $z$.
+Choose a finitely generated subextension $K/L/k$ such that
+$z \in M \otimes_k L$. Since $R \otimes_k L \to R \otimes_k K$
+is flat we see that $J = I(R \otimes_k K)$ where $I \subset R \otimes_k L$
+is the annihilator of $z$ in the smaller ring
+(Lemma \ref{lemma-annihilator-flat-base-change}).
+Thus $\mathfrak q \cap (R \otimes_k L)$ is minimal over $I$ by
+going down (Lemma \ref{lemma-flat-going-down}).
+In this way we reduce to the case described in the next paragraph.
+
+\medskip\noindent
+Assume $K/k$ is a finitely generated field extension.
+Let $x_1, \ldots, x_r \in K$ be a transcendence basis
+of $K$ over $k$, see Fields, Section \ref{fields-section-transcendence}.
+Set $L = k(x_1, \ldots, x_r)$. Say $[K : L] = n$. Then
+$R \otimes_k L \to R \otimes_k K$ is a finite ring map.
+Hence $\mathfrak q \cap (R \otimes_k L)$
+is a weakly associated prime of $M \otimes_k K$
+viewed as a $R \otimes_k L$-module by
+Lemma \ref{lemma-weakly-ass-finite-ring-map}.
+Since $M \otimes_k K \cong (M \otimes_k L)^{\oplus n}$
+as a $R \otimes_k L$-module, we see that
+$\mathfrak q \cap (R \otimes_k L)$
+is a weakly associated prime of $M \otimes_k L$
+(for example by using Lemma \ref{lemma-weakly-ass} and induction).
+In this way we reduce to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $K = k(x_1, \ldots, x_r)$ is a purely transcendental field extension.
+We may replace $R$ by $R_\mathfrak p$, $M$ by $M_\mathfrak p$
+and $\mathfrak q$ by $\mathfrak q(R_\mathfrak p \otimes_k K)$.
+See Lemma \ref{lemma-localize-weakly-ass}.
+In this way we reduce to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $K = k(x_1, \ldots, x_r)$ is a purely transcendental field extension
+and $R$ is local with maximal ideal $\mathfrak p$. We claim that any
+$f \in R \otimes_k K$, $f \not \in \mathfrak p(R \otimes_k K)$
+is a nonzerodivisor on $M \otimes_k K$. Namely, let
+$z \in M \otimes_k K$ be an element.
+There is a finite $R$-submodule $M' \subset M$ such that
+$z \in M' \otimes_k K$ and such that $M'$ is minimal with
+this property: choose a basis $\{t_\alpha\}$ of $K$ as a
+$k$-vector space, write $z = \sum m_\alpha \otimes t_\alpha$ and let
+$M'$ be the $R$-submodule generated by the $m_\alpha$.
+If $z \in \mathfrak p(M' \otimes_k K) = \mathfrak p M' \otimes_k K$,
then $\mathfrak pM' = M'$ and $M' = 0$ by Lemma \ref{lemma-NAK},
a contradiction.
Thus $z$ has nonzero image $\overline{z}$ in $M'/\mathfrak p M' \otimes_k K$.
But $R/\mathfrak p \otimes_k K$ is a domain as a localization
of the polynomial ring $\kappa(\mathfrak p)[x_1, \ldots, x_r]$ and
+$M'/\mathfrak p M' \otimes_k K$ is a free module, hence
+$f\overline{z} \not = 0$. This proves the claim.
+
+\medskip\noindent
+Finally, pick $z \in M \otimes_k K$ such that $\mathfrak q$
+is minimal over the annihilator $J \subset R \otimes_k K$ of $z$.
+For $f \in \mathfrak p$ there exists an $n \geq 1$ and a
+$g \in R \otimes_k K$, $g \not \in \mathfrak q$ such that
+$g f^n z \in J$, i.e., $g f^n z = 0$.
+(This holds because $\mathfrak q$ lies over $\mathfrak p$
+and $\mathfrak q$ is minimal over $J$.)
+Above we have seen that $g$ is a nonzerodivisor hence $f^n z = 0$.
+This means that $\mathfrak p$ is a weakly associated prime
+of $M \otimes_k K$ viewed as an $R$-module.
+Since $M \otimes_k K$ is a direct sum of copies of $M$
+we conclude that $\mathfrak p$ is a weakly associated
+prime of $M$ as before.
+\end{proof}
+
+
+
+
+
+\section{Embedded primes}
+\label{section-embedded-primes}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-embedded-primes}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+\begin{enumerate}
+\item The associated primes of $M$ which are
+not minimal among the associated primes of $M$ are called the
+{\it embedded associated primes} of $M$.
+\item The {\it embedded primes of $R$}
+are the embedded associated primes of $R$ as an $R$-module.
+\end{enumerate}
+\end{definition}
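\noindent
For illustration we add the standard example of an embedded prime
(the example and its label do not occur elsewhere; the computation of
annihilators is elementary and left to the reader).

\begin{example}
\label{example-embedded-prime}
Let $k$ be a field and let $R = k[x, y]/(x^2, xy)$. The annihilator of
$y$ is $(x)$ and the annihilator of $x$ is $(x, y)$, hence
$\text{Ass}(R) = \{(x), (x, y)\}$. Since $(x)$ is the unique minimal
prime of $R$, the prime $(x, y)$ is an embedded prime of $R$.
Geometrically, $\Spec(R)$ is a line with an embedded point at the origin.
\end{example}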
+
+\noindent
+Here is a way to get rid of these.
+
+\begin{lemma}
+\label{lemma-remove-embedded-primes}
+Let $R$ be a Noetherian ring.
+Let $M$ be a finite $R$-module.
+Consider the set of $R$-submodules
+$$
+\{
+K \subset M
+\mid
+\text{Supp}(K)
+\text{ nowhere dense in }
+\text{Supp}(M)
+\}.
+$$
+This set has a maximal element $K$ and the quotient
+$M' = M/K$ has the following properties
+\begin{enumerate}
+\item $\text{Supp}(M) = \text{Supp}(M')$,
+\item $M'$ has no embedded associated primes,
+\item for any $f \in R$ which is contained in all
+embedded associated primes of $M$ we have $M_f \cong M'_f$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-finite-ass} and
+Proposition \ref{proposition-minimal-primes-associated-primes}
+without further mention.
+Let $\mathfrak q_1, \ldots, \mathfrak q_t$ denote the
+minimal primes in the support of $M$. Let
+$\mathfrak p_1, \ldots, \mathfrak p_s$ denote the
+embedded associated primes of $M$. Then
+$\text{Ass}(M) = \{\mathfrak q_j, \mathfrak p_i\}$.
+Let
+$$
+K = \{m \in M \mid \text{Supp}(Rm) \subset \bigcup V(\mathfrak p_i)\}
+$$
+It is immediately seen to be a submodule. Since $M$ is finite over a
Noetherian ring, we know $K$ is finite too. Hence
$\text{Supp}(K) \subset \bigcup V(\mathfrak p_i)$
is nowhere dense in $\text{Supp}(M)$. Let $K' \subset M$ be another submodule
with support nowhere dense in $\text{Supp}(M)$. This means
that $K'_{\mathfrak q_j} = 0$ for all $j$. Hence if $m \in K'$,
+then $m$ maps to zero in $M_{\mathfrak q_j}$ which
+in turn implies $(Rm)_{\mathfrak q_j} = 0$.
+On the other hand we have $\text{Ass}(Rm) \subset \text{Ass}(M)$.
+Hence the support of $Rm$ is contained in $\bigcup V(\mathfrak p_i)$.
+Therefore $m \in K$ and thus $K' \subset K$ as $m$ was arbitrary in $K'$.
+
+\medskip\noindent
+Let $M' = M/K$. Since $K_{\mathfrak q_j}=0$ we know
+$M'_{\mathfrak q_j} = M_{\mathfrak q_j}$ for all $j$.
+Hence $M$ and $M'$ have the same support.
+
+\medskip\noindent
+Suppose $\mathfrak q = \text{Ann}(\overline{m}) \in \text{Ass}(M')$
+where $\overline{m} \in M'$ is the image of $m \in M$.
+Then $m \not \in K$ and hence the support of $Rm$ must contain one of the
+$\mathfrak q_j$. Since $M_{\mathfrak q_j} = M'_{\mathfrak q_j}$,
+we know $\overline{m}$ does not map to zero in $M'_{\mathfrak q_j}$.
Hence $\mathfrak q \subset \mathfrak q_j$ (actually we have equality),
which means that none of the associated primes of $M'$ is embedded.
+
+\medskip\noindent
Let $f$ be an element contained in all the embedded associated primes
$\mathfrak p_i$. Then $D(f) \cap \text{Supp}(K) = \emptyset$.
Hence $M_f = M'_f$
+because $K_f = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-remove-embedded-primes-localize}
+Let $R$ be a Noetherian ring.
+Let $M$ be a finite $R$-module.
+For any $f \in R$ we have $(M')_f = (M_f)'$ where
+$M \to M'$ and $M_f \to (M_f)'$ are the quotients
+constructed in Lemma \ref{lemma-remove-embedded-primes}.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-no-embedded-primes-endos}
+Let $R$ be a Noetherian ring.
+Let $M$ be a finite $R$-module without embedded associated primes.
+Let $I = \{x \in R \mid xM = 0\}$. Then the ring $R/I$ has no
+embedded primes.
+\end{lemma}
+
+\begin{proof}
+We may replace $R$ by $R/I$.
+Hence we may assume every nonzero element
+of $R$ acts nontrivially on $M$.
+By Lemma \ref{lemma-support-closed} this implies that
+$\Spec(R)$ equals the support of $M$.
+Suppose that $\mathfrak p$ is an embedded prime of $R$.
+Let $x \in R$ be an element whose annihilator is $\mathfrak p$.
+Consider the nonzero module $N = xM \subset M$. It is annihilated
+by $\mathfrak p$. Hence any associated prime $\mathfrak q$ of $N$
+contains $\mathfrak p$ and is also an associated prime of $M$.
+Then $\mathfrak q$ would be an embedded associated prime of
+$M$ which contradicts the assumption of the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Regular sequences}
+\label{section-regular-sequences}
+
+\noindent
+In this section we develop some basic properties of regular sequences.
+
+\begin{definition}
+\label{definition-regular-sequence}
+Let $R$ be a ring. Let $M$ be an $R$-module. A sequence of elements
+$f_1, \ldots, f_r$ of $R$ is called an {\it $M$-regular sequence}
+if the following conditions hold:
+\begin{enumerate}
+\item $f_i$ is a nonzerodivisor on
+$M/(f_1, \ldots, f_{i - 1})M$
+for each $i = 1, \ldots, r$, and
+\item the module $M/(f_1, \ldots, f_r)M$ is not zero.
+\end{enumerate}
+If $I$ is an ideal of $R$ and $f_1, \ldots, f_r \in I$
+then we call $f_1, \ldots, f_r$ an {\it $M$-regular sequence
+in $I$}. If $M = R$, we call $f_1, \ldots, f_r$ simply a
+{\it regular sequence} (in $I$).
+\end{definition}
+
+\noindent
+Please pay attention to the fact that the definition depends on the order
+of the elements $f_1, \ldots, f_r$ (see examples below). Some papers/books
+drop the requirement that the module $M/(f_1, \ldots, f_r)M$ is nonzero.
+This has the advantage that being a regular sequence is preserved under
+localization. However, we will use this definition mainly to define the
+depth of a module in case $R$ is local; in that case the $f_i$ are required
+to be in the maximal ideal -- a condition which is not preserved under going
+from $R$ to a localization $R_\mathfrak p$.
+
+\begin{example}
+\label{example-global-regular}
+Let $k$ be a field. In the ring $k[x, y, z]$
+the sequence $x, y(1-x), z(1-x)$ is regular
+but the sequence $y(1-x), z(1-x), x$ is not.
+\end{example}
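
\noindent
For example, let us verify that the second sequence in
Example \ref{example-global-regular} fails already at its second term:
$z(1 - x)$ is a zerodivisor on $k[x, y, z]/(y(1 - x))$ since
$$
z(1 - x) \cdot y = y(1 - x) \cdot z \in (y(1 - x))
$$
whereas $y \not \in (y(1 - x))$ by unique factorization in $k[x, y, z]$.
On the other hand, modulo $x$ the first sequence reduces to
$y, z$ in $k[y, z]$, which is clearly regular.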
+
+\begin{example}
+\label{example-local-regular}
+Let $k$ be a field. Consider the ring
+$k[x, y, w_0, w_1, w_2, \ldots]/I$
+where $I$ is generated by $yw_i$, $i = 0, 1, 2, \ldots$ and
+$w_i - xw_{i + 1}$, $i = 0, 1, 2, \ldots$.
The sequence $x, y$ is regular, but $y, x$ is not, because $y$ is a
zerodivisor. Moreover, you can localize at the maximal ideal
$(x, y, w_i)$ and still get an example.
+\end{example}
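
\noindent
A quick way to see that $y$ is a zerodivisor in
Example \ref{example-local-regular}: the $k$-algebra map to
$k[x, x^{-1}]$ sending $x \mapsto x$, $y \mapsto 0$, and
$w_i \mapsto x^{-i}$ kills the relations $yw_i$ and $w_i - xw_{i + 1}$,
hence factors through the quotient, and it maps $w_0$ to $1$.
Thus $w_0 \not = 0$ in the quotient while $yw_0 = 0$.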
+
+\begin{lemma}
+\label{lemma-permute-xi}
+Let $R$ be a local Noetherian ring.
+Let $M$ be a finite $R$-module.
+Let $x_1, \ldots, x_c$ be an $M$-regular sequence.
Then any permutation of the $x_i$ is an $M$-regular
sequence as well.
+\end{lemma}
+
+\begin{proof}
+First we do the case $c = 2$.
+Consider $K \subset M$ the kernel of $x_2 : M \to M$.
+For any $z \in K$ we know that $z = x_1 z'$
+for some $z' \in M$ because
+$x_2$ is a nonzerodivisor on $M/x_1M$.
+Because $x_1$ is a nonzerodivisor on $M$ we see that $x_2 z' = 0$
+as well. Hence $x_1 : K \to K$ is surjective.
+Thus $K = 0$ by Nakayama's Lemma \ref{lemma-NAK}.
+Next, consider multiplication by $x_1$ on $M/x_2M$.
+If $z \in M$ maps to an element $\overline{z} \in M/x_2M$
+in the kernel of this map, then $x_1 z = x_2 y$ for some $y \in M$.
+But then since $x_1, x_2$ is a regular sequence we see that
$y = x_1 y'$ for some $y' \in M$. Hence $x_1(z - x_2 y') = 0$,
so $z = x_2 y'$ and $\overline{z} = 0$ as desired.
+
+\medskip\noindent
+For the general case, observe that any permutation is
+a composition of transpositions of adjacent indices.
+Hence it suffices to prove that
+$$
+x_1, \ldots, x_{i-2}, x_i, x_{i-1}, x_{i + 1}, \ldots, x_c
+$$
+is an $M$-regular sequence. This follows from the case we
just did applied to the module $M/(x_1, \ldots, x_{i-2})M$
+and the length $2$ regular sequence $x_{i-1}, x_i$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-increases-depth}
+\begin{slogan}
+Flat local ring homomorphisms preserve and reflect regular sequences.
+\end{slogan}
+Let $R, S$ be local rings. Let $R \to S$ be a flat local ring homomorphism.
+Let $x_1, \ldots, x_r$ be a sequence in $R$. Let $M$ be an $R$-module.
+The following are equivalent
+\begin{enumerate}
+\item $x_1, \ldots, x_r$ is an $M$-regular sequence in $R$, and
\item the images of $x_1, \ldots, x_r$ in $S$ form an $M \otimes_R S$-regular
sequence.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is so because $R \to S$ is faithfully flat
+by Lemma \ref{lemma-local-flat-ff}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-sequence-in-neighbourhood}
+Let $R$ be a Noetherian ring. Let $M$ be a finite $R$-module.
+Let $\mathfrak p$ be a prime. Let $x_1, \ldots, x_r$ be a sequence
+in $R$ whose image in $R_{\mathfrak p}$ forms an $M_{\mathfrak p}$-regular
+sequence. Then there exists a $g \in R$, $g \not \in \mathfrak p$
+such that the image of $x_1, \ldots, x_r$ in $R_g$ forms
+an $M_g$-regular sequence.
+\end{lemma}
+
+\begin{proof}
+Set
+$$
+K_i = \Ker\left(x_i : M/(x_1, \ldots, x_{i - 1})M \to
+M/(x_1, \ldots, x_{i - 1})M\right).
+$$
+This is a finite $R$-module whose localization at $\mathfrak p$ is
+zero by assumption. Hence there exists a $g \in R$, $g \not \in \mathfrak p$
+such that $(K_i)_g = 0$ for all $i = 1, \ldots, r$. This $g$ works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-join-regular-sequences}
+Let $A$ be a ring. Let $I$ be an ideal generated by a regular
+sequence $f_1, \ldots, f_n$ in $A$. Let $g_1, \ldots, g_m \in A$ be
+elements whose images $\overline{g}_1, \ldots, \overline{g}_m$ form a
+regular sequence in $A/I$. Then $f_1, \ldots, f_n, g_1, \ldots, g_m$
+is a regular sequence in $A$.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-sequence-short-exact-sequence}
+Let $R$ be a ring. Let $0 \to M_1 \to M_2 \to M_3 \to 0$
+be a short exact sequence of $R$-modules. Let $f_1, \ldots, f_r \in R$.
+If $f_1, \ldots, f_r$ is $M_1$-regular and $M_3$-regular, then
+$f_1, \ldots, f_r$ is $M_2$-regular.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-snake}, if $f_1 : M_1 \to M_1$ and
+$f_1 : M_3 \to M_3$ are injective, then so is $f_1 : M_2 \to M_2$
+and we obtain a short exact sequence
+$$
+0 \to M_1/f_1M_1 \to M_2/f_1M_2 \to M_3/f_1M_3 \to 0
+$$
+The lemma follows from this and induction on $r$. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-sequence-powers}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $f_1, \ldots, f_r \in R$ and $e_1, \ldots, e_r > 0$ integers.
+Then $f_1, \ldots, f_r$ is an $M$-regular sequence
+if and only if $f_1^{e_1}, \ldots, f_r^{e_r}$
+is an $M$-regular sequence.
+\end{lemma}
+
+\begin{proof}
+We will prove this by induction on $r$. If $r = 1$ this follows from the
+following two easy facts: (a) a power of a nonzerodivisor on $M$
+is a nonzerodivisor on $M$ and (b) a divisor of a nonzerodivisor on $M$
+is a nonzerodivisor on $M$.
+If $r > 1$, then by induction applied to $M/f_1M$ we have that
+$f_1, f_2, \ldots, f_r$ is an $M$-regular sequence if and only if
+$f_1, f_2^{e_2}, \ldots, f_r^{e_r}$ is an $M$-regular sequence.
+Thus it suffices to show, given $e > 0$, that $f_1^e, f_2, \ldots, f_r$
+is an $M$-regular sequence if and only if $f_1, \ldots, f_r$
+is an $M$-regular sequence. We will prove this
+by induction on $e$. The case $e = 1$ is trivial. Since $f_1$ is a
+nonzerodivisor under both assumptions (by the case $r = 1$)
+we have a short exact sequence
+$$
+0 \to M/f_1M \xrightarrow{f_1^{e - 1}} M/f_1^eM \to M/f_1^{e - 1}M \to 0
+$$
+Suppose that $f_1, f_2, \ldots, f_r$ is an $M$-regular sequence.
+Then by induction the elements $f_2, \ldots, f_r$ are $M/f_1M$ and
+$M/f_1^{e - 1}M$-regular sequences. By
+Lemma \ref{lemma-regular-sequence-short-exact-sequence}
+$f_2, \ldots, f_r$ is $M/f_1^eM$-regular. Hence $f_1^e, f_2, \ldots, f_r$
+is $M$-regular. Conversely, suppose
+that $f_1^e, f_2, \ldots, f_r$ is an $M$-regular sequence. Then
+$f_2 : M/f_1^eM \to M/f_1^eM$ is injective, hence
+$f_2 : M/f_1M \to M/f_1M$ is injective, hence by induction(!)
+$f_2 : M/f_1^{e - 1}M \to M/f_1^{e - 1}M$ is injective, hence
+$$
+0 \to
+M/(f_1, f_2)M \xrightarrow{f_1^{e - 1}}
+M/(f_1^e, f_2)M \to
+M/(f_1^{e - 1}, f_2)M \to 0
+$$
+is a short exact sequence by Lemma \ref{lemma-snake}. This proves the
+converse for $r = 2$. If $r > 2$, then we have
+$f_3 : M/(f_1^e, f_2)M \to M/(f_1^e, f_2)M$ is injective, hence
+$f_3 : M/(f_1, f_2)M \to M/(f_1, f_2)M$ is injective, and so on.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-sequence-in-polynomial-ring}
+Let $R$ be a ring. Let $f_1, \ldots, f_r \in R$ which do not generate
+the unit ideal. The following are equivalent:
+\begin{enumerate}
+\item any permutation of $f_1, \ldots, f_r$ is a regular sequence,
+\item any subsequence of $f_1, \ldots, f_r$ (in the given order) is
+a regular sequence, and
+\item $f_1x_1, \ldots, f_rx_r$ is a regular sequence in the polynomial
+ring $R[x_1, \ldots, x_r]$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (1) implies (2). We prove (2) implies (1) by induction
+on $r$. The case $r = 1$ is trivial. The case $r = 2$ says that if
+$a, b \in R$ are a regular sequence and $b$ is a nonzerodivisor, then
+$b, a$ is a regular sequence. This is clear because the kernel of
+$a : R/(b) \to R/(b)$ is isomorphic to the kernel of $b : R/(a) \to R/(a)$
+if both $a$ and $b$ are nonzerodivisors. The case $r > 2$. Assume
+(2) holds and say we want to prove $f_{\sigma(1)}, \ldots, f_{\sigma(r)}$
+is a regular sequence for some permutation $\sigma$. We already know
+that $f_{\sigma(1)}, \ldots, f_{\sigma(r - 1)}$ is a regular sequence
+by induction. Hence it suffices to show that $f_s$ where $s = \sigma(r)$
+is a nonzerodivisor modulo $f_1, \ldots, \hat f_s, \ldots, f_r$.
+If $s = r$ we are done. If $s < r$, then note that $f_s$ and $f_r$
+are both nonzerodivisors in the ring
+$R/(f_1, \ldots, \hat f_s, \ldots, f_{r - 1})$
(by induction hypothesis again). Since we know $f_s, f_r$ is a
regular sequence in that ring we conclude by the length $2$ case
that $f_r, f_s$ is a regular sequence as well.
+
+\medskip\noindent
+Note that $R[x_1, \ldots, x_r]/(f_1x_1, \ldots, f_ix_i)$ as an $R$-module
+is a direct sum of the modules
+$$
+R/I_E \cdot x_1^{e_1} \ldots x_r^{e_r}
+$$
+indexed by multi-indices $E = (e_1, \ldots, e_r)$ where
+$I_E$ is the ideal generated by $f_j$ for $1 \leq j \leq i$
with $e_j > 0$. Hence $f_{i + 1}x_{i + 1}$ is a nonzerodivisor on this if
+and only if $f_{i + 1}$ is a nonzerodivisor on $R/I_E$ for all $E$.
+Taking $E$ with all positive entries, we see that $f_{i + 1}$
+is a nonzerodivisor on $R/(f_1, \ldots, f_i)$. Thus (3) implies (2).
+Conversely, if (2) holds, then any subsequence of
+$f_1, \ldots, f_i, f_{i + 1}$ is a regular sequence
+in particular $f_{i + 1}$ is a nonzerodivisor on all $R/I_E$.
+In this way we see that (2) implies (3).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Quasi-regular sequences}
+\label{section-quasi-regular}
+
+\noindent
There is a notion, that of a quasi-regular sequence, which is slightly
weaker than that of a regular sequence and easier to use.
+Let $R$ be a ring and let $f_1, \ldots, f_c \in R$.
+Set $J = (f_1, \ldots, f_c)$. Let $M$ be an $R$-module.
+Then there is a canonical map
+\begin{equation}
+\label{equation-quasi-regular}
+M/JM \otimes_{R/J} R/J[X_1, \ldots, X_c]
+\longrightarrow
+\bigoplus\nolimits_{n \geq 0} J^nM/J^{n + 1}M
+\end{equation}
+of graded $R/J[X_1, \ldots, X_c]$-modules defined by the rule
+$$
+\overline{m} \otimes X_1^{e_1} \ldots X_c^{e_c} \longmapsto
+f_1^{e_1} \ldots f_c^{e_c} m \bmod J^{e_1 + \ldots + e_c + 1}M.
+$$
+Note that (\ref{equation-quasi-regular}) is always surjective.
+
+\begin{definition}
+\label{definition-quasi-regular-sequence}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+A sequence of elements $f_1, \ldots, f_c$ of $R$ is called
+{\it $M$-quasi-regular} if (\ref{equation-quasi-regular})
+is an isomorphism. If $M = R$, we call $f_1, \ldots, f_c$ simply a
+{\it quasi-regular sequence}.
+\end{definition}
+
+\noindent
+So if $f_1, \ldots, f_c$ is a quasi-regular sequence, then
+$$
+R/J[X_1, \ldots, X_c] = \bigoplus\nolimits_{n \geq 0} J^n/J^{n + 1}
+$$
+where $J = (f_1, \ldots, f_c)$. It is clear that being a quasi-regular
+sequence is independent of the order of $f_1, \ldots, f_c$.
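
\noindent
For example, take $R = k[x]/(x^2)$ and the sequence consisting of the
single element $f_1 = x$, so that $J = (x)$ and $J^2 = 0$. Then
(\ref{equation-quasi-regular}) is the surjection
$$
k[X_1] \longrightarrow R/J \oplus J/J^2 = k \oplus k \cdot x
$$
sending $X_1$ to the class of $x$, which kills $X_1^2$ and hence is not
injective. Thus the zerodivisor $x$ does not form a quasi-regular
sequence, consistent with Lemma \ref{lemma-quasi-regular-regular} below.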
+
+\begin{lemma}
+\label{lemma-regular-quasi-regular}
+Let $R$ be a ring.
+\begin{enumerate}
+\item A regular sequence $f_1, \ldots, f_c$ of $R$ is a quasi-regular
+sequence.
+\item Suppose that $M$ is an $R$-module and that $f_1, \ldots, f_c$
+is an $M$-regular sequence. Then $f_1, \ldots, f_c$ is an
+$M$-quasi-regular sequence.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Set $J = (f_1, \ldots, f_c)$.
+We prove the first assertion by induction on $c$.
+We have to show that given any relation
+$\sum_{|I| = n} a_I f^I \in J^{n + 1}$ with $a_I \in R$ we
+actually have $a_I \in J$ for all multi-indices $I$. Since
+any element of $J^{n + 1}$ is of the form $\sum_{|I| = n} b_I f^I$
+with $b_I \in J$ we may assume, after replacing $a_I$ by $a_I - b_I$,
+the relation reads $\sum_{|I| = n} a_I f^I = 0$. We can rewrite
+this as
+$$
+\sum\nolimits_{e = 0}^n
+\left(
+\sum\nolimits_{|I'| = n - e}
+a_{I', e} f^{I'}
+\right)
+f_c^e
+=
+0
+$$
+Here and below the ``primed'' multi-indices $I'$ are required to be of the form
+$I' = (i_1, \ldots, i_{c - 1}, 0)$. We will show by descending
+induction on $l \in \{0, \ldots, n\}$
+that if we have a relation
+$$
+\sum\nolimits_{e = 0}^l
+\left(
+\sum\nolimits_{|I'| = n - e}
+a_{I', e} f^{I'}
+\right)
+f_c^e
+=
+0
+$$
+then $a_{I', e} \in J$ for all $I', e$.
+Namely, set $J' = (f_1, \ldots, f_{c-1})$.
+Observe that $\sum\nolimits_{|I'| = n - l} a_{I', l} f^{I'}$
+is mapped into $(J')^{n - l + 1}$ by $f_c^{l}$.
+By induction hypothesis (for the induction on $c$)
+we see that $f_c^l a_{I', l} \in J'$.
+Because $f_c$ is not a zerodivisor on $R/J'$ (as $f_1, \ldots, f_c$
+is a regular sequence) we conclude that $a_{I', l} \in J'$.
+This allows us to rewrite the term
+$(\sum\nolimits_{|I'| = n - l} a_{I', l} f^{I'})f_c^l$
+in the form $(\sum\nolimits_{|I'| = n - l + 1} f_c b_{I', l - 1}
+f^{I'})f_c^{l-1}$. This gives a new relation of the form
+$$
+\left(\sum\nolimits_{|I'| = n - l + 1}
+(a_{I', l-1} + f_c b_{I', l - 1}) f^{I'}\right)f_c^{l-1}
++
+\sum\nolimits_{e = 0}^{l - 2}
+\left(
+\sum\nolimits_{|I'| = n - e}
+a_{I', e} f^{I'}
+\right)
+f_c^e
+=
+0
+$$
+Now by the induction hypothesis (on $l$ this time) we see that
+all $a_{I', l-1} + f_c b_{I', l - 1} \in J$ and
+all $a_{I', e} \in J$ for $e \leq l - 2$. This, combined with
+$a_{I', l} \in J' \subset J$ seen above, finishes the proof of the
+induction step.
+
+\medskip\noindent
+The second assertion means that given any formal expression
+$F = \sum_{|I| = n} m_I X^I$, $m_I \in M$ with $\sum m_I f^I
\in J^{n + 1}M$, then all the coefficients $m_I$ are in $JM$.
+This is proved in exactly the same way as we prove the corresponding
+result for the first assertion above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-base-change-quasi-regular}
+Let $R \to R'$ be a flat ring map. Let $M$ be an $R$-module.
+Suppose that $f_1, \ldots, f_r \in R$ form an $M$-quasi-regular sequence.
+Then the images of $f_1, \ldots, f_r$ in
+$R'$ form a $M \otimes_R R'$-quasi-regular sequence.
+\end{lemma}
+
+\begin{proof}
+Set $J = (f_1, \ldots, f_r)$, $J' = JR'$ and $M' = M \otimes_R R'$.
+We have to show the canonical map
$\mu : R'/J'[X_1, \ldots, X_r] \otimes_{R'/J'} M'/J'M' \to
+\bigoplus (J')^nM'/(J')^{n + 1}M'$ is an isomorphism.
+Because $R \to R'$ is flat the sequences
+$0 \to J^nM \to M$ and
+$0 \to J^{n + 1}M \to J^nM \to J^nM/J^{n + 1}M \to 0$
+remain exact on tensoring with $R'$. This first implies that
+$J^nM \otimes_R R' = (J')^nM'$ and then that
+$(J')^nM'/(J')^{n + 1}M' = J^nM/J^{n + 1}M \otimes_R R'$.
+Thus $\mu$ is the tensor product of (\ref{equation-quasi-regular}),
+which is an isomorphism by assumption,
+with $\text{id}_{R'}$ and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-regular-sequence-in-neighbourhood}
+Let $R$ be a Noetherian ring. Let $M$ be a finite $R$-module.
+Let $\mathfrak p$ be a prime. Let $x_1, \ldots, x_c$ be a sequence
+in $R$ whose image in $R_{\mathfrak p}$ forms an
+$M_{\mathfrak p}$-quasi-regular sequence. Then there exists a
+$g \in R$, $g \not \in \mathfrak p$
+such that the image of $x_1, \ldots, x_c$ in $R_g$ forms
+an $M_g$-quasi-regular sequence.
+\end{lemma}
+
+\begin{proof}
Consider the kernel $K$ of the map (\ref{equation-quasi-regular}),
where $J$ denotes the ideal generated by $x_1, \ldots, x_c$.
+As $M/JM \otimes_{R/J} R/J[X_1, \ldots, X_c]$ is a finite
+$R/J[X_1, \ldots, X_c]$-module and as $R/J[X_1, \ldots, X_c]$ is
+Noetherian, we see that $K$ is also a finite $R/J[X_1, \ldots, X_c]$-module.
+Pick homogeneous generators $k_1, \ldots, k_t \in K$.
+By assumption for each $i = 1, \ldots, t$ there exists a $g_i \in R$,
+$g_i \not \in \mathfrak p$ such that $g_i k_i = 0$.
+Hence $g = g_1 \ldots g_t$ works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-truncate-quasi-regular}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $f_1, \ldots, f_c \in R$ be an $M$-quasi-regular sequence.
+For any $i$ the sequence
+$\overline{f}_{i + 1}, \ldots, \overline{f}_c$
in $\overline{R} = R/(f_1, \ldots, f_i)$ is an
+$\overline{M} = M/(f_1, \ldots, f_i)M$-quasi-regular sequence.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove this for $i = 1$. Set
+$\overline{J} = (\overline{f}_2, \ldots, \overline{f}_c) \subset \overline{R}$.
+Then
+\begin{align*}
+\overline{J}^n\overline{M}/\overline{J}^{n + 1}\overline{M}
+& =
+(J^nM + f_1M)/(J^{n + 1}M + f_1M) \\
+& = J^nM / (J^{n + 1}M + J^nM \cap f_1M).
+\end{align*}
+Thus, in order to prove the lemma it suffices to show that
+$J^{n + 1}M + J^nM \cap f_1M = J^{n + 1}M + f_1J^{n - 1}M$
+because that will show that
+$\bigoplus_{n \geq 0}
+\overline{J}^n\overline{M}/\overline{J}^{n + 1}\overline{M}$
+is the quotient of
$\bigoplus_{n \geq 0} J^nM/J^{n + 1}M \cong M/JM[X_1, \ldots, X_c]$
+by $X_1$. Actually, we have $J^nM \cap f_1M = f_1J^{n - 1}M$.
+Namely, if $m \not \in J^{n - 1}M$, then $f_1m \not \in J^nM$
+because $\bigoplus J^nM/J^{n + 1}M$ is the polynomial algebra
$M/JM[X_1, \ldots, X_c]$ by assumption.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-regular-regular}
+Let $(R, \mathfrak m)$ be a local Noetherian ring.
+Let $M$ be a nonzero finite $R$-module.
+Let $f_1, \ldots, f_c \in \mathfrak m$ be an $M$-quasi-regular sequence.
+Then $f_1, \ldots, f_c$ is an $M$-regular sequence.
+\end{lemma}
+
+\begin{proof}
+Set $J = (f_1, \ldots, f_c)$.
+Let us show that $f_1$ is a nonzerodivisor on $M$.
+Suppose $x \in M$ is not zero.
+By Krull's intersection theorem there exists an integer $r$
+such that $x \in J^rM$ but $x \not \in J^{r + 1}M$, see
+Lemma \ref{lemma-intersect-powers-ideal-module-zero}.
+Then $f_1 x \in J^{r + 1}M$ is an element whose class in
+$J^{r + 1}M/J^{r + 2}M$ is nonzero by the assumed structure of
+$\bigoplus J^nM/J^{n + 1}M$. Whence $f_1x \not = 0$.
+
+\medskip\noindent
+Now we can finish the proof by induction on $c$ using
+Lemma \ref{lemma-truncate-quasi-regular}.
+\end{proof}
+
+\begin{remark}[Koszul regular sequences]
+\label{remark-koszul-regular}
+In the paper \cite{Kabele} the author introduces two more
+regularity conditions for sequences $x_1, \ldots, x_r$ of elements
+of a ring $R$. Namely, we say the sequence is {\it Koszul-regular}
+if $H_i(K_{\bullet}(R, x_{\bullet})) = 0$ for $i \geq 1$ where
+$K_{\bullet}(R, x_{\bullet})$ is the Koszul complex. The sequence is
+called {\it $H_1$-regular} if $H_1(K_{\bullet}(R, x_{\bullet})) = 0$.
+One has the implications regular $\Rightarrow$ Koszul-regular $\Rightarrow$
+$H_1$-regular $\Rightarrow$ quasi-regular. By examples the author shows that
+these implications cannot be reversed in general even if $R$ is a
+(non-Noetherian) local ring and the sequence generates the maximal ideal of
+$R$. We introduce these notions in more detail in
+More on Algebra, Section \ref{more-algebra-section-koszul-regular}.
+\end{remark}
+
+\begin{remark}
+\label{remark-join-quasi-regular-sequences}
+Let $k$ be a field. Consider the ring
+$$
+A = k[x, y, w, z_0, z_1, z_2, \ldots]/
+(y^2z_0 - wx, z_0 - yz_1, z_1 - yz_2, \ldots)
+$$
+In this ring $x$ is a nonzerodivisor and the image of $y$ in
+$A/xA$ gives a quasi-regular sequence. But it is not true that
+$x, y$ is a quasi-regular sequence in $A$ because $(x, y)/(x, y)^2$
+isn't free of rank two over $A/(x, y)$ due to the fact that
+$wx = 0$ in $(x, y)/(x, y)^2$ but $w$ isn't zero in $A/(x, y)$.
+Hence the analogue of
+Lemma \ref{lemma-join-regular-sequences}
+does not hold for quasi-regular sequences.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-quasi-regular-on-quotient}
+Let $R$ be a ring. Let $J = (f_1, \ldots, f_r)$ be an ideal of $R$.
+Let $M$ be an $R$-module. Set $\overline{R} = R/\bigcap_{n \geq 0} J^n$,
+$\overline{M} = M/\bigcap_{n \geq 0} J^nM$, and denote
+$\overline{f}_i$ the image of $f_i$ in $\overline{R}$.
+Then $f_1, \ldots, f_r$ is $M$-quasi-regular if and only if
+$\overline{f}_1, \ldots, \overline{f}_r$ is $\overline{M}$-quasi-regular.
+\end{lemma}
+
+\begin{proof}
+This is true because
+$J^nM/J^{n + 1}M \cong
+\overline{J}^n\overline{M}/\overline{J}^{n + 1}\overline{M}$.
+\end{proof}
+
+
+
+
+
+\section{Blow up algebras}
+\label{section-blow-up}
+
+\noindent
+In this section we make some elementary observations about blowing up.
+
+\begin{definition}
+\label{definition-blow-up}
+Let $R$ be a ring.
+Let $I \subset R$ be an ideal.
+\begin{enumerate}
+\item The {\it blowup algebra}, or the {\it Rees algebra}, associated to
+the pair $(R, I)$ is the graded $R$-algebra
+$$
+\text{Bl}_I(R) =
+\bigoplus\nolimits_{n \geq 0} I^n =
+R \oplus I \oplus I^2 \oplus \ldots
+$$
+where the summand $I^n$ is placed in degree $n$.
+\item Let $a \in I$ be an element. Denote $a^{(1)}$ the element $a$
+seen as an element of degree $1$ in the Rees algebra. Then the
+{\it affine blowup algebra} $R[\frac{I}{a}]$ is the algebra
+$(\text{Bl}_I(R))_{(a^{(1)})}$ constructed in Section \ref{section-proj}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In other words, an element of $R[\frac{I}{a}]$ is represented by
+an expression of the form $x/a^n$ with $x \in I^n$. Two representatives
+$x/a^n$ and $y/a^m$ define the same element if and only if
+$a^k(a^mx - a^ny) = 0$ for some $k \geq 0$.
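
\noindent
For example, take $R = k[x, y]$, $I = (x, y)$, and $a = x$. Then every
element of $R[\frac{I}{a}]$ is of the form $z/x^n$ with $z \in (x, y)^n$,
and one checks that $R[\frac{I}{a}]$ is the polynomial ring $k[x, y/x]$;
this is the case $n = 2$ of
Example \ref{example-affine-blowup-algebra-polynomial} below.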
+
+\begin{lemma}
+\label{lemma-affine-blowup}
+Let $R$ be a ring, $I \subset R$ an ideal, and $a \in I$.
+Let $R' = R[\frac{I}{a}]$ be the affine blowup algebra. Then
+\begin{enumerate}
+\item the image of $a$ in $R'$ is a nonzerodivisor,
+\item $IR' = aR'$, and
+\item $(R')_a = R_a$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Immediate from the description of $R[\frac{I}{a}]$ above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowup-base-change}
+Let $R \to S$ be a ring map. Let $I \subset R$ be an ideal
+and $a \in I$. Set $J = IS$ and let $b \in J$ be the image of $a$.
+Then $S[\frac{J}{b}]$ is the quotient of $S \otimes_R R[\frac{I}{a}]$
+by the ideal of elements annihilated by some power of $b$.
+\end{lemma}
+
+\begin{proof}
+Let $S'$ be the quotient of $S \otimes_R R[\frac{I}{a}]$ by its
+$b$-power torsion elements. The ring map
+$$
+S \otimes_R R[\textstyle{\frac{I}{a}}]
+\longrightarrow
+S[\textstyle{\frac{J}{b}}]
+$$
is surjective and annihilates the $b$-power torsion as $b$ is a nonzerodivisor
+in $S[\frac{J}{b}]$. Hence we obtain a surjective map $S' \to S[\frac{J}{b}]$.
+To see that the kernel is trivial, we construct an inverse map. Namely, let
+$z = y/b^n$ be an element of $S[\frac{J}{b}]$, i.e., $y \in J^n$.
+Write $y = \sum x_is_i$ with $x_i \in I^n$ and $s_i \in S$.
+We map $z$ to the class of $\sum s_i \otimes x_i/a^n$ in
+$S'$. This is well defined because an element of the kernel of the map
+$S \otimes_R I^n \to J^n$ is annihilated by $a^n$, hence maps to zero in $S'$.
+\end{proof}
+
+\begin{example}
+\label{example-rees-algebra-polynomial}
+Let $R$ be a ring. Let $P = R[t_1, \ldots, t_n]$ be the polynomial algebra.
+Let $I = (t_1, \ldots, t_n) \subset P$. With notation as in
+Definition \ref{definition-blow-up} there is an isomorphism
+$$
+P[T_1, \ldots, T_n]/(t_iT_j - t_jT_i) \longrightarrow \text{Bl}_I(P)
+$$
+sending $T_i$ to $t_i^{(1)}$. We leave it to the reader to show that
+this map is well defined. Since $I$ is generated by $t_1, \ldots, t_n$
+we see that our map is surjective. To see that our map is injective
+one has to show: for each $e \geq 1$ the $P$-module $I^e$ is generated
by the monomials $t^E = t_1^{e_1} \ldots t_n^{e_n}$ for multiindices
+$E = (e_1, \ldots, e_n)$ of degree $|E| = e$ subject only to the
+relations $t_i t^E = t_j t^{E'}$ when $|E| = |E'| = e$ and
+$e_a + \delta_{a i} = e'_a + \delta_{a j},\ a = 1, \ldots, n$
+(Kronecker delta). We omit the details.
+\end{example}
+
+\begin{example}
+\label{example-affine-blowup-algebra-polynomial}
+Let $R$ be a ring. Let $P = R[t_1, \ldots, t_n]$ be the polynomial algebra.
+Let $I = (t_1, \ldots, t_n) \subset P$. Let $a = t_1$. With notation as in
+Definition \ref{definition-blow-up} there is an isomorphism
+$$
+P[x_2, \ldots, x_n]/(t_1x_2 - t_2, \ldots, t_1x_n - t_n)
+\longrightarrow
+\textstyle{P[\frac{I}{a}] = P[\frac{I}{t_1}]}
+$$
+sending $x_i$ to $t_i/t_1$. We leave it to the reader to show that
+this map is well defined. Since $I$ is generated by $t_1, \ldots, t_n$
+we see that our map is surjective. To see that our map is injective,
+the reader can argue that the source and target of our map are
+$t_1$-torsion free and that the map is an isomorphism after inverting
+$t_1$, see Lemma \ref{lemma-affine-blowup}. Alternatively, the reader
+can use the description of the Rees algebra in
+Example \ref{example-rees-algebra-polynomial}.
+We omit the details.
+\end{example}
+
+\begin{lemma}
+\label{lemma-affine-blowup-quotient-description}
+Let $R$ be a ring. Let $I = (a_1, \ldots, a_n)$ be an ideal of $R$.
+Let $a = a_1$. Then there is a surjection
+$$
+R[x_2, \ldots, x_n]/(a x_2 - a_2, \ldots, a x_n - a_n)
+\longrightarrow
+\textstyle{R[\frac{I}{a}]}
+$$
+whose kernel is the $a$-power torsion in the source.
+\end{lemma}
+
+\begin{proof}
+Consider the ring map $P = \mathbf{Z}[t_1, \ldots, t_n] \to R$
+sending $t_i$ to $a_i$. Set $J = (t_1, \ldots, t_n)$.
+By Example \ref{example-affine-blowup-algebra-polynomial} we have
+$P[\frac{J}{t_1}] =
+P[x_2, \ldots, x_n]/(t_1 x_2 - t_2, \ldots, t_1 x_n - t_n)$.
Apply Lemma \ref{lemma-blowup-base-change} to the map $P \to R$ to conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowup-in-principal}
+Let $R$ be a ring, $I \subset R$ an ideal, and $a \in I$.
+Set $R' = R[\frac{I}{a}]$. If $f \in R$ is such that $V(f) = V(I)$,
+then $f$ maps to a nonzerodivisor in $R'$ and $R'_f = R'_a = R_a$.
+\end{lemma}
+
+\begin{proof}
+We will use the results of Lemma \ref{lemma-affine-blowup}
+without further mention.
+The assumption $V(f) = V(I)$ implies $V(fR') = V(IR') = V(aR')$.
+Hence $a^n = fb$ and $f^m = ac$ for some $b, c \in R'$.
+The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowup-add-principal}
+Let $R$ be a ring, $I \subset R$ an ideal, $a \in I$, and $f \in R$.
+Set $R' = R[\frac{I}{a}]$ and $R'' = R[\frac{fI}{fa}]$. Then
+there is a surjective $R$-algebra map $R' \to R''$ whose kernel
+is the set of $f$-power torsion elements of $R'$.
+\end{lemma}
+
+\begin{proof}
+The map is given by sending $x/a^n$ for $x \in I^n$ to $f^nx/(fa)^n$.
+It is straightforward to check this map is well defined and surjective.
Since $af$ is a nonzerodivisor in $R''$
(Lemma \ref{lemma-affine-blowup}) we see that the $f$-power
torsion elements are mapped to zero. Conversely, if $x \in R'$
and $f^n x \not = 0$ for all $n > 0$, then $(af)^n x \not = 0$
for all $n$ as $a$ is a nonzerodivisor in $R'$. It follows
+that the image of $x$ in $R''$ is not zero by the description of
+$R''$ following Definition \ref{definition-blow-up}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowup-reduced}
+\begin{slogan}
+Being reduced is invariant under blowup
+\end{slogan}
+If $R$ is reduced then every (affine) blowup algebra of $R$ is reduced.
+\end{lemma}
+
+\begin{proof}
+Let $I \subset R$ be an ideal and $a \in I$. Suppose $x/a^n$ with
+$x \in I^n$ is a nilpotent element of $R[\frac{I}{a}]$. Then
$(x/a^n)^m = 0$ for some $m > 0$. Hence $a^N x^m = 0$ in $R$ for some $N \geq 0$.
+After increasing $N$ if necessary we may assume $N = me$ for some
+$e \geq 0$. Then $(a^e x)^m = 0$ and since $R$ is reduced we find
+$a^e x = 0$. This means that $x/a^n = 0$ in $R[\frac{I}{a}]$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowup-domain}
+Let $R$ be a domain, $I \subset R$ an ideal, and $a \in I$ a nonzero
+element. Then the affine blowup algebra $R[\frac{I}{a}]$ is a domain.
+\end{lemma}
+
+\begin{proof}
+Suppose $x/a^n$, $y/a^m$ with $x \in I^n$, $y \in I^m$
+are elements of $R[\frac{I}{a}]$ whose product is zero.
Then $a^N x y = 0$ in $R$ for some $N \geq 0$. Since $R$ is a domain
and $a \not = 0$ we conclude that $x = 0$ or $y = 0$, i.e., that
$x/a^n$ or $y/a^m$ is zero in $R[\frac{I}{a}]$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowup-dominant}
+Let $R$ be a ring. Let $I \subset R$ be an ideal. Let $a \in I$.
+If $a$ is not contained in any minimal prime of $R$, then
+$\Spec(R[\frac{I}{a}]) \to \Spec(R)$ has dense image.
+\end{lemma}
+
+\begin{proof}
+If $a^k x = 0$ for $x \in R$, then $x$ is contained in all the
+minimal primes of $R$ and hence nilpotent, see
+Lemma \ref{lemma-Zariski-topology}.
+Thus the kernel of $R \to R[\frac{I}{a}]$ consists of nilpotent
+elements. Hence the result follows from
+Lemma \ref{lemma-image-dense-generic-points}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-valuation-ring-colimit-affine-blowups}
+Let $(R, \mathfrak m)$ be a local domain with fraction field $K$.
+Let $R \subset A \subset K$ be a valuation ring which dominates $R$.
+Then
+$$
+A = \colim R[\textstyle{\frac{I}{a}}]
+$$
+is a directed colimit of affine blowups $R \to R[\frac{I}{a}]$ with
+the following properties
+\begin{enumerate}
+\item $a \in I \subset \mathfrak m$,
+\item $I$ is finitely generated, and
+\item the fibre ring of $R \to R[\frac{I}{a}]$ at $\mathfrak m$
+is not zero.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Any affine blowup algebra $R[\frac{I}{a}]$ with $a$ nonzero is a domain
contained in $K$, see
Lemma \ref{lemma-blowup-domain}. The lemma simply says that $A$ is the
directed union of the ones where $a$ and $I$ have properties (1), (2), and (3).
+If $R[\frac{I}{a}] \subset A$ and $R[\frac{J}{b}] \subset A$, then
+we have
+$$
+R[\textstyle{\frac{I}{a}}] \cup R[\textstyle{\frac{J}{b}}] \subset
+R[\textstyle{\frac{IJ}{ab}}] \subset A
+$$
+The first inclusion holds because $x/a^n = b^nx/(ab)^n$ and the second one
+because if $z \in (IJ)^n$, then $z = \sum x_iy_i$ with $x_i \in I^n$
+and $y_i \in J^n$ and hence $z/(ab)^n = \sum (x_i/a^n)(y_i/b^n)$
+is contained in $A$.
+
+\medskip\noindent
+Consider a finite subset $E \subset A$. Say $E = \{e_1, \ldots, e_n\}$.
+Choose a nonzero $a \in R$ such that we can write $e_i = f_i/a$ for
+all $i = 1, \ldots, n$. Set $I = (f_1, \ldots, f_n, a)$.
+We claim that $R[\frac{I}{a}] \subset A$. Indeed, an element $z/a^n$
+of $R[\frac{I}{a}]$ with $z \in I^n = (f_1, \ldots, f_n, a)^n$ can be
+written as a polynomial in the elements $e_i$ with coefficients in $R$,
+hence lies in $A$. The lemma follows immediately from this observation.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Ext groups}
+\label{section-ext}
+
+\noindent
+In this section we do a tiny bit of homological algebra,
+in order to establish some fundamental properties of
+depth over Noetherian local rings.
+
+\begin{lemma}
+\label{lemma-resolution-by-finite-free}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+\begin{enumerate}
+\item There exists an exact complex
+$$
+\ldots \to F_2 \to F_1 \to F_0 \to M \to 0
+$$
+with each $F_i$ a free $R$-module.
+\item If $R$ is Noetherian and $M$ finite over $R$, then we
+can choose the complex such that $F_i$ is finite free.
+In other words, we can find an exact complex
+$$
+\ldots \to R^{\oplus n_2} \to R^{\oplus n_1} \to R^{\oplus n_0} \to M \to 0.
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We explain only the Noetherian case.
+As a first step choose a surjection $R^{n_0} \to M$, which is possible
+as $M$ is a finite $R$-module.
+Having constructed an exact complex of length
+$e$ we choose a surjection $R^{n_{e + 1}} \to
+\Ker(R^{n_e} \to R^{n_{e-1}})$, which is possible
+because $R$ is Noetherian and hence this kernel is a finite $R$-module.
+\end{proof}
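+
+\medskip\noindent
+As a simple illustration (not part of the original text): for
+$R = \mathbf{Z}$ and $M = \mathbf{Z}/n$ the complex
+$$
+0 \to \mathbf{Z} \xrightarrow{n} \mathbf{Z} \to \mathbf{Z}/n \to 0
+$$
+is exact with finite free terms, i.e., a resolution of $M$ by finite free
+$\mathbf{Z}$-modules which happens to terminate. In general the process in
+the proof need not terminate and the resolution may be infinite.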
+
+\begin{definition}
+\label{definition-finite-free-resolution}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+\begin{enumerate}
+\item A (left) {\it resolution} $F_\bullet \to M$ of $M$ is an exact complex
+$$
+\ldots \to F_2 \to F_1 \to F_0 \to M \to 0
+$$
+of $R$-modules.
+\item A {\it resolution of $M$ by free $R$-modules} is a resolution
+$F_\bullet \to M$ where each $F_i$ is a free $R$-module.
+\item A {\it resolution of $M$ by finite free $R$-modules} is a resolution
+$F_\bullet \to M$ where each $F_i$ is a finite free $R$-module.
+\end{enumerate}
+\end{definition}
+
+\noindent
+We often use the notation $F_{\bullet}$ to denote a complex
+of $R$-modules
+$$
+\ldots \to F_i \to F_{i-1} \to \ldots
+$$
+In this case we often use $d_i$ or $d_{F, i}$ to denote the map
+$F_i \to F_{i-1}$. In this section we are always going to
+assume that $F_0$ is the last nonzero term in the complex.
+The {\it $i$th homology group of the complex} $F_{\bullet}$
+is the group $H_i = \Ker(d_{F, i})/\Im(d_{F, i + 1})$.
+A {\it map of complexes $\alpha : F_{\bullet} \to G_{\bullet}$}
+is given by maps $\alpha_i : F_i \to G_i$ such that
+$\alpha_{i-1} \circ d_{F, i} = d_{G, i-1} \circ \alpha_i$.
+Such a map induces a map on homology $H_i(\alpha) :
+H_i(F_{\bullet}) \to H_i(G_{\bullet})$. If $\alpha, \beta
+: F_{\bullet} \to G_{\bullet}$ are maps of complexes, then
+a {\it homotopy} between $\alpha$ and $\beta$ is given by
+a collection of maps $h_i : F_i \to G_{i + 1}$ such that
+$\alpha_i - \beta_i = d_{G, i + 1} \circ h_i +
+h_{i-1} \circ d_{F, i}$.
+Two maps $\alpha, \beta : F_{\bullet} \to G_{\bullet}$ are
+said to be {\it homotopic} if a homotopy between $\alpha$
+and $\beta$ exists.
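As a toy numerical illustration of these definitions (not part of the original text; the complex and the maps below are ad hoc choices), take $F_\bullet = G_\bullet$ to be the two-term complex $\mathbf{Z} \xrightarrow{2} \mathbf{Z}$ and check the homotopy identity for the chain maps given by multiplication by $1$ and by $3$:

```python
def check_homotopy():
    """Check the homotopy identity on the two-term complex Z --2--> Z.

    alpha = (1, 1) and beta = (3, 3) are chain maps which differ by the
    homotopy h_0 = -1 (with h_{-1} = 0 and no degree-2 term):
        alpha_i - beta_i = d_{i+1} h_i + h_{i-1} d_i.
    Consequently they induce the same map on H_0 = coker(2) = Z/2.
    """
    d = 2                   # the differential F_1 -> F_0
    alpha1, alpha0 = 1, 1   # chain map alpha in degrees 1 and 0
    beta1, beta0 = 3, 3     # chain map beta in degrees 1 and 0
    h0 = -1                 # homotopy F_0 -> F_1

    # chain map condition: alpha_0 d = d alpha_1 (same for beta)
    assert alpha0 * d == d * alpha1 and beta0 * d == d * beta1
    # homotopy identity in degree 0 (term d h_0) and degree 1 (term h_0 d)
    assert alpha0 - beta0 == d * h0
    assert alpha1 - beta1 == h0 * d
    # induced maps on H_0 = Z/2 agree
    return all((alpha0 * x) % 2 == (beta0 * x) % 2 for x in range(2))

print(check_homotopy())  # True
```

The same arithmetic, read in cohomological indexing, illustrates the dual conventions for $F^\bullet$ introduced below.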
+
+\medskip\noindent
+We will use a very similar notation regarding complexes
+of the form $F^{\bullet}$ which look like
+$$
+\ldots \to F^i \xrightarrow{d^i} F^{i + 1} \to \ldots
+$$
+There are maps of complexes, homotopies, etc.
+In this case we set $H^i(F^{\bullet}) =
+\Ker(d^i)/\Im(d^{i - 1})$ and we call it
+the {\it $i$th cohomology group}.
+
+\begin{lemma}
+\label{lemma-homotopic-equal-homology}
+Any two homotopic maps of complexes induce the same maps on
+(co)homology groups.
+\end{lemma}
+
+\begin{proof}
+Say $\alpha_i - \beta_i = d_{G, i + 1} \circ h_i + h_{i-1} \circ d_{F, i}$.
+If $x \in \Ker(d_{F, i})$, then
+$\alpha_i(x) - \beta_i(x) = d_{G, i + 1}(h_i(x))$ is a boundary, hence
+$\alpha$ and $\beta$ induce the same map on $H_i$.
+The cohomological case is proved in the same way.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-resolutions}
+Let $R$ be a ring. Let $M \to N$ be a map of $R$-modules.
+Let $N_\bullet \to N$ be an arbitrary resolution.
+Let
+$$
+\ldots \to F_2 \to F_1 \to F_0 \to M
+$$
+be a complex of $R$-modules where each $F_i$ is a free $R$-module. Then
+\begin{enumerate}
+\item there exists a map of complexes $F_\bullet \to N_\bullet$ such that
+$$
+\xymatrix{
+F_0 \ar[r] \ar[d] & M \ar[d] \\
+N_0 \ar[r] & N
+}
+$$
+is commutative, and
+\item any two maps $\alpha, \beta : F_\bullet \to N_\bullet$ as in (1)
+are homotopic.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Because $F_0$ is free we can find a map $F_0 \to N_0$
+lifting the map $F_0 \to M \to N$. We obtain an induced
+map $F_1 \to F_0 \to N_0$ which ends up in the image
+of $N_1 \to N_0$. Since $F_1$ is free we may lift this
+to a map $F_1 \to N_1$. This in turn induces a map
+$F_2 \to F_1 \to N_1$ which maps to zero into
+$N_0$. Since $N_\bullet$ is exact we see that
+the image of this map is contained in the image
+of $N_2 \to N_1$. Hence we may lift to get a map
+$F_2 \to N_2$. Repeat.
+
+\medskip\noindent
+Proof of (2). To show that $\alpha, \beta$ are homotopic it suffices
+to show the difference $\gamma = \alpha - \beta$ is homotopic
+to zero. Note that the image of $\gamma_0 : F_0 \to N_0$
+is contained in the image of $N_1 \to N_0$. Hence we may lift
+$\gamma_0$ to a map $h_0 : F_0 \to N_1$. Consider the map
+$\gamma_1' = \gamma_1 - h_0 \circ d_{F, 1}$. By our choice of $h_0$
+we see that the image of $\gamma_1'$ is contained in
+the kernel of $N_1 \to N_0$. Since $N_\bullet$ is exact
+we may lift $\gamma_1'$ to a map $h_1 : F_1 \to N_2$.
+At this point we have $\gamma_1 = h_0 \circ d_{F, 1}
++ d_{N, 2} \circ h_1$. Repeat.
+\end{proof}
+
+\noindent
+At this point we are ready to define the groups
+$\Ext^i_R(M, N)$. Namely, choose a resolution
+$F_{\bullet}$ of $M$ by free $R$-modules, see Lemma
+\ref{lemma-resolution-by-finite-free}. Consider
+the (cohomological) complex
+$$
+\Hom_R(F_\bullet, N) :
+\Hom_R(F_0, N) \to
+\Hom_R(F_1, N) \to
+\Hom_R(F_2, N) \to \ldots
+$$
+We define $\Ext^i_R(M, N)$ for $i \geq 0$ to be the $i$th
+cohomology group of this complex\footnote{At this point
+it would perhaps be more appropriate to say ``an'' instead
+of ``the'' Ext-group.}. For $i < 0$ we set $\Ext^i_R(M, N) = 0$.
+Before we continue we point out that
+$$
+\Ext^0_R(M, N) = \Ker(\Hom_R(F_0, N) \to \Hom_R(F_1, N)) =
+\Hom_R(M, N)
+$$
+because we can apply part (1) of Lemma \ref{lemma-hom-exact} to
+the exact sequence $F_1 \to F_0 \to M \to 0$.
+The following lemma explains
+in what sense this is well defined.
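As a sanity check of the definition (a sketch not in the original text, worked over $R = \mathbf{Z}$), one can compute these cohomology groups by brute force for finite cyclic modules using the resolution $0 \to \mathbf{Z} \xrightarrow{a} \mathbf{Z} \to \mathbf{Z}/a \to 0$:

```python
from math import gcd

def ext_sizes(a, b):
    """Orders of Ext^0 and Ext^1 of Z/a with values in Z/b, over Z.

    Applying Hom_Z(-, Z/b) to the free resolution
    0 -> Z --a--> Z -> Z/a -> 0 yields the complex Z/b --a--> Z/b,
    so Ext^0 is the kernel of multiplication by a on Z/b and
    Ext^1 is its cokernel; all higher Ext groups vanish.
    """
    kernel = [x for x in range(b) if (a * x) % b == 0]
    image = {(a * x) % b for x in range(b)}
    return len(kernel), b // len(image)  # |cokernel| = |Z/b| / |image|

# Ext^0 = Hom_Z(Z/a, Z/b) and Ext^1_Z(Z/a, Z/b) both have order gcd(a, b).
for a, b in [(2, 3), (4, 6), (6, 9)]:
    assert ext_sizes(a, b) == (gcd(a, b), gcd(a, b))
print(ext_sizes(4, 6))  # (2, 2)
```

This also illustrates the displayed identity $\Ext^0_R(M, N) = \Hom_R(M, N)$: the kernel computed above is exactly $\Hom_{\mathbf{Z}}(\mathbf{Z}/a, \mathbf{Z}/b)$.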
+
+\begin{lemma}
+\label{lemma-ext-welldefined}
+Let $R$ be a ring. Let $M_1, M_2, N$ be $R$-modules.
+Suppose that $F_{\bullet}$ is a free resolution of the module $M_1$,
+and $G_{\bullet}$ is a free resolution of the module $M_2$.
+Let $\varphi : M_1 \to M_2$ be a module map.
+Let $\alpha : F_{\bullet} \to G_{\bullet}$ be
+a map of complexes inducing $\varphi$ on
+$M_1 = \Coker(d_{F, 1}) \to M_2 = \Coker(d_{G, 1})$,
+see Lemma \ref{lemma-compare-resolutions}.
+Then the induced maps
+$$
+H^i(\alpha) :
+H^i(\Hom_R(G_{\bullet}, N))
+\longrightarrow
+H^i(\Hom_R(F_{\bullet}, N))
+$$
+are independent of the choice of $\alpha$.
+If $\varphi$ is an isomorphism, so are all the maps
+$H^i(\alpha)$. If $M_1 = M_2$, $F_\bullet = G_\bullet$, and
+$\varphi$ is the identity, so are all the maps $H^i(\alpha)$.
+\end{lemma}
+
+\begin{proof}
+Another map $\beta : F_{\bullet} \to G_{\bullet}$
+inducing $\varphi$ is homotopic to $\alpha$ by
+Lemma \ref{lemma-compare-resolutions}. Hence the
+induced maps $\Hom_R(G_\bullet, N) \to
+\Hom_R(F_\bullet, N)$ are homotopic.
+Hence the independence result follows from
+Lemma \ref{lemma-homotopic-equal-homology}.
+
+\medskip\noindent
+Suppose that $\varphi$ is an isomorphism.
+Let $\psi : M_2 \to M_1$ be an inverse.
+Choose a map $\beta : G_{\bullet} \to F_{\bullet}$
+inducing $\psi :
+M_2 = \Coker(d_{G, 1}) \to M_1 = \Coker(d_{F, 1})$,
+see Lemma \ref{lemma-compare-resolutions}.
+Now consider the map
+$H^i(\beta) \circ H^i(\alpha) =
+H^i(\alpha \circ \beta)$. By the above the
+map $H^i(\alpha \circ \beta)$ is the {\it same}
+as the map $H^i(\text{id}_{G_{\bullet}}) = \text{id}$.
+Similarly for the composition $H^i(\alpha) \circ H^i(\beta)$.
+Hence $H^i(\alpha)$ and $H^i(\beta)$ are inverses of each other.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-long-exact-seq-ext}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $0 \to N' \to N \to N'' \to 0$ be a
+short exact sequence. Then we get a long exact
+sequence
+$$
+\begin{matrix}
+0
+\to \Hom_R(M, N')
+\to \Hom_R(M, N)
+\to \Hom_R(M, N'')
+\\
+\phantom{0\ }
+\to \Ext^1_R(M, N')
+\to \Ext^1_R(M, N)
+\to \Ext^1_R(M, N'')
+\to \ldots
+\end{matrix}
+$$
+\end{lemma}
+
+\begin{proof}
+Pick a free resolution $F_{\bullet} \to M$.
+Since each of the $F_i$ is free we see that
+we get a short exact sequence of complexes
+$$
+0 \to
+\Hom_R(F_{\bullet}, N') \to
+\Hom_R(F_{\bullet}, N) \to
+\Hom_R(F_{\bullet}, N'') \to
+0
+$$
+Thus we get the long exact sequence from
+the snake lemma applied to this.
+\end{proof}
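+
+\medskip\noindent
+As an illustration (not part of the original text), take
+$R = \mathbf{Z}$, $M = \mathbf{Z}/2$, and the short exact sequence
+$0 \to \mathbf{Z} \xrightarrow{2} \mathbf{Z} \to \mathbf{Z}/2 \to 0$.
+We have $\Hom_{\mathbf{Z}}(\mathbf{Z}/2, \mathbf{Z}) = 0$, and
+multiplication by $2$ is zero on every group
+$\Ext^i_{\mathbf{Z}}(\mathbf{Z}/2, -)$ by
+Lemma \ref{lemma-annihilate-ext}. Hence the long exact sequence gives an
+isomorphism
+$$
+\Hom_{\mathbf{Z}}(\mathbf{Z}/2, \mathbf{Z}/2)
+\longrightarrow
+\Ext^1_{\mathbf{Z}}(\mathbf{Z}/2, \mathbf{Z}),
+$$
+recovering the classical computation
+$\Ext^1_{\mathbf{Z}}(\mathbf{Z}/2, \mathbf{Z}) \cong \mathbf{Z}/2$.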
+
+\begin{lemma}
+\label{lemma-reverse-long-exact-seq-ext}
+Let $R$ be a ring. Let $N$ be an $R$-module.
+Let $0 \to M' \to M \to M'' \to 0$ be a
+short exact sequence. Then we get a long exact
+sequence
+$$
+\begin{matrix}
+0
+\to \Hom_R(M'', N)
+\to \Hom_R(M, N)
+\to \Hom_R(M', N)
+\\
+\phantom{0\ }
+\to \Ext^1_R(M'', N)
+\to \Ext^1_R(M, N)
+\to \Ext^1_R(M', N)
+\to \ldots
+\end{matrix}
+$$
+\end{lemma}
+
+\begin{proof}
+Pick sets of generators $\{m'_{i'}\}_{i' \in I'}$ and
+$\{m''_{i''}\}_{i'' \in I''}$ of $M'$ and $M''$.
+For each $i'' \in I''$ choose a lift $\tilde m''_{i''} \in M$
+of the element $m''_{i''} \in M''$. Set $F' = \bigoplus_{i' \in I'} R$,
+$F'' = \bigoplus_{i'' \in I''} R$ and $F = F' \oplus F''$.
+Mapping the generators of these free modules to the corresponding
+chosen generators gives surjective $R$-module maps $F' \to M'$,
+$F'' \to M''$, and $F \to M$. We obtain a map of short exact sequences
+$$
+\begin{matrix}
+0 & \to & M' & \to & M & \to & M'' & \to & 0 \\
+& & \uparrow & & \uparrow & & \uparrow \\
+0 & \to & F' & \to & F & \to & F'' & \to & 0 \\
+\end{matrix}
+$$
+By the snake lemma we see that the sequence of kernels
+$0 \to K' \to K \to K'' \to 0$ is a short exact sequence of $R$-modules.
+Hence we can continue this process indefinitely. In other words
+we obtain a short exact sequence of resolutions fitting into the diagram
+$$
+\begin{matrix}
+0 & \to & M' & \to & M & \to & M'' & \to & 0 \\
+& & \uparrow & & \uparrow & & \uparrow \\
+0 & \to & F_\bullet' & \to & F_\bullet & \to & F_\bullet'' & \to & 0 \\
+\end{matrix}
+$$
+Because each of the sequences $0 \to F'_n \to F_n \to F''_n \to 0$
+is split exact (by construction) we obtain a short exact sequence of
+complexes
+$$
+0 \to
+\Hom_R(F''_{\bullet}, N) \to
+\Hom_R(F_{\bullet}, N) \to
+\Hom_R(F'_{\bullet}, N) \to
+0
+$$
+by applying the $\Hom_R(-, N)$ functor.
+Thus we get the long exact sequence from
+the snake lemma applied to this.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-annihilate-ext}
+Let $R$ be a ring. Let $M$, $N$ be $R$-modules.
+Any $x\in R$ such that either $xN = 0$, or $xM = 0$
+annihilates each of the modules $\Ext^i_R(M, N)$.
+\end{lemma}
+
+\begin{proof}
+Pick a free resolution $F_{\bullet}$ of $M$.
+Since $\Ext^i_R(M, N)$
+is defined as the cohomology of the complex
+$\Hom_R(F_{\bullet}, N)$ the lemma is
+clear when $xN = 0$. If $xM = 0$, then
+we see that multiplication by $x$ on $F_{\bullet}$
+lifts the zero map on $M$. Hence by Lemma
+\ref{lemma-ext-welldefined} we see that it
+induces the same map on Ext groups as the
+zero map.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ext-noetherian}
+Let $R$ be a Noetherian ring. Let $M$, $N$ be finite $R$-modules.
+Then $\Ext^i_R(M, N)$ is a finite $R$-module for all $i$.
+\end{lemma}
+
+\begin{proof}
+This holds because $\Ext^i_R(M, N)$ is computed as the
+cohomology groups of a complex $\Hom_R(F_\bullet, N)$
+with each $F_n$ a finite free $R$-module, see
+Lemma \ref{lemma-resolution-by-finite-free}.
+\end{proof}
+
+
+
+
+\section{Depth}
+\label{section-depth}
+
+\noindent
+Here is our definition.
+
+\begin{definition}
+\label{definition-depth}
+Let $R$ be a ring, and $I \subset R$ an ideal. Let $M$ be a finite $R$-module.
+The {\it $I$-depth} of $M$, denoted $\text{depth}_I(M)$, is defined as follows:
+\begin{enumerate}
+\item if $IM \not = M$, then $\text{depth}_I(M)$ is the supremum in
+$\{0, 1, 2, \ldots, \infty\}$ of the lengths of $M$-regular sequences in $I$,
+\item if $IM = M$ we set $\text{depth}_I(M) = \infty$.
+\end{enumerate}
+If $(R, \mathfrak m)$ is local we call $\text{depth}_{\mathfrak m}(M)$ simply
+the {\it depth} of $M$.
+\end{definition}
+
+\noindent
+Explanation. By Definition \ref{definition-regular-sequence} the empty
+sequence is not a regular sequence on the zero module, but for practical
+purposes it turns out to be convenient to set the depth of the $0$ module
+equal to $+\infty$. Note that if $I = R$, then $\text{depth}_I(M) = \infty$
+for all finite $R$-modules $M$. If $I$ is contained in the Jacobson radical
+of $R$ (e.g., if $R$ is local and $I \subset \mathfrak m_R$), then
+$M \not = 0 \Rightarrow IM \not = M$ by Nakayama's lemma.
+A module $M$ has $I$-depth $0$ if and only if $M$ is nonzero and $I$ does
+not contain a nonzerodivisor on $M$.
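+
+\medskip\noindent
+A basic example (not part of the original text): let $R = \mathbf{Z}_{(p)}$
+with maximal ideal $\mathfrak m = (p)$ and $M = R$. Then $p$ is a
+nonzerodivisor on $M$, and $M/pM \cong \mathbf{Z}/p$ is nonzero with every
+element of $\mathfrak m$ acting as zero on it. Hence $p$ is a maximal
+$M$-regular sequence and $\text{depth}(R) = 1$. On the other hand, the
+$R$-module $\mathbf{Z}/p$ has depth $0$, since every element of
+$\mathfrak m$ is a zerodivisor on it.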
+
+\medskip\noindent
+Example \ref{example-global-regular} shows depth does not
+behave well even if the ring is Noetherian, and Example
+\ref{example-local-regular} shows that it does not
+behave well if the ring is local but non-Noetherian.
+We will see depth behaves well if the ring is local Noetherian.
+
+\begin{lemma}
+\label{lemma-depth-weak-sequence}
+Let $R$ be a ring, $I \subset R$ an ideal, and $M$ a finite $R$-module.
+Then $\text{depth}_I(M)$ is equal to the supremum of the lengths of
+sequences $f_1, \ldots, f_r \in I$ such that $f_i$ is a nonzerodivisor
+on $M/(f_1, \ldots, f_{i - 1})M$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $IM = M$. Then Lemma \ref{lemma-NAK} shows there exists
+an $f \in I$ such that multiplication by $f$ on $M$ is $\text{id}_M$. Hence
+$f, 0, 0, 0, \ldots$ is an infinite sequence of successive
+nonzerodivisors and we see agreement holds in this case.
+If $IM \not = M$, then we see that a sequence as in the lemma
+is an $M$-regular sequence and we conclude that agreement holds as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bound-depth}
+Let $(R, \mathfrak m)$ be a Noetherian local ring.
+Let $M$ be a nonzero finite $R$-module.
+Then $\dim(\text{Supp}(M)) \geq \text{depth}(M)$.
+\end{lemma}
+
+\begin{proof}
+The proof is by induction on $\dim(\text{Supp}(M))$.
+If $\dim(\text{Supp}(M)) = 0$, then
+$\text{Supp}(M) = \{\mathfrak m\}$, whence $\text{Ass}(M) = \{\mathfrak m\}$
+(by Lemmas \ref{lemma-ass-support} and \ref{lemma-ass-zero}), and hence
+the depth of $M$ is zero for example by
+Lemma \ref{lemma-ideal-nonzerodivisor}.
+For the induction step we assume $\dim(\text{Supp}(M)) > 0$.
+Let $f_1, \ldots, f_d$ be a sequence of elements of $\mathfrak m$
+such that $f_i$ is a nonzerodivisor on $M/(f_1, \ldots, f_{i - 1})M$.
+According to Lemma \ref{lemma-depth-weak-sequence} it suffices to prove
+$\dim(\text{Supp}(M)) \geq d$. We may assume
+$d > 0$, since otherwise there is nothing to prove. By
+Lemma \ref{lemma-one-equation-module}
+we have $\dim(\text{Supp}(M/f_1M)) = \dim(\text{Supp}(M)) - 1$.
+By induction we conclude $\dim(\text{Supp}(M/f_1M)) \geq d - 1$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-finite-noetherian}
+Let $R$ be a Noetherian ring, $I \subset R$ an ideal, and $M$ a
+finite nonzero $R$-module such that $IM \not = M$. Then
+$\text{depth}_I(M) < \infty$.
+\end{lemma}
+
+\begin{proof}
+Since $M/IM$ is nonzero we can choose $\mathfrak p \in \text{Supp}(M/IM)$
+by Lemma \ref{lemma-support-zero}. Then $(M/IM)_\mathfrak p \not = 0$
+which implies $I \subset \mathfrak p$ and moreover implies
+$M_\mathfrak p \not = IM_\mathfrak p$ as localization is exact.
+Let $f_1, \ldots, f_r \in I$ be an $M$-regular sequence.
+Then $M_\mathfrak p/(f_1, \ldots, f_r)M_\mathfrak p$ is
+nonzero as $(f_1, \ldots, f_r) \subset I$. As localization is
+flat we see that the images of $f_1, \ldots, f_r$ form an
+$M_\mathfrak p$-regular sequence in $I_\mathfrak p$. Since this
+works for every $M$-regular sequence in $I$ we conclude that
+$\text{depth}_I(M) \leq \text{depth}_{I_\mathfrak p}(M_\mathfrak p)$.
+The latter is $\leq \text{depth}(M_\mathfrak p)$ which is
+$< \infty$ by Lemma \ref{lemma-bound-depth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-ext}
+Let $R$ be a Noetherian local ring with maximal ideal $\mathfrak m$.
+Let $M$ be a nonzero finite $R$-module. Then $\text{depth}(M)$
+is equal to the smallest integer $i$ such that
+$\Ext^i_R(R/\mathfrak m, M)$ is nonzero.
+\end{lemma}
+
+\begin{proof}
+Let $\delta(M)$ denote the depth of $M$ and let $i(M)$ denote
+the smallest integer $i$ such that $\Ext^i_R(R/\mathfrak m, M)$
+is nonzero. We will see in a moment that $i(M) < \infty$.
+By Lemma \ref{lemma-ideal-nonzerodivisor} we have
+$\delta(M) = 0$ if and only if $i(M) = 0$, because
+$\mathfrak m \in \text{Ass}(M)$ exactly means
+that $i(M) = 0$. Hence if $\delta(M)$ or $i(M)$ is $> 0$, then we may
+choose $x \in \mathfrak m$ such that (a) $x$ is a nonzerodivisor
+on $M$, and (b) $\text{depth}(M/xM) = \delta(M) - 1$.
+Consider the long exact sequence
+of Ext-groups associated to the short exact sequence
+$0 \to M \xrightarrow{x} M \to M/xM \to 0$ by
+Lemma \ref{lemma-long-exact-seq-ext}, where $\kappa = R/\mathfrak m$:
+$$
+\begin{matrix}
+0
+\to \Hom_R(\kappa, M)
+\to \Hom_R(\kappa, M)
+\to \Hom_R(\kappa, M/xM)
+\\
+\phantom{0\ }
+\to \Ext^1_R(\kappa, M)
+\to \Ext^1_R(\kappa, M)
+\to \Ext^1_R(\kappa, M/xM)
+\to \ldots
+\end{matrix}
+$$
+Since $x \in \mathfrak m$ all the maps $\Ext^i_R(\kappa, M)
+\to \Ext^i_R(\kappa, M)$ are zero, see
+Lemma \ref{lemma-annihilate-ext}.
+Thus it is clear that $i(M/xM) = i(M) - 1$. Induction on
+$\delta(M)$ finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-in-ses}
+Let $R$ be a local Noetherian ring. Let $0 \to N' \to N \to N'' \to 0$
+be a short exact sequence of nonzero finite $R$-modules.
+\begin{enumerate}
+\item
+$\text{depth}(N) \geq \min\{\text{depth}(N'), \text{depth}(N'')\}$
+\item
+$\text{depth}(N'') \geq \min\{\text{depth}(N), \text{depth}(N') - 1\}$
+\item
+$\text{depth}(N') \geq \min\{\text{depth}(N), \text{depth}(N'') + 1\}$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Use the characterization of depth via the Ext groups
+$\Ext^i_R(\kappa, N)$ where $\kappa = R/\mathfrak m$, see
+Lemma \ref{lemma-depth-ext},
+and use the long exact cohomology sequence
+$$
+\begin{matrix}
+0
+\to \Hom_R(\kappa, N')
+\to \Hom_R(\kappa, N)
+\to \Hom_R(\kappa, N'')
+\\
+\phantom{0\ }
+\to \Ext^1_R(\kappa, N')
+\to \Ext^1_R(\kappa, N)
+\to \Ext^1_R(\kappa, N'')
+\to \ldots
+\end{matrix}
+$$
+from Lemma \ref{lemma-long-exact-seq-ext}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-drops-by-one}
+Let $R$ be a local Noetherian ring and $M$ a nonzero finite $R$-module.
+\begin{enumerate}
+\item If $x \in \mathfrak m$ is a nonzerodivisor on $M$, then
+$\text{depth}(M/xM) = \text{depth}(M) - 1$.
+\item Any $M$-regular sequence $x_1, \ldots, x_r$ can be extended to an
+$M$-regular sequence of length $\text{depth}(M)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (2) is a formal consequence of part (1). Let $x \in R$ be as in (1).
+By the short exact sequence $0 \to M \to M \to M/xM \to 0$
+and Lemma \ref{lemma-depth-in-ses} we see that the depth drops by at most 1.
+On the other hand, if $x_1, \ldots, x_r \in \mathfrak m$
+is a regular sequence for $M/xM$, then $x, x_1, \ldots, x_r$
+is a regular sequence for $M$. Hence we see that the depth drops by
+at least 1.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inherit-minimal-primes}
+Let $(R, \mathfrak m)$ be a local Noetherian ring and $M$ a finite $R$-module.
+Let $x \in \mathfrak m$, $\mathfrak p \in \text{Ass}(M)$, and $\mathfrak q$
+minimal over $\mathfrak p + (x)$. Then $\mathfrak q \in \text{Ass}(M/x^nM)$
+for some $n \geq 1$.
+\end{lemma}
+
+\begin{proof}
+Pick a submodule $N \subset M$ with $N \cong R/\mathfrak p$.
+By the Artin-Rees lemma (Lemma \ref{lemma-Artin-Rees})
+we can pick $n > 0$ such that $N \cap x^nM \subset xN$.
+Let $\overline{N} \subset M/x^nM$ be the image of $N \to M \to M/x^nM$.
+By Lemma \ref{lemma-ass} it suffices to show
+$\mathfrak q \in \text{Ass}(\overline{N})$.
+By our choice of $n$ there is a surjection
+$\overline{N} \to N/xN = R/(\mathfrak p + (x))$
+and hence $\mathfrak q$ is in the support of $\overline{N}$.
+Since $\overline{N}$ is annihilated by $x^n$ and $\mathfrak p$ we see that
+$\mathfrak q$ is minimal among the primes in the support of $\overline{N}$.
+Thus $\mathfrak q$ is an associated prime of $\overline{N}$ by
+Lemma \ref{lemma-ass-minimal-prime-support}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-dim-associated-primes}
+Let $(R, \mathfrak m)$ be a local Noetherian ring and $M$ a finite $R$-module.
+For $\mathfrak p \in \text{Ass}(M)$ we have
+$\dim(R/\mathfrak p) \geq \text{depth}(M)$.
+\end{lemma}
+
+\begin{proof}
+If $\mathfrak m \in \text{Ass}(M)$ then there is a nonzero element
+$x \in M$ which is annihilated by all elements of $\mathfrak m$.
+Thus $\text{depth}(M) = 0$. In particular the lemma holds in this case.
+
+\medskip\noindent
+If $\text{depth}(M) = 1$, then by the first paragraph
+we find that $\mathfrak m \not \in \text{Ass}(M)$.
+Hence $\dim(R/\mathfrak p) \geq 1$ for all $\mathfrak p \in \text{Ass}(M)$
+and the lemma is true in this case as well.
+
+\medskip\noindent
+We will prove the lemma in general by induction on $\text{depth}(M)$
+which we may and do assume to be $> 1$. Pick $x \in \mathfrak m$ which
+is a nonzerodivisor on $M$. Note $x \not \in \mathfrak p$
+(Lemma \ref{lemma-ass-zero-divisors}).
+By Lemma \ref{lemma-one-equation} we have
+$\dim(R/(\mathfrak p + (x))) = \dim(R/\mathfrak p) - 1$.
+Thus there exists a prime $\mathfrak q$ minimal over $\mathfrak p + (x)$ with
+$\dim(R/\mathfrak q) = \dim(R/\mathfrak p) - 1$ (small argument omitted;
+hint: the dimension of a Noetherian local ring $A$ is the maximum
+of the dimensions of $A/\mathfrak r$ taken over the minimal
+primes $\mathfrak r$ of $A$). Pick $n$ as in
+Lemma \ref{lemma-inherit-minimal-primes} so that
+$\mathfrak q$ is an associated prime of $M/x^nM$.
+We may apply induction hypothesis to $M/x^nM$ and $\mathfrak q$
+because $\text{depth}(M/x^nM) = \text{depth}(M) - 1$ by
+Lemma \ref{lemma-depth-drops-by-one}. We find
+$\dim(R/\mathfrak q) \geq \text{depth}(M/x^nM)$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-localization}
+Let $R$ be a local Noetherian ring and $M$ a finite $R$-module.
+For a prime ideal $\mathfrak p \subset R$ we have
+$\text{depth}(M_\mathfrak p) + \dim(R/\mathfrak p) \geq \text{depth}(M)$.
+\end{lemma}
+
+\begin{proof}
+If $M_\mathfrak p = 0$, then $\text{depth}(M_\mathfrak p) = \infty$ and
+the lemma holds.
+If $\text{depth}(M) \leq \dim(R/\mathfrak p)$, then the lemma is true.
+If $\text{depth}(M) > \dim(R/\mathfrak p)$, then $\mathfrak p$ is not
+contained in any associated prime $\mathfrak q$ of $M$ by
+Lemma \ref{lemma-depth-dim-associated-primes}.
+Hence we can find an $x \in \mathfrak p$ not contained in any
+associated prime of $M$ by Lemma \ref{lemma-silly} and
+Lemma \ref{lemma-finite-ass}. Then $x$ is a nonzerodivisor
+on $M$, see Lemma \ref{lemma-ass-zero-divisors}.
+Hence $\text{depth}(M/xM) = \text{depth}(M) - 1$ and
+$\text{depth}(M_\mathfrak p / x M_\mathfrak p) =
+\text{depth}(M_\mathfrak p) - 1$ provided $M_\mathfrak p$ is nonzero,
+see Lemma \ref{lemma-depth-drops-by-one}.
+Thus we conclude by induction on $\text{depth}(M)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-goes-down-finite}
+Let $(R, \mathfrak m)$ be a Noetherian local ring. Let $R \to S$
+be a finite ring map. Let $\mathfrak m_1, \ldots, \mathfrak m_n$
+be the maximal ideals of $S$. Let $N$ be a finite $S$-module.
+Then
+$$
+\min\nolimits_{i = 1, \ldots, n} \text{depth}(N_{\mathfrak m_i}) =
+\text{depth}_\mathfrak m(N)
+$$
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-integral-no-inclusion}, \ref{lemma-integral-going-up},
+and \ref{lemma-finite-finite-fibres} the maximal ideals of
+$S$ are exactly the primes of $S$ lying over $\mathfrak m$ and
+there are finitely many of them. Hence the statement of the lemma
+makes sense. We will prove the lemma by induction on
+$k = \min\nolimits_{i = 1, \ldots, n} \text{depth}(N_{\mathfrak m_i})$.
+If $k = 0$, then $\text{depth}(N_{\mathfrak m_i}) = 0$ for some $i$.
+By Lemma \ref{lemma-depth-ext} this means
+$\mathfrak m_i S_{\mathfrak m_i}$ is an associated prime
+of $N_{\mathfrak m_i}$ and hence $\mathfrak m_i$ is an
+associated prime of $N$ (Lemma \ref{lemma-localize-ass}).
+By Lemma \ref{lemma-ass-functorial-Noetherian} we see that
+$\mathfrak m$ is an associated prime of $N$ as an $R$-module.
+Whence $\text{depth}_\mathfrak m(N) = 0$. This proves the base case.
+If $k > 0$, then we see that $\mathfrak m_i \not \in \text{Ass}_S(N)$
+for all $i$.
+Hence $\mathfrak m \not \in \text{Ass}_R(N)$, again by
+Lemma \ref{lemma-ass-functorial-Noetherian}.
+Thus we can find $f \in \mathfrak m$ which is not a zerodivisor on
+$N$, see Lemma \ref{lemma-ideal-nonzerodivisor}. By
+Lemma \ref{lemma-depth-drops-by-one}
+all the depths drop exactly by $1$ when passing from $N$ to
+$N/fN$ and the induction hypothesis does the rest.
+\end{proof}
+
+
+
+
+\section{Functorialities for Ext}
+\label{section-functoriality-ext}
+
+\noindent
+In this section we briefly discuss the functoriality
+of $\Ext$ with respect to change of ring, etc.
+Here is a list of items to work out.
+\begin{enumerate}
+\item Given $R \to R'$, an $R$-module
+$M$ and an $R'$-module $N'$
+the $R$-module $\Ext^i_R(M, N')$
+has a natural $R'$-module structure. Moreover, there is a
+canonical $R'$-linear map $\Ext^i_{R'}(M \otimes_R R', N') \to
+\Ext^i_R(M, N')$.
+\item Given $R \to R'$ and $R$-modules $M$, $N$ there is a natural
+$R$-module map
+$\Ext^i_R(M, N) \to \Ext^i_R(M, N \otimes_R R')$.
+\end{enumerate}
+
+\begin{lemma}
+\label{lemma-flat-base-change-ext}
+Given a flat ring map $R \to R'$, an $R$-module $M$, and an
+$R'$-module $N'$ the natural map
+$$
+\Ext^i_{R'}(M \otimes_R R', N') \to \Ext^i_R(M, N')
+$$
+is an isomorphism for $i \geq 0$.
+\end{lemma}
+
+\begin{proof}
+Choose a free resolution $F_\bullet$ of $M$.
+Since $R \to R'$ is flat we see that $F_\bullet \otimes_R R'$ is
+a free resolution of $M \otimes_R R'$ over $R'$.
+The statement is that the map
+$$
+\Hom_{R'}(F_\bullet \otimes_R R', N') \to
+\Hom_R(F_\bullet, N')
+$$
+induces an isomorphism on cohomology groups, which is true because
+it is an isomorphism of complexes by
+Lemma \ref{lemma-adjoint-tensor-restrict}.
+\end{proof}
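+
+\medskip\noindent
+For instance (an observation not in the original text), for a prime
+$\mathfrak p \subset R$ the localization $R \to R_\mathfrak p$ is flat,
+so for any $R_\mathfrak p$-module $N'$ the lemma gives
+$$
+\Ext^i_{R_\mathfrak p}(M_\mathfrak p, N') \cong \Ext^i_R(M, N')
+$$
+for all $i \geq 0$.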
+
+
+
+
+
+
+\section{An application of Ext groups}
+\label{section-ext-application}
+
+\noindent
+Here it is.
+
+\begin{lemma}
+\label{lemma-split-injection-after-completion}
+Let $R$ be a Noetherian ring. Let $I \subset R$ be an ideal
+contained in the Jacobson radical of $R$.
+Let $N \to M$ be a homomorphism of finite $R$-modules.
+Suppose that there exists arbitrarily large $n$ such that
+$N/I^nN \to M/I^nM$ is a split injection.
+Then $N \to M$ is a split injection.
+\end{lemma}
+
+\begin{proof}
+Assume $\varphi : N \to M$ satisfies the assumptions of the lemma.
+Note that this implies that $\Ker(\varphi) \subset I^nN$
+for arbitrarily large $n$. Hence by
+Lemma \ref{lemma-intersection-powers-ideal-module} we see that $\varphi$
+is injective. Let $Q = M/N$ so that we have a short exact sequence
+$$
+0 \to N \to M \to Q \to 0.
+$$
+Let
+$$
+F_2 \xrightarrow{d_2} F_1 \xrightarrow{d_1} F_0 \to Q \to 0
+$$
+be a finite free resolution of $Q$. We can choose a map
+$\alpha : F_0 \to M$ lifting the map $F_0 \to Q$. This induces a map
+$\beta : F_1 \to N$ such that $\beta \circ d_2 = 0$. The extension
+above is split if and only if there exists a map $\gamma : F_0 \to N$
+such that $\beta = \gamma \circ d_1$. In other words, the class of
+$\beta$ in $\Ext^1_R(Q, N)$ is the obstruction to splitting
+the short exact sequence above.
+
+\medskip\noindent
+Suppose $n$ is a large integer such that $N/I^nN \to M/I^nM$ is a
+split injection. This implies
+$$
+0 \to N/I^nN \to M/I^nM \to Q/I^nQ \to 0
+$$
+is still short exact. Also, the sequence
+$$
+F_1/I^nF_1 \xrightarrow{d_1} F_0/I^nF_0 \to Q/I^nQ \to 0
+$$
+is still exact. Arguing as above we see that the map
+$\overline{\beta} : F_1/I^nF_1 \to N/I^nN$
+induced by $\beta$ is equal to $\overline{\gamma_n} \circ d_1$ for some
+map $\overline{\gamma_n} : F_0/I^nF_0 \to N/I^nN$.
+Since $F_0$ is free we can lift $\overline{\gamma_n}$ to a map
+$\gamma_n : F_0 \to N$ and then we see that
+$\beta - \gamma_n \circ d_1$ is a map from $F_1$ into $I^nN$.
+In other words we conclude that
+$$
+\beta \in
+\Im\Big(\Hom_R(F_0, N) \to \Hom_R(F_1, N)\Big) + I^n\Hom_R(F_1, N)
+$$
+for this $n$.
+
+\medskip\noindent
+Since we have this property for arbitrarily large $n$ by assumption
+we conclude that the image of $\beta$ in the cokernel of
+$\Hom_R(F_0, N) \to \Hom_R(F_1, N)$ is zero by
+Lemma \ref{lemma-intersection-powers-ideal-module}. Hence
+$\beta$ is in the image of the map $\Hom_R(F_0, N) \to \Hom_R(F_1, N)$ as
+desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Tor groups and flatness}
+\label{section-tor}
+
+\noindent
+In this section we use some of the homological algebra
+developed in the previous section to explain what
+Tor groups are. Namely, suppose that $R$ is a ring
+and that $M$, $N$ are two $R$-modules. Choose
+a resolution $F_\bullet$ of $M$ by free $R$-modules.
+See Lemma \ref{lemma-resolution-by-finite-free}.
+Consider the homological complex
+$$
+F_\bullet \otimes_R N
+:
+\ldots
+\to F_2 \otimes_R N
+\to F_1 \otimes_R N
+\to F_0 \otimes_R N
+$$
+We define $\text{Tor}^R_i(M, N)$ to be the $i$th homology
+group of this complex. The following lemma explains in
+what sense this is well defined.
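As a concrete check of the definition (a sketch not in the original text, worked over $R = \mathbf{Z}$), tensoring the resolution $0 \to \mathbf{Z} \xrightarrow{a} \mathbf{Z} \to \mathbf{Z}/a \to 0$ with $\mathbf{Z}/b$ gives the complex $\mathbf{Z}/b \xrightarrow{a} \mathbf{Z}/b$, whose homology can be counted directly:

```python
from math import gcd

def tor_sizes(a, b):
    """Orders of Tor_0 and Tor_1 of Z/a and Z/b over Z.

    Tensoring the free resolution 0 -> Z --a--> Z -> Z/a -> 0 with Z/b
    gives the complex Z/b --a--> Z/b, so Tor_0 is the cokernel and
    Tor_1 the kernel of multiplication by a on Z/b; higher Tor vanish.
    """
    image = {(a * x) % b for x in range(b)}
    kernel = [x for x in range(b) if (a * x) % b == 0]
    return b // len(image), len(kernel)

# Tor_0 = Z/a tensor Z/b and Tor_1^Z(Z/a, Z/b) both have order gcd(a, b).
for a, b in [(2, 4), (6, 10), (3, 5)]:
    assert tor_sizes(a, b) == (gcd(a, b), gcd(a, b))
print(tor_sizes(6, 10))  # (2, 2)
```

Note that $\text{Tor}_0$ recovers the tensor product $M \otimes_R N$, in the same way that $\Ext^0$ recovers $\Hom_R(M, N)$.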
+
+\begin{lemma}
+\label{lemma-tor-welldefined}
+Let $R$ be a ring. Let $M_1, M_2, N$ be $R$-modules.
+Suppose that $F_\bullet$ is a free resolution of
+the module $M_1$ and that $G_\bullet$ is a free
+resolution of the module $M_2$. Let $\varphi : M_1 \to M_2$
+be a module map. Let $\alpha : F_\bullet \to G_\bullet$
+be a map of complexes inducing $\varphi$ on
+$M_1 = \Coker(d_{F, 1}) \to M_2 = \Coker(d_{G, 1})$,
+see Lemma \ref{lemma-compare-resolutions}.
+Then the induced maps
+$$
+H_i(\alpha) :
+H_i(F_\bullet \otimes_R N)
+\longrightarrow
+H_i(G_\bullet \otimes_R N)
+$$
+are independent of the choice of $\alpha$. If $\varphi$
+is an isomorphism, so are all the maps $H_i(\alpha)$.
+If $M_1 = M_2$, $F_\bullet = G_\bullet$, and
+$\varphi$ is the identity, so are all the maps $H_i(\alpha)$.
+\end{lemma}
+
+\begin{proof}
+The proof of this lemma is identical to the proof of Lemma
+\ref{lemma-ext-welldefined}.
+\end{proof}
+
+\noindent
+Not only does this lemma imply that the Tor modules are well defined,
+but it also provides for the functoriality of the constructions
+$(M, N) \mapsto \text{Tor}_i^R(M, N)$ in the first variable. Of course
+the functoriality in the second variable is evident. We leave it to
+the reader to see that each of the $\text{Tor}_i^R$ is in fact
+a functor
+$$
+\text{Mod}_R \times \text{Mod}_R \to \text{Mod}_R.
+$$
+Here $\text{Mod}_R$ denotes the category of $R$-modules, and
+for the definition of the product category
+see Categories, Definition \ref{categories-definition-product-category}.
+Namely, given morphisms of $R$-modules $M_1 \to M_2$
+and $N_1 \to N_2$ we get a commutative diagram
+$$
+\xymatrix{
+\text{Tor}_i^R(M_1, N_1) \ar[r] \ar[d] &
+\text{Tor}_i^R(M_1, N_2) \ar[d] \\
+\text{Tor}_i^R(M_2, N_1) \ar[r] &
+\text{Tor}_i^R(M_2, N_2) \\
+}
+$$
+
+\begin{lemma}
+\label{lemma-long-exact-sequence-tor}
+Let $R$ be a ring and let $M$ be an $R$-module.
+Suppose that $0 \to N' \to N \to N'' \to 0$ is a short
+exact sequence of $R$-modules. There exists a long
+exact sequence
+$$
+\text{Tor}_1^R(M, N')
+\to \text{Tor}_1^R(M, N)
+\to \text{Tor}_1^R(M, N'')
+\to
+M \otimes_R N'
+\to M \otimes_R N
+\to M \otimes_R N''
+\to 0
+$$
+\end{lemma}
+
+\begin{proof}
+The proof of this is the same as the proof of
+Lemma \ref{lemma-long-exact-seq-ext}.
+\end{proof}
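+
+\noindent
+For example, applying the lemma to the short exact sequence
+$0 \to \mathbf{Z} \xrightarrow{n} \mathbf{Z} \to \mathbf{Z}/n \to 0$
+of $\mathbf{Z}$-modules, and using that
+$\text{Tor}_1^{\mathbf{Z}}(M, \mathbf{Z}) = 0$ as $\mathbf{Z}$ is free,
+we obtain an exact sequence
+$$
+0 \to \text{Tor}_1^{\mathbf{Z}}(M, \mathbf{Z}/n)
+\to M \xrightarrow{n} M \to M/nM \to 0
+$$
+In other words $\text{Tor}_1^{\mathbf{Z}}(M, \mathbf{Z}/n) = M[n]$ is the
+$n$-torsion submodule of $M$, which explains the name ``Tor''.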
+
+\noindent
+Consider a homological double complex of $R$-modules
+$$
+\xymatrix{
+\ldots \ar[r]^d &
+A_{2, 0} \ar[r]^d &
+A_{1, 0} \ar[r]^d &
+A_{0, 0} \\
+\ldots \ar[r]^d &
+A_{2, 1} \ar[r]^d \ar[u]^\delta &
+A_{1, 1} \ar[r]^d \ar[u]^\delta &
+A_{0, 1} \ar[u]^\delta \\
+\ldots \ar[r]^d &
+A_{2, 2} \ar[r]^d \ar[u]^\delta &
+A_{1, 2} \ar[r]^d \ar[u]^\delta &
+A_{0, 2} \ar[u]^\delta \\
+&
+\ldots \ar[u]^\delta &
+\ldots \ar[u]^\delta &
+\ldots \ar[u]^\delta \\
+}
+$$
+This means that $d_{i, j} : A_{i, j} \to A_{i-1, j}$
+and $\delta_{i, j} : A_{i, j} \to A_{i, j-1}$ have the following
+properties
+\begin{enumerate}
+\item Any composition of two $d_{i, j}$ is zero. In other words
+the rows of the double complex are complexes.
+\item Any composition of two $\delta_{i, j}$ is zero. In other words
+the columns of the double complex are complexes.
+\item For any pair $(i, j)$ we have $\delta_{i-1, j} \circ d_{i, j}
+= d_{i, j-1} \circ \delta_{i, j}$. In other words, all the squares
+commute.
+\end{enumerate}
+The correct thing to do is to associate a spectral sequence to
+any such double complex. However, for the moment we can get away with
+doing something slightly easier.
+
+\medskip\noindent
+Namely, for the purposes of this section only, given a double
+complex $(A_{\bullet, \bullet}, d, \delta)$ set
+$R(A)_j = \Coker(A_{1, j} \to A_{0, j})$ and
+$U(A)_i = \Coker(A_{i, 1} \to A_{i, 0})$. (The letters
+$R$ and $U$ are meant to suggest Right and Up.)
+We endow $R(A)_\bullet$ with the structure of a complex
+using the maps $\delta$. Similarly we endow $U(A)_\bullet$
+with the structure of a complex using the maps $d$.
+In other words we obtain the following huge commutative diagram
+$$
+\xymatrix{
+\ldots \ar[r]^d &
+U(A)_2 \ar[r]^d &
+U(A)_1 \ar[r]^d &
+U(A)_0 &
+\\
+\ldots \ar[r]^d &
+A_{2, 0} \ar[r]^d \ar[u] &
+A_{1, 0} \ar[r]^d \ar[u] &
+A_{0, 0} \ar[r] \ar[u] &
+R(A)_0 \\
+\ldots \ar[r]^d &
+A_{2, 1} \ar[r]^d \ar[u]^\delta &
+A_{1, 1} \ar[r]^d \ar[u]^\delta &
+A_{0, 1} \ar[r] \ar[u]^\delta &
+R(A)_1 \ar[u]^\delta \\
+\ldots \ar[r]^d &
+A_{2, 2} \ar[r]^d \ar[u]^\delta &
+A_{1, 2} \ar[r]^d \ar[u]^\delta &
+A_{0, 2} \ar[r] \ar[u]^\delta &
+R(A)_2 \ar[u]^\delta \\
+&
+\ldots \ar[u]^\delta &
+\ldots \ar[u]^\delta &
+\ldots \ar[u]^\delta &
+\ldots \ar[u]^\delta \\
+}
+$$
+(This is no longer a double complex of course.)
+It is clear what a morphism $\Phi : (A_{\bullet, \bullet}, d, \delta)
+\to (B_{\bullet, \bullet}, d, \delta)$ of double complexes
+is, and it is clear that this induces morphisms of complexes
+$R(\Phi) : R(A)_\bullet \to R(B)_\bullet$ and
+$U(\Phi) : U(A)_\bullet \to U(B)_\bullet$.
+
+\begin{lemma}
+\label{lemma-no-spectral-sequence}
+Let $(A_{\bullet, \bullet}, d, \delta)$ be a double complex such
+that
+\begin{enumerate}
+\item Each row $A_{\bullet, j}$ is a resolution of $R(A)_j$.
+\item Each column $A_{i, \bullet}$ is a resolution of $U(A)_i$.
+\end{enumerate}
+Then there are canonical isomorphisms
+$$
+H_i(R(A)_\bullet)
+\cong
+H_i(U(A)_\bullet).
+$$
+The isomorphisms are functorial with respect to morphisms
+of double complexes with the properties above.
+\end{lemma}
+
+\begin{proof}
+We will show that $H_i(R(A)_\bullet)$
+and $H_i(U(A)_\bullet)$ are canonically
+isomorphic to a third group. Namely
+$$
+\mathbf{H}_i(A) :=
+\frac{
+\{
+(a_{i, 0}, a_{i-1, 1}, \ldots, a_{0, i})
+\mid
+d(a_{i, 0}) = \delta(a_{i-1, 1}), \ldots,
+d(a_{1, i-1}) = \delta(a_{0, i})
+\}}
+{
+\{
+d(a_{i + 1, 0}) + \delta(a_{i, 1}),
+d(a_{i, 1}) + \delta(a_{i-1, 2}),
+\ldots,
+d(a_{1, i}) + \delta(a_{0, i + 1})
+\}
+}
+$$
+Here we use the notational convention that $a_{i, j}$ denotes
+an element of $A_{i, j}$. In other words, an element of $\mathbf{H}_i$
+is represented by a zig-zag, represented as follows for $i = 2$
+$$
+\xymatrix{
+a_{2, 0} \ar@{|->}[r] & d(a_{2, 0}) = \delta(a_{1, 1}) & \\
+& a_{1, 1} \ar@{|->}[u] \ar@{|->}[r] & d(a_{1, 1}) = \delta(a_{0, 2}) \\
+& & a_{0, 2} \ar@{|->}[u] \\
+}
+$$
+Naturally, we divide out by ``trivial'' zig-zags, namely the submodule
+generated by elements of the form $(0, \ldots, 0, \delta(a_{t + 1, i-t}),
+d(a_{t + 1, i-t}), 0, \ldots, 0)$
+where $a_{t + 1, i-t} \in A_{t + 1, i-t}$. Note that there are canonical
+homomorphisms
+$$
+\mathbf{H}_i(A) \to H_i(R(A)_\bullet), \quad
+(a_{i, 0}, a_{i-1, 1}, \ldots, a_{0, i}) \mapsto
+\text{class of image of }a_{0, i}
+$$
+and
+$$
+\mathbf{H}_i(A) \to H_i(U(A)_\bullet), \quad
+(a_{i, 0}, a_{i-1, 1}, \ldots, a_{0, i}) \mapsto
+\text{class of image of }a_{i, 0}
+$$
+
+\medskip\noindent
+First we show that these maps are surjective.
+Suppose that $\overline{r} \in H_i(R(A)_\bullet)$.
+Let $r \in R(A)_i$ be a cycle representing the
+class $\overline{r}$.
+Let $a_{0, i} \in A_{0, i}$ be an element which
+maps to $r$. Because $\delta(r) = 0$,
+we see that $\delta(a_{0, i})$ is in the
+image of $d$. Hence there exists an element
+$a_{1, i-1} \in A_{1, i-1}$ such that
+$d(a_{1, i-1}) = \delta(a_{0, i})$. This in turn
+implies that $\delta(a_{1, i-1})$ is in the kernel
+of $d$ (because $d(\delta(a_{1, i-1})) = \delta(d(a_{1, i-1}))
+= \delta(\delta(a_{0, i})) = 0$). By exactness of the
+rows we find an element $a_{2, i-2}$ such that
+$d(a_{2, i-2}) = \delta(a_{1, i-1})$. And so on
+until a full zig-zag is found. Of course surjectivity
+of $\mathbf{H}_i \to H_i(U(A))$ is shown similarly.
+
+\medskip\noindent
+To prove injectivity we argue in exactly the same way.
+Namely, suppose we are given a zig-zag
+$(a_{i, 0}, a_{i-1, 1}, \ldots, a_{0, i})$
+which maps to zero in $H_i(R(A)_\bullet)$.
+This means that $a_{0, i}$ maps to an element
+of $\Coker(A_{1, i} \to A_{0, i})$
+which is in the image of
+$\delta : \Coker(A_{1, i + 1} \to A_{0, i + 1}) \to
+\Coker(A_{1, i} \to A_{0, i})$.
+In other words, $a_{0, i}$ is in the image of
+$\delta \oplus d : A_{0, i + 1} \oplus A_{1, i} \to A_{0, i}$.
+From the definition of trivial zig-zags we see that
+we may modify our zig-zag by a trivial one and
+assume that $a_{0, i} = 0$. This immediately
+implies that $d(a_{1, i-1}) = 0$. As the rows
+are exact this implies that $a_{1, i-1}$ is
+in the image of $d : A_{2, i-1} \to A_{1, i-1}$.
+Thus we may modify our zig-zag once again by a
+trivial zig-zag and assume that our zig-zag looks
+like $(a_{i, 0}, a_{i-1, 1}, \ldots, a_{2, i-2}, 0, 0)$.
+Continuing like this we obtain the desired injectivity.
+
+\medskip\noindent
+If $\Phi : (A_{\bullet, \bullet}, d, \delta)
+\to (B_{\bullet, \bullet}, d, \delta)$ is a morphism
+of double complexes both of which satisfy the conditions
+of the lemma, then we clearly obtain a commutative
+diagram
+$$
+\xymatrix{
+H_i(U(A)_\bullet) \ar[d] &
+\mathbf{H}_i(A) \ar[r] \ar[l] \ar[d] &
+H_i(R(A)_\bullet) \ar[d] \\
+H_i(U(B)_\bullet) &
+\mathbf{H}_i(B) \ar[r] \ar[l] &
+H_i(R(B)_\bullet) \\
+}
+$$
+This proves the functoriality.
+\end{proof}
+
+\begin{remark}
+\label{remark-signs-double-complex}
+The isomorphism constructed above is the ``correct'' one only up to signs.
+A good part of homological algebra is concerned with choosing signs for
+various maps and showing commutativity of diagrams with intervention
+of suitable signs. For the moment we will simply use the isomorphism
+as given in the proof above, and worry about signs later.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-tor-left-right}
+Let $R$ be a ring. For any $i \geq 0$ the functors
+$\text{Mod}_R \times \text{Mod}_R \to \text{Mod}_R$,
+$(M, N) \mapsto \text{Tor}_i^R(M, N)$ and
+$(M, N) \mapsto \text{Tor}_i^R(N, M)$ are
+canonically isomorphic.
+\end{lemma}
+
+\begin{proof}
+Let $F_\bullet$ be a free resolution of the module $M$ and
+let $G_\bullet$ be a free resolution of the module $N$.
+Consider the double complex $(A_{i, j}, d, \delta)$ defined
+as follows:
+\begin{enumerate}
+\item set $A_{i, j} = F_i \otimes_R G_j$,
+\item set $d_{i, j} : F_i \otimes_R G_j \to F_{i-1} \otimes_R G_j$
+equal to $d_{F, i} \otimes \text{id}$, and
+\item set $\delta_{i, j} : F_i \otimes_R G_j \to F_i \otimes_R G_{j-1}$
+equal to $\text{id} \otimes d_{G, j}$.
+\end{enumerate}
+This double complex is usually simply denoted $F_\bullet \otimes_R G_\bullet$.
+
+\medskip\noindent
+Since each $G_j$ is free, and hence flat we see that each
+row of the double complex is exact except in homological
+degree $0$. Since each $F_i$ is free and hence flat we see that each
+column of the double complex is exact except in homological
+degree $0$. Hence the double complex satisfies the conditions
+of Lemma \ref{lemma-no-spectral-sequence}.
+
+\medskip\noindent
+To see what the lemma says we compute $R(A)_\bullet$ and $U(A)_\bullet$.
+Namely,
+\begin{eqnarray*}
+R(A)_i & = & \Coker(A_{1, i} \to A_{0, i}) \\
+& = & \Coker(F_1 \otimes_R G_i \to F_0 \otimes_R G_i) \\
+& = & \Coker(F_1 \to F_0) \otimes_R G_i \\
+& = & M \otimes_R G_i
+\end{eqnarray*}
+In fact these isomorphisms are compatible with the differentials
+$\delta$ and we see that $R(A)_\bullet = M \otimes_R G_\bullet$
+as homological complexes. In exactly the same way we see that
+$U(A)_\bullet = F_\bullet \otimes_R N$. We get
+\begin{eqnarray*}
+\text{Tor}_i^R(M, N)
+& = & H_i(F_\bullet \otimes_R N) \\
+& = & H_i(U(A)_\bullet) \\
+& = & H_i(R(A)_\bullet) \\
+& = & H_i(M \otimes_R G_\bullet) \\
+& = & H_i(G_\bullet \otimes_R M) \\
+& = & \text{Tor}_i^R(N, M)
+\end{eqnarray*}
+Here the third equality is Lemma \ref{lemma-no-spectral-sequence}, and
+the fifth equality uses the canonical isomorphism
+$V \otimes_R W \cong W \otimes_R V$ of tensor products.
+
+\medskip\noindent
+Functoriality. Suppose that we have $R$-modules $M_\nu$, $N_\nu$,
+$\nu = 1, 2$. Let $\varphi : M_1 \to M_2$ and $\psi : N_1 \to N_2$
+be morphisms of $R$-modules.
+Suppose that we have free resolutions $F_{\nu, \bullet}$
+for $M_\nu$ and free resolutions $G_{\nu, \bullet}$ for $N_\nu$.
+By Lemma \ref{lemma-compare-resolutions} we may choose
+maps of complexes $\alpha : F_{1, \bullet} \to F_{2, \bullet}$
+and $\beta : G_{1, \bullet} \to G_{2, \bullet}$ compatible
+with $\varphi$ and $\psi$. We claim that
+the pair $(\alpha, \beta)$ induces a morphism of double
+complexes
+$$
+\alpha \otimes \beta :
+F_{1, \bullet} \otimes_R G_{1, \bullet}
+\longrightarrow
+F_{2, \bullet} \otimes_R G_{2, \bullet}
+$$
+This is really a very straightforward check using the rule
+that $F_{1, i} \otimes_R G_{1, j} \to F_{2, i} \otimes_R G_{2, j}$
+is given by $\alpha_i \otimes \beta_j$ where $\alpha_i$,
+resp.\ $\beta_j$ is the degree $i$, resp.\ $j$ component of $\alpha$,
+resp.\ $\beta$. The reader also readily verifies that the
+induced maps $R(F_{1, \bullet} \otimes_R G_{1, \bullet})_\bullet
+\to R(F_{2, \bullet} \otimes_R G_{2, \bullet})_\bullet$
+agrees with the map $M_1 \otimes_R G_{1, \bullet}
+\to M_2 \otimes_R G_{2, \bullet}$ induced by $\varphi \otimes \beta$.
+Similarly for the map induced on the $U(-)_\bullet$ complexes.
+Thus the statement on functoriality follows from the statement
+on functoriality in Lemma \ref{lemma-no-spectral-sequence}.
+\end{proof}
+
+\begin{remark}
+\label{remark-curiosity-signs-swap}
+An interesting case occurs when $M = N$ in the above.
+In this case we get a canonical map $\text{Tor}_i^R(M, M)
+\to \text{Tor}_i^R(M, M)$. Note that this map is not the
+identity, because even when $i = 0$ this map is not the
+identity! For example, if $V$ is a vector space of dimension
+$n$ over a field $k$, then the switch map $V \otimes_k V \to V \otimes_k V$
+has $(n^2 + n)/2$ eigenvalues $+1$ and $(n^2-n)/2$ eigenvalues
+$-1$. In characteristic $2$ it is not even diagonalizable.
+Note that even changing the sign of the map will not get rid
+of this.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-tor-noetherian}
+Let $R$ be a Noetherian ring. Let $M$, $N$ be finite $R$-modules.
+Then $\text{Tor}_p^R(M, N)$ is a finite $R$-module for all $p$.
+\end{lemma}
+
+\begin{proof}
+This holds because $\text{Tor}_p^R(M, N)$ is computed as the
+homology groups of a complex $F_\bullet \otimes_R N$
+with each $F_n$ a finite free $R$-module, see
+Lemma \ref{lemma-resolution-by-finite-free}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-flat}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+The following are equivalent:
+\begin{enumerate}
+\item The module $M$ is flat over $R$.
+\item For all $i > 0$ the functor $\text{Tor}_i^R(M, -)$ is zero.
+\item The functor $\text{Tor}_1^R(M, -)$ is zero.
+\item For all ideals $I \subset R$ we have $\text{Tor}_1^R(M, R/I) = 0$.
+\item For all finitely generated ideals $I \subset R$ we have
+$\text{Tor}_1^R(M, R/I) = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Suppose $M$ is flat. Let $N$ be an $R$-module.
+Let $F_\bullet$ be a free resolution of $N$.
+Then $F_\bullet \otimes_R M$ is a resolution of $N \otimes_R M$,
+by flatness of $M$. Hence all higher Tor groups vanish.
+
+\medskip\noindent
+The implications (2) $\Rightarrow$ (3) $\Rightarrow$ (4) $\Rightarrow$ (5)
+are immediate. Thus it suffices to show that the last condition implies that
+$M$ is flat. Let $I \subset R$ be an ideal.
+Consider the short exact sequence
+$0 \to I \to R \to R/I \to 0$. Apply
+Lemma \ref{lemma-long-exact-sequence-tor}. We get an
+exact sequence
+$$
+\text{Tor}_1^R(M, R/I) \to
+M \otimes_R I \to
+M \otimes_R R \to
+M \otimes_R R/I \to
+0
+$$
+Since obviously $M \otimes_R R = M$ we conclude that the
+last hypothesis implies that $M \otimes_R I \to M$ is
+injective for every finitely generated ideal $I$.
+Thus $M$ is flat by Lemma \ref{lemma-flat}.
+\end{proof}
+
+\begin{remark}
+\label{remark-Tor-ring-mod-ideal}
+The proof of Lemma \ref{lemma-characterize-flat} actually shows
+that
+$$
+\text{Tor}_1^R(M, R/I)
+=
+\Ker(I \otimes_R M \to M).
+$$
+\end{remark}
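+
+\noindent
+For instance, take $R = \mathbf{Z}$, $M = \mathbf{Z}/n$ and
+$I = m\mathbf{Z}$. Then $I \otimes_R M$ is cyclic, generated by
+$m \otimes 1$, and the map $I \otimes_R M \to M$ sends $m \otimes x$
+to $mx$. Its kernel is isomorphic to $\mathbf{Z}/\gcd(m, n)$, so the
+remark gives $\text{Tor}_1^{\mathbf{Z}}(\mathbf{Z}/n, \mathbf{Z}/m)
+\cong \mathbf{Z}/\gcd(m, n)$. In particular $\mathbf{Z}/n$ is not a flat
+$\mathbf{Z}$-module for $n > 1$, as predicted by
+Lemma \ref{lemma-characterize-flat}.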
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Functorialities for Tor}
+\label{section-functoriality-tor}
+
+\noindent
+In this section we briefly discuss the functoriality
+of $\text{Tor}$ with respect to change of ring, etc.
+Here is a list of items to work out.
+\begin{enumerate}
+\item Given a ring map $R \to R'$, an $R$-module
+$M$ and an $R'$-module $N'$
+the $R$-modules $\text{Tor}_i^R(M, N')$ have
+a natural $R'$-module structure.
+\item Given a ring map $R \to R'$ and $R$-modules
+$M$, $N$ there is a natural $R$-module map
+$\text{Tor}_i^R(M, N) \to \text{Tor}_i^{R'}(M \otimes_R R', N \otimes_R R')$.
+\item Given a ring map $R \to R'$ an $R$-module $M$ and
+an $R'$-module $N'$ there exists a natural
+$R'$-module map
+$\text{Tor}_i^R(M, N') \to \text{Tor}_i^{R'}(M \otimes_R R', N')$.
+\end{enumerate}
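+
+\medskip\noindent
+For item (1), note that if $F_\bullet$ is a free resolution of $M$ over
+$R$, then each $F_i \otimes_R N'$ is an $R'$-module via the $R'$-module
+structure on $N'$, and the differentials $d_{F, i} \otimes \text{id}$
+are $R'$-linear. Hence the homology modules $\text{Tor}_i^R(M, N')$
+inherit a natural $R'$-module structure.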
+
+\begin{lemma}
+\label{lemma-flat-base-change-tor}
+Given a flat ring map $R \to R'$ and $R$-modules
+$M$, $N$ the natural $R$-module map
+$\text{Tor}_i^R(M, N)\otimes_R R'
+\to \text{Tor}_i^{R'}(M \otimes_R R', N \otimes_R R')$
+is an isomorphism for all $i$.
+\end{lemma}
+
+\begin{proof}
+Omitted. This is true because a free resolution $F_\bullet$ of $M$ over
+$R$ stays exact when tensoring with $R'$ over $R$ and hence
+$(F_\bullet \otimes_R N)\otimes_R R'$ computes the Tor groups
+over $R'$.
+\end{proof}
+
+\noindent
+The following lemma does not seem to fit anywhere else.
+
+\begin{lemma}
+\label{lemma-tor-commutes-filtered-colimits}
+Let $R$ be a ring. Let $M = \colim M_i$ be a filtered colimit of
+$R$-modules. Let $N$ be an $R$-module. Then
+$\text{Tor}_n^R(M, N) = \colim \text{Tor}_n^R(M_i, N)$ for all $n$.
+\end{lemma}
+
+\begin{proof}
+Choose a free resolution $F_\bullet$ of $N$. Then
+$F_\bullet \otimes_R M = \colim F_\bullet \otimes_R M_i$
+as complexes by Lemma \ref{lemma-tensor-products-commute-with-limits}.
+Thus the result by Lemma \ref{lemma-directed-colimit-exact}.
+\end{proof}
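+
+\noindent
+A typical application: writing
+$\mathbf{Q} = \colim_n \frac{1}{n}\mathbf{Z}$ as a filtered colimit
+of free $\mathbf{Z}$-modules of rank $1$ (with $n$ ordered by
+divisibility), the lemma gives
+$$
+\text{Tor}_p^{\mathbf{Z}}(\mathbf{Q}, N) =
+\colim_n \text{Tor}_p^{\mathbf{Z}}(\tfrac{1}{n}\mathbf{Z}, N) = 0
+$$
+for all $p > 0$ and all abelian groups $N$. Combined with
+Lemma \ref{lemma-characterize-flat} this shows once more that
+$\mathbf{Q}$ is a flat $\mathbf{Z}$-module.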
+
+
+
+
+
+
+
+
+\section{Projective modules}
+\label{section-projective}
+
+\noindent
+Some lemmas on projective modules.
+
+\begin{definition}
+\label{definition-projective}
+Let $R$ be a ring. An $R$-module $P$ is {\it projective} if and only if
+the functor $\Hom_R(P, -) : \text{Mod}_R \to \text{Mod}_R$ is
+an exact functor.
+\end{definition}
+
+\noindent
+The functor $\Hom_R(M, - )$ is left exact for any $R$-module $M$, see
+Lemma \ref{lemma-hom-exact}.
+Hence the condition for $P$ to be projective really signifies that given
+a surjection of $R$-modules $N \to N'$ the map
+$\Hom_R(P, N) \to \Hom_R(P, N')$ is surjective.
+
+\begin{lemma}
+\label{lemma-characterize-projective}
+Let $R$ be a ring. Let $P$ be an $R$-module.
+The following are equivalent
+\begin{enumerate}
+\item $P$ is projective,
+\item $P$ is a direct summand of a free $R$-module, and
+\item $\Ext^1_R(P, M) = 0$ for every $R$-module $M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $P$ is projective. Choose a surjection $\pi : F \to P$ where $F$
+is a free $R$-module. As $P$ is projective there exists an
+$i \in \Hom_R(P, F)$ such that $\pi \circ i = \text{id}_P$.
+In other words $F \cong \Ker(\pi) \oplus i(P)$ and we see
+that $P$ is a direct summand of $F$.
+
+\medskip\noindent
+Conversely, assume that $P \oplus Q = F$ is a free $R$-module.
+Note that the free module $F = \bigoplus_{i \in I} R$ is projective
+as $\Hom_R(F, M) = \prod_{i \in I} M$ and the functor
+$M \mapsto \prod_{i \in I} M$ is exact.
+Then $\Hom_R(F, -) = \Hom_R(P, -) \times \Hom_R(Q, -)$
+as functors, hence both $P$ and $Q$ are projective.
+
+\medskip\noindent
+Assume $P \oplus Q = F$ is a free $R$-module. Then we have a
+free resolution $F_\bullet$ of the form
+$$
+\ldots \to F \xrightarrow{a} F \xrightarrow{b} F \to P \to 0
+$$
+where the maps alternate between the projector $a$ onto $P$ and the
+projector $b$ onto $Q$. Hence the complex $\Hom_R(F_\bullet, M)$ is split
+exact in degrees $\geq 1$, whence we see the vanishing in (3).
+
+\medskip\noindent
+Assume $\Ext^1_R(P, M) = 0$ for every $R$-module $M$.
+Pick a free resolution $F_\bullet \to P$. Set
+$M = \Im(F_1 \to F_0) = \Ker(F_0 \to P)$.
+Consider the element $\xi \in \Ext^1_R(P, M)$ given by
+the class of the quotient map $\pi : F_1 \to M$. Since $\xi$ is zero
+there exists a map $s : F_0 \to M$ such that $\pi = s \circ (F_1 \to F_0)$.
+Clearly, this means that
+$$
+F_0 = \Ker(s) \oplus \Ker(F_0 \to P) =
+P \oplus \Ker(F_0 \to P)
+$$
+and we win.
+\end{proof}
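+
+\noindent
+A projective module need not be free. For example, let $R = \mathbf{Z}/6$
+and $P = 2\mathbf{Z}/6\mathbf{Z} \cong \mathbf{Z}/3$. The Chinese
+remainder theorem gives a direct sum decomposition
+$R \cong \mathbf{Z}/2 \oplus \mathbf{Z}/3$ of $R$-modules, so $P$ is a
+direct summand of the free module $R$ and hence projective by
+Lemma \ref{lemma-characterize-projective}. On the other hand $P$ is not
+free, as any nonzero free $\mathbf{Z}/6$-module has at least $6$ elements.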
+
+\begin{lemma}
+\label{lemma-characterize-finite-projective-noetherian}
+Let $R$ be a Noetherian ring. Let $P$ be a finite $R$-module.
+If $\Ext^1_R(P, M) = 0$ for every finite $R$-module $M$, then
+$P$ is projective.
+\end{lemma}
+
+\noindent
+This lemma can be strengthened: There is a version for finitely presented
+$R$-modules if $R$ is not assumed Noetherian. There is a version with $M$
+running through all finite length modules in the Noetherian case.
+
+\begin{proof}
+Choose a surjection $R^{\oplus n} \to P$ with kernel $M$.
+Since $\Ext^1_R(P, M) = 0$ this surjection is split and
+we conclude by Lemma \ref{lemma-characterize-projective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-sum-projective}
+A direct sum of projective modules is projective.
+\end{lemma}
+
+\begin{proof}
+This is true by the characterization of projectives as direct
+summands of free modules in
+Lemma \ref{lemma-characterize-projective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-projective-module}
+Let $R$ be a ring. Let $I \subset R$ be a nilpotent ideal. Let
+$\overline{P}$ be a projective $R/I$-module. Then there exists a
+projective $R$-module $P$ such that $P/IP \cong \overline{P}$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-characterize-projective}
+we can choose a set $A$ and a direct sum decomposition
+$\bigoplus_{\alpha \in A} R/I = \overline{P} \oplus \overline{K}$
+for some $R/I$-module $\overline{K}$. Write $F = \bigoplus_{\alpha \in A} R$
+for the free $R$-module on $A$. Choose a lift
+$p : F \to F$ of the projector $\overline{p}$ associated to
+the direct summand $\overline{P}$ of $\bigoplus_{\alpha \in A} R/I$.
+Note that $p^2 - p \in \text{End}_R(F)$ is a nilpotent
+endomorphism of $F$ (as $I$ is nilpotent and the matrix entries of
+$p^2 - p$ are in $I$; more precisely, if $I^n = 0$, then $(p^2 - p)^n = 0$).
+Hence by Lemma \ref{lemma-lift-idempotents-noncommutative}
+we can modify our choice of $p$ and assume that $p$ is a projector.
+Set $P = \Im(p)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-finite-projective-module}
+Let $R$ be a ring. Let $I \subset R$ be a locally nilpotent ideal. Let
+$\overline{P}$ be a finite projective $R/I$-module. Then there exists a
+finite projective $R$-module $P$ such that $P/IP \cong \overline{P}$.
+\end{lemma}
+
+\begin{proof}
+Recall that $\overline{P}$ is a direct summand of a free $R/I$-module
+$\bigoplus_{\alpha \in A} R/I$ by Lemma \ref{lemma-characterize-projective}.
+As $\overline{P}$ is finite, it follows that $\overline{P}$ is contained
+in $\bigoplus_{\alpha \in A'} R/I$ for some $A' \subset A$ finite.
+Hence we may assume we have a direct sum decomposition
+$(R/I)^{\oplus n} = \overline{P} \oplus \overline{K}$
+for some $n$ and some $R/I$-module $\overline{K}$. Choose a lift
+$p \in \text{Mat}(n \times n, R)$ of the projector $\overline{p}$
+associated to the direct summand $\overline{P}$ of $(R/I)^{\oplus n}$.
+Note that $p^2 - p \in \text{Mat}(n \times n, R)$ is nilpotent:
+as $I$ is locally nilpotent and the matrix entries $c_{ij}$ of
+$p^2 - p$ are in $I$ we have $c_{ij}^t = 0$ for some $t > 0$ and
+then $(p^2 - p)^{tn^2} = 0$ (by looking at the matrix coefficients).
+Hence by Lemma \ref{lemma-lift-idempotents-noncommutative}
+we can modify our choice of $p$ and assume that $p$ is a projector.
+Set $P = \Im(p)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-projective}
+Let $R$ be a ring.
+Let $I \subset R$ be an ideal.
+Let $M$ be an $R$-module.
+Assume
+\begin{enumerate}
+\item $I$ is nilpotent,
+\item $M/IM$ is a projective $R/I$-module,
+\item $M$ is a flat $R$-module.
+\end{enumerate}
+Then $M$ is a projective $R$-module.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-lift-projective-module} we can find a projective
+$R$-module $P$ and an isomorphism $P/IP \to M/IM$. We are going to show
+that $M$ is isomorphic to $P$ which will finish the proof. Because $P$
+is projective we can lift the map $P \to P/IP \to M/IM$ to an $R$-module
+map $P \to M$ which is an isomorphism modulo $I$. Since $I^n = 0$
+for some $n$, we can use the filtrations
+\begin{align*}
+0 = I^nM \subset I^{n - 1}M \subset \ldots \subset IM \subset M \\
+0 = I^nP \subset I^{n - 1}P \subset \ldots \subset IP \subset P
+\end{align*}
+to see that it suffices to show that the induced maps
+$I^aP/I^{a + 1}P \to I^aM/I^{a + 1}M$ are bijective. Since both $P$
+and $M$ are flat $R$-modules we can identify this with the map
+$$
+I^a/I^{a + 1} \otimes_{R/I} P/IP
+\longrightarrow
+I^a/I^{a + 1} \otimes_{R/I} M/IM
+$$
+induced by $P \to M$. Since we chose $P \to M$ such that the induced
+map $P/IP \to M/IM$ is an isomorphism, we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Finite projective modules}
+\label{section-finite-projective-modules}
+
+\begin{definition}
+\label{definition-locally-free}
+Let $R$ be a ring and $M$ an $R$-module.
+\begin{enumerate}
+\item We say that $M$ is {\it locally free} if we can cover $\Spec(R)$ by
+standard opens $D(f_i)$, $i \in I$ such that $M_{f_i}$ is a free
+$R_{f_i}$-module for all $i \in I$.
+\item We say that $M$ is {\it finite locally free} if we can choose
+the covering such that each $M_{f_i}$ is finite free.
+\item We say that $M$ is {\it finite locally free of rank $r$}
+if we can choose the covering such that each $M_{f_i}$ is isomorphic
+to $R_{f_i}^{\oplus r}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that a finite locally free $R$-module is automatically
+finitely presented by Lemma \ref{lemma-cover}. Moreover, if $M$ is a
+finite locally free module of rank $r$ over a ring $R$ and if
+$R$ is nonzero, then $r$ is uniquely determined by
+Lemma \ref{lemma-rank} (because at least one of the
+localizations $ R_{f_i}$ is a nonzero ring).
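+
+\noindent
+Note that a finite locally free module need not be free, nor have constant
+rank. For instance, let $R = \mathbf{Z}/2 \times \mathbf{Z}/3$ and
+$M = \mathbf{Z}/2 \times 0$. With $e_1 = (1, 0)$ and $e_2 = (0, 1)$ we have
+$\Spec(R) = D(e_1) \amalg D(e_2)$, and $M_{e_1} \cong R_{e_1}$ is free of
+rank $1$ while $M_{e_2} = 0$ is free of rank $0$. Hence $M$ is finite
+locally free, but it is not free and not finite locally free of rank $r$
+for any $r$.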
+
+\begin{lemma}
+\label{lemma-finite-projective}
+Let $R$ be a ring and let $M$ be an $R$-module.
+The following are equivalent
+\begin{enumerate}
+\item $M$ is finitely presented and $R$-flat,
+\item $M$ is finite projective,
+\item $M$ is a direct summand of a finite free $R$-module,
+\item $M$ is finitely presented and
+for all $\mathfrak p \in \Spec(R)$ the
+localization $M_{\mathfrak p}$ is free,
+\item $M$ is finitely presented and
+for all maximal ideals $\mathfrak m \subset R$ the
+localization $M_{\mathfrak m}$ is free,
+\item $M$ is finite and locally free,
+\item $M$ is finite locally free, and
+\item $M$ is finite, for every prime $\mathfrak p$ the module
+$M_{\mathfrak p}$ is free, and the function
+$$
+\rho_M : \Spec(R) \to \mathbf{Z}, \quad
+\mathfrak p
+\longmapsto
+\dim_{\kappa(\mathfrak p)} M \otimes_R \kappa(\mathfrak p)
+$$
+is locally constant in the Zariski topology.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First suppose $M$ is finite projective, i.e., (2) holds.
+Take a surjection $R^n \to M$ and let $K$ be the kernel.
+Since $M$ is projective,
+$0 \to K \to R^n \to M \to 0$ splits.
+Hence (2) $\Rightarrow$ (3).
+The implication (3) $\Rightarrow$ (2) follows from the fact that
+a direct summand of a projective is projective, see
+Lemma \ref{lemma-characterize-projective}.
+
+\medskip\noindent
+Assume (3), so we can write $K \oplus M \cong R^{\oplus n}$.
+So $K$ is a direct summand of $R^n$ and thus finitely generated.
+This shows $M = R^{\oplus n}/K$ is finitely presented.
+In other words, (3) $\Rightarrow$ (1).
+
+\medskip\noindent
+Assume $M$ is finitely presented and flat, i.e., (1) holds.
+We will prove that (7) holds. Pick any prime $\mathfrak p$ and
+$x_1, \ldots, x_r \in M$ which map to a basis of
+$M \otimes_R \kappa(\mathfrak p)$. By
+Nakayama's lemma (in the form of Lemma \ref{lemma-NAK-localization})
+these elements generate $M_g$ for some $g \in R$, $g \not \in \mathfrak p$.
+The corresponding surjection $\varphi : R_g^{\oplus r} \to M_g$
+has the following two properties: (a) $\Ker(\varphi)$ is a finite
+$R_g$-module (see Lemma \ref{lemma-extension})
+and (b) $\Ker(\varphi) \otimes \kappa(\mathfrak p) = 0$
+by flatness of $M_g$ over $R_g$ (see
+Lemma \ref{lemma-flat-tor-zero}).
+Hence by Nakayama's lemma again there exists a $g' \in R_g$ such that
+$\Ker(\varphi)_{g'} = 0$. In other words, $M_{gg'}$ is free.
+
+\medskip\noindent
+A finite locally free module is a finite module, see
+Lemma \ref{lemma-cover},
+hence (7) $\Rightarrow$ (6).
+It is clear that (6) $\Rightarrow$ (7) and that (7) $\Rightarrow$ (8).
+
+\medskip\noindent
+A finite locally free module is a finitely presented module, see
+Lemma \ref{lemma-cover},
+hence (7) $\Rightarrow$ (4).
+Of course (4) implies (5).
+Since we may check flatness locally (see
+Lemma \ref{lemma-flat-localization})
+we conclude that (5) implies (1).
+At this point we have
+$$
+\xymatrix{
+(2) \ar@{<=>}[r] & (3) \ar@{=>}[r] & (1) \ar@{=>}[r] &
+(7) \ar@{<=>}[r] \ar@{=>}[rd] \ar@{=>}[d] & (6) \\
+& & (5) \ar@{=>}[u] & (4) \ar@{=>}[l] & (8)
+}
+$$
+
+\medskip\noindent
+Suppose that $M$ satisfies (1), (4), (5), (6), and (7).
+We will prove that (2) holds, i.e., it suffices
+to show that $M$ is projective. We have to show that $\Hom_R(M, -)$
+is exact. Let $0 \to N'' \to N \to N' \to 0$ be a short exact sequence of
+$R$-modules. We have to show that
+$0 \to \Hom_R(M, N'') \to \Hom_R(M, N) \to
+\Hom_R(M, N') \to 0$ is exact.
+As $M$ is finite locally free there exists a covering
+$\Spec(R) = \bigcup D(f_i)$ such that $M_{f_i}$ is finite free.
+By
+Lemma \ref{lemma-hom-from-finitely-presented}
+we see that
+$$
+0 \to \Hom_R(M, N'')_{f_i} \to \Hom_R(M, N)_{f_i} \to
+\Hom_R(M, N')_{f_i} \to 0
+$$
+is equal to
+$0 \to \Hom_{R_{f_i}}(M_{f_i}, N''_{f_i}) \to
+\Hom_{R_{f_i}}(M_{f_i}, N_{f_i}) \to
+\Hom_{R_{f_i}}(M_{f_i}, N'_{f_i}) \to 0$
+which is exact as $M_{f_i}$ is free and as the localization
+$0 \to N''_{f_i} \to N_{f_i} \to N'_{f_i} \to 0$
+is exact (as localization is exact). Whence we see that
+$0 \to \Hom_R(M, N'') \to \Hom_R(M, N) \to
+\Hom_R(M, N') \to 0$ is exact by
+Lemma \ref{lemma-cover}.
+
+\medskip\noindent
+Finally, assume that (8) holds. Pick a maximal ideal $\mathfrak m \subset R$.
+Pick $x_1, \ldots, x_r \in M$ which map to a $\kappa(\mathfrak m)$-basis of
+$M \otimes_R \kappa(\mathfrak m) = M/\mathfrak mM$. In particular
+$\rho_M(\mathfrak m) = r$. By
+Nakayama's Lemma \ref{lemma-NAK}
+there exists an $f \in R$, $f \not \in \mathfrak m$ such that
+$x_1, \ldots, x_r$ generate $M_f$ over $R_f$. By the assumption that
+$\rho_M$ is locally constant there exists a $g \in R$, $g \not \in \mathfrak m$
+such that $\rho_M$ is constant equal to $r$ on $D(g)$. We claim that
+$$
+\Psi : R_{fg}^{\oplus r} \longrightarrow M_{fg}, \quad
+(a_1, \ldots, a_r) \longmapsto \sum a_i x_i
+$$
+is an isomorphism. This claim will show that $M$ is finite locally
+free, i.e., that (7) holds. To see the claim
+it suffices to show that the induced map on localizations
+$\Psi_{\mathfrak p} : R_{\mathfrak p}^{\oplus r} \to M_{\mathfrak p}$
+is an isomorphism for all $\mathfrak p \in D(fg)$, see
+Lemma \ref{lemma-characterize-zero-local}.
+By our choice of $f$ the map $\Psi_{\mathfrak p}$
+is surjective. By assumption (8) we have
+$M_{\mathfrak p} \cong R_{\mathfrak p}^{\oplus \rho_M(\mathfrak p)}$
+and by our choice of $g$ we have $\rho_M(\mathfrak p) = r$.
+Hence $\Psi_{\mathfrak p}$ determines a surjection
+$R_{\mathfrak p}^{\oplus r} \to
+M_{\mathfrak p} \cong R_{\mathfrak p}^{\oplus r}$
+which is then an isomorphism by
+Lemma \ref{lemma-fun}.
+(Of course this last fact follows from a simple matrix argument also.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-projective-reduced}
+Let $R$ be a reduced ring and let $M$ be an $R$-module. Then the
+equivalent conditions of Lemma \ref{lemma-finite-projective}
+are also equivalent to
+\begin{enumerate}
+\item[(9)] $M$ is finite and the function
+$\rho_M : \Spec(R) \to \mathbf{Z}$, $\mathfrak p \mapsto
+\dim_{\kappa(\mathfrak p)} M \otimes_R \kappa(\mathfrak p)$
+is locally constant in the Zariski topology.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Pick a maximal ideal $\mathfrak m \subset R$.
+Pick $x_1, \ldots, x_r \in M$ which map to a $\kappa(\mathfrak m)$-basis of
+$M \otimes_R \kappa(\mathfrak m) = M/\mathfrak mM$. In particular
+$\rho_M(\mathfrak m) = r$. By
+Nakayama's Lemma \ref{lemma-NAK}
+there exists an $f \in R$, $f \not \in \mathfrak m$ such that
+$x_1, \ldots, x_r$ generate $M_f$ over $R_f$. By the assumption that
+$\rho_M$ is locally constant there exists a $g \in R$, $g \not \in \mathfrak m$
+such that $\rho_M$ is constant equal to $r$ on $D(g)$. We claim that
+$$
+\Psi : R_{fg}^{\oplus r} \longrightarrow M_{fg}, \quad
+(a_1, \ldots, a_r) \longmapsto \sum a_i x_i
+$$
+is an isomorphism. This claim will show that $M$ is finite locally
+free, i.e., that (7) holds. Since $\Psi$ is surjective, it suffices to
+show that $\Psi$ is injective. Since $R_{fg}$ is reduced,
+it suffices to show that $\Psi$ is injective after localization
+at all minimal primes $\mathfrak p$ of $R_{fg}$, see
+Lemma \ref{lemma-reduced-ring-sub-product-fields}.
+However, we know that $R_\mathfrak p = \kappa(\mathfrak p)$
+by Lemma \ref{lemma-minimal-prime-reduced-ring} and
+$\rho_M(\mathfrak p) = r$ hence $\Psi_\mathfrak p :
+R_\mathfrak p^{\oplus r} \to M \otimes_R \kappa(\mathfrak p)$
+is an isomorphism as a surjective map of finite dimensional
+vector spaces of the same dimension.
+\end{proof}
+
+\begin{remark}
+\label{remark-warning}
+It is not true that a finite $R$-module which is
+$R$-flat is automatically projective. A counterexample is
+$R = \mathcal{C}^\infty(\mathbf{R})$,
+the ring of infinitely differentiable functions on
+$\mathbf{R}$, with $M = R_{\mathfrak m} = R/I$ where
+$\mathfrak m = \{f \in R \mid f(0) = 0\}$ and
+$I = \{f \in R \mid f(x) = 0 \text{ for all } x
+\text{ with } |x| < \epsilon \text{ for some } \epsilon > 0\}$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-finite-flat-local}
+(Warning: see Remark \ref{remark-warning}.)
+Suppose $R$ is a local ring, and $M$ is a finite
+flat $R$-module. Then $M$ is finite free.
+\end{lemma}
+
+\begin{proof}
+Follows from the equational criterion of flatness, see
+Lemma \ref{lemma-flat-eq}. Namely, suppose that
+$x_1, \ldots, x_r \in M$ map to a basis of
+$M/\mathfrak mM$. By Nakayama's Lemma \ref{lemma-NAK}
+these elements generate $M$. We want to show there
+is no relation among the $x_i$. Instead, we will show
+by induction on $n$ that if $x_1, \ldots, x_n \in M$
+map to linearly independent elements of the vector space
+$M/\mathfrak mM$, then they are linearly independent over $R$.
+
+\medskip\noindent
+The base case of the induction is where we have
+$x \in M$, $x \not\in \mathfrak mM$ and a relation
+$fx = 0$. By the equational criterion there
+exist $y_j \in M$ and $a_j \in R$ such that
+$x = \sum a_j y_j$ and $fa_j = 0$ for all $j$.
+Since $x \not\in \mathfrak mM$ we see that
+at least one $a_j$ is a unit and hence $f = 0$.
+
+\medskip\noindent
+Suppose that $\sum f_i x_i = 0$ is a relation among $x_1, \ldots, x_n$.
+By our choice of $x_i$ we have $f_i \in \mathfrak m$.
+According to the equational criterion of flatness there exist
+$a_{ij} \in R$ and $y_j \in M$ such that
+$x_i = \sum a_{ij} y_j$ and $\sum f_i a_{ij} = 0$.
+Since $x_n \not \in \mathfrak mM$ we see that
+$a_{nj}\not\in \mathfrak m$ for at least one $j$.
+Since $\sum f_i a_{ij} = 0$ we get
+$f_n = \sum_{i = 1}^{n-1} (-a_{ij}/a_{nj}) f_i$.
+The relation $\sum f_i x_i = 0$ now can be rewritten
+as $\sum_{i = 1}^{n-1} f_i( x_i + (-a_{ij}/a_{nj}) x_n) = 0$.
+Note that the elements $x_i + (-a_{ij}/a_{nj}) x_n$ map
+to $n-1$ linearly independent elements of $M/\mathfrak mM$.
+By the induction hypothesis all the $f_i$, $i \leq n-1$
+have to be zero, and hence also
+$f_n = \sum_{i = 1}^{n-1} (-a_{ij}/a_{nj}) f_i = 0$.
+This proves the induction step.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-projective-descends}
+Let $R \to S$ be a flat local homomorphism of local rings.
+Let $M$ be a finite $R$-module. Then $M$ is finite projective
+over $R$ if and only if $M \otimes_R S$ is finite projective
+over $S$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finite-projective} being finite projective
+over a local ring is the same thing as being finite free.
+Suppose that $M \otimes_R S$ is a finite free $S$-module.
+Pick $x_1, \ldots, x_r \in M$ whose images in $M/\mathfrak m_RM$
+form a basis over $\kappa(\mathfrak m_R)$. Then
+we see that $x_1 \otimes 1, \ldots, x_r \otimes 1$
+are a basis for $M \otimes_R S$. This implies that
+the map $R^{\oplus r} \to M, (a_i) \mapsto \sum a_i x_i$
+becomes an isomorphism after tensoring with $S$.
+By faithful flatness of $R \to S$, see Lemma \ref{lemma-local-flat-ff}
+we see that it is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-free-semi-local-free}
+Let $R$ be a semi-local ring.
+Let $M$ be a finite locally free module.
+If $M$ has constant rank, then $M$ is free. In particular, if $R$ has
+connected spectrum, then $M$ is free.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hints: First show that $M/\mathfrak m_iM$ has the
+same dimension $d$ for all maximal ideals
+$\mathfrak m_1, \ldots, \mathfrak m_n$
+of $R$ using that the rank is constant.
+Next, show that there exist elements $x_1, \ldots, x_d \in M$
+which form a basis for each $M/\mathfrak m_iM$ by the Chinese
+remainder theorem. Finally show that $x_1, \ldots, x_d$ is a basis for $M$.
+\end{proof}
+
+\noindent
+Here is a technical lemma that is used in the chapter on groupoids.
+
+\begin{lemma}
+\label{lemma-semi-local-module-basis-in-submodule}
+Let $R$ be a local ring with maximal ideal $\mathfrak m$ and
+infinite residue field.
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module and let $N \subset M$ be an $R$-submodule.
+Assume
+\begin{enumerate}
+\item $S$ is semi-local and $\mathfrak mS$ is contained in the
+Jacobson radical of $S$,
+\item $M$ is a finite free $S$-module, and
+\item $N$ generates $M$ as an $S$-module.
+\end{enumerate}
+Then $N$ contains an $S$-basis of $M$.
+\end{lemma}
+
+\begin{proof}
+Assume $M$ is free of rank $n$. Let $I \subset S$ be the Jacobson radical.
+By Nakayama's Lemma \ref{lemma-NAK} a sequence of elements
+$m_1, \ldots, m_n$ is a basis for $M$ if and only if
+$\overline{m}_i \in M/IM$ generate $M/IM$. Hence we may replace
+$M$ by $M/IM$, $N$ by $N/(N \cap IM)$, $R$ by $R/\mathfrak m$,
+and $S$ by $S/IS$. In this case we see that $S$ is a finite product
+of fields $S = k_1 \times \ldots \times k_r$ and
+$M = k_1^{\oplus n} \times \ldots \times k_r^{\oplus n}$.
+The fact that $N \subset M$ generates $M$ as an $S$-module
+means that there exist $x_j \in N$ such that a linear combination
+$\sum a_j x_j$ with $a_j \in S$ has a nonzero component in each
+factor $k_i^{\oplus n}$.
+Because $R = k$ is now an infinite field (we replaced $R$ by its
+residue field above), this means that also
+some linear combination $y = \sum c_j x_j$ with $c_j \in k$ has a
+nonzero component in each factor. Hence $y \in N$ generates a
+free direct summand $Sy \subset M$. By induction on $n$ the result
+holds for $M/Sy$ and the submodule $\overline{N} = N/(N \cap Sy)$.
+In other words there exist $\overline{y}_2, \ldots, \overline{y}_n$
+in $\overline{N}$ which (freely) generate $M/Sy$. Then
+$y, y_2, \ldots, y_n$ (freely) generate $M$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-evaluation-map-iso-finite-projective}
+Let $R$ be a ring. Let $L$, $M$, $N$ be $R$-modules.
+The canonical map
+$$
+\Hom_R(M, N) \otimes_R L \to \Hom_R(M, N \otimes_R L)
+$$
+is an isomorphism if $M$ is finite projective.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finite-projective} we see that $M$
+is finitely presented as well as finite locally free.
+By Lemmas \ref{lemma-hom-from-finitely-presented} and
+\ref{lemma-tensor-product-localization} formation of
+the left and right hand side of the arrow commutes with
+localization. We may check that our map is an isomorphism
+after localization, see Lemma \ref{lemma-cover}.
+Thus we may assume $M$ is finite free. In this case
+the lemma is immediate.
+\end{proof}
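+
+\medskip\noindent
+Some finiteness hypothesis on $M$ is needed in
+Lemma \ref{lemma-evaluation-map-iso-finite-projective}. For example,
+take $R = \mathbf{Z}$, $M = \bigoplus_{n \in \mathbf{N}} \mathbf{Z}$,
+$N = \mathbf{Z}$, and $L = \mathbf{Q}$. Then the canonical map becomes
+$$
+\left(\prod\nolimits_n \mathbf{Z}\right) \otimes_{\mathbf{Z}} \mathbf{Q}
+\longrightarrow
+\prod\nolimits_n \mathbf{Q}
+$$
+which is not surjective: any element of the source is a finite
+$\mathbf{Q}$-linear combination of integer sequences and hence has
+bounded denominators, whereas $(1, 1/2, 1/3, \ldots)$ does not.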
+
+
+
+
+
+\section{Open loci defined by module maps}
+\label{section-loci-maps}
+
+\noindent
+The set of primes where a given module map is surjective, or where it
+is an isomorphism, is sometimes open. In the case of finite projective
+modules we can look at the rank of the map.
+
+\begin{lemma}
+\label{lemma-map-between-finite}
+Let $R$ be a ring. Let $\varphi : M \to N$ be a map of $R$-modules
+with $N$ a finite $R$-module. Then we have the equality
+\begin{align*}
+U & = \{\mathfrak p \subset R \mid
+\varphi_{\mathfrak p} : M_{\mathfrak p} \to N_{\mathfrak p}
+\text{ is surjective}\} \\
+& = \{\mathfrak p \subset R \mid
+\varphi \otimes \kappa(\mathfrak p) :
+M \otimes \kappa(\mathfrak p) \to N \otimes \kappa(\mathfrak p)
+\text{ is surjective}\}
+\end{align*}
+and $U$ is an open subset of $\Spec(R)$. Moreover, for any $f \in R$
+such that $D(f) \subset U$ the map $M_f \to N_f$ is surjective.
+\end{lemma}
+
+\begin{proof}
+The equality in the displayed formula follows from Nakayama's lemma.
+Nakayama's lemma also implies that $U$ is open. See
+Lemma \ref{lemma-NAK} especially part (3). If $D(f) \subset U$, then
+$M_f \to N_f$ is surjective on all localizations at primes of
+$R_f$, and hence it is surjective by Lemma \ref{lemma-characterize-zero-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-between-finitely-presented}
+Let $R$ be a ring. Let $\varphi : M \to N$ be a map of $R$-modules
+with $M$ finite and $N$ finitely presented. Then
+$$
+U = \{\mathfrak p \subset R \mid
+\varphi_{\mathfrak p} : M_{\mathfrak p} \to N_{\mathfrak p}
+\text{ is an isomorphism}\}
+$$
+is an open subset of $\Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p \in U$. Pick a presentation
+$N = R^{\oplus n}/\sum_{j = 1, \ldots, m} R k_j$.
+Denote $e_i$ the image in $N$ of the $i$th basis vector of $R^{\oplus n}$.
+For each $i \in \{1, \ldots, n\}$ choose an element
+$m_i \in M$ such that $\varphi(m_i) = f_i e_i$
+for some $f_i \in R$, $f_i \not \in \mathfrak p$. This is possible
+as $\varphi_{\mathfrak p}$ is an isomorphism. Set $f = f_1 \ldots f_n$
+and let $\psi : R_f^{\oplus n} \to M_f$ be the map which maps
+the $i$th basis vector to $m_i/f_i$. Note that
+$\varphi_f \circ \psi$ is the localization
+at $f$ of the given map $R^{\oplus n} \to N$.
+As $\varphi_{\mathfrak p}$ is an isomorphism we
+see that $\psi(k_j)$ is an element of $M_f$ which maps to zero
+in $M_{\mathfrak p}$. Hence we see that there exist
+$g_j \in R$, $g_j \not \in \mathfrak p$ such that $g_j \psi(k_j) = 0$.
+Setting $g = g_1 \ldots g_m$, we see that $\psi_g$ factors through
+$N_{fg}$ to give a map $\chi : N_{fg} \to M_{fg}$. By construction
+$\chi$ is a right inverse to $\varphi_{fg}$. It follows that
+$\chi_\mathfrak p$ is an isomorphism. By Lemma \ref{lemma-map-between-finite}
+there is an $h \in R$, $h \not \in \mathfrak p$ such that
+$\chi_h : N_{fgh} \to M_{fgh}$ is surjective.
+Hence $\varphi_{fgh}$ and $\chi_h$ are mutually inverse maps,
+which implies that $D(fgh) \subset U$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finitely-presented-localization-free}
+Let $R$ be a ring. Let $\mathfrak p \subset R$ be a prime.
+Let $M$ be a finitely presented $R$-module. If $M_\mathfrak p$
+is free, then there is an $f \in R$, $f \not \in \mathfrak p$
+such that $M_f$ is a free $R_f$-module.
+\end{lemma}
+
+\begin{proof}
+Choose a basis $x_1, \ldots, x_n \in M_\mathfrak p$. We can choose
+an $f \in R$, $f \not \in \mathfrak p$ such that $x_i$ is the image
+of some $y_i \in M_f$. After replacing $y_i$ by $f^m y_i$ for $m \gg 0$
+we may assume $y_i \in M$. Namely, this replaces $x_1, \ldots, x_n$
+by $f^mx_1, \ldots, f^mx_n$ which is still a basis as $f$ maps to
+a unit in $R_\mathfrak p$. Hence
+we obtain a homomorphism $\varphi = (y_1, \ldots, y_n) : R^{\oplus n} \to M$
+of $R$-modules whose localization at $\mathfrak p$ is an isomorphism.
+By Lemma \ref{lemma-map-between-finitely-presented}
+we can find a $g \in R$, $g \not \in \mathfrak p$
+such that $\varphi_\mathfrak q$ is an isomorphism for all primes
+$\mathfrak q \subset R$ with $g \not \in \mathfrak q$.
+Then it follows from Lemma \ref{lemma-characterize-zero-local}
+that $\varphi_g$ is an isomorphism, so $M_g$ is free and the
+proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cokernel-flat}
+Let $R$ be a ring. Let $\varphi : P_1 \to P_2$ be a map of
+finite projective modules. Then
+\begin{enumerate}
+\item The set $U$ of primes $\mathfrak p \in \Spec(R)$ such that
+$\varphi \otimes \kappa(\mathfrak p)$ is injective is open and
+for any $f\in R$ such that $D(f) \subset U$ we have
+\begin{enumerate}
+\item $P_{1, f} \to P_{2, f}$ is injective, and
+\item the module $\Coker(\varphi)_f$ is finite projective over $R_f$.
+\end{enumerate}
+\item The set $W$ of primes $\mathfrak p \in \Spec(R)$ such that
+$\varphi \otimes \kappa(\mathfrak p)$ is surjective is open and
+for any $f\in R$ such that $D(f) \subset W$ we have
+\begin{enumerate}
+\item $P_{1, f} \to P_{2, f}$ is surjective, and
+\item the module $\Ker(\varphi)_f$ is finite projective over $R_f$.
+\end{enumerate}
+\item The set $V$ of primes $\mathfrak p \in \Spec(R)$ such that
+$\varphi \otimes \kappa(\mathfrak p)$ is an isomorphism is open and
+for any $f\in R$ such that $D(f) \subset V$ the map
+$\varphi_f : P_{1, f} \to P_{2, f}$ is an isomorphism of modules over $R_f$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove the set $U$ is open we may work locally on
+$\Spec(R)$. Thus we may replace $R$ by a suitable localization
+and assume that $P_1 = R^{n_1}$ and $P_2 = R^{n_2}$, see Lemma
+\ref{lemma-finite-projective}. In this case injectivity of
+$\varphi \otimes \kappa(\mathfrak p)$ is equivalent to $n_1 \leq n_2$
+and some $n_1 \times n_1$ minor $f$ of the matrix of $\varphi$ being
+invertible in $\kappa(\mathfrak p)$. Thus $D(f) \subset U$.
+This argument also shows that $P_{1, \mathfrak p} \to P_{2, \mathfrak p}$
+is injective for $\mathfrak p \in U$.
+
+\medskip\noindent
+Now suppose $D(f) \subset U$. By the remark in the previous paragraph
+and Lemma \ref{lemma-characterize-zero-local} we see that
+$P_{1, f} \to P_{2, f}$ is injective, i.e., (1)(a) holds.
+By Lemma \ref{lemma-finite-projective} to prove (1)(b)
+it suffices to prove that $\Coker(\varphi)$ is finite projective
+locally on $D(f)$. Thus, as we saw above, we may
+assume that $P_1 = R^{n_1}$ and $P_2 = R^{n_2}$
+and that some minor of the matrix of $\varphi$ is invertible in $R$.
+If the minor in question corresponds to the first $n_1$
+basis vectors of $R^{n_2}$, then using the last $n_2 - n_1$ basis
+vectors we get a map $R^{n_2 - n_1} \to R^{n_2} \to \Coker(\varphi)$
+which is easily seen to be an isomorphism.
+
+\medskip\noindent
+Openness of $W$ and (2)(a) for $D(f) \subset W$ follow from
+Lemma \ref{lemma-map-between-finite}. Since $P_{2, f}$ is projective
+over $R_f$ we see that $\varphi_f : P_{1, f} \to P_{2, f}$ has a section
+and it follows that $\Ker(\varphi)_f$ is a direct summand of $P_{2, f}$.
+Therefore $\Ker(\varphi)_f$ is finite projective. Thus (2)(b) holds as well.
+
+\medskip\noindent
+It is clear that $V = U \cap W$ is open and the other statement in (3)
+follows from (1)(a) and (2)(a).
+\end{proof}
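+
+\medskip\noindent
+As a simple illustration of Lemma \ref{lemma-cokernel-flat}, let
+$R = \mathbf{Z}$ and let $\varphi : \mathbf{Z} \to \mathbf{Z}$ be
+multiplication by $2$, viewed as a map of free modules of rank $1$.
+The unique $1 \times 1$ minor of the matrix of $\varphi$ is $2$,
+so $U = W = V = D(2)$. Indeed
+$\varphi_2 : \mathbf{Z}[1/2] \to \mathbf{Z}[1/2]$ is an isomorphism
+and $\Coker(\varphi)_2 = (\mathbf{Z}/2\mathbf{Z})[1/2] = 0$ is finite
+projective, as the lemma predicts.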
+
+
+
+
+
+
+\section{Faithfully flat descent for projectivity of modules}
+\label{section-ffdescent-projectivity-introduction}
+
+\noindent
+In the next few sections we prove, following
+Raynaud and Gruson \cite{GruRay}, that the
+projectivity of modules descends along faithfully flat ring maps. The idea of
+the proof is to use d\'evissage \`a la Kaplansky \cite{Kaplansky} to reduce
+to the
+case of countably generated modules. Given a well-behaved filtration of a
+module $M$, d\'evissage allows us to express $M$ as a direct sum of successive
+quotients of the filtering submodules (see
+Section \ref{section-transfinite-devissage}).
+Using this technique, we prove that a projective module is a direct sum of
+countably generated modules (Theorem \ref{theorem-projective-direct-sum}). To
+prove descent of projectivity for countably generated modules, we introduce a
+``Mittag-Leffler'' condition on modules, prove that a countably generated
+module is projective if and only if it is flat and Mittag-Leffler (Theorem
+\ref{theorem-projectivity-characterization}), and then show that the property
+of being a Mittag-Leffler module descends (Lemma
+\ref{lemma-ffdescent-ML}). Finally, given an arbitrary module $M$ whose
+base change by a faithfully flat ring map is projective, we filter $M$ by
+submodules whose successive quotients are countably generated projective
+modules, and then by d\'evissage conclude $M$ is a direct sum of projectives,
+hence projective itself (Theorem \ref{theorem-ffdescent-projectivity}).
+
+\medskip\noindent
+We note that there is an error in the proof of faithfully flat descent
+of projectivity in \cite{GruRay}. There, descent
+of projectivity along faithfully flat ring maps is deduced from descent of
+projectivity along a more general type of ring map
+(\cite[Example 3.1.4(1) of Part II]{GruRay}).
+However, the proof of descent along this more general type of
+map is incorrect. In \cite{G}, Gruson explains what went wrong,
+although he does not provide a fix for the case of interest.
+Patching this hole in the proof of faithfully flat descent of projectivity
+comes down to proving that the property of being a Mittag-Leffler module
+descends along faithfully flat ring maps. We do this in
+Lemma \ref{lemma-ffdescent-ML}.
+
+
+
+
+
+\section{Characterizing flatness}
+\label{section-characterize-flatness}
+
+\noindent
+In this section we discuss criteria for flatness. The main result in
+this section is Lazard's theorem (Theorem \ref{theorem-lazard}
+below), which says that a flat module is the colimit of a directed system of
+free finite modules.
+We remind the reader of the ``equational criterion for flatness'', see
+Lemma \ref{lemma-flat-eq}.
+It turns out that this can be massaged into a seemingly much stronger property.
+
+\begin{lemma}
+\label{lemma-flat-factors-free}
+Let $M$ be an $R$-module. The following are equivalent:
+\begin{enumerate}
+\item $M$ is flat.
+\item If $f: R^n \to M$ is a module map and $x \in \Ker(f)$, then there
+are module maps $h: R^n \to R^m$ and $g: R^m \to M$ such that
+$f = g \circ h$ and $x \in \Ker(h)$.
+\item Suppose $f: R^n \to M$ is a module map, $N \subset \Ker(f)$ any
+submodule, and $h: R^n \to R^{m}$ a map such that $N \subset \Ker(h)$
+and $f$ factors through $h$. Then given any $x \in \Ker(f)$ we can find a map
+$h': R^n \to R^{m'}$ such that $N + Rx \subset \Ker(h')$ and $f$
+factors through $h'$.
+\item If $f: R^n \to M$ is a module map and $N \subset \Ker(f)$ is a
+finitely generated submodule, then there are module maps $h: R^n \to
+R^m$ and $g: R^m \to M$ such that $f = g \circ h$ and $N \subset
+\Ker(h)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+That (1) is equivalent to (2) is just a reformulation of the equational
+criterion for flatness\footnote{In fact, a module map $f : R^n \to M$
+corresponds to a choice of elements $x_1, x_2, \ldots, x_n$ of $M$
+(namely, the images of the standard basis elements $e_1, e_2, \ldots,
+e_n$); furthermore, an element $x \in \Ker(f)$ corresponds to a
+relation between these $x_1, x_2, \ldots, x_n$ (namely, the relation
+$\sum_i f_i x_i = 0$, where the $f_i$ are the coordinates of $x$).
+The module map $h$ (represented as an $m \times n$-matrix)
+corresponds to the matrix $(a_{ij})$ from Lemma \ref{lemma-flat-eq},
+and the $y_j$ of Lemma \ref{lemma-flat-eq} are the images of the
+standard basis vectors of $R^m$ under $g$.}. To show
+(2) implies (3), let $g: R^m \to M$
+be the map such that $f$ factors as $f = g \circ h$. By (2) find $h'': R^m
+\to R^{m'}$ such that $h''$ kills $h(x)$ and $g: R^m \to M$
+factors through $h''$. Then taking $h' = h'' \circ h$ works. (3) implies (4)
+by induction on the number of generators of $N \subset \Ker(f)$ in (4).
+Clearly (4) implies (2).
+\end{proof}
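+
+\medskip\noindent
+A concrete instance of condition (2): take $R = \mathbf{Z}$ and
+$M = \mathbf{Q}$, which is flat. The map
+$f : \mathbf{Z}^2 \to \mathbf{Q}$,
+$(u, v) \mapsto u \cdot \frac{1}{3} + v \cdot \frac{2}{3}$
+has $x = (2, -1) \in \Ker(f)$. Setting
+$$
+h : \mathbf{Z}^2 \to \mathbf{Z}, \ (u, v) \mapsto u + 2v,
+\quad
+g : \mathbf{Z} \to \mathbf{Q}, \ 1 \mapsto \tfrac{1}{3}
+$$
+we get $f = g \circ h$ and $h(x) = 2 - 2 = 0$, i.e., $x \in \Ker(h)$.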
+
+\begin{lemma}
+\label{lemma-flat-factors-fp}
+Let $M$ be an $R$-module. Then $M$ is flat if and only if the following
+condition holds: if $P$ is a finitely presented $R$-module and $f: P
+\to M$ a module map, then there is a free finite $R$-module $F$ and
+module maps $h: P \to F$ and $g: F \to M$ such that $f = g
+\circ h$.
+\end{lemma}
+
+\begin{proof}
+This is just a reformulation of condition (4) from
+Lemma \ref{lemma-flat-factors-free}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-surjective-hom}
+Let $M$ be an $R$-module. Then $M$ is flat if and only if the following
+condition holds: for every finitely presented $R$-module $P$, if $N \to
+M$ is a surjective $R$-module map, then the induced map $\Hom_R(P, N)
+\to \Hom_R(P, M)$ is surjective.
+\end{lemma}
+
+\begin{proof}
+First suppose $M$ is flat. We must show that if $P$ is finitely presented,
+then given a map $f: P \to M$, it factors through the map $N
+\to M$. By
+Lemma \ref{lemma-flat-factors-fp}
+the map $f$ factors through a map $F \to M$ where $F$ is free and finite.
+Since $F$ is free, this map factors through
+$N \to M$. Thus $f$ factors through $N \to M$.
+
+\medskip\noindent
+Conversely, suppose the condition of the lemma holds. Let $f: P \to M$
+be a map from a finitely presented module $P$. Choose a free module $N$ with a
+surjection $N \to M$ onto $M$. Then $f$ factors through $N \to
+M$, and since $P$ is finitely generated, $f$ factors through a free finite
+submodule of $N$. Thus $M$ satisfies the condition of Lemma
+\ref{lemma-flat-factors-fp}, hence is flat.
+\end{proof}
+
+\begin{theorem}[Lazard's theorem]
+\label{theorem-lazard}
+Let $M$ be an $R$-module. Then $M$ is flat if and only if it is the colimit of
+a directed system of free finite $R$-modules.
+\end{theorem}
+
+\begin{proof}
+A colimit of a directed system of flat modules is flat, as taking directed
+colimits is exact and commutes with tensor product. Hence if $M$ is the colimit
+of a directed system of free finite modules then $M$ is flat.
+
+\medskip\noindent
+For the converse, first recall that any module $M$ can be written as the
+colimit of a directed system of finitely presented modules, in the following
+way. Choose a surjection $f: R^I \to M$ for some set $I$, and let $K$
+be the kernel. Let $E$ be the set of ordered pairs $(J, N)$ where $J$ is a
+finite subset of $I$ and $N$ is a finitely generated submodule of $R^J \cap K$.
+Then $E$ is made into a directed partially ordered set by defining $(J, N) \leq
+(J', N')$ if and only if $J \subset J'$ and $N \subset N'$. Define $M_e =
+R^J/N$ for $e = (J, N)$, and define $f_{ee'}: M_e \to M_{e'}$ to be
+the natural map for $e \leq e'$. Then $(M_e, f_{ee'})$ is a directed system and
+the natural maps $f_e: M_e \to M$ induce an isomorphism
+$\colim_{e
+\in E} M_e \xrightarrow{\cong} M$.
+
+\medskip\noindent
+Now suppose $M$ is flat. Let $I = M \times \mathbf{Z}$, write $(x_i)$ for
+the canonical basis of $R^{I}$, and take in the above discussion $f: R^I
+\to M$ to be the map sending $x_i$ to the projection of $i$ onto $M$.
+To prove the theorem it suffices to show that the $e \in E$ such that $M_e$
+is free form a cofinal subset of $E$. So let $e = (J, N) \in E$ be arbitrary.
+By Lemma \ref{lemma-flat-factors-fp} there is a free finite module $F$ and maps
+$h: R^J/N \to F$ and $g: F \to M$ such that the natural map
+$f_e: R^J/N \to M$ factors as $R^J/N \xrightarrow{h} F
+\xrightarrow{g} M$. We are going to realize $F$ as $M_{e'}$ for some $e' \geq
+e$.
+
+\medskip\noindent
+Let $\{ b_1, \ldots, b_n \}$ be a finite basis of $F$. Choose $n$ distinct
+elements $i_1, \ldots, i_n \in I$ such that $i_{\ell} \notin J$ for all $\ell$,
+and such that the image of $x_{i_{\ell}}$ under $f: R^I \to M$ equals
+the image of $b_{\ell}$ under $g: F \to M$. This is possible since
+every element of $M$ can be written as $f(x_i)$ for infinitely many
+distinct $i \in I$ (by our choice of $I$). Now let
+$J' = J \cup \{i_1, \ldots , i_n \}$, and define $R^{J'}
+\to F$ by $x_i \mapsto h(x_i)$ for $i \in J$ and $x_{i_{\ell}} \mapsto
+b_{\ell}$ for $\ell = 1, \ldots, n$. Let $N' = \Ker(R^{J'} \to F)$.
+Observe:
+\begin{enumerate}
+\item The square
+$$
+\xymatrix{
+R^{J'} \ar[r] \ar@{^{(}->}[d] & F \ar[d]^{g} \\
+R^{I} \ar[r]_{f} & M
+}
+$$
+is commutative,
+%$R^{J'} \to F$ factors $f: R^I \to M$,
+hence $N' \subset K = \Ker(f)$;
+\item $R^{J'} \to F$ is a surjection onto a free finite module, hence
+it splits and so $N'$ is finitely generated;
+\item $J \subset J'$ and $N \subset N'$.
+\end{enumerate}
+By (1) and (2) $e' = (J', N')$ is in $E$, by (3) $e' \geq e$, and by
+construction $M_{e'} = R^{J'}/N' \cong F$ is free.
+\end{proof}
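+
+\medskip\noindent
+For example, the flat $\mathbf{Z}$-module $\mathbf{Q}$ is visibly such
+a colimit: it is the colimit of the directed system
+$$
+\mathbf{Z} \xrightarrow{2} \mathbf{Z} \xrightarrow{3}
+\mathbf{Z} \xrightarrow{4} \ldots
+$$
+of free modules of rank $1$, where the $n$th copy of $\mathbf{Z}$ maps
+to $\mathbf{Q}$ by $1 \mapsto 1/n!$ and the $n$th transition map is
+multiplication by $n + 1$.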
+
+\section{Universally injective module maps}
+\label{section-universally-injective}
+
+\noindent
+Next we discuss universally injective module maps, which
+are in a sense complementary to flat modules (see Lemma
+\ref{lemma-flat-universally-injective}). We follow Lazard's thesis
+\cite{Autour}; also see \cite{Lam}.
+
+\begin{definition}
+\label{definition-universally-injective}
+Let $f: M \to N$ be a map of $R$-modules. Then $f$ is called
+{\it universally injective} if for every $R$-module $Q$, the map $f
+\otimes_R \text{id}_Q: M \otimes_R Q \to N \otimes_R Q$
+is injective. A sequence $0 \to M_1 \to M_2 \to M_3
+\to 0$ of $R$-modules is called {\it universally exact} if it is exact
+and $M_1 \to M_2$ is universally injective.
+\end{definition}
+
+\begin{example}
+\label{example-universally-exact}
+Examples of universally exact sequences.
+\begin{enumerate}
+\item A split short exact sequence is universally exact since tensoring
+commutes with taking direct sums.
+\item The colimit of a directed system of universally exact sequences is
+universally exact. This follows from the fact that taking directed colimits is
+exact and that tensoring commutes with taking colimits. In particular the
+colimit of a directed system of split exact sequences is universally exact. We
+will see below that, conversely, any universally exact sequence arises in this
+way.
+\end{enumerate}
+\end{example}
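+
+\medskip\noindent
+On the other hand, an injective map need not be universally injective.
+For example, multiplication by $2$ on $\mathbf{Z}$ is injective, but
+tensoring with $Q = \mathbf{Z}/2\mathbf{Z}$ yields the zero map
+$\mathbf{Z}/2\mathbf{Z} \to \mathbf{Z}/2\mathbf{Z}$. Hence the exact
+sequence
+$$
+0 \to \mathbf{Z} \xrightarrow{2} \mathbf{Z} \to
+\mathbf{Z}/2\mathbf{Z} \to 0
+$$
+is not universally exact.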
+
+\noindent
+Next we give a list of criteria for a short exact sequence to be universally
+exact. They are analogues of criteria for flatness given above. Parts (3)-(6)
+below correspond, respectively, to the criteria for flatness given in
+Lemmas \ref{lemma-flat-eq}, \ref{lemma-flat-factors-free},
+\ref{lemma-flat-surjective-hom}, and
+Theorem \ref{theorem-lazard}.
+
+\begin{theorem}
+\label{theorem-universally-exact-criteria}
+Let
+$$
+0 \to M_1 \xrightarrow{f_1} M_2 \xrightarrow{f_2} M_3 \to 0
+$$
+be an exact sequence of $R$-modules. The following are equivalent:
+\begin{enumerate}
+\item The sequence $0 \to M_1 \to M_2 \to M_3
+\to 0$ is universally exact.
+\item For every finitely presented $R$-module $Q$, the sequence
+$$
+0 \to M_1 \otimes_R Q \to M_2 \otimes_R Q \to
+M_3 \otimes_R Q \to 0
+$$
+is exact.
+\item Given elements $x_i \in M_1$ $(i = 1, \ldots, n)$, $y_j \in M_2$ $(j = 1,
+\ldots, m)$, and $a_{ij} \in R$ $(i = 1, \ldots, n, j = 1, \ldots, m)$ such that
+for all $i$
+$$
+f_1(x_i) = \sum\nolimits_j a_{ij} y_j,
+$$
+there exist $z_j \in M_1$ $(j = 1, \ldots, m)$ such that for all $i$,
+$$
+x_i = \sum\nolimits_j a_{ij} z_j .
+$$
+\item Given a commutative diagram of $R$-module maps
+$$
+\xymatrix{
+R^n \ar[r] \ar[d] & R^m \ar[d] \\
+M_1 \ar[r]^{f_1} & M_2
+}
+$$
+where $m$ and $n$ are integers, there exists a map $R^m \to M_1$ making
+the top triangle commute.
+\item For every finitely presented $R$-module $P$, the $R$-module
+map $\Hom_R(P, M_2) \to \Hom_R(P, M_3)$ is surjective.
+\item The sequence $0 \to M_1 \to M_2 \to M_3
+\to 0$ is the colimit of a directed system of split exact sequences of
+the form
+$$
+0 \to M_{1} \to M_{2, i} \to M_{3, i} \to 0
+$$
+where the $M_{3, i}$ are finitely presented.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Obviously (1) implies (2).
+
+\medskip\noindent
+Next we show (2) implies (3). Let $f_1(x_i) = \sum_j a_{ij} y_j$ be relations
+as in (3). Let $(d_j)$ be a basis for $R^m$, $(e_i)$ a basis for $R^n$, and
+$R^m \to R^n$ the map given by $d_j \mapsto \sum_i a_{ij} e_i$.
+Let $Q$ be the cokernel of $R^m \to R^n$. Then tensoring the exact
+sequence $R^m \to R^n \to Q \to 0$ with $M_1$ and with $M_2$, using
+$f_1 : M_1 \to M_2$ to connect the two rows, we get a
+commutative diagram
+$$
+\xymatrix{
+M_1^{\oplus m} \ar[r] \ar[d] & M_1^{\oplus n} \ar[r] \ar[d] & M_1 \otimes_R Q
+\ar[r] \ar[d] & 0 \\
+M_2^{\oplus m} \ar[r] & M_2^{\oplus n} \ar[r] & M_2 \otimes_R Q \ar[r] & 0
+}
+$$
+where $M_1^{\oplus m} \to M_1^{\oplus n}$ is given by
+$$
+(z_1, \ldots, z_m) \mapsto
+(\sum\nolimits_j a_{1j} z_j, \ldots, \sum\nolimits_j a_{nj} z_j),
+$$
+and $M_2^{\oplus m} \to M_2^{\oplus n}$ is given similarly. We want to
+show $x = (x_1, \ldots, x_n) \in M_1^{\oplus n}$ is in the image of $M_1^{\oplus
+m} \to M_1^{\oplus n}$. By (2) the map $M_1 \otimes Q \to M_2
+\otimes Q$ is injective, hence by exactness of the top row it is enough to show
+$x$ maps to $0$ in $M_2 \otimes Q$, and so by exactness of the bottom row it is
+enough to show the image of $x$ in $M_2^{\oplus n}$ is in the image of
+$M_2^{\oplus m} \to M_2^{\oplus n}$. This is true by assumption.
+
+\medskip\noindent
+Condition (4) is just a translation of (3) into diagram form.
+
+\medskip\noindent
+Next we show (4) implies (5). Let $\varphi : P \to M_3$ be a map from a
+finitely presented $R$-module $P$. We must show that $\varphi$ lifts to a map
+$P \to M_2$. Choose a presentation of $P$,
+$$
+R^n \xrightarrow{g_1} R^m \xrightarrow{g_2} P \to 0.
+$$
+Using freeness of $R^n$ and $R^m$, we can construct $h_2: R^m \to M_2$
+and then $h_1: R^n \to M_1$ such that the following diagram commutes
+$$
+\xymatrix{
+ & R^n \ar[r]^{g_1} \ar[d]^{h_1} & R^m \ar[r]^{g_2} \ar[d]^{h_2} & P
+\ar[r] \ar[d]^{\varphi} & 0 \\
+0 \ar[r] & M_1 \ar[r]^{f_1} & M_2 \ar[r]^{f_2} & M_3 \ar[r] & 0 .
+}
+$$
+By (4) there is a map $k_1: R^m \to M_1$ such that $k_1 \circ g_1 =
+h_1$. Now define $h'_2: R^m \to M_2$ by $h_2' = h_2 - f_1 \circ k_1$.
+Then
+$$
+h'_2 \circ g_1 = h_2 \circ g_1 - f_1 \circ k_1 \circ g_1 =
+h_2 \circ g_1 - f_1 \circ h_1 = 0 .
+$$
+Hence by passing to the quotient $h'_2$ defines a map $\varphi': P \to
+M_2$ such that $\varphi' \circ g_2 = h_2'$. In a diagram, we have
+$$
+\xymatrix{
+R^m \ar[r]^{g_2} \ar[d]_{h'_2} & P \ar[d]^{\varphi} \ar[dl]_{\varphi'} \\
+M_2 \ar[r]^{f_2} & M_3.
+}
+$$
+where the top triangle commutes. We claim that $\varphi'$ is the desired lift,
+i.e.\ that $f_2 \circ \varphi' = \varphi$. From the definitions we have
+$$
+f_2 \circ \varphi' \circ g_2 = f_2 \circ h'_2 =
+f_2 \circ h_2 - f_2 \circ f_1 \circ k_1 = f_2 \circ h_2 =
+\varphi \circ g_2.
+$$
+Since $g_2$ is surjective, this finishes the proof.
+
+\medskip\noindent
+Now we show (5) implies (6). Write $M_{3}$ as the colimit of a directed system
+of finitely presented modules $M_{3, i}$, see
+Lemma \ref{lemma-module-colimit-fp}. Let $M_{2, i}$ be the fiber product
+of $M_{3, i}$ and $M_{2}$ over $M_{3}$---by definition this is the submodule of
+$M_2 \times M_{3, i}$ consisting of elements whose two projections onto $M_3$
+are equal. Let $M_{1, i}$ be the kernel of the projection
+$M_{2, i} \to M_{3, i}$. Then we have a directed system of exact sequences
+$$
+0 \to M_{1, i} \to M_{2, i} \to M_{3, i} \to 0,
+$$
+and for each $i$ a map of exact sequences
+$$
+\xymatrix{
+0 \ar[r] & M_{1, i} \ar[d] \ar[r] & M_{2, i} \ar[r] \ar[d] & M_{3, i} \ar[d]
+\ar[r] & 0 \\
+0 \ar[r] & M_{1} \ar[r] & M_{2} \ar[r] & M_{3} \ar[r] & 0
+}
+$$
+compatible with the directed system. From the definition of the fiber product
+$M_{2, i}$, it follows that the map $M_{1, i} \to M_1$ is an isomorphism.
+ By (5) there is a map $M_{3, i} \to M_{2}$ lifting $M_{3, i} \to
+M_3$, and by the universal property of the fiber product this gives rise to a
+section of $M_{2, i} \to M_{3, i}$. Hence the sequences
+$$
+0 \to M_{1, i} \to M_{2, i} \to M_{3, i} \to 0
+$$
+split. Passing to the colimit, we have a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] & \colim M_{1, i} \ar[d]^{\cong} \ar[r] & \colim M_{2, i}
+ \ar[r]
+\ar[d] & \colim M_{3, i} \ar[d]^{\cong} \ar[r] & 0 \\
+0 \ar[r] & M_{1} \ar[r] & M_{2} \ar[r] & M_{3} \ar[r] & 0
+}
+$$
+with exact rows and outer vertical maps isomorphisms. Hence $\colim
+M_{2, i}
+\to M_2$ is also an isomorphism and (6) holds.
+
+\medskip\noindent
+Condition (6) implies (1) by
+Example \ref{example-universally-exact} (2).
+\end{proof}
+
+\noindent
+The previous theorem shows that a universally exact sequence is always a
+colimit of split short exact sequences. If the cokernel of a universally
+injective map is finitely presented, then in fact the map itself splits:
+
+\begin{lemma}
+\label{lemma-universally-exact-split}
+Let
+$$
+0 \to M_1 \to M_2 \to M_3 \to 0
+$$
+be an exact sequence of $R$-modules. Suppose $M_3$ is of finite presentation.
+Then
+$$
+0 \to M_1 \to M_2 \to M_3 \to 0
+$$
+is universally exact if and only if it is split.
+\end{lemma}
+
+\begin{proof}
+A split short exact sequence is always universally exact, see
+Example \ref{example-universally-exact}.
+Conversely, if the sequence is universally exact, then by
+Theorem \ref{theorem-universally-exact-criteria} (5)
+applied to $P = M_3$, the map $M_2 \to M_3$ admits a section.
+\end{proof}
+
+\noindent
+The following lemma shows how universally injective maps are complementary to
+flat modules.
+
+\begin{lemma}
+\label{lemma-flat-universally-injective}
+Let $M$ be an $R$-module. Then $M$ is flat if and only if any exact sequence
+of $R$-modules
+$$
+0 \to M_1 \to M_2 \to M \to 0
+$$
+is universally exact.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-flat-surjective-hom} and
+Theorem \ref{theorem-universally-exact-criteria} (5).
+\end{proof}
+
+\begin{example}
+\label{example-universally-exact-non-split-non-flat}
+Non-split and non-flat universally exact sequences.
+\begin{enumerate}
+\item In spite of
+Lemma \ref{lemma-universally-exact-split},
+it is possible to have a short exact sequence of $R$-modules
+$$
+0 \to M_1 \to M_2 \to M_3 \to 0
+$$
+that is universally exact but non-split. For instance, take $R = \mathbf{Z}$,
+let $M_1 = \bigoplus_{n=1}^{\infty} \mathbf{Z}$, let $M_{2} = \prod_{n =
+1}^{\infty} \mathbf{Z}$, and let $M_{3}$ be the cokernel of the inclusion $M_1
+\to M_2$. Then $M_1, M_2, M_3$ are all flat since they are torsion-free
+(More on Algebra, Lemma \ref{more-algebra-lemma-dedekind-torsion-free-flat}),
+so by Lemma \ref{lemma-flat-universally-injective},
+$$
+0 \to M_1 \to M_2 \to M_3 \to 0
+$$
is universally exact. However there can be no section $s: M_3 \to
M_2$. In fact, if $x$ is the image of $(2, 2^2, 2^3, \ldots) \in M_2$ in $M_3$,
then $x \neq 0$ (as $(2, 2^2, 2^3, \ldots)$ does not lie in $M_1$), yet any
module map $s: M_3 \to M_2$ must kill $x$. This is because $x
\in 2^n M_3$ for any $n \geq 1$, hence $s(x)$ is divisible by $2^n$ for all $n
\geq 1$ and so must be $0$. Thus $s$ cannot be a section.
+\item In spite of Lemma \ref{lemma-flat-universally-injective}, it is possible
+to have a short exact sequence of $R$-modules
+$$
+0 \to M_1 \to M_2 \to M_3 \to 0
+$$
+that is universally exact but with $M_1, M_2, M_3$ all non-flat. In fact if $M$
+is any non-flat module, just take the split exact sequence
+$$
+0 \to M \to M \oplus M \to M \to 0.
+$$
+For instance over $R = \mathbf{Z}$, take $M$ to be any torsion module.
+\item Taking the direct sum of an exact sequence as in (1) with one as in (2),
+we get a short exact sequence of $R$-modules
+$$
+0 \to M_1 \to M_2 \to M_3 \to 0
+$$
+that is universally exact, non-split, and such that
+$M_1, M_2, M_3$ are all non-flat.
+\end{enumerate}
+\end{example}
+
+\begin{lemma}
+\label{lemma-ui-flat-domain}
+Let $0 \to M_1 \to M_2 \to M_3 \to 0$ be a universally exact sequence
+of $R$-modules, and suppose $M_2$ is flat.
+Then $M_1$ and $M_3$ are flat.
+\end{lemma}
+
+\begin{proof}
+Let $0 \to N \to N' \to N'' \to 0$ be a short exact sequence of
+$R$-modules. Consider the commutative diagram
+$$
+\xymatrix{
+M_1 \otimes_R N \ar[r] \ar[d] &
+M_2 \otimes_R N \ar[r] \ar[d] &
+M_3 \otimes_R N \ar[d] \\
+M_1 \otimes_R N' \ar[r] \ar[d] &
+M_2 \otimes_R N' \ar[r] \ar[d] &
+M_3 \otimes_R N' \ar[d] \\
+M_1 \otimes_R N'' \ar[r] &
+M_2 \otimes_R N'' \ar[r] &
+M_3 \otimes_R N''
+}
+$$
+(we have dropped the $0$'s on the boundary).
By assumption the rows give short exact sequences and the arrow
$M_2 \otimes N \to M_2 \otimes N'$ is injective. Since the composition
$M_1 \otimes N \to M_2 \otimes N \to M_2 \otimes N'$ is then injective and
factors through $M_1 \otimes N \to M_1 \otimes N'$, the latter map is
injective and we see that $M_1$
is flat. In particular the left and middle columns give rise to short
+exact sequences. It follows from a diagram chase that the arrow
+$M_3 \otimes N \to M_3 \otimes N'$ is injective. Hence $M_3$ is flat.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-injective-tensor}
+Let $R$ be a ring.
+Let $M \to M'$ be a universally injective $R$-module map.
+Then for any $R$-module $N$ the map $M \otimes_R N \to M' \otimes_R N$
+is universally injective.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-universally-injective}
+Let $R$ be a ring. A composition of universally injective
+$R$-module maps is universally injective.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-injective-permanence}
+Let $R$ be a ring. Let $M \to M'$ and $M' \to M''$ be $R$-module maps.
+If their composition $M \to M''$ is universally injective, then
+$M \to M'$ is universally injective.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-faithfully-flat-universally-injective}
+Let $R \to S$ be a faithfully flat ring map.
+Then $R \to S$ is universally injective as a map of $R$-modules.
+In particular $R \cap IS = I$ for any ideal $I \subset R$.
+\end{lemma}
+
+\begin{proof}
+Let $N$ be an $R$-module. We have to show that $N \to N \otimes_R S$ is
+injective. As $S$ is faithfully flat as an $R$-module, it suffices to prove
+this after tensoring with $S$. Hence it suffices to show that
+$N \otimes_R S \to N \otimes_R S \otimes_R S$,
$n \otimes s \mapsto n \otimes 1 \otimes s$ is injective. This is true
because there is a retraction, namely,
$n \otimes s \otimes s' \mapsto n \otimes ss'$. The final statement follows
by applying universal injectivity to $N = R/I$: injectivity of
$R/I \to R/I \otimes_R S = S/IS$ means exactly that $I = R \cap IS$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-injective-check-stalks}
+Let $R \to S$ be a ring map.
+Let $M \to M'$ be a map of $S$-modules.
+The following are equivalent
+\begin{enumerate}
+\item $M \to M'$ is universally injective as a map of $R$-modules,
+\item for each prime $\mathfrak q$ of $S$ the map
+$M_{\mathfrak q} \to M'_{\mathfrak q}$ is universally injective
+as a map of $R$-modules,
+\item for each maximal ideal $\mathfrak m$ of $S$ the map
+$M_{\mathfrak m} \to M'_{\mathfrak m}$ is universally injective
+as a map of $R$-modules,
+\item for each prime $\mathfrak q$ of $S$ the map
+$M_{\mathfrak q} \to M'_{\mathfrak q}$ is universally injective
+as a map of $R_{\mathfrak p}$-modules, where $\mathfrak p$ is the
+inverse image of $\mathfrak q$ in $R$, and
+\item for each maximal ideal $\mathfrak m$ of $S$ the map
+$M_{\mathfrak m} \to M'_{\mathfrak m}$ is universally injective
+as a map of $R_{\mathfrak p}$-modules, where $\mathfrak p$ is the
+inverse image of $\mathfrak m$ in $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $N$ be an $R$-module. Let $\mathfrak q$ be a prime of $S$ lying over
+the prime $\mathfrak p$ of $R$. Then we have
+$$
+(M \otimes_R N)_{\mathfrak q} =
+M_{\mathfrak q} \otimes_R N =
+M_{\mathfrak q} \otimes_{R_{\mathfrak p}} N_{\mathfrak p}.
+$$
+Moreover, the same thing holds for $M'$ and localization is exact.
+Also, if $N$ is an $R_{\mathfrak p}$-module, then $N_{\mathfrak p} = N$.
+Using this the equivalences can be proved in a straightforward manner.
+
+\medskip\noindent
+For example, suppose that (5) holds. Let
+$K = \Ker(M \otimes_R N \to M' \otimes_R N)$. By the remarks
+above we see that $K_{\mathfrak m} = 0$ for each maximal ideal $\mathfrak m$
+of $S$. Hence $K = 0$ by
+Lemma \ref{lemma-characterize-zero-local}.
+Thus (1) holds. Conversely, suppose that (1) holds. Take any
+$\mathfrak q \subset S$ lying over $\mathfrak p \subset R$.
+Take any module $N$ over $R_{\mathfrak p}$. Then
+by assumption $\Ker(M \otimes_R N \to M' \otimes_R N) = 0$.
+Hence by the formulae above and the fact that $N = N_{\mathfrak p}$
+we see that
+$\Ker(M_{\mathfrak q} \otimes_{R_{\mathfrak p}} N \to
+M'_{\mathfrak q} \otimes_{R_{\mathfrak p}} N) = 0$. In other words
+(4) holds. Of course (4) $\Rightarrow$ (5) is immediate. Hence
+(1), (4) and (5) are all equivalent.
+We omit the proof of the other equivalences.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-injective-localize}
+Let $\varphi : A \to B$ be a ring map. Let $S \subset A$ and
+$S' \subset B$ be multiplicative subsets such that $\varphi(S) \subset S'$.
+Let $M \to M'$ be a map of $B$-modules.
+\begin{enumerate}
+\item If $M \to M'$ is universally injective as a map of $A$-modules,
+then $(S')^{-1}M \to (S')^{-1}M'$ is universally injective as a map of
+$A$-modules and as a map of $S^{-1}A$-modules.
+\item If $M$ and $M'$ are $(S')^{-1}B$-modules, then $M \to M'$
+is universally injective as a map of $A$-modules if and only if
+it is universally injective as a map of $S^{-1}A$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+You can prove this using
+Lemma \ref{lemma-universally-injective-check-stalks}
+but you can also prove it directly as follows.
+Assume $M \to M'$ is $A$-universally injective.
+Let $Q$ be an $A$-module. Then $Q \otimes_A M \to Q \otimes_A M'$
+is injective. Since localization is exact we see that
+$(S')^{-1}(Q \otimes_A M) \to (S')^{-1}(Q \otimes_A M')$ is injective.
+As $(S')^{-1}(Q \otimes_A M) = Q \otimes_A (S')^{-1}M$ and similarly for $M'$
+we see that
+$Q \otimes_A (S')^{-1}M \to Q \otimes_A (S')^{-1}M'$ is injective, hence
+$(S')^{-1}M \to (S')^{-1}M'$ is universally injective as a map of
+$A$-modules. This proves the first part of (1).
+To see (2) we can use the following two facts: (a) if $Q$ is an
+$S^{-1}A$-module, then $Q \otimes_A S^{-1}A = Q$, i.e., tensoring
+with $Q$ over $A$ is the same thing as tensoring with $Q$ over $S^{-1}A$,
+(b) if $M$ is any $A$-module on which the elements of $S$ are invertible,
+then $M \otimes_A Q = M \otimes_{S^{-1}A} S^{-1}Q$.
+Part (2) follows from this immediately.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-universally-injective-into-flat}
+Let $R$ be a ring and let $M \to M'$ be a map of $R$-modules.
+If $M'$ is flat, then $M \to M'$ is universally injective if
+and only if $M/IM \to M'/IM'$ is injective for every finitely
+generated ideal $I$ of $R$.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that $M \otimes_R Q \to M' \otimes_R Q$ is
+injective for every finite $R$-module $Q$, see
+Theorem \ref{theorem-universally-exact-criteria}.
+Then $Q$ has a finite filtration
+$0 = Q_0 \subset Q_1 \subset \ldots \subset Q_n = Q$
+by submodules whose subquotients
+are isomorphic to cyclic modules $R/I_i$, see
+Lemma \ref{lemma-trivial-filter-finite-module}.
+Since $M'$ is flat, we obtain a filtration
+$$
+\xymatrix{
+M \otimes Q_1 \ar[r] \ar[d] &
+M \otimes Q_2 \ar[r] \ar[d] &
+\ldots \ar[r] &
+M \otimes Q \ar[d] \\
+M' \otimes Q_1 \ar@{^{(}->}[r] &
+M' \otimes Q_2 \ar@{^{(}->}[r] &
+\ldots \ar@{^{(}->}[r] &
+M' \otimes Q
+}
+$$
+of $M' \otimes_R Q$ by submodules $M' \otimes_R Q_i$ whose successive
+quotients are $M' \otimes_R R/I_i = M'/I_iM'$. A simple induction argument
+shows that it suffices to check $M/I_i M \to M'/I_i M'$ is injective.
+Note that the collection of finitely generated ideals $I'_i \subset I_i$
+is a directed set. Thus $M/I_iM = \colim M/I'_iM$ is a filtered
+colimit, similarly for $M'$, the maps $M/I'_iM \to M'/I'_i M'$ are
+injective by assumption, and since filtered colimits are exact
+(Lemma \ref{lemma-directed-colimit-exact}) we conclude.
+\end{proof}
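
\noindent
To illustrate the lemma in a simple case, take $R = \mathbf{Z}$ and consider
the inclusion $M = 2\mathbf{Z} \to M' = \mathbf{Z}$. Here $M'$ is flat, and
for the finitely generated ideal $I = (2)$ the map $M/IM \to M'/IM'$ sends
the class of $2$ to $2 \bmod 2\mathbf{Z} = 0$, so it is not injective.
Consistently with the lemma, the inclusion is not universally injective:
tensoring with $\mathbf{Z}/2\mathbf{Z}$ gives the zero map
$$
2\mathbf{Z} \otimes_{\mathbf{Z}} \mathbf{Z}/2\mathbf{Z}
\longrightarrow
\mathbf{Z} \otimes_{\mathbf{Z}} \mathbf{Z}/2\mathbf{Z}
$$
whose source is isomorphic to $\mathbf{Z}/2\mathbf{Z}$.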
+
+
+
+\section{Descent for finite projective modules}
+\label{section-finite-projective}
+
+\noindent
+In this section we give an elementary proof of the fact that the property of
+being a {\it finite} projective module descends along faithfully flat ring
+maps. The proof does not apply when we drop the finiteness condition.
+However, the method is indicative of the one we shall use to prove descent
+for the property of being a {\it countably generated} projective module---see
+the comments at the end of this section.
+
+\begin{lemma}
+\label{lemma-finite-projective-again}
+Let $M$ be an $R$-module. Then $M$ is finite projective if and only if $M$ is
+finitely presented and flat.
+\end{lemma}
+
+\begin{proof}
+This is part of
+Lemma \ref{lemma-finite-projective}.
+However, at this point we can give a more elegant proof of the implication
+(1) $\Rightarrow$ (2) of that lemma as follows.
+If $M$ is finitely presented and flat, then take a surjection
+$R^n \to M$. By
+Lemma \ref{lemma-flat-surjective-hom}
+applied to $P = M$, the map $R^n \to M$ admits a section.
+So $M$ is a direct summand of a free module and hence projective.
+\end{proof}
+
+\noindent
+Here are some properties of modules that descend.
+
+\begin{lemma}
+\label{lemma-descend-properties-modules}
+Let $R \to S$ be a faithfully flat ring map.
+Let $M$ be an $R$-module. Then
+\begin{enumerate}
+\item if the $S$-module $M \otimes_R S$ is of finite type, then
+$M$ is of finite type,
+\item if the $S$-module $M \otimes_R S$ is of finite presentation, then
+$M$ is of finite presentation,
+\item if the $S$-module $M \otimes_R S$ is flat, then
+$M$ is flat, and
+\item add more here as needed.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Assume $M \otimes_R S$ is of finite type. Let $y_1, \ldots, y_m$ be generators
of $M \otimes_R S$ over $S$. Write $y_j = \sum_i x_{ij} \otimes f_{ij}$ for
some $x_{ij} \in M$ and $f_{ij} \in S$, and let $x_1, \ldots, x_n \in M$ be
the collection of all the elements $x_{ij}$. Then the map
$\varphi : R^{\oplus n} \to M$ sending the $i$th standard basis vector to $x_i$
+has the property that
+$\varphi \otimes \text{id}_S : S^{\oplus n} \to M \otimes_R S$
+is surjective. Since $R \to S$ is faithfully flat we see that
+$\varphi$ is surjective, and $M$ is finitely generated.
+
+\medskip\noindent
+Assume $M \otimes_R S$ is of finite presentation. By (1) we see that
+$M$ is of finite type. Choose a surjection $R^{\oplus n} \to M$ and
+denote $K$ the kernel. As $R \to S$ is flat we see that $K \otimes_R S$
+is the kernel of the base change $S^{\oplus n} \to M \otimes_R S$.
+As $M \otimes_R S$ is of finite presentation we conclude that $K \otimes_R S$
+is of finite type. Hence by (1) we see that $K$ is of finite type
+and hence $M$ is of finite presentation.
+
+\medskip\noindent
+Part (3) is
+Lemma \ref{lemma-flatness-descends}.
+\end{proof}
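
\noindent
The faithfulness assumption in the lemma cannot be weakened to flatness. For
instance, the ring map $\mathbf{Z} \to \mathbf{Q}$ is flat but not faithfully
flat, and for $M = \mathbf{Q}$ the $\mathbf{Q}$-module
$M \otimes_{\mathbf{Z}} \mathbf{Q} = \mathbf{Q}$ is of finite type (even free
of rank $1$), whereas $\mathbf{Q}$ is not a finite type $\mathbf{Z}$-module.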
+
+\begin{proposition}
+\label{proposition-ffdescent-finite-projectivity}
+Let $R \to S$ be a faithfully flat ring map. Let $M$ be an $R$-module.
+If the $S$-module $M \otimes_R S$ is finite projective, then $M$ is finite
+projective.
+\end{proposition}
+
+\begin{proof}
+Follows from
+Lemmas \ref{lemma-finite-projective-again} and
+\ref{lemma-descend-properties-modules}.
+\end{proof}
+
+\noindent
+The next few sections are about removing the finiteness assumption by using
+d\'evissage to reduce to the countably generated case. In the countably
+generated case, the strategy is to find a characterization of countably
+generated projective modules analogous to
+Lemma \ref{lemma-finite-projective-again},
+and then to prove directly that this characterization descends. We do this by
+introducing the notion of a Mittag-Leffler module and proving that if a module
+$M$ is countably generated, then it is projective if and only if it is flat and
+Mittag-Leffler (Theorem \ref{theorem-projectivity-characterization}). When $M$
+is finitely generated, this statement reduces to Lemma
+\ref{lemma-finite-projective-again} (since, according to
+Example \ref{example-ML} (1),
+a finitely generated module is Mittag-Leffler if and only if it is
+finitely presented).
+
+\section{Transfinite d\'evissage of modules}
+\label{section-transfinite-devissage}
+
+\noindent
+In this section we introduce a d\'evissage technique for decomposing a module
+into a direct sum. The main result is that a projective module is a direct sum
+of countably generated modules (Theorem \ref{theorem-projective-direct-sum}
+below). We follow \cite{Kaplansky}.
+
+
+\begin{definition}
+\label{definition-devissage}
+Let $M$ be an $R$-module. A {\it direct sum d\'evissage} of $M$ is a family
+of submodules $(M_{\alpha})_{\alpha \in S}$, indexed by an ordinal $S$ and
+increasing (with respect to inclusion), such that:
+\begin{enumerate}
+\item[(0)] $M_0 = 0$;
+\item[(1)] $M = \bigcup_{\alpha} M_{\alpha}$;
+\item[(2)] if $\alpha \in S$ is a limit ordinal, then $M_{\alpha} =
+\bigcup_{\beta < \alpha} M_{\beta}$;
+\item[(3)] if $\alpha + 1 \in S$, then $M_{\alpha}$ is a direct summand of
+$M_{\alpha + 1}$.
+\end{enumerate}
+If moreover
+\begin{enumerate}
+\item[(4)] $M_{\alpha + 1}/M_{\alpha}$ is countably generated for
+$\alpha + 1 \in S$,
+\end{enumerate}
+then $(M_{\alpha})_{\alpha \in S}$ is called a {\it Kaplansky d\'evissage}
+of $M$.
+\end{definition}
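
\noindent
A basic example: if $M = \bigoplus_{n \in \mathbf{N}} R$, then taking
$S = \omega + 1$ and setting $M_n = \bigoplus_{j < n} R$ for $n < \omega$ and
$M_{\omega} = M$ gives a Kaplansky d\'evissage of $M$. Indeed, each $M_n$ is
a direct summand of $M_{n + 1}$ with quotient isomorphic to $R$, and
$M_{\omega}$ is the union of the $M_n$.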
+
+\noindent
+The terminology is justified by the following lemma.
+
+\begin{lemma}
+\label{lemma-direct-sum-devissage}
+Let $M$ be an $R$-module. If $(M_{\alpha})_{\alpha \in S}$ is a direct sum
+d\'evissage of $M$, then
+$M \cong \bigoplus_{\alpha + 1 \in S} M_{\alpha + 1}/M_{\alpha}$.
+\end{lemma}
+
+\begin{proof}
By property (3) of a direct sum d\'evissage, there is an inclusion
$M_{\alpha + 1}/M_{\alpha} \to M$ for each $\alpha$ with $\alpha + 1 \in S$.
Consider the
+map
+$$
+f : \bigoplus\nolimits_{\alpha + 1\in S} M_{\alpha + 1}/M_{\alpha} \to M
+$$
+given by the sum of these inclusions.
+Further consider the restrictions
+$$
+f_{\beta} :
+\bigoplus\nolimits_{\alpha + 1 \leq \beta} M_{\alpha + 1}/M_{\alpha}
+\longrightarrow
+M
+$$
+for $\beta\in S$. Transfinite induction on $S$ shows that the image of
+$f_{\beta}$ is $M_{\beta}$. For $\beta=0$ this is true by $(0)$. If $\beta+1$
+is a successor ordinal and it is true for $\beta$, then it is true for
+$\beta + 1$ by (3). And if $\beta$ is a limit ordinal and it is true for
+$\alpha < \beta$, then it is true for $\beta$ by (2). Hence $f$ is surjective
+by (1).
+
+\medskip\noindent
+Transfinite induction on $S$ also shows that the restrictions $f_{\beta}$
+are injective. For $\beta = 0$ it is true. If $\beta+1$ is a
+successor ordinal and $f_{\beta}$ is injective, then let $x$ be in the kernel
+and write $x = (x_{\alpha + 1})_{\alpha + 1 \leq \beta + 1}$ in terms of its
+components $x_{\alpha + 1} \in M_{\alpha + 1}/M_{\alpha}$. By property (3) and
+the fact that the image of $f_{\beta}$ is $M_{\beta}$ both
+$(x_{\alpha + 1})_{\alpha + 1 \leq \beta}$ and $x_{\beta + 1}$ map to $0$.
Hence $x_{\beta+1} = 0$ and, by the assumption that the restriction
$f_{\beta}$ is injective, also $x_{\alpha + 1} = 0$
for every $\alpha + 1 \leq \beta$. So $x = 0$ and $f_{\beta+1}$ is injective.
+If $\beta$ is a limit ordinal consider an element $x$ of the kernel. Then $x$
+is already contained in the domain of $f_{\alpha}$ for some $\alpha < \beta$.
Thus $x = 0$ by the injectivity of $f_{\alpha}$, which finishes the induction.
+We conclude that $f$ is injective since $f_{\beta}$ is for each $\beta \in S$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Kaplansky-devissage}
+Let $M$ be an $R$-module. Then $M$ is a direct sum of countably generated
+$R$-modules if and only if it admits a Kaplansky d\'evissage.
+\end{lemma}
+
+\begin{proof}
Lemma \ref{lemma-direct-sum-devissage} takes care of the ``if'' direction.
Conversely, suppose $M =
+\bigoplus_{i \in I} N_i$ where each $N_i$ is a countably generated $R$-module.
Well-order $I$ so that we can think of it as an ordinal. Then setting $M_i =
\bigoplus_{j < i} N_j$ for $i \in I + 1$, so that in particular $M_I = M$,
gives a Kaplansky d\'evissage $(M_i)_{i \in I + 1}$ of $M$.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-kaplansky-direct-sum}
+Suppose $M$ is a direct sum of countably generated $R$-modules. If $P$ is a
+direct summand of $M$, then $P$ is also a direct sum of countably generated
+$R$-modules.
+\end{theorem}
+
+\begin{proof}
+Write $M = P \oplus Q$. We are going to construct a Kaplansky d\'evissage
+$(M_{\alpha})_{\alpha \in S}$ of $M$ which, in addition to the defining
+properties (0)-(4), satisfies:
+\begin{enumerate}
+\item[(5)] Each $M_{\alpha}$ is a direct summand of $M$;
\item[(6)] $M_{\alpha} = P_{\alpha} \oplus Q_{\alpha}$, where
$P_{\alpha} = P \cap M_{\alpha}$ and $Q_{\alpha} = Q \cap M_{\alpha}$.
+\end{enumerate}
+(Note: if properties (0)-(2) hold, then in fact property (3) is equivalent to
+property (5).)
+
+\medskip\noindent
+To see how this implies the theorem, it is enough to show that
+$(P_{\alpha})_{\alpha \in S}$ forms a Kaplansky d\'evissage of $P$. Properties
+(0), (1), and (2) are clear. By (5) and (6) for $(M_{\alpha})$, each
+$P_{\alpha}$ is a direct summand of $M$. Since $P_{\alpha} \subset P_{\alpha +
+1}$, this implies $P_{\alpha}$ is a direct summand of $P_{\alpha + 1}$; hence
+(3) holds for $(P_{\alpha})$. For (4), note that
+$$
+M_{\alpha + 1}/M_{\alpha} \cong P_{\alpha + 1}/P_{\alpha} \oplus
+Q_{\alpha + 1}/Q_{\alpha},
+$$
+so $P_{\alpha + 1}/P_{\alpha}$ is countably generated because this is true of
+$M_{\alpha + 1}/M_{\alpha}$.
+
+\medskip\noindent
+It remains to construct the $M_{\alpha}$. Write $M = \bigoplus_{i \in I} N_i$
+where each $N_i$ is a countably generated $R$-module. Choose a well-ordering
+of $I$. By transfinite recursion we are going to define an increasing family
+of submodules $M_{\alpha}$ of $M$, one for each ordinal $\alpha$, such that
+$M_{\alpha}$ is a direct sum of some subset of the $N_i$.
+
+\medskip\noindent
+For $\alpha = 0$ let $M_{0} = 0$. If $\alpha$ is a limit ordinal and
+$M_{\beta}$ has been defined for all $\beta < \alpha$, then define $M_{\alpha}
+= \bigcup_{\beta < \alpha} M_{\beta}$. Since each $M_{\beta}$ for $\beta <
+\alpha$ is a direct sum of a subset of the $N_i$, the same will be true of
+$M_{\alpha}$. If $\alpha + 1$ is a successor ordinal and $M_{\alpha}$ has been
+defined, then define $M_{\alpha + 1}$ as follows. If $M_{\alpha} = M$, then let
+$M_{\alpha + 1} = M$. If not, choose the smallest $j \in I$ such that $N_j$ is
+not contained in $M_{\alpha}$. We will construct an infinite matrix $(x_{mn}),
+m, n = 1, 2, 3, \ldots$ such that:
+\begin{enumerate}
+\item $N_j$ is contained in the submodule of $M$ generated by the entries
+$x_{mn}$;
+\item if we write any entry $x_{k\ell}$ in terms of its $P$- and
+$Q$-components, $x_{k\ell} = y_{k\ell} + z_{k\ell}$, then the matrix $(x_{mn})$
contains a set of generators for each $N_i$ for which $y_{k\ell}$ or
$z_{k\ell}$ has a nonzero component.
+\end{enumerate}
+Then we define $M_{\alpha + 1}$ to be the submodule of $M$ generated by
+$M_{\alpha}$ and all $x_{mn}$; by property (2) of the matrix $(x_{mn})$,
+$M_{\alpha + 1}$ will be a direct sum of some subset of the $N_i$.
+To construct the matrix $(x_{mn})$, let $x_{11}, x_{12}, x_{13}, \ldots$
+be a countable set of generators for $N_j$. Then if
+$x_{11} = y_{11} + z_{11}$ is the decomposition into $P$- and
+$Q$-components, let $x_{21}, x_{22}, x_{23}, \ldots$ be a countable
set of generators for the sum of the $N_i$ for which $y_{11}$ or $z_{11}$ has
a nonzero component. Repeat this process on $x_{12}$ to get elements $x_{31},
+x_{32}, \ldots$, the third row of our matrix. Repeat on $x_{21}$ to get the
+fourth row, on $x_{13}$ to get the fifth, and so on, going down along
+successive anti-diagonals as indicated below:
+$$
+\left(
+\vcenter{
+\xymatrix@R=2mm@C=2mm{
+x_{11} & x_{12} \ar[dl] & x_{13} \ar[dl] & x_{14} \ar[dl] & \ldots \\
+x_{21} & x_{22} \ar[dl] & x_{23} \ar[dl] & \ldots \\
+x_{31} & x_{32} \ar[dl] & \ldots \\
+x_{41} & \ldots \\
+\ldots
+}
+}
+\right).
+$$
+
+\medskip\noindent
+Transfinite induction on $I$ (using the fact that we constructed
+$M_{\alpha + 1}$
+to contain $N_j$ for the smallest $j$ such that $N_j$ is not contained in
+$M_{\alpha}$) shows that for each $i \in I$, $N_i$ is contained in some
+$M_{\alpha}$. Thus, there is some large enough ordinal $S$ satisfying: for
+each $i \in I$ there is $\alpha \in S$ such that $N_i$ is contained in
+$M_{\alpha}$. This means $(M_{\alpha})_{\alpha \in S}$ satisfies property (1)
+of a Kaplansky d\'evissage of $M$. The family $(M_{\alpha})_{\alpha \in S}$
+moreover satisfies the other defining properties, and also (5) and (6) above:
+properties (0), (2), (4), and (6) are clear by construction; property (5) is
+true because each $M_{\alpha}$ is by construction a direct sum of some $N_i$;
+and (3) is implied by (5) and the fact that $M_{\alpha} \subset M_{\alpha + 1}$.
+\end{proof}
+
+\noindent
+As a corollary we get the result for projective modules stated at the beginning
+of the section.
+
+\begin{theorem}
+\label{theorem-projective-direct-sum}
+\begin{slogan}
+Any projective module is a direct sum of countably generated
+projective modules.
+\end{slogan}
+If $P$ is a projective $R$-module, then $P$ is a direct sum of countably
+generated projective $R$-modules.
+\end{theorem}
+
+\begin{proof}
+A module is projective if and only if it is a direct summand of a free module,
+so this follows from Theorem \ref{theorem-kaplansky-direct-sum}.
+\end{proof}
+
+
+
+\section{Projective modules over a local ring}
+\label{section-projective-local-ring}
+
+\noindent
+In this section we prove a very cute result:
+a projective module $M$ over a local ring is free
+(Theorem \ref{theorem-projective-free-over-local-ring} below).
+Note that with the additional assumption that $M$ is finite, this result is
+Lemma \ref{lemma-finite-flat-local}.
+In general we have:
+
+\begin{lemma}
+\label{lemma-projective-free}
+Let $R$ be a ring. Then every projective $R$-module is free if and only if
+every countably generated projective $R$-module is free.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+Theorem \ref{theorem-projective-direct-sum}.
+\end{proof}
+
+\noindent
+Here is a criterion for a countably generated module to be free.
+
+\begin{lemma}
+\label{lemma-freeness-criteria}
+Let $M$ be a countably generated $R$-module with the following property:
+if $M = N \oplus N'$ with $N'$ a finite free $R$-module, then
+any element of $N$ is contained in a free direct summand of $N$.
+Then $M$ is free.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, x_2, \ldots$ be a countable set of generators for $M$.
+We inductively construct finite free direct summands
+$F_1, F_2, \ldots$ of $M$ such that for all $n$ we have
+that $F_1 \oplus \ldots \oplus F_n$ is a direct summand of $M$
+which contains $x_1, \ldots, x_n$. Namely, given $F_1, \ldots, F_n$
+with the desired properties, write
+$$
+M = F_1 \oplus \ldots \oplus F_n \oplus N
+$$
+and let $x \in N$ be the image of $x_{n + 1}$.
+Then we can find a free direct summand $F_{n + 1} \subset N$
+containing $x$ by the assumption in the statement of the lemma.
+Of course we can replace $F_{n + 1}$ by a finite free direct
+summand of $F_{n + 1}$ and the induction step is complete.
Then $M = \bigoplus_{i = 1}^{\infty} F_i$ is free: each generator $x_n$ lies
in $F_1 \oplus \ldots \oplus F_n$, so these finite free direct summands
exhaust $M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projective-freeness-criteria}
+Let $P$ be a projective module over a local ring $R$. Then any element of $P$
+is contained in a free direct summand of $P$.
+\end{lemma}
+
+\begin{proof}
+Since $P$ is projective it is a direct summand of some free $R$-module $F$, say
+$F = P \oplus Q$. Let $x \in P$ be the element that we wish to show is
+contained in a free direct summand of $P$. Let $B$ be a basis of $F$ such that
+the number of basis elements needed in the expression of $x$ is minimal, say $x
+= \sum_{i=1}^n a_i e_i$ for some $e_i \in B$ and $a_i \in R$. Then no $a_j$
+can be expressed as a linear combination of the other $a_i$; for if $a_j =
+\sum_{i \neq j} a_i b_i$ for some $b_i \in R$, then replacing $e_i$ by $e_i +
+b_ie_j$ for $i \neq j$ and leaving unchanged the other elements of $B$, we get
+a new basis for $F$ in terms of which $x$ has a shorter expression.
+
+\medskip\noindent
+Let $e_i = y_i + z_i, y_i \in P, z_i \in Q$ be the decomposition of $e_i$ into
+its $P$- and $Q$-components. Write $y_i = \sum_{j=1}^{n} b_{ij} e_j + t_i$,
+where $t_i$ is a linear combination of elements in $B$ other than $e_1, \ldots,
+e_n$. To finish the proof it suffices to show that the matrix $(b_{ij})$ is
+invertible. For then the map $F \to F$ sending $e_i \mapsto y_i$ for
+$i=1, \ldots, n$ and fixing $B \setminus \{e_1, \ldots, e_n\}$ is an
+isomorphism,
+so that $y_1, \ldots, y_n$ together with $B \setminus \{e_1, \ldots, e_n\}$
+form a basis for $F$. Then the submodule $N$ spanned by $y_1, \ldots, y_n$
+is a free submodule of $P$; $N$ is a direct summand of $P$ since $N \subset P$
+and both $N$ and $P$ are direct summands of $F$; and $x \in N$ since $x \in P$
+implies $x = \sum_{i=1}^n a_i e_i = \sum_{i=1}^n a_i y_i$.
+
+\medskip\noindent
+Now we prove that $(b_{ij})$ is invertible. Plugging $y_i = \sum_{j=1}^{n}
+b_{ij} e_j + t_i$ into $\sum_{i=1}^n a_i e_i = \sum_{i=1}^n a_i y_i$ and
+equating the coefficients of $e_j$ gives $a_j = \sum_{i=1}^n a_i b_{ij}$. But
+as noted above, our choice of $B$ guarantees that no $a_j$ can be written as a
+linear combination of the other $a_i$. Thus $b_{ij}$ is a non-unit for $i \neq
+j$, and $1-b_{ii}$ is a non-unit---so in particular $b_{ii}$ is a unit---for
+all $i$. But a matrix over a local ring having units along the diagonal and
+non-units elsewhere is invertible, as its determinant is a unit.
+\end{proof}
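
\noindent
To spell out the last step of the proof above: when $n = 2$ the determinant of
$(b_{ij})$ equals $b_{11}b_{22} - b_{12}b_{21}$; the first term is a unit and
the second lies in the maximal ideal of $R$, so the determinant is a unit.
In general, in the expansion
$$
\det(b_{ij}) =
\sum\nolimits_{\sigma} \text{sgn}(\sigma) \prod\nolimits_i b_{i\sigma(i)}
$$
the term for $\sigma = \text{id}$ is a product of units, while every other
term contains an off-diagonal, hence non-unit, factor.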
+
+\begin{theorem}
+\label{theorem-projective-free-over-local-ring}
+\begin{slogan}
+Projective modules over local rings are free.
+\end{slogan}
+If $P$ is a projective module over a local ring $R$, then $P$ is free.
+\end{theorem}
+
+\begin{proof}
+Follows from Lemmas \ref{lemma-projective-free}, \ref{lemma-freeness-criteria},
+and \ref{lemma-projective-freeness-criteria}.
+\end{proof}
+
+
+
+\section{Mittag-Leffler systems}
+\label{section-mittag-leffler}
+
+\noindent
The purpose of this section is to define Mittag-Leffler systems
and to explain why this is a useful notion.
+
+\medskip\noindent
+In the following, $I$ will be a directed set, see
+Categories, Definition \ref{categories-definition-directed-set}.
+Let $(A_i, \varphi_{ji}: A_j \to A_i)$ be an inverse
+system of sets or of modules indexed by $I$, see
+Categories, Definition \ref{categories-definition-directed-system}.
+This is a directed inverse system as we assumed $I$ directed
+(Categories, Definition \ref{categories-definition-directed-system}).
+For each $i \in I$, the images $\varphi_{ji}(A_j) \subset A_i$ for $j \geq i$
+form a decreasing directed family of subsets (or submodules) of $A_i$. Let
+$A'_i = \bigcap_{j \geq i} \varphi_{ji}(A_j)$.
+Then $\varphi_{ji}(A'_j) \subset A'_i$ for $j \geq i$, hence by restricting
+we get a directed inverse system $(A'_i, \varphi_{ji}|_{A'_j})$.
+From the construction of the limit of an inverse system in the category
of sets or modules, we have $\lim A_i = \lim A'_i$. The Mittag-Leffler
condition on $(A_i, \varphi_{ji})$ is that, for each $i \in I$, the subset
$A'_i$ equals $\varphi_{ji}(A_j)$ for some $j \geq i$ (and hence equals
$\varphi_{ki}(A_k)$ for all $k \geq j$):
+
+\begin{definition}
+\label{definition-ML-system}
+Let $(A_i, \varphi_{ji})$ be a directed inverse system of sets over $I$. Then
+we say $(A_i, \varphi_{ji})$ is {\it Mittag-Leffler} if for
+each $i \in I$, the family $\varphi_{ji}(A_j) \subset A_i$ for
+$j \geq i$ stabilizes. Explicitly, this means that for each $i \in I$, there
+exists $j \geq i$ such that for $k \geq j$ we have $\varphi_{ki}(A_k) =
+\varphi_{ji}( A_j)$. If $(A_i, \varphi_{ji})$ is a directed inverse system
+of modules over a ring $R$, we say that it is Mittag-Leffler if the underlying
+inverse system of sets is Mittag-Leffler.
+\end{definition}
+
+\begin{example}
+\label{example-ML-surjective-maps}
+If $(A_i, \varphi_{ji})$ is a directed inverse system of sets or of modules and
+the maps $\varphi_{ji}$ are surjective, then clearly the system is
+Mittag-Leffler. Conversely, suppose $(A_i, \varphi_{ji})$ is Mittag-Leffler.
+Let $A'_i \subset A_i$ be the stable image of $\varphi_{ji}(A_j)$ for $j \geq
+i$. Then $\varphi_{ji}|_{A'_j}: A'_j \to A'_i$ is surjective for
+$j \geq i$ and $\lim A_i = \lim A'_i$. Hence the limit of the Mittag-Leffler
+system $(A_i, \varphi_{ji})$ can also be written as the limit of a directed
+inverse system over $I$ with surjective maps.
+\end{example}
+
+\begin{lemma}
+\label{lemma-ML-limit-nonempty}
+Let $(A_i, \varphi_{ji})$ be a directed inverse system over $I$. Suppose $I$
+is countable. If $(A_i, \varphi_{ji})$ is Mittag-Leffler and the $A_i$ are
+nonempty, then $\lim A_i$ is nonempty.
+\end{lemma}
+
+\begin{proof}
+Let $i_1, i_2, i_3, \ldots$ be an enumeration of the elements of $I$. Define
+inductively a sequence of elements $j_n \in I$ for $n = 1, 2, 3, \ldots$ by the
+conditions: $j_1 = i_1$, and $j_n \geq i_n$ and $j_n \geq j_m$ for $m < n$.
+ Then the sequence $j_n$ is increasing and forms a cofinal subset of $I$.
+Hence we may assume $I =\{1, 2, 3, \ldots \}$. So by Example
+\ref{example-ML-surjective-maps} we are reduced to showing that the limit of an
+inverse system of nonempty sets with surjective maps indexed by the positive
+integers is nonempty. This is obvious.
+\end{proof}
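
\noindent
The Mittag-Leffler hypothesis cannot simply be dropped from the lemma. As an
illustration (a standard example, not used elsewhere in this chapter), take
$I = \{1, 2, 3, \ldots\}$, let $A_n = \{0, 1, 2, \ldots\}$ for all $n$, and
take transition maps $\varphi_{n + 1, n} : m \mapsto m + 1$. Each $A_n$ is
nonempty, but an element $(a_n)$ of $\lim A_n$ would satisfy
$$
a_1 = a_n + (n - 1) \geq n - 1 \quad\text{for all } n,
$$
which is impossible; hence $\lim A_n = \emptyset$. Indeed the images
$\varphi_{kn}(A_k) = \{m : m \geq k - n\}$ do not stabilize, so the system
is not Mittag-Leffler.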
+
+\noindent
+The Mittag-Leffler condition will be important for us because of the following
+exactness property.
+
+\begin{lemma}
+\label{lemma-ML-exact-sequence}
+Let
+$$
+0 \to A_i \xrightarrow{f_i} B_i \xrightarrow{g_i} C_i \to 0
+$$
+be an exact sequence of directed inverse systems of abelian groups over $I$.
+Suppose $I$ is countable. If $(A_i)$ is Mittag-Leffler, then
+$$
+0 \to \lim A_i \to \lim B_i \to \lim C_i\to 0
+$$
+is exact.
+\end{lemma}
+
+\begin{proof}
+Taking limits of directed inverse systems is left exact, hence we only need to
+prove surjectivity of $\lim B_i \to \lim C_i$. So let $(c_i) \in \lim
+C_i$. For each $i \in I$, let $E_i = g_i^{-1}(c_i)$, which is nonempty since
$g_i: B_i \to C_i$ is surjective. The transition maps $\varphi_{ji}: B_j
\to B_i$ of the system $(B_i)$ restrict to maps $E_j \to E_i$ (as the $g_i$
commute with the transition maps), which make $(E_i)$ into an inverse system
of nonempty sets. It is enough to show
+that $(E_i)$ is Mittag-Leffler. For then Lemma \ref{lemma-ML-limit-nonempty}
+would show $\lim E_i$ is nonempty, and taking any element of $\lim E_i$ would
+give an element of $\lim B_i$ mapping to $(c_i)$.
+
+\medskip\noindent
+By the injection $f_i: A_i \to B_i$ we will regard $A_i$ as a subset of
+$B_i$. Since $(A_i)$ is Mittag-Leffler, if $i \in I$ then there exists $j \geq
+i$ such that $\varphi_{ki}(A_k) = \varphi_{ji}(A_j)$ for $k \geq j$. We claim
that also $\varphi_{ki}(E_k) = \varphi_{ji}(E_j)$ for $k \geq j$. The
inclusion $\varphi_{ki}(E_k) \subset \varphi_{ji}(E_j)$ always holds for
$k \geq j$, since $\varphi_{ki} = \varphi_{ji} \circ \varphi_{kj}$ and
$\varphi_{kj}(E_k) \subset E_j$. For the reverse
inclusion, let $e_j \in E_j$; we need to find $x_k \in E_k$ such that
+$\varphi_{ki}(x_k) = \varphi_{ji}(e_j)$. Let $e'_k \in E_k$ be any element,
+and set $e'_j = \varphi_{kj}(e'_k)$. Then $g_j(e_j - e'_j) = c_j - c_j = 0$,
+hence $e_j - e'_j = a_j \in A_j$. Since $\varphi_{ki}(A_k) =
+\varphi_{ji}(A_j)$, there exists $a_k \in A_k$ such that $\varphi_{ki}(a_k) =
+\varphi_{ji}(a_j)$. Hence
+$$
+\varphi_{ki}(e'_k + a_k) = \varphi_{ji}(e'_j) + \varphi_{ji}(a_j) =
+\varphi_{ji}(e_j),
+$$
+so we can take $x_k = e'_k + a_k$.
+\end{proof}
+
+
+
+
+\section{Inverse systems}
+\label{section-inverse-systems}
+
+\noindent
+In many papers (and in this section) the term {\it inverse system} is
+used to indicate an inverse system over the partially ordered set
+$(\mathbf{N}, \geq)$. We briefly discuss such systems in this section.
+This material will be discussed more broadly in
+Homology, Section \ref{homology-section-inverse-systems}.
+Suppose we are given a ring $R$ and a sequence of $R$-modules
+$$
+M_1 \xleftarrow{\varphi_2} M_2 \xleftarrow{\varphi_3} M_3 \leftarrow \ldots
+$$
+with maps as indicated. By composing successive maps we obtain maps
+$\varphi_{ii'} : M_i \to M_{i'}$ whenever $i \geq i'$ such that moreover
+$\varphi_{ii''} = \varphi_{i'i''} \circ \varphi_{i i'}$ whenever
+$i \geq i' \geq i''$. Conversely, given the system of maps $\varphi_{ii'}$
+we can set $\varphi_i = \varphi_{i(i-1)}$ and recover the maps displayed
+above. In this case
+$$
+\lim M_i
+=
\{(x_i) \in \prod M_i \mid \varphi_i(x_i) = x_{i - 1}, \ i = 2, 3, \ldots\};
$$
compare with
+Categories, Section \ref{categories-section-limit-sets}.
+As explained in
+Homology, Section \ref{homology-section-inverse-systems}
+this is actually a limit in the category of $R$-modules, as defined in
+Categories, Section \ref{categories-section-limits}.
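\medskip\noindent
A familiar instance (recorded here only as an illustration): fix a prime $p$
and set $M_i = \mathbf{Z}/p^i\mathbf{Z}$ with transition maps
$\varphi_i : \mathbf{Z}/p^i\mathbf{Z} \to \mathbf{Z}/p^{i - 1}\mathbf{Z}$
the natural projections. An element of the limit is a sequence of residues
compatible under reduction, so
$$
\lim \mathbf{Z}/p^i\mathbf{Z} = \mathbf{Z}_p,
$$
the ring of $p$-adic integers. Since all transition maps are surjective, this
system is Mittag-Leffler.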
+
+\begin{lemma}
+\label{lemma-Mittag-Leffler}
+Let $R$ be a ring.
+Let $0 \to K_i \to L_i \to M_i \to 0$ be short exact sequences of
$R$-modules, $i \geq 1$, which fit into maps of short exact sequences
+$$
+\xymatrix{
+0 \ar[r] &
+K_i \ar[r] &
+L_i \ar[r] &
+M_i \ar[r] &
+0 \\
+0 \ar[r] &
+K_{i + 1} \ar[r] \ar[u] &
+L_{i + 1} \ar[r] \ar[u] &
+M_{i + 1} \ar[r] \ar[u] &
+0}
+$$
+If for every $i$ there exists a $c = c(i) \geq i$ such that
+$\Im(K_c \to K_i) = \Im(K_j \to K_i)$
+for all $j \geq c$, then the sequence
+$$
+0 \to \lim K_i \to \lim L_i \to \lim M_i \to 0
+$$
+is exact.
+\end{lemma}
+
+\begin{proof}
+This is a special case of the more general
+Lemma \ref{lemma-ML-exact-sequence}.
+\end{proof}
+
+
+
+\section{Mittag-Leffler modules}
+\label{section-mittag-leffler-modules}
+
+\noindent
A Mittag-Leffler module is (very roughly) a module which can be written
as a directed colimit whose dual is a Mittag-Leffler inverse system. To be
able to give a precise definition we need to do a bit of work.
+
+\begin{definition}
+\label{definition-ML-inductive-system}
+Let $(M_i, f_{ij})$ be a directed system of $R$-modules. We say that
+$(M_i, f_{ij})$ is a {\it Mittag-Leffler directed system of modules} if each
+$M_i$ is an $R$-module of finite presentation and if for every $R$-module $N$,
+the inverse system
+$$
+(\Hom_R(M_i, N), \Hom_R(f_{ij}, N))
+$$
+is Mittag-Leffler.
+\end{definition}
+
+\noindent
+We are going to characterize those $R$-modules that are colimits of
+Mittag-Leffler directed systems of modules.
+
+\begin{definition}
+\label{definition-domination}
+Let $f: M \to N$ and $g: M \to M'$ be maps of $R$-modules.
+Then we say $g$ {\it dominates} $f$ if for any $R$-module $Q$, we have $\Ker(f
+\otimes_R \text{id}_Q) \subset \Ker(g \otimes_R \text{id}_Q)$.
+\end{definition}
+
+\noindent
+It is enough to check this condition for finitely presented modules.
+
+\begin{lemma}
+\label{lemma-domination-fp}
+Let $f: M \to N$ and $g: M \to M'$ be maps of $R$-modules.
+Then $g$ dominates $f$ if and only if for any finitely presented $R$-module
+$Q$, we have $\Ker(f \otimes_R \text{id}_Q) \subset \Ker(g \otimes_R
+\text{id}_Q)$.
+\end{lemma}
+
+\begin{proof}
+Suppose $\Ker(f \otimes_R \text{id}_Q) \subset \Ker(g \otimes_R
+\text{id}_Q)$ for all finitely presented modules $Q$. If $Q$ is an
+arbitrary module, write $Q = \colim_{i \in I} Q_i$ as a colimit of a
+directed
+system of finitely presented modules $Q_i$. Then $\Ker(f \otimes_R
+\text{id}_{Q_i}) \subset \Ker(g \otimes_R \text{id}_{Q_i})$ for
+all $i$. Since taking directed colimits is exact and commutes with tensor
+product, it follows that $\Ker(f \otimes_R \text{id}_Q) \subset \Ker(g
+\otimes_R \text{id}_Q)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-domination-universally-injective}
+Let $f : M \to N$ and $g : M \to M'$ be maps of $R$-modules.
+Consider the pushout of $f$ and $g$,
+$$
+\xymatrix{
+M \ar[r]_f \ar[d]_g & N \ar[d]^{g'} \\
+M' \ar[r]^{f'} & N'
+}
+$$
+Then $g$ dominates $f$ if and only if $f'$ is universally injective.
+\end{lemma}
+
+\begin{proof}
+Recall that $N'$ is $M' \oplus N$ modulo the submodule consisting of elements
+$(g(x), -f(x))$ for $x \in M$.
+From the construction of $N'$ we have a short exact sequence
+$$
+0 \to \Ker(f) \cap \Ker(g) \to \Ker(f) \to \Ker(f')
+\to 0.
+$$
+Since tensoring commutes with taking pushouts, we have such a short exact
+sequence
+$$
+0 \to \Ker(f \otimes \text{id}_Q ) \cap \Ker(g \otimes
+\text{id}_Q) \to \Ker(f \otimes \text{id}_Q)
+\to \Ker(f' \otimes \text{id}_Q) \to 0
+$$
+for every $R$-module $Q$. So $f'$ is universally injective if and only if
+$\Ker(f \otimes \text{id}_Q ) \subset \Ker(g \otimes
+\text{id}_Q)$ for every $Q$, if and only if $g$ dominates $f$.
+\end{proof}
+
+\noindent
When the cokernel of $f$ is finitely presented, the above notion of
domination reduces to the usual notion of one map factoring through another,
as the following lemma shows.
+
+\begin{lemma}
+\label{lemma-domination}
+Let $f: M \to N$ and $g: M \to M'$ be maps of $R$-modules.
+Suppose $\Coker(f)$ is of finite presentation. Then $g$ dominates $f$ if
+and
+only if $g$ factors through $f$, i.e.\ there exists a module map $h: N
+\to M'$ such that $g = h \circ f$.
+\end{lemma}
+
+\begin{proof}
+Consider the pushout of $f$ and $g$ as in the statement of
+Lemma \ref{lemma-domination-universally-injective}.
+From the construction of the pushout it follows that
+$\Coker(f') = \Coker(f)$, so $\Coker(f')$ is of finite
+presentation. Then by
+Lemma \ref{lemma-universally-exact-split}, $f'$ is universally injective if and
+only if
+$$
+0 \to M' \xrightarrow{f'} N' \to \Coker(f') \to 0
+$$
+splits. This is the case if and only if there is a map $h' : N' \to M'$
+such that $h' \circ f' = \text{id}_{M'}$. From the universal
+property of the pushout, the existence of such an $h'$ is equivalent to $g$
+factoring through $f$.
+\end{proof}
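\noindent
A simple illustration of the lemma (not needed in what follows): take
$R = \mathbf{Z}$, let $f : \mathbf{Z} \to \mathbf{Z}$ be multiplication by $2$
and $g : \mathbf{Z} \to \mathbf{Z}$ multiplication by $4$. Then
$\Coker(f) = \mathbf{Z}/2\mathbf{Z}$ is of finite presentation and
$g = h \circ f$ with $h$ multiplication by $2$, so $g$ dominates $f$. On the
other hand, $f$ does not dominate $g$: for $Q = \mathbf{Z}/4\mathbf{Z}$ the
map $g \otimes \text{id}_Q$ is zero, so $\Ker(g \otimes \text{id}_Q) = Q$,
which is not contained in $\Ker(f \otimes \text{id}_Q) = 2Q$.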
+
+
+\begin{proposition}
+\label{proposition-ML-characterization}
+Let $M$ be an $R$-module. Let $(M_i, f_{ij})$ be a directed system of finitely
+presented $R$-modules, indexed by $I$, such that $M = \colim M_i$. Let
+$f_i:
+M_i \to M$ be the canonical map. The following are equivalent:
+\begin{enumerate}
+\item For every finitely presented $R$-module $P$ and module map $f: P
+\to M$, there exists a finitely presented $R$-module $Q$ and a module
+map $g: P \to Q$ such that $g$ and $f$ dominate each other, i.e.,
+$\Ker(f \otimes_R \text{id}_N) = \Ker(g \otimes_R \text{id}_N)$
+for every $R$-module $N$.
+\item For each $i \in I$, there exists $j \geq i$ such that $f_{ij}: M_i
+\to M_j$ dominates $f_i: M_i \to M$.
+\item For each $i \in I$, there exists $j \geq i$ such that $f_{ij}: M_i
+\to M_j$ factors through $f_{ik}: M_i \to M_k$ for all $k \geq
+i$.
+\item For every $R$-module $N$, the inverse system
+$(\Hom_R(M_i, N), \Hom_R(f_{ij}, N))$ is Mittag-Leffler.
+\item For $N = \prod_{s \in I} M_s$, the inverse system
+$(\Hom_R(M_i, N), \Hom_R(f_{ij}, N))$ is Mittag-Leffler.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+First we prove the equivalence of (1) and (2). Suppose (1) holds and let $i
+\in I$. Corresponding to the map $f_i: M_i \to M$, we can choose $g:
+M_i \to Q$ as in (1). Since $M_i$ and $Q$ are of finite presentation,
+so is $\Coker(g)$. Then by Lemma \ref{lemma-domination}, $f_i : M_i
+\to M$ factors through $g: M_i \to Q$, say $f_i = h \circ g$
+for some $h: Q \to M$. Then since $Q$ is finitely presented, $h$
+factors through $M_j \to M$ for some $j \geq i$, say $h = f_j \circ h'$
+for some $h': Q \to M_j$. In total we have a commutative diagram
+$$
+\xymatrix{
+ & M & \\
+M_i \ar[dr]_g \ar[ur]^{f_i} \ar[rr]^{f_{ij}} &
+& M_j \ar[ul]_{f_j} \\
+ & Q \ar[ur]_{h'} &
+}
+$$
+Thus $f_{ij}$ dominates $g$. But $g$ dominates $f_i$, so $f_{ij}$ dominates
+$f_i$.
+
+\medskip\noindent
+Conversely, suppose (2) holds. Let $P$ be of finite presentation and $f: P
+\to M$ a module map. Then $f$ factors through $f_i: M_i \to M$
+for some $i \in I$, say $f = f_i \circ g'$ for some $g': P \to M_i$.
+Choose by (2) a $j \geq i$ such that $f_{ij}$ dominates $f_i$. We have a
+commutative diagram
+$$
+\xymatrix{
+P \ar[d]_{g'} \ar[r]^{f} & M \\
+M_i \ar[ur]^{f_i} \ar[r]_{f_{ij}} & M_j \ar[u]_{f_j}
+}
+$$
+From the diagram and the fact that $f_{ij}$ dominates $f_i$, we find that $f$
+and $f_{ij} \circ g'$ dominate each other. Hence taking $g = f_{ij} \circ g' :
+P \to M_j$ works.
+
+\medskip\noindent
+Next we prove (2) is equivalent to (3). Let $i \in I$. It is always true that
+$f_i$ dominates $f_{ik}$ for $k \geq i$, since $f_i$ factors through
+$f_{ik}$. If (2) holds, choose $j \geq i$ such that $f_{ij}$ dominates
+$f_i$. Then since domination is a transitive relation, $f_{ij}$ dominates
+$f_{ik}$ for $k \geq i$. All $M_i$ are of finite presentation, so
+$\Coker(f_{ik})$ is of finite presentation for $k \geq i$. By Lemma
+\ref{lemma-domination}, $f_{ij}$ factors through $f_{ik}$ for all $k \geq i$.
+Thus (2) implies (3). On the other hand, if (3) holds then for any $R$-module
+$N$, $f_{ij} \otimes_R \text{id}_N$ factors through $f_{ik}
+\otimes_R \text{id}_N$ for $k \geq i$. So $\Ker(f_{ik} \otimes_R
+\text{id}_N) \subset \Ker(f_{ij} \otimes_R \text{id}_N)$ for $k
+\geq i$. But $\Ker(f_i \otimes_R \text{id}_N: M_i \otimes_R N
+\to M \otimes_R N)$ is the union of $\Ker(f_{ik} \otimes_R
+\text{id}_N)$ for $k \geq i$. Thus $\Ker(f_i \otimes_R
+\text{id}_N) \subset \Ker(f_{ij} \otimes_R \text{id}_N)$ for
+any $R$-module $N$, which by definition means $f_{ij}$ dominates $f_i$.
+
+\medskip\noindent
+It is trivial that (3) implies (4) implies (5). We show (5) implies (3). Let
+$N = \prod_{s \in I} M_s$. If (5) holds, then given $i \in I$ choose $j \geq i$
+such that
+$$
+\Im( \Hom(M_j, N) \to \Hom(M_i, N)) =
+\Im( \Hom(M_k, N) \to \Hom(M_i, N))
+$$
for all $k \geq j$. Passing the product over $s \in I$ outside of the
$\Hom$'s (using $\Hom(M, \prod_s M_s) = \prod_s \Hom(M, M_s)$)
and looking at the maps on each component of the product, this says
+$$
+\Im( \Hom(M_j, M_s) \to \Hom(M_i, M_s)) =
+\Im( \Hom(M_k, M_s) \to \Hom(M_i, M_s))
+$$
+for all $k \geq j$ and $s \in I$. Taking $s = j$ we have
+$$
+\Im( \Hom(M_j, M_j) \to \Hom(M_i, M_j)) =
+\Im( \Hom(M_k, M_j) \to \Hom(M_i, M_j))
+$$
+for all $k \geq j$. Since $f_{ij}$ is the image of
+$\text{id} \in \Hom(M_j, M_j)$ under
+$\Hom(M_j, M_j) \to \Hom(M_i, M_j)$,
+this shows that for any $k \geq j$ there is $h \in \Hom(M_k, M_j)$
+such that $f_{ij} = h \circ f_{ik}$. If $j \geq k$ then we can take
+$h = f_{kj}$. Hence (3) holds.
+\end{proof}
+
+\begin{definition}
+\label{definition-mittag-leffler-module}
+Let $M$ be an $R$-module. We say that $M$ is {\it Mittag-Leffler} if the
+equivalent conditions of
+Proposition \ref{proposition-ML-characterization}
+hold.
+\end{definition}
+
+\noindent
In particular a finitely presented module $M$ is Mittag-Leffler: writing $M$
as the colimit of the constant directed system with value $M$ and identity
transition maps, condition (2) of the proposition holds trivially.
+
+\begin{remark}
+\label{remark-flat-ML}
+Let $M$ be a flat $R$-module. By Lazard's theorem
+(Theorem \ref{theorem-lazard})
+we can write $M = \colim M_i$ as the colimit of a
+directed system $(M_i, f_{ij})$ where the $M_i$ are free
+finite $R$-modules. For $M$ to be Mittag-Leffler, it is enough for the inverse
+system of duals $(\Hom_R(M_i, R), \Hom_R(f_{ij}, R))$ to be
+Mittag-Leffler. This follows from criterion (4) of
+Proposition \ref{proposition-ML-characterization}
+and the fact that for a free finite $R$-module $F$,
+there is a functorial isomorphism
+$\Hom_R(F, R) \otimes_R N \cong \Hom_R(F, N)$
+for any $R$-module $N$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-tensor-ML-modules}
+If $R$ is a ring and $M$, $N$ are Mittag-Leffler modules over $R$,
+then $M \otimes_R N$ is a Mittag-Leffler module.
+\end{lemma}
+
+\begin{proof}
+Write $M = \colim_{i \in I} M_i$ and $N = \colim_{j \in J} N_j$
+as directed colimits of finitely presented $R$-modules.
+Denote $f_{ii'} : M_i \to M_{i'}$ and $g_{jj'} : N_j \to N_{j'}$ the
+transition maps. Then $M_i \otimes_R N_j$ is a finitely presented
+$R$-module (see
+Lemma \ref{lemma-tensor-finiteness}),
and $M \otimes_R N = \colim_{(i, j) \in I \times J} M_i \otimes_R N_j$.
+Pick $(i, j) \in I \times J$. By the definition of a Mittag-Leffler module
+we have
+Proposition \ref{proposition-ML-characterization} (3)
+for both systems. In other words there exist $i' \geq i$ and $j' \geq j$
+such that for every choice of $i'' \geq i$ and $j'' \geq j$ there exist
maps $a : M_{i''} \to M_{i'}$ and $b : N_{j''} \to N_{j'}$ such that
+$f_{ii'} = a \circ f_{ii''}$ and $g_{jj'} = b \circ g_{jj''}$.
+Then it is clear that
+$a \otimes b : M_{i''} \otimes_R N_{j''} \to M_{i'} \otimes_R N_{j'}$
+serves the same purpose for the system
+$(M_i \otimes_R N_j, f_{ii'} \otimes g_{jj'})$.
+Thus by the characterization
+Proposition \ref{proposition-ML-characterization} (3)
+we conclude that $M \otimes_R N$ is Mittag-Leffler.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ML-also}
+Let $R$ be a ring and $M$ an $R$-module. Then $M$ is Mittag-Leffler if and
+only if for every finite free $R$-module $F$ and module map
+$f: F \to M$, there exists a finitely presented $R$-module $Q$
+and a module map $g : F \to Q$ such that $g$ and $f$ dominate each other, i.e.,
+$\Ker(f \otimes_R \text{id}_N) = \Ker(g \otimes_R \text{id}_N)$
+for every $R$-module $N$.
+\end{lemma}
+
+\begin{proof}
Since the condition is clearly weaker than condition (1) of
+Proposition \ref{proposition-ML-characterization}
+we see that a Mittag-Leffler module satisfies the condition.
+Conversely, suppose that $M$ satisfies the condition and that
+$f : P \to M$ is an $R$-module map from a finitely presented
+$R$-module $P$ into $M$. Choose a surjection $F \to P$ where
+$F$ is a finite free $R$-module. By assumption we can find a map
+$F \to Q$ where $Q$ is a finitely presented $R$-module such that
+$F \to Q$ and $F \to M$ dominate each other. In particular, the kernel
+of $F \to Q$ contains the kernel of $F \to P$, hence we obtain an
+$R$-module map $g : P \to Q$ such that $F \to Q$ is equal to
+the composition $F \to P \to Q$. Let $N$ be any $R$-module and
+consider the commutative diagram
+$$
+\xymatrix{
+F \otimes_R N \ar[d] \ar[r] & Q \otimes_R N \\
+P \otimes_R N \ar[ru] \ar[r] & M \otimes_R N
+}
+$$
+By assumption the kernels of $F \otimes_R N \to Q \otimes_R N$
+and $F \otimes_R N \to M \otimes_R N$ are equal. Hence, as
+$F \otimes_R N \to P \otimes_R N$ is surjective, also the kernels
+of $P \otimes_R N \to Q \otimes_R N$
+and $P \otimes_R N \to M \otimes_R N$ are equal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restrict-ML-modules}
+Let $R \to S$ be a finite and finitely presented ring map.
+Let $M$ be an $S$-module.
+If $M$ is a Mittag-Leffler module over $S$ then
+$M$ is a Mittag-Leffler module over $R$.
+\end{lemma}
+
+\begin{proof}
+Assume $M$ is a Mittag-Leffler module over $S$.
+Write $M = \colim M_i$ as a directed colimit of finitely presented
+$S$-modules $M_i$. As $M$ is Mittag-Leffler over $S$ there exists for each
+$i$ an index $j \geq i$ such that for all $k \geq j$ there is a factorization
+$f_{ij} = h \circ f_{ik}$ (where $h$ depends on $i$, the choice of $j$ and
+$k$). Note that by
+Lemma \ref{lemma-finite-finitely-presented-extension}
+the modules $M_i$ are also finitely presented as $R$-modules. Moreover, all
+the maps $f_{ij}, f_{ik}, h$ are maps of $R$-modules. Thus we see that the
+system $(M_i, f_{ij})$ satisfies the same condition when viewed as a system
+of $R$-modules. Thus $M$ is Mittag-Leffler as an $R$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-mod-ideal-ML-modules}
+Let $R$ be a ring.
+Let $S = R/I$ for some finitely generated ideal $I$.
+Let $M$ be an $S$-module.
+Then $M$ is a Mittag-Leffler module over $R$ if and only if
+$M$ is a Mittag-Leffler module over $S$.
+\end{lemma}
+
+\begin{proof}
+One implication follows from
+Lemma \ref{lemma-restrict-ML-modules}.
+To prove the other, assume $M$ is Mittag-Leffler as an $R$-module.
+Write $M = \colim M_i$ as a directed colimit of finitely presented
+$S$-modules. As $I$ is finitely generated, the ring $S$ is finite and finitely
+presented as an $R$-algebra, hence the modules $M_i$ are finitely
+presented as $R$-modules, see
+Lemma \ref{lemma-finite-finitely-presented-extension}.
+Next, let $N$ be any $S$-module. Note that for each $i$ we have
+$\Hom_R(M_i, N) = \Hom_S(M_i, N)$ as $R \to S$ is surjective.
Hence if the inverse system $(\Hom_R(M_i, N))_i$ is Mittag-Leffler, then so
is the system $(\Hom_S(M_i, N))_i$. Thus $M$ is Mittag-Leffler over $S$ by
definition.
+\end{proof}
+
+\begin{remark}
+\label{remark-go-up-ML-modules}
+Let $R \to S$ be a finite and finitely presented ring map.
+Let $M$ be an $S$-module which is Mittag-Leffler as an $R$-module.
+Then it is in general not the case that $M$ is Mittag-Leffler as
+an $S$-module. For example suppose that $S$ is the ring of dual numbers
+over $R$, i.e., $S = R \oplus R\epsilon$ with $\epsilon^2 = 0$. Then an
+$S$-module consists of an $R$-module $M$ endowed with a square zero
+$R$-linear endomorphism $\epsilon : M \to M$. Now suppose that $M_0$
+is an $R$-module which is not Mittag-Leffler. Choose a presentation
+$F_1 \xrightarrow{u} F_0 \to M_0 \to 0$ with $F_1$ and $F_0$ free $R$-modules.
+Set $M = F_1 \oplus F_0$ with
+$$
+\epsilon =
+\left(
+\begin{matrix}
+0 & 0 \\
+u & 0
+\end{matrix}
+\right) : M \longrightarrow M.
+$$
+Then $M/\epsilon M \cong F_1 \oplus M_0$ is not Mittag-Leffler over
+$R = S/\epsilon S$, hence not Mittag-Leffler over $S$ (see
+Lemma \ref{lemma-mod-ideal-ML-modules}).
+On the other hand, $M/\epsilon M = M \otimes_S S/\epsilon S$ which would
+be Mittag-Leffler over $S$ if $M$ was, see
+Lemma \ref{lemma-tensor-ML-modules}.
+\end{remark}
+
+
+\section{Interchanging direct products with tensor}
+\label{section-products-tensor}
+
+\noindent
+Let $M$ be an $R$-module and let $(Q_{\alpha})_{\alpha \in A}$ be a family of
+$R$-modules. Then there is a canonical map $M \otimes_R \left( \prod_{\alpha
+\in A} Q_{\alpha} \right) \to \prod_{\alpha \in A} ( M \otimes_R
+Q_{\alpha})$ given on pure tensors by $x \otimes (q_{\alpha}) \mapsto (x
+\otimes q_{\alpha})$. This map is not necessarily injective or surjective, as
+the following example shows.
+
+\begin{example}
+\label{example-Q-not-ML}
+Take $R = \mathbf{Z}$, $M = \mathbf{Q}$, and consider the family $Q_n =
+\mathbf{Z}/n$ for $n \geq 1$. Then $\prod_n (M \otimes Q_n) = 0$. However
+there is an injection $\mathbf{Q} \to M \otimes (\prod_n Q_n)$
+obtained by tensoring the injection $\mathbf{Z} \to \prod_n Q_n$ by
+$M$, so $M \otimes (\prod_n Q_n)$ is nonzero. Thus $M \otimes (\prod_n
+Q_n) \to \prod_n (M \otimes Q_n)$ is not injective.
+
+\medskip\noindent
On the other hand, take again $R = \mathbf{Z}$, $M = \mathbf{Q}$, and let $Q_n
= \mathbf{Z}$ for $n \geq 1$. The image of $M \otimes (\prod_n Q_n)
\to \prod_n (M \otimes Q_n) = \prod_n M$ consists precisely of
sequences of the form $(a_n/m)_{n \geq 1}$ with $a_n \in \mathbf{Z}$ and $m$
some nonzero integer, i.e., of sequences of rationals admitting a common
denominator. Hence the map is not surjective; for instance, $(1/n)_{n \geq 1}$
is not in the image.
+\end{example}
+
+\noindent
+We determine below the precise conditions needed on $M$ for the map $M
+\otimes_R \left( \prod_{\alpha} Q_{\alpha} \right) \to \prod_{\alpha}
+(M \otimes_R Q_{\alpha})$ to be surjective, bijective, or injective for all
+choices of $(Q_{\alpha})_{\alpha \in A}$. This is relevant because the modules
+for which it is injective turn out to be exactly Mittag-Leffler modules
+(Proposition \ref{proposition-ML-tensor}). In what follows, if $M$ is an
+$R$-module and $A$ a set, we write $M^A$ for the product $\prod_{\alpha \in A}
+M$.
+
+\begin{proposition}
+\label{proposition-fg-tensor}
+Let $M$ be an $R$-module. The following are equivalent:
+\begin{enumerate}
+\item $M$ is finitely generated.
+\item For every family $(Q_{\alpha})_{\alpha \in A}$ of $R$-modules, the
+canonical map $M \otimes_R \left( \prod_{\alpha} Q_{\alpha} \right)
+\to \prod_{\alpha} (M \otimes_R Q_{\alpha})$ is surjective.
+\item For every $R$-module $Q$ and every set $A$, the canonical map $M
+\otimes_R Q^{A} \to (M \otimes_R Q)^{A}$ is surjective.
+\item For every set $A$, the canonical map $M \otimes_R R^{A} \to
+M^{A}$ is surjective.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+First we prove (1) implies (2). Choose a surjection $R^n \to M$ and
+consider the commutative diagram
+$$
+\xymatrix{
+R^n \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r]^{\cong} \ar[d] &
+\prod_{\alpha} (R^n \otimes_R Q_{\alpha}) \ar[d] \\
+M \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] & \prod_{\alpha} ( M
+\otimes_R Q_{\alpha}).
+}
+$$
+The top arrow is an isomorphism and the vertical arrows are surjections. We
+conclude that the bottom arrow is a surjection.
+
+\medskip\noindent
+Obviously (2) implies (3) implies (4), so it remains to prove (4) implies (1).
+In fact for (1) to hold it suffices that the element $d = (x)_{x \in M}$ of
+$M^M$ is in the image of the map $f: M \otimes_R R^{M} \to M^M$. In
+this case $d = \sum_{i = 1}^{n} f(x_i \otimes a_i)$ for some $x_i \in M$ and
+$a_i \in R^M$. If for $x \in M$ we write $p_x: M^M \to M$ for the
+projection onto the $x$-th factor, then
+$$
+x = p_x(d) = \sum\nolimits_{i = 1}^{n} p_x(f(x_i \otimes a_i)) =
+\sum\nolimits_{i=1}^{n} p_x(a_i) x_i.
+$$
+Thus $x_1, \ldots, x_n$ generate $M$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-fp-tensor}
+Let $M$ be an $R$-module. The following are equivalent:
+\begin{enumerate}
+\item $M$ is finitely presented.
+\item For every family $(Q_{\alpha})_{\alpha \in A}$ of $R$-modules, the
+canonical map $M \otimes_R \left( \prod_{\alpha} Q_{\alpha} \right)
+\to \prod_{\alpha} (M \otimes_R Q_{\alpha})$ is bijective.
+\item For every $R$-module $Q$ and every set $A$, the canonical map $M
+\otimes_R Q^{A} \to (M \otimes_R Q)^{A}$ is bijective.
+\item For every set $A$, the canonical map $M \otimes_R R^{A} \to
+M^{A}$ is bijective.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+First we prove (1) implies (2). Choose a presentation $R^m \to R^n
+\to M$ and consider the commutative diagram
+$$
+\xymatrix{
+R^m \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d]^{\cong} & R^n
+\otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d]^{\cong} & M \otimes_R
+(\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d] & 0 \\
+\prod_{\alpha} (R^m \otimes_R Q_{\alpha}) \ar[r] & \prod_{\alpha} (R^n
+\otimes_R Q_{\alpha}) \ar[r] & \prod_{\alpha} (M \otimes_R Q_{\alpha})
+\ar[r] & 0.
+}
+$$
+The first two vertical arrows are isomorphisms and the rows are exact. This
+implies that the map
+$M \otimes_R (\prod_{\alpha} Q_{\alpha}) \to
+\prod_{\alpha} ( M \otimes_R Q_{\alpha})$
+is surjective and, by a diagram chase, also injective. Hence (2) holds.
+
+\medskip\noindent
+Obviously (2) implies (3) implies (4), so it remains to prove (4) implies (1).
+From Proposition \ref{proposition-fg-tensor}, if (4) holds we already know that
+$M$ is finitely generated. So we can choose a surjection $F \to M$
+where $F$ is free and finite. Let $K$ be the kernel. We must show $K$ is
+finitely generated. For any set $A$, we have a commutative diagram
+$$
+\xymatrix{
+& K \otimes_R R^A \ar[r] \ar[d]_{f_3} & F \otimes_R R^A \ar[r]
+\ar[d]_{f_2}^{\cong} & M \otimes_R R^A \ar[r] \ar[d]_{f_1}^{\cong} & 0 \\
+0 \ar[r] & K^A \ar[r] & F^A \ar[r] & M^A \ar[r] & 0 .
+}
+$$
The map $f_1$ is an isomorphism by assumption, the map $f_2$ is an isomorphism
since $F$ is free and finite, and the rows are exact. A diagram chase shows
+that $f_3$ is surjective, hence by Proposition \ref{proposition-fg-tensor} we
+get that $K$ is finitely generated.
+\end{proof}
+
+\noindent
+We need the following lemma for the next proposition.
+
+\begin{lemma}
+\label{lemma-kernel-tensored-fp}
+Let $M$ be an $R$-module, $P$ a finitely presented $R$-module, and $f: P
+\to M$ a map. Let $Q$ be an $R$-module and suppose $x \in \Ker(P
+\otimes Q \to M \otimes Q)$. Then there exists a finitely presented
+$R$-module $P'$ and a map $f': P \to P'$ such that $f$ factors through
+$f'$ and $x \in \Ker(P \otimes Q \to P' \otimes Q)$.
+\end{lemma}
+
+\begin{proof}
+Write $M$ as a colimit $M = \colim_{i \in I} M_i$ of a directed system of
+finitely presented modules $M_i$. Since $P$ is finitely presented, the map $f:
+P \to M$ factors through $M_j \to M$ for some $j \in I$. Upon
+tensoring by $Q$ we have a commutative diagram
+$$
+\xymatrix{
+& M_j \otimes Q \ar[dr] & \\
+P \otimes Q \ar[ur] \ar[rr] & & M \otimes Q .
+}
+$$
+The image $y$ of $x$ in $M_j \otimes Q$ is in the kernel of $M_j \otimes Q
+\to M \otimes Q$. Since $M \otimes Q = \colim_{i \in I} (M_i
+\otimes
+Q)$, this means $y$ maps to $0$ in $M_{j'} \otimes Q$ for some $j' \geq j$.
+Thus we may take $P' = M_{j'}$ and $f'$ to be the composite $P \to M_j
+\to M_{j'}$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-ML-tensor}
+Let $M$ be an $R$-module. The following are equivalent:
+\begin{enumerate}
+\item $M$ is Mittag-Leffler.
+\item For every family $(Q_{\alpha})_{\alpha \in A}$ of $R$-modules, the
+canonical map $M \otimes_R \left( \prod_{\alpha} Q_{\alpha} \right)
+\to \prod_{\alpha} (M \otimes_R Q_{\alpha})$ is injective.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+First we prove (1) implies (2). Suppose $M$ is Mittag-Leffler and let $x$ be
+in the kernel of $M \otimes_R (\prod_{\alpha} Q_{\alpha}) \to
+\prod_{\alpha} (M \otimes_R Q_{\alpha})$. Write $M$ as a colimit $M =
+\colim_{i \in I} M_i$ of a directed system of finitely presented modules
+$M_i$.
+ Then $M \otimes_R (\prod_{\alpha} Q_{\alpha})$ is the colimit of $M_i
+\otimes_R (\prod_{\alpha} Q_{\alpha})$. So $x$ is the image of an element
+$x_i \in M_i \otimes_R (\prod_{\alpha} Q_{\alpha})$. We must show that $x_i$
+maps to $0$ in $M_j \otimes_R (\prod_{\alpha} Q_{\alpha})$ for some $j \geq
+i$. Since $M$ is Mittag-Leffler, we may choose $j \geq i$ such that $M_i
+\to M_j$ and $M_i \to M$ dominate each other. Then consider
+the commutative diagram
+$$
+\xymatrix{
+M \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] & \prod_{\alpha} (M
+\otimes_R Q_{\alpha}) \\
+M_i \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r]^{\cong} \ar[d] \ar[u] &
+\prod_{\alpha} (M_i \otimes_R Q_{\alpha}) \ar[d] \ar[u] \\
+M_j \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r]^{\cong} & \prod_{\alpha}
+(M_j \otimes_R Q_{\alpha})
+}
+$$
+whose bottom two horizontal maps are isomorphisms, according to Proposition
+\ref{proposition-fp-tensor}. Since $x_i$ maps to $0$ in $\prod_{\alpha} (M
+\otimes_R Q_{\alpha})$, its image in $\prod_{\alpha} (M_i \otimes_R
+Q_{\alpha})$ is in the kernel of the map $\prod_{\alpha} (M_i \otimes_R
+Q_{\alpha}) \to \prod_{\alpha} (M \otimes_R Q_{\alpha})$. But this
+kernel equals the kernel of $\prod_{\alpha} (M_i \otimes_R Q_{\alpha})
+\to \prod_{\alpha} (M_j \otimes_R Q_{\alpha})$ according to the
+choice of $j$. Thus $x_i$ maps to $0$ in $\prod_{\alpha} (M_j \otimes_R
+Q_{\alpha})$ and hence to $0$ in $M_j \otimes_R (\prod_{\alpha} Q_{\alpha})$.
+
+\medskip\noindent
+Now suppose (2) holds. We prove $M$ satisfies formulation (1) of being
+Mittag-Leffler from Proposition \ref{proposition-ML-characterization}. Let $f:
+P \to M$ be a map from a finitely presented module $P$ to $M$. Choose
+a set $B$ of representatives of the isomorphism classes of finitely presented
+$R$-modules. Let $A$ be the set of pairs $(Q, x)$ where $Q \in B$ and $x \in
+\Ker(P \otimes Q \to M \otimes Q)$. For $\alpha = (Q, x) \in A$, we
+write $Q_{\alpha}$ for $Q$ and $x_{\alpha}$ for $x$. Consider the commutative
+diagram
+$$
+\xymatrix{
+M \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] &
+\prod_{\alpha} (M \otimes_R Q_{\alpha}) \\
+P \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r]^{\cong} \ar[u] &
+\prod_{\alpha} (P \otimes_R Q_{\alpha}) \ar[u] .
+}
+$$
+The top arrow is an injection by assumption, and the bottom arrow is an
+isomorphism by Proposition \ref{proposition-fp-tensor}. Let $x \in P
+\otimes_R (\prod_{\alpha} Q_{\alpha})$ be the element corresponding to
+$(x_{\alpha}) \in \prod_{\alpha} (P \otimes_R Q_{\alpha})$ under this
+isomorphism. Then $x \in \Ker( P \otimes_R (\prod_{\alpha} Q_{\alpha})
+\to M \otimes_R (\prod_{\alpha} Q_{\alpha}))$ since the top arrow in
+the diagram is injective. By Lemma \ref{lemma-kernel-tensored-fp}, we get a
+finitely presented module $P'$ and a map $f': P \to P'$ such that $f: P
+\to M$ factors through $f'$ and $x \in \Ker(P \otimes_R
+(\prod_{\alpha} Q_{\alpha}) \to P' \otimes_R (\prod_{\alpha}
+Q_{\alpha}))$. We have a commutative diagram
+$$
+\xymatrix{
+P' \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r]^{\cong} &
+\prod_{\alpha} (P' \otimes_R Q_{\alpha}) \\
+P \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r]^{\cong} \ar[u] &
+\prod_{\alpha} (P \otimes_R Q_{\alpha}) \ar[u] .
+}
+$$
+where both the top and bottom arrows are isomorphisms by Proposition
+\ref{proposition-fp-tensor}. Thus since $x$ is in the kernel of the left
+vertical map, $(x_{\alpha})$ is in the kernel of the right vertical map. This
+means $x_{\alpha} \in \Ker(P \otimes_R Q_{\alpha} \to P' \otimes_R
+Q_{\alpha})$ for every $\alpha \in A$. By the definition of $A$ this means
+$\Ker(P \otimes_R Q \to P' \otimes_R Q) \supset \Ker(P \otimes_R
+Q \to M \otimes_R Q)$ for all finitely presented $Q$ and, since $f: P
+\to M$ factors through $f': P \to P'$, actually equality holds.
+ By Lemma \ref{lemma-domination-fp}, $f$ and $f'$ dominate each other.
+\end{proof}
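\noindent
Combining the proposition with Example \ref{example-Q-not-ML} we see, for
instance, that $\mathbf{Q}$ is not a Mittag-Leffler module over $\mathbf{Z}$:
the canonical map
$\mathbf{Q} \otimes (\prod_n \mathbf{Z}/n) \to
\prod_n (\mathbf{Q} \otimes \mathbf{Z}/n)$
was shown there to be non-injective.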
+
+\begin{lemma}
+\label{lemma-minimal-contains}
+Let $M$ be a flat Mittag-Leffler module over $R$. Let $F$ be an $R$-module
+and let $x \in F \otimes_R M$. Then there exists a smallest submodule
+$F' \subset F$ such that $x \in F' \otimes_R M$.
+\end{lemma}
+
+\begin{proof}
+Since $M$ is flat we have $F' \otimes_R M \subset F \otimes_R M$
+if $F' \subset F$ is a submodule, hence the statement makes sense.
+Let $I = \{F' \subset F \mid x \in F' \otimes_R M\}$ and for
+$i \in I$ denote $F_i \subset F$ the corresponding submodule.
+Then $x$ maps to zero under the map
+$$
+F \otimes_R M \longrightarrow \prod (F/F_i \otimes_R M)
+$$
+whence by Proposition \ref{proposition-ML-tensor}
+$x$ maps to zero under the map
+$$
+F \otimes_R M \longrightarrow \left(\prod F/F_i\right) \otimes_R M.
+$$
+Since $M$ is flat the kernel of this arrow is
+$(\bigcap F_i) \otimes_R M$ which proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pure-submodule-ML}
+Let $0 \to M_1 \to M_2 \to M_3 \to 0$ be a
+universally exact sequence of $R$-modules. Then:
+\begin{enumerate}
+\item If $M_2$ is Mittag-Leffler, then $M_1$ is Mittag-Leffler.
+\item If $M_1$ and $M_3$ are Mittag-Leffler, then $M_2$ is Mittag-Leffler.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For any family $(Q_{\alpha})_{\alpha \in A}$ of $R$-modules we have a
+commutative diagram
+$$
+\xymatrix{
+0 \ar[r] & M_1 \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d] & M_2
+\otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d] & M_3 \otimes_R
+(\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d] & 0 \\
+0 \ar[r] & \prod_{\alpha}(M_1 \otimes Q_{\alpha}) \ar[r] & \prod_{\alpha}(M_2
+\otimes Q_{\alpha}) \ar[r] & \prod_{\alpha}(M_3 \otimes Q_{\alpha})\ar[r] & 0
+}
+$$
+with exact rows. Thus (1) and (2) follow from
+Proposition \ref{proposition-ML-tensor}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quotient-module-ML}
+Let $M_1 \to M_2 \to M_3 \to 0$ be an exact sequence of $R$-modules.
+If $M_1$ is finitely generated and $M_2$ is Mittag-Leffler, then $M_3$
+is Mittag-Leffler.
+\end{lemma}
+
+\begin{proof}
+For any family $(Q_{\alpha})_{\alpha \in A}$ of $R$-modules,
+since tensor product is right exact, we have a commutative diagram
+$$
+\xymatrix{
+M_1 \otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d] & M_2
+\otimes_R (\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d] & M_3 \otimes_R
+(\prod_{\alpha} Q_{\alpha}) \ar[r] \ar[d] & 0 \\
+\prod_{\alpha}(M_1 \otimes Q_{\alpha}) \ar[r] & \prod_{\alpha}(M_2
+\otimes Q_{\alpha}) \ar[r] & \prod_{\alpha}(M_3 \otimes Q_{\alpha})\ar[r] & 0
+}
+$$
+with exact rows. By Proposition \ref{proposition-fg-tensor}
+the left vertical arrow is surjective. By
+Proposition \ref{proposition-ML-tensor} the middle vertical arrow
+is injective. A diagram chase shows the right vertical arrow
+is injective. Hence $M_3$ is Mittag-Leffler by
+Proposition \ref{proposition-ML-tensor}.
+\end{proof}
+
+
+\begin{lemma}
+\label{lemma-colimit-universally-injective-ML}
+If $M = \colim M_i$ is the colimit of a directed system of Mittag-Leffler
+$R$-modules $M_i$ with universally injective transition maps, then $M$ is
+Mittag-Leffler.
+\end{lemma}
+
+\begin{proof}
+Let $(Q_{\alpha})_{\alpha \in A}$ be a family of $R$-modules. We have to
+show that $M \otimes_R (\prod Q_\alpha) \to \prod M \otimes_R Q_\alpha$
+is injective and we know that
+$M_i \otimes_R (\prod Q_\alpha) \to \prod M_i \otimes_R Q_\alpha$
+is injective for each $i$, see Proposition \ref{proposition-ML-tensor}.
+Since $\otimes$ commutes with filtered colimits, it suffices to show that
+$\prod M_i \otimes_R Q_\alpha \to \prod M \otimes_R Q_\alpha$
+is injective. This is clear as each of the maps
+$M_i \otimes_R Q_\alpha \to M \otimes_R Q_\alpha$ is injective
+by our assumption that the transition maps are universally injective.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-sum-ML}
+If $M = \bigoplus_{i \in I} M_i$ is a direct sum of $R$-modules, then $M$ is
+Mittag-Leffler if and only if each $M_i$ is Mittag-Leffler.
+\end{lemma}
+
+\begin{proof}
+The ``only if'' direction follows from Lemma \ref{lemma-pure-submodule-ML} (1)
+and the fact that a split short exact sequence is universally exact. The
+converse follows from Lemma \ref{lemma-colimit-universally-injective-ML}
+but we can also argue it directly as follows. First note that if $I$ is
+finite then this follows from Lemma
+\ref{lemma-pure-submodule-ML} (2). For general $I$, if all $M_i$ are
+Mittag-Leffler then we prove the same of $M$ by verifying condition (1) of
+Proposition \ref{proposition-ML-characterization}.
+Let $f: P \to M$ be a map from a finitely presented module $P$.
+Then $f$ factors as
+$P \xrightarrow{f'} \bigoplus_{i' \in I'} M_{i'} \hookrightarrow
+\bigoplus_{i \in I} M_i$
+for some finite subset $I'$ of $I$. By the finite case
+$\bigoplus_{i' \in I'} M_{i'}$ is Mittag-Leffler and hence there exists
+a finitely presented module $Q$ and a map $g: P \to Q$ such that $g$
+and $f'$ dominate each other. Then also $g$ and $f$ dominate each other.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-ML-over-ML-ring}
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module.
+If $S$ is Mittag-Leffler as an $R$-module, and $M$ is flat and Mittag-Leffler
+as an $S$-module, then $M$ is Mittag-Leffler as an $R$-module.
+\end{lemma}
+
+\begin{proof}
+We deduce this from the characterization of
+Proposition \ref{proposition-ML-tensor}.
+Namely, suppose that $Q_\alpha$ is a family of $R$-modules.
+Consider the composition
+$$
+\xymatrix{
+M \otimes_R \prod_\alpha Q_\alpha =
+M \otimes_S S \otimes_R \prod_\alpha Q_\alpha \ar[d] \\
+M \otimes_S \prod_\alpha (S \otimes_R Q_\alpha) \ar[d] \\
+\prod_\alpha (M \otimes_S S \otimes_R Q_\alpha) =
+\prod_\alpha (M \otimes_R Q_\alpha)
+}
+$$
+The first arrow is injective as $M$ is flat over $S$ and $S$ is
+Mittag-Leffler over $R$ and the second arrow is injective as
+$M$ is Mittag-Leffler over $S$. Hence $M$ is Mittag-Leffler over $R$.
+\end{proof}
+
+
+\section{Coherent rings}
+\label{section-coherent}
+
+\noindent
+We use the discussion on interchanging $\prod$ and $\otimes$ to determine
+for which rings products of flat modules are flat. It turns out that these
+are the so-called coherent rings. You may be more familiar with the notion
+of a coherent $\mathcal{O}_X$-module on a ringed space, see
+Modules, Section \ref{modules-section-coherent}.
+
+\begin{definition}
+\label{definition-coherent}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+\begin{enumerate}
+\item We say $M$ is a {\it coherent module} if it is finitely generated
+and every finitely generated submodule of $M$ is finitely presented over
+$R$.
+\item We say $R$ is a {\it coherent ring} if it is coherent as a module
+over itself.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Thus a ring is coherent if and only if every finitely generated ideal is
+finitely presented as a module.
+
+\begin{example}
+\label{example-valuation-ring-coherent}
+A valuation ring is a coherent ring. Namely, every nonzero finitely generated
+ideal is principal (Lemma \ref{lemma-characterize-valuation-ring}),
+hence free since a valuation ring is a domain, hence finitely presented.
+\end{example}
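+
+\begin{example}
+\label{example-non-coherent-ring}
+A standard example of a ring which is not coherent is
+$R = k[x, y_1, y_2, y_3, \ldots]/(x y_i)$ where $k$ is a field. The ideal
+$(x) \subset R$ is finitely generated, but the annihilator of $x$ in $R$
+is the ideal $(y_1, y_2, y_3, \ldots)$, which is not finitely generated.
+Hence $(x) \cong R/(y_1, y_2, y_3, \ldots)$ is not finitely presented,
+so $R$ is not coherent.
+\end{example}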
+
+\noindent
+The category of coherent modules is abelian.
+
+\begin{lemma}
+\label{lemma-coherent}
+Let $R$ be a ring.
+\begin{enumerate}
+\item A finite submodule of a coherent module is coherent.
+\item Let $\varphi : N \to M$ be a homomorphism from a finite
+module to a coherent module. Then $\Ker(\varphi)$ is finite.
+\item Let $\varphi : N \to M$ be a homomorphism of coherent modules.
+Then $\Ker(\varphi)$ and $\Coker(\varphi)$ are coherent
+modules.
+\item Given a short exact sequence of $R$-modules
+$0 \to M_1 \to M_2 \to M_3 \to 0$ if two out of three are coherent
+so is the third.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first statement is immediate from the definition.
+During the rest of the proof we will use the results of
+Lemma \ref{lemma-extension}
+without further mention.
+
+\medskip\noindent
+Let $\varphi : N \to M$ satisfy the assumptions of (2).
+Suppose that $N$ is generated by $x_1, \ldots, x_n$. By
+Definition \ref{definition-coherent}
+the kernel $K$ of the induced map $R^{\oplus n} \to M$,
+$e_i \mapsto \varphi(x_i)$ is of finite type.
+Hence $\Ker(\varphi)$ which is the image of the
+composition $K \to R^{\oplus n} \to N$
+is of finite type. This proves (2).
+
+\medskip\noindent
+Let $\varphi : N \to M$ satisfy the assumptions of (3).
+By (2) the kernel of $\varphi$ is of finite type and
+hence by (1) it is coherent.
+
+\medskip\noindent
+With the same hypotheses
+let us show that $\Coker(\varphi)$ is coherent.
+Since $M$ is finite so is $\Coker(\varphi)$.
+Let $\overline{x}_1, \ldots, \overline{x}_n \in \Coker(\varphi)$.
+We have to show that the kernel of the associated morphism
+$\overline{\Psi} : R^{\oplus n} \to \Coker(\varphi)$
+is finite. Choose $x_i \in M$ lifting $\overline{x}_i$.
+Choose additionally generators $y_1, \ldots, y_m$ of $\Im(\varphi)$.
+Let $\Phi : R^{\oplus m} \to \Im(\varphi)$ be the map defined using the
+$y_j$ and $\Psi : R^{\oplus m} \oplus R^{\oplus n} \to M$ the map defined
+using the $y_j$ and the $x_i$.
+Consider the following commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+R^{\oplus m} \ar[d]_\Phi \ar[r] &
+R^{\oplus m} \oplus R^{\oplus n} \ar[d]_\Psi \ar[r] &
+R^{\oplus n} \ar[d]_{\overline{\Psi}} \ar[r] &
+0 \\
+0 \ar[r] &
+\Im(\varphi) \ar[r] &
+M \ar[r] &
+\Coker(\varphi) \ar[r] &
+0
+}
+$$
+with exact rows. By Lemma \ref{lemma-snake} we get an exact sequence
+$\Ker(\Psi) \to \Ker(\overline{\Psi}) \to 0$.
+Since $\Ker(\Psi)$ is a finite $R$-module,
+we see that $\Ker(\overline{\Psi})$ is finite.
+
+\medskip\noindent
+Two of the three cases of (4) follow from (3): if $M_2$ and $M_3$ are
+coherent, then so is the kernel $M_1$ of $M_2 \to M_3$, and if $M_1$ and
+$M_2$ are coherent, then so is the cokernel $M_3$ of $M_1 \to M_2$. The
+remaining case is proved below.
+
+\medskip\noindent
+Let $0 \to M_1 \to M_2 \to M_3 \to 0$
+be a short exact sequence of $R$-modules. It suffices
+to prove that if $M_1$ and $M_3$ are coherent
+so is $M_2$. By
+Lemma \ref{lemma-extension}
+we see that $M_2$ is finite. Let $x_1, \ldots, x_n$ be finitely many
+elements of $M_2$.
+We have to show that the module of relations $K$
+between them is finite.
+Consider the following commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+0 \ar[r] \ar[d] &
+\bigoplus_{i = 1}^{n} R \ar[r] \ar[d] &
+\bigoplus_{i = 1}^{n} R \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+M_1 \ar[r] &
+M_2 \ar[r] &
+M_3 \ar[r] &
+0
+}
+$$
+with obvious notation. By the snake lemma we get an exact sequence
+$0 \to K \to K_3 \to M_1$
+where $K_3$ is the module of relations among
+the images of the $x_i$ in $M_3$.
+Since $M_3$ is coherent we see that
+$K_3$ is a finite module. Since $M_1$
+is coherent we see that the image $I$
+of $K_3 \to M_1$
+is coherent. Hence $K$
+is the kernel of the map $K_3 \to I$
+between a finite module and a coherent module and hence
+finite by (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-coherent-ring}
+Let $R$ be a ring. If $R$ is coherent, then a module is coherent
+if and only if it is finitely presented.
+\end{lemma}
+
+\begin{proof}
+It is clear that a coherent module is finitely presented (over any ring).
+Conversely, if $R$ is coherent, then $R^{\oplus n}$ is coherent and so is
+the cokernel of any map $R^{\oplus m} \to R^{\oplus n}$, see
+Lemma \ref{lemma-coherent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-coherent}
+A Noetherian ring is a coherent ring.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-Noetherian-finite-type-is-finite-presentation}
+any finite $R$-module is finitely presented. In particular any ideal of $R$
+is finitely presented.
+\end{proof}
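+
+\noindent
+In particular, combining
+Lemmas \ref{lemma-coherent-ring}, \ref{lemma-Noetherian-coherent}, and
+\ref{lemma-Noetherian-finite-type-is-finite-presentation}
+we see that over a Noetherian ring a module is coherent if and only if it
+is finitely generated. For example, every ideal of a Noetherian ring is a
+coherent module over it.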
+
+\begin{proposition}
+\label{proposition-characterize-coherent}
+\begin{reference}
+This is \cite[Theorem 2.1]{Chase}.
+\end{reference}
+Let $R$ be a ring. The following are equivalent
+\begin{enumerate}
+\item $R$ is coherent,
+\item any product of flat $R$-modules is flat, and
+\item for every set $A$ the module $R^A$ is flat.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Assume $R$ coherent, and let $Q_\alpha$, $\alpha \in A$ be a set of flat
+$R$-modules. We have to show that
+$I \otimes_R \prod_\alpha Q_\alpha \to \prod Q_\alpha$ is injective
+for every finitely generated ideal $I$ of $R$, see
+Lemma \ref{lemma-flat}.
+Since $R$ is coherent $I$ is an $R$-module of finite presentation.
+Hence $I \otimes_R \prod_\alpha Q_\alpha = \prod I \otimes_R Q_\alpha$ by
+Proposition \ref{proposition-fp-tensor}.
+The desired injectivity follows as $I \otimes_R Q_\alpha \to Q_\alpha$
+is injective by flatness of $Q_\alpha$.
+
+\medskip\noindent
+The implication (2) $\Rightarrow$ (3) is trivial.
+
+\medskip\noindent
+Assume that the $R$-module $R^A$ is flat for every set $A$. Let $I$
+be a finitely generated ideal in $R$. Then $I \otimes_R R^A \to R^A$
+is injective by assumption. By
+Proposition \ref{proposition-fg-tensor}
+and the finiteness of $I$ the image is equal to $I^A$. Hence
+$I \otimes_R R^A = I^A$ for every set $A$ and we conclude that $I$
+is finitely presented by
+Proposition \ref{proposition-fp-tensor}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Examples and non-examples of Mittag-Leffler modules}
+\label{section-examples}
+
+\noindent
+In this section we give some examples and non-examples of Mittag-Leffler
+modules.
+
+\begin{example}
+\label{example-ML}
+Mittag-Leffler modules.
+\begin{enumerate}
+\item Any finitely presented module is Mittag-Leffler. This follows, for
+instance, from Proposition \ref{proposition-ML-characterization} (1). In
+general, it is true that a finitely generated module is Mittag-Leffler if and
+only if it is finitely presented. This follows from Propositions
+\ref{proposition-fg-tensor}, \ref{proposition-fp-tensor}, and
+\ref{proposition-ML-tensor}.
+\item A free module is Mittag-Leffler since it satisfies condition (1) of
+Proposition \ref{proposition-ML-characterization}.
+\item By the previous example together with Lemma \ref{lemma-direct-sum-ML},
+projective modules are Mittag-Leffler.
+\end{enumerate}
+\end{example}
+
+\noindent
+We also want to add to our list of examples power series rings over a
+Noetherian ring $R$. This will be a consequence of the following lemma.
+
+\begin{lemma}
+\label{lemma-flat-ML-criterion}
+Let $M$ be a flat $R$-module. The following are equivalent
+\begin{enumerate}
+\item $M$ is Mittag-Leffler, and
+\item if $F$ is a finite free $R$-module and
+$x \in F \otimes_R M$, then there exists a smallest submodule $F'$ of $F$
+such that $x \in F' \otimes_R M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) is a special case of
+Lemma \ref{lemma-minimal-contains}. Assume (2).
+By Theorem \ref{theorem-lazard} we can write $M$ as the colimit
+$M = \colim_{i \in I} M_i$ of a directed system $(M_i, f_{ij})$ of
+finite free $R$-modules.
+By Remark \ref{remark-flat-ML}, it suffices to show that the inverse system
+$(\Hom_R(M_i, R), \Hom_R(f_{ij}, R))$ is Mittag-Leffler. In
+other words,
+fix $i \in I$ and for $j \geq i$ let $Q_j$ be the image of
+$\Hom_R(M_j, R)
+\to \Hom_R(M_i, R)$; we must show that the $Q_j$ stabilize.
+
+\medskip\noindent
+Since $M_i$ is free and finite, we can make the identification
+$\Hom_R(M_i, M_j) = \Hom_R(M_i, R) \otimes_R M_j$ for all $j$.
+ Using the
+fact that the $M_j$ are free, it follows that for $j \geq i$, $Q_j$ is the
+smallest submodule of $\Hom_R(M_i, R)$ such that $f_{ij} \in Q_j
+\otimes_R
+M_j$. Under the identification $\Hom_R(M_i, M) = \Hom_R(M_i, R)
+\otimes_R
+M$, the canonical map $f_i: M_i \to M$ is in $\Hom_R(M_i, R)
+\otimes_R M$. By the assumption on $M$, there exists a smallest submodule
+$Q$ of $\Hom_R(M_i, R)$ such that $f_i \in Q \otimes_R M$. We are
+going to
+show that the $Q_j$ stabilize to $Q$.
+
+\medskip\noindent
+For $j \geq i$ we have a commutative diagram
+$$
+\xymatrix{
+Q_j \otimes_R M_j \ar[r] \ar[d] & \Hom_R(M_i, R) \otimes_R M_j
+\ar[d] \\
+Q_j \otimes_R M \ar[r] & \Hom_R(M_i, R) \otimes_R M.
+}
+$$
+Since $f_{ij} \in Q_j \otimes_R M_j$ maps to $f_i \in \Hom_R(M_i, R)
+\otimes_R M$, it follows that $f_i \in Q_j \otimes_R M$. Hence, by the
+choice of $Q$, we have $Q \subset Q_j$ for all $j \geq i$.
+
+\medskip\noindent
+Since the $Q_j$ are decreasing and $Q \subset Q_j$ for all $j \geq i$, to show
+that the $Q_j$ stabilize to $Q$ it suffices to find a $j \geq i$ such that $Q_j
+\subset Q$. As an element of
+$$
+\Hom_R(M_i, R) \otimes_R M = \colim_{j \in I}
+(\Hom_R(M_i, R) \otimes_R
+M_j),
+$$
+$f_i$ is the colimit of $f_{ij}$ for $j \geq i$, and $f_i$ also lies in the
+submodule
+$$
+\colim_{j \in I} (Q \otimes_R M_j) \subset \colim_{j \in I}
+(\Hom_R(M_i, R)
+\otimes_R M_j).
+$$
+It follows that for some $j \geq i$, $f_{ij}$ lies in $Q \otimes_R M_j$.
+Since $Q_j$ is the smallest submodule of $\Hom_R(M_i, R)$ with $f_{ij}
+\in
+Q_j \otimes_R M_j$, we conclude $Q_j\subset Q$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-over-Noetherian-ring}
+Let $R$ be a Noetherian ring and $A$ a set.
+Then $M = R^A$ is a flat and Mittag-Leffler $R$-module.
+\end{lemma}
+
+\begin{proof}
+Combining
+Lemma \ref{lemma-Noetherian-coherent}
+and
+Proposition \ref{proposition-characterize-coherent}
+we see that $M$ is flat over $R$. We show that $M$ satisfies the condition of
+Lemma \ref{lemma-flat-ML-criterion}.
+Let $F$ be a finite free $R$-module. If $F'$ is any submodule of $F$ then it
+is finitely presented since $R$ is Noetherian. So by
+Proposition \ref{proposition-fp-tensor}
+we have a commutative diagram
+$$
+\xymatrix{
+F' \otimes_R M \ar[r] \ar[d]^{\cong} & F \otimes_R M \ar[d]^{\cong} \\
+(F')^A \ar[r] & F^A
+}
+$$
+by which we can identify the map $F' \otimes_R M \to F \otimes_R M$
+with $(F')^A \to F^A$. Hence if $x \in F \otimes_R M$ corresponds to
+$(x_\alpha) \in F^A$, then the submodule $F' \subset F$ generated by the
+$x_\alpha$ is the smallest submodule of $F$ such that $x \in F' \otimes_R M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-power-series-ML}
+Let $R$ be a Noetherian ring and $n$ a positive integer. Then the $R$-module
+$M = R[[t_1, \ldots, t_n]]$ is flat and Mittag-Leffler.
+\end{lemma}
+
+\begin{proof}
+As an $R$-module, we have $M = R^A$ for a (countable) set $A$.
+Hence this lemma is a special case of
+Lemma \ref{lemma-product-over-Noetherian-ring}.
+\end{proof}
+
+\begin{example}
+\label{example-not-ML}
+Non-Mittag-Leffler modules.
+\begin{enumerate}
+\item By Example \ref{example-Q-not-ML} and
+Proposition \ref{proposition-ML-tensor}, $\mathbf{Q}$ is not a Mittag-Leffler
+$\mathbf{Z}$-module.
+\item We prove below (Theorem \ref{theorem-projectivity-characterization}) that
+for a flat and countably generated module, projectivity is equivalent to being
+Mittag-Leffler. Thus any flat, countably generated, non-projective module $M$
+is an example of a non-Mittag-Leffler module. For such an example, see
+Remark \ref{remark-warning}.
+\item Let $k$ be a field. Let $R = k[[x]]$. The $R$-module
+$M = \prod_{n \in \mathbf{N}} R/(x^n)$ is not Mittag-Leffler.
+Namely, consider the element $\xi = (\xi_1, \xi_2, \xi_3, \ldots)$
+defined by $\xi_{2^m} = x^{2^{m - 1}}$ and $\xi_n = 0$ else, so
+$$
+\xi = (0, x, 0, x^2, 0, 0, 0, x^4, 0, 0, 0, 0, 0, 0, 0, x^8, \ldots)
+$$
+Then the annihilator of $\xi$ in $M/x^{2^m}M$ is generated by $x^{2^{m - 1}}$
+for $m \gg 0$. But if $M$ were Mittag-Leffler, then there would exist a finite
+$R$-module $Q$ and an element $\xi' \in Q$ such that the annihilator
+of $\xi'$ in $Q/x^l Q$ agrees with the annihilator of $\xi$ in $M/x^l M$
+for all $l \geq 1$, see
+Proposition \ref{proposition-ML-characterization} (1).
+Now you can prove there exists an integer $a \geq 0$ such that
+the annihilator of $\xi'$ in $Q/x^l Q$ is generated
+by either $x^a$ or $x^{l - a}$ for all $l \gg 0$ (depending on
+whether $\xi' \in Q$ is torsion or not). The combination of the above
+would give for all $l = 2^m \gg 0$ the equality $a = l/2$ or $l - a = l/2$,
+which is nonsensical.
+\item The same argument shows that the $(x)$-adic completion of
+$\bigoplus_{n \in \mathbf{N}} R/(x^n)$ is not Mittag-Leffler over
+$R = k[[x]]$ (hint: $\xi$ is actually an element of this completion).
+\item Let $R = k[a, b]/(a^2, ab, b^2)$. Let $S$ be the finitely presented
+$R$-algebra with presentation $S = R[t]/(at - b)$. Then as an $R$-module
+$S$ is countably generated and indecomposable (details omitted).
+On the other hand, $R$ is Artinian local, hence complete local,
+hence a henselian local ring, see
+Lemma \ref{lemma-complete-henselian}.
+If $S$ were Mittag-Leffler as an $R$-module, then it would be a
+direct sum of finite $R$-modules by
+Lemma \ref{lemma-split-ML-henselian}.
+Thus we conclude that $S$ is not Mittag-Leffler as an $R$-module.
+\end{enumerate}
+\end{example}
+
+
+
+
+\section{Countably generated Mittag-Leffler modules}
+\label{section-ML-countable}
+
+\noindent
+It turns out that countably generated Mittag-Leffler modules have a
+particularly simple structure.
+
+\begin{lemma}
+\label{lemma-ML-countable-colimit}
+Let $M$ be an $R$-module. Write $M = \colim_{i \in I} M_i$ where $(M_i,
+f_{ij})$ is a directed system of finitely presented $R$-modules. If $M$ is
+Mittag-Leffler and countably generated, then there is a directed countable
+subset $I' \subset I$ such that $M \cong \colim_{i \in I'} M_i$.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, x_2, \ldots$ be a countable set of generators for $M$. For each $x_n$
+choose $i \in I$ such that $x_n$ is in the image of the canonical map $f_i:
+M_i \to M$; let $I'_{0} \subset I$ be the set of all these $i$. Now
+since $M$ is Mittag-Leffler, for each $i \in I'_{0}$ we can choose $j \in I$
+such that $j \geq i$ and $f_{ij}: M_i \to M_j$ factors through
+$f_{ik}: M_i \to M_k$ for all $k \geq i$ (condition (3) of Proposition
+\ref{proposition-ML-characterization}); let $I'_1$ be the union of $I'_0$ with
+all of these $j$. Since $I'_1$ is countable, we can enlarge it to a
+countable directed set $I'_{2} \subset I$. Now we can apply the same procedure
+to $I'_{2}$ as we did to $I'_{0}$ to get a new countable set $I'_{3} \subset
+I$. Then we enlarge $I'_{3}$ to a countable directed set $I'_{4}$. Continuing
+in this way---adding in a $j$ as in Proposition
+\ref{proposition-ML-characterization} (3) for each $ i \in I'_{\ell}$ if $\ell$
+is odd and enlarging $I'_{\ell}$ to a directed set if $\ell$ is even---we get a
+sequence of subsets $I'_{\ell} \subset I$ for $\ell \geq 0$. The union $I' =
+\bigcup I'_{\ell}$ satisfies:
+\begin{enumerate}
+\item $I'$ is countable and directed;
+\item each $x_n$ is in the image of $f_i: M_i \to M$ for some $i
+\in I'$;
+\item if $i \in I'$, then there is $j \in I'$ such that $j \geq i$ and $f_{ij}:
+M_i \to M_j$ factors through $f_{ik}: M_i \to M_k$ for all
+$k \in I$ with $k \geq i$. In particular $\Ker(f_{ik}) \subset \Ker(f_{ij})$
+for $k \geq i$.
+\end{enumerate}
+We claim that the canonical map $\colim_{i \in I'} M_i \to
+\colim_{i
+\in I} M_i = M$ is an isomorphism. By (2) it is surjective. For injectivity,
+suppose $x \in \colim_{i \in I'} M_i$ maps to $0$ in $\colim_{i \in
+I} M_i$.
+Representing $x$ by an element $\tilde{x} \in M_i$ for some $i \in I'$, this
+means that $f_{ik}(\tilde{x}) = 0$ for some $k \in I, k \geq i$. But then by
+(3) there is $j \in I', j \geq i,$ such that $f_{ij}(\tilde{x}) = 0$. Hence $x
+= 0$ in $\colim_{i \in I'} M_i$.
+\end{proof}
+
+\noindent
+Lemma \ref{lemma-ML-countable-colimit}
+implies that a countably generated Mittag-Leffler module $M$ over
+$R$ is the colimit of a system
+$$
+M_1 \to M_2 \to M_3 \to M_4 \to \ldots
+$$
+with each $M_n$ a finitely presented $R$-module. To see this, argue as in the
+proof of
+Lemma \ref{lemma-ML-limit-nonempty}
+that a countable directed set has a cofinal
+subset isomorphic to $(\mathbf{N}, \geq)$. Suppose
+$R = k[x_1, x_2, x_3, \ldots]$ and $M = R/(x_1, x_2, x_3, \ldots)$. Then
+$M$ is finitely
+generated but not finitely presented, hence not Mittag-Leffler (see
+Example \ref{example-ML} part (1)).
+But of course you can write $M = \colim_n M_n$ by taking
+$M_n = R/(x_1, \ldots, x_n)$, hence the condition that you can write
+$M$ as such a colimit does not imply that $M$ is Mittag-Leffler.
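+
+\noindent
+The fact that a countable directed set $I$ has a cofinal subset isomorphic
+to $(\mathbf{N}, \geq)$ can also be seen directly: enumerate the elements
+of $I$ as $i_1, i_2, i_3, \ldots$, set $j_1 = i_1$, and inductively use
+directedness to choose $j_{n + 1} \in I$ with $j_{n + 1} \geq j_n$ and
+$j_{n + 1} \geq i_{n + 1}$. Then $j_1 \leq j_2 \leq j_3 \leq \ldots$ is
+cofinal in $I$ and hence $\colim_n M_{j_n} = \colim_{i \in I} M_i$.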
+
+\begin{lemma}
+\label{lemma-ML-countable}
+Let $R$ be a ring.
+Let $M$ be an $R$-module.
+Assume $M$ is Mittag-Leffler and countably generated.
+For any $R$-module map $f : P \to M$ with $P$ finitely generated there
+exists an endomorphism $\alpha : M \to M$ such that
+\begin{enumerate}
+\item $\alpha : M \to M$ factors through a finitely presented $R$-module, and
+\item $\alpha \circ f = f$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Write $M = \colim_{i \in I} M_i$ as a directed colimit of finitely
+presented $R$-modules with $I$ countable, see
+Lemma \ref{lemma-ML-countable-colimit}.
+The transition maps are denoted $f_{ij}$ and we use $f_i : M_i \to M$
+to denote the canonical maps into $M$. Set $N = \prod_{s \in I} M_s$. Denote
+$$
+M_i^* = \Hom_R(M_i, N) = \prod\nolimits_{s \in I} \Hom_R(M_i, M_s)
+$$
+so that $(M_i^*)$ is an inverse system of $R$-modules over $I$.
+Note that $\Hom_R(M, N) = \lim M_i^*$.
+As $M$ is Mittag-Leffler, we find for every
+$i \in I$ an index $k(i) \geq i$ such that
+$$
+E_i := \bigcap\nolimits_{i' \geq i} \Im(M_{i'}^* \to M_i^*)
+=
+\Im(M_{k(i)}^* \to M_i^*)
+$$
+Choose and fix $j \in I$ such that $\Im(P \to M) \subset \Im(M_j \to M)$.
+This is possible as $P$ is finitely generated. Set $k = k(j)$.
+Let
+$x = (0, \ldots, 0, \text{id}_{M_k}, 0, \ldots, 0) \in M_k^*$ and
+note that this maps to $y = (0, \ldots, 0, f_{jk}, 0, \ldots, 0) \in M_j^*$.
+By our choice of $k$ we see that $y \in E_j$. By
+Example \ref{example-ML-surjective-maps}
+the transition maps $E_i \to E_j$ are surjective for each $i \geq j$
+and $\lim E_i = \lim M_i^* = \Hom_R(M, N)$. Hence
+Lemma \ref{lemma-ML-limit-nonempty}
+guarantees there exists an element $z \in \Hom_R(M, N)$
+which maps to $y$ in $E_j \subset M_j^*$. Let $z_k$ be the $k$th component
+of $z$. Then $z_k : M \to M_k$ is a homomorphism such that
+$$
+\xymatrix{
+M \ar[r]_{z_k} & M_k \\
+M_j \ar[ru]_{f_{jk}} \ar[u]^{f_j}
+}
+$$
+commutes. Let $\alpha : M \to M$ be the composition
+$f_k \circ z_k : M \to M_k \to M$.
+Then $\alpha$ factors through a finitely presented module by construction and
+$\alpha \circ f_j = f_j$. Since the image of $f$ is contained in the image of
+$f_j$ this also implies that $\alpha \circ f = f$.
+\end{proof}
+
+\noindent
+We will see later (see
+Lemma \ref{lemma-split-ML-henselian})
+that
+Lemma \ref{lemma-ML-countable}
+means that a countably generated Mittag-Leffler module over a henselian local
+ring is a direct sum of finitely presented modules.
+
+
+
+
+\section{Characterizing projective modules}
+\label{section-characterize-projective}
+
+\noindent
+The goal of this section is to prove that a module is projective if and only if
+it is flat, Mittag-Leffler, and a direct sum of countably generated modules
+(Theorem \ref{theorem-projectivity-characterization} below).
+
+\begin{lemma}
+\label{lemma-countgen-projective}
+Let $M$ be an $R$-module. If $M$ is flat, Mittag-Leffler, and countably
+generated, then $M$ is projective.
+\end{lemma}
+
+\begin{proof}
+By Lazard's theorem (Theorem \ref{theorem-lazard}), we can write $M =
+\colim_{i
+\in I} M_i$ for a directed system of finite free $R$-modules $(M_i, f_{ij})$
+indexed by a set $I$. By Lemma \ref{lemma-ML-countable-colimit}, we may assume
+$I$ is countable. Now let
+$$
+0 \to N_1 \to N_2 \to N_3 \to 0
+$$
+be an exact sequence of $R$-modules. We must show that applying
+$\Hom_R(M, -)$
+preserves exactness. Since $M_i$ is finite free,
+$$
+0 \to \Hom_R(M_i, N_1) \to \Hom_R(M_i, N_2) \to
+\Hom_R(M_i, N_3) \to 0
+$$
+is exact for each $i$. Since $M$ is Mittag-Leffler, $(\Hom_R(M_i,
+N_{1}))$ is
+a Mittag-Leffler inverse system. So by Lemma \ref{lemma-ML-exact-sequence},
+$$
+0 \to \lim_{i \in I} \Hom_R(M_i, N_1) \to
+\lim_{i \in I} \Hom_R(M_i, N_2) \to
+\lim_{i \in I} \Hom_R(M_i, N_3) \to 0
+$$
+is exact. But for any $R$-module $N$ there is a functorial isomorphism
+$\Hom_R(M, N) \cong \lim_{i \in I} \Hom_R(M_i, N)$, so
+$$
+0 \to \Hom_R(M, N_1) \to \Hom_R(M, N_2) \to
+\Hom_R(M, N_3) \to 0
+$$
+is exact.
+\end{proof}
+
+\begin{remark}
+\label{remark-characterize-projective}
+Lemma \ref{lemma-countgen-projective} does not hold without the countable
+generation assumption. For example, the $\mathbf Z$-module $M =
+\mathbf{Z}[[x]]$ is flat and Mittag-Leffler but not projective. It is
+Mittag-Leffler by Lemma \ref{lemma-power-series-ML}. Subgroups of free abelian
+groups are free, hence a projective $\mathbf Z$-module is in fact free and so
+are its submodules. Thus to show $M$ is not projective it suffices to produce
+a non-free submodule. Fix a prime $p$ and consider the submodule $N$
+consisting of power series $f(x) = \sum a_i x^i$ such that for every integer $m
+\geq 1$, $p^m$ divides $a_i$ for all but finitely many $i$. Then $\sum a_i p^i
+x^i$ is in $N$ for all $a_i \in \mathbf{Z}$, so $N$ is uncountable. Thus if
+$N$ were free it would have uncountable rank and the dimension of $N/pN$ over
+$\mathbf{Z}/p$ would be uncountable. This is not true as the elements $x^i \in
+N/pN$ for $i \geq 0$ span $N/pN$.
+\end{remark}
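+
+\noindent
+To see that the classes of the $x^i$ span $N/pN$ in
+Remark \ref{remark-characterize-projective}, let $f = \sum a_i x^i \in N$.
+Taking $m = 1$ in the definition of $N$, there is a finite set $S$ such
+that $p \mid a_i$ for $i \notin S$. Set $g = \sum_{i \notin S} (a_i/p) x^i$.
+For every $m \geq 1$ we have $p^{m + 1} \mid a_i$ for all but finitely many
+$i$, hence $p^m \mid a_i/p$ for all but finitely many $i$, so $g \in N$.
+Since $f = \sum_{i \in S} a_i x^i + p g$ we see that $f$ is a finite linear
+combination of the $x^i$ modulo $pN$.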
+
+\begin{theorem}
+\label{theorem-projectivity-characterization}
+Let $M$ be an $R$-module. Then $M$ is projective if and only if it satisfies:
+\begin{enumerate}
+\item $M$ is flat,
+\item $M$ is Mittag-Leffler,
+\item $M$ is a direct sum of countably generated $R$-modules.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+First suppose $M$ is projective. Then $M$ is a direct summand of a free
+module, so $M$ is flat and Mittag-Leffler since these properties pass to direct
+summands. By Kaplansky's theorem (Theorem \ref{theorem-projective-direct-sum}),
+$M$ satisfies (3).
+
+\medskip\noindent
+Conversely, suppose $M$ satisfies (1)-(3). Since being flat and Mittag-Leffler
+passes to direct summands, $M$ is a direct sum of flat, Mittag-Leffler,
+countably generated $R$-modules.
+Lemma \ref{lemma-countgen-projective}
+implies $M$ is a direct sum of projective modules. Hence $M$ is projective.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ML-ui-descent}
+Let $f: M \to N$ be a universally injective map of $R$-modules. Suppose
+$M$ is a direct sum of countably generated $R$-modules, and suppose $N$ is flat
+and Mittag-Leffler. Then $M$ is projective.
+\end{lemma}
+
+\begin{proof}
+By
+Lemmas \ref{lemma-ui-flat-domain} and
+\ref{lemma-pure-submodule-ML},
+$M$ is flat and Mittag-Leffler, so the conclusion follows from Theorem
+\ref{theorem-projectivity-characterization}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-injective-submodule-powerseries}
+Let $R$ be a Noetherian ring and let $M$ be an $R$-module. Suppose $M$ is a
+direct sum of countably generated $R$-modules, and suppose there is a
+universally injective map $M \to R[[t_1, \ldots, t_n]]$ for some $n$.
+Then $M$ is projective.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemmas \ref{lemma-ML-ui-descent} and
+\ref{lemma-power-series-ML}.
+\end{proof}
+
+
+
+\section{Ascending properties of modules}
+\label{section-ascent}
+
+\noindent
+All of the properties of a module in Theorem
+\ref{theorem-projectivity-characterization} ascend along arbitrary ring maps:
+
+\begin{lemma}
+\label{lemma-ascend-properties-modules}
+Let $R \to S$ be a ring map. Let $M$ be an $R$-module. Then:
+\begin{enumerate}
+\item If $M$ is flat, then the $S$-module $M \otimes_R S$ is flat.
+\item If $M$ is Mittag-Leffler, then the $S$-module $M \otimes_R S$ is
+Mittag-Leffler.
+\item If $M$ is a direct sum of countably generated $R$-modules, then the
+$S$-module $M \otimes_R S$ is a direct sum of countably generated $S$-modules.
+\item If $M$ is projective, then the $S$-module $M \otimes_R S$ is projective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+All are obvious except (2). For this, use formulation (3) of being
+Mittag-Leffler from Proposition \ref{proposition-ML-characterization} and the
+fact that tensoring commutes with taking colimits.
+\end{proof}
+
+
+
+\section{Descending properties of modules}
+\label{section-descent}
+
+\noindent
+We address the faithfully flat descent of the properties from Theorem
+\ref{theorem-projectivity-characterization} that characterize projectivity.
+In the presence of flatness, the property of being a Mittag-Leffler module
+descends:
+
+\begin{lemma}
+\label{lemma-ffdescent-ML}
+\begin{reference}
+Email from Juan Pablo Acosta Lopez dated 12/20/14.
+\end{reference}
+Let $R \to S$ be a faithfully flat ring map. Let $M$ be an $R$-module. If the
+$S$-module $M \otimes_R S$ is Mittag-Leffler, then $M$ is Mittag-Leffler.
+\end{lemma}
+
+\begin{proof}
+Write $M = \colim_{i\in I} M_i$ as a directed colimit
+of finitely presented $R$-modules $M_i$.
+Using Proposition \ref{proposition-ML-characterization}, we see that we
have to prove that for each $i \in I$ there exists $j \in I$ with $i \leq j$
such that $M_i\rightarrow M_j$ dominates $M_i\rightarrow M$.
+
+\medskip\noindent
Let $N$ be the pushout
+$$
+\xymatrix{
+M_i \ar[r] \ar[d] & M_j \ar[d] \\
+M \ar[r] & N
+}
+$$
+Then the lemma is equivalent to the existence of $j$ such that
+$M_j\rightarrow N$ is universally injective, see
+Lemma \ref{lemma-domination-universally-injective}.
Observe that the diagram obtained by tensoring with $S$
$$
\xymatrix{
M_i\otimes_R S \ar[r] \ar[d] & M_j\otimes_R S \ar[d] \\
M\otimes_R S \ar[r] & N\otimes_R S
}
$$
is again a pushout diagram. Since
+$M \otimes_R S = \colim_{i\in I} M_i \otimes_R S$
+expresses $M\otimes_R S$ as a colimit of $S$-modules of
+finite presentation, and $M\otimes_R S$ is Mittag-Leffler,
+there exists $j \geq i$ such that $M_j\otimes_R S\rightarrow N\otimes_R S$
is universally injective. As $R\rightarrow S$ is faithfully flat
we conclude that $M_j\rightarrow N$ is universally injective as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ffdescent-countable}
+Let $R \to S$ be a faithfully flat ring map. Let $M$ be an $R$-module. If
+the $S$-module $M \otimes_R S$ is countably generated, then $M$
+is countably generated.
+\end{lemma}
+
+\begin{proof}
+Say $M \otimes_R S$ is generated by the elements
+$y_i$, $i = 1, 2, 3, \ldots$. Write $y_i = \sum_{j = 1, \ldots, n_i}
+x_{ij} \otimes s_{ij}$ for some $n_i \geq 0$, $x_{ij} \in M$
+and $s_{ij} \in S$. Denote $M' \subset M$ the submodule generated
+by the countable collection of elements $x_{ij}$. Then
+$M' \otimes_R S \to M \otimes_R S$ is surjective as the
+image contains the generators $y_i$. Since $S$ is faithfully flat
+over $R$ we conclude that $M' = M$ as desired.
+\end{proof}
+
+\noindent
+At this point the faithfully flat descent of countably generated projective
+modules follows easily.
+
+\begin{lemma}
+\label{lemma-ffdescent-countable-projectivity}
+Let $R \to S$ be a faithfully flat ring map. Let $M$ be an $R$-module.
+ If the $S$-module $M \otimes_R S$ is countably generated and projective,
+then $M$ is countably generated and projective.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemmas \ref{lemma-descend-properties-modules},
+\ref{lemma-ffdescent-ML}, and \ref{lemma-ffdescent-countable} and
+Theorem \ref{theorem-projectivity-characterization}.
+\end{proof}
+
+\noindent
+All that remains is to use d\'evissage to reduce descent of projectivity in the
+general case to the countably generated case. First, two simple lemmas.
+
+\begin{lemma}
+\label{lemma-lift-countably-generated-submodule}
+Let $R \to S$ be a ring map, let $M$ be an $R$-module, and let $Q$ be a
+countably generated $S$-submodule of $M \otimes_R S$. Then there exists a
+countably generated $R$-submodule $P$ of $M$ such that
+$\Im(P \otimes_R S \to M \otimes_R S)$ contains $Q$.
+\end{lemma}
+
+\begin{proof}
+Let $y_1, y_2, \ldots$ be generators for $Q$ and write $y_j = \sum_k x_{jk}
\otimes s_{jk}$ for some $x_{jk} \in M$ and $s_{jk} \in S$. Then take $P$ to
be the submodule of $M$ generated by the $x_{jk}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-adapted-submodule}
+Let $R \to S$ be a ring map, and let $M$ be an $R$-module. Suppose $M
+\otimes_R S = \bigoplus_{i \in I} Q_i$ is a direct sum of countably generated
+$S$-modules $Q_i$. If $N$ is a countably generated submodule of $M$, then
+there is a countably generated submodule $N'$ of $M$ such that $N' \supset N$
+and $\Im(N' \otimes_R S \to M \otimes_R S) =
+\bigoplus_{i \in I'} Q_i$ for some subset $I' \subset I$.
+\end{lemma}
+
+\begin{proof}
+Let $N'_0 = N$. We construct by induction an increasing sequence of countably
+generated submodules $N'_{\ell} \subset M$ for $\ell = 0, 1, 2, \ldots$
+such that: if $I'_{\ell}$ is the set of $i \in I$ such that the projection of
+$\Im(N'_{\ell} \otimes_R S \to M \otimes_R S)$ onto $Q_i$ is
+nonzero, then $\Im(N'_{\ell + 1} \otimes_R S \to M \otimes_R
+S)$ contains $Q_i$ for all $i \in I'_{\ell}$. To construct $N'_{\ell + 1}$
+from $N'_\ell$, let $Q$ be the sum of (the countably many) $Q_i$ for
+$i \in I'_{\ell}$, choose $P$ as in Lemma
+\ref{lemma-lift-countably-generated-submodule}, and then let $N'_{\ell + 1} =
+N'_{\ell} + P$. Having constructed the $N'_{\ell}$, just take $N' =
+\bigcup_{\ell} N'_{\ell}$ and $I' = \bigcup_{\ell} I'_{\ell}$.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-ffdescent-projectivity}
+Let $R \to S$ be a faithfully flat ring map. Let $M$ be an $R$-module.
+ If the $S$-module $M \otimes_R S$ is projective, then $M$ is projective.
+\end{theorem}
+
+\begin{proof}
+We are going to construct a Kaplansky d\'evissage of $M$ to show that it is a
+direct sum of projective modules and hence projective. By Theorem
+\ref{theorem-projective-direct-sum} we can write $M \otimes_R S =
+\bigoplus_{i \in I} Q_i$ as a direct sum of countably generated $S$-modules
+$Q_i$. Choose a well-ordering on $M$. Using transfinite recursion we are going
+to define an increasing family of submodules $M_{\alpha}$ of $M$, one for each
+ordinal $\alpha$, such that $M_{\alpha} \otimes_R S$ is a direct sum of some
+subset of the $Q_i$.
+
+\medskip\noindent
+For $\alpha = 0$ let $M_0 = 0$. If $\alpha$ is a limit ordinal and $M_{\beta}$
has been defined for all $\beta < \alpha$, then define $M_{\alpha} =
\bigcup_{\beta < \alpha} M_{\beta}$. Since each $M_{\beta} \otimes_R S$ for
+$\beta < \alpha$ is a direct sum of a subset of the $Q_i$, the same will be
+true of $M_{\alpha} \otimes_R S$. If $\alpha + 1$ is a successor ordinal and
+$M_{\alpha}$ has been defined, then define $M_{\alpha + 1}$ as follows. If
+$M_{\alpha} = M$, then let $M_{\alpha +1} = M$. Otherwise choose the smallest
+$x \in M$ (with respect to the fixed well-ordering) such that $x \notin
+M_{\alpha}$. Since $S$ is flat over $R$, $(M/M_{\alpha}) \otimes_R S = M
+\otimes_R S/M_{\alpha} \otimes_R S$, so since $M_{\alpha} \otimes_R S$ is
+a direct sum of some $Q_i$, the same is true of $(M/M_{\alpha}) \otimes_R
+S$. By Lemma \ref{lemma-adapted-submodule}, we can find a countably generated
+$R$-submodule $P$ of $M/M_{\alpha}$ containing the image of $x$ in
+$M/M_{\alpha}$ and such that $P \otimes_R S$ (which equals $\Im(P
+\otimes_R S \to M \otimes_R S)$ since $S$ is flat over $R$) is a
+direct sum of some $Q_i$. Since $M \otimes_R S = \bigoplus_{i \in I} Q_i$
+is projective and projectivity passes to direct summands, $P \otimes_R S$ is
+also projective. Thus by Lemma \ref{lemma-ffdescent-countable-projectivity},
+$P$ is projective. Finally we define $M_{\alpha + 1}$ to be the preimage of $P$
+in $M$, so that $M_{\alpha + 1}/M_{\alpha} = P$ is countably generated and
+projective. In particular $M_{\alpha}$ is a direct summand of $M_{\alpha + 1}$
+since projectivity of $M_{\alpha + 1}/M_{\alpha}$ implies the sequence $0
+\to M_{\alpha} \to M_{\alpha + 1} \to
+M_{\alpha + 1}/M_{\alpha} \to 0$ splits.
+
+\medskip\noindent
Transfinite induction along the well-ordering of $M$
(using the fact that we constructed
$M_{\alpha + 1}$ to contain the smallest $x \in M$ not contained in
$M_{\alpha}$) shows that
each $x \in M$ is contained in some $M_{\alpha}$. Thus there is a large
enough ordinal $\tau$ satisfying: for each $x \in M$ there is
$\alpha \in \tau$ such
that $x \in M_{\alpha}$. This means $(M_{\alpha})_{\alpha \in \tau}$ satisfies
property (1) of a Kaplansky d\'evissage of $M$. The other properties are clear
by construction. We conclude
$M = \bigoplus_{\alpha + 1 \in \tau} M_{\alpha + 1}/M_{\alpha}$.
+Since each $M_{\alpha + 1}/M_{\alpha}$ is projective
+by construction, $M$ is projective.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Completion}
+\label{section-completion}
+
+\noindent
+Suppose that $R$ is a ring and $I$ is an ideal.
+We define the {\it completion of $R$ with respect to $I$}
+to be the limit
+$$
+R^\wedge = \lim_n R/I^n.
+$$
+An element of $R^\wedge$ is given by a sequence
+of elements $f_n \in R/I^n$ such that $f_n \equiv f_{n + 1} \bmod I^n$
+for all $n$. We will view $R^\wedge$ as an $R$-algebra.
+Similarly, if $M$ is an $R$-module then we define the
+{\it completion of $M$ with respect to $I$}
+to be the limit
+$$
+M^\wedge = \lim_n M/I^nM.
+$$
+An element of $M^\wedge$ is given by a sequence of
+elements $m_n \in M/I^nM$ such that $m_n \equiv m_{n + 1} \bmod I^nM$
+for all $n$. We will view $M^\wedge$ as an $R^\wedge$-module.
+From this description it is clear that there
+are always canonical maps
+$$
+M \longrightarrow M^\wedge
+\quad\text{and}\quad
+M \otimes_R R^\wedge \longrightarrow M^\wedge.
+$$
+Moreover, given a map $\varphi : M \to N$ of modules we get an induced
+map $\varphi^\wedge : M^\wedge \to N^\wedge$ on completions making the
+diagram
+$$
+\xymatrix{
+M \ar[r] \ar[d] & N \ar[d] \\
+M^\wedge \ar[r] & N^\wedge
+}
+$$
+commute. In general completion is not an exact functor, see
+Examples, Section \ref{examples-section-completion-not-exact}.
+Here are some initial positive results.
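
\noindent
For concreteness we recall two standard examples of completion; both facts
are classical. If $R = \mathbf{Z}$ and $I = (p)$ for a prime number $p$, the
completion is the ring of $p$-adic integers
$$
R^\wedge = \lim_n \mathbf{Z}/p^n\mathbf{Z} = \mathbf{Z}_p.
$$
If $R = k[x_1, \ldots, x_r]$ for a field $k$ and $I = (x_1, \ldots, x_r)$,
the completion is the power series ring
$$
R^\wedge = \lim_n R/I^n = k[[x_1, \ldots, x_r]],
$$
an element of which is exactly a compatible system of polynomial truncations.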
+
+\begin{lemma}
+\label{lemma-completion-generalities}
+Let $R$ be a ring. Let $I \subset R$ be an ideal.
+Let $\varphi : M \to N$ be a map of $R$-modules.
+\begin{enumerate}
+\item If $M/IM \to N/IN$ is surjective, then $M^\wedge \to N^\wedge$
+is surjective.
+\item If $M \to N$ is surjective, then $M^\wedge \to N^\wedge$ is surjective.
+\item If $0 \to K \to M \to N \to 0$ is a short exact sequence of
+$R$-modules and $N$ is flat, then
+$0 \to K^\wedge \to M^\wedge \to N^\wedge \to 0$ is a short exact sequence.
+\item The map $M \otimes_R R^\wedge \to M^\wedge$ is
+surjective for any finite $R$-module $M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $M/IM \to N/IN$ is surjective. Then the map $M/I^nM \to N/I^nN$
+is surjective for each $n \geq 1$ by Nakayama's lemma. More precisely,
+apply Lemma \ref{lemma-NAK} part (11) to the
+map $M/I^nM \to N/I^nN$ over the ring $R/I^n$ and the nilpotent
+ideal $I/I^n$ to see this. Set $K_n = \{x \in M \mid \varphi(x) \in I^nN\}$.
+Thus we get short exact sequences
+$$
+0 \to K_n/I^nM \to M/I^nM \to N/I^nN \to 0
+$$
+We claim that the canonical map $K_{n + 1}/I^{n + 1}M \to K_n/I^nM$
+is surjective. Namely, if $x \in K_n$ write
+$\varphi(x) = \sum z_j n_j$ with $z_j \in I^n$, $n_j \in N$.
+By assumption we can write $n_j = \varphi(m_j) + \sum z_{jk}n_{jk}$
+with $m_j \in M$, $z_{jk} \in I$ and $n_{jk} \in N$. Hence
+$$
+\varphi(x - \sum z_j m_j) = \sum z_jz_{jk} n_{jk}.
+$$
+This means that $x' = x - \sum z_j m_j \in K_{n + 1}$ maps
+to $x \bmod I^nM$ which proves the claim. Now we may apply
+Lemma \ref{lemma-Mittag-Leffler}
+to the inverse system of short exact sequences above to see (1).
+Part (2) is a special case of (1).
+If the assumptions of (3) hold, then for each $n$ the sequence
+$$
+0 \to K/I^nK \to M/I^nM \to N/I^nN \to 0
+$$
+is short exact by
+Lemma \ref{lemma-flat-tor-zero}.
+Hence we can directly apply
+Lemma \ref{lemma-Mittag-Leffler}
+to conclude (3) is true.
To see (4) choose generators $x_1, \ldots, x_t \in M$.
Then the map $R^{\oplus t} \to M$, $(a_1, \ldots, a_t) \mapsto \sum a_ix_i$
is surjective. Hence by (2) the map
$(R^\wedge)^{\oplus t} \to M^\wedge$, $(a_1, \ldots, a_t) \mapsto \sum a_ix_i$
is surjective. As this map factors through
$M \otimes_R R^\wedge \to M^\wedge$, assertion (4) follows.
+\end{proof}
+
+\begin{definition}
+\label{definition-complete}
+Let $R$ be a ring. Let $I \subset R$ be an ideal.
+Let $M$ be an $R$-module. We say $M$ is {\it $I$-adically complete}
+if the map
+$$
+M \longrightarrow M^\wedge = \lim_n M/I^nM
+$$
+is an isomorphism\footnote{This includes the condition that
+$\bigcap I^nM = (0)$.}. We say $R$ is {\it $I$-adically complete}
+if $R$ is $I$-adically complete as an $R$-module.
+\end{definition}
+
\noindent
It is not true in general that the completion of an $R$-module $M$
with respect to $I$ is $I$-adically complete. For an example see
Examples, Section \ref{examples-section-noncomplete-completion}.
However, if the ideal is finitely generated, then the completion is complete.
+
+\begin{lemma}
+\label{lemma-hathat-finitely-generated}
+\begin{reference}
+\cite[Theorem 15]{Matlis}. The slick proof given here is from
+an email of Bjorn Poonen dated Nov 5, 2016.
+\end{reference}
+Let $R$ be a ring. Let $I$ be a finitely generated ideal of $R$.
+Let $M$ be an $R$-module. Then
+\begin{enumerate}
+\item the completion $M^\wedge$ is $I$-adically complete, and
+\item $I^nM^\wedge = \Ker(M^\wedge \to M/I^nM) = (I^nM)^\wedge$ for all
+$n \geq 1$.
+\end{enumerate}
+In particular $R^\wedge$ is $I$-adically complete,
+$I^nR^\wedge = (I^n)^\wedge$, and
+$R^\wedge/I^nR^\wedge = R/I^n$.
+\end{lemma}
+
+\begin{proof}
+Since $I$ is finitely generated,
+$I^n$ is finitely generated, say by $f_1, \ldots, f_r$.
+Applying Lemma \ref{lemma-completion-generalities} part (2)
+to the surjection $(f_1, \ldots, f_r) : M^{\oplus r} \to I^n M$
+yields a surjection
+$$
+(M^\wedge)^{\oplus r} \xrightarrow{(f_1, \ldots, f_r)} (I^n M)^\wedge =
+\lim_{m \geq n} I^n M/I^m M = \Ker(M^\wedge \to M/I^n M).
+$$
+On the other hand, the image of
+$(f_1, \ldots, f_r) : (M^\wedge)^{\oplus r} \to M^\wedge$
+is $I^n M^\wedge$.
+Thus $M^\wedge / I^n M^\wedge \simeq M/I^n M$.
+Taking inverse limits yields $(M^\wedge)^\wedge \simeq M^\wedge$;
+that is, $M^\wedge$ is $I$-adically complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-differ-by-torsion}
+Let $R$ be a ring. Let $I \subset R$ be an ideal. Let
+$0 \to M \to N \to Q \to 0$ be an exact sequence of
+$R$-modules such that $Q$ is annihilated by a power of $I$.
+Then completion produces an exact sequence
+$0 \to M^\wedge \to N^\wedge \to Q \to 0$.
+\end{lemma}
+
+\begin{proof}
+Say $I^c Q = 0$. Then $Q/I^nQ = Q$ for $n \geq c$.
+On the other hand, it is clear that
+$I^nM \subset M \cap I^nN \subset I^{n - c}M$ for $n \geq c$.
+Thus $M^\wedge = \lim M/(M \cap I^n N)$. Apply Lemma \ref{lemma-Mittag-Leffler}
+to the system of exact sequences
+$$
+0 \to M/(M \cap I^n N) \to N/I^n N \to Q \to 0
+$$
+for $n \geq c$ to conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hathat}
+\begin{reference}
+Taken from an unpublished note of Lenstra and de Smit.
+\end{reference}
+Let $R$ be a ring. Let $I \subset R$ be an ideal. Let $M$ be an $R$-module.
+Denote $K_n = \Ker(M^\wedge \to M/I^nM)$. Then $M^\wedge$ is $I$-adically
+complete if and only if $K_n$ is equal to $I^nM^\wedge$ for all $n \geq 1$.
+\end{lemma}
+
+\begin{proof}
+The module $I^n M^\wedge$ is contained in $K_n$.
+Thus for each $n \geq 1$ there is a canonical exact sequence
+$$
+0 \to K_n/I^nM^\wedge \to M^\wedge/I^nM^\wedge \to M/I^nM \to 0.
+$$
+As $I^nM^\wedge$ maps onto $I^nM/I^{n + 1}M$ we see that
+$K_{n + 1} + I^n M^\wedge = K_n$. Thus the inverse system
+$\{K_n/I^n M^\wedge\}_{n \geq 1}$ has surjective transition maps.
+By
+Lemma \ref{lemma-Mittag-Leffler}
+we see that there is a short exact sequence
+$$
+0 \to
+\lim_n K_n/I^n M^\wedge \to
+(M^\wedge)^\wedge \to
+M^\wedge \to 0
+$$
+Hence $M^\wedge$ is complete if and only if $K_n/I^n M^\wedge = 0$
+for all $n \geq 1$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-radical-completion}
+Let $R$ be a ring, let $I \subset R$ be an ideal, and let
+$R^\wedge = \lim R/I^n$.
+\begin{enumerate}
+\item any element of $R^\wedge$ which maps to a unit of $R/I$ is a unit,
+\item any element of $1 + I$ maps to an invertible element of $R^\wedge$,
+\item any element of $1 + IR^\wedge$ is invertible in $R^\wedge$, and
+\item the ideals $IR^\wedge$ and $\Ker(R^\wedge \to R/I)$ are contained
+in the Jacobson radical of $R^\wedge$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $x \in R^\wedge$ map to a unit $x_1$ in $R/I$.
+Then $x$ maps to a unit $x_n$ in $R/I^n$ for every $n$ by
+Lemma \ref{lemma-locally-nilpotent-unit}.
+Hence $y = (x_n^{-1}) \in \lim R/I^n = R^\wedge$ is an inverse to $x$.
+Parts (2) and (3) follow immediately from (1).
+Part (4) follows from (1) and Lemma \ref{lemma-contained-in-radical}.
+\end{proof}
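
\noindent
Alternatively, the inverse in part (3) admits the usual explicit description
by a geometric series, which we include for illustration: for
$f \in IR^\wedge$ the series
$$
(1 + f)^{-1} = \sum\nolimits_{n \geq 0} (-1)^n f^n
$$
makes sense in $R^\wedge = \lim_n R/I^n$ because $f^n \in I^nR^\wedge$ maps
to zero in $R/I^n$, so only finitely many terms contribute in each quotient;
multiplying the resulting element by $1 + f$ gives $1$.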
+
+\begin{lemma}
+\label{lemma-when-surjective-to-completion}
+Let $A$ be a ring. Let $I = (f_1, \ldots, f_r)$ be a finitely
+generated ideal. If $M \to \lim M/f_i^nM$ is surjective for
+each $i$, then $M \to \lim M/I^nM$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Note that $\lim M/I^nM = \lim M/(f_1^n, \ldots, f_r^n)M$ as
+$I^n \supset (f_1^n, \ldots, f_r^n) \supset I^{rn}$.
+An element $\xi$ of $\lim M/(f_1^n, \ldots, f_r^n)M$ can be symbolically
+written as
+$$
+\xi = \sum\nolimits_{n \geq 0} \sum\nolimits_i f_i^n x_{n, i}
+$$
+with $x_{n, i} \in M$. If $M \to \lim M/f_i^nM$ is surjective, then there is
+an $x_i \in M$ mapping to $\sum x_{n, i} f_i^n$ in $\lim M/f_i^nM$.
+Then $x = \sum x_i$ maps to $\xi$ in $\lim M/I^nM$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complete-by-sub}
+Let $A$ be a ring. Let $I \subset J \subset A$ be ideals.
+If $M$ is $J$-adically complete and $I$ is finitely generated, then
+$M$ is $I$-adically complete.
+\end{lemma}
+
+\begin{proof}
+Assume $M$ is $J$-adically complete and $I$ is finitely generated.
+We have $\bigcap I^nM = 0$ because $\bigcap J^nM = 0$. By
+Lemma \ref{lemma-when-surjective-to-completion}
+it suffices to prove the surjectivity of $M \to \lim M/I^nM$ in case
+$I$ is generated by a single element. Say $I = (f)$.
+Let $x_n \in M$ with $x_{n + 1} - x_n \in f^nM$. We have to show there exists
+an $x \in M$ such that $x_n - x \in f^nM$ for all $n$.
+As $x_{n + 1} - x_n \in J^nM$ and as $M$ is $J$-adically complete,
+there exists an element $x \in M$ such that $x_n - x \in J^nM$.
+Replacing $x_n$ by $x_n - x$ we may assume that $x_n \in J^nM$.
+To finish the proof we will show that this implies $x_n \in I^nM$.
+Namely, write $x_n - x_{n + 1} = f^nz_n$.
+Then
+$$
+x_n = f^n(z_n + fz_{n + 1} + f^2z_{n + 2} + \ldots)
+$$
The sum $z_n + fz_{n + 1} + f^2z_{n + 2} + \ldots$ converges in the
$J$-adically complete module $M$ as $f^c \in J^c$ for every $c$.
The sum $f^n(z_n + fz_{n + 1} + f^2z_{n + 2} + \ldots)$
+converges in $M$ to $x_n$ because
+the partial sums equal $x_n - x_{n + c}$ and $x_{n + c} \in J^{n + c}M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-change-ideal-completion}
+Let $R$ be a ring.
+Let $I$, $J$ be ideals of $R$.
+Assume there exist integers $c, d > 0$ such that
+$I^c \subset J$ and $J^d \subset I$.
+Then completion with respect to $I$ agrees with completion
+with respect to $J$ for any $R$-module.
+In particular an $R$-module $M$ is $I$-adically complete
+if and only if it is $J$-adically complete.
+\end{lemma}
+
+\begin{proof}
+Consider the system of maps
+$M/I^nM \to M/J^{\lfloor n/d \rfloor}M$ and
+the system of maps $M/J^mM \to M/I^{\lfloor m/c \rfloor}M$
+to get mutually inverse maps between the completions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quotient-complete}
+Let $R$ be a ring. Let $I$ be an ideal of $R$.
+Let $M$ be an $I$-adically complete $R$-module,
+and let $K \subset M$ be an $R$-submodule.
+The following are equivalent
+\begin{enumerate}
+\item $K = \bigcap (K + I^nM)$ and
+\item $M/K$ is $I$-adically complete.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Set $N = M/K$. By
+Lemma \ref{lemma-completion-generalities}
+the map $M = M^\wedge \to N^\wedge$ is surjective.
+Hence $N \to N^\wedge$ is surjective. It is easy to see that the
+kernel of $N \to N^\wedge$ is the module $\bigcap (K + I^nM) / K$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-finite-module-complete-over-complete-ring}
+Let $R$ be a ring. Let $I$ be an ideal of $R$.
+Let $M$ be an $R$-module.
+If (a) $R$ is $I$-adically complete, (b) $M$ is a finite $R$-module,
+and (c) $\bigcap I^nM = (0)$, then $M$ is $I$-adically complete.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-completion-generalities}
+the map $M = M \otimes_R R = M \otimes_R R^\wedge \to M^\wedge$
+is surjective. The kernel of this map is $\bigcap I^nM$ hence zero
+by assumption. Hence $M \cong M^\wedge$ and $M$ is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-over-complete-ring}
+Let $R$ be a ring. Let $I \subset R$ be an ideal. Let $M$ be an $R$-module.
+Assume
+\begin{enumerate}
+\item $R$ is $I$-adically complete,
+\item $\bigcap_{n \geq 1} I^nM = (0)$, and
+\item $M/IM$ is a finite $R/I$-module.
+\end{enumerate}
+Then $M$ is a finite $R$-module.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, \ldots, x_n \in M$ be elements whose images in $M/IM$ generate
$M/IM$ as an $R/I$-module. Denote by $M' \subset M$ the $R$-submodule
+generated by $x_1, \ldots, x_n$. By Lemma \ref{lemma-completion-generalities}
+the map $(M')^\wedge \to M^\wedge$ is surjective.
+Since $\bigcap I^nM = 0$ we see in particular that $\bigcap I^nM' = (0)$.
+Hence by Lemma \ref{lemma-when-finite-module-complete-over-complete-ring}
+we see that $M'$ is complete, and we conclude that $M' \to M^\wedge$
+is surjective. Finally, the kernel of $M \to M^\wedge$ is
+zero since it is equal to $\bigcap I^nM = (0)$.
+Hence we conclude that $M \cong M' \cong M^\wedge$
+is finitely generated.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Completion for Noetherian rings}
+\label{section-completion-noetherian}
+
+\noindent
+In this section we discuss completion with respect to ideals in
+Noetherian rings.
+
+\begin{lemma}
+\label{lemma-completion-tensor}
+Let $I$ be an ideal of a Noetherian ring $R$.
+Denote ${}^\wedge$ completion with respect to $I$.
+\begin{enumerate}
+\item If $K \to N$ is an injective map of finite $R$-modules,
+then the map on completions $K^\wedge \to N^\wedge$ is injective.
+\item If $0 \to K \to N \to M \to 0$ is a short exact sequence
+of finite $R$-modules, then $0 \to K^\wedge \to N^\wedge \to M^\wedge \to 0$
+is a short exact sequence.
+\item If $M$ is a finite $R$-module, then $M^\wedge = M \otimes_R R^\wedge$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Setting $M = N/K$ we find that part (1) follows from part (2).
+Let $0 \to K \to N \to M \to 0$ be as in (2).
+For each $n$ we get the short exact sequence
+$$
+0 \to K/(I^nN \cap K) \to N/I^nN \to M/I^nM \to 0.
+$$
+By Lemma \ref{lemma-Mittag-Leffler}
+we obtain the exact sequence
+$$
+0 \to \lim K/(I^nN \cap K) \to N^\wedge \to M^\wedge \to 0.
+$$
+By the Artin-Rees Lemma \ref{lemma-Artin-Rees} we may choose $c$ such that
+$I^nK \subset I^n N \cap K \subset I^{n-c} K$ for $n \geq c$.
+Hence $K^\wedge = \lim K/I^nK = \lim K/(I^nN \cap K)$
+and we conclude that (2) is true.
+
+\medskip\noindent
+Let $M$ be as in (3) and let $0 \to K \to R^{\oplus t} \to M \to 0$
+be a presentation of $M$. We get a commutative diagram
+$$
+\xymatrix{
+&
+K \otimes_R R^\wedge \ar[r] \ar[d] &
+R^{\oplus t} \otimes_R R^\wedge \ar[r] \ar[d] &
+M \otimes_R R^\wedge \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+K^\wedge \ar[r] &
+(R^{\oplus t})^\wedge \ar[r] &
+M^\wedge \ar[r] & 0
+}
+$$
+The top row is exact, see Section \ref{section-flat}.
+The bottom row is exact by part (2).
+By Lemma \ref{lemma-completion-generalities}
+the vertical arrows are surjective.
+The middle vertical arrow is an isomorphism.
+We conclude (3) holds by the Snake Lemma \ref{lemma-snake}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-flat}
Let $I$ be an ideal of a Noetherian ring $R$.
+Denote ${}^\wedge$ completion with respect to $I$.
+\begin{enumerate}
+\item The ring map $R \to R^\wedge$ is flat.
+\item The functor $M \mapsto M^\wedge$ is exact on the category of
+finitely generated $R$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Consider $J \otimes_R R^\wedge \to R \otimes_R R^\wedge = R^\wedge$
where $J$ is an arbitrary ideal of $R$.
According to Lemma \ref{lemma-completion-tensor} this map is
identified with the injective map $J^\wedge \to R^\wedge$.
Part (1) follows from Lemma \ref{lemma-flat}.
+Part (2) is a reformulation of
+Lemma \ref{lemma-completion-tensor} part (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-faithfully-flat}
+Let $(R, \mathfrak m)$ be a Noetherian local ring.
+Let $I \subset \mathfrak m$ be an ideal. Denote $R^\wedge$
+the completion of $R$ with respect to $I$.
+The ring map $R \to R^\wedge$ is faithfully flat.
+In particular the completion with respect to $\mathfrak m$,
+namely $\lim_n R/\mathfrak m^n$ is faithfully flat.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-completion-flat} it is flat.
The composition $R \to R^\wedge \to R/\mathfrak m$, where
the second map is the projection $R^\wedge \to R/I$
followed by $R/I \to R/\mathfrak m$, shows that
$\mathfrak m$ is in the image of $\Spec(R^\wedge)
\to \Spec(R)$. Hence the map is faithfully
+flat by Lemma \ref{lemma-ff}.
+\end{proof}
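
\noindent
A standard special case, recorded here for illustration: for
$R = \mathbf{Z}_{(p)}$ with maximal ideal $\mathfrak m = (p)$ we have
$$
R^\wedge = \lim_n \mathbf{Z}_{(p)}/p^n\mathbf{Z}_{(p)}
= \lim_n \mathbf{Z}/p^n\mathbf{Z} = \mathbf{Z}_p
$$
and the lemma asserts that $\mathbf{Z}_{(p)} \to \mathbf{Z}_p$ is
faithfully flat.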
+
+\begin{lemma}
+\label{lemma-completion-complete}
+Let $R$ be a Noetherian ring.
+Let $I$ be an ideal of $R$.
+Let $M$ be an $R$-module.
+Then the completion $M^\wedge$
+of $M$ with respect to $I$ is $I$-adically complete,
+$I^n M^\wedge = (I^nM)^\wedge$, and $M^\wedge/I^nM^\wedge = M/I^nM$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Lemma \ref{lemma-hathat-finitely-generated}
+because $I$ is a finitely generated ideal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-Noetherian}
+Let $I$ be an ideal of a ring $R$. Assume
+\begin{enumerate}
+\item $R/I$ is a Noetherian ring,
+\item $I$ is finitely generated.
+\end{enumerate}
+Then the completion $R^\wedge$ of $R$ with respect to $I$
+is a Noetherian ring complete with respect to $IR^\wedge$.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-hathat-finitely-generated}
+we see that $R^\wedge$ is $I$-adically complete. Hence it is also
+$IR^\wedge$-adically complete. Since $R^\wedge/IR^\wedge = R/I$ is
+Noetherian we see that after replacing $R$ by $R^\wedge$ we may in
+addition to assumptions (1) and (2) assume that also $R$ is $I$-adically
+complete.
+
+\medskip\noindent
+Let $f_1, \ldots, f_t$ be generators of $I$.
+Then there is a surjection of rings
+$R/I[T_1, \ldots, T_t] \to \bigoplus I^n/I^{n + 1}$
+mapping $T_i$ to the element $\overline{f}_i \in I/I^2$.
+Hence $\bigoplus I^n/I^{n + 1}$ is a Noetherian ring.
+Let $J \subset R$ be an ideal. Consider the ideal
+$$
+\bigoplus J \cap I^n/J \cap I^{n + 1} \subset \bigoplus I^n/I^{n + 1}.
+$$
+Let $\overline{g}_1, \ldots, \overline{g}_m$ be generators of this
+ideal. We may choose $\overline{g}_j$ to be a homogeneous element
+of degree $d_j$ and we may pick $g_j \in J \cap I^{d_j}$ mapping to
+$\overline{g}_j \in J \cap I^{d_j}/J \cap I^{d_j + 1}$. We claim
+that $g_1, \ldots, g_m$ generate $J$.
+
+\medskip\noindent
+Let $x \in J \cap I^n$. There exist $a_j \in I^{\max(0, n - d_j)}$ such that
+$x - \sum a_j g_j \in J \cap I^{n + 1}$.
+The reason is that $J \cap I^n/J \cap I^{n + 1}$ is equal to
+$\sum \overline{g}_j I^{n - d_j}/I^{n - d_j + 1}$ by our choice
+of $g_1, \ldots, g_m$. Hence starting with $x \in J$ we can find
+a sequence of vectors $(a_{1, n}, \ldots, a_{m, n})_{n \geq 0}$
+with $a_{j, n} \in I^{\max(0, n - d_j)}$ such that
+$$
x \equiv
+\sum\nolimits_{n = 0, \ldots, N}
+\sum\nolimits_{j = 1, \ldots, m} a_{j, n} g_j \bmod I^{N + 1}
+$$
+Setting $A_j = \sum_{n \geq 0} a_{j, n}$ we see that
+$x = \sum A_j g_j$ as $R$ is complete. Hence $J$ is finitely generated and
+we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-Noetherian-Noetherian}
+Let $R$ be a Noetherian ring.
+Let $I$ be an ideal of $R$.
+The completion $R^\wedge$ of $R$ with respect to $I$ is
+Noetherian.
+\end{lemma}
+
+\begin{proof}
+This is a consequence of
+Lemma \ref{lemma-completion-Noetherian}.
+It can also be seen directly as follows.
+Choose generators $f_1, \ldots, f_n$ of $I$.
+Consider the map
+$$
+R[[x_1, \ldots, x_n]] \longrightarrow R^\wedge,
+\quad
+x_i \longmapsto f_i.
+$$
+This is a well defined and surjective ring map
+(details omitted).
+Since $R[[x_1, \ldots, x_n]]$ is Noetherian (see
+Lemma \ref{lemma-Noetherian-power-series}) we win.
+\end{proof}
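
\noindent
For example, the lemma recovers the familiar facts that
$\mathbf{Z}_p = \lim_n \mathbf{Z}/p^n\mathbf{Z}$ and
$k[[x_1, \ldots, x_n]]$ are Noetherian, being the completions of the
Noetherian rings $\mathbf{Z}$ and $k[x_1, \ldots, x_n]$ with respect to the
ideals $(p)$ and $(x_1, \ldots, x_n)$.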
+
+\noindent
+Suppose $R \to S$ is a local homomorphism of local rings $(R, \mathfrak m)$
+and $(S, \mathfrak n)$. Let $S^\wedge$ be the completion
+of $S$ with respect to $\mathfrak n$. In general $S^\wedge$ is not
+the $\mathfrak m$-adic completion of $S$. If
+$\mathfrak n^t \subset \mathfrak mS$ for some $t \geq 1$
+then we do have $S^\wedge = \lim S/\mathfrak m^nS$ by
+Lemma \ref{lemma-change-ideal-completion}. In some cases this even
+implies that $S^\wedge$ is finite over $R^\wedge$.
+
+\begin{lemma}
+\label{lemma-finite-after-completion}
+Let $R \to S$ be a local homomorphism of local rings $(R, \mathfrak m)$
+and $(S, \mathfrak n)$. Let $R^\wedge$, resp.\ $S^\wedge$ be the completion
+of $R$, resp.\ $S$ with respect to $\mathfrak m$, resp.\ $\mathfrak n$.
+If $\mathfrak m$ and $\mathfrak n$ are finitely generated and
+$\dim_{\kappa(\mathfrak m)} S/\mathfrak mS < \infty$, then
+\begin{enumerate}
+\item $S^\wedge$ is equal to the $\mathfrak m$-adic completion of $S$, and
+\item $S^\wedge$ is a finite $R^\wedge$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have $\mathfrak mS \subset \mathfrak n$ because $R \to S$ is a local
+ring map.
+The assumption $\dim_{\kappa(\mathfrak m)} S/\mathfrak mS < \infty$
+implies that $S/\mathfrak mS$ is an Artinian ring, see
+Lemma \ref{lemma-finite-dimensional-algebra}.
Hence $S/\mathfrak mS$ has dimension $0$, see
Lemma \ref{lemma-Noetherian-dimension-0},
and therefore $\mathfrak n = \sqrt{\mathfrak mS}$.
+This and the fact that $\mathfrak n$ is finitely generated
+implies that $\mathfrak n^t \subset \mathfrak mS$ for
+some $t \geq 1$. By
+Lemma \ref{lemma-change-ideal-completion}
+we see that $S^\wedge$ can be identified with the $\mathfrak m$-adic
+completion of $S$. As $\mathfrak m$ is finitely generated we see from
+Lemma \ref{lemma-hathat-finitely-generated}
+that $S^\wedge$ and $R^\wedge$ are $\mathfrak m$-adically complete.
+At this point we may apply
+Lemma \ref{lemma-finite-over-complete-ring}
+to $S^\wedge$ as an $R^\wedge$-module to conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-finite-extension}
+Let $R$ be a Noetherian ring. Let $R \to S$ be a finite ring map.
+Let $\mathfrak p \subset R$ be a prime and let
+$\mathfrak q_1, \ldots, \mathfrak q_m$ be the primes of $S$
+lying over $\mathfrak p$
+(Lemma \ref{lemma-finite-finite-fibres}).
+Then
+$$
+R_\mathfrak p^\wedge \otimes_R S =
+(S_\mathfrak p)^\wedge =
+S_{\mathfrak q_1}^\wedge \times \ldots \times S_{\mathfrak q_m}^\wedge
+$$
where $(S_\mathfrak p)^\wedge$ denotes the completion of $S_\mathfrak p$
with respect to $\mathfrak p$ and the local rings $R_\mathfrak p$ and
$S_{\mathfrak q_i}$ are completed with respect to their maximal ideals.
+\end{lemma}
+
+\begin{proof}
+The first equality follows from Lemma \ref{lemma-completion-tensor}.
+We may replace $R$ by the localization $R_\mathfrak p$ and
+$S$ by $S_\mathfrak p = S \otimes_R R_\mathfrak p$.
+Hence we may assume that $R$ is a local Noetherian ring and
+that $\mathfrak p = \mathfrak m$ is its maximal ideal.
+The $\mathfrak q_iS_{\mathfrak q_i}$-adic completion
+$S_{\mathfrak q_i}^\wedge$ is equal to the $\mathfrak m$-adic
+completion by Lemma \ref{lemma-finite-after-completion}.
+For every $n \geq 1$ prime ideals of $S/\mathfrak m^nS$ are in 1-to-1
+correspondence with the maximal ideals
+$\mathfrak q_1, \ldots, \mathfrak q_m$ of $S$
+(by going up for $S$ over $R$, see
+Lemma \ref{lemma-integral-going-up}).
+Hence
+$S/\mathfrak m^nS = \prod S_{\mathfrak q_i}/\mathfrak m^nS_{\mathfrak q_i}$
+by Lemma \ref{lemma-artinian-finite-length}
+(using for example
+Proposition \ref{proposition-dimension-zero-ring}
+to see that $S/\mathfrak m^nS$ is Artinian).
+Hence the $\mathfrak m$-adic completion $S^\wedge$ of $S$ is equal to
+$\prod S_{\mathfrak q_i}^\wedge$. Finally, we have
+$R^\wedge \otimes_R S = S^\wedge$ by Lemma \ref{lemma-completion-tensor}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-split-completed-sequence}
+Let $R$ be a ring. Let $I \subset R$ be an ideal.
+Let $0 \to K \to P \to M \to 0$ be a short exact sequence of
+$R$-modules. If $M$ is flat over $R$ and $M/IM$ is a projective
+$R/I$-module, then the sequence of $I$-adic completions
+$$
+0 \to K^\wedge \to P^\wedge \to M^\wedge \to 0
+$$
+is a split exact sequence.
+\end{lemma}
+
+\begin{proof}
+As $M$ is flat, each of the sequences
+$$
+0 \to K/I^nK \to P/I^nP \to M/I^nM \to 0
+$$
+is short exact, see
+Lemma \ref{lemma-flat-tor-zero}
+and the sequence $0 \to K^\wedge \to P^\wedge \to M^\wedge \to 0$
+is a short exact sequence, see
+Lemma \ref{lemma-completion-generalities}.
+It suffices to show that we can find splittings
+$s_n : M/I^nM \to P/I^nP$ such that $s_{n + 1} \bmod I^n = s_n$.
+We will construct these $s_n$ by induction on $n$.
+Pick any splitting $s_1$, which exists as $M/IM$ is a projective $R/I$-module.
+Assume given $s_n$ for some $n > 0$. Set
+$P_{n + 1} = \{x \in P \mid x \bmod I^nP \in \Im(s_n)\}$.
+The map $\pi : P_{n + 1}/I^{n + 1}P_{n + 1} \to M/I^{n + 1}M$ is surjective
(details omitted). As $M/I^{n + 1}M$ is projective as an $R/I^{n + 1}$-module
+by
+Lemma \ref{lemma-lift-projective}
+we may choose a section
+$t : M/I^{n + 1}M \to P_{n + 1}/I^{n + 1}P_{n + 1}$
+of $\pi$. Setting $s_{n + 1}$ equal to the composition of
+$t$ with the canonical map $P_{n + 1}/I^{n + 1}P_{n + 1} \to P/I^{n + 1}P$
+works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complete-modulo-nilpotent}
+Let $A$ be a Noetherian ring. Let $I, J \subset A$ be ideals.
+If $A$ is $I$-adically complete and $A/I$ is $J$-adically complete,
+then $A$ is $J$-adically complete.
+\end{lemma}
+
+\begin{proof}
+Let $B$ be the $(I + J)$-adic completion of $A$. By
+Lemma \ref{lemma-completion-flat} $B/IB$ is the $J$-adic completion of $A/I$
+hence isomorphic to $A/I$ by assumption. Moreover $B$ is $I$-adically
+complete by Lemma \ref{lemma-complete-by-sub}. Hence $B$ is a finite
+$A$-module by Lemma \ref{lemma-finite-over-complete-ring}.
+By Nakayama's lemma (Lemma \ref{lemma-NAK} using $I$ is in the
+Jacobson radical of $A$
+by Lemma \ref{lemma-radical-completion}) we find that $A \to B$ is surjective.
+The map $A \to B$ is flat by Lemma \ref{lemma-completion-flat}.
+The image of $\Spec(B) \to \Spec(A)$ contains $V(I)$ and as $I$
+is contained in the Jacobson radical of $A$ we find $A \to B$ is faithfully flat
(Lemma \ref{lemma-ff-rings}). Thus $A \to B$ is injective, hence an
isomorphism. Therefore $A$ is complete with respect to $I + J$, hence
a fortiori complete with respect to $J$.
+\end{proof}
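
\noindent
Here is a simple illustration, assuming the standard facts about power
series rings: let $A = \mathbf{Z}_p[[t]]$, $I = (p)$ and $J = (t)$. Then
$A$ is $(p)$-adically complete, since
$A/p^nA = (\mathbf{Z}/p^n\mathbf{Z})[[t]]$ and the limit of these is $A$,
and $A/pA = \mathbf{F}_p[[t]]$ is $(t)$-adically complete. The lemma then
recovers the (easily verified) fact that $A$ is $(t)$-adically complete.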
+
+
+
+
+
+
+
+
+
+\section{Taking limits of modules}
+\label{section-limits}
+
+\noindent
+In this section we discuss what happens when we take a limit of modules.
+
+\begin{lemma}
+\label{lemma-limit-complete-pre}
+Let $I \subset A$ be a finitely generated ideal of a ring.
+Let $(M_n)$ be an inverse system of $A$-modules with $I^n M_n = 0$.
+Then $M = \lim M_n$ is $I$-adically complete.
+\end{lemma}
+
+\begin{proof}
+We have $M \to M/I^nM \to M_n$. Taking the limit we get
+$M \to M^\wedge \to M$. Hence $M$ is a direct summand of $M^\wedge$.
+Since $M^\wedge$ is $I$-adically complete by
+Lemma \ref{lemma-hathat-finitely-generated}, so is $M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-complete}
+Let $I \subset A$ be a finitely generated ideal of a ring.
+Let $(M_n)$ be an inverse system of $A$-modules with
$M_n = M_{n + 1}/I^nM_{n + 1}$. Then $M = \lim M_n$ satisfies
$M/I^nM = M_n$ and is $I$-adically complete.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-limit-complete-pre} we see that $M$ is $I$-adically
+complete. Since the transition maps are surjective, the maps $M \to M_n$
+are surjective. Consider the inverse system of short exact sequences
+$$
+0 \to N_n \to M \to M_n \to 0
+$$
+defining $N_n$. Since $M_n = M_{n + 1}/I^nM_{n + 1}$ the map
+$N_{n + 1} + I^nM \to N_n$ is surjective. Hence
+$N_{n + 1}/(N_{n + 1} \cap I^{n + 1}M) \to N_n/(N_n \cap I^nM)$
+is surjective. Taking the inverse limit of the short exact sequences
+$$
+0 \to N_n/(N_n \cap I^nM) \to M/I^nM \to M_n \to 0
+$$
+we obtain an exact sequence
+$$
+0 \to \lim N_n/(N_n \cap I^nM) \to M^\wedge \to M
+$$
+Since $M$ is $I$-adically complete we conclude that
+$\lim N_n/(N_n \cap I^nM) = 0$ and hence by the surjectivity
+of the transition maps we get $N_n/(N_n \cap I^nM) = 0$ for all $n$.
+Thus $M_n = M/I^nM$ as desired.
+\end{proof}
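
\noindent
For example, the inverse system $M_n = \mathbf{Z}/p^n\mathbf{Z}$ over
$A = \mathbf{Z}$ with $I = (p)$ satisfies
$M_n = M_{n + 1}/p^nM_{n + 1}$, and the lemma asserts that
$M = \lim M_n = \mathbf{Z}_p$ is $p$-adically complete with
$\mathbf{Z}_p/p^n\mathbf{Z}_p = \mathbf{Z}/p^n\mathbf{Z}$.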
+
+\begin{lemma}
+\label{lemma-finiteness-graded}
+Let $A$ be a Noetherian graded ring. Let $I \subset A_+$ be a homogeneous
+ideal. Let $(N_n)$ be an inverse system of finite graded $A$-modules with
+$N_n = N_{n + 1}/I^n N_{n + 1}$. Then there is a finite graded $A$-module
+$N$ such that $N_n = N/I^nN$ as graded modules for all $n$.
+\end{lemma}
+
+\begin{proof}
+Pick $r$ and homogeneous elements $x_{1, 1}, \ldots, x_{1, r} \in N_1$ of
+degrees $d_1, \ldots, d_r$ generating $N_1$. Since the transition maps
+are surjective, we can pick a compatible system of homogeneous elements
+$x_{n, i} \in N_n$ lifting $x_{1, i}$. By the graded Nakayama lemma
+(Lemma \ref{lemma-graded-NAK}) we see that
+$N_n$ is generated by the elements $x_{n, 1}, \ldots, x_{n, r}$
+sitting in degrees $d_1, \ldots, d_r$.
+Thus for $m \leq n$ we see that $N_n \to N_n/I^m N_n$
+is an isomorphism in degrees $< \min(d_i) + m$ (as $I^mN_n$ is zero
+in those degrees). Thus the inverse system of degree $d$ parts
+$$
+\ldots
+= N_{2 + d - \min(d_i), d}
+= N_{1 + d - \min(d_i), d}
+= N_{d - \min(d_i), d} \to N_{-1 + d - \min(d_i), d} \to \ldots
+$$
+stabilizes as indicated. Let $N$ be the graded $A$-module whose
+$d$th graded part is this stabilization. In particular, we have the
+elements $x_i = \lim x_{n, i}$ in $N$. We claim the $x_i$ generate $N$:
+any $x \in N_d$ is a linear combination of $x_1, \ldots, x_r$
+because we can check this in $N_{d - \min(d_i), d}$ where it holds
+as $x_{d - \min(d_i), i}$ generate $N_{d - \min(d_i)}$.
Finally, the reader checks that the surjective map
$N/I^nN \to N_n$ is an isomorphism by checking what happens
in each degree as before. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-daniel-litt}
+Let $A$ be a graded ring. Let $I \subset A_+$ be a homogeneous ideal.
+Denote $A' = \lim A/I^n$. Let $(G_n)$ be an inverse system
+of graded $A$-modules with $G_n$ annihilated by $I^n$.
+Let $M$ be a graded $A$-module and let
+$\varphi_n : M \to G_n$ be a compatible system of graded
+$A$-module maps. If the induced map
+$$
+\varphi : M \otimes_A A' \longrightarrow \lim G_n
+$$
+is an isomorphism, then $M_d \to \lim G_{n, d}$
+is an isomorphism for all $d \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+By convention graded rings are in degrees $\geq 0$ and graded modules
+may have nonzero parts of any degree, see Section \ref{section-graded}.
+The map $\varphi$ exists because $\lim G_n$ is
+a module over $A'$ as $G_n$ is annihilated by $I^n$.
+Another useful thing to keep in mind is that we have
+$$
+\bigoplus\nolimits_{d \in \mathbf{Z}} \lim G_{n, d} \subset
+\lim G_n \subset
+\prod\nolimits_{d \in \mathbf{Z}} \lim G_{n, d}
+$$
+where a subscript ${\ }_d$ indicates the $d$th graded part.
+
+\medskip\noindent
+Injective. Let $x \in M_d$. If $x \mapsto 0$ in $\lim G_{n, d}$
+then $x \otimes 1 = 0$ in $M \otimes_A A'$. Then we can find
+a finitely generated submodule $M' \subset M$ with $x \in M'$
+such that $x \otimes 1$ is zero in $M' \otimes_A A'$.
+Say $M'$ is generated by homogeneous elements sitting
+in degrees $d_1, \ldots, d_r$. Let $n = d - \min(d_i) + 1$.
+Since $A'$ has a map to $A/I^n$ and since
+$A \to A/I^n$ is an isomorphism in degrees $\leq n - 1$
+we see that $M' \to M' \otimes_A A'$ is injective in
+degrees $\leq n - 1$. Thus $x = 0$ as desired.
+
+\medskip\noindent
+Surjective. Let $y \in \lim G_{n, d}$. Choose a finite
+sum $\sum x_i \otimes f'_i$ in $M \otimes_A A'$ mapping to $y$.
+We may assume $x_i$ is homogeneous, say of degree $d_i$.
+Observe that although $A'$ is not a graded ring, it is a
+limit of the graded rings $A/I^nA$ and moreover, in any
+given degree the transition maps eventually become isomorphisms
+(see above). This gives
+$$
+A = \bigoplus\nolimits_{d \geq 0} A_d \subset A' \subset
+\prod\nolimits_{d \geq 0} A_d
+$$
+Thus we can write
+$$
+f'_i = \sum\nolimits_{j = 0, \ldots, d - d_i - 1} f_{i, j} + f_i + g'_i
+$$
+with $f_{i, j} \in A_j$, $f_i \in A_{d - d_i}$, and
+$g'_i \in A'$ mapping to zero in $\prod_{j \leq d - d_i} A_j$.
+Now if we compute $\varphi_n(\sum_{i, j} f_{i, j}x_i) \in G_n$,
+then we get a sum of homogeneous elements of degree $< d$.
Hence $\varphi(\sum_{i, j} x_i \otimes f_{i, j})$ maps to zero in
$\lim G_{n, d}$.
+Similarly, a computation shows the element $\varphi(\sum x_i \otimes g'_i)$
+maps to zero in $\prod_{d' \leq d} \lim G_{n, d'}$.
+Since we know that $\varphi(\sum x_i \otimes f'_i)$
+is $y$, we conclude that $\sum f_ix_i \in M_d$ maps to $y$ as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Criteria for flatness}
+\label{section-criteria-flatness}
+
+\noindent
+In this section we prove some important technical lemmas in the Noetherian
+case. We will (partially) generalize these to the non-Noetherian case
+in Section \ref{section-more-flatness-criteria}.
+
+\begin{lemma}
+\label{lemma-mod-injective}
+Suppose that $R \to S$ is a local homomorphism of Noetherian local rings.
+Denote $\mathfrak m$ the maximal ideal of $R$. Let $M$ be a flat $R$-module
+and $N$ a finite $S$-module. Let $u : N \to M$ be a map of $R$-modules.
+If $\overline{u} : N/\mathfrak m N \to M/\mathfrak m M$
+is injective then $u$ is injective.
+In this case $M/u(N)$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+First we claim that $u_n : N/{\mathfrak m}^nN \to M/{\mathfrak m}^nM$
is injective for all $n \geq 1$. We proceed by induction; the base
case is that $\overline{u} = u_1$ is injective. By our assumption that $M$
+is flat over $R$ we have a short exact sequence
+$0 \to M \otimes_R {\mathfrak m}^n/{\mathfrak m}^{n + 1}
+\to M/{\mathfrak m}^{n + 1}M \to M/{\mathfrak m}^n M \to 0$.
+Also, $M \otimes_R {\mathfrak m}^n/{\mathfrak m}^{n + 1}
+= M/{\mathfrak m}M \otimes_{R/{\mathfrak m}}
+{\mathfrak m}^n/{\mathfrak m}^{n + 1}$. We have
+a similar exact sequence $N \otimes_R {\mathfrak m}^n/{\mathfrak m}^{n + 1}
+\to N/{\mathfrak m}^{n + 1}N \to N/{\mathfrak m}^n N \to 0$
for $N$, except that we do not have the zero on the left. We also
+have $N \otimes_R {\mathfrak m}^n/{\mathfrak m}^{n + 1}
+= N/{\mathfrak m}N \otimes_{R/{\mathfrak m}}
+{\mathfrak m}^n/{\mathfrak m}^{n + 1}$. Thus the map $u_{n + 1}$ is
+injective as both $u_n$ and the map
+$\overline{u} \otimes \text{id}_{{\mathfrak m}^n/{\mathfrak m}^{n + 1}}$ are.
+
+\medskip\noindent
+By Krull's intersection theorem
+(Lemma \ref{lemma-intersect-powers-ideal-module-zero})
+applied to $N$ over the ring $S$ and the ideal $\mathfrak mS$
+we have $\bigcap \mathfrak m^nN = 0$. Thus the injectivity
+of $u_n$ for all $n$ implies $u$ is injective.
+
+\medskip\noindent
+To show that $M/u(N)$ is flat over $R$, it suffices to show that
+$\text{Tor}_1^R(M/u(N), R/I) = 0$ for every ideal $I \subset R$,
+see Lemma \ref{lemma-characterize-flat}. From the short exact sequence
+$$
+0 \to N \xrightarrow{u} M \to M/u(N) \to 0
+$$
+and the flatness of $M$ we obtain an exact sequence of Tors
+$$
+0 \to \text{Tor}_1^R(M/u(N), R/I) \to N/IN \to M/IM
+$$
+See Lemma \ref{lemma-long-exact-sequence-tor}. Thus it suffices to show that
+$N/IN$ injects into $M/IM$. Note that $R/I \to S/IS$ is a local
+homomorphism of Noetherian local rings, $N/IN \to M/IM$ is a map
+of $R/I$-modules, $N/IN$ is finite over $S/IS$, and $M/IM$ is flat over
+$R/I$ and $u \bmod I : N/IN \to M/IM$ is injective modulo $\mathfrak m$.
+Thus we may apply the first part of the proof to $u \bmod I$ and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-grothendieck}
+Suppose that $R \to S$ is a flat and local ring homomorphism of Noetherian
+local rings. Denote $\mathfrak m$ the maximal ideal of $R$.
+Suppose $f \in S$ is a nonzerodivisor in $S/{\mathfrak m}S$.
+Then $S/fS$ is flat over $R$, and $f$ is a nonzerodivisor in $S$.
+\end{lemma}
+
+\begin{proof}
+Follows directly from Lemma \ref{lemma-mod-injective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-grothendieck-regular-sequence}
+Suppose that $R \to S$ is a flat and local ring homomorphism of Noetherian
+local rings. Denote $\mathfrak m$ the maximal ideal of $R$.
+Suppose $f_1, \ldots, f_c$ is a sequence of elements of
+$S$ such that the images $\overline{f}_1, \ldots, \overline{f}_c$
+form a regular sequence in $S/{\mathfrak m}S$.
+Then $f_1, \ldots, f_c$ is a regular sequence in $S$ and each
+of the quotients $S/(f_1, \ldots, f_i)$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+Induction and Lemma \ref{lemma-grothendieck}.
+\end{proof}
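
\noindent
A typical application, assuming the standard fact that
$R[[x_1, \ldots, x_c]]$ is flat over the Noetherian ring $R$: if $R$ is
Noetherian local with residue field $\kappa$, then
$R \to S = R[[x_1, \ldots, x_c]]$ is a flat local homomorphism with
$S/\mathfrak m S = \kappa[[x_1, \ldots, x_c]]$, in which the images of
$x_1, \ldots, x_c$ form a regular sequence. Hence $x_1, \ldots, x_c$ is a
regular sequence in $S$ and each quotient
$S/(x_1, \ldots, x_i) = R[[x_{i + 1}, \ldots, x_c]]$ is flat over $R$.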
+
+\begin{lemma}
+\label{lemma-free-fibre-flat-free}
+Let $R \to S$ be a local homomorphism of Noetherian
+local rings. Let $\mathfrak m$ be the maximal
+ideal of $R$. Let $M$ be a finite $S$-module.
+Suppose that (a) $M/\mathfrak mM$
+is a free $S/\mathfrak mS$-module, and (b) $M$ is flat over $R$.
+Then $M$ is free and $S$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+Let $\overline{x}_1, \ldots, \overline{x}_n$ be a basis
+for the free module $M/\mathfrak mM$. Choose
+$x_1, \ldots, x_n \in M$ with $x_i$ mapping to $\overline{x}_i$. Let
+$u : S^{\oplus n} \to M$ be the map which maps the $i$th
+standard basis vector to $x_i$. By Lemma \ref{lemma-mod-injective}
+we see that $u$ is injective. On the other hand, by
+Nakayama's Lemma \ref{lemma-NAK} the map is surjective. The
+lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complex-exact-mod}
+Let $R \to S$ be a local homomorphism of local Noetherian
+rings. Let $\mathfrak m$ be the maximal ideal of $R$.
+Let $0 \to F_e \to F_{e-1} \to \ldots \to F_0$
+be a finite complex of finite $S$-modules. Assume that
+each $F_i$ is $R$-flat, and that the complex
+$0 \to F_e/\mathfrak m F_e \to F_{e-1}/\mathfrak m F_{e-1}
+\to \ldots \to F_0 / \mathfrak m F_0$ is exact.
+Then $0 \to F_e \to F_{e-1} \to \ldots \to F_0$
+is exact, and moreover the module
+$\Coker(F_1 \to F_0)$ is $R$-flat.
+\end{lemma}
+
+\begin{proof}
+By induction on $e$. If $e = 1$, then this is exactly
+Lemma \ref{lemma-mod-injective}. If $e > 1$, we see
+by Lemma \ref{lemma-mod-injective} that $F_e \to F_{e-1}$
+is injective and that $C = \Coker(F_e \to F_{e-1})$
+is a finite $S$-module flat over $R$. Hence we can
+apply the induction hypothesis to the complex
+$0 \to C \to F_{e-2} \to \ldots \to F_0$.
+We deduce that $C \to F_{e-2}$ is injective
+and the exactness of the complex follows, as well
+as the flatness of the cokernel of $F_1 \to F_0$.
+\end{proof}
+
+\noindent
+In the rest of this section we prove two versions of what is called the
+``{\it local criterion of flatness}''. Note also the interesting
+Lemma \ref{lemma-CM-over-regular-flat} below.
+
+\begin{lemma}
+\label{lemma-prepare-local-criterion-flatness}
+Let $R$ be a local ring with maximal ideal $\mathfrak m$
+and residue field $\kappa = R/\mathfrak m$.
+Let $M$ be an $R$-module. If $\text{Tor}_1^R(\kappa, M) = 0$,
+then for every finite length $R$-module $N$ we have
+$\text{Tor}_1^R(N, M) = 0$.
+\end{lemma}
+
+\begin{proof}
By induction on the length of $N$.
+If the length of $N$ is $1$, then $N \cong \kappa$
+and we are done. If the length of $N$ is more than
+$1$, then we can fit $N$ into a short exact sequence
+$0 \to N' \to N \to N'' \to 0$ where $N'$, $N''$ are
+finite length $R$-modules of smaller length.
+The vanishing of $\text{Tor}_1^R(N, M)$ follows
+from the vanishing of $\text{Tor}_1^R(N', M)$
+and $\text{Tor}_1^R(N'', M)$ (induction hypothesis)
+and the long exact sequence of Tor groups, see Lemma
+\ref{lemma-long-exact-sequence-tor}.
+\end{proof}
+
+\begin{lemma}[Local criterion for flatness]
+\label{lemma-local-criterion-flatness}
+Let $R \to S$ be a local homomorphism of local Noetherian
+rings. Let $\mathfrak m$ be the maximal ideal of $R$,
+and let $\kappa = R/\mathfrak m$.
+Let $M$ be a finite $S$-module. If $\text{Tor}_1^R(\kappa, M) = 0$,
+then $M$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+Let $I \subset R$ be an ideal. By Lemma \ref{lemma-flat} it suffices
to show that $I \otimes_R M \to M$ is injective. By Remark
\ref{remark-Tor-ring-mod-ideal} the kernel of this map is
equal to $\text{Tor}_1^R(M, R/I)$. By
+Lemma \ref{lemma-prepare-local-criterion-flatness}
we see that $J \otimes_R M \to M$ is injective for all ideals
$J \subset R$ of finite colength.
+
+\medskip\noindent
Choose $n \gg 0$ and consider the following short exact
+sequence
+$$
+0
+\to I \cap \mathfrak m^n
+\to I \oplus \mathfrak m^n
+\to I + \mathfrak m^n
+\to 0
+$$
This is a sequence of submodules of the short exact sequence
$0 \to R \to R^{\oplus 2} \to R \to 0$, where the maps are
$x \mapsto (x, -x)$ and $(x, y) \mapsto x + y$. Thus we get the diagram
+$$
+\xymatrix{
+(I\cap \mathfrak m^n) \otimes_R M \ar[r] \ar[d] &
+I \otimes_R M \oplus \mathfrak m^n \otimes_R M \ar[r] \ar[d] &
+(I + \mathfrak m^n) \otimes_R M \ar[d] \\
+M \ar[r] &
+M \oplus M \ar[r] &
+M
+}
+$$
+Note that $I + \mathfrak m^n$ and $\mathfrak m^n$
+are ideals of finite colength.
+Thus a diagram chase shows that
+$\Ker((I \cap \mathfrak m^n)\otimes_R M \to M)
+\to \Ker(I \otimes_R M \to M)$
+is surjective. We conclude in particular that
+$K = \Ker(I \otimes_R M \to M)$ is contained
+in the image of $(I \cap \mathfrak m^n) \otimes_R M$
+in $I \otimes_R M$. By Artin-Rees, Lemma \ref{lemma-Artin-Rees}
+we see that $K$ is contained
+in $\mathfrak m^{n-c}(I \otimes_R M)$ for some $c > 0$
and all $n \gg 0$. Since $I \otimes_R M$ is a finite
+$S$-module (!) and since $S$ is Noetherian, we see
+that this implies $K = 0$. Namely, the above implies
+$K$ maps to zero in the $\mathfrak mS$-adic completion
+of $I \otimes_R M$. But the map from $S$
+to its $\mathfrak mS$-adic completion is faithfully flat
+by Lemma \ref{lemma-completion-faithfully-flat}.
+Hence $K = 0$, as desired.
+\end{proof}
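
\noindent
For instance, taking $S = R$ the lemma says a finite module $M$ over a
Noetherian local ring $R$ is flat as soon as $\text{Tor}_1^R(\kappa, M) = 0$.
If moreover $R$ is a discrete valuation ring with uniformizer $\pi$, the
free resolution $0 \to R \xrightarrow{\pi} R \to \kappa \to 0$ gives
$$
\text{Tor}_1^R(\kappa, M) = \Ker(\pi : M \to M)
$$
so a finite module over a discrete valuation ring is flat if and only if
it is torsion free.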
+
+\noindent
+In the following we often encounter the conditions
+``$M/IM$ is flat over $R/I$ and $\text{Tor}_1^R(R/I, M) = 0$''.
+The following lemma gives some consequences of these conditions
+(it is a generalization of
+Lemma \ref{lemma-prepare-local-criterion-flatness}).
+
+\begin{lemma}
+\label{lemma-what-does-it-mean}
+Let $R$ be a ring.
+Let $I \subset R$ be an ideal.
+Let $M$ be an $R$-module.
+If $M/IM$ is flat over $R/I$ and $\text{Tor}_1^R(R/I, M) = 0$ then
+\begin{enumerate}
+\item $M/I^nM$ is flat over $R/I^n$ for all $n \geq 1$, and
+\item for any module $N$ which is annihilated by $I^m$ for some $m \geq 0$
+we have $\text{Tor}_1^R(N, M) = 0$.
+\end{enumerate}
+In particular, if $I$ is nilpotent, then $M$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+Assume $M/IM$ is flat over $R/I$ and $\text{Tor}_1^R(R/I, M) = 0$.
+Let $N$ be an $R/I$-module. Choose a short exact sequence
+$$
+0 \to K \to \bigoplus\nolimits_{i \in I} R/I \to N \to 0
+$$
+By the long exact sequence of $\text{Tor}$ and the vanishing of
+$\text{Tor}_1^R(R/I, M)$ we get
+$$
+0 \to \text{Tor}_1^R(N, M) \to K \otimes_R M \to
+(\bigoplus\nolimits_{i \in I} R/I) \otimes_R M \to N \otimes_R M \to 0
+$$
+But since $K$, $\bigoplus_{i \in I} R/I$, and $N$ are all annihilated
+by $I$ we see that
+\begin{align*}
+K \otimes_R M & = K \otimes_{R/I} M/IM, \\
+(\bigoplus\nolimits_{i \in I} R/I) \otimes_R M & =
+(\bigoplus\nolimits_{i \in I} R/I) \otimes_{R/I} M/IM, \\
+N \otimes_R M & = N \otimes_{R/I} M/IM.
+\end{align*}
+As $M/IM$ is flat over $R/I$ we conclude that
+$$
+0 \to K \otimes_{R/I} M/IM \to
+(\bigoplus\nolimits_{i \in I} R/I) \otimes_{R/I} M/IM \to
N \otimes_{R/I} M/IM \to 0
+$$
+is exact. Combining this with the above we conclude that
+$\text{Tor}_1^R(N, M) = 0$ for any $R$-module $N$ annihilated by $I$.
+
+\medskip\noindent
+In particular, if we apply this to the
+module $I/I^2$, then we conclude that the sequence
+$$
+0 \to I^2 \otimes_R M \to I \otimes_R M \to I/I^2 \otimes_R M \to 0
+$$
+is short exact. This implies that $I^2 \otimes_R M \to M$ is injective
+and it implies that $I/I^2 \otimes_{R/I} M/IM = IM/I^2M$.
+
+\medskip\noindent
+Let us prove that $M/I^2M$ is flat over $R/I^2$. Let $I^2 \subset J$
+be an ideal. We have to show that
+$J/I^2 \otimes_{R/I^2} M/I^2M \to M/I^2M$ is injective, see
+Lemma \ref{lemma-flat}.
+As $M/IM$ is flat over $R/I$ we know that the map
+$(I + J)/I \otimes_{R/I} M/IM \to M/IM$ is injective.
+The sequence
+$$
+(I \cap J)/I^2 \otimes_{R/I^2} M/I^2M \to
+J/I^2 \otimes_{R/I^2} M/I^2M \to
+(I + J)/I \otimes_{R/I} M/IM \to 0
+$$
is exact, as it is obtained by tensoring the exact sequence
$0 \to (I \cap J) \to J \to (I + J)/I \to 0$ with $M/I^2M$.
Hence it suffices to prove the injectivity of the map
$(I \cap J)/I^2 \otimes_{R/I} M/IM \to IM/I^2M$. However, the map
+$(I \cap J)/I^2 \to I/I^2$ is injective and as $M/IM$
+is flat over $R/I$ the map
+$(I \cap J)/I^2 \otimes_{R/I} M/IM \to I/I^2 \otimes_{R/I} M/IM$
+is injective. Since we have previously seen that
+$I/I^2 \otimes_{R/I} M/IM = IM/I^2M$ we obtain the desired injectivity.
+
+\medskip\noindent
+Hence we have proven that the assumptions imply:
+(a) $\text{Tor}_1^R(N, M) = 0$ for all $N$ annihilated by $I$,
+(b) $I^2 \otimes_R M \to M$ is injective, and (c) $M/I^2M$ is flat
+over $R/I^2$. Thus we can continue by induction to get the
+same results for $I^n$ for all $n \geq 1$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-what-does-it-mean-again}
+Let $R$ be a ring. Let $I \subset R$ be an ideal.
+Let $M$ be an $R$-module.
+\begin{enumerate}
+\item If $M/IM$ is flat over $R/I$ and $M \otimes_R I/I^2 \to IM/I^2M$
+is injective, then $M/I^2M$ is flat over $R/I^2$.
+\item If $M/IM$ is flat over $R/I$ and $M \otimes_R I^n/I^{n + 1}
+\to I^nM/I^{n + 1}M$ is injective for $n = 1, \ldots, k$,
+then $M/I^{k + 1}M$ is flat over $R/I^{k + 1}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first statement is a consequence of
+Lemma \ref{lemma-what-does-it-mean} applied with $R$ replaced by $R/I^2$
+and $M$ replaced by $M/I^2M$ using that
+$$
+\text{Tor}_1^{R/I^2}(M/I^2M, R/I) =
+\Ker(M \otimes_R I/I^2 \to IM/I^2M),
+$$
+see Remark \ref{remark-Tor-ring-mod-ideal}.
+The second statement follows in the same manner using induction
+on $n$ to show that $M/I^{n + 1}M$ is flat over $R/I^{n + 1}$ for
+$n = 1, \ldots, k$. Here we use that
+$$
+\text{Tor}_1^{R/I^{n + 1}}(M/I^{n + 1}M, R/I) =
+\Ker(M \otimes_R I^n/I^{n + 1} \to I^nM/I^{n + 1}M)
+$$
+for every $n$.
+\end{proof}
+
+\begin{lemma}[Variant of the local criterion]
+\label{lemma-variant-local-criterion-flatness}
+Let $R \to S$ be a local homomorphism of Noetherian
+local rings. Let $I \not = R$ be an ideal in $R$.
+Let $M$ be a finite $S$-module. If $\text{Tor}_1^R(M, R/I) = 0$
+and $M/IM$ is flat over $R/I$, then $M$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+First proof: By
+Lemma \ref{lemma-what-does-it-mean}
+we see that $\text{Tor}_1^R(\kappa, M)$ is zero where $\kappa$
+is the residue field of $R$. Hence we see that $M$
+is flat over $R$ by
+Lemma \ref{lemma-local-criterion-flatness}.
+
+\medskip\noindent
+Second proof: Let $\mathfrak m$ be the maximal ideal of $R$.
+We will show that $\mathfrak m \otimes_R M \to M$ is injective,
+and then apply
+Lemma \ref{lemma-local-criterion-flatness}.
+Suppose that $\sum f_i \otimes x_i \in \mathfrak m \otimes_R M$
+and that $\sum f_i x_i = 0$ in $M$. By the equational criterion
+for flatness Lemma \ref{lemma-flat-eq} applied to $M/IM$
+over $R/I$ we see there exist $\overline{a}_{ij} \in R/I$
+and $\overline{y}_j \in M/IM$ such that
+$x_i \bmod IM = \sum_j \overline{a}_{ij} \overline{y}_j $
+and $0 = \sum_i (f_i \bmod I) \overline{a}_{ij}$.
+Let $a_{ij} \in R$ be a lift of $\overline{a}_{ij}$ and
+similarly let $y_j \in M$ be a lift of $\overline{y}_j$.
+Then we see that
+\begin{eqnarray*}
+\sum f_i \otimes x_i
+& = &
+\sum f_i \otimes x_i +
+\sum f_ia_{ij} \otimes y_j -
+\sum f_i \otimes a_{ij} y_j
+\\
+& = &
+\sum f_i \otimes (x_i - \sum a_{ij} y_j) +
+\sum (\sum f_i a_{ij}) \otimes y_j
+\end{eqnarray*}
+Since $x_i - \sum a_{ij} y_j \in IM$ and
+$\sum f_i a_{ij} \in I$ we see that there exists
+an element in $I \otimes_R M$ which maps to our given
+element $\sum f_i \otimes x_i$ in $\mathfrak m \otimes_R M$.
+But $I \otimes_R M \to M$ is injective by assumption (see
+Remark \ref{remark-Tor-ring-mod-ideal}) and we win.
+\end{proof}
+
+\noindent
+In particular, in the situation of
+Lemma \ref{lemma-variant-local-criterion-flatness}, suppose that
+$I = (x)$ is generated by a single element $x$ which is
a nonzerodivisor in $R$. Then $\text{Tor}_1^R(M, R/(x)) = 0$
+if and only if $x$ is a nonzerodivisor on $M$.
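
\noindent
Indeed, tensoring the short exact sequence
$0 \to R \xrightarrow{x} R \to R/(x) \to 0$ with $M$ and using that
$\text{Tor}_1^R(M, R) = 0$ we obtain the exact sequence
$$
0 \to \text{Tor}_1^R(M, R/(x)) \to M \xrightarrow{x} M \to M/xM \to 0
$$
which identifies $\text{Tor}_1^R(M, R/(x))$ with the $x$-torsion of $M$.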
+
+\begin{lemma}
+\label{lemma-flat-module-powers}
+Let $R \to S$ be a ring map. Let $I \subset R$ be an ideal.
+Let $M$ be an $S$-module. Assume
+\begin{enumerate}
+\item $R$ is a Noetherian ring,
+\item $S$ is a Noetherian ring,
+\item $M$ is a finite $S$-module, and
+\item for each $n \geq 1$ the module $M/I^n M$ is flat over
+$R/I^n$.
+\end{enumerate}
+Then for every $\mathfrak q \in V(IS)$
+the localization $M_{\mathfrak q}$ is flat over $R$.
+In particular, if $S$ is local and $IS$ is contained
+in its maximal ideal, then $M$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+We are going to use
+Lemma \ref{lemma-variant-local-criterion-flatness}.
+By assumption $M/IM$ is flat over $R/I$. Hence it suffices to check
+that $\text{Tor}_1^R(M, R/I)$ is zero on localization at $\mathfrak q$. By
+Remark \ref{remark-Tor-ring-mod-ideal}
+this Tor group is equal to $K = \Ker(I \otimes_R M \to M)$.
+We know for each $n \geq 1$ that the kernel
+$\Ker(I/I^n \otimes_{R/I^n} M/I^nM \to M/I^nM)$ is zero.
+Since there is a module map
+$I/I^n \otimes_{R/I^n} M/I^nM \to (I \otimes_R M)/I^{n - 1}(I \otimes_R M)$
+we conclude that $K \subset I^{n - 1}(I \otimes_R M)$ for each $n$.
+By the Artin-Rees lemma, and more precisely
+Lemma \ref{lemma-intersection-powers-ideal-module}
+we conclude that $K_{\mathfrak q} = 0$, as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-on-tor-one}
+Let $R \to R' \to R''$ be ring maps.
+Let $M$ be an $R$-module. Suppose that $M \otimes_R R'$
+is flat over $R'$. Then the natural map
+$\text{Tor}_1^R(M, R') \otimes_{R'} R'' \to
+\text{Tor}_1^R(M, R'')$ is onto.
+\end{lemma}
+
+\begin{proof}
+Let $F_\bullet$ be a free resolution of $M$ over $R$.
+The complex $F_2 \otimes_R R' \to F_1\otimes_R R' \to F_0 \otimes_R R'$
+computes $\text{Tor}_1^R(M, R')$.
+The complex $F_2 \otimes_R R'' \to F_1\otimes_R R'' \to F_0 \otimes_R R''$
+computes $\text{Tor}_1^R(M, R'')$. Note that
+$F_i \otimes_R R' \otimes_{R'} R'' = F_i \otimes_R R''$. Let
+$K' = \Ker(F_1\otimes_R R' \to F_0 \otimes_R R')$ and
+similarly $K'' = \Ker(F_1\otimes_R R'' \to F_0 \otimes_R R'')$.
+Thus we have an exact sequence
+$$
+0 \to K' \to F_1\otimes_R R' \to F_0 \otimes_R R' \to M \otimes_R R' \to 0.
+$$
+By the assumption that $M \otimes_R R'$ is flat over $R'$,
+the sequence
+$$
+K' \otimes_{R'} R'' \to
+F_1 \otimes_R R'' \to
+F_0 \otimes_R R'' \to
+M \otimes_R R'' \to 0
+$$
+is still exact. This means that $K' \otimes_{R'} R'' \to K''$
+is surjective. Since $\text{Tor}_1^R(M, R')$ is a quotient of $K'$ and
+$\text{Tor}_1^R(M, R'')$ is a quotient of $K''$ we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-on-tor-one-trivial}
+Let $R \to R'$ be a ring map. Let $I \subset R$ be
+an ideal and $I' = IR'$. Let $M$ be an $R$-module
+and set $M' = M \otimes_R R'$. The natural map
+$\text{Tor}_1^R(R'/I', M) \to \text{Tor}_1^{R'}(R'/I', M')$
+is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $F_2 \to F_1 \to F_0 \to M \to 0$ be a free resolution of
+$M$ over $R$. Set $F_i' = F_i \otimes_R R'$. The sequence
+$F_2' \to F_1' \to F_0' \to M' \to 0$ may no longer be exact
+at $F_1'$. A free resolution of $M'$ over $R'$ therefore looks
+like
+$$
+F_2' \oplus F_2'' \to F_1' \to F_0' \to M' \to 0
+$$
+for a suitable free module $F_2''$ over $R'$. Next, note that
+$F_i \otimes_R R'/I' = F_i' / IF_i' = F_i'/I'F_i'$.
+So the complex $F_2'/I'F_2' \to F_1'/I'F_1' \to F_0'/I'F_0'$
+computes $\text{Tor}_1^R(M, R'/I')$. On the other hand
+$F_i' \otimes_{R'} R'/I' = F_i'/I'F_i'$ and similarly
+for $F_2''$. Thus the complex
+$F_2'/I'F_2' \oplus F_2''/I'F_2'' \to F_1'/I'F_1' \to F_0'/I'F_0'$
+computes $\text{Tor}_1^{R'}(M', R'/I')$. Since the vertical
+map on complexes
+$$
+\xymatrix{
+F_2'/I'F_2' \ar[r] \ar[d] &
+F_1'/I'F_1' \ar[r] \ar[d] &
+F_0'/I'F_0' \ar[d] \\
+F_2'/I'F_2' \oplus F_2''/I'F_2'' \ar[r] &
+F_1'/I'F_1' \ar[r] &
+F_0'/I'F_0'
+}
+$$
clearly induces a surjection on cohomology, we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-another-variant-local-criterion-flatness}
+Let
+$$
+\xymatrix{
+S \ar[r] & S' \\
+R \ar[r] \ar[u] & R' \ar[u]
+}
+$$
+be a commutative diagram of local homomorphisms of local Noetherian rings.
+Let $I \subset R$ be a proper ideal.
+Let $M$ be a finite $S$-module.
+Denote $I' = IR'$ and $M' = M \otimes_S S'$.
+Assume that
+\begin{enumerate}
+\item $S'$ is a localization of the tensor product
+$S \otimes_R R'$,
+\item $M/IM$ is flat over $R/I$,
+\item $\text{Tor}_1^R(M, R/I) \to \text{Tor}_1^{R'}(M', R'/I')$
+is zero.
+\end{enumerate}
+Then $M'$ is flat over $R'$.
+\end{lemma}
+
+\begin{proof}
+Since $S'$ is a localization of $S \otimes_R R'$ we see that
+$M'$ is a localization of $M \otimes_R R'$. Note that
+by Lemma \ref{lemma-flat-base-change} the module $M/IM \otimes_{R/I} R'/I'
+= M \otimes_R R' /I'(M \otimes_R R')$ is flat over $R'/I'$. Hence also
+$M'/I'M'$ is flat over $R'/I'$ as the localization of a flat module
+is flat. By Lemma \ref{lemma-variant-local-criterion-flatness}
+it suffices to show that $\text{Tor}_1^{R'}(M', R'/I')$ is zero.
+Since $M'$ is a localization of $M \otimes_R R'$, the last assumption
+implies that it suffices to show that
+$\text{Tor}_1^R(M, R/I) \otimes_R R'
+\to
+\text{Tor}_1^{R'}(M \otimes_R R', R'/I')$
+is surjective.
+
+\medskip\noindent
+By Lemma \ref{lemma-surjective-on-tor-one-trivial} we see that
+$\text{Tor}_1^R(M, R'/I') \to \text{Tor}_1^{R'}(M \otimes_R R', R'/I')$
+is surjective. So now it suffices to show that
+$\text{Tor}_1^R(M, R/I) \otimes_R R'
+\to
+\text{Tor}_1^R(M, R'/I')$
+is surjective. This follows from Lemma \ref{lemma-surjective-on-tor-one}
+by looking at the ring maps $R \to R/I \to R'/I'$ and the module $M$.
+\end{proof}
+
+\noindent
+Please compare the lemma below to
+Lemma \ref{lemma-criterion-flatness-fibre-nilpotent}
+(the case of a nilpotent ideal) and
+Lemma \ref{lemma-criterion-flatness-fibre}
+(the case of finitely presented algebras).
+
+\begin{lemma}[Crit\`ere de platitude par fibres; Noetherian case]
+\label{lemma-criterion-flatness-fibre-Noetherian}
+Let $R$, $S$, $S'$ be Noetherian local rings and let $R \to S \to S'$
+be local ring homomorphisms. Let $\mathfrak m \subset R$ be the
+maximal ideal. Let $M$ be an $S'$-module. Assume
+\begin{enumerate}
+\item The module $M$ is finite over $S'$.
+\item The module $M$ is not zero.
+\item The module $M/\mathfrak m M$
+is a flat $S/\mathfrak m S$-module.
+\item The module $M$ is a flat $R$-module.
+\end{enumerate}
+Then $S$ is flat over $R$ and $M$ is a flat $S$-module.
+\end{lemma}
+
+\begin{proof}
+Set $I = \mathfrak mS \subset S$. Then we see that $M/IM$ is a flat
+$S/I$-module because of (3). Since
+$\mathfrak m \otimes_R S' \to I \otimes_S S'$ is surjective we see
+that also $\mathfrak m \otimes_R M \to I \otimes_S M$ is surjective.
+Consider
+$$
+\mathfrak m \otimes_R M \to I \otimes_S M \to M.
+$$
+As $M$ is flat over $R$ the composition is injective
+and so both arrows are injective.
+In particular $\text{Tor}_1^S(S/I, M) = 0$ see
+Remark \ref{remark-Tor-ring-mod-ideal}. By
+Lemma \ref{lemma-variant-local-criterion-flatness} we conclude
+that $M$ is flat over $S$. Note that since $M/\mathfrak m_{S'}M$
+is not zero by Nakayama's Lemma \ref{lemma-NAK}
+we see that actually $M$ is faithfully flat over $S$ by
+Lemma \ref{lemma-ff} (since it forces $M/\mathfrak m_SM \not = 0$).
+
+\medskip\noindent
+Consider the exact sequence
+$0 \to \mathfrak m \to R \to \kappa \to 0$.
+This gives an exact sequence
+$0 \to \text{Tor}_1^R(\kappa, S) \to \mathfrak m \otimes_R S \to I \to 0$.
+Since $M$ is flat over $S$ this gives an exact sequence
+$0 \to \text{Tor}_1^R(\kappa, S)\otimes_S M \to
+\mathfrak m \otimes_R M \to I \otimes_S M \to 0$.
+By the above this implies that $\text{Tor}_1^R(\kappa, S)\otimes_S M = 0$.
+Since $M$ is faithfully flat over $S$ this implies that
+$\text{Tor}_1^R(\kappa, S) = 0$ and we conclude that
+$S$ is flat over $R$ by Lemma \ref{lemma-local-criterion-flatness}.
+\end{proof}
+
+
+
+
+
+
+\section{Base change and flatness}
+\label{section-base-change-flat}
+
\noindent
In this section we prove some lemmas describing how flatness
behaves under base change.
+
+\begin{lemma}
+\label{lemma-base-change-flat-up-down}
+Let
+$$
+\xymatrix{
+S \ar[r] & S' \\
+R \ar[r] \ar[u] & R' \ar[u]
+}
+$$
+be a commutative diagram of local homomorphisms of local rings.
+Assume that $S'$ is a localization of the tensor product $S \otimes_R R'$.
+Let $M$ be an $S$-module and set $M' = S' \otimes_S M$.
+\begin{enumerate}
+\item If $M$ is flat over $R$ then $M'$ is flat over $R'$.
+\item If $M'$ is flat over $R'$ and $R \to R'$ is flat then
+$M$ is flat over $R$.
+\end{enumerate}
+In particular we have
+\begin{enumerate}
+\item[(3)] If $S$ is flat over $R$ then $S'$ is flat over $R'$.
+\item[(4)] If $R' \to S'$ and $R \to R'$ are flat then $S$ is flat over $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). If $M$ is flat over $R$, then $M \otimes_R R'$
+is flat over $R'$ by
+Lemma \ref{lemma-flat-base-change}.
+If $W \subset S \otimes_R R'$ is the multiplicative subset such that
+$W^{-1}(S \otimes_R R') = S'$ then $M' = W^{-1}(M \otimes_R R')$.
+Hence $M'$ is flat over $R'$ as the localization of a flat module, see
+Lemma \ref{lemma-flat-localization} part (5). This proves (1) and in
+particular, we see that (3) holds.
+
+\medskip\noindent
+Proof of (2). Suppose that $M'$ is flat over $R'$ and $R \to R'$ is flat.
+By (3) applied to the diagram reflected in the northwest diagonal
+we see that $S \to S'$ is flat. Thus $S \to S'$ is faithfully flat by
+Lemma \ref{lemma-local-flat-ff}.
+We are going to use the criterion of
+Lemma \ref{lemma-flat} (\ref{item-f-ideal})
+to show that $M$ is flat.
Let $I \subset R$ be an ideal. We have to show that
$I \otimes_R M \to M$ is injective. Since $S \to S'$ is
faithfully flat, it suffices to show that
$(I \otimes_R M) \otimes_S S' \to M \otimes_S S' = M'$ is injective.
+Note that $I \otimes_R R' = IR'$ as $R \to R'$ is flat, and that
+$$
+(I \otimes_R M) \otimes_S S' =
+(I \otimes_R R') \otimes_{R'} (M \otimes_S S') =
+IR' \otimes_{R'} M'.
+$$
+From flatness of $M'$ over $R'$
+we conclude that this maps injectively into $M'$.
+This concludes the proof of (2), and hence (4) is true as well.
+\end{proof}
+
+\noindent
+Here is yet another application of the local criterion of flatness.
+
+\begin{lemma}
+\label{lemma-yet-another-variant-local-criterion-flatness}
+Consider a commutative diagram of local rings and local homomorphisms
+$$
+\xymatrix{
+S \ar[r] & S' \\
+R \ar[r] \ar[u] & R' \ar[u]
+}
+$$
+Let $M$ be a finite $S$-module. Assume that
+\begin{enumerate}
+\item the horizontal arrows are flat ring maps
+\item $M$ is flat over $R$,
+\item $\mathfrak m_R R' = \mathfrak m_{R'}$,
+\item $R'$ and $S'$ are Noetherian.
+\end{enumerate}
+Then $M' = M \otimes_S S'$ is flat over $R'$.
+\end{lemma}
+
+\begin{proof}
+Since $\mathfrak m_R \subset R$ and $R \to R'$ is flat, we get
+$\mathfrak m_R \otimes_R R' = \mathfrak m_R R' = \mathfrak m_{R'}$
+by assumption (3). Observe that $M'$ is a finite $S'$-module
+which is flat over $R$ by Lemma \ref{lemma-flatness-descends-more-general}.
+Thus $\mathfrak m_R \otimes_R M' \to M'$ is injective.
+Then we get
+$$
+\mathfrak m_R \otimes_R M' =
+\mathfrak m_R \otimes_R R' \otimes_{R'} M' =
+\mathfrak m_{R'} \otimes_{R'} M'
+$$
+Thus $\mathfrak m_{R'} \otimes_{R'} M' \to M'$ is injective.
+This shows that $\text{Tor}_1^{R'}(\kappa_{R'}, M') = 0$
+(Remark \ref{remark-Tor-ring-mod-ideal}).
+Thus $M'$ is flat over $R'$ by
+Lemma \ref{lemma-local-criterion-flatness}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Flatness criteria over Artinian rings}
+\label{section-flatness-artinian}
+
+\noindent
+We discuss some flatness criteria for modules over Artinian rings.
+Note that an Artinian local ring has a nilpotent maximal ideal
+so that the following two lemmas apply to Artinian local rings.
+
+\begin{lemma}
+\label{lemma-local-artinian-basis-when-flat}
+Let $(R, \mathfrak m)$ be a local ring with nilpotent maximal ideal
+$\mathfrak m$. Let $M$ be a flat $R$-module.
+If $A$ is a set and $x_\alpha \in M$, $\alpha \in A$ is a collection
+of elements of $M$, then the following are equivalent:
+\begin{enumerate}
+\item $\{\overline{x}_\alpha\}_{\alpha \in A}$ forms a basis
+for the vector space $M/\mathfrak mM$ over $R/\mathfrak m$, and
+\item $\{x_\alpha\}_{\alpha \in A}$ forms a basis for $M$ over $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (2) $\Rightarrow$ (1) is immediate.
+Assume (1). By Nakayama's Lemma \ref{lemma-NAK}
+the elements $x_\alpha$ generate $M$. Then one gets a short exact
+sequence
+$$
+0 \to K \to \bigoplus\nolimits_{\alpha \in A} R \to M \to 0
+$$
+Tensoring with $R/\mathfrak m$ and using Lemma \ref{lemma-flat-tor-zero}
+we obtain $K/\mathfrak mK = 0$. By Nakayama's Lemma \ref{lemma-NAK}
+we conclude $K = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-artinian-characterize-flat}
+Let $R$ be a local ring with nilpotent maximal ideal. Let $M$ be an $R$-module.
+The following are equivalent
+\begin{enumerate}
+\item $M$ is flat over $R$,
+\item $M$ is a free $R$-module, and
+\item $M$ is a projective $R$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since any projective module is flat (as a direct summand of a free module)
+and every free module is projective, it suffices to prove that a flat module
+is free. Let $M$ be a flat module. Let $A$ be a set and let $x_\alpha \in M$,
+$\alpha \in A$ be elements such that
+$\overline{x_\alpha} \in M/\mathfrak m M$ forms a basis over the residue
+field of $R$. By
+Lemma \ref{lemma-local-artinian-basis-when-flat}
+the $x_\alpha$ are a basis for $M$ over $R$ and we win.
+\end{proof}
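
\noindent
To illustrate Lemma \ref{lemma-local-artinian-characterize-flat},
take a field $k$ and let $R = k[\epsilon]/(\epsilon^2)$ and
$M = R/(\epsilon) \cong k$. Then $M$ is not free, as
$\epsilon M = 0$, and correspondingly $M$ is not flat:
tensoring the inclusion $(\epsilon) \subset R$ with $M$ gives the map
$$
(\epsilon) \otimes_R M \cong k \longrightarrow M, \quad
\epsilon \otimes m \longmapsto \epsilon m = 0
$$
which is not injective.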
+
+\begin{lemma}
+\label{lemma-lift-basis}
+Let $R$ be a ring.
+Let $I \subset R$ be an ideal.
+Let $M$ be an $R$-module.
+Let $A$ be a set and let $x_\alpha \in M$, $\alpha \in A$ be a collection
+of elements of $M$.
+Assume
+\begin{enumerate}
+\item $I$ is nilpotent,
+\item $\{\overline{x}_\alpha\}_{\alpha \in A}$ forms a basis for $M/IM$ over
+$R/I$, and
+\item $\text{Tor}_1^R(R/I, M) = 0$.
+\end{enumerate}
+Then $M$ is free on $\{x_\alpha\}_{\alpha \in A}$ over $R$.
+\end{lemma}
+
+\begin{proof}
+Let $R$, $I$, $M$, $\{x_\alpha\}_{\alpha \in A}$ be as in the lemma
+and satisfy assumptions (1), (2), and (3). By
+Nakayama's Lemma \ref{lemma-NAK}
+the elements $x_\alpha$ generate $M$ over $R$.
+The assumption $\text{Tor}_1^R(R/I, M) = 0$ implies that we have a short
+exact sequence
+$$
+0 \to I \otimes_R M \to M \to M/IM \to 0.
+$$
+Let $\sum f_\alpha x_\alpha = 0$ be a relation in $M$.
+By choice of $x_\alpha$ we see that $f_\alpha \in I$.
+Hence we conclude that $\sum f_\alpha \otimes x_\alpha = 0$ in
+$I \otimes_R M$. The map $I \otimes_R M \to I/I^2 \otimes_{R/I} M/IM$
+and the fact that $\{x_\alpha\}_{\alpha \in A}$ forms a basis
+for $M/IM$ implies that $f_\alpha \in I^2$! Hence we conclude that
+there are no relations among the images of the $x_\alpha$ in
+$M/I^2M$. In other words, we see that $M/I^2M$ is free with basis
+the images of the $x_\alpha$. Using the map
+$I \otimes_R M \to I/I^3 \otimes_{R/I^2} M/I^2M$
+we then conclude that $f_\alpha \in I^3$!
+And so on. Since $I^n = 0$ for some $n$ by assumption (1) we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-prepare-lift-flatness}
+Let $\varphi : R \to R'$ be a ring map.
+Let $I \subset R$ be an ideal.
+Let $M$ be an $R$-module.
+Assume
+\begin{enumerate}
+\item $M/IM$ is flat over $R/I$, and
+\item $R' \otimes_R M$ is flat over $R'$.
+\end{enumerate}
+Set $I_2 = \varphi^{-1}(\varphi(I^2)R')$.
+Then $M/I_2M$ is flat over $R/I_2$.
+\end{lemma}
+
+\begin{proof}
+We may replace $R$, $M$, and $R'$ by $R/I_2$, $M/I_2M$, and
+$R'/\varphi(I)^2R'$. Then $I^2 = 0$ and $\varphi$ is injective. By
+Lemma \ref{lemma-what-does-it-mean}
+and the fact that $I^2 = 0$ it suffices to prove that
+$\text{Tor}^R_1(R/I, M) = K = \Ker(I \otimes_R M \to M)$ is zero.
+Set $M' = M \otimes_R R'$ and $I' = IR'$.
+By assumption the map $I' \otimes_{R'} M' \to M'$ is injective.
+Hence $K$ maps to zero in
+$$
+I' \otimes_{R'} M' = I' \otimes_R M = I' \otimes_{R/I} M/IM.
+$$
+Then $I \to I'$ is an injective map of $R/I$-modules.
+Since $M/IM$ is flat over $R/I$ the map
+$$
+I \otimes_{R/I} M/IM \longrightarrow I' \otimes_{R/I} M/IM
+$$
+is injective. This implies that $K$ is zero in
+$I \otimes_R M = I \otimes_{R/I} M/IM$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-flatness}
+Let $\varphi : R \to R'$ be a ring map.
+Let $I \subset R$ be an ideal.
+Let $M$ be an $R$-module.
+Assume
+\begin{enumerate}
+\item $I$ is nilpotent,
+\item $R \to R'$ is injective,
+\item $M/IM$ is flat over $R/I$, and
+\item $R' \otimes_R M$ is flat over $R'$.
+\end{enumerate}
+Then $M$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+Define inductively $I_1 = I$ and $I_{n + 1} = \varphi^{-1}(\varphi(I_n)^2R')$
+for $n \geq 1$. Note that by
+Lemma \ref{lemma-prepare-lift-flatness}
+we find that $M/I_nM$ is flat over $R/I_n$ for each $n \geq 1$.
It is clear that $\varphi(I_n) \subset \varphi(I)^{2^{n - 1}}R'$. Since
+$I$ is nilpotent we see that $\varphi(I_n) = 0$ for some $n$. As
+$\varphi$ is injective we conclude that $I_n = 0$ for some $n$ and
+we win.
+\end{proof}
+
+\noindent
+Here is the local Artinian version of the local criterion for flatness.
+
+\begin{lemma}
+\label{lemma-artinian-variant-local-criterion-flatness}
+Let $R$ be an Artinian local ring. Let $M$ be an $R$-module.
+Let $I \subset R$ be a proper ideal. The following are
+equivalent
+\begin{enumerate}
+\item $M$ is flat over $R$, and
+\item $M/IM$ is flat over $R/I$ and $\text{Tor}_1^R(R/I, M) = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) follows immediately from the
+definitions. Assume $M/IM$ is flat over $R/I$ and
+$\text{Tor}_1^R(R/I, M) = 0$. By
+Lemma \ref{lemma-local-artinian-characterize-flat}
+this implies that $M/IM$ is free over $R/I$. Pick a set $A$
+and elements $x_\alpha \in M$ such that the images in $M/IM$ form
+a basis. By
+Lemma \ref{lemma-lift-basis}
+we conclude that $M$ is free and in particular flat.
+\end{proof}
+
+\noindent
It turns out that flatness descends along an injective homomorphism
whose source is an Artinian ring.
+
+\begin{lemma}
+\label{lemma-descent-flatness-injective-map-artinian-rings}
+Let $R \to S$ be a ring map. Let $M$ be an $R$-module.
+Assume
+\begin{enumerate}
+\item $R$ is Artinian
+\item $R \to S$ is injective, and
+\item $M \otimes_R S$ is a flat $S$-module.
+\end{enumerate}
+Then $M$ is a flat $R$-module.
+\end{lemma}
+
+\begin{proof}
+First proof: Let $I \subset R$ be the Jacobson radical of $R$.
+Then $I$ is nilpotent and $M/IM$ is flat over $R/I$ as $R/I$
+is a product of fields, see
+Section \ref{section-artinian}.
+Hence $M$ is flat by an application of
+Lemma \ref{lemma-lift-flatness}.
+
+\medskip\noindent
+Second proof: By
+Lemma \ref{lemma-artinian-finite-length}
+we may write $R = \prod R_i$ as a finite product of local Artinian
+rings. This induces similar product decompositions for both $R$ and $S$.
+Hence we reduce to the case where $R$ is local Artinian (details omitted).
+
+\medskip\noindent
+Assume that $R \to S$, $M$ are as in the lemma satisfying (1), (2), and (3)
+and in addition that $R$ is local with maximal ideal $\mathfrak m$.
Let $A$ be a set and $x_\alpha \in M$, $\alpha \in A$ be elements such that
+$\overline{x}_\alpha$ forms a basis for $M/\mathfrak mM$
+over $R/\mathfrak m$. By
+Nakayama's Lemma \ref{lemma-NAK}
+we see that the elements $x_\alpha$ generate $M$ as an $R$-module.
+Set $N = S \otimes_R M$ and $I = \mathfrak mS$.
+Then $\{1 \otimes x_\alpha\}_{\alpha \in A}$ is a family of elements
+of $N$ which form a basis for $N/IN$. Moreover, since $N$ is flat over
+$S$ we have $\text{Tor}_1^S(S/I, N) = 0$. Thus we conclude from
+Lemma \ref{lemma-lift-basis}
+that $N$ is free on $\{1 \otimes x_\alpha\}_{\alpha \in A}$.
The injectivity of $R \to S$ then guarantees that there cannot be a
nontrivial relation among the $x_\alpha$ with coefficients in $R$.
Hence $M$ is free on $\{x_\alpha\}_{\alpha \in A}$, in particular flat.
+\end{proof}
+
+\noindent
+Please compare the lemma below to
+Lemma \ref{lemma-criterion-flatness-fibre-Noetherian}
+(the case of Noetherian local rings),
+Lemma \ref{lemma-criterion-flatness-fibre}
+(the case of finitely presented algebras), and
+Lemma \ref{lemma-criterion-flatness-fibre-locally-nilpotent}
+(the case of locally nilpotent ideals).
+
\begin{lemma}[Crit\`ere de platitude par fibres; Nilpotent case]
+\label{lemma-criterion-flatness-fibre-nilpotent}
+Let
+$$
+\xymatrix{
+S \ar[rr] & & S' \\
+& R \ar[lu] \ar[ru]
+}
+$$
+be a commutative diagram in the category of rings.
+Let $I \subset R$ be a nilpotent ideal and $M$ an $S'$-module. Assume
+\begin{enumerate}
+\item The module $M/IM$ is a flat $S/IS$-module.
+\item The module $M$ is a flat $R$-module.
+\end{enumerate}
+Then $M$ is a flat $S$-module and $S_{\mathfrak q}$ is flat over $R$
for every prime $\mathfrak q \subset S$ such that $M \otimes_S \kappa(\mathfrak q)$
+is nonzero.
+\end{lemma}
+
+\begin{proof}
+As $M$ is flat over $R$ tensoring with the short exact
+sequence $0 \to I \to R \to R/I \to 0$ gives a short exact sequence
+$$
+0 \to I \otimes_R M \to M \to M/IM \to 0.
+$$
+Note that $I \otimes_R M \to IS \otimes_S M$ is surjective. Combined with
+the above this means both maps in
+$$
+I \otimes_R M \to IS \otimes_S M \to M
+$$
+are injective. Hence $\text{Tor}_1^S(IS, M) = 0$ (see
+Remark \ref{remark-Tor-ring-mod-ideal})
+and we conclude that $M$ is a flat $S$-module by
+Lemma \ref{lemma-what-does-it-mean}.
+To finish we need to show that $S_{\mathfrak q}$ is flat over
+$R$ for any prime $\mathfrak q \subset S$ such that
+$M \otimes_S \kappa(\mathfrak q)$ is nonzero. This follows from
Lemmas \ref{lemma-ff} and \ref{lemma-flat-permanence}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{What makes a complex exact?}
+\label{section-complex-exact}
+
+\noindent
+Some of this material can be found in the paper \cite{WhatExact}
+by Buchsbaum and Eisenbud.
+
+\begin{situation}
+\label{situation-complex}
+Here $R$ is a ring, and we have a complex
+$$
+0
+\to
+R^{n_e}
+\xrightarrow{\varphi_e}
+R^{n_{e-1}}
+\xrightarrow{\varphi_{e-1}}
+\ldots
+\xrightarrow{\varphi_{i + 1}}
+R^{n_i}
+\xrightarrow{\varphi_i}
+R^{n_{i-1}}
+\xrightarrow{\varphi_{i-1}}
+\ldots
+\xrightarrow{\varphi_1}
+R^{n_0}
+$$
+In other words we require $\varphi_i \circ \varphi_{i + 1} = 0$
+for $i = 1, \ldots, e - 1$.
+\end{situation}
+
+\begin{lemma}
+\label{lemma-add-trivial-complex}
+Suppose $R$ is a ring. Let
+$$
+\ldots
+\xrightarrow{\varphi_{i + 1}}
+R^{n_i}
+\xrightarrow{\varphi_i}
+R^{n_{i-1}}
+\xrightarrow{\varphi_{i-1}}
+\ldots
+$$
+be a complex of finite free $R$-modules. Suppose that for some $i$
+some matrix coefficient of the map $\varphi_i$ is invertible.
+Then the displayed complex is isomorphic to the direct sum of a complex
+$$
+\ldots \to
+R^{n_{i + 2}} \xrightarrow{\varphi_{i + 2}}
+R^{n_{i + 1}} \to
+R^{n_i - 1} \to
+R^{n_{i - 1} - 1} \to
+R^{n_{i - 2}} \xrightarrow{\varphi_{i - 2}}
+R^{n_{i - 3}} \to
+\ldots
+$$
+and the complex $\ldots \to 0 \to R \to R \to 0 \to \ldots$
+where the map $R \to R$ is the identity map.
+\end{lemma}
+
+\begin{proof}
+The assumption means, after a change of basis of
+$R^{n_i}$ and $R^{n_{i-1}}$ that the first basis
+vector of $R^{n_i}$ is mapped via $\varphi_i$ to the first basis
+vector of $R^{n_{i-1}}$. Let $e_j$ denote the
+$j$th basis vector of $R^{n_i}$ and $f_k$ the $k$th
+basis vector of $R^{n_{i-1}}$. Write $\varphi_i(e_j)
+= \sum a_{jk} f_k$. So $a_{1k} = 0$ unless $k = 1$
+and $a_{11} = 1$. Change basis on $R^{n_i}$ again
+by setting $e'_j = e_j - a_{j1} e_1$ for $j > 1$.
+After this change of coordinates we have $a_{j1} = 0$
+for $j > 1$. Note the image
+of $R^{n_{i + 1}} \to R^{n_i}$ is contained in the
+subspace spanned by $e_j$, $j > 1$. Note also
+that $R^{n_{i-1}} \to R^{n_{i-2}}$ has to annihilate
+$f_1$ since it is in the image. These conditions
+and the shape of the matrix $(a_{jk})$ for $\varphi_i$
+imply the lemma.
+\end{proof}
+
+\noindent
+In Situation \ref{situation-complex} we say a complex of the form
+$$
+0 \to \ldots \to 0 \to R \xrightarrow{1} R \to 0 \to \ldots \to 0
+$$
+or of the form
+$$
+0 \to \ldots \to 0 \to R
+$$
+is {\it trivial}. More precisely, we say
+$0 \to R^{n_e} \to R^{n_{e-1}} \to \ldots \to R^{n_0}$
+is trivial if either there exists an $e \geq i \geq 1$
+with $n_i = n_{i - 1} = 1$, $\varphi_i = \text{id}_R$, and
+$n_j = 0$ for $j \not \in \{i, i - 1\}$ or
+$n_0 = 1$ and $n_i = 0$ for $i > 0$.
The lemma above clearly says that
any finite complex of finite free modules over a local ring is,
up to direct sums with trivial complexes, isomorphic to a complex
all of whose maps have all matrix coefficients in
the maximal ideal.
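
\noindent
For example, over any ring $R$ the complex
$$
0 \to R
\xrightarrow{\left(\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right)}
R^2
\xrightarrow{(0\ 1)}
R
$$
is the direct sum of the trivial complexes
$0 \to R \xrightarrow{1} R \to 0$ and
$0 \to 0 \to R \xrightarrow{1} R$,
placed in degrees $(2, 1)$ and $(1, 0)$ respectively.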
+
+\begin{lemma}
+\label{lemma-exact-depth-zero-local}
+In Situation \ref{situation-complex}. Suppose $R$ is
+a local Noetherian ring with maximal ideal $\mathfrak m$.
+Assume $\mathfrak m \in \text{Ass}(R)$, in other words
+$R$ has depth $0$. Suppose that
+$0 \to R^{n_e} \to R^{n_{e-1}} \to \ldots \to R^{n_0}$
+is exact at $R^{n_e}, \ldots, R^{n_1}$.
+Then the complex is isomorphic to a direct sum of trivial
+complexes.
+\end{lemma}
+
+\begin{proof}
+Pick $x \in R$, $x \not = 0$, with $\mathfrak m x = 0$.
Let $i$ be the biggest index such that $n_i > 0$.
If $i = 0$, then the statement is true. If
$i > 0$, let $f_1$ denote the first basis vector of $R^{n_i}$.
By exactness at $R^{n_i}$ the map $\varphi_i$ is injective,
so $xf_1$ is not mapped to zero. As $\mathfrak m x = 0$,
this is only possible if some matrix
coefficient of the map $R^{n_i} \to R^{n_{i - 1}}$
is not in $\mathfrak m$.
+Lemma \ref{lemma-add-trivial-complex} then allows
+us to decrease $n_e + \ldots + n_1$. Induction finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-artinian-local}
In Situation \ref{situation-complex}. Let $R$ be an Artinian local ring.
+Suppose that $0 \to R^{n_e} \to R^{n_{e-1}} \to \ldots \to R^{n_0}$
+is exact at $R^{n_e}, \ldots, R^{n_1}$. Then the complex is isomorphic
+to a direct sum of trivial complexes.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-exact-depth-zero-local}
+because an Artinian local ring has depth $0$.
+\end{proof}
+
+\noindent
Below we define the rank of a map of finite free modules.
This is just one possible definition of rank; it is the one
that works in this section, though others may be more
convenient in other settings.
+
+\begin{definition}
+\label{definition-rank}
+Let $R$ be a ring. Suppose that $\varphi : R^m \to R^n$ is a map
+of finite free modules.
+\begin{enumerate}
+\item The {\it rank} of $\varphi$ is the maximal $r$ such that
+$\wedge^r \varphi : \wedge^r R^m \to \wedge^r R^n$ is nonzero.
+\item We let $I(\varphi) \subset R$ be the ideal generated by
+the $r \times r$ minors of the matrix of $\varphi$, where $r$
+is the rank as defined above.
+\end{enumerate}
+\end{definition}
+
+\noindent
+The rank of $\varphi : R^m \to R^n$ is $0$
+if and only if $\varphi = 0$ and in this case $I(\varphi) = R$.
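
\noindent
For instance, if $R = k[x, y]$ is a polynomial ring over a field
and $\varphi : R^2 \to R^2$ is given by the diagonal matrix with
entries $x$ and $y$, then $\wedge^2 \varphi$ is multiplication by
$xy \not = 0$. Hence $\varphi$ has rank $2$ and $I(\varphi) = (xy)$.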
+
+\begin{lemma}
+\label{lemma-trivial-case-exact}
+In Situation \ref{situation-complex}, suppose the complex is
+isomorphic to a direct sum of trivial complexes. Then
+we have
+\begin{enumerate}
+\item the maps $\varphi_i$ have rank
+$r_i = n_i - n_{i + 1} + \ldots + (-1)^{e-i-1} n_{e-1} + (-1)^{e-i} n_e$,
+\item for all $i$, $1 \leq i \leq e - 1$ we have
+$\text{rank}(\varphi_{i + 1}) + \text{rank}(\varphi_i) = n_i$,
+\item each $I(\varphi_i) = R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We may assume the complex is the direct sum of trivial
+complexes. Then for each $i$ we can split the standard basis
+elements of $R^{n_i}$ into those that map to a basis element
+of $R^{n_{i-1}}$ and those that are mapped to zero (and these
+are mapped onto by basis elements of $R^{n_{i + 1}}$ if $i > 0$).
+Using descending
+induction starting with $i = e$ it is easy to prove that there
are $r_{i + 1}$ basis elements of $R^{n_i}$ which are mapped
+to zero and $r_i$ which are mapped to basis elements of
+$R^{n_{i-1}}$. From this the result follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-div-x-exact-one-less}
+In Situation \ref{situation-complex}. Suppose $R$ is
+a local ring with maximal ideal $\mathfrak m$.
+Suppose that $0 \to R^{n_e} \to R^{n_{e-1}} \to \ldots \to R^{n_0}$
+is exact at $R^{n_e}, \ldots, R^{n_1}$.
+Let $x \in \mathfrak m$ be a nonzerodivisor. The complex
+$0 \to (R/xR)^{n_e} \to \ldots \to (R/xR)^{n_1}$
+is exact at $(R/xR)^{n_e}, \ldots, (R/xR)^{n_2}$.
+\end{lemma}
+
+\begin{proof}
+Denote $F_\bullet$ the complex with terms $F_i = R^{n_i}$
+and differential given by $\varphi_i$. Then we have a short
+exact sequence of complexes
+$$
+0 \to F_\bullet \xrightarrow{x} F_\bullet \to F_\bullet/xF_\bullet \to 0
+$$
+Applying the snake lemma we get a long exact sequence
+$$
+H_i(F_\bullet) \xrightarrow{x} H_i(F_\bullet) \to
+H_i(F_\bullet/xF_\bullet) \to H_{i - 1}(F_\bullet)
+\xrightarrow{x} H_{i - 1}(F_\bullet)
+$$
+The lemma follows.
+\end{proof}
+
+\begin{lemma}[Acyclicity lemma]
+\label{lemma-acyclic}
+\begin{reference}
+\cite[Lemma 1.8]{Peskine-Szpiro}
+\end{reference}
+Let $R$ be a local Noetherian ring.
+Let $0 \to M_e \to M_{e-1} \to \ldots \to M_0$
+be a complex of finite $R$-modules.
+Assume $\text{depth}(M_i) \geq i$.
+Let $i$ be the largest index such that the complex is
+not exact at $M_i$. If $i > 0$ then
+$\Ker(M_i \to M_{i-1})/\Im(M_{i + 1} \to M_i)$
+has depth $\geq 1$.
+\end{lemma}
+
+\begin{proof}
+Let $H = \Ker(M_i \to M_{i-1})/\Im(M_{i + 1} \to M_i)$ be the
+cohomology group in question.
+We may break the complex into short exact sequences
+$0 \to M_e \to M_{e-1} \to K_{e-2} \to 0$,
+$0 \to K_j \to M_j \to K_{j-1} \to 0$, for $i + 2 \leq j \leq e-2 $,
+$0 \to K_{i + 1} \to M_{i + 1} \to B_i \to 0$,
+$0 \to K_i \to M_i \to M_{i-1}$, and
+$0 \to B_i \to K_i \to H \to 0$.
+We proceed up through these complexes to
+prove the statements about depths, repeatedly using
+Lemma \ref{lemma-depth-in-ses}.
+First of all, since $\text{depth}(M_e) \geq e$,
+and $\text{depth}(M_{e-1}) \geq e-1$ we deduce
+that $\text{depth}(K_{e-2}) \geq e - 1$. At this point the
+sequences $0 \to K_j \to M_j \to K_{j-1} \to 0$ for $i + 2 \leq j \leq e-2 $
+imply similarly that $\text{depth}(K_{j-1}) \geq j$ for
+$i + 2 \leq j \leq e-2$. The sequence
+$0 \to K_{i + 1} \to M_{i + 1} \to B_i \to 0$
+then shows that $\text{depth}(B_i) \geq i + 1$. The sequence
+$0 \to K_i \to M_i \to M_{i-1}$ shows that $\text{depth}(K_i) \geq 1$
+since $M_i$ has depth $\geq i \geq 1$ by assumption.
+The sequence $0 \to B_i \to K_i \to H \to 0$ then
+implies the result.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-what-exact}
+\begin{reference}
+\cite[Corollary 1]{WhatExact}
+\end{reference}
+In Situation \ref{situation-complex}, suppose $R$ is
+a local Noetherian ring. The following are equivalent
+\begin{enumerate}
+\item $0 \to R^{n_e} \to R^{n_{e-1}} \to \ldots \to R^{n_0}$
+is exact at $R^{n_e}, \ldots, R^{n_1}$, and
+\item for all $i$, $1 \leq i \leq e$
+the following two conditions are satisfied:
+\begin{enumerate}
+\item $\text{rank}(\varphi_i) = r_i$ where
+$r_i = n_i - n_{i + 1} + \ldots + (-1)^{e-i-1} n_{e-1} + (-1)^{e-i} n_e$,
+\item $I(\varphi_i) = R$, or $I(\varphi_i)$ contains a
+regular sequence of length $i$.
+\end{enumerate}
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+If for some $i$ some matrix coefficient of $\varphi_i$
+is not in $\mathfrak m$, then we apply Lemma \ref{lemma-add-trivial-complex}.
+It is easy to see that the proposition for a complex and
+for the same complex with a trivial complex added to it
+are equivalent. Thus we may assume that all matrix entries
+of each $\varphi_i$ are elements of the maximal ideal.
+We may also assume that $e \geq 1$.
+
+\medskip\noindent
+Assume the complex is exact at $R^{n_e}, \ldots, R^{n_1}$.
+Let $\mathfrak q \in \text{Ass}(R)$.
+Note that the ring $R_{\mathfrak q}$ has depth $0$
+and that the complex remains exact after localization at $\mathfrak q$.
+We apply Lemmas \ref{lemma-exact-depth-zero-local} and
+\ref{lemma-trivial-case-exact} to the localized complex
+over $R_{\mathfrak q}$. We conclude that
+$\varphi_{i, \mathfrak q}$ has rank $r_i$ for all $i$.
+Since $R \to \bigoplus_{\mathfrak q \in \text{Ass}(R)} R_\mathfrak q$
+is injective (Lemma \ref{lemma-zero-at-ass-zero}), we conclude that
+$\varphi_i$ has rank $r_i$ over $R$ by the definition of rank as given
+in Definition \ref{definition-rank}. Therefore we see that
+$I(\varphi_i)_\mathfrak q = I(\varphi_{i, \mathfrak q})$
+as the ranks do not change. Since all of the ideals
+$I(\varphi_i)_{\mathfrak q}$, $e \geq i \geq 1$
+are equal to $R_{\mathfrak q}$ (by the lemmas referenced above)
+we conclude none of the ideals $I(\varphi_i)$ is contained in $\mathfrak q$.
+This implies that $I(\varphi_e)I(\varphi_{e-1})\ldots I(\varphi_1)$
+is not contained in any of the associated primes
+of $R$. By Lemma \ref{lemma-silly} we may choose
+$x \in I(\varphi_e)I(\varphi_{e - 1})\ldots I(\varphi_1)$,
+$x \not \in \mathfrak q$ for all $\mathfrak q \in \text{Ass}(R)$.
+Observe that $x$ is a nonzerodivisor (Lemma \ref{lemma-ass-zero-divisors}).
+According to Lemma \ref{lemma-div-x-exact-one-less}
+the complex $0 \to (R/xR)^{n_e} \to \ldots \to (R/xR)^{n_1}$ is exact
at $(R/xR)^{n_e}, \ldots, (R/xR)^{n_2}$. By induction
on $e$ all the ideals $I(\varphi_i)/xR$ contain a regular
sequence of length $i - 1$. Since $x \in I(\varphi_i)$ is a
nonzerodivisor, this proves that $I(\varphi_i)$
contains a regular sequence of length $i$.
+
+\medskip\noindent
+Assume (2)(a) and (2)(b) hold. We claim that for any prime
+$\mathfrak p \subset R$ conditions (2)(a) and (2)(b)
+hold for the complex
+$0 \to R_\mathfrak p^{n_e} \to R_\mathfrak p^{n_{e - 1}} \to \ldots \to
+R_\mathfrak p^{n_0}$ with maps $\varphi_{i, \mathfrak p}$
over $R_\mathfrak p$. Namely, since $I(\varphi_i)$ contains a
nonzerodivisor, the image of $I(\varphi_i)$ in $R_\mathfrak p$
+is nonzero. This implies that the rank of $\varphi_{i, \mathfrak p}$
+is the same as the rank of $\varphi_i$: the rank as defined above
+of a matrix $\varphi$ over a ring $R$ can only drop when passing
+to an $R$-algebra $R'$ and this happens if and only if $I(\varphi)$
+maps to zero in $R'$. Thus (2)(a) holds. Having said this
+we know that $I(\varphi_{i, \mathfrak p}) = I(\varphi_i)_\mathfrak p$
+and we see that (2)(b) is preserved under localization as well.
+By induction on the dimension of $R$ we may assume the complex
+is exact when localized at any nonmaximal prime $\mathfrak p$ of $R$.
+Thus $\Ker(\varphi_i)/\Im(\varphi_{i + 1})$ has support contained in
+$\{\mathfrak m\}$ and hence if nonzero has depth $0$.
+As $I(\varphi_i) \subset \mathfrak m$ for all $i$ because
+of what was said in the first paragraph of the proof, we
+see that (2)(b) implies $\text{depth}(R) \geq e$.
+By Lemma \ref{lemma-acyclic} we see
+that the complex is exact at $R^{n_e}, \ldots, R^{n_1}$
+concluding the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-what-exact}
+If in Proposition \ref{proposition-what-exact} the equivalent conditions (1)
+and (2) are satisfied, then there exists a $j$ such that $I(\varphi_i) = R$
+if and only if $i \geq j$. As in the proof of the proposition, it suffices
+to see this when all the matrices have coefficients in the maximal ideal
+$\mathfrak m$ of $R$. In this case we see that $I(\varphi_j) = R$
+if and only if $\varphi_j = 0$. But if $\varphi_j = 0$, then we get
+arbitrarily long exact complexes
+$0 \to R^{n_e} \to R^{n_{e-1}} \to \ldots
+\to R^{n_j} \to 0 \to 0 \to \ldots \to 0$
+and hence by the proposition we see that $I(\varphi_i)$ for $i > j$
has to be $R$ (since otherwise it is a proper ideal of a Noetherian
local ring containing arbitrarily long regular sequences, which is impossible).
+\end{remark}
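
\noindent
A basic example of Proposition \ref{proposition-what-exact} is the
Koszul complex on a regular sequence of length $2$. Let $R$ be a
Noetherian local ring and let $x, y \in \mathfrak m$ be a regular
sequence. Consider
$$
0 \to R
\xrightarrow{\left(\begin{smallmatrix} -y \\ x \end{smallmatrix}\right)}
R^2
\xrightarrow{(x\ y)}
R.
$$
Here $\text{rank}(\varphi_2) = 1 = r_2$ and
$\text{rank}(\varphi_1) = 1 = r_1$, and
$I(\varphi_2) = I(\varphi_1) = (x, y)$ contains the regular sequence
$x, y$ of length $2$, in particular one of length $1$. Hence
condition (2) holds and the proposition shows the complex is exact
at $R^{n_2}$ and $R^{n_1}$, recovering the exactness of the Koszul
complex.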
+
+
+
+
+
+
+
+
+
+
+
+\section{Cohen-Macaulay modules}
+\label{section-CM}
+
+\noindent
+Here we show that Cohen-Macaulay modules have good properties. We postpone
+using Ext groups to establish the connection with duality and so on.
+
+\begin{definition}
+\label{definition-CM}
+Let $R$ be a Noetherian local ring.
+Let $M$ be a finite $R$-module.
+We say $M$ is {\it Cohen-Macaulay}
+if $\dim(\text{Supp}(M)) = \text{depth}(M)$.
+\end{definition}
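
\noindent
For example, the module $M = R = k[[x, y]]/(xy)$ over itself is
Cohen-Macaulay: we have $\dim(\text{Supp}(M)) = 1$ and $x + y$ is a
nonzerodivisor on $M$, so $\text{depth}(M) = 1$. On the other hand,
$M = R = k[[x, y]]/(x^2, xy)$ is not Cohen-Macaulay: here
$\dim(\text{Supp}(M)) = 1$ still, but $\mathfrak m x = 0$ in $R$,
so $\text{depth}(M) = 0$.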
+
+\noindent
+A first goal will be to establish Proposition \ref{proposition-CM-module}.
+We do this by a (perhaps nonstandard) sequence of elementary lemmas
+involving almost none of the earlier results on depth. Let us introduce
+some notation.
+
+\medskip\noindent
+Let $R$ be a local Noetherian ring. Let $M$ be
+a Cohen-Macaulay module, and let $f_1, \ldots, f_d$
+be an $M$-regular sequence with $d = \dim(\text{Supp}(M))$.
+We say that $g \in \mathfrak m$ is {\it good with respect to
+$(M, f_1, \ldots, f_d)$} if for all $i = 0, 1, \ldots, d-1$
+we have $\dim (\text{Supp}(M) \cap V(g, f_1, \ldots, f_i))
+= d - i - 1$. This is equivalent to the condition that
+$\dim(\text{Supp}(M/(f_1, \ldots, f_i)M) \cap V(g)) =
+d - i - 1$ for $i = 0, 1, \ldots, d - 1$.
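
\medskip\noindent
For instance, take $R = M = k[[x, y]]$ with the regular sequence
$f_1 = x$, $f_2 = y$, so $d = 2$. Then $g = x + y$ is good with
respect to $(M, f_1, f_2)$: since $\text{Supp}(M) = \Spec(R)$,
for $i = 0$ we have $\dim(V(g)) = 1 = d - 1$ and for $i = 1$ we
have $\dim(V(g, x)) = \dim(V(x, y)) = 0 = d - 2$.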
+
+\begin{lemma}
+\label{lemma-good-element}
+Notation and assumptions as above. If $g$ is good with respect to
+$(M, f_1, \ldots, f_d)$, then (a) $g$ is a nonzerodivisor on $M$,
+and (b) $M/gM$ is Cohen-Macaulay with maximal regular
+sequence $f_1, \ldots, f_{d - 1}$.
+\end{lemma}
+
+\begin{proof}
+We prove the lemma by induction on $d$.
+If $d = 0$, then $M$ is finite and there is no case
+to which the lemma applies.
+If $d = 1$, then we have to show that $g : M \to M$ is
+injective. The kernel $K$ has support $\{\mathfrak m\}$
because by assumption $\dim(\text{Supp}(M) \cap V(g)) = 0$.
+Hence $K$ has finite length. Hence $f_1 : K \to K$ injective
+implies the length of the image is the length of $K$, and hence
+$f_1 K = K$, which by Nakayama's Lemma \ref{lemma-NAK} implies $K = 0$.
Also, $\dim(\text{Supp}(M/gM)) = 0$ and so $M/gM$ is Cohen-Macaulay
+of depth $0$.
+
+\medskip\noindent
+Assume $d > 1$. Observe that $g$ is good for $(M/f_1M, f_2, \ldots, f_d)$,
+as is easily seen from the definition. By induction, we have that
+(a) $g$ is a nonzerodivisor on $M/f_1M$ and
+(b) $M/(g, f_1)M$ is Cohen-Macaulay with maximal regular sequence
+$f_2, \ldots, f_{d - 1}$. By
+Lemma \ref{lemma-permute-xi}
+we see that $g, f_1$ is an $M$-regular sequence.
+Hence $g$ is a nonzerodivisor on $M$ and
+$f_1, \ldots, f_{d - 1}$ is an $M/gM$-regular sequence.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-one-g}
+Let $R$ be a Noetherian local ring.
+Let $M$ be a Cohen-Macaulay module over $R$.
+Suppose $g \in \mathfrak m$ is such that $\dim(\text{Supp}(M) \cap V(g))
+= \dim(\text{Supp}(M)) - 1$. Then (a) $g$ is a nonzerodivisor on $M$,
+and (b) $M/gM$ is Cohen-Macaulay of depth one less.
+\end{lemma}
+
+\begin{proof}
Choose an $M$-regular sequence $f_1, \ldots, f_d$ with
+$d = \dim(\text{Supp}(M))$. If $g$ is good with respect to
+$(M, f_1, \ldots, f_d)$ we win by Lemma \ref{lemma-good-element}.
+In particular the lemma holds if $d = 1$. (The case $d = 0$ does
not occur.) Assume $d > 1$. Choose an element $h \in R$ such that
(i) $h$ is good with respect to $(M, f_1, \ldots, f_d)$,
and (ii) $\dim(\text{Supp}(M) \cap V(h, g)) = d - 2$.
+To see $h$ exists, let $\{\mathfrak q_j\}$ be the (finite) set of
+minimal primes of the closed sets $\text{Supp}(M)$,
+$\text{Supp}(M)\cap V(f_1, \ldots, f_i)$, $i = 1, \ldots, d - 1$,
+and $\text{Supp}(M) \cap V(g)$. None of these $\mathfrak q_j$
+is equal to $\mathfrak m$ and hence we may find $h \in \mathfrak m$,
$h \not \in \mathfrak q_j$ by Lemma \ref{lemma-silly}. It is clear
that $h$ satisfies (i) and (ii). From
Lemma \ref{lemma-good-element} we conclude that
$M/hM$ is Cohen-Macaulay. By (ii) we see that the pair
+$(M/hM, g)$ satisfies the induction hypothesis. Hence
+$M/(h, g)M$ is Cohen-Macaulay and $g : M/hM \to M/hM$
+is injective. By Lemma \ref{lemma-permute-xi} we see
+that $g : M \to M$ and $h : M/gM \to M/gM$
+are injective. Combined with the fact that $M/(g, h)M$
+is Cohen-Macaulay this finishes the proof.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-CM-module}
+Let $R$ be a Noetherian local ring, with maximal ideal $\mathfrak m$.
+Let $M$ be a Cohen-Macaulay module over $R$ whose support has dimension $d$.
+Suppose that $g_1, \ldots, g_c$ are elements of
+$\mathfrak m$ such that $\dim(\text{Supp}(M/(g_1, \ldots, g_c)M))
+= d - c$. Then $g_1, \ldots, g_c$ is an $M$-regular sequence,
+and can be extended to a maximal $M$-regular sequence.
+\end{proposition}
+
+\begin{proof}
+Let $Z = \text{Supp}(M) \subset \Spec(R)$.
+By Lemma \ref{lemma-one-equation} in the chain
+$Z \supset Z \cap V(g_1) \supset \ldots \supset Z \cap V(g_1, \ldots, g_c)$
each step decreases the dimension by at most $1$. Hence by assumption
each step decreases the dimension by exactly $1$. Thus we
+may successively apply Lemma \ref{lemma-CM-one-g} to the modules
$M/(g_1, \ldots, g_i)M$ and the element $g_{i + 1}$.
+
+\medskip\noindent
+To extend $g_1, \ldots, g_c$ by one element if $c < d$ we simply
+choose an element $g_{c + 1} \in \mathfrak m$ which is not
+in any of the finitely many minimal primes of $Z \cap V(g_1, \ldots, g_c)$,
+using Lemma \ref{lemma-silly}.
+\end{proof}
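
\noindent
For example, let $R = k[[x, y]]$ over a field $k$ and $M = R$; the
sequence $x, y$ is regular, so $M$ is a Cohen-Macaulay module with $d = 2$.
Since $\dim(R/(x^2, y^3)) = 0$, Proposition \ref{proposition-CM-module}
shows that $x^2, y^3$ is a regular sequence, even though these elements
do not generate the maximal ideal.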
+
+\noindent
+Having proved Proposition \ref{proposition-CM-module} we continue the
+development of standard theory.
+
+\begin{lemma}
+\label{lemma-nonzerodivisor-on-CM}
+Let $R$ be a Noetherian local ring with maximal ideal $\mathfrak m$.
+Let $M$ be a finite $R$-module. Let $x \in \mathfrak m$ be a
+nonzerodivisor on $M$. Then $M$ is Cohen-Macaulay if and only
+if $M/xM$ is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-depth-drops-by-one} we have
+$\text{depth}(M/xM) = \text{depth}(M)-1$.
+By Lemma \ref{lemma-one-equation-module}
+we have $\dim(\text{Supp}(M/xM)) = \dim(\text{Supp}(M)) - 1$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-over-quotient}
+Let $R \to S$ be a surjective homomorphism of Noetherian local rings.
+Let $N$ be a finite $S$-module. Then $N$ is Cohen-Macaulay as an $S$-module
+if and only if $N$ is Cohen-Macaulay as an $R$-module.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-ass-minimal-support}
+\begin{reference}
+\cite[Chapter 0, Proposition 16.5.4]{EGA}
+\end{reference}
+Let $R$ be a Noetherian local ring. Let $M$ be a finite Cohen-Macaulay
+$R$-module. If $\mathfrak p \in \text{Ass}(M)$, then
+$\dim(R/\mathfrak p) = \dim(\text{Supp}(M))$ and $\mathfrak p$
+is a minimal prime in the support of $M$.
+In particular, $M$ has no embedded associated primes.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-depth-dim-associated-primes} we have
+$\text{depth}(M) \leq \dim(R/\mathfrak p)$.
+Of course $\dim(R/\mathfrak p) \leq \dim(\text{Supp}(M))$
+as $\mathfrak p \in \text{Supp}(M)$ (Lemma \ref{lemma-ass-support}).
+Thus we have equality in both inequalities as $M$ is Cohen-Macaulay.
+Then $\mathfrak p$ must be minimal in $\text{Supp}(M)$ otherwise
+we would have $\dim(R/\mathfrak p) < \dim(\text{Supp}(M))$.
+Finally, minimal primes in the support of $M$ are equal to
+the minimal elements of $\text{Ass}(M)$
+(Proposition \ref{proposition-minimal-primes-associated-primes})
+hence $M$ has no embedded associated primes
+(Definition \ref{definition-embedded-primes}).
+\end{proof}
+
+\begin{definition}
+\label{definition-maximal-CM}
+Let $R$ be a Noetherian local ring.
+A finite module $M$ over $R$ is called a {\it maximal Cohen-Macaulay}
+module if $\text{depth}(M) = \dim(R)$.
+\end{definition}
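
\noindent
For example, if $R = k[[x, y]]$ over a field $k$, then $R$ itself is a
maximal Cohen-Macaulay module, whereas $R/(x)$ is Cohen-Macaulay with
$\text{depth}(R/(x)) = \dim(\text{Supp}(R/(x))) = 1 < 2 = \dim(R)$,
hence Cohen-Macaulay but not maximal Cohen-Macaulay.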
+
+\noindent
+In other words, a maximal Cohen-Macaulay module over a Noetherian local
+ring is a finite module with the largest possible depth over that ring.
+Equivalently, a maximal Cohen-Macaulay module over a Noetherian local
+ring $R$ is a Cohen-Macaulay module of dimension equal to the dimension
+of the ring. In particular, if $M$ is a Cohen-Macaulay $R$-module with
+$\Spec(R) = \text{Supp}(M)$, then $M$ is maximal Cohen-Macaulay.
+Thus the following two lemmas are on maximal Cohen-Macaulay modules.
+
+\begin{lemma}
+\label{lemma-maximal-chain-maximal-CM}
+\begin{slogan}
+In a local Cohen-Macaulay ring, any maximal chain of prime ideals has
+length equal to the dimension.
+\end{slogan}
+Let $R$ be a Noetherian local ring. Assume there exists a
+Cohen-Macaulay module $M$ with $\Spec(R) = \text{Supp}(M)$.
+Then any maximal chain of prime ideals $\mathfrak p_0 \subset
+\mathfrak p_1 \subset \ldots \subset \mathfrak p_n$
+has length $n = \dim(R)$.
+\end{lemma}
+
+\begin{proof}
+We will prove this by induction on $\dim(R)$. If $\dim(R) = 0$,
+then the statement is clear. Assume $\dim(R) > 0$. Then $n > 0$.
+Choose an element $x \in \mathfrak p_1$, with $x$ not in
+any of the minimal primes of $R$, and in particular
+$x \not \in \mathfrak p_0$. (See Lemma \ref{lemma-silly}.)
+Then $\dim(R/xR) = \dim(R) - 1$ by Lemma \ref{lemma-one-equation}.
+The module $M/xM$ is Cohen-Macaulay over $R/xR$ by
+Proposition \ref{proposition-CM-module} and
+Lemma \ref{lemma-CM-over-quotient}.
+The support of $M/xM$ is $\Spec(R/xR)$ by
+Lemma \ref{lemma-support-quotient}.
+After replacing $x$ by $x^n$ for some $n$,
+we may assume that $\mathfrak p_1$ is an associated prime of $M/xM$, see
+Lemma \ref{lemma-inherit-minimal-primes}.
+By Lemma \ref{lemma-CM-ass-minimal-support}
+we conclude that $\mathfrak p_1/(x)$ is a minimal prime of $R/xR$.
+It follows that the chain
+$\mathfrak p_1/(x) \subset \ldots \subset \mathfrak p_n/(x)$
+is a maximal chain of primes in $R/xR$.
+By induction we find that this chain
+has length $\dim(R/xR) = \dim(R) - 1$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dim-formula-maximal-CM}
+Suppose $R$ is a Noetherian local ring. Assume there exists a
+Cohen-Macaulay module $M$ with $\Spec(R) = \text{Supp}(M)$. Then for
+a prime $\mathfrak p \subset R$ we have
+$$
+\dim(R) = \dim(R_{\mathfrak p}) + \dim(R/\mathfrak p).
+$$
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-maximal-chain-maximal-CM}.
+\end{proof}
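
\noindent
For example, take $R = k[[x, y]]$, which carries the Cohen-Macaulay
module $M = R$, and $\mathfrak p = (x)$. Then $\dim(R_\mathfrak p) = 1$
and $\dim(R/\mathfrak p) = \dim(k[[y]]) = 1$, in accordance with
$\dim(R) = 2$.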
+
+\begin{lemma}
+\label{lemma-localize-CM-module}
+Suppose $R$ is a Noetherian local ring. Let $M$ be a Cohen-Macaulay
+module over $R$. For any prime $\mathfrak p \subset R$ the
+module $M_{\mathfrak p}$ is Cohen-Macaulay over $R_\mathfrak p$.
+\end{lemma}
+
+\begin{proof}
+We may and do assume $\mathfrak p \not = \mathfrak m$ and $M$ not zero.
+Choose a maximal chain of primes $\mathfrak p = \mathfrak p_c \subset
+\mathfrak p_{c - 1} \subset \ldots \subset \mathfrak p_1 \subset \mathfrak m$.
+If we prove the result for $M_{\mathfrak p_1}$ over $R_{\mathfrak p_1}$,
+then the lemma will follow by induction on $c$. Thus we may assume that
+there is no prime strictly between $\mathfrak p$ and $\mathfrak m$.
+Note that
+$\dim(\text{Supp}(M_\mathfrak p)) \leq \dim(\text{Supp}(M)) - 1$
+because any chain of primes in the support of $M_\mathfrak p$
+can be extended by one more prime (namely $\mathfrak m$) in the
+support of $M$. On the other hand, we have
+$\text{depth}(M_\mathfrak p) \geq \text{depth}(M) - \dim(R/\mathfrak p) =
+\text{depth}(M) - 1$ by Lemma \ref{lemma-depth-localization} and our
+choice of $\mathfrak p$.
+Thus $\text{depth}(M_\mathfrak p) \geq \dim(\text{Supp}(M_\mathfrak p))$
+as desired (the other inequality is Lemma \ref{lemma-bound-depth}).
+\end{proof}
+
+\begin{definition}
+\label{definition-module-CM}
+Let $R$ be a Noetherian ring. Let $M$ be a finite $R$-module.
+We say $M$ is {\it Cohen-Macaulay} if $M_\mathfrak p$ is a Cohen-Macaulay
+module over $R_\mathfrak p$ for all primes $\mathfrak p$ of $R$.
+\end{definition}
+
+\noindent
+By Lemma \ref{lemma-localize-CM-module} it suffices to check this
+in the maximal ideals of $R$.
+
+\begin{lemma}
+\label{lemma-maximal-CM-polynomial-algebra}
+Let $R$ be a Noetherian ring. Let $M$ be a Cohen-Macaulay module
+over $R$. Then $M \otimes_R R[x_1, \ldots, x_n]$ is a
+Cohen-Macaulay module over $R[x_1, \ldots, x_n]$.
+\end{lemma}
+
+\begin{proof}
+By induction on the number of variables it suffices to prove this for
+$M[x] = M \otimes_R R[x]$ over $R[x]$. Let $\mathfrak m \subset R[x]$
+be a maximal ideal, and let $\mathfrak p = R \cap \mathfrak m$.
Let $f_1, \ldots, f_d$ be an $M_\mathfrak p$-regular sequence in the maximal
+ideal of $R_{\mathfrak p}$ of length $d = \dim(\text{Supp}(M_{\mathfrak p}))$.
+Note that since $R[x]$ is flat over $R$ the localization
+$R[x]_{\mathfrak m}$ is flat over $R_{\mathfrak p}$.
+Hence, by Lemma \ref{lemma-flat-increases-depth}, the sequence
$f_1, \ldots, f_d$ is an $M[x]_{\mathfrak m}$-regular sequence of length $d$
+in $R[x]_{\mathfrak m}$. The quotient
+$$
+Q = M[x]_{\mathfrak m}/(f_1, \ldots, f_d)M[x]_{\mathfrak m} =
+M_{\mathfrak p}/(f_1, \ldots, f_d)M_{\mathfrak p}
+\otimes_{R_\mathfrak p} R[x]_{\mathfrak m}
+$$
+has support equal to the primes lying over $\mathfrak p$
+because $R_\mathfrak p \to R[x]_\mathfrak m$ is flat and
+the support of $M_{\mathfrak p}/(f_1, \ldots, f_d)M_{\mathfrak p}$
+is equal to $\{\mathfrak p\}$ (details omitted; hint: follows from
+Lemmas \ref{lemma-annihilator-flat-base-change} and
\ref{lemma-support-closed}). Hence $\dim(\text{Supp}(Q)) = 1$.
+To finish the proof it suffices to find an $f \in \mathfrak m$
+which is a nonzerodivisor on $Q$. Since $\mathfrak m$ is
+a maximal ideal, the field extension
+$\kappa(\mathfrak m)/\kappa(\mathfrak p)$
+is finite (Theorem \ref{theorem-nullstellensatz}).
+Hence we can find $f \in \mathfrak m$ which
+viewed as a polynomial in $x$ has leading
+coefficient not in $\mathfrak p$. Such an $f$ acts as
+a nonzerodivisor on
+$$
+M_{\mathfrak p}/(f_1, \ldots, f_d)M_{\mathfrak p} \otimes_R R[x] =
+\bigoplus\nolimits_{n \geq 0}
+M_{\mathfrak p}/(f_1, \ldots, f_d)M_{\mathfrak p} \cdot x^n
+$$
+and hence acts as a nonzerodivisor on $Q$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Cohen-Macaulay rings}
+\label{section-CM-ring}
+
+\noindent
+Most of the results of this section are special cases of the results
+in Section \ref{section-CM}.
+
+\begin{definition}
+\label{definition-local-ring-CM}
+A Noetherian local ring $R$ is called {\it Cohen-Macaulay}
+if it is Cohen-Macaulay as a module over itself.
+\end{definition}
+
+\noindent
Note that this is equivalent to requiring the existence
of an $R$-regular sequence $x_1, \ldots, x_d$ in the maximal
ideal such that $R/(x_1, \ldots, x_d)$ has dimension $0$.
+We will usually just say ``regular sequence'' and not
+``$R$-regular sequence''.
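
\noindent
For a non-example, consider $R = \left(k[x, y]/(x^2, xy)\right)_{(x, y)}$.
The image of $x$ is annihilated by the maximal ideal $\mathfrak m = (x, y)$,
so every element of $\mathfrak m$ is a zerodivisor and
$\text{depth}(R) = 0$, while the chain of primes $(x) \subset (x, y)$
shows $\dim(R) = 1$. Hence $R$ is not Cohen-Macaulay.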
+
+\begin{lemma}
+\label{lemma-reformulate-CM}
+\begin{slogan}
+Regular sequences in Cohen-Macaulay local rings are characterized
+by cutting out something of the correct dimension.
+\end{slogan}
+Let $R$ be a Noetherian local Cohen-Macaulay ring with maximal
+ideal $\mathfrak m $. Let $x_1, \ldots, x_c \in \mathfrak m$ be
+elements. Then
+$$
+x_1, \ldots, x_c
+\text{ is a regular sequence }
+\Leftrightarrow
+\dim(R/(x_1, \ldots, x_c)) = \dim(R) - c
+$$
If so, then
$x_1, \ldots, x_c$ can be extended to
+a regular sequence of length $\dim(R)$ and each quotient
+$R/(x_1, \ldots, x_i)$ is a Cohen-Macaulay ring of dimension
+$\dim(R) - i$.
+\end{lemma}
+
+\begin{proof}
+Special case of Proposition \ref{proposition-CM-module}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-maximal-chain-CM}
Let $R$ be a Noetherian local ring.
Suppose $R$ is Cohen-Macaulay of dimension $d$.
Any maximal chain of prime ideals $\mathfrak p_0 \subset
\mathfrak p_1 \subset \ldots \subset \mathfrak p_n$
+has length $n = d$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-maximal-chain-maximal-CM}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-dim-formula}
+Suppose $R$ is a Noetherian local Cohen-Macaulay ring of dimension $d$.
+For any prime $\mathfrak p \subset R$ we have
+$$
+\dim(R) = \dim(R_{\mathfrak p}) + \dim(R/\mathfrak p).
+$$
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-maximal-chain-CM}.
+(Also, this is a special case of Lemma \ref{lemma-dim-formula-maximal-CM}.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-CM}
+Suppose $R$ is a Cohen-Macaulay local ring.
+For any prime $\mathfrak p \subset R$ the
+ring $R_{\mathfrak p}$ is Cohen-Macaulay as well.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-localize-CM-module}.
+\end{proof}
+
+\begin{definition}
+\label{definition-ring-CM}
+A Noetherian ring $R$ is called {\it Cohen-Macaulay} if all
+its local rings are Cohen-Macaulay.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-CM-polynomial-algebra}
+Suppose $R$ is a Noetherian Cohen-Macaulay ring.
+Any polynomial algebra over $R$ is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-maximal-CM-polynomial-algebra}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-shift}
+Let $R$ be a Noetherian local Cohen-Macaulay ring of dimension $d$.
+Let $0 \to K \to R^{\oplus n} \to M \to 0$ be an exact sequence of
+$R$-modules. Then either $M = 0$, or $\text{depth}(K) > \text{depth}(M)$, or
+$\text{depth}(K) = \text{depth}(M) = d$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-depth-in-ses}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-mcm-resolution}
+Let $R$ be a local Noetherian Cohen-Macaulay ring of dimension $d$.
+Let $M$ be a finite $R$-module of depth $e$.
+There exists an exact complex
+$$
+0 \to K \to F_{d-e-1} \to \ldots \to F_0 \to M \to 0
+$$
+with each $F_i$ finite free and $K$ maximal Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+Immediate from the definition and Lemma \ref{lemma-dimension-shift}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-find-sequence-image-regular}
+Let $\varphi : A \to B$ be a map of local rings.
+Assume that $B$ is Noetherian and Cohen-Macaulay and that
+$\mathfrak m_B = \sqrt{\varphi(\mathfrak m_A) B}$. Then there exists
+a sequence of elements $f_1, \ldots, f_{\dim(B)}$ in $A$
+such that $\varphi(f_1), \ldots, \varphi(f_{\dim(B)})$ is a
+regular sequence in $B$.
+\end{lemma}
+
+\begin{proof}
+By induction on $\dim(B)$ it suffices to prove: If $\dim(B) \geq 1$, then we
+can find an element $f$ of $A$ which maps to a nonzerodivisor in $B$.
+By
+Lemma \ref{lemma-reformulate-CM}
+it suffices to find $f \in A$ whose image in $B$ is not contained in any
+of the finitely many minimal primes $\mathfrak q_1, \ldots, \mathfrak q_r$
+of $B$. By the assumption that
+$\mathfrak m_B = \sqrt{\varphi(\mathfrak m_A) B}$
+we see that $\mathfrak m_A \not \subset \varphi^{-1}(\mathfrak q_i)$.
+Hence we can find $f$ by
+Lemma \ref{lemma-silly}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Catenary rings}
+\label{section-catenary}
+
+\noindent
+Compare with Topology, Section \ref{topology-section-catenary-spaces}.
+
+\begin{definition}
+\label{definition-catenary}
+A ring $R$ is said to be {\it catenary} if for any pair of prime ideals
+$\mathfrak p \subset \mathfrak q$, there exists an integer bounding the
+lengths of all finite chains of prime ideals
+$\mathfrak p = \mathfrak p_0 \subset \mathfrak p_1 \subset \ldots \subset
+\mathfrak p_e = \mathfrak q$ and all maximal such chains have the same length.
+\end{definition}
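
\noindent
For example, any ring of dimension $\leq 1$ is catenary: given primes
$\mathfrak p \subset \mathfrak q$ with $\mathfrak p \not = \mathfrak q$,
every chain of primes from $\mathfrak p$ to $\mathfrak q$ has length
$\leq 1$, so the only maximal such chain is
$\mathfrak p \subset \mathfrak q$ itself.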
+
+\begin{lemma}
+\label{lemma-catenary}
+A ring $R$ is catenary if and only if the topological space
+$\Spec(R)$ is catenary (see
+Topology, Definition \ref{topology-definition-catenary}).
+\end{lemma}
+
+\begin{proof}
+Immediate from the definition and the characterization of
+irreducible closed subsets in Lemma \ref{lemma-irreducible}.
+\end{proof}
+
+\noindent
+In general it is not the case that a finitely generated
+$R$-algebra is catenary if $R$ is. Thus we make the following
+definition.
+
+\begin{definition}
+\label{definition-universally-catenary}
+A Noetherian ring $R$ is said to be {\it universally catenary}
if every $R$-algebra of finite type is catenary.
+\end{definition}
+
+\noindent
+We restrict to Noetherian rings as it is not clear
+this definition is the right one for non-Noetherian rings.
By Lemma \ref{lemma-quotient-catenary}, to check that a Noetherian
ring $R$ is universally catenary it suffices to check that
each polynomial algebra $R[x_1, \ldots, x_n]$ is catenary.
+
+\begin{lemma}
+\label{lemma-localization-catenary}
+Any localization of a catenary ring is catenary.
+Any localization of a Noetherian universally catenary
+ring is universally catenary.
+\end{lemma}
+
+\begin{proof}
+Let $A$ be a ring and let $S \subset A$ be a multiplicative subset.
+The description of $\Spec(S^{-1}A)$ in Lemma \ref{lemma-spec-localization}
+shows that if $A$ is catenary, then so is $S^{-1}A$. If $S^{-1}A \to C$
+is of finite type, then $C = S^{-1}B$ for some finite type ring map
+$A \to B$. Hence if $A$ is Noetherian and universally catenary, then
+$B$ is catenary and we see that $C$ is catenary too. This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-catenary}
+Let $A$ be a Noetherian universally catenary ring.
+Any $A$-algebra essentially of finite type over $A$
+is universally catenary.
+\end{lemma}
+
+\begin{proof}
+If $B$ is a finite type $A$-algebra, then $B$ is Noetherian
+by Lemma \ref{lemma-Noetherian-permanence}. Any finite type
+$B$-algebra is a finite type $A$-algebra and hence catenary
+by our assumption that $A$ is universally catenary. Thus $B$
+is universally catenary. Any localization of $B$ is universally
+catenary by Lemma \ref{lemma-localization-catenary} and this
+finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-catenary-check-local}
+Let $R$ be a ring. The following are equivalent
+\begin{enumerate}
+\item $R$ is catenary,
+\item $R_\mathfrak p$ is catenary for all prime ideals $\mathfrak p$,
+\item $R_\mathfrak m$ is catenary for all maximal ideals $\mathfrak m$.
+\end{enumerate}
+Assume $R$ is Noetherian. The following are equivalent
+\begin{enumerate}
+\item $R$ is universally catenary,
+\item $R_\mathfrak p$ is universally catenary for all prime ideals
+$\mathfrak p$,
+\item $R_\mathfrak m$ is universally catenary for all maximal ideals
+$\mathfrak m$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) follows from
+Lemma \ref{lemma-localization-catenary} in both cases.
+The implication (2) $\Rightarrow$ (3) is immediate in both cases.
+Assume $R_\mathfrak m$ is catenary for all maximal ideals
+$\mathfrak m$ of $R$. If $\mathfrak p \subset \mathfrak q$ are primes
+in $R$, then choose a maximal ideal $\mathfrak q \subset \mathfrak m$.
Chains of prime ideals between $\mathfrak p$ and $\mathfrak q$ are
in 1-to-1 correspondence with chains of prime ideals between
$\mathfrak p R_\mathfrak m$ and $\mathfrak q R_\mathfrak m$, hence we
+see $R$ is catenary. Assume $R$ is Noetherian and $R_\mathfrak m$ is
+universally catenary for all maximal ideals $\mathfrak m$ of $R$.
+Let $R \to S$ be a finite type ring map. Let $\mathfrak q$ be a prime
+ideal of $S$ lying over the prime $\mathfrak p \subset R$.
+Choose a maximal ideal $\mathfrak p \subset \mathfrak m$ in $R$.
+Then $R_\mathfrak p$ is a localization of $R_\mathfrak m$ hence
+universally catenary by Lemma \ref{lemma-localization-catenary}.
+Then $S_\mathfrak p$ is catenary as a finite type ring over $R_\mathfrak p$.
+Hence $S_\mathfrak q$ is catenary as a localization. Thus $S$ is catenary
+by the first case treated above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quotient-catenary}
+Any quotient of a catenary ring is catenary.
+Any quotient of a Noetherian universally catenary ring is
+universally catenary.
+\end{lemma}
+
+\begin{proof}
+Let $A$ be a ring and let $I \subset A$ be an ideal.
+The description of $\Spec(A/I)$ in Lemma \ref{lemma-spec-closed}
+shows that if $A$ is catenary, then so is $A/I$.
+The second statement is a special case of
+Lemma \ref{lemma-universally-catenary}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-catenary-check-irreducible}
+Let $R$ be a Noetherian ring.
+\begin{enumerate}
+\item $R$ is catenary if and only if $R/\mathfrak p$ is catenary
+for every minimal prime $\mathfrak p$.
+\item $R$ is universally catenary if and only if $R/\mathfrak p$ is
+universally catenary for every minimal prime $\mathfrak p$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $\mathfrak a \subset \mathfrak b$ is an inclusion of primes of $R$,
+then we can find a minimal prime $\mathfrak p \subset \mathfrak a$
+and the first assertion is clear. We omit the proof of the second.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-ring-catenary}
+A Noetherian Cohen-Macaulay ring is universally catenary.
+More generally, if $R$ is a Noetherian ring and $M$ is
+a Cohen-Macaulay $R$-module with $\text{Supp}(M) = \Spec(R)$,
+then $R$ is universally catenary.
+\end{lemma}
+
+\begin{proof}
+Since a polynomial algebra over $R$ is Cohen-Macaulay,
+by Lemma \ref{lemma-CM-polynomial-algebra},
+it suffices to show that a Cohen-Macaulay ring is
+catenary.
+Let $R$ be Cohen-Macaulay and $\mathfrak p \subset \mathfrak q$
+primes of $R$. By definition $R_{\mathfrak q}$ and
+$R_{\mathfrak p}$ are Cohen-Macaulay.
+Take a maximal chain of primes
+$\mathfrak p = \mathfrak p_0 \subset \mathfrak p_1 \subset
+\ldots \subset \mathfrak p_n = \mathfrak q$.
+Next choose a maximal chain of primes
+$\mathfrak q_0 \subset \mathfrak q_1 \subset \ldots \subset
+\mathfrak q_m = \mathfrak p$. By
+Lemma \ref{lemma-maximal-chain-CM}
+we have $n + m = \dim(R_{\mathfrak q})$. And we have
+$m = \dim(R_{\mathfrak p})$ by the same lemma.
+Hence $n = \dim(R_{\mathfrak q}) - \dim(R_{\mathfrak p})$
+is independent of choices.
+
+\medskip\noindent
+To prove the more general statement, argue exactly as above but
+using Lemmas \ref{lemma-maximal-CM-polynomial-algebra}
+and \ref{lemma-maximal-chain-maximal-CM}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-catenary-Noetherian-local}
+Let $(A, \mathfrak m)$ be a Noetherian local ring. The following are equivalent
+\begin{enumerate}
+\item $A$ is catenary, and
+\item $\mathfrak p \mapsto \dim(A/\mathfrak p)$ is a dimension function
+on $\Spec(A)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $A$ is catenary, then $\Spec(A)$ has a dimension function $\delta$ by
+Topology, Lemma \ref{topology-lemma-locally-dimension-function}
+(and Lemma \ref{lemma-catenary}). We may assume $\delta(\mathfrak m) = 0$.
+Then we see that
+$$
+\delta(\mathfrak p) = \text{codim}(V(\mathfrak m), V(\mathfrak p)) =
+\dim(A/\mathfrak p)
+$$
+by Topology, Lemma \ref{topology-lemma-dimension-function-catenary}.
+In this way we see that (1) implies (2). The reverse implication
+follows from
+Topology, Lemma \ref{topology-lemma-dimension-function-catenary}
+as well.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Regular local rings}
+\label{section-regular}
+
+\noindent
+It is not that easy to show that all prime localizations of a regular local
+ring are regular. In fact, quite a bit of the material developed so far is
+geared towards a proof of this fact. See
+Proposition \ref{proposition-finite-gl-dim-regular}, and
+trace back the references.
+
+\begin{lemma}
+\label{lemma-regular-graded}
+Let $(R, \mathfrak m, \kappa)$ be a regular local ring of dimension $d$.
+The graded ring $\bigoplus \mathfrak m^n / \mathfrak m^{n + 1}$
+is isomorphic to the graded polynomial algebra
+$\kappa[X_1, \ldots, X_d]$.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, \ldots, x_d$ be a minimal set of generators
+for the maximal ideal $\mathfrak m$, see
+Definition \ref{definition-regular-local}.
+There is a surjection $\kappa[X_1, \ldots, X_d]
+\to \bigoplus \mathfrak m^n/\mathfrak m^{n + 1}$,
+which maps $X_i$ to the class of $x_i$ in $\mathfrak m/\mathfrak m^2$.
+Since $d(R) = d$ by
+Proposition \ref{proposition-dimension}
+we know that the numerical
+polynomial $n \mapsto \dim_\kappa \mathfrak m^n/\mathfrak m^{n + 1}$
+has degree $d - 1$. By Lemma \ref{lemma-quotient-smaller-d} we
+conclude that the surjection $\kappa[X_1, \ldots, X_d]
+\to \bigoplus \mathfrak m^n/\mathfrak m^{n + 1}$ is an isomorphism.
+\end{proof}
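
\noindent
For example, if $R = k[[x_1, \ldots, x_d]]$ with maximal ideal
$\mathfrak m = (x_1, \ldots, x_d)$, then the lemma recovers the familiar
identification of $\bigoplus \mathfrak m^n/\mathfrak m^{n + 1}$ with the
polynomial ring $k[X_1, \ldots, X_d]$, where $X_i$ corresponds to the
class of $x_i$ in $\mathfrak m/\mathfrak m^2$.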
+
+\begin{lemma}
+\label{lemma-regular-domain}
+Any regular local ring is a domain.
+\end{lemma}
+
+\begin{proof}
+We will use that $\bigcap \mathfrak m^n = 0$
+by Lemma \ref{lemma-intersect-powers-ideal-module-zero}.
+Let $f, g \in R$ such that $fg = 0$.
+Suppose that $f \in \mathfrak m^a$ and
+$g \in \mathfrak m^b$, with $a, b$ maximal.
+Since $fg = 0 \in \mathfrak m^{a + b + 1}$
+we see from the result of Lemma \ref{lemma-regular-graded}
+that either $f \in \mathfrak m^{a + 1}$ or
+$g \in \mathfrak m^{b + 1}$. Contradiction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-ring-CM}
+Let $R$ be a regular local ring and let
+$x_1, \ldots, x_d$ be a minimal set of generators
+for the maximal ideal $\mathfrak m$. Then
+$x_1, \ldots, x_d$ is a regular sequence, and
+each $R/(x_1, \ldots, x_c)$ is a regular local ring
+of dimension $d - c$. In particular $R$ is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+Note that $R/x_1R$ is a Noetherian local ring of dimension $\geq d - 1$
+by Lemma \ref{lemma-one-equation} with $x_2, \ldots, x_d$
+generating the maximal ideal. Hence it is a regular local ring by definition.
Since $R$ is a domain by Lemma \ref{lemma-regular-domain},
$x_1$ is a nonzerodivisor. The lemma follows by induction on $d$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-quotient-regular}
+Let $R$ be a regular local ring. Let $I \subset R$ be an ideal
+such that $R/I$ is a regular local ring as well. Then
+there exists a minimal set of generators $x_1, \ldots, x_d$
+for the maximal ideal $\mathfrak m$ of $R$ such that
+$I = (x_1, \ldots, x_c)$ for some $0 \leq c \leq d$.
+\end{lemma}
+
+\begin{proof}
+Say $\dim(R) = d$ and $\dim(R/I) = d - c$.
+Denote $\overline{\mathfrak m} = \mathfrak m/I$ the
+maximal ideal of $R/I$. Let $\kappa = R/\mathfrak m$. We have
+$$
+\dim_\kappa((I + \mathfrak m^2)/\mathfrak m^2) =
+\dim_\kappa(\mathfrak m/\mathfrak m^2)
+- \dim(\overline{\mathfrak m}/\overline{\mathfrak m}^2) = d - (d - c) = c
+$$
+by the definition of a regular local ring. Hence we can choose
+$x_1, \ldots, x_c \in I$ whose images in $\mathfrak m/\mathfrak m^2$
+are linearly independent and supplement with
+$x_{c + 1}, \ldots, x_d$ to get a minimal system of generators of
+$\mathfrak m$. The induced map $R/(x_1, \ldots, x_c) \to R/I$ is a
+surjection between regular local rings of the same dimension
+(Lemma \ref{lemma-regular-ring-CM}). It follows that
+the kernel is zero, i.e., $I = (x_1, \ldots, x_c)$. Namely, if not
+then we would have $\dim(R/I) < \dim(R/(x_1, \ldots, x_c))$ by
+Lemmas \ref{lemma-regular-domain} and \ref{lemma-one-equation}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-free-mod-x}
+Let $R$ be a Noetherian local ring.
+Let $x \in \mathfrak m$.
+Let $M$ be a finite $R$-module such that
+$x$ is a nonzerodivisor on $M$ and
+$M/xM$ is free over $R/xR$.
+Then $M$ is free over $R$.
+\end{lemma}
+
+\begin{proof}
+Let $m_1, \ldots, m_r$ be elements of $M$ which map to
+a $R/xR$-basis of $M/xM$. By Nakayama's Lemma \ref{lemma-NAK}
+$m_1, \ldots, m_r$ generate $M$. If $\sum a_i m_i = 0$
+is a relation, then $a_i \in xR$ for all $i$. Hence
+$a_i = b_i x$ for some $b_i \in R$. Hence
+the kernel $K$ of $R^r \to M$ satisfies $xK = K$
+and hence is zero by Nakayama's lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-mcm-free}
+Let $R$ be a regular local ring.
+Any maximal Cohen-Macaulay module over $R$ is free.
+\end{lemma}
+
+\begin{proof}
+Let $M$ be a maximal Cohen-Macaulay module over $R$.
+Let $x \in \mathfrak m$ be part of a regular sequence
+generating $\mathfrak m$. Then $x$ is a nonzerodivisor
+on $M$ by Proposition \ref{proposition-CM-module}, and
+$M/xM$ is a maximal Cohen-Macaulay module over $R/xR$.
+By induction on $\dim(R)$ we see that $M/xM$ is free.
+We win by Lemma \ref{lemma-free-mod-x}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-mod-x}
+Suppose $R$ is a Noetherian local ring.
+Let $x \in \mathfrak m$ be a nonzerodivisor
+such that $R/xR$ is a regular local ring. Then $R$ is a regular local ring.
+More generally, if $x_1, \ldots, x_r$ is a regular sequence in $R$
+such that $R/(x_1, \ldots, x_r)$ is a regular local ring, then
+$R$ is a regular local ring.
+\end{lemma}
+
+\begin{proof}
+This is true because $x$ together with the lifts of a system
+of minimal generators of the maximal ideal of $R/xR$ will give
+$\dim(R)$ generators of $\mathfrak m$.
+Use Lemma \ref{lemma-one-equation}.
+The last statement follows from the first and induction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-regular}
+Let $(R_i, \varphi_{ii'})$ be a directed system of local rings whose
+transition maps are local ring maps. If each $R_i$ is a regular local ring and
+$R = \colim R_i$ is Noetherian, then $R$ is a regular local ring.
+\end{lemma}
+
+\begin{proof}
Let $\mathfrak m \subset R$ be the maximal ideal; it is the colimit
of the maximal ideals $\mathfrak m_i \subset R_i$.
+We prove the lemma by induction on $d = \dim \mathfrak m/\mathfrak m^2$.
+If $d = 0$, then $R = R/\mathfrak m$ is a field and $R$ is a regular local ring.
+If $d > 0$ pick an $x \in \mathfrak m$, $x \not \in \mathfrak m^2$.
+For some $i$ we can find an $x_i \in \mathfrak m_i$ mapping to $x$.
+Note that $R/xR = \colim_{i' \geq i} R_{i'}/x_iR_{i'}$ is a Noetherian
+local ring. By
+Lemma \ref{lemma-regular-ring-CM}
+we see that $R_{i'}/x_iR_{i'}$ is a regular local ring.
+Hence by induction we see
+that $R/xR$ is a regular local ring. Since each $R_i$ is a domain
(Lemma \ref{lemma-regular-domain}) we see that $R$ is a domain.
+Hence $x$ is a nonzerodivisor and we conclude that $R$ is
+a regular local ring by Lemma \ref{lemma-regular-mod-x}.
+\end{proof}
+
+
+
+
+
+\section{Epimorphisms of rings}
+\label{section-epimorphism}
+
+\noindent
+In any category there is a notion of an {\it epimorphism}.
+Some of this material is taken from \cite{Autour} and \cite{Mazet}.
+
+\begin{lemma}
+\label{lemma-epimorphism}
+Let $R \to S$ be a ring map. The following are equivalent
+\begin{enumerate}
+\item $R \to S$ is an epimorphism,
+\item the two ring maps $S \to S \otimes_R S$ are equal,
+\item either of the ring maps $S \to S \otimes_R S$ is an isomorphism, and
+\item the ring map $S \otimes_R S \to S$ is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-epimorphism}
+The composition of two epimorphisms of rings is an epimorphism.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is true in any category.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-epimorphism}
+If $R \to S$ is an epimorphism of rings and $R \to R'$ is any ring map,
+then $R' \to R' \otimes_R S$ is an epimorphism.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: True in any category with pushouts.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-permanence-epimorphism}
+If $A \to B \to C$ are ring maps and $A \to C$ is an epimorphism, so is
+$B \to C$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is true in any category.
+\end{proof}
+
+\noindent
+This means in particular, that if $R \to S$ is an epimorphism with
+image $\overline{R} \subset S$, then $\overline{R} \to S$ is an epimorphism.
+Hence while proving results for epimorphisms we may often assume
+the map is injective. The following lemma means in particular that
+every localization is an epimorphism.
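
\noindent
For example, $\mathbf{Z} \to \mathbf{Q}$ is an epimorphism which is not
surjective: the multiplication map
$\mathbf{Q} \otimes_{\mathbf{Z}} \mathbf{Q} \to \mathbf{Q}$
is an isomorphism, so Lemma \ref{lemma-epimorphism} applies.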
+
+\begin{lemma}
+\label{lemma-epimorphism-local}
+Let $R \to S$ be a ring map. The following are equivalent:
+\begin{enumerate}
+\item $R \to S$ is an epimorphism, and
+\item $R_{\mathfrak p} \to S_{\mathfrak p}$ is an epimorphism for
+each prime $\mathfrak p$ of $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $S_{\mathfrak p} = R_{\mathfrak p} \otimes_R S$ (see
+Lemma \ref{lemma-tensor-localization})
+we see that (1) implies (2) by
+Lemma \ref{lemma-base-change-epimorphism}.
+Conversely, assume that (2) holds. Let $a, b : S \to A$ be two ring maps
+from $S$ to a ring $A$ equalizing the map $R \to S$. By assumption we see
+that for every prime $\mathfrak p$ of $R$ the induced maps
+$a_{\mathfrak p}, b_{\mathfrak p} : S_{\mathfrak p} \to A_{\mathfrak p}$ are
+the same. Hence $a = b$ as $A \subset \prod_{\mathfrak p} A_{\mathfrak p}$, see
+Lemma \ref{lemma-characterize-zero-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-epimorphism-surjective}
+\begin{slogan}
+A ring map is surjective if and only if it is a finite epimorphism.
+\end{slogan}
+Let $R \to S$ be a ring map. The following are equivalent
+\begin{enumerate}
+\item $R \to S$ is an epimorphism and finite, and
+\item $R \to S$ is surjective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+(This lemma seems to have been reproved many times in the literature, and
+has many different proofs.)
+It is clear that a surjective ring map is an epimorphism.
+Suppose that $R \to S$ is a finite ring map such that
+$S \otimes_R S \to S$ is an isomorphism. Our goal is to show that
+$R \to S$ is surjective. Assume $S/R$ is not zero.
+The exact sequence $R \to S \to S/R \to 0$
+leads to an exact sequence
+$$
+R \otimes_R S \to S \otimes_R S \to S/R \otimes_R S \to 0.
+$$
+Our assumption implies that the first arrow is an isomorphism, hence
+we conclude that $S/R \otimes_R S = 0$. Hence also $S/R \otimes_R S/R = 0$. By
+Lemma \ref{lemma-trivial-filter-finite-module}
+there exists a surjection of $R$-modules $S/R \to R/I$ for some proper
+ideal $I \subset R$. Hence there exists a
+surjection $S/R \otimes_R S/R \to R/I \otimes_R R/I = R/I \not = 0$,
+contradiction.
+\end{proof}
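
\noindent
The finiteness assumption in
Lemma \ref{lemma-finite-epimorphism-surjective}
cannot be dropped. For example, the multiplication map
$\mathbf{Q} \otimes_{\mathbf{Z}} \mathbf{Q} \to \mathbf{Q}$
is an isomorphism, hence $\mathbf{Z} \to \mathbf{Q}$ is an epimorphism
by Lemma \ref{lemma-epimorphism}, but it is not surjective and
$\mathbf{Q}$ is not finite as a $\mathbf{Z}$-module.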
+
+\begin{lemma}
+\label{lemma-faithfully-flat-epimorphism}
+A faithfully flat epimorphism is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This is clear from
+Lemma \ref{lemma-epimorphism} part (3)
+as the map $S \to S \otimes_R S$ is the map $R \to S$ tensored with $S$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-epimorphism-over-field}
+If $k \to S$ is an epimorphism and $k$ is a field, then $S = k$ or $S = 0$.
+\end{lemma}
+
+\begin{proof}
+This is clear from the result of
+Lemma \ref{lemma-faithfully-flat-epimorphism}
+(as any nonzero algebra over $k$ is faithfully flat), or
by arguing directly that $S \to S \otimes_k S$ cannot be
surjective unless $\dim_k(S) \leq 1$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-epimorphism-injective-spec}
+Let $R \to S$ be an epimorphism of rings. Then
+\begin{enumerate}
+\item $\Spec(S) \to \Spec(R)$ is injective, and
+\item for $\mathfrak q \subset S$ lying over $\mathfrak p \subset R$
+we have $\kappa(\mathfrak p) = \kappa(\mathfrak q)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Let $\mathfrak p$ be a prime of $R$. The fibre of the map over $\mathfrak p$
is the spectrum of the fibre ring $S \otimes_R \kappa(\mathfrak p)$. By
+Lemma \ref{lemma-base-change-epimorphism}
+the map $\kappa(\mathfrak p) \to S \otimes_R \kappa(\mathfrak p)$
+is an epimorphism, and hence by
+Lemma \ref{lemma-epimorphism-over-field}
+we have either $S \otimes_R \kappa(\mathfrak p) = 0$
+or $S \otimes_R \kappa(\mathfrak p) = \kappa(\mathfrak p)$
+which proves (1) and (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relations}
+Let $R$ be a ring.
+Let $M$, $N$ be $R$-modules.
+Let $\{x_i\}_{i \in I}$ be a set of generators of $M$.
+Let $\{y_j\}_{j \in J}$ be a set of generators of $N$.
+Let $\{m_j\}_{j \in J}$ be a family of elements of $M$ with $m_j = 0$
+for all but finitely many $j$.
+Then
+$$
+\sum\nolimits_{j \in J} m_j \otimes y_j = 0 \text{ in } M \otimes_R N
+$$
+is equivalent to the following:
+There exist $a_{i, j} \in R$ with $a_{i, j} = 0$ for all but finitely many
+pairs $(i, j)$ such that
+\begin{align*}
+m_j & = \sum\nolimits_{i \in I} a_{i, j} x_i \quad\text{for all } j \in J, \\
+0 & = \sum\nolimits_{j \in J} a_{i, j} y_j \quad\text{for all } i \in I.
+\end{align*}
+\end{lemma}
+
+\begin{proof}
+The sufficiency is immediate. Suppose that
+$\sum_{j \in J} m_j \otimes y_j = 0$.
+Consider the short exact sequence
+$$
+0 \to K \to \bigoplus\nolimits_{j \in J} R \to N \to 0
+$$
+where the $j$th basis vector of $\bigoplus\nolimits_{j \in J} R$ maps
+to $y_j$. Tensor this with $M$ to get the exact sequence
+$$
+K \otimes_R M \to \bigoplus\nolimits_{j \in J} M \to N \otimes_R M \to 0.
+$$
The assumption implies that there exist elements $k_i \in K$ such that
$\sum k_i \otimes x_i$ maps to the element $(m_j)_{j \in J}$ of the middle term.
Writing $k_i = (a_{i, j})_{j \in J}$ we obtain what we want.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kernel-difference-projections}
+Let $\varphi : R \to S$ be a ring map.
+Let $g \in S$. The following are equivalent:
+\begin{enumerate}
+\item $g \otimes 1 = 1 \otimes g$ in $S \otimes_R S$, and
+\item there exist $n \geq 0$ and elements $y_i, z_j \in S$
+and $x_{i, j} \in R$ for $1 \leq i, j \leq n$ such that
+\begin{enumerate}
+\item $g = \sum_{i, j \leq n} x_{i, j} y_i z_j$,
+\item for each $j$ we have $\sum x_{i, j}y_i \in \varphi(R)$, and
+\item for each $i$ we have $\sum x_{i, j}z_j \in \varphi(R)$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (2) implies (1). Conversely, suppose that
+$g \otimes 1 = 1 \otimes g$. Choose generators $\{s_i\}_{i \in I}$
+of $S$ as an $R$-module with $0, 1 \in I$ and $s_0 = 1$ and $s_1 = g$.
+Apply
+Lemma \ref{lemma-relations}
+to the relation $g \otimes s_0 + (-1) \otimes s_1 = 0$.
+We see that there exist $a_{i, j} \in R$ such that
+$g = \sum_i a_{i, 0} s_i$, $-1 = \sum_i a_{i, 1} s_i$, and for
+$j \not = 0, 1$ we have $0 = \sum_i a_{i, j} s_i$, and moreover
+for all $i$ we have $\sum_j a_{i, j}s_j = 0$.
+Then we have
+$$
+\sum\nolimits_{i, j \not = 0} a_{i, j} s_i s_j = -g + a_{0, 0}
+$$
and for each $j \not = 0$ we have
$\sum_{i \not = 0} a_{i, j}s_i \in \varphi(R)$. This proves that $-g + a_{0, 0}$
+can be written as in (2). It follows that $g$ can be written as
+in (2). Details omitted.
+Hint: Show that the set of elements of $S$ which have an
+expression as in (2) form an $R$-subalgebra of $S$.
+\end{proof}
+
+\begin{remark}
+\label{remark-matrices-associated-to-elements-epicenter}
+Let $R \to S$ be a ring map. Sometimes the set of elements
+$g \in S$ such that $g \otimes 1 = 1 \otimes g$ is called the
+{\it epicenter} of $S$. It is an $R$-algebra. By the construction of
+Lemma \ref{lemma-kernel-difference-projections}
+we get for each $g$ in the epicenter a matrix factorization
+$$
+(g) = Y X Z
+$$
+with $X \in \text{Mat}(n \times n, R)$,
+$Y \in \text{Mat}(1 \times n, S)$, and
+$Z \in \text{Mat}(n \times 1, S)$. Namely, let $x_{i, j}, y_i, z_j$
be as in part (2) of the lemma. Set $X = (x_{i, j})$, let $Y$ be the
row vector whose entries are the $y_i$ and let $Z$ be the column vector
whose entries are the $z_j$. With this notation conditions (b) and (c) of
+Lemma \ref{lemma-kernel-difference-projections}
+mean exactly that $Y X \in \text{Mat}(1 \times n, R)$,
+$X Z \in \text{Mat}(n \times 1, R)$.
+It turns out to be very convenient to consider the triple of
matrices $(X, YX, XZ)$. Given $n \in \mathbf{N}$ and a triple
$(P, U, V)$ we say that $(P, U, V)$ is an {\it $n$-triple associated to $g$}
+if there exists a matrix factorization as above such that
+$P = X$, $U = YX$ and $V = XZ$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-epimorphism-cardinality}
+Let $R \to S$ be an epimorphism of rings.
+Then the cardinality of $S$ is at most the cardinality of $R$.
+In a formula: $|S| \leq |R|$.
+\end{lemma}
+
+\begin{proof}
+The condition that $R \to S$ is an epimorphism means that each $g \in S$
+satisfies $g \otimes 1 = 1 \otimes g$, see
+Lemma \ref{lemma-epimorphism}.
+We are going to use the notation introduced in
+Remark \ref{remark-matrices-associated-to-elements-epicenter}.
+Suppose that $g, g' \in S$ and suppose that $(P, U, V)$ is an $n$-triple
+which is associated to both $g$ and $g'$. Then we claim that
+$g = g'$. Namely, write $(P, U, V) = (X, YX, XZ)$ for a matrix
+factorization $(g) = YXZ$ of $g$ and write $(P, U, V) = (X', Y'X', X'Z')$
+for a matrix factorization $(g') = Y'X'Z'$ of $g'$.
+Then we see that
+$$
+(g) = YXZ = UZ = Y'X'Z = Y'PZ = Y'XZ = Y'V = Y'X'Z' = (g')
+$$
+and hence $g = g'$. This implies that the cardinality of $S$ is bounded
by the number of possible triples, which has cardinality at most
$\sup_{n \in \mathbf{N}} |R|^{n^2 + 2n}$. If $R$ is infinite then this is
+at most $|R|$, see \cite[Ch. I, 10.13]{Kunen}.
+
+\medskip\noindent
+If $R$ is a finite ring then the argument above only proves that $S$
+is at worst countable. In fact in this case $R$ is Artinian and the
+map $R \to S$ is surjective. We omit the proof of this case.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-epimorphism-modules}
+Let $R \to S$ be an epimorphism of rings. Let $N_1, N_2$ be $S$-modules.
+Then $\Hom_S(N_1, N_2) = \Hom_R(N_1, N_2)$. In other words, the
+restriction functor $\text{Mod}_S \to \text{Mod}_R$ is fully faithful.
+\end{lemma}
+
+\begin{proof}
+Let $\varphi : N_1 \to N_2$ be an $R$-linear map. For any $x \in N_1$
+consider the map $S \otimes_R S \to N_2$ defined by the rule
+$g \otimes g' \mapsto g\varphi(g'x)$. Since both maps $S \to S \otimes_R S$
+are isomorphisms (Lemma \ref{lemma-epimorphism}), we conclude that
+$g \varphi(g'x) = gg'\varphi(x) = \varphi(gg' x)$. Thus $\varphi$
+is $S$-linear.
+\end{proof}
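
\noindent
For example, applied to the epimorphism $\mathbf{Z} \to \mathbf{Q}$,
Lemma \ref{lemma-epimorphism-modules} recovers the familiar fact that
an additive map between $\mathbf{Q}$-vector spaces is automatically
$\mathbf{Q}$-linear.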
+
+
+
+\section{Pure ideals}
+\label{section-pure-ideals}
+
+\noindent
+The material in this section is discussed in many papers, see for example
+\cite{Lazard}, \cite{Bkouche}, and \cite{DeMarco}.
+
+\begin{definition}
+\label{definition-pure-ideal}
+Let $R$ be a ring. We say that $I \subset R$ is {\it pure}
+if the quotient ring $R/I$ is flat over $R$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-pure}
+Let $R$ be a ring.
+Let $I \subset R$ be an ideal.
+The following are equivalent:
+\begin{enumerate}
+\item $I$ is pure,
+\item for every ideal $J \subset R$ we have $J \cap I = IJ$,
+\item for every finitely generated ideal $J \subset R$ we have
+$J \cap I = JI$,
+\item for every $x \in R$ we have $(x) \cap I = xI$,
+\item for every $x \in I$ we have $x = yx$ for some $y \in I$,
+\item for every $x_1, \ldots, x_n \in I$ there exists a
+$y \in I$ such that $x_i = yx_i$ for all $i = 1, \ldots, n$,
+\item for every prime $\mathfrak p$ of $R$ we have
+$IR_{\mathfrak p} = 0$ or $IR_{\mathfrak p} = R_{\mathfrak p}$,
+\item $\text{Supp}(I) = \Spec(R) \setminus V(I)$,
+\item $I$ is the kernel of the map $R \to (1 + I)^{-1}R$,
+\item $R/I \cong S^{-1}R$ as $R$-algebras for some multiplicative
+subset $S$ of $R$, and
+\item $R/I \cong (1 + I)^{-1}R$ as $R$-algebras.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For any ideal $J$ of $R$ we have the short exact sequence
+$0 \to J \to R \to R/J \to 0$. Tensoring with $R/I$ we get
an exact sequence $J \otimes_R R/I \to R/I \to R/(I + J) \to 0$
+and $J \otimes_R R/I = J/JI$. Thus the
+equivalence of (1), (2), and (3) follows from
+Lemma \ref{lemma-flat}. Moreover, these imply (4).
+
+\medskip\noindent
+The implication (4) $\Rightarrow$ (5) is trivial.
+Assume (5) and let $x_1, \ldots, x_n \in I$.
+Choose $y_i \in I$ such that $x_i = y_ix_i$.
+Let $y \in I$ be the element such that
+$1 - y = \prod_{i = 1, \ldots, n} (1 - y_i)$.
+Then $x_i = yx_i$ for all $i = 1, \ldots, n$.
+Hence (6) holds, and it follows that (5) $\Leftrightarrow$ (6).
+
+\medskip\noindent
+Assume (5). Let $x \in I$. Then $x = yx$ for some $y \in I$.
+Hence $x(1 - y) = 0$, which shows that $x$ maps to zero in
+$(1 + I)^{-1}R$. Of course the kernel of the map
+$R \to (1 + I)^{-1}R$ is always contained in $I$. Hence we
+see that (5) implies (9). Assume (9). Then for any $x \in I$
+we see that $x(1 - y) = 0$ for some $y \in I$.
+In other words, $x = yx$. We conclude that (5) is equivalent to (9).
+
+\medskip\noindent
+Assume (5). Let $\mathfrak p$ be a prime of $R$.
+If $\mathfrak p \not \in V(I)$, then $IR_{\mathfrak p} = R_{\mathfrak p}$.
+If $\mathfrak p \in V(I)$, in other words, if $I \subset \mathfrak p$,
+then $x \in I$ implies $x(1 - y) = 0$ for some $y \in I$, implies
+$x$ maps to zero in $R_{\mathfrak p}$, i.e., $IR_{\mathfrak p} = 0$.
+Thus we see that (7) holds.
+
+\medskip\noindent
+Assume (7). Then $(R/I)_{\mathfrak p}$ is either $0$ or $R_{\mathfrak p}$
+for any prime $\mathfrak p$ of $R$. Hence by
+Lemma \ref{lemma-flat-localization}
+we see that (1) holds. At this point we see that all of
+(1) -- (7) and (9) are equivalent.
+
+\medskip\noindent
+As $IR_{\mathfrak p} = I_{\mathfrak p}$ we see that (7) implies (8).
+Finally, if (8) holds, then this means exactly that $I_{\mathfrak p}$
+is the zero module if and only if $\mathfrak p \in V(I)$, which
+is clearly saying that (7) holds. Now (1) -- (9) are equivalent.
+
+\medskip\noindent
+Assume (1) -- (9) hold. Then $R/I \subset (1 + I)^{-1}R$ by (9) and
+the map $R/I \to (1 + I)^{-1}R$ is also surjective by the description
+of localizations at primes afforded by (7). Hence (11) holds.
+
+\medskip\noindent
+The implication (11) $\Rightarrow$ (10) is trivial.
+And (10) implies that (1) holds because a localization of
+$R$ is flat over $R$, see
+Lemma \ref{lemma-flat-localization}.
+\end{proof}
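
\noindent
The simplest example of a pure ideal is an ideal generated by an
idempotent: if $e \in R$ is an idempotent and $I = (e)$, then every
$x \in I$ satisfies $x = ex$ with $e \in I$, which is property (5) of
Lemma \ref{lemma-pure}. In this case $R/I \cong R_{1 - e}$,
illustrating property (10).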
+
+\begin{lemma}
+\label{lemma-pure-ideal-determined-by-zero-set}
+\begin{slogan}
+Pure ideals are determined by their vanishing locus.
+\end{slogan}
+Let $R$ be a ring.
+If $I, J \subset R$ are pure ideals, then $V(I) = V(J)$
+implies $I = J$.
+\end{lemma}
+
+\begin{proof}
+For example, by property (7) of
+Lemma \ref{lemma-pure}
+we see that
+$I = \Ker(R \to \prod_{\mathfrak p \in V(I)} R_{\mathfrak p})$
+can be recovered from the closed subset associated to it.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pure-open-closed-specializations}
+Let $R$ be a ring. The rule
+$I \mapsto V(I)$
+determines a bijection
+$$
+\{I \subset R \text{ pure}\}
+\leftrightarrow
+\{Z \subset \Spec(R)\text{ closed and closed under generalizations}\}
+$$
+\end{lemma}
+
+\begin{proof}
+Let $I$ be a pure ideal. Then since $R \to R/I$ is flat, by going down
+generalizations lift along the map $\Spec(R/I) \to \Spec(R)$.
+Hence $V(I)$ is closed under generalizations. This shows that the map
+is well defined. By
+Lemma \ref{lemma-pure-ideal-determined-by-zero-set}
+the map is injective. Suppose that
+$Z \subset \Spec(R)$ is closed and closed under generalizations.
+Let $J \subset R$ be the radical ideal such that $Z = V(J)$.
+Let $I = \{x \in R : x \in xJ\}$. Note that $I$ is an ideal:
+if $x, y \in I$ then there exist $f, g \in J$ such that
+$x = xf$ and $y = yg$. Then
+$$
x + y = (x + y)(f + g - fg).
$$
The remaining verifications are left to the reader.
+We claim that $I$ is pure and that $V(I) = V(J)$.
+If the claim is true then the map of the lemma is surjective and
+the lemma holds.
+
+\medskip\noindent
+Note that $I \subset J$, so that $V(J) \subset V(I)$.
+Let $I \subset \mathfrak p$ be a prime. Consider the multiplicative
+subset $S = (R \setminus \mathfrak p)(1 + J)$. By definition of
+$I$ and $I \subset \mathfrak p$ we see that $0 \not \in S$.
+Hence we can find a prime $\mathfrak q$ of $R$ which is disjoint
+from $S$, see
+Lemmas \ref{lemma-localization-zero} and
+\ref{lemma-spec-localization}.
+Hence $\mathfrak q \subset \mathfrak p$ and
+$\mathfrak q \cap (1 + J) = \emptyset$.
+This implies that $\mathfrak q + J$ is a proper ideal of $R$.
+Let $\mathfrak m$ be a maximal ideal containing $\mathfrak q + J$.
+Then we get
+$\mathfrak m \in V(J)$ and hence $\mathfrak q \in V(J) = Z$
+as $Z$ was assumed to be closed under generalization.
+This in turn implies $\mathfrak p \in V(J)$ as
+$\mathfrak q \subset \mathfrak p$. Thus we see that $V(I) = V(J)$.
+
+\medskip\noindent
+Finally, since $V(I) = V(J)$ (and $J$ radical) we see that $J = \sqrt{I}$.
+Pick $x \in I$, so that $x = xy$ for some $y \in J$ by definition.
+Then $x = xy = xy^2 = \ldots = xy^n$. Since $y^n \in I$ for some $n > 0$
+we conclude that property (5) of
+Lemma \ref{lemma-pure}
+holds and we see that $I$ is indeed pure.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finitely-generated-pure-ideal}
+Let $R$ be a ring. Let $I \subset R$ be an ideal.
+The following are equivalent
+\begin{enumerate}
+\item $I$ is pure and finitely generated,
+\item $I$ is generated by an idempotent,
+\item $I$ is pure and $V(I)$ is open, and
+\item $R/I$ is a projective $R$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If (1) holds, then $I = I \cap I = I^2$ by
+Lemma \ref{lemma-pure}.
+Hence $I$ is generated by an idempotent by
+Lemma \ref{lemma-ideal-is-squared-union-connected}.
+Thus (1) $\Rightarrow$ (2).
+If (2) holds, then $I = (e)$ and $R = (1 - e) \oplus (e)$ as
+an $R$-module hence $R/I$ is flat and $I$ is pure and $V(I) = D(1 - e)$
+is open. Thus (2) $\Rightarrow$ (1) $+$ (3).
+Finally, assume (3). Then $V(I)$ is open and closed, hence
+$V(I) = D(1 - e)$ for some idempotent $e$ of $R$, see
+Lemma \ref{lemma-disjoint-decomposition}.
+The ideal $J = (e)$ is a pure ideal such that $V(J) = V(I)$ hence
+$I = J$ by
+Lemma \ref{lemma-pure-ideal-determined-by-zero-set}.
+In this way we see that (3) $\Rightarrow$ (2). By
+Lemma \ref{lemma-finite-projective}
+we see that (4) is equivalent to the assertion that $I$ is pure
+and $R/I$ finitely presented. Moreover, $R/I$ is finitely presented
+if and only if $I$ is finitely generated, see Lemma \ref{lemma-extension}.
+Hence (4) is equivalent to (1).
+\end{proof}
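
\noindent
The following example shows that a pure ideal need not be finitely
generated. Let $k$ be a field, let $R = \prod_{n \in \mathbf{N}} k$ and
let $I = \bigoplus_{n \in \mathbf{N}} k \subset R$. Every $x \in I$ has
finite support, hence $x = yx$ where $y \in I$ is the idempotent whose
support equals the support of $x$. Thus $I$ is pure by property (5) of
Lemma \ref{lemma-pure}. On the other hand, every idempotent contained
in $I$ has finite support and hence generates an ideal strictly smaller
than $I$, so $I$ is not generated by an idempotent and therefore not
finitely generated by Lemma \ref{lemma-finitely-generated-pure-ideal}.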
+
+\noindent
+We can use the above to characterize those rings for which every finite
+flat module is finitely presented.
+
+\begin{lemma}
+\label{lemma-finite-flat-module-finitely-presented}
+Let $R$ be a ring. The following are equivalent:
+\begin{enumerate}
+\item every $Z \subset \Spec(R)$ which is closed and closed under
+generalizations is also open, and
+\item any finite flat $R$-module is finite locally free.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If any finite flat $R$-module is finite locally free then the support
of $R/I$ where $I$ is a pure ideal is open. Hence the implication
(2) $\Rightarrow$ (1) follows from
Lemma \ref{lemma-pure-open-closed-specializations}.
+
+\medskip\noindent
+For the converse assume that $R$ satisfies (1).
+Let $M$ be a finite flat $R$-module.
+The support $Z = \text{Supp}(M)$ of $M$ is closed, see
+Lemma \ref{lemma-support-closed}.
+On the other hand, if $\mathfrak p \subset \mathfrak p'$, then by
+Lemma \ref{lemma-finite-flat-local}
+the module $M_{\mathfrak p'}$ is free, and
$M_{\mathfrak p} = M_{\mathfrak p'} \otimes_{R_{\mathfrak p'}} R_{\mathfrak p}$.
Hence
+$\mathfrak p' \in \text{Supp}(M) \Rightarrow \mathfrak p \in \text{Supp}(M)$,
+in other words, the support is closed under generalization.
+As $R$ satisfies (1) we see that the support of $M$ is open and closed.
+Suppose that $M$ is generated by $r$ elements $m_1, \ldots, m_r$.
+The modules $\wedge^i(M)$, $i = 1, \ldots, r$ are finite flat $R$-modules
+also, because $\wedge^i(M)_{\mathfrak p} = \wedge^i(M_{\mathfrak p})$
+is free over $R_{\mathfrak p}$. Note that
+$\text{Supp}(\wedge^{i + 1}(M)) \subset \text{Supp}(\wedge^i(M))$.
+Thus we see that there exists a decomposition
+$$
+\Spec(R) = U_0 \amalg U_1 \amalg \ldots \amalg U_r
+$$
+by open and closed subsets such that the support of
+$\wedge^i(M)$ is $U_r \cup \ldots \cup U_i$ for all $i = 0, \ldots, r$.
+Let $\mathfrak p$ be a prime of $R$, and say $\mathfrak p \in U_i$.
+Note that
+$\wedge^i(M) \otimes_R \kappa(\mathfrak p) =
+\wedge^i(M \otimes_R \kappa(\mathfrak p))$.
+Hence, after possibly renumbering $m_1, \ldots, m_r$ we may assume that
+$m_1, \ldots, m_i$ generate $M \otimes_R \kappa(\mathfrak p)$. By
+Nakayama's Lemma \ref{lemma-NAK}
+we get a surjection
+$$
+R_f^{\oplus i} \longrightarrow M_f, \quad
+(a_1, \ldots, a_i) \longmapsto \sum a_im_i
+$$
+for some $f \in R$, $f \not \in \mathfrak p$. We may also assume that
+$D(f) \subset U_i$. This means that $\wedge^i(M_f) = \wedge^i(M)_f$
+is a flat $R_f$ module whose support is all of $\Spec(R_f)$.
+By the above it is generated by a single element, namely
+$m_1 \wedge \ldots \wedge m_i$. Hence $\wedge^i(M)_f \cong R_f/J$
+for some pure ideal $J \subset R_f$ with $V(J) = \Spec(R_f)$.
+Clearly this means that $J = (0)$, see
+Lemma \ref{lemma-pure-ideal-determined-by-zero-set}.
+Thus $m_1 \wedge \ldots \wedge m_i$ is a basis for
+$\wedge^i(M_f)$ and it follows that the displayed map is injective
+as well as surjective. This proves that $M$ is finite locally free
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+\section{Rings of finite global dimension}
+\label{section-ring-finite-gl-dim}
+
+\noindent
+The following lemma is often used to compare different
+projective resolutions of a given module.
+
+\begin{lemma}[Schanuel's lemma]
+\label{lemma-Schanuel}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Suppose that
+$$
+0 \to K \xrightarrow{c_1} P_1 \xrightarrow{p_1} M \to 0
+\quad\text{and}\quad
+0 \to L \xrightarrow{c_2} P_2 \xrightarrow{p_2} M \to 0
+$$
+are two short exact sequences, with $P_i$ projective.
Then $K \oplus P_2 \cong L \oplus P_1$. More precisely,
there exists a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+K \oplus P_2 \ar[r]_{(c_1, \text{id})} \ar[d] &
+P_1 \oplus P_2 \ar[r]_{(p_1, 0)} \ar[d] &
+M \ar[r] \ar@{=}[d] &
+0 \\
+0 \ar[r] &
+P_1 \oplus L \ar[r]^{(\text{id}, c_2)} &
+P_1 \oplus P_2 \ar[r]^{(0, p_2)} &
+M \ar[r] &
+0
+}
+$$
+whose vertical arrows are isomorphisms.
+\end{lemma}
+
+\begin{proof}
+Consider the module $N$ defined by the short exact sequence
+$0 \to N \to P_1 \oplus P_2 \to M \to 0$,
+where the last map is the sum of the two maps
+$P_i \to M$. It is easy to see that the projection
+$N \to P_1$ is surjective with kernel $L$, and that
+$N \to P_2$ is surjective with kernel $K$.
+Since $P_i$ are projective we have $N \cong K \oplus P_2
+\cong L \oplus P_1$. This proves the first statement.
+
+\medskip\noindent
+To prove the second statement (and to reprove the first), choose
+$a : P_1 \to P_2$ and $b : P_2 \to P_1$ such that
+$p_1 = p_2 \circ a$ and $p_2 = p_1 \circ b$. This is possible
+because $P_1$ and $P_2$ are projective. Then we get a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+K \oplus P_2 \ar[r]_{(c_1, \text{id})} &
+P_1 \oplus P_2 \ar[r]_{(p_1, 0)} &
+M \ar[r] &
+0 \\
+0 \ar[r] &
+N \ar[r] \ar[d] \ar[u] &
+P_1 \oplus P_2 \ar[r]_{(p_1, p_2)}
+\ar[d]_S \ar[u]^T &
+M \ar[r] \ar@{=}[d] \ar@{=}[u] &
+0 \\
+0 \ar[r] &
+P_1 \oplus L \ar[r]^{(\text{id}, c_2)} &
+P_1 \oplus P_2 \ar[r]^{(0, p_2)} &
+M \ar[r] &
+0
+}
+$$
+with $T$ and $S$ given by the matrices
+$$
+S = \left(
+\begin{matrix}
+\text{id} & 0 \\
+a & \text{id}
+\end{matrix}
+\right)
+\quad\text{and}\quad
+T = \left(
+\begin{matrix}
+\text{id} & b \\
+0 & \text{id}
+\end{matrix}
+\right)
+$$
+Then $S$, $T$ and the maps $N \to P_1 \oplus L$
+and $N \to K \oplus P_2$ are isomorphisms as desired.
+\end{proof}
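
\noindent
Here is a small worked instance of Schanuel's lemma. Let
$R = k[x]/(x^2)$ for a field $k$ and let $M = R/(x)$. Consider the two
short exact sequences
$$
0 \to (x) \to R \to M \to 0
\quad\text{and}\quad
0 \to L \to R^{\oplus 2} \to M \to 0
$$
where the second map $R^{\oplus 2} \to M$ sends $(a, b)$ to the class
of $a + b$. One checks directly that
$L = \{(a, b) : a + b \in (x)\} \cong (x) \oplus R$
via $(a, b) \mapsto (a + b, b)$, in accordance with the isomorphism
$(x) \oplus R^{\oplus 2} \cong L \oplus R$ of the lemma.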
+
+\begin{definition}
+\label{definition-finite-proj-dim}
+Let $R$ be a ring. Let $M$ be an $R$-module. We say $M$ has
+{\it finite projective dimension} if it has a finite length
+resolution by projective $R$-modules. The minimal length of such a
+resolution is called the {\it projective dimension}
+of $M$.
+\end{definition}
+
+\noindent
+It is clear that the projective dimension of $M$ is $0$ if and
+only if $M$ is a projective module.
+The following lemma explains to what extent the projective
+dimension is independent of the choice of a projective
+resolution.
+
+\begin{lemma}
+\label{lemma-independent-resolution}
+Let $R$ be a ring. Suppose that $M$ is an $R$-module of projective
+dimension $d$. Suppose that $F_e \to F_{e-1} \to \ldots \to F_0 \to M \to 0$
+is exact with $F_i$ projective and $e \geq d - 1$.
+Then the kernel of $F_e \to F_{e-1}$ is projective
+(or the kernel of $F_0 \to M$ is projective in case
+$e = 0$).
+\end{lemma}
+
+\begin{proof}
+We prove this by induction on $d$. If $d = 0$, then
+$M$ is projective. In this case there is a splitting
+$F_0 = \Ker(F_0 \to M) \oplus M$, and hence
+$\Ker(F_0 \to M)$ is projective. This finishes
+the proof if $e = 0$, and if $e > 0$, then replacing
+$M$ by $\Ker(F_0 \to M)$ we decrease $e$.
+
+\medskip\noindent
+Next assume $d > 0$.
+Let $0 \to P_d \to P_{d-1} \to \ldots \to P_0 \to M \to 0$
+be a minimal length finite resolution with $P_i$ projective.
+According to
+Schanuel's Lemma \ref{lemma-Schanuel}
+we have
+$P_0 \oplus \Ker(F_0 \to M) \cong F_0 \oplus \Ker(P_0 \to M)$.
+This proves the case $d = 1$, $e = 0$, because then the right
+hand side is $F_0 \oplus P_1$ which is projective. Hence now we may
+assume $e > 0$. The module $F_0 \oplus \Ker(P_0 \to M)$ has the
+finite projective resolution
+$$
+0 \to P_d \to P_{d-1} \to \ldots \to
+P_2 \to P_1 \oplus F_0 \to \Ker(P_0 \to M) \oplus F_0 \to 0
+$$
+of length $d - 1$. By induction applied to the exact sequence
+$$
+F_e \to F_{e-1} \to \ldots \to F_2 \to P_0 \oplus F_1 \to
+P_0 \oplus \Ker(F_0 \to M) \to 0
+$$
+of length $e - 1$ we conclude $\Ker(F_e \to F_{e - 1})$
+is projective (if $e \geq 2$)
+or that $\Ker(F_1 \oplus P_0 \to F_0 \oplus P_0)$ is projective.
+This implies the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-what-kind-of-resolutions}
+Let $R$ be a ring. Let $M$ be an $R$-module. Let $d \geq 0$.
+The following are equivalent
+\begin{enumerate}
+\item $M$ has projective dimension $\leq d$,
+\item there exists a resolution
+$0 \to P_d \to P_{d - 1} \to \ldots \to P_0 \to M \to 0$
+with $P_i$ projective,
+\item for some resolution
+$\ldots \to P_2 \to P_1 \to P_0 \to M \to 0$ with
+$P_i$ projective we have $\Ker(P_{d - 1} \to P_{d - 2})$
+is projective if $d \geq 2$, or $\Ker(P_0 \to M)$ is projective if
+$d = 1$, or $M$ is projective if $d = 0$,
+\item for any resolution
+$\ldots \to P_2 \to P_1 \to P_0 \to M \to 0$ with
+$P_i$ projective we have $\Ker(P_{d - 1} \to P_{d - 2})$
+is projective if $d \geq 2$, or $\Ker(P_0 \to M)$ is projective if
+$d = 1$, or $M$ is projective if $d = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) is the definition of projective
+dimension, see Definition \ref{definition-finite-proj-dim}.
+We have (2) $\Rightarrow$ (4) by Lemma \ref{lemma-independent-resolution}.
+The implications (4) $\Rightarrow$ (3) and (3) $\Rightarrow$ (2) are
+immediate.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-what-kind-of-resolutions-local}
+Let $R$ be a local ring. Let $M$ be an $R$-module. Let $d \geq 0$.
+The equivalent conditions (1) -- (4) of
+Lemma \ref{lemma-what-kind-of-resolutions}
+are also equivalent to
+\begin{enumerate}
+\item[(5)] there exists a resolution
+$0 \to P_d \to P_{d - 1} \to \ldots \to P_0 \to M \to 0$
+with $P_i$ free.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-what-kind-of-resolutions} and
+Theorem \ref{theorem-projective-free-over-local-ring}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-what-kind-of-resolutions-Noetherian}
+Let $R$ be a Noetherian ring. Let $M$ be a finite $R$-module.
+Let $d \geq 0$. The equivalent conditions (1) -- (4) of
+Lemma \ref{lemma-what-kind-of-resolutions}
+are also equivalent to
+\begin{enumerate}
+\item[(6)] there exists a resolution
+$0 \to P_d \to P_{d - 1} \to \ldots \to P_0 \to M \to 0$
+with $P_i$ finite projective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose a resolution $\ldots \to F_2 \to F_1 \to F_0 \to M \to 0$
+with $F_i$ finite free (Lemma \ref{lemma-resolution-by-finite-free}).
+By Lemma \ref{lemma-what-kind-of-resolutions} we see that
+$P_d = \Ker(F_{d - 1} \to F_{d - 2})$ is projective at least if $d \geq 2$.
+Then $P_d$ is a finite $R$-module as $R$ is Noetherian and
+$P_d \subset F_{d - 1}$ which is finite free.
+Whence $0 \to P_d \to F_{d - 1} \to \ldots \to F_1 \to F_0 \to M \to 0$
+is the desired resolution.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-what-kind-of-resolutions-Noetherian-local}
+Let $R$ be a local Noetherian ring. Let $M$ be a finite $R$-module.
+Let $d \geq 0$. The equivalent conditions (1) -- (4) of
+Lemma \ref{lemma-what-kind-of-resolutions},
+condition (5) of Lemma \ref{lemma-what-kind-of-resolutions-local},
+and condition (6) of Lemma \ref{lemma-what-kind-of-resolutions-Noetherian}
+are also equivalent to
+\begin{enumerate}
+\item[(7)] there exists a resolution
+$0 \to F_d \to F_{d - 1} \to \ldots \to F_0 \to M \to 0$
+with $F_i$ finite free.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-what-kind-of-resolutions},
+\ref{lemma-what-kind-of-resolutions-local}, and
+\ref{lemma-what-kind-of-resolutions-Noetherian}
+and because a finite projective module over a local ring
+is finite free, see Lemma \ref{lemma-finite-projective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projective-dimension-ext}
+Let $R$ be a ring. Let $M$ be an $R$-module. Let $n \geq 0$.
+The following are equivalent
+\begin{enumerate}
+\item $M$ has projective dimension $\leq n$,
+\item $\Ext^i_R(M, N) = 0$ for all $R$-modules $N$ and all
+$i \geq n + 1$, and
+\item $\Ext^{n + 1}_R(M, N) = 0$ for all $R$-modules $N$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Choose a free resolution $F_\bullet \to M$ of $M$. Denote
+$d_e : F_e \to F_{e - 1}$. By
+Lemma \ref{lemma-independent-resolution}
+we see that $P_e = \Ker(d_e)$ is projective for $e \geq n - 1$.
+This implies that $F_e \cong P_e \oplus P_{e - 1}$ for $e \geq n$
+where $d_e$ maps the summand $P_{e - 1}$ isomorphically to $P_{e - 1}$
+in $F_{e - 1}$. Hence, for any $R$-module $N$ the complex
+$\Hom_R(F_\bullet, N)$ is split exact in degrees $\geq n + 1$.
+Whence (2) holds. The implication (2) $\Rightarrow$ (3) is trivial.
+
+\medskip\noindent
+Assume (3) holds. If $n = 0$ then $M$ is projective by
+Lemma \ref{lemma-characterize-projective}
+and we see that (1) holds. If $n > 0$ choose a free $R$-module $F$
+and a surjection $F \to M$ with kernel $K$. By
Lemma \ref{lemma-reverse-long-exact-seq-ext}
and the vanishing of $\Ext_R^i(F, N)$ for all $i > 0$ (which holds by
the first paragraph of the proof, as $F$ is free)
we see that $\Ext_R^n(K, N) = 0$ for all $R$-modules $N$.
+Hence by induction we see that $K$ has projective dimension $\leq n - 1$.
+Then $M$ has projective dimension $\leq n$ as any finite projective
+resolution of $K$ gives a projective resolution of length one more
+for $M$ by adding $F$ to the front.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-sequence-projective-dimension}
+Let $R$ be a ring. Let $0 \to M' \to M \to M'' \to 0$ be a short
+exact sequence of $R$-modules.
+\begin{enumerate}
+\item If $M$ has projective dimension $\leq n$ and $M''$
+has projective dimension $\leq n + 1$, then $M'$ has projective
+dimension $\leq n$.
+\item If $M'$ and $M''$ have projective dimension
+$\leq n$ then $M$ has projective dimension $\leq n$.
+\item If $M'$ has projective dimension $\leq n$ and
+$M$ has projective dimension $\leq n + 1$ then
+$M''$ has projective dimension $\leq n + 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Combine the characterization of projective dimension in
+Lemma \ref{lemma-projective-dimension-ext}
+with the long exact sequence of ext groups in
+Lemma \ref{lemma-reverse-long-exact-seq-ext}.
+\end{proof}
+
+\begin{definition}
+\label{definition-finite-gl-dim}
+Let $R$ be a ring. The ring
+$R$ is said to have {\it finite global dimension}
+if there exists an integer $n$ such that
+every $R$-module has a resolution by
+projective $R$-modules of length at most $n$.
+The minimal such $n$ is then called the {\it global dimension}
+of $R$.
+\end{definition}
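
\noindent
For example, a field has global dimension $0$ as every module over a
field is free. A principal ideal domain which is not a field has
global dimension $1$, since any module $M$ fits into a short exact
sequence $0 \to K \to F \to M \to 0$ with $F$ free and $K \subset F$
free as well. By contrast, $R = k[x]/(x^2)$ does not have finite
global dimension: in the free resolution
$$
\ldots \xrightarrow{x} R \xrightarrow{x} R \to R/(x) \to 0
$$
every kernel is $(x) \cong R/(x)$, which is not projective since a
projective module over a local ring is free
(Theorem \ref{theorem-projective-free-over-local-ring}).
Hence $R/(x)$ has infinite projective dimension by
Lemma \ref{lemma-independent-resolution}.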
+
+\noindent
+The argument in the proof of the following lemma can be found
+in the paper \cite{Auslander} by Auslander.
+
+\begin{lemma}
+\label{lemma-colimit-projective-dimension}
+Let $R$ be a ring. Suppose we have a module $M = \bigcup_{e \in E} M_e$
+where the $M_e$ are submodules well-ordered by inclusion. Assume the quotients
+$M_e/\bigcup\nolimits_{e' < e} M_{e'}$ have projective dimension $\leq n$.
+Then $M$ has projective dimension $\leq n$.
+\end{lemma}
+
+\begin{proof}
+We will prove this by induction on $n$.
+
+\medskip\noindent
+Base case: $n = 0$. Then $P_e = M_e/\bigcup_{e' < e} M_{e'}$ is projective.
+Thus we may choose a section $P_e \to M_e$ of the projection $M_e \to P_e$.
+We claim that the induced map $\psi : \bigoplus_{e \in E} P_e \to M$ is an
isomorphism. Namely, if $x = \sum x_e \in \bigoplus P_e$ is nonzero,
then we let $e_{\max}$ be maximal such that $x_{e_{\max}}$ is nonzero
and we conclude that $y = \psi(x)$ is nonzero because
$y \in M_{e_{\max}}$ has nonzero image $x_{e_{\max}}$ in $P_{e_{\max}}$.
+On the other hand, let $y \in M$. Then $y \in M_e$ for some $e$.
+We show that $y \in \Im(\psi)$ by transfinite induction on $e$.
+Let $x_e \in P_e$ be the image of $y$. Then
+$y - \psi(x_e) \in \bigcup_{e' < e} M_{e'}$.
+By induction hypothesis we conclude that $y - \psi(x_e) \in \Im(\psi)$
+hence $y \in \Im(\psi)$. Thus the claim is true and
+$\psi$ is an isomorphism. We conclude that $M$ is projective as
+a direct sum of projectives, see
+Lemma \ref{lemma-direct-sum-projective}.
+
+\medskip\noindent
+If $n > 0$, then for $e \in E$ we denote $F_e$ the free $R$-module
+on the set of elements of $M_e$. Then we have a system of
+short exact sequences
+$$
+0 \to K_e \to F_e \to M_e \to 0
+$$
+over the well-ordered set $E$. Note that the transition maps
+$F_{e'} \to F_e$ and $K_{e'} \to K_e$ are injective too.
+Set $F = \bigcup F_e$ and $K = \bigcup K_e$. Then
+$$
+0 \to
+K_e/\bigcup\nolimits_{e' < e} K_{e'} \to
+F_e/\bigcup\nolimits_{e' < e} F_{e'} \to
+M_e/\bigcup\nolimits_{e' < e} M_{e'} \to 0
+$$
+is a short exact sequence of $R$-modules too and
+$F_e/\bigcup_{e' < e} F_{e'}$ is the free $R$-module on the
+set of elements in $M_e$ which are not contained in $\bigcup_{e' < e} M_{e'}$.
+Hence by
+Lemma \ref{lemma-exact-sequence-projective-dimension}
+we see that the projective dimension of $K_e/\bigcup_{e' < e} K_{e'}$
+is at most $n - 1$. By induction we conclude that $K$ has projective
+dimension at most $n - 1$. Whence $M$ has projective dimension at most
+$n$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-gl-dim}
+Let $R$ be a ring. The following are equivalent
+\begin{enumerate}
+\item $R$ has finite global dimension $\leq n$,
+\item every finite $R$-module has projective dimension $\leq n$, and
+\item every cyclic $R$-module $R/I$ has projective dimension $\leq n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (1) $\Rightarrow$ (2) and (2) $\Rightarrow$ (3).
+Assume (3). Let $M$ be an $R$-module. Choose a set
+$E \subset M$ of generators of $M$.
+Choose a well ordering on $E$. For $e \in E$ denote
+$M_e$ the submodule of $M$ generated by the elements $e' \in E$
+with $e' \leq e$. Then $M = \bigcup_{e \in E} M_e$.
+Note that for each $e \in E$ the quotient
+$$
+M_e/\bigcup\nolimits_{e' < e} M_{e'}
+$$
+is either zero or generated by one element, hence has projective
+dimension $\leq n$ by (3). By Lemma \ref{lemma-colimit-projective-dimension}
+this means that $M$ has projective dimension $\leq n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-finite-gl-dim}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $S \subset R$ be a multiplicative subset.
+\begin{enumerate}
+\item If $M$ has projective dimension $\leq n$, then $S^{-1}M$ has
+projective dimension $\leq n$ over $S^{-1}R$.
+\item If $R$ has finite global dimension $\leq n$, then
+$S^{-1}R$ has finite global dimension $\leq n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $0 \to P_n \to P_{n - 1} \to \ldots \to P_0 \to M \to 0$
+be a projective resolution. As localization is exact, see
+Proposition \ref{proposition-localization-exact},
+and as each $S^{-1}P_i$ is a projective $S^{-1}R$-module, see
+Lemma \ref{lemma-ascend-properties-modules},
+we see that $0 \to S^{-1}P_n \to \ldots \to S^{-1}P_0 \to S^{-1}M \to 0$
+is a projective resolution of $S^{-1}M$. This proves (1).
+Let $M'$ be an $S^{-1}R$-module.
+Note that $M' = S^{-1}M'$.
+Hence we see that (2) follows from (1).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Regular rings and global dimension}
+\label{section-regular-finite-gl-dim}
+
+\noindent
+We can use the material on rings of finite global dimension
+to give another characterization of regular local rings.
+
+\begin{proposition}
+\label{proposition-regular-finite-gl-dim}
+Let $R$ be a regular local ring of dimension $d$.
+Every finite $R$-module $M$ of depth $e$ has a finite free
+resolution
+$$
+0 \to F_{d-e} \to \ldots \to F_0 \to M \to 0.
+$$
+In particular a regular local ring has global dimension $\leq d$.
+\end{proposition}
+
+\begin{proof}
+The first part holds in view of Lemma \ref{lemma-regular-mcm-free}
+and Lemma \ref{lemma-mcm-resolution}. The last part follows from this
+and Lemma \ref{lemma-finite-gl-dim}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-gl-dim-primes}
+Let $R$ be a Noetherian ring.
+Then $R$ has finite global dimension if and
+only if there exists an integer $n$ such that
+for all maximal ideals $\mathfrak m$ of $R$
+the ring $R_{\mathfrak m}$ has global dimension
+$\leq n$.
+\end{lemma}
+
+\begin{proof}
+We saw in Lemma \ref{lemma-localize-finite-gl-dim}
+that if $R$ has finite global dimension $\leq n$,
+then all the localizations $R_{\mathfrak m}$
+have finite global dimension at most $n$.
+Conversely, suppose that all the $R_{\mathfrak m}$
+have global dimension $\leq n$. Let $M$ be a finite
+$R$-module. Let
+$0 \to K_n \to F_{n-1} \to \ldots \to F_0 \to M \to 0$
+be a resolution with $F_i$ finite free.
+Then $K_n$ is a finite $R$-module.
+According to
+Lemma \ref{lemma-independent-resolution}
+and the assumption all the modules $K_n \otimes_R R_{\mathfrak m}$
+are projective. Hence by
+Lemma \ref{lemma-finite-projective}
+the module $K_n$ is finite projective.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-length-resolution-residue-field}
+Suppose that $R$ is a Noetherian local ring
+with maximal ideal $\mathfrak m$ and
+residue field $\kappa$. In this case
+the projective dimension of $\kappa$ is
+$\geq \dim_\kappa \mathfrak m / \mathfrak m^2$.
+\end{lemma}
+
+\begin{proof}
+Let $x_1 , \ldots, x_n$ be elements of $\mathfrak m$
+whose images in $\mathfrak m / \mathfrak m^2$ form a basis.
+Consider the {\it Koszul complex} on $x_1, \ldots, x_n$.
+This is the complex
+$$
+0 \to \wedge^n R^n \to \wedge^{n-1} R^n \to \wedge^{n-2} R^n \to
+\ldots \to \wedge^i R^n \to \ldots \to R^n \to R
+$$
+with maps given by
+$$
+e_{j_1} \wedge \ldots \wedge e_{j_i}
+\longmapsto
+\sum_{a = 1}^i (-1)^{a + 1} x_{j_a} e_{j_1} \wedge \ldots
+\wedge \hat e_{j_a} \wedge \ldots \wedge e_{j_i}
+$$
+It is easy to see that this is a complex $K_{\bullet}(R, x_{\bullet})$.
+Note that the cokernel of the last map of $K_{\bullet}(R, x_{\bullet})$
+is $\kappa$ by Lemma \ref{lemma-NAK} part (8).
+
+\medskip\noindent
+If $\kappa$ has finite projective dimension $d$, then we can find
+a resolution $F_{\bullet} \to \kappa$ by finite free $R$-modules
+of length $d$
+(Lemma \ref{lemma-what-kind-of-resolutions-Noetherian-local}).
+By Lemma \ref{lemma-add-trivial-complex}
+we may assume all the maps in the complex $F_{\bullet}$
+have the property that $\Im(F_i \to F_{i-1})
+\subset \mathfrak m F_{i-1}$, because removing a trivial
+summand from the resolution can at worst shorten the resolution.
+By Lemma \ref{lemma-compare-resolutions} we can find a map
+of complexes $\alpha : K_{\bullet}(R, x_{\bullet}) \to F_{\bullet}$
+inducing the identity on $\kappa$. We will prove by induction
+that the maps $\alpha_i : \wedge^i R^n = K_i(R, x_{\bullet}) \to F_i$
+have the property that
+$\alpha_i \otimes \kappa : \wedge^i \kappa^n \to F_i \otimes \kappa$
+are injective. This shows that $F_n \not = 0$ and hence $d \geq n$
+as desired.
+
+\medskip\noindent
+The result is clear for $i = 0$ because the composition
+$R \xrightarrow{\alpha_0} F_0 \to \kappa$ is nonzero.
+Note that $F_0$ must have rank $1$ since
+otherwise the map $F_1 \to F_0$ whose cokernel is a single
+copy of $\kappa$ cannot have image contained in $\mathfrak m F_0$.
+
+\medskip\noindent
+Next we check the case $i = 1$ as we feel that it is instructive;
+the reader can skip this as the induction step will deduce the $i = 1$
+case from the case $i = 0$. We saw above that
+$F_0 = R$ and $F_1 \to F_0 = R$ has image $\mathfrak m$.
+We have a commutative diagram
+$$
+\begin{matrix}
+R^n & = & K_1(R, x_{\bullet}) & \to & K_0(R, x_{\bullet}) & = & R \\
+& & \downarrow & & \downarrow & & \downarrow \\
+& & F_1 & \to & F_0 & = & R
+\end{matrix}
+$$
+where the rightmost vertical arrow is given by multiplication
+by a unit. Hence we see that the image of the composition
+$R^n \to F_1 \to F_0 = R$ is also equal to $\mathfrak m$.
+Thus the map $R^n \otimes \kappa \to F_1 \otimes \kappa$
+has to be injective since $\dim_\kappa (\mathfrak m / \mathfrak m^2) = n$.
+
+\medskip\noindent
+Let $i \geq 1$ and assume injectivity of $\alpha_j \otimes \kappa$ has been
+proved for all $j \leq i - 1$. Consider the commutative diagram
+$$
+\begin{matrix}
+\wedge^i R^n & = & K_i(R, x_{\bullet}) & \to & K_{i-1}(R, x_{\bullet})
+& = & \wedge^{i-1} R^n \\
+& & \downarrow & & \downarrow & & \\
+& & F_i & \to & F_{i-1} & &
+\end{matrix}
+$$
+We know that $\wedge^{i-1} \kappa^n \to F_{i-1} \otimes \kappa$
+is injective. This proves that
+$\wedge^{i-1} \kappa^n \otimes_{\kappa} \mathfrak m/\mathfrak m^2
+\to F_{i-1} \otimes \mathfrak m/\mathfrak m^2$ is injective.
+Also, by our choice of the complex, $F_i$ maps into
+$\mathfrak mF_{i-1}$, and similarly for the Koszul complex.
+Hence we get a commutative diagram
+$$
+\begin{matrix}
+\wedge^i \kappa^n & \to &
+\wedge^{i-1} \kappa^n \otimes \mathfrak m/\mathfrak m^2 \\
+\downarrow & & \downarrow \\
+F_i \otimes \kappa & \to & F_{i-1} \otimes \mathfrak m/\mathfrak m^2
+\end{matrix}
+$$
+At this point it suffices to verify the map
+$\wedge^i \kappa^n \to
+\wedge^{i-1} \kappa^n \otimes \mathfrak m/\mathfrak m^2$
+is injective, which can be done by hand.
+\end{proof}
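+
+\noindent
+For completeness we indicate how to check the injectivity of the map
+$\wedge^i \kappa^n \to \wedge^{i-1} \kappa^n \otimes
+\mathfrak m/\mathfrak m^2$ at the end of the proof above.
+For a subset $J = \{j_1 < \ldots < j_i\}$ write
+$e_J = e_{j_1} \wedge \ldots \wedge e_{j_i}$.
+The map sends
+$$
+e_J
+\longmapsto
+\sum\nolimits_{j \in J} \pm\ e_{J \setminus \{j\}} \otimes \bar x_j
+$$
+where $\bar x_1, \ldots, \bar x_n$ is the basis of
+$\mathfrak m/\mathfrak m^2$ given by the images of
+$x_1, \ldots, x_n$. If $\sum_J c_J e_J$ lies in the kernel, then
+for any $J$ and any $j \in J$ the coefficient of
+$e_{J \setminus \{j\}} \otimes \bar x_j$ in the image is $\pm c_J$,
+because the pair $(J \setminus \{j\}, j)$ determines $J$.
+Hence all $c_J$ are zero.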
+
+\begin{lemma}
+\label{lemma-dim-gl-dim}
+Let $R$ be a Noetherian local ring.
+Suppose that the residue field $\kappa$ has finite
+projective dimension $n$ over $R$.
+In this case $\dim(R) \geq n$.
+\end{lemma}
+
+\begin{proof}
+Let $F_{\bullet}$ be a finite resolution of $\kappa$ by finite free
+$R$-modules (Lemma \ref{lemma-what-kind-of-resolutions-Noetherian-local}).
+By Lemma \ref{lemma-add-trivial-complex}
+we may assume all the maps in the complex $F_{\bullet}$
+have the property that $\Im(F_i \to F_{i-1})
+\subset \mathfrak m F_{i-1}$, because removing a trivial
+summand from the resolution can at worst shorten the resolution.
+Say $F_n \not = 0$ and $F_i = 0$ for $i > n$, so that
+the projective dimension of $\kappa$ is $n$.
+By Proposition \ref{proposition-what-exact} we see that
+$\text{depth}_{I(\varphi_n)}(R) \geq n$ since $I(\varphi_n)$
+cannot equal $R$ by our choice of the complex.
+Thus by Lemma \ref{lemma-bound-depth} also $\dim(R) \geq n$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-finite-gl-dim-regular}
+Let $(R, \mathfrak m, \kappa)$ be a Noetherian local ring.
+The following are equivalent
+\begin{enumerate}
+\item $\kappa$ has finite projective dimension as an $R$-module,
+\item $R$ has finite global dimension,
+\item $R$ is a regular local ring.
+\end{enumerate}
+Moreover, in this case the global dimension of $R$ equals
+$\dim(R) = \dim_\kappa(\mathfrak m/\mathfrak m^2)$.
+\end{proposition}
+
+\begin{proof}
+We have (3) $\Rightarrow$ (2) by
+Proposition \ref{proposition-regular-finite-gl-dim}.
+The implication (2) $\Rightarrow$ (1) is trivial.
+Assume (1). By Lemmas \ref{lemma-length-resolution-residue-field}
+and \ref{lemma-dim-gl-dim} we see that
+$\dim(R) \geq \dim_\kappa(\mathfrak m /\mathfrak m^2)$.
+Thus $R$ is regular, see Definition
+\ref{definition-regular-local} and the discussion preceding it.
+Assume the equivalent conditions (1) -- (3) hold.
+By Proposition \ref{proposition-regular-finite-gl-dim}
+the global dimension of $R$ is at most $\dim(R)$ and by
+Lemma \ref{lemma-length-resolution-residue-field}
+it is at least $\dim_\kappa(\mathfrak m/\mathfrak m^2)$.
+Thus the stated equality holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization-of-regular-local-is-regular}
+A Noetherian local ring $R$ is a regular local ring if and only if
+it has finite global dimension. In this case
+$R_{\mathfrak p}$ is a regular local ring for all primes $\mathfrak p$.
+\end{lemma}
+
+\begin{proof}
+By Propositions \ref{proposition-finite-gl-dim-regular} and
+\ref{proposition-regular-finite-gl-dim}
+we see that a Noetherian local ring is a regular local ring if and only if
+it has finite global dimension. Furthermore, any localization
+$R_{\mathfrak p}$ has finite global dimension,
+see Lemma \ref{lemma-localize-finite-gl-dim},
+and hence is a regular local ring.
+\end{proof}
+
+\noindent
+By Lemma \ref{lemma-localization-of-regular-local-is-regular}
+it makes sense to make the following definition,
+because it does not conflict with the earlier
+definition of a regular local ring.
+
+\begin{definition}
+\label{definition-regular}
+A Noetherian ring $R$ is said to be {\it regular}
+if all the localizations $R_{\mathfrak p}$ at primes are
+regular local rings.
+\end{definition}
+
+\noindent
+It is enough to require the local rings at maximal ideals to be regular.
+Note that this is not the same as asking $R$ to have finite
+global dimension, even assuming $R$ is Noetherian. This is
+because there is an example of a regular Noetherian ring
+of infinite Krull dimension, and such a ring cannot have
+finite global dimension.
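+
+\noindent
+For example, $\mathbf{Z}$ is a regular ring: the localization at
+$(0)$ is the field $\mathbf{Q}$, and the localization at a prime
+$(p)$ is a discrete valuation ring, hence a regular local ring of
+dimension $1$. More generally, any Dedekind domain is regular.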
+
+\begin{lemma}
+\label{lemma-finite-gl-dim-finite-dim-regular}
+Let $R$ be a Noetherian ring.
+The following are equivalent:
+\begin{enumerate}
+\item $R$ has finite global dimension $n$,
+\item $R$ is a regular ring of dimension $n$,
+\item there exists an integer $n$ such that
+all the localizations $R_{\mathfrak m}$ at maximal ideals
+are regular of dimension $\leq n$ with equality for at least
+one $\mathfrak m$, and
+\item there exists an integer $n$ such that
+all the localizations $R_{\mathfrak p}$ at prime ideals
+are regular of dimension $\leq n$ with equality for at least
+one $\mathfrak p$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from the discussion above. More precisely, it follows
+by combining Definition \ref{definition-regular} with
+Lemma \ref{lemma-finite-gl-dim-primes}
+and Proposition \ref{proposition-finite-gl-dim-regular}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-under-regular}
+Let $R \to S$ be a local homomorphism of local Noetherian rings.
+Assume that $R \to S$ is flat and that $S$ is regular.
+Then $R$ is regular.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak m \subset R$ be the maximal ideal
+and let $\kappa = R/\mathfrak m$ be the residue field.
+Let $d = \dim S$.
+Choose any resolution $F_\bullet \to \kappa$
+with each $F_i$ a finite free $R$-module. Set
+$K_d = \Ker(F_{d - 1} \to F_{d - 2})$.
+By flatness of $R \to S$ the complex
+$0 \to K_d \otimes_R S \to F_{d - 1} \otimes_R S \to \ldots
+\to F_0 \otimes_R S \to \kappa \otimes_R S \to 0$
+is still exact. Because the global dimension of $S$
+is $d$, see Proposition \ref{proposition-regular-finite-gl-dim},
+we see that $K_d \otimes_R S$ is a finite free $S$-module
+(see also Lemma \ref{lemma-independent-resolution}).
+By Lemma \ref{lemma-finite-projective-descends} we see
+that $K_d$ is a finite free $R$-module.
+Hence $\kappa$ has finite projective dimension and $R$ is regular by
+Proposition \ref{proposition-finite-gl-dim-regular}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Auslander-Buchsbaum}
+\label{section-Auslander-Buchsbaum}
+
+\noindent
+The following result can be found in \cite{Auslander-Buchsbaum}.
+
+\begin{proposition}
+\label{proposition-Auslander-Buchsbaum}
+Let $R$ be a Noetherian local ring. Let $M$ be a nonzero finite $R$-module
+which has finite projective dimension $\text{pd}_R(M)$. Then we have
+$$
+\text{depth}(R) = \text{pd}_R(M) + \text{depth}(M)
+$$
+\end{proposition}
+
+\begin{proof}
+We prove this by induction on $\text{depth}(M)$. The most interesting
+case is the case $\text{depth}(M) = 0$. In this case, let
+$$
+0 \to R^{n_e} \to R^{n_{e-1}} \to \ldots \to R^{n_0} \to M \to 0
+$$
+be a minimal finite free resolution, so $e = \text{pd}_R(M)$.
+By Lemma \ref{lemma-add-trivial-complex} we may assume all matrix
+coefficients of the maps in the complex are contained in the maximal
+ideal of $R$. Then on the one hand, by
+Proposition \ref{proposition-what-exact} we see that
+$\text{depth}(R) \geq e$. On the other hand, breaking the long
+exact sequence into short exact sequences
+\begin{align*}
+0 \to R^{n_e} \to R^{n_{e - 1}} \to K_{e - 2} \to 0,\\
+0 \to K_{e - 2} \to R^{n_{e - 2}} \to K_{e - 3} \to 0,\\
+\ldots,\\
+0 \to K_0 \to R^{n_0} \to M \to 0
+\end{align*}
+we see, using Lemma \ref{lemma-depth-in-ses}, that
+\begin{align*}
+\text{depth}(K_{e - 2}) \geq \text{depth}(R) - 1,\\
+\text{depth}(K_{e - 3}) \geq \text{depth}(R) - 2,\\
+\ldots,\\
+\text{depth}(K_0) \geq \text{depth}(R) - (e - 1),\\
+\text{depth}(M) \geq \text{depth}(R) - e
+\end{align*}
+and since $\text{depth}(M) = 0$ we conclude $\text{depth}(R) \leq e$.
+This finishes the proof of the case $\text{depth}(M) = 0$.
+
+\medskip\noindent
+Induction step. If $\text{depth}(M) > 0$, then we pick $x \in \mathfrak m$
+which is a nonzerodivisor on both $M$ and $R$. This is possible, because
+either $\text{pd}_R(M) > 0$ and $\text{depth}(R) > 0$ by the aforementioned
+Proposition \ref{proposition-what-exact} or $\text{pd}_R(M) = 0$ in which
+case $M$ is finite free hence also $\text{depth}(R) = \text{depth}(M) > 0$.
+Thus $\text{depth}(R \oplus M) > 0$ by Lemma \ref{lemma-depth-in-ses}
+(for example) and we can find an $x \in \mathfrak m$ which is a nonzerodivisor
+on both $R$ and $M$. Let
+$$
+0 \to R^{n_e} \to R^{n_{e-1}} \to \ldots \to R^{n_0} \to M \to 0
+$$
+be a minimal resolution as above. An application of the snake lemma
+shows that
+$$
+0 \to (R/xR)^{n_e} \to (R/xR)^{n_{e-1}} \to \ldots \to (R/xR)^{n_0} \to
+M/xM \to 0
+$$
+is a minimal resolution too. Thus $\text{pd}_R(M) = \text{pd}_{R/xR}(M/xM)$.
+By Lemma \ref{lemma-depth-drops-by-one} we have
+$\text{depth}(R/xR) = \text{depth}(R) - 1$ and
+$\text{depth}(M/xM) = \text{depth}(M) - 1$.
+Up to now all depths have been depths as $R$-modules, but we observe that
+$\text{depth}_R(M/xM) = \text{depth}_{R/xR}(M/xM)$ and similarly for $R/xR$.
+By induction hypothesis we see that the
+Auslander-Buchsbaum formula holds for $M/xM$ over $R/xR$. Since the
+depths of both $R/xR$ and $M/xM$ have decreased by one and the projective
+dimension has not changed we conclude.
+\end{proof}
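+
+\noindent
+As an illustration of the formula, take $R = k[[x, y]]$ and
+$M = R/(x)$. Here $\text{depth}(R) = 2$, the short exact sequence
+$$
+0 \to R \xrightarrow{x} R \to M \to 0
+$$
+shows $\text{pd}_R(M) = 1$, and $M \cong k[[y]]$ has depth $1$.
+Indeed $2 = 1 + 1$.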
+
+
+
+
+
+
+
+
+
+\section{Homomorphisms and dimension}
+\label{section-homomorphism-dimension}
+
+\noindent
+This section contains a collection of easy results relating
+dimensions of rings when there are maps between them.
+
+\begin{lemma}
+\label{lemma-dimension-going-up}
+Suppose $R \to S$ is a ring map satisfying either going up, see
+Definition \ref{definition-going-up-down}, or going down, see
+Definition \ref{definition-going-up-down}.
+Assume in addition that $\Spec(S) \to \Spec(R)$
+is surjective. Then $\dim(R) \leq \dim(S)$.
+\end{lemma}
+
+\begin{proof}
+Assume going up.
+Take any chain $\mathfrak p_0 \subset \mathfrak p_1 \subset \ldots
+\subset \mathfrak p_e$ of prime ideals in $R$.
+By surjectivity we may choose a prime $\mathfrak q_0$ mapping
+to $\mathfrak p_0$. By going up we may extend this to a chain
+of length $e$ of primes $\mathfrak q_i$ lying over
+$\mathfrak p_i$. Thus $\dim(S) \geq \dim(R)$.
+The case of going down is exactly the same.
+See also Topology, Lemma \ref{topology-lemma-dimension-specializations-lift}
+for a purely topological version.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-going-up-maximal-on-top}
+Suppose that $R \to S$ is a ring map with the going up property,
+see Definition \ref{definition-going-up-down}. If
+$\mathfrak q \subset S$ is a maximal ideal, then the inverse
+image of $\mathfrak q$ in $R$ is a maximal ideal too.
+\end{lemma}
+
+\begin{proof}
+Trivial.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-dim-up}
+Suppose that $R \to S$ is a ring map such that $S$ is integral over $R$.
+Then $\dim (R) \geq \dim(S)$, and every closed point of $\Spec(S)$
+maps to a closed point of $\Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemmas \ref{lemma-integral-no-inclusion} and
+\ref{lemma-going-up-maximal-on-top}
+and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-sub-dim-equal}
+Suppose $R \subset S$ and $S$ is integral over $R$.
+Then $\dim(R) = \dim(S)$.
+\end{lemma}
+
+\begin{proof}
+This is a combination of Lemmas
+\ref{lemma-integral-going-up},
+\ref{lemma-integral-overring-surjective},
+\ref{lemma-dimension-going-up}, and
+\ref{lemma-integral-dim-up}.
+\end{proof}
+
+\begin{definition}
+\label{definition-fibre}
+Suppose that $R \to S$ is a ring map.
+Let $\mathfrak q \subset S$ be a prime lying
+over the prime $\mathfrak p$ of $R$.
+The {\it local ring of the fibre at $\mathfrak q$}
+is the local ring
+$$
+S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}
+=
+(S/\mathfrak pS)_{\mathfrak q}
+=
+(S \otimes_R \kappa(\mathfrak p))_{\mathfrak q}
+$$
+\end{definition}
+
+\begin{lemma}
+\label{lemma-dimension-base-fibre-total}
+Let $R \to S$ be a homomorphism of Noetherian rings.
+Let $\mathfrak q \subset S$ be a prime lying
+over the prime $\mathfrak p$. Then
+$$
+\dim(S_{\mathfrak q})
+\leq
+\dim(R_{\mathfrak p})
++
+\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}).
+$$
+\end{lemma}
+
+\begin{proof}
+We use the characterization of dimension of
+Proposition \ref{proposition-dimension}.
+Let $x_1, \ldots, x_d$ be elements of $\mathfrak p$
+generating an ideal of definition of $R_{\mathfrak p}$ with
+$d = \dim(R_{\mathfrak p})$.
+Let $y_1, \ldots, y_e$ be elements of $\mathfrak q$
+generating an ideal of definition of
+$S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$
+with $e = \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q})$.
+It is clear that $S_{\mathfrak q}/(x_1, \ldots, x_d, y_1, \ldots, y_e)$
+has a nilpotent maximal ideal. Hence
+$x_1, \ldots, x_d, y_1, \ldots, y_e$ generate an ideal of definition
+of $S_{\mathfrak q}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-base-fibre-equals-total}
+Let $R \to S$ be a homomorphism of Noetherian rings.
+Let $\mathfrak q \subset S$ be a prime lying
+over the prime $\mathfrak p$. Assume the going down property holds
+for $R \to S$ (for example if $R \to S$ is flat, see
+Lemma \ref{lemma-flat-going-down}). Then
+$$
+\dim(S_{\mathfrak q})
+=
+\dim(R_{\mathfrak p})
++
+\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}).
+$$
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-dimension-base-fibre-total}
+we have an inequality
+$\dim(S_{\mathfrak q}) \leq
+\dim(R_{\mathfrak p}) + \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q})$.
+To get equality, choose a chain of primes
+$\mathfrak pS \subset \mathfrak q_0 \subset \mathfrak q_1 \subset \ldots
+\subset \mathfrak q_d = \mathfrak q$ with
+$d = \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q})$.
+On the other hand, choose a chain of primes
+$\mathfrak p_0 \subset \mathfrak p_1 \subset \ldots \subset \mathfrak p_e
+= \mathfrak p$ with $e = \dim(R_{\mathfrak p})$.
+By the going down theorem we may choose
+$\mathfrak q_{-1} \subset \mathfrak q_0$ lying over
+$\mathfrak p_{e-1}$. And then we may choose
+$\mathfrak q_{-2} \subset \mathfrak q_{-1}$ lying over
+$\mathfrak p_{e-2}$. Inductively we keep going until we
+get a chain
+$\mathfrak q_{-e} \subset \ldots \subset \mathfrak q_d$
+of length $e + d$.
+\end{proof}
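+
+\noindent
+For example, let $k$ be a field, $R = k[x]$, $S = k[x, y]$,
+and let $\mathfrak q = (x, y)$ lie over $\mathfrak p = (x)$.
+The map $R \to S$ is flat, $\dim(R_{\mathfrak p}) = 1$, and the
+fibre ring $S \otimes_R \kappa(\mathfrak p) = k[y]$ gives
+$\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) = 1$.
+The formula then says $\dim(S_{\mathfrak q}) = 1 + 1 = 2$,
+which one can also see directly as $\mathfrak q$ is generated by
+the regular sequence $x, y$.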
+
+\begin{lemma}
+\label{lemma-flat-over-regular-with-regular-fibre}
+Let $R \to S$ be a local homomorphism of local Noetherian rings.
+Assume
+\begin{enumerate}
+\item $R$ is regular,
+\item $S/\mathfrak m_RS$ is regular, and
+\item $R \to S$ is flat.
+\end{enumerate}
+Then $S$ is regular.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-dimension-base-fibre-equals-total}
+we have
+$\dim(S) = \dim(R) + \dim(S/\mathfrak m_RS)$.
+Pick generators $x_1, \ldots, x_d \in \mathfrak m_R$ with $d = \dim(R)$,
+and pick $y_1, \ldots, y_e \in \mathfrak m_S$
+which generate the maximal ideal of $S/\mathfrak m_RS$ with
+$e = \dim(S/\mathfrak m_RS)$. Then we see that
+$x_1, \ldots, x_d, y_1, \ldots, y_e$ are elements which generate
+the maximal ideal of $S$ and $e + d = \dim(S)$. Hence $S$ is regular.
+\end{proof}
+
+\noindent
+The lemma below will later be used to show that rings of finite type over
+a field are Cohen-Macaulay if and only if they are quasi-finite flat over
+a polynomial ring. It is a partial converse to
+Lemma \ref{lemma-CM-over-regular-flat}.
+
+\begin{lemma}
+\label{lemma-finite-flat-over-regular-CM}
+Let $R \to S$ be a local homomorphism of Noetherian local rings.
+Assume $R$ is Cohen-Macaulay.
+If $S$ is finite flat over $R$, or if $S$ is flat over $R$ and
+$\dim(S) \leq \dim(R)$, then $S$ is Cohen-Macaulay and $\dim(R) = \dim(S)$.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, \ldots, x_d \in \mathfrak m_R$ be a regular sequence
+of length $d = \dim(R)$. By Lemma \ref{lemma-flat-increases-depth}
+this maps to a regular sequence in $S$, so that
+$\text{depth}(S) \geq d$ and hence $\dim(S) \geq d$.
+Thus $S$ is Cohen-Macaulay with $\dim(S) = d = \dim(R)$
+as soon as $\dim(S) \leq d$. This inequality holds if $S$ is
+finite flat over $R$ by Lemma \ref{lemma-integral-sub-dim-equal}.
+And in the second case we assumed it.
+
+
+
+
+
+
+
+
+\section{The dimension formula}
+\label{section-dimension-formula}
+
+\noindent
+Recall the definitions of catenary (Definition \ref{definition-catenary})
+and universally catenary (Definition \ref{definition-universally-catenary}).
+
+\begin{lemma}
+\label{lemma-dimension-formula}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q$ be a prime of $S$ lying over the prime $\mathfrak p$ of $R$.
+Assume that
+\begin{enumerate}
+\item $R$ is Noetherian,
+\item $R \to S$ is of finite type,
+\item $R$, $S$ are domains, and
+\item $R \subset S$.
+\end{enumerate}
+Then we have
+$$
+\text{height}(\mathfrak q)
+\leq
+\text{height}(\mathfrak p) + \text{trdeg}_R(S)
+- \text{trdeg}_{\kappa(\mathfrak p)} \kappa(\mathfrak q)
+$$
+with equality if $R$ is universally catenary.
+\end{lemma}
+
+\begin{proof}
+Suppose that $R \subset S' \subset S$ is a finitely generated $R$-subalgebra
+of $S$. In this case set $\mathfrak q' = S' \cap \mathfrak q$.
+The lemma for the ring maps $R \to S'$ and $S' \to S$ implies the
+lemma for $R \to S$ by additivity of transcendence degree in towers
+of fields (Fields, Lemma \ref{fields-lemma-transcendence-degree-tower}).
+Hence we can use induction on the number of generators
+of $S$ over $R$ and reduce to the case where $S$ is generated by
+one element over $R$.
+
+\medskip\noindent
+Case I: $S = R[x]$ is a polynomial algebra over $R$.
+In this case we have $\text{trdeg}_R(S) = 1$.
+Also $R \to S$ is flat and hence
+$$
+\dim(S_{\mathfrak q}) =
+\dim(R_{\mathfrak p}) + \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q})
+$$
+see Lemma \ref{lemma-dimension-base-fibre-equals-total}.
+Let $\mathfrak r = \mathfrak pS$. Then
+$\text{trdeg}_{\kappa(\mathfrak p)} \kappa(\mathfrak q) = 1$
+is equivalent to $\mathfrak q = \mathfrak r$, and implies that
+$\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) = 0$.
+In the same vein $\text{trdeg}_{\kappa(\mathfrak p)} \kappa(\mathfrak q) = 0$
+is equivalent to having a strict inclusion
+$\mathfrak r \subset \mathfrak q$, which implies that
+$\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) = 1$.
+Thus we are done with case I with equality in every instance.
+
+\medskip\noindent
+Case II: $S = R[x]/\mathfrak n$ with $\mathfrak n \not = 0$.
+In this case we have $\text{trdeg}_R(S) = 0$.
+Denote $\mathfrak q' \subset R[x]$ the prime corresponding to $\mathfrak q$.
+Thus we have
+$$
+S_{\mathfrak q} = (R[x])_{\mathfrak q'}/\mathfrak n(R[x])_{\mathfrak q'}
+$$
+By the previous case we have
+$\dim((R[x])_{\mathfrak q'}) =
+\dim(R_{\mathfrak p}) + 1
+- \text{trdeg}_{\kappa(\mathfrak p)} \kappa(\mathfrak q)$.
+Since $\mathfrak n \not = 0$ we see that the dimension of
+$S_{\mathfrak q}$ decreases by at least one, see
+Lemma \ref{lemma-one-equation},
+which proves the inequality of the lemma.
+To see the equality in case $R$ is universally catenary note that
+$\mathfrak n \subset R[x]$ is a height one prime as it corresponds
+to a nonzero prime in $F[x]$ where $F$ is the fraction field of $R$.
+Hence any maximal chain of primes in
+$S_\mathfrak q = R[x]_{\mathfrak q'}/\mathfrak nR[x]_{\mathfrak q'}$
+corresponds to a maximal chain of primes
+with length 1 greater between $\mathfrak q'$ and $(0)$ in $R[x]$.
+If $R$ is universally catenary these all have the same length equal to
+the height of $\mathfrak q'$. This proves that
+$\dim(S_\mathfrak q) = \dim(R[x]_{\mathfrak q'}) - 1$
+and this implies equality holds as desired.
+\end{proof}
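+
+\noindent
+As an illustration, let $k$ be a field, $R = k[x]$ and
+$S = k[x, y]/(y^2 - x^3)$, an extension of domains with
+$\text{trdeg}_R(S) = 0$ since $y$ is algebraic over the fraction
+field of $R$. Take $\mathfrak q = (x, y)$ lying over
+$\mathfrak p = (x)$. Then
+$\kappa(\mathfrak p) = \kappa(\mathfrak q) = k$ and the formula
+gives
+$$
+\text{height}(\mathfrak q) \leq 1 + 0 - 0 = 1,
+$$
+with equality since $R$ is universally catenary; indeed $S$ is a
+one dimensional domain and $\mathfrak q \not = (0)$.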
+
+\noindent
+The following lemma says that generically finite maps
+tend to be quasi-finite in codimension $1$.
+
+\begin{lemma}
+\label{lemma-finite-in-codim-1}
+Let $A \to B$ be a ring map.
+Assume
+\begin{enumerate}
+\item $A \subset B$ is an extension of domains,
+\item the induced extension of fraction fields is finite,
+\item $A$ is Noetherian, and
+\item $A \to B$ is of finite type.
+\end{enumerate}
+Let $\mathfrak p \subset A$ be a prime of height $1$.
+Then there are at most finitely many primes of $B$
+lying over $\mathfrak p$ and they all have height $1$.
+\end{lemma}
+
+\begin{proof}
+By the dimension formula (Lemma \ref{lemma-dimension-formula})
+for any prime $\mathfrak q$ lying over $\mathfrak p$ we have
+$$
+\dim(B_{\mathfrak q}) \leq
+\dim(A_{\mathfrak p}) - \text{trdeg}_{\kappa(\mathfrak p)} \kappa(\mathfrak q).
+$$
+As the domain $B_\mathfrak q$ has at least $2$ prime ideals
+(note that $\mathfrak q \not = (0)$ because it lies over the
+nonzero prime $\mathfrak p$) we see that
+$\dim(B_{\mathfrak q}) \geq 1$. We conclude that
+$\dim(B_{\mathfrak q}) = 1$ and that the extension
+$\kappa(\mathfrak p) \subset \kappa(\mathfrak q)$ is algebraic.
+Hence $\mathfrak q$ defines a closed point of its fibre
+$\Spec(B \otimes_A \kappa(\mathfrak p))$, see
+Lemma \ref{lemma-finite-residue-extension-closed}.
+Since $B \otimes_A \kappa(\mathfrak p)$ is a Noetherian ring
+the fibre $\Spec(B \otimes_A \kappa(\mathfrak p))$
+is a Noetherian topological space, see
+Lemma \ref{lemma-Noetherian-topology}.
+A Noetherian topological space consisting of closed points
+is finite, see for example
+Topology, Lemma \ref{topology-lemma-Noetherian}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Dimension of finite type algebras over fields}
+\label{section-dimension-finite-type-algebras}
+
+\noindent
+In this section we compute the dimension of a polynomial ring over
+a field. We also prove that the dimension of a finite type
+domain over a field is the dimension of its local rings at maximal
+ideals. We will establish the connection with the transcendence
+degree over the ground field in
+Section \ref{section-dimension-finite-type-algebras-reprise}.
+
+\begin{lemma}
+\label{lemma-dim-affine-space}
+Let $k$ be a field.
+Let $\mathfrak m$ be a maximal ideal in $k[x_1, \ldots, x_n]$.
+The ideal $\mathfrak m$ is generated by $n$ elements.
+The dimension of $k[x_1, \ldots, x_n]_{\mathfrak m}$ is $n$.
+Hence $k[x_1, \ldots, x_n]_{\mathfrak m}$ is a regular local
+ring of dimension $n$.
+\end{lemma}
+
+\begin{proof}
+By the Hilbert Nullstellensatz
+(Theorem \ref{theorem-nullstellensatz})
+we know the residue field $\kappa = \kappa(\mathfrak m)$ is
+a finite extension of $k$. Denote $\alpha_i \in \kappa$ the
+image of $x_i$. Denote $\kappa_i = k(\alpha_1, \ldots, \alpha_i)
+\subset \kappa$, $i = 1, \ldots, n$ and $\kappa_0 = k$.
+Note that $\kappa_i = k[\alpha_1, \ldots, \alpha_i]$
+by field theory. Define inductively elements
+$f_i \in \mathfrak m \cap k[x_1, \ldots, x_i]$
+as follows: Let $P_i(T) \in \kappa_{i-1}[T]$
+be the monic minimal polynomial of $\alpha_i $ over $\kappa_{i-1}$.
+Let $Q_i(T) \in k[x_1, \ldots, x_{i-1}][T]$ be a monic lift of $P_i(T)$
+(of the same degree). Set $f_i = Q_i(x_i)$.
+Note that if $d_i = \deg_T(P_i) = \deg_T(Q_i) = \deg_{x_i}(f_i)$
+then $d_1d_2\ldots d_i = [\kappa_i : k]$ by
+Fields, Lemmas \ref{fields-lemma-multiplicativity-degrees} and
+\ref{fields-lemma-degree-minimal-polynomial}.
+
+\medskip\noindent
+We claim that for all $i = 0, 1, \ldots, n$ there is an
+isomorphism
+$$
+\psi_i : k[x_1, \ldots, x_i] /(f_1, \ldots, f_i) \cong \kappa_i.
+$$
+By construction the composition
+$k[x_1, \ldots, x_i] \to k[x_1, \ldots, x_n] \to \kappa$
+is surjective onto $\kappa_i$ and $f_1, \ldots, f_i$ are
+in the kernel. This gives a surjective homomorphism.
+We prove $\psi_i$ is injective by induction. It is clear for $i = 0$.
+Given the statement for $i$ we prove it for $i + 1$.
+The ring extension $k[x_1, \ldots, x_i]/(f_1, \ldots, f_i) \to
+k[x_1, \ldots, x_{i + 1}]/(f_1, \ldots, f_{i + 1})$
+is generated by $1$ element over a field and one
+irreducible equation. By elementary field theory
+$k[x_1, \ldots, x_{i + 1}]/(f_1, \ldots, f_{i + 1})$
+is a field, and hence $\psi_i$ is injective.
+
+\medskip\noindent
+This implies that $\mathfrak m = (f_1, \ldots, f_n)$.
+Moreover, we also conclude that
+$$
+k[x_1, \ldots, x_n]/(f_1, \ldots, f_i)
+\cong
+\kappa_i[x_{i + 1}, \ldots, x_n].
+$$
+Hence $(f_1, \ldots, f_i)$ is a prime ideal. Thus
+$$
+(0) \subset (f_1) \subset (f_1, f_2) \subset \ldots \subset
+(f_1, \ldots, f_n) = \mathfrak m
+$$
+is a chain of primes of length $n$. The lemma follows.
+\end{proof}
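To make the inductive construction of the $f_i$ concrete, here is a small worked instance. This example is ours, not part of the original text.

```latex
% Illustrative instance of the construction in the proof (hypothetical
% example, not from the text).
\medskip\noindent
For example, take $k = \mathbf{Q}$, $n = 2$ and
$\mathfrak m = (x_1^2 + 1, x_2 - x_1)$. Then $\kappa = \mathbf{Q}(i)$
with $\alpha_1 = \alpha_2 = i$, so $\kappa_1 = \kappa_2 = \mathbf{Q}(i)$.
The minimal polynomials are $P_1(T) = T^2 + 1$ over $\kappa_0 = \mathbf{Q}$
and $P_2(T) = T - i$ over $\kappa_1$, with monic lift $Q_2(T) = T - x_1$,
giving
$$
f_1 = x_1^2 + 1, \quad f_2 = x_2 - x_1.
$$
Here $d_1 d_2 = 2 \cdot 1 = [\mathbf{Q}(i) : \mathbf{Q}]$, and the chain
$(0) \subset (f_1) \subset (f_1, f_2) = \mathfrak m$ exhibits
$\dim \mathbf{Q}[x_1, x_2]_{\mathfrak m} = 2$.
```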
+
+\begin{proposition}
+\label{proposition-finite-gl-dim-polynomial-ring}
+A polynomial algebra in $n$ variables over a field is a regular ring.
+It has global dimension $n$. All localizations at maximal ideals
+are regular local rings of dimension $n$.
+\end{proposition}
+
+\begin{proof}
+By Lemma \ref{lemma-dim-affine-space}
+all localizations $k[x_1, \ldots, x_n]_{\mathfrak m}$
+at maximal ideals are regular local rings of dimension $n$. Hence
+we conclude by Lemma \ref{lemma-finite-gl-dim-finite-dim-regular}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-height-polynomial-ring}
+Let $k$ be a field.
+Let $\mathfrak p \subset \mathfrak q \subset k[x_1, \ldots, x_n]$
+be a pair of primes.
+Any maximal chain of primes between $\mathfrak p$ and $\mathfrak q$
+has length $\text{height}(\mathfrak q) - \text{height}(\mathfrak p)$.
+\end{lemma}
+
+\begin{proof}
+By
+Proposition \ref{proposition-finite-gl-dim-polynomial-ring}
+any local ring of $k[x_1, \ldots, x_n]$ is regular.
+Hence all local rings are Cohen-Macaulay, see
+Lemma \ref{lemma-regular-ring-CM}.
+The local rings at maximal ideals have dimension $n$ hence
+every maximal chain of primes in $k[x_1, \ldots, x_n]$
+has length $n$, see
+Lemma \ref{lemma-maximal-chain-CM}.
+Hence every maximal chain of primes between $(0)$ and $\mathfrak p$
+has length $\text{height}(\mathfrak p)$, see
+Lemma \ref{lemma-CM-dim-formula}
+for example.
+Putting these together leads to the assertion of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-spell-it-out}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra which is an integral domain.
+Then $\dim(S) = \dim(S_{\mathfrak m})$ for any maximal
+ideal $\mathfrak m$ of $S$. In words: every maximal chain
+of primes has length equal to the dimension of $S$.
+\end{lemma}
+
+\begin{proof}
+Write $S = k[x_1, \ldots, x_n]/\mathfrak p$.
+By Proposition \ref{proposition-finite-gl-dim-polynomial-ring} and
+Lemma \ref{lemma-dimension-height-polynomial-ring}
+all the maximal chains of primes in $S$ (which necessarily end
+with a maximal ideal) have length $n - \text{height}(\mathfrak p)$.
+Thus this number is the dimension of $S$ and of $S_{\mathfrak m}$
+for any maximal ideal $\mathfrak m$ of $S$.
+\end{proof}
+
+\noindent
+Recall that we defined the
+dimension $\dim_x(X)$ of a topological space $X$ at a point $x$
+in Topology, Definition \ref{topology-definition-Krull}.
+
+\begin{lemma}
+\label{lemma-dimension-at-a-point-finite-type-over-field}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Let $X = \Spec(S)$.
+Let $\mathfrak p \subset S$ be a prime ideal and let
+$x \in X$ be the corresponding point.
+The following numbers are equal
+\begin{enumerate}
+\item $\dim_x(X)$,
+\item $\max \dim(Z)$ where the maximum is over those
+irreducible components $Z$ of $X$ passing through $x$, and
+\item $\min \dim(S_{\mathfrak m})$ where the minimum
+is over maximal ideals $\mathfrak m$ with
+$\mathfrak p \subset \mathfrak m$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $X = \bigcup_{i \in I} Z_i$ be the decomposition of $X$ into
+its irreducible components. There are finitely many of
+them (see
+Lemmas \ref{lemma-obvious-Noetherian} and \ref{lemma-Noetherian-topology}).
+Let $I' = \{i \mid x \in Z_i\}$, and let
+$T = \bigcup_{i \not \in I'} Z_i$. Then $U = X \setminus T$
+is an open subset of $X$ containing the point $x$.
+The number (2) is $\max_{i \in I'} \dim(Z_i)$.
+For any open $W \subset U$ with $x \in W$
+the irreducible components of $W$ are the irreducible sets
+$W_i = Z_i \cap W$ for $i \in I'$ and $x$ is contained
+in each of these.
+Note that each $W_i$, $i \in I'$ contains a closed point because
+$X$ is Jacobson, see Section \ref{section-ring-jacobson}.
+Since $W_i \subset Z_i$ we have $\dim(W_i) \leq \dim(Z_i)$.
+The existence of a closed point implies, via Lemma
+\ref{lemma-dimension-spell-it-out}, that there is a chain of
+irreducible closed subsets of length equal to $\dim(Z_i)$ in the open $W_i$.
+Thus $\dim(W_i) = \dim(Z_i)$ for any $i \in I'$. Hence $\dim(W)$
+is equal to the number (2). This proves that (1) $ = $ (2).
+
+\medskip\noindent
+Let $\mathfrak m \supset \mathfrak p$ be any maximal ideal
+containing $\mathfrak p$. Let $x_0 \in X$ be the corresponding
+point. First of all, $x_0$ is contained in all the
+irreducible components $Z_i$, $i \in I'$. Let $\mathfrak q_i$
+denote the minimal primes of $S$ corresponding to the
+irreducible components $Z_i$. For each $i$ such that
+$x_0 \in Z_i$ (which is equivalent to $\mathfrak m \supset \mathfrak q_i$)
+we have a surjection
+$$
+S_{\mathfrak m} \longrightarrow
+S_\mathfrak m/\mathfrak q_i S_\mathfrak m =(S/\mathfrak q_i)_{\mathfrak m}
+$$
+Moreover, the primes $\mathfrak q_i S_\mathfrak m$ so obtained
+exhaust the minimal
+primes of the Noetherian local ring $S_{\mathfrak m}$, see
+Lemma \ref{lemma-irreducible-components-containing-x}.
+We conclude, using Lemma \ref{lemma-dimension-spell-it-out},
+that the dimension of $S_{\mathfrak m}$ is the
+maximum of the dimensions of the $Z_i$ passing through $x_0$.
+To finish the proof of the lemma it suffices to show that
+we can choose $x_0$ such that $x_0 \in Z_i \Rightarrow i \in I'$.
+Because $S$ is Jacobson (as we saw above)
+it is enough to show that $V(\mathfrak p) \setminus T$
+(with $T$ as above) is nonempty. And this is clear since it
+contains the point $x$ (i.e., $\mathfrak p$).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-closed-point-finite-type-field}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Let $X = \Spec(S)$.
+Let $\mathfrak m \subset S$ be a maximal ideal and let
+$x \in X$ be the associated closed point.
+Then $\dim_x(X) = \dim(S_{\mathfrak m})$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Lemma \ref{lemma-dimension-at-a-point-finite-type-over-field}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-disjoint-decomposition-CM-algebra}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.

+Assume that $S$ is Cohen-Macaulay.
+Then $\Spec(S) = \coprod T_d$ is a finite disjoint union of
+open and closed subsets $T_d$ with $T_d$ equidimensional
+(see Topology, Definition \ref{topology-definition-equidimensional})
+of dimension $d$. Equivalently, $S$ is a product of rings
+$S_d$, $d = 0, \ldots, \dim(S)$ such that every maximal ideal
+$\mathfrak m$ of $S_d$ has height $d$.
+\end{lemma}
+
+\begin{proof}
+The equivalence of the two statements follows from
+Lemma \ref{lemma-disjoint-implies-product}.
+Let $\mathfrak m \subset S$ be a maximal ideal.
+Every maximal chain of primes in $S_{\mathfrak m}$ has
+the same length equal to $\dim(S_{\mathfrak m})$, see
+Lemma \ref{lemma-maximal-chain-CM}. Hence, the dimension of the irreducible
+components passing through the point corresponding to $\mathfrak m$
+all have dimension equal to $\dim(S_{\mathfrak m})$, see
+Lemma \ref{lemma-dimension-spell-it-out}.
+Since $\Spec(S)$ is a Jacobson topological space
+the intersection
+of any two irreducible components of it contains a closed point if nonempty,
+see
+Lemmas \ref{lemma-finite-type-field-Jacobson} and
+\ref{lemma-jacobson}.
+Thus we have shown that any two irreducible components
+that meet have the same dimension. The lemma follows
+easily from this, and the fact that $\Spec(S)$
+has a finite number of irreducible components (see
+Lemmas \ref{lemma-obvious-Noetherian} and \ref{lemma-Noetherian-topology}).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Noether normalization}
+\label{section-Noether-normalization}
+
+\noindent
+In this section we prove variants of the Noether normalization lemma.
+The key ingredient we will use is contained in the following two lemmas.
+
+\begin{lemma}
+\label{lemma-helper}
+Let $n \in \mathbf{N}$.
+Let $N$ be a finite nonempty
+set of multi-indices $\nu = (\nu_1, \ldots, \nu_n)$.
+Given $e = (e_1, \ldots, e_n)$ we set $e \cdot \nu = \sum e_i\nu_i$.
+Then for $e_1 \gg e_2 \gg \ldots \gg e_{n-1} \gg e_n$ we have:
+If $\nu, \nu' \in N$ then
+$$
+(e \cdot \nu = e \cdot \nu')
+\Leftrightarrow
+(\nu = \nu')
+$$
+\end{lemma}
+
+\begin{proof}
+Say $N = \{\nu_j\}$ with $\nu_j = (\nu_{j1}, \ldots, \nu_{jn})$.
+Let $A_i = \max_j \nu_{ji} - \min_j \nu_{ji}$. If for each $i$ we have
+$e_{i - 1} > A_ie_i + A_{i + 1}e_{i + 1} + \ldots + A_ne_n$ then
+the lemma holds. For suppose that $e \cdot (\nu - \nu') = 0$. Then for
+$n \ge 2$,
+$$
+e_1(\nu_1 - \nu'_1) = \sum\nolimits_{i = 2}^n e_i(\nu'_i - \nu_i).
+$$
+We may assume that $(\nu_1 - \nu'_1) \ge 0$. If $(\nu_1 - \nu'_1) > 0$, then
+$$
+e_1(\nu_1 - \nu'_1) \ge e_1 >
+A_2e_2 + \ldots + A_ne_n \ge
+\sum\nolimits_{i = 2}^n e_i|\nu'_i - \nu_i| \ge
+\sum\nolimits_{i = 2}^n e_i(\nu'_i - \nu_i).
+$$
+This contradiction implies that $\nu'_1 = \nu_1$. By
+induction, $\nu'_i = \nu_i$ for $2 \le i \le n$.
+\end{proof}
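The weight construction in the proof is easy to check numerically. The following Python sketch (an illustration of the argument; the function names and the sample set $N$ are ours, not from the text) chooses $e_n = 1$ and then, going backwards, $e_{i-1} = A_i e_i + \ldots + A_n e_n + 1$, and verifies that $\nu \mapsto e \cdot \nu$ is injective on $N$.

```python
def separating_weights(N):
    """Return weights e with e . nu pairwise distinct on the finite set N.

    Implements the sufficient condition from the proof:
    e_{i-1} > A_i e_i + ... + A_n e_n, where A_i is the spread
    (max minus min) of the i-th coordinates over N.
    """
    n = len(next(iter(N)))
    A = [max(nu[i] for nu in N) - min(nu[i] for nu in N) for i in range(n)]
    e = [0] * n
    e[n - 1] = 1
    for i in range(n - 2, -1, -1):
        # e_i strictly dominates the largest possible contribution
        # of all later coordinates
        e[i] = sum(A[j] * e[j] for j in range(i + 1, n)) + 1
    return e

def dot(e, nu):
    return sum(ei * ni for ei, ni in zip(e, nu))

# a sample set of multi-indices (hypothetical data)
N = {(0, 0, 2), (1, 1, 0), (0, 3, 1), (2, 0, 0)}
e = separating_weights(N)
assert len({dot(e, nu) for nu in N}) == len(N)  # e separates N
```

This is exactly the cofinal family of weight vectors the lemma quantifies over with $e_1 \gg e_2 \gg \ldots \gg e_n$.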
+
+\begin{lemma}
+\label{lemma-helper-polynomial}
+Let $R$ be a ring. Let $g \in R[x_1, \ldots, x_n]$ be an element
+which is nonconstant, i.e., $g \not \in R$.
+For $e_1 \gg e_2 \gg \ldots \gg e_{n-1} \gg e_n = 1$ the polynomial
+$$
+g(x_1 + x_n^{e_1}, x_2 + x_n^{e_2}, \ldots, x_{n - 1} + x_n^{e_{n - 1}}, x_n)
+=
+ax_n^d + \text{lower order terms in }x_n
+$$
+where $d > 0$ and $a \in R$ is one of the nonzero coefficients of $g$.
+\end{lemma}
+
+\begin{proof}
+Write $g = \sum_{\nu \in N} a_\nu x^\nu$ with $a_\nu \in R$ not zero.
+Here $N$ is a finite set of multi-indices as in
+Lemma \ref{lemma-helper}
+and $x^\nu = x_1^{\nu_1} \ldots x_n^{\nu_n}$.
+Note that the leading term in
+$$
+(x_1 + x_n^{e_1})^{\nu_1} \ldots (x_{n-1} + x_n^{e_{n-1}})^{\nu_{n-1}}
+x_n^{\nu_n}
+\quad\text{is}\quad
+x_n^{e_1\nu_1 + \ldots + e_{n-1}\nu_{n-1} + \nu_n}.
+$$
+Hence the lemma follows from
+Lemma \ref{lemma-helper}
+which guarantees that there is exactly one nonzero term $a_\nu x^\nu$ of $g$
+which gives rise to the leading term of
+$g(x_1 + x_n^{e_1}, x_2 + x_n^{e_2}, \ldots, x_{n - 1} + x_n^{e_{n - 1}},
+x_n)$, i.e., $a = a_\nu$ for the unique $\nu \in N$ such that
+$e \cdot \nu$ is maximal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-one-relation}
+Let $k$ be a field.
+Let $S = k[x_1, \ldots, x_n]/I$ for some proper ideal $I$.
+If $I \not = 0$, then there exist $y_1, \ldots, y_{n-1} \in k[x_1, \ldots, x_n]$
+such that $S$ is finite over $k[y_1, \ldots, y_{n-1}]$. Moreover we may
+choose $y_i$ to be in the $\mathbf{Z}$-subalgebra of $k[x_1, \ldots, x_n]$
+generated by $x_1, \ldots, x_n$.
+\end{lemma}
+
+\begin{proof}
+Pick $f \in I$, $f\not = 0$. It suffices to show the lemma
+for $k[x_1, \ldots, x_n]/(f)$ since $S$ is a quotient of that ring.
+We will take $y_i = x_i - x_n^{e_i}$, $i = 1, \ldots, n-1$
+for suitable integers $e_i$. When does this work? It suffices
+to show that $\overline{x_n} \in k[x_1, \ldots, x_n]/(f)$
+is integral over the ring $k[y_1, \ldots, y_{n-1}]$. The
+equation for $\overline{x_n}$ over this ring is
+$$
+f(y_1 + x_n^{e_1}, \ldots, y_{n-1} + x_n^{e_{n-1}}, x_n) = 0.
+$$
+Hence we are done if we can show there exist integers $e_i$ such
+that the leading coefficient with respect to $x_n$ of the equation
+above is a nonzero element of $k$. This can be achieved for example
+by choosing $e_1 \gg e_2 \gg \ldots \gg e_{n-1}$, see
+Lemma \ref{lemma-helper-polynomial}.
+\end{proof}
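For a concrete instance of the substitution $y_i = x_i - x_n^{e_i}$, take $f = x_1 x_2 - 1$, so that $S$ is the coordinate ring of a hyperbola. The following sketch, assuming SymPy is available (the example itself is ours, not from the text), checks that after substituting $x_1 = y_1 + x_2^2$ the relation becomes monic in $x_2$, so $\overline{x_2}$ is integral over $k[y_1]$.

```python
import sympy as sp

# Hypothetical example: f = x1*x2 - 1 with the change of variables
# y1 = x1 - x2**2 (i.e. e_1 = 2), as in the proof of the lemma.
x1, x2, y1 = sp.symbols('x1 x2 y1')
f = x1*x2 - 1
g = sp.expand(f.subs(x1, y1 + x2**2))   # = x2**3 + y1*x2 - 1
p = sp.Poly(g, x2)
# monic of degree 3 in x2: x2 satisfies a monic equation over k[y1]
assert p.degree() == 3 and p.LC() == 1
```

So $k[x_1, x_2]/(x_1x_2 - 1)$ is finite over $k[y_1]$, even though it is visibly not finite over $k[x_1]$ or $k[x_2]$.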
+
+\begin{lemma}
+\label{lemma-Noether-normalization}
+\begin{slogan}
+Noether normalization
+\end{slogan}
+Let $k$ be a field. Let $S = k[x_1, \ldots, x_n]/I$ for some ideal $I$.
+If $I \neq (1)$, there exist $r\geq 0$, and
+$y_1, \ldots, y_r \in k[x_1, \ldots, x_n]$
+such that (a) the map $k[y_1, \ldots, y_r] \to S$ is injective,
+and (b) the map $k[y_1, \ldots, y_r] \to S$ is finite.
+In this case the integer $r$ is the dimension of $S$.
+Moreover we may choose $y_i$ to be in the
+$\mathbf{Z}$-subalgebra of $k[x_1, \ldots, x_n]$
+generated by $x_1, \ldots, x_n$.
+\end{lemma}
+
+\begin{proof}
+By induction on $n$, with $n = 0$ being trivial.
+If $I = 0$, then take $r = n$ and $y_i = x_i$.
+If $I \not = 0$, then choose $y_1, \ldots, y_{n-1}$
+as in Lemma \ref{lemma-one-relation}. Let
+$S' \subset S$ be the subring generated by
+the images of the $y_i$. By induction we can
+choose $r$ and $z_1, \ldots, z_r \in k[y_1, \ldots, y_{n-1}]$
+such that (a), (b) hold for $k[z_1, \ldots, z_r]
+\to S'$. Since $S' \to S$ is injective and finite
+we see (a), (b) hold for $k[z_1, \ldots, z_r]
+\to S$. The last assertion follows from Lemma
+\ref{lemma-integral-sub-dim-equal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noether-normalization-at-point}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra and denote $X = \Spec(S)$.
+Let $\mathfrak q$ be a prime of $S$, and let $x \in X$ be the
+corresponding point. There exists a $g \in S$, $g \not \in \mathfrak q$
+such that $\dim(S_g) = \dim_x(X) =: d$ and such that
+there exists a finite injective map $k[y_1, \ldots, y_d] \to S_g$.
+\end{lemma}
+
+\begin{proof}
+Note that by definition $\dim_x(X)$ is the minimum
+of the dimensions of $S_g$ for $g \in S$, $g \not \in \mathfrak q$, i.e.,
+the minimum is attained. Thus the lemma follows from
+Lemma \ref{lemma-Noether-normalization}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-refined-Noether-normalization}
+Let $k$ be a field. Let $\mathfrak q \subset k[x_1, \ldots, x_n]$
+be a prime ideal. Set $r = \text{trdeg}_k\ \kappa(\mathfrak q)$.
+Then there exists a finite ring map
+$\varphi : k[y_1, \ldots, y_n] \to k[x_1, \ldots, x_n]$ such
+that $\varphi^{-1}(\mathfrak q) = (y_{r + 1}, \ldots, y_n)$.
+\end{lemma}
+
+\begin{proof}
+By induction on $n$. The case $n = 0$ is clear. Assume $n > 0$.
+If $r = n$, then $\mathfrak q = (0)$ and the result is clear.
+Choose a nonzero $f \in \mathfrak q$. Of course $f$ is nonconstant.
+After applying an automorphism of the form
+$$
+k[x_1, \ldots, x_n] \longrightarrow k[x_1, \ldots, x_n],
+\quad
+x_n \mapsto x_n,
+\quad
+x_i \mapsto x_i + x_n^{e_i}\ (i < n)
+$$
+we may assume that $f$ is monic in $x_n$ over $k[x_1, \ldots, x_{n - 1}]$, see
+Lemma \ref{lemma-helper-polynomial}. Hence the ring map
+$$
+k[y_1, \ldots, y_n] \longrightarrow k[x_1, \ldots, x_n],
+\quad
+y_n \mapsto f,
+\quad
+y_i \mapsto x_i\ (i < n)
+$$
+is finite. Moreover $y_n \in \mathfrak q \cap k[y_1, \ldots, y_n]$ by
+construction. Thus
+$\mathfrak q \cap k[y_1, \ldots, y_n] = \mathfrak pk[y_1, \ldots, y_n] + (y_n)$
+where $\mathfrak p \subset k[y_1, \ldots, y_{n - 1}]$ is a prime ideal.
+Note that $\kappa(\mathfrak p) \subset \kappa(\mathfrak q)$ is finite, and
+hence $r = \text{trdeg}_k\ \kappa(\mathfrak p)$.
+Apply the induction hypothesis to the pair
+$(k[y_1, \ldots, y_{n - 1}], \mathfrak p)$ and we obtain a finite ring map
+$k[z_1, \ldots, z_{n - 1}] \to k[y_1, \ldots, y_{n - 1}]$ such that
+$\mathfrak p \cap k[z_1, \ldots, z_{n - 1}] = (z_{r + 1}, \ldots, z_{n - 1})$.
+We extend the ring map
+$k[z_1, \ldots, z_{n - 1}] \to k[y_1, \ldots, y_{n - 1}]$
+to a ring map
+$k[z_1, \ldots, z_n] \to k[y_1, \ldots, y_n]$
+by mapping $z_n$ to $y_n$.
+The composition of the ring maps
+$$
+k[z_1, \ldots, z_n] \to k[y_1, \ldots, y_n] \to k[x_1, \ldots, x_n]
+$$
+solves the problem.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noether-normalization-over-a-domain}
+Let $R \to S$ be an injective finite type ring map. Assume $R$ is a domain.
+Then there exists an integer $d$ and a factorization
+$$
+R \to R[y_1, \ldots, y_d] \to S' \to S
+$$
+by injective maps such that $S'$ is finite over $R[y_1, \ldots, y_d]$
+and such that $S'_f \cong S_f$ for some nonzero $f \in R$.
+\end{lemma}
+
+\begin{proof}
+Pick $x_1, \ldots, x_n \in S$ which generate $S$ over $R$.
+Let $K$ be the fraction field of $R$ and $S_K = S \otimes_R K$.
+By Lemma \ref{lemma-Noether-normalization}
+we can find $y_1, \ldots, y_d \in S$ such that $K[y_1, \ldots, y_d] \to S_K$
+is a finite injective map. Note that $y_i \in S$ because we may pick the
+$y_j$ in the $\mathbf{Z}$-algebra generated by $x_1, \ldots, x_n$.
+As a finite ring map is integral (see
+Lemma \ref{lemma-finite-is-integral})
+we can find monic $P_i \in K[y_1, \ldots, y_d][T]$ such that
+$P_i(x_i) = 0$ in $S_K$. Let $f \in R$ be a nonzero element such that
+$fP_i \in R[y_1, \ldots, y_d][T]$ for all $i$. Then $fP_i(x_i)$
+maps to zero in $S_K$. Hence after replacing $f$ by another
+nonzero element of $R$ we may also assume $fP_i(x_i)$ is zero in $S$.
+Set $x_i' = fx_i$ and let $S' \subset S$ be the $R$-subalgebra generated by
+$y_1, \ldots, y_d$ and $x'_1, \ldots, x'_n$. Note that $x'_i$ is integral over
+$R[y_1, \ldots, y_d]$ as we have $Q_i(x_i') = 0$ where
+$Q_i = f^{\deg_T(P_i)}P_i(T/f)$ which is a monic polynomial in
+$T$ with coefficients in $R[y_1, \ldots, y_d]$ by our choice of $f$.
+Hence $R[y_1, \ldots, y_d] \subset S'$ is finite by
+Lemma \ref{lemma-characterize-finite-in-terms-of-integral}.
+Since $S' \subset S$ we have $S'_f \subset S_f$ (localization is exact).
+On the other hand, the elements $x_i = x'_i/f$ in $S'_f$ generate $S_f$
+over $R_f$ and hence $S'_f \to S_f$ is surjective. Whence
+$S'_f \cong S_f$ and we win.
+\end{proof}
+
+
+
+
+
+\section{Dimension of finite type algebras over fields, reprise}
+\label{section-dimension-finite-type-algebras-reprise}
+
+\noindent
+This section is a continuation of
+Section \ref{section-dimension-finite-type-algebras}.
+In this section we establish the connection between dimension and
+transcendence degree over the ground field for finite type domains
+over a field.
+
+\begin{lemma}
+\label{lemma-dimension-prime-polynomial-ring}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra which is an integral domain.
+Let $K$ be the field of fractions of $S$.
+Let $r = \text{trdeg}(K/k)$ be the transcendence degree of $K$ over $k$.
+Then $\dim(S) = r$. Moreover, the local ring of $S$ at every maximal
+ideal has dimension $r$.
+\end{lemma}
+
+\begin{proof}
+We may write $S = k[x_1, \ldots, x_n]/\mathfrak p$.
+By Lemma \ref{lemma-dimension-height-polynomial-ring}
+all local rings of $S$ at maximal ideals have the same dimension.
+Apply Lemma \ref{lemma-Noether-normalization}.
+We get a finite injective ring map
+$$
+k[y_1, \ldots, y_d] \to S
+$$
+with $d = \dim(S)$. Clearly, $k(y_1, \ldots, y_d) \subset K$
+is a finite extension and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tr-deg-specialization}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+Let $\mathfrak q \subset \mathfrak q' \subset S$ be distinct
+prime ideals. Then
+$\text{trdeg}_k\ \kappa(\mathfrak q') < \text{trdeg}_k\ \kappa(\mathfrak q)$.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-dimension-prime-polynomial-ring}
+we have $\dim V(\mathfrak q) = \text{trdeg}_k\ \kappa(\mathfrak q)$
+and similarly for $\mathfrak q'$. Hence the result follows
+as the strict inclusion $V(\mathfrak q') \subset V(\mathfrak q)$
+implies a strict inequality of dimensions.
+\end{proof}
+
+\noindent
+The following lemma generalizes
+Lemma \ref{lemma-dimension-closed-point-finite-type-field}.
+
+\begin{lemma}
+\label{lemma-dimension-at-a-point-finite-type-field}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Let $X = \Spec(S)$.
+Let $\mathfrak p \subset S$ be a prime ideal,
+and let $x \in X$ be the corresponding point.
+Then we have
+$$
+\dim_x(X) = \dim(S_{\mathfrak p}) + \text{trdeg}_k\ \kappa(\mathfrak p).
+$$
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-dimension-prime-polynomial-ring} we know that
+$r = \text{trdeg}_k\ \kappa(\mathfrak p)$ is equal to the
+dimension of $V(\mathfrak p)$.
+Pick any maximal chain of primes
+$\mathfrak p \subset \mathfrak p_1 \subset \ldots \subset \mathfrak p_r$
+starting with $\mathfrak p$ in $S$.
+This has length $r$ by Lemma \ref{lemma-dimension-spell-it-out}.
+Let $\mathfrak q_j$, $j \in J$ be the minimal
+primes of $S$ which are contained in $\mathfrak p$.
+These correspond $1-1$ to minimal primes in $S_{\mathfrak p}$
+via the rule $\mathfrak q_j \mapsto \mathfrak q_jS_{\mathfrak p}$.
+By Lemma \ref{lemma-dimension-at-a-point-finite-type-over-field}
+we know that $\dim_x(X)$ is equal
+to the maximum of the dimensions of the rings $S/\mathfrak q_j$.
+For each $j$ pick a maximal chain of primes
+$\mathfrak q_j \subset \mathfrak p'_1 \subset \ldots \subset \mathfrak p'_{s(j)}
+= \mathfrak p$.
+Then $\dim(S_{\mathfrak p}) = \max_{j \in J} s(j)$.
+Now, each chain
+$$
+\mathfrak q_j \subset \mathfrak p'_1 \subset \ldots \subset
+\mathfrak p'_{s(j)} = \mathfrak p \subset
+\mathfrak p_1 \subset \ldots \subset \mathfrak p_r
+$$
+is a maximal chain in $S/\mathfrak q_j$, and by what was said
+before we have
+\dim_x(X) = \max_{j \in J} (r + s(j)) = r + \dim(S_{\mathfrak p})$.
+The lemma follows.
+\end{proof}
+
+\noindent
+The following lemma says that the codimension of one finite type
+Spec in another is the difference of heights.
+
+\begin{lemma}
+\label{lemma-codimension}
+Let $k$ be a field.
+Let $S' \to S$ be a surjection of finite type $k$-algebras.
+Let $\mathfrak p \subset S$ be a prime ideal,
+and let $\mathfrak p'$ be the corresponding prime ideal of $S'$.
+Let $X = \Spec(S)$, resp.\ $X' = \Spec(S')$,
+and let $x \in X$, resp. $x'\in X'$ be the point corresponding
+to $\mathfrak p$, resp.\ $\mathfrak p'$.
+Then
+$$
+\dim_{x'} X' - \dim_x X =
+\text{height}(\mathfrak p') - \text{height}(\mathfrak p).
+$$
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-dimension-at-a-point-finite-type-field}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-preserved-field-extension}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Let $K/k$ be a field extension.
+Then $\dim(S) = \dim(K \otimes_k S)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-Noether-normalization}
+there exists a finite injective map
+$k[y_1, \ldots, y_d] \to S$ with $d = \dim(S)$.
+Since $K$ is flat over $k$ we also get a finite injective
+map $K[y_1, \ldots, y_d] \to K \otimes_k S$.
+The result follows from Lemma \ref{lemma-integral-sub-dim-equal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-at-a-point-preserved-field-extension}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Set $X = \Spec(S)$.
+Let $K/k$ be a field extension.
+Set $S_K = K \otimes_k S$, and $X_K = \Spec(S_K)$.
+Let $\mathfrak q \subset S$ be a prime corresponding to $x \in X$
+and let $\mathfrak q_K \subset S_K$ be a prime corresponding
+to $x_K \in X_K$ lying over $\mathfrak q$.
+Then $\dim_x X = \dim_{x_K} X_K$.
+\end{lemma}
+
+\begin{proof}
+Choose a presentation $S = k[x_1, \ldots, x_n]/I$.
+This gives a presentation
+$K \otimes_k S = K[x_1, \ldots, x_n]/(K \otimes_k I)$.
+Let $\mathfrak q_K' \subset K[x_1, \ldots, x_n]$,
+resp.\ $\mathfrak q' \subset k[x_1, \ldots, x_n]$ be
+the corresponding primes. Consider the following
+commutative diagram of Noetherian local rings
+$$
+\xymatrix{
+K[x_1, \ldots, x_n]_{\mathfrak q_K'} \ar[r] &
+(K \otimes_k S)_{\mathfrak q_K} \\
+k[x_1, \ldots, x_n]_{\mathfrak q'} \ar[r] \ar[u] &
+S_{\mathfrak q} \ar[u]
+}
+$$
+Both vertical arrows are flat because they are localizations of
+the flat ring maps $S \to S_K$ and
+$k[x_1, \ldots, x_n] \to K[x_1, \ldots, x_n]$.
+Moreover, the vertical arrows have the same fibre rings.
+Hence, we see from
+Lemma \ref{lemma-dimension-base-fibre-equals-total} that
+$\text{height}(\mathfrak q') - \text{height}(\mathfrak q)
+= \text{height}(\mathfrak q_K') - \text{height}(\mathfrak q_K)$.
+Denote $x' \in X' = \Spec(k[x_1, \ldots, x_n])$
+and $x'_K \in X'_K = \Spec(K[x_1, \ldots, x_n])$
+the points corresponding to $\mathfrak q'$ and
+$\mathfrak q_K'$. By Lemma \ref{lemma-codimension} and what we showed
+above we have
+\begin{eqnarray*}
+n - \dim_x X & = & \dim_{x'} X' - \dim_x X \\
+& = & \text{height}(\mathfrak q') - \text{height}(\mathfrak q) \\
+& = & \text{height}(\mathfrak q_K') - \text{height}(\mathfrak q_K) \\
+& = & \dim_{x'_K} X'_K - \dim_{x_K} X_K \\
+& = & n - \dim_{x_K} X_K
+\end{eqnarray*}
+and the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inequalities-under-field-extension}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+Let $K/k$ be a field extension. Set $S_K = K \otimes_k S$.
+Let $\mathfrak q \subset S$ be a prime and let $\mathfrak q_K \subset S_K$
+be a prime lying over $\mathfrak q$. Then
+$$
+\dim (S_K \otimes_S \kappa(\mathfrak q))_{\mathfrak q_K} =
+\dim (S_K)_{\mathfrak q_K} - \dim S_\mathfrak q =
+\text{trdeg}_k \kappa(\mathfrak q) - \text{trdeg}_K \kappa(\mathfrak q_K)
+$$
+Moreover, given $\mathfrak q$ we can always choose $\mathfrak q_K$ such
+that the number above is zero.
+\end{lemma}
+
+\begin{proof}
+Observe that $S_\mathfrak q \to (S_K)_{\mathfrak q_K}$ is a flat
+local homomorphism of local Noetherian rings with special fibre
+$(S_K \otimes_S \kappa(\mathfrak q))_{\mathfrak q_K}$. Hence the first
+equality by Lemma \ref{lemma-dimension-base-fibre-equals-total}.
+The second equality follows from the fact that we have
+$\dim_x X = \dim_{x_K} X_K$ with notation as in
+Lemma \ref{lemma-dimension-at-a-point-preserved-field-extension}
+and we have
+$\dim_x X = \dim S_\mathfrak q + \text{trdeg}_k \kappa(\mathfrak q)$
+by Lemma \ref{lemma-dimension-at-a-point-finite-type-field}
+and similarly for $\dim_{x_K} X_K$.
+If we choose $\mathfrak q_K$ minimal over $\mathfrak q S_K$, then
+the dimension of the fibre ring will be zero.
+\end{proof}
+
+
+
+
+
+
+
+\section{Dimension of graded algebras over a field}
+\label{section-dimension-graded}
+
+\noindent
+Here is a basic result.
+
+\begin{lemma}
+\label{lemma-dimension-graded}
+Let $k$ be a field. Let $S$ be a graded $k$-algebra generated over $k$
+by finitely many elements of degree $1$.
+Assume $S_0 = k$. Let $P(T) \in \mathbf{Q}[T]$ be the polynomial
+such that $\dim(S_d) = P(d)$ for all $d \gg 0$. See
+Proposition \ref{proposition-graded-hilbert-polynomial}.
+Then
+\begin{enumerate}
+\item The irrelevant ideal $S_{+}$ is a maximal ideal $\mathfrak m$.
+\item Any minimal prime of $S$ is a homogeneous ideal and is contained
+in $S_{+} = \mathfrak m$.
+\item We have $\dim(S) = \deg(P) + 1 = \dim_x\Spec(S)$
+(with the convention that $\deg(0) = -1$)
+where $x$ is the point corresponding to the maximal ideal
+$S_{+} = \mathfrak m$.
+\item The Hilbert function of the local ring $R = S_{\mathfrak m}$
+is equal to the Hilbert function of $S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first statement is obvious.
+The second follows from Lemma \ref{lemma-graded-ring-minimal-prime}.
+By (2) every irreducible component passes through $x$.
+Thus we have $\dim(S) = \dim_x\Spec(S) = \dim(S_\mathfrak m)$
+by Lemma \ref{lemma-dimension-at-a-point-finite-type-over-field}. Since
+$\mathfrak m^d/\mathfrak m^{d + 1} \cong
+\mathfrak m^dS_\mathfrak m/\mathfrak m^{d + 1}S_\mathfrak m$
+we see that the Hilbert function of the local ring $S_\mathfrak m$
+is equal to the
+Hilbert function of $S$, which is (4). We conclude the last equality
+of (3) by Proposition \ref{proposition-dimension}.
+\end{proof}
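As a sanity check on part (3), one can tabulate the Hilbert function of $S = k[x_1, \ldots, x_n]$ itself: $\dim_k S_d = \binom{d + n - 1}{n - 1}$, a polynomial in $d$ of degree $n - 1$, so $\dim(S) = \deg(P) + 1 = n$. A short Python sketch (the example is ours, not from the text):

```python
from math import comb

def hilbert_function(n, d):
    """dim_k of the space of degree-d monomials in n variables."""
    return comb(d + n - 1, n - 1)

# For n = 3 the Hilbert polynomial is P(d) = (d+1)(d+2)/2, of degree 2,
# and indeed dim k[x, y, z] = 3 = deg(P) + 1.
n = 3
assert all(hilbert_function(n, d) == (d + 1)*(d + 2)//2 for d in range(20))
```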
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Generic flatness}
+\label{section-generic-flatness}
+
+\noindent
+Basically this says that a finite type algebra over a domain becomes
+flat after inverting a single element of the domain.
+There are several versions of this result (in increasing order of
+strength).
+
+\begin{lemma}
+\label{lemma-generic-flatness-Noetherian}
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Assume
+\begin{enumerate}
+\item $R$ is Noetherian,
+\item $R$ is a domain,
+\item $R \to S$ is of finite type, and
+\item $M$ is a finite type $S$-module.
+\end{enumerate}
+Then there exists a nonzero $f \in R$ such that
+$M_f$ is a free $R_f$-module.
+\end{lemma}
+
+\begin{proof}
+Let $K$ be the fraction field of $R$. Set $S_K = K \otimes_R S$. This
+is an algebra of finite type over $K$. We will argue by induction on
+$d = \dim(S_K)$ (which is finite for example by Noether normalization, see
+Section \ref{section-Noether-normalization}).
+Fix $d \geq 0$.
+Assume we know that the lemma holds in all cases where $\dim(S_K) < d$.
+
+\medskip\noindent
+Suppose given $R \to S$ and $M$ as in the lemma with $\dim(S_K) = d$. By
+Lemma \ref{lemma-filter-Noetherian-module}
+there exists a filtration
+$0 \subset M_1 \subset M_2 \subset \ldots \subset M_n = M$
+so that $M_i/M_{i - 1}$ is isomorphic to $S/\mathfrak q$
+for some prime $\mathfrak q$ of $S$. Note that
+$\dim((S/\mathfrak q)_K) \leq \dim(S_K)$. Also, note that an extension of
+free modules is free (see basic notion \ref{item-extension-free}).
+Thus we may assume $M = S$ and that $S$ is a domain of finite type over $R$.
+
+\medskip\noindent
+If $R \to S$ has a nontrivial kernel, then take a nonzero $f \in R$ in
+this kernel. In this case $S_f = 0$ and the lemma holds. (This is really
+the case $d = -1$ and the start of the induction.) Hence we
+may assume that $R \to S$ is a finite type extension of Noetherian domains.
+
+\medskip\noindent
+Apply Lemma \ref{lemma-Noether-normalization-over-a-domain}
+and replace $R$ by $R_f$ (with $f$ as in the lemma) to get a
+factorization
+$$
+R \subset R[y_1, \ldots, y_d] \subset S
+$$
+where the second extension is finite. Choose $z_1, \ldots, z_r \in S$ which
+form a basis for the fraction field of $S$ over the fraction field of
+$R[y_1, \ldots, y_d]$. This gives a short exact sequence
+$$
+0 \to
+R[y_1, \ldots, y_d]^{\oplus r} \xrightarrow{(z_1, \ldots, z_r)}
+S \to N \to 0
+$$
+By construction $N$ is a finite $R[y_1, \ldots, y_d]$-module whose
+support does not contain the generic point $(0)$ of
+$\Spec(R[y_1, \ldots, y_d])$. By
+Lemma \ref{lemma-support-closed}
+there exists a nonzero $g \in R[y_1, \ldots, y_d]$ such that
+$g$ annihilates $N$, so we may view $N$ as a finite module over
+$S' = R[y_1, \ldots, y_d]/(g)$. Since $\dim(S'_K) < d$ by induction
+there exists a nonzero $f \in R$ such that $N_f$ is a free
+$R_f$-module. Since
+$(R[y_1, \ldots, y_d])_f \cong R_f[y_1, \ldots, y_d]$ is free
+also we conclude by the already mentioned fact that an extension
+of free modules is free.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generic-flatness-finitely-presented}
+\begin{slogan}
+Generic freeness.
+\end{slogan}
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Assume
+\begin{enumerate}
+\item $R$ is a domain,
+\item $R \to S$ is of finite presentation, and
+\item $M$ is an $S$-module of finite presentation.
+\end{enumerate}
+Then there exists a nonzero $f \in R$ such that
+$M_f$ is a free $R_f$-module.
+\end{lemma}
+
+\begin{proof}
+Write $S = R[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$.
+For $g \in R[x_1, \ldots, x_n]$ denote $\overline{g}$ its image in $S$.
+We may write $M = S^{\oplus t}/\sum Sn_i$ for some $n_i \in S^{\oplus t}$.
+Write $n_i = (\overline{g}_{i1}, \ldots, \overline{g}_{it})$ for some
+$g_{ij} \in R[x_1, \ldots, x_n]$. Let $R_0 \subset R$ be the subring
+generated by all the coefficients of all the elements
+$g_i, g_{ij} \in R[x_1, \ldots, x_n]$. Define
+$S_0 = R_0[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$.
+Define $M_0 = S_0^{\oplus t}/\sum S_0n_i$.
+Then $R_0$ is a domain of finite type over $\mathbf{Z}$ and hence
+Noetherian (see
+Lemma \ref{lemma-Noetherian-permanence}).
+Moreover via the injection $R_0 \to R$ we have $S \cong R \otimes_{R_0} S_0$
+and $M \cong R \otimes_{R_0} M_0$. Applying
+Lemma \ref{lemma-generic-flatness-Noetherian}
+we obtain a nonzero $f \in R_0$ such that $(M_0)_f$ is a free
+$(R_0)_f$-module. Hence $M_f = R_f \otimes_{(R_0)_f} (M_0)_f$
+is a free $R_f$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generic-flatness}
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Assume
+\begin{enumerate}
+\item $R$ is a domain,
+\item $R \to S$ is of finite type, and
+\item $M$ is a finite type $S$-module.
+\end{enumerate}
+Then there exists a nonzero $f \in R$ such that
+\begin{enumerate}
+\item[(a)] $M_f$ and $S_f$ are free as $R_f$-modules, and
+\item[(b)] $S_f$ is a finitely presented $R_f$-algebra and $M_f$ is a
+finitely presented $S_f$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We first prove the lemma for $S = R[x_1, \ldots, x_n]$, and then
+we deduce the result in general.
+
+\medskip\noindent
+Assume $S = R[x_1, \ldots, x_n]$.
+Choose elements $m_1, \ldots, m_t$ which generate $M$. This gives
+a short exact sequence
+$$
+0 \to N \to S^{\oplus t} \xrightarrow{(m_1, \ldots, m_t)} M \to 0.
+$$
+Denote $K$ the fraction field of $R$. Denote
+$S_K = K \otimes_R S = K[x_1, \ldots, x_n]$, and similarly
+$N_K = K \otimes_R N$, $M_K = K \otimes_R M$.
+As $R \to K$ is flat the sequence remains exact after tensoring with $K$.
+As $S_K = K[x_1, \ldots, x_n]$ is a Noetherian ring (see
+Lemma \ref{lemma-Noetherian-permanence})
+we can find finitely many elements $n'_1, \ldots, n'_s \in N_K$
+which generate it. Choose $n_1, \ldots, n_r \in N$ such that
+$n'_i = \sum a_{ij}n_j$ for some $a_{ij} \in K$. Set
+$$
+M' = S^{\oplus t}/\sum\nolimits_{i = 1, \ldots, r} Sn_i
+$$
+By construction $M'$ is a finitely presented $S$-module, and
+there is a surjection $M' \to M$ which induces an isomorphism
+$M'_K \cong M_K$. We may apply
+Lemma \ref{lemma-generic-flatness-finitely-presented}
+to $R \to S$ and $M'$ and we find an $f \in R$ such that $M'_f$
+is a free $R_f$-module. Thus $M'_f \to M_f$ is a surjection of
+modules over the domain $R_f$ where the source is a free module
+and which becomes an isomorphism upon tensoring with $K$.
+Thus it is injective: since $M'_f$ is free over the domain $R_f$ we have
+$M'_f \subset M'_K$, and an element of the kernel of $M'_f \to M_f$ maps
+to zero under the isomorphism $M'_K \to M_K$, hence is zero. We conclude
+that $M'_f \to M_f$ is an isomorphism and the result is proved.
+
+\medskip\noindent
+For the general case, choose a surjection $R[x_1, \ldots, x_n] \to S$.
+Think of both $S$ and $M$ as finite modules over $R[x_1, \ldots, x_n]$.
+By the special case proved above there exists a nonzero $f \in R$
+such that both $S_f$ and $M_f$ are free as $R_f$-modules and finitely
+presented as $R_f[x_1, \ldots, x_n]$-modules. Clearly this implies that
+$S_f$ is a finitely presented $R_f$-algebra and that $M_f$ is a
+finitely presented $S_f$-module.
+\end{proof}
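+
+\medskip\noindent
+As an illustration of Lemma \ref{lemma-generic-flatness}, take
+$R = \mathbf{Z}$ and $S = M = \mathbf{Z}[x]/(2x - 1) \cong \mathbf{Z}[1/2]$.
+As a $\mathbf{Z}$-module $\mathbf{Z}[1/2]$ is not free: any two elements
+are linearly dependent over $\mathbf{Z}$, yet the module is not generated
+by a single element. Taking $f = 2$ we obtain $S_f = \mathbf{Z}[1/2]$,
+which is free of rank $1$ over $R_f = \mathbf{Z}[1/2]$ and of finite
+presentation as an $R_f$-algebra.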
+
+\noindent
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module. Consider
+the following condition on an element $f \in R$:
+\begin{equation}
+\label{equation-flat-and-finitely-presented}
+\left\{
+\begin{matrix}
+S_f & \text{is of finite presentation over }R_f\\
+M_f & \text{is of finite presentation as }S_f\text{-module}\\
+S_f, M_f & \text{are free as }R_f\text{-modules}
+\end{matrix}
+\right.
+\end{equation}
+We define
+\begin{equation}
+\label{equation-good-locus}
+U(R \to S, M)
+=
+\bigcup\nolimits_{f \in
+R\text{ with }(\ref{equation-flat-and-finitely-presented})}
+D(f)
+\end{equation}
+which is an open subset of $\Spec(R)$.
+
+\begin{lemma}
+\label{lemma-generic-flatness-locus-extension}
+Let $R \to S$ be a ring map.
+Let $0 \to M_1 \to M_2 \to M_3 \to 0$ be a short exact sequence
+of $S$-modules.
+Then
+$$
+U(R \to S, M_1) \cap U(R \to S, M_3) \subset U(R \to S, M_2).
+$$
+\end{lemma}
+
+\begin{proof}
+Let $u \in U(R \to S, M_1) \cap U(R \to S, M_3)$. Choose
+$f_1, f_3 \in R$ such that $u \in D(f_1)$, $u \in D(f_3)$ and
+such that (\ref{equation-flat-and-finitely-presented}) holds for
+$f_1$ and $M_1$ and for $f_3$ and $M_3$. Then set $f = f_1f_3$.
+Then $u \in D(f)$ and (\ref{equation-flat-and-finitely-presented})
+holds for $f$ and both $M_1$ and $M_3$. An extension of free modules
+is free, and an extension of finitely presented modules is finitely presented
+(Lemma \ref{lemma-extension}). Hence we see that
+(\ref{equation-flat-and-finitely-presented}) holds for $f$ and $M_2$.
+Thus $u \in U(R \to S, M_2)$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generic-flatness-locus-localize}
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Let $f \in R$.
+Using the identification $\Spec(R_f) = D(f)$ we have
+$U(R_f \to S_f, M_f) = D(f) \cap U(R \to S, M)$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $u \in U(R_f \to S_f, M_f)$. Then there exists an
+element $g \in R_f$ such that $u \in D(g)$ and such that
+(\ref{equation-flat-and-finitely-presented})
+holds for the pair $((R_f)_g \to (S_f)_g, (M_f)_g)$.
+Write $g = a/f^n$ for some $a \in R$. Set $h = af$.
+Then $R_h = (R_f)_g$, $S_h = (S_f)_g$, and $M_h = (M_f)_g$.
+Moreover $u \in D(h)$. Hence $u \in U(R \to S, M)$.
+Conversely, suppose that $u \in D(f) \cap U(R \to S, M)$.
+Then there exists an element $g \in R$ such that $u \in D(g)$ and such that
+(\ref{equation-flat-and-finitely-presented})
+holds for the pair $(R_g \to S_g, M_g)$.
+Then it is clear that (\ref{equation-flat-and-finitely-presented})
+also holds for the pair
+$(R_{fg} \to S_{fg}, M_{fg}) = ((R_f)_g \to (S_f)_g, (M_f)_g)$.
+Hence $u \in U(R_f \to S_f, M_f)$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generic-flatness-locus-reduce}
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Let $U \subset \Spec(R)$ be a dense open.
+Assume there is an open covering $U = \bigcup_{i \in I} D(f_i)$
+such that $U(R_{f_i} \to S_{f_i}, M_{f_i})$ is dense in
+$D(f_i)$ for each $i \in I$. Then $U(R \to S, M)$ is dense in
+$\Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+In view of
+Lemma \ref{lemma-generic-flatness-locus-localize}
+this is a purely topological statement. Namely, by that lemma
+we see that $U(R \to S, M) \cap D(f_i)$ is dense in $D(f_i)$
+for each $i \in I$. By
+Topology, Lemma \ref{topology-lemma-nowhere-dense-local}
+we see that $U(R \to S, M) \cap U$ is dense in $U$.
+Since $U$ is dense in $\Spec(R)$ we conclude that $U(R \to S, M)$
+is dense in $\Spec(R)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generic-flatness-reduced}
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module.
+Assume
+\begin{enumerate}
+\item $R \to S$ is of finite type,
+\item $M$ is a finite $S$-module, and
+\item $R$ is reduced.
+\end{enumerate}
+Then there exists a subset $U \subset \Spec(R)$ such that
+\begin{enumerate}
+\item $U$ is open and dense in $\Spec(R)$,
+\item for every $u \in U$ there exists an $f \in R$ such
+that $u \in D(f) \subset U$ and such that we have
+\begin{enumerate}
+\item $M_f$ and $S_f$ are free over $R_f$,
+\item $S_f$ is a finitely presented $R_f$-algebra, and
+\item $M_f$ is a finitely presented $S_f$-module.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that the lemma is equivalent to the statement that the open
+$U(R \to S, M)$, see Equation (\ref{equation-good-locus}), is dense
+in $\Spec(R)$. We first prove the lemma for
+$S = R[x_1, \ldots, x_n]$, and then
+we deduce the result in general.
+
+\medskip\noindent
+Proof of the case $S = R[x_1, \ldots, x_n]$ and $M$ any finite module
+over $S$. Note that in this case $S_f = R_f[x_1, \ldots, x_n]$
+is free and of finite presentation over $R_f$, so we do not have
+to worry about the conditions regarding $S$,
+only those that concern $M$. We will use induction on $n$.
+
+\medskip\noindent
+There exists a finite filtration
+$$
+0 \subset M_1 \subset M_2 \subset \ldots \subset M_t = M
+$$
+such that $M_i/M_{i - 1} \cong S/J_i$ for some ideal $J_i \subset S$, see
+Lemma \ref{lemma-trivial-filter-finite-module}. Since
+a finite intersection of dense opens is dense open,
+we see from
+Lemma \ref{lemma-generic-flatness-locus-extension}
+that it suffices to prove the lemma for each of the modules $S/J_i$.
+Hence we may assume that $M = S/J$ for some ideal $J$ of
+$S = R[x_1, \ldots, x_n]$.
+
+\medskip\noindent
+Let $I \subset R$ be the ideal generated by the coefficients of
+elements of $J$. Let $U_1 = \Spec(R) \setminus V(I)$ and
+let
+$$
+U_2 = \Spec(R) \setminus \overline{U_1}.
+$$
+Then it is clear that $U = U_1 \cup U_2$ is dense in $\Spec(R)$.
+Let $f \in R$ be an element such that either (a) $D(f) \subset U_1$ or
+(b) $D(f) \subset U_2$. If for any such $f$
+the lemma holds for the pair $(R_f \to R_f[x_1, \ldots, x_n], M_f)$
+then by
+Lemma \ref{lemma-generic-flatness-locus-reduce}
+we see that $U(R \to S, M)$ is dense in $\Spec(R)$.
+Hence we may assume either (a) $I = R$, or (b) $V(I) = \Spec(R)$.
+
+\medskip\noindent
+In case (b) we actually have $I = 0$ as $R$ is reduced! Hence $J = 0$
+and $M = S$ and the lemma holds in this case.
+
+\medskip\noindent
+In case (a) we have to do a little bit more work. Note that every element
+of $I$ is actually the coefficient of a monomial of an element of $J$, because
+the set of coefficients of elements of $J$ forms an ideal (details omitted).
+Hence we find an element
+$$
+g = \sum\nolimits_{K \in E} a_K x^K \in J
+$$
+where $E$ is a finite set of multi-indices $K = (k_1, \ldots, k_n)$
+with at least one coefficient $a_{K_0}$ a unit in $R$. Actually we can
+find one which has a coefficient equal to $1$ as $1 \in I$ in case (a).
+Let $m = \#\{K \in E \mid a_K \text{ is not a unit}\}$.
+Note that $0 \leq m \leq \# E - 1$.
+We will argue by induction on $m$.
+
+\medskip\noindent
+The case $m = 0$. In this case all the coefficients $a_K$, $K \in E$
+of $g$ are units and $E \not = \emptyset$.
+If $E = \{K_0\}$ is a singleton and $K_0 = (0, \ldots, 0)$, then $g$
+is a unit and $J = S$ so the result holds for sure. (This happens in
+particular when $n = 0$ and it provides the base case of the induction
+on $n$.) If $E \not = \{(0, \ldots, 0)\}$, then at least one $K \in E$ is
+not equal to $(0, \ldots, 0)$, i.e., $g \not \in R$. At this point we employ
+the usual trick of Noether normalization. Namely, we consider
+$$
+G(y_1, \ldots, y_n) =
+g(y_1 + y_n^{e_1}, y_2 + y_n^{e_2}, \ldots, y_{n - 1} + y_n^{e_{n - 1}}, y_n)
+$$
+with $0 \ll e_{n -1} \ll e_{n - 2} \ll \ldots \ll e_1$. By
+Lemma \ref{lemma-helper-polynomial}
+it follows that $G(y_1, \ldots, y_n)$ as a polynomial in $y_n$
+looks like
+$$
+a_K y_n^{k_n + \sum_{i = 1, \ldots, n - 1} e_i k_i} +
+\text{lower order terms in }y_n
+$$
+As $a_K$ is a unit we conclude that $M = R[x_1, \ldots, x_n]/J$ is
+finite over $R[y_1, \ldots, y_{n - 1}]$. Hence
+$U(R \to R[x_1, \ldots, x_n], M) = U(R \to R[y_1, \ldots, y_{n - 1}], M)$
+and we win by induction on $n$.
+
+\medskip\noindent
+The case $m > 0$. Pick a multi-index $K \in E$ such that $a_K$ is
+not a unit. As before set
+$U_1 = \Spec(R_{a_K}) = \Spec(R) \setminus V(a_K)$
+and set
+$$
+U_2 = \Spec(R) \setminus \overline{U_1}.
+$$
+Then it is clear that $U = U_1 \cup U_2$ is dense in $\Spec(R)$.
+Let $f \in R$ be an element such that either (a) $D(f) \subset U_1$ or
+(b) $D(f) \subset U_2$. If for any such $f$
+the lemma holds for the pair $(R_f \to R_f[x_1, \ldots, x_n], M_f)$
+then by
+Lemma \ref{lemma-generic-flatness-locus-reduce}
+we see that $U(R \to S, M)$ is dense in $\Spec(R)$.
+Hence we may assume either (a) $a_KR = R$, or (b) $V(a_K) = \Spec(R)$.
+In case (a) the number $m$ drops, as $a_K$ has turned into a unit.
+In case (b), since $R$ is reduced, we conclude that $a_K = 0$.
+Hence the set $E$ decreases so the number $m$ drops as well.
+In both cases we win by induction on $m$.
+
+\medskip\noindent
+At this point we have proven the lemma in case $S = R[x_1, \ldots, x_n]$.
+Assume that $(R \to S, M)$ is an arbitrary pair satisfying the conditions of
+the lemma. Choose a surjection $R[x_1, \ldots, x_n] \to S$. Observe that,
+with the notation introduced in (\ref{equation-good-locus}), we have
+$$
+U(R \to S, M) =
+U(R \to R[x_1, \ldots, x_n], S)
+\cap
+U(R \to R[x_1, \ldots, x_n], M)
+$$
+Since we have just proven that the two opens on the right hand side are
+dense, the open on the left hand side is dense as well.
+\end{proof}
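+
+\medskip\noindent
+The Noether normalization substitution used in the proof above can be
+checked on a small example. The following sketch (illustrative only, using
+SymPy; not part of the text) applies the substitution
+$x_1 = y_1 + y_2^{e_1}$ with $e_1 = 2$ to $g = x_1x_2 - 1$, whose leading
+coefficient in $x_2$ is not a unit, and verifies that the result is monic
+in $y_2$ of degree $k_2 + e_1 k_1 = 1 + 2 \cdot 1 = 3$.
+
+```python
+import sympy as sp
+
+x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
+
+# g is not monic in x2: its leading coefficient in x2 is x1.
+g = x1 * x2 - 1
+
+# Noether normalization substitution x1 = y1 + y2**e with e = 2.
+e = 2
+G = sp.expand(g.subs(x1, y1 + y2**e).subs(x2, y2))  # y1*y2 + y2**3 - 1
+
+# As a polynomial in y2 the result is monic of degree k_2 + e*k_1 = 3.
+p = sp.Poly(G, y2)
+print(p.LC(), p.degree())  # 1 3
+```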
+
+
+
+
+
+
+\section{Around Krull-Akizuki}
+\label{section-krull-akizuki}
+
+\noindent
+One application of Krull-Akizuki is to show that there are plenty
+of discrete valuation rings. More generally in this section we
+show how to construct discrete valuation rings dominating Noetherian
+local rings.
+
+\medskip\noindent
+First we show how to dominate a Noetherian local domain
+by a $1$-dimensional Noetherian local domain by blowing up
+the maximal ideal.
+
+\begin{lemma}
+\label{lemma-dominate-by-dimension-1}
+Let $R$ be a local Noetherian domain with fraction field $K$.
+Assume $R$ is not a field.
+Then there exist $R \subset R' \subset K$ with
+\begin{enumerate}
+\item $R'$ local Noetherian of dimension $1$,
+\item $R \to R'$ a local ring map, i.e., $R'$ dominates $R$, and
+\item $R \to R'$ essentially of finite type.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose any valuation ring $A \subset K$ dominating $R$ (which
+exists by Lemma \ref{lemma-dominate}).
+Denote $v$ the corresponding valuation.
+Let $x_1, \ldots, x_r$ be a minimal set of generators of the
+maximal ideal $\mathfrak m$ of $R$. We may and do assume that
+$v(x_r) = \min\{v(x_1), \ldots, v(x_r)\}$. Consider the ring
+$$
+S = R[x_1/x_r, x_2/x_r, \ldots, x_{r - 1}/x_r] \subset K.
+$$
+Note that $\mathfrak mS = x_rS$ is a principal ideal.
+Note that $S \subset A$ and that $v(x_r) > 0$, hence we see
+that $x_rS \not = S$. Choose a minimal prime $\mathfrak q$
+over $x_rS$. Then $\text{height}(\mathfrak q) = 1$ by
+Lemma \ref{lemma-minimal-over-1}
+and $\mathfrak q$ lies over $\mathfrak m$. Hence we
+see that $R' = S_{\mathfrak q}$ is a solution.
+\end{proof}
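+
+\medskip\noindent
+For example, let $R = k[x, y]_{(x, y)}$ with $\mathfrak m = (x, y)$, and
+suppose the chosen valuation satisfies $v(y) \leq v(x)$. Then
+$S = R[x/y]$ satisfies $\mathfrak m S = yS$, and writing $t = x/y$ one can
+check that $S$ is a localization of $k[t, y]$ and that $yS$ is prime.
+Hence $R' = S_{yS}$ is a $1$-dimensional local ring, even though
+$\dim(R) = 2$; it is the local ring of the exceptional divisor of the
+blowup of $\Spec(R)$ in $\mathfrak m$.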
+
+\begin{lemma}[Koll\'ar]
+\label{lemma-hart-serre-loc-thm}
+\begin{reference}
+This is taken from a forthcoming paper by
+J\'anos Koll\'ar entitled ``Variants of normality for Noetherian schemes''.
+\end{reference}
+Let $(R, \mathfrak m)$ be a local Noetherian ring.
+Then exactly one of the following holds:
+\begin{enumerate}
+\item $(R, \mathfrak m)$ is Artinian,
+\item $(R, \mathfrak m)$ is regular of dimension $1$,
+\item $\text{depth}(R) \geq 2$, or
+\item there exists a finite ring map $R \to R'$ which is not
+an isomorphism whose kernel and cokernel are annihilated by a power
+of $\mathfrak m$ such that $\mathfrak m$ is not an associated
+prime of $R'$ and $R' \not = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Observe that $(R, \mathfrak m)$ is not Artinian if and only if
+$V(\mathfrak m) \subset \Spec(R)$ is nowhere dense. See
+Proposition \ref{proposition-dimension-zero-ring}. We assume this from now on.
+
+\medskip\noindent
+Let $J \subset R$ be the largest ideal killed by a power of $\mathfrak m$.
+If $J \not = 0$ then $R \to R/J$ shows that $(R, \mathfrak m)$
+is as in (4).
+
+\medskip\noindent
+Otherwise $J = 0$. In particular $\mathfrak m$ is not an associated prime
+of $R$ and we see that there is a nonzerodivisor $x \in \mathfrak m$ by
+Lemma \ref{lemma-ideal-nonzerodivisor}. If $\mathfrak m$
+is not an associated prime of $R/xR$ then $\text{depth}(R) \geq 2$
+by the same lemma. Thus we are left with the case when there is a
+$y \in R$, $y \not \in xR$ such that $y \mathfrak m \subset xR$.
+
+\medskip\noindent
+If $y \mathfrak m \subset x \mathfrak m$ then we can consider the
+map $\varphi : \mathfrak m \to \mathfrak m$, $f \mapsto yf/x$
+(well defined as $x$ is a nonzerodivisor). By the determinantal trick
+of Lemma \ref{lemma-charpoly-module} there exists a monic
+polynomial $P$ with coefficients in $R$ such that $P(\varphi) = 0$.
+We conclude that $P(y/x) = 0$ in $R_x$.
+Let $R' \subset R_x$ be the ring generated by
+$R$ and $y/x$. Then $R \subset R'$ and $R'/R$ is a finite $R$-module
+annihilated by a power of $\mathfrak m$. Thus $R$ is as in (4).
+
+\medskip\noindent
+Otherwise there is a $t \in \mathfrak m$ such that $y t = u x$
+for some unit $u$ of $R$. After replacing $t$ by $u^{-1}t$
+we get $yt = x$. In particular $y$ is a nonzerodivisor.
+For any $t' \in \mathfrak m$ we have $y t' = x s$ for some $s \in R$.
+Thus $y (t' - s t ) = x s - x s = 0$.
+Since $y$ is a nonzerodivisor this implies that $t' = ts$ and so
+$\mathfrak m = (t)$. Thus $(R, \mathfrak m)$ is regular of dimension 1.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nonregular-dimension-one}
+Let $R$ be a local ring with maximal ideal $\mathfrak m$.
+Assume $R$ is Noetherian, has dimension $1$, and that
+$\dim(\mathfrak m/\mathfrak m^2) > 1$. Then there exists
+a ring map $R \to R'$ such that
+\begin{enumerate}
+\item $R \to R'$ is finite,
+\item $R \to R'$ is not an isomorphism,
+\item the kernel and cokernel of $R \to R'$ are annihilated by
+a power of $\mathfrak m$, and
+\item $\mathfrak m$ is not an associated prime of $R'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-hart-serre-loc-thm}
+and the fact that $R$ is not Artinian, not regular, and
+does not have depth $\geq 2$ (the last part because the
+depth does not exceed the dimension by
+Lemma \ref{lemma-bound-depth}).
+\end{proof}
+
+\begin{example}
+\label{example-nonreduced}
+Consider the Noetherian local ring
+$$
+R = k[[x, y]]/(y^2)
+$$
+It has dimension 1 and it is Cohen-Macaulay.
+An example of an extension as in
+Lemma \ref{lemma-nonregular-dimension-one}
+is the extension
+$$
+k[[x, y]]/(y^2) \subset k[[x, z]]/(z^2), \ \ y \mapsto xz
+$$
+in other words it is gotten by adjoining $y/x$ to $R$. The
+effect of repeating the construction $n > 1$ times is
+to adjoin the element $y/x^n$.
+\end{example}
+
+\begin{example}
+\label{example-bad-dvr-char-p}
+Let $k$ be a field of characteristic $p > 0$ such that $k$
+has infinite degree over its subfield $k^p$ of $p$th powers.
+For example $k = \mathbf{F}_p(t_1, t_2, t_3, \ldots)$.
+Consider the ring
+$$
+A =
+\left\{
+\sum a_i x^i \in k[[x]] \text{ such that }
+[k^p(a_0, a_1, a_2, \ldots) : k^p] < \infty
+\right\}
+$$
+Then $A$ is a discrete valuation ring and its completion is
+$A^\wedge = k[[x]]$. Note that the induced extension of fraction
+fields of $A \subset k[[x]]$ is infinite purely inseparable.
+Choose any $f \in k[[x]]$, $f \not \in A$. Let $R = A[f] \subset k[[x]]$.
+Then $R$ is a Noetherian local domain of dimension $1$ whose
+completion $R^\wedge$ is nonreduced (think!).
+\end{example}
+
+\begin{remark}
+\label{remark-resolution-dim-1}
+Suppose that $R$ is a $1$-dimensional semi-local Noetherian domain.
+If there is a maximal ideal $\mathfrak m \subset R$ such that
+$R_{\mathfrak m}$ is not regular, then we may apply
+Lemma \ref{lemma-nonregular-dimension-one} to $(R, \mathfrak m)$
+to get a finite ring extension $R \subset R_1$.
+(For example one can do this so that $\Spec(R_1) \to \Spec(R)$
+is the blowup of $\Spec(R)$ in the ideal $\mathfrak m$.)
+Of course $R_1$ is a $1$-dimensional semi-local Noetherian
+domain with the same fraction field as $R$. If $R_1$ is not a
+regular semi-local ring, then we may repeat the construction to
+get $R_1 \subset R_2$. Thus we get a sequence
+$$
+R \subset R_1 \subset R_2 \subset R_3 \subset \ldots
+$$
+of finite ring extensions which may stop if $R_n$ is regular for
+some $n$. Resolution of singularities would be the claim
+that eventually $R_n$ is indeed regular. In reality this is not
+the case. Namely, there exists a characteristic $0$
+Noetherian local domain $A$ of dimension $1$ whose completion is nonreduced,
+see \cite[Proposition 3.1]{Ferrand-Raynaud} or our
+Examples, Section \ref{examples-section-local-completion-nonreduced}.
+For an example in characteristic $p > 0$ see
+Example \ref{example-bad-dvr-char-p}.
+Since the construction of blowing up commutes with completion it
+is easy to see the sequence never stabilizes.
+See \cite{Bennett} for a discussion (mostly in positive characteristic).
+On the other hand, if the completion of $R$ in all of its maximal
+ideals is reduced, then the procedure stops (insert future reference
+here).
+\end{remark}
+
+\begin{lemma}
+\label{lemma-characterize-dvr}
+Let $A$ be a ring. The following are equivalent.
+\begin{enumerate}
+\item The ring $A$ is a discrete valuation ring.
+\item The ring $A$ is a valuation ring and Noetherian but not a field.
+\item The ring $A$ is a regular local ring of dimension $1$.
+\item The ring $A$ is a Noetherian local domain with maximal ideal
+$\mathfrak m$ generated by a single nonzero element.
+\item The ring $A$ is a Noetherian local normal domain of dimension $1$.
+\end{enumerate}
+In this case if $\pi$ is a generator of the maximal ideal of
+$A$, then every nonzero element of $A$ can be written uniquely as
+$u\pi^n$ with $n \geq 0$, where $u \in A$ is a unit.
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) is
+Lemma \ref{lemma-valuation-ring-Noetherian-discrete}.
+Moreover, in the proof of Lemma \ref{lemma-valuation-ring-Noetherian-discrete}
+we saw that if $A$ is a discrete valuation ring, then $A$ is a PID, hence (3).
+Note that a regular local ring is a domain (see
+Lemma \ref{lemma-regular-domain}). Using this the equivalence of (3) and (4)
+follows from dimension theory, see Section \ref{section-dimension}.
+
+\medskip\noindent
+Assume (3) and let $\pi$ be a generator of the maximal ideal $\mathfrak m$.
+For all $n \geq 0$ we have
+$\dim_{A/\mathfrak m} \mathfrak m^n/\mathfrak m^{n + 1} = 1$
+because it is generated by $\pi^n$ (and it cannot be zero).
+In particular $\mathfrak m^n = (\pi^n)$ and
+the graded ring $\bigoplus \mathfrak m^n/\mathfrak m^{n + 1}$
+is isomorphic to the polynomial ring $A/\mathfrak m[T]$.
+For $x \in A \setminus \{0\}$ define
+$v(x) = \max\{n \mid x \in \mathfrak m^n\}$.
+In other words $x = u \pi^{v(x)}$ with $u \in A^*$.
+By the remarks above we have $v(xy) = v(x) + v(y)$
+for all $x, y \in A \setminus \{0\}$. We extend this to the field of fractions
+$K$ of $A$ by setting $v(a/b) = v(a) - v(b)$ (well defined by multiplicativity
+shown above). Then it is clear that $A$ is the set of elements of
+$K$ which have valuation $\geq 0$. Hence we see that $A$ is a
+valuation ring by Lemma \ref{lemma-valuation-valuation-ring}.
+
+\medskip\noindent
+A valuation ring is a normal domain by Lemma \ref{lemma-valuation-ring-normal}.
+Hence we see that the equivalent conditions (1) -- (3) imply
+(5). Assume (5). Suppose that $\mathfrak m$ cannot be generated
+by $1$ element to get a contradiction.
+Then Lemma \ref{lemma-nonregular-dimension-one} implies there is a finite
+ring map $A \to A'$ which is an isomorphism after inverting
+any nonzero element of $\mathfrak m$ but not an isomorphism.
+In particular we may identify $A'$ with a subset of the fraction field of $A$.
+Since $A \to A'$ is finite it is integral (see
+Lemma \ref{lemma-finite-is-integral}).
+Since $A$ is normal we get $A = A'$ a contradiction.
+\end{proof}
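+
+\medskip\noindent
+For the discrete valuation ring $\mathbf{Z}_{(p)}$ the valuation $v$
+constructed in the proof is the $p$-adic valuation. The following
+computational sketch (illustrative only; the helper name {\tt v} is ours)
+checks the multiplicativity $v(xy) = v(x) + v(y)$ on nonzero integers.
+
+```python
+def v(n, p):
+    """p-adic valuation of a nonzero integer n: the largest k with p**k | n."""
+    assert n != 0
+    k = 0
+    while n % p == 0:
+        n //= p
+        k += 1
+    return k
+
+# v is additive, as shown in the proof: v(xy) = v(x) + v(y).
+print(v(12, 2), v(18, 2), v(12 * 18, 2))  # 2 1 3
+```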
+
+\begin{definition}
+\label{definition-uniformizer}
+Let $A$ be a discrete valuation ring. A {\it uniformizer} is an element
+$\pi \in A$ which generates the maximal ideal of $A$.
+\end{definition}
+
+\noindent
+By Lemma \ref{lemma-characterize-dvr} any two uniformizers of a discrete
+valuation ring are associates.
+
+\begin{lemma}
+\label{lemma-finite-length}
+Let $R$ be a domain with fraction field $K$.
+Let $M$ be an $R$-submodule of $K^{\oplus r}$.
+Assume $R$ is local Noetherian of dimension $1$.
+For any nonzero $x \in R$ we have $\text{length}_R(R/xR) < \infty$
+and
+$$
+\text{length}_R(M/xM) \leq r \cdot \text{length}_R(R/xR).
+$$
+\end{lemma}
+
+\begin{proof}
+If $x$ is a unit then the result is true. Hence we may assume
+$x \in \mathfrak m$ the maximal ideal of $R$. Since $x$ is not
+zero and $R$ is a domain we have $\dim(R/xR) = 0$, and hence
+$R/xR$ has finite length. Consider $M \subset K^{\oplus r}$ as
+in the lemma. We may assume that the elements of $M$ generate
+$K^{\oplus r}$ as a $K$-vector space after replacing $K^{\oplus r}$
+by a smaller subspace if necessary.
+
+\medskip\noindent
+Suppose first that $M$ is a finite $R$-module. In that case we can clear
+denominators and assume $M \subset R^{\oplus r}$. Since
+$M$ generates $K^{\oplus r}$ as a vector space we see that
+$R^{\oplus r}/M$ has finite length. In particular there exists
+an integer $c \geq 0$ such that $x^cR^{\oplus r} \subset M$.
+Note that $M \supset xM \supset x^2M \supset \ldots$ is a sequence of
+modules with successive quotients each isomorphic to $M/xM$. Hence
+we see that
+$$
+n \text{length}_R(M/xM) = \text{length}_R(M/x^nM).
+$$
+The same argument for $M = R^{\oplus r}$ shows that
+$$
+n \text{length}_R(R^{\oplus r}/xR^{\oplus r}) =
+\text{length}_R(R^{\oplus r}/x^nR^{\oplus r}).
+$$
+By our choice of $c$ above we see that $x^nM$ is sandwiched between
+$x^n R^{\oplus r}$ and $x^{n + c}R^{\oplus r}$. This easily gives that
+$$
+r(n + c) \text{length}_R(R/xR)
+\geq
+n \text{length}_R(M/xM)
+\geq
+r (n - c) \text{length}_R(R/xR)
+$$
+Hence in the finite case we actually get the result of the lemma with
+equality.
+
+\medskip\noindent
+Suppose now that $M$ is not finite. Suppose that the length of $M/xM$ is
+$\geq k$ for some natural number $k$. Then we can find
+$$
+0 \subset N_0 \subset N_1 \subset N_2 \subset \ldots \subset N_k \subset M/xM
+$$
+with $N_i \not = N_{i + 1}$ for $i = 0, \ldots, k - 1$.
+Choose an element $m_i \in M$ whose congruence class mod $xM$ falls
+into $N_i$ but not into $N_{i - 1}$ for $i = 1, \ldots, k$.
+Consider the finite $R$-module $M' = Rm_1 + \ldots + Rm_k \subset M$.
+Let $N'_i \subset M'/xM'$ be the inverse image of $N_i$.
+It is clear that $N'_i \not =N'_{i + 1}$ by our choice of $m_i$.
+Hence we see that $\text{length}_R(M'/xM') \geq k$. By the
+finite case we conclude $k \leq r \cdot \text{length}_R(R/xR)$
+as desired.
+\end{proof}
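+
+\medskip\noindent
+For instance, take $R = \mathbf{Z}_{(p)}$,
+$M = R^{\oplus r} \subset \mathbf{Q}^{\oplus r}$, and $x = p^n$. Then
+$\text{length}_R(R/xR) = n$ and $\text{length}_R(M/xM) = rn$, so the
+inequality of Lemma \ref{lemma-finite-length} is an equality, as the proof
+shows is always the case when $M$ is a finite module generating
+$K^{\oplus r}$.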
+
+\noindent
+Here is a first application.
+
+\begin{lemma}
+\label{lemma-finite-extension-residue-fields-dimension-1}
+Let $R \to S$ be a homomorphism of domains inducing an
+injection of fraction fields $K \subset L$. If $R$ is Noetherian
+local of dimension $1$ and $[L : K] < \infty$ then
+\begin{enumerate}
+\item each prime ideal $\mathfrak n_i$ of $S$ lying over
+the maximal ideal $\mathfrak m$ of $R$ is maximal,
+\item there are finitely many of these, and
+\item $[\kappa(\mathfrak n_i) : \kappa(\mathfrak m)] < \infty$ for each $i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Pick $x \in \mathfrak m$ nonzero. Apply Lemma \ref{lemma-finite-length}
+to the submodule $S \subset L \cong K^{\oplus n}$ where $n = [L : K]$.
+Thus the ring $S/xS$ has finite length over $R$. It follows that
+$S/\mathfrak m S$ has finite length over $\kappa(\mathfrak m)$.
+In other words, $\dim_{\kappa(\mathfrak m)} S/\mathfrak m S$
+is finite (Lemma \ref{lemma-dimension-is-length}). Thus $S/\mathfrak mS$
+is Artinian (Lemma \ref{lemma-finite-dimensional-algebra}). The
+structural results on Artinian rings implies parts (1) and (2), see
+for example Lemma \ref{lemma-artinian-finite-length}.
+Part (3) is implied by the finiteness established above.
+\end{proof}
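+
+\medskip\noindent
+For instance, take $R = \mathbf{Z}_{(5)}$ and $S = \mathbf{Z}_{(5)}[i]$,
+so that $K = \mathbf{Q}$, $L = \mathbf{Q}(i)$, and $[L : K] = 2$. Since
+$5 = (2 + i)(2 - i)$, the primes of $S$ lying over the maximal ideal of
+$R$ are the two maximal ideals $(2 + i)$ and $(2 - i)$, and both residue
+fields are isomorphic to $\mathbf{F}_5$, in accordance with the lemma.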
+
+\begin{lemma}
+\label{lemma-finite-length-global}
+Let $R$ be a domain with fraction field $K$.
+Let $M$ be an $R$-submodule of $K^{\oplus r}$.
+Assume $R$ is Noetherian of dimension $1$.
+For any nonzero $x \in R$ we have
+$\text{length}_R(M/xM) < \infty$.
+\end{lemma}
+
+\begin{proof}
+Since $R$ has dimension $1$ we see that $x$ is contained in finitely many
+primes $\mathfrak m_i$, $i = 1, \ldots, n$, each maximal. Since $R$ is
+Noetherian we see that $R/xR$ is Artinian and
+$R/xR = \prod_{i = 1, \ldots, n} (R/xR)_{\mathfrak m_i}$ by
+Proposition \ref{proposition-dimension-zero-ring} and
+Lemma \ref{lemma-artinian-finite-length}. Hence $M/xM$ similarly
+decomposes as the product $M/xM = \prod (M/xM)_{\mathfrak m_i}$
+of its localizations at the $\mathfrak m_i$. By Lemma \ref{lemma-finite-length}
+applied to $M_{\mathfrak m_i}$ over $R_{\mathfrak m_i}$ we see each
+$M_{\mathfrak m_i}/xM_{\mathfrak m_i} = (M/xM)_{\mathfrak m_i}$
+has finite length over $R_{\mathfrak m_i}$. Thus $M/xM$
+has finite length over $R$ as the above implies $M/xM$
+has a finite filtration by $R$-submodules whose successive
+quotients are isomorphic to the residue fields $\kappa(\mathfrak m_i)$.
+\end{proof}
+
+\begin{lemma}[Krull-Akizuki]
+\label{lemma-krull-akizuki}
+Let $R$ be a domain with fraction field $K$.
+Let $L/K$ be a finite extension of fields.
+Assume $R$ is Noetherian and $\dim(R) = 1$.
+In this case any ring $A$ with $R \subset A \subset L$ is
+Noetherian.
+\end{lemma}
+
+\begin{proof}
+To begin we may replace $L$ by the fraction field of $A$
+and hence assume that $L$ is the fraction field of $A$.
+Let $I \subset A$ be a nonzero ideal. Clearly $I$ generates $L$ as
+a $K$-vector space. Hence we see that $I \cap R \not = (0)$.
+Pick any nonzero $x \in I \cap R$. Then we get
+$I/xA \subset A/xA$. By Lemma \ref{lemma-finite-length-global}
+the $R$-module $A/xA$ has finite length as an $R$-module. Hence
+$I/xA$ has finite length as an $R$-module. Hence $I$ is finitely
+generated as an ideal in $A$.
+\end{proof}
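+
+\medskip\noindent
+For example, with $R = \mathbf{Z}$, $K = \mathbf{Q}$, and
+$L = \mathbf{Q}(i)$ the lemma shows that every ring $A$ with
+$\mathbf{Z} \subset A \subset \mathbf{Q}(i)$ is Noetherian. This applies
+not only to the ring of integers $\mathbf{Z}[i]$, but also to non-normal
+orders such as $\mathbf{Z}[2i]$ and to rings which are not finite over
+$\mathbf{Z}$, such as $\mathbf{Z}[1/2, i]$.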
+
+\begin{lemma}
+\label{lemma-exists-dvr}
+Let $R$ be a Noetherian local domain with fraction field $K$.
+Assume that $R$ is not a field.
+Let $L/K$ be a finitely generated field extension.
+Then there exists a discrete valuation ring $A$ with fraction field
+$L$ which dominates $R$.
+\end{lemma}
+
+\begin{proof}
+If $L$ is not finite over $K$ choose a transcendence basis
+$x_1, \ldots, x_r$ of $L$ over $K$ and replace $R$ by
+$R[x_1, \ldots, x_r]$ localized at the maximal ideal
+generated by $\mathfrak m_R$ and $x_1, \ldots, x_r$.
+Thus we may assume $K \subset L$ finite.
+
+\medskip\noindent
+By Lemma \ref{lemma-dominate-by-dimension-1} we may assume $\dim(R) = 1$.
+
+\medskip\noindent
+Let $A \subset L$ be the integral closure of $R$ in $L$.
+By Lemma \ref{lemma-krull-akizuki} this is Noetherian.
+By Lemma \ref{lemma-integral-overring-surjective} there
+is a prime ideal $\mathfrak q \subset A$ lying
+over the maximal ideal of $R$.
+By Lemma \ref{lemma-characterize-dvr} the ring $A_{\mathfrak q}$ is a discrete
+valuation ring dominating $R$ as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Factorization}
+\label{section-factoring}
+
+\noindent
+Here are some notions and relations between them that are typically taught
+in a first year course on algebra at the undergraduate level.
+
+\begin{definition}
+\label{definition-irreducible-prime-element}
+Let $R$ be a domain.
+\begin{enumerate}
+\item Elements $x, y \in R$ are called {\it associates} if
+there exists a unit $u \in R^*$ such that $x = uy$.
+\item An element $x \in R$ is called {\it irreducible}
+if it is nonzero, not a unit and whenever $x = yz$, $y, z \in R$,
+then $y$ is either a unit or an associate of $x$.
+\item An element $x \in R$ is called {\it prime} if the ideal
+generated by $x$ is a prime ideal.
+\end{enumerate}
+\end{definition}
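+
+\medskip\noindent
+The difference between irreducible and prime elements can be seen in
+$\mathbf{Z}[\sqrt{-5}]$: the element $2$ is irreducible but not prime,
+since $2$ divides $(1 + \sqrt{-5})(1 - \sqrt{-5}) = 6$ without dividing
+either factor. The following computational sketch (illustrative only;
+elements $a + b\sqrt{-5}$ are encoded as pairs, and the helper names are
+ours) verifies the two ingredients.
+
+```python
+def mul(u, w):
+    """Multiply a + b*sqrt(-5) and c + d*sqrt(-5), encoded as pairs (a, b)."""
+    (a, b), (c, d) = u, w
+    return (a * c - 5 * b * d, a * d + b * c)
+
+def norm(u):
+    """Multiplicative norm N(a + b*sqrt(-5)) = a^2 + 5*b^2."""
+    a, b = u
+    return a * a + 5 * b * b
+
+# (1 + sqrt(-5)) * (1 - sqrt(-5)) = 6 = 2 * 3.
+print(mul((1, 1), (1, -1)))  # (6, 0)
+
+# 2 is irreducible: a proper factor would have norm 2, but a^2 + 5b^2 = 2
+# has no integer solutions (a small search suffices, as the norm grows).
+print([(a, b) for a in range(-2, 3) for b in range(-2, 3)
+       if norm((a, b)) == 2])  # []
+```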
+
+\begin{lemma}
+\label{lemma-easy-divisibility}
+Let $R$ be a domain. Let $x, y \in R$.
+Then $x$, $y$ are associates if and only if $(x) = (y)$.
+\end{lemma}
+
+\begin{proof}
+If $x = uy$ for some unit $u \in R$, then $(x) \subset (y)$ and
+$y = u^{-1}x$ so also $(y) \subset (x)$. Conversely, suppose that
+$(x) = (y)$. Then $x = fy$ and $y = gx$ for some $f, g \in R$.
+Then $x = fg x$ and since $R$ is a domain $fg = 1$. Thus
+$x$ and $y$ are associates.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-factorization-exists}
+Let $R$ be a domain. Consider the following conditions:
+\begin{enumerate}
+\item The ring $R$ satisfies the ascending chain condition for
+principal ideals.
+\item Every nonzero, nonunit element $a \in R$
+has a factorization $a = b_1 \ldots b_k$
+with each $b_i$ an irreducible element of $R$.
+\end{enumerate}
+Then (1) implies (2).
+\end{lemma}
+
+\begin{proof}
+Let $x$ be a nonzero element, not a unit, which does not have a
+factorization into
+irreducibles. Set $x_1 = x$. Since $x$ is in particular not irreducible,
+we can write $x = yz$ where neither $y$ nor $z$ is a unit. At least one
+of $y$, $z$ does not have a factorization into irreducibles, since
+otherwise combining factorizations of $y$ and $z$ would give one for $x$.
+If $y$ does not have a factorization into irreducibles we set $x_2 = y$,
+and otherwise we set $x_2 = z$.
+Continuing in this fashion we find a sequence
+$$
+x_1 | x_2 | x_3 | \ldots
+$$
+of elements of $R$ with $x_n/x_{n + 1}$ not a unit.
+This gives a strictly increasing sequence of principal ideals
+$(x_1) \subset (x_2) \subset (x_3) \subset \ldots$ thereby finishing the proof.
+\end{proof}
+
+\begin{definition}
+\label{definition-UFD}
+A {\it unique factorization domain}, abbreviated {\it UFD},
+is a domain $R$ such that
+if $x \in R$ is a nonzero, nonunit, then $x$ has a factorization
+into irreducibles, and if
+$$
+x = a_1 \ldots a_m = b_1 \ldots b_n
+$$
+are factorizations into irreducibles then $n = m$ and
+there exists a permutation $\sigma : \{1, \ldots, n\} \to \{1, \ldots, n\}$
+such that $a_i$ and $b_{\sigma(i)}$ are associates.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-characterize-UFD}
+Let $R$ be a domain. Assume every nonzero, nonunit factors into
+irreducibles. Then $R$ is a UFD if and only if every irreducible
+element is prime.
+\end{lemma}
+
+\begin{proof}
+Assume $R$ is a UFD and let $x \in R$ be an irreducible element.
+Say $ab \in (x)$, i.e., $ab = cx$. Choose factorizations
+$a = a_1 \ldots a_n$, $b = b_1 \ldots b_m$, and $c = c_1 \ldots c_r$.
+By uniqueness of the factorization
+$$
+a_1 \ldots a_n b_1 \ldots b_m = c_1 \ldots c_r x
+$$
+we find that $x$ is an associate of one of the elements
+$a_1, \ldots, b_m$. In other words, either $a \in (x)$ or $b \in (x)$
+and we conclude that $x$ is prime.
+
+\medskip\noindent
+Assume every irreducible element is prime. We have to prove that
+factorization into irreducibles is unique up to permutation and
+taking associates. Say $a_1 \ldots a_m = b_1 \ldots b_n$ with $a_i$
+and $b_j$ irreducible. Since $a_1$ is prime, we see that
+$b_j \in (a_1)$ for some $j$. After renumbering we may assume
+$b_1 \in (a_1)$. Then $b_1 = a_1 u$ and since $b_1$ is irreducible
+we see that $u$ is a unit. Hence $a_1$ and $b_1$ are associates
and $a_2 \ldots a_m = u b_2 \ldots b_n$. By induction on $n + m$
we see that $n = m$ and that $a_i$ is an associate of $b_{\sigma(i)}$
for $i = 2, \ldots, n$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-UFD-height-1}
+Let $R$ be a Noetherian domain. Then $R$ is a UFD if and only if every
+height $1$ prime ideal is principal.
+\end{lemma}
+
+\begin{proof}
+Assume $R$ is a UFD and let $\mathfrak p$ be a height 1 prime ideal.
+Take $x \in \mathfrak p$ nonzero and let $x = a_1 \ldots a_n$
+be a factorization into irreducibles.
+Since $\mathfrak p$ is prime we see that $a_i \in \mathfrak p$
+for some $i$. By Lemma \ref{lemma-characterize-UFD} the ideal
+$(a_i)$ is prime. Since $\mathfrak p$ has height $1$ we conclude
+that $(a_i) = \mathfrak p$.
+
+\medskip\noindent
+Assume every height $1$ prime is principal. Since $R$ is Noetherian
+every nonzero nonunit element $x$ has a factorization into irreducibles,
+see Lemma \ref{lemma-factorization-exists}. It suffices to prove that
+an irreducible element $x$ is prime, see Lemma \ref{lemma-characterize-UFD}.
+Let $(x) \subset \mathfrak p$ be a prime minimal over $(x)$. Then
+$\mathfrak p$ has height $1$ by Lemma \ref{lemma-minimal-over-1}.
+By assumption $\mathfrak p = (y)$. Hence $x = yz$ and $z$ is a unit
+as $x$ is irreducible. Thus $(x) = (y)$ and we see that $x$ is prime.
+\end{proof}
+
+\begin{lemma}[Nagata's criterion for factoriality]
+\label{lemma-invert-prime-elements}
+\begin{reference}
+\cite[Lemma 2]{Nagata-UFD}
+\end{reference}
+Let $A$ be a domain. Let $S \subset A$ be a multiplicative subset
+generated by prime elements. Let $x \in A$ be irreducible. Then
+\begin{enumerate}
+\item the image of $x$ in $S^{-1}A$ is irreducible or a unit, and
+\item $x$ is prime if and only if the image of $x$ in $S^{-1}A$ is
+a unit or a prime element in $S^{-1}A$.
+\end{enumerate}
Moreover, $A$ is a UFD if and only if every nonzero nonunit of $A$ has a
factorization into irreducibles and $S^{-1}A$ is a UFD.
+\end{lemma}
+
+\begin{proof}
+Say $x = \alpha \beta$ for $\alpha, \beta \in S^{-1}A$. Then
+$\alpha = a/s$ and $\beta = b/s'$ for $a, b \in A$, $s, s' \in S$.
+Thus we get $ss'x = ab$. By assumption we can write
+$ss' = p_1 \ldots p_r$ for some prime elements $p_i$.
+For each $i$ the element $p_i$ divides either $a$ or $b$.
+Dividing we find a factorization $x = a' b'$ and
+$a = s'' a'$, $b = s''' b'$ for some $s'', s''' \in S$.
+As $x$ is irreducible, either $a'$ or $b'$ is a unit.
+Tracing back we find that either $\alpha$ or $\beta$ is a unit.
+This proves (1).
+
+\medskip\noindent
+Suppose $x$ is prime. Then $A/(x)$ is a domain. Hence
+$S^{-1}A/xS^{-1}A = S^{-1}(A/(x))$ is a domain or zero.
+Thus $x$ maps to a prime element or a unit.
+
+\medskip\noindent
+Suppose that the image of $x$ in $S^{-1}A$ is a unit.
+Then $y x = s$ for some $s \in S$ and $y \in A$. By assumption
+$s = p_1 \ldots p_r$ with $p_i$ a prime element. For each $i$ either
+$p_i$ divides $y$ or $p_i$ divides $x$. In the second case
+$p_i$ and $x$ are associates (as $x$ is irreducible) and we are done.
+But if the first case happens for all $i = 1, \ldots, r$, then
+$x$ is a unit which is a contradiction.
+
+\medskip\noindent
+Suppose that the image of $x$ in $S^{-1}A$ is a prime element.
+Assume $a, b \in A$ and $ab \in (x)$. Then $sa = xy$ or
+$sb = xy$ for some $s \in S$ and $y \in A$. Say the first
+case happens. By assumption
+$s = p_1 \ldots p_r$ with $p_i$ a prime element. For each $i$ either
+$p_i$ divides $y$ or $p_i$ divides $x$. In the second case
+$p_i$ and $x$ are associates (as $x$ is irreducible) and we are done.
+If the first case happens for all $i = 1, \ldots, r$, then
+$a \in (x)$ as desired. This completes the proof of (2).
+
+\medskip\noindent
+The final statement of the lemma follows from (1) and (2)
+and Lemma \ref{lemma-characterize-UFD}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-UFD-ascending-chain-condition-principal-ideals}
+A UFD satisfies the ascending chain condition for principal
+ideals.
+\end{lemma}
+
+\begin{proof}
Let $R$ be a UFD and consider an ascending chain
$(a_1) \subset (a_2) \subset (a_3) \subset \ldots$
of principal ideals in $R$. If all $a_n$ are zero the chain is constant;
otherwise, after discarding the initial zero terms, we may assume
$a_1 \not = 0$. Write $a_1 = u p_1^{e_1} \ldots p_r^{e_r}$
with $u$ a unit and the $p_i$ prime. Then we see that $a_n$ is an
associate of $p_1^{c_1} \ldots p_r^{c_r}$ for some $0 \leq c_i \leq e_i$.
+Since there are only finitely many possibilities we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-factoring-in-polynomial}
+Let $R$ be a domain. Assume $R$ has the ascending chain condition
+for principal ideals. Then the same property holds for a polynomial
+ring over $R$.
+\end{lemma}
+
+\begin{proof}
+Consider an ascending chain $(f_1) \subset (f_2) \subset (f_3) \subset \ldots$
of principal ideals in $R[x]$. We may assume $f_1 \not = 0$ (if all
$f_n$ are zero the chain is constant, and otherwise we may discard the
initial zero terms). Since $f_{n + 1}$ divides $f_n$ the degrees do not
increase along the sequence. Thus $f_n$
has fixed degree $d \geq 0$ for all $n \gg 0$. Let $a_n$ be the
+leading coefficient of $f_n$. The condition $f_n \in (f_{n + 1})$
+implies that $a_{n + 1}$ divides $a_n$ for all $n$.
+By our assumption on $R$ we see that $a_{n + 1}$ and $a_n$
+are associates for all $n$ large enough (Lemma \ref{lemma-easy-divisibility}).
+Thus for large $n$ we see that $f_n = u f_{n + 1}$ where
+$u \in R$ (for reasons of degree) is a unit (as $a_n$ and $a_{n + 1}$
+are associates).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-polynomial-ring-UFD}
+A polynomial ring over a UFD is a UFD. In particular, if $k$ is a field,
+then $k[x_1, \ldots, x_n]$ is a UFD.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a UFD. Then $R$ satisfies the ascending chain condition for
+principal ideals
+(Lemma \ref{lemma-UFD-ascending-chain-condition-principal-ideals}),
+hence $R[x]$ satisfies the ascending chain condition for principal
ideals (Lemma \ref{lemma-factoring-in-polynomial}), and hence every
nonzero nonunit of $R[x]$ has a factorization into irreducibles
(Lemma \ref{lemma-factorization-exists}).
Let $S \subset R$ be the multiplicative subset
generated by prime elements. Since every nonzero nonunit of $R$ is a
product of prime elements we see that $K = S^{-1}R$ is the fraction field
of $R$. Observe that every prime element of $R$ maps to a prime element
+of $R[x]$ and that $S^{-1}(R[x]) = S^{-1}R[x] = K[x]$ is a
+UFD (and even a PID). Thus we may apply
+Lemma \ref{lemma-invert-prime-elements} to conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-UFD-normal}
+A unique factorization domain is normal.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a UFD. Let $x$ be an element of the fraction field of
+$R$ which is integral over $R$. Say $x^d - a_1 x^{d - 1} - \ldots - a_d = 0$
+with $a_i \in R$. We can write
$x = u p_1^{e_1} \ldots p_r^{e_r}$ with $u$ a unit, $e_i \in \mathbf{Z}$,
and $p_1, \ldots, p_r$ pairwise non-associate irreducible elements.
+To prove the lemma we have to show $e_i \geq 0$. If not, say $e_1 < 0$,
+then for $N \gg 0$ we get
+$$
+u^d p_2^{de_2 + N} \ldots p_r^{de_r + N} =
+p_1^{-de_1}p_2^N \ldots p_r^N(
+\sum\nolimits_{i = 1, \ldots, d} a_i x^{d - i} ) \in (p_1)
+$$
+which contradicts uniqueness of factorization in $R$.
+\end{proof}
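
\noindent
This lemma is often used in contrapositive form. For example,
$\mathbf{Z}[\sqrt{5}]$ is not a UFD: the element $x = (1 + \sqrt{5})/2$
of its fraction field satisfies
$$
x^2 - x - 1 = 0,
$$
hence is integral over $\mathbf{Z}[\sqrt{5}]$, yet
$x \not \in \mathbf{Z}[\sqrt{5}]$. Thus $\mathbf{Z}[\sqrt{5}]$ is not
normal, so it cannot be a UFD.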
+
+\begin{definition}
+\label{definition-PID}
+A {\it principal ideal domain}, abbreviated {\it PID},
+is a domain $R$ such that every ideal is a principal ideal.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-PID-UFD}
+A principal ideal domain is a unique factorization domain.
+\end{lemma}
+
+\begin{proof}
+As a PID is Noetherian this follows from
+Lemma \ref{lemma-characterize-UFD-height-1}.
+\end{proof}
+
+\begin{definition}
+\label{definition-dedekind-domain}
+A {\it Dedekind domain} is a domain $R$ such that every
+nonzero ideal $I \subset R$ can be written as a product
+$$
+I = \mathfrak p_1 \ldots \mathfrak p_r
+$$
+of nonzero prime ideals uniquely up to permutation of the $\mathfrak p_i$.
+\end{definition}
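
\noindent
For example $\mathbf{Z}$ is a Dedekind domain (see
Lemma \ref{lemma-PID-dedekind} below); the factorization of the ideal
$(12)$ is $(12) = (2)(2)(3)$. A Dedekind domain need not be a UFD.
For instance $\mathbf{Z}[\sqrt{-5}]$, the integral closure of
$\mathbf{Z}$ in $\mathbf{Q}(\sqrt{-5})$, is a Dedekind domain by
Lemma \ref{lemma-integral-closure-Dedekind} below, and in it
$$
(6) = (2, 1 + \sqrt{-5})^2 (3, 1 + \sqrt{-5}) (3, 1 - \sqrt{-5})
$$
is a factorization into nonprincipal prime ideals, even though the
element $6$ admits two essentially different factorizations into
irreducibles.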
+
+\begin{lemma}
+\label{lemma-PID-dedekind}
+A PID is a Dedekind domain.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a PID. Since every nonzero ideal of $R$ is principal,
+and $R$ is a UFD (Lemma \ref{lemma-PID-UFD}), this follows from
+the fact that every irreducible element in $R$ is prime
+(Lemma \ref{lemma-characterize-UFD})
+so that factorizations of elements turn into factorizations into primes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-ideals-principal}
+\begin{slogan}
+A product of ideals is an invertible module iff both factors are.
+\end{slogan}
+Let $A$ be a ring. Let $I$ and $J$ be nonzero ideals of $A$
such that $IJ = (f)$ for some nonzerodivisor $f \in A$. Then $I$ and $J$ are
finitely generated ideals and finite locally free of rank $1$ as $A$-modules.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that $I$ and $J$ are finite locally free $A$-modules
+of rank $1$, see Lemma \ref{lemma-finite-projective}.
+To do this, write $f = \sum_{i = 1, \ldots, n} x_i y_i$ with $x_i \in I$
+and $y_i \in J$. We can
+also write $x_i y_i = a_i f$ for some $a_i \in A$.
+Since $f$ is a nonzerodivisor we see that $\sum a_i = 1$.
Thus it suffices to show that each $I_{a_i}$ and
$J_{a_i}$ is free of rank $1$ over $A_{a_i}$. After replacing $A$ by
$A_{a_i}$ we may assume that $f = xy$ for some $x \in I$ and $y \in J$.
+Note that both $x$ and $y$ are nonzerodivisors. We claim that
+$I = (x)$ and $J = (y)$ which finishes the proof. Namely, if $x' \in I$,
+then $x'y = af = axy$ for some $a \in A$. Hence $x' = ax$ and we win.
+\end{proof}
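
\noindent
For example, take $A = \mathbf{Z}[\sqrt{-5}]$ and
$I = J = (2, 1 + \sqrt{-5})$. Then
$$
I^2 = (4,\ 2(1 + \sqrt{-5}),\ -4 + 2\sqrt{-5})
= (2)\,(2,\ 1 + \sqrt{-5},\ -2 + \sqrt{-5}) = (2)
$$
because the ideal $(2, 1 + \sqrt{-5}, -2 + \sqrt{-5})$ contains
$(1 + \sqrt{-5}) - (-2 + \sqrt{-5}) = 3$ as well as $2$, hence contains
$1$. Thus the lemma shows that $I$ is finite locally free of rank $1$
as an $A$-module. It is not free, since a free ideal of rank $1$ is
principal and $I$ is not: the norm $a^2 + 5b^2$ of a generator would
have to divide both $4$ and $6$, no element of $A$ has norm $2$, and
$I \not = A$ as $A/I \cong \mathbf{F}_2$.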
+
+\begin{lemma}
+\label{lemma-characterize-Dedekind}
+Let $R$ be a ring. The following are equivalent
+\begin{enumerate}
+\item $R$ is a Dedekind domain,
+\item $R$ is a Noetherian domain, and for every maximal ideal $\mathfrak m$
+the local ring $R_{\mathfrak m}$ is a discrete valuation ring, and
+\item $R$ is a Noetherian, normal domain, and $\dim(R) \leq 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). The argument is nontrivial because we did not assume that $R$
was Noetherian in our definition of a Dedekind domain. Let
$\mathfrak p \subset R$ be a nonzero prime ideal. Observe that
$\mathfrak p \not = \mathfrak p^2$ by uniqueness
of the factorizations in the definition. Pick $x \in \mathfrak p$
+with $x \not \in \mathfrak p^2$. Let $y \in \mathfrak p$ be a
+second element (for example $y = 0$).
+Write $(x, y) = \mathfrak p_1 \ldots \mathfrak p_r$.
+Since $(x, y) \subset \mathfrak p$ at least one of the primes
+$\mathfrak p_i$ is contained in $\mathfrak p$.
+But as $x \not \in \mathfrak p^2$ there is at most one.
+Thus exactly one of $\mathfrak p_1, \ldots, \mathfrak p_r$ is contained
+in $\mathfrak p$, say $\mathfrak p_1 \subset \mathfrak p$.
+We conclude that $(x, y)R_\mathfrak p = \mathfrak p_1R_\mathfrak p$
+is prime for every choice of $y$. We claim that
+$(x)R_\mathfrak p = \mathfrak pR_\mathfrak p$. Namely,
+pick $y \in \mathfrak p$. By the above applied with $y^2$ we see
+that $(x, y^2)R_\mathfrak p$ is prime.
Hence $y \in (x, y^2)R_\mathfrak p$, i.e., $y = ax + by^2$ in
$R_\mathfrak p$. Thus $(1 - by)y = ax \in (x)R_\mathfrak p$. Since
$1 - by$ is a unit in the local ring $R_\mathfrak p$ we conclude
$y \in (x)R_\mathfrak p$ as desired.
+
+\medskip\noindent
+Writing $(x) = \mathfrak p_1 \ldots \mathfrak p_r$ anew with
+$\mathfrak p_1 \subset \mathfrak p$ we conclude that
+$\mathfrak p_1 R_\mathfrak p = \mathfrak p R_\mathfrak p$, i.e.,
+$\mathfrak p_1 = \mathfrak p$. Moreover, $\mathfrak p_1 = \mathfrak p$ is
+a finitely generated ideal of $R$ by
+Lemma \ref{lemma-product-ideals-principal}.
+We conclude that $R$ is Noetherian by Lemma \ref{lemma-cohen}.
Moreover, it follows that $R_\mathfrak p$ is a discrete
valuation ring for every nonzero prime ideal $\mathfrak p$, see
Lemma \ref{lemma-characterize-dvr}.
+
+\medskip\noindent
+The equivalence of (2) and (3) follows from
+Lemmas \ref{lemma-normality-is-local} and
\ref{lemma-characterize-dvr}. Assume (2) and (3) are satisfied.
Let $I \subset R$ be a nonzero ideal. We will construct a factorization
of $I$. If $I$ is prime, then there is nothing to prove.
If not, pick $I \subset \mathfrak p$ with $\mathfrak p \subset R$
maximal. Let $J = \{x \in R \mid x \mathfrak p \subset I\}$.
We claim $J \mathfrak p = I$. It suffices to check this after localization
at the maximal ideals $\mathfrak m$ of $R$ (the formation of $J$ commutes with
localization and we use Lemma \ref{lemma-characterize-zero-local}).
Then either $\mathfrak p R_\mathfrak m = R_\mathfrak m$ and the result
is clear, or $\mathfrak p R_\mathfrak m = \mathfrak m R_\mathfrak m$.
In the second case $R_\mathfrak m$ is a discrete valuation ring with
uniformizer $\pi$ and $\mathfrak p R_\mathfrak m = (\pi)$; here the
claim is immediate, as $I R_\mathfrak m = (\pi^n)$ for some $n \geq 1$
and then $J R_\mathfrak m = (\pi^{n - 1})$. Note that $I \subset J$ and
$I \not = J$: if $J = I$, then $I \mathfrak p = I$ and Nakayama's lemma
would give $I R_\mathfrak m = 0$ for a maximal ideal
$\mathfrak m \supset \mathfrak p$, contradicting $I \not = 0$.
By Noetherian induction the
ideal $J$ has a factorization and we obtain the desired factorization
of $I$. We omit the proof of uniqueness of the factorization.
+\end{proof}
+
+\noindent
+The following is a variant of the Krull-Akizuki lemma.
+
+\begin{lemma}
+\label{lemma-integral-closure-Dedekind}
+Let $A$ be a Noetherian domain of dimension $1$ with fraction field $K$.
+Let $L/K$ be a finite extension. Let $B$ be the
+integral closure of $A$ in $L$. Then $B$ is a Dedekind domain and
+$\Spec(B) \to \Spec(A)$ is surjective, has finite fibres, and
+induces finite residue field extensions.
+\end{lemma}
+
+\begin{proof}
+By Krull-Akizuki (Lemma \ref{lemma-krull-akizuki})
+the ring $B$ is Noetherian. By Lemma \ref{lemma-integral-sub-dim-equal}
+$\dim(B) = 1$. Thus $B$ is a Dedekind domain by
+Lemma \ref{lemma-characterize-Dedekind}.
+Surjectivity of the map on spectra follows from
+Lemma \ref{lemma-integral-overring-surjective}.
+The last two statements follow from
+Lemma \ref{lemma-finite-extension-residue-fields-dimension-1}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Orders of vanishing}
+\label{section-orders-of-vanishing}
+
+\begin{lemma}
+\label{lemma-ord-additive}
+Let $R$ be a semi-local Noetherian ring of dimension $1$.
+If $a, b \in R$ are nonzerodivisors then
+$$
+\text{length}_R(R/(ab)) =
+\text{length}_R(R/(a)) +
+\text{length}_R(R/(b))
+$$
+and these lengths are finite.
+\end{lemma}
+
+\begin{proof}
+We saw the finiteness in Lemma \ref{lemma-finite-length-global}.
+Additivity holds since there is a short exact sequence
+$0 \to R/(a) \to R/(ab) \to R/(b) \to 0$ where the first map
+is given by multiplication by $b$. (Use length is additive,
+see Lemma \ref{lemma-length-additive}.)
+\end{proof}
+
+\begin{definition}
+\label{definition-ord}
+Suppose that $K$ is a field, and $R \subset K$ is a
+local\footnote{We could also define this when $R$ is only
+semi-local but this is probably never really what you want!}
+Noetherian subring of dimension $1$ with fraction field $K$.
+In this case we define the {\it order of vanishing along $R$}
+$$
+\text{ord}_R : K^* \longrightarrow \mathbf{Z}
+$$
+by the rule
+$$
+\text{ord}_R(x) = \text{length}_R(R/(x))
+$$
if $x \in R$ is nonzero, and we set
$\text{ord}_R(x/y) = \text{ord}_R(x) - \text{ord}_R(y)$
for $x, y \in R$ both nonzero. This is well defined (independent of the
representation of an element of $K^*$ as a quotient) and additive by
Lemma \ref{lemma-ord-additive}.
+\end{definition}
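
\noindent
If $R$ is a discrete valuation ring with uniformizer $\pi$, then
$\text{ord}_R$ is the usual valuation: writing $x = \pi^n u$ with
$u \in R^*$ we have
$$
\text{ord}_R(x) = \text{length}_R(R/(\pi^n)) = n
$$
as $R/(\pi^n)$ has the filtration by the submodules $(\pi^i)/(\pi^n)$,
$i = 0, \ldots, n$. For instance, for
$R = \mathbf{Z}_{(p)} \subset K = \mathbf{Q}$ we get
$\text{ord}_R(p^n a/b) = n$ for integers $a, b$ prime to $p$.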
+
+\noindent
+We can use the order of vanishing to compare lattices in a
+vector space. Here is the definition.
+
+\begin{definition}
+\label{definition-lattice}
+Let $R$ be a Noetherian local domain of dimension $1$ with
+fraction field $K$. Let $V$ be a finite dimensional $K$-vector space.
+A {\it lattice in $V$} is a finite $R$-submodule $M \subset V$ such
+that $V = K \otimes_R M$.
+\end{definition}
+
+\noindent
+The condition $V = K \otimes_R M$ signifies that $M$ contains a
basis for the vector space $V$. We remark that in many places in the
literature the notion of a lattice is defined only in case the
ring $R$ is a discrete valuation ring. If $R$ is a discrete valuation
ring, then every lattice is a free $R$-module; for more general $R$
this need not be the case.
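
\noindent
Here is an example showing that a lattice need not be free. Let $k$ be a
field and let $R = k[[t^2, t^3]] \subset k[[t]]$. Then $R$ is a
Noetherian local domain of dimension $1$ whose fraction field is
$K = k((t))$. Take $V = K$ and $M = (t^2, t^3) \subset K$. Then $M$ is a
finite $R$-module containing the nonzero vector $t^2$, hence a lattice
in $V$. But $M$ is not free: a free rank $1$ submodule of $K$ is
generated by one element, and comparing $t$-adic valuations a generator
of $M$ would have to be $t^2$ times a unit, which is impossible as
$t^3 \not \in t^2 R$.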
+
+\begin{lemma}
+\label{lemma-compare-lattices}
+Let $R$ be a Noetherian local domain of dimension $1$ with
+fraction field $K$. Let $V$ be a finite dimensional $K$-vector space.
+\begin{enumerate}
+\item If $M$ is a lattice in $V$ and $M \subset M' \subset V$
+is an $R$-submodule of $V$ containing $M$
+then the following are equivalent
+\begin{enumerate}
+\item $M'$ is a lattice,
+\item $\text{length}_R(M'/M)$ is finite, and
+\item $M'$ is finitely generated.
+\end{enumerate}
+\item If $M$ is a lattice in $V$ and $M' \subset M$ is an $R$-submodule
+of $M$ then $M'$ is a lattice if and only if
+$\text{length}_R(M/M')$ is finite.
+\item If $M$, $M'$ are lattices in $V$, then so are
+$M \cap M'$ and $M + M'$.
+\item If $M \subset M' \subset M'' \subset V$ are lattices in $V$
+then
+$$
+\text{length}_R(M''/M) =
+\text{length}_R(M'/M) +
+\text{length}_R(M''/M').
+$$
+\item If $M$, $M'$, $N$, $N'$ are lattices in $V$ and
+$N \subset M \cap M'$, $M + M' \subset N'$, then we have
+\begin{eqnarray*}
+& & \text{length}_R(M/M \cap M') - \text{length}_R(M'/M \cap M')\\
+& = &
+\text{length}_R(M/N) - \text{length}_R(M'/N) \\
+& = &
+\text{length}_R(M + M' / M') - \text{length}_R(M + M'/M) \\
+& = &
+\text{length}_R(N' / M') - \text{length}_R(N'/M)
+\end{eqnarray*}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Assume (1)(a). Say $y_1, \ldots, y_m$ generate $M'$.
+Then each $y_i = x_i/f_i$ for some $x_i \in M$ and
+nonzero $f_i \in R$.
+Hence we see that $f_1 \ldots f_m M' \subset M$.
+Since $R$ is Noetherian local of dimension $1$
we see that $\mathfrak m^n \subset (f_1 \ldots f_m)$
for some $n$ (for example combine
Lemma \ref{lemma-one-equation} and
Proposition \ref{proposition-dimension-zero-ring} or combine
Lemmas \ref{lemma-finite-length} and \ref{lemma-length-infinite}).
In other words $\mathfrak m^n M' \subset M$ for some $n$. Hence
+$\text{length}(M'/M) < \infty$ by Lemma \ref{lemma-length-finite},
+in other words (1)(b) holds.
+Assume (1)(b). Then $M'/M$ is a finite $R$-module
+(see Lemma \ref{lemma-finite-length-finite}).
+Hence $M'$ is a finite $R$-module as an extension of finite $R$-modules.
+Hence (1)(c). The implication
+(1)(c) $\Rightarrow$ (1)(a) follows from the remark following
+Definition \ref{definition-lattice}.
+
+\medskip\noindent
+Proof of (2). Suppose
+$M$ is a lattice in $V$ and $M' \subset M$ is an $R$-submodule.
+We have seen in (1) that if $M'$ is a lattice, then
$\text{length}_R(M/M') < \infty$. Conversely, assume that
$\text{length}_R(M/M') < \infty$. Then $M'$ is finitely generated,
being a submodule of the finite module $M$ over the Noetherian ring $R$,
and for some $n$ we have
$\mathfrak m^n M \subset M'$ (Lemma \ref{lemma-length-infinite}).
Hence for any nonzero $t \in \mathfrak m^n$ we have $tM \subset M'$, so
$M'$ contains a basis for $V$, and $M'$ is a lattice.
+
+\medskip\noindent
+Proof of (3). Assume $M$, $M'$ are lattices in $V$.
+Since $R$ is Noetherian the submodule $M \cap M'$ of $M$ is finite.
+As $M$ is a lattice we can find
+$x_1, \ldots, x_n \in M$ which form a $K$-basis for
+$V$. Because $M'$ is a lattice we can write $x_i = y_i/f_i$ with
+$y_i \in M'$ and $f_i \in R$. Hence $f_ix_i \in M \cap M'$. Hence
+$M \cap M'$ is a lattice also.
+The fact that $M + M'$ is a lattice follows from part (1).
+
+\medskip\noindent
+Part (4) follows from additivity of lengths
+(Lemma \ref{lemma-length-additive})
+and the exact sequence
+$$
+0 \to M'/M \to M''/M \to M''/M' \to 0
+$$
+Part (5) follows from repeatedly applying part (4).
+\end{proof}
+
+\begin{definition}
+\label{definition-distance}
+Let $R$ be a Noetherian local domain of dimension $1$ with
+fraction field $K$. Let $V$ be a finite dimensional $K$-vector space.
+Let $M$, $M'$ be two lattices in $V$. The {\it distance between
+$M$ and $M'$} is the integer
+$$
+d(M, M') = \text{length}_R(M/M \cap M') - \text{length}_R(M'/M \cap M')
+$$
+of Lemma \ref{lemma-compare-lattices} part (5).
+\end{definition}
+
+\noindent
+In particular, if $M' \subset M$, then
+$d(M, M') = \text{length}_R(M/M')$.
+
+\begin{lemma}
+\label{lemma-properties-distance-function}
+Let $R$ be a Noetherian local domain of dimension $1$ with
+fraction field $K$. Let $V$ be a finite dimensional $K$-vector space.
The distance function of Definition \ref{definition-distance}
has the property that
$$
d(M, M'') = d(M, M') + d(M', M'')
$$
for any three lattices $M$, $M'$, $M''$ of $V$.
In particular we have $d(M, M') = - d(M', M)$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-order-vanishing-determinant}
+Let $R$ be a Noetherian local domain of dimension $1$ with
+fraction field $K$. Let $V$ be a finite dimensional $K$-vector space.
+Let $\varphi : V \to V$ be a $K$-linear isomorphism.
+For any lattice $M \subset V$ we have
+$$
+d(M, \varphi(M)) = \text{ord}_R(\det(\varphi))
+$$
+\end{lemma}
+
+\begin{proof}
+We can see that the integer $d(M, \varphi(M))$ does not depend
+on the lattice $M$ as follows. Suppose that $M'$ is a second such
+lattice. Then we see that
+\begin{eqnarray*}
+d(M, \varphi(M)) & = & d(M, M') + d(M', \varphi(M)) \\
+& = & d(M, M') + d(\varphi(M'), \varphi(M)) + d(M', \varphi(M'))
+\end{eqnarray*}
+Since $\varphi$ is an isomorphism we see that
+$d(\varphi(M'), \varphi(M)) = d(M', M) = -d(M, M')$, and hence
+$d(M, \varphi(M)) = d(M', \varphi(M'))$. Moreover, both sides of the
+equation (of the lemma) are additive in $\varphi$, i.e.,
+$$
+\text{ord}_R(\det(\varphi \circ \psi))
+=
+\text{ord}_R(\det(\varphi))
++
+\text{ord}_R(\det(\psi))
+$$
+and also
+\begin{eqnarray*}
d(M, \varphi(\psi(M))) & = &
+d(M, \psi(M)) + d(\psi(M), \varphi(\psi(M))) \\
+& = & d(M, \psi(M)) + d(M, \varphi(M))
+\end{eqnarray*}
+by the independence shown above. Hence it suffices to prove the lemma
+for generators of $\text{GL}(V)$. Choose an isomorphism
+$K^{\oplus n} \cong V$. Then $\text{GL}(V) = \text{GL}_n(K)$ is
+generated by elementary matrices $E$.
+The result is clear for $E$ equal to the identity matrix.
+If $E = E_{ij}(\lambda)$ with $i \not = j$, $\lambda \in K$,
+$\lambda \not = 0$, for example
+$$
+E_{12}(\lambda) =
+\left(
+\begin{matrix}
+1 & \lambda & \ldots \\
+0 & 1 & \ldots \\
+\ldots & \ldots & \ldots
+\end{matrix}
+\right)
+$$
+then with respect to a different basis we get $E_{12}(1)$.
+The result is clear for $E = E_{12}(1)$ by taking as lattice
+$R^{\oplus n} \subset K^{\oplus n}$. Finally, if $E = E_i(a)$,
+with $a \in K^*$ for example
+$$
+E_1(a) =
+\left(
+\begin{matrix}
+a & 0 & \ldots \\
+0 & 1 & \ldots \\
+\ldots & \ldots & \ldots
+\end{matrix}
+\right)
+$$
then $E_1(a)(R^{\oplus n}) = aR \oplus R^{\oplus n - 1}$ and
+it is clear that $d(R^{\oplus n}, aR \oplus R^{\oplus n - 1})
+= \text{ord}_R(a)$ as desired.
+\end{proof}
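
\noindent
As a sanity check, suppose $R$ is a discrete valuation ring with
uniformizer $\pi$, $V = K^{\oplus 2}$, $M = R^{\oplus 2}$, and
$\varphi = \text{diag}(\pi^a, \pi^b)$ with $a, b \geq 0$. Then
$\varphi(M) = \pi^a R \oplus \pi^b R \subset M$ and
$$
d(M, \varphi(M)) = \text{length}_R(M/\varphi(M)) = a + b
= \text{ord}_R(\pi^{a + b}) = \text{ord}_R(\det(\varphi))
$$
in accordance with the lemma.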
+
+\begin{lemma}
+\label{lemma-finite-extension-dim-1}
+Let $A \to B$ be a ring map. Assume
+\begin{enumerate}
+\item $A$ is a Noetherian local domain of dimension $1$,
+\item $A \subset B$ is a finite extension of domains.
+\end{enumerate}
+Let $L/K$ be the corresponding finite extension of fraction fields.
+Let $y \in L^*$ and $x = \text{Nm}_{L/K}(y)$.
+In this situation $B$ is semi-local.
+Let $\mathfrak m_i$, $i = 1, \ldots, n$ be the maximal ideals of $B$.
+Then
+$$
+\text{ord}_A(x) =
+\sum\nolimits_i
+[\kappa(\mathfrak m_i) : \kappa(\mathfrak m_A)]
+\text{ord}_{B_{\mathfrak m_i}}(y)
+$$
+where $\text{ord}$ is defined as in Definition \ref{definition-ord}.
+\end{lemma}
+
+\begin{proof}
+The ring $B$ is semi-local by Lemma \ref{lemma-finite-in-codim-1}.
+Write $y = b/b'$ for some $b, b' \in B$.
By the additivity of $\text{ord}$ and multiplicativity of
$\text{Nm}$ it suffices to prove the lemma for $y = b$ and for $y = b'$.
In other words we may assume $y \in B$.
+In this case the right hand side of the formula is
+$$
+\sum [\kappa(\mathfrak m_i) : \kappa(\mathfrak m_A)]
+\text{length}_{B_{\mathfrak m_i}}((B/yB)_{\mathfrak m_i})
+$$
+By Lemma \ref{lemma-pushdown-module} this is equal to
+$\text{length}_A(B/yB)$. By Lemma \ref{lemma-order-vanishing-determinant}
+we have
+$$
+\text{length}_A(B/yB) = d(B, yB) =
+\text{ord}_A(\det\nolimits_K(L \xrightarrow{y} L)).
+$$
+Since $x = \text{Nm}_{L/K}(y) = \det\nolimits_K(L \xrightarrow{y} L)$
+by definition the lemma is proved.
+\end{proof}
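
\noindent
As an illustration, let $A = \mathbf{Z}_{(5)}$, $K = \mathbf{Q}$,
$L = \mathbf{Q}(i)$, and
$B = \mathbf{Z}[i] \otimes_{\mathbf{Z}} \mathbf{Z}_{(5)}$. The ring $B$
has two maximal ideals $(2 + i)$ and $(2 - i)$, both with residue field
$\mathbf{F}_5$. For $y = 2 + i$ we have
$x = \text{Nm}_{L/K}(y) = (2 + i)(2 - i) = 5$, the element $2 + i$ is a
unit in $B_{(2 - i)}$, and the formula of the lemma reads
$$
1 = \text{ord}_A(5) =
[\mathbf{F}_5 : \mathbf{F}_5] \cdot \text{ord}_{B_{(2 + i)}}(2 + i) +
[\mathbf{F}_5 : \mathbf{F}_5] \cdot \text{ord}_{B_{(2 - i)}}(2 + i) =
1 \cdot 1 + 1 \cdot 0.
$$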
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Quasi-finite maps}
+\label{section-quasi-finite}
+
+\noindent
Consider a ring map $R \to S$ of finite type.
The map $\Spec(S) \to \Spec(R)$ is quasi-finite
at a point if that point is isolated in its fibre.
Equivalently, the fibre is zero dimensional at that point.
+In this section we study the basic properties of this
+important but technical notion. More advanced material
+can be found in the next section.
+
+\begin{lemma}
+\label{lemma-isolated-point}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Let $\mathfrak q$ be a prime of $S$.
+The following are equivalent:
+\begin{enumerate}
+\item $\mathfrak q$ is an isolated point of $\Spec(S)$,
+\item $S_{\mathfrak q}$ is finite over $k$,
+\item there exists a $g \in S$, $g \not\in \mathfrak q$ such that
+$D(g) = \{ \mathfrak q \}$,
+\item $\dim_{\mathfrak q} \Spec(S) = 0$,
+\item $\mathfrak q$ is a closed point of $\Spec(S)$ and
+$\dim(S_{\mathfrak q}) = 0$, and
+\item the field extension $\kappa(\mathfrak q)/k$ is finite
+and $\dim(S_{\mathfrak q}) = 0$.
+\end{enumerate}
+In this case $S = S_{\mathfrak q} \times S'$ for some
+finite type $k$-algebra $S'$. Also, the element $g$
+as in (3) has the property $S_{\mathfrak q} = S_g$.
+\end{lemma}
+
+\begin{proof}
+Suppose $\mathfrak q$ is an isolated point of $\Spec(S)$, i.e.,
+$\{\mathfrak q\}$ is open in $\Spec(S)$.
+Because $\Spec(S)$ is a Jacobson space (see
+Lemmas \ref{lemma-finite-type-field-Jacobson} and
+\ref{lemma-jacobson})
+we see that $\mathfrak q$ is a closed point. Hence
+$\{\mathfrak q\}$ is open and closed in $\Spec(S)$.
+By
+Lemmas \ref{lemma-disjoint-decomposition} and
+\ref{lemma-disjoint-implies-product} we may
write $S = S_1 \times S_2$ with $\mathfrak q$
corresponding to the only point of $\Spec(S_1)$.
+Hence $S_1 = S_{\mathfrak q}$ is a zero dimensional
+ring of finite type over $k$. Hence it is finite over $k$
+for example by Lemma \ref{lemma-Noether-normalization}.
+We have proved (1) implies (2).
+
+\medskip\noindent
+Suppose $S_{\mathfrak q}$ is finite over $k$.
+Then $S_{\mathfrak q}$ is Artinian local, see
+Lemma \ref{lemma-finite-dimensional-algebra}. So
+$\Spec(S_{\mathfrak q}) = \{\mathfrak qS_{\mathfrak q}\}$ by
+Lemma \ref{lemma-artinian-finite-length}.
+Consider the exact sequence $0 \to K \to S \to S_{\mathfrak q}
+\to Q \to 0$. It is clear that $K_{\mathfrak q} = Q_{\mathfrak q} = 0$.
+Also, $K$ is a finite $S$-module as $S$ is Noetherian and
+$Q$ is a finite $S$-module since $S_{\mathfrak q}$ is finite over $k$.
+Hence there exists $g \in S$, $g \not \in \mathfrak q$ such that
+$K_g = Q_g = 0$. Thus $S_{\mathfrak q} = S_g$ and
+$D(g) = \{ \mathfrak q \}$. We have proved that (2) implies (3).
+
+\medskip\noindent
+Suppose $D(g) = \{ \mathfrak q \}$. Since $D(g)$ is open by
+construction of the topology on $\Spec(S)$ we see that
+$\mathfrak q$ is an isolated point of $\Spec(S)$.
+We have proved that (3) implies (1).
+In other words (1), (2) and (3) are equivalent.
+
+\medskip\noindent
+Assume $\dim_{\mathfrak q} \Spec(S) = 0$. This means that
+there is some open neighbourhood of $\mathfrak q$ in $\Spec(S)$
+which has dimension zero. Then there is an open neighbourhood of the
+form $D(g)$ which has dimension zero. Since $S_g$ is Noetherian
+we conclude that $S_g$ is Artinian and
+$D(g) = \Spec(S_g)$ is a finite discrete set, see
+Proposition \ref{proposition-dimension-zero-ring}.
+Thus $\mathfrak q$ is an isolated point of $D(g)$ and,
+by the equivalence of (1) and (2) above applied to
+$\mathfrak qS_g \subset S_g$, we see that
+$S_{\mathfrak q} = (S_g)_{\mathfrak qS_g}$ is finite over $k$.
+Hence (4) implies (2). It is clear that (1) implies (4).
+Thus (1) -- (4) are all equivalent.
+
+\medskip\noindent
+Lemma \ref{lemma-dimension-closed-point-finite-type-field}
+gives the implication (5) $\Rightarrow$ (4).
+The implication (4) $\Rightarrow$ (6) follows from
+Lemma \ref{lemma-dimension-at-a-point-finite-type-field}.
+The implication (6) $\Rightarrow$ (5) follows from
+Lemma \ref{lemma-finite-residue-extension-closed}.
+At this point we know (1) -- (6) are equivalent.
+
+\medskip\noindent
+The two statements at the end of the lemma we saw during the
+course of the proof of the equivalence of (1), (2) and (3) above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-isolated-point-fibre}
+\begin{slogan}
+Equivalent conditions for isolated points in fibres
+\end{slogan}
+Let $R \to S$ be a ring map of finite type.
+Let $\mathfrak q \subset S$ be a prime lying over
+$\mathfrak p \subset R$. Let $F = \Spec(S \otimes_R \kappa(\mathfrak p))$
+be the fibre of $\Spec(S) \to \Spec(R)$, see
+Remark \ref{remark-fundamental-diagram}.
+Denote $\overline{\mathfrak q} \in F$ the point corresponding to
+$\mathfrak q$. The following are equivalent
+\begin{enumerate}
+\item $\overline{\mathfrak q}$ is an isolated point of $F$,
+\item $S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$ is finite over
+$\kappa(\mathfrak p)$,
+\item there exists a $g \in S$, $g \not \in \mathfrak q$ such that
+the only prime of $D(g)$ mapping to $\mathfrak p$ is $\mathfrak q$,
+\item $\dim_{\overline{\mathfrak q}}(F) = 0$,
+\item $\overline{\mathfrak q}$ is a closed point of $F$ and
+$\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) = 0$, and
+\item the field extension $\kappa(\mathfrak q)/\kappa(\mathfrak p)$
+is finite and $\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that $S_{\mathfrak q}/\mathfrak pS_{\mathfrak q} =
+(S \otimes_R \kappa(\mathfrak p))_{\overline{\mathfrak q}}$.
+Moreover $S \otimes_R \kappa(\mathfrak p)$ is of finite type over
+$\kappa(\mathfrak p)$.
+The conditions correspond exactly to the conditions of
+Lemma \ref{lemma-isolated-point}
+for the $\kappa(\mathfrak p)$-algebra $S \otimes_R \kappa(\mathfrak p)$
+and the prime $\overline{\mathfrak q}$, hence they are equivalent.
+\end{proof}
+
+\begin{definition}
+\label{definition-quasi-finite}
+Let $R \to S$ be a finite type ring map.
+Let $\mathfrak q \subset S$ be a prime.
+\begin{enumerate}
+\item If the equivalent conditions of Lemma \ref{lemma-isolated-point-fibre}
+are satisfied then we say $R \to S$ is {\it quasi-finite at $\mathfrak q$}.
+\item We say a ring map $A \to B$ is {\it quasi-finite}
+if it is of finite type and quasi-finite at all primes of $B$.
+\end{enumerate}
+\end{definition}
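\medskip\noindent
The following standard illustration (with $k$ an arbitrary field) is
included here as a sanity check on the definition: a principal
localization is quasi-finite but in general not finite.

```latex
% Illustrative example (k is any field); not part of the main development.
\noindent
For example, consider the principal localization
$R = k[t] \to S = k[t]_t = k[t, t^{-1}]$. The fibre rings are
$$
S \otimes_R \kappa(\mathfrak p) =
\left\{
\begin{array}{ll}
0 & \text{if } \mathfrak p = (t), \\
\kappa(\mathfrak p) & \text{if } \mathfrak p = (f),\ f \text{ irreducible},\ f \neq t, \\
k(t) & \text{if } \mathfrak p = (0),
\end{array}
\right.
$$
each of which is finite over $\kappa(\mathfrak p)$, so $R \to S$ is
quasi-finite by Lemma \ref{lemma-quasi-finite}. It is not finite,
since $t^{-1}$ is not integral over $k[t]$. By contrast, $k \to k[x]$
is of finite type but not quasi-finite at any prime, as its only
fibre ring is $k[x]$ itself.
```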
+
+\begin{lemma}
+\label{lemma-quasi-finite}
+Let $R \to S$ be a finite type ring map.
+Then $R \to S$ is quasi-finite if and only if for all
+primes $\mathfrak p \subset R$
+the fibre $S \otimes_R \kappa(\mathfrak p)$ is finite
+over $\kappa(\mathfrak p)$.
+\end{lemma}
+
+\begin{proof}
If the fibre rings are finite, then condition (2) of
Lemma \ref{lemma-isolated-point-fibre} holds at every prime of $S$,
so the map is quasi-finite.
For the converse, note that $S \otimes_R \kappa(\mathfrak p)$
is a $\kappa(\mathfrak p)$-algebra of finite type and of dimension $0$
(by condition (4) of Lemma \ref{lemma-isolated-point-fibre}).
Hence it is finite over $\kappa(\mathfrak p)$, for example
by Lemma \ref{lemma-Noether-normalization}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finite-local}
+Let $R \to S$ be a finite type ring map. Let $\mathfrak q \subset S$
+be a prime lying over $\mathfrak p \subset R$. Let
+$f \in R$, $f \not \in \mathfrak p$ and $g \in S$, $g \not \in \mathfrak q$.
+Then $R \to S$ is quasi-finite at $\mathfrak q$ if and only if
+$R_f \to S_{fg}$ is quasi-finite at $\mathfrak qS_{fg}$.
+\end{lemma}
+
+\begin{proof}
+The fibre of $\Spec(S_{fg}) \to \Spec(R_f)$ is homeomorphic
+to an open subset of the fibre of $\Spec(S) \to \Spec(R)$.
+Hence the lemma follows from part (1) of the equivalent conditions of
+Lemma \ref{lemma-isolated-point-fibre}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-four-rings}
+Let
+$$
+\xymatrix{
+S \ar[r] & S' & &
+\mathfrak q \ar@{-}[r] & \mathfrak q' \\
+R \ar[u] \ar[r] & R' \ar[u] & &
+\mathfrak p \ar@{-}[r] \ar@{-}[u] & \mathfrak p' \ar@{-}[u]
+}
+$$
+be a commutative diagram of rings with primes as indicated.
Assume $R \to S$ is of finite type and $S \otimes_R R' \to S'$ is surjective.
+If $R \to S$ is quasi-finite at $\mathfrak q$, then
+$R' \to S'$ is quasi-finite at $\mathfrak q'$.
+\end{lemma}
+
+\begin{proof}
+Write $S \otimes_R \kappa(\mathfrak p) = S_1 \times S_2$
+with $S_1$ finite over $\kappa(\mathfrak p)$ and such that
+$\mathfrak q$ corresponds to a point of $S_1$ as in
+Lemma \ref{lemma-isolated-point}. This product decomposition
+induces a corresponding product decomposition for any
+$S \otimes_R \kappa(\mathfrak p)$-algebra. In particular,
+we obtain $S' \otimes_{R'} \kappa(\mathfrak p') = S'_1 \times S'_2$.
+Because $S \otimes_R R' \to S'$ is surjective the canonical map
+$(S \otimes_R \kappa(\mathfrak p)) \otimes_{\kappa(\mathfrak p)}
+\kappa(\mathfrak p') \to S' \otimes_{R'} \kappa(\mathfrak p')$
+is surjective and hence
+$S_i \otimes_{\kappa(\mathfrak p)} \kappa(\mathfrak p') \to S'_i$
+is surjective. It follows that $S'_1$ is finite over $\kappa(\mathfrak p')$.
+The map $S' \otimes_{R'} \kappa(\mathfrak p') \to
+\kappa(\mathfrak q')$ factors through $S_1'$
+(i.e.\ it annihilates the factor $S_2'$)
+because the map $S \otimes_R \kappa(\mathfrak p) \to
+\kappa(\mathfrak q)$ factors through $S_1$
+(i.e.\ it annihilates the factor $S_2$). Thus
+$\mathfrak q'$ corresponds to a point of
+$\Spec(S_1')$ in the disjoint union decomposition
+of the fibre: $\Spec(S' \otimes_{R'} \kappa(\mathfrak p'))
+= \Spec(S_1') \amalg \Spec(S_2')$, see
+Lemma \ref{lemma-spec-product}.
Since $S_1'$ is finite over a field, it is an Artinian ring,
and hence $\Spec(S_1')$ is a finite discrete set
(see Proposition \ref{proposition-dimension-zero-ring}).
+We conclude $\mathfrak q'$ is isolated in its fibre as
+desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finite-composition}
+A composition of quasi-finite ring maps is quasi-finite.
+\end{lemma}
+
+\begin{proof}
+Suppose $A \to B$ and $B \to C$ are quasi-finite ring maps. By
+Lemma \ref{lemma-compose-finite-type}
+we see that $A \to C$ is of finite type.
+Let $\mathfrak r \subset C$ be a prime of $C$ lying over
+$\mathfrak q \subset B$ and $\mathfrak p \subset A$. Since $A \to B$ and
$B \to C$ are quasi-finite at $\mathfrak q$ and $\mathfrak r$ respectively,
there exist $b \in B$ and $c \in C$ such that $\mathfrak q$ is
+the only prime of $D(b)$ which maps to $\mathfrak p$ and similarly
+$\mathfrak r$ is the only prime of $D(c)$ which maps to $\mathfrak q$.
+If $c' \in C$ is the image of $b \in B$, then $\mathfrak r$ is the only
+prime of $D(cc')$ which maps to $\mathfrak p$.
+Therefore $A \to C$ is quasi-finite at $\mathfrak r$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finite-base-change}
+Let $R \to S$ be a ring map of finite type.
+Let $R \to R'$ be any ring map. Set $S' = R' \otimes_R S$.
+\begin{enumerate}
+\item The set
+$\{\mathfrak q' \mid R' \to S' \text{ quasi-finite at }\mathfrak q'\}$
+is the inverse image of the corresponding set of $\Spec(S)$
+under the canonical map $\Spec(S') \to \Spec(S)$.
+\item If $\Spec(R') \to \Spec(R)$ is surjective,
+then $R \to S$ is quasi-finite if and only if $R' \to S'$ is quasi-finite.
+\item Any base change of a quasi-finite ring map is quasi-finite.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p' \subset R'$ be a prime lying over $\mathfrak p \subset R$.
+Then the fibre ring $S' \otimes_{R'} \kappa(\mathfrak p')$ is the
+base change of the fibre ring $S \otimes_R \kappa(\mathfrak p)$
+by the field extension $\kappa(\mathfrak p) \to \kappa(\mathfrak p')$.
+Hence the first assertion follows from the invariance of dimension
+under field extension
+(Lemma \ref{lemma-dimension-at-a-point-preserved-field-extension})
+and Lemma \ref{lemma-isolated-point}.
+The stability of quasi-finite maps under base change follows from
this and the stability of the finite type property under base change.
+The second assertion follows
+since the assumption implies that given a prime $\mathfrak q \subset S$ we can
+find a prime $\mathfrak q' \subset S'$ lying over it.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finite-permanence}
+Let $A \to B$ and $B \to C$ be ring homomorphisms such that $A \to C$
+is of finite type. Let $\mathfrak r$ be a prime of $C$ lying over
+$\mathfrak q \subset B$ and $\mathfrak p \subset A$.
+If $A \to C$ is quasi-finite at $\mathfrak r$, then
+$B \to C$ is quasi-finite at $\mathfrak r$.
+\end{lemma}
+
+\begin{proof}
+Observe that $B \to C$ is of finite type
+(Lemma \ref{lemma-compose-finite-type})
+so that the statement makes sense.
+Let us use characterization (3) of Lemma \ref{lemma-isolated-point-fibre}.
+If $A \to C$ is quasi-finite at $\mathfrak r$, then
+there exists some $c \in C$ such that
+$$
+\{\mathfrak r' \subset C \text{ lying over }\mathfrak p\} \cap D(c) =
+\{\mathfrak{r}\}.
+$$
+Since the primes $\mathfrak r' \subset C$ lying over $\mathfrak q$
+form a subset of the primes $\mathfrak r' \subset C$ lying over
+$\mathfrak p$ we conclude $B \to C$ is quasi-finite at $\mathfrak r$.
+\end{proof}
+
+\noindent
+The following lemma is not quite about quasi-finite ring maps, but
+it does not seem to fit anywhere else so well.
+
+\begin{lemma}
+\label{lemma-generically-finite}
+Let $R \to S$ be a ring map of finite type.
+Let $\mathfrak p \subset R$ be a minimal prime.
+Assume that there are at most finitely many primes of $S$
+lying over $\mathfrak p$. Then there exists a
+$g \in R$, $g \not \in \mathfrak p$ such that the
+ring map $R_g \to S_g$ is finite.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, \ldots, x_n$ be generators of $S$ over $R$.
+Since $\mathfrak p$ is a minimal prime we have that
+$\mathfrak pR_{\mathfrak p}$ is a locally nilpotent ideal, see
+Lemma \ref{lemma-minimal-prime-reduced-ring}.
+Hence $\mathfrak pS_{\mathfrak p}$ is a locally nilpotent ideal, see
+Lemma \ref{lemma-locally-nilpotent}.
+By assumption the finite type $\kappa(\mathfrak p)$-algebra
+$S_{\mathfrak p}/\mathfrak pS_{\mathfrak p}$ has finitely many
+primes. Hence (for example by
+Lemmas \ref{lemma-finite-type-algebra-finite-nr-primes} and
+\ref{lemma-Noether-normalization})
+$\kappa(\mathfrak p) \to S_{\mathfrak p}/\mathfrak pS_{\mathfrak p}$
is a finite ring map. Thus we may find monic polynomials
$P_i \in R_{\mathfrak p}[X]$ such that $P_i(x_i)$ maps to zero
in $S_{\mathfrak p}/\mathfrak pS_{\mathfrak p}$. By what we said
above there exist $e_i \geq 1$ such that $P_i(x_i)^{e_i} = 0$
in $S_{\mathfrak p}$. Let $g_1 \in R$, $g_1 \not \in \mathfrak p$
be an element such that $P_i$ has coefficients in $R[1/g_1]$ for all $i$.
Next, let $g_2 \in R$, $g_2 \not \in \mathfrak p$ be an element
such that $P_i(x_i)^{e_i} = 0$ in $S_{g_1g_2}$ for all $i$.
Setting $g = g_1g_2$
+we win.
+\end{proof}
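\medskip\noindent
A simple case to keep in mind (an illustration, with $k$ any field):

```latex
% Illustration of the lemma above (k is any field).
\noindent
Let $R = k[s]$, let $S = k[s, x]/(sx - 1)$ and let
$\mathfrak p = (0)$ be the minimal prime of $R$. The fibre ring over
$\mathfrak p$ is $k(s)[x]/(sx - 1) = k(s)$, so there is exactly one
prime of $S$ lying over $\mathfrak p$. The lemma holds with $g = s$:
in $S_s$ we have $x = s^{-1}$, hence $S_s = k[s, s^{-1}] = R_s$ and
$R_g \to S_g$ is finite (even an isomorphism).
```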
+
+
+
+
+
+
+
+\section{Zariski's Main Theorem}
+\label{section-Zariski}
+
+\noindent
+In this section our aim is to prove the algebraic version of
Zariski's Main Theorem. This theorem will be the basis of many
+further developments in the theory of schemes and morphisms of
+schemes later in the Stacks project.
+
+\medskip\noindent
+Let $R \to S$ be a ring map of finite type.
+Our goal in this section is to show that the set
+of points of $\Spec(S)$ where the map is quasi-finite
+is {\it open} (Theorem \ref{theorem-main-theorem}).
+In fact, it will turn out that there exists
+a finite ring map $R \to S'$ such that in some sense the quasi-finite
+locus of $S/R$ is open in $\Spec(S')$
+(but we will not prove this in the algebra chapter since we do not
+develop the language of schemes here -- for the case where $R \to S$
+is quasi-finite see Lemma \ref{lemma-quasi-finite-open-integral-closure}).
+These statements are somewhat tricky to prove and
+we do it by a long list of lemmas concerning integral and
+finite extensions of rings. This material may be found
in \cite{Henselian} and \cite{Peskine}. We also found notes
+by Thierry Coquand helpful.
+
+\begin{lemma}
+\label{lemma-make-integral-trivial}
+Let $\varphi : R \to S$ be a ring map.
+Suppose $t \in S$ satisfies the
+relation $\varphi(a_0) + \varphi(a_1)t + \ldots + \varphi(a_n) t^n = 0$.
+Then $\varphi(a_n)t$ is integral over $R$.
+\end{lemma}
+
+\begin{proof}
+Namely, multiply the equation
+$\varphi(a_0) + \varphi(a_1)t + \ldots + \varphi(a_n) t^n = 0$
+with $\varphi(a_n)^{n-1}$ and write it as
+$\varphi(a_0 a_n^{n-1}) +
+\varphi(a_1 a_n^{n-2}) (\varphi(a_n)t) +
+\ldots +
+(\varphi(a_n) t)^n = 0$.
+\end{proof}
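\medskip\noindent
For concreteness, here is the case $n = 2$ of this computation, a
worked instance with $\varphi$ suppressed from the notation:

```latex
% Worked instance of the lemma for n = 2 (\varphi suppressed).
\noindent
Suppose $a_0 + a_1 t + a_2 t^2 = 0$ in $S$. Multiplying by
$a_2^{\,n - 1} = a_2$ gives
$$
a_0 a_2 + a_1 (a_2 t) + (a_2 t)^2 = 0,
$$
a monic equation for $a_2 t$ with coefficients in the image of $R$,
so $a_2 t$ is integral over $R$.
```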
+
+\noindent
+The following lemma is in some sense the key lemma in this
+section.
+
+\begin{lemma}
+\label{lemma-make-integral-trick}
+Let $R$ be a ring. Let $\varphi : R[x] \to S$ be
+a ring map. Let $t \in S$.
+Assume that (a) $t$ is integral over $R[x]$,
+and (b) there exists a monic $p \in R[x]$ such that
+$t \varphi(p) \in \Im(\varphi)$. Then there
+exists a $q \in R[x]$ such that $t - \varphi(q)$
+is integral over $R$.
+\end{lemma}
+
+\begin{proof}
+Write $t \varphi(p) = \varphi(r)$ for some $r \in R[x]$.
+Using euclidean division, write $r = qp + r'$ with
+$q, r' \in R[x]$ and $\deg(r') < \deg(p)$. We may replace
+$t$ by $t - \varphi(q)$ which is still integral over
+$R[x]$, so that we obtain $t \varphi(p) = \varphi(r')$.
+In the ring $S_t$ we may write this as
+$\varphi(p) - (1/t) \varphi(r') = 0$.
+This implies that $\varphi(x)$ gives an element of the
+localization $S_t$ which is integral over
+$\varphi(R)[1/t] \subset S_t$. On the other hand,
+$t$ is integral over the subring $\varphi(R)[\varphi(x)] \subset S$.
+Combined we conclude that $t$ is integral over
+the subring $\varphi(R)[1/t] \subset S_t$, see Lemma
+\ref{lemma-integral-transitive}. In other words
+there exists an equation of the form
+$$
+t^d + \sum\nolimits_{i < d}
+\left(\sum\nolimits_{j = 0, \ldots, n_i} \varphi(r_{i, j})/t^j\right) t^i = 0
+$$
+in $S_t$ with $r_{i, j} \in R$. This means that
+$t^{d + N} +
+\sum_{i < d} \sum_{j = 0, \ldots, n_i} \varphi(r_{i, j}) t^{i + N - j} = 0$
+in $S$ for some $N$ large enough. In other words
+$t$ is integral over $R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-combine-lemmas}
+Let $R$ be a ring. Let $\varphi : R[x] \to S$ be
+a ring map. Let $t \in S$. Assume $t$ is integral
+over $R[x]$. Let $p \in R[x]$, $p = a_0 + a_1x + \ldots +
+a_k x^k$ such that $t \varphi(p) \in \Im(\varphi)$.
+Then there exists a $q \in R[x]$ and $n \geq 0$
+such that $\varphi(a_k)^n t - \varphi(q)$ is integral
+over $R$.
+\end{lemma}
+
+\begin{proof}
+Let $R'$ and $S'$ be the localization of $R$ and $S$ at the element $a_k$.
+Let $\varphi' : R'[x] \to S'$ be the localization of $\varphi$.
+Let $t' \in S'$ be the image of $t$. Set $p' = p/a_k \in R'[x]$.
+Then $t' \varphi'(p') \in \Im(\varphi')$ since $t \varphi(p) \in \Im(\varphi)$.
+As $p'$ is monic, by
+Lemma \ref{lemma-make-integral-trick} there exists a $q' \in R'[x]$
+such that $t' - \varphi'(q')$ is integral over $R'$.
+We may choose an $n \geq 0$ and an element $q \in R[x]$
+such that $a_k^n q'$ is the image of $q$.
+Then $\varphi(a_k)^n t - \varphi(q)$ is an element of $S$ whose
+image in $S'$ is integral over $R'$.
+By Lemma \ref{lemma-integral-closure-localize}
+there exists an $m \geq 0$ such that
+$\varphi(a_k)^m(\varphi(a_k)^n t - \varphi(q))$ is integral over $R$.
+Thus $\varphi(a_k)^{m + n}t - \varphi(a_k^m q)$
+is integral over $R$ as desired.
+\end{proof}
+
+\begin{situation}
+\label{situation-one-transcendental-element}
+Let $R$ be a ring.
+Let $\varphi : R[x] \to S$ be finite.
+Let
+$$
+J = \{ g \in S \mid gS \subset \Im(\varphi)\}
+$$
+be the ``conductor ideal'' of $\varphi$.
Assume that $\varphi(R)$ is integrally closed in $S$.
+\end{situation}
+
+\begin{lemma}
+\label{lemma-leading-coefficient-in-J}
+In Situation \ref{situation-one-transcendental-element}.
+Suppose $u \in S$, $a_0, \ldots, a_k \in R$,
+$u \varphi(a_0 + a_1x + \ldots + a_k x^k) \in J$.
+Then there exists an $m \geq 0$ such that
+$u \varphi(a_k)^m \in J$.
+\end{lemma}
+
+\begin{proof}
+Assume that $S$ is generated by $t_1, \ldots, t_n$
+as an $R[x]$-module. In this case
+$J = \{ g \in S \mid gt_i \in \Im(\varphi)\text{ for all }i\}$.
+Note that each element $u t_i$ is integral over
+$R[x]$, see Lemma \ref{lemma-finite-is-integral}.
+We have $\varphi(a_0 + a_1x + \ldots + a_k x^k) u t_i \in
+\Im(\varphi)$. By Lemma \ref{lemma-combine-lemmas}, for
+each $i$ there exists an integer $n_i$ and an element
+$q_i \in R[x]$ such that $\varphi(a_k^{n_i}) u t_i - \varphi(q_i)$
+is integral over $R$. By assumption this element is in $\varphi(R)$
+and hence $\varphi(a_k^{n_i}) u t_i \in \Im(\varphi)$.
+It follows that $m = \max\{n_1, \ldots, n_n\}$ works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-all-coefficients-in-J}
+In Situation \ref{situation-one-transcendental-element}.
+Suppose $u \in S$, $a_0, \ldots, a_k \in R$,
+$u \varphi(a_0 + a_1x + \ldots + a_k x^k) \in \sqrt{J}$.
+Then $u \varphi(a_i) \in \sqrt{J}$ for all $i$.
+\end{lemma}
+
+\begin{proof}
+Under the assumptions of the lemma we have
+$u^n \varphi(a_0 + a_1x + \ldots + a_k x^k)^n \in J$ for
+some $n \geq 1$. By Lemma \ref{lemma-leading-coefficient-in-J}
+we deduce $u^n \varphi(a_k^{nm}) \in J$ for some $m \geq 1$.
+Thus $u \varphi(a_k) \in \sqrt{J}$, and so
+$u \varphi(a_0 + a_1x + \ldots + a_k x^k) - u \varphi(a_k x^k) =
+u \varphi(a_0 + a_1x + \ldots + a_{k-1} x^{k-1}) \in \sqrt{J}$.
+We win by induction on $k$.
+\end{proof}
+
+\noindent
+This lemma suggests the following definition.
+
+\begin{definition}
+\label{definition-strongly-transcendental}
+Given an inclusion of rings $R \subset S$ and
+an element $x \in S$ we say that $x$ is
+{\it strongly transcendental over $R$} if
+whenever $u(a_0 + a_1 x + \ldots + a_k x^k) = 0$
+with $u \in S$ and $a_i \in R$, then
+we have $ua_i = 0$ for all $i$.
+\end{definition}
+
+\noindent
+Note that if $S$ is a domain then this is the same as
+saying that $x$ as an element of the fraction field of
+$S$ is transcendental over the fraction field of $R$.
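\medskip\noindent
When $S$ has zero-divisors the two notions genuinely differ. The
following small example (with $k$ any field) is meant only as an
illustration:

```latex
% Illustration: transcendental but not strongly transcendental
% (k is any field).
\noindent
Let $R = k$ embedded diagonally in $S = k[t] \times k$, and let
$s = (t, 0) \in S$. Taking $u = (0, 1)$ we get $u \cdot s = 0$,
a relation with $a_0 = 0$ and $a_1 = 1$, while
$u \cdot a_1 = (0, 1) \neq 0$. Hence $s$ is not strongly
transcendental over $R$, even though its image in the factor $k[t]$
is transcendental over $k$.
```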
+
+\begin{lemma}
+\label{lemma-reduced-strongly-transcendental-minimal-prime}
+Suppose $R \subset S$ is an inclusion of reduced rings
+and suppose that $x \in S$ is strongly transcendental over $R$.
+Let $\mathfrak q \subset S$ be a minimal prime
+and let $\mathfrak p = R \cap \mathfrak q$.
+Then the image of $x$ in $S/\mathfrak q$ is strongly
+transcendental over the subring $R/\mathfrak p$.
+\end{lemma}
+
+\begin{proof}
+Suppose $u(a_0 + a_1x + \ldots + a_k x^k) \in \mathfrak q$.
+By Lemma \ref{lemma-minimal-prime-reduced-ring}
+the local ring $S_{\mathfrak q}$ is a field,
+and hence $u(a_0 + a_1x + \ldots + a_k x^k) $ is zero
+in $S_{\mathfrak q}$. Thus $uu'(a_0 + a_1x + \ldots + a_k x^k) = 0$
+for some $u' \in S$, $u' \not\in \mathfrak q$.
+Since $x$ is strongly transcendental over $R$ we get
+$uu'a_i = 0$ for all $i$. This in turn implies
+that $ua_i \in \mathfrak q$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-domains-transcendental-not-quasi-finite}
+Suppose $R\subset S$ is an inclusion of domains and
+let $x \in S$. Assume $x$ is (strongly) transcendental over $R$
+and that $S$ is finite over $R[x]$. Then $R \to S$ is not
+quasi-finite at any prime of $S$.
+\end{lemma}
+
+\begin{proof}
+As a first case, assume that $R$ is normal, see
+Definition \ref{definition-ring-normal}.
+By Lemma \ref{lemma-polynomial-ring-normal}
+we see that $R[x]$ is normal.
+Take a prime $\mathfrak q \subset S$,
+and set $\mathfrak p = R \cap \mathfrak q$.
+Assume that the extension $\kappa(\mathfrak p)
+\subset \kappa(\mathfrak q)$ is finite.
+This would be the case if $R \to S$ is
+quasi-finite at $\mathfrak q$.
+Let $\mathfrak r = R[x] \cap \mathfrak q$.
+Then since $\kappa(\mathfrak p)
+\subset \kappa(\mathfrak r) \subset \kappa(\mathfrak q)$
+we see that the extension $\kappa(\mathfrak p)
+\subset \kappa(\mathfrak r)$ is finite too.
+Thus the inclusion $\mathfrak r \supset \mathfrak p R[x]$
+is strict. By going down for $R[x] \subset S$,
+see Proposition \ref{proposition-going-down-normal-integral},
+we find a prime $\mathfrak q' \subset \mathfrak q$,
+lying over the prime $\mathfrak pR[x]$. Hence
+the fibre $\Spec(S \otimes_R \kappa(\mathfrak p))$
+contains a point not equal to $\mathfrak q$,
+namely $\mathfrak q'$, whose closure contains $\mathfrak q$ and hence
+$\mathfrak q$ is not isolated in its fibre.
+
+\medskip\noindent
+If $R$ is not normal, let $R \subset R' \subset K$ be
+the integral closure $R'$ of $R$ in its field of fractions
+$K$. Let $S \subset S' \subset L$ be the subring $S'$ of
+the field of fractions $L$ of $S$ generated by $R'$ and
+$S$. Note that by construction the map $S \otimes_R R'
+\to S'$ is surjective. This implies that $R'[x] \subset S'$
+is finite. Also, the map $S \subset S'$
+induces a surjection on $\Spec$, see
+Lemma \ref{lemma-integral-overring-surjective}.
+We conclude by Lemma \ref{lemma-four-rings} and the normal case
+we just discussed.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduced-strongly-transcendental-not-quasi-finite}
+Suppose $R \subset S$ is an inclusion of reduced rings.
Assume that $x \in S$ is strongly transcendental over $R$,
and that $S$ is finite over $R[x]$. Then $R \to S$ is not
+quasi-finite at any prime of $S$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q \subset S$ be any prime.
+Choose a minimal prime $\mathfrak q' \subset \mathfrak q$.
+According to Lemmas
+\ref{lemma-reduced-strongly-transcendental-minimal-prime} and
+\ref{lemma-domains-transcendental-not-quasi-finite}
+the extension $R/(R \cap \mathfrak q') \subset
+S/\mathfrak q'$ is not quasi-finite at the prime corresponding
+to $\mathfrak q$. By Lemma \ref{lemma-four-rings}
+the extension $R \to S$ is not quasi-finite
+at $\mathfrak q$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finite-monogenic}
+Let $R$ be a ring. Let $S = R[x]/I$.
+Let $\mathfrak q \subset S$ be a prime.
+Assume $R \to S$ is quasi-finite at $\mathfrak q$.
+Let $S' \subset S$ be the integral closure of $R$ in $S$.
+Then there exists an element
+$g \in S'$, $g \not\in \mathfrak q$ such that
+$S'_g \cong S_g$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p$ be the image of $\mathfrak q$ in $\Spec(R)$.
+There exists an $f \in I$, $f = a_nx^n + \ldots + a_0$ such that
+$a_i \not \in \mathfrak p$ for some $i$. Namely, otherwise the fibre ring
+$S \otimes_R \kappa(\mathfrak p)$ would be $\kappa(\mathfrak p)[x]$
+and the map would not be quasi-finite at any prime lying
+over $\mathfrak p$. We conclude there exists a relation
+$b_m x^m + \ldots + b_0 = 0$ with $b_j \in S'$, $j = 0, \ldots, m$
+and $b_j \not \in \mathfrak q \cap S'$ for some $j$.
We prove the lemma by induction on $m$. The base case
$m = 0$ is vacuous (because the statements $b_0 = 0$ and
$b_0 \not \in \mathfrak q$ are contradictory).
+
+\medskip\noindent
The case $b_m \not \in \mathfrak q$. In this case $x$ is integral
over $S'_{b_m}$; indeed $b_mx$ is integral over $S'$ by
Lemma \ref{lemma-make-integral-trivial}, hence $b_mx \in S'$
as $S'$ is integrally closed in $S$.
+Hence the injective map $S'_{b_m} \to S_{b_m}$ is also surjective, i.e.,
+an isomorphism as desired.
+
+\medskip\noindent
The case $b_m \in \mathfrak q$. In this case $b_mx$ is integral over
$S'$ by Lemma \ref{lemma-make-integral-trivial}, hence
$b_mx \in S'$ as $S'$ is integrally closed in $S$.
+Set $b'_{m - 1} = b_mx + b_{m - 1}$. Then
+$$
+b'_{m - 1}x^{m - 1} + b_{m - 2}x^{m - 2} + \ldots + b_0 = 0
+$$
+Since $b'_{m - 1}$ is congruent to $b_{m - 1}$ modulo $S' \cap \mathfrak q$
+we see that it is still the case that one of
+$b'_{m - 1}, b_{m - 2}, \ldots, b_0$ is not in $S' \cap \mathfrak q$.
+Thus we win by induction on $m$.
+\end{proof}
+
+\begin{theorem}[Zariski's Main Theorem]
+\label{theorem-main-theorem}
+Let $R$ be a ring. Let $R \to S$ be a finite type $R$-algebra.
+Let $S' \subset S$ be the integral closure of $R$ in $S$.
+Let $\mathfrak q \subset S$ be a prime of $S$.
+If $R \to S$ is quasi-finite at $\mathfrak q$ then
+there exists a $g \in S'$, $g \not \in \mathfrak q$
+such that $S'_g \cong S_g$.
+\end{theorem}
+
+\begin{proof}
+There exist finitely many elements
+$x_1, \ldots, x_n \in S$ such that $S$ is finite
over the $R$-subalgebra generated by $x_1, \ldots, x_n$ (for
example generators of $S$ over $R$). We prove the theorem
by induction on the minimal such number $n$.
+
+\medskip\noindent
+The case $n = 0$ is trivial, because in this case $S' = S$,
+see Lemma \ref{lemma-finite-is-integral}.
+
+\medskip\noindent
+The case $n = 1$. We may replace $R$ by its integral closure in $S$
+(Lemma \ref{lemma-quasi-finite-permanence} guarantees that $R \to S$
+is still quasi-finite at $\mathfrak q$). Thus we may assume
+$R \subset S$ is integrally closed in $S$, in other words $R = S'$.
+Consider the map $\varphi : R[x] \to S$, $x \mapsto x_1$.
+(We will see that $\varphi$ is not injective below.)
+By assumption $\varphi$ is finite. Hence we are in Situation
+\ref{situation-one-transcendental-element}.
+Let $J \subset S$ be the ``conductor ideal'' defined
+in Situation \ref{situation-one-transcendental-element}.
+Consider the diagram
+$$
+\xymatrix{
+R[x] \ar[r] & S \ar[r] & S/\sqrt{J} & R/(R \cap \sqrt{J})[x] \ar[l]
+\\
+& R \ar[lu] \ar[r] \ar[u] & R/(R \cap \sqrt{J}) \ar[u] \ar[ru] &
+}
+$$
+According to Lemma \ref{lemma-all-coefficients-in-J}
+the image of $x$ in the quotient $S/\sqrt{J}$
+is strongly transcendental over $R/ (R \cap \sqrt{J})$.
+Hence by Lemma \ref{lemma-reduced-strongly-transcendental-not-quasi-finite}
+the ring map $R/ (R \cap \sqrt{J}) \to S/\sqrt{J}$
+is not quasi-finite at any prime of $S/\sqrt{J}$.
+By Lemma \ref{lemma-four-rings} we deduce that $\mathfrak q$
+does not lie in $V(J) \subset \Spec(S)$.
+Thus there exists an element $s \in J$,
+$s \not\in \mathfrak q$. By definition of $J$ we may write
+$s = \varphi(f)$ for some polynomial $f \in R[x]$.
+Let $I = \Ker(\varphi : R[x] \to S)$. Since $\varphi(f) \in J$
+we get $(R[x]/I)_f \cong S_{\varphi(f)}$. Also $s \not \in \mathfrak q$
+means that $f \not \in \varphi^{-1}(\mathfrak q)$. Thus
+$\varphi^{-1}(\mathfrak q)$ is a prime of $R[x]/I$
+at which $R \to R[x]/I$ is quasi-finite, see
+Lemma \ref{lemma-quasi-finite-local}. Note that $R$ is integrally closed
+in $R[x]/I$ since $R$ is integrally closed in $S$. By
+Lemma \ref{lemma-quasi-finite-monogenic}
+there exists an element $h \in R$, $h \not \in R \cap \mathfrak q$
+such that $R_h \cong (R[x]/I)_h$. Thus
+$(R[x]/I)_{fh} = S_{\varphi(fh)}$ is isomorphic to a principal
+localization $R_{h'}$ of $R$ for some
+$h' \in R$, $h' \not \in \mathfrak q$.
+
+\medskip\noindent
+The case $n > 1$. Consider the subring $R' \subset S$
+which is the integral closure of $R[x_1, \ldots, x_{n-1}]$
+in $S$. By Lemma \ref{lemma-quasi-finite-permanence} the extension
+$S/R'$ is quasi-finite at $\mathfrak q$. Also, note
+that $S$ is finite over $R'[x_n]$.
+By the case $n = 1$ above, there exists a $g' \in R'$,
+$g' \not \in \mathfrak q$ such that
+$(R')_{g'} \cong S_{g'}$. At this point we cannot
apply the induction hypothesis to $R \to R'$ since $R'$ may not be of finite type over $R$.
+Since $S$ is finitely generated over $R$ we deduce in particular
+that $(R')_{g'}$ is finitely generated over $R$. Say
+the elements $g'$, and $y_1/(g')^{n_1}, \ldots, y_N/(g')^{n_N}$
+with $y_i \in R'$
generate $(R')_{g'}$ over $R$. Let $R''$ be the $R$-subalgebra
+of $R'$ generated by $x_1, \ldots, x_{n-1}, y_1, \ldots, y_N, g'$.
This has the property $(R'')_{g'} \cong S_{g'}$: surjectivity holds
because of how we chose the $y_i$, and injectivity because
$R'' \subset R'$ and localization is exact. Note that
+$R''$ is finite over $R[x_1, \ldots, x_{n-1}]$ because
+of our choice of $R'$, see Lemma \ref{lemma-characterize-integral}.
+Let $\mathfrak q'' = R'' \cap \mathfrak q$. Since
+$(R'')_{\mathfrak q''} = S_{\mathfrak q}$ we see that
+$R \to R''$ is quasi-finite at $\mathfrak q''$, see
+Lemma \ref{lemma-isolated-point-fibre}.
+We apply our induction hypothesis to $R \to R''$, $\mathfrak q''$
+and $x_1, \ldots, x_{n-1} \in R''$ and we find a subring
+$R''' \subset R''$ which is integral over $R$ and an
+element $g'' \in R'''$, $g'' \not \in \mathfrak q''$
+such that $(R''')_{g''} \cong (R'')_{g''}$. Write the image of $g'$ in
+$(R'')_{g''}$ as $g'''/(g'')^n$ for some $g''' \in R'''$.
+Set $g = g''g''' \in R'''$. Then it is clear that $g \not\in
+\mathfrak q$ and $(R''')_g \cong S_g$. Since by construction
+we have $R''' \subset S'$ we also have $S'_g \cong S_g$ as desired.
+\end{proof}
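\medskip\noindent
To see the statement of the theorem in action, consider again the
localization example (an illustration; $k$ is any field):

```latex
% The theorem in the simplest nontrivial case (k is any field).
\noindent
Take $R = k[x]$ and $S = k[x]_x = k[x, x^{-1}]$, a finite type
$R$-algebra which is quasi-finite at every prime. Since $R$ is
normal with fraction field $k(x)$ and $S \subset k(x)$, the
integral closure of $R$ in $S$ is $S' = R$. Given any prime
$\mathfrak q \subset S$, the element $g = x \in S'$ satisfies
$g \not \in \mathfrak q$ (as $x$ is a unit in $S$) and
$S'_g = k[x]_x = S = S_g$, exactly as the theorem predicts.
```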
+
+\begin{lemma}
+\label{lemma-quasi-finite-open}
+Let $R \to S$ be a finite type ring map.
+The set of points $\mathfrak q$ of $\Spec(S)$ at which
+$S/R$ is quasi-finite is open in $\Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q \subset S$ be a point at which the ring map
+is quasi-finite. By Theorem \ref{theorem-main-theorem}
+there exists an integral ring extension $R \to S'$, $S' \subset S$
+and an element $g \in S'$, $g\not \in \mathfrak q$ such that
+$S'_g \cong S_g$. Since $S$ and hence $S_g$ are of finite type
+over $R$ we may find finitely many elements
+$y_1, \ldots, y_N$ of $S'$ such that $S''_g \cong S_g$
+where $S'' \subset S'$ is the sub $R$-algebra generated
+by $g, y_1, \ldots, y_N$. Since $S''$ is finite over $R$
+(see Lemma \ref{lemma-characterize-integral}) we see that
+$S''$ is quasi-finite over $R$ (see Lemma \ref{lemma-quasi-finite}).
+It is easy to see that this implies that $S''_g$ is quasi-finite over $R$,
+for example because the property of being quasi-finite at a prime depends
+only on the local ring at the prime. Thus we see that $S_g$ is quasi-finite
+over $R$. By the same token this implies that $R \to S$ is quasi-finite
+at every prime of $S$ which lies in $D(g)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finite-open-integral-closure}
+Let $R \to S$ be a finite type ring map.
+Suppose that $S$ is quasi-finite over $R$.
+Let $S' \subset S$ be the integral closure of $R$ in $S$. Then
+\begin{enumerate}
+\item $\Spec(S) \to \Spec(S')$ is a homeomorphism
+onto an open subset,
+\item if $g \in S'$ and $D(g)$ is contained in the image
+of the map, then $S'_g \cong S_g$, and
+\item there exists a finite $R$-algebra $S'' \subset S'$
+such that (1) and (2) hold for the ring map
+$S'' \to S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Because $S/R$ is quasi-finite we may apply
+Theorem \ref{theorem-main-theorem} to
+each point $\mathfrak q$ of $\Spec(S)$.
+Since $\Spec(S)$ is quasi-compact, see
+Lemma \ref{lemma-quasi-compact}, we may choose
+a finite number of $g_i \in S'$, $i = 1, \ldots, n$
+such that $S'_{g_i} = S_{g_i}$, and such that
+$g_1, \ldots, g_n$ generate the unit ideal in $S$
+(in other words the standard opens of $\Spec(S)$ associated
+to $g_1, \ldots, g_n$ cover all of $\Spec(S)$).
+
+\medskip\noindent
+Suppose that $D(g) \subset \Spec(S')$
+is contained in the image. Then $D(g) \subset \bigcup D(g_i)$.
+In other words, $g_1, \ldots, g_n$ generate the unit ideal of
+$S'_g$. Note that $S'_{gg_i} \cong S_{gg_i}$ by our choice
+of $g_i$. Hence $S'_g \cong S_g$ by Lemma \ref{lemma-cover}.
+
+\medskip\noindent
+We construct a finite algebra $S'' \subset S'$ as
+in (3). To do this note that each $S'_{g_i} \cong S_{g_i}$
+is a finite type $R$-algebra. For each $i$ pick
+some elements $y_{ij} \in S'$ such that each
+$S'_{g_i}$ is generated as $R$-algebra by $1/g_i$
+and the elements $y_{ij}$. Then set $S''$
+equal to the sub $R$-algebra of $S'$ generated by all $g_i$
+and all the $y_{ij}$. Details omitted.
+\end{proof}
+
+
+
+
+\section{Applications of Zariski's Main Theorem}
+\label{section-apply-main-theorem}
+
+\noindent
+Here is an immediate application characterizing the finite
+maps of $1$-dimensional semi-local rings among the quasi-finite
+ones as those where equality always holds in the
+formula of Lemma \ref{lemma-finite-extension-dim-1}.
+
+\begin{lemma}
+\label{lemma-quasi-finite-extension-dim-1}
+Let $A \subset B$ be an extension of domains. Assume
+\begin{enumerate}
+\item $A$ is a local Noetherian ring of dimension $1$,
+\item $A \to B$ is of finite type, and
+\item the induced extension $L/K$ of fraction fields is finite.
+\end{enumerate}
+Then $B$ is semi-local.
+Let $x \in \mathfrak m_A$, $x \not = 0$.
+Let $\mathfrak m_i$, $i = 1, \ldots, n$
+be the maximal ideals of $B$.
+Then
+$$
+[L : K]\text{ord}_A(x)
+\geq
+\sum\nolimits_i
+[\kappa(\mathfrak m_i) : \kappa(\mathfrak m_A)]
+\text{ord}_{B_{\mathfrak m_i}}(x)
+$$
+where $\text{ord}$ is defined as in Definition \ref{definition-ord}.
+We have equality if and only if $A \to B$ is finite.
+\end{lemma}
+
+\begin{proof}
+The ring $B$ is semi-local by Lemma \ref{lemma-finite-in-codim-1}.
+Let $B'$ be the integral closure of $A$ in $B$. By
+Lemma \ref{lemma-quasi-finite-open-integral-closure}
+we can find a finite $A$-subalgebra $C \subset B'$ such that
+on setting $\mathfrak n_i = C \cap \mathfrak m_i$ we have
+$C_{\mathfrak n_i} \cong B_{\mathfrak m_i}$ and the primes
+$\mathfrak n_1, \ldots, \mathfrak n_n$ are pairwise distinct.
+The ring $C$ is semi-local by Lemma \ref{lemma-finite-in-codim-1}.
+Let $\mathfrak p_j$, $j = 1, \ldots, m$ be the other maximal
+ideals of $C$ (the ``missing points''). By
+Lemma \ref{lemma-finite-extension-dim-1} we have
+$$
+\text{ord}_A(x^{[L : K]}) =
+\sum\nolimits_i
+[\kappa(\mathfrak n_i) : \kappa(\mathfrak m_A)]
+\text{ord}_{C_{\mathfrak n_i}}(x)
++
+\sum\nolimits_j
+[\kappa(\mathfrak p_j) : \kappa(\mathfrak m_A)]
+\text{ord}_{C_{\mathfrak p_j}}(x)
+$$
+hence the inequality follows. In case of equality we conclude that
+$m = 0$ (no ``missing points''). Hence $C \subset B$ is an inclusion
+of semi-local rings inducing a bijection on maximal ideals and
+an isomorphism on all localizations at maximal ideals. So if $b \in B$,
+then $I = \{x \in C \mid xb \in C\}$ is an ideal of $C$ which is not
+contained in any of the maximal ideals of $C$, and hence $I = C$,
+hence $b \in C$. Thus $B = C$ and $B$ is finite over $A$.
+\end{proof}
+
+\noindent
+Here is a more standard application of Zariski's main theorem to the
+structure of local homomorphisms of local rings.
+
+\begin{lemma}
+\label{lemma-essentially-finite-type-fibre-dim-zero}
+Let $(R, \mathfrak m_R) \to (S, \mathfrak m_S)$ be a local homomorphism
+of local rings. Assume
+\begin{enumerate}
+\item $R \to S$ is essentially of finite type,
+\item $\kappa(\mathfrak m_R) \subset \kappa(\mathfrak m_S)$ is finite, and
+\item $\dim(S/\mathfrak m_RS) = 0$.
+\end{enumerate}
+Then $S$ is the localization of a finite $R$-algebra.
+\end{lemma}
+
+\begin{proof}
+Let $S'$ be a finite type $R$-algebra such that $S = S'_{\mathfrak q'}$
+for some prime $\mathfrak q'$ of $S'$. By
+Definition \ref{definition-quasi-finite}
+we see that $R \to S'$ is quasi-finite at $\mathfrak q'$.
+After replacing $S'$ by $S'_{g'}$ for some
+$g' \in S'$, $g' \not \in \mathfrak q'$ we may assume that $R \to S'$ is
+quasi-finite, see
+Lemma \ref{lemma-quasi-finite-open}.
+Then by
+Lemma \ref{lemma-quasi-finite-open-integral-closure}
+there exists a finite $R$-algebra $S''$ and elements
+$g' \in S'$, $g' \not \in \mathfrak q'$ and $g'' \in S''$
+such that $S'_{g'} \cong S''_{g''}$ as $R$-algebras.
+This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-at-quasi-finite-prime}
+Let $R \to S$ be a ring map, $\mathfrak q$ a prime of $S$
+lying over $\mathfrak p$ in $R$. If
+\begin{enumerate}
+\item $R$ is Noetherian,
+\item $R \to S$ is of finite type, and
+\item $R \to S$ is quasi-finite at $\mathfrak q$,
+\end{enumerate}
+then $R_\mathfrak p^\wedge \otimes_R S = S_\mathfrak q^\wedge \times B$
+for some $R_\mathfrak p^\wedge$-algebra $B$.
+\end{lemma}
+
+\begin{proof}
+There exists a finite $R$-algebra $S' \subset S$ and an element
+$g \in S'$, $g \not \in \mathfrak q' = S' \cap \mathfrak q$
+such that $S'_g = S_g$ and in particular
+$S'_{\mathfrak q'} = S_\mathfrak q$, see
+Lemma \ref{lemma-quasi-finite-open-integral-closure}.
+We have
+$$
+R_\mathfrak p^\wedge \otimes_R S' = (S'_{\mathfrak q'})^\wedge \times B'
+$$
+by Lemma \ref{lemma-completion-finite-extension}. Observe that under this
+product decomposition $g$ maps to a pair $(u, b')$ with
+$u \in (S'_{\mathfrak q'})^\wedge$ a unit because $g \not \in \mathfrak q'$.
+The product decomposition for $R_\mathfrak p^\wedge \otimes_R S'$
+induces a product decomposition
+$$
+R_\mathfrak p^\wedge \otimes_R S = A \times B
+$$
+Since $S'_g = S_g$ we also have
+$(R_\mathfrak p^\wedge \otimes_R S')_g = (R_\mathfrak p^\wedge \otimes_R S)_g$
+and since $g \mapsto (u, b')$ where $u$ is a unit we see that
+$(S'_{\mathfrak q'})^\wedge = A$. Since the isomorphism
+$S'_{\mathfrak q'} = S_\mathfrak q$ determines an isomorphism on
+completions this also tells us that $A = S_\mathfrak q^\wedge$.
+This finishes the proof, except that we should perform the sanity check
+that the induced map
+$\phi : R_\mathfrak p^\wedge \otimes_R S \to A = S_\mathfrak q^\wedge$
+is the natural one. For elements of the form $x \otimes 1$
+with $x \in R_\mathfrak p^\wedge$ this is clear as the natural
+map $R_\mathfrak p^\wedge \to S_\mathfrak q^\wedge$ factors through
+$(S'_{\mathfrak q'})^\wedge$. For elements of the form $1 \otimes y$
+with $y \in S$ we can argue that for some $n \geq 1$ the element
+$g^ny$ is the image of some $y' \in S'$. Thus $\phi(1 \otimes g^ny)$
+is the image of $y'$ by the composition
+$S' \to (S'_{\mathfrak q'})^\wedge \to S_\mathfrak q^\wedge$ which is
+equal to the image of $g^ny$ by the map $S \to S_\mathfrak q^\wedge$.
+Since $g$ maps to a unit this also
+implies that $\phi(1 \otimes y)$ has the correct value, i.e., the
+image of $y$ by $S \to S_\mathfrak q^\wedge$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Dimension of fibres}
+\label{section-dimension-fibres}
+
+\noindent
+We study the behaviour of dimensions of fibres, using
+Zariski's main theorem. Recall that we defined the
+dimension $\dim_x(X)$ of a topological space $X$ at a point $x$
+in Topology, Definition \ref{topology-definition-Krull}.
+
+\begin{definition}
+\label{definition-relative-dimension}
+Suppose that $R \to S$ is of finite type, and let
+$\mathfrak q \subset S$ be a prime lying over a prime
+$\mathfrak p$ of $R$.
+We define the {\it relative dimension
+of $S/R$ at $\mathfrak q$}, denoted
+$\dim_{\mathfrak q}(S/R)$, to be the dimension
+of $\Spec(S \otimes_R \kappa(\mathfrak p))$
+at the point corresponding to $\mathfrak q$. We let
+$\dim(S/R)$ be the supremum of $\dim_{\mathfrak q}(S/R)$
+over all $\mathfrak q$. This is called the
+{\it relative dimension of} $S/R$.
+\end{definition}
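+
+\noindent
+A standard example, included purely as an illustration: let $k$ be a field,
+$R = k[u, v]$, and $S = k[u, v, x]/(vx - u)$. Inverting $v$ gives
+$R_v \cong S_v$ (as $x = u/v$ in $S_v$), so $\dim_{\mathfrak q}(S/R) = 0$
+for every prime $\mathfrak q$ of $S$ not containing $v$. If
+$v \in \mathfrak q$, then also $u = vx \in \mathfrak q$, hence
+$\mathfrak q$ lies over $\mathfrak p = (u, v)$ and the fibre ring is
+$$
+S \otimes_R \kappa(\mathfrak p) = k[x],
+$$
+so $\dim_{\mathfrak q}(S/R) = 1$. In particular $\dim(S/R) = 1$ and the
+set of primes $\mathfrak q$ with $\dim_{\mathfrak q}(S/R) = 0$ is the
+open subset $D(v)$ of $\Spec(S)$.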
+
+\noindent
+In particular, $R \to S$ is quasi-finite at $\mathfrak q$ if
+and only if $\dim_{\mathfrak q}(S/R) = 0$. The following lemma
+is more or less a reformulation of Zariski's Main Theorem.
+
+\begin{lemma}
+\label{lemma-quasi-finite-over-polynomial-algebra}
+Let $R \to S$ be a finite type ring map.
+Let $\mathfrak q \subset S$ be a prime.
+Suppose that $\dim_{\mathfrak q}(S/R) = n$.
+There exists a $g \in S$, $g \not\in \mathfrak q$
+such that $S_g$ is quasi-finite over a
+polynomial algebra $R[t_1, \ldots, t_n]$.
+\end{lemma}
+
+\begin{proof}
+The ring $\overline{S} = S \otimes_R \kappa(\mathfrak p)$ is
+of finite type over $\kappa(\mathfrak p)$.
+Let $\overline{\mathfrak q}$ be the prime of $\overline{S}$
+corresponding to $\mathfrak q$.
+By definition of
+the dimension of a topological space at a point there exists
+an open $U \subset \Spec(\overline{S})$ with
+$\overline{\mathfrak q} \in U$ and $\dim(U) = n$.
+Since the topology on $\Spec(\overline{S})$ is
+induced from the topology on $\Spec(S)$ (see
+Remark \ref{remark-fundamental-diagram}), we can find
+a $g \in S$, $g \not \in \mathfrak q$ with image
+$\overline{g} \in \overline{S}$ such that
+$D(\overline{g}) \subset U$.
+Thus after replacing $S$ by $S_g$ we see that
+$\dim(\overline{S}) = n$.
+
+\medskip\noindent
+Next, choose generators $x_1, \ldots, x_N$ for $S$ as an $R$-algebra. By
+Lemma \ref{lemma-Noether-normalization}
+there exist elements $y_1, \ldots, y_n$ in the $\mathbf{Z}$-subalgebra of $S$
+generated by $x_1, \ldots, x_N$ such that the map
+$R[t_1, \ldots, t_n] \to S$, $t_i \mapsto y_i$ has the property
+that $\kappa(\mathfrak p)[t_1, \ldots, t_n] \to \overline{S}$
+is finite. In particular, $S$ is quasi-finite over $R[t_1, \ldots, t_n]$
+at $\mathfrak q$. Hence, by Lemma \ref{lemma-quasi-finite-open}
+we may replace $S$ by $S_g$ for some $g\in S$, $g \not \in \mathfrak q$
+such that $R[t_1, \ldots, t_n] \to S$ is quasi-finite.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-refined-quasi-finite-over-polynomial-algebra}
+Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$
+be a prime lying over the prime $\mathfrak p$ of $R$.
+Assume
+\begin{enumerate}
+\item $R \to S$ is of finite type,
+\item $\dim_{\mathfrak q}(S/R) = n$, and
+\item $\text{trdeg}_{\kappa(\mathfrak p)}\kappa(\mathfrak q) = r$.
+\end{enumerate}
+Then there exist $f \in R$, $f \not \in \mathfrak p$,
+$g \in S$, $g \not\in \mathfrak q$ and a quasi-finite ring map
+$$
+\varphi : R_f[x_1, \ldots, x_n] \longrightarrow S_g
+$$
+such that $\varphi^{-1}(\mathfrak qS_g) =
+(\mathfrak p, x_{r + 1}, \ldots, x_n)R_f[x_1, \ldots, x_n]$.
+\end{lemma}
+
+\begin{proof}
+After replacing $S$ by a principal localization we may assume there
+exists a quasi-finite ring map $\varphi : R[t_1, \ldots, t_n] \to S$, see
+Lemma \ref{lemma-quasi-finite-over-polynomial-algebra}.
+Set $\mathfrak q' = \varphi^{-1}(\mathfrak q)$.
+Let $\overline{\mathfrak q}' \subset \kappa(\mathfrak p)[t_1, \ldots, t_n]$
+be the prime corresponding to $\mathfrak q'$. By
+Lemma \ref{lemma-refined-Noether-normalization}
+there exists a finite ring map
+$\kappa(\mathfrak p)[x_1, \ldots, x_n] \to
+\kappa(\mathfrak p)[t_1, \ldots, t_n]$
+such that the inverse image of $\overline{\mathfrak q}'$ is
+$(x_{r + 1}, \ldots, x_n)$. Let
+$\overline{h}_i \in \kappa(\mathfrak p)[t_1, \ldots, t_n]$
+be the image of $x_i$. We can find an element
+$f \in R$, $f \not \in \mathfrak p$
+and $h_i \in R_f[t_1, \ldots, t_n]$ which map to $\overline{h}_i$
+in $\kappa(\mathfrak p)[t_1, \ldots, t_n]$. Then the ring map
+$$
+R_f[x_1, \ldots, x_n] \longrightarrow R_f[t_1, \ldots, t_n]
+$$
+becomes finite after tensoring with $\kappa(\mathfrak p)$.
+In particular, $R_f[t_1, \ldots, t_n]$ is quasi-finite over
+$R_f[x_1, \ldots, x_n]$ at the prime $\mathfrak q'R_f[t_1, \ldots, t_n]$.
+Hence, by
+Lemma \ref{lemma-quasi-finite-open}
+there exists a $g \in R_f[t_1, \ldots, t_n]$,
+$g \not \in \mathfrak q'R_f[t_1, \ldots, t_n]$
+such that $R_f[x_1, \ldots, x_n] \to R_f[t_1, \ldots, t_n, 1/g]$
+is quasi-finite. Thus we see that the composition
+$$
+R_f[x_1, \ldots, x_n] \longrightarrow
+R_f[t_1, \ldots, t_n, 1/g] \longrightarrow S_{\varphi(g)}
+$$
+is quasi-finite and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-inequality-quasi-finite}
+Let $R \to S$ be a finite type ring map.
+Let $\mathfrak q \subset S$ be a prime lying over $\mathfrak p \subset R$.
+If $R \to S$ is quasi-finite at $\mathfrak q$, then
+$\dim(S_{\mathfrak q}) \leq \dim(R_{\mathfrak p})$.
+\end{lemma}
+
+\begin{proof}
+If $R_{\mathfrak p}$ is Noetherian
+(and hence $S_{\mathfrak q}$ Noetherian since it is essentially of
+finite type over $R_{\mathfrak p}$)
+then this follows immediately from
+Lemma \ref{lemma-dimension-base-fibre-total} and the
+definitions. In the general case, let $S'$ be the integral
+closure of $R_\mathfrak p$ in $S_\mathfrak p$.
+By Zariski's Main Theorem \ref{theorem-main-theorem}
+we have $S_{\mathfrak q} = S'_{\mathfrak q'}$ for some
+$\mathfrak q' \subset S'$ lying over $\mathfrak q$.
+By Lemma \ref{lemma-integral-dim-up} we have
+$\dim(S') \leq \dim(R_\mathfrak p)$ and hence a fortiori
+$\dim(S_\mathfrak q) = \dim(S'_{\mathfrak q'}) \leq \dim(R_\mathfrak p)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-quasi-finite-over-polynomial-algebra}
+\begin{slogan}
+A quasi-finite cover of affine n-space has dimension at most n.
+\end{slogan}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+Suppose there is a quasi-finite $k$-algebra map
+$k[t_1, \ldots, t_n] \subset S$. Then $\dim(S) \leq n$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-dim-affine-space} the dimension of
+any local ring of $k[t_1, \ldots, t_n]$ is at most $n$.
+Thus the result follows from
+Lemma \ref{lemma-dimension-inequality-quasi-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-fibres-bounded-open-upstairs}
+Let $R \to S$ be a finite type ring map.
+Let $\mathfrak q \subset S$ be a prime.
+Suppose that $\dim_{\mathfrak q}(S/R) = n$.
+There exists an open neighbourhood $V$ of $\mathfrak q$
+in $\Spec(S)$ such that
+$\dim_{\mathfrak q'}(S/R) \leq n$ for all $\mathfrak q' \in V$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-quasi-finite-over-polynomial-algebra}
+we see that we may assume that $S$ is quasi-finite over
+a polynomial algebra $R[t_1, \ldots, t_n]$. Considering
+the fibres, we reduce to
+Lemma \ref{lemma-dimension-quasi-finite-over-polynomial-algebra}.
+\end{proof}
+
+\noindent
+In other words, the lemma says that the set of points where the
+fibre has dimension $\leq n$ is open in $\Spec(S)$.
+The next lemma says that formation of this open commutes with
+base change.
+If the ring map is of finite presentation then this set is
+quasi-compact open (see below).
+
+\begin{lemma}
+\label{lemma-dimension-fibres-bounded-open-upstairs-base-change}
+Let $R \to S$ be a finite type ring map.
+Let $R \to R'$ be any ring map.
+Set $S' = R' \otimes_R S$ and denote $f : \Spec(S') \to \Spec(S)$
+the associated map on spectra.
+Let $n \geq 0$.
+The inverse image
+$f^{-1}(\{\mathfrak q \in \Spec(S) \mid
+\dim_{\mathfrak q}(S/R) \leq n\})$
+is equal to
+$\{\mathfrak q' \in \Spec(S') \mid
+\dim_{\mathfrak q'}(S'/R') \leq n\}$.
+\end{lemma}
+
+\begin{proof}
+The condition is formulated in terms of dimensions
+of fibre rings which are of finite type over a field.
+Combined with
+Lemma \ref{lemma-dimension-at-a-point-preserved-field-extension}
+this yields the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-fibres-bounded-quasi-compact-open-upstairs}
+Let $R \to S$ be a ring homomorphism of finite presentation.
+Let $n \geq 0$. The set
+$$
+V_n = \{\mathfrak q \in \Spec(S) \mid \dim_{\mathfrak q}(S/R) \leq n\}
+$$
+is a quasi-compact open subset of $\Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+It is open by Lemma \ref{lemma-dimension-fibres-bounded-open-upstairs}.
+Let $S = R[x_1, \ldots, x_N]/(f_1, \ldots, f_m)$ be a presentation of
+$S$. Let $R_0$ be the $\mathbf{Z}$-subalgebra of $R$ generated by the
+coefficients of the polynomials $f_i$.
+Let $S_0 = R_0[x_1, \ldots, x_N]/(f_1, \ldots, f_m)$.
+Then $S = R \otimes_{R_0} S_0$. By
+Lemma \ref{lemma-dimension-fibres-bounded-open-upstairs-base-change}
+$V_n$ is the inverse image of an open $V_{0, n}$ under the quasi-compact
+continuous map $\Spec(S) \to \Spec(S_0)$. Since
+$S_0$ is Noetherian we see that $V_{0, n}$ is quasi-compact.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-type-domain-over-valuation-ring-dim-fibres}
+Let $R$ be a valuation ring with residue field $k$ and field
+of fractions $K$. Let $S$ be a domain containing $R$ such that
+$S$ is of finite type over $R$. If $S \otimes_R k$ is not the
+zero ring then
+$$
+\dim(S \otimes_R k) = \dim(S \otimes_R K)
+$$
+In fact, $\Spec(S \otimes_R k)$ is equidimensional.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that $\dim_{\mathfrak q}(S/R)$ is equal
+to $\dim(S \otimes_R K)$ for every prime $\mathfrak q$ of
+$S$ containing $\mathfrak m_RS$. Pick such a prime. By
+Lemma \ref{lemma-dimension-fibres-bounded-open-upstairs}
+the inequality $\dim_{\mathfrak q}(S/R) \geq \dim(S \otimes_R K)$
+holds. Set $n = \dim_{\mathfrak q}(S/R)$. By
+Lemma \ref{lemma-quasi-finite-over-polynomial-algebra}
+after replacing $S$ by $S_g$ for some $g \in S$, $g \not \in \mathfrak q$
+there exists a quasi-finite ring map
+$R[t_1, \ldots, t_n] \to S$. If $\dim(S \otimes_R K) < n$,
+then $K[t_1, \ldots, t_n] \to S \otimes_R K$ has a nonzero kernel.
+Say $f = \sum a_I t_1^{i_1} \ldots t_n^{i_n}$ is a nonzero element
+of this kernel. After dividing
+$f$ by a nonzero coefficient of $f$ with minimal valuation, we may
+assume $f \in R[t_1, \ldots, t_n]$ and that some $a_I$ does not map
+to zero in $k$. Hence the ring map $k[t_1, \ldots, t_n] \to S \otimes_R k$
+has a nonzero kernel which implies that $\dim(S \otimes_R k) < n$.
+Contradiction.
+\end{proof}
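+
+\noindent
+The following standard example illustrates the lemma; it is not used in
+what follows. Let $R = k[[t]]$, a discrete valuation ring with residue
+field $k$ and fraction field $K = k((t))$, and let
+$S = R[x, y]/(xy - t)$. As $xy - t$ is irreducible in the UFD $R[x, y]$,
+the ring $S$ is a domain containing $R$ which is of finite type over $R$.
+We find
+$$
+S \otimes_R K = K[x, y]/(xy - t) \cong K[x, x^{-1}]
+\quad\text{and}\quad
+S \otimes_R k = k[x, y]/(xy)
+$$
+which both have dimension $1$; moreover $\Spec(k[x, y]/(xy))$ is
+equidimensional, being the union of the two coordinate lines.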
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Algebras and modules of finite presentation}
+\label{section-finite-presentation}
+
+\noindent
+In this section we discuss some standard results where the key
+feature is that the assumption involves a finite type or finite
+presentation assumption.
+
+\begin{lemma}
+\label{lemma-finite-type-descends}
+Let $R \to S$ be a ring map.
+Let $R \to R'$ be a faithfully flat ring map.
+Set $S' = R'\otimes_R S$.
+Then $R \to S$ is of finite type if and only if $R' \to S'$
+is of finite type.
+\end{lemma}
+
+\begin{proof}
+It is clear that if $R \to S$ is of finite type then $R' \to S'$
+is of finite type. Assume that $R' \to S'$ is of finite type.
+Say $y_1, \ldots, y_m$ generate $S'$ over $R'$.
+Write $y_j = \sum_i a_{ij} \otimes x_{ij}$ for some
+$a_{ij} \in R'$ and $x_{ij} \in S$. Let $A \subset S$
+be the $R$-subalgebra generated by the $x_{ij}$.
+By flatness we have $A' := R' \otimes_R A \subset S'$, and
+by construction $y_j \in A'$. Hence $A' = S'$.
+By faithful flatness $A = S$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-presentation-descends}
+Let $R \to S$ be a ring map.
+Let $R \to R'$ be a faithfully flat ring map.
+Set $S' = R'\otimes_R S$.
+Then $R \to S$ is of finite presentation if and only if $R' \to S'$
+is of finite presentation.
+\end{lemma}
+
+\begin{proof}
+It is clear that if $R \to S$ is of finite presentation then $R' \to S'$
+is of finite presentation. Assume that $R' \to S'$ is of finite presentation.
+By Lemma \ref{lemma-finite-type-descends} we see
+that $R \to S$ is of finite type. Write $S = R[x_1, \ldots, x_n]/I$.
+By flatness $S' = R'[x_1, \ldots, x_n]/(R' \otimes_R I)$.
+Say $g_1, \ldots, g_m$ generate $R' \otimes_R I$ over $R'[x_1, \ldots, x_n]$.
+Write $g_j = \sum_i a_{ij} \otimes f_{ij}$ for some
+$a_{ij} \in R'$ and $f_{ij} \in I$. Let $J \subset I$
+be the ideal generated by the $f_{ij}$.
+By flatness we have $R' \otimes_R J \subset R'\otimes_R I$, and
+both are ideals over $R'[x_1, \ldots, x_n]$.
+By construction $g_j \in R' \otimes_R J$. Hence
+$R' \otimes_R J = R'\otimes_R I$.
+By faithful flatness $J = I$.
+\end{proof}
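+
+\noindent
+For instance, any field extension $k \subset K$ is faithfully flat, so by
+the two lemmas above a $k$-algebra $S$ is of finite type, respectively of
+finite presentation, over $k$ if and only if $S \otimes_k K$ is of finite
+type, respectively of finite presentation, over $K$.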
+
+\begin{lemma}
+\label{lemma-construct-fp-module}
+Let $R$ be a ring.
+Let $I \subset R$ be an ideal.
+Let $S \subset R$ be a multiplicative subset.
+Set $R' = S^{-1}(R/I) = S^{-1}R/S^{-1}I$.
+\begin{enumerate}
+\item For any finite $R'$-module $M'$ there exists a
+finite $R$-module $M$ such that $S^{-1}(M/IM) \cong M'$.
+\item For any finitely presented $R'$-module $M'$ there exists a
+finitely presented $R$-module $M$ such that $S^{-1}(M/IM) \cong M'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Choose a short exact sequence
+$0 \to K' \to (R')^{\oplus n} \to M' \to 0$.
+Let $K \subset R^{\oplus n}$ be the inverse image of
+$K'$ under the map $R^{\oplus n} \to (R')^{\oplus n}$.
+Then $M = R^{\oplus n}/K$ works.
+
+\medskip\noindent
+Proof of (2).
+Choose a presentation $(R')^{\oplus m} \to (R')^{\oplus n} \to M' \to 0$.
+Suppose that the first map is given by the matrix
+$A' = (a'_{ij})$ and the second map is determined by generators
+$x'_i \in M'$, $i = 1, \ldots, n$. As $R' = S^{-1}(R/I)$ we can choose
+$s \in S$ and a matrix $A = (a_{ij})$ with coefficients in $R$
+such that $a'_{ij} = a_{ij} / s \bmod S^{-1}I$. Let $M$ be the
+finitely presented $R$-module with presentation
+$R^{\oplus m} \to R^{\oplus n} \to M \to 0$
+where the first map is given by the matrix $A$ and the second map is
+determined by generators $x_i \in M$, $i = 1, \ldots, n$.
+Then the map $M \to M'$, $x_i \mapsto x'_i$ induces an isomorphism
+$S^{-1}(M/IM) \cong M'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-construct-fp-module-from-localization}
+Let $R$ be a ring.
+Let $S \subset R$ be a multiplicative subset.
+Let $M$ be an $R$-module.
+\begin{enumerate}
+\item If $S^{-1}M$ is a finite $S^{-1}R$-module then there
+exists a finite $R$-module $M'$ and a map $M' \to M$ which induces an
+isomorphism $S^{-1}M' \to S^{-1}M$.
+\item If $S^{-1}M$ is a finitely presented $S^{-1}R$-module
+then there exists an $R$-module $M'$ of finite presentation
+and a map $M' \to M$ which induces an isomorphism
+$S^{-1}M' \to S^{-1}M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $x_1, \ldots, x_n \in M$ be elements which generate
+$S^{-1}M$ as an $S^{-1}R$-module. Let $M'$ be the
+$R$-submodule of $M$ generated by $x_1, \ldots, x_n$. Then the inclusion
+$M' \to M$ induces an isomorphism $S^{-1}M' \to S^{-1}M$.
+
+\medskip\noindent
+Proof of (2). Let $x_1, \ldots, x_n \in M$ be elements which generate
+$S^{-1}M$ as an $S^{-1}R$-module. Let
+$K = \Ker(R^{\oplus n} \to M)$ where the map is given by
+the rule $(a_1, \ldots, a_n) \mapsto \sum a_i x_i$. By
+Lemma \ref{lemma-extension}
+we see that $S^{-1}K$ is a finite $S^{-1}R$-module.
+By (1) we can find a finite submodule $K' \subset K$
+with $S^{-1}K' = S^{-1}K$. Take
+$M' = \Coker(K' \to R^{\oplus n})$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-construct-fp-module-from-stalk}
+Let $R$ be a ring.
+Let $\mathfrak p \subset R$ be a prime ideal.
+Let $M$ be an $R$-module.
+\begin{enumerate}
+\item If $M_{\mathfrak p}$ is a finite $R_{\mathfrak p}$-module then there
+exists a finite $R$-module $M'$ and a map $M' \to M$ which induces an
+isomorphism $M'_{\mathfrak p} \to M_{\mathfrak p}$.
+\item If $M_{\mathfrak p}$ is a finitely presented $R_{\mathfrak p}$-module
+then there exists an $R$-module $M'$ of finite presentation
+and a map $M' \to M$ which induces an isomorphism
+$M'_{\mathfrak p} \to M_{\mathfrak p}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Lemma \ref{lemma-construct-fp-module-from-localization}
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-isomorphism}
+Let $\varphi : R \to S$ be a ring map. Let $\mathfrak q \subset S$
+be a prime lying over $\mathfrak p \subset R$. Assume
+\begin{enumerate}
+\item $S$ is of finite presentation over $R$,
+\item $\varphi$ induces an isomorphism $R_\mathfrak p \cong S_\mathfrak q$.
+\end{enumerate}
+Then there exist $f \in R$, $f \not \in \mathfrak p$ and an
+$R_f$-algebra $C$ such that $S_f \cong R_f \times C$ as $R_f$-algebras.
+\end{lemma}
+
+\begin{proof}
+Write $S = R[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$. Let $a_i \in R_\mathfrak p$
+be an element mapping to the image of $x_i$ in $S_\mathfrak q$.
+Write $a_i = b_i/f$ for some $f \in R$, $f \not \in \mathfrak p$.
+After replacing $R$ by $R_f$ and $x_i$ by $x_i - a_i$ we may
+assume that $S = R[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$ such
+that $x_i$ maps to zero in $S_\mathfrak q$. Then if $c_j$ denotes
+the constant term of $g_j$ we conclude that $c_j$ maps to zero
+in $R_\mathfrak p$. After another replacement of $R$ we may
+assume that the constant coefficients $c_j$ of the $g_j$ are zero.
+Thus we obtain an $R$-algebra map $S \to R$, $x_i \mapsto 0$ whose
+kernel is the ideal $(x_1, \ldots, x_n)$.
+
+\medskip\noindent
+Note that $\mathfrak q = \mathfrak pS + (x_1, \ldots, x_n)$.
+Write $g_j = \sum a_{ji}x_i + h.o.t.$. Since $S_\mathfrak q = R_\mathfrak p$
+we have $\mathfrak p \otimes \kappa(\mathfrak p) =
+\mathfrak q \otimes \kappa(\mathfrak q)$. It follows that
+the $m \times n$ matrix $A = (a_{ji})$ defines a surjective
+map $\kappa(\mathfrak p)^{\oplus m} \to \kappa(\mathfrak p)^{\oplus n}$.
+Thus after inverting some element of $R$ not in $\mathfrak p$ we may
+assume there are $b_{ij} \in R$ such that $\sum b_{ij} g_j = x_i + h.o.t.$.
+We conclude that $(x_1, \ldots, x_n) = (x_1, \ldots, x_n)^2$ in $S$.
+It follows from Lemma \ref{lemma-ideal-is-squared-union-connected}
+that $(x_1, \ldots, x_n)$ is generated by an idempotent $e$.
+Setting $C = eS$ finishes the proof.
+\end{proof}
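+
+\noindent
+A trivial sanity check for the lemma, included only as an illustration:
+take $S = R[x]/(x^2 - x)$ and let $\mathfrak q$ be a prime of $S$
+containing the image of $x$. Then $S \cong R \times R$ via
+$x \mapsto (0, 1)$, the map $R_{\mathfrak p} \to S_{\mathfrak q}$ is an
+isomorphism for $\mathfrak p = \mathfrak q \cap R$, and indeed
+$S \cong R \times C$ with $C = R$, no localization of $R$ being needed.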
+
+\begin{lemma}
+\label{lemma-isomorphic-local-rings}
+Let $R$ be a ring.
+Let $S$, $S'$ be of finite presentation over $R$.
+Let $\mathfrak q \subset S$ and $\mathfrak q' \subset S'$
+be primes. If $S_{\mathfrak q} \cong S'_{\mathfrak q'}$ as
+$R$-algebras, then there exist $g \in S$, $g \not \in \mathfrak q$
+and $g' \in S'$, $g' \not \in \mathfrak q'$ such that
+$S_g \cong S'_{g'}$ as $R$-algebras.
+\end{lemma}
+
+\begin{proof}
+Let $\psi : S_{\mathfrak q} \to S'_{\mathfrak q'}$ be the isomorphism
+of the hypothesis of the lemma.
+Write $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_r)$ and
+$S' = R[y_1, \ldots, y_m]/J$.
+For each $i = 1, \ldots, n$ choose a fraction
+$h_i/g_i$ with $h_i, g_i \in R[y_1, \ldots, y_m]$
+and $g_i \bmod J$ not in $\mathfrak q'$ which represents
+the image of $x_i$ under $\psi$. After replacing
+$S'$ by $S'_{g_1 \ldots g_n}$ and $R[y_1, \ldots, y_m]$ by
+$R[y_1, \ldots, y_m, y_{m + 1}]$ (mapping $y_{m + 1}$ to $1/(g_1\ldots g_n)$)
+we may assume that $\psi(x_i)$ is the image of some
+$h_i \in R[y_1, \ldots, y_m]$. Consider the elements
+$f_j(h_1, \ldots, h_n) \in R[y_1, \ldots, y_m]$.
+Since $\psi$ kills each $f_j$ we see that
+there exists a $g \in R[y_1, \ldots, y_m]$, $g \bmod J \not \in \mathfrak q'$
+such that $g f_j(h_1, \ldots, h_n) \in J$ for each $j = 1, \ldots, r$.
+After replacing $S'$ by $S'_g$ and
+$R[y_1, \ldots, y_m, y_{m + 1}]$ as before we may assume that
+$f_j(h_1, \ldots, h_n) \in J$. Thus we obtain a ring map
+$S \to S'$, $x_i \mapsto h_i$ which induces $\psi$ on local rings.
+By Lemma \ref{lemma-compose-finite-type}
+the map $S \to S'$ is of finite presentation.
+By Lemma \ref{lemma-local-isomorphism}
+we may assume that $S' = S \times C$. Thus localizing $S'$ at the
+idempotent corresponding to the factor $C$ we obtain the result.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-type-mod-nilpotent}
+Let $R$ be a ring. Let $I \subset R$ be a nilpotent ideal.
+Let $S$ be an $R$-algebra such that $R/I \to S/IS$
+is of finite type. Then $R \to S$ is of finite type.
+\end{lemma}
+
+\begin{proof}
+Choose $s_1, \ldots, s_n \in S$ whose images in $S/IS$ generate
+$S/IS$ as an algebra over $R/I$. By Lemma \ref{lemma-NAK} part (11)
+we see that the $R$-algebra map $R[x_1, \ldots, x_n] \to S$, $x_i \mapsto s_i$
+is surjective and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-mod-locally-nilpotent}
+Let $R$ be a ring. Let $I \subset R$ be a locally nilpotent ideal.
+Let $S \to S'$ be an $R$-algebra map such that $S \to S'/IS'$ is surjective
+and such that $S'$ is of finite type over $R$. Then $S \to S'$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Write $S' = R[x_1, \ldots, x_m]/K$ for some ideal $K$. By assumption there
+exist $g_j = x_j + \sum \delta_{j, J} x^J \in R[x_1, \ldots, x_m]$ with
+$\delta_{j, J} \in I$ and with $g_j \bmod K \in \Im(S \to S')$.
+Hence it suffices to show that $g_1, \ldots, g_m$ generate
+$R[x_1, \ldots, x_m]$ as an $R$-algebra. Let $R_0 \subset R$ be a finitely
+generated $\mathbf{Z}$-subalgebra of $R$ containing at least the
+$\delta_{j, J}$. Then $R_0 \cap I$ is a nilpotent ideal (by
+Lemma \ref{lemma-Noetherian-power}). It follows that
+$R_0[x_1, \ldots, x_m]$ is generated by $g_1, \ldots, g_m$ (because
+$x_j \mapsto g_j$ defines an automorphism of $R_0[x_1, \ldots, x_m]$;
+details omitted). Since $R$ is the union of the subrings $R_0$ we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-isomorphism-modulo-ideal}
+Let $R$ be a ring. Let $I \subset R$ be an ideal. Let $S \to S'$
+be an $R$-algebra map. Let $IS \subset \mathfrak q \subset S$
+be a prime
+ideal. Assume that
+\begin{enumerate}
+\item $S \to S'$ is surjective,
+\item $S_\mathfrak q/IS_\mathfrak q \to S'_\mathfrak q/IS'_\mathfrak q$
+is an isomorphism,
+\item $S$ is of finite type over $R$,
+\item $S'$ of finite presentation over $R$, and
+\item $S'_\mathfrak q$ is flat over $R$.
+\end{enumerate}
+Then $S_g \to S'_g$ is an isomorphism for some
+$g \in S$, $g \not \in \mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+Let $J = \Ker(S \to S')$. By
+Lemma \ref{lemma-compose-finite-type}
+$J$ is a finitely generated ideal. Since $S'_\mathfrak q$ is flat
+over $R$ we see that
+$J_\mathfrak q/IJ_\mathfrak q \subset S_\mathfrak q/IS_{\mathfrak q}$
+(apply Lemma \ref{lemma-flat-tor-zero} to $0 \to J \to S \to S' \to 0$).
+By assumption (2) we see that $J_\mathfrak q/IJ_\mathfrak q$ is zero.
+By Nakayama's lemma (Lemma \ref{lemma-NAK}) we see that
+there exists a $g \in S$, $g \not \in \mathfrak q$ such
+that $J_g = 0$. Hence $S_g \cong S'_g$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-isomorphism-modulo-locally-nilpotent}
+Let $R$ be a ring. Let $I \subset R$ be an ideal. Let $S \to S'$
+be an $R$-algebra map. Assume that
+\begin{enumerate}
+\item $I$ is locally nilpotent,
+\item $S/IS \to S'/IS'$ is an isomorphism,
+\item $S$ is of finite type over $R$,
+\item $S'$ of finite presentation over $R$, and
+\item $S'$ is flat over $R$.
+\end{enumerate}
+Then $S \to S'$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-surjective-mod-locally-nilpotent} the map
+$S \to S'$ is surjective. As $I$ is locally nilpotent, so are the
+ideals $IS$ and $IS'$ (Lemma \ref{lemma-locally-nilpotent}). Hence
+every prime ideal $\mathfrak q$ of $S$ contains $IS$ and (trivially)
+$S_\mathfrak q/IS_\mathfrak q \cong S'_\mathfrak q/IS'_\mathfrak q$.
+Thus Lemma \ref{lemma-isomorphism-modulo-ideal} applies
+and we see that $S_\mathfrak q \to S'_\mathfrak q$ is an
+isomorphism for every prime $\mathfrak q \subset S$.
+It follows that $S \to S'$ is injective, for example by
+Lemma \ref{lemma-characterize-zero-local}. Being both injective and
+surjective, $S \to S'$ is an isomorphism.
+\end{proof}
+
+
+
+
+
+\section{Colimits and maps of finite presentation}
+\label{section-colimits-flat}
+
+\noindent
+In this section we prove some preliminary lemmas
+which will eventually help us prove results using
+absolute Noetherian reduction.
+In Categories, Section \ref{categories-section-directed-colimits}
+we discuss filtered colimits in general.
+Here is an example of this very general notion.
+
+\begin{lemma}
+\label{lemma-ring-colimit-fp-category}
+Let $R \to A$ be a ring map. Consider the category $\mathcal{I}$ of all
+diagrams of $R$-algebra maps $A' \to A$ with $A'$ finitely presented over
+$R$. Then $\mathcal{I}$ is filtered, and the colimit of the $A'$ over
+$\mathcal{I}$ is isomorphic to $A$.
+\end{lemma}
+
+\begin{proof}
+The category\footnote{To avoid set theoretical difficulties we
+consider only $A' \to A$ such that $A'$ is a quotient of
+$R[x_1, x_2, x_3, \ldots]$.}
+$\mathcal{I}$ is nonempty as $R \to R$ is an object of it.
+Consider a pair of objects $A' \to A$, $A'' \to A$ of $\mathcal{I}$.
+Then $A' \otimes_R A'' \to A$ is in
+$\mathcal{I}$ (use Lemmas \ref{lemma-compose-finite-type} and
+\ref{lemma-base-change-finiteness}). The ring maps
+$A' \to A' \otimes_R A''$ and $A'' \to A' \otimes_R A''$
+define arrows in $\mathcal{I}$ thereby proving the second defining
+property of a filtered category, see
+Categories, Definition \ref{categories-definition-directed}.
+Finally, suppose that we have two morphisms $\sigma, \tau : A' \to A''$
+in $\mathcal{I}$. If $x_1, \ldots, x_r \in A'$ are generators of
+$A'$ as an $R$-algebra, then we can consider
+$A''' = A''/(\sigma(x_i) - \tau(x_i))$.
+This is a finitely presented $R$-algebra and the given $R$-algebra map
+$A'' \to A$ factors through the surjection $\nu : A'' \to A'''$.
+Thus $\nu$ is a morphism in $\mathcal{I}$ equalizing $\sigma$ and $\tau$
+as desired.
+
+\medskip\noindent
+The fact that our index category is filtered means that we may
+compute the value of $B = \colim_{A' \to A} A'$ in the category of sets
+(some details omitted; compare with the discussion in
+Categories, Section \ref{categories-section-directed-colimits}).
+To see that $B \to A$ is surjective, for
+every $a \in A$ we can use $R[x] \to A$, $x \mapsto a$ to see that
+$a$ is in the image of $B \to A$. Conversely, if $b \in B$ is mapped
+to zero in $A$, then we can find $A' \to A$ in $\mathcal{I}$ and
+$a' \in A'$ which maps to $b$. Then $A'/(a') \to A$ is in $\mathcal{I}$
+as well and the map $A' \to B$ factors as $A' \to A'/(a') \to B$
+which shows that $b = 0$ as desired.
+\end{proof}
+
+\noindent
+Often it is easier to think about colimits over preordered sets.
+Let $(\Lambda, \geq)$ be a preordered set.
+A system of rings over $\Lambda$ is given by
+a ring $R_\lambda$ for every $\lambda \in \Lambda$,
+and a morphism $R_\lambda \to R_\mu$ whenever $\lambda \leq \mu$.
+These morphisms have to satisfy the rule that
+$R_\lambda \to R_\mu \to R_\nu$ is equal to the map
+$R_\lambda \to R_\nu$ for all $\lambda \leq \mu \leq \nu$.
+See Categories, Section \ref{categories-section-posets-limits}.
+We will often assume that $(\Lambda, \geq)$ is {\it directed},
+which means that $\Lambda$ is nonempty and
+given $\lambda, \mu \in \Lambda$
+there exists a $\nu \in \Lambda$ with $\lambda \leq \nu$ and $\mu \leq \nu$.
+Recall that the colimit $\colim_\lambda R_\lambda$
+is sometimes called a ``direct limit'' in this case
+(but we will not use this terminology).
+
+\medskip\noindent
+Note that Categories, Lemma \ref{categories-lemma-directed-category-system}
+tells us that colimits over filtered index categories are the same
+thing as colimits over directed sets.
+
+\begin{lemma}
+\label{lemma-ring-colimit-fp}
+Let $R \to A$ be a ring map. There exists a directed system $A_\lambda$ of
+$R$-algebras of finite presentation such that $A = \colim_\lambda A_\lambda$.
+If $A$ is of finite type over $R$ we may arrange it so that all the
+transition maps in the system of $A_\lambda$ are surjective.
+\end{lemma}
+
+\begin{proof}
+The first proof is that this follows from
+Lemma \ref{lemma-ring-colimit-fp-category} and
+Categories, Lemma \ref{categories-lemma-directed-category-system}.
+
+\medskip\noindent
+Second proof.
+Compare with the proof of Lemma \ref{lemma-module-colimit-fp}.
+Consider any finite subset $S \subset A$, and any finite
+collection of polynomial relations $E$ among the elements of $S$.
+So each $s \in S$ corresponds to $x_s \in A$ and
+each $e \in E$ consists of a polynomial
+$f_e \in R[X_s; s\in S]$ such that $f_e(x_s) = 0$.
+Let $A_{S, E} = R[X_s; s\in S]/(f_e; e\in E)$
+which is a finitely presented $R$-algebra.
+There are canonical maps $A_{S, E} \to A$.
+If $S \subset S'$ and if the elements of
+$E$ correspond, via the map $R[X_s; s \in S] \to R[X_s; s\in S']$,
+to a subset of $E'$, then there is an obvious map
+$A_{S, E} \to A_{S', E'}$ commuting with the
maps to $A$. Thus, setting $\Lambda$ equal to the set of pairs
+$(S, E)$ with ordering by inclusion as above, we get a
+directed partially ordered set.
+It is clear that the colimit of this directed system is $A$.
+
+\medskip\noindent
+For the last statement, suppose $A = R[x_1, \ldots, x_n]/I$.
+In this case, consider the subset $\Lambda' \subset \Lambda$
+consisting of those systems $(S, E)$ above
+with $S = \{x_1, \ldots, x_n\}$. It is easy to see that
+still $A = \colim_{\lambda' \in \Lambda'} A_{\lambda'}$.
+Moreover, the transition maps are clearly surjective.
+\end{proof}
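
\medskip\noindent
For example, if $A = R[y_1, y_2, y_3, \ldots]$ is a polynomial ring in
countably many variables, then among the pairs $(S, E)$ of the second
proof we find the pairs $(\{y_1, \ldots, y_n\}, \emptyset)$, and these
alone already recover the perhaps more familiar presentation
$$
A = \colim_n R[y_1, \ldots, y_n]
$$
with the inclusions as transition maps.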
+
+\noindent
+It turns out that we can characterize ring maps of finite
+presentation as follows. This in some sense says that the
+algebras of finite presentation are the ``compact'' objects
+in the category of $R$-algebras.
+
+\begin{lemma}
+\label{lemma-characterize-finite-presentation}
+Let $\varphi : R \to S$ be a ring map. The following are equivalent
+\begin{enumerate}
+\item $\varphi$ is of finite presentation,
+\item for every directed system $A_\lambda$ of $R$-algebras
+the map
+$$
+\colim_\lambda \Hom_R(S, A_\lambda) \longrightarrow
+\Hom_R(S, \colim_\lambda A_\lambda)
+$$
+is bijective, and
+\item for every directed system $A_\lambda$ of $R$-algebras
+the map
+$$
+\colim_\lambda \Hom_R(S, A_\lambda) \longrightarrow
+\Hom_R(S, \colim_\lambda A_\lambda)
+$$
+is surjective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1) and write $S = R[x_1, \ldots, x_n] / (f_1, \ldots, f_m)$.
+Let $A = \colim A_\lambda$. Observe that an $R$-algebra homomorphism
+$S \to A$ or $S \to A_\lambda$ is determined by the images of
+$x_1, \ldots, x_n$. Hence it is clear that
+$\colim_\lambda \Hom_R(S, A_\lambda) \to \Hom_R(S, A)$
+is injective. To see that it is surjective, let $\chi : S \to A$
+be an $R$-algebra homomorphism. Then each
+$x_i$ maps to some element in the image of some $A_{\lambda_i}$.
+We may pick $\mu \geq \lambda_i$, $i = 1, \ldots, n$ and
+assume $\chi(x_i)$ is the image of $y_i \in A_\mu$ for
+$i = 1, \ldots, n$. Consider $z_j = f_j(y_1, \ldots, y_n) \in A_\mu$.
+Since $\chi$ is a homomorphism the image of $z_j$ in
+$A = \colim_\lambda A_\lambda$ is zero. Hence there exists a
+$\mu_j \geq \mu$ such that $z_j$ maps to zero in $A_{\mu_j}$.
+Pick $\nu \geq \mu_j$, $j = 1, \ldots, m$. Then the
+images of $z_1, \ldots, z_m$ are zero in $A_\nu$. This
+exactly means that the $y_i$ map to elements
+$y'_i \in A_\nu$ which satisfy the relations $f_j(y'_1, \ldots, y'_n) = 0$.
+Thus we obtain a ring map $S \to A_\nu$. This shows that
+(1) implies (2).
+
+\medskip\noindent
+It is clear that (2) implies (3). Assume (3).
+By Lemma \ref{lemma-ring-colimit-fp} we may write
+$S = \colim_\lambda S_\lambda$ with $S_\lambda$
+of finite presentation over $R$. Then the identity map
+factors as
+$$
+S \to S_\lambda \to S
+$$
+for some $\lambda$. This implies that $S$
+is finitely presented over $S_\lambda$ by
+Lemma \ref{lemma-compose-finite-type} part (4)
+applied to $S \to S_\lambda \to S$. Applying part (2) of the same
+lemma to $R \to S_\lambda \to S$ we conclude that $S$ is of finite
+presentation over $R$.
+\end{proof}
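
\medskip\noindent
As a sanity check for part (2), take $S = R[x]$. An $R$-algebra map
$R[x] \to A$ is the same thing as an element of $A$, i.e.,
$\Hom_R(R[x], A) = A$ functorially in $A$. For a directed system
$A_\lambda$ the map of the lemma is thus identified with the identity
map of $\colim_\lambda A_\lambda$, which is of course bijective.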
+
+\noindent
Using the basic material above we can give a criterion for when
an algebra $A$ is a filtered colimit of algebras of a given type
as follows.
+
+\begin{lemma}
+\label{lemma-when-colimit}
+Let $R \to \Lambda$ be a ring map. Let $\mathcal{E}$ be a set of $R$-algebras
+such that each $A \in \mathcal{E}$ is of finite presentation over $R$.
+Then the following two statements are equivalent
+\begin{enumerate}
+\item $\Lambda$ is a filtered colimit of elements of $\mathcal{E}$, and
\item for any $R$-algebra map $A \to \Lambda$ with $A$ of finite
+presentation over $R$ we can find a factorization $A \to B \to \Lambda$
+with $B \in \mathcal{E}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Suppose that $\mathcal{I} \to \mathcal{E}$, $i \mapsto A_i$
+is a filtered diagram such that $\Lambda = \colim_i A_i$.
+Let $A \to \Lambda$ be an $R$-algebra map with $A$ of finite
+presentation over $R$. Then we get a factorization $A \to A_i \to \Lambda$
+by applying Lemma \ref{lemma-characterize-finite-presentation}.
+Thus (1) implies (2).
+
+\medskip\noindent
Conversely, assume (2). Consider the category
$\mathcal{I}$ of Lemma \ref{lemma-ring-colimit-fp-category}.
+By Categories, Lemma \ref{categories-lemma-cofinal-in-filtered}
+the full subcategory $\mathcal{J}$ consisting of those
+$A \to \Lambda$ with $A \in \mathcal{E}$ is cofinal in $\mathcal{I}$ and
+is a filtered category. Then $\Lambda$ is also the colimit
+over $\mathcal{J}$ by Categories, Lemma \ref{categories-lemma-cofinal}.
+\end{proof}
+
+\noindent
+But more is true. Namely, given $R = \colim_\lambda R_\lambda$
+we see that the category of finitely presented $R$-modules is equivalent
to the colimit of the categories of finitely presented $R_\lambda$-modules.
+Similarly for the categories of finitely presented $R$-algebras.
+
+\begin{lemma}
+\label{lemma-module-map-property-in-colimit}
+Let $A$ be a ring and let $M, N$ be $A$-modules.
+Suppose that $R = \colim_{i \in I} R_i$ is a directed colimit
+of $A$-algebras.
+\begin{enumerate}
+\item If $M$ is a finite $A$-module, and $u, u' : M \to N$ are
+$A$-module maps such that
+$u \otimes 1 = u' \otimes 1 : M \otimes_A R \to N \otimes_A R$
+then for some $i$ we have
+$u \otimes 1 = u' \otimes 1 : M \otimes_A R_i \to N \otimes_A R_i$.
+\item If $N$ is a finite $A$-module and $u : M \to N$ is an $A$-module
+map such that $u \otimes 1 : M \otimes_A R \to N \otimes_A R$ is surjective,
+then for some $i$ the map $u \otimes 1 : M \otimes_A R_i \to N \otimes_A R_i$
+is surjective.
+\item If $N$ is a finitely presented $A$-module, and
+$v : N \otimes_A R \to M \otimes_A R$ is an $R$-module
+map, then there exists an $i$ and an $R_i$-module map
+$v_i : N \otimes_A R_i \to M \otimes_A R_i$ such that $v = v_i \otimes 1$.
+\item If $M$ is a finite $A$-module, $N$ is a finitely presented $A$-module,
+and $u : M \to N$ is an $A$-module map such that
+$u \otimes 1 : M \otimes_A R \to N \otimes_A R$ is an isomorphism, then
+for some $i$ the map $u \otimes 1 : M \otimes_A R_i \to N \otimes_A R_i$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove (1) assume $u$ is as in (1) and
+let $x_1, \ldots, x_m \in M$ be generators. Since
+$N \otimes_A R = \colim_i N \otimes_A R_i$
+we may pick an $i \in I$ such that $u(x_j) \otimes 1 = u'(x_j) \otimes 1$
in $N \otimes_A R_i$, $j = 1, \ldots, m$.
+For such an $i$ we have
+$u \otimes 1 = u' \otimes 1 : M \otimes_A R_i \to N \otimes_A R_i$.
+
+\medskip\noindent
+To prove (2) assume $u \otimes 1$ surjective and
+let $y_1, \ldots, y_m \in N$ be generators. Since
+$N \otimes_A R = \colim_i N \otimes_A R_i$
+we may pick an $i \in I$ and $z_j \in M \otimes_A R_i$, $j = 1, \ldots, m$
+whose images in $N \otimes_A R$ equal $y_j \otimes 1$.
+For such an $i$ the map $u \otimes 1 : M \otimes_A R_i \to N \otimes_A R_i$
+is surjective.
+
+\medskip\noindent
+To prove (3) let $y_1, \ldots, y_m \in N$ be generators. Let
+$K = \Ker(A^{\oplus m} \to N)$ where the map is given by
the rule $(a_1, \ldots, a_m) \mapsto \sum a_j y_j$. Let $k_1, \ldots, k_t$
+be generators for $K$. Say $k_s = (k_{s1}, \ldots, k_{sm})$.
+Since $M \otimes_A R = \colim_i M \otimes_A R_i$
+we may pick an $i \in I$ and $z_j \in M \otimes_A R_i$, $j = 1, \ldots, m$
+whose images in $M \otimes_A R$ equal $v(y_j \otimes 1)$.
+We want to use the $z_j$ to define the map
+$v_i : N \otimes_A R_i \to M \otimes_A R_i$.
+Since $K \otimes_A R_i \to R_i^{\oplus m} \to N \otimes_A R_i \to 0$
+is a presentation, it suffices to check that $\xi_s = \sum_j k_{sj}z_j$ is
+zero in $M \otimes_A R_i$ for each $s = 1, \ldots, t$. This may not
+be the case, but since the image of $\xi_s$ in $M \otimes_A R$ is zero
+we see that it will be the case after increasing $i$ a bit.
+
+\medskip\noindent
+To prove (4) assume $u \otimes 1$ is an isomorphism, that
+$M$ is finite, and that $N$ is finitely presented.
+Let $v : N \otimes_A R \to M \otimes_A R$ be an inverse to
+$u \otimes 1$. Apply part (3) to get a map
+$v_i : N \otimes_A R_i \to M \otimes_A R_i$ for some $i$.
+Apply part (1) to see that, after increasing $i$ we have
$v_i \circ (u \otimes 1) = \text{id}_{M \otimes_A R_i}$ and
$(u \otimes 1) \circ v_i = \text{id}_{N \otimes_A R_i}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-category-fp-modules}
+Suppose that $R = \colim_{\lambda \in \Lambda} R_\lambda$ is a directed colimit
+of rings. Then the category of finitely presented $R$-modules is
+the colimit of the categories of finitely presented $R_\lambda$-modules.
+More precisely
+\begin{enumerate}
+\item Given a finitely presented $R$-module $M$ there exists a
+$\lambda \in \Lambda$ and a finitely presented $R_\lambda$-module
+$M_\lambda$ such that $M \cong M_\lambda \otimes_{R_\lambda} R$.
+\item Given a $\lambda \in \Lambda$, finitely presented
+$R_\lambda$-modules $M_\lambda, N_\lambda$, and an $R$-module map
+$\varphi : M_\lambda \otimes_{R_\lambda} R \to N_\lambda \otimes_{R_\lambda} R$,
+then there exists a $\mu \geq \lambda$ and an $R_\mu$-module map
+$\varphi_\mu : M_\lambda \otimes_{R_\lambda} R_\mu \to
+N_\lambda \otimes_{R_\lambda} R_\mu$
+such that $\varphi = \varphi_\mu \otimes 1_R$.
+\item Given a $\lambda \in \Lambda$, finitely presented
$R_\lambda$-modules $M_\lambda, N_\lambda$, and $R_\lambda$-module maps
$\varphi_\lambda, \psi_\lambda : M_\lambda \to N_\lambda$
such that $\varphi_\lambda \otimes 1_R = \psi_\lambda \otimes 1_R$, then
$\varphi_\lambda \otimes 1_{R_\mu} = \psi_\lambda \otimes 1_{R_\mu}$ for some
+$\mu \geq \lambda$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove (1) choose a presentation
+$R^{\oplus m} \to R^{\oplus n} \to M \to 0$.
+Suppose that the first map is given by the matrix $A = (a_{ij})$.
+We can choose a $\lambda \in \Lambda$ and a matrix
+$A_\lambda = (a_{\lambda, ij})$ with coefficients in $R_\lambda$
+which maps to $A$ in $R$.
+Then we simply let $M_\lambda$ be the $R_\lambda$-module with presentation
+$R_\lambda^{\oplus m} \to R_\lambda^{\oplus n} \to M_\lambda \to 0$
+where the first arrow is given by $A_\lambda$.
+
+\medskip\noindent
+Parts (2) and (3) follow from
+Lemma \ref{lemma-module-map-property-in-colimit}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-algebra-map-property-in-colimit}
+Let $A$ be a ring and let $B, C$ be $A$-algebras.
+Suppose that $R = \colim_{i \in I} R_i$ is a directed colimit
+of $A$-algebras.
+\begin{enumerate}
+\item If $B$ is a finite type $A$-algebra, and $u, u' : B \to C$ are
+$A$-algebra maps such that
+$u \otimes 1 = u' \otimes 1 : B \otimes_A R \to C \otimes_A R$
+then for some $i$ we have
+$u \otimes 1 = u' \otimes 1 : B \otimes_A R_i \to C \otimes_A R_i$.
+\item If $C$ is a finite type $A$-algebra and $u : B \to C$ is an
+$A$-algebra map such that
+$u \otimes 1 : B \otimes_A R \to C \otimes_A R$ is surjective, then
+for some $i$ the map $u \otimes 1 : B \otimes_A R_i \to C \otimes_A R_i$
+is surjective.
+\item If $C$ is of finite presentation over $A$ and
+$v : C \otimes_A R \to B \otimes_A R$ is an $R$-algebra map, then there
+exists an $i$ and an $R_i$-algebra map
+$v_i : C \otimes_A R_i \to B \otimes_A R_i$ such that
+$v = v_i \otimes 1$.
+\item If $B$ is a finite type $A$-algebra, $C$ is a finitely presented
+$A$-algebra, and
+$u \otimes 1 : B \otimes_A R \to C \otimes_A R$ is an isomorphism, then
+for some $i$ the map $u \otimes 1 : B \otimes_A R_i \to C \otimes_A R_i$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove (1) assume $u$ is as in (1) and
+let $x_1, \ldots, x_m \in B$ be generators. Since
+$B \otimes_A R = \colim_i B \otimes_A R_i$
+we may pick an $i \in I$ such that $u(x_j) \otimes 1 = u'(x_j) \otimes 1$
+in $B \otimes_A R_i$, $j = 1, \ldots, m$.
+For such an $i$ we have
+$u \otimes 1 = u' \otimes 1 : B \otimes_A R_i \to C \otimes_A R_i$.
+
+\medskip\noindent
+To prove (2) assume $u \otimes 1$ surjective and
+let $y_1, \ldots, y_m \in C$ be generators. Since
+$B \otimes_A R = \colim_i B \otimes_A R_i$
+we may pick an $i \in I$ and $z_j \in B \otimes_A R_i$, $j = 1, \ldots, m$
+whose images in $C \otimes_A R$ equal $y_j \otimes 1$.
+For such an $i$ the map $u \otimes 1 : B \otimes_A R_i \to C \otimes_A R_i$
+is surjective.
+
+\medskip\noindent
+To prove (3) let $c_1, \ldots, c_m \in C$ be generators. Let
$K = \Ker(A[x_1, \ldots, x_m] \to C)$ where the map is given by
the rule $x_j \mapsto c_j$. Let $f_1, \ldots, f_t$
+be generators for $K$ as an ideal in $A[x_1, \ldots, x_m]$.
+We think of $f_j = f_j(x_1, \ldots, x_m)$ as a polynomial.
+Since $B \otimes_A R = \colim_i B \otimes_A R_i$
+we may pick an $i \in I$ and $z_j \in B \otimes_A R_i$, $j = 1, \ldots, m$
+whose images in $B \otimes_A R$ equal $v(c_j \otimes 1)$.
+We want to use the $z_j$ to define a map
+$v_i : C \otimes_A R_i \to B \otimes_A R_i$.
+Since $K \otimes_A R_i \to R_i[x_1, \ldots, x_m] \to C \otimes_A R_i \to 0$
+is a presentation, it suffices to check that
$\xi_s = f_s(z_1, \ldots, z_m)$ is
+zero in $B \otimes_A R_i$ for each $s = 1, \ldots, t$. This may not
+be the case, but since the image of $\xi_s$ in $B \otimes_A R$ is zero
+we see that it will be the case after increasing $i$ a bit.
+
+\medskip\noindent
+To prove (4) assume $u \otimes 1$ is an isomorphism, that
+$B$ is a finite type $A$-algebra, and that $C$ is a finitely presented
+$A$-algebra. Let $v : B \otimes_A R \to C \otimes_A R$ be an inverse to
+$u \otimes 1$. Let $v_i : C \otimes_A R_i \to B \otimes_A R_i$ be as
+in part (3). Apply part (1) to see that, after increasing $i$ we have
$v_i \circ (u \otimes 1) = \text{id}_{B \otimes_A R_i}$ and
$(u \otimes 1) \circ v_i = \text{id}_{C \otimes_A R_i}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-category-fp-algebras}
+Suppose that $R = \colim_{\lambda \in \Lambda} R_\lambda$ is a directed colimit
+of rings. Then the category of finitely presented $R$-algebras is
+the colimit of the categories of finitely presented $R_\lambda$-algebras.
+More precisely
+\begin{enumerate}
+\item Given a finitely presented $R$-algebra $A$ there exists a
+$\lambda \in \Lambda$ and a finitely presented $R_\lambda$-algebra
+$A_\lambda$ such that $A \cong A_\lambda \otimes_{R_\lambda} R$.
+\item Given a $\lambda \in \Lambda$, finitely presented
+$R_\lambda$-algebras $A_\lambda, B_\lambda$, and an $R$-algebra map
+$\varphi : A_\lambda \otimes_{R_\lambda} R \to B_\lambda \otimes_{R_\lambda} R$,
+then there exists a $\mu \geq \lambda$ and an $R_\mu$-algebra map
+$\varphi_\mu : A_\lambda \otimes_{R_\lambda} R_\mu \to
+B_\lambda \otimes_{R_\lambda} R_\mu$
+such that $\varphi = \varphi_\mu \otimes 1_R$.
+\item Given a $\lambda \in \Lambda$, finitely presented
+$R_\lambda$-algebras $A_\lambda, B_\lambda$, and $R_\lambda$-algebra maps
+$\varphi_\lambda, \psi_\lambda : A_\lambda \to B_\lambda$
such that $\varphi_\lambda \otimes 1_R = \psi_\lambda \otimes 1_R$, then
$\varphi_\lambda \otimes 1_{R_\mu} = \psi_\lambda \otimes 1_{R_\mu}$ for some
+$\mu \geq \lambda$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove (1) choose a presentation
+$A = R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$.
+We can choose a $\lambda \in \Lambda$ and elements
+$f_{\lambda, j} \in R_\lambda[x_1, \ldots, x_n]$ mapping to
+$f_j \in R[x_1, \ldots, x_n]$.
+Then we simply let
+$A_\lambda =
+R_\lambda[x_1, \ldots, x_n]/(f_{\lambda, 1}, \ldots, f_{\lambda, m})$.
+
+\medskip\noindent
+Parts (2) and (3) follow from
+Lemma \ref{lemma-algebra-map-property-in-colimit}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-no-condition-local}
+Suppose $R \to S$ is a local homomorphism of local rings.
+There exists a directed set $(\Lambda, \leq)$, and
+a system of local homomorphisms $R_\lambda \to S_\lambda$
+of local rings such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$.
+\item Each $R_\lambda$ is essentially of finite type
+over $\mathbf{Z}$.
+\item Each $S_\lambda$ is essentially of finite type
+over $R_\lambda$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $\varphi : R \to S$ the ring map.
+Let $\mathfrak m \subset R$ be the maximal ideal
+of $R$ and let $\mathfrak n \subset S$ be the maximal
+ideal of $S$. Let
+$$
+\Lambda = \{
+(A, B)
+\mid
+A \subset R, B \subset S, \# A < \infty, \# B < \infty, \varphi(A) \subset B
+\}.
+$$
+As partial ordering we take the inclusion relation. For each
+$\lambda = (A, B) \in \Lambda$ we let $R'_\lambda$ be
the sub $\mathbf{Z}$-algebra of $R$ generated by the elements of $A$,
and we let $S'_\lambda$ be the sub
$\mathbf{Z}$-algebra of $S$ generated by the elements of $B$.
+Let $R_\lambda$ be the localization of $R'_\lambda$
+at the prime ideal $R'_\lambda \cap \mathfrak m$ and let
+$S_\lambda$ be the localization of $S'_\lambda$ at
+the prime ideal $S'_\lambda \cap \mathfrak n$.
+In a picture
+$$
+\xymatrix{
+B \ar[r] &
+S'_\lambda \ar[r] &
+S_\lambda \ar[r] &
+S \\
+A \ar[r] \ar[u] &
+R'_\lambda \ar[r] \ar[u] &
+R_\lambda \ar[r] \ar[u] &
+R \ar[u]
+}.
+$$
+The transition maps are clear. We leave the proofs of the other
+assertions to the reader.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-essentially-finite-type}
+Suppose $R \to S$ is a local homomorphism of local rings.
+Assume that $S$ is essentially of finite type over $R$.
+Then there exists a directed set $(\Lambda, \leq)$, and
+a system of local homomorphisms $R_\lambda \to S_\lambda$
+of local rings such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$.
+\item Each $R_\lambda$ is essentially of finite type
+over $\mathbf{Z}$.
+\item Each $S_\lambda$ is essentially of finite type
+over $R_\lambda$.
+\item For each $\lambda \leq \mu$ the map
+$S_\lambda \otimes_{R_\lambda} R_\mu \to S_\mu$
+presents $S_\mu$ as the localization of a quotient
+of $S_\lambda \otimes_{R_\lambda} R_\mu$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $\varphi : R \to S$ the ring map.
+Let $\mathfrak m \subset R$ be the maximal ideal
+of $R$ and let $\mathfrak n \subset S$ be the maximal
+ideal of $S$. Let $x_1, \ldots, x_n \in S$ be elements such that
+$S$ is a localization of the sub $R$-algebra of $S$
+generated by $x_1, \ldots, x_n$. In other words, $S$
+is a quotient of a localization of the polynomial ring
+$R[x_1, \ldots, x_n]$.
+
+\medskip\noindent
+Let $\Lambda = \{ A \subset R \mid \# A < \infty\}$
+be the set of finite subsets of $R$.
+As partial ordering we take the inclusion relation. For each
+$\lambda = A \in \Lambda$ we let $R'_\lambda$ be
the sub $\mathbf{Z}$-algebra of $R$ generated by the elements of $A$,
and we let $S'_\lambda$ be the sub
$\mathbf{Z}$-algebra of $S$ generated by the elements $\varphi(a)$, $a \in A$,
and the elements $x_1, \ldots, x_n$. Let $R_\lambda$ be
+the localization of $R'_\lambda$ at the prime ideal
+$R'_\lambda \cap \mathfrak m$ and let
+$S_\lambda$ be the localization of $S'_\lambda$ at
+the prime ideal $S'_\lambda \cap \mathfrak n$.
+In a picture
+$$
+\xymatrix{
+\varphi(A) \amalg \{x_i\} \ar[r] &
+S'_\lambda \ar[r] &
+S_\lambda \ar[r] &
+S \\
+A \ar[r] \ar[u] &
+R'_\lambda \ar[r] \ar[u] &
+R_\lambda \ar[r] \ar[u] &
+R \ar[u]
+}
+$$
+It is clear that if $A \subset B$ corresponds to
+$\lambda \leq \mu$ in $\Lambda$, then there are
+canonical maps $R_\lambda \to R_\mu$, and $S_\lambda \to S_\mu$
+and we obtain a system over the directed set $\Lambda$.
+
+\medskip\noindent
+The assertion that $R = \colim R_\lambda$ is clear
+because all the maps $R_\lambda \to R$ are injective and
+any element of $R$ eventually is in the image. The same
+argument works for $S = \colim S_\lambda$.
+Assertions (2), (3) are true by construction.
+The final assertion holds because clearly
+the maps $S'_\lambda \otimes_{R'_\lambda} R'_\mu
+\to S'_\mu$ are surjective.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-essentially-finite-presentation}
+Suppose $R \to S$ is a local homomorphism of local rings.
+Assume that $S$ is essentially of finite presentation over $R$.
+Then there exists a directed set $(\Lambda, \leq)$, and
a system of local homomorphisms $R_\lambda \to S_\lambda$
+of local rings such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$.
+\item Each $R_\lambda$ is essentially of finite type
+over $\mathbf{Z}$.
+\item Each $S_\lambda$ is essentially of finite type
+over $R_\lambda$.
+\item For each $\lambda \leq \mu$ the map
+$S_\lambda \otimes_{R_\lambda} R_\mu \to S_\mu$
+presents $S_\mu$ as the localization of
+$S_\lambda \otimes_{R_\lambda} R_\mu$
+at a prime ideal.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By assumption we may choose an isomorphism
+$\Phi : (R[x_1, \ldots, x_n]/I)_{\mathfrak q} \to S$
+where $I \subset R[x_1, \ldots, x_n]$ is a finitely generated ideal,
+and $\mathfrak q \subset R[x_1, \ldots, x_n]/I$ is a prime.
+(Note that $R \cap \mathfrak q$
+is equal to the maximal ideal $\mathfrak m$ of $R$.)
+We also choose generators $f_1, \ldots, f_m \in I$ for the ideal $I$.
+Write $R$ in any way as a colimit $R = \colim R_\lambda$
+over a directed set $(\Lambda, \leq )$, with each $R_\lambda$
+local and essentially of finite type over $\mathbf{Z}$.
+There exists some $\lambda_0 \in \Lambda$ such that $f_j$ is the image
+of some $f_{j, \lambda_0} \in R_{\lambda_0}[x_1, \ldots, x_n]$.
+For all $\lambda \geq \lambda_0$ denote
+$f_{j, \lambda} \in R_{\lambda}[x_1, \ldots, x_n]$ the image
+of $f_{j, \lambda_0}$. Thus we obtain a system of ring maps
+$$
+R_\lambda[x_1, \ldots, x_n]/(f_{1, \lambda}, \ldots, f_{m, \lambda})
+\to
+R[x_1, \ldots, x_n]/(f_1, \ldots, f_m) \to S
+$$
Let $\mathfrak q_\lambda$ be the inverse image of $\mathfrak q$.
+Set $S_\lambda = (R_\lambda[x_1, \ldots, x_n]/
+(f_{1, \lambda}, \ldots, f_{m, \lambda}))_{\mathfrak q_\lambda}$.
+We leave it to the reader to see that this works.
+\end{proof}
+
+\begin{remark}
+\label{remark-suitable-systems-limits}
+Suppose that $R \to S$ is a local homomorphism
+of local rings, which is essentially of finite presentation.
+Take any system $(\Lambda, \leq)$, $R_\lambda \to S_\lambda$
+with the properties listed in
+Lemma \ref{lemma-limit-essentially-finite-type}.
+What may happen is that this is the ``wrong'' system, namely,
+it may happen that property (4) of
+Lemma \ref{lemma-limit-essentially-finite-presentation} is not
+satisfied. Here is an example. Let $k$ be a field. Consider the ring
+$$
+R = k[[z, y_1, y_2, \ldots]]/(y_i^2 - zy_{i + 1}).
+$$
Set $S = R/zR$. As a system take $\Lambda = \mathbf{N}$ and
+$R_n = k[[z, y_1, \ldots, y_n]]/(\{y_i^2 - zy_{i + 1}\}_{i \leq n-1})$
and $S_n = R_n/(z, y_n^2)$. None of the maps
$S_n \otimes_{R_n} R_{n + 1} \to S_{n + 1}$
is a localization (i.e., an isomorphism in this case)
since $1 \otimes y_{n + 1}^2$ maps to zero.
+If we take instead $S_n' = R_n/zR_n$ then the
+maps $S'_n \otimes_{R_n} R_{n + 1} \to S'_{n + 1}$ are
+isomorphisms. The moral of this remark is that we do have to be
+a little careful in choosing the systems.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-limit-module-essentially-finite-presentation}
+Suppose $R \to S$ is a local homomorphism of local rings.
+Assume that $S$ is essentially of finite presentation over $R$.
+Let $M$ be a finitely presented $S$-module.
+Then there exists a directed set $(\Lambda, \leq)$, and
+a system of local homomorphisms $R_\lambda \to S_\lambda$
+of local rings together with $S_\lambda$-modules $M_\lambda$,
+such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$. The colimit of the system $M_\lambda$
+is $M$.
+\item Each $R_\lambda$ is essentially of finite type
+over $\mathbf{Z}$.
+\item Each $S_\lambda$ is essentially of finite type
+over $R_\lambda$.
+\item Each $M_\lambda$ is finite over $S_\lambda$.
+\item For each $\lambda \leq \mu$ the map
+$S_\lambda \otimes_{R_\lambda} R_\mu \to S_\mu$
+presents $S_\mu$ as the localization of
+$S_\lambda \otimes_{R_\lambda} R_\mu$
+at a prime ideal.
+\item For each $\lambda \leq \mu$ the map
+$M_\lambda \otimes_{S_\lambda} S_\mu \to M_\mu$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+As in the proof of Lemma \ref{lemma-limit-essentially-finite-presentation}
+we may first write $R = \colim R_\lambda$ as a directed colimit
+of local $\mathbf{Z}$-algebras which are essentially of finite type.
+Next, we may assume that for some $\lambda_1 \in \Lambda$ there
+exist $f_{j, \lambda_1} \in R_{\lambda_1}[x_1, \ldots, x_n]$
+such that
+$$
+S =
+\colim_{\lambda \geq \lambda_1} S_\lambda, \text{ with }
+S_\lambda =
+(R_\lambda[x_1, \ldots, x_n]/
+(f_{1, \lambda}, \ldots, f_{m, \lambda}))_{\mathfrak q_\lambda}
+$$
+Choose a presentation
+$$
+S^{\oplus s} \to S^{\oplus t} \to M \to 0
+$$
+of $M$ over $S$. Let $A \in \text{Mat}(t \times s, S)$ be
+the matrix of the presentation. For some $\lambda_2 \in \Lambda$,
+$\lambda_2 \geq \lambda_1$
+we can find a matrix $A_{\lambda_2} \in \text{Mat}(t \times s, S_{\lambda_2})$
+which maps to $A$. For all $\lambda \geq \lambda_2$ we let
+$M_\lambda = \Coker(S_\lambda^{\oplus s} \xrightarrow{A_\lambda}
+S_\lambda^{\oplus t})$. We leave it to the reader to see that
+this works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-no-condition}
+Suppose $R \to S$ is a ring map.
+Then there exists a directed set $(\Lambda, \leq)$, and
+a system of ring maps $R_\lambda \to S_\lambda$
+such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$.
+\item Each $R_\lambda$ is of finite type
+over $\mathbf{Z}$.
+\item Each $S_\lambda$ is of finite type
+over $R_\lambda$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is the non-local version of
+Lemma \ref{lemma-limit-no-condition-local}.
+Proof is similar and left to the reader.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-integral}
+Suppose $R \to S$ is a ring map.
+Assume that $S$ is integral over $R$.
+Then there exists a directed set $(\Lambda, \leq)$, and
+a system of ring maps $R_\lambda \to S_\lambda$
+such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$.
+\item Each $R_\lambda$ is of finite type
+over $\mathbf{Z}$.
\item Each $S_\lambda$ is finite over $R_\lambda$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Consider the set $\Lambda$ of pairs $(E, F)$ where $E \subset R$
+is a finite subset, $F \subset S$ is a finite subset, and
every element $f \in F$ is a root of a monic polynomial $P(X) \in R[X]$
+whose coefficients are in $E$. Say $(E, F) \leq (E', F')$
+if $E \subset E'$ and $F \subset F'$.
+Given $\lambda = (E, F) \in \Lambda$ set $R_\lambda \subset R$ equal
+to the $\mathbf{Z}$-subalgebra of $R$ generated by $E$ and
+$S_\lambda \subset S$ equal to the $\mathbf{Z}$-subalgebra generated by
+$F$ and the image of $E$ in $S$. It is clear that $R = \colim R_\lambda$.
We have $S = \colim S_\lambda$ as every element of $S$ is integral
over $R$. The ring maps $R_\lambda \to S_\lambda$ are finite by
+Lemma \ref{lemma-characterize-finite-in-terms-of-integral} and the fact that
+$S_\lambda$ is generated over $R_\lambda$ by the elements of
+$F$ which are integral over $R_\lambda$ by our condition on the
+pairs $(E, F)$. The lemma follows.
+\end{proof}
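
\medskip\noindent
A standard example to keep in mind is the integral extension
$\mathbf{Z} \to \overline{\mathbf{Z}}$, where $\overline{\mathbf{Z}}$
denotes the ring of all algebraic integers. For a pair
$\lambda = (E, F) \in \Lambda$ as in the proof we get
$R_\lambda = \mathbf{Z}$ and
$S_\lambda = \mathbf{Z}[\alpha_1, \ldots, \alpha_r]$ where
$F = \{\alpha_1, \ldots, \alpha_r\}$. Each $S_\lambda$ is finite over
$\mathbf{Z}$, and thus $\overline{\mathbf{Z}}$ is a directed colimit of
finite $\mathbf{Z}$-algebras even though it is not itself finite
over $\mathbf{Z}$.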
+
+\begin{lemma}
+\label{lemma-limit-finite-type}
+Suppose $R \to S$ is a ring map.
+Assume that $S$ is of finite type over $R$.
+Then there exists a directed set $(\Lambda, \leq)$, and
+a system of ring maps $R_\lambda \to S_\lambda$
+such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$.
+\item Each $R_\lambda$ is of finite type
+over $\mathbf{Z}$.
+\item Each $S_\lambda$ is of finite type
+over $R_\lambda$.
+\item For each $\lambda \leq \mu$ the map
+$S_\lambda \otimes_{R_\lambda} R_\mu \to S_\mu$
+presents $S_\mu$ as a quotient
+of $S_\lambda \otimes_{R_\lambda} R_\mu$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is the non-local version of
+Lemma \ref{lemma-limit-essentially-finite-type}.
+Proof is similar and left to the reader.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-finite-presentation}
+Suppose $R \to S$ is a ring map.
+Assume that $S$ is of finite presentation over $R$.
+Then there exists a directed set $(\Lambda, \leq)$, and
+a system of ring maps $R_\lambda \to S_\lambda$
+such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$.
+\item Each $R_\lambda$ is of finite type
+over $\mathbf{Z}$.
+\item Each $S_\lambda$ is of finite type
+over $R_\lambda$.
+\item For each $\lambda \leq \mu$ the map
+$S_\lambda \otimes_{R_\lambda} R_\mu \to S_\mu$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is the non-local version of
+Lemma \ref{lemma-limit-essentially-finite-presentation}.
+Proof is similar and left to the reader.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-module-finite-presentation}
+Suppose $R \to S$ is a ring map.
+Assume that $S$ is of finite presentation over $R$.
+Let $M$ be a finitely presented $S$-module.
+Then there exists a directed set $(\Lambda, \leq)$, and
+a system of ring maps $R_\lambda \to S_\lambda$
+together with $S_\lambda$-modules $M_\lambda$,
+such that
+\begin{enumerate}
+\item The colimit of the system $R_\lambda \to S_\lambda$
+is equal to $R \to S$. The colimit of the system $M_\lambda$
+is $M$.
+\item Each $R_\lambda$ is of finite type
+over $\mathbf{Z}$.
+\item Each $S_\lambda$ is of finite type
+over $R_\lambda$.
+\item Each $M_\lambda$ is finite over $S_\lambda$.
+\item For each $\lambda \leq \mu$ the map
+$S_\lambda \otimes_{R_\lambda} R_\mu \to S_\mu$
+is an isomorphism.
+\item For each $\lambda \leq \mu$ the map
+$M_\lambda \otimes_{S_\lambda} S_\mu \to M_\mu$
+is an isomorphism.
+\end{enumerate}
+In particular, for every $\lambda \in \Lambda$ we have
+$$
+M = M_\lambda \otimes_{S_\lambda} S
+= M_\lambda \otimes_{R_\lambda} R.
+$$
+\end{lemma}
+
+\begin{proof}
+This is the non-local version of
+Lemma \ref{lemma-limit-module-essentially-finite-presentation}.
+Proof is similar and left to the reader.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{More flatness criteria}
+\label{section-more-flatness-criteria}
+
+\noindent
+The following lemma is often used in algebraic geometry to show that a finite
+morphism from a normal surface to a smooth surface is flat. It is a partial
+converse to
+Lemma \ref{lemma-finite-flat-over-regular-CM}
+because an injective finite local ring map certainly satisfies condition (3).
+
+\begin{lemma}
+\label{lemma-CM-over-regular-flat}
+\begin{slogan}
+Miracle flatness
+\end{slogan}
+Let $R \to S$ be a local homomorphism of Noetherian local
+rings. Assume
+\begin{enumerate}
+\item $R$ is regular,
\item $S$ is Cohen-Macaulay,
+\item $\dim(S) = \dim(R) + \dim(S/\mathfrak m_R S)$.
+\end{enumerate}
+Then $R \to S$ is flat.
+\end{lemma}
+
+\begin{proof}
+By induction on $\dim(R)$. The case $\dim(R) = 0$ is trivial, because
+then $R$ is a field. Assume $\dim(R) > 0$. By (3) this implies that
+$\dim(S) > 0$. Let $\mathfrak q_1, \ldots, \mathfrak q_r$ be the minimal
+primes of $S$. Note that $\mathfrak q_i \not \supset \mathfrak m_R S$ since
+$$
+\dim(S/\mathfrak q_i) = \dim(S) > \dim(S/\mathfrak m_R S)
+$$
+the first equality by Lemma \ref{lemma-maximal-chain-CM} and the
+inequality by (3). Thus
+$\mathfrak p_i = R \cap \mathfrak q_i$ is not equal to $\mathfrak m_R$.
Pick $x \in \mathfrak m_R$, $x \not \in \mathfrak m_R^2$, and
$x \not \in \mathfrak p_i$, see
+Lemma \ref{lemma-silly}.
+Hence we see that $x$ is not contained in any of the minimal
+primes of $S$. Hence $x$ is a nonzerodivisor on $S$ by (2), see
+Lemma \ref{lemma-reformulate-CM} and
+$S/xS$ is Cohen-Macaulay with $\dim(S/xS) = \dim(S) - 1$.
+By (1) and Lemma \ref{lemma-regular-ring-CM} the ring $R/xR$ is regular
+with $\dim(R/xR) = \dim(R) - 1$.
+By induction we see that $R/xR \to S/xS$ is flat. Hence we
+conclude by Lemma \ref{lemma-variant-local-criterion-flatness}
+and the remark following it.
+\end{proof}
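
\noindent
A standard example illustrating the lemma (not part of the original
text): let $k$ be a field, set $R = k[u, v]_{(u, v)}$ and
$S = R[z]/(z^2 - uv)$. Then $R$ is regular of dimension $2$, the
hypersurface local ring $S$ is Cohen-Macaulay of dimension $2$, and
$$
\dim(S) = 2 = \dim(R) + \dim(S/\mathfrak m_R S)
$$
since $S/\mathfrak m_R S = k[z]/(z^2)$ has dimension $0$. Hence the
lemma applies and $R \to S$ is flat; indeed $S = R \oplus Rz$ is even
finite free over $R$. On the other hand, for the surjection
$R \to k[u]_{(u)}$ sending $v$ to $0$ the target is Cohen-Macaulay but
condition (3) fails as $1 \not = 2 + 0$, and the map is not flat.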
+
+\begin{lemma}
+\label{lemma-flat-over-regular}
+Let $R \to S$ be a homomorphism of Noetherian local rings.
+Assume that $R$ is a regular local ring and that a regular system
+of parameters maps to a regular sequence in $S$. Then $R \to S$
+is flat.
+\end{lemma}
+
+\begin{proof}
Suppose that $x_1, \ldots, x_d$ is a regular system of parameters of $R$
which maps to a regular sequence in $S$. Note that
+$S/(x_1, \ldots, x_d)S$ is flat over $R/(x_1, \ldots, x_d)$
as the latter is a field. Then $x_d$ is a nonzerodivisor in
$S/(x_1, \ldots, x_{d - 1})S$, hence $S/(x_1, \ldots, x_{d - 1})S$
is flat over $R/(x_1, \ldots, x_{d - 1})$ by the local criterion
of flatness (see Lemma \ref{lemma-variant-local-criterion-flatness}
and the remarks following it). Then $x_{d - 1}$ is a nonzerodivisor in
$S/(x_1, \ldots, x_{d - 2})S$, hence $S/(x_1, \ldots, x_{d - 2})S$
is flat over $R/(x_1, \ldots, x_{d - 2})$ by the same argument.
Continuing in this fashion we conclude that $S$ is flat over $R$.
+\end{proof}
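
\noindent
For instance (an example added for illustration), let
$R = k[u, v]_{(u, v)}$ and $S = R[w]/(w^3 - uv)$. The regular system
of parameters $u, v$ of $R$ maps to a regular sequence in $S$: the
ring $S$ is a domain so $u$ is a nonzerodivisor, and $v$ is a
nonzerodivisor in $S/uS = k[v]_{(v)}[w]/(w^3)$. Hence the lemma shows
that $R \to S$ is flat; in this case one can also see this directly,
as $S$ is free over $R$ with basis $1, w, w^2$.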
+
+\noindent
+The following lemma is the key to proving that results for finitely presented
+modules over finitely presented rings over a base ring follow from the
+corresponding results for finite modules in the Noetherian case.
+
+\begin{lemma}
+\label{lemma-colimit-eventually-flat}
+Let $R \to S$, $M$, $\Lambda$, $R_\lambda \to S_\lambda$, $M_\lambda$
+be as in Lemma \ref{lemma-limit-module-essentially-finite-presentation}.
+Assume that $M$ is flat over $R$.
+Then for some $\lambda \in \Lambda$ the module
+$M_\lambda$ is flat over $R_\lambda$.
+\end{lemma}
+
+\begin{proof}
+Pick some $\lambda \in \Lambda$ and consider
+$$
+\text{Tor}_1^{R_\lambda}(M_\lambda, R_\lambda/\mathfrak m_\lambda)
+=
+\Ker(\mathfrak m_\lambda \otimes_{R_\lambda} M_\lambda
+\to M_\lambda).
+$$
+See Remark \ref{remark-Tor-ring-mod-ideal}. The right hand side
+shows that this is a finitely generated $S_\lambda$-module (because
+$S_\lambda$ is Noetherian and the modules in question are finite).
+Let $\xi_1, \ldots, \xi_n$ be generators.
+Because $M$ is flat over $R$ we
+have that $0 = \Ker(\mathfrak m_\lambda R \otimes_R M \to M)$.
+Since $\otimes$ commutes with colimits we see there exists
+a $\lambda' \geq \lambda$ such that each $\xi_i$ maps to
+zero in
+$\mathfrak m_{\lambda}R_{\lambda'} \otimes_{R_{\lambda'}} M_{\lambda'}$.
+Hence we see that
+$$
+\text{Tor}_1^{R_\lambda}(M_\lambda, R_\lambda/\mathfrak m_\lambda)
+\longrightarrow
+\text{Tor}_1^{R_{\lambda'}}(M_{\lambda'},
+R_{\lambda'}/\mathfrak m_{\lambda}R_{\lambda'})
+$$
+is zero. Note that
+$M_\lambda \otimes_{R_\lambda} R_\lambda/\mathfrak m_\lambda$
+is flat over $R_\lambda/\mathfrak m_\lambda$ because this last
+ring is a field. Hence we may apply Lemma
+\ref{lemma-another-variant-local-criterion-flatness}
+to get that $M_{\lambda'}$ is flat over $R_{\lambda'}$.
+\end{proof}
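
\noindent
To illustrate the identity
$\text{Tor}_1^R(M, R/I) = \Ker(I \otimes_R M \to M)$ of
Remark \ref{remark-Tor-ring-mod-ideal} used in the proof, take
$R = k[x]$, $I = (x)$ and $M = R/(x)$. As $I$ is free with basis $x$
we have $I \otimes_R M \cong M$, generated by $x \otimes 1$, and the
map to $M$ sends $x \otimes 1$ to $x \cdot 1 = 0$. Hence
$$
\text{Tor}_1^R(M, R/I) = \Ker(I \otimes_R M \to M) \cong k
$$
is nonzero, reflecting the fact that $M$ is not flat over $R$. In the
proof above it is the flatness of $M$ over $R$ which forces the
corresponding kernel to vanish in the colimit.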
+
+\noindent
+Using the lemma above we can start to reprove the results of
+Section \ref{section-criteria-flatness}
+in the non-Noetherian case.
+
+\begin{lemma}
+\label{lemma-mod-injective-general}
+Suppose that $R \to S$ is a local homomorphism of local rings.
+Denote $\mathfrak m$ the maximal ideal of $R$.
+Let $u : M \to N$ be a map of $S$-modules.
+Assume
+\begin{enumerate}
+\item $S$ is essentially of finite presentation over $R$,
+\item $M$, $N$ are finitely presented over $S$,
+\item $N$ is flat over $R$, and
+\item $\overline{u} : M/\mathfrak mM \to N/\mathfrak mN$ is injective.
+\end{enumerate}
+Then $u$ is injective, and $N/u(M)$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-limit-module-essentially-finite-presentation}
+and its proof we can find a system $R_\lambda \to S_\lambda$ of
+local ring maps together with maps of $S_\lambda$-modules
$u_\lambda : M_\lambda \to N_\lambda$ satisfying conclusions
(1) -- (6) of that lemma for both $M$ and $N$, and such that the
colimit of the maps $u_\lambda$ is $u$. By
+Lemma \ref{lemma-colimit-eventually-flat}
+we may assume that $N_\lambda$ is flat over $R_\lambda$ for all
+sufficiently large $\lambda$. Denote $\mathfrak m_\lambda \subset R_\lambda$
+the maximal ideal and $\kappa_\lambda = R_\lambda / \mathfrak m_\lambda$,
+resp.\ $\kappa = R/\mathfrak m$ the residue fields.
+
+\medskip\noindent
+Consider the map
+$$
+\Psi_\lambda :
+M_\lambda/\mathfrak m_\lambda M_\lambda \otimes_{\kappa_\lambda} \kappa
+\longrightarrow
+M/\mathfrak m M.
+$$
+Since $S_\lambda/\mathfrak m_\lambda S_\lambda$ is essentially of finite type
+over the field $\kappa_\lambda$ we see that the tensor product
+$S_\lambda/\mathfrak m_\lambda S_\lambda \otimes_{\kappa_\lambda} \kappa$
+is essentially of finite type over $\kappa$. Hence it is a Noetherian
ring and we conclude that the kernel of $\Psi_\lambda$ is finitely generated.
+Since $M/\mathfrak m M$ is the colimit of the system
+$M_\lambda/\mathfrak m_\lambda M_\lambda$ and $\kappa$ is the colimit of
+the fields $\kappa_\lambda$ there exists a $\lambda' > \lambda$ such that
+the kernel of $\Psi_\lambda$ is generated by the kernel of
+$$
+\Psi_{\lambda, \lambda'} :
+M_\lambda/\mathfrak m_\lambda M_\lambda
+\otimes_{\kappa_\lambda}
+\kappa_{\lambda'}
+\longrightarrow
+M_{\lambda'}/\mathfrak m_{\lambda'} M_{\lambda'}.
+$$
+By construction there exists a multiplicative subset
+$W \subset S_\lambda \otimes_{R_\lambda} R_{\lambda'}$ such that
+$S_{\lambda'} = W^{-1}(S_\lambda \otimes_{R_\lambda} R_{\lambda'})$ and
+$$
+W^{-1}(M_\lambda/\mathfrak m_\lambda M_\lambda
+\otimes_{\kappa_\lambda}
+\kappa_{\lambda'})
+=
+M_{\lambda'}/\mathfrak m_{\lambda'} M_{\lambda'}.
+$$
+Now suppose that $x$ is an element of the kernel of
+$$
+\Psi_{\lambda'} :
+M_{\lambda'}/\mathfrak m_{\lambda'} M_{\lambda'}
+\otimes_{\kappa_{\lambda'}} \kappa
+\longrightarrow
+M/\mathfrak m M.
+$$
Then for some $w \in W$ the element $wx$ lies in the image of
$M_\lambda/\mathfrak m_\lambda M_\lambda \otimes_{\kappa_\lambda} \kappa$.
Viewed in this module, $wx \in \Ker(\Psi_\lambda)$. Hence $wx$ is a linear
+combination of elements in the kernel of $\Psi_{\lambda, \lambda'}$.
+Hence $wx = 0$ in $M_{\lambda'}/\mathfrak m_{\lambda'} M_{\lambda'}
+\otimes_{\kappa_{\lambda'}} \kappa$, hence $x = 0$ because $w$ is invertible
+in $S_{\lambda'}$.
+We conclude that the kernel of $\Psi_{\lambda'}$ is zero for all sufficiently
+large $\lambda'$!
+
+\medskip\noindent
+By the result of the preceding paragraph we may assume that
+the kernel of $\Psi_\lambda$ is zero for all $\lambda$ sufficiently large,
+which implies that the map
+$M_\lambda/\mathfrak m_\lambda M_\lambda \to M/\mathfrak m M$
+is injective. Combined with $\overline{u}$ being injective this
+formally implies that also
+$\overline{u_\lambda} : M_\lambda/\mathfrak m_\lambda M_\lambda
+\to N_\lambda/\mathfrak m_\lambda N_\lambda$ is injective.
+By
+Lemma \ref{lemma-mod-injective}
+we conclude that (for all sufficiently large $\lambda$) the map
+$u_\lambda$ is injective and that $N_\lambda/u_\lambda(M_\lambda)$ is flat
+over $R_\lambda$.
+The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-grothendieck-general}
Suppose that $R \to S$ is a local homomorphism of local rings.
+Denote $\mathfrak m$ the maximal ideal of $R$.
+Suppose
+\begin{enumerate}
+\item $S$ is essentially of finite presentation over $R$,
+\item $S$ is flat over $R$, and
+\item $f \in S$ is a nonzerodivisor in $S/{\mathfrak m}S$.
+\end{enumerate}
+Then $S/fS$ is flat over $R$, and $f$ is a nonzerodivisor in $S$.
+\end{lemma}
+
+\begin{proof}
This follows directly from Lemma \ref{lemma-mod-injective-general}
applied to the map $u : S \to S$ given by multiplication by $f$
(that is, with $M = N = S$ in that lemma).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-grothendieck-regular-sequence-general}
Suppose that $R \to S$ is a local homomorphism of local rings.
+Denote $\mathfrak m$ the maximal ideal of $R$.
+Suppose
+\begin{enumerate}
+\item $R \to S$ is essentially of finite presentation,
+\item $R \to S$ is flat, and
+\item $f_1, \ldots, f_c$ is a sequence of elements of
+$S$ such that the images $\overline{f}_1, \ldots, \overline{f}_c$
+form a regular sequence in $S/{\mathfrak m}S$.
+\end{enumerate}
+Then $f_1, \ldots, f_c$ is a regular sequence in $S$ and each
+of the quotients $S/(f_1, \ldots, f_i)$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
By induction on $c$ and Lemma \ref{lemma-grothendieck-general}.
+\end{proof}
+
+\noindent
+Here is the version of the local criterion of flatness for the case
+of local ring maps which are locally of finite presentation.
+
+\begin{lemma}
+\label{lemma-variant-local-criterion-flatness-general}
+Let $R \to S$ be a local homomorphism of local rings.
+Let $I \not = R$ be an ideal in $R$. Let $M$ be an $S$-module. Assume
+\begin{enumerate}
+\item $S$ is essentially of finite presentation over $R$,
+\item $M$ is of finite presentation over $S$,
+\item $\text{Tor}_1^R(M, R/I) = 0$, and
+\item $M/IM$ is flat over $R/I$.
+\end{enumerate}
+Then $M$ is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+Let $\Lambda$, $R_\lambda \to S_\lambda$, $M_\lambda$ be as in
+Lemma \ref{lemma-limit-module-essentially-finite-presentation}.
+Denote $I_\lambda \subset R_\lambda$ the inverse image of $I$.
+In this case the system
$R/I \to S/IS$, $M/IM$, $R_\lambda/I_\lambda \to S_\lambda/I_\lambda S_\lambda$,
+and $M_\lambda/I_\lambda M_\lambda$ satisfies the conclusions of
+Lemma \ref{lemma-limit-module-essentially-finite-presentation}
+as well. Hence by
+Lemma \ref{lemma-colimit-eventually-flat}
+we may assume (after shrinking the index set $\Lambda$)
that $M_\lambda/I_\lambda M_\lambda$ is flat over $R_\lambda/I_\lambda$
for all $\lambda$.
+Pick some $\lambda$ and consider
+$$
+\text{Tor}_1^{R_\lambda}(M_\lambda, R_\lambda/I_\lambda)
+=
+\Ker(I_\lambda \otimes_{R_\lambda} M_\lambda
+\to M_\lambda).
+$$
+See Remark \ref{remark-Tor-ring-mod-ideal}. The right hand side
+shows that this is a finitely generated $S_\lambda$-module (because
+$S_\lambda$ is Noetherian and the modules in question are finite).
+Let $\xi_1, \ldots, \xi_n$ be generators.
+Because $\text{Tor}_1^R(M, R/I) = 0$ and since $\otimes$ commutes
+with colimits we see there exists
+a $\lambda' \geq \lambda$ such that each $\xi_i$ maps to
+zero in
+$\text{Tor}_1^{R_{\lambda'}}(M_{\lambda'}, R_{\lambda'}/I_{\lambda'})$.
+The composition of the maps
+$$
+\xymatrix{
+R_{\lambda'} \otimes_{R_\lambda}
+\text{Tor}_1^{R_\lambda}(M_\lambda, R_\lambda/I_\lambda)
+\ar[d]^{\text{surjective by Lemma \ref{lemma-surjective-on-tor-one}}} \\
+\text{Tor}_1^{R_\lambda}(M_\lambda, R_{\lambda'}/I_\lambda R_{\lambda'})
+\ar[d]^{\text{surjective up to localization by
+Lemma \ref{lemma-surjective-on-tor-one-trivial}}} \\
+\text{Tor}_1^{R_{\lambda'}}(M_{\lambda'}, R_{\lambda'}/I_\lambda R_{\lambda'})
+\ar[d]^{\text{surjective by Lemma \ref{lemma-surjective-on-tor-one}}} \\
+\text{Tor}_1^{R_{\lambda'}}(M_{\lambda'}, R_{\lambda'}/I_{\lambda'}).
+}
+$$
is surjective up to a localization for the reasons indicated.
+The localization is necessary since $M_{\lambda'}$ is not equal
+to $M_\lambda \otimes_{R_\lambda} R_{\lambda'}$. Namely, it is equal
+to $M_\lambda \otimes_{S_\lambda} S_{\lambda'}$ and $S_{\lambda'}$
+is the localization of $S_{\lambda} \otimes_{R_\lambda} R_{\lambda'}$ whence
+the statement up to a localization (or tensoring with $S_{\lambda'}$).
+Note that
+Lemma \ref{lemma-surjective-on-tor-one}
+applies to the first and third arrows because
+$M_\lambda/I_\lambda M_\lambda$ is flat over
+$R_\lambda/I_\lambda$ and because $M_{\lambda'}/I_\lambda M_{\lambda'}$
+is flat over $R_{\lambda'}/I_\lambda R_{\lambda'}$ as it is a base
+change of the flat module $M_\lambda/I_\lambda M_\lambda$.
+The composition maps the generators $\xi_i$ to zero as we explained above.
+We finally conclude that
+$\text{Tor}_1^{R_{\lambda'}}(M_{\lambda'}, R_{\lambda'}/I_{\lambda'})$
+is zero. This implies that $M_{\lambda'}$ is flat over $R_{\lambda'}$ by
+Lemma \ref{lemma-variant-local-criterion-flatness}.
+\end{proof}
+
+\noindent
+Please compare the lemma below to
+Lemma \ref{lemma-criterion-flatness-fibre-Noetherian}
+(the case of Noetherian local rings) and
+Lemma \ref{lemma-criterion-flatness-fibre-nilpotent}
+(the case of a nilpotent ideal in the base).
+
+\begin{lemma}[Crit\`ere de platitude par fibres]
+\label{lemma-criterion-flatness-fibre}
+Let $R$, $S$, $S'$ be local rings and let $R \to S \to S'$ be local ring
+homomorphisms. Let $M$ be an $S'$-module. Let $\mathfrak m \subset R$
+be the maximal ideal. Assume
+\begin{enumerate}
+\item The ring maps $R \to S$ and $R \to S'$ are essentially
+of finite presentation.
+\item The module $M$ is of finite presentation over $S'$.
+\item The module $M$ is not zero.
+\item The module $M/\mathfrak mM$ is a flat $S/\mathfrak mS$-module.
+\item The module $M$ is a flat $R$-module.
+\end{enumerate}
+Then $S$ is flat over $R$ and $M$ is a flat $S$-module.
+\end{lemma}
+
+\begin{proof}
+As in the proof of Lemma \ref{lemma-limit-essentially-finite-presentation}
+we may first write $R = \colim R_\lambda$ as a directed colimit
+of local $\mathbf{Z}$-algebras which are essentially of finite type.
+Denote $\mathfrak p_\lambda$ the maximal ideal of $R_\lambda$.
+Next, we may assume that for some $\lambda_1 \in \Lambda$ there
+exist $f_{j, \lambda_1} \in R_{\lambda_1}[x_1, \ldots, x_n]$
+such that
+$$
+S =
+\colim_{\lambda \geq \lambda_1} S_\lambda, \text{ with }
+S_\lambda =
+(R_\lambda[x_1, \ldots, x_n]/
+(f_{1, \lambda}, \ldots, f_{u, \lambda}))_{\mathfrak q_\lambda}
+$$
+For some $\lambda_2 \in \Lambda$,
+$\lambda_2 \geq \lambda_1$ there exist
+$g_{j, \lambda_2} \in R_{\lambda_2}[x_1, \ldots, x_n, y_1, \ldots, y_m]$
+with images
+$\overline{g}_{j, \lambda_2} \in S_{\lambda_2}[y_1, \ldots, y_m]$
+such that
+$$
+S' =
+\colim_{\lambda \geq \lambda_2} S'_\lambda, \text{ with }
+S'_\lambda =
+(S_\lambda[y_1, \ldots, y_m]/
+(\overline{g}_{1, \lambda}, \ldots,
+\overline{g}_{v, \lambda}))_{\overline{\mathfrak q}'_\lambda}
+$$
+Note that this also implies that
+$$
+S'_\lambda =
+(R_\lambda[x_1, \ldots, x_n, y_1, \ldots, y_m]/
+(g_{1, \lambda}, \ldots, g_{v, \lambda}))_{\mathfrak q'_\lambda}
+$$
+Choose a presentation
+$$
+(S')^{\oplus s} \to (S')^{\oplus t} \to M \to 0
+$$
+of $M$ over $S'$. Let $A \in \text{Mat}(t \times s, S')$ be
+the matrix of the presentation. For some $\lambda_3 \in \Lambda$,
+$\lambda_3 \geq \lambda_2$
we can find a matrix $A_{\lambda_3} \in \text{Mat}(t \times s, S'_{\lambda_3})$
+which maps to $A$. For all $\lambda \geq \lambda_3$ we let
+$M_\lambda = \Coker((S'_\lambda)^{\oplus s} \xrightarrow{A_\lambda}
+(S'_\lambda)^{\oplus t})$.
+
+\medskip\noindent
+With these choices, we have for each $\lambda_3 \leq \lambda \leq \mu$
+that $S_\lambda \otimes_{R_{\lambda}} R_\mu \to S_\mu$ is a localization,
+$S'_\lambda \otimes_{S_{\lambda}} S_\mu \to S'_\mu$ is a localization, and
+the map $M_\lambda \otimes_{S'_\lambda} S'_\mu \to M_\mu$ is an
+isomorphism. This also implies that
+$S'_\lambda \otimes_{R_{\lambda}} R_\mu \to S'_\mu$ is a localization.
+Thus, since $M$ is flat over $R$ we see by
+Lemma \ref{lemma-colimit-eventually-flat} that
+for all $\lambda$ big enough the module $M_\lambda$ is
+flat over $R_\lambda$.
+Moreover, note that
+$
+\mathfrak m = \colim \mathfrak p_\lambda
+$,
+$
+S/\mathfrak mS = \colim S_\lambda/\mathfrak p_\lambda S_\lambda
+$,
+$
+S'/\mathfrak mS' = \colim S'_\lambda/\mathfrak p_\lambda S'_\lambda
+$,
+and
+$
+M/\mathfrak mM = \colim M_\lambda/\mathfrak p_\lambda M_\lambda
+$. Also, for each $\lambda_3 \leq \lambda \leq \mu$ we see (from the
+properties listed above) that
+$$
+S'_\lambda/\mathfrak p_\lambda S'_\lambda
+\otimes_{S_{\lambda}/\mathfrak p_\lambda S_\lambda}
+S_\mu/\mathfrak p_\mu S_\mu
+\longrightarrow
+S'_\mu/\mathfrak p_\mu S'_\mu
+$$
+is a localization, and the map
+$$
+M_\lambda / \mathfrak p_\lambda M_\lambda
+\otimes_{S'_\lambda/\mathfrak p_\lambda S'_\lambda}
+S'_\mu /\mathfrak p_\mu S'_\mu
+\longrightarrow
+M_\mu/\mathfrak p_\mu M_\mu
+$$
+is an isomorphism. Hence the system
+$(S_\lambda/\mathfrak p_\lambda S_\lambda \to
+S'_\lambda/\mathfrak p_\lambda S'_\lambda,
+M_\lambda/\mathfrak p_\lambda M_\lambda)$
+is a system as in
+Lemma \ref{lemma-limit-module-essentially-finite-presentation} as well.
+We may apply Lemma \ref{lemma-colimit-eventually-flat} again because
+$M/\mathfrak m M$ is assumed flat over $S/\mathfrak mS$ and we see that
+$M_\lambda/\mathfrak p_\lambda M_\lambda$ is flat over
+$S_\lambda/\mathfrak p_\lambda S_\lambda$ for all $\lambda$ big enough.
+Thus for $\lambda$ big enough the data
+$R_\lambda \to S_\lambda \to S'_\lambda, M_\lambda$ satisfies
+the hypotheses of Lemma \ref{lemma-criterion-flatness-fibre-Noetherian}.
+Pick such a $\lambda$. Then $S = S_\lambda \otimes_{R_\lambda} R$
+is flat over $R$, and $M = M_\lambda \otimes_{S_\lambda} S$
+is flat over $S$ (since the base change of a flat module is flat).
+\end{proof}
+
+\noindent
+The following is an easy consequence of the ``crit\`ere de platitude par
+fibres'' Lemma \ref{lemma-criterion-flatness-fibre}. For more results of
+this kind see More on Flatness, Section \ref{flat-section-introduction}.
+
+\begin{lemma}
+\label{lemma-criterion-flatness-fibre-fp-over-ft}
+Let $R$, $S$, $S'$ be local rings and let $R \to S \to S'$ be local ring
+homomorphisms. Let $M$ be an $S'$-module. Let $\mathfrak m \subset R$
+be the maximal ideal. Assume
+\begin{enumerate}
+\item $R \to S'$ is essentially of finite presentation,
+\item $R \to S$ is essentially of finite type,
+\item $M$ is of finite presentation over $S'$,
+\item $M$ is not zero,
+\item $M/\mathfrak mM$ is a flat $S/\mathfrak mS$-module, and
+\item $M$ is a flat $R$-module.
+\end{enumerate}
+Then $S$ is essentially of finite presentation and flat over $R$
+and $M$ is a flat $S$-module.
+\end{lemma}
+
+\begin{proof}
As $S$ is essentially of finite type over $R$ we can write
$S = C_{\overline{\mathfrak q}}$ for some finite type $R$-algebra $C$.
Write $C = R[x_1, \ldots, x_n]/I$. Denote by
$\mathfrak q \subset R[x_1, \ldots, x_n]$ the prime ideal corresponding
to $\overline{\mathfrak q}$. Then we see that $S = B/J$ where
+$B = R[x_1, \ldots, x_n]_{\mathfrak q}$ is essentially of finite presentation
+over $R$ and $J = IB$. We can find $f_1, \ldots, f_k \in J$ such that
+the images $\overline{f}_i \in B/\mathfrak mB$
+generate the image $\overline{J}$ of $J$ in the Noetherian ring
+$B/\mathfrak mB$. Hence there exist finitely generated ideals
+$J' \subset J$ such that $B/J' \to B/J$ induces an isomorphism
+$$
+(B/J') \otimes_R R/\mathfrak m \longrightarrow
+B/J \otimes_R R/\mathfrak m = S/\mathfrak mS.
+$$
+For any $J'$ as above we see that
+Lemma \ref{lemma-criterion-flatness-fibre}
+applies to the ring maps
+$$
+R \longrightarrow B/J' \longrightarrow S'
+$$
+and the module $M$. Hence we conclude that $B/J'$ is flat over $R$
for any choice of $J'$ as above. Now, if $J' \subset J'' \subset J$ are
+two finitely generated ideals as above, then we conclude that
+$B/J' \to B/J''$ is a surjective map between flat $R$-algebras
+which are essentially of finite presentation which is an isomorphism
+modulo $\mathfrak m$. Hence
+Lemma \ref{lemma-mod-injective-general}
+implies that $B/J' = B/J''$, i.e., $J' = J''$. Clearly this means that
+$J$ is finitely generated, i.e., $S$ is essentially of finite presentation
+over $R$. Thus we may apply
+Lemma \ref{lemma-criterion-flatness-fibre}
+to $R \to S \to S'$ and we win.
+\end{proof}
+
+\begin{lemma}[Crit\`ere de platitude par fibres: locally nilpotent case]
+\label{lemma-criterion-flatness-fibre-locally-nilpotent}
+Let
+$$
+\xymatrix{
+S \ar[rr] & & S' \\
+& R \ar[lu] \ar[ru]
+}
+$$
+be a commutative diagram in the category of rings.
+Let $I \subset R$ be a locally nilpotent ideal and
+$M$ an $S'$-module. Assume
+\begin{enumerate}
+\item $R \to S$ is of finite type,
+\item $R \to S'$ is of finite presentation,
+\item $M$ is a finitely presented $S'$-module,
\item $M/IM$ is flat as an $S/IS$-module, and
+\item $M$ is flat as an $R$-module.
+\end{enumerate}
+Then $M$ is a flat $S$-module and $S_\mathfrak q$ is flat
+and essentially of finite presentation over $R$
+for every $\mathfrak q \subset S$ such that
+$M \otimes_S \kappa(\mathfrak q)$ is nonzero.
+\end{lemma}
+
+\begin{proof}
+If $M \otimes_S \kappa(\mathfrak q)$ is nonzero, then
+$S' \otimes_S \kappa(\mathfrak q)$ is nonzero and hence
+there exists a prime $\mathfrak q' \subset S'$ lying over
+$\mathfrak q$ (Lemma \ref{lemma-in-image}). Let
+$\mathfrak p \subset R$ be the image of $\mathfrak q$ in $\Spec(R)$.
+Then $I \subset \mathfrak p$ as $I$ is locally nilpotent
+hence $M/\mathfrak p M$ is flat over $S/\mathfrak pS$.
+Hence we may apply Lemma \ref{lemma-criterion-flatness-fibre-fp-over-ft}
+to $R_\mathfrak p \to S_\mathfrak q \to S'_{\mathfrak q'}$
+and $M_{\mathfrak q'}$. We conclude that $M_{\mathfrak q'}$
+is flat over $S$ and $S_\mathfrak q$ is flat and essentially
+of finite presentation over $R$.
+Since $\mathfrak q'$ was an arbitrary prime of $S'$ we also
+see that $M$ is flat over $S$ (Lemma \ref{lemma-flat-localization}).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Openness of the flat locus}
+\label{section-open-flat}
+
+\noindent
+We use Lemma \ref{lemma-colimit-eventually-flat} to reduce to the
+Noetherian case. The Noetherian case is handled using the characterization
+of exact complexes given in Section \ref{section-complex-exact}.
+
+\begin{lemma}
+\label{lemma-CM-dim-finite-type}
+Let $k$ be a field. Let $S$ be a finite type
+$k$-algebra. Let $f_1, \ldots, f_i$ be elements
+of $S$. Assume that $S$ is Cohen-Macaulay and
+equidimensional of dimension $d$, and that
+$\dim V(f_1, \ldots, f_i) \leq d - i$. Then equality
+holds and $f_1, \ldots, f_i$ forms a regular
+sequence in $S_{\mathfrak q}$ for every prime $\mathfrak q$
+of $V(f_1, \ldots, f_i)$.
+\end{lemma}
+
+\begin{proof}
+If $S$ is Cohen-Macaulay and equidimensional of dimension
+$d$, then we have $\dim(S_{\mathfrak m}) = d$ for all maximal
+ideals $\mathfrak m$ of $S$, see
+Lemma \ref{lemma-disjoint-decomposition-CM-algebra}.
+By Proposition \ref{proposition-CM-module} we see that
+for all maximal ideals $\mathfrak m \in V(f_1, \ldots, f_i)$
+the sequence is a regular sequence in $S_{\mathfrak m}$ and
+the local ring $S_{\mathfrak m}/(f_1, \ldots, f_i)$ is
+Cohen-Macaulay of dimension $d - i$. This actually
+means that $S/(f_1, \ldots, f_i)$ is Cohen-Macaulay
+and equidimensional of dimension $d - i$.
+\end{proof}
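
\noindent
As an example, take $S = k[x, y]$, which is Cohen-Macaulay and
equidimensional of dimension $d = 2$. For $f_1 = x$, $f_2 = y$ we have
$\dim V(x, y) = 0 = d - 2$ and indeed $x, y$ is a regular sequence in
$S_{(x, y)}$. By contrast, for $f_1 = x$, $f_2 = xy$ we have
$\dim V(x, xy) = \dim V(x) = 1 > d - 2$, and correspondingly $x, xy$ is
not a regular sequence in $S_{(x, y)}$ as $xy$ maps to zero in $S/xS$.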
+
+\begin{lemma}
+\label{lemma-open-regular-sequence}
+Let $R \to S$ be a finite type ring map. Let $d$ be an integer such that all
+fibres $S \otimes_R \kappa(\mathfrak p)$ are Cohen-Macaulay and equidimensional
+of dimension $d$. Let $f_1, \ldots, f_i$ be elements of $S$. The set
+$$
+\{ \mathfrak q \in V(f_1, \ldots, f_i)
+\mid f_1, \ldots, f_i
+\text{ are a regular sequence in }
+S_{\mathfrak q}/\mathfrak p S_{\mathfrak q}
+\text{ where }\mathfrak p = R \cap \mathfrak q
+\}
+$$
+is open in $V(f_1, \ldots, f_i)$.
+\end{lemma}
+
+\begin{proof}
+Write $\overline{S} = S/(f_1, \ldots, f_i)$.
+Suppose $\mathfrak q$ is an element of the set defined in the
+lemma, and $\mathfrak p$ is the corresponding prime of $R$.
+We will use relative dimension as defined in
+Definition \ref{definition-relative-dimension}.
+First, note that $d = \dim_{\mathfrak q}(S/R) =
+\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) +
+\text{trdeg}_{\kappa(\mathfrak p)}\ \kappa(\mathfrak q)$
+by Lemma \ref{lemma-dimension-at-a-point-finite-type-field}.
+Since $f_1, \ldots, f_i$ form a regular sequence in the
+Noetherian local ring $S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$
+Lemma \ref{lemma-one-equation} tells us that
+$\dim(\overline{S}_{\mathfrak q}/\mathfrak p\overline{S}_{\mathfrak q})
+= \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) - i$.
+We conclude that $\dim_{\mathfrak q}(\overline{S}/R)
+= \dim(\overline{S}_{\mathfrak q}/\mathfrak p\overline{S}_{\mathfrak q}) +
+\text{trdeg}_{\kappa(\mathfrak p)}\ \kappa(\mathfrak q)
+= d - i$ by Lemma \ref{lemma-dimension-at-a-point-finite-type-field}.
+By Lemma \ref{lemma-dimension-fibres-bounded-open-upstairs}
+we have $\dim_{\mathfrak q'}(\overline{S}/R) \leq d - i$
+for all $\mathfrak q' \in V(f_1, \ldots, f_i) = \Spec(\overline{S})$
+in a neighbourhood of $\mathfrak q$. Thus after replacing
+$S$ by $S_g$ for some $g \in S$, $g \not \in \mathfrak q$
+we may assume that the inequality holds for all
+$\mathfrak q'$. The result follows from Lemma
+\ref{lemma-CM-dim-finite-type}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-on-fibres-open}
+Let $R \to S$ be a ring map. Consider a finite homological complex of
+finite free $S$-modules:
+$$
+F_{\bullet} :
+0
+\to
+S^{n_e}
+\xrightarrow{\varphi_e}
+S^{n_{e-1}}
+\xrightarrow{\varphi_{e-1}}
+\ldots
+\xrightarrow{\varphi_{i + 1}}
+S^{n_i}
+\xrightarrow{\varphi_i}
+S^{n_{i-1}}
+\xrightarrow{\varphi_{i-1}}
+\ldots
+\xrightarrow{\varphi_1}
+S^{n_0}
+$$
+For every prime $\mathfrak q$ of $S$ consider the
+complex $\overline{F}_{\bullet, \mathfrak q} =
+F_{\bullet, \mathfrak q} \otimes_R \kappa(\mathfrak p)$
where $\mathfrak p$ is the inverse image of $\mathfrak q$ in $R$.
+Assume $R$ is Noetherian and there exists an integer $d$ such
that $R \to S$ is of finite type and flat
+with fibres $S \otimes_R \kappa(\mathfrak p)$
+Cohen-Macaulay of dimension $d$.
+The set
+$$
+\{\mathfrak q \in \Spec(S) \mid
+\overline{F}_{\bullet, \mathfrak q}\text{ is exact}\}
+$$
+is open in $\Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q$ be an element of the set defined in the lemma.
+We are going to use Proposition \ref{proposition-what-exact}
+to show there exists a $g \in S$, $g \not \in \mathfrak q$
+such that $D(g)$ is contained in the set defined in the lemma.
+In other words, we are going to show that after replacing $S$
+by $S_g$, the set of the lemma is all of $\Spec(S)$.
+Thus during the proof we will, finitely often, replace
+$S$ by such a localization.
+Recall that Proposition \ref{proposition-what-exact}
+characterizes exactness of complexes
+in terms of ranks of the maps $\varphi_i$ and the ideals
+$I(\varphi_i)$, in case the ring is local. We first address
+the rank condition. Set
+$r_i = n_i - n_{i + 1} + \ldots + (-1)^{e - i} n_e$.
+Note that $r_i + r_{i + 1} = n_i$ and note that
+$r_i$ is the expected rank of $\varphi_i$ (in the
+exact case).
+
+\medskip\noindent
+By Lemma \ref{lemma-complex-exact-mod} we see that if
+$\overline{F}_{\bullet, \mathfrak q}$ is exact, then
+the localization $F_{\bullet, \mathfrak q}$ is exact.
+In particular the complex $F_\bullet$ becomes
+exact after localizing by an element
+$g \in S$, $g \not \in \mathfrak q$. In this case
+Proposition \ref{proposition-what-exact} applied
+to all localizations of $S$ at prime ideals
+implies that all $(r_i + 1) \times (r_i + 1)$-minors
+of $\varphi_i$ are zero. Thus we see that the rank
+of $\varphi_i$ is at most $r_i$.
+
+\medskip\noindent
+Let $I_i \subset S$ denote the ideal generated
+by the $r_i \times r_i$-minors of the matrix
+of $\varphi_i$. By Proposition \ref{proposition-what-exact}
+the complex $\overline{F}_{\bullet, \mathfrak q}$ is exact
+if and only if for every $1 \leq i \leq e$ we have
+either $(I_i)_{\mathfrak q} = S_{\mathfrak q}$ or
$(I_i)_{\mathfrak q}$ contains an $S_{\mathfrak q}/\mathfrak p
+S_{\mathfrak q}$-regular sequence of length $i$.
+Namely, by our choice of $r_i$ above and by the
+bound on the ranks of the $\varphi_i$ this is the
+only way the conditions of Proposition \ref{proposition-what-exact}
+can be satisfied.
+
+\medskip\noindent
+If $(I_i)_{\mathfrak q} = S_{\mathfrak q}$, then after localizing $S$ at
+some element $g \not\in \mathfrak q$ we may assume that
+$I_i = S$. Clearly, this is an open condition.
+
+\medskip\noindent
+If $(I_i)_{\mathfrak q} \not = S_{\mathfrak q}$, then we have
+a sequence $f_1, \ldots, f_i \in (I_i)_{\mathfrak q}$ which
+form a regular sequence in $S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$.
+Note that for any prime $\mathfrak q' \subset S$ such that
+$(f_1, \ldots, f_i) \not \subset \mathfrak q'$ we have
+$(I_i)_{\mathfrak q'} = S_{\mathfrak q'}$.
+Thus the result follows from Lemma \ref{lemma-open-regular-sequence}.
+\end{proof}
+
+
+\begin{theorem}
+\label{theorem-openness-flatness}
+Let $R$ be a ring. Let $R \to S$ be a ring map of finite
+presentation. Let $M$ be a finitely presented $S$-module.
+The set
+$$
+\{ \mathfrak q \in \Spec(S) \mid
+M_{\mathfrak q}\text{ is flat over }R\}
+$$
+is open in $\Spec(S)$.
+\end{theorem}
+
+\begin{proof}
+Let $\mathfrak q \in \Spec(S)$ be a prime.
+Let $\mathfrak p \subset R$ be the inverse image of $\mathfrak q$ in $R$.
+Note that $M_{\mathfrak q}$ is flat over $R$ if and only if
+it is flat over $R_{\mathfrak p}$.
+Let us assume that $M_{\mathfrak q}$ is flat over $R$.
+We claim that there exists a $g \in S$, $g \not \in \mathfrak q$
+such that $M_g$ is flat over $R$.
+
+\medskip\noindent
+We first reduce to the case where $R$ and $S$ are
+of finite type over $\mathbf{Z}$.
+Choose a directed set $\Lambda$ and
+a system $(R_\lambda \to S_\lambda, M_\lambda)$
+as in Lemma \ref{lemma-limit-module-finite-presentation}.
+Set $\mathfrak p_\lambda$ equal to the inverse image of
+$\mathfrak p$ in $R_\lambda$.
+Set $\mathfrak q_\lambda$ equal to the inverse image of
+$\mathfrak q$ in $S_\lambda$.
+Then the system
+$$
+((R_\lambda)_{\mathfrak p_\lambda},
+(S_\lambda)_{\mathfrak q_\lambda},
+(M_\lambda)_{\mathfrak q_{\lambda}})
+$$
+is a system as in
+Lemma \ref{lemma-limit-module-essentially-finite-presentation}.
+Hence by Lemma \ref{lemma-colimit-eventually-flat}
+we see that for some $\lambda$ the module
+$M_\lambda$ is flat over $R_\lambda$ at the prime
+$\mathfrak q_{\lambda}$. Suppose we
+can prove our claim for the system
+$(R_\lambda \to S_\lambda, M_\lambda, \mathfrak q_{\lambda})$.
+In other words, suppose that we can find a $g \in S_\lambda$,
+$g \not\in \mathfrak q_\lambda$ such that $(M_\lambda)_g$
+is flat over $R_\lambda$. By Lemma \ref{lemma-limit-module-finite-presentation}
+we have $M = M_\lambda \otimes_{R_\lambda} R$ and hence
+also $M_g = (M_\lambda)_g \otimes_{R_\lambda} R$. Thus by
+Lemma \ref{lemma-flat-base-change} we deduce the claim
+for the system $(R \to S, M, \mathfrak q)$.
+
+\medskip\noindent
+At this point we may assume that $R$ and $S$ are of finite type
+over $\mathbf{Z}$. We may write $S$ as a quotient of a
+polynomial ring $R[x_1, \ldots, x_n]$. Of course, we may replace
+$S$ by $R[x_1, \ldots, x_n]$ and assume that $S$ is a polynomial
+ring over $R$. In particular we see that $R \to S$ is flat
and all fibre rings $S \otimes_R \kappa(\mathfrak p)$
+have global dimension $n$.
+
+\medskip\noindent
+Choose a resolution $F_\bullet$ of $M$ over $S$ with each
+$F_i$ finite free, see Lemma \ref{lemma-resolution-by-finite-free}.
+Let $K_n = \Ker(F_{n-1} \to F_{n-2})$. Note that
+$(K_n)_{\mathfrak q}$ is flat over $R$, since each $F_i$
+is flat over $R$ and by assumption on $M$, see Lemma
+\ref{lemma-flat-ses}. In addition, the sequence
+$$
+0 \to
+K_n/\mathfrak p K_n \to
+F_{n-1}/ \mathfrak p F_{n-1} \to
+\ldots \to
+F_0 / \mathfrak p F_0 \to
+M/\mathfrak p M \to
+0
+$$
+is exact upon localizing at $\mathfrak q$, because of vanishing
+of $\text{Tor}_i^{R_\mathfrak p}(\kappa(\mathfrak p), M_{\mathfrak q})$.
+Since the global dimension of $S_\mathfrak q/\mathfrak p S_{\mathfrak q}$
+is $n$ we conclude that $K_n / \mathfrak p K_n$ localized
+at $\mathfrak q$ is a finite free module over
+$S_\mathfrak q/\mathfrak p S_{\mathfrak q}$. By
+Lemma \ref{lemma-free-fibre-flat-free} $(K_n)_{\mathfrak q}$
+is free over $S_{\mathfrak q}$. In particular, there exists a
+$g \in S$, $g \not \in \mathfrak q$ such that $(K_n)_g$
+is finite free over $S_g$.
+
+\medskip\noindent
+By Lemma \ref{lemma-exact-on-fibres-open}
+there exists a further localization $S_g$ such that
+the complex
+$$
+0 \to K_n \to F_{n-1} \to \ldots \to F_0
+$$
+is exact on {\it all fibres} of $R \to S$. By
Lemma \ref{lemma-complex-exact-mod}
this implies that the cokernel of $F_1 \to F_0$, which is the
localization $M_g$ of $M$, is flat over $R$. This proves the claim,
and hence the theorem.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Openness of Cohen-Macaulay loci}
+\label{section-CM-open}
+
+\noindent
+In this section we characterize the Cohen-Macaulay property
+of finite type algebras in terms of flatness. We then use this
+to prove that the set of points where such an algebra is Cohen-Macaulay
+is open.
+
+\begin{lemma}
+\label{lemma-where-CM}
+Let $S$ be a finite type algebra over a field $k$.
+Let $\varphi : k[y_1, \ldots, y_d] \to S$ be a quasi-finite ring map.
+As subsets of $\Spec(S)$ we have
+$$
+\{ \mathfrak q \mid
+S_{\mathfrak q} \text{ flat over }k[y_1, \ldots, y_d]\}
+=
+\{ \mathfrak q \mid
+S_{\mathfrak q} \text{ CM and }\dim_{\mathfrak q}(S/k) = d\}
+$$
+For notation see Definition \ref{definition-relative-dimension}.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q \subset S$ be a prime. Denote
+$\mathfrak p = k[y_1, \ldots, y_d] \cap \mathfrak q$.
+Note that always
+$\dim(S_{\mathfrak q}) \leq \dim(k[y_1, \ldots, y_d]_{\mathfrak p})$
+by Lemma \ref{lemma-dimension-inequality-quasi-finite} for example.
+Moreover, the field extension $\kappa(\mathfrak q)/\kappa(\mathfrak p)$
+is finite and hence
+$\text{trdeg}_k(\kappa(\mathfrak p)) = \text{trdeg}_k(\kappa(\mathfrak q))$.
+
+\medskip\noindent
+Let $\mathfrak q$ be an element of the left hand side.
+Then Lemma \ref{lemma-finite-flat-over-regular-CM} applies
+and we conclude that $S_{\mathfrak q}$ is Cohen-Macaulay
+and $\dim(S_{\mathfrak q}) = \dim(k[y_1, \ldots, y_d]_{\mathfrak p})$.
+Combined with the equality of transcendence degrees above and
+Lemma \ref{lemma-dimension-at-a-point-finite-type-field} this
+implies that $\dim_{\mathfrak q}(S/k) = d$. Hence $\mathfrak q$
+is an element of the right hand side.
+
+\medskip\noindent
+Let $\mathfrak q$ be an element of the right hand side.
+By the equality of transcendence degrees above, the assumption
+that $\dim_{\mathfrak q}(S/k) = d$ and
+Lemma \ref{lemma-dimension-at-a-point-finite-type-field}
+we conclude that
+$\dim(S_{\mathfrak q}) = \dim(k[y_1, \ldots, y_d]_{\mathfrak p})$.
+Hence Lemma \ref{lemma-CM-over-regular-flat}
+applies and we see that $\mathfrak q$ is an
+element of the left hand side.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-type-over-field-CM-open}
+Let $S$ be a finite type algebra over a field $k$.
+The set of primes $\mathfrak q$ such that $S_{\mathfrak q}$ is
+Cohen-Macaulay is open in $\Spec(S)$.
+\end{lemma}
+
+\noindent
+This lemma is a special case of
+Lemma \ref{lemma-finite-presentation-flat-CM-locus-open} below,
+so you can skip straight to the proof of that lemma if you like.
+
+\begin{proof}
+Let $\mathfrak q \subset S$ be a prime such that $S_{\mathfrak q}$ is
+Cohen-Macaulay. We have to show there exists a
+$g \in S$, $g \not \in \mathfrak q$ such that the ring
+$S_g$ is Cohen-Macaulay. For any $g \in S$, $g \not \in \mathfrak q$
+we may replace $S$ by $S_g$ and $\mathfrak q$ by $\mathfrak q S_g$.
+Combining this with
+Lemmas \ref{lemma-Noether-normalization-at-point} and
+\ref{lemma-dimension-at-a-point-finite-type-field}
+we may assume that there exists a finite injective
+ring map $k[y_1, \ldots, y_d] \to S$ with
+$d = \dim(S_{\mathfrak q}) + \text{trdeg}_k(\kappa(\mathfrak q))$.
+Set $\mathfrak p = k[y_1, \ldots, y_d] \cap \mathfrak q$.
+By construction we see that $\mathfrak q$ is an element of
+the right hand side of the displayed equality of
+Lemma \ref{lemma-where-CM}. Hence it is also an element of
+the left hand side.
+
+\medskip\noindent
+By Theorem \ref{theorem-openness-flatness} we see that for some $g \in S$,
+$g \not \in \mathfrak q$ the ring $S_g$ is flat over $k[y_1, \ldots, y_d]$.
+Hence by the equality of Lemma \ref{lemma-where-CM} again we conclude that
+all local rings of $S_g$ are Cohen-Macaulay as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generic-CM}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+The set of Cohen-Macaulay primes forms a dense open
+$U \subset \Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+The set is open by Lemma \ref{lemma-finite-type-over-field-CM-open}.
+It contains all minimal primes $\mathfrak q \subset S$
+since the local ring at a minimal prime $S_{\mathfrak q}$
+has dimension zero and hence is Cohen-Macaulay.
+\end{proof}
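+
+\noindent
+The open $U$ of Lemma \ref{lemma-generic-CM} can be a proper subset.
+For example, let $S = k[x, y, z]/(xz, yz)$, the coordinate ring of the
+union of the plane $z = 0$ and the line $x = y = 0$. At the maximal
+ideal $\mathfrak m = (x, y, z)$ the local ring $S_{\mathfrak m}$ has
+dimension $2$, but its minimal primes $(z)$ and $(x, y)$ have quotients
+of dimension $2$ and $1$. Since a Cohen-Macaulay local ring is
+equidimensional, $S_{\mathfrak m}$ is not Cohen-Macaulay. One can check
+that $S$ is regular at all other primes, so
+$U = \Spec(S) \setminus \{\mathfrak m\}$.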
+
+\begin{lemma}
+\label{lemma-dim-not-zero-exists-nonzerodivisor-nonunit}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+If $\dim(S) > 0$, then there exists an element $f \in S$
+which is a nonzerodivisor and a nonunit.
+\end{lemma}
+
+\begin{proof}
+Let $I \subset S$ be the radical ideal such that $V(I) \subset \Spec(S)$
+is the set of primes $\mathfrak q \subset S$ with $S_\mathfrak q$
+not Cohen-Macaulay. See Lemma \ref{lemma-generic-CM} which also
+tells us that $V(I)$ is nowhere dense in $\Spec(S)$.
+Let $\mathfrak m \subset S$ be a maximal ideal such that
+$\dim(S_\mathfrak m) > 0$ and $\mathfrak m \not \in V(I)$.
+Such a maximal ideal exists as $\dim(S) > 0$ using the
+Hilbert Nullstellensatz (Theorem \ref{theorem-nullstellensatz}) and
+Lemma \ref{lemma-dimension-at-a-point-finite-type-over-field}
+which implies that any dense open of
+$\Spec(S)$ has the same dimension as $\Spec(S)$.
+Finally, let $\mathfrak q_1, \ldots, \mathfrak q_m$ be the minimal
+primes of $S$. Choose $f \in S$ with
+$$
+f \equiv 1 \bmod I,\quad
+f \in \mathfrak m,\quad
+f \not \in \bigcup \mathfrak q_i
+$$
+This is possible by Lemma \ref{lemma-silly-silly}. Namely, we have
+$S/(I \cap \mathfrak m) = S/I \times S/\mathfrak m$ by
+Lemma \ref{lemma-chinese-remainder}. Thus we can first choose
+$g \in S$ such that $g \equiv 1 \bmod I$ and $g \in \mathfrak m$.
+Then $g + (I \cap \mathfrak m) \not \subset \mathfrak q_i$
+since $V(I \cap \mathfrak m) \not \supset V(\mathfrak q_i)$.
+Hence the lemma applies. Clearly $f$ is not a unit.
+To show that $f$ is a nonzerodivisor, it suffices to
+prove that $f : S_\mathfrak q \to S_\mathfrak q$ is
+injective for every prime ideal $\mathfrak q \subset S$.
+If $S_\mathfrak q$ is not Cohen-Macaulay, then $\mathfrak q \in V(I)$
+and $f$ maps to a unit of $S_\mathfrak q$. On the other hand, if
+$S_\mathfrak q$ is Cohen-Macaulay, then we use that
+$\dim(S_\mathfrak q/fS_\mathfrak q) < \dim(S_\mathfrak q)$
+by the requirement $f \not \in \mathfrak q_i$ and we conclude
+that $f$ is a nonzerodivisor in $S_\mathfrak q$ by
+Lemma \ref{lemma-reformulate-CM}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-presentation-flat-CM-locus-open}
+Let $R$ be a ring. Let $R \to S$ be of finite presentation
+and flat. For any $d \geq 0$ the set
+$$
+\left\{
+\begin{matrix}
+\mathfrak q \in \Spec(S)
+\text{ such that setting }\mathfrak p = R \cap \mathfrak q
+\text{ the fibre ring}\\
+S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}
+\text{ is Cohen-Macaulay}
+\text{ and } \dim_{\mathfrak q}(S/R) = d
+\end{matrix}
+\right\}
+$$
+is open in $\Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q$ be an element of the set indicated, with
+$\mathfrak p$ the corresponding prime of $R$.
+We have to find a $g \in S$, $g \not \in \mathfrak q$ such that
+all fibre rings of $R \to S_g$ are Cohen-Macaulay.
+During the course of the proof we may (finitely many times)
+replace $S$ by $S_g$ for a $g \in S$, $g \not \in \mathfrak q$.
+Thus by Lemma \ref{lemma-quasi-finite-over-polynomial-algebra}
+we may assume there is a quasi-finite ring map
+$R[t_1, \ldots, t_d] \to S$ with $d = \dim_{\mathfrak q}(S/R)$.
+Let $\mathfrak q' = R[t_1, \ldots, t_d] \cap \mathfrak q$.
+By Lemma \ref{lemma-where-CM} we see that the ring map
+$$
+R[t_1, \ldots, t_d]_{\mathfrak q'} /
+\mathfrak p R[t_1, \ldots, t_d]_{\mathfrak q'}
+\longrightarrow
+S_{\mathfrak q}/\mathfrak p S_{\mathfrak q}
+$$
+is flat. Hence by the crit\`ere de platitude par fibres
+Lemma \ref{lemma-criterion-flatness-fibre} we see that
+$R[t_1, \ldots, t_d]_{\mathfrak q'} \to S_{\mathfrak q}$ is flat.
+Hence by Theorem \ref{theorem-openness-flatness} we see that
+for some $g \in S$, $g \not \in \mathfrak q$ the ring map
+$R[t_1, \ldots, t_d] \to S_g$ is flat. Replacing $S$ by $S_g$
+we see that for every prime $\mathfrak r \subset S$,
+setting $\mathfrak r' = R[t_1, \ldots, t_d] \cap \mathfrak r$
+and $\mathfrak p' = R \cap \mathfrak r$
+the local ring map
+$R[t_1, \ldots, t_d]_{\mathfrak r'} \to S_{\mathfrak r}$ is flat.
+Hence also the base change
+$$
+R[t_1, \ldots, t_d]_{\mathfrak r'} /
+\mathfrak p' R[t_1, \ldots, t_d]_{\mathfrak r'}
+\longrightarrow
+S_{\mathfrak r}/\mathfrak p' S_{\mathfrak r}
+$$
+is flat. Hence by Lemma \ref{lemma-where-CM} applied with
+$k = \kappa(\mathfrak p')$ we see
+$\mathfrak r$ is in the set of the lemma
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generic-CM-flat-finite-presentation}
+Let $R$ be a ring. Let $R \to S$ be flat of finite presentation.
+The set of primes $\mathfrak q$ such that the fibre ring
+$S_{\mathfrak q} \otimes_R \kappa(\mathfrak p)$,
+with $\mathfrak p = R \cap \mathfrak q$ is Cohen-Macaulay
+is open and dense in every fibre of $\Spec(S) \to \Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+The set, call it $W$, is open by
+Lemma \ref{lemma-finite-presentation-flat-CM-locus-open}.
+It is dense in the fibres because the intersection of $W$
+with a fibre is the set of Cohen-Macaulay primes of the
+fibre ring, which is dense by Lemma \ref{lemma-generic-CM}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extend-field-CM-locus}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+Let $K/k$ be a field extension, and set $S_K = K \otimes_k S$.
+Let $\mathfrak q \subset S$ be a prime of $S$.
+Let $\mathfrak q_K \subset S_K$ be a prime of $S_K$ lying
+over $\mathfrak q$. Then $S_{\mathfrak q}$ is Cohen-Macaulay
+if and only if $(S_K)_{\mathfrak q_K}$ is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+During the course of the proof we may (finitely many times) replace
+$S$ by $S_g$ for any $g \in S$, $g \not \in \mathfrak q$. Hence
+using Lemma \ref{lemma-Noether-normalization-at-point} we may
+assume that $\dim(S) = \dim_{\mathfrak q}(S/k) =: d$ and
+find a finite injective map $k[x_1, \ldots, x_d] \to S$.
+Note that this also induces a finite injective map
+$K[x_1, \ldots, x_d] \to S_K$ by base change.
+By Lemma \ref{lemma-dimension-at-a-point-preserved-field-extension}
+we have $\dim_{\mathfrak q_K}(S_K/K) = d$.
+Set $\mathfrak p = k[x_1, \ldots, x_d] \cap \mathfrak q$
+and $\mathfrak p_K = K[x_1, \ldots, x_d] \cap \mathfrak q_K$.
+Consider the following commutative diagram of Noetherian local
+rings
+$$
+\xymatrix{
+S_{\mathfrak q} \ar[r] &
+(S_K)_{\mathfrak q_K} \\
+k[x_1, \ldots, x_d]_{\mathfrak p} \ar[r] \ar[u] &
+K[x_1, \ldots, x_d]_{\mathfrak p_K} \ar[u]
+}
+$$
+By Lemma \ref{lemma-where-CM} we have to show that
+the left vertical arrow is flat if and only if the right
+vertical arrow is flat. Because the bottom arrow is flat
+this equivalence holds by Lemma \ref{lemma-base-change-flat-up-down}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-locus-commutes-base-change}
+Let $R$ be a ring. Let $R \to S$ be of finite type.
+Let $R \to R'$ be any ring map. Set $S' = R' \otimes_R S$.
+Denote $f : \Spec(S') \to \Spec(S)$ the map
+associated to the ring map $S \to S'$.
+Set $W$ equal to the
+set of primes $\mathfrak q$ such that the fibre ring
+$S_{\mathfrak q} \otimes_R \kappa(\mathfrak p)$,
+$\mathfrak p = R \cap \mathfrak q$ is Cohen-Macaulay,
+and let $W'$ denote the analogue for $S'/R'$. Then
+$W' = f^{-1}(W)$.
+\end{lemma}
+
+\begin{proof}
+Trivial from Lemma \ref{lemma-extend-field-CM-locus} and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dimension-CM}
+Let $R$ be a ring. Let $R \to S$ be a ring map which is (a) flat,
+(b) of finite presentation, (c) has Cohen-Macaulay fibres. Then we can write
+$S = S_0 \times \ldots \times S_n$ as a product of $R$-algebras $S_d$
+such that each $S_d$ satisfies
+(a), (b), (c) and has all fibres equidimensional of dimension $d$.
+\end{lemma}
+
+\begin{proof}
+For each integer $d$ denote $W_d \subset \Spec(S)$ the set
+defined in Lemma \ref{lemma-finite-presentation-flat-CM-locus-open}.
+Clearly we have $\Spec(S) = \coprod W_d$, and each $W_d$
+is open by the lemma we just quoted. Hence the result follows
+from Lemma \ref{lemma-disjoint-implies-product}.
+\end{proof}
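+
+\noindent
+As a simple illustration of Lemma \ref{lemma-relative-dimension-CM},
+take $S = R \times R[x]$. Then $R \to S$ is flat and of finite
+presentation, and the fibre ring over a prime $\mathfrak p \subset R$
+is $\kappa(\mathfrak p) \times \kappa(\mathfrak p)[x]$, which is
+Cohen-Macaulay. The decomposition produced by the lemma is
+$S_0 = R$ and $S_1 = R[x]$, whose fibres are equidimensional of
+dimension $0$ and $1$ respectively.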
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Differentials}
+\label{section-differentials}
+
+\noindent
+In this section we define the module of differentials of a ring map.
+
+\begin{definition}
+\label{definition-derivation}
+Let $\varphi : R \to S$ be a ring map and let $M$ be an $S$-module.
+A {\it derivation}, or more precisely an
+{\it $R$-derivation} into $M$ is a map $D : S \to M$
+which is additive, annihilates elements of $\varphi(R)$,
+and satisfies the {\it Leibniz rule}: $D(ab) = aD(b) + bD(a)$.
+\end{definition}
+
+\noindent
+Note that $D(ra) = rD(a)$ if $r \in R$ and $a \in S$. An equivalent
+definition is that an $R$-derivation is an $R$-linear map
+$D : S \to M$ which satisfies the Leibniz rule.
+The set of all $R$-derivations forms an
+$S$-module: Given two $R$-derivations $D, D'$
+the sum $D + D' : S \to M$, $a \mapsto D(a)+D'(a)$
+is an $R$-derivation, and given an $R$-derivation $D$
+and an element $c\in S$ the scalar multiple $cD : S \to M$,
+$a \mapsto cD(a)$ is an $R$-derivation. We denote this
+$S$-module
+$$
+\text{Der}_R(S, M).
+$$
+Also, if $\alpha : M \to N$ is an $S$-module map, then the
+composition $\alpha \circ D$ is an $R$-derivation into
+$N$. In this way the assignment $M \mapsto \text{Der}_R(S, M)$
+is a covariant functor.
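+
+\noindent
+For example, let $S = R[x]$ and let $M$ be an $S$-module. An
+$R$-derivation $D : R[x] \to M$ is determined by the element $D(x)$,
+because additivity, $R$-linearity, and the Leibniz rule give
+$$
+D\left(\sum r_i x^i\right) = \sum i r_i x^{i - 1} D(x).
+$$
+Conversely, any choice of $D(x) \in M$ defines an $R$-derivation by
+this formula. Hence $\text{Der}_R(R[x], M) \cong M$ as $S$-modules;
+in case $M = S$ the derivation corresponding to $1 \in S$ is the
+usual $\partial/\partial x$.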
+
+\medskip\noindent
+Consider the following map of free $S$-modules
+$$
+\bigoplus\nolimits_{(a, b)\in S^2} S[(a, b)] \oplus
+\bigoplus\nolimits_{(f, g)\in S^2} S[(f, g)] \oplus
+\bigoplus\nolimits_{r\in R} S[r]
+\longrightarrow
+\bigoplus\nolimits_{a\in S} S[a]
+$$
+defined by the rules
+$$
+[(a, b)] \longmapsto [a + b] - [a] - [b],\quad
+[(f, g)] \longmapsto [fg] -f[g] - g[f],\quad
+[r] \longmapsto [\varphi(r)]
+$$
+with obvious notation. Let $\Omega_{S/R}$ be the cokernel of this map.
+There is a map $\text{d} : S \to \Omega_{S/R}$ which maps $a$ to the
+class $\text{d}a$ of $[a]$ in the cokernel. This is an $R$-derivation
+by the relations imposed on $\Omega_{S/R}$, in other words
+$$
+\text{d}(a + b) = \text{d}a + \text{d}b, \quad
+\text{d}(fg) = f\text{d}g + g\text{d}f, \quad
+\text{d}\varphi(r) = 0
+$$
+where $a,b,f,g \in S$ and $r \in R$.
+
+\begin{definition}
+\label{definition-differentials}
+The pair $(\Omega_{S/R}, \text{d})$ is called the {\it module
+of K\"ahler differentials} or the {\it module of differentials}
+of $S$ over $R$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-universal-omega}
+\begin{slogan}
+Maps out of the module of differentials are the same as derivations.
+\end{slogan}
+The module of differentials of $S$ over $R$ has the following
+universal property. The map
+$$
+\Hom_S(\Omega_{S/R}, M)
+\longrightarrow
+\text{Der}_R(S, M), \quad
+\alpha
+\longmapsto
+\alpha \circ \text{d}
+$$
+is an isomorphism of functors.
+\end{lemma}
+
+\begin{proof}
+By definition an $R$-derivation is a rule which associates
+to each $a \in S$ an element $D(a) \in M$. Thus $D$ gives
+rise to a map $[D] : \bigoplus S[a] \to M$. However, the conditions
+of being an $R$-derivation exactly mean that $[D]$ annihilates
+the image of the map in the displayed presentation of
+$\Omega_{S/R}$ above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-trivial-differential-surjective}
+Suppose that $R \to S$ is surjective.
+Then $\Omega_{S/R} = 0$.
+\end{lemma}
+
+\begin{proof}
+You can see this either because all $R$-derivations
+clearly have to be zero, or because
+the map in the presentation of $\Omega_{S/R}$ is surjective.
+\end{proof}
+
+\noindent
+Suppose that
+\begin{equation}
+\label{equation-functorial-omega}
+\vcenter{
+\xymatrix{
+S \ar[r]_\varphi
+&
+S'
+\\
+R \ar[r]^\psi \ar[u]^\alpha
+&
+R' \ar[u]_\beta
+}
+}
+\end{equation}
+is a commutative diagram of rings. In this case there is a
+natural map of modules of differentials fitting into the
+commutative diagram
+$$
+\xymatrix{
+\Omega_{S/R} \ar[r] &
+\Omega_{S'/R'}
+\\
+S \ar[u]^{\text{d}} \ar[r]^{\varphi}
+&
+S' \ar[u]_{\text{d}}
+}
+$$
+To construct the map just use the obvious map
+between the presentations for $\Omega_{S/R}$ and $\Omega_{S'/R'}$.
+Namely,
+$$
+\xymatrix{
+\bigoplus S'[(a', b')]
+\oplus
+\bigoplus S'[(f', g')]
+\oplus
+\bigoplus S'[r'] \ar[r]
+&
+\bigoplus S' [a'] \\
+ \\
+\bigoplus S[(a, b)]
+\oplus
+\bigoplus S[(f, g)]
+\oplus
+\bigoplus S[r] \ar[r]
+\ar[uu]^{
+\begin{matrix}
+[(a, b)] \mapsto [(\varphi(a), \varphi(b))] \\
+[(f, g)] \mapsto [(\varphi(f), \varphi(g))] \\
+[r]\mapsto [\psi(r)]
+\end{matrix}
+} &
+\bigoplus S[a] \ar[uu]_{[a] \mapsto [\varphi(a)]}
+}
+$$
+The result is simply that $f\text{d}g \in \Omega_{S/R}$ is
+mapped to $\varphi(f)\text{d}\varphi(g)$.
+
+
+\begin{lemma}
+\label{lemma-colimit-differentials}
+Let $I$ be a directed set.
+Let $(R_i \to S_i, \varphi_{ii'})$ be a system of
+ring maps over $I$, see
+Categories, Section \ref{categories-section-posets-limits}.
+Then we have
+$$
+\Omega_{S/R} =
+\colim_i \Omega_{S_i/R_i}.
+$$
+where $R \to S = \colim (R_i \to S_i)$.
+\end{lemma}
+
+\begin{proof}
+This is clear from the defining presentation of $\Omega_{S/R}$
+and the functoriality of this presentation described above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differential-surjective}
+In diagram (\ref{equation-functorial-omega}), suppose
+that $S \to S'$ is surjective with kernel $I \subset S$.
+Then $\Omega_{S/R} \to \Omega_{S'/R'}$ is surjective with
+kernel generated as an $S$-module by the elements
+$\text{d}a$, where $a \in S$ is such that $\varphi(a) \in \beta(R')$.
+(This includes in particular the elements $\text{d}(i)$, $i \in I$.)
+\end{lemma}
+
+\begin{proof}
+We urge the reader to find their own (hopefully different) proof
+of this lemma.
+Consider the map of presentations above. Clearly the right vertical
+map of free modules is surjective. Thus the map is surjective.
+Suppose that some element $\eta$ of $\Omega_{S/R}$ maps to zero in
+$\Omega_{S'/R'}$. Write $\eta$ as the image of $\sum s_i[a_i]$ for some
+$s_i, a_i \in S$. Then we see that $\sum \varphi(s_i)[\varphi(a_i)]$
+is the image of an element
+$$
+\theta = \sum s_j'[a_j', b_j'] + \sum s_k'[f_k', g_k'] + \sum s_l'[r_l']
+$$
+in the upper left corner of the diagram.
+Since $\varphi$ is surjective, the terms
+$s_j'[a_j', b_j']$ and $s_k'[f_k', g_k']$ are in the image
+of elements in the lower right corner. Thus, modifying
+$\eta$ and $\theta$ by subtracting the images of these
+elements, we may assume $\theta = \sum s_l'[r_l']$.
+In other words, we see $\sum \varphi(s_i)[\varphi(a_i)]$ is of the form
+$\sum s'_l [\beta(r'_l)]$.
+Next, we may assume that we have some $a' \in S'$
+such that $a' = \varphi(a_i)$ for all $i$ and $a' = \beta(r_l')$
+for all $l$. This is clear from the direct sum decomposition
+of the upper right corner of the diagram. Choose $a \in S$ with
+$\varphi(a) = a'$. Then we can write $a_i = a + x_i$
+for some $x_i \in I$. Thus we may assume that all $a_i$ are equal to
+$a$ by using the relations that are allowed. But then we may
+assume our element is of the form $s[a]$. We still know that
+$\varphi(s)[a'] = \sum \varphi(s_l')[\beta(r_l')]$.
+Hence either $\varphi(s) = 0$ and we're done, or
+$a' = \varphi(a)$ is in the image of $\beta$ and we're done as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-sequence-differentials}
+Let $A \to B \to C$ be ring maps.
+Then there is a canonical exact sequence
+$$
+C \otimes_B \Omega_{B/A} \to
+\Omega_{C/A} \to
+\Omega_{C/B} \to 0
+$$
+of $C$-modules.
+\end{lemma}
+
+\begin{proof}
+We get a diagram (\ref{equation-functorial-omega}) by putting
+$R = A$, $S = C$, $R' = B$, and $S' = C$.
+By Lemma \ref{lemma-differential-surjective} the map
+$\Omega_{C/A} \to \Omega_{C/B}$ is surjective, and the kernel
+is generated by the elements $\text{d}(c)$, where $c \in C$
+is in the image of $B \to C$. The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differentials-localize}
+Let $\varphi : A \to B$ be a ring map.
+\begin{enumerate}
+\item If $S \subset A$ is a multiplicative subset mapping to
+invertible elements of $B$, then $\Omega_{B/A} = \Omega_{B/S^{-1}A}$.
+\item If $S \subset B$ is a multiplicative subset then
+$S^{-1}\Omega_{B/A} = \Omega_{S^{-1}B/A}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To show the equality of (1) it is enough to show that any
+$A$-derivation $D : B \to M$ annihilates the elements $\varphi(s)^{-1}$.
+This is clear from the Leibniz rule applied to
+$1 = \varphi(s) \varphi(s)^{-1}$.
+To show (2) note that there is an obvious map
+$S^{-1}\Omega_{B/A} \to \Omega_{S^{-1}B/A}$.
+To show it is an isomorphism it is enough to show that
+there is an $A$-derivation $\text{d}'$ of $S^{-1}B$ into $S^{-1}\Omega_{B/A}$.
+To define it we simply set
+$\text{d}'(b/s) = (1/s)\text{d}b - (1/s^2)b\text{d}s$.
+Details omitted.
+\end{proof}
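+
+\noindent
+As an example of part (1), take $A = \mathbf{Z}$, $B = \mathbf{Q}$ and
+$S = \mathbf{Z} \setminus \{0\}$. We obtain
+$\Omega_{\mathbf{Q}/\mathbf{Z}} = \Omega_{\mathbf{Q}/\mathbf{Q}} = 0$,
+the last equality by Lemma \ref{lemma-trivial-differential-surjective}.
+The same argument shows $\Omega_{S^{-1}A/A} = 0$ for any multiplicative
+subset $S$ of any ring $A$.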
+
+\begin{lemma}
+\label{lemma-differential-seq}
+In diagram (\ref{equation-functorial-omega}),
+suppose that $S \to S'$ is surjective with kernel $I \subset S$,
+and assume that $R' = R$.
+Then there is a canonical exact sequence of $S'$-modules
+$$
+I/I^2
+\longrightarrow
+\Omega_{S/R} \otimes_S S'
+\longrightarrow
+\Omega_{S'/R}
+\longrightarrow
+0
+$$
+The leftmost map is characterized by the rule that
+$f \in I$ maps to $\text{d}f \otimes 1$.
+\end{lemma}
+
+\begin{proof}
+The middle term is $\Omega_{S/R} \otimes_S S/I$.
+For $f \in I$ denote $\overline{f}$ the image of $f$ in $I/I^2$.
+To show that the map $\overline{f} \mapsto \text{d}f \otimes 1$
+is well defined we just have to check that
+$\text{d} f_1f_2 \otimes 1 = 0$ if $f_1, f_2 \in I$.
+And this is clear from the Leibniz rule
+$\text{d} f_1f_2 \otimes 1 = (f_1 \text{d}f_2 + f_2 \text{d} f_1 )\otimes 1 =
+\text{d}f_2 \otimes f_1 + \text{d}f_1 \otimes f_2 = 0$.
+A similar computation shows this map is $S' = S/I$-linear.
+
+\medskip\noindent
+The map $\Omega_{S/R} \otimes_S S' \to \Omega_{S'/R}$
+is the canonical $S'$-linear map associated to the
+$S$-linear map $\Omega_{S/R} \to \Omega_{S'/R}$.
+It is surjective because $\Omega_{S/R} \to \Omega_{S'/R}$
+is surjective by Lemma \ref{lemma-differential-surjective}.
+
+\medskip\noindent
+The composite of the two maps is zero because
+$\text{d}f$ maps to zero in $\Omega_{S'/R}$
+for $f \in I$. Note that exactness just says that
+the kernel of $\Omega_{S/R} \to \Omega_{S'/R}$
+is generated as an $S$-submodule by the submodule $I\Omega_{S/R}$ together
+with the elements $\text{d}f$, with $f \in I$. We know by
+Lemma \ref{lemma-differential-surjective}
+that this kernel is generated by the elements $\text{d}(a)$
+where $\varphi(a) = \beta(r)$ for some $r \in R$.
+But then $a = \alpha(r) + a - \alpha(r)$, so
+$\text{d}(a) = \text{d}(a - \alpha(r))$. And
+$a - \alpha(r) \in I$ since $\varphi(a - \alpha(r)) =
+\varphi(a) - \varphi(\alpha(r)) = \beta(r) - \beta(r) = 0$.
+We conclude the elements $\text{d}f$ with $f \in I$ already
+generate the kernel as an $S$-module, as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differential-seq-split}
+In diagram (\ref{equation-functorial-omega}),
+suppose that $S \to S'$ is surjective with kernel $I \subset S$,
+and assume that $R' = R$. Moreover, assume that there exists
+an $R$-algebra map $S' \to S$ which is a right inverse to
+$S \to S'$. Then the exact sequence of $S'$-modules
+of Lemma \ref{lemma-differential-seq} turns into a short exact sequence
+$$
+0 \longrightarrow
+I/I^2
+\longrightarrow
+\Omega_{S/R} \otimes_S S'
+\longrightarrow
+\Omega_{S'/R}
+\longrightarrow
+0
+$$
+which is even a split short exact sequence.
+\end{lemma}
+
+\begin{proof}
+Let $\beta : S' \to S$ be the right inverse to the surjection
+$\alpha : S \to S'$, so $S = I \oplus \beta(S')$.
+Clearly we can use $\beta : \Omega_{S'/R} \to \Omega_{S/R}$,
+to get a right inverse to the map $\Omega_{S/R} \otimes_S S' \to \Omega_{S'/R}$.
+On the other hand, consider the map
+$$
+D : S \longrightarrow I/I^2,
+\quad
+x \longmapsto x - \beta(\alpha(x))
+$$
+It is easy to show that $D$ is an $R$-derivation (omitted).
+Moreover $x D(s) = 0$ if $x \in I, s \in S$. Hence, by the universal property
+$D$ induces a map $\tau : \Omega_{S/R} \otimes_S S' \to I/I^2$.
+We omit the verification that it is a left inverse to
+$\text{d} : I/I^2 \to \Omega_{S/R} \otimes_S S'$. Hence we win.
+\end{proof}
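+
+\noindent
+As a sanity check of Lemma \ref{lemma-differential-seq-split}, take
+$R' = R$, $S = R[x]$ and $S' = R$ with $S \to S'$ the map $x \mapsto 0$
+and right inverse the inclusion $R \to R[x]$. Then $I = (x)$, the
+module $I/I^2$ is free of rank $1$ on the class of $x$, and
+$\Omega_{S'/R} = \Omega_{R/R} = 0$. The short exact sequence thus
+asserts $\Omega_{R[x]/R} \otimes_{R[x]} R \cong I/I^2$ with
+$\text{d}x \otimes 1$ corresponding to the class of $x$, in agreement
+with Lemma \ref{lemma-differentials-polynomial-ring}.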
+
+\begin{lemma}
+\label{lemma-differential-mod-power-ideal}
+Let $R \to S$ be a ring map. Let $I \subset S$ be an ideal.
+Let $n \geq 1$ be an integer. Set $S' = S/I^{n + 1}$.
+The map $\Omega_{S/R} \to \Omega_{S'/R}$ induces an
+isomorphism
+$$
+\Omega_{S/R} \otimes_S S/I^n
+\longrightarrow
+\Omega_{S'/R} \otimes_{S'} S/I^n.
+$$
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-differential-seq} and the fact that
+$\text{d}(I^{n + 1}) \subset I^n\Omega_{S/R}$ by the
+Leibniz rule for $\text{d}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differentials-base-change}
+Suppose that we have ring maps $R \to R'$ and $R \to S$.
+Set $S' = S \otimes_R R'$, so that we obtain a diagram
+(\ref{equation-functorial-omega}). Then the canonical map defined above
+induces an isomorphism $\Omega_{S/R} \otimes_R R' = \Omega_{S'/R'}$.
+\end{lemma}
+
+\begin{proof}
+Let $\text{d}' : S' = S \otimes_R R' \to \Omega_{S/R} \otimes_R R'$ denote the
+map $\text{d}'( \sum a_i \otimes x_i ) = \sum \text{d}(a_i) \otimes x_i$.
+It exists because the map $S \times R' \to \Omega_{S/R} \otimes_R R'$,
+$(a, x)\mapsto \text{d}a \otimes_R x$ is $R$-bilinear.
+This is an $R'$-derivation, as can be verified by a simple computation.
+We will show that $(\Omega_{S/R} \otimes_R R', \text{d}')$ satisfies
+the universal property. Let $D : S' \to M'$ be an $R'$-derivation
+into an $S'$-module. The composition $S \to S' \to M'$ is an $R$-derivation,
+hence we get an $S$-linear map $\varphi_D : \Omega_{S/R} \to M'$. We may
+tensor this with $R'$ and get the map $\varphi'_D :
+\Omega_{S/R} \otimes_R R' \to M'$, $\varphi'_D(\eta \otimes x) =
+x\varphi_D(\eta)$. It is clear that $D = \varphi'_D \circ \text{d}'$.
+\end{proof}
+
+\noindent
+The multiplication map $S \otimes_R S \to S$ is the $R$-algebra
+map which maps $a \otimes b$ to $ab$ in $S$. It is also an
+$S$-algebra map, if we think of $S \otimes_R S$ as an $S$-algebra
+via either of the maps $S \to S \otimes_R S$.
+
+\begin{lemma}
+\label{lemma-differentials-diagonal}
+Let $R \to S$ be a ring map. Let $J = \Ker(S \otimes_R S \to S)$
+be the kernel of the multiplication map. There is a canonical
+isomorphism of $S$-modules $\Omega_{S/R} \to J/J^2$,
+$a \text{d} b \mapsto a \otimes b - ab \otimes 1$.
+\end{lemma}
+
+\begin{proof}[First proof]
+Apply Lemma \ref{lemma-differential-seq-split} to the commutative diagram
+$$
+\xymatrix{
+S \otimes_R S \ar[r] & S \\
+S \ar[r] \ar[u] & S \ar[u]
+}
+$$
+where the left vertical arrow is $a \mapsto a \otimes 1$. We get the
+exact sequence
+$0 \to J/J^2 \to
+\Omega_{S \otimes_R S/S} \otimes_{S \otimes_R S} S \to \Omega_{S/S} \to 0$.
+By Lemma \ref{lemma-trivial-differential-surjective}
+the term $\Omega_{S/S}$ is $0$, and we obtain an
+isomorphism between the other two terms. We have
+$\Omega_{S \otimes_R S/S} = \Omega_{S/R} \otimes_S (S \otimes_R S)$
+by Lemma \ref{lemma-differentials-base-change} as $S \to S \otimes_R S$
+is the base change of $R \to S$ and hence
+$$
+\Omega_{S \otimes_R S/S} \otimes_{S \otimes_R S} S =
+\Omega_{S/R} \otimes_S (S \otimes_R S) \otimes_{S \otimes_R S} S =
+\Omega_{S/R}
+$$
+We omit the verification that the map is given by the rule of the lemma.
+\end{proof}
+
+\begin{proof}[Second proof]
+First we show that the rule $a \text{d} b \mapsto a \otimes b - ab \otimes 1$
+is well defined. In order to do this we have to show
+that $\text{d}r$ and $a\text{d}b + b \text{d}a - \text{d}(ab)$ map to zero.
+The first because $r \otimes 1 - 1 \otimes r = 0$ by definition
+of the tensor product. The second because
+$$
+(a \otimes b - ab \otimes 1) +
+(b \otimes a - ba \otimes 1) -
+(1 \otimes ab - ab \otimes 1)
+=
+(a \otimes 1 - 1\otimes a)(1\otimes b - b \otimes 1)
+$$
+is in $J^2$.
+
+\medskip\noindent
+We construct a map in the other direction.
+We may think of $S \to S \otimes_R S$, $a \mapsto a \otimes 1$
+as the base change of $R \to S$. Hence we have
+$\Omega_{S \otimes_R S/S} = \Omega_{S/R} \otimes_S (S \otimes_R S)$,
+by Lemma \ref{lemma-differentials-base-change}.
+At this point the sequence of Lemma \ref{lemma-differential-seq} gives a map
+$$
+J/J^2 \to \Omega_{S \otimes_R S/ S} \otimes_{S \otimes_R S} S
+= (\Omega_{S/R} \otimes_S (S \otimes_R S))\otimes_{S \otimes_R S} S
+= \Omega_{S/R}.
+$$
+We leave it to the reader to see it is the inverse of the map
+above.
+\end{proof}
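+
+\noindent
+For example, take $S = R[x]$. Writing $S \otimes_R S = R[x, y]$ with
+$x = x \otimes 1$ and $y = 1 \otimes x$, the multiplication map sends
+both $x$ and $y$ to $x$, so $J = (x - y)$ and $J/J^2$ is free of rank
+$1$ on the class of $x - y$. The map of the lemma sends $\text{d}x$ to
+$1 \otimes x - x \otimes 1 = y - x$, and we recover the fact that
+$\Omega_{R[x]/R}$ is free of rank $1$ on $\text{d}x$.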
+
+
+
+
+\begin{lemma}
+\label{lemma-differentials-polynomial-ring}
+If $S = R[x_1, \ldots, x_n]$, then
+$\Omega_{S/R}$ is a finite free $S$-module with
+basis $\text{d}x_1, \ldots, \text{d}x_n$.
+\end{lemma}
+
+\begin{proof}
+We first show that $\text{d}x_1, \ldots, \text{d}x_n$
+generate $\Omega_{S/R}$ as an $S$-module. To prove this
+we show that $\text{d}g$ can be expressed as a
+sum $\sum g_i \text{d}x_i$ for any $g \in R[x_1, \ldots, x_n]$.
+We do this by induction on the (total) degree of $g$.
+It is clear if the degree of $g$ is $0$, because then
+$\text{d}g = 0$. If the degree of $g$ is $> 0$, then
+we may write $g$ as $c + \sum g_i x_i$ with $c\in R$
+and $\deg(g_i) < \deg(g)$. By the Leibniz rule we have
+$\text{d}g = \sum g_i \text{d} x_i + \sum x_i \text{d}g_i$,
+and hence we win by induction.
+
+\medskip\noindent
+Consider the $R$-derivation $\partial / \partial x_i :
+R[x_1, \ldots, x_n] \to R[x_1, \ldots, x_n]$. (We leave it to
+the reader to define this; the defining property
+being that $\partial / \partial x_i (x_j) = \delta_{ij}$.)
+By the universal property this corresponds to an $S$-module map $l_i :
+\Omega_{S/R} \to R[x_1, \ldots, x_n]$ which maps $\text{d}x_i$
+to $1$ and $\text{d}x_j$ to $0$ for $j \not = i$.
+Thus it is clear that there are no $S$-linear relations
+among the elements $\text{d}x_1, \ldots, \text{d}x_n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differentials-finitely-presented}
+Suppose $R \to S$ is of finite presentation.
+Then $\Omega_{S/R}$ is a finitely presented
+$S$-module.
+\end{lemma}
+
+\begin{proof}
+Write $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$.
+Write $I = (f_1, \ldots, f_m)$. According
+to Lemma \ref{lemma-differential-seq} there is an exact sequence
+of $S$-modules
+$$
+I/I^2
+\to
+\Omega_{R[x_1, \ldots, x_n]/R} \otimes_{R[x_1, \ldots, x_n]} S
+\to
+\Omega_{S/R}
+\to
+0
+$$
+The result follows from the fact that $I/I^2$ is a finite
+$S$-module (generated by the images of the $f_i$), and that
+the middle term is finite free by
+Lemma \ref{lemma-differentials-polynomial-ring}.
+\end{proof}
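+
+\noindent
+For example, let $S = R[x]/(f)$ for a single $f \in R[x]$. The exact
+sequence above becomes
+$$
+(f)/(f)^2 \to S\text{d}x \to \Omega_{S/R} \to 0
+$$
+and the image of $f$ is $\text{d}f = f'(x)\text{d}x$, where $f'$
+denotes the formal derivative. Hence $\Omega_{S/R} \cong S/(f'(x))$
+as an $S$-module. Concretely, if $f = x^2 - a$ and $2$ is invertible
+in $R$, then $\Omega_{S/R} \cong S/(2x) \cong R/(a)$, which is zero
+if and only if $a$ is a unit.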
+
+\begin{lemma}
+\label{lemma-differentials-finitely-generated}
+Suppose $R \to S$ is of finite type.
+Then $\Omega_{S/R}$ is a finitely generated
+$S$-module.
+\end{lemma}
+
+\begin{proof}
+This is very similar to, but easier than the proof
+of Lemma \ref{lemma-differentials-finitely-presented}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{The de Rham complex}
+\label{section-de-rham-complex}
+
+\noindent
+Let $A \to B$ be a ring map. Denote $\text{d} : B \to \Omega_{B/A}$
+the universal $A$-derivation into the module of differentials
+constructed in Section \ref{section-differentials}.
+Let $\Omega_{B/A}^i = \wedge^i_B(\Omega_{B/A})$ for $i \geq 0$ be the
+$i$th exterior power as in Section \ref{section-tensor-algebra}.
+The {\it de Rham complex of $B$ over $A$} is the complex
+$$
+\Omega_{B/A}^0 \to \Omega_{B/A}^1 \to \Omega_{B/A}^2 \to \ldots
+$$
+constructed and described below.
+
+\medskip\noindent
+The map $\text{d} : \Omega^0_{B/A} \to \Omega^1_{B/A}$
+is the universal derivation $\text{d} : B \to \Omega_{B/A}$.
+
+\medskip\noindent
+For $p \geq 1$ we claim there is a unique $A$-linear map
+$\text{d} : \Omega_{B/A}^p \to \Omega_{B/A}^{p + 1}$
+such that
+\begin{equation}
+\label{equation-rule}
+\text{d}\left(b_0\text{d}b_1 \wedge \ldots \wedge \text{d}b_p\right) =
+\text{d}b_0 \wedge \text{d}b_1 \wedge \ldots \wedge \text{d}b_p
+\end{equation}
+Recall that $\Omega_{B/A}$ is generated as a $B$-module by the
+elements $\text{d}b$. Thus $\Omega^p_{B/A}$ is additively generated
+by elements of the form $b_0\text{d}b_1 \wedge \ldots \wedge \text{d}b_p$
+and it follows that the map
+$\text{d} : \Omega^p_{B/A} \to \Omega^{p + 1}_{B/A}$,
+if it exists, is unique.
+
+\medskip\noindent
+Construction of $\text{d} : \Omega_{B/A}^1 \to \Omega_{B/A}^2$.
+The elements $\text{d}b$ freely generate $\Omega_{B/A}$
+subject to the relations $\text{d}a = 0$ for $a \in A$
+and $\text{d}(b + b') = \text{d}b + \text{d}b'$ and
+$\text{d}(bb') = b\text{d}b' + b'\text{d}b$ for $b, b' \in B$.
+We will show that the rule
+$$
+\sum b'_i \text{d}b_i \longmapsto \sum \text{d}b'_i \wedge \text{d}b_i
+$$
+is well defined. To do this we have to show that the elements
+$$
+b\text{d}a,
+\quad
+b\text{d}(b' + b'') - b\text{d}b' - b\text{d}b'',
+\quad\text{and}\quad
+b\text{d}(b'b'') - bb'\text{d}b'' - bb''\text{d}b'
+$$
+for $a \in A$ and $b, b', b'' \in B$ are mapped to zero.
+This is clear by direct computation using the Leibniz rule for $\text{d}$.
+
+\medskip\noindent
+Observe that the composition
+$\Omega^0_{B/A} \to \Omega^1_{B/A} \to \Omega^2_{B/A}$
+is zero as $\text{d}(\text{d}(b)) = \text{d}(1 \text{d}b) =
+\text{d}(1) \wedge \text{d}(b) = 0 \wedge \text{d}b = 0$.
+Here $\text{d}(1) = 0$ as $1 \in B$ is in the image of $A \to B$.
+We will use this below.
+
+\medskip\noindent
+Construction of $\text{d} : \Omega_{B/A}^p \to \Omega_{B/A}^{p + 1}$
+for $p \geq 2$. We will show the map
+$$
+\gamma : \Omega^1_{B/A} \otimes_A \ldots \otimes_A \Omega^1_{B/A}
+\longrightarrow
+\Omega_{B/A}^{p + 1}
+$$
+defined by the formula
+$$
+\omega_1 \otimes \ldots \otimes \omega_p
+\longmapsto
+\sum\nolimits_{i = 1, \ldots, p} (-1)^{i + 1}
+\omega_1 \wedge \ldots \wedge \text{d}(\omega_i) \wedge \ldots \wedge \omega_p
+$$
+factors over the natural surjection
+$\Omega^1_{B/A} \otimes_A \ldots \otimes_A \Omega^1_{B/A}
+\to \Omega^p_{B/A}$ to give a map
+$\text{d} : \Omega^p_{B/A} \to \Omega^{p + 1}_{B/A}$.
+The kernel of $\Omega^1_{B/A} \otimes_A \ldots \otimes_A \Omega^1_{B/A}
+\to \Omega^p_{B/A}$ is additively generated by the elements
+$\omega_1 \otimes \ldots \otimes \omega_p$
+with $\omega_i = \omega_j$ for some $i \not = j$
+and by the elements
+$\omega_1 \otimes \ldots \otimes f\omega_i \otimes \ldots \otimes \omega_p -
+\omega_1 \otimes \ldots \otimes f\omega_j \otimes \ldots \otimes \omega_p$
+for $f \in B$; details omitted.
+A direct computation shows the first type of element
+is mapped to $0$ by $\gamma$, in other words, $\gamma$ is alternating.
+To finish we have to show that
+$$
+\gamma(
+\omega_1 \otimes \ldots \otimes f\omega_i \otimes \ldots \otimes \omega_p)
+=
+\gamma(
+\omega_1 \otimes \ldots \otimes f\omega_j \otimes \ldots \otimes \omega_p)
+$$
+for $f \in B$. By $A$-linearity and the alternating property, it is enough
+to show this for $p = 2$, $i = 1$, $j = 2$, $\omega_1 = b \text{d}b'$
+and $\omega_2 = c \text{d} c'$ for $b, b', c, c' \in B$.
+Thus we need to show that
+\begin{align*}
+& \text{d}(fb) \wedge \text{d}b' \wedge c \text{d}c'
+- fb \text{d}b' \wedge \text{d}c \wedge \text{d}c' \\
+& =
+\text{d}b \wedge \text{d}b' \wedge fc\text{d}c'
+- b \text{d}b' \wedge \text{d}(fc) \wedge \text{d}c'
+\end{align*}
+in other words that
+$$
+(c \text{d}(fb) + fb \text{d}c - fc \text{d}b - b \text{d}(fc))
+\wedge \text{d}b' \wedge \text{d}c' = 0.
+$$
+This follows from the Leibniz rule. Observe that the value of $\gamma$
+on the element
+$b_0\text{d}b_1 \otimes \text{d}b_2 \otimes \ldots \otimes \text{d}b_p$
+is $\text{d}b_0 \wedge \text{d}b_1 \wedge \ldots \wedge \text{d}b_p$
+and hence (\ref{equation-rule}) will be satisfied for the map
+$\text{d} : \Omega^p_{B/A} \to \Omega^{p + 1}_{B/A}$ so obtained.
+
+\medskip\noindent
+Finally, since $\Omega^p_{B/A}$ is additively generated
+by the elements $b_0\text{d}b_1 \wedge \ldots \wedge \text{d}b_p$
+and since $\text{d}(b_0\text{d}b_1 \wedge \ldots \wedge \text{d}b_p) =
+\text{d}b_0 \wedge \ldots \wedge \text{d}b_p$ we see
+in exactly the same manner that the composition
+$
+\Omega^p_{B/A} \to
+\Omega^{p + 1}_{B/A} \to
+\Omega^{p + 2}_{B/A}
+$
+is zero for $p \geq 1$. Thus the de Rham complex is indeed a complex.
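+
+\medskip\noindent
+To illustrate the construction, consider $B = A[x]$, a polynomial ring
+in one variable. Then $\Omega_{B/A}$ is free of rank $1$ on $\text{d}x$
+by Lemma \ref{lemma-differentials-polynomial-ring}, and
+$\Omega^p_{B/A} = 0$ for $p \geq 2$. The de Rham complex becomes
+$$
+A[x] \longrightarrow A[x]\text{d}x \longrightarrow 0 \longrightarrow \ldots,
+\quad
+f \longmapsto (\partial f/\partial x)\text{d}x
+$$
+If $A$ contains $\mathbf{Q}$, then the cohomology in degree $1$ vanishes,
+as every $g\text{d}x$ is $\text{d}G$ for an antiderivative $G$ of $g$.
+By contrast, if $A = \mathbf{F}_p$, then the class of $x^{p - 1}\text{d}x$
+in degree $1$ is nonzero, since $\text{d}(x^n) = nx^{n - 1}\text{d}x$ is
+never equal to $x^{p - 1}\text{d}x$.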
+
+\medskip\noindent
+Given just a ring $R$ we set $\Omega_R = \Omega_{R/\mathbf{Z}}$.
+This is sometimes called the absolute module of differentials of $R$;
+this makes sense: if $\Omega_R$ is the module of differentials
+where we only assume the Leibniz rule and not the vanishing of $\text{d}1$,
+then the Leibniz rule gives $\text{d}1 = \text{d}(1 \cdot 1) =
+1 \text{d}1 + 1 \text{d}1 = 2 \text{d}1$ and hence
+$\text{d}1 = 0$ in $\Omega_R$. In this case the
+{\it absolute de Rham complex of $R$} is the corresponding complex
+$$
+\Omega_R^0 \to \Omega_R^1 \to \Omega_R^2 \to \ldots
+$$
+where we set $\Omega^i_R = \Omega^i_{R/\mathbf{Z}}$ and so on.
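+
+\medskip\noindent
+As a simple illustration, $\Omega_{\mathbf{Z}} = 0$ since
+$\text{d}n = n\text{d}1 = 0$ for $n \in \mathbf{Z}$, and
+$\Omega_{\mathbf{Q}} = 0$ as well: for nonzero $n \in \mathbf{Z}$
+the Leibniz rule applied to $n \cdot (1/n) = 1$ gives
+$n\text{d}(1/n) = -(1/n)\text{d}n = 0$ and hence $\text{d}(1/n) = 0$.
+Thus the absolute de Rham complexes of $\mathbf{Z}$ and $\mathbf{Q}$
+are concentrated in degree $0$.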
+
+\medskip\noindent
+Suppose we have a commutative diagram of rings
+$$
+\xymatrix{
+B \ar[r] & B' \\
+A \ar[r] \ar[u] & A' \ar[u]
+}
+$$
+There is a natural map of de Rham complexes
+$$
+\Omega^\bullet_{B/A} \longrightarrow \Omega^\bullet_{B'/A'}
+$$
+Namely, in degree $0$ this is the map $B \to B'$, in degree $1$
+this is the map $\Omega_{B/A} \to \Omega_{B'/A'}$ constructed in
+Section \ref{section-differentials}, and for $p \geq 2$ it is the induced map
+$\Omega^p_{B/A} = \wedge^p_B(\Omega_{B/A}) \to
+\wedge^p_{B'}(\Omega_{B'/A'}) = \Omega^p_{B'/A'}$.
+The compatibility with differentials follows from the characterization
+of the differentials by the formula (\ref{equation-rule}).
+
+\begin{lemma}
+\label{lemma-de-rham-complex}
+Let $A \to B$ be a ring map. Let $\pi : \Omega_{B/A} \to \Omega$
+be a surjective $B$-module map. Denote $\text{d} : B \to \Omega$
+the composition of $\pi$ with the universal derivation
+$\text{d}_{B/A} : B \to \Omega_{B/A}$. Set $\Omega^i = \wedge_B^i(\Omega)$.
+Assume that the kernel of $\pi$ is generated, as a $B$-module,
+by elements $\omega \in \Omega_{B/A}$ such that
+$\text{d}_{B/A}(\omega) \in \Omega_{B/A}^2$ maps to zero in $\Omega^2$.
+Then there is a de Rham complex
+$$
+\Omega^0 \to \Omega^1 \to \Omega^2 \to \ldots
+$$
+whose differential is defined by the rule
+$$
+\text{d} : \Omega^p \to \Omega^{p + 1},\quad
+\text{d}\left(f_0\text{d}f_1 \wedge \ldots \wedge \text{d}f_p\right) =
+\text{d}f_0 \wedge \text{d}f_1 \wedge \ldots \wedge \text{d}f_p
+$$
+\end{lemma}
+
+\begin{proof}
+We will show that there exists a commutative diagram
+$$
+\xymatrix{
+\Omega_{B/A}^0 \ar[d] \ar[r]_{\text{d}_{B/A}} &
+\Omega_{B/A}^1 \ar[d]_\pi \ar[r]_{\text{d}_{B/A}} &
+\Omega_{B/A}^2 \ar[d]_{\wedge^2\pi} \ar[r]_{\text{d}_{B/A}} &
+\ldots \\
+\Omega^0 \ar[r]^{\text{d}} &
+\Omega^1 \ar[r]^{\text{d}} &
+\Omega^2 \ar[r]^{\text{d}} &
+\ldots
+}
+$$
+The description of the map $\text{d}$ will then follow from the construction
+of the differentials
+$\text{d}_{B/A} : \Omega^p_{B/A} \to \Omega^{p + 1}_{B/A}$ of the
+de Rham complex of $B$ over $A$ given above.
+Since the left most vertical arrow is an isomorphism we have
+the first square. Because $\pi$ is surjective, to get the second
+square it suffices to show that $\text{d}_{B/A}$ maps the kernel
+of $\pi$ into the kernel of $\wedge^2\pi$. We are given that any element
+of the kernel of $\pi$ is of the form
+$\sum b_i\omega_i$ with $\pi(\omega_i) = 0$ and
+$\wedge^2\pi(\text{d}_{B/A}(\omega_i)) = 0$.
+By the Leibniz rule for $\text{d}_{B/A}$ we have
+$\text{d}_{B/A}(\sum b_i\omega_i) = \sum b_i \text{d}_{B/A}(\omega_i) +
+\sum \text{d}_{B/A}(b_i) \wedge \omega_i$. Hence this maps to zero
+under $\wedge^2\pi$.
+
+\medskip\noindent
+For $i > 1$ we note that $\wedge^i \pi$ is surjective with
+kernel the image of $\Ker(\pi) \wedge \Omega^{i - 1}_{B/A}
+\to \Omega_{B/A}^i$. For $\omega_1 \in \Ker(\pi)$ and
+$\omega_2 \in \Omega^{i - 1}_{B/A}$ we have
+$$
+\text{d}_{B/A}(\omega_1 \wedge \omega_2) =
+\text{d}_{B/A}(\omega_1) \wedge \omega_2 -
+\omega_1 \wedge \text{d}_{B/A}(\omega_2)
+$$
+which is in the kernel of $\wedge^{i + 1}\pi$ by what we just proved above.
+Hence we get the $(i + 1)$st square in the diagram above.
+This concludes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Finite order differential operators}
+\label{section-differential-operators}
+
+\noindent
+In this section we introduce differential operators of finite order.
+
+\begin{definition}
+\label{definition-differential-operators}
+Let $R \to S$ be a ring map. Let $M$, $N$ be $S$-modules.
+Let $k \geq 0$ be an integer. We inductively define a
+{\it differential operator $D : M \to N$ of order $k$}
+to be an $R$-linear map such that for all $g \in S$ the map
+$m \mapsto D(gm) - gD(m)$ is a differential operator of
+order $k - 1$. For the base case $k = 0$ we define a
+differential operator of order $0$ to be an $S$-linear map.
+\end{definition}
+
+\noindent
+If $D : M \to N$ is a differential operator of order $k$,
+then for all $g \in S$ the map $gD$ is a differential operator
+of order $k$. The sum of two differential operators of order $k$
+is again a differential operator of order $k$. Hence the set of all these
+$$
+\text{Diff}^k(M, N) = \text{Diff}^k_{S/R}(M, N)
+$$
+is an $S$-module. We have
+$$
+\text{Diff}^0(M, N) \subset
+\text{Diff}^1(M, N) \subset
+\text{Diff}^2(M, N) \subset \ldots
+$$
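+
+\medskip\noindent
+A basic example to keep in mind: for $S = R[x]$ the partial derivative
+$\partial = \partial/\partial x : S \to S$ is a differential operator
+of order $1$. Indeed, for $g \in S$ the Leibniz rule gives
+$$
+\partial(gm) - g\partial(m) = \partial(g) m
+$$
+so $m \mapsto \partial(gm) - g\partial(m)$ is multiplication by
+$\partial(g)$, which is $S$-linear, i.e., of order $0$. By
+Lemma \ref{lemma-composition-differential-operators} the iterate
+$\partial^k$ is then a differential operator of order $k$.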
+
+\begin{lemma}
+\label{lemma-composition-differential-operators}
+Let $R \to S$ be a ring map. Let $L, M, N$ be $S$-modules.
+If $D : L \to M$ and $D' : M \to N$ are differential
+operators of order $k$ and $k'$, then $D' \circ D$ is a
+differential operator of order $k + k'$.
+\end{lemma}
+
+\begin{proof}
+Let $g \in S$. Then the map which sends $x \in L$ to
+$$
+D'(D(gx)) - gD'(D(x)) = D'(D(gx)) - D'(gD(x)) + D'(gD(x)) - gD'(D(x))
+$$
+is a sum of two compositions of differential operators of lower order.
+Hence the lemma follows by induction on $k + k'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-module-principal-parts}
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module.
+Let $k \geq 0$. There exists an $S$-module $P^k_{S/R}(M)$
+and a canonical isomorphism
+$$
+\text{Diff}^k_{S/R}(M, N) = \Hom_S(P^k_{S/R}(M), N)
+$$
+functorial in the $S$-module $N$.
+\end{lemma}
+
+\begin{proof}
+The existence of $P^k_{S/R}(M)$ follows from general category theoretic
+arguments (insert future reference here), but we will also give a
+construction. Set $F = \bigoplus_{m \in M} S[m]$ where $[m]$ is a
+symbol indicating the basis element in the summand corresponding to $m$.
+Given any differential operator $D : M \to N$ we obtain an $S$-linear
+map $L_D : F \to N$ sending $[m]$ to $D(m)$. If $D$ has order $0$,
+then $L_D$ annihilates the elements
+$$
+[m + m'] - [m] - [m'],\quad
+g_0[m] - [g_0m]
+$$
+where $g_0 \in S$ and $m, m' \in M$.
+If $D$ has order $1$, then $L_D$ annihilates the elements
+$$
+[m + m'] - [m] - [m'],\quad
+f[m] - [fm], \quad
+g_0g_1[m] - g_0[g_1m] - g_1[g_0m] + [g_1g_0m]
+$$
+where
+$f \in R$, $g_0, g_1 \in S$, and $m \in M$.
+If $D$ has order $k$, then $L_D$ annihilates the elements
+$[m + m'] - [m] - [m']$, $f[m] - [fm]$, and the elements
+$$
+g_0g_1\ldots g_k[m] - \sum g_0 \ldots \hat g_i \ldots g_k[g_im] + \ldots
++(-1)^{k + 1}[g_0\ldots g_km]
+$$
+Conversely, if $L : F \to N$ is an
+$S$-linear map annihilating all the elements listed in the previous
+sentence, then $m \mapsto L([m])$ is a differential operator
+of order $k$. Thus we see that $P^k_{S/R}(M)$ is the quotient
+of $F$ by the submodule generated by these elements.
+\end{proof}
+
+\begin{definition}
+\label{definition-module-principal-parts}
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module. The module
+$P^k_{S/R}(M)$ constructed in Lemma \ref{lemma-module-principal-parts}
+is called the {\it module of principal parts of order $k$} of $M$.
+\end{definition}
+
+\noindent
+Note that the inclusions
+$$
+\text{Diff}^0(M, N) \subset
+\text{Diff}^1(M, N) \subset
+\text{Diff}^2(M, N) \subset \ldots
+$$
+correspond via Yoneda's lemma (Categories, Lemma \ref{categories-lemma-yoneda})
+to surjections
+$$
+\ldots \to P^2_{S/R}(M) \to P^1_{S/R}(M) \to P^0_{S/R}(M) = M
+$$
+
+\begin{example}
+\label{example-derivations-and-differential-operators}
+Let $R \to S$ be a ring map and let $N$ be an $S$-module. Observe that
+$\text{Diff}^1(S, N) = \text{Der}_R(S, N) \oplus N$.
+Namely, if $D : S \to N$ is a differential operator of order $1$
+then $\sigma_D : S \to N$ defined by $\sigma_D(g) := D(g) - gD(1)$
+is an $R$-derivation and
+$D = \sigma_D + \lambda_{D(1)}$ where $\lambda_x : S \to N$ is the
+linear map sending $g$ to $gx$. It follows that
+$P^1_{S/R}(S) = \Omega_{S/R} \oplus S$ by the universal property of
+$\Omega_{S/R}$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-sequence-of-principal-parts}
+Let $R \to S$ be a ring map. Let $M$ be an $S$-module. There is a
+canonical short exact sequence
+$$
+0 \to \Omega_{S/R} \otimes_S M \to P^1_{S/R}(M) \to M \to 0
+$$
+functorial in $M$ called the {\it sequence of principal parts}.
+\end{lemma}
+
+\begin{proof}
+The map $P^1_{S/R}(M) \to M$ is given above.
+Let $N$ be an $S$-module and let $D : M \to N$ be a
+differential operator of order $1$. For $m \in M$ the map
+$$
+g \longmapsto D(gm) - gD(m)
+$$
+is an $R$-derivation $S \to N$ by the axioms for differential operators
+of order $1$. Thus it corresponds to a linear map $D_m : \Omega_{S/R} \to N$
+determined by the rule $a\text{d}b \mapsto aD(bm) - abD(m)$
+(see Lemma \ref{lemma-universal-omega}). The map
+$$
+\Omega_{S/R} \times M \longrightarrow N,\quad
+(\eta, m) \longmapsto D_m(\eta)
+$$
+is $S$-bilinear (details omitted) and hence determines an $S$-linear
+map
+$$
+\sigma_D : \Omega_{S/R} \otimes_S M \to N
+$$
+In this way we obtain a map
+$\text{Diff}^1(M, N) \to \Hom_S(\Omega_{S/R} \otimes_S M, N)$,
+$D \mapsto \sigma_D$ functorial in $N$. By the Yoneda lemma this corresponds
+to a map $\Omega_{S/R} \otimes_S M \to P^1_{S/R}(M)$. It is immediate
+from the construction that this map is functorial in $M$. The sequence
+$$
+\Omega_{S/R} \otimes_S M \to P^1_{S/R}(M) \to M \to 0
+$$
+is exact because for every module $N$ the sequence
+$$
+0 \to \Hom_S(M, N) \to
+\text{Diff}^1(M, N) \to
+\Hom_S(\Omega_{S/R} \otimes_S M, N)
+$$
+is exact by inspection.
+
+\medskip\noindent
+To see that $\Omega_{S/R} \otimes_S M \to P^1_{S/R}(M)$ is injective
+we argue as follows. Choose an exact sequence
+$$
+0 \to M' \to F \to M \to 0
+$$
+with $F$ a free $S$-module. This induces an exact sequence
+$$
+0 \to \text{Diff}^1(M, N) \to \text{Diff}^1(F, N) \to \text{Diff}^1(M', N)
+$$
+for all $N$. This proves that in the commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\Omega_{S/R} \otimes_S M' \ar[r] \ar[d] &
+P^1_{S/R}(M') \ar[r] \ar[d] &
+M' \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\Omega_{S/R} \otimes_S F \ar[r] \ar[d] &
+P^1_{S/R}(F) \ar[r] \ar[d] &
+F \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\Omega_{S/R} \otimes_S M \ar[r] \ar[d] &
+P^1_{S/R}(M) \ar[r] \ar[d] &
+M \ar[r] \ar[d] &
+0 \\
+& 0 & 0 & 0
+}
+$$
+the middle column is exact. The left column is exact by
+right exactness of $\Omega_{S/R} \otimes_S -$. By the snake lemma
+(see Section \ref{section-snake}) it suffices to prove exactness
+on the left for the free module $F$.
+Using that $P^1_{S/R}(-)$ commutes with direct sums we reduce to the case
+$M = S$. This case is a consequence of the discussion in
+Example \ref{example-derivations-and-differential-operators}.
+\end{proof}
+
+\begin{remark}
+\label{remark-functoriality-principal-parts}
+Suppose given a commutative diagram of rings
+$$
+\xymatrix{
+B \ar[r] & B' \\
+A \ar[u] \ar[r] & A' \ar[u]
+}
+$$
+a $B$-module $M$, a $B'$-module $M'$, and a $B$-linear map $M \to M'$.
+Then we get a compatible system of module maps
+$$
+\xymatrix{
+\ldots \ar[r] &
+P^2_{B'/A'}(M') \ar[r] &
+P^1_{B'/A'}(M') \ar[r] &
+P^0_{B'/A'}(M') \\
+\ldots \ar[r] &
+P^2_{B/A}(M) \ar[r] \ar[u] &
+P^1_{B/A}(M) \ar[r] \ar[u] &
+P^0_{B/A}(M) \ar[u]
+}
+$$
+These maps are compatible with further composition of maps of this type.
+The easiest way to see this is to use the description of the modules
+$P^k_{B/A}(M)$ in terms of generators and relations in the proof of
+Lemma \ref{lemma-module-principal-parts} but it can also be seen
+directly from the universal
+property of these modules. Moreover, these maps are compatible with
+the short exact sequences of Lemma \ref{lemma-sequence-of-principal-parts}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-differentials-de-rham-complex-order-1}
+Let $A \to B$ be a ring map. The differentials
+$\text{d} : \Omega^i_{B/A} \to \Omega^{i + 1}_{B/A}$
+are differential operators of order $1$.
+\end{lemma}
+
+\begin{proof}
+Given $b \in B$ we have to show that $\text{d} \circ b - b \circ \text{d}$
+is a differential operator of order $0$, i.e., $B$-linear. Thus we have
+to show that
+$$
+\text{d} \circ b \circ b' - b \circ \text{d} \circ b' -
+b' \circ \text{d} \circ b + b' \circ b \circ \text{d} = 0
+$$
+To see this it suffices to check this on additive generators for
+$\Omega^i_{B/A}$. Thus it suffices to show that
+$$
+\text{d}(bb'b_0\text{d}b_1 \wedge \ldots \wedge \text{d}b_i) -
+b\text{d}(b'b_0\text{d}b_1 \wedge \ldots \wedge \text{d}b_i) -
+b'\text{d}(bb_0\text{d}b_1 \wedge \ldots \wedge \text{d}b_i) +
+bb'\text{d}(b_0\text{d}b_1 \wedge \ldots \wedge \text{d}b_i)
+$$
+is zero. This is a pleasant calculation using the Leibniz rule
+which is left to the reader.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-differential-operators}
+Let $A \to B$ be a ring map. Let $g_i \in B$, $i \in I$ be a set of generators
+for $B$ as an $A$-algebra. Let $M, N$ be $B$-modules.
+Let $D : M \to N$ be an $A$-linear map. In order to show
+that $D$ is a differential operator of order $k$ it suffices
+to show that $D \circ g_i - g_i \circ D$ is a differential
+operator of order $k - 1$ for $i \in I$.
+\end{lemma}
+
+\begin{proof}
+Namely, we claim that the set of elements $g \in B$ such that
+$D \circ g - g \circ D$ is a differential operator of order $k - 1$
+is an $A$-subalgebra of $B$. This follows from the relations
+$$
+D \circ (g + g') - (g + g') \circ D =
+(D \circ g - g \circ D) + (D \circ g' - g' \circ D)
+$$
+and
+$$
+D \circ gg' - gg' \circ D =
+(D \circ g - g \circ D) \circ g' + g \circ (D \circ g' - g' \circ D)
+$$
+Strictly speaking, to conclude for products we also use
+Lemma \ref{lemma-composition-differential-operators}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-invert-system-differential-operators}
+Let $A \to B$ be a ring map. Let $M, N$ be $B$-modules.
+Let $S \subset B$ be a multiplicative subset. Any differential operator
+$D : M \to N$ of order $k$ extends uniquely to a differential operator
+$E : S^{-1}M \to S^{-1}N$ of order $k$.
+\end{lemma}
+
+\begin{proof}
+By induction on $k$. If $k = 0$, then $D$ is $B$-linear and hence we
+get the extension by the functoriality of localization. Given $b \in B$
+the operator $L_b : m \mapsto D(bm) - bD(m)$ has order $k - 1$. Hence
+it has a unique extension to a differential operator
+$E_b : S^{-1}M \to S^{-1}N$ of order $k - 1$ by induction.
+Moreover, a computation shows that $L_{b'b} = L_{b'} \circ b + b' \circ L_b$
+hence by uniqueness we obtain $E_{b'b} = E_{b'} \circ b + b' \circ E_b$.
+Similarly, we obtain $E_{b'} \circ b - b \circ E_{b'} =
+E_b \circ b' - b' \circ E_b$.
+Now for $m \in M$ and $g \in S$ we set
+$$
+E(m/g) = (1/g)(D(m) - E_g(m/g))
+$$
+To show that this is well defined it suffices to show that
+for $g' \in S$ if we use the representative $g'm/g'g$ we get the
+same result. We compute
+\begin{align*}
+(1/g'g)(D(g'm) - E_{g'g}(g'm/gg'))
+& =
+(1/gg')(g'D(m) + E_{g'}(m) - E_{g'g}(g'm/gg')) \\
+& =
+(1/g'g)(g'D(m) - g' E_g(m/g))
+\end{align*}
+which is the same as before. It is clear that $E$ is $A$-linear
+as $D$ and $E_g$ are $A$-linear. Taking $g = 1$ and using that $E_1 = 0$
+we see that $E$ extends $D$. By Lemma \ref{lemma-check-differential-operators}
+it now suffices to show that $E \circ b - b \circ E$ for $b \in B$ and
+$E \circ 1/g' - 1/g' \circ E$ for $g' \in S$ are differential operators of
+order $k - 1$ in order to show that $E$ is a differential operator of
+order $k$. For the first, choose an element $m/g$ in $S^{-1}M$ and
+observe that
+\begin{align*}
+E(b m/g) - bE(m/g)
+& =
+(1/g)(D(bm) - bD(m) - E_g(bm/g) + bE_g(m/g)) \\
+& =
+(1/g)(L_b(m) - E_b(m) + gE_b(m/g)) \\
+& =
+E_b(m/g)
+\end{align*}
+which is a differential operator of order $k - 1$. Finally, we have
+\begin{align*}
+E(m/g'g) - (1/g')E(m/g)
+& =
+(1/g'g)(D(m) - E_{g'g}(m/g'g)) -
+(1/g'g)(D(m) - E_g(m/g)) \\
+& =
+-(1/g')E_{g'}(m/g'g)
+\end{align*}
+which also is a differential operator of order $k - 1$ as the composition
+of linear maps (multiplication by $1/g'$ and signs) and $E_{g'}$.
+We omit the proof of uniqueness.
+\end{proof}
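+
+\medskip\noindent
+To see the formula of the proof in action, take $A \to B = A[x]$,
+$S = \{1, x, x^2, \ldots\}$, and $D = \partial/\partial x : B \to B$,
+a differential operator of order $1$ (as $m \mapsto D(gm) - gD(m)$ is
+multiplication by $\partial g/\partial x$, hence $B$-linear).
+Here $L_x(m) = D(xm) - xD(m) = m$, so $E_x$ is the identity and
+the formula gives
+$$
+E(1/x) = (1/x)\left(D(1) - E_x(1/x)\right) = (1/x)(0 - 1/x) = -1/x^2
+$$
+as expected for the extension of $\partial/\partial x$ to $A[x, 1/x]$.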
+
+\begin{lemma}
+\label{lemma-base-change-differential-operators}
+Let $R \to A$ and $R \to B$ be ring maps. Let $M$ and $M'$ be $A$-modules.
+Let $D : M \to M'$ be a differential operator of order $k$ with respect to
+$R \to A$. Let $N$ be any $B$-module. Then the map
+$$
+D \otimes \text{id}_N : M \otimes_R N \to M' \otimes_R N
+$$
+is a differential operator of order $k$ with respect to $B \to A \otimes_R B$.
+\end{lemma}
+
+\begin{proof}
+It is clear that $D' = D \otimes \text{id}_N$ is $B$-linear.
+By Lemma \ref{lemma-check-differential-operators} it suffices
+to show that
+$$
+D' \circ (a \otimes 1) - (a \otimes 1) \circ D' =
+(D \circ a - a \circ D) \otimes \text{id}_N
+$$
+is a differential operator of order $k - 1$ which follows
+by induction on $k$.
+\end{proof}
+
+
+
+
+
+
+\section{The naive cotangent complex}
+\label{section-netherlander}
+
+\noindent
+Let $R \to S$ be a ring map. Denote $R[S]$ the polynomial ring
+whose variables are the elements $s \in S$. Let's denote $[s] \in R[S]$
+the variable corresponding to $s \in S$. Thus $R[S]$ is a free
+$R$-module on the basis elements $[s_1] \ldots [s_n]$ where
+$s_1, \ldots, s_n$ range over all unordered sequences of elements of $S$.
+There is a canonical surjection
+\begin{equation}
+\label{equation-canonical-presentation}
+R[S] \longrightarrow S,\quad [s] \longmapsto s
+\end{equation}
+whose kernel we denote $I \subset R[S]$. It is a simple observation that
+$I$ is generated by the elements
+$[s + s'] - [s] - [s']$, $[s][s'] - [ss']$ and $[r] - r$.
+According to Lemma \ref{lemma-differential-seq}
+there is a canonical map
+\begin{equation}
+\label{equation-naive-cotangent-complex}
+I/I^2 \longrightarrow \Omega_{R[S]/R} \otimes_{R[S]} S
+\end{equation}
+whose cokernel is canonically isomorphic to $\Omega_{S/R}$. Observe that
+the $S$-module $\Omega_{R[S]/R} \otimes_{R[S]} S$ is free on the generators
+$\text{d}[s]$.
+
+\begin{definition}
+\label{definition-naive-cotangent-complex}
+Let $R \to S$ be a ring map. The {\it naive cotangent complex}
+$\NL_{S/R}$ is the chain complex (\ref{equation-naive-cotangent-complex})
+$$
+\NL_{S/R} = \left(I/I^2 \longrightarrow \Omega_{R[S]/R} \otimes_{R[S]} S\right)
+$$
+with $I/I^2$ placed in (homological) degree $1$ and
+$\Omega_{R[S]/R} \otimes_{R[S]} S$ placed in degree $0$. We will denote
+$H_1(L_{S/R}) = H_1(\NL_{S/R})$\footnote{This module is sometimes
+denoted $\Gamma_{S/R}$ in the literature.} the homology in degree $1$.
+\end{definition}
+
+\noindent
+Before we continue let us say a few words about the actual cotangent
+complex (Cotangent, Section \ref{cotangent-section-cotangent-ring-map}).
+Given a ring map $R \to S$ there exists a canonical simplicial
+$R$-algebra $P_\bullet$ whose terms are polynomial algebras and
+which comes equipped with a canonical homotopy equivalence
+$$
+P_\bullet \longrightarrow S
+$$
+The cotangent complex $L_{S/R}$ of $S$ over $R$ is defined as the chain
+complex associated to the simplicial module
+$$
+\Omega_{P_\bullet/R} \otimes_{P_\bullet} S
+$$
+The naive cotangent complex as defined above is canonically isomorphic to
+the truncation $\tau_{\leq 1}L_{S/R}$ (see
+Homology, Section \ref{homology-section-truncations} and
+Cotangent, Section \ref{cotangent-section-surjections}). In particular, it is
+indeed the case that $H_1(\NL_{S/R}) = H_1(L_{S/R})$ so our definition is
+compatible with the one using the cotangent complex. Moreover,
+$H_0(L_{S/R}) = H_0(\NL_{S/R}) = \Omega_{S/R}$ as we've seen above.
+
+\medskip\noindent
+Let $R \to S$ be a ring map. A {\it presentation of $S$ over $R$} is
+a surjection $\alpha : P \to S$ of $R$-algebras where $P$ is a polynomial
+algebra (on a set of variables). Often, when $S$ is of finite type over $R$
+we will indicate this by saying: ``Let $R[x_1, \ldots, x_n] \to S$ be a
+presentation of $S/R$'', or
+``Let $0 \to I \to R[x_1, \ldots, x_n] \to S \to 0$ be a presentation
+of $S/R$'' if we want to indicate that $I$ is the kernel of the presentation.
+Note that the map $R[S] \to S$ used to define the naive cotangent complex
+is an example of a presentation.
+
+\medskip\noindent
+Note that for every presentation $\alpha$ we obtain a two term
+chain complex of $S$-modules
+$$
+\NL(\alpha) : I/I^2 \longrightarrow \Omega_{P/R} \otimes_P S.
+$$
+Here the term $I/I^2$ is placed in degree $1$ and the term
+$\Omega_{P/R} \otimes S$ is placed in degree $0$. The class of $f \in I$
+in $I/I^2$ is mapped to $\text{d}f \otimes 1$ in $\Omega_{P/R} \otimes S$.
+The cokernel of this complex is canonically $\Omega_{S/R}$,
+see Lemma \ref{lemma-differential-seq}. We call the complex $\NL(\alpha)$
+the {\it naive cotangent complex associated to the
+presentation $\alpha : P \to S$ of $S/R$}. Note that if $P = R[S]$
+with its canonical surjection onto $S$, then we recover $\NL_{S/R}$.
+If $P = R[x_1, \ldots, x_n]$ then we will sometimes use the notation
+$I/I^2 \to \bigoplus_{i = 1, \ldots, n} S\text{d}x_i$
+to denote this complex.
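+
+\medskip\noindent
+For instance, take $S = R[x]/(f)$ with the evident presentation
+$\alpha : R[x] \to S$ and $I = (f)$. If $f$ is a nonzerodivisor in
+$R[x]$, then $I/I^2$ is free of rank $1$ over $S$ on the class
+$\overline{f}$ and
+$$
+\NL(\alpha) = \left(S \cdot \overline{f} \longrightarrow S\text{d}x\right),
+\quad
+\overline{f} \longmapsto \text{d}f \otimes 1 = f'\text{d}x
+$$
+where $f'$ denotes the derivative of $f$. Hence
+$H_0(\NL(\alpha)) = \Omega_{S/R} = S\text{d}x/f'S\text{d}x$ and
+$H_1(\NL(\alpha))$ is the annihilator of $f'$ in $S$. For example, for
+$R = \mathbf{Q}$ and $f = x^2$ the element $x \cdot \overline{f}$ is a
+nonzero class in $H_1(\NL(\alpha))$, so the naive cotangent complex
+carries more information than $\Omega_{S/R}$ alone.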
+
+\medskip\noindent
+Suppose we are given a commutative diagram
+\begin{equation}
+\label{equation-functoriality-NL}
+\vcenter{
+\xymatrix{
+S \ar[r]_{\phi} & S' \\
+R \ar[r] \ar[u] & R' \ar[u]
+}
+}
+\end{equation}
+of rings. Let $\alpha : P \to S$ be a presentation of $S$ over $R$ and let
+$\alpha' : P' \to S'$ be a presentation of $S'$ over $R'$.
+A {\it morphism of presentations from $\alpha : P \to S$ to
+$\alpha' : P' \to S'$} is defined to be an $R$-algebra
+map
+$$
+\varphi : P \to P'
+$$
+such that $\phi \circ \alpha = \alpha' \circ \varphi$. Note that
+in this case $\varphi(I) \subset I'$, where $I = \Ker(\alpha)$
+and $I' = \Ker(\alpha')$. Thus $\varphi$ induces a map
+of $S$-modules $I/I^2 \to I'/(I')^2$ and by functoriality of
+differentials also an $S$-module map
+$\Omega_{P/R} \otimes S \to \Omega_{P'/R'} \otimes S'$.
+These maps are compatible with the differentials of $\NL(\alpha)$ and
+$\NL(\alpha')$ and we obtain a map of naive cotangent complexes
+$$
+\NL(\alpha) \longrightarrow \NL(\alpha').
+$$
+It is often convenient to consider the induced map
+$\NL(\alpha) \otimes_S S' \to \NL(\alpha')$.
+
+\medskip\noindent
+In the special case that $P = R[S]$ and $P' = R'[S']$ the map
+$\phi : S \to S'$ induces a canonical ring map
+$\varphi : P \to P'$ by the rule $[s] \mapsto [\phi(s)]$.
+Hence the construction above determines canonical(!) maps
+of chain complexes
+$$
+\NL_{S/R} \longrightarrow \NL_{S'/R'},\quad\text{and}\quad
+\NL_{S/R} \otimes_S S' \longrightarrow \NL_{S'/R'}
+$$
+associated to the diagram (\ref{equation-functoriality-NL}). Note that
+this construction is compatible with composition: given a commutative
+diagram
+$$
+\xymatrix{
+S \ar[r]_{\phi} & S' \ar[r]_{\phi'} & S'' \\
+R \ar[r] \ar[u] & R' \ar[u] \ar[r] & R'' \ar[u]
+}
+$$
+we see that the composition of
+$$
+\NL_{S/R} \longrightarrow \NL_{S'/R'} \longrightarrow \NL_{S''/R''}
+$$
+is the map $\NL_{S/R} \to \NL_{S''/R''}$ given by the outer square.
+
+\medskip\noindent
+It turns out that $\NL(\alpha)$ is homotopy equivalent to $\NL_{S/R}$
+and that the maps constructed above are well defined up to homotopy
+(homotopies of maps of complexes are discussed in
+Homology, Section \ref{homology-section-complexes}
+but we also spell out the exact meaning of the statements in the lemma
+below in its proof).
+
+\begin{lemma}
+\label{lemma-NL-homotopy}
+Suppose given a diagram (\ref{equation-functoriality-NL}).
+Let $\alpha : P \to S$ and $\alpha' : P' \to S'$ be presentations.
+\begin{enumerate}
+\item There exists a morphism of presentations from $\alpha$ to $\alpha'$.
+\item Any two morphisms of presentations induce homotopic
+morphisms of complexes $\NL(\alpha) \to \NL(\alpha')$.
+\item The construction is compatible with compositions of morphisms
+of presentations (see proof for exact statement).
+\item If $R \to R'$ and $S \to S'$ are isomorphisms, then
+for any map $\varphi$ of presentations from $\alpha$ to $\alpha'$
+the induced map $\NL(\alpha) \to \NL(\alpha')$ is a homotopy equivalence
+and a quasi-isomorphism.
+\end{enumerate}
+In particular, comparing $\alpha$ to the canonical presentation
+(\ref{equation-canonical-presentation}) we conclude there is a
+quasi-isomorphism $\NL(\alpha) \to \NL_{S/R}$ well defined
+up to homotopy and compatible with all functorialities (up to homotopy).
+\end{lemma}
+
+\begin{proof}
+Since $P$ is a polynomial algebra over $R$ we can write
+$P = R[x_a, a \in A]$ for some set $A$.
+As $\alpha'$ is surjective, we can choose
+for every $a \in A$ an element $f_a \in P'$
+such that $\alpha'(f_a) = \phi(\alpha(x_a))$. Let
+$\varphi : P = R[x_a, a \in A] \to P'$ be the
+unique $R$-algebra map such that $\varphi(x_a) = f_a$.
+This gives the morphism in (1).
+
+\medskip\noindent
+Let $\varphi$ and $\varphi'$ be morphisms of presentations from $\alpha$
+to $\alpha'$. Let $I = \Ker(\alpha)$ and $I' = \Ker(\alpha')$.
+We have to construct the diagonal map $h$ in the diagram
+$$
+\xymatrix{
+I/I^2 \ar[r]^-{\text{d}}
+\ar@<1ex>[d]^{\varphi'_1} \ar@<-1ex>[d]_{\varphi_1}
+&
+\Omega_{P/R} \otimes_P S
+\ar@<1ex>[d]^{\varphi'_0} \ar@<-1ex>[d]_{\varphi_0}
+\ar[ld]_h
+\\
+I'/(I')^2 \ar[r]^-{\text{d}}
+&
+\Omega_{P'/R'} \otimes_{P'} S'
+}
+$$
+where the vertical maps are induced by $\varphi$, $\varphi'$ such that
+$$
+\varphi_1 - \varphi'_1 = h \circ \text{d}
+\quad\text{and}\quad
+\varphi_0 - \varphi'_0 = \text{d} \circ h
+$$
+Consider the map $\varphi - \varphi' : P \to P'$. Since both $\varphi$
+and $\varphi'$ are compatible with $\alpha$ and $\alpha'$ we obtain
+$\varphi - \varphi' : P \to I'$. This implies that
+$\varphi, \varphi' : P \to P'$ induce the same $P$-module structure
+on $I'/(I')^2$, since
+$\varphi(p)i' - \varphi'(p)i' = (\varphi - \varphi')(p)i' \in (I')^2$.
+Also $\varphi - \varphi'$ is $R$-linear and
+$$
+(\varphi - \varphi')(fg) =
+\varphi(f)(\varphi - \varphi')(g) + (\varphi - \varphi')(f)\varphi'(g)
+$$
+Hence the induced map $D : P \to I'/(I')^2$ is an $R$-derivation.
+Thus we obtain a canonical map $h : \Omega_{P/R} \otimes_P S \to I'/(I')^2$
+such that $D = h \circ \text{d}$.
+A calculation (omitted) shows that $h$ is the desired homotopy.
+
+\medskip\noindent
+Suppose that we have a commutative diagram
+$$
+\xymatrix{
+S \ar[r]_{\phi} & S' \ar[r]_{\phi'} & S'' \\
+R \ar[r] \ar[u] & R' \ar[u] \ar[r] & R'' \ar[u]
+}
+$$
+and that
+\begin{enumerate}
+\item $\alpha : P \to S$,
+\item $\alpha' : P' \to S'$, and
+\item $\alpha'' : P'' \to S''$
+\end{enumerate}
+are presentations. Suppose that
+\begin{enumerate}
+\item $\varphi : P \to P'$ is a morphism of presentations from
+$\alpha$ to $\alpha'$ and
+\item $\varphi' : P' \to P''$
+is a morphism of presentations from $\alpha'$ to $\alpha''$.
+\end{enumerate}
+Then it is immediate that
+$\varphi' \circ \varphi : P \to P''$
+is a morphism of presentations from $\alpha$ to $\alpha''$ and that
+the induced map $\NL(\alpha) \to \NL(\alpha'')$ of naive cotangent complexes
+is the composition of the maps $\NL(\alpha) \to \NL(\alpha')$ and
+$\NL(\alpha') \to \NL(\alpha'')$ induced by $\varphi$ and $\varphi'$.
+
+\medskip\noindent
In the simple case of two term complexes a quasi-isomorphism
is just a map inducing an isomorphism on both the kernel
and the cokernel of the differential. Note that homotopic
maps of two term complexes (as explained above) induce the same maps on
kernel and cokernel. Hence if $\varphi$ is a map from a presentation
$\alpha$ of $S$ over $R$ to itself, then the induced map
$\NL(\alpha) \to \NL(\alpha)$ is a quasi-isomorphism, being homotopic
to the identity by part (2). To prove (4) in full generality, consider
+a morphism $\varphi'$ from $\alpha'$ to $\alpha$ which exists by (1).
+The compositions $\NL(\alpha) \to \NL(\alpha') \to \NL(\alpha)$ and
+$\NL(\alpha') \to \NL(\alpha) \to \NL(\alpha')$ are homotopic to the identity
+maps by (3), hence these maps are homotopy equivalences by definition.
+It follows formally that both maps
+$\NL(\alpha) \to \NL(\alpha')$ and $\NL(\alpha') \to \NL(\alpha)$ are
+quasi-isomorphisms. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-NL-polynomial-algebra}
Let $A$ be a ring and let $B$ be a polynomial algebra over $A$.
Then $\NL_{B/A}$ is homotopy equivalent
+to the chain complex $(0 \to \Omega_{B/A})$ with $\Omega_{B/A}$
+in degree $0$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-NL-homotopy}
+and the fact that $\text{id}_B : B \to B$ is a presentation of $B$ over $A$
+with zero kernel.
+\end{proof}
+
+\noindent
+The following lemma is part of the motivation for introducing the
+naive cotangent complex. The cotangent complex extends this
+to a genuine long exact cohomology sequence. If $B \to C$ is a
+local complete intersection, then one can extend the sequence with a
+zero on the left, see
+More on Algebra, Lemma \ref{more-algebra-lemma-transitive-lci-at-end}.
+
+\begin{lemma}[Jacobi-Zariski sequence]
+\label{lemma-exact-sequence-NL}
+Let $A \to B \to C$ be ring maps. Choose a presentation
+$\alpha : A[x_s, s \in S] \to B$ with kernel $I$. Choose a presentation
+$\beta : B[y_t, t \in T] \to C$ with kernel $J$. Let
+$\gamma : A[x_s, y_t] \to C$ be the induced presentation of $C$ with kernel
+$K$. Then we get a canonical commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\Omega_{A[x_s]/A} \otimes C \ar[r] &
+\Omega_{A[x_s, y_t]/A} \otimes C \ar[r] &
+\Omega_{B[y_t]/B} \otimes C \ar[r] &
+0 \\
+&
+I/I^2 \otimes C \ar[r] \ar[u] &
+K/K^2 \ar[r] \ar[u] &
+J/J^2 \ar[r] \ar[u] &
+0
+}
+$$
+with exact rows. We get the following exact sequence
+of homology groups
+$$
+H_1(\NL_{B/A} \otimes_B C) \to
+H_1(L_{C/A}) \to
+H_1(L_{C/B}) \to
+C \otimes_B \Omega_{B/A} \to
+\Omega_{C/A} \to
+\Omega_{C/B} \to 0
+$$
+of $C$-modules extending the sequence of
+Lemma \ref{lemma-exact-sequence-differentials}.
+If $\text{Tor}_1^B(\Omega_{B/A}, C) = 0$, then
+$H_1(\NL_{B/A} \otimes_B C) = H_1(L_{B/A}) \otimes_B C$.
+\end{lemma}
+
+\begin{proof}
+The precise definition of the maps is omitted.
+The exactness of the top row follows as the $\text{d}x_s$,
+$\text{d}y_t$ form a basis for the middle module.
+The map $\gamma$ factors
+$$
+A[x_s, y_t] \to B[y_t] \to C
+$$
+with surjective first arrow and second arrow equal to $\beta$.
+Thus we see that $K \to J$ is surjective.
+Moreover, the kernel of the first displayed arrow is
+$IA[x_s, y_t]$. Hence $I/I^2 \otimes C$ surjects onto the
+kernel of $K/K^2 \to J/J^2$. Finally, we can use
+Lemma \ref{lemma-NL-homotopy}
+to identify the terms as homology groups of the naive
+cotangent complexes.
+The final assertion follows as the degree $0$ term of the complex
+$\NL_{B/A}$ is a free $B$-module.
+\end{proof}
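
\noindent
As a quick illustration of this sequence (not needed later), take
$A = k$ a field, $B = k[x]$, and $C = k[x]/(f)$ for a nonconstant
$f \in k[x]$. Then $H_1(\NL_{B/k} \otimes_B C) = 0$ by
Lemma \ref{lemma-NL-polynomial-algebra} and $\Omega_{C/B} = 0$.
Computing $\NL_{C/B}$ using the presentation $B \to C$ itself we see
that $H_1(L_{C/B}) = (f)/(f^2)$ is free of rank $1$ over $C$ on the
class of $f$, as $f$ is a nonzerodivisor. Since
$\text{d}f = f'\text{d}x$ the sequence of the lemma becomes
$$
0 \to H_1(L_{C/k}) \to C \xrightarrow{f'} C\,\text{d}x \to
\Omega_{C/k} \to 0
$$
Hence $\Omega_{C/k} \cong C/(f')$ and $H_1(L_{C/k})$ is the
annihilator of $f'$ in $C$; both vanish if and only if $f$ and $f'$
generate the unit ideal of $k[x]$.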
+
+\begin{remark}
+\label{remark-composition-homotopy-equivalent-to-zero}
+Let $A \to B$ and $\phi : B \to C$ be ring maps.
+Then the composition $\NL_{B/A} \to \NL_{C/A} \to \NL_{C/B}$ is
+homotopy equivalent to zero. Namely, this composition is the functoriality
+of the naive cotangent complex for the square
+$$
+\xymatrix{
+B \ar[r]_\phi & C \\
+A \ar[r] \ar[u] & B \ar[u]
+}
+$$
+Write $J = \Ker(B[C] \to C)$. An explicit homotopy is given by the map
$\Omega_{A[B]/A} \otimes_{A[B]} B \to J/J^2$ which maps the basis element
+$\text{d}[b]$ to the class of $[\phi(b)] - b$ in $J/J^2$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-NL-surjection}
+Let $A \to B$ be a surjective ring map with kernel $I$.
+Then $\NL_{B/A}$ is homotopy equivalent to the chain complex
+$(I/I^2 \to 0)$ with $I/I^2$ in degree $1$. In particular
+$H_1(L_{B/A}) = I/I^2$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-NL-homotopy}
+and the fact that $A \to B$ is a presentation of $B$ over $A$.
+\end{proof}
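
\noindent
For example, if $f \in A$ is a nonzerodivisor and $B = A/(f)$, then
$\Omega_{B/A} = 0$ and $H_1(L_{B/A}) = (f)/(f^2)$ is free of rank $1$
over $B$ on the class of $f$, because multiplication by $f$ induces an
isomorphism $A/(f) \to (f)/(f^2)$.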
+
+\begin{lemma}
+\label{lemma-application-NL}
+Let $A \to B \to C$ be ring maps. Assume $A \to C$ is surjective (so
+also $B \to C$ is). Denote $I = \Ker(A \to C)$ and
+$J = \Ker(B \to C)$. Then the sequence
+$$
+I/I^2 \to J/J^2 \to \Omega_{B/A} \otimes_B B/J \to 0
+$$
+is exact.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemma \ref{lemma-exact-sequence-NL}
+and the description of the naive cotangent complexes
+$\NL_{C/B}$ and $\NL_{C/A}$ in Lemma \ref{lemma-NL-surjection}.
+\end{proof}
+
+\begin{lemma}[Flat base change]
+\label{lemma-change-base-NL}
+Let $R \to S$ be a ring map. Let $\alpha : P \to S$ be a presentation.
+Let $R \to R'$ be a flat ring map.
+Let $\alpha' : P \otimes_R R' \to S' = S \otimes_R R'$
+be the induced presentation.
+Then $\NL(\alpha) \otimes_R R' = \NL(\alpha) \otimes_S S' = \NL(\alpha')$.
+In particular, the canonical map
+$$
+\NL_{S/R} \otimes_S S' \longrightarrow \NL_{S \otimes_R R'/R'}
+$$
+is a homotopy equivalence if $R \to R'$ is flat.
+\end{lemma}
+
+\begin{proof}
+This is true because
+$\Ker(\alpha') = R' \otimes_R \Ker(\alpha)$
+since $R \to R'$ is flat.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimits-NL}
+Let $R_i \to S_i$ be a system of ring maps over the directed set $I$.
+Set $R = \colim R_i$ and $S = \colim S_i$.
+Then $\NL_{S/R} = \colim \NL_{S_i/R_i}$.
+\end{lemma}
+
+\begin{proof}
+Recall that $\NL_{S/R}$ is the complex
+$I/I^2 \to \bigoplus_{s \in S} S\text{d}[s]$ where $I \subset R[S]$
+is the kernel of the canonical presentation $R[S] \to S$.
+Now it is clear that $R[S] = \colim R_i[S_i]$ and similarly
+that $I = \colim I_i$ where $I_i = \Ker(R_i[S_i] \to S_i)$.
+Hence the lemma is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-NL-of-localization}
+If $S \subset A$ is a multiplicative subset of $A$, then
+$\NL_{S^{-1}A/A}$ is homotopy equivalent to the zero complex.
+\end{lemma}
+
+\begin{proof}
+Since $A \to S^{-1}A$ is flat we see that
+$\NL_{S^{-1}A/A} \otimes_A S^{-1}A \to \NL_{S^{-1}A/S^{-1}A}$
+is a homotopy equivalence by flat base change
+(Lemma \ref{lemma-change-base-NL}). Since the source of the arrow
+is isomorphic to $\NL_{S^{-1}A/A}$ and the target of the arrow is
+zero (by Lemma \ref{lemma-NL-surjection}) we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-NL-localize-bottom}
Let $S \subset A$ be a multiplicative subset of $A$.
+Let $S^{-1}A \to B$ be a ring map.
+Then $\NL_{B/A} \to \NL_{B/S^{-1}A}$ is a homotopy equivalence.
+\end{lemma}
+
+\begin{proof}
+Choose a presentation $\alpha : P \to B$ of $B$ over $A$.
+Then $\beta : S^{-1}P \to B$ is a presentation of $B$ over $S^{-1}A$.
+A direct computation shows that we have $\NL(\alpha) = \NL(\beta)$
+which proves the lemma as the naive cotangent complex is well defined
+up to homotopy by Lemma \ref{lemma-NL-homotopy}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-principal-localization-NL}
+\begin{slogan}
+The formation of the naive cotangent complex commutes with localization
+at an element.
+\end{slogan}
+Let $A \to B$ be a ring map. Let $g \in B$. Suppose $\alpha : P \to B$
+is a presentation with kernel $I$. Then a presentation of $B_g$ over $A$ is
+the map
+$$
+\beta : P[x] \longrightarrow B_g
+$$
+extending $\alpha$ and sending $x$ to $1/g$.
+The kernel $J$ of $\beta$ is generated by $I$ and the element $f x - 1$
+where $f \in P$ is an element mapped to $g \in B$ by $\alpha$. In this
+situation we have
+\begin{enumerate}
+\item $J/J^2 = (I/I^2)_g \oplus B_g (f x - 1)$,
+\item $\Omega_{P[x]/A} \otimes_{P[x]} B_g =
+\Omega_{P/A} \otimes_P B_g \oplus B_g \text{d}x$,
+\item $\NL(\beta) \cong
+\NL(\alpha) \otimes_B B_g \oplus (B_g \xrightarrow{g} B_g)$
+\end{enumerate}
+Hence the canonical map $\NL_{B/A} \otimes_B B_g \to \NL_{B_g/A}$
+is a homotopy equivalence.
+\end{lemma}
+
+\begin{proof}
+Since $P[x]/(I, fx - 1) = B[x]/(gx - 1) = B_g$ we get the statement about
+$I$ and $fx - 1$ generating $J$. Consider the commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\Omega_{P/A} \otimes B_g \ar[r] &
+\Omega_{P[x]/A} \otimes B_g \ar[r] &
+\Omega_{B[x]/B} \otimes B_g \ar[r] &
+0 \\
+&
+(I/I^2)_g \ar[r] \ar[u] &
+J/J^2 \ar[r] \ar[u] &
+(gx - 1)/(gx - 1)^2 \ar[r] \ar[u] &
+0
+}
+$$
+with exact rows of Lemma \ref{lemma-exact-sequence-NL}.
+The $B_g$-module $\Omega_{B[x]/B} \otimes B_g$ is free of
+rank $1$ on $\text{d}x$. The element $\text{d}x$ in the
+$B_g$-module $\Omega_{P[x]/A} \otimes B_g$ provides
+a splitting for the top row. The element $gx - 1 \in (gx - 1)/(gx - 1)^2$
+is mapped to $g\text{d}x$ in $\Omega_{B[x]/B} \otimes B_g$
+and hence $(gx - 1)/(gx - 1)^2$ is free of rank $1$ over $B_g$.
+(This can also be seen by arguing that $gx - 1$ is a nonzerodivisor
+in $B[x]$ because it is a polynomial with invertible constant term
+and any nonzerodivisor gives a quasi-regular sequence of length $1$
+by Lemma \ref{lemma-regular-quasi-regular}.)
+
+\medskip\noindent
Let us prove that the map $(I/I^2)_g \to J/J^2$ is injective.
Consider the $P$-algebra map
+$$
+\pi : P[x] \to (P/I^2)_f = P_f/I_f^2
+$$
+sending $x$ to $1/f$. Since $J$ is generated by $I$ and $fx - 1$
+we see that $\pi(J) \subset (I/I^2)_f = (I/I^2)_g$. Since this
+is an ideal of square zero we see that $\pi(J^2) = 0$.
+If $a \in I$ maps to an element of $J^2$ in $J$, then
+$\pi(a) = 0$, which implies that $a$ maps to zero in $I_f/I_f^2$.
+This proves the desired injectivity.
+
+\medskip\noindent
+Thus we have a short exact sequence of two term complexes
+$$
+0 \to \NL(\alpha) \otimes_B B_g \to \NL(\beta)
+\to (B_g \xrightarrow{g} B_g) \to 0
+$$
+Such a short exact sequence can always be split in the category of
+complexes. In our particular case we can take as splittings
+$$
+J/J^2 = (I/I^2)_g \oplus B_g (fx - 1)\quad\text{and}\quad
+\Omega_{P[x]/A} \otimes B_g = \Omega_{P/A} \otimes B_g \oplus
+B_g (g^{-2}\text{d}f + \text{d}x)
+$$
+This works because
+$\text{d}(fx - 1) = x\text{d}f + f \text{d}x =
+g(g^{-2}\text{d}f + \text{d}x)$
+in $\Omega_{P[x]/A} \otimes B_g$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-NL}
+Let $A \to B$ be a ring map. Let $S \subset B$ be a multiplicative subset.
+The canonical map $\NL_{B/A} \otimes_B S^{-1}B \to \NL_{S^{-1}B/A}$
+is a quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+We have $S^{-1}B = \colim_{g \in S} B_g$ where we think of $S$
+as a directed set (ordering by divisibility), see
+Lemma \ref{lemma-localization-colimit}.
+By Lemma \ref{lemma-principal-localization-NL} each of the maps
+$\NL_{B/A} \otimes_B B_g \to \NL_{B_g/A}$
+are quasi-isomorphisms.
+The lemma follows from Lemma \ref{lemma-colimits-NL}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sum-two-terms}
+Let $R$ be a ring.
Let $A_1 \to A_0$ and $B_1 \to B_0$ be two term complexes.
Suppose that there exist
+morphisms of complexes $\varphi : A_\bullet \to B_\bullet$
+and $\psi : B_\bullet \to A_\bullet$ such that
+$\varphi \circ \psi$ and $\psi \circ \varphi$ are
+homotopic to the identity maps.
+Then $A_1 \oplus B_0 \cong B_1 \oplus A_0$ as
+$R$-modules.
+\end{lemma}
+
+\begin{proof}
+Choose a map $h : A_0 \to A_1$ such that
+$$
+\text{id}_{A_1} - \psi_1 \circ \varphi_1 = h \circ d_A
+\text{ and }
+\text{id}_{A_0} - \psi_0 \circ \varphi_0 = d_A \circ h.
+$$
+Similarly, choose a map $h' : B_0 \to B_1$ such that
+$$
+\text{id}_{B_1} - \varphi_1 \circ \psi_1 = h' \circ d_B
+\text{ and }
+\text{id}_{B_0} - \varphi_0 \circ \psi_0 = d_B \circ h'.
+$$
+A trivial computation shows that
+$$
+\left(
+\begin{matrix}
\text{id}_{A_1} & -\psi_1 \circ h' + h \circ \psi_0 \\
+0 & \text{id}_{B_0}
+\end{matrix}
+\right)
+=
+\left(
+\begin{matrix}
+\psi_1 & h \\
+-d_B & \varphi_0
+\end{matrix}
+\right)
+\left(
+\begin{matrix}
+\varphi_1 & - h' \\
+d_A & \psi_0
+\end{matrix}
+\right)
+$$
+This shows that both matrices on the right hand side
+are invertible and proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-conormal-module}
+Let $R \to S$ be a ring map of finite type.
+For any presentations $\alpha : R[x_1, \ldots, x_n] \to S$, and
+$\beta : R[y_1, \ldots, y_m] \to S$ we have
+$$
+I/I^2 \oplus S^{\oplus m} \cong J/J^2 \oplus S^{\oplus n}
+$$
+as $S$-modules where $I = \Ker(\alpha)$ and $J = \Ker(\beta)$.
+\end{lemma}
+
+\begin{proof}
+See Lemmas \ref{lemma-NL-homotopy} and \ref{lemma-sum-two-terms}.
+\end{proof}
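
\noindent
For example, for $S = R[x]$ we may take
$\alpha = \text{id} : R[x] \to S$ (so $n = 1$ and $I = 0$) and
$\beta : R[y_1, y_2] \to S$ sending $y_1 \mapsto x$ and $y_2 \mapsto 0$
(so $m = 2$ and $J = (y_2)$). Since $y_2$ is a nonzerodivisor the
module $J/J^2$ is free of rank $1$ over $S$, and both sides of the
isomorphism of the lemma are free of rank $2$.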
+
+\begin{lemma}
+\label{lemma-conormal-module-localize}
+Let $R \to S$ be a ring map of finite type.
+Let $g \in S$. For any presentations
+$\alpha : R[x_1, \ldots, x_n] \to S$, and
+$\beta : R[y_1, \ldots, y_m] \to S_g$ we have
+$$
+(I/I^2)_g \oplus S^{\oplus m}_g \cong J/J^2 \oplus S_g^{\oplus n}
+$$
+as $S_g$-modules where
+$I = \Ker(\alpha)$ and $J = \Ker(\beta)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-conormal-module}, we see that it suffices to
+prove this for a single choice of $\alpha$ and $\beta$. Thus we may take
+$\beta$ the presentation of Lemma \ref{lemma-principal-localization-NL}
+and the result is clear.
+\end{proof}
+
+
+
+
+
+
+
+\section{Local complete intersections}
+\label{section-lci}
+
+\noindent
+The property of being a local complete intersection is an
+intrinsic property of a Noetherian local ring.
+This will be discussed in
+Divided Power Algebra, Section \ref{dpa-section-lci}.
+However, for the moment we just define this property for
+finite type algebras over a field.
+
+\begin{definition}
+\label{definition-lci-field}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+\begin{enumerate}
+\item We say that $S$ is a {\it global complete intersection over $k$}
+if there exists a presentation $S = k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+such that $\dim(S) = n - c$.
+\item We say that $S$ is a {\it local complete intersection over $k$}
+if there exists a covering $\Spec(S) = \bigcup D(g_i)$ such
+that each of the rings $S_{g_i}$ is a global complete intersection
+over $k$.
+\end{enumerate}
+We will also use the convention that the zero ring is a global
+complete intersection over $k$.
+\end{definition}
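
\noindent
For example, $S = k[x, y]/(xy)$ is a global complete intersection
over $k$: here $n = 2$, $c = 1$, and $\dim(S) = 1$ as $\Spec(S)$ is
the union of the two coordinate axes. In particular a global complete
intersection need not be a domain or a regular ring.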
+
+\noindent
+Suppose $S$ is a global complete intersection
+$S = k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ as in
+Definition \ref{definition-lci-field}.
+For a maximal ideal $\mathfrak m \subset k[x_1, \ldots, x_n]$
+we have $\dim(k[x_1, \ldots, x_n]_\mathfrak m) = n$
+(Lemma \ref{lemma-dim-affine-space}).
+If $(f_1, \ldots, f_c) \subset \mathfrak m$, then we
+conclude that $\dim(S_\mathfrak m) \geq n - c$ by
+Lemma \ref{lemma-one-equation}. Since $\dim(S) = n - c$
+by Definition \ref{definition-lci-field} we conclude
+that $\dim(S_\mathfrak m) = n - c$ for all maximal ideals of $S$
+and that $\Spec(S)$ is equidimensional
+(Topology, Definition \ref{topology-definition-equidimensional})
+of dimension $n - c$, see
+Lemma \ref{lemma-dimension-at-a-point-finite-type-over-field}.
+We will often use this without further mention.
+
+\begin{lemma}
+\label{lemma-localize-lci}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Let $g \in S$.
+\begin{enumerate}
+\item If $S$ is a global complete intersection so is $S_g$.
+\item If $S$ is a local complete intersection so is $S_g$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The second statement follows immediately from the first.
+Proof of the first statement. If $S_g$ is the zero ring,
+then it is true. Assume $S_g$ is nonzero.
+Write $S = k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+with $n - c = \dim(S)$ as in Definition \ref{definition-lci-field}.
+By the remarks following the definition $S$ is equidimensional
+of dimension $n - c$, so $\dim(S_g) = n - c$ as well. Let
+$g' \in k[x_1, \ldots, x_n]$ be an element whose residue class
+corresponds to $g$. Then
+$S_g = k[x_1, \ldots, x_n, x_{n + 1}]/(f_1, \ldots, f_c, x_{n + 1}g' - 1)$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-CM}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+If $S$ is a local complete intersection, then
+$S$ is a Cohen-Macaulay ring.
+\end{lemma}
+
+\begin{proof}
Choose a maximal ideal $\mathfrak m$ of $S$.
We have to show that $S_\mathfrak m$ is Cohen-Macaulay.
Since $\mathfrak m$ lies in one of the opens $D(g_i)$ of
Definition \ref{definition-lci-field} and since $S_\mathfrak m$ is a
localization of $S_{g_i}$, we may replace $S$ by $S_{g_i}$ and assume
$S = k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
with $\dim(S) = n - c$. Let $\mathfrak m' \subset k[x_1, \ldots, x_n]$
+be the maximal ideal corresponding to $\mathfrak m$.
+According to Proposition \ref{proposition-finite-gl-dim-polynomial-ring}
+the local ring
+$k[x_1, \ldots, x_n]_{\mathfrak m'}$ is regular local of
+dimension $n$. In particular it is Cohen-Macaulay by
+Lemma \ref{lemma-regular-ring-CM}.
+By Lemma \ref{lemma-one-equation} applied $c$ times the local ring
+$S_{\mathfrak m} = k[x_1, \ldots, x_n]_{\mathfrak m'}/(f_1, \ldots, f_c)$
+has dimension $\geq n - c$. By assumption $\dim(S_{\mathfrak m}) \leq n - c$.
+Thus we get equality. This implies that $f_1, \ldots, f_c$ is a regular
+sequence in $k[x_1, \ldots, x_n]_{\mathfrak m'}$ and that
+$S_{\mathfrak m}$ is Cohen-Macaulay, see Proposition
+\ref{proposition-CM-module}.
+\end{proof}
+
+\noindent
+The following is the technical key to the rest of the material in this
+section. An important feature of this lemma is that we may choose any
+presentation for the ring $S$, but that condition (1) does not depend
+on this choice.
+
+\begin{lemma}
+\label{lemma-lci}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Let $\mathfrak q$ be a prime of $S$.
+Choose any presentation $S = k[x_1, \ldots, x_n]/I$.
+Let $\mathfrak q'$ be the prime of $k[x_1, \ldots, x_n]$ corresponding
+to $\mathfrak q$. Set
+$c = \text{height}(\mathfrak q') - \text{height}(\mathfrak q)$,
+in other words $\dim_{\mathfrak q}(S) = n - c$
+(see Lemma \ref{lemma-codimension}). The following are equivalent
+\begin{enumerate}
+\item There exists a $g \in S$, $g \not \in \mathfrak q$
+such that $S_g$ is a global complete intersection over $k$.
+\item The ideal $I_{\mathfrak q'} \subset k[x_1, \ldots, x_n]_{\mathfrak q'}$
+can be generated by $c$ elements.
+\item The conormal module $(I/I^2)_{\mathfrak q}$ can be generated by
+$c$ elements over $S_{\mathfrak q}$.
+\item The conormal module $(I/I^2)_{\mathfrak q}$ is a free
+$S_{\mathfrak q}$-module of rank $c$.
+\item The ideal $I_{\mathfrak q'}$ can be generated by a regular sequence
+in the regular local ring $k[x_1, \ldots, x_n]_{\mathfrak q'}$.
+\end{enumerate}
+In this case any $c$ elements of $I_{\mathfrak q'}$
+which generate $I_{\mathfrak q'}/\mathfrak q'I_{\mathfrak q'}$
+form a regular sequence in the local
+ring $k[x_1, \ldots, x_n]_{\mathfrak q'}$.
+\end{lemma}
+
+\begin{proof}
+Set $R = k[x_1, \ldots, x_n]_{\mathfrak q'}$. This is a
+Cohen-Macaulay local
+ring of dimension $\text{height}(\mathfrak q')$, see for example
+Lemma \ref{lemma-lci-CM}. Moreover,
+$\overline{R} = R/IR = R/I_{\mathfrak q'} = S_{\mathfrak q}$
+is a quotient of dimension $\text{height}(\mathfrak q)$.
+Let $f_1, \ldots, f_c \in I_{\mathfrak q'}$ be elements
+which generate $(I/I^2)_{\mathfrak q}$. By Lemma \ref{lemma-NAK}
+we see that $f_1, \ldots, f_c$ generate $I_{\mathfrak q'}$.
+Since the dimensions work out, we conclude
+by Proposition \ref{proposition-CM-module} that
+$f_1, \ldots, f_c$ is a regular sequence in $R$.
+By Lemma \ref{lemma-regular-quasi-regular} we see that
+$(I/I^2)_{\mathfrak q}$ is free.
+These arguments show that (2), (3), (4) are equivalent and
+that they imply the last statement of the lemma, and therefore
+they imply (5).
+
+\medskip\noindent
+If (5) holds, say $I_{\mathfrak q'}$ is generated by a regular
+sequence of length $e$, then
+$\text{height}(\mathfrak q) = \dim(S_{\mathfrak q}) =
+\dim(k[x_1, \ldots, x_n]_{\mathfrak q'}) - e =
+\text{height}(\mathfrak q') - e$ by dimension theory,
+see Section \ref{section-dimension}. We conclude that $e = c$.
+Thus (5) implies (2).
+
+\medskip\noindent
+We continue with the notation introduced in the first paragraph.
+For each $f_i$ we may find $d_i \in k[x_1, \ldots, x_n]$,
+$d_i \not \in \mathfrak q'$ such that
+$f_i' = d_i f_i \in k[x_1, \ldots, x_n]$.
+Then it is still true that $I_{\mathfrak q'} = (f_1', \ldots, f_c')R$.
+Hence there exists a $g' \in k[x_1, \ldots, x_n]$, $g' \not \in \mathfrak q'$
+such that $I_{g'} = (f_1', \ldots, f_c')$.
+Moreover, pick $g'' \in k[x_1, \ldots, x_n]$, $g'' \not \in \mathfrak q'$
+such that $\dim(S_{g''}) = \dim_{\mathfrak q} \Spec(S)$.
+By Lemma \ref{lemma-codimension} this dimension is equal to $n - c$.
+Finally, set $g$ equal to the image of $g'g''$ in $S$.
+Then we see that
+$$
+S_g \cong k[x_1, \ldots, x_n, x_{n + 1}]
+/
+(f_1', \ldots, f_c', x_{n + 1}g'g'' - 1)
+$$
+and by our choice of $g''$ this ring has dimension $n - c$.
+Therefore it is a global complete intersection.
+Thus each of (2), (3), and (4) implies (1).
+
+\medskip\noindent
+Assume (1). Let $S_g \cong k[y_1, \ldots, y_m]/(f_1, \ldots, f_t)$
+be a presentation of $S_g$ as a global complete intersection.
+Write $J = (f_1, \ldots, f_t)$. Let $\mathfrak q'' \subset k[y_1, \ldots, y_m]$
+be the prime corresponding to $\mathfrak qS_g$. Note that
+$t = m - \dim(S_g) =
+\text{height}(\mathfrak q'') - \text{height}(\mathfrak q)$,
+see Lemma \ref{lemma-codimension} for the last equality.
+As seen in the proof of Lemma \ref{lemma-lci-CM} (and also above) the elements
+$f_1, \ldots, f_t$ form a regular sequence in the local ring
+$k[y_1, \ldots, y_m]_{\mathfrak q''}$.
+By Lemma \ref{lemma-regular-quasi-regular} we see that
+$(J/J^2)_{\mathfrak q}$ is free of rank $t$.
+By Lemma \ref{lemma-conormal-module-localize} we have
+$$
+J/J^2 \oplus S_g^n \cong (I/I^2)_g \oplus S_g^m
+$$
+Thus $(I/I^2)_{\mathfrak q}$ is free of rank
+$t + n - m = m - \dim(S_g) + n - m = n - \dim(S_g) =
+\text{height}(\mathfrak q') - \text{height}(\mathfrak q) = c$.
+Thus we obtain (4).
+\end{proof}
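
\noindent
Here is an example where the equivalent conditions of
Lemma \ref{lemma-lci} fail. Let $S = k[x_1, x_2, x_3, x_4]/I$ where
$I = (x_1, x_2) \cap (x_3, x_4) = (x_1x_3, x_1x_4, x_2x_3, x_2x_4)$,
so that $\Spec(S)$ is the union of two planes meeting in one point,
and let $\mathfrak q = (x_1, x_2, x_3, x_4)S$. Then
$\dim_{\mathfrak q}(S) = 2$ and $c = 2$, but the four quadrics form a
minimal system of generators of $I_{\mathfrak q'}$ (their classes span
the $4$-dimensional vector space $I/\mathfrak q' I$), so condition (2)
of the lemma fails at $\mathfrak q$. At all other primes of $S$ the
conditions do hold, since away from the origin $\Spec(S)$ is locally
one of the two planes.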
+
+\noindent
+The result of Lemma \ref{lemma-lci} suggests the following definition.
+
+\begin{definition}
+\label{definition-lci-local-ring}
+Let $k$ be a field. Let $S$ be a local $k$-algebra essentially of finite type
+over $k$. We say $S$ is a {\it complete intersection (over $k$)}
+if there exists a local $k$-algebra $R$ and elements
+$f_1, \ldots, f_c \in \mathfrak m_R$ such that
+\begin{enumerate}
+\item $R$ is essentially of finite type over $k$,
+\item $R$ is a regular local ring,
+\item $f_1, \ldots, f_c$ form a regular sequence in $R$, and
+\item $S \cong R/(f_1, \ldots, f_c)$ as $k$-algebras.
+\end{enumerate}
+\end{definition}
+
+\noindent
+By the Cohen structure theorem (see
+Theorem \ref{theorem-cohen-structure-theorem}) any complete
+Noetherian local ring may be written as the quotient of some regular complete
+local ring. Hence we may use the definition above to define the notion of
+a complete intersection ring for any complete Noetherian local ring.
+We will discuss this in
+Divided Power Algebra, Section \ref{dpa-section-lci}.
+In the meantime the following lemma shows that such a definition makes sense.
+
+\begin{lemma}
+\label{lemma-ci-well-defined}
+Let $A \to B \to C$ be surjective local ring homomorphisms.
+Assume $A$ and $B$ are regular local rings. The following are equivalent
+\begin{enumerate}
+\item $\Ker(A \to C)$ is generated by a regular sequence,
+\item $\Ker(A \to C)$ is generated by $\dim(A) - \dim(C)$ elements,
+\item $\Ker(B \to C)$ is generated by a regular sequence, and
+\item $\Ker(B \to C)$ is generated by $\dim(B) - \dim(C)$ elements.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+A regular local ring is Cohen-Macaulay, see Lemma \ref{lemma-regular-ring-CM}.
+Hence the equivalences (1) $\Leftrightarrow$ (2) and
+(3) $\Leftrightarrow$ (4), see Proposition \ref{proposition-CM-module}.
By Lemma \ref{lemma-regular-quotient-regular}
the ideal $\Ker(A \to B)$ can be generated
by $\dim(A) - \dim(B)$ elements. If (4) holds, then
$\Ker(A \to C)$ is generated by these elements together with
lifts to $A$ of $\dim(B) - \dim(C)$ generators of $\Ker(B \to C)$,
in total $\dim(A) - \dim(C)$ elements. Hence (4) implies (2).
+
+\medskip\noindent
+It remains to show that (1) implies (4). We do this by induction on
+$\dim(A) - \dim(B)$. The case $\dim(A) - \dim(B) = 0$ is trivial.
+Assume $\dim(A) > \dim (B)$.
+Write $I = \Ker(A \to C)$ and $J = \Ker(A \to B)$.
+Note that $J \subset I$. Our assumption is that the minimal number
+of generators of $I$ is $\dim(A) - \dim(C)$.
+Let $\mathfrak m \subset A$ be the maximal
+ideal. Consider the maps
+$$
+J/ \mathfrak m J \to I / \mathfrak m I \to \mathfrak m /\mathfrak m^2
+$$
+By Lemma \ref{lemma-regular-quotient-regular} and its proof the
+composition is injective. Take any element $x \in J$ which is
+not zero in $J /\mathfrak mJ$. By the above and Nakayama's lemma
+$x$ is an element of a minimal set of generators of $I$.
+Hence we may replace $A$ by $A/xA$ and $I$ by $I/xA$ which
+decreases both $\dim(A)$ and the minimal number of generators of $I$
+by $1$. Thus we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-local}
+Let $k$ be a field. Let $S$ be a local $k$-algebra essentially of finite
+type over $k$. The following are equivalent:
+\begin{enumerate}
+\item $S$ is a complete intersection over $k$,
+\item for any surjection $R \to S$ with $R$ a regular local ring
+essentially of finite presentation over $k$ the ideal
+$\Ker(R \to S)$ can be generated by a regular sequence,
+\item for some surjection $R \to S$ with $R$ a regular local ring
+essentially of finite presentation over $k$ the ideal
+$\Ker(R \to S)$ can be generated by
+$\dim(R) - \dim(S)$ elements,
+\item there exists a global complete intersection
+$A$ over $k$ and a prime $\mathfrak a$ of $A$ such
+that $S \cong A_{\mathfrak a}$, and
+\item there exists a local complete intersection
+$A$ over $k$ and a prime $\mathfrak a$ of $A$ such
+that $S \cong A_{\mathfrak a}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (2) implies (1) and (1) implies (3).
+It is also clear that (4) implies (5). Let us show that (3) implies
+(4). Thus we assume there exists a surjection
+$R \to S$ with $R$ a regular local ring
+essentially of finite presentation over $k$ such that the ideal
+$\Ker(R \to S)$ can be generated by $\dim(R) - \dim(S)$ elements.
+We may write $R = (k[x_1, \ldots, x_n]/J)_{\mathfrak q}$
+for some $J \subset k[x_1, \ldots, x_n]$ and
+some prime $\mathfrak q \subset k[x_1, \ldots, x_n]$ with
+$J \subset \mathfrak q$. Let $I \subset k[x_1, \ldots, x_n]$
+be the kernel of the map $k[x_1, \ldots, x_n] \to S$ so that
+$S \cong (k[x_1, \ldots, x_n]/I)_{\mathfrak q}$.
+By assumption $(I/J)_{\mathfrak q}$ is generated by
+$\dim(R) - \dim(S)$ elements. We conclude that
+$I_{\mathfrak q}$ can be generated by
+$\dim(k[x_1, \ldots, x_n]_{\mathfrak q}) - \dim(S)$ elements
+by Lemma \ref{lemma-ci-well-defined}.
+From Lemma \ref{lemma-lci} we see that for some
+$g \in k[x_1, \ldots, x_n]$, $g \not \in \mathfrak q$
+the algebra $(k[x_1, \ldots, x_n]/I)_g$ is a global
+complete intersection and $S$ is isomorphic to
+a local ring of it.
+
+\medskip\noindent
+To finish the proof of the lemma we have to show that (5) implies (2).
+Assume (5) and let $\pi : R \to S$ be a surjection with $R$ a regular local
+$k$-algebra essentially of finite type over $k$.
+By assumption we have $S = A_{\mathfrak a}$ for some local
+complete intersection $A$ over $k$.
+Choose a presentation $R = (k[y_1, \ldots, y_m]/J)_{\mathfrak q}$
+with $J \subset \mathfrak q \subset k[y_1, \ldots, y_m]$.
+We may and do assume that $J$ is the kernel of the map
+$k[y_1, \ldots, y_m] \to R$. Let $I \subset k[y_1, \ldots, y_m]$
+be the kernel of the map $k[y_1, \ldots, y_m] \to S = A_{\mathfrak a}$.
+Then $J \subset I$ and $(I/J)_{\mathfrak q}$ is the kernel of
+the surjection $\pi : R \to S$. So
+$S = (k[y_1, \ldots, y_m]/I)_{\mathfrak q}$.
+
+\medskip\noindent
+By Lemma \ref{lemma-isomorphic-local-rings} we see that there exist
+$g \in A$, $g \not \in \mathfrak a$ and
+$g' \in k[y_1, \ldots, y_m]$, $g' \not \in \mathfrak q$
+such that $A_g \cong (k[y_1, \ldots, y_m]/I)_{g'}$.
+After replacing $A$ by $A_g$ and $k[y_1, \ldots, y_m]$ by
+$k[y_1, \ldots, y_{m + 1}]$ we may assume that
+$A \cong k[y_1, \ldots, y_m]/I$. Consider the surjective
+maps of local rings
+$$
+k[y_1, \ldots, y_m]_{\mathfrak q} \to R \to S.
+$$
+We have to show that the kernel of $R \to S$ is generated by
+a regular sequence. By Lemma \ref{lemma-lci} we know that
+$k[y_1, \ldots, y_m]_{\mathfrak q} \to A_{\mathfrak a} = S$
+has this property (as $A$ is a local complete intersection over $k$).
+We win by Lemma \ref{lemma-ci-well-defined}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-at-prime}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+Let $\mathfrak q$ be a prime of $S$. The following are
+equivalent:
+\begin{enumerate}
+\item The local ring $S_{\mathfrak q}$ is a complete intersection
+ring (Definition \ref{definition-lci-local-ring}).
+\item There exists a $g \in S$, $g \not \in \mathfrak q$
+such that $S_g$ is a local complete intersection over $k$.
+\item There exists a $g \in S$, $g \not \in \mathfrak q$
+such that $S_g$ is a global complete intersection over $k$.
+\item For any presentation $S = k[x_1, \ldots, x_n]/I$ with
+$\mathfrak q' \subset k[x_1, \ldots, x_n]$ corresponding to $\mathfrak q$
+any of the equivalent conditions (1) -- (5) of Lemma \ref{lemma-lci} hold.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is a combination of Lemmas \ref{lemma-lci} and \ref{lemma-lci-local}
+and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-global}
+Let $k$ be a field. Let $S$ be a finite type $k$-algebra.
+The following are equivalent:
+\begin{enumerate}
+\item The ring $S$ is a local complete intersection over $k$.
+\item All local rings of $S$ are complete intersection rings over $k$.
+\item All localizations of $S$
+at maximal ideals are complete intersection rings over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-lci-at-prime},
+the fact that $\Spec(S)$ is quasi-compact and the definitions.
+\end{proof}
+
+\noindent
+The following lemma says that being a complete intersection is
+preserved under change of base field (in a strong sense).
+
+\begin{lemma}
+\label{lemma-lci-field-change-local}
+Let $K/k$ be a field extension.
+Let $S$ be a finite type algebra over $k$.
+Let $\mathfrak q_K$ be a prime of $S_K = K \otimes_k S$
+and let $\mathfrak q$ be the corresponding prime of $S$.
+Then $S_{\mathfrak q}$ is a complete intersection
+over $k$ (Definition \ref{definition-lci-local-ring})
+if and only if $(S_K)_{\mathfrak q_K}$ is a complete
+intersection over $K$.
+\end{lemma}
+
+\begin{proof}
+Choose a presentation $S = k[x_1, \ldots, x_n]/I$.
+This gives a presentation
+$S_K = K[x_1, \ldots, x_n]/I_K$ where $I_K = K \otimes_k I$.
+Let $\mathfrak q_K' \subset K[x_1, \ldots, x_n]$,
+resp.\ $\mathfrak q' \subset k[x_1, \ldots, x_n]$ be
+the corresponding prime. We will show that the equivalent conditions
+of Lemma \ref{lemma-lci}
+hold for the pair $(S = k[x_1, \ldots, x_n]/I, \mathfrak q)$
+if and only if they hold for the pair
+$(S_K = K[x_1, \ldots, x_n]/I_K, \mathfrak q_K)$.
+The lemma will follow from this (see Lemma \ref{lemma-lci-at-prime}).
+
+\medskip\noindent
+By Lemma \ref{lemma-dimension-at-a-point-preserved-field-extension} we have
+$\dim_{\mathfrak q} S = \dim_{\mathfrak q_K} S_K$.
+Hence the integer $c$ occurring in Lemma \ref{lemma-lci}
+is the same for the pair $(S = k[x_1, \ldots, x_n]/I, \mathfrak q)$
+as for the pair $(S_K = K[x_1, \ldots, x_n]/I_K, \mathfrak q_K)$.
+On the other hand we have
+\begin{eqnarray*}
+I \otimes_{k[x_1, \ldots, x_n]} \kappa(\mathfrak q')
+\otimes_{\kappa(\mathfrak q')} \kappa(\mathfrak q_K')
+& = &
+I \otimes_{k[x_1, \ldots, x_n]} \kappa(\mathfrak q_K') \\
+& = &
+I \otimes_{k[x_1, \ldots, x_n]} K[x_1, \ldots, x_n]
+\otimes_{K[x_1, \ldots, x_n]} \kappa(\mathfrak q_K') \\
+& = &
+(K \otimes_k I) \otimes_{K[x_1, \ldots, x_n]} \kappa(\mathfrak q_K') \\
+& = &
+I_K \otimes_{K[x_1, \ldots, x_n]} \kappa(\mathfrak q'_K).
+\end{eqnarray*}
+Therefore,
+$\dim_{\kappa(\mathfrak q')}
+I \otimes_{k[x_1, \ldots, x_n]} \kappa(\mathfrak q')
+=
+\dim_{\kappa(\mathfrak q'_K)}
+I_K \otimes_{K[x_1, \ldots, x_n]} \kappa(\mathfrak q_K')$.
+Thus it follows from
+Nakayama's Lemma \ref{lemma-NAK} that the minimal number
+of generators of $I_{\mathfrak q'}$ is the same as the minimal
+number of generators of $(I_K)_{\mathfrak q'_K}$.
+Thus the lemma follows from characterization (2) of Lemma \ref{lemma-lci}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-field-change}
+Let $k \to K$ be a field extension.
+Let $S$ be a finite type $k$-algebra.
+Then $S$ is a local complete intersection over $k$ if and
+only if $S \otimes_k K$ is a local complete intersection over $K$.
+\end{lemma}
+
+\begin{proof}
+This follows from a combination of Lemmas
+\ref{lemma-lci-global} and \ref{lemma-lci-field-change-local}.
+But we also give a different
+proof here (based on the same principles).
+
+\medskip\noindent
+Set $S' = S \otimes_k K$. Let $\alpha : k[x_1, \ldots, x_n] \to S$ be a
+presentation with kernel $I$. Let $\alpha' : K[x_1, \ldots, x_n] \to S'$
+be the induced presentation with kernel $I'$.
+
+\medskip\noindent
+Suppose that $S$ is a local complete intersection.
+Pick a prime $\mathfrak q \subset S'$. Denote
+$\mathfrak q'$ the corresponding prime of $K[x_1, \ldots, x_n]$,
+$\mathfrak p$ the corresponding prime of $S$, and
+$\mathfrak p'$ the corresponding prime of $k[x_1, \ldots, x_n]$.
+Consider the following diagram of Noetherian local rings
+$$
+\xymatrix{
+S'_{\mathfrak q} & K[x_1, \ldots, x_n]_{\mathfrak q'} \ar[l] \\
+S_{\mathfrak p}\ar[u] & k[x_1, \ldots, x_n]_{\mathfrak p'} \ar[u] \ar[l]
+}
+$$
+By Lemma \ref{lemma-lci} we know that $S_{\mathfrak p}$
+is cut out by some regular sequence $f_1, \ldots, f_c$ in
+$k[x_1, \ldots, x_n]_{\mathfrak p'}$. Since the right vertical
+arrow is flat we see that the images of $f_1, \ldots, f_c$
+form a regular sequence in $K[x_1, \ldots, x_n]_{\mathfrak q'}$.
+Because tensoring with $K$ over $k$ is an exact functor we have
+$S'_{\mathfrak q} = K[x_1, \ldots, x_n]_{\mathfrak q'}/(f_1, \ldots, f_c)$.
+Hence by Lemma \ref{lemma-lci} again we see that $S'$ is a local
+complete intersection in a neighbourhood of $\mathfrak q$. Since
+$\mathfrak q$ was arbitrary we see that $S'$ is a local complete
+intersection over $K$.
+
+\medskip\noindent
+Suppose that $S'$ is a local complete intersection.
+Pick a maximal ideal $\mathfrak m$ of $S$. Let $\mathfrak m'$
+denote the corresponding maximal ideal of $k[x_1, \ldots, x_n]$.
+Denote $\kappa = \kappa(\mathfrak m)$ the residue field.
+By Remark \ref{remark-fundamental-diagram} the primes of
+$S'$ lying over $\mathfrak m$ correspond to primes
+in $K \otimes_k \kappa$. By the Hilbert-Nullstellensatz
+Theorem \ref{theorem-nullstellensatz} we have $[\kappa : k] < \infty$.
+Hence $K \otimes_k \kappa$ is finite nonzero over $K$.
Hence $K \otimes_k \kappa$ has a finite, positive number of primes,
all of which are maximal, each with residue field
finite over $K$ (see Section \ref{section-artinian}).
Hence there are finitely many, and at least one, prime ideals
$\mathfrak n \subset S'$ lying over $\mathfrak m$,
each of which is maximal and has a residue field
which is finite over $K$. Pick one, say $\mathfrak n \subset S'$,
+and let $\mathfrak n' \subset K[x_1, \ldots, x_n]$ denote the corresponding
+prime ideal of $K[x_1, \ldots, x_n]$.
+Note that since $V(\mathfrak mS')$ is finite, we see that
+$\mathfrak n$ is an isolated closed point of it, and we
+deduce that $\mathfrak mS'_{\mathfrak n}$ is an ideal of definition
+of $S'_{\mathfrak n}$. This implies that
+$\dim(S_{\mathfrak m}) = \dim(S'_{\mathfrak n})$ for example by
+Lemma \ref{lemma-dimension-base-fibre-equals-total}.
+(This can also be seen using
+Lemma \ref{lemma-dimension-at-a-point-preserved-field-extension}.)
+Consider the corresponding diagram of Noetherian local rings
+$$
+\xymatrix{
+S'_{\mathfrak n} & K[x_1, \ldots, x_n]_{\mathfrak n'} \ar[l] \\
+S_{\mathfrak m}\ar[u] & k[x_1, \ldots, x_n]_{\mathfrak m'} \ar[u] \ar[l]
+}
+$$
+According to Lemma \ref{lemma-change-base-NL} we have
+$\NL(\alpha) \otimes_S S' = \NL(\alpha')$, in particular
+$I'/(I')^2 = I/I^2 \otimes_S S'$. Thus
+$(I/I^2)_{\mathfrak m} \otimes_{S_{\mathfrak m}} \kappa$
+and
+$(I'/(I')^2)_{\mathfrak n} \otimes_{S'_{\mathfrak n}} \kappa(\mathfrak n)$
+have the same dimension. Since $(I'/(I')^2)_{\mathfrak n}$
+is free of rank $n - \dim S'_{\mathfrak n}$ we deduce that
+$(I/I^2)_{\mathfrak m}$ can be generated by
+$n - \dim S'_{\mathfrak n} = n - \dim S_{\mathfrak m}$ elements.
+By Lemma \ref{lemma-lci} we see that $S$ is a local
+complete intersection in a neighbourhood of $\mathfrak m$.
+Since $\mathfrak m$ was any maximal ideal we conclude that
+$S$ is a local complete intersection.
+\end{proof}
+
+\noindent
We end with a lemma which we will later use to prove that,
given ring maps $T \to A \to B$ where $B$ is syntomic over $T$
and $B$ is syntomic over $A$, the ring $A$ is syntomic over $T$.
+
+\begin{lemma}
+\label{lemma-lci-permanence-initial}
+Let
+$$
+\xymatrix{
+B & S \ar[l] \\
+A \ar[u] & R \ar[l] \ar[u]
+}
+$$
+be a commutative square of local rings. Assume
+\begin{enumerate}
+\item $R$ and $\overline{S} = S/\mathfrak m_R S$ are regular local rings,
+\item $A = R/I$ and $B = S/J$ for some ideals $I$, $J$,
\item $J \subset S$ and
$\overline{J} = J/(\mathfrak m_R S \cap J) \subset \overline{S}$
are generated by regular sequences, and
+\item $A \to B$ and $R \to S$ are flat.
+\end{enumerate}
+Then $I$ is generated by a regular sequence.
+\end{lemma}
+
+\begin{proof}
+Set $\overline{B} = B/\mathfrak m_RB = B/\mathfrak m_AB$ so that
+$\overline{B} = \overline{S}/\overline{J}$.
+Let $f_1, \ldots, f_{\overline{c}} \in J$ be elements such that
+$\overline{f}_1, \ldots, \overline{f}_{\overline{c}} \in \overline{J}$
+form a regular sequence generating $\overline{J}$.
+Note that $\overline{c} = \dim(\overline{S}) - \dim(\overline{B})$,
+see Lemma \ref{lemma-ci-well-defined}.
+By Lemma \ref{lemma-grothendieck-regular-sequence}
+the ring $S/(f_1, \ldots, f_{\overline{c}})$ is flat
over $R$. Hence $S/((f_1, \ldots, f_{\overline{c}}) + IS)$ is flat over $A$.
The map $S/((f_1, \ldots, f_{\overline{c}}) + IS) \to B$ is therefore a
surjection of finite $S/IS$-modules flat over $A$ which
+is an isomorphism modulo $\mathfrak m_A$, and hence an
+isomorphism by Lemma \ref{lemma-mod-injective}. In other words,
+$J = (f_1, \ldots, f_{\overline{c}}) + IS$.
+
+\medskip\noindent
+By Lemma \ref{lemma-ci-well-defined} again the ideal $J$ is
+generated by a regular sequence of $c = \dim(S) - \dim(B)$ elements. Hence
+$J/\mathfrak m_SJ$ is a vector space of dimension $c$.
+By the description of $J$ above there exist
+$g_1, \ldots, g_{c - \overline{c}} \in I$ such that
+$J$ is generated by
+$f_1, \ldots, f_{\overline{c}}, g_1, \ldots, g_{c - \overline{c}}$
+(use Nakayama's Lemma \ref{lemma-NAK}). Consider the ring
+$A' = R/(g_1, \ldots, g_{c - \overline{c}})$ and the surjection
+$A' \to A$. We see from the above that
+$B = S/(f_1, \ldots, f_{\overline{c}}, g_1, \ldots, g_{c - \overline{c}})$
+is flat over $A'$ (as $S/(f_1, \ldots, f_{\overline{c}})$ is flat
+over $R$). Hence $A' \to B$ is injective (as it is faithfully flat,
+see Lemma \ref{lemma-local-flat-ff}).
+Since this map factors through $A$ we get $A' = A$.
+Note that $\dim(B) = \dim(A) + \dim(\overline{B})$, and
+$\dim(S) = \dim(R) + \dim(\overline{S})$, see
+Lemma \ref{lemma-dimension-base-fibre-equals-total}.
+Hence $c - \overline{c} = \dim(R) -\dim(A)$ by elementary algebra.
+Thus $I = (g_1, \ldots, g_{c - \overline{c}})$ is generated
+by a regular sequence according to Lemma \ref{lemma-ci-well-defined}.
+\end{proof}
+
+
+
+
+
+
+\section{Syntomic morphisms}
+\label{section-syntomic}
+
+\noindent
+Syntomic ring maps are flat finitely presented ring maps all of whose fibers
+are local complete intersections. We discuss general local complete
+intersection ring maps in
+More on Algebra, Section \ref{more-algebra-section-lci}.
+
+\begin{definition}
+\label{definition-lci}
+A ring map $R \to S$ is called {\it syntomic}, or we say $S$ is a
{\it flat local complete intersection over $R$},
if it is flat, of finite presentation, and if all of its fibre rings
+$S \otimes_R \kappa(\mathfrak p)$ are local complete intersections,
+see Definition \ref{definition-lci-field}.
+\end{definition}
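
\medskip\noindent
For instance (a standard example, included here only as an illustration):
if $f = x^d + r_1 x^{d - 1} + \ldots + r_d \in R[x]$ is a monic polynomial
of degree $d \geq 1$, then $R \to R[x]/(f)$ is syntomic. Namely,
$R[x]/(f)$ is free as an $R$-module with basis $1, x, \ldots, x^{d - 1}$
(division with remainder by the monic polynomial $f$), it is visibly
of finite presentation, and each fibre ring
$$
R[x]/(f) \otimes_R \kappa(\mathfrak p) = \kappa(\mathfrak p)[x]/(\overline{f})
$$
is a global complete intersection of dimension $0 = 1 - 1$ over
$\kappa(\mathfrak p)$.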
+
+\noindent
+Clearly, an algebra over a field is syntomic over the field
+if and only if it is a local complete intersection. Here is
+a pleasing feature of this definition.
+
+\begin{lemma}
+\label{lemma-syntomic-descends}
+\begin{slogan}
+Being syntomic is fpqc local on the base.
+\end{slogan}
+Let $R \to S$ be a ring map.
+Let $R \to R'$ be a faithfully flat ring map.
+Set $S' = R'\otimes_R S$.
+Then $R \to S$ is syntomic if and only if $R' \to S'$ is syntomic.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finite-presentation-descends} and
+Lemma \ref{lemma-flatness-descends} this holds for the property
+of being flat and for the property of being of finite presentation.
+The map $\Spec(R') \to \Spec(R)$ is surjective,
+see Lemma \ref{lemma-ff-rings}. Thus it suffices to show
+given primes $\mathfrak p' \subset R'$ lying over $\mathfrak p \subset R$
+that $S \otimes_R \kappa(\mathfrak p)$ is a local complete
+intersection if and only if $S' \otimes_{R'} \kappa(\mathfrak p')$
+is a local complete intersection. Note that
+$S' \otimes_{R'} \kappa(\mathfrak p') =
+S \otimes_R \kappa(\mathfrak p)
+\otimes_{\kappa(\mathfrak p)} \kappa(\mathfrak p')$.
+Thus Lemma \ref{lemma-lci-field-change} applies.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-syntomic}
+Any base change of a syntomic map is syntomic.
+\end{lemma}
+
+\begin{proof}
+This is true for being flat, for being of finite presentation,
+and for having local complete intersections as fibres by
+Lemmas \ref{lemma-flat-base-change}, \ref{lemma-compose-finite-type} and
+\ref{lemma-lci-field-change}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-syntomic}
+Let $R \to S$ be a ring map.
Suppose we have $g_1, \ldots, g_m \in S$ which generate the
+unit ideal such that each $R \to S_{g_i}$ is syntomic.
+Then $R \to S$ is syntomic.
+\end{lemma}
+
+\begin{proof}
+This is true for being flat and for being of finite presentation by
+Lemmas \ref{lemma-flat-localization} and \ref{lemma-cover-upstairs}.
+The property of having fibre rings which are local complete intersections
+is local on $S$ by its very definition, see
+Definition \ref{definition-lci-field}.
+\end{proof}
+
+\begin{definition}
+\label{definition-relative-global-complete-intersection}
+Let $R \to S$ be a ring map. We say that $R \to S$ is
a {\it relative global complete intersection} if there exists
a presentation $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ such that
every nonempty fibre of $\Spec(S) \to \Spec(R)$ has dimension $n - c$.
+We will say ``let $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ be a relative
+global complete intersection'' to indicate this situation.
+\end{definition}
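
\medskip\noindent
As a simple illustration of the definition (not used in what follows),
$S = R[x, y]/(xy)$ is a relative global complete intersection over any
ring $R$: here $n = 2$ and $c = 1$, and every fibre ring
$$
S \otimes_R \kappa(\mathfrak p) = \kappa(\mathfrak p)[x, y]/(xy)
$$
has as spectrum the union of the two coordinate axes, hence has
dimension $1 = n - c$.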
+
+\noindent
+The following lemma is occasionally useful to find
+global presentations.
+
+\begin{lemma}
+\label{lemma-huber}
+Let $S$ be a finitely presented $R$-algebra which has a presentation
+$S = R[x_1, \ldots, x_n]/I$ such that $I/I^2$ is free over $S$. Then
+$S$ has a presentation $S = R[y_1, \ldots, y_m]/(f_1, \ldots, f_c)$
+such that $(f_1, \ldots, f_c)/(f_1, \ldots, f_c)^2$ is free with
+basis given by the classes of $f_1, \ldots, f_c$.
+\end{lemma}
+
+\begin{proof}
+Note that $I$ is a finitely generated ideal by
+Lemma \ref{lemma-finite-presentation-independent}.
+Let $f_1, \ldots, f_c \in I$ be elements which map to a basis of $I/I^2$.
+By Nakayama's lemma (Lemma \ref{lemma-NAK})
+there exists a $g \in 1 + I$ such that
+$$
+g \cdot I \subset (f_1, \ldots, f_c)
+$$
+and $I_g \cong (f_1, \ldots, f_c)_g$. Hence we see that
+$$
+S \cong R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)[1/g]
+\cong R[x_1, \ldots, x_n, x_{n + 1}]/(f_1, \ldots, f_c, gx_{n + 1} - 1)
+$$
+as desired. It follows that $f_1, \ldots, f_c,gx_{n + 1} - 1$
+form a basis for
+$(f_1, \ldots, f_c, gx_{n + 1} - 1)/(f_1, \ldots, f_c, gx_{n + 1} - 1)^2$
+for example by applying Lemma \ref{lemma-principal-localization-NL}.
+\end{proof}
+
+
+
+\begin{example}
+\label{example-factor-polynomials}
Let $n, m \geq 1$ be integers. Consider the ring map
+\begin{eqnarray*}
+R = \mathbf{Z}[a_1, \ldots, a_{n + m}]
+& \longrightarrow &
+S = \mathbf{Z}[b_1, \ldots, b_n, c_1, \ldots, c_m] \\
+a_1 & \longmapsto & b_1 + c_1 \\
+a_2 & \longmapsto & b_2 + b_1 c_1 + c_2 \\
+\ldots & \ldots & \ldots \\
+a_{n + m} & \longmapsto & b_n c_m
+\end{eqnarray*}
+In other words, this is the unique ring map of polynomial rings
+as indicated such that the polynomial factorization
+$$
+x^{n + m} + a_1 x^{n + m - 1} + \ldots + a_{n + m}
+=
+(x^n + b_1 x^{n - 1} + \ldots + b_n)
+(x^m + c_1 x^{m - 1} + \ldots + c_m)
+$$
+holds. Note that $S$ is generated by $n + m$ elements over $R$
+(namely, $b_i, c_j$) and that there are $n + m$ equations
+(namely $a_k = a_k(b_i, c_j)$). In order to show that
+$S$ is a relative global complete intersection over $R$ it suffices
+to prove that all fibres have dimension $0$.
+
+\medskip\noindent
+To prove this, let $R \to k$ be a
+ring map into a field $k$. Say $a_i$ maps to $\alpha_i \in k$.
+Consider the fibre ring $S_k = k \otimes_R S$. Let $k \to K$ be
+a field extension. A $k$-algebra map of $S_k \to K$ is the same thing as
+finding $\beta_1, \ldots, \beta_n, \gamma_1, \ldots, \gamma_m \in K$
+such that
+$$
+x^{n + m} + \alpha_1 x^{n + m - 1} + \ldots + \alpha_{n + m}
+=
+(x^n + \beta_1 x^{n - 1} + \ldots + \beta_n)
+(x^m + \gamma_1 x^{m - 1} + \ldots + \gamma_m).
+$$
Hence we see there are at most finitely many choices of
such $(n + m)$-tuples in $K$. This proves that all fibres
+have finitely many closed points (use Hilbert's Nullstellensatz
+to see they all correspond to solutions in $\overline{k}$ for example)
+and hence that $R \to S$ is a relative global complete intersection.
+
+\medskip\noindent
+Another way to argue this is to show
+$\mathbf{Z}[a_1, \ldots, a_{n + m}] \to
+\mathbf{Z}[b_1, \ldots, b_n, c_1, \ldots, c_m]$ is actually
+also a {\it finite} ring map. Namely, by Lemma \ref{lemma-polynomials-divide}
+each of $b_i, c_j$ is integral over $R$, and hence $R \to S$ is
+finite by Lemma \ref{lemma-characterize-integral}.
+\end{example}
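
\medskip\noindent
In the simplest case $n = m = 1$ of
Example \ref{example-factor-polynomials} the ring map is
$\mathbf{Z}[a_1, a_2] \to \mathbf{Z}[b_1, c_1]$, $a_1 \mapsto b_1 + c_1$,
$a_2 \mapsto b_1 c_1$, corresponding to the factorization
$$
x^2 + a_1 x + a_2 = (x + b_1)(x + c_1).
$$
Given $\alpha_1, \alpha_2$ in a field $k$, a point of the fibre with
values in an extension $K/k$ is a pair $(\beta_1, \gamma_1)$ with
$\beta_1 + \gamma_1 = \alpha_1$ and $\beta_1 \gamma_1 = \alpha_2$, i.e.,
$-\beta_1, -\gamma_1$ form an ordered pair of roots of
$x^2 + \alpha_1 x + \alpha_2$ in $K$; there are at most two such pairs.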
+
+\begin{example}
+\label{example-roots-universal-polynomial}
+Consider the ring map
+\begin{eqnarray*}
+R = \mathbf{Z}[a_1, \ldots, a_n]
+& \longrightarrow &
+S = \mathbf{Z}[\alpha_1, \ldots, \alpha_n] \\
+a_1 & \longmapsto &
+\alpha_1 + \ldots + \alpha_n \\
+\ldots & \ldots & \ldots \\
+a_n & \longmapsto & \alpha_1 \ldots \alpha_n
+\end{eqnarray*}
+In other words this is the unique ring map of polynomial
+rings as indicated
+such that
+$$
+x^n + a_1 x^{n - 1} + \ldots + a_n
+=
+\prod\nolimits_{i = 1}^n (x + \alpha_i)
+$$
+holds in $\mathbf{Z}[\alpha_i, x]$. Another way to say this
+is that $a_i$ maps to the $i$th elementary symmetric function
+in $\alpha_1, \ldots, \alpha_n$. Note that $S$ is generated by
+$n$ elements over $R$ subject to $n$ equations. Hence to show
+that $S$ is a relative global complete intersection over
+$R$ we have to show that the fibre rings $S \otimes_R \kappa(\mathfrak p)$
+have dimension $0$. This follows as in
+Example \ref{example-factor-polynomials} because the ring map
+$\mathbf{Z}[a_1, \ldots, a_n] \to
+\mathbf{Z}[\alpha_1, \ldots, \alpha_n]$ is actually {\it finite}
+since each $\alpha_i \in S$
satisfies the monic equation $x^n - a_1 x^{n - 1} + \ldots + (-1)^n a_n = 0$
+over $R$.
+\end{example}
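
\medskip\noindent
As a sanity check, when $n = 2$ the final claim amounts to the computation
$$
\alpha_1^2 - a_1 \alpha_1 + a_2
= \alpha_1^2 - (\alpha_1 + \alpha_2) \alpha_1 + \alpha_1 \alpha_2 = 0
$$
in $S$, and similarly for $\alpha_2$.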
+
+\begin{lemma}
+\label{lemma-adjoin-roots}
+Suppose that $A$ is a ring, and
+$P(x) = x^n + b_1 x^{n-1} + \ldots + b_n \in A[x]$ is
+a monic polynomial over $A$. Then there exists a
+syntomic, finite locally free, faithfully flat ring extension
+$A \subset A'$ such that $P(x) = \prod_{i = 1, \ldots, n} (x - \beta_i)$
+for certain $\beta_i \in A'$.
+\end{lemma}
+
+\begin{proof}
+Take $A' = A \otimes_R S$, where $R$ and $S$ are as in
+Example \ref{example-roots-universal-polynomial},
+where $R \to A$ maps $a_i$ to $b_i$, and let
+$\beta_i = -1 \otimes \alpha_i$.
+\end{proof}
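
\medskip\noindent
A concrete instance of the construction in the proof (offered only as an
illustration): take $A = \mathbf{R}$ and $P(x) = x^2 + 1$, so that
$b_1 = 0$ and $b_2 = 1$. Then
$$
A' = \mathbf{R}[\alpha_1, \alpha_2]/(\alpha_1 + \alpha_2, \alpha_1 \alpha_2 - 1)
\cong \mathbf{R}[\alpha_1]/(\alpha_1^2 + 1)
\cong \mathbf{C},
$$
which is indeed finite free of rank $2$ over $\mathbf{R}$, and with
$\beta_1 = -\alpha_1$, $\beta_2 = -\alpha_2$ we have
$P(x) = (x - \beta_1)(x - \beta_2)$.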
+
+\begin{lemma}
+\label{lemma-base-change-relative-global-complete-intersection}
+Let $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ be a
+relative global complete intersection
(Definition \ref{definition-relative-global-complete-intersection}).
+\begin{enumerate}
+\item For any $R \to R'$ the base change
+$R' \otimes_R S = R'[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ is a relative
+global complete intersection.
+\item For any $g \in S$ which is the image of $h \in R[x_1, \ldots, x_n]$
+the ring
+$S_g = R[x_1, \ldots, x_n, x_{n + 1}]/(f_1, \ldots, f_c, hx_{n + 1} - 1)$
+is a relative global complete intersection.
\item If $R \to S$ factors as $R \to R_f \to S$ for some $f \in R$,
then the ring $S = R_f[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
is a relative global complete intersection over $R_f$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-dimension-preserved-field-extension}
+the fibres of a base change have the same dimension as the
+fibres of the original map. Moreover
+$R' \otimes_R R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)
+= R'[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$. Thus (1) follows.
+The proof of (2) is that
+the localization at one element can be described as
+$S_g \cong S[x_{n + 1}]/(gx_{n + 1} - 1)$.
+Assertion (3) follows from (1) since under the assumptions of (3) we have
+$R_f \otimes_R S \cong S$.
+\end{proof}
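
\medskip\noindent
The localization trick in the proof of (2) can be seen in a minimal
example: for $R = \mathbf{Z}$, $S = \mathbf{Z}[x_1]$ (so $n = 1$, $c = 0$)
and $g = h = x_1$ we get
$$
S_{x_1} = \mathbf{Z}[x_1, x_2]/(x_1 x_2 - 1),
$$
a presentation with $n + 1 = 2$ variables and $c + 1 = 1$ equations whose
nonempty fibres $\kappa(\mathfrak p)[x_1, x_1^{-1}]$ have dimension
$1 = (n + 1) - (c + 1) = n - c$, as required.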
+
+\begin{lemma}
+\label{lemma-localize-relative-complete-intersection}
+Let $R$ be a ring. Let $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$.
+We will find $h \in R[x_1, \ldots, x_n]$ which maps to $g \in S$
+such that
+$$
+S_g = R[x_1, \ldots, x_n, x_{n + 1}]/(f_1, \ldots, f_c, hx_{n + 1} - 1)
+$$
+is a relative global complete intersection with a presentation as in
+Definition \ref{definition-relative-global-complete-intersection}
+in each of the following cases:
+\begin{enumerate}
+\item Let $I \subset R$ be an ideal. If the fibres of
+$\Spec(S/IS) \to \Spec(R/I)$ have dimension $n - c$, then we can
+find $(h, g)$ as above such that $g$ maps to $1 \in S/IS$.
+\item Let $\mathfrak p \subset R$ be a prime. If
+$\dim(S \otimes_R \kappa(\mathfrak p)) = n - c$, then we can
+find $(h, g)$ as above such that $g$ maps to a unit of
+$S \otimes_R \kappa(\mathfrak p)$.
+\item Let $\mathfrak q \subset S$ be a prime lying over
+$\mathfrak p \subset R$. If $\dim_{\mathfrak q}(S/R) = n - c$, then we can
+find $(h, g)$ as above such that $g \not \in \mathfrak q$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Ad (1). By Lemma \ref{lemma-dimension-fibres-bounded-open-upstairs}
+there exists an open subset $W \subset \Spec(S)$ containing $V(IS)$
+such that all fibres of $W \to \Spec(R)$ have dimension $\leq n - c$.
+Say $W = \Spec(S) \setminus V(J)$. Then $V(J) \cap V(IS) = \emptyset$
+hence we can find a $g \in J$ which maps to $1 \in S/IS$.
+Let $h \in R[x_1, \ldots, x_n]$ be any preimage of $g$.
+
+\medskip\noindent
+Ad (2). By Lemma \ref{lemma-dimension-fibres-bounded-open-upstairs}
+there exists an open subset $W \subset \Spec(S)$ containing
+$\Spec(S \otimes_R \kappa(\mathfrak p))$
+such that all fibres of $W \to \Spec(R)$ have dimension $\leq n - c$.
+Say $W = \Spec(S) \setminus V(J)$. Then
+$V(J \cdot S \otimes_R \kappa(\mathfrak p)) = \emptyset$.
+Hence we can find a $g \in J$ which maps to a unit in
+$S \otimes_R \kappa(\mathfrak p)$ (details omitted).
+Let $h \in R[x_1, \ldots, x_n]$ be any preimage of $g$.
+
+\medskip\noindent
+Ad (3). By Lemma \ref{lemma-dimension-fibres-bounded-open-upstairs}
+there exists a $g \in S$, $g \not \in \mathfrak q$
+such that all nonempty fibres of $R \to S_g$
+have dimension $\leq n - c$. Let $h \in R[x_1, \ldots, x_n]$
+be any element that maps to $g$.
+\end{proof}
+
+\noindent
+The following lemma says we can do absolute Noetherian
+approximation for relative global complete intersections.
+
+\begin{lemma}
+\label{lemma-relative-global-complete-intersection-Noetherian}
+Let $R$ be a ring. Let $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+be a relative global complete intersection
+(Definition \ref{definition-relative-global-complete-intersection}).
There exists a finite type $\mathbf{Z}$-subalgebra $R_0 \subset R$
+such that $f_i \in R_0[x_1, \ldots, x_n]$ and such that
+$$
+S_0 = R_0[x_1, \ldots, x_n]/(f_1, \ldots, f_c)
+$$
+is a relative global complete intersection.
+\end{lemma}
+
+\begin{proof}
+Let $R_0 \subset R$ be the $\mathbf{Z}$-algebra of $R$ generated by all the
+coefficients of the polynomials $f_1, \ldots, f_c$. Let
+$S_0 = R_0[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$.
+Clearly, $S = R \otimes_{R_0} S_0$.
+Pick a prime $\mathfrak q \subset S$ and denote
+$\mathfrak p \subset R$, $\mathfrak q_0 \subset S_0$, and
+$\mathfrak p_0 \subset R_0$ the primes it lies over.
+Because $\dim (S \otimes_R \kappa(\mathfrak p) ) = n - c$
+we also have $\dim (S_0 \otimes_{R_0} \kappa(\mathfrak p_0)) = n - c$,
+see Lemma \ref{lemma-dimension-preserved-field-extension}.
+By Lemma \ref{lemma-dimension-fibres-bounded-open-upstairs}
+there exists a $g \in S_0$, $g \not \in \mathfrak q_0$
+such that all nonempty fibres of $R_0 \to (S_0)_g$
+have dimension $\leq n - c$. As $\mathfrak q$ was arbitrary and
+$\Spec(S)$ quasi-compact, we can find finitely many
+$g_1, \ldots, g_m \in S_0$ such that (a) for $j = 1, \ldots, m$
+the nonempty fibres of
+$R_0 \to (S_0)_{g_j}$ have dimension $\leq n - c$ and (b) the image of
+$\Spec(S) \to \Spec(S_0)$ is contained in $D(g_1) \cup \ldots \cup D(g_m)$.
+In other words, the images of $g_1, \ldots, g_m$ in $S = R \otimes_{R_0} S_0$
+generate the unit ideal. After increasing $R_0$ we may assume
+that $g_1, \ldots, g_m$ generate the unit ideal in $S_0$. By (a)
+the nonempty fibres of $R_0 \to S_0$ all have dimension $\leq n - c$
+and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-global-complete-intersection-conormal}
+Let $R$ be a ring. Let $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+be a relative global complete intersection (Definition
+\ref{definition-relative-global-complete-intersection}). For every prime
+$\mathfrak q$ of $S$, let $\mathfrak q'$ denote the corresponding
+prime of $R[x_1, \ldots, x_n]$. Then
+\begin{enumerate}
+\item $f_1, \ldots, f_c$ is a regular sequence in the local ring
+$R[x_1, \ldots, x_n]_{\mathfrak q'}$,
+\item each of the rings
+$R[x_1, \ldots, x_n]_{\mathfrak q'}/(f_1, \ldots, f_i)$ is flat over $R$, and
+\item the $S$-module $(f_1, \ldots, f_c)/(f_1, \ldots, f_c)^2$
+is free with basis given by the elements $f_i \bmod (f_1, \ldots, f_c)^2$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-regular-quasi-regular} part (3) follows
+from part (1).
+
+\medskip\noindent
+Assume $R$ is Noetherian. Let $\mathfrak p = R \cap \mathfrak q'$.
+By Lemma \ref{lemma-lci} for example we see
+that $f_1, \ldots, f_c$ form a regular sequence in the local ring
+$R[x_1, \ldots, x_n]_{\mathfrak q'} \otimes_R \kappa(\mathfrak p)$.
+Moreover, the local ring $R[x_1, \ldots, x_n]_{\mathfrak q'}$
+is flat over $R_{\mathfrak p}$. Since $R$, and hence
+$R[x_1, \ldots, x_n]_{\mathfrak q'}$ is Noetherian we see from
+Lemma \ref{lemma-grothendieck-regular-sequence} that (1) and (2) hold.
+
+\medskip\noindent
+Let $R$ be general. Write $R = \colim_{\lambda \in \Lambda} R_\lambda$
+as the filtered colimit of finite type $\mathbf{Z}$-subalgebras (compare with
+Section \ref{section-colimits-flat}). We may assume that
+$f_1, \ldots, f_c \in R_\lambda[x_1, \ldots, x_n]$ for all $\lambda$.
+Let $R_0 \subset R$ be as in
+Lemma \ref{lemma-relative-global-complete-intersection-Noetherian}.
+Then we may assume $R_0 \subset R_\lambda$ for all $\lambda$.
+It follows that $S_\lambda = R_\lambda[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+is a relative global complete intersection (as base change of
+$S_0$ via $R_0 \to R_\lambda$, see
+Lemma \ref{lemma-base-change-relative-global-complete-intersection}).
+Denote $\mathfrak p_\lambda$, $\mathfrak q_\lambda$, $\mathfrak q'_\lambda$
+the prime of $R_\lambda$, $S_\lambda$, $R_\lambda[x_1, \ldots, x_n]$
+induced by $\mathfrak p$, $\mathfrak q$, $\mathfrak q'$.
+With this notation, we have (1) and (2) for each $\lambda$. Since
+$$
+R[x_1, \ldots, x_n]_{\mathfrak q'}/(f_1, \ldots, f_i)
+=
+\colim R_\lambda[x_1, \ldots, x_n]_{\mathfrak q_\lambda'}/(f_1, \ldots, f_i)
+$$
+we deduce flatness in (2) over $R$ from
+Lemma \ref{lemma-colimit-rings-flat}.
+Since we have
+\begin{align*}
+R[x_1, \ldots, x_n]_{\mathfrak q'}/(f_1, \ldots, f_i)
+\xrightarrow{f_{i + 1}}
+R[x_1, \ldots, x_n]_{\mathfrak q'}/(f_1, \ldots, f_i) \\
+=
+\colim
+\left(
+R_\lambda[x_1, \ldots, x_n]_{\mathfrak q_\lambda'}/(f_1, \ldots, f_i)
+\xrightarrow{f_{i + 1}}
+R_\lambda[x_1, \ldots, x_n]_{\mathfrak q_\lambda'}/(f_1, \ldots, f_i)
+\right)
+\end{align*}
+and since filtered colimits are exact
+(Lemma \ref{lemma-directed-colimit-exact})
+we conclude that we have (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-global-complete-intersection}
A relative global complete intersection is syntomic; in particular, it is flat.
+\end{lemma}
+
+\begin{proof}
+Let $R \to S$ be a relative global complete intersection.
+The fibres are global complete intersections, and
+$S$ is of finite presentation over $R$.
+Thus the only thing to prove is that $R \to S$ is flat.
+This is true by (2) of
+Lemma \ref{lemma-relative-global-complete-intersection-conormal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-syntomic}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q \subset S$ be a prime lying over
+the prime $\mathfrak p$ of $R$.
+The following are equivalent:
+\begin{enumerate}
+\item There exists an element $g \in S$, $g \not \in \mathfrak q$ such that
+$R \to S_g$ is syntomic.
+\item There exists an element $g \in S$, $g \not \in \mathfrak q$
+such that $S_g$ is a relative global complete intersection over $R$.
+\item There exists an element $g \in S$, $g \not \in \mathfrak q$,
+such that $R \to S_g$ is of finite presentation,
+the local ring map $R_{\mathfrak p} \to S_{\mathfrak q}$ is flat, and
+the local ring $S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$ is
+a complete intersection ring over $\kappa(\mathfrak p)$ (see
+Definition \ref{definition-lci-local-ring}).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (3) is Lemma \ref{lemma-lci-at-prime}.
+The implication (2) $\Rightarrow$ (1) is
+Lemma \ref{lemma-relative-global-complete-intersection}.
+It remains to show that (3) implies (2).
+
+\medskip\noindent
+Assume (3). After replacing $S$ by $S_g$ for some $g \in S$,
+$g\not\in \mathfrak q$ we may assume $S$ is finitely presented over $R$.
+Choose a presentation $S = R[x_1, \ldots, x_n]/I$. Let
+$\mathfrak q' \subset R[x_1, \ldots, x_n]$ be the prime corresponding
+to $\mathfrak q$. Write $\kappa(\mathfrak p) = k$.
+Note that $S \otimes_R k = k[x_1, \ldots, x_n]/\overline{I}$ where
+$\overline{I} \subset k[x_1, \ldots, x_n]$ is the ideal generated
+by the image of $I$. Let $\overline{\mathfrak q}' \subset k[x_1, \ldots, x_n]$
+be the prime ideal generated by the image of $\mathfrak q'$.
+By Lemma \ref{lemma-lci-at-prime} the equivalent conditions of
+Lemma \ref{lemma-lci} hold for $\overline{I}$ and $\overline{\mathfrak q}'$.
+Say the dimension of
+$\overline{I}_{\overline{\mathfrak q}'}/
+\overline{\mathfrak q}'\overline{I}_{\overline{\mathfrak q}'}$
+over $\kappa(\overline{\mathfrak q}')$ is $c$.
+Pick $f_1, \ldots, f_c \in I$ mapping to a basis of this vector space.
+The images $\overline{f}_j \in \overline{I}$ generate
+$\overline{I}_{\overline{\mathfrak q}'}$ (by Lemma \ref{lemma-lci}).
+Set $S' = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$. Let $J$ be the
+kernel of the surjection $S' \to S$. Since $S$ is of finite presentation
+$J$ is a finitely generated ideal
+(Lemma \ref{lemma-compose-finite-type}). Consider the short exact sequence
+$$
+0 \to J \to S' \to S \to 0
+$$
+As $S_\mathfrak q$ is flat over $R$ we see that
+$J_{\mathfrak q'} \otimes_R k \to S'_{\mathfrak q'} \otimes_R k$
+is injective (Lemma \ref{lemma-flat-tor-zero}).
+However, by construction $S'_{\mathfrak q'} \otimes_R k$
+maps isomorphically to $S_\mathfrak q \otimes_R k$. Hence we
+conclude that $J_{\mathfrak q'} \otimes_R k =
+J_{\mathfrak q'}/\mathfrak pJ_{\mathfrak q'} = 0$. By Nakayama's
+lemma (Lemma \ref{lemma-NAK}) we conclude that there exists a
+$g \in R[x_1, \ldots, x_n]$, $g \not \in \mathfrak q'$ such that
+$J_g = 0$. In other words $S'_g \cong S_g$. After further localizing
+we see that $S'$ (and hence $S$) becomes a relative global complete
+intersection by
+Lemma \ref{lemma-localize-relative-complete-intersection}
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-syntomic-presentation-ideal-mod-squares}
+Let $R$ be a ring. Let $S = R[x_1, \ldots, x_n]/I$ for some
+finitely generated ideal $I$. If $g \in S$ is such that
+$S_g$ is syntomic over $R$, then $(I/I^2)_g$ is a finite projective
+$S_g$-module.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-syntomic} there exist finitely many elements
+$g_1, \ldots, g_m \in S$ which generate the unit ideal in $S_g$
+such that each $S_{gg_j}$ is a relative global complete intersection
+over $R$. Since it suffices to prove that $(I/I^2)_{gg_j}$ is
+finite projective, see
+Lemma \ref{lemma-finite-projective},
+we may assume that $S_g$ is a relative global complete intersection.
+In this case the result follows from
+Lemmas \ref{lemma-conormal-module-localize} and
+\ref{lemma-relative-global-complete-intersection-conormal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-syntomic}
+Let $R \to S$, $S \to S'$ be ring maps.
+\begin{enumerate}
+\item If $R \to S$ and $S \to S'$ are syntomic, then $R \to S'$
+is syntomic.
+\item If $R \to S$ and $S \to S'$ are relative global complete intersections,
+then $R \to S'$ is a relative global complete intersection.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (2). Say $R \to S$ and $S \to S'$ are relative global complete
+intersections and we have presentations
+$S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ and
+$S' = S[y_1, \ldots, y_m]/(h_1, \ldots, h_d)$ as in
+Definition \ref{definition-relative-global-complete-intersection}.
+Then
+$$
+S' \cong
+R[x_1, \ldots, x_n, y_1, \ldots, y_m]/(f_1, \ldots, f_c, h'_1, \ldots, h'_d)
+$$
+for some lifts $h_j' \in R[x_1, \ldots, x_n, y_1, \ldots, y_m]$ of the $h_j$.
+Hence it suffices to bound the dimensions of the fibre rings.
+Thus we may assume $R = k$ is a field.
+In this case we see that we have a ring, namely $S$, which is of finite
+type over $k$ and equidimensional of dimension $n - c$, and a
+finite type ring map $S \to S'$ all of whose nonempty fibre
+rings are equidimensional of dimension $m - d$. Then, by
+Lemma \ref{lemma-dimension-base-fibre-total} for example applied
+to localizations at maximal ideals of $S'$, we see that
+$\dim(S') \leq n - c + m - d$ as desired.
+
+\medskip\noindent
+We will reduce part (1) to part (2). Assume $R \to S$ and $S \to S'$
are syntomic. Let $\mathfrak q' \subset S'$ be a prime ideal lying
over $\mathfrak q \subset S$. By Lemma \ref{lemma-syntomic}
+there exists a $g' \in S'$, $g' \not \in \mathfrak q'$ such that
+$S \to S'_{g'}$ is a relative global complete intersection.
+Similarly, we find $g \in S$, $g \not \in \mathfrak q$ such that
+$R \to S_g$ is a relative global complete intersection.
+By Lemma \ref{lemma-base-change-relative-global-complete-intersection}
+the ring map $S_g \to S'_{gg'}$ is a relative global complete intersection.
+By part (2) we see that $R \to S'_{gg'}$ is a relative global
+complete intersection and $gg' \not \in \mathfrak q'$.
+Since $\mathfrak q'$ was arbitrary
+combining Lemmas \ref{lemma-syntomic} and \ref{lemma-local-syntomic}
+we see that $R \to S'$ is syntomic (this also uses that the spectrum
+of $S'$ is quasi-compact, see Lemma \ref{lemma-quasi-compact}).
+\end{proof}
+
+\noindent
+The following lemma will be improved later, see
+Smoothing Ring Maps, Proposition \ref{smoothing-proposition-lift-smooth}.
+
+\begin{lemma}
+\label{lemma-lift-syntomic}
+Let $R$ be a ring and let $I \subset R$ be an ideal.
+Let $R/I \to \overline{S}$ be a syntomic map.
+Then there exist elements $\overline{g}_i \in \overline{S}$
+which generate the unit ideal of $\overline{S}$
+such that each $\overline{S}_{\overline{g}_i} \cong S_i/IS_i$
+for some relative global complete intersection $S_i$
+over $R$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-syntomic} we find a collection of elements
+$\overline{g}_i \in \overline{S}$
+which generate the unit ideal of $\overline{S}$
+such that each $\overline{S}_{\overline{g}_i}$ is a relative
+global complete intersection over $R/I$.
+Hence we may assume that $\overline{S}$ is a
+relative global complete intersection.
+Write
+$\overline{S} =
+(R/I)[x_1, \ldots, x_n]/(\overline{f}_1, \ldots, \overline{f}_c)$
+as in Definition \ref{definition-relative-global-complete-intersection}.
+Choose $f_1, \ldots, f_c \in R[x_1, \ldots, x_n]$
+lifting $\overline{f}_1, \ldots, \overline{f}_c$.
+Set $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$.
+Note that $S/IS \cong \overline{S}$.
+By Lemma \ref{lemma-localize-relative-complete-intersection}
+we can find $g \in S$ mapping to $1$ in $\overline{S}$ such
+that $S_g$ is a relative global complete intersection over $R$.
+Since $\overline{S} \cong S_g/IS_g$ this finishes the proof.
+\end{proof}
+
+
+\section{Smooth ring maps}
+\label{section-smooth}
+
+\noindent
+Let us motivate the definition of a smooth ring map by an example.
+Suppose $R$ is a ring and $S = R[x, y]/(f)$ for some nonzero $f \in R[x, y]$.
+In this case there is an exact sequence
+$$
+S \to
+S\text{d}x \oplus S\text{d}y \to
+\Omega_{S/R} \to 0
+$$
+where the first arrow maps $1$ to
+$\frac{\partial f}{\partial x} \text{d}x +
+\frac{\partial f}{\partial y} \text{d}y$, see
+Section \ref{section-netherlander}.
+We conclude that $\Omega_{S/R}$ is locally free of rank $1$ if
+the partial derivatives of $f$ generate the unit ideal in $S$.
+In this case $S$ is smooth of relative dimension $1$ over $R$.
+But it can happen that $\Omega_{S/R}$ is locally free of rank $2$,
+namely if both partial derivatives of $f$ are zero. For example, this
+happens if $p = 0$ in $R$ for a prime number $p$ and $f = x^p + y^p$.
+Here $R \to S$ is a relative global complete intersection
+of relative dimension $1$ which is not smooth.
+Hence, in order to check that a ring map
+is smooth it is not sufficient to check whether the module of differentials
+is free. The correct condition is the following.
+
+\begin{definition}
+\label{definition-smooth}
+A ring map $R \to S$ is {\it smooth} if it is of finite presentation
+and the naive cotangent complex $\NL_{S/R}$ is quasi-isomorphic to a
+finite projective $S$-module placed in degree $0$.
+\end{definition}
+
+\noindent
+In particular, if $R \to S$ is smooth then the module $\Omega_{S/R}$
+is a finite projective $S$-module. Moreover, by
+Lemma \ref{lemma-smooth-independent-presentation} the naive cotangent
+complex of any presentation has the same structure. Thus, for a surjection
+$\alpha : R[x_1, \ldots, x_n] \to S$ with kernel $I$ the map
+$$
+I/I^2
+\longrightarrow
+\Omega_{R[x_1, \ldots, x_n]/R} \otimes_{R[x_1, \ldots, x_n]} S
+$$
+is a split injection. In other words
+$\bigoplus_{i = 1}^n S \text{d}x_i \cong I/I^2 \oplus \Omega_{S/R}$
+as $S$-modules. This implies that $I/I^2$ is a finite projective
+$S$-module too!
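+
+\medskip\noindent
+For example, the definition is easy to verify directly when
+$S = R[x]/(x^2 - a)$ with $2$ and $a$ invertible in $R$. The naive
+cotangent complex of this presentation is the two term complex
+$$
+(x^2 - a)/(x^2 - a)^2 \longrightarrow S\text{d}x
+$$
+sending the class of $x^2 - a$ to $2x\text{d}x$. Since
+$(2x)^2 = 4a$ is invertible in $S$, so is $2x$, and the map above is an
+isomorphism. Hence $\NL_{S/R}$ is quasi-isomorphic to the zero module
+placed in degree $0$ and $R \to S$ is smooth with $\Omega_{S/R} = 0$.
+If instead $2 = 0$ in $R$, then the map is zero and $R \to S$ is
+not smooth.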
+
+\begin{lemma}
+\label{lemma-smooth-independent-presentation}
+Let $R \to S$ be a ring map of finite presentation.
+If for some presentation $\alpha$ of $S$ over $R$ the
+naive cotangent complex $\NL(\alpha)$ is quasi-isomorphic
+to a finite projective $S$-module placed in degree $0$, then
+this holds for any presentation.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-NL-homotopy}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-smooth}
+Let $R \to S$ be a smooth ring map.
+Any localization $S_g$ is smooth over $R$.
+If $f \in R$ maps to an invertible element of $S$,
+then $R_f \to S$ is smooth.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-localize-NL} the naive cotangent
+complex for $S_g$ over $R$ is the base change of the naive cotangent
+complex of $S$ over $R$. The assumption is that the naive cotangent
+complex of $S/R$ is $\Omega_{S/R}$ and that this is a finite projective
+$S$-module. Hence so is its base change. Thus $S_g$ is smooth over $R$.
+
+\medskip\noindent
+The second assertion follows in the same way from
+Lemma \ref{lemma-NL-localize-bottom}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-smooth}
+\begin{slogan}
+Smoothness is preserved under base change
+\end{slogan}
+Let $R \to S$ be a smooth ring map.
+Let $R \to R'$ be any ring map.
+Then the base change $R' \to S' = R' \otimes_R S$ is smooth.
+\end{lemma}
+
+\begin{proof}
+Let $\alpha : R[x_1, \ldots, x_n] \to S$ be a presentation
+with kernel $I$. Let $\alpha' : R'[x_1, \ldots, x_n] \to R' \otimes_R S$
+be the induced presentation. Let $I' = \Ker(\alpha')$.
+Since $0 \to I \to R[x_1, \ldots, x_n] \to S \to 0$
+is exact, the sequence
+$R' \otimes_R I \to R'[x_1, \ldots, x_n] \to R' \otimes_R S \to 0$
+is exact. Thus $R' \otimes_R I \to I'$ is surjective.
+By Definition \ref{definition-smooth} there is a short exact sequence
+$$
+0 \to I/I^2 \to
+\Omega_{R[x_1, \ldots, x_n]/R} \otimes_{R[x_1, \ldots, x_n]} S \to
+\Omega_{S/R} \to
+0
+$$
+and the $S$-module $\Omega_{S/R}$ is finite projective.
+In particular $I/I^2$ is a direct summand of
+$\Omega_{R[x_1, \ldots, x_n]/R} \otimes_{R[x_1, \ldots, x_n]} S$.
+Consider the commutative diagram
+$$
+\xymatrix{
+R' \otimes_R (I/I^2) \ar[r] \ar[d] &
+R' \otimes_R (\Omega_{R[x_1, \ldots, x_n]/R} \otimes_{R[x_1, \ldots, x_n]} S)
+\ar[d] \\
+I'/(I')^2 \ar[r] &
+\Omega_{R'[x_1, \ldots, x_n]/R'}
+\otimes_{R'[x_1, \ldots, x_n]} (R' \otimes_R S)
+}
+$$
+The top horizontal map is a split injection, being the base change of a
+split injection, and the right vertical map is an isomorphism. Hence the
+left vertical map is injective; it is surjective by what was said above,
+so it is an isomorphism. Thus we conclude that $\NL(\alpha')$ is quasi-isomorphic
+to $\Omega_{S'/R'} \cong S' \otimes_S \Omega_{S/R}$.
+And this is finite projective since it is the base change
+of a finite projective module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-over-field}
+Let $k$ be a field.
+Let $S$ be a smooth $k$-algebra.
+Then $S$ is a local complete intersection.
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-base-change-smooth} and
+\ref{lemma-lci-field-change} it suffices to prove this when
+$k$ is algebraically closed. Choose a presentation
+$\alpha : k[x_1, \ldots, x_n] \to S$ with kernel $I$. Let $\mathfrak m$
+be a maximal ideal of $S$, and let $\mathfrak m' \supset I$ be the
+corresponding maximal ideal of $k[x_1, \ldots, x_n]$.
+We will show that condition (5) of
+Lemma \ref{lemma-lci}
+holds (with $\mathfrak m$ instead of $\mathfrak q$).
+We may write $\mathfrak m' = (x_1 - a_1, \ldots, x_n - a_n)$
+for some $a_i \in k$, because $k$ is algebraically closed, see
+Theorem \ref{theorem-nullstellensatz}.
+By our assumption that $k \to S$ is smooth the $S$-module map
+$\text{d} : I/I^2 \to \bigoplus_{i = 1}^n S \text{d}x_i$
+is a split injection. Hence the corresponding map
+$I/\mathfrak m' I \to \bigoplus \kappa(\mathfrak m') \text{d}x_i$
+is injective. Say $\dim_{\kappa(\mathfrak m')}(I/\mathfrak m' I) = c$
+and pick $f_1, \ldots, f_c \in I$ which map to a $\kappa(\mathfrak m')$-basis
+of $I/\mathfrak m' I$. By
+Nakayama's Lemma \ref{lemma-NAK}
+we see that $f_1, \ldots, f_c$ generate $I_{\mathfrak m'}$ over
+$k[x_1, \ldots, x_n]_{\mathfrak m'}$. Consider the commutative diagram
+$$
+\xymatrix{
+I \ar[r] \ar[d] & I/I^2 \ar[rr] \ar[d] & &
+I/\mathfrak m'I \ar[d] \\
+\Omega_{k[x_1, \ldots, x_n]/k} \ar[r] &
+\bigoplus S\text{d}x_i \ar[rr]^{\text{d}x_i \mapsto x_i - a_i} & &
+\mathfrak m'/(\mathfrak m')^2
+}
+$$
+(proof of commutativity omitted). The middle vertical map is the one defining
+the naive cotangent complex of $\alpha$. Note that the right lower
+horizontal arrow induces an isomorphism
+$\bigoplus \kappa(\mathfrak m') \text{d}x_i \to \mathfrak m'/(\mathfrak m')^2$.
+Hence our generators $f_1, \ldots, f_c$ of $I_{\mathfrak m'}$ map to a
+collection of elements in $k[x_1, \ldots, x_n]_{\mathfrak m'}$ whose
+classes in $\mathfrak m'/(\mathfrak m')^2$ are linearly independent
+over $\kappa(\mathfrak m')$. Therefore they form a regular sequence
+in the ring $k[x_1, \ldots, x_n]_{\mathfrak m'}$ by
+Lemma \ref{lemma-regular-ring-CM}.
+This verifies condition (5) of
+Lemma \ref{lemma-lci}
+hence $S_g$ is a global complete intersection over $k$ for some
+$g \in S$, $g \not \in \mathfrak m$. As this works for any maximal
+ideal of $S$ we conclude that $S$ is a local complete intersection over $k$.
+\end{proof}
+
+\begin{definition}
+\label{definition-standard-smooth}
+Let $R$ be a ring. Given integers $n \geq c \geq 0$ and
+$f_1, \ldots, f_c \in R[x_1, \ldots, x_n]$ we say
+$$
+S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)
+$$
+is a {\it standard smooth algebra over $R$} if the polynomial
+$$
+g =
+\det
+\left(
+\begin{matrix}
+\partial f_1/\partial x_1 &
+\partial f_2/\partial x_1 &
+\ldots &
+\partial f_c/\partial x_1 \\
+\partial f_1/\partial x_2 &
+\partial f_2/\partial x_2 &
+\ldots &
+\partial f_c/\partial x_2 \\
+\ldots & \ldots & \ldots & \ldots \\
+\partial f_1/\partial x_c &
+\partial f_2/\partial x_c &
+\ldots &
+\partial f_c/\partial x_c
+\end{matrix}
+\right)
+$$
+maps to an invertible element in $S$.
+\end{definition}
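+
+\noindent
+For example, take $n = 2$, $c = 1$ and $f_1 = x_1x_2 - 1$. Then
+$g = \partial f_1/\partial x_1 = x_2$ maps to an invertible element of
+$S = R[x_1, x_2]/(x_1x_2 - 1)$ because $x_1x_2 = 1$ in $S$, so this is
+a standard smooth algebra over $R$ (it is isomorphic to the localization
+$R[x_1]_{x_1}$). On the other hand, for $n = c = 1$ and $f_1 = x_1^2$
+the determinant is $2x_1$, which is nilpotent in $R[x_1]/(x_1^2)$ and
+hence not invertible unless $S$ is the zero ring.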
+
+\begin{lemma}
+\label{lemma-standard-smooth}
+Let
+$S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c) = R[x_1, \ldots, x_n]/I$
+be a standard smooth algebra. Then
+\begin{enumerate}
+\item the ring map $R \to S$ is smooth,
+\item the $S$-module $\Omega_{S/R}$ is free on
+$\text{d}x_{c + 1}, \ldots, \text{d}x_n$,
+\item the $S$-module $I/I^2$ is free on the classes of $f_1, \ldots, f_c$,
+\item for any $g \in S$ the ring map $R \to S_g$ is standard smooth,
+\item for any ring map $R \to R'$ the base change
+$R' \to R'\otimes_R S$ is standard smooth,
+\item if $f \in R$ maps to an invertible element in $S$, then
+$R_f \to S$ is standard smooth, and
+\item the ring $S$ is a relative global complete intersection over $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Consider the naive cotangent complex of the given presentation
+$$
+(f_1, \ldots, f_c)/(f_1, \ldots, f_c)^2
+\longrightarrow
+\bigoplus\nolimits_{i = 1}^n S \text{d}x_i
+$$
+Let us compose this map with the projection onto the first $c$ direct summands
+of the direct sum. According to the definition of a standard smooth
+algebra the classes $f_i \bmod (f_1, \ldots, f_c)^2$ map to a basis of
+$\bigoplus_{i = 1}^c S\text{d}x_i$. We conclude that
+$(f_1, \ldots, f_c)/(f_1, \ldots, f_c)^2$ is free of rank $c$ with
+a basis given by the elements $f_i \bmod (f_1, \ldots, f_c)^2$, and
+that the homology in degree $0$, i.e., $\Omega_{S/R}$,
+of the naive cotangent complex is a free $S$-module with basis the images of
+$\text{d}x_{c + j}$, $j = 1, \ldots, n - c$.
+In particular, this proves $R \to S$ is smooth.
+
+\medskip\noindent
+The proofs of (4) and (6) are omitted. But see the example below and
+the proof of
+Lemma \ref{lemma-base-change-relative-global-complete-intersection}.
+
+\medskip\noindent
+Let $\varphi : R \to R'$ be any ring map.
+Denote $S' = R'[x_1, \ldots, x_n]/(f_1^\varphi, \ldots, f_c^\varphi)$
+where $f^\varphi$ is the polynomial obtained from $f \in R[x_1, \ldots, x_n]$
+by applying $\varphi$ to all the coefficients. Then $S' \cong R' \otimes_R S$.
+Moreover, the determinant of Definition \ref{definition-standard-smooth}
+for $S'/R'$ is equal to $g^\varphi$. Its image in $S'$ is therefore
+the image of $g$ via $R[x_1, \ldots, x_n] \to S \to S'$
+and hence invertible. This proves (5).
+
+\medskip\noindent
+To prove (7) it suffices to show that
+$S \otimes_R \kappa(\mathfrak p)$ has dimension $n - c$
+for every prime $\mathfrak p \subset R$.
+By (5) it suffices to prove that any standard smooth
+algebra $k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+over a field $k$ has dimension $n - c$. We already
+know that $k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ is a local
+complete intersection by Lemma \ref{lemma-smooth-over-field}.
+Hence, since $I/I^2$ is free of rank $c$ we see that
+$k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ has dimension
+$n - c$, by Lemma \ref{lemma-lci} for example.
+\end{proof}
+
+\begin{example}
+\label{example-make-standard-smooth}
+Let $R$ be a ring.
+Let $f_1, \ldots, f_c \in R[x_1, \ldots, x_n]$.
+Let
+$$
+h =
+\det
+\left(
+\begin{matrix}
+\partial f_1/\partial x_1 &
+\partial f_2/\partial x_1 &
+\ldots &
+\partial f_c/\partial x_1 \\
+\partial f_1/\partial x_2 &
+\partial f_2/\partial x_2 &
+\ldots &
+\partial f_c/\partial x_2 \\
+\ldots & \ldots & \ldots & \ldots \\
+\partial f_1/\partial x_c &
+\partial f_2/\partial x_c &
+\ldots &
+\partial f_c/\partial x_c
+\end{matrix}
+\right).
+$$
+Set $S = R[x_1, \ldots, x_{n + 1}]/(f_1, \ldots, f_c, x_{n + 1}h - 1)$.
+This is a standard smooth algebra over $R$, except that the
+presentation just given does not literally satisfy
+Definition \ref{definition-standard-smooth}: the variables should be
+listed in the order
+$x_1, \ldots, x_c, x_{n + 1}, x_{c + 1}, \ldots, x_n$.
+\end{example}
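+
+\noindent
+To verify the claim in Example \ref{example-make-standard-smooth},
+expand the determinant of the $(c + 1) \times (c + 1)$ matrix of
+Definition \ref{definition-standard-smooth} (for the reordered
+variables) along the row of derivatives with respect to $x_{n + 1}$.
+Since $f_1, \ldots, f_c$ and $h$ only involve $x_1, \ldots, x_n$,
+the only nonzero entry of this row is
+$\partial(x_{n + 1}h - 1)/\partial x_{n + 1} = h$, whose complementary
+minor is $\det(\partial f_j/\partial x_i)_{i, j = 1, \ldots, c} = h$.
+Hence the determinant equals $h \cdot h = h^2$, which is invertible in
+$S$ because $x_{n + 1}h = 1$ in $S$.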
+
+\begin{lemma}
+\label{lemma-compose-standard-smooth}
+A composition of standard smooth ring maps is standard smooth.
+\end{lemma}
+
+\begin{proof}
+Suppose that $R \to S$ and $S \to S'$ are standard smooth. We choose
+presentations
+$S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+and
+$S' = S[y_1, \ldots, y_m]/(g_1, \ldots, g_d)$.
+Choose elements $g_j' \in R[x_1, \ldots, x_n, y_1, \ldots, y_m]$ mapping
+to the $g_j$. In this way we see
+$S' = R[x_1, \ldots, x_n, y_1, \ldots, y_m]/
+(f_1, \ldots, f_c, g'_1, \ldots, g'_d)$.
+To show that $S'$ is standard smooth it suffices to verify
+that the determinant
+$$
+\det
+\left(
+\begin{matrix}
+\partial f_1/\partial x_1 &
+\ldots &
+\partial f_c/\partial x_1 &
+\partial g_1/\partial x_1 &
+\ldots &
+\partial g_d/\partial x_1 \\
+\ldots &
+\ldots &
+\ldots &
+\ldots &
+\ldots &
+\ldots \\
+\partial f_1/\partial x_c &
+\ldots &
+\partial f_c/\partial x_c &
+\partial g_1/\partial x_c &
+\ldots &
+\partial g_d/\partial x_c \\
+0 &
+\ldots &
+0 &
+\partial g_1/\partial y_1 &
+\ldots &
+\partial g_d/\partial y_1 \\
+\ldots &
+\ldots &
+\ldots &
+\ldots &
+\ldots &
+\ldots \\
+0 &
+\ldots &
+0 &
+\partial g_1/\partial y_d &
+\ldots &
+\partial g_d/\partial y_d
+\end{matrix}
+\right)
+$$
+is invertible in $S'$. This is clear since the matrix is block
+triangular, so its determinant is the product of the two determinants
+which are invertible by hypothesis.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-syntomic}
+Let $R \to S$ be a smooth ring map.
+There exists an open covering of $\Spec(S)$ by
+standard opens $D(g)$ such that each $S_g$ is standard smooth
+over $R$. In particular $R \to S$ is syntomic.
+\end{lemma}
+
+\begin{proof}
+Choose a presentation $\alpha : R[x_1, \ldots, x_n] \to S$
+with kernel $I = (f_1, \ldots, f_m)$. For every subset
+$E \subset \{1, \ldots, m\}$ consider the open
+subset $U_E$ where the classes $f_e, e\in E$ freely generate
+the finite projective $S$-module $I/I^2$, see Lemma \ref{lemma-cokernel-flat}.
+We may cover $\Spec(S)$ by standard opens $D(g)$ each
+completely contained in one of the opens $U_E$. For such a $g$
+we look at the presentation
+$$
+\beta : R[x_1, \ldots, x_n, x_{n + 1}] \longrightarrow S_g
+$$
+mapping $x_{n + 1}$ to $1/g$. Setting $J = \Ker(\beta)$ we
+use Lemma \ref{lemma-principal-localization-NL} to see that
+$J/J^2 \cong (I/I^2)_g \oplus S_g$ is free.
+We may and do replace $S$ by $S_g$. Then using
+Lemma \ref{lemma-huber} we may assume we have a presentation
+$\alpha : R[x_1, \ldots, x_n] \to S$ with kernel $I = (f_1, \ldots, f_c)$
+such that $I/I^2$ is free on the classes of $f_1, \ldots, f_c$.
+
+\medskip\noindent
+Using the presentation $\alpha$ obtained at the end of the previous
+paragraph, we more or less repeat this argument with
+the basis elements $\text{d}x_1, \ldots, \text{d}x_n$
+of $\Omega_{R[x_1, \ldots, x_n]/R}$.
+Namely, for any subset $E \subset \{1, \ldots, n\}$ of cardinality $c$
+we may consider the open subset $U_E$ of $\Spec(S)$ where
+the differential of $\NL(\alpha)$ composed with the projection
+$$
+S^{\oplus c} \cong I/I^2
+\longrightarrow
+\Omega_{R[x_1, \ldots, x_n]/R} \otimes_{R[x_1, \ldots, x_n]} S
+\longrightarrow
+\bigoplus\nolimits_{i \in E} S\text{d}x_i
+$$
+is an isomorphism. Again we may find a covering of $\Spec(S)$
+by (finitely many) standard opens $D(g)$ such that each $D(g)$
+is completely contained in one of the opens $U_E$.
+By renumbering, we may assume $E = \{1, \ldots, c\}$.
+For a $g$ with $D(g) \subset U_E$ we look at the presentation
+$$
+\beta : R[x_1, \ldots, x_n, x_{n + 1}] \to S_g
+$$
+mapping $x_{n + 1}$ to $1/g$. Setting $J = \Ker(\beta)$
+we conclude from Lemma \ref{lemma-principal-localization-NL}
+that $J = (f_1, \ldots, f_c, fx_{n + 1} - 1)$ where $\alpha(f) = g$
+and that the composition
+$$
+J/J^2 \longrightarrow
+\Omega_{R[x_1, \ldots, x_{n + 1}]/R} \otimes_{R[x_1, \ldots, x_{n + 1}]} S_g
+\longrightarrow
+\bigoplus\nolimits_{i = 1}^c S_g\text{d}x_i \oplus S_g \text{d}x_{n + 1}
+$$
+is an isomorphism. Reordering the coordinates as
+$x_1, \ldots, x_c, x_{n + 1}, x_{c + 1}, \ldots, x_n$
+we conclude that $S_g$ is standard smooth over $R$ as desired.
+
+\medskip\noindent
+This finishes the proof as standard smooth algebras are syntomic
+(Lemmas \ref{lemma-standard-smooth} and
+\ref{lemma-relative-global-complete-intersection})
+and being syntomic over $R$ is local on $S$
+(Lemma \ref{lemma-local-syntomic}).
+\end{proof}
+
+\begin{definition}
+\label{definition-smooth-at-prime}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q$ be a prime of $S$.
+We say $R \to S$ is {\it smooth at $\mathfrak q$} if there
+exists a $g \in S$, $g \not \in \mathfrak q$ such
+that $R \to S_g$ is smooth.
+\end{definition}
+
+\noindent
+For ring maps of finite presentation we can characterize this as follows.
+
+\begin{lemma}
+\label{lemma-smooth-at-point}
+Let $R \to S$ be of finite presentation. Let $\mathfrak q$ be a
+prime of $S$. The following are equivalent
+\begin{enumerate}
+\item $R \to S$ is smooth at $\mathfrak q$,
+\item $H_1(L_{S/R})_\mathfrak q = 0$ and
+$\Omega_{S/R, \mathfrak q}$ is a finite free $S_\mathfrak q$-module,
+\item $H_1(L_{S/R})_\mathfrak q = 0$ and
+$\Omega_{S/R, \mathfrak q}$ is a projective $S_\mathfrak q$-module, and
+\item $H_1(L_{S/R})_\mathfrak q = 0$ and
+$\Omega_{S/R, \mathfrak q}$ is a flat $S_\mathfrak q$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use without further mention that formation of the
+naive cotangent complex commutes with localization, see
+Section \ref{section-netherlander}, especially
+Lemma \ref{lemma-localize-NL}.
+Note that $\Omega_{S/R}$ is a finitely presented $S$-module, see
+Lemma \ref{lemma-differentials-finitely-presented}. Hence
+(2), (3), and (4) are equivalent by Lemma \ref{lemma-finite-projective}.
+It is clear that (1) implies the equivalent conditions (2), (3), and (4).
+Assume (2) holds. Writing $S_\mathfrak q$ as
+the colimit of principal localizations we see from
+Lemma \ref{lemma-colimit-category-fp-modules}
+that we can find a $g \in S$, $g \not \in \mathfrak q$ such that
+$(\Omega_{S/R})_g$ is finite free. Choose a presentation
+$\alpha : R[x_1, \ldots, x_n] \to S$ with kernel $I$. We may work
+with $\NL(\alpha)$ instead of $\NL_{S/R}$, see
+Lemma \ref{lemma-NL-homotopy}. The surjection
+$$
+\Omega_{R[x_1, \ldots, x_n]/R} \otimes_{R[x_1, \ldots, x_n]} S \to \Omega_{S/R} \to 0
+$$
+has a right inverse after inverting $g$ because $(\Omega_{S/R})_g$ is
+projective. Hence the image of
+$\text{d} : (I/I^2)_g \to
+\Omega_{R[x_1, \ldots, x_n]/R} \otimes_{R[x_1, \ldots, x_n]} S_g$
+is a direct summand and this map has a right inverse too.
+We conclude that $H_1(L_{S/R})_g$ is a quotient of $(I/I^2)_g$.
+In particular $H_1(L_{S/R})_g$ is a finite $S_g$-module.
+Thus the vanishing of $H_1(L_{S/R})_{\mathfrak q}$
+implies the vanishing of $H_1(L_{S/R})_{gg'}$ for some $g' \in S$,
+$g' \not \in \mathfrak q$. Then $R \to S_{gg'}$ is smooth by
+definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-smooth}
+\begin{slogan}
+A ring map is smooth if and only if it is smooth at all primes of the target
+\end{slogan}
+Let $R \to S$ be a ring map.
+Then $R \to S$ is smooth if and only if $R \to S$ is smooth
+at every prime $\mathfrak q$ of $S$.
+\end{lemma}
+
+\begin{proof}
+The direct implication is trivial. Suppose that $R \to S$ is smooth
+at every prime $\mathfrak q$ of $S$. Since $\Spec(S)$ is
+quasi-compact, see Lemma \ref{lemma-quasi-compact},
+there exists a finite covering
+$\Spec(S) = \bigcup D(g_i)$ such that each $S_{g_i}$ is
+smooth. By Lemma \ref{lemma-cover-upstairs} this implies that
+$S$ is of finite presentation over $R$. According to
+Lemma \ref{lemma-localize-NL} we see that
+$\NL_{S/R} \otimes_S S_{g_i}$ is quasi-isomorphic to a finite projective
+$S_{g_i}$-module. By Lemma \ref{lemma-finite-projective}
+this implies that $\NL_{S/R}$ is quasi-isomorphic to a finite
+projective $S$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-smooth}
+A composition of smooth ring maps is smooth.
+\end{lemma}
+
+\begin{proof}
+You can prove this in many different ways. One way is to use
+the snake lemma (Lemma \ref{lemma-snake}), the Jacobi-Zariski sequence
+(Lemma \ref{lemma-exact-sequence-NL}), combined with the
+characterization of projective modules as being
+direct summands of free modules (Lemma \ref{lemma-characterize-projective}).
+Another proof can be obtained by combining
+Lemmas \ref{lemma-smooth-syntomic}, \ref{lemma-compose-standard-smooth}
+and \ref{lemma-locally-smooth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-smooth}
+Let $R$ be a ring. Let $S = S' \times S''$ be a product of $R$-algebras.
+Then $S$ is smooth over $R$ if and only if both $S'$ and $S''$ are
+smooth over $R$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hints: By Lemma \ref{lemma-locally-smooth} we can check smoothness
+one prime at a time. Since $\Spec(S)$ is the disjoint union of
+$\Spec(S')$ and $\Spec(S'')$ by Lemma \ref{lemma-spec-product}
+we find that smoothness of $R \to S$ at $\mathfrak q$ corresponds
+to either smoothness of $R \to S'$ at the corresponding prime or
+smoothness of $R \to S''$ at the corresponding prime.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-global-complete-intersection-smooth}
+Let $R$ be a ring. Let $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+be a relative global complete intersection.
+Let $\mathfrak q \subset S$ be a prime. Then $R \to S$
+is smooth at $\mathfrak q$ if and only if there exists a
+subset $I \subset \{1, \ldots, n\}$ of cardinality $c$
+such that the polynomial
+$$
+g_I = \det (\partial f_j/\partial x_i)_{j = 1, \ldots, c, \ i \in I}
+$$
+does not map to an element of $\mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-relative-global-complete-intersection-conormal}
+we see that the naive cotangent complex
+associated to the given presentation of $S$ is the complex
+$$
+\bigoplus\nolimits_{j = 1}^c S \cdot f_j
+\longrightarrow
+\bigoplus\nolimits_{i = 1}^n S \cdot \text{d}x_i, \quad
+f_j \longmapsto \sum \frac{\partial f_j}{\partial x_i} \text{d}x_i.
+$$
+The maximal minors of the matrix giving the map are exactly
+the polynomials $g_I$.
+
+\medskip\noindent
+Assume $g_I$ maps to $g \in S$, with $g \not \in \mathfrak q$.
+Then the algebra $S_g$ is smooth over $R$. Namely, its naive
+cotangent complex is quasi-isomorphic to the complex above
+localized at $g$, see Lemma \ref{lemma-localize-NL}. And by
+construction it is quasi-isomorphic to a free rank $n - c$
+module in degree $0$.
+
+\medskip\noindent
+Conversely, suppose that all $g_I$ end up in $\mathfrak q$.
+In this case the complex above tensored with $\kappa(\mathfrak q)$
+does not have maximal rank, and hence there is no localization
+by an element $g \in S$, $g \not \in \mathfrak q$
+where this map becomes a split injection. By Lemma \ref{lemma-localize-NL}
+again there is no such localization which is smooth over $R$.
+\end{proof}
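+
+\noindent
+This criterion recovers the example at the beginning of this section:
+if $p = 0$ in $R$ for a prime number $p$ and
+$S = R[x_1, x_2]/(x_1^p + x_2^p)$, then both partial derivatives of
+$x_1^p + x_2^p$ vanish in $S$. Hence $g_I \in \mathfrak q$ for every
+$I$ and every prime $\mathfrak q$, and $R \to S$ is smooth at no
+prime of $S$.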
+
+\begin{lemma}
+\label{lemma-flat-fibre-smooth}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q \subset S$ be a prime lying over the
+prime $\mathfrak p$ of $R$. Assume
+\begin{enumerate}
+\item there exists a $g \in S$, $g \not\in \mathfrak q$
+such that $R \to S_g$ is of finite presentation,
+\item the local ring homomorphism
+$R_{\mathfrak p} \to S_{\mathfrak q}$ is flat,
+\item the fibre $S \otimes_R \kappa(\mathfrak p)$ is smooth
+over $\kappa(\mathfrak p)$ at the prime corresponding
+to $\mathfrak q$.
+\end{enumerate}
+Then $R \to S$ is smooth at $\mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-syntomic} and \ref{lemma-smooth-over-field}
+we see that there exists a $g \in S$, $g \not \in \mathfrak q$ such that $S_g$ is a
+relative global complete intersection. Replacing $S$ by $S_g$ we may assume
+$S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$ is a relative
+global complete intersection.
+For any subset $I \subset \{1, \ldots, n\}$ of cardinality
+$c$ consider the polynomial
+$g_I = \det (\partial f_j/\partial x_i)_{j = 1, \ldots, c, i \in I}$
+of Lemma \ref{lemma-relative-global-complete-intersection-smooth}.
+Note that the image $\overline{g}_I$ of $g_I$ in the polynomial ring
+$\kappa(\mathfrak p)[x_1, \ldots, x_n]$ is the determinant
+of the partial derivatives of the images $\overline{f}_j$ of the $f_j$
+in the ring $\kappa(\mathfrak p)[x_1, \ldots, x_n]$. Thus the lemma follows
+by applying Lemma \ref{lemma-relative-global-complete-intersection-smooth}
+both to $R \to S$ and to
+$\kappa(\mathfrak p) \to S \otimes_R \kappa(\mathfrak p)$.
+\end{proof}
+
+\noindent
+Note that the sets $U, V$ in the following lemma
+are open by definition.
+
+\begin{lemma}
+\label{lemma-flat-base-change-locus-smooth}
+Let $R \to S$ be a ring map of finite presentation.
+Let $R \to R'$ be a flat ring map.
+Denote $S' = R' \otimes_R S$ the base change.
+Let $U \subset \Spec(S)$ be the set of primes at
+which $R \to S$ is smooth.
+Let $V \subset \Spec(S')$ be the set of primes at
+which $R' \to S'$ is smooth.
+Then $V$ is the inverse image of $U$ under the
+map $f : \Spec(S') \to \Spec(S)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-change-base-NL} we see that
+$\NL_{S/R} \otimes_S S'$ is homotopy equivalent to $\NL_{S'/R'}$.
+This already implies that $f^{-1}(U) \subset V$.
+
+\medskip\noindent
+Let $\mathfrak q' \subset S'$ be a prime lying over
+$\mathfrak q \subset S$. Assume $\mathfrak q' \in V$.
+We have to show that $\mathfrak q \in U$.
+Since $S \to S'$ is flat, we see that $S_{\mathfrak q} \to S'_{\mathfrak q'}$
+is faithfully flat (Lemma \ref{lemma-local-flat-ff}). Thus the vanishing of
+$H_1(L_{S'/R'})_{\mathfrak q'}$ implies the
+vanishing of $H_1(L_{S/R})_{\mathfrak q}$.
+By Lemma \ref{lemma-finite-projective-descends}
+applied to the $S_{\mathfrak q}$-module $(\Omega_{S/R})_{\mathfrak q}$
+and the map $S_{\mathfrak q} \to S'_{\mathfrak q'}$ we see that
+$(\Omega_{S/R})_{\mathfrak q}$ is projective. Hence
+$R \to S$ is smooth at $\mathfrak q$ by
+Lemma \ref{lemma-smooth-at-point}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-field-change-local}
+Let $K/k$ be a field extension.
+Let $S$ be a finite type algebra over $k$.
+Let $\mathfrak q_K$ be a prime of $S_K = K \otimes_k S$
+and let $\mathfrak q$ be the corresponding prime of $S$.
+Then $S$ is smooth over $k$ at $\mathfrak q$ if and only if
+$S_K$ is smooth at $\mathfrak q_K$ over $K$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-flat-base-change-locus-smooth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-smooth}
+Let $R$ be a ring and let $I \subset R$ be an ideal.
+Let $R/I \to \overline{S}$ be a smooth ring map.
+Then there exist elements $\overline{g}_i \in \overline{S}$
+which generate the unit ideal of $\overline{S}$
+such that each $\overline{S}_{\overline{g}_i} \cong S_i/IS_i$
+for some (standard) smooth ring $S_i$ over $R$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-smooth-syntomic} we find a collection of elements
+$\overline{g}_i \in \overline{S}$
+which generate the unit ideal of $\overline{S}$
+such that each $\overline{S}_{\overline{g}_i}$ is standard smooth over $R/I$.
+Hence we may assume that $\overline{S}$ is standard smooth
+over $R/I$. Write
+$\overline{S} =
+(R/I)[x_1, \ldots, x_n]/(\overline{f}_1, \ldots, \overline{f}_c)$
+as in Definition \ref{definition-standard-smooth}.
+Choose $f_1, \ldots, f_c \in R[x_1, \ldots, x_n]$
+lifting $\overline{f}_1, \ldots, \overline{f}_c$. Set
+$S = R[x_1, \ldots, x_n, x_{n + 1}]/(f_1, \ldots, f_c, x_{n + 1}\Delta - 1)$
+where $\Delta = \det(\frac{\partial f_j}{\partial x_i})_{i, j = 1, \ldots, c}$
+as in Example \ref{example-make-standard-smooth}. Then $S$ is standard
+smooth over $R$. Moreover
+$S/IS \cong \overline{S}[x_{n + 1}]/(x_{n + 1}\overline{\Delta} - 1)
+\cong \overline{S}$ because $\overline{\Delta}$, the image of $\Delta$
+in $\overline{S}$, is invertible. This proves the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Formally smooth maps}
+\label{section-formally-smooth}
+
+\noindent
+In this section we define formally smooth ring maps. It will turn out
+that a ring map of finite presentation is formally smooth if and only if
+it is smooth, see
+Proposition \ref{proposition-smooth-formally-smooth}.
+
+\begin{definition}
+\label{definition-formally-smooth}
+Let $R \to S$ be a ring map.
+We say $S$ is {\it formally smooth over $R$} if for every
+commutative solid diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[r] \ar[u] & A \ar[u]
+}
+$$
+where $I \subset A$ is an ideal of square zero, a dotted
+arrow exists which makes the diagram commute.
+\end{definition}
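+
+\noindent
+For example, if $S = R[x]/(f)$ and the derivative $f'$ maps to an
+invertible element of $S$, then $S$ is formally smooth over $R$ by a
+Newton approximation argument. Namely, given a diagram as in
+Definition \ref{definition-formally-smooth}, let $a \in A$ be a lift
+of the image of $x$ in $A/I$, so that $f(a) \in I$, and let $c \in A$
+be a lift of the image of the inverse of $f'$ in $A/I$. Then
+$$
+f(a - f(a)c) = f(a) - f'(a)f(a)c = f(a)\bigl(1 - f'(a)c\bigr) = 0
+$$
+because the higher order Taylor terms contain the factor
+$(f(a)c)^2 \in I^2 = 0$, and because $f(a) \in I$ and
+$1 - f'(a)c \in I$. Hence $x \mapsto a - f(a)c$ defines the desired
+dotted arrow $S \to A$.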
+
+\begin{lemma}
+\label{lemma-base-change-fs}
+Let $R \to S$ be a formally smooth ring map.
+Let $R \to R'$ be any ring map.
+Then the base change $S' = R' \otimes_R S$ is formally smooth over $R'$.
+\end{lemma}
+
+\begin{proof}
+Let a solid diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rrd] & R' \otimes_R S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[u] \ar[r] & R' \ar[r] \ar[u] & A \ar[u]
+}
+$$
+as in Definition \ref{definition-formally-smooth} be given.
+By assumption the longer dotted arrow exists. By the universal
+property of tensor product we obtain the shorter dotted arrow.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-formally-smooth}
+A composition of formally smooth ring maps is formally smooth.
+\end{lemma}
+
+\begin{proof}
+Omitted. (Hint: This is completely formal, and follows from considering
+a suitable diagram.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-polynomial-ring-formally-smooth}
+A polynomial ring over $R$ is formally smooth over $R$.
+\end{lemma}
+
+\begin{proof}
+Suppose we have a diagram as in Definition \ref{definition-formally-smooth}
+with $S = R[x_j; j \in J]$. Then there exists a dotted arrow
+simply by choosing lifts $a_j \in A$ of the elements in $A/I$
+to which the elements $x_j$ map under the top horizontal arrow.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-formally-smooth}
+Let $R \to S$ be a ring map.
+Let $P \to S$ be a surjective $R$-algebra map from a
+polynomial ring $P$ onto $S$. Denote $J \subset P$ the
+kernel. Then $R \to S$ is formally smooth if and only
+if there exists an $R$-algebra map $\sigma : S \to P/J^2$
+which is a right inverse to the surjection
+$P/J^2 \to S$.
+\end{lemma}
+
+\begin{proof}
+Assume $R \to S$ is formally smooth.
+Consider the commutative diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & P/J \\
+R \ar[r] \ar[u] & P/J^2 \ar[u]
+}
+$$
+By assumption the dotted arrow exists. This proves that
+$\sigma$ exists.
+
+\medskip\noindent
+Conversely, suppose we have a $\sigma$ as in the lemma.
+Let a solid diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[r] \ar[u] & A \ar[u]
+}
+$$
+as in Definition \ref{definition-formally-smooth} be given.
+Because $P$ is formally smooth by
+Lemma \ref{lemma-polynomial-ring-formally-smooth},
+there exists an $R$-algebra homomorphism
+$\psi : P \to A$ which lifts the map $P \to S \to A/I$.
+Clearly $\psi(J) \subset I$ and since $I^2 = 0$ we conclude that
+$\psi(J^2) = 0$. Hence $\psi$ factors as
+$\overline{\psi} : P/J^2 \to A$. The desired dotted arrow
+is the composition $\overline{\psi} \circ \sigma : S \to A$.
+\end{proof}
+
+\begin{remark}
+\label{remark-lemma-characterize-formally-smooth}
+Lemma \ref{lemma-characterize-formally-smooth} holds more
+generally whenever $P$ is formally smooth over $R$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-characterize-formally-smooth-again}
+Let $R \to S$ be a ring map.
+Let $P \to S$ be a surjective $R$-algebra map from a
+polynomial ring $P$ onto $S$. Denote $J \subset P$ the
+kernel. Then $R \to S$ is formally smooth if and only
+if the sequence
+$$
+0 \to J/J^2 \to \Omega_{P/R} \otimes_P S \to \Omega_{S/R} \to 0
+$$
+of Lemma \ref{lemma-differential-seq} is a split exact sequence.
+\end{lemma}
+
+\begin{proof}
+Assume $S$ is formally smooth over $R$. By
+Lemma \ref{lemma-characterize-formally-smooth}
+this means there exists an $R$-algebra map
+$S \to P/J^2$ which is a right inverse to the
+canonical map $P/J^2 \to S$. By Lemma \ref{lemma-differential-mod-power-ideal}
+we have $\Omega_{P/R} \otimes_P S = \Omega_{(P/J^2)/R} \otimes_{P/J^2} S$.
+By Lemma \ref{lemma-differential-seq-split} the sequence is split.
+
+\medskip\noindent
+Assume the exact sequence of the lemma is split exact.
+Choose a splitting $\sigma : \Omega_{S/R} \to \Omega_{P/R} \otimes_P S$.
+For each $\lambda \in S$ choose $x_\lambda \in P$
+which maps to $\lambda$. Next, for each $\lambda \in S$ choose
+$f_\lambda \in J$ such that
+$$
+\text{d}f_\lambda = \text{d}x_\lambda - \sigma(\text{d}\lambda)
+$$
+in the middle term of the exact sequence.
+We claim that $s : \lambda \mapsto x_\lambda - f_\lambda \mod J^2$
+is an $R$-algebra homomorphism $s : S \to P/J^2$.
+To prove this we will repeatedly use that if $h \in J$ and
+$\text{d}h = 0$ in $\Omega_{P/R} \otimes_P S$, then $h \in J^2$.
+Let $\lambda, \mu \in S$.
+Then $\sigma(\text{d}\lambda + \text{d}\mu - \text{d}(\lambda + \mu)) = 0$.
+This implies
+$$
+\text{d}(x_\lambda + x_\mu - x_{\lambda + \mu}
+- f_\lambda - f_\mu + f_{\lambda + \mu}) = 0
+$$
+which means that $x_\lambda + x_\mu - x_{\lambda + \mu}
+- f_\lambda - f_\mu + f_{\lambda + \mu} \in J^2$, which in turn
+means that $s(\lambda) + s(\mu) = s(\lambda + \mu)$.
+Similarly, we have
+$\sigma(\lambda \text{d}\mu + \mu \text{d}\lambda - \text{d}(\lambda \mu)) = 0$
+which implies that
+$$
+\mu(\text{d}x_\lambda - \text{d}f_\lambda) +
+\lambda(\text{d}x_\mu - \text{d}f_\mu) -
+\text{d}x_{\lambda\mu} + \text{d}f_{\lambda\mu} = 0
+$$
+in the middle term of the exact sequence.
+Moreover we have
+$$
+\text{d}(x_\lambda x_\mu) =
+x_\lambda \text{d}x_\mu + x_\mu \text{d}x_\lambda =
+\lambda \text{d}x_\mu + \mu \text{d} x_\lambda
+$$
+in the middle term again. Combined these equations mean that
+$x_\lambda x_\mu - x_{\lambda\mu}
+- \mu f_\lambda - \lambda f_\mu + f_{\lambda\mu} \in J^2$,
+hence $(x_\lambda - f_\lambda)(x_\mu - f_\mu) -
+(x_{\lambda\mu} - f_{\lambda\mu}) \in J^2$ as $f_\lambda f_\mu \in J^2$,
+which means that $s(\lambda)s(\mu) = s(\lambda\mu)$.
+If $\lambda \in R$, then $\text{d}\lambda = 0$ and we see
+that $\text{d}f_\lambda = \text{d}x_\lambda$, hence
+$\lambda - x_\lambda + f_\lambda \in J^2$ and hence
+$s(\lambda) = \lambda$ as desired. At this point we can
+apply Lemma \ref{lemma-characterize-formally-smooth}
+to conclude that $S/R$ is formally smooth.
+\end{proof}
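+
+\noindent
+For a concrete non-example, take $R = k$ a field, $S = k[x]/(x^2)$,
+$P = k[x]$ and $J = (x^2)$. Here $J/J^2$ is free of rank $1$ over $S$
+on the class of $x^2$, the middle term of the sequence is
+$S\text{d}x$, and the first map sends the class of $x^2$ to
+$2x\text{d}x$. Multiplication by $2x$ on $S$ kills $x$, so the first
+map is not injective, the sequence is not split exact, and hence
+$S$ is not formally smooth over $k$.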
+
+\begin{proposition}
+\label{proposition-characterize-formally-smooth}
+Let $R \to S$ be a ring map. Consider a formally smooth $R$-algebra $P$ and
+a surjection $P \to S$ with kernel $J$. The following are equivalent
+\begin{enumerate}
+\item $S$ is formally smooth over $R$,
+\item for some $P \to S$ as above there exists a
+section to $P/J^2 \to S$,
+\item for all $P \to S$ as above there exists a
+section to $P/J^2 \to S$,
+\item for some $P \to S$ as above the sequence
+$0 \to J/J^2 \to \Omega_{P/R} \otimes S \to \Omega_{S/R} \to 0$ is split exact,
+\item for all $P \to S$ as above the sequence
+$0 \to J/J^2 \to \Omega_{P/R} \otimes S \to \Omega_{S/R} \to 0$ is split exact,
+and
+\item the naive cotangent complex $\NL_{S/R}$ is quasi-isomorphic to a
+projective $S$-module placed in degree $0$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+It is clear that (1) implies (3) implies (2), see first part of the proof of
+Lemma \ref{lemma-characterize-formally-smooth}.
+It is also true that (3) implies (5) implies (4) and that (2) implies (4), see
+first part of the proof of
+Lemma \ref{lemma-characterize-formally-smooth-again}.
+Finally, Lemma \ref{lemma-characterize-formally-smooth-again}
+applied to the canonical surjection $R[S] \to S$
+(\ref{equation-canonical-presentation}) shows that (1) implies (6).
+
+\medskip\noindent
+Assume (4) and let's prove (6). Consider the sequence of
+Lemma \ref{lemma-exact-sequence-NL}
+associated to the ring maps $R \to P \to S$. By the implication
+(1) $\Rightarrow$ (6) proved above we see that $\NL_{P/R} \otimes_P S$
+is quasi-isomorphic to $\Omega_{P/R} \otimes_P S$ placed in degree $0$.
+Hence $H_1(\NL_{P/R} \otimes_P S) = 0$. Since $P \to S$ is surjective we
+see that $\NL_{S/P}$ is homotopy equivalent to $J/J^2$ placed in degree $1$
+(Lemma \ref{lemma-NL-surjection}). Thus we obtain the exact sequence
+$0 \to H_1(L_{S/R}) \to J/J^2 \to \Omega_{P/R} \otimes_P S \to
+\Omega_{S/R} \to 0$.
+By assumption we see that $H_1(L_{S/R}) = 0$ and that $\Omega_{S/R}$
+is a projective $S$-module. Thus (6) follows.
+
+\medskip\noindent
+Finally, let's prove that (6) implies (1). The assumption means that
+the complex $J/J^2 \to \Omega_{P/R} \otimes_P S$, where $P = R[S]$ and
+$P \to S$ is the canonical surjection (\ref{equation-canonical-presentation}),
+is quasi-isomorphic to a projective $S$-module placed in degree $0$.
+In other words, the map $J/J^2 \to \Omega_{P/R} \otimes_P S$ is
+injective with projective cokernel $\Omega_{S/R}$. Since every
+surjection onto a projective module splits, the sequence
+$0 \to J/J^2 \to \Omega_{P/R} \otimes_P S \to \Omega_{S/R} \to 0$
+is split exact.
+Hence Lemma \ref{lemma-characterize-formally-smooth-again} shows that $S$
+is formally smooth over $R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ses-formally-smooth}
+Let $A \to B \to C$ be ring maps. Assume $B \to C$ is formally smooth.
+Then the sequence
+$$
+0 \to \Omega_{B/A} \otimes_B C \to \Omega_{C/A} \to \Omega_{C/B} \to 0
+$$
+of
+Lemma \ref{lemma-exact-sequence-differentials}
+is a split short exact sequence.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Proposition \ref{proposition-characterize-formally-smooth}
+and
+Lemma \ref{lemma-exact-sequence-NL}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differential-seq-formally-smooth}
+Let $A \to B \to C$ be ring maps with $A \to C$ formally smooth
+and $B \to C$ surjective with kernel $J \subset B$.
+Then the exact sequence
+$$
+0 \to J/J^2 \to \Omega_{B/A} \otimes_B C \to \Omega_{C/A} \to 0
+$$
+of
+Lemma \ref{lemma-differential-seq}
+is split exact.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Proposition \ref{proposition-characterize-formally-smooth},
+Lemma \ref{lemma-exact-sequence-NL}, and
+Lemma \ref{lemma-differential-seq}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-application-NL-formally-smooth}
+Let $A \to B \to C$ be ring maps. Assume $A \to C$ is surjective (so
+also $B \to C$ is) and $A \to B$ formally smooth.
+Denote $I = \Ker(A \to C)$ and $J = \Ker(B \to C)$.
+Then the sequence
+$$
+0 \to I/I^2 \to J/J^2 \to \Omega_{B/A} \otimes_B B/J \to 0
+$$
+of
+Lemma \ref{lemma-application-NL}
+is split exact.
+\end{lemma}
+
+\begin{proof}
+As $A \to B$ is formally smooth we may apply the definition to the
+square zero extension $A/I^2 \to A/I$ to find an $A$-algebra map
+$\sigma : B \to A/I^2$ lifting the given map $B \to C = A/I$.
+The composition of $A \to B$ with $\sigma$ is the quotient map
+$A \to A/I^2$ and $\sigma(J) \subset I/I^2$. Hence $\sigma$ induces
+a map $J/J^2 \to I/I^2$ which is a left inverse to the map
+$I/I^2 \to J/J^2$, so the sequence is split.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-formal-smoothness}
+Let $R \to S$ be a ring map.
+Let $I \subset R$ be an ideal. Assume
+\begin{enumerate}
+\item $I^2 = 0$,
+\item $R \to S$ is flat, and
+\item $R/I \to S/IS$ is formally smooth.
+\end{enumerate}
+Then $R \to S$ is formally smooth.
+\end{lemma}
+
+\begin{proof}
+Assume (1), (2) and (3).
+Let $P = R[\{x_t\}_{t \in T}] \to S$ be a surjection of $R$-algebras
+with kernel $J$. Thus $0 \to J \to P \to S \to 0$ is a
+short exact sequence of flat $R$-modules. This implies that
+$I \otimes_R S = IS$, $I \otimes_R P = IP$ and $I \otimes_R J = IJ$
+as well as $J \cap IP = IJ$.
+We will use throughout the proof that
+$$
+\Omega_{(S/IS)/(R/I)} = \Omega_{S/R} \otimes_S (S/IS)
+= \Omega_{S/R} \otimes_R R/I = \Omega_{S/R} / I\Omega_{S/R}
+$$
+and similarly for $P$ (see Lemma \ref{lemma-differentials-base-change}).
+By Lemma \ref{lemma-characterize-formally-smooth-again} the sequence
+\begin{equation}
+\label{equation-split}
+0 \to J/(IJ + J^2) \to
+\Omega_{P/R} \otimes_P S/IS \to
+\Omega_{S/R} \otimes_S S/IS \to 0
+\end{equation}
+is split exact. Of course the middle term is
+$\bigoplus_{t \in T} S/IS \text{d}x_t$. Choose a splitting
+$\sigma : \Omega_{P/R} \otimes_P S/IS \to J/(IJ + J^2)$.
+For each $t \in T$ choose an element $f_t \in J$ which maps
+to $\sigma(\text{d}x_t)$ in $J/(IJ + J^2)$. This determines a
+unique $S$-module map
+$$
+\tilde \sigma : \Omega_{P/R} \otimes_P S
+= \bigoplus S\text{d}x_t \longrightarrow J/J^2
+$$
+with the property that $\tilde\sigma(\text{d}x_t) = f_t$.
+As $\sigma$ is a section to $\text{d}$ the difference
+$$
+\Delta = \text{id}_{J/J^2} - \tilde \sigma \circ \text{d}
+$$
+is a self map $J/J^2 \to J/J^2$ whose image is contained in
+$(IJ + J^2)/J^2$. In particular $\Delta((IJ + J^2)/J^2) = 0$
+because $I^2 = 0$. This means that $\Delta$ factors as
+$$
+J/J^2 \to J/(IJ + J^2) \xrightarrow{\overline{\Delta}}
+(IJ + J^2)/J^2 \to J/J^2
+$$
+where $\overline{\Delta}$ is an $S/IS$-module map.
+Using again that the sequence (\ref{equation-split})
+is split, we can find an $S/IS$-module map
+$\overline{\delta} : \Omega_{P/R} \otimes_P S/IS \to (IJ + J^2)/J^2$
+such that $\overline{\delta} \circ \text{d}$ is equal to $\overline{\Delta}$.
+In the same manner as above the map $\overline{\delta}$ determines
+an $S$-module map
+$\delta : \Omega_{P/R} \otimes_P S \to J/J^2$.
+After replacing $\tilde \sigma$ by $\tilde \sigma + \delta$
+a simple computation shows that $\Delta = 0$. In other words $\tilde \sigma$
+is a section of $J/J^2 \to \Omega_{P/R} \otimes_P S$.
+By Lemma \ref{lemma-characterize-formally-smooth-again}
+we conclude that $R \to S$ is formally smooth.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-smooth-formally-smooth}
+Let $R \to S$ be a ring map. The following are equivalent
+\begin{enumerate}
+\item $R \to S$ is of finite presentation and formally smooth,
+\item $R \to S$ is smooth.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Follows from
+Proposition \ref{proposition-characterize-formally-smooth}
+and Definition \ref{definition-smooth}.
+(Note that $\Omega_{S/R}$ is a finitely presented $S$-module if $R \to S$ is
+of finite presentation, see
+Lemma \ref{lemma-differentials-finitely-presented}.)
+\end{proof}
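+
+\noindent
+The finite presentation hypothesis cannot be dropped in part (1):
+for instance a polynomial ring $R[x_1, x_2, x_3, \ldots]$ in
+infinitely many variables is formally smooth over $R$ by
+Lemma \ref{lemma-polynomial-ring-formally-smooth}, but it is not of
+finite presentation and hence not smooth over $R$.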
+
+\begin{lemma}
+\label{lemma-finite-presentation-fs-Noetherian}
+Let $R \to S$ be a smooth ring map. Then there exists a subring
+$R_0 \subset R$ of finite type over $\mathbf{Z}$ and a smooth
+ring map $R_0 \to S_0$ such that $S \cong R \otimes_{R_0} S_0$.
+\end{lemma}
+
+\begin{proof}
+We are going to use that smooth is equivalent to finite presentation
+and formally smooth, see Proposition \ref{proposition-smooth-formally-smooth}.
+Write $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$
+and denote $I = (f_1, \ldots, f_m)$.
+Choose a right inverse
+$\sigma : S \to R[x_1, \ldots, x_n]/I^2$
+to the projection to $S$ as in
+Lemma \ref{lemma-characterize-formally-smooth}.
+Choose $h_i \in R[x_1, \ldots, x_n]$ such that
+$\sigma(x_i \bmod I) = h_i \bmod I^2$.
+The fact that $\sigma$ is an $R$-algebra homomorphism
+$R[x_1, \ldots, x_n]/I \to R[x_1, \ldots, x_n]/I^2$
+is equivalent to the condition that
+$$
+f_j(h_1, \ldots, h_n) =
+\sum\nolimits_{j_1 j_2} a_{j j_1 j_2} f_{j_1} f_{j_2}
+$$
+for certain $a_{j j_1 j_2} \in R[x_1, \ldots, x_n]$.
+Let $R_0 \subset R$ be the subring generated over $\mathbf{Z}$
+by all the coefficients of the polynomials $f_j, h_i, a_{j j_1 j_2}$.
+Set $S_0 = R_0[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$,
+with $I_0 = (f_1, \ldots, f_m)$.
+Let $\sigma_0 : S_0 \to R_0[x_1, \ldots, x_n]/I_0^2$ be the map
+defined by the rule $x_i \mapsto h_i \bmod I_0^2$; this works since
+the $a_{j j_1 j_2}$ are defined over $R_0$ and satisfy the same
+relations.
+Thus by Lemma \ref{lemma-characterize-formally-smooth}
+the ring $S_0$ is formally smooth over $R_0$. As $S_0$ is of finite
+presentation over $R_0$ it is smooth over $R_0$ by
+Proposition \ref{proposition-smooth-formally-smooth}, and by
+construction $S \cong R \otimes_{R_0} S_0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-descends-through-colimit}
+Let $A = \colim A_i$ be a filtered colimit of rings. Let
+$A \to B$ be a smooth ring map. There exists an $i$ and
+a smooth ring map $A_i \to B_i$ such that $B = B_i \otimes_{A_i} A$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finite-presentation-fs-Noetherian} we can find
+a subring $R_0 \subset A$ of finite type over $\mathbf{Z}$ and a
+smooth ring map $R_0 \to B_0$ with $B = A \otimes_{R_0} B_0$. As
+$R_0$ is of finite type over $\mathbf{Z}$ the map $R_0 \to A$
+factors through $A_i$ for some $i$ by
+Lemma \ref{lemma-characterize-finite-presentation}, and we may take
+$B_i = A_i \otimes_{R_0} B_0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-formally-smooth}
+Let $R \to S$ be a ring map. Let $R \to R'$ be a faithfully flat ring map.
+Set $S' = S \otimes_R R'$. Then $R \to S$ is formally smooth if and only
+if $R' \to S'$ is formally smooth.
+\end{lemma}
+
+\begin{proof}
+If $R \to S$ is formally smooth, then $R' \to S'$ is formally smooth by
+Lemma \ref{lemma-base-change-fs}.
+To prove the converse, assume $R' \to S'$ is formally smooth.
+Note that $N \otimes_R R' = N \otimes_S S'$ for any $S$-module $N$. In
+particular $S \to S'$ is faithfully flat also.
+Choose a polynomial ring $P = R[\{x_i\}_{i \in I}]$ and a surjection
+of $R$-algebras $P \to S$ with kernel $J$. Note that $P' = P \otimes_R R'$
+is a polynomial algebra over $R'$. Since $R \to R'$ is flat the kernel
+$J'$ of the surjection $P' \to S'$ is $J \otimes_R R'$. Hence the
+split exact sequence (see
+Lemma \ref{lemma-characterize-formally-smooth-again})
+$$
+0 \to J'/(J')^2 \to \Omega_{P'/R'} \otimes_{P'} S' \to \Omega_{S'/R'} \to 0
+$$
+is the base change via $S \to S'$ of the corresponding sequence
+$$
+J/J^2 \to \Omega_{P/R} \otimes_P S \to \Omega_{S/R} \to 0
+$$
+see
+Lemma \ref{lemma-differential-seq}.
+As $S \to S'$ is faithfully flat we conclude two things:
+(1) this sequence (without ${}'$) is exact too, and (2)
+$\Omega_{S/R}$ is a projective $S$-module. Namely, $\Omega_{S'/R'}$
+is projective, being a direct summand of the free module
+$\Omega_{P'/R'} \otimes_{P'} S'$, and
+$\Omega_{S/R} \otimes_S S' = \Omega_{S'/R'}$ by what we said above.
+Thus (2) follows by descent of projectivity
+through faithfully flat ring maps, see
+Theorem \ref{theorem-ffdescent-projectivity}.
+Hence the sequence
+$0 \to J/J^2 \to \Omega_{P/R} \otimes_P S \to \Omega_{S/R} \to 0$
+is exact also and we win by applying
+Lemma \ref{lemma-characterize-formally-smooth-again}
+once more.
+\end{proof}
+
+\noindent
+It turns out that smooth ring maps satisfy the following strong
+lifting property.
+
+\begin{lemma}
+\label{lemma-smooth-strong-lift}
+Let $R \to S$ be a smooth ring map. Given a commutative solid diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[r] \ar[u] & A \ar[u]
+}
+$$
+where $I \subset A$ is a locally nilpotent ideal, a dotted
+arrow exists which makes the diagram commute.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finite-presentation-fs-Noetherian} we can extend the
+diagram to a commutative diagram
+$$
+\xymatrix{
+S_0 \ar[r] & S \ar[r] \ar@{-->}[rd] & A/I \\
+R_0 \ar[r] \ar[u] & R \ar[r] \ar[u] & A \ar[u]
+}
+$$
+with $R_0 \to S_0$ smooth, $R_0$ of finite type over $\mathbf{Z}$, and
+$S = S_0 \otimes_{R_0} R$. Let $x_1, \ldots, x_n \in S_0$ be generators of
+$S_0$ over $R_0$. Let $a_1, \ldots, a_n$ be elements of $A$ which
+map to the same elements in $A/I$ as the elements $x_1, \ldots, x_n$.
+Denote $A_0 \subset A$ the subring generated by the image of $R_0$
+and the elements $a_1, \ldots, a_n$. Set $I_0 = A_0 \cap I$. Then
+$A_0/I_0 \subset A/I$ and $S_0 \to A/I$ maps into $A_0/I_0$.
+Thus it suffices to find the dotted arrow in the diagram
+$$
+\xymatrix{
+S_0 \ar[r] \ar@{-->}[rd] & A_0/I_0 \\
+R_0 \ar[r] \ar[u] & A_0 \ar[u]
+}
+$$
+The ring $A_0$ is of finite type over $\mathbf{Z}$ by construction.
+Hence $A_0$ is Noetherian, whence $I_0$ is nilpotent, see
+Lemma \ref{lemma-Noetherian-power}.
+Say $I_0^n = 0$. By Proposition \ref{proposition-smooth-formally-smooth}
+we can successively lift the $R_0$-algebra map $S_0 \to A_0/I_0$ to
+$S_0 \to A_0/I_0^2$, $S_0 \to A_0/I_0^3$, $\ldots$,
+and finally $S_0 \to A_0/I_0^n = A_0$.
+\end{proof}
+
+
+
+
+
+
+\section{Smoothness and differentials}
+\label{section-smooth-differential}
+
+\noindent
+Some results on differentials and smooth ring maps.
+
+\begin{lemma}
+\label{lemma-triangle-differentials-smooth}
+Given ring maps $A \to B \to C$ with $B \to C$ smooth, then the sequence
+$$
+0 \to C \otimes_B \Omega_{B/A} \to \Omega_{C/A} \to \Omega_{C/B} \to 0
+$$
+of Lemma \ref{lemma-exact-sequence-differentials} is exact.
+\end{lemma}
+
+\begin{proof}
+This follows from the more general
+Lemma \ref{lemma-ses-formally-smooth}
+because a smooth ring map is formally smooth, see
+Proposition \ref{proposition-smooth-formally-smooth}.
+But it also follows directly from
+Lemma \ref{lemma-exact-sequence-NL}
+since $H_1(L_{C/B}) = 0$ is part of the definition of smoothness of $B \to C$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differential-seq-smooth}
+Let $A \to B \to C$ be ring maps with $A \to C$ smooth
+and $B \to C$ surjective with kernel $J \subset B$.
+Then the exact sequence
+$$
+0 \to J/J^2 \to \Omega_{B/A} \otimes_B C \to \Omega_{C/A} \to 0
+$$
+of
+Lemma \ref{lemma-differential-seq}
+is split exact.
+\end{lemma}
+
+\begin{proof}
+This follows from the more general
+Lemma \ref{lemma-differential-seq-formally-smooth}
+because a smooth ring map is formally smooth, see
+Proposition \ref{proposition-smooth-formally-smooth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-application-NL-smooth}
+Let $A \to B \to C$ be ring maps. Assume $A \to C$ is surjective (so
+also $B \to C$ is) and $A \to B$ smooth.
+Denote $I = \Ker(A \to C)$ and $J = \Ker(B \to C)$.
+Then the sequence
+$$
+0 \to I/I^2 \to J/J^2 \to \Omega_{B/A} \otimes_B B/J \to 0
+$$
+of
+Lemma \ref{lemma-application-NL}
+is exact.
+\end{lemma}
+
+\begin{proof}
+This follows from the more general
+Lemma \ref{lemma-application-NL-formally-smooth}
+because a smooth ring map is formally smooth, see
+Proposition \ref{proposition-smooth-formally-smooth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-section-smooth}
+\begin{slogan}
+If $R$ is a summand of $S$ and $S$ is smooth over $R$, then the
+$I$-adic completion of $S$ is often a power series over $R$
+where $I$ is the kernel of the projection map from $S$ to $R$.
+\end{slogan}
+Let $\varphi : R \to S$ be a smooth ring map.
+Let $\sigma : S \to R$ be a left inverse to $\varphi$.
+Set $I = \Ker(\sigma)$. Then
+\begin{enumerate}
+\item $I/I^2$ is a finite locally free $R$-module, and
+\item if $I/I^2$ is free, then $S^\wedge \cong R[[t_1, \ldots, t_d]]$
+as $R$-algebras, where $S^\wedge$ is the $I$-adic completion of $S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-differential-seq-split}
+applied to $R \to S \to R$ we see that
+$I/I^2 = \Omega_{S/R} \otimes_{S, \sigma} R$.
+Since by definition of a smooth morphism the module $\Omega_{S/R}$ is
+finite locally free over $S$ we deduce that (1) holds.
+If $I/I^2$ is free, then choose $f_1, \ldots, f_d \in I$ whose images
+in $I/I^2$ form an $R$-basis. Consider the $R$-algebra map defined by
+$$
+\Psi : R[[x_1, \ldots, x_d]] \longrightarrow S^\wedge, \quad
+x_i \longmapsto f_i.
+$$
+Denote $P = R[[x_1, \ldots, x_d]]$ and $J = (x_1, \ldots, x_d) \subset P$.
+We write $\Psi_n : P/J^n \to S/I^n$ for the induced map of quotient rings.
+Note that $S/I^2 = \varphi(R) \oplus I/I^2$. Thus $\Psi_2$ is an
+isomorphism. Denote $\sigma_2 : S/I^2 \to P/J^2$ the inverse of $\Psi_2$.
+We will prove by induction on $n$ that for all $n > 2$ there exists an inverse
+$\sigma_n : S/I^n \to P/J^n$ of $\Psi_n$. Namely, as $S$ is formally
+smooth over $R$ (by
+Proposition \ref{proposition-smooth-formally-smooth})
+we see that in the solid diagram
+$$
+\xymatrix{
+S \ar@{..>}[r] \ar[rd]_{\sigma_{n - 1}} & P/J^n \ar[d] \\
+ & P/J^{n - 1}
+}
+$$
+of $R$-algebras we can fill in the dotted arrow by some $R$-algebra
+map $\tau : S \to P/J^n$ making the diagram commute. This induces an
+$R$-algebra map $\overline{\tau} : S/I^n \to P/J^n$ which is equal to
+$\sigma_{n - 1}$ modulo $J^{n - 1}$. By construction the map $\Psi_n$ is surjective
+and now $\overline{\tau} \circ \Psi_n$ is an $R$-algebra endomorphism
+of $P/J^n$ which maps $x_i$ to $x_i + \delta_{i, n}$ with
+$\delta_{i, n} \in J^{n -1}/J^n$. It follows that $\Psi_n$ is an
+isomorphism and hence it has an inverse $\sigma_n$.
+This proves the lemma.
+\end{proof}
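+
+\noindent
+The simplest instance of Lemma \ref{lemma-section-smooth} is
+$S = R[x_1, \ldots, x_d]$ with $\sigma$ the $R$-algebra map sending
+each $x_i$ to $0$. Then $I = (x_1, \ldots, x_d)$, the module
+$I/I^2$ is free with basis the classes of $x_1, \ldots, x_d$, and
+the $I$-adic completion is $S^\wedge = R[[x_1, \ldots, x_d]]$ as the
+lemma predicts.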
+
+
+
+
+
+\section{Smooth algebras over fields}
+\label{section-smooth-over-field}
+
+\noindent
+Warning: The following two lemmas do not hold over nonperfect
+fields in general.
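+For instance, if $k = \mathbf{F}_p(t)$ and $S = k[x]/(x^p - t)$,
+then $S$ is a field, so its maximal ideal $\mathfrak m = (0)$
+satisfies $\dim_{\kappa(\mathfrak m)} \mathfrak m/\mathfrak m^2 = 0$
+and $S_\mathfrak m = S$ is regular, while
+$\text{d}(x^p - t) = p x^{p - 1} \text{d}x = 0$ shows that
+$\Omega_{S/k} = S\text{d}x$ has dimension $1$.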
+
+\begin{lemma}
+\label{lemma-rank-omega}
+Let $k$ be an algebraically closed field.
+Let $S$ be a finite type $k$-algebra.
+Let $\mathfrak m \subset S$ be a maximal ideal.
+Then
+$$
+\dim_{\kappa(\mathfrak m)} \Omega_{S/k} \otimes_S \kappa(\mathfrak m)
+=
+\dim_{\kappa(\mathfrak m)} \mathfrak m/\mathfrak m^2.
+$$
+\end{lemma}
+
+\begin{proof}
+Consider the exact sequence
+$$
+\mathfrak m/\mathfrak m^2 \to
+\Omega_{S/k} \otimes_S \kappa(\mathfrak m) \to
+\Omega_{\kappa(\mathfrak m)/k} \to 0
+$$
+of Lemma \ref{lemma-differential-seq}. We would like to show that the
+first map is an isomorphism. Since $k$ is algebraically closed the
+map $k \to \kappa(\mathfrak m)$ is an isomorphism by
+Theorem \ref{theorem-nullstellensatz}.
+So the surjection $S \to \kappa(\mathfrak m)$ splits as a map of
+$k$-algebras, and Lemma \ref{lemma-differential-seq-split} shows
+that the sequence above is exact
+on the left. Since $\Omega_{\kappa(\mathfrak m)/k} = 0$, we win.
+\end{proof}
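+
+\noindent
+As an example take $k$ algebraically closed of characteristic $0$,
+$S = k[x, y]/(y^2 - x^3)$ and $\mathfrak m = (x, y)$. Since
+$y^2 - x^3 \in \mathfrak m^2$ we get
+$\dim \mathfrak m/\mathfrak m^2 = 2$. On the other hand
+$\Omega_{S/k} =
+(S\text{d}x \oplus S\text{d}y)/S(3x^2\text{d}x - 2y\text{d}y)$
+and the relation $3x^2\text{d}x - 2y\text{d}y$ vanishes modulo
+$\mathfrak m$, so
+$\dim_{\kappa(\mathfrak m)}
+\Omega_{S/k} \otimes_S \kappa(\mathfrak m) = 2$ as well,
+in accordance with the lemma.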
+
+\begin{lemma}
+\label{lemma-characterize-smooth-kbar}
+Let $k$ be an algebraically closed field.
+Let $S$ be a finite type $k$-algebra.
+Let $\mathfrak m \subset S$ be a maximal ideal.
+The following are equivalent:
+\begin{enumerate}
+\item The ring $S_{\mathfrak m}$ is a regular local ring.
+\item We have
+$\dim_{\kappa(\mathfrak m)} \Omega_{S/k} \otimes_S \kappa(\mathfrak m)
+\leq \dim(S_{\mathfrak m})$.
+\item We have
+$\dim_{\kappa(\mathfrak m)} \Omega_{S/k} \otimes_S \kappa(\mathfrak m)
+= \dim(S_{\mathfrak m})$.
+\item There exists a $g \in S$, $g \not \in \mathfrak m$
+such that $S_g$ is smooth over $k$. In other words $S/k$
+is smooth at $\mathfrak m$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that (1), (2) and (3) are equivalent by Lemma \ref{lemma-rank-omega}
+and Definition \ref{definition-regular}.
+
+\medskip\noindent
+Assume that $S$ is smooth at $\mathfrak m$.
+By Lemma \ref{lemma-smooth-syntomic} we see that
+$S_g$ is standard smooth over $k$
+for a suitable $g \in S$, $g \not \in \mathfrak m$.
+Hence by Lemma \ref{lemma-standard-smooth}
+we see that $\Omega_{S_g/k}$ is free of rank $\dim(S_g)$.
+Hence by Lemma \ref{lemma-rank-omega}
+we see that
+$\dim(S_{\mathfrak m}) = \dim_{\kappa(\mathfrak m)} \mathfrak m/\mathfrak m^2$,
+in other words $S_\mathfrak m$ is regular.
+
+\medskip\noindent
+Conversely, suppose that $S_{\mathfrak m}$ is regular.
+Let $d = \dim(S_{\mathfrak m}) = \dim \mathfrak m/\mathfrak m^2$.
+Choose a presentation $S = k[x_1, \ldots, x_n]/I$
+such that $x_i$ maps to an element of $\mathfrak m$ for
+all $i$. In other words, $\mathfrak m'' = (x_1, \ldots, x_n)$
+is the corresponding maximal ideal of $k[x_1, \ldots, x_n]$.
+Note that we have an exact sequence
+$$
+I/\mathfrak m''I \to \mathfrak m''/(\mathfrak m'')^2
+\to \mathfrak m/\mathfrak m^2 \to 0
+$$
+Pick $c = n - d$ elements $f_1, \ldots, f_c \in I$ such that
+their images in $\mathfrak m''/(\mathfrak m'')^2$ span the
+kernel of the map to $\mathfrak m/\mathfrak m^2$. This is clearly
+possible. Denote $J = (f_1, \ldots, f_c)$. So $J \subset I$.
+Denote $S' = k[x_1, \ldots, x_n]/J$ so there is a surjection
+$S' \to S$. Denote $\mathfrak m' = \mathfrak m''S'$ the corresponding
+maximal ideal of $S'$. Hence we have
+$$
+\xymatrix{
+k[x_1, \ldots, x_n] \ar[r] & S' \ar[r] & S \\
+\mathfrak m'' \ar[u] \ar[r] & \mathfrak m' \ar[r] \ar[u] &
+\mathfrak m \ar[u]
+}
+$$
+By our choice of $J$ the exact sequence
+$$
+J/\mathfrak m''J \to \mathfrak m''/(\mathfrak m'')^2
+\to \mathfrak m'/(\mathfrak m')^2 \to 0
+$$
+shows that $\dim( \mathfrak m'/(\mathfrak m')^2 ) = d$.
+Since $S'_{\mathfrak m'}$ surjects onto $S_{\mathfrak m}$
+we see that $\dim(S'_{\mathfrak m'}) \geq d$. Hence by
+the discussion preceding Definition \ref{definition-regular-local}
+we conclude that $S'_{\mathfrak m'}$ is
+regular of dimension $d$ as well. Because $S'$ was cut out
+by $c = n - d$ equations we
+conclude that there exists a $g' \in S'$, $g' \not \in \mathfrak m'$
+such that $S'_{g'}$ is a global complete intersection over $k$,
+see Lemma \ref{lemma-lci}.
+Also the map $S'_{\mathfrak m'} \to S_{\mathfrak m}$
+is a surjection of Noetherian local domains of the same
+dimension and hence an isomorphism. Hence $S' \to S$ is surjective
+with finitely generated kernel and becomes an isomorphism
+after localizing at $\mathfrak m'$. Thus we can find $g' \in S'$,
+$g' \not \in \mathfrak m'$ such that $S'_{g'} \to S_{g'}$
+is an isomorphism. All in all we conclude that
+after replacing $S$ by a principal localization we may
+assume that $S$ is a global complete intersection.
+
+\medskip\noindent
+At this point we may write $S = k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+with $\dim S = n - c$. Recall that the naive cotangent complex
+of this algebra is given by
+$$
+\bigoplus S \cdot f_j
+\to
+\bigoplus S \cdot \text{d}x_i
+$$
+see Lemma \ref{lemma-relative-global-complete-intersection-conormal}.
+By Lemma \ref{lemma-relative-global-complete-intersection-smooth}
+in order to show that $S$ is smooth at
+$\mathfrak m$ we have to show that one of the $c \times c$
+minors $g_I$ of the matrix ``$A$'' giving the map above
+does not vanish at $\mathfrak m$. By Lemma \ref{lemma-rank-omega}
+the matrix $A \bmod \mathfrak m$ has rank $c$. Thus we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-smooth-over-field}
+Let $k$ be any field.
+Let $S$ be a finite type $k$-algebra.
+Let $X = \Spec(S)$.
+Let $\mathfrak q \subset S$ be a prime
+corresponding to $x \in X$.
+The following are equivalent:
+\begin{enumerate}
+\item The $k$-algebra $S$ is smooth at $\mathfrak q$ over $k$.
+\item We have
+$\dim_{\kappa(\mathfrak q)} \Omega_{S/k} \otimes_S \kappa(\mathfrak q)
+\leq \dim_x X$.
+\item We have
+$\dim_{\kappa(\mathfrak q)} \Omega_{S/k} \otimes_S \kappa(\mathfrak q)
+= \dim_x X$.
+\end{enumerate}
+Moreover, in this case the local ring $S_{\mathfrak q}$ is regular.
+\end{lemma}
+
+\begin{proof}
+If $S$ is smooth at $\mathfrak q$ over $k$, then there exists
+a $g \in S$, $g \not \in \mathfrak q$ such that $S_g$ is
+standard smooth over $k$, see Lemma \ref{lemma-smooth-syntomic}.
+A standard smooth algebra over $k$ has a module of differentials
+which is free of rank equal to the dimension, see
+Lemma \ref{lemma-standard-smooth} (use that a relative global
+complete intersection over a field has dimension equal to the
+number of variables minus the number of equations). Thus we see that
+(1) implies (3). To finish the proof of the lemma it
+suffices to show that (2) implies (1) and that it implies
+that $S_{\mathfrak q}$ is regular.
+
+\medskip\noindent
+Assume (2). By Nakayama's Lemma \ref{lemma-NAK} we see that
+$\Omega_{S/k, \mathfrak q}$ can be generated by $\leq \dim_x X$ elements.
+We may replace $S$ by $S_g$ for some $g \in S$, $g \not \in \mathfrak q$
+such that $\Omega_{S/k}$ is generated by at most
+$\dim_x X$ elements.
+Let $K/k$ be an algebraically closed field extension
+such that there exists a $k$-algebra map $\psi : \kappa(\mathfrak q) \to K$.
+Consider $S_K = K \otimes_k S$. Let $\mathfrak m \subset S_K$
+be the maximal ideal corresponding to the surjection
+$$
+\xymatrix{
+S_K = K \otimes_k S \ar[r] &
+K \otimes_k \kappa(\mathfrak q)
+\ar[r]^-{\text{id}_K \otimes \psi} &
+K.
+}
+$$
+Note that $\mathfrak m \cap S = \mathfrak q$, in other words
+$\mathfrak m$ lies over $\mathfrak q$.
+By Lemma \ref{lemma-dimension-at-a-point-preserved-field-extension}
+the dimension of $X_K = \Spec(S_K)$ at the point corresponding
+to $\mathfrak m$ is $\dim_x X$. By
+Lemma \ref{lemma-dimension-closed-point-finite-type-field}
+this is equal to $\dim((S_K)_{\mathfrak m})$.
+By Lemma \ref{lemma-differentials-base-change}
+the module of differentials of $S_K$ over $K$ is
+the base change of $\Omega_{S/k}$, hence also
+generated by at most $\dim_x X = \dim((S_K)_{\mathfrak m})$
+elements. By Lemma \ref{lemma-characterize-smooth-kbar}
+we see that $S_K$ is smooth at $\mathfrak m$ over $K$.
+By Lemma \ref{lemma-flat-base-change-locus-smooth} this
+implies that $S$ is smooth at $\mathfrak q$ over $k$.
+This proves (1). Moreover, we know by
+Lemma \ref{lemma-characterize-smooth-kbar}
+that the local ring $(S_K)_{\mathfrak m}$ is regular.
+Since $S_{\mathfrak q} \to (S_K)_{\mathfrak m}$ is flat we
+conclude from Lemma \ref{lemma-flat-under-regular}
+that $S_{\mathfrak q}$ is regular.
+\end{proof}
+
+\noindent
+The following lemma can be significantly generalized
+(in several different ways).
+
+\begin{lemma}
+\label{lemma-computation-differential}
+Let $k$ be a field.
+Let $R$ be a Noetherian local ring containing $k$.
+Assume that the residue field $\kappa = R/\mathfrak m$
+is a finitely generated separable extension of $k$.
+Then the map
+$$
+\text{d} :
+\mathfrak m/\mathfrak m^2
+\longrightarrow
+\Omega_{R/k} \otimes_R \kappa(\mathfrak m)
+$$
+is injective.
+\end{lemma}
+
+\begin{proof}
+We may replace $R$ by $R/\mathfrak m^2$. Hence we may assume that
+$\mathfrak m^2 = 0$. By assumption we may write
+$\kappa = k(\overline{x}_1, \ldots, \overline{x}_r, \overline{y})$
+where $\overline{x}_1, \ldots, \overline{x}_r$ is a transcendence basis
+of $\kappa$ over $k$ and $\overline{y}$ is separable algebraic over
+$k(\overline{x}_1, \ldots, \overline{x}_r)$. Say its minimal
+equation is $P(\overline{y}) = 0$ with $P(T) = T^d + \sum_{i < d} a_iT^i$,
+with $a_i \in k(\overline{x}_1, \ldots, \overline{x}_r)$ and
+$P'(\overline{y}) \not = 0$. Choose any lifts
+$x_i \in R$ of the elements $\overline{x}_i \in \kappa$.
+This gives a commutative diagram
+$$
+\xymatrix{
+R \ar[r] & \kappa \\
+& k(\overline{x}_1, \ldots, \overline{x}_r) \ar[lu]^\varphi \ar[u]
+}
+$$
+of $k$-algebras. We want to extend the left upwards arrow
+$\varphi$ to a $k$-algebra
+map from $\kappa$ to $R$. To do this choose any $y \in R$ lifting
+$\overline{y}$. To see that it defines a $k$-algebra map
+defined on $\kappa \cong k(\overline{x}_1, \ldots, \overline{x}_r)[T]/(P)$
+all we have to show is that we may choose $y$ such that $P^\varphi(y) = 0$.
+If not, then we compute for $\delta \in \mathfrak m$ that
+$$
+P^\varphi(y + \delta) = P^\varphi(y) + (P^\varphi)'(y)\delta
+$$
+because $\mathfrak m^2 = 0$. Since
+$(P^\varphi)'(y)\delta = P'(\overline{y})\delta$ and
+$P'(\overline{y})$ is a unit in $\kappa$, we can adjust our choice
+of $y$ by a suitable $\delta$ so that $P^\varphi(y) = 0$ as desired.
+This shows that $R \cong \kappa \oplus \mathfrak m$ as
+$k$-algebras! From a direct computation of
+$\Omega_{(\kappa \oplus \mathfrak m)/k}$ the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-separable-smooth}
+Let $k$ be a field.
+Let $S$ be a finite type $k$-algebra.
+Let $\mathfrak q \subset S$ be a prime.
+Assume $\kappa(\mathfrak q)$ is separable over $k$.
+The following are equivalent:
+\begin{enumerate}
+\item The algebra $S$ is smooth at $\mathfrak q$ over $k$.
+\item The ring $S_{\mathfrak q}$ is regular.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Denote $R = S_{\mathfrak q}$, denote its maximal ideal
by $\mathfrak m$, and denote its residue field by $\kappa$.
By Lemmas \ref{lemma-computation-differential} and
\ref{lemma-differential-seq} we see that there is a short exact
+sequence
+$$
+0 \to \mathfrak m/\mathfrak m^2 \to
+\Omega_{R/k} \otimes_R \kappa \to
+\Omega_{\kappa/k} \to 0
+$$
+Note that $\Omega_{R/k} = \Omega_{S/k, \mathfrak q}$, see
+Lemma \ref{lemma-differentials-localize}.
+Moreover, since $\kappa$ is separable over $k$
+we have $\dim_{\kappa} \Omega_{\kappa/k} = \text{trdeg}_k(\kappa)$.
+Hence we get
+$$
+\dim_{\kappa} \Omega_{R/k} \otimes_R \kappa
+=
+\dim_\kappa \mathfrak m/\mathfrak m^2 + \text{trdeg}_k (\kappa)
+\geq
+\dim R + \text{trdeg}_k (\kappa)
+=
+\dim_{\mathfrak q} S
+$$
+(see Lemma \ref{lemma-dimension-at-a-point-finite-type-field} for
+the last equality)
+with equality if and only if $R$ is regular.
+Thus we win by applying Lemma \ref{lemma-characterize-smooth-over-field}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characteristic-zero}
+Let $R \to S$ be a $\mathbf{Q}$-algebra map.
+Let $f \in S$ be such that $\Omega_{S/R} = S \text{d}f \oplus C$
+for some $S$-submodule $C$. Then
+\begin{enumerate}
+\item $f$ is not nilpotent, and
+\item if $S$ is a Noetherian local ring, then $f$ is a nonzerodivisor in $S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For $a \in S$ write $\text{d}(a) = \theta(a)\text{d}f + c(a)$ for some
+$\theta(a) \in S$ and $c(a) \in C$.
+Consider the $R$-derivation $S \to S$, $a \mapsto \theta(a)$.
+Note that $\theta(f) = 1$.
+
+\medskip\noindent
If $f^n = 0$ with $n > 1$ minimal, then $0 = \theta(f^n) = n f^{n - 1}$.
As $n$ is invertible in the $\mathbf{Q}$-algebra $S$ this gives
$f^{n - 1} = 0$,
contradicting the minimality of $n$. We conclude that $f$ is not nilpotent.
+
+\medskip\noindent
+Suppose $fa = 0$. If $f$ is a unit then $a = 0$ and we win. Assume
+$f$ is not a unit. Then
+$0 = \theta(fa) = f\theta(a) + a$ by the Leibniz rule and hence $a \in (f)$.
+By induction suppose we have shown $fa = 0 \Rightarrow a \in (f^n)$.
+Then writing $a = f^nb$ we get
+$0 = \theta(f^{n + 1}b) = (n + 1)f^nb + f^{n + 1}\theta(b)$.
+Hence $a = f^n b = -f^{n + 1}\theta(b)/(n + 1) \in (f^{n + 1})$.
Since in the Noetherian local ring $S$ we have $\bigcap (f^n) = 0$ (see
Lemma \ref{lemma-intersect-powers-ideal-module-zero})
we win.
+\end{proof}
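\noindent
The assumption that $S$ is a $\mathbf{Q}$-algebra cannot be dropped.
For example, for the $\mathbf{F}_p$-algebra
$S = \mathbf{F}_p[x]/(x^p)$ we have
$$
\Omega_{S/\mathbf{F}_p} = S\,\text{d}x/(px^{p - 1}\,\text{d}x) = S\,\text{d}x
$$
so the hypothesis of the lemma holds for $f = x$ with $C = 0$,
yet $x$ is nilpotent.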
+
+\noindent
+The following is probably quite useless in applications.
+
+\begin{lemma}
+\label{lemma-characteristic-zero-local-smooth}
+Let $k$ be a field of characteristic $0$.
+Let $S$ be a finite type $k$-algebra.
+Let $\mathfrak q \subset S$ be a prime.
+The following are equivalent:
+\begin{enumerate}
+\item The algebra $S$ is smooth at $\mathfrak q$ over $k$.
+\item The $S_{\mathfrak q}$-module $\Omega_{S/k, \mathfrak q}$
+is (finite) free.
+\item The ring $S_{\mathfrak q}$ is regular.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In characteristic zero any field extension is separable and hence the
+equivalence of (1) and (3) follows from Lemma \ref{lemma-separable-smooth}.
+Also (1) implies (2) by definition of smooth algebras.
+Assume that $\Omega_{S/k, \mathfrak q}$ is free over $S_{\mathfrak q}$.
+We are going to use the notation and observations made in the
+proof of Lemma \ref{lemma-separable-smooth}. So $R = S_{\mathfrak q}$
+with maximal ideal $\mathfrak m$ and residue field $\kappa$.
+Our goal is to prove $R$ is regular.
+
+\medskip\noindent
+If $\mathfrak m/\mathfrak m^2 = 0$, then $\mathfrak m = 0$
+and $R \cong \kappa$. Hence $R$ is regular and we win.
+
+\medskip\noindent
+If $\mathfrak m/ \mathfrak m^2 \not = 0$, then choose any
+$f \in \mathfrak m$ whose image in $\mathfrak m/ \mathfrak m^2$
+is not zero. By Lemma \ref{lemma-computation-differential}
+we see that $\text{d}f$ has nonzero image in
+$\Omega_{R/k}/\mathfrak m\Omega_{R/k}$. By assumption
+$\Omega_{R/k} = \Omega_{S/k, \mathfrak q}$ is finite free and
+hence by Nakayama's Lemma \ref{lemma-NAK} we see that
+$\text{d}f$ generates a direct summand. We apply
+Lemma \ref{lemma-characteristic-zero}
+to deduce that $f$ is a nonzerodivisor in $R$.
+Furthermore, by Lemma \ref{lemma-differential-seq} we get an exact sequence
+$$
+(f)/(f^2) \to \Omega_{R/k} \otimes_R R/fR \to \Omega_{(R/fR)/k} \to 0
+$$
+This implies that $\Omega_{(R/fR)/k}$ is finite free as well.
+Hence by induction we see that $R/fR$ is a regular local ring.
+Since $f \in \mathfrak m$ was a nonzerodivisor we
+conclude that $R$ is regular, see Lemma \ref{lemma-regular-mod-x}.
+\end{proof}
+
+\begin{example}
+\label{example-characteristic-p}
+Lemma \ref{lemma-characteristic-zero-local-smooth}
+does not hold in characteristic $p > 0$.
+The standard examples are the ring maps
+$$
+\mathbf{F}_p \longrightarrow \mathbf{F}_p[x]/(x^p)
+$$
whose module of differentials is free but which is clearly not smooth, and
the ring map ($p > 2$)
$$
\mathbf{F}_p(t) \to \mathbf{F}_p(t)[x, y]/(x^p + y^2 - t)
$$
which is not smooth at the prime $\mathfrak q = (y, x^p - t)$
even though the local ring at $\mathfrak q$ is regular.
+\end{example}
+
+\noindent
+Using the material above we can characterize smoothness at the generic
+point in terms of field extensions.
+
+\begin{lemma}
+\label{lemma-smooth-at-generic-point}
+Let $R \to S$ be an injective finite type ring map with $R$ and $S$ domains.
+Then $R \to S$ is smooth at $\mathfrak q = (0)$ if and only if
+the induced extension $L/K$ of fraction fields is separable.
+\end{lemma}
+
+\begin{proof}
+Assume $R \to S$ is smooth at $(0)$. We may replace $S$ by $S_g$
+for some nonzero $g \in S$ and assume that $R \to S$ is smooth.
+Then $K \to S \otimes_R K$ is smooth
+(Lemma \ref{lemma-base-change-smooth}). Moreover, for any
+field extension $K'/K$ the ring map $K' \to S \otimes_R K'$
+is smooth as well. Hence $S \otimes_R K'$ is a regular ring
+by Lemma \ref{lemma-characterize-smooth-over-field}, in particular reduced.
It follows that $S \otimes_R K$ is geometrically reduced over $K$.
+Hence $L$ is geometrically reduced over $K$, see
+Lemma \ref{lemma-geometrically-reduced-permanence}.
+Hence $L/K$ is separable by
+Lemma \ref{lemma-characterize-separable-field-extensions}.
+
+\medskip\noindent
+Conversely, assume that $L/K$ is separable.
+We may assume $R \to S$ is of finite presentation, see
+Lemma \ref{lemma-generic-finite-presentation}.
+It suffices to prove that $K \to S \otimes_R K$ is smooth
+at $(0)$, see
+Lemma \ref{lemma-flat-base-change-locus-smooth}.
+This follows from Lemma \ref{lemma-separable-smooth}, the
+fact that a field is a regular ring,
+and the assumption that $L/K$ is separable.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Smooth ring maps in the Noetherian case}
+\label{section-smooth-Noetherian}
+
+\begin{definition}
+\label{definition-small-extension}
+Let $\varphi : B' \to B$ be a ring map.
+We say $\varphi$ is a {\it small extension} if
+$B'$ and $B$ are local Artinian rings, $\varphi$ is surjective
+and $I = \Ker(\varphi)$ has length $1$ as a $B'$-module.
+\end{definition}
+
+\noindent
+Clearly this means that $I^2 = 0$ and that $I = (x)$ for some
+$x \in B'$ such that $\mathfrak m' x = 0$ where $\mathfrak m' \subset B'$
+is the maximal ideal.
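\noindent
For instance, over a field $k$ the surjection
$k[\epsilon] = k[x]/(x^2) \to k$, $\epsilon \mapsto 0$,
is a small extension: its kernel is $I = (\epsilon)$, which has length $1$,
and $\mathfrak m' I = 0$ since $\mathfrak m' = (\epsilon)$ and
$\epsilon^2 = 0$. Similarly
$\mathbf{Z}/p^{n + 1}\mathbf{Z} \to \mathbf{Z}/p^n\mathbf{Z}$
is a small extension with kernel $(p^n)$ of length $1$.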
+
+\begin{lemma}
+\label{lemma-smooth-test-artinian}
+Let $R \to S$ be a ring map. Let $\mathfrak q$ be a prime ideal of
+$S$ lying over $\mathfrak p \subset R$. Assume $R$ is Noetherian
+and $R \to S$ of finite type.
+The following are equivalent:
+\begin{enumerate}
+\item $R \to S$ is smooth at $\mathfrak q$,
+\item for every surjection of local $R$-algebras
+$(B', \mathfrak m') \to (B, \mathfrak m)$
+with $\Ker(B' \to B)$ having square zero
+and every solid commutative diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & B \\
+R \ar[r] \ar[u] & B' \ar[u]
+}
+$$
+such that $\mathfrak q = S \cap \mathfrak m$ there exists a dotted
+arrow making the diagram commute,
+\item same as in (2) but with $B' \to B$ ranging over small extensions, and
+\item same as in (2) but with $B' \to B$ ranging over small extensions
+such that in addition $S \to B$ induces an isomorphism
+$\kappa(\mathfrak q) \cong \kappa(\mathfrak m)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). This means there exists a $g \in S$, $g \not \in \mathfrak q$
+such that $R \to S_g$ is smooth. By
+Proposition \ref{proposition-smooth-formally-smooth}
+we know that $R \to S_g$ is formally smooth. Note that given any diagram
+as in (2) the map $S \to B$ factors automatically through $S_{\mathfrak q}$
+and a fortiori through $S_g$. The formal smoothness of $S_g$ over $R$
+gives us a morphism $S_g \to B'$ fitting into a similar diagram with $S_g$ at
+the upper left corner. Composing with $S \to S_g$ gives the desired arrow.
+In other words, we have shown that (1) implies (2).
+
+\medskip\noindent
+Clearly (2) implies (3) and (3) implies (4).
+
+\medskip\noindent
+Assume (4). We are going to show that (1) holds, thereby finishing the
+proof of the lemma. Choose a presentation
+$S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$.
+This is possible as $S$ is of finite type over $R$ and therefore of finite
+presentation (see
+Lemma \ref{lemma-Noetherian-finite-type-is-finite-presentation}).
+Set $I = (f_1, \ldots, f_m)$.
+Consider the naive cotangent complex
+$$
+\text{d} : I/I^2
+\longrightarrow
+\bigoplus\nolimits_{j = 1}^m S\text{d}x_j
+$$
+of this presentation (see Section \ref{section-netherlander}).
+It suffices to show that when we localize this complex at $\mathfrak q$
+then the map becomes a split injection, see Lemma \ref{lemma-smooth-at-point}.
+Denote $S' = R[x_1, \ldots, x_n]/I^2$.
+By Lemma \ref{lemma-differential-mod-power-ideal} we have
+$$
+S \otimes_{S'} \Omega_{S'/R} =
+S \otimes_{R[x_1, \ldots, x_n]} \Omega_{R[x_1, \ldots, x_n]/R} =
+\bigoplus\nolimits_{j = 1}^m S\text{d}x_j.
+$$
+Thus the map
+$$
+\text{d} :
+I/I^2
+\longrightarrow
+S \otimes_{S'} \Omega_{S'/R}
+$$
+is the same as the map in the naive cotangent complex above. In particular
+the truth of the assertion we are trying to prove
+depends only on the three rings $R \to S' \to S$.
+Let $\mathfrak q' \subset R[x_1, \ldots, x_n]$ be the prime ideal
+corresponding to $\mathfrak q$. Since
+localization commutes with taking modules of differentials
+(Lemma \ref{lemma-differentials-localize}) we see that it suffices to show
+that the map
+\begin{equation}
+\label{equation-target-map}
+\text{d} :
+I_{\mathfrak q'}/I_{\mathfrak q'}^2
+\longrightarrow
+S_{\mathfrak q} \otimes_{S'_{\mathfrak q'}} \Omega_{S'_{\mathfrak q'}/R}
+\end{equation}
+coming from $R \to S'_{\mathfrak q'} \to S_{\mathfrak q}$
+is a split injection.
+
+\medskip\noindent
+Let $N \in \mathbf{N}$ be an integer.
+Consider the ring
+$$
+B'_N = S'_{\mathfrak q'} / (\mathfrak q')^N S'_{\mathfrak q'}
+= (S'/(\mathfrak q')^N S')_{\mathfrak q'}
+$$
+and its quotient $B_N = B'_N/IB'_N$. Note that
+$B_N \cong S_{\mathfrak q}/\mathfrak q^NS_{\mathfrak q}$.
+Observe that $B'_N$ is an Artinian local ring since it is the
+quotient of a local Noetherian ring by a power of its maximal ideal.
+Consider a filtration of the kernel $I_N$ of $B'_N \to B_N$
+by $B'_N$-submodules
+$$
+0 \subset J_{N, 1} \subset J_{N, 2} \subset \ldots \subset J_{N, n(N)} = I_N
+$$
+such that each successive quotient $J_{N, i}/J_{N, i - 1}$ has length $1$.
+(As $B'_N$ is Artinian such a filtration exists.)
+This gives a sequence of small extensions
+$$
+B'_N \to B'_N/J_{N, 1} \to B'_N/J_{N, 2} \to \ldots \to
+B'_N/J_{N, n(N)} = B'_N/I_N
+= B_N = S_{\mathfrak q}/\mathfrak q^NS_{\mathfrak q}
+$$
+Applying condition (4) successively to these small extensions
+starting with the map $S \to B_N$ we see there
+exists a commutative diagram
+$$
+\xymatrix{
+S \ar[r] \ar[rd] & B_N \\
+R \ar[r] \ar[u] & B'_N \ar[u]
+}
+$$
+Clearly the ring map $S \to B'_N$ factors as $S \to S_{\mathfrak q} \to B'_N$
+where $S_{\mathfrak q} \to B'_N$ is a local homomorphism of local rings.
Moreover, since the $N$th power of the maximal ideal of $B'_N$ is zero
we conclude that $S_{\mathfrak q} \to B'_N$ factors through
$S_{\mathfrak q}/\mathfrak q^NS_{\mathfrak q} = B_N$. In other words
+we have shown that for all $N \in \mathbf{N}$ the surjection of
+$R$-algebras $B'_N \to B_N$ has a splitting.
+
+\medskip\noindent
+Consider the presentation
+$$
+I_N \to B_N \otimes_{B'_N} \Omega_{B'_N/R} \to \Omega_{B_N/R} \to 0
+$$
+coming from the surjection $B'_N \to B_N$ with kernel $I_N$ (see
+Lemma \ref{lemma-differential-seq}). By the above the $R$-algebra map
+$B'_N \to B_N$ has a right inverse. Hence by
+Lemma \ref{lemma-differential-seq-split} we see that the sequence above
+is split exact! Thus for every $N$ the map
+$$
+I_N \longrightarrow B_N \otimes_{B'_N} \Omega_{B'_N/R}
+$$
+is a split injection. The rest of the proof is gotten by unwinding what
+this means exactly. Note that
+$$
+I_N = I_{\mathfrak q'}/
+(I_{\mathfrak q'}^2 + (\mathfrak q')^N \cap I_{\mathfrak q'})
+$$
+By Artin-Rees (Lemma \ref{lemma-Artin-Rees}) we find a $c \geq 0$
+such that
+$$
+S_{\mathfrak q}/\mathfrak q^{N - c}S_{\mathfrak q}
+\otimes_{S_{\mathfrak q}} I_N =
+S_{\mathfrak q}/\mathfrak q^{N - c}S_{\mathfrak q}
+\otimes_{S_{\mathfrak q}}
+I_{\mathfrak q'}/I_{\mathfrak q'}^2
+$$
+for all $N \geq c$
(these tensor products are just a fancy way of dividing by
+$\mathfrak q^{N - c}$). We may of course assume $c \geq 1$.
+By Lemma \ref{lemma-differential-mod-power-ideal} we see that
+$$
+S'_{\mathfrak q'}/(\mathfrak q')^{N - c}S'_{\mathfrak q'}
+\otimes_{S'_{\mathfrak q'}} \Omega_{B'_N/R} =
+S'_{\mathfrak q'}/(\mathfrak q')^{N - c}S'_{\mathfrak q'}
+\otimes_{S'_{\mathfrak q'}} \Omega_{S'_{\mathfrak q'}/R}
+$$
Tensoring this further with $B_N = S_{\mathfrak q}/\mathfrak q^N S_{\mathfrak q}$
we see that
+$$
+S_{\mathfrak q}/\mathfrak q^{N - c}S_{\mathfrak q}
+\otimes_{S'_{\mathfrak q'}} \Omega_{B'_N/R} =
+S_{\mathfrak q}/\mathfrak q^{N - c}S_{\mathfrak q}
+\otimes_{S'_{\mathfrak q'}} \Omega_{S'_{\mathfrak q'}/R}.
+$$
+Since a split injection remains a split injection after tensoring
+with anything we see that
+$$
+S_{\mathfrak q}/\mathfrak q^{N - c}S_{\mathfrak q}
+\otimes_{S_{\mathfrak q}}
+(\ref{equation-target-map}) =
+S_{\mathfrak q}/\mathfrak q^{N - c}S_{\mathfrak q}
+\otimes_{S_{\mathfrak q}/\mathfrak q^N S_{\mathfrak q}}
+(I_N \longrightarrow B_N \otimes_{B'_N} \Omega_{B'_N/R})
+$$
+is a split injection for all $N \geq c$. By
+Lemma \ref{lemma-split-injection-after-completion} we see that
+(\ref{equation-target-map}) is a split injection. This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Overview of results on smooth ring maps}
+\label{section-smooth-overview}
+
+\noindent
+Here is a list of results on smooth ring maps that we
+proved in the preceding sections. For more precise statements
+and definitions please consult the references given.
+\begin{enumerate}
+\item A ring map $R \to S$ is smooth if it is of finite presentation
+and the naive cotangent complex of $S/R$ is quasi-isomorphic to
+a finite projective $S$-module in degree $0$, see
+Definition \ref{definition-smooth}.
+\item If $S$ is smooth over $R$, then $\Omega_{S/R}$ is a finite projective
+$S$-module, see discussion following Definition \ref{definition-smooth}.
+\item The property of being smooth is local on $S$, see
+Lemma \ref{lemma-locally-smooth}.
+\item The property of being smooth is stable under base change, see
+Lemma \ref{lemma-base-change-smooth}.
+\item The property of being smooth is stable under composition, see
+Lemma \ref{lemma-compose-smooth}.
+\item A smooth ring map is syntomic, in particular flat, see
+Lemma \ref{lemma-smooth-syntomic}.
+\item A finitely presented, flat ring map with smooth fibre rings
+is smooth, see Lemma \ref{lemma-flat-fibre-smooth}.
+\item A finitely presented ring map $R \to S$ is smooth if and
+only if it is formally smooth, see
+Proposition \ref{proposition-smooth-formally-smooth}.
+\item If $R \to S$ is a finite type ring map with $R$ Noetherian
+then to check that $R \to S$ is smooth it suffices to check the lifting
+property of formal smoothness along small extensions of Artinian
+local rings, see Lemma \ref{lemma-smooth-test-artinian}.
+\item A smooth ring map $R \to S$ is the base change
+of a smooth ring map $R_0 \to S_0$
+with $R_0$ of finite type over $\mathbf{Z}$, see
+Lemma \ref{lemma-finite-presentation-fs-Noetherian}.
+\item Formation of the set of points where a
+ring map is smooth commutes with flat base change, see
+Lemma \ref{lemma-flat-base-change-locus-smooth}.
+\item If $S$ is of finite type over an algebraically closed
+field $k$, and $\mathfrak m \subset S$ a maximal ideal,
+then the following are equivalent
+\begin{enumerate}
+\item $S$ is smooth over $k$ in a neighbourhood of $\mathfrak m$,
+\item $S_{\mathfrak m}$ is a regular local ring,
\item $\dim(S_{\mathfrak m}) =
\dim_{\kappa(\mathfrak m)} \Omega_{S/k} \otimes_S \kappa(\mathfrak m)$.
+\end{enumerate}
+see Lemma \ref{lemma-characterize-smooth-kbar}.
+\item If $S$ is of finite type over a field $k$, and
+$\mathfrak q \subset S$ a prime ideal,
+then the following are equivalent
+\begin{enumerate}
+\item $S$ is smooth over $k$ in a neighbourhood of $\mathfrak q$,
+\item $\dim_{\mathfrak q}(S/k) =
+\dim_{\kappa(\mathfrak q)} \Omega_{S/k} \otimes_S \kappa(\mathfrak q)$.
+\end{enumerate}
+see Lemma \ref{lemma-characterize-smooth-over-field}.
+\item If $S$ is smooth over a field, then all its local rings are
+regular, see Lemma \ref{lemma-characterize-smooth-over-field}.
+\item If $S$ is of finite type over a field $k$,
+$\mathfrak q \subset S$ a prime ideal,
+the field extension $\kappa(\mathfrak q)/k$ is separable
+and $S_{\mathfrak q}$ is regular, then $S$ is smooth over $k$ at
+$\mathfrak q$, see Lemma \ref{lemma-separable-smooth}.
+\item If $S$ is of finite type over a field $k$,
+if $k$ has characteristic $0$, if
$\mathfrak q \subset S$ is a prime ideal, and if
+$\Omega_{S/k, \mathfrak q}$ is free, then $S$ is smooth over $k$ at
+$\mathfrak q$, see Lemma \ref{lemma-characteristic-zero-local-smooth}.
+\end{enumerate}
+Some of these results were proved using the notion of a standard
+smooth ring map, see Definition \ref{definition-standard-smooth}.
This plays the same role for smooth ring maps that a relative global
complete intersection plays for syntomic ring maps.
+It is also the easiest way to make examples.
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{\'Etale ring maps}
+\label{section-etale}
+
+\noindent
+An \'etale ring map is a smooth ring map whose relative dimension
+is equal to zero. This is the same as the following slightly more
+direct definition.
+
+\begin{definition}
+\label{definition-etale}
+Let $R \to S$ be a ring map. We say $R \to S$ is {\it \'etale} if it is
+of finite presentation and the naive cotangent complex
+$\NL_{S/R}$ is quasi-isomorphic to zero. Given a prime $\mathfrak q$
+of $S$ we say that $R \to S$ is {\it \'etale at $\mathfrak q$}
+if there exists a $g \in S$, $g \not \in \mathfrak q$ such that
+$R \to S_g$ is \'etale.
+\end{definition}
+
+\noindent
+In particular we see that $\Omega_{S/R} = 0$ if $S$ is \'etale over $R$.
+If $R \to S$ is smooth,
+then $R \to S$ is \'etale if and only if $\Omega_{S/R} = 0$.
+From our results on smooth ring maps we automatically get a whole host
+of results for \'etale maps. We summarize these in Lemma \ref{lemma-etale}
+below. But before we do so we prove that {\it any} \'etale ring map is
+standard smooth.
+
+\begin{lemma}
+\label{lemma-etale-standard-smooth}
+Any \'etale ring map is standard smooth. More precisely, if
+$R \to S$ is \'etale, then there exists a presentation
+$S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$ such that
+the image of $\det(\partial f_j/\partial x_i)$ is invertible in $S$.
+\end{lemma}
+
+\begin{proof}
+Let $R \to S$ be \'etale. Choose a presentation $S = R[x_1, \ldots, x_n]/I$.
+As $R \to S$ is \'etale we know that
+$$
+\text{d} :
+I/I^2
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} S\text{d}x_i
+$$
+is an isomorphism, in particular $I/I^2$ is a free $S$-module.
+Thus by Lemma \ref{lemma-huber} we may assume (after possibly changing
+the presentation), that $I = (f_1, \ldots, f_c)$ such that the classes
+$f_i \bmod I^2$ form a basis of $I/I^2$. It follows immediately from
+the fact that the displayed map above is an isomorphism that $c = n$ and
+that $\det(\partial f_j/\partial x_i)$ is invertible in $S$.
+\end{proof}
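\noindent
For example, if $a \in R$ and both $2$ and $a$ are invertible in $R$,
then $S = R[x]/(x^2 - a)$ is \'etale over $R$: here $n = 1$,
$f_1 = x^2 - a$ and
$$
\det(\partial f_1/\partial x_1) = 2x
$$
is invertible in $S$ because $(2x)^2 = 4a$ is a unit.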
+
+\begin{lemma}
+\label{lemma-etale}
+Results on \'etale ring maps.
+\begin{enumerate}
+\item The ring map $R \to R_f$ is \'etale for any ring $R$ and any $f \in R$.
+\item Compositions of \'etale ring maps are \'etale.
+\item A base change of an \'etale ring map is \'etale.
+\item The property of being \'etale is local: Given a ring map
+$R \to S$ and elements $g_1, \ldots, g_m \in S$ which generate the unit ideal
+such that $R \to S_{g_j}$ is \'etale for $j = 1, \ldots, m$ then
+$R \to S$ is \'etale.
+\item Given $R \to S$ of finite presentation, and a flat ring map
+$R \to R'$, set $S' = R' \otimes_R S$. The set of primes where $R' \to S'$
+is \'etale is the inverse image via $\Spec(S') \to \Spec(S)$
+of the set of primes where $R \to S$ is \'etale.
+\item An \'etale ring map is syntomic, in particular flat.
+\item If $S$ is finite type over a field $k$, then $S$ is \'etale over
+$k$ if and only if $\Omega_{S/k} = 0$.
+\item Any \'etale ring map $R \to S$ is the base change of an \'etale
+ring map $R_0 \to S_0$ with $R_0$ of finite type over $\mathbf{Z}$.
+\item Let $A = \colim A_i$ be a filtered colimit of rings.
+Let $A \to B$ be an \'etale ring map. Then there exists an \'etale ring
+map $A_i \to B_i$ for some $i$ such that $B \cong A \otimes_{A_i} B_i$.
+\item Let $A$ be a ring. Let $S$ be a multiplicative subset of $A$.
+Let $S^{-1}A \to B'$ be \'etale. Then there exists an \'etale ring map
+$A \to B$ such that $B' \cong S^{-1}B$.
+\item Let $A$ be a ring. Let $B = B' \times B''$ be a product of $A$-algebras.
+Then $B$ is \'etale over $A$ if and only if both $B'$ and $B''$ are
+\'etale over $A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In each case we use the corresponding result for smooth ring maps with
+a small argument added to show that $\Omega_{S/R}$ is zero.
+
+\medskip\noindent
+Proof of (1). The ring map $R \to R_f$ is smooth and $\Omega_{R_f/R} = 0$.
+
+\medskip\noindent
+Proof of (2). The composition $A \to C$ of smooth maps $A \to B$ and
+$B \to C$ is smooth, see Lemma \ref{lemma-compose-smooth}. By
+Lemma \ref{lemma-exact-sequence-differentials} we see that
+$\Omega_{C/A}$ is zero as both $\Omega_{C/B}$ and $\Omega_{B/A}$ are zero.
+
+\medskip\noindent
+Proof of (3). Let $R \to S$ be \'etale and $R \to R'$ be arbitrary.
+Then $R' \to S' = R' \otimes_R S$ is smooth, see
+Lemma \ref{lemma-base-change-smooth}. Since
+$\Omega_{S'/R'} = S' \otimes_S \Omega_{S/R}$ by
+Lemma \ref{lemma-differentials-base-change}
+we conclude that $\Omega_{S'/R'} = 0$. Hence $R' \to S'$ is \'etale.
+
+\medskip\noindent
+Proof of (4). Assume the hypotheses of (4). By
+Lemma \ref{lemma-locally-smooth} we see that $R \to S$ is smooth.
We are also given that $\Omega_{S_{g_j}/R} = (\Omega_{S/R})_{g_j} = 0$
for all $j$. Then $\Omega_{S/R} = 0$, see Lemma \ref{lemma-cover}.
+
+\medskip\noindent
+Proof of (5). The result for smooth maps is
+Lemma \ref{lemma-flat-base-change-locus-smooth}.
+In the proof of that lemma we used that $\NL_{S/R} \otimes_S S'$
+is homotopy equivalent to $\NL_{S'/R'}$.
+This reduces us to showing that if $M$ is a finitely presented
+$S$-module the set of primes $\mathfrak q'$ of $S'$
+such that $(M \otimes_S S')_{\mathfrak q'} = 0$ is the inverse
+image of the set of primes $\mathfrak q$ of $S$ such that
+$M_{\mathfrak q} = 0$. This follows from Lemma \ref{lemma-support-base-change}.
+
+\medskip\noindent
+Proof of (6). Follows directly from the corresponding result for
+smooth ring maps (Lemma \ref{lemma-smooth-syntomic}).
+
+\medskip\noindent
+Proof of (7). Follows from Lemma \ref{lemma-characterize-smooth-over-field}
+and the definitions.
+
+\medskip\noindent
+Proof of (8). Lemma \ref{lemma-finite-presentation-fs-Noetherian}
+gives the result for smooth ring maps. The resulting smooth ring map
+$R_0 \to S_0$ satisfies the
+hypotheses of Lemma \ref{lemma-relative-dimension-CM}, and hence we may
+replace $S_0$ by the factor of relative dimension $0$ over $R_0$.
+
+\medskip\noindent
+Proof of (9). Follows from (8) since $R_0 \to A$ will factor through
+$A_i$ for some $i$ by Lemma \ref{lemma-characterize-finite-presentation}.
+
+\medskip\noindent
+Proof of (10). Follows from (9), (1), and (2) since $S^{-1}A$ is a
+filtered colimit of principal localizations of $A$.
+
+\medskip\noindent
+Proof of (11). Use Lemma \ref{lemma-product-smooth} to see the
+result for smoothness and then use that $\Omega_{B/A}$
+is zero if and only if both $\Omega_{B'/A}$ and $\Omega_{B''/A}$
+are zero.
+\end{proof}
+
+\noindent
+Next we work out in more detail what it means to be \'etale
+over a field.
+
+\begin{lemma}
+\label{lemma-etale-over-field}
+Let $k$ be a field. A ring map $k \to S$ is \'etale if and only if $S$
+is isomorphic as a $k$-algebra to a finite product of finite separable
+extensions of $k$.
+\end{lemma}
+
+\begin{proof}
+We are going to use without further mention: if
+$S = S_1 \times \ldots \times S_n$ is a finite product
+of $k$-algebras, then $S$ is \'etale over $k$ if and only
+if each $S_i$ is \'etale over $k$. See Lemma \ref{lemma-etale} part (11).
+
+\medskip\noindent
+If $k'/k$ is a finite separable field extension then we can
+write $k' = k(\alpha) \cong k[x]/(f)$. Here $f$ is the minimal
+polynomial of the element $\alpha$. Since $k'$ is separable over $k$
we have $\gcd(f, f') = 1$, so $f'(\alpha)$ is a unit in $k'$. This
implies that $\text{d} : k'\cdot f \to k' \cdot \text{d}x$, which sends
$f$ to $f'(\alpha)\,\text{d}x$, is an isomorphism.
Hence $k \to k'$ is \'etale. Thus if $S$
+is a finite product of finite separable extension of $k$, then
+$S$ is \'etale over $k$.
+
+\medskip\noindent
+Conversely, suppose that $k \to S$ is \'etale. Then $S$ is smooth
+over $k$ and $\Omega_{S/k} = 0$. By
+Lemma \ref{lemma-characterize-smooth-over-field}
+we see that $\dim_\mathfrak m \Spec(S) = 0$ for every
+maximal ideal $\mathfrak m$ of $S$. Thus $\dim(S) = 0$. By
+Proposition \ref{proposition-dimension-zero-ring}
+we find that $S$ is a finite product of Artinian local rings.
+By the already used
+Lemma \ref{lemma-characterize-smooth-over-field}
+these local rings are fields. Hence we may assume $S = k'$ is a field.
+By the Hilbert Nullstellensatz (Theorem \ref{theorem-nullstellensatz})
+we see that the extension $k'/k$ is finite. The smoothness
+of $k \to k'$ implies by Lemma \ref{lemma-smooth-at-generic-point}
+that $k'/k$ is a separable extension and
+the proof is complete.
+\end{proof}
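\noindent
Separability is essential here. For $k = \mathbf{F}_p(t)$ and the finite
field extension $k' = k[x]/(x^p - t)$ we have
$$
\text{d}(x^p - t) = px^{p - 1}\,\text{d}x = 0
$$
so that $\Omega_{k'/k} = k'\,\text{d}x \not = 0$ and $k \to k'$ is
not \'etale.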
+
+\begin{lemma}
+\label{lemma-etale-at-prime}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q \subset S$ be a prime lying over $\mathfrak p$ in $R$.
+If $S/R$ is \'etale at $\mathfrak q$ then
+\begin{enumerate}
+\item we have $\mathfrak p S_{\mathfrak q} = \mathfrak qS_{\mathfrak q}$
+is the maximal ideal of the local ring $S_{\mathfrak q}$, and
+\item the field extension $\kappa(\mathfrak q)/\kappa(\mathfrak p)$
+is finite separable.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First we may replace $S$ by $S_g$ for some $g \in S$, $g \not \in \mathfrak q$
+and assume that $R \to S$ is \'etale. Then the lemma follows from
+Lemma \ref{lemma-etale-over-field} by unwinding the
+fact that $S \otimes_R \kappa(\mathfrak p)$ is \'etale over
+$\kappa(\mathfrak p)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-quasi-finite}
+An \'etale ring map is quasi-finite.
+\end{lemma}
+
+\begin{proof}
+Let $R \to S$ be an \'etale ring map. By definition $R \to S$ is of finite type.
+For any prime $\mathfrak p \subset R$ the fibre ring
+$S \otimes_R \kappa(\mathfrak p)$ is \'etale over $\kappa(\mathfrak p)$
and hence a finite product of fields finite separable over
+$\kappa(\mathfrak p)$, in particular finite over $\kappa(\mathfrak p)$.
+Thus $R \to S$ is quasi-finite by Lemma \ref{lemma-quasi-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-etale}
+Let $R \to S$ be a ring map. Let $\mathfrak q$ be a prime of $S$
+lying over a prime $\mathfrak p$ of $R$. If
+\begin{enumerate}
+\item $R \to S$ is of finite presentation,
\item $R_{\mathfrak p} \to S_{\mathfrak q}$ is flat,
+\item $\mathfrak p S_{\mathfrak q}$ is the maximal ideal
+of the local ring $S_{\mathfrak q}$, and
+\item the field extension $\kappa(\mathfrak q)/\kappa(\mathfrak p)$
+is finite separable,
+\end{enumerate}
+then $R \to S$ is \'etale at $\mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+Apply
+Lemma \ref{lemma-isolated-point-fibre}
+to find a $g \in S$, $g \not \in \mathfrak q$ such that
+$\mathfrak q$ is the only prime of $S_g$ lying over $\mathfrak p$.
+We may and do replace $S$ by $S_g$. Then
+$S \otimes_R \kappa(\mathfrak p)$ has a unique prime, hence is a
+local ring, hence is equal to
+$S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}
+\cong \kappa(\mathfrak q)$.
+By Lemma \ref{lemma-flat-fibre-smooth}
+there exists a $g \in S$, $g \not \in \mathfrak q$
such that $R \to S_g$ is smooth. Replacing $S$ by $S_g$ again we may
+assume that $R \to S$ is smooth. By
+Lemma \ref{lemma-smooth-syntomic} we may even assume that
+$R \to S$ is standard smooth, say $S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$.
+Since $S \otimes_R \kappa(\mathfrak p) = \kappa(\mathfrak q)$
+has dimension $0$ we conclude that $n = c$, i.e., $R \to S$ is \'etale.
+\end{proof}
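\noindent
As an illustration consider
$\mathbf{Z} \to \mathbf{Z}[i] = \mathbf{Z}[x]/(x^2 + 1)$.
At the prime $\mathfrak q = (1 + i)$ lying over $\mathfrak p = (2)$
we have $(1 + i)^2 = 2i$, so
$$
\mathfrak p \mathbf{Z}[i]_{\mathfrak q} =
\mathfrak q^2\mathbf{Z}[i]_{\mathfrak q} \not =
\mathfrak q\mathbf{Z}[i]_{\mathfrak q}
$$
and condition (3) fails. Indeed $\mathbf{Z} \to \mathbf{Z}[i]$ is \'etale
exactly at the primes not lying over $2$, as the derivative $2x$ of
$x^2 + 1$ maps to the element $2i$, which is invertible in
$\mathbf{Z}[i][1/2]$.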
+
+\noindent
+Here is a completely new phenomenon.
+
+\begin{lemma}
+\label{lemma-map-between-etale}
+Let $R \to S$ and $R \to S'$ be \'etale.
+Then any $R$-algebra map $S' \to S$ is \'etale.
+\end{lemma}
+
+\begin{proof}
+First of all we note that $S' \to S$ is of finite presentation by
+Lemma \ref{lemma-compose-finite-type}.
+Let $\mathfrak q \subset S$ be a prime ideal lying over the primes
+$\mathfrak q' \subset S'$ and $\mathfrak p \subset R$.
+By Lemma \ref{lemma-etale-at-prime} the ring map
+$S'_{\mathfrak q'}/\mathfrak p S'_{\mathfrak q'} \to
+S_{\mathfrak q}/\mathfrak p S_{\mathfrak q}$
is a map of finite separable extensions of $\kappa(\mathfrak p)$.
+In particular it is flat. Hence by
+Lemma \ref{lemma-criterion-flatness-fibre} we see that
+$S'_{\mathfrak q'} \to S_{\mathfrak q}$ is flat. Thus $S' \to S$
+is flat. Moreover, the above also shows that $\mathfrak q'S_{\mathfrak q}$
+is the maximal ideal of $S_{\mathfrak q}$ and that the residue
+field extension of $S'_{\mathfrak q'} \to S_{\mathfrak q}$ is
+finite separable. Hence from Lemma \ref{lemma-characterize-etale}
+we conclude that $S' \to S$ is \'etale at $\mathfrak q$. Since
+being \'etale is local (see Lemma \ref{lemma-etale}) we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-flat-finitely-presented}
+Let $\varphi : R \to S$ be a ring map. If $R \to S$ is surjective, flat and
finitely presented, then there exists an idempotent $e \in R$ such that
+$S = R_e$.
+\end{lemma}
+
+\begin{proof}[First proof]
+Let $I$ be the kernel of $\varphi$.
+We have that $I$ is finitely generated by
+Lemma \ref{lemma-finite-presentation-independent}
+since $\varphi$ is of finite presentation.
+Moreover, since $S$ is flat over $R$, tensoring the exact sequence
+$0 \to I \to R \to S \to 0$ over $R$ with $S$
+gives $I/I^2 = 0$. Now we conclude by
+Lemma \ref{lemma-ideal-is-squared-union-connected}.
+\end{proof}
+
+\begin{proof}[Second proof]
+Since $\Spec(S) \to \Spec(R)$ is a homeomorphism
+onto a closed subset (see Lemma \ref{lemma-spec-closed}) and
+is open (see Proposition \ref{proposition-fppf-open}) we see that
+the image is $D(e)$ for some idempotent $e \in R$ (see
+Lemma \ref{lemma-disjoint-decomposition}). Thus $R_e \to S$
+induces a bijection on spectra. Now this map induces an isomorphism
+on all local rings for example by
+Lemmas \ref{lemma-finite-flat-local} and \ref{lemma-NAK}.
+Then it follows that $R_e \to S$ is also injective, for example
+see Lemma \ref{lemma-characterize-zero-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-etale}
+\begin{slogan}
+\'Etale ring maps lift along surjections of rings
+\end{slogan}
+Let $R$ be a ring and let $I \subset R$ be an ideal.
+Let $R/I \to \overline{S}$ be an \'etale ring map.
+Then there exists an \'etale ring map
+$R \to S$ such that $\overline{S} \cong S/IS$ as $R/I$-algebras.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-etale-standard-smooth} we can write
+$\overline{S} =
+(R/I)[x_1, \ldots, x_n]/(\overline{f}_1, \ldots, \overline{f}_n)$
+as in Definition \ref{definition-standard-smooth} with
+$\overline{\Delta} =
+\det(\frac{\partial \overline{f}_i}{\partial x_j})_{i, j = 1, \ldots, n}$
+invertible in $\overline{S}$. Just take some lifts $f_i$ and set
+$S = R[x_1, \ldots, x_n, x_{n+1}]/(f_1, \ldots, f_n, x_{n + 1}\Delta - 1)$
+where $\Delta = \det(\frac{\partial f_i}{\partial x_j})_{i, j = 1, \ldots, n}$
+as in Example \ref{example-make-standard-smooth}.
+This proves the lemma.
+\end{proof}
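A concrete instance of the lemma, with everything beyond the statement itself our own choice: take $R = \mathbf{Z}/4$, $I = (2)$, and the \'etale map $R/I = \mathbf{F}_2 \to \mathbf{F}_4 = \mathbf{F}_2[x]/(x^2 + x + 1)$. Lifting $\overline{f} = x^2 + x + 1$ to $f \in (\mathbf{Z}/4)[x]$ gives $S = (\mathbf{Z}/4)[x]/(x^2 + x + 1)$, and here $\Delta = f' = 2x + 1$ is already invertible (it squares to $1$), so inverting $\Delta$ via the extra variable $x_{n+1}$ is vacuous. A pure-Python sanity check, with our own encoding of elements of $S$ as pairs $(a, b) = a + bx$:

```python
# S = (Z/4)[x]/(x^2 + x + 1); the element a + b*x is encoded as (a, b).
# The relation gives x^2 = -x - 1 = 3x + 3 in Z/4.
def mul4(u, v):
    a, b = u
    c, d = v
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2  with  x^2 = 3x + 3
    return ((a * c + 3 * b * d) % 4, (a * d + b * c + 3 * b * d) % 4)

fprime = (1, 2)                        # f' = 2x + 1
assert mul4(fprime, fprime) == (1, 0)  # (2x + 1)^2 = 1, so f' is a unit in S
```

Reducing mod $2$ recovers $\mathbf{F}_4$ with $f' \equiv 1$, matching the \'etaleness of $\mathbf{F}_2 \to \mathbf{F}_4$.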
+
+\begin{lemma}
+\label{lemma-lift-etale-infinitesimal}
+Consider a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+J \ar[r] &
+B' \ar[r] &
+B \ar[r] & 0 \\
+0 \ar[r] &
+I \ar[r] \ar[u] &
+A' \ar[r] \ar[u] &
+A \ar[r] \ar[u] & 0
+}
+$$
+with exact rows where $B' \to B$ and $A' \to A$ are surjective ring maps
+whose kernels are ideals of square zero. If $A \to B$ is \'etale,
+and $J = I \otimes_A B$, then $A' \to B'$ is \'etale.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-lift-etale}
+there exists an \'etale ring map $A' \to C$ such that $C/IC = B$.
+Then $A' \to C$ is formally smooth (by
+Proposition \ref{proposition-smooth-formally-smooth})
+hence we get an $A'$-algebra map $\varphi : C \to B'$.
+Since $A' \to C$ is flat we have $I \otimes_A B = I \otimes_A C/IC = IC$.
+Hence the assumption that $J = I \otimes_A B$ implies that
+$\varphi$ induces an isomorphism $IC \to J$ and an isomorphism
+$C/IC \to B'/IB'$, whence $\varphi$ is an isomorphism.
+\end{proof}
+
+\begin{example}
+\label{example-factor-polynomials-etale}
+Let $n , m \geq 1$ be integers. Consider the ring map
+\begin{eqnarray*}
+R = \mathbf{Z}[a_1, \ldots, a_{n + m}]
+& \longrightarrow &
+S = \mathbf{Z}[b_1, \ldots, b_n, c_1, \ldots, c_m] \\
+a_1 & \longmapsto & b_1 + c_1 \\
+a_2 & \longmapsto & b_2 + b_1 c_1 + c_2 \\
+\ldots & \ldots & \ldots \\
+a_{n + m} & \longmapsto & b_n c_m
+\end{eqnarray*}
+of Example \ref{example-factor-polynomials}.
+Write symbolically
+$$
+S = R[b_1, \ldots, c_m]/(\{a_k(b_i, c_j) - a_k\}_{k = 1, \ldots, n + m})
+$$
+where for example $a_1(b_i, c_j) = b_1 + c_1$.
+The matrix of partial derivatives is
+$$
+\left(
+\begin{matrix}
+1 & c_1 & \ldots & c_m & 0 & \ldots & \ldots & 0 \\
+0 & 1 & c_1 & \ldots & c_m & 0 & \ldots & 0 \\
+\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\
+0 & \ldots & 0 & 1 & c_1 & c_2 & \ldots & c_m \\
+1 & b_1 & \ldots & b_{n - 1} & b_n & 0 & \ldots & 0 \\
+0 & 1 & b_1 & \ldots & b_{n - 1} & b_n & \ldots & 0 \\
+\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\
+0 & \ldots & \ldots & 0 & 1 & b_1 & \ldots & b_n
+\end{matrix}
+\right)
+$$
+The determinant $\Delta$ of this matrix is better known as the
+{\it resultant} of the polynomials $g = x^n + b_1 x^{n - 1} + \ldots + b_n$
+and $h = x^m + c_1 x^{m - 1} + \ldots + c_m$, and the matrix above
+is known as the {\it Sylvester matrix} associated to $g, h$.
+In a formula $\Delta = \text{Res}_x(g, h)$. The Sylvester matrix
+is the transpose of the matrix of the linear map
+\begin{eqnarray*}
+S[x]_{< m} \oplus S[x]_{< n} & \longrightarrow & S[x]_{< n + m} \\
+a \oplus b & \longmapsto & ag + bh
+\end{eqnarray*}
+Let $\mathfrak q \subset S$ be any prime. By the above the
+following are equivalent:
+\begin{enumerate}
+\item $R \to S$ is \'etale at $\mathfrak q$,
+\item $\Delta = \text{Res}_x(g, h) \not \in \mathfrak q$,
+\item the images $\overline{g}, \overline{h} \in \kappa(\mathfrak q)[x]$
+of the polynomials $g, h$ are relatively prime in $\kappa(\mathfrak q)[x]$.
+\end{enumerate}
+The equivalence of (2) and (3) holds because the image of the
Sylvester matrix in $\text{Mat}(n + m, \kappa(\mathfrak q))$
has a nonzero kernel if and only if the polynomials $\overline{g}, \overline{h}$
+have a factor in common. We conclude that the ring map
+$$
+R \longrightarrow S[\frac{1}{\Delta}] = S[\frac{1}{\text{Res}_x(g, h)}]
+$$
+is \'etale.
+\end{example}
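The equivalence of (2) and (3) can be tested numerically in small cases. The following pure-Python sketch (all helper names are ours; the row order and overall sign may differ from the conventions of the matrix displayed above) builds the Sylvester matrix of two monic polynomials from their non-leading coefficients $b_i$, $c_j$ and checks that the resultant vanishes exactly when the polynomials share a root:

```python
def sylvester(b, c):
    """Sylvester matrix of g = x^n + b[0] x^{n-1} + ... + b[n-1]
    and h = x^m + c[0] x^{m-1} + ... + c[m-1] (both monic)."""
    n, m = len(b), len(c)
    N = n + m
    M = [[0] * N for _ in range(N)]
    g = [1] + list(b)
    h = [1] + list(c)
    for i in range(m):            # m shifted copies of the coefficients of g
        for j, coef in enumerate(g):
            M[i][i + j] = coef
    for i in range(n):            # n shifted copies of the coefficients of h
        for j, coef in enumerate(h):
            M[m + i][i + j] = coef
    return M

def det(M):
    # Laplace expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        if a:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * a * det(minor)
    return total

def resultant(b, c):
    return det(sylvester(b, c))
```

For example $\text{Res}_x(x - 2, x - 3) = -1$ is nonzero (no common root), while $\text{Res}_x((x - 1)(x - 2),\ x - 1) = 0$ detects the shared factor $x - 1$.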
+
+
+\begin{lemma}
+\label{lemma-factor-mod-lift-etale}
+Let $R$ be a ring. Let $f \in R[x]$ be a monic polynomial. Let $\mathfrak p$
+be a prime of $R$. Let $f \bmod \mathfrak p = \overline{g} \overline{h}$
+be a factorization of the image of $f$ in $\kappa(\mathfrak p)[x]$.
+If $\gcd(\overline{g}, \overline{h}) = 1$, then there exist
+\begin{enumerate}
+\item an \'etale ring map $R \to R'$,
+\item a prime $\mathfrak p' \subset R'$ lying over $\mathfrak p$, and
+\item a factorization $f = g h$ in $R'[x]$
+\end{enumerate}
+such that
+\begin{enumerate}
+\item $\kappa(\mathfrak p) = \kappa(\mathfrak p')$,
+\item $\overline{g} = g \bmod \mathfrak p'$,
+$\overline{h} = h \bmod \mathfrak p'$, and
+\item the polynomials $g, h$ generate the unit ideal in $R'[x]$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Suppose
+$\overline{g} = \overline{b}_0 x^n + \overline{b}_1 x^{n - 1} + \ldots
++ \overline{b}_n$, and
+$\overline{h} = \overline{c}_0 x^m + \overline{c}_1 x^{m - 1} + \ldots
++ \overline{c}_m$ with $\overline{b}_0, \overline{c}_0 \in \kappa(\mathfrak p)$
+nonzero. After localizing $R$ at some element of $R$ not contained in
+$\mathfrak p$ we may assume $\overline{b}_0$ is the
+image of an invertible element $b_0 \in R$. Replacing
+$\overline{g}$ by $\overline{g}/b_0$ and
+$\overline{h}$ by $b_0\overline{h}$ we reduce to the case where
+$\overline{g}$, $\overline{h}$ are monic (verification omitted).
+Say $\overline{g} = x^n + \overline{b}_1 x^{n - 1} + \ldots + \overline{b}_n$,
+and $\overline{h} = x^m + \overline{c}_1 x^{m - 1} + \ldots + \overline{c}_m$.
Write $f = x^{n + m} + a_1 x^{n + m - 1} + \ldots + a_{n + m}$.
+Consider the fibre product
+$$
+R' = R \otimes_{\mathbf{Z}[a_1, \ldots, a_{n + m}]}
+\mathbf{Z}[b_1, \ldots, b_n, c_1, \ldots, c_m]
+$$
+where the map $\mathbf{Z}[a_k] \to \mathbf{Z}[b_i, c_j]$
+is as in Examples \ref{example-factor-polynomials} and
+\ref{example-factor-polynomials-etale}. By construction there
+is an $R$-algebra map
+$$
+R' = R \otimes_{\mathbf{Z}[a_1, \ldots, a_{n + m}]}
+\mathbf{Z}[b_1, \ldots, b_n, c_1, \ldots, c_m]
+\longrightarrow
+\kappa(\mathfrak p)
+$$
+which maps $b_i$ to $\overline{b}_i$ and $c_j$ to $\overline{c}_j$.
+Denote $\mathfrak p' \subset R'$ the kernel of this map.
+Since by assumption the polynomials $\overline{g}, \overline{h}$
+are relatively prime we see that the element
+$\Delta = \text{Res}_x(g, h) \in \mathbf{Z}[b_i, c_j]$
+(see Example \ref{example-factor-polynomials-etale})
+does not map to zero in $\kappa(\mathfrak p)$ under the displayed map.
+We conclude that $R \to R'$ is \'etale at $\mathfrak p'$.
+In fact a solution to the problem posed in the lemma is
+the ring map $R \to R'[1/\Delta]$ and the prime
$\mathfrak p' R'[1/\Delta]$. Because $\text{Res}_x(g, h)$ is
invertible in this ring the Sylvester matrix is invertible over
$R'[1/\Delta]$ and hence $1 = a g + b h$ for some $a, b \in R'[1/\Delta][x]$,
see Example \ref{example-factor-polynomials-etale}.
+\end{proof}
+
+
+
+
+
+\section{Local structure of \'etale ring maps}
+\label{section-etale-local-structure}
+
+\noindent
+Lemma \ref{lemma-etale-standard-smooth} tells us that it does not really
+make sense to define a standard \'etale morphism to be
+a standard smooth morphism of relative dimension $0$.
+As a model for an \'etale morphism we take the example given
+by a finite separable extension $k'/k$ of fields.
+Namely, we can always find an element $\alpha \in k'$ such
+that $k' = k(\alpha)$ and such that the minimal polynomial
+$f(x) \in k[x]$ of $\alpha$ has derivative $f'$ which is
+relatively prime to $f$.
+
+\begin{definition}
+\label{definition-standard-etale}
+Let $R$ be a ring. Let $g , f \in R[x]$.
+Assume that $f$ is monic and the derivative $f'$ is invertible in
+the localization $R[x]_g/(f)$.
+In this case the ring map $R \to R[x]_g/(f)$ is said to be
+{\it standard \'etale}.
+\end{definition}
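Over a field $k$ (with $g = 1$) the condition says that $f'$ is invertible in $k[x]/(f)$, which holds if and only if $\gcd(f, f') = 1$, i.e.\ if and only if $f$ is separable. A minimal pure-Python sketch of this criterion over a prime field $\mathbf{F}_p$ (coefficient lists with constant term first, schoolbook polynomial division; all names are ours):

```python
def trim(a, p):
    # reduce coefficients mod p and drop trailing (leading-degree) zeros
    a = [c % p for c in a]
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def polymod(a, b, p):
    """Remainder of a modulo b in F_p[x] (constant term first, b != 0)."""
    a, b = trim(a, p), trim(b, p)
    binv = pow(b[-1], p - 2, p)          # inverse of the leading coefficient
    while len(a) >= len(b) and a != [0]:
        coef = a[-1] * binv % p
        shift = len(a) - len(b)
        for i, bc in enumerate(b):
            a[shift + i] = (a[shift + i] - coef * bc) % p
        a = trim(a, p)
    return a

def polygcd(a, b, p):
    a, b = trim(a, p), trim(b, p)
    while b != [0]:
        a, b = b, polymod(a, b, p)
    return [c * pow(a[-1], p - 2, p) % p for c in a]   # made monic

def is_separable(f, p):
    """True iff gcd(f, f') = 1 in F_p[x]."""
    df = [(i * c) % p for i, c in enumerate(f)][1:]    # formal derivative
    if not any(df):
        return False                                    # f' = 0
    return polygcd(f, df, p) == [1]
```

For instance $x^2 + x + 1$ is separable over $\mathbf{F}_2$, while $x^2 + 1 = (x + 1)^2$ is not (its derivative $2x$ vanishes identically).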
+
+\begin{lemma}
+\label{lemma-standard-etale}
+Let $R \to R[x]_g/(f)$ be standard \'etale.
+\begin{enumerate}
+\item The ring map $R \to R[x]_g/(f)$ is \'etale.
+\item For any ring map $R \to R'$ the base change $R' \to R'[x]_g/(f)$
+of the standard \'etale ring map $R \to R[x]_g/(f)$ is standard \'etale.
+\item Any principal localization of $R[x]_g/(f)$ is standard \'etale over $R$.
+\item A composition of standard \'etale maps is {\bf not} standard \'etale
+in general.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. Here is an example for (4).
+The ring map $\mathbf{F}_2 \to \mathbf{F}_{2^2}$ is standard \'etale.
+The ring map
+$\mathbf{F}_{2^2} \to \mathbf{F}_{2^2} \times \mathbf{F}_{2^2}
+\times \mathbf{F}_{2^2} \times \mathbf{F}_{2^2}$ is standard \'etale.
+But the ring map
+$\mathbf{F}_2 \to \mathbf{F}_{2^2} \times \mathbf{F}_{2^2}
+\times \mathbf{F}_{2^2} \times \mathbf{F}_{2^2}$ is not standard \'etale.
+\end{proof}
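The counterexample for (4) rests on a counting fact: realizing $\mathbf{F}_{2^2} \times \mathbf{F}_{2^2} \times \mathbf{F}_{2^2} \times \mathbf{F}_{2^2}$ as $\mathbf{F}_2[x]_g/(f)$ would require four distinct monic irreducible quadratics over $\mathbf{F}_2$, but only one exists, namely $x^2 + x + 1$. A brute-force verification (using that a monic quadratic over $\mathbf{F}_p$ is irreducible if and only if it has no root in $\mathbf{F}_p$; the function name is ours):

```python
def irreducible_quadratics(p):
    """Pairs (a, b) with x^2 + a x + b monic irreducible over F_p.
    A monic quadratic is irreducible iff it has no root in F_p."""
    polys = []
    for a in range(p):
        for b in range(p):
            if all((x * x + a * x + b) % p for x in range(p)):
                polys.append((a, b))
    return polys
```

Over $\mathbf{F}_2$ the only pair returned is $(1, 1)$, i.e.\ $x^2 + x + 1$; over $\mathbf{F}_3$ one finds $(3^2 - 3)/2 = 3$ irreducible quadratics, matching the general count.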
+
+\noindent
+Standard \'etale morphisms are a convenient way to produce \'etale maps.
+Here is an example.
+
+\begin{lemma}
+\label{lemma-make-etale-map-prescribed-residue-field}
+Let $R$ be a ring.
+Let $\mathfrak p$ be a prime of $R$.
+Let $L/\kappa(\mathfrak p)$ be a finite separable field extension.
+There exists an \'etale ring map $R \to R'$ together with a prime $\mathfrak p'$
+lying over $\mathfrak p$ such that the field extension
$\kappa(\mathfrak p')/\kappa(\mathfrak p)$ is isomorphic
to the extension $L/\kappa(\mathfrak p)$.
+\end{lemma}
+
+\begin{proof}
+By the theorem of the primitive element we may write
+$L = \kappa(\mathfrak p)[\alpha]$. Let
+$\overline{f} \in \kappa(\mathfrak p)[x]$
+denote the minimal polynomial for $\alpha$ (in particular this is monic).
+After replacing $\alpha$ by $c\alpha$ for some $c \in R$,
+$c\not \in \mathfrak p$ we may assume all the coefficients
+of $\overline{f}$ are in the image of $R \to \kappa(\mathfrak p)$
+(verification omitted). Thus we can find a monic polynomial
+$f \in R[x]$ which maps to $\overline{f}$ in $\kappa(\mathfrak p)[x]$.
+Since $\kappa(\mathfrak p) \subset L$ is separable, we see
+that $\gcd(\overline{f}, \overline{f}') = 1$.
+Hence there is an element $\gamma \in L$ such that
$\overline{f}'(\alpha) \gamma = 1$. Thus we get an $R$-algebra map
+\begin{eqnarray*}
+R[x, 1/f']/(f) & \longrightarrow & L \\
+x & \longmapsto & \alpha \\
+1/f' & \longmapsto & \gamma
+\end{eqnarray*}
+The left hand side is a standard \'etale algebra $R'$ over $R$
+and the kernel of the ring map gives the desired prime.
+\end{proof}
+
+
+
+
+
+
+\begin{proposition}
+\label{proposition-etale-locally-standard}
+Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime.
+If $R \to S$ is \'etale at $\mathfrak q$, then there exists
+a $g \in S$, $g \not \in \mathfrak q$ such that $R \to S_g$
+is standard \'etale.
+\end{proposition}
+
+\begin{proof}
+The following proof is a little roundabout and there may be ways to
+shorten it.
+
+\medskip\noindent
+Step 1. By Definition \ref{definition-etale}
+there exists a $g \in S$, $g \not \in \mathfrak q$
+such that $R \to S_g$ is \'etale. Thus we may assume that $S$ is \'etale
+over $R$.
+
+\medskip\noindent
+Step 2. By Lemma \ref{lemma-etale} there exists an \'etale ring map
+$R_0 \to S_0$ with $R_0$ of finite type over $\mathbf{Z}$, and a ring map
$R_0 \to R$ such that $S = R \otimes_{R_0} S_0$. Denote
+$\mathfrak q_0$ the prime of $S_0$ corresponding to $\mathfrak q$.
+If we show the result for $(R_0 \to S_0, \mathfrak q_0)$ then the
+result follows for $(R \to S, \mathfrak q)$ by base change. Hence
+we may assume that $R$ is Noetherian.
+
+\medskip\noindent
+Step 3.
+Note that $R \to S$ is quasi-finite by Lemma \ref{lemma-etale-quasi-finite}.
+By Lemma \ref{lemma-quasi-finite-open-integral-closure}
+there exists a finite ring map $R \to S'$, an $R$-algebra map
$S' \to S$, and an element $g' \in S'$ whose image in $S$ is
not contained in $\mathfrak q$, such that $S' \to S$ induces
+an isomorphism $S'_{g'} \cong S_{g'}$.
+(Note that of course $S'$ is not \'etale over $R$ in general.)
+Thus we may assume that (a) $R$ is Noetherian, (b) $R \to S$ is finite
+and (c) $R \to S$ is \'etale at $\mathfrak q$
+(but no longer necessarily \'etale at all primes).
+
+\medskip\noindent
+Step 4. Let $\mathfrak p \subset R$ be the prime corresponding
+to $\mathfrak q$. Consider the fibre ring
+$S \otimes_R \kappa(\mathfrak p)$. This is a finite algebra over
+$\kappa(\mathfrak p)$. Hence it is Artinian
+(see Lemma \ref{lemma-finite-dimensional-algebra}) and
+so a finite product of local rings
+$$
+S \otimes_R \kappa(\mathfrak p) = \prod\nolimits_{i = 1}^n A_i
+$$
+see Proposition \ref{proposition-dimension-zero-ring}. One of the factors,
+say $A_1$, is the local ring $S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$
+which is isomorphic to $\kappa(\mathfrak q)$,
+see Lemma \ref{lemma-etale-at-prime}. The other factors correspond to
+the other primes, say $\mathfrak q_2, \ldots, \mathfrak q_n$ of
+$S$ lying over $\mathfrak p$.
+
+\medskip\noindent
+Step 5. We may choose a nonzero element $\alpha \in \kappa(\mathfrak q)$ which
+generates the finite separable field extension
+$\kappa(\mathfrak q)/\kappa(\mathfrak p)$ (so even if the
+field extension is trivial we do not allow $\alpha = 0$).
+Note that for any $\lambda \in \kappa(\mathfrak p)^*$ the
+element $\lambda \alpha$ also generates $\kappa(\mathfrak q)$
+over $\kappa(\mathfrak p)$. Consider the element
+$$
+\overline{t} =
+(\alpha, 0, \ldots, 0) \in
+\prod\nolimits_{i = 1}^n A_i =
+S \otimes_R \kappa(\mathfrak p).
+$$
+After possibly replacing $\alpha$ by $\lambda \alpha$ as above
+we may assume that $\overline{t}$ is the image of $t \in S$.
+Let $I \subset R[x]$ be the kernel of the $R$-algebra
+map $R[x] \to S$ which maps $x$ to $t$. Set $S' = R[x]/I$,
+so $S' \subset S$. Here is a diagram
+$$
+\xymatrix{
+R[x] \ar[r] & S' \ar[r] & S \\
+R \ar[u] \ar[ru] \ar[rru] & &
+}
+$$
+By construction the primes $\mathfrak q_j$, $j \geq 2$ of $S$ all
+lie over the prime $(\mathfrak p, x)$ of $R[x]$, whereas
+the prime $\mathfrak q$ lies over a different prime of $R[x]$
+because $\alpha \not = 0$.
+
+\medskip\noindent
+Step 6. Denote $\mathfrak q' \subset S'$ the prime of $S'$
+corresponding to $\mathfrak q$. By the above $\mathfrak q$ is
+the only prime of $S$ lying over $\mathfrak q'$. Thus we see that
+$S_{\mathfrak q} = S_{\mathfrak q'}$, see
+Lemma \ref{lemma-unique-prime-over-localize-below} (we have
+going up for $S' \to S$ by Lemma \ref{lemma-integral-going-up}
+since $S' \to S$ is finite as $R \to S$ is finite).
+It follows that $S'_{\mathfrak q'} \to S_{\mathfrak q}$ is finite
+and injective as the localization of the finite injective ring map
+$S' \to S$. Consider the maps of local rings
+$$
+R_{\mathfrak p} \to S'_{\mathfrak q'} \to S_{\mathfrak q}
+$$
+The second map is finite and injective. We have
+$S_{\mathfrak q}/\mathfrak pS_{\mathfrak q} = \kappa(\mathfrak q)$,
+see Lemma \ref{lemma-etale-at-prime}.
+Hence a fortiori
+$S_{\mathfrak q}/\mathfrak q'S_{\mathfrak q} = \kappa(\mathfrak q)$.
+Since
+$$
+\kappa(\mathfrak p) \subset \kappa(\mathfrak q') \subset \kappa(\mathfrak q)
+$$
+and since $\alpha$ is in the image of $\kappa(\mathfrak q')$ in
+$\kappa(\mathfrak q)$
+we conclude that $\kappa(\mathfrak q') = \kappa(\mathfrak q)$.
+Hence by Nakayama's Lemma \ref{lemma-NAK} applied to the
+$S'_{\mathfrak q'}$-module map $S'_{\mathfrak q'} \to S_{\mathfrak q}$,
+the map $S'_{\mathfrak q'} \to S_{\mathfrak q}$ is surjective.
+In other words,
+$S'_{\mathfrak q'} \cong S_{\mathfrak q}$.
+
+\medskip\noindent
+Step 7. By Lemma \ref{lemma-isomorphic-local-rings} there exist
+$g \in S$, $g \not \in \mathfrak q$ and $g' \in S'$, $g' \not \in \mathfrak q'$
+such that $S'_{g'} \cong S_g$. As $R$ is Noetherian the ring $S'$ is finite
+over $R$ because it is an $R$-submodule
+of the finite $R$-module $S$. Hence after replacing $S$ by $S'$ we may
+assume that (a) $R$ is Noetherian, (b) $S$ finite over $R$, (c)
+$S$ is \'etale over $R$ at $\mathfrak q$, and (d) $S = R[x]/I$.
+
+\medskip\noindent
+Step 8. Consider the ring
+$S \otimes_R \kappa(\mathfrak p) = \kappa(\mathfrak p)[x]/\overline{I}$
+where $\overline{I} = I \cdot \kappa(\mathfrak p)[x]$ is the ideal generated
+by $I$ in $\kappa(\mathfrak p)[x]$. As $\kappa(\mathfrak p)[x]$ is a PID
+we know that $\overline{I} = (\overline{h})$ for some monic
+$\overline{h} \in \kappa(\mathfrak p)[x]$. After replacing $\overline{h}$
by $\lambda \cdot \overline{h}$ for some $\lambda \in \kappa(\mathfrak p)^*$
+we may assume that $\overline{h}$ is the image of some $h \in I \subset R[x]$.
+(The problem is that we do not know if we may choose $h$ monic.)
+Also, as in Step 4 we know that
+$S \otimes_R \kappa(\mathfrak p) = A_1 \times \ldots \times A_n$ with
+$A_1 = \kappa(\mathfrak q)$ a finite separable extension of
+$\kappa(\mathfrak p)$ and $A_2, \ldots, A_n$ local. This implies
+that
+$$
+\overline{h} = \overline{h}_1 \overline{h}_2^{e_2} \ldots \overline{h}_n^{e_n}
+$$
+for certain pairwise coprime irreducible monic polynomials
+$\overline{h}_i \in \kappa(\mathfrak p)[x]$ and certain
+$e_2, \ldots, e_n \geq 1$. Here the numbering is chosen so that
+$A_i = \kappa(\mathfrak p)[x]/(\overline{h}_i^{e_i})$ as
+$\kappa(\mathfrak p)[x]$-algebras. Note that $\overline{h}_1$ is
+the minimal polynomial of $\alpha \in \kappa(\mathfrak q)$ and hence
+is a separable polynomial (its derivative is prime to itself).
+
+\medskip\noindent
+Step 9. Let $m \in I$ be a monic element; such an element exists
+because the ring extension $R \to R[x]/I$ is finite hence integral.
+Denote $\overline{m}$ the image in $\kappa(\mathfrak p)[x]$.
+We may factor
+$$
+\overline{m} = \overline{k}
+\overline{h}_1^{d_1} \overline{h}_2^{d_2} \ldots \overline{h}_n^{d_n}
+$$
+for some $d_1 \geq 1$, $d_j \geq e_j$, $j = 2, \ldots, n$ and
+$\overline{k} \in \kappa(\mathfrak p)[x]$ prime to all the $\overline{h}_i$.
+Set $f = m^l + h$ where $l \deg(m) > \deg(h)$, and $l \geq 2$.
+Then $f$ is monic as a polynomial over $R$. Also, the image $\overline{f}$
+of $f$ in $\kappa(\mathfrak p)[x]$ factors as
+$$
+\overline{f} =
+\overline{h}_1 \overline{h}_2^{e_2} \ldots \overline{h}_n^{e_n}
++
+\overline{k}^l \overline{h}_1^{ld_1} \overline{h}_2^{ld_2}
+\ldots \overline{h}_n^{ld_n}
+=
+\overline{h}_1(\overline{h}_2^{e_2} \ldots \overline{h}_n^{e_n}
++
+\overline{k}^l
+\overline{h}_1^{ld_1 - 1} \overline{h}_2^{ld_2} \ldots \overline{h}_n^{ld_n})
+= \overline{h}_1 \overline{w}
+$$
+with $\overline{w}$ a polynomial relatively prime to $\overline{h}_1$.
+Set $g = f'$ (the derivative with respect to $x$).
+
+\medskip\noindent
+Step 10. The ring map $R[x] \to S = R[x]/I$ has the properties:
+(1) it maps $f$ to zero, and
+(2) it maps $g$ to an element of $S \setminus \mathfrak q$.
+The first assertion is clear since $f$ is an element of $I$.
+For the second assertion we just have to show that $g$ does
+not map to zero in
+$\kappa(\mathfrak q) = \kappa(\mathfrak p)[x]/(\overline{h}_1)$.
+The image of $g$ in $\kappa(\mathfrak p)[x]$ is the derivative
+of $\overline{f}$. Thus (2) is clear because
+$$
+\overline{g} =
+\frac{\text{d}\overline{f}}{\text{d}x} =
+\overline{w}\frac{\text{d}\overline{h}_1}{\text{d}x} +
+\overline{h}_1\frac{\text{d}\overline{w}}{\text{d}x},
+$$
+$\overline{w}$ is prime to $\overline{h}_1$ and
+$\overline{h}_1$ is separable.
+
+\medskip\noindent
+Step 11.
+We conclude that $\varphi : R[x]/(f) \to S$ is a surjective ring map,
+$R[x]_g/(f)$ is \'etale over $R$ (because it is standard \'etale,
+see Lemma \ref{lemma-standard-etale}) and $\varphi(g) \not \in \mathfrak q$.
+Pick an element $g' \in R[x]/(f)$ such that
+also $\varphi(g') \not \in \mathfrak q$ and $S_{\varphi(g')}$
+is \'etale over $R$ (which exists since $S$ is \'etale over $R$ at
+$\mathfrak q$). Then the ring map
+$R[x]_{gg'}/(f) \to S_{\varphi(gg')}$ is a surjective map of \'etale
+algebras over $R$. Hence it is \'etale by Lemma \ref{lemma-map-between-etale}.
+Hence it is a localization by
+Lemma \ref{lemma-surjective-flat-finitely-presented}.
+Thus a localization of $S$ at an element not in $\mathfrak q$ is
+isomorphic to a localization of a standard \'etale algebra over $R$
+which is what we wanted to show.
+\end{proof}
+
+\noindent
+The following two lemmas say that the \'etale topology is coarser than the
+topology generated by Zariski coverings and finite flat morphisms.
+They should be skipped on a first reading.
+
+\begin{lemma}
+\label{lemma-standard-etale-finite-flat-Zariski}
+Let $R \to S$ be a standard \'etale morphism.
+There exists a ring map $R \to S'$ with the following properties
+\begin{enumerate}
+\item $R \to S'$ is finite, finitely presented, and flat
+(in other words $S'$ is finite projective as an $R$-module),
+\item $\Spec(S') \to \Spec(R)$ is surjective,
+\item for every prime $\mathfrak q \subset S$, lying over
+$\mathfrak p \subset R$ and every prime
+$\mathfrak q' \subset S'$ lying over $\mathfrak p$ there exists
+a $g' \in S'$, $g' \not \in \mathfrak q'$
+such that the ring map $R \to S'_{g'}$ factors
+through a map $\varphi : S \to S'_{g'}$ with
+$\varphi^{-1}(\mathfrak q'S'_{g'}) = \mathfrak q$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $S = R[x]_g/(f)$ be a presentation of $S$ as in
+Definition \ref{definition-standard-etale}.
+Write $f = x^n + a_1 x^{n - 1} + \ldots + a_n$ with $a_i \in R$.
+By Lemma \ref{lemma-adjoin-roots} there exists a finite locally free
+and faithfully flat ring map $R \to S'$ such that $f = \prod (x - \alpha_i)$
+for certain $\alpha_i \in S'$. Hence $R \to S'$ satisfies conditions (1), (2).
+Let $\mathfrak q \subset R[x]/(f)$ be a prime ideal with
+$g \not \in \mathfrak q$ (i.e., it corresponds to a prime of $S$).
+Let $\mathfrak p = R \cap \mathfrak q$ and let
+$\mathfrak q' \subset S'$ be a prime lying over $\mathfrak p$.
+Note that there are
+$n$ maps of $R$-algebras
+\begin{eqnarray*}
+\varphi_i : R[x]/(f) & \longrightarrow & S' \\
+x & \longmapsto & \alpha_i
+\end{eqnarray*}
+To finish the proof we have to show that for some $i$ we have
+(a) the image of $\varphi_i(g)$ in $\kappa(\mathfrak q')$ is not zero,
+and (b) $\varphi_i^{-1}(\mathfrak q') = \mathfrak q$.
+Because then we can just take $g' = \varphi_i(g)$, and
+$\varphi = \varphi_i$ for that $i$.
+
+\medskip\noindent
+Let $\overline{f}$ denote the image of $f$ in $\kappa(\mathfrak p)[x]$.
+Note that as a point of $\Spec(\kappa(\mathfrak p)[x]/(\overline{f}))$
+the prime $\mathfrak q$ corresponds to an irreducible factor
+$f_1$ of $\overline{f}$. Moreover, $g \not \in \mathfrak q$ means
+that $f_1$ does not divide the image $\overline{g}$ of $g$ in
+$\kappa(\mathfrak p)[x]$.
+Denote $\overline{\alpha}_1, \ldots, \overline{\alpha}_n$ the images
+of $\alpha_1, \ldots, \alpha_n$ in $\kappa(\mathfrak q')$.
+Note that the polynomial $\overline{f}$ splits completely
+in $\kappa(\mathfrak q')[x]$, namely
+$$
+\overline{f} = \prod\nolimits_i (x - \overline{\alpha}_i)
+$$
+Moreover $\varphi_i(g)$ reduces to $\overline{g}(\overline{\alpha}_i)$.
+It follows we may pick $i$ such that $f_1(\overline{\alpha}_i) = 0$ and
+$\overline{g}(\overline{\alpha}_i) \not = 0$.
+For this $i$ properties (a) and (b) hold. Some details omitted.
+\end{proof}
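As an illustration of the splitting step, take $R = \mathbf{F}_2$ and $f = x^2 + x + 1$; the extension produced by Lemma \ref{lemma-adjoin-roots} may be realized as $S' = \mathbf{F}_4 = \mathbf{F}_2[\omega]/(\omega^2 + \omega + 1)$, over which $f = (x - \omega)(x - \omega^2)$. A pure-Python check with hand-rolled $\mathbf{F}_4$ arithmetic (our own encoding of $a + b\omega$ as the pair $(a, b)$):

```python
# F_4 = F_2[w]/(w^2 + w + 1); the element a + b*w is encoded as (a, b).
def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])   # addition is coordinatewise XOR

def mul(u, v):
    a, b = u
    c, d = v
    # (a + bw)(c + dw) = ac + (ad + bc)w + bd*w^2  with  w^2 = w + 1
    return ((a * c ^ b * d) & 1, (a * d ^ b * c ^ b * d) & 1)

def f(x):
    # evaluate f = x^2 + x + 1 in F_4
    return add(add(mul(x, x), x), (1, 0))

w = (0, 1)
w2 = mul(w, w)        # w^2 = w + 1 is the other root of f
```

Both $\omega$ and $\omega^2$ are roots of $f$ in $\mathbf{F}_4$, while $f(1) = 1 \neq 0$, matching the factorization $\overline{f} = \prod_i (x - \overline{\alpha}_i)$ used in the proof.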
+
+\begin{lemma}
+\label{lemma-etale-finite-flat-zariski}
+Let $R \to S$ be a ring map.
+Assume that
+\begin{enumerate}
+\item $R \to S$ is \'etale, and
+\item $\Spec(S) \to \Spec(R)$ is surjective.
+\end{enumerate}
+Then there exists a ring map $R \to S'$ such that
+\begin{enumerate}
\item $R \to S'$ is finite, finitely presented, and flat
(in other words $S'$ is finite projective as an $R$-module),
+\item $\Spec(S') \to \Spec(R)$ is surjective,
+\item for every prime $\mathfrak q' \subset S'$ there exists a
+$g' \in S'$, $g' \not \in \mathfrak q'$ such that
+the ring map $R \to S'_{g'}$ factors as $R \to S \to S'_{g'}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Proposition \ref{proposition-etale-locally-standard} and
+the quasi-compactness of $\Spec(S)$ (see Lemma \ref{lemma-quasi-compact})
+we can find $g_1, \ldots, g_n \in S$ generating the unit ideal
+of $S$ such that each $R \to S_{g_i}$ is standard \'etale.
+If we prove the lemma for the ring map $R \to \prod_{i = 1, \ldots, n} S_{g_i}$
+then the lemma follows for the ring map $R \to S$.
+Hence we may assume that $S = \prod_{i = 1, \ldots, n} S_i$
+is a finite product of standard \'etale morphisms.
+
+\medskip\noindent
+For each $i$ choose a ring map $R \to S_i'$ as in
+Lemma \ref{lemma-standard-etale-finite-flat-Zariski}
+adapted to the standard \'etale morphism $R \to S_i$.
+Set $S' = S_1' \otimes_R \ldots \otimes_R S_n'$; we will use
+the $R$-algebra maps $S_i' \to S'$ without further mention below.
+We claim this works. Properties (1) and (2) are immediate.
+For property (3) suppose that $\mathfrak q' \subset S'$ is a prime.
+Denote $\mathfrak p$ its image in $\Spec(R)$.
+Choose $i \in \{1, \ldots, n\}$ such that $\mathfrak p$
+is in the image of $\Spec(S_i) \to \Spec(R)$; this is
+possible by assumption. Set $\mathfrak q_i' \subset S_i'$
+the image of $\mathfrak q'$ in the spectrum of $S_i'$.
+By construction of $S'_i$ there exists a $g'_i \in S_i'$
+such that $R \to (S_i')_{g_i'}$ factors as
+$R \to S_i \to (S_i')_{g_i'}$. Hence also
+$R \to S'_{g_i'}$ factors as
+$$
+R \to S_i \to (S_i')_{g_i'} \to S'_{g_i'}
+$$
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+\section{\'Etale local structure of quasi-finite ring maps}
+\label{section-etale-local-quasi-finite}
+
+\noindent
+The following lemmas say roughly that after an \'etale extension
+a quasi-finite ring map becomes finite.
+To help interpret the results recall that the locus where a
+finite type ring map is quasi-finite is open
+(see Lemma \ref{lemma-quasi-finite-open}) and that formation of
+this locus commutes with arbitrary base change
+(see Lemma \ref{lemma-quasi-finite-base-change}).
+
+\begin{lemma}
+\label{lemma-produce-finite}
+Let $R \to S' \to S$ be ring maps.
+Let $\mathfrak p \subset R$ be a prime.
+Let $g \in S'$ be an element.
+Assume
+\begin{enumerate}
+\item $R \to S'$ is integral,
+\item $R \to S$ is finite type,
+\item $S'_g \cong S_g$, and
\item $g$ is invertible in $S' \otimes_R \kappa(\mathfrak p)$.
+\end{enumerate}
Then there exists an $f \in R$, $f \not \in \mathfrak p$ such
+that $R_f \to S_f$ is finite.
+\end{lemma}
+
+\begin{proof}
+By assumption the image $T$ of $V(g) \subset \Spec(S')$ under
+the morphism $\Spec(S') \to \Spec(R)$ does not
contain $\mathfrak p$. By Section \ref{section-going-up},
especially Lemma \ref{lemma-going-up-closed}, we see that $T$ is closed.
+Pick $f \in R$, $f \not \in \mathfrak p$ such that
+$T \cap D(f) = \emptyset$. Then we see that $g$ becomes invertible
+in $S'_f$. Hence $S'_f \cong S_f$. Thus $S_f$ is both of finite type
+and integral over $R_f$, hence finite.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-makes-quasi-finite-finite-one-prime}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q \subset S$ be a prime lying over
+the prime $\mathfrak p \subset R$.
Assume $R \to S$ is of finite type and quasi-finite at $\mathfrak q$.
+Then there exists
+\begin{enumerate}
+\item an \'etale ring map $R \to R'$,
+\item a prime $\mathfrak p' \subset R'$ lying over $\mathfrak p$,
+\item a product decomposition
+$$
+R' \otimes_R S = A \times B
+$$
+\end{enumerate}
+with the following properties
+\begin{enumerate}
+\item $\kappa(\mathfrak p) = \kappa(\mathfrak p')$,
+\item $R' \to A$ is finite,
+\item $A$ has exactly one prime $\mathfrak r$ lying over $\mathfrak p'$, and
+\item $\mathfrak r$ lies over $\mathfrak q$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $S' \subset S$ be the integral closure of $R$ in $S$.
+Let $\mathfrak q' = S' \cap \mathfrak q$.
+By Zariski's Main Theorem \ref{theorem-main-theorem}
+there exists a $g \in S'$, $g \not \in \mathfrak q'$ such
+that $S'_g \cong S_g$. Consider the fibre rings
+$F = S \otimes_R \kappa(\mathfrak p)$ and
+$F' = S' \otimes_R \kappa(\mathfrak p)$. Denote $\overline{\mathfrak q}'$
+the prime of $F'$ corresponding to $\mathfrak q'$. Since
+$F'$ is integral over $\kappa(\mathfrak p)$ we see
+that $\overline{\mathfrak q}'$ is a closed point of
+$\Spec(F')$, see Lemma \ref{lemma-integral-over-field}.
+Note that $\mathfrak q$ defines an isolated closed point
+$\overline{\mathfrak q}$ of
+$\Spec(F)$ (see Definition \ref{definition-quasi-finite}).
+Since $S'_g \cong S_g$ we have $F'_g \cong F_g$,
+so $\overline{\mathfrak q}$ and $\overline{\mathfrak q}'$
+have isomorphic open neighbourhoods in $\Spec(F)$
+and $\Spec(F')$. We conclude the set
+$\{\overline{\mathfrak q}'\} \subset \Spec(F')$ is
open. Combined with $\overline{\mathfrak q}'$ being closed (shown above)
+we conclude that $\overline{\mathfrak q}'$ defines
+an isolated closed point of $\Spec(F')$ as well.
+
+\medskip\noindent
+An additional small remark is that under the map
+$\Spec(F) \to \Spec(F')$ the point $\overline{\mathfrak q}$
+is the only point mapping to $\overline{\mathfrak q}'$. This follows
+from the discussion above.
+
+\medskip\noindent
+By Lemma \ref{lemma-disjoint-implies-product} we may write
+$F' = F'_1 \times F'_2$ with
+$\Spec(F'_1) = \{\overline{\mathfrak q}'\}$.
+Since $F' = S' \otimes_R \kappa(\mathfrak p)$, there
+exists an $s' \in S'$ which maps to the element
+$(r, 0) \in F'_1 \times F'_2 = F'$ for some $r \in R$, $r \not \in \mathfrak p$.
+In fact, what we will use about $s'$ is that it is an element of $S'$,
+not contained in $\mathfrak q'$, and contained in any other prime
+lying over $\mathfrak p$.
+
+\medskip\noindent
+Let $f(x) \in R[x]$ be a monic polynomial such that $f(s') = 0$.
+Denote $\overline{f} \in \kappa(\mathfrak p)[x]$ the image.
+We can factor it as $\overline{f} = x^e \overline{h}$ where
+$\overline{h}(0) \not = 0$. After replacing $f$
+by $x f$ if necessary, we may assume $e \geq 1$.
+By Lemma \ref{lemma-factor-mod-lift-etale}
+we can find an \'etale ring extension $R \to R'$,
+a prime $\mathfrak p'$ lying over $\mathfrak p$, and
+a factorization $f = h i$ in $R'[x]$ such that
+$\kappa(\mathfrak p) = \kappa(\mathfrak p')$,
+$\overline{h} = h \bmod \mathfrak p'$,
+$x^e = i \bmod \mathfrak p'$, and
+we can write $a h + b i = 1$ in $R'[x]$ (for suitable $a, b$).
+
+\medskip\noindent
+Consider the elements $h(s'), i(s') \in R' \otimes_R S'$.
+By construction we have $h(s')i(s') = f(s') = 0$. On the other
+hand they generate the unit ideal since $a(s')h(s') + b(s')i(s') = 1$.
+Thus we see that $R' \otimes_R S'$ is the product of the
+localizations at these elements:
+$$
+R' \otimes_R S'
+=
+(R' \otimes_R S')_{i(s')}
+\times
+(R' \otimes_R S')_{h(s')}
+=
+S'_1 \times S'_2
+$$
+Moreover this product decomposition is compatible with the product
+decomposition we found for the fibre ring $F'$; this comes from our
+choices of $s', i, h$ which guarantee that $\overline{\mathfrak q}'$
+is the only prime of $F'$ which does not contain the image of $i(s')$
+in $F'$. Here we use that the fibre ring of $R'\otimes_R S'$ over $R'$ at
+$\mathfrak p'$ is the same as $F'$ due to the fact that
+$\kappa(\mathfrak p) = \kappa(\mathfrak p')$.
+It follows that $S'_1$ has exactly
+one prime, say $\mathfrak r'$,
+lying over $\mathfrak p'$ and
+that this prime lies over $\mathfrak q'$.
+Hence the element $g \in S'$ maps to an element of $S'_1$ not contained
+in $\mathfrak r'$.
+
+\medskip\noindent
+The base change $R'\otimes_R S$ inherits a similar product decomposition
+$$
+R' \otimes_R S
+=
+(R' \otimes_R S)_{i(s')}
+\times
+(R' \otimes_R S)_{h(s')}
+=
+S_1 \times S_2
+$$
+It follows from the above that $S_1$ has exactly
+one prime, say $\mathfrak r$,
+lying over $\mathfrak p'$ (consider the fibre ring as above),
+and that this prime lies over $\mathfrak q$.
+
+\medskip\noindent
+Now we may apply Lemma \ref{lemma-produce-finite} to the ring maps
+$R' \to S'_1 \to S_1$, the prime $\mathfrak p'$ and
+the element $g$ to see that after replacing $R'$ by
+a principal localization we can assume that $S_1$ is
+finite over $R'$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-makes-quasi-finite-finite}
+Let $R \to S$ be a ring map.
+Let $\mathfrak p \subset R$ be a prime.
+Assume $R \to S$ finite type.
+Then there exists
+\begin{enumerate}
+\item an \'etale ring map $R \to R'$,
+\item a prime $\mathfrak p' \subset R'$ lying over $\mathfrak p$,
+\item a product decomposition
+$$
+R' \otimes_R S = A_1 \times \ldots \times A_n \times B
+$$
+\end{enumerate}
+with the following properties
+\begin{enumerate}
+\item we have $\kappa(\mathfrak p) = \kappa(\mathfrak p')$,
+\item each $A_i$ is finite over $R'$,
+\item each $A_i$ has exactly one prime $\mathfrak r_i$ lying over
+$\mathfrak p'$, and
+\item $R' \to B$ is not quasi-finite at any prime lying over $\mathfrak p'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $F = S \otimes_R \kappa(\mathfrak p)$ the fibre ring of $S/R$
+at the prime $\mathfrak p$. As $F$ is of finite type over $\kappa(\mathfrak p)$
+it is Noetherian and hence $\Spec(F)$ has finitely many isolated closed
+points. If there are no isolated closed points,
+i.e., no primes $\mathfrak q$ of $S$ over $\mathfrak p$ such that
+$S/R$ is quasi-finite at $\mathfrak q$, then the lemma holds.
+If there exists at least one such prime $\mathfrak q$, then
+we may apply Lemma \ref{lemma-etale-makes-quasi-finite-finite-one-prime}.
+This gives a diagram
+$$
+\xymatrix{
+S \ar[r] & R'\otimes_R S \ar@{=}[r] & A_1 \times B' \\
+R \ar[r] \ar[u] & R' \ar[u] \ar[ru]
+}
+$$
+as in said lemma. Since the residue fields at $\mathfrak p$ and $\mathfrak p'$
+are the same, the fibre rings of $S/R$ and $(A_1 \times B')/R'$
+are the same. Hence, by induction on the number of isolated closed points
+of the fibre we may assume that the lemma holds for
+$R' \to B'$ and $\mathfrak p'$. Thus we get an \'etale ring
+map $R' \to R''$, a prime $\mathfrak p'' \subset R''$ and
+a decomposition
+$$
+R'' \otimes_{R'} B' = A_2 \times \ldots \times A_n \times B
+$$
+We omit the verification that the ring map $R \to R''$, the
+prime $\mathfrak p''$ and the resulting decomposition
+$$
+R'' \otimes_R S = (R'' \otimes_{R'} A_1) \times
+A_2 \times \ldots \times A_n \times B
+$$
+form a solution to the problem posed in the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-makes-quasi-finite-finite-variant}
+Let $R \to S$ be a ring map.
+Let $\mathfrak p \subset R$ be a prime.
+Assume $R \to S$ finite type.
+Then there exists
+\begin{enumerate}
+\item an \'etale ring map $R \to R'$,
+\item a prime $\mathfrak p' \subset R'$ lying over $\mathfrak p$,
+\item a product decomposition
+$$
+R' \otimes_R S = A_1 \times \ldots \times A_n \times B
+$$
+\end{enumerate}
+with the following properties
+\begin{enumerate}
+\item each $A_i$ is finite over $R'$,
+\item each $A_i$ has exactly one prime $\mathfrak r_i$ lying over
+$\mathfrak p'$,
+\item the finite field extensions
+$\kappa(\mathfrak r_i)/\kappa(\mathfrak p')$
+are purely inseparable, and
+\item $R' \to B$ is not quasi-finite at any prime lying over $\mathfrak p'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The strategy of the proof is to make two \'etale ring
+extensions: first we control the residue fields, then we
+apply Lemma \ref{lemma-etale-makes-quasi-finite-finite}.
+
+\medskip\noindent
+Denote $F = S \otimes_R \kappa(\mathfrak p)$ the fibre ring of $S/R$
+at the prime $\mathfrak p$.
+As in the proof of Lemma \ref{lemma-etale-makes-quasi-finite-finite}
+there are finitely many primes, say
+$\mathfrak q_1, \ldots, \mathfrak q_n$, of $S$ lying over
+$\mathfrak p$ at which the ring map $R \to S$ is quasi-finite.
+Let $\kappa(\mathfrak p) \subset L_i \subset \kappa(\mathfrak q_i)$
+be the subfield such that $\kappa(\mathfrak p) \subset L_i$
+is separable, and the field extension $\kappa(\mathfrak q_i)/L_i$
+is purely inseparable. Let $L/\kappa(\mathfrak p)$
+be a finite Galois extension into which $L_i$ embeds for $i = 1, \ldots, n$.
+By Lemma \ref{lemma-make-etale-map-prescribed-residue-field}
+we can find an \'etale ring extension
+$R \to R'$ together with a prime $\mathfrak p'$ lying over $\mathfrak p$
+such that the field extension
+$\kappa(\mathfrak p')/\kappa(\mathfrak p)$ is isomorphic
+to $\kappa(\mathfrak p) \subset L$.
+Thus the fibre ring of $R' \otimes_R S$ at $\mathfrak p'$ is
+isomorphic to $F \otimes_{\kappa(\mathfrak p)} L$.
+The primes lying over $\mathfrak q_i$ correspond to primes
+of $\kappa(\mathfrak q_i) \otimes_{\kappa(\mathfrak p)} L$
+which is a product of fields purely inseparable over
+$L$ by our choice of $L$ and elementary field theory.
+These are also the only primes over $\mathfrak p'$
+at which $R' \to R' \otimes_R S$ is quasi-finite, by
+Lemma \ref{lemma-quasi-finite-base-change}.
+Hence after replacing $R$ by $R'$, $\mathfrak p$ by $\mathfrak p'$,
+and $S$ by $R' \otimes_R S$ we may assume that for all
+primes $\mathfrak q$ lying over $\mathfrak p$
+for which $S/R$ is quasi-finite the field extensions
+$\kappa(\mathfrak q)/\kappa(\mathfrak p)$
+are purely inseparable.
+
+\medskip\noindent
+Next apply Lemma \ref{lemma-etale-makes-quasi-finite-finite}.
+The result is what we want since the field extensions do not
+change under this \'etale ring extension.
+\end{proof}
+
+
+
+
+
+
+
+\section{Local homomorphisms}
+\label{section-local-homomorphisms}
+
+\noindent
+Some lemmas which don't have a natural section to go into.
+The first lemma says that a localization of an \'etale ring extension
+which induces an isomorphism on residue fields is an isomorphism
+modulo all powers of a suitable element of the maximal ideal.
+\begin{lemma}
+\label{lemma-lindel}
+\begin{reference}
+\cite[Lemma on page 321]{Lindel}, \cite[Lemma 4.1.5]{KC}
+\end{reference}
+Let $(R, \mathfrak m_R) \to (S, \mathfrak m_S)$ be a local homomorphism
+of local rings. Assume $S$ is the localization of an \'etale ring extension
+of $R$ and that $\kappa(\mathfrak m_R) \to \kappa(\mathfrak m_S)$
+is an isomorphism. Then there exists a $t \in \mathfrak m_R$ such that
+$R/t^nR \to S/t^nS$ is an isomorphism for all $n \geq 1$.
+\end{lemma}
+
+\begin{proof}
+Write $S = T_{\mathfrak q}$ for some \'etale $R$-algebra $T$
+and prime ideal $\mathfrak q \subset T$ lying over $\mathfrak m_R$.
+By Proposition \ref{proposition-etale-locally-standard}
+we may assume $R \to T$ is standard \'etale.
+Write $T = R[x]_g/(f)$ as in Definition \ref{definition-standard-etale}.
+By our assumption on residue fields, we may
+choose $a \in R$ such that $x$ and $a$ have the same image in
+$\kappa(\mathfrak q) = \kappa(\mathfrak m_S) = \kappa(\mathfrak m_R)$.
+Then after replacing $x$ by $x - a$ we may assume that
+$\mathfrak q$ is generated by $x$ and $\mathfrak m_R$ in $T$.
+In particular $t = f(0) \in \mathfrak m_R$.
+We will show that $t = f(0)$ works.
+
+\medskip\noindent
+Write $f = x^d + \sum_{i = 1, \ldots, d - 1} a_i x^i + t$.
+Since $R \to T$ is standard \'etale we find
+that $a_1$ is a unit in $R$: the derivative of $f$ is invertible
+in $T$ and in particular is not contained in $\mathfrak q$.
+Let $h = a_1 + a_2 x + \ldots + a_{d - 1} x^{d - 2} + x^{d - 1} \in R[x]$
+so that $f = t + xh$ in $R[x]$. We see that $h \not \in \mathfrak q$ and
+hence we may replace $T$ by $R[x]_{hg}/(f)$. After this replacement we see that
+$$
+T/tT = (R/tR)[x]_{hg}/(f) = (R/tR)[x]_{hg}/(xh) = (R/tR)[x]_{hg}/(x)
+$$
+is a quotient of $R/tR$. By Lemma \ref{lemma-surjective-mod-locally-nilpotent}
+we conclude that $R/t^nR \to T/t^nT$ is surjective for all $n \geq 1$.
+On the other hand, we know that the flat local ring map $R/t^nR \to S/t^nS$
+factors through $R/t^nR \to T/t^nT$ for all $n$, hence these maps are
+also injective (a flat local homomorphism of local rings is faithfully
+flat and hence injective, see
+Lemmas \ref{lemma-local-flat-ff} and
+\ref{lemma-faithfully-flat-universally-injective}).
+As $S$ is the localization of $T$ we see that $S/t^nS$ is the localization
+of $T/t^nT = R/t^nR$ at a prime lying over the maximal ideal,
+but this ring is already local and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-under-finite-flat}
+Let $(R, \mathfrak m_R) \to (S, \mathfrak m_S)$ be a local homomorphism
+of local rings. Assume $S$ is the localization of an \'etale ring extension
+of $R$. Then there exists a finite, finitely presented, faithfully flat
+ring map $R \to S'$ such that for every maximal ideal $\mathfrak m'$ of $S'$
+there is a factorization
+$$
+R \to S \to S'_{\mathfrak m'}
+$$
+of the ring map $R \to S'_{\mathfrak m'}$.
+\end{lemma}
+
+\begin{proof}
+Write $S = T_{\mathfrak q}$ for some \'etale $R$-algebra $T$. By
+Proposition \ref{proposition-etale-locally-standard}
+we may assume $T$ is standard \'etale.
+Apply
+Lemma \ref{lemma-standard-etale-finite-flat-Zariski}
+to the ring map $R \to T$ to get $R \to S'$. Then in particular
+for every maximal ideal $\mathfrak m'$ of $S'$ we get a factorization
+$\varphi : T \to S'_{g'}$ for some $g' \not \in \mathfrak m'$ such
+that $\mathfrak q = \varphi^{-1}(\mathfrak m'S'_{g'})$. Thus $\varphi$
+induces the desired local ring map $S \to S'_{\mathfrak m'}$.
+\end{proof}
+
+
+
+
+
+\section{Integral closure and smooth base change}
+\label{section-integral-closure-smooth-base-change}
+
+\begin{lemma}
+\label{lemma-trick}
+Let $R$ be a ring.
+Let $f \in R[x]$ be a monic polynomial.
+Let $R \to B$ be a ring map.
+If $h \in B[x]/(f)$ is integral over $R$, then the element
+$f' h$ can be written as $f'h = \sum_i b_i x^i$ with $b_i \in B$
+integral over $R$.
+\end{lemma}
+
+\begin{proof}
+Say $h^e + r_1 h^{e - 1} + \ldots + r_e = 0$ in the ring $B[x]/(f)$
+with $r_i \in R$.
+There exists a finite free ring extension $B \subset B'$ such that
+$f = (x - \alpha_1) \ldots (x - \alpha_d)$ for some $\alpha_i \in B'$,
+see Lemma \ref{lemma-adjoin-roots}.
+Note that each $\alpha_i$ is integral over $R$.
+We may represent $h = h_0 + h_1 x + \ldots + h_{d - 1} x^{d - 1}$
+with $h_i \in B$. Then it is a universal fact that
+$$
+f' h
+\equiv
+\sum\nolimits_{i = 1, \ldots, d}
+h(\alpha_i)
+(x - \alpha_1) \ldots \widehat{(x - \alpha_i)} \ldots (x - \alpha_d)
+$$
+as elements of $B[x]/(f)$. One proves this by
+evaluating both sides at the points $\alpha_i$ over the ring
+$B_{univ} = \mathbf{Z}[\alpha_i, h_j]$ (some details omitted).
+By our assumption that $h$ satisfies
+$h^e + r_1 h^{e - 1} + \ldots + r_e = 0$ in the ring $B[x]/(f)$
+we see that
+$$
+h(\alpha_i)^e + r_1 h(\alpha_i)^{e - 1} + \ldots + r_e = 0
+$$
+in $B'$. Hence $h(\alpha_i)$ is integral over $R$. Using the formula
+above we see that $f'h \equiv \sum_{j = 0, \ldots, d - 1} b'_j x^j$
+in $B'[x]/(f)$ with $b'_j \in B'$ integral over $R$. However,
+since $f' h \in B[x]/(f)$ and since $1, x, \ldots, x^{d - 1}$ is a
+$B'$-basis for $B'[x]/(f)$ we see that $b'_j \in B$ as desired.
+\end{proof}
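\medskip\noindent
As a quick sanity check (not part of the proof, and assuming the Python library sympy is available), one can verify the universal identity $f'h \equiv \sum_i h(\alpha_i) (x - \alpha_1) \ldots \widehat{(x - \alpha_i)} \ldots (x - \alpha_d) \bmod f$ for $d = 3$ over the universal ring, with symbolic roots $\alpha_i$ and symbolic coefficients $h_j$:

```python
# Sanity check of the universal identity from the proof of the lemma,
# for d = 3, over the universal ring Z[alpha_i, h_j]:
#   f' h  ≡  sum_i h(alpha_i) * prod_{j != i} (x - alpha_j)   (mod f)
import sympy as sp

x = sp.symbols('x')
alphas = sp.symbols('alpha0 alpha1 alpha2')
hs = sp.symbols('h0 h1 h2')

f = sp.prod([x - a for a in alphas])         # monic, with roots alpha_i
h = sum(c * x**i for i, c in enumerate(hs))  # generic h of degree < 3

lhs = sp.diff(f, x) * h
rhs = sum(h.subs(x, a) * sp.prod([x - b for b in alphas if b != a])
          for a in alphas)

# The difference must be divisible by f as a polynomial in x:
r = sp.rem(sp.expand(lhs - rhs), sp.expand(f), x)
assert sp.simplify(r) == 0
```

Since the remainder vanishes identically in the symbolic coefficients, the identity holds over any base ring, which is the "universal fact" invoked above.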
+
+\begin{lemma}
+\label{lemma-integral-closure-commutes-etale}
+Let $R \to S$ be an \'etale ring map.
+Let $R \to B$ be any ring map.
+Let $A \subset B$ be the integral closure of $R$ in $B$.
+Let $A' \subset S \otimes_R B$ be the integral closure of $S$ in
+$S \otimes_R B$. Then the canonical map $S \otimes_R A \to A'$ is
+an isomorphism.
+\end{lemma}
+
+\begin{proof}
+The map $S \otimes_R A \to A'$ is injective because $A \subset B$ and
+$R \to S$ is flat. We are going to use repeatedly that taking integral
+closure commutes with localization, see
+Lemma \ref{lemma-integral-closure-localize}.
+Hence we may localize on $S$, by Lemma \ref{lemma-cover} (the criterion
+for checking whether an $S$-module map is an isomorphism).
+Thus we may assume that $S = R[x]_g/(f) = (R[x]/(f))_g$
+is standard \'etale over $R$,
+see Proposition \ref{proposition-etale-locally-standard}.
+Applying localization one more time we see that
+$A'$ is $(A'')_g$ where $A''$ is the integral closure of
+$R[x]/(f)$ in $B[x]/(f)$. Suppose that $a \in A''$. It suffices
+to show that $a$ is in $S \otimes_R A$. By
+Lemma \ref{lemma-trick} we see that $f' a = \sum a_i x^i$ with $a_i \in A$.
+Since $f'$ is invertible in $B[x]_g/(f)$ (by definition of a standard
+\'etale ring map) we conclude that $a \in S \otimes_R A$ as desired.
+\end{proof}
+
+\begin{example}
+\label{example-fourier}
+Let $p$ be a prime number. The ring extension
+$$
+R = \mathbf{Z}[1/p] \subset
+R' = \mathbf{Z}[1/p][x]/(x^{p - 1} + \ldots + x + 1)
+$$
+has the following property: For $d < p$ there exist elements
+$\alpha_0, \ldots, \alpha_{d - 1} \in R'$ such that
+$$
+\prod\nolimits_{0 \leq i < j < d} (\alpha_i - \alpha_j)
+$$
+is a unit in $R'$. Namely, take $\alpha_i$ equal to the class of
+$x^i$ in $R'$ for $i = 0, \ldots, p - 1$. Then we have
+$$
+T^p - 1 = \prod\nolimits_{i = 0, \ldots, p - 1} (T - \alpha_i)
+$$
+in $R'[T]$. Indeed, the ring $\mathbf{Q}[x]/(x^{p - 1} + \ldots + x + 1)$
+is a field because the cyclotomic polynomial $x^{p - 1} + \ldots + x + 1$
+is irreducible over $\mathbf{Q}$ and the $\alpha_i$ are pairwise distinct
+roots of $T^p - 1$, whence the equality. Taking
+derivatives on both sides and substituting $T = \alpha_i$ we obtain
+$$
+p \alpha_i^{p - 1}
+=
+(\alpha_i - \alpha_0) \ldots
+\widehat{(\alpha_i - \alpha_i)} \ldots
+(\alpha_i - \alpha_{p - 1})
+$$
+and we see this is invertible in $R'$.
+\end{example}
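\medskip\noindent
The two displayed identities of the example can be checked by machine for a small prime; the following sketch (assuming sympy, and taking $p = 5$ as an arbitrary choice) verifies both the factorization of $T^p - 1$ and the derivative identity modulo the cyclotomic polynomial:

```python
# Check of Example (fourier) for p = 5 (an assumed choice; any prime works):
# in R' = Z[1/5][x]/(x^4 + x^3 + x^2 + x + 1), with alpha_i the class of
# x^i, we verify T^5 - 1 = prod_i (T - alpha_i) and
# 5 alpha_i^4 = prod_{j != i} (alpha_i - alpha_j).
import sympy as sp

x, T = sp.symbols('x T')
p = 5
f = sum(x**k for k in range(p))  # cyclotomic polynomial x^4 + ... + x + 1

def modf(e):
    """Reduce a polynomial in x modulo f."""
    return sp.rem(sp.expand(e), f, x)

# T^p - 1 = prod (T - x^i): compare coefficients of T, reducing mod f.
prod_ = sp.expand(sp.prod([T - x**i for i in range(p)]))
diff_poly = sp.Poly(prod_ - (T**p - 1), T)
assert all(modf(c) == 0 for c in diff_poly.all_coeffs())

# Derivative identity: p alpha_i^{p-1} = prod_{j != i} (alpha_i - alpha_j).
for i in range(p):
    rhs = sp.prod([x**i - x**j for j in range(p) if j != i])
    assert modf(p * x**(i * (p - 1)) - rhs) == 0
```

For $i = 0$ the derivative identity reduces to the classical fact that $\prod_{j = 1}^{p - 1}(1 - \zeta^j) = p$ for a primitive $p$-th root of unity $\zeta$.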
+
+\begin{lemma}
+\label{lemma-integral-closure-commutes-smooth}
+Let $R \to S$ be a smooth ring map.
+Let $R \to B$ be any ring map.
+Let $A \subset B$ be the integral closure of $R$ in $B$.
+Let $A' \subset S \otimes_R B$ be the integral closure of $S$ in
+$S \otimes_R B$. Then the canonical map $S \otimes_R A \to A'$ is
+an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Arguing as in the proof of Lemma \ref{lemma-integral-closure-commutes-etale}
+we may localize on $S$. Hence we may assume that $R \to S$ is a standard
+smooth ring map, see Lemma \ref{lemma-smooth-syntomic}. By definition of
+a standard smooth ring map we see that $S$ is \'etale over a polynomial
+ring $R[x_1, \ldots, x_n]$. Since we have seen the result in the case of
+an \'etale ring extension (Lemma \ref{lemma-integral-closure-commutes-etale})
+this reduces us to the case where $S = R[x]$. Thus we have to show
+$$
+f = \sum b_i x^i
+\text{ integral over }R[x]
+\Leftrightarrow
+\text{each }b_i\text{ integral over }R.
+$$
+The implication from right to left holds because the set of elements
+in $B[x]$ integral over $R[x]$ is a ring
+(Lemma \ref{lemma-integral-closure-is-ring}) and contains
+$x$.
+
+\medskip\noindent
+Suppose that $f \in B[x]$ is integral over $R[x]$, and assume that
+$f = \sum_{i < d} b_i x^i$ has degree $< d$. Since integral closure
+and localization commute, it suffices to show that each $b_i$ is
+integral both over $R[1/p]$ and over $R[1/q]$ for two distinct
+prime numbers $p, q > d$. Thus we may replace $R$ by $R[1/p]$
+for some prime number $p > d$, and then we can find a finite
+free ring extension $R \subset R'$ such that $R'$ contains
+$\alpha_1, \ldots, \alpha_d$ with the property that
+$\prod_{i < j} (\alpha_i - \alpha_j)$ is a unit in $R'$, see
+Example \ref{example-fourier}.
+In this case we have the universal equality
+$$
+f
+=
+\sum_i
+f(\alpha_i)
+\frac{(x - \alpha_1) \ldots \widehat{(x - \alpha_i)} \ldots (x - \alpha_d)}
+{(\alpha_i - \alpha_1) \ldots \widehat{(\alpha_i - \alpha_i)} \ldots
+(\alpha_i - \alpha_d)}.
+$$
+The elements $f(\alpha_i)$ are integral over $R'$ since
+$(R' \otimes_R B)[x] \to R' \otimes_R B$, $h \mapsto h(\alpha_i)$
+is a ring map. Hence we see that the coefficients of $f$
+in $(R' \otimes_R B)[x]$ are integral over $R'$. Since $R'$ is finite
+over $R$ (hence integral over $R$) we see that they are integral
+over $R$ also, as desired.
+\end{proof}
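\medskip\noindent
The universal Lagrange interpolation equality used in the proof can also be verified symbolically; the following sketch (assuming sympy) checks it for $d = 3$, with symbolic nodes $\alpha_i$ and a generic polynomial of degree $< 3$:

```python
# Symbolic check (d = 3) of the Lagrange interpolation equality from the
# proof: a polynomial f of degree < d satisfies
#   f = sum_i f(alpha_i) * prod_{j != i}(x - alpha_j) / (alpha_i - alpha_j)
# whenever the differences alpha_i - alpha_j are invertible.
import sympy as sp

x = sp.symbols('x')
alphas = sp.symbols('alpha0 alpha1 alpha2')
bs = sp.symbols('b0 b1 b2')

f = sum(b * x**i for i, b in enumerate(bs))  # generic f of degree < 3

interp = sum(
    f.subs(x, a)
    * sp.prod([x - c for c in alphas if c != a])
    / sp.prod([a - c for c in alphas if c != a])
    for a in alphas)

# The difference is identically zero as a rational expression:
assert sp.cancel(f - interp) == 0
```

Note that the common denominator is $\prod_{i < j}(\alpha_i - \alpha_j)$ up to sign, which is exactly the element made invertible by Example \ref{example-fourier}.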
+
+\begin{lemma}
+\label{lemma-integral-closure-commutes-colim-smooth}
+Let $R \to S$ and $R \to B$ be ring maps.
+Let $A \subset B$ be the integral closure of $R$ in $B$.
+Let $A' \subset S \otimes_R B$ be the integral closure of $S$ in
+$S \otimes_R B$. If $S$ is a filtered colimit of smooth $R$-algebras,
+then the canonical map $S \otimes_R A \to A'$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This follows from the straightforward fact that taking
+tensor products and taking integral closures
+commutes with filtered colimits and
+Lemma \ref{lemma-integral-closure-commutes-smooth}.
+\end{proof}
+
+
+
+
+
+
+\section{Formally unramified maps}
+\label{section-formally-unramified}
+
+\noindent
+It turns out to be logically more efficient to define
+the notion of a formally unramified map before introducing
+the notion of a formally \'etale one.
+
+\begin{definition}
+\label{definition-formally-unramified}
+Let $R \to S$ be a ring map.
+We say $S$ is {\it formally unramified over $R$} if for every
+commutative solid diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[r] \ar[u] & A \ar[u]
+}
+$$
+where $I \subset A$ is an ideal of square zero, there exists
+at most one dotted arrow making the diagram commute.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-characterize-formally-unramified}
+Let $R \to S$ be a ring map.
+The following are equivalent:
+\begin{enumerate}
+\item $R \to S$ is formally unramified,
+\item the module of differentials $\Omega_{S/R}$ is zero.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $J = \Ker(S \otimes_R S \to S)$ be the kernel of
+the multiplication map. Let $A_{univ} = S \otimes_R S/J^2$. Recall
+that $I_{univ} = J/J^2$ is isomorphic to $\Omega_{S/R}$, see
+Lemma \ref{lemma-differentials-diagonal}. Moreover, the two $R$-algebra maps
+$\sigma_1, \sigma_2 : S \to A_{univ}$, $\sigma_1(s) = s \otimes 1 \bmod J^2$,
+and $\sigma_2(s) = 1 \otimes s \bmod J^2$ differ by the
+universal derivation $\text{d} : S \to \Omega_{S/R} = I_{univ}$.
+
+\medskip\noindent
+Assume $R \to S$ formally unramified.
+Then we see that $\sigma_1 = \sigma_2$.
+Hence $\text{d}(s) = 0$ for all $s \in S$.
+Hence $\Omega_{S/R} = 0$.
+
+\medskip\noindent
+Assume that $\Omega_{S/R} = 0$. Let $A, I, R \to A, S \to A/I$
+be a solid diagram as in Definition \ref{definition-formally-unramified}.
+Let $\tau_1, \tau_2 : S \to A$ be two dotted arrows making the
+diagram commute. Consider the $R$-algebra map $A_{univ} \to A$
+defined by the rule $s_1 \otimes s_2 \mapsto \tau_1(s_1)\tau_2(s_2)$.
+We omit the verification that this is well defined. Since $A_{univ} \cong S$
+as $I_{univ} = \Omega_{S/R} = 0$ we conclude that $\tau_1 = \tau_2$.
+\end{proof}
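\medskip\noindent
For a concrete instance of the criterion, consider $S = R[x]/(f)$: then $\Omega_{S/R} = S\,\text{d}x/(f'\,\text{d}x)$, so $R \to S$ is formally unramified exactly when $f'$ is invertible in $S$. The following sketch (assuming sympy, and taking $R = \mathbf{Q}$, $f = x^2 - 2$ as an illustrative choice not from the text) exhibits this invertibility via the Bezout identity:

```python
# Concrete instance of the criterion (assumed example: R = Q,
# S = Q[x]/(x^2 - 2)): Omega_{S/R} = S dx / (f' dx), so Omega_{S/R} = 0
# precisely when f' is a unit in S, which holds iff gcd(f, f') = 1.
import sympy as sp

x = sp.symbols('x')
f = x**2 - 2
fp = sp.diff(f, x)

# gcd(f, f') = 1 over Q means f' is invertible modulo f:
assert sp.gcd(f, fp, x) == 1

# Exhibit the inverse: gcdex gives u*f + v*fp = 1, so v inverts f' in S.
u, v, g = sp.gcdex(f, fp, x)
assert g == 1
assert sp.rem(sp.expand(v * fp), f, x) == 1
```

Hence $\Omega_{S/R} = 0$ and $\mathbf{Q} \to \mathbf{Q}[x]/(x^2 - 2)$ is formally unramified (indeed \'etale).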
+
+\begin{lemma}
+\label{lemma-formally-unramified-local}
+Let $R \to S$ be a ring map.
+The following are equivalent:
+\begin{enumerate}
+\item $R \to S$ is formally unramified,
+\item $R \to S_{\mathfrak q}$ is formally unramified for all
+primes $\mathfrak q$ of $S$, and
+\item $R_{\mathfrak p} \to S_{\mathfrak q}$ is formally unramified
+for all primes $\mathfrak q$ of $S$ with $\mathfrak p = R \cap \mathfrak q$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have seen in
+Lemma \ref{lemma-characterize-formally-unramified}
+that (1) is equivalent to
+$\Omega_{S/R} = 0$. Similarly, by
+Lemma \ref{lemma-differentials-localize}
+we see that (2) and (3)
+are equivalent to $(\Omega_{S/R})_{\mathfrak q} = 0$ for all
+$\mathfrak q$. Hence the equivalence follows from
+Lemma \ref{lemma-characterize-zero-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formally-unramified-localize}
+Let $A \to B$ be a formally unramified ring map.
+\begin{enumerate}
+\item For $S \subset A$ a multiplicative subset,
+$S^{-1}A \to S^{-1}B$ is formally unramified.
+\item For $S \subset B$ a multiplicative subset,
+$A \to S^{-1}B$ is formally unramified.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemma \ref{lemma-formally-unramified-local}.
+(You can also deduce it from
+Lemma \ref{lemma-characterize-formally-unramified}
+combined with
+Lemma \ref{lemma-differentials-localize}.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-formally-unramified}
+Let $R$ be a ring. Let $I$ be a directed set.
+Let $(S_i, \varphi_{ii'})$ be a system of $R$-algebras
+over $I$. If each $R \to S_i$ is formally unramified, then
+$S = \colim_{i \in I} S_i$ is formally unramified over $R$.
+\end{lemma}
+
+\begin{proof}
+Consider a diagram as in Definition \ref{definition-formally-unramified}.
+By assumption there exists at most one $R$-algebra map $S_i \to A$ lifting
+the compositions $S_i \to S \to A/I$. Since every element of $S$
+is in the image of one of the maps $S_i \to S$ we see that there
+is at most one map $S \to A$ fitting into the diagram.
+\end{proof}
+
+
+
+\section{Conormal modules and universal thickenings}
+\label{section-conormal}
+
+\noindent
+It turns out that one can define the first infinitesimal neighbourhood
+not just for a closed immersion of schemes, but already for any formally
+unramified morphism. This is based on the following algebraic fact.
+
+\begin{lemma}
+\label{lemma-universal-thickening}
+Let $R \to S$ be a formally unramified ring map. There exists a surjection of
+$R$-algebras $S' \to S$ whose kernel is an ideal of square zero with the
+following universal property: Given any commutative diagram
+$$
+\xymatrix{
+S \ar[r]_a & A/I \\
+R \ar[r]^b \ar[u] & A \ar[u]
+}
+$$
+where $I \subset A$ is an ideal of square zero, there is a unique $R$-algebra
+map $a' : S' \to A$ such that $S' \to A \to A/I$ is equal to $S' \to S \to A/I$.
+\end{lemma}
+
+\begin{proof}
+Choose a set of generators $z_i \in S$, $i \in I$ for $S$ as an $R$-algebra.
+Let $P = R[\{x_i\}_{i \in I}]$ denote the polynomial ring on generators
+$x_i$, $i \in I$. Consider the $R$-algebra map $P \to S$ which maps
+$x_i$ to $z_i$. Let $J = \Ker(P \to S)$. Consider the map
+$$
+\text{d} : J/J^2 \longrightarrow \Omega_{P/R} \otimes_P S
+$$
+see
+Lemma \ref{lemma-differential-seq}.
+This is surjective since $\Omega_{S/R} = 0$ by assumption, see
+Lemma \ref{lemma-characterize-formally-unramified}.
+Note that $\Omega_{P/R}$ is free on $\text{d}x_i$, and hence the module
+$\Omega_{P/R} \otimes_P S$ is free over $S$. Thus we may choose a splitting
+of the surjection above and write
+$$
+J/J^2 = K \oplus \Omega_{P/R} \otimes_P S
+$$
+Let $J^2 \subset J' \subset J$ be the ideal of $P$ such that
+$J'/J^2$ is the second summand in the decomposition above.
+Set $S' = P/J'$. We obtain a short exact sequence
+$$
+0 \to J/J' \to S' \to S \to 0
+$$
+and we see that $J/J' \cong K$ is a square zero ideal in $S'$. Hence
+$$
+\xymatrix{
+S \ar[r]_1 & S \\
+R \ar[r] \ar[u] & S' \ar[u]
+}
+$$
+is a diagram as above. In fact we claim that this is an initial object in
+the category of diagrams. Namely, let $(I \subset A, a, b)$ be an arbitrary
+diagram. We may choose an $R$-algebra map $\beta : P \to A$ such that
+$$
+\xymatrix{
+S \ar[r]_1 & S \ar[r]_a & A/I \\
+R \ar[r] \ar@/_/[rr]_b \ar[u] & P \ar[u] \ar[r]^\beta & A \ar[u]
+}
+$$
+is commutative. Now it may not be the case that $\beta(J') = 0$, in other
+words it may not be true that $\beta$ factors through $S' = P/J'$.
+But what is clear is that $\beta(J') \subset I$ and
+since $\beta(J) \subset I$ and $I^2 = 0$ we have $\beta(J^2) = 0$.
+Thus the ``obstruction'' to finding a morphism from
+$(J/J' \subset S', 1, R \to S')$ to $(I \subset A, a, b)$ is
+the corresponding $S$-linear map $\overline{\beta} : J'/J^2 \to I$.
+The choice in picking $\beta$ lies in the choice of $\beta(x_i)$.
+A different choice of $\beta$, say $\beta'$, is gotten by taking
+$\beta'(x_i) = \beta(x_i) + \delta_i$ with $\delta_i \in I$.
+In this case, for $g \in J'$, we obtain
+$$
+\beta'(g) =
+\beta(g) + \sum\nolimits_i \delta_i \frac{\partial g}{\partial x_i}.
+$$
+Since the map $\text{d}|_{J'/J^2} : J'/J^2 \to \Omega_{P/R} \otimes_P S$
+given by $g \mapsto \sum\nolimits_i \frac{\partial g}{\partial x_i}\text{d}x_i$
+is an isomorphism by construction, we see that there is a unique choice
+of $\delta_i \in I$ such that $\beta'(g) = 0$ for all $g \in J'$.
+(Namely, $\delta_i$ is $-\overline{\beta}(g)$ where $g \in J'/J^2$
+is the unique element with $\frac{\partial g}{\partial x_j} = 1$ if
+$i = j$ and $0$ else.) The uniqueness of the solution implies the
+uniqueness required in the lemma.
+\end{proof}
+
+\noindent
+In the situation of
+Lemma \ref{lemma-universal-thickening}
+the $R$-algebra map $S' \to S$ is unique up to unique isomorphism.
+
+\begin{definition}
+\label{definition-universal-thickening}
+Let $R \to S$ be a formally unramified ring map.
+\begin{enumerate}
+\item The {\it universal first order thickening} of $S$ over $R$ is
+the surjection of $R$-algebras $S' \to S$ of
+Lemma \ref{lemma-universal-thickening}.
+\item The {\it conormal module} of $R \to S$ is the kernel $I$ of the
+universal first order thickening $S' \to S$, seen as an $S$-module.
+\end{enumerate}
+We often denote the conormal module {\it $C_{S/R}$} in this situation.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-universal-thickening-quotient}
+Let $I \subset R$ be an ideal of a ring.
+The universal first order thickening of $R/I$ over $R$
+is the surjection $R/I^2 \to R/I$. The conormal module
+of $R/I$ over $R$ is $C_{(R/I)/R} = I/I^2$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universal-thickening-localize}
+Let $A \to B$ be a formally unramified ring map.
+Let $\varphi : B' \to B$ be the universal first order thickening of
+$B$ over $A$.
+\begin{enumerate}
+\item Let $S \subset A$ be a multiplicative subset.
+Then $S^{-1}B' \to S^{-1}B$ is the universal first order thickening of
+$S^{-1}B$ over $S^{-1}A$. In particular $S^{-1}C_{B/A} = C_{S^{-1}B/S^{-1}A}$.
+\item Let $S \subset B$ be a multiplicative subset.
+Then $S' = \varphi^{-1}(S)$ is a multiplicative subset in $B'$
+and $(S')^{-1}B' \to S^{-1}B$ is the universal first order thickening
+of $S^{-1}B$ over $A$. In particular $S^{-1}C_{B/A} = C_{S^{-1}B/A}$.
+\end{enumerate}
+Note that the lemma makes sense by
+Lemma \ref{lemma-formally-unramified-localize}.
+\end{lemma}
+
+\begin{proof}
+With notation and assumptions as in (1). Let $(S^{-1}B)' \to S^{-1}B$
+be the universal first order thickening of $S^{-1}B$ over $S^{-1}A$.
+Note that $S^{-1}B' \to S^{-1}B$ is a surjection of $S^{-1}A$-algebras
+whose kernel has square zero. Hence by definition we obtain a map
+$(S^{-1}B)' \to S^{-1}B'$ compatible with the maps towards $S^{-1}B$.
+Consider any commutative diagram
+$$
+\xymatrix{
+B \ar[r] & S^{-1}B \ar[r] & D/I \\
+A \ar[r] \ar[u] & S^{-1}A \ar[r] \ar[u] & D \ar[u]
+}
+$$
+where $I \subset D$ is an ideal of square zero. Since $B'$ is the universal
+first order thickening of $B$ over $A$ we obtain an $A$-algebra map
+$B' \to D$. But it is clear that the image of $S$ in $D$ is mapped to
+invertible elements of $D$, and hence we obtain a compatible map
+$S^{-1}B' \to D$. Applying this to $D = (S^{-1}B)'$ we see that we get
+a map $S^{-1}B' \to (S^{-1}B)'$. We omit the verification that this map
+is inverse to the map described above.
+
+\medskip\noindent
+With notation and assumptions as in (2). Let $(S^{-1}B)' \to S^{-1}B$
+be the universal first order thickening of $S^{-1}B$ over $A$.
+Note that $(S')^{-1}B' \to S^{-1}B$ is a surjection of $A$-algebras
+whose kernel has square zero. Hence by definition we obtain a map
+$(S^{-1}B)' \to (S')^{-1}B'$ compatible with the maps towards $S^{-1}B$.
+Consider any commutative diagram
+$$
+\xymatrix{
+B \ar[r] & S^{-1}B \ar[r] & D/I \\
+A \ar[r] \ar[u] & A \ar[r] \ar[u] & D \ar[u]
+}
+$$
+where $I \subset D$ is an ideal of square zero. Since $B'$ is the universal
+first order thickening of $B$ over $A$ we obtain an $A$-algebra map
+$B' \to D$. But it is clear that the image of $S'$ in $D$ is mapped to
+invertible elements of $D$, and hence we obtain a compatible map
+$(S')^{-1}B' \to D$. Applying this to $D = (S^{-1}B)'$ we see that we get
+a map $(S')^{-1}B' \to (S^{-1}B)'$. We omit the verification that this map
+is inverse to the map described above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-differentials-universal-thickening}
+Let $R \to A \to B$ be ring maps. Assume $A \to B$ formally unramified.
+Let $B' \to B$ be the universal first order thickening of $B$ over $A$.
+Then $B'$ is formally unramified over $A$, and the canonical map
+$\Omega_{A/R} \otimes_A B \to \Omega_{B'/R} \otimes_{B'} B$ is an
+isomorphism.
+\end{lemma}
+
+\begin{proof}
+We are going to use the construction of $B'$ from the proof of
+Lemma \ref{lemma-universal-thickening}
+although in principle it should be possible to deduce these results
+formally from the definition. Namely, we choose a presentation
+$B = P/J$, where $P = A[x_i]$ is a polynomial ring over $A$.
+Next, we choose elements $f_i \in J$ such that
+$\text{d}f_i = \text{d}x_i \otimes 1$ in $\Omega_{P/A} \otimes_P B$.
+Having made these choices we have
+$B' = P/J'$ with $J' = (f_i) + J^2$, see proof of
+Lemma \ref{lemma-universal-thickening}.
+
+\medskip\noindent
+Consider the canonical exact sequence
+$$
+J'/(J')^2 \to \Omega_{P/A} \otimes_P B' \to \Omega_{B'/A} \to 0
+$$
+see
+Lemma \ref{lemma-differential-seq}.
+By construction the classes of the $f_i \in J'$ map to elements of
+the module $\Omega_{P/A} \otimes_P B'$ which generate it modulo
+$J'/J^2$. Since $J'/J^2$ is a nilpotent ideal, we see
+that these elements generate the module altogether (by
+Nakayama's Lemma \ref{lemma-NAK}). This proves that $\Omega_{B'/A} = 0$
+and hence that $B'$ is formally unramified over $A$, see
+Lemma \ref{lemma-characterize-formally-unramified}.
+
+\medskip\noindent
+Since $P$ is a polynomial ring over $A$ we have
+$\Omega_{P/R} = \Omega_{A/R} \otimes_A P \oplus \bigoplus P\text{d}x_i$.
+We are going to use this decomposition.
+Consider the following exact sequence
+$$
+J'/(J')^2 \to
+\Omega_{P/R} \otimes_P B' \to
+\Omega_{B'/R} \to 0
+$$
+see
+Lemma \ref{lemma-differential-seq}.
+We may tensor this with $B$ and obtain the exact sequence
+$$
+J'/(J')^2 \otimes_{B'} B \to
+\Omega_{P/R} \otimes_P B \to
+\Omega_{B'/R} \otimes_{B'} B \to 0
+$$
+If we remember that $J' = (f_i) + J^2$
+then we see that the first arrow annihilates the submodule $J^2/(J')^2$.
+In terms of the direct sum decomposition
+$\Omega_{P/R} \otimes_P B =
+\Omega_{A/R} \otimes_A B \oplus \bigoplus B\text{d}x_i$ given above
+we see that the submodule $(f_i)/(J')^2 \otimes_{B'} B$ maps
+isomorphically onto the summand $\bigoplus B\text{d}x_i$. Hence what is
+left of this exact sequence is an isomorphism
+$\Omega_{A/R} \otimes_A B \to \Omega_{B'/R} \otimes_{B'} B$
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Formally \'etale maps}
+\label{section-formally-etale}
+
+\begin{definition}
+\label{definition-formally-etale}
+Let $R \to S$ be a ring map.
+We say $S$ is {\it formally \'etale over $R$} if for every
+commutative solid diagram
+$$
+\xymatrix{
+S \ar[r] \ar@{-->}[rd] & A/I \\
+R \ar[r] \ar[u] & A \ar[u]
+}
+$$
+where $I \subset A$ is an ideal of square zero, there exists
+a unique dotted arrow making the diagram commute.
+\end{definition}
+
+\noindent
+Clearly a ring map is formally \'etale if and only if
+it is both formally smooth and formally unramified.
+
+\begin{lemma}
+\label{lemma-formally-etale-etale}
+Let $R \to S$ be a ring map of finite presentation.
+The following are equivalent:
+\begin{enumerate}
+\item $R \to S$ is formally \'etale,
+\item $R \to S$ is \'etale.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume that $R \to S$ is formally \'etale.
+Then $R \to S$ is smooth by
+Proposition \ref{proposition-smooth-formally-smooth}.
+By Lemma \ref{lemma-characterize-formally-unramified}
+we have $\Omega_{S/R} = 0$.
+Hence $R \to S$ is \'etale by definition.
+
+\medskip\noindent
+Assume that $R \to S$ is \'etale.
+Then $R \to S$ is formally smooth by
+Proposition \ref{proposition-smooth-formally-smooth}.
+By Lemma \ref{lemma-characterize-formally-unramified}
+it is formally unramified. Hence $R \to S$ is formally \'etale.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-formally-etale}
+Let $R$ be a ring. Let $I$ be a directed set.
+Let $(S_i, \varphi_{ii'})$ be a system of $R$-algebras
+over $I$. If each $R \to S_i$ is formally \'etale, then
+$S = \colim_{i \in I} S_i$ is formally \'etale over $R$.
+\end{lemma}
+
+\begin{proof}
+Consider a diagram as in Definition \ref{definition-formally-etale}.
+By assumption we get unique $R$-algebra maps $S_i \to A$ lifting
+the compositions $S_i \to S \to A/I$. Hence these are compatible
+with the transition maps $\varphi_{ii'}$ and define a lift
+$S \to A$. This proves existence.
+The uniqueness is clear by restricting to each $S_i$.
+\end{proof}
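
\noindent
For example, combining this with
Lemma \ref{lemma-localization-formally-etale} below shows that for
any prime $\mathfrak p \subset R$ the map $R \to R_{\mathfrak p}$
is formally \'etale: writing
$$
R_{\mathfrak p} = \colim_{f \not \in \mathfrak p} R_f
$$
exhibits $R_{\mathfrak p}$ as a directed colimit of the formally
\'etale $R$-algebras $R_f$.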
+
+\begin{lemma}
+\label{lemma-localization-formally-etale}
+Let $R$ be a ring. Let $S \subset R$ be any multiplicative subset.
+Then the ring map $R \to S^{-1}R$ is formally \'etale.
+\end{lemma}
+
+\begin{proof}
+Let $I \subset A$ be an ideal of square zero. What we are saying
+here is that given a ring map $\varphi : R \to A$ such that
+$\varphi(f) \mod I$ is invertible for all $f \in S$ we have also that
+$\varphi(f)$ is invertible in $A$ for all $f \in S$. This is true because
+$A^*$ is the inverse image of $(A/I)^*$ under the canonical map
+$A \to A/I$.
+\end{proof}
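
\noindent
Concretely, the inverse can be written down. If
$\varphi(f) b \equiv 1 \bmod I$ for some $b \in A$, say
$\varphi(f) b = 1 - i$ with $i \in I$, then since $i^2 = 0$ we get
$$
\varphi(f) \cdot b(1 + i) = (1 - i)(1 + i) = 1 - i^2 = 1,
$$
so $b(1 + i)$ is an inverse of $\varphi(f)$ in $A$.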
+
+
+
+
+
+\section{Unramified ring maps}
+\label{section-unramified}
+
+\noindent
+The definition of a G-unramified ring map is the one from EGA.
+The definition of an unramified ring map is the one from \cite{Henselian}.
+
+\begin{definition}
+\label{definition-unramified}
+Let $R \to S$ be a ring map.
+\begin{enumerate}
+\item We say $R \to S$ is {\it unramified} if $R \to S$ is of
+finite type and $\Omega_{S/R} = 0$.
+\item We say $R \to S$ is {\it G-unramified} if $R \to S$ is of finite
+presentation and $\Omega_{S/R} = 0$.
+\item Given a prime $\mathfrak q$ of $S$ we say that $S$ is
+{\it unramified at $\mathfrak q$} if there exists a
+$g \in S$, $g \not \in \mathfrak q$ such that $R \to S_g$ is unramified.
+\item Given a prime $\mathfrak q$ of $S$ we say that $S$ is
+{\it G-unramified at $\mathfrak q$} if there exists a
+$g \in S$, $g \not \in \mathfrak q$ such that $R \to S_g$ is G-unramified.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Of course a G-unramified map is unramified.
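
\noindent
A standard example to keep in mind: the map
$\mathbf{Z} \to \mathbf{Z}[i] = \mathbf{Z}[x]/(x^2 + 1)$
is of finite presentation and
$$
\Omega_{\mathbf{Z}[i]/\mathbf{Z}} =
\mathbf{Z}[i]\text{d}x/(2x\text{d}x) \cong
\mathbf{Z}[i]/(2i) = \mathbf{Z}[i]/(2) \not = 0,
$$
so $\mathbf{Z} \to \mathbf{Z}[i]$ is not unramified. Inverting $2$
kills this module, hence $\mathbf{Z} \to \mathbf{Z}[i][1/2]$ is
G-unramified (and unramified).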
+
+\begin{lemma}
+\label{lemma-formally-unramified-unramified}
+Let $R \to S$ be a ring map. The following are equivalent
+\begin{enumerate}
+\item $R \to S$ is formally unramified and of finite type, and
+\item $R \to S$ is unramified.
+\end{enumerate}
+Moreover, also the following are equivalent
+\begin{enumerate}
+\item $R \to S$ is formally unramified and of finite presentation, and
+\item $R \to S$ is G-unramified.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-characterize-formally-unramified}
+and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unramified}
+Properties of unramified and G-unramified ring maps.
+\begin{enumerate}
+\item The base change of an unramified ring map is unramified.
+The base change of a G-unramified ring map is G-unramified.
+\item The composition of unramified ring maps is unramified.
+The composition of G-unramified ring maps is G-unramified.
+\item Any principal localization $R \to R_f$ is G-unramified and
+unramified.
+\item If $I \subset R$ is an ideal, then $R \to R/I$ is unramified.
+If $I \subset R$ is a finitely generated ideal, then $R \to R/I$ is
+G-unramified.
+\item An \'etale ring map is G-unramified and unramified.
+\item If $R \to S$ is of finite type (resp.\ finite presentation),
+$\mathfrak q \subset S$ is a prime and $(\Omega_{S/R})_{\mathfrak q} = 0$,
+then $R \to S$ is unramified (resp.\ G-unramified) at $\mathfrak q$.
+\item If $R \to S$ is of finite type (resp.\ finite presentation),
+$\mathfrak q \subset S$ is a prime and
+$\Omega_{S/R} \otimes_S \kappa(\mathfrak q) = 0$, then
+$R \to S$ is unramified (resp.\ G-unramified) at $\mathfrak q$.
+\item If $R \to S$ is of finite type (resp.\ finite presentation),
+$\mathfrak q \subset S$ is a prime lying over $\mathfrak p \subset R$ and
+$(\Omega_{S \otimes_R \kappa(\mathfrak p)/\kappa(\mathfrak p)})_{\mathfrak q}
+= 0$, then $R \to S$ is unramified (resp.\ G-unramified) at $\mathfrak q$.
\item If $R \to S$ is of finite type (resp.\ finite presentation),
+$\mathfrak q \subset S$ is a prime lying over $\mathfrak p \subset R$ and
+$(\Omega_{S \otimes_R \kappa(\mathfrak p)/\kappa(\mathfrak p)})
+\otimes_{S \otimes_R \kappa(\mathfrak p)} \kappa(\mathfrak q) = 0$,
+then $R \to S$ is unramified (resp.\ G-unramified) at $\mathfrak q$.
+\item If $R \to S$ is a ring map, $g_1, \ldots, g_m \in S$ generate
+the unit ideal and $R \to S_{g_j}$ is unramified (resp.\ G-unramified) for
+$j = 1, \ldots, m$, then $R \to S$ is unramified (resp.\ G-unramified).
+\item If $R \to S$ is a ring map which is unramified (resp.\ G-unramified)
+at every prime of $S$, then $R \to S$ is unramified (resp.\ G-unramified).
+\item If $R \to S$ is G-unramified, then there exists a finite type
+$\mathbf{Z}$-algebra $R_0$ and a G-unramified ring map $R_0 \to S_0$
+and a ring map $R_0 \to R$ such that $S = R \otimes_{R_0} S_0$.
+\item If $R \to S$ is unramified, then there exists a finite type
+$\mathbf{Z}$-algebra $R_0$ and an unramified ring map $R_0 \to S_0$
+and a ring map $R_0 \to R$ such that $S$ is a quotient of
+$R \otimes_{R_0} S_0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We prove each point, in order.
+
+\medskip\noindent
+Ad (1). Follows from Lemmas \ref{lemma-differentials-base-change}
+and \ref{lemma-base-change-finiteness}.
+
+\medskip\noindent
+Ad (2). Follows from Lemmas \ref{lemma-exact-sequence-differentials}
+and \ref{lemma-base-change-finiteness}.
+
+\medskip\noindent
+Ad (3). Follows by direct computation of $\Omega_{R_f/R}$ which we omit.
+
+\medskip\noindent
+Ad (4). We have $\Omega_{(R/I)/R} = 0$, see
+Lemma \ref{lemma-trivial-differential-surjective},
+and the ring map $R \to R/I$
+is of finite type. If $I$ is a finitely generated ideal then $R \to R/I$
+is of finite presentation.
+
+\medskip\noindent
+Ad (5). See discussion following Definition \ref{definition-etale}.
+
+\medskip\noindent
+Ad (6). In this case $\Omega_{S/R}$ is a finite $S$-module (see
+Lemma \ref{lemma-differentials-finitely-generated}) and hence there
+exists a $g \in S$, $g \not \in \mathfrak q$ such that
+$(\Omega_{S/R})_g = 0$. By Lemma \ref{lemma-differentials-localize}
+this means that $\Omega_{S_g/R} = 0$ and hence $R \to S_g$ is
+unramified as desired.
+
+\medskip\noindent
+Ad (7). Use Nakayama's lemma (Lemma \ref{lemma-NAK}) to see that
+the condition is equivalent to the condition of (6).
+
+\medskip\noindent
+Ad (8) and (9). These are equivalent in the same manner that (6) and (7)
+are equivalent. Moreover
+$\Omega_{S \otimes_R \kappa(\mathfrak p)/\kappa(\mathfrak p)} =
+\Omega_{S/R} \otimes_S (S \otimes_R \kappa(\mathfrak p))$ by
+Lemma \ref{lemma-differentials-base-change}.
+Hence we see that (9) is equivalent to (7) since
+the $\kappa(\mathfrak q)$ vector spaces in both are canonically
+isomorphic.
+
+\medskip\noindent
+Ad (10). Follows from Lemmas \ref{lemma-cover}
+and \ref{lemma-differentials-localize}.
+
+\medskip\noindent
Ad (11). Follows from (10) and the fact that the spectrum of $S$
is quasi-compact.
+
+\medskip\noindent
+Ad (12). Write $S = R[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$.
+As $\Omega_{S/R} = 0$ we can write
+$$
+\text{d}x_i = \sum h_{ij}\text{d}g_j + \sum a_{ijk}g_j\text{d}x_k
+$$
+in $\Omega_{R[x_1, \ldots, x_n]/R}$
+for some $h_{ij}, a_{ijk} \in R[x_1, \ldots, x_n]$.
+Choose a finitely generated
+$\mathbf{Z}$-subalgebra $R_0 \subset R$ containing all the coefficients of the
+polynomials $g_i, h_{ij}, a_{ijk}$. Set
+$S_0 = R_0[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$. This works.
+
+\medskip\noindent
+Ad (13). Write $S = R[x_1, \ldots, x_n]/I$.
+As $\Omega_{S/R} = 0$ we can write
+$$
+\text{d}x_i = \sum h_{ij}\text{d}g_{ij} + \sum g'_{ik}\text{d}x_k
+$$
+in $\Omega_{R[x_1, \ldots, x_n]/R}$
+for some $h_{ij} \in R[x_1, \ldots, x_n]$ and $g_{ij}, g'_{ik} \in I$.
+Choose a finitely generated $\mathbf{Z}$-subalgebra $R_0 \subset R$
+containing all the coefficients of the
+polynomials $g_{ij}, h_{ij}, g'_{ik}$. Set
+$S_0 = R_0[x_1, \ldots, x_n]/(g_{ij}, g'_{ik})$. This works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal-unramified}
+Let $R \to S$ be a ring map.
+If $R \to S$ is unramified, then there exists an idempotent
+$e \in S \otimes_R S$ such that $S \otimes_R S \to S$ is isomorphic
+to $S \otimes_R S \to (S \otimes_R S)_e$.
+\end{lemma}
+
+\begin{proof}
+Let $J = \Ker(S \otimes_R S \to S)$. By assumption
+$J/J^2 = 0$, see
+Lemma \ref{lemma-differentials-diagonal}.
+Since $S$ is of finite type over $R$ we
+see that $J$ is finitely generated, namely by
+$x_i \otimes 1 - 1 \otimes x_i$, where $x_i$ generate $S$ over $R$.
+We win by Lemma \ref{lemma-ideal-is-squared-union-connected}.
+\end{proof}
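
\noindent
For a simple illustration of the lemma take $S = R \times R$ with its
two idempotents $e_1 = (1, 0)$ and $e_2 = (0, 1)$. Then
$S \otimes_R S$ is free over $R$ with basis the $e_i \otimes e_j$,
the multiplication map sends $e_i \otimes e_j$ to $\delta_{ij} e_i$,
and the idempotent
$$
e = e_1 \otimes e_1 + e_2 \otimes e_2 \in S \otimes_R S
$$
satisfies $(S \otimes_R S)_e \cong S$ compatibly with the map
$S \otimes_R S \to S$.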
+
+\begin{lemma}
+\label{lemma-unramified-at-prime}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q \subset S$ be
+a prime lying over $\mathfrak p$ in $R$.
+If $S/R$ is unramified at $\mathfrak q$ then
+\begin{enumerate}
\item we have $\mathfrak p S_{\mathfrak q} = \mathfrak q S_{\mathfrak q}$,
which is the maximal ideal of the local ring $S_{\mathfrak q}$, and
+\item the field extension $\kappa(\mathfrak q)/\kappa(\mathfrak p)$
+is finite separable.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We may first replace $S$ by $S_g$ for some $g \in S$, $g \not \in \mathfrak q$
+and assume that $R \to S$ is unramified.
+The base change $S \otimes_R \kappa(\mathfrak p)$
+is unramified over $\kappa(\mathfrak p)$ by
+Lemma \ref{lemma-unramified}.
+By
+Lemma \ref{lemma-characterize-smooth-over-field}
+it is smooth hence \'etale over $\kappa(\mathfrak p)$.
+Hence we see that
+$S \otimes_R \kappa(\mathfrak p) =
+(R \setminus \mathfrak p)^{-1} S/\mathfrak pS$
+is a product of finite separable field extensions of
+$\kappa(\mathfrak p)$ by Lemma \ref{lemma-etale-over-field}.
+This implies the lemma.
+\end{proof}
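
\noindent
For example, for $R = \mathbf{Z} \to S = \mathbf{Z}[i]$ and the
prime $\mathfrak q = (1 + i)$ lying over $\mathfrak p = (2)$ we have
$2 = -i(1 + i)^2$, so
$$
\mathfrak p S_{\mathfrak q} = \mathfrak q^2 S_{\mathfrak q}
\not = \mathfrak q S_{\mathfrak q}
$$
and property (1) fails; by the lemma $S$ cannot be unramified over
$R$ at $\mathfrak q$. At a prime lying over an odd prime $p$ both
properties do hold, consistent with the fact that
$\mathbf{Z} \to \mathbf{Z}[i][1/2]$ is unramified.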
+
+\begin{lemma}
+\label{lemma-unramified-quasi-finite}
+Let $R \to S$ be a finite type ring map.
+Let $\mathfrak q$ be a prime of $S$.
+If $R \to S$ is unramified at $\mathfrak q$ then
+$R \to S$ is quasi-finite at $\mathfrak q$.
+In particular, an unramified ring map is quasi-finite.
+\end{lemma}
+
+\begin{proof}
+An unramified ring map is of finite type.
+Thus it is clear that the second statement follows from the first.
+To see the first statement apply the characterization of
+Lemma \ref{lemma-isolated-point-fibre} part (2) using
+Lemma \ref{lemma-unramified-at-prime}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-unramified}
+Let $R \to S$ be a ring map. Let $\mathfrak q$ be a prime of $S$
+lying over a prime $\mathfrak p$ of $R$. If
+\begin{enumerate}
+\item $R \to S$ is of finite type,
+\item $\mathfrak p S_{\mathfrak q}$ is the maximal ideal
+of the local ring $S_{\mathfrak q}$, and
+\item the field extension $\kappa(\mathfrak q)/\kappa(\mathfrak p)$
+is finite separable,
+\end{enumerate}
+then $R \to S$ is unramified at $\mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-unramified} (8) it suffices to show that
+$\Omega_{S \otimes_R \kappa(\mathfrak p) / \kappa(\mathfrak p)}$
+is zero when localized at $\mathfrak q$. Hence we may replace $S$
+by $S \otimes_R \kappa(\mathfrak p)$ and $R$ by $\kappa(\mathfrak p)$.
+In other words, we may assume that $R = k$ is a field and $S$
+is a finite type $k$-algebra.
+In this case the hypotheses imply that
+$S_{\mathfrak q} \cong \kappa(\mathfrak q)$.
+Thus $(\Omega_{S/k})_{\mathfrak q} = \Omega_{S_\mathfrak q/k} =
+\Omega_{\kappa(\mathfrak q)/k}$ is zero as desired (the
+first equality is Lemma \ref{lemma-differentials-localize}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-flat-unramified-finite-presentation}
+Let $R \to S$ be a ring map. The following are equivalent
+\begin{enumerate}
+\item $R \to S$ is \'etale,
+\item $R \to S$ is flat and G-unramified, and
+\item $R \to S$ is flat, unramified, and of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (2) and (3) are equivalent by definition.
+The implication (1) $\Rightarrow$ (3) follows from
+the fact that \'etale ring maps are of finite presentation,
+Lemma \ref{lemma-etale} (flatness of \'etale maps), and
+Lemma \ref{lemma-unramified} (\'etale maps are unramified).
+Conversely, the characterization of \'etale ring maps in
+Lemma \ref{lemma-characterize-etale}
+and the structure of unramified ring maps in
+Lemma \ref{lemma-unramified-at-prime}
+shows that (3) implies (1). (This uses that $R \to S$
+is \'etale if $R \to S$ is \'etale at every prime $\mathfrak q \subset S$,
+see Lemma \ref{lemma-etale}.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-etale-over-polynomial-ring}
+Let $k$ be a field. Let
+$$
+\varphi : k[x_1, \ldots, x_n] \to A, \quad x_i \longmapsto a_i
+$$
+be a finite type ring map. Then $\varphi$ is \'etale if and only if we
+have the following two conditions: (a) the local rings of $A$ at maximal ideals
+have dimension $n$, and (b) the elements $\text{d}(a_1), \ldots, \text{d}(a_n)$
+generate $\Omega_{A/k}$ as an $A$-module.
+\end{lemma}
+
+\begin{proof}
+Assume (a) and (b). Condition (b) implies that
+$\Omega_{A/k[x_1, \ldots, x_n]} = 0$ and hence $\varphi$ is unramified.
+Thus it suffices to prove that $\varphi$ is flat, see
+Lemma \ref{lemma-etale-flat-unramified-finite-presentation}.
+Let $\mathfrak m \subset A$ be a maximal ideal.
+Set $X = \Spec(A)$ and denote $x \in X$ the closed point corresponding
+to $\mathfrak m$. Then $\dim(A_\mathfrak m)$ is $\dim_x X$, see
+Lemma \ref{lemma-dimension-closed-point-finite-type-field}.
+Thus by Lemma \ref{lemma-characterize-smooth-over-field}
+we see that if (a) and (b) hold, then $A_\mathfrak m$ is
+a regular local ring for every maximal ideal $\mathfrak m$. Then
+$k[x_1, \ldots, x_n]_{\varphi^{-1}(\mathfrak m)} \to A_\mathfrak m$
+is flat by Lemma \ref{lemma-CM-over-regular-flat}
+(and the fact that a regular local ring is CM, see
+Lemma \ref{lemma-regular-ring-CM}).
+Thus $\varphi$ is flat by Lemma \ref{lemma-flat-localization}.
+
+\medskip\noindent
+Assume $\varphi$ is \'etale. Then $\Omega_{A/k[x_1, \ldots, x_n]} = 0$
+and hence (b) holds. On the other hand, \'etale ring maps are flat
+(Lemma \ref{lemma-etale}) and quasi-finite
+(Lemma \ref{lemma-etale-quasi-finite}).
Hence for every maximal ideal $\mathfrak m$ of $A$ we may apply
+Lemma \ref{lemma-dimension-base-fibre-equals-total} to
+$k[x_1, \ldots, x_n]_{\varphi^{-1}(\mathfrak m)} \to A_\mathfrak m$
+to see that $\dim(A_\mathfrak m) = n$ and hence (a) holds.
+\end{proof}
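
\noindent
As an illustration (with $k$ of characteristic not $2$), consider
$\varphi : k[x] \to A = k[y]$, $x \mapsto y^2$. Condition (a) holds
since every local ring of $k[y]$ at a maximal ideal has dimension
$1 = n$, but
$$
\text{d}(y^2) = 2y\,\text{d}y
$$
generates $\Omega_{A/k} = A\,\text{d}y$ only after inverting $y$.
So $\varphi$ is not \'etale, while the induced map
$k[x] \to k[y]_y$ is \'etale: it is standard \'etale, as
$k[y]_y = (k[x][y]/(y^2 - x))_{f'}$ with $f = y^2 - x$ and
$f' = 2y$.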
+
+
+
+
+\section{Local structure of unramified ring maps}
+\label{section-local-structure-unramified}
+
+\noindent
+An unramified morphism is locally (in a suitable sense) the composition
+of a closed immersion and an \'etale morphism. The algebraic underpinnings
+of this fact are discussed in this section.
+
+\begin{proposition}
+\label{proposition-unramified-locally-standard}
+Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime.
+If $R \to S$ is unramified at $\mathfrak q$, then there exist
+\begin{enumerate}
+\item a $g \in S$, $g \not \in \mathfrak q$,
+\item a standard \'etale ring map $R \to S'$, and
+\item a surjective $R$-algebra map $S' \to S_g$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+This proof is the ``same'' as the proof of
+Proposition \ref{proposition-etale-locally-standard}.
+The proof is a little roundabout and there may be ways to
+shorten it.
+
+\medskip\noindent
+Step 1. By Definition \ref{definition-unramified}
+there exists a $g \in S$, $g \not \in \mathfrak q$
+such that $R \to S_g$ is unramified. Thus we may assume that $S$ is
+unramified over $R$.
+
+\medskip\noindent
+Step 2. By Lemma \ref{lemma-unramified}
+there exists an unramified ring map $R_0 \to S_0$
+with $R_0$ of finite type over $\mathbf{Z}$, and a ring map
+$R_0 \to R$ such that $S$ is a quotient of $R \otimes_{R_0} S_0$. Denote
+$\mathfrak q_0$ the prime of $S_0$ corresponding to $\mathfrak q$.
+If we show the result for $(R_0 \to S_0, \mathfrak q_0)$ then the
+result follows for $(R \to S, \mathfrak q)$ by base change. Hence
+we may assume that $R$ is Noetherian.
+
+\medskip\noindent
+Step 3.
+Note that $R \to S$ is quasi-finite by
+Lemma \ref{lemma-unramified-quasi-finite}.
+By Lemma \ref{lemma-quasi-finite-open-integral-closure}
+there exists a finite ring map $R \to S'$, an $R$-algebra map
+$S' \to S$, an element $g' \in S'$ such that
+$g' \not \in \mathfrak q$ such that $S' \to S$ induces
+an isomorphism $S'_{g'} \cong S_{g'}$.
+(Note that $S'$ may not be unramified over $R$.)
+Thus we may assume that (a) $R$ is Noetherian, (b) $R \to S$ is finite
+and (c) $R \to S$ is unramified at $\mathfrak q$
+(but no longer necessarily unramified at all primes).
+
+\medskip\noindent
+Step 4. Let $\mathfrak p \subset R$ be the prime corresponding
+to $\mathfrak q$. Consider the fibre ring
+$S \otimes_R \kappa(\mathfrak p)$. This is a finite algebra over
+$\kappa(\mathfrak p)$. Hence it is Artinian
+(see Lemma \ref{lemma-finite-dimensional-algebra}) and
+so a finite product of local rings
+$$
+S \otimes_R \kappa(\mathfrak p) = \prod\nolimits_{i = 1}^n A_i
+$$
+see Proposition \ref{proposition-dimension-zero-ring}. One of the factors,
+say $A_1$, is the local ring $S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$
+which is isomorphic to $\kappa(\mathfrak q)$,
+see Lemma \ref{lemma-unramified-at-prime}. The other factors correspond to
+the other primes, say $\mathfrak q_2, \ldots, \mathfrak q_n$ of
+$S$ lying over $\mathfrak p$.
+
+\medskip\noindent
+Step 5. We may choose a nonzero element $\alpha \in \kappa(\mathfrak q)$ which
+generates the finite separable field extension
+$\kappa(\mathfrak q)/\kappa(\mathfrak p)$ (so even if the
+field extension is trivial we do not allow $\alpha = 0$).
+Note that for any $\lambda \in \kappa(\mathfrak p)^*$ the
+element $\lambda \alpha$ also generates $\kappa(\mathfrak q)$
+over $\kappa(\mathfrak p)$. Consider the element
+$$
+\overline{t} =
+(\alpha, 0, \ldots, 0) \in
+\prod\nolimits_{i = 1}^n A_i =
+S \otimes_R \kappa(\mathfrak p).
+$$
+After possibly replacing $\alpha$ by $\lambda \alpha$ as above
+we may assume that $\overline{t}$ is the image of $t \in S$.
+Let $I \subset R[x]$ be the kernel of the $R$-algebra
+map $R[x] \to S$ which maps $x$ to $t$. Set $S' = R[x]/I$,
+so $S' \subset S$. Here is a diagram
+$$
+\xymatrix{
+R[x] \ar[r] & S' \ar[r] & S \\
+R \ar[u] \ar[ru] \ar[rru] & &
+}
+$$
+By construction the primes $\mathfrak q_j$, $j \geq 2$ of $S$ all
+lie over the prime $(\mathfrak p, x)$ of $R[x]$, whereas
+the prime $\mathfrak q$ lies over a different prime of $R[x]$
+because $\alpha \not = 0$.
+
+\medskip\noindent
+Step 6. Denote $\mathfrak q' \subset S'$ the prime of $S'$
+corresponding to $\mathfrak q$. By the above $\mathfrak q$ is
+the only prime of $S$ lying over $\mathfrak q'$. Thus we see that
+$S_{\mathfrak q} = S_{\mathfrak q'}$, see
+Lemma \ref{lemma-unique-prime-over-localize-below} (we have
+going up for $S' \to S$ by Lemma \ref{lemma-integral-going-up}
+since $S' \to S$ is finite as $R \to S$ is finite).
+It follows that $S'_{\mathfrak q'} \to S_{\mathfrak q}$ is finite
+and injective as the localization of the finite injective ring map
+$S' \to S$. Consider the maps of local rings
+$$
+R_{\mathfrak p} \to S'_{\mathfrak q'} \to S_{\mathfrak q}
+$$
+The second map is finite and injective. We have
+$S_{\mathfrak q}/\mathfrak pS_{\mathfrak q} = \kappa(\mathfrak q)$,
+see Lemma \ref{lemma-unramified-at-prime}.
+Hence a fortiori
+$S_{\mathfrak q}/\mathfrak q'S_{\mathfrak q} = \kappa(\mathfrak q)$.
+Since
+$$
+\kappa(\mathfrak p) \subset \kappa(\mathfrak q') \subset \kappa(\mathfrak q)
+$$
+and since $\alpha$ is in the image of $\kappa(\mathfrak q')$ in
+$\kappa(\mathfrak q)$
+we conclude that $\kappa(\mathfrak q') = \kappa(\mathfrak q)$.
+Hence by Nakayama's Lemma \ref{lemma-NAK} applied to the
+$S'_{\mathfrak q'}$-module map $S'_{\mathfrak q'} \to S_{\mathfrak q}$,
+the map $S'_{\mathfrak q'} \to S_{\mathfrak q}$ is surjective.
+In other words,
+$S'_{\mathfrak q'} \cong S_{\mathfrak q}$.
+
+\medskip\noindent
+Step 7. By Lemma \ref{lemma-isomorphic-local-rings} there exist
+$g \in S$, $g \not \in \mathfrak q$ and
+$g' \in S'$, $g' \not \in \mathfrak q'$ such that $S'_{g'} \cong S_g$.
+As $R$ is Noetherian the ring $S'$ is finite over $R$
+because it is an $R$-submodule
+of the finite $R$-module $S$. Hence after replacing $S$ by $S'$ we may
+assume that (a) $R$ is Noetherian, (b) $S$ finite over $R$, (c)
+$S$ is unramified over $R$ at $\mathfrak q$, and (d) $S = R[x]/I$.
+
+\medskip\noindent
+Step 8. Consider the ring
+$S \otimes_R \kappa(\mathfrak p) = \kappa(\mathfrak p)[x]/\overline{I}$
+where $\overline{I} = I \cdot \kappa(\mathfrak p)[x]$ is the ideal generated
+by $I$ in $\kappa(\mathfrak p)[x]$. As $\kappa(\mathfrak p)[x]$ is a PID
+we know that $\overline{I} = (\overline{h})$ for some monic
$\overline{h} \in \kappa(\mathfrak p)[x]$. After replacing $\overline{h}$
by $\lambda \cdot \overline{h}$ for some nonzero $\lambda \in \kappa(\mathfrak p)$
+we may assume that $\overline{h}$ is the image of some $h \in R[x]$.
+(The problem is that we do not know if we may choose $h$ monic.)
+Also, as in Step 4 we know that
+$S \otimes_R \kappa(\mathfrak p) = A_1 \times \ldots \times A_n$ with
+$A_1 = \kappa(\mathfrak q)$ a finite separable extension of
+$\kappa(\mathfrak p)$ and $A_2, \ldots, A_n$ local. This implies
+that
+$$
+\overline{h} = \overline{h}_1 \overline{h}_2^{e_2} \ldots \overline{h}_n^{e_n}
+$$
+for certain pairwise coprime irreducible monic polynomials
+$\overline{h}_i \in \kappa(\mathfrak p)[x]$ and certain
+$e_2, \ldots, e_n \geq 1$. Here the numbering is chosen so that
+$A_i = \kappa(\mathfrak p)[x]/(\overline{h}_i^{e_i})$ as
+$\kappa(\mathfrak p)[x]$-algebras. Note that $\overline{h}_1$ is
+the minimal polynomial of $\alpha \in \kappa(\mathfrak q)$ and hence
is a separable polynomial (it is prime to its derivative).
+
+\medskip\noindent
+Step 9. Let $m \in I$ be a monic element; such an element exists
+because the ring extension $R \to R[x]/I$ is finite hence integral.
+Denote $\overline{m}$ the image in $\kappa(\mathfrak p)[x]$.
+We may factor
+$$
+\overline{m} = \overline{k}
+\overline{h}_1^{d_1} \overline{h}_2^{d_2} \ldots \overline{h}_n^{d_n}
+$$
+for some $d_1 \geq 1$, $d_j \geq e_j$, $j = 2, \ldots, n$ and
+$\overline{k} \in \kappa(\mathfrak p)[x]$ prime to all the $\overline{h}_i$.
Set $f = m^l + h$ where $l$ is an integer such that
$l \deg(m) > \deg(h)$ and $l \geq 2$.
+Then $f$ is monic as a polynomial over $R$. Also, the image $\overline{f}$
+of $f$ in $\kappa(\mathfrak p)[x]$ factors as
+$$
+\overline{f} =
+\overline{h}_1 \overline{h}_2^{e_2} \ldots \overline{h}_n^{e_n}
++
+\overline{k}^l \overline{h}_1^{ld_1} \overline{h}_2^{ld_2}
+\ldots \overline{h}_n^{ld_n}
+=
+\overline{h}_1(\overline{h}_2^{e_2} \ldots \overline{h}_n^{e_n}
++
+\overline{k}^l
+\overline{h}_1^{ld_1 - 1} \overline{h}_2^{ld_2} \ldots \overline{h}_n^{ld_n})
+= \overline{h}_1 \overline{w}
+$$
+with $\overline{w}$ a polynomial relatively prime to $\overline{h}_1$.
+Set $g = f'$ (the derivative with respect to $x$).
+
+\medskip\noindent
+Step 10. The ring map $R[x] \to S = R[x]/I$ has the properties:
+(1) it maps $f$ to zero, and
+(2) it maps $g$ to an element of $S \setminus \mathfrak q$.
+The first assertion is clear since $f$ is an element of $I$.
+For the second assertion we just have to show that $g$ does
+not map to zero in
+$\kappa(\mathfrak q) = \kappa(\mathfrak p)[x]/(\overline{h}_1)$.
+The image of $g$ in $\kappa(\mathfrak p)[x]$ is the derivative
+of $\overline{f}$. Thus (2) is clear because
+$$
+\overline{g} =
+\frac{\text{d}\overline{f}}{\text{d}x} =
+\overline{w}\frac{\text{d}\overline{h}_1}{\text{d}x} +
+\overline{h}_1\frac{\text{d}\overline{w}}{\text{d}x},
+$$
+$\overline{w}$ is prime to $\overline{h}_1$ and
+$\overline{h}_1$ is separable.
+
+\medskip\noindent
+Step 11.
+We conclude that $\varphi : R[x]/(f) \to S$ is a surjective ring map,
+$R[x]_g/(f)$ is \'etale over $R$ (because it is standard \'etale,
+see Lemma \ref{lemma-standard-etale}) and $\varphi(g) \not \in \mathfrak q$.
+Thus the map $(R[x]/(f))_g \to S_{\varphi(g)}$ is the desired
+surjection.
+\end{proof}
+
+
+
+\begin{lemma}
+\label{lemma-etale-makes-unramified-closed-at-prime}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q$ be a prime of $S$ lying over $\mathfrak p \subset R$.
+Assume that $R \to S$ is of finite type and unramified at $\mathfrak q$.
+Then there exist
+\begin{enumerate}
+\item an \'etale ring map $R \to R'$,
\item a prime $\mathfrak p' \subset R'$ lying over $\mathfrak p$,
+\item a product decomposition
+$$
+R' \otimes_R S = A \times B
+$$
+\end{enumerate}
+with the following properties
+\begin{enumerate}
+\item $R' \to A$ is surjective, and
+\item $\mathfrak p'A$ is a prime of $A$ lying over $\mathfrak p'$ and
+over $\mathfrak q$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We may replace $(R \to S, \mathfrak p, \mathfrak q)$
+with any base change $(R' \to R'\otimes_R S, \mathfrak p', \mathfrak q')$
+by an \'etale ring map $R \to R'$ with a prime $\mathfrak p'$
+lying over $\mathfrak p$, and a choice of $\mathfrak q'$ lying over
+both $\mathfrak q$ and $\mathfrak p'$. Note also that given
+$R \to R'$ and $\mathfrak p'$ a suitable $\mathfrak q'$ can always
+be found.
+
+\medskip\noindent
+The assumption that $R \to S$ is of finite type means that we may apply
+Lemma \ref{lemma-etale-makes-quasi-finite-finite-variant}. Thus we may
+assume that $S = A_1 \times \ldots \times A_n \times B$, that
+each $R \to A_i$ is finite with exactly one prime $\mathfrak r_i$
+lying over $\mathfrak p$ such that
+$\kappa(\mathfrak p) \subset \kappa(\mathfrak r_i)$ is purely inseparable
+and that $R \to B$ is not quasi-finite at any prime lying over $\mathfrak p$.
+Then clearly $\mathfrak q = \mathfrak r_i$ for some $i$, since
+an unramified morphism is quasi-finite
+(see Lemma \ref{lemma-unramified-quasi-finite}).
+Say $\mathfrak q = \mathfrak r_1$.
+By Lemma \ref{lemma-unramified-at-prime} we see that
+$\kappa(\mathfrak r_1)/\kappa(\mathfrak p)$
+is separable hence the trivial field extension, and that
+$\mathfrak p(A_1)_{\mathfrak r_1}$ is the maximal ideal.
+Also, by Lemma \ref{lemma-unique-prime-over-localize-below}
+(which applies to $R \to A_1$ because a finite ring map satisfies going up by
+Lemma \ref{lemma-integral-going-up})
+we have $(A_1)_{\mathfrak r_1} = (A_1)_{\mathfrak p}$.
+It follows from Nakayama's Lemma \ref{lemma-NAK}
+that the map of local rings
+$R_{\mathfrak p} \to (A_1)_{\mathfrak p} = (A_1)_{\mathfrak r_1}$
+is surjective. Since $A_1$ is finite over $R$ we see that there
exists an $f \in R$, $f \not \in \mathfrak p$ such that
+$R_f \to (A_1)_f$ is surjective. After replacing $R$ by $R_f$ we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-makes-unramified-closed}
+\begin{slogan}
+In an unramified ring map, one can separate the points in a fiber
+by passing to an \'etale neighbourhood.
+\end{slogan}
+Let $R \to S$ be a ring map.
+Let $\mathfrak p$ be a prime of $R$.
+If $R \to S$ is unramified then there exist
+\begin{enumerate}
+\item an \'etale ring map $R \to R'$,
\item a prime $\mathfrak p' \subset R'$ lying over $\mathfrak p$,
+\item a product decomposition
+$$
+R' \otimes_R S = A_1 \times \ldots \times A_n \times B
+$$
+\end{enumerate}
+with the following properties
+\begin{enumerate}
+\item $R' \to A_i$ is surjective,
+\item $\mathfrak p'A_i$ is a prime of $A_i$ lying over $\mathfrak p'$, and
+\item there is no prime of $B$ lying over $\mathfrak p'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We may apply Lemma \ref{lemma-etale-makes-quasi-finite-finite-variant}.
+Thus, after an \'etale base change,
+we may assume that $S = A_1 \times \ldots \times A_n \times B$,
+that each $R \to A_i$ is finite with exactly one prime $\mathfrak r_i$
+lying over $\mathfrak p$ such that
+$\kappa(\mathfrak p) \subset \kappa(\mathfrak r_i)$ is purely inseparable,
+and that $R \to B$ is not quasi-finite at any prime lying over $\mathfrak p$.
+Since $R \to S$ is quasi-finite (see
+Lemma \ref{lemma-unramified-quasi-finite})
+we see there is no prime of $B$ lying over $\mathfrak p$.
+By Lemma \ref{lemma-unramified-at-prime} we see that
+$\kappa(\mathfrak r_i)/\kappa(\mathfrak p)$
+is separable hence the trivial field extension, and that
+$\mathfrak p(A_i)_{\mathfrak r_i}$ is the maximal ideal.
+Also, by Lemma \ref{lemma-unique-prime-over-localize-below}
+(which applies to $R \to A_i$ because a finite ring map satisfies going up by
+Lemma \ref{lemma-integral-going-up})
+we have $(A_i)_{\mathfrak r_i} = (A_i)_{\mathfrak p}$.
+It follows from Nakayama's Lemma \ref{lemma-NAK}
+that the map of local rings
+$R_{\mathfrak p} \to (A_i)_{\mathfrak p} = (A_i)_{\mathfrak r_i}$
+is surjective. Since $A_i$ is finite over $R$ we see that there
exists an $f \in R$, $f \not \in \mathfrak p$ such that
+$R_f \to (A_i)_f$ is surjective. After replacing $R$ by $R_f$ we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Henselian local rings}
+\label{section-henselian}
+
+\noindent
In this section we briefly discuss the notion of a henselian local ring.
+Let $(R, \mathfrak m, \kappa)$ be a local ring.
+For $a \in R$ we denote $\overline{a}$ the image of $a$ in $\kappa$.
+For a polynomial $f \in R[T]$ we often denote $\overline{f}$
+the image of $f$ in $\kappa[T]$.
+Given a polynomial $f \in R[T]$ we denote $f'$ the derivative
+of $f$ with respect to $T$. Note that $\overline{f}' = \overline{f'}$.
+
+\begin{definition}
+\label{definition-henselian}
+Let $(R, \mathfrak m, \kappa)$ be a local ring.
+\begin{enumerate}
+\item We say $R$ is {\it henselian} if for every monic $f \in R[T]$ and
+every root $a_0 \in \kappa$ of $\overline{f}$ such that
+$\overline{f'}(a_0) \not = 0$
+there exists an $a \in R$ such that $f(a) = 0$ and
+$a_0 = \overline{a}$.
+\item We say $R$ is {\it strictly henselian} if $R$ is henselian
+and its residue field is separably algebraically closed.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that the condition $\overline{f'}(a_0) \not = 0$ is equivalent to the
+condition that $a_0$ is a simple root of the polynomial $\overline{f}$.
+In fact, it implies that the lift $a \in R$, if it exists, is unique.
+
+\begin{lemma}
+\label{lemma-uniqueness}
+Let $(R, \mathfrak m, \kappa)$ be a local ring.
+Let $f \in R[T]$. Let $a, b \in R$ such that $f(a) = f(b) = 0$,
+$a = b \bmod \mathfrak m$, and $f'(a) \not \in \mathfrak m$.
+Then $a = b$.
+\end{lemma}
+
+\begin{proof}
+Write $f(x + y) - f(x) = f'(x)y + g(x, y) y^2$ in $R[x, y]$ (this is possible
+as one sees by expanding $f(x + y)$; details omitted).
+Then we see that $0 = f(b) - f(a) = f(a + (b - a)) - f(a) =
+f'(a)(b - a) + c (b - a)^2$ for some $c \in R$. By assumption
+$f'(a)$ is a unit in $R$. Hence $(b - a)(1 + f'(a)^{-1}c(b - a)) = 0$.
+By assumption $b - a \in \mathfrak m$, hence $1 + f'(a)^{-1}c(b - a)$
+is a unit in $R$. Hence $b - a = 0$ in $R$.
+\end{proof}
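
\noindent
A standard example: the ring $\mathbf{Z}_7$ of $7$-adic integers is
henselian. Taking $f = T^2 - 2$ we have $\overline{f}(3) = 0$ in
$\mathbf{F}_7$ (as $9 \equiv 2$) and $\overline{f'}(3) = 6 \not = 0$,
so there exists an $a \in \mathbf{Z}_7$ with $a^2 = 2$ and
$a \equiv 3 \bmod 7$, and by Lemma \ref{lemma-uniqueness} this $a$
is unique. By contrast the localization $\mathbf{Z}_{(7)}$ has the
same residue field but contains no square root of $2$, so it is not
henselian.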
+
+\noindent
+Here is the characterization of henselian local rings.
+
+\begin{lemma}
+\label{lemma-characterize-henselian}
+\begin{slogan}
+Characterizations of henselian local rings
+\end{slogan}
+Let $(R, \mathfrak m, \kappa)$ be a local ring.
+The following are equivalent
+\begin{enumerate}
+\item $R$ is henselian,
+\item for every $f \in R[T]$ and every root $a_0 \in \kappa$
+of $\overline{f}$ such that $\overline{f'}(a_0) \not = 0$
+there exists an $a \in R$ such that $f(a) = 0$ and
+$a_0 = \overline{a}$,
+\item for any monic $f \in R[T]$ and any factorization
+$\overline{f} = g_0 h_0$ with $\gcd(g_0, h_0) = 1$ there
+exists a factorization $f = gh$ in $R[T]$ such that
+$g_0 = \overline{g}$ and $h_0 = \overline{h}$,
+\item for any monic $f \in R[T]$ and any factorization
+$\overline{f} = g_0 h_0$ with $\gcd(g_0, h_0) = 1$ there
+exists a factorization $f = gh$ in $R[T]$ such that
+$g_0 = \overline{g}$ and $h_0 = \overline{h}$ and moreover
+$\deg_T(g) = \deg_T(g_0)$,
+\item for any $f \in R[T]$ and any factorization
+$\overline{f} = g_0 h_0$ with $\gcd(g_0, h_0) = 1$ there
+exists a factorization $f = gh$ in $R[T]$ such that
+$g_0 = \overline{g}$ and $h_0 = \overline{h}$,
+\item for any $f \in R[T]$ and any factorization
+$\overline{f} = g_0 h_0$ with $\gcd(g_0, h_0) = 1$ there
+exists a factorization $f = gh$ in $R[T]$ such that
+$g_0 = \overline{g}$ and $h_0 = \overline{h}$ and
+moreover $\deg_T(g) = \deg_T(g_0)$,
+\item for any \'etale ring map $R \to S$ and prime $\mathfrak q$ of $S$
+lying over $\mathfrak m$ with $\kappa = \kappa(\mathfrak q)$
+there exists a section $\tau : S \to R$ of $R \to S$,
+\item for any \'etale ring map $R \to S$ and prime $\mathfrak q$ of $S$
+lying over $\mathfrak m$ with $\kappa = \kappa(\mathfrak q)$
+there exists a section $\tau : S \to R$ of $R \to S$ with
+$\mathfrak q = \tau^{-1}(\mathfrak m)$,
+\item any finite $R$-algebra is a product of local rings,
+\item any finite $R$-algebra is a finite product of local rings,
+\item any finite type $R$-algebra $S$ can be written as
+$A \times B$ with $R \to A$ finite
+and $R \to B$ not quasi-finite at any prime lying over $\mathfrak m$,
+\item any finite type $R$-algebra $S$ can be written as
+$A \times B$ with $R \to A$ finite
+such that each irreducible component of $\Spec(B \otimes_R \kappa)$
+has dimension $\geq 1$, and
+\item any quasi-finite $R$-algebra $S$ can be written as
+$S = A \times B$ with $R \to A$ finite such that $B \otimes_R \kappa = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Here is a list of the easier implications:
+\begin{enumerate}
+\item 2$\Rightarrow$1 because in (2) we consider all polynomials and
+in (1) only monic ones,
+\item 5$\Rightarrow$3 because in (5) we consider all polynomials and
+in (3) only monic ones,
+\item 6$\Rightarrow$4 because in (6) we consider all polynomials and
+in (4) only monic ones,
+\item 4$\Rightarrow$3 is obvious,
+\item 6$\Rightarrow$5 is obvious,
+\item 8$\Rightarrow$7 is obvious,
+\item 10$\Rightarrow$9 is obvious,
+\item 11$\Leftrightarrow$12 by definition of being quasi-finite at a prime,
+\item 11$\Rightarrow$13 by definition of being quasi-finite.
+\end{enumerate}
+
+\noindent
+Proof of 1$\Rightarrow$8. Assume (1).
+Let $R \to S$ be \'etale, and let $\mathfrak q \subset S$
+be a prime ideal such that $\kappa(\mathfrak q) \cong \kappa$. By
+Proposition \ref{proposition-etale-locally-standard}
+we can find a $g \in S$, $g \not \in \mathfrak q$ such that
+$R \to S_g$ is standard \'etale. After replacing $S$ by $S_g$ we may assume
+that $S = R[t]_g/(f)$ is standard \'etale. Since the prime $\mathfrak q$
+has residue field $\kappa$ it corresponds to a root $a_0$ of
+$\overline{f}$ which is not a root of $\overline{g}$. By definition
+of a standard \'etale algebra this also means that
+$\overline{f'}(a_0) \not = 0$.
+Since also $f$ is monic by definition of a standard \'etale algebra again we
+may use that $R$ is henselian to conclude that there exists an $a \in R$
+with $a_0 = \overline{a}$ such that $f(a) = 0$. This implies that
+$g(a)$ is a unit of $R$ and we obtain the desired map
+$\tau : S = R[t]_g/(f) \to R$ by the rule $t \mapsto a$. By construction
+$\tau^{-1}(\mathfrak m) = \mathfrak q$. This proves (8) holds.
+
+\medskip\noindent
+Proof of 7$\Rightarrow$8. Assume (7) holds and assume $R \to S$ is \'etale.
+Let $\mathfrak q$ be a prime of $S$ lying over $\mathfrak m$ with
+$\kappa = \kappa(\mathfrak q)$, and let
+$\mathfrak q_1, \ldots, \mathfrak q_r$ be
+the other primes of $S$ lying over $\mathfrak m$.
+Then we can find a $g \in S$, $g \not \in \mathfrak q$ and
+$g \in \mathfrak q_i$ for $i = 1, \ldots, r$.
+Namely, we can argue that
+$\bigcap_{i=1}^{r} \mathfrak{q}_{i} \not\subset \mathfrak{q}$
+since otherwise
+$\mathfrak{q}_{i} \subset \mathfrak{q}$
+for some $i$, but this cannot happen as the fiber of an
+\'etale morphism is discrete (use Lemma \ref{lemma-etale-over-field}
+for example).
+Apply (7) to the \'etale ring map
+$R \to S_g$ and the prime $\mathfrak qS_g$. This gives a section
+$\tau_g : S_g \to R$ such that the composition $\tau : S \to S_g \to R$
+has the property $\tau^{-1}(\mathfrak m) = \mathfrak q$.
+Minor details omitted.
+
+\medskip\noindent
+Proof of 8$\Rightarrow$11. Assume (8) and let $R \to S$ be a finite type
+ring map. Apply
+Lemma \ref{lemma-etale-makes-quasi-finite-finite}.
+We find an \'etale ring map $R \to R'$ and a prime $\mathfrak m' \subset R'$
+lying over $\mathfrak m$ with $\kappa = \kappa(\mathfrak m')$
+such that $R' \otimes_R S = A' \times B'$ with $A'$ finite over $R'$
+and $B'$ not quasi-finite over $R'$ at any prime lying over $\mathfrak m'$.
+Apply (8) to get a section $\tau : R' \to R$ with
+$\mathfrak m' = \tau^{-1}(\mathfrak m)$. Then use that
+$$
+S = (S \otimes_R R') \otimes_{R', \tau} R
+= (A' \times B') \otimes_{R', \tau} R
+= (A' \otimes_{R', \tau} R) \times (B' \otimes_{R', \tau} R)
+$$
+which gives a decomposition as in (11).
+
+\medskip\noindent
+Proof of 8$\Rightarrow$10. Assume (8) and let $R \to S$ be a finite
+ring map. Apply
+Lemma \ref{lemma-etale-makes-quasi-finite-finite}.
+We find an \'etale ring map $R \to R'$ and a prime $\mathfrak m' \subset R'$
+lying over $\mathfrak m$ with $\kappa = \kappa(\mathfrak m')$
+such that $R' \otimes_R S = A'_1 \times \ldots \times A'_n \times B'$
+with $A'_i$ finite over $R'$ having exactly one prime over $\mathfrak m'$
+and $B'$ not quasi-finite over $R'$ at any prime lying over $\mathfrak m'$.
+Apply (8) to get a section $\tau : R' \to R$ with
+$\mathfrak m' = \tau^{-1}(\mathfrak m)$. Then we obtain
+\begin{align*}
+S & = (S \otimes_R R') \otimes_{R', \tau} R \\
+& = (A'_1 \times \ldots \times A'_n \times B') \otimes_{R', \tau} R \\
+& = (A'_1 \otimes_{R', \tau} R) \times
+\ldots \times (A'_n \otimes_{R', \tau} R) \times
+(B' \otimes_{R', \tau} R) \\
+& = A_1 \times \ldots \times A_n \times B
+\end{align*}
+The factor $B$ is finite over $R$ but $R \to B$
+is not quasi-finite at any prime lying over $\mathfrak m$. Hence
+$B = 0$. The factors $A_i$ are finite $R$-algebras having exactly
+one prime lying over $\mathfrak m$, hence they are local rings.
+This proves that $S$ is a finite product of local rings.
+
+\medskip\noindent
+Proof of 9$\Rightarrow$10. This holds because if $S$ is finite over the local
+ring $R$, then it has at most finitely many maximal ideals. Namely, by
+going up for $R \to S$ the maximal ideals of $S$ all lie over $\mathfrak m$,
+and $S/\mathfrak mS$ is Artinian hence has finitely many primes.
+
+\medskip\noindent
+Proof of 10$\Rightarrow$1. Assume (10). Let $f \in R[T]$ be a monic
+polynomial and $a_0 \in \kappa$ a simple root of $\overline{f}$.
+Then $S = R[T]/(f)$ is a finite $R$-algebra. Applying (10)
+we get $S = A_1 \times \ldots \times A_r$ is a finite product of
+local $R$-algebras. In particular we see that
+$S/\mathfrak mS = \prod A_i/\mathfrak mA_i$ is the decomposition
+of $\kappa[T]/(\overline{f})$ as a product of local rings.
+This means that one of the factors, say $A_1/\mathfrak mA_1$
+is the quotient $\kappa[T]/(\overline{f}) \to \kappa[T]/(T - a_0)$.
+Since $A_1$ is a summand of the finite free $R$-module $S$ it
+is a finite free $R$-module itself. As $A_1/\mathfrak mA_1$ is a
+$\kappa$-vector space of dimension 1 we see that $A_1 \cong R$ as an
+$R$-module. Clearly this means that $R \to A_1$ is an isomorphism.
+Let $a \in R$ be the image of $T$ under the map
+$R[T] \to S \to A_1 \to R$. Then $f(a) = 0$ and $\overline{a} = a_0$
+as desired.
+
+\medskip\noindent
+Proof of 13$\Rightarrow$1. Assume (13). Let $f \in R[T]$ be a monic
+polynomial and $a_0 \in \kappa$ a simple root of $\overline{f}$.
+Then $S_1 = R[T]/(f)$ is a finite $R$-algebra. Let $g \in R[T]$
+be any element such that $\overline{g} = \overline{f}/(T - a_0)$.
+Then $S = (S_1)_g$ is a quasi-finite $R$-algebra such that
+$S \otimes_R \kappa \cong \kappa[T]_{\overline{g}}/(\overline{f})
+\cong \kappa[T]/(T - a_0) \cong \kappa$.
+Applying (13) to $S$ we get $S = A \times B$ with $A$ finite over $R$ and
+$B \otimes_R \kappa = 0$. In particular we see that
+$\kappa \cong S/\mathfrak mS = A/\mathfrak mA$.
+Since $A$ is a summand of the flat $R$-algebra $S$ we see
+that it is finite flat, hence free over $R$.
+As $A/\mathfrak mA$ is a
+$\kappa$-vector space of dimension 1 we see that $A \cong R$ as an
+$R$-module. Clearly this means that $R \to A$ is an isomorphism.
+Let $a \in R$ be the image of $T$ under the map
+$R[T] \to S \to A \to R$. Then $f(a) = 0$ and $\overline{a} = a_0$
+as desired.
+
+\medskip\noindent
+Proof of 8$\Rightarrow$2. Assume (8). Let $f \in R[T]$ be any
+polynomial and let $a_0 \in \kappa$ be a simple root of $\overline{f}$. Then
+the algebra $S = R[T]_{f'}/(f)$ is \'etale over $R$.
+Let $\mathfrak q \subset S$ be the prime
+generated by $\mathfrak m$ and $T - b$ where $b \in R$ is any
+element such that $\overline{b} = a_0$. Apply (8) to $S$ and $\mathfrak q$
+to get $\tau : S \to R$.
+Then the image $\tau(T) = a \in R$ works in (2).
+
+\medskip\noindent
+At this point we see that (1), (2), (7), (8), (9), (10), (11), (12), (13) are
+all equivalent. The weakest assertion of (3), (4), (5) and (6)
+is (3) and the strongest is (6). Hence we still have to prove that
+(3) implies (1) and (1) implies (6).
+
+\medskip\noindent
+Proof of 3$\Rightarrow$1. Assume (3). Let $f \in R[T]$ be monic and
+let $a_0 \in \kappa$ be a simple root of $\overline{f}$. This gives
+a factorization $\overline{f} = (T - a_0)h_0$ with $h_0(a_0) \not = 0$,
+so $\gcd(T - a_0, h_0) = 1$. Apply (3) to get a factorization
+$f = gh$ with $\overline{g} = T - a_0$ and $\overline{h} = h_0$.
+Set $S = R[T]/(f)$ which is a finite free $R$-algebra. We will write
+$g$, $h$ also for the images of $g$ and $h$ in $S$. Then
+$gS + hS = S$ by
+Nakayama's Lemma \ref{lemma-NAK}
+as the equality holds modulo $\mathfrak m$. Since $gh = f = 0$ in $S$
+this also implies that $gS \cap hS = 0$. Hence by the Chinese Remainder
+theorem we obtain $S = S/(g) \times S/(h)$. This implies that
+$A = S/(g)$ is a summand of a finite free $R$-module, hence finite
+free. Moreover, the rank of $A$ is $1$ as
+$A/\mathfrak mA = \kappa[T]/(T - a_0)$. Thus the map $R \to A$
+is an isomorphism. Setting $a \in R$ equal to the image of $T$
+under the maps $R[T] \to S \to A \to R$ gives an element of $R$
+with $f(a) = 0$ and $\overline{a} = a_0$.
+
+\medskip\noindent
+Proof of 1$\Rightarrow$6. Assume (1) or equivalently all of
+(1), (2), (7), (8), (9), (10), (11), (12), (13).
+Let $f \in R[T]$ be a polynomial.
+Suppose that $\overline{f} = g_0h_0$ is a factorization with
+$\gcd(g_0, h_0) = 1$. We may and do assume that $g_0$ is monic.
+Consider $S = R[T]/(f)$. Because we
+have the factorization we see that the coefficients of
+$f$ generate the unit ideal in $R$.
+This implies that $S$ has finite fibres over $R$, hence is
+quasi-finite over $R$. It also implies that $S$ is flat over $R$ by
+Lemma \ref{lemma-grothendieck-general}.
+Combining (13) and (10) we may write
+$S = A_1 \times \ldots \times A_n \times B$
+where each $A_i$ is local and finite over $R$, and
+$B \otimes_R \kappa = 0$. After reordering the factors $A_1, \ldots, A_n$
+we may assume that
+$$
+\kappa[T]/(g_0) =
+A_1/\mathfrak m A_1 \times \ldots \times A_r/\mathfrak mA_r,
+\ \kappa[T]/(h_0) =
+A_{r + 1}/\mathfrak mA_{r + 1} \times \ldots \times A_n/\mathfrak mA_n
+$$
+as quotients of $\kappa[T]$. The finite flat $R$-algebra
+$A = A_1 \times \ldots \times A_r$ is free as an $R$-module, see
+Lemma \ref{lemma-finite-flat-local}.
+Its rank is $\deg_T(g_0)$. Let $g \in R[T]$ be the characteristic polynomial
+of the $R$-linear operator $T : A \to A$. Then $g$ is a monic polynomial
+of degree $\deg_T(g) = \deg_T(g_0)$ and moreover $\overline{g} = g_0$.
+By Cayley-Hamilton
+(Lemma \ref{lemma-charpoly})
+we see that $g(T_A) = 0$ where $T_A$ indicates
+the image of $T$ in $A$. Hence we obtain a well defined surjective map
+$R[T]/(g) \to A$ which is an isomorphism by
+Nakayama's Lemma \ref{lemma-NAK}. The map $R[T] \to A$ factors
+through $R[T]/(f)$ by construction hence we may write $f = gh$ for
+some $h$. This finishes the proof.
+\end{proof}
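+
+\noindent
+A standard non-example may help calibrate these conditions. The local ring
+$R = \mathbf{C}[x]_{(x)}$ is not henselian: for $f = T^2 - (1 + x)$ the
+reduction $\overline{f} = T^2 - 1$ has the simple root $a_0 = 1$, but a
+root of $f$ in $R$ would be a rational function $a$ with $a^2 = 1 + x$,
+which is impossible because $1 + x$ occurs to an odd power in
+$(1 + x)$ and to an even power in $a^2$ by unique factorization in
+$\mathbf{C}[x]$. Passing to the completion $\mathbf{C}[[x]]$ remedies
+this, see Lemma \ref{lemma-complete-henselian}.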
+
+\begin{lemma}
+\label{lemma-finite-over-henselian}
+Let $(R, \mathfrak m, \kappa)$ be a henselian local ring.
+\begin{enumerate}
+\item If $R \to S$ is a finite ring map then $S$ is
+a finite product of henselian local rings each finite over $R$.
+\item If $R \to S$ is a finite ring map and $S$ is local, then
+$S$ is a henselian local ring and $R \to S$ is a (finite) local ring map.
+\item If $R \to S$ is a finite type ring map, and $\mathfrak q$ is
+a prime of $S$ lying over $\mathfrak m$ at which $R \to S$ is quasi-finite,
+then $S_{\mathfrak q}$ is henselian and finite over $R$.
+\item If $R \to S$ is quasi-finite then $S_{\mathfrak q}$ is henselian
+and finite over $R$ for every prime $\mathfrak q$ lying over $\mathfrak m$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (2) implies part (1) since $S$ as in part (1) is a finite product
+of its localizations at the primes lying over $\mathfrak m$ by
+Lemma \ref{lemma-characterize-henselian} part (10).
+Part (2) also follows from Lemma \ref{lemma-characterize-henselian} part (10)
+since any finite $S$-algebra is also a finite $R$-algebra (of course
+any finite ring map between local rings is local).
+
+\medskip\noindent
+Let $R \to S$ and $\mathfrak q$ be as in (3). Write $S = A \times B$
+with $A$ finite over $R$ and $B$ not quasi-finite over $R$ at any
+prime lying over $\mathfrak m$, see
+Lemma \ref{lemma-characterize-henselian} part (11).
+Hence $S_\mathfrak q$ is a localization of $A$ at a maximal ideal
+and we deduce (3) from (1). Part (4) follows from part (3).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-mop-up}
+Let $(R, \mathfrak m, \kappa)$ be a henselian local ring.
+Any finite type $R$-algebra $S$ can be written as
+$S = A_1 \times \ldots \times A_n \times B$ with $A_i$ local
+and finite over $R$ and $R \to B$ not quasi-finite at any
+prime of $B$ lying over $\mathfrak m$.
+\end{lemma}
+
+\begin{proof}
+This is a combination of parts (11) and (10) of
+Lemma \ref{lemma-characterize-henselian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-mop-up-strictly-henselian}
+Let $(R, \mathfrak m, \kappa)$ be a strictly henselian local ring.
+Any finite type $R$-algebra $S$ can be written as
+$S = A_1 \times \ldots \times A_n \times B$ with $A_i$ local
+and finite over $R$ and $\kappa \subset \kappa(\mathfrak m_{A_i})$
+finite purely inseparable and $R \to B$ not quasi-finite
+at any prime of $B$ lying over $\mathfrak m$.
+\end{lemma}
+
+\begin{proof}
+First write $S = A_1 \times \ldots \times A_n \times B$ as in
+Lemma \ref{lemma-mop-up}.
+The field extension $\kappa(\mathfrak m_{A_i})/\kappa$
+is finite and $\kappa$ is separably algebraically closed, hence
+it is finite purely inseparable.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-henselian-cat-finite-etale}
+Let $(R, \mathfrak m, \kappa)$ be a henselian local ring.
+The category of finite \'etale ring extensions $R \to S$ is
+equivalent to the category of finite \'etale algebras
+$\kappa \to \overline{S}$ via the functor $S \mapsto S/\mathfrak mS$.
+\end{lemma}
+
+\begin{proof}
+Denote $\mathcal{C} \to \mathcal{D}$ the functor of categories
+of the statement.
+Suppose that $R \to S$ is finite \'etale. Then we may write
+$$
+S = A_1 \times \ldots \times A_n
+$$
+with $A_i$ local and finite \'etale over $R$, use either
+Lemma \ref{lemma-mop-up}
+or
+Lemma \ref{lemma-characterize-henselian} part (10).
+In particular $A_i/\mathfrak mA_i$ is a finite separable field
+extension of $\kappa$, see
+Lemma \ref{lemma-etale-at-prime}.
+Thus we see that every object of $\mathcal{C}$ and
+$\mathcal{D}$ decomposes canonically into irreducible pieces
+which correspond via the given functor.
+Next, suppose that $S_1$, $S_2$ are finite \'etale over $R$ such that
+$\kappa_1 = S_1/\mathfrak mS_1$ and $\kappa_2 = S_2/\mathfrak mS_2$
+are fields (finite separable over $\kappa$). Then $S_1 \otimes_R S_2$
+is finite \'etale over $R$ and we may write
+$$
+S_1 \otimes_R S_2 = A_1 \times \ldots \times A_n
+$$
+as before. Then we see that $\Hom_R(S_1, S_2)$ is identified
+with the set of indices $i \in \{1, \ldots, n\}$ such that
+$S_2 \to A_i$ is an isomorphism. To see this use that given any $R$-algebra
+map $\varphi : S_1 \to S_2$ the map
+$\varphi \times 1 : S_1 \otimes_R S_2 \to S_2$
+is surjective, and hence is equal to projection onto one of the factors $A_i$.
+But in exactly the same way we see that
+$\Hom_\kappa(\kappa_1, \kappa_2)$ is identified with
+the set of indices $i \in \{1, \ldots, n\}$ such that
+$\kappa_2 \to A_i/\mathfrak mA_i$ is an isomorphism.
+By the discussion above these sets of indices match, and we conclude
+that our functor is fully faithful.
+Finally, let $\kappa'/\kappa$ be a finite
+separable field extension. By
+Lemma \ref{lemma-make-etale-map-prescribed-residue-field}
+there exists an \'etale ring map $R \to S$ and a prime $\mathfrak q$
+of $S$ lying over $\mathfrak m$ such that $\kappa \subset \kappa(\mathfrak q)$
+is isomorphic to the given extension. By Lemma \ref{lemma-mop-up}
+we may write $S = A_1 \times \ldots \times A_n \times B$.
+Since $R \to S$ is quasi-finite we see that there exists no
+prime of $B$ over $\mathfrak m$. Hence $S_{\mathfrak q}$ is
+equal to $A_i$ for some $i$. Hence $R \to A_i$ is finite \'etale
+and produces the given residue field extension. Thus the functor
+is essentially surjective and we win.
+\end{proof}
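+
+\noindent
+In particular, if $R$ is strictly henselian, then $\kappa$ has no
+nontrivial finite separable extensions, and the lemma shows that every
+finite \'etale $R$-algebra is a finite product of copies of $R$.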
+
+\begin{lemma}
+\label{lemma-unramified-over-strictly-henselian}
+Let $(R, \mathfrak m, \kappa)$ be a strictly henselian local ring.
+Let $R \to S$ be an unramified ring map. Then
+$$
+S = A_1 \times \ldots \times A_n \times B
+$$
+with each $R \to A_i$ surjective and no prime of $B$ lying
+over $\mathfrak m$.
+\end{lemma}
+
+\begin{proof}
+First write $S = A_1 \times \ldots \times A_n \times B$ as in
+Lemma \ref{lemma-mop-up}.
+Now we see that $R \to A_i$ is finite unramified and $A_i$ local.
+Hence the maximal ideal of $A_i$ is $\mathfrak mA_i$ and its
+residue field $A_i / \mathfrak m A_i$ is a finite
+separable extension of $\kappa$, see
+Lemma \ref{lemma-unramified-at-prime}.
+However, the condition that $R$ is strictly henselian means that
+$\kappa$ is separably algebraically closed, so
+$\kappa = A_i / \mathfrak m A_i$. By
+Nakayama's Lemma \ref{lemma-NAK}
+we conclude that $R \to A_i$ is surjective as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complete-henselian}
+\begin{slogan}
+Complete local rings are Henselian by Newton's method
+\end{slogan}
+Let $(R, \mathfrak m, \kappa)$ be a complete local ring, see
+Definition \ref{definition-complete-local-ring}.
+Then $R$ is henselian.
+\end{lemma}
+
+\begin{proof}
+Let $f \in R[T]$ be monic.
+Denote $f_n \in R/\mathfrak m^{n + 1}[T]$ the image.
+Denote $f'_n$ the derivative of $f_n$ with respect to $T$.
+Let $a_0 \in \kappa$ be a simple root of $f_0$. We lift this
+to a solution of $f$ over $R$ inductively as follows:
+Suppose given $a_n \in R/\mathfrak m^{n + 1}$ such that
+$a_n \bmod \mathfrak m = a_0$ and $f_n(a_n) = 0$. Pick any
+element $b \in R/\mathfrak m^{n + 2}$ such that
+$a_n = b \bmod \mathfrak m^{n + 1}$. Then
+$f_{n + 1}(b) \in \mathfrak m^{n + 1}/\mathfrak m^{n + 2}$.
+Set
+$$
+a_{n + 1} = b - f_{n + 1}(b)/f'_{n + 1}(b)
+$$
+(Newton's method). This makes sense as
+$f'_{n + 1}(b) \in R/\mathfrak m^{n + 2}$
+is invertible by the condition on $a_0$. Write $a_{n + 1} = b + h$ with
+$h = -f_{n + 1}(b)/f'_{n + 1}(b) \in \mathfrak m^{n + 1}/\mathfrak m^{n + 2}$.
+Expanding $f_{n + 1}(b + h)$ as in the proof of Lemma \ref{lemma-uniqueness}
+we compute
+$f_{n + 1}(a_{n + 1}) = f_{n + 1}(b) + f'_{n + 1}(b)h + ch^2 =
+f_{n + 1}(b) - f_{n + 1}(b) = 0$
+in $R/\mathfrak m^{n + 2}$ for some $c$, because $h^2 = 0$ there.
+Since the system of elements
+$a_n \in R/\mathfrak m^{n + 1}$ so constructed is compatible
+we get an element
+$a \in \lim R/\mathfrak m^n = R$ (here we use that $R$ is complete).
+Moreover, $f(a) = 0$ since it maps to zero in each $R/\mathfrak m^n$.
+Finally $\overline{a} = a_0$ and we win.
+\end{proof}
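+
+\noindent
+Here is the iteration in a concrete case. Take
+$R = \mathbf{Z}_7 = \lim \mathbf{Z}/7^{n + 1}\mathbf{Z}$ and $f = T^2 - 2$.
+Then $a_0 = 3$ is a simple root of $\overline{f}$ as $3^2 \equiv 2$ and
+$f'(3) = 6 \not\equiv 0$ modulo $7$. One Newton step with $b = 3$ gives
+$$
+a_1 = 3 - f(3)/f'(3) = 3 - 7 \cdot 6^{-1} \equiv 3 - 7 \cdot 41 \equiv 10
+\bmod 49
+$$
+and indeed $10^2 = 100 \equiv 2 \bmod 49$. Continuing yields the $7$-adic
+square root of $2$ with residue $3$.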
+
+\begin{lemma}
+\label{lemma-local-dimension-zero-henselian}
+\begin{slogan}
+Local rings of dimension zero are henselian.
+\end{slogan}
+Let $(R, \mathfrak m)$ be a local ring of dimension $0$.
+Then $R$ is henselian.
+\end{lemma}
+
+\begin{proof}
+Let $R \to S$ be a finite ring map. By
+Lemma \ref{lemma-characterize-henselian}
+it suffices to show that $S$ is a product of local rings. By
+Lemma \ref{lemma-finite-finite-fibres}
+$S$ has finitely many primes $\mathfrak m_1, \ldots, \mathfrak m_r$
+which all lie over $\mathfrak m$. There are no inclusions among these
+primes, see
+Lemma \ref{lemma-integral-no-inclusion},
+hence they are all maximal. Every element of
+$\mathfrak m_1 \cap \ldots \cap \mathfrak m_r$ is nilpotent by
+Lemma \ref{lemma-Zariski-topology}.
+It follows $S$ is the product of the localizations of $S$ at the primes
+$\mathfrak m_i$ by
+Lemma \ref{lemma-product-local}.
+\end{proof}
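+
+\noindent
+In particular every field and every Artinian local ring, for instance
+$k[\epsilon]/(\epsilon^2)$ over a field $k$, is henselian.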
+
+\noindent
+The following lemma will be the key to the uniqueness and functorial
+properties of henselization and strict henselization.
+
+\begin{lemma}
+\label{lemma-map-into-henselian}
+Let $R \to S$ be a ring map with $S$ henselian local.
+Given
+\begin{enumerate}
+\item an \'etale ring map $R \to A$,
+\item a prime $\mathfrak q$ of $A$ lying over
+$\mathfrak p = R \cap \mathfrak m_S$,
+\item a $\kappa(\mathfrak p)$-algebra map
+$\tau : \kappa(\mathfrak q) \to S/\mathfrak m_S$,
+\end{enumerate}
+then there exists a unique homomorphism of $R$-algebras $f : A \to S$
+such that $\mathfrak q = f^{-1}(\mathfrak m_S)$ and
+$f \bmod \mathfrak q = \tau$.
+\end{lemma}
+
+\begin{proof}
+Consider $A \otimes_R S$. This is an \'etale algebra over $S$, see
+Lemma \ref{lemma-etale}. Moreover, the kernel
+$$
+\mathfrak q' = \Ker(A \otimes_R S \to
+\kappa(\mathfrak q) \otimes_{\kappa(\mathfrak p)} \kappa(\mathfrak m_S) \to
+\kappa(\mathfrak m_S))
+$$
+of the composition, in which the second arrow comes from the map $\tau$
+given in (3), is a prime ideal lying over
+$\mathfrak m_S$ with residue field equal to the residue field of $S$.
+Hence by Lemma \ref{lemma-characterize-henselian}
+there exists a unique splitting $\sigma : A \otimes_R S \to S$
+with $\sigma^{-1}(\mathfrak m_S) = \mathfrak q'$.
+Set $f$ equal to the composition $A \to A \otimes_R S \to S$
+of the canonical map with $\sigma$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strictly-henselian-solutions}
+Let $\varphi : R \to S$ be a local homomorphism
+of strictly henselian local rings.
+Let $P_1, \ldots, P_n \in R[x_1, \ldots, x_n]$ be polynomials such that
+$R[x_1, \ldots, x_n]/(P_1, \ldots, P_n)$ is \'etale over $R$.
+Then the map
+$$
+R^n \longrightarrow S^n, \quad
+(h_1, \ldots, h_n) \longmapsto (\varphi(h_1), \ldots, \varphi(h_n))
+$$
+induces a bijection between
+$$
+\{
+(r_1, \ldots, r_n) \in R^n
+\mid
+P_i(r_1, \ldots, r_n) = 0, \ i = 1, \ldots, n
+\}
+$$
+and
+$$
+\{
+(s_1, \ldots, s_n) \in S^n
+\mid
+P'_i(s_1, \ldots, s_n) = 0, \ i = 1, \ldots, n
+\}
+$$
+where $P'_i \in S[x_1, \ldots, x_n]$ are the images of the $P_i$
+under $\varphi$.
+\end{lemma}
+
+\begin{proof}
+The first solution set is canonically isomorphic to the set
+$$
+\Hom_R(R[x_1, \ldots, x_n]/(P_1, \ldots, P_n), R).
+$$
+As $R$ is henselian the map $R \to R/\mathfrak m_R$ induces a bijection
+between this set and the set of solutions in the
+residue field $R/\mathfrak m_R$, see
+Lemma \ref{lemma-characterize-henselian}.
+The same is true for $S$.
+Now since $R[x_1, \ldots, x_n]/(P_1, \ldots, P_n)$ is \'etale over $R$
+and $R/\mathfrak m_R$ is separably algebraically closed we see that
+$R/\mathfrak m_R[x_1, \ldots, x_n]/(\overline{P_1}, \ldots, \overline{P_n})$
+is a finite product of copies of $R/\mathfrak m_R$. Hence the
+tensor product
+$$
+R/\mathfrak m_R[x_1, \ldots, x_n]/(\overline{P_1}, \ldots, \overline{P_n})
+\otimes_{R/\mathfrak m_R} S/\mathfrak m_S
+=
+S/\mathfrak m_S[x_1, \ldots, x_n]/(\overline{P_1'}, \ldots, \overline{P_n'})
+$$
+is also a finite product of copies of $S/\mathfrak m_S$ with the same
+index set. This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-split-ML-henselian}
+Let $R$ be a henselian local ring.
+Any countably generated Mittag-Leffler module over $R$ is a direct
+sum of finitely presented $R$-modules.
+\end{lemma}
+
+\begin{proof}
+Let $M$ be a countably generated and Mittag-Leffler $R$-module.
+We claim that for any element $x \in M$ there exists a direct
+sum decomposition $M = N \oplus K$ with $x \in N$, the module
+$N$ finitely presented, and $K$ Mittag-Leffler.
+
+\medskip\noindent
+Suppose the claim is true. Choose generators $x_1, x_2, x_3, \ldots$
+of $M$. By the claim we can inductively find direct sum decompositions
+$$
+M = N_1 \oplus N_2 \oplus \ldots \oplus N_n \oplus K_n
+$$
+with $N_i$ finitely presented,
+$x_1, \ldots, x_n \in N_1 \oplus \ldots \oplus N_n$, and $K_n$ Mittag-Leffler.
+Repeating ad infinitum we see that $M = \bigoplus N_i$.
+
+\medskip\noindent
+We still have to prove the claim. Let $x \in M$. By
+Lemma \ref{lemma-ML-countable}
+there exists an endomorphism $\alpha : M \to M$
+such that $\alpha$ factors through a finitely presented module, and
+$\alpha (x) = x$. Say $\alpha$ factors as
+$$
+\xymatrix{
+M \ar[r]^\pi & P \ar[r]^i & M
+}
+$$
+Set $a = \pi \circ \alpha \circ i : P \to P$, so
+$i \circ a \circ \pi = \alpha^3$. By
+Lemma \ref{lemma-charpoly-module}
+there exists a monic polynomial $g \in R[T]$ such that $g(a) = 0$.
+Note that this implies formally that $\alpha^2 g(\alpha) = 0$.
+Hence we may think of $M$ as a module over $R[T]/(T^2g)$.
+Assume that $x \not = 0$. Then $\alpha(x) = x$ implies that
+$0 = \alpha^2g(\alpha)x = g(1)x$ hence $g(1) = 0$ in $R/I$ where
+$I = \{r \in R \mid rx = 0\}$ is the annihilator of $x$.
+As $x \not = 0$ we see $I \subset \mathfrak m_R$, hence
+$1$ is a root of $\overline{g} = g \bmod \mathfrak m_R \in R/\mathfrak m_R[T]$.
+As $R$ is henselian we can find a factorization
+$$
+T^2g = (T^2 Q_1) Q_2
+$$
+for some $Q_1, Q_2 \in R[T]$ with
+$Q_2 = (T - 1)^e \bmod \mathfrak m_R R[T]$ and
+$Q_1(1) \not = 0 \bmod \mathfrak m_R$, see
+Lemma \ref{lemma-characterize-henselian}.
+Let $N = \Im(\alpha^2Q_1(\alpha) : M \to M)$ and
+$K = \Im(Q_2(\alpha) : M \to M)$. As $T^2Q_1$ and
+$Q_2$ generate the unit ideal of $R[T]$ we get a direct sum
+decomposition $M = N \oplus K$. Moreover, $Q_2$ acts as zero on $N$ and
+$T^2Q_1$ acts as zero on $K$. Note that $N$ is a quotient of $P$
+hence is finitely generated. Also $x \in N$ because
+$\alpha^2Q_1(\alpha)x = Q_1(1)x$ and $Q_1(1)$ is a unit in $R$. By
+Lemma \ref{lemma-direct-sum-ML}
+the modules $N$ and $K$ are Mittag-Leffler. Finally, the finitely generated
+module $N$ is finitely presented as a finitely generated Mittag-Leffler
+module is finitely presented, see
+Example \ref{example-ML} part (1).
+\end{proof}
+
+
+
+\section{Filtered colimits of \'etale ring maps}
+\label{section-ind-etale}
+
+\noindent
+This section is a precursor to the section on ind-\'etale ring maps
+(Pro-\'etale Cohomology, Section \ref{proetale-section-ind-etale}).
+The material will also be useful to prove uniqueness properties of the
+henselization and strict henselization of a local ring.
+
+\begin{lemma}
+\label{lemma-base-change-colimit-etale}
+Let $R \to A$ and $R \to R'$ be ring maps. If $A$ is a filtered
+colimit of \'etale ring maps, then so is $R' \to R' \otimes_R A$.
+\end{lemma}
+
+\begin{proof}
+This is true because colimits commute with tensor products
+and \'etale ring maps are preserved under base change
+(Lemma \ref{lemma-etale}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-colimit-etale}
+Let $A \to B \to C$ be ring maps. If $A \to B$ is a filtered
+colimit of \'etale ring maps and $B \to C$ is a filtered colimit
+of \'etale ring maps, then $A \to C$ is a filtered colimit of
+\'etale ring maps.
+\end{lemma}
+
+\begin{proof}
+We will use the criterion of Lemma \ref{lemma-when-colimit}.
+Let $A \to P \to C$ be a factorization of $A \to C$
+with $P$ of finite presentation over $A$.
+Write $B = \colim_{i \in I} B_i$ where $I$ is a directed set and
+where $B_i$ is an \'etale $A$-algebra.
+Write $C = \colim_{j \in J} C_j$ where $J$ is a directed set and
+where $C_j$ is an \'etale $B$-algebra.
+We can factor $P \to C$ as $P \to C_j \to C$ for
+some $j$ by Lemma \ref{lemma-characterize-finite-presentation}.
+By Lemma \ref{lemma-etale} we can find an
+$i \in I$ and an \'etale ring map $B_i \to C'_j$
+such that $C_j = B \otimes_{B_i} C'_j$.
+Then $C_j = \colim_{i' \geq i} B_{i'} \otimes_{B_i} C'_j$
+and again we see that $P \to C_j$ factors as
+$P \to B_{i'} \otimes_{B_i} C'_j \to C_j$ for some $i' \geq i$.
+Now $A \to C' = B_{i'} \otimes_{B_i} C'_j$ is \'etale, since
+compositions and base changes of \'etale ring maps
+are \'etale. Hence we have factored $P \to C$ as
+$P \to C' \to C$ with $C'$ \'etale over $A$, and the criterion
+of Lemma \ref{lemma-when-colimit} applies.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-colimit-etale}
+Let $R$ be a ring. Let $A = \colim A_i$ be a filtered colimit
+of $R$-algebras such that each $A_i$ is a filtered colimit of
+\'etale $R$-algebras. Then $A$ is a filtered colimit of \'etale
+$R$-algebras.
+\end{lemma}
+
+\begin{proof}
+Write $A_i = \colim_{j \in J_i} A_j$ where $J_i$ is a directed set and
+$A_j$ is an \'etale $R$-algebra.
+For each $i \leq i'$ and $j \in J_i$ there exists an
+$j' \in J_{i'}$ and an $R$-algebra map $\varphi_{jj'} : A_j \to A_{j'}$
+making the diagram
+$$
+\xymatrix{
+A_i \ar[r] & A_{i'} \\
+A_j \ar[u] \ar[r]^{\varphi_{jj'}} & A_{j'} \ar[u]
+}
+$$
+commute. This is true because $R \to A_j$ is of finite presentation
+so that Lemma \ref{lemma-characterize-finite-presentation} applies.
+Let $\mathcal{J}$ be the category with objects $\coprod_{i \in I} J_i$
+and with morphisms the triples $(j, j', \varphi_{jj'})$ as above (with the
+obvious composition law). Then $\mathcal{J}$ is a filtered category and
+$A = \colim_\mathcal{J} A_j$. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-colimit-etale-better}
+Let $I$ be a directed set. Let $i \mapsto (R_i \to A_i)$ be a
+system of arrows of rings over $I$. Set $R = \colim R_i$
+and $A = \colim A_i$. If each $A_i$ is a filtered colimit of
+\'etale $R_i$-algebras, then $A$ is a filtered colimit of \'etale
+$R$-algebras.
+\end{lemma}
+
+\begin{proof}
+This is true because $A = A \otimes_R R = \colim A_i \otimes_{R_i} R$
+and hence we can apply Lemma \ref{lemma-colimit-colimit-etale}
+because $R \to A_i \otimes_{R_i} R$ is a filtered colimit of
+\'etale ring maps by Lemma \ref{lemma-base-change-colimit-etale}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimits-of-etale}
+Let $R$ be a ring. Let $A \to B$ be an $R$-algebra homomorphism.
+If $A$ and $B$ are filtered colimits of \'etale $R$-algebras, then
+$B$ is a filtered colimit of \'etale $A$-algebras.
+\end{lemma}
+
+\begin{proof}
+Write $A = \colim A_i$ and $B = \colim B_j$ as filtered colimits with $A_i$
+and $B_j$ \'etale over $R$. For each $i$ we can find a $j$ such that
+$A_i \to B$ factors through $B_j$, see
+Lemma \ref{lemma-characterize-finite-presentation}.
+The factorization $A_i \to B_j$ is \'etale by
+Lemma \ref{lemma-map-between-etale}.
+Since $A \to A \otimes_{A_i} B_j$ is \'etale (Lemma \ref{lemma-etale})
+it suffices to prove that $B = \colim A \otimes_{A_i} B_j$ where the
+colimit is over pairs $(i, j)$ and factorizations $A_i \to B_j \to B$
+of $A_i \to B$ (this is a directed system; details omitted).
+This is clear because colimits commute with tensor products
+and hence $\colim A \otimes_{A_i} B_j = A \otimes_A B = B$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-into-henselian-colimit}
+Let $R \to S$ be a ring map with $S$ henselian local. Given
+\begin{enumerate}
+\item an $R$-algebra $A$ which is a filtered colimit of \'etale $R$-algebras,
+\item a prime $\mathfrak q$ of $A$ lying over
+$\mathfrak p = R \cap \mathfrak m_S$,
+\item a $\kappa(\mathfrak p)$-algebra map
+$\tau : \kappa(\mathfrak q) \to S/\mathfrak m_S$,
+\end{enumerate}
+then there exists a unique homomorphism of $R$-algebras $f : A \to S$
+such that $\mathfrak q = f^{-1}(\mathfrak m_S)$ and
+$f \bmod \mathfrak q = \tau$.
+\end{lemma}
+
+\begin{proof}
+Write $A = \colim A_i$ as a filtered colimit of \'etale $R$-algebras.
+Set $\mathfrak q_i = A_i \cap \mathfrak q$. We obtain $f_i : A_i \to S$
+by applying Lemma \ref{lemma-map-into-henselian}. The uniqueness assertion
+of that lemma shows the $f_i$ are compatible with the transition maps
+of the system, hence we may set $f = \colim f_i$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-uniqueness-henselian}
+Let $R$ be a ring. Given a commutative diagram of ring maps
+$$
+\xymatrix{
+S \ar[r] & K \\
+R \ar[u] \ar[r] & S' \ar[u]
+}
+$$
+where $S$, $S'$ are henselian local, $S$, $S'$ are filtered colimits
+of \'etale $R$-algebras, $K$ is a field and the arrows $S \to K$ and
+$S' \to K$ identify $K$ with the residue field of both $S$ and $S'$.
Then there exists a unique $R$-algebra isomorphism $S \to S'$
+compatible with the maps to $K$.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-map-into-henselian-colimit}.
+\end{proof}
+
+\noindent
+The following lemma is not strictly speaking about colimits of \'etale
+ring maps.
+
+\begin{lemma}
+\label{lemma-colimit-henselian}
+A filtered colimit of (strictly) henselian local rings along local
+homomorphisms is (strictly) henselian.
+\end{lemma}
+
+\begin{proof}
+Categories, Lemma \ref{categories-lemma-directed-category-system}
+says that this is really just a question about a colimit of
+(strictly) henselian local rings over a directed set.
+Let $(R_i, \varphi_{ii'})$ be such a system with each $\varphi_{ii'}$
+local. Then $R = \colim_i R_i$ is local, and
+its residue field $\kappa$ is $\colim \kappa_i$
+(argument omitted). It is easy to see that $\colim \kappa_i$
+is separably algebraically closed if each $\kappa_i$ is so;
+thus it suffices to prove $R$ is henselian if each $R_i$ is henselian.
+Suppose that $f \in R[T]$ is monic and that $a_0 \in \kappa$ is
+a simple root of $\overline{f}$. Then for some large enough $i$
+there exists an $f_i \in R_i[T]$ mapping to $f$ and an
+$a_{0, i} \in \kappa_i$ mapping to $a_0$. Since
+$\overline{f_i}(a_{0, i}) \in \kappa_i$,
+resp.\ $\overline{f_i'}(a_{0, i}) \in \kappa_i$ maps to
+$0 = \overline{f}(a_0) \in \kappa$,
+resp.\ $0 \not = \overline{f'}(a_0) \in \kappa$
+we conclude that $a_{0, i}$ is a simple root of $\overline{f_i}$.
+As $R_i$ is henselian we can find $a_i \in R_i$ such that
+$f_i(a_i) = 0$ and $a_{0, i} = \overline{a_i}$.
+Then the image $a \in R$ of $a_i$ is the desired solution.
+Thus $R$ is henselian.
+\end{proof}
+
+
+
+
+\section{Henselization and strict henselization}
+\label{section-henselization}
+
+\noindent
+In this section we construct the henselization. We encourage the reader
+to keep in mind the uniqueness already proved in
+Lemma \ref{lemma-uniqueness-henselian}
+and the functorial behaviour pointed out in
+Lemma \ref{lemma-map-into-henselian-colimit}
+while reading this material.
+
+\begin{lemma}
+\label{lemma-henselization}
+Let $(R, \mathfrak m, \kappa)$ be a local ring. There exists a
+local ring map $R \to R^h$ with the following properties
+\begin{enumerate}
+\item $R^h$ is henselian,
+\item $R^h$ is a filtered colimit of \'etale $R$-algebras,
+\item $\mathfrak m R^h$ is the
+maximal ideal of $R^h$, and
+\item $\kappa = R^h/\mathfrak m R^h$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Consider the category of pairs $(S, \mathfrak q)$ where $R \to S$ is an
+\'etale ring map, and $\mathfrak q$ is a prime of $S$ lying over
+$\mathfrak m$ with $\kappa = \kappa(\mathfrak q)$. A morphism of pairs
+$(S, \mathfrak q) \to (S', \mathfrak q')$ is given by an $R$-algebra
+map $\varphi : S \to S'$ such that $\varphi^{-1}(\mathfrak q') = \mathfrak q$.
+We set
+$$
+R^h = \colim_{(S, \mathfrak q)} S.
+$$
+Let us show that the category of pairs is filtered, see
+Categories, Definition \ref{categories-definition-directed}.
+The category contains the pair $(R, \mathfrak m)$ and hence is not empty,
+which proves part (1) of
+Categories, Definition \ref{categories-definition-directed}.
+For any pair $(S, \mathfrak q)$ the prime ideal $\mathfrak q$
+is maximal with residue field $\kappa$ since the composition
+$\kappa \to S/\mathfrak q \to \kappa(\mathfrak q)$ is an isomorphism.
+Suppose that $(S, \mathfrak q)$ and $(S', \mathfrak q')$ are two objects. Set
+$S'' = S \otimes_R S'$ and $\mathfrak q'' = \mathfrak qS'' + \mathfrak q'S''$.
+Then $S''/\mathfrak q'' = S/\mathfrak q \otimes_R S'/\mathfrak q' = \kappa$
+by what we said above. Moreover, $R \to S''$ is \'etale by
+Lemma \ref{lemma-etale}.
+This proves part (2) of
+Categories, Definition \ref{categories-definition-directed}.
+Next, suppose that
+$\varphi, \psi : (S, \mathfrak q) \to (S', \mathfrak q')$
+are two morphisms of pairs. Then $\varphi$, $\psi$, and
+$S' \otimes_R S' \to S'$ are \'etale ring maps by
+Lemma \ref{lemma-map-between-etale}.
+Consider
+$$
+S'' = (S' \otimes_{\varphi, S, \psi} S')
+\otimes_{S' \otimes_R S'} S'
+$$
+with prime ideal
+$$
+\mathfrak q'' =
+(\mathfrak q' \otimes S' + S' \otimes \mathfrak q') \otimes S'
++
+(S' \otimes_{\varphi, S, \psi} S') \otimes \mathfrak q'
+$$
+Arguing as above (base change of \'etale maps is \'etale, composition of
+\'etale maps is \'etale) we see that $S''$ is \'etale over $R$. Moreover,
+the canonical map $S' \to S''$ (using the right most factor for example)
+equalizes $\varphi$ and $\psi$. This proves part (3) of
+Categories, Definition \ref{categories-definition-directed}.
+Hence we conclude that $R^h$ consists of triples $(S, \mathfrak q, f)$
+with $f \in S$, and two such triples
+$(S, \mathfrak q, f)$, $(S', \mathfrak q', f')$
+define the same element of $R^h$ if and only if there exists
+a pair $(S'', \mathfrak q'')$ and morphisms of pairs
+$\varphi : (S, \mathfrak q) \to (S'', \mathfrak q'')$
+and
+$\varphi' : (S', \mathfrak q') \to (S'', \mathfrak q'')$
+such that $\varphi(f) = \varphi'(f')$.
+
+\medskip\noindent
+Suppose that $x \in R^h$. Represent $x$ by a triple $(S, \mathfrak q, f)$.
+Let $\mathfrak q_1, \ldots, \mathfrak q_r$ be the other primes of $S$
+lying over $\mathfrak m$. Then $\mathfrak q \not \subset \mathfrak q_i$
+as we have seen above that $\mathfrak q$ is maximal.
+Thus, since $\mathfrak q$ is a prime ideal,
+we can find a $g \in S$, $g \not \in \mathfrak q$ and
+$g \in \mathfrak q_i$ for $i = 1, \ldots, r$. Consider the morphism of
+pairs $(S, \mathfrak q) \to (S_g, \mathfrak qS_g)$.
+In this way we see that we may always assume that $x$
+is given by a triple $(S, \mathfrak q, f)$ where
+$\mathfrak q$ is the only prime of $S$ lying over $\mathfrak m$,
+i.e., $\sqrt{\mathfrak mS} = \mathfrak q$. But since
+$R \to S$ is \'etale, we have
+$\mathfrak mS_{\mathfrak q} = \mathfrak qS_{\mathfrak q}$, see
+Lemma \ref{lemma-etale-at-prime}.
+Hence we actually get that $\mathfrak mS = \mathfrak q$.
+
+\medskip\noindent
+Suppose that $x \not \in \mathfrak mR^h$.
+Represent $x$ by a triple $(S, \mathfrak q, f)$ with
+$\mathfrak mS = \mathfrak q$.
+Then $f \not \in \mathfrak mS$, i.e., $f \not \in \mathfrak q$.
+Hence $(S, \mathfrak q) \to (S_f, \mathfrak qS_f)$ is a morphism
+of pairs such that the image of $f$ becomes invertible.
+Hence $x$ is invertible with inverse represented by the triple
+$(S_f, \mathfrak qS_f, 1/f)$. We conclude that $R^h$ is a local
+ring with maximal ideal $\mathfrak mR^h$. The residue field is
+$\kappa$ since we can define $R^h/\mathfrak mR^h \to \kappa$
+by mapping a triple $(S, \mathfrak q, f)$ to the residue
+class of $f$ modulo $\mathfrak q$.
+
+\medskip\noindent
+We still have to show that $R^h$ is henselian.
+Namely, suppose that $P \in R^h[T]$ is a monic
+polynomial and $a_0 \in \kappa$ is a simple root of
+the reduction $\overline{P} \in \kappa[T]$.
+Then we can find a pair $(S, \mathfrak q)$ such that
+$P$ is the image of a monic polynomial $Q \in S[T]$.
+Since $S \to R^h$ induces an isomorphism of residue
+fields we see that $S' = S[T]/(Q)$ has a prime ideal
+$\mathfrak q' = (\mathfrak q, T - a_0)$ at which
+$S \to S'$ is standard \'etale. Moreover, $\kappa = \kappa(\mathfrak q')$.
+Pick $g \in S'$, $g \not \in \mathfrak q'$ such that
+$S'' = S'_g$ is \'etale over $S$. Then
+$(S, \mathfrak q) \to (S'', \mathfrak q'S'')$ is a morphism
of pairs. Now the triple $(S'', \mathfrak q'S'', \text{class of }T)$
+determines an element $a \in R^h$ with the properties $P(a) = 0$,
+and $\overline{a} = a_0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-henselization}
+Let $(R, \mathfrak m, \kappa)$ be a local ring.
+Let $\kappa \subset \kappa^{sep}$ be a separable algebraic closure.
+There exists a commutative diagram
+$$
+\xymatrix{
+\kappa \ar[r] & \kappa \ar[r] & \kappa^{sep} \\
+R \ar[r] \ar[u] & R^h \ar[r] \ar[u] & R^{sh} \ar[u]
+}
+$$
+with the following properties
+\begin{enumerate}
+\item the map $R^h \to R^{sh}$ is local
+\item $R^{sh}$ is strictly henselian,
+\item $R^{sh}$ is a filtered colimit of \'etale $R$-algebras,
+\item $\mathfrak m R^{sh}$ is the
+maximal ideal of $R^{sh}$, and
+\item $\kappa^{sep} = R^{sh}/\mathfrak m R^{sh}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is proved by exactly the same proof as used for
+Lemma \ref{lemma-henselization}.
+The only difference is that, instead of pairs, one uses triples
$(S, \mathfrak q, \alpha)$ where $R \to S$ is \'etale,
+$\mathfrak q$ is a prime of $S$ lying over $\mathfrak m$, and
+$\alpha : \kappa(\mathfrak q) \to \kappa^{sep}$ is an embedding
+of extensions of $\kappa$.
+\end{proof}
+
+\begin{definition}
+\label{definition-henselization}
+Let $(R, \mathfrak m, \kappa)$ be a local ring.
+\begin{enumerate}
+\item The local ring map $R \to R^h$ constructed in
+Lemma \ref{lemma-henselization}
+is called the {\it henselization} of $R$.
+\item Given a separable algebraic closure $\kappa \subset \kappa^{sep}$
+the local ring map $R \to R^{sh}$ constructed in
+Lemma \ref{lemma-strict-henselization}
+is called the
+{\it strict henselization of $R$ with respect to
+$\kappa \subset \kappa^{sep}$}.
+\item A local ring map $R \to R^{sh}$ is called a {\it strict henselization}
+of $R$ if it is isomorphic to one of the local ring maps constructed in
Lemma \ref{lemma-strict-henselization}.
+\end{enumerate}
+\end{definition}
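\noindent
As an illustration of the definition (this classical example is not used
in what follows): for $R = \mathbf{Z}_{(p)}$, with residue field
$\mathbf{F}_p$, one can show that the henselization is the ring
$$
R^h = \{x \in \mathbf{Z}_p \mid x \text{ is algebraic over } \mathbf{Q}\}
$$
of algebraic elements of the $p$-adic integers, and that any strict
henselization $R^{sh}$ has residue field $\overline{\mathbf{F}}_p$, the
algebraic closure of $\mathbf{F}_p$.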
+
+\noindent
+The maps $R \to R^h \to R^{sh}$ are flat local ring homomorphisms.
+By Lemma \ref{lemma-uniqueness-henselian} the $R$-algebras $R^h$ and
+$R^{sh}$ are well defined up to unique isomorphism by the conditions
+that they are henselian local, filtered colimits of \'etale $R$-algebras
with residue fields $\kappa$ and $\kappa^{sep}$ respectively.
+In the rest of this section we mostly just discuss functoriality of the
+(strict) henselizations.
+We will discuss more intricate results concerning
+the relationship between $R$ and its henselization in
+More on Algebra, Section \ref{more-algebra-section-permanence-henselization}.
+
+\begin{remark}
+\label{remark-construct-sh-from-h}
+We can also construct $R^{sh}$ from $R^h$. Namely, for any finite separable
+subextension $\kappa^{sep}/\kappa'/\kappa$
+there exists a unique (up to unique isomorphism) finite \'etale local
+ring extension $R^h \subset R^h(\kappa')$
+whose residue field extension reproduces the given extension, see
+Lemma \ref{lemma-henselian-cat-finite-etale}.
+Hence we can set
+$$
+R^{sh} =
+\bigcup\nolimits_{\kappa \subset \kappa' \subset \kappa^{sep}}
+R^h(\kappa')
+$$
+The arrows in this system, compatible with the arrows on the level
+of residue fields, exist by
+Lemma \ref{lemma-henselian-cat-finite-etale}.
+This will produce a henselian local ring by
+Lemma \ref{lemma-colimit-henselian}
+since each of the rings
+$R^h(\kappa')$ is henselian by
+Lemma \ref{lemma-finite-over-henselian}.
+By construction the residue field extension induced by
+$R^h \to R^{sh}$ is the field extension $\kappa^{sep}/\kappa$.
+Hence $R^{sh}$ so constructed is strictly henselian.
+By Lemma \ref{lemma-composition-colimit-etale} the $R$-algebra
+$R^{sh}$ is a colimit of \'etale $R$-algebras. Hence the uniqueness
+of Lemma \ref{lemma-uniqueness-henselian} shows that $R^{sh}$
+is the strict henselization.
+\end{remark}
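\noindent
The simplest instance of the construction in
Remark \ref{remark-construct-sh-from-h} is the case where $R = \kappa$
is a field. A field is henselian local, so $R^h = \kappa$, and
$R^h(\kappa') = \kappa'$ for every finite separable subextension
$\kappa^{sep}/\kappa'/\kappa$. The construction then reads
$$
R^{sh} =
\bigcup\nolimits_{\kappa \subset \kappa' \subset \kappa^{sep}} \kappa'
= \kappa^{sep}
$$
as expected for the strict henselization of a field.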
+
+\begin{lemma}
+\label{lemma-henselian-functorial-prepare}
+Let $R \to S$ be a local map of local rings.
+Let $S \to S^h$ be the henselization.
+Let $R \to A$ be an \'etale ring map and let $\mathfrak q$
+be a prime of $A$ lying over $\mathfrak m_R$
+such that $R/\mathfrak m_R \cong \kappa(\mathfrak q)$.
+Then there exists a unique morphism of rings
+$f : A \to S^h$ fitting into the commutative diagram
+$$
+\xymatrix{
+A \ar[r]_f & S^h \\
+R \ar[u] \ar[r] & S \ar[u]
+}
+$$
+such that $f^{-1}(\mathfrak m_{S^h}) = \mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-map-into-henselian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-henselian-functorial}
+Let $R \to S$ be a local map of local rings.
+Let $R \to R^h$ and $S \to S^h$ be the henselizations.
+There exists a unique local ring map $R^h \to S^h$ fitting
+into the commutative diagram
+$$
+\xymatrix{
+R^h \ar[r]_f & S^h \\
+R \ar[u] \ar[r] & S \ar[u]
+}
+$$
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-map-into-henselian-colimit}.
+\end{proof}
+
+\noindent
+Here is a slightly different construction of the henselization.
+
+\begin{lemma}
+\label{lemma-henselization-different}
+Let $R$ be a ring.
+Let $\mathfrak p \subset R$ be a prime ideal.
+Consider the category of pairs $(S, \mathfrak q)$ where
+$R \to S$ is \'etale and $\mathfrak q$ is a prime lying over $\mathfrak p$
+such that $\kappa(\mathfrak p) = \kappa(\mathfrak q)$.
+This category is filtered and
+$$
+(R_{\mathfrak p})^h = \colim_{(S, \mathfrak q)} S
+= \colim_{(S, \mathfrak q)} S_{\mathfrak q}
+$$
+canonically.
+\end{lemma}
+
+\begin{proof}
+A morphism of pairs $(S, \mathfrak q) \to (S', \mathfrak q')$
+is given by an $R$-algebra map $\varphi : S \to S'$ such that
+$\varphi^{-1}(\mathfrak q') = \mathfrak q$.
+Let us show that the category of pairs is filtered, see
+Categories, Definition \ref{categories-definition-directed}.
+The category contains the pair $(R, \mathfrak p)$ and hence is not empty,
+which proves part (1) of
+Categories, Definition \ref{categories-definition-directed}.
+Suppose that $(S, \mathfrak q)$ and $(S', \mathfrak q')$ are two pairs.
Note that $\mathfrak q$, resp.\ $\mathfrak q'$ correspond to primes
of the fibre rings $S \otimes_R \kappa(\mathfrak p)$,
resp.\ $S' \otimes_R \kappa(\mathfrak p)$ with residue fields
$\kappa(\mathfrak p)$, hence they correspond to maximal ideals of
$S \otimes_R \kappa(\mathfrak p)$, resp.\ $S' \otimes_R \kappa(\mathfrak p)$.
+Set $S'' = S \otimes_R S'$. By the above there exists a unique
+prime $\mathfrak q'' \subset S''$ lying over $\mathfrak q$ and over
+$\mathfrak q'$ whose residue field is $\kappa(\mathfrak p)$.
+The ring map $R \to S''$ is \'etale by
+Lemma \ref{lemma-etale}.
+This proves part (2) of
+Categories, Definition \ref{categories-definition-directed}.
+Next, suppose that
+$\varphi, \psi : (S, \mathfrak q) \to (S', \mathfrak q')$
+are two morphisms of pairs. Then $\varphi$, $\psi$, and
+$S' \otimes_R S' \to S'$ are \'etale ring maps by
+Lemma \ref{lemma-map-between-etale}. Consider
+$$
+S'' = (S' \otimes_{\varphi, S, \psi} S')
+\otimes_{S' \otimes_R S'} S'
+$$
+Arguing as above (base change of \'etale maps is \'etale, composition of
+\'etale maps is \'etale) we see that $S''$ is \'etale over $R$. The fibre
+ring of $S''$ over $\mathfrak p$ is
+$$
+F'' = (F' \otimes_{\varphi, F, \psi} F')
+\otimes_{F' \otimes_{\kappa(\mathfrak p)} F'} F'
+$$
+where $F', F$ are the fibre rings of $S'$ and $S$. Since $\varphi$ and
+$\psi$ are morphisms of pairs the map $F' \to \kappa(\mathfrak p)$
corresponding to $\mathfrak q'$ extends to a map $F'' \to \kappa(\mathfrak p)$
+and in turn corresponds to a prime ideal $\mathfrak q'' \subset S''$
+whose residue field is $\kappa(\mathfrak p)$.
+The canonical map $S' \to S''$ (using the right most factor for example)
+is a morphism of pairs $(S', \mathfrak q') \to (S'', \mathfrak q'')$
+which equalizes $\varphi$ and $\psi$. This proves part (3) of
+Categories, Definition \ref{categories-definition-directed}.
+Hence we conclude that the category is filtered.
+
+\medskip\noindent
+Recall that in the proof of
+Lemma \ref{lemma-henselization}
+we constructed $(R_{\mathfrak p})^h$ as the corresponding colimit
+but starting with $R_{\mathfrak p}$ and its maximal ideal
+$\mathfrak pR_{\mathfrak p}$. Now, given any pair $(S, \mathfrak q)$
+for $(R, \mathfrak p)$ we obtain a pair
+$(S_{\mathfrak p}, \mathfrak qS_{\mathfrak p})$ for
+$(R_{\mathfrak p}, \mathfrak pR_{\mathfrak p})$.
+Moreover, in this situation
+$$
+S_{\mathfrak p} = \colim_{f \in R, f \not \in \mathfrak p} S_f.
+$$
+Hence in order to show the equalities
+of the lemma, it suffices to show that any pair $(S_{loc}, \mathfrak q_{loc})$
+for $(R_{\mathfrak p}, \mathfrak pR_{\mathfrak p})$ is of the form
+$(S_{\mathfrak p}, \mathfrak qS_{\mathfrak p})$ for some pair
+$(S, \mathfrak q)$ over $(R, \mathfrak p)$ (some details omitted).
+This follows from
+Lemma \ref{lemma-etale}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-henselian-functorial-improve}
+Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime lying
+over $\mathfrak p \subset R$. Let $R \to R^h$ and $S \to S^h$ be the
+henselizations of $R_\mathfrak p$ and $S_\mathfrak q$. The local ring map
+$R^h \to S^h$ of Lemma \ref{lemma-henselian-functorial} identifies $S^h$
+with the henselization of $R^h \otimes_R S$ at the unique prime
+lying over $\mathfrak m^h$ and $\mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-henselization-different} we see that $R^h$, resp.\ $S^h$
+are filtered colimits of \'etale $R$, resp.\ $S$-algebras.
+Hence we see that $R^h \otimes_R S$ is a filtered colimit of
+\'etale $S$-algebras $A_i$ (Lemma \ref{lemma-etale}). By
+Lemma \ref{lemma-colimits-of-etale} we see that $S^h$ is a
+filtered colimit of \'etale $R^h \otimes_R S$-algebras.
+Since moreover $S^h$ is a henselian local ring with residue field
+equal to $\kappa(\mathfrak q)$, the statement follows from the uniqueness
+result of Lemma \ref{lemma-uniqueness-henselian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strictly-henselian-functorial-prepare}
+Let $\varphi : R \to S$ be a local map of local rings.
+Let $S/\mathfrak m_S \subset \kappa^{sep}$ be a separable algebraic closure.
+Let $S \to S^{sh}$ be the strict henselization of $S$
+with respect to $S/\mathfrak m_S \subset \kappa^{sep}$.
+Let $R \to A$ be an \'etale ring map and let $\mathfrak q$
+be a prime of $A$ lying over $\mathfrak m_R$.
+Given any commutative diagram
+$$
+\xymatrix{
+\kappa(\mathfrak q) \ar[r]_{\phi} & \kappa^{sep} \\
+R/\mathfrak m_R \ar[r]^{\varphi} \ar[u] & S/\mathfrak m_S \ar[u]
+}
+$$
+there exists a unique morphism of rings
+$f : A \to S^{sh}$ fitting into the commutative diagram
+$$
+\xymatrix{
+A \ar[r]_f & S^{sh} \\
+R \ar[u] \ar[r]^{\varphi} & S \ar[u]
+}
+$$
such that $f^{-1}(\mathfrak m_{S^{sh}}) = \mathfrak q$ and the induced
+map $\kappa(\mathfrak q) \to \kappa^{sep}$ is the given one.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-map-into-henselian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strictly-henselian-functorial}
+Let $R \to S$ be a local map of local rings.
+Choose separable algebraic closures
+$R/\mathfrak m_R \subset \kappa_1^{sep}$
+and
+$S/\mathfrak m_S \subset \kappa_2^{sep}$.
+Let $R \to R^{sh}$ and $S \to S^{sh}$ be the corresponding strict
+henselizations. Given any commutative diagram
+$$
+\xymatrix{
+\kappa_1^{sep} \ar[r]_{\phi} & \kappa_2^{sep} \\
+R/\mathfrak m_R \ar[r]^{\varphi} \ar[u] & S/\mathfrak m_S \ar[u]
+}
+$$
there exists a unique local ring map $R^{sh} \to S^{sh}$ fitting
+into the commutative diagram
+$$
+\xymatrix{
+R^{sh} \ar[r]_f & S^{sh} \\
+R \ar[u] \ar[r] & S \ar[u]
+}
+$$
+and inducing $\phi$ on the residue fields of
+$R^{sh}$ and $S^{sh}$.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-map-into-henselian-colimit}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-henselization-different}
+Let $R$ be a ring.
+Let $\mathfrak p \subset R$ be a prime ideal.
+Let $\kappa(\mathfrak p) \subset \kappa^{sep}$ be a
+separable algebraic closure.
+Consider the category of triples $(S, \mathfrak q, \phi)$
+where $R \to S$ is \'etale, $\mathfrak q$ is a prime lying over $\mathfrak p$,
+and $\phi : \kappa(\mathfrak q) \to \kappa^{sep}$ is a
+$\kappa(\mathfrak p)$-algebra map. This category is filtered and
+$$
+(R_{\mathfrak p})^{sh} =
+\colim_{(S, \mathfrak q, \phi)} S =
+\colim_{(S, \mathfrak q, \phi)} S_{\mathfrak q}
+$$
+canonically.
+\end{lemma}
+
+\begin{proof}
+A morphism of triples $(S, \mathfrak q, \phi) \to (S', \mathfrak q', \phi')$
+is given by an $R$-algebra map $\varphi : S \to S'$ such that
+$\varphi^{-1}(\mathfrak q') = \mathfrak q$ and such that
+$\phi' \circ \varphi = \phi$.
Let us show that the category of triples is filtered, see
+Categories, Definition \ref{categories-definition-directed}.
+The category contains the triple
+$(R, \mathfrak p, \kappa(\mathfrak p) \subset \kappa^{sep})$
+and hence is not empty, which proves part (1) of
+Categories, Definition \ref{categories-definition-directed}.
+Suppose that $(S, \mathfrak q, \phi)$ and $(S', \mathfrak q', \phi')$
+are two triples.
+Note that $\mathfrak q$, resp.\ $\mathfrak q'$ correspond to primes
of the fibre rings $S \otimes_R \kappa(\mathfrak p)$,
resp.\ $S' \otimes_R \kappa(\mathfrak p)$ with residue fields
+finite separable over $\kappa(\mathfrak p)$ and $\phi$, resp.\ $\phi'$
+correspond to maps into $\kappa^{sep}$. Hence this data corresponds to
+$\kappa(\mathfrak p)$-algebra maps
+$$
+\phi : S \otimes_R \kappa(\mathfrak p) \longrightarrow \kappa^{sep},
+\quad
+\phi' : S' \otimes_R \kappa(\mathfrak p) \longrightarrow \kappa^{sep}.
+$$
Set $S'' = S \otimes_R S'$. Combining the maps above we get a unique
+$\kappa(\mathfrak p)$-algebra map
+$$
+\phi'' = \phi \otimes \phi' :
+S'' \otimes_R \kappa(\mathfrak p)
+\longrightarrow
+\kappa^{sep}
+$$
+whose kernel corresponds to a prime $\mathfrak q'' \subset S''$
+lying over $\mathfrak q$ and over $\mathfrak q'$, and whose residue field
+maps via $\phi''$ to the compositum of
+$\phi(\kappa(\mathfrak q))$ and $\phi'(\kappa(\mathfrak q'))$ in
+$\kappa^{sep}$. The ring map $R \to S''$ is \'etale by
+Lemma \ref{lemma-etale}.
+Hence $(S'', \mathfrak q'', \phi'')$ is a triple dominating both
+$(S, \mathfrak q, \phi)$ and $(S', \mathfrak q', \phi')$.
+This proves part (2) of
+Categories, Definition \ref{categories-definition-directed}.
+Next, suppose that
+$\varphi, \psi : (S, \mathfrak q, \phi) \to (S', \mathfrak q', \phi')$
are two morphisms of triples. Then $\varphi$, $\psi$, and
+$S' \otimes_R S' \to S'$ are \'etale ring maps by
+Lemma \ref{lemma-map-between-etale}.
+Consider
+$$
+S'' = (S' \otimes_{\varphi, S, \psi} S')
+\otimes_{S' \otimes_R S'} S'
+$$
+Arguing as above (base change of \'etale maps is \'etale, composition of
+\'etale maps is \'etale) we see that $S''$ is \'etale over $R$. The fibre
+ring of $S''$ over $\mathfrak p$ is
+$$
+F'' = (F' \otimes_{\varphi, F, \psi} F')
+\otimes_{F' \otimes_{\kappa(\mathfrak p)} F'} F'
+$$
+where $F', F$ are the fibre rings of $S'$ and $S$. Since $\varphi$ and
+$\psi$ are morphisms of triples the map $\phi' : F' \to \kappa^{sep}$
+extends to a map $\phi'' : F'' \to \kappa^{sep}$
+which in turn corresponds to a prime ideal $\mathfrak q'' \subset S''$.
+The canonical map $S' \to S''$ (using the right most factor for example)
+is a morphism of triples
+$(S', \mathfrak q', \phi') \to (S'', \mathfrak q'', \phi'')$
+which equalizes $\varphi$ and $\psi$. This proves part (3) of
+Categories, Definition \ref{categories-definition-directed}.
+Hence we conclude that the category is filtered.
+
+\medskip\noindent
+We still have to show that the colimit $R_{colim}$ of the system
+is equal to the strict henselization
+of $R_{\mathfrak p}$ with respect to $\kappa^{sep}$. To see this note that
+the system of triples $(S, \mathfrak q, \phi)$ contains as a subsystem
+the pairs $(S, \mathfrak q)$ of
+Lemma \ref{lemma-henselization-different}.
+Hence $R_{colim}$ contains $R_{\mathfrak p}^h$ by the result of that lemma.
+Moreover, it is clear that $R_{\mathfrak p}^h \subset R_{colim}$
+is a directed colimit of \'etale ring extensions.
+It follows that $R_{colim}$ is henselian by
+Lemmas \ref{lemma-finite-over-henselian} and
+\ref{lemma-colimit-henselian}.
+Finally, by
+Lemma \ref{lemma-make-etale-map-prescribed-residue-field}
+we see that the residue field of $R_{colim}$ is equal to
+$\kappa^{sep}$. Hence we conclude that $R_{colim}$ is strictly henselian
+and hence equals the strict henselization of $R_{\mathfrak p}$ as desired.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strictly-henselian-functorial-improve}
+Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime lying
+over $\mathfrak p \subset R$. Choose separable algebraic closures
+$\kappa(\mathfrak p) \subset \kappa_1^{sep}$
+and
+$\kappa(\mathfrak q) \subset \kappa_2^{sep}$.
+Let $R^{sh}$ and $S^{sh}$ be the corresponding strict
+henselizations of $R_\mathfrak p$ and $S_\mathfrak q$.
+Given any commutative diagram
+$$
+\xymatrix{
+\kappa_1^{sep} \ar[r]_{\phi} & \kappa_2^{sep} \\
+\kappa(\mathfrak p) \ar[r]^{\varphi} \ar[u] & \kappa(\mathfrak q) \ar[u]
+}
+$$
the local ring map $R^{sh} \to S^{sh}$ of
+Lemma \ref{lemma-strictly-henselian-functorial} identifies $S^{sh}$
+with the strict henselization of $R^{sh} \otimes_R S$ at a prime
+lying over $\mathfrak q$ and the maximal ideal
+$\mathfrak m^{sh} \subset R^{sh}$.
+\end{lemma}
+
+\begin{proof}
+The proof is identical to the proof of
+Lemma \ref{lemma-henselian-functorial-improve}
+except that it uses
+Lemma \ref{lemma-strict-henselization-different}
+instead of
+Lemma \ref{lemma-henselization-different}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sh-from-h-map}
+Let $R \to S$ be a ring map. Let $\mathfrak q \subset S$ be a prime
+lying over $\mathfrak p \subset R$ such that
+$\kappa(\mathfrak p) \to \kappa(\mathfrak q)$ is an isomorphism.
+Choose a separable algebraic closure $\kappa^{sep}$ of
+$\kappa(\mathfrak p) = \kappa(\mathfrak q)$.
+Then
+$$
+(S_\mathfrak q)^{sh} =
+(S_\mathfrak q)^h \otimes_{(R_\mathfrak p)^h} (R_\mathfrak p)^{sh}
+$$
+\end{lemma}
+
+\begin{proof}
+This follows from the alternative construction of the strict henselization
+of a local ring in Remark \ref{remark-construct-sh-from-h} and the
+fact that the residue fields are equal. Some details omitted.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Henselization and quasi-finite ring maps}
+\label{section-henselization-quasi-finite}
+
+\noindent
+In this section we prove some results concerning the functorial maps
+between (strict) henselizations for quasi-finite ring maps.
+
+\begin{lemma}
+\label{lemma-quasi-finite-henselization}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q$ be a prime of $S$ lying over $\mathfrak p$ in $R$.
+Assume $R \to S$ is quasi-finite at $\mathfrak q$.
+The commutative diagram
+$$
+\xymatrix{
+R_{\mathfrak p}^h \ar[r] & S_{\mathfrak q}^h \\
+R_{\mathfrak p} \ar[u] \ar[r] & S_{\mathfrak q} \ar[u]
+}
+$$
+of
+Lemma \ref{lemma-henselian-functorial}
+identifies $S_{\mathfrak q}^h$ with the localization of
+$R_{\mathfrak p}^h \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$
+at the prime generated by $\mathfrak q$. Moreover, the ring
+map $R_{\mathfrak p}^h \to S_{\mathfrak q}^h$ is finite.
+\end{lemma}
+
+\begin{proof}
+Note that $R_{\mathfrak p}^h \otimes_R S$ is quasi-finite over
+$R_{\mathfrak p}^h$ at the prime ideal corresponding to $\mathfrak q$, see
+Lemma \ref{lemma-four-rings}. Hence the localization $S'$ of
+$R_{\mathfrak p}^h \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$ is henselian
+and finite over $R_{\mathfrak p}^h$, see
+Lemma \ref{lemma-finite-over-henselian}. As a localization $S'$ is a filtered
+colimit of \'etale
+$R_{\mathfrak p}^h \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$-algebras.
+By Lemma \ref{lemma-henselian-functorial-improve} we see that
+$S_\mathfrak q^h$ is the henselization of
+$R_{\mathfrak p}^h \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$.
+Thus $S' = S_\mathfrak q^h$ by the uniqueness
+result of Lemma \ref{lemma-uniqueness-henselian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quotient-henselization}
+\begin{slogan}
+Henselization is compatible with quotients.
+\end{slogan}
+Let $R$ be a local ring with henselization $R^h$.
Let $I \subset \mathfrak m_R$ be an ideal.
+Then $R^h/IR^h$ is the henselization of $R/I$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Lemma \ref{lemma-quasi-finite-henselization}.
+\end{proof}
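\noindent
A quick sanity check of Lemma \ref{lemma-quotient-henselization}:
take $I = \mathfrak m_R$. The lemma then asserts that
$R^h/\mathfrak m_R R^h$ is the henselization of the field
$R/\mathfrak m_R = \kappa$. This agrees with parts (3) and (4) of
Lemma \ref{lemma-henselization}, which give
$$
R^h/\mathfrak m_R R^h = \kappa,
$$
and with the fact that a field is henselian and hence equal to its own
henselization.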
+
+\begin{lemma}
+\label{lemma-quasi-finite-strict-henselization}
+Let $R \to S$ be a ring map.
+Let $\mathfrak q$ be a prime of $S$ lying over $\mathfrak p$ in $R$.
+Assume $R \to S$ is quasi-finite at $\mathfrak q$.
+Let $\kappa_2^{sep}/\kappa(\mathfrak q)$ be a separable algebraic closure
+and denote $\kappa_1^{sep} \subset \kappa_2^{sep}$ the subfield
of elements separably algebraic over $\kappa(\mathfrak p)$
+(Fields, Lemma \ref{fields-lemma-separable-first}).
+The commutative diagram
+$$
+\xymatrix{
+R_{\mathfrak p}^{sh} \ar[r] & S_{\mathfrak q}^{sh} \\
+R_{\mathfrak p} \ar[u] \ar[r] & S_{\mathfrak q} \ar[u]
+}
+$$
+of Lemma \ref{lemma-strictly-henselian-functorial}
+identifies $S_{\mathfrak q}^{sh}$ with the localization of
+$R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$
+at the prime ideal which is the kernel of the map
+$$
+R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}
+\longrightarrow
+\kappa_1^{sep} \otimes_{\kappa(\mathfrak p)} \kappa(\mathfrak q)
+\longrightarrow
+\kappa_2^{sep}
+$$
+Moreover, the ring map $R_{\mathfrak p}^{sh} \to S_{\mathfrak q}^{sh}$
+is a finite local homomorphism of local rings whose residue field extension
+is the extension $\kappa_2^{sep}/\kappa_1^{sep}$ which is both finite and
+purely inseparable.
+\end{lemma}
+
+\begin{proof}
+Since $R \to S$ is quasi-finite at $\mathfrak q$ we see that the extension
+$\kappa(\mathfrak q)/\kappa(\mathfrak p)$ is finite, see
+Definition \ref{definition-quasi-finite} and
+Lemma \ref{lemma-isolated-point-fibre}.
+Hence $\kappa_1^{sep}$ is a separable algebraic closure of
+$\kappa(\mathfrak p)$ (small detail omitted).
In particular Lemma \ref{lemma-strictly-henselian-functorial}
really does apply. Next, the compositum of $\kappa(\mathfrak q)$ and
$\kappa_1^{sep}$ in $\kappa_2^{sep}$ is separably algebraically
closed and hence equal to $\kappa_2^{sep}$. We conclude that
+$\kappa_2^{sep}/\kappa_1^{sep}$ is finite. By construction
+the extension $\kappa_2^{sep}/\kappa_1^{sep}$ is purely inseparable.
+The ring map $R_{\mathfrak p}^{sh} \to S_{\mathfrak q}^{sh}$ is
+indeed local and induces the residue field extension
+$\kappa_2^{sep}/\kappa_1^{sep}$ which is indeed finite purely inseparable.
+
+\medskip\noindent
+Note that $R_{\mathfrak p}^{sh} \otimes_R S$ is quasi-finite over
+$R_{\mathfrak p}^{sh}$ at the prime ideal $\mathfrak q'$
+given in the statement of the lemma, see
+Lemma \ref{lemma-four-rings}. Hence the localization $S'$ of
+$R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$
+at $\mathfrak q'$ is henselian and finite over $R_{\mathfrak p}^{sh}$, see
+Lemma \ref{lemma-finite-over-henselian}.
+Note that the residue field of $S'$ is $\kappa_2^{sep}$
+as the map $\kappa_1^{sep} \otimes_{\kappa(\mathfrak p)} \kappa(\mathfrak q)
+\to \kappa_2^{sep}$ is surjective by the discussion in the previous
+paragraph.
+Furthermore, as a localization $S'$ is a filtered colimit of \'etale
+$R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$-algebras.
+By Lemma \ref{lemma-strictly-henselian-functorial-improve}
+we see that $S_{\mathfrak q}^{sh}$ is the strict henselization of
+$R_{\mathfrak p}^{sh} \otimes_{R_{\mathfrak p}} S_{\mathfrak q}$
+at $\mathfrak q'$. Thus $S' = S_\mathfrak q^{sh}$ by the uniqueness
+result of Lemma \ref{lemma-uniqueness-henselian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quotient-strict-henselization}
+Let $R$ be a local ring with strict henselization $R^{sh}$.
Let $I \subset \mathfrak m_R$ be an ideal.
+Then $R^{sh}/IR^{sh}$ is a strict henselization of $R/I$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Lemma \ref{lemma-quasi-finite-strict-henselization}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-tensor-with-integral}
+Let $A \to B$ and $A \to C$ be local homomorphisms of local rings.
+If $A \to C$ is integral and either
+$\kappa(\mathfrak m_C)/\kappa(\mathfrak m_A)$ or
+$\kappa(\mathfrak m_B)/\kappa(\mathfrak m_A)$ is purely
+inseparable, then $D = B \otimes_A C$ is a local ring and
+$B \to D$ and $C \to D$ are local.
+\end{lemma}
+
+\begin{proof}
+Any maximal ideal of $D$ lies over the maximal ideal of $B$ by
+going up for the integral ring map
+$B \to D$ (Lemma \ref{lemma-integral-going-up}).
+Now $D/\mathfrak m_B D = \kappa(\mathfrak m_B) \otimes_A C =
+\kappa(\mathfrak m_B) \otimes_{\kappa(\mathfrak m_A)} C/\mathfrak m_A C$.
+The spectrum of $C/\mathfrak m_A C$ consists of a
+single point, namely $\mathfrak m_C$. Thus the spectrum of
+$D/\mathfrak m_B D$ is the same as the spectrum of
+$\kappa(\mathfrak m_B) \otimes_{\kappa(\mathfrak m_A)} \kappa(\mathfrak m_C)$
+which is a single point by our assumption that either
+$\kappa(\mathfrak m_C)/\kappa(\mathfrak m_A)$ or
+$\kappa(\mathfrak m_B)/\kappa(\mathfrak m_A)$ is purely
+inseparable. This proves that $D$ is local and that the ring maps
+$B \to D$ and $C \to D$ are local.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-strict-henselization-quasi-finite}
+Let $A \to B$ and $A \to C$ be ring maps. Let $\kappa$ be a separably
+algebraically closed field and let $B \otimes_A C \to \kappa$
+be a ring homomorphism. Denote
+$$
+\xymatrix{
+B^{sh} \ar[r] & (B \otimes_A C)^{sh} \\
+A^{sh} \ar[u] \ar[r] & C^{sh} \ar[u]
+}
+$$
+the corresponding maps of strict henselizations (see proof). If
+\begin{enumerate}
+\item $A \to B$ is quasi-finite at the prime
+$\mathfrak p_B = \Ker(B \to \kappa)$, or
+\item $B$ is a filtered colimit of quasi-finite $A$-algebras, or
+\item $B_{\mathfrak p_B}$ is a filtered colimit of quasi-finite
+algebras over $A_{\mathfrak p_A}$, or
+\item $B$ is integral over $A$,
+\end{enumerate}
+then $B^{sh} \otimes_{A^{sh}} C^{sh} \to (B \otimes_A C)^{sh}$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Write $D = B \otimes_A C$. Denote
+$\mathfrak p_A = \Ker(A \to \kappa)$ and similarly
+for $\mathfrak p_B$, $\mathfrak p_C$, and $\mathfrak p_D$.
+Denote $\kappa_A \subset \kappa$ the separable algebraic
+closure of $\kappa(\mathfrak p_A)$ in $\kappa$
+and similarly for $\kappa_B$, $\kappa_C$, and $\kappa_D$.
+Denote $A^{sh}$ the strict henselization of $A_{\mathfrak p_A}$
+constructed using the separable algebraic closure
+$\kappa_A/\kappa(\mathfrak p_A)$. Similarly for
+$B^{sh}$, $C^{sh}$, and $D^{sh}$. We obtain the commutative
+diagram of the lemma from the functoriality of
+Lemma \ref{lemma-strictly-henselian-functorial}.
+
+\medskip\noindent
+Consider the map
+$$
+c : B^{sh} \otimes_{A^{sh}} C^{sh} \to D^{sh} = (B \otimes_A C)^{sh}
+$$
+we obtain from the commutative diagram. If $A \to B$ is quasi-finite at
+$\mathfrak p_B = \Ker(B \to \kappa)$, then the ring map $C \to D$ is
+quasi-finite at $\mathfrak p_D$ by Lemma \ref{lemma-four-rings}. Hence by
+Lemma \ref{lemma-quasi-finite-strict-henselization}
+(and Lemma \ref{lemma-base-change-integral})
+the ring map $c$ is a homomorphism of finite $C^{sh}$-algebras and
+$$
+B^{sh} = (B \otimes_A A^{sh})_{\mathfrak q}
+\quad\text{and}\quad
+D^{sh} = (D \otimes_C C^{sh})_{\mathfrak r} =
+(B \otimes_A C^{sh})_{\mathfrak r}
+$$
+for some primes $\mathfrak q$ and $\mathfrak r$. Since
+$$
+B^{sh} \otimes_{A^{sh}} C^{sh} =
+(B \otimes_A A^{sh})_{\mathfrak q} \otimes_{A^{sh}} C^{sh} =
+\text{a localization of }
+B \otimes_A C^{sh}
+$$
+we conclude that source and target of $c$ are both
+localizations of $B \otimes_A C^{sh}$ (compatibly with the map).
+Hence it suffices to show that $B^{sh} \otimes_{A^{sh}} C^{sh}$
+is local (small detail omitted). This follows from
+Lemma \ref{lemma-local-tensor-with-integral}
+and the fact that $A^{sh} \to B^{sh}$ is finite with
+purely inseparable residue field extension by the already used
+Lemma \ref{lemma-quasi-finite-strict-henselization}.
+This proves case (1) of the lemma.
+
+\medskip\noindent
+In case (2) write $B = \colim B_i$ as a filtered colimit of
+quasi-finite $A$-algebras. We correspondingly get
+$D = \colim D_i$ with $D_i = B_i \otimes_A C$.
+Observe that $B^{sh} = \colim B_i^{sh}$. Namely, the ring
+$\colim B_i^{sh}$ is a strictly henselian local ring
+by Lemma \ref{lemma-colimit-henselian}.
+Also $\colim B_i^{sh}$ is a filtered colimit of
+\'etale $B$-algebras by Lemma \ref{lemma-colimit-colimit-etale-better}.
+Finally, the residue field of $\colim B_i^{sh}$ is a separable
+algebraic closure of $\kappa(\mathfrak p_B)$ (details omitted).
+Hence we conclude that $B^{sh} = \colim B_i^{sh}$, see
+discussion following Definition \ref{definition-henselization}.
+Similarly, we have $D^{sh} = \colim D_i^{sh}$.
+Then we conclude by case (1) because
+$$
+D^{sh} = \colim D_i^{sh} = \colim B_i^{sh} \otimes_{A^{sh}} C^{sh} =
+B^{sh} \otimes_{A^{sh}} C^{sh}
+$$
+since filtered colimits commute with tensor products.
+
+\medskip\noindent
+Case (3). We may replace $A$, $B$, $C$ by their localizations
+at $\mathfrak p_A$, $\mathfrak p_B$, and $\mathfrak p_C$.
+Thus (3) follows from (2).
+
+\medskip\noindent
+Since an integral ring map is a filtered colimit of finite ring
+maps, we see that (4) follows from (2) as well.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Serre's criterion for normality}
+\label{section-serre-criterion}
+
+\noindent
+We introduce the following properties of Noetherian rings.
+
+\begin{definition}
+\label{definition-conditions}
+Let $R$ be a Noetherian ring.
+Let $k \geq 0$ be an integer.
+\begin{enumerate}
+\item We say $R$ has property {\it $(R_k)$} if for every prime $\mathfrak p$
+of height $\leq k$ the local ring $R_{\mathfrak p}$ is regular.
+We also say that $R$ is {\it regular in codimension $\leq k$}.
+\item We say $R$ has property {\it $(S_k)$} if for every prime $\mathfrak p$
+the local ring $R_{\mathfrak p}$ has depth at least
+$\min\{k, \dim(R_{\mathfrak p})\}$.
+\item Let $M$ be a finite $R$-module. We say $M$ has property $(S_k)$
+if for every prime $\mathfrak p$ the module
+$M_{\mathfrak p}$ has depth at least
+$\min\{k, \dim(\text{Supp}(M_{\mathfrak p}))\}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Any Noetherian ring has property $(S_0)$ and so does any finite module
+over it. Our convention that the depth of the zero module is $\infty$
+(see Section \ref{section-depth}) and the dimension of the empty set is
+$-\infty$ (see Topology, Section \ref{topology-section-krull-dimension})
+guarantees that the zero module has property $(S_k)$ for all $k$.
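+
+\medskip\noindent
+A standard example: let $k$ be a field and consider
+$R = k[x, y]/(x^2, xy)$. The unique minimal prime of $R$ is $(x)$
+and $R_{(x)} \cong k(y)$ is a field, so $R$ has $(R_0)$. On the
+other hand the maximal ideal $\mathfrak m = (x, y)$ annihilates
+the class of $x$, hence is an (embedded) associated prime, so that
+$$
+\text{depth}(R_{\mathfrak m}) = 0 < 1 = \min\{1, \dim(R_{\mathfrak m})\}
+$$
+and $(S_1)$ fails, consistent with the fact that $R$ is not reduced.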
+
+\begin{lemma}
+\label{lemma-criterion-no-embedded-primes}
+Let $R$ be a Noetherian ring.
+Let $M$ be a finite $R$-module.
+The following are equivalent:
+\begin{enumerate}
+\item $M$ has no embedded associated prime, and
+\item $M$ has property $(S_1)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p$ be an embedded associated prime of $M$.
+Then there exists another associated prime $\mathfrak q$ of $M$
+such that $\mathfrak p \supset \mathfrak q$. In particular this
+implies that $\dim(\text{Supp}(M_{\mathfrak p})) \geq 1$ (since $\mathfrak q$
+is in the support as well). On the other hand $\mathfrak pR_{\mathfrak p}$
+is associated to $M_{\mathfrak p}$
+(Lemma \ref{lemma-associated-primes-localize}) and hence
+$\text{depth}(M_{\mathfrak p}) = 0$
+(see Lemma \ref{lemma-ideal-nonzerodivisor}).
+In other words $(S_1)$ does not hold.
+Conversely, if $(S_1)$ does not hold then there exists a prime
+$\mathfrak p$ such that $\dim(\text{Supp}(M_{\mathfrak p})) \geq 1$
+and $\text{depth}(M_{\mathfrak p}) = 0$. Since
+$\text{depth}(M_{\mathfrak p}) = 0$, we see that
+$\mathfrak p \in \text{Ass}(M)$ by the two Lemmas
+\ref{lemma-associated-primes-localize} and \ref{lemma-ideal-nonzerodivisor}.
+Since $\dim(\text{Supp}(M_{\mathfrak p})) \geq 1$, there
+is a prime $\mathfrak q \in \text{Supp}(M)$ with
+$\mathfrak q \subset \mathfrak p$, $\mathfrak q \not = \mathfrak p$.
+We can take such a $\mathfrak q$ that is minimal in
+$\text{Supp}(M)$. Then by
+Proposition \ref{proposition-minimal-primes-associated-primes}
+we have $\mathfrak q \in \text{Ass}(M)$ and hence
+$\mathfrak p$ is an embedded associated prime.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-criterion-reduced}
+\begin{slogan}
+Reduced equals R0 plus S1.
+\end{slogan}
+Let $R$ be a Noetherian ring.
+The following are equivalent:
+\begin{enumerate}
+\item $R$ is reduced, and
+\item $R$ has properties $(R_0)$ and $(S_1)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Suppose that $R$ is reduced. Then $R_{\mathfrak p}$ is a field for
+every minimal prime $\mathfrak p$ of $R$, according to
+Lemma \ref{lemma-minimal-prime-reduced-ring}. Hence we have $(R_0)$.
+Let $\mathfrak p$ be a prime of height $\geq 1$. Then $A = R_{\mathfrak p}$
+is a reduced local ring of dimension $\geq 1$. Hence its maximal
+ideal $\mathfrak m$ is not an associated prime:
+this would mean there exists a nonzero $x \in \mathfrak m$
+with annihilator $\mathfrak m$, so that $x^2 = 0$, contradicting
+that $A$ is reduced. Hence the depth of
+$A = R_{\mathfrak p}$ is at least one, by Lemma \ref{lemma-ass-zero-divisors}.
+This shows that $(S_1)$ holds.
+
+\medskip\noindent
+Conversely, assume that $R$ satisfies $(R_0)$ and $(S_1)$.
+If $\mathfrak p$ is a minimal prime of $R$, then
+$R_{\mathfrak p}$ is a field by $(R_0)$, and hence is reduced.
+If $\mathfrak p$ is not minimal, then we see that $R_{\mathfrak p}$
+has depth $\geq 1$ by $(S_1)$ and we conclude there exists an element
+$t \in \mathfrak pR_{\mathfrak p}$ such that
+$R_{\mathfrak p} \to R_{\mathfrak p}[1/t]$ is injective.
+Now $R_\mathfrak p[1/t]$ is contained in the product
+of its localizations at prime ideals, see
+Lemma \ref{lemma-characterize-zero-local}.
+This implies that $R_{\mathfrak p}$ is a subring of a product of
+localizations of $R$ at $\mathfrak q \supset \mathfrak p$ with
+$t \not \in \mathfrak q$. Since these primes have smaller height,
+by induction on the height we conclude that $R$ is reduced.
+\end{proof}
+
+\begin{lemma}[Serre's criterion for normality]
+\label{lemma-criterion-normal}
+\begin{reference}
+\cite[IV, Theorem 5.8.6]{EGA}
+\end{reference}
+\begin{slogan}
+Normal equals R1 plus S2.
+\end{slogan}
+Let $R$ be a Noetherian ring.
+The following are equivalent:
+\begin{enumerate}
+\item $R$ is a normal ring, and
+\item $R$ has properties $(R_1)$ and $(S_2)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1) $\Rightarrow$ (2). Assume $R$ is normal, i.e., all
+localizations $R_{\mathfrak p}$ at primes are normal domains.
+In particular we see that $R$ has $(R_0)$ and $(S_1)$ by
+Lemma \ref{lemma-criterion-reduced}. Hence it suffices to show
+that a local Noetherian normal domain $R$ of dimension $d$ has
+depth $\geq \min(2, d)$ and is regular if $d = 1$. The assertion
+in case $d = 1$ follows from Lemma \ref{lemma-characterize-dvr}.
+
+\medskip\noindent
+Let $R$ be a local Noetherian normal domain with maximal ideal
+$\mathfrak m$ and dimension $d \geq 2$. Apply
+Lemma \ref{lemma-hart-serre-loc-thm} to $R$.
+It is clear that $R$ does not fall into cases (1) or (2)
+of the lemma.
+Let $R \to R'$ as in (4) of the lemma.
+Since $R$ is a domain we have $R \subset R'$. Since $\mathfrak m$
+is not an associated prime of $R'$ there exists an $x \in \mathfrak m$
+which is a nonzerodivisor on $R'$. Then $R_x = R'_x$ so
+$R$ and $R'$ are domains with the same fraction field. But
+finiteness of $R \subset R'$ implies every element of $R'$ is integral
+over $R$ (Lemma \ref{lemma-finite-is-integral})
+and we conclude that $R = R'$ as $R$ is normal.
+This means (4) does not happen. Thus we get the remaining possibility
+(3), i.e., $\text{depth}(R) \geq 2$ as desired.
+
+\medskip\noindent
+Proof of (2) $\Rightarrow$ (1). Assume $R$ satisfies $(R_1)$ and $(S_2)$.
+By Lemma \ref{lemma-criterion-reduced} we conclude that $R$ is
+reduced. Hence it suffices to show that if $R$ is a reduced local
+Noetherian ring of dimension $d$ satisfying $(S_2)$ and $(R_1)$
+then $R$ is a normal domain. If $d = 0$, the result is clear.
+If $d = 1$, then the result follows from Lemma \ref{lemma-characterize-dvr}.
+
+\medskip\noindent
+Let $R$ be a reduced local Noetherian ring with maximal ideal
+$\mathfrak m$ and dimension $d \geq 2$ which satisfies $(R_1)$ and
+$(S_2)$. By Lemma \ref{lemma-characterize-reduced-ring-normal}
+it suffices to show that $R$ is integrally closed in its
+total ring of fractions $Q(R)$. Pick $x \in Q(R)$ which is integral
+over $R$. Then $R' = R[x]$ is a finite ring extension of $R$
+(Lemma \ref{lemma-characterize-finite-in-terms-of-integral}).
+Because $\dim(R_\mathfrak p) < d$ for
+every nonmaximal prime $\mathfrak p \subset R$
+we have $R_\mathfrak p = R'_\mathfrak p$ by induction.
+Hence the support of $R'/R$ is contained in $\{\mathfrak m\}$.
+It follows that $R'/R$ is annihilated by a power of $\mathfrak m$
+(Lemma \ref{lemma-Noetherian-power-ideal-kills-module}).
+If $R' \not = R$, then by Lemma \ref{lemma-hart-serre-loc-thm} this
+contradicts the assumption that the depth of $R$ is $\geq 2 = \min(2, d)$.
+Hence $x \in R$ and the proof is complete.
+\end{proof}
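+
+\medskip\noindent
+Neither of the conditions in part (2) can be dropped, as the following
+standard examples show. The coordinate ring
+$R = k[x, y]/(y^2 - x^3)$ of a cuspidal curve is a hypersurface,
+hence Cohen-Macaulay and therefore $(S_2)$; but $(R_1)$ fails
+since the local ring at the height $1$ prime $(x, y)$ is not regular.
+Indeed, $y/x$ is integral over $R$ (its square is $x$) but does not
+lie in $R$. Conversely, the ring
+$R = k[x, y, z, w]/(xz, xw, yz, yw)$ of two planes meeting in a
+point satisfies $(R_1)$, since every height $1$ prime contains
+exactly one of the minimal primes $(x, y)$ and $(z, w)$; but the
+localization at $\mathfrak m = (x, y, z, w)$ has depth
+$1 < 2 = \min(2, \dim R_{\mathfrak m})$, so $(S_2)$ fails.
+Correspondingly $R_{\mathfrak m}$ is not a domain, hence $R$ is
+not normal.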
+
+\begin{lemma}
+\label{lemma-regular-normal}
+A regular ring is normal.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a regular ring. By
+Lemma \ref{lemma-criterion-normal}
+it suffices to prove that $R$ is $(R_1)$ and $(S_2)$.
+As a regular local ring is Cohen-Macaulay, see
+Lemma \ref{lemma-regular-ring-CM},
+it is clear that $R$ is $(S_2)$.
+Property $(R_1)$ is immediate.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-normal-domain-intersection-localizations-height-1}
+Let $R$ be a Noetherian normal domain with fraction field $K$. Then
+\begin{enumerate}
+\item for any nonzero $a \in R$ the quotient $R/aR$ has no embedded primes,
+and all its associated primes have height $1$
+\item
+$$
+R = \bigcap\nolimits_{\text{height}(\mathfrak p) = 1} R_{\mathfrak p}
+$$
+\item for any nonzero $x \in K$ the quotient $R/(R \cap xR)$
+has no embedded primes, and all its associated primes have height $1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-criterion-normal} we see that $R$ has $(S_2)$.
+Hence for any nonzero element $a \in R$ we see that $R/aR$ has $(S_1)$
+(use Lemma \ref{lemma-depth-in-ses} for example).
+Hence $R/aR$ has no embedded primes
+(Lemma \ref{lemma-criterion-no-embedded-primes}).
+We conclude the associated primes of $R/aR$ are exactly
+the minimal primes $\mathfrak p$ over $(a)$, which have height $1$
+as $a$ is not zero (Lemma \ref{lemma-minimal-over-1}). This proves (1).
+
+\medskip\noindent
+Thus, given $b \in R$ we have $b \in aR$ if and only if
+$b \in aR_{\mathfrak p}$ for every minimal prime $\mathfrak p$
+over $(a)$ (see Lemma \ref{lemma-zero-at-ass-zero}).
+These primes all have height $1$ as seen above so
+$b/a \in R$ if and only if $b/a \in R_{\mathfrak p}$ for all
+height 1 primes. Hence (2) holds.
+
+\medskip\noindent
+For (3) write $x = a/b$. Let $\mathfrak p_1, \ldots, \mathfrak p_r$
+be the minimal primes over $(ab)$. These all have height 1 by the above.
+Then we see that
+$R \cap xR = \bigcap_{i = 1, \ldots, r} (R \cap xR_{\mathfrak p_i})$
+by part (2) of the lemma. Hence $R/(R \cap xR)$ is a submodule of
+$\bigoplus R/(R \cap xR_{\mathfrak p_i})$.
+As $R_{\mathfrak p_i}$ is a discrete valuation ring (by property $(R_1)$
+for the Noetherian normal domain $R$, see Lemma \ref{lemma-criterion-normal})
+we have $xR_{\mathfrak p_i} = \mathfrak p_i^{e_i}R_{\mathfrak p_i}$
+for some $e_i \in \mathbf{Z}$. Hence the direct sum is equal
+to $\bigoplus_{e_i > 0} R/\mathfrak p_i^{(e_i)}$, see
+Definition \ref{definition-symbolic-power}.
+By Lemma \ref{lemma-symbolic-power-associated}
+the only associated prime of the module
+$R/\mathfrak p^{(n)}$ is $\mathfrak p$. Hence the set of associated primes
+of $R/(R \cap xR)$ is a subset of $\{\mathfrak p_i\}$ and there are
+no inclusion relations among them. This proves (3).
+\end{proof}
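+
+\medskip\noindent
+For example, take $R = \mathbf{Z}$ and $a = 12$. Part (1) says
+that the associated primes of $\mathbf{Z}/12\mathbf{Z}$ are the
+minimal primes over $(12)$, namely $(2)$ and $(3)$, both of
+height $1$. Part (2) recovers the familiar equality
+$$
+\mathbf{Z} = \bigcap\nolimits_{p \text{ prime}} \mathbf{Z}_{(p)}
+$$
+inside $\mathbf{Q}$: a fraction which is $p$-integral for every
+prime $p$ is an integer.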
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Formal smoothness of fields}
+\label{section-p-bases}
+
+\noindent
+In this section we show that field extensions are formally smooth
+if and only if they are separable. But first we prove that finitely
+generated field extensions are separable algebraic if and only if
+they are formally unramified.
+
+\begin{lemma}
+\label{lemma-characterize-separable-algebraic-field-extensions}
+Let $K/k$ be a finitely generated field extension.
+The following are equivalent:
+\begin{enumerate}
+\item $K$ is a finite separable field extension of $k$,
+\item $\Omega_{K/k} = 0$,
+\item $K$ is formally unramified over $k$,
+\item $K$ is unramified over $k$,
+\item $K$ is formally \'etale over $k$,
+\item $K$ is \'etale over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (2) and (3) is
+Lemma \ref{lemma-characterize-formally-unramified}.
+By Lemma \ref{lemma-etale-over-field}
+we see that (1) is equivalent to (6).
+Property (6) implies (5) and (4) which both in turn imply (3)
+(Lemmas \ref{lemma-formally-etale-etale}, \ref{lemma-unramified},
+and \ref{lemma-formally-unramified-unramified}).
+Thus it suffices to show that (2) implies (1).
+Choose a finitely generated $k$-subalgebra $A \subset K$
+such that $K$ is the fraction field of the domain $A$.
+Set $S = A \setminus \{0\}$.
+Since $0 = \Omega_{K/k} = S^{-1}\Omega_{A/k}$
+(Lemma \ref{lemma-differentials-localize})
+and since $\Omega_{A/k}$ is finitely generated
+(Lemma \ref{lemma-differentials-finitely-generated}),
+we can replace $A$ by a localization $A_f$ to reduce to the case
+that $\Omega_{A/k} = 0$ (details omitted).
+Then $A$ is unramified over $k$, hence
+$K/k$ is finite separable for example by
+Lemma \ref{lemma-unramified-at-prime} applied with $\mathfrak q = (0)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derivative-zero-pth-power}
+Let $k$ be a perfect field of characteristic $p > 0$.
+Let $K/k$ be an extension.
+Let $a \in K$. Then $\text{d}a = 0$ in $\Omega_{K/k}$
+if and only if $a$ is a $p$th power.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-colimit-differentials} we see that there exists a subfield
+$k \subset L \subset K$ such that $L/k$
+is a finitely generated field extension and such that
+$\text{d}a$ is zero in $\Omega_{L/k}$.
+Hence we may assume that $K$ is a finitely generated field extension
+of $k$.
+
+\medskip\noindent
+Choose a transcendence basis $x_1, \ldots, x_r \in K$
+such that $K$ is finite separable over $k(x_1, \ldots, x_r)$.
+This is possible by the definitions, see
+Definitions \ref{definition-perfect} and
+\ref{definition-separable-field-extension}.
+We remark that the result holds for the purely transcendental
+subfield $k(x_1, \ldots, x_r) \subset K$.
+Namely,
+$$
+\Omega_{k(x_1, \ldots, x_r)/k} =
+\bigoplus\nolimits_{i = 1}^r k(x_1, \ldots, x_r) \text{d}x_i
+$$
+and any rational function all of whose partial derivatives are zero
+is a $p$th power. Moreover, we also have
+$$
+\Omega_{K/k} =
+\bigoplus\nolimits_{i = 1}^r K\text{d}x_i
+$$
+since $k(x_1, \ldots, x_r) \subset K$ is finite separable
+(computation omitted). Suppose $a \in K$ is an element such that
+$\text{d}a = 0$ in the module of differentials. By our choice of $x_i$ we
+see that the minimal polynomial $P(T) \in k(x_1, \ldots, x_r)[T]$
+of $a$ is separable. Write
+$$
+P(T) = T^d + \sum\nolimits_{i = 1}^d a_i T^{d - i}
+$$
+and hence
+$$
+0 = \text{d}P(a) = \sum\nolimits_{i = 1}^d a^{d - i}\text{d}a_i
+$$
+in $\Omega_{K/k}$. By the description of
+$\Omega_{K/k}$ above and the fact that $P$ was the minimal
+polynomial of $a$, we see that this implies $\text{d}a_i = 0$.
+Hence $a_i = b_i^p$ for each $i$. Therefore by
+Fields, Lemma \ref{fields-lemma-pth-root}
+we see that $a$ is a $p$th power.
+\end{proof}
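+
+\medskip\noindent
+For example, take $k = \mathbf{F}_p$ and $K = \mathbf{F}_p(t)$.
+Then $\Omega_{K/k} = K\text{d}t$ and $\text{d}f = f'\,\text{d}t$
+for $f \in K$, where $f'$ denotes the formal derivative. Thus
+$\text{d}f = 0$ if and only if $f' = 0$, which happens if and only
+if $f \in \mathbf{F}_p(t^p) = K^p$.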
+
+\begin{lemma}
+\label{lemma-size-extension-pth-roots}
+Let $k$ be a field of characteristic $p > 0$.
+Let $a_1, \ldots, a_n \in k$ be elements such that
+$\text{d}a_1, \ldots, \text{d}a_n$ are linearly independent in
+$\Omega_{k/\mathbf{F}_p}$. Then the field extension
+$k(a_1^{1/p}, \ldots, a_n^{1/p})$ has degree $p^n$ over $k$.
+\end{lemma}
+
+\begin{proof}
+By induction on $n$. If $n = 1$ the result is
+Lemma \ref{lemma-derivative-zero-pth-power}.
+For the induction step, suppose that $k(a_1^{1/p}, \ldots, a_{n - 1}^{1/p})$
+has degree $p^{n - 1}$ over $k$. We have to show that $a_n$ does not
+map to a $p$th power in $k(a_1^{1/p}, \ldots, a_{n - 1}^{1/p})$.
+If it does then we can write
+\begin{align*}
+a_n & =
+\left(\sum\nolimits_{I = (i_1, \ldots, i_{n - 1}),\ 0 \leq i_j \leq p - 1}
+\lambda_I a_1^{i_1/p} \ldots a_{n - 1}^{i_{n - 1}/p}\right)^p \\
+& = \sum\nolimits_{I = (i_1, \ldots, i_{n - 1}),\ 0 \leq i_j \leq p - 1}
+\lambda_I^p a_1^{i_1} \ldots a_{n - 1}^{i_{n - 1}}
+\end{align*}
+Applying $\text{d}$ we see that $\text{d}a_n$ is linearly dependent on
+$\text{d}a_i$, $i < n$. This is a contradiction.
+\end{proof}
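+
+\medskip\noindent
+As an illustration, let $k = \mathbf{F}_p(t_1, \ldots, t_n)$ with
+$t_1, \ldots, t_n$ independent variables. Then
+$\text{d}t_1, \ldots, \text{d}t_n$ are linearly independent in
+$\Omega_{k/\mathbf{F}_p}$ and the lemma gives
+$$
+[k(t_1^{1/p}, \ldots, t_n^{1/p}) : k] = p^n,
+$$
+the maximal possible degree.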
+
+\begin{lemma}
+\label{lemma-separable-differentials}
+Let $k$ be a field of characteristic $p > 0$.
+The following are equivalent:
+\begin{enumerate}
+\item the field extension $K/k$ is separable
+(see Definition \ref{definition-separable-field-extension}), and
+\item the map
+$K \otimes_k \Omega_{k/\mathbf{F}_p} \to \Omega_{K/\mathbf{F}_p}$
+is injective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Write $K$ as a directed colimit $K = \colim_i K_i$ of finitely generated
+field extensions $K_i/k$. By definition $K$ is separable if and only
+if each $K_i$ is separable over $k$, and by
+Lemma \ref{lemma-colimit-differentials} we see that
+$K \otimes_k \Omega_{k/\mathbf{F}_p} \to \Omega_{K/\mathbf{F}_p}$
+is injective if and only if each
+$K_i \otimes_k \Omega_{k/\mathbf{F}_p} \to \Omega_{K_i/\mathbf{F}_p}$
+is injective. Hence we may assume that $K/k$ is a finitely generated field
+extension.
+
+\medskip\noindent
+Assume $K/k$ is a finitely generated field extension which is
+separable. Choose $x_1, \ldots, x_{r + 1} \in K$ as in
+Lemma \ref{lemma-generating-finitely-generated-separable-field-extensions}.
+In this case there exists an irreducible polynomial
+$G(X_1, \ldots, X_{r + 1}) \in k[X_1, \ldots, X_{r + 1}]$
+such that $G(x_1, \ldots, x_{r + 1}) = 0$ and such that
+$\partial G/\partial X_{r + 1}$ is not identically zero.
+Moreover $K$ is the field of fractions of the domain
+$S = k[X_1, \ldots, X_{r + 1}]/(G)$.
+Write
+$$
+G = \sum a_I X^I, \quad X^I = X_1^{i_1}\ldots X_{r + 1}^{i_{r + 1}}.
+$$
+Using the presentation of $S$ above we see that
+$$
+\Omega_{S/\mathbf{F}_p}
+=
+\frac{
+S \otimes_k \Omega_k \oplus
+\bigoplus\nolimits_{i = 1, \ldots, r + 1} S\text{d}X_i
+}{
+\langle
+\sum X^I \text{d}a_I + \sum \partial G/\partial X_i \text{d}X_i
+\rangle
+}
+$$
+Since $\Omega_{K/\mathbf{F}_p}$ is the localization
+of the $S$-module $\Omega_{S/\mathbf{F}_p}$ (see
+Lemma \ref{lemma-differentials-localize}) we conclude
+that
+$$
+\Omega_{K/\mathbf{F}_p}
+=
+\frac{
+K \otimes_k \Omega_k \oplus
+\bigoplus\nolimits_{i = 1, \ldots, r + 1} K\text{d}X_i
+}{
+\langle
+\sum X^I \text{d}a_I + \sum \partial G/\partial X_i \text{d}X_i
+\rangle
+}
+$$
+Now, since the polynomial $\partial G/\partial X_{r + 1}$ is not identically
+zero we conclude that the map
+$K \otimes_k \Omega_{k/\mathbf{F}_p} \to \Omega_{K/\mathbf{F}_p}$
+is injective as desired.
+
+\medskip\noindent
+Assume $K/k$ is a finitely generated field extension
+and that
+$K \otimes_k \Omega_{k/\mathbf{F}_p} \to \Omega_{K/\mathbf{F}_p}$
+is injective.
+(This part of the proof is the same as the argument proving
+Lemma \ref{lemma-characterize-separable-field-extensions}.)
+Let $x_1, \ldots, x_r$ be a transcendence basis of $K$ over $k$ such
+that the degree of inseparability of the finite extension
+$k(x_1, \ldots, x_r) \subset K$ is minimal.
+If $K$ is separable over $k(x_1, \ldots, x_r)$ then we win.
+Assume this is not the case to get a contradiction.
+Then there exists an element $\alpha \in K$ which is not
+separable over $k(x_1, \ldots, x_r)$. Let $P(T) \in k(x_1, \ldots, x_r)[T]$
+be its minimal polynomial. Because $\alpha$ is not separable,
+$P$ is in fact a polynomial in $T^p$. Clear denominators
+to get an irreducible polynomial
+$$
+G(X_1, \ldots, X_r, T) = \sum a_{I, i} X^I T^i \in k[X_1, \ldots, X_r, T]
+$$
+such that $G(x_1, \ldots, x_r, \alpha) = 0$ in $K$.
+Note that this means $k[X_1, \ldots, X_r, T]/(G) \subset K$.
+We may assume that for some pair $(I_0, i_0)$ the coefficient
+$a_{I_0, i_0} = 1$.
+We claim that $\partial G/\partial X_i$ is not identically zero
+for at least one $i$. Namely, if this is not the case, then
+$G$ is actually a polynomial in $X_1^p, \ldots, X_r^p, T^p$.
+Then this means that
+$$
+\sum\nolimits_{(I, i) \not = (I_0, i_0)} x^I\alpha^i \text{d}a_{I, i}
+$$
+is zero in $\Omega_{K/\mathbf{F}_p}$. Note that there is no
+$k$-linear relation among the elements
+$$
+\{x^I\alpha^i \mid a_{I, i} \not = 0 \text{ and } (I, i) \not = (I_0, i_0)\}
+$$
+of $K$. Hence the assumption
+that $K \otimes_k \Omega_{k/\mathbf{F}_p} \to \Omega_{K/\mathbf{F}_p}$
+is injective implies that $\text{d}a_{I, i} = 0$
+in $\Omega_{k/\mathbf{F}_p}$ for all $(I, i)$.
+By Lemma \ref{lemma-derivative-zero-pth-power}
+we see that each $a_{I, i}$ is a $p$th power, which
+implies that $G$ is a $p$th power contradicting the irreducibility of
+$G$. Thus,
+after renumbering, we may assume that $\partial G/\partial X_1$ is not zero.
+Then we see that $x_1$ is separably algebraic over
+$k(x_2, \ldots, x_r, \alpha)$, and that $x_2, \ldots, x_r, \alpha$
+is a transcendence basis of $K$ over $k$. This means that
+the degree of inseparability of the finite extension
+$k(x_2, \ldots, x_r, \alpha) \subset K$ is less than the
+degree of inseparability of the finite extension
+$k(x_1, \ldots, x_r) \subset K$, which is a contradiction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formally-smooth-implies-separable}
+Let $K/k$ be an extension of fields.
+If $K$ is formally smooth over $k$, then $K$ is
+a separable extension of $k$.
+\end{lemma}
+
+\begin{proof}
+Assume $K$ is formally smooth over $k$.
+By Lemma \ref{lemma-ses-formally-smooth} we see that
+$K \otimes_k \Omega_{k/\mathbf{Z}} \to \Omega_{K/\mathbf{Z}}$
+is injective. Hence $K$ is separable over $k$ by
+Lemma \ref{lemma-separable-differentials}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-formally-smooth-field-extension}
+Let $K/k$ be an extension of fields.
+Then $K$ is formally smooth over $k$ if and only if
+$H_1(L_{K/k}) = 0$.
+\end{lemma}
+
+\begin{proof}
+This follows from Proposition \ref{proposition-characterize-formally-smooth}
+and the fact that a vector space is free (hence projective).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formally-smooth-extensions-easy}
+Let $K/k$ be an extension of fields.
+\begin{enumerate}
+\item If $K$ is purely transcendental over $k$, then
+$K$ is formally smooth over $k$.
+\item If $K$ is separable algebraic over $k$, then $K$ is
+formally smooth over $k$.
+\item If $K$ is separable over $k$, then $K$ is formally smooth
+over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For (1) write $K = k(x_j; j \in J)$. Suppose that
+$A$ is a $k$-algebra, and $I \subset A$ is an ideal of
+square zero. Let $\varphi : K \to A/I$ be a $k$-algebra map.
+Let $a_j \in A$ be an element such that $a_j \mod I = \varphi(x_j)$.
+Then it is easy to see that there is a unique $k$-algebra
+map $K \to A$ which maps $x_j$ to $a_j$ and which reduces
+to $\varphi$ mod $I$. Hence $k \subset K$ is formally smooth.
+
+\medskip\noindent
+In case (2) we see that $k \subset K$ is a colimit of
+\'etale ring extensions. An \'etale ring map is formally \'etale
+(Lemma \ref{lemma-formally-etale-etale}). Hence this case follows from
+Lemma \ref{lemma-colimit-formally-etale} and the trivial observation
+that a formally \'etale ring map is formally smooth.
+
+\medskip\noindent
+In case (3), write $K = \colim K_i$ as the filtered colimit of its
+finitely generated sub $k$-extensions. By
+Definition \ref{definition-separable-field-extension}
+each $K_i$ is separable algebraic over a purely transcendental
+extension of $k$. Hence $K_i/k$ is formally smooth by cases (1) and (2) and
+Lemma \ref{lemma-compose-formally-smooth}. Thus
+$H_1(L_{K_i/k}) = 0$ by
+Lemma \ref{lemma-characterize-formally-smooth-field-extension}.
+Hence $H_1(L_{K/k}) = 0$ by Lemma \ref{lemma-colimits-NL}.
+Hence $K/k$ is formally smooth by
+Lemma \ref{lemma-characterize-formally-smooth-field-extension} again.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fields-are-formally-smooth}
+\begin{slogan}
+Formally smooth equals separable for field extensions.
+\end{slogan}
+Let $k$ be a field.
+\begin{enumerate}
+\item If the characteristic of $k$ is zero, then any extension field
+of $k$ is formally smooth over $k$.
+\item If the characteristic of $k$ is $p > 0$, then $K/k$ is
+formally smooth if and only if it is a separable field extension.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-formally-smooth-implies-separable} and
+\ref{lemma-formally-smooth-extensions-easy}.
+\end{proof}
+
+\noindent
+Here we put together all the different characterizations of separable
+field extensions.
+
+\begin{proposition}
+\label{proposition-characterize-separable-field-extensions}
+Let $K/k$ be a field extension.
+If the characteristic of $k$ is zero, then the following hold:
+\begin{enumerate}
+\item $K$ is separable over $k$,
+\item $K$ is geometrically reduced over $k$,
+\item $K$ is formally smooth over $k$,
+\item $H_1(L_{K/k}) = 0$, and
+\item the map $K \otimes_k \Omega_{k/\mathbf{Z}} \to \Omega_{K/\mathbf{Z}}$
+is injective.
+\end{enumerate}
+If the characteristic of $k$ is $p > 0$, then the following are
+equivalent:
+\begin{enumerate}
+\item $K$ is separable over $k$,
+\item the ring $K \otimes_k k^{1/p}$ is reduced,
+\item $K$ is geometrically reduced over $k$,
+\item the map $K \otimes_k \Omega_{k/\mathbf{F}_p} \to \Omega_{K/\mathbf{F}_p}$
+is injective,
+\item $H_1(L_{K/k}) = 0$, and
+\item $K$ is formally smooth over $k$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+This is a combination of
+Lemmas \ref{lemma-characterize-separable-field-extensions},
+\ref{lemma-fields-are-formally-smooth},
+\ref{lemma-formally-smooth-implies-separable}, and
+\ref{lemma-separable-differentials}.
+\end{proof}
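+
+\medskip\noindent
+The simplest nonseparable example is $k = \mathbf{F}_p(t)$ and
+$K = k(t^{1/p})$. Here $k^{1/p} = \mathbf{F}_p(t^{1/p}) = K$ and
+$$
+K \otimes_k k^{1/p} = K \otimes_k K \cong
+K[x]/(x^p - t) = K[x]/((x - t^{1/p})^p)
+$$
+is not reduced: the class of $x - t^{1/p}$ is a nonzero nilpotent.
+Thus condition (2) fails, as do the other equivalent conditions.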
+
+\noindent
+Here is yet another characterization of finitely generated separable field
+extensions.
+
+\begin{lemma}
+\label{lemma-localization-smooth-separable}
+Let $K/k$ be a finitely generated field extension.
+Then $K$ is separable over $k$ if and only if $K$ is
+the localization of a smooth $k$-algebra.
+\end{lemma}
+
+\begin{proof}
+Choose a finite type $k$-algebra $R$ which is a domain whose
+fraction field is $K$. Lemma \ref{lemma-smooth-at-generic-point}
+says that $k \to R$ is smooth
+at $(0)$ if and only if $K/k$ is separable.
+This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-syntomic}
+Let $K/k$ be a field extension.
+Then $K$ is a filtered colimit of global complete intersection
+algebras over $k$. If $K/k$ is separable, then $K$ is a filtered
+colimit of smooth algebras over $k$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $E \subset K$ is a finite subset. It suffices to show that
+there exists a $k$-subalgebra $A \subset K$ which contains $E$
+and which is a global complete intersection (resp.\ smooth) over $k$.
+The separable/smooth case follows from
+Lemma \ref{lemma-localization-smooth-separable}.
+In general let $L \subset K$ be the subfield generated by $E$.
+Pick a transcendence basis $x_1, \ldots, x_d \in L$ over $k$.
+The extension $L/k(x_1, \ldots, x_d)$ is finite.
+Say $L = k(x_1, \ldots, x_d)[y_1, \ldots, y_r]$.
+Pick inductively polynomials $P_i \in k(x_1, \ldots, x_d)[Y_1, \ldots, Y_r]$
+such that $P_i = P_i(Y_1, \ldots, Y_i)$ is monic in $Y_i$ over
+$k(x_1, \ldots, x_d)[Y_1, \ldots, Y_{i - 1}]$ and maps to the
+minimal polynomial of $y_i$ in
+$k(x_1, \ldots, x_d)[y_1, \ldots, y_{i - 1}][Y_i]$.
+Then it is clear that $P_1, \ldots, P_r$ is a regular sequence
+in $k(x_1, \ldots, x_d)[Y_1, \ldots, Y_r]$ and that
+$L = k(x_1, \ldots, x_d)[Y_1, \ldots, Y_r]/(P_1, \ldots, P_r)$.
+If $h \in k[x_1, \ldots, x_d]$ is a polynomial such that
+$P_i \in k[x_1, \ldots, x_d, 1/h, Y_1, \ldots, Y_r]$, then
+we see that $P_1, \ldots, P_r$ is a regular sequence in
+$k[x_1, \ldots, x_d, 1/h, Y_1, \ldots, Y_r]$ and
+$A = k[x_1, \ldots, x_d, 1/h, Y_1, \ldots, Y_r]/(P_1, \ldots, P_r)$
+is a global complete intersection. After adjusting our choice of $h$
+we may assume $E \subset A$ and we win.
+\end{proof}
+
+
+
+
+
+
+\section{Constructing flat ring maps}
+\label{section-constructing-flat}
+
+\noindent
+The following lemma is occasionally useful.
+
+\begin{lemma}
+\label{lemma-flat-local-given-residue-field}
+Let $(R, \mathfrak m, k)$ be a local ring. Let $K/k$ be a field
+extension. There exists a local ring $(R', \mathfrak m', k')$, a flat local
+ring map $R \to R'$ such that $\mathfrak m' = \mathfrak mR'$ and such that
+$k'$ is isomorphic to $K$ as an extension of $k$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $k' = k(\alpha)$ is a monogenic extension of $k$.
+Then $k'$ is the residue field of a flat local extension $R \subset R'$
+as in the lemma. Namely, if $\alpha$ is transcendental over $k$, then we let
+$R'$ be the localization of $R[x]$ at the prime $\mathfrak mR[x]$.
+If $\alpha$ is algebraic with minimal polynomial
+$T^d + \sum \overline{\lambda}_iT^{d - i}$, then we let
+$R' = R[T]/(T^d + \sum \lambda_i T^{d - i})$.
+
+\medskip\noindent
+Consider the collection of triples $(k', R \to R', \phi)$, where
+$k \subset k' \subset K$ is a subfield,
+$R \to R'$ is a local ring map as in the lemma, and
+$\phi : R' \to k'$ induces an isomorphism $R'/\mathfrak mR' \cong k'$
+of $k$-extensions. These form a ``big'' category $\mathcal{C}$ with morphisms
+$(k_1, R_1, \phi_1) \to (k_2, R_2, \phi_2)$
+given by ring maps $\psi : R_1 \to R_2$ such that
+$$
+\xymatrix{
+R_1 \ar[d]_\psi \ar[r]_{\phi_1} & k_1 \ar[r] & K \ar@{=}[d] \\
+R_2 \ar[r]^{\phi_2} & k_2 \ar[r] & K
+}
+$$
+commutes. This implies that $k_1 \subset k_2$.
+
+\medskip\noindent
+Suppose that $I$ is a directed set, and
+$((k_i, R_i, \phi_i), \psi_{ii'})$ is a system over $I$, see
+Categories, Section \ref{categories-section-posets-limits}.
+In this case we can consider
+$$
+R' = \colim_{i \in I} R_i
+$$
+This is a local ring with maximal ideal $\mathfrak mR'$, and
+residue field $k' = \bigcup_{i \in I} k_i$. Moreover, the ring
+map $R \to R'$ is flat as it is a colimit of flat maps (and tensor
+products commute with directed colimits).
+Hence we see that $(k', R', \phi')$ is an ``upper bound'' for the system.
+
+\medskip\noindent
+An almost trivial application of Zorn's Lemma would finish the proof
+if $\mathcal{C}$ were a set, but it is not.
+(Actually, you can make this work by finding a reasonable bound on the
+cardinals of the local rings occurring.)
+To get around this problem we choose a well ordering on $K$.
+For $x \in K$ we let $K(x)$ be the subfield of $K$ generated
+by all elements of $K$ which are $\leq x$.
+By transfinite recursion on $x \in K$ we will produce ring maps
+$R \subset R(x)$ as in the lemma with residue field extension
+$K(x)/k$. Moreover, by construction we will have that
+$R(x)$ will contain $R(y)$ for all $y \leq x$.
+Namely, if $x$ has a predecessor $x'$, then $K(x) = K(x')[x]$
+and hence we can let $R(x') \subset R(x)$ be the local ring extension
+constructed in the first paragraph of the proof. If $x$ does not
+have a predecessor, then we first set
+$R'(x) = \colim_{x' < x} R(x')$ as in the third paragraph
+of the proof. The residue field of $R'(x)$ is $K'(x) = \bigcup_{x' < x} K(x')$.
+Since $K(x) = K'(x)[x]$ we see that we can use the construction of the
+first paragraph of the proof to produce $R'(x) \subset R(x)$.
+This finishes the proof of the lemma.
+\end{proof}
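+
+\noindent
+To illustrate the construction in the first paragraph of the proof with
+a small example: let $R = \mathbf{Z}_2$ and
+$K = \mathbf{F}_4 = \mathbf{F}_2(\alpha)$ where
+$\alpha^2 + \alpha + 1 = 0$. The minimal polynomial of $\alpha$
+lifts to $T^2 + T + 1 \in \mathbf{Z}_2[T]$ and the recipe above produces
+$$
+R' = \mathbf{Z}_2[T]/(T^2 + T + 1)
+$$
+which is free of rank $2$ over $\mathbf{Z}_2$, hence flat, and is a
+local ring with maximal ideal $2R'$ and residue field $\mathbf{F}_4$.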
+
+\begin{lemma}
+\label{lemma-colimit-finite-etale-given-residue-field}
+Let $(R, \mathfrak m, k)$ be a local ring. If $k \subset K$ is a
+separable algebraic extension, then there exists a directed set $I$ and
+a system of finite \'etale extensions $R \subset R_i$, $i \in I$
+of local rings such that $R' = \colim R_i$ has residue field
+$K$ (as extension of $k$).
+\end{lemma}
+
+\begin{proof}
+Let $R \subset R'$ be the extension constructed in the proof of
+Lemma \ref{lemma-flat-local-given-residue-field}. By construction
+$R' = \colim_{\alpha \in A} R_\alpha$ where $A$ is a well-ordered
+set and the transition maps $R_\alpha \to R_{\alpha + 1}$
+are finite \'etale and $R_\alpha = \colim_{\beta < \alpha} R_\beta$
+if $\alpha$ is not a successor. We will prove the result by transfinite
+induction.
+
+\medskip\noindent
+Suppose the result holds for $R_\alpha$, i.e., $R_\alpha = \colim R_i$
+with $R_i$ finite \'etale over $R$. Since
+$R_\alpha \to R_{\alpha + 1}$ is finite \'etale
+there exists an $i$ and a finite \'etale extension $R_i \to R_{i, 1}$
+such that $R_{\alpha + 1} = R_\alpha \otimes_{R_i} R_{i, 1}$.
+Thus $R_{\alpha + 1} = \colim_{i' \geq i} R_{i'} \otimes_{R_i} R_{i, 1}$
+and the result holds for $\alpha + 1$. Suppose $\alpha$ is not a successor
+and the result holds for $R_\beta$ for all $\beta < \alpha$.
+Since every finite subset $E \subset R_\alpha$ is contained in $R_\beta$
+for some $\beta < \alpha$, we see that $E$ is contained in a finite \'etale
+subextension by assumption. Thus the result holds for $R_\alpha$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-free-given-residue-field-extension}
+Let $R$ be a ring. Let $\mathfrak p \subset R$ be a prime and
+let $L/\kappa(\mathfrak p)$ be a finite extension of fields.
+Then there exists a finite free ring map $R \to S$ such that
+$\mathfrak q = \mathfrak pS$ is prime and
+$\kappa(\mathfrak q)/\kappa(\mathfrak p)$ is isomorphic to the given
+extension $L/\kappa(\mathfrak p)$.
+\end{lemma}
+
+\begin{proof}
+By induction on the degree of the extension $L/\kappa(\mathfrak p)$.
+If the degree is $1$, then we take $S = R$.
+In general, if there exists a subextension
+$\kappa(\mathfrak p) \subset L' \subset L$ with
+$L' \not = \kappa(\mathfrak p)$ and $L' \not = L$, then we win by induction
+on the degree (by first constructing $R \subset S'$ corresponding
+to $L'/\kappa(\mathfrak p)$ and then constructing $S' \subset S$
+corresponding to $L/L'$). Thus we may assume that
+$L \supset \kappa(\mathfrak p)$ is generated by a single element
+$\alpha \in L$. Let $X^d + \sum_{i < d} a_iX^i$ be the minimal polynomial
+of $\alpha$ over $\kappa(\mathfrak p)$, so $a_i \in \kappa(\mathfrak p)$.
+We may write $a_i$ as the image
+of $f_i/g$ for some $f_i, g \in R$ and $g \not \in \mathfrak p$.
+After replacing $\alpha$ by $g\alpha$ (and correspondingly
+replacing $a_i$ by $g^{d - i}a_i$) we may assume that $a_i$ is
+the image of some $f_i \in R$.
+Then we simply take $S = R[x]/(x^d + \sum f_ix^i)$.
+\end{proof}
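+
+\noindent
+For instance, take $R = \mathbf{Z}$ and $\mathfrak p = (2)$, so
+$\kappa(\mathfrak p) = \mathbf{F}_2$, and let $L = \mathbf{F}_4$,
+generated by an element $\alpha$ with minimal polynomial $X^2 + X + 1$.
+Lifting the coefficients to $\mathbf{Z}$ the proof produces
+$S = \mathbf{Z}[x]/(x^2 + x + 1)$, which is finite free of rank $2$
+over $\mathbf{Z}$. Since $x^2 + x + 1$ is irreducible modulo $2$,
+the ideal $\mathfrak q = 2S$ is prime with
+$\kappa(\mathfrak q) = \mathbf{F}_2[x]/(x^2 + x + 1) \cong \mathbf{F}_4$.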
+
+\begin{lemma}
+\label{lemma-cofinal-system-flat}
+Let $A$ be a ring. Let $\kappa = \max(|A|, \aleph_0)$. Then every flat
+$A$-algebra $B$ is the filtered colimit of its flat $A$-subalgebras
+$B' \subset B$ of cardinality $|B'| \leq \kappa$. (Observe that $B'$
+is faithfully flat over $A$ if $B$ is faithfully flat over $A$.)
+\end{lemma}
+
+\begin{proof}
+If $B$ has cardinality $\leq \kappa$ then this is true.
+Let $E \subset B$ be an $A$-subalgebra with $|E| \leq \kappa$.
+We will show that $E$ is contained in a flat $A$-subalgebra
+$B'$ with $|B'| \leq \kappa$. The lemma follows because
+(a) every finite subset of $B$ is contained in an $A$-subalgebra of
+cardinality at most $\kappa$ and (b) every pair of $A$-subalgebras
+of $B$ of cardinality at most $\kappa$ is contained in an $A$-subalgebra
+of cardinality at most $\kappa$. Details omitted.
+
+\medskip\noindent
+We will inductively construct a sequence of $A$-subalgebras
+$$
+E = E_0 \subset E_1 \subset E_2 \subset \ldots
+$$
+each having cardinality $\leq \kappa$ and we will show that
+$B' = \bigcup E_k$ is flat over $A$ to finish the proof.
+
+\medskip\noindent
+The construction is as follows. Set $E_0 = E$.
+Given $E_k$ for $k \geq 0$ we consider the set $S_k$ of
+relations between elements of $E_k$ with coefficients in $A$.
+Thus an element $s \in S_k$ is given by an integer $n \geq 1$ and
+$a_1, \ldots, a_n \in A$, and $e_1, \ldots, e_n \in E_k$
+such that $\sum a_i e_i = 0$ in $E_k$.
+The flatness of $A \to B$ implies by Lemma \ref{lemma-flat-eq}
+that for every $s = (n, a_1, \ldots, a_n, e_1, \ldots, e_n) \in S_k$
+we may choose
+$$
+(m_s, b_{s, 1}, \ldots, b_{s, m_s}, a_{s, 11}, \ldots, a_{s, nm_s})
+$$
+where $m_s \geq 0$ is an integer, $b_{s, j} \in B$, $a_{s, ij} \in A$, and
+$$
+e_i = \sum\nolimits_j a_{s, ij} b_{s, j}, \forall i,
+\quad\text{and}\quad
+0 = \sum\nolimits_i a_i a_{s, ij}, \forall j.
+$$
+Given these choices, we let $E_{k + 1} \subset B$ be the $A$-subalgebra
+generated by
+\begin{enumerate}
+\item $E_k$ and
+\item the elements $b_{s, 1}, \ldots, b_{s, m_s}$
+for every $s \in S_k$.
+\end{enumerate}
+Some set theory (omitted) shows that $E_{k + 1}$ has at most
+cardinality $\kappa$ (this uses that we inductively know
+$|E_k| \leq \kappa$ and consequently the cardinality of $S_k$ is
+also at most $\kappa$).
+
+\medskip\noindent
+To show that $B' = \bigcup E_k$ is flat over $A$ we consider
+a relation $\sum_{i = 1, \ldots, n} a_i b'_i = 0$ in $B'$
+with coefficients in $A$. Choose $k$ large enough so that
+$b'_i \in E_k$ for $i = 1, \ldots, n$. Then
+$(n, a_1, \ldots, a_n, b'_1, \ldots, b'_n) \in S_k$
+and hence we see that the relation is trivial in $E_{k + 1}$
+and a fortiori in $B'$.
+Thus $A \to B'$ is flat by Lemma \ref{lemma-flat-eq}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{The Cohen structure theorem}
+\label{section-cohen-structure-theorem}
+
+\noindent
+Here is a fundamental notion in commutative algebra.
+
+\begin{definition}
+\label{definition-complete-local-ring}
+Let $(R, \mathfrak m)$ be a local ring. We say $R$ is a
+{\it complete local ring} if the canonical map
+$$
+R \longrightarrow \lim_n R/\mathfrak m^n
+$$
+to the completion of $R$ with respect to $\mathfrak m$ is an
+isomorphism\footnote{This includes the condition
+that $\bigcap \mathfrak m^n = (0)$; in some texts this may be indicated
+by saying that $R$ is complete and separated. Warning: It can happen
+that the completion $\lim_n R/\mathfrak m^n$ of a local ring is
+non-complete, see
+Examples, Lemma \ref{examples-lemma-noncomplete-completion}.
+This does not happen when $\mathfrak m$ is finitely generated, see
+Lemma \ref{lemma-hathat-finitely-generated} in which
+case the completion is Noetherian, see
+Lemma \ref{lemma-completion-Noetherian}.}.
+\end{definition}
+
+\noindent
+Note that an Artinian local ring $R$ is a complete local ring
+because $\mathfrak m_R^n = 0$ for some $n > 0$. In this section
+we mostly focus on Noetherian complete local rings.
+
+\begin{lemma}
+\label{lemma-quotient-complete-local}
+Let $R$ be a Noetherian complete local ring.
+Any quotient of $R$ is also a Noetherian complete local ring.
+If $R \to S$ is a finite ring map, then $S$ is a product of
+Noetherian complete local rings.
+\end{lemma}
+
+\begin{proof}
+The ring $S$ is Noetherian by Lemma \ref{lemma-Noetherian-permanence}.
+As an $R$-module $S$ is complete by Lemma \ref{lemma-completion-tensor}.
+Hence $S$ is the product of the completions at its maximal ideals
+by Lemma \ref{lemma-completion-finite-extension}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complete-local-ring-Noetherian}
+Let $(R, \mathfrak m)$ be a complete local ring.
+If $\mathfrak m$ is a finitely generated ideal then
+$R$ is Noetherian.
+\end{lemma}
+
+\begin{proof}
+See Lemma \ref{lemma-completion-Noetherian}.
+\end{proof}
+
+\begin{definition}
+\label{definition-coefficient-ring}
+Let $(R, \mathfrak m)$ be a complete local ring.
+A subring $\Lambda \subset R$ is
+called a {\it coefficient ring} if the following conditions hold:
+\begin{enumerate}
+\item $\Lambda$ is a complete local ring with maximal ideal
+$\Lambda \cap \mathfrak m$,
+\item the residue field of $\Lambda$ maps isomorphically to the
+residue field of $R$, and
+\item $\Lambda \cap \mathfrak m = p\Lambda$, where $p$ is the characteristic
+of the residue field of $R$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Let us make some remarks on this definition. We split the discussion
+into the following cases:
+\begin{enumerate}
+\item The local ring $R$ contains a field. This happens if
+either $\mathbf{Q} \subset R$, or $pR = 0$ where $p$ is the
+characteristic of $R/\mathfrak m$. In this case a coefficient ring
+$\Lambda$ is a field contained in $R$ which maps isomorphically to
+$R/\mathfrak m$.
+\item The characteristic of $R/\mathfrak m$ is $p > 0$ but no
+power of $p$ is zero in $R$. In this case $\Lambda$ is a complete
+discrete valuation ring with uniformizer $p$ and residue field $R/\mathfrak m$.
+\item The characteristic of $R/\mathfrak m$ is $p > 0$, and for some
+$n > 1$ we have $p^{n - 1} \not = 0$, $p^n = 0$ in $R$. In this case
+$\Lambda$ is an Artinian local ring whose maximal ideal is
+generated by $p$ and which has residue field $R/\mathfrak m$.
+\end{enumerate}
+The complete discrete valuation rings with uniformizer $p$
+above play a special role and we baptize them as follows.
+
+\begin{definition}
+\label{definition-cohen-ring}
+A {\it Cohen ring} is a complete discrete valuation ring with
+uniformizer $p$ a prime number.
+\end{definition}
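+
+\noindent
+For example, the ring $\mathbf{Z}_p$ of $p$-adic integers is a Cohen ring
+with residue field $\mathbf{F}_p$; it is a standard fact that for a
+perfect field $k$ of characteristic $p$ the ring $W(k)$ of Witt vectors
+is a Cohen ring with residue field $k$. Correspondingly, in the three
+cases discussed above one can take $R = k[[x]]$ with coefficient ring
+$k$, $R = \mathbf{Z}_p[[x]]$ with coefficient ring $\mathbf{Z}_p$, and
+$R = (\mathbf{Z}/p^n\mathbf{Z})[[x]]$ with coefficient ring
+$\mathbf{Z}/p^n\mathbf{Z}$.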
+
+\begin{lemma}
+\label{lemma-cohen-rings-exist}
+Let $p$ be a prime number.
+Let $k$ be a field of characteristic $p$.
+There exists a Cohen ring $\Lambda$ with $\Lambda/p\Lambda \cong k$.
+\end{lemma}
+
+\begin{proof}
+First note that the $p$-adic integers $\mathbf{Z}_p$ form a Cohen ring
+for $\mathbf{F}_p$. Let $k$ be an arbitrary field of characteristic $p$.
+Let $\mathbf{Z}_p \to R$ be a flat local ring map such that
+$\mathfrak m_R = pR$ and $R/pR = k$, see
+Lemma \ref{lemma-flat-local-given-residue-field}.
+By Lemma \ref{lemma-completion-Noetherian} the completion
+$\Lambda = R^\wedge$ is Noetherian. It is a complete Noetherian local ring
+with maximal ideal $(p)$ as $\Lambda/p\Lambda = R/pR$ is a field (use
+Lemma \ref{lemma-hathat-finitely-generated}).
+Since $\mathbf{Z}_p \to R \to \Lambda$ is flat
+(by Lemma \ref{lemma-completion-flat}) we see that $p$ is a
+nonzerodivisor in $\Lambda$. Hence $\Lambda$ has dimension $\geq 1$
+(Lemma \ref{lemma-one-equation})
+and we conclude that $\Lambda$ is regular of dimension $1$, i.e.,
+a discrete valuation ring by Lemma \ref{lemma-characterize-dvr}.
+We conclude $\Lambda$ is a Cohen ring for $k$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohen-ring-formally-smooth}
+Let $p > 0$ be a prime.
+Let $\Lambda$ be a Cohen ring with residue field of characteristic $p$.
+For every $n \geq 1$ the ring map
+$$
+\mathbf{Z}/p^n\mathbf{Z} \to \Lambda/p^n\Lambda
+$$
+is formally smooth.
+\end{lemma}
+
+\begin{proof}
+If $n = 1$, this follows from
+Proposition \ref{proposition-characterize-separable-field-extensions}.
+For general $n$ we argue by induction on $n$.
+Namely, if $\mathbf{Z}/p^n\mathbf{Z} \to \Lambda/p^n\Lambda$ is
+formally smooth, then we can apply Lemma \ref{lemma-lift-formal-smoothness}
+to the ring map
+$\mathbf{Z}/p^{n + 1}\mathbf{Z} \to \Lambda/p^{n + 1}\Lambda$
+and the ideal $I = (p^n) \subset \mathbf{Z}/p^{n + 1}\mathbf{Z}$.
+\end{proof}
+
+\begin{theorem}[Cohen structure theorem]
+\label{theorem-cohen-structure-theorem}
+Let $(R, \mathfrak m)$ be a complete local ring.
+\begin{enumerate}
+\item $R$ has a coefficient ring (see
+Definition \ref{definition-coefficient-ring}),
+\item if $\mathfrak m$ is a finitely generated ideal, then
+$R$ is isomorphic to a quotient
+$$
+\Lambda[[x_1, \ldots, x_n]]/I
+$$
+where $\Lambda$ is either a field or a Cohen ring.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Let us prove that a coefficient ring exists.
+First we prove this in case the characteristic of the residue field $\kappa$
+is zero. Namely, in this case we will prove by induction
+on $n > 0$ that there exists a section
+$$
+\varphi_n : \kappa \longrightarrow R/\mathfrak m^n
+$$
+to the canonical map $R/\mathfrak m^n \to \kappa = R/\mathfrak m$.
+This is trivial for $n = 1$. If $n > 1$, let $\varphi_{n - 1}$ be given.
+The field extension $\kappa/\mathbf{Q}$ is formally smooth by
+Proposition \ref{proposition-characterize-separable-field-extensions}.
+Hence we can find the dotted arrow
+in the following diagram
+$$
+\xymatrix{
+R/\mathfrak m^{n - 1} &
+R/\mathfrak m^n \ar[l] \\
+\kappa \ar[u]^{\varphi_{n - 1}} \ar@{..>}[ru] & \mathbf{Q} \ar[l] \ar[u]
+}
+$$
+This proves the induction step. Putting these maps together
+$$
+\lim_n\ \varphi_n : \kappa \longrightarrow
+R = \lim_n\ R/\mathfrak m^n
+$$
+gives a map whose image is the desired coefficient ring.
+
+\medskip\noindent
+Next, we prove the existence of a coefficient ring in the case
+where the characteristic of the residue field $\kappa$ is $p > 0$.
+Namely, choose a Cohen ring $\Lambda$ with $\kappa = \Lambda/p\Lambda$,
+see Lemma \ref{lemma-cohen-rings-exist}. In this case we will prove by
+induction on $n > 0$ that there exists a map
+$$
+\varphi_n :
+\Lambda/p^n\Lambda
+\longrightarrow
+R/\mathfrak m^n
+$$
+whose composition with the reduction map $R/\mathfrak m^n \to \kappa$
+produces the given isomorphism $\Lambda/p\Lambda = \kappa$. This is trivial
+for $n = 1$. If $n > 1$, let $\varphi_{n - 1}$ be given.
+The ring map $\mathbf{Z}/p^n\mathbf{Z} \to \Lambda/p^n\Lambda$
+is formally smooth by Lemma \ref{lemma-cohen-ring-formally-smooth}.
+Hence we can find the dotted arrow
+in the following diagram
+$$
+\xymatrix{
+R/\mathfrak m^{n - 1} &
+R/\mathfrak m^n \ar[l] \\
+\Lambda/p^n\Lambda \ar[u]^{\varphi_{n - 1}} \ar@{..>}[ru] &
+\mathbf{Z}/p^n\mathbf{Z} \ar[l] \ar[u]
+}
+$$
+This proves the induction step. Putting these maps together
+$$
+\lim_n\ \varphi_n :
+\Lambda = \lim_n\ \Lambda/p^n\Lambda
+\longrightarrow
+R = \lim_n\ R/\mathfrak m^n
+$$
+gives a map whose image is the desired coefficient ring.
+
+\medskip\noindent
+The final statement of the theorem follows readily. Namely, if
+$y_1, \ldots, y_n$ are generators of the ideal $\mathfrak m$,
+then we can use the map $\Lambda \to R$ just constructed
+to get a map
+$$
+\Lambda[[x_1, \ldots, x_n]] \longrightarrow R,
+\quad x_i \longmapsto y_i.
+$$
+Since both sides are $(x_1, \ldots, x_n)$-adically complete
+this map is surjective by Lemma \ref{lemma-completion-generalities}
+as it is surjective modulo $(x_1, \ldots, x_n)$ by
+construction.
+\end{proof}
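+
+\noindent
+As an illustration of the theorem: a complete Noetherian local ring with
+residue field $\mathbf{C}$, such as $\mathbf{C}[[x, y]]/(y^2 - x^3)$,
+contains a copy of $\mathbf{C}$ as a coefficient field and is visibly a
+quotient of a power series ring over it. In mixed characteristic, a
+complete Noetherian local ring with residue field $\mathbf{F}_p$ in which
+$p$ is not nilpotent, for example $\mathbf{Z}_p[[x]]/(x^2 - p)$, is a
+quotient of $\mathbf{Z}_p[[x_1, \ldots, x_n]]$ since $\mathbf{Z}_p$ is
+a Cohen ring with residue field $\mathbf{F}_p$.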
+
+\begin{remark}
+\label{remark-Noetherian-complete-local-ring-universally-catenary}
+If $k$ is a field then the power series ring $k[[X_1, \ldots, X_d]]$
+is a Noetherian complete local regular ring of dimension $d$.
+If $\Lambda$ is a Cohen ring then $\Lambda[[X_1, \ldots, X_d]]$
+is a complete local Noetherian regular ring of dimension $d + 1$.
+Hence the Cohen structure theorem implies that any Noetherian
+complete local ring is a quotient of a regular local ring.
+In particular we see that a Noetherian complete local ring is
+universally catenary, see Lemma \ref{lemma-CM-ring-catenary}
+and Lemma \ref{lemma-regular-ring-CM}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-regular-complete-containing-coefficient-field}
+Let $(R, \mathfrak m)$ be a Noetherian complete local ring.
+Assume $R$ is regular.
+\begin{enumerate}
+\item If $R$ contains either $\mathbf{F}_p$ or $\mathbf{Q}$, then $R$
+is isomorphic to a power series ring over its residue field.
+\item If $k$ is a field and $k \to R$ is a ring map inducing
+an isomorphism $k \to R/\mathfrak m$, then $R$ is isomorphic
+as a $k$-algebra to a power series ring over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In case (1), by the Cohen structure theorem
+(Theorem \ref{theorem-cohen-structure-theorem})
+there exists a coefficient ring which must be a field
+mapping isomorphically to the residue field. Thus
+it suffices to prove (2). In case (2) we pick
+$f_1, \ldots, f_d \in \mathfrak m$ which
+map to a basis of $\mathfrak m/\mathfrak m^2$ and we consider
+the continuous $k$-algebra map $k[[x_1, \ldots, x_d]] \to R$
+sending $x_i$ to $f_i$. As both source and target are
+$(x_1, \ldots, x_d)$-adically complete, this map is surjective by
+Lemma \ref{lemma-completion-generalities}. On the other hand, it
+has to be injective because otherwise the dimension of
+$R$ would be $< d$ by Lemma \ref{lemma-one-equation}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complete-local-Noetherian-domain-finite-over-regular}
+Let $(R, \mathfrak m)$ be a Noetherian complete local domain.
+Then there exists a subring $R_0 \subset R$ with the following properties:
+\begin{enumerate}
+\item $R_0$ is a regular complete local ring,
+\item $R_0 \subset R$ is finite and induces an isomorphism on
+residue fields,
+\item $R_0$ is isomorphic either to $k[[X_1, \ldots, X_d]]$ where $k$
+is a field or to $\Lambda[[X_1, \ldots, X_d]]$ where $\Lambda$ is a Cohen ring.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\Lambda$ be a coefficient ring of $R$.
+Since $R$ is a domain we see that either $\Lambda$ is a field
+or $\Lambda$ is a Cohen ring.
+
+\medskip\noindent
+Case I: $\Lambda = k$ is a field. Let $d = \dim(R)$.
+Choose $x_1, \ldots, x_d \in \mathfrak m$
+which generate an ideal of definition $I \subset R$.
+(See Section \ref{section-dimension}.)
+By Lemma \ref{lemma-change-ideal-completion} we see that $R$
+is $I$-adically complete as well.
+Consider the map $R_0 = k[[X_1, \ldots, X_d]] \to R$
+which maps $X_i$ to $x_i$.
+Note that $R_0$ is complete with respect to the ideal
+$I_0 = (X_1, \ldots, X_d)$,
+and that $R/I_0R \cong R/IR$ is finite over $k = R_0/I_0$
+(because $\dim(R/I) = 0$, see Section \ref{section-dimension}.)
+Hence we conclude that $R_0 \to R$ is finite by
+Lemma \ref{lemma-finite-over-complete-ring}.
+Since $\dim(R) = \dim(R_0)$ this implies that
+$R_0 \to R$ is injective (see Lemma \ref{lemma-integral-dim-up}),
+and the lemma is proved.
+
+\medskip\noindent
+Case II: $\Lambda$ is a Cohen ring. Let $d + 1 = \dim(R)$.
+Let $p > 0$ be the characteristic of the residue field $k$.
+As $R$ is a domain we see that $p$ is a nonzerodivisor in $R$.
+Hence $\dim(R/pR) = d$, see Lemma \ref{lemma-one-equation}.
+Choose $x_1, \ldots, x_d \in R$
+which generate an ideal of definition in $R/pR$.
+Then $I = (p, x_1, \ldots, x_d)$ is an ideal of definition of $R$.
+By Lemma \ref{lemma-change-ideal-completion} we see that $R$
+is $I$-adically complete as well.
+Consider the map $R_0 = \Lambda[[X_1, \ldots, X_d]] \to R$
+which maps $X_i$ to $x_i$.
+Note that $R_0$ is complete with respect to the ideal
+$I_0 = (p, X_1, \ldots, X_d)$,
+and that $R/I_0R \cong R/IR$ is finite over $k = R_0/I_0$
+(because $\dim(R/I) = 0$, see Section \ref{section-dimension}.)
+Hence we conclude that $R_0 \to R$ is finite by
+Lemma \ref{lemma-finite-over-complete-ring}.
+Since $\dim(R) = \dim(R_0)$ this implies that
+$R_0 \to R$ is injective (see Lemma \ref{lemma-integral-dim-up}),
+and the lemma is proved.
+\end{proof}
+
+
+
+
+
+\section{Japanese rings}
+\label{section-japanese}
+
+\noindent
+In this section we begin to discuss finiteness of integral closure.
+
+\begin{definition}
+\label{definition-N}
+\begin{reference}
+\cite[Chapter 0, Definition 23.1.1]{EGA}
+\end{reference}
+Let $R$ be a domain with field of fractions $K$.
+\begin{enumerate}
+\item We say $R$ is {\it N-1} if the integral closure of $R$ in $K$
+is a finite $R$-module.
+\item We say $R$ is {\it N-2} or {\it Japanese} if for any finite
+extension $L/K$ of fields the integral closure of $R$ in $L$
+is finite over $R$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+The main interest in these notions is for Noetherian rings,
+but here is a non-Noetherian example.
+
+\begin{example}
+\label{example-Japanese-not-Noetherian}
+Let $k$ be a field. The domain $R = k[x_1, x_2, x_3, \ldots]$ is N-2,
+but not Noetherian. The reason is the following. Suppose that $R \subset L$
+and the field $L$ is a finite extension of the fraction field of $R$.
+Then there exists an integer $n$ such that $L$ comes from a finite
+extension $L_0/k(x_1, \ldots, x_n)$ by adjoining
+the (transcendental) elements $x_{n + 1}, x_{n + 2}$, etc.
+Let $S_0$ be the integral
+closure of $k[x_1, \ldots, x_n]$ in $L_0$. By
+Proposition \ref{proposition-ubiquity-nagata} below
+it is true that $S_0$ is finite over $k[x_1, \ldots, x_n]$.
+Moreover, the integral closure of $R$ in $L$ is
+$S = S_0[x_{n + 1}, x_{n + 2}, \ldots]$ (use
+Lemma \ref{lemma-polynomial-domain-normal}) and
+hence finite over $R$. The same argument works for
+$R = \mathbf{Z}[x_1, x_2, x_3, \ldots]$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-localize-N}
+Let $R$ be a domain.
+If $R$ is N-1 then so is any localization of $R$.
+Same for N-2.
+\end{lemma}
+
+\begin{proof}
+These statements hold because taking integral closure commutes
+with localization, see Lemma \ref{lemma-integral-closure-localize}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Japanese-local}
+Let $R$ be a domain. Let $f_1, \ldots, f_n \in R$ generate the
+unit ideal. If each domain $R_{f_i}$ is N-1 then so is $R$.
+Same for N-2.
+\end{lemma}
+
+\begin{proof}
+Assume $R_{f_i}$ is N-2 (or N-1).
+Let $L$ be a finite extension of the fraction field of $R$ (equal to
+the fraction field in the N-1 case). Let $S$ be the integral
+closure of $R$ in $L$. By Lemma \ref{lemma-integral-closure-localize}
+we see that $S_{f_i}$ is the integral closure of $R_{f_i}$ in $L$.
+Hence $S_{f_i}$ is finite over $R_{f_i}$ by assumption.
+Thus $S$ is finite over $R$ by Lemma \ref{lemma-cover}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finite-over-Noetherian-japanese}
+Let $R$ be a domain. Let $R \subset S$ be a quasi-finite extension of domains
+(for example finite). Assume $R$ is N-2 and Noetherian. Then $S$ is N-2.
+\end{lemma}
+
+\begin{proof}
+Let $L/K$ be the induced extension of fraction fields.
+Note that this is a finite field extension (for example by
+Lemma \ref{lemma-isolated-point-fibre} (2)
+applied to the fibre $S \otimes_R K$, and the definition of a
+quasi-finite ring map).
+Let $S'$ be the integral closure of $R$ in $S$.
+Then $S'$ is contained in the integral closure of $R$ in $L$
+which is finite over $R$ by assumption. As $R$ is Noetherian this
+implies $S'$ is finite over $R$.
+By Lemma \ref{lemma-quasi-finite-open-integral-closure}
+there exist elements $g_1, \ldots, g_n \in S'$
+such that $S'_{g_i} \cong S_{g_i}$ and such that $g_1, \ldots, g_n$
+generate the unit ideal in $S$. Hence it suffices to show that
+$S'$ is N-2 by Lemmas \ref{lemma-localize-N} and \ref{lemma-Japanese-local}.
+Thus we have reduced to the case where $S$ is finite over $R$.
+
+\medskip\noindent
+Assume $R \subset S$ with hypotheses as in the lemma and moreover
+that $S$ is finite over $R$. Let $M$ be a finite field extension
+of the fraction field of $S$. Then $M$ is also a finite field extension
+of $K$ and we conclude that the integral closure $T$ of $R$ in
+$M$ is finite over $R$. By Lemma \ref{lemma-integral-closure-transitive}
+we see that $T$ is also the integral closure of $S$ in $M$ and we win by
+Lemma \ref{lemma-integral-permanence}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Laurent-ring-N-1}
+Let $R$ be a Noetherian domain.
+If $R[z, z^{-1}]$ is N-1, then so is $R$.
+\end{lemma}
+
+\begin{proof}
+Let $R'$ be the integral closure of $R$ in its field of fractions $K$.
+Let $S'$ be the integral closure of $R[z, z^{-1}]$ in its field of fractions.
+Clearly $R' \subset S'$.
+Since $K[z, z^{-1}]$ is a normal domain we see that $S' \subset K[z, z^{-1}]$.
+Suppose that $f_1, \ldots, f_n \in S'$ generate $S'$ as an $R[z, z^{-1}]$-module.
+Say $f_i = \sum a_{ij}z^j$ (finite sum), with $a_{ij} \in K$.
+For any $x \in R'$ we can write
+$$
+x = \sum h_i f_i
+$$
+with $h_i \in R[z, z^{-1}]$. Thus we see that $R'$ is contained in the
+finite $R$-submodule $\sum Ra_{ij} \subset K$. Since $R$ is Noetherian
+we conclude that $R'$ is a finite $R$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-extension-N-2}
+Let $R$ be a Noetherian domain, and let $R \subset S$ be a
+finite extension of domains. If $S$ is N-1, then so is $R$.
+If $S$ is N-2, then so is $R$.
+\end{lemma}
+
+\begin{proof}
+Omitted. (Hint: Integral closures of $R$ in extension fields
+are contained in integral closures of $S$ in extension fields.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-normal-domain-finite-separable-extension}
+Let $R$ be a Noetherian normal domain with fraction field $K$.
+Let $L/K$ be a finite separable field extension.
+Then the integral closure of $R$ in $L$ is finite over $R$.
+\end{lemma}
+
+\begin{proof}
+Consider the trace pairing
+(Fields, Definition \ref{fields-definition-trace-pairing})
+$$
+L \times L \longrightarrow K,
+\quad (x, y) \longmapsto \langle x, y\rangle := \text{Trace}_{L/K}(xy).
+$$
+Since $L/K$ is separable this is nondegenerate
+(Fields, Lemma \ref{fields-lemma-separable-trace-pairing}).
+Moreover, if $x \in L$ is integral over $R$, then
+$\text{Trace}_{L/K}(x)$ is in $R$. This is true because the
+minimal polynomial of $x$ over $K$ has coefficients in $R$
+(Lemma \ref{lemma-minimal-polynomial-normal-domain})
+and because $\text{Trace}_{L/K}(x)$ is an
+integer multiple of one of these coefficients
+(Fields, Lemma \ref{fields-lemma-trace-and-norm-from-minimal-polynomial}).
+Pick $x_1, \ldots, x_n \in L$ which are integral over $R$
+and which form a $K$-basis of $L$. Then the integral closure
+$S \subset L$ is contained in the $R$-module
+$$
+M = \{y \in L \mid \langle x_i, y\rangle \in R, \ i = 1, \ldots, n\}
+$$
+By linear algebra we see that $M \cong R^{\oplus n}$ as an $R$-module.
+Hence $S \subset R^{\oplus n}$ is a finitely generated $R$-module
+as $R$ is Noetherian.
+\end{proof}
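+
+\noindent
+A classical special case: with $R = \mathbf{Z}$ and $L$ a number field,
+the lemma shows that the ring of integers of $L$, i.e., the integral
+closure of $\mathbf{Z}$ in $L$, is a finite $\mathbf{Z}$-module. Since
+every finite extension of $\mathbf{Q}$ is separable, this already shows
+that $\mathbf{Z}$ is N-2.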
+
+\begin{example}
+\label{example-bad-invariants}
+Lemma \ref{lemma-Noetherian-normal-domain-finite-separable-extension}
+does not work if the ring is not Noetherian.
+For example consider the action of $G = \{+1, -1\}$ on
+$A = \mathbf{C}[x_1, x_2, x_3, \ldots]$ where $-1$ acts by
+mapping $x_i$ to $-x_i$. The invariant ring $R = A^G$ is
+the $\mathbf{C}$-algebra generated by all $x_ix_j$. Hence
+$R \subset A$ is not finite. But $R$ is a normal domain
+with fraction field $K = L^G$ the $G$-invariants in the fraction field
+$L$ of $A$. And clearly $A$ is the integral closure of $R$ in
+$L$.
+\end{example}
+
+\noindent
+The following lemma can sometimes be used as a substitute for
+Lemma \ref{lemma-Noetherian-normal-domain-finite-separable-extension}
+in case of purely inseparable extensions.
+
+\begin{lemma}
+\label{lemma-Noetherian-normal-domain-insep-extension}
+Let $R$ be a Noetherian normal domain with fraction field $K$
+of characteristic $p > 0$.
+Let $a \in K$ be an element such that there exists a derivation
+$D : R \to R$ with $D(a) \not = 0$. Then the integral closure
+of $R$ in $L = K[x]/(x^p - a)$ is finite over $R$.
+\end{lemma}
+
+\begin{proof}
+After replacing $x$ by $fx$ and $a$ by $f^pa$ for some $f \in R$
+we may assume $a \in R$. Hence also $D(a) \in R$. We will show
+by induction on $i \leq p - 1$ that if
+$$
+y = a_0 + a_1x + \ldots + a_i x^i,\quad a_j \in K
+$$
+is integral over $R$, then $D(a)^i a_j \in R$. Thus the integral
+closure is contained in the finite $R$-module with basis
+$D(a)^{-p + 1}x^j$, $j = 0, \ldots, p - 1$. Since $R$ is Noetherian
+this proves the lemma.
+
+\medskip\noindent
+If $i = 0$, then $y = a_0$ is integral over $R$ if and only if $a_0 \in R$
+and the statement is true. Suppose the statement holds for some $i < p - 1$
+and suppose that
+$$
+y = a_0 + a_1x + \ldots + a_{i + 1} x^{i + 1},\quad a_j \in K
+$$
+is integral over $R$. Then
+$$
+y^p = a_0^p + a_1^p a + \ldots + a_{i + 1}^pa^{i + 1}
+$$
+is an element of $R$ (as it is in $K$ and integral over $R$). Applying
+$D$ we obtain
+$$
+(a_1^p + 2a_2^p a + \ldots + (i + 1)a_{i + 1}^p a^i)D(a)
+$$
+is in $R$. Hence it follows that
+$$
+D(a)a_1 + 2D(a) a_2 x + \ldots + (i + 1)D(a) a_{i + 1} x^i
+$$
+is integral over $R$. By induction we find $D(a)^{i + 1}a_j \in R$
+for $j = 1, \ldots, i + 1$. (Here we use that $1, \ldots, i + 1$
+are invertible.) Hence $D(a)^{i + 1}a_0$ is also in $R$: it
+is the difference of $D(a)^{i + 1}y$ and $\sum_{j > 0} D(a)^{i + 1}a_jx^j$,
+which are integral over $R$ (since $x$ is integral over $R$ as $a \in R$),
+and it lies in $K$, hence in $R$ as $R$ is normal.
+\end{proof}
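+
+\noindent
+To illustrate
+Lemma \ref{lemma-Noetherian-normal-domain-insep-extension},
+take $R = \mathbf{F}_p[t]$, $a = t$, and $D = \text{d}/\text{d}t$,
+so that $D(a) = 1 \not = 0$. Here $K = \mathbf{F}_p(t)$ and
+$L = K[x]/(x^p - t) = \mathbf{F}_p(t^{1/p})$, and the integral
+closure of $R$ in $L$ is the polynomial ring $\mathbf{F}_p[t^{1/p}]$,
+which is finite free over $R$ with basis
+$1, t^{1/p}, \ldots, t^{(p - 1)/p}$.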
+
+\begin{lemma}
+\label{lemma-domain-char-zero-N-1-2}
+A Noetherian domain whose fraction field has characteristic zero is N-1
+if and only if it is N-2 (i.e., Japanese).
+\end{lemma}
+
+\begin{proof}
+This is clear from
+Lemma \ref{lemma-Noetherian-normal-domain-finite-separable-extension}
+since every field extension in characteristic zero is separable.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-domain-char-p-N-1-2}
+Let $R$ be a Noetherian domain with fraction field $K$ of
+characteristic $p > 0$. Then $R$ is N-2 if and only if
+for every finite purely inseparable extension $L/K$ the integral
+closure of $R$ in $L$ is finite over $R$.
+\end{lemma}
+
+\begin{proof}
+Assume the integral closure of $R$ in every finite purely inseparable
+field extension of $K$ is finite.
+Let $L/K$ be any finite extension. We have to show the
+integral closure of $R$ in $L$ is finite over $R$.
+Choose a finite normal field extension $M/K$
+containing $L$. As $R$ is Noetherian it suffices to show that
+the integral closure of $R$ in $M$ is finite over $R$.
+By Fields, Lemma \ref{fields-lemma-normal-case}
+there exists a subextension $M/M_{insep}/K$
+such that $M_{insep}/K$ is purely inseparable, and $M/M_{insep}$
+is separable. By assumption the integral closure $R'$ of $R$ in
+$M_{insep}$ is finite over $R$. By
+Lemma \ref{lemma-Noetherian-normal-domain-finite-separable-extension}
+the integral
+closure $R''$ of $R'$ in $M$ is finite over $R'$. Then $R''$ is finite
+over $R$ by Lemma \ref{lemma-finite-transitive}.
+Since $R''$ is also the integral closure
+of $R$ in $M$ (see Lemma \ref{lemma-integral-closure-transitive}) we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-polynomial-ring-N-2}
+Let $R$ be a Noetherian domain.
+If $R$ is N-1 then $R[x]$ is N-1.
+If $R$ is N-2 then $R[x]$ is N-2.
+\end{lemma}
+
+\begin{proof}
+Assume $R$ is N-1. Let $R'$ be the integral closure of $R$
+which is finite over $R$. Hence also $R'[x]$ is finite over
+$R[x]$. The ring $R'[x]$ is normal (see
+Lemma \ref{lemma-polynomial-domain-normal}), hence N-1.
+This proves the first assertion.
+
+\medskip\noindent
+For the second assertion, by Lemma \ref{lemma-finite-extension-N-2}
+it suffices to show that $R'[x]$ is N-2. In other words we may
+and do assume that $R$ is a normal N-2 domain. In characteristic zero
+we are done by Lemma \ref{lemma-domain-char-zero-N-1-2}.
+In characteristic $p > 0$ we have to show that the integral
+closure of $R[x]$ in any finite purely inseparable extension
+$L/K(x)$ is finite over $R[x]$, where $K$ is the fraction field of $R$. There
+exists a finite purely inseparable field extension $L'/K$
+and $q = p^e$ such that $L \subset L'(x^{1/q})$; some details omitted.
+As $R[x]$ is Noetherian it suffices to show that the integral closure of $R[x]$
+in $L'(x^{1/q})$ is finite over $R[x]$. And this integral closure
+is equal to $R'[x^{1/q}]$ with $R \subset R' \subset L'$ the integral
+closure of $R$ in $L'$.
+Since $R$ is N-2 we see that $R'$ is finite over $R$ and hence
+$R'[x^{1/q}]$ is finite over $R[x]$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-openness-normal-locus}
+Let $R$ be a Noetherian domain.
+If there exists an $f \in R$ such that $R_f$ is normal
+then
+$$
+U = \{\mathfrak p \in \Spec(R) \mid R_{\mathfrak p} \text{ is normal}\}
+$$
+is open in $\Spec(R)$.
+\end{lemma}
+
+\begin{proof}
+It is clear that the standard open $D(f)$ is contained in $U$.
+By Serre's criterion Lemma \ref{lemma-criterion-normal} we see that
+$\mathfrak p \not \in U$ implies that for some
+$\mathfrak q \subset \mathfrak p$ we have
+either
+\begin{enumerate}
+\item Case I: $\text{depth}(R_{\mathfrak q}) < 2$
+and $\dim(R_{\mathfrak q}) \geq 2$, or
+\item Case II: $R_{\mathfrak q}$ is not regular
+and $\dim(R_{\mathfrak q}) = 1$.
+\end{enumerate}
+This in particular also means that $R_{\mathfrak q}$ is not
+normal, and hence $f \in \mathfrak q$. In case I we see that
+$\text{depth}(R_{\mathfrak q}) =
+\text{depth}(R_{\mathfrak q}/fR_{\mathfrak q}) + 1$.
+Hence such a prime $\mathfrak q$ is the same thing as an embedded
+associated prime of $R/fR$. In case II $\mathfrak q$ is an associated
+prime of $R/fR$ of height 1. Thus there is a finite set $E$
+of such primes $\mathfrak q$ (see Lemma \ref{lemma-finite-ass}) and
+$$
+\Spec(R) \setminus U
+=
+\bigcup\nolimits_{\mathfrak q \in E} V(\mathfrak q)
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-N-1}
+Let $R$ be a Noetherian domain. Then $R$ is N-1 if and only if the following
+two conditions hold
+\begin{enumerate}
+\item there exists a nonzero $f \in R$ such that $R_f$ is normal, and
+\item for every maximal ideal $\mathfrak m \subset R$
+the local ring $R_{\mathfrak m}$ is N-1.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First assume $R$ is N-1. Let $R'$ be the integral closure of $R$ in its
+field of fractions $K$. By assumption we can find $x_1, \ldots, x_n$ in $R'$
+which generate $R'$ as an $R$-module. Since $R' \subset K$ we can find
+$f_i \in R$ nonzero such that $f_i x_i \in R$. Then $R_f \cong R'_f$
+where $f = f_1 \ldots f_n$. Hence $R_f$ is normal and we have (1).
+Part (2) follows from Lemma \ref{lemma-localize-N}.
+
+\medskip\noindent
+Assume (1) and (2). Let $K$ be the fraction field of $R$.
+Suppose that $R \subset R' \subset K$ is a finite
+extension of $R$ contained in $K$. Note that $R_f = R'_f$ since
+$R_f$ is already normal. Hence by Lemma \ref{lemma-openness-normal-locus}
+the set of primes
+$\mathfrak p' \in \Spec(R')$ with $R'_{\mathfrak p'}$ non-normal
+is closed in $\Spec(R')$. Since $\Spec(R') \to \Spec(R)$
+is closed the image of this set is closed in $\Spec(R)$.
+For such a ring $R'$ denote $Z_{R'} \subset \Spec(R)$ this image.
+
+\medskip\noindent
+Pick a maximal ideal $\mathfrak m \subset R$.
+Let $R_{\mathfrak m} \subset R_{\mathfrak m}'$ be the integral
+closure of the local ring in $K$. By assumption this is
+a finite ring extension. By Lemma \ref{lemma-integral-closure-localize}
+we can find finitely
+many elements $x_1, \ldots, x_n \in K$ integral over $R$ such that
+$R_{\mathfrak m}'$ is generated by $x_1, \ldots, x_n$ over $R_{\mathfrak m}$.
+Let $R' = R[x_1, \ldots, x_n] \subset K$. With this choice it is clear
+that $\mathfrak m \not \in Z_{R'}$.
+
+\medskip\noindent
+As $\Spec(R)$ is quasi-compact, the above shows that we can
+find a finite collection $R \subset R'_i \subset K$ such that
+$\bigcap Z_{R'_i} = \emptyset$. Let $R'$ be the subring of $K$
+generated by all of these. It is finite over $R$. Also $Z_{R'} = \emptyset$.
+Namely, every prime $\mathfrak p'$ lies over a prime $\mathfrak p'_i$
+such that $(R'_i)_{\mathfrak p'_i}$ is normal. This implies
+that $R'_{\mathfrak p'} = (R'_i)_{\mathfrak p'_i}$ is normal too.
+Hence $R'$ is normal, in other words
+$R'$ is the integral closure of $R$ in $K$.
+\end{proof}
+
+\begin{lemma}[Tate]
+\label{lemma-tate-japanese}
+\begin{reference}
+\cite[Theorem 23.1.3]{EGA}
+\end{reference}
+Let $R$ be a ring.
+Let $x \in R$.
+Assume
+\begin{enumerate}
+\item $R$ is a normal Noetherian domain,
+\item $R/xR$ is a domain and N-2,
+\item $R \cong \lim_n R/x^nR$ is complete with respect to $x$.
+\end{enumerate}
+Then $R$ is N-2.
+\end{lemma}
+
+\begin{proof}
+We may assume $x \not = 0$ since otherwise the lemma is trivial.
+Let $K$ be the fraction field of $R$. If the characteristic of $K$
+is zero the lemma follows from (1), see
+Lemma \ref{lemma-domain-char-zero-N-1-2}. Hence we may assume
+that the characteristic of $K$ is $p > 0$, and we may apply
+Lemma \ref{lemma-domain-char-p-N-1-2}. Thus given $L/K$
+a finite purely inseparable field extension we have to show
+that the integral closure $S$ of $R$ in $L$ is finite over $R$.
+
+\medskip\noindent
+Let $q$ be a power of $p$ such that $L^q \subset K$.
+By enlarging $L$ if necessary we may assume there exists
+an element $y \in L$ such that $y^q = x$. Since $R \to S$
+induces a homeomorphism of spectra (see Lemma \ref{lemma-p-ring-map})
+there is a unique prime ideal $\mathfrak q \subset S$ lying
+over the prime ideal $\mathfrak p = xR$. It is clear that
+$$
+\mathfrak q = \{f \in S \mid f^q \in \mathfrak p\} = yS
+$$
+since $y^q = x$. Observe that $R_{\mathfrak p}$ is a discrete
+valuation ring by Lemma \ref{lemma-characterize-dvr}. Then
+$S_{\mathfrak q}$ is Noetherian by Krull-Akizuki
+(Lemma \ref{lemma-krull-akizuki}). Whereupon we conclude
+$S_{\mathfrak q}$ is a discrete valuation ring by
+Lemma \ref{lemma-characterize-dvr} once again.
+By Lemma \ref{lemma-finite-extension-residue-fields-dimension-1} we
+see that $\kappa(\mathfrak q)/\kappa(\mathfrak p)$ is
+a finite field extension. Hence the integral closure
+$S' \subset \kappa(\mathfrak q)$ of $R/xR$ is finite over
+$R/xR$ by assumption (2). Since $S/yS \subset S'$ this implies
+that $S/yS$ is finite over $R$. Note that $S/y^nS$ has a finite
+filtration whose subquotients are the modules
+$y^iS/y^{i + 1}S \cong S/yS$. Hence we see that each $S/y^nS$
+is finite over $R$. In particular $S/xS$ is finite over $R$.
+Also, it is clear that $\bigcap x^nS = (0)$ since an element
+in the intersection has $q$th power contained in $\bigcap x^nR = (0)$
+(Lemma \ref{lemma-intersect-powers-ideal-module-zero}).
+Thus we may apply Lemma \ref{lemma-finite-over-complete-ring} to conclude
+that $S$ is finite over $R$, and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-power-series-over-N-2}
+Let $R$ be a ring.
+If $R$ is Noetherian, a domain, and N-2, then so is $R[[x]]$.
+\end{lemma}
+
+\begin{proof}
+Observe that $R[[x]]$ is Noetherian by
+Lemma \ref{lemma-Noetherian-power-series}.
+Let $R' \supset R$ be the integral closure of $R$ in its fraction
+field. Because $R$ is N-2 this is finite over $R$. Hence $R'[[x]]$
+is finite over $R[[x]]$. By
+Lemma \ref{lemma-power-series-over-Noetherian-normal-domain}
+we see that $R'[[x]]$ is a normal domain.
+Apply Lemma \ref{lemma-tate-japanese} to the
+element $x \in R'[[x]]$ to see that $R'[[x]]$ is N-2. Then
+Lemma \ref{lemma-finite-extension-N-2} shows that $R[[x]]$ is N-2.
+\end{proof}
+
+
+
+
+
+\section{Nagata rings}
+\label{section-nagata}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-nagata}
+Let $R$ be a ring.
+\begin{enumerate}
+\item We say $R$ is {\it universally Japanese} if for any finite
+type ring map $R \to S$ with $S$ a domain we have that $S$ is N-2
+(i.e., Japanese).
+\item We say that $R$ is a {\it Nagata ring} if $R$ is Noetherian and
+for every prime ideal $\mathfrak p$ the ring $R/\mathfrak p$ is N-2.
+\end{enumerate}
+\end{definition}
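+
+\noindent
+For example, a field is a Nagata ring, and so is any Dedekind domain
+$R$ whose fraction field has characteristic zero, such as $\mathbf{Z}$:
+for a nonzero prime $\mathfrak p$ the quotient $R/\mathfrak p$ is a
+field, hence N-2, while $R = R/(0)$ is normal, hence N-1, hence N-2 by
+Lemma \ref{lemma-domain-char-zero-N-1-2}.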
+
+\noindent
+It is clear that a Noetherian universally Japanese ring is a Nagata ring.
+It is our goal to show that a Nagata ring is universally Japanese. This is
+not obvious at all, and requires some work. But first, here is a useful
+lemma.
+
+\begin{lemma}
+\label{lemma-nagata-in-reduced-finite-type-finite-integral-closure}
+Let $R$ be a Nagata ring.
+Let $R \to S$ be essentially of finite type with $S$ reduced.
+Then the integral closure of $R$ in $S$ is finite over $R$.
+\end{lemma}
+
+\begin{proof}
+As $S$ is essentially of finite type over $R$ it is Noetherian and
+has finitely many minimal primes $\mathfrak q_1, \ldots, \mathfrak q_m$,
+see Lemma \ref{lemma-Noetherian-irreducible-components}.
+Since $S$ is reduced we have $S \subset \prod S_{\mathfrak q_i}$
+and each $S_{\mathfrak q_i} = K_i$ is a field, see
+Lemmas \ref{lemma-total-ring-fractions-no-embedded-points}
+and \ref{lemma-minimal-prime-reduced-ring}.
+It suffices to show that the integral closure
+$A_i'$ of $R$ in each $K_i$ is finite over $R$.
+This is true because $R$ is Noetherian and the integral closure
+$A$ of $R$ in $S$ satisfies $A \subset \prod A_i'$.
+Let $\mathfrak p_i \subset R$ be the prime of $R$
+corresponding to $\mathfrak q_i$.
+As $S$ is essentially of finite type over $R$ we see that
+$K_i = S_{\mathfrak q_i} = \kappa(\mathfrak q_i)$ is a finitely
+generated field extension of $\kappa(\mathfrak p_i)$. Hence the algebraic
+closure $L_i$ of $\kappa(\mathfrak p_i)$ in $K_i$
+is finite over $\kappa(\mathfrak p_i)$, see
+Fields, Lemma \ref{fields-lemma-algebraic-closure-in-finitely-generated}.
+It is clear that $A_i'$ is the integral closure of $R/\mathfrak p_i$
+in $L_i$, and hence we win by definition of a Nagata ring.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-universally-japanese}
+Let $R$ be a ring.
+To check that $R$ is universally Japanese it suffices to show:
+If $R \to S$ is of finite type with $S$ a domain, then $S$ is N-1.
+\end{lemma}
+
+\begin{proof}
+Namely, assume the condition of the lemma.
+Let $R \to S$ be a finite type ring map with $S$ a domain.
+Let $L$ be a finite extension of the fraction field of $S$.
+Then there exists a finite ring extension $S \subset S' \subset L$
+such that $L$ is the fraction field of $S'$.
+By assumption $S'$ is N-1, and hence the integral
+closure $S''$ of $S'$ in $L$ is finite over $S'$. Thus $S''$ is finite
+over $S$ (Lemma \ref{lemma-finite-transitive})
+and $S''$ is the integral closure of $S$ in $L$
+(Lemma \ref{lemma-integral-closure-transitive}).
+We conclude that $R$ is universally Japanese.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-japanese}
+If $R$ is universally Japanese then any algebra essentially of finite type
+over $R$ is universally Japanese.
+\end{lemma}
+
+\begin{proof}
+The case of an algebra of finite type over $R$ is immediate from
+the definition. The general case follows on applying
+Lemma \ref{lemma-localize-N}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finite-over-nagata}
+Let $R$ be a Nagata ring.
+If $R \to S$ is a quasi-finite ring map (for example finite)
+then $S$ is a Nagata ring also.
+\end{lemma}
+
+\begin{proof}
+First note that $S$ is Noetherian as $R$ is Noetherian and a quasi-finite
+ring map is of finite type.
+Let $\mathfrak q \subset S$ be a prime ideal, and set
+$\mathfrak p = R \cap \mathfrak q$. Then
+$R/\mathfrak p \subset S/\mathfrak q$ is quasi-finite and
+hence we conclude that $S/\mathfrak q$ is N-2 by
+Lemma \ref{lemma-quasi-finite-over-Noetherian-japanese}
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nagata-localize}
+A localization of a Nagata ring is a Nagata ring.
+\end{lemma}
+
+\begin{proof}
+Clear from Lemma \ref{lemma-localize-N}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nagata-local}
+Let $R$ be a ring. Let $f_1, \ldots, f_n \in R$ generate the
+unit ideal.
+\begin{enumerate}
+\item If each $R_{f_i}$ is universally Japanese then so is $R$.
+\item If each $R_{f_i}$ is Nagata then so is $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\varphi : R \to S$ be a finite type ring map with $S$ a domain.
+Then $\varphi(f_1), \ldots, \varphi(f_n)$ generate the unit ideal
+in $S$. Hence if each $S_{f_i} = S_{\varphi(f_i)}$ is N-1 then so is
+$S$, see Lemma \ref{lemma-Japanese-local}. This proves (1).
+
+\medskip\noindent
+If each $R_{f_i}$ is Nagata, then each $R_{f_i}$ is Noetherian and
+hence $R$ is Noetherian, see Lemma \ref{lemma-cover}. And if
+$\mathfrak p \subset R$ is a prime, then we see each
+$R_{f_i}/\mathfrak pR_{f_i} = (R/\mathfrak p)_{f_i}$ is N-2
+and hence we conclude $R/\mathfrak p$ is N-2 by
+Lemma \ref{lemma-Japanese-local}. This proves (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-complete-local-Nagata}
+A Noetherian complete local ring is a Nagata ring.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a complete local Noetherian ring.
+Let $\mathfrak p \subset R$ be a prime.
+Then $R/\mathfrak p$ is also a complete local Noetherian ring,
+see Lemma \ref{lemma-quotient-complete-local}.
+Hence it suffices to show that a Noetherian complete local
+domain $R$ is N-2. By
+Lemmas \ref{lemma-quasi-finite-over-Noetherian-japanese}
+and \ref{lemma-complete-local-Noetherian-domain-finite-over-regular}
+we reduce to the case $R = k[[X_1, \ldots, X_d]]$ where $k$ is a field or
+$R = \Lambda[[X_1, \ldots, X_d]]$ where $\Lambda$ is a Cohen ring.
+
+\medskip\noindent
+In the case $k[[X_1, \ldots, X_d]]$ we reduce to the statement that a
+field is N-2 by Lemma \ref{lemma-power-series-over-N-2}. This is clear.
+In the case $\Lambda[[X_1, \ldots, X_d]]$ we reduce to the statement
+that a Cohen ring $\Lambda$ is N-2. Applying Lemma \ref{lemma-tate-japanese}
+once more with $x = p \in \Lambda$ we reduce yet again to the case
+of a field. Thus we win.
+\end{proof}
+
+\begin{definition}
+\label{definition-analytically-unramified}
+Let $(R, \mathfrak m)$ be a Noetherian local ring.
+We say $R$ is {\it analytically unramified} if its completion
+$R^\wedge = \lim_n R/\mathfrak m^n$ is reduced.
+A prime ideal $\mathfrak p \subset R$ is said to be
+{\it analytically unramified} if $R/\mathfrak p$ is analytically
+unramified.
+\end{definition}
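+
+\noindent
+For example, the local ring $R = k[t]_{(t)}$ of the affine line over a
+field $k$ at the origin is analytically unramified, since its completion
+is the power series ring $k[[t]]$, which is a domain. Noetherian local
+domains which fail to be analytically unramified exist, but they are
+more delicate to construct.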
+
+\noindent
+At this point we know
+the following are true for any Noetherian local ring $R$:
+The map $R \to R^\wedge$ is a faithfully flat local ring homomorphism
+(Lemma \ref{lemma-completion-faithfully-flat}).
+The completion $R^\wedge$ is Noetherian
+(Lemma \ref{lemma-completion-Noetherian})
+and complete (Lemma \ref{lemma-completion-complete}).
+Hence the completion $R^\wedge$ is a Nagata ring
+(Lemma \ref{lemma-Noetherian-complete-local-Nagata}).
+Moreover, we have seen in Section \ref{section-cohen-structure-theorem}
+that $R^\wedge$ is
+a quotient of a regular local ring
+(Theorem \ref{theorem-cohen-structure-theorem}), and hence
+universally catenary
+(Remark \ref{remark-Noetherian-complete-local-ring-universally-catenary}).
+
+\begin{lemma}
+\label{lemma-analytically-unramified-easy}
+Let $(R, \mathfrak m)$ be a Noetherian local ring.
+\begin{enumerate}
+\item If $R$ is analytically unramified, then $R$ is reduced.
+\item If $R$ is analytically unramified, then each minimal prime of
+$R$ is analytically unramified.
+\item If $R$ is reduced with minimal primes
+$\mathfrak q_1, \ldots, \mathfrak q_t$, and each $\mathfrak q_i$
+is analytically unramified, then $R$ is analytically unramified.
+\item If $R$ is analytically unramified, then the integral closure
+of $R$ in its total ring of fractions $Q(R)$ is finite over $R$.
+\item If $R$ is a domain and analytically unramified, then $R$ is N-1.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In this proof we will use the remarks immediately following
+Definition \ref{definition-analytically-unramified}.
+As $R \to R^\wedge$ is a faithfully flat local ring homomorphism
+it is injective and (1) follows.
+
+\medskip\noindent
+Let $\mathfrak q$ be a minimal prime of $R$, and assume $R$ is
+analytically unramified.
+Then $\mathfrak q$ is an associated
+prime of $R$ (see
+Proposition \ref{proposition-minimal-primes-associated-primes}).
+Hence there exists an $f \in R$
+such that $\{x \in R \mid fx = 0\} = \mathfrak q$.
+Note that $(R/\mathfrak q)^\wedge = R^\wedge/\mathfrak q^\wedge$,
+and that $\{x \in R^\wedge \mid fx = 0\} = \mathfrak q^\wedge$,
+because completion is exact (Lemma \ref{lemma-completion-flat}).
+If $x \in R^\wedge$ is such
+that $x^2 \in \mathfrak q^\wedge$, then $fx^2 = 0$ hence
+$(fx)^2 = 0$ hence $fx = 0$ hence $x \in \mathfrak q^\wedge$.
+Thus $\mathfrak q$ is analytically unramified and (2) holds.
+
+\medskip\noindent
+Assume $R$ is reduced with minimal primes
+$\mathfrak q_1, \ldots, \mathfrak q_t$, and each $\mathfrak q_i$
+is analytically unramified. Then
+$R \to R/\mathfrak q_1 \times \ldots \times R/\mathfrak q_t$ is
+injective. Since completion is exact (see Lemma \ref{lemma-completion-flat})
+we see that
+$R^\wedge \subset (R/\mathfrak q_1)^\wedge \times \ldots \times
+(R/\mathfrak q_t)^\wedge$. Hence (3) is clear.
+
+\medskip\noindent
+Assume $R$ is analytically unramified.
+Let $\mathfrak p_1, \ldots, \mathfrak p_s$ be the minimal primes
+of $R^\wedge$. Then we see that
+$$
+Q(R^\wedge) =
+R^\wedge_{\mathfrak p_1} \times \ldots \times R^\wedge_{\mathfrak p_s}
+$$
+with each $R^\wedge_{\mathfrak p_i}$ a field
+as $R^\wedge$ is reduced (see
+Lemma \ref{lemma-total-ring-fractions-no-embedded-points}).
+Hence the integral closure $S$ of $R^\wedge$
+in $Q(R^\wedge)$ is equal to $S = S_1 \times \ldots \times S_s$ with
+$S_i$ the integral closure of $R^\wedge/\mathfrak p_i$ in its fraction
+field. In particular $S$ is finite over $R^\wedge$.
+Denote $R'$ the integral closure of $R$ in $Q(R)$.
+As $R \to R^\wedge$ is flat we see that
+$R' \otimes_R R^\wedge \subset Q(R) \otimes_R R^\wedge \subset Q(R^\wedge)$.
+Moreover $R' \otimes_R R^\wedge$ is integral over $R^\wedge$
+(Lemma \ref{lemma-base-change-integral}).
+Hence $R' \otimes_R R^\wedge \subset S$ is a $R^\wedge$-submodule.
+As $R^\wedge$ is Noetherian it is a finite $R^\wedge$-module.
+Thus we may find $f_1, \ldots, f_n \in R'$ such that
+$R' \otimes_R R^\wedge$ is generated by the elements $f_i \otimes 1$
+as a $R^\wedge$-module.
+By faithful flatness we see that $R'$ is generated by $f_1, \ldots, f_n$
+as an $R$-module. This proves (4).
+
+\medskip\noindent
+Part (5) is a special case of part (4).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-codimension-1-analytically-unramified}
+Let $R$ be a Noetherian local ring.
+Let $\mathfrak p \subset R$ be a prime.
+Assume
+\begin{enumerate}
+\item $R_{\mathfrak p}$ is a discrete valuation ring, and
+\item $\mathfrak p$ is analytically unramified.
+\end{enumerate}
+Then for any associated prime $\mathfrak q$ of $R^\wedge/\mathfrak pR^\wedge$
+the local ring $(R^\wedge)_{\mathfrak q}$ is a discrete valuation ring.
+\end{lemma}
+
+\begin{proof}
+Assumption (2) says that $R^\wedge/\mathfrak pR^\wedge$ is a reduced ring.
+Hence an associated prime $\mathfrak q \subset R^\wedge$
+of $R^\wedge/\mathfrak pR^\wedge$
+is the same thing as a minimal prime over $\mathfrak pR^\wedge$.
+In particular we see that the maximal ideal of $(R^\wedge)_{\mathfrak q}$
+is $\mathfrak p(R^\wedge)_{\mathfrak q}$.
+Choose $x \in R$ such that $xR_{\mathfrak p} = \mathfrak pR_{\mathfrak p}$.
+By the above we see that $x \in (R^\wedge)_{\mathfrak q}$ generates
+the maximal ideal. As $R \to R^\wedge$ is faithfully flat we see that
+$x$ is a nonzerodivisor in $(R^\wedge)_{\mathfrak q}$.
+Hence we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-criterion-analytically-unramified}
+Let $(R, \mathfrak m)$ be a Noetherian local domain.
+Let $x \in \mathfrak m$. Assume
+\begin{enumerate}
+\item $x \not = 0$,
+\item $R/xR$ has no embedded primes, and
+\item for each associated prime $\mathfrak p \subset R$
+of $R/xR$ we have
+\begin{enumerate}
+\item the local ring $R_{\mathfrak p}$ is regular, and
+\item $\mathfrak p$ is analytically unramified.
+\end{enumerate}
+\end{enumerate}
+Then $R$ is analytically unramified.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p_1, \ldots, \mathfrak p_t$ be the associated primes
+of the $R$-module $R/xR$. Since $R/xR$ has no embedded primes we
+see that each $\mathfrak p_i$ has height $1$, and is a minimal
+prime over $(x)$.
+For each $i$, let $\mathfrak q_{i1}, \ldots, \mathfrak q_{is_i}$
+be the associated primes of the $R^\wedge$-module
+$R^\wedge/\mathfrak p_iR^\wedge$.
+By Lemma \ref{lemma-codimension-1-analytically-unramified}
+we see that $(R^\wedge)_{\mathfrak q_{ij}}$ is regular.
+By Lemma \ref{lemma-bourbaki} we see that
+$$
+\text{Ass}_{R^\wedge}(R^\wedge/xR^\wedge)
+=
+\bigcup\nolimits_{\mathfrak p \in \text{Ass}_R(R/xR)}
+\text{Ass}_{R^\wedge}(R^\wedge/\mathfrak pR^\wedge)
+=
+\{\mathfrak q_{ij}\}.
+$$
+Let $y \in R^\wedge$ with $y^2 = 0$.
+As $(R^\wedge)_{\mathfrak q_{ij}}$ is regular, and hence a domain
+(Lemma \ref{lemma-regular-domain})
+we see that $y$ maps to zero in $(R^\wedge)_{\mathfrak q_{ij}}$.
+Hence $y$ maps to zero in $R^\wedge/xR^\wedge$ by
+Lemma \ref{lemma-zero-at-ass-zero}.
+Hence $y = xy'$. Since $x$ is a nonzerodivisor (as $R \to R^\wedge$ is flat)
+we see that $(y')^2 = 0$. Hence we conclude that
+$y \in \bigcap x^nR^\wedge = (0)$
+(Lemma \ref{lemma-intersect-powers-ideal-module-zero}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-nagata-domain-analytically-unramified}
+Let $(R, \mathfrak m)$ be a local ring.
+If $R$ is Noetherian, a domain, and Nagata, then $R$ is
+analytically unramified.
+\end{lemma}
+
+\begin{proof}
+By induction on $\dim(R)$.
+The case $\dim(R) = 0$ is trivial. Hence we assume $\dim(R) = d$ and that
+the lemma holds for all Noetherian Nagata domains of dimension $< d$.
+
+\medskip\noindent
+Let $R \subset S$ be the integral closure
+of $R$ in the field of fractions of $R$. By assumption $S$ is a finite
+$R$-module. By Lemma \ref{lemma-quasi-finite-over-nagata} we see that
+$S$ is Nagata. By Lemma \ref{lemma-integral-sub-dim-equal} we see
+$\dim(R) = \dim(S)$.
+Let $\mathfrak m_1, \ldots, \mathfrak m_t$ be the maximal
+ideals of $S$. Each of these lies over the maximal ideal $\mathfrak m$
+of $R$. Moreover
+$$
+(\mathfrak m_1 \cap \ldots \cap \mathfrak m_t)^n \subset \mathfrak mS
+$$
+for sufficiently large $n$ as $S/\mathfrak mS$ is Artinian.
+By Lemma \ref{lemma-completion-flat} $R^\wedge \to S^\wedge$
+is an injective map, and by the Chinese Remainder
+Lemma \ref{lemma-chinese-remainder} combined with
+Lemma \ref{lemma-change-ideal-completion} we have
+$S^\wedge = \prod S^\wedge_i$ where $S^\wedge_i$
+is the completion of $S$ with respect to the maximal ideal $\mathfrak m_i$.
+Hence it suffices to show that $S_{\mathfrak m_i}$ is analytically unramified.
+In other words, we have reduced to the case where $R$ is a Noetherian
+normal Nagata domain.
+
+\medskip\noindent
+Assume $R$ is a Noetherian, normal, local Nagata domain.
+Pick a nonzero $x \in \mathfrak m$.
+We are going to apply Lemma \ref{lemma-criterion-analytically-unramified}.
+We have to check properties (1), (2), (3)(a) and (3)(b).
+Property (1) is clear.
+We have that $R/xR$ has no embedded primes by
+Lemma \ref{lemma-normal-domain-intersection-localizations-height-1}.
+Thus property (2) holds. The same lemma also tells us each associated
+prime $\mathfrak p$ of $R/xR$ has height $1$.
+Hence $R_{\mathfrak p}$ is a $1$-dimensional normal domain
+hence regular (Lemma \ref{lemma-characterize-dvr}). Thus (3)(a) holds.
+Finally (3)(b) holds by induction hypothesis, since
+$R/\mathfrak p$ is Nagata (by Lemma \ref{lemma-quasi-finite-over-nagata}
+or directly from the definition).
+Thus we conclude $R$ is analytically unramified.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-nagata-and-analytically-unramified}
+Let $(R, \mathfrak m)$ be a Noetherian local ring. The following
+are equivalent
+\begin{enumerate}
+\item $R$ is Nagata,
+\item for $R \to S$ finite with $S$ a domain and $\mathfrak m' \subset S$
+maximal the local ring $S_{\mathfrak m'}$ is analytically unramified,
+\item for every finite local homomorphism
+$(R, \mathfrak m) \to (S, \mathfrak m')$ with $S$ a domain,
+the ring $S$ is analytically unramified.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $R$ is Nagata and let $R \to S$ and $\mathfrak m' \subset S$
+be as in (2). Then $S$ is Nagata by Lemma \ref{lemma-quasi-finite-over-nagata}.
+Hence the local ring $S_{\mathfrak m'}$ is Nagata
+(Lemma \ref{lemma-nagata-localize}). Thus it is analytically
+unramified by Lemma \ref{lemma-local-nagata-domain-analytically-unramified}.
+It is clear that (2) implies (3).
+
+\medskip\noindent
+Assume (3) holds. Let $\mathfrak p \subset R$ be a prime ideal and
+let $L/\kappa(\mathfrak p)$ be a finite extension of fields.
+To prove (1) we have to show that the integral closure of $R/\mathfrak p$
+in $L$ is finite over $R/\mathfrak p$. Choose $x_1, \ldots, x_n \in L$
+which generate $L$ over $\kappa(\mathfrak p)$. For each $i$ let
+$P_i(T) = T^{d_i} + a_{i, 1} T^{d_i - 1} + \ldots + a_{i, d_i}$
+be the minimal polynomial for $x_i$ over $\kappa(\mathfrak p)$.
+After replacing $x_i$ by $f_i x_i$ for a suitable
+$f_i \in R$, $f_i \not \in \mathfrak p$ we may assume
+$a_{i, j} \in R/\mathfrak p$. In fact, after further multiplying
+by elements of $\mathfrak m$, we may assume
+$a_{i, j} \in \mathfrak m/\mathfrak p \subset R/\mathfrak p$ for all $i, j$.
+Having done this let $S = R/\mathfrak p[x_1, \ldots, x_n] \subset L$.
+Then $S$ is finite over $R$, a domain, and $S/\mathfrak m S$ is a quotient
+of $R/\mathfrak m[T_1, \ldots, T_n]/(T_1^{d_1}, \ldots, T_n^{d_n})$.
+Hence $S$ is local. By (3) $S$ is analytically unramified and by
+Lemma \ref{lemma-analytically-unramified-easy}
+we find that its integral closure $S'$ in $L$ is finite over $S$.
+Since $S'$ is also the integral closure of $R/\mathfrak p$ in
+$L$ we win.
+\end{proof}
+
+\noindent
+The following proposition says in particular that an algebra of finite
+type over a Nagata ring is a Nagata ring.
+
+\begin{proposition}[Nagata]
+\label{proposition-nagata-universally-japanese}
+Let $R$ be a ring. The following are equivalent:
+\begin{enumerate}
+\item $R$ is a Nagata ring,
+\item any finite type $R$-algebra is Nagata, and
+\item $R$ is universally Japanese and Noetherian.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+It is clear that a Noetherian universally Japanese ring is universally
+Nagata (i.e., condition (2) holds). Let $R$ be a Nagata ring.
+We will show that any finitely generated $R$-algebra $S$ is Nagata.
+This will prove the proposition.
+
+\medskip\noindent
+Step 1. There exists a sequence of ring maps
+$R = R_0 \to R_1 \to R_2 \to \ldots \to R_n = S$ such that
+each $R_i \to R_{i + 1}$ is generated by a single element.
+Hence by induction it suffices to prove $S$ is Nagata if
+$S \cong R[x]/I$.
+
+\medskip\noindent
+Step 2. Let $\mathfrak q \subset S$ be a prime of $S$, and let
+$\mathfrak p \subset R$ be the corresponding prime of $R$.
+We have to show that $S/\mathfrak q$ is N-2. Hence we have
+reduced to proving the following:
+(*) Given a Nagata domain $R$ and a monogenic extension $R \subset S$
+of domains then $S$ is N-2.
+
+\medskip\noindent
+Step 3. Let $R$ be a Nagata domain and $R \subset S$ a monogenic
+extension of domains. Let $R \subset R'$ be the integral closure
+of $R$ in its fraction field. Let $S'$ be the subring of the fraction field of
+$S$ generated by $R'$ and $S$. As $R'$ is finite over $R$
+(by the Nagata property) also $S'$ is finite over $S$.
+Since $S$ is Noetherian it suffices to prove that $S'$
+is N-2 (Lemma \ref{lemma-finite-extension-N-2}).
+Hence we have reduced to proving the following:
+(**) Given a normal Nagata domain $R$ and a
+monogenic extension $R \subset S$ of domains then $S$ is N-2.
+
+\medskip\noindent
+Step 4: Let $R$ be a normal Nagata domain and
+let $R \subset S$ be a monogenic extension of domains.
+Suppose the induced extension of fraction fields of $R$ and $S$
+is purely transcendental. In this case $S = R[x]$. By
+Lemma \ref{lemma-polynomial-ring-N-2} we see that $S$ is N-2.
+Hence we have reduced to proving the following:
+(**) Given a normal Nagata domain $R$ and a
+monogenic extension $R \subset S$ of domains
+inducing a finite extension of fraction fields
+then $S$ is N-2.
+
+\medskip\noindent
+Step 5. Let $R$ be a normal Nagata domain and
+let $R \subset S$ be a monogenic extension of domains
+inducing a finite extension of fraction fields $L/K$.
+Choose an element $x \in S$
+which generates $S$ as an $R$-algebra. Let $M/L$
+be a finite extension of fields.
+Let $R'$ be the integral closure of $R$ in $M$.
+Then the integral closure $S'$ of $S$ in $M$ is equal to the integral
+closure of $R'[x]$ in $M$.
+Also the fraction field of $R'$ is $M$ and $R \subset R'$
+is finite (by the Nagata property of $R$).
+This implies that $R'$ is a Nagata ring
+(Lemma \ref{lemma-quasi-finite-over-nagata}).
+To show that $S'$ is finite over $S$ is the same as showing that
+$S'$ is finite over $R'[x]$. Replace $R$ by $R'$ and $S$ by $R'[x]$
+to reduce to the following statement:
+(***) Given a normal Nagata domain $R$ with fraction field $K$,
+and $x \in K$, the ring $S \subset K$ generated by $R$ and $x$
+is N-1.
+
+\medskip\noindent
+Step 6. Let $R$ be a normal Nagata domain with fraction field $K$.
+Let $x = b/a \in K$. We have to show that the ring $S \subset K$
+generated by $R$ and $x$ is N-1. Note that $S_a \cong R_a$ is normal.
+Hence by Lemma \ref{lemma-characterize-N-1} it suffices to show that
+$S_{\mathfrak m}$ is N-1 for every maximal ideal $\mathfrak m$ of $S$.
+
+\medskip\noindent
+With assumptions as in the preceding paragraph, pick such a maximal
+ideal and set $\mathfrak n = R \cap \mathfrak m$. The residue field
+extension $\kappa(\mathfrak m)/\kappa(\mathfrak n)$ is finite
+(Theorem \ref{theorem-nullstellensatz}) and generated by the image of $x$.
Hence there exists a monic polynomial
$f(X) = X^d + \sum_{i = 1, \ldots, d} a_i X^{d - i}$ with
$a_i \in R$ such that $f(x) \in \mathfrak m$. Let $K''/K$ be a finite extension
+of fields such that $f(X)$ splits completely in $K''[X]$.
+Let $R'$ be the integral closure of $R$ in $K''$.
+Let $S' \subset K''$ be the subring generated by $R'$ and $x$.
+As $R$ is Nagata we see $R'$ is finite over $R$ and Nagata
+(Lemma \ref{lemma-quasi-finite-over-nagata}).
+Moreover, $S'$ is finite over $S$. If for every maximal ideal
+$\mathfrak m'$ of $S'$ the local ring $S'_{\mathfrak m'}$ is
+N-1, then $S'_{\mathfrak m}$ is N-1 by
+Lemma \ref{lemma-characterize-N-1}, which in turn
+implies that $S_{\mathfrak m}$ is N-1 by
+Lemma \ref{lemma-finite-extension-N-2}.
+After replacing $R$ by $R'$ and $S$ by $S'$, and $\mathfrak m$ by
+any of the maximal ideals $\mathfrak m'$ lying over $\mathfrak m$
we reach the situation where the polynomial $f$ above splits completely:
+$f(X) = \prod_{i = 1, \ldots, d} (X - a_i)$ with $a_i \in R$.
+Since $f(x) \in \mathfrak m$ we see that $x - a_i \in \mathfrak m$
+for some $i$. Finally, after replacing $x$ by $x - a_i$ we may assume
+that $x \in \mathfrak m$.
+
+\medskip\noindent
+To recapitulate: $R$ is a normal Nagata domain with fraction field $K$,
+$x \in K$ and $S$ is the subring of $K$ generated by $x$ and $R$,
+finally $\mathfrak m \subset S$ is a maximal ideal with $x \in \mathfrak m$.
+We have to show $S_{\mathfrak m}$ is N-1.
+
+\medskip\noindent
+We will show that Lemma \ref{lemma-criterion-analytically-unramified}
+applies to the local ring
+$S_{\mathfrak m}$ and the element $x$. This will imply that
+$S_{\mathfrak m}$ is analytically unramified, whereupon we
+see that it is N-1 by Lemma \ref{lemma-analytically-unramified-easy}.
+
+\medskip\noindent
+We have to check properties (1), (2), (3)(a) and (3)(b).
+Property (1) is trivial.
+Let $I = \Ker(R[X] \to S)$ where $X \mapsto x$.
+We claim that $I$ is generated by all linear forms $aX - b$ such that
+$ax = b$ in $K$. Clearly all these linear forms are in $I$.
If $g = a_d X^d + \ldots + a_1 X + a_0 \in I$, then we see that
+$a_dx$ is integral over $R$ (Lemma \ref{lemma-make-integral-trivial})
+and hence $b := a_dx \in R$
+as $R$ is normal. Then $g - (a_dX - b)X^{d - 1} \in I$ and we win by
+induction on the degree. As a consequence we see that
+$$
+S/xS = R[X]/(X, I) = R/J
+$$
+where
+$$
+J = \{b \in R \mid ax = b \text{ for some }a \in R\} = xR \cap R
+$$
+By Lemma \ref{lemma-normal-domain-intersection-localizations-height-1}
+we see that $S/xS = R/J$ has no embedded primes as an $R$-module, hence as
+an $R/J$-module, hence as an $S/xS$-module, hence as an $S$-module.
+This proves property (2).
Next, take an associated prime $\mathfrak q \subset S$ of $S/xS$ with the
property $\mathfrak q \subset \mathfrak m$ (so that it is an
+associated prime of $S_{\mathfrak m}/xS_{\mathfrak m}$ -- it does not
+matter for the arguments).
+Then $\mathfrak q$ is minimal over $xS$ and hence has height $1$.
+By the sequence of equalities above we see that
+$\mathfrak p = R \cap \mathfrak q$ is an associated
+prime of $R/J$, and so has height $1$
+(see Lemma \ref{lemma-normal-domain-intersection-localizations-height-1}).
+Thus $R_{\mathfrak p}$ is a discrete valuation ring and therefore
+$R_{\mathfrak p} \subset S_{\mathfrak q}$ is an equality. This shows
+that $S_{\mathfrak q}$ is regular. This proves property (3)(a).
+Finally, $(S/\mathfrak q)_{\mathfrak m}$ is a localization
+of $S/\mathfrak q$, which is a quotient of $S/xS = R/J$.
+Hence $(S/\mathfrak q)_{\mathfrak m}$ is a localization of
+a quotient of the Nagata ring $R$, hence
+Nagata (Lemmas \ref{lemma-quasi-finite-over-nagata}
+and \ref{lemma-nagata-localize})
+and hence analytically unramified
+(Lemma \ref{lemma-local-nagata-domain-analytically-unramified}).
+This shows (3)(b) holds and we are done.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-ubiquity-nagata}
+The following types of rings are Nagata and in particular universally Japanese:
+\begin{enumerate}
+\item fields,
+\item Noetherian complete local rings,
+\item $\mathbf{Z}$,
+\item Dedekind domains with fraction field of characteristic zero,
+\item finite type ring extensions of any of the above.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+The Noetherian complete local ring case is
+Lemma \ref{lemma-Noetherian-complete-local-Nagata}.
In the other cases one checks that $R/\mathfrak p$ is N-2 for every
prime ideal $\mathfrak p$ of the ring. This is clear whenever
$R/\mathfrak p$ is a field, i.e., whenever $\mathfrak p$ is maximal.
Hence for the Dedekind domain case we only need to check it when
$\mathfrak p = (0)$. But since we assume the fraction field has
characteristic zero, Lemma \ref{lemma-domain-char-zero-N-1-2} applies.
+\end{proof}
+
+\begin{example}
+\label{example-nonjapanese-dvr}
+A discrete valuation ring is Nagata if and only if it is N-2 (because the
+quotient by the maximal ideal is a field and hence N-2). The discrete valuation
+ring $A$ of Example \ref{example-bad-dvr-char-p} is not Nagata, i.e.,
+it is not N-2. Namely, the finite extension
+$A \subset R = A[f]$ is not N-1. To see this say $f = \sum a_i x^i$.
+For every $n \geq 1$ set $g_n = \sum_{i < n} a_i x^i \in A$.
+Then $h_n = (f - g_n)/x^n$ is an element of the fraction field of $R$
+and $h_n^p \in k^p[[x]] \subset A$. Hence the integral closure $R'$
+of $R$ contains $h_1, h_2, h_3, \ldots$. Now, if $R'$
were finite over $R$, and hence over $A$, then $f = x^n h_n + g_n$
+would be contained in the submodule $A + x^nR'$ for all $n$. By
+Artin-Rees this would imply $f \in A$
+(Lemma \ref{lemma-intersect-powers-ideal-module-zero}), a contradiction.
+\end{example}
+
+\begin{lemma}
+\label{lemma-nagata-pth-roots}
+Let $(A, \mathfrak m)$ be a Noetherian local domain which is Nagata
+and has fraction field of characteristic $p$. If $a \in A$ has a
+$p$th root in $A^\wedge$, then $a$ has a $p$th root in $A$.
+\end{lemma}
+
+\begin{proof}
+Consider the ring extension $A \subset B = A[x]/(x^p - a)$.
If $a$ does not have a $p$th root in $A$, then $B$ is a domain
whose completion $B^\wedge = A^\wedge[x]/(x^p - a)$ is not reduced:
if $b \in A^\wedge$ is a $p$th root of $a$, then the image of
$x - b$ is a nonzero nilpotent since $(x - b)^p = x^p - b^p = x^p - a$
in characteristic $p$. This contradicts our earlier
results, as $B$ is a Nagata ring
+(Proposition \ref{proposition-nagata-universally-japanese})
+and hence analytically unramified by
+Lemma \ref{lemma-local-nagata-domain-analytically-unramified}.
+\end{proof}
+
+
+
+
+
+
+\section{Ascending properties}
+\label{section-ascending-properties}
+
+\noindent
+In this section we start proving some algebraic facts concerning the
+``ascent'' of properties of rings. To do this for depth of rings
+one uses the following result on ascending depth of modules, see
+\cite[IV, Proposition 6.3.1]{EGA}.
+
+\begin{lemma}
+\label{lemma-apply-grothendieck-module}
+\begin{reference}
+\cite[IV, Proposition 6.3.1]{EGA}
+\end{reference}
Let $R \to S$ be a local homomorphism of Noetherian local rings.
Let $M$ be a finite $R$-module and let $N$ be a finite $S$-module
which is flat over $R$. Then
$$
\text{depth}(M \otimes_R N)
=
\text{depth}(M) + \text{depth}(N/\mathfrak m_RN)
$$
+\end{lemma}
+
+\begin{proof}
+In the statement and in the proof below, we take the depth of $M$
+as an $R$-module, the depth of $M \otimes_R N$ as an $S$-module, and
+the depth of $N/\mathfrak m_RN$ as an $S/\mathfrak m_RS$-module.
Denote by $n$ the right hand side. First assume that $n$ is zero.
+Then both $\text{depth}(M) = 0$ and
+$\text{depth}(N/\mathfrak m_RN) = 0$.
+This means there is a $z \in M$ whose annihilator is $\mathfrak m_R$
+and a $\overline{y} \in N/\mathfrak m_RN$
+whose annihilator is $\mathfrak m_S/\mathfrak m_RS$.
+Let $y \in N$ be a lift of $\overline{y}$.
+Since $N$ is flat over $R$ the map $z : R/\mathfrak m_R \to M$
+produces an injective map $N/\mathfrak m_RN \to M \otimes_R N$.
+Hence the annihilator of $z \otimes y$ is $\mathfrak m_S$.
+Thus $\text{depth}(M \otimes_R N) = 0$ as well.
+
+\medskip\noindent
+Assume $n > 0$. If $\text{depth}(N/\mathfrak m_RN) > 0$, then we may choose
+$f \in \mathfrak m_S$ mapping to $\overline{f} \in S/\mathfrak m_RS$ which
+is a nonzerodivisor on $N/\mathfrak m_RN$.
+Then $\text{depth}(N/\mathfrak m_RN) =
+\text{depth}(N/(f, \mathfrak m_R)N) + 1$
+by Lemma \ref{lemma-depth-drops-by-one}.
+According to Lemma \ref{lemma-mod-injective} the element $f \in S$ is a
+nonzerodivisor on $N$ and $N/fN$ is flat over $R$.
+Hence by induction on $n$ we have
+$$
+\text{depth}(M \otimes_R N/fN) =
+\text{depth}(M) + \text{depth}(N/(f, \mathfrak m_R)N).
+$$
+Because $N/fN$ is flat over $R$ the sequence
+$$
+0 \to M \otimes_R N \to M \otimes_R N \to M \otimes_R N/fN \to 0
+$$
+is exact where the first map is multiplication by $f$
+(Lemma \ref{lemma-flat-tor-zero}). Hence by
+Lemma \ref{lemma-depth-drops-by-one} we find that
+$\text{depth}(M \otimes_R N) = \text{depth}(M \otimes_R N/fN) + 1$
+and we conclude that equality holds in the formula of the lemma.
+
+\medskip\noindent
+If $n > 0$, but $\text{depth}(N/\mathfrak m_RN) = 0$,
+then we can choose $f \in \mathfrak m_R$ which is a nonzerodivisor on $M$.
+As $N$ is flat over $R$ it is also the case that $f$ is a nonzerodivisor on
+$M \otimes_R N$. By induction on $n$ again we have
+$$
+\text{depth}(M/fM \otimes_R N) =
+\text{depth}(M/fM) + \text{depth}(N/\mathfrak m_RN).
+$$
+In this case
+$\text{depth}(M \otimes_R N) = \text{depth}(M/fM \otimes_R N) + 1$
+and $\text{depth}(M) = \text{depth}(M/fM) + 1$
+by Lemma \ref{lemma-depth-drops-by-one} and
+we conclude that equality holds in the formula of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-apply-grothendieck}
+Suppose that $R \to S$ is a flat and local ring homomorphism of Noetherian
+local rings. Then
+$$
+\text{depth}(S) = \text{depth}(R) + \text{depth}(S/\mathfrak m_RS).
+$$
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-apply-grothendieck-module}.
+\end{proof}
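
\begin{example}
\label{example-depth-ascends-illustration}
A simple illustration of the formula of Lemma \ref{lemma-apply-grothendieck}:
let $k$ be a field and consider the flat local homomorphism
$R = k[x]_{(x)} \to S = k[x, y]_{(x, y)}$. Then
$$
\text{depth}(S) = \text{depth}(R) + \text{depth}(S/\mathfrak m_R S)
= 1 + 1 = 2
$$
which agrees with the fact that $S$ is a regular local ring of dimension $2$:
indeed $R$ and $S/\mathfrak m_R S \cong k[y]_{(y)}$ are discrete valuation
rings, hence have depth $1$.
\end{example}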
+
+\begin{lemma}
+\label{lemma-CM-goes-up}
+Let $R \to S$ be a flat local homomorphism of local Noetherian rings.
+Then the following are equivalent
+\begin{enumerate}
+\item $S$ is Cohen-Macaulay, and
+\item $R$ and $S/\mathfrak m_RS$ are Cohen-Macaulay.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows from the definitions and
+Lemmas \ref{lemma-apply-grothendieck} and
+\ref{lemma-dimension-base-fibre-equals-total}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Sk-goes-up}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $R$ is Noetherian,
+\item $S$ is Noetherian,
+\item $\varphi$ is flat,
+\item the fibre rings $S \otimes_R \kappa(\mathfrak p)$ are $(S_k)$, and
+\item $R$ has property $(S_k)$.
+\end{enumerate}
+Then $S$ has property $(S_k)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q$ be a prime of $S$
+lying over a prime $\mathfrak p$ of $R$. By
+Lemma \ref{lemma-apply-grothendieck} we have
+$$
+\text{depth}(S_{\mathfrak q}) =
+\text{depth}(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) +
+\text{depth}(R_{\mathfrak p}).
+$$
+On the other hand, we have
+$$
+\dim(R_{\mathfrak p})
++
+\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q})
+\geq
+\dim(S_{\mathfrak q})
+$$
+by Lemma \ref{lemma-dimension-base-fibre-total}.
+(Actually equality holds, by
+Lemma \ref{lemma-dimension-base-fibre-equals-total}
+but strictly speaking we do not need this.)
+Finally, as the fibre rings of the map
+are assumed $(S_k)$ we see that
+$\text{depth}(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q})
+\geq \min(k, \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}))$.
+Thus the lemma follows by the following string of inequalities
+\begin{eqnarray*}
+\text{depth}(S_{\mathfrak q}) & = &
+\text{depth}(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) +
+\text{depth}(R_{\mathfrak p}) \\
+& \geq &
+\min(k, \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q})) +
+\min(k, \dim(R_{\mathfrak p})) \\
+& = &
+\min(2k, \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) + k,
+k + \dim(R_\mathfrak p),
+\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) +
+\dim(R_{\mathfrak p})) \\
+& \geq &
+\min(k, \dim(S_{\mathfrak q}))
+\end{eqnarray*}
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Rk-goes-up}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $R$ is Noetherian,
\item $S$ is Noetherian,
+\item $\varphi$ is flat,
+\item the fibre rings $S \otimes_R \kappa(\mathfrak p)$
+have property $(R_k)$, and
+\item $R$ has property $(R_k)$.
+\end{enumerate}
+Then $S$ has property $(R_k)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q$ be a prime of $S$
+lying over a prime $\mathfrak p$ of $R$.
+Assume that $\dim(S_{\mathfrak q}) \leq k$.
+Since $\dim(S_{\mathfrak q}) = \dim(R_{\mathfrak p})
++ \dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q})$ by
+Lemma \ref{lemma-dimension-base-fibre-equals-total}
+we see that $\dim(R_{\mathfrak p}) \leq k$ and
+$\dim(S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}) \leq k$.
+Hence $R_{\mathfrak p}$ and $S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$
+are regular by assumption.
+It follows that $S_{\mathfrak q}$ is regular by
+Lemma \ref{lemma-flat-over-regular-with-regular-fibre}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduced-goes-up-noetherian}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $R$ is Noetherian,
\item $S$ is Noetherian,
+\item $\varphi$ is flat,
\item the fibre rings $S \otimes_R \kappa(\mathfrak p)$ are reduced, and
\item $R$ is reduced.
+\end{enumerate}
+Then $S$ is reduced.
+\end{lemma}
+
+\begin{proof}
+For Noetherian rings reduced is the same as having properties
+$(S_1)$ and $(R_0)$, see Lemma \ref{lemma-criterion-reduced}.
+Thus we know $R$ and the fibre rings have these properties.
+Hence we may apply Lemmas \ref{lemma-Sk-goes-up} and \ref{lemma-Rk-goes-up}
+and we see that $S$ is $(S_1)$ and $(R_0)$, in other words reduced
+by Lemma \ref{lemma-criterion-reduced} again.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduced-goes-up}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $\varphi$ is smooth,
+\item $R$ is reduced.
+\end{enumerate}
+Then $S$ is reduced.
+\end{lemma}
+
+\begin{proof}
+Observe that $R \to S$ is flat with regular fibres (see the list of
+results on smooth ring maps in Section \ref{section-smooth-overview}).
+In particular, the fibres are reduced.
+Thus if $R$ is Noetherian, then $S$ is Noetherian and we get
+the result from Lemma \ref{lemma-reduced-goes-up-noetherian}.
+
+\medskip\noindent
+In the general case we may find a finitely generated
+$\mathbf{Z}$-subalgebra $R_0 \subset R$ and a smooth ring
+map $R_0 \to S_0$ such that $S \cong R \otimes_{R_0} S_0$, see
+remark (10) in Section \ref{section-smooth-overview}.
+Now, if $x \in S$ is an element with $x^2 = 0$,
+then we can enlarge $R_0$ and assume that $x$ comes
+from an element $x_0 \in S_0$. After enlarging
+$R_0$ once more we may assume that $x_0^2 = 0$ in $S_0$.
+However, since $R_0 \subset R$ is reduced we see that
+$S_0$ is reduced and hence $x_0 = 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-normal-goes-up-noetherian}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $R$ is Noetherian,
+\item $S$ is Noetherian,
+\item $\varphi$ is flat,
+\item the fibre rings $S \otimes_R \kappa(\mathfrak p)$ are normal, and
+\item $R$ is normal.
+\end{enumerate}
+Then $S$ is normal.
+\end{lemma}
+
+\begin{proof}
+For a Noetherian ring being normal is the same as having properties
+$(S_2)$ and $(R_1)$, see Lemma \ref{lemma-criterion-normal}.
+Thus we know $R$ and the fibre rings have these properties.
+Hence we may apply Lemmas \ref{lemma-Sk-goes-up} and \ref{lemma-Rk-goes-up}
+and we see that $S$ is $(S_2)$ and $(R_1)$, in other words normal
+by Lemma \ref{lemma-criterion-normal} again.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-normal-goes-up}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $\varphi$ is smooth,
+\item $R$ is normal.
+\end{enumerate}
+Then $S$ is normal.
+\end{lemma}
+
+\begin{proof}
+Observe that $R \to S$ is flat with regular fibres (see the list of
+results on smooth ring maps in Section \ref{section-smooth-overview}).
+In particular, the fibres are normal. Thus if $R$ is Noetherian,
+then $S$ is Noetherian and we get the result from
+Lemma \ref{lemma-normal-goes-up-noetherian}.
+
+\medskip\noindent
+The general case. First note that $R$ is reduced and hence
+$S$ is reduced by Lemma \ref{lemma-reduced-goes-up}.
+Let $\mathfrak q$ be a prime of $S$ and let $\mathfrak p$ be
+the corresponding prime of $R$. Note that $R_{\mathfrak p}$
+is a normal domain. We have to show that $S_{\mathfrak q}$ is
+a normal domain. To do this we may replace $R$ by $R_{\mathfrak p}$
+and $S$ by $S_{\mathfrak p}$. Hence we may assume that $R$ is
+a normal domain.
+
+\medskip\noindent
+Assume $R \to S$ smooth, and $R$ a normal domain.
+We may find a finitely generated $\mathbf{Z}$-subalgebra
+$R_0 \subset R$ and a smooth ring map $R_0 \to S_0$ such
+that $S \cong R \otimes_{R_0} S_0$, see
+remark (10) in Section \ref{section-smooth-overview}.
+As $R_0$ is a Nagata domain (see Proposition \ref{proposition-ubiquity-nagata})
+we see that its integral closure $R_0'$ is finite over $R_0$.
+Moreover, as $R$ is a normal domain it is clear that $R_0' \subset R$.
+Hence we may replace $R_0$ by $R_0'$ and $S_0$ by
+$R_0' \otimes_{R_0} S_0$ and assume that $R_0$ is a normal
+Noetherian domain. By the first paragraph of the proof we conclude
+that $S_0$ is a normal ring (it need not be a domain of course).
+In this way we see that $R = \bigcup R_\lambda$
+is the union of normal Noetherian domains and correspondingly
+$S = \colim R_\lambda \otimes_{R_0} S_0$ is the colimit
+of normal rings. This implies that $S$ is a normal ring.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-goes-up}
+\begin{slogan}
+Regularity ascends along smooth maps of rings.
+\end{slogan}
+Let $\varphi : R \to S$ be a ring map. Assume
+\begin{enumerate}
+\item $\varphi$ is smooth,
+\item $R$ is a regular ring.
+\end{enumerate}
+Then $S$ is regular.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-Rk-goes-up} applied for all $(R_k)$
+using Lemma \ref{lemma-characterize-smooth-over-field} to see that the
+hypotheses are satisfied.
+\end{proof}
+
+
+
+
+
+
+\section{Descending properties}
+\label{section-descending-properties}
+
+\noindent
+In this section we start proving some algebraic facts concerning the
+``descent'' of properties of rings. It turns out that it is often
+``easier'' to descend properties than it is to ascend them. In other
words, the assumptions on the ring map $R \to S$ are often weaker than
+the assumptions in the corresponding lemma of the preceding section.
+However, we warn the reader that the results on descent are often
+useless unless the corresponding ascent can also be shown!
+Here is a typical result which illustrates this phenomenon.
+
+\begin{lemma}
+\label{lemma-descent-Noetherian}
+Let $R \to S$ be a ring map.
+Assume that
+\begin{enumerate}
+\item $R \to S$ is faithfully flat, and
+\item $S$ is Noetherian.
+\end{enumerate}
+Then $R$ is Noetherian.
+\end{lemma}
+
+\begin{proof}
+Let $I_0 \subset I_1 \subset I_2 \subset \ldots$ be a
increasing sequence of ideals of $R$. Since $S$ is Noetherian we have
$I_nS = I_{n + 1}S = I_{n + 2}S = \ldots$ for some $n$.
+Since $R \to S$ is flat we have $I_kS = I_k \otimes_R S$.
+Hence, as $R \to S$ is faithfully flat we see that
+$I_nS = I_{n +1}S = I_{n + 2}S = \ldots$ implies that
+$I_n = I_{n +1} = I_{n + 2} = \ldots$ as desired.
+\end{proof}
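
\begin{example}
\label{example-need-faithful-flatness-descent-noetherian}
Some hypothesis on $R \to S$ is needed in
Lemma \ref{lemma-descent-Noetherian}. For example, if
$R = k[x_1, x_2, x_3, \ldots]$ is a polynomial ring in infinitely many
variables over a field $k$ and $S = R/(x_1, x_2, x_3, \ldots) = k$,
then $S$ is Noetherian and $R \to S$ is surjective, but $R$ is not
Noetherian. Of course $R \to S$ is not flat.
\end{example}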
+
+\begin{lemma}
+\label{lemma-descent-reduced}
+Let $R \to S$ be a ring map.
+Assume that
+\begin{enumerate}
+\item $R \to S$ is faithfully flat, and
+\item $S$ is reduced.
+\end{enumerate}
+Then $R$ is reduced.
+\end{lemma}
+
+\begin{proof}
+This is clear as $R \to S$ is injective.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-normal}
+Let $R \to S$ be a ring map.
+Assume that
+\begin{enumerate}
+\item $R \to S$ is faithfully flat, and
+\item $S$ is a normal ring.
+\end{enumerate}
+Then $R$ is a normal ring.
+\end{lemma}
+
+\begin{proof}
+Since $S$ is reduced it follows that $R$ is reduced.
+Let $\mathfrak p$ be a prime of $R$. We have to show that
$R_{\mathfrak p}$ is a normal domain. Since $S_{\mathfrak p}$
is faithfully flat over $R_{\mathfrak p}$ we may assume that
+$R$ is local with maximal ideal $\mathfrak m$.
+Let $\mathfrak q$ be a prime of $S$ lying over $\mathfrak m$.
+Then we see that $R \to S_{\mathfrak q}$ is faithfully flat
+(Lemma \ref{lemma-local-flat-ff}).
+Hence we may assume $S$ is local as well.
+In particular $S$ is a normal domain.
+Since $R \to S$ is faithfully flat
+and $S$ is a normal domain we see that $R$ is a domain.
Next, suppose that $a/b$ is integral over $R$ with $a, b \in R$, $b \neq 0$.
+Then $a/b \in S$ as $S$ is normal. Hence $a \in bS$.
+This means that $a : R \to R/bR$ becomes the zero map
+after base change to $S$. By faithful flatness we see that
+$a \in bR$, so $a/b \in R$. Hence $R$ is normal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-regular}
+Let $R \to S$ be a ring map.
+Assume that
+\begin{enumerate}
+\item $R \to S$ is faithfully flat, and
+\item $S$ is a regular ring.
+\end{enumerate}
+Then $R$ is a regular ring.
+\end{lemma}
+
+\begin{proof}
+We see that $R$ is Noetherian by Lemma \ref{lemma-descent-Noetherian}.
+Let $\mathfrak p \subset R$ be a prime. Choose a prime $\mathfrak q \subset S$
+lying over $\mathfrak p$. Then Lemma \ref{lemma-flat-under-regular}
+applies to $R_\mathfrak p \to S_\mathfrak q$ and we conclude that
+$R_\mathfrak p$ is regular. Since $\mathfrak p$ was arbitrary we see
+$R$ is regular.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-Sk}
+Let $R \to S$ be a ring map.
+Assume that
+\begin{enumerate}
+\item $R \to S$ is faithfully flat, and
+\item $S$ is Noetherian and has property $(S_k)$.
+\end{enumerate}
+Then $R$ is Noetherian and has property $(S_k)$.
+\end{lemma}
+
+\begin{proof}
+We have already seen that (1) and (2) imply that $R$ is Noetherian,
+see Lemma \ref{lemma-descent-Noetherian}.
+Let $\mathfrak p \subset R$ be a prime ideal.
+Choose a prime $\mathfrak q \subset S$ lying over $\mathfrak p$
+which corresponds to a minimal prime of the fibre ring
+$S \otimes_R \kappa(\mathfrak p)$. Then
+$A = R_{\mathfrak p} \to S_{\mathfrak q} = B$ is a flat local ring
+homomorphism of Noetherian local rings with $\mathfrak m_AB$ an
+ideal of definition of $B$. Hence
+$\dim(A) = \dim(B)$ (Lemma \ref{lemma-dimension-base-fibre-equals-total}) and
+$\text{depth}(A) = \text{depth}(B)$ (Lemma \ref{lemma-apply-grothendieck}).
+Hence since $B$ has $(S_k)$ we
+see that $A$ has $(S_k)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-Rk}
+Let $R \to S$ be a ring map. Assume that
+\begin{enumerate}
+\item $R \to S$ is faithfully flat, and
+\item $S$ is Noetherian and has property $(R_k)$.
+\end{enumerate}
+Then $R$ is Noetherian and has property $(R_k)$.
+\end{lemma}
+
+\begin{proof}
+We have already seen that (1) and (2) imply that $R$ is Noetherian,
+see Lemma \ref{lemma-descent-Noetherian}.
+Let $\mathfrak p \subset R$ be a prime ideal and assume
+$\dim(R_{\mathfrak p}) \leq k$.
+Choose a prime $\mathfrak q \subset S$ lying over $\mathfrak p$
+which corresponds to a minimal prime of the fibre ring
+$S \otimes_R \kappa(\mathfrak p)$. Then
+$A = R_{\mathfrak p} \to S_{\mathfrak q} = B$ is a flat local ring
+homomorphism of Noetherian local rings with $\mathfrak m_AB$ an
+ideal of definition of $B$. Hence
+$\dim(A) = \dim(B)$ (Lemma \ref{lemma-dimension-base-fibre-equals-total}).
+As $S$ has $(R_k)$ we conclude that $B$ is a regular local ring.
+By Lemma \ref{lemma-flat-under-regular} we conclude that $A$ is regular.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-nagata}
+Let $R \to S$ be a ring map. Assume that
+\begin{enumerate}
+\item $R \to S$ is smooth and surjective on spectra, and
+\item $S$ is a Nagata ring.
+\end{enumerate}
+Then $R$ is a Nagata ring.
+\end{lemma}
+
+\begin{proof}
+Recall that a Nagata ring is the same thing as a Noetherian
+universally Japanese ring
+(Proposition \ref{proposition-nagata-universally-japanese}).
+We have already seen that $R$ is Noetherian in
+Lemma \ref{lemma-descent-Noetherian}.
+Let $R \to A$ be a finite type ring map into a domain.
+According to Lemma \ref{lemma-check-universally-japanese}
+it suffices to check that $A$ is N-1.
+It is clear that $B = A \otimes_R S$ is a finite type $S$-algebra
+and hence Nagata (Proposition \ref{proposition-nagata-universally-japanese}).
+Since $A \to B$ is smooth (Lemma \ref{lemma-base-change-smooth})
+we see that $B$ is reduced (Lemma \ref{lemma-reduced-goes-up}).
+Since $B$ is Noetherian it has only a finite number of minimal
+primes $\mathfrak q_1, \ldots, \mathfrak q_t$ (see
+Lemma \ref{lemma-Noetherian-irreducible-components}).
+As $A \to B$ is flat each of these lies over $(0) \subset A$
(by going down, see Lemma \ref{lemma-flat-going-down}).
+The total ring of fractions $Q(B)$ is the product of the
+$L_i = \kappa(\mathfrak q_i)$ (Lemmas
+\ref{lemma-total-ring-fractions-no-embedded-points} and
+\ref{lemma-minimal-prime-reduced-ring}).
+Moreover, the integral closure $B'$ of $B$ in $Q(B)$ is
+the product of the integral closures $B_i'$ of the $B/\mathfrak q_i$
+in the factors $L_i$ (compare with
+Lemma \ref{lemma-characterize-reduced-ring-normal}).
+Since $B$ is universally Japanese the
+ring extensions $B/\mathfrak q_i \subset B_i'$ are finite
+and we conclude that $B' = \prod B_i'$ is finite over $B$.
+Since $A \to B$ is flat we see that any
+nonzerodivisor on $A$ maps to a nonzerodivisor on $B$.
+The corresponding map
+$$
+Q(A) \otimes_A B = (A \setminus \{0\})^{-1}A \otimes_A B
+= (A \setminus \{0\})^{-1}B \to Q(B)
+$$
+is injective (we used Lemma \ref{lemma-tensor-localization}).
Let $A'$ be the integral closure of $A$ in $Q(A)$.
Via this map $A'$ maps into $B'$. This induces a map
+$$
+A' \otimes_A B \longrightarrow B'
+$$
+which is injective (by the above and the flatness of $A \to B$).
+Since $B'$ is a finite $B$-module
+and $B$ is Noetherian we see that $A' \otimes_A B$ is a finite $B$-module.
+Hence there exist finitely many elements $x_i \in A'$ such that
+the elements $x_i \otimes 1$ generate $A' \otimes_A B$ as a $B$-module.
+Finally, by faithful flatness of $A \to B$ we conclude that
the $x_i$ also generate $A'$ as an $A$-module, and we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-universally-catenary-does-not-descend}
+The property of being ``universally catenary'' does not descend;
+not even along \'etale ring maps. In
+Examples, Section \ref{examples-section-non-catenary-Noetherian-local}
+there is a construction of a finite ring map $A \to B$ with
+$A$ local Noetherian and not universally catenary,
+$B$ semi-local with two maximal ideals $\mathfrak m$, $\mathfrak n$
with $B_{\mathfrak m}$ and $B_{\mathfrak n}$ regular of dimension $2$ and $1$
respectively, both with the same residue field as $A$.
+Moreover, $\mathfrak m_A$ generates the maximal ideal in both
+$B_{\mathfrak m}$ and $B_{\mathfrak n}$ (so $A \to B$ is unramified
+as well as finite).
+By Lemma \ref{lemma-etale-makes-unramified-closed}
+there exists a local \'etale ring map
+$A \to A'$ such that $B \otimes_A A' = B_1 \times B_2$ decomposes
+with $A' \to B_i$ surjective.
+This shows that $A'$ has two minimal primes $\mathfrak q_i$
+with $A'/\mathfrak q_i \cong B_i$. Since $B_i$ is regular local
+(since it is \'etale over either $B_{\mathfrak m}$ or $B_{\mathfrak n}$)
+we conclude that $A'$ is universally catenary.
+\end{remark}
+
+
+
+
+
+
+
+
+
+\section{Geometrically normal algebras}
+\label{section-geometrically-normal}
+
+\noindent
+In this section we put some applications of ascent and descent of
+properties of rings.
+
+\begin{lemma}
+\label{lemma-geometrically-normal}
+Let $k$ be a field. Let $A$ be a $k$-algebra.
+The following properties of $A$ are equivalent:
+\begin{enumerate}
+\item $k' \otimes_k A$ is a normal ring
+for every field extension $k'/k$,
+\item $k' \otimes_k A$ is a normal ring
+for every finitely generated field extension $k'/k$,
+\item $k' \otimes_k A$ is a normal ring
+for every finite purely inseparable extension $k'/k$,
+\item $k^{perf} \otimes_k A$ is a normal ring.
+\end{enumerate}
+Here normal ring is defined in Definition \ref{definition-ring-normal}.
+\end{lemma}
+
+\begin{proof}
+It is clear that (1) $\Rightarrow$ (2) $\Rightarrow$ (3)
+and (1) $\Rightarrow$ (4).
+
+\medskip\noindent
+If $k'/k$ is a finite purely inseparable extension, then
+there is an embedding $k' \to k^{perf}$ of $k$-extensions.
+The ring map $k' \otimes_k A \to k^{perf} \otimes_k A$
+is faithfully flat, hence $k' \otimes_k A$ is normal if
+$k^{perf} \otimes_k A$ is normal by
+Lemma \ref{lemma-descent-normal}. In this way we see that
+(4) $\Rightarrow$ (3).
+
+\medskip\noindent
+Assume (2) and let $k'/k$ be any field extension.
+Then we can write $k' = \colim_i k_i$ as a directed
+colimit of finitely generated field extensions. Hence we
+see that $k' \otimes_k A = \colim_i k_i \otimes_k A$
+is a directed colimit of normal rings. Thus we see
+that $k' \otimes_k A$ is a normal ring by
+Lemma \ref{lemma-colimit-normal-ring}.
+Hence (1) holds.
+
+\medskip\noindent
+Assume (3) and let $K/k$ be a finitely generated field extension.
+By Lemma \ref{lemma-make-separable} we can find a diagram
+$$
+\xymatrix{
+K \ar[r] & K' \\
+k \ar[u] \ar[r] & k' \ar[u]
+}
+$$
+where $k'/k$, $K'/K$ are finite purely inseparable field
+extensions such that $K'/k'$ is separable. By
+Lemma \ref{lemma-localization-smooth-separable}
+there exists a smooth $k'$-algebra $B$ such that $K'$ is the
+fraction field of $B$. Now we can argue as follows:
Step 1: $k' \otimes_k A$ is a normal ring because we assumed (3).
Step 2: $B \otimes_{k'} k' \otimes_k A$ is a normal ring because
$k' \otimes_k A \to B \otimes_{k'} k' \otimes_k A$ is smooth
(Lemma \ref{lemma-base-change-smooth})
and normality ascends along smooth maps
(Lemma \ref{lemma-normal-goes-up}).
Step 3: $K' \otimes_{k'} k' \otimes_k A = K' \otimes_k A$ is
a normal ring as it is a localization of a normal ring
(Lemma \ref{lemma-localization-normal-ring}).
Step 4: Finally $K \otimes_k A$ is a normal ring by descent of
+normality along the faithfully flat ring map
+$K \otimes_k A \to K' \otimes_k A$ (Lemma \ref{lemma-descent-normal}).
+This proves the lemma.
+\end{proof}
+
+\begin{definition}
+\label{definition-geometrically-normal}
+Let $k$ be a field.
+A $k$-algebra $R$ is called {\it geometrically normal} over $k$ if
+the equivalent conditions of Lemma \ref{lemma-geometrically-normal} hold.
+\end{definition}
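
\begin{example}
\label{example-geometrically-normal-perfect}
If $k$ is a perfect field, then $k^{perf} = k$ and condition (4) of
Lemma \ref{lemma-geometrically-normal} shows that a $k$-algebra $R$ is
geometrically normal over $k$ if and only if $R$ is a normal ring.
\end{example}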
+
+\begin{lemma}
+\label{lemma-localization-geometrically-normal-algebra}
+\begin{slogan}
+Localization preserves geometric normality.
+\end{slogan}
+Let $k$ be a field. A localization of a geometrically normal $k$-algebra
+is geometrically normal.
+\end{lemma}
+
+\begin{proof}
+This is clear as being a normal ring is checked at the localizations at
+prime ideals.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-separable-field-extension-geometrically-normal}
+Let $k$ be a field. Let $K/k$ be a separable field extension.
+Then $K$ is geometrically normal over $k$.
+\end{lemma}
+
+\begin{proof}
+This is true because $k^{perf} \otimes_k K$ is a field.
+Namely, it is reduced for example by
+Lemma \ref{lemma-characterize-separable-field-extensions}
+and it has a unique prime ideal because $K \subset k^{perf} \otimes_k K$
+is a universal homeomorphism.
+\end{proof}
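
\begin{example}
\label{example-inseparable-not-geometrically-normal}
The separability assumption in
Lemma \ref{lemma-separable-field-extension-geometrically-normal}
cannot be dropped. Let $p$ be a prime number, let $k = \mathbf{F}_p(t)$,
and let $K = k[s]/(s^p - t)$. Then
$$
K \otimes_k K = K[x]/(x^p - t) = K[x]/((x - s)^p)
$$
is not reduced, hence not a normal ring. Thus $K$ is not geometrically
normal over $k$.
\end{example}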
+
+\begin{lemma}
+\label{lemma-geometrically-normal-tensor-normal}
+Let $k$ be a field. Let $A, B$ be $k$-algebras. Assume $A$ is geometrically
+normal over $k$ and $B$ is a normal ring. Then $A \otimes_k B$ is a normal
+ring.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak r$ be a prime ideal of $A \otimes_k B$. Denote
+$\mathfrak p$, resp.\ $\mathfrak q$ the corresponding prime of $A$,
+resp.\ $B$. Then $(A \otimes_k B)_{\mathfrak r}$ is a localization of
+$A_{\mathfrak p} \otimes_k B_{\mathfrak q}$. Hence it suffices to prove the
+result for the ring $A_{\mathfrak p} \otimes_k B_{\mathfrak q}$, see
+Lemma \ref{lemma-localization-normal-ring}
+and
+Lemma \ref{lemma-localization-geometrically-normal-algebra}.
+Thus we may assume $A$ and $B$ are domains.
+
+\medskip\noindent
Assume that $A$ and $B$ are domains with fraction fields $K$ and $L$.
Note that $B$ is the filtered colimit of its finite type normal
$k$-subalgebras (as $k$ is a Nagata ring, see
Proposition \ref{proposition-ubiquity-nagata},
and hence the integral closure of a finite type $k$-subalgebra is still
a finite type $k$-subalgebra by
Proposition \ref{proposition-nagata-universally-japanese}).
+By
+Lemma \ref{lemma-colimit-normal-ring}
+we reduce to the case that $B$ is of finite type over $k$.
+
+\medskip\noindent
Assume that $A$ and $B$ are domains with fraction fields $K$ and $L$
+and $B$ of finite type over $k$. In this case the ring $K \otimes_k B$
+is of finite type over $K$, hence Noetherian
+(Lemma \ref{lemma-Noetherian-permanence}).
+In particular $K \otimes_k B$ has finitely many minimal primes
+(Lemma \ref{lemma-Noetherian-irreducible-components}).
+Since $A \to A \otimes_k B$ is flat, this implies that $A \otimes_k B$
+has finitely many minimal primes (by going down for flat ring maps --
+Lemma \ref{lemma-flat-going-down}
+-- these primes all lie over $(0) \subset A$). Thus it suffices to prove
+that $A \otimes_k B$ is integrally closed in its total ring of fractions
+(Lemma \ref{lemma-characterize-reduced-ring-normal}).
+
+\medskip\noindent
+We claim that $K \otimes_k B$ and $A \otimes_k L$ are both normal rings.
+If this is true then any element $x$ of $Q(A \otimes_k B)$ which is
+integral over $A \otimes_k B$ is (by
+Lemma \ref{lemma-normal-ring-integrally-closed})
+contained in $K \otimes_k B \cap A \otimes_k L = A \otimes_k B$ and we're done.
Since $A \otimes_k L$ is a normal ring ($A$ being geometrically
normal over $k$), it suffices to
+prove that $K \otimes_k B$ is normal.
+
+\medskip\noindent
+As $A$ is geometrically normal over $k$ we see $K$ is geometrically normal
+over $k$
+(Lemma \ref{lemma-localization-geometrically-normal-algebra})
+hence $K$ is geometrically reduced over $k$.
+Hence $K = \bigcup K_i$ is the union of finitely generated field extensions
+of $k$ which are geometrically reduced
+(Lemma \ref{lemma-subalgebra-separable}).
+Each $K_i$ is the localization of a smooth $k$-algebra
+(Lemma \ref{lemma-localization-smooth-separable}).
+So $K_i \otimes_k B$ is the localization of a smooth $B$-algebra hence normal
+(Lemma \ref{lemma-normal-goes-up}).
+Thus $K \otimes_k B$ is a normal ring
+(Lemma \ref{lemma-colimit-normal-ring})
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-normal-over-separable-algebraic}
+Let $k'/k$ be a separable algebraic field extension.
+Let $A$ be an algebra over $k'$. Then $A$ is geometrically normal
+over $k$ if and only if it is geometrically normal over $k'$.
+\end{lemma}
+
+\begin{proof}
+Let $L/k$ be a finite purely inseparable field extension.
+Then $L' = k' \otimes_k L$ is a field (see material in
+Fields, Section \ref{fields-section-algebraic})
+and $A \otimes_k L = A \otimes_{k'} L'$. Hence if
+$A$ is geometrically normal over $k'$, then $A$ is geometrically
+normal over $k$.
+
+\medskip\noindent
+Assume $A$ is geometrically normal over $k$. Let $K/k'$ be a field
+extension. Then
+$$
+K \otimes_{k'} A = (K \otimes_k A) \otimes_{(k' \otimes_k k')} k'
+$$
+Since $k' \otimes_k k' \to k'$ is a localization by
+Lemma \ref{lemma-separable-algebraic-diagonal},
+we see that $K \otimes_{k'} A$
+is a localization of a normal ring, hence normal.
+\end{proof}
+
+
+
+
+\section{Geometrically regular algebras}
+\label{section-geometrically-regular}
+
+\noindent
+Let $k$ be a field.
+Let $A$ be a Noetherian $k$-algebra.
+Let $K/k$ be a finitely generated field extension.
+Then the ring $K \otimes_k A$ is Noetherian as well, see
+Lemma \ref{lemma-Noetherian-field-extension}.
+Thus the following lemma makes sense.
+
+\begin{lemma}
+\label{lemma-geometrically-regular}
+Let $k$ be a field. Let $A$ be a $k$-algebra.
+Assume $A$ is Noetherian.
+The following properties of $A$ are equivalent:
+\begin{enumerate}
+\item $k' \otimes_k A$ is regular for every finitely generated field
+extension $k'/k$, and
+\item $k' \otimes_k A$ is regular for every finite purely inseparable
+extension $k'/k$.
+\end{enumerate}
Here a regular ring is as in Definition \ref{definition-regular}.
+\end{lemma}
+
+\begin{proof}
+The lemma makes sense by the remarks preceding the lemma.
+It is clear that (1) $\Rightarrow$ (2).
+
+\medskip\noindent
+Assume (2) and let $K/k$ be a finitely generated field extension.
+By Lemma \ref{lemma-make-separable} we can find a diagram
+$$
+\xymatrix{
+K \ar[r] & K' \\
+k \ar[u] \ar[r] & k' \ar[u]
+}
+$$
+where $k'/k$, $K'/K$ are finite purely inseparable field
+extensions such that $K'/k'$ is separable. By
+Lemma \ref{lemma-localization-smooth-separable}
+there exists a smooth $k'$-algebra $B$ such that $K'$ is the
+fraction field of $B$. Now we can argue as follows:
+Step 1: $k' \otimes_k A$ is a regular ring because we assumed (2).
+Step 2: $B \otimes_{k'} k' \otimes_k A$ is a regular ring as
+$k' \otimes_k A \to B \otimes_{k'} k' \otimes_k A$ is smooth
+(Lemma \ref{lemma-base-change-smooth})
+and ascent of regularity along smooth maps
+(Lemma \ref{lemma-regular-goes-up}).
Step 3: $K' \otimes_{k'} k' \otimes_k A = K' \otimes_k A$ is
a regular ring as it is a localization of a regular ring
(immediate from the definition).
Step 4: Finally $K \otimes_k A$ is a regular ring by descent of
+regularity along the faithfully flat ring map
+$K \otimes_k A \to K' \otimes_k A$ (Lemma \ref{lemma-descent-regular}).
+This proves the lemma.
+\end{proof}
+
+\begin{definition}
+\label{definition-geometrically-regular}
+Let $k$ be a field. Let $R$ be a Noetherian $k$-algebra.
+The $k$-algebra $R$ is called {\it geometrically regular} over $k$ if
+the equivalent conditions of Lemma \ref{lemma-geometrically-regular} hold.
+\end{definition}
+
+\noindent
+It is clear from the definition that $K \otimes_k R$ is a geometrically
+regular algebra over $K$ for any finitely generated field extension $K$ of
+$k$. We will see later (More on Algebra, Proposition
+\ref{more-algebra-proposition-characterization-geometrically-regular})
that it suffices to check that $R \otimes_k k'$ is regular whenever
$k \subset k' \subset k^{1/p}$ is finite.
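\medskip\noindent
In particular, if $k$ is perfect (for example of characteristic zero),
then the only finite purely inseparable extension of $k$ is $k$ itself,
and a Noetherian $k$-algebra is geometrically regular over $k$ if and
only if it is regular.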
+
+\begin{lemma}
+\label{lemma-geometrically-regular-descent}
+\begin{slogan}
+Geometric regularity descends through faithfully flat maps of algebras
+\end{slogan}
+Let $k$ be a field. Let $A \to B$ be a faithfully flat $k$-algebra
+map. If $B$ is geometrically regular over $k$, so is $A$.
+\end{lemma}
+
+\begin{proof}
+Assume $B$ is geometrically regular over $k$.
+Let $k'/k$ be a finite, purely inseparable extension.
+Then $A \otimes_k k' \to B \otimes_k k'$ is faithfully flat as a
+base change of $A \to B$ (by
+Lemmas \ref{lemma-surjective-spec-radical-ideal} and
+\ref{lemma-flat-base-change})
+and $B \otimes_k k'$ is regular by our
+assumption on $B$ over $k$. Then $A \otimes_k k'$ is regular by
+Lemma \ref{lemma-descent-regular}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-regular-goes-up}
+Let $k$ be a field. Let $A \to B$ be a smooth ring map
+of $k$-algebras. If $A$ is geometrically regular over $k$,
+then $B$ is geometrically regular over $k$.
+\end{lemma}
+
+\begin{proof}
+Let $k'/k$ be a finitely generated field extension.
+Then $A \otimes_k k' \to B \otimes_k k'$ is a smooth ring map
+(Lemma \ref{lemma-base-change-smooth}) and $A \otimes_k k'$
+is regular. Hence $B \otimes_k k'$ is regular by
+Lemma \ref{lemma-regular-goes-up}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-regular-over-subfields}
+Let $k$ be a field. Let $A$ be an algebra over $k$.
+Let $k = \colim k_i$ be a directed colimit of subfields.
+If $A$ is geometrically regular over each $k_i$, then
+$A$ is geometrically regular over $k$.
+\end{lemma}
+
+\begin{proof}
+Let $k'/k$ be a finite purely inseparable field extension.
We can get $k'$ by adjoining finitely many variables to $k$ and
imposing finitely many polynomial relations. Hence we see that
there exists an $i$ and a finite purely inseparable field extension
$k_i'/k_i$ such that $k' = k \otimes_{k_i} k_i'$. Thus
$$
A \otimes_k k' = A \otimes_k (k \otimes_{k_i} k_i') = A \otimes_{k_i} k_i'
$$
and this ring is regular because $A$ is geometrically regular over $k_i$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometrically-regular-over-separable-algebraic}
+Let $k'/k$ be a separable algebraic field extension.
+Let $A$ be an algebra over $k'$. Then $A$ is geometrically
+regular over $k$ if and only if it is geometrically regular over $k'$.
+\end{lemma}
+
+\begin{proof}
+Let $L/k$ be a finite purely inseparable field extension.
+Then $L' = k' \otimes_k L$ is a field (see material in
+Fields, Section \ref{fields-section-algebraic})
+and $A \otimes_k L = A \otimes_{k'} L'$. Hence if
+$A$ is geometrically regular over $k'$, then $A$ is geometrically
+regular over $k$.
+
+\medskip\noindent
+Assume $A$ is geometrically regular over $k$. Since $k'$
+is the filtered colimit of finite extensions of $k$ we may
+assume by Lemma \ref{lemma-geometrically-regular-over-subfields}
+that $k'/k$ is finite separable. Consider the ring maps
+$$
+k' \to A \otimes_k k' \to A.
+$$
+Note that $A \otimes_k k'$ is geometrically regular over $k'$
+as a base change of $A$ to $k'$. Note that $A \otimes_k k' \to A$
+is the base change of $k' \otimes_k k' \to k'$ by the map
+$k' \to A$. Since $k'/k$ is an \'etale extension of rings, we
+see that $k' \otimes_k k' \to k'$ is \'etale
+(Lemma \ref{lemma-etale}). Hence $A$ is
+geometrically regular over $k'$ by
+Lemma \ref{lemma-geometrically-regular-goes-up}.
+\end{proof}
+
+
+
+
+
+\section{Geometrically Cohen-Macaulay algebras}
+\label{section-geometrically-CM}
+
+\noindent
The title of this section is a bit of a misnomer, since Cohen-Macaulay
algebras are automatically geometrically Cohen-Macaulay. Namely, see
+Lemma \ref{lemma-extend-field-CM-locus}
+and
+Lemma \ref{lemma-CM-geometrically-CM}
+below.
+
+\begin{lemma}
+\label{lemma-tensor-fields-CM}
Let $k$ be a field and let $K/k$ and $L/k$ be
two field extensions, at least one of which is finitely generated over $k$.
Then $K \otimes_k L$ is a Noetherian Cohen-Macaulay ring.
+\end{lemma}
+
+\begin{proof}
+The ring $K \otimes_k L$ is Noetherian by
+Lemma \ref{lemma-Noetherian-field-extension}.
+Say $K$ is a finite extension of the purely transcendental extension
+$k(t_1, \ldots, t_r)$. Then
+$k(t_1, \ldots, t_r) \otimes_k L \to K \otimes_k L$
+is a finite free ring map. By
+Lemma \ref{lemma-finite-flat-over-regular-CM}
+it suffices to show that $k(t_1, \ldots, t_r) \otimes_k L$ is Cohen-Macaulay.
+This is clear because it is a localization of the polynomial
+ring $L[t_1, \ldots, t_r]$. (See for example
+Lemma \ref{lemma-CM-polynomial-algebra}
+for the fact that a polynomial ring is Cohen-Macaulay.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-geometrically-CM}
+Let $k$ be a field. Let $S$ be a Noetherian $k$-algebra.
+Let $K/k$ be a finitely generated field extension,
+and set $S_K = K \otimes_k S$. Let $\mathfrak q \subset S$
+be a prime of $S$. Let $\mathfrak q_K \subset S_K$ be a prime
+of $S_K$ lying over $\mathfrak q$. Then $S_{\mathfrak q}$ is Cohen-Macaulay
+if and only if $(S_K)_{\mathfrak q_K}$ is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-Noetherian-field-extension}
+the ring $S_K$ is Noetherian. Hence
+$S_{\mathfrak q} \to (S_K)_{\mathfrak q_K}$ is a flat local homomorphism
+of Noetherian local rings. Note that the fibre
+$$
+(S_K)_{\mathfrak q_K} / \mathfrak q (S_K)_{\mathfrak q_K}
+\cong (\kappa(\mathfrak q) \otimes_k K)_{\mathfrak q'}
+$$
+is the localization of the Cohen-Macaulay (Lemma \ref{lemma-tensor-fields-CM})
+ring $\kappa(\mathfrak q) \otimes_k K$ at a suitable prime ideal
+$\mathfrak q'$. Hence the lemma follows from Lemma \ref{lemma-CM-goes-up}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Colimits and maps of finite presentation, II}
+\label{section-colimits-finite-presentation}
+
+\noindent
+This section is a continuation of Section \ref{section-colimits-flat}.
+
+\medskip\noindent
+We start with an application of the openness of flatness.
It says that a finitely presented flat module descends to a finitely
presented flat module over some finite type $\mathbf{Z}$-algebra,
which is useful.
+
+\begin{lemma}
+\label{lemma-flat-finite-presentation-limit-flat}
+Let $R \to S$ be a ring map.
+Let $M$ be an $S$-module.
+Assume that
+\begin{enumerate}
+\item $R \to S$ is of finite presentation,
+\item $M$ is a finitely presented $S$-module, and
+\item $M$ is flat over $R$.
+\end{enumerate}
+In this case we have the following:
+\begin{enumerate}
+\item There exists a finite type $\mathbf{Z}$-algebra $R_0$ and
+a finite type ring map $R_0 \to S_0$ and a finite $S_0$-module $M_0$
such that $M_0$ is flat over $R_0$, together with ring maps
+$R_0 \to R$ and $S_0 \to S$ and an $S_0$-module map $M_0 \to M$
+such that $S \cong R \otimes_{R_0} S_0$ and $M = S \otimes_{S_0} M_0$.
+\item If $R = \colim_{\lambda \in \Lambda} R_\lambda$ is written
+as a directed colimit, then there exists a $\lambda$ and a ring map
+$R_\lambda \to S_\lambda$ of finite presentation, and an $S_\lambda$-module
+$M_\lambda$ of finite presentation such that $M_\lambda$ is flat over
+$R_\lambda$ and such that $S = R \otimes_{R_\lambda} S_\lambda$ and
+$M = S \otimes_{S_{\lambda}} M_\lambda$.
+\item If
+$$
+(R \to S, M) =
+\colim_{\lambda \in \Lambda}
+(R_\lambda \to S_\lambda, M_\lambda)
+$$
+is written as a directed colimit such that
+\begin{enumerate}
+\item $R_\mu \otimes_{R_\lambda} S_\lambda \to S_\mu$ and
+$S_\mu \otimes_{S_\lambda} M_\lambda \to M_\mu$ are isomorphisms
+for $\mu \geq \lambda$,
+\item $R_\lambda \to S_\lambda$ is of finite presentation,
+\item $M_\lambda$ is a finitely presented $S_\lambda$-module,
+\end{enumerate}
+then for all sufficiently large $\lambda$ the module $M_\lambda$
+is flat over $R_\lambda$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We first write $(R \to S, M)$ as the directed colimit of a system
$(R_\lambda \to S_\lambda, M_\lambda)$ as in
Lemma \ref{lemma-limit-module-finite-presentation}.
+Let $\mathfrak q \subset S$ be a prime.
+Let $\mathfrak p \subset R$, $\mathfrak q_\lambda \subset S_\lambda$,
and $\mathfrak p_\lambda \subset R_\lambda$ be the corresponding primes.
+As seen in the proof of Theorem \ref{theorem-openness-flatness}
+$$
+((R_\lambda)_{\mathfrak p_\lambda},
+(S_\lambda)_{\mathfrak q_\lambda},
+(M_\lambda)_{\mathfrak q_{\lambda}})
+$$
+is a system as in
+Lemma \ref{lemma-limit-module-essentially-finite-presentation}, and
+hence by Lemma \ref{lemma-colimit-eventually-flat}
+we see that for some $\lambda_{\mathfrak q} \in \Lambda$
+for all $\lambda \geq \lambda_{\mathfrak q}$
+the module $M_\lambda$ is flat over
+$R_\lambda$ at the prime $\mathfrak q_{\lambda}$.
+
+\medskip\noindent
+By Theorem \ref{theorem-openness-flatness} we get an open subset
$U_\lambda \subset \Spec(S_\lambda)$ such that $M_\lambda$ is
flat over $R_\lambda$ at all the primes of $U_\lambda$.
+Denote $V_\lambda \subset \Spec(S)$ the inverse image of
+$U_\lambda$ under the map $\Spec(S) \to \Spec(S_\lambda)$.
+The argument above shows that for every $\mathfrak q \in \Spec(S)$
+there exists a $\lambda_{\mathfrak q}$ such that
+$\mathfrak q \in V_\lambda$ for all $\lambda \geq \lambda_{\mathfrak q}$.
Since $\Spec(S)$ is quasi-compact, this implies that there
exists a single $\lambda_0 \in \Lambda$ such that
+$V_{\lambda_0} = \Spec(S)$.
+
+\medskip\noindent
+The complement $\Spec(S_{\lambda_0}) \setminus U_{\lambda_0}$
+is $V(I)$ for some ideal $I \subset S_{\lambda_0}$. As
+$V_{\lambda_0} = \Spec(S)$ we see that $IS = S$.
+Choose $f_1, \ldots, f_r \in I$ and $s_1, \ldots, s_n \in S$ such
+that $\sum f_i s_i = 1$. Since $\colim S_\lambda = S$, after
+increasing $\lambda_0$ we may assume there exist
+$s_{i, \lambda_0} \in S_{\lambda_0}$ such that
+$\sum f_i s_{i, \lambda_0} = 1$.
+Hence for this $\lambda_0$ we have
+$U_{\lambda_0} = \Spec(S_{\lambda_0})$.
+This proves (1).
+
+\medskip\noindent
+Proof of (2). Let $(R_0 \to S_0, M_0)$ be as in (1) and suppose that
$R = \colim R_\lambda$. Since $R_0$ is a finite type
$\mathbf{Z}$-algebra, there exists a $\lambda$ and a map
$R_0 \to R_\lambda$ such
+that $R_0 \to R_\lambda \to R$ is the given map $R_0 \to R$ (see
+Lemma \ref{lemma-characterize-finite-presentation}).
+Then, part (2) follows by taking $S_\lambda = R_\lambda \otimes_{R_0} S_0$
+and $M_\lambda = S_\lambda \otimes_{S_0} M_0$.
+
+\medskip\noindent
+Finally, we come to the proof of (3). Let
+$(R_\lambda \to S_\lambda, M_\lambda)$ be as in (3). Choose
+$(R_0 \to S_0, M_0)$ and $R_0 \to R$ as in (1).
+As in the proof of (2), there exists a $\lambda_0$ and a ring map
+$R_0 \to R_{\lambda_0}$ such that $R_0 \to R_{\lambda_0} \to R$ is the given
+map $R_0 \to R$. Since $S_0$ is of finite presentation over $R_0$ and since
+$S = \colim S_\lambda$ we see that for some $\lambda_1 \geq \lambda_0$
+we get an $R_0$-algebra map $S_0 \to S_{\lambda_1}$ such that the
+composition $S_0 \to S_{\lambda_1} \to S$ is the given map $S_0 \to S$
+(see Lemma \ref{lemma-characterize-finite-presentation}).
+For all $\lambda \geq \lambda_1$ this gives maps
+$$
+\Psi_{\lambda} :
+R_\lambda \otimes_{R_0} S_0
+\longrightarrow
+R_\lambda \otimes_{R_{\lambda_1}} S_{\lambda_1}
+\cong
+S_\lambda
+$$
+the last isomorphism by assumption. By construction
+$\colim_\lambda \Psi_\lambda$ is an isomorphism. Hence $\Psi_\lambda$
+is an isomorphism for all $\lambda$ large enough by
+Lemma \ref{lemma-colimit-category-fp-algebras}.
+In the same vein, there exists a $\lambda_2 \geq \lambda_1$
+and an $S_0$-module map $M_0 \to M_{\lambda_2}$ such that
+$M_0 \to M_{\lambda_2} \to M$ is the given
+map $M_0 \to M$ (see Lemma \ref{lemma-module-map-property-in-colimit}).
+For $\lambda \geq \lambda_2$ there is an induced map
+$$
+S_\lambda \otimes_{S_0} M_0
+\longrightarrow
+S_\lambda \otimes_{S_{\lambda_2}} M_{\lambda_2}
+\cong
+M_\lambda
+$$
+and for $\lambda$ large enough this map is an isomorphism by
+Lemma \ref{lemma-colimit-category-fp-modules}.
+This implies (3) because $M_0$ is flat over $R_0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descend-faithfully-flat-finite-presentation}
+Let $R \to A \to B$ be ring maps.
+Assume $A \to B$ faithfully flat of finite presentation.
+Then there exists a commutative diagram
+$$
+\xymatrix{
+R \ar[r] \ar@{=}[d] &
+A_0 \ar[d] \ar[r] &
+B_0 \ar[d] \\
+R \ar[r] & A \ar[r] & B
+}
+$$
+with $R \to A_0$ of finite presentation,
+$A_0 \to B_0$ faithfully flat of finite presentation
+and $B = A \otimes_{A_0} B_0$.
+\end{lemma}
+
+\begin{proof}
We first prove the lemma with $R$ replaced by $\mathbf{Z}$.
+By Lemma \ref{lemma-flat-finite-presentation-limit-flat}
+there exists a diagram
+$$
+\xymatrix{
B_0 \ar[r] & B \\
A_0 \ar[u] \ar[r] & A \ar[u]
+}
+$$
+where $A_0$ is of finite type over $\mathbf{Z}$, $B_0$ is flat of finite
+presentation over $A_0$ such that $B = A \otimes_{A_0} B_0$.
+As $A_0 \to B_0$ is flat of finite presentation we see that the image of
+$\Spec(B_0) \to \Spec(A_0)$ is open, see
+Proposition \ref{proposition-fppf-open}. Hence the complement of the image
+is $V(I_0)$ for some ideal $I_0 \subset A_0$.
+As $A \to B$ is faithfully
+flat the map $\Spec(B) \to \Spec(A)$ is surjective, see
+Lemma \ref{lemma-ff-rings}.
+Now we use that
+the base change of the image is the image of the base change.
+Hence $I_0A = A$. Pick a relation
+$\sum f_i r_i = 1$, with $r_i \in A$, $f_i \in I_0$. Then after
+enlarging $A_0$ to contain the elements $r_i$ (and correspondingly
+enlarging $B_0$) we see that $A_0 \to B_0$ is surjective on spectra
+also, i.e., faithfully flat.
+
+\medskip\noindent
+Thus the lemma holds in case $R = \mathbf{Z}$.
+In the general case, take the solution $A_0' \to B_0'$
+just obtained and set $A_0 = A_0' \otimes_{\mathbf{Z}} R$,
+$B_0 = B_0' \otimes_{\mathbf{Z}} R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-finite}
+Let $A = \colim_{i \in I} A_i$ be a directed colimit of rings.
+Let $0 \in I$ and $\varphi_0 : B_0 \to C_0$ a map of $A_0$-algebras.
+Assume
+\begin{enumerate}
+\item $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is finite,
+\item $C_0$ is of finite type over $B_0$.
+\end{enumerate}
+Then there exists an $i \geq 0$ such that the map
+$A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$
+is finite.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, \ldots, x_m$ be generators for $C_0$ over $B_0$.
Pick monic polynomials $P_j \in (A \otimes_{A_0} B_0)[T]$ such
that $P_j(1 \otimes x_j) = 0$ in $A \otimes_{A_0} C_0$. For some
$i \geq 0$ we can find $P_{j, i} \in (A_i \otimes_{A_0} B_0)[T]$
+mapping to $P_j$. Since $\otimes$
+commutes with colimits we see that $P_{j, i}(1 \otimes x_j)$ is zero
+in $A_i \otimes_{A_0} C_0$ after possibly increasing $i$.
+Then this $i$ works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-surjective}
+Let $A = \colim_{i \in I} A_i$ be a directed colimit of rings.
+Let $0 \in I$ and $\varphi_0 : B_0 \to C_0$ a map of $A_0$-algebras.
+Assume
+\begin{enumerate}
+\item $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is surjective,
+\item $C_0$ is of finite type over $B_0$.
+\end{enumerate}
+Then for some $i \geq 0$ the map
+$A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$
+is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, \ldots, x_m$ be generators for $C_0$ over $B_0$.
+Pick $b_j \in A \otimes_{A_0} B_0$ mapping to $1 \otimes x_j$ in
+$A \otimes_{A_0} C_0$. For some $i \geq 0$ we can find
+$b_{j, i} \in A_i \otimes_{A_0} B_0$ mapping to $b_j$.
+Then this $i$ works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-unramified}
+Let $A = \colim_{i \in I} A_i$ be a directed colimit of rings.
+Let $0 \in I$ and $\varphi_0 : B_0 \to C_0$ a map of $A_0$-algebras.
+Assume
+\begin{enumerate}
+\item $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is unramified,
+\item $C_0$ is of finite type over $B_0$.
+\end{enumerate}
+Then for some $i \geq 0$ the map
+$A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$
+is unramified.
+\end{lemma}
+
+\begin{proof}
+Set $B_i = A_i \otimes_{A_0} B_0$, $C_i = A_i \otimes_{A_0} C_0$,
+$B = A \otimes_{A_0} B_0$, and $C = A \otimes_{A_0} C_0$.
+Let $x_1, \ldots, x_m$ be generators for $C_0$ over $B_0$.
+Then $\text{d}x_1, \ldots, \text{d}x_m$ generate $\Omega_{C_0/B_0}$
+over $C_0$ and their images generate $\Omega_{C_i/B_i}$ over $C_i$
+(Lemmas \ref{lemma-differentials-polynomial-ring} and
+\ref{lemma-differential-seq}).
+Observe that $0 = \Omega_{C/B} = \colim \Omega_{C_i/B_i}$
+(Lemma \ref{lemma-colimit-differentials}).
+Thus there is an $i$ such that $\text{d}x_1, \ldots, \text{d}x_m$
+map to zero and hence $\Omega_{C_i/B_i} = 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-isomorphism}
+Let $A = \colim_{i \in I} A_i$ be a directed colimit of rings.
+Let $0 \in I$ and $\varphi_0 : B_0 \to C_0$ a map of $A_0$-algebras.
+Assume
+\begin{enumerate}
+\item $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is
+an isomorphism,
+\item $B_0 \to C_0$ is of finite presentation.
+\end{enumerate}
+Then for some $i \geq 0$ the map
+$A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$ is
+an isomorphism.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-colimit-surjective} there exists an $i$ such that
+$A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$ is
+surjective. Since the map is of finite presentation
+the kernel is a finitely generated ideal. Let
+$g_1, \ldots, g_r \in A_i \otimes_{A_0} B_0$ generate the kernel.
The images of the $g_j$ in $A \otimes_{A_0} B_0$ are zero because the
map in (1) is an isomorphism. Hence we may pick $i' \geq i$ such that
the $g_j$ map to zero in $A_{i'} \otimes_{A_0} B_0$. Then
$A_{i'} \otimes_{A_0} B_0 \to A_{i'} \otimes_{A_0} C_0$ is
an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-etale}
+Let $A = \colim_{i \in I} A_i$ be a directed colimit of rings.
+Let $0 \in I$ and $\varphi_0 : B_0 \to C_0$ a map of $A_0$-algebras.
+Assume
+\begin{enumerate}
+\item $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is \'etale,
+\item $B_0 \to C_0$ is of finite presentation.
+\end{enumerate}
+Then for some $i \geq 0$ the map
+$A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$
+is \'etale.
+\end{lemma}
+
+\begin{proof}
+Write $C_0 = B_0[x_1, \ldots, x_n]/(f_{1, 0}, \ldots, f_{m, 0})$.
+Write $B_i = A_i \otimes_{A_0} B_0$ and $C_i = A_i \otimes_{A_0} C_0$.
+Note that $C_i = B_i[x_1, \ldots, x_n]/(f_{1, i}, \ldots, f_{m, i})$
+where $f_{j, i}$ is the image of $f_{j, 0}$ in the polynomial ring
+over $B_i$. Write $B = A \otimes_{A_0} B_0$ and $C = A \otimes_{A_0} C_0$.
+Note that $C = B[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$
+where $f_j$ is the image of $f_{j, 0}$ in the polynomial ring
+over $B$. The assumption is that the map
+$$
+\text{d} :
+(f_1, \ldots, f_m)/(f_1, \ldots, f_m)^2
+\longrightarrow
+\bigoplus C \text{d}x_k
+$$
+is an isomorphism. Thus for sufficiently large $i$ we can find elements
+$$
+\xi_{k, i} \in (f_{1, i}, \ldots, f_{m, i})/(f_{1, i}, \ldots, f_{m, i})^2
+$$
+with $\text{d}\xi_{k, i} = \text{d}x_k$ in $\bigoplus C_i \text{d}x_k$.
+Moreover, on increasing $i$ if necessary, we see that
+$\sum (\partial f_{j, i}/\partial x_k) \xi_{k, i} =
+f_{j, i} \bmod (f_{1, i}, \ldots, f_{m, i})^2$
+since this is true in the limit. Then this $i$ works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-smooth}
+Let $A = \colim_{i \in I} A_i$ be a directed colimit of rings.
+Let $0 \in I$ and $\varphi_0 : B_0 \to C_0$ a map of $A_0$-algebras.
+Assume
+\begin{enumerate}
+\item $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is smooth,
+\item $B_0 \to C_0$ is of finite presentation.
+\end{enumerate}
+Then for some $i \geq 0$ the map
+$A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$ is smooth.
+\end{lemma}
+
+\begin{proof}
+Write $C_0 = B_0[x_1, \ldots, x_n]/(f_{1, 0}, \ldots, f_{m, 0})$.
+Write $B_i = A_i \otimes_{A_0} B_0$ and $C_i = A_i \otimes_{A_0} C_0$.
+Note that $C_i = B_i[x_1, \ldots, x_n]/(f_{1, i}, \ldots, f_{m, i})$
+where $f_{j, i}$ is the image of $f_{j, 0}$ in the polynomial ring
+over $B_i$. Write $B = A \otimes_{A_0} B_0$ and $C = A \otimes_{A_0} C_0$.
+Note that $C = B[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$
+where $f_j$ is the image of $f_{j, 0}$ in the polynomial ring
+over $B$. The assumption is that the map
+$$
+\text{d} :
+(f_1, \ldots, f_m)/(f_1, \ldots, f_m)^2
+\longrightarrow
+\bigoplus C \text{d}x_k
+$$
+is a split injection. Let $\xi_k \in (f_1, \ldots, f_m)/(f_1, \ldots, f_m)^2$
+be elements such that $\sum (\partial f_j/\partial x_k) \xi_k =
+f_j \bmod (f_1, \ldots, f_m)^2$. Then for sufficiently large $i$ we can
+find elements
+$$
+\xi_{k, i} \in (f_{1, i}, \ldots, f_{m, i})/(f_{1, i}, \ldots, f_{m, i})^2
+$$
+with $\sum (\partial f_{j, i}/\partial x_k) \xi_{k, i} =
+f_{j, i} \bmod (f_{1, i}, \ldots, f_{m, i})^2$
+since this is true in the limit. Then this $i$ works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-lci}
+Let $A = \colim_{i \in I} A_i$ be a directed colimit of rings.
+Let $0 \in I$ and $\varphi_0 : B_0 \to C_0$ a map of $A_0$-algebras.
+Assume
+\begin{enumerate}
+\item $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is
+syntomic (resp.\ a relative global complete intersection),
+\item $C_0$ is of finite presentation over $B_0$.
+\end{enumerate}
+Then there exists an $i \geq 0$ such that the map
+$A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$
+is syntomic (resp.\ a relative global complete intersection).
+\end{lemma}
+
+\begin{proof}
+Assume $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is a relative
+global complete intersection.
+By Lemma \ref{lemma-relative-global-complete-intersection-Noetherian}
+there exists a finite type $\mathbf{Z}$-algebra $R$,
+a ring map $R \to A \otimes_{A_0} B_0$, a relative
+global complete intersection $R \to S$, and an isomorphism
+$$
+(A \otimes_{A_0} B_0) \otimes_R S
+\longrightarrow
+A \otimes_{A_0} C_0
+$$
+Because $R$ is of finite type (and hence finite presentation)
+over $\mathbf{Z}$, there exists an $i$ and a map
+$R \to A_i \otimes_{A_0} B_0$ lifting the map $R \to A \otimes_{A_0} B_0$,
+see Lemma \ref{lemma-characterize-finite-presentation}.
+Using the same lemma, there exists an $i' \geq i$ such that
+$(A_i \otimes_{A_0} B_0) \otimes_R S \to A \otimes_{A_0} C_0$
+comes from a map
+$(A_i \otimes_{A_0} B_0) \otimes_R S \to A_{i'} \otimes_{A_0} C_0$.
+Thus we may assume, after replacing $i$ by $i'$,
+that the displayed map comes from an $A_i \otimes_{A_0} B_0$-algebra map
+$$
+(A_i \otimes_{A_0} B_0) \otimes_R S
+\longrightarrow
+A_i \otimes_{A_0} C_0
+$$
+By Lemma \ref{lemma-colimit-isomorphism} after increasing $i$ this
+map is an isomorphism. This finishes the proof in this case because the base
+change of a relative global complete intersection is a relative
+global complete intersection by
+Lemma \ref{lemma-base-change-relative-global-complete-intersection}.
+
+\medskip\noindent
+Assume $A \otimes_{A_0} B_0 \to A \otimes_{A_0} C_0$ is syntomic.
+Then there exist elements $g_1, \ldots, g_m$ in
+$A \otimes_{A_0} C_0$ generating the unit ideal such that
+$A \otimes_{A_0} B_0 \to (A \otimes_{A_0} C_0)_{g_j}$ is a
+relative global complete intersection, see Lemma \ref{lemma-syntomic}.
+We can find an $i$ and elements $g_{i, j} \in A_i \otimes_{A_0} C_0$
+mapping to $g_j$. After increasing $i$ we may assume
+$g_{i, 1}, \ldots, g_{i, m}$ generate the unit ideal
+of $A_i \otimes_{A_0} C_0$. The result of the previous paragraph
+implies that, after increasing $i$, we may assume the maps
+$A_i \otimes_{A_0} B_0 \to (A_i \otimes_{A_0} C_0)_{g_{i, j}}$
+are relative global complete intersections.
+Then $A_i \otimes_{A_0} B_0 \to A_i \otimes_{A_0} C_0$
+is syntomic by Lemma \ref{lemma-local-syntomic}
+(and the already used Lemma \ref{lemma-syntomic}).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\noindent
+The following lemma is an application of the results above
+which doesn't seem to fit well anywhere else.
+
+\begin{lemma}
+\label{lemma-fppf-fpqf}
+Let $R \to S$ be a faithfully flat ring map of finite presentation.
+Then there exists a commutative diagram
+$$
+\xymatrix{
+S \ar[rr] & & S' \\
+& R \ar[lu] \ar[ru]
+}
+$$
+where $R \to S'$ is quasi-finite, faithfully flat and of finite presentation.
+\end{lemma}
+
+\begin{proof}
+As a first step we reduce this lemma to the case where $R$ is of finite
+type over $\mathbf{Z}$.
+By Lemma \ref{lemma-descend-faithfully-flat-finite-presentation}
+there exists a diagram
+$$
+\xymatrix{
+S_0 \ar[r] & S \\
+R_0 \ar[u] \ar[r] & R \ar[u]
+}
+$$
+where $R_0$ is of finite type over $\mathbf{Z}$,
+and $S_0$ is faithfully flat of finite presentation over $R_0$
+such that $S = R \otimes_{R_0} S_0$.
+If we prove the lemma for the ring map $R_0 \to S_0$, then the lemma
+follows for $R \to S$ by base change, as the base change of
+a quasi-finite ring map is quasi-finite, see
+Lemma \ref{lemma-quasi-finite-base-change}. (Of course we
+also use that base changes of flat maps are flat and
+base changes of maps of finite presentation are of finite presentation.)
+
+\medskip\noindent
+Assume $R \to S$ is a faithfully flat ring map of finite presentation
+and that $R$ is Noetherian (which we may assume by the preceding
+paragraph). Let $W \subset \Spec(S)$ be the open set of
+Lemma \ref{lemma-finite-presentation-flat-CM-locus-open}.
+As $R \to S$ is faithfully flat the map $\Spec(S) \to \Spec(R)$
+is surjective, see Lemma \ref{lemma-ff-rings}.
+By Lemma \ref{lemma-generic-CM-flat-finite-presentation}
+the map $W \to \Spec(R)$ is also surjective.
+Hence by replacing $S$ with a product $S_{g_1} \times \ldots \times S_{g_m}$
+we may assume $W = \Spec(S)$; here we use that $\Spec(R)$
+is quasi-compact (Lemma \ref{lemma-quasi-compact}), and that the map
+$\Spec(S) \to \Spec(R)$ is open
+(Proposition \ref{proposition-fppf-open}).
+Suppose that $\mathfrak p \subset R$ is a prime. Choose a prime
+$\mathfrak q \subset S$ lying over $\mathfrak p$ which corresponds
+to a maximal ideal of the fibre ring $S \otimes_R \kappa(\mathfrak p)$.
+The Noetherian local ring
+$\overline{S}_{\mathfrak q} = S_{\mathfrak q}/\mathfrak pS_{\mathfrak q}$
+is Cohen-Macaulay, say of dimension $d$. We may choose $f_1, \ldots, f_d$
+in the maximal ideal of $S_{\mathfrak q}$ which map to a regular sequence
+in $\overline{S}_{\mathfrak q}$. Choose a common denominator
+$g \in S$, $g \not \in \mathfrak q$ of $f_1, \ldots, f_d$, and consider
+the $R$-algebra
+$$
+S' = S_g/(f_1, \ldots, f_d).
+$$
+By construction there is a prime ideal $\mathfrak q' \subset S'$
+lying over $\mathfrak p$ and corresponding to $\mathfrak q$ (via
+the surjection $S_g \to S'$). Also by construction the ring map $R \to S'$ is
+quasi-finite at $\mathfrak q'$ as the local ring
+$$
+S'_{\mathfrak q'}/\mathfrak pS'_{\mathfrak q'} =
+S_{\mathfrak q}/\left((f_1, \ldots, f_d) + \mathfrak pS_{\mathfrak q}\right) =
+\overline{S}_{\mathfrak q}/(\overline{f}_1, \ldots, \overline{f}_d)
+$$
+has dimension zero, see Lemma \ref{lemma-isolated-point-fibre}.
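+Here the dimension count is the usual one for regular sequences: since
+$\overline{f}_1, \ldots, \overline{f}_d$ is a regular sequence in the
+$d$-dimensional Cohen-Macaulay local ring $\overline{S}_{\mathfrak q}$
+we have
+$$
+\dim\left(\overline{S}_{\mathfrak q}/
+(\overline{f}_1, \ldots, \overline{f}_i)\right) = d - i
+\quad\text{for } 0 \leq i \leq d.
+$$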
+Also by construction $R \to S'$ is of finite presentation.
+Finally, by Lemma \ref{lemma-grothendieck-regular-sequence} the local ring map
+$R_{\mathfrak p} \to S'_{\mathfrak q'}$ is flat (this is where we
+use that $R$ is Noetherian). Hence, by openness of flatness
+(Theorem \ref{theorem-openness-flatness}), and openness of quasi-finiteness
+(Lemma \ref{lemma-quasi-finite-open})
+we may, after replacing
+$g$ by $gg'$ for a suitable $g' \in S$, $g' \not \in \mathfrak q$,
+assume that $R \to S'$ is flat and quasi-finite.
+The image of $\Spec(S') \to \Spec(R)$ is open and
+contains $\mathfrak p$. In other words, we have shown that
+a ring $S'$ as in the statement of the lemma exists (except possibly
+for the faithfulness part) whose image in $\Spec(R)$ contains any given prime.
+Using one more time the quasi-compactness of $\Spec(R)$
+we see that a finite product of such rings does the job.
+\end{proof}
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/algebraic.tex b/books/stacks/algebraic.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0c827f63a64ecb95d0515f76b3167bb475b26ba8
--- /dev/null
+++ b/books/stacks/algebraic.tex
@@ -0,0 +1,2664 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Algebraic Stacks}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+This is where we define algebraic stacks and make some very elementary
+observations. The general philosophy will be to have no separation
+conditions whatsoever and add those conditions necessary to make lemmas,
+propositions, theorems true/provable. Thus the notions discussed here
+differ slightly from those in other places in the literature, e.g.,
+\cite{LM-B}.
+
+\medskip\noindent
+This chapter is not an introduction to algebraic stacks.
+For an informal discussion of algebraic stacks, please take a look at
+Introducing Algebraic Stacks, Section
+\ref{stacks-introduction-section-introduction}.
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+The conventions we use in this chapter are the same as those in the
+chapter on algebraic spaces. For convenience we repeat them here.
+
+\medskip\noindent
+We work in a suitable big fppf site $\Sch_{fppf}$
+as in Topologies, Definition \ref{topologies-definition-big-fppf-site}.
+So, if not explicitly stated otherwise all schemes will be objects
+of $\Sch_{fppf}$. We discuss what changes if you change the big
+fppf site in
+Section \ref{section-change-big-site}.
+
+\medskip\noindent
+We will always work relative to a base $S$ contained in $\Sch_{fppf}$.
+And we will then work with the big fppf site $(\Sch/S)_{fppf}$, see
+Topologies, Definition \ref{topologies-definition-big-small-fppf}.
+The absolute case can be recovered by taking
+$S = \Spec(\mathbf{Z})$.
+
+\medskip\noindent
+If $U, T$ are schemes over $S$, then we write
+$U(T)$ for the set of $T$-valued points {\it over} $S$.
+In a formula: $U(T) = \Mor_S(T, U)$.
+
+\medskip\noindent
+Note that any fpqc covering is a universal effective
+epimorphism, see
+Descent, Lemma \ref{descent-lemma-fpqc-universal-effective-epimorphisms}.
+Hence the topology on $\Sch_{fppf}$
+is weaker than the canonical topology and all representable presheaves
+are sheaves.
+
+
+
+
+
+
+
+
+\section{Notation}
+\label{section-notation}
+
+\noindent
+We use the letters $S, T, U, V, X, Y$ to indicate schemes.
+We use the letters $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$ to indicate
+categories (fibred, fibred in groupoids, stacks, ...)
+over $(\Sch/S)_{fppf}$. We use lowercase letters
+$f$, $g$ for functors such as $f : \mathcal{X} \to \mathcal{Y}$
+over $(\Sch/S)_{fppf}$.
+We use capital $F$, $G$, $H$ for algebraic spaces over $S$, and more
+generally for presheaves of sets on $(\Sch/S)_{fppf}$.
+(In future chapters we will revert to also using $X$, $Y$, etc.\ for
+algebraic spaces.)
+
+\medskip\noindent
+The reason for these choices is that we want to clearly distinguish between
+the different types of objects in this chapter, as we build the foundations.
+
+
+
+
+
+
+
+
+
+\section{Representable categories fibred in groupoids}
+\label{section-representable}
+
+\noindent
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+The basic object of study in this chapter will be a
+category fibred in groupoids
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$, see
+Categories, Definition \ref{categories-definition-fibred-groupoids}.
+We will often simply say ``let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$'' to indicate
+this situation. A $1$-morphism $\mathcal{X} \to \mathcal{Y}$ of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$ will be a $1$-morphism
+in the $2$-category of categories fibred in groupoids over
+$(\Sch/S)_{fppf}$, see
+Categories,
+Definition \ref{categories-definition-categories-fibred-in-groupoids-over-C}.
+It is simply a functor $\mathcal{X} \to \mathcal{Y}$ over
+$(\Sch/S)_{fppf}$.
+We recall that this is really a $(2, 1)$-category and that all $2$-fibre products
+exist.
+
+\medskip\noindent
+Let $\mathcal{X}$ be a category fibred in groupoids over
+$(\Sch/S)_{fppf}$. Recall that $\mathcal{X}$
+is said to be {\it representable} if there exists a
+scheme $U \in \Ob((\Sch/S)_{fppf})$ and an
+equivalence
+$$
+j : \mathcal{X} \longrightarrow (\Sch/U)_{fppf}
+$$
+of categories over $(\Sch/S)_{fppf}$, see
+Categories,
+Definition \ref{categories-definition-representable-fibred-category}.
+We will sometimes say that $\mathcal{X}$ is
+{\it representable by a scheme} to distinguish from the case
+where $\mathcal{X}$ is representable by an algebraic space (see
+below).
+
+\medskip\noindent
+If $\mathcal{X}, \mathcal{Y}$ are fibred in groupoids and
+representable by $U, V$, then we have
+\begin{equation}
+\label{equation-morphisms-schemes}
+\Mor_{\textit{Cat}/(\Sch/S)_{fppf}}(\mathcal{X}, \mathcal{Y})
+\Big/
+2\text{-isomorphism}
+=
+\Mor_{\Sch/S}(U, V)
+\end{equation}
+see
+Categories,
+Lemma \ref{categories-lemma-morphisms-representable-fibred-categories}.
+More precisely, any $1$-morphism $\mathcal{X} \to \mathcal{Y}$
+gives rise to a morphism $U \to V$. Conversely, given a morphism
+of schemes $U \to V$ over $S$ there exists a $1$-morphism
+$\phi : \mathcal{X} \to \mathcal{Y}$ which gives rise to $U \to V$
+and which is unique up to unique $2$-isomorphism.
+
+
+
+
+
+
+\section{The 2-Yoneda lemma}
+\label{section-2-yoneda}
+
+\noindent
+Let $U \in \Ob((\Sch/S)_{fppf})$, and let $\mathcal{X}$ be a
+category fibred in groupoids over $(\Sch/S)_{fppf}$.
+We will frequently use the $2$-Yoneda lemma, see
+Categories, Lemma \ref{categories-lemma-yoneda-2category}.
+Technically it says that there is an equivalence of categories
+$$
+\Mor_{\textit{Cat}/(\Sch/S)_{fppf}}(
+(\Sch/U)_{fppf}, \mathcal{X})
+\longrightarrow
+\mathcal{X}_U, \quad
+f \longmapsto f(U/U).
+$$
+It says that $1$-morphisms $(\Sch/U)_{fppf} \to \mathcal{X}$
+correspond to objects $x$ of the fibre category $\mathcal{X}_U$.
+Namely, given a $1$-morphism $f : (\Sch/U)_{fppf} \to \mathcal{X}$
+we obtain the object $x = f(U/U) \in \Ob(\mathcal{X}_U)$.
+Conversely, given a choice of pullbacks for $\mathcal{X}$ as in
+Categories,
+Definition \ref{categories-definition-pullback-functor-fibred-category},
+and an object $x$ of $\mathcal{X}_U$, we obtain a functor
+$(\Sch/U)_{fppf} \to \mathcal{X}$ defined by the rule
+$$
+(\varphi : V \to U) \longmapsto \varphi^*x
+$$
+on objects. By abuse of notation we use
+$x : (\Sch/U)_{fppf} \to \mathcal{X}$
+to indicate this functor. It indeed has the property that $x(U/U) = x$
+and moreover, given any other functor $f$ with $f(U/U) = x$ there exists
+a unique $2$-isomorphism $x \to f$. In other words the functor $x$
+is well determined by the object $x$ up to unique $2$-isomorphism.
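+On morphisms the functor $x$ is given by the following rule: for
+$h : V' \to V$ with $\varphi \circ h = \varphi'$ (a morphism $V'/U \to V/U$)
+the image of $h$ is the unique morphism
+$$
+(\varphi')^*x \longrightarrow \varphi^*x
+$$
+lying over $h$ whose composition with the strongly cartesian morphism
+$\varphi^*x \to x$ equals the strongly cartesian morphism $(\varphi')^*x \to x$.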
+
+\medskip\noindent
+We will use this without further mention in the following.
+
+
+
+
+
+\section{Representable morphisms of categories fibred in groupoids}
+\label{section-representable-morphism}
+
+
+\noindent
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids
+over $(\Sch/S)_{fppf}$. Let $f : \mathcal{X} \to \mathcal{Y}$
+be a {\it representable $1$-morphism}, see
+Categories, Definition
+\ref{categories-definition-representable-map-categories-fibred-in-groupoids}.
+This means that for every $U \in \Ob((\Sch/S)_{fppf})$ and
+any $y \in \Ob(\mathcal{Y}_U)$ the $2$-fibre product
+$(\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}$
+is representable. Choose a representing object $V_y$ and an equivalence
+$$
+(\Sch/V_y)_{fppf}
+\longrightarrow
+(\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}.
+$$
+The projection
+$(\Sch/V_y)_{fppf} \to
+(\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}
+\to (\Sch/U)_{fppf}$
+comes from a morphism of schemes $f_y : V_y \to U$, see
+Section \ref{section-representable}. We represent this by the diagram
+\begin{equation}
+\label{equation-representable}
+\vcenter{
+\xymatrix{
+V_y \ar@{~>}[r] \ar[d]_{f_y} &
+(\Sch/V_y)_{fppf} \ar[d] \ar[r] &
+\mathcal{X} \ar[d]^f \\
+U \ar@{~>}[r] &
+(\Sch/U)_{fppf} \ar[r]^-y &
+\mathcal{Y}
+}
+}
+\end{equation}
+where the squiggly arrows represent the $2$-Yoneda embedding.
+Here are some lemmas about this notion that work in great generality
+(namely, they work for categories fibred in groupoids over any
+base category which has fibre products).
+
+\begin{lemma}
+\label{lemma-morphism-schemes-gives-representable-transformation}
+Let $S$, $X$, $Y$ be objects of $\Sch_{fppf}$.
+Let $f : X \to Y$ be a morphism of schemes.
+Then the $1$-morphism induced by $f$
+$$
+(\Sch/X)_{fppf} \longrightarrow (\Sch/Y)_{fppf}
+$$
+is a representable $1$-morphism.
+\end{lemma}
+
+\begin{proof}
+This is formal and relies only on the fact that
+the category $(\Sch/S)_{fppf}$ has fibre products.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-morphism-equivalent}
+Let $S$ be an object of $\Sch_{fppf}$.
+Consider a $2$-commutative diagram
+$$
+\xymatrix{
+\mathcal{X}' \ar[r] \ar[d]_{f'} & \mathcal{X} \ar[d]^f \\
+\mathcal{Y}' \ar[r] & \mathcal{Y}
+}
+$$
+of $1$-morphisms of categories fibred in groupoids over
+$(\Sch/S)_{fppf}$.
+Assume the horizontal arrows are equivalences.
+Then $f$ is representable if and only if $f'$ is representable.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-representable-transformations}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+be categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$, $g : \mathcal{Y} \to \mathcal{Z}$
+be representable $1$-morphisms. Then
+$$
+g \circ f : \mathcal{X} \longrightarrow \mathcal{Z}
+$$
+is a representable $1$-morphism.
+\end{lemma}
+
+\begin{proof}
+This is entirely formal and works in any category.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-representable-transformations}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+be categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a representable $1$-morphism.
+Let $g : \mathcal{Z} \to \mathcal{Y}$ be any $1$-morphism.
+Consider the fibre product diagram
+$$
+\xymatrix{
+\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X} \ar[r]_-{g'} \ar[d]_{f'} &
+\mathcal{X} \ar[d]^f \\
+\mathcal{Z} \ar[r]^g & \mathcal{Y}
+}
+$$
+Then the base change $f'$ is a representable $1$-morphism.
+\end{lemma}
+
+\begin{proof}
+This is entirely formal and works in any category.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-representable-transformations}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}_i, \mathcal{Y}_i$ be categories fibred in groupoids over
+$(\Sch/S)_{fppf}$, $i = 1, 2$.
+Let $f_i : \mathcal{X}_i \to \mathcal{Y}_i$, $i = 1, 2$
+be representable $1$-morphisms.
+Then
+$$
+f_1 \times f_2 :
+\mathcal{X}_1 \times \mathcal{X}_2
+\longrightarrow
+\mathcal{Y}_1 \times \mathcal{Y}_2
+$$
+is a representable $1$-morphism.
+\end{lemma}
+
+\begin{proof}
+Write $f_1 \times f_2$ as the composition
+$\mathcal{X}_1 \times \mathcal{X}_2 \to
+\mathcal{Y}_1 \times \mathcal{X}_2 \to
+\mathcal{Y}_1 \times \mathcal{Y}_2$.
+The first arrow is the base change of $f_1$ by the map
+$\mathcal{Y}_1 \times \mathcal{X}_2 \to \mathcal{Y}_1$, and the second arrow
+is the base change of $f_2$ by the map
+$\mathcal{Y}_1 \times \mathcal{Y}_2 \to \mathcal{Y}_2$.
+Hence this lemma is a formal
+consequence of Lemmas \ref{lemma-composition-representable-transformations}
+and \ref{lemma-base-change-representable-transformations}.
+\end{proof}
+
+
+
+\section{Split categories fibred in groupoids}
+\label{section-split}
+
+\noindent
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Recall that given a ``presheaf of groupoids''
+$$
+F : (\Sch/S)_{fppf}^{opp} \longrightarrow \textit{Groupoids}
+$$
+we get a category fibred in groupoids $\mathcal{S}_F$ over
+$(\Sch/S)_{fppf}$, see
+Categories, Example \ref{categories-example-functor-groupoids}.
+Any category fibred in groupoids isomorphic (!) to one of these
+is called a {\it split category fibred in groupoids}.
+Any category fibred in groupoids is equivalent to a split one.
+
+\medskip\noindent
+If $F$ is a presheaf of sets then $\mathcal{S}_F$ is
+fibred in sets, see
+Categories,
+Definition \ref{categories-definition-category-fibred-sets},
+and
+Categories, Example \ref{categories-example-presheaf}.
+The rule $F \mapsto \mathcal{S}_F$ is in some sense fully faithful
+on presheaves, see
+Categories, Lemma \ref{categories-lemma-2-category-fibred-sets}.
+If $F, G$ are presheaves, then
+$$
+\mathcal{S}_{F \times G}
+=
+\mathcal{S}_F \times_{(\Sch/S)_{fppf}} \mathcal{S}_G
+$$
+and if $F \to H$ and $G \to H$ are maps of presheaves of sets, then
+$$
+\mathcal{S}_{F \times_H G} =
+\mathcal{S}_F \times_{\mathcal{S}_H} \mathcal{S}_G
+$$
+where the right hand sides are $2$-fibre products. This is immediate
+from the definitions as the fibre categories of
+$\mathcal{S}_F, \mathcal{S}_G, \mathcal{S}_H$ have only identity morphisms.
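+Note that the first of these formulas is the special case of the second where
+$H$ is the final presheaf of sets, i.e., $H(T) = \{*\}$ for all $T$: in this
+case $F \times_H G = F \times G$ and $\mathcal{S}_H$ is canonically isomorphic
+to $(\Sch/S)_{fppf}$, whence
+$$
+\mathcal{S}_F \times_{\mathcal{S}_H} \mathcal{S}_G
+=
+\mathcal{S}_F \times_{(\Sch/S)_{fppf}} \mathcal{S}_G.
+$$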
+
+\medskip\noindent
+An even more special case is where $F = h_X$ is a representable
+presheaf. In this case we have
+$\mathcal{S}_{h_X} = (\Sch/X)_{fppf}$, see
+Categories,
+Example \ref{categories-example-fibred-category-from-functor-of-points}.
+
+\medskip\noindent
+We will use the notation $\mathcal{S}_F$ without further mention in the
+following.
+
+
+
+
+\section{Categories fibred in groupoids representable by algebraic spaces}
+\label{section-representable-by-algebraic-spaces}
+
+\noindent
+A slightly weaker notion than being representable is the notion of
+being representable by algebraic spaces which we discuss in this section.
+This discussion might have been avoided had we worked with some category
+$\textit{Spaces}_{fppf}$ of algebraic spaces instead of the category
+$\Sch_{fppf}$. However, it seems to us natural to consider the
+category of schemes as the natural collection of ``test objects'' over
+which the fibre categories of an algebraic stack are defined.
+
+\medskip\noindent
+In analogy with Categories, Definition
+\ref{categories-definition-representable-fibred-category}
+we make the following definition.
+
+\begin{definition}
+\label{definition-representable-by-algebraic-space}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+A category fibred in groupoids $p : \mathcal{X} \to (\Sch/S)_{fppf}$
+is called {\it representable by an algebraic space over $S$}
+if there exists an algebraic space $F$ over $S$ and an equivalence
+$j : \mathcal{X} \to \mathcal{S}_F$
+of categories over $(\Sch/S)_{fppf}$.
+\end{definition}
+
+\noindent
+We continue our abuse of notation in suppressing the equivalence $j$
+whenever we encounter such a situation.
+It follows formally from the above that if $\mathcal{X}$ is
+representable (by a scheme), then it is representable by an
+algebraic space. Here is the analogue of
+Categories,
+Lemma \ref{categories-lemma-characterize-representable-fibred-category}.
+
+\begin{lemma}
+\label{lemma-characterize-representable-by-space}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $p : \mathcal{X} \to (\Sch/S)_{fppf}$
+be a category fibred in groupoids.
+Then $\mathcal{X}$ is representable by an algebraic space over $S$
+if and only if the following conditions are satisfied:
+\begin{enumerate}
+\item $\mathcal{X}$ is fibred in setoids\footnote{This means that
+it is fibred in groupoids and objects in the fibre categories
+have no nontrivial automorphisms, see Categories,
+Definition \ref{categories-definition-category-fibred-sets}.}, and
+\item the presheaf $U \mapsto \Ob(\mathcal{X}_U)/\!\!\cong$ is
+an algebraic space.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted, but see Categories,
+Lemma \ref{categories-lemma-characterize-representable-fibred-category}.
+\end{proof}
+
+\noindent
+If $\mathcal{X}, \mathcal{Y}$ are fibred in groupoids and
+representable by algebraic spaces $F, G$ over $S$, then we have
+\begin{equation}
+\label{equation-morphisms-spaces}
+\Mor_{\textit{Cat}/(\Sch/S)_{fppf}}(\mathcal{X}, \mathcal{Y})
+\Big/
+2\text{-isomorphism}
+=
+\Mor_{\Sch/S}(F, G)
+\end{equation}
+see
+Categories, Lemma \ref{categories-lemma-2-category-fibred-setoids}.
+More precisely, any $1$-morphism $\mathcal{X} \to \mathcal{Y}$
+gives rise to a morphism $F \to G$. Conversely, given a morphism
+of sheaves $F \to G$ over $S$ there exists a $1$-morphism
+$\phi : \mathcal{X} \to \mathcal{Y}$ which gives rise to $F \to G$
+and which is unique up to unique $2$-isomorphism.
+
+
+
+\section{Morphisms representable by algebraic spaces}
+\label{section-morphisms-representable-by-algebraic-spaces}
+
+\noindent
+In analogy with Categories, Definition
+\ref{categories-definition-representable-map-categories-fibred-in-groupoids}
+we make the following definition.
+
+\begin{definition}
+\label{definition-representable-by-algebraic-spaces}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+A $1$-morphism $f : \mathcal{X} \to \mathcal{Y}$ of
+categories fibred in groupoids over $(\Sch/S)_{fppf}$
+is called {\it representable by algebraic spaces} if
+for any $U \in \Ob((\Sch/S)_{fppf})$
+and any $y : (\Sch/U)_{fppf} \to \mathcal{Y}$
+the category fibred in groupoids
+$$
+(\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}
+$$
+over $(\Sch/U)_{fppf}$
+is representable by an algebraic space over $U$.
+\end{definition}
+
+\noindent
+Choose an algebraic space $F_y$ over $U$ which represents
+$(\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}$.
+We may think of $F_y$ as an algebraic space over $S$
+which comes equipped with a canonical morphism $f_y : F_y \to U$
+over $S$, see
+Spaces, Section \ref{spaces-section-change-base-scheme}.
+Here is the diagram
+\begin{equation}
+\label{equation-representable-by-algebraic-spaces}
+\vcenter{
+\xymatrix{
+F_y \ar[d]_{f_y} &
+(\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}
+\ar@{~>}[l] \ar[d]_{\text{pr}_0} \ar[r]_-{\text{pr}_1} &
+\mathcal{X} \ar[d]^f \\
+U &
+(\Sch/U)_{fppf} \ar@{~>}[l] \ar[r]^-y &
+\mathcal{Y}
+}
+}
+\end{equation}
+where the squiggly arrows represent the construction which associates
+to a stack fibred in setoids its associated sheaf of isomorphism classes
+of objects. The right square is
+$2$-commutative, and is a $2$-fibre product square.
+
+\medskip\noindent
+Here is the analogue of Categories,
+Lemma \ref{categories-lemma-criterion-representable-map-stack-in-groupoids}.
+
+\begin{lemma}
+\label{lemma-criterion-map-representable-spaces-fibred-in-groupoids}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+The following are necessary and sufficient conditions for
+$f$ to be representable by algebraic spaces:
+\begin{enumerate}
+\item for each scheme $U/S$ the
+functor $f_U : \mathcal{X}_U \longrightarrow \mathcal{Y}_U$
+between fibre categories is faithful, and
+\item for each $U$ and each $y \in \Ob(\mathcal{Y}_U)$ the presheaf
+$$
+(h : V \to U)
+\longmapsto
+\{(x, \phi) \mid x \in \Ob(\mathcal{X}_V), \phi : h^*y \to f(x)\}/\cong
+$$
+is an algebraic space over $U$.
+\end{enumerate}
+Here we have made a choice of pullbacks for $\mathcal{Y}$.
+\end{lemma}
+
+\begin{proof}
+This follows from the description of fibre categories of the $2$-fibre products
+$(\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}$ in
+Categories, Lemma \ref{categories-lemma-identify-fibre-product}
+combined with
+Lemma \ref{lemma-characterize-representable-by-space}.
+\end{proof}
+
+\noindent
+Here are some lemmas about this notion that work in great generality.
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-morphism-equivalent}
+Let $S$ be an object of $\Sch_{fppf}$.
+Consider a $2$-commutative diagram
+$$
+\xymatrix{
+\mathcal{X}' \ar[r] \ar[d]_{f'} & \mathcal{X} \ar[d]^f \\
+\mathcal{Y}' \ar[r] & \mathcal{Y}
+}
+$$
+of $1$-morphisms of categories fibred in groupoids over
+$(\Sch/S)_{fppf}$.
+Assume the horizontal arrows are equivalences.
+Then $f$ is representable by algebraic spaces
+if and only if $f'$ is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphism-spaces-gives-representable-by-spaces}
+Let $S$ be an object of $\Sch_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$
+be a $1$-morphism of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $\mathcal{X}$ and $\mathcal{Y}$ are representable by
+algebraic spaces over $S$, then the $1$-morphism $f$
+is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Omitted. This relies only on the fact that
+the category of algebraic spaces over $S$ has fibre products,
+see Spaces, Lemma \ref{spaces-lemma-fibre-product-spaces}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-presheaves-representable-by-algebraic-spaces}
+Let $S$ be an object of $\Sch_{fppf}$.
+Let $a : F \to G$ be a map of presheaves of sets on $(\Sch/S)_{fppf}$.
+Denote $a' : \mathcal{S}_F \to \mathcal{S}_G$ the associated
+map of categories fibred in sets.
+Then $a$ is representable by algebraic spaces (see
+Bootstrap,
+Definition \ref{bootstrap-definition-morphism-representable-by-spaces})
+if and only if $a'$ is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-fibred-setoids-representable-algebraic-spaces}
+Let $S$ be an object of $\Sch_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of
+categories fibred in setoids over $(\Sch/S)_{fppf}$.
+Let $F$, resp.\ $G$ be the presheaf which to $T$ associates
+the set of isomorphism classes of objects of
+$\mathcal{X}_T$, resp.\ $\mathcal{Y}_T$.
+Let $a : F \to G$ be the map of presheaves corresponding to $f$.
+Then $a$ is representable by algebraic spaces (see
+Bootstrap,
+Definition \ref{bootstrap-definition-morphism-representable-by-spaces})
+if and only if $f$ is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Combine
+Lemmas \ref{lemma-representable-by-spaces-morphism-equivalent}
+and \ref{lemma-map-presheaves-representable-by-algebraic-spaces}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-representable-by-spaces}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+be categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+representable by algebraic spaces.
+Let $g : \mathcal{Z} \to \mathcal{Y}$ be any $1$-morphism.
+Consider the fibre product diagram
+$$
+\xymatrix{
+\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X} \ar[r]_-{g'} \ar[d]_{f'} &
+\mathcal{X} \ar[d]^f \\
+\mathcal{Z} \ar[r]^g & \mathcal{Y}
+}
+$$
+Then the base change $f'$ is a $1$-morphism representable by
+algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+This is formal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-by-space-representable-by-space}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+be categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$,
+$g : \mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms.
+Assume
+\begin{enumerate}
+\item $f$ is representable by algebraic spaces, and
+\item $\mathcal{Z}$ is representable by an algebraic space over $S$.
+\end{enumerate}
+Then the $2$-fibre product
+$\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X}$
+is representable by an algebraic space.
+\end{lemma}
+
+\begin{proof}
+This is a reformulation of
+Bootstrap, Lemma \ref{bootstrap-lemma-representable-by-spaces-over-space}.
+First note that
+$\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X}$
+is fibred in setoids over $(\Sch/S)_{fppf}$.
+Hence it is equivalent to $\mathcal{S}_F$ for some presheaf
+$F$ on $(\Sch/S)_{fppf}$, see
+Categories, Lemma \ref{categories-lemma-setoid-fibres}.
+Moreover, let $G$ be an algebraic space which represents
+$\mathcal{Z}$. The $1$-morphism
+$\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X} \to \mathcal{Z}$
+is representable by algebraic spaces by
+Lemma \ref{lemma-base-change-representable-by-spaces}.
+And $\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X} \to \mathcal{Z}$
+corresponds to a morphism $F \to G$ by
+Categories, Lemma \ref{categories-lemma-2-category-fibred-setoids}.
+Then $F \to G$ is representable by algebraic spaces by
+Lemma \ref{lemma-map-fibred-setoids-representable-algebraic-spaces}.
+Hence
+Bootstrap, Lemma \ref{bootstrap-lemma-representable-by-spaces-over-space}
+implies that $F$ is an algebraic space as desired.
+\end{proof}
+
+\noindent
+Let $S$, $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{Z}$, $f$, $g$ be as in
+Lemma \ref{lemma-base-change-by-space-representable-by-space}.
+Let $F$ and $G$ be algebraic spaces over $S$ such that
+$F$ represents $\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X}$
+and $G$ represents $\mathcal{Z}$. The $1$-morphism
+$f' : \mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X} \to \mathcal{Z}$
+corresponds to a morphism $f' : F \to G$ of algebraic spaces
+by (\ref{equation-morphisms-spaces}).
+Thus we have the following diagram
+\begin{equation}
+\label{equation-representable-by-algebraic-spaces-on-space}
+\vcenter{
+\xymatrix{
+F \ar[d]_{f'} &
+\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X}
+\ar@{~>}[l] \ar[d] \ar[r] &
+\mathcal{X} \ar[d]^f \\
+G &
+\mathcal{Z} \ar@{~>}[l] \ar[r]^-g &
+\mathcal{Y}
+}
+}
+\end{equation}
+where the squiggly arrows represent the construction which associates
+to a stack fibred in setoids its associated sheaf of isomorphism classes
+of objects.
+
+\begin{lemma}
+\label{lemma-composition-representable-by-spaces}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+be categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $f : \mathcal{X} \to \mathcal{Y}$, $g : \mathcal{Y} \to \mathcal{Z}$
+are $1$-morphisms representable by algebraic spaces, then
+$$
+g \circ f : \mathcal{X} \longrightarrow \mathcal{Z}
+$$
+is a $1$-morphism representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Lemma \ref{lemma-base-change-by-space-representable-by-space}.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-representable-by-spaces}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}_i, \mathcal{Y}_i$ be categories fibred in groupoids over
+$(\Sch/S)_{fppf}$, $i = 1, 2$.
+Let $f_i : \mathcal{X}_i \to \mathcal{Y}_i$, $i = 1, 2$
+be $1$-morphisms representable by algebraic spaces.
+Then
+$$
+f_1 \times f_2 :
+\mathcal{X}_1 \times \mathcal{X}_2
+\longrightarrow
+\mathcal{Y}_1 \times \mathcal{Y}_2
+$$
+is a $1$-morphism representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Write $f_1 \times f_2$ as the composition
+$\mathcal{X}_1 \times \mathcal{X}_2 \to
+\mathcal{Y}_1 \times \mathcal{X}_2 \to
+\mathcal{Y}_1 \times \mathcal{Y}_2$.
+The first arrow is the base change of $f_1$ by the map
+$\mathcal{Y}_1 \times \mathcal{X}_2 \to \mathcal{Y}_1$, and the second arrow
+is the base change of $f_2$ by the map
+$\mathcal{Y}_1 \times \mathcal{Y}_2 \to \mathcal{Y}_2$.
+Hence this lemma is a formal
+consequence of Lemmas \ref{lemma-composition-representable-by-spaces}
+and \ref{lemma-base-change-representable-by-spaces}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-get-a-stack}
+\begin{reference}
+Lemma in an email of Matthew Emerton dated June 15, 2016
+\end{reference}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X} \to \mathcal{Z}$ and $\mathcal{Y} \to \mathcal{Z}$
+be $1$-morphisms of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $\mathcal{X} \to \mathcal{Z}$ is representable by algebraic spaces
+and $\mathcal{Y}$ is a stack in groupoids, then
+$\mathcal{X} \times_\mathcal{Z} \mathcal{Y}$ is a stack in groupoids.
+\end{lemma}
+
+\begin{proof}
+The property of a morphism being representable by algebraic spaces
+is preserved under base-change
+(Lemma \ref{lemma-base-change-by-space-representable-by-space}),
+and so, passing to the base-change
+$\mathcal{X} \times_\mathcal{Z} \mathcal{Y}$ over $\mathcal{Y}$,
+we may reduce to the case of a morphism of categories
+fibred in groupoids $\mathcal{X} \to \mathcal{Y}$
+which is representable by algebraic spaces, and
+whose target is a stack in groupoids; our goal is then to prove
+that $\mathcal{X}$ is also a stack in groupoids.
+This follows from Stacks, Lemma
+\ref{stacks-lemma-relative-sheaf-over-stack-is-stack}
+whose assumptions are satisfied as a result of
+Lemma \ref{lemma-criterion-map-representable-spaces-fibred-in-groupoids}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Properties of morphisms representable by algebraic spaces}
+\label{section-representable-properties}
+
+\noindent
+Here is the definition that makes this work.
+
+\begin{definition}
+\label{definition-relative-representable-property}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Assume $f$ is representable by algebraic spaces.
+Let $\mathcal{P}$ be a property of morphisms of algebraic spaces which
+\begin{enumerate}
+\item is preserved under any base change, and
+\item is fppf local on the base, see
+Descent on Spaces,
+Definition \ref{spaces-descent-definition-property-morphisms-local}.
+\end{enumerate}
+In this case we say that $f$ has {\it property $\mathcal{P}$} if for every
+$U \in \Ob((\Sch/S)_{fppf})$ and
+any $y \in \Ob(\mathcal{Y}_U)$ the resulting morphism of algebraic spaces
+$f_y : F_y \to U$, see
+diagram (\ref{equation-representable-by-algebraic-spaces}),
+has property $\mathcal{P}$.
+\end{definition}
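+
+\noindent
+To unwind the definition, recall (with notation as in diagram
+(\ref{equation-representable-by-algebraic-spaces})) that $F_y$ denotes an
+algebraic space over $U$ representing the relevant $2$-fibre product. The
+following display merely restates the condition in formulas; it adds nothing
+beyond the definition above:

```latex
% Notation as in diagram (\ref{equation-representable-by-algebraic-spaces}):
% F_y is an algebraic space over U representing the 2-fibre product below.
$$
F_y = (\Sch/U)_{fppf} \times_{y, \mathcal{Y}, f} \mathcal{X},
\qquad
f \text{ has } \mathcal{P}
\Longleftrightarrow
f_y : F_y \to U \text{ has } \mathcal{P} \text{ for all pairs } (U, y).
$$
```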
+
+\noindent
+It is important to note that we will only use this definition for
+properties of morphisms that are stable under base change, and
+local in the fppf topology on the target. This is
+not because the definition doesn't make sense otherwise; rather it
+is because we may want to give a different definition which is
+better suited to the property we have in mind.
+
+\begin{lemma}
+\label{lemma-property-morphism-equivalent}
+Let $S$ be an object of $\Sch_{fppf}$.
+Let $\mathcal{P}$ be as in
+Definition \ref{definition-relative-representable-property}.
+Consider a $2$-commutative diagram
+$$
+\xymatrix{
+\mathcal{X}' \ar[r] \ar[d]_{f'} & \mathcal{X} \ar[d]^f \\
+\mathcal{Y}' \ar[r] & \mathcal{Y}
+}
+$$
+of $1$-morphisms of categories fibred in groupoids over
+$(\Sch/S)_{fppf}$.
+Assume the horizontal arrows are equivalences and $f$ (or equivalently $f'$)
+is representable by algebraic spaces.
+Then $f$ has $\mathcal{P}$ if and only if $f'$ has $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+Note that this makes sense by
+Lemma \ref{lemma-representable-by-spaces-morphism-equivalent}.
+Proof omitted.
+\end{proof}
+
+\noindent
+Here is a sanity check.
+
+\begin{lemma}
+\label{lemma-map-presheaves-representable-by-spaces-transformation-property}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $a : F \to G$ be a map of presheaves on $(\Sch/S)_{fppf}$.
+Let $\mathcal{P}$ be as in
+Definition \ref{definition-relative-representable-property}.
+Assume $a$ is representable by algebraic spaces.
+Then $a : F \to G$ has property $\mathcal{P}$ (see
+Bootstrap, Definition \ref{bootstrap-definition-property-transformation})
+if and only if the corresponding morphism
+$\mathcal{S}_F \to \mathcal{S}_G$ of categories fibred in groupoids
+has property $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+Note that the lemma makes sense by
+Lemma \ref{lemma-map-presheaves-representable-by-algebraic-spaces}.
+Proof omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-fibred-setoids-property}
+Let $S$ be an object of $\Sch_{fppf}$. Let $\mathcal{P}$ be as in
+Definition \ref{definition-relative-representable-property}.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of
+categories fibred in setoids over $(\Sch/S)_{fppf}$.
+Let $F$, resp.\ $G$ be the presheaf which to $T$ associates
+the set of isomorphism classes of objects of
+$\mathcal{X}_T$, resp.\ $\mathcal{Y}_T$.
+Let $a : F \to G$ be the map of presheaves corresponding to $f$.
+Then $a$ has $\mathcal{P}$ if and only if $f$ has $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+The lemma makes sense by
+Lemma \ref{lemma-map-fibred-setoids-representable-algebraic-spaces}.
+The lemma follows on combining
+Lemmas \ref{lemma-property-morphism-equivalent}
+and \ref{lemma-map-presheaves-representable-by-spaces-transformation-property}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-representable-transformations-property}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{Z}$ be categories fibred
+in groupoids over $(\Sch/S)_{fppf}$.
+Let $\mathcal{P}$ be a property as in
+Definition \ref{definition-relative-representable-property}
+which is stable under composition.
+Let $f : \mathcal{X} \to \mathcal{Y}$,
+$g : \mathcal{Y} \to \mathcal{Z}$ be $1$-morphisms which
+are representable by algebraic spaces.
+If $f$ and $g$ have property $\mathcal{P}$ so does
+$g \circ f : \mathcal{X} \to \mathcal{Z}$.
+\end{lemma}
+
+\begin{proof}
+Note that the lemma makes sense by
+Lemma \ref{lemma-composition-representable-by-spaces}.
+Proof omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-representable-transformations-property}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+be categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Let $\mathcal{P}$ be a property as in
+Definition \ref{definition-relative-representable-property}.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+representable by algebraic spaces.
+Let $g : \mathcal{Z} \to \mathcal{Y}$ be any $1$-morphism.
+Consider the $2$-fibre product diagram
+$$
+\xymatrix{
+\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X} \ar[r]_-{g'} \ar[d]_{f'} &
+\mathcal{X} \ar[d]^f \\
+\mathcal{Z} \ar[r]^g & \mathcal{Y}
+}
+$$
+If $f$ has $\mathcal{P}$, then the base change $f'$
+has $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+The lemma makes sense by
+Lemma \ref{lemma-base-change-representable-by-spaces}.
+Proof omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-representable-transformations-property}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+be categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Let $\mathcal{P}$ be a property as in
+Definition \ref{definition-relative-representable-property}.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+representable by algebraic spaces.
+Let $g : \mathcal{Z} \to \mathcal{Y}$ be any $1$-morphism.
+Consider the fibre product diagram
+$$
+\xymatrix{
+\mathcal{Z} \times_{g, \mathcal{Y}, f} \mathcal{X} \ar[r]_-{g'} \ar[d]_{f'} &
+\mathcal{X} \ar[d]^f \\
+\mathcal{Z} \ar[r]^g & \mathcal{Y}
+}
+$$
+Assume that for every scheme $U$ and object $x$ of $\mathcal{Y}_U$,
+there exists an fppf covering $\{U_i \to U\}$ such that $x|_{U_i}$
+is in the essential image of the functor
+$g : \mathcal{Z}_{U_i} \to \mathcal{Y}_{U_i}$.
+In this case, if $f'$ has $\mathcal{P}$, then $f$ has $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+Proof omitted. Hint: Compare with the proof of
+Spaces,
+Lemma \ref{spaces-lemma-descent-representable-transformations-property}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-representable-transformations-property}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{P}$ be a property as in
+Definition \ref{definition-relative-representable-property}
+which is stable under composition.
+Let $\mathcal{X}_i, \mathcal{Y}_i$ be categories fibred in groupoids over
+$(\Sch/S)_{fppf}$, $i = 1, 2$.
+Let $f_i : \mathcal{X}_i \to \mathcal{Y}_i$, $i = 1, 2$
+be $1$-morphisms representable by algebraic spaces.
+If $f_1$ and $f_2$ have property $\mathcal{P}$ so does
+$
+f_1 \times f_2 :
+\mathcal{X}_1 \times \mathcal{X}_2
+\to
+\mathcal{Y}_1 \times \mathcal{Y}_2
+$.
+\end{lemma}
+
+\begin{proof}
+The lemma makes sense by
+Lemma \ref{lemma-product-representable-by-spaces}.
+Proof omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-transformations-property-implication}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids
+over $(\Sch/S)_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism representable
+by algebraic spaces.
+Let $\mathcal{P}$, $\mathcal{P}'$ be properties as in
+Definition \ref{definition-relative-representable-property}.
+Suppose that for any morphism of algebraic spaces $a : F \to G$
+we have $\mathcal{P}(a) \Rightarrow \mathcal{P}'(a)$.
+If $f$ has property $\mathcal{P}$ then
+$f$ has property $\mathcal{P}'$.
+\end{lemma}
+
+\begin{proof}
+Formal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open-fibred-category-is-full}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $j : \mathcal X \to \mathcal Y$ be a $1$-morphism of
+categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Assume $j$ is representable by algebraic spaces and a monomorphism
+(see
+Definition \ref{definition-relative-representable-property}
+and
+Descent on Spaces, Lemma
+\ref{spaces-descent-lemma-descending-property-monomorphism}).
+Then $j$ is fully faithful on fibre categories.
+\end{lemma}
+
+\begin{proof}
+We have seen in
+Lemma \ref{lemma-criterion-map-representable-spaces-fibred-in-groupoids}
+that $j$ is faithful on fibre categories. Consider a scheme $U$,
+two objects $u, v$ of $\mathcal{X}_U$, and an isomorphism
+$t : j(u) \to j(v)$ in $\mathcal{Y}_U$. We have to construct an
+isomorphism in $\mathcal{X}_U$ between $u$ and $v$.
+By the $2$-Yoneda lemma (see Section \ref{section-2-yoneda})
+we think of $u$, $v$ as $1$-morphisms
+$u, v : (\Sch/U)_{fppf} \to \mathcal{X}$
+and we consider the $2$-fibre product
+$$
+(\Sch/U)_{fppf} \times_{j \circ v, \mathcal{Y}} \mathcal{X}.
+$$
+By assumption this is representable by an algebraic space
+$F_{j \circ v}$ over $U$, and the morphism
+$F_{j \circ v} \to U$ is a monomorphism.
+But since $(1_U, v, 1_{j(v)})$ gives a $1$-morphism of
+$(\Sch/U)_{fppf}$ into the displayed $2$-fibre product,
+we see that $F_{j \circ v} = U$ (here we use
+that if $V \to U$ is a monomorphism of algebraic spaces which has a
+section, then $V = U$). Therefore the $1$-morphism projecting to
+the first coordinate
+$$
+(\Sch/U)_{fppf} \times_{j \circ v, \mathcal{Y}} \mathcal{X}
+\to (\Sch/U)_{fppf}
+$$
+is an equivalence of fibre categories.
+Since $(1_U, u, t)$ and $(1_U, v, 1_{j(v)})$ give two
+objects in $((\Sch/U)_{fppf} \times_{j \circ v, \mathcal{Y}}
+\mathcal{X})_U$ which have the same first coordinate, there must
+be a $2$-morphism between them in the $2$-fibre product.
+This is by definition a morphism $\tilde t : u \to v$ such that
+$j(\tilde t) = t$.
+\end{proof}
+
+\noindent
+Here is a characterization of those categories fibred in groupoids
+for which the diagonal is representable by algebraic spaces.
+
+\begin{lemma}
+\label{lemma-representable-diagonal}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$ be a category fibred in groupoids over
+$(\Sch/S)_{fppf}$. The following are equivalent:
+\begin{enumerate}
+\item the diagonal $\mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces,
+\item for every scheme $U$ over $S$, and any
+$x, y \in \Ob(\mathcal{X}_U)$ the sheaf
+$\mathit{Isom}(x, y)$ is an algebraic space over $U$,
+\item for every scheme $U$ over $S$, and any $x \in \Ob(\mathcal{X}_U)$
+the associated $1$-morphism $x : (\Sch/U)_{fppf} \to \mathcal{X}$
+is representable by algebraic spaces,
+\item for every pair of schemes $T_1, T_2$ over $S$, and any
+$x_i \in \Ob(\mathcal{X}_{T_i})$, $i = 1, 2$ the $2$-fibre product
+$(\Sch/T_1)_{fppf} \times_{x_1, \mathcal{X}, x_2}
+(\Sch/T_2)_{fppf}$
+is representable by an algebraic space,
+\item for every representable category fibred in groupoids $\mathcal{U}$
+over $(\Sch/S)_{fppf}$ every $1$-morphism
+$\mathcal{U} \to \mathcal{X}$ is representable by algebraic spaces,
+\item for every pair $\mathcal{T}_1, \mathcal{T}_2$ of representable
+categories fibred in groupoids over $(\Sch/S)_{fppf}$ and any
+$1$-morphisms $x_i : \mathcal{T}_i \to \mathcal{X}$, $i = 1, 2$ the
+$2$-fibre product $\mathcal{T}_1 \times_{x_1, \mathcal{X}, x_2} \mathcal{T}_2$
+is representable by an algebraic space,
+\item for every category fibred in groupoids $\mathcal{U}$
+over $(\Sch/S)_{fppf}$ which is
+representable by an algebraic space every $1$-morphism
+$\mathcal{U} \to \mathcal{X}$ is representable by algebraic spaces,
+\item for every pair $\mathcal{T}_1, \mathcal{T}_2$ of categories fibred
+in groupoids over $(\Sch/S)_{fppf}$ which are representable
+by algebraic spaces, and any $1$-morphisms
+$x_i : \mathcal{T}_i \to \mathcal{X}$ the
+$2$-fibre product $\mathcal{T}_1 \times_{x_1, \mathcal{X}, x_2} \mathcal{T}_2$
+is representable by an algebraic space.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) follows from
+Stacks, Lemma \ref{stacks-lemma-isom-as-2-fibre-product}
+and the definitions.
+Let us prove the equivalence of (1) and (3).
+Write $\mathcal{C} = (\Sch/S)_{fppf}$ for the base category.
+We will use some of the observations from the proof of the analogous
+Categories, Lemma \ref{categories-lemma-representable-diagonal-groupoids}.
+We will use the symbol $\cong$ to mean ``equivalence of categories fibred
+in groupoids over $\mathcal{C} = (\Sch/S)_{fppf}$''.
+Assume (1). Suppose given $U$ and $x$ as in (3). For any scheme $V$
+and $y \in \Ob(\mathcal{X}_V)$ we see (compare reference above) that
+$$
+\mathcal{C}/U
+\times_{x, \mathcal{X}, y}
+\mathcal{C}/V
+\cong
+(\mathcal{C}/U \times_S V)
+\times_{(x, y), \mathcal{X} \times \mathcal{X}, \Delta}
+\mathcal{X}
+$$
+which is representable by an algebraic space by assumption. Conversely,
+assume (3). Consider any scheme $U$ over $S$ and a pair $(x, x')$
+of objects of $\mathcal{X}$ over $U$. We have to show that
+$\mathcal{X} \times_{\Delta, \mathcal{X} \times \mathcal{X}, (x, x')} U$
+is representable by an algebraic space. This is clear because
+(compare reference above)
+$$
+\mathcal{X}
+\times_{\Delta, \mathcal{X} \times \mathcal{X}, (x, x')}
+\mathcal{C}/U
+\cong
+(\mathcal{C}/U \times_{x, \mathcal{X}, x'} \mathcal{C}/U)
+\times_{\mathcal{C}/U \times_S U, \Delta}
+\mathcal{C}/U
+$$
+and the right hand side is representable by an algebraic space by assumption
+and the fact that the category of algebraic spaces over $S$ has fibre products
+and contains $U$ and $S$.
+
+\medskip\noindent
+The equivalences
+(3) $\Leftrightarrow$ (4),
+(5) $\Leftrightarrow$ (6),
+and
+(7) $\Leftrightarrow$ (8)
+are formal. The equivalences
+(3) $\Leftrightarrow$ (5) and
+(4) $\Leftrightarrow$ (6)
+follow from
+Lemma \ref{lemma-representable-by-spaces-morphism-equivalent}.
+Assume (3), and let $\mathcal{U} \to \mathcal{X}$ be as in (7).
+To prove (7) we have to show that for every scheme $V$ and $1$-morphism
+$y : (\Sch/V)_{fppf} \to \mathcal{X}$ the $2$-fibre product
+$(\Sch/V)_{fppf} \times_{y, \mathcal{X}} \mathcal{U}$
+is representable by an algebraic space. Property (3) tells us
+that $y$ is representable by algebraic spaces hence
+Lemma \ref{lemma-base-change-by-space-representable-by-space}
+implies what we want. Finally, (7) directly implies (3).
+\end{proof}
+
+\noindent
+In the situation of the lemma it makes sense, for any $1$-morphism
+$x : (\Sch/U)_{fppf} \to \mathcal{X}$, to say that $x$ has
+property $\mathcal{P}$, for any property $\mathcal{P}$ as in
+Definition \ref{definition-relative-representable-property}.
+In particular this holds for
+$\mathcal{P} = $ ``surjective'',
+$\mathcal{P} = $ ``smooth'', and
+$\mathcal{P} = $ ``\'etale'',
+see
+Descent on Spaces,
+Lemmas \ref{spaces-descent-lemma-descending-property-surjective},
+\ref{spaces-descent-lemma-descending-property-smooth}, and
+\ref{spaces-descent-lemma-descending-property-etale}.
+We will use these three cases in the definitions
+of algebraic stacks below.
+
+
+
+
+
+
+
+
+\section{Stacks in groupoids}
+\label{section-stacks}
+
+\noindent
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Recall that a category $p : \mathcal{X} \to (\Sch/S)_{fppf}$
+over $(\Sch/S)_{fppf}$ is said to be a
+{\it stack in groupoids} (see
+Stacks, Definition \ref{stacks-definition-stack-in-groupoids})
+if and only if
+\begin{enumerate}
+\item $p : \mathcal{X} \to (\Sch/S)_{fppf}$ is fibred
+in groupoids over $(\Sch/S)_{fppf}$,
+\item for all $U \in \Ob((\Sch/S)_{fppf})$,
+for all $x, y\in \Ob(\mathcal{X}_U)$ the presheaf
+$\mathit{Isom}(x, y)$ is a sheaf on the site $(\Sch/U)_{fppf}$, and
+\item for all coverings $\mathcal{U} = \{U_i \to U\}$ in
+$(\Sch/S)_{fppf}$, all descent data $(x_i, \phi_{ij})$
+for $\mathcal{U}$ are effective.
+\end{enumerate}
+For examples see
+Examples of Stacks,
+Section \ref{examples-stacks-section-examples-stacks-in-groupoids}
+ff.
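+
+\noindent
+For the reader's convenience, here is the shape of a descent datum appearing
+in condition (3); this is only a sketch of the standard conventions
+(compare the definitions in the Stacks chapter), with the projections
+denoted $\text{pr}$:

```latex
% A descent datum (x_i, \phi_{ij}) relative to the covering {U_i -> U}:
% objects x_i over U_i and isomorphisms \phi_{ij} over double overlaps,
% satisfying the cocycle condition on triple overlaps.
$$
x_i \in \Ob(\mathcal{X}_{U_i}),
\qquad
\phi_{ij} : \text{pr}_0^*x_i \longrightarrow \text{pr}_1^*x_j
\quad\text{over } U_i \times_U U_j,
$$
$$
\text{pr}_{02}^*\phi_{ik} =
\text{pr}_{12}^*\phi_{jk} \circ \text{pr}_{01}^*\phi_{ij}
\quad\text{over } U_i \times_U U_j \times_U U_k.
$$
% Effectivity: there exists an object x of \mathcal{X}_U together with
% isomorphisms x|_{U_i} -> x_i compatible with the \phi_{ij}.
```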
+
+
+
+
+
+
+
+
+
+
+\section{Algebraic stacks}
+\label{section-algebraic-stacks}
+
+\noindent
+Here is the definition of an algebraic stack. We remark that condition
+(2) implies we can make sense out of the condition in part (3) that
+$(\Sch/U)_{fppf} \to \mathcal{X}$
+is smooth and surjective, see discussion following
+Lemma \ref{lemma-representable-diagonal}.
+
+\begin{definition}
+\label{definition-algebraic-stack}
+Let $S$ be a base scheme contained in $\Sch_{fppf}$.
+An {\it algebraic stack over $S$} is a category
+$$
+p : \mathcal{X} \to (\Sch/S)_{fppf}
+$$
+over $(\Sch/S)_{fppf}$ with the following properties:
+\begin{enumerate}
+\item The category $\mathcal{X}$ is a stack in groupoids over
+$(\Sch/S)_{fppf}$.
+\item The diagonal
+$\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces.
+\item There exists a scheme $U \in \Ob((\Sch/S)_{fppf})$
+and a $1$-morphism $(\Sch/U)_{fppf} \to \mathcal{X}$
+which is surjective and smooth\footnote{In future chapters we will denote
+this simply by $U \to \mathcal{X}$, as is customary in the literature. Another
+good alternative would be to formulate this condition as the existence of a
+representable category fibred in groupoids $\mathcal{U}$ and a surjective
+smooth $1$-morphism $\mathcal{U} \to \mathcal{X}$.}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+There are some differences with other definitions found in the literature.
+
+\medskip\noindent
+The first is that we require $\mathcal{X}$ to be a stack in groupoids
+in the fppf topology, whereas in many references the \'etale topology is
+used. It somehow seems to us that the fppf topology is the natural topology
+to work with. In the end the resulting $2$-category of algebraic stacks
+ends up being the same. This is explained in
+Criteria for Representability, Section \ref{criteria-section-stacks-etale}.
+
+\medskip\noindent
+The second is that we only require the diagonal map of $\mathcal{X}$ to be
+representable by algebraic spaces, whereas in most references some other
+conditions are imposed. Our point of view is to try to prove a certain
+number of the results that follow only assuming that the diagonal
+of $\mathcal{X}$ is representable by algebraic spaces, and simply add
+an additional hypothesis wherever this is necessary. It has the added
+benefit that any algebraic space (as defined in
+Spaces, Definition \ref{spaces-definition-algebraic-space})
+gives rise to an algebraic stack.
+
+\medskip\noindent
+The third is that in some papers it is required that there exists a
+scheme $U$ and a surjective and \'etale morphism $U \to \mathcal{X}$.
+In the groundbreaking paper \cite{DM} where algebraic stacks were first
+introduced Deligne and Mumford used this definition and showed that
+the moduli stack of stable genus $g > 1$ curves is an algebraic stack
+which has an \'etale covering by a scheme. Michael Artin, see
+\cite{ArtinVersal}, realized that many
+natural results on algebraic stacks generalize to the case where one
+only assumes a smooth covering by a scheme. Hence our choice above.
+To distinguish the two cases one sees the terms ``Deligne-Mumford stack''
+and ``Artin stack'' used in the literature. We will reserve the term
+``Artin stack'' for later use (insert future reference here), and continue
+to use ``algebraic stack'', but we will use ``Deligne-Mumford stack''
+to indicate those algebraic stacks which have an \'etale covering by a
+scheme.
+
+\begin{definition}
+\label{definition-deligne-mumford}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$ be an algebraic stack over $S$.
+We say $\mathcal{X}$ is a {\it Deligne-Mumford stack} if there exists
+a scheme $U$ and a surjective \'etale morphism
+$(\Sch/U)_{fppf} \to \mathcal{X}$.
+\end{definition}
+
+\noindent
+We will compare our notion of a Deligne-Mumford stack with
+the notion as defined in the paper by Deligne and Mumford later
+(see insert future reference here).
+
+\medskip\noindent
+The category of algebraic stacks over $S$ forms a $2$-category.
+Here is the precise definition.
+
+\begin{definition}
+\label{definition-morphism-algebraic-stacks}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+The {\it $2$-category of algebraic stacks over $S$} is the
+sub $2$-category of the $2$-category of categories fibred in
+groupoids over $(\Sch/S)_{fppf}$ (see
+Categories,
+Definition \ref{categories-definition-categories-fibred-in-groupoids-over-C})
+defined as follows:
+\begin{enumerate}
+\item Its objects are those categories fibred in groupoids
+over $(\Sch/S)_{fppf}$ which are algebraic stacks over $S$.
+\item Its $1$-morphisms $f : \mathcal{X} \to \mathcal{Y}$ are
+any functors of categories over $(\Sch/S)_{fppf}$, as in
+Categories, Definition \ref{categories-definition-categories-over-C}.
+\item Its $2$-morphisms are transformations between functors
+over $(\Sch/S)_{fppf}$, as in
+Categories, Definition \ref{categories-definition-categories-over-C}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In other words this $2$-category is the full sub $2$-category of
+$\textit{Cat}/(\Sch/S)_{fppf}$ whose objects are algebraic stacks.
+Note that every $2$-morphism is automatically an isomorphism.
+Hence this is actually a $(2, 1)$-category and not just a $2$-category.
+
+\medskip\noindent
+We will see later (insert future reference here) that this $2$-category
+has $2$-fibre products.
+
+\medskip\noindent
+As remarked above, the $2$-category of algebraic stacks over $S$ is a
+full sub $2$-category of the $2$-category of categories fibred in groupoids
+over $(\Sch/S)_{fppf}$. It turns out to be closed under
+equivalences. Here is the precise statement.
+
+\begin{lemma}
+\label{lemma-equivalent}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories over $(\Sch/S)_{fppf}$.
+Assume $\mathcal{X}$, $\mathcal{Y}$ are equivalent as categories over
+$(\Sch/S)_{fppf}$. Then $\mathcal{X}$ is an algebraic stack if and
+only if $\mathcal{Y}$ is an algebraic stack. Similarly, $\mathcal{X}$
+is a Deligne-Mumford stack if and only if $\mathcal{Y}$ is a Deligne-Mumford
+stack.
+\end{lemma}
+
+\begin{proof}
+Assume $\mathcal{X}$ is an algebraic stack (resp.\ a Deligne-Mumford stack). By
+Stacks, Lemma \ref{stacks-lemma-stack-in-groupoids-equivalent}
+this implies that $\mathcal{Y}$ is a stack in groupoids over
+$(\Sch/S)_{fppf}$. Choose an equivalence $f : \mathcal{X} \to \mathcal{Y}$
+over $(\Sch/S)_{fppf}$. This gives a $2$-commutative diagram
+$$
+\xymatrix{
+\mathcal{X} \ar[r]_f \ar[d]_{\Delta_\mathcal{X}} &
+\mathcal{Y} \ar[d]^{\Delta_\mathcal{Y}} \\
+\mathcal{X} \times \mathcal{X} \ar[r]^{f \times f} &
+\mathcal{Y} \times \mathcal{Y}
+}
+$$
+whose horizontal arrows are equivalences. This implies that
+$\Delta_\mathcal{Y}$ is representable by algebraic spaces according to
+Lemma \ref{lemma-representable-by-spaces-morphism-equivalent}.
+Finally, let $U$ be a scheme over $S$, and let
+$x : (\Sch/U)_{fppf} \to \mathcal{X}$ be a $1$-morphism which
+is surjective and smooth (resp.\ \'etale). Considering the diagram
+$$
+\xymatrix{
+(\Sch/U)_{fppf} \ar[r]_{\text{id}} \ar[d]_x &
+(\Sch/U)_{fppf} \ar[d]^{f \circ x} \\
+\mathcal{X} \ar[r]^f &
+\mathcal{Y}
+}
+$$
+and applying
+Lemma \ref{lemma-property-morphism-equivalent}
+we conclude that $f \circ x$ is surjective and smooth (resp.\ \'etale)
+as desired.
+\end{proof}
+
+
+
+
+\section{Algebraic stacks and algebraic spaces}
+\label{section-stacks-spaces}
+
+\noindent
+In this section we discuss some simple criteria which imply that an
+algebraic stack is an algebraic space. The main result is that this
+happens exactly when objects of fibre categories have no nontrivial
+automorphisms. This is not a triviality! Before we come to this
+we first do a sanity check.
+
+\begin{lemma}
+\label{lemma-representable-algebraic}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+\begin{enumerate}
+\item A category fibred in groupoids
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$
+which is representable by an algebraic space is a Deligne-Mumford stack.
+\item If $F$ is an algebraic space over $S$, then the associated
+category fibred in groupoids
+$p : \mathcal{S}_F \to (\Sch/S)_{fppf}$
+is a Deligne-Mumford stack.
+\item If $X \in \Ob((\Sch/S)_{fppf})$, then
+$(\Sch/X)_{fppf} \to (\Sch/S)_{fppf}$ is
+a Deligne-Mumford stack.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (2) implies (3).
+Parts (1) and (2) are equivalent by Lemma \ref{lemma-equivalent}.
+Hence it suffices to prove (2).
+First, we note that $\mathcal{S}_F$ is a stack in sets since
+$F$ is a sheaf (Stacks, Lemma
+\ref{stacks-lemma-stack-in-setoids-characterize}).
+A fortiori it is a stack in groupoids. Second, the diagonal
+morphism $\mathcal{S}_F \to \mathcal{S}_F \times \mathcal{S}_F$
+is the same as the morphism $\mathcal{S}_F \to \mathcal{S}_{F \times F}$
+which comes from the diagonal of $F$. Hence this is representable
+by algebraic spaces according to
+Lemma \ref{lemma-morphism-spaces-gives-representable-by-spaces}.
+Actually it is even representable (by schemes), as the diagonal of
+an algebraic space is representable, but we do not need this.
+Let $U$ be a scheme and let $h_U \to F$ be a surjective \'etale morphism.
+We may think of this as a surjective \'etale morphism of algebraic spaces.
+Hence by
+Lemma
+\ref{lemma-map-presheaves-representable-by-spaces-transformation-property}
+the corresponding $1$-morphism $(\Sch/U)_{fppf} \to \mathcal{S}_F$
+is surjective and \'etale.
+\end{proof}
+
+\noindent
+The following result says that a Deligne-Mumford stack whose inertia
+is trivial ``is'' an algebraic space. This lemma will be obsoleted by
+the stronger
+Proposition \ref{proposition-algebraic-stack-no-automorphisms}
+below, which says that this holds more generally for algebraic stacks.
+
+\begin{lemma}
+\label{lemma-algebraic-stack-no-automorphisms}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$ be an algebraic stack over $S$.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{X}$ is a Deligne-Mumford stack and is a stack in setoids,
+\item $\mathcal{X}$ is a Deligne-Mumford stack such that the
+canonical $1$-morphism $\mathcal{I}_\mathcal{X} \to \mathcal{X}$
+is an equivalence, and
+\item $\mathcal{X}$ is representable by an algebraic space.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) follows from
+Stacks, Lemma \ref{stacks-lemma-characterize-stack-in-setoids}.
+The implication (3) $\Rightarrow$ (1) follows from
+Lemma \ref{lemma-representable-algebraic}.
+Finally, assume (1). By
+Stacks, Lemma \ref{stacks-lemma-stack-in-setoids-characterize}
+there exists a sheaf $F$ on $(\Sch/S)_{fppf}$
+and an equivalence $j : \mathcal{X} \to \mathcal{S}_F$. By
+Lemma \ref{lemma-map-presheaves-representable-by-algebraic-spaces}
+the fact that $\Delta_\mathcal{X}$ is representable by algebraic
+spaces means that $\Delta_F : F \to F \times F$
+is representable by algebraic spaces.
+Let $U$ be a scheme, and let $x : (\Sch/U)_{fppf} \to \mathcal{X}$
+be a surjective \'etale morphism. The composition
+$j \circ x : (\Sch/U)_{fppf} \to \mathcal{S}_F$
+corresponds to a morphism $h_U \to F$ of sheaves. By
+Bootstrap, Lemma \ref{bootstrap-lemma-representable-diagonal}
+this morphism is representable by algebraic spaces.
+Hence by
+Lemma \ref{lemma-map-fibred-setoids-property}
+we conclude that $h_U \to F$ is surjective and \'etale.
+Finally, we apply
+Bootstrap, Theorem \ref{bootstrap-theorem-bootstrap}
+to see that $F$ is an algebraic space.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-algebraic-stack-no-automorphisms}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$ be an algebraic stack over $S$.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{X}$ is a stack in setoids,
+\item the canonical $1$-morphism $\mathcal{I}_\mathcal{X} \to \mathcal{X}$
+is an equivalence, and
+\item $\mathcal{X}$ is representable by an algebraic space.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+The equivalence of (1) and (2) follows from
+Stacks, Lemma \ref{stacks-lemma-characterize-stack-in-setoids}.
+The implication (3) $\Rightarrow$ (1) follows from
+Lemma \ref{lemma-algebraic-stack-no-automorphisms}.
+Finally, assume (1). By
+Stacks, Lemma \ref{stacks-lemma-stack-in-setoids-characterize}
+there exists an equivalence $j : \mathcal{X} \to \mathcal{S}_F$
+where $F$ is a sheaf on $(\Sch/S)_{fppf}$. By
+Lemma \ref{lemma-map-presheaves-representable-by-algebraic-spaces}
+the fact that $\Delta_\mathcal{X}$ is representable by algebraic
+spaces means that $\Delta_F : F \to F \times F$
+is representable by algebraic spaces.
+Let $U$ be a scheme and let $x : (\Sch/U)_{fppf} \to \mathcal{X}$
+be a surjective smooth morphism. The composition
+$j \circ x : (\Sch/U)_{fppf} \to \mathcal{S}_F$
+corresponds to a morphism $h_U \to F$ of sheaves. By
+Bootstrap, Lemma \ref{bootstrap-lemma-representable-diagonal}
+this morphism is representable by algebraic spaces.
+Hence by
+Lemma \ref{lemma-map-fibred-setoids-property}
+we conclude that $h_U \to F$ is surjective and smooth.
+In particular it is surjective, flat and locally of finite presentation
+(by
+Lemma \ref{lemma-representable-transformations-property-implication}
+and the fact that a smooth morphism of algebraic spaces is flat and
+locally of finite presentation, see
+Morphisms of Spaces,
+Lemmas \ref{spaces-morphisms-lemma-smooth-locally-finite-presentation} and
+\ref{spaces-morphisms-lemma-smooth-flat}).
+Finally, we apply
+Bootstrap, Theorem \ref{bootstrap-theorem-final-bootstrap}
+to see that $F$ is an algebraic space.
+\end{proof}
+
+
+
+
+
+
+\section{2-Fibre products of algebraic stacks}
+\label{section-2-fibre-products}
+
+\noindent
+The $2$-category of algebraic stacks has products and $2$-fibre products.
+The first lemma is really a special case of
+Lemma \ref{lemma-2-fibre-product}
+but its proof is slightly easier.
+
+\begin{lemma}
+\label{lemma-product-spaces}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$, $\mathcal{Y}$ be algebraic stacks over $S$.
+Then $\mathcal{X} \times_{(\Sch/S)_{fppf}} \mathcal{Y}$
+is an algebraic stack, and is a product in the $2$-category of
+algebraic stacks over $S$.
+\end{lemma}
+
+\begin{proof}
+An object of $\mathcal{X} \times_{(\Sch/S)_{fppf}} \mathcal{Y}$
+over $T$ is just a pair $(x, y)$ where $x$ is an object of $\mathcal{X}_T$
+and $y$ is an object of $\mathcal{Y}_T$. Hence it is immediate from
+the definitions that
+$\mathcal{X} \times_{(\Sch/S)_{fppf}} \mathcal{Y}$ is a
+stack in groupoids. If $(x, y)$ and $(x', y')$ are
+two objects of $\mathcal{X} \times_{(\Sch/S)_{fppf}} \mathcal{Y}$
+over $T$, then
+$$
+\mathit{Isom}((x, y), (x', y')) =
+\mathit{Isom}(x, x') \times \mathit{Isom}(y, y').
+$$
+Hence it follows from the equivalences in
+Lemma \ref{lemma-representable-diagonal}
+and the fact that the category of algebraic spaces has products
+that the diagonal of $\mathcal{X} \times_{(\Sch/S)_{fppf}} \mathcal{Y}$
+is representable by algebraic spaces.
+Finally, suppose that $U, V \in \Ob((\Sch/S)_{fppf})$,
+and let $x, y$ be surjective smooth morphisms
+$x : (\Sch/U)_{fppf} \to \mathcal{X}$,
+$y : (\Sch/V)_{fppf} \to \mathcal{Y}$.
+Note that
+$$
+(\Sch/U \times_S V)_{fppf} =
+(\Sch/U)_{fppf}
+\times_{(\Sch/S)_{fppf}} (\Sch/V)_{fppf}.
+$$
+The object $(\text{pr}_U^*x, \text{pr}_V^*y)$ of
+$\mathcal{X} \times_{(\Sch/S)_{fppf}} \mathcal{Y}$ over
+$(\Sch/U \times_S V)_{fppf}$ thus defines a $1$-morphism
+$$
+(\Sch/U \times_S V)_{fppf}
+\longrightarrow
+\mathcal{X} \times_{(\Sch/S)_{fppf}} \mathcal{Y}
+$$
+which is the composition of base changes of $x$ and $y$, hence
+is surjective and smooth, see
+Lemmas \ref{lemma-base-change-representable-transformations-property} and
+\ref{lemma-composition-representable-transformations-property}.
+We conclude that $\mathcal{X} \times_{(\Sch/S)_{fppf}} \mathcal{Y}$
+is indeed an algebraic stack. We omit the verification that it
+really is a product.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-2-fibre-product-general}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{Z}$ be a stack in groupoids over $(\Sch/S)_{fppf}$
+whose diagonal is representable by algebraic spaces.
+Let $\mathcal{X}$, $\mathcal{Y}$ be algebraic stacks over $S$.
+Let $f : \mathcal{X} \to \mathcal{Z}$, $g : \mathcal{Y} \to \mathcal{Z}$
+be $1$-morphisms of stacks in groupoids. Then the $2$-fibre product
+$\mathcal{X} \times_{f, \mathcal{Z}, g} \mathcal{Y}$ is an algebraic stack.
+\end{lemma}
+
+\begin{proof}
+We have to check conditions (1), (2), and (3) of
+Definition \ref{definition-algebraic-stack}.
+The first condition follows from
+Stacks, Lemma \ref{stacks-lemma-2-product-stacks-in-groupoids}.
+
+\medskip\noindent
+The second condition we have to check is that the $\mathit{Isom}$-sheaves
+are representable by algebraic spaces. To do this, suppose that
+$T$ is a scheme over $S$, and $u, v$ are objects of
+$(\mathcal{X} \times_{f, \mathcal{Z}, g} \mathcal{Y})_T$.
+By our construction of $2$-fibre products (which goes all the way
+back to
+Categories, Lemma \ref{categories-lemma-2-product-categories-over-C})
+we may write $u = (x, y, \alpha)$ and $v = (x', y', \alpha')$.
+Here $\alpha : f(x) \to g(y)$ and similarly for $\alpha'$.
+Then it is clear that
+$$
+\xymatrix{
+\mathit{Isom}(u, v) \ar[d] \ar[rr] & &
+\mathit{Isom}(y, y') \ar[d]^{\phi \mapsto g(\phi) \circ \alpha} \\
+\mathit{Isom}(x, x') \ar[rr]^-{\psi \mapsto \alpha' \circ f(\psi)} & &
+\mathit{Isom}(f(x), g(y'))
+}
+$$
+is a cartesian diagram of sheaves on $(\Sch/T)_{fppf}$.
+Since by assumption the sheaves
+$\mathit{Isom}(y, y')$, $\mathit{Isom}(x, x')$, $\mathit{Isom}(f(x), g(y'))$
+are algebraic spaces (see
+Lemma \ref{lemma-representable-diagonal})
+we see that $\mathit{Isom}(u, v)$
+is an algebraic space.
+
+\medskip\noindent
+Let $U, V \in \Ob((\Sch/S)_{fppf})$,
+and let $x, y$ be surjective smooth morphisms
+$x : (\Sch/U)_{fppf} \to \mathcal{X}$,
+$y : (\Sch/V)_{fppf} \to \mathcal{Y}$.
+Consider the morphism
+$$
+(\Sch/U)_{fppf}
+\times_{f \circ x, \mathcal{Z}, g \circ y}
+(\Sch/V)_{fppf}
+\longrightarrow
+\mathcal{X} \times_{f, \mathcal{Z}, g} \mathcal{Y}.
+$$
+As the diagonal of $\mathcal{Z}$ is representable by algebraic spaces
+the source of this arrow is representable by an algebraic space $F$, see
+Lemma \ref{lemma-representable-diagonal}.
+Moreover, the morphism is the composition
+of base changes of $x$ and $y$, hence surjective and smooth, see
+Lemmas \ref{lemma-base-change-representable-transformations-property} and
+\ref{lemma-composition-representable-transformations-property}.
+Choosing a scheme $W$ and a surjective \'etale morphism $W \to F$
+we see that the composition of the displayed $1$-morphism
+with the corresponding $1$-morphism
+$$
+(\Sch/W)_{fppf}
+\longrightarrow
+(\Sch/U)_{fppf}
+\times_{f \circ x, \mathcal{Z}, g \circ y}
+(\Sch/V)_{fppf}
+$$
+is surjective and smooth, which proves the last condition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-2-fibre-product}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$ be algebraic stacks over $S$.
+Let $f : \mathcal{X} \to \mathcal{Z}$, $g : \mathcal{Y} \to \mathcal{Z}$
+be $1$-morphisms of algebraic stacks. Then the $2$-fibre product
+$\mathcal{X} \times_{f, \mathcal{Z}, g} \mathcal{Y}$ is an algebraic stack.
+It is also the $2$-fibre product in the $2$-category of algebraic stacks
+over $(\Sch/S)_{fppf}$.
+\end{lemma}
+
+\begin{proof}
+The fact that $\mathcal{X} \times_{f, \mathcal{Z}, g} \mathcal{Y}$ is an
+algebraic stack follows from the stronger
+Lemma \ref{lemma-2-fibre-product-general}.
+The fact that $\mathcal{X} \times_{f, \mathcal{Z}, g} \mathcal{Y}$
+is a $2$-fibre product in the $2$-category of algebraic stacks over $S$
+follows formally from the fact that the $2$-category of algebraic stacks
+over $S$ is a full sub $2$-category of the $2$-category of stacks in
+groupoids over $(\Sch/S)_{fppf}$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Algebraic stacks, overhauled}
+\label{section-overhaul}
+
+\noindent
+In this section we prove some basic results on algebraic stacks.
+
+\begin{lemma}
+\label{lemma-lift-morphism-presentations}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of algebraic
+stacks over $S$.
+Let $V \in \Ob((\Sch/S)_{fppf})$.
+Let $y : (\Sch/V)_{fppf} \to \mathcal{Y}$ be surjective and smooth.
+Then there exists an object $U \in \Ob((\Sch/S)_{fppf})$
+and a $2$-commutative diagram
+$$
+\xymatrix{
+(\Sch/U)_{fppf} \ar[r]_a \ar[d]_x &
+(\Sch/V)_{fppf} \ar[d]^y \\
+\mathcal{X} \ar[r]^f & \mathcal{Y}
+}
+$$
+with $x$ surjective and smooth.
+\end{lemma}
+
+\begin{proof}
+First choose $W \in \Ob((\Sch/S)_{fppf})$ and a surjective
+smooth $1$-morphism $z : (\Sch/W)_{fppf} \to \mathcal{X}$.
+As $\mathcal{Y}$ is an algebraic stack we may choose an equivalence
+$$
+j :
+\mathcal{S}_F
+\longrightarrow
+(\Sch/W)_{fppf}
+\times_{f \circ z, \mathcal{Y}, y}
+(\Sch/V)_{fppf}
+$$
+where $F$ is an algebraic space. By
+Lemma \ref{lemma-base-change-representable-transformations-property}
+the morphism
+$\mathcal{S}_F \to (\Sch/W)_{fppf}$ is surjective and smooth
+as a base change of $y$. Hence by
+Lemma \ref{lemma-composition-representable-transformations-property}
+we see that $\mathcal{S}_F \to \mathcal{X}$ is surjective and smooth.
+Choose an object $U \in \Ob((\Sch/S)_{fppf})$
+and a surjective \'etale morphism $U \to F$. Then applying
+Lemma \ref{lemma-composition-representable-transformations-property}
+once more we obtain the desired properties.
+\end{proof}
+
+\noindent
+This lemma is a generalization of
+Proposition \ref{proposition-algebraic-stack-no-automorphisms}.
+
+\begin{lemma}
+\label{lemma-characterize-representable-by-algebraic-spaces}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of algebraic
+stacks over $S$. The following are equivalent:
+\begin{enumerate}
+\item for $U \in \Ob((\Sch/S)_{fppf})$
+the functor $f : \mathcal{X}_U \to \mathcal{Y}_U$ is faithful,
+\item the functor $f$ is faithful, and
+\item $f$ is representable by algebraic spaces.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (1) and (2) are equivalent by general properties of $1$-morphisms
+of categories fibred in groupoids, see
+Categories, Lemma \ref{categories-lemma-equivalence-fibred-categories}.
+We see that (3) implies (2) by
+Lemma \ref{lemma-criterion-map-representable-spaces-fibred-in-groupoids}.
+Finally, assume (2).
+Let $U$ be a scheme. Let $y \in \Ob(\mathcal{Y}_U)$.
+We have to prove that
+$$
+\mathcal{W} = (\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}
+$$
+is representable by an algebraic space over $U$. Since
+$(\Sch/U)_{fppf}$ is an algebraic stack we see from
+Lemma \ref{lemma-2-fibre-product}
+that $\mathcal{W}$ is an algebraic stack.
+On the other hand the explicit description of objects of $\mathcal{W}$
+as triples $(V, x, \alpha : y(V) \to f(x))$ and the fact that $f$ is
+faithful, shows that the fibre categories of $\mathcal{W}$ are setoids. Hence
+Proposition \ref{proposition-algebraic-stack-no-automorphisms}
+guarantees that $\mathcal{W}$ is representable by an algebraic space.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-surjective-morphism-implies-algebraic}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $u : \mathcal{U} \to \mathcal{X}$ be a $1$-morphism of
+stacks in groupoids over $(\Sch/S)_{fppf}$. If
+\begin{enumerate}
+\item $\mathcal{U}$ is representable by an algebraic space, and
+\item $u$ is representable by algebraic spaces, surjective and smooth,
+\end{enumerate}
+then $\mathcal X$ is an algebraic stack over $S$.
+\end{lemma}
+
+\begin{proof}
+We have to show that $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces, see
+Definition \ref{definition-algebraic-stack}.
+Given two schemes $T_1$, $T_2$ over $S$ denote
+$\mathcal{T}_i = (\Sch/T_i)_{fppf}$ the associated representable
+fibre categories. Suppose given $1$-morphisms
+$f_i : \mathcal{T}_i \to \mathcal{X}$.
+According to
+Lemma \ref{lemma-representable-diagonal}
+it suffices to prove that the $2$-fibered
+product $\mathcal{T}_1 \times_\mathcal{X} \mathcal{T}_2$
+is representable by an algebraic space. By
+Stacks, Lemma
+\ref{stacks-lemma-2-fibre-product-stacks-in-setoids-over-stack-in-groupoids}
+this is in any case a stack in setoids. Thus
+$\mathcal{T}_1 \times_\mathcal{X} \mathcal{T}_2$ corresponds
+to some sheaf $F$ on $(\Sch/S)_{fppf}$, see
+Stacks, Lemma \ref{stacks-lemma-stack-in-setoids-characterize}.
+Let $U$ be the algebraic space which represents $\mathcal{U}$.
+By assumption
+$$
+\mathcal{T}_i' = \mathcal{U} \times_{u, \mathcal{X}, f_i} \mathcal{T}_i
+$$
+is representable by an algebraic space $T'_i$ over $S$. Hence
+$\mathcal{T}_1' \times_\mathcal{U} \mathcal{T}_2'$ is representable
+by the algebraic space $T'_1 \times_U T'_2$.
+Consider the commutative diagram
+$$
+\xymatrix{
+&
+\mathcal{T}_1 \times_{\mathcal X} \mathcal{T}_2 \ar[rr]\ar'[d][dd] & &
+\mathcal{T}_1 \ar[dd] \\
+\mathcal{T}_1' \times_\mathcal{U} \mathcal{T}_2' \ar[ur]\ar[rr]\ar[dd] & &
+\mathcal{T}_1' \ar[ur]\ar[dd] \\
+&
+\mathcal{T}_2 \ar'[r][rr] & &
+\mathcal X \\
+\mathcal{T}_2' \ar[rr]\ar[ur] & &
+\mathcal{U} \ar[ur] }
+$$
+In this diagram the bottom square, the right square, the back square, and
+the front square are $2$-fibre products. A formal argument then shows
+that $\mathcal{T}_1' \times_\mathcal{U} \mathcal{T}_2' \to
+\mathcal{T}_1 \times_{\mathcal X} \mathcal{T}_2$
+is the ``base change'' of $\mathcal{U} \to \mathcal{X}$, more precisely
+the diagram
+$$
+\xymatrix{
+\mathcal{T}_1' \times_\mathcal{U} \mathcal{T}_2' \ar[d] \ar[r] &
+\mathcal{U} \ar[d] \\
+\mathcal{T}_1 \times_{\mathcal X} \mathcal{T}_2 \ar[r] &
+\mathcal{X}
+}
+$$
+is a $2$-fibre square.
+Hence $T'_1 \times_U T'_2 \to F$ is representable by algebraic spaces,
+smooth, and surjective, see
+Lemmas \ref{lemma-map-fibred-setoids-representable-algebraic-spaces},
+\ref{lemma-base-change-representable-by-spaces},
+\ref{lemma-map-fibred-setoids-property}, and
+\ref{lemma-base-change-representable-transformations-property}.
+Therefore $F$ is an algebraic space by
+Bootstrap, Theorem \ref{bootstrap-theorem-final-bootstrap}
+and we win.
+\end{proof}
+
+\noindent
+An application of
+Lemma \ref{lemma-smooth-surjective-morphism-implies-algebraic}
+is that something which is an algebraic space over an algebraic stack
+is an algebraic stack. This is the analogue of
+Bootstrap, Lemma \ref{bootstrap-lemma-representable-by-spaces-over-space}.
+Actually, it suffices to assume the morphism
+$\mathcal{X} \to \mathcal{Y}$ is ``algebraic'', as we will see in
+Criteria for Representability,
+Lemma \ref{criteria-lemma-algebraic-morphism-to-algebraic}.
+
+\begin{lemma}
+\label{lemma-representable-morphism-to-algebraic}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X} \to \mathcal{Y}$ be a morphism of stacks in groupoids
+over $(\Sch/S)_{fppf}$. Assume that
+\begin{enumerate}
+\item $\mathcal{X} \to \mathcal{Y}$ is representable by algebraic spaces, and
+\item $\mathcal{Y}$ is an algebraic stack over $S$.
+\end{enumerate}
+Then $\mathcal{X}$ is an algebraic stack over $S$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{V} \to \mathcal{Y}$ be a surjective smooth $1$-morphism
+from a representable stack in groupoids to $\mathcal{Y}$. This exists by
+Definition \ref{definition-algebraic-stack}.
+Then the $2$-fibre product
+$\mathcal{U} = \mathcal{V} \times_{\mathcal Y} \mathcal X$
+is representable by an algebraic space by
+Lemma \ref{lemma-base-change-by-space-representable-by-space}.
+The $1$-morphism $\mathcal{U} \to \mathcal X$ is representable by algebraic
+spaces, smooth, and surjective, see
+Lemmas \ref{lemma-base-change-representable-by-spaces} and
+\ref{lemma-base-change-representable-transformations-property}.
+By
+Lemma \ref{lemma-smooth-surjective-morphism-implies-algebraic}
+we conclude that $\mathcal{X}$ is an algebraic stack.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open-fibred-category-is-algebraic}
+\begin{reference}
+Removing the hypothesis that $j$ is a monomorphism was observed
+in an email from Matthew Emerton dated June 15, 2016.
+\end{reference}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $j : \mathcal X \to \mathcal Y$ be a $1$-morphism of
+categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Assume $j$ is representable by algebraic spaces.
+Then, if $\mathcal{Y}$ is a stack in groupoids
+(resp.\ an algebraic stack), so is $\mathcal{X}$.
+\end{lemma}
+
+\begin{proof}
+The statement on algebraic stacks will follow from the statement on
+stacks in groupoids by Lemma \ref{lemma-representable-morphism-to-algebraic}.
+If $j$ is representable by algebraic spaces, then $j$ is
+faithful on fibre categories and for each $U$ and each
+$y \in \Ob(\mathcal{Y}_U)$ the presheaf
+$$
+(h : V \to U)
+\longmapsto
+\{(x, \phi) \mid x \in \Ob(\mathcal{X}_V), \phi : h^*y \to j(x)\}/\cong
+$$
+is an algebraic space over $U$. See
+Lemma \ref{lemma-criterion-map-representable-spaces-fibred-in-groupoids}.
+In particular this presheaf is a sheaf and the conclusion follows
+from Stacks, Lemma \ref{stacks-lemma-relative-sheaf-over-stack-is-stack}.
+\end{proof}
+
+
+
+\section{From an algebraic stack to a presentation}
+\label{section-stack-to-presentation}
+
+\noindent
+Given an algebraic stack over $S$ we obtain a groupoid in algebraic spaces
+over $S$ whose associated quotient stack is the algebraic stack.
+
+\medskip\noindent
+Recall that if $(U, R, s, t, c)$ is a groupoid in algebraic spaces over $S$
+then $[U/R]$ denotes the quotient stack associated to this datum, see
+Groupoids in Spaces,
+Definition \ref{spaces-groupoids-definition-quotient-stack}.
+In general $[U/R]$ is {\bf not} an algebraic stack. In particular the
+stack $[U/R]$ occurring in the following lemma is in general not
+algebraic.
+
+\begin{lemma}
+\label{lemma-map-space-into-stack}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$ be an algebraic stack over $S$.
+Let $\mathcal{U}$ be an algebraic stack over $S$ which
+is representable by an algebraic space.
+Let $f : \mathcal{U} \to \mathcal{X}$ be a 1-morphism. Then
+\begin{enumerate}
+\item the $2$-fibre product
+$\mathcal{R} = \mathcal{U} \times_{f, \mathcal{X}, f} \mathcal{U}$
+is representable by an algebraic space,
+\item there is a canonical equivalence
+$$
+\mathcal{U} \times_{f, \mathcal{X}, f} \mathcal{U}
+\times_{f, \mathcal{X}, f} \mathcal{U} =
+\mathcal{R} \times_{\text{pr}_1, \mathcal{U}, \text{pr}_0} \mathcal{R},
+$$
+\item the projection $\text{pr}_{02}$ induces via (2) a $1$-morphism
+$$
+\text{pr}_{02} :
+\mathcal{R} \times_{\text{pr}_1, \mathcal{U}, \text{pr}_0} \mathcal{R}
+\longrightarrow
+\mathcal{R}
+$$
+\item let $U$, $R$ be the algebraic spaces representing
+$\mathcal{U}$ and $\mathcal{R}$, and let $t, s : R \to U$ and
+$c : R \times_{s, U, t} R \to R$ be the morphisms corresponding
+to the $1$-morphisms
+$\text{pr}_0, \text{pr}_1 : \mathcal{R} \to \mathcal{U}$
+and
+$\text{pr}_{02} :
+\mathcal{R} \times_{\text{pr}_1, \mathcal{U}, \text{pr}_0} \mathcal{R} \to
+\mathcal{R}$ above, then the quintuple $(U, R, s, t, c)$ is a groupoid in
+algebraic spaces over $S$,
+\item the morphism $f$ induces a canonical $1$-morphism
+$f_{can} : [U/R] \to \mathcal{X}$
+of stacks in groupoids over $(\Sch/S)_{fppf}$, and
+\item the $1$-morphism $f_{can} : [U/R] \to \mathcal{X}$ is fully faithful.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). By definition $\Delta_\mathcal{X}$ is representable
+by algebraic spaces so
+Lemma \ref{lemma-representable-diagonal}
+applies to show that $\mathcal{U} \to \mathcal{X}$ is representable
+by algebraic spaces. Hence the result follows from
+Lemma \ref{lemma-base-change-by-space-representable-by-space}.
+
+\medskip\noindent
+Let $T$ be a scheme over $S$. By construction of the $2$-fibre product (see
+Categories, Lemma \ref{categories-lemma-2-product-categories-over-C})
+we see that the objects of the fibre category $\mathcal{R}_T$
+are triples $(a, b, \alpha)$ where $a, b \in \Ob(\mathcal{U}_T)$
+and $\alpha : f(a) \to f(b)$
+is a morphism in the fibre category $\mathcal{X}_T$.
+
+\medskip\noindent
+Proof of (2). The equivalence comes from repeatedly applying
+Categories, Lemmas \ref{categories-lemma-associativity-2-fibre-product} and
+\ref{categories-lemma-2-fibre-product-erase-factor}.
+Let us identify
+$\mathcal{U} \times_\mathcal{X} \mathcal{U} \times_\mathcal{X} \mathcal{U}$
+with
+$(\mathcal{U} \times_\mathcal{X} \mathcal{U})
+\times_\mathcal{X} \mathcal{U}$.
+If $T$ is a scheme over $S$, then on fibre categories over $T$
+this equivalence maps the object
+$((a, b, \alpha), c, \beta)$ on the left hand side
+to the object $((a, b, \alpha), (b, c, \beta))$ of the right hand side.
+
+\medskip\noindent
+Proof of (3). The $1$-morphism $\text{pr}_{02}$ is constructed in the proof of
+Categories, Lemma \ref{categories-lemma-triple-2-fibre-product-pr02}.
+In terms of the description of objects of the fibre category
+above we see that $((a, b, \alpha), (b, c, \beta))$
+maps to $(a, c, \beta \circ \alpha)$.
+
+\medskip\noindent
+Unfortunately, this is {\it not compatible} with our conventions on
+groupoids, where we always have $j = (t, s) : R \to U$, and we ``think''
+of a $T$-valued point $r$ of $R$ as a morphism $r : s(r) \to t(r)$.
+However, this does not affect the proof of (4), since the opposite of
+a groupoid is a groupoid. But in the proof of (5) it is responsible
+for the inverses in the displayed formula below.
+
+\medskip\noindent
+Proof of (4). Recall that the sheaf $U$ is isomorphic to the sheaf
+$T \mapsto \Ob(\mathcal{U}_T)/\!\cong$, and
+similarly for $R$, see
+Lemma \ref{lemma-characterize-representable-by-space}.
+It follows from
+Categories,
+Lemma \ref{categories-lemma-category-fibred-setoids-presheaves-products}
+that this description is compatible with $2$-fibre products
+so we get a similar matching of
+$\mathcal{R} \times_{\text{pr}_1, \mathcal{U}, \text{pr}_0} \mathcal{R}$
+and $R \times_{s, U, t} R$.
+The morphisms $t, s : R \to U$ and $c : R \times_{s, U, t} R \to R$
+we get from the general equality (\ref{equation-morphisms-spaces}).
+Explicitly these maps are the transformations of functors that come
+from letting $\text{pr}_0$, $\text{pr}_1$, $\text{pr}_{02}$
+act on isomorphism classes of objects of fibre categories.
+Hence to show that we obtain a groupoid in algebraic
+spaces it suffices to show that for every scheme $T$ over $S$
+the structure
+$$
+(\Ob(\mathcal{U}_T)/\!\cong,
+\Ob(\mathcal{R}_T)/\!\cong,
+\text{pr}_1, \text{pr}_0, \text{pr}_{02})
+$$
+is a groupoid which is clear from our description of objects of
+$\mathcal{R}_T$ above.
+
+\medskip\noindent
+Proof of (5). We will eventually apply
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-2-coequalizer}
+to obtain the functor $[U/R] \to \mathcal{X}$.
+Consider the $1$-morphism $f : \mathcal{U} \to \mathcal{X}$.
+We have a $2$-arrow $\tau : f \circ \text{pr}_1 \to f \circ \text{pr}_0$
+by definition of $\mathcal{R}$ as the $2$-fibre product.
+Namely, on an object $(a, b, \alpha)$ of $\mathcal{R}$ over $T$ it is
+the map $\alpha^{-1} : b \to a$. We claim that
+$$
+\tau \circ \text{id}_{\text{pr}_{02}} =
+(\tau \star \text{id}_{\text{pr}_0})
+\circ
+(\tau \star \text{id}_{\text{pr}_1}).
+$$
+This identity says that given an object
+$((a, b, \alpha), (b, c, \beta))$ of
+$\mathcal{R} \times_{\text{pr}_1, \mathcal{U}, \text{pr}_0} \mathcal{R}$
+over $T$, then the composition of
+$$
+\xymatrix{
+c \ar[r]^{\beta^{-1}} & b \ar[r]^{\alpha^{-1}} & a
+}
+$$
+is the same as the arrow $(\beta \circ \alpha)^{-1} : a \to c$. This is
+clearly true, hence the claim holds. In this way we see that all the
+assumptions of
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-2-coequalizer}
+are satisfied for the structure
+$(\mathcal{U}, \mathcal{R}, \text{pr}_0, \text{pr}_1, \text{pr}_{02})$
+and the $1$-morphism $f$ and the $2$-morphism $\tau$.
+Except, to apply the lemma we need to prove this holds
+for the structure $(\mathcal{S}_U, \mathcal{S}_R, s, t, c)$
+with suitable morphisms.
+
+\medskip\noindent
+Now there should be some general abstract nonsense
+argument which transfers these data between the two, but it seems to
+be quite long. Instead, we use the following trick.
+Pick a quasi-inverse $j^{-1} : \mathcal{S}_U \to \mathcal{U}$
+of the canonical equivalence $j : \mathcal{U} \to \mathcal{S}_U$ which comes
+from $U(T) = \Ob(\mathcal{U}_T)/\!\!\cong$.
+This just means that for every scheme $T/S$ and every
+object $a \in \mathcal{U}_T$ we have picked out a particular
+element of its isomorphism class, namely $j^{-1}(j(a))$.
+Using $j^{-1}$ we may therefore see $\mathcal{S}_U$
+as a subcategory of $\mathcal{U}$. Having chosen this subcategory
+we can consider those objects $(a, b, \alpha)$ of $\mathcal{R}_T$
+such that $a, b$ are objects of $(\mathcal{S}_U)_T$, i.e., such
+that $j^{-1}(j(a)) = a$ and $j^{-1}(j(b)) = b$. Then it is clear that
+this forms a subcategory of $\mathcal{R}$ which maps isomorphically
+to $\mathcal{S}_R$ via the canonical equivalence
+$\mathcal{R} \to \mathcal{S}_R$. Moreover, this is clearly compatible
+with forming the $2$-fibre product
+$\mathcal{R} \times_{\text{pr}_1, \mathcal{U}, \text{pr}_0} \mathcal{R}$.
+Hence we see that we may simply restrict
+$f$ to $\mathcal{S}_U$ and restrict $\tau$ to a transformation
+between functors $\mathcal{S}_R \to \mathcal{X}$. Hence it is clear that
+the displayed equality of
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-2-coequalizer}
+holds since it holds even as an equality of transformations of functors
+$\mathcal{R} \times_{\text{pr}_1, \mathcal{U}, \text{pr}_0} \mathcal{R}
+\to \mathcal{X}$ before restricting to the subcategory
+$\mathcal{S}_{R \times_{s, U, t} R}$.
+
+\medskip\noindent
+This proves that
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-2-coequalizer}
+applies and we get our desired morphism of stacks
+$f_{can} : [U/R] \to \mathcal{X}$. We briefly spell out how
+$f_{can}$ is defined in this special case.
+On an object $a$ of $\mathcal{S}_U$ over $T$
+we have $f_{can}(a) = f(a)$, where we think of
+$\mathcal{S}_U \subset \mathcal{U}$ by the chosen embedding above.
+If $a, b$ are objects of $\mathcal{S}_U$ over $T$, then a morphism
+$\varphi : a \to b$ in $[U/R]$ is by definition an object of the
+form $\varphi = (b, a, \alpha)$ of $\mathcal{R}$ over $T$. (Note the
+switch.) And the rule in the proof of
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-2-coequalizer}
+is that
+\begin{equation}
+\label{equation-on-morphisms}
+f_{can}(\varphi) = \Big(f(a) \xrightarrow{\alpha^{-1}} f(b)\Big).
+\end{equation}
+Proof of (6). Both $[U/R]$ and $\mathcal{X}$ are stacks.
+Hence given a scheme $T/S$ and objects $a, b$ of $[U/R]$
+over $T$ we obtain a transformation of fppf sheaves
+$$
+\mathit{Isom}(a, b) \longrightarrow \mathit{Isom}(f_{can}(a), f_{can}(b))
+$$
+on $(\Sch/T)_{fppf}$. We have to show that this is an
+isomorphism. We may work fppf locally on $T$, hence we may assume that
+$a, b$ come from morphisms $a, b : T \to U$. By the embedding
+$\mathcal{S}_U \subset \mathcal{U}$ above we may also think of $a, b$ as
+objects of $\mathcal{U}$ over $T$. In
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-morphisms}
+we have seen that the left hand sheaf is represented by the algebraic space
+$$
+R \times_{(t, s), U \times_S U, (b, a)} T
+$$
+over $T$. On the other hand, the right hand side is by
+Stacks, Lemma \ref{stacks-lemma-isom-as-2-fibre-product}
+equal to the sheaf associated to the following stack in setoids:
+$$
+\mathcal{X}
+\times_{\mathcal{X} \times \mathcal{X}, (f \circ b, f \circ a)} T =
+\mathcal{X}
+\times_{\mathcal{X} \times \mathcal{X}, (f, f)}
+(\mathcal{U} \times \mathcal{U})
+\times_{\mathcal{U} \times \mathcal{U}, (b, a)} T =
+\mathcal{R}
+\times_{(\text{pr}_0, \text{pr}_1), \mathcal{U} \times \mathcal{U}, (b, a)} T
+$$
+which is representable by the fibre product displayed above.
+At this point we have shown that the two $\mathit{Isom}$-sheaves
+are isomorphic. Our $1$-morphism $f_{can} : [U/R] \to \mathcal{X}$ induces
+this isomorphism on $\mathit{Isom}$-sheaves by
+Equation (\ref{equation-on-morphisms}).
+\end{proof}
+
+\noindent
+We can use the previous very abstract lemma to produce
+presentations.
+
+\begin{lemma}
+\label{lemma-stack-presentation}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $\mathcal{X}$ be an algebraic stack over $S$.
+Let $U$ be an algebraic space over $S$.
+Let $f : \mathcal{S}_U \to \mathcal{X}$ be a surjective smooth morphism.
+Let $(U, R, s, t, c)$ be the groupoid in algebraic spaces
+and $f_{can} : [U/R] \to \mathcal{X}$ be the result of applying
+Lemma \ref{lemma-map-space-into-stack}
+to $U$ and $f$. Then
+\begin{enumerate}
+\item the morphisms $s$, $t$ are smooth, and
+\item the $1$-morphism $f_{can} : [U/R] \to \mathcal{X}$
+is an equivalence.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The morphisms $s, t$ are smooth by
+Lemmas \ref{lemma-property-morphism-equivalent} and
+\ref{lemma-map-presheaves-representable-by-spaces-transformation-property}.
+As the $1$-morphism $f$ is smooth and
+surjective it is clear that given any scheme $T$ and any object
+$a \in \Ob(\mathcal{X}_T)$ there exists a smooth and surjective
+morphism $T' \to T$ such that $a|_{T'}$ comes from an object of
+$[U/R]_{T'}$. Since $f_{can} : [U/R] \to \mathcal{X}$
+is fully faithful, we deduce that
+$[U/R] \to \mathcal{X}$ is essentially surjective as
+descent data on objects are effective on both sides, see
+Stacks, Lemma \ref{stacks-lemma-characterize-essentially-surjective-when-ff}.
+\end{proof}
+
+\begin{remark}
+\label{remark-flat-fp-presentation}
+If the morphism $f : \mathcal{S}_U \to \mathcal{X}$ of
+Lemma \ref{lemma-stack-presentation}
+is only assumed surjective, flat and locally of finite presentation, then
+it will still be the case that $f_{can} : [U/R] \to \mathcal{X}$ is an
+equivalence. In this case the morphisms $s$, $t$ will be flat and
+locally of finite presentation, but of course not smooth in general.
+\end{remark}
+
+\noindent
+Lemma \ref{lemma-stack-presentation}
+suggests the following definitions.
+
+\begin{definition}
+\label{definition-smooth-groupoid}
+Let $S$ be a scheme. Let $B$ be an algebraic space over $S$.
+Let $(U, R, s, t, c)$ be a groupoid in algebraic spaces over $B$.
+We say $(U, R, s, t, c)$ is a {\it smooth groupoid}\footnote{This terminology
+might be a bit confusing: it does not imply that $[U/R]$ is smooth
+over anything.}
+if $s, t : R \to U$ are smooth morphisms of algebraic spaces.
+\end{definition}
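+
+\noindent
+A standard example of a smooth groupoid comes from a smooth group
+action. Suppose $G$ is a group algebraic space over $S$ which is smooth
+over $S$, acting on an algebraic space $U$ over $S$ via
+$a : G \times_S U \to U$. The associated action groupoid is
+$$
+(U,\ G \times_S U,\ s,\ t,\ c), \quad s = \text{pr}_U, \quad t = a
+$$
+with $c$ induced by the group law. This is a smooth groupoid: $s$ is
+smooth as a base change of $G \to S$, and $t = s \circ \sigma$, where
+$\sigma : (g, u) \mapsto (g, a(g, u))$ is an automorphism of
+$G \times_S U$.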
+
+\begin{definition}
+\label{definition-presentation}
+Let $\mathcal{X}$ be an algebraic stack over $S$.
+A {\it presentation} of $\mathcal{X}$ is given by a smooth groupoid
+$(U, R, s, t, c)$ in algebraic spaces over $S$, and an
+equivalence $f : [U/R] \to \mathcal{X}$.
+\end{definition}
+
+\noindent
+We have seen above that every algebraic stack has a presentation.
+Our next task is to show that every smooth groupoid in algebraic
+spaces over $S$ gives rise to an algebraic stack.
+
+
+\section{The algebraic stack associated to a smooth groupoid}
+\label{section-smooth-groupoid-gives-algebraic-stack}
+
+\noindent
+In this section we start with a smooth groupoid in algebraic spaces
+and we show that the associated quotient stack is an algebraic stack.
+
+\begin{lemma}
+\label{lemma-diagonal-quotient-stack}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $(U, R, s, t, c)$ be a groupoid in algebraic spaces over $S$.
+Then the diagonal of $[U/R]$ is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that the $\mathit{Isom}$-sheaves are algebraic
+spaces, see
+Lemma \ref{lemma-representable-diagonal}.
+This follows from
+Bootstrap, Lemma \ref{bootstrap-lemma-quotient-stack-isom}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-quotient-smooth-presentation}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $(U, R, s, t, c)$ be a smooth groupoid in algebraic spaces over $S$.
+Then the morphism $\mathcal{S}_U \to [U/R]$ is smooth and surjective.
+\end{lemma}
+
+\begin{proof}
+Let $T$ be a scheme and let $x : (\Sch/T)_{fppf} \to [U/R]$
+be a $1$-morphism. We have to show that the projection
+$$
+\mathcal{S}_U \times_{[U/R]} (\Sch/T)_{fppf}
+\longrightarrow
+(\Sch/T)_{fppf}
+$$
+is surjective and smooth. We already know that the left hand side
+is representable by an algebraic space $F$, see
+Lemmas \ref{lemma-diagonal-quotient-stack} and
+\ref{lemma-representable-diagonal}.
+Hence we have to show the corresponding morphism $F \to T$ of
+algebraic spaces is surjective and smooth.
+Since we are working with properties of morphisms of algebraic
+spaces which are local on the target in the fppf topology we
+may check this fppf locally on $T$. By construction, there exists
+an fppf covering $\{T_i \to T\}$ of $T$ such that
+$x|_{(\Sch/T_i)_{fppf}}$ comes from a morphism
+$x_i : T_i \to U$. (Note that $F \times_T T_i$ represents the
+$2$-fibre product $\mathcal{S}_U \times_{[U/R]} (\Sch/T_i)_{fppf}$
+so everything is compatible with the base change via $T_i \to T$.)
+Hence we may assume that $x$ comes from $x : T \to U$.
+In this case we see that
+$$
+\mathcal{S}_U \times_{[U/R]} (\Sch/T)_{fppf}
+=
+(\mathcal{S}_U \times_{[U/R]} \mathcal{S}_U)
+\times_{\mathcal{S}_U} (\Sch/T)_{fppf}
+=
+\mathcal{S}_R \times_{\mathcal{S}_U} (\Sch/T)_{fppf}
+$$
+The first equality by
+Categories, Lemma \ref{categories-lemma-2-fibre-product-erase-factor}
+and the second equality by
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-2-cartesian}.
+Clearly the last $2$-fibre product is represented by the algebraic
+space $F = R \times_{s, U, x} T$ and the projection
+$R \times_{s, U, x} T \to T$ is smooth as the base change of
+the smooth morphism of algebraic spaces $s : R \to U$.
+It is also surjective as $s$ has a section (namely the identity
+$e : U \to R$ of the groupoid).
+This proves the lemma.
+\end{proof}
+
+\noindent
+Here is the main result of this section.
+
+\begin{theorem}
+\label{theorem-smooth-groupoid-gives-algebraic-stack}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $(U, R, s, t, c)$ be a smooth groupoid in algebraic spaces over $S$.
+Then the quotient stack $[U/R]$ is an algebraic stack over $S$.
+\end{theorem}
+
+\begin{proof}
+We check the three conditions of
+Definition \ref{definition-algebraic-stack}.
+By construction we have that $[U/R]$ is a stack in groupoids
+which is the first condition.
+
+\medskip\noindent
+The second condition follows from the stronger
+Lemma \ref{lemma-diagonal-quotient-stack}.
+
+\medskip\noindent
+Finally, we have to show there exists a scheme $W$ over $S$
+and a surjective smooth $1$-morphism
+$(\Sch/W)_{fppf} \longrightarrow [U/R]$.
+First choose $W \in \Ob((\Sch/S)_{fppf})$ and a
+surjective \'etale morphism $W \to U$. Note that this
+gives a surjective \'etale morphism $\mathcal{S}_W \to \mathcal{S}_U$
+of categories fibred in sets, see
+Lemma
+\ref{lemma-map-presheaves-representable-by-spaces-transformation-property}.
+Of course then $\mathcal{S}_W \to \mathcal{S}_U$ is also surjective and
+smooth, see
+Lemma \ref{lemma-representable-transformations-property-implication}.
+Hence $\mathcal{S}_W \to \mathcal{S}_U \to [U/R]$ is surjective
+and smooth by a combination of
+Lemmas \ref{lemma-smooth-quotient-smooth-presentation} and
+\ref{lemma-composition-representable-transformations-property}.
+\end{proof}
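+
+\noindent
+For example, if $G$ is a smooth group scheme over $S$, then
+$(S, G, s, t, c)$, with $s = t$ the structure morphism $G \to S$ and
+$c$ the group law, is a smooth groupoid in algebraic spaces over $S$.
+Hence the theorem shows that the classifying stack
+$$
+BG = [S/G]
+$$
+is an algebraic stack over $S$.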
+
+
+
+
+
+\section{Change of big site}
+\label{section-change-big-site}
+
+\noindent
+In this section we briefly discuss what happens when we change big sites.
+The upshot is that we can always enlarge the big site at will, hence we
+may assume any set of schemes we want to consider is contained in the big
+fppf site over which we consider our algebraic stacks.
+We encourage the reader to skip this section.
+
+\medskip\noindent
+Pullbacks of stacks are defined in
+Stacks, Section \ref{stacks-section-inverse-image}.
+
+\begin{lemma}
+\label{lemma-change-big-site}
+Suppose given big sites $\Sch_{fppf}$ and $\Sch'_{fppf}$.
+Assume that $\Sch_{fppf}$ is contained in $\Sch'_{fppf}$,
+see Topologies, Section \ref{topologies-section-change-alpha}.
+Let $S$ be an object of $\Sch_{fppf}$.
Let $f : (\Sch'/S)_{fppf} \to (\Sch/S)_{fppf}$ be the morphism
of sites corresponding to the inclusion functor
$u : (\Sch/S)_{fppf} \to (\Sch'/S)_{fppf}$.
+Let $\mathcal{X}$ be a stack in groupoids over $(\Sch/S)_{fppf}$.
+\begin{enumerate}
+\item if $\mathcal{X}$ is representable by some
+$X \in \Ob((\Sch/S)_{fppf})$, then
+$f^{-1}\mathcal{X}$ is representable too, in fact it is representable by the
+same scheme $X$, now viewed as an object of $(\Sch'/S)_{fppf}$,
+\item if $\mathcal{X}$ is representable by
+$F \in \Sh((\Sch/S)_{fppf})$ which is
+an algebraic space, then $f^{-1}\mathcal{X}$ is representable
+by the algebraic space $f^{-1}F$,
+\item if $\mathcal{X}$ is an algebraic stack, then $f^{-1}\mathcal{X}$
+is an algebraic stack, and
+\item if $\mathcal{X}$ is a Deligne-Mumford stack, then $f^{-1}\mathcal{X}$
+is a Deligne-Mumford stack too.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let us prove (3). By
+Lemma \ref{lemma-stack-presentation}
+we may write $\mathcal{X} = [U/R]$ for some smooth
+groupoid in algebraic spaces $(U, R, s, t, c)$. By
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-change-big-site}
+we see that $f^{-1}[U/R] = [f^{-1}U/f^{-1}R]$.
+Of course $(f^{-1}U, f^{-1}R, f^{-1}s, f^{-1}t, f^{-1}c)$
+is a smooth groupoid in algebraic spaces too. Hence (3) is proved.
+
+\medskip\noindent
+Now the other cases (1), (2), (4) each mean that $\mathcal{X}$ has
+a presentation $[U/R]$ of a particular kind, and hence translate into the
+same kind of presentation for $f^{-1}\mathcal{X} = [f^{-1}U/f^{-1}R]$.
+Whence the lemma is proved.
+\end{proof}
+
+\noindent
It is not true (in general) that the restriction of an algebraic stack
over the bigger site is an algebraic stack over the smaller site (simply
by reasons of cardinality). Hence we can only ever use a simple lemma of this
kind to enlarge the base category and never to shrink it.
+
+\begin{lemma}
+\label{lemma-fully-faithful}
+Suppose $\Sch_{fppf}$ is contained in $\Sch'_{fppf}$.
+Let $S$ be an object of $\Sch_{fppf}$. Denote
+$\textit{Algebraic-Stacks}/S$ the $2$-category of algebraic stacks over $S$
+defined using $\Sch_{fppf}$. Similarly, denote
+$\textit{Algebraic-Stacks}'/S$ the $2$-category of algebraic stacks over $S$
+defined using $\Sch'_{fppf}$. The rule
+$\mathcal{X} \mapsto f^{-1}\mathcal{X}$ of
+Lemma \ref{lemma-change-big-site}
+defines a functor of $2$-categories
+$$
+\textit{Algebraic-Stacks}/S \longrightarrow \textit{Algebraic-Stacks}'/S
+$$
+which defines equivalences of morphism categories
+$$
+\Mor_{\textit{Algebraic-Stacks}/S}(\mathcal{X}, \mathcal{Y})
+\longrightarrow
+\Mor_{\textit{Algebraic-Stacks}'/S}(f^{-1}\mathcal{X}, f^{-1}\mathcal{Y})
+$$
for all objects $\mathcal{X}, \mathcal{Y}$ of
$\textit{Algebraic-Stacks}/S$. An object
$\mathcal{X}'$ of $\textit{Algebraic-Stacks}'/S$
is equivalent to $f^{-1}\mathcal{X}$ for some
$\mathcal{X}$ in $\textit{Algebraic-Stacks}/S$
if and only if it has a presentation $\mathcal{X}' = [U'/R']$
with $U', R'$ isomorphic to $f^{-1}U$, $f^{-1}R$ for some
$U, R \in \textit{Spaces}/S$.
+\end{lemma}
+
+\begin{proof}
+The statement on morphism categories is a consequence of the more general
+Stacks, Lemma \ref{stacks-lemma-bigger-site}.
+The characterization of the ``essential image'' follows from the description
+of $f^{-1}$ in the proof of
+Lemma \ref{lemma-change-big-site}.
+\end{proof}
+
+
+\section{Change of base scheme}
+\label{section-change-base-scheme}
+
+\noindent
+In this section we briefly discuss what happens when we change base schemes.
+The upshot is that given a morphism $S \to S'$ of base schemes, any algebraic
+stack over $S$ can be viewed as an algebraic stack over $S'$.
+
+\begin{lemma}
+\label{lemma-category-of-spaces-over-smaller-base-scheme}
+Let $\Sch_{fppf}$ be a big fppf site.
+Let $S \to S'$ be a morphism of this site.
The constructions A and B of
Stacks, Section \ref{stacks-section-localize}
give isomorphisms of $2$-categories
+$$
+\left\{
+\begin{matrix}
+2\text{-category of algebraic}\\
+\text{stacks }\mathcal{X}\text{ over }S
+\end{matrix}
+\right\}
+\leftrightarrow
+\left\{
+\begin{matrix}
+2\text{-category of pairs }(\mathcal{X}', f)\text{ consisting of an}\\
+\text{algebraic stack }\mathcal{X}'\text{ over }S'\text{ and a morphism}\\
+f : \mathcal{X}' \to (\Sch/S)_{fppf}\text{ of algebraic stacks over }S'
+\end{matrix}
+\right\}
+$$
+\end{lemma}
+
+\begin{proof}
+The statement makes sense as the functor
+$j : (\Sch/S)_{fppf} \to (\Sch/S')_{fppf}$
+is the localization functor associated to the object $S/S'$
+of $(\Sch/S')_{fppf}$. By
+Stacks, Lemma \ref{stacks-lemma-localize-stacks}
+the only thing to show is that the constructions A and B
+preserve the subcategories of algebraic stacks.
+For example, if $\mathcal{X} = [U/R]$ then construction A
+applied to $\mathcal{X}$ just produces
+$\mathcal{X}' = \mathcal{X}$. Conversely, if $\mathcal{X}' = [U'/R']$
then the morphism $f$ induces morphisms of algebraic spaces
+$U' \to S$ and $R' \to S$, and then $\mathcal{X} = [U'/R']$
+but now viewed as a stack over $S$. Hence the lemma is clear.
+\end{proof}
+
+\begin{definition}
+\label{definition-viewed-as}
+Let $\Sch_{fppf}$ be a big fppf site.
+Let $S \to S'$ be a morphism of this site.
+If $p : \mathcal{X} \to (\Sch/S)_{fppf}$
+is an algebraic stack over $S$, then
+$\mathcal{X}$ {\it viewed as an algebraic stack over $S'$}
+is the algebraic stack
+$$
+\mathcal{X} \longrightarrow (\Sch/S')_{fppf}
+$$
+gotten by applying construction A of
+Lemma \ref{lemma-category-of-spaces-over-smaller-base-scheme}
+to $\mathcal{X}$.
+\end{definition}
+
+\noindent
+Conversely, what if we start with an algebraic stack $\mathcal{X}'$
+over $S'$ and we want to get an algebraic stack over $S$?
+Well, then we consider the $2$-fibre product
+$$
+\mathcal{X}'_S
+=
+(\Sch/S)_{fppf} \times_{(\Sch/S')_{fppf}} \mathcal{X}'
+$$
+which is an algebraic stack over $S'$ according to
+Lemma \ref{lemma-2-fibre-product}.
+Moreover, it comes equipped with a natural $1$-morphism
+$p : \mathcal{X}'_S \to (\Sch/S)_{fppf}$ and hence by
+Lemma \ref{lemma-category-of-spaces-over-smaller-base-scheme}
+it corresponds in a canonical way to an algebraic stack over $S$.
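\medskip\noindent
For example, if $\mathcal{X}'$ is representable, i.e.,
$\mathcal{X}' = (\Sch/X')_{fppf}$ for some scheme $X'$ over $S'$, then
$\mathcal{X}'_S$ is representable by $X' \times_{S'} S$. In other words,
the construction above recovers the usual base change of schemes.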
+
+\begin{definition}
+\label{definition-change-of-base}
+Let $\Sch_{fppf}$ be a big fppf site.
+Let $S \to S'$ be a morphism of this site.
+Let $\mathcal{X}'$ be an algebraic stack over $S'$.
+The {\it change of base of $\mathcal{X}'$} is the
algebraic stack $\mathcal{X}'_S$ over $S$ described above.
+\end{definition}
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/algebraization.tex b/books/stacks/algebraization.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5b9e4bad9f5400dbb72b1e8ebfedfc4f8ce14135
--- /dev/null
+++ b/books/stacks/algebraization.tex
@@ -0,0 +1,9652 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Algebraic and Formal Geometry}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+This chapter continues the study of formal algebraic geometry
+and in particular the question of whether a formal object is
+the completion of an algebraic one. A fundamental reference is \cite{SGA2}.
+Here is a list of results we have already discussed
+in the Stacks project:
+\begin{enumerate}
+\item The theorem on formal functions, see
+Cohomology of Schemes, Section \ref{coherent-section-theorem-formal-functions}.
+\item Coherent formal modules, see
+Cohomology of Schemes, Section \ref{coherent-section-coherent-formal}.
+\item Grothendieck's existence theorem, see
+Cohomology of Schemes, Sections \ref{coherent-section-existence},
+\ref{coherent-section-existence-proper}, and
+\ref{coherent-section-existence-proper-support}.
+\item Grothendieck's algebraization theorem, see
+Cohomology of Schemes, Section \ref{coherent-section-algebraization}.
+\item Grothendieck's existence theorem more generally, see
+More on Flatness, Sections \ref{flat-section-existence} and
+\ref{flat-section-existence-derived}.
+\end{enumerate}
+Let us give an overview of the contents of this chapter.
+
+\medskip\noindent
+Let $X$ be a scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a finite type quasi-coherent sheaf of ideals. Many questions
+in this chapter have to do with inverse systems $(\mathcal{F}_n)$
+of quasi-coherent $\mathcal{O}_X$-modules such that
+$\mathcal{F}_n = \mathcal{F}_{n + 1}/\mathcal{I}^n\mathcal{F}_{n + 1}$.
+An important special case is where $X$ is a scheme over a Noetherian
+ring $A$ and $\mathcal{I} = I \mathcal{O}_X$ for some ideal $I \subset A$.
+In Section \ref{section-ML-degree-zero}
+we prove some elementary results on such systems of coherent modules.
+In Section \ref{section-formal-functions-principal} we discuss
additional results when $I = (f)$ is principal. In Section
+\ref{section-formal-sections-cd-one} we work in the slightly
+more general setting where $\text{cd}(A, I) = 1$. One of the themes
+of this chapter will be to show that results proven in the case $I = (f)$
+also hold true when we only assume $\text{cd}(A, I) = 1$.
+
+\medskip\noindent
+In Section \ref{section-derived-completion} we discuss derived completion
+of modules on a ringed site $(\mathcal{C}, \mathcal{O})$
+with respect to a finite type sheaf of ideals $\mathcal{I}$.
+This section is the natural continuation of the theory of derived completion
+in commutative algebra as described in
+More on Algebra, Section \ref{more-algebra-section-derived-completion}.
+The first main result is that derived completion exists.
The second main result is that for a morphism $f$ of ringed sites
+derived completion commutes with derived pushforward:
+$$
+(Rf_*K)^\wedge = Rf_*(K^\wedge)
+$$
+if the ideal sheaf upstairs is locally generated by sections coming
+from the ideal downstairs, see
+Lemma \ref{lemma-pushforward-commutes-with-derived-completion}.
+We stress that both main results are very elementary in case the
+ideals in question are globally finitely generated which will
+be true for all applications of this theory in this chapter.
+The displayed equality is the ``correct'' version of the
+theorem on formal functions, see discussion in
+Section \ref{section-formal-functions}.
+
+\medskip\noindent
+Let $A$ be a Noetherian ring and let $I, J$ be two ideals of $A$.
+Let $M$ be a finite $A$-module.
+The next topic in this chapter is the map
+$$
+R\Gamma_J(M) \longrightarrow R\Gamma_J(M)^\wedge
+$$
+from local cohomology of $M$ into the derived $I$-adic completion
+of the same. It turns out that if we impose suitable depth conditions
+this map becomes an isomorphism on cohomology in a range of degrees.
+In Section \ref{section-algebraization-sections-general}
+we work essentially in the generality just mentioned.
+In Section \ref{section-algebraization-punctured}
we assume $A$ is a local ring and $J = \mathfrak m$ is the maximal ideal.
+We encourage the reader to read this section before the other two in
+this part of the chapter.
+Finally, in Section \ref{section-bootstrap} we bootstrap
+the local case to obtain stronger results back in the general case.
+
+\medskip\noindent
+In the next part of this chapter we use the results on
+completion of local cohomology to get a nonexhaustive list of results on
+cohomology of the completion of coherent modules.
+More precisely, let $A$ be a Noetherian ring, let $I \subset A$
+be an ideal, and let $U \subset \Spec(A)$ be an open subscheme.
+If $\mathcal{F}$ is a coherent $\mathcal{O}_U$-module, then
+we may consider the maps
+$$
+H^i(U, \mathcal{F}) \longrightarrow \lim H^i(U, \mathcal{F}/I^n\mathcal{F})
+$$
+and ask if we get an isomorphism in a certain range of degrees.
+In Section \ref{section-algebraization-sections}
+we work out some examples where $U$ is the punctured spectrum
+of a local ring. In Section \ref{section-algebraization-sections-coherent}
+we discuss the general case.
+In Section \ref{section-connected} we apply some of the results
+obtained to questions of connectedness in algebraic geometry.
+
+\medskip\noindent
+The remaining sections of this chapter are devoted to a discussion
+of algebraization of coherent formal modules. In other words, given
+an inverse system of coherent modules $(\mathcal{F}_n)$ on $U$
+as above with
+$\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$
+we ask whether there exists a coherent $\mathcal{O}_U$-module
+$\mathcal{F}$ such that
+$\mathcal{F}_n = \mathcal{F}/I^n\mathcal{F}$
+for all $n$. We encourage the reader to read
+Section \ref{section-algebraization-modules}
+for a precise statement of the question, a useful general result
+(Lemma \ref{lemma-when-done}), and a nontrivial application
+(Lemma \ref{lemma-algebraization-principal-variant}).
+To prove a result going essentially beyond this case
+quite a bit more theory has to be developed.
+Please see Section \ref{section-algebraization-modules-conclusion}
+for the strongest results of this type obtained in this chapter.
+
+
+
+\section{Formal sections, I}
+\label{section-ML-degree-zero}
+
+\noindent
+Let $A$ be a ring and $I \subset A$ an ideal. Let $X$ be a scheme
+over $\Spec(A)$. In this section we prove some general facts on inverse
+systems of $\mathcal{O}_X$-modules $\{\mathcal{F}_n\}$ such that
+$\mathcal{F}_n = \mathcal{F}_{n + 1} / I^n \mathcal{F}_{n + 1}$.
+Some of these results are proved in greater generality in
+Cohomology, Section \ref{cohomology-section-inverse-systems}.
+
+\begin{lemma}
+\label{lemma-ML-general}
+Let $I$ be an ideal of a ring $A$. Let $X$ be a scheme over $\Spec(A)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of $\mathcal{O}_X$-modules
+such that $\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$.
+Assume
+$$
+\bigoplus\nolimits_{n \geq 0} H^1(X, I^n\mathcal{F}_{n + 1})
+$$
+satisfies the ascending chain condition as a graded
+$\bigoplus_{n \geq 0} I^n/I^{n + 1}$-module.
+Then the inverse system $M_n = \Gamma(X, \mathcal{F}_n)$ satisfies the
+Mittag-Leffler condition.
+\end{lemma}
+
+\begin{proof}
+This is a special case of the more general
+Cohomology, Lemma \ref{cohomology-lemma-ML-general}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ML-general-better}
+Let $I$ be an ideal of a ring $A$. Let $X$ be a scheme over $\Spec(A)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of $\mathcal{O}_X$-modules
+such that $\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$.
+Given $n$ define
+$$
+H^1_n =
+\bigcap\nolimits_{m \geq n}
+\Im\left(
+H^1(X, I^n\mathcal{F}_{m + 1}) \to H^1(X, I^n\mathcal{F}_{n + 1})
+\right)
+$$
+If $\bigoplus H^1_n$ satisfies the ascending chain condition as a graded
+$\bigoplus_{n \geq 0} I^n/I^{n + 1}$-module, then the inverse system
+$M_n = \Gamma(X, \mathcal{F}_n)$ satisfies the Mittag-Leffler condition.
+\end{lemma}
+
+\begin{proof}
+This is a special case of the more general
+Cohomology, Lemma \ref{cohomology-lemma-ML-general-better}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-topology-I-adic-general}
+Let $I$ be a finitely generated ideal of a ring $A$.
+Let $X$ be a scheme over $\Spec(A)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of $\mathcal{O}_X$-modules such that
+$\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$. Assume
+$$
+\bigoplus\nolimits_{n \geq 0} H^0(X, I^n\mathcal{F}_{n + 1})
+$$
+satisfies the ascending chain condition as a graded
+$\bigoplus_{n \geq 0} I^n/I^{n + 1}$-module.
+Then the limit topology on $M = \lim \Gamma(X, \mathcal{F}_n)$
+is the $I$-adic topology.
+\end{lemma}
+
+\begin{proof}
+This is a special case of the more general
+Cohomology, Lemma \ref{cohomology-lemma-topology-I-adic-general}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-properties-system}
+Let $X$ be a scheme. Let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of quasi-coherent $\mathcal{O}_X$-modules
+such that
+$\mathcal{F}_n = \mathcal{F}_{n + 1}/\mathcal{I}^n\mathcal{F}_{n + 1}$.
+Set $\mathcal{F} = \lim \mathcal{F}_n$. Then
+\begin{enumerate}
+\item $\mathcal{F} = R\lim \mathcal{F}_n$,
+\item for any affine open $U \subset X$ we have
+$H^p(U, \mathcal{F}) = 0$ for $p > 0$, and
+\item for each $p$ there is a short exact sequence
+$0 \to R^1\lim H^{p - 1}(X, \mathcal{F}_n) \to
+H^p(X, \mathcal{F}) \to \lim H^p(X, \mathcal{F}_n) \to 0$.
+\end{enumerate}
+If moreover $\mathcal{I}$ is of finite type, then
+\begin{enumerate}
+\item[(4)]
+$\mathcal{F}_n = \mathcal{F}/\mathcal{I}^n\mathcal{F}$, and
+\item[(5)]
+$\mathcal{I}^n \mathcal{F} = \lim_{m \geq n} \mathcal{I}^n\mathcal{F}_m$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (1), (2), and (3) are general facts about inverse systems of
+quasi-coherent modules with surjective transition maps, see
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-Rlim-quasi-coherent}
+and Cohomology, Lemma \ref{cohomology-lemma-RGamma-commutes-with-Rlim}.
+Next, assume $\mathcal{I}$ is of finite type.
+Let $U \subset X$ be affine open. Say $U = \Spec(A)$ and $\mathcal{I}|_U$
+corresponds to $I \subset A$. Observe that $I$ is a finitely generated ideal.
+By the equivalence of categories between quasi-coherent $\mathcal{O}_U$-modules
+and $A$-modules (Schemes, Lemma \ref{schemes-lemma-equivalence-quasi-coherent})
+we find that $M_n = \mathcal{F}_n(U)$ is an inverse system
+of $A$-modules with $M_n = M_{n + 1}/I^nM_{n + 1}$. Thus
+$$
+M = \mathcal{F}(U) = \lim \mathcal{F}_n(U) = \lim M_n
+$$
+is an $I$-adically complete module with $M/I^nM = M_n$ by
+Algebra, Lemma \ref{algebra-lemma-limit-complete}. This proves (4).
+Part (5) translates into the statement that
+$\lim_{m \geq n} I^nM/I^mM = I^nM$.
Since $I^mM = I^{m - n} \cdot I^nM$ this is just the statement that
$I^nM$ is $I$-adically complete. This follows from
+Algebra, Lemma \ref{algebra-lemma-hathat-finitely-generated}
+and the fact that $M$ is complete.
+\end{proof}
+
+
+
+
+
+\section{Formal sections, II}
+\label{section-formal-functions-principal}
+
+\noindent
+In this section we ask if completion and taking cohomology commute
for sheaves of modules on schemes over an affine base $\Spec(A)$ when completion
+is with respect to a principal ideal in $A$. Of course, we have already
+discussed the theorem on formal functions in
+Cohomology of Schemes, Section \ref{coherent-section-theorem-formal-functions}.
+Moreover, we will see in
+Remark \ref{remark-local-calculation-derived-completion} and
+Section \ref{section-formal-functions}
+that derived completion commutes with total cohomology in great generality.
+In this section we just collect a few simple special cases of this material
+that will help us with future developments.
+
+\begin{lemma}
+\label{lemma-equivalent-f-good}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $f \in \Gamma(X, \mathcal{O}_X)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
be an inverse system of $\mathcal{O}_X$-modules.
+The following are equivalent
+\begin{enumerate}
+\item for all $n \geq 1$ the map
+$f : \mathcal{F}_{n + 1} \to \mathcal{F}_{n + 1}$ factors
+through $\mathcal{F}_{n + 1} \to \mathcal{F}_n$ to give a
+short exact sequence
+$0 \to \mathcal{F}_n \to \mathcal{F}_{n + 1} \to \mathcal{F}_1 \to 0$,
+\item for all $n \geq 1$ the map
+$f^n : \mathcal{F}_{n + 1} \to \mathcal{F}_{n + 1}$
+factors through $\mathcal{F}_{n + 1} \to \mathcal{F}_1$
+to give a short exact sequence
+$0 \to \mathcal{F}_1 \to \mathcal{F}_{n + 1} \to \mathcal{F}_n \to 0$
+\item there exists an $\mathcal{O}_X$-module $\mathcal{G}$
+which is $f$-divisible such that $\mathcal{F}_n = \mathcal{G}[f^n]$.
+\end{enumerate}
+If $X$ is a scheme and $\mathcal{F}_n$ is quasi-coherent, then these
+are also equivalent to
+\begin{enumerate}
+\item[(4)] there exists an $\mathcal{O}_X$-module $\mathcal{F}$
+which is $f$-torsion free such that
+$\mathcal{F}_n = \mathcal{F}/f^n\mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We omit the proof of the equivalence of (1) and (2).
+The condition that $\mathcal{G}$ is $f$-divisible means that
+$f : \mathcal{G} \to \mathcal{G}$ is surjective.
+Thus given $\mathcal{F}_n$ as in (1) we set
+$\mathcal{G} = \colim \mathcal{F}_n$ where the maps
+$\mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to \ldots$
+are as in (1). This produces an $f$-divisible $\mathcal{O}_X$-module
+with $\mathcal{F}_n = \mathcal{G}[f^n]$ as can be seen by
+checking on stalks.
+The condition that $\mathcal{F}$ is $f$-torsion free means that
+$f : \mathcal{F} \to \mathcal{F}$ is injective.
+If $X$ is a scheme and $\mathcal{F}_n$ is quasi-coherent,
+then we set $\mathcal{F} = \lim \mathcal{F}_n$. Namely, for an
+affine open $U \subset X$ the transition maps
+$\mathcal{F}_{n + 1}(U) \to \mathcal{F}_n(U)$ are surjective
+by vanishing of higher cohomology. This produces an $f$-torsion free
+$\mathcal{O}_X$-module with
+$\mathcal{F}_n = \mathcal{F}/f^n\mathcal{F}$
+(Lemma \ref{lemma-properties-system}).
+\end{proof}
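\medskip\noindent
For example (a standard special case, included for concreteness):
if $f$ is a nonzerodivisor on a quasi-coherent $\mathcal{O}_X$-module
$\mathcal{F}$ and $\mathcal{F}_n = \mathcal{F}/f^n\mathcal{F}$ as in (4),
then the $f$-divisible module of (3) is
$$
\mathcal{G} =
\colim \left(
\mathcal{F}/f\mathcal{F} \xrightarrow{f}
\mathcal{F}/f^2\mathcal{F} \xrightarrow{f} \ldots
\right)
$$
with the stage $n$ map $\mathcal{F}_n \to \mathcal{G}$ identifying
$\mathcal{F}_n$ with $\mathcal{G}[f^n]$. Concretely, for
$X = \Spec(\mathbf{Z})$, $\mathcal{F} = \mathcal{O}_X$, and $f = p$ a
prime number this gives
$\mathcal{G} = \widetilde{\mathbf{Z}[1/p]/\mathbf{Z}}$
whose $p^n$-torsion is $\mathbf{Z}/p^n\mathbf{Z}$.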
+
+\begin{lemma}
+\label{lemma-topology-I-adic-f}
+Suppose $X$, $f$, $(\mathcal{F}_n)$ is as in
+Lemma \ref{lemma-equivalent-f-good}. Then the limit topology on
+$H^p = \lim H^p(X, \mathcal{F}_n)$ is the $f$-adic topology.
+\end{lemma}
+
+\begin{proof}
Namely, it is clear that $f^t H^p$ maps to zero in $H^p(X, \mathcal{F}_t)$.
On the other hand, let $c \geq 1$ and suppose that $\xi = (\xi_n) \in H^p$
lies in the basic open neighbourhood
$\Ker(H^p \to H^p(X, \mathcal{F}_c))$ of zero for the limit topology,
i.e., $\xi_c = 0$. Then $\xi_n$
maps to zero in $H^p(X, \mathcal{F}_c)$ for $n \geq c$.
+Consider the inverse system of short exact sequences
+$$
+0 \to \mathcal{F}_{n - c} \xrightarrow{f^c} \mathcal{F}_n \to
+\mathcal{F}_c \to 0
+$$
+and the corresponding inverse system of long exact cohomology sequences
+$$
+H^{p - 1}(X, \mathcal{F}_c) \to
+H^p(X, \mathcal{F}_{n - c}) \to
+H^p(X, \mathcal{F}_n) \to
+H^p(X, \mathcal{F}_c)
+$$
Since the term $H^{p - 1}(X, \mathcal{F}_c)$ is independent of
$n$ we can choose a compatible sequence of elements
$\xi'_n \in H^p(X, \mathcal{F}_{n - c})$
lifting $\xi_n$. Setting $\xi' = (\xi'_n)$ we see that
$\xi = f^c \xi'$. This even shows that
$f^c H^p = \Ker(H^p \to H^p(X, \mathcal{F}_c))$ on the nose.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-finite}
+Let $A$ be a Noetherian ring complete with respect to a principal ideal $(f)$.
+Let $X$ be a scheme over $\Spec(A)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of $\mathcal{O}_X$-modules. Assume
+\begin{enumerate}
+\item $\Gamma(X, \mathcal{F}_1)$ is a finite $A$-module,
+\item the equivalent conditions of Lemma \ref{lemma-equivalent-f-good} hold.
+\end{enumerate}
+Then
+$$
+M = \lim \Gamma(X, \mathcal{F}_n)
+$$
+is a finite $A$-module, $f$ is a nonzerodivisor on $M$, and
+$M/fM$ is the image of $M$ in $\Gamma(X, \mathcal{F}_1)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-topology-I-adic-f} and its proof we have
+$M/fM \subset H^0(X, \mathcal{F}_1)$. From (1) and the Noetherian
+property of $A$ we get that $M/fM$ is a finite $A$-module.
+Observe that $\bigcap f^nM = 0$ as $f^nM$ maps to zero in
+$H^0(X, \mathcal{F}_n)$. By
+Algebra, Lemma \ref{algebra-lemma-finite-over-complete-ring}
+we conclude that $M$ is finite over $A$.
Finally, $f$ is a nonzerodivisor on $M$: if $\xi = (\xi_n) \in M$ satisfies
$f\xi = 0$, then for every $n$ the image of $\xi_n$ under the injective map
$\mathcal{F}_n \to \mathcal{F}_{n + 1}$ of
Lemma \ref{lemma-equivalent-f-good} part (1) equals
$f\xi_{n + 1} = 0$, whence $\xi_n = 0$ for all $n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ML}
+Let $A$ be a ring. Let $f \in A$. Let $X$ be a scheme over $\Spec(A)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of $\mathcal{O}_X$-modules. Assume
+\begin{enumerate}
+\item either $H^1(X, \mathcal{F}_1)$ is an $A$-module of finite length
+or $A$ is Noetherian and $H^1(X, \mathcal{F}_1)$ is a finite $A$-module,
+\item the equivalent conditions of Lemma \ref{lemma-equivalent-f-good} hold.
+\end{enumerate}
+Then the inverse system $M_n = \Gamma(X, \mathcal{F}_n)$ satisfies the
+Mittag-Leffler condition.
+\end{lemma}
+
+\begin{proof}
+Set $I = (f)$. We will use the criterion of Lemma \ref{lemma-ML-general}.
Observe that for all $n \geq 0$ the inclusion
$\mathcal{F}_1 \to \mathcal{F}_{n + 1}$ of
Lemma \ref{lemma-equivalent-f-good} part (2)
identifies $\mathcal{F}_1$ with $I^n\mathcal{F}_{n + 1}$.
Thus it suffices to show that
$$
\bigoplus\nolimits_{n \geq 0} H^1(X, \mathcal{F}_1) \cdot f^n
$$
+is a graded $S = \bigoplus_{n \geq 0} A/(f) \cdot f^n$-module satisfying the
+ascending chain condition. If $A$ is not Noetherian, then
+$H^1(X, \mathcal{F}_1)$ has finite length and the result holds.
+If $A$ is Noetherian, then $S$ is a Noetherian ring and the result
+holds as the module is finite over $S$ by the assumed finiteness
+of $H^1(X, \mathcal{F}_1)$. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ML-better}
+Let $A$ be a ring. Let $f \in A$. Let $X$ be a scheme over $\Spec(A)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of $\mathcal{O}_X$-modules. Assume
+\begin{enumerate}
+\item either there is an $m \geq 1$ such that the image of
+$H^1(X, \mathcal{F}_m) \to H^1(X, \mathcal{F}_1)$
+is an $A$-module of finite length or $A$ is Noetherian
+and the intersection of the images of
+$H^1(X, \mathcal{F}_m) \to H^1(X, \mathcal{F}_1)$
+is a finite $A$-module,
+\item the equivalent conditions of Lemma \ref{lemma-equivalent-f-good} hold.
+\end{enumerate}
+Then the inverse system $M_n = \Gamma(X, \mathcal{F}_n)$ satisfies the
+Mittag-Leffler condition.
+\end{lemma}
+
+\begin{proof}
+Set $I = (f)$. We will use the criterion of Lemma \ref{lemma-ML-general-better}
+involving the modules $H^1_n$. For $m \geq n$ we have
+$I^n\mathcal{F}_{m + 1} = \mathcal{F}_{m + 1 - n}$. Thus we see that
+$$
+H^1_n = \bigcap\nolimits_{m \geq 1} \Im\left(
+H^1(X, \mathcal{F}_m) \to H^1(X, \mathcal{F}_1)
+\right)
+$$
+is independent of $n$ and
$\bigoplus H^1_n = \bigoplus H^1_1 \cdot f^n$.
+Thus we conclude exactly as in the proof of Lemma \ref{lemma-ML}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formal-functions-principal}
+\begin{reference}
+\cite[Lemma 1.6]{Bhatt-local}
+\end{reference}
+Let $A$ be a ring and $f \in A$. Let $X$ be a scheme over $A$.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Assume that $\mathcal{F}[f^n] = \Ker(f^n : \mathcal{F} \to \mathcal{F})$
+stabilizes. Then
+$$
+R\Gamma(X, \lim \mathcal{F}/f^n\mathcal{F}) =
+R\Gamma(X, \mathcal{F})^\wedge
+$$
+where the right hand side indicates the derived completion
+with respect to the ideal $(f) \subset A$. Let $H^p$ be the
+$p$th cohomology group of this complex. Then there are short
+exact sequences
+$$
+0 \to R^1\lim H^{p - 1}(X, \mathcal{F}/f^n\mathcal{F})
+\to H^p \to \lim H^p(X, \mathcal{F}/f^n\mathcal{F}) \to 0
+$$
+and
+$$
+0 \to H^0(H^p(X, \mathcal{F})^\wedge) \to H^p \to
+T_f(H^{p + 1}(X, \mathcal{F})) \to 0
+$$
where $T_f(-)$ denotes the $f$-adic Tate module as in
+More on Algebra, Example
+\ref{more-algebra-example-spectral-sequence-principal}.
+\end{lemma}
+
+\begin{proof}
+We start with the canonical identifications
+\begin{align*}
+R\Gamma(X, \mathcal{F})^\wedge
+& =
+R\lim R\Gamma(X, \mathcal{F}) \otimes_A^\mathbf{L} (A \xrightarrow{f^n} A) \\
+& =
+R\lim R\Gamma(X, \mathcal{F} \xrightarrow{f^n} \mathcal{F}) \\
+& =
+R\Gamma(X, R\lim (\mathcal{F} \xrightarrow{f^n} \mathcal{F}))
+\end{align*}
+The first equality holds by
+More on Algebra, Lemma \ref{more-algebra-lemma-derived-completion-koszul}.
+The second by the projection formula, see
+Cohomology, Lemma \ref{cohomology-lemma-projection-formula-perfect}.
+The third by Cohomology, Lemma
+\ref{cohomology-lemma-Rf-commutes-with-Rlim}.
+Note that by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-Rlim-quasi-coherent}
+we have
+$\lim \mathcal{F}/f^n\mathcal{F} = R\lim \mathcal{F}/f^n \mathcal{F}$.
+Thus to finish the proof of the first statement of the lemma it suffices to
+show that the pro-objects $(f^n : \mathcal{F} \to \mathcal{F})$
+and $(\mathcal{F}/f^n \mathcal{F})$ are isomorphic. There is clearly
+a map from the first inverse system to the second. Suppose that
+$\mathcal{F}[f^c] = \mathcal{F}[f^{c + 1}] = \mathcal{F}[f^{c + 2}] = \ldots$.
+Then we can define an arrow of inverse systems in $D(\mathcal{O}_X)$
+in the other direction by the diagrams
+$$
+\xymatrix{
+\mathcal{F}/\mathcal{F}[f^c] \ar[r]_-{f^{n + c}} \ar[d]_{f^c} &
+\mathcal{F} \ar[d]^1 \\
+\mathcal{F} \ar[r]^{f^n} & \mathcal{F}
+}
+$$
+Since the top horizontal arrow is injective the complex
+in the top row is quasi-isomorphic to $\mathcal{F}/f^{n + c}\mathcal{F}$.
+Some details omitted.
+
+\medskip\noindent
+Since $R\Gamma(X, -)$ commutes with derived limits
+(Injectives, Lemma \ref{injectives-lemma-RF-commutes-with-Rlim})
+we see that
+$$
+R\Gamma(X, \lim \mathcal{F}/f^n\mathcal{F}) =
+R\Gamma(X, R\lim \mathcal{F}/f^n\mathcal{F}) =
+R\lim R\Gamma(X, \mathcal{F}/f^n\mathcal{F})
+$$
+(for first equality see first paragraph of proof).
+By More on Algebra, Remark \ref{more-algebra-remark-compare-derived-limit}
+we obtain exact sequences
+$$
+0 \to
+R^1\lim H^{p - 1}(X, \mathcal{F}/f^n\mathcal{F}) \to
H^p(X, \lim \mathcal{F}/f^n\mathcal{F}) \to
\lim H^p(X, \mathcal{F}/f^n\mathcal{F}) \to 0
+$$
+of $A$-modules. The second set of short exact sequences follow immediately
+from the discussion in More on Algebra, Example
+\ref{more-algebra-example-spectral-sequence-principal}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Formal sections, III}
+\label{section-formal-sections-cd-one}
+
+\noindent
+In this section we generalize some of the results of
+Section \ref{section-formal-functions-principal}
+to the case of an ideal $I \subset A$ of cohomological dimension $1$.
+
+\begin{lemma}
+\label{lemma-cd-one}
+Let $I = (f_1, \ldots, f_r)$ be an ideal of a Noetherian ring $A$.
+If $\text{cd}(A, I) = 1$, then there exist $c \geq 1$ and maps
+$\varphi_j : I^c \to A$ such that $\sum f_j \varphi_j : I^c \to I$
+is the inclusion map.
+\end{lemma}
+
+\begin{proof}
+Since $\text{cd}(A, I) = 1$ the complement $U = \Spec(A) \setminus V(I)$
+is affine (Local Cohomology, Lemma \ref{local-cohomology-lemma-cd-is-one}).
+Say $U = \Spec(B)$. Then $IB = B$
+and we can write $1 = \sum_{j = 1, \ldots, r} f_j b_j$
+for some $b_j \in B$. By
+Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}
+we can represent $b_j$ by maps $\varphi_j : I^c \to A$
+for some $c \geq 0$. Then $\sum f_j \varphi_j : I^c \to I \subset A$
+is the canonical embedding, after possibly replacing $c$ by a larger
+integer, by the same lemma.
+\end{proof}
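\medskip\noindent
For example, if $I = (f)$ is generated by a single nonzerodivisor $f$
(so that $\text{cd}(A, I) \leq 1$ automatically, as local cohomology with
respect to $(f)$ is computed by $A \to A_f$), then we may take $c = 1$ and
$\varphi_1 : I \to A$ the map determined by $\varphi_1(fa) = a$; this is
well defined because $f$ is a nonzerodivisor, and
$f\varphi_1 : I \to I$ is the identity, in particular the inclusion
$I \to A$ as required.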
+
+\begin{lemma}
+\label{lemma-cd-one-extend}
+Let $I = (f_1, \ldots, f_r)$ be an ideal of a Noetherian ring $A$
+with $\text{cd}(A, I) = 1$. Let $c \geq 1$ and $\varphi_j : I^c \to A$,
+$j = 1, \ldots, r$ be as in Lemma \ref{lemma-cd-one}.
+Then there is a unique graded $A$-algebra map
+$$
+\Phi : \bigoplus\nolimits_{n \geq 0} I^{nc} \to A[T_1, \ldots, T_r]
+$$
+with $\Phi(g) = \sum \varphi_j(g) T_j$ for $g \in I^c$.
+Moreover, the composition of $\Phi$ with the map
+$A[T_1, \ldots, T_r] \to \bigoplus_{n \geq 0} I^n$,
+$T_j \mapsto f_j$ is the inclusion map
+$\bigoplus_{n \geq 0} I^{nc} \to \bigoplus_{n \geq 0} I^n$.
+\end{lemma}
+
+\begin{proof}
+For each $j$ and $m \geq c$ the restriction of $\varphi_j$ to
+$I^m$ is a map $\varphi_j : I^m \to I^{m - c}$.
+Given $j_1, \ldots, j_n \in \{1, \ldots, r\}$ we claim that the
+composition
+$$
+\varphi_{j_1} \ldots \varphi_{j_n} :
+I^{nc} \to I^{(n - 1)c} \to \ldots \to I^c \to A
+$$
+is independent of the order of the indices $j_1, \ldots, j_n$.
+Namely, if $g = g_1 \ldots g_n$ with $g_i \in I^c$, then
+we see that
+$$
+(\varphi_{j_1} \ldots \varphi_{j_n})(g) =
+\varphi_{j_1}(g_1) \ldots \varphi_{j_n}(g_n)
+$$
+is independent of the ordering as multiplication in $A$ is commutative.
+Thus we can define $\Phi$ by sending $g \in I^{nc}$ to
+$$
+\Phi(g) = \sum\nolimits_{e_1 + \ldots + e_r = n}
+\frac{n!}{e_1! \ldots e_r!}
+(\varphi_1^{e_1} \circ \ldots \circ \varphi_r^{e_r})(g)
+T_1^{e_1} \ldots T_r^{e_r}
+$$
+It is straightforward to prove that this is a graded $A$-algebra
+homomorphism with the desired property. Uniqueness is immediate
+as is the final property. This proves the lemma.
+\end{proof}
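+
+\noindent
+As a quick sanity check (this only illustrates the final assertion of the
+lemma and uses nothing beyond the property of the $\varphi_j$ from
+Lemma \ref{lemma-cd-one}), here is the degree $1$ part written out:
+
+```latex
+% Degree 1 part of \Phi followed by the substitution T_j \mapsto f_j:
+g \longmapsto \Phi(g) = \sum\nolimits_j \varphi_j(g)\, T_j
+\longmapsto \sum\nolimits_j f_j \varphi_j(g) = g
+% The last equality is exactly the statement that
+% \sum f_j \varphi_j : I^c \to I \subset A is the canonical embedding.
+```
+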
+
+\begin{lemma}
+\label{lemma-cd-one-extend-to-module}
+Let $I = (f_1, \ldots, f_r)$ be an ideal of a Noetherian ring $A$
+with $\text{cd}(A, I) = 1$. Let $c \geq 1$ and $\varphi_j : I^c \to A$,
+$j = 1, \ldots, r$ be as in Lemma \ref{lemma-cd-one}.
+Let $A \to B$ be a ring map with $B$ Noetherian and let $N$ be
+a finite $B$-module. Then, after possibly increasing $c$
+and adjusting $\varphi_j$ accordingly, there is a unique
+graded $B$-module map
+$$
+\Phi_N : \bigoplus\nolimits_{n \geq 0} I^{nc}N \to N[T_1, \ldots, T_r]
+$$
+with $\Phi_N(g x) = \Phi(g) x$ for $g \in I^{nc}$ and $x \in N$
+where $\Phi$ is as in Lemma \ref{lemma-cd-one-extend}.
+The composition of $\Phi_N$ with the map
+$N[T_1, \ldots, T_r] \to \bigoplus_{n \geq 0} I^nN$,
+$T_j \mapsto f_j$ is the inclusion map
+$\bigoplus_{n \geq 0} I^{nc}N \to \bigoplus_{n \geq 0} I^nN$.
+\end{lemma}
+
+\begin{proof}
+The uniqueness is clear from the formula and the uniqueness of $\Phi$ in
+Lemma \ref{lemma-cd-one-extend}. Consider the Noetherian $A$-algebra
+$B' = B \oplus N$ where $N$ is an ideal of square zero. To show
+the existence of $\Phi_N$ it is enough
+(via Lemma \ref{lemma-cd-one}) to show that $\varphi_j$ extends to
+a map $\varphi'_j : I^cB' \to B'$ after possibly increasing $c$
+to some $c'$ (and replacing $\varphi_j$ by the composition of the inclusion
+$I^{c'} \to I^c$ with $\varphi_j$). Recall that $\varphi_j$ corresponds to a
+section
+$$
+h_j \in \Gamma(\Spec(A) \setminus V(I), \mathcal{O}_{\Spec(A)})
+$$
+see Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}.
+(This is in fact how we chose our $\varphi_j$ in the proof of
+Lemma \ref{lemma-cd-one}.) Let us use the same lemma to represent the pullback
+$$
+h'_j \in \Gamma(\Spec(B') \setminus V(IB'), \mathcal{O}_{\Spec(B')})
+$$
+of $h_j$ by a $B'$-linear map
+$\varphi'_j : I^{c'}B' \to B'$ for some $c' \geq c$.
+The agreement with $\varphi_j$ will hold for $c'$
+sufficiently large by a further application of the lemma:
+namely we can test agreement on a finite list of generators of $I^{c'}$.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cd-is-one-for-system}
+Let $I = (f_1, \ldots, f_r)$ be an ideal of a Noetherian ring $A$ with
+$\text{cd}(A, I) = 1$. Let $c \geq 1$ and $\varphi_j : I^c \to A$,
+$j = 1, \ldots, r$ be as in Lemma \ref{lemma-cd-one}.
+Let $X$ be a Noetherian scheme over $\Spec(A)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of coherent $\mathcal{O}_X$-modules
+such that $\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$.
+Set $\mathcal{F} = \lim \mathcal{F}_n$.
+Then, after possibly increasing $c$ and adjusting $\varphi_j$ accordingly,
+there exists a unique graded $\mathcal{O}_X$-module map
+$$
+\Phi_\mathcal{F} :
+\bigoplus\nolimits_{n \geq 0} I^{nc}\mathcal{F}
+\longrightarrow
+\mathcal{F}[T_1, \ldots, T_r]
+$$
+with $\Phi_\mathcal{F}(g s) = \Phi(g) s$ for $g \in I^{nc}$ and
+$s$ a local section of $\mathcal{F}$ where $\Phi$ is as in
+Lemma \ref{lemma-cd-one-extend}. The composition of $\Phi_\mathcal{F}$
+with the map
+$\mathcal{F}[T_1, \ldots, T_r] \to \bigoplus_{n \geq 0} I^n\mathcal{F}$,
+$T_j \mapsto f_j$
+is the canonical inclusion
+$\bigoplus_{n \geq 0} I^{nc}\mathcal{F} \to
+\bigoplus_{n \geq 0} I^n\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+The uniqueness is immediate from the $\mathcal{O}_X$-linearity
+and the requirement that $\Phi_\mathcal{F}(g s) = \Phi(g) s$ for
+$g \in I^{nc}$ and $s$ a local section of $\mathcal{F}$.
+Thus we may assume $X = \Spec(B)$ is affine.
+Observe that $(\mathcal{F}_n)$ is an object of the category
+$\textit{Coh}(X, I\mathcal{O}_X)$ introduced
+in Cohomology of Schemes, Section \ref{coherent-section-coherent-formal}.
+Let $B' = B^\wedge$ be the $I$-adic completion of $B$.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-affine}
+the object $(\mathcal{F}_n)$ corresponds to a finite $B'$-module $N$
+in the sense that $\mathcal{F}_n$ is the coherent
+module associated to the finite $B$-module $N/I^n N$.
+Applying Lemma \ref{lemma-cd-one-extend-to-module}
+to $I \subset A \to B'$ and $N$
+we see that, after possibly increasing $c$ and adjusting
+$\varphi_j$ accordingly, we get unique maps
+$$
+\Phi_N : \bigoplus\nolimits_{n \geq 0} I^{nc}N \to N[T_1, \ldots, T_r]
+$$
+with the corresponding properties. Note that in degree $n$ we obtain
+an inverse system of maps $I^{nc}N/I^mN \to \bigoplus_{e_1 + \ldots + e_r = n}
+N/I^{m - nc}N \cdot T_1^{e_1} \ldots T_r^{e_r}$ for $m \geq nc$.
+Translating back into coherent
+sheaves we see that $\Phi_N$ corresponds to a system of maps
+$$
+\Phi^n_m :
+I^{nc}\mathcal{F}_m
+\longrightarrow
+\bigoplus\nolimits_{e_1 + \ldots + e_r = n}
+\mathcal{F}_{m - nc} \cdot T_1^{e_1} \ldots T_r^{e_r}
+$$
+for varying $m \geq nc$ and $n \geq 1$. Taking the inverse limit of
+these maps over $m$ we obtain $\Phi_\mathcal{F} = \bigoplus_n \lim_m \Phi^n_m$.
+Note that $\lim_m I^t\mathcal{F}_m = I^t \mathcal{F}$ as can be seen by
+evaluating on affines for example, but in fact we don't need this because
+it is clear there is a map $I^t\mathcal{F} \to \lim_m I^t\mathcal{F}_m$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-topology-I-adic}
+Let $I$ be an ideal of a Noetherian ring $A$. Let $X$ be a Noetherian scheme
+over $\Spec(A)$. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of coherent $\mathcal{O}_X$-modules
+such that $\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$.
+If $\text{cd}(A, I) = 1$, then for all $p \in \mathbf{Z}$ the limit topology on
+$\lim H^p(X, \mathcal{F}_n)$ is $I$-adic.
+\end{lemma}
+
+\begin{proof}
+First it is clear that $I^t \lim H^p(X, \mathcal{F}_n)$
+maps to zero in $H^p(X, \mathcal{F}_t)$. Thus the $I$-adic topology
+is finer than the limit topology. For the converse we set
+$\mathcal{F} = \lim \mathcal{F}_n$, we pick generators $f_1, \ldots, f_r$
+of $I$, we pick $c \geq 1$, and we choose
+$\Phi_\mathcal{F}$ as in Lemma \ref{lemma-cd-is-one-for-system}.
+We will use the results of Lemma \ref{lemma-properties-system}
+without further mention. In particular we have a short exact
+sequence
+$$
+0 \to R^1\lim H^{p - 1}(X, \mathcal{F}_n) \to H^p(X, \mathcal{F})
+\to \lim H^p(X, \mathcal{F}_n) \to 0
+$$
+Thus we can lift any element $\xi$ of $\lim H^p(X, \mathcal{F}_n)$
+to an element $\xi' \in H^p(X, \mathcal{F})$. Suppose $\xi$ maps to zero
+in $H^p(X, \mathcal{F}_{nc})$ for some $n$, in other
+words, suppose $\xi$ is ``small'' in the limit topology. We have a
+short exact sequence
+$$
+0 \to I^{nc}\mathcal{F} \to \mathcal{F} \to \mathcal{F}_{nc} \to 0
+$$
+and hence the assumption means we can lift $\xi'$ to an element
+$\xi'' \in H^p(X, I^{nc}\mathcal{F})$. Applying $\Phi_\mathcal{F}$
+we get
+$$
+\Phi_\mathcal{F}(\xi'') = \sum\nolimits_{e_1 + \ldots + e_r = n}
+\xi'_{e_1, \ldots, e_r} \cdot T_1^{e_1} \ldots T_r^{e_r}
+$$
+for some $\xi'_{e_1, \ldots, e_r} \in H^p(X, \mathcal{F})$.
+Letting $\xi_{e_1, \ldots, e_r} \in \lim H^p(X, \mathcal{F}_n)$
+be the images and using the final assertion of
+Lemma \ref{lemma-cd-is-one-for-system}
+we conclude that
+$$
+\xi = \sum f_1^{e_1} \ldots f_r^{e_r} \xi_{e_1, \ldots, e_r}
+$$
+is in $I^n \lim H^p(X, \mathcal{F}_n)$ as desired.
+\end{proof}
+
+\begin{example}
+\label{example-not-I-adic}
+Let $k$ be a field. Let $A = k[x, y][[s, t]]/(xs - yt)$.
+Let $I = (s, t)$ and $\mathfrak a = (x, y, s, t)$.
+Let $X = \Spec(A) - V(\mathfrak a)$ and
+$\mathcal{F}_n = \mathcal{O}_X/I^n\mathcal{O}_X$.
+Observe that the rational function
+$$
+g = \frac{t}{x} = \frac{s}{y}
+$$
+is regular in an open neighbourhood $V \subset X$ of
+$V(I\mathcal{O}_X)$. Hence every power $g^e$ determines a section
+$g^e \in M = \lim H^0(X, \mathcal{F}_n)$. Observe that
+$g^e \to 0$ as $e \to \infty$ in the limit topology on $M$
+since $g^e$ maps to zero in $\mathcal{F}_e$.
+On the other hand, $g^e \not \in IM$ for any $e$
+as the reader can see by computing $H^0(X, \mathcal{F}_n)$;
+computation omitted. Observe that $\text{cd}(A, I) = 2$.
+Thus the result of Lemma \ref{lemma-topology-I-adic} is sharp.
+\end{example}
+
+
+
+
+
+
+
+
+\section{Mittag-Leffler conditions}
+\label{section-ML}
+
+\noindent
+When taking local cohomology with respect to the maximal ideal
+of a local Noetherian ring, we often get the Mittag-Leffler condition
+for free. This implies the same is true for higher cohomology
+groups of an inverse system of coherent sheaves with surjective transition
+maps on the punctured spectrum.
+
+\begin{lemma}
+\label{lemma-descending-chain}
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+\begin{enumerate}
+\item Let $M$ be a finite $A$-module. Then the $A$-module
+$H^i_\mathfrak m(M)$ satisfies the descending chain condition
+for any $i$.
+\item Let $U = \Spec(A) \setminus \{\mathfrak m\}$ be the
+punctured spectrum of $A$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_U$-module.
+Then the $A$-module $H^i(U, \mathcal{F})$
+satisfies the descending chain condition for $i > 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will prove part (1) by induction on the dimension of the support of $M$.
+The statement holds if $M = 0$, thus we may and do assume $M$ is not zero.
+
+\medskip\noindent
+Base case of the induction.
+If $\dim(\text{Supp}(M)) = 0$, then the support
+of $M$ is $\{\mathfrak m\}$ and we see that $H^0_\mathfrak m(M) = M$
+and $H^i_\mathfrak m(M) = 0$ for $i > 0$ as is clear from the
+construction of local cohomology, see
+Dualizing Complexes, Section \ref{dualizing-section-local-cohomology}.
+Since $M$ has finite length (Algebra, Lemma \ref{algebra-lemma-length-finite})
+it has the descending chain condition.
+
+\medskip\noindent
+Induction step. Assume $\dim(\text{Supp}(M)) > 0$.
+By the base case the finite module $H^0_\mathfrak m(M) \subset M$
+has the descending chain condition.
+By Dualizing Complexes, Lemma \ref{dualizing-lemma-divide-by-torsion}
+we may replace $M$ by $M/H^0_\mathfrak m(M)$.
+Then $H^0_\mathfrak m(M) = 0$, i.e., $M$ has depth $\geq 1$, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-depth}.
+Choose $x \in \mathfrak m$ such that $x : M \to M$ is injective.
+By Algebra, Lemma \ref{algebra-lemma-one-equation-module} we have
+$\dim(\text{Supp}(M/xM)) = \dim(\text{Supp}(M)) - 1$ and the
+induction hypothesis applies. Pick an index $i$ and consider the
+exact sequence
+$$
+H^{i - 1}_\mathfrak m(M/xM) \to H^i_\mathfrak m(M) \xrightarrow{x}
+H^i_\mathfrak m(M)
+$$
+coming from the short exact sequence $0 \to M \xrightarrow{x} M \to M/xM \to 0$.
+It follows that the $x$-torsion $H^i_\mathfrak m(M)[x]$
+is a quotient of a module with the descending chain condition, and
+hence has the descending chain condition itself. Hence the
+$\mathfrak m$-torsion submodule $H^i_\mathfrak m(M)[\mathfrak m]$ has
+the descending chain condition (and hence is finite dimensional
+over $A/\mathfrak m$). Thus we conclude that the $\mathfrak m$-power
+torsion module $H^i_\mathfrak m(M)$ has the descending chain
+condition by Dualizing Complexes, Lemma
+\ref{dualizing-lemma-describe-categories}.
+
+\medskip\noindent
+Part (2) follows from (1) via Local Cohomology,
+Lemma \ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ML-local}
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+\begin{enumerate}
+\item Let $(M_n)$ be an inverse system of finite $A$-modules. Then the
+inverse system $H^i_\mathfrak m(M_n)$ satisfies the Mittag-Leffler
+condition for any $i$.
+\item Let $U = \Spec(A) \setminus \{\mathfrak m\}$ be the
+punctured spectrum of $A$.
+Let $\mathcal{F}_n$ be an inverse system of
+coherent $\mathcal{O}_U$-modules.
+Then the inverse system $H^i(U, \mathcal{F}_n)$
+satisfies the Mittag-Leffler condition for $i > 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-descending-chain}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-terrific}
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+Let $(M_n)$ be an inverse system of finite $A$-modules.
+Let $M \to \lim M_n$ be a map where $M$ is a finite $A$-module
+such that for some $i$ the map
+$H^i_\mathfrak m(M) \to \lim H^i_\mathfrak m(M_n)$
+is an isomorphism.
+Then the inverse system $H^i_\mathfrak m(M_n)$
+is essentially constant with value $H^i_\mathfrak m(M)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-ML-local} the inverse system $H^i_\mathfrak m(M_n)$
+satisfies the Mittag-Leffler condition. Let $E_n \subset H^i_\mathfrak m(M_n)$
+be the image of $H^i_\mathfrak m(M_{n'})$ for $n' \gg n$.
+Then $(E_n)$ is an inverse system with surjective transition maps
+and $H^i_\mathfrak m(M) = \lim E_n$. Since $H^i_\mathfrak m(M)$
+has the descending chain condition by
+Lemma \ref{lemma-descending-chain}
+we find there can only be a finite number of nontrivial
+kernels of the surjections $H^i_\mathfrak m(M) \to E_n$.
+Thus $E_n \to E_{n - 1}$ is an isomorphism for all $n \gg 0$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-cohomology-derived-completion}
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+Let $I \subset A$ be an ideal. Let $M$ be a finite $A$-module.
+Then
+$$
+H^i(R\Gamma_\mathfrak m(M)^\wedge) = \lim H^i_\mathfrak m(M/I^nM)
+$$
+for all $i$ where $R\Gamma_\mathfrak m(M)^\wedge$ denotes
+the derived $I$-adic completion.
+\end{lemma}
+
+\begin{proof}
+Apply Dualizing Complexes, Lemma \ref{dualizing-lemma-completion-local}
+and Lemma \ref{lemma-ML-local} to see the vanishing of the $R^1\lim$ terms.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Derived completion on a ringed site}
+\label{section-derived-completion}
+
+\noindent
+We urge the reader to skip this section on a first reading.
+
+\medskip\noindent
+The algebra version of this material can be found in
+More on Algebra, Section \ref{more-algebra-section-derived-completion}.
+Let $\mathcal{O}$ be a sheaf of rings on a site $\mathcal{C}$.
+Let $f$ be a global section of $\mathcal{O}$. We denote
+$\mathcal{O}_f$ the sheaf associated to the presheaf of localizations
+$U \mapsto \mathcal{O}(U)_f$.
+
+\begin{lemma}
+\label{lemma-map-twice-localize}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let $f$ be a global
+section of $\mathcal{O}$.
+\begin{enumerate}
+\item For $L, N \in D(\mathcal{O}_f)$ we have
+$R\SheafHom_\mathcal{O}(L, N) = R\SheafHom_{\mathcal{O}_f}(L, N)$.
+In particular the two $\mathcal{O}_f$-structures on
+$R\SheafHom_\mathcal{O}(L, N)$ agree.
+\item For $K \in D(\mathcal{O})$ and
+$L \in D(\mathcal{O}_f)$ we have
+$$
+R\SheafHom_\mathcal{O}(L, K) =
+R\SheafHom_{\mathcal{O}_f}(L, R\SheafHom_\mathcal{O}(\mathcal{O}_f, K))
+$$
+In particular
+$R\SheafHom_\mathcal{O}(\mathcal{O}_f,
+R\SheafHom_\mathcal{O}(\mathcal{O}_f, K)) =
+R\SheafHom_\mathcal{O}(\mathcal{O}_f, K)$.
+\item If $g$ is a second global
+section of $\mathcal{O}$, then
+$$
+R\SheafHom_\mathcal{O}(\mathcal{O}_f, R\SheafHom_\mathcal{O}(\mathcal{O}_g, K))
+= R\SheafHom_\mathcal{O}(\mathcal{O}_{gf}, K).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $\mathcal{J}^\bullet$ be a K-injective complex of
+$\mathcal{O}_f$-modules representing $N$. By Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-K-injective-flat} it follows that
+$\mathcal{J}^\bullet$ is a K-injective complex of
+$\mathcal{O}$-modules as well. Let $\mathcal{F}^\bullet$ be a complex of
+$\mathcal{O}_f$-modules representing $L$. Then
+$$
+R\SheafHom_\mathcal{O}(L, N) =
+R\SheafHom_\mathcal{O}(\mathcal{F}^\bullet, \mathcal{J}^\bullet) =
+R\SheafHom_{\mathcal{O}_f}(\mathcal{F}^\bullet, \mathcal{J}^\bullet)
+$$
+by
+Modules on Sites, Lemma \ref{sites-modules-lemma-epimorphism-modules}
+because $\mathcal{J}^\bullet$ is a K-injective complex both of
+$\mathcal{O}$-modules and of $\mathcal{O}_f$-modules.
+
+\medskip\noindent
+Proof of (2). Let $\mathcal{I}^\bullet$ be a K-injective complex of
+$\mathcal{O}$-modules representing $K$.
+Then $R\SheafHom_\mathcal{O}(\mathcal{O}_f, K)$ is represented by
+$\SheafHom_\mathcal{O}(\mathcal{O}_f, \mathcal{I}^\bullet)$ which is
+a K-injective complex of $\mathcal{O}_f$-modules and of
+$\mathcal{O}$-modules by
+Cohomology on Sites, Lemmas \ref{sites-cohomology-lemma-hom-K-injective} and
+\ref{sites-cohomology-lemma-K-injective-flat}.
+Let $\mathcal{F}^\bullet$ be a complex of $\mathcal{O}_f$-modules
+representing $L$. Then
+$$
+R\SheafHom_\mathcal{O}(L, K) =
+R\SheafHom_\mathcal{O}(\mathcal{F}^\bullet, \mathcal{I}^\bullet) =
+R\SheafHom_{\mathcal{O}_f}(\mathcal{F}^\bullet,
+\SheafHom_\mathcal{O}(\mathcal{O}_f, \mathcal{I}^\bullet))
+$$
+by Modules on Sites, Lemma \ref{sites-modules-lemma-adjoint-hom-restrict}
+and because $\SheafHom_\mathcal{O}(\mathcal{O}_f, \mathcal{I}^\bullet)$ is a
+K-injective complex of $\mathcal{O}_f$-modules.
+
+\medskip\noindent
+Proof of (3). This follows from the fact that
+$R\SheafHom_\mathcal{O}(\mathcal{O}_g, \mathcal{I}^\bullet)$
+is K-injective as a complex of $\mathcal{O}$-modules and the fact that
+$\SheafHom_\mathcal{O}(\mathcal{O}_f,
+\SheafHom_\mathcal{O}(\mathcal{O}_g, \mathcal{H})) =
+\SheafHom_\mathcal{O}(\mathcal{O}_{gf}, \mathcal{H})$
+for all sheaves of $\mathcal{O}$-modules $\mathcal{H}$.
+\end{proof}
+
+\noindent
+Let $K \in D(\mathcal{O})$. We denote
+$T(K, f)$ a derived limit (Derived Categories, Definition
+\ref{derived-definition-derived-limit}) of the inverse system
+$$
+\ldots \to K \xrightarrow{f} K \xrightarrow{f} K
+$$
+in $D(\mathcal{O})$.
+
+\begin{lemma}
+\label{lemma-hom-from-Af}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let $f$ be a global
+section of $\mathcal{O}$. Let $K \in D(\mathcal{O})$.
+The following are equivalent
+\begin{enumerate}
+\item $R\SheafHom_\mathcal{O}(\mathcal{O}_f, K) = 0$,
+\item $R\SheafHom_\mathcal{O}(L, K) = 0$ for all $L$ in $D(\mathcal{O}_f)$,
+\item $T(K, f) = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (2) implies (1). The implication (1) $\Rightarrow$ (2)
+follows from Lemma \ref{lemma-map-twice-localize}.
+A free resolution of the $\mathcal{O}$-module $\mathcal{O}_f$ is given by
+$$
+0 \to \bigoplus\nolimits_{n \in \mathbf{N}} \mathcal{O} \to
+\bigoplus\nolimits_{n \in \mathbf{N}} \mathcal{O}
+\to \mathcal{O}_f \to 0
+$$
+where the first map sends a local section $(x_0, x_1, \ldots)$ to
+$(x_0, x_1 - fx_0, x_2 - fx_1, \ldots)$ and the second map sends
+$(x_0, x_1, \ldots)$ to $x_0 + x_1/f + x_2/f^2 + \ldots$.
+Applying $\SheafHom_\mathcal{O}(-, \mathcal{I}^\bullet)$
+where $\mathcal{I}^\bullet$ is a K-injective complex of $\mathcal{O}$-modules
+representing $K$ we get a short exact sequence of complexes
+$$
+0 \to \SheafHom_\mathcal{O}(\mathcal{O}_f, \mathcal{I}^\bullet) \to
+\prod \mathcal{I}^\bullet \to \prod \mathcal{I}^\bullet \to 0
+$$
+because $\mathcal{I}^n$ is an injective $\mathcal{O}$-module.
+The products are products in $D(\mathcal{O})$, see
+Injectives, Lemma \ref{injectives-lemma-derived-products}.
+This means that the object $T(K, f)$ is a representative of
+$R\SheafHom_\mathcal{O}(\mathcal{O}_f, K)$ in $D(\mathcal{O})$.
+Thus the equivalence of (1) and (3).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ideal-of-elements-complete-wrt}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let $K \in D(\mathcal{O})$.
+The rule which associates to $U$ the set $\mathcal{I}(U)$
+of sections $f \in \mathcal{O}(U)$ such that $T(K|_U, f) = 0$
+is a sheaf of ideals in $\mathcal{O}$.
+\end{lemma}
+
+\begin{proof}
+We will use the results of Lemma \ref{lemma-hom-from-Af} without further
+mention. If $f \in \mathcal{I}(U)$, and $g \in \mathcal{O}(U)$, then
+$\mathcal{O}_{U, gf}$ is an $\mathcal{O}_{U, f}$-module
+hence $R\SheafHom_\mathcal{O}(\mathcal{O}_{U, gf}, K|_U) = 0$, hence
+$gf \in \mathcal{I}(U)$. Suppose $f, g \in \mathcal{O}(U)$.
+Then there is a short exact sequence
+$$
+0 \to \mathcal{O}_{U, f + g} \to
+\mathcal{O}_{U, f(f + g)} \oplus \mathcal{O}_{U, g(f + g)} \to
+\mathcal{O}_{U, gf(f + g)} \to 0
+$$
+because $f, g$ generate the unit ideal in $\mathcal{O}(U)_{f + g}$.
+This follows from
+Algebra, Lemma \ref{algebra-lemma-standard-covering}
+and the easy fact that the last arrow is surjective.
+Because $R\SheafHom_\mathcal{O}( - , K|_U)$ is an exact functor
+of triangulated categories the vanishing of
+$R\SheafHom_{\mathcal{O}_U}(\mathcal{O}_{U, f(f + g)}, K|_U)$,
+$R\SheafHom_{\mathcal{O}_U}(\mathcal{O}_{U, g(f + g)}, K|_U)$, and
+$R\SheafHom_{\mathcal{O}_U}(\mathcal{O}_{U, gf(f + g)}, K|_U)$,
+implies the vanishing of
+$R\SheafHom_{\mathcal{O}_U}(\mathcal{O}_{U, f + g}, K|_U)$.
+We omit the verification of the sheaf condition.
+\end{proof}
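+
+\noindent
+A concrete instance of the displayed short exact sequence may help; take,
+purely for illustration, the site with one object, $\mathcal{O}$ the sheaf
+given by the ring $\mathbf{Z}$, and $f = 2$, $g = 3$, so that $f + g = 5$:
+
+```latex
+% The sequence 0 -> O_{f+g} -> O_{f(f+g)} \oplus O_{g(f+g)} -> O_{gf(f+g)} -> 0
+0 \to \mathbf{Z}[1/5] \to \mathbf{Z}[1/10] \oplus \mathbf{Z}[1/15]
+\to \mathbf{Z}[1/30] \to 0
+% with maps x \mapsto (x, x) and (x, y) \mapsto x - y
+```
+
+Exactness on the right uses that $2, 3$ generate the unit ideal in
+$\mathbf{Z}[1/5]$ (indeed $3 - 2 = 1$), and the kernel of the second map
+is $\mathbf{Z}[1/10] \cap \mathbf{Z}[1/15] = \mathbf{Z}[1/5]$.
+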
+
+\noindent
+We can make the following definition for any ringed site.
+
+\begin{definition}
+\label{definition-derived-complete}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $\mathcal{I} \subset \mathcal{O}$ be a sheaf of ideals.
+Let $K \in D(\mathcal{O})$. We say that $K$ is
+{\it derived complete with respect to $\mathcal{I}$}
+if for every object $U$ of $\mathcal{C}$ and $f \in \mathcal{I}(U)$
+the object $T(K|_U, f)$ of $D(\mathcal{O}_U)$ is zero.
+\end{definition}
+
+\noindent
+It is clear that the full subcategory
+$D_{comp}(\mathcal{O}) = D_{comp}(\mathcal{O}, \mathcal{I}) \subset
+D(\mathcal{O})$ consisting of derived complete objects
+is a saturated triangulated subcategory, see
+Derived Categories, Definitions
+\ref{derived-definition-triangulated-subcategory} and
+\ref{derived-definition-saturated}. This subcategory is preserved
+under products and homotopy limits in $D(\mathcal{O})$.
+But it is not preserved under countable direct sums in general.
+
+\begin{lemma}
+\label{lemma-derived-complete-internal-hom}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $\mathcal{I} \subset \mathcal{O}$ be a sheaf of ideals.
+If $K \in D(\mathcal{O})$ and $L \in D_{comp}(\mathcal{O})$, then
+$R\SheafHom_\mathcal{O}(K, L) \in D_{comp}(\mathcal{O})$.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be an object of $\mathcal{C}$ and let $f \in \mathcal{I}(U)$.
+Recall that
+$$
+\Hom_{D(\mathcal{O}_U)}(\mathcal{O}_{U, f}, R\SheafHom_\mathcal{O}(K, L)|_U)
+=
+\Hom_{D(\mathcal{O}_U)}(
+K|_U \otimes_{\mathcal{O}_U}^\mathbf{L} \mathcal{O}_{U, f}, L|_U)
+$$
+by Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-internal-hom}.
+The right hand side is zero by Lemma \ref{lemma-hom-from-Af}
+and the relationship between internal hom and actual hom, see
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-section-RHom-over-U}.
+The same vanishing holds for all $U'/U$. Thus the object
+$R\SheafHom_{\mathcal{O}_U}(\mathcal{O}_{U, f},
+R\SheafHom_\mathcal{O}(K, L)|_U)$ of $D(\mathcal{O}_U)$ has vanishing
+$0$th cohomology sheaf (by the lemma just cited). Similarly for the other
+cohomology sheaves, i.e., $R\SheafHom_{\mathcal{O}_U}(\mathcal{O}_{U, f},
+R\SheafHom_\mathcal{O}(K, L)|_U)$ is zero in $D(\mathcal{O}_U)$.
+By Lemma \ref{lemma-hom-from-Af} we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restriction-derived-complete}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O} \to \mathcal{O}'$
+be a homomorphism of sheaves of rings. Let $\mathcal{I} \subset \mathcal{O}$
+be a sheaf of ideals. The inverse image of $D_{comp}(\mathcal{O}, \mathcal{I})$
+under the restriction functor $D(\mathcal{O}') \to D(\mathcal{O})$ is
+$D_{comp}(\mathcal{O}', \mathcal{I}\mathcal{O}')$.
+\end{lemma}
+
+\begin{proof}
+Using Lemma \ref{lemma-ideal-of-elements-complete-wrt}
+we see that $K' \in D(\mathcal{O}')$ is in
+$D_{comp}(\mathcal{O}', \mathcal{I}\mathcal{O}')$
+if and only if $T(K'|_U, f)$ is zero for every local section
+$f \in \mathcal{I}(U)$. Observe that the cohomology sheaves of
+$T(K'|_U, f)$ are computed in the category of abelian sheaves,
+so it doesn't matter whether we think of $f$ as a section of
+$\mathcal{O}$ or take the image of $f$ as a section of $\mathcal{O}'$.
+The lemma follows immediately from this and the
+definition of derived complete objects.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-derived-complete}
+Let $f : (\Sh(\mathcal{D}), \mathcal{O}') \to (\Sh(\mathcal{C}), \mathcal{O})$
+be a morphism of ringed topoi. Let $\mathcal{I} \subset \mathcal{O}$
+and $\mathcal{I}' \subset \mathcal{O}'$ be sheaves of ideals such
+that $f^\sharp$ sends $f^{-1}\mathcal{I}$ into $\mathcal{I}'$.
+Then $Rf_*$ sends $D_{comp}(\mathcal{O}', \mathcal{I}')$
+into $D_{comp}(\mathcal{O}, \mathcal{I})$.
+\end{lemma}
+
+\begin{proof}
+We may assume $f$ is given by a morphism of ringed sites corresponding
+to a continuous functor $\mathcal{C} \to \mathcal{D}$
+(Modules on Sites, Lemma
+\ref{sites-modules-lemma-morphism-ringed-topoi-comes-from-morphism-ringed-sites}
+).
+Let $U$ be an object of $\mathcal{C}$ and let $g$ be a section of
+$\mathcal{I}$ over $U$. We have to show that
+$\Hom_{D(\mathcal{O}_U)}(\mathcal{O}_{U, g}, Rf_*K|_U) = 0$
+whenever $K$ is derived complete with respect to $\mathcal{I}'$.
+Namely, by Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-section-RHom-over-U}
+this, applied to all objects over $U$ and all shifts of $K$,
+will imply that $R\SheafHom_{\mathcal{O}_U}(\mathcal{O}_{U, g}, Rf_*K|_U)$
+is zero, which implies that $T(Rf_*K|_U, g)$ is zero
+(Lemma \ref{lemma-hom-from-Af}) which is what we have to show
+(Definition \ref{definition-derived-complete}).
+Let $V$ in $\mathcal{D}$ be the image of $U$. Then
+$$
+\Hom_{D(\mathcal{O}_U)}(\mathcal{O}_{U, g}, Rf_*K|_U) =
+\Hom_{D(\mathcal{O}'_V)}(\mathcal{O}'_{V, g'}, K|_V) = 0
+$$
+where $g' = f^\sharp(g) \in \mathcal{I}'(V)$. The second equality
+because $K$ is derived complete and the first equality because
+the derived pullback of $\mathcal{O}_{U, g}$ is $\mathcal{O}'_{V, g'}$
+and
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-adjoint}.
+\end{proof}
+
+\noindent
+The following lemma is the simplest case where one has derived completion.
+
+\begin{lemma}
+\label{lemma-derived-completion}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let $f_1, \ldots, f_r$
+be global sections of $\mathcal{O}$. Let $\mathcal{I} \subset \mathcal{O}$ be
+the ideal sheaf generated by $f_1, \ldots, f_r$.
+Then the inclusion functor $D_{comp}(\mathcal{O}) \to D(\mathcal{O})$
+has a left adjoint, i.e., given any object $K$ of $D(\mathcal{O})$
+there exists a map $K \to K^\wedge$ with $K^\wedge$ in $D_{comp}(\mathcal{O})$
+such that the map
+$$
+\Hom_{D(\mathcal{O})}(K^\wedge, E) \longrightarrow \Hom_{D(\mathcal{O})}(K, E)
+$$
+is bijective whenever $E$ is in $D_{comp}(\mathcal{O})$. In fact
+we have
+$$
+K^\wedge =
+R\SheafHom_\mathcal{O}
+(\mathcal{O} \to \prod\nolimits_{i_0} \mathcal{O}_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} \mathcal{O}_{f_{i_0}f_{i_1}} \to
+\ldots \to \mathcal{O}_{f_1\ldots f_r}, K)
+$$
+functorially in $K$.
+\end{lemma}
+
+\begin{proof}
+Define $K^\wedge$ by the last displayed formula of the lemma.
+There is a map of complexes
+$$
+(\mathcal{O} \to \prod\nolimits_{i_0} \mathcal{O}_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} \mathcal{O}_{f_{i_0}f_{i_1}} \to
+\ldots \to \mathcal{O}_{f_1\ldots f_r}) \longrightarrow \mathcal{O}
+$$
+which induces a map $K \to K^\wedge$. It suffices to prove that
+$K^\wedge$ is derived complete and that $K \to K^\wedge$ is an
+isomorphism if $K$ is derived complete.
+
+\medskip\noindent
+Let $f$ be a global section of $\mathcal{O}$.
+By Lemma \ref{lemma-map-twice-localize} the object
+$R\SheafHom_\mathcal{O}(\mathcal{O}_f, K^\wedge)$
+is equal to
+$$
+R\SheafHom_\mathcal{O}(
+(\mathcal{O}_f \to \prod\nolimits_{i_0} \mathcal{O}_{ff_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} \mathcal{O}_{ff_{i_0}f_{i_1}} \to
+\ldots \to \mathcal{O}_{ff_1\ldots f_r}), K)
+$$
+If $f = f_i$ for some $i$, then $f_1, \ldots, f_r$ generate the
+unit ideal in $\mathcal{O}_f$, hence the extended alternating
+{\v C}ech complex
+$$
+\mathcal{O}_f \to \prod\nolimits_{i_0} \mathcal{O}_{ff_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} \mathcal{O}_{ff_{i_0}f_{i_1}} \to
+\ldots \to \mathcal{O}_{ff_1\ldots f_r}
+$$
+is zero (even homotopic to zero). In this way we see that $K^\wedge$
+is derived complete.
+
+\medskip\noindent
+If $K$ is derived complete, then $R\SheafHom_\mathcal{O}(\mathcal{O}_f, K)$
+is zero for all $f = f_{i_0} \ldots f_{i_p}$, $p \geq 0$. Thus
+$K \to K^\wedge$ is an isomorphism in $D(\mathcal{O})$.
+\end{proof}
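+
+\noindent
+For instance, in the case $r = 1$ (a special case of the lemma, spelled out
+only for illustration) the extended alternating {\v C}ech complex is the
+two term complex $\mathcal{O} \to \mathcal{O}_{f_1}$ and the formula reads
+
+```latex
+K^\wedge = R\SheafHom_\mathcal{O}((\mathcal{O} \to \mathcal{O}_{f_1}), K)
+% After localizing at f_1 the complex becomes O_{f_1} -> O_{f_1} with the
+% identity as differential, hence is homotopic to zero; this is the
+% vanishing used in the proof above.
+```
+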
+
+\noindent
+Next we explain why derived completion is a completion.
+
+\begin{lemma}
+\label{lemma-derived-completion-koszul}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let $f_1, \ldots, f_r$
+be global sections of $\mathcal{O}$. Let $\mathcal{I} \subset \mathcal{O}$ be
+the ideal sheaf generated by $f_1, \ldots, f_r$. Let $K \in D(\mathcal{O})$.
+The derived completion $K^\wedge$ of Lemma \ref{lemma-derived-completion}
+is given by the formula
+$$
+K^\wedge = R\lim \left( K \otimes^\mathbf{L}_\mathcal{O} K_n \right)
+$$
+where $K_n = K(\mathcal{O}, f_1^n, \ldots, f_r^n)$
+is the Koszul complex on $f_1^n, \ldots, f_r^n$ over $\mathcal{O}$.
+\end{lemma}
+
+\begin{proof}
+In More on Algebra, Lemma
+\ref{more-algebra-lemma-extended-alternating-Cech-is-colimit-koszul}
+we have seen that the extended alternating {\v C}ech complex
+$$
+\mathcal{O} \to \prod\nolimits_{i_0} \mathcal{O}_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} \mathcal{O}_{f_{i_0}f_{i_1}} \to
+\ldots \to \mathcal{O}_{f_1\ldots f_r}
+$$
+is a colimit of the Koszul complexes
+$K^n = K(\mathcal{O}, f_1^n, \ldots, f_r^n)$ sitting in
+degrees $0, \ldots, r$. Note that $K^n$ is a finite chain complex
+of finite free $\mathcal{O}$-modules with dual
+$\SheafHom_\mathcal{O}(K^n, \mathcal{O}) = K_n$ where $K_n$ is the Koszul
+cochain complex sitting in degrees $-r, \ldots, 0$ (as usual). By
+Lemma \ref{lemma-derived-completion}
+the functor $E \mapsto E^\wedge$ is gotten by taking
+$R\SheafHom$ from the extended alternating {\v C}ech complex into $E$:
+$$
+E^\wedge = R\SheafHom(\colim K^n, E)
+$$
+This is equal to $R\lim (E \otimes_\mathcal{O}^\mathbf{L} K_n)$
+by
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-colim-and-lim-of-duals}.
+\end{proof}
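+
+\noindent
+For concreteness, with $r = 2$ and sign conventions fixed only for this
+illustration, the Koszul complex $K_n$ sitting in degrees $-2, -1, 0$ is
+
+```latex
+K_n :
+\quad
+\mathcal{O}
+\xrightarrow{\left(\begin{smallmatrix} -f_2^n \\ f_1^n \end{smallmatrix}\right)}
+\mathcal{O}^{\oplus 2}
+\xrightarrow{(f_1^n, \, f_2^n)}
+\mathcal{O}
+% The composition is -f_1^n f_2^n + f_2^n f_1^n = 0 as required.
+```
+
+and one choice of transition maps $K_{n + 1} \to K_n$ is multiplication by
+$f_1 f_2$ in degree $-2$, by $\text{diag}(f_1, f_2)$ in degree $-1$, and
+by $1$ in degree $0$.
+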
+
+\begin{lemma}
+\label{lemma-all-rings}
+There exists a way to construct
+\begin{enumerate}
+\item for every pair $(A, I)$ consisting of a ring $A$ and a finitely
+generated ideal $I \subset A$ a complex $K(A, I)$ of $A$-modules,
+\item a map $K(A, I) \to A$ of complexes of $A$-modules,
+\item for every ring map $A \to B$ and finitely generated ideal $I \subset A$
+a map of complexes $K(A, I) \to K(B, IB)$,
+\end{enumerate}
+such that
+\begin{enumerate}
+\item[(a)] for $A \to B$ and $I \subset A$ finitely generated the diagram
+$$
+\xymatrix{
+K(A, I) \ar[r] \ar[d] & A \ar[d] \\
+K(B, IB) \ar[r] & B
+}
+$$
+commutes,
+\item[(b)] for $A \to B \to C$ and $I \subset A$ finitely generated
+the composition of the maps
+$K(A, I) \to K(B, IB) \to K(C, IC)$ is the map $K(A, I) \to K(C, IC)$.
+\item[(c)] for $A \to B$ and a finitely generated ideal $I \subset A$
+the induced map $K(A, I) \otimes_A^\mathbf{L} B \to K(B, IB)$
+is an isomorphism in $D(B)$, and
+\item[(d)] if $I = (f_1, \ldots, f_r) \subset A$ then there is a commutative
+diagram
+$$
+\xymatrix{
+(A \to \prod\nolimits_{i_0} A_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} A_{f_{i_0}f_{i_1}} \to
+\ldots \to A_{f_1\ldots f_r}) \ar[r] \ar[d] & K(A, I) \ar[d] \\
+A \ar[r]^1 & A
+}
+$$
+in $D(A)$ whose horizontal arrows are isomorphisms.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $S$ be the set of rings $A_0$ of the form
+$A_0 = \mathbf{Z}[x_1, \ldots, x_n]/J$.
+Every finite type $\mathbf{Z}$-algebra is isomorphic to
+an element of $S$. Let $\mathcal{A}_0$ be the category whose objects
+are pairs $(A_0, I_0)$ where $A_0 \in S$ and $I_0 \subset A_0$
+is an ideal and whose morphisms $(A_0, I_0) \to (B_0, J_0)$ are
+ring maps $\varphi : A_0 \to B_0$ such that $J_0 = \varphi(I_0)B_0$.
+
+\medskip\noindent
+Suppose we can construct $K(A_0, I_0) \to A_0$ functorially for
+objects of $\mathcal{A}_0$ having properties (a), (b), (c), and (d).
+Then we take
+$$
+K(A, I) = \colim_{\varphi : (A_0, I_0) \to (A, I)} K(A_0, I_0)
+$$
+where the colimit is over ring maps $\varphi : A_0 \to A$ such
+that $\varphi(I_0)A = I$ with $(A_0, I_0)$ in $\mathcal{A}_0$.
+A morphism from $(A_0, I_0) \to (A, I)$ to $(A_0', I_0') \to (A, I)$
+is given by a map $(A_0, I_0) \to (A_0', I_0')$ in $\mathcal{A}_0$
+commuting with the maps to $A$.
+The category of these $(A_0, I_0) \to (A, I)$ is filtered
+(details omitted). Moreover, $\colim_{\varphi : (A_0, I_0) \to (A, I)} A_0 = A$
+so that $K(A, I)$ is a complex of $A$-modules.
+Finally, given $\varphi : A \to B$ and $I \subset A$
+for every $(A_0, I_0) \to (A, I)$ in the colimit, the composition
+$(A_0, I_0) \to (B, IB)$ lives in the colimit for $(B, IB)$.
+In this way we get a map on colimits. Properties (a), (b), (c), and (d)
+follow readily from this and the corresponding
+properties of the complexes $K(A_0, I_0)$.
+
+\medskip\noindent
+Endow $\mathcal{C}_0 = \mathcal{A}_0^{opp}$ with the chaotic topology.
+We equip $\mathcal{C}_0$ with the sheaf of rings
+$\mathcal{O} : (A, I) \mapsto A$. The ideals $I$ fit together to give a
+sheaf of ideals $\mathcal{I} \subset \mathcal{O}$.
+Choose an injective resolution $\mathcal{O} \to \mathcal{J}^\bullet$.
+Consider the object
+$$
+\mathcal{F}^\bullet = \bigcup\nolimits_n \mathcal{J}^\bullet[\mathcal{I}^n]
+$$
+Let $U = (A, I) \in \Ob(\mathcal{C}_0)$.
+Since the topology on $\mathcal{C}_0$ is chaotic, the value
+$\mathcal{J}^\bullet(U)$ is a resolution of $A$ by injective
+$A$-modules. Hence the value $\mathcal{F}^\bullet(U)$ is an
+object of $D(A)$ representing the image of $R\Gamma_I(A)$ in $D(A)$, see
+Dualizing Complexes, Section \ref{dualizing-section-local-cohomology}.
+Choose a complex of $\mathcal{O}$-modules $\mathcal{K}^\bullet$
+and a commutative diagram
+$$
+\xymatrix{
+\mathcal{O} \ar[r] & \mathcal{J}^\bullet \\
+\mathcal{K}^\bullet \ar[r] \ar[u] & \mathcal{F}^\bullet \ar[u]
+}
+$$
+where the horizontal arrows are quasi-isomorphisms. This is possible
+by the construction of the derived category $D(\mathcal{O})$.
+Set $K(A, I) = \mathcal{K}^\bullet(U)$ where $U = (A, I)$.
+Properties (a) and (b) are clear and properties (c) and (d)
+follow from Dualizing Complexes, Lemmas
+\ref{dualizing-lemma-compute-local-cohomology-noetherian} and
+\ref{dualizing-lemma-local-cohomology-change-rings}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-global-extended-cech-complex}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let
+$\mathcal{I} \subset \mathcal{O}$ be a finite type sheaf of ideals.
+There exists a map $K \to \mathcal{O}$ in $D(\mathcal{O})$
+such that for every $U \in \Ob(\mathcal{C})$ such that
+$\mathcal{I}|_U$ is generated by $f_1, \ldots, f_r \in \mathcal{I}(U)$
+there is an isomorphism
+$$
+(\mathcal{O}_U \to \prod\nolimits_{i_0} \mathcal{O}_{U, f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} \mathcal{O}_{U, f_{i_0}f_{i_1}} \to
+\ldots \to \mathcal{O}_{U, f_1\ldots f_r}) \longrightarrow K|_U
+$$
+compatible with maps to $\mathcal{O}_U$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{C}' \subset \mathcal{C}$ be the full subcategory
+of objects $U$ such that $\mathcal{I}|_U$ is generated by
+finitely many sections. Then $\mathcal{C}' \to \mathcal{C}$
+is a special cocontinuous functor
+(Sites, Definition \ref{sites-definition-special-cocontinuous-functor}).
+Hence it suffices to work with $\mathcal{C}'$, see
+Sites, Lemma \ref{sites-lemma-equivalence}.
+In other words we may assume that for every
+object $U$ of $\mathcal{C}$ there exists a finitely generated
+ideal $I \subset \mathcal{I}(U)$ such that
+$\mathcal{I}|_U = \Im(I \otimes \mathcal{O}_U \to \mathcal{O}_U)$.
+We will say that $I$ generates $\mathcal{I}|_U$.
+Warning: We do not know that $\mathcal{I}(U)$ is a finitely generated
+ideal in $\mathcal{O}(U)$.
+
+\medskip\noindent
+Let $U$ be an object and $I \subset \mathcal{O}(U)$ a finitely
+generated ideal which generates $\mathcal{I}|_U$.
+On the category $\mathcal{C}/U$ consider the complex of presheaves
+$$
+K_{U, I}^\bullet : U'/U \longmapsto K(\mathcal{O}(U'), I\mathcal{O}(U'))
+$$
+with $K(-, -)$ as in Lemma \ref{lemma-all-rings}.
+We claim that the sheafification of this is independent of
+the choice of $I$. Indeed, if $I' \subset \mathcal{O}(U)$
+is a finitely generated ideal which also generates $\mathcal{I}|_U$, then
+there exists a covering $\{U_j \to U\}$ such that
+$I\mathcal{O}(U_j) = I'\mathcal{O}(U_j)$. (Hint: this works because
+both $I$ and $I'$ are finitely generated and generate $\mathcal{I}|_U$.)
+Hence $K_{U, I}^\bullet$ and $K_{U, I'}^\bullet$ are the {\it same}
+for any object lying over one of the $U_j$. The statement
+on sheafifications follows. Denote $K_U^\bullet$ the common value.
+
+\medskip\noindent
+The independence of choice of $I$ also shows that
+$K_U^\bullet|_{\mathcal{C}/U'} = K_{U'}^\bullet$
+whenever we are given a morphism
+$U' \to U$ and hence a localization morphism
+$\mathcal{C}/U' \to \mathcal{C}/U$. Thus the complexes
+$K_U^\bullet$ glue to give a single well defined complex $K^\bullet$
+of $\mathcal{O}$-modules. The existence of the map $K^\bullet \to \mathcal{O}$
+and the quasi-isomorphism of the lemma follow immediately from
+the corresponding properties of the complexes $K(-, -)$ in
+Lemma \ref{lemma-all-rings}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-derived-completion}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $\mathcal{I} \subset \mathcal{O}$ be a finite type sheaf of
+ideals. There exists a left adjoint to the inclusion
+functor $D_{comp}(\mathcal{O}) \to D(\mathcal{O})$.
+\end{proposition}
+
+\begin{proof}
+Let $K \to \mathcal{O}$ in $D(\mathcal{O})$ be as constructed in
+Lemma \ref{lemma-global-extended-cech-complex}. Let $E \in D(\mathcal{O})$.
+Then $E^\wedge = R\SheafHom(K, E)$ together with the map $E \to E^\wedge$
+will do the job. Namely, locally on the site $\mathcal{C}$ we
+recover the adjoint of Lemma \ref{lemma-derived-completion}.
+This shows that $E^\wedge$ is always derived complete and that
+$E \to E^\wedge$ is an isomorphism if $E$ is derived complete.
+\end{proof}
+
+\begin{remark}[Comparison with completion]
+\label{remark-compare-with-completion}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $\mathcal{I} \subset \mathcal{O}$ be a finite type sheaf of
+ideals. Let $K \mapsto K^\wedge$ be the derived completion functor
+of Proposition \ref{proposition-derived-completion}.
+For any $n \geq 1$ the object
+$K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}/\mathcal{I}^n$
+is derived complete as it is annihilated by powers of
+local sections of $\mathcal{I}$. Hence there is a canonical factorization
+$$
+K \to K^\wedge \to K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}/\mathcal{I}^n
+$$
+of the canonical map
+$K \to K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}/\mathcal{I}^n$.
+These maps are compatible for varying $n$ and we obtain a comparison map
+$$
+K^\wedge
+\longrightarrow
+R\lim \left(K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}/\mathcal{I}^n\right)
+$$
+The right hand side is more recognizable as a kind of completion.
+In general this comparison map is not an isomorphism.
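+However, in the simplest affine case one can see that the map is an
+isomorphism. Namely (a sketch under the following assumptions): suppose
+$\mathcal{C}$ is the punctual site, $\mathcal{O}$ is given by a Noetherian
+ring $A$, and $\mathcal{I}$ by an ideal $I = (f_1, \ldots, f_r) \subset A$.
+By Lemma \ref{lemma-derived-completion-koszul} derived completion is
+computed by Koszul complexes, and for Noetherian $A$ the pro-systems
+$\{K(A, f_1^n, \ldots, f_r^n)\}$ and $\{A/I^n\}$ in $D(A)$ are
+pro-isomorphic (More on Algebra, Lemma
+\ref{more-algebra-lemma-sequence-Koszul-complexes}). Hence
+$$
+K^\wedge =
+R\lim\left(K \otimes_A^\mathbf{L} K(A, f_1^n, \ldots, f_r^n)\right)
+\cong
+R\lim\left(K \otimes_A^\mathbf{L} A/I^n\right)
+$$
+in this case.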
+\end{remark}
+
+\begin{remark}[Localization and derived completion]
+\label{remark-localization-and-completion}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $\mathcal{I} \subset \mathcal{O}$ be a finite type sheaf of
+ideals. Let $K \mapsto K^\wedge$ be the derived completion functor
+of Proposition \ref{proposition-derived-completion}. It follows
+from the construction in the proof of the proposition that $K^\wedge|_U$
+is the derived completion of $K|_U$ for any $U \in \Ob(\mathcal{C})$.
+But we can also prove this as follows. From the definition
+of derived complete objects it follows that $K^\wedge|_U$ is derived complete.
+Thus we obtain a canonical map $a : (K|_U)^\wedge \to K^\wedge|_U$.
+On the other hand, if $E$ is a derived complete object of
+$D(\mathcal{O}_U)$, then $Rj_*E$ is a derived complete object of
+$D(\mathcal{O})$ by Lemma \ref{lemma-pushforward-derived-complete}.
+Here $j$ is the localization morphism
+(Modules on Sites, Section \ref{sites-modules-section-localize}).
+Hence we also obtain a canonical
+map $b : K^\wedge \to Rj_*((K|_U)^\wedge)$. We omit the (formal) verification
+that the adjoint of $b$ is the inverse of $a$.
+\end{remark}
+
+\begin{remark}[Completed tensor product]
+\label{remark-completed-tensor-product}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let
+$\mathcal{I} \subset \mathcal{O}$ be a finite type sheaf of ideals.
+Denote $K \mapsto K^\wedge$ the adjoint of
+Proposition \ref{proposition-derived-completion}.
+Then we set
+$$
+K \otimes^\wedge_\mathcal{O} L = (K \otimes_\mathcal{O}^\mathbf{L} L)^\wedge
+$$
+This {\it completed tensor product} defines a functor
+$D_{comp}(\mathcal{O}) \times D_{comp}(\mathcal{O}) \to D_{comp}(\mathcal{O})$
+such that we have
+$$
+\Hom_{D_{comp}(\mathcal{O})}(K, R\SheafHom_\mathcal{O}(L, M))
+=
+\Hom_{D_{comp}(\mathcal{O})}(K \otimes_\mathcal{O}^\wedge L, M)
+$$
+for $K, L, M \in D_{comp}(\mathcal{O})$. Note that
+$R\SheafHom_\mathcal{O}(L, M) \in D_{comp}(\mathcal{O})$ by
+Lemma \ref{lemma-derived-complete-internal-hom}.
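+The displayed adjunction can be deduced as follows (a short sketch):
+since $D_{comp}(\mathcal{O})$ is a full subcategory of $D(\mathcal{O})$
+and $M$ is derived complete, the universal property of derived completion
+(Proposition \ref{proposition-derived-completion}) gives
+$$
+\Hom_{D_{comp}(\mathcal{O})}(K \otimes_\mathcal{O}^\wedge L, M)
+=
+\Hom_{D(\mathcal{O})}(K \otimes_\mathcal{O}^\mathbf{L} L, M)
+=
+\Hom_{D(\mathcal{O})}(K, R\SheafHom_\mathcal{O}(L, M))
+$$
+where the second equality is the usual tensor-hom adjunction in
+$D(\mathcal{O})$.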
+\end{remark}
+
+\begin{lemma}
+\label{lemma-map-identifies-koszul-and-cech-complexes}
+Let $\mathcal{C}$ be a site.
+Assume $\varphi : \mathcal{O} \to \mathcal{O}'$ is a flat homomorphism
+of sheaves of rings. Let $f_1, \ldots, f_r$ be global sections
+of $\mathcal{O}$ such that $\mathcal{O}/(f_1, \ldots, f_r) \cong
+\mathcal{O}'/(f_1, \ldots, f_r)\mathcal{O}'$.
+Then the map of extended alternating {\v C}ech complexes
+$$
+\xymatrix{
+\mathcal{O} \to
+\prod_{i_0} \mathcal{O}_{f_{i_0}} \to
+\prod_{i_0 < i_1} \mathcal{O}_{f_{i_0}f_{i_1}} \to \ldots \to
+\mathcal{O}_{f_1\ldots f_r} \ar[d] \\
+\mathcal{O}' \to
+\prod_{i_0} \mathcal{O}'_{f_{i_0}} \to
+\prod_{i_0 < i_1} \mathcal{O}'_{f_{i_0}f_{i_1}} \to \ldots \to
+\mathcal{O}'_{f_1\ldots f_r}
+}
+$$
+is a quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+Observe that the second complex is the tensor product of the first
+complex with $\mathcal{O}'$. We can write the first extended
+alternating {\v C}ech complex as a colimit of the Koszul complexes
+$K_n = K(\mathcal{O}, f_1^n, \ldots, f_r^n)$, see
+More on Algebra, Lemma
+\ref{more-algebra-lemma-extended-alternating-Cech-is-colimit-koszul}.
+Hence it suffices to prove $K_n \to K_n \otimes_\mathcal{O} \mathcal{O}'$
+is a quasi-isomorphism. Since $\mathcal{O} \to \mathcal{O}'$ is flat
+it suffices to show that $H^i \to H^i \otimes_\mathcal{O} \mathcal{O}'$
+is an isomorphism where $H^i$ is the $i$th cohomology sheaf
+$H^i = H^i(K_n)$. These sheaves are annihilated by $f_1^n, \ldots, f_r^n$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-homotopy-koszul}.
+Hence these sheaves are annihilated by $(f_1, \ldots, f_r)^m$
+for some $m \gg 0$. Thus $H^i \to H^i \otimes_\mathcal{O} \mathcal{O}'$
+is an isomorphism by Modules on Sites, Lemma
+\ref{sites-modules-lemma-neighbourhood-isomorphism}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restriction-derived-complete-equivalence}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O} \to \mathcal{O}'$ be a
+homomorphism of sheaves of rings. Let $\mathcal{I} \subset \mathcal{O}$
+be a finite type sheaf of ideals.
+If $\mathcal{O} \to \mathcal{O}'$ is flat and
+$\mathcal{O}/\mathcal{I} \cong \mathcal{O}'/\mathcal{I}\mathcal{O}'$,
+then the restriction functor $D(\mathcal{O}') \to D(\mathcal{O})$
+induces an equivalence
+$D_{comp}(\mathcal{O}', \mathcal{I}\mathcal{O}') \to
+D_{comp}(\mathcal{O}, \mathcal{I})$.
+\end{lemma}
+
+\begin{proof}
+Lemma \ref{lemma-pushforward-derived-complete} implies
+restriction $r : D(\mathcal{O}') \to D(\mathcal{O})$
+sends $D_{comp}(\mathcal{O}', \mathcal{I}\mathcal{O}')$
+into $D_{comp}(\mathcal{O}, \mathcal{I})$. We will construct a
+quasi-inverse $E \mapsto E'$.
+
+\medskip\noindent
+Let $K \to \mathcal{O}$ be the morphism of $D(\mathcal{O})$
+constructed in Lemma \ref{lemma-global-extended-cech-complex}.
+Set $K' = K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}'$ in $D(\mathcal{O}')$.
+Then $K' \to \mathcal{O}'$ is a map in $D(\mathcal{O}')$ which
+satisfies the conclusions of Lemma \ref{lemma-global-extended-cech-complex}
+with respect to $\mathcal{I}' = \mathcal{I}\mathcal{O}'$.
+The map $K \to r(K')$ is a quasi-isomorphism by
+Lemma \ref{lemma-map-identifies-koszul-and-cech-complexes}.
+Now, for $E \in D_{comp}(\mathcal{O}, \mathcal{I})$ we set
+$$
+E' = R\SheafHom_\mathcal{O}(r(K'), E)
+$$
+viewed as an object in $D(\mathcal{O}')$ using the $\mathcal{O}'$-module
+structure on $K'$. Since $E$ is derived complete
+we have $E = R\SheafHom_\mathcal{O}(K, E)$, see
+proof of Proposition \ref{proposition-derived-completion}.
+On the other hand, since $K \to r(K')$ is an isomorphism in $D(\mathcal{O})$,
+we see that there is an isomorphism
+$E \to r(E')$ in $D(\mathcal{O})$. To finish the proof we
+have to show that, if $E = r(M')$ for an object $M'$ of
+$D_{comp}(\mathcal{O}', \mathcal{I}')$, then
+$E' \cong M'$. To get a map we use
+$$
+M' = R\SheafHom_{\mathcal{O}'}(\mathcal{O}', M') \to
+R\SheafHom_\mathcal{O}(r(\mathcal{O}'), r(M')) \to
+R\SheafHom_\mathcal{O}(r(K'), r(M')) = E'
+$$
+where the second arrow uses the map $K' \to \mathcal{O}'$.
+To see that this is an isomorphism, one shows that $r$ applied
+to this arrow is the same as the isomorphism $E \to r(E')$ above.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-derived-complete-adjoint}
+Let $f : (\Sh(\mathcal{D}), \mathcal{O}') \to (\Sh(\mathcal{C}), \mathcal{O})$
+be a morphism of ringed topoi. Let $\mathcal{I} \subset \mathcal{O}$
+and $\mathcal{I}' \subset \mathcal{O}'$
+be finite type sheaves of ideals such that $f^\sharp$ sends
+$f^{-1}\mathcal{I}$ into $\mathcal{I}'$.
+Then $Rf_*$ sends $D_{comp}(\mathcal{O}', \mathcal{I}')$
+into $D_{comp}(\mathcal{O}, \mathcal{I})$ and has a left adjoint
+$Lf_{comp}^*$ which is $Lf^*$ followed by derived completion.
+\end{lemma}
+
+\begin{proof}
+The first statement we have seen in
+Lemma \ref{lemma-pushforward-derived-complete}.
+Note that the second statement makes sense as we have a derived
+completion functor $D(\mathcal{O}') \to D_{comp}(\mathcal{O}', \mathcal{I}')$
+by Proposition \ref{proposition-derived-completion}.
+OK, so now let $K \in D_{comp}(\mathcal{O}, \mathcal{I})$
+and $M \in D_{comp}(\mathcal{O}', \mathcal{I}')$. Then we have
+$$
+\Hom(K, Rf_*M) = \Hom(Lf^*K, M) = \Hom(Lf_{comp}^*K, M)
+$$
+by the universal property of derived completion.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-commutes-with-derived-completion}
+\begin{reference}
+Generalization of \cite[Lemma 6.5.9 (2)]{BS}. Compare with
+\cite[Theorem 6.5]{HL-P} in the setting of quasi-coherent modules
+and morphisms of (derived) algebraic stacks.
+\end{reference}
+Let $f : (\Sh(\mathcal{D}), \mathcal{O}') \to (\Sh(\mathcal{C}), \mathcal{O})$
+be a morphism of ringed topoi. Let $\mathcal{I} \subset \mathcal{O}$
+be a finite type sheaf of ideals. Let $\mathcal{I}' \subset \mathcal{O}'$
+be the ideal generated by $f^\sharp(f^{-1}\mathcal{I})$.
+Then $Rf_*$ commutes with derived completion, i.e.,
+$Rf_*(K^\wedge) = (Rf_*K)^\wedge$.
+\end{lemma}
+
+\begin{proof}
+By Proposition \ref{proposition-derived-completion} the derived completion
+functors exist. By Lemma \ref{lemma-pushforward-derived-complete} the object
+$Rf_*(K^\wedge)$ is derived complete, and hence we obtain a canonical map
+$(Rf_*K)^\wedge \to Rf_*(K^\wedge)$ by the universal property of derived
+completion. We may check this map is an isomorphism locally on $\mathcal{C}$.
+Thus, since derived completion commutes with localization
+(Remark \ref{remark-localization-and-completion}) we may assume
+that $\mathcal{I}$ is generated by global sections $f_1, \ldots, f_r$.
+Then $\mathcal{I}'$ is generated by $g_i = f^\sharp(f_i)$. By
+Lemma \ref{lemma-derived-completion-koszul}
+we have to prove that
+$$
+R\lim \left(
+Rf_*K \otimes^\mathbf{L}_\mathcal{O} K(\mathcal{O}, f_1^n, \ldots, f_r^n)
+\right)
+=
+Rf_*\left(
+R\lim
+K \otimes^\mathbf{L}_{\mathcal{O}'} K(\mathcal{O}', g_1^n, \ldots, g_r^n)
+\right)
+$$
+Because $Rf_*$ commutes with $R\lim$
+(Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-Rf-commutes-with-Rlim})
+it suffices to prove that
+$$
+Rf_*K \otimes^\mathbf{L}_\mathcal{O} K(\mathcal{O}, f_1^n, \ldots, f_r^n) =
+Rf_*\left(
+K \otimes^\mathbf{L}_{\mathcal{O}'} K(\mathcal{O}', g_1^n, \ldots, g_r^n)
+\right)
+$$
+This follows from the projection formula (Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-projection-formula}) and the fact that
+$Lf^*K(\mathcal{O}, f_1^n, \ldots, f_r^n) =
+K(\mathcal{O}', g_1^n, \ldots, g_r^n)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formal-functions-general}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+Let $\mathcal{C}$ be a site and let $\mathcal{O}$ be a sheaf
+of $A$-algebras. Let $\mathcal{F}$ be a sheaf of $\mathcal{O}$-modules.
+Then we have
+$$
+R\Gamma(\mathcal{C}, \mathcal{F})^\wedge =
+R\Gamma(\mathcal{C}, \mathcal{F}^\wedge)
+$$
+in $D(A)$ where $\mathcal{F}^\wedge$ is the derived
+completion of $\mathcal{F}$ with respect to $I\mathcal{O}$ and on the
+left hand side we have the derived completion with respect to $I$.
+This produces two spectral sequences
+$$
+E_2^{i, j} = H^i(H^j(\mathcal{C}, \mathcal{F})^\wedge)
+\quad\text{and}\quad
+E_2^{p, q} = H^p(\mathcal{C}, H^q(\mathcal{F}^\wedge))
+$$
+both converging to
+$H^*(R\Gamma(\mathcal{C}, \mathcal{F})^\wedge) =
+H^*(\mathcal{C}, \mathcal{F}^\wedge)$.
+\end{lemma}
+
+\begin{proof}
+Apply Lemma \ref{lemma-pushforward-commutes-with-derived-completion}
+to the morphism of ringed topoi $(\mathcal{C}, \mathcal{O}) \to (pt, A)$
+and take cohomology to get the first statement. The second spectral sequence
+is the second spectral sequence of
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}.
+The first spectral sequence is the spectral sequence of
+More on Algebra, Example
+\ref{more-algebra-example-derived-completion-spectral-sequence}
+applied to $R\Gamma(\mathcal{C}, \mathcal{F})^\wedge$.
+\end{proof}
+
+\begin{remark}
+\label{remark-local-calculation-derived-completion}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $\mathcal{I} \subset \mathcal{O}$ be a finite type sheaf of
+ideals. Let $K \mapsto K^\wedge$ be the derived completion of
+Proposition \ref{proposition-derived-completion}.
+Let $U \in \Ob(\mathcal{C})$ be an object such that $\mathcal{I}$
+is generated as an ideal sheaf by $f_1, \ldots, f_r \in \mathcal{I}(U)$.
+Set $A = \mathcal{O}(U)$ and $I = (f_1, \ldots, f_r) \subset A$.
+Warning: it may not be the case that $I = \mathcal{I}(U)$.
+Then we have
+$$
+R\Gamma(U, K^\wedge) = R\Gamma(U, K)^\wedge
+$$
+where the right hand side is the derived completion of
+the object $R\Gamma(U, K)$ of $D(A)$ with respect to $I$.
+This is true because derived completion commutes with localization
+(Remark \ref{remark-localization-and-completion}) and
+Lemma \ref{lemma-formal-functions-general}.
+\end{remark}
+
+
+
+
+
+
+
+
+\section{The theorem on formal functions}
+\label{section-formal-functions}
+
+\noindent
+We interrupt the flow of the exposition to talk a little bit about
+derived completion in the setting of quasi-coherent modules on schemes
+and to use this to give a somewhat different proof of the theorem on
+formal functions. We give some pointers to the literature in
+Remark \ref{remark-references}.
+
+\medskip\noindent
+Lemma \ref{lemma-pushforward-commutes-with-derived-completion} is a
+(very formal) derived version of the theorem on formal functions
+(Cohomology of Schemes, Theorem \ref{coherent-theorem-formal-functions}).
+To make this more explicit, suppose $f : X \to S$ is a morphism of schemes,
+$\mathcal{I} \subset \mathcal{O}_S$ is a quasi-coherent sheaf of ideals
+of finite type,
+and $\mathcal{F}$ is a quasi-coherent sheaf on $X$. Then the lemma says that
+\begin{equation}
+\label{equation-formal-functions}
+Rf_*(\mathcal{F}^\wedge) = (Rf_*\mathcal{F})^\wedge
+\end{equation}
+where $\mathcal{F}^\wedge$ is the derived completion of $\mathcal{F}$
+with respect to $f^{-1}\mathcal{I} \cdot \mathcal{O}_X$ and the right
+hand side is the derived completion of $Rf_*\mathcal{F}$
+with respect to $\mathcal{I}$. To see that this gives back the theorem
+on formal functions we have to do a bit of work.
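+
+\medskip\noindent
+To fix ideas, here is the shape of the statement we are after
+(in the classical setting: $A$ Noetherian, $X$ proper over $A$,
+$\mathcal{F}$ coherent, and $I \subset A$ an ideal):
+$$
+H^p(X, \mathcal{F})^\wedge = \lim H^p(X, \mathcal{F}/I^n\mathcal{F})
+$$
+where the left hand side is the usual $I$-adic completion. This is
+what Lemma \ref{lemma-formal-functions} will extract from
+(\ref{equation-formal-functions}).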
+
+\begin{lemma}
+\label{lemma-sections-derived-completion-pseudo-coherent}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. Let $K$ be a
+pseudo-coherent object of $D(\mathcal{O}_X)$ with derived completion
+$K^\wedge$. Then
+$$
+H^p(U, K^\wedge) = \lim H^p(U, K)/I^nH^p(U, K) =
+H^p(U, K)^\wedge
+$$
+for any affine open $U \subset X$
+where $I = \mathcal{I}(U)$ and where on the right we have the derived
+completion with respect to $I$.
+\end{lemma}
+
+\begin{proof}
+Write $U = \Spec(A)$. The ring $A$ is Noetherian
+and hence $I \subset A$ is finitely generated. Then we have
+$$
+R\Gamma(U, K^\wedge) = R\Gamma(U, K)^\wedge
+$$
+by Remark \ref{remark-local-calculation-derived-completion}.
+Now $R\Gamma(U, K)$ is a pseudo-coherent complex of $A$-modules
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-pseudo-coherent-affine}).
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-derived-completion-pseudo-coherent}
+we conclude that the $p$th cohomology module of $R\Gamma(U, K^\wedge)$
+is equal to the $I$-adic completion of $H^p(U, K)$.
+This proves the first equality. The second (less important) equality
+follows immediately from a second application of the lemma just used.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-completion-pseudo-coherent}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals.
+Let $K$ be an object of $D(\mathcal{O}_X)$. Then
+\begin{enumerate}
+\item the derived completion $K^\wedge$ is equal to
+$R\lim (K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_X/\mathcal{I}^n)$.
+\end{enumerate}
+Now assume that $K$ is a pseudo-coherent object of $D(\mathcal{O}_X)$. Then
+\begin{enumerate}
+\item[(2)] the cohomology sheaf $H^q(K^\wedge)$ is equal to
+$\lim H^q(K)/\mathcal{I}^nH^q(K)$.
+\end{enumerate}
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module\footnote{For example
+$H^q(K)$ for $K$ pseudo-coherent on our locally Noetherian $X$.}. Then
+\begin{enumerate}
+\item[(3)] the derived completion $\mathcal{F}^\wedge$ is equal to
+$\lim \mathcal{F}/\mathcal{I}^n\mathcal{F}$,
+\item[(4)]
+$\lim \mathcal{F}/\mathcal{I}^n \mathcal{F} =
+R\lim \mathcal{F}/\mathcal{I}^n \mathcal{F}$,
+\item[(5)] $H^p(U, \mathcal{F}^\wedge) = 0$ for $p \not = 0$ for all
+affine opens $U \subset X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). There is a canonical map
+$$
+K \longrightarrow
+R\lim (K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_X/\mathcal{I}^n),
+$$
+see Remark \ref{remark-compare-with-completion}.
+Derived completion commutes with passing to open subschemes
+(Remark \ref{remark-localization-and-completion}).
+Formation of $R\lim$ commutes with passing to open subschemes.
+It follows that to check our map is an isomorphism, we may work locally.
+Thus we may assume $X = U = \Spec(A)$.
+Say $I = (f_1, \ldots, f_r)$. Let
+$K_n = K(A, f_1^n, \ldots, f_r^n)$ be the Koszul complex.
+By More on Algebra, Lemma \ref{more-algebra-lemma-sequence-Koszul-complexes}
+we have seen that the pro-systems $\{K_n\}$ and
+$\{A/I^n\}$ of $D(A)$ are isomorphic.
+Using the equivalence $D(A) = D_{\QCoh}(\mathcal{O}_X)$
+of Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-compare-bounded}
+we see that the pro-systems $\{K(\mathcal{O}_X, f_1^n, \ldots, f_r^n)\}$
+and $\{\mathcal{O}_X/\mathcal{I}^n\}$ are isomorphic in
+$D(\mathcal{O}_X)$. This proves the second equality in
+$$
+K^\wedge = R\lim \left(
+K \otimes_{\mathcal{O}_X}^\mathbf{L} K(\mathcal{O}_X, f_1^n, \ldots, f_r^n)
+\right) =
+R\lim (K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_X/\mathcal{I}^n)
+$$
+The first equality is
+Lemma \ref{lemma-derived-completion-koszul}.
+
+\medskip\noindent
+Assume $K$ is pseudo-coherent. For $U \subset X$ affine open
+we have $H^q(U, K^\wedge) = \lim H^q(U, K)/\mathcal{I}^n(U)H^q(U, K)$
+by Lemma \ref{lemma-sections-derived-completion-pseudo-coherent}.
+As this is true for every $U$ we see that
+$H^q(K^\wedge) = \lim H^q(K)/\mathcal{I}^nH^q(K)$ as sheaves.
+This proves (2).
+
+\medskip\noindent
+Part (3) is a special case of (2).
+Parts (4) and (5) follow from
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-Rlim-quasi-coherent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formal-functions}
+Let $A$ be a Noetherian ring and let $I \subset A$ be an ideal. Let $X$ be a
+Noetherian scheme over $A$. Let $\mathcal{F}$ be a coherent
+$\mathcal{O}_X$-module. Assume that $H^p(X, \mathcal{F})$ is
+a finite $A$-module for all $p$. Then there are short exact sequences
+$$
+0 \to R^1\lim H^{p - 1}(X, \mathcal{F}/I^n\mathcal{F}) \to
+H^p(X, \mathcal{F})^\wedge \to \lim H^p(X, \mathcal{F}/I^n\mathcal{F}) \to 0
+$$
+of $A$-modules where $H^p(X, \mathcal{F})^\wedge$ is the usual $I$-adic
+completion. If $X$ is proper over $A$, then the $R^1\lim$ term is zero.
+\end{lemma}
+
+\begin{proof}
+Consider the two spectral sequences of
+Lemma \ref{lemma-formal-functions-general}.
+The first degenerates by More on Algebra, Lemma
+\ref{more-algebra-lemma-derived-completion-pseudo-coherent}.
+We obtain $H^p(X, \mathcal{F})^\wedge$ in degree $p$.
+This is where we use the assumption that $H^p(X, \mathcal{F})$ is
+a finite $A$-module. The second degenerates because
+$$
+\mathcal{F}^\wedge = \lim \mathcal{F}/I^n\mathcal{F} =
+R\lim \mathcal{F}/I^n\mathcal{F}
+$$
+is a sheaf by Lemma \ref{lemma-derived-completion-pseudo-coherent}.
+We obtain $H^p(X, \lim \mathcal{F}/I^n\mathcal{F})$ in degree $p$.
+Since $R\Gamma(X, -)$ commutes with derived limits
+(Injectives, Lemma \ref{injectives-lemma-RF-commutes-with-Rlim})
+we also get
+$$
+R\Gamma(X, \lim \mathcal{F}/I^n\mathcal{F}) =
+R\Gamma(X, R\lim \mathcal{F}/I^n\mathcal{F}) =
+R\lim R\Gamma(X, \mathcal{F}/I^n\mathcal{F})
+$$
+By More on Algebra, Remark
+\ref{more-algebra-remark-how-unique}
+we obtain exact sequences
+$$
+0 \to
+R^1\lim H^{p - 1}(X, \mathcal{F}/I^n\mathcal{F}) \to
+H^p(X, \lim \mathcal{F}/I^n\mathcal{F}) \to
+\lim H^p(X, \mathcal{F}/I^n\mathcal{F}) \to 0
+$$
+of $A$-modules. Combining the above we get the first statement of the lemma.
+The vanishing of the $R^1\lim$ term follows from
+Cohomology of Schemes, Lemma \ref{coherent-lemma-ML-cohomology-powers-ideal}.
+\end{proof}
+
+\begin{remark}
+\label{remark-references}
+Here are some references to discussions of related material in the literature.
+It seems that a ``derived formal functions theorem'' for proper maps
+goes back to \cite[Theorem 6.3.1]{lurie-thesis}.
+There is the discussion in \cite{dag12}, especially
+Chapter 4 which discusses the affine story, see
+More on Algebra, Section \ref{more-algebra-section-derived-completion}.
+In \cite[Section 2.9]{G-R} one finds a discussion of proper base change and
+derived completion using (ind) coherent modules.
+An analogue of (\ref{equation-formal-functions})
+for complexes of quasi-coherent modules can be found as
+\cite[Theorem 6.5]{HL-P}.
+\end{remark}
+
+
+
+
+
+
+
+\section{Algebraization of local cohomology, I}
+\label{section-algebraization-sections-general}
+
+\noindent
+Let $A$ be a Noetherian ring and let $I$ and $J$ be two ideals of $A$.
+Let $M$ be a finite $A$-module. In this section we study the
+cohomology groups of the object
+$$
+R\Gamma_J(M)^\wedge
+\quad\text{of}\quad
+D(A)
+$$
+where ${}^\wedge$ denotes derived $I$-adic completion. Observe that in
+Dualizing Complexes, Lemma \ref{dualizing-lemma-completion-local-H0}
+we have shown, if $A$ is complete with respect to $I$,
+that there is an isomorphism
+$$
+\colim H^0_Z(M) \longrightarrow H^0(R\Gamma_J(M)^\wedge)
+$$
+where the (directed) colimit is over the closed subsets $Z = V(J')$
+with $J' \subset J$ and $V(J') \cap V(I) = V(J) \cap V(I)$.
+The union of these closed subsets is
+\begin{equation}
+\label{equation-associated-subset}
+T = \{\mathfrak p \in \Spec(A) :
+V(\mathfrak p) \cap V(I) \subset V(J) \cap V(I)\}
+\end{equation}
+This is a subset of $\Spec(A)$ stable under specialization.
+The result above becomes the statement that
+$$
+H^0_T(M) \longrightarrow H^0(R\Gamma_J(M)^\wedge)
+$$
+is an isomorphism provided $A$ is complete with respect to $I$, see
+Local Cohomology, Lemma \ref{local-cohomology-lemma-adjoint-ext} and
+Remark \ref{local-cohomology-remark-upshot}.
+Our method to extend this isomorphism to higher cohomology groups
+rests on the following lemma.
+
+\begin{lemma}
+\label{lemma-kill-completion-general}
+Let $I, J$ be ideals of a Noetherian ring $A$.
+Let $M$ be a finite $A$-module. Let $\mathfrak p \subset A$ be a prime.
+Let $s$ and $d$ be integers. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex,
+\item $\mathfrak p \not \in V(J) \cap V(I)$,
+\item $\text{cd}(A, I) \leq d$, and
+\item for all primes $\mathfrak p' \subset \mathfrak p$
+we have
+$\text{depth}_{A_{\mathfrak p'}}(M_{\mathfrak p'}) +
+\dim((A/\mathfrak p')_\mathfrak q) > d + s$
+for all $\mathfrak q \in V(\mathfrak p') \cap V(J) \cap V(I)$.
+\end{enumerate}
+Then there exists an $f \in A$, $f \not \in \mathfrak p$ which annihilates
+$H^i(R\Gamma_J(M)^\wedge)$ for $i \leq s$ where ${}^\wedge$
+indicates $I$-adic completion.
+\end{lemma}
+
+\begin{proof}
+We will use that $R\Gamma_J = R\Gamma_{V(J)}$ and similarly for
+$I + J$, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-local-cohomology-noetherian}.
+Observe that
+$R\Gamma_J(M)^\wedge = R\Gamma_I(R\Gamma_J(M))^\wedge =
+R\Gamma_{I + J}(M)^\wedge$, see
+Dualizing Complexes, Lemmas
+\ref{dualizing-lemma-complete-and-local} and
+\ref{dualizing-lemma-local-cohomology-ss}.
+Thus we may replace $J$ by $I + J$ and assume $I \subset J$
+and $\mathfrak p \not \in V(J)$.
+Recall that
+$$
+R\Gamma_J(M)^\wedge = R\Hom_A(R\Gamma_I(A), R\Gamma_J(M))
+$$
+by the description of derived completion in
+More on Algebra, Lemma \ref{more-algebra-lemma-derived-completion}
+combined with the description of local cohomology in
+Dualizing Complexes, Lemma
+\ref{dualizing-lemma-compute-local-cohomology-noetherian}.
+Assumption (3) means that $R\Gamma_I(A)$ has nonzero cohomology
+only in degrees $\leq d$. Using the canonical truncations of
+$R\Gamma_I(A)$ we find it suffices to show that
+$$
+\text{Ext}^i(N, R\Gamma_J(M))
+$$
+is annihilated by an $f \in A$, $f \not \in \mathfrak p$ for
+$i \leq s + d$ and any $A$-module $N$.
+In turn using the canonical truncations for $R\Gamma_J(M)$
+we see that it suffices to show
+$H^i_J(M)$ is annihilated by an $f \in A$, $f \not \in \mathfrak p$
+for $i \leq s + d$.
+This follows from Local Cohomology, Lemma
+\ref{local-cohomology-lemma-kill-local-cohomology-at-prime}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kill-colimit-weak-general}
+Let $I, J$ be ideals of a Noetherian ring $A$. Let $M$ be a finite $A$-module.
+Let $s$ and $d$ be integers. With $T$ as in
+(\ref{equation-associated-subset}) assume
+\begin{enumerate}
+\item $A$ has a dualizing complex,
+\item if $\mathfrak p \in V(I)$, then no condition,
+\item if $\mathfrak p \not \in V(I)$, $\mathfrak p \in T$, then
+$\dim((A/\mathfrak p)_\mathfrak q) \leq d$ for some
+$\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$,
+\item if $\mathfrak p \not \in V(I)$, $\mathfrak p \not \in T$, then
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s
+\quad\text{or}\quad
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > d + s
+$$
+for all $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$.
+\end{enumerate}
+Then there exists an ideal $J_0 \subset J$ with
+$V(J_0) \cap V(I) = V(J) \cap V(I)$ such that for any $J' \subset J_0$ with
+$V(J') \cap V(I) = V(J) \cap V(I)$ the map
+$$
+R\Gamma_{J'}(M) \longrightarrow R\Gamma_{J_0}(M)
+$$
+induces an isomorphism in cohomology in degrees $\leq s$
+and moreover these modules are annihilated by a power of $J_0I$.
+\end{lemma}
+
+\begin{proof}
+Let us consider the set
+$$
B = \{\mathfrak p \in \Spec(A) :
\mathfrak p \not \in V(I),\ \mathfrak p \in T,\text{ and }
\text{depth}(M_\mathfrak p) \leq s\}
+$$
+Choose $J_0 \subset J$ such that $V(J_0)$ is the closure of $B \cup V(J)$.
+
+\medskip\noindent
+Claim I: $V(J_0) \cap V(I) = V(J) \cap V(I)$.
+
+\medskip\noindent
+Proof of Claim I. The inclusion $\supset$ holds by construction.
+Let $\mathfrak p$ be a minimal prime of $V(J_0)$.
+If $\mathfrak p \in B \cup V(J)$, then either $\mathfrak p \in T$
+or $\mathfrak p \in V(J)$ and in both cases
+$V(\mathfrak p) \cap V(I) \subset V(J) \cap V(I)$ as desired.
If $\mathfrak p \not \in B \cup V(J)$, then
$V(\mathfrak p) \cap B$ is dense in $V(\mathfrak p)$, hence infinite,
and we conclude that $\text{depth}(M_\mathfrak p) < s$ by
+Local Cohomology, Lemma \ref{local-cohomology-lemma-depth-function}.
+In fact, let
+$V(\mathfrak p) \cap B = \{\mathfrak p_\lambda\}_{\lambda \in \Lambda}$.
+Pick $\mathfrak q_\lambda \in V(\mathfrak p_\lambda) \cap V(J) \cap V(I)$
+as in (3).
+Let $\delta : \Spec(A) \to \mathbf{Z}$ be the dimension function
+associated to a dualizing complex $\omega_A^\bullet$ for $A$.
+Since $\Lambda$ is infinite and $\delta$ is bounded,
+there exists an infinite subset $\Lambda' \subset \Lambda$ on which
+$\delta(\mathfrak q_\lambda)$ is constant. For
+$\lambda \in \Lambda'$ we have
+$$
+\text{depth}(M_{\mathfrak p_\lambda}) +
+\delta(\mathfrak p_\lambda) - \delta(\mathfrak q_\lambda) =
+\text{depth}(M_{\mathfrak p_\lambda}) +
+\dim((A/\mathfrak p_\lambda)_{\mathfrak q_\lambda})
+\leq d + s
+$$
+by (3) and the definition of $B$. By the semi-continuity of
+the function $\text{depth} + \delta$ proved in
+Duality for Schemes, Lemma \ref{duality-lemma-sitting-in-degrees}
+we conclude that
+$$
+\text{depth}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_{\mathfrak q_\lambda}) =
+\text{depth}(M_\mathfrak p) + \delta(\mathfrak p) - \delta(\mathfrak q_\lambda)
+\leq d + s
+$$
+Since also $\mathfrak p \not \in V(I)$ we read off from (4) that
+$\mathfrak p \in T$, i.e.,
+$V(\mathfrak p) \cap V(I) \subset V(J) \cap V(I)$. This finishes the
+proof of Claim I.
+
+\medskip\noindent
Claim II: $H^i_{J'}(M) \to H^i_{J_0}(M)$ is an isomorphism for $i \leq s$
for any $J' \subset J_0$ with $V(J') \cap V(I) = V(J) \cap V(I)$.
+
+\medskip\noindent
+Proof of claim II. Choose $\mathfrak p \in V(J')$ not in $V(J_0)$.
+It suffices to show that $H^i_{\mathfrak pA_\mathfrak p}(M_\mathfrak p) = 0$
+for $i \leq s$, see
+Local Cohomology, Lemma \ref{local-cohomology-lemma-isomorphism}.
+Observe that $\mathfrak p \in T$. Hence since $\mathfrak p$ is not in $B$
+we see that $\text{depth}(M_\mathfrak p) > s$ and the groups vanish by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-depth}.
+
+\medskip\noindent
+Claim III. The final statement of the lemma is true.
+
+\medskip\noindent
+By Claim II for $i \leq s$ we have
+$$
+H^i_T(M) = H^i_{J_0}(M) = H^i_{J'}(M)
+$$
+for all ideals $J' \subset J_0$ with $V(J') \cap V(I) = V(J) \cap V(I)$.
+See Local Cohomology, Lemma \ref{local-cohomology-lemma-adjoint-ext}.
+Let us check the hypotheses of Local Cohomology,
+Proposition \ref{local-cohomology-proposition-annihilator}
+for the subsets $T \subset T \cup V(I)$, the module $M$, and the integer $s$.
+We have to show that given $\mathfrak p \subset \mathfrak q$
+with $\mathfrak p \not \in T \cup V(I)$ and $\mathfrak q \in T$
+we have
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > s
+$$
+If $\text{depth}(M_\mathfrak p) \geq s$, then this is true because
+the dimension of $(A/\mathfrak p)_\mathfrak q$ is at least $1$.
+Thus we may assume $\text{depth}(M_\mathfrak p) < s$.
+If $\mathfrak q \in V(I)$, then $\mathfrak q \in V(J) \cap V(I)$
+and the inequality holds by (4). If $\mathfrak q \not \in V(I)$,
+then we can use (3) to pick
+$\mathfrak q' \in V(\mathfrak q) \cap V(J) \cap V(I)$ with
+$\dim((A/\mathfrak q)_{\mathfrak q'}) \leq d$.
+Then assumption (4) gives
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_{\mathfrak q'}) > s + d
+$$
+Since $A$ is catenary this implies the inequality we want.
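Indeed, since $A$ is catenary and
$\mathfrak p \subset \mathfrak q \subset \mathfrak q'$ we have
$$
\dim((A/\mathfrak p)_{\mathfrak q'}) =
\dim((A/\mathfrak p)_\mathfrak q) +
\dim((A/\mathfrak q)_{\mathfrak q'})
\leq
\dim((A/\mathfrak p)_\mathfrak q) + d
$$
and therefore
$$
\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
\dim((A/\mathfrak p)_\mathfrak q)
\geq
\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
\dim((A/\mathfrak p)_{\mathfrak q'}) - d
> (s + d) - d = s.
$$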
+Applying Local Cohomology,
+Proposition \ref{local-cohomology-proposition-annihilator} we
+find $J'' \subset A$ with $V(J'') \subset T \cup V(I)$
+such that $J''$ annihilates $H^i_T(M)$ for $i \leq s$.
+Then we can write $V(J'') \cup V(J_0) \cup V(I) = V(J'I)$
+for some $J' \subset J_0$ with $V(J') \cap V(I) = V(J) \cap V(I)$.
+Replacing $J_0$ by $J'$ the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kill-colimit-general}
+In Lemma \ref{lemma-kill-colimit-weak-general} if instead of the empty
+condition (2) we assume
+\begin{enumerate}
+\item[(2')] if $\mathfrak p \in V(I)$, $\mathfrak p \not \in V(J) \cap V(I)$,
+then
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > s$
+for all $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$,
+\end{enumerate}
+then the conditions also imply that $H^i_{J_0}(M)$ is a finite
+$A$-module for $i \leq s$.
+\end{lemma}
+
+\begin{proof}
+Recall that $H^i_{J_0}(M) = H^i_T(M)$, see proof of
+Lemma \ref{lemma-kill-colimit-weak-general}. Thus it suffices to
+check that for $\mathfrak p \not \in T$ and $\mathfrak q \in T$
+with $\mathfrak p \subset \mathfrak q$ we have
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > s$, see Local Cohomology,
+Proposition \ref{local-cohomology-proposition-finiteness}.
+Condition (2') tells us this is true for $\mathfrak p \in V(I)$.
+Since we know $H^i_T(M)$ is annihilated by a power of $IJ_0$
+we know the condition holds if $\mathfrak p \not \in V(IJ_0)$
+by Local Cohomology, Proposition \ref{local-cohomology-proposition-annihilator}.
+This covers all cases and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kill-colimit-support-general}
+If in Lemma \ref{lemma-kill-colimit-weak-general} we additionally assume
+\begin{enumerate}
+\item[(6)] if $\mathfrak p \not \in V(I)$, $\mathfrak p \in T$, then
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) > s$,
+\end{enumerate}
+then $H^i_{J_0}(M) = H^i_J(M) = H^i_{J + I}(M)$ for $i \leq s$ and these
+modules are annihilated by a power of $I$.
+\end{lemma}
+
+\begin{proof}
+Choose $\mathfrak p \in V(J)$ or $\mathfrak p \in V(J_0)$ but
+$\mathfrak p \not \in V(J + I) = V(J_0 + I)$.
+It suffices to show that $H^i_{\mathfrak pA_\mathfrak p}(M_\mathfrak p) = 0$
+for $i \leq s$, see
+Local Cohomology, Lemma \ref{local-cohomology-lemma-isomorphism}.
+These groups vanish by condition (6) and
+Dualizing Complexes, Lemma \ref{dualizing-lemma-depth}.
+The final statement follows from
+Local Cohomology, Proposition \ref{local-cohomology-proposition-annihilator}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-algebraize-local-cohomology-general}
+Let $I, J$ be ideals of a Noetherian ring $A$.
+Let $M$ be a finite $A$-module.
+Let $s$ and $d$ be integers. With $T$ as in
+(\ref{equation-associated-subset}) assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
\item if $\mathfrak p \in V(I)$, then no condition,
+\item $\text{cd}(A, I) \leq d$,
\item if $\mathfrak p \not \in V(I)$, $\mathfrak p \not \in T$, then
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s
+\quad\text{or}\quad
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > d + s
+$$
+for all $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$,
+\item if $\mathfrak p \not \in V(I)$, $\mathfrak p \not \in T$,
+$V(\mathfrak p) \cap V(J) \cap V(I) \not = \emptyset$, and
+$\text{depth}(M_\mathfrak p) < s$, then one
+of the following holds\footnote{Our method
+forces this additional condition. We will return to this
+(insert future reference).}:
+\begin{enumerate}
+\item $\dim(\text{Supp}(M_\mathfrak p)) < s + 2$\footnote{For example
+if $M$ satisfies Serre's condition $(S_s)$
+on the complement of $V(I) \cup T$.}, or
+\item $\delta(\mathfrak p) > d + \delta_{max} - 1$
+where $\delta$ is a dimension function and $\delta_{max}$
+is the maximum of $\delta$ on $V(J) \cap V(I)$, or
\item $\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
\dim((A/\mathfrak p)_\mathfrak q) > d + s + \delta_{max} - \delta_{min} - 2$
for all $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$, where
$\delta_{min}$ is the minimum of $\delta$ on $V(J) \cap V(I)$.
+\end{enumerate}
+\end{enumerate}
+Then there exists an ideal $J_0 \subset J$ with
+$V(J_0) \cap V(I) = V(J) \cap V(I)$
+such that for any $J' \subset J_0$ with
+$V(J') \cap V(I) = V(J) \cap V(I)$ the map
+$$
+R\Gamma_{J'}(M) \longrightarrow R\Gamma_J(M)^\wedge
+$$
+induces an isomorphism on cohomology in degrees $\leq s$.
+Here ${}^\wedge$ denotes derived $I$-adic completion.
+\end{lemma}
+
+\noindent
+We encourage the reader to read the proof in the local case first
+(Lemma \ref{lemma-algebraize-local-cohomology}) as it explains the structure
+of the proof without having to deal with all the inequalities.
+
+\begin{proof}
+For an ideal $\mathfrak a \subset A$ we have
+$R\Gamma_\mathfrak a = R\Gamma_{V(\mathfrak a)}$, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-local-cohomology-noetherian}.
+Next, we observe that
+$$
+R\Gamma_J(M)^\wedge =
+R\Gamma_I(R\Gamma_J(M))^\wedge =
+R\Gamma_{I + J}(M)^\wedge =
+R\Gamma_{I + J'}(M)^\wedge =
+R\Gamma_I(R\Gamma_{J'}(M))^\wedge =
+R\Gamma_{J'}(M)^\wedge
+$$
+by Dualizing Complexes, Lemmas \ref{dualizing-lemma-local-cohomology-ss} and
+\ref{dualizing-lemma-complete-and-local}.
+This explains how we define the arrow in the statement of the lemma.
+
+\medskip\noindent
+We claim that the hypotheses of Lemma \ref{lemma-kill-colimit-weak-general}
+are implied by our current hypotheses on $M$.
+The only thing to verify is hypothesis (3).
+Thus let $\mathfrak p \not \in V(I)$, $\mathfrak p \in T$.
+Then $V(\mathfrak p) \cap V(I)$ is nonempty as $I$ is
+contained in the Jacobson radical of $A$
+(Algebra, Lemma \ref{algebra-lemma-radical-completion}).
+Since $\mathfrak p \in T$ we have
+$V(\mathfrak p) \cap V(I) = V(\mathfrak p) \cap V(J) \cap V(I)$.
+Let $\mathfrak q \in V(\mathfrak p) \cap V(I)$ be the
+generic point of an irreducible component.
+We have $\text{cd}(A_\mathfrak q, I_\mathfrak q) \leq d$
+by Local Cohomology, Lemma \ref{local-cohomology-lemma-cd-local}.
+We have $V(\mathfrak pA_\mathfrak q) \cap V(I_\mathfrak q) =
+\{\mathfrak qA_\mathfrak q\}$ by our choice of $\mathfrak q$
+and we conclude $\dim((A/\mathfrak p)_\mathfrak q) \leq d$
+by Local Cohomology, Lemma \ref{local-cohomology-lemma-cd-bound-dim-local}.
+
+\medskip\noindent
+Observe that the lemma holds for $s < 0$. This is not a trivial case because
+it is not a priori clear that $H^i(R\Gamma_J(M)^\wedge)$
+is zero for $i < 0$. However, this vanishing was established in
+Dualizing Complexes, Lemma \ref{dualizing-lemma-completion-local}.
+We will prove the lemma by induction for $s \geq 0$.
+
+\medskip\noindent
+The lemma for $s = 0$ follows immediately from
+the conclusion of Lemma \ref{lemma-kill-colimit-weak-general}
+and Dualizing Complexes, Lemma \ref{dualizing-lemma-completion-local-H0}.
+
+\medskip\noindent
+Assume $s > 0$ and the lemma has been shown for smaller values of $s$.
+Let $M' \subset M$ be the maximal submodule whose support is contained
+in $V(I) \cup T$. Then $M'$ is a finite $A$-module whose support
+is contained in $V(J') \cup V(I)$ for some ideal $J' \subset J$
+with $V(J') \cap V(I) = V(J) \cap V(I)$.
+We claim that
+$$
+R\Gamma_{J'}(M') \to R\Gamma_J(M')^\wedge
+$$
+is an isomorphism for any choice of $J'$.
+Namely, we can choose a short exact sequence
+$0 \to M_1 \oplus M_2 \to M' \to N \to 0$ with
+$M_1$ annihilated by a power of $J'$, with $M_2$ annihilated
+by a power of $I$, and with $N$ annihilated by a power of $I + J'$.
+Thus it suffices to show that the claim holds for $M_1$, $M_2$, and $N$.
+In the case of $M_1$ we see that $R\Gamma_{J'}(M_1) = M_1$ and
+since $M_1$ is a finite $A$-module and $I$-adically complete
+we have $M_1^\wedge = M_1$. This proves the claim for $M_1$
+by the initial remarks of the proof. In the case of $M_2$ we see that
$H^i_J(M_2) = H^i_{I + J}(M_2) = H^i_{I + J'}(M_2) = H^i_{J'}(M_2)$
+are annihilated by a power of $I$ and hence derived complete.
Thus the claim holds in this case as well. For $N$ we can use either of
+the arguments just given. Considering the short exact sequence
+$0 \to M' \to M \to M/M' \to 0$
+we see that it suffices to prove the lemma for $M/M'$.
+Thus we may assume $\text{Ass}(M) \cap (V(I) \cup T) = \emptyset$.
+
+\medskip\noindent
+Let $\mathfrak p \in \text{Ass}(M)$ be such that
+$V(\mathfrak p) \cap V(J) \cap V(I) = \emptyset$.
+Since $I$ is contained in the Jacobson radical of $A$ this implies
+that $V(\mathfrak p) \cap V(J') = \emptyset$ for any
+$J' \subset J$ with $V(J') \cap V(I) = V(J) \cap V(I)$.
+Thus setting $N = H^0_\mathfrak p(M)$ we see that
+$R\Gamma_J(N) = R\Gamma_{J'}(N) = 0$ for all
+$J' \subset J$ with $V(J') \cap V(I) = V(J) \cap V(I)$.
+In particular $R\Gamma_J(N)^\wedge = 0$.
+Thus we may replace $M$ by $M/N$ as this changes the
+structure of $M$ only in primes which do not play
+a role in conditions (4) or (5). Repeating we may assume that
+$V(\mathfrak p) \cap V(J) \cap V(I) \not = \emptyset$
+for all $\mathfrak p \in \text{Ass}(M)$.
+
+\medskip\noindent
+Assume $\text{Ass}(M) \cap (V(I) \cup T) = \emptyset$ and that
+$V(\mathfrak p) \cap V(J) \cap V(I) \not = \emptyset$
+for all $\mathfrak p \in \text{Ass}(M)$.
+Let $\mathfrak p \in \text{Ass}(M)$. We want to show that we may apply
+Lemma \ref{lemma-kill-completion-general}.
+It is in the verification of this that we will use the supplemental
+condition (5). Choose $\mathfrak p' \subset \mathfrak p$
and $\mathfrak q' \in V(\mathfrak p) \cap V(J) \cap V(I)$.
+\begin{enumerate}
+\item If $M_{\mathfrak p'} = 0$, then
+$\text{depth}(M_{\mathfrak p'}) = \infty$ and
+$\text{depth}(M_{\mathfrak p'}) +
+\dim((A/\mathfrak p')_{\mathfrak q'}) > d + s$.
+\item If $\text{depth}(M_{\mathfrak p'}) < s$, then
+$\text{depth}(M_{\mathfrak p'}) +
+\dim((A/\mathfrak p')_{\mathfrak q'}) > d + s$ by (4).
+\end{enumerate}
+In the remaining cases we have $M_{\mathfrak p'} \not = 0$ and
+$\text{depth}(M_{\mathfrak p'}) \geq s$. In particular, we see that
+$\mathfrak p'$ is in the support of $M$ and we can choose
+$\mathfrak p'' \subset \mathfrak p'$ with $\mathfrak p'' \in \text{Ass}(M)$.
+\begin{enumerate}
+\item[(a)] Observe that
+$\dim((A/\mathfrak p'')_{\mathfrak p'}) \geq \text{depth}(M_{\mathfrak p'})$
+by Algebra, Lemma \ref{algebra-lemma-depth-dim-associated-primes}.
+If equality holds, then we have
+$$
+\text{depth}(M_{\mathfrak p'}) + \dim((A/\mathfrak p')_{\mathfrak q'}) =
+\text{depth}(M_{\mathfrak p''}) + \dim((A/\mathfrak p'')_{\mathfrak q'})
+> s + d
+$$
+by (4) applied to $\mathfrak p''$ and we are done. This means we are
+only in trouble if
$\dim((A/\mathfrak p'')_{\mathfrak p'}) > \text{depth}(M_{\mathfrak p'})$.
Since $\mathfrak p' \not = \mathfrak p$
(as $\text{depth}(M_\mathfrak p) = 0 < s$ because
$\mathfrak p \in \text{Ass}(M)$)
this implies that $\dim(\text{Supp}(M_\mathfrak p)) \geq s + 2$.
Thus if (5)(a) holds, then this does not occur.
+\item[(b)] If (5)(b) holds, then we get
+$$
+\text{depth}(M_{\mathfrak p'}) + \dim((A/\mathfrak p')_{\mathfrak q'})
+\geq s + \delta(\mathfrak p') - \delta(\mathfrak q')
+\geq s + 1 + \delta(\mathfrak p) - \delta_{max}
+> s + d
+$$
+as desired.
+\item[(c)] If (5)(c) holds, then we get
+\begin{align*}
+\text{depth}(M_{\mathfrak p'}) + \dim((A/\mathfrak p')_{\mathfrak q'})
+& \geq
+s + \delta(\mathfrak p') - \delta(\mathfrak q') \\
+& \geq
+s + 1 + \delta(\mathfrak p) - \delta(\mathfrak q') \\
+& =
+s + 1 + \delta(\mathfrak p) - \delta(\mathfrak q) +
+\delta(\mathfrak q) - \delta(\mathfrak q') \\
+& >
+s + 1 + (s + d + \delta_{max} - \delta_{min} - 2) +
+\delta(\mathfrak q) - \delta(\mathfrak q') \\
+& \geq
+2s + d - 1 \geq s + d
+\end{align*}
+as desired. Observe that this argument works because
+we know that a prime $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$
+exists.
+\end{enumerate}
+Now we are ready to do the induction step.
+
+\medskip\noindent
+Choose an ideal $J_0$ as in Lemma \ref{lemma-kill-colimit-weak-general}
+and an integer $t > 0$ such that $(J_0I)^t$ annihilates $H^s_J(M)$.
+The assumptions of Lemma \ref{lemma-kill-completion-general}
+are satisfied for every $\mathfrak p \in \text{Ass}(M)$
+(see previous paragraph).
+Thus the annihilator $\mathfrak a \subset A$ of
+$H^s(R\Gamma_J(M)^\wedge)$
+is not contained in $\mathfrak p$ for $\mathfrak p \in \text{Ass}(M)$.
Thus we can find an $f \in \mathfrak a(J_0I)^t$, not contained in
any associated prime of $M$, which annihilates
both $H^s(R\Gamma_J(M)^\wedge)$ and $H^s_J(M)$.
+Then $f$ is a nonzerodivisor on $M$ and we can consider the
+short exact sequence
+$$
+0 \to M \xrightarrow{f} M \to M/fM \to 0
+$$
+Our choice of $f$ shows that we obtain
+$$
+\xymatrix{
+H^{s - 1}_{J'}(M) \ar[d] \ar[r] &
+H^{s - 1}_{J'}(M/fM) \ar[d] \ar[r] &
+H^s_{J'}(M) \ar[d] \ar[r] & 0 \\
+H^{s - 1}(R\Gamma_J(M)^\wedge) \ar[r] &
+H^{s - 1}(R\Gamma_J(M/fM)^\wedge) \ar[r] &
+H^s(R\Gamma_J(M)^\wedge) \ar[r] & 0
+}
+$$
+for any $J' \subset J_0$ with $V(J') \cap V(I) = V(J) \cap V(I)$.
+Thus if we choose $J'$ such that it works for
+$M$ and $M/fM$ and $s - 1$ (possible by induction hypothesis --
+see next paragraph), then we conclude that the lemma is true.
+
+\medskip\noindent
+To finish the proof we have to show that the module
+$M/fM$ satisfies the hypotheses (4) and (5) for $s - 1$.
+Thus we let $\mathfrak p$ be a prime in the support
+of $M/fM$ with $\text{depth}((M/fM)_\mathfrak p) < s - 1$
+and with $V(\mathfrak p) \cap V(J) \cap V(I)$ nonempty.
+Then $\dim(M_\mathfrak p) = \dim((M/fM)_\mathfrak p) + 1$
+and $\text{depth}(M_\mathfrak p) = \text{depth}((M/fM)_\mathfrak p) + 1$.
+In particular, we know (4) and (5) hold for $\mathfrak p$ and $M$
+with the original value $s$.
+The desired inequalities then follow by inspection.
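For example, for condition (4): if
$\text{depth}((M/fM)_\mathfrak p) < s - 1$, then
$\text{depth}(M_\mathfrak p) < s$ and hence (4) for $M$ gives
$$
\text{depth}((M/fM)_\mathfrak p) + \dim((A/\mathfrak p)_\mathfrak q) =
\text{depth}(M_\mathfrak p) - 1 + \dim((A/\mathfrak p)_\mathfrak q)
> (d + s) - 1 = d + (s - 1)
$$
for all $\mathfrak q \in V(\mathfrak p) \cap V(J) \cap V(I)$,
which is exactly (4) for $M/fM$ and $s - 1$.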
+\end{proof}
+
+\begin{example}
+\label{example-no-ML}
+In Lemma \ref{lemma-algebraize-local-cohomology-general}
+we do not know that the inverse systems $H^i_J(M/I^nM)$ satisfy the
+Mittag-Leffler condition.
+For example, suppose that $A = \mathbf{Z}_p[[x, y]]$, $I = (p)$,
+$J = (p, x)$, and $M = A/(xy - p)$. Then the image of
+$H^0_J(M/p^nM) \to H^0_J(M/pM)$
+is the ideal generated by $y^n$ in $M/pM = A/(p, xy)$.
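Indeed, $M/pM = \mathbf{F}_p[[x, y]]/(xy)$, and an element of this
ring is annihilated by a power of $J = (p, x)$ if and only if it is
annihilated by a power of $x$, if and only if it lies in the ideal
$(y)$. Hence
$$
H^0_J(M/pM) = (y)
$$
and the images $(y) \supset (y^2) \supset (y^3) \supset \ldots$
form a strictly decreasing chain which never stabilizes.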
+\end{example}
+
+
+
+
+
+\section{Algebraization of local cohomology, II}
+\label{section-algebraization-punctured}
+
+\noindent
+In this section we redo the arguments of
+Section \ref{section-algebraization-sections-general}
+when $(A, \mathfrak m)$ is a local ring and we take local cohomology
+$R\Gamma_\mathfrak m$ with respect to $\mathfrak m$. As before our
+main tool is the following lemma.
+
+\begin{lemma}
+\label{lemma-kill-completion}
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+Let $I \subset A$ be an ideal. Let $M$ be a finite $A$-module and
+let $\mathfrak p \subset A$ be a prime. Let $s$ and $d$ be integers. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex,
+\item $\text{cd}(A, I) \leq d$, and
+\item
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) + \dim(A/\mathfrak p) > d + s$.
+\end{enumerate}
+Then there exists an $f \in A \setminus \mathfrak p$ which annihilates
+$H^i(R\Gamma_\mathfrak m(M)^\wedge)$ for $i \leq s$ where ${}^\wedge$
+indicates $I$-adic completion.
+\end{lemma}
+
+\begin{proof}
+According to Local Cohomology, Lemma
+\ref{local-cohomology-lemma-sitting-in-degrees}
+the function
+$$
+\mathfrak p' \longmapsto
+\text{depth}_{A_{\mathfrak p'}}(M_{\mathfrak p'}) + \dim(A/\mathfrak p')
+$$
+is lower semi-continuous on $\Spec(A)$. Thus the value
+of this function on $\mathfrak p' \subset \mathfrak p$
+is $> s + d$. Thus our lemma is a special case of
+Lemma \ref{lemma-kill-completion-general}
+provided that $\mathfrak p \not = \mathfrak m$.
+If $\mathfrak p = \mathfrak m$,
+then we have $H^i_\mathfrak m(M) = 0$ for $i \leq s + d$ by
+the relationship between depth and local cohomology
+(Dualizing Complexes, Lemma \ref{dualizing-lemma-depth}).
+Thus the argument given in the proof of
+Lemma \ref{lemma-kill-completion-general}
+shows that $H^i(R\Gamma_\mathfrak m(M)^\wedge) = 0$
+for $i \leq s$ in this (degenerate) case.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kill-colimit-weak}
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+Let $I \subset A$ be an ideal. Let $M$ be a finite $A$-module.
+Let $s$ and $d$ be integers. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex,
+\item if $\mathfrak p \in V(I)$, then no condition,
+\item if $\mathfrak p \not \in V(I)$ and
+$V(\mathfrak p) \cap V(I) = \{\mathfrak m\}$, then
+$\dim(A/\mathfrak p) \leq d$,
+\item if $\mathfrak p \not \in V(I)$ and
+$V(\mathfrak p) \cap V(I) \not = \{\mathfrak m\}$, then
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s
+\quad\text{or}\quad
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) + \dim(A/\mathfrak p) > d + s
+$$
+\end{enumerate}
+Then there exists an ideal $J_0 \subset A$ with
+$V(J_0) \cap V(I) = \{\mathfrak m\}$ such that for any $J \subset J_0$ with
+$V(J) \cap V(I) = \{\mathfrak m\}$ the map
+$$
+R\Gamma_J(M) \longrightarrow R\Gamma_{J_0}(M)
+$$
+induces an isomorphism in cohomology in degrees $\leq s$
+and moreover these modules are annihilated by a power of $J_0I$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-kill-colimit-weak-general}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kill-colimit}
+In Lemma \ref{lemma-kill-colimit-weak} if instead of the empty
+condition (2) we assume
+\begin{enumerate}
+\item[(2')] if $\mathfrak p \in V(I)$ and $\mathfrak p \not = \mathfrak m$,
+then $\text{depth}_{A_\mathfrak p}(M_\mathfrak p) + \dim(A/\mathfrak p) > s$,
+\end{enumerate}
+then the conditions also imply that $H^i_{J_0}(M)$ is a finite
+$A$-module for $i \leq s$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-kill-colimit-general}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kill-colimit-support}
+If in Lemma \ref{lemma-kill-colimit-weak} we additionally assume
+\begin{enumerate}
+\item[(6)] if $\mathfrak p \not \in V(I)$ and
+$V(\mathfrak p) \cap V(I) = \{\mathfrak m\}$, then
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) > s$,
+\end{enumerate}
+then $H^i_{J_0}(M) = H^i_J(M) = H^i_\mathfrak m(M)$ for $i \leq s$
+and these modules are annihilated by a power of $I$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-kill-colimit-support-general}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-algebraize-local-cohomology}
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+Let $I \subset A$ be an ideal. Let $M$ be a finite $A$-module.
+Let $s$ and $d$ be integers. Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item if $\mathfrak p \in V(I)$, no condition,
+\item $\text{cd}(A, I) \leq d$,
+\item if $\mathfrak p \not \in V(I)$ and
+$V(\mathfrak p) \cap V(I) \not = \{\mathfrak m\}$ then
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s
+\quad\text{or}\quad
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) + \dim(A/\mathfrak p) > d + s
+$$
+\end{enumerate}
+Then there exists an ideal $J_0 \subset A$ with
+$V(J_0) \cap V(I) = \{\mathfrak m\}$ such that for any $J \subset J_0$ with
+$V(J) \cap V(I) = \{\mathfrak m\}$ the map
+$$
+R\Gamma_J(M) \longrightarrow
+R\Gamma_J(M)^\wedge = R\Gamma_\mathfrak m(M)^\wedge
+$$
+induces an isomorphism in cohomology in degrees $\leq s$.
+Here ${}^\wedge$ denotes derived $I$-adic completion.
+\end{lemma}
+
+\begin{proof}
+This lemma is a special case of
+Lemma \ref{lemma-algebraize-local-cohomology-general}
+since condition (5)(c) is implied by condition (4)
+as $\delta_{max} = \delta_{min} = \delta(\mathfrak m)$.
+We will give the proof of this important special case
+as it is somewhat easier (fewer things to check).
+
+\medskip\noindent
+There is no difference between $R\Gamma_\mathfrak a$ and
+$R\Gamma_{V(\mathfrak a)}$ in our current situation, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-local-cohomology-noetherian}.
+Next, we observe that
+$$
+R\Gamma_\mathfrak m(M)^\wedge =
+R\Gamma_I(R\Gamma_J(M))^\wedge =
+R\Gamma_J(M)^\wedge
+$$
+by Dualizing Complexes, Lemmas \ref{dualizing-lemma-local-cohomology-ss} and
+\ref{dualizing-lemma-complete-and-local}
+which explains the equality sign in the statement of the lemma.
+
+\medskip\noindent
+Observe that the lemma holds for $s < 0$. This is not a trivial case because
+it is not a priori clear that $H^s(R\Gamma_\mathfrak m(M)^\wedge)$
+is zero for negative $s$. However, this vanishing was established
+in Lemma \ref{lemma-local-cohomology-derived-completion}.
+We will prove the lemma by induction for $s \geq 0$.
+
+\medskip\noindent
+The assumptions of Lemma \ref{lemma-kill-colimit-weak}
+are satisfied by Local Cohomology, Lemma
+\ref{local-cohomology-lemma-cd-bound-dim-local}.
+The lemma for $s = 0$ follows from Lemma \ref{lemma-kill-colimit-weak} and
+Dualizing Complexes, Lemma \ref{dualizing-lemma-completion-local-H0}.
+
+\medskip\noindent
+Assume $s > 0$ and the lemma holds for smaller values of $s$.
+Let $M' \subset M$ be the submodule of elements whose
support is contained in $V(I) \cup V(J)$ for some
+ideal $J$ with $V(J) \cap V(I) = \{\mathfrak m\}$.
+Then $M'$ is a finite $A$-module.
+We claim that
+$$
+R\Gamma_J(M') \to R\Gamma_\mathfrak m(M')^\wedge
+$$
+is an isomorphism for any choice of $J$.
+Namely, for any such module there is a short exact sequence
+$0 \to M_1 \oplus M_2 \to M' \to N \to 0$ with
+$M_1$ annihilated by a power of $J$, with $M_2$ annihilated
+by a power of $I$ and with $N$ annihilated by a power of $\mathfrak m$.
+In the case of $M_1$ we see that $R\Gamma_J(M_1) = M_1$ and
+since $M_1$ is a finite $A$-module and $I$-adically complete
+we have $M_1^\wedge = M_1$. Thus the claim holds for $M_1$.
+In the case of $M_2$ we see that $H^i_J(M_2)$ is annihilated
+by a power of $I$ and hence derived complete. Thus the claim
+for $M_2$. By the same arguments the claim holds for $N$
+and we conclude that the claim holds. Considering the
+short exact sequence $0 \to M' \to M \to M/M' \to 0$
+we see that it suffices to prove the lemma for $M/M'$.
Thus we may assume that $\mathfrak p \in \text{Ass}(M)$
+implies $V(\mathfrak p) \cap V(I) \not = \{\mathfrak m\}$, i.e.,
+$\mathfrak p$ is a prime as in (4).
+
+\medskip\noindent
+Choose an ideal $J_0$ as in Lemma \ref{lemma-kill-colimit-weak}
+and an integer $t > 0$ such that $(J_0I)^t$ annihilates $H^s_J(M)$.
+Here $J$ denotes an arbitrary ideal $J \subset J_0$ with
+$V(J) \cap V(I) = \{\mathfrak m\}$.
+The assumptions of Lemma \ref{lemma-kill-completion}
+are satisfied for every $\mathfrak p \in \text{Ass}(M)$
+(see previous paragraph). Thus the annihilator $\mathfrak a \subset A$ of
+$H^s(R\Gamma_\mathfrak m(M)^\wedge)$
+is not contained in $\mathfrak p$ for $\mathfrak p \in \text{Ass}(M)$.
Thus we can find an $f \in \mathfrak a(J_0I)^t$, not contained in
any associated prime of $M$, which annihilates
both $H^s(R\Gamma_\mathfrak m(M)^\wedge)$ and $H^s_J(M)$.
+Then $f$ is a nonzerodivisor on $M$ and we can consider the
+short exact sequence
+$$
+0 \to M \xrightarrow{f} M \to M/fM \to 0
+$$
+Our choice of $f$ shows that we obtain
+$$
+\xymatrix{
+H^{s - 1}_J(M) \ar[d] \ar[r] &
+H^{s - 1}_J(M/fM) \ar[d] \ar[r] &
+H^s_J(M) \ar[d] \ar[r] & 0 \\
+H^{s - 1}(R\Gamma_\mathfrak m(M)^\wedge) \ar[r] &
+H^{s - 1}(R\Gamma_\mathfrak m(M/fM)^\wedge) \ar[r] &
+H^s(R\Gamma_\mathfrak m(M)^\wedge) \ar[r] & 0
+}
+$$
+for any $J \subset J_0$ with $V(J) \cap V(I) = \{\mathfrak m\}$.
+Thus if we choose $J$ such that it works for
+$M$ and $M/fM$ and $s - 1$ (possible by induction hypothesis),
+then we conclude that the lemma is true.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Algebraization of local cohomology, III}
+\label{section-bootstrap}
+
+\noindent
+In this section we bootstrap the material in
+Sections \ref{section-algebraization-sections-general} and
+\ref{section-algebraization-sections}
to give a stronger result in the following situation.
+
+\begin{situation}
+\label{situation-bootstrap}
+Here $A$ is a Noetherian ring. We have an ideal $I \subset A$,
+a finite $A$-module $M$, and a subset $T \subset V(I)$ stable under
+specialization. We have integers $s$ and $d$. We assume
+\begin{enumerate}
+\item[(1)] $A$ has a dualizing complex,
+\item[(3)] $\text{cd}(A, I) \leq d$,
+\item[(4)] given primes $\mathfrak p \subset \mathfrak r \subset \mathfrak q$
+with $\mathfrak p \not \in V(I)$,
+$\mathfrak r \in V(I) \setminus T$,
+$\mathfrak q \in T$ we have
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s
+\quad\text{or}\quad
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > d + s
+$$
\item[(6)] given $\mathfrak q \in T$, with
$A', \mathfrak m', I', M'$ denoting the usual $I$-adic completions
of $A_\mathfrak q, \mathfrak qA_\mathfrak q, I_\mathfrak q, M_\mathfrak q$,
we have
+$$
+\text{depth}(M'_{\mathfrak p'}) > s
+$$
+for all $\mathfrak p' \in \Spec(A') \setminus V(I')$ with
+$V(\mathfrak p') \cap V(I') = \{\mathfrak m'\}$.
+\end{enumerate}
+\end{situation}
+
+\noindent
+The following lemma explains why in Situation \ref{situation-bootstrap}
+it suffices to look at triples
+$\mathfrak p \subset \mathfrak r \subset \mathfrak q$ of primes in (4)
+even though the actual assumption only involves $\mathfrak p$ and $\mathfrak q$.
+
+\begin{lemma}
+\label{lemma-helper-bootstrap}
+In Situation \ref{situation-bootstrap} let $\mathfrak p \subset \mathfrak q$
+be primes of $A$ with $\mathfrak p \not \in V(I)$ and
+$\mathfrak q \in T$. If there does not exist an
+$\mathfrak r \in V(I) \setminus T$ with
+$\mathfrak p \subset \mathfrak r \subset \mathfrak q$
+then $\text{depth}(M_\mathfrak p) > s$.
+\end{lemma}
+
+\begin{proof}
+Choose $\mathfrak q' \in T$ with
+$\mathfrak p \subset \mathfrak q' \subset \mathfrak q$
+such that there is no prime in $T$ strictly
+in between $\mathfrak p$ and $\mathfrak q'$. To prove the lemma
+we may and do replace $\mathfrak q$ by $\mathfrak q'$.
+Next, let $\mathfrak p' \subset A_\mathfrak q$ be the prime corresponding to
+$\mathfrak p$. After doing this we obtain that
+$V(\mathfrak p') \cap V(IA_\mathfrak q) = \{\mathfrak q A_\mathfrak q\}$
+because of the nonexistence of a prime $\mathfrak r$ as in the lemma.
+Let $A', I', \mathfrak m', M'$ be the $I$-adic completions of
+$A_\mathfrak q, I_\mathfrak q, \mathfrak qA_\mathfrak q, M_\mathfrak q$.
+Since $A_\mathfrak q \to A'$ is faithfully flat
+(Algebra, Lemma \ref{algebra-lemma-completion-faithfully-flat})
+we can choose $\mathfrak p'' \subset A'$ lying over $\mathfrak p'$
+with $\dim(A'_{\mathfrak p''}/\mathfrak p' A'_{\mathfrak p''}) = 0$.
+Then we see that
+$$
+\text{depth}(M'_{\mathfrak p''}) =
+\text{depth}((M_\mathfrak q \otimes_{A_\mathfrak q} A')_{\mathfrak p''}) =
+\text{depth}(M_\mathfrak p \otimes_{A_\mathfrak p} A'_{\mathfrak p''}) =
+\text{depth}(M_\mathfrak p)
+$$
+by flatness of $A \to A'$ and our choice of $\mathfrak p''$, see
+Algebra, Lemma \ref{algebra-lemma-apply-grothendieck-module}.
+Since $\mathfrak p''$ lies over $\mathfrak p'$ we have
+$V(\mathfrak p'') \cap V(I') = \{\mathfrak m'\}$. Thus
+condition (6) in Situation \ref{situation-bootstrap} implies
+$\text{depth}(M'_{\mathfrak p''}) > s$ which finishes the proof.
+\end{proof}
+
+\noindent
+The following tedious lemma explains the relationships between various
+collections of conditions one might impose.
+
+\begin{lemma}
+\label{lemma-bootstrap-inherited}
+In Situation \ref{situation-bootstrap} we have
+\begin{enumerate}
+\item[(E)] if $T' \subset T$ is a smaller specialization stable subset, then
+$A, I, T', M$ satisfies the assumptions of Situation \ref{situation-bootstrap},
+\item[(F)] if $S \subset A$ is a multiplicative subset, then
+$S^{-1}A, S^{-1}I, T', S^{-1}M$
+satisfies the assumptions of Situation \ref{situation-bootstrap}
+where $T' \subset V(S^{-1}I)$ is the inverse image of $T$,
+\item[(G)] the quadruple $A', I', T', M'$
+satisfies the assumptions of Situation \ref{situation-bootstrap}
+where $A', I', M'$ are the usual $I$-adic completions of $A, I, M$
+and $T' \subset V(I')$ is the inverse image of $T$.
+\end{enumerate}
+Let $I \subset \mathfrak a \subset A$ be an ideal such that
+$V(\mathfrak a) \subset T$. Then
+\begin{enumerate}
+\item[(A)] if $I$ is contained in the Jacobson radical of $A$,
+then all hypotheses of
+Lemmas \ref{lemma-kill-colimit-weak-general} and
+\ref{lemma-kill-colimit-support-general} are satisfied
+for $A, I, \mathfrak a, M$,
+\item[(B)] if $A$ is complete with respect to $I$, then
+all hypotheses except for possibly (5) of
+Lemma \ref{lemma-algebraize-local-cohomology-general}
+are satisfied for $A, I, \mathfrak a, M$,
+\item[(C)] if $A$ is local with maximal ideal $\mathfrak m = \mathfrak a$,
+then all hypotheses of
+Lemmas \ref{lemma-kill-colimit-weak} and \ref{lemma-kill-colimit-support}
+hold for $A, \mathfrak m, I, M$,
+\item[(D)] if $A$ is local with maximal ideal $\mathfrak m = \mathfrak a$
+and $I$-adically complete, then all hypotheses of
+Lemma \ref{lemma-algebraize-local-cohomology}
hold for $A, \mathfrak m, I, M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Proof of (E). We have to prove assumptions (1), (3), (4), (6)
of Situation \ref{situation-bootstrap} hold for
$A, I, T', M$. Shrinking $T$ to $T'$
+weakens assumption (6) and strengthens assumption (4). However, if we have
+$\mathfrak p \subset \mathfrak r \subset \mathfrak q$ with
+$\mathfrak p \not \in V(I)$, $\mathfrak r \in V(I) \setminus T'$,
+$\mathfrak q \in T'$ as in assumption (4) for $A, I, T', M$, then
either we can pick $\mathfrak r \in V(I) \setminus T$ and
condition (4) for $A, I, T, M$ applies, or no such
$\mathfrak r$ exists, in which case we get
+$\text{depth}(M_\mathfrak p) > s$ by Lemma \ref{lemma-helper-bootstrap}.
+This proves (4) holds for $A, I, T', M$ as desired.
+
+\medskip\noindent
+Proof of (F). This is straightforward and we omit the details.
+
+\medskip\noindent
+Proof of (G). We have to prove assumptions (1), (3), (4), (6)
+of Situation \ref{situation-bootstrap} hold for the $I$-adic
+completions $A', I', T', M'$. Please keep in mind that
+$\Spec(A') \to \Spec(A)$ induces an isomorphism $V(I') \to V(I)$.
+
+\medskip\noindent
+Assumption (1): The ring $A'$ has a dualizing complex, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-ubiquity-dualizing}.
+
+\medskip\noindent
+Assumption (3): Since $I' = IA'$ this follows from Local Cohomology,
+Lemma \ref{local-cohomology-lemma-cd-change-rings}.
+
+\medskip\noindent
+Assumption (4): If we have primes
+$\mathfrak p' \subset \mathfrak r' \subset \mathfrak q'$ in $A'$
+with $\mathfrak p' \not \in V(I')$,
+$\mathfrak r' \in V(I') \setminus T'$,
+$\mathfrak q' \in T'$ then their images
+$\mathfrak p \subset \mathfrak r \subset \mathfrak q$ in
+the spectrum of $A$
+satisfy
+$\mathfrak p \not \in V(I)$, $\mathfrak r \in V(I) \setminus T$,
+$\mathfrak q \in T$.
+Then we have
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s
+\quad\text{or}\quad
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > d + s
+$$
+by assumption (4) for $A, I, T, M$. We have
+$\text{depth}(M'_{\mathfrak p'}) \geq \text{depth}(M_\mathfrak p)$ and
+$\text{depth}(M'_{\mathfrak p'}) +
+\dim((A'/\mathfrak p')_{\mathfrak q'}) =
+\text{depth}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q)$
+by Local Cohomology, Lemma \ref{local-cohomology-lemma-change-completion}.
+Thus assumption (4) holds for $A', I', T', M'$.
+
+\medskip\noindent
Assumption (6): Let $\mathfrak q' \in T'$ lie over the
prime $\mathfrak q \in T$. Then $A'_{\mathfrak q'}$
+and $A_\mathfrak q$ have isomorphic $I$-adic completions
+and similarly for $M_\mathfrak q$ and $M'_{\mathfrak q'}$.
+Thus assumption (6) for $A', I', T', M'$ is equivalent
+to assumption (6) for $A, I, T, M$.
+
+\medskip\noindent
+Proof of (A). We have to check conditions (1), (2), (3), (4), and (6)
+of Lemmas \ref{lemma-kill-colimit-weak-general} and
+\ref{lemma-kill-colimit-support-general} for
+$(A, I, \mathfrak a, M)$. Warning: the set $T$ in the statement of
+these lemmas is not the same as the set $T$ above.
+
+\medskip\noindent
+Condition (1): This holds because we have assumed $A$ has a dualizing complex in
+Situation \ref{situation-bootstrap}.
+
+\medskip\noindent
+Condition (2): This is empty.
+
+\medskip\noindent
+Condition (3): Let $\mathfrak p \subset A$ with
+$V(\mathfrak p) \cap V(I) \subset V(\mathfrak a)$.
+Since $I$ is contained in the Jacobson radical of $A$ we see
+that $V(\mathfrak p) \cap V(I) \not = \emptyset$.
+Let $\mathfrak q \in V(\mathfrak p) \cap V(I)$ be a generic point.
+Since $\text{cd}(A_\mathfrak q, I_\mathfrak q) \leq d$
+(Local Cohomology, Lemma \ref{local-cohomology-lemma-cd-local}) and since
+$V(\mathfrak p A_\mathfrak q) \cap V(I_\mathfrak q) =
+\{\mathfrak q A_\mathfrak q\}$ we get
+$\dim((A/\mathfrak p)_\mathfrak q) \leq d$ by Local Cohomology,
+Lemma \ref{local-cohomology-lemma-cd-bound-dim-local} which proves (3).
+
+\medskip\noindent
+Condition (4): Suppose $\mathfrak p \not \in V(I)$ and
+$\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$.
+It suffices to show
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s
+\quad\text{or}\quad
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > d + s
+$$
+If there exists a prime $\mathfrak p \subset \mathfrak r \subset \mathfrak q$
+with $\mathfrak r \in V(I) \setminus T$, then this follows
+immediately from assumption (4) in Situation \ref{situation-bootstrap}.
+If not, then $\text{depth}(M_\mathfrak p) > s$ by
+Lemma \ref{lemma-helper-bootstrap}.
+
+\medskip\noindent
+Condition (6): Let $\mathfrak p \not \in V(I)$ with
+$V(\mathfrak p) \cap V(I) \subset V(\mathfrak a)$.
+Since $I$ is contained in the Jacobson radical of $A$ we see
+that $V(\mathfrak p) \cap V(I) \not = \emptyset$.
+Choose $\mathfrak q \in V(\mathfrak p) \cap V(I) \subset V(\mathfrak a)$.
+It is clear there does not exist a prime
+$\mathfrak p \subset \mathfrak r \subset \mathfrak q$
+with $\mathfrak r \in V(I) \setminus T$.
+By Lemma \ref{lemma-helper-bootstrap} we have
+$\text{depth}(M_\mathfrak p) > s$ which proves (6).
+
+\medskip\noindent
+Proof of (B). We have to check conditions (1), (2), (3), (4) of
+Lemma \ref{lemma-algebraize-local-cohomology-general}. Warning:
+the set $T$ in the statement of
+this lemma is not the same as the set $T$ above.
+
+\medskip\noindent
+Condition (1): This holds because $A$ is complete and has a dualizing complex.
+
+\medskip\noindent
+Condition (2): This is empty.
+
+\medskip\noindent
+Condition (3): This is the same as assumption (3) in
+Situation \ref{situation-bootstrap}.
+
+\medskip\noindent
+Condition (4): This is the same as assumption (4) in
+Lemma \ref{lemma-kill-colimit-weak-general} which we proved in (A).
+
+\medskip\noindent
+Proof of (C). This is true because the assumptions in
+Lemmas \ref{lemma-kill-colimit-weak} and \ref{lemma-kill-colimit-support}
+are the same as the assumptions in
+Lemmas \ref{lemma-kill-colimit-weak-general} and
+\ref{lemma-kill-colimit-support-general} in the local case
+and we proved these hold in (A).
+
+\medskip\noindent
+Proof of (D). This is true because the assumptions in
+Lemma \ref{lemma-algebraize-local-cohomology}
+are the same as the assumptions (1), (2), (3), (4) in
+Lemma \ref{lemma-algebraize-local-cohomology-general}
+and we proved these hold in (B).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-algebraize-local-cohomology-bis}
+In Situation \ref{situation-bootstrap} assume $A$ is local with
+maximal ideal $\mathfrak m$ and $T = \{\mathfrak m\}$. Then
+$H^i_\mathfrak m(M) \to \lim H^i_\mathfrak m(M/I^nM)$
+is an isomorphism for $i \leq s$ and these modules are
+annihilated by a power of $I$.
+\end{lemma}
+
+\begin{proof}
+Let $A', I', \mathfrak m', M'$ be the usual $I$-adic completions
+of $A, I, \mathfrak m, M$. Recall that we have
+$H^i_\mathfrak m(M) \otimes_A A' = H^i_{\mathfrak m'}(M')$
+by flatness of $A \to A'$ and Dualizing Complexes, Lemma
+\ref{dualizing-lemma-torsion-change-rings}.
+Since $H^i_\mathfrak m(M)$ is $\mathfrak m$-power torsion we have
+$H^i_\mathfrak m(M) = H^i_\mathfrak m(M) \otimes_A A'$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-neighbourhood-equivalence}.
+We conclude that $H^i_\mathfrak m(M) = H^i_{\mathfrak m'}(M')$.
+The exact same arguments will show that
+$H^i_\mathfrak m(M/I^nM) = H^i_{\mathfrak m'}(M'/(I')^nM')$
+for all $n$ and $i$.
+
+\medskip\noindent
+Lemmas \ref{lemma-algebraize-local-cohomology},
+\ref{lemma-kill-colimit-weak}, and
+\ref{lemma-kill-colimit-support}
+apply to $A', \mathfrak m', I', M'$ by
+Lemma \ref{lemma-bootstrap-inherited} parts (C) and (D).
+Thus we get an isomorphism
+$$
+H^i_{\mathfrak m'}(M') \longrightarrow H^i(R\Gamma_{\mathfrak m'}(M')^\wedge)
+$$
+for $i \leq s$ where ${}^\wedge$ is derived $I'$-adic completion and these
+modules are annihilated by a power of $I'$.
+By Lemma \ref{lemma-local-cohomology-derived-completion}
+we obtain isomorphisms
+$$
H^i_{\mathfrak m'}(M') \longrightarrow
\lim H^i_{\mathfrak m'}(M'/(I')^nM')
+$$
+for $i \leq s$. Combined with the already established comparison
+with local cohomology over $A$ we conclude the lemma is true.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bootstrap-bis-bis}
+Let $I \subset \mathfrak a$ be ideals of a Noetherian ring $A$.
+Let $M$ be a finite $A$-module. Let $s$ and $d$ be integers.
+If we assume
+\begin{enumerate}
+\item[(a)] $A$ has a dualizing complex,
+\item[(b)] $\text{cd}(A, I) \leq d$,
+\item[(c)] if $\mathfrak p \not \in V(I)$ and
+$\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$ then
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) > s$ or
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > d + s$.
+\end{enumerate}
+Then $A, I, V(\mathfrak a), M, s, d$ are as in
+Situation \ref{situation-bootstrap}.
+\end{lemma}
+
+\begin{proof}
+We have to show that assumptions (1), (3), (4), and (6) of
+Situation \ref{situation-bootstrap} hold.
+It is clear that (a) $\Rightarrow$ (1),
+(b) $\Rightarrow$ (3), and (c) $\Rightarrow$ (4).
To finish the proof, we show in the next paragraph that (6) holds.
+
+\medskip\noindent
+Let $\mathfrak q \in V(\mathfrak a)$.
+Denote $A', I', \mathfrak m', M'$
+the $I$-adic completions of
+$A_\mathfrak q, I_\mathfrak q, \mathfrak qA_\mathfrak q, M_\mathfrak q$.
+Let $\mathfrak p' \subset A'$ be a nonmaximal prime with
+$V(\mathfrak p') \cap V(I') = \{\mathfrak m'\}$.
+Observe that this implies $\dim(A'/\mathfrak p') \leq d$
+by Local Cohomology, Lemma \ref{local-cohomology-lemma-cd-bound-dim-local}.
+Denote $\mathfrak p \subset A$ the image of $\mathfrak p'$.
+We have
+$\text{depth}(M'_{\mathfrak p'}) \geq \text{depth}(M_\mathfrak p)$ and
+$\text{depth}(M'_{\mathfrak p'}) +
+\dim(A'/\mathfrak p') =
+\text{depth}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q)$
+by Local Cohomology, Lemma \ref{local-cohomology-lemma-change-completion}.
+By assumption (c) either we have
+$\text{depth}(M'_{\mathfrak p'}) \geq \text{depth}(M_\mathfrak p) > s$
+and we're done or we have
+$\text{depth}(M'_{\mathfrak p'}) +
+\dim(A'/\mathfrak p') > s + d$ which implies
+$\text{depth}(M'_{\mathfrak p'}) > s$ because of the already shown
+inequality $\dim(A'/\mathfrak p') \leq d$. In both cases we
+obtain what we want.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bootstrap}
+In Situation \ref{situation-bootstrap} the inverse systems
+$\{H^i_T(I^nM)\}_{n \geq 0}$ are pro-zero for $i \leq s$.
+Moreover, there exists an integer $m_0$ such that for all
+$m \geq m_0$ there exists an integer $m'(m) \geq m$ such that for
+$k \geq m'(m)$ the image of
+$H^{s + 1}_T(I^kM) \to H^{s + 1}_T(I^mM)$
+maps injectively to $H^{s + 1}_T(I^{m_0}M)$.
+\end{lemma}
+
+\begin{proof}
+Fix $m$. Let $\mathfrak q \in T$.
+By Lemmas \ref{lemma-bootstrap-inherited} and
+\ref{lemma-algebraize-local-cohomology-bis}
+we see that
+$$
+H^i_\mathfrak q(M_\mathfrak q)
+\longrightarrow
+\lim H^i_\mathfrak q(M_\mathfrak q/I^nM_\mathfrak q)
+$$
+is an isomorphism for $i \leq s$. The inverse systems
+$\{H^i_\mathfrak q(I^nM_\mathfrak q)\}_{n \geq 0}$ and
+$\{H^i_\mathfrak q(M/I^nM)\}_{n \geq 0}$
+satisfy the Mittag-Leffler condition for all $i$, see
+Lemma \ref{lemma-ML-local}. Thus looking at the inverse system of
+long exact sequences
+$$
+0 \to H^0_\mathfrak q(I^nM_\mathfrak q) \to
+H^0_\mathfrak q(M_\mathfrak q) \to
+H^0_\mathfrak q(M_\mathfrak q/I^nM_\mathfrak q) \to
+H^1_\mathfrak q(I^nM_\mathfrak q) \to
+H^1_\mathfrak q(M_\mathfrak q) \to \ldots
+$$
+we conclude (some details omitted) that there exists an integer
+$m'(m, \mathfrak q) \geq m$ such that for all $k \geq m'(m, \mathfrak q)$
+the map
+$H^i_\mathfrak q(I^kM_\mathfrak q) \to H^i_\mathfrak q(I^mM_\mathfrak q)$
+is zero for $i \leq s$ and the image of
+$H^{s + 1}_\mathfrak q(I^kM_\mathfrak q) \to
+H^{s + 1}_\mathfrak q(I^mM_\mathfrak q)$
+is independent of $k \geq m'(m, \mathfrak q)$ and
+maps injectively into $H^{s + 1}_\mathfrak q(M_\mathfrak q)$.
+
+\medskip\noindent
+Suppose we can show that $m'(m, \mathfrak q)$ can be chosen
+independently of $\mathfrak q \in T$.
+Then the lemma follows immediately from
+Local Cohomology, Lemmas \ref{local-cohomology-lemma-zero} and
+\ref{local-cohomology-lemma-essential-image}.
+
+\medskip\noindent
+Let $\omega_A^\bullet$ be a dualizing complex. Let
+$\delta : \Spec(A) \to \mathbf{Z}$ be the corresponding
+dimension function. Recall that $\delta$ attains only a
+finite number of values, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-universally-catenary}.
+Claim: for each $d \in \mathbf{Z}$ the integer
+$m'(m, \mathfrak q)$ can be chosen independently
+of $\mathfrak q \in T$ with $\delta(\mathfrak q) = d$.
+Clearly the claim implies the lemma by what we said above.
+
+\medskip\noindent
+Pick $\mathfrak q \in T$ with $\delta(\mathfrak q) = d$.
Consider the Ext modules
+$$
+E(n, j) = \text{Ext}^j_A(I^nM, \omega_A^\bullet)
+$$
+A key feature we will use is that these are finite $A$-modules.
+Recall that $(\omega_A^\bullet)_\mathfrak q[-d]$ is a normalized
+dualizing complex for $A_\mathfrak q$ by definition of the
+dimension function associated to a dualizing complex, see
+Dualizing Complexes, Section \ref{dualizing-section-dimension-function}.
+The local duality theorem (Dualizing Complexes, Lemma
+\ref{dualizing-lemma-special-case-local-duality}) tells us that
+the $\mathfrak qA_\mathfrak q$-adic completion of
+$E(n, -d - i)_\mathfrak q$ is Matlis dual to
+$H^i_\mathfrak q(I^nM_\mathfrak q)$. Thus the choice of
+$m'(m, \mathfrak q)$ for $i \leq s$ in the first paragraph tells us that
+for $k \geq m'(m, \mathfrak q)$ and $j \geq -d - s$ the map
+$$
+E(m, j)_\mathfrak q \to E(k, j)_\mathfrak q
+$$
+is zero. Since these modules are finite and nonzero only
+for a finite number of possible $j$ (small detail omitted),
+we can find an open neighbourhood $W \subset \Spec(A)$ of $\mathfrak q$
+such that
+$$
+E(m, j)_{\mathfrak q'} \to E(m'(m, \mathfrak q), j)_{\mathfrak q'}
+$$
+is zero for $j \geq -d - s$ for all $\mathfrak q' \in W$.
+Then of course the maps $E(m, j)_{\mathfrak q'} \to E(k, j)_{\mathfrak q'}$
+for $k \geq m'(m, \mathfrak q)$ are zero as well.
+
+\medskip\noindent
+For $i = s + 1$ corresponding to $j = - d - s - 1$ we obtain
+from local duality and the results of the first paragraph that
+$$
+K_{k, \mathfrak q} =
+\Ker(E(m, -d - s - 1)_\mathfrak q \to E(k, -d - s - 1)_\mathfrak q)
+$$
+is independent of $k \geq m'(m, \mathfrak q)$ and that
+$$
+E(0, -d - s - 1)_\mathfrak q \to
+E(m, -d - s - 1)_\mathfrak q/K_{m'(m, \mathfrak q), \mathfrak q}
+$$
+is surjective. For $k \geq m'(m, \mathfrak q)$ set
+$$
+K_k = \Ker(E(m, -d - s - 1) \to E(k, -d - s - 1))
+$$
Since the $K_k$ form an increasing sequence of submodules of the finite
+module $E(m, -d - s - 1)$ we see that, at the cost of increasing
+$m'(m, \mathfrak q)$ a little bit, we may assume
+$K_{m'(m, \mathfrak q)} = K_k$ for $k \geq m'(m, \mathfrak q)$.
+After shrinking $W$ further if necessary, we may also assume that
+$$
+E(0, -d - s - 1)_{\mathfrak q'} \to
+E(m, -d - s - 1)_{\mathfrak q'}/K_{m'(m, \mathfrak q), \mathfrak q'}
+$$
+is surjective for all $\mathfrak q' \in W$ (as before use that
+these modules are finite
+and that the map is surjective after localization at $\mathfrak q$).
+
+\medskip\noindent
Any subset of the Noetherian topological space $\Spec(A)$,
endowed with the induced topology, is Noetherian and hence
quasi-compact; in particular this holds for
$T_d = \{\mathfrak q \in T \text{ with }\delta(\mathfrak q) = d\}$.
+Above we have seen that for every $\mathfrak q \in T_d$
+there is an open neighbourhood $W$ where
+$m'(m, \mathfrak q)$ works for all $\mathfrak q' \in T_d \cap W$.
+We conclude that we can find an integer $m'(m, d)$ such that for all
+$\mathfrak q \in T_d$ we have
+$$
+E(m, j)_\mathfrak q \to E(m'(m, d), j)_\mathfrak q
+$$
+is zero for $j \geq -d - s$ and with
+$K_{m'(m, d)} = \Ker(E(m, -d - s - 1) \to E(m'(m, d), -d - s - 1))$
+we have
+$$
+K_{m'(m, d), \mathfrak q} =
+\Ker(E(m, -d - s - 1)_{\mathfrak q} \to E(k, -d - s - 1)_{\mathfrak q})
+$$
+for all $k \geq m'(m, d)$ and the map
+$$
+E(0, -d - s - 1)_\mathfrak q \to
+E(m, -d - s - 1)_\mathfrak q/K_{m'(m, d), \mathfrak q}
+$$
+is surjective. Using the local duality theorem again (in the opposite
+direction) we conclude that the claim is correct. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-final-bootstrap}
+In Situation \ref{situation-bootstrap} there exists an integer $m_0 \geq 0$
+such that
+\begin{enumerate}
+\item $\{H^i_T(M/I^nM)\}_{n \geq 0}$
satisfies the Mittag-Leffler condition for $i < s$,
+\item $\{H^i_T(I^{m_0}M/I^nM)\}_{n \geq m_0}$
+satisfies the Mittag-Leffler condition for $i \leq s$,
+\item $H^i_T(M) \to \lim H^i_T(M/I^nM)$
+is an isomorphism for $i < s$,
\item $H^i_T(I^{m_0}M) \to \lim H^i_T(I^{m_0}M/I^nM)$
is an isomorphism for $i \leq s$,
+\item $H^s_T(M) \to \lim H^s_T(M/I^nM)$ is
+injective with cokernel killed by $I^{m_0}$, and
+\item $R^1\lim H^s_T(M/I^nM)$ is killed by $I^{m_0}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Consider the long exact sequences
+$$
+0 \to H^0_T(I^nM) \to H^0_T(M) \to
+H^0_T(M/I^nM) \to H^1_T(I^nM) \to
+H^1_T(M) \to \ldots
+$$
Parts (1) and (3) follow easily from this and Lemma \ref{lemma-bootstrap}.
+
+\medskip\noindent
+Let $m_0$ and $m'(-)$ be as in Lemma \ref{lemma-bootstrap}.
+For $m \geq m_0$ consider the long exact sequence
+$$
+H^s_T(I^mM) \to H^s_T(I^{m_0}M) \to
+H^s_T(I^{m_0}M/I^mM) \to H^{s + 1}_T(I^mM) \to
H^{s + 1}_T(I^{m_0}M)
+$$
+Then for $k \geq m'(m)$ the image of
+$H^{s + 1}_T(I^kM) \to H^{s + 1}_T(I^mM)$
+maps injectively to $H^{s + 1}_T(I^{m_0}M)$.
+Hence the image of
+$H^s_T(I^{m_0}M/I^kM) \to H^s_T(I^{m_0}M/I^mM)$
+maps to zero in $H^{s + 1}_T(I^mM)$ for all $k \geq m'(m)$.
+We conclude that (2) and (4) hold.
+
+\medskip\noindent
+Consider the short exact sequences
+$0 \to I^{m_0}M \to M \to M/I^{m_0} M \to 0$ and
+$0 \to I^{m_0}M/I^nM \to M/I^nM \to M/I^{m_0} M \to 0$.
+We obtain a diagram
+$$
+\xymatrix{
+H^{s - 1}_T(M/I^{m_0}M) \ar[r] &
+\lim H^s_T(I^{m_0}M/I^nM) \ar[r] &
+\lim H^s_T(M/I^nM) \ar[r] &
+H^s_T(M/I^{m_0}M) \\
+H^{s - 1}_T(M/I^{m_0}M) \ar[r] \ar@{=}[u] &
+H^s_T(I^{m_0}M) \ar[r] \ar[u]_{\cong} &
+H^s_T(M) \ar[r] \ar[u] &
+H^s_T(M/I^{m_0}M) \ar@{=}[u]
+}
+$$
+whose lower row is exact. The top row is also exact
+(at the middle two spots) by
+Homology, Lemma \ref{homology-lemma-apply-Mittag-Leffler}.
+Part (5) follows.
+
+\medskip\noindent
+Write $B_n = H^s_T(M/I^nM)$. Let $A_n \subset B_n$
+be the image of $H^s_T(I^{m_0}M/I^nM) \to H^s_T(M/I^nM)$.
+Then $(A_n)$ satisfies the Mittag-Leffler condition by (2) and
+Homology, Lemma \ref{homology-lemma-Mittag-Leffler}.
+Also $C_n = B_n/A_n$ is killed by $I^{m_0}$. Thus
+$R^1\lim B_n \cong R^1\lim C_n$ is killed by $I^{m_0}$ and we get (6).
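\medskip\noindent
In more detail, the isomorphism $R^1\lim B_n \cong R^1\lim C_n$ comes from
the six-term exact sequence associated to the short exact sequences
$0 \to A_n \to B_n \to C_n \to 0$ of inverse systems
$$
0 \to \lim A_n \to \lim B_n \to \lim C_n \to
R^1\lim A_n \to R^1\lim B_n \to R^1\lim C_n \to 0
$$
where $R^1\lim A_n = 0$ because $(A_n)$ satisfies the
Mittag-Leffler condition.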
+\end{proof}
+
+\begin{theorem}
+\label{theorem-final-bootstrap}
+In Situation \ref{situation-bootstrap} the inverse system
+$\{H^i_T(M/I^nM)\}_{n \geq 0}$ satisfies the
+Mittag-Leffler condition for $i \leq s$, the map
+$$
+H^i_T(M) \longrightarrow \lim H^i_T(M/I^nM)
+$$
+is an isomorphism for $i \leq s$, and $H^i_T(M)$
+is annihilated by a power of $I$ for $i \leq s$.
+\end{theorem}
+
+\begin{proof}
+To prove the final assertion of the theorem we apply Local Cohomology,
+Proposition \ref{local-cohomology-proposition-annihilator} with
+$T \subset V(I) \subset \Spec(A)$. Namely, suppose
+that $\mathfrak p \not \in V(I)$, $\mathfrak q \in T$
+with $\mathfrak p \subset \mathfrak q$.
+Then either there exists a prime
+$\mathfrak p \subset \mathfrak r \subset \mathfrak q$
+with $\mathfrak r \in V(I) \setminus T$ and we get
+$$
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) \geq s
+\quad\text{or}\quad
+\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > d + s
+$$
+by (4) in Situation \ref{situation-bootstrap} or there does
+not exist an $\mathfrak r$ and we get
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) > s$ by
+Lemma \ref{lemma-helper-bootstrap}.
+In all three cases we see that
+$\text{depth}_{A_\mathfrak p}(M_\mathfrak p) +
+\dim((A/\mathfrak p)_\mathfrak q) > s$.
+Thus Local Cohomology, Proposition
+\ref{local-cohomology-proposition-annihilator} (2)
+holds and we find that a power of $I$ annihilates
+$H^i_T(M)$ for $i \leq s$.
+
+\medskip\noindent
+We already know the other two assertions of the theorem hold
+for $i < s$ by Lemma \ref{lemma-final-bootstrap} and for the
+module $I^{m_0}M$ for $i = s$ and $m_0$ large enough.
To finish off the proof we will show that these
assertions in fact hold for $M$ when $i = s$.
+
+\medskip\noindent
+Let $M' = H^0_I(M)$ and $M'' = M/M'$ so that we have a short exact
+sequence
+$$
+0 \to M' \to M \to M'' \to 0
+$$
and $M''$ satisfies $H^0_I(M'') = 0$ by
Dualizing Complexes, Lemma \ref{dualizing-lemma-divide-by-torsion}.
+By Artin-Rees (Algebra, Lemma \ref{algebra-lemma-Artin-Rees})
+we get short exact sequences
+$$
+0 \to M' \to M/I^n M \to M''/I^n M'' \to 0
+$$
+for $n$ large enough. Consider the long exact sequences
+$$
+H^s_T(M') \to
+H^s_T(M/I^nM) \to
+H^s_T(M''/I^nM'') \to
+H^{s + 1}_T(M')
+$$
+Now it is a simple matter to see that if we have Mittag-Leffler
+for the inverse system $\{H^s_T(M''/I^nM'')\}_{n \geq 0}$
+then we have Mittag-Leffler for the inverse system
+$\{H^s_T(M/I^nM)\}_{n \geq 0}$.
+(Note that the ML condition for an inverse system of groups $G_n$
+only depends on the values of the inverse system for sufficiently large $n$.)
+Moreover the sequence
+$$
+H^s_T(M') \to
+\lim H^s_T(M/I^nM) \to
+\lim H^s_T(M''/I^nM'') \to
+H^{s + 1}_T(M')
+$$
+is exact because we have ML in the required spots, see
+Homology, Lemma \ref{homology-lemma-apply-Mittag-Leffler}.
+Hence, if $H^s_T(M'') \to \lim H^s_T(M''/I^nM'')$
+is an isomorphism, then
+$H^s_T(M) \to \lim H^s_T(M/I^nM)$
+is an isomorphism too by the five lemma
+(Homology, Lemma \ref{homology-lemma-five-lemma}).
+This reduces us to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume that $H^0_I(M) = 0$. Choose generators
+$f_1, \ldots, f_r$ of $I^{m_0}$ where $m_0$ is the
+integer found for $M$ in Lemma \ref{lemma-final-bootstrap}.
+Then we consider the exact sequence
+$$
+0 \to M \xrightarrow{f_1, \ldots, f_r}
+(I^{m_0}M)^{\oplus r} \to Q \to 0
+$$
+defining $Q$. Some observations: the first map is injective
+exactly because $H^0_I(M) = 0$. The cokernel $Q$ of this injection
+is a finite $A$-module such that for every $1 \leq j \leq r$
+we have $Q_{f_j} \cong (M_{f_j})^{\oplus r - 1}$.
+In particular, for a prime $\mathfrak p \subset A$
+with $\mathfrak p \not \in V(I)$ we have
+$Q_\mathfrak p \cong (M_\mathfrak p)^{\oplus r - 1}$.
+Similarly, given $\mathfrak q \in T$ and
+$\mathfrak p' \subset A' = (A_\mathfrak q)^\wedge$
+not contained in $V(IA')$, we have
+$Q'_{\mathfrak p'} \cong (M'_{\mathfrak p'})^{\oplus r - 1}$
+where $Q' = (Q_\mathfrak q)^\wedge$ and $M' = (M_\mathfrak q)^\wedge$.
+Thus the conditions in Situation \ref{situation-bootstrap}
+hold for $A, I, T, Q$. (Observe that $Q$ may have
+nonvanishing $H^0_I(Q)$ but this won't matter.)
+
+\medskip\noindent
+For any $n \geq 0$ we set $F^nM = M \cap I^n(I^{m_0}M)^{\oplus r}$
+so that we get short exact sequences
+$$
+0 \to F^nM \to I^n(I^{m_0}M)^{\oplus r} \to I^nQ \to 0
+$$
+By Artin-Rees (Algebra, Lemma \ref{algebra-lemma-Artin-Rees})
+there exists a $c \geq 0$ such that
+$I^n M \subset F^nM \subset I^{n - c}M$ for all $n \geq c$.
+Let $m_0$ be the integer and let $m'(m)$
+be the function defined for $m \geq m_0$
+found in Lemma \ref{lemma-bootstrap}
+applied to $M$. Note that the integer $m_0$
+is the same as our integer $m_0$ chosen above (you don't need to
+check this: you can just take the maximum of the two integers if
+you like). Finally, by Lemma \ref{lemma-bootstrap}
+applied to $Q$ for every integer $m$ there exists an integer
+$m''(m) \geq m$ such that $H^s_T(I^kQ) \to H^s_T(I^mQ)$
+is zero for all $k \geq m''(m)$.
+
+\medskip\noindent
+Fix $m \geq m_0$. Choose $k \geq m'(m''(m + c))$.
+Choose $\xi \in H^{s + 1}_T(I^kM)$
+which maps to zero in $H^{s + 1}_T(M)$.
+We want to show that $\xi$ maps to zero in $H^{s + 1}_T(I^mM)$.
+Namely, this will show that $\{H^s_T(M/I^nM)\}_{n \geq 0}$
+is Mittag-Leffler exactly as in the proof of Lemma \ref{lemma-final-bootstrap}.
Here is a picture to help visualize the argument:
+$$
+\xymatrix{
+&
+H^{s + 1}_T(I^kM) \ar[r] \ar[d] &
+H^{s + 1}_T(I^k(I^{m_0}M)^{\oplus r}) \ar[d] &
+\\
+H^s_T(I^{m''(m + c)}Q) \ar[r]_-\delta \ar[d] &
+H^{s + 1}_T(F^{m''(m + c)}M) \ar[r] \ar[d] &
+H^{s + 1}_T(I^{m''(m + c)}(I^{m_0}M)^{\oplus r}) \\
+H^s_T(I^{m + c}Q) \ar[r] &
+H^{s + 1}_T(F^{m + c}M) \ar[d] &
+\\
+&
+H^{s + 1}_T(I^mM)
+}
+$$
+The image of $\xi$ in $H^{s + 1}_T(I^k(I^{m_0}M)^{\oplus r})$
+maps to zero in $H^{s + 1}_T((I^{m_0}M)^{\oplus r})$
+and hence maps to zero in
+$H^{s + 1}_T(I^{m''(m + c)}(I^{m_0}M)^{\oplus r})$
+by choice of $m'(-)$.
+Thus the image $\xi' \in H^{s + 1}_T(F^{m''(m + c)}M)$
+maps to zero in $H^{s + 1}_T(I^{m''(m + c)}(I^{m_0}M)^{\oplus r})$
+and hence $\xi' = \delta(\eta)$ for some
+$\eta \in H^s_T(I^{m''(m + c)}Q)$.
+By our choice of $m''(-)$ we find that $\eta$ maps to
+zero in $H^s_T(I^{m + c}Q)$.
+This in turn means that $\xi'$ maps to zero in
+$H^{s + 1}_T(F^{m + c}M)$.
+Since $F^{m + c}M \subset I^mM$ we conclude.
+
+\medskip\noindent
+Finally, we prove the statement on limits. Consider the short
+exact sequences
+$$
+0 \to M/F^nM \to (I^{m_0}M)^{\oplus r}/I^n (I^{m_0}M)^{\oplus r}
+\to Q/I^nQ \to 0
+$$
+We have $\lim H^s_T(M/I^nM) = \lim H^s_T(M/F^nM)$
+as these inverse systems are pro-isomorphic. We obtain a commutative diagram
+$$
+\xymatrix{
+H^{s - 1}_T(Q) \ar[r] \ar[d] &
+\lim H^{s - 1}_T(Q/I^nQ) \ar[d] \\
+H^s_T(M) \ar[r] \ar[d] &
+\lim H^s_T(M/I^nM) \ar[d] \\
+H^s_T((I^{m_0}M)^{\oplus r}) \ar[r] \ar[d] &
+\lim H^s_T((I^{m_0}M)^{\oplus r}/I^n(I^{m_0}M)^{\oplus r}) \ar[d] \\
+H^s_T(Q) \ar[r] &
+\lim H^s_T(Q/I^nQ)
+}
+$$
+The right column is exact because we have ML in the required spots, see
+Homology, Lemma \ref{homology-lemma-apply-Mittag-Leffler}.
+The lowest horizontal arrow is injective (!) by
+part (5) of Lemma \ref{lemma-final-bootstrap}.
+The horizontal arrow above it is bijective by
+part (4) of Lemma \ref{lemma-final-bootstrap}.
+The arrows in cohomological degrees $\leq s - 1$ are isomorphisms.
+Thus we conclude $H^s_T(M) \to \lim H^s_T(M/I^nM)$
+is an isomorphism by the five lemma
+(Homology, Lemma \ref{homology-lemma-five-lemma}).
+This finishes the proof of the theorem.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-combine-two}
+Let $I \subset \mathfrak a \subset A$ be ideals of a Noetherian ring $A$
+and let $M$ be a finite $A$-module. Let $s$ and $d$ be integers.
+Suppose that
+\begin{enumerate}
+\item $A, I, V(\mathfrak a), M$ satisfy the assumptions of
+Situation \ref{situation-bootstrap} for $s$ and $d$, and
+\item $A, I, \mathfrak a, M$ satisfy the conditions of
+Lemma \ref{lemma-algebraize-local-cohomology-general}
+for $s + 1$ and $d$ with $J = \mathfrak a$.
+\end{enumerate}
+Then there exists an ideal
+$J_0 \subset \mathfrak a$ with $V(J_0) \cap V(I) = V(\mathfrak a)$
+such that for any $J \subset J_0$ with $V(J) \cap V(I) = V(\mathfrak a)$
+the map
+$$
+H^{s + 1}_J(M) \longrightarrow \lim H^{s + 1}_\mathfrak a(M/I^nM)
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Namely, we have the existence of $J_0$
+and the isomorphism
+$H^{s + 1}_J(M) = H^{s + 1}(R\Gamma_\mathfrak a(M)^\wedge)$
+by Lemma \ref{lemma-algebraize-local-cohomology-general},
+we have a short exact sequence
+$$
+0 \to R^1\lim H^s_\mathfrak a(M/I^nM) \to
+H^{s + 1}(R\Gamma_\mathfrak a(M)^\wedge) \to
+\lim H^{s + 1}_\mathfrak a(M/I^nM) \to 0
+$$
+by Dualizing Complexes, Lemma \ref{dualizing-lemma-completion-local},
+and the module $R^1\lim H^s_\mathfrak a(M/I^nM)$ is zero because
+$\{H^s_\mathfrak a(M/I^nM)\}_{n \geq 0}$ has Mittag-Leffler
+by Theorem \ref{theorem-final-bootstrap}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Algebraization of formal sections, I}
+\label{section-algebraization-sections}
+
+\noindent
+In this section we study the problem of algebraization of
+formal sections in the local case.
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+Let $I \subset A$ be an ideal. Let
+$$
+X = \Spec(A) \supset U = \Spec(A) \setminus \{\mathfrak m\}
+$$
+and denote $Y = V(I)$ the closed subscheme corresponding to $I$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_U$-module.
+In this section we consider the limits
+$$
+\lim_n H^i(U, \mathcal{F}/I^n\mathcal{F})
+$$
+This is closely related to the cohomology of the pullback
+of $\mathcal{F}$ to the formal completion of $U$ along $Y$;
+however, since we have not yet introduced formal schemes,
+we cannot use this terminology here.
+
+\begin{lemma}
+\label{lemma-compare-with-derived-completion}
+Let $U$ be the punctured spectrum of a Noetherian local ring $A$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_U$-module.
+Let $I \subset A$ be an ideal. Then
+$$
+H^i(R\Gamma(U, \mathcal{F})^\wedge) =
+\lim H^i(U, \mathcal{F}/I^n\mathcal{F})
+$$
+for all $i$ where $R\Gamma(U, \mathcal{F})^\wedge$ denotes
+the derived $I$-adic completion.
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-formal-functions-general} and
+\ref{lemma-derived-completion-pseudo-coherent} we have
+$$
+R\Gamma(U, \mathcal{F})^\wedge =
+R\Gamma(U, \mathcal{F}^\wedge) =
+R\Gamma(U, R\lim \mathcal{F}/I^n\mathcal{F})
+$$
+Thus we obtain short exact sequences
+$$
+0 \to R^1\lim H^{i - 1}(U, \mathcal{F}/I^n\mathcal{F}) \to
+H^i(R\Gamma(U, \mathcal{F})^\wedge) \to
+\lim H^i(U, \mathcal{F}/I^n\mathcal{F}) \to 0
+$$
+by Cohomology, Lemma \ref{cohomology-lemma-RGamma-commutes-with-Rlim}.
+The $R^1\lim$ terms vanish because the inverse systems of groups
+$H^i(U, \mathcal{F}/I^n\mathcal{F})$ satisfy the Mittag-Leffler condition
+by Lemma \ref{lemma-ML-local}.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-algebraization-formal-sections}
+\begin{reference}
+The method of proof follows roughly the method of
+proof of \cite[Theorem 1]{Faltings-algebraisation}
+and \cite[Satz 2]{Faltings-uber}.
+The result is almost the same as
+\cite[Theorem 1.1]{MRaynaud-paper} (affine complement case) and
+\cite[Theorem 3.9]{MRaynaud-book} (complement is union of few affines).
+\end{reference}
+Let $(A, \mathfrak m)$ be a Noetherian local ring which has a
+dualizing complex and is complete with respect to an ideal $I$.
+Set $X = \Spec(A)$, $Y = V(I)$, and $U = X \setminus \{\mathfrak m\}$.
+Let $\mathcal{F}$ be a coherent sheaf on $U$.
+Assume
+\begin{enumerate}
+\item $\text{cd}(A, I) \leq d$, i.e.,
+$H^i(X \setminus Y, \mathcal{G}) = 0$ for $i \geq d$ and
+quasi-coherent $\mathcal{G}$ on $X$,
+\item for any $x \in X \setminus Y$ whose closure $\overline{\{x\}}$
+in $X$ meets $U \cap Y$ we have
+$$
+\text{depth}_{\mathcal{O}_{X, x}}(\mathcal{F}_x) \geq s
+\quad\text{or}\quad
+\text{depth}_{\mathcal{O}_{X, x}}(\mathcal{F}_x)
++ \dim(\overline{\{x\}}) > d + s
+$$
+\end{enumerate}
+Then there exists an open $V_0 \subset U$ containing $U \cap Y$
+such that for any open $V \subset V_0$ containing $U \cap Y$
+the map
+$$
+H^i(V, \mathcal{F}) \to \lim H^i(U, \mathcal{F}/I^n\mathcal{F})
+$$
+is an isomorphism for $i < s$. If in addition
+$
+\text{depth}_{\mathcal{O}_{X, x}}(\mathcal{F}_x) +
+\dim(\overline{\{x\}}) > s
+$
+for all $x \in U \cap Y$, then these cohomology groups are finite $A$-modules.
+\end{theorem}
+
+\begin{proof}
+Choose a finite $A$-module $M$ such that $\mathcal{F}$ is the
+restriction to $U$ of the
+coherent $\mathcal{O}_X$-module associated to $M$, see Local Cohomology,
+Lemma \ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+Then the assumptions of
+Lemma \ref{lemma-algebraize-local-cohomology}
+are satisfied.
+Pick $J_0$ as in that lemma and set $V_0 = X \setminus V(J_0)$.
+Then opens $V \subset V_0$ containing $U \cap Y$
+correspond $1$-to-$1$ with ideals $J \subset J_0$ with
+$V(J) \cap V(I) = \{\mathfrak m\}$.
+Moreover, for such a choice we have a distinguished triangle
+$$
+R\Gamma_J(M) \to M \to R\Gamma(V, \mathcal{F}) \to
+R\Gamma_J(M)[1]
+$$
+We similarly have a distinguished triangle
+$$
+R\Gamma_\mathfrak m(M)^\wedge \to
+M \to
+R\Gamma(U, \mathcal{F})^\wedge \to
+R\Gamma_\mathfrak m(M)^\wedge[1]
+$$
+involving derived $I$-adic completions.
+The cohomology groups of $R\Gamma(U, \mathcal{F})^\wedge$ are
+equal to the limits in the statement of the theorem by
+Lemma \ref{lemma-compare-with-derived-completion}.
+The canonical map between these triangles
+and some easy arguments show that our
+theorem follows from the main Lemma \ref{lemma-algebraize-local-cohomology}
+(note that we have $i < s$ here whereas we have
+$i \leq s$ in the lemma; this is because of the shift).
+The finiteness of the cohomology groups
+(under the additional assumption) follows from
+Lemma \ref{lemma-kill-colimit}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-application-theorem}
+Let $(A, \mathfrak m)$ be a Noetherian local ring which has a
+dualizing complex and is complete with respect to an ideal $I$.
+Set $X = \Spec(A)$, $Y = V(I)$, and $U = X \setminus \{\mathfrak m\}$.
+Let $\mathcal{F}$ be a coherent sheaf on $U$.
+Assume for any associated point $x \in U$ of $\mathcal{F}$
+we have $\dim(\overline{\{x\}}) > \text{cd}(A, I) + 1$
+where $\overline{\{x\}}$ is the closure in $X$.
+Then the map
+$$
+\colim H^0(V, \mathcal{F})
+\longrightarrow
+\lim H^0(U, \mathcal{F}/I^n\mathcal{F})
+$$
+is an isomorphism of finite $A$-modules
+where the colimit is over opens $V \subset U$
+containing $U \cap Y$.
+\end{lemma}
+
+\begin{proof}
+Apply Theorem \ref{theorem-algebraization-formal-sections} with $s = 1$
+(we get finiteness too).
+\end{proof}
+
+
+
+
+\section{Algebraization of formal sections, II}
+\label{section-algebraization-sections-coherent}
+
+\noindent
It is a bit difficult to succinctly state all possible
+consequences of the results in
+Sections \ref{section-algebraization-sections-general} and
+\ref{section-bootstrap}
+for cohomology of coherent sheaves on quasi-affine schemes
+and their completion with respect to an ideal.
+This section gives a nonexhaustive list of
+applications to $H^0$. The next section contains
+applications to higher cohomology.
+
+\medskip\noindent
The following lemma will be superseded by
+Proposition \ref{proposition-application-H0}.
+
+\begin{lemma}
+\label{lemma-application-H0-pre}
+Let $I \subset \mathfrak a$ be ideals of a Noetherian ring $A$.
+Let $\mathcal{F}$ be a coherent module on
+$U = \Spec(A) \setminus V(\mathfrak a)$.
+Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item if $x \in \text{Ass}(\mathcal{F})$, $x \not \in V(I)$,
+$\overline{\{x\}} \cap V(I) \not \subset V(\mathfrak a)$
+and $z \in \overline{\{x\}} \cap V(\mathfrak a)$, then
+$\dim(\mathcal{O}_{\overline{\{x\}}, z}) > \text{cd}(A, I) + 1$,
+\item one of the following holds:
+\begin{enumerate}
+\item the restriction of $\mathcal{F}$ to $U \setminus V(I)$ is $(S_1)$
+\item the dimension of $V(\mathfrak a)$ is at most $2$\footnote{In
+the sense that the difference of the maximal and minimal values
+on $V(\mathfrak a)$ of a dimension function on $\Spec(A)$ is at most $2$.}.
+\end{enumerate}
+\end{enumerate}
+Then we obtain an isomorphism
+$$
+\colim H^0(V, \mathcal{F})
+\longrightarrow
+\lim H^0(U, \mathcal{F}/I^n\mathcal{F})
+$$
+where the colimit is over opens $V \subset U$ containing $U \cap V(I)$.
+\end{lemma}
+
+\begin{proof}
+Choose a finite $A$-module $M$ such that $\mathcal{F}$ is the restriction
+to $U$ of the coherent module associated to $M$, see Local Cohomology,
+Lemma \ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+Set $d = \text{cd}(A, I)$.
+Let $\mathfrak p$ be a prime of $A$ not contained in $V(I)$
+and let $\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$.
+Then either $\mathfrak p$ is not an associated prime of $M$
+and hence $\text{depth}(M_\mathfrak p) \geq 1$
+or we have $\dim((A/\mathfrak p)_\mathfrak q) > d + 1$ by (2).
+Thus the hypotheses of
+Lemma \ref{lemma-algebraize-local-cohomology-general}
+are satisfied for $s = 1$ and $d$; here we use condition (3).
+Thus we find there exists an ideal
+$J_0 \subset \mathfrak a$ with $V(J_0) \cap V(I) = V(\mathfrak a)$
+such that for any $J \subset J_0$ with $V(J) \cap V(I) = V(\mathfrak a)$
+the maps
+$$
+H^i_J(M) \longrightarrow H^i(R\Gamma_\mathfrak a(M)^\wedge)
+$$
+are isomorphisms for $i = 0, 1$. Consider the morphisms of
+exact triangles
+$$
+\xymatrix{
+R\Gamma_J(M) \ar[d] \ar[r] &
+M \ar[r] \ar[d] &
+R\Gamma(V, \mathcal{F}) \ar[d] \ar[r] &
+R\Gamma_J(M)[1] \ar[d] \\
+R\Gamma_J(M)^\wedge \ar[r] &
+M \ar[r] &
+R\Gamma(V, \mathcal{F})^\wedge \ar[r] &
+R\Gamma_J(M)^\wedge[1] \\
+R\Gamma_\mathfrak a(M)^\wedge \ar[r] \ar[u] &
+M \ar[r] \ar[u] &
+R\Gamma(U, \mathcal{F})^\wedge \ar[r] \ar[u] &
+R\Gamma_\mathfrak a(M)^\wedge[1] \ar[u]
+}
+$$
+where $V = \Spec(A) \setminus V(J)$. Recall that
+$R\Gamma_\mathfrak a(M)^\wedge \to R\Gamma_J(M)^\wedge$
+is an isomorphism (because $\mathfrak a$, $\mathfrak a + I$, and $J + I$
+cut out the same closed subscheme, for example
+see proof of Lemma \ref{lemma-algebraize-local-cohomology-general}).
+Hence
+$R\Gamma(U, \mathcal{F})^\wedge = R\Gamma(V, \mathcal{F})^\wedge$.
+This produces a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+H^0_J(M) \ar[r] \ar[d] &
M \ar[r] \ar[d] &
+\Gamma(V, \mathcal{F}) \ar[d] \ar[r] &
+H^1_J(M) \ar[d] \ar[r] &
+0 \\
+0 \ar[r] &
+H^0(R\Gamma_J(M)^\wedge) \ar[r] &
+M \ar[r] &
+H^0(R\Gamma(V, \mathcal{F})^\wedge) \ar[r] &
+H^1(R\Gamma_J(M)^\wedge) \ar[r] &
+0 \\
+0 \ar[r] &
+H^0(R\Gamma_\mathfrak a(M)^\wedge) \ar[r] \ar[u] &
+M \ar[r] \ar[u] &
+H^0(R\Gamma(U, \mathcal{F})^\wedge) \ar[r] \ar[u] &
+H^1(R\Gamma_\mathfrak a(M)^\wedge) \ar[r] \ar[u] &
+0
+}
+$$
+with exact rows and isomorphisms for the lower vertical arrows. Hence
+we obtain an isomorphism
+$\Gamma(V, \mathcal{F}) \to H^0(R\Gamma(U, \mathcal{F})^\wedge)$.
+By Lemmas \ref{lemma-formal-functions-general}
+and \ref{lemma-derived-completion-pseudo-coherent} we have
+$$
+R\Gamma(U, \mathcal{F})^\wedge =
+R\Gamma(U, \mathcal{F}^\wedge) =
+R\Gamma(U, R\lim \mathcal{F}/I^n\mathcal{F})
+$$
+and we find $H^0(R\Gamma(U, \mathcal{F})^\wedge) =
+\lim H^0(U, \mathcal{F}/I^n\mathcal{F})$ by
+Cohomology, Lemma \ref{cohomology-lemma-RGamma-commutes-with-Rlim}.
+\end{proof}
+
+\noindent
+Now we bootstrap the preceding lemma to get rid of condition (3).
+
+\begin{proposition}
+\label{proposition-application-H0}
+Let $I \subset \mathfrak a$ be ideals of a Noetherian ring $A$.
+Let $\mathcal{F}$ be a coherent module on
+$U = \Spec(A) \setminus V(\mathfrak a)$.
+Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item if $x \in \text{Ass}(\mathcal{F})$, $x \not \in V(I)$,
+$\overline{\{x\}} \cap V(I) \not \subset V(\mathfrak a)$
+and $z \in \overline{\{x\}} \cap V(\mathfrak a)$, then
+$\dim(\mathcal{O}_{\overline{\{x\}}, z}) > \text{cd}(A, I) + 1$.
+\end{enumerate}
+Then we obtain an isomorphism
+$$
+\colim H^0(V, \mathcal{F})
+\longrightarrow
+\lim H^0(U, \mathcal{F}/I^n\mathcal{F})
+$$
+where the colimit is over opens $V \subset U$ containing $U \cap V(I)$.
+\end{proposition}
+
+\begin{proof}
+Let $T \subset U$ be the set of points $x$ with
+$\overline{\{x\}} \cap V(I) \subset V(\mathfrak a)$.
+Let $\mathcal{F} \to \mathcal{F}'$ be the surjection
+of coherent modules on $U$ constructed in
+Local Cohomology, Lemma \ref{local-cohomology-lemma-get-depth-1-along-Z}.
+Since $\mathcal{F} \to \mathcal{F}'$ is an isomorphism
+over an open $V \subset U$ containing $U \cap V(I)$
+it suffices to prove the lemma with $\mathcal{F}$ replaced
+by $\mathcal{F}'$. Hence we may and do assume
+for $x \in U$ with $\overline{\{x\}} \cap V(I) \subset V(\mathfrak a)$
+we have $\text{depth}(\mathcal{F}_x) \geq 1$.
+
+\medskip\noindent
+Let $\mathcal{V}$ be the set of open subschemes $V \subset U$
+containing $U \cap V(I)$ ordered by reverse inclusion.
+This is a directed set. We first claim that
+$$
+\mathcal{F}(V)
+\longrightarrow
+\lim H^0(U, \mathcal{F}/I^n\mathcal{F})
+$$
is injective for any $V \in \mathcal{V}$ (and in particular the map
+of the lemma is injective). Namely, an associated point $x$ of $\mathcal{F}$
must have $\overline{\{x\}} \cap U \cap V(I) \not = \emptyset$
by the previous paragraph. If $y \in \overline{\{x\}} \cap U \cap V(I)$ then
+$\mathcal{F}_x$ is a localization of $\mathcal{F}_y$
+and $\mathcal{F}_y \subset \lim \mathcal{F}_y/I^n \mathcal{F}_y$
+by Krull's intersection theorem
+(Algebra, Lemma \ref{algebra-lemma-intersect-powers-ideal-module-zero}).
This proves the claim because a section $s \in \mathcal{F}(V)$
in the kernel would have empty support and hence would be zero.
+
+\medskip\noindent
+Choose a finite $A$-module $M$ such that $\mathcal{F}$ is the restriction
+of $\widetilde{M}$ to $U$, see Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+We may and do assume that $H^0_\mathfrak a(M) = 0$.
+Let $\text{Ass}(M) \setminus V(I) = \{\mathfrak p_1, \ldots, \mathfrak p_n\}$.
+We will prove the lemma by induction on $n$. After reordering we
+may assume that $\mathfrak p_n$ is a minimal element of the set
$\{\mathfrak p_1, \ldots, \mathfrak p_n\}$ with respect to inclusion, i.e.,
+$\mathfrak p_n$ is a generic point of the support of $M$.
+Set
+$$
+M' = H^0_{\mathfrak p_1 \ldots \mathfrak p_{n - 1} I}(M)
+$$
+and $M'' = M/M'$. Let $\mathcal{F}'$ and $\mathcal{F}''$ be the
+coherent $\mathcal{O}_U$-modules corresponding to $M'$ and $M''$.
+Dualizing Complexes, Lemma \ref{dualizing-lemma-divide-by-torsion}
+implies that $M''$ has only one associated prime, namely $\mathfrak p_n$.
+On the other hand, since
+$\mathfrak p_n \not \in V(\mathfrak p_1 \ldots \mathfrak p_{n - 1} I)$
+we see that $\mathfrak p_n$ is not an associated prime of $M'$.
+Hence the induction hypothesis applies to $M'$; note
+that since $\mathcal{F}' \subset \mathcal{F}$
+the condition $\text{depth}(\mathcal{F}'_x) \geq 1$ at points $x$ with
+$\overline{\{x\}} \cap V(I) \subset V(\mathfrak a)$ holds, see
+Algebra, Lemma \ref{algebra-lemma-depth-in-ses}.
+
+\medskip\noindent
+Let $\hat s$ be an element of $\lim H^0(U, \mathcal{F}/I^n\mathcal{F})$.
+Let $\hat s''$ be the image in $\lim H^0(U, \mathcal{F}''/I^n\mathcal{F}'')$.
+Since $\mathcal{F}''$ has only one associated point, namely the point
+corresponding to $\mathfrak p_n$, we see that
+Lemma \ref{lemma-application-H0-pre} applies and we find an open
+$U \cap V(I) \subset V \subset U$
+and a section $s'' \in \mathcal{F}''(V)$ mapping to $\hat s''$.
+Let $J \subset A$ be an ideal such that $V(J) = \Spec(A) \setminus V$.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}
+after replacing $J$ by a power, we may assume
+there is an $A$-linear map $\varphi : J \to M''$
+corresponding to $s''$. Since $M \to M''$ is surjective, for
+each $g \in J$ we can choose $m_g \in M$ mapping to
+$\varphi(g) \in M''$. Then $\hat s'_g = g \hat s - m_g$
+is in $\lim H^0(U, \mathcal{F}'/I^n\mathcal{F}')$.
By induction hypothesis there is a $V' \geq V$ and a
section $s'_g \in \mathcal{F}'(V')$
+mapping to $\hat s'_g$. All in all we conclude that
+$g \hat s$ is in the image of
+$\mathcal{F}(V') \to \lim H^0(U, \mathcal{F}/I^n\mathcal{F})$
+for some $V' \subset V$ possibly depending on $g$.
+However, since $J$ is finitely generated we can find a single
+$V' \in \mathcal{V}$ which works for each of the generators
+and it follows that $V'$ works for all $g$.
+
+\medskip\noindent
+Combining the previous paragraph with the injectivity
+shown in the second paragraph we find there exists
+a $V' \geq V$ and an $A$-module map $\psi : J \to \mathcal{F}(V')$
+such that $\psi(g)$ maps to $g\hat s$. This determines a
+map $\widetilde{J} \to (V' \to \Spec(A))_*\mathcal{F}|_{V'}$
+whose restriction to $V'$ provides an element
+$s \in \mathcal{F}(V')$ mapping to $\hat s$.
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-application-H0}
+Let $I \subset \mathfrak a$ be ideals of a Noetherian ring $A$.
+Let $\mathcal{F}$ be a coherent module on
+$U = \Spec(A) \setminus V(\mathfrak a)$.
+Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item if $x \in \text{Ass}(\mathcal{F})$, $x \not \in V(I)$,
+$z \in V(\mathfrak a) \cap \overline{\{x\}}$, then
+$\dim(\mathcal{O}_{\overline{\{x\}}, z}) > \text{cd}(A, I) + 1$,
+\item for $x \in U$ with $\overline{\{x\}} \cap V(I) \subset V(\mathfrak a)$
we have $\text{depth}(\mathcal{F}_x) \geq 2$.
+\end{enumerate}
+Then we obtain an isomorphism
+$$
+H^0(U, \mathcal{F})
+\longrightarrow
+\lim H^0(U, \mathcal{F}/I^n\mathcal{F})
+$$
+\end{lemma}
+
+\begin{proof}
+Let $\hat s \in \lim H^0(U, \mathcal{F}/I^n\mathcal{F})$.
+By Proposition \ref{proposition-application-H0}
+we find that $\hat s$ is the image of an element $s \in \mathcal{F}(V)$
+for some $V \subset U$ open containing $U \cap V(I)$.
+However, condition (3) shows that $\text{depth}(\mathcal{F}_x) \geq 2$
+for all $x \in U \setminus V$ and hence we find that
+$\mathcal{F}(V) = \mathcal{F}(U)$ by
+Divisors, Lemma \ref{divisors-lemma-depth-2-hartog}
+and the proof is complete.
+\end{proof}
+
+\begin{example}
+\label{example-H0}
+Let $A$ be a Noetherian domain which has a dualizing complex
+and which is complete with respect to a nonzero $f \in A$.
+Let $f \in \mathfrak a \subset A$ be an ideal.
+Assume every irreducible component of $Z = V(\mathfrak a)$
+has codimension $> 2$ in $X = \Spec(A)$. Equivalently, assume every
+irreducible component of $Z$ has codimension $> 1$ in $Y = V(f)$.
+Then with
+$U = X \setminus Z$ every element of
+$$
+\lim_n \Gamma(U, \mathcal{O}_U/f^n \mathcal{O}_U)
+$$
+is the restriction of a section of $\mathcal{O}_U$ defined on an
+open neighbourhood of
+$$
+V(f) \setminus Z = V(f) \cap U = Y \setminus Z = U \cap Y
+$$
+In particular we see that $Y \setminus Z$ is connected. See
+Lemma \ref{lemma-connected} below.
+\end{example}
+
+\begin{lemma}
+\label{lemma-alternative-colim-H0}
+Let $A$ be a Noetherian ring. Let $f \in \mathfrak a \subset A$
+be an element of an ideal of $A$. Let $M$ be a finite $A$-module.
+Assume
+\begin{enumerate}
+\item $A$ is $f$-adically complete,
+\item $f$ is a nonzerodivisor on $M$,
+\item $H^1_\mathfrak a(M/fM)$ is a finite $A$-module.
+\end{enumerate}
+Then with $U = \Spec(A) \setminus V(\mathfrak a)$ the map
+$$
+\colim_V \Gamma(V, \widetilde{M})
+\longrightarrow
+\lim \Gamma(U, \widetilde{M/f^nM})
+$$
+is an isomorphism where the colimit is over opens $V \subset U$
+containing $U \cap V(f)$.
+\end{lemma}
+
+\begin{proof}
+Set $\mathcal{F} = \widetilde{M}|_U$.
+The finiteness of $H^1_\mathfrak a(M/fM)$ implies that
+$H^0(U, \mathcal{F}/f\mathcal{F})$ is finite, see
+Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+By Lemma \ref{lemma-limit-finite} (which applies as $f$ is a
+nonzerodivisor on $\mathcal{F}$)
+we see that $N = \lim H^0(U, \mathcal{F}/f^n\mathcal{F})$
+is a finite $A$-module, is $f$-torsion free, and
+$N/fN \subset H^0(U, \mathcal{F}/f\mathcal{F})$.
+On the other hand, we have $M \to N$ and the map
+$$
+M/fM \longrightarrow H^0(U, \mathcal{F}/f\mathcal{F})
+$$
+is an isomorphism upon localization at any prime $\mathfrak q$ in
$U_0 = V(f) \setminus V(\mathfrak a)$ (details omitted). Thus
+$M_\mathfrak q \to N_\mathfrak q$ induces an isomorphism
+$$
+M_\mathfrak q/fM_\mathfrak q =
+(M/fM)_\mathfrak q \to (N/fN)_\mathfrak q =
+N_\mathfrak q/fN_\mathfrak q
+$$
+Since $f$ is a nonzerodivisor on both $N$ and $M$ we conclude
+that $M_\mathfrak q \to N_\mathfrak q$ is an isomorphism (use
+Nakayama to see surjectivity). We conclude that $M$ and $N$
+determine isomorphic coherent modules over an open $V$
+as in the statement of the lemma. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-alternative-H0}
+Let $A$ be a Noetherian ring. Let $f \in \mathfrak a \subset A$
+be an element of an ideal of $A$. Let $M$ be a finite $A$-module.
+Assume
+\begin{enumerate}
+\item $A$ is $f$-adically complete,
+\item $H^1_\mathfrak a(M)$ and $H^2_\mathfrak a(M)$ are
+annihilated by a power of $f$.
+\end{enumerate}
+Then with $U = \Spec(A) \setminus V(\mathfrak a)$ the map
+$$
+\Gamma(U, \widetilde{M})
+\longrightarrow
+\lim \Gamma(U, \widetilde{M/f^nM})
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+We may apply
+Lemma \ref{lemma-formal-functions-principal}
+to $U$ and $\mathcal{F} = \widetilde{M}|_U$
+because $\mathcal{F}$ is a Noetherian object in
+the category of coherent $\mathcal{O}_U$-modules.
+Since $H^1(U, \mathcal{F}) = H^2_\mathfrak a(M)$
+(Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local})
+is annihilated by a power of $f$, we see that
+its $f$-adic Tate module is zero.
+Hence the lemma shows $\lim H^0(U, \mathcal{F}/f^n \mathcal{F})$
+is the $0$th cohomology group of the
+derived $f$-adic completion of $H^0(U, \mathcal{F})$.
+Consider the exact sequence
+$$
+0 \to H^0_\mathfrak a(M) \to M \to
+\Gamma(U, \mathcal{F}) \to H^1_\mathfrak a(M) \to 0
+$$
+of Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+Since $H^1_\mathfrak a(M)$ is annihilated by a power of $f$
+it is derived complete with respect to $(f)$.
+Since $M$ and $H^0_\mathfrak a(M)$ are finite $A$-modules
+they are complete
+(Algebra, Lemma \ref{algebra-lemma-completion-tensor})
+hence derived complete
+(More on Algebra,
+Proposition \ref{more-algebra-proposition-derived-complete-modules}).
+By More on Algebra, Lemma \ref{more-algebra-lemma-serre-subcategory}
+we conclude that $\Gamma(U, \mathcal{F})$ is derived complete
+as desired.
+\end{proof}
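
\medskip\noindent
For example, hypothesis (2) of Lemma \ref{lemma-alternative-H0}
holds trivially when $\text{depth}_\mathfrak a(M) \geq 3$: in that case
$$
H^1_\mathfrak a(M) = H^2_\mathfrak a(M) = 0
$$
because $H^i_\mathfrak a(M)$ vanishes for $i < \text{depth}_\mathfrak a(M)$.
Thus if in addition $A$ is $f$-adically complete for some
$f \in \mathfrak a$, then the map
$\Gamma(U, \widetilde{M}) \to \lim \Gamma(U, \widetilde{M/f^nM})$
is an isomorphism for such an $M$.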
+
+
+
+
+
+
+
+\section{Algebraization of formal sections, III}
+\label{section-algebraization-sections-coherent-III}
+
+\noindent
This section contains a nonexhaustive list of
+applications of the material
+on completion of local cohomology to higher cohomology
+of coherent modules on quasi-affine schemes and their
+completion with respect to an ideal.
+
+\begin{proposition}
+\label{proposition-application-higher}
+Let $I \subset \mathfrak a$ be ideals of a Noetherian ring $A$.
+Let $\mathcal{F}$ be a coherent module on
+$U = \Spec(A) \setminus V(\mathfrak a)$.
+Let $s \geq 0$.
+Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item if $x \in U \setminus V(I)$ then
+$\text{depth}(\mathcal{F}_x) > s$ or
+$$
+\text{depth}(\mathcal{F}_x) +
+\dim(\mathcal{O}_{\overline{\{x\}}, z}) > \text{cd}(A, I) + s + 1
+$$
+for all $z \in V(\mathfrak a) \cap \overline{\{x\}}$,
+\item one of the following conditions holds:
+\begin{enumerate}
+\item the restriction of $\mathcal{F}$ to $U \setminus V(I)$
+is $(S_{s + 1})$, or
+\item the dimension of $V(\mathfrak a)$ is at most $2$\footnote{In
+the sense that the difference of the maximal and minimal values
+on $V(\mathfrak a)$ of a dimension function on $\Spec(A)$ is at most $2$.}.
+\end{enumerate}
+\end{enumerate}
+Then the maps
+$$
+H^i(U, \mathcal{F})
+\longrightarrow
+\lim H^i(U, \mathcal{F}/I^n\mathcal{F})
+$$
+are isomorphisms for $i < s$. Moreover we have an isomorphism
+$$
+\colim H^s(V, \mathcal{F})
+\longrightarrow
+\lim H^s(U, \mathcal{F}/I^n\mathcal{F})
+$$
+where the colimit is over opens $V \subset U$ containing $U \cap V(I)$.
+\end{proposition}
+
+\begin{proof}
+We may assume $s > 0$ as the case $s = 0$ was done in
+Proposition \ref{proposition-application-H0}.
+
+\medskip\noindent
+Choose a finite $A$-module $M$ such that $\mathcal{F}$ is the restriction
+to $U$ of the coherent module associated to $M$, see Local Cohomology,
+Lemma \ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+Set $d = \text{cd}(A, I)$.
+Let $\mathfrak p$ be a prime of $A$ not contained in $V(I)$
+and let $\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$.
+Then either $\text{depth}(M_\mathfrak p) \geq s + 1 > s$
+or we have $\dim((A/\mathfrak p)_\mathfrak q) > d + s + 1$ by (2).
+By Lemma \ref{lemma-bootstrap-bis-bis} we conclude that the
+assumptions of Situation \ref{situation-bootstrap}
+are satisfied for $A, I, V(\mathfrak a), M, s, d$.
+On the other hand, the hypotheses of
+Lemma \ref{lemma-algebraize-local-cohomology-general}
+are satisfied for $s + 1$ and $d$; this is where condition (3) is used.
+
+\medskip\noindent
+Applying Lemma \ref{lemma-algebraize-local-cohomology-general}
+we find there exists an ideal
+$J_0 \subset \mathfrak a$ with $V(J_0) \cap V(I) = V(\mathfrak a)$
+such that for any $J \subset J_0$ with $V(J) \cap V(I) = V(\mathfrak a)$
+the maps
+$$
+H^i_J(M) \longrightarrow H^i(R\Gamma_\mathfrak a(M)^\wedge)
+$$
are isomorphisms for $i \leq s + 1$.
+
+\medskip\noindent
+For $i \leq s$ the map $H^i_\mathfrak a(M) \to H^i_J(M)$
+is an isomorphism by Lemmas \ref{lemma-bootstrap-inherited} and
+\ref{lemma-kill-colimit-support-general}.
+Using the comparison of cohomology and local cohomology
+(Local Cohomology, Lemma \ref{local-cohomology-lemma-local-cohomology})
+we deduce
+$H^i(U, \mathcal{F}) \to H^i(V,\mathcal{F})$
+is an isomorphism for $V = \Spec(A) \setminus V(J)$ and
+$i < s$.
+
+\medskip\noindent
+By Theorem \ref{theorem-final-bootstrap} we have
+$H^i_\mathfrak a(M) = \lim H^i_\mathfrak a(M/I^nM)$
+for $i \leq s$. By Lemma \ref{lemma-combine-two} we have
+$H^{s + 1}_\mathfrak a(M) = \lim H^{s + 1}_\mathfrak a(M/I^nM)$.
+
+\medskip\noindent
+The isomorphism $H^0(U, \mathcal{F}) = H^0(V, \mathcal{F}) =
+\lim H^0(U, \mathcal{F}/I^n\mathcal{F})$ follows from the above and
+Proposition \ref{proposition-application-H0}.
+For $0 < i < s$ we get the desired isomorphisms
+$H^i(U, \mathcal{F}) = H^i(V, \mathcal{F}) =
+\lim H^i(U, \mathcal{F}/I^n\mathcal{F})$ in
+the same manner using the relation between local cohomology
+and cohomology; it is easier than the case $i = 0$
+because for $i > 0$ we have
+$$
+H^i(U, \mathcal{F}) = H^{i + 1}_\mathfrak a(M),
+\quad
+H^i(V, \mathcal{F}) = H^{i + 1}_J(M),
+\quad
+H^i(R\Gamma(U, \mathcal{F})^\wedge) =
+H^{i + 1}(R\Gamma_\mathfrak a(M)^\wedge)
+$$
+Similarly for the final statement.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-alternative-higher}
+Let $A$ be a Noetherian ring. Let $f \in \mathfrak a \subset A$
+be an element of an ideal of $A$. Let $M$ be a finite $A$-module.
+Let $s \geq 0$. Assume
+\begin{enumerate}
+\item $A$ is $f$-adically complete,
+\item $H^i_\mathfrak a(M)$ is annihilated by a power of $f$
+for $i \leq s + 1$.
+\end{enumerate}
+Then with $U = \Spec(A) \setminus V(\mathfrak a)$ the map
+$$
+H^i(U, \widetilde{M})
+\longrightarrow
+\lim H^i(U, \widetilde{M/f^nM})
+$$
+is an isomorphism for $i < s$.
+\end{lemma}
+
+\begin{proof}
+The proof is the same as the proof of Lemma \ref{lemma-alternative-H0}.
+We may apply Lemma \ref{lemma-formal-functions-principal}
+to $U$ and $\mathcal{F} = \widetilde{M}|_U$
+because $\mathcal{F}$ is a Noetherian object in
+the category of coherent $\mathcal{O}_U$-modules.
+Since $H^i(U, \mathcal{F}) = H^{i + 1}_\mathfrak a(M)$
+(Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local})
+is annihilated by a power of $f$ for $i \leq s$, we see that
+its $f$-adic Tate module is zero.
+Hence the lemma shows $\lim H^{i - 1}(U, \mathcal{F}/f^n \mathcal{F})$
+is the $0$th cohomology group of the
+derived $f$-adic completion of $H^{i - 1}(U, \mathcal{F})$.
However, if $s \geq i > 1$, then this is equal to the $f$-power torsion
+module $H^i_\mathfrak a(M)$ and hence equal to its own
+(derived) completion. For $i = 0$, we refer to
+Lemma \ref{lemma-alternative-H0}.
+\end{proof}
+
+
+
+
+
+\section{Application to connectedness}
+\label{section-connected}
+
+\noindent
+In this section we discuss Grothendieck's connectedness theorem
+and variants; the original version can be found as
\cite[Expos\'e XIII, Theorem 2.1]{SGA2}. There is a version
+called Faltings' connectedness theorem in the literature;
+our guess is that this refers to \cite[Theorem 6]{Faltings-some}.
+Let us state and prove the optimal version for complete
+local rings given in \cite[Theorem 1.6]{Varbaro}.
+
+\begin{lemma}
+\label{lemma-punctured-still-connected}
+\begin{reference}
+\cite[Theorem 1.6]{Varbaro}
+\end{reference}
+Let $(A, \mathfrak m)$ be a Noetherian complete local ring.
+Let $I$ be a proper ideal of $A$.
+Set $X = \Spec(A)$ and $Y = V(I)$.
+Denote
+\begin{enumerate}
+\item $d$ the minimal dimension of an irreducible component of $X$, and
+\item $c$ the minimal dimension of a closed subset $Z \subset X$
+such that $X \setminus Z$ is disconnected.
+\end{enumerate}
+Then for $Z \subset Y$ closed we have $Y \setminus Z$ is connected if
+$\dim(Z) < \min(c, d - 1) - \text{cd}(A, I)$. In particular, the punctured
+spectrum of $A/I$ is connected if $\text{cd}(A, I) < \min(c, d - 1)$.
+\end{lemma}
+
+\begin{proof}
+Let us first prove the final assertion. As a first case, if the punctured
+spectrum of $A/I$ is empty, then
+Local Cohomology, Lemma \ref{local-cohomology-lemma-cd-bound-dim-local}
+shows every irreducible component of $X$ has dimension
+$\leq \text{cd}(A, I)$ and we get $\min(c, d - 1) - \text{cd}(A, I) < 0$
+which implies the lemma holds in this case. Thus we may assume
+$U \cap Y$ is nonempty where $U = X \setminus \{\mathfrak m\}$
+is the punctured spectrum of $A$. We may replace $A$ by its reduction.
+Observe that $A$ has a dualizing complex
+(Dualizing Complexes, Lemma \ref{dualizing-lemma-ubiquity-dualizing})
+and that $A$ is complete with respect to $I$
+(Algebra, Lemma \ref{algebra-lemma-complete-by-sub}).
+If we assume $d - 1 > \text{cd}(A, I)$, then we may apply
+Lemma \ref{lemma-application-theorem} to see that
+$$
+\colim H^0(V, \mathcal{O}_V)
+\longrightarrow
+\lim H^0(U, \mathcal{O}_U/I^n\mathcal{O}_U)
+$$
+is an isomorphism where the colimit is over opens $V \subset U$
+containing $U \cap Y$. If $U \cap Y$ is disconnected, then
+its $n$th infinitesimal neighbourhood in $U$ is disconnected
+for all $n$ and we find the
+right hand side has a nontrivial idempotent (here we use
+that $U \cap Y$ is nonempty).
+Thus we can find a $V$ which is disconnected.
+Set $Z = X \setminus V$. By
+Local Cohomology, Lemma \ref{local-cohomology-lemma-cd-bound-dim-local}
+we see that every irreducible component of $Z$ has dimension
+$\leq \text{cd}(A, I)$. Hence $c \leq \text{cd}(A, I)$ and this
+indeed proves the final statement.
+
+\medskip\noindent
+We can deduce the statement of the lemma from what we just proved
+as follows. Suppose that $Z \subset Y$ closed and $Y \setminus Z$ is
+disconnected and $\dim(Z) = e$. Recall that a connected space is nonempty
+by convention. Hence we conclude either (a) $Y = Z$ or (b)
+$Y \setminus Z = W_1 \amalg W_2$ with $W_i$ nonempty, open, and closed
+in $Y \setminus Z$. In case (b) we may pick points $w_i \in W_i$
+which are closed in $U$, see
+Morphisms, Lemma \ref{morphisms-lemma-ubiquity-Jacobson-schemes}.
+Then we can find $f_1, \ldots, f_e \in \mathfrak m$
+such that $V(f_1, \ldots, f_e) \cap Z = \{\mathfrak m\}$
+and in case (b) we may assume $w_i \in V(f_1, \ldots, f_e)$.
Namely, using prime avoidance we can inductively
choose $f_i$ such that $\dim(V(f_1, \ldots, f_i) \cap Z) = e - i$
and such that in case (b) we have $w_1, w_2 \in V(f_i)$.
It follows that the punctured spectrum of $A/(I + (f_1, \ldots, f_e))$
+is disconnected (small detail omitted). Since
+$\text{cd}(A, I + (f_1, \ldots, f_e)) \leq \text{cd}(A, I) + e$ by
+Local Cohomology, Lemmas \ref{local-cohomology-lemma-cd-sum} and
+\ref{local-cohomology-lemma-bound-cd} we conclude that
+$$
+\text{cd}(A, I) + e \geq \min(c, d - 1)
+$$
+by the first part of the proof. This implies
+$e \geq \min(c, d - 1) - \text{cd}(A, I)$ which is what we had to show.
+\end{proof}
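
\medskip\noindent
To illustrate the bound in Lemma \ref{lemma-punctured-still-connected},
take $A = k[[x_1, \ldots, x_4]]$ over a field $k$ and $I = (f)$
for some nonzero $f \in \mathfrak m$. Since $A$ is a domain,
$X = \Spec(A)$ is irreducible, hence $X \setminus Z$ is connected
for every proper closed subset $Z \subset X$ and we get
$c = \dim(X) = 4$ as well as $d = 4$. Moreover
$\text{cd}(A, I) \leq 1$ as $I$ is generated by one element. Thus
$$
\min(c, d - 1) - \text{cd}(A, I) \geq 3 - 1 = 2
$$
and the lemma tells us that $Y \setminus Z = V(f) \setminus Z$
is connected for every closed $Z \subset V(f)$ with $\dim(Z) \leq 1$.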
+
+\begin{lemma}
+\label{lemma-connected}
+Let $I \subset \mathfrak a$ be ideals of a Noetherian ring $A$.
+Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item if $\mathfrak p \subset A$ is a minimal prime not contained
+in $V(I)$ and $\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$, then
+$\dim((A/\mathfrak p)_\mathfrak q) > \text{cd}(A, I) + 1$,
+\item any nonempty open $V \subset \Spec(A)$ which contains
+$V(I) \setminus V(\mathfrak a)$ is connected\footnote{For example
+if $A$ is a domain.}.
+\end{enumerate}
+Then $V(I) \setminus V(\mathfrak a)$ is either empty or connected.
+\end{lemma}
+
+\begin{proof}
+We may replace $A$ by its reduction. Then we have the inequality
+in (2) for all associated primes of $A$. By
+Proposition \ref{proposition-application-H0} we see that
+$$
+\colim H^0(V, \mathcal{O}_V) = \lim H^0(T_n, \mathcal{O}_{T_n})
+$$
+where the colimit is over the opens $V$ as in (3) and $T_n$ is the
+$n$th infinitesimal neighbourhood of $T = V(I) \setminus V(\mathfrak a)$
+in $U = \Spec(A) \setminus V(\mathfrak a)$. Thus $T$ is either empty
+or connected, since if not, then the right hand side would have a
+nontrivial idempotent and we've assumed the left hand side does not.
+Some details omitted.
+\end{proof}
+
+
+
+
+
+
+\section{The completion functor}
+\label{section-completion}
+
+\noindent
+Let $X$ be a Noetherian scheme. Let $Y \subset X$ be a closed subscheme
+with quasi-coherent sheaf of ideals $\mathcal{I} \subset \mathcal{O}_X$.
+In this section we consider inverse systems of coherent
$\mathcal{O}_X$-modules $(\mathcal{F}_n)$ with $\mathcal{F}_n$
annihilated by $\mathcal{I}^n$ such that the transition maps induce
isomorphisms
$\mathcal{F}_{n + 1}/\mathcal{I}^n\mathcal{F}_{n + 1} \to \mathcal{F}_n$.
+The category of these inverse systems was denoted
+$$
+\textit{Coh}(X, \mathcal{I})
+$$
+in Cohomology of Schemes, Section \ref{coherent-section-coherent-formal}.
+This category is equivalent to the category of coherent modules
+on the formal completion of $X$ along $Y$; however, since we have
+not yet introduced formal schemes or coherent modules on them,
+we cannot use this terminology here. We are particularly interested
+in the completion functor
+$$
+\textit{Coh}(\mathcal{O}_X)
+\longrightarrow
+\textit{Coh}(X, \mathcal{I}),\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+$$
+See
+Cohomology of Schemes, Equation (\ref{coherent-equation-completion-functor}).
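
\medskip\noindent
For example, if $X = \Spec(A)$ is affine and $\mathcal{I} = \widetilde{I}$
for an ideal $I \subset A$, then any finite $A$-module $M$ gives rise
to the object
$$
\widetilde{M}^\wedge = (\widetilde{M/I^nM})
$$
of $\textit{Coh}(X, \mathcal{I})$: the transition maps are the obvious
surjections and they induce isomorphisms
$(M/I^{n + 1}M)/I^n(M/I^{n + 1}M) = M/I^nM$.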
+
+\begin{lemma}
+\label{lemma-completion-fully-faithful}
+Let $X$ be a Noetherian scheme and let $Y \subset X$ be a closed subscheme.
+Let $Y_n \subset X$ be the $n$th infinitesimal neighbourhood of $Y$ in $X$.
+Consider the following conditions
+\begin{enumerate}
+\item $X$ is quasi-affine and
+$\Gamma(X, \mathcal{O}_X) \to \lim \Gamma(Y_n, \mathcal{O}_{Y_n})$
+is an isomorphism,
+\item $X$ has an ample invertible module $\mathcal{L}$ and
+$\Gamma(X, \mathcal{L}^{\otimes m}) \to
+\lim \Gamma(Y_n, \mathcal{L}^{\otimes m}|_{Y_n})$
+is an isomorphism for all $m \gg 0$,
+\item for every finite locally free $\mathcal{O}_X$-module
+$\mathcal{E}$ the map
+$\Gamma(X, \mathcal{E}) \to \lim \Gamma(Y_n, \mathcal{E}|_{Y_n})$
+is an isomorphism, and
+\item the completion functor
+$\textit{Coh}(\mathcal{O}_X) \to \textit{Coh}(X, \mathcal{I})$
+is fully faithful on the full subcategory of finite locally free
+objects.
+\end{enumerate}
+Then (1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (4)
+and (4) $\Rightarrow$ (3).
+\end{lemma}
+
+\begin{proof}
+Proof of (3) $\Rightarrow$ (4). If $\mathcal{F}$ and $\mathcal{G}$
+are finite locally free on $X$, then considering
+$\mathcal{H} = \SheafHom_{\mathcal{O}_X}(\mathcal{G}, \mathcal{F})$
+and using Cohomology of Schemes, Lemma
+\ref{coherent-lemma-completion-internal-hom}
+we see that (3) implies (4).
+
+\medskip\noindent
Proof of (2) $\Rightarrow$ (3). Namely, let $\mathcal{L}$ be ample
+on $X$ and suppose that $\mathcal{E}$ is a
+finite locally free $\mathcal{O}_X$-module.
+We claim we can find a universally exact sequence
+$$
+0 \to \mathcal{E} \to
+(\mathcal{L}^{\otimes p})^{\oplus r} \to
+(\mathcal{L}^{\otimes q})^{\oplus s}
+$$
+for some $r, s \geq 0$ and $0 \ll p \ll q$. If this holds, then
+using the exact sequence
+$$
+0 \to \lim \Gamma(\mathcal{E}|_{Y_n}) \to
+\lim \Gamma((\mathcal{L}^{\otimes p})^{\oplus r}|_{Y_n}) \to
+\lim \Gamma((\mathcal{L}^{\otimes q})^{\oplus s}|_{Y_n})
+$$
+and the isomorphisms in (2) we get the isomorphism in (3).
+To prove the claim, consider the dual locally free module
+$\SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{O}_X)$
+and apply
+Properties, Proposition \ref{properties-proposition-characterize-ample}
+to find a surjection
+$$
+(\mathcal{L}^{\otimes -p})^{\oplus r}
+\longrightarrow
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{O}_X)
+$$
Taking duals we obtain the first map in the exact sequence
(it is universally injective because it is the dual of a surjection
of finite locally free modules, hence locally split).
+Repeat with the cokernel to get the second. Some details omitted.
+
+\medskip\noindent
+Proof of (1) $\Rightarrow$ (2). This is true because if $X$ is quasi-affine
+then $\mathcal{O}_X$ is an ample invertible module, see
+Properties, Lemma \ref{properties-lemma-quasi-affine-O-ample}.
+
+\medskip\noindent
+We omit the proof of (4) $\Rightarrow$ (3).
+\end{proof}
+
+\noindent
Given a Noetherian scheme $X$ and a quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_X$ we will say
+an object $(\mathcal{F}_n)$ of $\textit{Coh}(X, \mathcal{I})$
+is {\it finite locally free} if each $\mathcal{F}_n$ is a finite
+locally free $\mathcal{O}_X/\mathcal{I}^n$-module.
+
+\begin{lemma}
+\label{lemma-completion-fully-faithful-general}
+Let $X$ be a Noetherian scheme and let $Y \subset X$ be a closed subscheme
+with ideal sheaf $\mathcal{I} \subset \mathcal{O}_X$.
+Let $Y_n \subset X$ be the $n$th infinitesimal neighbourhood of $Y$ in $X$.
+Let $\mathcal{V}$ be the set of open subschemes $V \subset X$ containing $Y$
ordered by reverse inclusion. Consider the following conditions
+\begin{enumerate}
+\item $X$ is quasi-affine and
+$$
+\colim_\mathcal{V} \Gamma(V, \mathcal{O}_V)
+\longrightarrow
+\lim \Gamma(Y_n, \mathcal{O}_{Y_n})
+$$
+is an isomorphism,
+\item $X$ has an ample invertible module $\mathcal{L}$ and
+$$
+\colim_\mathcal{V} \Gamma(V, \mathcal{L}^{\otimes m})
+\longrightarrow
+\lim \Gamma(Y_n, \mathcal{L}^{\otimes m}|_{Y_n})
+$$
+is an isomorphism for all $m \gg 0$,
+\item for every $V \in \mathcal{V}$ and every finite locally free
+$\mathcal{O}_V$-module $\mathcal{E}$ the map
+$$
+\colim_{V' \geq V} \Gamma(V', \mathcal{E}|_{V'})
+\longrightarrow
+\lim \Gamma(Y_n, \mathcal{E}|_{Y_n})
+$$
+is an isomorphism, and
+\item the completion functor
+$$
+\colim_\mathcal{V} \textit{Coh}(\mathcal{O}_V)
+\longrightarrow
+\textit{Coh}(X, \mathcal{I}),
+\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+$$
+is fully faithful on the full subcategory of
+finite locally free objects (see explanation above).
+\end{enumerate}
+Then (1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (4)
+and (4) $\Rightarrow$ (3).
+\end{lemma}
+
+\begin{proof}
+Observe that $\mathcal{V}$ is a directed set, so the colimits are
+as in Categories, Section \ref{categories-section-directed-colimits}.
+The rest of the argument is almost exactly the same as the argument
+in the proof of Lemma \ref{lemma-completion-fully-faithful}; we urge
+the reader to skip it.
+
+\medskip\noindent
+Proof of (3) $\Rightarrow$ (4). If $\mathcal{F}$ and $\mathcal{G}$
+are finite locally free on $V \in \mathcal{V}$, then considering
+$\mathcal{H} = \SheafHom_{\mathcal{O}_V}(\mathcal{G}, \mathcal{F})$
+and using Cohomology of Schemes, Lemma
+\ref{coherent-lemma-completion-internal-hom}
+we see that (3) implies (4).
+
+\medskip\noindent
+Proof of (2) $\Rightarrow$ (3). Let $\mathcal{L}$ be ample
+on $X$ and suppose that $\mathcal{E}$ is a
+finite locally free $\mathcal{O}_V$-module
+for some $V \in \mathcal{V}$.
+We claim we can find a universally exact sequence
+$$
+0 \to \mathcal{E} \to
+(\mathcal{L}^{\otimes p})^{\oplus r}|_{V} \to
+(\mathcal{L}^{\otimes q})^{\oplus s}|_{V}
+$$
+for some $r, s \geq 0$ and $0 \ll p \ll q$. If this is true, then
+the isomorphism in (2) will imply the isomorphism in (3).
+To prove the claim, recall that $\mathcal{L}|_V$ is ample, see
+Properties, Lemma \ref{properties-lemma-ample-on-locally-closed}.
+Consider the dual locally free module
+$\SheafHom_{\mathcal{O}_V}(\mathcal{E}, \mathcal{O}_V)$
+and apply
+Properties, Proposition \ref{properties-proposition-characterize-ample}
+to find a surjection
+$$
+(\mathcal{L}^{\otimes -p})^{\oplus r}|_V \longrightarrow
+\SheafHom_{\mathcal{O}_V}(\mathcal{E}, \mathcal{O}_V)
+$$
Taking duals we obtain the first map in the exact sequence
(it is universally injective because it is the dual of a surjection
of finite locally free modules, hence locally split).
+Repeat with the cokernel to get the second. Some details omitted.
+
+\medskip\noindent
+Proof of (1) $\Rightarrow$ (2). This is true because if $X$ is quasi-affine
+then $\mathcal{O}_X$ is an ample invertible module, see
+Properties, Lemma \ref{properties-lemma-quasi-affine-O-ample}.
+
+\medskip\noindent
+We omit the proof of (4) $\Rightarrow$ (3).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-recognize-formal-coherent-modules}
+Let $X$ be a Noetherian scheme. Let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. The functor
+$$
+\textit{Coh}(X, \mathcal{I}) \longrightarrow \text{Pro-}\QCoh(\mathcal{O}_X)
+$$
+is fully faithful, see Categories, Remark \ref{categories-remark-pro-category}.
+\end{lemma}
+
+\begin{proof}
+Let $(\mathcal{F}_n)$ and $(\mathcal{G}_n)$ be objects of
+$\textit{Coh}(X, \mathcal{I})$. A morphism of pro-objects
+$\alpha$ from $(\mathcal{F}_n)$ to $(\mathcal{G}_n)$ is given
+by a system of maps
+$\alpha_n : \mathcal{F}_{n'(n)} \to \mathcal{G}_n$
+where $\mathbf{N} \to \mathbf{N}$, $n \mapsto n'(n)$
+is an increasing function. Since
$\mathcal{F}_n = \mathcal{F}_{n'(n)}/\mathcal{I}^n\mathcal{F}_{n'(n)}$
and since $\mathcal{G}_n$ is annihilated by $\mathcal{I}^n$
we see that $\alpha_n$ induces a map $\mathcal{F}_n \to \mathcal{G}_n$.
These maps are compatible with the transition maps and define a morphism
$(\mathcal{F}_n) \to (\mathcal{G}_n)$ of $\textit{Coh}(X, \mathcal{I})$
inducing $\alpha$; this proves fullness. Faithfulness holds as well:
since the transition maps $\mathcal{F}_{n'} \to \mathcal{F}_n$ are
surjective, a morphism of inverse systems inducing the zero morphism
of pro-objects is zero.
+\end{proof}
+
+\noindent
+Next we add some examples of the kind of fully faithfulness
+result we will be able to prove using the work done earlier in this chapter.
+
+\begin{lemma}
+\label{lemma-fully-faithful}
+Let $I \subset \mathfrak a$ be ideals of a Noetherian ring $A$.
+Let $U = \Spec(A) \setminus V(\mathfrak a)$. Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item for any associated prime $\mathfrak p \subset A$,
+$I \not \subset \mathfrak p$ and
+$\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$ we have
$\dim((A/\mathfrak p)_\mathfrak q) > \text{cd}(A, I) + 1$, and
\item for $\mathfrak p \subset A$, $I \not \subset \mathfrak p$
with $V(\mathfrak p) \cap V(I) \subset V(\mathfrak a)$
+we have $\text{depth}(A_\mathfrak p) \geq 2$.
+\end{enumerate}
+Then the completion functor
+$$
+\textit{Coh}(\mathcal{O}_U)
+\longrightarrow
+\textit{Coh}(U, I\mathcal{O}_U),
+\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+$$
+is fully faithful on the full subcategory of
+finite locally free objects.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-completion-fully-faithful}
+it suffices to show that
+$$
+\Gamma(U, \mathcal{O}_U) =
+\lim \Gamma(U, \mathcal{O}_U/I^n\mathcal{O}_U)
+$$
+This follows immediately from
+Lemma \ref{lemma-application-H0}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-simple-one}
+Let $A$ be a Noetherian ring. Let $f \in \mathfrak a$ be an element of
+an ideal of $A$. Let $U = \Spec(A) \setminus V(\mathfrak a)$. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex and is complete with respect to $f$,
+\item $A_f$ is $(S_2)$ and for every minimal prime $\mathfrak p \subset A$,
+$f \not \in \mathfrak p$ and
+$\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$ we have
+$\dim((A/\mathfrak p)_\mathfrak q) \geq 3$.
+\end{enumerate}
+Then the completion functor
+$$
+\textit{Coh}(\mathcal{O}_U)
+\longrightarrow
\textit{Coh}(U, f\mathcal{O}_U),
+\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+$$
+is fully faithful on the full subcategory of finite locally free objects.
+\end{lemma}
+
+\begin{proof}
+We will show that Lemma \ref{lemma-fully-faithful} applies.
+Assumption (1) of Lemma \ref{lemma-fully-faithful} holds.
+Observe that $\text{cd}(A, (f)) \leq 1$, see
+Local Cohomology, Lemma \ref{local-cohomology-lemma-bound-cd}.
+Since $A_f$ is $(S_2)$ we see that every associated prime
+$\mathfrak p \subset A$, $f \not \in \mathfrak p$ is a minimal prime.
+Thus we get assumption (2) of Lemma \ref{lemma-fully-faithful}.
+If $\mathfrak p \subset A$, $f \not \in \mathfrak p$ satisfies
$V(\mathfrak p) \cap V(f) \subset V(\mathfrak a)$ and if
+$\mathfrak q \in V(\mathfrak p) \cap V(f)$ is a generic point,
+then $\dim((A/\mathfrak p)_\mathfrak q) = 1$.
+Then we obtain $\dim(A_\mathfrak p) \geq 2$ by looking at the minimal primes
+$\mathfrak p_0 \subset \mathfrak p$ and using that
+$\dim((A/\mathfrak p_0)_\mathfrak q) \geq 3$ by assumption. Thus
+$\text{depth}(A_\mathfrak p) \geq 2$ by the $(S_2)$ assumption.
+This verifies assumption (3) of Lemma \ref{lemma-fully-faithful}
+and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-alternative}
+Let $A$ be a Noetherian ring. Let $f \in \mathfrak a \subset A$
+be an element of an ideal of $A$. Let $U = \Spec(A) \setminus V(\mathfrak a)$.
+Assume
+\begin{enumerate}
+\item $A$ is $f$-adically complete,
+\item $H^1_\mathfrak a(A)$ and $H^2_\mathfrak a(A)$ are
+annihilated by a power of $f$.
+\end{enumerate}
+Then the completion functor
+$$
+\textit{Coh}(\mathcal{O}_U)
+\longrightarrow
\textit{Coh}(U, f\mathcal{O}_U),
+\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+$$
+is fully faithful on the full subcategory of
+finite locally free objects.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-completion-fully-faithful}
+it suffices to show that
+$$
+\Gamma(U, \mathcal{O}_U) =
\lim \Gamma(U, \mathcal{O}_U/f^n\mathcal{O}_U)
+$$
+This follows immediately from
+Lemma \ref{lemma-alternative-H0}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-simple-two}
+Let $A$ be a Noetherian ring. Let $f \in \mathfrak a$ be an element of
+an ideal of $A$. Let $U = \Spec(A) \setminus V(\mathfrak a)$. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex and is complete with respect to $f$,
+\item for every prime $\mathfrak p \subset A$, $f \not \in \mathfrak p$
+and $\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$ we have
+$\text{depth}(A_\mathfrak p) + \dim((A/\mathfrak p)_\mathfrak q) > 2$.
+\end{enumerate}
+Then the completion functor
+$$
+\textit{Coh}(\mathcal{O}_U)
+\longrightarrow
\textit{Coh}(U, f\mathcal{O}_U),
+\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+$$
+is fully faithful on the full subcategory of finite locally free objects.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-fully-faithful-alternative} and
+Local Cohomology, Proposition \ref{local-cohomology-proposition-annihilator}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-general}
+Let $I \subset \mathfrak a \subset A$ be ideals of a Noetherian ring $A$.
+Let $U = \Spec(A) \setminus V(\mathfrak a)$. Let $\mathcal{V}$ be
+the set of open subschemes of $U$ containing $U \cap V(I)$
+ordered by reverse inclusion. Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item for any associated prime
+$\mathfrak p \subset A$ with
+$I \not \subset \mathfrak p$ and
+$V(\mathfrak p) \cap V(I) \not \subset V(\mathfrak a)$
+and $\mathfrak q \in V(\mathfrak p) \cap V(\mathfrak a)$ we have
+$\dim((A/\mathfrak p)_\mathfrak q) > \text{cd}(A, I) + 1$.
+\end{enumerate}
+Then the completion functor
+$$
+\colim_\mathcal{V} \textit{Coh}(\mathcal{O}_V)
+\longrightarrow
+\textit{Coh}(U, I\mathcal{O}_U),
+\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+$$
+is fully faithful on the full subcategory of
+finite locally free objects.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-completion-fully-faithful-general}
+it suffices to show that
+$$
+\colim_\mathcal{V} \Gamma(V, \mathcal{O}_V) =
+\lim \Gamma(U, \mathcal{O}_U/I^n\mathcal{O}_U)
+$$
+This follows immediately from Proposition \ref{proposition-application-H0}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-general-alternative}
+Let $A$ be a Noetherian ring. Let $f \in \mathfrak a \subset A$
+be an element of an ideal of $A$. Let $U = \Spec(A) \setminus V(\mathfrak a)$.
+Let $\mathcal{V}$ be the set of open subschemes of $U$ containing $U \cap V(f)$
+ordered by reverse inclusion. Assume
+\begin{enumerate}
+\item $A$ is $f$-adically complete,
+\item $f$ is a nonzerodivisor,
+\item $H^1_\mathfrak a(A/fA)$ is a finite $A$-module.
+\end{enumerate}
+Then the completion functor
+$$
+\colim_\mathcal{V} \textit{Coh}(\mathcal{O}_V)
+\longrightarrow
+\textit{Coh}(U, f\mathcal{O}_U),
+\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+$$
+is fully faithful on the full subcategory of finite locally free objects.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-completion-fully-faithful-general}
+it suffices to show that
+$$
+\colim_\mathcal{V} \Gamma(V, \mathcal{O}_V) =
\lim \Gamma(U, \mathcal{O}_U/f^n\mathcal{O}_U)
+$$
+This follows immediately from Lemma \ref{lemma-alternative-colim-H0}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-very-general}
+Let $I \subset \mathfrak a \subset A$ be ideals of a Noetherian ring $A$.
+Let $U = \Spec(A) \setminus V(\mathfrak a)$. Let $\mathcal{V}$ be the set
+of open subschemes of $U$ containing $U \cap V(I)$ ordered by reverse
+inclusion. Let $\mathcal{F}$ and
+$\mathcal{G}$ be coherent $\mathcal{O}_V$-modules for some
+$V \in \mathcal{V}$. The map
+$$
+\colim_{V' \geq V} \Hom_V(\mathcal{G}|_{V'}, \mathcal{F}|_{V'})
+\longrightarrow
+\Hom_{\textit{Coh}(U, I\mathcal{O}_U)}(\mathcal{G}^\wedge, \mathcal{F}^\wedge)
+$$
+is bijective if the following assumptions hold:
+\begin{enumerate}
+\item $A$ is $I$-adically complete and has a dualizing complex,
+\item if $x \in \text{Ass}(\mathcal{F})$, $x \not \in V(I)$,
+$\overline{\{x\}} \cap V(I) \not \subset V(\mathfrak a)$
+and $z \in \overline{\{x\}} \cap V(\mathfrak a)$, then
+$\dim(\mathcal{O}_{\overline{\{x\}}, z}) > \text{cd}(A, I) + 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We may choose coherent $\mathcal{O}_U$-modules
+$\mathcal{F}'$ and $\mathcal{G}'$ whose restriction to $V$
+is $\mathcal{F}$ and $\mathcal{G}$, see
+Properties, Lemma \ref{properties-lemma-lift-finite-presentation}.
+We may modify our choice of $\mathcal{F}'$ to ensure that
+$\text{Ass}(\mathcal{F}') \subset V$, see for example
+Local Cohomology, Lemma \ref{local-cohomology-lemma-get-depth-1-along-Z}.
+Thus we may and do replace $V$ by $U$ and $\mathcal{F}$ and $\mathcal{G}$
+by $\mathcal{F}'$ and $\mathcal{G}'$.
+Set $\mathcal{H} = \SheafHom_{\mathcal{O}_U}(\mathcal{G}, \mathcal{F})$.
+This is a coherent $\mathcal{O}_U$-module. We have
+$$
+\Hom_V(\mathcal{G}|_V, \mathcal{F}|_V) =
+H^0(V, \mathcal{H})
+\quad\text{and}\quad
\lim H^0(U, \mathcal{H}/I^n\mathcal{H}) =
+\Mor_{\textit{Coh}(U, I\mathcal{O}_U)}
+(\mathcal{G}^\wedge, \mathcal{F}^\wedge)
+$$
+See Cohomology of Schemes, Lemma \ref{coherent-lemma-completion-internal-hom}.
+Thus if we can show that the assumptions of
+Proposition \ref{proposition-application-H0}
+hold for $\mathcal{H}$, then the proof is complete.
+This holds because
+$\text{Ass}(\mathcal{H}) \subset \text{Ass}(\mathcal{F})$.
+See Cohomology of Schemes, Lemma
+\ref{coherent-lemma-hom-into-depth}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Algebraization of coherent formal modules, I}
+\label{section-algebraization-modules}
+
+\noindent
+The essential surjectivity of the completion functor (see below)
+was studied systematically in
+\cite{SGA2}, \cite{MRaynaud-book}, and \cite{MRaynaud-paper}.
+We work in the following affine situation.
+
+\begin{situation}
+\label{situation-algebraize}
+Here $A$ is a Noetherian ring and $I \subset \mathfrak a \subset A$ are ideals.
+We set $X = \Spec(A)$, $Y = V(I) = \Spec(A/I)$, and
+$Z = V(\mathfrak a) = \Spec(A/\mathfrak a)$. Furthermore $U = X \setminus Z$.
+\end{situation}
+
+\noindent
+In this section we try to find conditions that guarantee an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$ is in the image of the completion functor
+$\textit{Coh}(\mathcal{O}_U) \to \textit{Coh}(U, I\mathcal{O}_U)$.
+See Cohomology of Schemes, Section \ref{coherent-section-coherent-formal} and
+Section \ref{section-completion}.
+
+\begin{lemma}
+\label{lemma-system-of-modules}
+In Situation \ref{situation-algebraize}.
+Consider an inverse system $(M_n)$ of $A$-modules such
+that
+\begin{enumerate}
+\item $M_n$ is a finite $A$-module,
+\item $M_n$ is annihilated by $I^n$,
+\item the kernel and cokernel of $M_{n + 1}/I^nM_{n + 1} \to M_n$
+are $\mathfrak a$-power torsion.
+\end{enumerate}
+Then $(\widetilde{M}_n|_U)$ is in $\textit{Coh}(U, I\mathcal{O}_U)$.
+Conversely, every object of $\textit{Coh}(U, I\mathcal{O}_U)$
+arises in this manner.
+\end{lemma}
+
+\begin{proof}
+We omit the verification that $(\widetilde{M}_n|_U)$ is in
+$\textit{Coh}(U, I\mathcal{O}_U)$. Let $(\mathcal{F}_n)$
+be an object of $\textit{Coh}(U, I\mathcal{O}_U)$.
+By Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}
+we see that $\mathcal{F}_n = \widetilde{M_n}$ for some finite
+$A/I^n$-module $M_n$. After dividing $M_n$ by $H^0_\mathfrak a(M_n)$
+we may assume $M_n \subset H^0(U, \mathcal{F}_n)$, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-divide-by-torsion}
+and the already referenced lemma.
+After replacing inductively $M_{n + 1}$ by the inverse image
+of $M_n$ under the map $M_{n + 1} \to H^0(U, \mathcal{F}_{n + 1})
+\to H^0(U, \mathcal{F}_n)$, we may assume $M_{n + 1}$ maps into
$M_n$. This gives an inverse system $(M_n)$ satisfying (1) and (2)
+such that $\mathcal{F}_n = \widetilde{M_n}$. To see that (3)
+holds, use that $M_{n + 1}/I^nM_{n + 1} \to M_n$ is a map
+of finite $A$-modules which induces an isomorphism after
+applying $\widetilde{\ }$ and restriction to $U$
+(here we use the first referenced lemma one more time).
+\end{proof}
+
+\noindent
In Situation \ref{situation-algebraize} we can study the completion functor of
Cohomology of Schemes, Equation (\ref{coherent-equation-completion-functor})
+\begin{equation}
+\label{equation-completion}
+\textit{Coh}(\mathcal{O}_U)
+\longrightarrow
+\textit{Coh}(U, I\mathcal{O}_U),\quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+\end{equation}
+If $A$ is $I$-adically complete, then this functor is fully faithful
+on suitable subcategories by our earlier work on algebraization of
+formal sections, see Section \ref{section-completion} and
+Lemma \ref{lemma-fully-faithful-inequalities} for some sample results.
+Next, let $(\mathcal{F}_n)$ be an object of $\textit{Coh}(U, I\mathcal{O}_U)$.
+Still assuming $A$ is $I$-adically complete, we can ask:
+When is $(\mathcal{F}_n)$ in the essential image of the completion
+functor displayed above?
+
+\begin{lemma}
+\label{lemma-essential-image-completion}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$
+be an object of $\textit{Coh}(U, I\mathcal{O}_U)$. Consider the
+following conditions:
+\begin{enumerate}
+\item $(\mathcal{F}_n)$ is in the essential image
+of the functor (\ref{equation-completion}),
+\item $(\mathcal{F}_n)$ is the completion of a
+coherent $\mathcal{O}_U$-module,
+\item $(\mathcal{F}_n)$ is the completion of a coherent
+$\mathcal{O}_V$-module for $U \cap Y \subset V \subset U$ open,
+\item $(\mathcal{F}_n)$ is the completion of
+the restriction to $U$ of a coherent $\mathcal{O}_X$-module,
+\item $(\mathcal{F}_n)$ is the restriction to $U$ of
+the completion of a coherent $\mathcal{O}_X$-module,
+\item there exists an object $(\mathcal{G}_n)$ of
+$\textit{Coh}(X, I\mathcal{O}_X)$ whose restriction
+to $U$ is $(\mathcal{F}_n)$.
+\end{enumerate}
+Then conditions (1), (2), (3), (4), and (5) are equivalent and imply (6).
+If $A$ is $I$-adically complete then condition (6) implies the others.
+\end{lemma}
+
+\begin{proof}
+Parts (1) and (2) are equivalent, because the completion of a coherent
+$\mathcal{O}_U$-module $\mathcal{F}$ is by definition the image of
+$\mathcal{F}$ under the functor (\ref{equation-completion}).
+If $V \subset U$ is an open subscheme containing $U \cap Y$, then we have
+$$
+\textit{Coh}(V, I\mathcal{O}_V) =
+\textit{Coh}(U, I\mathcal{O}_U)
+$$
+since the category of coherent $\mathcal{O}_V$-modules supported on
+$V \cap Y$ is the same as the category of coherent $\mathcal{O}_U$-modules
+supported on $U \cap Y$. Thus the completion of a coherent
+$\mathcal{O}_V$-module is an object of $\textit{Coh}(U, I\mathcal{O}_U)$.
+Having said this the equivalence of (2), (3), (4), and (5)
+holds because the functors
+$\textit{Coh}(\mathcal{O}_X) \to \textit{Coh}(\mathcal{O}_U) \to
+\textit{Coh}(\mathcal{O}_V)$ are essentially surjective.
+See Properties, Lemma \ref{properties-lemma-lift-finite-presentation}.
+
+\medskip\noindent
+It is always the case that (5) implies (6). Assume $A$ is $I$-adically complete.
+Then any object of $\textit{Coh}(X, I\mathcal{O}_X)$ corresponds to a finite
+$A$-module by Cohomology of Schemes, Lemma
+\ref{coherent-lemma-inverse-systems-affine}.
+Thus we see that (6) implies (5) in this case.
+\end{proof}
+
+\begin{example}
+\label{example-not-algebraizable}
+Let $k$ be a field. Let $A = k[x, y][[t]]$ with $I = (t)$ and
+$\mathfrak a = (x, y, t)$. Let us use notation as in
+Situation \ref{situation-algebraize}. Observe that
+$U \cap Y = (D(x) \cap Y) \cup (D(y) \cap Y)$ is an affine
+open covering. For $n \geq 1$ consider the invertible
+module $\mathcal{L}_n$ of $\mathcal{O}_U/t^n\mathcal{O}_U$
+given by glueing $A_x/t^nA_x$ and $A_y/t^nA_y$ via the invertible
+element of $A_{xy}/t^nA_{xy}$ which is the image of any power series
+of the form
+$$
+u = 1 + \frac{t}{xy} + \sum_{n \geq 2} a_n \frac{t^n}{(xy)^{\varphi(n)}}
+$$
+with $a_n \in k[x, y]$ and $\varphi(n) \in \mathbf{N}$.
+Then $(\mathcal{L}_n)$ is an invertible object of
+$\textit{Coh}(U, I\mathcal{O}_U)$ which is not the
+completion of a coherent $\mathcal{O}_U$-module $\mathcal{L}$.
+We only sketch the argument and we omit most of the details.
Let $\xi \in U \cap Y$. Then the completion of the stalk
$\mathcal{L}_\xi$ would be an invertible module hence $\mathcal{L}_\xi$
is invertible. Thus there would exist an open $V \subset U$
+containing $U \cap Y$ such that $\mathcal{L}|_V$ is invertible.
+By Divisors, Lemma \ref{divisors-lemma-extend-invertible-module}
+we find an invertible $A$-module $M$ with
+$\widetilde{M}|_V \cong \mathcal{L}|_V$. However the ring $A$
+is a UFD hence we see $M \cong A$ which would imply
+$\mathcal{L}_n \cong \mathcal{O}_U/I^n\mathcal{O}_U$.
+Since $\mathcal{L}_2 \not \cong \mathcal{O}_U/I^2\mathcal{O}_U$
+by construction we get a contradiction as desired.
+
+\medskip\noindent
+Note that if we take $a_n = 0$ for $n \geq 2$, then we see
+that $\lim H^0(U, \mathcal{L}_n)$ is nonzero: in this case
the function $x$ on $D(x)$ and the function $x + t/y$ on $D(y)$
+glue. On the other hand, if we take $a_n = 1$ and $\varphi(n) = 2^n$
+or even $\varphi(n) = n^2$ then the reader can show that
+$\lim H^0(U, \mathcal{L}_n)$ is zero; this gives another proof
+that $(\mathcal{L}_n)$ is not algebraizable in this case.
+\end{example}
+
+\noindent
+If in Situation \ref{situation-algebraize} the ring $A$ is not
+$I$-adically complete, then Lemma \ref{lemma-essential-image-completion}
+suggests the correct thing is to ask whether $(\mathcal{F}_n)$
+is in the essential image of the restriction functor
+$$
+\textit{Coh}(X, I\mathcal{O}_X)
+\longrightarrow
+\textit{Coh}(U, I\mathcal{O}_U)
+$$
+However, we can no longer say that this means $(\mathcal{F}_n)$
+is algebraizable. Thus we introduce the following terminology.
+
+\begin{definition}
+\label{definition-algebraizable}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(U, I\mathcal{O}_U)$. We say
+{\it $(\mathcal{F}_n)$ extends to $X$} if there exists an object
+$(\mathcal{G}_n)$ of $\textit{Coh}(X, I\mathcal{O}_X)$ whose restriction
+to $U$ is isomorphic to $(\mathcal{F}_n)$.
+\end{definition}
+
+\noindent
+This notion is equivalent to being algebraizable over the completion.
+
+\begin{lemma}
+\label{lemma-algebraizable}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(U, I\mathcal{O}_U)$. Let $A', I', \mathfrak a'$
+be the $I$-adic completions of $A, I, \mathfrak a$. Set $X' = \Spec(A')$
+and $U' = X' \setminus V(\mathfrak a')$. The following are equivalent
+\begin{enumerate}
+\item $(\mathcal{F}_n)$ extends to $X$, and
+\item the pullback of $(\mathcal{F}_n)$ to $U'$ is the completion
+of a coherent $\mathcal{O}_{U'}$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall that $A \to A'$ is a flat ring map which induces an isomorphism
+$A/I \to A'/I'$. See
+Algebra, Lemmas \ref{algebra-lemma-completion-flat} and
+\ref{algebra-lemma-completion-complete}.
+Thus $X' \to X$ is a
+flat morphism inducing an isomorphism $Y' \to Y$. Thus $U' \to U$
+is a flat morphism which induces an isomorphism $U' \cap Y' \to U \cap Y$.
+This implies that in the commutative diagram
+$$
+\xymatrix{
+\textit{Coh}(X', I\mathcal{O}_{X'}) \ar[r] &
+\textit{Coh}(U', I\mathcal{O}_{U'}) \\
+\textit{Coh}(X, I\mathcal{O}_X) \ar[u] \ar[r] &
+\textit{Coh}(U, I\mathcal{O}_U) \ar[u]
+}
+$$
+the vertical functors are equivalences. See
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-inverse-systems-pullback-equivalence}.
+The lemma follows formally from this and the results of
+Lemma \ref{lemma-essential-image-completion}.
+\end{proof}
+
+\noindent
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(U, I\mathcal{O}_U)$. To figure out if
+$(\mathcal{F}_n)$ extends to $X$ it makes sense to look at the $A$-module
+\begin{equation}
+\label{equation-guess}
+M = \lim H^0(U, \mathcal{F}_n)
+\end{equation}
+Observe that $M$ has a limit topology which is (a priori) coarser than
+the $I$-adic topology since $M \to H^0(U, \mathcal{F}_n)$ annihilates $I^nM$.
+There are canonical maps
+$$
+\widetilde{M}|_U \to \widetilde{M/I^nM}|_U \to
+\widetilde{H^0(U, \mathcal{F}_n)}|_U \to
+\mathcal{F}_n
+$$
+One could hope that $\widetilde{M}$ restricts to a coherent module
+on $U$ and that $(\mathcal{F}_n)$ is the completion of this module.
+This is naive because this has almost no chance of being true
+if $A$ is not complete. But even if $A$ is $I$-adically complete
+this notion is very difficult to work with.
+A less naive approach is to consider the following requirement.
+
+\begin{definition}
+\label{definition-canonically-algebraizable}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(U, I\mathcal{O}_U)$. We say
{\it $(\mathcal{F}_n)$ canonically extends to $X$} if the
+inverse system
+$$
+\{\widetilde{H^0(U, \mathcal{F}_n)}\}_{n \geq 1}
+$$
+in $\QCoh(\mathcal{O}_X)$ is pro-isomorphic to an object
+$(\mathcal{G}_n)$ of $\textit{Coh}(X, I\mathcal{O}_X)$.
+\end{definition}
+
+\noindent
+We will see in Lemma \ref{lemma-canonically-algebraizable}
+that the condition in Definition \ref{definition-canonically-algebraizable}
+is stronger than the condition of Definition \ref{definition-algebraizable}.
+
+\begin{lemma}
+\label{lemma-canonically-algebraizable}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(U, I\mathcal{O}_U)$. If $(\mathcal{F}_n)$
+canonically extends to $X$, then
+\begin{enumerate}
+\item $(\widetilde{H^0(U, \mathcal{F}_n)})$ is pro-isomorphic to an
+object $(\mathcal{G}_n)$ of $\textit{Coh}(X, I \mathcal{O}_X)$
+unique up to unique isomorphism,
+\item the restriction of $(\mathcal{G}_n)$ to $U$ is isomorphic
+to $(\mathcal{F}_n)$, i.e., $(\mathcal{F}_n)$ extends to $X$,
+\item the inverse system $\{H^0(U, \mathcal{F}_n)\}$
+satisfies the Mittag-Leffler condition, and
+\item the module $M$ in (\ref{equation-guess}) is finite over the
+$I$-adic completion of $A$ and the limit topology on
+$M$ is the $I$-adic topology.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The existence of $(\mathcal{G}_n)$ in (1) follows from
+Definition \ref{definition-canonically-algebraizable}.
+The uniqueness of $(\mathcal{G}_n)$ in (1) follows from
+Lemma \ref{lemma-recognize-formal-coherent-modules}.
+Write $\mathcal{G}_n = \widetilde{M_n}$.
+Then $\{M_n\}$ is an inverse system of finite $A$-modules
+with $M_n = M_{n + 1}/I^n M_{n + 1}$.
+By Definition \ref{definition-canonically-algebraizable}
+the inverse system $\{H^0(U, \mathcal{F}_n)\}$
+is pro-isomorphic to $\{M_n\}$.
+Hence we see that the inverse system $\{H^0(U, \mathcal{F}_n)\}$
+satisfies the Mittag-Leffler condition and that
+$M = \lim M_n$ (as topological modules).
+Thus the properties of $M$ in (4) follow from
+Algebra, Lemmas \ref{algebra-lemma-limit-complete},
+\ref{algebra-lemma-finite-over-complete-ring}, and
+\ref{algebra-lemma-hathat-finitely-generated}.
+Since $U$ is quasi-affine the canonical maps
+$$
+\widetilde{H^0(U, \mathcal{F}_n)}|_U \to \mathcal{F}_n
+$$
+are isomorphisms (Properties, Lemma
+\ref{properties-lemma-quasi-coherent-quasi-affine}).
+We conclude that $(\mathcal{G}_n|_U)$ and $(\mathcal{F}_n)$ are
+pro-isomorphic and hence isomorphic by
+Lemma \ref{lemma-recognize-formal-coherent-modules}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-canonically-extend-base-change}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(U, I\mathcal{O}_U)$. Let $A \to A'$ be a flat ring
+map. Set $X' = \Spec(A')$, let $U' \subset X'$ be the inverse image of $U$,
+and denote $g : U' \to U$ the induced morphism. Set
+$(\mathcal{F}'_n) = (g^*\mathcal{F}_n)$, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-pullback}.
+If $(\mathcal{F}_n)$ canonically extends to $X$, then
+$(\mathcal{F}'_n)$ canonically extends to $X'$.
+Moreover, the extension found in Lemma \ref{lemma-canonically-algebraizable}
+for $(\mathcal{F}_n)$ pulls back to the extension for
+$(\mathcal{F}'_n)$.
+\end{lemma}
+
+\begin{proof}
+Let $f : X' \to X$ be the induced morphism.
+We have $H^0(U', \mathcal{F}'_n) = H^0(U, \mathcal{F}_n) \otimes_A A'$ by
+flat base change, see Cohomology of Schemes, Lemma
+\ref{coherent-lemma-flat-base-change-cohomology}.
+Thus if $(\mathcal{G}_n)$ in $\textit{Coh}(X, I\mathcal{O}_X)$
+is pro-isomorphic to $(\widetilde{H^0(U, \mathcal{F}_n)})$, then
+$(f^*\mathcal{G}_n)$ is pro-isomorphic to
+$$
+(f^*\widetilde{H^0(U, \mathcal{F}_n)}) =
+(\widetilde{H^0(U, \mathcal{F}_n) \otimes_A A'}) =
+(\widetilde{H^0(U', \mathcal{F}'_n)})
+$$
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-done}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. Let $M$ be as in (\ref{equation-guess}).
+Assume
+\begin{enumerate}
\item[(a)] the inverse system $H^0(U, \mathcal{F}_n)$ satisfies the
Mittag-Leffler condition,
+\item[(b)] the limit topology on $M$ agrees with the $I$-adic topology, and
+\item[(c)] the image of $M \to H^0(U, \mathcal{F}_n)$ is a finite $A$-module
+for all $n$.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends canonically to $X$.
+In particular, if $A$ is $I$-adically complete, then
+$(\mathcal{F}_n)$ is the completion of a coherent $\mathcal{O}_U$-module.
+\end{lemma}
+
+\begin{proof}
+Since $H^0(U, \mathcal{F}_n)$ has the Mittag-Leffler condition
+and since the limit topology on $M$ is the $I$-adic topology
+we see that $\{M/I^nM\}$ and $\{H^0(U, \mathcal{F}_n)\}$
+are pro-isomorphic inverse systems of $A$-modules.
+Thus if we set
+$$
+\mathcal{G}_n = \widetilde{M/I^n M}
+$$
+then we see that to verify the condition in
+Definition \ref{definition-canonically-algebraizable}
+it suffices to show that $M$ is a finite module over the
$I$-adic completion of $A$. By the pro-isomorphism above, each
$M/I^nM$ is a quotient of one of the finite $A$-modules of
condition (c), hence is finite. Thus the claim follows from
Algebra, Lemma \ref{algebra-lemma-finite-over-complete-ring}.
+\end{proof}
+
+\noindent
The following is in some sense the most straightforward possible application
of Lemma \ref{lemma-when-done} above.
+
+\begin{lemma}
+\label{lemma-algebraization-principal-variant}
+In Situation \ref{situation-algebraize} let
+$(\mathcal{F}_n)$ be an object of $\textit{Coh}(U, I\mathcal{O}_U)$.
+Assume
+\begin{enumerate}
+\item $I = (f)$ is a principal ideal for a nonzerodivisor $f \in \mathfrak a$,
+\item $\mathcal{F}_n$ is a finite locally free
+$\mathcal{O}_U/f^n\mathcal{O}_U$-module,
+\item $H^1_\mathfrak a(A/fA)$ and $H^2_\mathfrak a(A/fA)$
+are finite $A$-modules.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends canonically to $X$. In particular, if $A$
+is complete, then $(\mathcal{F}_n)$ is the completion of a coherent
+$\mathcal{O}_U$-module.
+\end{lemma}
+
+\begin{proof}
+We will prove this by verifying hypotheses (a), (b), and (c) of
+Lemma \ref{lemma-when-done}.
+
+\medskip\noindent
+Since $\mathcal{F}_n$ is locally free over $\mathcal{O}_U/f^n\mathcal{O}_U$
+we see that we have short exact sequences
+$0 \to \mathcal{F}_n \to \mathcal{F}_{n + 1} \to \mathcal{F}_1 \to 0$
+for all $n$. Thus condition (b) holds by Lemma \ref{lemma-topology-I-adic-f}.
+
+\medskip\noindent
+As $f$ is a nonzerodivisor we obtain short exact sequences
+$$
+0 \to A/f^nA \xrightarrow{f} A/f^{n + 1}A \to A/fA \to 0
+$$
+and we have corresponding short exact sequences
+$0 \to \mathcal{F}_n \to \mathcal{F}_{n + 1} \to \mathcal{F}_1 \to 0$.
+We will use
+Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}
+without further mention. Our assumptions imply that
+$H^0(U, \mathcal{O}_U/f\mathcal{O}_U)$ and
+$H^1(U, \mathcal{O}_U/f\mathcal{O}_U)$
are finite $A$-modules. Hence the same is true for
$H^0(U, \mathcal{F}_1)$ and $H^1(U, \mathcal{F}_1)$, see
+Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-for-finite-locally-free}.
+Using induction and the short exact sequences we find that
+$H^0(U, \mathcal{F}_n)$ are finite $A$-modules for all $n$.
+In this way we see hypothesis (c) is satisfied.
+
+\medskip\noindent
+Finally, as $H^1(U, \mathcal{F}_1)$ is a finite $A$-module
+we can apply Lemma \ref{lemma-ML} to see hypothesis (a) holds.
+\end{proof}
+
+\begin{remark}
+\label{remark-interesting-case-variant}
+In Lemma \ref{lemma-algebraization-principal-variant}
+if $A$ is universally catenary with Cohen-Macaulay
+formal fibres (for example if $A$ has a dualizing complex), then
+the condition that
+$H^1_\mathfrak a(A/fA)$ and $H^2_\mathfrak a(A/fA)$
+are finite $A$-modules, is equivalent with
+$$
+\text{depth}((A/f)_\mathfrak q) + \dim((A/\mathfrak q)_\mathfrak p) > 2
+$$
+for all $\mathfrak q \in V(f) \setminus V(\mathfrak a)$
+and $\mathfrak p \in V(\mathfrak q) \cap V(\mathfrak a)$
+by Local Cohomology, Theorem \ref{local-cohomology-theorem-finiteness}.
+
+\medskip\noindent
+For example, if $A/fA$ is $(S_2)$ and if every irreducible
+component of $Z = V(\mathfrak a)$ has codimension $\geq 3$
+in $Y = \Spec(A/fA)$, then we get the finiteness of
+$H^1_\mathfrak a(A/fA)$ and $H^2_\mathfrak a(A/fA)$.
+This should be contrasted with the slightly weaker conditions
+found in Lemma \ref{lemma-algebraization-principal}
+(see also Remark \ref{remark-interesting-case}).
+\end{remark}
+
+
+
+
+
+
+\section{Algebraization of coherent formal modules, II}
+\label{section-algebraization-modules-general}
+
+\noindent
+We continue the discussion started in
+Section \ref{section-algebraization-modules}.
+This section can be skipped on a first reading.
+
+\begin{lemma}
+\label{lemma-map-kernel-cokernel-on-closed}
+In Situation \ref{situation-algebraize}. Let
+$(\mathcal{F}_n) \to (\mathcal{F}'_n)$ be a morphism of
+$\textit{Coh}(U, I\mathcal{O}_U)$
+whose kernel and cokernel are annihilated by a power of $I$. Then
+\begin{enumerate}
+\item $(\mathcal{F}_n)$ extends to $X$ if and only if
+$(\mathcal{F}'_n)$ extends to $X$, and
+\item $(\mathcal{F}_n)$ is the completion of a coherent $\mathcal{O}_U$-module
+if and only if $(\mathcal{F}'_n)$ is.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (2) follows immediately from
+Cohomology of Schemes, Lemma \ref{coherent-lemma-existence-easy}.
+To see part (1), we first use Lemma \ref{lemma-algebraizable}
+to reduce to the case where $A$ is $I$-adically complete.
+However, in that case (1) reduces to (2) by
+Lemma \ref{lemma-essential-image-completion}.
+\end{proof}
+
+\noindent
The following two lemmas were originally used in the proof
of Lemma \ref{lemma-when-done}. We keep them here for the reader
interested in the intermediate results one can obtain.
+
+\begin{lemma}
+\label{lemma-when-ML}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. If the inverse system
$H^0(U, \mathcal{F}_n)$ satisfies the Mittag-Leffler condition,
then the canonical maps
+$$
+\widetilde{M/I^nM}|_U \to \mathcal{F}_n
+$$
+are surjective for all $n$ where $M$ is as in (\ref{equation-guess}).
+\end{lemma}
+
+\begin{proof}
Surjectivity may be checked on stalks at points $y \in Y \setminus Z$.
+If $y$ corresponds to the prime $\mathfrak q \subset A$, then we can
+choose $f \in \mathfrak a$, $f \not \in \mathfrak q$. Then it suffices
+to show
+$$
+M_f \longrightarrow H^0(U, \mathcal{F}_n)_f = H^0(D(f), \mathcal{F}_n)
+$$
+is surjective as $D(f)$ is affine (equality holds by Properties,
+Lemma \ref{properties-lemma-invert-f-sections}). Since we have the
+Mittag-Leffler property, we find that
+$$
+\Im(M \to H^0(U, \mathcal{F}_n)) =
+\Im(H^0(U, \mathcal{F}_m) \to H^0(U, \mathcal{F}_n))
+$$
+for some $m \geq n$. Using the long exact sequence of cohomology we see
+that
+$$
+\Coker(H^0(U, \mathcal{F}_m) \to H^0(U, \mathcal{F}_n))
+\subset
+H^1(U, \Ker(\mathcal{F}_m \to \mathcal{F}_n))
+$$
+Since $U = X \setminus V(\mathfrak a)$ this $H^1$ is $\mathfrak a$-power
+torsion. Hence after inverting $f$ the cokernel becomes zero.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-topology}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. Let $M$ be as in (\ref{equation-guess}).
+Set
+$$
+\mathcal{G}_n = \widetilde{M/I^nM}.
+$$
+If the limit topology on $M$ agrees with the $I$-adic topology, then
+$\mathcal{G}_n|_U$ is a coherent
+$\mathcal{O}_U$-module and the map of inverse systems
+$$
+(\mathcal{G}_n|_U) \longrightarrow (\mathcal{F}_n)
+$$
+is injective in the abelian category $\textit{Coh}(U, I\mathcal{O}_U)$.
+\end{lemma}
+
+\begin{proof}
+Observe that $\mathcal{G}_n$ is a quasi-coherent $\mathcal{O}_X$-module
+annihilated by $I^n$ and that
+$\mathcal{G}_{n + 1}/I^n\mathcal{G}_{n + 1} = \mathcal{G}_n$.
+Consider
+$$
+M_n = \Im(M \longrightarrow H^0(U, \mathcal{F}_n))
+$$
+The assumption says that the inverse systems $(M_n)$ and
+$(M/I^nM)$ are isomorphic as pro-objects of $\text{Mod}_A$.
+Pick $f \in \mathfrak a$ so $D(f) \subset U$ is an affine open. Then we have
+$$
+(M_n)_f \subset H^0(U, \mathcal{F}_n)_f = H^0(D(f), \mathcal{F}_n)
+$$
+Equality holds by Properties, Lemma \ref{properties-lemma-invert-f-sections}.
+Thus $\widetilde{M_n}|_U \to \mathcal{F}_n$ is injective.
+It follows that $\widetilde{M_n}|_U$ is a coherent module
+(Cohomology of Schemes, Lemma
+\ref{coherent-lemma-coherent-Noetherian-quasi-coherent-sub-quotient}).
+Since $M \to M/I^nM$ is surjective and factors as
+$M_{n'} \to M/I^nM$ for some $n' \geq n$ we find that $\mathcal{G}_n|_U$
+is coherent as the quotient of a coherent module.
Combined with the initial remarks of the proof we conclude
+that $(\mathcal{G}_n|_U)$ indeed forms an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$.
+Finally, to show the injectivity of the map
+it suffices to show that
+$$
+\lim (M/I^nM)_f = \lim H^0(D(f), \mathcal{G}_n) \to
+\lim H^0(D(f), \mathcal{F}_n)
+$$
+is injective, see Cohomology of Schemes, Lemmas
+\ref{coherent-lemma-inverse-systems-abelian} and
+\ref{coherent-lemma-inverse-systems-affine}.
+The injectivity of $\lim (M_n)_f \to \lim H^0(D(f), \mathcal{F}_n)$
+is clear (see above) and by our remark on pro-systems we have
+$\lim (M_n)_f = \lim (M/I^nM)_f$. This finishes the proof.
+\end{proof}
+
+
+
+
+
+\section{A distance function}
+\label{section-distance}
+
+\noindent
+Let $Y$ be a Noetherian scheme and let $Z \subset Y$ be a closed subset.
+We define a function
+\begin{equation}
+\label{equation-delta-Z}
+\delta^Y_Z = \delta_Z : Y \longrightarrow \mathbf{Z}_{\geq 0} \cup \{\infty\}
+\end{equation}
+which measures the ``distance'' of a point of $Y$ from $Z$.
+For an informal discussion, please see Remark \ref{remark-discussion}.
+Let $y \in Y$. We set $\delta_Z(y) = \infty$ if $y$ is contained
+in a connected component of $Y$ which does not meet $Z$.
+If $y$ is contained in a connected component of $Y$ which meets $Z$,
+then we can find $k \geq 0$ and a system
+$$
+V_0 \subset W_0 \supset V_1 \subset W_1 \supset \ldots \supset
+V_k \subset W_k
+$$
of integral closed subschemes of $Y$ such that $V_0 \subset Z$
and $y \in W_k$ is the generic point. Set
+$c_i = \text{codim}(V_i, W_i)$ for $i = 0, \ldots, k$
+and $b_i = \text{codim}(V_{i + 1}, W_i)$ for $i = 0, \ldots, k - 1$.
+For such a system we set
+$$
+\delta(V_0, W_0, V_1, \ldots, W_k) =
+k +
+\max_{i = 0, 1, \ldots, k}
+(c_i + c_{i + 1} + \ldots + c_k - b_i - b_{i + 1} - \ldots - b_{k - 1})
+$$
+This is $\geq k$ as we can take $i = k$ and we have $c_k \geq 0$.
+Finally, we set
+$$
+\delta_Z(y) = \min \delta(V_0, W_0, V_1, \ldots, W_k)
+$$
+where the minimum is over all systems of integral closed subschemes of $Y$
+as above.
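
\medskip\noindent
To get a feeling for the definition, here is a quick sanity check
(a special case of Example \ref{example-distance} below).
Take $Y = \mathbf{A}^2_k$ over a field $k$, take $Z = \{z\}$ a closed
point, and let $y$ be the generic point of an integral curve
$C \subset Y$ with $z \not \in C$. For a closed point $p \in C$
the system
$$
V_0 = \{z\} \subset W_0 = Y \supset V_1 = \{p\} \subset W_1 = C
$$
has $c_0 = 2$, $b_0 = 2$, $c_1 = 1$ and hence
$$
\delta(V_0, W_0, V_1, W_1) = 1 + \max(c_0 + c_1 - b_0,\ c_1) = 2
$$
In fact $\delta_Z(y) = 2$, see Example \ref{example-distance}.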
+
+\begin{lemma}
+\label{lemma-discussion}
+Let $Y$ be a Noetherian scheme and let $Z \subset Y$ be a closed subset.
+\begin{enumerate}
+\item For $y \in Y$ we have $\delta_Z(y) = 0 \Leftrightarrow y \in Z$.
+\item The subsets $\{y \in Y \mid \delta_Z(y) \leq k\}$ are
+stable under specialization.
+\item For $y \in Y$ and $z \in \overline{\{y\}} \cap Z$ we have
+$\dim(\mathcal{O}_{\overline{\{y\}}, z}) \geq \delta_Z(y)$.
\item If $\delta$ is a dimension function on $Y$, then
$\delta(y) \leq \delta_Z(y) + \delta_{\max}$ where $\delta_{\max}$
is the maximum value of $\delta$ on $Z$.
+\item If $Y = \Spec(A)$ is the spectrum of a catenary Noetherian local ring
+with maximal ideal $\mathfrak m$ and $Z = \{\mathfrak m\}$, then
+$\delta_Z(y) = \dim(\overline{\{y\}})$.
+\item Given a pattern of specializations
+$$
+\xymatrix{
+& y'_0 \ar@{~>}[ld] \ar@{~>}[rd] &
+& y'_1 \ar@{~>}[ld] & \ldots
+& y'_{k - 1} \ar@{~>}[rd] &
+\\
+y_0 & &
+y_1 & &
+\ldots & &
+y_k = y
+}
+$$
+between points of $Y$ with $y_0 \in Z$ and $y_i' \leadsto y_i$
+an immediate specialization, then $\delta_Z(y_k) \leq k$.
+\item If $Y' \subset Y$ is an open subscheme, then
+$\delta^{Y'}_{Y' \cap Z}(y') \geq \delta^Y_Z(y')$ for $y' \in Y'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is essentially true by definition. Namely, if $y \in Z$,
+then we can take $k = 0$ and $V_0 = W_0 = \overline{\{y\}}$.
+
+\medskip\noindent
Proof of (2). Let $y \leadsto y'$ be a nontrivial specialization and let
$V_0 \subset W_0 \supset V_1 \subset W_1 \supset \ldots \subset W_k$
be a system for $y$. There are two cases.
+Case I: $V_k = W_k$, i.e., $c_k = 0$. In this case
+we can set $V'_k = W'_k = \overline{\{y'\}}$.
+An easy computation shows that
+$\delta(V_0, W_0, \ldots, V'_k, W'_k) \leq
+\delta(V_0, W_0, \ldots, V_k, W_k)$
+because only $b_{k - 1}$ is changed into a bigger integer.
+Case II: $V_k \not = W_k$, i.e., $c_k > 0$. Observe that
+in this case $\max_{i = 0, 1, \ldots, k}
+(c_i + c_{i + 1} + \ldots + c_k - b_i - b_{i + 1} - \ldots - b_{k - 1}) > 0$.
Hence if we set $V'_{k + 1} = W'_{k + 1} = \overline{\{y'\}}$,
+then although $k$ is replaced by $k + 1$, the maximum now looks like
+$$
+\max_{i = 0, 1, \ldots, k + 1}
+(c_i + c_{i + 1} + \ldots + c_k + c_{k + 1}
+- b_i - b_{i + 1} - \ldots - b_{k - 1} - b_k)
+$$
with $c_{k + 1} = 0$ and $b_k = \text{codim}(V'_{k + 1}, W_k) > 0$.
+This is strictly smaller than
+$\max_{i = 0, 1, \ldots, k}
+(c_i + c_{i + 1} + \ldots + c_k - b_i - b_{i + 1} - \ldots - b_{k - 1})$
+and hence
+$\delta(V_0, W_0, \ldots, V'_{k + 1}, W'_{k + 1}) \leq
+\delta(V_0, W_0, \ldots, V_k, W_k)$ as desired.
+
+\medskip\noindent
+Proof of (3). Given $y \in Y$ and $z \in \overline{\{y\}} \cap Z$
+we get the system
+$$
+V_0 = \overline{\{z\}} \subset W_0 = \overline{\{y\}}
+$$
+and $c_0 = \text{codim}(V_0, W_0) = \dim(\mathcal{O}_{\overline{\{y\}}, z})$
+by Properties, Lemma \ref{properties-lemma-codimension-local-ring}.
+Thus we see that $\delta(V_0, W_0) = 0 + c_0 = c_0$ which proves
+what we want.
+
+\medskip\noindent
+Proof of (4). Let $\delta$ be a dimension function on $Y$.
+Let $V_0 \subset W_0 \supset V_1 \subset W_1 \supset \ldots \subset W_k$
+be a system for $y$. Let $y'_i \in W_i$ and $y_i \in V_i$ be the
generic points, so $y_0 \in Z$ and $y'_k = y$. Then we see that
+$$
+\delta(y_i) - \delta(y_{i - 1}) =
+\delta(y'_{i - 1}) - \delta(y_{i - 1}) - \delta(y'_{i - 1}) + \delta(y_i) =
+c_{i - 1} - b_{i - 1}
+$$
Finally, we have $\delta(y'_k) - \delta(y_k) = c_k$.
+Thus we see that
+$$
+\delta(y) - \delta(y_0) =
+c_0 + \ldots + c_k - b_0 - \ldots - b_{k - 1}
+$$
+We conclude
+$\delta(V_0, W_0, \ldots, W_k) \geq k + \delta(y) - \delta(y_0)$
+which proves what we want.
+
+\medskip\noindent
+Proof of (5). The function $\delta(y) = \dim(\overline{\{y\}})$
+is a dimension function. Hence $\delta(y) \leq \delta_Z(y)$ by
+part (4). By part (3) we have $\delta_Z(y) \leq \delta(y)$
+and we are done.
+
+\medskip\noindent
+Proof of (6). Given such a sequence of points, we may assume
+all the specializations $y'_i \leadsto y_{i + 1}$ are nontrivial
+(otherwise we can shorten the chain of specializations).
Then we set $V_i = \overline{\{y_i\}}$ and $W_i = \overline{\{y'_i\}}$
and we compute $\delta(V_0, W_0, V_1, \ldots, W_{k - 1}) = k$ because all
the codimensions $c_i$ of $V_i \subset W_i$ are $1$ and all $b_i > 0$.
This implies $\delta_Z(y'_{k - 1}) \leq k$ as $y'_{k - 1}$ is the generic
point of $W_{k - 1}$. Then $\delta_Z(y) \leq k$ by part (2) as $y$ is a
specialization of $y'_{k - 1}$.
+
+\medskip\noindent
Proof of (7). This is clear as there are fewer systems to consider
+in the computation of $\delta^{Y'}_{Y' \cap Z}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-change-distance-function}
+Let $Y$ be a universally catenary Noetherian scheme. Let $Z \subset Y$
+be a closed subscheme. Let $f : Y' \to Y$ be a finite type
+morphism all of whose fibres have dimension $\leq e$. Set $Z' = f^{-1}(Z)$.
+Then
+$$
+\delta_Z(y) \leq \delta_{Z'}(y') + e - \text{trdeg}_{\kappa(y)}(\kappa(y'))
+$$
+for $y' \in Y'$ with image $y \in Y$.
+\end{lemma}
+
+\begin{proof}
+If $\delta_{Z'}(y') = \infty$, then there is nothing to prove.
+If $\delta_{Z'}(y') < \infty$, then we choose a system
+of integral closed subschemes
+$$
+V'_0 \subset W'_0 \supset V'_1 \subset W'_1 \supset \ldots \subset W'_k
+$$
+of $Y'$ with $V'_0 \subset Z'$ and $y'$ the generic point of $W'_k$
+such that $\delta_{Z'}(y') = \delta(V'_0, W'_0, \ldots, W'_k)$.
+Denote
+$$
+V_0 \subset W_0 \supset V_1 \subset W_1 \supset \ldots \subset W_k
+$$
+the scheme theoretic images of the above schemes in $Y$. Observe
+that $y$ is the generic point of $W_k$ and that $V_0 \subset Z$.
+For each $i$ we look at the diagram
+$$
+\xymatrix{
+V'_i \ar[r] \ar[d] & W'_i \ar[d] & V'_{i + 1} \ar[l] \ar[d] \\
+V_i \ar[r] & W_i & V_{i + 1} \ar[l]
+}
+$$
+Denote $n_i$ the relative dimension of $V'_i/V_i$ and
+$m_i$ the relative dimension of $W'_i/W_i$; more precisely
+these are the transcendence degrees of the corresponding extensions of
+the function fields. Set
+$c_i = \text{codim}(V_i, W_i)$,
+$c'_i = \text{codim}(V'_i, W'_i)$,
+$b_i = \text{codim}(V_{i + 1}, W_i)$, and
+$b'_i = \text{codim}(V'_{i + 1}, W'_i)$.
+By the dimension formula we have
+$$
+c_i = c'_i + n_i - m_i
+\quad\text{and}\quad
+b_i = b'_i + n_{i + 1} - m_i
+$$
+See Morphisms, Lemma \ref{morphisms-lemma-dimension-formula}.
+Hence $c_i - b_i = c'_i - b'_i + n_i - n_{i + 1}$. Thus we see that
+\begin{align*}
+& c_i + c_{i + 1} + \ldots + c_k - b_i - b_{i + 1} - \ldots - b_{k - 1} \\
+& =
+c'_i + c'_{i + 1} + \ldots + c'_k - b'_i - b'_{i + 1} - \ldots - b'_{k - 1}
++ n_i - n_k + c_k - c'_k \\
+& =
+c'_i + c'_{i + 1} + \ldots + c'_k - b'_i - b'_{i + 1} - \ldots - b'_{k - 1}
++ n_i - m_k
+\end{align*}
+Thus we see that
+\begin{align*}
+\max_{i = 0, \ldots, k}
+& (c_i + c_{i + 1} + \ldots + c_k - b_i - b_{i + 1} - \ldots - b_{k - 1}) \\
+& =
+\max_{i = 0, \ldots, k}
+(c'_i + c'_{i + 1} + \ldots + c'_k - b'_i - b'_{i + 1} - \ldots - b'_{k - 1}
++ n_i - m_k) \\
+& =
+\max_{i = 0, \ldots, k}
+(c'_i + c'_{i + 1} + \ldots + c'_k - b'_i - b'_{i + 1} - \ldots - b'_{k - 1}
++ n_i) - m_k \\
+& \leq
+\max_{i = 0, \ldots, k}
+(c'_i + c'_{i + 1} + \ldots + c'_k - b'_i - b'_{i + 1} - \ldots - b'_{k - 1})
++ e - m_k
+\end{align*}
+Since $m_k = \text{trdeg}_{\kappa(y)}(\kappa(y'))$ we conclude that
+$$
+\delta(V_0, W_0, \ldots, W_k) \leq
+\delta(V'_0, W'_0, \ldots, W'_k) + e - \text{trdeg}_{\kappa(y)}(\kappa(y'))
+$$
+as desired.
+\end{proof}
+
+\begin{remark}
+\label{remark-discussion}
+Let $Y$ be a Noetherian scheme and let $Z \subset Y$ be a closed subset.
+By Lemma \ref{lemma-discussion} we have
+$$
+\delta_Z(y) \leq \min
+\left\{ k \middle|
+\begin{matrix}
+\text{ there exist specializations in }Y \\
+y_0 \leftarrow y'_0 \rightarrow y_1 \leftarrow y'_1 \rightarrow \ldots
+\leftarrow y'_{k - 1} \rightarrow y_k = y \\
+\text{ with }y_0 \in Z\text{ and }y_i' \leadsto y_i
+\text{ immediate}
+\end{matrix}
+\right\}
+$$
+We claim that if $Y$ is of finite type over a field,
then equality holds. If we ever need this we
will formulate a precise statement and prove it here.
+However, in general if we define $\delta_Z$
+by the right hand side of this inequality, then we don't
+know if Lemma \ref{lemma-change-distance-function} remains true.
+\end{remark}
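
\medskip\noindent
For instance, take $Y = \mathbf{A}^2_k$, $Z = \{z\}$ a closed point,
and let $y \neq z$ be another closed point. Taking $y'_0$ to be the
generic point of a line through $z$ and $y$ we obtain specializations
$$
y_0 = z \leftarrow y'_0 \rightarrow y_1 = y
$$
with $y'_0 \leadsto y_0$ immediate. Thus the right hand side of the
inequality above is $\leq 1$, and in fact both sides are equal to $1$
by part (1) of Example \ref{example-distance}.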
+
+\begin{example}
+\label{example-distance}
+Let $k$ be a field and $Y = \mathbf{A}^n_k$. Denote
+$\delta : Y \to \mathbf{Z}_{\geq 0}$ the usual dimension function.
+\begin{enumerate}
+\item If $Z = \{z\}$ for some closed point $z$, then
+\begin{enumerate}
+\item $\delta_Z(y) = \delta(y)$ if $y \leadsto z$ and
+\item $\delta_Z(y) = \delta(y) + 1$ if $y \not \leadsto z$.
+\end{enumerate}
+\item If $Z$ is a closed subvariety and $W = \overline{\{y\}}$, then
+\begin{enumerate}
+\item $\delta_Z(y) = 0$ if $W \subset Z$,
+\item $\delta_Z(y) = \dim(W) - \dim(Z)$ if $Z$ is contained in $W$,
+\item $\delta_Z(y) = 1$ if $\dim(W) \leq \dim(Z)$ and $W \not \subset Z$,
+\item $\delta_Z(y) = \dim(W) - \dim(Z) + 1$ if $\dim(W) > \dim(Z)$
+and $Z \not \subset W$.
+\end{enumerate}
+\end{enumerate}
A generalization of case (1) is the following: let $Y$ be of finite type
over a field and let $Z = \{z\}$ be a closed point. Then
$\delta_Z(y) = \delta(y) + t$
+where $t$ is the minimum length of a chain of curves connecting
+$z$ to a closed point of $\overline{\{y\}}$.
+\end{example}
+
+
+
+
+
+
+
+\section{Algebraization of coherent formal modules, III}
+\label{section-uniqueness}
+
+\noindent
+We continue the discussion started in Sections
+\ref{section-algebraization-modules} and
+\ref{section-algebraization-modules-general}.
+We will use the distance function of Section \ref{section-distance}
to formulate some natural conditions on
+coherent formal modules in Situation \ref{situation-algebraize}.
+
+\medskip\noindent
+In Situation \ref{situation-algebraize} given a point $y \in U \cap Y$
+we can consider the $I$-adic completion
+$$
+\mathcal{O}_{X, y}^\wedge = \lim \mathcal{O}_{X, y}/I^n\mathcal{O}_{X, y}
+$$
+This is a Noetherian local ring complete with respect to
+$I\mathcal{O}_{X, y}^\wedge$ with maximal ideal $\mathfrak m_y^\wedge$, see
+Algebra, Section \ref{algebra-section-completion-noetherian}.
+Let $(\mathcal{F}_n)$ be an object of $\textit{Coh}(U, I\mathcal{O}_U)$.
+Let us define the ``stalk'' of $(\mathcal{F}_n)$ at $y$ by the formula
+$$
+\mathcal{F}_y^\wedge = \lim \mathcal{F}_{n, y}
+$$
+This is a finite module over $\mathcal{O}_{X, y}^\wedge$. See
+Algebra, Lemmas \ref{algebra-lemma-limit-complete} and
+\ref{algebra-lemma-finite-over-complete-ring}.
+
+\begin{definition}
+\label{definition-s-d-inequalities}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. Let $a, b$ be integers.
+Let $\delta^Y_Z$ be as in (\ref{equation-delta-Z}).
+We say
+{\it $(\mathcal{F}_n)$ satisfies the $(a, b)$-inequalities} if for
+$y \in U \cap Y$ and a prime $\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$
+with $\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$
+\begin{enumerate}
+\item if $V(\mathfrak p) \cap V(I\mathcal{O}_{X, y}^\wedge) \not =
+\{\mathfrak m_y^\wedge\}$, then
+$$
+\text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) + \delta^Y_Z(y) \geq a
+\quad\text{or}\quad
+\text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) + \delta^Y_Z(y) > b
+$$
+\item if $V(\mathfrak p) \cap V(I\mathcal{O}_{X, y}^\wedge) =
+\{\mathfrak m_y^\wedge\}$, then
+$$
+\text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) + \delta^Y_Z(y) > a
+$$
+\end{enumerate}
+We say {\it $(\mathcal{F}_n)$ satisfies the strict $(a, b)$-inequalities}
+if for $y \in U \cap Y$ and a prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ with
+$\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$
+we have
+$$
+\text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) + \delta^Y_Z(y) > a
+\quad\text{or}\quad
+\text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) + \delta^Y_Z(y) > b
+$$
+\end{definition}
+
+\noindent
+Here are some elementary observations.
+
+\begin{lemma}
+\label{lemma-elementary}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. Let $a, b$ be integers.
+\begin{enumerate}
+\item If $(\mathcal{F}_n)$ is annihilated by a power of $I$, then
+$(\mathcal{F}_n)$ satisfies the $(a, b)$-inequalities for any $a, b$.
+\item If $(\mathcal{F}_n)$ satisfies the $(a + 1, b)$-inequalities, then
+$(\mathcal{F}_n)$ satisfies the strict $(a, b)$-inequalities.
+\end{enumerate}
+If $\text{cd}(A, I) \leq d$ and $A$ has a dualizing complex, then
+\begin{enumerate}
+\item[(3)] $(\mathcal{F}_n)$ satisfies the $(s, s + d)$-inequalities
+if and only if for all $y \in U \cap Y$ the tuple
+$\mathcal{O}_{X, y}^\wedge, I\mathcal{O}_{X, y}^\wedge,
+\{\mathfrak m_y^\wedge\}, \mathcal{F}_y^\wedge, s - \delta^Y_Z(y), d$
+is as in Situation \ref{situation-bootstrap}.
+\item[(4)]
+If $(\mathcal{F}_n)$ satisfies the strict $(s, s + d)$-inequalities, then
+$(\mathcal{F}_n)$ satisfies the $(s, s + d)$-inequalities.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Immediate except for part (4) which is a consequence of
+Lemma \ref{lemma-bootstrap-bis-bis} and the translation in (3).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-explain-2-3-cd-1}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. If $\text{cd}(A, I) = 1$, then
$(\mathcal{F}_n)$ satisfies the $(2, 3)$-inequalities if and only if
+$$
+\text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) + \delta^Y_Z(y) > 3
+$$
+for all $y \in U \cap Y$ and $\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$
+with $\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$.
+\end{lemma}
+
+\begin{proof}
+Observe that for a prime $\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$,
+$\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$
+we have $V(\mathfrak p) \cap V(I\mathcal{O}_{X, y}^\wedge) =
+\{\mathfrak m_y^\wedge\}
+\Leftrightarrow \dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) = 1$
+as $\text{cd}(A, I) = 1$.
+See Local Cohomology, Lemmas
+\ref{local-cohomology-lemma-cd-change-rings} and
+\ref{local-cohomology-lemma-cd-bound-dim-local}.
+OK, consider the three numbers
+$\alpha = \text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) \geq 0$,
+$\beta = \dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) \geq 1$, and
+$\gamma = \delta^Y_Z(y) \geq 1$.
+Then we see Definition \ref{definition-s-d-inequalities} requires
+\begin{enumerate}
+\item if $\beta > 1$, then
+$\alpha + \gamma \geq 2$ or $\alpha + \beta + \gamma > 3$, and
+\item if $\beta = 1$, then $\alpha + \gamma > 2$.
+\end{enumerate}
If $\beta = 1$, then $\alpha + \gamma > 2$ is equivalent to
$\alpha + \beta + \gamma > 3$. If $\beta > 1$, then
$\alpha + \gamma \geq 2$ implies $\alpha + \beta + \gamma \geq 4$,
so either alternative in (1) gives $\alpha + \beta + \gamma > 3$,
and conversely. Thus in both cases the requirement is equivalent to
$\alpha + \beta + \gamma > 3$.
+\end{proof}
+
+\noindent
+In the rest of this section, which we suggest the reader skip on a first
+reading, we will show that, when $A$ is $I$-adically complete,
+the category of $(\mathcal{F}_n)$ of $\textit{Coh}(U, I\mathcal{O}_U)$
+which extend to $X$ and satisfy the
+strict $(1, 1 + \text{cd}(A, I))$-inequalities
+is equivalent to a full subcategory of the category of coherent
+$\mathcal{O}_U$-modules.
+
+\begin{lemma}
+\label{lemma-sanity}
+In Situation \ref{situation-algebraize} let $\mathcal{F}$ be a
+coherent $\mathcal{O}_U$-module and $d \geq 1$. Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete, has a dualizing complex, and
+$\text{cd}(A, I) \leq d$,
+\item the completion $\mathcal{F}^\wedge$ of $\mathcal{F}$
+satisfies the strict $(1, 1 + d)$-inequalities.
+\end{enumerate}
+Let $x \in X$ be a point. Let $W = \overline{\{x\}}$.
+If $W \cap Y$ has an irreducible component contained in $Z$
+and one which is not, then $\text{depth}(\mathcal{F}_x) \geq 1$.
+\end{lemma}
+
+\begin{proof}
+Let $W \cap Y = W_1 \cup \ldots \cup W_n$ be the decomposition into
+irreducible components. By assumption, after renumbering, we can find
+$0 < m < n$ such that $W_1, \ldots, W_m \subset Z$ and
+$W_{m + 1}, \ldots, W_n \not \subset Z$. We conclude that
+$$
+W \cap Y \setminus
+\left((W_1 \cup \ldots \cup W_m) \cap (W_{m + 1} \cup \ldots \cup W_n)\right)
+$$
+is disconnected. By Lemma \ref{lemma-connected} we can find
+$1 \leq i \leq m < j \leq n$ and
+$z \in W_i \cap W_j$ such that $\dim(\mathcal{O}_{W, z}) \leq d + 1$.
+Choose an immediate specialization $y \leadsto z$ with
+$y \in W_j$, $y \not \in Z$; existence of $y$ follows from
+Properties, Lemma \ref{properties-lemma-complement-closed-point-Jacobson}.
+Observe that $\delta^Y_Z(y) = 1$ and $\dim(\mathcal{O}_{W, y}) \leq d$.
+Let $\mathfrak p \subset \mathcal{O}_{X, y}$ be the prime corresponding to $x$.
+Let $\mathfrak p' \subset \mathcal{O}_{X, y}^\wedge$ be a minimal prime
+over $\mathfrak p\mathcal{O}_{X, y}^\wedge$. Then we have
+$$
+\text{depth}(\mathcal{F}_x) =
+\text{depth}((\mathcal{F}^\wedge_y)_{\mathfrak p'})
+\quad\text{and}\quad
+\dim(\mathcal{O}_{W, y}) = \dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p')
+$$
+See Algebra, Lemma \ref{algebra-lemma-apply-grothendieck-module} and
+Local Cohomology, Lemma \ref{local-cohomology-lemma-change-completion}.
+Now we read off the conclusion from the inequalities given to us.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-recover}
+In Situation \ref{situation-algebraize} let $\mathcal{F}$ be a
+coherent $\mathcal{O}_U$-module and $d \geq 1$. Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete, has a dualizing complex, and
+$\text{cd}(A, I) \leq d$,
+\item the completion $\mathcal{F}^\wedge$ of $\mathcal{F}$
+satisfies the strict $(1, 1+ d)$-inequalities, and
+\item for $x \in U$ with $\overline{\{x\}} \cap Y \subset Z$
+we have $\text{depth}(\mathcal{F}_x) \geq 2$.
+\end{enumerate}
+Then $H^0(U, \mathcal{F}) \to \lim H^0(U, \mathcal{F}/I^n\mathcal{F})$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+We will prove this by showing that Lemma \ref{lemma-application-H0} applies.
+Thus we let $x \in \text{Ass}(\mathcal{F})$ with $x \not \in Y$.
+Set $W = \overline{\{x\}}$.
+By condition (3) we see that $W \cap Y \not \subset Z$.
+By Lemma \ref{lemma-sanity} we see that no irreducible
+component of $W \cap Y$ is contained in $Z$.
+Thus if $z \in W \cap Z$, then there is an immediate
+specialization $y \leadsto z$, $y \in W \cap Y$, $y \not \in Z$.
+For existence of $y$ use
+Properties, Lemma \ref{properties-lemma-complement-closed-point-Jacobson}.
+Then $\delta^Y_Z(y) = 1$ and the assumption
+implies that $\dim(\mathcal{O}_{W, y}) > d$.
+Hence $\dim(\mathcal{O}_{W, z}) > 1 + d$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-inequalities}
+In Situation \ref{situation-algebraize} let $\mathcal{F}$ be a
+coherent $\mathcal{O}_U$-module and $d \geq 1$. Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete, has a dualizing complex, and
+$\text{cd}(A, I) \leq d$,
+\item the completion $\mathcal{F}^\wedge$ of $\mathcal{F}$
+satisfies the strict $(1, 1 + d)$-inequalities, and
+\item for $x \in U$ with $\overline{\{x\}} \cap Y \subset Z$
+we have $\text{depth}(\mathcal{F}_x) \geq 2$.
+\end{enumerate}
+Then the map
+$$
+\Hom_U(\mathcal{G}, \mathcal{F})
+\longrightarrow
+\Hom_{\textit{Coh}(U, I\mathcal{O}_U)}(\mathcal{G}^\wedge, \mathcal{F}^\wedge)
+$$
+is bijective for every coherent $\mathcal{O}_U$-module $\mathcal{G}$.
+\end{lemma}
+
+\begin{proof}
+Set $\mathcal{H} = \SheafHom_{\mathcal{O}_U}(\mathcal{G}, \mathcal{F})$.
+Using Cohomology of Schemes, Lemma
+\ref{coherent-lemma-hom-into-depth} or
+More on Algebra, Lemma \ref{more-algebra-lemma-hom-into-depth}
+we see that the completion of $\mathcal{H}$
+satisfies the strict $(1, 1 + d)$-inequalities and that for
+$x \in U$ with $\overline{\{x\}} \cap Y \subset Z$
+we have $\text{depth}(\mathcal{H}_x) \geq 2$. Details omitted.
+Thus by Lemma \ref{lemma-recover} we have
+$$
+\Hom_U(\mathcal{G}, \mathcal{F}) =
+H^0(U, \mathcal{H}) =
+\lim H^0(U, \mathcal{H}/\mathcal{I}^n\mathcal{H}) =
+\Mor_{\textit{Coh}(U, I\mathcal{O}_U)}
+(\mathcal{G}^\wedge, \mathcal{F}^\wedge)
+$$
+See Cohomology of Schemes, Lemma \ref{coherent-lemma-completion-internal-hom}
+for the final equality.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-construct-unique}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(U, I\mathcal{O}_U)$ and $d \geq 1$. Assume
+\begin{enumerate}
+\item $A$ is $I$-adically complete, has a dualizing complex, and
+$\text{cd}(A, I) \leq d$,
+\item $(\mathcal{F}_n)$ is the completion of a coherent $\mathcal{O}_U$-module,
+\item $(\mathcal{F}_n)$ satisfies the strict $(1, 1 + d)$-inequalities.
+\end{enumerate}
+Then there exists a unique coherent $\mathcal{O}_U$-module $\mathcal{F}$
+whose completion is $(\mathcal{F}_n)$ such that for
+$x \in U$ with $\overline{\{x\}} \cap Y \subset Z$
+we have $\text{depth}(\mathcal{F}_x) \geq 2$.
+\end{lemma}
+
\begin{proof}
Choose a coherent $\mathcal{O}_U$-module $\mathcal{F}'$ whose
completion is $(\mathcal{F}_n)$; this is possible by assumption (2). Let
$T = \{x \in U \mid \overline{\{x\}} \cap Y \subset Z\}$.
We will construct $\mathcal{F}$ by applying Local Cohomology,
Lemma \ref{local-cohomology-lemma-make-S2-along-T-simple}
with $\mathcal{F}'$ and $T$.
Then uniqueness will follow from the mapping property
of Lemma \ref{lemma-fully-faithful-inequalities}.

\medskip\noindent
Since $T$ is stable under specialization in $U$ the only
thing to check is the following: if $x' \leadsto x$ is an
immediate specialization of points of $U$ with $x \in T$
and $x' \not \in T$, then $\text{depth}(\mathcal{F}'_{x'}) \geq 1$.
Set $W = \overline{\{x\}}$ and $W' = \overline{\{x'\}}$.
Since $x' \not \in T$ we see that $W' \cap Y$ is not contained in $Z$.
If $W' \cap Y$ contains an irreducible component contained in $Z$,
then we are done by Lemma \ref{lemma-sanity}.
If not, we choose an irreducible component $W_1$ of $W \cap Y$ and
an irreducible component $W'_1$ of $W' \cap Y$ with $W_1 \subset W'_1$.
Let $z \in W_1$ be the generic point. Let $y \leadsto z$, $y \in W'_1$
be an immediate specialization with $y \not \in Z$; the existence of $y$
follows from $W'_1 \not \subset Z$ (see above) and
Properties, Lemma \ref{properties-lemma-complement-closed-point-Jacobson}.
Then we have the following: $z \in Z$, $x \leadsto z$,
$x' \leadsto y \leadsto z$, $y \in Y \setminus Z$, and $\delta^Y_Z(y) = 1$.
By Local Cohomology, Lemma \ref{local-cohomology-lemma-cd-bound-dim-local}
and the fact that $z$
is a generic point of $W \cap Y$ we have
$\dim(\mathcal{O}_{W, z}) \leq d$.
Since $x' \leadsto x$ is an immediate specialization we have
$\dim(\mathcal{O}_{W', z}) \leq d + 1$.
Since $y \not = z$ we conclude
$\dim(\mathcal{O}_{W', y}) \leq d$.
If $\text{depth}(\mathcal{F}'_{x'}) = 0$, then we would get
a contradiction with assumption (3); details about the passage
from $\mathcal{O}_{X, y}$ to its completion omitted.
This finishes the proof.
\end{proof}
+
+
+
+
+
+
+
+
+\section{Algebraization of coherent formal modules, IV}
+\label{section-algebraization-modules-local}
+
+\noindent
+In this section we prove two stronger versions of
+Lemma \ref{lemma-algebraization-principal-variant}
+in the local case, namely, Lemmas \ref{lemma-algebraization-principal}
+and \ref{lemma-algebraization-principal-bis}.
+Although these lemmas will be obsoleted by the more general
+Proposition \ref{proposition-cd-1}, their proofs are significantly
+easier.
+
+\begin{lemma}
+\label{lemma-algebraization-principal}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ is local and $\mathfrak a = \mathfrak m$ is the maximal ideal,
+\item $A$ has a dualizing complex,
+\item $I = (f)$ is a principal ideal for a nonzerodivisor $f \in \mathfrak m$,
+\item $\mathcal{F}_n$ is a finite locally free
+$\mathcal{O}_U/f^n\mathcal{O}_U$-module,
+\item if $\mathfrak p \in V(f) \setminus \{\mathfrak m\}$, then
+$\text{depth}((A/f)_\mathfrak p) + \dim(A/\mathfrak p) > 1$, and
+\item if $\mathfrak p \not \in V(f)$ and
+$V(\mathfrak p) \cap V(f) \not = \{\mathfrak m\}$, then
+$\text{depth}(A_\mathfrak p) + \dim(A/\mathfrak p) > 3$.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends canonically to $X$. In particular, if $A$
+is complete, then $(\mathcal{F}_n)$ is the completion of a coherent
+$\mathcal{O}_U$-module.
+\end{lemma}
+
+\begin{proof}
+We will prove this by verifying hypotheses (a), (b), and (c) of
+Lemma \ref{lemma-when-done}.
+
+\medskip\noindent
+Since $\mathcal{F}_n$ is locally free over $\mathcal{O}_U/f^n\mathcal{O}_U$
+we see that we have short exact sequences
+$0 \to \mathcal{F}_n \to \mathcal{F}_{n + 1} \to \mathcal{F}_1 \to 0$
+for all $n$. Thus condition (b) holds by Lemma \ref{lemma-topology-I-adic-f}.
+
+\medskip\noindent
+By induction on $n$ and the short exact sequences
+$0 \to A/f^n \to A/f^{n + 1} \to A/f \to 0$ we see that
+the associated primes of $A/f^nA$ agree with the associated
+primes of $A/fA$. Since the associated points of $\mathcal{F}_n$
+correspond to the associated primes of $A/f^nA$ not equal to $\mathfrak m$
by assumption (4), we conclude that
+$M_n = H^0(U, \mathcal{F}_n)$ is a finite $A$-module by (5) and
+Local Cohomology, Proposition \ref{local-cohomology-proposition-kollar}.
+Thus hypothesis (c) holds.
+
+\medskip\noindent
+To finish the proof it suffices to show that there exists an $n > 1$
+such that the image of
+$$
+H^1(U, \mathcal{F}_n) \longrightarrow H^1(U, \mathcal{F}_1)
+$$
+has finite length as an $A$-module. Namely, this will imply hypothesis (a)
+by Lemma \ref{lemma-ML-better}. The image is independent
+of $n$ for $n$ large enough by Lemma \ref{lemma-ML-local}.
+Let $\omega_A^\bullet$ be a normalized dualizing complex for $A$.
+By the local duality theorem and Matlis duality
+(Dualizing Complexes, Lemma \ref{dualizing-lemma-special-case-local-duality}
+and Proposition \ref{dualizing-proposition-matlis})
+our claim is equivalent to: the image of
+$$
+\text{Ext}^{-2}_A(M_1, \omega_A^\bullet) \to
+\text{Ext}^{-2}_A(M_n, \omega_A^\bullet)
+$$
+has finite length for $n \gg 1$. The modules in question are
+finite $A$-modules supported at $V(f)$. Thus it suffices to show that this
+map is zero after localization at a prime $\mathfrak q$
+containing $f$ and different from $\mathfrak m$.
+Let $\omega_{A_\mathfrak q}^\bullet$ be a normalized
+dualizing complex on $A_\mathfrak q$ and recall that
+$\omega_{A_\mathfrak q}^\bullet =
+(\omega_A^\bullet)_\mathfrak q[\dim(A/\mathfrak q)]$ by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-dimension-function}.
+Using the local structure of $\mathcal{F}_n$ given in (4)
+we find that it suffices to show the vanishing of
+$$
+\text{Ext}^{-2 + \dim(A/\mathfrak q)}_{A_\mathfrak q}(
+A_\mathfrak q/f, \omega_{A_\mathfrak q}^\bullet)
+\to
+\text{Ext}^{-2 + \dim(A/\mathfrak q)}_{A_\mathfrak q}(
+A_\mathfrak q/f^n, \omega_{A_\mathfrak q}^\bullet)
+$$
for $n$ large enough. If $\dim(A/\mathfrak q) > 2$, then this is immediate from
+Local Cohomology, Lemma \ref{local-cohomology-lemma-sitting-in-degrees}.
+For the other cases we will use the long exact sequence
+$$
+\ldots
+\xrightarrow{f^n}
+H^{-1}(\omega_{A_\mathfrak q}^\bullet)
+\to
+\text{Ext}^{-1}_{A_\mathfrak q}(
+A_\mathfrak q/f^n, \omega_{A_\mathfrak q}^\bullet) \to
+H^0(\omega_{A_\mathfrak q}^\bullet)
+\xrightarrow{f^n}
+H^0(\omega_{A_\mathfrak q}^\bullet)
+\to
+\text{Ext}^0_{A_\mathfrak q}(
+A_\mathfrak q/f^n, \omega_{A_\mathfrak q}^\bullet) \to 0
+$$
+If $\dim(A/\mathfrak q) = 2$, then
+$H^0(\omega_{A_\mathfrak q}^\bullet) = 0$
+because $\text{depth}(A_\mathfrak q) \geq 1$ as
+$f$ is a nonzerodivisor.
+Thus the long exact sequence shows the condition is that
+$$
+f^{n - 1} :
+H^{-1}(\omega_{A_\mathfrak q}^\bullet)/f \to
+H^{-1}(\omega_{A_\mathfrak q}^\bullet)/f^n
+$$
is zero. Now $H^{-1}(\omega_{A_\mathfrak q}^\bullet)$ is a finite
+module supported in the primes $\mathfrak p \subset A_\mathfrak q$ such that
+$\text{depth}(A_\mathfrak p) + \dim((A/\mathfrak p)_\mathfrak q) \leq 1$.
+Since $\dim((A/\mathfrak p)_\mathfrak q) = \dim(A/\mathfrak p) - 2$
+condition (6) tells us these primes are contained in $V(f)$.
Thus we obtain the desired vanishing for $n$ large enough.
+Finally, if $\dim(A/\mathfrak q) = 1$, then condition (5) combined
+with the fact that $f$ is a nonzerodivisor
ensures that $A_\mathfrak q$ has depth at least $2$. Hence
+$H^0(\omega_{A_\mathfrak q}^\bullet) =
+H^{-1}(\omega_{A_\mathfrak q}^\bullet) = 0$
+and the long exact sequence shows the claim is
+equivalent to the vanishing of
+$$
+f^{n - 1} :
+H^{-2}(\omega_{A_\mathfrak q}^\bullet)/f \to
+H^{-2}(\omega_{A_\mathfrak q}^\bullet)/f^n
+$$
Now $H^{-2}(\omega_{A_\mathfrak q}^\bullet)$ is a finite
+module supported in the primes $\mathfrak p \subset A_\mathfrak q$
+such that $\text{depth}(A_\mathfrak p) + \dim((A/\mathfrak p)_\mathfrak q)
+\leq 2$. By condition (6) all of these primes are contained in $V(f)$.
Thus we obtain the desired vanishing for $n$ large enough.
+\end{proof}
+
+\begin{remark}
+\label{remark-interesting-case}
+Let $(A, \mathfrak m)$ be a complete Noetherian normal local domain
+of dimension $\geq 4$ and let $f \in \mathfrak m$ be nonzero.
+Then assumptions (1), (2), (3), (5), and (6) of
+Lemma \ref{lemma-algebraization-principal}
are satisfied. Thus vector bundles
+on the formal completion of $U$ along $U \cap V(f)$
+can be algebraized. In Lemma \ref{lemma-algebraization-principal-bis}
+we will generalize this to more general coherent formal modules;
+please also compare with Remark \ref{remark-interesting-case-bis}.
+\end{remark}
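\noindent
For example, the remark applies to $A = k[[x_1, x_2, x_3, x_4]]$,
a power series ring over a field $k$, with $f = x_1$: this $A$ is
a complete Noetherian regular, hence normal, local domain of
dimension $4$. In this case
$U = \Spec(A) \setminus \{\mathfrak m\}$ is the punctured spectrum
and the remark tells us that every finite locally free module on the
formal completion of $U$ along $U \cap V(x_1)$ is the completion
of a coherent $\mathcal{O}_U$-module.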
+
+\begin{lemma}
+\label{lemma-helper-algebraize}
+In Situation \ref{situation-algebraize} let $(M_n)$ be an inverse system of
+$A$-modules as in Lemma \ref{lemma-system-of-modules} and let
+$(\mathcal{F}_n)$ be the corresponding object of
+$\textit{Coh}(U, I\mathcal{O}_U)$. Let $d \geq \text{cd}(A, I)$
+and $s \geq 0$ be integers.
+With notation as above assume
+\begin{enumerate}
+\item $A$ is local with maximal ideal $\mathfrak m = \mathfrak a$,
+\item $A$ has a dualizing complex, and
+\item $(\mathcal{F}_n)$ satisfies the $(s, s + d)$-inequalities
+(Definition \ref{definition-s-d-inequalities}).
+\end{enumerate}
+Let $E$ be an injective hull of the residue field of $A$. Then for $i \leq s$
+there exists a finite $A$-module $N$ annihilated by a power
+of $I$ and for $n \gg 0$ compatible maps
+$$
+H^i_\mathfrak m(M_n) \to \Hom_A(N, E)
+$$
+whose cokernels are finite length $A$-modules and whose kernels $K_n$
+form an inverse system such that $\Im(K_{n''} \to K_{n'})$ has finite
+length for $n'' \gg n' \gg 0$.
+\end{lemma}
+
+\begin{proof}
+Let $\omega_A^\bullet$ be a normalized dualizing complex. Then
+$\delta^Y_Z = \delta$ is the dimension function associated with
+this dualizing complex.
+Observe that $\Ext^{-i}_A(M_n, \omega_A^\bullet)$ is a finite $A$-module
+annihilated by $I^n$. Fix $0 \leq i \leq s$.
+Below we will find $n_1 > n_0 > 0$ such that if we set
+$$
+N = \Im(\Ext^{-i}_A(M_{n_0}, \omega_A^\bullet) \to
+\Ext^{-i}_A(M_{n_1}, \omega_A^\bullet))
+$$
+then the kernels of the maps
+$$
+N \to \Ext^{-i}_A(M_n, \omega_A^\bullet),\quad n \geq n_1
+$$
+are finite length $A$-modules and the cokernels $Q_n$ form a
+system such that $\Im(Q_{n'} \to Q_{n''})$ has finite length
+for $n'' \gg n' \gg n_1$. This is equivalent to the statement that
+the system $\{\Ext^{-i}_A(M_n, \omega_A^\bullet)\}_{n \geq 1}$
+is essentially constant in the quotient of the category of finite
+$A$-modules modulo the Serre subcategory of finite length $A$-modules.
+By the local duality theorem
+(Dualizing Complexes, Lemma \ref{dualizing-lemma-special-case-local-duality})
+and Matlis duality
+(Dualizing Complexes, Proposition \ref{dualizing-proposition-matlis})
+we conclude that there are maps
+$$
+H^i_\mathfrak m(M_n) \to \Hom_A(N, E),\quad n \geq n_1
+$$
+as in the statement of the lemma.
+
+\medskip\noindent
+Pick $f \in \mathfrak m$. Let $B = A_f^\wedge$ be the $I$-adic completion
+of the localization $A_f$. Recall that
+$\omega_{A_f}^\bullet = \omega_A^\bullet \otimes_A A_f$
+and $\omega_B^\bullet = \omega_A^\bullet \otimes_A B$ are dualizing
+complexes (Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing-localize}
+and \ref{dualizing-lemma-completion-henselization-dualizing}).
+Let $M$ be the finite $B$-module $\lim M_{n, f}$ (compare with
+discussion in Cohomology of Schemes, Lemma
+\ref{coherent-lemma-inverse-systems-affine}). Then
+$$
+\Ext^{-i}_A(M_n, \omega_A^\bullet)_f =
+\Ext^{-i}_{A_f}(M_{n, f}, \omega_{A_f}^\bullet) =
+\Ext^{-i}_B(M/I^n M, \omega_B^\bullet)
+$$
+Since $\mathfrak m$ can be generated by finitely many $f \in \mathfrak m$
+it suffices to show that for each $f$ the system
+$$
+\{\Ext^{-i}_B(M/I^n M, \omega_B^\bullet)\}_{n \geq 1}
+$$
+is essentially constant. Some details omitted.
+
+\medskip\noindent
Let $\mathfrak q \in V(IB)$ be a prime ideal. Then $\mathfrak q$ corresponds
+to a point $y \in U \cap Y$. Observe that
+$\delta(\mathfrak q) = \dim(\overline{\{y\}})$
+is also the value of the dimension function associated to $\omega_B^\bullet$
+(we omit the details; use that $\omega_B^\bullet$ is gotten from
+$\omega_A^\bullet$ by tensoring up with $B$). Assumption
+(3) guarantees via Lemma \ref{lemma-elementary}
+that Lemma \ref{lemma-algebraize-local-cohomology-bis}
+applies to
+$B_\mathfrak q, IB_\mathfrak q, \mathfrak qB_\mathfrak q, M_\mathfrak q$
+with $s$ replaced by $s - \delta(y)$. We obtain that
+$$
+H^{i - \delta(\mathfrak q)}_{\mathfrak qB_\mathfrak q}(M_\mathfrak q) =
+\lim H^{i - \delta(\mathfrak q)}_{\mathfrak qB_\mathfrak q}(
+(M/I^nM)_\mathfrak q)
+$$
+and this module is annihilated by a power of $I$.
+By Lemma \ref{lemma-terrific} we find that the inverse systems
+$H^{i - \delta(\mathfrak q)}_{\mathfrak qB_\mathfrak q}((M/I^nM)_\mathfrak q)$
+are essentially constant with value
+$H^{i - \delta(\mathfrak q)}_{\mathfrak qB_\mathfrak q}(M_\mathfrak q)$.
+Since $(\omega_B^\bullet)_\mathfrak q[-\delta(\mathfrak q)]$ is a normalized
+dualizing complex on $B_\mathfrak q$ the local duality theorem
+shows that the system
+$$
+\Ext^{-i}_B(M/I^n M, \omega_B^\bullet)_\mathfrak q
+$$
+is essentially constant with value
+$\Ext^{-i}_B(M, \omega_B^\bullet)_\mathfrak q$.
+
+\medskip\noindent
+To finish the proof we globalize as in the proof of
+Lemma \ref{lemma-bootstrap}; the argument here is easier
+because we know the value of our system already. Namely, consider the maps
+$$
+\alpha_n :
+\Ext^{-i}_B(M/I^n M, \omega_B^\bullet)
+\longrightarrow
+\Ext^{-i}_B(M, \omega_B^\bullet)
+$$
+for varying $n$. By the above, for every $\mathfrak q$ we can find an
+$n$ such that $\alpha_n$ is surjective after localization at $\mathfrak q$.
+Since $B$ is Noetherian and $\Ext^{-i}_B(M, \omega_B^\bullet)$
+a finite module, we can find an $n$ such that $\alpha_n$ is surjective.
For any $n$ such that $\alpha_n$ is surjective, given a prime
$\mathfrak q \in V(IB)$ we can find an $n' > n$ such that
$\Ker(\alpha_n)$ maps to zero in $\Ext^{-i}_B(M/I^{n'}M, \omega_B^\bullet)$
at least after localizing at $\mathfrak q$.
Since $\Ker(\alpha_n)$ is a finite $B$-module and since supports of
sections are quasi-compact, we can find an $n'$ such that
$\Ker(\alpha_n)$ maps to zero in $\Ext^{-i}_B(M/I^{n'}M, \omega_B^\bullet)$.
In this way we see that $\Ext^{-i}_B(M/I^n M, \omega_B^\bullet)$
is essentially constant with value $\Ext^{-i}_B(M, \omega_B^\bullet)$.
+This finishes the proof.
+\end{proof}
+
+\noindent
+Here is a more general version of Lemma \ref{lemma-algebraization-principal}.
+
+\begin{lemma}
+\label{lemma-algebraization-principal-bis}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ is local and $\mathfrak a = \mathfrak m$ is the maximal ideal,
+\item $A$ has a dualizing complex,
+\item $I = (f)$ is a principal ideal,
+\item $(\mathcal{F}_n)$ satisfies the $(2, 3)$-inequalities.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends to $X$. In particular, if $A$ is
+$I$-adically complete, then $(\mathcal{F}_n)$ is the completion
+of a coherent $\mathcal{O}_U$-module.
+\end{lemma}
+
+\begin{proof}
+Recall that $\textit{Coh}(U, I\mathcal{O}_U)$ is an abelian category, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-abelian}.
+Over affine opens of $U$ the object $(\mathcal{F}_n)$
+corresponds to a finite module over a Noetherian ring
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-affine}).
+Thus the kernels of the maps $f^N : (\mathcal{F}_n) \to (\mathcal{F}_n)$
+stabilize for $N$ large enough. By
+Lemmas \ref{lemma-map-kernel-cokernel-on-closed} and
+\ref{lemma-essential-image-completion}
+in order to prove the lemma
+we may replace $(\mathcal{F}_n)$ by the image of such a map.
+Thus we may assume $f$ is injective on $(\mathcal{F}_n)$.
+After this replacement the equivalent conditions of
+Lemma \ref{lemma-equivalent-f-good} hold for the inverse system
+$(\mathcal{F}_n)$ on $U$. We will use this without further mention
+in the rest of the proof.
+
+\medskip\noindent
+We will check hypotheses (a), (b), and (c) of
+Lemma \ref{lemma-when-done}.
+Hypothesis (b) holds by Lemma \ref{lemma-topology-I-adic-f}.
+
+\medskip\noindent
Pick an inverse system of modules $\{M_n\}$ as in
+Lemma \ref{lemma-system-of-modules}.
+We may assume $H^0_\mathfrak m(M_n) = 0$ by replacing $M_n$ by
+$M_n/H^0_\mathfrak m(M_n)$ if necessary. Then we obtain short exact
+sequences
+$$
+0 \to M_n \to H^0(U, \mathcal{F}_n) \to H^1_\mathfrak m(M_n) \to 0
+$$
+for all $n$. Let $E$ be an injective hull of the residue field of $A$.
+By Lemma \ref{lemma-helper-algebraize} and our current assumption (4)
we can choose an integer $m \geq 0$, finite $A$-modules
+$N_1$ and $N_2$ annihilated by $f^c$ for some $c \geq 0$ and
+compatible systems of maps
+$$
+H^i_\mathfrak m(M_n) \to \Hom_A(N_i, E), \quad i = 1, 2
+$$
+for $n \geq m$
+with the properties stated in the lemma.
+
+\medskip\noindent
+We know that $M = \lim H^0(U, \mathcal{F}_n)$ is an $A$-module whose
+limit topology is the $f$-adic topology. Thus, given $n$, the module
+$M/f^nM$ is a subquotient of $H^0(U, \mathcal{F}_N)$ for some $N \gg n$.
+Looking at the information obtained above we see that
+$f^cM/f^nM$ is a finite $A$-module. Since $f$ is a nonzerodivisor
+on $M$ we conclude that $M/f^{n - c}M$ is a finite $A$-module.
+In this way we see that hypothesis (c) of Lemma \ref{lemma-when-done} holds.
+
+\medskip\noindent
+Next, we study the module
+$$
+Ob = \lim H^1(U, \mathcal{F}_n) = \lim H^2_\mathfrak m(M_n)
+$$
+For $n \geq m$ let $K_n$ be the kernel of the map
+$H^2_\mathfrak m(M_n) \to \Hom_A(N_2, E)$.
+Set $K = \lim K_n$. We obtain an exact sequence
+$$
+0 \to K \to Ob \to \Hom_A(N_2, E)
+$$
+By the above the limit topology on $Ob = \lim H^2_\mathfrak m(M_n)$
+is the $f$-adic topology. Since $N_2$ is annihilated by $f^c$
+we conclude the same is true for the limit topology on $K = \lim K_n$.
+Thus $K/fK$ is a subquotient of $K_n$ for $n \gg 1$.
However, since $\{K_n\}$ is pro-isomorphic to an inverse system of
+finite length $A$-modules (by the conclusion of
+Lemma \ref{lemma-helper-algebraize})
+we conclude that $K/fK$ is a subquotient of a finite length
+$A$-module. It follows that $K$ is a finite $A$-module, see
+Algebra, Lemma \ref{algebra-lemma-finite-over-complete-ring}.
+(In fact, we even see that $\dim(\text{Supp}(K)) = 1$ but
+we will not need this.)
+
+\medskip\noindent
+Given $n \geq 1$ consider the boundary map
+$$
+\delta_n :
+H^0(U, \mathcal{F}_n)
+\longrightarrow
+\lim_N H^1(U, f^n\mathcal{F}_N) \xrightarrow{f^{-n}} Ob
+$$
+(the second map is an isomorphism)
+coming from the short exact sequences
+$$
+0 \to f^n\mathcal{F}_N \to \mathcal{F}_N \to \mathcal{F}_n \to 0
+$$
+For each $n$ set
+$$
+P_n = \Im(H^0(U, \mathcal{F}_{n + m}) \to H^0(U, \mathcal{F}_n))
+$$
+where $m$ is as above. Observe that $\{P_n\}$ is an inverse
+system and that the map $f : \mathcal{F}_n \to \mathcal{F}_{n + 1}$
+on global sections maps $P_n$ into $P_{n + 1}$.
+If $p \in P_n$, then $\delta_n(p) \in K \subset Ob$
+because $\delta_n(p)$ maps to zero in
+$H^1(U, f^n\mathcal{F}_{n + m}) = H^2_\mathfrak m(M_m)$
+and the composition of $\delta_n$ and $Ob \to \Hom_A(N_2, E)$
+factors through $H^2_\mathfrak m(M_m)$ by our choice of $m$.
+Hence
+$$
+\bigoplus\nolimits_{n \geq 0} \Im(P_n \to Ob)
+$$
+is a finite graded $A[T]$-module where $T$ acts via multiplication by $f$.
+Namely, it is a graded submodule of $K[T]$ and $K$ is finite over $A$.
+Arguing as in the proof of
+Cohomology, Lemma \ref{cohomology-lemma-ML-general}\footnote{Choose
+homogeneous generators of the form $\delta_{n_j}(p_j)$ for the displayed
+module. Then if $k = \max(n_j)$ we find that for $n \geq k$
+and any $p \in P_n$ we can find $a_j \in A$ such that
+$p - \sum a_j f^{n - n_j} p_j$ is in the kernel of $\delta_n$
+and hence in the image of $P_{n'}$ for all $n' \geq n$.
+Thus $\Im(P_n \to P_{n - k}) = \Im(P_{n'} \to P_{n - k})$
+for all $n' \geq n$.}
+we find that the inverse system $\{P_n\}$ satisfies ML.
+Since $\{P_n\}$ is pro-isomorphic to $\{H^0(U, \mathcal{F}_n)\}$
+we conclude that $\{H^0(U, \mathcal{F}_n)\}$ has ML.
+Thus hypothesis (a) of Lemma \ref{lemma-when-done}
+holds and the proof is complete.
+\end{proof}
+
+\noindent
We can unwind condition (4) of
+Lemma \ref{lemma-algebraization-principal-bis} as follows.
+
+\begin{lemma}
+\label{lemma-unwinding-conditions}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ is local with maximal ideal $\mathfrak a = \mathfrak m$,
+\item $\text{cd}(A, I) = 1$.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ satisfies the $(2, 3)$-inequalities if and only
if for all $y \in U \cap Y$ with $\dim(\overline{\{y\}}) = 1$ and every prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$,
+$\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$ we have
+$$
+\text{depth}((\mathcal{F}_y^\wedge)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) > 2
+$$
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-explain-2-3-cd-1}
+without further mention. In particular, we see the condition is necessary.
+Conversely, suppose the condition is true.
+Note that $\delta^Y_Z(y) = \dim(\overline{\{y\}})$ by
+Lemma \ref{lemma-discussion}. Let us write $\delta$ for this function.
Let $y \in U \cap Y$. If $\delta(y) > 2$, then the inequality
of Lemma \ref{lemma-explain-2-3-cd-1} holds. If $\delta(y) = 1$,
then the inequality holds by the assumption of the lemma.
Finally, suppose $\delta(y) = 2$. We have to show that
+$$
+\text{depth}((\mathcal{F}_y^\wedge)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) > 1
+$$
+Choose a specialization $y \leadsto y'$ with $\delta(y') = 1$. Then
+there is a ring map $\mathcal{O}_{X, y'}^\wedge \to \mathcal{O}_{X, y}^\wedge$
+which identifies the target with the completion of the localization
+of $\mathcal{O}_{X, y'}^\wedge$ at a prime $\mathfrak q$
+with $\dim(\mathcal{O}_{X, y'}^\wedge/\mathfrak q) = 1$.
+Moreover, we then obtain
+$$
+\mathcal{F}_y^\wedge =
+\mathcal{F}_{y'}^\wedge
+\otimes_{\mathcal{O}_{X, y'}^\wedge}
+\mathcal{O}_{X, y}^\wedge
+$$
+Let $\mathfrak p' \subset \mathcal{O}_{X, y'}^\wedge$ be the image
+of $\mathfrak p$.
+By Local Cohomology, Lemma \ref{local-cohomology-lemma-change-completion}
+we have
+\begin{align*}
+\text{depth}((\mathcal{F}_y^\wedge)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p)
+& =
\text{depth}((\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}) +
\dim((\mathcal{O}_{X, y'}^\wedge/\mathfrak p')_{\mathfrak q}) \\
& =
\text{depth}((\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}) +
\dim(\mathcal{O}_{X, y'}^\wedge/\mathfrak p') - 1
\end{align*}
where the last equality holds because the specialization is immediate.
Thus the lemma is proven by the assumed inequality for $y', \mathfrak p'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unwinding-conditions-bis}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an object
+of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ is local with maximal ideal $\mathfrak a = \mathfrak m$,
+\item $A$ has a dualizing complex,
+\item $\text{cd}(A, I) = 1$,
+\item for $y \in U \cap Y$ the module $\mathcal{F}_y^\wedge$
+is finite locally free outside $V(I\mathcal{O}_{X, y}^\wedge)$,
+for example if $\mathcal{F}_n$ is a finite locally free
+$\mathcal{O}_U/I^n\mathcal{O}_U$-module, and
+\item one of the following is true
+\begin{enumerate}
+\item $A_f$ is $(S_2)$ and every irreducible component of $X$
+not contained in $Y$ has dimension $\geq 4$, or
+\item if $\mathfrak p \not \in V(f)$ and
+$V(\mathfrak p) \cap V(f) \not = \{\mathfrak m\}$, then
+$\text{depth}(A_\mathfrak p) + \dim(A/\mathfrak p) > 3$.
+\end{enumerate}
+\end{enumerate}
+Then $(\mathcal{F}_n)$ satisfies the $(2, 3)$-inequalities.
+\end{lemma}
+
+\begin{proof}
+We will use the criterion of Lemma \ref{lemma-unwinding-conditions}.
Let $y \in U \cap Y$ with $\dim(\overline{\{y\}}) = 1$ and let
$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ be a prime with
+$\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$.
+Condition (4) shows that
+$\text{depth}((\mathcal{F}_y^\wedge)_\mathfrak p) =
+\text{depth}((\mathcal{O}_{X, y}^\wedge)_\mathfrak p)$.
+Thus we have to prove
+$$
+\text{depth}((\mathcal{O}_{X, y}^\wedge)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) > 2
+$$
+Let $\mathfrak p_0 \subset A$ be the image of $\mathfrak p$.
+Let $\mathfrak q \subset A$ be the prime corresponding to $y$.
+By Local Cohomology, Lemma
+\ref{local-cohomology-lemma-change-completion}
+we have
+\begin{align*}
+\text{depth}((\mathcal{O}_{X, y}^\wedge)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p)
+& =
+\text{depth}(A_{\mathfrak p_0}) + \dim((A/\mathfrak p_0)_\mathfrak q) \\
+& =
+\text{depth}(A_{\mathfrak p_0}) + \dim(A/\mathfrak p_0) - 1
+\end{align*}
+If (5)(a) holds, then we get that this is
+$$
+\geq \min(2, \dim(A_{\mathfrak p_0})) + \dim(A/\mathfrak p_0) - 1
+$$
+Note that in any case $\dim(A/\mathfrak p_0) \geq 2$. Hence if
+we get $2$ for the minimum, then we are done. If not we get
+$$
+\dim(A_{\mathfrak p_0}) + \dim(A/\mathfrak p_0) - 1 \geq 4 - 1
+$$
+because every component of $\Spec(A)$ passing through $\mathfrak p_0$
+has dimension $\geq 4$. If (5)(b) holds, then we win immediately.
+\end{proof}
+
+\begin{remark}
+\label{remark-interesting-case-bis}
+Let $(A, \mathfrak m)$ be a Noetherian local ring which has a
+dualizing complex and is complete with respect to $f \in \mathfrak m$.
+Let $(\mathcal{F}_n)$ be an object of $\textit{Coh}(U, f\mathcal{O}_U)$
+where $U$ is the punctured spectrum of $A$.
+Set $Y = V(f) \subset X = \Spec(A)$.
+If for $y \in U \cap V(f)$ closed in $U$, i.e., with
+$\dim(\overline{\{y\}}) = 1$, we assume the
+$\mathcal{O}_{X, y}^\wedge$-module $\mathcal{F}_y^\wedge$
+satisfies the following two conditions
+\begin{enumerate}
+\item $\mathcal{F}_y^\wedge[1/f]$ is $(S_2)$ as a
+$\mathcal{O}_{X, y}^\wedge[1/f]$-module, and
+\item for $\mathfrak p \in \text{Ass}(\mathcal{F}_y^\wedge[1/f])$
+we have $\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) \geq 3$.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ is the completion of a coherent module on $U$.
+This follows from Lemmas \ref{lemma-algebraization-principal-bis}
+and \ref{lemma-unwinding-conditions}.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Improving coherent formal modules}
+\label{section-improving-formal-modules}
+
+\noindent
+Let $X$ be a Noetherian scheme. Let $Y \subset X$ be a closed subscheme
+with quasi-coherent sheaf of ideals $\mathcal{I} \subset \mathcal{O}_X$.
+Let $(\mathcal{F}_n)$ be an object of $\textit{Coh}(X, \mathcal{I})$.
+In this section we construct maps
+$(\mathcal{F}_n) \to (\mathcal{F}'_n)$
+similar to the maps constructed in
+Local Cohomology, Section \ref{local-cohomology-section-improve}
+for coherent modules. For a point $y \in Y$ we set
+$$
+\mathcal{O}_{X, y}^\wedge = \lim \mathcal{O}_{X, y}/\mathcal{I}^n_y, \quad
+\mathcal{I}_y^\wedge = \lim \mathcal{I}_y/\mathcal{I}^n_y
+\quad\text{and}\quad
+\mathfrak m_y^\wedge = \lim \mathfrak m_y/\mathcal{I}_y^n
+$$
+Then $\mathcal{O}_{X, y}^\wedge$ is a Noetherian local ring
+with maximal ideal $\mathfrak m_y^\wedge$ complete with respect to
+$\mathcal{I}_y^\wedge = \mathcal{I}_y\mathcal{O}_{X, y}^\wedge$.
+We also set
+$$
+\mathcal{F}_y^\wedge = \lim \mathcal{F}_{n, y}
+$$
+Then $\mathcal{F}_y^\wedge$ is a finite module over
+$\mathcal{O}_{X, y}^\wedge$ with
+$\mathcal{F}_y^\wedge/(\mathcal{I}_y^\wedge)^n\mathcal{F}_y^\wedge =
+\mathcal{F}_{n, y}$ for all $n$, see Algebra, Lemmas
+\ref{algebra-lemma-limit-complete} and
+\ref{algebra-lemma-finite-over-complete-ring}.
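
\noindent
For example, if $X = \Spec(A_0)$ is affine, $\mathcal{I}$ corresponds
to an ideal $I_0 \subset A_0$, and $y \in Y$ corresponds to the prime
$\mathfrak q \subset A_0$, then
$$
\mathcal{O}_{X, y}^\wedge =
\lim\ (A_0)_\mathfrak q/I_0^n(A_0)_\mathfrak q
$$
is the $I_0$-adic completion of the local ring $(A_0)_\mathfrak q$.
In particular, $\mathcal{O}_{X, y}^\wedge$ is in general not the
completion of $\mathcal{O}_{X, y}$ with respect to its maximal ideal.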
+
+\begin{lemma}
+\label{lemma-divide-torsion-formal-coherent-module}
+In the situation above assume $X$ locally has a dualizing complex.
+Let $T \subset Y$ be a subset stable under specialization.
+Assume for $y \in T$ and for a nonmaximal prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ with
+$V(\mathfrak p) \cap V(\mathcal{I}^\wedge_y) = \{\mathfrak m_y^\wedge\}$
+we have
+$$
\text{depth}_{(\mathcal{O}_{X, y}^\wedge)_\mathfrak p}
+((\mathcal{F}^\wedge_y)_\mathfrak p) > 0
+$$
+Then there exists a canonical map
+$(\mathcal{F}_n) \to (\mathcal{F}_n')$
+of inverse systems of coherent $\mathcal{O}_X$-modules
+with the following properties
+\begin{enumerate}
+\item for $y \in T$ we have $\text{depth}(\mathcal{F}'_{n, y}) \geq 1$,
+\item $(\mathcal{F}'_n)$ is isomorphic as a pro-system to an object
+$(\mathcal{G}_n)$ of $\textit{Coh}(X, \mathcal{I})$,
+\item the induced morphism
+$(\mathcal{F}_n) \to (\mathcal{G}_n)$ of
+$\textit{Coh}(X, \mathcal{I})$ is surjective with kernel
+annihilated by a power of $\mathcal{I}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For every $n$ we let $\mathcal{F}_n \to \mathcal{F}'_n$ be the surjection
+constructed in
+Local Cohomology, Lemma \ref{local-cohomology-lemma-get-depth-1-along-Z}.
+Since this is the quotient of $\mathcal{F}_n$ by the subsheaf
+of sections supported on $T$ we see that we get canonical maps
+$\mathcal{F}'_{n + 1} \to \mathcal{F}'_n$ such that we obtain
+a map $(\mathcal{F}_n) \to (\mathcal{F}_n')$
+of inverse systems of coherent $\mathcal{O}_X$-modules.
+Property (1) holds by construction.
+
+\medskip\noindent
+To prove properties (2) and (3) we may assume that $X = \Spec(A_0)$ is
+affine and $A_0$ has a dualizing complex. Let $I_0 \subset A_0$ be the
ideal corresponding to $Y$. Let $A, I$ be the $I_0$-adic completions of
+$A_0, I_0$. For later use we observe that $A$ has a dualizing complex
+(Dualizing Complexes, Lemma \ref{dualizing-lemma-ubiquity-dualizing}).
+Let $M$ be the finite $A$-module corresponding to $(\mathcal{F}_n)$, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-affine}.
+Then $\mathcal{F}_n$ corresponds to $M_n = M/I^nM$. Recall that
+$\mathcal{F}'_n$ corresponds to the quotient $M'_n = M_n / H^0_T(M_n)$,
+see Local Cohomology, Lemma \ref{local-cohomology-lemma-get-depth-1-along-Z}
+and its proof.
+
+\medskip\noindent
+Set $s = 0$ and $d = \text{cd}(A, I)$.
+We claim that $A, I, T, M, s, d$ satisfy assumptions (1), (3), (4), (6)
+of Situation \ref{situation-bootstrap}.
+Namely, (1) and (3) are immediate from the above, (4) is
+the empty condition as $s = 0$, and (6) is the assumption
+we made in the statement of the lemma.
+
+\medskip\noindent
+By Theorem \ref{theorem-final-bootstrap} we see that $\{H^0_T(M_n)\}$
+is Mittag-Leffler, that $\lim H^0_T(M_n) = H^0_T(M)$, and that
+$H^0_T(M)$ is killed by a power of $I$. Thus the
+limit of the short exact sequences $0 \to H^0_T(M_n) \to M_n \to M'_n \to 0$
+is the short exact sequence
+$$
+0 \to H^0_T(M) \to M \to \lim M'_n \to 0
+$$
+Setting $M' = \lim M'_n$ we find that $\mathcal{G}_n$ corresponds to
the finite $A_0$-module $M'/I^nM'$. To finish the proof we have to show
+that the canonical map $\{M'/I^nM'\} \to \{M'_n\}$ is a pro-isomorphism.
+This is equivalent to saying that
$\{H^0_T(M) + I^nM\} \to \{\Ker(M \to M'_n)\}$ is a
+pro-isomorphism. Which in turn says that
+$\{H^0_T(M)/H^0_T(M) \cap I^nM\} \to \{H^0_T(M_n)\}$
+is a pro-isomorphism. This is true because $\{H^0_T(M_n)\}$
+is Mittag-Leffler, $\lim H^0_T(M_n) = H^0_T(M)$, and
+$H^0_T(M)$ is killed by a power of $I$ (so that Artin-Rees
+tells us that $H^0_T(M) \cap I^nM = 0$ for $n$ large enough).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-improvement-formal-coherent-module-better}
+In the situation above assume $X$ locally has a dualizing complex.
+Let $T' \subset T \subset Y$ be subsets stable under specialization.
+Let $d \geq 0$ be an integer. Assume
+\begin{enumerate}
+\item[(a)] affine locally we have $X = \Spec(A_0)$ and $Y = V(I_0)$
+and $\text{cd}(A_0, I_0) \leq d$,
+\item[(b)] for $y \in T$ and a nonmaximal prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ with
+$V(\mathfrak p) \cap V(\mathcal{I}_y^\wedge) = \{\mathfrak m_y^\wedge\}$
+we have
+$$
\text{depth}_{(\mathcal{O}_{X, y}^\wedge)_\mathfrak p}
+((\mathcal{F}^\wedge_y)_\mathfrak p) > 0
+$$
+\item[(c)] for $y \in T'$ and for a prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ with
+$\mathfrak p \not \in V(\mathcal{I}_y^\wedge)$
+and $V(\mathfrak p) \cap V(\mathcal{I}_y^\wedge) \not =
+\{\mathfrak m_y^\wedge\}$ we have
+$$
+\text{depth}_{(\mathcal{O}_{X, y})_\mathfrak p}
+((\mathcal{F}^\wedge_y)_\mathfrak p) \geq 1
+\quad\text{or}\quad
+\text{depth}_{(\mathcal{O}_{X, y})_\mathfrak p}
+((\mathcal{F}^\wedge_y)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) > 1 + d
+$$
+\item[(d)] for $y \in T'$ and a nonmaximal prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ with
+$V(\mathfrak p) \cap V(\mathcal{I}_y^\wedge) = \{\mathfrak m_y^\wedge\}$
+we have
+$$
+\text{depth}_{(\mathcal{O}_{X, y})_\mathfrak p}
+((\mathcal{F}^\wedge_y)_\mathfrak p) > 1
+$$
+\item[(e)] if $y \leadsto y'$ is an immediate specialization and
+$y' \in T'$, then $y \in T$.
+\end{enumerate}
+Then there exists a canonical map $(\mathcal{F}_n) \to (\mathcal{F}_n'')$
+of inverse systems of coherent $\mathcal{O}_X$-modules
+with the following properties
+\begin{enumerate}
+\item for $y \in T$ we have $\text{depth}(\mathcal{F}''_{n, y}) \geq 1$,
+\item for $y' \in T'$ we have $\text{depth}(\mathcal{F}''_{n, y'}) \geq 2$,
+\item $(\mathcal{F}''_n)$ is isomorphic as a pro-system to an object
+$(\mathcal{H}_n)$ of $\textit{Coh}(X, \mathcal{I})$,
+\item the induced morphism $(\mathcal{F}_n) \to (\mathcal{H}_n)$ of
+$\textit{Coh}(X, \mathcal{I})$ has kernel and cokernel
+annihilated by a power of $\mathcal{I}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+As in Lemma \ref{lemma-divide-torsion-formal-coherent-module} and its proof
+for every $n$ we let $\mathcal{F}_n \to \mathcal{F}'_n$ be the surjection
+constructed in
+Local Cohomology, Lemma \ref{local-cohomology-lemma-get-depth-1-along-Z}.
+Next, we let $\mathcal{F}'_n \to \mathcal{F}''_n$ be the injection
+constructed in
+Local Cohomology, Lemma \ref{local-cohomology-lemma-make-S2-along-T}
+and its proof. The constructions show that we get canonical maps
+$\mathcal{F}''_{n + 1} \to \mathcal{F}''_n$ such that we obtain
+maps
+$$
+(\mathcal{F}_n) \longrightarrow (\mathcal{F}_n') \longrightarrow
+(\mathcal{F}''_n)
+$$
+of inverse systems of coherent $\mathcal{O}_X$-modules.
+Properties (1) and (2) hold by construction.
+
+\medskip\noindent
+To prove properties (3) and (4) we may assume that $X = \Spec(A_0)$ is
+affine and $A_0$ has a dualizing complex. Let $I_0 \subset A_0$ be the
+ideal corresponding to $Y$. Let $A, I$ be the $I_0$-adic completions of
+$A_0, I_0$. For later use we observe that $A$ has a dualizing complex
+(Dualizing Complexes, Lemma \ref{dualizing-lemma-ubiquity-dualizing}).
+Let $M$ be the finite $A$-module corresponding to $(\mathcal{F}_n)$, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-affine}.
+Then $\mathcal{F}_n$ corresponds to $M_n = M/I^nM$. Recall that
+$\mathcal{F}'_n$ corresponds to the quotient $M'_n = M_n / H^0_T(M_n)$.
+Also, recall that $M' = \lim M'_n$ is the quotient of $M$ by
+$H^0_T(M)$ and that $\{M'_n\}$ and $\{M'/I^nM'\}$ are isomorphic
+as pro-systems. Finally, we see that $\mathcal{F}''_n$ corresponds
+to an extension
+$$
+0 \to M'_n \to M''_n \to H^1_{T'}(M'_n) \to 0
+$$
+see proof of
+Local Cohomology, Lemma \ref{local-cohomology-lemma-make-S2-along-T}.
+
+\medskip\noindent
+Set $s = 1$. We claim that $A, I, T', M', s, d$ satisfy assumptions
+(1), (3), (4), (6) of Situation \ref{situation-bootstrap}. Namely, (1) and (3)
+are immediate, (4) is implied by (c), and (6) follows from (d).
+We omit the details of the verification (c) $\Rightarrow$ (4).
+
+\medskip\noindent
+By Theorem \ref{theorem-final-bootstrap} we see that $\{H^1_{T'}(M'/I^nM')\}$
+is Mittag-Leffler, that $H^1_{T'}(M') = \lim H^1_{T'}(M'/I^nM')$, and that
+$H^1_{T'}(M')$ is killed by a power of $I$. We deduce
+$\{H^1_{T'}(M'_n)\}$ is Mittag-Leffler and $H^1_{T'}(M') = \lim H^1_{T'}(M'_n)$.
+Thus the limit of the short exact sequences displayed above
+is the short exact sequence
+$$
+0 \to M' \to \lim M''_n \to H^1_{T'}(M') \to 0
+$$
+Set $M'' = \lim M''_n$. It follows from
+Local Cohomology, Proposition \ref{local-cohomology-proposition-finiteness}
+that $H^1_{T'}(M')$
+and hence $M''$ are finite $A$-modules.
+Thus we find that $\mathcal{H}_n$ corresponds to
+the finite $A_0$-module $M''/I^nM''$. To finish the proof we have to show
+that the canonical map $\{M''/I^nM''\} \to \{M''_n\}$ is a pro-isomorphism.
+Since we already know that $\{M'/I^nM'\}$ is pro-isomorphic to
+$\{M'_n\}$, the reader may verify (details omitted) that this is
+equivalent to asking
+$\{H^1_{T'}(M')/I^nH^1_{T'}(M')\} \to \{H^1_{T'}(M'_n)\}$
+to be a pro-isomorphism. This is true because $\{H^1_{T'}(M'_n)\}$
+is Mittag-Leffler, $H^1_{T'}(M') = \lim H^1_{T'}(M'_n)$, and
+$H^1_{T'}(M')$ is killed by a power of $I$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-improvement-application}
+In Situation \ref{situation-algebraize} assume that $A$ has
+a dualizing complex. Let $d \geq \text{cd}(A, I)$. Let $(\mathcal{F}_n)$
+be an object of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+$(\mathcal{F}_n)$ satisfies the $(2, 2 + d)$-inequalities, see
+Definition \ref{definition-s-d-inequalities}.
+Then there exists a canonical map $(\mathcal{F}_n) \to (\mathcal{F}_n'')$
+of inverse systems of coherent $\mathcal{O}_U$-modules
+with the following properties
+\begin{enumerate}
+\item $\text{depth}(\mathcal{F}''_{n, y}) + \delta^Y_Z(y) \geq 3$
+for all $y \in U \cap Y$,
+\item $(\mathcal{F}''_n)$ is isomorphic as a pro-system to an object
+$(\mathcal{H}_n)$ of $\textit{Coh}(U, I\mathcal{O}_U)$,
+\item the induced morphism $(\mathcal{F}_n) \to (\mathcal{H}_n)$ of
+$\textit{Coh}(U, I\mathcal{O}_U)$ has kernel and cokernel
+annihilated by a power of $I$,
+\item the modules $H^0(U, \mathcal{F}''_n)$ and $H^1(U, \mathcal{F}''_n)$
+are finite $A$-modules for all $n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The existence and properties (2), (3), (4) follow immediately from
+Lemma \ref{lemma-improvement-formal-coherent-module-better} applied
+to $U$, $U \cap Y$, $T = \{y \in U \cap Y : \delta^Y_Z(y) \leq 2\}$,
+$T' = \{y \in U \cap Y : \delta^Y_Z(y) \leq 1\}$, and $(\mathcal{F}_n)$.
+The finiteness of the modules $H^0(U, \mathcal{F}''_n)$ and
+$H^1(U, \mathcal{F}''_n)$ follows from
+Local Cohomology, Lemma \ref{local-cohomology-lemma-finiteness-Rjstar}
+and the elementary properties of the function $\delta^Y_Z(-)$
+proved in Lemma \ref{lemma-discussion}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Algebraization of coherent formal modules, V}
+\label{section-algebraization-modules-conclusion}
+
+\noindent
+In this section we prove our most general results on algebraization
+of coherent formal modules. We first prove it in case
+the ideal has cohomological dimension $1$. Then we apply this
+to a blowup to prove a more general result.
+
+\begin{lemma}
+\label{lemma-cd-1-canonical}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$
+be an object of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex and $\text{cd}(A, I) = 1$,
+\item $(\mathcal{F}_n)$ is pro-isomorphic to an inverse system
+$(\mathcal{F}_n'')$ of coherent $\mathcal{O}_U$-modules such that
+$\text{depth}(\mathcal{F}''_{n, y}) + \delta^Y_Z(y) \geq 3$
+for all $y \in U \cap Y$.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends canonically to $X$, see
+Definition \ref{definition-canonically-algebraizable}.
+\end{lemma}
+
+\begin{proof}
+We will check hypotheses (a), (b), and (c) of Lemma \ref{lemma-when-done}.
+Before we start, let us point out that the modules
+$H^0(U, \mathcal{F}''_n)$ and $H^1(U, \mathcal{F}''_n)$
+are finite $A$-modules for all $n$ by
+Local Cohomology, Lemma \ref{local-cohomology-lemma-finiteness-Rjstar}.
+
+\medskip\noindent
+Observe that for each $p \geq 0$
+the limit topology on $\lim H^p(U, \mathcal{F}_n)$
+is the $I$-adic topology by Lemma \ref{lemma-topology-I-adic}.
+In particular, hypothesis (b) holds.
+
+\medskip\noindent
+We know that $M = \lim H^0(U, \mathcal{F}_n)$ is an $A$-module whose
+limit topology is the $I$-adic topology. Thus, given $n$, the module
+$M/I^nM$ is a subquotient of $H^0(U, \mathcal{F}_N)$ for some $N \gg n$.
+Since the inverse system $\{H^0(U, \mathcal{F}_N)\}$ is pro-isomorphic to an
+inverse system of finite $A$-modules, namely $\{H^0(U, \mathcal{F}''_N)\}$,
+we conclude that $M/I^nM$ is finite. It follows that $M$ is finite, see
+Algebra, Lemma \ref{algebra-lemma-finite-over-complete-ring}.
+In particular hypothesis (c) holds.
+
+\medskip\noindent
+For each $n \geq 0$ let us write $Ob_n = \lim_N H^1(U, I^n\mathcal{F}_N)$.
+A special case is $Ob = Ob_0 = \lim_N H^1(U, \mathcal{F}_N)$.
+Arguing exactly as in the previous paragraph we find that $Ob$
+is a finite $A$-module. (In fact, we also know that $Ob/I Ob$ is annihilated
+by a power of $\mathfrak a$, but it seems somewhat difficult to use this.)
+
+\medskip\noindent
+We set $\mathcal{F} = \lim \mathcal{F}_n$, we pick generators
+$f_1, \ldots, f_r$ of $I$, we pick $c \geq 1$, and we choose
+$\Phi_\mathcal{F}$ as in Lemma \ref{lemma-cd-is-one-for-system}.
+We will use the results of Lemma \ref{lemma-properties-system}
+without further mention. In particular, for each $n \geq 1$ there are maps
+$$
+\delta_n :
+H^0(U, \mathcal{F}_n)
+\longrightarrow
+H^1(U, I^n\mathcal{F})
+\longrightarrow
+Ob_n
+$$
+The first comes from the short exact sequence
+$0 \to I^n\mathcal{F} \to \mathcal{F} \to \mathcal{F}_n \to 0$
+and the second from $I^n\mathcal{F} = \lim I^n\mathcal{F}_N$.
+We will later use that if $\delta_n(s) = 0$ for $s \in H^0(U, \mathcal{F}_n)$
+then we can for each $n' \geq n$ find $s' \in H^0(U, \mathcal{F}_{n'})$
+mapping to $s$.
+Observe that there are commutative diagrams
+$$
+\xymatrix{
+H^0(U, \mathcal{F}_{nc}) \ar[r] \ar[dd] &
+H^1(U, I^{nc}\mathcal{F}) \ar[dd] \ar[rd]^{\Phi_\mathcal{F}} \\
+& &
+\bigoplus_{e_1 + \ldots + e_r = n}
+H^1(U, \mathcal{F}) \cdot T_1^{e_1} \ldots T_r^{e_r} \ar[ld] \\
+H^0(U, \mathcal{F}_n) \ar[r] &
+H^1(U, I^n\mathcal{F})
+}
+$$
+We conclude that the obstruction map
+$H^0(U, \mathcal{F}_n) \to Ob_n$
+sends the image of
+$H^0(U, \mathcal{F}_{nc}) \to H^0(U, \mathcal{F}_n)$
+into the submodule
+$$
+Ob'_n =
+\Im\left(
+\bigoplus\nolimits_{e_1 + \ldots + e_r = n}
+Ob \cdot T_1^{e_1} \ldots T_r^{e_r} \to Ob_n
+\right)
+$$
+where on the summand $Ob \cdot T_1^{e_1} \ldots T_r^{e_r}$
+we use the map on cohomology coming from the reductions modulo
+powers of $I$ of the multiplication map
+$f_1^{e_1} \ldots f_r^{e_r} : \mathcal{F} \to I^n\mathcal{F}$.
+By construction
+$$
+\bigoplus\nolimits_{n \geq 0} Ob'_n
+$$
+is a finite graded module over the Rees algebra $\bigoplus_{n \geq 0} I^n$.
+For each $n$ we set
+$$
+M_n = \{s \in H^0(U, \mathcal{F}_n) \mid \delta_n(s) \in Ob'_n\}
+$$
+Observe that $\{M_n\}$ is an inverse system and that
+$f_j : \mathcal{F}_n \to \mathcal{F}_{n + 1}$ on global
+sections maps $M_n$ into $M_{n + 1}$.
+By exactly the same argument as in the proof of
+Cohomology, Lemma \ref{cohomology-lemma-ML-general}
+we find that $\{M_n\}$ is ML. Namely, because the Rees algebra
+is Noetherian we can choose a finite number of homogeneous generators
+of the form $\delta_{n_j}(z_j)$ with $z_j \in M_{n_j}$ for the graded submodule
+$\bigoplus_{n \geq 0} \Im(M_n \to Ob'_n)$.
+Then if $k = \max(n_j)$ we find that for $n \geq k$
+and any $z \in M_n$ we can find $a_j \in I^{n - n_j}$ such that
+$z - \sum a_j z_j$ is in the kernel of $\delta_n$
+and hence in the image of $M_{n'}$ for all $n' \geq n$
+(because the vanishing of $\delta_n$ means that we can
+lift $z - \sum a_j z_j$ to an element $z' \in H^0(U, \mathcal{F}_{n'c})$
+for all $n' \geq n$ and then the image of $z'$ in $H^0(U, \mathcal{F}_{n'})$
+is in $M_{n'}$ by what we proved above).
+Thus $\Im(M_n \to M_{n - k}) = \Im(M_{n'} \to M_{n - k})$
+for all $n' \geq n$.
+
+\medskip\noindent
+Choose $n$. By the Mittag-Leffler property of $\{M_n\}$ we just established
+we can find an $n' \geq n$ such that the image of $M_{n'} \to M_n$
+is the same as the image of $M_{n''} \to M_n$ for all $n'' \geq n'$.
+By the above we see that the image of $M_{n'} \to M_n$ contains the image of
+$H^0(U, \mathcal{F}_{n'c}) \to H^0(U, \mathcal{F}_n)$.
+Thus we see that $\{M_n\}$ and $\{H^0(U, \mathcal{F}_n)\}$
+are pro-isomorphic. Therefore $\{H^0(U, \mathcal{F}_n)\}$
+has ML and we finally conclude that hypothesis (a) holds.
+This concludes the proof.
+\end{proof}
+
+\begin{proposition}[Algebraization in cohomological dimension 1]
+\label{proposition-cd-1}
+\begin{reference}
+The local case of this result is \cite[IV Corollaire 2.9]{MRaynaud-book}.
+\end{reference}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$
+be an object of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex and $\text{cd}(A, I) = 1$,
+\item $(\mathcal{F}_n)$ satisfies the $(2, 3)$-inequalities, see
+Definition \ref{definition-s-d-inequalities}.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends to $X$. In particular, if $A$ is
+$I$-adically complete, then $(\mathcal{F}_n)$ is the completion
+of a coherent $\mathcal{O}_U$-module.
+\end{proposition}
+
+\begin{proof}
+By Lemma \ref{lemma-map-kernel-cokernel-on-closed}
+we may replace $(\mathcal{F}_n)$ by the object $(\mathcal{H}_n)$
+of $\textit{Coh}(U, I\mathcal{O}_U)$ found in
+Lemma \ref{lemma-improvement-application}.
+Thus we may assume that $(\mathcal{F}_n)$ is pro-isomorphic
+to an inverse system $(\mathcal{F}_n'')$ with the properties
+mentioned in Lemma \ref{lemma-improvement-application}.
+In Lemma \ref{lemma-cd-1-canonical} we proved that
+$(\mathcal{F}_n)$ canonically extends to $X$.
+The final statement follows from Lemma \ref{lemma-canonically-algebraizable}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowup}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$
+be an object of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex,
+\item all fibres of the blowing up $b : X' \to X$ of $I$
+have dimension $\leq d - 1$,
+\item one of the following is true
+\begin{enumerate}
+\item $(\mathcal{F}_n)$ satisfies the $(d + 1, d + 2)$-inequalities
+(Definition \ref{definition-s-d-inequalities}), or
+\item for $y \in U \cap Y$ and a prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ with
+$\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$
+we have
+$$
+\text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) + \delta^Y_Z(y) > d + 2
+$$
+\end{enumerate}
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends to $X$.
+\end{lemma}
+
+\begin{proof}
+Let $Y' \subset X'$ be the exceptional divisor.
+Let $Z' \subset Y'$ be the inverse image of $Z \subset Y$.
+Then $U' = X' \setminus Z'$ is the inverse image of $U$.
+With $\delta^{Y'}_{Z'}$ as in (\ref{equation-delta-Z}) we set
+$$
+T' = \{y' \in Y' \mid \delta^{Y'}_{Z'}(y') = 1\}
+\subset
+T = \{y' \in Y' \mid \delta^{Y'}_{Z'}(y') = 1\text{ or }2\}
+$$
+These are specialization stable subsets of
+$U' \cap Y' = Y' \setminus Z'$. Consider the
+object $(b|_{U'}^*\mathcal{F}_n)$ of $\textit{Coh}(U', I\mathcal{O}_{U'})$,
+see Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-pullback}.
+For $y' \in U' \cap Y'$ let us denote
+$$
+\mathcal{F}_{y'}^\wedge = \lim (b|_{U'}^*\mathcal{F}_n)_{y'}
+$$
+the ``stalk'' of this pullback at $y'$. We claim that conditions
+(a), (b), (c), (d), and (e) of
+Lemma \ref{lemma-improvement-formal-coherent-module-better}
+hold for the object $(b|_{U'}^*\mathcal{F}_n)$ on $U'$ with $d$
+replaced by $1$ and the subsets $T' \subset T \subset U' \cap Y'$.
+Condition (a) holds because $Y'$ is an effective Cartier divisor
+and hence locally cut out by $1$ equation. Condition (e) holds
+by Lemma \ref{lemma-discussion} parts (1) and (2).
+To prove (b), (c), and (d) we need some preparation.
+
+\medskip\noindent
+Let $y' \in U' \cap Y'$ and let
+$\mathfrak p' \subset \mathcal{O}_{X', y'}^\wedge$
+be a prime ideal not contained in $V(I\mathcal{O}_{X', y'}^\wedge)$.
+Denote $y = b(y') \in U \cap Y$. Choose $f \in I$ such that
+$y'$ is contained in the spectrum of the affine blowup algebra
+$A[\frac{I}{f}]$, see Divisors, Lemma \ref{divisors-lemma-blowing-up-affine}.
+For any $A$-algebra $B$ denote $B' = B[\frac{IB}{f}]$ the corresponding affine
+blowup algebra. Denote $I$-adic completion by ${\ }^\wedge$.
+By our choice of $f$ we get a ring map
+$(\mathcal{O}_{X, y}^\wedge)' \to \mathcal{O}_{X', y'}^\wedge$.
+If we let $\mathfrak q' \subset (\mathcal{O}_{X, y}^\wedge)'$
+be the inverse image of $\mathfrak m_{y'}^\wedge$, then
+we see that
+$((\mathcal{O}_{X, y}^\wedge)'_{\mathfrak q'})^\wedge =
+\mathcal{O}_{X', y'}^\wedge$.
+Let $\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ be the corresponding
+prime. At this point we have a commutative diagram
+$$
+\xymatrix{
+\mathcal{O}_{X, y}^\wedge \ar[d] \ar[r] &
+(\mathcal{O}_{X, y}^\wedge)' \ar[d]_\alpha \ar[r] &
+(\mathcal{O}_{X, y}^\wedge)'_{\mathfrak q'} \ar[d] \ar[r]_\beta &
+\mathcal{O}_{X', y'}^\wedge \ar[d] \\
+\mathcal{O}_{X, y}^\wedge/\mathfrak p \ar[r] &
+(\mathcal{O}_{X, y}^\wedge/\mathfrak p)' \ar[r] &
+(\mathcal{O}_{X, y}^\wedge/\mathfrak p)'_{\mathfrak q'} \ar[r]^\gamma &
+((\mathcal{O}_{X, y}^\wedge/\mathfrak p)'_{\mathfrak q'})^\wedge \ar[d] \\
+ & & &
+\mathcal{O}_{X', y'}^\wedge/\mathfrak p'
+}
+$$
+whose vertical arrows are surjective. By
+More on Algebra, Lemma \ref{more-algebra-lemma-completion-dimension}
+and the dimension formula
+(Algebra, Lemma \ref{algebra-lemma-dimension-formula})
+we have
+$$
+\dim(((\mathcal{O}_{X, y}^\wedge/\mathfrak p)'_{\mathfrak q'})^\wedge) =
+\dim((\mathcal{O}_{X, y}^\wedge/\mathfrak p)'_{\mathfrak q'}) =
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p)
+- \text{trdeg}(\kappa(y')/\kappa(y))
+$$
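+Here the dimension formula applies in the following way (a sketch using
+only standard facts): $B = (\mathcal{O}_{X, y}^\wedge/\mathfrak p)'$ is a
+domain of finite type over the complete, hence universally catenary, local
+domain $\mathcal{O}_{X, y}^\wedge/\mathfrak p$ with the same fraction field
+(as $B$ is contained in the localization at $f$), the image of
+$\mathfrak q'$ lies over the maximal ideal, and the residue field of
+$\mathfrak q'$ is $\kappa(y')$. Hence the dimension formula reads
+$$
+\dim(B_{\mathfrak q'}) + \text{trdeg}_{\kappa(y)}(\kappa(y')) =
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p)
+$$
+which is the second equality above.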
+Tracing through the definitions of pullbacks, stalks, localizations,
+and completions we find
+$$
+(\mathcal{F}_y^\wedge)_{\mathfrak p}
+\otimes_{(\mathcal{O}_{X, y}^\wedge)_\mathfrak p}
+(\mathcal{O}_{X', y'}^\wedge)_{\mathfrak p'}
+=
+(\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}
+$$
+Details omitted. The ring maps $\beta$ and $\gamma$ in the diagram
+are flat with Gorenstein (hence Cohen-Macaulay) fibres, as these are
+completions of rings having a dualizing complex. See
+Dualizing Complexes, Lemmas
+\ref{dualizing-lemma-formal-fibres-gorenstein} and
+\ref{dualizing-lemma-dualizing-gorenstein-formal-fibres}
+and the discussion in More on Algebra, Section
+\ref{more-algebra-section-properties-formal-fibres}.
+Observe that $(\mathcal{O}_{X, y}^\wedge)_\mathfrak p =
+(\mathcal{O}_{X, y}^\wedge)'_{\tilde{\mathfrak p}}$
+where $\tilde{\mathfrak p}$ is the kernel of $\alpha$
+in the diagram. On the other hand,
+$(\mathcal{O}_{X, y}^\wedge)'_{\tilde{\mathfrak p}}
+\to (\mathcal{O}_{X', y'}^\wedge)_{\mathfrak p'}$
+is flat with CM fibres by the above. Whence
+$(\mathcal{O}_{X, y}^\wedge)_\mathfrak p \to
+(\mathcal{O}_{X', y'}^\wedge)_{\mathfrak p'}$ is flat with CM fibres.
+Using Algebra, Lemma \ref{algebra-lemma-apply-grothendieck-module}
+we see that
+$$
+\text{depth}((\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}) =
+\text{depth}((\mathcal{F}_y^\wedge)_{\mathfrak p}) +
+\dim(F_\mathfrak r)
+$$
+where $F$ is the generic formal fibre of
+$(\mathcal{O}_{X, y}^\wedge/\mathfrak p)'_{\mathfrak q'}$
+and $\mathfrak r$ is the prime corresponding to $\mathfrak p'$.
+Since $(\mathcal{O}_{X, y}^\wedge/\mathfrak p)'_{\mathfrak q'}$
+is a universally catenary local domain, its $I$-adic completion
+is equidimensional and (universally) catenary by Ratliff's theorem
+(More on Algebra, Proposition \ref{more-algebra-proposition-ratliff}).
+It then follows that
+$$
+\dim(((\mathcal{O}_{X, y}^\wedge/\mathfrak p)'_{\mathfrak q'})^\wedge) =
+\dim(F_\mathfrak r) + \dim(\mathcal{O}_{X', y'}^\wedge/\mathfrak p')
+$$
+Combined with Lemma \ref{lemma-change-distance-function}
+we get
+\begin{equation}
+\label{equation-one}
+\begin{aligned}
+&
+\text{depth}((\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}) +
+\delta^{Y'}_{Z'}(y') \\
+& =
+\text{depth}((\mathcal{F}_y^\wedge)_{\mathfrak p}) +
+\dim(F_\mathfrak r) + \delta^{Y'}_{Z'}(y') \\
+& \geq
+\text{depth}((\mathcal{F}_y^\wedge)_{\mathfrak p}) + \delta^Y_Z(y) +
+\dim(F_\mathfrak r) + \text{trdeg}(\kappa(y')/\kappa(y)) - (d - 1) \\
+& =
+\text{depth}((\mathcal{F}_y^\wedge)_{\mathfrak p}) + \delta^Y_Z(y) - (d - 1)
++ \dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) -
+\dim(\mathcal{O}_{X', y'}^\wedge/\mathfrak p')
+\end{aligned}
+\end{equation}
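+The final equality in this computation is obtained by combining the two
+dimension formulas displayed earlier in the proof, which together give
+$$
+\dim(F_\mathfrak r) + \text{trdeg}(\kappa(y')/\kappa(y)) =
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) -
+\dim(\mathcal{O}_{X', y'}^\wedge/\mathfrak p')
+$$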
+Please keep in mind that
+$\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) \geq
+\dim(\mathcal{O}_{X', y'}^\wedge/\mathfrak p')$. Rewriting this we get
+\begin{equation}
+\label{equation-two}
+\begin{aligned}
+&
+\text{depth}((\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}) +
+\dim(\mathcal{O}_{X', y'}^\wedge/\mathfrak p') +
+\delta^{Y'}_{Z'}(y') \\
+& \geq
+\text{depth}((\mathcal{F}_y^\wedge)_{\mathfrak p}) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) +
+\delta^Y_Z(y) - (d - 1)
+\end{aligned}
+\end{equation}
+This inequality will allow us to check the remaining conditions.
+
+\medskip\noindent
+Conditions (b) and (d) of
+Lemma \ref{lemma-improvement-formal-coherent-module-better}. Assume
+$V(\mathfrak p') \cap V(I\mathcal{O}_{X', y'}^\wedge) =
+\{\mathfrak m_{y'}^\wedge\}$.
+This implies that $\dim(\mathcal{O}_{X', y'}^\wedge/\mathfrak p') = 1$
+because $Z'$ is an effective Cartier divisor.
+The combination of (b) and (d) is equivalent to
+$$
+\text{depth}((\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}) + \delta^{Y'}_{Z'}(y')
+> 2
+$$
+If $(\mathcal{F}_n)$ satisfies the inequalities in (3)(b)
+then we immediately conclude this is true by applying (\ref{equation-two}).
+If $(\mathcal{F}_n)$ satisfies (3)(a), i.e., the
+$(d + 1, d + 2)$-inequalities, then we see that in any case
+$$
+\text{depth}((\mathcal{F}_y^\wedge)_{\mathfrak p}) + \delta^Y_Z(y)
+\geq d + 1
+\quad\text{or}\quad
+\text{depth}((\mathcal{F}_y^\wedge)_{\mathfrak p}) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) +
+\delta^Y_Z(y) > d + 2
+$$
+Looking at (\ref{equation-one}) and (\ref{equation-two}) above this gives
+what we want except possibly if
+$\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) = 1$.
+However, if $\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) = 1$, then we have
+$V(\mathfrak p) \cap V(I\mathcal{O}_{X, y}^\wedge) = \{\mathfrak m_y^\wedge\}$
+and we see that actually
+$$
+\text{depth}((\mathcal{F}_y^\wedge)_{\mathfrak p}) + \delta^Y_Z(y) > d + 1
+$$
+as $(\mathcal{F}_n)$ satisfies the $(d + 1, d + 2)$-inequalities and we
+conclude again.
+
+\medskip\noindent
+Condition (c) of
+Lemma \ref{lemma-improvement-formal-coherent-module-better}. Assume
+$V(\mathfrak p') \cap V(I\mathcal{O}_{X', y'}^\wedge) \not =
+\{\mathfrak m_{y'}^\wedge\}$. Then condition (c) is equivalent to
+$$
+\text{depth}((\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}) + \delta^{Y'}_{Z'}(y')
+\geq 2
+\quad\text{or}\quad
+\text{depth}((\mathcal{F}_{y'}^\wedge)_{\mathfrak p'}) +
+\dim(\mathcal{O}_{X', y'}^\wedge/\mathfrak p') +
+\delta^{Y'}_{Z'}(y') > 3
+$$
+If $(\mathcal{F}_n)$ satisfies the inequalities in (3)(b)
+then we see the second of the two displayed inequalities holds true
+by applying (\ref{equation-two}). If $(\mathcal{F}_n)$ satisfies (3)(a), i.e.,
+the $(d + 1, d + 2)$-inequalities, then this follows immediately from
+(\ref{equation-one}) and (\ref{equation-two}).
+This finishes the proof of our claim.
+
+\medskip\noindent
+Choose $(b|_{U'}^*\mathcal{F}_n) \to (\mathcal{F}_n'')$
+and $(\mathcal{H}_n)$ in $\textit{Coh}(U', I\mathcal{O}_{U'})$
+as in Lemma \ref{lemma-improvement-formal-coherent-module-better}.
+For any affine open $W \subset X'$ observe that
+$\delta^{W \cap Y'}_{W \cap Z'}(y') \geq \delta^{Y'}_{Z'}(y')$ by
+Lemma \ref{lemma-discussion} part (7). Hence we see that
+$(\mathcal{H}_n|_W)$ satisfies the assumptions of
+Lemma \ref{lemma-cd-1-canonical}.
+Thus $(\mathcal{H}_n|_W)$ extends canonically to $W$.
+Let $(\mathcal{G}_{W, n})$ in $\textit{Coh}(W, I\mathcal{O}_W)$
+be the canonical extension as in
+Lemma \ref{lemma-canonically-algebraizable}.
+By Lemma \ref{lemma-canonically-extend-base-change}
+we see that for $W' \subset W$ there is a unique isomorphism
+$$
+(\mathcal{G}_{W, n}|_{W'}) \longrightarrow
+(\mathcal{G}_{W', n})
+$$
+compatible with the given isomorphisms
+$(\mathcal{G}_{W, n}|_{W \cap U'}) \cong (\mathcal{H}_n|_{W \cap U'})$.
+We conclude that there exists an object
+$(\mathcal{G}_n)$ of $\textit{Coh}(X', I\mathcal{O}_{X'})$
+whose restriction to $U'$ is isomorphic to $(\mathcal{H}_n)$.
+
+\medskip\noindent
+If $A$ is $I$-adically complete we can finish the proof as follows.
+By Grothendieck's existence theorem
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-existence-projective})
+we see that $(\mathcal{G}_n)$ is the completion of a coherent
+$\mathcal{O}_{X'}$-module. Then by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-existence-easy}
+we see that $(b|_{U'}^*\mathcal{F}_n)$
+is the completion of a coherent $\mathcal{O}_{U'}$-module
+$\mathcal{F}'$.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-push-pull}
+we see that there is a map
+$$
+(\mathcal{F}_n) \longrightarrow ((b|_{U'})_*\mathcal{F}')^\wedge
+$$
+whose kernel and cokernel is annihilated by a power of $I$.
+Then finally, we win by applying
+Lemma \ref{lemma-map-kernel-cokernel-on-closed}.
+
+\medskip\noindent
+If $A$ is not complete, then, before starting the proof, we may replace $A$
+by its completion, see Lemma \ref{lemma-algebraizable}.
+After completion the assumptions still hold: this is immediate
+for condition (3), follows from
+Dualizing Complexes, Lemma \ref{dualizing-lemma-ubiquity-dualizing}
+for condition (1), and from
+Divisors, Lemma \ref{divisors-lemma-flat-base-change-blowing-up}
+for condition (2).
+Thus the complete case implies the general case.
+\end{proof}
+
+\begin{proposition}[Algebraization for ideals with few generators]
+\label{proposition-d-generators}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$
+be an object of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex,
+\item $V(I) = V(f_1, \ldots, f_d)$ for some $d \geq 1$ and
+$f_1, \ldots, f_d \in A$,
+\item one of the following is true
+\begin{enumerate}
+\item $(\mathcal{F}_n)$ satisfies the $(d + 1, d + 2)$-inequalities
+(Definition \ref{definition-s-d-inequalities}), or
+\item for $y \in U \cap Y$ and a prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ with
+$\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$
+we have
+$$
+\text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) + \delta^Y_Z(y) > d + 2
+$$
+\end{enumerate}
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends to $X$. In particular, if $A$ is
+$I$-adically complete, then $(\mathcal{F}_n)$ is the completion
+of a coherent $\mathcal{O}_U$-module.
+\end{proposition}
+
+\begin{proof}
+We may assume $I = (f_1, \ldots, f_d)$, see
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-inverse-systems-ideals-equivalence}.
+Then we see that
+all fibres of the blowup of $X$ in $I$ have dimension at most $d - 1$.
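+Namely (we sketch this standard fact), the surjection of graded
+$A$-algebras $A[T_1, \ldots, T_d] \to \bigoplus_{n \geq 0} I^n$,
+$T_i \mapsto f_i$, exhibits the blowup as a closed subscheme
+$$
+X' = \text{Proj}\left(\bigoplus\nolimits_{n \geq 0} I^n\right)
+\subset \mathbf{P}^{d - 1}_A
+$$
+so every fibre of $X' \to X$ is a closed subscheme of a projective space
+of dimension $d - 1$.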
+Thus we get the extension from Lemma \ref{lemma-blowup}.
+The final statement follows from Lemma \ref{lemma-essential-image-completion}.
+\end{proof}
+
+\noindent
+Please compare the next lemma with
+Remarks \ref{remark-interesting-case-variant},
+\ref{remark-interesting-case},
+\ref{remark-interesting-case-bis}, and
+\ref{remark-interesting-case-ter}.
+
+\begin{lemma}
+\label{lemma-interesting-case-final}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$
+be an object of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ is a local ring which has a dualizing complex,
+\item all irreducible components of $X$ have the same dimension,
+\item the scheme $X \setminus Y$ is Cohen-Macaulay,
+\item $I$ is generated by $d$ elements,
+\item $\dim(X) - \dim(Z) > d + 2$, and
+\item for $y \in U \cap Y$ the module $\mathcal{F}_y^\wedge$
+is finite locally free outside $V(I\mathcal{O}_{X, y}^\wedge)$,
+for example if $\mathcal{F}_n$ is a finite locally free
+$\mathcal{O}_U/I^n\mathcal{O}_U$-module.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends to $X$. In particular, if $A$ is $I$-adically
+complete, then $(\mathcal{F}_n)$ is the completion of a coherent
+$\mathcal{O}_U$-module.
+\end{lemma}
+
+\begin{proof}
+We will show that the hypotheses (1), (2), (3)(b) of
+Proposition \ref{proposition-d-generators} are satisfied.
+This is clear for (1) and (2).
+
+\medskip\noindent
+Let $y \in U \cap Y$ and let $\mathfrak p$ be a prime
+$\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$ with
+$\mathfrak p \not \in V(I\mathcal{O}_{X, y}^\wedge)$.
+The last condition shows that
+$\text{depth}((\mathcal{F}_y^\wedge)_\mathfrak p) =
+\text{depth}((\mathcal{O}_{X, y}^\wedge)_\mathfrak p)$.
+Since $X \setminus Y$ is Cohen-Macaulay we see that
+$(\mathcal{O}_{X, y}^\wedge)_\mathfrak p$ is Cohen-Macaulay.
+Thus we see that
+\begin{align*}
+& \text{depth}((\mathcal{F}^\wedge_y)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) + \delta^Y_Z(y) \\
+& =
+\dim((\mathcal{O}_{X, y}^\wedge)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) + \delta^Y_Z(y) \\
+& =
+\dim(\mathcal{O}_{X, y}^\wedge) + \delta^Y_Z(y)
+\end{align*}
+The final equality because $\mathcal{O}_{X, y}$ is equidimensional
+by the second condition.
+Let $\delta(y) = \dim(\overline{\{y\}})$. This is a dimension function
+as $A$ is a catenary local ring.
+By Lemma \ref{lemma-discussion}
+we have $\delta^Y_Z(y) \geq \delta(y) - \dim(Z)$. Since $X$ is
+equidimensional we get
+$$
+\dim(\mathcal{O}_{X, y}^\wedge) + \delta^Y_Z(y)
+\geq \dim(\mathcal{O}_{X, y}^\wedge) + \delta(y) - \dim(Z)
+= \dim(X) - \dim(Z)
+$$
+Thus we get the desired inequality and we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-question}
+We are unable to prove or disprove the analogue of
+Proposition \ref{proposition-d-generators}
+where the assumption that $I$ has $d$ generators
+is replaced with the assumption $\text{cd}(A, I) \leq d$.
+If you know a proof or have a counter example, please email
+\href{mailto:stacks.project@gmail.com}{stacks.project@gmail.com}.
+Another obvious question is to what extent the conditions in
+Proposition \ref{proposition-d-generators}
+are necessary.
+\end{remark}
+
+
+
+
+\section{Algebraization of coherent formal modules, VI}
+\label{section-algebraization-modules-yet-more}
+
+\noindent
+In this section we add a few more cases which are easier to prove.
+
+\begin{proposition}
+\label{proposition-algebraization-regular-sequence}
+In Situation \ref{situation-algebraize} let
+$(\mathcal{F}_n)$ be an object of $\textit{Coh}(U, I\mathcal{O}_U)$.
+Assume
+\begin{enumerate}
+\item there exist $f_1, \ldots, f_d \in I$ such that
+for $y \in U \cap Y$ the ideal $I\mathcal{O}_{X, y}$
+is generated by $f_1, \ldots, f_d$ and
+$f_1, \ldots, f_d$ form a $\mathcal{F}_y^\wedge$-regular sequence,
+\item $H^0(U, \mathcal{F}_1)$ and $H^1(U, \mathcal{F}_1)$
+are finite $A$-modules.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends canonically to $X$. In particular, if $A$
+is complete, then $(\mathcal{F}_n)$ is the completion of a coherent
+$\mathcal{O}_U$-module.
+\end{proposition}
+
+\begin{proof}
+We will prove this by verifying hypotheses (a), (b), and (c) of
+Lemma \ref{lemma-when-done}.
+For every $n$ we have a short exact sequence
+$$
+0 \to I^n\mathcal{F}_{n + 1} \to \mathcal{F}_{n + 1} \to \mathcal{F}_n \to 0
+$$
+Since $f_1, \ldots, f_d$ form a regular sequence (and hence
+quasi-regular, see Algebra, Lemma \ref{algebra-lemma-regular-quasi-regular})
+on each of the ``stalks'' $\mathcal{F}_y^\wedge$ and since we have
+$I\mathcal{F}_n = (f_1, \ldots, f_d)\mathcal{F}_n$ for all $n$,
+we find that
+$$
+I^n\mathcal{F}_{n + 1} =
+\bigoplus\nolimits_{e_1 + \ldots + e_d = n} \mathcal{F}_1 \cdot
+f_1^{e_1} \ldots f_d^{e_d}
+$$
+by checking on stalks. Using the assumption of finiteness of
+$H^0(U, \mathcal{F}_1)$ and induction, we first conclude that
+$M_n = H^0(U, \mathcal{F}_n)$ is a finite $A$-module for all $n$.
+In this way we see that condition (c) of Lemma \ref{lemma-when-done} holds.
+We also see that
+$$
+\bigoplus\nolimits_{n \geq 0} H^1(U, I^n\mathcal{F}_{n + 1})
+$$
is a finite graded $R = \bigoplus I^n/I^{n + 1}$-module.
+By Lemma \ref{lemma-ML-general} we conclude that condition (a) of
+Lemma \ref{lemma-when-done} is satisfied. Finally, condition (b) of
+Lemma \ref{lemma-when-done} is satisfied because
+$\bigoplus H^0(U, I^n\mathcal{F}_{n + 1})$ is a finite graded $R$-module
+and we can apply Lemma \ref{lemma-topology-I-adic-general}.
+\end{proof}
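To make the displayed decomposition of $I^n\mathcal{F}_{n + 1}$ more explicit, here is a sketch of the stalk computation; the notation $M$ and $J$ below is ours, introduced only for this sketch.

```latex
% Sketch of the decomposition of I^n F_{n+1} on stalks. Write
% M = \mathcal{F}_y^\wedge and J = (f_1, \ldots, f_d)\mathcal{O}_{X, y},
% so that the stalk of \mathcal{F}_{n + 1} is M/J^{n + 1}M and the stalk
% of I^n\mathcal{F}_{n + 1} is J^n M/J^{n + 1}M. Quasi-regularity of
% f_1, \ldots, f_d on M gives
\[
J^n M/J^{n + 1} M \cong
\bigoplus\nolimits_{e_1 + \ldots + e_d = n}
(M/JM) \cdot f_1^{e_1} \ldots f_d^{e_d}
\]
% and M/JM is the stalk of \mathcal{F}_1, as claimed.
```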
+
+\begin{remark}
+\label{remark-interesting-case-ter}
+In the situation of
+Proposition \ref{proposition-algebraization-regular-sequence}
+if we assume $A$ has a dualizing complex, then
+the condition that $H^0(U, \mathcal{F}_1)$ and
+$H^1(U, \mathcal{F}_1)$ are finite is equivalent to
+$$
+\text{depth}(\mathcal{F}_{1, y}) +
+\dim(\mathcal{O}_{\overline{\{y\}}, z}) > 2
+$$
+for all $y \in U \cap Y$ and $z \in Z \cap \overline{\{y\}}$.
+See Local Cohomology, Lemma \ref{local-cohomology-lemma-finiteness-Rjstar}.
+This holds for example if $\mathcal{F}_1$ is a finite locally free
+$\mathcal{O}_{U \cap Y}$-module, $Y$ is $(S_2)$, and
+$\text{codim}(Z', Y') \geq 3$ for every pair of irreducible components
+$Y'$ of $Y$, $Z'$ of $Z$ with $Z' \subset Y'$.
+\end{remark}
+
+\begin{proposition}
+\label{proposition-algebraization-flat}
+In Situation \ref{situation-algebraize} let
+$(\mathcal{F}_n)$ be an object of $\textit{Coh}(U, I\mathcal{O}_U)$.
Assume there is a Noetherian local ring $(R, \mathfrak m)$ and a ring
+map $R \to A$ such that
+\begin{enumerate}
+\item $I = \mathfrak m A$,
+\item for $y \in U \cap Y$ the stalk $\mathcal{F}_y^\wedge$ is $R$-flat,
+\item $H^0(U, \mathcal{F}_1)$ and $H^1(U, \mathcal{F}_1)$ are finite
+$A$-modules.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ extends canonically to $X$. In particular, if $A$
+is complete, then $(\mathcal{F}_n)$ is the completion of a coherent
+$\mathcal{O}_U$-module.
+\end{proposition}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Proposition \ref{proposition-algebraization-regular-sequence}.
+Namely, if $\kappa = R/\mathfrak m$ then for $n \geq 0$
+there is an isomorphism
+$$
+I^n \mathcal{F}_{n + 1} \cong
+\mathcal{F}_1 \otimes_\kappa \mathfrak m^n/\mathfrak m^{n + 1}
+$$
+and the right hand side is a finite direct sum of copies
+of $\mathcal{F}_1$. This can be checked by looking at stalks.
+Everything else is exactly the same.
+\end{proof}
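The displayed isomorphism can be checked on stalks as follows; this is a sketch of the standard flatness argument, not part of the original text, with $\kappa = R/\mathfrak m$ as above.

```latex
% Let M = \mathcal{F}_y^\wedge, which is R-flat by assumption (2).
% Tensoring 0 -> m^{n + 1} -> m^n -> m^n/m^{n + 1} -> 0 with M over R
% and using flatness we obtain
\[
I^n M/I^{n + 1} M =
\mathfrak m^n M/\mathfrak m^{n + 1} M \cong
M \otimes_R \mathfrak m^n/\mathfrak m^{n + 1} \cong
(M/\mathfrak m M) \otimes_\kappa \mathfrak m^n/\mathfrak m^{n + 1}
\]
% since m^n/m^{n + 1} is a kappa-module. Here M/mM is the stalk of
% \mathcal{F}_1, which gives the displayed isomorphism.
```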
+
+\begin{remark}
+\label{remark-interesting-case-quater}
+Proposition \ref{proposition-algebraization-flat} is a local version
of \cite[Theorem 2.10 (i)]{Baranovsky}. It is straightforward to deduce
the global result from the local one; we will sketch the argument.
+Namely, suppose $(R, \mathfrak m)$
+is a complete Noetherian local ring and $X \to \Spec(R)$ is a proper morphism.
+For $n \geq 1$ set $X_n = X \times_{\Spec(R)} \Spec(R/\mathfrak m^n)$.
+Let $Z \subset X_1$ be a closed subset of the special fibre.
+Set $U = X \setminus Z$ and denote $j : U \to X$ the inclusion morphism.
+Suppose given an object
+$$
+(\mathcal{F}_n) \text{ of } \textit{Coh}(U, \mathfrak m\mathcal{O}_U)
+$$
+which is flat over $R$ in the sense that $\mathcal{F}_n$ is flat over
+$R/\mathfrak m^n$ for all $n$.
+Assume that $j_*\mathcal{F}_1$ and $R^1j_*\mathcal{F}_1$ are coherent
+modules. Then affine locally on $X$ we get a canonical extension
+of $(\mathcal{F}_n)$ by
+Proposition \ref{proposition-algebraization-flat}
+and formation of this extension commutes with localization
+(by Lemma \ref{lemma-algebraization-principal-variant}).
+Thus we get a canonical global object $(\mathcal{G}_n)$ of
+$\textit{Coh}(X, \mathfrak m\mathcal{O}_X)$
whose restriction to $U$ is $(\mathcal{F}_n)$.
+By Grothendieck's existence theorem
+(Cohomology of Schemes, Proposition
+\ref{coherent-proposition-existence-proper})
+we see there exists a coherent $\mathcal{O}_X$-module
+$\mathcal{G}$ whose completion is $(\mathcal{G}_n)$.
+In this way we see that $(\mathcal{F}_n)$ is algebraizable, i.e.,
+it is the completion of a coherent $\mathcal{O}_U$-module.
+
+\medskip\noindent
+We add that the coherence of $j_*\mathcal{F}_1$ and $R^1j_*\mathcal{F}_1$
+is a condition on the special fibre. Namely, if we denote
+$j_1 : U_1 \to X_1$ the special fibre of $j : U \to X$, then we can
+think of $\mathcal{F}_1$ as a coherent sheaf on $U_1$ and we have
+$j_*\mathcal{F}_1 = j_{1, *}\mathcal{F}_1$ and
+$R^1j_*\mathcal{F}_1 = R^1j_{1, *}\mathcal{F}_1$.
Hence, for example, if $X_1$ is $(S_2)$ and irreducible,
$\dim(X_1) - \dim(Z) \geq 3$, and $\mathcal{F}_1$ is a locally free
$\mathcal{O}_{U_1}$-module, then $j_{1, *}\mathcal{F}_1$ and
+$R^1j_{1, *}\mathcal{F}_1$ are coherent modules.
+\end{remark}
+
+
+
+
+
+
+
+
+\section{Application to the completion functor}
+\label{section-completion-application}
+
+\noindent
+In this section we just combine some already obtained results
+in order to conveniently reference them. There are many
+(stronger) results we could state here.
+
+\begin{lemma}
+\label{lemma-equivalence-better}
+In Situation \ref{situation-algebraize} assume
+\begin{enumerate}
+\item $A$ has a dualizing complex and is $I$-adically complete,
+\item $I = (f)$ generated by a single element,
+\item $A$ is local with maximal ideal $\mathfrak a = \mathfrak m$,
+\item one of the following is true
+\begin{enumerate}
\item $A_f$ is $(S_2)$ and for every minimal prime $\mathfrak p \subset A$
with $f \not \in \mathfrak p$ we have $\dim(A/\mathfrak p) \geq 4$, or
+\item if $\mathfrak p \not \in V(f)$ and
+$V(\mathfrak p) \cap V(f) \not = \{\mathfrak m\}$, then
+$\text{depth}(A_\mathfrak p) + \dim(A/\mathfrak p) > 3$.
+\end{enumerate}
+\end{enumerate}
+Then with $U_0 = U \cap V(f)$ the completion functor
+$$
+\colim_{U_0 \subset U' \subset U\text{ open}}
+\textit{Coh}(\mathcal{O}_{U'})
+\longrightarrow
+\textit{Coh}(U, f\mathcal{O}_U)
+$$
+is an equivalence on the full subcategories of finite locally free objects.
+\end{lemma}
+
+\begin{proof}
+It follows from Lemma \ref{lemma-fully-faithful-general}
+that the functor is fully faithful (details omitted).
+Let us prove essential surjectivity. Let $(\mathcal{F}_n)$ be a finite locally
+free object of $\textit{Coh}(U, f\mathcal{O}_U)$. By either
+Lemma \ref{lemma-algebraization-principal-bis} or
+Proposition \ref{proposition-cd-1}
+there exists a coherent $\mathcal{O}_U$-module $\mathcal{F}$
+such that $(\mathcal{F}_n)$ is the completion of $\mathcal{F}$.
+Namely, for the application of either result the only thing to
+check is that $(\mathcal{F}_n)$ satisfies the $(2, 3)$-inequalities.
+This is done in Lemma \ref{lemma-unwinding-conditions-bis}. If $y \in U_0$,
+then the $f$-adic completion of the stalk $\mathcal{F}_y$ is isomorphic to
+a finite free module over the $f$-adic completion of $\mathcal{O}_{U, y}$.
+Hence $\mathcal{F}$ is finite locally free in an open neighbourhood
+$U'$ of $U_0$. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equivalence}
+In Situation \ref{situation-algebraize} assume
+\begin{enumerate}
+\item $I = (f)$ is principal,
+\item $A$ is $f$-adically complete,
+\item $f$ is a nonzerodivisor,
+\item $H^1_\mathfrak a(A/fA)$ and $H^2_\mathfrak a(A/fA)$
+are finite $A$-modules.
+\end{enumerate}
+Then with $U_0 = U \cap V(f)$ the completion functor
+$$
+\colim_{U_0 \subset U' \subset U\text{ open}}
+\textit{Coh}(\mathcal{O}_{U'})
+\longrightarrow
+\textit{Coh}(U, f\mathcal{O}_U)
+$$
+is an equivalence on the full subcategories of finite locally free objects.
+\end{lemma}
+
+\begin{proof}
+The functor is fully faithful by
+Lemma \ref{lemma-fully-faithful-general-alternative}.
+Essential surjectivity follows from
+Lemma \ref{lemma-algebraization-principal-variant}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Coherent triples}
+\label{section-coherent-triples}
+
+\noindent
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+Let $f \in \mathfrak m$ be a nonzerodivisor. Set
+$X = \Spec(A)$, $X_0 = \Spec(A/fA)$, $U = X \setminus V(\mathfrak m)$, and
+$U_0 = U \cap X_0$.
+We say $(\mathcal{F}, \mathcal{F}_0, \alpha)$ is a {\it coherent triple}
+if we have
+\begin{enumerate}
+\item $\mathcal{F}$ is a coherent $\mathcal{O}_U$-module such that
+$f : \mathcal{F} \to \mathcal{F}$ is injective,
+\item $\mathcal{F}_0$ is a coherent $\mathcal{O}_{X_0}$-module,
+\item $\alpha : \mathcal{F}/f\mathcal{F} \to \mathcal{F}_0|_{U_0}$
+is an isomorphism.
+\end{enumerate}
+There is an obvious notion of a {\it morphism of coherent triples}
+which turns the collection of all coherent triples into a category.
+
+\medskip\noindent
+The category of coherent triples is additive but not abelian.
+However, it is clear what a short exact sequence of coherent
+triples is.
+
+\medskip\noindent
+Given two coherent triples $(\mathcal{F}, \mathcal{F}_0, \alpha)$
+and $(\mathcal{G}, \mathcal{G}_0, \beta)$ it may not be the case that
+$(\mathcal{F} \otimes_{\mathcal{O}_U} \mathcal{G},
+\mathcal{F}_0 \otimes_{\mathcal{O}_{X_0}} \mathcal{G}_0,
+\alpha \otimes \beta)$ is a coherent triple\footnote{Namely, it
+isn't necessarily the case that $f$
+is injective on $\mathcal{F} \otimes_{\mathcal{O}_U} \mathcal{G}$.}.
+However, if the stalks $\mathcal{G}_x$ are
+free for all $x \in U_0$, then this does hold.
+
+\medskip\noindent
+We will say the coherent triple $(\mathcal{G}, \mathcal{G}_0, \beta)$
+is {\it locally free}, resp.\ {\it invertible}
+if $\mathcal{G}$ and $\mathcal{G}_0$
+are locally free, resp.\ invertible modules. In this case tensoring
+with $(\mathcal{G}, \mathcal{G}_0, \beta)$ makes sense (see above)
+and turns short exact sequences of coherent triples into short exact
+sequences of coherent triples.
+
+\begin{lemma}
+\label{lemma-prepare-chi-triple}
+For any coherent triple $(\mathcal{F}, \mathcal{F}_0, \alpha)$
there exist a coherent $\mathcal{O}_X$-module $\mathcal{F}'$
with $f : \mathcal{F}' \to \mathcal{F}'$ injective,
+an isomorphism $\alpha' : \mathcal{F}'|_U \to \mathcal{F}$, and a map
+$\alpha'_0 : \mathcal{F}'/f\mathcal{F}' \to \mathcal{F}_0$
+such that $\alpha \circ (\alpha' \bmod f) = \alpha'_0|_{U_0}$.
+\end{lemma}
+
+\begin{proof}
+Choose a finite $A$-module $M$ such that $\mathcal{F}$ is the restriction
+to $U$ of the coherent $\mathcal{O}_X$-module associated to $M$, see
+Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+Since $\mathcal{F}$ is $f$-torsion free, we may replace $M$ by
+its quotient by $f$-power torsion.
+On the other hand, let $M_0 = \Gamma(X_0, \mathcal{F}_0)$
+so that $\mathcal{F}_0$ is the coherent $\mathcal{O}_{X_0}$-module
+associated to the finite $A/fA$-module $M_0$.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}
+there exists an $n$ such that the
isomorphism $\alpha$ corresponds to an $A/fA$-module homomorphism
+$\mathfrak m^n M/fM \to M_0$ (whose kernel and cokernel
+are annihilated by a power of $\mathfrak m$, but we don't need this).
+Thus if we take $M' = \mathfrak m^n M$ and we let
+$\mathcal{F}'$ be the coherent $\mathcal{O}_X$-module
+associated to $M'$, then the lemma is clear.
+\end{proof}
+
+\noindent
+Let $(\mathcal{F}, \mathcal{F}_0, \alpha)$ be a coherent triple.
+Choose $\mathcal{F}', \alpha', \alpha'_0$ as in
+Lemma \ref{lemma-prepare-chi-triple}. Set
+\begin{equation}
+\label{equation-chi-triple}
+\chi(\mathcal{F}, \mathcal{F}_0, \alpha) =
+\text{length}_A(\Coker(\alpha'_0)) -
+\text{length}_A(\Ker(\alpha'_0))
+\end{equation}
+The expression on the right makes sense as $\alpha'_0$ is an isomorphism
over $U_0$ and hence its kernel and cokernel are coherent modules supported
+on $\{\mathfrak m\}$ which therefore have finite length
+(Algebra, Lemma \ref{algebra-lemma-support-point}).
+
+\begin{lemma}
+\label{lemma-well-defined-chi-triple}
+The quantity $\chi(\mathcal{F}, \mathcal{F}_0, \alpha)$ in
+(\ref{equation-chi-triple}) does not depend on the choice of
+$\mathcal{F}', \alpha', \alpha'_0$ as in Lemma \ref{lemma-prepare-chi-triple}.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}', \alpha', \alpha'_0$ and
+$\mathcal{F}'', \alpha'', \alpha''_0$ be two such choices.
+For $n > 0$ set $\mathcal{F}'_n = \mathfrak m^n \mathcal{F}'$.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}
+for some $n$
+there exists an $\mathcal{O}_X$-module map $\mathcal{F}'_n \to \mathcal{F}''$
+agreeing with the identification
+$\mathcal{F}''|_U = \mathcal{F}'|_U$ determined by $\alpha'$ and $\alpha''$.
+Then the diagram
+$$
+\xymatrix{
+\mathcal{F}'_n/f\mathcal{F}'_n \ar[r] \ar[d] &
+\mathcal{F}'/f\mathcal{F}' \ar[d]^{\alpha_0'} \\
+\mathcal{F}''/f\mathcal{F}'' \ar[r]^{\alpha_0''} &
+\mathcal{F}_0
+}
+$$
+is commutative after restricting to $U_0$. Hence by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}
+it is commutative after restricting to
+$\mathfrak m^l(\mathcal{F}'_n/f\mathcal{F}'_n)$ for some $l > 0$. Since
+$\mathcal{F}'_{n + l}/f\mathcal{F}'_{n + l} \to \mathcal{F}'_n/f\mathcal{F}'_n$
+factors through $\mathfrak m^l(\mathcal{F}'_n/f\mathcal{F}'_n)$
+we see that after replacing $n$ by $n + l$ the diagram
+is commutative. In other words, we have found a third choice
+$\mathcal{F}''', \alpha''', \alpha'''_0$
+such that there are maps $\mathcal{F}''' \to \mathcal{F}''$
+and $\mathcal{F}''' \to \mathcal{F}'$ over $X$
+compatible with the maps over $U$ and $X_0$. This reduces us to
+the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume we have a map $\mathcal{F}'' \to \mathcal{F}'$ over $X$ compatible with
+$\alpha', \alpha''$ over $U$ and with $\alpha'_0, \alpha''_0$ over $X_0$.
+Observe that $\mathcal{F}'' \to \mathcal{F}'$ is injective as it is an
+isomorphism over $U$ and since $f : \mathcal{F}'' \to \mathcal{F}''$
+is injective. Clearly $\mathcal{F}'/\mathcal{F}''$ is supported on
+$\{\mathfrak m\}$ hence has finite length. We have the maps
+of coherent $\mathcal{O}_{X_0}$-modules
+$$
+\mathcal{F}''/f\mathcal{F}'' \to
+\mathcal{F}'/f\mathcal{F}' \xrightarrow{\alpha'_0}
+\mathcal{F}_0
+$$
+whose composition is $\alpha''_0$ and which are isomorphisms over $U_0$.
+Elementary homological algebra gives a $6$-term exact sequence
+$$
+\begin{matrix}
+0 \to
+\Ker(\mathcal{F}''/f\mathcal{F}'' \to \mathcal{F}'/f\mathcal{F}') \to
+\Ker(\alpha''_0) \to
+\Ker(\alpha'_0) \to \\
+\Coker(\mathcal{F}''/f\mathcal{F}'' \to \mathcal{F}'/f\mathcal{F}') \to
+\Coker(\alpha''_0) \to
+\Coker(\alpha'_0) \to 0
+\end{matrix}
+$$
+By additivity of lengths (Algebra, Lemma \ref{algebra-lemma-length-additive})
+we find that it suffices to show that
+$$
+\text{length}_A(
+\Coker(\mathcal{F}''/f\mathcal{F}'' \to \mathcal{F}'/f\mathcal{F}')) -
+\text{length}_A(
+\Ker(\mathcal{F}''/f\mathcal{F}'' \to \mathcal{F}'/f\mathcal{F}')) = 0
+$$
+This follows from applying the snake lemma to
+the diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{F}'' \ar[r]_f \ar[d] &
+\mathcal{F}'' \ar[r] \ar[d] &
+\mathcal{F}''/f\mathcal{F}'' \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\mathcal{F}' \ar[r]^f &
+\mathcal{F}' \ar[r] &
+\mathcal{F}'/f\mathcal{F}' \ar[r] &
+0
+}
+$$
+and the fact that $\mathcal{F}'/\mathcal{F}''$ has finite length.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ses-chi-triple}
+We have
+$\chi(\mathcal{G}, \mathcal{G}_0, \beta) =
+\chi(\mathcal{F}, \mathcal{F}_0, \alpha) +
+\chi(\mathcal{H}, \mathcal{H}_0, \gamma)$ if
+$$
+0 \to
+(\mathcal{F}, \mathcal{F}_0, \alpha) \to
+(\mathcal{G}, \mathcal{G}_0, \beta) \to
+(\mathcal{H}, \mathcal{H}_0, \gamma)
+\to 0
+$$
+is a short exact sequence of coherent triples.
+\end{lemma}
+
+\begin{proof}
+Choose $\mathcal{G}', \beta', \beta'_0$ as in
+Lemma \ref{lemma-prepare-chi-triple}
+for the triple $(\mathcal{G}, \mathcal{G}_0, \beta)$.
+Denote $j : U \to X$ the inclusion morphism.
+Let $\mathcal{F}' \subset \mathcal{G}'$
+be the kernel of the composition
+$$
+\mathcal{G}' \xrightarrow{\beta'} j_*\mathcal{G} \to j_*\mathcal{H}
+$$
+Observe that $\mathcal{H}' = \mathcal{G}'/\mathcal{F}'$
+is a coherent subsheaf of $j_*\mathcal{H}$ and hence
+$f : \mathcal{H}' \to \mathcal{H}'$ is injective.
+Hence by the snake lemma we obtain a short exact sequence
+$$
+0 \to \mathcal{F}'/f\mathcal{F}' \to
+\mathcal{G}'/f\mathcal{G}' \to
+\mathcal{H}'/f\mathcal{H}' \to 0
+$$
+We have isomorphisms
+$\alpha' : \mathcal{F}'|_U \to \mathcal{F}$,
+$\beta' : \mathcal{G}'|_U \to \mathcal{G}$, and
+$\gamma' : \mathcal{H}'|_U \to \mathcal{H}$ by construction.
+To finish the proof we'll need to construct maps
+$\alpha'_0 : \mathcal{F}'/f\mathcal{F}' \to \mathcal{F}_0$ and
+$\gamma'_0 : \mathcal{H}'/f\mathcal{H}' \to \mathcal{H}_0$ as in
+Lemma \ref{lemma-prepare-chi-triple} and fitting into
+a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{F}'/f\mathcal{F}' \ar[r] \ar@{..>}[d]^{\alpha'_0} &
+\mathcal{G}'/f\mathcal{G}' \ar[r] \ar[d]^{\beta'_0} &
+\mathcal{H}'/f\mathcal{H}' \ar[r] \ar@{..>}[d]^{\gamma'_0} &
+0 \\
+0 \ar[r] &
+\mathcal{F}_0 \ar[r] &
+\mathcal{G}_0 \ar[r] &
+\mathcal{H}_0 \ar[r] &
+0
+}
+$$
+However, this may not be possible with our initial choice of $\mathcal{G}'$.
+From the displayed diagram we see the obstruction is
+exactly the composition
+$$
+\delta :
+\mathcal{F}'/f\mathcal{F}' \to
+\mathcal{G}'/f\mathcal{G}' \xrightarrow{\beta'_0}
+\mathcal{G}_0 \to
+\mathcal{H}_0
+$$
+Note that the restriction of $\delta$ to $U_0$ is zero by our choice of
+$\mathcal{F}'$ and $\mathcal{H}'$. Hence by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}
there exists a $k > 0$ such that
+$\delta$ vanishes on $\mathfrak m^k \cdot (\mathcal{F}'/f\mathcal{F}')$.
+For $n > k$ set $\mathcal{G}'_n = \mathfrak m^n \mathcal{G}'$,
+$\mathcal{F}'_n = \mathcal{G}'_n \cap \mathcal{F}'$, and
+$\mathcal{H}'_n = \mathcal{G}'_n/\mathcal{F}'_n$.
+Observe that $\beta'_0$ can be composed with
+$\mathcal{G}'_n/f\mathcal{G}'_n \to \mathcal{G}'/f\mathcal{G}'$
+to give a map
+$\beta'_{n, 0} : \mathcal{G}'_n/f\mathcal{G}'_n \to \mathcal{G}_0$
+as in Lemma \ref{lemma-prepare-chi-triple}.
+By Artin-Rees (Algebra, Lemma \ref{algebra-lemma-Artin-Rees})
+we may choose $n$ such that
+$\mathcal{F}'_n \subset \mathfrak m^k \mathcal{F}'$.
+As above the maps
+$f : \mathcal{F}'_n \to \mathcal{F}'_n$,
+$f : \mathcal{G}'_n \to \mathcal{G}'_n$, and
+$f : \mathcal{H}'_n \to \mathcal{H}'_n$ are injective
+and as above using the snake lemma we obtain a short exact
+sequence
+$$
+0 \to \mathcal{F}'_n/f\mathcal{F}'_n \to
+\mathcal{G}'_n/f\mathcal{G}'_n \to
+\mathcal{H}'_n/f\mathcal{H}'_n \to 0
+$$
+As above we have isomorphisms
+$\alpha'_n : \mathcal{F}'_n|_U \to \mathcal{F}$,
+$\beta'_n : \mathcal{G}'_n|_U \to \mathcal{G}$, and
+$\gamma'_n : \mathcal{H}'_n|_U \to \mathcal{H}$.
+We consider the obstruction
+$$
+\delta_n :
+\mathcal{F}'_n/f\mathcal{F}'_n \to
+\mathcal{G}'_n/f\mathcal{G}'_n
+\xrightarrow{\beta'_{n, 0}}
+\mathcal{G}_0 \to
+\mathcal{H}_0
+$$
+as before. However, the commutative diagram
+$$
+\xymatrix{
+\mathcal{F}'_n/f\mathcal{F}'_n \ar[r] \ar[d] &
+\mathcal{G}'_n/f\mathcal{G}'_n \ar[r]_{\beta'_{n, 0}} \ar[d] &
+\mathcal{G}_0 \ar[r] \ar[d] &
+\mathcal{H}_0 \ar[d] \\
+\mathcal{F}'/f\mathcal{F}' \ar[r] &
+\mathcal{G}'/f\mathcal{G}' \ar[r]^{\beta'_0} &
+\mathcal{G}_0 \ar[r] &
+\mathcal{H}_0
+}
+$$
together with our choice of $n$ and our observation about $\delta$,
shows that $\delta_n = 0$.
+This produces the desired maps
+$\alpha'_{n, 0} : \mathcal{F}'_n/f\mathcal{F}'_n \to \mathcal{F}_0$, and
+$\gamma'_{n, 0} : \mathcal{H}'_n/f\mathcal{H}'_n \to \mathcal{H}_0$.
Thus we may use
+$\mathcal{F}'_n, \alpha'_n, \alpha'_{n, 0}$,
+$\mathcal{G}'_n, \beta'_n, \beta'_{n, 0}$, and
+$\mathcal{H}'_n, \gamma'_n, \gamma'_{n, 0}$
+to compute
+$\chi(\mathcal{F}, \mathcal{F}_0, \alpha)$,
+$\chi(\mathcal{G}, \mathcal{G}_0, \beta)$, and
+$\chi(\mathcal{H}, \mathcal{H}_0, \gamma)$.
+Now finally the lemma follows from
+an application of the snake lemma to
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{F}'_n/f\mathcal{F}'_n \ar[r] \ar[d] &
+\mathcal{G}'_n/f\mathcal{G}'_n \ar[r] \ar[d] &
+\mathcal{H}'_n/f\mathcal{H}'_n \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\mathcal{F}_0 \ar[r] &
+\mathcal{G}_0 \ar[r] &
+\mathcal{H}_0 \ar[r] &
+0
+}
+$$
+and additivity of lengths (Algebra, Lemma \ref{algebra-lemma-length-additive}).
+\end{proof}
+
+\begin{proposition}
+\label{proposition-hilbert-triple}
+Let $(\mathcal{F}, \mathcal{F}_0, \alpha)$ be a coherent triple.
+Let $(\mathcal{L}, \mathcal{L}_0, \lambda)$ be an invertible coherent
+triple. Then the function
+$$
+\mathbf{Z} \longrightarrow \mathbf{Z},\quad
+n \longmapsto
+\chi((\mathcal{F}, \mathcal{F}_0, \alpha) \otimes
+(\mathcal{L}, \mathcal{L}_0, \lambda)^{\otimes n})
+$$
+is a polynomial of degree $\leq \dim(\text{Supp}(\mathcal{F}))$.
+\end{proposition}
+
+\noindent
+More precisely, if $\mathcal{F} = 0$, then the function is constant.
+If $\mathcal{F}$ has finite support in $U$, then the function is constant.
+If the support of $\mathcal{F}$ in $U$ has dimension $1$, i.e., the
+closure of the support of $\mathcal{F}$ in $X$ has dimension $2$, then
+the function is linear, etc.
+
+\begin{proof}
+We will prove this by induction on the dimension of the support of
+$\mathcal{F}$.
+
+\medskip\noindent
+The base case is when $\mathcal{F} = 0$. Then either
+$\mathcal{F}_0$ is zero or its support is $\{\mathfrak m\}$.
+In this case we have
+$$
+(\mathcal{F}, \mathcal{F}_0, \alpha) \otimes
+(\mathcal{L}, \mathcal{L}_0, \lambda)^{\otimes n} =
+(0, \mathcal{F}_0 \otimes \mathcal{L}_0^{\otimes n}, 0) \cong
+(0, \mathcal{F}_0, 0)
+$$
+Thus the function of the lemma is constant with value equal
+to the length of $\mathcal{F}_0$.
+
+\medskip\noindent
+Induction step. Assume the support of $\mathcal{F}$ is nonempty.
+Let $\mathcal{G}_0 \subset \mathcal{F}_0$ denote the submodule
+of sections supported on $\{\mathfrak m\}$. Then we get a short
+exact sequence
+$$
+0 \to (0, \mathcal{G}_0, 0) \to
+(\mathcal{F}, \mathcal{F}_0, \alpha) \to
+(\mathcal{F}, \mathcal{F}_0/\mathcal{G}_0, \alpha) \to 0
+$$
+This sequence remains exact if we tensor by the invertible
+coherent triple $(\mathcal{L}, \mathcal{L}_0, \lambda)$, see
+discussion above. Thus by additivity of $\chi$
+(Lemma \ref{lemma-ses-chi-triple})
+and the base case explained above, it suffices to prove
+the induction step for
+$(\mathcal{F}, \mathcal{F}_0/\mathcal{G}_0, \alpha)$.
+In this way we see that we may assume $\mathfrak m$ is not
+an associated point of $\mathcal{F}_0$.
+
+\medskip\noindent
+Let $T = \text{Ass}(\mathcal{F}) \cup \text{Ass}(\mathcal{F}/f\mathcal{F})$.
+Since $U$ is quasi-affine, we can find $s \in \Gamma(U, \mathcal{L})$
+which does not vanish at any $u \in T$, see
+Properties, Lemma
+\ref{properties-lemma-quasi-affine-invertible-nonvanishing-section}.
+After multiplying $s$ by a suitable element of $\mathfrak m$
+we may assume $\lambda(s \bmod f) = s_0|_{U_0}$ for some
+$s_0 \in \Gamma(X_0, \mathcal{L}_0)$; details omitted.
+We obtain a morphism
+$$
+(s, s_0) :
+(\mathcal{O}_U, \mathcal{O}_{X_0}, 1)
+\longrightarrow
+(\mathcal{L}, \mathcal{L}_0, \lambda)
+$$
+in the category of coherent triples. Let
+$\mathcal{G} = \Coker(s : \mathcal{F} \to \mathcal{F} \otimes \mathcal{L})$
+and
+$\mathcal{G}_0 = \Coker(s_0 : \mathcal{F}_0 \to
+\mathcal{F}_0 \otimes \mathcal{L}_0)$. Observe that $s_0 : \mathcal{F}_0 \to
+\mathcal{F}_0 \otimes \mathcal{L}_0$ is injective as it is injective
+on $U_0$ by our choice of $s$ and as $\mathfrak m$ isn't an
+associated point of $\mathcal{F}_0$. It follows that
+there exists an
+isomorphism $\beta : \mathcal{G}/f\mathcal{G} \to \mathcal{G}_0|_{U_0}$
+such that we obtain a short exact sequence
+$$
+0 \to
+(\mathcal{F}, \mathcal{F}_0, \alpha) \to
+(\mathcal{F}, \mathcal{F}_0, \alpha) \otimes
+(\mathcal{L}, \mathcal{L}_0, \lambda) \to
+(\mathcal{G}, \mathcal{G}_0, \beta) \to 0
+$$
+By induction on the dimension of the support we know the proposition
+holds for the coherent triple $(\mathcal{G}, \mathcal{G}_0, \beta)$.
+Using the additivity of Lemma \ref{lemma-ses-chi-triple}
+we see that
+$$
+n \longmapsto
+\chi((\mathcal{F}, \mathcal{F}_0, \alpha) \otimes
+(\mathcal{L}, \mathcal{L}_0, \lambda)^{\otimes n + 1})
+-
+\chi((\mathcal{F}, \mathcal{F}_0, \alpha) \otimes
+(\mathcal{L}, \mathcal{L}_0, \lambda)^{\otimes n})
+$$
+is a polynomial. We conclude by a variant of
+Algebra, Lemma \ref{algebra-lemma-numerical-polynomial}
+for functions defined for all integers (details omitted).
+\end{proof}
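The variant alluded to at the end of the proof can be sketched as follows; the notation $P$, $Q$, and the binomial basis computation are ours, given only to indicate the argument.

```latex
% Variant for functions defined on all of Z: if f : Z -> Z has first
% difference (\Delta f)(n) = f(n + 1) - f(n) equal to a polynomial Q
% for all n, then f itself is a polynomial. Sketch: in the binomial
% basis one has
\[
\Delta\binom{n}{k} = \binom{n + 1}{k} - \binom{n}{k} = \binom{n}{k - 1}
\]
% so if Q(n) = \sum c_k \binom{n}{k}, then P(n) = \sum c_k \binom{n}{k + 1}
% satisfies \Delta P = Q. Then \Delta(f - P) = 0, so f - P is constant,
% and f is a polynomial of degree \deg(Q) + 1.
```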
+
+\begin{lemma}
+\label{lemma-nonnegative-chi-triple}
+Assume $\text{depth}(A) \geq 3$ or equivalently
+$\text{depth}(A/fA) \geq 2$. Let $(\mathcal{L}, \mathcal{L}_0, \lambda)$
+be an invertible coherent triple. Then
+$$
+\chi(\mathcal{L}, \mathcal{L}_0, \lambda) =
+\text{length}_A \Coker(\Gamma(U, \mathcal{L}) \to \Gamma(U_0, \mathcal{L}_0))
+$$
+and in particular this is $\geq 0$. Moreover,
+$\chi(\mathcal{L}, \mathcal{L}_0, \lambda) = 0$ if and only if
+$\mathcal{L} \cong \mathcal{O}_U$.
+\end{lemma}
+
+\begin{proof}
+The equivalence of the depth conditions follows from
+Algebra, Lemma \ref{algebra-lemma-depth-drops-by-one}.
+By the depth condition we see that
+$\Gamma(U, \mathcal{O}_U) = A$ and
+$\Gamma(U_0, \mathcal{O}_{U_0}) = A/fA$, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-depth} and
+Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+Using Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-for-finite-locally-free}
+we find that $M = \Gamma(U, \mathcal{L})$ is a finite $A$-module.
+This in turn implies $\text{depth}(M) \geq 2$ for example by
+part (4) of Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}
+or by Divisors, Lemma \ref{divisors-lemma-depth-pushforward}.
+Also, we have $\mathcal{L}_0 \cong \mathcal{O}_{X_0}$
+as $X_0$ is a local scheme. Hence we also see that
+$M_0 = \Gamma(X_0, \mathcal{L}_0) = \Gamma(U_0, \mathcal{L}_0|_{U_0})$
+and that this module is isomorphic to $A/fA$.
+
+\medskip\noindent
+By the above $\mathcal{F}' = \widetilde{M}$ is a coherent
+$\mathcal{O}_X$-module whose restriction to $U$ is isomorphic to $\mathcal{L}$.
+The isomorphism $\lambda : \mathcal{L}/f\mathcal{L} \to \mathcal{L}_0|_{U_0}$
+determines a map $M/fM \to M_0$ on global sections
+which is an isomorphism over $U_0$.
+Since $\text{depth}(M) \geq 2$ we see
+that $H^0_\mathfrak m(M/fM) = 0$ and it follows that
+$M/fM \to M_0$ is injective. Thus by definition
+$$
+\chi(\mathcal{L}, \mathcal{L}_0, \lambda) =
+\text{length}_A \Coker(M/fM \to M_0)
+$$
+which gives the first statement of the lemma.
+
+\medskip\noindent
+Finally, if this length is $0$, then $M \to M_0$ is surjective.
+Hence we can find $s \in M = \Gamma(U, \mathcal{L})$
+mapping to a trivializing section of $\mathcal{L}_0$.
+Consider the finite $A$-modules $K$, $Q$ defined by the exact
+sequence
+$$
+0 \to K \to A \xrightarrow{s} M \to Q \to 0
+$$
+The supports of $K$ and $Q$ do not meet $U_0$ because $s$
+is nonzero at points of $U_0$. Using
+Algebra, Lemma \ref{algebra-lemma-depth-in-ses}
+we see that $\text{depth}(K) \geq 2$ (observe that
+$As \subset M$ has $\text{depth} \geq 1$ as a submodule of $M$).
Thus the support of $K$, if nonempty, has dimension $\geq 2$ by
+Algebra, Lemma \ref{algebra-lemma-bound-depth}.
This contradicts $\text{Supp}(K) \cap V(f) \subset \{\mathfrak m\}$
+unless $K = 0$. When $K = 0$ we find that
+$\text{depth}(Q) \geq 2$ and we conclude
+$Q = 0$ as before. Hence $A \cong M$ and
+$\mathcal{L}$ is trivial.
+\end{proof}
+
+
+
+
+
+
+
+\section{Invertible modules on punctured spectra, I}
+\label{section-local-lefschetz-for-pic}
+
+\noindent
+In this section we prove some local Lefschetz theorems for the Picard group.
+Some of the ideas are taken from
+\cite{Kollar-pic}, \cite{Bhatt-local}, and \cite{Kollar-map-pic}.
+
+\begin{lemma}
+\label{lemma-injective-torsion-in-pic}
+Let $(A, \mathfrak m)$ be a Noetherian local ring. Let $f \in \mathfrak m$
+be a nonzerodivisor and assume that $\text{depth}(A/fA) \geq 2$, or equivalently
+$\text{depth}(A) \geq 3$. Let $U$, resp.\ $U_0$ be the punctured
+spectrum of $A$, resp.\ $A/fA$. The map
+$$
+\Pic(U) \to \Pic(U_0)
+$$
+is injective on torsion.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_U$-module.
+Observe that $\mathcal{L}$ maps to $0$ in $\Pic(U_0)$
+if and only if we can extend $\mathcal{L}$ to an invertible
+coherent triple $(\mathcal{L}, \mathcal{L}_0, \lambda)$
+as in Section \ref{section-coherent-triples}.
+By Proposition \ref{proposition-hilbert-triple}
+the function
+$$
+n \longmapsto \chi((\mathcal{L}, \mathcal{L}_0, \lambda)^{\otimes n})
+$$
+is a polynomial. By Lemma \ref{lemma-nonnegative-chi-triple}
+the value of this polynomial is zero if and only if
+$\mathcal{L}^{\otimes n}$ is trivial.
+Thus if $\mathcal{L}$ is torsion, then this
+polynomial has infinitely many zeros, hence is
+identically zero, hence $\mathcal{L}$ is trivial.
+\end{proof}
+
+\begin{proposition}[Koll\'ar]
+\label{proposition-injective-pic}
+\begin{reference}
+\cite[Theorem 1.9]{Kollar-map-pic}
+\end{reference}
+Let $(A, \mathfrak m)$ be a Noetherian local ring. Let $f \in \mathfrak m$.
+Assume
+\begin{enumerate}
+\item $A$ has a dualizing complex,
+\item $f$ is a nonzerodivisor,
+\item $\text{depth}(A/fA) \geq 2$, or equivalently $\text{depth}(A) \geq 3$,
+\item if $f \in \mathfrak p \subset A$ is a prime ideal with
+$\dim(A/\mathfrak p) = 2$, then $\text{depth}(A_\mathfrak p) \geq 2$.
+\end{enumerate}
+Let $U$, resp.\ $U_0$ be the punctured spectrum of $A$, resp.\ $A/fA$. The map
+$$
+\Pic(U) \to \Pic(U_0)
+$$
is injective. Finally, if (1), (2), and (3) hold, $A$ is $(S_2)$, and
$\dim(A) \geq 4$, then (4) holds.
+\end{proposition}
+
+\begin{proof}
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_U$-module.
+Observe that $\mathcal{L}$ maps to $0$ in $\Pic(U_0)$
+if and only if we can extend $\mathcal{L}$ to an invertible
+coherent triple $(\mathcal{L}, \mathcal{L}_0, \lambda)$
+as in Section \ref{section-coherent-triples}.
+By Proposition \ref{proposition-hilbert-triple}
+the function
+$$
+n \longmapsto \chi((\mathcal{L}, \mathcal{L}_0, \lambda)^{\otimes n})
+$$
+is a polynomial $P$. By Lemma \ref{lemma-nonnegative-chi-triple}
+we have
+$P(n) \geq 0$ for all $n \in \mathbf{Z}$ with equality if and only if
+$\mathcal{L}^{\otimes n}$ is trivial. In particular $P(0) = 0$
+and $P$ is either identically zero and we win or $P$ has even degree $\geq 2$.
+
+\medskip\noindent
+Set $M = \Gamma(U, \mathcal{L})$ and
+$M_0 = \Gamma(X_0, \mathcal{L}_0) = \Gamma(U_0, \mathcal{L}_0)$.
+Then $M$ is a finite $A$-module of depth $\geq 2$
+and $M_0 \cong A/fA$, see proof of Lemma \ref{lemma-nonnegative-chi-triple}.
Note that $H^2_\mathfrak m(M)$ is a finite $A$-module by
+Local Cohomology, Lemma
+\ref{local-cohomology-lemma-local-finiteness-for-finite-locally-free}
+and the fact that $H^i_\mathfrak m(A) = 0$ for $i = 0, 1, 2$
+since $\text{depth}(A) \geq 3$.
+Consider the short exact sequence
+$$
+0 \to M/fM \to M_0 \to Q \to 0
+$$
+Lemma \ref{lemma-nonnegative-chi-triple} tells us $Q$ has finite length
+equal to $\chi(\mathcal{L}, \mathcal{L}_0, \lambda)$.
+We obtain $Q = H^1_\mathfrak m(M/fM)$ and
+$H^i_\mathfrak m(M/fM) = H^i_\mathfrak m(M_0) \cong H^i_\mathfrak m(A/fA)$
+for $i > 1$ from the long exact sequence of local cohomology
+associated to the displayed short exact sequence. Consider the long
+exact sequence of local cohomology associated to the sequence
+$0 \to M \to M \to M/fM \to 0$. It starts with
+$$
+0 \to Q \to H^2_\mathfrak m(M) \to H^2_\mathfrak m(M) \to
+H^2_\mathfrak m(A/fA)
+$$
+Using additivity of lengths we see that
+$\chi(\mathcal{L}, \mathcal{L}_0, \lambda)$
+is equal to the length of the image of
+$H^2_\mathfrak m(M) \to H^2_\mathfrak m(A/fA)$.
+
+\medskip\noindent
Let us prove the proposition in a special case to elucidate the rest of the proof.
+Namely, assume for a moment that $H^2_\mathfrak m(A/fA)$ is
+a finite length module. Then
+we would have $P(1) \leq \text{length}_A H^2_\mathfrak m(A/fA)$.
+The exact same argument applied to $\mathcal{L}^{\otimes n}$ shows that
+$P(n) \leq \text{length}_A H^2_\mathfrak m(A/fA)$ for all $n$.
+Thus $P$ cannot have positive degree and we win.
+In the rest of the proof we will modify this argument to give
+a linear upper bound for $P(n)$ which suffices.
+
+\medskip\noindent
+Let us study the map
+$H^2_\mathfrak m(M) \to H^2_\mathfrak m(M_0) \cong H^2_\mathfrak m(A/fA)$.
+Choose a normalized dualizing complex $\omega_A^\bullet$ for $A$.
+By local duality
+(Dualizing Complexes, Lemma \ref{dualizing-lemma-special-case-local-duality})
+this map is Matlis dual to the map
+$$
+\text{Ext}^{-2}_A(M, \omega_A^\bullet)
+\longleftarrow
+\text{Ext}^{-2}_A(M_0, \omega_A^\bullet)
+$$
+whose image therefore has the same (finite) length.
+The support (if nonempty) of the finite $A$-module
+$\text{Ext}^{-2}_A(M_0, \omega_A^\bullet)$ consists of
+$\mathfrak m$ and a finite number of primes
+$\mathfrak p_1, \ldots, \mathfrak p_r$ containing $f$ with
+$\dim(A/\mathfrak p_i) = 1$. Namely, by
+Local Cohomology, Lemma \ref{local-cohomology-lemma-sitting-in-degrees}
+the support is contained in the set of primes $\mathfrak p \subset A$ with
+$\text{depth}_{A_\mathfrak p}(M_{0, \mathfrak p}) + \dim(A/\mathfrak p) \leq 2$.
+Thus it suffices to show there is no prime $\mathfrak p$ containing $f$ with
+$\dim(A/\mathfrak p) = 2$ and
+$\text{depth}_{A_\mathfrak p}(M_{0, \mathfrak p}) = 0$.
+However, because $M_{0, \mathfrak p} \cong (A/fA)_\mathfrak p$
+this would give $\text{depth}(A_\mathfrak p) = 1$ which contradicts
+assumption (4).
+Choose a section $t \in \Gamma(U, \mathcal{L}^{\otimes -1})$
+which does not vanish at the points $\mathfrak p_1, \ldots, \mathfrak p_r$, see
+Properties, Lemma
+\ref{properties-lemma-quasi-affine-invertible-nonvanishing-section}.
+Multiplication by $t$ on global sections determines a map $t : M \to A$
+which defines an isomorphism $M_{\mathfrak p_i} \to A_{\mathfrak p_i}$ for
+$i = 1, \ldots, r$. Denote $t_0 = t|_{U_0}$ the corresponding section
+of $\Gamma(U_0, \mathcal{L}_0^{\otimes -1})$ which similarly determines
+a map $t_0 : M_0 \to A/fA$ compatible with $t$.
+We conclude that there is a commutative diagram
+$$
+\xymatrix{
+\text{Ext}^{-2}_A(M, \omega_A^\bullet) &
+\text{Ext}^{-2}_A(M_0, \omega_A^\bullet) \ar[l] \\
+\text{Ext}^{-2}_A(A, \omega_A^\bullet) \ar[u]^t &
+\text{Ext}^{-2}_A(A/fA, \omega_A^\bullet) \ar[l] \ar[u]_{t_0}
+}
+$$
+It follows that the length of the image of the top horizontal
+map is at most the length of $\text{Ext}^{-2}_A(A/fA, \omega_A^\bullet)$
+plus the length of the cokernel of $t_0$.
+
+\medskip\noindent
+However, if we replace $\mathcal{L}$ by $\mathcal{L}^{\otimes n}$ for $n > 1$,
+then we can use
+$$
+t^n :
+M_n = \Gamma(U, \mathcal{L}^{\otimes n})
+\longrightarrow
+\Gamma(U, \mathcal{O}_U) = A
+$$
+instead of $t$. This replaces $t_0$ by its $n$th power.
+Thus the length of the image of the map
+$\text{Ext}^{-2}_A(M_n, \omega_A^\bullet) \leftarrow
+\text{Ext}^{-2}_A(M_{n, 0}, \omega_A^\bullet)$
+is at most the length of $\text{Ext}^{-2}_A(A/fA, \omega_A^\bullet)$
+plus the length of the cokernel of
+$$
+t_0^n :
+\text{Ext}^{-2}_A(A/fA, \omega_A^\bullet)
+\longrightarrow
+\text{Ext}^{-2}_A(M_{n, 0}, \omega_A^\bullet)
+$$
+Via the isomorphism $M_0 \cong A/fA$ the map $t_0$ becomes
+$g : A/fA \to A/fA$ for some $g \in A/fA$ and via the corresponding
+isomorphisms $M_{n, 0} \cong A/fA$ the map $t_0^n$ becomes
+$g^n : A/fA \to A/fA$. Thus the length of the cokernel above
+is the length of the quotient of
+$\text{Ext}^{-2}_A(A/fA, \omega_A^\bullet)$ by $g^n$.
+Since $\text{Ext}^{-2}_A(A/fA, \omega_A^\bullet)$ is a finite $A$-module
+with support $T$ of dimension $1$ and since $V(g) \cap T$
+consists of the closed point by our choice of $t$
+this length grows linearly in $n$ by
+Algebra, Lemma \ref{algebra-lemma-support-dimension-d}.
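+
+\medskip\noindent
+Summarizing the last two paragraphs: there are constants
+$c_1, c_2 \geq 0$ such that
+$$
+P(n) \leq c_1 + c_2 n \quad \text{for all } n \geq 1
+$$
+A nonzero polynomial of even degree $\geq 2$ taking nonnegative values
+grows at least quadratically in $n$, so this bound forces $P$ to be
+identically zero and we win.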
+
+\medskip\noindent
+To finish the proof we prove the final assertion. Assume
+$f \in \mathfrak m \subset A$ satisfies
+(1), (2), (3), $A$ is $(S_2)$, and $\dim(A) \geq 4$.
+Condition (1) implies $A$ is catenary, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-universally-catenary}.
+Then $\Spec(A)$ is equidimensional by Local Cohomology, Lemma
+\ref{local-cohomology-lemma-catenary-S2-equidimensional}.
+Thus $\dim(A_\mathfrak p) + \dim(A/\mathfrak p) \geq 4$
+for every prime $\mathfrak p$ of $A$. Then
+$\text{depth}(A_\mathfrak p) \geq \min(2, \dim(A_\mathfrak p))
+\geq \min(2, 4 - \dim(A/\mathfrak p))$ and hence (4) holds.
+\end{proof}
+
+\begin{remark}
+\label{remark-compare-SGA2}
+In SGA2 we find the following result. Let $(A, \mathfrak m)$ be a
+Noetherian local ring. Let $f \in \mathfrak m$. Assume $A$
+is a quotient of a regular ring, the element
+$f$ is a nonzerodivisor, and
+\begin{enumerate}
+\item[(a)] if $\mathfrak p \subset A$ is a prime ideal with
+$\dim(A/\mathfrak p) = 1$, then $\text{depth}(A_\mathfrak p) \geq 2$, and
+\item[(b)] $\text{depth}(A/fA) \geq 3$, or equivalently
+$\text{depth}(A) \geq 4$.
+\end{enumerate}
+Let $U$, resp.\ $U_0$ be the punctured spectrum of $A$, resp.\ $A/fA$. Then
+the map
+$$
+\Pic(U) \to \Pic(U_0)
+$$
+is injective. This is \cite[Expos\'e XI, Lemma 3.16]{SGA2}\footnote{Condition
+(a) follows from condition (b), see
+Algebra, Lemma \ref{algebra-lemma-depth-localization}.}. This result
+from SGA2 follows from Proposition \ref{proposition-injective-pic}
+because
+\begin{enumerate}
+\item a quotient of a regular ring has a dualizing complex (see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-regular-gorenstein} and
+Proposition \ref{dualizing-proposition-dualizing-essentially-finite-type}), and
+\item if $\text{depth}(A) \geq 4$ then $\text{depth}(A_\mathfrak p) \geq 2$
+for all primes $\mathfrak p$ with $\dim(A/\mathfrak p) = 2$, see
+Algebra, Lemma \ref{algebra-lemma-depth-localization}.
+\end{enumerate}
+\end{remark}
+
+
+
+
+
+
+\section{Invertible modules on punctured spectra, II}
+\label{section-local-lefschetz-for-pic-surjective}
+
+\noindent
+Next we turn to surjectivity in local Lefschetz for the Picard group.
+First, to extend an invertible module on $U_0$ to an open neighbourhood,
+we have the following simple criterion.
+
+\begin{lemma}
+\label{lemma-surjective-Pic-first}
+Let $(A, \mathfrak m)$ be a Noetherian local ring and $f \in \mathfrak m$.
+Assume
+\begin{enumerate}
+\item $A$ is $f$-adically complete,
+\item $f$ is a nonzerodivisor,
+\item $H^1_\mathfrak m(A/fA)$ and $H^2_\mathfrak m(A/fA)$
+are finite $A$-modules, and
+\item $H^3_\mathfrak m(A/fA) = 0$\footnote{Observe that (3) and (4) hold
+if $\text{depth}(A/fA) \geq 4$, or equivalently $\text{depth}(A) \geq 5$.}.
+\end{enumerate}
+Let $U$, resp.\ $U_0$ be the punctured spectrum of $A$, resp.\ $A/fA$.
+Then
+$$
+\colim_{U_0 \subset U' \subset U\text{ open}} \Pic(U')
+\longrightarrow
+\Pic(U_0)
+$$
+is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $U_0 \subset U_n \subset U$ be the $n$th infinitesimal neighbourhood
+of $U_0$. Observe that the ideal sheaf of $U_n$ in $U_{n + 1}$ is
+isomorphic to $\mathcal{O}_{U_0}$ as $U_0 \subset U$ is the principal
+closed subscheme cut out by the nonzerodivisor $f$. Hence we have
+an exact sequence of abelian groups
+$$
+\Pic(U_{n + 1}) \to \Pic(U_n) \to
+H^2(U_0, \mathcal{O}_{U_0}) = H^3_\mathfrak m(A/fA) = 0
+$$
+see More on Morphisms, Lemma
+\ref{more-morphisms-lemma-picard-group-first-order-thickening}.
+Thus every invertible $\mathcal{O}_{U_0}$-module is the restriction
+of an invertible coherent formal module, i.e., an invertible object of
+$\textit{Coh}(U, f\mathcal{O}_U)$. We conclude by applying
+Lemma \ref{lemma-equivalence}.
+\end{proof}
+
+\begin{remark}
+\label{remark-surjective-Pic-second}
+Let $(A, \mathfrak m)$ be a Noetherian local ring and $f \in \mathfrak m$.
+The conclusion of Lemma \ref{lemma-surjective-Pic-first} holds if we assume
+\begin{enumerate}
+\item $A$ has a dualizing complex,
+\item $A$ is $f$-adically complete,
+\item $f$ is a nonzerodivisor,
+\item one of the following is true
+\begin{enumerate}
+\item $A_f$ is $(S_2)$ and every minimal prime $\mathfrak p \subset A$
+with $f \not \in \mathfrak p$ satisfies $\dim(A/\mathfrak p) \geq 4$, or
+\item if $\mathfrak p \not \in V(f)$ and
+$V(\mathfrak p) \cap V(f) \not = \{\mathfrak m\}$, then
+$\text{depth}(A_\mathfrak p) + \dim(A/\mathfrak p) > 3$.
+\end{enumerate}
+\item $H^3_{\mathfrak m}(A/fA) = 0$.
+\end{enumerate}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-surjective-Pic-first}
+using Lemma \ref{lemma-equivalence-better} instead of
+Lemma \ref{lemma-equivalence}.
+Two points need to be made here: (a)
+it seems hard to find examples where one knows
+$H^3_{\mathfrak m}(A/fA) = 0$ without assuming
+$\text{depth}(A/fA) \geq 4$, and
+(b) the proof of Lemma \ref{lemma-equivalence-better} is a
+good deal harder than the proof of Lemma \ref{lemma-equivalence}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-surjective-Pic-first-better}
+Let $(A, \mathfrak m)$ be a Noetherian local ring and $f \in \mathfrak m$.
+Assume
+\begin{enumerate}
+\item the conditions of Lemma \ref{lemma-surjective-Pic-first} hold, and
+\item for every maximal ideal $\mathfrak p \subset A_f$
+the punctured spectrum of $(A_f)_\mathfrak p$ has trivial Picard group.
+\end{enumerate}
+Let $U$, resp.\ $U_0$ be the punctured spectrum of $A$, resp.\ $A/fA$.
+Then
+$$
+\Pic(U) \longrightarrow \Pic(U_0)
+$$
+is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{L}_0 \in \Pic(U_0)$. By
+Lemma \ref{lemma-surjective-Pic-first}
+there exists an open $U_0 \subset U' \subset U$
+and $\mathcal{L}' \in \Pic(U')$ whose restriction
+to $U_0$ is $\mathcal{L}_0$.
+Since $U' \supset U_0$ we see that $U \setminus U'$
+consists of points corresponding to prime ideals
+$\mathfrak p_1, \ldots, \mathfrak p_n$ as in (2).
+By assumption we can find invertible modules
+$\mathcal{L}'_i$ on $\Spec(A_{\mathfrak p_i})$ agreeing with
+$\mathcal{L}'$ over the punctured spectrum
+$U' \times_U \Spec(A_{\mathfrak p_i})$ since
+trivial invertible modules always extend.
+By Limits, Lemma \ref{limits-lemma-glueing-near-closed-point-modules}
+applied $n$ times we see that $\mathcal{L}'$ extends to an
+invertible module on $U$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-pic-to-completion}
+Let $(A, \mathfrak m)$ be a Noetherian local ring of depth $\geq 2$.
+Let $A^\wedge$ be its completion. Let $U$, resp.\ $U^\wedge$
+be the punctured spectrum of $A$, resp.\ $A^\wedge$. Then
+$\Pic(U) \to \Pic(U^\wedge)$ is injective.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_U$-module
+with pullback $\mathcal{L}^\wedge$ on $U^\wedge$.
+We have $H^0(U, \mathcal{O}_U) = A$ by our assumption on depth and
+Dualizing Complexes, Lemma \ref{dualizing-lemma-depth} and
+Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}.
+Thus $\mathcal{L}$ is trivial if and only if
+$M = H^0(U, \mathcal{L})$ is isomorphic to $A$ as an $A$-module.
+(Details omitted.) Since $A \to A^\wedge$ is flat
+we have $M \otimes_A A^\wedge = \Gamma(U^\wedge, \mathcal{L}^\wedge)$
+by flat base change, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}.
+Finally, it is easy to see that $M \cong A$ if and only if
+$M \otimes_A A^\wedge \cong A^\wedge$.
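+Here is a sketch of the nontrivial direction, using that $M$ is a
+finite $A$-module: if $M \otimes_A A^\wedge \cong A^\wedge$, then
+$\dim_{A/\mathfrak m} M/\mathfrak m M = 1$, so $M \cong A/J$ for some
+ideal $J \subset A$ by Nakayama's lemma. Then
+$$
+A^\wedge/J A^\wedge \cong M \otimes_A A^\wedge \cong A^\wedge
+$$
+and comparing annihilators gives $J A^\wedge = 0$, whence $J = 0$
+because $A \to A^\wedge$ is faithfully flat.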
+\end{proof}
+
+\begin{lemma}
+\label{lemma-trivial-local-pic-regular}
+Let $(A, \mathfrak m)$ be a regular local ring. Then the Picard
+group of the punctured spectrum of $A$ is trivial.
+\end{lemma}
+
+\begin{proof}
+Combine Divisors, Lemma \ref{divisors-lemma-extend-invertible-module}
+with More on Algebra, Lemma \ref{more-algebra-lemma-regular-local-UFD}.
+\end{proof}
+
+\noindent
+Now we can bootstrap the earlier results to prove that
+Picard groups are trivial for punctured spectra
+of complete intersections of dimension $\geq 4$.
+Recall that a Noetherian local ring is called a complete
+intersection if its completion is the quotient of a
+regular local ring by the ideal generated by a regular sequence.
+See the discussion in Divided Power Algebra, Section \ref{dpa-section-lci}.
+
+\begin{proposition}[Grothendieck]
+\label{proposition-trivial-local-pic-complete-intersection}
+Let $(A, \mathfrak m)$ be a Noetherian local ring. If $A$ is a
+complete intersection of dimension $\geq 4$, then the Picard
+group of the punctured spectrum of $A$ is trivial.
+\end{proposition}
+
+\begin{proof}
+By Lemma \ref{lemma-local-pic-to-completion} we may assume that $A$ is
+a complete local ring. By assumption we can write
+$A = B/(f_1, \ldots, f_r)$ where $B$ is a complete regular local
+ring and $f_1, \ldots, f_r$ is a regular sequence.
+We will finish the proof by induction on $r$.
+The base case is $r = 0$ which follows from
+Lemma \ref{lemma-trivial-local-pic-regular}.
+
+\medskip\noindent
+Assume that $A = B/(f_1, \ldots, f_r)$ and that the proposition
+holds for $r - 1$. Set $A' = B/(f_1, \ldots, f_{r - 1})$ and apply
+Lemma \ref{lemma-surjective-Pic-first-better} to $f_r \in A'$.
+This is permissible:
+\begin{enumerate}
+\item condition (1) of Lemma \ref{lemma-surjective-Pic-first} holds
+because our local rings are complete,
+\item condition (2) of Lemma \ref{lemma-surjective-Pic-first}
+holds as $f_1, \ldots, f_r$ is a regular sequence,
+\item condition (3) and (4) of Lemma \ref{lemma-surjective-Pic-first} hold
+as $A = A'/f_r A'$ is Cohen-Macaulay of dimension $\dim(A) \geq 4$,
+\item condition (2) of Lemma \ref{lemma-surjective-Pic-first-better}
+holds by induction hypothesis as
+$\dim((A'_{f_r})_\mathfrak p) \geq 4$ for a maximal
+prime $\mathfrak p$ of $A'_{f_r}$ and as
+$(A'_{f_r})_\mathfrak p = B_\mathfrak q/(f_1, \ldots, f_{r - 1})$
+for some prime ideal $\mathfrak q \subset B$ and $B_\mathfrak q$ is regular.
+\end{enumerate}
+This finishes the proof.
+\end{proof}
+
+\begin{example}
+\label{example-grothendieck-sharp}
+The dimension bound in
+Proposition \ref{proposition-trivial-local-pic-complete-intersection}
+is sharp. For example the Picard group of the punctured spectrum of
+$A = k[[x, y, z, w]]/(xy - zw)$ is nontrivial. Namely, the
+ideal $I = (x, z)$ cuts out an effective Cartier divisor $D$ on the punctured
+spectrum $U$ of $A$ as it is easy to see that $I_x, I_y, I_z, I_w$
+are invertible ideals in $A_x, A_y, A_z, A_w$. But on the other hand,
+$A/I$ has depth $\geq 1$ (in fact $2$), hence $I$ has depth $\geq 2$
+(in fact $3$), hence
+$I = \Gamma(U, \mathcal{O}_U(-D))$. Thus if $\mathcal{O}_U(-D)$
+were trivial, then we would have $I \cong \Gamma(U, \mathcal{O}_U) = A$,
+which is false as $I$ is not generated by one element.
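+To make the invertibility assertion explicit: $x$ is a unit in $A_x$,
+so $I_x = A_x$, and similarly $I_z = A_z$. In $A_w$ the relation
+$xy = zw$ gives $z = xyw^{-1}$, whence
+$$
+I_w = (x, z)A_w = (x, xyw^{-1})A_w = xA_w
+$$
+is invertible as $x$ is a nonzerodivisor ($A$ is a domain); the case
+of $A_y$ is symmetric with $I_y = zA_y$.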
+\end{example}
+
+\begin{example}
+\label{example-grothendieck-sharp-bis}
+Proposition \ref{proposition-trivial-local-pic-complete-intersection}
+cannot be extended to quotients
+$$
+A = B/(f_1, \ldots, f_r)
+$$
+where $B$ is regular and $\dim(B) - r \geq 4$. In other words, the condition
+that $f_1, \ldots, f_r$ be a regular sequence is (in general) needed
+for vanishing of the Picard group of the punctured spectrum of $A$.
+Namely, let $k$ be a field and set
+$$
+A = k[[a, b, x, y, z, u, v, w]]/(a^3, b^3, xa^2 + yab + zb^2, w^2)
+$$
+Observe that $A = A_0[w]/(w^2)$ with
+$A_0 = k[[a, b, x, y, z, u, v]]/(a^3, b^3, xa^2 + yab + zb^2)$.
+We will show below that $A_0$ has depth $2$.
+Denote $U$ the punctured spectrum of $A$ and $U_0$ the punctured
+spectrum of $A_0$. Observe there is a short exact sequence
+$0 \to A_0 \to A \to A_0 \to 0$ where the first arrow is
+given by multiplication by $w$.
+By More on Morphisms, Lemma
+\ref{more-morphisms-lemma-picard-group-first-order-thickening}
+we find that there is an exact sequence
+$$
+H^0(U, \mathcal{O}_U^*) \to
+H^0(U_0, \mathcal{O}_{U_0}^*) \to
+H^1(U_0, \mathcal{O}_{U_0}) \to
+\Pic(U)
+$$
+Since the depth of $A_0$ and hence $A$ is $2$ we see that
+$H^0(U_0, \mathcal{O}_{U_0}) = A_0$ and $H^0(U, \mathcal{O}_U) = A$
+and that $H^1(U_0, \mathcal{O}_{U_0})$ is nonzero, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-depth} and
+Local Cohomology, Lemma \ref{local-cohomology-lemma-local-cohomology}.
+Thus the last arrow displayed above is nonzero and we conclude
+that $\Pic(U)$ is nonzero.
+
+\medskip\noindent
+To show that $A_0$ has depth $2$ it suffices to show that
+$A_1 = k[[a, b, x, y, z]]/(a^3, b^3, xa^2 + yab + zb^2)$
+has depth $0$. This is true because $a^2b^2$ maps to
+a nonzero element of $A_1$ which is annihilated
+by each of the variables $a, b, x, y, z$. For example
+$ya^2b^2 = (yab)(ab) = - (xa^2 + zb^2)(ab) = -xa^3b - zab^3 = 0$ in $A_1$.
+The other cases are similar.
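+For instance, for $x$ one computes in $A_1$
+$$
+xa^2b^2 = (xa^2)b^2 = -(yab + zb^2)b^2 = -yab^3 - zb^4 = 0
+$$
+using $b^3 = 0$, while $a$ and $b$ annihilate $a^2b^2$ simply because
+$a^3 = b^3 = 0$.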
+\end{example}
+
+
+
+
+
+
+
+
+
+
+\section{Application to Lefschetz theorems}
+\label{section-lefschetz}
+
+\noindent
+In this section we discuss the relation between coherent sheaves on a
+projective scheme $P$ and coherent modules on the formal completion of $P$
+along an ample divisor $Q$.
+
+\medskip\noindent
+Let $k$ be a field. Let $P$ be a proper scheme over $k$.
+Let $\mathcal{L}$ be an ample invertible $\mathcal{O}_P$-module.
+Let $s \in \Gamma(P, \mathcal{L})$ be a section\footnote{We do not
+require $s$ to be a regular section. Correspondingly, $Q$ is only
+a locally principal closed subscheme of $P$ and not necessarily an effective
+Cartier divisor.} and let
+$Q = Z(s)$ be the zero scheme, see
+Divisors, Definition \ref{divisors-definition-zero-scheme-s}.
+For all $n \geq 1$ we denote $Q_n = Z(s^n)$
+the $n$th infinitesimal neighbourhood of $Q$.
+If $\mathcal{F}$ is a coherent $\mathcal{O}_P$-module, then we denote
+$\mathcal{F}_n = \mathcal{F}|_{Q_n}$ the restriction, i.e.,
+the pullback of $\mathcal{F}$ by the closed immersion
+$Q_n \to P$.
+
+\begin{proposition}
+\label{proposition-lefschetz}
+In the situation above assume for all points $p \in P \setminus Q$ we have
+$$
+\text{depth}(\mathcal{F}_p) + \dim(\overline{\{p\}}) > s
+$$
+Then the map
+$$
+H^i(P, \mathcal{F}) \longrightarrow \lim H^i(Q_n, \mathcal{F}_n)
+$$
+is an isomorphism for $0 \leq i < s$.
+\end{proposition}
+
+\begin{proof}
+We will use More on Morphisms, Lemma \ref{more-morphisms-lemma-apply-proj-spec}
+and we will use the notation used and results found in
+More on Morphisms, Section \ref{more-morphisms-section-proj-spec}
+without further mention; this proof will not make sense without
+at least understanding the statement of the lemma.
+Observe that in our case
+$A = \bigoplus_{m \geq 0} \Gamma(P, \mathcal{L}^{\otimes m})$
+is a finite type $k$-algebra
+all of whose graded parts are finite dimensional $k$-vector spaces, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-proper-ample}.
+
+\medskip\noindent
+We may and do think of $s$ as an element $f \in A_1 \subset A$, i.e.,
+a homogeneous element of degree $1$ of $A$. Denote
+$Y = V(f) \subset X$ the closed subscheme defined by $f$.
+Then $U \cap Y = (\pi|_U)^{-1}(Q)$ scheme theoretically.
+Recall the notation
+$\mathcal{F}_U = \pi^*\mathcal{F}|_U = (\pi|_U)^*\mathcal{F}$.
+This is a coherent $\mathcal{O}_U$-module.
+Choose a finite $A$-module $M$ such that
+$\mathcal{F}_U = \widetilde{M}|_U$
+(for existence see Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}).
+We claim that $H^i_Z(M)$ is annihilated by
+a power of $f$ for $i \leq s + 1$.
+
+\medskip\noindent
+To prove the claim we will apply
+Local Cohomology, Proposition \ref{local-cohomology-proposition-annihilator}.
+Translating into geometry we see that it suffices to prove
+for $u \in U$, $u \not \in Y$ and $z \in \overline{\{u\}} \cap Z$
+that
+$$
+\text{depth}(\mathcal{F}_{U, u}) +
+\dim(\mathcal{O}_{\overline{\{u\}}, z}) > s + 1
+$$
+This requires only a small amount of thought.
+
+\medskip\noindent
+Observe that $Z = \Spec(A_0)$ is a finite set of closed points of $X$ because
+$A_0$ is a finite dimensional $k$-algebra.
+(The reader who would like $Z$ to be a singleton can replace the
+finite $k$-algebra $A_0$ by $k$; it won't affect anything else in the proof.)
+
+\medskip\noindent
+The morphism $\pi : L \to P$ and its restriction $\pi|_U : U \to P$
+are smooth of relative dimension $1$.
+Let $u \in U$, $u \not \in Y$ and $z \in \overline{\{u\}} \cap Z$.
+Let $p = \pi(u) \in P \setminus Q$ be its image.
+Then either $u$ is a generic
+point of the fibre of $\pi$ over $p$ or a closed point of the fibre.
+If $u$ is a generic point of the fibre, then
+$\text{depth}(\mathcal{F}_{U, u}) = \text{depth}(\mathcal{F}_p)$
+and
+$\dim(\overline{\{u\}}) = \dim(\overline{\{p\}}) + 1$.
+If $u$ is a closed point of the fibre, then
+$\text{depth}(\mathcal{F}_{U, u}) = \text{depth}(\mathcal{F}_p) + 1$
+and
+$\dim(\overline{\{u\}}) = \dim(\overline{\{p\}})$.
+In both cases we have
+$\dim(\overline{\{u\}}) = \dim(\mathcal{O}_{\overline{\{u\}}, z})$
+because every point of $Z$ is closed. Thus the desired
+inequality follows from the assumption in the statement of
+the proposition.
+
+\medskip\noindent
+Let $A'$ be the $f$-adic completion of $A$. So $A \to A'$ is flat by
+Algebra, Lemma \ref{algebra-lemma-completion-flat}.
+Denote $U' \subset X' = \Spec(A')$ the inverse image of
+$U$ and similarly for $Y'$ and $Z'$. Let $\mathcal{F}'$
+on $U'$ be the pullback of $\mathcal{F}_U$ and let
+$M' = M \otimes_A A'$.
+By flat base change for local cohomology
+(Local Cohomology, Lemma \ref{local-cohomology-lemma-torsion-change-rings})
+we have
+$$
+H^i_{Z'}(M') = H^i_Z(M) \otimes_A A'
+$$
+and we find that for $i \leq s + 1$ these are annihilated by a power of $f$.
+Consider the diagram
+$$
+\xymatrix{
+& H^i(U, \mathcal{F}_U) \ar[ld] \ar[d] \ar[r] &
+\lim H^i(U, \mathcal{F}_U/f^n\mathcal{F}_U) \ar@{=}[d] \\
+H^i(U, \mathcal{F}_U) \otimes_A A' \ar@{=}[r] &
+H^i(U', \mathcal{F}') \ar[r] &
+\lim H^i(U', \mathcal{F}'/f^n\mathcal{F}')
+}
+$$
+The lower horizontal arrow is an isomorphism for $i < s$ by
+Lemma \ref{lemma-alternative-higher} and the torsion
+property we just proved. The horizontal equal sign is flat base change
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology})
+and the vertical equal sign is because $U \cap Y$ and $U' \cap Y'$
+as well as their $n$th infinitesimal neighbourhoods
+are mapped isomorphically onto each other (as we are
+completing with respect to $f$).
+
+\medskip\noindent
+Applying More on Morphisms, Equation
+(\ref{more-morphisms-equation-cohomology-torsor})
+we have compatible direct sum decompositions
+$$
+\lim H^i(U, \mathcal{F}_U/f^n\mathcal{F}_U) =
+\lim
+\left(
+\bigoplus\nolimits_{m \in \mathbf{Z}}
+H^i(Q_n, \mathcal{F}_n \otimes \mathcal{L}^{\otimes m})
+\right)
+$$
+and
+$$
+H^i(U, \mathcal{F}_U) =
+\bigoplus\nolimits_{m \in \mathbf{Z}}
+H^i(P, \mathcal{F} \otimes \mathcal{L}^{\otimes m})
+$$
+Thus we conclude by Algebra, Lemma \ref{algebra-lemma-daniel-litt}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lefschetz-addendum}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$.
+Let $\mathcal{L}$ be an ample invertible $\mathcal{O}_X$-module.
+Let $s \in \Gamma(X, \mathcal{L})$. Let $Y = Z(s)$ be the
+zero scheme of $s$ with $n$th infinitesimal neighbourhood $Y_n = Z(s^n)$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Assume that for all $x \in X \setminus Y$ we have
+$$
+\text{depth}(\mathcal{F}_x) + \dim(\overline{\{x\}}) > 1
+$$
+Then $\Gamma(V, \mathcal{F}) \to \lim \Gamma(Y_n, \mathcal{F}|_{Y_n})$
+is an isomorphism for any open subscheme $V \subset X$ containing $Y$.
+\end{lemma}
+
+\begin{proof}
+By Proposition \ref{proposition-lefschetz} this is true for $V = X$.
+Thus it suffices to show that the map
+$\Gamma(V, \mathcal{F}) \to \lim \Gamma(Y_n, \mathcal{F}|_{Y_n})$
+is injective. If $\sigma \in \Gamma(V, \mathcal{F})$
+maps to zero, then its support is disjoint from $Y$ (details omitted; hint:
+use Krull's intersection theorem). Then the closure $T \subset X$
+of $\text{Supp}(\sigma)$ is disjoint from $Y$.
+Whence $T$ is proper over $k$ (being closed in $X$)
+and affine (being closed in the affine scheme $X \setminus Y$, see
+Morphisms, Lemma \ref{morphisms-lemma-proper-ample-delete-affine})
+and hence finite over $k$
+(Morphisms, Lemma \ref{morphisms-lemma-finite-proper}).
+Thus $T$ is a finite set of closed points of $X$.
+Thus $\text{depth}(\mathcal{F}_x) \geq 2$
+for $x \in T$ by our assumption. We conclude that
+$\Gamma(V, \mathcal{F}) \to \Gamma(V \setminus T, \mathcal{F})$
+is injective and $\sigma = 0$ as desired.
+\end{proof}
+
+
+\begin{example}
+\label{example-lefschetz}
+Let $k$ be a field and let $X$ be a proper variety over $k$.
+Let $Y \subset X$ be an effective Cartier divisor such that
+$\mathcal{O}_X(Y)$ is ample and
+denote $Y_n$ its $n$th infinitesimal neighbourhood.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module.
+Here are some special cases of Proposition \ref{proposition-lefschetz}.
+\begin{enumerate}
+\item If $X$ is a curve, we don't learn anything.
+\item If $X$ is a Cohen-Macaulay (for example normal) surface, then
+$$
+H^0(X, \mathcal{E}) \to \lim H^0(Y_n, \mathcal{E}|_{Y_n})
+$$
+is an isomorphism.
+\item If $X$ is a Cohen-Macaulay threefold, then
+$$
+H^0(X, \mathcal{E}) \to \lim H^0(Y_n, \mathcal{E}|_{Y_n})
+\quad\text{and}\quad
+H^1(X, \mathcal{E}) \to \lim H^1(Y_n, \mathcal{E}|_{Y_n})
+$$
+are isomorphisms.
+\end{enumerate}
+Presumably the pattern is clear. If $X$ is a normal threefold, then
+we can conclude the result for $H^0$ but not for $H^1$.
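+
+\medskip\noindent
+In general, if $X$ is Cohen-Macaulay of dimension $d$, then for every
+point $x \in X \setminus Y$ we have
+$$
+\text{depth}(\mathcal{E}_x) + \dim(\overline{\{x\}}) =
+\dim(\mathcal{O}_{X, x}) + \dim(\overline{\{x\}}) = d
+$$
+because $\mathcal{E}$ is locally free and $X$ is a variety, and hence
+$H^i(X, \mathcal{E}) \to \lim H^i(Y_n, \mathcal{E}|_{Y_n})$ is an
+isomorphism for $0 \leq i < d - 1$.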
+\end{example}
+
+\noindent
+Before we prove the next main result, we need a lemma.
+
+\begin{lemma}
+\label{lemma-Gm-equivariant-extend-canonically}
+In Situation \ref{situation-algebraize} let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(U, I\mathcal{O}_U)$. Assume
+\begin{enumerate}
+\item $A$ is a graded ring, $\mathfrak a = A_+$, and
+$I$ is a homogeneous ideal,
+\item $(\mathcal{F}_n) = (\widetilde{M_n}|_U)$ where $(M_n)$
+is an inverse system of graded $A$-modules, and
+\item $(\mathcal{F}_n)$ extends canonically to $X$.
+\end{enumerate}
+Then there is a finite graded $A$-module $N$ such that
+\begin{enumerate}
+\item[(a)] the inverse systems $(N/I^nN)$ and $(M_n)$ are pro-isomorphic
+in the category of graded $A$-modules modulo $A_+$-power torsion
+modules, and
+\item[(b)] $(\mathcal{F}_n)$ is the completion of the coherent
+module associated to $N$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $(\mathcal{G}_n)$ be the canonical extension as in
+Lemma \ref{lemma-canonically-algebraizable}.
+The grading on $A$ and $M_n$ determines an action
+$$
+a : \mathbf{G}_m \times X \longrightarrow X
+$$
+of the group scheme $\mathbf{G}_m$ on $X$ such that
+$(\widetilde{M_n})$ becomes an inverse system of
+$\mathbf{G}_m$-equivariant quasi-coherent $\mathcal{O}_X$-modules, see
+Groupoids, Example \ref{groupoids-example-Gm-on-affine}.
+Since $\mathfrak a$ and $I$ are homogeneous ideals
+the closed subschemes $Z$, $Y$ and the open subscheme $U$
+are $\mathbf{G}_m$-invariant closed and open subschemes.
+The restriction $(\mathcal{F}_n)$ of $(\widetilde{M_n})$
+is an inverse system of $\mathbf{G}_m$-equivariant
+coherent $\mathcal{O}_U$-modules. In other words, $(\mathcal{F}_n)$
+is a $\mathbf{G}_m$-equivariant coherent formal module,
+in the sense that there is an isomorphism
+$$
+\alpha : (a^*\mathcal{F}_n) \longrightarrow (p^*\mathcal{F}_n)
+$$
+over $\mathbf{G}_m \times U$ satisfying a suitable cocycle condition.
+Since $a$ and $p$ are flat morphisms of affine schemes,
+by Lemma \ref{lemma-canonically-extend-base-change}
+we conclude that there exists a unique isomorphism
+$$
+\beta : (a^*\mathcal{G}_n) \longrightarrow (p^*\mathcal{G}_n)
+$$
+over $\mathbf{G}_m \times X$ restricting to $\alpha$ on
+$\mathbf{G}_m \times U$. The uniqueness guarantees that
+$\beta$ satisfies the corresponding cocycle condition.
+In this way each $\mathcal{G}_n$ becomes
+a $\mathbf{G}_m$-equivariant coherent $\mathcal{O}_X$-module
+in a manner compatible with transition maps.
+
+\medskip\noindent
+By Groupoids, Lemma \ref{groupoids-lemma-Gm-equivariant-module}
+we see that $\mathcal{G}_n$ with its $\mathbf{G}_m$-equivariant
+structure corresponds to a graded $A$-module $N_n$. The transition maps
+$N_{n + 1} \to N_n$ are graded module maps. Note that $N_n$ is a finite
+$A$-module and $N_n = N_{n + 1}/I^n N_{n + 1}$ because
+$(\mathcal{G}_n)$ is an object of $\textit{Coh}(X, I\mathcal{O}_X)$.
+Let $N$ be the finite graded $A$-module found in
+Algebra, Lemma \ref{algebra-lemma-finiteness-graded}.
+Then $N_n = N/I^nN$, whence $(\mathcal{G}_n)$
+is the completion of the coherent module
+associated to $N$, and a fortiori we see that (b) is true.
+
+\medskip\noindent
+To see (a) we have to unwind the situation described above a bit more.
+First, observe that the kernel and cokernel of $M_n \to H^0(U, \mathcal{F}_n)$
+are $A_+$-power torsion (Local Cohomology, Lemma
+\ref{local-cohomology-lemma-finiteness-pushforwards-and-H1-local}).
+Observe that $H^0(U, \mathcal{F}_n)$ comes with a natural grading
+such that these maps and the transition maps of the system are
+graded $A$-module maps; for example we can use that
+$(U \to X)_*\mathcal{F}_n$ is a $\mathbf{G}_m$-equivariant module
+on $X$ and use
+Groupoids, Lemma \ref{groupoids-lemma-Gm-equivariant-module}.
+Next, recall that $(N_n)$ and $(H^0(U, \mathcal{F}_n))$
+are pro-isomorphic by Definition \ref{definition-canonically-algebraizable}
+and Lemma \ref{lemma-canonically-algebraizable}.
+We omit the verification that the maps defining this
+pro-isomorphism are graded module maps.
+Thus $(N_n)$ and $(M_n)$ are pro-isomorphic in the category of
+graded $A$-modules modulo $A_+$-power torsion modules.
+\end{proof}
+
+\noindent
+Let $k$ be a field. Let $P$ be a proper scheme over $k$.
+Let $\mathcal{L}$ be an ample invertible $\mathcal{O}_P$-module.
+Let $s \in \Gamma(P, \mathcal{L})$ be a section and let
+$Q = Z(s)$ be the zero scheme, see
+Divisors, Definition \ref{divisors-definition-zero-scheme-s}.
+Let $\mathcal{I} \subset \mathcal{O}_P$ be the ideal sheaf of $Q$.
+We will use $\textit{Coh}(P, \mathcal{I})$ to denote the category
+of coherent formal modules introduced in
+Cohomology of Schemes, Section \ref{coherent-section-coherent-formal}.
+
+\begin{proposition}
+\label{proposition-lefschetz-existence}
+In the situation above let $(\mathcal{F}_n)$ be an object of
+$\textit{Coh}(P, \mathcal{I})$. Assume for all $q \in Q$ and for
+all primes $\mathfrak p \subset \mathcal{O}_{P, q}^\wedge$,
+$\mathfrak p \not \in V(\mathcal{I}_q^\wedge)$ we have
+$$
+\text{depth}((\mathcal{F}_q^\wedge)_\mathfrak p) +
+\dim(\mathcal{O}_{P, q}^\wedge/\mathfrak p) +
+\dim(\overline{\{q\}}) > 2
+$$
+Then $(\mathcal{F}_n)$ is the completion of a coherent
+$\mathcal{O}_P$-module.
+\end{proposition}
+
+\begin{proof}
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-existence-easy}
+to prove the proposition, we may replace $(\mathcal{F}_n)$ by an object
+differing from it by $\mathcal{I}$-torsion (see below for more precision).
+Let $T' = \{q \in Q \mid \dim(\overline{\{q\}}) = 0\}$
+and $T = \{q \in Q \mid \dim(\overline{\{q\}}) \leq 1\}$.
+The assumption in the proposition is exactly that
+$Q \subset P$, $(\mathcal{F}_n)$, and $T' \subset T \subset Q$
+satisfy the conditions of
+Lemma \ref{lemma-improvement-formal-coherent-module-better}
+with $d = 1$; besides trivial manipulations of inequalities, use that
+$V(\mathfrak p) \cap V(\mathcal{I}^\wedge_q) = \{\mathfrak m^\wedge_q\}
+\Leftrightarrow \dim(\mathcal{O}_{P, q}^\wedge/\mathfrak p) = 1$
+as $\mathcal{I}_q^\wedge$ is generated by one element.
+Combining these two remarks, we may replace $(\mathcal{F}_n)$ by the
+object $(\mathcal{H}_n)$ of $\textit{Coh}(P, \mathcal{I})$ found in
+Lemma \ref{lemma-improvement-formal-coherent-module-better}.
+Thus we may and do assume $(\mathcal{F}_n)$ is pro-isomorphic to
+an inverse system $(\mathcal{F}_n'')$ of coherent $\mathcal{O}_P$-modules
+such that $\text{depth}(\mathcal{F}''_{n, q}) + \dim(\overline{\{q\}}) \geq 2$
+for all $q \in Q$.
+
+\medskip\noindent
+We will use More on Morphisms, Lemma \ref{more-morphisms-lemma-apply-proj-spec}
+and we will use the notation used and results found in
+More on Morphisms, Section \ref{more-morphisms-section-proj-spec}
+without further mention; this proof will not make sense without
+at least understanding the statement of the lemma.
+Observe that in our case
+$A = \bigoplus_{m \geq 0} \Gamma(P, \mathcal{L}^{\otimes m})$
+is a finite type $k$-algebra
+all of whose graded parts are finite dimensional $k$-vector spaces, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-proper-ample}.
+
+\medskip\noindent
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-inverse-systems-pullback}
+the pullback by $\pi|_U : U \to P$ is an object
+$(\pi|_U^*\mathcal{F}_n)$ of $\textit{Coh}(U, f\mathcal{O}_U)$
+which is pro-isomorphic to the inverse system
+$(\pi|_U^*\mathcal{F}_n'')$ of coherent $\mathcal{O}_U$-modules.
+We claim
+$$
+\text{depth}(\pi|_U^*\mathcal{F}''_{n, y}) + \delta_Z^Y(y) \geq 3
+$$
+for all $y \in U \cap Y$. Since all the points of $Z$ are closed, we
+see that $\delta_Z^Y(y) \geq \dim(\overline{\{y\}})$ for all
+$y \in U \cap Y$, see Lemma \ref{lemma-discussion}.
+Let $q \in Q$ be the image of $y$. Since the morphism $\pi : U \to P$ is
+smooth of relative dimension $1$ we see that either $y$ is a closed point
+of a fibre of $\pi$ or a generic point.
+Thus we see that
+$$
+\text{depth}(\pi|_U^*\mathcal{F}''_{n, y}) + \delta_Z^Y(y)
+\geq
+\text{depth}(\pi|_U^*\mathcal{F}''_{n, y}) + \dim(\overline{\{y\}}) =
+\text{depth}(\mathcal{F}''_{n, q}) + \dim(\overline{\{q\}}) + 1
+$$
+because either the depth or the dimension goes up by $1$, depending
+on whether $y$ is a closed point or the generic point of its fibre.
+This proves the claim.
+
+\medskip\noindent
+By Lemma \ref{lemma-cd-1-canonical} we conclude that
+$(\pi|_U^*\mathcal{F}_n)$ canonically extends to $X$.
+Observe that
+$$
+M_n = \Gamma(U, \pi|_U^*\mathcal{F}_n) =
+\bigoplus\nolimits_{m \in \mathbf{Z}}
+\Gamma(P, \mathcal{F}_n \otimes_{\mathcal{O}_P} \mathcal{L}^{\otimes m})
+$$
+is canonically a graded $A$-module, see
+More on Morphisms, Equation (\ref{more-morphisms-equation-cohomology-torsor}).
+By Properties, Lemma \ref{properties-lemma-quasi-coherent-quasi-affine}
+we have $\pi|_U^*\mathcal{F}_n = \widetilde{M_n}|_U$.
+Thus we may apply Lemma \ref{lemma-Gm-equivariant-extend-canonically}
+to find a finite graded $A$-module $N$ such that
+$(M_n)$ and $(N/I^nN)$ are pro-isomorphic in the category
+of graded $A$-modules modulo $A_+$-torsion modules.
+Let $\mathcal{F}$ be the coherent $\mathcal{O}_P$-module
+associated to $N$, see
+Cohomology of Schemes, Proposition
+\ref{coherent-proposition-coherent-modules-on-proj-general}.
+The same proposition tells us that $(\mathcal{F}/\mathcal{I}^n\mathcal{F})$
+is pro-isomorphic to $(\mathcal{F}_n)$.
+Since both are objects of $\textit{Coh}(P, \mathcal{I})$
+we win by Lemma \ref{lemma-recognize-formal-coherent-modules}.
+\end{proof}
+
+\begin{example}
+\label{example-lefschetz-existence}
+Let $k$ be a field and let $X$ be a proper variety over $k$.
+Let $Y \subset X$ be an effective Cartier divisor such that
+$\mathcal{O}_X(Y)$ is ample and denote
+$\mathcal{I} \subset \mathcal{O}_X$ the corresponding
+sheaf of ideals. Let $(\mathcal{E}_n)$ be an object
+of $\textit{Coh}(X, \mathcal{I})$ with $\mathcal{E}_n$
+finite locally free.
+Here are some special cases of
+Proposition \ref{proposition-lefschetz-existence}.
+\begin{enumerate}
+\item If $X$ is a curve or a surface, we don't learn anything.
+\item If $X$ is a Cohen-Macaulay threefold, then
+$(\mathcal{E}_n)$ is the completion of a coherent
+$\mathcal{O}_X$-module $\mathcal{E}$.
+\item More generally, if $\dim(X) \geq 3$ and $X$ is $(S_3)$, then
+$(\mathcal{E}_n)$ is the completion of a coherent
+$\mathcal{O}_X$-module $\mathcal{E}$.
+\end{enumerate}
+Of course, if $\mathcal{E}$ exists, then $\mathcal{E}$ is finite locally
+free in an open neighbourhood of $Y$.
+\end{example}
+
+\begin{proposition}
+\label{proposition-lefschetz-equivalence}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$.
+Let $\mathcal{L}$ be an ample invertible $\mathcal{O}_X$-module
+and let $s \in \Gamma(X, \mathcal{L})$. Let $Y = Z(s)$
+be the zero scheme of $s$ and denote $\mathcal{I} \subset \mathcal{O}_X$
+the corresponding sheaf of ideals.
+Let $\mathcal{V}$ be the set of open subschemes of $X$ containing $Y$
+ordered by reverse inclusion.
+Assume that for all $x \in X \setminus Y$ we have
+$$
+\text{depth}(\mathcal{O}_{X, x}) + \dim(\overline{\{x\}}) > 2
+$$
+Then the completion functor
+$$
+\colim_\mathcal{V}
+\textit{Coh}(\mathcal{O}_V)
+\longrightarrow
+\textit{Coh}(X, \mathcal{I})
+$$
+is an equivalence on the full subcategories of finite locally free objects.
+\end{proposition}
+
+\begin{proof}
+To prove fully faithfulness it suffices to prove that
+$$
+\colim_\mathcal{V} \Gamma(V, \mathcal{L}^{\otimes m})
+\longrightarrow
+\lim \Gamma(Y_n, \mathcal{L}^{\otimes m}|_{Y_n})
+$$
+is an isomorphism for all $m$, see
+Lemma \ref{lemma-completion-fully-faithful-general}.
+This follows from Lemma \ref{lemma-lefschetz-addendum}.
+
+\medskip\noindent
+Essential surjectivity. Let $(\mathcal{F}_n)$ be a finite locally
+free object of $\textit{Coh}(X, \mathcal{I})$. Then for $y \in Y$ we have
+$\mathcal{F}_y^\wedge = \lim \mathcal{F}_{n, y}$
+is a finite free $\mathcal{O}_{X, y}^\wedge$-module.
+Let $\mathfrak p \subset \mathcal{O}_{X, y}^\wedge$
+be a prime with $\mathfrak p \not \in V(\mathcal{I}_y^\wedge)$.
+Then $\mathfrak p$ lies over a prime $\mathfrak p_0 \subset \mathcal{O}_{X, y}$
+which corresponds to a specialization $x \leadsto y$ with
+$x \not \in Y$. By Local Cohomology, Lemma
+\ref{local-cohomology-lemma-change-completion}
+and some dimension theory
+(see Varieties, Section \ref{varieties-section-algebraic-schemes})
+we have
+$$
+\text{depth}((\mathcal{O}_{X, y}^\wedge)_\mathfrak p) +
+\dim(\mathcal{O}_{X, y}^\wedge/\mathfrak p) =
+\text{depth}(\mathcal{O}_{X, x}) +
+\dim(\overline{\{x\}}) - \dim(\overline{\{y\}})
+$$
+Thus our assumptions imply the assumptions of
+Proposition \ref{proposition-lefschetz-existence}
+are satisfied and we find that $(\mathcal{F}_n)$
+is the completion of a coherent $\mathcal{O}_X$-module $\mathcal{F}$.
+It then follows that $\mathcal{F}_y$ is finite free for all $y \in Y$
+and hence $\mathcal{F}$ is finite locally free in an open
+neighbourhood $V$ of $Y$. This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/artin.tex b/books/stacks/artin.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7a8f9d165183530ddcf4fd676c2098f6696359c9
--- /dev/null
+++ b/books/stacks/artin.tex
@@ -0,0 +1,6371 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Artin's Axioms}
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+
+
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we discuss Artin's axioms for the representability of
+functors by algebraic spaces. As references we suggest the papers
+\cite{ArtinI}, \cite{ArtinII}, \cite{ArtinVersal}.
+
+\medskip\noindent
+Some of the notation, conventions, and terminology in this chapter is awkward
+and may seem backwards to the more experienced reader. This is intentional.
+Please see Quot, Section \ref{quot-section-conventions} for an
+explanation.
+
+\medskip\noindent
+Let $S$ be a locally Noetherian base scheme. Let
+$$
+p : \mathcal{X} \longrightarrow (\Sch/S)_{fppf}
+$$
+be a category fibred in groupoids. Let $x_0$ be an object of $\mathcal{X}$
+over a field $k$ of finite type over $S$. Throughout this chapter an important
+role is played by the predeformation category
+(see Formal Deformation Theory,
+Definition \ref{formal-defos-definition-predeformation-category})
+$$
+\mathcal{F}_{\mathcal{X}, k, x_0}
+\longrightarrow
+\{\text{Artinian local }S\text{-algebras with residue field }k\}
+$$
+associated to $x_0$ over $k$. We introduce the Rim-Schlessinger condition (RS)
+for $\mathcal{X}$ and show it guarantees that
+$\mathcal{F}_{\mathcal{X}, k, x_0}$ is a deformation category, i.e.,
+$\mathcal{F}_{\mathcal{X}, k, x_0}$ satisfies (RS) itself.
+We discuss how $\mathcal{F}_{\mathcal{X}, k, x_0}$
+changes if one replaces $k$ by a finite extension
+and we discuss tangent spaces.
+
+\medskip\noindent
+Next, we discuss formal objects $\xi = (\xi_n)$ of $\mathcal{X}$ which are
+inverse systems of objects lying over the quotients $R/\mathfrak m^n$
+where $R$ is a Noetherian complete local $S$-algebra whose residue field
+is of finite type over $S$. This is the same thing as having a formal
+object in $\mathcal{F}_{\mathcal{X}, k, x_0}$ for some $x_0$ and $k$.
+A formal object is called effective when there is an object of
+$\mathcal{X}$ over $R$ which gives rise to the inverse system.
+A formal object of $\mathcal{X}$ is called versal if it gives rise to a
+versal formal object of $\mathcal{F}_{\mathcal{X}, k, x_0}$.
+Finally, given a finite type $S$-scheme $U$, an object $x$
+of $\mathcal{X}$ over $U$, and a closed point $u_0 \in U$ we say
+$x$ is versal at $u_0$ if the induced formal object over the complete
+local ring $\mathcal{O}_{U, u_0}^\wedge$ is versal.
+
+\medskip\noindent
+Having worked through this material we can state Artin's celebrated
+theorem: our $\mathcal{X}$ is an algebraic stack if the following are true
+\begin{enumerate}
+\item $\mathcal{O}_{S, s}$ is a G-ring for all $s \in S$,
+\item $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces,
+\item $\mathcal{X}$ is a stack for the \'etale topology,
+\item $\mathcal{X}$ is limit preserving,
+\item $\mathcal{X}$ satisfies (RS),
+\item tangent spaces and spaces of infinitesimal automorphisms
+of the deformation categories $\mathcal{F}_{\mathcal{X}, k, x_0}$
+are finite dimensional,
+\item formal objects are effective,
+\item $\mathcal{X}$ satisfies openness of versality.
+\end{enumerate}
+This is Lemma \ref{lemma-diagonal-representable}; see also
+Proposition \ref{proposition-second-diagonal-representable}
+for a slight improvement. There is an analogous proposition
+characterizing which functors $F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$
+are algebraic spaces, see Section \ref{section-algebraic-spaces}.
+
+\medskip\noindent
+Here is a rough outline of the proof of Artin's theorem.
+First we show that there are plenty of versal formal objects
+using (RS) and the finite dimensionality of tangent and aut spaces, see
+for example Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-minimal-groupoid-in-functors-construction}.
+These formal objects are effective by assumption.
+Effective formal objects can be ``approximated'' by objects $x$ over
+finite type $S$-schemes $U$, see Lemma \ref{lemma-approximate}.
+This approximation uses that the local rings of $S$ are G-rings and
+that $\mathcal{X}$ is limit preserving; it is perhaps the most difficult
+part of the proof relying as it does on general N\'eron desingularization to
+approximate formal solutions of algebraic equations over a Noetherian local
+G-ring by solutions in the henselization.
+Next openness of versality implies we may (after shrinking $U$)
+assume $x$ is versal at every closed point of $U$.
+Having done all of this we show that $U \to \mathcal{X}$
+is a smooth morphism. Taking sufficiently many $U \to \mathcal{X}$
+we show that we obtain a ``smooth atlas'' for $\mathcal{X}$ which shows
+that $\mathcal{X}$ is an algebraic stack.
+
+\medskip\noindent
+In checking Artin's axioms for a given category $\mathcal{X}$ fibred
+in groupoids, the most difficult step is often to verify openness
+of versality. For the discussion that follows, assume that $\mathcal{X}/S$
+already satisfies the other conditions listed above.
+In this chapter we offer two methods that will allow the reader
+to prove $\mathcal{X}$ satisfies openness of versality:
+\begin{enumerate}
+\item The first is to assume a stronger Rim-Schlessinger
+condition, called (RS*) and to assume a stronger version of
+formal effectiveness, essentially requiring objects over
+inverse systems of thickenings to be effective. It turns out
+that under these assumptions, openness of versality comes for
+free, see Lemma \ref{lemma-SGE-implies-openness-versality}.
+Please observe that here we are using in an essential manner
+that $\mathcal{X}$ is defined on the category of all schemes
+over $S$, not just the category of Noetherian schemes!
+\item The second, following Artin, is to require $\mathcal{X}$
+to come equipped with an obstruction theory. If said obstruction
+theory ``commutes with products'' in a suitable sense, then
+$\mathcal{X}$ satisfies openness of versality, see
+Lemma \ref{lemma-get-openness-obstruction-theory}.
+\end{enumerate}
+Obstruction theories can be axiomatized in many different ways
+and indeed many variants (often adapted to specific moduli stacks)
+can be found in the literature. We explain a variant using the derived category
+(which often arises naturally from deformation theory computations
+done in the literature) in Lemma \ref{lemma-dual-openness}.
+
+\medskip\noindent
+In Section \ref{section-algebraic-spaces-noetherian}
+we discuss what needs to be modified to make
+things work for functors defined on the category
+$(\textit{Noetherian}/S)_\etale$ of locally Noetherian
+schemes over $S$.
+
+\medskip\noindent
+In the final section of this chapter as an application of Artin's axioms
+we prove Artin's theorem on the existence of contractions, see
+Section \ref{section-contractions}. The theorem says roughly that given an
+algebraic space $X'$ separated of finite type over $S$,
+a closed subset $T' \subset |X'|$, and a formal modification
+$$
+\mathfrak{f} : X'_{/T'} \longrightarrow \mathfrak{X}
+$$
+where $\mathfrak{X}$ is a Noetherian formal algebraic space over $S$,
+there exists a proper morphism $f : X' \to X$ which
+``realizes the contraction''. By this we mean that there exists an
+identification $\mathfrak{X} = X_{/T}$ such that
+$\mathfrak{f} = f_{/T'} : X'_{/T'} \to X_{/T}$ where $T = f(T')$
+and moreover $f$ is an isomorphism over $X \setminus T$. The proof proceeds
+by defining a functor $F$ on the category of locally Noetherian schemes
+over $S$ and proving Artin's axioms for $F$. Amusingly, in this
+application of Artin's axioms, openness of versality is not the hardest
+thing to prove, instead the proof that $F$ is limit preserving requires
+a lot of work and preliminary results.
+
+
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+The conventions we use in this chapter are the same as those in the
+chapter on algebraic stacks, see
+Algebraic Stacks, Section \ref{algebraic-section-conventions}.
+In this chapter the base scheme $S$ will often be locally Noetherian
+(although we will always reiterate this condition when stating
+results).
+
+
+
+
+
+\section{Predeformation categories}
+\label{section-predeformation-categories}
+
+\noindent
+Let $S$ be a locally Noetherian base scheme. Let
+$$
+p : \mathcal{X} \longrightarrow (\Sch/S)_{fppf}
+$$
+be a category fibred in groupoids. Let $k$ be a field
+and let $\Spec(k) \to S$ be a morphism of finite type (see
+Morphisms, Lemma \ref{morphisms-lemma-point-finite-type}). We will sometimes
+simply say that {\it $k$ is a field of finite type over $S$}. Let
+$x_0$ be an object of $\mathcal{X}$ lying over $\Spec(k)$.
+Given $S$, $\mathcal{X}$, $k$, and $x_0$ we will construct a
+predeformation category, as defined in
+Formal Deformation Theory,
+Definition \ref{formal-defos-definition-predeformation-category}.
+The construction will resemble the construction of
+Formal Deformation Theory,
+Remark \ref{formal-defos-remark-localize-cofibered-groupoid}.
+
+\medskip\noindent
+First, by Morphisms, Lemma \ref{morphisms-lemma-point-finite-type}
+we may pick an affine open $\Spec(\Lambda) \subset S$ such that
+$\Spec(k) \to S$ factors through $\Spec(\Lambda)$ and the associated
+ring map $\Lambda \to k$ is finite. This provides us with the category
+$\mathcal{C}_\Lambda$, see
+Formal Deformation Theory, Definition \ref{formal-defos-definition-CLambda}.
+The category $\mathcal{C}_\Lambda$, up to canonical equivalence,
+does not depend on the choice of the affine open $\Spec(\Lambda)$ of $S$.
+Namely, $\mathcal{C}_\Lambda$ is equivalent to the opposite
+of the category of factorizations
+\begin{equation}
+\label{equation-factor}
+\Spec(k) \to \Spec(A) \to S
+\end{equation}
+of the structure morphism such that $A$ is an Artinian local ring and
+such that $\Spec(k) \to \Spec(A)$ corresponds to a ring map $A \to k$ which
+identifies $k$ with the residue field of $A$.
+
+\medskip\noindent
+We let $\mathcal{F} = \mathcal{F}_{\mathcal{X}, k, x_0}$ be the
+category whose
+\begin{enumerate}
+\item objects are morphisms $x_0 \to x$ of $\mathcal{X}$ where
+$p(x) = \Spec(A)$ with $A$ an Artinian local ring and
+$p(x_0) \to p(x) \to S$ a factorization as in (\ref{equation-factor}), and
+\item morphisms $(x_0 \to x) \to (x_0 \to x')$ are commutative
+diagrams
+$$
+\xymatrix{
+x & & x' \ar[ll] \\
+& x_0 \ar[lu] \ar[ru]
+}
+$$
+in $\mathcal{X}$. (Note the reversal of arrows.)
+\end{enumerate}
+If $x_0 \to x$ is an object of $\mathcal{F}$ then writing $p(x) = \Spec(A)$
+we obtain an object $A$ of $\mathcal{C}_\Lambda$. We often say that
+$x_0 \to x$ or $x$ lies over $A$. A morphism of $\mathcal{F}$ between objects
+$x_0 \to x$ lying over $A$ and $x_0 \to x'$ lying over $A'$
+corresponds to a morphism $x' \to x$ of $\mathcal{X}$, hence a morphism
+$p(x' \to x) : \Spec(A') \to \Spec(A)$ which in turn corresponds to a
+ring map $A \to A'$. As $\mathcal{X}$ is a category
+over the category of schemes over $S$ we see that $A \to A'$ is
+a $\Lambda$-algebra homomorphism. Thus we obtain a functor
+\begin{equation}
+\label{equation-predeformation-category}
+p : \mathcal{F} = \mathcal{F}_{\mathcal{X}, k, x_0}
+\longrightarrow
+\mathcal{C}_\Lambda.
+\end{equation}
+We will use the notation $\mathcal{F}(A)$ to denote the fibre category
+over an object $A$ of $\mathcal{C}_\Lambda$. An object of $\mathcal{F}(A)$
+is simply a morphism $x_0 \to x$ of $\mathcal{X}$ such that
+$x$ lies over $\Spec(A)$ and $x_0 \to x$ lies over $\Spec(k) \to \Spec(A)$.
+
+\begin{lemma}
+\label{lemma-predeformation-category}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ defined above
+is a predeformation category.
+\end{lemma}
+
+\begin{proof}
+We have to show that $\mathcal{F}$ is (a) cofibred in groupoids over
+$\mathcal{C}_\Lambda$ and (b) that $\mathcal{F}(k)$ is a category equivalent
+to a category with a single object and a single morphism.
+
+\medskip\noindent
+Proof of (a). The fibre categories of $\mathcal{F}$
+over $\mathcal{C}_\Lambda$ are groupoids as the fibre categories
+of $\mathcal{X}$ are groupoids. Let $A \to A'$ be a morphism of
+$\mathcal{C}_\Lambda$ and let $x_0 \to x$ be an object of $\mathcal{F}(A)$.
+Because $\mathcal{X}$ is fibred in groupoids, we can find a morphism
+$x' \to x$ lying over $\Spec(A') \to \Spec(A)$. Since the composition
+$A \to A' \to k$ is equal to the given map $A \to k$ we see (by uniqueness
+of pullbacks up to isomorphism) that the pullback via $\Spec(k) \to \Spec(A')$
+of $x'$ is $x_0$, i.e., that there exists a morphism $x_0 \to x'$
+lying over $\Spec(k) \to \Spec(A')$ compatible with
+$x_0 \to x$ and $x' \to x$. This proves that $\mathcal{F}$ has
+pushforwards. We conclude by (the dual of)
+Categories, Lemma \ref{categories-lemma-fibred-groupoids}.
+
+\medskip\noindent
+Proof of (b). If $A = k$, then $\Spec(k) = \Spec(A)$ and since $\mathcal{X}$
+is fibred in groupoids over $(\Sch/S)_{fppf}$ we see that given any object
+$x_0 \to x$ in $\mathcal{F}(k)$ the morphism $x_0 \to x$ is an isomorphism.
+Hence every object of $\mathcal{F}(k)$ is isomorphic to $x_0 \to x_0$.
+Clearly the only self morphism of $x_0 \to x_0$ in $\mathcal{F}$ is
+the identity.
+\end{proof}
+
+\noindent
+Let $S$ be a locally Noetherian base scheme. Let
+$F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism between categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. Let $k$ be a field
+of finite type over $S$. Let $x_0$ be an object of $\mathcal{X}$ lying
+over $\Spec(k)$. Set $y_0 = F(x_0)$ which is an object of $\mathcal{Y}$
+lying over $\Spec(k)$. Then $F$ induces a functor
+\begin{equation}
+\label{equation-functoriality}
+F :
+\mathcal{F}_{\mathcal{X}, k, x_0}
+\longrightarrow
+\mathcal{F}_{\mathcal{Y}, k, y_0}
+\end{equation}
+of categories cofibred over $\mathcal{C}_\Lambda$. Namely, to the object
+$x_0 \to x$ of $\mathcal{F}_{\mathcal{X}, k, x_0}(A)$ we associate
+the object $F(x_0) \to F(x)$ of $\mathcal{F}_{\mathcal{Y}, k, y_0}(A)$.
+
+\begin{lemma}
+\label{lemma-formally-smooth-on-deformation-categories}
+Let $S$ be a locally Noetherian scheme. Let $F : \mathcal{X} \to \mathcal{Y}$
+be a $1$-morphism of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Assume either
+\begin{enumerate}
+\item $F$ is formally smooth on objects (Criteria for Representability,
+Section \ref{criteria-section-formally-smooth}),
+\item $F$ is representable by algebraic spaces and formally smooth, or
+\item $F$ is representable by algebraic spaces and smooth.
+\end{enumerate}
+Then for every finite type field $k$ over $S$ and object
+$x_0$ of $\mathcal{X}$ over $k$ the functor (\ref{equation-functoriality})
+is smooth in the sense of
+Formal Deformation Theory, Definition
+\ref{formal-defos-definition-smooth-morphism}.
+\end{lemma}
+
+\begin{proof}
+Case (1) is a matter of unwinding the definitions.
+Assumption (2) implies (1) by
+Criteria for Representability, Lemma
+\ref{criteria-lemma-representable-by-spaces-formally-smooth}.
+Assumption (3) implies (2) by
+More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-smooth-formally-smooth}
+and the principle of
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-representable-transformations-property-implication}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibre-product-deformation-categories}
+Let $S$ be a locally Noetherian scheme. Let
+$$
+\xymatrix{
+\mathcal{W} \ar[d] \ar[r] & \mathcal{Z} \ar[d] \\
+\mathcal{X} \ar[r] & \mathcal{Y}
+}
+$$
+be a $2$-fibre product of categories fibred in groupoids over
+$(\Sch/S)_{fppf}$. Let $k$ be a finite type field over $S$ and
+$w_0$ an object of $\mathcal{W}$ over $k$. Let $x_0, z_0, y_0$ be
+the images of $w_0$ under the morphisms in the diagram. Then
+$$
+\xymatrix{
+\mathcal{F}_{\mathcal{W}, k, w_0} \ar[d] \ar[r] &
+\mathcal{F}_{\mathcal{Z}, k, z_0} \ar[d] \\
+\mathcal{F}_{\mathcal{X}, k, x_0} \ar[r] & \mathcal{F}_{\mathcal{Y}, k, y_0}
+}
+$$
+is a fibre product of predeformation categories.
+\end{lemma}
+
+\begin{proof}
+This is a matter of unwinding the definitions. Details omitted.
+\end{proof}
+
+
+
+
+
+
+\section{Pushouts and stacks}
+\label{section-pushouts}
+
+\noindent
+In this section we show that algebraic stacks behave well with
+respect to certain pushouts. The results in this section hold over
+any base scheme.
+
+\medskip\noindent
+The following lemma is also correct when $Y$, $X'$, $X$, $Y'$ are
+algebraic spaces, see (insert future reference here).
+
+\begin{lemma}
+\label{lemma-pushout}
+\begin{slogan}
+Algebraic stacks satisfy the (strong) Rim-Schlessinger condition
+\end{slogan}
+Let $S$ be a scheme. Let
+$$
+\xymatrix{
+X \ar[r] \ar[d] & X' \ar[d] \\
+Y \ar[r] & Y'
+}
+$$
+be a pushout in the category of schemes over $S$ where $X \to X'$
+is a thickening and $X \to Y$ is affine, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-pushout-along-thickening}.
+Let $\mathcal{Z}$ be an algebraic stack over $S$.
+Then the functor of fibre categories
+$$
+\mathcal{Z}_{Y'}
+\longrightarrow
+\mathcal{Z}_Y \times_{\mathcal{Z}_X} \mathcal{Z}_{X'}
+$$
+is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+Let $y'$ be an object of the left hand side. The sheaf
+$\mathit{Isom}(y', y')$ on the category of schemes over $Y'$
+is representable by an algebraic space $I$ over $Y'$, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-representable-diagonal}.
+We conclude that the functor of the lemma is fully faithful as
+$Y'$ is the pushout in the category of algebraic spaces as
+well as the category of schemes, see
+Pushouts of Spaces, Lemma
+\ref{spaces-pushouts-lemma-pushout-along-thickening-schemes}.
+
+\medskip\noindent
+Let $(y, x', f)$ be an object of the right hand side. Here $f : y|_X \to x'|_X$
+is an isomorphism. To finish the proof we have to construct an object $y'$ of
+$\mathcal{Z}_{Y'}$ whose restrictions to $Y$ and $X'$ agree with $y$ and $x'$
+in a manner compatible with $f$. In fact, it suffices to construct $y'$
+fppf locally on $Y'$, see
+Stacks, Lemma \ref{stacks-lemma-characterize-essentially-surjective-when-ff}.
+Choose a representable algebraic stack
+$\mathcal{W}$ and a surjective smooth morphism $\mathcal{W} \to \mathcal{Z}$.
+Then
+$$
+(\Sch/Y)_{fppf} \times_{y, \mathcal{Z}} \mathcal{W}
+\quad\text{and}\quad
+(\Sch/X')_{fppf} \times_{x', \mathcal{Z}} \mathcal{W}
+$$
+are algebraic stacks representable by algebraic spaces $V$ and $U'$
+smooth over $Y$ and $X'$. The isomorphism $f$ induces an isomorphism
+$\varphi : V \times_Y X \to U' \times_{X'} X$ over $X$. By
+Pushouts of Spaces, Lemmas
+\ref{spaces-pushouts-lemma-pushout-along-thickening} and
+\ref{spaces-pushouts-lemma-equivalence-categories-spaces-pushout-flat}
+we see that the pushout $V' = V \amalg_{V \times_Y X} U'$ is
+an algebraic space smooth over $Y'$ whose base change to
+$Y$ and $X'$ recovers $V$ and $U'$ in a manner compatible with $\varphi$.
+
+\medskip\noindent
+Let $W$ be the algebraic space representing $\mathcal{W}$.
+The projections $V \to W$ and $U' \to W$ agree as morphisms
+over $V \times_Y X \cong U' \times_{X'} X$ hence the universal
+property of the pushout determines a morphism of algebraic spaces
+$V' \to W$. Choose a scheme $Y_1'$ and a surjective \'etale morphism
+$Y_1' \to V'$. Set $Y_1 = Y \times_{Y'} Y_1'$,
+$X_1' = X' \times_{Y'} Y_1'$, $X_1 = X \times_{Y'} Y_1'$.
+The composition
+$$
+(\Sch/Y_1') \to (\Sch/V') \to (\Sch/W) = \mathcal{W} \to \mathcal{Z}
+$$
+corresponds by the $2$-Yoneda lemma to an object $y_1'$ of $\mathcal{Z}$
+over $Y_1'$ whose restriction to $Y_1$ and $X_1'$ agrees with $y|_{Y_1}$
+and $x'|_{X_1'}$ in a manner compatible with $f|_{X_1}$. Thus we have
+constructed our desired object smooth locally over $Y'$ and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{The Rim-Schlessinger condition}
+\label{section-RS}
+
+\noindent
+The motivation for the following definition comes from
+Lemma \ref{lemma-pushout}
+and
+Formal Deformation Theory, Definition \ref{formal-defos-definition-RS} and
+Lemma \ref{formal-defos-lemma-RS-2-categorical}.
+
+\begin{definition}
+\label{definition-RS}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{Z}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. We say $\mathcal{Z}$
+satisfies {\it condition (RS)} if for every pushout
+$$
+\xymatrix{
+X \ar[r] \ar[d] & X' \ar[d] \\
+Y \ar[r] & Y' = Y \amalg_X X'
+}
+$$
+in the category of schemes over $S$ where
+\begin{enumerate}
+\item $X$, $X'$, $Y$, $Y'$ are spectra of local Artinian rings,
+\item $X$, $X'$, $Y$, $Y'$ are of finite type over $S$, and
+\item $X \to X'$ (and hence $Y \to Y'$) is a closed immersion
+\end{enumerate}
+the functor of fibre categories
+$$
+\mathcal{Z}_{Y'}
+\longrightarrow
+\mathcal{Z}_Y \times_{\mathcal{Z}_X} \mathcal{Z}_{X'}
+$$
+is an equivalence of categories.
+\end{definition}
+
+\noindent
+If $A$ is an Artinian local ring with residue field $k$, then
+any morphism $\Spec(A) \to S$ is affine, and it is of finite type if and
+only if the induced morphism $\Spec(k) \to S$ is of finite type, see
+Morphisms, Lemmas \ref{morphisms-lemma-Artinian-affine} and
+\ref{morphisms-lemma-artinian-finite-type}.
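+
+\medskip\noindent
+It may help to unwind the pushout in Definition \ref{definition-RS}
+on the level of rings. Write $X = \Spec(A)$, $X' = \Spec(A')$, and
+$Y = \Spec(B)$, so that the closed immersion $X \to X'$ corresponds
+to a surjection $A' \to A$ and the morphism $X \to Y$ corresponds
+to a ring map $B \to A$. Then
+$$
+Y' = Y \amalg_X X' = \Spec(B \times_A A'),
+$$
+see More on Morphisms,
+Lemma \ref{more-morphisms-lemma-pushout-along-thickening}.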
+
+\begin{lemma}
+\label{lemma-algebraic-stack-RS}
+Let $\mathcal{X}$ be an algebraic stack over a locally Noetherian base
+$S$. Then $\mathcal{X}$ satisfies (RS).
+\end{lemma}
+
+\begin{proof}
+Immediate from the definitions and Lemma \ref{lemma-pushout}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibre-product-RS}
+Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$ and
+$q : \mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. If $\mathcal{X}$, $\mathcal{Y}$,
+and $\mathcal{Z}$ satisfy (RS), then so
+does $\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$.
+\end{lemma}
+
+\begin{proof}
+This is formal. Let
+$$
+\xymatrix{
+X \ar[r] \ar[d] & X' \ar[d] \\
+Y \ar[r] & Y' = Y \amalg_X X'
+}
+$$
+be a diagram as in Definition \ref{definition-RS}. We have to show that
+$$
+(\mathcal{X} \times_{\mathcal{Y}} \mathcal{Z})_{Y'}
+\longrightarrow
+(\mathcal{X} \times_{\mathcal{Y}} \mathcal{Z})_Y
+\times_{(\mathcal{X} \times_{\mathcal{Y}} \mathcal{Z})_X}
+(\mathcal{X} \times_{\mathcal{Y}} \mathcal{Z})_{X'}
+$$
+is an equivalence. Using the definition of the $2$-fibre product
+this becomes
+\begin{equation}
+\label{equation-RS-fibre-product}
+\mathcal{X}_{Y'} \times_{\mathcal{Y}_{Y'}} \mathcal{Z}_{Y'}
+\longrightarrow
+(\mathcal{X}_Y \times_{\mathcal{Y}_Y} \mathcal{Z}_Y)
+\times_{(\mathcal{X}_X \times_{\mathcal{Y}_X} \mathcal{Z}_X)}
+(\mathcal{X}_{X'} \times_{\mathcal{Y}_{X'}} \mathcal{Z}_{X'}).
+\end{equation}
+We are given that each of the functors
+$$
+\mathcal{X}_{Y'} \to \mathcal{X}_Y \times_{\mathcal{X}_X} \mathcal{X}_{X'},
+\quad
+\mathcal{Y}_{Y'} \to \mathcal{Y}_Y \times_{\mathcal{Y}_X} \mathcal{Y}_{X'},
+\quad
+\mathcal{Z}_{Y'} \to
+\mathcal{Z}_Y \times_{\mathcal{Z}_X} \mathcal{Z}_{X'}
+$$
+are equivalences. An object of the right hand side of
+(\ref{equation-RS-fibre-product}) is a system
+$$
+((x_Y, z_Y, \phi_Y), (x_{X'}, z_{X'}, \phi_{X'}), (\alpha, \beta)).
+$$
+Then $(x_Y, x_{X'}, \alpha)$ is isomorphic to the image of an object
+$x_{Y'}$ of $\mathcal{X}_{Y'}$ and $(z_Y, z_{X'}, \beta)$ is isomorphic
+to the image of an object $z_{Y'}$ of $\mathcal{Z}_{Y'}$. The pair of
+morphisms $(\phi_Y, \phi_{X'})$ corresponds to a morphism $\psi$
+between the images of $x_{Y'}$ and $z_{Y'}$ in $\mathcal{Y}_{Y'}$.
+Then $(x_{Y'}, z_{Y'}, \psi)$ is an object of the left hand side of
+(\ref{equation-RS-fibre-product}) mapping to the given object of the
+right hand side. This proves that (\ref{equation-RS-fibre-product}) is
+essentially surjective. We omit the proof that it is fully faithful.
+\end{proof}
+
+
+
+
+
+\section{Deformation categories}
+\label{section-deformation-categories}
+
+\noindent
+We match the notation introduced above with the notation from the
+chapter ``Formal Deformation Theory''.
+
+\begin{lemma}
+\label{lemma-deformation-category}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$ satisfying (RS). For any field
+$k$ of finite type over $S$ and any object $x_0$ of $\mathcal{X}$ lying
+over $k$ the predeformation category
+$p : \mathcal{F}_{\mathcal{X}, k, x_0} \to \mathcal{C}_\Lambda$
+(\ref{equation-predeformation-category}) is a deformation category, see
+Formal Deformation Theory, Definition
+\ref{formal-defos-definition-deformation-category}.
+\end{lemma}
+
+\begin{proof}
+Set $\mathcal{F} = \mathcal{F}_{\mathcal{X}, k, x_0}$.
+Let $f_1 : A_1 \to A$ and $f_2 : A_2 \to A$ be ring maps in
+$\mathcal{C}_\Lambda$ with $f_2$ surjective. We have to show that
+the functor
+$$
+\mathcal{F}(A_1 \times_A A_2)
+\longrightarrow
+\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)
+$$
+is an equivalence, see
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-RS-2-categorical}.
+Set $X = \Spec(A)$, $X' = \Spec(A_2)$, $Y = \Spec(A_1)$ and
+$Y' = \Spec(A_1 \times_A A_2)$. Note that $Y' = Y \amalg_X X'$ in the
+category of schemes, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-pushout-along-thickening}.
+We know that in the diagram of functors of fibre categories
+$$
+\xymatrix{
+\mathcal{X}_{Y'} \ar[r] \ar[d] &
+\mathcal{X}_Y \times_{\mathcal{X}_X} \mathcal{X}_{X'} \ar[d] \\
+\mathcal{X}_{\Spec(k)} \ar@{=}[r] & \mathcal{X}_{\Spec(k)}
+}
+$$
+the top horizontal arrow is an equivalence by
+Definition \ref{definition-RS}.
+Since $\mathcal{F}(B)$ is the category of objects of $\mathcal{X}_{\Spec(B)}$
+with an identification with $x_0$ over $k$, we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-deformation-category-implies}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be fibred
+in groupoids over $(\Sch/S)_{fppf}$. Let $k$ be a field of finite type over
+$S$ and $x_0$ an object
+of $\mathcal{X}$ over $k$. Let $p : \mathcal{F} \to \mathcal{C}_\Lambda$
+be as in (\ref{equation-predeformation-category}). If $\mathcal{F}$
+is a deformation category, i.e., if $\mathcal{F}$ satisfies the
+Rim-Schlessinger condition (RS), then we see that $\mathcal{F}$ satisfies
+Schlessinger's conditions (S1) and (S2) by
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-RS-implies-S1-S2}.
+Let $\overline{\mathcal{F}}$ be the functor of isomorphism classes, see
+Formal Deformation Theory, Remarks
+\ref{formal-defos-remarks-cofibered-groupoids}
+(\ref{formal-defos-item-associated-functor-isomorphism-classes}).
+Then $\overline{\mathcal{F}}$ satisfies (S1) and (S2) as well, see
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-S1-S2-associated-functor}.
+This holds in particular in the situation of
+Lemma \ref{lemma-deformation-category}.
+\end{remark}
+
+
+
+
+\section{Change of field}
+\label{section-change-of-field}
+
+\noindent
+This section is the analogue of
+Formal Deformation Theory, Section \ref{formal-defos-section-change-of-field}.
+As pointed out there, to discuss what happens under change of field
+we need to write $\mathcal{C}_{\Lambda, k}$ instead of $\mathcal{C}_\Lambda$.
+In the following lemma we use the notation $\mathcal{F}_{l/k}$
+introduced in Formal Deformation Theory, Situation
+\ref{formal-defos-situation-change-of-fields}.
+
+\begin{lemma}
+\label{lemma-change-of-field}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. Let $k$ be a
+field of finite type over $S$ and let $l/k$ be a finite extension.
+Let $x_0$ be an object of $\mathcal{X}$ lying over $\Spec(k)$.
+Denote $x_{l, 0}$ the restriction of $x_0$ to $\Spec(l)$.
+Then there is a canonical functor
+$$
+(\mathcal{F}_{\mathcal{X}, k , x_0})_{l/k}
+\longrightarrow
+\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}
+$$
+of categories cofibred in groupoids over $\mathcal{C}_{\Lambda, l}$.
+If $\mathcal{X}$ satisfies (RS), then this functor is an equivalence.
+\end{lemma}
+
+\begin{proof}
+Consider a factorization
+$$
+\Spec(l) \to \Spec(B) \to S
+$$
+as in (\ref{equation-factor}). By definition we have
+$$
+(\mathcal{F}_{\mathcal{X}, k , x_0})_{l/k}(B) =
+\mathcal{F}_{\mathcal{X}, k, x_0}(B \times_l k)
+$$
+see Formal Deformation Theory, Situation
+\ref{formal-defos-situation-change-of-fields}. Thus an object of this
+is a morphism $x_0 \to x$ of $\mathcal{X}$ lying over the morphism
+$\Spec(k) \to \Spec(B \times_l k)$. Choosing pullback functor for $\mathcal{X}$
+we can associate to $x_0 \to x$ the morphism $x_{l, 0} \to x_B$
+where $x_B$ is the restriction of $x$ to $\Spec(B)$ (via the morphism
+$\Spec(B) \to \Spec(B \times_l k)$ coming from $B \times_l k \subset B$).
+This construction is functorial in $B$ and compatible with morphisms.
+
+\medskip\noindent
+Next, assume $\mathcal{X}$ satisfies (RS). Consider the diagrams
+$$
+\vcenter{
+\xymatrix{
+l & B \ar[l] \\
+k \ar[u] & B \times_l k \ar[l] \ar[u]
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+\Spec(l) \ar[d] \ar[r] & \Spec(B) \ar[d] \\
+\Spec(k) \ar[r] & \Spec(B \times_l k)
+}
+}
+$$
+The diagram on the left is a fibre product of rings. The diagram on the
+right is a pushout in the category of schemes, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-pushout-along-thickening}.
+These schemes are all of finite type over $S$ (see remarks following
+Definition \ref{definition-RS}). Hence (RS) kicks in to give an equivalence
+of fibre categories
+$$
+\mathcal{X}_{\Spec(B \times_l k)}
+\longrightarrow
+\mathcal{X}_{\Spec(k)}
+\times_{\mathcal{X}_{\Spec(l)}}
+\mathcal{X}_{\Spec(B)}
+$$
+This implies that the functor defined above gives an equivalence of
+fibre categories. Hence the functor is an equivalence on categories
+cofibred in groupoids by (the dual of)
+Categories, Lemma \ref{categories-lemma-equivalence-fibred-categories}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Tangent spaces}
+\label{section-tangent-spaces}
+
+\noindent
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. Let $k$ be a field of finite
+type over $S$ and let $x_0$ be an object of $\mathcal{X}$ over $k$.
+In Formal Deformation Theory, Section \ref{formal-defos-section-tangent-spaces}
+we have defined the {\it tangent space}
+\begin{equation}
+\label{equation-tangent-space}
+T\mathcal{F}_{\mathcal{X}, k, x_0} =
+\left\{
+\begin{matrix}
+\text{isomorphism classes of morphisms}\\
+x_0 \to x\text{ over }\Spec(k) \to \Spec(k[\epsilon])
+\end{matrix}
+\right\}
+\end{equation}
+of the predeformation category $\mathcal{F}_{\mathcal{X}, k, x_0}$.
+In Formal Deformation Theory, Section
+\ref{formal-defos-section-infinitesimal-automorphisms}
+we have defined
+\begin{equation}
+\label{equation-infinitesimal-automorphisms}
+\text{Inf}(\mathcal{F}_{\mathcal{X}, k, x_0}) =
+\Ker\left(
+\text{Aut}_{\Spec(k[\epsilon])}(x'_0) \to \text{Aut}_{\Spec(k)}(x_0)
+\right)
+\end{equation}
+where $x'_0$ is the pullback of $x_0$ to $\Spec(k[\epsilon])$.
+If $\mathcal{X}$ satisfies the Rim-Schlessinger condition (RS), then
+$T\mathcal{F}_{\mathcal{X}, k, x_0}$ comes equipped with a natural
+$k$-vector space structure by Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-tangent-space-vector-space}
+(assumptions hold by Lemma \ref{lemma-deformation-category} and
+Remark \ref{remark-deformation-category-implies}). Moreover,
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-infaut-vector-space}
+shows that $\text{Inf}(\mathcal{F}_{\mathcal{X}, k, x_0})$ has a
+natural $k$-vector space structure such that addition agrees with
+composition of automorphisms. A natural condition
+is to ask these vector spaces to have finite dimension.
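+
+\medskip\noindent
+As a sanity check (not used in what follows), consider the case where
+$\mathcal{X} = (\Sch/X)_{fppf}$ for a scheme $X$ locally of finite type
+over $S$. A morphism $x_0 \to x$ over $\Spec(k) \to \Spec(k[\epsilon])$
+is then simply a morphism $\Spec(k[\epsilon]) \to X$ over $S$ extending
+$x_0$, so
+$$
+T\mathcal{F}_{\mathcal{X}, k, x_0} = \text{Der}_\Lambda(\mathcal{O}_{X, u_0}, k)
+$$
+where $u_0 \in X$ is the image of $x_0$ and $k$ is viewed as an
+$\mathcal{O}_{X, u_0}$-module via $x_0$. Moreover,
+$\text{Inf}(\mathcal{F}_{\mathcal{X}, k, x_0}) = 0$ since objects in the
+fibre categories of $(\Sch/X)_{fppf}$ have no nontrivial automorphisms.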
+
+\medskip\noindent
+The following lemma tells us this is true if
+$\mathcal{X}$ is locally of finite type over $S$ (see
+Morphisms of Stacks, Section \ref{stacks-morphisms-section-finite-type}).
+
+\begin{lemma}
+\label{lemma-finite-dimension}
+Let $S$ be a locally Noetherian scheme. Assume
+\begin{enumerate}
+\item $\mathcal{X}$ is an algebraic stack,
+\item $U$ is a scheme locally of finite type over $S$, and
+\item $(\Sch/U)_{fppf} \to \mathcal{X}$ is a smooth surjective
+morphism.
+\end{enumerate}
+Then, for any $\mathcal{F} = \mathcal{F}_{\mathcal{X}, k, x_0}$ as in
+Section \ref{section-predeformation-categories}
+the tangent space $T\mathcal{F}$ and infinitesimal automorphism space
+$\text{Inf}(\mathcal{F})$ have finite dimension over $k$
+\end{lemma}
+
+\begin{proof}
+Let us write $\mathcal{U} = (\Sch/U)_{fppf}$. By our definition
+of algebraic stacks the $1$-morphism $\mathcal{U} \to \mathcal{X}$
+is representable by algebraic spaces. Hence in particular the
+2-fibre product
+$$
+\mathcal{U}_{x_0} = (\Sch/\Spec(k))_{fppf} \times_\mathcal{X} \mathcal{U}
+$$
+is representable by an algebraic space $U_{x_0}$ over $\Spec(k)$. Then
+$U_{x_0} \to \Spec(k)$ is smooth and surjective (in particular $U_{x_0}$
+is nonempty). By Spaces over Fields, Lemma
+\ref{spaces-over-fields-lemma-smooth-separable-closed-points-dense}
+we can find a finite extension $l/k$ and a point
+$\Spec(l) \to U_{x_0}$ over $k$. We have
+$$
+(\mathcal{F}_{\mathcal{X}, k , x_0})_{l/k} =
+\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}
+$$
+by Lemma \ref{lemma-change-of-field} and the fact that $\mathcal{X}$
+satisfies (RS). Thus we see that
+$$
+T\mathcal{F} \otimes_k l \cong T\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}
+\quad\text{and}\quad
+\text{Inf}(\mathcal{F}) \otimes_k l \cong
+\text{Inf}(\mathcal{F}_{\mathcal{X}, l, x_{l, 0}})
+$$
+by
+Formal Deformation Theory, Lemmas
+\ref{formal-defos-lemma-tangent-space-change-of-field} and
+\ref{formal-defos-lemma-inf-aut-change-of-field}
+(these are applicable by
+Lemmas \ref{lemma-algebraic-stack-RS} and
+\ref{lemma-deformation-category} and
+Remark \ref{remark-deformation-category-implies}).
+Hence it suffices to prove that $T\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}$
+and $\text{Inf}(\mathcal{F}_{\mathcal{X}, l, x_{l, 0}})$
+have finite dimension over $l$. Note that $x_{l, 0}$ comes from a point
+$u_0$ of $\mathcal{U}$ over $l$.
+
+\medskip\noindent
+We interrupt the flow of the argument to show that the lemma for
+infinitesimal automorphisms follows from the lemma for tangent spaces.
+Namely, let
+$\mathcal{R} = \mathcal{U} \times_\mathcal{X} \mathcal{U}$.
+Let $r_0$ be the $l$-valued point $(u_0, u_0, \text{id}_{x_{l, 0}})$ of
+$\mathcal{R}$. Combining
+Lemma \ref{lemma-fibre-product-deformation-categories} and
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-deformation-functor-diagonal}
+we see that
+$$
+\text{Inf}(\mathcal{F}_{\mathcal{X}, l, x_{l, 0}})
+\subset
+T\mathcal{F}_{\mathcal{R}, l, r_0}
+$$
+Note that $\mathcal{R}$ is an algebraic stack, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-2-fibre-product-general}.
+Also, $\mathcal{R}$ is representable by an algebraic space $R$
+smooth over $U$ (via either projection, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-stack-presentation}).
+Hence, choosing a scheme $U'$ and a surjective \'etale morphism
+$U' \to R$, we see that $U'$ is smooth over $U$, hence locally of
+finite type over $S$. As $(\Sch/U')_{fppf} \to \mathcal{R}$ is
+surjective and smooth, we have reduced the question to the case
+of tangent spaces.
+
+\medskip\noindent
+The functor (\ref{equation-functoriality})
+$$
+\mathcal{F}_{\mathcal{U}, l, u_0}
+\longrightarrow
+\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}
+$$
+is smooth by Lemma \ref{lemma-formally-smooth-on-deformation-categories}.
+The induced map on tangent spaces
+$$
+T\mathcal{F}_{\mathcal{U}, l, u_0}
+\longrightarrow
+T\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}
+$$
+is $l$-linear (by
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-k-linear-differential})
+and surjective (as smooth maps of predeformation categories induce
+surjective maps on tangent spaces by
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-smooth-morphism-essentially-surjective}).
+Hence it suffices to prove that the tangent space of the deformation
+space associated to the representable algebraic stack $\mathcal{U}$
+at the point $u_0$ is finite dimensional. Let $\Spec(R) \subset U$ be
+an affine open such that $u_0 : \Spec(l) \to U$ factors through $\Spec(R)$
+and such that $\Spec(R) \to S$ factors through $\Spec(\Lambda) \subset S$.
+Let $\mathfrak m_R \subset R$ be the kernel of the $\Lambda$-algebra map
+$\varphi_0 : R \to l$ corresponding to $u_0$. Note that $R$, being of finite
+type over the Noetherian ring $\Lambda$, is a Noetherian ring. Hence
+$\mathfrak m_R = (f_1, \ldots, f_n)$ is a finitely generated ideal.
+We have
+$$
+T\mathcal{F}_{\mathcal{U}, l, u_0}
+=
+\{\varphi : R \to l[\epsilon] \mid
+\varphi \text{ is a } \Lambda\text{-algebra map and }
+\varphi \bmod \epsilon = \varphi_0\}
+$$
+An element of the right hand side is determined by its values on
+$f_1, \ldots, f_n$ hence the dimension is at most $n$ and we win.
+Some details omitted.
+\end{proof}
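+
+\medskip\noindent
+Here is a toy example, only meant to illustrate the computation at the
+end of the proof above. Take $\Lambda = l = k$ and $R = k[x, y]/(xy)$
+with $\varphi_0 : R \to k$ sending both $x$ and $y$ to zero, so that
+$\mathfrak m_R = (x, y)$ is generated by $n = 2$ elements. A
+$\Lambda$-algebra map $\varphi : R \to k[\epsilon]$ with
+$\varphi \bmod \epsilon = \varphi_0$ satisfies $\varphi(x) = a\epsilon$
+and $\varphi(y) = b\epsilon$ for some $a, b \in k$, and the relation
+$xy = 0$ imposes no condition because $ab\epsilon^2 = 0$. Hence the
+tangent space in question is isomorphic to $k^{\oplus 2}$ and the bound
+in the proof is attained.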
+
+\begin{lemma}
+\label{lemma-fibre-product-tangent-spaces}
+Let $S$ be a locally Noetherian scheme. Let $p : \mathcal{X} \to \mathcal{Y}$
+and $q : \mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. Assume $\mathcal{X}$,
+$\mathcal{Y}$, $\mathcal{Z}$ satisfy (RS).
+Let $k$ be a field of finite type over $S$ and let $w_0$ be an object of
+$\mathcal{W} = \mathcal{X} \times_\mathcal{Y} \mathcal{Z}$ over $k$.
+Denote $x_0, y_0, z_0$ the objects of $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+you get from $w_0$. Then there is a $6$-term exact sequence
+$$
+\xymatrix{
+0 \ar[r] &
+\text{Inf}(\mathcal{F}_{\mathcal{W}, k, w_0}) \ar[r] &
+\text{Inf}(\mathcal{F}_{\mathcal{X}, k, x_0}) \oplus
+\text{Inf}(\mathcal{F}_{\mathcal{Z}, k, z_0}) \ar[r] &
+\text{Inf}(\mathcal{F}_{\mathcal{Y}, k, y_0}) \ar[lld] \\
+ &
+T\mathcal{F}_{\mathcal{W}, k, w_0} \ar[r] &
+T\mathcal{F}_{\mathcal{X}, k, x_0} \oplus
+T\mathcal{F}_{\mathcal{Z}, k, z_0} \ar[r] &
+T\mathcal{F}_{\mathcal{Y}, k, y_0}
+}
+$$
+of $k$-vector spaces.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-fibre-product-RS} we see that $\mathcal{W}$
+satisfies (RS) and hence the lemma makes sense. To see the lemma
+is true, apply Lemmas \ref{lemma-fibre-product-deformation-categories} and
+\ref{lemma-deformation-category}
+and Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-deformation-categories-fiber-product-morphisms}.
+\end{proof}
+
+
+
+
+
+
+\section{Formal objects}
+\label{section-formal-objects}
+
+\noindent
+In this section we transfer some of the notions already defined
+in the chapter ``Formal Deformation Theory'' to the current setting.
+In the following we will say ``$R$ is an $S$-algebra'' to indicate
+that $R$ is a ring endowed with a morphism of schemes $\Spec(R) \to S$.
+
+\begin{definition}
+\label{definition-formal-objects}
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+\begin{enumerate}
+\item A {\it formal object} $\xi = (R, \xi_n, f_n)$ of $\mathcal{X}$ consists
+of a Noetherian complete local $S$-algebra $R$, objects $\xi_n$ of
+$\mathcal{X}$ lying over $\Spec(R/\mathfrak m_R^n)$, and morphisms
+$f_n : \xi_n \to \xi_{n + 1}$ of $\mathcal{X}$ lying over
+$\Spec(R/\mathfrak m^n) \to \Spec(R/\mathfrak m^{n + 1})$
+such that $R/\mathfrak m$ is a field of finite type over $S$.
+\item A {\it morphism of formal objects}
+$a : \xi = (R, \xi_n, f_n) \to \eta = (T, \eta_n, g_n)$
+is given by morphisms $a_n : \xi_n \to \eta_n$ such that for every $n$
+the diagram
+$$
+\xymatrix{
+\xi_n \ar[r]_{f_n} \ar[d]_{a_n} & \xi_{n + 1} \ar[d]^{a_{n + 1}} \\
+\eta_n \ar[r]^{g_n} & \eta_{n + 1}
+}
+$$
+is commutative. Applying the functor $p$ we obtain a compatible collection
+of morphisms $\Spec(R/\mathfrak m_R^n) \to \Spec(T/\mathfrak m_T^n)$ and
+hence a morphism $a_0 : \Spec(R) \to \Spec(T)$ over $S$. We say that
+$a$ {\it lies over} $a_0$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Thus we obtain a category of formal objects of $\mathcal{X}$.
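+
+\medskip\noindent
+For example, if $\mathcal{X} = (\Sch/X)_{fppf}$ for a scheme $X$ over
+$S$, then unwinding the definition shows that a formal object
+$(R, \xi_n, f_n)$ is the same thing as a compatible system of morphisms
+$\Spec(R/\mathfrak m_R^n) \to X$ over $S$; this special case will
+reappear as Case I in the proof of Lemma \ref{lemma-effective} below.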
+
+\begin{remark}
+\label{remark-formal-objects-match}
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Let $\xi = (R, \xi_n, f_n)$ be a formal object. Set $k = R/\mathfrak m$ and
+$x_0 = \xi_1$. The formal object $\xi$ defines a formal object
+$\xi$ of the predeformation category $\mathcal{F}_{\mathcal{X}, k, x_0}$.
+This follows immediately from
+Definition \ref{definition-formal-objects} above,
+Formal Deformation Theory, Definition
+\ref{formal-defos-definition-formal-objects},
+and our construction of the predeformation category
+$\mathcal{F}_{\mathcal{X}, k, x_0}$ in
+Section \ref{section-predeformation-categories}.
+\end{remark}
+
+\noindent
+If $F : \mathcal{X} \to \mathcal{Y}$ is a $1$-morphism of categories fibred
+in groupoids over $(\Sch/S)_{fppf}$, then $F$ induces a functor between
+categories of formal objects as well.
+
+\begin{lemma}
+\label{lemma-smooth-lift-formal}
+Let $S$ be a locally Noetherian scheme. Let $F : \mathcal{X} \to \mathcal{Y}$
+be a $1$-morphism of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Let $\eta = (R, \eta_n, g_n)$ be a formal object of $\mathcal{Y}$
+and let $\xi_1$ be an object of $\mathcal{X}$ with $F(\xi_1) \cong \eta_1$.
+If $F$ is formally smooth on objects (see
+Criteria for Representability, Section \ref{criteria-section-formally-smooth}),
+then there exists a formal object $\xi = (R, \xi_n, f_n)$ of $\mathcal{X}$
+such that $F(\xi) \cong \eta$.
+\end{lemma}
+
+\begin{proof}
+Note that each of the morphisms
+$\Spec(R/\mathfrak m^n) \to \Spec(R/\mathfrak m^{n + 1})$ is a first order
+thickening of affine schemes over $S$. Hence the assumption on $F$ means
+that we can successively lift $\xi_1$ to objects $\xi_2, \xi_3, \ldots$
+of $\mathcal{X}$ endowed with compatible isomorphisms
+$\xi_n|_{\Spec(R/\mathfrak m^{n - 1})} \cong \xi_{n - 1}$
+and $F(\xi_n) \cong \eta_n$.
+\end{proof}
+
+\noindent
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Suppose that $x$ is an object of $\mathcal{X}$ over $R$, where $R$ is a
+Noetherian complete local $S$-algebra with residue field of finite type
+over $S$. Then we can consider the system of restrictions
+$\xi_n = x|_{\Spec(R/\mathfrak m^n)}$ endowed with the natural morphisms
+$\xi_1 \to \xi_2 \to \ldots$ coming from transitivity of restriction.
+Thus $\xi = (R, \xi_n, \xi_n \to \xi_{n + 1})$ is a formal object of
+$\mathcal{X}$. This construction is functorial in the object $x$.
+Thus we obtain a functor
+\begin{equation}
+\label{equation-approximation}
+\left\{
+\begin{matrix}
+\text{objects }x\text{ of }\mathcal{X} \text{ such that }p(x) = \Spec(R) \\
+\text{where }R\text{ is Noetherian complete local}\\
+\text{with }R/\mathfrak m\text{ of finite type over }S
+\end{matrix}
+\right\}
+\longrightarrow
+\left\{
+\begin{matrix}
+\text{formal objects of }\mathcal{X}
+\end{matrix}
+\right\}
+\end{equation}
+To be precise, the left hand side is the full subcategory of $\mathcal{X}$
+consisting of objects as indicated and the right hand side is the category
+of formal objects of $\mathcal{X}$ as in
+Definition \ref{definition-formal-objects}.
+
+\begin{definition}
+\label{definition-effective}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. A formal object
+$\xi = (R, \xi_n, f_n)$ of $\mathcal{X}$ is called {\it effective}
+if it is in the essential image of the functor
+(\ref{equation-approximation}).
+\end{definition}
+
+\noindent
+If the category fibred in groupoids is an algebraic stack, then every
+formal object is effective as follows from the next lemma.
+
+\begin{lemma}
+\label{lemma-effective}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be an algebraic
+stack over $S$. The functor (\ref{equation-approximation}) is an equivalence.
+\end{lemma}
+
+\begin{proof}
+Case I: $\mathcal{X}$ is representable (by a scheme). Say
+$\mathcal{X} = (\Sch/X)_{fppf}$ for some scheme $X$ over $S$.
+Unwinding the definitions we have to prove the following: Given
+a Noetherian complete local $S$-algebra $R$ with $R/\mathfrak m$ of
+finite type over $S$ we have
+$$
+\Mor_S(\Spec(R), X) \longrightarrow \lim \Mor_S(\Spec(R/\mathfrak m^n), X)
+$$
+is bijective. This follows from Formal Spaces, Lemma
+\ref{formal-spaces-lemma-map-into-scheme}.
+
+\medskip\noindent
+Case II. $\mathcal{X}$ is representable by an algebraic space. Say
+$\mathcal{X}$ is representable by $X$. Again we have to show that
+$$
+\Mor_S(\Spec(R), X) \longrightarrow \lim \Mor_S(\Spec(R/\mathfrak m^n), X)
+$$
+is bijective for $R$ as above. This is Formal Spaces, Lemma
+\ref{formal-spaces-lemma-map-into-algebraic-space}.
+
+\medskip\noindent
+Case III: General case of an algebraic stack. A general remark is that
+the left and right hand side of (\ref{equation-approximation}) are
+categories fibred in groupoids over the category of affine schemes
+over $S$ which are spectra of Noetherian complete local rings
+with residue field of finite type over $S$. We will also see in the
+proof below that they form stacks for a certain topology on this
+category.
+
+\medskip\noindent
+We first prove fully faithfulness. Let $R$ be a Noetherian complete
+local $S$-algebra with $k = R/\mathfrak m$ of finite type over $S$.
+Let $x, x'$ be objects of $\mathcal{X}$ over $R$. As $\mathcal{X}$ is
+an algebraic stack $\mathit{Isom}(x, x')$ is representable by an
+algebraic space $I$ over $\Spec(R)$, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-representable-diagonal}.
+Applying Case II to $I$ over $\Spec(R)$ implies immediately that
+(\ref{equation-approximation}) is fully faithful on fibre categories over
+$\Spec(R)$. Hence the functor is fully faithful by
+Categories, Lemma \ref{categories-lemma-equivalence-fibred-categories}.
+
+\medskip\noindent
+Essential surjectivity. Let $\xi = (R, \xi_n, f_n)$ be a formal object of
+$\mathcal{X}$. Choose a scheme $U$ over $S$ and a surjective smooth morphism
+$f : (\Sch/U)_{fppf} \to \mathcal{X}$. For every $n$ consider the fibre product
+$$
+(\Sch/\Spec(R/\mathfrak m^n))_{fppf}
+\times_{\xi_n, \mathcal{X}, f}
+(\Sch/U)_{fppf}
+$$
+By assumption this is representable by an algebraic space $V_n$ surjective and
+smooth over $\Spec(R/\mathfrak m^n)$. The morphisms
+$f_n : \xi_n \to \xi_{n + 1}$ induce cartesian squares
+$$
+\xymatrix{
+V_{n + 1} \ar[d] & V_n \ar[d] \ar[l] \\
+\Spec(R/\mathfrak m^{n + 1}) & \Spec(R/\mathfrak m^n) \ar[l]
+}
+$$
+of algebraic spaces. By Spaces over Fields, Lemma
+\ref{spaces-over-fields-lemma-smooth-separable-closed-points-dense}
+we can find a finite separable extension $k'/k$ and a point
+$v'_1 : \Spec(k') \to V_1$ over $k$. Let $R \subset R'$ be the finite \'etale
+extension whose residue field extension is $k'/k$ (exists and
+is unique by
+Algebra, Lemmas \ref{algebra-lemma-henselian-cat-finite-etale} and
+\ref{algebra-lemma-complete-henselian}).
+By the infinitesimal lifting criterion of smoothness (see
+More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-smooth-formally-smooth})
+applied to $V_n \to \Spec(R/\mathfrak m^n)$ for $n = 2, 3, 4, \ldots$
+we can successively find morphisms
+$v'_n : \Spec(R'/(\mathfrak m')^n) \to V_n$ over $\Spec(R/\mathfrak m^n)$
+fitting into commutative diagrams
+$$
+\xymatrix{
+\Spec(R'/(\mathfrak m')^{n + 1}) \ar[d]_{v'_{n + 1}} &
+\Spec(R'/(\mathfrak m')^n) \ar[d]^{v'_n} \ar[l] \\
+V_{n + 1} & V_n \ar[l]
+}
+$$
+Composing with the projection morphisms $V_n \to U$ we obtain a compatible
+system of morphisms $u'_n : \Spec(R'/(\mathfrak m')^n) \to U$.
+By Case I the family $(u'_n)$ comes from a unique
+morphism $u' : \Spec(R') \to U$. Denote $x'$ the object of $\mathcal{X}$
+over $\Spec(R')$ we get by applying the $1$-morphism $f$ to $u'$.
+By construction, there exists a morphism of formal objects
+$$
+(\ref{equation-approximation})(x') =
+(R', x'|_{\Spec(R'/(\mathfrak m')^n)}, \ldots)
+\longrightarrow
+(R, \xi_n, f_n)
+$$
+lying over $\Spec(R') \to \Spec(R)$. Note that $R' \otimes_R R'$ is a finite
+product of Noetherian complete local rings to which our current
+discussion applies. Denote $p_0, p_1 : \Spec(R' \otimes_R R') \to \Spec(R')$
+the two projections. By the fully faithfulness shown above there exists
+a canonical isomorphism $\varphi : p_0^*x' \to p_1^*x'$ because we have
+such isomorphisms over
+$\Spec((R' \otimes_R R')/\mathfrak m^n(R' \otimes_R R'))$.
+We omit the proof that the isomorphism $\varphi$ satisfies the cocycle
+condition (see Stacks, Definition \ref{stacks-definition-descent-data}).
+Since $\{\Spec(R') \to \Spec(R)\}$ is an fppf covering we conclude
+that $x'$ descends to an object $x$ of $\mathcal{X}$ over $\Spec(R)$.
+We omit the proof that $\xi_n$ is the restriction of $x$ to
+$\Spec(R/\mathfrak m^n)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibre-product-effective}
+Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$ and
+$q : \mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. If the functor
+(\ref{equation-approximation}) is an equivalence for
+$\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{Z}$, then it is
+an equivalence for $\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$.
+\end{lemma}
+
+\begin{proof}
+The left and the right hand side of (\ref{equation-approximation})
+for $\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$ are simply the $2$-fibre
+products of the left and the right hand side of (\ref{equation-approximation})
+for $\mathcal{X}$, $\mathcal{Z}$ over $\mathcal{Y}$.
+Hence the result follows as taking $2$-fibre products is compatible
+with equivalences of categories, see
+Categories, Lemma \ref{categories-lemma-equivalence-2-fibre-product}.
+\end{proof}
+
+
+
+
+
+
+\section{Approximation}
+\label{section-approximation}
+
+\noindent
+A fundamental insight of Michael Artin is that you can approximate
+objects of a limit preserving stack. Namely, given an object $x$
+of the stack over a Noetherian complete local ring, you can find
+an object $x_A$ over an algebraic ring which is ``close to'' $x$.
+Here an algebraic ring means a finite type $S$-algebra, and ``close''
+means adically close. In this section we present this in a simple
+yet general form.
+
+\medskip\noindent
+To formulate the result we need to pull together some definitions from
+different places in the Stacks project. First, in
+Criteria for Representability, Section \ref{criteria-section-limit-preserving}
+we introduced {\it limit preserving on objects} for $1$-morphisms
+of categories fibred in groupoids over the category of schemes.
+In More on Algebra, Definition \ref{more-algebra-definition-G-ring}
+we defined the notion of a {\it G-ring}. Let $S$ be a locally Noetherian scheme.
+Let $A$ be an $S$-algebra. We say that $A$ is {\it of finite type over $S$}
+or is a {\it finite type $S$-algebra} if $\Spec(A) \to S$ is of finite type.
+In this case $A$ is a Noetherian ring. Finally, given a ring $A$ and ideal
+$I$ we denote $\text{Gr}_I(A) = \bigoplus I^n/I^{n + 1}$.
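+
+\medskip\noindent
+For example, if $A = k[x_1, \ldots, x_n]$ and $I = (x_1, \ldots, x_n)$,
+then $\text{Gr}_I(A) \cong k[x_1, \ldots, x_n]$ with its standard
+grading. The point of part (6) of the lemma below is that the
+isomorphism $\text{Gr}_{\mathfrak m_R}(R) \cong \text{Gr}_{\mathfrak m_A}(A)$
+forces $R$ and the local ring $A_{\mathfrak m_A}$ to have the same
+Hilbert function, hence for example the same dimension.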
+
+\begin{lemma}
+\label{lemma-approximate}
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category
+fibred in groupoids. Let $x$ be an object of
+$\mathcal{X}$ lying over $\Spec(R)$ where $R$ is a Noetherian complete
+local ring with residue field $k$ of finite type over $S$. Let $s \in S$
+be the image of $\Spec(k) \to S$. Assume that (a) $\mathcal{O}_{S, s}$ is
+a G-ring and (b) $p$ is limit preserving on objects. Then for every
+integer $N \geq 1$ there exist
+\begin{enumerate}
+\item a finite type $S$-algebra $A$,
+\item a maximal ideal $\mathfrak m_A \subset A$,
+\item an object $x_A$ of $\mathcal{X}$ over $\Spec(A)$,
+\item an $S$-isomorphism $R/\mathfrak m_R^N \cong A/\mathfrak m_A^N$,
+\item an isomorphism
+$x|_{\Spec(R/\mathfrak m_R^N)} \cong x_A|_{\Spec(A/\mathfrak m_A^N)}$
+compatible with (4), and
+\item an isomorphism
+$\text{Gr}_{\mathfrak m_R}(R) \cong \text{Gr}_{\mathfrak m_A}(A)$
+of graded $k$-algebras.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose an affine open $\Spec(\Lambda) \subset S$ such that $k$ is a finite
+$\Lambda$-algebra, see
+Morphisms, Lemma \ref{morphisms-lemma-point-finite-type}.
+We may and do replace $S$ by $\Spec(\Lambda)$.
+
+\medskip\noindent
+We may write $R$ as a directed colimit $R = \colim C_j$ where each
+$C_j$ is a finite type $\Lambda$-algebra (see
+Algebra, Lemma \ref{algebra-lemma-ring-colimit-fp}).
+By assumption (b) the object $x$ is isomorphic to the restriction of
+an object over one of the $C_j$. Hence we may choose a finite type
+$\Lambda$-algebra $C$, a $\Lambda$-algebra map $C \to R$, and an object
+$x_C$ of $\mathcal{X}$ over $\Spec(C)$ such that $x = x_C|_{\Spec(R)}$.
+The choice of $C$ is a bookkeeping device and could be avoided.
+For later use, let us write $C = \Lambda[y_1, \ldots, y_u]/(f_1, \ldots, f_v)$
+and we denote $\overline{a}_i \in R$ the image of $y_i$ under the
+map $C \to R$. Set $\mathfrak m_C = C \cap \mathfrak m_R$.
+
+\medskip\noindent
+Choose a $\Lambda$-algebra surjection $\Lambda[x_1, \ldots, x_s] \to k$
+and denote $\mathfrak m'$ the kernel.
+By the universal property of polynomial rings we may lift this
+to a $\Lambda$-algebra map $\Lambda[x_1, \ldots, x_s] \to R$.
+We add some variables (i.e., we increase $s$ a bit) mapping to generators
+of $\mathfrak m_R$. Having done this we see that
+$\Lambda[x_1, \ldots, x_s] \to R/\mathfrak m_R^2$ is surjective.
+Then we see that
+\begin{equation}
+\label{equation-surjection}
+P = \Lambda[x_1, \ldots, x_s]_{\mathfrak m'}^\wedge \longrightarrow R
+\end{equation}
+is a surjective map of Noetherian complete local rings, see for example
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-surjective-cotangent-space}.
+
+\medskip\noindent
+Choose lifts $a_i \in P$ of $\overline{a}_i$ we found above.
+Choose generators $b_1, \ldots, b_r \in P$ for the kernel of
+(\ref{equation-surjection}).
+Choose $c_{ji} \in P$ such that
+$$
+f_j(a_1, \ldots, a_u) = \sum c_{ji} b_i
+$$
+in $P$ which is possible by the choices made so far. Choose generators
+$$
+k_1, \ldots, k_t \in
+\Ker(P^{\oplus r} \xrightarrow{(b_1, \ldots, b_r)} P)
+$$
+and write $k_i = (k_{i1}, \ldots, k_{ir})$ and $K = (k_{ij})$
+so that
+$$
+P^{\oplus t} \xrightarrow{K}
+P^{\oplus r} \xrightarrow{(b_1, \ldots, b_r)}
+P \to R \to 0
+$$
+is an exact sequence of $P$-modules. In particular we have
+$\sum k_{ij} b_j = 0$. After possibly increasing $N$ we may
+assume $N - 1$ works in the Artin-Rees lemma for the first two maps of this
+exact sequence (see More on Algebra, Section
+\ref{more-algebra-section-artin-rees} for terminology).
+
+\medskip\noindent
+By assumption $\mathcal{O}_{S, s} = \Lambda_{\Lambda \cap \mathfrak m'}$ is
+a G-ring. Hence by More on Algebra, Proposition
+\ref{more-algebra-proposition-finite-type-over-G-ring}
+the ring $\Lambda[x_1, \ldots, x_s]_{\mathfrak m'}$ is a G-ring.
+Hence by Smoothing Ring Maps, Theorem
+\ref{smoothing-theorem-approximation-property-variant}
+there exist an \'etale ring map
+$$
+\Lambda[x_1, \ldots, x_s]_{\mathfrak m'} \to B,
+$$
+a maximal ideal $\mathfrak m_B$ of $B$ lying over $\mathfrak m'$, and
+elements $a'_i, b'_i, c'_{ij}, k'_{ij} \in B$ such that
+\begin{enumerate}
+\item $\kappa(\mathfrak m') = \kappa(\mathfrak m_B)$ which implies
+that $\Lambda[x_1, \ldots, x_s]_{\mathfrak m'} \subset B_{\mathfrak m_B}
+\subset P$ and $P$ is identified with the completion of $B$ at
+$\mathfrak m_B$, see remark preceding Smoothing Ring Maps, Theorem
+\ref{smoothing-theorem-approximation-property-variant},
+\item $a_i - a'_i, b_i - b'_i, c_{ij} - c'_{ij}, k_{ij} - k'_{ij} \in
+(\mathfrak m')^N P$, and
+\item $f_j(a'_1, \ldots, a'_u) = \sum c'_{ji} b'_i$ and $\sum k'_{ij}b'_j = 0$.
+\end{enumerate}
+Set $A = B/(b'_1, \ldots, b'_r)$ and denote $\mathfrak m_A$ the
+image of $\mathfrak m_B$ in $A$. (Note that $A$ is essentially of finite
+type over $\Lambda$; at the end of the proof we will show how to obtain
+an $A$ which is of finite type over $\Lambda$.) There is a ring map
+$C \to A$ sending $y_i \mapsto a'_i$ because the $a'_i$ satisfy
+the desired equations modulo $(b'_1, \ldots, b'_r)$.
+Note that $A/\mathfrak m_A^N = R/\mathfrak m_R^N$ as quotients of
+$P = B^\wedge$ by property (2) above. Set $x_A = x_C|_{\Spec(A)}$.
+Since the maps
+$$
+C \to A \to A/\mathfrak m_A^N \cong R/\mathfrak m_R^N
+\quad\text{and}\quad
+C \to R \to R/\mathfrak m_R^N
+$$
+are equal we see that $x_A$ and $x$ agree modulo $\mathfrak m_R^N$
+via the isomorphism $A/\mathfrak m_A^N = R/\mathfrak m_R^N$. At this
+point we have shown properties (1) -- (5) of the statement of the lemma.
+To see (6) note that
+$$
+P^{\oplus t} \xrightarrow{K}
+P^{\oplus r} \xrightarrow{(b_1, \ldots, b_r)}
+P
+\quad\text{and}\quad
+P^{\oplus t} \xrightarrow{K'}
+P^{\oplus r} \xrightarrow{(b'_1, \ldots, b'_r)}
+P
+$$
+are two complexes of $P$-modules which are congruent modulo
+$(\mathfrak m')^N$ with the first one being exact. By our choice of $N$
+above we see from
+More on Algebra, Lemma \ref{more-algebra-lemma-approximate-complex-graded}
+that $R = P/(b_1, \ldots, b_r)$ and
+$P/(b'_1, \ldots, b'_r) = B^\wedge/(b'_1, \ldots, b'_r) = A^\wedge$
+have isomorphic associated graded algebras, which is what we wanted to show.
+
+\medskip\noindent
This last paragraph of the proof serves to clean up the issue that $A$ is
essentially of finite type over $\Lambda$ and not yet of finite type.
+The construction above gives $A = B/(b'_1, \ldots, b'_r)$ and
+$\mathfrak m_A \subset A$ with $B$ \'etale over
+$\Lambda[x_1, \ldots, x_s]_{\mathfrak m'}$. Hence $A$ is of finite
+type over the Noetherian ring $\Lambda[x_1, \ldots, x_s]_{\mathfrak m'}$.
Thus we can write $A = (A_0)_{\mathfrak m'}$ for some finite type
$\Lambda[x_1, \ldots, x_s]$-algebra $A_0$. Then
$A = \colim (A_0)_f$ where
$f \in \Lambda[x_1, \ldots, x_s] \setminus \mathfrak m'$, see
+Algebra, Lemma \ref{algebra-lemma-localization-colimit}.
+Because $p : \mathcal{X} \to (\Sch/S)_{fppf}$ is limit preserving on
+objects, we see that
+$x_A$ comes from some object $x_{(A_0)_f}$ over $\Spec((A_0)_f)$ for
+an $f$ as above. After replacing $A$ by $(A_0)_f$ and $x_A$ by
+$x_{(A_0)_f}$ and $\mathfrak m_A$ by $(A_0)_f \cap \mathfrak m_A$
+the proof is finished.
+\end{proof}
+
+
+
+
+
+\section{Limit preserving}
+\label{section-limits}
+
+\noindent
+The morphism $p : \mathcal{X} \to (\Sch/S)_{fppf}$ is limit preserving
+on objects, as defined in Criteria for Representability, Section
+\ref{criteria-section-limit-preserving}, if the functor of the definition
+below is essentially surjective. However, the example
+in Examples, Section \ref{examples-section-limit-preserving}
+shows that this isn't equivalent to being limit preserving.
+
+\begin{definition}
+\label{definition-limit-preserving}
+Let $S$ be a scheme. Let $\mathcal{X}$ be a category fibred in groupoids
+over $(\Sch/S)_{fppf}$. We say $\mathcal{X}$ is {\it limit preserving}
+if for every affine scheme $T$ over $S$ which is a limit $T = \lim T_i$
+of a directed inverse system of affine schemes $T_i$ over $S$, we have
+an equivalence
+$$
+\colim \mathcal{X}_{T_i} \longrightarrow \mathcal{X}_T
+$$
+of fibre categories.
+\end{definition}
+
+\noindent
+We spell out what this means. First, given objects $x, y$ of $\mathcal{X}$
+over $T_i$ we should have
+$$
+\Mor_{\mathcal{X}_T}(x|_T, y|_T) =
+\colim_{i' \geq i} \Mor_{\mathcal{X}_{T_{i'}}}(x|_{T_{i'}}, y|_{T_{i'}})
+$$
and second, every object of $\mathcal{X}_T$ is isomorphic to the restriction
+of an object over $T_i$ for some $i$. Note that the first condition means
+that the presheaves $\mathit{Isom}_\mathcal{X}(x, y)$ (see
+Stacks, Definition \ref{stacks-definition-mor-presheaf})
+are limit preserving.
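\medskip\noindent
For example, if $\mathcal{X}$ is representable by a scheme $X$ over $S$,
then the fibre category $\mathcal{X}_T$ is just the set $\Mor_S(T, X)$
and the condition of Definition \ref{definition-limit-preserving}
becomes the bijectivity of
$$
\colim \Mor_S(T_i, X) \longrightarrow \Mor_S(T, X)
$$
for all directed limits $T = \lim T_i$ of affine schemes over $S$,
i.e., the usual functorial characterization of $X \to S$ being
locally of finite presentation. Compare with
Lemma \ref{lemma-limit-preserving-algebraic-space} below.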
+
+\begin{lemma}
+\label{lemma-fibre-product-limit-preserving}
+Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$ and
+$q : \mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$.
+\begin{enumerate}
+\item If $\mathcal{X} \to (\Sch/S)_{fppf}$ and
+$\mathcal{Z} \to (\Sch/S)_{fppf}$ are limit preserving on objects and
+$\mathcal{Y}$ is limit preserving, then
+$\mathcal{X} \times_\mathcal{Y} \mathcal{Z} \to (\Sch/S)_{fppf}$ is
+limit preserving on objects.
+\item If $\mathcal{X}$, $\mathcal{Y}$,
+and $\mathcal{Z}$ are limit preserving, then so
+is $\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is formal. Proof of (1). Let $T = \lim_{i \in I} T_i$ be the directed
+limit of affine schemes $T_i$ over $S$. We will prove that the functor
+$\colim \mathcal{X}_{T_i} \to \mathcal{X}_T$ is essentially surjective.
+Recall that an object of the fibre product over $T$ is a quadruple
+$(T, x, z, \alpha)$ where $x$ is an object of $\mathcal{X}$ lying over $T$,
+$z$ is an object of $\mathcal{Z}$ lying over $T$, and
+$\alpha : p(x) \to q(z)$ is a morphism in the fibre category of
+$\mathcal{Y}$ over $T$. By assumption on $\mathcal{X}$ and $\mathcal{Z}$
+we can find an $i$ and objects $x_i$ and $z_i$ over $T_i$ such that
$x_i|_T \cong x$ and $z_i|_T \cong z$. Then $\alpha$ corresponds to
+an isomorphism $p(x_i)|_T \to q(z_i)|_T$ which comes from an isomorphism
+$\alpha_{i'} : p(x_i)|_{T_{i'}} \to q(z_i)|_{T_{i'}}$ by our assumption on
+$\mathcal{Y}$. After replacing $i$ by $i'$, $x_i$ by $x_i|_{T_{i'}}$, and
+$z_i$ by $z_i|_{T_{i'}}$ we see that $(T_i, x_i, z_i, \alpha_i)$
+is an object of the fibre product over $T_i$ which restricts to
+an object isomorphic to $(T, x, z, \alpha)$ over $T$ as desired.
+
+\medskip\noindent
+We omit the arguments showing that $\colim \mathcal{X}_{T_i} \to \mathcal{X}_T$
+is fully faithful in (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-preserving-algebraic-space}
+Let $S$ be a scheme. Let $\mathcal{X}$ be an algebraic stack over $S$.
+Then the following are equivalent
+\begin{enumerate}
+\item $\mathcal{X}$ is a stack in setoids and
+$\mathcal{X} \to (\Sch/S)_{fppf}$ is limit preserving on objects,
+\item $\mathcal{X}$ is a stack in setoids and limit preserving,
+\item $\mathcal{X}$ is representable by an algebraic space
+locally of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Under each of the three assumptions $\mathcal{X}$ is representable
+by an algebraic space $X$ over $S$, see Algebraic Stacks, Proposition
+\ref{algebraic-proposition-algebraic-stack-no-automorphisms}.
+It is clear that (1) and (2) are equivalent as a functor between
+setoids is an equivalence if and only if it is surjective on isomorphism
+classes. Finally, (1) and (3) are equivalent by
+Limits of Spaces, Proposition
+\ref{spaces-limits-proposition-characterize-locally-finite-presentation}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal}
+Let $S$ be a scheme. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. Assume
+$\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$ is
+representable by algebraic spaces and $\mathcal{X}$ is limit preserving.
+Then $\Delta$ is locally of finite type.
+\end{lemma}
+
+\begin{proof}
+We apply Criteria for Representability, Lemma
+\ref{criteria-lemma-check-property-limit-preserving}.
Let $V$ be an affine scheme locally of finite presentation over $S$
+and let $\theta$ be an object of $\mathcal{X} \times \mathcal{X}$
+over $V$. Let $F_\theta$ be an algebraic space representing
+$\mathcal{X} \times_{\Delta, \mathcal{X} \times \mathcal{X}, \theta}
+(\Sch/V)_{fppf}$ and let $f_\theta : F_\theta \to V$ be the canonical morphism
+(see Algebraic Stacks, Section
+\ref{algebraic-section-morphisms-representable-by-algebraic-spaces}).
+It suffices to show that
+$F_\theta \to V$ has the corresponding properties. By
+Lemmas \ref{lemma-fibre-product-limit-preserving} and
+\ref{lemma-limit-preserving-algebraic-space}
+we see that $F_\theta \to S$ is locally of finite presentation.
+It follows that $F_\theta \to V$ is locally of finite type
+by Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-permanence-finite-type}.
+\end{proof}
+
+
+
+
+
+
+\section{Versality}
+\label{section-versality}
+
+\noindent
+In the previous section we explained how to approximate objects over
+complete local rings by algebraic objects. But in order to show that
+a stack $\mathcal{X}$ is an algebraic stack, we need to find smooth
+$1$-morphisms from schemes towards $\mathcal{X}$. Since we are not going
+to assume a priori that $\mathcal{X}$ has a representable diagonal, we
+cannot even speak about smooth morphisms towards $\mathcal{X}$. Instead,
+borrowing terminology from deformation theory, we will introduce versal
+objects.
+
+\begin{definition}
+\label{definition-versal-formal-object}
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Let $\xi = (R, \xi_n, f_n)$ be a formal object. Set $k = R/\mathfrak m$ and
+$x_0 = \xi_1$. We will say that $\xi$ is {\it versal} if $\xi$
+as a formal object of $\mathcal{F}_{\mathcal{X}, k, x_0}$
+(Remark \ref{remark-formal-objects-match}) is versal in the sense
+of Formal Deformation Theory, Definition \ref{formal-defos-definition-versal}.
+\end{definition}
+
+\noindent
+We briefly spell out what this means. With notation as in the definition,
+suppose given morphisms $\xi_1 = x_0 \to y \to z$ of $\mathcal{X}$ lying over
+closed immersions
+$\Spec(k) \to \Spec(A) \to \Spec(B)$
+where $A, B$ are Artinian local rings with residue field $k$.
+Suppose given an $n \geq 1$ and a commutative diagram
+$$
+\vcenter{
+\xymatrix{
+& y \ar[ld] \\
+\xi_n & \xi_1 \ar[u] \ar[l]
+}
+}
+\quad\text{lying over}\quad
+\vcenter{
+\xymatrix{
+& \Spec(A) \ar[ld] \\
+\Spec(R/\mathfrak m^n) & \Spec(k) \ar[u] \ar[l]
+}
+}
+$$
+Versality means that for any data as above
+there exists an $m \geq n$ and a commutative diagram
+$$
+\vcenter{
+\xymatrix{
+& & z \ar[lldd] \\
+& & y \ar[ld] \ar[u] \\
+\xi_m & \xi_n \ar[l] & \xi_1 \ar[u] \ar[l]
+}
+}
\quad\text{lying over}\quad
+\vcenter{
+\xymatrix{
+& & \Spec(B) \ar[lldd] \\
+& & \Spec(A) \ar[ld] \ar[u] \\
+\Spec(R/\mathfrak m^m) &
+\Spec(R/\mathfrak m^n) \ar[l] &
+\Spec(k) \ar[u] \ar[l]
+}
+}
+$$
+Please compare with Formal Deformation Theory, Remark
+\ref{formal-defos-remark-versal-object}.
+
+\medskip\noindent
+Let $S$ be a locally Noetherian scheme. Let $U$ be a scheme over $S$
+with structure morphism $U \to S$ locally of finite type. Let
+$u_0 \in U$ be a finite type point of $U$, see
+Morphisms, Definition \ref{morphisms-definition-finite-type-point}.
+Set $k = \kappa(u_0)$.
+Note that the composition $\Spec(k) \to S$ is also of finite type,
+see Morphisms, Lemma \ref{morphisms-lemma-composition-finite-type}.
+Let $p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Let $x$ be an object of $\mathcal{X}$ which lies over $U$. Denote $x_0$
+the pullback of $x$ by $u_0$. By the $2$-Yoneda lemma $x$ corresponds
+to a $1$-morphism
+$$
+x : (\Sch/U)_{fppf} \longrightarrow \mathcal{X},
+$$
+see Algebraic Stacks, Section \ref{algebraic-section-2-yoneda}. We obtain a
+morphism of predeformation categories
+\begin{equation}
+\label{equation-hat-x}
+\hat x :
+\mathcal{F}_{(\Sch/U)_{fppf}, k, u_0}
+\longrightarrow
+\mathcal{F}_{\mathcal{X}, k, x_0},
+\end{equation}
+over $\mathcal{C}_\Lambda$ see (\ref{equation-functoriality}).
+
+\begin{definition}
+\label{definition-versal}
+Let $S$ be a locally Noetherian scheme.
+Let $\mathcal{X}$ be fibred in groupoids over $(\Sch/S)_{fppf}$.
+Let $U$ be a scheme locally of finite type over $S$.
+Let $x$ be an object of $\mathcal{X}$ lying over $U$.
Let $u_0$ be a finite type point of $U$.
+We say $x$ is {\it versal} at $u_0$ if the morphism $\hat x$
+(\ref{equation-hat-x}) is smooth, see Formal Deformation Theory, Definition
+\ref{formal-defos-definition-smooth-morphism}.
+\end{definition}
+
+\noindent
+This definition matches our notion of versality for formal objects of
+$\mathcal{X}$.
+
+\begin{lemma}
+\label{lemma-versality-matches}
+With notation as in Definition \ref{definition-versal}.
+Let $R = \mathcal{O}_{U, u_0}^\wedge$.
+Let $\xi$ be the formal object of $\mathcal{X}$
+over $R$ associated to $x|_{\Spec(R)}$, see (\ref{equation-approximation}).
+Then
+$$
+x\text{ is versal at }u_0
+\Leftrightarrow
+\xi\text{ is versal}
+$$
+\end{lemma}
+
+\begin{proof}
+Observe that $\mathcal{O}_{U, u_0}$ is a Noetherian local $S$-algebra
+with residue field $k$. Hence $R = \mathcal{O}_{U, u_0}^\wedge$ is an object of
+$\mathcal{C}_\Lambda^\wedge$, see Formal Deformation Theory, Definition
+\ref{formal-defos-definition-completion-CLambda}.
+Recall that $\xi$ is versal if
+$\underline{\xi} : \underline{R}|_{\mathcal{C}_\Lambda} \to
+\mathcal{F}_{\mathcal{X}, k, x_0}$
+is smooth and $x$ is versal at $u_0$ if
+$\hat x : \mathcal{F}_{(\Sch/U)_{fppf}, k, u_0}
+\to \mathcal{F}_{\mathcal{X}, k, x_0}$ is smooth.
+There is an identification of predeformation categories
+$$
+\underline{R}|_{\mathcal{C}_\Lambda}
+=
+\mathcal{F}_{(\Sch/U)_{fppf}, k, u_0},
+$$
+see Formal Deformation Theory, Remark
+\ref{formal-defos-remark-formal-objects-yoneda} for notation.
+Namely, given an Artinian local $S$-algebra $A$ with residue field
+identified with $k$ we have
+$$
+\Mor_{\mathcal{C}_\Lambda^\wedge}(R, A) =
+\{\varphi \in \Mor_S(\Spec(A), U) \mid \varphi|_{\Spec(k)} = u_0\}
+$$
+Unwinding the definitions the reader verifies that the resulting map
+$$
+\underline{R}|_{\mathcal{C}_\Lambda} =
+\mathcal{F}_{(\Sch/U)_{fppf}, k, u_0}
+\xrightarrow{\hat x}
+\mathcal{F}_{\mathcal{X}, k, x_0},
+$$
+is equal to $\underline{\xi}$ and we see that the lemma is true.
+\end{proof}
+
+\noindent
+Here is a sanity check.
+
+\begin{lemma}
+\label{lemma-versal-implies-smooth}
+Let $S$ be a locally Noetherian scheme. Let $f : U \to V$
+be a morphism of schemes locally of finite type over $S$.
+Let $u_0 \in U$ be a finite type point. The following are equivalent
+\begin{enumerate}
+\item $f$ is smooth at $u_0$,
+\item $f$ viewed as an object of $(\Sch/V)_{fppf}$ over $U$ is
+versal at $u_0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is a restatement of More on Morphisms, Lemma
+\ref{more-morphisms-lemma-lifting-along-artinian-at-point}.
+\end{proof}
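\noindent
Roughly speaking, the lemma cited in the proof characterizes smoothness
at the finite type point $u_0$ by an infinitesimal lifting property:
for every surjection $B \to A$ of Artinian local rings whose residue
field is of finite type over $S$ and every commutative solid diagram
$$
\xymatrix{
\Spec(A) \ar[r] \ar[d] & U \ar[d]^f \\
\Spec(B) \ar[r] \ar@{..>}[ru] & V
}
$$
such that $\Spec(A) \to U$ maps the closed point to $u_0$, a dotted
arrow exists making the diagram commute. Versality of $f$ at $u_0$ as in
Definition \ref{definition-versal} asks for exactly these lifts.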
+
+\noindent
+It turns out that this notion is well behaved with respect to field
+extensions.
+
+\begin{lemma}
+\label{lemma-versal-change-of-field}
+Let $S$, $\mathcal{X}$, $U$, $x$, $u_0$ be as in
+Definition \ref{definition-versal}. Let $l$ be a field and let
+$u_{l, 0} : \Spec(l) \to U$ be a morphism with image $u_0$ such that
$l/k$ is finite, where $k = \kappa(u_0)$. Set $x_{l, 0} = x_0|_{\Spec(l)}$.
+If $\mathcal{X}$ satisfies (RS) and $x$ is versal at $u_0$, then
+$$
+\mathcal{F}_{(\Sch/U)_{fppf}, l, u_{l, 0}}
+\longrightarrow
+\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}
+$$
+is smooth.
+\end{lemma}
+
+\begin{proof}
+Note that $(\Sch/U)_{fppf}$ satisfies (RS) by
+Lemma \ref{lemma-algebraic-stack-RS}.
+Hence the functor of the lemma is the functor
+$$
+(\mathcal{F}_{(\Sch/U)_{fppf}, k , u_0})_{l/k}
+\longrightarrow
+(\mathcal{F}_{\mathcal{X}, k , x_0})_{l/k}
+$$
+associated to $\hat x$, see Lemma \ref{lemma-change-of-field}.
+Hence the lemma follows from
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-change-of-fields-smooth}.
+\end{proof}
+
+\noindent
+The following lemma is another sanity check. It more or less
+signifies that if $x$ is versal at $u_0$ as in
+Definition \ref{definition-versal},
+then $x$ viewed as a morphism from $U$ to $\mathcal{X}$ is
+smooth whenever we make a base change by a scheme.
+
+\begin{lemma}
+\label{lemma-base-change-versal}
+Let $S$, $\mathcal{X}$, $U$, $x$, $u_0$ be as in
+Definition \ref{definition-versal}. Assume
+\begin{enumerate}
+\item $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces,
+\item $\Delta$ is locally of finite type
+(for example if $\mathcal{X}$ is limit preserving), and
+\item $\mathcal{X}$ has (RS).
+\end{enumerate}
+Let $V$ be a scheme locally of finite type over $S$
+and let $y$ be an object of $\mathcal{X}$ over $V$.
+Form the $2$-fibre product
+$$
+\xymatrix{
+\mathcal{Z} \ar[r] \ar[d] & (\Sch/U)_{fppf} \ar[d]^x \\
+(\Sch/V)_{fppf} \ar[r]^y & \mathcal{X}
+}
+$$
+Let $Z$ be the algebraic space representing $\mathcal{Z}$
+and let $z_0 \in |Z|$ be a finite type point lying over $u_0$.
+If $x$ is versal at $u_0$, then
+the morphism $Z \to V$ is smooth at $z_0$.
+\end{lemma}
+
+\begin{proof}
+(The parenthetical remark in the statement holds by
+Lemma \ref{lemma-diagonal}.)
+Observe that $Z$ exists by assumption (1) and Algebraic Stacks, Lemma
+\ref{algebraic-lemma-representable-diagonal}. By assumption (2) we see that
+$Z \to V \times_S U$ is locally of finite type.
+Choose a scheme $W$, a closed point $w_0 \in W$, and
+an \'etale morphism $W \to Z$ mapping $w_0$ to $z_0$, see
+Morphisms of Spaces, Definition
+\ref{spaces-morphisms-definition-finite-type-point}.
+Then $W$ is locally of finite type over $S$ and
+$w_0$ is a finite type point of $W$.
+Let $l = \kappa(z_0)$. Denote $z_{l, 0}$, $v_{l, 0}$,
+$u_{l, 0}$, and $x_{l, 0}$ the objects of
+$\mathcal{Z}$, $(\Sch/V)_{fppf}$, $(\Sch/U)_{fppf}$,
+and $\mathcal{X}$ over $\Spec(l)$ obtained by pullback to $\Spec(l) = w_0$.
+Consider
+$$
+\xymatrix{
+\mathcal{F}_{(\Sch/W)_{fppf}, l, w_0} \ar[r] &
+\mathcal{F}_{\mathcal{Z}, l, z_{l, 0}} \ar[d] \ar[r] &
+\mathcal{F}_{(\Sch/U)_{fppf}, l, u_{l, 0}} \ar[d] \\
+& \mathcal{F}_{(\Sch/V)_{fppf}, l, v_{l, 0}} \ar[r] &
+\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}
+}
+$$
+By Lemma \ref{lemma-fibre-product-deformation-categories}
+the square is a fibre product of predeformation categories.
+By Lemma \ref{lemma-versal-change-of-field}
+we see that the right vertical arrow is smooth.
+By Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-smooth-properties}
+the left vertical arrow is smooth.
+By Lemma \ref{lemma-formally-smooth-on-deformation-categories}
+we see that the left horizontal arrow is smooth.
+We conclude that the map
+$$
+\mathcal{F}_{(\Sch/W)_{fppf}, l, w_0} \to
+\mathcal{F}_{(\Sch/V)_{fppf}, l, v_{l, 0}}
+$$
+is smooth by Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-smooth-properties}.
+Thus we conclude that $W \to V$ is smooth at $w_0$ by
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-lifting-along-artinian-at-point}.
+This exactly means that $Z \to V$ is smooth at $z_0$
+and the proof is complete.
+\end{proof}
+
+\noindent
+We restate the approximation result in terms of
+versal objects.
+
+\begin{lemma}
+\label{lemma-approximate-versal}
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Let $\xi = (R, \xi_n, f_n)$ be a formal object of $\mathcal{X}$ with
+$\xi_1$ lying over $\Spec(k) \to S$ with image $s \in S$. Assume
+\begin{enumerate}
+\item $\xi$ is versal,
+\item $\xi$ is effective,
+\item $\mathcal{O}_{S, s}$ is a G-ring, and
+\item $p : \mathcal{X} \to (\Sch/S)_{fppf}$ is limit preserving on objects.
+\end{enumerate}
+Then there exist a morphism of finite type $U \to S$, a finite type
+point $u_0 \in U$ with residue field $k$, and an object $x$ of $\mathcal{X}$
+over $U$ such that $x$ is versal at $u_0$ and such that
+$x|_{\Spec(\mathcal{O}_{U, u_0}/\mathfrak m_{u_0}^n)} \cong \xi_n$.
+\end{lemma}
+
+\begin{proof}
+Choose an object $x_R$ of $\mathcal{X}$ lying over $\Spec(R)$ whose associated
+formal object is $\xi$. Let $N = 2$ and apply Lemma \ref{lemma-approximate}.
+We obtain $A, \mathfrak m_A, x_A, \ldots$.
+Let $\eta = (A^\wedge, \eta_n, g_n)$ be the formal object associated to
+$x_A|_{\Spec(A^\wedge)}$. We have a diagram
+$$
+\vcenter{
+\xymatrix{
+& \eta \ar[d] \\
+\xi \ar[r] \ar@{..>}[ru] & \xi_2 = \eta_2
+}
+}
+\quad\text{lying over}\quad
+\vcenter{
+\xymatrix{
+& A^\wedge \ar[d] \\
+R \ar[r] \ar@{..>}[ru] & R/\mathfrak m_R^2 = A/\mathfrak m_A^2
+}
+}
+$$
+The versality of $\xi$ means exactly that we can find the
+dotted arrows in the diagrams, because we can successively find
+morphisms $\xi \to \eta_3$, $\xi \to \eta_4$, and so on by
+Formal Deformation Theory, Remark \ref{formal-defos-remark-versal-object}.
+The corresponding ring map $R \to A^\wedge$ is surjective by
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-surjective-cotangent-space}.
+On the other hand, we have
+$\dim_k \mathfrak m_R^n/\mathfrak m_R^{n + 1} =
+\dim_k \mathfrak m_A^n/\mathfrak m_A^{n + 1}$ for all $n$ by construction.
+Hence $R/\mathfrak m_R^n$ and $A/\mathfrak m_A^n$ have the same (finite)
+length as $\Lambda$-modules by additivity of length and
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-length}.
+It follows that $R/\mathfrak m_R^n \to A/\mathfrak m_A^n$ is an isomorphism
+for all $n$, hence $R \to A^\wedge$ is an isomorphism. Thus $\eta$ is
+isomorphic to a versal object, hence versal itself. By
+Lemma \ref{lemma-versality-matches}
+we conclude that $x_A$ is versal at the point $u_0$ of
+$U = \Spec(A)$ corresponding to $\mathfrak m_A$.
+\end{proof}
+
+\begin{example}
+\label{example-approximate-versal-implies}
+In this example we show that the local ring $\mathcal{O}_{S, s}$ has to be
+a G-ring in order for the result of Lemma \ref{lemma-approximate-versal} to
+be true. Namely, let $\Lambda$ be a Noetherian ring and let $\mathfrak m$
+be a maximal ideal of $\Lambda$. Set $R = \Lambda_\mathfrak m^\wedge$. Let
+$\Lambda \to C \to R$ be a factorization with $C$ of finite type over
+$\Lambda$. Set $S = \Spec(\Lambda)$, $U = S \setminus \{\mathfrak m\}$, and
+$S' = U \amalg \Spec(C)$. Consider the functor
+$F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ defined by the rule
+$$
+F(T) =
+\left\{
+\begin{matrix}
+* & \text{if }T \to S\text{ factors through }S' \\
+\emptyset & \text{else}
+\end{matrix}
+\right.
+$$
Let $\mathcal{X} = \mathcal{S}_F$ be the category fibred in sets associated
+to $F$, see Algebraic Stacks, Section \ref{algebraic-section-split}.
+Then $\mathcal{X} \to (\Sch/S)_{fppf}$ is limit preserving on objects and
+there exists an effective, versal formal object $\xi$ over $R$.
+Hence if the conclusion of Lemma \ref{lemma-approximate-versal} holds
+for $\mathcal{X}$, then there exists a finite type ring map $\Lambda \to A$
+and a maximal ideal $\mathfrak m_A$ lying over $\mathfrak m$ such that
+\begin{enumerate}
+\item $\kappa(\mathfrak m) = \kappa(\mathfrak m_A)$,
+\item $\Lambda \to A$ and $\mathfrak m_A$ satisfy condition (4) of
+Algebra, Lemma \ref{algebra-lemma-smooth-test-artinian}, and
+\item there exists a $\Lambda$-algebra map $C \to A$.
+\end{enumerate}
+Thus $\Lambda \to A$ is smooth at $\mathfrak m_A$ by the lemma cited.
+Slicing $A$ we may assume that $\Lambda \to A$ is \'etale at
+$\mathfrak m_A$, see for example
+More on Morphisms, Lemma \ref{more-morphisms-lemma-slice-smooth}
+or argue directly. Write $C = \Lambda[y_1, \ldots, y_n]/(f_1, \ldots, f_m)$.
+Then $C \to R$ corresponds to a solution in $R$ of the system of equations
+$f_1 = \ldots = f_m = 0$, see Smoothing Ring Maps, Section
+\ref{smoothing-section-approximation-G-rings}.
+Thus if the conclusion of
+Lemma \ref{lemma-approximate-versal} holds for every $\mathcal{X}$ as
+above, then a system of equations which has a solution in $R$ has a
+solution in the henselization of $\Lambda_{\mathfrak m}$.
+In other words, the approximation property holds for
+$\Lambda_{\mathfrak m}^h$. This implies that $\Lambda_{\mathfrak m}^h$
+is a G-ring (insert future reference here; see also discussion in
+Smoothing Ring Maps, Section \ref{smoothing-section-introduction})
+which in turn implies that $\Lambda_{\mathfrak m}$ is a G-ring.
+\end{example}
+
+
+
+
+
+
+\section{Openness of versality}
+\label{section-openness-versality}
+
+\noindent
+Next, we come to openness of versality.
+
+\begin{definition}
+\label{definition-openness-versality}
+Let $S$ be a locally Noetherian scheme.
+\begin{enumerate}
+\item Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. We say $\mathcal{X}$ satisfies
+{\it openness of versality} if given a scheme $U$ locally of finite type
+over $S$, an object $x$ of $\mathcal{X}$ over $U$, and a finite type point
+$u_0 \in U$ such that $x$ is versal at $u_0$, then there exists an open
+neighbourhood $u_0 \in U' \subset U$ such that $x$ is versal at every finite
+type point of $U'$.
+\item Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. We say $f$ satisfies
{\it openness of versality} if, given a scheme $U$ locally of finite type
over $S$ and an object $y$ of $\mathcal{Y}$ over $U$, openness
of versality holds for
$(\Sch/U)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Openness of versality is often the hardest to check. The following example
+shows that requiring this is necessary however.
+
+\begin{example}
+\label{example-versality}
+Let $k$ be a field and set $\Lambda = k[s, t]$. Consider the functor
+$F : \Lambda\text{-algebras} \longrightarrow \textit{Sets}$
+defined by the rule
+$$
+F(A) =
+\left\{
+\begin{matrix}
+* & \text{if there exist }f_1, \ldots, f_n \in A\text{ such that } \\
+ & A = (s, t, f_1, \ldots, f_n)\text{ and } f_i s = 0\ \forall i \\
+\emptyset & \text{else}
+\end{matrix}
+\right.
+$$
+Geometrically $F(A) = *$ means there exists a quasi-compact open neighbourhood
+$W$ of $V(s, t) \subset \Spec(A)$ such that $s|_W = 0$.
+Let $\mathcal{X} \subset (\Sch/\Spec(\Lambda))_{fppf}$ be the full
+subcategory consisting of schemes $T$ which have an affine open covering
+$T = \bigcup \Spec(A_j)$ with $F(A_j) = *$ for all $j$. Then $\mathcal{X}$
+satisfies [0], [1], [2], [3], and [4] but not [5]. Namely, over
+$U = \Spec(k[s, t]/(s))$
+there exists an object $x$ which is versal at $u_0 = (s, t)$ but not
+at any other point. Details omitted.
+\end{example}
+
+\noindent
+Let $S$ be a locally Noetherian scheme.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. Consider the following property
+\begin{equation}
+\label{equation-smooth}
+\begin{matrix}
+\text{for all fields }k\text{ of finite type over }S
+\text{ and all }x_0 \in \Ob(\mathcal{X}_{\Spec(k)})\text{ the}\\
+\text{map }
+\mathcal{F}_{\mathcal{X}, k, x_0} \to \mathcal{F}_{\mathcal{Y}, k, f(x_0)}
+\text{ of predeformation categories is smooth}
+\end{matrix}
+\end{equation}
+We formulate some lemmas around this concept. First we link it with
+(openness of) versality.
+
+\begin{lemma}
+\label{lemma-versal-smooth}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. Let $U$ be a scheme locally
+of finite type over $S$. Let $x$ be an object of $\mathcal{X}$ over $U$.
+Assume that $x$ is versal at every finite type point of $U$ and that
+$\mathcal{X}$ satisfies (RS). Then $x : (\Sch/U)_{fppf} \to \mathcal{X}$
+satisfies (\ref{equation-smooth}).
+\end{lemma}
+
+\begin{proof}
+Let $\Spec(l) \to U$ be a morphism with $l$ of finite type over $S$.
+Then the image $u_0 \in U$ is a finite type point of $U$ and
+$l/\kappa(u_0)$ is a finite extension, see discussion in
+Morphisms, Section \ref{morphisms-section-points-finite-type}.
+Hence we see that
+$\mathcal{F}_{(\Sch/U)_{fppf}, l, u_{l, 0}} \to
+\mathcal{F}_{\mathcal{X}, l, x_{l, 0}}$
+is smooth by Lemma \ref{lemma-versal-change-of-field}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-smooth}
+Let $S$ be a locally Noetherian scheme. Let $f : \mathcal{X} \to \mathcal{Y}$
+and $g : \mathcal{Y} \to \mathcal{Z}$ be composable $1$-morphisms of
+categories fibred in groupoids over $(\Sch/S)_{fppf}$. If $f$ and $g$
+satisfy (\ref{equation-smooth}) so does $g \circ f$.
+\end{lemma}
+
+\begin{proof}
+This follows formally from Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-smooth-properties}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-smooth}
+Let $S$ be a locally Noetherian scheme. Let $f : \mathcal{X} \to \mathcal{Y}$
+and $\mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms of
+categories fibred in groupoids over $(\Sch/S)_{fppf}$. If $f$
+satisfies (\ref{equation-smooth}) so does the projection
+$\mathcal{X} \times_\mathcal{Y} \mathcal{Z} \to \mathcal{Z}$.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+Lemma \ref{lemma-fibre-product-deformation-categories}
+and
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-smooth-properties}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-smooth}
+Let $S$ be a locally Noetherian scheme. Let $f : \mathcal{X} \to \mathcal{Y}$
be a $1$-morphism of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $f$ is formally smooth on objects, then $f$ satisfies
+(\ref{equation-smooth}). If $f$ is representable by algebraic spaces
+and smooth, then $f$ satisfies (\ref{equation-smooth}).
+\end{lemma}
+
+\begin{proof}
+A reformulation of Lemma \ref{lemma-formally-smooth-on-deformation-categories}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-implies-smooth}
+Let $S$ be a locally Noetherian scheme. Let $f : \mathcal{X} \to \mathcal{Y}$
+be a $1$-morphism of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+Assume
+\begin{enumerate}
+\item $f$ is representable by algebraic spaces,
+\item $f$ satisfies (\ref{equation-smooth}),
+\item $\mathcal{X} \to (\Sch/S)_{fppf}$ is limit preserving on objects, and
+\item $\mathcal{Y}$ is limit preserving.
+\end{enumerate}
+Then $f$ is smooth.
+\end{lemma}
+
+\begin{proof}
+The key ingredient of the proof is More on Morphisms, Lemma
+\ref{more-morphisms-lemma-lifting-along-artinian-at-point}
+which (almost) says that a morphism of schemes of finite type over $S$
+satisfying (\ref{equation-smooth}) is a smooth morphism. The other
+arguments of the proof are essentially bookkeeping.
+
+\medskip\noindent
+Let $V$ be a scheme over $S$ and let $y$ be an object of $\mathcal{Y}$ over
+$V$. Let $Z$ be an algebraic space representing the $2$-fibre product
$\mathcal{Z} = \mathcal{X} \times_{f, \mathcal{Y}, y} (\Sch/V)_{fppf}$.
+We have to show that the projection morphism $Z \to V$ is smooth, see
+Algebraic Stacks, Definition
+\ref{algebraic-definition-relative-representable-property}.
+In fact, it suffices to do this when $V$ is an affine scheme
+locally of finite presentation over $S$, see
+Criteria for Representability, Lemma
+\ref{criteria-lemma-check-property-limit-preserving}.
+Then $(\Sch/V)_{fppf}$ is limit preserving by
+Lemma \ref{lemma-limit-preserving-algebraic-space}.
+Hence $Z \to S$ is locally of finite presentation by
+Lemmas \ref{lemma-fibre-product-limit-preserving} and
+\ref{lemma-limit-preserving-algebraic-space}.
+Choose a scheme $W$ and a surjective \'etale morphism $W \to Z$.
+Then $W$ is locally of finite presentation over $S$.
+
+\medskip\noindent
+Since $f$ satisfies (\ref{equation-smooth}) we see that so does
+$\mathcal{Z} \to (\Sch/V)_{fppf}$, see Lemma \ref{lemma-base-change-smooth}.
+Next, we see that $(\Sch/W)_{fppf} \to \mathcal{Z}$ satisfies
+(\ref{equation-smooth}) by Lemma \ref{lemma-smooth-smooth}.
+Thus the composition
+$$
+(\Sch/W)_{fppf} \to \mathcal{Z} \to (\Sch/V)_{fppf}
+$$
+satisfies (\ref{equation-smooth}) by Lemma \ref{lemma-composition-smooth}.
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-lifting-along-artinian-at-point}
+shows that the composition $W \to Z \to V$ is smooth at every finite type
+point $w_0$ of $W$. Since the smooth locus is open we conclude
+that $W \to V$ is a smooth morphism of schemes by
+Morphisms, Lemma \ref{morphisms-lemma-enough-finite-type-points}.
+Thus we conclude that $Z \to V$ is a smooth morphism
+of algebraic spaces by definition.
+\end{proof}
+
+\noindent
+The lemma below is how we will use openness of versality.
+
+\begin{lemma}
+\label{lemma-get-smooth}
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Let $k$ be a finite type field over $S$ and let $x_0$ be an object of
+$\mathcal{X}$ over $\Spec(k)$ with image $s \in S$. Assume
+\begin{enumerate}
+\item $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$ is
+representable by algebraic spaces,
+\item $\mathcal{X}$ satisfies axioms [1], [2], [3] (see
+Section \ref{section-axioms}),
+\item every formal object of $\mathcal{X}$ is effective,
+\item openness of versality holds for $\mathcal{X}$, and
+\item $\mathcal{O}_{S, s}$ is a G-ring.
+\end{enumerate}
+Then there exist a morphism of finite type $U \to S$ and an object
+$x$ of $\mathcal{X}$ over $U$ such that
+$$
+x : (\Sch/U)_{fppf} \longrightarrow \mathcal{X}
+$$
+is smooth and such that there exists a finite type point $u_0 \in U$
+whose residue field is $k$ and such that $x|_{u_0} \cong x_0$.
+\end{lemma}
+
+\begin{proof}
+By axiom [2], Lemma \ref{lemma-deformation-category}, and
+Remark \ref{remark-deformation-category-implies}
+we see that $\mathcal{F}_{\mathcal{X}, k, x_0}$ satisfies (S1) and (S2).
+Since also the tangent space has finite dimension by axiom [3]
+we deduce from Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-versal-object-existence}
+that $\mathcal{F}_{\mathcal{X}, k, x_0}$ has a versal formal object $\xi$.
+Assumption (3) says $\xi$ is effective. By axiom [1] and
+Lemma \ref{lemma-approximate-versal}
+there exists a morphism of finite type $U \to S$, an object $x$ of
+$\mathcal{X}$ over $U$, and a finite type point $u_0$ of $U$ with residue
+field $k$ such that $x$ is versal at $u_0$ and such that
+$x|_{\Spec(k)} \cong x_0$. By openness of versality we may shrink
+$U$ and assume that $x$ is versal at every finite type point of $U$.
+We claim that
+$$
+x : (\Sch/U)_{fppf} \longrightarrow \mathcal{X}
+$$
+is smooth which proves the lemma. Namely, by Lemma \ref{lemma-versal-smooth}
+$x$ satisfies (\ref{equation-smooth})
+whereupon Lemma \ref{lemma-implies-smooth}
+finishes the proof.
+\end{proof}
+
+
+
+
+
+
+\section{Axioms}
+\label{section-axioms}
+
+\noindent
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Here are the axioms we will consider on $\mathcal{X}$.
+\begin{enumerate}
+\item[{[-1]}] a set theoretic condition\footnote{The condition is the
+following: the supremum of all the cardinalities
+$|\Ob(\mathcal{X}_{\Spec(k)})/\cong|$ and
+$|\text{Arrows}(\mathcal{X}_{\Spec(k)})|$ where $k$ runs over the finite
type fields over $S$ is at most the size of some
+object of $(\Sch/S)_{fppf}$.} to be ignored by
+readers who are not interested in set theoretical issues,
+\item[{[0]}] $\mathcal{X}$ is a stack in groupoids for the \'etale topology,
+\item[{[1]}] $\mathcal{X}$ is limit preserving,
+\item[{[2]}] $\mathcal{X}$ satisfies the Rim-Schlessinger condition (RS),
+\item[{[3]}] the spaces $T\mathcal{F}_{\mathcal{X}, k, x_0}$ and
+$\text{Inf}(\mathcal{F}_{\mathcal{X}, k, x_0})$
+are finite dimensional
+for every $k$ and $x_0$, see
+(\ref{equation-tangent-space}) and
+(\ref{equation-infinitesimal-automorphisms}),
+\item[{[4]}] the functor (\ref{equation-approximation}) is an equivalence,
+\item[{[5]}] $\mathcal{X}$ and
+$\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$ satisfy
+openness of versality.
+\end{enumerate}
+
+
+
+
+
+
+
+
+
+
+\section{Axioms for functors}
+\label{section-axioms-functors}
+
+\noindent
+Let $S$ be a scheme. Let $F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a
+functor. Denote $\mathcal{X} = \mathcal{S}_F$ the category fibred in sets
+associated to $F$, see Algebraic Stacks, Section \ref{algebraic-section-split}.
+In this section we provide a translation between the material above
+as it applies to $\mathcal{X}$, to statements about $F$.
+
+\medskip\noindent
+Let $S$ be a locally Noetherian scheme. Let
+$F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a functor. Let $k$ be
+a field of finite type over $S$. Let $x_0 \in F(\Spec(k))$.
+The associated predeformation category (\ref{equation-predeformation-category})
+corresponds to the functor
+$$
+F_{k, x_0} : \mathcal{C}_\Lambda \longrightarrow \textit{Sets},
+\quad
+A \longmapsto \{x \in F(\Spec(A)) \mid x|_{\Spec(k)} = x_0 \}.
+$$
+Recall that we do not distinguish between
+categories cofibred in sets over $\mathcal{C}_\Lambda$
and functors $\mathcal{C}_\Lambda \to \textit{Sets}$,
+see Formal Deformation Theory, Remarks
+\ref{formal-defos-remarks-cofibered-groupoids}
+(\ref{formal-defos-item-convention-cofibered-sets}).
+Given a transformation of functors $a : F \to G$, setting
+$y_0 = a(x_0)$ we obtain a morphism
+$$
+F_{k, x_0} \longrightarrow G_{k, y_0}
+$$
+see (\ref{equation-functoriality}).
+Lemma \ref{lemma-formally-smooth-on-deformation-categories} tells us that if
+$a : F \to G$ is formally smooth (in the sense of
+More on Morphisms of Spaces, Definition
+\ref{spaces-more-morphisms-definition-formally-smooth-etale-unramified}), then
+$F_{k, x_0} \longrightarrow G_{k, y_0}$ is smooth as
+in Formal Deformation Theory, Remark
+\ref{formal-defos-remark-compare-smooth-schlessinger}.
+
+\medskip\noindent
+Lemma \ref{lemma-pushout} says that if $Y' = Y \amalg_X X'$ in the
+category of schemes over $S$ where $X \to X'$ is a thickening and
+$X \to Y$ is affine, then the map
+$$
+F(Y \amalg_X X') \to F(Y) \times_{F(X)} F(X')
+$$
+is a bijection, provided that $F$ is an algebraic space.
+We say a general functor $F$ satisfies the {\it Rim-Schlessinger condition}
+or we say $F$ {\it satisfies (RS)} if given any
+pushout $Y' = Y \amalg_X X'$ where $Y, X, X'$ are spectra of Artinian
+local rings of finite type over $S$, then
+$$
+F(Y \amalg_X X') \to F(Y) \times_{F(X)} F(X')
+$$
+is a bijection. Thus every algebraic space satisfies (RS).
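
\medskip\noindent
For example, if $F = \mathbf{A}^1_S$, so that $F(\Spec(A)) = A$ for an
affine scheme $\Spec(A)$ over $S$, then (RS) holds for trivial reasons:
writing $Y = \Spec(B)$, $X = \Spec(A)$, $X' = \Spec(A')$ we have
$Y \amalg_X X' = \Spec(B \times_A A')$ and hence
$$
F(Y \amalg_X X') = B \times_A A' = F(Y) \times_{F(X)} F(X').
$$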
+
+\medskip\noindent
+Lemma \ref{lemma-deformation-category} says that
+given a functor $F$ which satisfies (RS), then all $F_{k, x_0}$
+are deformation functors as in
+Formal Deformation Theory, Definition
+\ref{formal-defos-definition-deformation-category}, i.e., they satisfy
+(RS) as in
+Formal Deformation Theory, Remark
+\ref{formal-defos-remark-compare-schlessinger-H4}.
+In particular the tangent space
+$$
+TF_{k, x_0} = \{x \in F(\Spec(k[\epsilon])) \mid x|_{\Spec(k)} = x_0\}
+$$
+has the structure of a $k$-vector space by Formal Deformation Theory,
+Lemma \ref{formal-defos-lemma-tangent-space-vector-space}.
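
\medskip\noindent
For example, take $F = \mathbf{A}^n_S$, i.e.,
$F(T) = \Gamma(T, \mathcal{O}_T)^{\oplus n}$, and
$x_0 \in F(\Spec(k)) = k^{\oplus n}$. Restriction along
$\Spec(k) \to \Spec(k[\epsilon])$ is induced by $\epsilon \mapsto 0$, so
$$
TF_{k, x_0} = \{x_0 + \epsilon v \mid v \in k^{\oplus n}\} \cong k^{\oplus n}
$$
with its evident $k$-vector space structure.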
+
+\medskip\noindent
+Lemma \ref{lemma-finite-dimension} says that an algebraic space $F$
+locally of finite type over $S$ gives rise to deformation functors
+$F_{k, x_0}$ with finite dimensional tangent spaces $TF_{k, x_0}$.
+
+\medskip\noindent
+A {\it formal object}\footnote{This is what Artin calls a formal deformation.}
+$\xi = (R, \xi_n)$ of $F$ consists of a Noetherian
+complete local $S$-algebra $R$ whose residue field is of finite type
+over $S$, together with elements $\xi_n \in F(\Spec(R/\mathfrak m^n))$
+such that $\xi_{n + 1}|_{\Spec(R/\mathfrak m^n)} = \xi_n$. A formal
+object $\xi$ defines a formal object $\xi$ of $F_{R/\mathfrak m, \xi_1}$.
+We say $\xi$ is {\it versal} if and only if it is versal in the sense of
+Formal Deformation Theory, Definition \ref{formal-defos-definition-versal}.
+A formal object $\xi = (R, \xi_n)$ is called {\it effective}
+if there exists an $x \in F(\Spec(R))$ such that
+$\xi_n = x|_{\Spec(R/\mathfrak m^n)}$ for all $n \geq 1$.
+Lemma \ref{lemma-effective} says that if $F$ is an algebraic space,
+then every formal object is effective.
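
\medskip\noindent
For example, if $F = \mathbf{A}^1_S$, so that $F(\Spec(A)) = A$, then a
formal object $(R, \xi_n)$ is a compatible system of elements
$\xi_n \in R/\mathfrak m^n$, i.e., an element of $\lim R/\mathfrak m^n$.
Since $R$ is Noetherian complete local, this limit is $R$ itself, and
the corresponding $x \in F(\Spec(R)) = R$ shows $\xi$ is effective.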
+
+\medskip\noindent
+Let $U$ be a scheme locally of finite type over $S$ and let $x \in F(U)$.
+Let $u_0 \in U$ be a finite type point. We say that $x$ is versal at $u_0$
+if and only if
+$\xi = (\mathcal{O}_{U, u_0}^\wedge,
+x|_{\Spec(\mathcal{O}_{U, u_0}/\mathfrak m_{u_0}^n)})$
+is a versal formal object in the sense described above.
+
+\medskip\noindent
+Let $S$ be a locally Noetherian scheme. Let
$F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a functor.
+Here are the axioms we will consider on $F$.
+\begin{enumerate}
+\item[{[-1]}] a set theoretic condition\footnote{The condition is the
+following: the supremum of all the cardinalities
+$|F(\Spec(k))|$ where $k$ runs over the finite
type fields over $S$ is at most the size of some
+object of $(\Sch/S)_{fppf}$.} to be ignored by
+readers who are not interested in set theoretical issues,
+\item[{[0]}] $F$ is a sheaf for the \'etale topology,
+\item[{[1]}] $F$ is limit preserving,
+\item[{[2]}] $F$ satisfies the Rim-Schlessinger condition (RS),
+\item[{[3]}] every tangent space $TF_{k, x_0}$ is finite dimensional,
+\item[{[4]}] every formal object is effective,
+\item[{[5]}] $F$ satisfies openness of versality.
+\end{enumerate}
+Here {\it limit preserving} is the notion defined in
+Limits of Spaces, Definition
+\ref{spaces-limits-definition-locally-finite-presentation} and
+{\it openness of versality} means the following: Given a scheme $U$
+locally of finite type over $S$, given $x \in F(U)$, and given
+a finite type point $u_0 \in U$ such that $x$ is versal at $u_0$,
+then there exists an open neighbourhood $u_0 \in U' \subset U$
+such that $x$ is versal at every finite type point of $U'$.
+
+
+
+
+
+
+\section{Algebraic spaces}
+\label{section-algebraic-spaces}
+
+\noindent
+The following is our first main result on algebraic spaces.
+
+\begin{proposition}
+\label{proposition-spaces-diagonal-representable}
+Let $S$ be a locally Noetherian scheme. Let
+$F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a functor. Assume that
+\begin{enumerate}
+\item $\Delta : F \to F \times F$ is representable by algebraic spaces,
+\item $F$ satisfies axioms [-1], [0], [1], [2], [3], [4], [5]
+(see Section \ref{section-axioms-functors}), and
+\item $\mathcal{O}_{S, s}$ is a G-ring for all finite type points $s$ of $S$.
+\end{enumerate}
+Then $F$ is an algebraic space.
+\end{proposition}
+
+\begin{proof}
+Lemma \ref{lemma-get-smooth} applies to $F$. Using this we
+choose, for every finite type field $k$ over $S$ and $x_0 \in F(\Spec(k))$,
+an affine scheme $U_{k, x_0}$ of finite type over $S$ and a smooth morphism
+$U_{k, x_0} \to F$ such that there exists a finite type point
+$u_{k, x_0} \in U_{k, x_0}$ with residue field $k$ such that $x_0$
+is the image of $u_{k, x_0}$. Then
+$$
+U = \coprod\nolimits_{k, x_0} U_{k, x_0} \longrightarrow F
+$$
is smooth\footnote{Set theoretical remark: This coproduct is (isomorphic to)
+to an object of $(\Sch/S)_{fppf}$ as we have a bound on the index set
+by axiom [-1], see Sets, Lemma \ref{sets-lemma-what-is-in-it}.}.
+To finish the proof it suffices to show this map is surjective,
+see Bootstrap, Lemma \ref{bootstrap-lemma-spaces-etale-smooth-cover}
+(this is where we use axiom [0]). By Criteria for Representability, Lemma
+\ref{criteria-lemma-check-property-limit-preserving}
+it suffices to show that $U \times_F V \to V$ is surjective for those
+$V \to F$ where $V$ is an affine scheme locally of finite presentation
+over $S$. Since $U \times_F V \to V$ is smooth the image is open. Hence
+it suffices to show that the image of $U \times_F V \to V$ contains all
+finite type points of $V$, see
+Morphisms, Lemma \ref{morphisms-lemma-enough-finite-type-points}.
+Let $v_0 \in V$ be a finite type point. Then $k = \kappa(v_0)$ is
+a finite type field over $S$. Denote $x_0$ the composition
+$\Spec(k) \xrightarrow{v_0} V \to F$. Then
+$(u_{k, x_0}, v_0) : \Spec(k) \to U \times_F V$ is a point mapping to
+$v_0$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-monomorphism}
+Let $S$ be a locally Noetherian scheme. Let $a : F \to G$ be a transformation
+of functors $(\Sch/S)_{fppf}^{opp} \to \textit{Sets}$.
+Assume that
+\begin{enumerate}
+\item $a$ is injective,
+\item $F$ satisfies axioms [0], [1], [2], [4], and [5],
+\item $\mathcal{O}_{S, s}$ is a G-ring for all finite type points $s$ of $S$,
+\item $G$ is an algebraic space locally of finite type over $S$,
+\end{enumerate}
+Then $F$ is an algebraic space.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finite-dimension} the functor $G$ satisfies [3].
+As $F \to G$ is injective, we conclude that $F$ also satisfies [3].
+Moreover, as $F \to G$ is injective, we see that given schemes
+$U$, $V$ and morphisms $U \to F$ and $V \to F$, then
+$U \times_F V = U \times_G V$. Hence $\Delta : F \to F \times F$ is
+representable (by schemes) as this holds for $G$ by assumption.
+Thus Proposition \ref{proposition-spaces-diagonal-representable}
+applies\footnote{The set
+theoretic condition [-1] holds for $F$ as it holds for $G$. Details
+omitted.}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Algebraic stacks}
+\label{section-algebraic-stacks}
+
+\noindent
+Proposition \ref{proposition-second-diagonal-representable} is our first
+main result on algebraic stacks.
+
+\begin{lemma}
+\label{lemma-diagonal-representable}
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Assume that
+\begin{enumerate}
+\item $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces,
+\item $\mathcal{X}$ satisfies axioms [-1], [0], [1], [2], [3] (see
+Section \ref{section-axioms}),
+\item every formal object of $\mathcal{X}$ is effective,
+\item $\mathcal{X}$ satisfies openness of versality, and
+\item $\mathcal{O}_{S, s}$ is a G-ring for all finite type points $s$ of $S$.
+\end{enumerate}
+Then $\mathcal{X}$ is an algebraic stack.
+\end{lemma}
+
+\begin{proof}
+Lemma \ref{lemma-get-smooth} applies to $\mathcal{X}$. Using this we
+choose, for every finite type field $k$ over $S$ and every
isomorphism class of objects $x_0 \in \Ob(\mathcal{X}_{\Spec(k)})$,
+an affine scheme $U_{k, x_0}$ of finite type over $S$ and a smooth morphism
+$(\Sch/U_{k, x_0})_{fppf} \to \mathcal{X}$ such that there exists a finite
+type point $u_{k, x_0} \in U_{k, x_0}$ with residue field $k$ such that $x_0$
+is the image of $u_{k, x_0}$. Then
+$$
+(\Sch/U)_{fppf} \to \mathcal{X},
+\quad\text{with}\quad
+U = \coprod\nolimits_{k, x_0} U_{k, x_0}
+$$
+is smooth\footnote{Set theoretical remark: This coproduct is (isomorphic to)
+an object of $(\Sch/S)_{fppf}$ as we have a bound on the index set
+by axiom [-1], see Sets, Lemma \ref{sets-lemma-what-is-in-it}.}.
+To finish the proof it suffices to show this map is surjective,
+see Criteria for Representability, Lemma \ref{criteria-lemma-stacks-etale}
+(this is where we use axiom [0]). By Criteria for Representability, Lemma
+\ref{criteria-lemma-check-property-limit-preserving}
+it suffices to show that
+$(\Sch/U)_{fppf} \times_\mathcal{X} (\Sch/V)_{fppf} \to (\Sch/V)_{fppf}$
+is surjective for those $y : (\Sch/V)_{fppf} \to \mathcal{X}$ where
+$V$ is an affine scheme locally of finite presentation
+over $S$. By assumption (1) the fibre product
+$(\Sch/U)_{fppf} \times_\mathcal{X} (\Sch/V)_{fppf}$ is representable
+by an algebraic space $W$. Then $W \to V$ is smooth, hence the image is
+open. Hence it suffices to show that the image of $W \to V$ contains all
+finite type points of $V$, see
+Morphisms, Lemma \ref{morphisms-lemma-enough-finite-type-points}.
+Let $v_0 \in V$ be a finite type point. Then $k = \kappa(v_0)$ is
+a finite type field over $S$. Denote $x_0 = y|_{\Spec(k)}$
+the pullback of $y$ by $v_0$. Then $(u_{k, x_0}, v_0)$ will give
+a morphism $\Spec(k) \to W$ whose composition with $W \to V$
+is $v_0$ and we win.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-second-diagonal-representable}
+Let $S$ be a locally Noetherian scheme. Let
+$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Assume that
+\begin{enumerate}
+\item $\Delta_\Delta : \mathcal{X} \to
+\mathcal{X} \times_{\mathcal{X} \times \mathcal{X}} \mathcal{X}$
+is representable by algebraic spaces,
+\item $\mathcal{X}$ satisfies axioms [-1], [0], [1], [2], [3], [4], and [5]
+(see Section \ref{section-axioms}),
+\item $\mathcal{O}_{S, s}$ is a G-ring for all finite type points $s$ of $S$.
+\end{enumerate}
+Then $\mathcal{X}$ is an algebraic stack.
+\end{proposition}
+
+\begin{proof}
+We first prove that $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces. To do this it suffices to show
+that
+$$
+\mathcal{Y} =
+\mathcal{X} \times_{\Delta, \mathcal{X} \times \mathcal{X}, y} (\Sch/V)_{fppf}
+$$
+is representable by an algebraic space for any affine scheme $V$ locally
+of finite presentation over $S$ and object $y$ of
+$\mathcal{X} \times \mathcal{X}$ over $V$, see
+Criteria for Representability, Lemma
+\ref{criteria-lemma-check-representable-limit-preserving}\footnote{The
+set theoretic condition in Criteria for Representability, Lemma
+\ref{criteria-lemma-check-representable-limit-preserving}
+will hold: the size of the algebraic space $Y$ representing $\mathcal{Y}$ is
+suitably bounded. Namely, $Y \to S$ will be locally of finite type and $Y$
+will satisfy axiom [-1]. Details omitted.}.
+Observe that $\mathcal{Y}$ is fibred in setoids
+(Stacks, Lemma \ref{stacks-lemma-isom-as-2-fibre-product})
+and let $Y : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$,
+$T \mapsto \Ob(\mathcal{Y}_T)/\cong$ be the functor of isomorphism
+classes. We will apply
+Proposition \ref{proposition-spaces-diagonal-representable}
+to see that $Y$ is an algebraic space.
+
+\medskip\noindent
+Note that
+$\Delta_\mathcal{Y} : \mathcal{Y} \to \mathcal{Y} \times \mathcal{Y}$
+(and hence also $Y \to Y \times Y$)
+is representable by algebraic spaces by condition (1) and
+Criteria for Representability, Lemma \ref{criteria-lemma-second-diagonal}.
+Observe that $Y$ is a sheaf for the \'etale topology by
+Stacks, Lemmas \ref{stacks-lemma-stack-in-setoids-characterize} and
+\ref{stacks-lemma-2-fibre-product-gives-stack-in-setoids}, i.e.,
+axiom [0] holds. Also $Y$ is limit preserving by
+Lemma \ref{lemma-fibre-product-limit-preserving}, i.e., we have [1].
+Note that $Y$ has (RS), i.e., axiom [2] holds, by
+Lemmas \ref{lemma-algebraic-stack-RS} and
+\ref{lemma-fibre-product-RS}. Axiom [3] for $Y$ follows
+from Lemmas \ref{lemma-finite-dimension} and
+\ref{lemma-fibre-product-tangent-spaces}.
+Axiom [4] follows from Lemmas \ref{lemma-effective} and
+\ref{lemma-fibre-product-effective}.
+Axiom [5] for $Y$ follows directly from openness of versality
+for $\Delta_\mathcal{X}$ which is part of axiom [5] for $\mathcal{X}$.
+Thus all the assumptions of
+Proposition \ref{proposition-spaces-diagonal-representable}
+are satisfied and $Y$ is an algebraic space.
+
+\medskip\noindent
+At this point it follows from Lemma \ref{lemma-diagonal-representable}
+that $\mathcal{X}$ is an algebraic stack.
+\end{proof}
+
+
+
+
+
+\section{Strong Rim-Schlessinger}
+\label{section-RS-star}
+
+\noindent
In the rest of this chapter the following strictly stronger version
of the Rim-Schlessinger condition will play an important role.
+
+\begin{definition}
+\label{definition-RS-star}
+Let $S$ be a scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. We say $\mathcal{X}$
+satisfies {\it condition (RS*)} if given a fibre product diagram
+$$
+\xymatrix{
+B' \ar[r] & B \\
+A' = A \times_B B' \ar[u] \ar[r] & A \ar[u]
+}
+$$
+of $S$-algebras, with $B' \to B$ surjective with square zero kernel,
+the functor of fibre categories
+$$
+\mathcal{X}_{\Spec(A')}
+\longrightarrow
+\mathcal{X}_{\Spec(A)} \times_{\mathcal{X}_{\Spec(B)}} \mathcal{X}_{\Spec(B')}
+$$
+is an equivalence of categories.
+\end{definition}
+
+\noindent
We make some observations. With $A \to B \leftarrow B'$ as in
Definition \ref{definition-RS-star} we have the following:
+\begin{enumerate}
+\item we have $\Spec(A') = \Spec(A) \amalg_{\Spec(B)} \Spec(B')$
+in the category of schemes, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-pushout-along-thickening},
+and
+\item if $\mathcal{X}$ is an algebraic stack, then $\mathcal{X}$ satisfies
+(RS*) by Lemma \ref{lemma-algebraic-stack-RS-star}.
+\end{enumerate}
+If $S$ is locally Noetherian, then
+\begin{enumerate}
+\item[(3)] if $A$, $B$, $B'$ are of finite type over $S$ and
+$B$ is finite over $A$, then $A'$ is of finite type over
+$S$\footnote{If $\Spec(A)$ maps into an affine open of $S$
+this follows from
+More on Algebra, Lemma \ref{more-algebra-lemma-fibre-product-finite-type}.
+The general case follows using
+More on Algebra, Lemma \ref{more-algebra-lemma-diagram-localize}.}, and
+\item[(4)] if $\mathcal{X}$ satisfies (RS*), then $\mathcal{X}$ satisfies (RS)
+because (RS) covers exactly those cases of (RS*) where
+$A$, $B$, $B'$ are Artinian local.
+\end{enumerate}
+
+\begin{lemma}
+\label{lemma-algebraic-stack-RS-star}
+Let $\mathcal{X}$ be an algebraic stack over a base $S$.
+Then $\mathcal{X}$ satisfies (RS*).
+\end{lemma}
+
+\begin{proof}
+This is implied by Lemma \ref{lemma-pushout}, see
+remarks following Definition \ref{definition-RS-star}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibre-product-RS-star}
+Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$ and
+$q : \mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. If $\mathcal{X}$, $\mathcal{Y}$,
+and $\mathcal{Z}$ satisfy (RS*), then so
+does $\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$.
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-fibre-product-RS}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Versality and generalizations}
+\label{section-generalize-versality}
+
+\noindent
+We prove that versality is preserved under generalizations
+for stacks which have (RS*) and are limit preserving.
+We suggest skipping this section on a first reading.
+
+\begin{lemma}
+\label{lemma-single-point}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$ having (RS*).
+Let $x$ be an object of $\mathcal{X}$ over an affine scheme $U$
+of finite type over $S$. Let $u \in U$ be a finite type point such that
+$x$ is not versal at $u$. Then there exists a morphism $x \to y$
+of $\mathcal{X}$ lying over $U \to T$ satisfying
+\begin{enumerate}
+\item the morphism $U \to T$ is a first order thickening,
+\item we have a short exact sequence
+$$
+0 \to \kappa(u) \to \mathcal{O}_T \to \mathcal{O}_U \to 0
+$$
+\item there does {\bf not} exist a pair $(W, \alpha)$
+consisting of an open neighbourhood $W \subset T$ of $u$
and a morphism $\alpha : y|_W \to x$ such that the composition
$$
x|_{U \cap W} \xrightarrow{\text{restriction of }x \to y}
y|_W \xrightarrow{\alpha} x
$$
+is the canonical morphism $x|_{U \cap W} \to x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $R = \mathcal{O}_{U, u}^\wedge$. Let $k = \kappa(u)$
+be the residue field of $R$. Let $\xi$ be the formal object
+of $\mathcal{X}$ over $R$ associated to $x$. Since $x$ is not
+versal at $u$, we see that $\xi$ is not versal, see
+Lemma \ref{lemma-versality-matches}. By the discussion following
+Definition \ref{definition-versal-formal-object}
+this means we can find
+morphisms $\xi_1 \to x_A \to x_B$ of $\mathcal{X}$ lying over
+closed immersions $\Spec(k) \to \Spec(A) \to \Spec(B)$
+where $A, B$ are Artinian local rings with residue field $k$,
+an $n \geq 1$ and a commutative diagram
+$$
+\vcenter{
+\xymatrix{
+& x_A \ar[ld] \\
+\xi_n & \xi_1 \ar[u] \ar[l]
+}
+}
+\quad\text{lying over}\quad
+\vcenter{
+\xymatrix{
+& \Spec(A) \ar[ld] \\
+\Spec(R/\mathfrak m^n) & \Spec(k) \ar[u] \ar[l]
+}
+}
+$$
+such that there does {\bf not} exist an $m \geq n$ and a commutative diagram
+$$
+\vcenter{
+\xymatrix{
+& & x_B \ar[lldd] \\
+& & x_A \ar[ld] \ar[u] \\
+\xi_m & \xi_n \ar[l] & \xi_1 \ar[u] \ar[l]
+}
+}
\quad\text{lying over}\quad
+\vcenter{
+\xymatrix{
+& & \Spec(B) \ar[lldd] \\
+& & \Spec(A) \ar[ld] \ar[u] \\
+\Spec(R/\mathfrak m^m) &
+\Spec(R/\mathfrak m^n) \ar[l] &
+\Spec(k) \ar[u] \ar[l]
+}
+}
+$$
+We may moreover assume that $B \to A$ is a small
+extension, i.e., that the kernel $I$ of the surjection $B \to A$
+is isomorphic to $k$ as an $A$-module.
+This follows from Formal Deformation Theory, Remark
+\ref{formal-defos-remark-versal-object}.
+Then we simply define
+$$
+T = U \amalg_{\Spec(A)} \Spec(B)
+$$
+By property (RS*) we find $y$ over $T$ whose restriction to
+$\Spec(B)$ is $x_B$ and whose restriction to $U$ is $x$
+(this gives the arrow $x \to y$ lying over $U \to T$).
+To finish the proof we verify conditions (1), (2), and (3).
+
+\medskip\noindent
+By the construction of the pushout we have a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+I \ar[r] &
+B \ar[r] &
+A \ar[r] &
+0 \\
+0 \ar[r] &
+I \ar[r] \ar[u] &
+\Gamma(T, \mathcal{O}_T) \ar[r] \ar[u] &
+\Gamma(U, \mathcal{O}_U) \ar[r] \ar[u] &
+0
+}
+$$
+with exact rows. This immediately proves (1) and (2).
+To finish the proof we will argue by contradiction.
Assume we have a pair $(W, \alpha)$ as in (3).
Since $\Spec(B) \to T$ factors through $W$ we get the morphism
$$
x_B \to y|_W \xrightarrow{\alpha} x
$$
+Since $B$ is Artinian local with residue field $k = \kappa(u)$
+we see that $x_B \to x$ lies over a morphism $\Spec(B) \to U$
+which factors through $\Spec(\mathcal{O}_{U, u}/\mathfrak m_u^m)$
+for some $m \geq n$. In other words, $x_B \to x$ factors
+through $\xi_m$ giving a map $x_B \to \xi_m$.
+The compatibility condition on the morphism $\alpha$
+in condition (3) translates into the condition that
+$$
+\xymatrix{
+x_B \ar[d] & x_A \ar[d] \ar[l] \\
+\xi_m & \xi_n \ar[l]
+}
+$$
+is commutative. This gives the contradiction we were looking for.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generalization-versality}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. Assume
+\begin{enumerate}
+\item $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$ is
+representable by algebraic spaces,
+\item $\mathcal{X}$ has (RS*),
+\item $\mathcal{X}$ is limit preserving.
+\end{enumerate}
+Let $x$ be an object of $\mathcal{X}$ over a scheme $U$ of finite type over
+$S$. Let $u \leadsto u_0$ be a specialization of finite type points of $U$
+such that $x$ is versal at $u_0$. Then $x$ is versal at $u$.
+\end{lemma}
+
+\begin{proof}
+After shrinking $U$ we may assume $U$ is affine and $U$ maps into an
+affine open $\Spec(\Lambda)$ of $S$. If $x$ is not versal at $u$ then
+we may pick $x \to y$ lying over $U \to T$ as in
+Lemma \ref{lemma-single-point}. Write $U = \Spec(R_0)$ and $T = \Spec(R)$.
+The morphism $U \to T$ corresponds to a surjective ring map
+$R \to R_0$ whose kernel is an ideal of square zero.
+By assumption (3) we get that $y$ comes from an object $x'$ over
+$U' = \Spec(R')$ for some finite type $\Lambda$-subalgebra
+$R' \subset R$. After increasing $R'$ we may and do assume that
+$R' \to R_0$ is surjective, so that $U \subset U'$ is a first order thickening.
+Thus we now have
+$$
+x \to y \to x'
+\text{ lying over }
+U \to T \to U'
+$$
+By assumption (1) there is an algebraic space $Z$ over $S$ representing
+$$
+(\Sch/U)_{fppf} \times_{x, \mathcal{X}, x'} (\Sch/U')_{fppf}
+$$
+see Algebraic Stacks, Lemma \ref{algebraic-lemma-representable-diagonal}.
+By construction of $2$-fibre products, a $V$-valued point of $Z$
+corresponds to a triple $(a, a', \alpha)$ consisting of morphisms
+$a : V \to U$, $a' : V \to U'$ and a morphism $\alpha : a^*x \to (a')^*x'$.
+We obtain a commutative diagram
+$$
+\xymatrix{
+U \ar[rd] \ar[rdd] \ar[rrd] \\
+& Z \ar[r]_{p'} \ar[d]^p & U' \ar[d] \\
+& U \ar[r] & S
+}
+$$
The morphism $i : U \to Z$ comes from the isomorphism $x \to x'|_U$.
+Let $z_0 = i(u_0) \in Z$. By Lemma \ref{lemma-base-change-versal}
+we see that $Z \to U'$ is smooth at $z_0$. After replacing $U$ by an
+affine open neighbourhood of $u_0$, replacing $U'$ by the corresponding
+open, and replacing $Z$ by the intersection of the inverse images
+of these opens by $p$ and $p'$, we reach the situation where
+$Z \to U'$ is smooth along $i(U)$. Since $u \leadsto u_0$ the point
+$u$ is in this open. Condition (3) of Lemma \ref{lemma-single-point}
+is clearly preserved by shrinking $U$ (all of the schemes $U$, $T$, $U'$
+have the same underlying topological space).
+Since $U \to U'$ is a first order thickening of affine schemes,
+we can choose a morphism $i' : U' \to Z$
+such that $p' \circ i' = \text{id}_{U'}$ and
+whose restriction to $U$ is $i$
+(More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-smooth-formally-smooth}).
+Pulling back the universal morphism $p^*x \to (p')^*x'$ by $i'$
+we obtain a morphism
+$$
+x' \to x
+$$
+lying over $p \circ i' : U' \to U$ such that the composition
+$$
+x \to x' \to x
+$$
+is the identity. Recall that we have $y \to x'$ lying over
+the morphism $T \to U'$. Composing we get a morphism
+$y \to x$ whose existence contradicts condition
+(3) of Lemma \ref{lemma-single-point}.
+This contradiction finishes the proof.
+\end{proof}
+
+
+
+
+
+
+\section{Strong formal effectiveness}
+\label{section-strong-formal-effectiveness}
+
+\noindent
+In this section we demonstrate how a strong version of effectiveness
+of formal objects implies openness of versality. The proof of
+\cite[Theorem 1.1]{Bhatt-Algebraize} shows that quasi-compact and
+quasi-separated algebraic spaces satisfy the strong formal effectiveness
+discussed in Remark \ref{remark-strong-effectiveness}. In addition, the
+theory we develop is nonempty: we use it later to show openness of versality
+for the stack of coherent sheaves and for moduli of complexes, see
+Quot, Theorems
+\ref{quot-theorem-coherent-algebraic-general} and
+\ref{quot-theorem-complexes-algebraic}.
+
+\begin{lemma}
+\label{lemma-infinite-sequence-pre}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$ having (RS*).
+Let $x$ be an object of
+$\mathcal{X}$ over an affine scheme $U$ of finite type over $S$.
+Let $u_n \in U$, $n \geq 1$ be finite type points such that
+(a) there are no specializations $u_n \leadsto u_m$ for $n \not = m$, and
+(b) $x$ is not versal at $u_n$ for all $n$. Then there exist morphisms
+$$
+x \to x_1 \to x_2 \to \ldots
+\quad\text{in }\mathcal{X}\text{ lying over }\quad
+U \to U_1 \to U_2 \to \ldots
+$$
+over $S$ such that
+\begin{enumerate}
+\item for each $n$ the morphism $U \to U_n$ is a first order
+thickening,
+\item for each $n$ we have a short exact sequence
+$$
+0 \to \kappa(u_n) \to \mathcal{O}_{U_n} \to \mathcal{O}_{U_{n - 1}} \to 0
+$$
+with $U_0 = U$ for $n = 1$,
+\item for each $n$ there does {\bf not} exist a pair $(W, \alpha)$
+consisting of an open neighbourhood $W \subset U_n$ of $u_n$
+and a morphism $\alpha : x_n|_W \to x$
+such that the composition
+$$
+x|_{U \cap W} \xrightarrow{\text{restriction of }x \to x_n}
+x_n|_W \xrightarrow{\alpha} x
+$$
+is the canonical morphism $x|_{U \cap W} \to x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since there are no specializations among the points $u_n$ (and in
+particular the $u_n$ are pairwise distinct), for every $n$
+we can find an open $U' \subset U$
+such that $u_n \in U'$ and $u_i \not \in U'$ for $i = 1, \ldots, n - 1$.
+By Lemma \ref{lemma-single-point} for each $n \geq 1$ we can find
+$$
+x \to y_n
+\quad\text{in }\mathcal{X}\text{ lying over}\quad
+U \to T_n
+$$
+such that
+\begin{enumerate}
+\item the morphism $U \to T_n$ is a first order thickening,
+\item we have a short exact sequence
+$$
+0 \to \kappa(u_n) \to \mathcal{O}_{T_n} \to \mathcal{O}_U \to 0
+$$
+\item there does {\bf not} exist a pair $(W, \alpha)$
+consisting of an open neighbourhood $W \subset T_n$ of $u_n$
and a morphism $\alpha : y_n|_W \to x$ such that the composition
$$
x|_{U \cap W} \xrightarrow{\text{restriction of }x \to y_n}
y_n|_W \xrightarrow{\alpha} x
$$
+is the canonical morphism $x|_{U \cap W} \to x$.
+\end{enumerate}
+Thus we can define inductively
+$$
+U_1 = T_1, \quad
+U_{n + 1} = U_n \amalg_U T_{n + 1}
+$$
+Setting $x_1 = y_1$ and using (RS*) we find inductively
+$x_{n + 1}$ over $U_{n + 1}$ restricting to
+$x_n$ over $U_n$ and $y_{n + 1}$ over $T_{n + 1}$.
+Property (1) for $U \to U_n$ follows from the construction
+of the pushout in More on Morphisms, Lemma
+\ref{more-morphisms-lemma-pushout-along-thickening}.
+Property (2) for $U_n$ similarly follows from
+property (2) for $T_n$ by the construction of the pushout.
+After shrinking to an open neighbourhood $U'$ of $u_n$
+as discussed above, property (3) for $(U_n, x_n)$ follows from property (3)
+for $(T_n, y_n)$ simply because the corresponding open subschemes
+of $T_n$ and $U_n$ are isomorphic. Some details omitted.
+\end{proof}
+
+\begin{remark}[Strong effectiveness]
+\label{remark-strong-effectiveness}
+Let $S$ be a locally Noetherian scheme.
+Let $\mathcal{X}$ be a category fibred in groupoids over $(\Sch/S)_{fppf}$.
+Assume we have
+\begin{enumerate}
+\item an affine open $\Spec(\Lambda) \subset S$,
+\item an inverse system $(R_n)$ of $\Lambda$-algebras
+with surjective transition maps whose kernels are locally nilpotent,
+\item a system $(\xi_n)$ of objects of $\mathcal{X}$ lying
+over the system $(\Spec(R_n))$.
+\end{enumerate}
+In this situation, set $R = \lim R_n$. We say that
+$(\xi_n)$ is {\it effective} if there exists an object
+$\xi$ of $\mathcal{X}$ over $\Spec(R)$ whose restriction
+to $\Spec(R_n)$ gives the system $(\xi_n)$.
+\end{remark}
+
+\noindent
+It is not the case that every algebraic stack $\mathcal{X}$
+over $S$ satisfies a strong effectiveness axiom of the form:
+every system $(\xi_n)$ as in Remark \ref{remark-strong-effectiveness}
+is effective. An example is given in
+Examples, Section \ref{examples-section-non-formal-effectiveness}.
+
+\begin{lemma}
+\label{lemma-SGE-implies-openness-versality}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. Assume
+\begin{enumerate}
+\item $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$ is
+representable by algebraic spaces,
+\item $\mathcal{X}$ has (RS*),
+\item $\mathcal{X}$ is limit preserving,
+\item systems $(\xi_n)$ as in Remark \ref{remark-strong-effectiveness}
+where $\Ker(R_m \to R_n)$ is an ideal of square zero for all $m \geq n$
+are effective.
+\end{enumerate}
+Then $\mathcal{X}$ satisfies openness of versality.
+\end{lemma}
+
+\begin{proof}
+Choose a scheme $U$ locally of finite type over $S$,
+a finite type point $u_0$ of $U$, and an object $x$ of $\mathcal{X}$
+over $U$ such that $x$ is versal at $u_0$. After shrinking
+$U$ we may assume $U$ is affine and $U$ maps into an affine open
+$\Spec(\Lambda)$ of $S$. Let $E \subset U$ be the set of finite type
+points $u$ such that $x$ is not versal at $u$. By
+Lemma \ref{lemma-generalization-versality}
+if $u \in E$ then $u_0$ is not a specialization of $u$.
+If openness of versality does not hold, then $u_0$ is in the closure
+$\overline{E}$ of $E$. By
+Properties, Lemma \ref{properties-lemma-countable-dense-subset}
+we may choose a countable subset $E' \subset E$ with the same closure
+as $E$. By Properties, Lemma \ref{properties-lemma-maximal-points}
+we may assume there are no specializations among the points of $E'$.
+Observe that $E'$ has to be (countably) infinite, since $u_0$
+is not a specialization of any point of $E'$, as pointed out above.
+Thus we can write $E' = \{u_1, u_2, u_3, \ldots\}$, there
+are no specializations among the $u_i$, and $u_0$ is in the closure
+of $E'$.
+
+\medskip\noindent
+Choose $x \to x_1 \to x_2 \to \ldots$ lying over
+$U \to U_1 \to U_2 \to \ldots$ as in Lemma \ref{lemma-infinite-sequence-pre}.
+Write $U_n = \Spec(R_n)$ and $U = \Spec(R_0)$.
+Set $R = \lim R_n$. Observe that $R \to R_0$ is surjective
+with kernel an ideal of square zero. By assumption (4)
+we get $\xi$ over $\Spec(R)$ whose base change to $R_n$ is $x_n$.
+By assumption (3) we get that $\xi$ comes from an object $\xi'$ over
+$U' = \Spec(R')$ for some finite type $\Lambda$-subalgebra
+$R' \subset R$. After increasing $R'$ we may and do assume that
+$R' \to R_0$ is surjective, so that $U \subset U'$ is a first order thickening.
+Thus we now have
+$$
+x \to x_1 \to x_2 \to \ldots \to \xi'
+\text{ lying over }
+U \to U_1 \to U_2 \to \ldots \to U'
+$$
+By assumption (1) there is an algebraic space $Z$ over $S$ representing
+$$
+(\Sch/U)_{fppf} \times_{x, \mathcal{X}, \xi'} (\Sch/U')_{fppf}
+$$
+see Algebraic Stacks, Lemma \ref{algebraic-lemma-representable-diagonal}.
+By construction of $2$-fibre products, a $T$-valued point of $Z$
+corresponds to a triple $(a, a', \alpha)$ consisting of morphisms
+$a : T \to U$, $a' : T \to U'$ and a morphism $\alpha : a^*x \to (a')^*\xi'$.
+We obtain a commutative diagram
+$$
+\xymatrix{
+U \ar[rd] \ar[rdd] \ar[rrd] \\
+& Z \ar[r]_{p'} \ar[d]^p & U' \ar[d] \\
+& U \ar[r] & S
+}
+$$
+The morphism $i : U \to Z$ comes from the isomorphism $x \to \xi'|_U$.
+Let $z_0 = i(u_0) \in Z$. By Lemma \ref{lemma-base-change-versal}
+we see that $Z \to U'$ is smooth at $z_0$. After replacing $U$ by an
+affine open neighbourhood of $u_0$, replacing $U'$ by the corresponding
+open, and replacing $Z$ by the intersection of the inverse images
+of these opens by $p$ and $p'$, we reach the situation where
+$Z \to U'$ is smooth along $i(U)$. Note that this
+also involves replacing $u_n$ by a subsequence, namely
+by those indices such that $u_n$ is in the open. Moreover, condition
+(3) of Lemma \ref{lemma-infinite-sequence-pre}
+is clearly preserved by shrinking $U$
+(all of the schemes $U$, $U_n$, $U'$ have the same underlying
+topological space).
+Since $U \to U'$ is a first order thickening of affine schemes,
+we can choose a morphism $i' : U' \to Z$
+such that $p' \circ i' = \text{id}_{U'}$ and
+whose restriction to $U$ is $i$
+(More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-smooth-formally-smooth}).
+Pulling back the universal morphism
+$p^*x \to (p')^*\xi'$ by $i'$ we obtain a morphism
+$$
+\xi' \to x
+$$
+lying over $p \circ i' : U' \to U$ such that the composition
+$$
+x \to \xi' \to x
+$$
+is the identity. Recall that we have $x_1 \to \xi'$ lying over
+the morphism $U_1 \to U'$. Composing we get a morphism
+$x_1 \to x$ whose existence contradicts condition
+(3) of Lemma \ref{lemma-infinite-sequence-pre}.
+This contradiction finishes the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-trade-openness-versality-diagonal-with-strong-effectiveness}
+There is a way to deduce openness of versality of the diagonal
+of a category fibred in groupoids from a strong formal effectiveness
+axiom.
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. Assume
+\begin{enumerate}
+\item $\Delta_\Delta : \mathcal{X} \to
+\mathcal{X} \times_{\mathcal{X} \times \mathcal{X}} \mathcal{X}$
+is representable by algebraic spaces,
+\item $\mathcal{X}$ has (RS*),
+\item $\mathcal{X}$ is limit preserving,
+\item given an inverse system $(R_n)$ of $S$-algebras
+as in Remark \ref{remark-strong-effectiveness}
+where $\Ker(R_m \to R_n)$ is an ideal of square zero for all $m \geq n$
+the functor
+$$
+\mathcal{X}_{\Spec(\lim R_n)} \longrightarrow
+\lim_n \mathcal{X}_{\Spec(R_n)}
+$$
+is fully faithful.
+\end{enumerate}
+Then $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+satisfies openness of versality. This follows by applying
+Lemma \ref{lemma-SGE-implies-openness-versality}
+to fibre products of the form
+$\mathcal{X} \times_{\Delta, \mathcal{X} \times \mathcal{X}, y}
+(\Sch/V)_{fppf}$ for any affine scheme $V$ locally
+of finite presentation over $S$ and object $y$ of
+$\mathcal{X} \times \mathcal{X}$ over $V$.
+If we ever need this, we will change this remark into
+a lemma and provide a detailed proof.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+\section{Infinitesimal deformations}
+\label{section-inf}
+
+\noindent
+In this section we discuss a generalization of the notion of the
+tangent space introduced in Section \ref{section-tangent-spaces}.
+To do this intelligently, we borrow some notation from
+Formal Deformation Theory, Sections
+\ref{formal-defos-section-tangent-spaces-functors},
+\ref{formal-defos-section-lifts}, and
+\ref{formal-defos-section-infinitesimal-automorphisms}.
+
+\medskip\noindent
+Let $S$ be a scheme. Let $\mathcal{X}$ be a category fibred in groupoids
+over $(\Sch/S)_{fppf}$. Given a homomorphism $A' \to A$ of $S$-algebras
+and an object $x$ of $\mathcal{X}$ over $\Spec(A)$ we write
+$\textit{Lift}(x, A')$ for the category of lifts of $x$ to $\Spec(A')$.
+An object of $\textit{Lift}(x, A')$ is a morphism $x \to x'$ of $\mathcal{X}$
+lying over $\Spec(A) \to \Spec(A')$ and morphisms of $\textit{Lift}(x, A')$
+are defined as commutative diagrams. The set of isomorphism classes of
+$\textit{Lift}(x, A')$ is denoted $\text{Lift}(x, A')$. See
+Formal Deformation Theory, Definition \ref{formal-defos-definition-lifts} and
+Remark \ref{formal-defos-remark-omit-arrow}.
+If $A' \to A$ is surjective with locally nilpotent kernel we call an element
+$x'$ of $\text{Lift}(x, A')$ a {\it (infinitesimal) deformation} of $x$.
+In this case the {\it group of infinitesimal automorphisms of $x'$ over $x$}
+is the kernel
+$$
+\text{Inf}(x'/x) =
+\Ker\left(
+\text{Aut}_{\mathcal{X}_{\Spec(A')}}(x') \to
+\text{Aut}_{\mathcal{X}_{\Spec(A)}}(x)\right)
+$$
+Note that an element of $\text{Inf}(x'/x)$ is the same thing as a lift
+of $\text{id}_x$ over $\Spec(A')$ for (the category fibred in sets associated
+to) $\mathit{Aut}_\mathcal{X}(x')$. Compare with
+Formal Deformation Theory, Definition
+\ref{formal-defos-definition-relative-infinitesimal-auts} and
+Formal Deformation Theory, Remark
+\ref{formal-defos-remark-infaut-lifting-equalities}.
+
+\medskip\noindent
+If $M$ is an $A$-module we denote $A[M]$ the $A$-algebra whose underlying
+$A$-module is $A \oplus M$ and whose multiplication is given by
+$(a, m) \cdot (a', m') = (aa', am' + a'm)$. When $M = A$ this is the ring
+of dual numbers over $A$, which we denote $A[\epsilon]$ as is customary.
+There is an $A$-algebra map $A[M] \to A$. The pullback of $x$ to $\Spec(A[M])$
+is called the {\it trivial deformation} of $x$ to $\Spec(A[M])$.
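+
+\medskip\noindent
+Observe that $M \subset A[M]$ is an ideal of square zero: the
+multiplication rule gives
+$$
+(0, m) \cdot (0, m') = (0 \cdot 0, 0 \cdot m' + 0 \cdot m) = (0, 0).
+$$
+In particular $\epsilon^2 = 0$ in the ring of dual numbers $A[\epsilon]$.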
+
+\begin{lemma}
+\label{lemma-functoriality}
+Let $S$ be a scheme. Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of categories fibred in groupoids over $(\Sch/S)_{fppf}$. Let
+$$
+\xymatrix{
+B' \ar[r] & B \\
+A' \ar[u] \ar[r] & A \ar[u]
+}
+$$
+be a commutative diagram of $S$-algebras. Let $x$ be an object of $\mathcal{X}$
+over $\Spec(A)$, let $y$ be an object of $\mathcal{Y}$ over $\Spec(B)$,
+and let $\phi : f(x)|_{\Spec(B)} \to y$ be a morphism of $\mathcal{Y}$
+over $\Spec(B)$. Then there is a canonical functor
+$$
+\textit{Lift}(x, A') \longrightarrow \textit{Lift}(y, B')
+$$
+of categories of lifts induced by $f$ and $\phi$. The construction is
+compatible with compositions of $1$-morphisms of categories fibred in
+groupoids in an obvious manner.
+\end{lemma}
+
+\begin{proof}
+This lemma proves itself.
+\end{proof}
+
+\noindent
+Let $S$ be a base scheme. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. We define a category whose objects are
+pairs $(x, A' \to A)$ where
+\begin{enumerate}
+\item $A' \to A$ is a surjection of $S$-algebras whose kernel
+is an ideal of square zero,
+\item $x$ is an object of $\mathcal{X}$ lying over $\Spec(A)$.
+\end{enumerate}
+A morphism $(y, B' \to B) \to (x, A' \to A)$ is given by a commutative
+diagram
+$$
+\xymatrix{
+B' \ar[r] & B \\
+A' \ar[u] \ar[r] & A \ar[u]
+}
+$$
+of $S$-algebras together with a morphism $x|_{\Spec(B)} \to y$ over
+$\Spec(B)$. Let us call this the category of {\it deformation situations}.
+
+\begin{lemma}
+\label{lemma-properties-lift-RS-star}
+Let $S$ be a scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. Assume $\mathcal{X}$ satisfies
+condition (RS*). Let $A$ be an $S$-algebra and let $x$ be an object of
+$\mathcal{X}$ over $\Spec(A)$.
+\begin{enumerate}
+\item There exists an $A$-linear functor
+$\text{Inf}_x : \text{Mod}_A \to \text{Mod}_A$
+such that given a deformation situation $(x, A' \to A)$ and a lift $x'$
+there is an isomorphism $\text{Inf}_x(I) \to \text{Inf}(x'/x)$ where
+$I = \Ker(A' \to A)$.
+\item There exists an $A$-linear functor
+$T_x : \text{Mod}_A \to \text{Mod}_A$
+such that
+\begin{enumerate}
+\item given $M$ in $\text{Mod}_A$ there is a bijection
+$T_x(M) \to \text{Lift}(x, A[M])$,
+\item given a deformation situation $(x, A' \to A)$ there is an action
+$$
+T_x(I) \times \text{Lift}(x, A') \to \text{Lift}(x, A')
+$$
+where $I = \Ker(A' \to A)$. It is simply transitive if
+$\text{Lift}(x, A') \not = \emptyset$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We define $\text{Inf}_x$ as the functor
+$$
+\text{Mod}_A \longrightarrow \textit{Sets},\quad
+M \longmapsto
+\text{Inf}(x'_M/x) = \text{Lift}(\text{id}_x, A[M])
+$$
+mapping $M$ to the group of infinitesimal automorphisms
+of the trivial deformation $x'_M$ of $x$ to $\Spec(A[M])$
+or equivalently the group of lifts of $\text{id}_x$ in
+$\mathit{Aut}_\mathcal{X}(x'_M)$.
+We define $T_x$ as the functor
+$$
+\text{Mod}_A \longrightarrow \textit{Sets},\quad
+M \longmapsto \text{Lift}(x, A[M])
+$$
+of isomorphism classes of infinitesimal deformations of $x$ to
+$\Spec(A[M])$. We apply Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-linear-functor}
+to $\text{Inf}_x$ and $T_x$. This lemma is applicable, since
+(RS*) tells us that
+$$
+\textit{Lift}(x, A[M \times N]) =
+\textit{Lift}(x, A[M]) \times \textit{Lift}(x, A[N])
+$$
+as categories (and trivial deformations match up too).
+
+\medskip\noindent
+Let $(x, A' \to A)$ be a deformation situation. Consider the ring map
+$g : A' \times_A A' \to A[I]$ defined by the
+rule $g(a_1, a_2) = \overline{a_1} \oplus a_2 - a_1$.
+There is an isomorphism
+$$
+A' \times_A A' \longrightarrow A' \times_A A[I]
+$$
+given by $(a_1, a_2) \mapsto (a_1, g(a_1, a_2))$.
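+To see that $g$ is a ring map, use that
+$a_2 - a_1, b_2 - b_1 \in I$ and $I^2 = 0$: for products we have
+$$
+a_2 b_2 - a_1 b_1 = a_2(b_2 - b_1) + (a_2 - a_1)b_1 =
+\overline{a_1}(b_2 - b_1) + \overline{b_1}(a_2 - a_1)
+$$
+in $I$, which agrees with the second component of the product
+$(\overline{a_1}, a_2 - a_1) \cdot (\overline{b_1}, b_2 - b_1)$
+in $A[I]$. An inverse of the displayed map is given by
+$(a_1, \overline{a_1} \oplus i) \mapsto (a_1, a_1 + i)$.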
+This isomorphism commutes with the projections to $A'$ on the first
+factor, and hence with the projections to $A$. Thus applying (RS*)
+twice we find equivalences of categories
+\begin{align*}
+\textit{Lift}(x, A') \times \textit{Lift}(x, A')
+& =
+\textit{Lift}(x, A' \times_A A') \\
+& =
+\textit{Lift}(x, A' \times_A A[I]) \\
+& =
+\textit{Lift}(x, A') \times \textit{Lift}(x, A[I])
+\end{align*}
+Using these maps and projection onto the last factor of the last product
+we see that we obtain ``difference maps''
+$$
+\text{Inf}(x'/x) \times \text{Inf}(x'/x)
+\longrightarrow
+\text{Inf}_x(I)
+\quad\text{and}\quad
+\text{Lift}(x, A') \times \text{Lift}(x, A')
+\longrightarrow
+T_x(I)
+$$
+These difference maps satisfy the transitivity rule
+``$(x'_1 - x'_2) + (x'_2 - x'_3) = x'_1 - x'_3$'' because
+$$
+\xymatrix{
+A' \times_A A' \times_A A'
+\ar[rrrrr]_-{(a_1, a_2, a_3) \mapsto (g(a_1, a_2), g(a_2, a_3))}
+\ar[rrrrrd]_{(a_1, a_2, a_3) \mapsto g(a_1, a_3)} & & & & &
+A[I] \times_A A[I] = A[I \times I] \ar[d]^{+} \\
+& & & & & A[I]
+}
+$$
+is commutative. Inverting the string of equivalences above we obtain
+an action which is free and transitive provided $\text{Inf}(x'/x)$,
+resp.\ $\text{Lift}(x, A')$ is nonempty. Note that $\text{Inf}(x'/x)$
+is always nonempty as it is a group.
+\end{proof}
+
+\begin{remark}[Functoriality]
+\label{remark-functoriality}
+Assumptions and notation as in Lemma \ref{lemma-properties-lift-RS-star}.
+Suppose $A \to B$ is a ring map and $y = x|_{\Spec(B)}$.
+Let $M \in \text{Mod}_A$, $N \in \text{Mod}_B$
+and let $M \to N$ be an $A$-linear map. Then there are canonical maps
+$\text{Inf}_x(M) \to \text{Inf}_y(N)$ and
+$T_x(M) \to T_y(N)$ simply because there is a pullback functor
+$$
+\textit{Lift}(x, A[M]) \to \textit{Lift}(y, B[N])
+$$
+coming from the ring map $A[M] \to B[N]$. Similarly, given a morphism of
+deformation situations $(y, B' \to B) \to (x, A' \to A)$ we obtain a pullback
+functor $\textit{Lift}(x, A') \to \textit{Lift}(y, B')$. Since the
+construction of the action, the addition, and the scalar multiplication
+on $\text{Inf}_x$ and $T_x$ use only morphisms in the categories of lifts
+(see proof of
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-linear-functor})
+we see that the constructions above are functorial. In other words we
+obtain $A$-linear maps
+$$
+\text{Inf}_x(M) \to \text{Inf}_y(N)
+\quad\text{and}\quad
+T_x(M) \to T_y(N)
+$$
+such that the diagrams
+$$
+\vcenter{
+\xymatrix{
+\text{Inf}_y(J) \ar[r] & \text{Inf}(y'/y) \\
+\text{Inf}_x(I) \ar[r] \ar[u] & \text{Inf}(x'/x) \ar[u]
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+T_y(J) \times \text{Lift}(y, B') \ar[r] & \text{Lift}(y, B') \\
+T_x(I) \times \text{Lift}(x, A') \ar[r] \ar[u] & \text{Lift}(x, A') \ar[u]
+}
+}
+$$
+commute. Here $I = \Ker(A' \to A)$, $J = \Ker(B' \to B)$,
+$x'$ is a lift of $x$ to $A'$ (which may not always exist) and
+$y' = x'|_{\Spec(B')}$.
+\end{remark}
+
+\begin{remark}[Automorphisms]
+\label{remark-automorphisms}
+Assumptions and notation as in Lemma \ref{lemma-properties-lift-RS-star}.
+Let $x', x''$ be lifts of $x$ to $A'$. Then we have a composition
+map
+$$
+\text{Inf}(x'/x) \times
+\Mor_{\textit{Lift}(x, A')}(x', x'') \times \text{Inf}(x''/x)
+\longrightarrow
+\Mor_{\textit{Lift}(x, A')}(x', x'').
+$$
+Since $\textit{Lift}(x, A')$ is a groupoid, if
+$\Mor_{\textit{Lift}(x, A')}(x', x'')$ is nonempty, then this defines
+a simply transitive left action of $\text{Inf}(x'/x)$ on
+$\Mor_{\textit{Lift}(x, A')}(x', x'')$ and a simply transitive
+right action by $\text{Inf}(x''/x)$. Now the lemma says that
+$\text{Inf}(x'/x) = \text{Inf}_x(I) = \text{Inf}(x''/x)$.
+We claim that the two actions described above agree via these identifications.
+Namely, either $x' \not \cong x''$, in which case the claim is clear, or
+$x' \cong x''$ and in that case we may assume that $x'' = x'$ in which
+case the result follows from the fact that $\text{Inf}(x'/x)$ is
+commutative. In particular, we obtain a well defined action
+$$
+\text{Inf}_x(I) \times \Mor_{\textit{Lift}(x, A')}(x', x'')
+\longrightarrow
+\Mor_{\textit{Lift}(x, A')}(x', x'')
+$$
+which is simply transitive as soon as $\Mor_{\textit{Lift}(x, A')}(x', x'')$
+is nonempty.
+\end{remark}
+
+\begin{remark}
+\label{remark-short-exact-sequence-thickenings}
+Let $S$ be a scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. Let $A$ be an $S$-algebra. There
+is a notion of a {\it short exact sequence}
+$$
+(x, A_1' \to A) \to (x, A_2' \to A) \to (x, A_3' \to A)
+$$
+of deformation situations: we ask that the corresponding maps between
+the kernels $I_i = \Ker(A_i' \to A)$ give a short exact sequence
+$$
+0 \to I_3 \to I_2 \to I_1 \to 0
+$$
+of $A$-modules. Note that in this case the map $A_3' \to A_1'$
+kills $I_3$, hence factors through $A$; the resulting section
+$A \to A_1'$ of $A_1' \to A$ splits off the square zero kernel
+and gives a canonical isomorphism $A_1' = A[I_1]$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-ses-inf-and-T}
+Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$
+and $q : \mathcal{Z} \to \mathcal{Y}$ be $1$-morphisms of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. Assume $\mathcal{X}$,
+$\mathcal{Y}$, $\mathcal{Z}$ satisfy (RS*).
+Let $A$ be an $S$-algebra and let $w$ be an object of
+$\mathcal{W} = \mathcal{X} \times_\mathcal{Y} \mathcal{Z}$ over $A$.
+Denote $x, y, z$ the objects of $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$
+you get from $w$. For any $A$-module $M$ there is a $6$-term exact sequence
+$$
+\xymatrix{
+0 \ar[r] &
+\text{Inf}_w(M) \ar[r] &
+\text{Inf}_x(M) \oplus \text{Inf}_z(M) \ar[r] &
+\text{Inf}_y(M) \ar[lld] \\
+ &
+T_w(M) \ar[r] &
+T_x(M) \oplus T_z(M) \ar[r] &
+T_y(M)
+}
+$$
+of $A$-modules.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-fibre-product-RS-star} we see that $\mathcal{W}$
+satisfies (RS*) and hence $T_w(M)$ and $\text{Inf}_w(M)$ are defined.
+The horizontal arrows are defined using the functoriality of
+Lemma \ref{lemma-functoriality}.
+
+\medskip\noindent
+Definition of the ``boundary'' map $\delta : \text{Inf}_y(M) \to T_w(M)$.
+Choose isomorphisms $p(x) \to y$ and $y \to q(z)$ such that
+$w = (x, z, p(x) \to y \to q(z))$ in the description of
+the $2$-fibre product of
+Categories, Lemma \ref{categories-lemma-2-product-fibred-categories}
+and more precisely
+Categories, Lemma \ref{categories-lemma-2-product-categories-over-C}.
+Let $x', y', z', w'$ denote the trivial deformation of
+$x, y, z, w$ over $A[M]$. By pullback we get isomorphisms
+$y' \to p(x')$ and $q(z') \to y'$. An element $\alpha \in \text{Inf}_y(M)$
+is the same thing as an automorphism $\alpha : y' \to y'$
+over $A[M]$ which restricts to the identity on $y$ over $A$.
+Thus setting
+$$
+\delta(\alpha) =
+(x', z', p(x') \to y' \xrightarrow{\alpha} y' \to q(z'))
+$$
+we obtain an object of $T_w(M)$. This is a map of $A$-modules
+by Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-morphism-linear-functors}.
+
+\medskip\noindent
+The rest of the proof is exactly the same as the proof of
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-deformation-categories-fiber-product-morphisms}.
+\end{proof}
+
+\begin{remark}[Compatibility with previous tangent spaces]
+\label{remark-compare-deformation-spaces}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. Assume $\mathcal{X}$ has (RS*).
+Let $k$ be a field of finite type over $S$ and let $x_0$ be an object of
+$\mathcal{X}$ over $\Spec(k)$. Then we have equalities of
+$k$-vector spaces
+$$
+T\mathcal{F}_{\mathcal{X}, k, x_0} = T_{x_0}(k)
+\quad\text{and}\quad
+\text{Inf}(\mathcal{F}_{\mathcal{X}, k, x_0}) =
+\text{Inf}_{x_0}(k)
+$$
+where the spaces on the left hand side of the equality signs are
+given in (\ref{equation-tangent-space}) and
+(\ref{equation-infinitesimal-automorphisms})
+and the spaces on the right hand side are given by
+Lemma \ref{lemma-properties-lift-RS-star}.
+\end{remark}
+
+\begin{remark}[Canonical element]
+\label{remark-canonical-element}
+Assumptions and notation as in Lemma \ref{lemma-properties-lift-RS-star}.
+Choose an affine open $\Spec(\Lambda) \subset S$ such that $\Spec(A) \to S$
+corresponds to a ring map $\Lambda \to A$. Consider the ring map
+$$
+A \longrightarrow A[\Omega_{A/\Lambda}],
+\quad
+a \longmapsto (a, \text{d}_{A/\Lambda}(a))
+$$
+Pulling back $x$ along the corresponding morphism
+$\Spec(A[\Omega_{A/\Lambda}]) \to \Spec(A)$ we obtain a
+deformation $x_{can}$ of $x$ over $A[\Omega_{A/\Lambda}]$. We call this
+the {\it canonical element}
+$$
+x_{can} \in T_x(\Omega_{A/\Lambda}) = \text{Lift}(x, A[\Omega_{A/\Lambda}]).
+$$
+Next, assume that $\Lambda$ is Noetherian and $\Lambda \to A$
+is of finite type. Let
+$k = \kappa(\mathfrak p)$ be a residue field at a finite type point $u_0$
+of $U = \Spec(A)$. Let $x_0 = x|_{u_0}$. By (RS*) and the fact that
+$A[k] = A \times_k k[k]$ the space $T_x(k)$ is the tangent space to the
+deformation functor $\mathcal{F}_{\mathcal{X}, k, x_0}$. Via
+$$
+T\mathcal{F}_{U, k, u_0} =
+\text{Der}_\Lambda(A, k) = \Hom_A(\Omega_{A/\Lambda}, k)
+$$
+(see Formal Deformation Theory, Example
+\ref{formal-defos-example-tangent-space-prorepresentable-functor})
+and functoriality of $T_x$ the canonical element produces the map
+on tangent spaces induced by the object $x$ over $U$. Namely,
+$\theta \in T\mathcal{F}_{U, k, u_0}$ maps to $T_x(\theta)(x_{can})$
+in $T_x(k) = T\mathcal{F}_{\mathcal{X}, k, x_0}$.
+\end{remark}
+
+\begin{remark}[Canonical automorphism]
+\label{remark-canonical-isomorphism}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. Assume $\mathcal{X}$ satisfies
+condition (RS*). Let $A$ be an $S$-algebra such that
+$\Spec(A) \to S$ maps into an affine open and let $x, y$ be objects of
+$\mathcal{X}$ over $\Spec(A)$. Further, let $A \to B$ be a ring map and
+let $\alpha : x|_{\Spec(B)} \to y|_{\Spec(B)}$ be a morphism of
+$\mathcal{X}$ over $\Spec(B)$. Consider the ring map
+$$
+B \longrightarrow B[\Omega_{B/A}],
+\quad
+b \longmapsto (b, \text{d}_{B/A}(b))
+$$
+Pulling back $\alpha$ along the corresponding morphism
+$\Spec(B[\Omega_{B/A}]) \to \Spec(B)$ we obtain a
+morphism $\alpha_{can}$ between the pullbacks of $x$ and $y$ over
+$B[\Omega_{B/A}]$. On the other hand, we can pull back $\alpha$
+by the morphism $\Spec(B[\Omega_{B/A}]) \to \Spec(B)$ corresponding
+to the injection of $B$ into the first summand of $B[\Omega_{B/A}]$.
+By the discussion of Remark \ref{remark-automorphisms}
+we can take the difference
+$$
+\varphi(x, y, \alpha) = \alpha_{can} - \alpha|_{\Spec(B[\Omega_{B/A}])} \in
+\text{Inf}_{x|_{\Spec(B)}}(\Omega_{B/A}).
+$$
+We will call this the {\it canonical automorphism}. It depends
+on all the ingredients $A$, $x$, $y$, $A \to B$ and $\alpha$.
+\end{remark}
+
+
+
+
+
+\section{Obstruction theories}
+\label{section-obstruction-theory}
+
+\noindent
+In this section we describe what an obstruction theory is.
+Contrary to the spaces of infinitesimal deformations and infinitesimal
+automorphisms, an obstruction theory is an additional piece of data.
+The formulation is motivated by the results of
+Lemma \ref{lemma-properties-lift-RS-star}
+and Remark \ref{remark-functoriality}.
+
+\begin{definition}
+\label{definition-obstruction-theory}
+Let $S$ be a locally Noetherian base. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. An {\it obstruction theory} is
+given by the following data
+\begin{enumerate}
+\item for every $S$-algebra $A$ such that $\Spec(A) \to S$
+maps into an affine open and every object $x$ of $\mathcal{X}$ over
+$\Spec(A)$ an $A$-linear functor
+$$
+\mathcal{O}_x : \text{Mod}_A \to \text{Mod}_A
+$$
+of {\it obstruction modules},
+\item for $(x, A)$ as in (1), a ring map $A \to B$,
+$M \in \text{Mod}_A$, $N \in \text{Mod}_B$, and an $A$-linear
+map $M \to N$ an induced $A$-linear map $\mathcal{O}_x(M) \to \mathcal{O}_y(N)$
+where $y = x|_{\Spec(B)}$, and
+\item for every deformation situation $(x, A' \to A)$ an
+{\it obstruction} element
+$o_x(A') \in \mathcal{O}_x(I)$ where $I = \Ker(A' \to A)$.
+\end{enumerate}
+These data are subject to the following conditions
+\begin{enumerate}
+\item[(i)] the functoriality maps turn the obstruction modules into a functor
+from the category of triples $(x, A, M)$ to sets,
+\item[(ii)] for every morphism of deformation situations
+$(y, B' \to B) \to (x, A' \to A)$ the element $o_x(A')$ maps
+to $o_y(B')$, and
+\item[(iii)] we have
+$$
+\text{Lift}(x, A') \not = \emptyset
+\Leftrightarrow
+o_x(A') = 0
+$$
+for every deformation situation $(x, A' \to A)$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+This last condition explains the terminology. The module $\mathcal{O}_x(I)$
+is called the {\it obstruction module}. The element $o_x(A')$ is the
+{\it obstruction}.
+Most obstruction theories have additional properties, and in order to
+make them useful additional conditions are needed.
+Moreover, this is just a sample definition; for example, in the definition
+we could consider only deformation situations of finite type over $S$.
+
+\medskip\noindent
+One of the main reasons for introducing obstruction theories is to check
+openness of versality. An example of this type of result is
+Lemma \ref{lemma-get-openness-obstruction-theory} below.
+The initial idea to do this is due to Artin, see
+the papers of Artin mentioned in the introduction. It has been taken up
+for example in the work by Flenner \cite{Flenner},
+Hall \cite{Hall-coherent},
+Hall and Rydh \cite{rydh_axioms},
+Olsson \cite{olsson_deformation},
+Olsson and Starr \cite{olsson-starr}, and
+Lieblich \cite{lieblich-complexes} (random order of references).
+Moreover, for particular categories fibred in groupoids, often
+authors develop a little bit of theory adapted to the problem at hand.
+We will develop this theory later (insert future reference here).
+
+\begin{lemma}
+\label{lemma-get-openness-obstruction-theory}
+\begin{reference}
+This is \cite[Theorem 4.4]{Hall-coherent}
+\end{reference}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. Assume
+\begin{enumerate}
+\item $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$ is
+representable by algebraic spaces,
+\item $\mathcal{X}$ has (RS*),
+\item $\mathcal{X}$ is limit preserving,
+\item there exists an obstruction theory\footnote{Analyzing the proof
+the reader sees that in fact it suffices to check
+the functoriality (ii) of obstruction classes in
+Definition \ref{definition-obstruction-theory}
+for maps $(y, B' \to B) \to (x, A' \to A)$
+with $B = A$ and $y = x$.},
+\item for an object $x$ of $\mathcal{X}$ over $\Spec(A)$
+and $A$-modules $M_n$, $n \geq 1$ we have
+\begin{enumerate}
+\item $T_x(\prod M_n) = \prod T_x(M_n)$,
+\item $\mathcal{O}_x(\prod M_n) \to \prod \mathcal{O}_x(M_n)$
+is injective.
+\end{enumerate}
+\end{enumerate}
+Then $\mathcal{X}$ satisfies openness of versality.
+\end{lemma}
+
+\begin{proof}
+We prove this by verifying condition (4) of
+Lemma \ref{lemma-SGE-implies-openness-versality}.
+Let $(\xi_n)$ and $(R_n)$ be as in Remark \ref{remark-strong-effectiveness}
+such that $\Ker(R_m \to R_n)$ is an ideal of square zero
+for all $m \geq n$. Set $A = R_1$ and $x = \xi_1$.
+Denote $M_n = \Ker(R_n \to R_1)$.
+Then $M_n$ is an $A$-module. Set $R = \lim R_n$.
+Let
+$$
+\tilde R = \{(r_1, r_2, r_3, \ldots) \in \prod R_n
+\text{ such that all have the same image in }A\}
+$$
+Then $\tilde R \to A$ is surjective with kernel $M = \prod M_n$.
+There is a map $R \to \tilde R$ and a map
+$\tilde R \to A[M]$, $(r_1, r_2, r_3, \ldots) \mapsto
+(r_1, r_2 - r_1, r_3 - r_2, \ldots)$.
+Together these give a short exact sequence
+$$
+(x, R \to A) \to (x, \tilde R \to A) \to (x, A[M] \to A)
+$$
+of deformation situations, see
+Remark \ref{remark-short-exact-sequence-thickenings}.
+The associated sequence of kernels
+$0 \to \lim M_n \to M \to M \to 0$
+is the canonical sequence computing the limit
+of the system of modules $(M_n)$.
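+
+\medskip\noindent
+Explicitly, with the evident abuse of notation for the transition
+maps, this is the sequence
+$$
+0 \to \lim M_n \to \prod M_n
+\xrightarrow{(m_n) \mapsto (m_{n + 1} - m_n)}
+\prod M_n \to 0
+$$
+which is exact on the right because the transition maps
+$M_{n + 1} \to M_n$ are surjective (being induced on kernels by the
+surjections $R_{n + 1} \to R_n$), so the system $(M_n)$ satisfies
+the Mittag-Leffler condition and $R^1\lim M_n = 0$.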
+
+\medskip\noindent
+Let $o_x(\tilde R) \in \mathcal{O}_x(M)$ be the obstruction element.
+Since we have the lifts $\xi_n$ we see that $o_x(\tilde R)$
+maps to zero in $\mathcal{O}_x(M_n)$. By assumption (5)(b)
+we see that $o_x(\tilde R) = 0$. Choose a lift $\tilde \xi$
+of $x$ to $\Spec(\tilde R)$. Let $\tilde \xi_n$ be the
+restriction of $\tilde \xi$ to $\Spec(R_n)$. There exist
+elements $t_n \in T_x(M_n)$ such that
+$t_n \cdot \tilde \xi_n = \xi_n$ by
+Lemma \ref{lemma-properties-lift-RS-star} part (2)(b).
+By assumption (5)(a) we can find $t \in T_x(M)$
+mapping to $t_n$ in $T_x(M_n)$. After replacing
+$\tilde \xi$ by $t \cdot \tilde \xi$ we find that
+$\tilde \xi$ restricts to $\xi_n$ over $\Spec(R_n)$ for all $n$.
+In particular, since $\xi_{n + 1}$ restricts to $\xi_n$
+over $\Spec(R_n)$, the restriction $\overline{\xi}$ of $\tilde \xi$
+to $\Spec(A[M])$ has the property that it restricts to
+the trivial deformation over $\Spec(A[M_n])$ for all $n$.
+Hence by assumption (5)(a) we find that $\overline{\xi}$
+is the trivial deformation of $x$. By axiom (RS*)
+applied to $R = \tilde R \times_{A[M]} A$
+this implies that $\tilde \xi$ is the pullback
+of a deformation $\xi$ of $x$ over $R$. This finishes the proof.
+\end{proof}
+
+\begin{example}
+\label{example-global-sections}
+Let $S = \Spec(\Lambda)$ for some Noetherian ring $\Lambda$.
+Let $W \to S$ be a morphism of schemes. Let $\mathcal{F}$
+be a quasi-coherent $\mathcal{O}_W$-module flat over $S$.
+Consider the functor
+$$
+F : (\Sch/S)_{fppf}^{opp} \longrightarrow \textit{Sets},
+\quad
+T/S \longmapsto H^0(W_T, \mathcal{F}_T)
+$$
+where $W_T = T \times_S W$ is the base change and $\mathcal{F}_T$ is
+the pullback of $\mathcal{F}$ to $W_T$. If $T = \Spec(A)$
+we will write $W_T = W_A$, etc. Let $\mathcal{X} \to (\Sch/S)_{fppf}$
+be the category fibred in groupoids associated to $F$. Then
+$\mathcal{X}$ has an obstruction theory. Namely,
+\begin{enumerate}
+\item given $A$ over $\Lambda$ and
+$x \in H^0(W_A, \mathcal{F}_A)$ we set
+$\mathcal{O}_x(M) = H^1(W_A, \mathcal{F}_A \otimes_A M)$,
+\item given a deformation situation $(x, A' \to A)$ with
+$I = \Ker(A' \to A)$ we let
+$o_x(A') \in \mathcal{O}_x(I)$ be the image of $x$ under the boundary map
+$$
+H^0(W_A, \mathcal{F}_A) \longrightarrow H^1(W_A, \mathcal{F}_A \otimes_A I)
+$$
+coming from the short exact sequence of modules
+$$
+0 \to \mathcal{F}_A \otimes_A I \to
+\mathcal{F}_{A'} \to \mathcal{F}_A \to 0.
+$$
+\end{enumerate}
+We have omitted some details, in particular the construction of the short
+exact sequence above (it uses that $W_A$ and $W_{A'}$ have the same
+underlying topological space) and the explanation for why flatness
+of $\mathcal{F}$ over $S$ implies that the sequence above is short exact.
+\end{example}
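+
+\medskip\noindent
+For the example above, the exactness of the displayed sequence can be
+seen as follows. Since flatness is stable under base change,
+$\mathcal{F}_{A'}$ is flat over $A'$, so tensoring the short exact
+sequence $0 \to I \to A' \to A \to 0$ with $\mathcal{F}_{A'}$
+over $A'$ gives
+$$
+0 \to \mathcal{F}_{A'} \otimes_{A'} I \to
+\mathcal{F}_{A'} \to \mathcal{F}_A \to 0
+$$
+and $\mathcal{F}_{A'} \otimes_{A'} I = \mathcal{F}_A \otimes_A I$
+because $I$ is an $A$-module (the kernel being square zero).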
+
+\begin{example}[Key example]
+\label{example-key}
+Let $S = \Spec(\Lambda)$ for some Noetherian ring $\Lambda$.
+Say $\mathcal{X} = (\Sch/X)_{fppf}$ with $X = \Spec(R)$ and
+$R = \Lambda[x_1, \ldots, x_n]/J$. The naive cotangent
+complex $\NL_{R/\Lambda}$ is (canonically) homotopy equivalent to
+$$
+J/J^2
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} R\text{d}x_i,
+$$
+see Algebra, Lemma \ref{algebra-lemma-NL-homotopy}.
+Consider a deformation situation $(x, A' \to A)$. Denote $I$ the kernel of
+$A' \to A$. The object $x$ corresponds to $(a_1, \ldots, a_n)$
+with $a_i \in A$ such that $f(a_1, \ldots, a_n) = 0$ in $A$ for all $f \in J$.
+Set
+\begin{align*}
+\mathcal{O}_x(A')
+& =
+\Hom_R(J/J^2, I)/\Hom_R(R^{\oplus n}, I) \\
+& =
+\Ext^1_R(\NL_{R/\Lambda}, I) \\
+& =
+\Ext^1_A(\NL_{R/\Lambda} \otimes_R A, I).
+\end{align*}
+Choose lifts $a_i' \in A'$ of $a_i$ in $A$. Then $o_x(A')$
+is the class of the map $J/J^2 \to I$ defined by sending $f \in J$ to
f(a_1', \ldots, a_n') \in I$. We omit the verification that
+$o_x(A')$ is independent of choices. It is clear that if $o_x(A') = 0$
+then the map lifts. Finally, functoriality is straightforward.
+Thus we obtain an obstruction theory. We observe that $o_x(A')$
+can be described a bit more canonically as the composition
+$$
+\NL_{R/\Lambda} \to \NL_{A/\Lambda} \to \NL_{A/A'} = I[1]
+$$
+in $D(A)$, see Algebra, Lemma \ref{algebra-lemma-NL-surjection}
+for the last identification.
+\end{example}
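\medskip\noindent
One way to carry out the verification omitted in Example \ref{example-key}
is the following sketch. If $a_i'' \in A'$ is a second lift of $a_i$, write
$a_i'' = a_i' + \epsilon_i$ with $\epsilon_i \in I$. Since $I$ has square
zero, a Taylor expansion gives, for $f \in J$,
$$
f(a_1'', \ldots, a_n'') =
f(a_1', \ldots, a_n') +
\sum\nolimits_i \frac{\partial f}{\partial x_i}(a_1, \ldots, a_n)\,\epsilon_i
$$
in $I$. Hence the two maps $J/J^2 \to I$ differ by the image of the element
of $\Hom_R(R^{\oplus n}, I)$ sending $\text{d}x_i$ to $\epsilon_i$, so the
class $o_x(A')$ in the quotient $\mathcal{O}_x(A')$ is unchanged.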
+
+
+
+
+
+
+
+
+
+\section{Naive obstruction theories}
+\label{section-naive-obstruction-theory}
+
+\noindent
+The title of this section refers to the fact that we will use the
+naive cotangent complex in this section. Let $(x, A' \to A)$
+be a deformation situation for a given category fibred in groupoids over a
+locally Noetherian scheme $S$. The key Example \ref{example-key}
+suggests that any obstruction theory should be closely related to
+maps in $D(A)$ with target the naive cotangent complex of $A$.
+Working this out we find a criterion for versality in
+Lemma \ref{lemma-characterize-versal} which leads to a criterion for
+openness of versality in Lemma \ref{lemma-openness}. We introduce a notion of
+a naive obstruction theory in
+Definition \ref{definition-naive-obstruction-theory} to try to formalize
+the notion a bit further.
+
+\medskip\noindent
+In the following we will use the naive cotangent complex as
+defined in Algebra, Section \ref{algebra-section-netherlander}.
+In particular, if $A' \to A$ is a surjection of $\Lambda$-algebras
+with square zero kernel $I$, then there are maps
+$$
+\NL_{A'/\Lambda} \to \NL_{A/\Lambda} \to \NL_{A/A'}
+$$
+whose composition is homotopy equivalent to zero (see
+Algebra, Remark \ref{algebra-remark-composition-homotopy-equivalent-to-zero}).
+This doesn't form a distinguished triangle in general as we are using
+the naive cotangent complex and not the full one.
+There is a homotopy equivalence $\NL_{A/A'} \to I[1]$ (the complex
+consisting of $I$ placed in degree $-1$, see
+Algebra, Lemma \ref{algebra-lemma-NL-surjection}).
+Finally, note that there is a canonical map
+$\NL_{A/\Lambda} \to \Omega_{A/\Lambda}$.
+
+\begin{lemma}
+\label{lemma-compute-ext-into-field}
+Let $A \to k$ be a ring map with $k$ a field. Let $E \in D^-(A)$.
Then $\Ext^i_A(E, k) = \Hom_k(H^{-i}(E \otimes_A^\mathbf{L} k), k)$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Replace $E$ by a bounded above complex of free $A$-modules
+and compute both sides.
+\end{proof}
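\medskip\noindent
Here is one way to carry out the computation suggested by the hint. Choose
a quasi-isomorphism $F^\bullet \to E$ with $F^\bullet$ a bounded above
complex of free $A$-modules. For a free $A$-module $F$ any $A$-linear map
$F \to k$ factors uniquely through $F \otimes_A k$, whence an identification
of complexes $\Hom_A(F^\bullet, k) = \Hom_k(F^\bullet \otimes_A k, k)$.
Since $\Hom_k(-, k)$ is exact on $k$-vector spaces we obtain
$$
\Ext^i_A(E, k) = H^i(\Hom_A(F^\bullet, k)) =
\Hom_k(H^{-i}(F^\bullet \otimes_A k), k) =
\Hom_k(H^{-i}(E \otimes_A^\mathbf{L} k), k)
$$
as $F^\bullet \otimes_A k$ represents $E \otimes_A^\mathbf{L} k$.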
+
+\begin{lemma}
+\label{lemma-construct-essential-surjection}
+Let $\Lambda \to A \to k$ be finite type ring maps of Noetherian rings with
+$k = \kappa(\mathfrak p)$ for some prime $\mathfrak p$ of $A$. Let
$\xi : E \to \NL_{A/\Lambda}$ be a morphism of $D^{-}(A)$ such that
+$H^{-1}(\xi \otimes^{\mathbf{L}} k)$ is not surjective.
+Then there exists a surjection $A' \to A$ of $\Lambda$-algebras
+such that
+\begin{enumerate}
+\item[(a)] $I = \Ker(A' \to A)$ has square zero and is isomorphic to $k$
+as an $A$-module,
+\item[(b)] $\Omega_{A'/\Lambda} \otimes k = \Omega_{A/\Lambda} \otimes k$, and
+\item[(c)] $E \to \NL_{A/A'}$ is zero.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $f \in A$, $f \not \in \mathfrak p$. Suppose that $A'' \to A_f$
+satisfies (a), (b), (c) for the induced map
+$E \otimes_A A_f \to \NL_{A_f/\Lambda}$, see
+Algebra, Lemma \ref{algebra-lemma-localize-NL}.
+Then we can set $A' = A'' \times_{A_f} A$ and get a solution.
+Namely, it is clear that $A' \to A$ satisfies (a) because
$\Ker(A' \to A) = \Ker(A'' \to A_f) = I$. Pick
+$f'' \in A''$ lifting $f$. Then the localization of $A'$ at
+$(f'', f)$ is isomorphic to $A''$
+(for example by
+More on Algebra, Lemma \ref{more-algebra-lemma-diagram-localize}).
+Thus (b) and (c) are clear for $A'$ too.
+In this way we see that we may replace $A$ by the localization
+$A_f$ (finitely many times).
+In particular (after such a replacement) we may assume that $\mathfrak p$
+is a maximal ideal of $A$, see
+Morphisms, Lemma \ref{morphisms-lemma-point-finite-type}.
+
+\medskip\noindent
+Choose a presentation $A = \Lambda[x_1, \ldots, x_n]/J$. Then
+$\NL_{A/\Lambda}$ is (canonically) homotopy equivalent to
+$$
+J/J^2
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} A\text{d}x_i,
+$$
+see Algebra, Lemma \ref{algebra-lemma-NL-homotopy}. After localizing
+if necessary (using Nakayama's lemma) we can choose generators
+$f_1, \ldots, f_m$ of $J$ such that $f_j \otimes 1$ form a basis for
+$J/J^2 \otimes_A k$. Moreover, after renumbering, we can assume that the
+images of $\text{d}f_1, \ldots, \text{d}f_r$ form a
+basis for the image of $J/J^2 \otimes k \to \bigoplus k\text{d}x_i$
+and that $\text{d}f_{r + 1}, \ldots, \text{d}f_m$ map to zero in
+$\bigoplus k\text{d}x_i$. With these choices the space
+$$
+H^{-1}(\NL_{A/\Lambda} \otimes^{\mathbf{L}}_A k) =
+H^{-1}(\NL_{A/\Lambda} \otimes_A k)
+$$
+has basis $f_{r + 1} \otimes 1, \ldots, f_m \otimes 1$. Changing basis
+once again we may assume that the image of $H^{-1}(\xi \otimes^{\mathbf{L}} k)$
+is contained in the $k$-span of
+$f_{r + 1} \otimes 1, \ldots, f_{m - 1} \otimes 1$.
+Set
+$$
A' = \Lambda[x_1, \ldots, x_n]/(f_1, \ldots, f_{m - 1}, \mathfrak p f_m)
+$$
+By construction $A' \to A$ satisfies (a). Since $\text{d}f_m$ maps
+to zero in $\bigoplus k\text{d}x_i$ we see that (b) holds. Finally, by
+construction the induced map $E \to \NL_{A/A'} = I[1]$ induces the zero map
+$H^{-1}(E \otimes_A^\mathbf{L} k) \to I \otimes_A k$. By
+Lemma \ref{lemma-compute-ext-into-field}
+we see that the composition is zero.
+\end{proof}
+
+\noindent
+The following lemma is our key technical result.
+
+\begin{lemma}
+\label{lemma-characterize-versal}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$ satisfying (RS*).
+Let $U = \Spec(A)$ be an
+affine scheme of finite type over $S$ which maps into an affine open
+$\Spec(\Lambda)$. Let $x$ be an object of $\mathcal{X}$ over $U$.
+Let $\xi : E \to \NL_{A/\Lambda}$ be a morphism of $D^{-}(A)$. Assume
+\begin{enumerate}
+\item[(i)] for every deformation situation $(x, A' \to A)$ we have:
+$x$ lifts to $\Spec(A')$ if and only if
+$E \to \NL_{A/\Lambda} \to \NL_{A/A'}$ is zero, and
+\item[(ii)] there is an isomorphism of functors
+$T_x(-) \to \Ext^0_A(E, -)$
such that $E \to \NL_{A/\Lambda} \to \Omega_{A/\Lambda}$
+corresponds to the canonical element (see
+Remark \ref{remark-canonical-element}).
+\end{enumerate}
+Let $u_0 \in U$ be a finite type point with residue field
+$k = \kappa(u_0)$. Consider the following statements
+\begin{enumerate}
+\item $x$ is versal at $u_0$, and
+\item $\xi : E \to \NL_{A/\Lambda}$ induces a surjection
+$H^{-1}(E \otimes_A^{\mathbf{L}} k) \to
+H^{-1}(\NL_{A/\Lambda} \otimes_A^{\mathbf{L}} k)$
+and an injection
+$H^0(E \otimes_A^{\mathbf{L}} k) \to
+H^0(\NL_{A/\Lambda} \otimes_A^{\mathbf{L}} k)$.
+\end{enumerate}
+Then we always have (2) $\Rightarrow$ (1) and we have (1) $\Rightarrow$ (2)
+if $u_0$ is a closed point.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p = \Ker(A \to k)$ be the prime corresponding to $u_0$.
+
+\medskip\noindent
Assume that $x$ is versal at $u_0$ and that $u_0$ is a closed point of $U$.
+If $H^{-1}(\xi \otimes_A^{\mathbf{L}} k)$ is not surjective, then
+let $A' \to A$ be an extension with kernel $I$ as in
+Lemma \ref{lemma-construct-essential-surjection}.
+Because $u_0$ is a closed point, we see that $I$ is a finite $A$-module,
+hence that $A'$ is a finite type $\Lambda$-algebra (this fails if
+$u_0$ is not closed). In particular $A'$ is Noetherian.
+By property (c) for $A'$ and (i) for $\xi$ we see that $x$ lifts to
+an object $x'$ over $A'$.
Let $\mathfrak p' \subset A'$ be the kernel of the surjective map to $k$.
+By Artin-Rees (Algebra, Lemma \ref{algebra-lemma-Artin-Rees})
+there exists an $n > 1$ such that $(\mathfrak p')^n \cap I = 0$.
+Then we see that
+$$
+B' = A'/(\mathfrak p')^n \longrightarrow A/\mathfrak p^n = B
+$$
+is a small, essential extension of local Artinian rings, see
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-essential-surjection}.
+On the other hand, as $x$ is versal at $u_0$ and as $x'|_{\Spec(B')}$
+is a lift of $x|_{\Spec(B)}$, there exists an integer
+$m \geq n$ and a map $q : A/\mathfrak p^m \to B'$
+such that the composition
+$A/\mathfrak p^m \to B' \to B$ is the quotient map.
+Since the maximal ideal of $B'$ has $n$th power equal to zero, this
+$q$ factors through $B$ which contradicts the fact that $B' \to B$ is an
+essential surjection. This contradiction shows that
+$H^{-1}(\xi \otimes_A^{\mathbf{L}} k)$
+is surjective.
+
+\medskip\noindent
Assume that $x$ is versal at $u_0$. By Lemma \ref{lemma-compute-ext-into-field}
+the map $H^0(\xi \otimes_A^{\mathbf{L}} k)$ is dual to the map
$\Ext^0_A(\NL_{A/\Lambda}, k) \to \Ext^0_A(E, k)$. Note that
+$$
+\Ext^0_A(\NL_{A/\Lambda}, k) = \text{Der}_\Lambda(A, k)
+\quad\text{and}\quad
+T_x(k) = \Ext^0_A(E, k)
+$$
Condition (ii) assures us that the map
$\Ext^0_A(\NL_{A/\Lambda}, k) \to \Ext^0_A(E, k)$
+sends a tangent vector $\theta$ to $U$ at $u_0$ to the corresponding
+infinitesimal deformation of $x_0$, see Remark \ref{remark-canonical-element}.
+Hence if $x$ is versal, then this map is surjective, see
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-versal-criterion}.
+Hence $H^0(\xi \otimes_A^{\mathbf{L}} k)$ is injective.
+This finishes the proof of (1) $\Rightarrow$ (2) in case $u_0$ is a
+closed point.
+
+\medskip\noindent
+For the rest of the proof assume $H^{-1}(E \otimes_A^\mathbf{L} k) \to
+H^{-1}(\NL_{A/\Lambda} \otimes_A^\mathbf{L} k)$
+is surjective and
+$H^0(E \otimes_A^\mathbf{L} k) \to
+H^0(\NL_{A/\Lambda} \otimes_A^\mathbf{L} k)$
+injective. Set $R = A_\mathfrak p^\wedge$ and let $\eta$ be the
+formal object over $R$ associated to $x|_{\Spec(R)}$.
+The map $d\underline{\eta}$ on tangent spaces is surjective
+because it is identified with the dual of the injective map
+$H^0(E \otimes_A^{\mathbf{L}} k) \to
+H^0(\NL_{A/\Lambda} \otimes_A^{\mathbf{L}} k)$
+(see previous paragraph). According to
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-versal-criterion}
+it suffices to prove the following:
+Let $C' \to C$ be a small extension of finite type Artinian local
+$\Lambda$-algebras with residue field $k$. Let $R \to C$ be a
+$\Lambda$-algebra map compatible with identifications of residue fields.
+Let $y = x|_{\Spec(C)}$ and let $y'$ be a lift of $y$ to $C'$.
+To show: we can lift the $\Lambda$-algebra map $R \to C$ to $R \to C'$.
+
+\medskip\noindent
+Observe that it suffices to lift the $\Lambda$-algebra map $A \to C$.
+Let $I = \Ker(C' \to C)$. Note that $I$ is a $1$-dimensional $k$-vector
+space. The obstruction $ob$ to lifting $A \to C$ is an element of
+$\Ext^1_A(\NL_{A/\Lambda}, I)$, see Example \ref{example-key}.
+By Lemma \ref{lemma-compute-ext-into-field} and our assumption the map
+$\xi$ induces an injection
+$$
+\Ext^1_A(\NL_{A/\Lambda}, I)
+\longrightarrow
+\Ext^1_A(E, I)
+$$
+By the construction of $ob$ and (i) the image of $ob$ in $\Ext^1_A(E, I)$
+is the obstruction to lifting $x$ to $A \times_C C'$. By (RS*) the fact that
+$y/C$ lifts to $y'/C'$ implies that $x$ lifts to $A \times_C C'$. Hence
+$ob = 0$ and we are done.
+\end{proof}
+
+\noindent
+The key lemma above allows us to conclude that we have openness of
+versality in some cases.
+
+\begin{lemma}
+\label{lemma-openness}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$ satisfying (RS*).
+Let $U = \Spec(A)$ be an affine scheme of finite type over $S$ which maps
+into an affine open $\Spec(\Lambda)$. Let $x$ be an object of $\mathcal{X}$
+over $U$. Let $\xi : E \to \NL_{A/\Lambda}$ be a morphism of $D^{-}(A)$.
+Assume
+\begin{enumerate}
+\item[(i)] for every deformation situation $(x, A' \to A)$ we have:
+$x$ lifts to $\Spec(A')$ if and only if
+$E \to \NL_{A/\Lambda} \to \NL_{A/A'}$ is zero,
+\item[(ii)] there is an isomorphism of functors
+$T_x(-) \to \Ext^0_A(E, -)$
such that $E \to \NL_{A/\Lambda} \to \Omega_{A/\Lambda}$
+corresponds to the canonical element (see
+Remark \ref{remark-canonical-element}),
+\item[(iii)] the cohomology groups of $E$ are finite $A$-modules.
+\end{enumerate}
+If $x$ is versal at a closed point $u_0 \in U$,
+then there exists an open neighbourhood $u_0 \in U' \subset U$
+such that $x$ is versal at every finite type point of $U'$.
+\end{lemma}
+
+\begin{proof}
+Let $C$ be the cone of $\xi$ so that we have a distinguished triangle
+$$
+E \to \NL_{A/\Lambda} \to C \to E[1]
+$$
+in $D^{-}(A)$. By Lemma \ref{lemma-characterize-versal}
+the assumption that $x$ is versal at $u_0$ implies that
$H^{-1}(C \otimes_A^\mathbf{L} k) = 0$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-cut-complex-in-two}
+there exists an $f \in A$ not contained in the prime corresponding to
+$u_0$ such that $H^{-1}(C \otimes^\mathbf{L}_A M) = 0$ for
+any $A_f$-module $M$. Using
+Lemma \ref{lemma-characterize-versal}
+again we see that we have versality for all finite type points of
+the open $D(f) \subset U$.
+\end{proof}
+
+\noindent
+The technical lemmas above suggest the following definition.
+
+\begin{definition}
+\label{definition-naive-obstruction-theory}
+Let $S$ be a locally Noetherian base. Let $\mathcal{X}$ be a category fibred
+in groupoids over $(\Sch/S)_{fppf}$. Assume that $\mathcal{X}$
+satisfies (RS*). A {\it naive obstruction theory} is
+given by the following data
+\begin{enumerate}
+\item
+\label{item-map}
+for every $S$-algebra $A$ such that $\Spec(A) \to S$
+maps into an affine open $\Spec(\Lambda) \subset S$ and every object $x$
+of $\mathcal{X}$ over $\Spec(A)$ we are given an object $E_x \in D^-(A)$
and a map $\xi_x : E_x \to \NL_{A/\Lambda}$,
+\item
+\label{item-inf}
+given $(x, A)$ as in (\ref{item-map}) there are transformations of
+functors
+$$
+\text{Inf}_x( - ) \to \Ext^{-1}_A(E_x, -)
+\quad\text{and}\quad
+T_x(-) \to \Ext^0_A(E_x, -)
+$$
+\item
+\label{item-functoriality}
+for $(x, A)$ as in (\ref{item-map}) and a ring map $A \to B$
+setting $y = x|_{\Spec(B)}$ there is a functoriality map
+$E_x \to E_y$ in $D(A)$.
+\end{enumerate}
+These data are subject to the following conditions
+\begin{enumerate}
+\item[(i)]
+in the situation of (\ref{item-functoriality}) the diagram
+$$
+\xymatrix{
+E_y \ar[r]_{\xi_y} & \NL_{B/\Lambda} \\
+E_x \ar[u] \ar[r]^{\xi_x} & \NL_{A/\Lambda} \ar[u]
+}
+$$
+is commutative in $D(A)$,
+\item[(ii)]
+given $(x, A)$ as in (\ref{item-map}) and $A \to B \to C$
+setting $y = x|_{\Spec(B)}$ and $z = x|_{\Spec(C)}$ the
+composition of the functoriality maps $E_x \to E_y$ and $E_y \to E_z$ is
+the functoriality map $E_x \to E_z$,
+\item[(iii)]
+the maps of (\ref{item-inf}) are isomorphisms
+compatible with the functoriality
+maps and the maps of Remark \ref{remark-functoriality},
+\item[(iv)]
+the composition $E_x \to \NL_{A/\Lambda} \to \Omega_{A/\Lambda}$
+corresponds to the canonical element of
+$T_x(\Omega_{A/\Lambda}) = \Ext^0(E_x, \Omega_{A/\Lambda})$, see
+Remark \ref{remark-canonical-element},
+\item[(v)]
+given a deformation situation $(x, A' \to A)$ with $I = \Ker(A' \to A)$
+the composition $E_x \to \NL_{A/\Lambda} \to \NL_{A/A'}$ is zero in
+$$
\Hom_A(E_x, \NL_{A/A'}) = \Ext^0_A(E_x, \NL_{A/A'}) =
+\Ext^1_A(E_x, I)
+$$
+if and only if $x$ lifts to $A'$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Thus we see in particular that we obtain an obstruction theory
+as in Section \ref{section-obstruction-theory} by setting
+$\mathcal{O}_x( - ) = \Ext^1_A(E_x, -)$.
+
+\begin{lemma}
+\label{lemma-naive-obstruction-theory-qis}
+Let $S$ and $\mathcal{X}$ be as in
+Definition \ref{definition-naive-obstruction-theory}
+and let $\mathcal{X}$ be endowed with a naive obstruction theory.
+Let $A \to B$ and $y \to x$ be as in (\ref{item-functoriality}).
+Let $k$ be a $B$-algebra which is a field. Then the functoriality
+map $E_x \to E_y$ induces bijections
+$$
+H^i(E_x \otimes_A^{\mathbf{L}} k) \to H^i(E_y \otimes_B^{\mathbf{L}} k)
+$$
+for $i = 0, 1$.
+\end{lemma}
+
+\begin{proof}
+Let $z = x|_{\Spec(k)}$. Then (RS*) implies that
+$$
+\textit{Lift}(x, A[k]) = \textit{Lift}(z, k[k])
+\quad\text{and}\quad
+\textit{Lift}(y, B[k]) = \textit{Lift}(z, k[k])
+$$
+because $A[k] = A \times_k k[k]$ and $B[k] = B \times_k k[k]$.
+Hence the properties of a naive obstruction theory imply that the
+functoriality map $E_x \to E_y$ induces bijections
$\Ext^i_A(E_x, k) \to \Ext^i_B(E_y, k)$
+for $i = -1, 0$. By Lemma \ref{lemma-compute-ext-into-field} our maps
+$H^i(E_x \otimes_A^{\mathbf{L}} k) \to H^i(E_y \otimes_B^{\mathbf{L}} k)$,
+$i = 0, 1$ induce isomorphisms on dual vector spaces hence are isomorphisms.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-naive-obstruction-theory-gives-openness}
+Let $S$ be a locally Noetherian scheme. Let
$p : \mathcal{X} \to (\Sch/S)_{fppf}$ be a category fibred in groupoids.
+Assume that $\mathcal{X}$ satisfies (RS*)
+and that $\mathcal{X}$ has a naive obstruction theory.
+Then openness of versality holds for $\mathcal{X}$ provided the
+complexes $E_x$ of Definition \ref{definition-naive-obstruction-theory}
+have finitely generated cohomology groups for pairs $(A, x)$ where
+$A$ is of finite type over $S$.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be a scheme locally of finite type over $S$, let $x$ be an object of
+$\mathcal{X}$ over $U$, and let $u_0$ be a finite type point of $U$ such that
+$x$ is versal at $u_0$. We may first shrink $U$ to an affine scheme such
+that $u_0$ is a closed point and such that $U \to S$ maps into an affine
+open $\Spec(\Lambda)$. Say $U = \Spec(A)$. Let
+$\xi_x : E_x \to \NL_{A/\Lambda}$ be the obstruction map.
+At this point we may apply Lemma \ref{lemma-openness} to conclude.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{A dual notion}
+\label{section-dual}
+
+\noindent
+Let $(x, A' \to A)$ be a deformation situation for a given category
+$\mathcal{X}$ fibred in groupoids over a locally Noetherian scheme $S$.
+Assume $\mathcal{X}$ has an obstruction theory, see
+Definition \ref{definition-obstruction-theory}. In practice
+one often has a complex $K^\bullet$ of $A$-modules and isomorphisms of
+functors
+$$
+\text{Inf}_x(-) \to H^0(K^\bullet \otimes_A^\mathbf{L} -),\quad
+T_x(-) \to H^1(K^\bullet \otimes_A^\mathbf{L} -),\quad
+\mathcal{O}_x(-) \to H^2(K^\bullet \otimes_A^\mathbf{L} -)
+$$
+In this section we formalize this a little bit and show how this leads
+to a verification of openness of versality in some cases.
+
+\begin{example}
+\label{example-global-sections-dual}
+Let $\Lambda, S, W, \mathcal{F}$ be as in
+Example \ref{example-global-sections}.
+Assume that $W \to S$ is proper and $\mathcal{F}$ coherent. By
+Cohomology of Schemes, Remark
+\ref{coherent-remark-explain-perfect-direct-image}
+there exists a finite complex of finite projective $\Lambda$-modules
+$N^\bullet$ which universally computes the cohomology of $\mathcal{F}$.
+In particular the obstruction spaces from Example \ref{example-global-sections}
+are $\mathcal{O}_x(M) = H^1(N^\bullet \otimes_\Lambda M)$.
+Hence with $K^\bullet = N^\bullet \otimes_\Lambda A[-1]$ we see that
+$\mathcal{O}_x(M) = H^2(K^\bullet \otimes_A^\mathbf{L} M)$.
+\end{example}
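\medskip\noindent
The shift in Example \ref{example-global-sections-dual} works out as
follows. Since $N^\bullet$ is a finite complex of finite projective
$\Lambda$-modules, the complex $K^\bullet = N^\bullet \otimes_\Lambda A[-1]$
consists of finite projective $A$-modules, hence
$$
H^2(K^\bullet \otimes_A^\mathbf{L} M) =
H^2((N^\bullet \otimes_\Lambda M)[-1]) =
H^1(N^\bullet \otimes_\Lambda M) = \mathcal{O}_x(M)
$$
for every $A$-module $M$.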
+
+\begin{situation}
+\label{situation-dual}
+Let $S$ be a locally Noetherian scheme. Let $\mathcal{X}$ be a category
+fibred in groupoids over $(\Sch/S)_{fppf}$. Assume that
+$\mathcal{X}$ has (RS*) so that we can speak of the functor $T_x(-)$, see
+Lemma \ref{lemma-properties-lift-RS-star}.
+Let $U = \Spec(A)$ be an affine scheme of finite type over $S$ which maps
+into an affine open $\Spec(\Lambda)$. Let $x$ be an object of $\mathcal{X}$
+over $U$. Assume we are given
+\begin{enumerate}
+\item a complex of $A$-modules $K^\bullet$,
+\item a transformation of functors
+$T_x(-) \to H^1(K^\bullet \otimes_A^\mathbf{L} -)$,
+\item for every deformation situation $(x, A' \to A)$ with kernel
+$I = \Ker(A' \to A)$ an element
+$o_x(A') \in H^2(K^\bullet \otimes_A^\mathbf{L} I)$
+\end{enumerate}
+satisfying the following (minimal) conditions
+\begin{enumerate}
+\item[(i)] the transformation
+$T_x(-) \to H^1(K^\bullet \otimes_A^\mathbf{L} -)$
+is an isomorphism,
+\item[(ii)] given a morphism $(x, A'' \to A) \to (x, A' \to A)$ of deformation
+situations the element $o_x(A')$ maps to the element $o_x(A'')$
+via the map
+$H^2(K^\bullet \otimes_A^\mathbf{L} I) \to
+H^2(K^\bullet \otimes_A^\mathbf{L} I')$
+where $I' = \Ker(A'' \to A)$, and
+\item[(iii)] $x$ lifts to an object over $\Spec(A')$ if and only if
+$o_x(A') = 0$.
+\end{enumerate}
+It is possible to incorporate infinitesimal automorphisms as well, but
+we refrain from doing so in order to get the sharpest possible result.
+\end{situation}
+
+\noindent
+In Situation \ref{situation-dual} an important role will be played by
+$K^\bullet \otimes_A^\mathbf{L} \NL_{A/\Lambda}$. Suppose we are given an
+element $\xi \in H^1(K^\bullet \otimes_A^\mathbf{L} \NL_{A/\Lambda})$.
+Then (1) for any surjection $A' \to A$ of $\Lambda$-algebras with kernel
+$I$ of square zero the canonical map $\NL_{A/\Lambda} \to \NL_{A/A'} = I[1]$
+sends $\xi$ to an element $\xi_{A'} \in H^2(K^\bullet \otimes_A^\mathbf{L} I)$
+and (2) the map $\NL_{A/\Lambda} \to \Omega_{A/\Lambda}$ sends
+$\xi$ to an element $\xi_{can}$ of
+$H^1(K^\bullet \otimes_A^\mathbf{L} \Omega_{A/\Lambda})$.
+
+\begin{lemma}
+\label{lemma-dual-obstruction}
+In Situation \ref{situation-dual}. Assume furthermore that
+\begin{enumerate}
+\item[(iv)] given a short exact sequence of deformation situations
+as in Remark \ref{remark-short-exact-sequence-thickenings} and
a lift $x'_2 \in \text{Lift}(x, A_2')$, then
+$o_x(A_3') \in H^2(K^\bullet \otimes_A^\mathbf{L} I_3)$
+equals $\partial\theta$ where
+$\theta \in H^1(K^\bullet \otimes_A^\mathbf{L} I_1)$
+is the element corresponding to $x'_2|_{\Spec(A_1')}$ via
+$A_1' = A[I_1]$ and the given map
+$T_x(-) \to H^1(K^\bullet \otimes_A^\mathbf{L} -)$.
+\end{enumerate}
+In this case there exists an element
+$\xi \in H^1(K^\bullet \otimes_A^\mathbf{L} \NL_{A/\Lambda})$
+such that
+\begin{enumerate}
+\item for every deformation situation $(x, A' \to A)$ we have
+$\xi_{A'} = o_x(A')$, and
+\item $\xi_{can}$ matches the canonical element of
+Remark \ref{remark-canonical-element} via the given transformation
+$T_x(-) \to H^1(K^\bullet \otimes_A^\mathbf{L} -)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Choose a surjection $\alpha : \Lambda[x_1, \ldots, x_n] \to A$ with kernel $J$.
+Write $P = \Lambda[x_1, \ldots, x_n]$. In the rest of this proof we work with
+$$
+\NL(\alpha) = (J/J^2 \longrightarrow \bigoplus A \text{d}x_i)
+$$
+which is permissible by
+Algebra, Lemma \ref{algebra-lemma-NL-homotopy}
+and
+More on Algebra, Lemma \ref{more-algebra-lemma-derived-tor-homotopy}.
+Consider the element
+$o_x(P/J^2) \in H^2(K^\bullet \otimes_A^\mathbf{L} J/J^2)$ and consider
+the quotient
+$$
+C = (P/J^2 \times \bigoplus A \text{d}x_i)/(J/J^2)
+$$
+where $J/J^2$ is embedded diagonally. Note that $C \to A$ is a surjection
+with kernel $\bigoplus A\text{d}x_i$. Moreover there is a section
+$A \to C$ to $C \to A$ given by mapping the class of $f \in P$ to the class
+of $(f, \text{d}f)$ in the pushout. For later use, denote $x_C$ the
+pullback of $x$ along the corresponding morphism $\Spec(C) \to \Spec(A)$.
+Thus we see that $o_x(C) = 0$.
+We conclude that $o_x(P/J^2)$ maps to zero in
+$H^2(K^\bullet \otimes_A^\mathbf{L} \bigoplus A\text{d}x_i)$.
+It follows that there exists some element
+$\xi \in H^1(K^\bullet \otimes_A^\mathbf{L} \NL(\alpha))$
+mapping to $o_x(P/J^2)$.
+
+\medskip\noindent
+Note that for any deformation situation $(x, A' \to A)$ there exists
+a $\Lambda$-algebra map $P/J^2 \to A'$ compatible with the augmentations
+to $A$. Hence the
+element $\xi$ satisfies the first property of the lemma by construction
+and property (ii) of Situation \ref{situation-dual}.
+
+\medskip\noindent
+Note that our choice of $\xi$ was well defined up to the choice of an
+element of $H^1(K^\bullet \otimes_A^\mathbf{L} \bigoplus A\text{d}x_i)$.
+We will show that after modifying $\xi$ by an element of the aforementioned
+group we can arrange it so that the second assertion of the lemma is true.
+Let $C' \subset C$ be the image of $P/J^2$ under the
+$\Lambda$-algebra map $P/J^2 \to C$ (inclusion of first factor).
+Observe that
+$\Ker(C' \to A) = \Im(J/J^2 \to \bigoplus A\text{d}x_i)$.
+Set $\overline{C} = A[\Omega_{A/\Lambda}]$. The map
+$P/J^2 \times \bigoplus A \text{d}x_i \to \overline{C}$,
+$(f, \sum f_i \text{d}x_i) \mapsto (f \bmod J, \sum f_i \text{d}x_i)$
+factors through a surjective map $C \to \overline{C}$. Then
+$$
+(x, \overline{C} \to A) \to (x, C \to A) \to (x, C' \to A)
+$$
+is a short exact sequence of deformation situations. The
+associated splitting $\overline{C} = A[\Omega_{A/\Lambda}]$ (from
+Remark \ref{remark-short-exact-sequence-thickenings}) equals the given
+splitting above. Moreover, the section $A \to C$ composed with the map
+$C \to \overline{C}$
+is the map $(1, \text{d}) : A \to A[\Omega_{A/\Lambda}]$ of
+Remark \ref{remark-canonical-element}.
+Thus $x_C$ restricts to the canonical element $x_{can}$ of
+$T_x(\Omega_{A/\Lambda}) = \text{Lift}(x, A[\Omega_{A/\Lambda}])$.
+By condition (iv) we conclude that $o_x(P/J^2)$ maps to $\partial x_{can}$
+in
+$$
H^2(K^\bullet \otimes_A^\mathbf{L} \Im(J/J^2 \to \bigoplus A\text{d}x_i))
+$$
+By construction $\xi$ maps to $o_x(P/J^2)$. It follows that
+$x_{can}$ and $\xi_{can}$ map to the same element in the
+displayed group which means (by the long exact cohomology sequence)
+that they differ by an element of
+$H^1(K^\bullet \otimes_A^\mathbf{L} \bigoplus A\text{d}x_i)$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dual-openness}
+In Situation \ref{situation-dual} assume that (iv) of
+Lemma \ref{lemma-dual-obstruction} holds and that $K^\bullet$ is a
+perfect object of $D(A)$. In this case, if $x$ is versal at a closed
+point $u_0 \in U$ then there exists an open neighbourhood
+$u_0 \in U' \subset U$ such that $x$ is versal at every finite type
+point of $U'$.
+\end{lemma}
+
+\begin{proof}
+We may assume that $K^\bullet$ is a finite complex of finite projective
+$A$-modules. Thus the derived tensor product with $K^\bullet$ is the
+same as simply tensoring with $K^\bullet$. Let
+$E^\bullet$ be the dual perfect complex to $K^\bullet$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-dual-perfect-complex}.
+(So $E^n = \Hom_A(K^{-n}, A)$ with differentials the transpose of the
+differentials of $K^\bullet$.) Let $E \in D^{-}(A)$ denote the
+object represented by the complex $E^\bullet[-1]$.
+Let $\xi \in H^1(\text{Tot}(K^\bullet \otimes_A \NL_{A/\Lambda}))$
+be the element constructed in Lemma \ref{lemma-dual-obstruction}
+and denote $\xi : E = E^\bullet[-1] \to \NL_{A/\Lambda}$ the corresponding
map (loc.\ cit.). We claim that the pair $(E, \xi)$ satisfies all the
+assumptions of Lemma \ref{lemma-openness} which finishes the proof.
+
+\medskip\noindent
+Namely, assumption (i) of Lemma \ref{lemma-openness} follows from conclusion
+(1) of Lemma \ref{lemma-dual-obstruction}
+and the fact that $H^2(K^\bullet \otimes_A^\mathbf{L} -) =
\Ext^1(E, -)$ by loc.\ cit. Assumption (ii) of
+Lemma \ref{lemma-openness} follows from conclusion (2) of
+Lemma \ref{lemma-dual-obstruction}
+and the fact that $H^1(K^\bullet \otimes_A^\mathbf{L} -) =
\Ext^0(E, -)$ by loc.\ cit. Assumption (iii) of Lemma \ref{lemma-openness}
+is clear.
+\end{proof}
+
+
+
+
+
+
+
+\section{Limit preserving functors on Noetherian schemes}
+\label{section-noetherian}
+
+\noindent
+It is sometimes convenient to consider functors or stacks defined only
+on the full subcategory of (locally) Noetherian schemes. In this section
+we discuss this in the case of algebraic spaces.
+
+\medskip\noindent
+Let $S$ be a locally Noetherian scheme. Let us be a bit pedantic in order
+to line up our categories correctly; people who are ignoring set theoretical
+issues can just replace the sets of schemes we choose by the collection
+of all schemes in what follows. As in
+Topologies, Remark \ref{topologies-remark-choice-sites}
+we choose a category $\Sch_\alpha$ of schemes containing
+$S$ such that we obtain big sites $(\Sch/S)_{Zar}$,
+$(\Sch/S)_\etale$, $(\Sch/S)_{smooth}$, $(\Sch/S)_{syntomic}$,
+and $(\Sch/S)_{fppf}$ all with the same underlying
+category $\Sch_\alpha/S$. Denote
+$$
+\textit{Noetherian}_\alpha \subset \Sch_\alpha
+$$
+the full subcategory consisting of locally Noetherian schemes.
+This determines a full subcategory
+$$
+\textit{Noetherian}_\alpha/S \subset \Sch_\alpha/S
+$$
+For $\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$ we have
+\begin{enumerate}
+\item if $f : X \to Y$ is a morphism of $\Sch_\alpha/S$
+with $Y$ in $\textit{Noetherian}_\alpha/S$ and
+$f$ locally of finite type, then $X$ is in $\textit{Noetherian}_\alpha/S$,
+\item for morphisms $f : X \to Y$ and $g : Z \to Y$ of
+$\textit{Noetherian}_\alpha/S$ with $f$ locally of finite type
+the fibre product $X \times_Y Z$ in $\textit{Noetherian}_\alpha/S$
+exists and agrees with the fibre product in $\Sch_\alpha/S$,
+\item if $\{X_i \to X\}_{i \in I}$ is a covering of
+$(\Sch/S)_\tau$ and $X$ is in $\textit{Noetherian}_\alpha/S$,
then the objects $X_i$ are in $\textit{Noetherian}_\alpha/S$,
+\item the category $\textit{Noetherian}_\alpha/S$ endowed
+with the set of coverings of $(\Sch/S)_\tau$ whose objects
+are in $\textit{Noetherian}_\alpha/S$ is a site
+we will denote $(\textit{Noetherian}/S)_\tau$,
+\item the inclusion functor
+$(\textit{Noetherian}/S)_\tau \to (\Sch/S)_\tau$
+is fully faithful, continuous, and cocontinuous.
+\end{enumerate}
+By Sites, Lemmas \ref{sites-lemma-cocontinuous-morphism-topoi} and
+\ref{sites-lemma-when-shriek} we obtain a morphism of topoi
+$$
+g_\tau : \Sh((\textit{Noetherian}/S)_\tau) \longrightarrow \Sh((\Sch/S)_\tau)
+$$
+whose pullback functor is the restriction of sheaves along
+the inclusion functor $(\textit{Noetherian}/S)_\tau \to (\Sch/S)_\tau$.
+
+\begin{remark}[Warning]
+\label{remark-no-fibre-products}
+The site $(\textit{Noetherian}/S)_\tau$ does not have fibre products.
+Hence we have to be careful in working with sheaves. For example,
+the continuous inclusion functor
+$(\textit{Noetherian}/S)_\tau \to (\Sch/S)_\tau$
+does not define a morphism of sites. See
+Examples, Section \ref{examples-section-sheaves-locally-Noetherian}
+for an example in case $\tau = fppf$.
+\end{remark}
+
+\noindent
+Let $F : (\textit{Noetherian}/S)_\tau^{opp} \to \textit{Sets}$
+be a functor. We say $F$ is {\it limit preserving} if for any
+directed limit of affine schemes $X = \lim X_i$ of
+$(\textit{Noetherian}/S)_\tau$ we have $F(X) = \colim F(X_i)$.
+
+\begin{lemma}
+\label{lemma-canonical-extension}
+Let $\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$.
+Restricting along the inclusion functor
+$(\textit{Noetherian}/S)_\tau \to (\Sch/S)_\tau$
+defines an equivalence of categories between
+\begin{enumerate}
+\item the category of limit preserving sheaves on
+$(\Sch/S)_\tau$ and
+\item the category of limit preserving sheaves on
+$(\textit{Noetherian}/S)_\tau$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $F : (\textit{Noetherian}/S)_\tau^{opp} \to \textit{Sets}$
+be a functor which is both limit preserving and a sheaf.
+By Topologies, Lemmas
+\ref{topologies-lemma-extend} and \ref{topologies-lemma-extend-sheaf-general}
+there exists a unique functor
+$F' : (\Sch/S)_\tau^{opp} \to \textit{Sets}$
+which is limit preserving, a sheaf, and restricts to $F$.
+In fact, the construction of $F'$ in
+Topologies, Lemma \ref{topologies-lemma-extend}
+is functorial in $F$ and this construction is a quasi-inverse
+to restriction. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-limit-preserving}
+Let $X$ be an object of $(\textit{Noetherian}/S)_\tau$. If the functor
+of points $h_X : (\textit{Noetherian}/S)_\tau^{opp} \to \textit{Sets}$
+is limit preserving, then $X$ is locally of finite presentation over $S$.
+\end{lemma}
+
+\begin{proof}
+Let $V \subset X$ be an affine open subscheme which maps into an affine
+open $U \subset S$. We may write $V = \lim V_i$ as a directed limit of affine
+schemes $V_i$ of finite presentation over $U$, see
+Algebra, Lemma \ref{algebra-lemma-ring-colimit-fp}.
+By assumption, the arrow $V \to X$ factors as $V \to V_i \to X$
+for some $i$. After increasing $i$ we may assume $V_i \to X$
+factors through $V$ as the inverse image of $V \subset X$ in $V_i$
+eventually becomes equal to $V_i$ by
+Limits, Lemma \ref{limits-lemma-descend-opens}.
+Then the identity morphism $V \to V$ factors through $V_i$ for some $i$
+in the category of schemes over $U$. Thus $V \to U$ is of finite presentation;
+the corresponding algebra fact is that if $B$ is an $A$-algebra
+such that $\text{id} : B \to B$ factors through a finitely presented
+$A$-algebra, then $B$ is of finite presentation over $A$ (nice exercise).
+Hence $X$ is locally of finite presentation over $S$.
+\end{proof}
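+
+\noindent
+Here is a sketch of the algebra exercise mentioned in the proof above.
+Suppose $\text{id}_B$ factors as $B \xrightarrow{s} C \xrightarrow{\pi} B$
+where $C$ is a finitely presented $A$-algebra generated by
+$c_1, \ldots, c_n$. Then $J = \Ker(\pi)$ is equal to the ideal
+$I = (c_1 - s(\pi(c_1)), \ldots, c_n - s(\pi(c_n)))$.
+Namely, $I \subset J$ because $\pi \circ s = \text{id}_B$, and
+conversely the two $A$-algebra maps $C \to C/I$ given by
+$c \mapsto c$ and $c \mapsto s(\pi(c))$ agree on the generators $c_i$,
+hence are equal, so any $c \in J$ satisfies
+$c \equiv s(\pi(c)) = 0 \bmod I$. Thus $J$ is finitely generated and
+$B \cong C/J$ is of finite presentation over $A$.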
+
+\noindent
+The following lemma has a variant for transformations
+representable by algebraic spaces.
+
+\begin{lemma}
+\label{lemma-representable}
+Let $\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$.
+Let $F', G' : (\Sch/S)_\tau^{opp} \to \textit{Sets}$ be limit preserving
+and sheaves. Let $a' : F' \to G'$ be a transformation of functors.
+Denote $a : F \to G$ the restriction of $a' : F' \to G'$ to
+$(\textit{Noetherian}/S)_\tau$. The following are equivalent
+\begin{enumerate}
+\item $a'$ is representable (as a transformation of functors, see
+Categories, Definition \ref{categories-definition-representable-morphism}), and
+\item for every object $V$ of $(\textit{Noetherian}/S)_\tau$
+and every map $V \to G$ the fibre product
+$F \times_G V : (\textit{Noetherian}/S)_\tau^{opp} \to \textit{Sets}$
+is a representable functor, and
+\item same as in (2) but only for $V$ affine and of finite type over $S$
+mapping into an affine open of $S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). By Limits of Spaces, Lemma
+\ref{spaces-limits-lemma-locally-finite-presentation-permanence}
+the transformation $a'$ is limit preserving\footnote{This
+makes sense even if $\tau \not = fppf$ as the underlying
+category of $(\Sch/S)_\tau$ equals the underlying category
+of $(\Sch/S)_{fppf}$ and the statement doesn't refer to the topology.}.
+Take $\xi : V \to G$ as in (2). Denote $V' = V$ but viewed as an
+object of $(\Sch/S)_\tau$. Since $G$ is the restriction of
+$G'$ to $(\textit{Noetherian}/S)_\tau$ we see that
+$\xi \in G(V)$ corresponds to $\xi' \in G'(V')$.
+By assumption $V' \times_{\xi', G'} F'$ is representable
+by a scheme $U'$. The morphism of schemes $U' \to V'$ corresponding to
+the projection $V' \times_{\xi', G'} F' \to V'$ is locally of finite
+presentation by
+Limits of Spaces, Lemma
+\ref{spaces-limits-lemma-base-change-locally-finite-presentation} and
+Limits, Proposition
+\ref{limits-proposition-characterize-locally-finite-presentation}.
+Hence $U'$ is a locally Noetherian scheme and therefore $U'$ is
+isomorphic to an object $U$ of $(\textit{Noetherian}/S)_\tau$.
+Then $U$ represents $F \times_G V$ as desired.
+
+\medskip\noindent
+The implication (2) $\Rightarrow$ (3) is immediate. Assume (3).
+We will prove (1). Let $T$ be an object of $(\Sch/S)_\tau$
+and let $T \to G'$ be a morphism. We have to show
+the functor $F' \times_{G'} T$ is representable by a scheme $X$
+over $T$. Let $\mathcal{B}$ be the set of affine opens of
+$T$ which map into an affine open of $S$. This is a basis
+for the topology of $T$. Below we will show that for $W \in \mathcal{B}$
+the fibre product $F' \times_{G'} W$ is representable
+by a scheme $X_W$ over $W$. If $W_1 \subset W_2$ in $\mathcal{B}$, then
+we obtain an isomorphism $X_{W_1} \to X_{W_2} \times_{W_2} W_1$ because both
+$X_{W_1}$ and $X_{W_2} \times_{W_2} W_1$ represent the functor
+$F' \times_{G'} W_1$.
+These isomorphisms are canonical and satisfy the cocycle condition
+mentioned in Constructions, Lemma
+\ref{constructions-lemma-relative-glueing}.
+Hence we can glue the schemes $X_W$ to a scheme $X$ over $T$.
+Compatibility of the glueing maps with the maps
+$X_W \to F'$ provide us with a map $X \to F'$.
+The resulting map $X \to F' \times_{G'} T$ is an
+isomorphism as we may check this locally on $T$ (as source
+and target of this arrow are sheaves for the Zariski topology).
+
+\medskip\noindent
+Let $W$ be an affine scheme which maps into an affine open $U \subset S$.
+Let $W \to G'$ be a map.
+Still assuming (3) we have to show that $F' \times_{G'} W$
+is representable by a scheme.
+We may write $W = \lim V'_i$ as a directed limit of affine
+schemes $V'_i$ of finite presentation over $U$, see
+Algebra, Lemma \ref{algebra-lemma-ring-colimit-fp}.
+Since $V'_i$ is of finite type over a Noetherian scheme,
+we see that $V'_i$ is a Noetherian scheme.
+Denote $V_i = V'_i$ but viewed as an object of
+$(\textit{Noetherian}/S)_\tau$. As $G'$
+is limit preserving we can choose an $i$ and a map
+$V'_i \to G'$ such that $W \to G'$ is the composition
+$W \to V'_i \to G'$. Since $G$ is the restriction of $G'$
+to $(\textit{Noetherian}/S)_\tau$ the morphism $V'_i \to G'$
+is the same thing as a morphism $V_i \to G$ (see above).
+By assumption (3) the functor $F \times_G V_i$ is representable by an object
+$X_i$ of $(\textit{Noetherian}/S)_\tau$.
+The functor $F \times_G V_i$ is limit preserving
+as it is the restriction of $F' \times_{G'} V'_i$
+and this functor is limit preserving by
+Limits of Spaces, Lemma
+\ref{spaces-limits-lemma-fibre-product-locally-finite-presentation},
+the assumption that $F'$ and $G'$ are limit preserving, and
+Limits, Remark \ref{limits-remark-limit-preserving} which
+tells us that the functor of points of $V'_i$ is limit preserving.
+By Lemma \ref{lemma-representable-limit-preserving}
+we conclude that $X_i$ is locally of finite presentation over $S$.
+Denote $X'_i = X_i$ but viewed as an object of
+$(\Sch/S)_\tau$. Then we see that $F' \times_{G'} V'_i$
+and the functors of points $h_{X'_i}$ are both extensions
+of $h_{X_i} : (\textit{Noetherian}/S)_\tau^{opp} \to \textit{Sets}$
+to limit preserving sheaves on $(\Sch/S)_\tau$.
+By the equivalence of categories of Lemma \ref{lemma-canonical-extension}
+we deduce that $X'_i$ represents $F' \times_{G'} V'_i$.
+Then finally
+$$
+F' \times_{G'} W = F' \times_{G'} V'_i \times_{V'_i} W =
+X'_i \times_{V'_i} W
+$$
+is representable as desired.
+\end{proof}
+
+
+
+
+
+
+\section{Algebraic spaces in the Noetherian setting}
+\label{section-algebraic-spaces-noetherian}
+
+\noindent
+Let $S$ be a locally Noetherian scheme. Let
+$(\textit{Noetherian}/S)_\etale \subset (\Sch/S)_\etale$
+denote the site studied in Section \ref{section-noetherian}.
+Let $F : (\textit{Noetherian}/S)_\etale^{opp} \to \textit{Sets}$
+be a functor, i.e., $F$ is a presheaf on $(\textit{Noetherian}/S)_\etale$.
+In this setting all the axioms [-1], [0], [1], [2], [3], [4], [5] of
+Section \ref{section-axioms-functors} make sense. We will review them
+one by one and make sure the reader knows exactly what we mean.
+
+\medskip\noindent
+Axiom [-1]. This is a set theoretic condition to be ignored
+by readers who are not interested in set theoretic questions.
+It makes sense for $F$ since it concerns the evaluation of
+$F$ on spectra of fields of finite type over $S$ which are
+objects of $(\textit{Noetherian}/S)_\etale$.
+
+\medskip\noindent
+Axiom [0]. This is the axiom that $F$ is a sheaf
+on $(\textit{Noetherian}/S)_\etale$, i.e., satisfies
+the sheaf condition for \'etale coverings.
+
+\medskip\noindent
+Axiom [1]. This is the axiom that $F$ is limit preserving as defined
+in Section \ref{section-noetherian}: for any
+directed limit of affine schemes $X = \lim X_i$ of
+$(\textit{Noetherian}/S)_\etale$ we have $F(X) = \colim F(X_i)$.
+
+\medskip\noindent
+Axiom [2]. This is the axiom that $F$ satisfies the Rim-Schlessinger
+condition (RS). Looking at the definition of condition (RS) in
+Definition \ref{definition-RS} and the discussion in
+Section \ref{section-axioms-functors}
+we see that this means: given any pushout $Y' = Y \amalg_X X'$
+of schemes of finite type over $S$ where $Y, X, X'$
+are spectra of Artinian local rings, then
+$$
+F(Y \amalg_X X') \to F(Y) \times_{F(X)} F(X')
+$$
+is a bijection. This condition makes sense as the schemes
+$X$, $X'$, $Y$, and $Y'$ are in $(\textit{Noetherian}/S)_\etale$
+since they are of finite type over $S$.
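+
+\medskip\noindent
+Concretely, in the relevant case where $X \to X'$ is a closed immersion,
+say $X = \Spec(A)$, $X' = \Spec(A')$, and $Y = \Spec(B)$ with
+$A' \to A$ surjective, the pushout exists and is affine:
+$$
+Y \amalg_X X' = \Spec(B \times_A A')
+$$
+where $B \times_A A'$ denotes the fibre product of $B$ and $A'$
+over $A$ in the category of rings.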
+
+\medskip\noindent
+Axiom [3]. This is the axiom that every tangent space $TF_{k, x_0}$
+is finite dimensional. This makes sense as the tangent spaces $TF_{k, x_0}$
+are constructed from evaluations of $F$ at $\Spec(k)$ and
+$\Spec(k[\epsilon])$ with $k$ a field of finite type over $S$
+and hence are obtained by evaluating at objects of the category
+$(\textit{Noetherian}/S)_\etale$.
+
+\medskip\noindent
+Axiom [4]. This is the axiom that every formal object is effective.
+Looking at the discussion in
+Sections \ref{section-formal-objects} and \ref{section-axioms-functors}
+we see that this involves evaluating our functor at Noetherian schemes
+only and hence this condition makes sense for $F$.
+
+\medskip\noindent
+Axiom [5]. This is the axiom stating that $F$ satisfies openness of versality.
+Recall that this means the following: Given a scheme $U$
+locally of finite type over $S$, given $x \in F(U)$, and given
+a finite type point $u_0 \in U$ such that $x$ is versal at $u_0$,
+then there exists an open neighbourhood $u_0 \in U' \subset U$
+such that $x$ is versal at every finite type point of $U'$.
+As before, verifying this only involves evaluating
+our functor at Noetherian schemes.
+
+\begin{proposition}
+\label{proposition-spaces-diagonal-representable-noetherian}
+Let $S$ be a locally Noetherian scheme. Let
+$F : (\textit{Noetherian}/S)_\etale^{opp} \to \textit{Sets}$
+be a functor. Assume that
+\begin{enumerate}
+\item $\Delta : F \to F \times F$ is representable
+(as a transformation of functors, see
+Categories, Definition \ref{categories-definition-representable-morphism}),
+\item $F$ satisfies axioms [-1], [0], [1], [2], [3], [4], [5]
+(see above), and
+\item $\mathcal{O}_{S, s}$ is a G-ring for all finite type points $s$ of $S$.
+\end{enumerate}
+Then there exists a unique algebraic space
+$F' : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$
+whose restriction to $(\textit{Noetherian}/S)_\etale$ is $F$
+(see proof for elucidation).
+\end{proposition}
+
+\begin{proof}
+Recall that the sites $(\Sch/S)_{fppf}$ and $(\Sch/S)_\etale$ have the same
+underlying category, see discussion in Section \ref{section-noetherian}.
+Similarly the sites $(\textit{Noetherian}/S)_\etale$ and
+$(\textit{Noetherian}/S)_{fppf}$ have the same underlying categories.
+By axioms [0] and [1] the functor $F$ is a sheaf and
+limit preserving.
+Let $F' : (\Sch/S)_\etale^{opp} \to \textit{Sets}$
+be the unique extension of $F$ which is a sheaf (for the \'etale topology)
+and which is limit preserving, see
+Lemma \ref{lemma-canonical-extension}.
+Then $F'$ satisfies axioms [0] and [1] as given in
+Section \ref{section-axioms-functors}.
+By Lemma \ref{lemma-representable} we see that
+$\Delta' : F' \to F' \times F'$ is representable (by schemes).
+On the other hand, it is immediately clear that
+$F'$ satisfies axioms [-1], [2], [3], [4], [5] of
+Section \ref{section-axioms-functors}
+as each of these involves only evaluating $F'$ at objects
+of $(\textit{Noetherian}/S)_\etale$ and we've assumed the
+corresponding conditions for $F$.
+Whence $F'$ is an algebraic space by
+Proposition \ref{proposition-spaces-diagonal-representable}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Artin's theorem on contractions}
+\label{section-contractions}
+
+\noindent
+In this section we will freely use the language of formal algebraic spaces,
+see Formal Spaces, Section \ref{formal-spaces-section-introduction}.
+Artin's theorem on contractions is one of the two main theorems of
+Artin's paper \cite{ArtinII}; the first one is his theorem on dilatations
+which we stated and proved in Algebraization of Formal Spaces,
+Section \ref{restricted-section-dilatations}.
+
+\begin{situation}
+\label{situation-contractions}
+Let $S$ be a locally Noetherian scheme. Let $X'$ be an algebraic space
+locally of finite type over $S$. Let $T' \subset |X'|$ be a closed
+subset. Let $U' \subset X'$ be the open subspace with
+$|U'| = |X'| \setminus T'$. Let $W$ be a locally Noetherian
+formal algebraic space over $S$ with $W_{red}$ locally of finite type
+over $S$. Finally, we let
+$$
+g : X'_{/T'} \longrightarrow W
+$$
+be a formal modification, see
+Algebraization of Formal Spaces, Definition
+\ref{restricted-definition-formal-modification}.
+Recall that $X'_{/T'}$ denotes the formal completion of $X'$ along
+$T'$, see Formal Spaces, Section \ref{formal-spaces-section-completion}.
+\end{situation}
+
+\noindent
+In the situation above our goal is to prove that there exists a
+proper morphism $f : X' \to X$ of algebraic spaces over $S$,
+a closed subset $T \subset |X|$, and an isomorphism
+$a : X_{/T} \to W$ of formal algebraic spaces such that
+\begin{enumerate}
+\item $T'$ is the inverse image of $T$ by $|f| : |X'| \to |X|$,
+\item $f : X' \to X$ maps $U'$ isomorphically to
+an open subspace $U$ of $X$, and
+\item $g = a \circ f_{/T}$ where $f_{/T} : X'_{/T'} \to X_{/T}$
+is the induced morphism.
+\end{enumerate}
+Let us say that $(f : X' \to X, T, a)$ is a {\it solution}.
+
+\medskip\noindent
+We will follow Artin's strategy by constructing a functor $F$ on
+the category of locally Noetherian schemes over $S$, showing that $F$ is an
+algebraic space using
+Proposition \ref{proposition-spaces-diagonal-representable-noetherian},
+and proving that setting $X = F$ works.
+
+\begin{remark}
+\label{remark-G-rings}
+In particular, we cannot prove that the desired result is true for
+every Situation \ref{situation-contractions} because we will need to
+assume the local rings of $S$ are G-rings. If you can prove the
+result in general or if you have a counter example, please let
+us know at
+\href{mailto:stacks.project@gmail.com}{stacks.project@gmail.com}.
+\end{remark}
+
+\noindent
+In Situation \ref{situation-contractions}
+let $V$ be a locally Noetherian scheme over $S$.
+The value of our functor $F$ on $V$ will be all triples
+$$
+(Z, u' : V \setminus Z \to U', \hat x : V_{/Z} \to W)
+$$
+satisfying the following conditions
+\begin{enumerate}
+\item $Z \subset V$ is a closed subset,
+\item $u' : V \setminus Z \to U'$ is a morphism over $S$,
+\item $\hat x : V_{/Z} \to W$ is an adic morphism of formal algebraic
+spaces over $S$,
+\item $u'$ and $\hat x$ are compatible (see below).
+\end{enumerate}
+The compatibility condition is the following: pulling back the
+formal modification $g$ we obtain a formal modification
+$$
+X'_{/T'} \times_{g, W, \hat x} V_{/Z} \longrightarrow V_{/Z}
+$$
+See Algebraization of Formal Spaces, Lemma
+\ref{restricted-lemma-base-change-formal-modification}.
+By the main theorem on dilatations
+(Algebraization of Formal Spaces, Theorem
+\ref{restricted-theorem-dilatations}), there is a unique proper
+morphism $V' \to V$ of algebraic spaces which is an isomorphism over
+$V \setminus Z$ such that $V'_{/Z} \to V_{/Z}$ is isomorphic to the
+displayed arrow. In other words, for some morphism
+$\hat x' : V'_{/Z} \to X'_{/T'}$ we have a cartesian diagram
+$$
+\xymatrix{
+V'_{/Z} \ar[r] \ar[d]_{\hat x'} & V_{/Z} \ar[d]^{\hat x} \\
+X'_{/T'} \ar[r]^g & W
+}
+$$
+of formal algebraic spaces. We will think
+of $V \setminus Z$ as an open subspace of $V'$ without further mention.
+The compatibility condition is that there should be a
+morphism $x' : V' \to X'$ restricting to $u'$ and $\hat x'$
+over $V \setminus Z \subset V'$ and $V'_{/Z}$.
+In other words, such that the diagram
+$$
+\xymatrix{
+V \setminus Z \ar[r] \ar[d]_{u'} &
+V' \ar[d]^{x'} &
+V'_{/Z} \ar[l] \ar[d]^{\hat x'} \ar[r] &
+V_{/Z} \ar[d]^{\hat x} \\
+U' \ar[r] &
+X' &
+X'_{/T'} \ar[r]^g \ar[l] &
+W
+}
+$$
+is commutative. Observe that by Algebraization of Formal Spaces,
+Lemma \ref{restricted-lemma-faithful} the morphism $x'$ is unique
+if it exists. We will indicate this situation by saying
+``{\it $V' \to V$, $\hat x'$, and $x'$ witness the compatibility
+between $u'$ and $\hat x$}''.
+
+\begin{remark}
+\label{remark-how-to-think-compatibility}
+In Situation \ref{situation-contractions} let $V$ be a locally Noetherian
+scheme over $S$. Let $(Z, u', \hat x)$ be a triple satisfying (1), (2), and
+(3) above. We want to explain a way to think about the compatibility
+condition (4). It will not be mathematically precise as we are going to use
+a fictitious category $\textit{An}_S$ of analytic spaces over $S$
+and a fictitious analytification functor
+$$
+\left\{
+\begin{matrix}
+\text{locally Noetherian formal} \\
+\text{algebraic spaces over }S
+\end{matrix}
+\right\}
+\longrightarrow
+\textit{An}_S,
+\quad\quad
+Y \longmapsto Y^{an}
+$$
+For example if $Y = \text{Spf}(k[[t]])$ over $S = \Spec(k)$, then $Y^{an}$
+should be thought of as an open unit disc. If $Y = \Spec(k)$, then $Y^{an}$
+is a single point. The category $\textit{An}_S$ should have open and
+closed immersions and we should be able to take the open complement
+of a closed one. Given $Y$ the morphism $Y_{red} \to Y$ should induce a
+closed immersion $Y_{red}^{an} \to Y^{an}$. We set
+$Y^{rig} = Y^{an} \setminus Y_{red}^{an}$ equal to its open complement.
+If $Y$ is an algebraic space and if $Z \subset Y$ is closed, then
+the morphism $Y_{/Z} \to Y$ should induce an open immersion
+$Y_{/Z}^{an} \to Y^{an}$ which in turn should induce an open immersion
+$$
+can : (Y_{/Z})^{rig} \longrightarrow (Y \setminus Z)^{an}
+$$
+Also, given a formal modification $g : Y' \to Y$ of locally Noetherian formal
+algebraic spaces, the induced morphism $g^{rig} : (Y')^{rig} \to Y^{rig}$
+should be an isomorphism. Given $\textit{An}_S$ and the analytification
+functor, we can consider the requirement that
+$$
+\xymatrix{
+(V_{/Z})^{rig} \ar[rr]_{can} \ar[d]_{(g^{rig})^{-1} \circ \hat x^{an}} & &
+(V \setminus Z)^{an} \ar[d]^{(u')^{an}} \\
+(X'_{/T'})^{rig} \ar[rr]^{can} & & (X' \setminus T')^{an}
+}
+$$
+commutes. This makes sense as $g^{rig} : (X'_{/T'})^{rig} \to W^{rig}$
+is an isomorphism and $U' = X' \setminus T'$. Finally, under some assumptions
+of faithfulness of the analytification functor, this requirement will
+be equivalent to the compatibility condition formulated above.
+We hope this will motivate the reader to think of the compatibility
+of $u'$ and $\hat x$ as the requirement that some maps be equal,
+rather than asking for the existence of a certain commutative diagram.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-functor}
+In Situation \ref{situation-contractions} the rule $F$ that sends
+a locally Noetherian scheme $V$ over $S$ to the set of triples
+$(Z, u', \hat x)$ satisfying the compatibility condition and which sends
+a morphism $\varphi : V_2 \to V_1$ of locally Noetherian schemes over $S$
+to the map
+$$
+F(\varphi) : F(V_1) \longrightarrow F(V_2)
+$$
+sending an element $(Z_1, u'_1, \hat x_1)$ of $F(V_1)$ to
+$(Z_2, u'_2, \hat x_2)$ in $F(V_2)$ given by
+\begin{enumerate}
+\item $Z_2 \subset V_2$ is the inverse image of $Z_1$ by $\varphi$,
+\item $u'_2$ is the composition of $u'_1$ and
+$\varphi|_{V_2 \setminus Z_2} : V_2 \setminus Z_2 \to V_1 \setminus Z_1$,
+\item $\hat x_2$ is the composition of $\hat x_1$ and
+$\varphi_{/Z_2} : V_{2, /Z_2} \to V_{1, /Z_1}$
+\end{enumerate}
+is a contravariant functor.
+\end{lemma}
+
+\begin{proof}
+To see the compatibility condition between $u'_2$ and $\hat x_2$, let
+$V'_1 \to V_1$, $\hat x'_1$, and $x'_1$ witness the compatibility between
+$u'_1$ and $\hat x_1$. Set $V'_2 = V_2 \times_{V_1} V'_1$, set
+$\hat x'_2$ equal to the composition of $\hat x'_1$ and
+$V'_{2, /Z_2} \to V'_{1, /Z_1}$, and set $x'_2$
+equal to the composition of $x'_1$ and $V'_2 \to V'_1$.
+Then $V'_2 \to V_2$, $\hat x'_2$, and $x'_2$ witness the compatibility between
+$u'_2$ and $\hat x_2$. We omit the detailed verification.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-solution}
+In Situation \ref{situation-contractions} if there exists a solution
+$(f : X' \to X, T, a)$ then there is a functorial bijection
+$F(V) = \Mor_S(V, X)$ on the category of
+locally Noetherian schemes $V$ over $S$.
+\end{lemma}
+
+\begin{proof}
+Let $V$ be a locally Noetherian scheme over $S$.
+Let $x : V \to X$ be a morphism over $S$.
+Then we get an element $(Z, u', \hat x)$ in $F(V)$ as follows
+\begin{enumerate}
+\item $Z \subset V$ is the inverse image of $T$ by $x$,
+\item $u' : V \setminus Z \to U' = U$ is the restriction of
+$x$ to $V \setminus Z$,
+\item $\hat x : V_{/Z} \to W$ is the composition of
+$x_{/Z} : V_{/Z} \to X_{/T}$ with the isomorphism $a : X_{/T} \to W$.
+\end{enumerate}
+This triple satisfies the compatibility condition because we
+can take $V' = V \times_{x, X} X'$, we can take $x' : V' \to X'$
+the projection, and we can take $\hat x'$ the completion of $x'$.
+
+\medskip\noindent
+Conversely, suppose given an element $(Z, u', \hat x)$ of $F(V)$. We claim
+there is a unique morphism $x : V \to X$ compatible with $u'$ and $\hat x$.
+Namely, let $V' \to V$, $\hat x'$, and $x'$ witness the
+compatibility between $u'$ and $\hat x$. Then
+Algebraization of Formal Spaces, Proposition
+\ref{restricted-proposition-glue-modification}
+is exactly the result we need to find
+a unique morphism $x : V \to X$ agreeing with
+$\hat x$ over $V_{/Z}$ and with $x'$ over $V'$ (and a fortiori
+agreeing with $u'$ over $V \setminus Z$).
+
+\medskip\noindent
+We omit the verification that the two constructions above define inverse
+bijections between their respective domains.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-functor-is-solution}
+In Situation \ref{situation-contractions} if there exists an
+algebraic space $X$ locally of finite type over $S$ and a
+functorial bijection $F(V) = \Mor_S(V, X)$ on the category of
+locally Noetherian schemes $V$ over $S$, then $X$ is a solution.
+\end{lemma}
+
+\begin{proof}
+We have to construct a proper morphism $f : X' \to X$, a closed subset
+$T \subset |X|$, and an isomorphism $a : X_{/T} \to W$ with properties
+(1), (2), (3) listed just below Situation \ref{situation-contractions}.
+
+\medskip\noindent
+The discussion in this proof is a bit pedantic because we want
+to carefully match the underlying categories. In this paragraph
+we explain how the adventurous reader can proceed less timidly.
+Namely, the reader may extend our definition of the functor $F$
+to all locally Noetherian algebraic spaces over $S$.
+Doing so the reader may then conclude that $F$ and $X$ agree as
+functors on the category of these algebraic spaces, i.e.,
+$X$ represents $F$. Then one considers the universal object
+$(T, u', \hat x)$ in $F(X)$. Then the reader will find that
+for the triple $X'' \to X$, $\hat x'$, $x'$ witnessing the compatibility
+between $u'$ and $\hat x$ the morphism $x' : X'' \to X'$ is an isomorphism
+and this will produce $f : X' \to X$ by inverting $x'$. Finally, we already
+have $T \subset |X|$ and the reader may show that $\hat x$ is an isomorphism
+which can serve as the last ingredient, namely $a$.
+
+\medskip\noindent
+Denote $h_X(-) = \Mor_S(-, X)$ the functor of
+points of $X$ restricted to the category
+$(\textit{Noetherian}/S)_\etale$ of Section \ref{section-noetherian}.
+By Limits of Spaces, Remark \ref{spaces-limits-remark-limit-preserving}
+the algebraic spaces $X$ and $X'$ are limit preserving. Hence so
+are the restrictions $h_X$ and $h_{X'}$.
+To construct $f$ it therefore suffices to construct a
+transformation $h_{X'} \to h_X = F$, see Lemma \ref{lemma-canonical-extension}.
+Thus let $V \to S$ be an object of $(\textit{Noetherian}/S)_\etale$
+and let $\tilde x : V \to X'$ be in $h_{X'}(V)$.
+Then we get an element $(Z, u', \hat x)$ in $F(V)$ as follows
+\begin{enumerate}
+\item $Z \subset V$ is the inverse image of $T'$ by $\tilde x$,
+\item $u' : V \setminus Z \to U'$ is the restriction of
+$\tilde x$ to $V \setminus Z$,
+\item $\hat x : V_{/Z} \to W$ is the composition of
+$\tilde x_{/Z} : V_{/Z} \to X'_{/T'}$ with $g : X'_{/T'} \to W$.
+\end{enumerate}
+This triple satisfies the compatibility condition: first we always
+obtain $V' \to V$ and $\hat x' : V'_{/Z} \to X'_{/T'}$ for free
+(see discussion preceding Lemma \ref{lemma-functor}).
+Then we just define $x' : V' \to X'$ to be the composition
+of $V' \to V$ and the morphism $\tilde x : V \to X'$.
+We omit the verification that this works.
+
+\medskip\noindent
+If $\xi : V \to X$ is an \'etale morphism where $V$ is a scheme, then
+we obtain $\xi = (Z, u', \hat x) \in F(V) = h_X(V) = X(V)$.
+Of course, if $\varphi : V' \to V$ is a further \'etale morphism of schemes,
+then $(Z, u', \hat x)$ pulled back to $F(V')$ corresponds to
+$\xi \circ \varphi$.
+The closed subset $T \subset |X|$ is just defined as the closed
+subset such that $\xi : V \to X$ for $\xi = (Z, u', \hat x)$
+pulls $T$ back to $Z$.
+
+\medskip\noindent
+Consider Noetherian schemes $V$ over $S$ and a morphism
+$\xi : V \to X$ corresponding to $(Z, u', \hat x)$ as above.
+Then we see that $\xi(V)$ is set theoretically contained in $T$
+if and only if $V = Z$ (as topological spaces). Hence we see that
+$X_{/T}$ agrees with $W$ as a functor. This produces the isomorphism
+$a : X_{/T} \to W$. (We've omitted a small detail here which is
+that for the locally Noetherian formal algebraic spaces $X_{/T}$ and
+$W$ it suffices to check one gets an isomorphism after evaluating
+on locally Noetherian schemes over $S$.)
+
+\medskip\noindent
+We omit the proof of conditions (1), (2), and (3).
+\end{proof}
+
+\begin{remark}
+\label{remark-diagonal}
+In Situation \ref{situation-contractions}
+let $V$ be a locally Noetherian scheme over $S$.
+Let $(Z_i, u'_i, \hat x_i) \in F(V)$ for $i = 1, 2$. Let $V'_i \to V$,
+$\hat x'_i$ and $x'_i$ witness the compatibility between $u'_i$ and
+$\hat x_i$ for $i = 1, 2$.
+
+\medskip\noindent
+Set $V' = V'_1 \times_V V'_2$. Let $E' \to V'$ denote the equalizer
+of the morphisms
+$$
+V' \to V'_1 \xrightarrow{x'_1} X'
+\quad\text{and}\quad
+V' \to V'_2 \xrightarrow{x'_2} X'
+$$
+Set $Z = Z_1 \cap Z_2$. Let $E_W \to V_{/Z}$
+be the equalizer of the morphisms
+$$
+V_{/Z} \to V_{/Z_1} \xrightarrow{\hat x_1} W
+\quad\text{and}\quad
+V_{/Z} \to V_{/Z_2} \xrightarrow{\hat x_2} W
+$$
+Observe that $E' \to V$ is separated and locally of finite type
+and that $E_W$ is a locally Noetherian formal algebraic space
+separated over $V$.
+The compatibilities between the various morphisms involved show that
+\begin{enumerate}
+\item $\Im(E' \to V) \cap (Z_1 \cup Z_2)$
+is contained in $Z = Z_1 \cap Z_2$,
+\item the morphism $E' \times_V (V \setminus Z) \to V \setminus Z$
+is a monomorphism and is equal to the equalizer of the restrictions
+of $u'_1$ and $u'_2$ to $V \setminus (Z_1 \cup Z_2)$,
+\item the morphism $E'_{/Z} \to V_{/Z}$ factors through $E_W$
+and the diagram
+$$
+\xymatrix{
+E'_{/Z} \ar[r] \ar[d] & X'_{/T'} \ar[d]^g \\
+E_W \ar[r] & W
+}
+$$
+is cartesian. In particular, the morphism $E'_{/Z} \to E_W$
+is a formal modification as the base change of $g$,
+\item $E'$, $(E' \to V)^{-1}Z$, and $E'_{/Z} \to E_W$
+form a triple as in Situation \ref{situation-contractions}
+with base scheme the locally Noetherian scheme $V$,
+\item given a morphism $\varphi : A \to V$
+of locally Noetherian schemes, the following are equivalent
+\begin{enumerate}
+\item $(Z_1, u'_1, \hat x_1)$ and $(Z_2, u'_2, \hat x_2)$
+restrict to the same element of $F(A)$,
+\item $A \setminus \varphi^{-1}(Z) \to V \setminus Z$
+factors through $E' \times_V (V \setminus Z)$
+and $A_{/\varphi^{-1}(Z)} \to V_{/Z}$
+factors through $E_W$.
+\end{enumerate}
+\end{enumerate}
+We conclude, using
+Lemmas \ref{lemma-solution} and \ref{lemma-functor-is-solution},
+that if there is a solution $E \to V$
+for the triple in (4), then $E$ represents
+$F \times_{\Delta, F \times F} V$ on the category of
+locally Noetherian schemes over $V$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-closed-immersion}
+In Situation \ref{situation-contractions} assume given a closed
+subset $Z \subset S$ such that
+\begin{enumerate}
+\item the inverse image of $Z$ in $X'$ is $T'$,
+\item $U' \to S \setminus Z$ is a closed immersion,
+\item $W \to S_{/Z}$ is a closed immersion.
+\end{enumerate}
+Then there exists a solution $(f : X' \to X, T, a)$
+and moreover $X \to S$ is a closed immersion.
+\end{lemma}
+
+\begin{proof}
+Suppose we have a closed subscheme $X \subset S$ such that
+$X \cap (S \setminus Z) = U'$ and $X_{/Z} = W$. Then
+$X$ represents the functor $F$ (some details omitted) and hence
+is a solution. To find $X$ is clearly a local question on $S$.
+In this way we reduce to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $S = \Spec(A)$ is affine. Let $I \subset A$ be the
+radical ideal cutting out $Z$.
+Write $I = (f_1, \ldots, f_r)$. By assumption we have the following:
+\begin{enumerate}
+\item the closed immersion $U' \to S \setminus Z$ determines
+ideals $J_i \subset A[1/f_i]$ such that $J_i$ and $J_j$
+generate the same ideal in $A[1/f_if_j]$,
+\item the closed immersion $W \to S_{/Z}$ is the map
+$\text{Spf}(A^\wedge/J') \to \text{Spf}(A^\wedge)$ for some
+ideal $J' \subset A^\wedge$ in the $I$-adic completion $A^\wedge$ of $A$.
+\end{enumerate}
+To finish the proof we need to find an ideal $J \subset A$
+such that $J_i = J[1/f_i]$ and $J' = JA^\wedge$. By
+More on Algebra, Proposition \ref{more-algebra-proposition-equivalence}
+it suffices to show that $J_i$ and $J'$ generate the same ideal
+in $A^\wedge[1/f_i]$ for all $i$.
+
+\medskip\noindent
+Recall that $A' = H^0(X', \mathcal{O})$ is a finite $A$-algebra
+whose formation commutes with flat base change
+(Cohomology of Spaces, Lemmas
+\ref{spaces-cohomology-lemma-proper-over-affine-cohomology-finite} and
+\ref{spaces-cohomology-lemma-flat-base-change-cohomology}). Denote
+$J'' = \Ker(A \to A')$\footnote{Contrary to what the reader
+may expect, the ideals $J$ and $J''$ won't agree in general.}.
+We have $J_i = J''A[1/f_i]$ as follows
+from base change to the spectrum of $A[1/f_i]$.
+Observe that we have a commutative diagram
+$$
+\xymatrix{
+X' \ar[d] &
+X'_{/T'} \times_{S_{/Z}} \text{Spf}(A^\wedge) \ar[l] \ar[d] &
+X'_{/T'} \times_W \text{Spf}(A^\wedge/J') \ar@{=}[l] \ar[d] \\
+\Spec(A) &
+\text{Spf}(A^\wedge) \ar[l] &
+\text{Spf}(A^\wedge/J') \ar[l]
+}
+$$
+The middle vertical arrow is the completion of the left vertical
+arrow along the obvious closed subsets. By the theorem on formal
+functions we have
+$$
+(A')^\wedge = \Gamma(X' \times_S \Spec(A^\wedge), \mathcal{O}) =
+\lim H^0(X' \times_S \Spec(A/I^n), \mathcal{O})
+$$
+See Cohomology of Spaces, Theorem
+\ref{spaces-cohomology-theorem-formal-functions}.
+From the diagram we conclude that $J'$ maps to zero in $(A')^\wedge$.
+Hence $J' \subset J'' A^\wedge$. Consider the arrows
+$$
+X'_{/T'} \to
+\text{Spf}(A^\wedge/J''A^\wedge) \to
+\text{Spf}(A^\wedge/J') = W
+$$
+We know the composition $g$ is a formal modification
+(in particular rig-\'etale and rig-surjective) and the second
+arrow is a closed immersion (in particular an adic monomorphism).
+Hence $X'_{/T'} \to \text{Spf}(A^\wedge/J''A^\wedge)$ is
+rig-surjective and rig-\'etale, see
+Algebraization of Formal Spaces, Lemmas
+\ref{restricted-lemma-rig-surjective-alternative-permanence} and
+\ref{restricted-lemma-rig-etale-alternative-permanence}.
+Applying Algebraization of Formal Spaces, Lemmas
+\ref{restricted-lemma-rig-etale-descent} and
+\ref{restricted-lemma-permanence-rig-surjective}
+we conclude that $\text{Spf}(A^\wedge/J''A^\wedge) \to W$
+is rig-\'etale and rig-surjective.
+By Algebraization of Formal Spaces, Lemma
+\ref{restricted-lemma-closed-immersion-rig-smooth-rig-surjective}
+we conclude that $I^n J'' A^\wedge \subset J'$ for some $n > 0$.
+It follows that $J'' A^\wedge[1/f_i] = J' A^\wedge[1/f_i]$ and
+we deduce $J_i A^\wedge[1/f_i] = J' A^\wedge[1/f_i]$ for all
+$i$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal-contractions}
+In Situation \ref{situation-contractions} assume $X' \to S$
+and $W \to S$ are separated. Then the diagonal $\Delta : F \to F \times F$
+is representable by closed immersions.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-closed-immersion}
+with the discussion in Remark \ref{remark-diagonal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sheaf}
+In Situation \ref{situation-contractions} the functor
+$F$ satisfies the sheaf property for all \'etale coverings
+of locally Noetherian schemes over $S$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: morphisms may be defined \'etale locally.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-preserving}
+In Situation \ref{situation-contractions} the functor $F$ is limit preserving:
+for any directed limit $V = \lim V_\lambda$ of Noetherian affine schemes
+over $S$ we have $F(V) = \colim F(V_\lambda)$.
+\end{lemma}
+
+\begin{proof}
+This is an absurdly long proof. Much of it consists of standard
+arguments on limits and \'etale localization. We urge the reader to skip ahead
+to the last part of the proof where something interesting happens.
+
+\medskip\noindent
+Let $V = \lim_{\lambda \in \Lambda} V_\lambda$ be a directed limit of schemes
+over $S$ with $V$ and $V_\lambda$ Noetherian and with affine transition
+morphisms. See Limits, Section \ref{limits-section-limits} for material on
+limits of schemes. We will prove that $\colim F(V_\lambda) \to F(V)$
+is bijective.
+
+\medskip\noindent
+Proof of injectivity: notation.
+Let $\lambda \in \Lambda$ and
+$\xi_{\lambda, 1}, \xi_{\lambda, 2} \in F(V_\lambda)$
+be elements which restrict to the same element of $F(V)$.
+Write $\xi_{\lambda, 1} = (Z_{\lambda, 1}, u'_{\lambda, 1},
+\hat x_{\lambda, 1})$ and
+$\xi_{\lambda, 2} = (Z_{\lambda, 2}, u'_{\lambda, 2}, \hat x_{\lambda, 2})$.
+
+\medskip\noindent
+Proof of injectivity: agreement of $Z_{\lambda, i}$.
+Since $Z_{\lambda, 1}$ and $Z_{\lambda, 2}$ restrict to the same closed
+subset of $V$, we may after increasing $\lambda$ assume
+$Z_{\lambda, 1} = Z_{\lambda, 2}$, see
+Limits, Lemma \ref{limits-lemma-inverse-limit-top} and
+Topology, Lemma \ref{topology-lemma-describe-limits}.
+Let us denote the common value $Z_\lambda \subset V_\lambda$, for
+$\mu \geq \lambda$
+denote $Z_\mu \subset V_\mu$ the inverse image in $V_\mu$,
+and denote $Z$ the inverse image in $V$. We will use below that
+$Z = \lim_{\mu \geq \lambda} Z_\mu$ as schemes if we view $Z$
+and $Z_\mu$ as reduced closed subschemes.
+
+\medskip\noindent
+Proof of injectivity: agreement of $u'_{\lambda, i}$.
+Since $U'$ is locally of finite type over $S$ and since
+the restrictions of $u'_{\lambda, 1}$ and $u'_{\lambda, 2}$
+to $V \setminus Z$ are the same, we may after increasing $\lambda$ assume
+$u'_{\lambda, 1} = u'_{\lambda, 2}$, see Limits, Proposition
+\ref{limits-proposition-characterize-locally-finite-presentation}.
+Let us denote the common value $u'_\lambda$ and denote
+$u'$ the restriction to $V \setminus Z$.
+
+\medskip\noindent
+Proof of injectivity: restatement.
+At this point we have
+$\xi_{\lambda, 1} = (Z_\lambda, u'_\lambda, \hat x_{\lambda, 1})$ and
+$\xi_{\lambda, 2} = (Z_\lambda, u'_\lambda, \hat x_{\lambda, 2})$.
+The main problem we face in this part of the proof
+is to show that the morphisms
+$\hat x_{\lambda, 1}$ and $\hat x_{\lambda, 2}$ become the same after
+increasing $\lambda$.
+
+\medskip\noindent
+Proof of injectivity: agreement of $\hat x_{\lambda, i}|_{Z_\lambda}$.
+Consider the morphisms
+$\hat x_{\lambda, 1}|_{Z_\lambda}, \hat x_{\lambda, 2}|_{Z_\lambda} :
+Z_\lambda \to W_{red}$.
+These morphisms restrict to the same morphism $Z \to W_{red}$.
+Since $W_{red}$ is a scheme locally of finite type over $S$
+we see using Limits, Proposition
+\ref{limits-proposition-characterize-locally-finite-presentation}
+that after replacing $\lambda$ by a bigger index
+we may assume
+$\hat x_{\lambda, 1}|_{Z_\lambda} = \hat x_{\lambda, 2}|_{Z_\lambda} :
+Z_\lambda \to W_{red}$.
+
+\medskip\noindent
+Proof of injectivity: end.
+Next, we are going to apply the discussion in
+Remark \ref{remark-diagonal} to $V_\lambda$ and the two elements
+$\xi_{\lambda, 1}, \xi_{\lambda, 2} \in F(V_\lambda)$.
+This gives us
+\begin{enumerate}
+\item $e_\lambda : E_\lambda' \to V_\lambda$
+separated and locally of finite type,
+\item $e_\lambda^{-1}(V_\lambda \setminus Z_\lambda) \to
+V_\lambda \setminus Z_\lambda$ is an isomorphism,
+\item a monomorphism $E_{W, \lambda} \to V_{\lambda, /Z_\lambda}$
+which is the equalizer of $\hat x_{\lambda, 1}$ and $\hat x_{\lambda, 2}$,
+\item a formal modification $E'_{\lambda, /Z_\lambda} \to E_{W, \lambda}$.
+\end{enumerate}
+Assertion (2) holds by assertion (2) in Remark \ref{remark-diagonal}
+and the preparatory work we did above getting
+$u'_{\lambda, 1} = u'_{\lambda, 2} = u'_\lambda$.
+Since $Z_\lambda = (V_{\lambda, /Z_\lambda})_{red}$ factors through
+$E_{W, \lambda}$ (because
+$\hat x_{\lambda, 1}|_{Z_\lambda} = \hat x_{\lambda, 2}|_{Z_\lambda}$),
+we see from
+Formal Spaces, Lemma \ref{formal-spaces-lemma-monomorphism-iso-over-red}
+that $E_{W, \lambda} \to V_{\lambda, /Z_\lambda}$ is a closed immersion.
+Then we see from assertion (4) in Remark \ref{remark-diagonal}
+and Lemma \ref{lemma-closed-immersion} applied to the triple
+$E_\lambda'$, $e_\lambda^{-1}(Z_\lambda)$,
+$E'_{\lambda, /Z_\lambda} \to E_{W, \lambda}$ over $V_\lambda$ that
+there exists a closed immersion $E_\lambda \to V_\lambda$
+which is a solution for this triple.
+Next we use assertion (5) in Remark \ref{remark-diagonal}
+which combined with Lemma \ref{lemma-solution}
+says that $E_\lambda$ is the ``equalizer'' of $\xi_{\lambda, 1}$
+and $\xi_{\lambda, 2}$. In particular, we see that $V \to V_\lambda$
+factors through $E_\lambda$. Then using Limits, Proposition
+\ref{limits-proposition-characterize-locally-finite-presentation}
+once more we find $\mu \geq \lambda$ such that $V_\mu \to V_\lambda$
+factors through $E_\lambda$ and hence the pullbacks of
+$\xi_{\lambda, 1}$ and $\xi_{\lambda, 2}$ to $V_\mu$ are the same
+as desired.
+
+\medskip\noindent
+Proof of surjectivity: statement.
+Let $\xi = (Z, u', \hat x)$ be an element of $F(V)$.
+We have to find a $\lambda \in \Lambda$ and an element
+$\xi_\lambda \in F(V_\lambda)$ restricting to $\xi$.
+
+\medskip\noindent
+Proof of surjectivity: the question is \'etale local.
+By the unicity proved in the previous part of the proof and by the
+sheaf property of $F$ in Lemma \ref{lemma-sheaf}, the problem
+is local on $V$ in the \'etale topology. More precisely, let $v \in V$.
+We claim it suffices to find an \'etale morphism
+$(\tilde V, \tilde v) \to (V, v)$ and some
+$\lambda$, an \'etale morphism $\tilde V_\lambda \to V_\lambda$,
+and some element $\tilde \xi_\lambda \in F(\tilde V_\lambda)$ such that
+$\tilde V = \tilde V_\lambda \times_{V_\lambda} V$
+and $\xi|_{\tilde V} = \tilde \xi_\lambda|_{\tilde V}$. We omit a detailed proof of
+this claim\footnote{To prove this
+one assembles a collection of the morphisms $\tilde V \to V$
+into a finite \'etale covering and shows that the corresponding morphisms
+$\tilde V_\lambda \to V_\lambda$ form an \'etale covering as well (after
+increasing $\lambda$). Next one uses the injectivity to see that
+the elements $\tilde \xi_\lambda$ glue (after increasing $\lambda$)
+and one uses the sheaf property for $F$ to descend these
+elements to an element of $F(V_\lambda)$.}.
+
+\medskip\noindent
+Proof of surjectivity: rephrasing the problem.
+Recall that any \'etale morphism $(\tilde V, \tilde v) \to (V, v)$
+with $\tilde V$ affine is the base change of an \'etale morphism
+$\tilde V_\lambda \to V_\lambda$ with $\tilde V_\lambda$ affine
+for some $\lambda$, see for example
+Topologies, Lemma \ref{topologies-lemma-limit-fppf-topology}.
+Given $\tilde V_\lambda$ we have
+$\tilde V = \lim_{\mu \geq \lambda} \tilde V_\lambda \times_{V_\lambda} V_\mu$.
+Hence given $(\tilde V, \tilde v) \to (V, v)$ \'etale with $\tilde V$ affine,
+we may replace $(V, v)$ by $(\tilde V, \tilde v)$ and $\xi$ by
+the restriction of $\xi$ to $\tilde V$.
+
+\medskip\noindent
+Proof of surjectivity: reduce to base being affine. In particular,
+suppose $\tilde S \subset S$ is an affine open subscheme such
+that $v \in V$ maps to a point of $\tilde S$. Then we may, according
+to the previous paragraph, replace $V$ by $\tilde V = \tilde S \times_S V$.
+Of course, if we do this, it suffices to solve the problem
+for the functor $F$ restricted to the category of locally Noetherian
+schemes over $\tilde S$. This functor is of course the functor
+associated to the whole situation base changed to $\tilde S$.
+Thus we may and do assume $S = \Spec(R)$ is a Noetherian affine scheme
+for the rest of the proof.
+
+\medskip\noindent
+Proof of surjectivity: easy case.
+If $v \in V \setminus Z$, then we can take $\tilde V = V \setminus Z$.
+This descends to an open subscheme $\tilde V_\lambda \subset V_\lambda$
+for some $\lambda$ by Limits, Lemma
+\ref{limits-lemma-descend-opens}.
+Next, after increasing $\lambda$ we may assume
+there is a morphism $u'_\lambda : \tilde V_\lambda \to U'$
+restricting to $u'$. Taking
+$\tilde \xi_\lambda = (\emptyset, u'_\lambda, \emptyset)$
+gives the desired element of $F(\tilde V_\lambda)$.
+
+\medskip\noindent
+Proof of surjectivity: hard case and reduction to affine $W$.
+The most difficult case comes from considering $v \in Z \subset V$.
+We claim that we can reduce this to the case where $W$ is an
+affine formal scheme; we urge the reader to skip this argument\footnote{Artin's
+approach to the proof of this lemma is to work around this and
+consequently he can avoid proving the injectivity first. Namely, Artin
+consistently works with finite affine \'etale coverings of all the spaces
+in sight, keeping track of the maps between them during the proof.
+In hindsight that might be preferable to what we do here.}.
+Namely, we can choose an \'etale morphism
+$\tilde W \to W$ where $\tilde W$ is an affine formal algebraic space
+such that the image of $v$ under $\hat x : V_{/Z} \to W$ is
+in the image of $\tilde W \to W$ (on reductions).
+Then the morphisms
+$$
+p : \tilde W \times_{W, g} X'_{/T'} \longrightarrow X'_{/T'}
+$$
+and
+$$
+q : \tilde W \times_{W, \hat x} V_{/Z} \to V_{/Z}
+$$
+are \'etale morphisms of locally Noetherian formal algebraic spaces.
+By (an easy case of) Algebraization of Formal Spaces, Theorem
+\ref{restricted-theorem-dilatations-general}
+there exists a morphism $\tilde X' \to X'$ of algebraic spaces
+which is locally of finite type, is an isomorphism over $U'$, and
+such that $\tilde X'_{/T'} \to X'_{/T'}$ is isomorphic to $p$.
+By Algebraization of Formal Spaces, Lemma \ref{restricted-lemma-output-etale}
+the morphism $\tilde X' \to X'$ is \'etale. Denote
+$\tilde T' \subset |\tilde X'|$ the inverse image of $T'$.
+Denote $\tilde U' \subset \tilde X'$ the complementary open subspace.
+Denote $\tilde g : \tilde X'_{/\tilde T'} \to \tilde W$
+the formal modification which is the base change of $g$ by
+$\tilde W \to W$. Then we see that
+$$
+\tilde X',\ \tilde T',\ \tilde U',\ \tilde W,
+\ \tilde g : \tilde X'_{/\tilde T'} \to \tilde W
+$$
+is another example of Situation \ref{situation-contractions}.
+Denote $\tilde F$ the functor constructed from this data.
+There is a transformation of functors
+$$
+\tilde F \longrightarrow F
+$$
+constructed using the morphisms $\tilde X' \to X'$ and
+$\tilde W \to W$ in the obvious manner; details omitted.
+
+\medskip\noindent
+Proof of surjectivity: hard case and reduction to affine $W$, part 2.
+By the same theorem as used above, there exists a morphism
+$\tilde V \to V$ of algebraic spaces which is locally of finite type,
+is an isomorphism over $V \setminus Z$ and such that
+$\tilde V_{/Z} \to V_{/Z}$ is isomorphic to $q$.
+Denote $\tilde Z \subset \tilde V$ the inverse image of $Z$.
+By Algebraization of Formal Spaces, Lemmas
+\ref{restricted-lemma-output-etale} and \ref{restricted-lemma-output-separated}
+the morphism $\tilde V \to V$ is \'etale and separated.
+In particular $\tilde V$ is a (locally Noetherian) scheme, see for example
+Morphisms of Spaces, Proposition
+\ref{spaces-morphisms-proposition-locally-quasi-finite-separated-over-scheme}.
+We have the morphism $u'$ which we may view as a morphism
+$$
+\tilde u' : \tilde V \setminus \tilde Z \longrightarrow \tilde U'
+$$
+where $\tilde U' \subset \tilde X'$ is the open mapping isomorphically
+to $U'$. We have a morphism
+$$
+\tilde {\hat x} :
+\tilde V_{/\tilde Z} = \tilde W \times_{W, \hat x} V_{/Z}
+\longrightarrow
+\tilde W
+$$
+Namely, here we just use the projection. Thus we have the triple
+$$
+\tilde \xi = (\tilde Z, \tilde u', \tilde {\hat x}) \in \tilde F(\tilde V)
+$$
+We omit proving the compatibility condition; hints: if $V' \to V$, $\hat x'$,
+and $x'$ witness the compatibility between $u'$ and $\hat x$, then one
+sets $\tilde V' = V' \times_V \tilde V$ which comes with morphisms
+$\tilde{\hat x}'$ and $\tilde x'$ and shows that this works.
+The image of $\tilde \xi$ under the transformation $\tilde F \to F$
+is the restriction of $\xi$ to $\tilde V$.
+
+\medskip\noindent
+Proof of surjectivity: hard case and reduction to affine $W$, part 3.
+By our choice of $\tilde W \to W$, there is an affine open
+$\tilde V_{open} \subset \tilde V$ (we're running out of notation)
+whose image in $V$ contains our chosen point $v \in V$.
+Now by the case studied in the next paragraph and the remarks made
+earlier, we can descend $\tilde \xi|_{\tilde V_{open}}$
+to some element $\tilde \xi_\lambda$ of $\tilde F$ over
+$\tilde V_{\lambda, open}$
+for some \'etale morphism $\tilde V_{\lambda, open} \to V_\lambda$
+whose base change to $V$ is $\tilde V_{open}$.
+Applying the transformation of functors $\tilde F \to F$
+we obtain the element of $F(\tilde V_{\lambda, open})$
+we were looking for. This reduces us to the case discussed
+in the next paragraph.
+
+\medskip\noindent
+Proof of surjectivity: the case of an affine $W$. We have $v \in Z \subset V$
+and $W$ is an affine formal algebraic space. Recall that
+$$
+\xi = (Z, u', \hat x) \in F(V)
+$$
+We may still replace $V$ by an \'etale neighbourhood of $v$.
+In particular we may and do assume $V$ and $V_\lambda$ are affine.
+
+\medskip\noindent
+Proof of surjectivity: descending $Z$. We can find a $\lambda$
+and a closed subscheme $Z_\lambda \subset V_\lambda$ such that
+$Z$ is the base change of $Z_\lambda$ to $V$. See
+Limits, Lemma \ref{limits-lemma-descend-finite-presentation}.
+Warning: we don't know (and in general it won't be true)
+that $Z_\lambda$ is a reduced closed subscheme of $V_\lambda$.
+For $\mu \geq \lambda$ denote $Z_\mu \subset V_\mu$ the scheme theoretic
+inverse image in $V_\mu$. We will use below that
+$Z = \lim_{\mu \geq \lambda} Z_\mu$ as schemes.
+
+\medskip\noindent
+Proof of surjectivity: descending $u'$.
+Since $U'$ is locally of finite type over $S$
+we may assume after increasing $\lambda$
+that there exists a morphism
+$u'_\lambda : V_\lambda \setminus Z_\lambda \to U'$
+whose restriction to $V \setminus Z$ is $u'$.
+See Limits, Proposition
+\ref{limits-proposition-characterize-locally-finite-presentation}.
+For $\mu \geq \lambda$ we will denote $u'_\mu$ the restriction
+of $u'_\lambda$ to $V_\mu \setminus Z_\mu$.
+
+\medskip\noindent
+Proof of surjectivity: descending a witness.
+Let $V' \to V$, $\hat x'$, and $x'$ witness the compatibility between
+$u'$ and $\hat x$. Using the same references as above we may assume
+(after increasing $\lambda$) that there exists a morphism
+$V'_\lambda \to V_\lambda$ of finite type whose base change to $V$
+is $V' \to V$. After increasing $\lambda$ we may assume
+$V'_\lambda \to V_\lambda$ is proper
+(Limits, Lemma \ref{limits-lemma-eventually-proper}).
+Next, we may assume $V'_\lambda \to V_\lambda$ is an isomorphism
+over $V_\lambda \setminus Z_\lambda$
+(Limits, Lemma \ref{limits-lemma-descend-isomorphism}).
+Next, we may assume there is a morphism $x'_\lambda : V'_\lambda \to X'$
+whose restriction to $V'$ is $x'$.
+Increasing $\lambda$ again we may assume $x'_\lambda$
+agrees with $u'_\lambda$ over $V_\lambda \setminus Z_\lambda$.
+For $\mu \geq \lambda$ we denote
+$V'_\mu$ and $x'_\mu$ the base change of $V'_\lambda$ and the
+restriction of $x'_\lambda$.
+
+\medskip\noindent
+Proof of surjectivity: algebra.
+Write $W = \text{Spf}(B)$, $V = \Spec(A)$, and
+for $\mu \geq \lambda$ write $V_\mu = \Spec(A_\mu)$.
+Denote $I_\mu \subset A_\mu$ and $I \subset A$
+the ideals cutting out $Z_\mu$ and $Z$.
+Then $I_\lambda A_\mu = I_\mu$ and $I_\lambda A = I$.
+The morphism $\hat x$ determines and is determined by a
+continuous ring map
+$$
+(\hat x)^\sharp : B \longrightarrow A^\wedge
+$$
+where $A^\wedge$ is the $I$-adic completion of $A$.
+To finish the proof we have to show that this map descends to a map into
+$A_\mu^\wedge$ for some sufficiently large $\mu$ where $A_\mu^\wedge$
+is the $I_\mu$-adic completion of $A_\mu$.
+This is a nontrivial fact; Artin writes in his paper
+\cite{ArtinII}: ``Since the data (3.5) involve $I$-adic completions,
+which do not commute with direct limits, the verification is somewhat
+delicate. It is an algebraic analogue of a convergence proof in analysis.''
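+
+\medskip\noindent
+Here is a standard illustration, not needed for the argument,
+of this failure of adic completion to commute with direct limits.
+Take
+$$
+A_\mu = R[x_1, \ldots, x_\mu], \quad
+A = \colim A_\mu = R[x_1, x_2, x_3, \ldots], \quad
+I = (x_1, x_2, x_3, \ldots)
+$$
+The compatible system of elements $\sum_{i < n} x_i^i \in A/I^n$
+defines an element of $A^\wedge = \lim A/I^n$ involving all of the
+variables, hence not in the image of $A_\mu^\wedge \to A^\wedge$
+for any $\mu$. Thus $\colim A_\mu^\wedge \to A^\wedge$
+is not surjective (of course here $A$ is not Noetherian).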
+
+\medskip\noindent
+Proof of surjectivity: algebra, more rings.
+Let us denote
+$$
+C_\mu = \Gamma(V'_\mu, \mathcal{O})
+\quad\text{and}\quad
+C = \Gamma(V', \mathcal{O})
+$$
+Observe that $A \to C$ and $A_\mu \to C_\mu$ are finite ring maps as
+$V' \to V$ and $V'_\mu \to V_\mu$ are proper morphisms, see
+Cohomology of Spaces, Lemma
+\ref{spaces-cohomology-lemma-proper-over-affine-cohomology-finite}.
+Since $V = \lim V_\mu$ and $V' = \lim V'_\mu$
+we have
+$$
+A = \colim A_\mu
+\quad\text{and}\quad
+C = \colim C_\mu
+$$
+by Limits, Lemma \ref{limits-lemma-descend-section}\footnote{We don't
+know that $C_\mu = C_\lambda \otimes_{A_\lambda} A_\mu$ as the
+various morphisms aren't flat.}. For an element
+$a \in I$, resp.\ $a \in I_\mu$ the maps $A_a \to C_a$,
+resp.\ $(A_\mu)_a \to (C_\mu)_a$ are isomorphisms by flat base change
+(Cohomology of Spaces, Lemma
+\ref{spaces-cohomology-lemma-flat-base-change-cohomology}).
+Hence the kernel and cokernel of $A \to C$ are supported
+on $V(I)$ and similarly for $A_\mu \to C_\mu$.
+We conclude the kernel and cokernel of $A \to C$
+are annihilated by a power of $I$ and the kernel and cokernel of
+$A_\mu \to C_\mu$ are annihilated by a power of $I_\mu$, see
+Algebra, Lemma \ref{algebra-lemma-Noetherian-power-ideal-kills-module}.
+
+\medskip\noindent
+Proof of surjectivity: algebra, more ring maps.
+Denote $Z_n \subset V$ the $n$th infinitesimal
+neighbourhood of $Z$ and denote $Z_{\mu, n} \subset V_\mu$
+the $n$th infinitesimal neighbourhood of $Z_\mu$.
+By the theorem on formal functions
+(Cohomology of Spaces, Theorem
+\ref{spaces-cohomology-theorem-formal-functions})
+we have
+$$
+C^\wedge = \lim_n H^0(V' \times_V Z_n, \mathcal{O})
+\quad\text{and}\quad
+C_\mu^\wedge =
+\lim_n H^0(V'_\mu \times_{V_\mu} Z_{\mu, n}, \mathcal{O})
+$$
+where $C^\wedge$ and $C_\mu^\wedge$ are the completion with
+respect to $I$ and $I_\mu$.
+Combining the completion of the morphism
+$x'_\mu : V'_\mu \to X'$ with the morphism $g : X'_{/T'} \to W$ we obtain
+$$
+g \circ x'_{\mu, /Z_\mu} :
+V'_{\mu, /Z_\mu} = \colim V_\mu' \times_{V_\mu} Z_{\mu, n}
+\longrightarrow
+W
+$$
+and hence by the description of the completion
+$C_\mu^\wedge$ above we obtain a continuous ring homomorphism
+$$
+(g \circ x'_{\mu, /Z_\mu})^\sharp : B \longrightarrow C_\mu^\wedge
+$$
+The fact that $V' \to V$, $\hat x'$, $x'$ witnesses the compatibility
+between $u'$ and $\hat x$ implies the
+commutativity of the following diagram
+$$
+\xymatrix{
+C_\mu^\wedge \ar[r] &
+C^\wedge \\
+B \ar[u]^{(g \circ x'_{\mu, /Z_\mu})^\sharp} \ar[r]^{(\hat x)^\sharp} &
+A^\wedge \ar[u]
+}
+$$
+
+\medskip\noindent
+Proof of surjectivity: more algebra arguments. Recall that the finite
+$A$-modules $\Ker(A \to C)$ and $\Coker(A \to C)$ are annihilated
+by a power of $I$ and similarly the finite $A_\mu$-modules
+$\Ker(A_\mu \to C_\mu)$ and $\Coker(A_\mu \to C_\mu)$ are annihilated
+by a power of $I_\mu$. This implies that these modules are
+equal to their completions: if a module $M$ satisfies $I^nM = 0$,
+then $M/I^kM = M$ for all $k \geq n$, hence $M^\wedge = M$.
+Since $I$-adic completion on the category of
+finite $A$-modules is exact (see
+Algebra, Section \ref{algebra-section-completion-noetherian})
+it follows that we have
+$$
+\Coker(A^\wedge \to C^\wedge) = \Coker(A \to C)
+$$
+and similarly for kernels and for the maps $A_\mu \to C_\mu$.
+Of course we also have
+$$
+\Ker(A \to C) = \colim \Ker(A_\mu \to C_\mu)
+\quad\text{and}\quad
+\Coker(A \to C) = \colim \Coker(A_\mu \to C_\mu)
+$$
+Recall that $S = \Spec(R)$ is affine. All of the ring maps
+above are $R$-algebra homomorphisms as all of the morphisms
+are morphisms over $S$. By
+Algebraization of Formal Spaces, Lemma
+\ref{restricted-lemma-Noetherian-finite-type-red}
+we see that $B$ is topologically of finite type over $R$.
+Say $B$ is topologically generated by $b_1, \ldots, b_n$.
+Pick some $\mu$ (for example $\lambda$) and consider
+the elements
+$$
+\text{images of }
+(g \circ x'_{\mu, /Z_\mu})^\sharp(b_1)
+, \ldots,
+(g \circ x'_{\mu, /Z_\mu})^\sharp(b_n)
+\text{ in }\Coker(A_\mu \to C_\mu)
+$$
+The images of these elements in $\Coker(A \to C)$ are zero
+by the commutativity of the square above. Since
+$\Coker(A \to C) = \colim \Coker(A_\mu \to C_\mu)$ and these
+cokernels are equal to their completions
+we see that after increasing $\mu$ we may assume these
+images are all zero. This means that the continuous
+homomorphism $(g \circ x'_{\mu, /Z_\mu})^\sharp$ has image contained
+in $\Im(A_\mu \to C_\mu)$.
+Choose elements $a_{\mu, j} \in (A_\mu)^\wedge$ mapping to
+$(g \circ x'_{\mu, /Z_\mu})^\sharp(b_j)$ in $(C_\mu)^\wedge$.
+Then $a_{\mu, j} \in A_\mu^\wedge$ and $(\hat x)^\sharp(b_j) \in A^\wedge$
+map to the same element of $C^\wedge$ by the commutativity of the
+square above. Since
+$\Ker(A \to C) = \colim \Ker(A_\mu \to C_\mu)$ and these kernels
+are equal to their completions, we may after increasing
+$\mu$ adjust our choices of $a_{\mu, j}$ such that
+the image of $a_{\mu, j}$ in $A^\wedge$ is equal to $(\hat x)^\sharp(b_j)$.
+
+\medskip\noindent
+Proof of surjectivity: final algebra arguments.
+Let $\mathfrak b \subset B$ be the ideal of topologically nilpotent
+elements. Let $J \subset R[x_1, \ldots, x_n]$ be the ideal
+consisting of those $h(x_1, \ldots, x_n)$ such that
+$h(b_1, \ldots, b_n) \in \mathfrak b$. Then we get a continuous
+surjection of topological $R$-algebras
+$$
+\Phi : R[x_1, \ldots, x_n]^\wedge \longrightarrow B,\quad
+x_j \longmapsto b_j
+$$
+where the completion on the left hand side is with respect to $J$.
+Since $R[x_1, \ldots, x_n]$ is Noetherian we can choose
+generators $h_1, \ldots, h_m$ for $J$. By the commutativity
+of the square above we see that $h_j(a_{\mu, 1}, \ldots, a_{\mu, n})$ is
+an element of $A_\mu^\wedge$ whose image in $A^\wedge$ is contained
+in $IA^\wedge$. Namely, the ring map $(\hat x)^\sharp$ is continuous
+and $IA^\wedge$ is the ideal of topologically nilpotent elements
+of $A^\wedge$ because $A^\wedge/IA^\wedge = A/I$ is reduced.
+(See Algebra, Section \ref{algebra-section-completion-noetherian}
+for results on completion in Noetherian rings.)
+Since $A/I = \colim A_\mu/I_\mu$ we conclude that after increasing
+$\mu$ we may assume $h_j(a_{\mu, 1}, \ldots, a_{\mu, n})$ is in
+$I_\mu A_\mu^\wedge$. In particular the elements
+$h_j(a_{\mu, 1}, \ldots, a_{\mu, n})$ of $A_\mu^\wedge$
+are topologically nilpotent in $A_\mu^\wedge$.
+Thus we obtain a continuous $R$-algebra homomorphism
+$$
+\Psi : R[x_1, \ldots, x_n]^\wedge \longrightarrow A_\mu^\wedge,\quad
+x_j \longmapsto a_{\mu, j}
+$$
+In order to conclude what we want, we need to see if $\Ker(\Phi)$ is
+annihilated by $\Psi$. This may not be true, but we can achieve
+this after increasing $\mu$. Indeed, since
+$R[x_1, \ldots, x_n]^\wedge$ is Noetherian,
+we can choose generators $g_1, \ldots, g_l$ of the ideal
+$\Ker(\Phi)$. Then we see that
+$$
+\Psi(g_1), \ldots, \Psi(g_l) \in
+\Ker(A_\mu^\wedge \to C_\mu^\wedge) = \Ker(A_\mu \to C_\mu)
+$$
+map to zero in $\Ker(A \to C) = \colim \Ker(A_\mu \to C_\mu)$.
+Hence increasing $\mu$ as before we get the desired result.
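+
+\medskip\noindent
+In other words, after increasing $\mu$ as above, the continuous
+homomorphism $\Psi$ annihilates $\Ker(\Phi)$. Since $\Phi$ is a
+continuous surjection, it follows that $\Psi$ factors as the
+composition of $\Phi$ with a continuous ring homomorphism
+$$
+B \longrightarrow A_\mu^\wedge, \quad b_j \longmapsto a_{\mu, j}
+$$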
+
+\medskip\noindent
+Proof of surjectivity: mopping up. The continuous ring homomorphism
+$B \to (A_\mu)^\wedge$ constructed above determines a morphism
+$\hat x_\mu : V_{\mu, /Z_\mu} \to W$.
+The compatibility of $\hat x_\mu$ and $u'_\mu$ follows
+from the fact that the ring map $B \to (A_\mu)^\wedge$
+is by construction compatible with the ring map $A_\mu \to C_\mu$.
+In fact, the compatibility will be witnessed by the proper morphism
+$V'_\mu \to V_\mu$ and the morphisms
+$x'_\mu$ and $\hat x'_\mu = x'_{\mu, /Z_\mu}$ we used in
+the construction. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-rs}
+In Situation \ref{situation-contractions} the functor $F$ satisfies
+the Rim-Schlessinger condition (RS).
+\end{lemma}
+
+\begin{proof}
+Recall that the condition only involves the evaluation $F(V)$ of the functor
+$F$ on schemes $V$ over $S$ which are spectra of Artinian local rings
+and the restriction maps $F(V_2) \to F(V_1)$ for morphisms $V_1 \to V_2$
+of schemes over $S$ which are spectra of Artinian local rings.
+Thus let $V/S$ be the spectrum of an Artinian local ring.
+If $\xi = (Z, u', \hat x) \in F(V)$ then either $Z = \emptyset$
+or $Z = V$ (set theoretically). In the first case we see that
+$\hat x$ is a morphism from the empty formal algebraic space
+into $W$. In the second case we see that $u'$ is a morphism from
+the empty scheme into $X'$ and we see that $\hat x : V \to W$
+is a morphism into $W$. We conclude that
+$$
+F(V) = U'(V) \amalg W(V)
+$$
+and moreover for $V_1 \to V_2$ as above the induced map
+$F(V_2) \to F(V_1)$ is compatible with this decomposition.
+Hence it suffices to prove that both $U'$ and $W$ satisfy the
+Rim-Schlessinger condition. For $U'$ this follows from
+Lemma \ref{lemma-algebraic-stack-RS}.
+To see that it is true for $W$, we write $W = \colim W_n$ as in
+Formal Spaces, Lemma
+\ref{formal-spaces-lemma-structure-locally-noetherian}.
+Say $V = \Spec(A)$ with $(A, \mathfrak m)$ an Artinian local ring.
+Pick $n \geq 1$ such that $\mathfrak m^n = 0$. Then we have
+$W(V) = W_n(V)$. Hence we see that the Rim-Schlessinger condition
+for $W$ follows from the Rim-Schlessinger condition for $W_n$ for
+all $n$ (which in turn follows from
+Lemma \ref{lemma-algebraic-stack-RS}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-dim}
+In Situation \ref{situation-contractions} the tangent spaces of
+the functor $F$ are finite dimensional.
+\end{lemma}
+
+\begin{proof}
+In the proof of Lemma \ref{lemma-rs} we have seen that
+$F(V) = U'(V) \amalg W(V)$ if $V$ is the spectrum of an Artinian
+local ring. The tangent spaces are computed entirely from evaluations
+of $F$ on such schemes over $S$.
+Hence it suffices to prove that the tangent spaces
+of the functors $U'$ and $W$ are finite dimensional.
+For $U'$ this follows from
+Lemma \ref{lemma-finite-dimension}.
+Write $W = \colim W_n$ as in the proof of Lemma \ref{lemma-rs}.
+Then we see that the tangent spaces of $W$ are equal to the
+tangent spaces of $W_2$, as to get at the tangent space
+we only need to evaluate $W$ on spectra of Artinian local
+rings $(A, \mathfrak m)$ with $\mathfrak m^2 = 0$.
+Then again we see that the tangent spaces of $W_2$ have
+finite dimension by
+Lemma \ref{lemma-finite-dimension}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formal-object-effective}
+In Situation \ref{situation-contractions} assume $X' \to S$ is separated.
+Then every formal object for $F$ is effective.
+\end{lemma}
+
+\begin{proof}
+A formal object $\xi = (R, \xi_n)$ of $F$ consists of a Noetherian
+complete local $S$-algebra $R$ whose residue field is of finite type
+over $S$, together with elements $\xi_n \in F(\Spec(R/\mathfrak m^n))$
+for all $n$ such that $\xi_{n + 1}|_{\Spec(R/\mathfrak m^n)} = \xi_n$.
+By the discussion in the proof of Lemma \ref{lemma-rs}
+we see that either $\xi$ is a formal object of $U'$ or a formal
+object of $W$. In the first case we see that $\xi$ is effective
+by Lemma \ref{lemma-effective}. The second case is the interesting case.
+Set $V = \Spec(R)$. We will construct an element
+$(Z, u', \hat x) \in F(V)$ whose image in $F(\Spec(R/\mathfrak m^n))$
+is $\xi_n$ for all $n \geq 1$.
+
+\medskip\noindent
+We may view the collection of elements $\xi_n$ as a morphism
+$$
+\xi : \text{Spf}(R) \longrightarrow W
+$$
+of locally Noetherian formal algebraic spaces over $S$. Observe that $\xi$
+is {\it not} an adic morphism in general. To fix this, let $I \subset R$
+be the ideal corresponding to the formal closed subspace
+$$
+\text{Spf}(R) \times_{\xi, W} W_{red} \subset \text{Spf}(R)
+$$
+Note that $I \subset \mathfrak m_R$. Set $Z = V(I) \subset V = \Spec(R)$.
+Since $R$ is $\mathfrak m_R$-adically complete it is a fortiori
+$I$-adically complete (Algebra, Lemma \ref{algebra-lemma-complete-by-sub}).
+Moreover, we claim that for each $n \geq 1$ the morphism
+$$
+\xi|_{\text{Spf}(R/I^n)} :
+\text{Spf}(R/I^n)
+\longrightarrow
+W
+$$
+actually comes from a morphism
+$$
+\xi'_n : \Spec(R/I^n) \longrightarrow W
+$$
+Namely, this follows from writing $W = \colim W_n$ as in the
+proof of Lemma \ref{lemma-rs}, noticing that $\xi|_{\text{Spf}(R/I^n)}$
+maps into $W_n$, and applying Formal Spaces, Lemma
+\ref{formal-spaces-lemma-map-into-algebraic-space}
+to algebraize this to a morphism $\Spec(R/I^n) \to W_n$
+as desired. Let us denote $\text{Spf}'(R) = V_{/Z}$ the formal spectrum
+of $R$ endowed with the $I$-adic topology -- equivalently the formal
+completion of $V$ along $Z$. Using the morphisms
+$\xi'_n$ we obtain an adic morphism
+$$
+\hat x = (\xi'_n) : \text{Spf}'(R) \longrightarrow W
+$$
+of locally Noetherian formal algebraic spaces over $S$.
+Consider the base change
+$$
+\text{Spf}'(R) \times_{\hat x, W, g} X'_{/T'} \longrightarrow \text{Spf}'(R)
+$$
+This is a formal modification by
+Algebraization of Formal Spaces, Lemma
+\ref{restricted-lemma-base-change-formal-modification}.
+Hence by the main theorem on dilatations
+(Algebraization of Formal Spaces, Theorem \ref{restricted-theorem-dilatations})
+we obtain a proper morphism
+$$
+V' \longrightarrow V = \Spec(R)
+$$
+which is an isomorphism over $\Spec(R) \setminus V(I)$ and
+whose completion recovers the formal modification above, in other words
+$$
+V' \times_{\Spec(R)} \Spec(R/I^n) =
+\Spec(R/I^n) \times_{\xi'_n, W, g} X'_{/T'}
+$$
+This in particular tells us we have a compatible system of morphisms
+$$
+V' \times_{\Spec(R)} \Spec(R/I^n) \longrightarrow X' \times_S \Spec(R/I^n)
+$$
+Hence by Grothendieck's algebraization theorem (in the form of
+More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-algebraize-morphism})
+we obtain a morphism
+$$
+x' : V' \to X'
+$$
+over $S$ recovering the morphisms displayed above. Then finally
+setting $u' : V \setminus Z \to X'$ the restriction of $x'$ to
+$V \setminus Z \subset V'$ gives the third component of our
+desired element $(Z, u', \hat x) \in F(V)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-openness-smoothness}
+Let $S$ be a locally Noetherian scheme. Let $V$ be a scheme locally
+of finite type over $S$. Let $Z \subset V$ be closed. Let $W$ be
+a locally Noetherian formal algebraic space over $S$ such that
+$W_{red}$ is locally of finite type over $S$. Let $g : V_{/Z} \to W$
+be an adic morphism of formal algebraic spaces over $S$. Let $v \in V$
+be a closed point such that $g$ is versal at $v$ (as in
+Section \ref{section-axioms-functors}).
+Then after replacing $V$ by an open neighbourhood of $v$ the
+morphism $g$ is smooth (see proof).
+\end{lemma}
+
+\begin{proof}
+Since $g$ is adic it is representable by algebraic spaces (Formal Spaces,
+Section \ref{formal-spaces-section-adic}).
+Thus by saying $g$ is smooth we mean that $g$ should be smooth
+in the sense of
+Bootstrap, Definition \ref{bootstrap-definition-property-transformation}.
+
+\medskip\noindent
+Write $W = \colim W_n$ as in Formal Spaces, Lemma
+\ref{formal-spaces-lemma-structure-locally-noetherian}.
+Set $V_n = V_{/Z} \times_{g, W} W_n$.
+Then $V_n$ is a closed subscheme with underlying set $Z$.
+Smoothness of $g : V_{/Z} \to W$ is equivalent to the smoothness
+of all the morphisms $V_n \to W_n$ (this holds because any morphism
+$T \to W$ with $T$ a quasi-compact scheme factors through
+$W_n$ for some $n$). We know that the morphism $V_n \to W_n$
+is smooth at $v$ by
+Lemma \ref{lemma-base-change-versal}\footnote{The lemma applies since
+the diagonal of $W$ is representable by algebraic spaces and
+locally of finite type, see Formal Spaces, Lemma
+\ref{formal-spaces-lemma-diagonal-morphism-formal-algebraic-spaces}
+and we have seen that $W$ has (RS) in the proof of Lemma \ref{lemma-rs}.}.
+Of course this means that given any $n$ we can shrink $V$
+such that $V_n \to W_n$ is smooth. The problem is to find
+an open which works for all $n$ at the same time.
+
+\medskip\noindent
+The question is local on $V$, hence we may assume $S = \Spec(R)$ and
+$V = \Spec(A)$ are affine.
+
+\medskip\noindent
+In this paragraph we reduce to the case where $W$ is an affine formal
+algebraic space. Choose an affine formal scheme $W'$ and an \'etale morphism
+$W' \to W$ such that the image of $v$ in $W_{red}$ is in the
+image of $W'_{red} \to W_{red}$. Then $V_{/Z} \times_{g, W} W' \to V_{/Z}$
+is an adic \'etale morphism of formal algebraic spaces over $S$
+and $V_{/Z} \times_{g, W} W'$ is an affine formal algebraic space.
+By Algebraization of Formal Spaces,
+Lemma \ref{restricted-lemma-algebraize-rig-etale-affine}
+there exists an \'etale morphism $\varphi : V' \to V$ of affine schemes
+such that the completion of $V'$ along $Z' = \varphi^{-1}(Z)$
+is isomorphic to $V_{/Z} \times_{g, W} W'$ over $V_{/Z}$.
+Observe that $v$ is the image of some $v' \in V'$.
+Since smoothness is preserved under base change we see that
+$V'_n \to W'_n$ is smooth at $v'$ for all $n$. In the next paragraph
+we show that after replacing $V'$ by an open neighbourhood of $v'$
+the morphisms $V'_n \to W'_n$ are smooth for all $n$.
+Then, after we replace $V$ by the open image of $V' \to V$,
+we obtain that $V_n \to W_n$ is smooth by \'etale descent of smoothness.
+Some details omitted.
+
+\medskip\noindent
+Assume $S = \Spec(R)$, $V = \Spec(A)$, $Z = V(I)$, and $W = \text{Spf}(B)$.
+Let $v$ correspond to the maximal ideal $I \subset \mathfrak m \subset A$.
+We are given an adic continuous $R$-algebra homomorphism
+$$
+B \longrightarrow A^\wedge
+$$
+Let $\mathfrak b \subset B$ be the ideal of topologically nilpotent
+elements (this is the maximal ideal of definition of the Noetherian adic
+topological ring $B$). Observe that $\mathfrak b A^\wedge$ and
+$IA^\wedge$ are both ideals of definition of the Noetherian adic
+ring $A^\wedge$. Also, $\mathfrak m A^\wedge$ is a maximal ideal
+of $A^\wedge$ containing both $\mathfrak b A^\wedge$ and $IA^\wedge$.
+We are given that
+$$
+B_n = B/\mathfrak b^n \to A^\wedge/\mathfrak b^n A^\wedge = A_n
+$$
+is smooth at $\mathfrak m$ for all $n$. By the discussion above
+we may and do assume that $B_1 \to A_1$ is a smooth ring map.
+Denote $\mathfrak m_1 \subset A_1$ the maximal ideal corresponding
+to $\mathfrak m$. Since smoothness implies flatness, we see that:
+for all $n \geq 1$ the map
+$$
+\mathfrak b^n/\mathfrak b^{n + 1} \otimes_{B_1} (A_1)_{\mathfrak m_1}
+\longrightarrow
+\left(\mathfrak b^nA^\wedge/\mathfrak b^{n + 1}A^\wedge\right)_{\mathfrak m_1}
+$$
+is an isomorphism (see
+Algebra, Lemma \ref{algebra-lemma-what-does-it-mean-again}).
+Consider the Rees algebra
+$$
+B' = \bigoplus\nolimits_{n \geq 0} \mathfrak b^n/\mathfrak b^{n + 1}
+$$
+which is a finite type graded algebra over the Noetherian ring $B_1$ and
+the Rees algebra
+$$
+A' = \bigoplus\nolimits_{n \geq 0}
+\mathfrak b^nA^\wedge/\mathfrak b^{n + 1}A^\wedge
+$$
+which is a finite type graded algebra over the Noetherian ring $A_1$.
+Consider the homomorphism of graded $A_1$-algebras
+$$
+\Psi : B' \otimes_{B_1} A_1 \longrightarrow A'
+$$
+By the above this map is an isomorphism after localizing at
+the maximal ideal $\mathfrak m_1$ of $A_1$.
+Hence $\Ker(\Psi)$, resp.\ $\Coker(\Psi)$ is a finite module
+over $B' \otimes_{B_1} A_1$, resp.\ $A'$ whose localization
+at $\mathfrak m_1$ is zero. It follows that after replacing
+$A_1$ (and correspondingly $A$) by a principal localization
+we may assume $\Psi$ is an isomorphism. (This is the key step of the proof.)
+Then working backwards we see that $B_n \to A_n$ is flat, see
+Algebra, Lemma \ref{algebra-lemma-what-does-it-mean-again}.
+Hence $B_n \to A_n$ is smooth (as a flat ring map with smooth
+fibres, see Algebra, Lemma \ref{algebra-lemma-flat-fibre-smooth})
+and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-openness-versality}
+In Situation \ref{situation-contractions} the functor
+$F$ satisfies openness of versality.
+\end{lemma}
+
+\begin{proof}
+We have to show the following. Given a scheme $V$ locally of finite type over
+$S$, given $\xi \in F(V)$, and given a finite type point $v_0 \in V$ such that
+$\xi$ is versal at $v_0$, after replacing $V$ by an open neighbourhood
+of $v_0$ we have that $\xi$ is versal at every finite type point of $V$.
+Write $\xi = (Z, u', \hat x)$.
+
+\medskip\noindent
+First case: $v_0 \not \in Z$. Then we can first replace $V$ by
+$V \setminus Z$. Hence we see that $\xi = (\emptyset, u', \emptyset)$
+and the morphism $u' : V \to X'$ is versal at $v_0$.
+By More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-lifting-along-artinian-at-point}
+this means that $u' : V \to X'$ is smooth at $v_0$.
+Since the set of points where a morphism is smooth is open,
+we can after shrinking $V$ assume $u'$ is smooth.
+Then the same lemma tells us that $\xi$ is versal at every
+point as desired.
+
+\medskip\noindent
+Second case: $v_0 \in Z$. Write $W = \colim W_n$ as in
+Formal Spaces, Lemma \ref{formal-spaces-lemma-structure-locally-noetherian}.
+By Lemma \ref{lemma-openness-smoothness} we may assume $\hat x : V_{/Z} \to W$
+is a smooth morphism of formal algebraic spaces. It follows immediately
+that $\xi = (Z, u', \hat x)$ is versal at all finite type points of $Z$.
+Let $V' \to V$, $\hat x'$, and $x'$ witness the compatibility between $u'$
+and $\hat x$. We see that $\hat x' : V'_{/Z} \to X'_{/T'}$ is smooth as a
+base change of $\hat x$. Since $\hat x'$ is the completion of
+$x' : V' \to X'$ this implies that $x' : V' \to X'$ is smooth at all
+points of $(V' \to V)^{-1}(Z) = |x'|^{-1}(T') \subset |V'|$
+by the already used More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-lifting-along-artinian-at-point}.
+Since the set of smooth points of a morphism is open, we see that
+the closed set of points $B \subset |V'|$ where $x'$ is not smooth
+does not meet $(V' \to V)^{-1}(Z)$. Since $V' \to V$ is proper and
+hence closed, we see that $(V' \to V)(B) \subset V$ is a closed
+subset not meeting $Z$. Hence after shrinking $V$ we may assume
+$B = \emptyset$, i.e., $x'$ is smooth. By the discussion in the previous
+paragraph this exactly means that $\xi$ is versal at all finite type
+points of $V$ not contained in $Z$ and the proof is complete.
+\end{proof}
+
+\noindent
+Here is the final result.
+
+\begin{theorem}
+\label{theorem-contractions}
+\begin{reference}
+\cite[Theorem 3.1]{ArtinII}
+\end{reference}
+Let $S$ be a locally Noetherian scheme such that $\mathcal{O}_{S, s}$
+is a G-ring for all finite type points $s \in S$. Let $X'$ be an algebraic
+space locally of finite type over $S$. Let $T' \subset |X'|$ be a closed
+subset. Let $W$ be a locally Noetherian formal algebraic space over $S$
+with $W_{red}$ locally of finite type over $S$. Finally, we let
+$$
+g : X'_{/T'} \longrightarrow W
+$$
+be a formal modification, see Algebraization of Formal Spaces, Definition
+\ref{restricted-definition-formal-modification}. If $X'$ and $W$ are
+separated\footnote{See Remark \ref{remark-separated-needed}.} over $S$, then
+there exists a proper morphism $f : X' \to X$ of algebraic spaces over $S$,
+a closed subset $T \subset |X|$, and an isomorphism $a : X_{/T} \to W$
+of formal algebraic spaces such that
+\begin{enumerate}
+\item $T'$ is the inverse image of $T$ by $|f| : |X'| \to |X|$,
+\item $f : X' \to X$ maps $X' \setminus T'$ isomorphically to
+$X \setminus T$, and
+\item $g = a \circ f_{/T}$ where $f_{/T} : X'_{/T'} \to X_{/T}$
+is the induced morphism.
+\end{enumerate}
+In other words, $(f : X' \to X, T, a)$ is a solution as defined earlier in
+this section.
+\end{theorem}
+
+\begin{proof}
+Let $F$ be the functor constructed using $X'$, $T'$, $W$, $g$ in this section.
+By Lemma \ref{lemma-functor-is-solution} it suffices to show that
+$F$ corresponds to an algebraic space $X$ locally of finite type over $S$.
+In order to do this, we will apply
+Proposition \ref{proposition-spaces-diagonal-representable-noetherian}.
+Namely, by Lemma \ref{lemma-diagonal-contractions}
+the diagonal of $F$ is representable by closed immersions
+and by
+Lemmas \ref{lemma-sheaf}, \ref{lemma-limit-preserving},
+\ref{lemma-rs}, \ref{lemma-finite-dim},
+\ref{lemma-formal-object-effective}, and \ref{lemma-openness-versality}
+we have axioms [0], [1], [2], [3], [4], and [5].
+\end{proof}
+
+\begin{remark}
+\label{remark-separated-needed}
+The proof of Theorem \ref{theorem-contractions} uses that $X'$ and $W$
+are separated over $S$ in two places. First, the proof uses this in showing
+$\Delta : F \to F \times F$ is representable by algebraic spaces.
+This use of the assumption can be entirely avoided by proving
+that $\Delta$ is representable by applying the theorem in the
+separated case to the triples
+$E'$, $(E' \to V)^{-1}Z$, and $E'_{/Z} \to E_W$
+found in Remark \ref{remark-diagonal} (this is the usual bootstrap
+procedure for the diagonal). Thus the proof of
+Lemma \ref{lemma-formal-object-effective} is the only
+place in our proof of Theorem \ref{theorem-contractions}
+where we really need to use that $X' \to S$ is separated.
+The reader checks that we use the assumption only to obtain
+the morphism $x' : V' \to X'$. The existence of $x'$ can be shown,
+using results in the literature, if $X' \to S$ is quasi-separated, see
+More on Morphisms of Spaces, Remark
+\ref{spaces-more-morphisms-remark-weaken-separation-axioms-question}.
+We conclude the theorem holds as stated with
+``separated'' replaced by ``quasi-separated''. If we ever need this
+we will precisely state and carefully prove this here.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/bibliography.tex b/books/stacks/bibliography.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0f456faecb0259718c7cad6dbeb8c99c2129fea2
--- /dev/null
+++ b/books/stacks/bibliography.tex
@@ -0,0 +1,16 @@
+\input{preamble}
+
+% OK, start here.
+%
+
+\begin{document}
+
+\title{Bibliography}
+
+\maketitle
+
+\bibliography{my}
+\nocite{*}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/bootstrap.tex b/books/stacks/bootstrap.tex
new file mode 100644
index 0000000000000000000000000000000000000000..a08a0b916d618526886ac1d60875d72d3f0cbcee
--- /dev/null
+++ b/books/stacks/bootstrap.tex
@@ -0,0 +1,2348 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Bootstrap}
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+
+
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we use the material from the preceding sections to
+give criteria under which a presheaf of sets on the category of schemes
+is an algebraic space. Some of this material comes from the work
+of Artin, see \cite{ArtinI}, \cite{ArtinII},
+\cite{Artin-Theorem-Representability},
+\cite{Artin-Construction-Techniques},
+\cite{Artin-Algebraic-Spaces},
+\cite{Artin-Algebraic-Approximation},
+\cite{Artin-Implicit-Function},
+and \cite{ArtinVersal}.
+However, our method will be to use as much as possible arguments
+similar to those of the paper by Keel and Mori, see
+\cite{K-M}.
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+The standing assumption is that all schemes are contained in
+a big fppf site $\Sch_{fppf}$. And all rings $A$ considered
+have the property that $\Spec(A)$ is (isomorphic to) an
+object of this big site.
+
+\medskip\noindent
+Let $S$ be a scheme and let $X$ be an algebraic space over $S$.
+In this chapter and the following we will write $X \times_S X$
+for the product of $X$ with itself (in the category of algebraic
+spaces over $S$), instead of $X \times X$.
+
+
+
+
+\section{Morphisms representable by algebraic spaces}
+\label{section-morphism-representable-by-spaces}
+
+\noindent
+Here we define the notion of one presheaf being relatively representable
+by algebraic spaces over another, and we prove some properties of this notion.
+
+\begin{definition}
+\label{definition-morphism-representable-by-spaces}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $F$, $G$ be presheaves on $\Sch_{fppf}/S$.
+We say a morphism $a : F \to G$ is
+{\it representable by algebraic spaces}
+if for every $U \in \Ob((\Sch/S)_{fppf})$ and
+any $\xi : U \to G$ the fiber product $U \times_{\xi, G} F$
+is an algebraic space.
+\end{definition}
+
+\noindent
+Here is a sanity check.
+
+\begin{lemma}
+\label{lemma-morphism-spaces-is-representable-by-spaces}
+Let $S$ be a scheme.
+Let $f : X \to Y$ be a morphism of algebraic spaces over $S$.
+Then $f$ is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+This is formal. It relies on the fact that
+the category of algebraic spaces over $S$ has fibre products, see
+Spaces, Lemma \ref{spaces-lemma-fibre-product-spaces}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-transformation}
+\begin{slogan}
+A base change of a representable by algebraic spaces morphism of
+presheaves is representable by algebraic spaces.
+\end{slogan}
+Let $S$ be a scheme. Let
+$$
+\xymatrix{
+G' \times_G F \ar[r] \ar[d]^{a'} & F \ar[d]^a \\
+G' \ar[r] & G
+}
+$$
+be a fibre square of presheaves on $(\Sch/S)_{fppf}$.
+If $a$ is representable by algebraic spaces so is $a'$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is formal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-transformation-to-sheaf}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $F, G : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$.
+Let $a : F \to G$ be representable by algebraic spaces.
+If $G$ is a sheaf, then so is $F$.
+\end{lemma}
+
+\begin{proof}
+(Same as the proof of
+Spaces, Lemma \ref{spaces-lemma-representable-transformation-to-sheaf}.)
+Let $\{\varphi_i : T_i \to T\}$ be a covering of the site
+$(\Sch/S)_{fppf}$.
+Let $s_i \in F(T_i)$ which satisfy the sheaf condition.
+Then $\sigma_i = a(s_i) \in G(T_i)$ satisfy the sheaf condition
+also. Hence there exists a unique $\sigma \in G(T)$ such
+that $\sigma_i = \sigma|_{T_i}$. By assumption
+$F' = h_T \times_{\sigma, G, a} F$ is a sheaf.
+Note that $(\varphi_i, s_i) \in F'(T_i)$ satisfy the
+sheaf condition also, and hence come from some unique
+$(\text{id}_T, s) \in F'(T)$. Clearly $s$ is the section of
+$F$ we are looking for.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-transformation-diagonal}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $F, G : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$.
+Let $a : F \to G$ be representable by algebraic spaces.
+Then $\Delta_{F/G} : F \to F \times_G F$ is representable by
+algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+(Same as the proof of
+Spaces, Lemma \ref{spaces-lemma-representable-transformation-diagonal}.)
+Let $U$ be a scheme. Let $\xi = (\xi_1, \xi_2) \in (F \times_G F)(U)$.
+Set $\xi' = a(\xi_1) = a(\xi_2) \in G(U)$.
+By assumption there exist an algebraic space $V$ and a morphism $V \to U$
+representing the fibre product $U \times_{\xi', G} F$.
+In particular, the elements $\xi_1, \xi_2$ give morphisms
+$f_1, f_2 : U \to V$ over $U$. Because $V$ represents the
+fibre product $U \times_{\xi', G} F$ and because
+$\xi' = a \circ \xi_1 = a \circ \xi_2$
+we see that if $g : U' \to U$ is a morphism then
+$$
+g^*\xi_1 = g^*\xi_2
+\Leftrightarrow
+f_1 \circ g = f_2 \circ g.
+$$
+In other words, we see that $U \times_{\xi, F \times_G F} F$
+is represented by $V \times_{\Delta, V \times V, (f_1, f_2)} U$
+which is an algebraic space.
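+
+\medskip\noindent
+Here is a sketch of the verification of the last identification on
+$T$-valued points. A $T$-point of $U \times_{\xi, F \times_G F} F$ is a
+pair $(h, t)$ where $h : T \to U$ and $t \in F(T)$ satisfy
+$(t, t) = (h^*\xi_1, h^*\xi_2)$, in other words $t = h^*\xi_1$ and
+$h^*\xi_1 = h^*\xi_2$. By the displayed equivalence the last condition
+means $f_1 \circ h = f_2 \circ h$, which is exactly the condition
+defining a $T$-point of $V \times_{\Delta, V \times V, (f_1, f_2)} U$.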
+\end{proof}
+
+\noindent
+The proof of
+Lemma \ref{lemma-representable-by-spaces-over-space}
+below is actually slightly tricky. Namely,
+we cannot use the argument of the proof of
+Spaces, Lemma \ref{spaces-lemma-representable-over-space}
+because we do not yet know that a composition of transformations
+representable by algebraic spaces is representable by algebraic
+spaces. In fact, we will use this lemma to prove that statement.
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-over-space}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $F, G : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$.
+Let $a : F \to G$ be representable by algebraic spaces.
+If $G$ is an algebraic space, then so is $F$.
+\end{lemma}
+
+\begin{proof}
+We have seen in
+Lemma \ref{lemma-representable-by-spaces-transformation-to-sheaf}
+that $F$ is a sheaf.
+
+\medskip\noindent
+Let $U$ be a scheme and let $U \to G$ be a surjective \'etale morphism.
+In this case $U \times_G F$ is an algebraic space. Let $W$ be a scheme
+and let $W \to U \times_G F$ be a surjective \'etale morphism.
+
+\medskip\noindent
+First we claim that $W \to F$ is representable.
+To see this let $X$ be a scheme and let $X \to F$ be a morphism.
+Then
+$$
+W \times_F X = W \times_{U \times_G F} \left((U \times_G F) \times_F X\right)
+= W \times_{U \times_G F} (U \times_G X)
+$$
+Since both $U \times_G F$ and $G$ are algebraic spaces, whose diagonals
+are representable, we see that this is a scheme.
+
+\medskip\noindent
+Next, we claim that $W \to F$ is surjective and \'etale (this makes
+sense now that we know it is representable). This follows from the
+formula above since both $W \to U \times_G F$ and $U \to G$
+are \'etale and surjective, hence
+$W \times_{U \times_G F} (U \times_G X) \to U \times_G X$ and
+$U \times_G X \to X$ are surjective and \'etale, and the composition of
+surjective \'etale morphisms is surjective and \'etale.
+
+\medskip\noindent
+Set $R = W \times_F W$. By the above $R$ is a scheme and
+the projections $t, s : R \to W$
+are \'etale. It is clear that $R$ is an equivalence relation, and
+$W \to F$ is a surjection of sheaves. Hence $R$ is an \'etale equivalence
+relation and $F = W/R$. Hence $F$ is an algebraic space by
+Spaces,
+Theorem \ref{spaces-theorem-presentation}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-by-spaces}
+Let $S$ be a scheme.
+Let $a : F \to G$ be a map of presheaves on $(\Sch/S)_{fppf}$.
+Suppose $a : F \to G$ is representable by algebraic spaces.
+If $X$ is an algebraic space over $S$, and $X \to G$ is a map of presheaves
+then $X \times_G F$ is an algebraic space.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-base-change-transformation} the transformation
+$X \times_G F \to X$ is representable by algebraic spaces. Hence it is
+an algebraic space by
+Lemma \ref{lemma-representable-by-spaces-over-space}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-transformation}
+Let $S$ be a scheme.
+Let
+$$
+\xymatrix{
+F \ar[r]^a & G \ar[r]^b & H
+}
+$$
+be maps of presheaves on $(\Sch/S)_{fppf}$.
+If $a$ and $b$ are representable by algebraic spaces, so is
+$b \circ a$.
+\end{lemma}
+
+\begin{proof}
+Let $T$ be a scheme over $S$, and let $T \to H$ be a morphism.
+By assumption $T \times_H G$ is an algebraic space. Hence by
+Lemma \ref{lemma-representable-by-spaces}
+we see that $T \times_H F = (T \times_H G) \times_G F$ is an
+algebraic space as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-transformations}
+Let $S$ be a scheme.
+Let $F_i, G_i : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$, $i = 1, 2$.
+Let $a_i : F_i \to G_i$, $i = 1, 2$
+be representable by algebraic spaces.
+Then
+$$
+a_1 \times a_2 : F_1 \times F_2 \longrightarrow G_1 \times G_2
+$$
+is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Write $a_1 \times a_2$ as the composition
+$F_1 \times F_2 \to G_1 \times F_2 \to G_1 \times G_2$.
+The first arrow is the base change of $a_1$ by the map
+$G_1 \times F_2 \to G_1$, and the second arrow
+is the base change of $a_2$ by the map
+$G_1 \times G_2 \to G_2$. Hence this lemma is a formal
+consequence of Lemmas \ref{lemma-composition-transformation}
+and \ref{lemma-base-change-transformation}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-permanence}
+Let $S$ be a scheme. Let $a : F \to G$ and $b : G \to H$ be
+transformations of functors $(\Sch/S)_{fppf}^{opp} \to \textit{Sets}$.
+Assume
+\begin{enumerate}
+\item $\Delta : G \to G \times_H G$ is representable
+by algebraic spaces, and
+\item $b \circ a : F \to H$ is representable by algebraic spaces.
+\end{enumerate}
+Then $a$ is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be a scheme over $S$ and let $\xi \in G(U)$. Then
+$$
+U \times_{\xi, G, a} F =
+(U \times_{b(\xi), H, b \circ a} F) \times_{(\xi, a), (G \times_H G), \Delta} G
+$$
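+Here is a sketch of why this identity holds, on $T$-valued points:
+a point of the right hand side is a triple $(h, t, g)$ where
+$h : T \to U$, $t \in F(T)$, and $g \in G(T)$ satisfy
+$b(a(t)) = b(h^*\xi)$ and $(h^*\xi, a(t)) = \Delta(g) = (g, g)$.
+The second condition forces $g = h^*\xi = a(t)$, so such triples
+correspond exactly to pairs $(h, t)$ with $a(t) = h^*\xi$, that is,
+to $T$-valued points of $U \times_{\xi, G, a} F$.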
+Hence the result using Lemma \ref{lemma-representable-by-spaces}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glueing-sheaves}
+Let $S \in \Ob(\Sch_{fppf})$. Let $F$ be a presheaf of sets on
+$(\Sch/S)_{fppf}$. Assume
+\begin{enumerate}
+\item $F$ is a sheaf for the Zariski topology on $(\Sch/S)_{fppf}$,
+\item there exists an index set $I$ and subfunctors $F_i \subset F$ such that
+\begin{enumerate}
+\item each $F_i$ is an fppf sheaf,
+\item each $F_i \to F$ is representable by algebraic spaces,
+\item $\coprod F_i \to F$ becomes surjective after fppf sheafification.
+\end{enumerate}
+\end{enumerate}
+Then $F$ is an fppf sheaf.
+\end{lemma}
+
+\begin{proof}
+Let $T \in \Ob((\Sch/S)_{fppf})$ and let $s \in F(T)$. By (2)(c)
+there exists an fppf covering $\{T_j \to T\}$ such that
+$s|_{T_j}$ is a section of $F_{\alpha(j)}$ for some $\alpha(j) \in I$.
+Let $W_j \subset T$ be the image of $T_j \to T$
+which is an open subscheme, see Morphisms, Lemma \ref{morphisms-lemma-fppf-open}.
+By (2)(b) we see
+$F_{\alpha(j)} \times_{F, s|_{W_j}} W_j \to W_j$ is a monomorphism
+of algebraic spaces through which $T_j$ factors. Since $\{T_j \to W_j\}$
+is an fppf covering, we conclude that
+$F_{\alpha(j)} \times_{F, s|_{W_j}} W_j = W_j$, in other words
+$s|_{W_j} \in F_{\alpha(j)}(W_j)$. Hence we conclude that
+$\coprod F_i \to F$ is surjective for the Zariski topology.
+
+\medskip\noindent
+Let $\{T_j \to T\}$ be an fppf covering in $(\Sch/S)_{fppf}$.
+Let $s, s' \in F(T)$ with $s|_{T_j} = s'|_{T_j}$ for all $j$.
+We want to show that $s, s'$ are equal. As $F$ is a Zariski sheaf by (1)
+we may work Zariski locally on $T$. By the result of the previous paragraph
+we may assume there exists an $i$ such that $s \in F_i(T)$. Then we see that
+$s'|_{T_j}$ is a section of $F_i$. By (2)(b) we see
+$F_{i} \times_{F, s'} T \to T$ is a monomorphism of algebraic spaces
+through which all of the $T_j$ factor. Hence we conclude that
+$s' \in F_i(T)$. Since $F_i$ is a sheaf for the fppf topology
+we conclude that $s = s'$.
+
+\medskip\noindent
+Let $\{T_j \to T\}$ be an fppf covering in $(\Sch/S)_{fppf}$ and let
+$s_j \in F(T_j)$ such that
+$s_j|_{T_j \times_T T_{j'}} = s_{j'}|_{T_j \times_T T_{j'}}$. By assumption
+(2)(c) we may refine the covering and assume that $s_j \in F_{\alpha(j)}(T_j)$
+for some $\alpha(j) \in I$. Let $W_j \subset T$ be the image of $T_j \to T$
+which is an open subscheme, see Morphisms, Lemma \ref{morphisms-lemma-fppf-open}.
+Then $\{T_j \to W_j\}$ is an fppf covering. Since $F_{\alpha(j)}$ is a sub
+presheaf of $F$ we see that the two restrictions of $s_j$ to
+$T_j \times_{W_j} T_j$ agree as elements of
+$F_{\alpha(j)}(T_j \times_{W_j} T_j)$. Hence, the sheaf condition for
+$F_{\alpha(j)}$ implies there exists an $s'_j \in F_{\alpha(j)}(W_j)$
+whose restriction to $T_j$ is $s_j$. For a pair of indices
+$j$ and $j'$ the sections $s'_j|_{W_j \cap W_{j'}}$ and
+$s'_{j'}|_{W_j \cap W_{j'}}$ of $F$ agree by the result of the
+previous paragraph. This finishes the proof by the fact that
+$F$ is a Zariski sheaf.
+\end{proof}
+
+
+
+
+
+\section{Properties of maps of presheaves representable by algebraic spaces}
+\label{section-representable-by-spaces-properties}
+
+\noindent
+Here is the definition that makes this work.
+
+\begin{definition}
+\label{definition-property-transformation}
+Let $S$ be a scheme. Let $a : F \to G$ be a map of presheaves on
+$(\Sch/S)_{fppf}$ which is representable by algebraic spaces.
+Let $\mathcal{P}$ be a property of morphisms of algebraic spaces which
+\begin{enumerate}
+\item is preserved under any base change, and
+\item is fppf local on the base, see
+Descent on Spaces,
+Definition \ref{spaces-descent-definition-property-morphisms-local}.
+\end{enumerate}
+In this case we say that $a$ has {\it property $\mathcal{P}$} if for every
+scheme $U$ and $\xi : U \to G$ the resulting morphism of algebraic spaces
+$U \times_G F \to U$ has property $\mathcal{P}$.
+\end{definition}
+
+\noindent
+It is important to note that we will only use this definition for
+properties of morphisms that are stable under base change, and
+local in the fppf topology on the base. This is
+not because the definition doesn't make sense otherwise; rather it
+is because we may want to give a different definition which is
+better suited to the property we have in mind.
+
+\medskip\noindent
+The definition above applies\footnote{Being preserved under base
+change holds by
+Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-base-change-surjective},
+\ref{spaces-morphisms-lemma-base-change-quasi-compact},
+\ref{spaces-morphisms-lemma-base-change-etale},
+\ref{spaces-morphisms-lemma-base-change-smooth},
+\ref{spaces-morphisms-lemma-base-change-flat},
+\ref{spaces-morphisms-lemma-base-change-separated},
+\ref{spaces-morphisms-lemma-base-change-finite-type},
+\ref{spaces-morphisms-lemma-base-change-quasi-finite},
+\ref{spaces-morphisms-lemma-base-change-finite-presentation},
+\ref{spaces-morphisms-lemma-base-change-affine},
+\ref{spaces-morphisms-lemma-base-change-proper}, and
+Spaces, Lemma
+\ref{spaces-lemma-base-change-immersions}.
+Being fppf local on the base holds by
+Descent on Spaces, Lemmas
+\ref{spaces-descent-lemma-descending-property-surjective},
+\ref{spaces-descent-lemma-descending-property-quasi-compact},
+\ref{spaces-descent-lemma-descending-property-etale},
+\ref{spaces-descent-lemma-descending-property-smooth},
+\ref{spaces-descent-lemma-descending-property-flat},
+\ref{spaces-descent-lemma-descending-property-separated},
+\ref{spaces-descent-lemma-descending-property-finite-type},
+\ref{spaces-descent-lemma-descending-property-quasi-finite},
+\ref{spaces-descent-lemma-descending-property-locally-finite-presentation},
+\ref{spaces-descent-lemma-descending-property-affine},
+\ref{spaces-descent-lemma-descending-property-proper}, and
+\ref{spaces-descent-lemma-descending-property-closed-immersion}.
+}
+for example to the properties of being
+``surjective'',
+``quasi-compact'',
+``\'etale'',
+``smooth'',
+``flat'',
+``separated'',
+``(locally) of finite type'',
+``(locally) quasi-finite'',
+``(locally) of finite presentation'',
+``affine'',
+``proper'', and
+``a closed immersion''.
+In other words, $a$ is
+{\it surjective}
+(resp.\ {\it quasi-compact},
+{\it \'etale},
+{\it smooth},
+{\it flat},
+{\it separated},
+{\it (locally) of finite type},
+{\it (locally) quasi-finite},
+{\it (locally) of finite presentation},
+{\it affine},
+{\it proper},
+{\it a closed immersion})
+if for every scheme $T$ and map $\xi : T \to G$
+the morphism of algebraic spaces $T \times_{\xi, G} F \to T$ is
+surjective
+(resp.\ quasi-compact,
+\'etale,
+smooth,
+flat,
+separated,
+(locally) of finite type,
+(locally) quasi-finite,
+(locally) of finite presentation,
+affine,
+proper,
+a closed immersion).
+
+\medskip\noindent
+Next, we check consistency with the already existing notions. By
+Lemma \ref{lemma-morphism-spaces-is-representable-by-spaces}
+any morphism between algebraic spaces over $S$ is representable by
+algebraic spaces. And by
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-surjective-local}
+(resp.\ \ref{spaces-morphisms-lemma-quasi-compact-local},
+\ref{spaces-morphisms-lemma-etale-local},
+\ref{spaces-morphisms-lemma-smooth-local},
+\ref{spaces-morphisms-lemma-flat-local},
+\ref{spaces-morphisms-lemma-separated-local},
+\ref{spaces-morphisms-lemma-finite-type-local},
+\ref{spaces-morphisms-lemma-quasi-finite-local},
+\ref{spaces-morphisms-lemma-finite-presentation-local},
+\ref{spaces-morphisms-lemma-affine-local},
+\ref{spaces-morphisms-lemma-proper-local},
+\ref{spaces-morphisms-lemma-closed-immersion-local})
+the definition of
+surjective
+(resp.\ quasi-compact,
+\'etale,
+smooth,
+flat,
+separated,
+(locally) of finite type,
+(locally) quasi-finite,
+(locally) of finite presentation,
+affine,
+proper,
+closed immersion)
+above agrees with the already existing definition of morphisms
+of algebraic spaces.
+
+\medskip\noindent
+Some formal lemmas follow.
+
+\begin{lemma}
+\label{lemma-base-change-transformation-property}
+Let $S$ be a scheme.
+Let $\mathcal{P}$ be a property as in
+Definition \ref{definition-property-transformation}.
+Let
+$$
+\xymatrix{
+G' \times_G F \ar[r] \ar[d]^{a'} & F \ar[d]^a \\
+G' \ar[r] & G
+}
+$$
+be a fibre square of presheaves on $(\Sch/S)_{fppf}$.
+If $a$ is representable by algebraic spaces and has $\mathcal{P}$,
+then so does $a'$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is formal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-transformation-property}
+Let $S$ be a scheme.
+Let $\mathcal{P}$ be a property as in
+Definition \ref{definition-property-transformation},
+and assume $\mathcal{P}$ is stable under composition.
+Let
+$$
+\xymatrix{
+F \ar[r]^a & G \ar[r]^b & H
+}
+$$
+be maps of presheaves on $(\Sch/S)_{fppf}$.
+If $a$ and $b$ are representable by algebraic spaces and have
+$\mathcal{P}$, then so does $b \circ a$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: See
+Lemma \ref{lemma-composition-transformation}
+and use stability under composition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-transformations-property}
+Let $S$ be a scheme.
+Let $F_i, G_i : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$,
+$i = 1, 2$.
+Let $a_i : F_i \to G_i$, $i = 1, 2$ be representable by algebraic spaces.
+Let $\mathcal{P}$ be a property as in
+Definition \ref{definition-property-transformation}
+which is stable under composition.
+If $a_1$ and $a_2$ have property $\mathcal{P}$ so does
+$a_1 \times a_2 : F_1 \times F_2 \longrightarrow G_1 \times G_2$.
+\end{lemma}
+
+\begin{proof}
+Note that the lemma makes sense by
+Lemma \ref{lemma-product-transformations}.
+Proof omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-transformations-property-implication}
+Let $S$ be a scheme.
+Let $F, G : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$.
+Let $a : F \to G$ be a transformation of functors representable by
+algebraic spaces.
+Let $\mathcal{P}$, $\mathcal{P}'$ be properties as in
+Definition \ref{definition-property-transformation}.
+Suppose that for any morphism $f : X \to Y$ of algebraic spaces over $S$
+we have $\mathcal{P}(f) \Rightarrow \mathcal{P}'(f)$.
+If $a$ has property $\mathcal{P}$, then
+$a$ has property $\mathcal{P}'$.
+\end{lemma}
+
+\begin{proof}
+Formal.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-flat-locally-finite-presentation}
+Let $S$ be a scheme.
+Let $F, G : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be sheaves.
+Let $a : F \to G$ be representable by algebraic spaces, flat,
+locally of finite presentation, and surjective.
+Then $a : F \to G$ is surjective as a map of sheaves.
+\end{lemma}
+
+\begin{proof}
+Let $T$ be a scheme over $S$ and let $g : T \to G$ be a $T$-valued point of
+$G$. By assumption $T' = F \times_G T$ is an algebraic space and
+the morphism $T' \to T$ is a flat, locally of finite presentation, and
+surjective morphism of algebraic spaces.
+Let $U \to T'$ be a surjective \'etale morphism, where $U$ is a scheme.
+Then by the definition of flat morphisms of algebraic spaces
+the morphism of schemes $U \to T$ is flat. Similarly for
+``locally of finite presentation''. The morphism $U \to T$ is surjective
+also, see
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-surjective-local}.
+Hence we see that $\{U \to T\}$ is an fppf covering such
+that $g|_U \in G(U)$ comes from an element of $F(U)$, namely
+the map $U \to T' \to F$. This proves the map is surjective as
+a map of sheaves, see
+Sites, Definition \ref{sites-definition-sheaves-injective-surjective}.
+\end{proof}
+
+
+
+
+\section{Bootstrapping the diagonal}
+\label{section-bootstrap-diagonal}
+
+\noindent
+In this section we prove that the diagonal of a sheaf $F$ on
+$(\Sch/S)_{fppf}$ is representable as soon as there exists
+an ``fppf cover'' of $F$ by a scheme or by an algebraic space, see
+Lemma \ref{lemma-bootstrap-diagonal}.
+
+\begin{lemma}
+\label{lemma-representable-diagonal}
+\begin{slogan}
+The diagonal of a presheaf is representable by algebraic spaces if and only if
+every map from a scheme to the presheaf is representable by algebraic spaces.
+\end{slogan}
+Let $S$ be a scheme.
Let $F$ be a presheaf on $(\Sch/S)_{fppf}$.
The following are equivalent:
+\begin{enumerate}
+\item $\Delta_F : F \to F \times F$ is representable by algebraic spaces,
+\item for every scheme $T$ any map $T \to F$ is representable by algebraic
+spaces, and
+\item for every algebraic space $X$ any map $X \to F$ is representable
+by algebraic spaces.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Let $X \to F$ be as in (3). Let $T$ be a scheme, and let
+$T \to F$ be a morphism. Then we have
+$$
+T \times_F X = (T \times_S X) \times_{F \times F, \Delta} F
+$$
+which is an algebraic space by
+Lemma \ref{lemma-representable-by-spaces}
and (1). Hence $X \to F$ is representable by algebraic spaces, i.e., (3) holds.
+The implication (3) $\Rightarrow$ (2) is trivial. Assume (2).
+Let $T$ be a scheme, and let $(a, b) : T \to F \times F$ be a morphism.
+Then
+$$
+F \times_{\Delta_F, F \times F} T =
+(T \times_{a, F, b} T) \times_{T \times T, \Delta_T} T
+$$
+which is an algebraic space by assumption. Hence $\Delta_F$ is
+representable by algebraic spaces, i.e., (1) holds.
+\end{proof}
+
+\noindent
+In particular if $F$ is a presheaf satisfying the equivalent conditions of
+the lemma, then for any morphism $X \to F$ where $X$ is an algebraic space
+it makes sense to say that $X \to F$ is surjective (resp.\ \'etale, flat,
+locally of finite presentation) by using
+Definition \ref{definition-property-transformation}.
+
+\medskip\noindent
+Before we actually do the bootstrap we prove a fun lemma.
+
+\begin{lemma}
+\label{lemma-after-fppf-sep-lqf}
+Let $S$ be a scheme.
+Let
+$$
+\xymatrix{
+E \ar[r]_a \ar[d]_f & F \ar[d]^g \\
+H \ar[r]^b & G
+}
+$$
+be a cartesian diagram of sheaves on $(\Sch/S)_{fppf}$, so
+$E = H \times_G F$. If
+\begin{enumerate}
+\item $g$ is representable by algebraic spaces, surjective, flat, and
+locally of finite presentation, and
+\item $a$ is representable by algebraic spaces, separated, and
+locally quasi-finite
+\end{enumerate}
+then $b$ is representable (by schemes) as well as separated and
+locally quasi-finite.
+\end{lemma}
+
+\begin{proof}
+Let $T$ be a scheme, and let $T \to G$ be a morphism.
+We have to show that $T \times_G H$ is a scheme, and that
+the morphism $T \times_G H \to T$ is separated and
+locally quasi-finite. Thus we may base change the whole diagram to $T$
+and assume that $G$ is a scheme. In this case $F$ is an algebraic space.
+Let $U$ be a scheme, and let $U \to F$ be a surjective \'etale morphism.
+Then $U \to F$ is representable, surjective, flat and
+locally of finite presentation by
+Morphisms of Spaces,
+Lemmas \ref{spaces-morphisms-lemma-etale-flat} and
+\ref{spaces-morphisms-lemma-etale-locally-finite-presentation}.
+By
+Lemma \ref{lemma-composition-transformation}
+$U \to G$ is surjective, flat and locally of finite presentation also.
+Note that the base change $E \times_F U \to U$ of $a$ is still
+separated and locally quasi-finite (by
+Lemma \ref{lemma-base-change-transformation-property}). Hence we
+may replace the upper part of the diagram of the lemma by
+$E \times_F U \to U$. In other words, we may assume that
+$F \to G$ is a surjective, flat morphism of schemes
+which is locally of finite presentation.
+In particular, $\{F \to G\}$ is an fppf covering of schemes.
+By
+Morphisms of Spaces, Proposition
+\ref{spaces-morphisms-proposition-locally-quasi-finite-separated-over-scheme}
+we conclude that $E$ is a scheme also.
+By
+Descent, Lemma \ref{descent-lemma-descent-data-sheaves}
+the fact that $E = H \times_G F$ means that we get a descent datum
+on $E$ relative to the fppf covering $\{F \to G\}$.
+By
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-separated-locally-quasi-finite-morphisms-fppf-descend}
+this descent datum is effective.
+By
+Descent, Lemma \ref{descent-lemma-descent-data-sheaves}
+again this implies that $H$ is a scheme.
+By
+Descent, Lemmas \ref{descent-lemma-descending-property-separated} and
+\ref{descent-lemma-descending-property-quasi-finite}
+it now follows that $b$ is separated and locally quasi-finite.
+\end{proof}
+
+\noindent
+Here is the result that the section title refers to.
+
+\begin{lemma}
+\label{lemma-bootstrap-diagonal}
+Let $S$ be a scheme.
+Let $F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a functor.
+Assume that
+\begin{enumerate}
+\item the presheaf $F$ is a sheaf,
+\item there exists an algebraic space $X$ and a map $X \to F$
+which is representable by algebraic spaces, surjective, flat and
+locally of finite presentation.
+\end{enumerate}
+Then $\Delta_F$ is representable (by schemes).
+\end{lemma}
+
+\begin{proof}
+Let $U \to X$ be a surjective \'etale morphism from a scheme towards $X$.
+Then $U \to X$ is representable, surjective, flat and
+locally of finite presentation by
+Morphisms of Spaces,
+Lemmas \ref{spaces-morphisms-lemma-etale-flat} and
+\ref{spaces-morphisms-lemma-etale-locally-finite-presentation}.
+By
+Lemma \ref{lemma-composition-transformation-property}
+the composition $U \to F$ is representable by algebraic spaces,
+surjective, flat and locally of finite presentation also.
+Thus we see that $R = U \times_F U$ is an algebraic space, see
+Lemma \ref{lemma-representable-by-spaces}.
+The morphism of algebraic spaces $R \to U \times_S U$ is
+a monomorphism, hence separated (as the diagonal of a monomorphism
+is an isomorphism, see
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-monomorphism}).
+Since $U \to F$ is locally of finite presentation, both
+morphisms $R \to U$ are locally of finite presentation, see
+Lemma \ref{lemma-base-change-transformation-property}.
+Hence $R \to U \times_S U$ is locally of finite type (use
+Morphisms of Spaces,
+Lemmas \ref{spaces-morphisms-lemma-finite-presentation-finite-type} and
+\ref{spaces-morphisms-lemma-permanence-finite-type}).
+Altogether this means that
+$R \to U \times_S U$ is a monomorphism which is locally of finite
+type, hence a separated and locally quasi-finite morphism, see
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-monomorphism-loc-finite-type-loc-quasi-finite}.
+
+\medskip\noindent
+Now we are ready to prove that $\Delta_F$ is representable.
+Let $T$ be a scheme, and let $(a, b) : T \to F \times F$ be a morphism.
+Set
+$$
+T' = (U \times_S U) \times_{F \times F} T.
+$$
+Note that $U \times_S U \to F \times F$ is
+representable by algebraic spaces, surjective, flat and
+locally of finite presentation by
+Lemma \ref{lemma-product-transformations-property}.
+Hence $T'$ is an algebraic space, and the projection morphism
+$T' \to T$ is surjective, flat, and locally of finite presentation.
+Consider $Z = T \times_{F \times F} F$ (this is a sheaf) and
+$$
+Z' = T' \times_{U \times_S U} R
+= T' \times_T Z.
+$$
+We see that $Z'$ is an algebraic space, and
+$Z' \to T'$ is separated and locally quasi-finite by the
+discussion in the first paragraph of the proof which showed that $R$ is
+an algebraic space and that the
+morphism $R \to U \times_S U$ has those properties.
+Hence we may apply
+Lemma \ref{lemma-after-fppf-sep-lqf}
+to the diagram
+$$
+\xymatrix{
+Z' \ar[r] \ar[d] & T' \ar[d] \\
+Z \ar[r] & T
+}
+$$
+and we conclude.
+\end{proof}
+
+\noindent
+Here is a variant of the result above.
+
+\begin{lemma}
+\label{lemma-bootstrap-locally-quasi-finite}
+Let $S$ be a scheme. Let $F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a
+functor. Let $X$ be a scheme and let $X \to F$ be representable by algebraic
+spaces and locally quasi-finite. Then $X \to F$ is representable
+(by schemes).
+\end{lemma}
+
+\begin{proof}
+Let $T$ be a scheme and let $T \to F$ be a morphism. We have to show that
+the algebraic space $X \times_F T$ is representable by a scheme. Consider
+the morphism
+$$
+X \times_F T \longrightarrow X \times_{\Spec(\mathbf{Z})} T
+$$
+Since $X \times_F T \to T$ is locally quasi-finite, so is the displayed
+arrow (Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-permanence-quasi-finite}).
+On the other hand, the displayed arrow is a monomorphism
+and hence separated (Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-monomorphism-separated}).
+Thus $X \times_F T$ is a scheme by Morphisms of Spaces, Proposition
+\ref{spaces-morphisms-proposition-locally-quasi-finite-separated-over-scheme}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Bootstrap}
+\label{section-bootstrap}
+
+\noindent
+We warn the reader right away that the result of this section will
+be superseded by the stronger
+Theorem \ref{theorem-final-bootstrap}.
+On the other hand, the theorem in this section is quite a bit easier to
+prove and still provides quite a bit of insight into how things work,
+especially for those readers mainly interested in Deligne-Mumford
+stacks.
+
+\medskip\noindent
+In
+Spaces, Section \ref{spaces-section-algebraic-spaces}
+we defined an algebraic space as a sheaf in the fppf topology whose
diagonal is representable, and such that there exists a surjective \'etale
+morphism from a scheme towards it. In this section we show that
+a sheaf in the fppf topology whose diagonal is representable by algebraic
+spaces and which has an \'etale surjective covering by an algebraic space
+is also an algebraic space.
+In other words, the category of algebraic spaces is an enlargement of the
+category of schemes by those fppf sheaves $F$ which have a representable
+diagonal and an \'etale covering by a scheme. The
result of this section says that doing the same process again, starting with
the category of algebraic spaces, does not lead to yet another category.
+
+\medskip\noindent
+Another motivation for the material in this section is that it will guarantee
+later that a Deligne-Mumford stack whose inertia stack is trivial is equivalent
+to an algebraic space, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-algebraic-stack-no-automorphisms}.
+
+\medskip\noindent
+Here is the main result of this section (as we mentioned above this
+will be superseded by the stronger
+Theorem \ref{theorem-final-bootstrap}).
+
+\begin{theorem}
+\label{theorem-bootstrap}
+Let $S$ be a scheme.
+Let $F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a functor.
+Assume that
+\begin{enumerate}
+\item the presheaf $F$ is a sheaf,
+\item the diagonal morphism $F \to F \times F$ is representable by
+algebraic spaces, and
+\item there exists an algebraic space $X$
and a map $X \to F$ which is surjective and \'etale.
+\end{enumerate}
+or assume that
+\begin{enumerate}
+\item[(a)] the presheaf $F$ is a sheaf, and
+\item[(b)] there exists an algebraic space $X$ and a map $X \to F$
which is representable by algebraic spaces, surjective, and \'etale.
+\end{enumerate}
+Then $F$ is an algebraic space.
+\end{theorem}
+
+\begin{proof}
+We will use the remarks directly below
+Definition \ref{definition-property-transformation}
+without further mention.
+
+\medskip\noindent
+Assume (1), (2), and (3) and let $X \to F$ be as in (3).
+By Lemma \ref{lemma-representable-diagonal} the morphism
+$X \to F$ is representable by algebraic spaces. Thus
+we see that (a) and (b) hold.
+
+\medskip\noindent
+Assume (a) and (b) and let $X \to F$ be as in (b).
+Let $U \to X$ be a surjective \'etale morphism from a scheme towards $X$.
+By Lemma \ref{lemma-composition-transformation} the transformation
+$U \to F$ is representable by algebraic spaces, surjective, and \'etale.
+Hence to prove that $F$ is an algebraic space boils down to proving that
+$\Delta_F$ is representable (Spaces, Definition
+\ref{spaces-definition-algebraic-space}). This follows immediately from
+Lemma \ref{lemma-bootstrap-diagonal}.
On the other hand, we can circumvent this lemma and show directly that $F$
is an algebraic space as in the next paragraph.
+
+\medskip\noindent
+Namely, let $U$ be a scheme and let $U \to F$ be representable by algebraic
+spaces, surjective, and \'etale. Consider the fibre product $R = U \times_F U$.
+Both projections $R \to U$ are representable by algebraic spaces, surjective,
+and \'etale (Lemma \ref{lemma-base-change-transformation-property}).
+In particular $R$ is an algebraic space by
+Lemma \ref{lemma-representable-by-spaces-over-space}.
+The morphism of algebraic spaces $R \to U \times_S U$ is a monomorphism,
+hence separated (as the diagonal of a monomorphism is an isomorphism).
+Since $R \to U$ is \'etale, we see that $R \to U$ is locally quasi-finite, see
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-etale-locally-quasi-finite}.
+We conclude that also $R \to U \times_S U$ is
+locally quasi-finite by
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-permanence-quasi-finite}.
+Hence
+Morphisms of Spaces, Proposition
+\ref{spaces-morphisms-proposition-locally-quasi-finite-separated-over-scheme}
+applies and $R$ is a scheme. By
+Lemma \ref{lemma-surjective-flat-locally-finite-presentation}
+the map $U \to F$ is a surjection of sheaves. Thus $F = U/R$.
+We conclude that $F$ is an algebraic space by
+Spaces, Theorem \ref{spaces-theorem-presentation}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Finding opens}
+\label{section-finding-opens}
+
+
\noindent
+First we prove a lemma which is a slight improvement and generalization of
+Spaces, Lemma \ref{spaces-lemma-finding-opens}
+to quotient sheaves associated to groupoids.
+
+\begin{lemma}
+\label{lemma-better-finding-opens}
+Let $S$ be a scheme.
+Let $(U, R, s, t, c)$ be a groupoid scheme over $S$.
+Let $g : U' \to U$ be a morphism.
+Assume
+\begin{enumerate}
+\item the composition
+$$
+\xymatrix{
+U' \times_{g, U, t} R \ar[r]_-{\text{pr}_1} \ar@/^3ex/[rr]^h
+& R \ar[r]_s & U
+}
+$$
+has an open image $W \subset U$, and
+\item the resulting map $h : U' \times_{g, U, t} R \to W$
+defines a surjection of sheaves in the fppf topology.
+\end{enumerate}
+Let $R' = R|_{U'}$ be the restriction of $R$ to $U'$. Then the map
+of quotient sheaves
+$$
+U'/R' \to U/R
+$$
+in the fppf topology is representable, and is an open immersion.
+\end{lemma}
+
+\begin{proof}
+Note that $W$ is an $R$-invariant open subscheme of $U$.
+This is true because the set of points of $W$ is the set
+of points of $U$ which are equivalent in the sense of
+Groupoids,
+Lemma \ref{groupoids-lemma-pre-equivalence-equivalence-relation-points}
+to a point of $g(U') \subset U$ (the lemma applies as $j : R \to U \times_S U$
+is a pre-equivalence relation by
+Groupoids, Lemma \ref{groupoids-lemma-groupoid-pre-equivalence}).
+Also $g : U' \to U$ factors through $W$.
+Let $R|_W$ be the restriction of $R$ to $W$.
+Then it follows that $R'$ is also the restriction of $R|_W$ to $U'$.
+Hence we can factor the map of sheaves of the lemma as
+$$
+U'/R' \longrightarrow W/R|_W \longrightarrow U/R
+$$
+By Groupoids, Lemma \ref{groupoids-lemma-quotient-groupoid-restrict}
+we see that the first arrow is an isomorphism of sheaves.
+Hence it suffices to show the lemma in case $g$ is the immersion
+of an $R$-invariant open into $U$.
+
+\medskip\noindent
+Assume $U' \subset U$ is an $R$-invariant open and $g$ is the inclusion
+morphism. Set $F = U/R$ and $F' = U'/R'$. By
+Groupoids,
+Lemma \ref{groupoids-lemma-quotient-pre-equivalence-relation-restrict}
+or \ref{groupoids-lemma-quotient-groupoid-restrict}
+the map $F' \to F$ is injective. Let $\xi \in F(T)$.
+We have to show that $T \times_{\xi, F} F'$ is representable
+by an open subscheme of $T$.
+There exists an fppf covering $\{f_i : T_i \to T\}$ such that
+$\xi|_{T_i}$ is the image via $U \to U/R$ of a morphism $a_i : T_i \to U$.
+Set $V_i = a_i^{-1}(U')$.
+We claim that $V_i \times_T T_j = T_i \times_T V_j$ as open subschemes
+of $T_i \times_T T_j$.
+
+\medskip\noindent
+As $a_i \circ \text{pr}_0$ and $a_j \circ \text{pr}_1$ are morphisms
+$T_i \times_T T_j \to U$ which both map to the section
+$\xi|_{T_i \times_T T_j} \in F(T_i \times_T T_j)$ we can find
+an fppf covering $\{f_{ijk} : T_{ijk} \to T_i \times_T T_j\}$ and morphisms
+$r_{ijk} : T_{ijk} \to R$ such that
+$$
+a_i \circ \text{pr}_0 \circ f_{ijk} = s \circ r_{ijk},
+\quad
+a_j \circ \text{pr}_1 \circ f_{ijk} = t \circ r_{ijk},
+$$
+see
+Groupoids, Lemma \ref{groupoids-lemma-quotient-pre-equivalence}.
+Since $U'$ is $R$-invariant we have $s^{-1}(U') = t^{-1}(U')$ and
+hence $f_{ijk}^{-1}(V_i \times_T T_j) = f_{ijk}^{-1}(T_i \times_T V_j)$.
+As $\{f_{ijk}\}$ is surjective this implies the claim above.
+Hence by
+Descent, Lemma \ref{descent-lemma-open-fpqc-covering}
+there exists an open subscheme $V \subset T$ such that
+$f_i^{-1}(V) = V_i$. We claim that $V$ represents $T \times_{\xi, F} F'$.
+
+\medskip\noindent
+As a first step, we will show that $\xi|_V$ lies in $F'(V) \subset F(V)$.
+Namely, the family of morphisms $\{V_i \to V\}$ is an fppf covering,
+and by construction we have $\xi|_{V_i} \in F'(V_i)$.
+Hence by the sheaf property of $F'$ we get $\xi|_V \in F'(V)$.
Finally, let $T' \to T$ be a morphism of schemes such
that $\xi|_{T'} \in F'(T')$. To finish the proof we have to show that
+$T' \to T$ factors through $V$.
We can find an fppf covering $\{T'_j \to T'\}_{j \in J}$ and morphisms
+$b_j : T'_j \to U'$ such that $\xi|_{T'_j}$ is the image via
+$U' \to U/R$ of $b_j$. Clearly, it is enough to show that the compositions
+$T'_j \to T$ factor through $V$. Hence we may assume that $\xi|_{T'}$
+is the image of a morphism $b : T' \to U'$. Now, it is enough to show
+that $T'\times_T T_i \to T_i$ factors through $V_i$. Over the scheme
+$T' \times_T T_i$ the restriction of $\xi$ is the image of two
+elements of $(U/R)(T' \times_T T_i)$, namely $a_i \circ \text{pr}_1$, and
+$b \circ \text{pr}_0$, the second of which factors through the $R$-invariant
+open $U'$. Hence by
+Groupoids, Lemma \ref{groupoids-lemma-quotient-pre-equivalence}
+there exists a covering $\{h_k : Z_k \to T' \times_T T_i\}$ and morphisms
+$r_k : Z_k \to R$ such that $a_i \circ \text{pr}_1 \circ h_k = s \circ r_k$
+and $b \circ \text{pr}_0 \circ h_k = t \circ r_k$. As $U'$ is an $R$-invariant
+open the fact that $b$ has image in $U'$ then implies that each
+$a_i \circ \text{pr}_1 \circ h_k$ has image in $U'$. It follows from this
+that $T' \times_T T_i \to T_i$ has image in $V_i$ by definition of $V_i$
+which concludes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Slicing equivalence relations}
+\label{section-slicing}
+
+\noindent
+In this section we explain how to ``improve'' a given equivalence
relation by slicing. This is not the kind of ``\'etale slicing'' that you
may be used to, but a much coarser kind of slicing.
+
+
+\begin{lemma}
+\label{lemma-slice-equivalence-relation}
+Let $S$ be a scheme.
+Let $j : R \to U \times_S U$ be an equivalence relation on schemes over $S$.
+Assume $s, t : R \to U$ are flat and locally of finite presentation.
+Then there exists an equivalence relation $j' : R' \to U'\times_S U'$
+on schemes over $S$, and an isomorphism
+$$
+U'/R' \longrightarrow U/R
+$$
+induced by a morphism $U' \to U$ which maps $R'$ into $R$ such that
$s', t' : R' \to U'$ are flat, locally of finite presentation
+and locally quasi-finite.
+\end{lemma}
+
+\begin{proof}
+We will prove this lemma in several steps. We will use without further
+mention that an equivalence relation gives rise to a groupoid scheme
+and that the restriction of an equivalence relation is an equivalence
+relation, see
+Groupoids, Lemmas
+\ref{groupoids-lemma-restrict-relation},
+\ref{groupoids-lemma-equivalence-groupoid}, and
+\ref{groupoids-lemma-restrict-groupoid-relation}.
+
+\medskip\noindent
+Step 1: We may assume that $s, t : R \to U$ are locally of finite presentation
+and Cohen-Macaulay morphisms. Namely, as in
+More on Groupoids, Lemma \ref{more-groupoids-lemma-make-CM}
+let $g : U' \to U$ be the open subscheme such that
+$t^{-1}(U') \subset R$ is the maximal open over which $s : R \to U$ is
+Cohen-Macaulay, and denote $R'$ the restriction of $R$ to $U'$.
+By the lemma cited above we see that
+$$
+\xymatrix{
+t^{-1}(U') \ar@{=}[r] &
+U' \times_{g, U, t} R \ar[r]_-{\text{pr}_1} \ar@/^3ex/[rr]^h &
+R \ar[r]_s &
+U
+}
+$$
+is surjective. Since $h$ is flat and locally of finite presentation, we
see that $\{h\}$ is an fppf covering. Hence by
+Groupoids, Lemma \ref{groupoids-lemma-quotient-groupoid-restrict}
+we see that $U'/R' \to U/R$ is an isomorphism. By the construction of $U'$
+we see that $s', t'$ are Cohen-Macaulay and locally of finite presentation.
+
+\medskip\noindent
+Step 2. Assume $s, t$ are Cohen-Macaulay and locally of finite presentation.
+Let $u \in U$ be a point of finite type. By
+More on Groupoids, Lemma \ref{more-groupoids-lemma-max-slice-quasi-finite}
+there exists an affine scheme $U'$ and a morphism $g : U' \to U$ such that
+\begin{enumerate}
+\item $g$ is an immersion,
+\item $u \in U'$,
+\item $g$ is locally of finite presentation,
+\item $h$ is flat, locally of finite presentation and locally quasi-finite, and
+\item the morphisms $s', t' : R' \to U'$ are flat, locally of finite
+presentation and locally quasi-finite.
+\end{enumerate}
+Here we have used the notation introduced in
+More on Groupoids, Situation \ref{more-groupoids-situation-slice}.
+
+\medskip\noindent
+Step 3. For each point $u \in U$ which is of finite type
+choose a $g_u : U'_u \to U$ as in
+Step 2 and denote $R'_u$ the restriction of $R$ to $U'_u$.
+Denote $h_u = s \circ \text{pr}_1 : U'_u \times_{g_u, U, t} R \to U$. Set
+$U' = \coprod_{u \in U} U'_u$, and $g = \coprod g_u$. Let $R'$ be the
+restriction of $R$ to $U'$ as above. We claim that
+the pair $(U', g)$ works\footnote{Here we should check that $U'$ is not
+too large, i.e., that it is isomorphic to an object of the category
+$\Sch_{fppf}$, see
+Section \ref{section-conventions}.
+This is a purely set theoretical matter; let us use the notion of size of
+a scheme introduced in
+Sets, Section \ref{sets-section-categories-schemes}.
+Note that each $U'_u$ has size at most the size of $U$
+and that the cardinality of the index set is at most the cardinality of
+$|U|$ which is bounded by the size of $U$. Hence $U'$ is isomorphic
+to an object of $\Sch_{fppf}$ by
+Sets, Lemma \ref{sets-lemma-what-is-in-it} part (6).}.
+Note that
+\begin{align*}
+R' = &
+\coprod\nolimits_{u_1, u_2 \in U}
+(U'_{u_1} \times_{g_{u_1}, U, t} R)
+\times_R
+(R \times_{s, U, g_{u_2}} U'_{u_2}) \\
+= &
+\coprod\nolimits_{u_1, u_2 \in U}
+(U'_{u_1} \times_{g_{u_1}, U, t} R) \times_{h_{u_1}, U, g_{u_2}} U'_{u_2}
+\end{align*}
+Hence the projection $s' : R' \to U' = \coprod U'_{u_2}$
+is flat, locally of finite
+presentation and locally quasi-finite as a base change of $\coprod h_{u_1}$.
+Finally, by construction the morphism
+$h : U' \times_{g, U, t} R \to U$ is equal to $\coprod h_u$ hence
+its image contains all points of finite type of $U$.
+Since each $h_u$ is flat and locally of finite presentation we conclude that
+$h$ is flat and locally of finite presentation.
+In particular, the image of $h$ is open (see
+Morphisms, Lemma \ref{morphisms-lemma-fppf-open})
+and since the set of points of finite type is dense (see
+Morphisms, Lemma \ref{morphisms-lemma-enough-finite-type-points})
+we conclude that the image of $h$ is $U$. This implies that
+$\{h\}$ is an fppf covering. By
+Groupoids, Lemma \ref{groupoids-lemma-quotient-groupoid-restrict}
+this means that $U'/R' \to U/R$ is an isomorphism.
+This finishes the proof of the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Quotient by a subgroupoid}
+\label{section-dividing}
+
+\noindent
+We need one more lemma before we can do our final bootstrap.
+Let us discuss what is going on in terms of ``plain'' groupoids before
+embarking on the scheme theoretic version.
+
+\medskip\noindent
+Let $\mathcal{C}$ be a groupoid, see
+Categories, Definition \ref{categories-definition-groupoid}.
+As discussed in
+Groupoids, Section \ref{groupoids-section-groupoids}
+this corresponds to a quintuple $(\text{Ob}, \text{Arrows}, s, t, c)$.
+Suppose we are given a subset $P \subset \text{Arrows}$ such that
+$(\text{Ob}, P, s|_P, t|_P, c|_P)$ is also a groupoid and such
+that there are no nontrivial automorphisms in $P$. Then we can construct
+the quotient groupoid
+$(\overline{\text{Ob}}, \overline{\text{Arrows}}, \overline{s},
+\overline{t}, \overline{c})$
+as follows:
+\begin{enumerate}
+\item $\overline{\text{Ob}} = \text{Ob}/P$
+is the set of $P$-isomorphism classes,
+\item $\overline{\text{Arrows}} = P\backslash \text{Arrows}/P$
+is the set of arrows in $\mathcal{C}$ up to pre-composing and
+post-composing by arrows of $P$,
+\item the source and target maps
+$\overline{s}, \overline{t} : P\backslash \text{Arrows}/P \to \text{Ob}/P$
+are induced by $s, t$,
+\item composition is defined by the rule
+$\overline{c}(\overline{a}, \overline{b}) = \overline{c(a, b)}$
+which is well defined.
+\end{enumerate}
+In fact, it turns out that the original groupoid
+$(\text{Ob}, \text{Arrows}, s, t, c)$ is canonically
+isomorphic to the restriction (see discussion in
+Groupoids, Section \ref{groupoids-section-restrict-groupoid})
+of the groupoid
+$(\overline{\text{Ob}}, \overline{\text{Arrows}}, \overline{s},
+\overline{t}, \overline{c})$ via the quotient map
+$g : \text{Ob} \to \overline{\text{Ob}}$. Recall that this means
+that
+$$
+\text{Arrows} =
+\text{Ob}
+\times_{g, \overline{\text{Ob}}, \overline{t}}
+\overline{\text{Arrows}}
+\times_{\overline{s}, \overline{\text{Ob}}, g}
+\text{Ob}
+$$
+which holds as $P$ has no nontrivial automorphisms.
+We omit the details.
+
+\medskip\noindent
+The following lemma holds in much greater generality, but this is
+the version we use in the proof of the final bootstrap (after which
+we can more easily prove the more general versions of this lemma).
+
+\begin{lemma}
+\label{lemma-divide-subgroupoid}
+Let $S$ be a scheme.
+Let $(U, R, s, t, c)$ be a groupoid scheme over $S$.
Let $P \to R$ be a monomorphism of schemes. Assume that
+\begin{enumerate}
+\item $(U, P, s|_P, t|_P, c|_{P \times_{s, U, t}P})$ is a groupoid scheme,
+\item $s|_P, t|_P : P \to U$ are finite locally free,
\item $j|_P : P \to U \times_S U$ is a monomorphism,
\item $U$ is affine, and
\item $j : R \to U \times_S U$ is separated and locally quasi-finite.
+\end{enumerate}
+Then $U/P$ is representable by an affine scheme $\overline{U}$, the
+quotient morphism $U \to \overline{U}$ is finite locally free, and
+$P = U \times_{\overline{U}} U$. Moreover, $R$ is the restriction of a
+groupoid scheme
+$(\overline{U}, \overline{R}, \overline{s}, \overline{t}, \overline{c})$
+on $\overline{U}$ via the quotient morphism $U \to \overline{U}$.
+\end{lemma}
+
+\begin{proof}
+Conditions (1), (2), (3), and (4) and
+Groupoids, Proposition \ref{groupoids-proposition-finite-flat-equivalence}
+imply the affine scheme $\overline{U}$ representing $U/P$ exists,
+the morphism $U \to \overline{U}$ is finite locally free, and
+$P = U \times_{\overline{U}} U$. The identification
+$P = U \times_{\overline{U}} U$ is such that $t|_P = \text{pr}_0$ and
+$s|_P = \text{pr}_1$, and such that composition is equal to
+$\text{pr}_{02} : U \times_{\overline{U}} U \times_{\overline{U}} U
+\to U \times_{\overline{U}} U$.
+A product of finite locally free morphisms is finite locally free (see
+Spaces, Lemma \ref{spaces-lemma-product-representable-transformations-property}
+and
+Morphisms, Lemmas \ref{morphisms-lemma-base-change-finite-locally-free} and
+\ref{morphisms-lemma-composition-finite-locally-free}).
+To get $\overline{R}$ we are going to descend
+the scheme $R$ via the finite locally free morphism
+$U \times_S U \to \overline{U} \times_S \overline{U}$.
+Namely, note that
+$$
+(U \times_S U)
+\times_{(\overline{U} \times_S \overline{U})}
+(U \times_S U)
+=
+P \times_S P
+$$
+by the above. Thus giving a descent datum (see
+Descent, Definition \ref{descent-definition-descent-datum})
+for $R / U \times_S U / \overline{U} \times_S \overline{U}$
+consists of an isomorphism
+$$
+\varphi :
+R \times_{(U \times_S U), t \times t} (P \times_S P)
+\longrightarrow
+(P \times_S P) \times_{s \times s, (U \times_S U)} R
+$$
+over $P \times_S P$ satisfying a cocycle condition. We define $\varphi$
+on $T$-valued points by the rule
+$$
+\varphi : (r, (p, p')) \longmapsto ((p, p'), p^{-1} \circ r \circ p')
+$$
+where the composition is taken in the groupoid category
+$(U(T), R(T), s, t, c)$.
+This makes sense because for $(r, (p, p'))$ to be a $T$-valued point
+of the source of $\varphi$ it needs to be the case that $t(r) = t(p)$
+and $s(r) = t(p')$. Note that this map is an isomorphism
+with inverse given by
+$((p, p'), r') \mapsto (p \circ r' \circ (p')^{-1}, (p, p'))$.
+To check the cocycle condition we have to verify that
+$\varphi_{02} = \varphi_{12} \circ \varphi_{01}$
+as maps over
+$$
+(U \times_S U)
+\times_{(\overline{U} \times_S \overline{U})} (U \times_S U)
+\times_{(\overline{U} \times_S \overline{U})} (U \times_S U) =
+(P \times_S P) \times_{s \times s, (U \times_S U), t \times t} (P \times_S P)
+$$
+By explicit calculation we see that
+$$
+\begin{matrix}
+\varphi_{02} & (r, (p_1, p_1'), (p_2, p_2')) & \mapsto &
+((p_1, p_1'), (p_2, p_2'),
+(p_1 \circ p_2)^{-1} \circ r \circ (p_1' \circ p_2')) \\
+\varphi_{01} & (r, (p_1, p_1'), (p_2, p_2')) & \mapsto &
+((p_1, p_1'), p_1^{-1} \circ r \circ p_1', (p_2, p_2')) \\
+\varphi_{12} & ((p_1, p_1'), r, (p_2, p_2')) & \mapsto &
+((p_1, p_1'), (p_2, p_2'), p_2^{-1} \circ r \circ p_2')
+\end{matrix}
+$$
+(with obvious notation) which implies what we want.
+As $j$ is separated and locally quasi-finite by (5) we may apply
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-separated-locally-quasi-finite-morphisms-fppf-descend}
+to get a scheme $\overline{R} \to \overline{U} \times_S \overline{U}$
+and an isomorphism
+$$
+R \to \overline{R} \times_{(\overline{U} \times_S \overline{U})} (U \times_S U)
+$$
+which identifies the descent datum $\varphi$ with the canonical
+descent datum on
+$\overline{R} \times_{(\overline{U} \times_S \overline{U})} (U \times_S U)$,
+see
+Descent, Definition \ref{descent-definition-effective}.
+
+\medskip\noindent
+Since $U \times_S U \to \overline{U} \times_S \overline{U}$ is finite
+locally free we conclude that $R \to \overline{R}$ is finite locally free
+as a base change. Hence $R \to \overline{R}$ is surjective as a map of
+sheaves on $(\Sch/S)_{fppf}$.
+Our choice of $\varphi$ implies that given $T$-valued points $r, r' \in R(T)$
+these have the same image in $\overline{R}$ if and only if
$r' = p^{-1} \circ r \circ p'$ for some $p, p' \in P(T)$. Thus
+$\overline{R}$ represents the sheaf
+$$
+T \longmapsto \overline{R(T)} = P(T)\backslash R(T)/P(T)
+$$
+with notation as in the discussion preceding the lemma.
+Hence we can define the groupoid structure on
+$(\overline{U} = U/P, \overline{R} = P\backslash R/P)$ exactly as in
+the discussion of the ``plain'' groupoid case.
+It follows from this that $(U, R, s, t, c)$ is the pullback of
+this groupoid structure via the morphism $U \to \overline{U}$.
+This concludes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Final bootstrap}
+\label{section-final-bootstrap}
+
+\noindent
+The following result goes quite a bit beyond the earlier results.
+
+\begin{theorem}
+\label{theorem-final-bootstrap}
+Let $S$ be a scheme.
+Let $F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a functor.
+Any one of the following conditions implies that $F$ is an algebraic space:
+\begin{enumerate}
+\item $F = U/R$ where $(U, R, s, t, c)$ is a groupoid in algebraic spaces
+over $S$ such that $s, t$ are flat and locally of finite presentation, and
+$j = (t, s) : R \to U \times_S U$ is an equivalence relation,
+\item $F = U/R$ where $(U, R, s, t, c)$ is a groupoid scheme
+over $S$ such that $s, t$ are flat and locally of finite presentation, and
+$j = (t, s) : R \to U \times_S U$ is an equivalence relation,
+\item $F$ is a sheaf and there exists an algebraic space $U$ and a morphism
+$U \to F$ which is representable by algebraic spaces,
+surjective, flat and locally of finite presentation,
+\item $F$ is a sheaf and there exists a scheme $U$ and a morphism
+$U \to F$ which is representable by algebraic spaces or schemes,
+surjective, flat and locally of finite presentation,
+\item $F$ is a sheaf, $\Delta_F$ is representable by algebraic spaces,
+and there exists an algebraic space $U$ and a morphism $U \to F$ which is
+surjective, flat, and locally of finite presentation, or
+\item $F$ is a sheaf, $\Delta_F$ is representable,
+and there exists a scheme $U$ and a morphism $U \to F$ which is
+surjective, flat, and locally of finite presentation.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Trivial observations: (6) is a special case of (5) and
+(4) is a special case of (3).
+We first prove that cases (5) and (3) reduce to case (1).
Namely, by Lemma \ref{lemma-bootstrap-diagonal}
(bootstrapping the diagonal)
we see that (3) implies (5). In case (5) we set $R = U \times_F U$ which
+is an algebraic space by assumption. Moreover, by assumption both
+projections $s, t : R \to U$ are surjective, flat and locally of
+finite presentation. The map $j : R \to U \times_S U$ is clearly an
+equivalence relation. By
+Lemma \ref{lemma-surjective-flat-locally-finite-presentation}
+the map $U \to F$ is a surjection of sheaves. Thus $F = U/R$
+which reduces us to case (1).
+
+\medskip\noindent
+Next, we show that (1) reduces to (2).
+Namely, let $(U, R, s, t, c)$ be a groupoid in algebraic spaces
+over $S$ such that $s, t$ are flat and locally of finite presentation, and
+$j = (t, s) : R \to U \times_S U$ is an equivalence relation.
+Choose a scheme $U'$ and a surjective \'etale morphism $U' \to U$.
+Let $R' = R|_{U'}$ be the restriction of $R$ to $U'$. By
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-pre-equivalence-relation-restrict}
+we see that $U/R = U'/R'$. Since $s', t' : R' \to U'$ are also
+flat and locally of finite presentation (see
+More on Groupoids in Spaces,
+Lemma \ref{spaces-more-groupoids-lemma-restrict-preserves-type})
+this reduces us to the case where $U$ is a scheme.
+As $j$ is an equivalence relation we see that $j$ is a monomorphism.
+As $s : R \to U$ is locally of finite presentation we see that
+$j : R \to U \times_S U$ is locally of finite type, see
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-permanence-finite-type}.
+By
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-monomorphism-loc-finite-type-loc-quasi-finite}
+we see that $j$ is locally quasi-finite and separated.
+Hence if $U$ is a scheme, then $R$ is a scheme by
+Morphisms of Spaces, Proposition
+\ref{spaces-morphisms-proposition-locally-quasi-finite-separated-over-scheme}.
+Thus we reduce to proving the theorem in case (2).
+
+\medskip\noindent
+Assume $F = U/R$ where $(U, R, s, t, c)$ is a groupoid scheme
+over $S$ such that $s, t$ are flat and locally of finite presentation, and
+$j = (t, s) : R \to U \times_S U$ is an equivalence relation. By
+Lemma \ref{lemma-slice-equivalence-relation}
we reduce to the case where $s, t$ are flat,
+locally of finite presentation, and locally quasi-finite.
+Let $U = \bigcup_{i \in I} U_i$ be an affine open covering
(with index set $I$ of cardinality at most the size of $U$ to avoid
+set theoretic problems later -- most readers can safely ignore this remark).
+Let $(U_i, R_i, s_i, t_i, c_i)$ be the restriction of $R$
+to $U_i$. It is clear that $s_i, t_i$ are still flat, locally of finite
+presentation, and locally quasi-finite as $R_i$ is the open subscheme
+$s^{-1}(U_i) \cap t^{-1}(U_i)$ of $R$
+and $s_i, t_i$ are the restrictions of $s, t$ to this open. By
+Lemma \ref{lemma-better-finding-opens}
+(or the simpler
+Spaces, Lemma \ref{spaces-lemma-finding-opens})
+the map $U_i/R_i \to U/R$ is representable by open immersions.
+Hence if we can show that $F_i = U_i/R_i$ is an algebraic space, then
+$\coprod_{i \in I} F_i$ is an algebraic space by
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}.
+As $U = \bigcup U_i$ is an open covering it is clear that
+$\coprod F_i \to F$ is surjective. Thus
+it follows that $U/R$ is an algebraic space, by
+Spaces, Lemma \ref{spaces-lemma-glueing-algebraic-spaces}.
+In this way we reduce to the case where $U$ is affine and $s, t$ are flat,
+locally of finite presentation, and locally quasi-finite and
$j$ is an equivalence relation.
+
+\medskip\noindent
+Assume $(U, R, s, t, c)$ is a groupoid scheme over $S$,
+with $U$ affine, such that $s, t$ are flat, locally of finite presentation,
+and locally quasi-finite, and $j$ is an equivalence relation.
+Choose $u \in U$. We apply
+More on Groupoids in Spaces,
+Lemma \ref{spaces-more-groupoids-lemma-quasi-splitting-affine-scheme}
+to $u \in U, R, s, t, c$. We obtain an affine scheme $U'$, an \'etale
+morphism $g : U' \to U$, a point $u' \in U'$ with $\kappa(u) = \kappa(u')$
+such that the restriction $R' = R|_{U'}$ is quasi-split over $u'$.
+Note that the image $g(U')$ is open as $g$ is \'etale and contains $u$.
+Hence, repeatedly applying the lemma, we can find finitely many
+points $u_i \in U$, $i = 1, \ldots, n$,
+affine schemes $U'_i$, \'etale morphisms $g_i : U_i' \to U$, points
$u'_i \in U'_i$ with $g_i(u'_i) = u_i$ such that (a) each
restriction $R'_i = R|_{U'_i}$ is quasi-split over $u'_i$ and
+(b) $U = \bigcup_{i = 1, \ldots, n} g_i(U'_i)$.
+Now we rerun the last part of the argument in the preceding paragraph:
+Using
+Lemma \ref{lemma-better-finding-opens}
+(or the simpler
+Spaces, Lemma \ref{spaces-lemma-finding-opens})
+the map $U'_i/R'_i \to U/R$ is representable by open immersions.
+If we can show that $F_i = U'_i/R'_i$ is an algebraic space, then
$\coprod_{i = 1, \ldots, n} F_i$ is an algebraic space by
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}.
+As $\{g_i : U'_i \to U\}$ is an \'etale covering
+it is clear that $\coprod F_i \to F$ is surjective. Thus
+it follows that $U/R$ is an algebraic space, by
+Spaces, Lemma \ref{spaces-lemma-glueing-algebraic-spaces}.
+In this way we reduce to the case where $U$ is affine and $s, t$ are flat,
+locally of finite presentation, and locally quasi-finite,
$j$ is an equivalence relation, and $R$ is quasi-split over $u$ for some
+$u \in U$.
+
+\medskip\noindent
+Assume $(U, R, s, t, c)$ is a groupoid scheme over $S$,
+with $U$ affine, $u \in U$ such that $s, t$ are flat, locally
+of finite presentation, and locally quasi-finite and
+$j = (t, s) : R \to U \times_S U$ is an equivalence relation
+and $R$ is quasi-split over $u$. Let $P \subset R$ be a quasi-splitting
+of $R$ over $u$. By
+Lemma \ref{lemma-divide-subgroupoid}
+we see that $(U, R, s, t, c)$ is the restriction of a groupoid
+$(\overline{U}, \overline{R}, \overline{s}, \overline{t}, \overline{c})$
+by a surjective finite locally free morphism $U \to \overline{U}$ such that
+$P = U \times_{\overline{U}} U$. Note that $s$ admits a factorization
+$$
+R = U \times_{\overline{U}, \overline{t}} \overline{R}
+\times_{\overline{s}, \overline{U}} U
+\xrightarrow{\text{pr}_{23}}
+\overline{R} \times_{\overline{s}, \overline{U}} U
+\xrightarrow{\text{pr}_2} U
+$$
+The map $\text{pr}_2$ is the base change of $\overline{s}$, and
+the map $\text{pr}_{23}$ is a base change of the surjective finite locally
+free map $U \to \overline{U}$. Since $s$ is flat, locally
+of finite presentation, and locally quasi-finite and since $\text{pr}_{23}$
+is surjective finite locally free (as a base change of such), we
+conclude that $\text{pr}_2$ is flat, locally
+of finite presentation, and locally quasi-finite by
+Descent, Lemmas
+\ref{descent-lemma-flat-fpqc-local-source} and
+\ref{descent-lemma-locally-finite-presentation-fppf-local-source} and
+Morphisms, Lemma \ref{morphisms-lemma-quasi-finite-local-source}.
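Spelling out why $\text{pr}_{23}$ is a base change of
$U \to \overline{U}$ (a routine check, directly from the description
of $R$ as a restriction): the square
$$
\xymatrix{
U \times_{\overline{U}, \overline{t}} \overline{R}
\times_{\overline{s}, \overline{U}} U
\ar[r]^-{\text{pr}_1} \ar[d]_{\text{pr}_{23}} &
U \ar[d] \\
\overline{R} \times_{\overline{s}, \overline{U}} U
\ar[r]^-{\overline{t} \circ \text{pr}_1} & \overline{U}
}
$$
is cartesian.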
+Since $\text{pr}_2$ is the base change of the morphism
+$\overline{s}$ by $U \to \overline{U}$ and $\{U \to \overline{U}\}$
+is an fppf covering we conclude $\overline{s}$ is
+flat, locally of finite presentation, and locally quasi-finite, see
+Descent, Lemmas \ref{descent-lemma-descending-property-flat},
+\ref{descent-lemma-descending-property-locally-finite-presentation}, and
+\ref{descent-lemma-descending-property-quasi-finite}. The same goes
+for $\overline{t}$. Consider the commutative diagram
+$$
+\xymatrix{
+U \times_{\overline{U}} U \ar@{=}[r] \ar[rd] & P \ar[r] \ar[d] & R \ar[d] \\
+& \overline{U} \ar[r]^{\overline{e}} & \overline{R}
+}
+$$
+It is a general fact about restrictions that the outer four corners
+form a cartesian diagram. By the equality we see the inner square is
+cartesian. Since $P$ is open in $R$ (by definition of a quasi-splitting)
+we conclude that $\overline{e}$ is an open immersion by
+Descent, Lemma \ref{descent-lemma-descending-property-open-immersion}.
+An application of
+Groupoids,
+Lemma \ref{groupoids-lemma-quotient-pre-equivalence-relation-restrict}
+shows that $U/R = \overline{U}/\overline{R}$. Hence we have reduced to
+the case where $(U, R, s, t, c)$ is a groupoid scheme over $S$,
+with $U$ affine, $u \in U$ such that $s, t$ are flat, locally
+of finite presentation, and locally quasi-finite and
+$j = (t, s) : R \to U \times_S U$ is an equivalence relation
+and $e : U \to R$ is an open immersion!
+
+\medskip\noindent
+But of course, if $e$ is an open immersion and
+$s, t$ are flat and locally of finite presentation
+then the morphisms $t, s$ are \'etale.
+For example you can see this by applying
+More on Groupoids, Lemma \ref{more-groupoids-lemma-sheaf-differentials}
+which shows that $\Omega_{R/U} = 0$ which in turn implies
+that $s, t : R \to U$ is G-unramified (see
+Morphisms, Lemma \ref{morphisms-lemma-unramified-omega-zero}),
+which in turn implies that $s, t$ are \'etale (see
+Morphisms, Lemma \ref{morphisms-lemma-flat-unramified-etale}).
+And if $s, t$ are \'etale then finally $U/R$ is an algebraic
+space by
+Spaces, Theorem \ref{spaces-theorem-presentation}.
+\end{proof}
+
+
+
+
+
+\section{Applications}
+\label{section-applications}
+
+\noindent
+As a first application we obtain the following fundamental fact:
+$$
+\fbox{A sheaf which is fppf locally an algebraic space is an algebraic space.}
+$$
+This is the content of the following lemma.
+Note that assumption (2) is equivalent to the condition that
+$F|_{(\Sch/S_i)_{fppf}}$ is an algebraic space, see
+Spaces, Lemma \ref{spaces-lemma-rephrase}.
+Assumption (3) is a set theoretic condition which may be ignored
+by those not worried about set theoretic questions.
+
+\begin{lemma}
+\label{lemma-locally-algebraic-space}
+\begin{slogan}
+The definition of an algebraic space is fppf local.
+\end{slogan}
+Let $S$ be a scheme.
+Let $F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a functor.
+Let $\{S_i \to S\}_{i \in I}$ be a covering of $(\Sch/S)_{fppf}$.
+Assume that
+\begin{enumerate}
+\item $F$ is a sheaf,
+\item each $F_i = h_{S_i} \times F$ is an algebraic space, and
+\item $\coprod_{i \in I} F_i$ is an algebraic space (see
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}).
+\end{enumerate}
+Then $F$ is an algebraic space.
+\end{lemma}
+
+\begin{proof}
+Consider the morphism $\coprod F_i \to F$. This is the base change
+of $\coprod S_i \to S$ via $F \to S$. Hence it is representable,
+locally of finite presentation, flat and surjective by our definition
+of an fppf covering and
+Lemma \ref{lemma-base-change-transformation-property}.
+Thus
+Theorem \ref{theorem-final-bootstrap}
+applies to show that $F$ is an algebraic space.
+\end{proof}
+
+\noindent
+Here is a special case of Lemma \ref{lemma-locally-algebraic-space}
+where we do not need to worry about set theoretical issues.
+
+\begin{lemma}
+\label{lemma-locally-algebraic-space-finite-type}
+Let $S$ be a scheme.
+Let $F : (\Sch/S)_{fppf}^{opp} \to \textit{Sets}$ be a functor.
+Let $\{S_i \to S\}_{i \in I}$ be a covering of $(\Sch/S)_{fppf}$.
+Assume that
+\begin{enumerate}
+\item $F$ is a sheaf,
+\item each $F_i = h_{S_i} \times F$ is an algebraic space, and
+\item the morphisms $F_i \to S_i$ are of finite type.
+\end{enumerate}
+Then $F$ is an algebraic space.
+\end{lemma}
+
+\begin{proof}
+We will use
+Lemma \ref{lemma-locally-algebraic-space}
above. To do this we will use the assumption that
+$F_i$ is of finite type over $S_i$ to prove that the set theoretic
+condition in the lemma is satisfied (after perhaps refining the given
+covering of $S$ a bit).
+We suggest the reader skip the rest of the proof.
+
+\medskip\noindent
+If $S'_i \to S_i$ is a morphism of schemes then
+$$
+h_{S'_i} \times F =
+h_{S'_i} \times_{h_{S_i}} h_{S_i} \times F =
+h_{S'_i} \times_{h_{S_i}} F_i
+$$
+is an algebraic space of finite type over $S'_i$, see
+Spaces, Lemma \ref{spaces-lemma-fibre-product-spaces}
+and
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-base-change-finite-type}.
+Thus we may refine the given covering. After doing this we may assume:
+(a) each $S_i$ is affine, and (b) the cardinality of $I$ is at most
+the cardinality of the set of points of $S$. (Since to cover
+all of $S$ it is enough that each point is in the image of $S_i \to S$
+for some $i$.)
+
+\medskip\noindent
+Since each $S_i$ is affine and each $F_i$ of finite type over $S_i$
+we conclude that $F_i$ is quasi-compact. Hence by
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-quasi-compact-affine-cover}
+we can find an affine $U_i \in \Ob((\Sch/S)_{fppf})$
+and a surjective \'etale morphism $U_i \to F_i$. The fact that
+$F_i \to S_i$ is locally of finite type then implies that
+$U_i \to S_i$ is locally of finite type, and in particular
+$U_i \to S$ is locally of finite type. By
+Sets, Lemma \ref{sets-lemma-bound-finite-type}
+we conclude that $\text{size}(U_i) \leq \text{size}(S)$.
+Since also $|I| \leq \text{size}(S)$ we conclude that
+$\coprod_{i \in I} U_i$ is isomorphic to an object of
+$(\Sch/S)_{fppf}$ by
+Sets, Lemma \ref{sets-lemma-bound-size}
+and the construction of $\Sch$. This implies that
+$\coprod F_i$ is an algebraic space by
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}
+and we win.
+\end{proof}
+
+\noindent
+As a second application we obtain
+$$
+\fbox{Any fppf descent datum for algebraic spaces is effective.}
+$$
+This holds modulo set theoretical difficulties; as an example result
+we offer the following lemma.
+
+\begin{lemma}
+\label{lemma-descend-algebraic-space}
+\begin{slogan}
+Fppf descent data for algebraic spaces are effective.
+\end{slogan}
+Let $S$ be a scheme. Let $\{X_i \to X\}_{i \in I}$ be an fppf
+covering of algebraic spaces over $S$.
+\begin{enumerate}
\item If $I$ is countable\footnote{The restriction on countability can be
+ignored by those who do not care about set theoretical issues. We can allow
+larger index sets here if we can bound the size of the algebraic spaces
+which we are descending. See for example
+Lemma \ref{lemma-locally-algebraic-space-finite-type}.}, then any
+descent datum for algebraic spaces relative to $\{X_i \to X\}$ is effective.
+\item Any descent datum $(Y_i, \varphi_{ij})$ relative to
+$\{X_i \to X\}_{i \in I}$ (Descent on Spaces, Definition
+\ref{spaces-descent-definition-descent-datum-for-family-of-morphisms})
+with $Y_i \to X_i$ of finite type
+is effective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). By
+Descent on Spaces, Lemma \ref{spaces-descent-lemma-descent-data-sheaves}
+this translates into the statement that an fppf sheaf $F$
+endowed with a map $F \to X$ is an algebraic space provided that
+each $F \times_X X_i$ is an algebraic space.
+The restriction on the cardinality of $I$ implies that
+coproducts of algebraic spaces indexed by $I$ are algebraic spaces, see
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}
+and
+Sets, Lemma \ref{sets-lemma-what-is-in-it}.
+The morphism
+$$
+\coprod F \times_X X_i \longrightarrow F
+$$
+is representable by algebraic spaces (as the base change of
+$\coprod X_i \to X$, see Lemma \ref{lemma-base-change-transformation}),
+and surjective, flat, and locally of finite presentation
+(as the base change of $\coprod X_i \to X$, see
+Lemma \ref{lemma-base-change-transformation-property}).
+Hence part (1) follows from Theorem \ref{theorem-final-bootstrap}.
+
+\medskip\noindent
+Proof of (2). First we apply
+Descent on Spaces, Lemma \ref{spaces-descent-lemma-descent-data-sheaves}
+to obtain an fppf sheaf $F$ endowed with a map $F \to X$
+such that $F \times_X X_i = Y_i$ for all $i \in I$.
+Our goal is to show that $F$ is an algebraic space.
+Choose a scheme $U$ and a surjective \'etale morphism $U \to X$.
+Then $F' = U \times_X F \to F$ is representable, surjective, and \'etale
+as the base change of $U \to X$.
+By Theorem \ref{theorem-final-bootstrap} it suffices to show
+that $F' = U \times_X F$ is an algebraic space.
+We may choose an fppf covering $\{U_j \to U\}_{j \in J}$
+where $U_j$ is a scheme refining the fppf covering
+$\{X_i \times_X U \to U\}_{i \in I}$, see
+Topologies on Spaces, Lemma
+\ref{spaces-topologies-lemma-refine-fppf-schemes}.
+Thus we get a map $a : J \to I$ and for each $j$
+a morphism $U_j \to X_{a(j)}$ over $X$.
+Then we see that $U_j \times_U F' = U_j \times_{X_{a(j)}} Y_{a(j)}$
+is of finite type over $U_j$. Hence $F'$ is an algebraic
+space by Lemma \ref{lemma-locally-algebraic-space-finite-type}.
+\end{proof}
+
+\noindent
+Here is a different type of application.
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-cover}
+Let $S$ be a scheme. Let $a : F \to G$ and $b : G \to H$ be
+transformations of functors $(\Sch/S)_{fppf}^{opp} \to \textit{Sets}$.
+Assume
+\begin{enumerate}
+\item $F, G, H$ are sheaves,
+\item $a : F \to G$ is representable by algebraic spaces, flat,
+locally of finite presentation, and surjective, and
+\item $b \circ a : F \to H$ is representable by algebraic spaces.
+\end{enumerate}
+Then $b$ is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be a scheme over $S$ and let $\xi \in H(U)$. We have to show that
+$U \times_{\xi, H} G$ is an algebraic space. On the other hand, we know
+that $U \times_{\xi, H} F$ is an algebraic space and that
+$U \times_{\xi, H} F \to U \times_{\xi, H} G$ is representable by
+algebraic spaces, flat, locally of finite presentation, and surjective
+as a base change of the morphism $a$ (see
+Lemma \ref{lemma-base-change-transformation-property}).
+Thus the result follows from Theorem \ref{theorem-final-bootstrap}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quotient-stack-isom}
+Assume $B \to S$ and $(U, R, s, t, c)$ are as in
+Groupoids in Spaces,
+Definition \ref{spaces-groupoids-definition-quotient-stack} (1).
+For any scheme $T$ over $S$ and objects $x, y$ of $[U/R]$ over $T$
+the sheaf $\mathit{Isom}(x, y)$ on $(\Sch/T)_{fppf}$
+is an algebraic space.
+\end{lemma}
+
+\begin{proof}
+By
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-isom}
+there exists an fppf covering $\{T_i \to T\}_{i \in I}$
+such that $\mathit{Isom}(x, y)|_{(\Sch/T_i)_{fppf}}$
+is an algebraic space for each $i$. By
+Spaces, Lemma \ref{spaces-lemma-rephrase}
this means that each $F_i = h_{T_i} \times \mathit{Isom}(x, y)$
+is an algebraic space.
Thus to prove the lemma we only have to verify, via
Lemma \ref{lemma-locally-algebraic-space}
above, the set theoretic condition
that $\coprod F_i$ is an algebraic space. To do this we use
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}
+which requires showing that $I$ and the $F_i$ are not ``too large''.
+We suggest the reader skip the rest of the proof.
+
+\medskip\noindent
+Choose $U' \in \Ob(\Sch/S)_{fppf}$ and a surjective
+\'etale morphism $U' \to U$. Let $R'$ be the restriction of $R$ to $U'$.
+Since $[U/R] = [U'/R']$ we may, after replacing $U$ by $U'$,
+assume that $U$ is a scheme. (This step is here so that the
+fibre products below are over a scheme.)
+
+\medskip\noindent
+Note that if we refine the covering $\{T_i \to T\}$ then it remains
+true that each $F_i$ is an algebraic space.
+Hence we may assume that each $T_i$ is affine. Since
+$T_i \to T$ is locally of finite presentation, this then implies that
+$\text{size}(T_i) \leq \text{size}(T)$, see
+Sets, Lemma \ref{sets-lemma-bound-finite-type}.
+We may also assume that the cardinality of the index set $I$ is at most the
+cardinality of the set of points of $T$ since to get a
+covering it suffices to check that each point of $T$ is in the image.
+Hence $|I| \leq \text{size}(T)$.
+Choose $W \in \Ob((\Sch/S)_{fppf})$
+and a surjective \'etale morphism $W \to R$. Note that in the proof of
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-isom}
+we showed that $F_i$ is representable by
+$T_i \times_{(y_i, x_i), U \times_B U} R$ for some
+$x_i, y_i : T_i \to U$. Hence now we see that
+$V_i = T_i \times_{(y_i, x_i), U \times_B U} W$ is a
+scheme which comes with an \'etale surjection $V_i \to F_i$.
+By
+Sets, Lemma \ref{sets-lemma-bound-size-fibre-product}
+we see that
+$$
+\text{size}(V_i) \leq \max\{\text{size}(T_i), \text{size}(W)\}
+\leq \max\{\text{size}(T), \text{size}(W)\}
+$$
+Hence, by
+Sets, Lemma \ref{sets-lemma-bound-size}
+we conclude that
+$$
+\text{size}(\coprod\nolimits_{i \in I} V_i)
+\leq \max\{|I|, \text{size}(T), \text{size}(W)\}.
+$$
+Hence we conclude by our construction of $\Sch$
+that $\coprod_{i \in I} V_i$ is isomorphic to an object
+$V$ of $(\Sch/S)_{fppf}$. This verifies the
+hypothesis of
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-covering-quotient}
+Let $S$ be a scheme. Consider an algebraic space $F$ of the form $F = U/R$
+where $(U, R, s, t, c)$ is a groupoid in algebraic spaces
+over $S$ such that $s, t$ are flat and locally of finite presentation, and
+$j = (t, s) : R \to U \times_S U$ is an equivalence relation.
+Then $U \to F$ is surjective, flat, and locally of finite presentation.
+\end{lemma}
+
+\begin{proof}
+This is almost but not quite a triviality. Namely, by
+Groupoids in Spaces, Lemma
+\ref{spaces-groupoids-lemma-quotient-pre-equivalence}
+and the fact that $j$ is a monomorphism we see that $R = U \times_F U$.
+Choose a scheme $W$ and a surjective \'etale morphism $W \to F$.
+As $U \to F$ is a surjection of sheaves we can find an fppf covering
+$\{W_i \to W\}$ and maps $W_i \to U$ lifting the morphisms $W_i \to F$.
+Then we see that
+$$
+W_i \times_F U = W_i \times_U U \times_F U = W_i \times_{U, t} R
+$$
+and the projection $W_i \times_F U \to W_i$ is the base change of
+$t : R \to U$ hence flat and locally of finite presentation, see
+Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-base-change-flat} and
+\ref{spaces-morphisms-lemma-base-change-finite-presentation}.
+Hence by
+Descent on Spaces, Lemmas
+\ref{spaces-descent-lemma-descending-property-flat} and
+\ref{spaces-descent-lemma-descending-property-locally-finite-presentation}
+we see that $U \to F$ is flat and locally of finite presentation.
+It is surjective by
+Spaces, Remark \ref{spaces-remark-warning}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quotient-free-action}
+Let $S$ be a scheme. Let $X \to B$ be a morphism of algebraic spaces over
+$S$. Let $G$ be a group algebraic space over $B$ and let
+$a : G \times_B X \to X$ be an action of $G$ on $X$ over $B$.
+If
+\begin{enumerate}
+\item $a$ is a free action, and
+\item $G \to B$ is flat and locally of finite presentation,
+\end{enumerate}
+then $X/G$ (see
+Groupoids in Spaces, Definition
+\ref{spaces-groupoids-definition-quotient-sheaf})
+is an algebraic space and $X \to X/G$ is surjective, flat, and locally
+of finite presentation.
+\end{lemma}
+
+\begin{proof}
+The fact that $X/G$ is an algebraic space is immediate from
+Theorem \ref{theorem-final-bootstrap}
+and the definitions. Namely, $X/G = X/R$ where $R = G \times_B X$.
+The morphisms $s, t : G \times_B X \to X$ are flat and locally of
+finite presentation (clear for $s$ as a base change of $G \to B$ and
+by symmetry using the inverse it follows for $t$) and the morphism
+$j : G \times_B X \to X \times_B X$ is a monomorphism by
+Groupoids in Spaces, Lemma \ref{spaces-groupoids-lemma-free-action}
+as the action is free. The assertions about the morphism $X \to X/G$
+follow from
+Lemma \ref{lemma-covering-quotient}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-torsor}
+Let $\{S_i \to S\}_{i \in I}$ be a covering of $(\Sch/S)_{fppf}$.
+Let $G$ be a group algebraic space over $S$, and denote
+$G_i = G_{S_i}$ the base changes. Suppose given
+\begin{enumerate}
+\item for each $i \in I$ an fppf $G_i$-torsor $X_i$ over $S_i$,
+and
+\item for each $i, j \in I$ a $G_{S_i \times_S S_j}$-equivariant isomorphism
+$\varphi_{ij} : X_i \times_S S_j \to S_i \times_S X_j$ satisfying the cocycle
condition over $S_i \times_S S_j \times_S S_k$ for every $i, j, k \in I$.
+\end{enumerate}
+Then there exists an fppf $G$-torsor $X$ over $S$
+whose base change to $S_i$ is isomorphic to $X_i$ such that we
+recover the descent datum $\varphi_{ij}$.
+\end{lemma}
+
+\begin{proof}
+We may think of $X_i$ as a sheaf on $(\Sch/S_i)_{fppf}$, see
+Spaces, Section \ref{spaces-section-change-base-scheme}.
+By
+Sites, Section \ref{sites-section-glueing-sheaves}
+the descent datum $(X_i, \varphi_{ij})$ is effective in the sense that
+there exists a unique sheaf $X$ on $(\Sch/S)_{fppf}$ which
+recovers the algebraic spaces $X_i$ after restricting back to
+$(\Sch/S_i)_{fppf}$. Hence we see that
+$X_i = h_{S_i} \times X$. By
+Lemma \ref{lemma-locally-algebraic-space}
+we see that $X$ is an algebraic space, modulo verifying that $\coprod X_i$
+is an algebraic space which we do at the end of the proof.
+By the equivalence of categories in
+Sites, Lemma \ref{sites-lemma-mapping-property-glue}
+the action maps $G_i \times_{S_i} X_i \to X_i$
+glue to give a map $a : G \times_S X \to X$.
+Now we have to show that $a$ is an action and that $X$
+is a pseudo-torsor, and fppf locally trivial (see
+Groupoids in Spaces,
+Definition \ref{spaces-groupoids-definition-principal-homogeneous-space}).
+These may be checked fppf locally, and
+hence follow from the corresponding properties of the actions
+$G_i \times_{S_i} X_i \to X_i$. Hence the lemma is true.
+
+\medskip\noindent
+We suggest the reader skip the rest of the proof, which is purely set
theoretical. Pick coverings $\{S_{ij} \to S_i\}_{j \in J_i}$ of
+$(\Sch/S)_{fppf}$
which trivialize the $G_i$-torsors $X_i$ (possible by assumption, and
+Topologies, Lemma \ref{topologies-lemma-fppf-induced} part (1)).
+Then $\{S_{ij} \to S\}_{i \in I, j \in J_i}$ is a covering of
+$(\Sch/S)_{fppf}$ and hence we may assume that each $X_i$
+is the trivial torsor! Of course we may also refine the covering further,
+hence we may assume that each $S_i$ is affine and that the index
+set $I$ has cardinality bounded by the cardinality of the set of points
+of $S$. Choose $U \in \Ob((\Sch/S)_{fppf})$ and a surjective
+\'etale morphism $U \to G$. Then we see that $U_i = U \times_S S_i$ comes
+with an \'etale surjective morphism to $X_i \cong G_i$. By
+Sets, Lemma \ref{sets-lemma-bound-size-fibre-product}
+we see $\text{size}(U_i) \leq \max\{\text{size}(U), \text{size}(S_i)\}$. By
+Sets, Lemma \ref{sets-lemma-bound-finite-type}
+we have $\text{size}(S_i) \leq \text{size}(S)$.
+Hence we see that
+$\text{size}(U_i) \leq \max\{\text{size}(U), \text{size}(S)\}$
+for all $i \in I$. Together with the bound on $|I|$ we found above we
+conclude from
+Sets, Lemma \ref{sets-lemma-bound-size}
+that $\text{size}(\coprod U_i) \leq \max\{\text{size}(U), \text{size}(S)\}$.
+Hence
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}
+applies to show that $\coprod X_i$ is an algebraic space which is
+what we had to prove.
+\end{proof}
+
+
+
+
+\section{Algebraic spaces in the \'etale topology}
+\label{section-spaces-etale}
+
+\noindent
+Let $S$ be a scheme. Instead of working with sheaves over
+the big fppf site $(\Sch/S)_{fppf}$ we could work with sheaves
+over the big \'etale site $(\Sch/S)_\etale$. All of the material in
+Algebraic Spaces, Sections \ref{spaces-section-representable} and
+\ref{spaces-section-representable-properties}
+makes sense for sheaves over $(\Sch/S)_\etale$.
+Thus we get a second notion of algebraic spaces by working in the
\'etale topology. This notion is (a priori) weaker than the notion introduced
+in Algebraic Spaces, Definition \ref{spaces-definition-algebraic-space}
+since a sheaf in the fppf topology is certainly a sheaf in the \'etale
+topology. However, the notions are equivalent as is shown by the following
+lemma.
+
+\begin{lemma}
+\label{lemma-spaces-etale}
+Denote the common underlying category of $\Sch_{fppf}$ and $\Sch_\etale$ by
+$\Sch_\alpha$ (see Topologies, Remark \ref{topologies-remark-choice-sites}).
+Let $S$ be an object of $\Sch_\alpha$. Let
+$$
+F : (\Sch_\alpha/S)^{opp} \longrightarrow \textit{Sets}
+$$
+be a presheaf with the following properties:
+\begin{enumerate}
+\item $F$ is a sheaf for the \'etale topology,
+\item the diagonal $\Delta : F \to F \times F$ is representable, and
+\item there exists $U \in \Ob(\Sch_\alpha/S)$
+and $U \to F$ which is surjective and \'etale.
+\end{enumerate}
+Then $F$ is an algebraic space in the sense of
+Algebraic Spaces, Definition \ref{spaces-definition-algebraic-space}.
+\end{lemma}
+
+\begin{proof}
+Note that properties (2) and (3) of the lemma and the corresponding
+properties (2) and (3) of
+Algebraic Spaces, Definition \ref{spaces-definition-algebraic-space}
+are independent of the topology. This is true because these properties
+involve only the notion of a fibre product of presheaves, maps of
+presheaves, the notion of a representable transformation of functors,
+and what it means for such a transformation to be surjective and \'etale.
+Thus all we have to prove is that an \'etale sheaf $F$ with properties
+(2) and (3) is also an fppf sheaf.
+
+\medskip\noindent
+To do this, let $R = U \times_F U$. By (2) the presheaf $R$ is representable
+by a scheme and by (3) the projections $R \to U$ are \'etale. Thus
+$j : R \to U \times_S U$ is an \'etale equivalence relation. Moreover
+$U \to F$ identifies $F$ as the quotient of $U$ by $R$ for the
+\'etale topology: (a) if $T \to F$ is a morphism, then $\{T \times_F U \to T\}$
+is an \'etale covering, hence $U \to F$ is a surjection of sheaves for the
+\'etale topology, (b) if $a, b : T \to U$ map to the same section of $F$,
+then $(a, b) : T \to R$ hence $a$ and $b$ have the same image in the quotient
+of $U$ by $R$ for the \'etale topology. Next, let $U/R$ denote the quotient
+sheaf in the fppf topology which is an algebraic space by
+Spaces, Theorem \ref{spaces-theorem-presentation}.
+Thus we have morphisms (transformations of functors)
+$$
+U \to F \to U/R.
+$$
+By the aforementioned
+Spaces, Theorem \ref{spaces-theorem-presentation}
+the composition is representable, surjective, and \'etale. Hence for any
+scheme $T$ and morphism $T \to U/R$ the fibre product $V = T \times_{U/R} U$
+is a scheme surjective and \'etale over $T$. In other words, $\{V \to U\}$
+is an \'etale covering. This proves that $U \to U/R$ is surjective as
+a map of sheaves in the \'etale topology. It follows that
+$F \to U/R$ is surjective as a map of sheaves in the \'etale topology.
+On the other hand, the map $F \to U/R$ is injective (as a map of presheaves)
+since $R = U \times_{U/R} U$ again by
+Spaces, Theorem \ref{spaces-theorem-presentation}.
+It follows that $F \to U/R$ is an isomorphism of \'etale sheaves, see
Sites, Lemma \ref{sites-lemma-mono-epi-sheaves},
which concludes the proof.
+\end{proof}
+
+\noindent
+There is also an analogue of
+Spaces, Lemma \ref{spaces-lemma-etale-locally-representable-gives-space}.
+
+\begin{lemma}
+\label{lemma-spaces-etale-locally-representable}
+Denote the common underlying category of $\Sch_{fppf}$ and $\Sch_\etale$ by
+$\Sch_\alpha$ (see Topologies, Remark \ref{topologies-remark-choice-sites}).
+Let $S$ be an object of $\Sch_\alpha$. Let
+$$
+F : (\Sch_\alpha/S)^{opp} \longrightarrow \textit{Sets}
+$$
+be a presheaf with the following properties:
+\begin{enumerate}
+\item $F$ is a sheaf for the \'etale topology,
+\item there exists an algebraic space $U$ over $S$
+and a map $U \to F$ which is representable by
+algebraic spaces, surjective, and \'etale.
+\end{enumerate}
+Then $F$ is an algebraic space in the sense of
+Algebraic Spaces, Definition \ref{spaces-definition-algebraic-space}.
+\end{lemma}
+
+\begin{proof}
+Set $R = U \times_F U$. This is an algebraic space as $U \to F$ is assumed
+representable by algebraic spaces. The projections $s, t : R \to U$ are
+\'etale morphisms of algebraic spaces as $U \to F$ is assumed \'etale.
+The map $j = (t, s) : R \to U \times_S U$ is a monomorphism and an
+equivalence relation as $R = U \times_F U$. By
+Theorem \ref{theorem-final-bootstrap}
+the fppf quotient sheaf $F' = U/R$ is an algebraic space.
+The morphism $U \to F'$ is surjective, flat, and locally of finite
+presentation by Lemma \ref{lemma-covering-quotient}.
+The map $R \to U \times_{F'} U$ is surjective as a map of fppf
+sheaves by Groupoids in Spaces, Lemma
+\ref{spaces-groupoids-lemma-quotient-pre-equivalence}
+and since $j$ is a monomorphism it is an isomorphism.
+Hence the base change of $U \to F'$ by $U \to F'$ is \'etale,
+and we conclude that $U \to F'$ is \'etale by
+Descent on Spaces, Lemma \ref{spaces-descent-lemma-descending-property-etale}.
+Thus $U \to F'$ is surjective as a map of \'etale sheaves.
+This means that $F'$ is equal to the quotient sheaf $U/R$
+in the \'etale topology (small check omitted). Hence we obtain
+a canonical factorization $U \to F' \to F$ and $F' \to F$ is an injective
+map of sheaves. On the other hand, $U \to F$ is surjective as a map
+of \'etale sheaves and hence so is $F' \to F$. This means that $F' = F$
+and the proof is complete.
+\end{proof}
+
+\noindent
+In fact, it suffices to have a smooth cover by a scheme and it suffices
+to assume the diagonal is representable by algebraic spaces.
+
+\begin{lemma}
+\label{lemma-spaces-etale-smooth-cover}
+Denote the common underlying category of $\Sch_{fppf}$
+and $\Sch_\etale$ by $\Sch_\alpha$ (see
+Topologies, Remark \ref{topologies-remark-choice-sites}). Let $S$ be an object
of $\Sch_\alpha$. Let
+$$
+F : (\Sch_\alpha/S)^{opp} \longrightarrow \textit{Sets}
+$$
+be a presheaf with the following properties:
+\begin{enumerate}
+\item $F$ is a sheaf for the \'etale topology,
+\item the diagonal $\Delta : F \to F \times F$ is representable
+by algebraic spaces, and
+\item there exists $U \in \Ob(\Sch_\alpha/S)$
+and $U \to F$ which is surjective and smooth.
+\end{enumerate}
+Then $F$ is an algebraic space in the sense of
+Algebraic Spaces, Definition \ref{spaces-definition-algebraic-space}.
+\end{lemma}
+
+\begin{proof}
+The proof mirrors the proof of Lemma \ref{lemma-spaces-etale}. Let
+$R = U \times_F U$. By (2) the presheaf $R$ is an algebraic space and by (3)
+the projections $R \to U$ are smooth and surjective. Denote $(U, R, s, t, c)$
+the groupoid associated to the equivalence relation $j : R \to U \times_S U$
+(see Groupoids in Spaces, Lemma
+\ref{spaces-groupoids-lemma-equivalence-groupoid}).
+By Theorem \ref{theorem-final-bootstrap} we see that $X = U/R$ (quotient
+in the fppf-topology) is an algebraic space. Using that the smooth
+topology and the \'etale topology have the same sheaves (by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-etale-dominates-smooth})
+we see the map $U \to F$ identifies $F$ as the quotient of
+$U$ by $R$ for the smooth topology (details omitted).
+Thus we have morphisms (transformations of functors)
+$$
+U \to F \to X.
+$$
+By Lemma \ref{lemma-covering-quotient} we see that $U \to X$ is
+surjective, flat and locally of finite presentation. By
+Groupoids in Spaces, Lemma
+\ref{spaces-groupoids-lemma-quotient-pre-equivalence}
+(and the fact that $j$ is a monomorphism) we have $R = U \times_X U$. By
+Descent on Spaces, Lemma \ref{spaces-descent-lemma-descending-property-smooth}
+we conclude that $U \to X$ is smooth and surjective (as the projections
+$R \to U$ are smooth and surjective and $\{U \to X\}$ is an fppf
+covering). Hence for any scheme $T$ and morphism $T \to X$ the fibre product
+$T \times_X U$ is an algebraic space surjective and smooth over $T$.
+Choose a scheme $V$ and a surjective \'etale morphism $V \to T \times_X U$.
+Then $\{V \to T\}$ is a smooth covering such that $V \to T \to X$
+lifts to a morphism $V \to U$. This proves that
+$U \to X$ is surjective as a map of sheaves in the smooth topology.
+It follows that $F \to X$ is surjective as a map of sheaves in the smooth
+topology. On the other hand, the map $F \to X$ is injective (as a map
+of presheaves) since $R = U \times_X U$.
+It follows that $F \to X$ is an isomorphism of smooth ($=$ \'etale)
sheaves, see Sites, Lemma \ref{sites-lemma-mono-epi-sheaves},
which concludes the proof.
+\end{proof}
+
+\noindent
+Finally, here is the analogue of
+Spaces, Lemma \ref{spaces-lemma-etale-locally-representable-gives-space}
+with a smooth morphism covering the space.
+
+\begin{lemma}
+\label{lemma-spaces-smooth-locally-representable}
+Denote the common underlying category of $\Sch_{fppf}$ and $\Sch_\etale$ by
+$\Sch_\alpha$ (see Topologies, Remark \ref{topologies-remark-choice-sites}).
+Let $S$ be an object of $\Sch_\alpha$. Let
+$$
+F : (\Sch_\alpha/S)^{opp} \longrightarrow \textit{Sets}
+$$
+be a presheaf with the following properties:
+\begin{enumerate}
+\item $F$ is a sheaf for the \'etale topology,
+\item there exists an algebraic space $U$ over $S$
+and a map $U \to F$ which is representable by
+algebraic spaces, surjective, and smooth.
+\end{enumerate}
+Then $F$ is an algebraic space in the sense of
+Algebraic Spaces, Definition \ref{spaces-definition-algebraic-space}.
+\end{lemma}
+
+\begin{proof}
+The proof is identical to the proof of
+Lemma \ref{lemma-spaces-etale-locally-representable}.
+Set $R = U \times_F U$. This is an algebraic space as $U \to F$ is assumed
+representable by algebraic spaces. The projections $s, t : R \to U$ are
+smooth morphisms of algebraic spaces as $U \to F$ is assumed smooth.
+The map $j = (t, s) : R \to U \times_S U$ is a monomorphism and an
+equivalence relation as $R = U \times_F U$. By
+Theorem \ref{theorem-final-bootstrap}
+the fppf quotient sheaf $F' = U/R$ is an algebraic space.
+The morphism $U \to F'$ is surjective, flat, and locally of finite
+presentation by Lemma \ref{lemma-covering-quotient}.
+The map $R \to U \times_{F'} U$ is surjective as a map of fppf
+sheaves by Groupoids in Spaces, Lemma
+\ref{spaces-groupoids-lemma-quotient-pre-equivalence}
+and since $j$ is a monomorphism it is an isomorphism.
+Hence the base change of $U \to F'$ by $U \to F'$ is smooth,
+and we conclude that $U \to F'$ is smooth by
+Descent on Spaces, Lemma \ref{spaces-descent-lemma-descending-property-smooth}.
+Thus $U \to F'$ is surjective as a map of \'etale sheaves (as the
+smooth topology is equal to the \'etale topology by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-etale-dominates-smooth}).
+This means that $F'$ is equal to the quotient sheaf $U/R$
+in the \'etale topology (small check omitted). Hence we obtain
+a canonical factorization $U \to F' \to F$ and $F' \to F$ is an injective
+map of sheaves. On the other hand, $U \to F$ is surjective as a map
+of \'etale sheaves (as the smooth topology is the same as the
+\'etale topology) and hence so is $F' \to F$. This means that $F' = F$
+and the proof is complete.
+\end{proof}
+
+
+
+
+
+
+\input{chapters}
+
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/brauer.tex b/books/stacks/brauer.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8b218ab6527f649453f7ed09455ed1af726570c3
--- /dev/null
+++ b/books/stacks/brauer.tex
@@ -0,0 +1,819 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Brauer groups}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
A reference is the lectures by Serre in the S\'eminaire Cartan, see
+\cite{Serre-Cartan}. Serre in turn refers to
+\cite{Deuring} and \cite{ANT}. We changed some of the proofs, in particular
+we used a fun argument of Rieffel to prove Wedderburn's theorem.
+Very likely this change is not an improvement and we strongly
+encourage the reader to read the original exposition by Serre.
+
+
+\section{Noncommutative algebras}
+\label{section-algebras}
+
+\noindent
+Let $k$ be a field. In this chapter an {\it algebra} $A$ over $k$ is
+a possibly noncommutative ring $A$ together with a ring map
+$k \to A$ such that $k$ maps into the center of $A$ and such that
+$1$ maps to an identity element of $A$. An {\it $A$-module} is a right
+$A$-module such that the identity of $A$ acts as the identity.
+
+\begin{definition}
+\label{definition-finite}
+Let $A$ be a $k$-algebra. We say $A$ is {\it finite} if $\dim_k(A) < \infty$.
+In this case we write $[A : k] = \dim_k(A)$.
+\end{definition}
+
+\begin{definition}
+\label{definition-skew-field}
+A {\it skew field} is a possibly noncommutative ring with an identity
+element $1$, with $1 \not = 0$, in which every nonzero element
+has a multiplicative inverse.
+\end{definition}
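
\noindent
For example, the $\mathbf{R}$-algebra $\mathbf{H}$ of Hamilton quaternions,
with basis $1, i, j, ij$ and relations $i^2 = j^2 = -1$ and $ji = -ij$,
is a skew field which is not a field: a nonzero quaternion
$x = a + bi + cj + dij$ has inverse
$(a - bi - cj - dij)/(a^2 + b^2 + c^2 + d^2)$.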
+
+\noindent
+A skew field is a $k$-algebra for some $k$ (e.g., for the prime field
+contained in it). We will use below that any module over a skew field
+is free because a maximal linearly independent set of vectors forms a
+basis and exists by Zorn's lemma.
+
+\begin{definition}
+\label{definition-simple}
+Let $A$ be a $k$-algebra.
+We say an $A$-module $M$ is {\it simple} if it is nonzero and
+the only $A$-submodules are $0$ and $M$.
+We say $A$ is {\it simple} if the only two-sided ideals of $A$ are
+$0$ and $A$.
+\end{definition}
+
+\begin{definition}
+\label{definition-central}
+A $k$-algebra $A$ is {\it central} if the center of $A$ is the image of
+$k \to A$.
+\end{definition}
+
+\begin{definition}
+\label{definition-opposite}
+Given a $k$-algebra $A$ we denote $A^{op}$ the $k$-algebra we get by
+reversing the order of multiplication in $A$. This is called the
+{\it opposite algebra}.
+\end{definition}
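
\noindent
For example, conjugation of Hamilton quaternions,
$x = a + bi + cj + dij \mapsto \overline{x} = a - bi - cj - dij$,
satisfies $\overline{xy} = \overline{y}\,\overline{x}$ and hence is an
isomorphism of $\mathbf{R}$-algebras $\mathbf{H} \to \mathbf{H}^{op}$.
Similarly, transposition gives an isomorphism
$\text{Mat}(n \times n, k) \to \text{Mat}(n \times n, k)^{op}$
since $(XY)^t = Y^tX^t$.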
+
+
+
+
+\section{Wedderburn's theorem}
+\label{section-wedderburn}
+
+\noindent
+The following cute argument can be found in a paper of Rieffel, see
\cite{Rieffel}. The proof ``could not be simpler'' (a quote from
Carl Faith's review).
+
+\begin{lemma}
+\label{lemma-rieffel}
+Let $A$ be a possibly noncommutative ring with $1$ which contains no
+nontrivial two-sided ideal. Let $M$ be a nonzero right ideal in $A$,
+and view $M$ as a right $A$-module. Then $A$ coincides with the
+bicommutant of $M$.
+\end{lemma}
+
+\begin{proof}
+Let $A' = \text{End}_A(M)$, so $M$ is a left $A'$-module.
+Set $A'' = \text{End}_{A'}(M)$ (the bicommutant of $M$).
+We view $A''$ as an algebra so that $M$ is a right $A''$-module\footnote{This
+means that given $a'' \in A''$ and $m \in M$ we have a product
+$m a'' \in M$. In particular, the multiplication in $A''$
+is the opposite of what you'd get if you wrote elements of $A''$
+as endomorphisms acting on the left.}.
+Let $R : A \to A''$ be the natural homomorphism such that
+$mR(a) = ma$. Then $R$ is injective, since $R(1) = \text{id}_M$
+and $A$ contains no nontrivial two-sided ideal. We claim that $R(M)$
+is a right ideal in $A''$. Namely, $R(m)a'' = R(ma'')$ for $a'' \in A''$
+and $m$ in $M$, because {\it left} multiplication of $M$ by any element $n$
+of $M$ represents an element of $A'$, and so
+$(nm)a'' = n(ma'')$ for all $n$ in $M$.
Finally, the product ideal $AM$ is a nonzero two-sided ideal, and so
$A = AM$. Thus $R(A) = R(A)R(M)$, so that $R(A)$ is a right ideal in $A''$.
+But $R(A)$ contains the identity element of $A''$, and so $R(A) = A''$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-simple-module}
+Let $A$ be a $k$-algebra. If $A$ is finite, then
+\begin{enumerate}
+\item $A$ has a simple module,
+\item any nonzero module contains a simple submodule,
+\item a simple module over $A$ has finite dimension over $k$, and
+\item if $M$ is a simple $A$-module, then $\text{End}_A(M)$ is a
+skew field.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Of course (1) follows from (2) since $A$ is a nonzero $A$-module.
+For (2), any submodule of minimal (finite) dimension as a $k$-vector
+space will be simple. There exists a finite dimensional one
because a cyclic submodule is one. If $M$ is simple and $m \in M$ is
nonzero, then $M = mA$ is cyclic, hence finite dimensional over $k$,
and we see (3). Any nonzero element of $\text{End}_A(M)$ is an
isomorphism, since its kernel and image are submodules of $M$;
hence (4) holds.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-wedderburn}
+\begin{slogan}
+Simple finite algebras over a field are matrix algebras over a skew field.
+\end{slogan}
+Let $A$ be a simple finite $k$-algebra. Then $A$ is a matrix algebra over
+a finite $k$-algebra $K$ which is a skew field.
+\end{theorem}
+
+\begin{proof}
+We may choose a simple submodule $M \subset A$ and then
+the $k$-algebra $K = \text{End}_A(M)$ is a skew field, see
+Lemma \ref{lemma-simple-module}.
+By
+Lemma \ref{lemma-rieffel}
+we see that $A = \text{End}_K(M)$. Since $K$ is a skew field and
+$M$ is finitely generated (since $\dim_k(M) < \infty$) we see that
+$M$ is finite free as a left $K$-module. It follows immediately that
+$A \cong \text{Mat}(n \times n, K^{op})$.
+\end{proof}
+
+
+
+
+
+
+\section{Lemmas on algebras}
+\label{section-lemmas}
+
+\noindent
+Let $A$ be a $k$-algebra. Let $B \subset A$ be a subalgebra.
+The {\it centralizer of $B$ in $A$} is the subalgebra
+$$
+C = \{y \in A \mid xy = yx \text{ for all }x \in B\}.
+$$
+It is a $k$-algebra.
+
+\begin{lemma}
+\label{lemma-centralizer}
+Let $A$, $A'$ be $k$-algebras. Let $B \subset A$, $B' \subset A'$ be
+subalgebras with centralizers $C$, $C'$. Then the centralizer of
+$B \otimes_k B'$ in $A \otimes_k A'$ is $C \otimes_k C'$.
+\end{lemma}
+
+\begin{proof}
+Denote $C'' \subset A \otimes_k A'$ the centralizer of $B \otimes_k B'$.
+It is clear that $C \otimes_k C' \subset C''$. Conversely, every element
+of $C''$ commutes with $B \otimes 1$ hence is contained in $C \otimes_k A'$.
+Similarly $C'' \subset A \otimes_k C'$. Thus
+$C'' \subset C \otimes_k A' \cap A \otimes_k C' = C \otimes_k C'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-center-csa}
+Let $A$ be a finite simple $k$-algebra. Then the center $k'$ of $A$
+is a finite field extension of $k$.
+\end{lemma}
+
+\begin{proof}
+Write $A = \text{Mat}(n \times n, K)$ for some skew field $K$ finite
+over $k$, see
+Theorem \ref{theorem-wedderburn}.
+By
+Lemma \ref{lemma-centralizer}
+the center of $A$ is $k \otimes_k k'$ where $k' \subset K$ is the
+center of $K$. Since the center of a skew field is a field, we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generate-two-sided-sub}
Let $V$ be a $k$-vector space. Let $K$ be a central $k$-algebra
which is a skew field. Let $W \subset V \otimes_k K$ be a two-sided
$K$-subspace, i.e., stable under multiplication by elements of $K$
on either side. Then $W$ is generated as a left $K$-vector
space by $W \cap (V \otimes 1)$.
+\end{lemma}
+
+\begin{proof}
Let $V' \subset V$ be the $k$-subspace generated by the elements $v \in V$
such that $v \otimes 1 \in W$. Then $V' \otimes_k K \subset W$ and
+we have
+$$
+W/(V' \otimes_k K) \subset (V/V') \otimes_k K.
+$$
+If $\overline{v} \in V/V'$ is a nonzero vector such that
+$\overline{v} \otimes 1$ is contained in $W/(V' \otimes_k K)$,
+then we see that $v \otimes 1 \in W$ where $v \in V$ lifts $\overline{v}$.
+This contradicts our construction of $V'$. Hence we may replace
+$V$ by $V/V'$ and $W$ by $W/(V' \otimes_k K)$ and it suffices to prove
+that $W \cap (V \otimes 1)$ is nonzero if $W$ is nonzero.
+
+\medskip\noindent
+To see this let $w \in W$ be a nonzero element which can be written
+as $w = \sum_{i = 1, \ldots, n} v_i \otimes k_i$ with $n$ minimal.
+We may right multiply with $k_1^{-1}$ and assume that $k_1 = 1$.
+If $n = 1$, then we win because $v_1 \otimes 1 \in W$.
+If $n > 1$, then we see that for any $c \in K$
+$$
+c w - w c = \sum\nolimits_{i = 2, \ldots, n} v_i \otimes (c k_i - k_i c) \in W
+$$
+and hence $c k_i - k_i c = 0$ by minimality of $n$.
+This implies that $k_i$ is in the center of $K$ which is $k$ by
assumption. Hence
$w = (v_1 + \sum\nolimits_{i = 2, \ldots, n} k_i v_i) \otimes 1$
contradicting the minimality of $n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generate-two-sided-ideal}
+Let $A$ be a $k$-algebra. Let $K$ be a central $k$-algebra
+which is a skew field. Then any two-sided ideal $I \subset A \otimes_k K$
+is of the form $J \otimes_k K$ for some two-sided ideal $J \subset A$.
+In particular, if $A$ is simple, then so is $A \otimes_k K$.
+\end{lemma}
+
+\begin{proof}
+Set $J = \{a \in A \mid a \otimes 1 \in I\}$. This is a two-sided ideal
+of $A$. And $I = J \otimes_k K$ by
+Lemma \ref{lemma-generate-two-sided-sub}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-matrix-algebras}
+Let $R$ be a possibly noncommutative ring. Let $n \geq 1$ be an integer.
+Let $R_n = \text{Mat}(n \times n, R)$.
+\begin{enumerate}
+\item The functors $M \mapsto M^{\oplus n}$ and
+$N \mapsto Ne_{11}$ define quasi-inverse equivalences of categories
+$\text{Mod}_R \leftrightarrow \text{Mod}_{R_n}$.
+\item A two-sided ideal of $R_n$ is of the form $IR_n$ for some
+two-sided ideal $I$ of $R$.
+\item The center of $R_n$ is equal to the center of $R$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Part (1) proves itself. If $J \subset R_n$ is a two-sided ideal, then
$J = \bigoplus e_{ii}Je_{jj}$ and, multiplying by matrix units on either
side, all of the summands $e_{ii}Je_{jj}$ are equal to each other and,
viewed as subsets of $R$, form a two-sided ideal $I$ of $R$ with
$J = IR_n$. This proves (2). Part (3) is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-simple-module-unique}
+Let $A$ be a finite simple $k$-algebra.
+\begin{enumerate}
+\item There exists exactly one simple $A$-module $M$ up to isomorphism.
+\item Any finite $A$-module is a direct sum of copies of a simple module.
+\item Two finite $A$-modules are isomorphic if and only if they
+have the same dimension over $k$.
+\item If $A = \text{Mat}(n \times n, K)$ with $K$ a finite skew field
+extension of $k$, then $M = K^{\oplus n}$ is a simple $A$-module and
+$\text{End}_A(M) = K^{op}$.
+\item If $M$ is a simple $A$-module, then $L = \text{End}_A(M)$
+is a skew field finite over $k$ acting on the left on $M$, we have
+$A = \text{End}_L(M)$, and the centers of $A$ and $L$ agree.
+Also $[A : k] [L : k] = \dim_k(M)^2$.
+\item For a finite $A$-module $N$ the algebra $B = \text{End}_A(N)$ is a
+matrix algebra over the skew field $L$ of (5). Moreover $\text{End}_B(N) = A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By
+Theorem \ref{theorem-wedderburn}
+we can write $A = \text{Mat}(n \times n, K)$ for some finite skew
+field extension $K$ of $k$. By
+Lemma \ref{lemma-matrix-algebras}
+the category of modules over $A$ is equivalent to the category of
+modules over $K$. Thus (1), (2), and (3) hold
+because every module over $K$ is free. Part (4) holds
+because the equivalence transforms the $K$-module $K$
+to $M = K^{\oplus n}$. Using $M = K^{\oplus n}$ in (5)
+we see that $L = K^{op}$. The statement about the center of $L = K^{op}$
+follows from
+Lemma \ref{lemma-matrix-algebras}.
+The statement about $\text{End}_L(M)$ follows from the explicit form
+of $M$. The formula of dimensions is clear.
+Part (6) follows as $N$ is isomorphic to a direct sum of
+copies of a simple module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-simple}
+Let $A$, $A'$ be two simple $k$-algebras one of which is finite and central
+over $k$. Then $A \otimes_k A'$ is simple.
+\end{lemma}
+
+\begin{proof}
By symmetry of the tensor product we may assume that $A'$ is finite and
central over $k$.
+Write $A' = \text{Mat}(n \times n, K')$, see
+Theorem \ref{theorem-wedderburn}.
+Then the center of $K'$ is $k$ and we conclude that
+$A \otimes_k K'$ is simple by
+Lemma \ref{lemma-generate-two-sided-ideal}.
+Hence $A \otimes_k A' = \text{Mat}(n \times n, A \otimes_k K')$ is simple
+by Lemma \ref{lemma-matrix-algebras}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-central-simple}
+The tensor product of finite central simple algebras over $k$ is finite,
+central, and simple.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-centralizer} and \ref{lemma-tensor-simple}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change}
+Let $A$ be a finite central simple algebra over $k$.
+Let $k'/k$ be a field extension. Then $A' = A \otimes_k k'$ is
+a finite central simple algebra over $k'$.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-centralizer} and \ref{lemma-tensor-simple}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inverse}
+Let $A$ be a finite central simple algebra over $k$.
+Then $A \otimes_k A^{op} \cong \text{Mat}(n \times n, k)$
+where $n = [A : k]$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-tensor-central-simple} the algebra $A \otimes_k A^{op}$
+is simple. Hence the map
+$$
+A \otimes_k A^{op} \longrightarrow \text{End}_k(A),\quad
+a \otimes a' \longmapsto (x \mapsto axa')
+$$
+is injective. Since both sides of the arrow have the same dimension
+we win.
+\end{proof}
+
+
+
+
+
+\section{The Brauer group of a field}
+\label{section-brauer}
+
+\noindent
+Let $k$ be a field. Consider two finite central simple algebras
+$A$ and $B$ over $k$. We say $A$ and $B$ are {\it similar} if there
+exist $n, m > 0$ such that
+$\text{Mat}(n \times n, A) \cong \text{Mat}(m \times m, B)$
+as $k$-algebras.
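
\noindent
For example, $A$ and $\text{Mat}(r \times r, A)$ are similar for every
$r > 0$, as
$\text{Mat}(n \times n, \text{Mat}(r \times r, A)) \cong
\text{Mat}(nr \times nr, A)$.
In particular every matrix algebra $\text{Mat}(n \times n, k)$ is similar
to $k$ itself.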
+
+\begin{lemma}
+\label{lemma-similar}
+Similarity.
+\begin{enumerate}
+\item Similarity defines an equivalence relation on the set of isomorphism
+classes of finite central simple algebras over $k$.
+\item Every similarity class contains a unique (up to isomorphism)
+finite central skew field extension of $k$.
+\item If $A = \text{Mat}(n \times n, K)$ and $B = \text{Mat}(m \times m, K')$
+for some finite central skew fields $K$, $K'$ over $k$
+then $A$ and $B$ are similar if and only if $K \cong K'$ as $k$-algebras.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that by Wedderburn's theorem (Theorem \ref{theorem-wedderburn})
+we can always write a finite central simple algebra as a matrix
+algebra over a finite central skew field. Hence it suffices to prove
+the third assertion. To see this it suffices to show that if
+$A = \text{Mat}(n \times n, K) \cong \text{Mat}(m \times m, K') = B$
+then $K \cong K'$. To see this note that for a simple module $M$ of $A$
+we have $\text{End}_A(M) = K^{op}$, see
+Lemma \ref{lemma-simple-module-unique}.
+Hence $A \cong B$ implies $K^{op} \cong (K')^{op}$ and we win.
+\end{proof}
+
+\noindent
+Given two finite central simple $k$-algebras $A$, $B$ the tensor
+product $A \otimes_k B$ is another, see
+Lemma \ref{lemma-tensor-central-simple}.
+Moreover if $A$ is similar to $A'$, then $A \otimes_k B$ is similar
+to $A' \otimes_k B$ because tensor products and taking matrix
+algebras commute. Hence tensor product defines an operation on
+equivalence classes of finite central simple algebras which is clearly
+associative and commutative. Finally,
+Lemma \ref{lemma-inverse}
+shows that $A \otimes_k A^{op}$ is isomorphic to a matrix algebra, i.e.,
+that $A \otimes_k A^{op}$ is in the similarity class of $k$.
+Thus we obtain an abelian group.
+
+\begin{definition}
+\label{definition-brauer-group}
+Let $k$ be a field. The {\it Brauer group} of $k$ is the abelian group
+of similarity classes of finite central simple $k$-algebras defined
+above. Notation $\text{Br}(k)$.
+\end{definition}
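
\noindent
For example, $\text{Br}(\mathbf{R}) \cong \mathbf{Z}/2\mathbf{Z}$: by a
classical theorem of Frobenius the only finite skew field extensions of
$\mathbf{R}$ are $\mathbf{R}$, $\mathbf{C}$, and the Hamilton quaternions
$\mathbf{H}$, of which only $\mathbf{R}$ and $\mathbf{H}$ are central over
$\mathbf{R}$, and the class of $\mathbf{H}$ has order $2$ because
$\mathbf{H} \cong \mathbf{H}^{op}$ via conjugation, see
Lemma \ref{lemma-inverse}. Similarly $\text{Br}(k) = 0$ for any finite
field $k$, since by Wedderburn's little theorem every finite skew field
is commutative.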
+
+\noindent
+For any map of fields $k \to k'$ we obtain a group homomorphism
+$$
+\text{Br}(k) \longrightarrow \text{Br}(k'),\quad
+A \longmapsto A \otimes_k k'
+$$
+see Lemma \ref{lemma-base-change}. In other words, $\text{Br}(-)$ is
+a functor from the category of fields to the category of abelian groups.
+Observe that the Brauer group
+of a field is zero if and only if every finite central skew field
+extension $k \subset K$ is trivial.
+
+\begin{lemma}
+\label{lemma-brauer-algebraically-closed}
+The Brauer group of an algebraically closed field is zero.
+\end{lemma}
+
+\begin{proof}
+Let $k \subset K$ be a finite central skew field extension.
+For any element $x \in K$ the subring $k[x] \subset K$ is a
commutative finite integral $k$-subalgebra, hence a field, see
+Algebra, Lemma \ref{algebra-lemma-integral-over-field}.
+Since $k$ is algebraically closed we conclude that
+$k[x] = k$. Since $x$ was arbitrary we conclude $k = K$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-square}
+Let $A$ be a finite central simple algebra over a field $k$.
+Then $[A : k]$ is a square.
+\end{lemma}
+
+\begin{proof}
+This is true because $A \otimes_k \overline{k}$ is a matrix
+algebra over $\overline{k}$ by
+Lemma \ref{lemma-brauer-algebraically-closed}.
+\end{proof}
+
+
+
+
+\section{Skolem-Noether}
+\label{section-skolem-noether}
+
+
+
+\begin{theorem}
+\label{theorem-skolem-noether}
+Let $A$ be a finite central simple $k$-algebra. Let $B$ be a simple
+$k$-algebra. Let $f, g : B \to A$ be two $k$-algebra homomorphisms.
+Then there exists an invertible element $x \in A$ such that
+$f(b) = xg(b)x^{-1}$ for all $b \in B$.
+\end{theorem}
+
+\begin{proof}
+Choose a simple $A$-module $M$. Set $L = \text{End}_A(M)$.
+Then $L$ is a skew field with center $k$ which acts on the left on $M$, see
+Lemmas \ref{lemma-simple-module} and \ref{lemma-simple-module-unique}.
+Then $M$ has two $B \otimes_k L^{op}$-module structures defined by
+$m \cdot_1 (b \otimes l) = lmf(b)$ and $m \cdot_2 (b \otimes l) = lmg(b)$.
+The $k$-algebra $B \otimes_k L^{op}$ is simple by
Lemma \ref{lemma-tensor-simple}. Since $B$ is simple, any $k$-algebra
homomorphism $B \to A$ is injective (its kernel is a two-sided ideal
not containing $1$), hence $B$ is finite because $A$ is. Thus
+$B \otimes_k L^{op}$ is finite simple and we conclude the two
+$B \otimes_k L^{op}$-module structures on $M$
+are isomorphic by Lemma \ref{lemma-simple-module-unique}.
+Hence we find $\varphi : M \to M$ intertwining these operations.
+In particular $\varphi$ is in the commutant of $L$ which implies that
+$\varphi$ is multiplication by some $x \in A$, see
+Lemma \ref{lemma-simple-module-unique}. Working out the definitions we see
+that $x$ is a solution to our problem.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-automorphism-inner}
+Let $A$ be a finite central simple $k$-algebra. Any automorphism of $A$ is
+inner. In particular, any automorphism of $\text{Mat}(n \times n, k)$
+is inner.
+\end{lemma}
+
+\begin{proof}
+Note that $A$ is a finite central simple algebra over the center
+of $A$ which is a finite field extension of $k$, see
+Lemma \ref{lemma-center-csa}.
+Hence the Skolem-Noether theorem (Theorem \ref{theorem-skolem-noether})
+applies.
+\end{proof}
+
+
+
+\section{The centralizer theorem}
+\label{section-centralizer}
+
+
+\begin{theorem}
+\label{theorem-centralizer}
+Let $A$ be a finite central simple algebra over $k$, and let
+$B$ be a simple subalgebra of $A$. Then
+\begin{enumerate}
+\item the centralizer $C$ of $B$ in $A$ is simple,
+\item $[A : k] = [B : k][C : k]$, and
+\item the centralizer of $C$ in $A$ is $B$.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Throughout this proof we use the results of
+Lemma \ref{lemma-simple-module-unique} freely.
+Choose a simple $A$-module $M$. Set $L = \text{End}_A(M)$.
+Then $L$ is a skew field with center $k$ which acts on the left on $M$
+and $A = \text{End}_L(M)$.
+Then $M$ is a right $B \otimes_k L^{op}$-module and
+$C = \text{End}_{B \otimes_k L^{op}}(M)$.
+Since the algebra $B \otimes_k L^{op}$ is simple by
+Lemma \ref{lemma-tensor-simple} we see that $C$ is simple (by
+Lemma \ref{lemma-simple-module-unique} again).
+
+\medskip\noindent
+Write $B \otimes_k L^{op} = \text{Mat}(m \times m, K)$ for some
+skew field $K$ finite over $k$. Then $C = \text{Mat}(n \times n, K^{op})$
+if $M$ is isomorphic to a direct sum of $n$ copies of the simple
+$B \otimes_k L^{op}$-module $K^{\oplus m}$ (the lemma again). Thus we have
+$\dim_k(M) = nm [K : k]$, $[B : k] [L : k] = m^2 [K : k]$,
+$[C : k] = n^2 [K : k]$, and $[A : k] [L : k] = \dim_k(M)^2$ (by
+the lemma again). We conclude that (2) holds.
+
+\medskip\noindent
Part (3) follows because (2) applied to $C \subset A$ shows
that $[B : k] = [C' : k]$ where $C'$ is the centralizer of $C$ in $A$
(and the obvious fact that $B \subset C'$).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-tensor-is-equal}
+Let $A$ be a finite central simple algebra over $k$, and let
+$B$ be a simple subalgebra of $A$. If $B$ is a central
+$k$-algebra, then $A = B \otimes_k C$ where $C$ is the (central simple)
+centralizer of $B$ in $A$.
+\end{lemma}
+
+\begin{proof}
+We have $\dim_k(A) = \dim_k(B \otimes_k C)$ by
+Theorem \ref{theorem-centralizer}. By
+Lemma \ref{lemma-tensor-simple}
+the tensor product is simple. Hence the natural map
+$B \otimes_k C \to A$ is injective hence an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-self-centralizing-subfield}
+Let $A$ be a finite central simple algebra over $k$.
+If $K \subset A$ is a subfield, then the following are equivalent
+\begin{enumerate}
+\item $[A : k] = [K : k]^2$,
+\item $K$ is its own centralizer, and
+\item $K$ is a maximal commutative subring.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Theorem \ref{theorem-centralizer}
+shows that (1) and (2) are equivalent.
+It is clear that (3) and (2) are equivalent.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-maximal-subfield}
+\begin{slogan}
+The dimension of a finite central skew field is the square of the dimension
+of any maximal subfield.
+\end{slogan}
+Let $A$ be a finite central skew field over $k$.
+Then every maximal subfield $K \subset A$ satisfies
+$[A : k] = [K : k]^2$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-self-centralizing-subfield}.
+\end{proof}
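
\noindent
For example, $\mathbf{C} = \mathbf{R} \oplus \mathbf{R}i$ is a maximal
subfield of the Hamilton quaternions $\mathbf{H}$: a quaternion
$a + bi + cj + dij$ commutes with $i$ if and only if $c = d = 0$, so
$\mathbf{C}$ is its own centralizer. Accordingly
$[\mathbf{H} : \mathbf{R}] = 4 = [\mathbf{C} : \mathbf{R}]^2$.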
+
+
+
+
+
+\section{Splitting fields}
+\label{section-splitting}
+
+
+\begin{definition}
+\label{definition-splitting}
+Let $A$ be a finite central simple $k$-algebra.
+We say a field extension $k'/k$ {\it splits} $A$, or
+$k'$ is a {\it splitting field} for $A$ if $A \otimes_k k'$ is
+a matrix algebra over $k'$.
+\end{definition}
+
+\noindent
+Another way to say this is that the class of $A$ maps to zero
+under the map $\text{Br}(k) \to \text{Br}(k')$.
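
\noindent
For example, $\mathbf{C}$ splits the Hamilton quaternions $\mathbf{H}$:
the assignments
$$
i \longmapsto
\left(
\begin{matrix}
\sqrt{-1} & 0 \\
0 & -\sqrt{-1}
\end{matrix}
\right),
\quad
j \longmapsto
\left(
\begin{matrix}
0 & 1 \\
-1 & 0
\end{matrix}
\right)
$$
define a homomorphism of $\mathbf{C}$-algebras
$\mathbf{H} \otimes_\mathbf{R} \mathbf{C} \to \text{Mat}(2 \times 2, \mathbf{C})$
which is an isomorphism because the source is simple
(Lemma \ref{lemma-base-change}) and both sides have dimension $4$
over $\mathbf{C}$.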
+
+\begin{theorem}
+\label{theorem-splitting}
+Let $A$ be a finite central simple $k$-algebra.
+Let $k'/k$ be a finite field extension.
+The following are equivalent
+\begin{enumerate}
+\item $k'$ splits $A$, and
+\item there exists a finite central simple algebra $B$ similar to $A$
+such that $k' \subset B$ and $[B : k] = [k' : k]^2$.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Assume (2). It suffices to show that $B \otimes_k k'$ is a matrix
+algebra. We know that $B \otimes_k B^{op} \cong \text{End}_k(B)$.
+Since $k'$ is the centralizer of $k'$ in $B^{op}$ by
+Lemma \ref{lemma-self-centralizing-subfield}
+we see that $B \otimes_k k'$ is the centralizer of $k \otimes_k k'$
+in $B \otimes_k B^{op} = \text{End}_k(B)$. Of course this centralizer
+is just $\text{End}_{k'}(B)$ where we view $B$ as a $k'$ vector space
+via the embedding $k' \to B$. Thus the result.
+
+\medskip\noindent
+Assume (1). This means that we have an isomorphism
+$A \otimes_k k' \cong \text{End}_{k'}(V)$ for some $k'$-vector space $V$.
+Let $B$ be the commutant of $A$ in $\text{End}_k(V)$. Note that
+$k'$ sits in $B$. By
+Lemma \ref{lemma-when-tensor-is-equal}
+the classes of $A$ and $B$ add up to zero in $\text{Br}(k)$.
+From the dimension formula in
+Theorem \ref{theorem-centralizer}
+we see that
+$$
+[B : k] [A : k] =
+\dim_k(V)^2 =
+[k' : k]^2 \dim_{k'}(V)^2 =
+[k' : k]^2 [A : k].
+$$
+Hence $[B : k] = [k' : k]^2$. Thus we have proved the result for the
+opposite to the Brauer class of $A$. However, $k'$ splits the Brauer
+class of $A$ if and only if it splits
+the Brauer class of the opposite algebra, so we win anyway.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-maximal-subfield-splits}
+A maximal subfield of a finite central skew field $K$ over $k$ is
+a splitting field for $K$.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-maximal-subfield} with
+Theorem \ref{theorem-splitting}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-splitting-field-degree}
+Consider a finite central skew field $K$ over $k$. Let $d^2 = [K : k]$.
+For any finite splitting field $k'$ for $K$ the degree $[k' : k]$ is
+divisible by $d$.
+\end{lemma}
+
+\begin{proof}
+By Theorem \ref{theorem-splitting} there exists a finite central
+simple algebra $B$ in the Brauer class of $K$ such that
+$[B : k] = [k' : k]^2$. By
+Lemma \ref{lemma-similar}
+we see that $B = \text{Mat}(n \times n, K)$ for some $n$.
+Then $[k' : k]^2 = n^2d^2$ whence the result.
+\end{proof}
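+\noindent
+For example, the Hamilton quaternions $\mathbf{H}$ over $\mathbf{R}$
+have $d = 2$, so every finite splitting field of $\mathbf{H}$ has even
+degree over $\mathbf{R}$. Indeed, $\mathbf{C}$ is a splitting field of
+the smallest possible degree, as
+$\mathbf{H} \otimes_\mathbf{R} \mathbf{C} \cong \text{Mat}(2 \times 2, \mathbf{C})$.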
+
+\begin{proposition}
+\label{proposition-separable-splitting-field}
+Consider a finite central skew field $K$ over $k$.
+There exists a maximal subfield $k \subset k' \subset K$ which
+is separable over $k$.
+In particular, every Brauer class has a finite separable
+splitting field.
+\end{proposition}
+
+\begin{proof}
+Since every Brauer class is represented by a finite central skew
+field over $k$, we see that the second statement follows from the
+first by
+Lemma \ref{lemma-maximal-subfield-splits}.
+
+\medskip\noindent
+To prove the first statement, suppose that we are given a separable
+subfield $k' \subset K$. Then the centralizer $K'$ of $k'$ in $K$
+has center $k'$, and the problem reduces to finding a maximal
+subfield of $K'$ separable over $k'$. Thus it suffices to prove, if
+$k \not = K$, that we can find an element $x \in K$, $x \not \in k$
+which is separable over $k$. This statement is clear in characteristic
+zero. Hence we may assume that $k$ has characteristic $p > 0$. If the
+ground field $k$ is finite, then the result is clear as well (because
+extensions of finite fields are always separable). Thus we may assume
+that $k$ is an infinite field of positive characteristic.
+
+\medskip\noindent
+To get a contradiction assume no element of $K$ is separable over $k$.
+By the discussion in
+Fields, Section \ref{fields-section-algebraic}
+this means the minimal polynomial of any $x \in K$ is of the form
+$T^q - a$ where $q$ is a power of $p$ and $a \in k$. Since it is
+clear that every element of $K$ has a minimal polynomial of degree
+$\leq \dim_k(K)$ we conclude that there exists a fixed $p$-power
+$q$ such that $x^q \in k$ for all $x \in K$.
+
+\medskip\noindent
+Consider the map
+$$
+(-)^q : K \longrightarrow K
+$$
+and write it out in terms of a $k$-basis $\{a_1, \ldots, a_n\}$ of $K$
+with $a_1 = 1$. So
+$$
+(\sum x_i a_i)^q = \sum f_i(x_1, \ldots, x_n)a_i.
+$$
+Since multiplication on $K$ is $k$-bilinear we see that each $f_i$
+is a polynomial in $x_1, \ldots, x_n$ (details omitted).
+The choice of $q$ above and the fact that $k$ is infinite shows that
+$f_i$ is identically zero for $i \geq 2$. Hence we see that it remains
+zero on extending $k$ to its algebraic closure $\overline{k}$. But the
+algebra $K \otimes_k \overline{k}$ is a matrix algebra, which implies
+there are some elements whose $q$th power is not central (e.g., $e_{11}$).
+This is the desired contradiction.
+\end{proof}
+
+\noindent
+The results above allow us to characterize finite central simple algebras
+as follows.
+
+\begin{lemma}
+\label{lemma-finite-central-simple-algebra}
+Let $k$ be a field. For a $k$-algebra $A$ the following are equivalent
+\begin{enumerate}
+\item $A$ is a finite central simple $k$-algebra,
+\item $A$ is a finite dimensional $k$-vector space, $k$ is the center of $A$,
+and $A$ has no nontrivial two-sided ideal,
+\item there exists $d \geq 1$ such that
+$A \otimes_k \bar k \cong \text{Mat}(d \times d, \bar k)$,
+\item there exists $d \geq 1$ such that
+$A \otimes_k k^{sep} \cong \text{Mat}(d \times d, k^{sep})$,
+\item there exist $d \geq 1$ and a finite Galois extension $k'/k$
+such that
+$A \otimes_k k' \cong \text{Mat}(d \times d, k')$,
+\item there exist $n \geq 1$ and a finite central skew field $K$
+over $k$ such that $A \cong \text{Mat}(n \times n, K)$.
+\end{enumerate}
+The integer $d$ is called the {\it degree} of $A$.
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) is a consequence of the definitions, see
+Section \ref{section-algebras}.
+Assume (1). By
+Proposition \ref{proposition-separable-splitting-field}
+there exists a separable splitting field $k \subset k'$ for $A$.
+Of course, then a Galois closure of $k'/k$ is a splitting field also.
+Thus we see that (1) implies (5). It is clear that (5) $\Rightarrow$ (4)
+$\Rightarrow$ (3). Assume (3). Then $A \otimes_k \overline{k}$
+is a finite central simple $\overline{k}$-algebra for example by
+Lemma \ref{lemma-matrix-algebras}.
+This trivially implies that $A$ is a finite central simple $k$-algebra.
+Finally, the equivalence of (1) and (6) is Wedderburn's theorem, see
+Theorem \ref{theorem-wedderburn}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/categories.tex b/books/stacks/categories.tex
new file mode 100644
index 0000000000000000000000000000000000000000..59d06b3603b251938d9374e571e18df30c0399cd
--- /dev/null
+++ b/books/stacks/categories.tex
@@ -0,0 +1,9340 @@
+\input{preamble}
+
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Categories}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+Categories were first introduced in \cite{GenEqui}.
+The category of categories (which is a proper class)
+is a $2$-category. Similarly, the category of stacks
+forms a $2$-category. If you already know
+about categories, but not about $2$-categories you
+should read
+Section \ref{section-formal-cat-cat}
+as an introduction to the formal definitions later on.
+
+\section{Definitions}
+\label{section-definition-categories}
+
+\noindent
+We recall the definitions, partly to fix notation.
+
+\begin{definition}
+\label{definition-category}
+A {\it category} $\mathcal{C}$ consists of the following data:
+\begin{enumerate}
+\item A set of objects $\Ob(\mathcal{C})$.
+\item For each pair $x, y \in \Ob(\mathcal{C})$ a set of morphisms
+$\Mor_\mathcal{C}(x, y)$.
+\item For each triple $x, y, z\in \Ob(\mathcal{C})$ a composition
+map $ \Mor_\mathcal{C}(y, z) \times \Mor_\mathcal{C}(x, y)
+\to \Mor_\mathcal{C}(x, z) $, denoted $(\phi, \psi) \mapsto
+\phi \circ \psi$.
+\end{enumerate}
+These data are to satisfy the following rules:
+\begin{enumerate}
+\item For every element $x\in \Ob(\mathcal{C})$ there exists a
+morphism $\text{id}_x\in \Mor_\mathcal{C}(x, x)$ such that
+$\text{id}_x \circ \phi = \phi$ and $\psi \circ \text{id}_x = \psi $ whenever
+these compositions make sense.
+\item Composition is associative, i.e., $(\phi \circ \psi) \circ \chi =
+\phi \circ ( \psi \circ \chi)$ whenever these compositions make sense.
+\end{enumerate}
+\end{definition}
+
+\noindent
+It is customary to require all the morphism sets
+$\Mor_\mathcal{C}(x, y)$ to be disjoint.
+In this way a morphism $\phi : x \to y$ has a unique {\it source} $x$
+and a unique {\it target} $y$. This is not strictly necessary,
+although care has to be taken in formulating condition (2) above
+if it is not the case. It is convenient and we will often assume
+this is the case. In this case we say that $\phi$ and $\psi$ are
+{\it composable} if the source of $\phi$ is equal to the
+target of $\psi$, in which case $\phi \circ \psi$ is defined.
+An equivalent definition would be to define a category
+as a quintuple $(\text{Ob}, \text{Arrows}, s, t, \circ)$
+consisting of a set of objects, a set of morphisms (arrows),
+source, target and composition subject to a long list of axioms.
+We will occasionally use this point of view.
+
+\begin{remark}
+\label{remark-big-categories}
+Big categories. In some texts a category is allowed to have a proper
+class of objects. We will allow this as well in these notes but only
+in the following list of cases (to be updated as we go along).
+In particular, when we say: ``Let $\mathcal{C}$ be a category''
+then it is understood that $\Ob(\mathcal{C})$ is a set.
+\begin{enumerate}
+\item The category $\textit{Sets}$ of sets.
+\item The category $\textit{Ab}$ of abelian groups.
+\item The category $\textit{Groups}$ of groups.
+\item Given a group $G$ the category $G\textit{-Sets}$ of
+sets with a left $G$-action.
+\item Given a ring $R$ the category $\text{Mod}_R$ of $R$-modules.
+\item Given a field $k$ the category of vector spaces over $k$.
+\item The category of rings.
+\item The category of divided power rings, see
+Divided Power Algebra, Section \ref{dpa-section-divided-power-rings}.
+\item The category of schemes.
+\item The category $\textit{Top}$ of topological spaces.
+\item Given a topological space $X$ the category
+$\textit{PSh}(X)$ of presheaves of sets over $X$.
+\item Given a topological space $X$ the category
+$\Sh(X)$ of sheaves of sets over $X$.
+\item Given a topological space $X$ the category
+$\textit{PAb}(X)$ of presheaves of abelian groups over $X$.
+\item Given a topological space $X$ the category
+$\textit{Ab}(X)$ of sheaves of abelian groups over $X$.
+\item Given a small category $\mathcal{C}$ the category of functors
+from $\mathcal{C}$ to $\textit{Sets}$.
+\item Given a category $\mathcal{C}$ the category of presheaves of sets
+over $\mathcal{C}$.
+\item Given a site $\mathcal{C}$ the category of sheaves
+of sets over $\mathcal{C}$.
+\end{enumerate}
+One of the reasons to enumerate these here is to try and avoid
+working with something like the ``collection'' of ``big'' categories,
+which would be like working with the collection of all classes,
+which I think is definitely a meta-mathematical object.
+\end{remark}
+
+\begin{remark}
+\label{remark-unique-identity}
+It follows directly from the definition that any two identity morphisms
+of an object $x$ of $\mathcal{C}$ are the same. Thus we may and will
+speak of {\it the} identity morphism $\text{id}_x$ of $x$.
+\end{remark}
+
+\begin{definition}
+\label{definition-isomorphism}
+A morphism $\phi : x \to y$ is an {\it isomorphism} of the category
+$\mathcal{C}$ if there exists a morphism $\psi : y \to x$
+such that $\phi \circ \psi = \text{id}_y$ and
+$\psi \circ \phi = \text{id}_x$.
+\end{definition}
+
+\noindent
+An isomorphism $\phi$ is also sometimes called an {\it invertible}
+morphism, and the morphism $\psi$ of the definition is called the
+{\it inverse} and denoted $\phi^{-1}$. It is unique if it exists. Note that
+given an object $x$ of a category $\mathcal{A}$ the set of invertible
+elements $\text{Aut}_\mathcal{A}(x)$
+of $\Mor_\mathcal{A}(x, x)$ forms a group under composition.
+This group is called the {\it automorphism} group of $x$ in $\mathcal{A}$.
+
+\begin{definition}
+\label{definition-groupoid}
+A {\it groupoid} is a category where every morphism is an isomorphism.
+\end{definition}
+
+\begin{example}
+\label{example-group-groupoid}
+A group $G$ gives rise to a groupoid with a single object $x$
+and morphisms $\Mor(x, x) = G$, with the composition rule
+given by the group law in $G$. Every groupoid with a single
+object is of this form.
+\end{example}
+
+\begin{example}
+\label{example-set-groupoid}
+A set $C$ gives rise to a groupoid $\mathcal{C}$ defined as follows:
+As objects we take $\Ob(\mathcal{C}) := C$ and for morphisms
+we take $\Mor(x, y)$ empty if $x\neq y$ and equal to
+$\{\text{id}_x\}$ if $x = y$.
+\end{example}
+
+\begin{definition}
+\label{definition-functor}
+A {\it functor} $F : \mathcal{A} \to \mathcal{B}$
+between two categories $\mathcal{A}, \mathcal{B}$ is given by the
+following data:
+\begin{enumerate}
+\item A map $F : \Ob(\mathcal{A}) \to \Ob(\mathcal{B})$.
+\item For every $x, y \in \Ob(\mathcal{A})$ a map
+$F : \Mor_\mathcal{A}(x, y) \to \Mor_\mathcal{B}(F(x), F(y))$,
+denoted $\phi \mapsto F(\phi)$.
+\end{enumerate}
+These data should be compatible with composition and identity morphisms
+in the following manner: $F(\phi \circ \psi) =
+F(\phi) \circ F(\psi)$ for a composable pair $(\phi, \psi)$ of
+morphisms of $\mathcal{A}$ and $F(\text{id}_x) = \text{id}_{F(x)}$.
+\end{definition}
+
+\noindent
+Note that every category $\mathcal{A}$ has an
+{\it identity} functor $\text{id}_\mathcal{A}$.
+In addition, given a functor $G : \mathcal{B} \to \mathcal{C}$
+and a functor $F : \mathcal{A} \to \mathcal{B}$ there is
+a {\it composition} functor $G \circ F : \mathcal{A} \to \mathcal{C}$
+defined in an obvious manner.
+
+\begin{definition}
+\label{definition-faithful}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a functor.
+\begin{enumerate}
+\item We say $F$ is {\it faithful} if
+for any objects $x, y \in \Ob(\mathcal{A})$ the map
+$$
+F : \Mor_\mathcal{A}(x, y) \to \Mor_\mathcal{B}(F(x), F(y))
+$$
+is injective.
+\item If these maps are all bijective then $F$ is called
+{\it fully faithful}.
+\item
+The functor $F$ is called {\it essentially surjective} if for any
+object $y \in \Ob(\mathcal{B})$ there exists an object
+$x \in \Ob(\mathcal{A})$ such that $F(x)$ is isomorphic to $y$ in
+$\mathcal{B}$.
+\end{enumerate}
+\end{definition}
+
+\begin{definition}
+\label{definition-subcategory}
+A {\it subcategory} of a category $\mathcal{B}$ is
+a category $\mathcal{A}$ whose objects and arrows
+form subsets of the objects and arrows
+of $\mathcal{B}$ and such that source, target
+and composition in $\mathcal{A}$ agree with those
+of $\mathcal{B}$. We say $\mathcal{A}$ is a
+{\it full subcategory} of $\mathcal{B}$ if $\Mor_\mathcal{A}(x, y)
+= \Mor_\mathcal{B}(x, y)$ for all $x, y \in \Ob(\mathcal{A})$.
+We say $\mathcal{A}$ is a {\it strictly full} subcategory of $\mathcal{B}$
+if it is a full subcategory and given $x \in \Ob(\mathcal{A})$ any
+object of $\mathcal{B}$ which is isomorphic to $x$ is also in $\mathcal{A}$.
+\end{definition}
+
+\noindent
+If $\mathcal{A} \subset \mathcal{B}$ is a subcategory then the
+identity map is a functor from $\mathcal{A}$ to $\mathcal{B}$.
+Furthermore a subcategory $\mathcal{A} \subset \mathcal{B}$
+is full if and only if the inclusion functor is fully faithful.
+Note that given a category $\mathcal{B}$ the set of full subcategories
+of $\mathcal{B}$ is the same as the set of subsets of
+$\Ob(\mathcal{B})$.
+
+\begin{remark}
+\label{remark-functor-into-sets}
+Suppose that $\mathcal{A}$ is a category.
+A functor $F$ from $\mathcal{A}$ to $\textit{Sets}$
+is a mathematical object (i.e., it is a set not a class or a formula
+of set theory, see
+Sets, Section \ref{sets-section-sets-everything})
+even though the category of sets is ``big''.
+Namely, the range of $F$ on objects will be
+a set $F(\Ob(\mathcal{A}))$ and then we
+may think of $F$ as a functor between
+$\mathcal{A}$ and the full subcategory
+of the category of sets whose
+objects are elements of $F(\Ob(\mathcal{A}))$.
+\end{remark}
+
+\begin{example}
+\label{example-group-homomorphism-functor}
+A homomorphism $p : G\to H$ of groups gives rise to a functor
+between the associated groupoids in Example \ref{example-group-groupoid}. It is
+faithful (resp.\ fully faithful) if and only if $p$ is injective (resp.\ an
+isomorphism).
+\end{example}
+
+\begin{example}
+\label{example-category-over-X}
+Given a category $\mathcal{C}$ and an object $X\in \Ob(\mathcal{C})$
+we define the {\it category of objects over $X$},
+denoted $\mathcal{C}/X$ as follows.
+The objects of $\mathcal{C}/X$ are morphisms $Y\to X$ for
+some $Y\in \Ob(\mathcal{C})$. Morphisms between objects
+$Y\to X$ and $Y'\to X$ are morphisms $Y\to Y'$ in $\mathcal{C}$ that
+make the obvious diagram commute. Note that there is a functor
+$p_X : \mathcal{C}/X\to \mathcal{C}$ which simply forgets the
+morphism. Moreover given a morphism $f : X'\to X$ in
+$\mathcal{C}$ there is an induced functor
+$F : \mathcal{C}/X' \to \mathcal{C}/X$ obtained by composition with $f$,
+and $p_X\circ F = p_{X'}$.
+\end{example}
+
+\begin{example}
+\label{example-category-under-X}
+Given a category $\mathcal{C}$ and an object $X\in \Ob(\mathcal{C})$
+we define the {\it category of objects under $X$},
+denoted $X/\mathcal{C}$ as follows.
+The objects of $X/\mathcal{C}$ are morphisms $X\to Y$ for
+some $Y\in \Ob(\mathcal{C})$. Morphisms between objects
+$X\to Y$ and $X\to Y'$ are morphisms $Y\to Y'$ in $\mathcal{C}$ that
+make the obvious diagram commute. Note that there is a functor
+$p_X : X/\mathcal{C}\to \mathcal{C}$ which simply forgets the
+morphism. Moreover given a morphism $f : X'\to X$ in
+$\mathcal{C}$ there is an induced functor
+$F : X/\mathcal{C} \to X'/\mathcal{C}$
+obtained by composition with $f$,
+and $p_{X'}\circ F = p_X$.
+\end{example}
+
+
+
+
+\begin{definition}
+\label{definition-transformation-functors}
+Let $F, G : \mathcal{A} \to \mathcal{B}$ be functors.
+A {\it natural transformation}, or a {\it morphism of functors}
+$t : F \to G$, is a collection $\{t_x\}_{x\in \Ob(\mathcal{A})}$
+such that
+\begin{enumerate}
+\item $t_x : F(x) \to G(x)$ is a morphism in the category $\mathcal{B}$, and
+\item for every morphism $\phi : x \to y$ of $\mathcal{A}$ the following
+diagram is commutative
+$$
+\xymatrix{
+F(x) \ar[r]^{t_x} \ar[d]_{F(\phi)} & G(x) \ar[d]^{G(\phi)} \\
+F(y) \ar[r]^{t_y} & G(y) }
+$$
+\end{enumerate}
+\end{definition}
+
+\noindent
+Sometimes we use the diagram
+$$
+\xymatrix{
+\mathcal{A}
+\rtwocell^F_G{t}
+&
+\mathcal{B}
+}
+$$
+to indicate that $t$ is a morphism from $F$ to $G$.
+
+\medskip\noindent
+Note that every functor $F$ comes with the {\it identity} transformation
+$\text{id}_F : F \to F$. In addition, given a morphism of
+functors $t : F \to G$ and a morphism of functors $s : E \to F$
+then the {\it composition} $t \circ s$ is defined by the rule
+$$
+(t \circ s)_x = t_x \circ s_x : E(x) \to G(x)
+$$
+for $x \in \Ob(\mathcal{A})$.
+It is easy to verify that this is indeed a morphism of functors
+from $E$ to $G$.
+In this way, given categories
+$\mathcal{A}$ and $\mathcal{B}$ we obtain a new category,
+namely the category of functors between $\mathcal{A}$ and
+$\mathcal{B}$.
+
+\begin{remark}
+\label{remark-functors-sets-sets}
+This is one instance where the same thing does not hold if
+$\mathcal{A}$ is a ``big'' category. For example consider
+functors $\textit{Sets} \to \textit{Sets}$. As we have currently
+defined it such a functor is a class and not a set. In other
+words, it is given by a formula in set theory (with some variables
+equal to specified sets)! It is not a good idea to try to consider
+all possible formulae of set theory as part of the definition of
+a mathematical object. The same problem presents itself when
+considering sheaves on the category of schemes for example.
+We will come back to this point later.
+\end{remark}
+
+\begin{definition}
+\label{definition-equivalence-categories}
+An {\it equivalence of categories}
+$F : \mathcal{A} \to \mathcal{B}$ is a functor such that there
+exists a functor $G : \mathcal{B} \to \mathcal{A}$ such that
+the compositions $F \circ G$ and $G \circ F$ are isomorphic to the
+identity functors $\text{id}_\mathcal{B}$,
+respectively $\text{id}_\mathcal{A}$.
+In this case we say that $G$ is a {\it quasi-inverse} to $F$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-construct-quasi-inverse}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a fully faithful functor.
+Suppose that for every $X \in \Ob(\mathcal{B})$ we are given an
+object $j(X)$ of $\mathcal{A}$ and an isomorphism $i_X : X \to F(j(X))$.
+Then there is a unique functor $j : \mathcal{B} \to \mathcal{A}$
+such that $j$ extends the rule on objects, and the isomorphisms
+$i_X$ define an isomorphism of functors
+$\text{id}_\mathcal{B} \to F \circ j$. Moreover, $j$ and $F$
+are quasi-inverse equivalences of categories.
+\end{lemma}
+
+\begin{proof}
+This lemma proves itself: as $F$ is fully faithful, the only possible
+definition of $j$ on a morphism $f : X \to Y$ is
+$j(f) = F^{-1}(i_Y \circ f \circ i_X^{-1})$, and one checks that this
+rule is compatible with composition and identities.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equivalence-categories}
+A functor is an equivalence of categories if and only if it is both fully
+faithful and essentially surjective.
+\end{lemma}
+
+\begin{proof}
+Let $F : \mathcal{A} \to \mathcal{B}$ be essentially surjective and fully
+faithful. As by convention all categories are small and as $F$ is essentially
+surjective we can, using the axiom of choice, choose for every
+$X \in \Ob(\mathcal{B})$ an object $j(X)$ of $\mathcal{A}$ and an
+isomorphism $i_X : X \to F(j(X))$. Then we apply
+Lemma \ref{lemma-construct-quasi-inverse}
+using that $F$ is fully faithful.
+\end{proof}
+
+\begin{definition}
+\label{definition-product-category}
+Let $\mathcal{A}$, $\mathcal{B}$ be categories.
+We define the {\it product category}
+$\mathcal{A} \times \mathcal{B}$ to be the category with
+objects
+$\Ob(\mathcal{A} \times \mathcal{B}) =
+\Ob(\mathcal{A}) \times \Ob(\mathcal{B})$
+and
+$$
+\Mor_{\mathcal{A} \times \mathcal{B}}((x, y), (x', y'))
+:=
+\Mor_\mathcal{A}(x, x')\times
+\Mor_\mathcal{B}(y, y').
+$$
+Composition is defined componentwise.
+\end{definition}
+
+
+\section{Opposite Categories and the Yoneda Lemma}
+\label{section-opposite}
+
+\begin{definition}
+\label{definition-opposite}
+Given a category $\mathcal{C}$ the {\it opposite category}
+$\mathcal{C}^{opp}$ is the category with the same objects
+as $\mathcal{C}$ but all morphisms reversed.
+\end{definition}
+
+\noindent
+In other words
+$\Mor_{\mathcal{C}^{opp}}(x, y) = \Mor_\mathcal{C}(y, x)$.
+Composition in $\mathcal{C}^{opp}$ is the same as in $\mathcal{C}$
+except backwards: if $\phi : y \to z$ and $\psi : x \to y$
+are morphisms in $\mathcal{C}^{opp}$, in other words arrows
+$z \to y$ and $y \to x$ in $\mathcal{C}$,
+then $\phi \circ^{opp} \psi$ is the morphism $x \to z$
+of $\mathcal{C}^{opp}$ which corresponds to the composition
+$z \to y \to x$ in $\mathcal{C}$.
+
+\begin{definition}
+\label{definition-contravariant}
+Let $\mathcal{C}$, $\mathcal{S}$ be categories.
+A {\it contravariant} functor $F$
+from $\mathcal{C}$ to $\mathcal{S}$
+is a functor $\mathcal{C}^{opp}\to \mathcal{S}$.
+\end{definition}
+
+\noindent
+Concretely, a contravariant functor $F$ is given
+by a map $F : \Ob(\mathcal{C}) \to
+\Ob(\mathcal{S})$ and for every morphism
+$\psi : x \to y$ in $\mathcal{C}$ a morphism
+$F(\psi) : F(y) \to F(x)$. These should satisfy the property
+that, given another morphism
+$\phi : y \to z$, we have $F(\phi \circ \psi)
+= F(\psi) \circ F(\phi)$ as morphisms $F(z) \to F(x)$.
+(Note the reverse of order.)
+
+\begin{definition}
+\label{definition-presheaf}
+Let $\mathcal{C}$ be a category.
+\begin{enumerate}
+\item A {\it presheaf of sets on $\mathcal{C}$}
+or simply a {\it presheaf} is a contravariant functor
+$F$ from $\mathcal{C}$ to $\textit{Sets}$.
+\item The category of presheaves is denoted $\textit{PSh}(\mathcal{C})$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Of course the category of presheaves is a proper class.
+
+\begin{example}
+\label{example-hom-functor}
+Functor of points.
+For any $U\in \Ob(\mathcal{C})$ there is a contravariant
+functor
+$$
+\begin{matrix}
+h_U & : & \mathcal{C}
+&
+\longrightarrow
+&
+\textit{Sets} \\
+& &
+X
+&
+\longmapsto
+&
+\Mor_\mathcal{C}(X, U)
+\end{matrix}
+$$
+which takes an object $X$ to the set
+$\Mor_\mathcal{C}(X, U)$. In other words $h_U$ is a presheaf.
+Given a morphism $f : X\to Y$ the corresponding map
+$h_U(f) : \Mor_\mathcal{C}(Y, U)\to \Mor_\mathcal{C}(X, U)$
+takes $\phi$ to $\phi\circ f$. We will always denote
+this presheaf $h_U : \mathcal{C}^{opp} \to \textit{Sets}$.
+It is called the {\it representable presheaf} associated to $U$.
+If $\mathcal{C}$ is the category of schemes this functor is
+sometimes referred to as the
+\emph{functor of points} of $U$.
+\end{example}
+
+\noindent
+Note that given a morphism $\phi : U \to V$ in $\mathcal{C}$ we get a
+corresponding natural transformation of functors $h(\phi) : h_U \to h_V$
+defined by composing with the morphism $U \to V$. This turns
+composition of morphisms in $\mathcal{C}$ into composition of
+transformations of functors. In other words we get a functor
+$$
+h :
+\mathcal{C}
+\longrightarrow
+\textit{PSh}(\mathcal{C})
+$$
+Note that the target is a ``big'' category, see
+Remark \ref{remark-big-categories}. On the other hand,
+$h$ is an actual mathematical object (i.e.\ a set), compare Remark
+\ref{remark-functor-into-sets}.
+
+\begin{lemma}[Yoneda lemma]
+\label{lemma-yoneda}
+\begin{reference}
+Appeared in some form in \cite{Yoneda-homology}. Used by Grothendieck in a
+generalized form in \cite{Gr-II}.
+\end{reference}
+Let $U, V \in \Ob(\mathcal{C})$.
+Given any morphism of functors $s : h_U \to h_V$
+there is a unique morphism $\phi : U \to V$
+such that $h(\phi) = s$. In other words the
+functor $h$ is fully faithful. More generally,
+given any contravariant functor $F$ and any object
+$U$ of $\mathcal{C}$ we have a natural bijection
+$$
+\Mor_{\textit{PSh}(\mathcal{C})}(h_U, F) \longrightarrow F(U),
+\quad
+s \longmapsto s_U(\text{id}_U).
+$$
+\end{lemma}
+
+\begin{proof}
+For the first statement, just take
+$\phi = s_U(\text{id}_U) \in \Mor_\mathcal{C}(U, V)$.
+For the second statement, given $\xi \in F(U)$ define
+$s$ by $s_V : h_U(V) \to F(V)$ by sending the element $f : V \to U$
+of $h_U(V) = \Mor_\mathcal{C}(V, U)$ to $F(f)(\xi)$.
+\end{proof}
+
+\begin{definition}
+\label{definition-representable-functor}
+A contravariant functor $F : \mathcal{C}\to \textit{Sets}$ is said
+to be {\it representable} if it is isomorphic to the functor of
+points $h_U$ for some object $U$ of $\mathcal{C}$.
+\end{definition}
+
+\noindent
+Let $\mathcal{C}$ be a category and let
+$F : \mathcal{C}^{opp} \to \textit{Sets}$ be a representable functor.
+Choose an object $U$ of $\mathcal{C}$ and an isomorphism $s : h_U \to F$.
+The Yoneda lemma guarantees that the pair $(U, s)$
+is unique up to unique isomorphism. The object
+$U$ is called an object {\it representing} $F$.
+By the Yoneda lemma the transformation $s$ corresponds to a unique
+element $\xi \in F(U)$. This element is called the {\it universal object}.
+It has the property that for $V \in \Ob(\mathcal{C})$ the map
+$$
+\Mor_\mathcal{C}(V, U) \longrightarrow F(V),\quad
+(f : V \to U) \longmapsto F(f)(\xi)
+$$
+is a bijection. Thus $\xi$ is universal in the sense that every element
+of $F(V)$ is equal to the image of $\xi$ via $F(f)$ for a unique morphism
+$f : V \to U$ in $\mathcal{C}$.
+
+
+
+
+
+
+\section{Products of pairs}
+\label{section-products-pairs}
+
+\begin{definition}
+\label{definition-products}
+Let $x, y\in \Ob(\mathcal{C})$.
+A {\it product} of $x$ and $y$ is
+an object $x \times y \in \Ob(\mathcal{C})$
+together with morphisms
+$p\in \Mor_{\mathcal C}(x \times y, x)$ and
+$q\in\Mor_{\mathcal C}(x \times y, y)$ such
+that the following universal property holds: for
+any $w\in \Ob(\mathcal{C})$ and morphisms
+$\alpha \in \Mor_{\mathcal C}(w, x)$ and
+$\beta \in \Mor_\mathcal{C}(w, y)$
+there is a unique
+$\gamma\in \Mor_{\mathcal C}(w, x \times y)$ making
+the diagram
+$$
+\xymatrix{
+w \ar[rrrd]^\beta \ar@{-->}[rrd]_\gamma \ar[rrdd]_\alpha & & \\
+& & x \times y \ar[d]_p \ar[r]_q & y \\
+& & x &
+}
+$$
+commute.
+\end{definition}
+
+\noindent
+If a product exists it is unique up to unique
+isomorphism. This follows from the Yoneda lemma as
+the definition requires $x \times y$ to be an object
+of $\mathcal{C}$ such that
+$$
+h_{x \times y}(w) = h_x(w) \times h_y(w)
+$$
+functorially in $w$. In other words the product $x \times y$
+is an object representing the functor
+$w \mapsto h_x(w) \times h_y(w)$.
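+\noindent
+For example, in the category $\textit{Sets}$ the usual cartesian product
+$x \times y$ together with the two projection maps is a product in the
+sense above: given $\alpha : w \to x$ and $\beta : w \to y$ the unique
+morphism $\gamma : w \to x \times y$ is $c \mapsto (\alpha(c), \beta(c))$.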
+
+\begin{definition}
+\label{definition-has-products-of-pairs}
+We say the category $\mathcal{C}$ {\it has products of pairs
+of objects} if a product $x \times y$
+exists for any $x, y \in \Ob(\mathcal{C})$.
+\end{definition}
+
+\noindent
+We use this terminology to distinguish this notion from the notion
+of ``having products'' or ``having finite products'' which usually means
+something else (in particular it always implies there exists a
+final object).
+
+
+
+
+
+\section{Coproducts of pairs}
+\label{section-coproducts-pairs}
+
+\begin{definition}
+\label{definition-coproducts}
+Let $x, y \in \Ob(\mathcal{C})$.
+A {\it coproduct}, or {\it amalgamated sum} of $x$ and $y$ is
+an object $x \amalg y \in \Ob(\mathcal{C})$
+together with morphisms
+$i \in \Mor_{\mathcal C}(x, x \amalg y)$ and
+$j \in \Mor_{\mathcal C}(y, x \amalg y)$ such
+that the following universal property holds: for
+any $w \in \Ob(\mathcal{C})$ and morphisms
+$\alpha \in \Mor_{\mathcal C}(x, w)$ and
+$\beta \in \Mor_\mathcal{C}(y, w)$
+there is a unique
+$\gamma \in \Mor_{\mathcal C}(x \amalg y, w)$ making
+the diagram
+$$
+\xymatrix{
+& y \ar[d]^j \ar[rrdd]^\beta \\
+x \ar[r]^i \ar[rrrd]_\alpha & x \amalg y \ar@{-->}[rrd]^\gamma \\
+& & & w
+}
+$$
+commute.
+\end{definition}
+
+\noindent
+If a coproduct exists it is unique up to unique
+isomorphism. This follows from the Yoneda lemma (applied to the
+opposite category) as
+the definition requires $x \amalg y$ to be an object
+of $\mathcal{C}$ such that
+$$
+\Mor_\mathcal{C}(x \amalg y, w) =
+\Mor_\mathcal{C}(x, w) \times \Mor_\mathcal{C}(y, w)
+$$
+functorially in $w$.
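+\noindent
+For example, in the category $\textit{Sets}$ the disjoint union
+$x \amalg y$ together with the two inclusion maps is a coproduct,
+and in the category $\textit{Ab}$ of abelian groups the direct sum
+$x \oplus y$ together with the two inclusions is a coproduct.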
+
+\begin{definition}
+\label{definition-has-coproducts-of-pairs}
+We say the category $\mathcal{C}$ {\it has coproducts of pairs
+of objects} if a coproduct $x \amalg y$
+exists for any $x, y \in \Ob(\mathcal{C})$.
+\end{definition}
+
+\noindent
+We use this terminology to distinguish this notion from the notion
+of ``having coproducts'' or ``having finite coproducts'' which usually means
+something else (in particular it always implies there exists an
+initial object in $\mathcal{C}$).
+
+
+
+
+
+\section{Fibre products}
+\label{section-fibre-products}
+
+\begin{definition}
+\label{definition-fibre-products}
+Let $x, y, z\in \Ob(\mathcal{C})$,
+$f\in \Mor_\mathcal{C}(x, y)$
+and $g\in \Mor_{\mathcal C}(z, y)$.
+A {\it fibre product} of $f$ and $g$ is
+an object $x \times_y z\in \Ob(\mathcal{C})$
+together with morphisms
+$p \in \Mor_{\mathcal C}(x \times_y z, x)$ and
+$q \in \Mor_{\mathcal C}(x \times_y z, z)$ making the diagram
+$$
+\xymatrix{
+x \times_y z \ar[r]_q \ar[d]_p & z \ar[d]^g \\
+x \ar[r]^f & y
+}
+$$
+commute, and such that the following universal property holds: for
+any $w\in \Ob(\mathcal{C})$ and morphisms
+$\alpha \in \Mor_{\mathcal C}(w, x)$ and
+$\beta \in \Mor_\mathcal{C}(w, z)$ with
+$f \circ \alpha = g \circ \beta$
+there is a unique
+$\gamma \in \Mor_{\mathcal C}(w, x \times_y z)$ making
+the diagram
+$$
+\xymatrix{
+w \ar[rrrd]^\beta \ar@{-->}[rrd]_\gamma \ar[rrdd]_\alpha & & \\
+& & x \times_y z \ar[d]^p \ar[r]_q & z \ar[d]^g \\
+& & x \ar[r]^f & y
+}
+$$
+commute.
+\end{definition}
+
+\noindent
+If a fibre product exists it is unique up to unique
+isomorphism. This follows from the Yoneda lemma as
+the definition requires $x \times_y z$ to be an object
+of $\mathcal{C}$ such that
+$$
+h_{x \times_y z}(w) = h_x(w) \times_{h_y(w)} h_z(w)
+$$
+functorially in $w$. In other words the fibre product $x \times_y z$
+is an object representing the functor
+$w \mapsto h_x(w) \times_{h_y(w)} h_z(w)$.
+
+\begin{definition}
+\label{definition-cartesian}
+We say a commutative diagram
+$$
+\xymatrix{
+w \ar[r] \ar[d] &
+z \ar[d] \\
+x \ar[r] &
+y
+}
+$$
+in a category is {\it cartesian} if $w$ and the morphisms $w \to x$ and
+$w \to z$ form a fibre product of the morphisms $x \to y$ and $z \to y$.
+\end{definition}
+
+\begin{definition}
+\label{definition-has-fibre-products}
+We say the category $\mathcal{C}$ {\it has fibre products} if
+the fibre product exists for any $f\in \Mor_{\mathcal C}(x, y)$
+and $g\in \Mor_{\mathcal C}(z, y)$.
+\end{definition}
+
+\begin{definition}
+\label{definition-representable-morphism}
+A morphism $f : x \to y$ of a category $\mathcal{C}$ is said to be
+{\it representable} if for every morphism $z \to y$
+in $\mathcal{C}$ the fibre product $x \times_y z$ exists.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-composition-representable}
+Let $\mathcal{C}$ be a category.
+Let $f : x \to y$, and $g : y \to z$ be representable.
+Then $g \circ f : x \to z$ is representable.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-representable}
+Let $\mathcal{C}$ be a category.
+Let $f : x \to y$ be representable.
+Let $y' \to y$ be a morphism of $\mathcal{C}$.
+Then the morphism $x' := x \times_y y' \to y'$ is representable also.
+\end{lemma}
+
+\begin{proof}
+Let $z \to y'$ be a morphism. The fibre product
+$x' \times_{y'} z$ is supposed to represent the
+functor
+\begin{eqnarray*}
+w & \mapsto & h_{x'}(w)\times_{h_{y'}(w)} h_z(w) \\
+& = & (h_x(w) \times_{h_y(w)} h_{y'}(w)) \times_{h_{y'}(w)} h_z(w) \\
+& = & h_x(w) \times_{h_y(w)} h_z(w)
+\end{eqnarray*}
+which is representable by assumption.
+\end{proof}
+
+\section{Examples of fibre products}
+\label{section-example-fibre-products}
+
+\noindent
+In this section we list examples of fibre products and
+we describe them.
+
+\medskip\noindent
+As a really trivial first example we observe
+that the category of sets has fibre products and hence every
+morphism is representable. Namely, if $f : X \to Y$
+and $g : Z \to Y$ are maps of sets then we define
+$X \times_Y Z$ as the subset of $X \times Z$ consisting
+of pairs $(x, z)$ such that $f(x) = g(z)$. The morphisms
+$p : X \times_Y Z \to X$ and $q : X \times_Y Z \to Z$ are
+the projection maps $(x, z) \mapsto x$, and $(x, z) \mapsto z$.
Finally, if $\alpha : W \to X$ and $\beta : W \to Z$
are morphisms such that $f \circ \alpha = g \circ \beta$,
then the map $W \to X \times Z$, $w\mapsto (\alpha(w), \beta(w))$
lands in $X \times_Y Z$ and is clearly the unique morphism
compatible with the projections, as desired.
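The set-theoretic construction just described can be sketched in a few lines of Python; the finite sets $X$, $Z$ and the maps $f$, $g$ below are hypothetical choices made purely for illustration.

```python
# A minimal sketch of the fibre product of sets: pairs (x, z) with
# f(x) = g(z), together with the two projection maps p and q.

def fibre_product(X, Z, f, g):
    """Return X x_Y Z as a set of pairs, plus the projections."""
    P = {(x, z) for x in X for z in Z if f(x) == g(z)}
    p = lambda pair: pair[0]  # projection to X
    q = lambda pair: pair[1]  # projection to Z
    return P, p, q

X = {0, 1, 2, 3}
Z = {"a", "b", "c"}
f = lambda x: x % 2                         # f : X -> Y with Y = {0, 1}
g = lambda z: {"a": 0, "b": 1, "c": 0}[z]   # g : Z -> Y

P, p, q = fibre_product(X, Z, f, g)
# every element of P makes the square commute: f(p(w)) = g(q(w))
assert all(f(p(w)) == g(q(w)) for w in P)
```
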
+
+\medskip\noindent
+In many categories whose objects are sets endowed with certain types of
+algebraic structures the fibre product of the underlying sets also
+provides the fibre product in the category. For example, suppose
+that $X$, $Y$ and $Z$ above are groups and that $f$, $g$ are
+homomorphisms of groups. Then the set-theoretic fibre product
+$X \times_Y Z$ inherits the structure of a group, simply by
+defining the product of two pairs by the formula
$(x, z) \cdot (x', z') = (xx', zz')$. Here we list those categories
for which similar reasoning works.
+\begin{enumerate}
+\item The category $\textit{Groups}$ of groups.
+\item The category $G\textit{-Sets}$ of sets
+endowed with a left $G$-action for some fixed group $G$.
+\item The category of rings.
+\item The category of $R$-modules given a ring $R$.
+\end{enumerate}
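For groups this inheritance of structure is easy to check computationally. Below is a sketch using hypothetical cyclic groups $\mathbf{Z}/6$ and $\mathbf{Z}/4$, each mapping to $\mathbf{Z}/2$ by reduction; the componentwise operation is closed on the set-theoretic fibre product precisely because $f$ and $g$ are homomorphisms.

```python
# The set-theoretic fibre product of two group homomorphisms inherits a
# group structure via the componentwise operation.

Z6 = list(range(6))   # Z/6 under addition mod 6
Z4 = list(range(4))   # Z/4 under addition mod 4
f = lambda x: x % 2   # homomorphism Z/6 -> Z/2
g = lambda z: z % 2   # homomorphism Z/4 -> Z/2

P = [(x, z) for x in Z6 for z in Z4 if f(x) == g(z)]

def mul(a, b):
    # componentwise operation: (x, z) * (x', z') = (x + x', z + z')
    return ((a[0] + b[0]) % 6, (a[1] + b[1]) % 4)

# closure: since f, g are homomorphisms, f(x + x') = g(z + z') on P
assert all(mul(a, b) in P for a in P for b in P)
assert (0, 0) in P  # the identity element
```
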
+
+
+
+
+\section{Fibre products and representability}
+\label{section-representable-map-presheaves}
+
+\noindent
+In this section we work out fibre products in the
+category of contravariant functors from a category
+to the category of sets. This will later be superseded
+during the discussion of sites, presheaves, sheaves. Of some
+interest is the notion of a ``representable morphism'' between
+such functors.
+
+\begin{lemma}
+\label{lemma-fibre-product-presheaves}
+Let $\mathcal{C}$ be a category.
+Let $F, G, H : \mathcal{C}^{opp} \to \textit{Sets}$
+be functors. Let $a : F \to G$ and $b : H \to G$ be
+transformations of functors. Then the fibre product
+$F \times_{a, G, b} H$ in the category
+$\textit{PSh}(\mathcal{C})$
+exists and is given by the formula
+$$
+(F \times_{a, G, b} H)(X) =
+F(X) \times_{a_X, G(X), b_X} H(X)
+$$
+for any object $X$ of $\mathcal{C}$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
+As a special case suppose we have a morphism
+$a : F \to G$, an object $U \in \Ob(\mathcal{C})$
+and an element $\xi \in G(U)$. According to the Yoneda
+Lemma \ref{lemma-yoneda} this gives a transformation
+$\xi : h_U \to G$. The fibre product in this case
+is described by the rule
+$$
+(h_U \times_{\xi, G, a} F)(X) =
+\{ (f, \xi') \mid f : X \to U, \ \xi' \in F(X), \ G(f)(\xi) = a_X(\xi')\}
+$$
+If $F$, $G$ are also representable, then this is the functor representing the
+fibre product, if it exists, see Section \ref{section-fibre-products}.
+The analogy with Definition \ref{definition-representable-morphism}
+prompts us to define a notion
+of representable transformations.
+
+\begin{definition}
+\label{definition-representable-map-presheaves}
+Let $\mathcal{C}$ be a category.
+Let $F, G : \mathcal{C}^{opp} \to \textit{Sets}$
+be functors. We say a morphism $a : F \to G$ is
+{\it representable}, or that {\it $F$ is relatively representable
+over $G$}, if for every $U \in \Ob(\mathcal{C})$
+and any $\xi \in G(U)$ the functor
+$h_U \times_G F$ is representable.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-representable-over-representable}
+Let $\mathcal{C}$ be a category.
+Let $a : F \to G$ be a morphism of contravariant functors
+from $\mathcal{C}$ to $\textit{Sets}$. If $a$ is representable,
+and $G$ is a representable functor, then $F$ is representable.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-diagonal}
+Let $\mathcal{C}$ be a category.
+Let $F : \mathcal{C}^{opp} \to \textit{Sets}$ be a functor.
+Assume $\mathcal{C}$ has products of pairs of objects and fibre products.
+The following are equivalent:
+\begin{enumerate}
+\item the diagonal $\Delta : F \to F \times F$ is representable,
+\item for every $U$ in $\mathcal{C}$,
+and any $\xi \in F(U)$ the map $\xi : h_U \to F$ is representable,
+\item for every pair $U, V$ in $\mathcal{C}$
+and any $\xi \in F(U)$, $\xi' \in F(V)$ the fibre product
+$h_U \times_{\xi, F, \xi'} h_V$ is representable.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will continue to use the Yoneda lemma to identify $F(U)$
+with transformations $h_U \to F$ of functors.
+
+\medskip\noindent
+Equivalence of (2) and (3). Let $U, \xi, V, \xi'$ be as in (3).
+Both (2) and (3) tell us exactly that $h_U \times_{\xi, F, \xi'} h_V$
+is representable; the only difference is that the statement
+(3) is symmetric in $U$ and $V$ whereas (2) is not.
+
+\medskip\noindent
+Assume condition (1). Let $U, \xi, V, \xi'$
+be as in (3). Note that $h_U \times h_V = h_{U \times V}$ is representable.
+Denote $\eta : h_{U \times V} \to F \times F$ the map
+corresponding to the product $\xi \times \xi' : h_U \times h_V \to F \times F$.
+Then the fibre product $F \times_{\Delta, F \times F, \eta} h_{U \times V}$
+is representable by assumption. This means there exist
+$W \in \Ob(\mathcal{C})$, morphisms
+$W \to U$, $W \to V$ and $h_W \to F$ such that
+$$
+\xymatrix{
+h_W \ar[d] \ar[r] & h_U \times h_V \ar[d]^{\xi \times \xi'} \\
+F \ar[r] & F \times F
+}
+$$
+is cartesian. Using the explicit description of fibre products
+in Lemma \ref{lemma-fibre-product-presheaves} the reader sees that this
+implies that $h_W = h_U \times_{\xi, F, \xi'} h_V$ as desired.
+
+\medskip\noindent
+Assume the equivalent conditions (2) and (3). Let $U$ be an object
+of $\mathcal{C}$ and let $(\xi, \xi') \in (F \times F)(U)$.
+By (3) the fibre product $h_U \times_{\xi, F, \xi'} h_U$ is
+representable. Choose an object $W$ and an isomorphism
+$h_W \to h_U \times_{\xi, F, \xi'} h_U$. The two projections
+$\text{pr}_i : h_U \times_{\xi, F, \xi'} h_U \to h_U$
+correspond to morphisms $p_i : W \to U$ by Yoneda. Consider
+$W' = W \times_{(p_1, p_2), U \times U} U$. It is formal
+to show that $W'$ represents $F \times_{\Delta, F \times F} h_U$
+because
+$$
+h_{W'} = h_W \times_{h_U \times h_U} h_U
+= (h_U \times_{\xi, F, \xi'} h_U) \times_{h_U \times h_U} h_U
+= F \times_{F \times F} h_U.
+$$
+Thus $\Delta$ is representable and this finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Pushouts}
+\label{section-pushouts}
+
+\noindent
+The dual notion to fibre products is that of pushouts.
+
+\begin{definition}
+\label{definition-pushouts}
+Let $x, y, z\in \Ob(\mathcal{C})$,
+$f\in \Mor_\mathcal{C}(y, x)$
+and $g\in \Mor_{\mathcal C}(y, z)$.
+A {\it pushout} of $f$ and $g$ is
+an object $x\amalg_y z\in \Ob(\mathcal{C})$
+together with morphisms
+$p\in \Mor_{\mathcal C}(x, x\amalg_y z)$ and
+$q\in\Mor_{\mathcal C}(z, x\amalg_y z)$ making the diagram
+$$
+\xymatrix{
+y \ar[r]_g \ar[d]_f & z \ar[d]^q \\
+x \ar[r]^p & x\amalg_y z
+}
+$$
+commute, and such that the following universal property holds:
+For any $w\in \Ob(\mathcal{C})$ and morphisms
+$\alpha \in \Mor_{\mathcal C}(x, w)$ and
+$\beta \in \Mor_\mathcal{C}(z, w)$ with
+$\alpha \circ f = \beta \circ g$ there is a unique
+$\gamma\in \Mor_{\mathcal C}(x\amalg_y z, w)$ making
+the diagram
+$$
+\xymatrix{
+y \ar[r]_g \ar[d]_f & z \ar[d]^q \ar[rrdd]^\beta & & \\
+x \ar[r]^p \ar[rrrd]^\alpha & x \amalg_y z \ar@{-->}[rrd]^\gamma & & \\
+& & & w
+}
+$$
+commute.
+\end{definition}
+
+\noindent
+It is possible and straightforward to prove the uniqueness of the triple
+$(x\amalg_y z, p, q)$ up to unique isomorphism (if it exists) by direct
+arguments. Another possibility is to think of the pushout as the
+fibre product in the opposite category, thereby getting this uniqueness for
+free from the discussion in Section \ref{section-fibre-products}.
+
+\begin{definition}
+\label{definition-cocartesian}
+We say a commutative diagram
+$$
+\xymatrix{
+y \ar[r] \ar[d] & z \ar[d] \\
+x \ar[r] & w
+}
+$$
+in a category is {\it cocartesian} if $w$ and the morphisms $x \to w$ and
+$z \to w$ form a pushout of the morphisms $y \to x$ and $y \to z$.
+\end{definition}
+
+
+\section{Equalizers}
+\label{section-equalizers}
+
+\begin{definition}
+\label{definition-equalizers}
+Suppose that $X$, $Y$ are objects of a category $\mathcal{C}$
+and that $a, b : X \to Y$ are morphisms. We say a morphism
+$e : Z \to X$ is an {\it equalizer} for the pair $(a, b)$ if
+$a \circ e = b \circ e$ and if $(Z, e)$ satisfies the following
+universal property: For every morphism $t : W \to X$
+in $\mathcal{C}$ such that $a \circ t = b \circ t$ there exists
+a unique morphism $s : W \to Z$ such that $t = e \circ s$.
+\end{definition}
+
+\noindent
+As in the case of the fibre products above, equalizers when
+they exist are unique up to unique isomorphism. There is a
+straightforward generalization of this definition to the
+case where we have more than $2$ morphisms.
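In the category of sets an equalizer always exists and is (up to isomorphism) simply a subset: the subset $\{x \in X \mid a(x) = b(x)\}$ with $e$ the inclusion. A minimal sketch, with hypothetical maps $a$, $b$ chosen for illustration:

```python
# The equalizer of a, b : X -> Y in Sets is the subset where a and b agree,
# with e the inclusion map.

X = range(10)
a = lambda x: x * x % 5   # a hypothetical map X -> Y = Z/5
b = lambda x: x % 5       # another hypothetical map X -> Y

Z = {x for x in X if a(x) == b(x)}
e = lambda z: z           # the inclusion Z -> X

# e equalizes the pair (a, b)
assert all(a(e(z)) == b(e(z)) for z in Z)
```
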
+
+\section{Coequalizers}
+\label{section-coequalizers}
+
+\begin{definition}
+\label{definition-coequalizers}
+Suppose that $X$, $Y$ are objects of a category $\mathcal{C}$
+and that $a, b : X \to Y$ are morphisms. We say a morphism
+$c : Y \to Z$ is a {\it coequalizer} for the pair $(a, b)$ if
+$c \circ a = c \circ b$ and if $(Z, c)$ satisfies the following
+universal property: For every morphism $t : Y \to W$
+in $\mathcal{C}$ such that $t \circ a = t \circ b$ there exists
+a unique morphism $s : Z \to W$ such that $t = s \circ c$.
+\end{definition}
+
+\noindent
+As in the case of the pushouts above, coequalizers when
+they exist are unique up to unique isomorphism, and this follows
+from the uniqueness of equalizers upon considering the opposite
+category. There is a straightforward generalization of this definition
+to the case where we have more than $2$ morphisms.
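Dually, in the category of sets the coequalizer of $a, b : X \to Y$ can be built as the quotient of $Y$ by the equivalence relation generated by $a(x) \sim b(x)$. The sketch below computes this quotient with a small union-find structure; the finite sets and maps are hypothetical.

```python
# The coequalizer of a, b : X -> Y in Sets: quotient Y by the equivalence
# relation generated by a(x) ~ b(x), computed via union-find.

def coequalizer(X, Y, a, b):
    parent = {y: y for y in Y}
    def find(y):
        while parent[y] != y:
            parent[y] = parent[parent[y]]  # path halving
            y = parent[y]
        return y
    for x in X:
        ra, rb = find(a(x)), find(b(x))
        parent[ra] = rb                    # merge the two classes
    c = lambda y: find(y)                  # the quotient map c : Y -> Z
    Z = {find(y) for y in Y}               # one representative per class
    return Z, c

X = {0, 1}
Y = {"p", "q", "r", "s"}
a = {0: "p", 1: "q"}.get
b = {0: "q", 1: "r"}.get

Z, c = coequalizer(X, Y, a, b)
assert all(c(a(x)) == c(b(x)) for x in X)  # c coequalizes a and b
```
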
+
+\section{Initial and final objects}
+\label{section-initial-final}
+
+\begin{definition}
+\label{definition-initial-final}
+Let $\mathcal{C}$ be a category.
+\begin{enumerate}
+\item An object $x$ of the category $\mathcal{C}$ is called
+an {\it initial} object if for every object $y$ of $\mathcal{C}$
+there is exactly one morphism $x \to y$.
+\item An object $x$ of the category $\mathcal{C}$ is called
+a {\it final} object if for every object $y$ of $\mathcal{C}$
+there is exactly one morphism $y \to x$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In the category of sets the empty set $\emptyset$ is an
+initial object, and in fact the only initial object.
+Also, any {\it singleton}, i.e., a set with one element,
+is a final object (so it is not unique).
+
+
+
+
+\section{Monomorphisms and Epimorphisms}
+\label{section-mono-epi}
+
+\begin{definition}
+\label{definition-mono-epi}
+Let $\mathcal{C}$ be a category and let $f : X \to Y$ be
+a morphism of $\mathcal{C}$.
+\begin{enumerate}
+\item We say that $f$ is a {\it monomorphism} if for every object
+$W$ and every pair of morphisms $a, b : W \to X$ such that
+$f \circ a = f \circ b$ we have $a = b$.
+\item We say that $f$ is an {\it epimorphism} if for every object
+$W$ and every pair of morphisms $a, b : Y \to W$ such that
+$a \circ f = b \circ f$ we have $a = b$.
+\end{enumerate}
+\end{definition}
+
+\begin{example}
+\label{example-mono-epi-sets}
+In the category of sets the monomorphisms correspond to injective
+maps and the epimorphisms correspond to surjective maps.
+\end{example}
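The failure of these conditions can be witnessed directly. Below is a sketch with a hypothetical non-injective map $f$, exhibiting a pair of distinct morphisms $a \ne b$ from a one-point set $W$ that violate the monomorphism condition.

```python
# A non-injective map of sets is not a monomorphism: two distinct maps
# a, b : W -> X compose equally with f.

f = {0: "x", 1: "x", 2: "y"}.get  # f : {0,1,2} -> {"x","y"}, not injective

W = {"*"}
a = lambda w: 0
b = lambda w: 1

# a != b, and yet f o a = f o b
assert a("*") != b("*")
assert all(f(a(w)) == f(b(w)) for w in W)
```
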
+
+\begin{lemma}
+\label{lemma-characterize-mono-epi}
+Let $\mathcal{C}$ be a category, and let $f : X \to Y$ be
+a morphism of $\mathcal{C}$. Then
+\begin{enumerate}
+\item $f$ is a monomorphism if and only if $X$ is the fibre
+product $X \times_Y X$, and
+\item $f$ is an epimorphism if and only if $Y$ is the pushout
+$Y \amalg_X Y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+
+
+
+
+\section{Limits and colimits}
+\label{section-limits}
+
+\noindent
+Let $\mathcal{C}$ be a category. A {\it diagram} in $\mathcal{C}$ is
+simply a functor $M : \mathcal{I} \to \mathcal{C}$. We say that
+$\mathcal{I}$ is the {\it index category} or that $M$ is an
+$\mathcal{I}$-diagram. We will use the notation $M_i$ to denote the
+image of the object
+$i$ of $\mathcal{I}$. Hence for $\phi : i \to i'$ a morphism
+in $\mathcal{I}$ we have $M(\phi) : M_i \to M_{i'}$.
+
+\begin{definition}
+\label{definition-limit}
+A {\it limit} of the $\mathcal{I}$-diagram $M$ in the category
+$\mathcal{C}$ is given by an object $\lim_\mathcal{I} M$ in $\mathcal{C}$
+together with morphisms $p_i : \lim_\mathcal{I} M \to M_i$ such that
+\begin{enumerate}
+\item for $\phi : i \to i'$ a morphism
+in $\mathcal{I}$ we have $p_{i'} = M(\phi) \circ p_i$, and
+\item for any object $W$ in $\mathcal{C}$ and any family of
+morphisms $q_i : W \to M_i$ (indexed by $i \in \Ob(\mathcal{I})$)
+such that for all $\phi : i \to i'$
+in $\mathcal{I}$ we have $q_{i'} = M(\phi) \circ q_i$ there
+exists a unique morphism $q : W \to \lim_\mathcal{I} M$ such that
+$q_i = p_i \circ q$ for every object $i$ of $\mathcal{I}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Limits $(\lim_\mathcal{I} M, (p_i)_{i\in \Ob(\mathcal{I})})$ are
+(if they exist)
+unique up to unique isomorphism by the uniqueness requirement
+in the definition. Products of pairs, fibre products, and equalizers are
+examples of limits. The limit over the empty diagram is a final object
+of $\mathcal{C}$.
+In the category of sets all limits exist.
+The dual notion is that of colimits.
+
+\begin{definition}
+\label{definition-colimit}
+A {\it colimit} of the $\mathcal{I}$-diagram $M$ in the category
+$\mathcal{C}$ is given by an object $\colim_\mathcal{I} M$ in $\mathcal{C}$
+together with morphisms $s_i : M_i \to \colim_\mathcal{I} M$ such that
+\begin{enumerate}
+\item for $\phi : i \to i'$ a morphism
+in $\mathcal{I}$ we have $s_i = s_{i'} \circ M(\phi)$, and
+\item for any object $W$ in $\mathcal{C}$ and any family of
+morphisms $t_i : M_i \to W$ (indexed by $i \in \Ob(\mathcal{I})$)
+such that for all $\phi : i \to i'$
+in $\mathcal{I}$ we have $t_i = t_{i'} \circ M(\phi)$ there
+exists a unique morphism $t : \colim_\mathcal{I} M \to W$ such that
+$t_i = t \circ s_i$ for every object $i$ of $\mathcal{I}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Colimits $(\colim_\mathcal{I} M, (s_i)_{i\in \Ob(\mathcal{I})})$ are
+(if they exist) unique up to unique isomorphism by the uniqueness requirement
+in the definition. Coproducts of pairs, pushouts, and coequalizers are
+examples of colimits. The colimit over an empty diagram is an initial object
+of $\mathcal{C}$. In the category of sets all colimits exist.
+
+\begin{remark}
+\label{remark-diagram-small}
+The index category of a (co)limit will never be allowed to have
+a proper class of objects. In this project it means that
+it cannot be one of the categories listed in
+Remark \ref{remark-big-categories}
+\end{remark}
+
+\begin{remark}
+\label{remark-limit-colim}
+We often write $\lim_i M_i$, $\colim_i M_i$,
+$\lim_{i\in \mathcal{I}} M_i$, or $\colim_{i\in \mathcal{I}} M_i$
+instead of the versions indexed by $\mathcal{I}$.
+Using this notation, and using the description of
+limits and colimits of sets in Section \ref{section-limit-sets}
+below, we can say the following.
+Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram.
+\begin{enumerate}
\item The object $\lim_i M_i$, if it exists, satisfies the following property
+$$
+\Mor_\mathcal{C}(W, \lim_i M_i)
+=
+\lim_i \Mor_\mathcal{C}(W, M_i)
+$$
+where the limit on the right takes place in the category of sets.
\item The object $\colim_i M_i$, if it exists,
satisfies the following property
+$$
+\Mor_\mathcal{C}(\colim_i M_i, W)
+=
+\lim_{i \in \mathcal{I}^{opp}} \Mor_\mathcal{C}(M_i, W)
+$$
+where on the right we have the limit over the opposite category
+with value in the category of sets.
+\end{enumerate}
+By the Yoneda lemma (and its dual) this formula completely determines the
+limit, respectively the colimit.
+\end{remark}
+
+\begin{remark}
+\label{remark-cones-and-cocones}
+Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram. In this setting a
+{\it cone} for $M$ is given by an object $W$ and a family of morphisms
+$q_i : W \to M_i$, $i \in \Ob(\mathcal{I})$ such that for all morphisms
+$\phi : i \to i'$ of $\mathcal{I}$ the diagram
+$$
+\xymatrix{
+& W \ar[dl]_{q_i} \ar[dr]^{q_{i'}} \\
+M_i \ar[rr]^{M(\phi)} & & M_{i'}
+}
+$$
+is commutative. The collection of cones forms a category with an obvious
+notion of morphisms. Clearly, the limit of $M$, if it exists, is a final
+object in the category of cones. Dually, a {\it cocone} for $M$ is given
+by an object $W$ and a family of morphisms $t_i : M_i \to W$ such that for
+all morphisms $\phi : i \to i'$ in $\mathcal{I}$ the diagram
+$$
+\xymatrix{
+M_i \ar[rr]^{M(\phi)} \ar[dr]_{t_i} & & M_{i'} \ar[dl]^{t_{i'}} \\
+& W
+}
+$$
+commutes. The collection of cocones forms a category with an obvious notion
+of morphisms. Similarly to the above the colimit of $M$ exists
+if and only if the category of cocones has an initial object.
+\end{remark}
+
+\noindent
+As an application of the notions of limits and colimits
+we define products and coproducts.
+
+\begin{definition}
+\label{definition-product}
+Suppose that $I$ is a set, and suppose given for every $i \in I$ an
+object $M_i$ of the category $\mathcal{C}$. A {\it product}
+$\prod_{i\in I} M_i$ is by definition $\lim_\mathcal{I} M$
+(if it exists)
+where $\mathcal{I}$ is the category having only identities as
+morphisms and having the elements of $I$ as objects.
+\end{definition}
+
+\noindent
+An important special case is where $I = \emptyset$ in which case the
+product is a final object of the category.
+The morphisms $p_i : \prod M_i \to M_i$ are called the
+{\it projection morphisms}.
+
+\begin{definition}
+\label{definition-coproduct}
+Suppose that $I$ is a set, and suppose given for every $i \in I$ an
+object $M_i$ of the category $\mathcal{C}$. A {\it coproduct}
+$\coprod_{i\in I} M_i$ is by definition $\colim_\mathcal{I} M$
+(if it exists) where $\mathcal{I}$ is the category having only
+identities as morphisms and having the elements of $I$ as objects.
+\end{definition}
+
+\noindent
+An important special case is where $I = \emptyset$ in which case the
+coproduct is an initial object of the category.
+Note that the coproduct comes equipped with morphisms
+$M_i \to \coprod M_i$. These are sometimes called the
+{\it coprojections}.
+
+\begin{lemma}
+\label{lemma-functorial-colimit}
+Suppose that $M : \mathcal{I} \to \mathcal{C}$,
+and $N : \mathcal{J} \to \mathcal{C}$ are diagrams
+whose colimits exist. Suppose
+$H : \mathcal{I} \to \mathcal{J}$ is
+a functor, and suppose $t : M \to N \circ H$
+is a transformation of functors.
+Then there is a unique morphism
+$$
+\theta :
+\colim_\mathcal{I} M
+\longrightarrow
+\colim_\mathcal{J} N
+$$
+such that all the diagrams
+$$
+\xymatrix{
+M_i \ar[d]_{t_i} \ar[r]
+&
+\colim_\mathcal{I} M \ar[d]^{\theta}
+\\
+N_{H(i)} \ar[r]
+&
+\colim_\mathcal{J} N
+}
+$$
+commute.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-functorial-limit}
+Suppose that $M : \mathcal{I} \to \mathcal{C}$,
+and $N : \mathcal{J} \to \mathcal{C}$ are diagrams
+whose limits exist. Suppose $H : \mathcal{I} \to \mathcal{J}$ is
+a functor, and suppose $t : N \circ H \to M$
+is a transformation of functors.
+Then there is a unique morphism
+$$
+\theta :
+\lim_\mathcal{J} N
+\longrightarrow
+\lim_\mathcal{I} M
+$$
+such that all the diagrams
+$$
+\xymatrix{
+\lim_\mathcal{J} N \ar[d]^{\theta} \ar[r]
+&
+N_{H(i)} \ar[d]_{t_i}
+\\
+\lim_\mathcal{I} M \ar[r]
+&
+M_i
+}
+$$
+commute.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+
+\begin{lemma}
+\label{lemma-colimits-commute}
+Let $\mathcal{I}$, $\mathcal{J}$ be index categories.
+Let $M : \mathcal{I} \times \mathcal{J} \to \mathcal{C}$ be a functor.
+We have
+$$
+\colim_i \colim_j M_{i, j}
+=
+\colim_{i, j} M_{i, j}
+=
+\colim_j \colim_i M_{i, j}
+$$
provided all the indicated colimits exist. A similar statement holds for limits.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limits-products-equalizers}
+Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram.
+Write $I = \Ob(\mathcal{I})$ and $A = \text{Arrows}(\mathcal{I})$.
+Denote $s, t : A \to I$ the source and target maps.
+Suppose that $\prod_{i \in I} M_i$ and $\prod_{a \in A} M_{t(a)}$
+exist. Suppose that the equalizer of
+$$
+\xymatrix{
+\prod_{i \in I} M_i
+\ar@<1ex>[r]^\phi \ar@<-1ex>[r]_\psi
+&
+\prod_{a \in A} M_{t(a)}
+}
+$$
+exists, where the morphisms are determined by their components
+as follows: $p_a \circ \psi = M(a) \circ p_{s(a)}$
+and $p_a \circ \phi = p_{t(a)}$. Then this equalizer is the
+limit of the diagram.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+
+\begin{lemma}
+\label{lemma-colimits-coproducts-coequalizers}
+\begin{slogan}
+If all coproducts and coequalizers exist, all colimits exist.
+\end{slogan}
+Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram.
+Write $I = \Ob(\mathcal{I})$ and $A = \text{Arrows}(\mathcal{I})$.
+Denote $s, t : A \to I$ the source and target maps.
+Suppose that $\coprod_{i \in I} M_i$ and $\coprod_{a \in A} M_{s(a)}$
+exist. Suppose that the coequalizer of
+$$
+\xymatrix{
+\coprod_{a \in A} M_{s(a)}
+\ar@<1ex>[r]^\phi \ar@<-1ex>[r]_\psi
+&
+\coprod_{i \in I} M_i
+}
+$$
+exists, where the morphisms are determined by their components
+as follows: The component $M_{s(a)}$ maps via $\psi$
+to the component $M_{t(a)}$ via the morphism $M(a)$.
+The component $M_{s(a)}$ maps via $\phi$ to the component
+$M_{s(a)}$ by the identity morphism. Then this coequalizer is the
+colimit of the diagram.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+
+
+
+
+
+
+\section{Limits and colimits in the category of sets}
+\label{section-limit-sets}
+
+\noindent
+Not only do limits and colimits exist in $\textit{Sets}$
+but they are also easy to describe. Namely, let $M : \mathcal{I}
+\to \textit{Sets}$, $i \mapsto M_i$ be a diagram of sets.
+Denote $I = \Ob(\mathcal{I})$.
+The limit is described as
+$$
+\lim_\mathcal{I} M
+=
+\{
+(m_i)_{i\in I} \in \prod\nolimits_{i\in I} M_i
+\mid
+\forall \phi : i \to i' \text{ in }\mathcal{I},
+M(\phi)(m_i) = m_{i'}
+\}.
+$$
+So we think of an element of the limit as a compatible system of elements
+of all the sets $M_i$.
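This description is easy to implement for a hypothetical finite diagram: enumerate the tuples in the product and keep those compatible with every arrow. The sets and the single arrow below are illustrative choices.

```python
# The limit of a diagram of finite sets: tuples (m_i) in the product
# satisfying M(phi)(m_i) = m_{i'} for every arrow phi : i -> i'.
from itertools import product

M = {"i": [0, 1, 2], "j": [0, 1]}        # the sets M_i
arrows = [("i", "j", lambda m: m % 2)]   # (source, target, M(phi))

candidates = [dict(zip(M, vals)) for vals in product(*M.values())]
limit = [m for m in candidates
         if all(Mphi(m[s]) == m[t] for (s, t, Mphi) in arrows)]
# each element of `limit` is a compatible system (m_i)
```
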
+
+\medskip\noindent
+On the other hand, the colimit is
+$$
+\colim_\mathcal{I} M
+=
+(\coprod\nolimits_{i\in I} M_i)/\sim
+$$
+where the equivalence relation $\sim$ is the equivalence relation
+generated by setting $m_i \sim m_{i'}$ if $m_i \in M_i$,
+$m_{i'} \in M_{i'}$ and $M(\phi)(m_i) = m_{i'}$ for some
+$\phi : i \to i'$. In other words, $m_i \in M_i$
and $m_{i'} \in M_{i'}$ are equivalent if there is a
chain of morphisms in $\mathcal{I}$
+$$
+\xymatrix{
+&
+i_1 \ar[ld] \ar[rd] & &
+i_3 \ar[ld] & &
+i_{2n-1} \ar[rd] & \\
+i = i_0 & &
+i_2 & &
+\ldots & &
+i_{2n} = i'
+}
+$$
+and elements $m_{i_j} \in M_{i_j}$ mapping to each other under
+the maps $M_{i_{2k-1}} \to M_{i_{2k-2}}$ and $M_{i_{2k-1}}
+\to M_{i_{2k}}$ induced from the maps in $\mathcal{I}$ above.
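For a hypothetical finite diagram the colimit can likewise be computed directly: start from the singleton blocks of the disjoint union (tagging each element by its index to keep the copies disjoint) and merge the blocks containing $m_i$ and $M(\phi)(m_i)$ for every arrow.

```python
# The colimit of a diagram of finite sets: the disjoint union of the M_i
# modulo the equivalence relation generated by m ~ M(phi)(m).
M = {"i": [0, 1, 2], "j": [0, 1]}
arrows = [("i", "j", lambda m: m % 2)]   # (source, target, M(phi))

# singleton blocks of the disjoint union, elements tagged by index
blocks = [{(idx, m)} for idx, Mi in M.items() for m in Mi]

def merge(u, v):
    """Merge the blocks containing elements u and v."""
    bu = next(b for b in blocks if u in b)
    bv = next(b for b in blocks if v in b)
    if bu is not bv:
        blocks.remove(bv)
        bu |= bv

for (s, t, Mphi) in arrows:
    for m in M[s]:
        merge((s, m), (t, Mphi(m)))

colimit = blocks  # the equivalence classes are the elements of the colimit
```
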
+
+\medskip\noindent
+This is not a very pleasant type of object to work with.
+But if the diagram is filtered then it is much easier to
+describe. We will explain this in Section \ref{section-directed-colimits}.
+
+
+
+\section{Connected limits}
+\label{section-connected-limits}
+
+\noindent
+A (co)limit is called connected if its index category is connected.
+
+\begin{definition}
+\label{definition-category-connected}
+We say that a category $\mathcal{I}$ is {\it connected}
+if the equivalence relation generated by
+$x \sim y \Leftrightarrow \Mor_\mathcal{I}(x, y) \not = \emptyset$
+has exactly one equivalence class.
+\end{definition}
+
+\noindent
+Here we follow the convention of
+Topology, Definition \ref{topology-definition-connected-components}
+that connected spaces are nonempty.
+The following in some vague sense characterizes connected limits.
+
+\begin{lemma}
+\label{lemma-connected-limit-over-X}
+Let $\mathcal{C}$ be a category.
+Let $X$ be an object of $\mathcal{C}$.
+Let $M : \mathcal{I} \to \mathcal{C}/X$ be a diagram
+in the category of objects over $X$.
+If the index category $\mathcal{I}$ is connected
+and the limit of $M$ exists in $\mathcal{C}/X$,
+then the limit of the composition
+$\mathcal{I} \to \mathcal{C}/X \to \mathcal{C}$
+exists and is the same.
+\end{lemma}
+
+\begin{proof}
+Let $L \to X$ be an object representing the limit in $\mathcal{C}/X$.
+Consider the functor
+$$
+W \longmapsto \lim_i \Mor_\mathcal{C}(W, M_i).
+$$
+Let $(\varphi_i)$ be an element of the set on the right.
+Since each $M_i$ comes equipped with a morphism $s_i : M_i \to X$ we
+get morphisms $f_i = s_i \circ \varphi_i : W \to X$. But as $\mathcal{I}$
+is connected we see that all $f_i$ are equal. Since $\mathcal{I}$
+is nonempty there is at least one $f_i$.
Hence this common value $W \to X$ endows $W$ with the structure of an
object of $\mathcal{C}/X$, and $(\varphi_i)$ defines an
element of $\lim_i \Mor_{\mathcal{C}/X}(W, M_i)$.
+Thus we obtain a unique morphism $\phi : W \to L$ such that
+$\varphi_i$ is the composition of $\phi$ with $L \to M_i$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-connected-colimit-under-X}
+Let $\mathcal{C}$ be a category.
+Let $X$ be an object of $\mathcal{C}$.
+Let $M : \mathcal{I} \to X/\mathcal{C}$ be a diagram
+in the category of objects under $X$.
+If the index category $\mathcal{I}$ is connected
+and the colimit of $M$ exists in $X/\mathcal{C}$,
+then the colimit of the composition
+$\mathcal{I} \to X/\mathcal{C} \to \mathcal{C}$
+exists and is the same.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This lemma is dual to Lemma \ref{lemma-connected-limit-over-X}.
+\end{proof}
+
+
+
+
+\section{Cofinal and initial categories}
+\label{section-cofinal}
+
+\noindent
+In the literature sometimes the word ``final'' is used instead of cofinal
+in the following definition.
+
+\begin{definition}
+\label{definition-cofinal}
+Let $H : \mathcal{I} \to \mathcal{J}$ be a functor between categories.
+We say {\it $\mathcal{I}$ is cofinal in $\mathcal{J}$} or that
+$H$ is {\it cofinal} if
+\begin{enumerate}
+\item for all $y \in \Ob(\mathcal{J})$ there exist an
+$x \in \Ob(\mathcal{I})$ and a morphism $y \to H(x)$, and
+\item given $y \in \Ob(\mathcal{J})$, $x, x' \in \Ob(\mathcal{I})$
+and morphisms $y \to H(x)$ and $y \to H(x')$ there exist a sequence
+of morphisms
+$$
+x = x_0 \leftarrow x_1 \rightarrow x_2 \leftarrow x_3 \rightarrow \ldots
+\rightarrow x_{2n} = x'
+$$
+in $\mathcal{I}$ and morphisms $y \to H(x_i)$ in $\mathcal{J}$
+such that the diagrams
+$$
+\xymatrix{
+& y \ar[ld] \ar[d] \ar[rd] \\
+H(x_{2k}) & H(x_{2k + 1}) \ar[l] \ar[r] & H(x_{2k + 2})
+}
+$$
+commute for $k = 0, \ldots, n - 1$.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-cofinal}
+Let $H : \mathcal{I} \to \mathcal{J}$ be a functor of categories. Assume
+$\mathcal{I}$ is cofinal in $\mathcal{J}$. Then for every diagram
+$M : \mathcal{J} \to \mathcal{C}$ we have a canonical isomorphism
+$$
+\colim_\mathcal{I} M \circ H
+=
+\colim_\mathcal{J} M
+$$
+if either side exists.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{definition}
+\label{definition-initial}
+Let $H : \mathcal{I} \to \mathcal{J}$ be a functor between categories.
+We say {\it $\mathcal{I}$ is initial in $\mathcal{J}$} or that
+$H$ is {\it initial} if
+\begin{enumerate}
+\item for all $y \in \Ob(\mathcal{J})$ there exist an
+$x \in \Ob(\mathcal{I})$ and a morphism $H(x) \to y$,
\item for any $y \in \Ob(\mathcal{J})$, $x, x' \in \Ob(\mathcal{I})$ and
+morphisms $H(x) \to y$, $H(x') \to y$ in $\mathcal{J}$
+there exist a sequence of morphisms
+$$
+x = x_0 \leftarrow x_1 \rightarrow x_2 \leftarrow x_3 \rightarrow \ldots
+\rightarrow x_{2n} = x'
+$$
+in $\mathcal{I}$ and morphisms $H(x_i) \to y$ in $\mathcal{J}$
+such that the diagrams
+$$
+\xymatrix{
+H(x_{2k}) \ar[rd] &
+H(x_{2k + 1}) \ar[l] \ar[r] \ar[d] &
+H(x_{2k + 2}) \ar[ld] \\
+& y
+}
+$$
+commute for $k = 0, \ldots, n - 1$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+This is just the dual notion to ``cofinal'' functors.
+
+\begin{lemma}
+\label{lemma-initial}
+Let $H : \mathcal{I} \to \mathcal{J}$ be a functor of categories.
+Assume $\mathcal{I}$ is initial in $\mathcal{J}$.
+Then for every diagram $M : \mathcal{J} \to \mathcal{C}$ we
+have a canonical isomorphism
+$$
+\lim_\mathcal{I} M \circ H = \lim_\mathcal{J} M
+$$
+if either side exists.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-constant-connected-fibers}
+Let $F : \mathcal{I} \to \mathcal{I}'$ be a functor.
+Assume
+\begin{enumerate}
+\item the fibre categories (see
+Definition \ref{definition-fibre-category})
+of $\mathcal{I}$ over $\mathcal{I}'$ are all connected, and
+\item for every morphism $\alpha' : x' \to y'$ in $\mathcal{I}'$ there
+exists a morphism $\alpha : x \to y$ in $\mathcal{I}$ such that
+$F(\alpha) = \alpha'$.
+\end{enumerate}
+Then for every diagram $M : \mathcal{I}' \to \mathcal{C}$
+the colimit $\colim_\mathcal{I} M \circ F$ exists if and only
+if $\colim_{\mathcal{I}'} M$ exists and if so these colimits
+agree.
+\end{lemma}
+
+\begin{proof}
+One can prove this by showing that $\mathcal{I}$ is cofinal in
+$\mathcal{I}'$ and applying Lemma \ref{lemma-cofinal}.
+But we can also prove it directly as follows.
+It suffices to show that for any object $T$ of $\mathcal{C}$ we have
+$$
+\lim_{\mathcal{I}^{opp}} \Mor_\mathcal{C}(M_{F(i)}, T)
+=
+\lim_{(\mathcal{I}')^{opp}} \Mor_\mathcal{C}(M_{i'}, T)
+$$
+If $(g_{i'})_{i' \in \Ob(\mathcal{I}')}$ is an element of
+the right hand side, then setting $f_i = g_{F(i)}$ we obtain an
+element $(f_i)_{i \in \Ob(\mathcal{I})}$ of the left hand side.
+Conversely, let $(f_i)_{i \in \Ob(\mathcal{I})}$ be an element of the
+left hand side. Note that on each (connected)
+fibre category $\mathcal{I}_{i'}$ the functor $M \circ F$
+is constant with value $M_{i'}$. Hence the morphisms
+$f_i$ for $i \in \Ob(\mathcal{I})$ with $F(i) = i'$
+are all the same and determine a well defined morphism
+$g_{i'} : M_{i'} \to T$. By assumption (2) the collection
+$(g_{i'})_{i' \in \Ob(\mathcal{I}')}$ defines an element
+of the right hand side.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-with-connected}
Let $\mathcal{I}$ and $\mathcal{J}$ be categories and denote
+$p : \mathcal{I} \times \mathcal{J} \to \mathcal{J}$ the projection.
+If $\mathcal{I}$ is connected, then for a diagram
+$M : \mathcal{J} \to \mathcal{C}$ the colimit $\colim_\mathcal{J} M$ exists
+if and only if $\colim_{\mathcal{I} \times \mathcal{J}} M \circ p$ exists and
+if so these colimits are equal.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-colimit-constant-connected-fibers}.
+\end{proof}
+
+
+
+
+
+
+\section{Finite limits and colimits}
+\label{section-finite-limits}
+
+\noindent
+A {\it finite} (co)limit is a (co)limit whose index category is finite,
+i.e., the index category has finitely many objects and finitely many
+morphisms. A (co)limit is called {\it nonempty} if the index category is
+nonempty. A (co)limit is called {\it connected} if the index category is
+connected, see
+Definition \ref{definition-category-connected}.
+It turns out that there are ``enough'' finite index categories.
+
+\begin{lemma}
+\label{lemma-finite-diagram-category}
+Let $\mathcal{I}$ be a category with
+\begin{enumerate}
+\item $\Ob(\mathcal{I})$ is finite, and
+\item there exist finitely many morphisms
+$f_1, \ldots, f_m \in \text{Arrows}(\mathcal{I})$ such
+that every morphism of $\mathcal{I}$ is a composition
+$f_{j_1} \circ f_{j_2} \circ \ldots \circ f_{j_k}$.
+\end{enumerate}
+Then there exists a functor $F : \mathcal{J} \to \mathcal{I}$
+such that
+\begin{enumerate}
+\item[(a)] $\mathcal{J}$ is a finite category, and
+\item[(b)] for any diagram $M : \mathcal{I} \to \mathcal{C}$ the
+(co)limit of $M$ over $\mathcal{I}$ exists if and only if
+the (co)limit of $M \circ F$ over $\mathcal{J}$ exists and in this case
+the (co)limits are canonically isomorphic.
+\end{enumerate}
+Moreover, $\mathcal{J}$ is connected (resp.\ nonempty) if and only if
+$\mathcal{I}$ is so.
+\end{lemma}
+
+\begin{proof}
+Say $\Ob(\mathcal{I}) = \{x_1, \ldots, x_n\}$.
+Denote $s, t : \{1, \ldots, m\} \to \{1, \ldots, n\}$ the functions
+such that $f_j : x_{s(j)} \to x_{t(j)}$.
We set $\Ob(\mathcal{J}) = \{y_1, \ldots, y_n, z_1, \ldots, z_n\}$.
+Besides the identity morphisms we introduce morphisms
+$g_j : y_{s(j)} \to z_{t(j)}$, $j = 1, \ldots, m$ and morphisms
+$h_i : y_i \to z_i$, $i = 1, \ldots, n$. Since all of the nonidentity
+morphisms in $\mathcal{J}$ go from a $y$ to a $z$ there are no
+compositions to define and no associativities to check.
+Set $F(y_i) = F(z_i) = x_i$. Set $F(g_j) = f_j$ and $F(h_i) = \text{id}_{x_i}$.
+It is clear that $F$ is a functor.
+It is clear that $\mathcal{J}$ is finite.
+It is clear that $\mathcal{J}$ is connected, resp.\ nonempty
+if and only if $\mathcal{I}$ is so.
+
+\medskip\noindent
+Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram.
+Consider an object $W$ of $\mathcal{C}$ and morphisms
+$q_i : W \to M(x_i)$ as in
+Definition \ref{definition-limit}.
+Then by taking $q_i : W \to M(F(y_i)) = M(F(z_i)) = M(x_i)$ we obtain
+a family of maps as in
+Definition \ref{definition-limit}
+for the diagram $M \circ F$.
+Conversely, suppose we are given maps
+$qy_i : W \to M(F(y_i))$ and $qz_i : W \to M(F(z_i))$
+as in
+Definition \ref{definition-limit}
+for the diagram $M \circ F$. Since
+$$
+M(F(h_i)) = \text{id} : M(F(y_i)) = M(x_i) \longrightarrow M(x_i) = M(F(z_i))
+$$
+we conclude that $qy_i = qz_i$ for all $i$. Set $q_i$ equal to this common
+value. The compatibility of
+$q_{s(j)} = qy_{s(j)}$ and $q_{t(j)} = qz_{t(j)}$ with the morphism
+$M(f_j)$ guarantees that the family $q_i$ is compatible with all morphisms
+in $\mathcal{I}$ as by assumption every such morphism is a composition
+of the morphisms $f_j$. Thus we have found a canonical bijection
+$$
+\lim_{B \in \Ob(\mathcal{J})} \Mor_\mathcal{C}(W, M(F(B)))
+=
+\lim_{A \in \Ob(\mathcal{I})} \Mor_\mathcal{C}(W, M(A))
+$$
+which implies the statement on limits in the lemma. The statement on colimits
+is proved in the same way (proof omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibre-products-equalizers-exist}
+Let $\mathcal{C}$ be a category.
+The following are equivalent:
+\begin{enumerate}
+\item Connected finite limits exist in $\mathcal{C}$.
+\item Equalizers and fibre products exist in $\mathcal{C}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since equalizers and fibre products are finite connected
+limits we see that (1) implies (2). For the converse, let $\mathcal{I}$
+be a finite connected index category. Let
+$F : \mathcal{J} \to \mathcal{I}$
+be the functor of index categories constructed in the proof of
+Lemma \ref{lemma-finite-diagram-category}.
+Then we see that we may replace $\mathcal{I}$ by $\mathcal{J}$.
+The result is that we may assume that
+$\Ob(\mathcal{I}) = \{x_1, \ldots, x_n\} \amalg \{y_1, \ldots, y_m\}$
+with $n, m \geq 1$ such that all nonidentity morphisms in $\mathcal{I}$
+are morphisms $f : x_i \to y_j$ for some $i$ and $j$.
+
+\medskip\noindent
+Suppose that $n > 1$. Since $\mathcal{I}$ is connected there
+exist indices $i_1, i_2$ and $j_0$ and morphisms $a : x_{i_1} \to y_{j_0}$
+and $b : x_{i_2} \to y_{j_0}$. Consider the category
+$$
+\mathcal{I}' =
\{x\} \amalg \{x_1, \ldots, \hat x_{i_1}, \ldots, \hat x_{i_2}, \ldots, x_n\}
+\amalg \{y_1, \ldots, y_m\}
+$$
+with
+$$
+\Mor_{\mathcal{I}'}(x, y_j) = \Mor_\mathcal{I}(x_{i_1}, y_j)
+\amalg \Mor_\mathcal{I}(x_{i_2}, y_j)
+$$
+and all other morphism sets the same as in $\mathcal{I}$. For any functor
+$M : \mathcal{I} \to \mathcal{C}$ we can construct a functor
+$M' : \mathcal{I}' \to \mathcal{C}$ by setting
+$$
+M'(x) = M(x_{i_1}) \times_{M(a), M(y_{j_0}), M(b)} M(x_{i_2})
+$$
and for a morphism $f' : x \to y_j$ corresponding to, say,
$f : x_{i_1} \to y_j$ we set $M'(f') = M(f) \circ \text{pr}_1$.
+Then the functor $M$ has a limit if and only if the functor $M'$ has
+a limit (proof omitted). Hence by induction we reduce to the case $n = 1$.
+
+\medskip\noindent
+If $n = 1$, then the limit of any $M : \mathcal{I} \to \mathcal{C}$ is
+the successive equalizer of pairs of maps $x_1 \to y_j$ hence
+exists by assumption.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-almost-finite-limits-exist}
+Let $\mathcal{C}$ be a category.
+The following are equivalent:
+\begin{enumerate}
+\item Nonempty finite limits exist in $\mathcal{C}$.
+\item Products of pairs and equalizers exist in $\mathcal{C}$.
+\item Products of pairs and fibre products exist in $\mathcal{C}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since products of pairs, fibre products, and equalizers are limits with
+nonempty index categories we see that (1) implies both (2) and (3).
+Assume (2). Then finite nonempty products and equalizers exist. Hence by
+Lemma \ref{lemma-limits-products-equalizers}
+we see that finite nonempty limits exist, i.e., (1) holds. Assume (3).
+If $a, b : A \to B$ are morphisms of $\mathcal{C}$, then the
+equalizer of $a, b$ is
+$$
+(A \times_{a, B, b} A)\times_{(\text{pr}_1, \text{pr}_2), A \times A, \Delta} A.
+$$
+Thus (3) implies (2), and the lemma is proved.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-limits-exist}
+Let $\mathcal{C}$ be a category.
+The following are equivalent:
+\begin{enumerate}
+\item Finite limits exist in $\mathcal{C}$.
+\item Finite products and equalizers exist.
+\item The category has a final object and fibre products exist.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since finite products, fibre products, equalizers, and final objects
+are limits over finite index categories we see that (1) implies both (2)
+and (3). By
+Lemma \ref{lemma-limits-products-equalizers}
+above we see that (2) implies (1). Assume (3).
+Note that the product $A \times B$ is the fibre product over the
+final object. If $a, b : A \to B$ are morphisms of $\mathcal{C}$, then the
+equalizer of $a, b$ is
+$$
+(A \times_{a, B, b} A)\times_{(\text{pr}_1, \text{pr}_2), A \times A, \Delta} A.
+$$
+Thus (3) implies (2) and the lemma is proved.
+\end{proof}
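In the category of sets the formula for the equalizer can be checked directly. The following Python sketch (the function names and the example maps are our own illustrative choices, not from the text) builds the equalizer of two maps out of fibre products exactly as in the proof:

```python
# Sketch: in Sets, the equalizer of a, b : A -> B is
# (A x_{a,B,b} A) x_{(pr1,pr2), A x A, Delta} A, as in the proof above.

def fibre_product(A, B, C, f, g):
    """Fibre product A x_B C = {(x, z) in A x C : f(x) == g(z)}."""
    return {(x, z) for x in A for z in C if f(x) == g(z)}

def equalizer_via_fibre_products(A, B, a, b):
    # P = A x_{a,B,b} A with projections pr1, pr2.
    P = fibre_product(A, B, A, a, b)
    # Q = P x_{(pr1,pr2), A x A, Delta} A; an element is ((x1, x2), x)
    # with (x1, x2) == (x, x), i.e. x1 == x2 == x and a(x) == b(x).
    pr12 = lambda p: p                      # (pr1, pr2) : P -> A x A
    diag = lambda x: (x, x)                 # Delta : A -> A x A
    AxA = {(x, y) for x in A for y in A}
    Q = fibre_product(P, AxA, A, pr12, diag)
    # Projecting Q to its last coordinate recovers the usual equalizer.
    return {x for (_, x) in Q}

A = {0, 1, 2, 3}
B = {0, 1}
a = lambda x: x % 2
b = lambda x: 0
print(equalizer_via_fibre_products(A, B, a, b))  # {0, 2}
```

The computed set agrees with the direct description $\{x \in A : a(x) = b(x)\}$.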
+
+\begin{lemma}
+\label{lemma-push-outs-coequalizers-exist}
+Let $\mathcal{C}$ be a category.
+The following are equivalent:
+\begin{enumerate}
+\item Connected finite colimits exist in $\mathcal{C}$.
+\item Coequalizers and pushouts exist in $\mathcal{C}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is dual to
+Lemma \ref{lemma-fibre-products-equalizers-exist}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-almost-finite-colimits-exist}
+Let $\mathcal{C}$ be a category.
+The following are equivalent:
+\begin{enumerate}
+\item Nonempty finite colimits exist in $\mathcal{C}$.
+\item Coproducts of pairs and coequalizers exist in $\mathcal{C}$.
+\item Coproducts of pairs and pushouts exist in $\mathcal{C}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is the dual of
+Lemma \ref{lemma-almost-finite-limits-exist}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimits-exist}
+Let $\mathcal{C}$ be a category.
+The following are equivalent:
+\begin{enumerate}
+\item Finite colimits exist in $\mathcal{C}$.
+\item Finite coproducts and coequalizers exist in $\mathcal{C}$.
+\item The category has an initial object and pushouts exist.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is dual to Lemma \ref{lemma-finite-limits-exist}.
+\end{proof}
+
+
+
+
+
+\section{Filtered colimits}
+\label{section-directed-colimits}
+
+\noindent
+Colimits are easier to compute or describe when they
+are over a filtered diagram. Here is the definition.
+
+\begin{definition}
+\label{definition-directed}
+We say that a diagram $M : \mathcal{I} \to \mathcal{C}$ is {\it directed},
+or {\it filtered} if the following conditions hold:
+\begin{enumerate}
+\item the category $\mathcal{I}$ has at least one object,
+\item for every pair of objects $x, y$ of $\mathcal{I}$
+there exist an object $z$ and morphisms $x \to z$,
+$y \to z$, and
+\item for every pair of objects $x, y$ of $\mathcal{I}$
+and every pair of morphisms $a, b : x \to y$ of $\mathcal{I}$
+there exists a morphism $c : y \to z$ of $\mathcal{I}$
+such that $M(c \circ a) = M(c \circ b)$ as morphisms in $\mathcal{C}$.
+\end{enumerate}
+We say that an index category $\mathcal{I}$ is {\it directed}, or
+{\it filtered} if $\text{id} : \mathcal{I} \to \mathcal{I}$ is filtered
+(in other words you erase the $M$ in part (3) above).
+\end{definition}
+
+\noindent
+We observe that any diagram with filtered index category is filtered,
+and this is how filtered colimits usually come about. In fact, if
+$M : \mathcal{I} \to \mathcal{C}$ is a filtered diagram, then we
+can factor $M$ as $\mathcal{I} \to \mathcal{I}' \to \mathcal{C}$
+where $\mathcal{I}'$ is a filtered index category\footnote{Namely, let
+$\mathcal{I}'$ have the same objects as $\mathcal{I}$ but
+where $\Mor_{\mathcal{I}'}(x, y)$ is the quotient of $\Mor_\mathcal{I}(x, y)$
+by the equivalence relation which identifies
+$a, b : x \to y$ if $M(a) = M(b)$.}
+such that $\colim_\mathcal{I} M$ exists if and only if
+$\colim_{\mathcal{I}'} M'$ exists in which case the colimits are
+canonically isomorphic.
+
+\medskip\noindent
+Suppose that $M : \mathcal{I} \to \textit{Sets}$ is a filtered diagram. In
+this case we may describe the equivalence relation in the formula
+$$
+\colim_\mathcal{I} M
+=
+(\coprod\nolimits_{i\in I} M_i)/\sim
+$$
+simply as follows
+$$
+m_i \sim m_{i'}
+\Leftrightarrow
+\exists i'', \phi : i \to i'', \phi': i' \to i'',
+M(\phi)(m_i) = M(\phi')(m_{i'}).
+$$
+In other words, two elements are equal in the colimit if and only if
+they ``eventually become equal''.
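For a finite filtered diagram this description can be computed directly. The following Python sketch (the index poset, the sets $M_i$, and the transition maps are ad hoc illustrations of ours) partitions the disjoint union by the ``eventually equal'' relation:

```python
# Sketch: colimit of a filtered diagram of sets as the disjoint union
# of the M_i modulo "eventually equal". Index poset: 0 <= 2, 1 <= 2.

M = {0: {'x'}, 1: {'y'}, 2: {'z'}}
F = {(0, 2): {'x': 'z'}, (1, 2): {'y': 'z'},
     (0, 0): {'x': 'x'}, (1, 1): {'y': 'y'}, (2, 2): {'z': 'z'}}

def eventually_equal(i, m, j, n):
    """m in M_i and n in M_j become equal after mapping to a common k."""
    return any((i, k) in F and (j, k) in F and F[i, k][m] == F[j, k][n]
               for k in M)

def colimit_classes():
    elements = [(i, m) for i in M for m in M[i]]
    # Naive partition of the disjoint union; for a filtered diagram
    # "eventually equal" is already an equivalence relation.
    classes = []
    for e in elements:
        for c in classes:
            if eventually_equal(*e, *c[0]):
                c.append(e)
                break
        else:
            classes.append([e])
    return classes

print(len(colimit_classes()))  # 1: x, y, z all become equal in M_2
```

Here the colimit is a single point, since every element maps to $z$ in $M_2$.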
+
+\begin{lemma}
+\label{lemma-directed-commutes}
+Let $\mathcal{I}$ and $\mathcal{J}$ be index categories.
+Assume that $\mathcal{I}$ is filtered and $\mathcal{J}$ is finite.
+Let $M : \mathcal{I} \times \mathcal{J} \to \textit{Sets}$,
+$(i, j) \mapsto M_{i, j}$ be a diagram of diagrams of sets.
+In this case
+$$
+\colim_i \lim_j M_{i, j}
+=
+\lim_j \colim_i M_{i, j}.
+$$
+In particular, colimits over $\mathcal{I}$ commute with finite products,
+fibre products, and equalizers of sets.
+\end{lemma}
+
+\begin{proof}
+Omitted. In fact, it is a fun exercise to prove that a category is
+filtered if and only if colimits over the category commute with finite
+limits (into the category of sets).
+\end{proof}
+
+\noindent
We give a counterexample to the lemma in
the case where $\mathcal{J}$ is infinite. Namely, let
+$\mathcal{I}$ consist of $\mathbf{N} = \{1, 2, 3, \ldots\}$
+with a unique morphism $i \to i'$ whenever $i \leq i'$.
+Let $\mathcal{J}$ be the discrete category
+$\mathbf{N} = \{1, 2, 3, \ldots\}$ (only morphisms are identities).
+Let $M_{i, j} = \{1, 2, \ldots, i\}$ with obvious inclusion maps
+$M_{i, j} \to M_{i', j}$ when $i \leq i'$. In this case
+$\colim_i M_{i, j} = \mathbf{N}$ and hence
+$$
+\lim_j \colim_i M_{i, j}
+=
+\prod\nolimits_j \mathbf{N}
+=
+\mathbf{N}^\mathbf{N}
+$$
+On the other hand $\lim_j M_{i, j} = \prod\nolimits_j M_{i, j}$ and
+hence
+$$
+\colim_i \lim_j M_{i, j}
+=
+\bigcup\nolimits_i \{1, 2, \ldots, i\}^{\mathbf{N}}
+$$
+which is smaller than the other limit.
+
+\begin{lemma}
+\label{lemma-cofinal-in-filtered}
+Let $\mathcal{I}$ be a category. Let $\mathcal{J}$ be a full subcategory.
+Assume that $\mathcal{I}$ is filtered. Assume also that for any object
+$i$ of $\mathcal{I}$, there exists a morphism $i \to j$
+to some object $j$ of $\mathcal{J}$. Then $\mathcal{J}$
+is filtered and cofinal in $\mathcal{I}$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Pleasant exercise of the notions involved.
+\end{proof}
+
+\noindent
+It turns out we sometimes need a more finegrained control over the
+possible conditions one can impose on index categories. Thus we add
+some lemmas on the possible things one can require.
+
+\begin{lemma}
+\label{lemma-preserve-products}
+Let $\mathcal{I}$ be an index category, i.e., a category. Assume
+that for every pair of objects $x, y$ of $\mathcal{I}$
+there exist an object $z$ and morphisms $x \to z$ and $y \to z$.
+Then
+\begin{enumerate}
+\item If $M$ and $N$ are diagrams of sets over $\mathcal{I}$,
+then $\colim (M_i \times N_i) \to \colim M_i \times \colim N_i$
+is surjective,
+\item in general colimits of diagrams of sets over $\mathcal{I}$
+do not commute with finite nonempty products.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $(\overline{m}, \overline{n})$
+be an element of $\colim M_i \times \colim N_i$.
+Then we can find $m \in M_x$ and $n \in N_y$ for some
+$x, y \in \Ob(\mathcal{I})$ such that $m$ maps to
+$\overline{m}$ and $n$ maps to $\overline{n}$. See
+Section \ref{section-limit-sets}.
+Choose $a : x \to z$ and $b : y \to z$
+in $\mathcal{I}$. Then $(M(a)(m), N(b)(n))$ is an element of
+$(M \times N)_z$ whose image in $\colim (M_i \times N_i)$
+maps to $(\overline{m}, \overline{n})$ as desired.
+
+\medskip\noindent
+Proof of (2). Let $G$ be a non-trivial group and
+let $\mathcal{I}$ be the one-object category with endomorphism monoid $G$.
+Then $\mathcal{I}$ trivially satisfies the condition stated in the lemma.
+Now let $G$ act on itself by translation and view the $G$-set $G$
+as a set-valued $\mathcal{I}$-diagram. Then
+$$
+\colim_\mathcal{I} G \times \colim_\mathcal{I} G \cong G/G \times G/G
+$$
+is not isomorphic to
+$$
+\colim_\mathcal{I} (G \times G) \cong (G \times G)/G
+$$
This example indicates that you cannot just drop the additional
condition of Lemma \ref{lemma-directed-commutes}
even if you only care about finite products.
+\end{proof}
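The counterexample in part (2) can be verified numerically for $G = \mathbf{Z}/2\mathbf{Z}$. In the sketch below (Python, with ad hoc names of ours) the colimit of a $G$-set over the one-object category is computed as its orbit set:

```python
# Sketch: for the one-object category with endomorphism monoid G, the
# colimit of a G-set is its orbit set. Take G = Z/2Z acting on itself
# by translation, as in the proof above.

G = [0, 1]
act = lambda g, x: (g + x) % 2          # translation action of G on G

def orbits(X, action):
    """Orbit set X/G, i.e. the colimit of X over the one-object category."""
    orbs = []
    for x in X:
        orbit = frozenset(action(g, x) for g in G)
        if orbit not in orbs:
            orbs.append(orbit)
    return orbs

# colim(G) x colim(G) = G/G x G/G has 1 x 1 = 1 element ...
lhs = len(orbits(G, act)) ** 2
# ... while colim(G x G) = (G x G)/G under the diagonal action has 2 orbits.
GxG = [(x, y) for x in G for y in G]
diag = lambda g, p: (act(g, p[0]), act(g, p[1]))
rhs = len(orbits(GxG, diag))
print(lhs, rhs)  # 1 2
```

The two sizes differ, so the canonical surjection of part (1) is not a bijection here.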
+
+\begin{lemma}
+\label{lemma-colimits-abelian-as-sets}
+Let $\mathcal{I}$ be an index category, i.e., a category. Assume
+that for every pair of objects $x, y$ of $\mathcal{I}$
+there exist an object $z$ and morphisms $x \to z$ and $y \to z$.
+Let $M : \mathcal{I} \to \textit{Ab}$ be a diagram of abelian
+groups over $\mathcal{I}$. Then the colimit of $M$ in the category
+of sets surjects onto the colimit of $M$ in the category of
+abelian groups.
+\end{lemma}
+
+\begin{proof}
+Recall that the colimit in the category of sets is the quotient of
the disjoint union $\coprod M_i$ by an equivalence relation, see
+Section \ref{section-limit-sets}.
+Similarly, the colimit in the category of abelian groups is a quotient
+of the direct sum $\bigoplus M_i$.
+The assumption of the lemma means that given $i, j \in \Ob(\mathcal{I})$
+and $m \in M_i$ and $n \in M_j$, then we can find an object
+$k$ and morphisms $a : i \to k$ and $b : j \to k$.
+Thus $m + n$ is represented in the colimit by the element
$M(a)(m) + M(b)(n)$ of $M_k$. Thus $\coprod M_i$, and hence the colimit
of $M$ in the category of sets, surjects onto the colimit of $M$ in
the category of abelian groups.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-split-into-connected}
+Let $\mathcal{I}$ be an index category, i.e., a category. Assume
+that for every solid diagram
+$$
+\xymatrix{
+x \ar[d] \ar[r] & y \ar@{..>}[d] \\
+z \ar@{..>}[r] & w
+}
+$$
+in $\mathcal{I}$ there exist an object $w$ and dotted arrows
+making the diagram commute. Then $\mathcal{I}$ is either empty
+or a nonempty disjoint union of connected categories having
+the same property.
+\end{lemma}
+
+\begin{proof}
+If $\mathcal{I}$ is the empty category, then the lemma is true.
+Otherwise, we define a relation on objects of $\mathcal{I}$ by
+saying that $x \sim y$ if there exist a $z$ and
+morphisms $x \to z$ and $y \to z$. This is an equivalence
+relation by the assumption of the lemma. Hence $\Ob(\mathcal{I})$
+is a disjoint union of equivalence classes. Let $\mathcal{I}_j$
+be the full subcategories corresponding to these equivalence classes.
+Then $\mathcal{I} = \coprod \mathcal{I}_j$ with $\mathcal{I}_j$
+nonempty as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-preserve-injective-maps}
+Let $\mathcal{I}$ be an index category, i.e., a category. Assume
+that for every solid diagram
+$$
+\xymatrix{
+x \ar[d] \ar[r] & y \ar@{..>}[d] \\
+z \ar@{..>}[r] & w
+}
+$$
+in $\mathcal{I}$ there exist an object $w$ and dotted arrows
+making the diagram commute. Then
+\begin{enumerate}
+\item an injective morphism $M \to N$ of diagrams of sets over
+$\mathcal{I}$ gives rise to an injective map $\colim M_i \to \colim N_i$
+of sets,
+\item in general the same is not the case for diagrams of abelian
+groups and their colimits.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $\mathcal{I}$ is the empty category, then the lemma is true.
+Thus we may assume $\mathcal{I}$ is nonempty. In this case
+we can write $\mathcal{I} = \coprod \mathcal{I}_j$ where each
+$\mathcal{I}_j$ is nonempty and satisfies the same property, see
+Lemma \ref{lemma-split-into-connected}. Since
+$\colim_\mathcal{I} M = \coprod_j \colim_{\mathcal{I}_j} M|_{\mathcal{I}_j}$
+this reduces the proof of (1) to the connected case.
+
+\medskip\noindent
+Assume $\mathcal{I}$ is connected and $M \to N$ is injective, i.e.,
+all the maps $M_i \to N_i$ are injective.
+We identify $M_i$ with the image of $M_i \to N_i$, i.e.,
+we will think of $M_i$ as a subset of $N_i$.
+We will use the description of the colimits given in
+Section \ref{section-limit-sets} without further mention.
+Let $s, s' \in \colim M_i$ map to the same element of $\colim N_i$.
+Say $s$ comes from an element $m$ of $M_i$ and $s'$ comes from an
+element $m'$ of $M_{i'}$. Then we can find a sequence
+$i = i_0, i_1, \ldots, i_n = i'$ of objects of $\mathcal{I}$
+and morphisms
+$$
+\xymatrix{
+&
+i_1 \ar[ld] \ar[rd] & &
+i_3 \ar[ld] & &
+i_{2n-1} \ar[rd] & \\
+i = i_0 & &
+i_2 & &
+\ldots & &
+i_{2n} = i'
+}
+$$
+and elements $n_{i_j} \in N_{i_j}$ mapping to each other under
+the maps $N_{i_{2k-1}} \to N_{i_{2k-2}}$ and $N_{i_{2k-1}}
+\to N_{i_{2k}}$ induced from the maps in $\mathcal{I}$ above
+with $n_{i_0} = m$ and $n_{i_{2n}} = m'$. We will prove by induction
+on $n$ that this implies $s = s'$. The base case $n = 0$ is trivial.
+Assume $n \geq 1$. Using the assumption on $\mathcal{I}$
+we find a commutative diagram
+$$
+\xymatrix{
+& i_1 \ar[ld] \ar[rd] \\
+i_0 \ar[rd] & & i_2 \ar[ld] \\
+& w
+}
+$$
+We conclude that $m$ and $n_{i_2}$ map to the same element of $N_w$
+because both are the image of the element $n_{i_1}$.
+In particular, this element is an element $m'' \in M_w$ which
+gives rise to the same element as $s$ in $\colim M_i$.
+Then we find the chain
+$$
+\xymatrix{
+&
+i_3 \ar[ld] \ar[rd] & &
+i_5 \ar[ld] & &
+i_{2n-1} \ar[rd] & \\
+w & &
+i_4 & &
+\ldots & &
+i_{2n} = i'
+}
+$$
and the elements $n_{i_j}$ for $j \geq 3$, which gives a chain of
smaller length than the one we started with. This proves the induction
step and completes the proof of (1).
+
+\medskip\noindent
+Let $G$ be a group and let $\mathcal{I}$ be the one-object category with
+endomorphism monoid $G$. Then $\mathcal{I}$ satisfies the condition stated
+in the lemma because given $g_1, g_2 \in G$ we can find $h_1, h_2 \in G$
with $h_1 g_1 = h_2 g_2$. A diagram $M$ over $\mathcal{I}$ in
+$\textit{Ab}$ is the same thing as an abelian group $M$ with $G$-action
+and $\colim_\mathcal{I} M$ is the coinvariants $M_G$ of $M$.
Take $G$ to be the group of order $2$ acting trivially on
$M = \mathbf{Z}/2\mathbf{Z}$, which we map into the first summand of
+$N = \mathbf{Z}/2\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z}$
+where the nontrivial element of $G$ acts by
+$(x, y) \mapsto (x + y, y)$. Then $M_G \to N_G$ is zero.
+\end{proof}
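The final example can be checked by a direct computation of coinvariants. The sketch below (Python; the helper names are ours) verifies that the generator of $M_G$ dies in $N_G = N/K$, where $K$ is the subgroup generated by the elements $n - g \cdot n$:

```python
# Sketch: G = Z/2Z acts trivially on M = Z/2Z and on N = Z/2Z x Z/2Z by
# (x, y) |-> (x + y, y). Although M -> N, m |-> (m, 0), is injective,
# the induced map on coinvariants M_G -> N_G is zero.

def coinvariant_subgroup(elements, action, add):
    """Subgroup generated by {n - g(n)}; here n - g(n) = n + g(n) since
    every element of N is 2-torsion. Computed by naive closure."""
    sub = {add(n, action(n)) for n in elements}
    changed = True
    while changed:
        changed = False
        for a in list(sub):
            for b in list(sub):
                s = add(a, b)
                if s not in sub:
                    sub.add(s)
                    changed = True
    return sub

N = [(x, y) for x in (0, 1) for y in (0, 1)]
g = lambda n: ((n[0] + n[1]) % 2, n[1])       # the nontrivial element of G
add = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

K = coinvariant_subgroup(N, g, add)
# M_G = M since G acts trivially, and 1 in M maps to (1, 0) in N.
print((1, 0) in K)  # True: the class of (1, 0) vanishes in N_G = N/K
```

Note that $(0, 1) \notin K$, so $N_G \cong \mathbf{Z}/2\mathbf{Z}$ is nonzero; only the image of $M$ dies.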
+
+\begin{lemma}
+\label{lemma-split-into-directed}
+Let $\mathcal{I}$ be an index category, i.e., a category.
+Assume
+\begin{enumerate}
+\item for every pair of morphisms $a : w \to x$ and $b : w \to y$
+in $\mathcal{I}$ there exist an object $z$ and morphisms $c : x \to z$
+and $d : y \to z$ such that $c \circ a = d \circ b$, and
+\item for every pair of morphisms $a, b : x \to y$ there exists
+a morphism $c : y \to z$ such that $c \circ a = c \circ b$.
+\end{enumerate}
+Then $\mathcal{I}$ is a (possibly empty) union
+of disjoint filtered index categories $\mathcal{I}_j$.
+\end{lemma}
+
+\begin{proof}
+If $\mathcal{I}$ is the empty category, then the lemma is true.
+Otherwise, we define a relation on objects of $\mathcal{I}$ by
+saying that $x \sim y$ if there exist a $z$ and
+morphisms $x \to z$ and $y \to z$. This is an equivalence
+relation by the first assumption of the lemma. Hence $\Ob(\mathcal{I})$
+is a disjoint union of equivalence classes. Let $\mathcal{I}_j$
+be the full subcategories corresponding to these equivalence classes.
+The rest is clear from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-almost-directed-commutes-equalizers}
+Let $\mathcal{I}$ be an index category satisfying the hypotheses of
+Lemma \ref{lemma-split-into-directed} above. Then colimits over $\mathcal{I}$
+commute with fibre products and equalizers in sets (and more generally
+with finite connected limits).
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-split-into-directed}
+we may write $\mathcal{I} = \coprod \mathcal{I}_j$ with each $\mathcal{I}_j$
+filtered. By
+Lemma \ref{lemma-directed-commutes}
+we see that colimits of $\mathcal{I}_j$ commute with equalizers and
+fibre products. Thus it suffices to show that equalizers and fibre products
+commute with coproducts in the category of sets (including empty coproducts).
+In other words, given a set $J$ and sets $A_j, B_j, C_j$ and set maps
+$A_j \to B_j$, $C_j \to B_j$ for $j \in J$ we have to show that
+$$
+(\coprod\nolimits_{j \in J} A_j)
+\times_{(\coprod\nolimits_{j \in J} B_j)}
+(\coprod\nolimits_{j \in J} C_j)
+=
+\coprod\nolimits_{j \in J} A_j \times_{B_j} C_j
+$$
+and given $a_j, a'_j : A_j \to B_j$ that
+$$
+\text{Equalizer}(
+\coprod\nolimits_{j \in J} a_j,
+\coprod\nolimits_{j \in J} a'_j)
+=
+\coprod\nolimits_{j \in J}
+\text{Equalizer}(a_j, a'_j)
+$$
+This is true even if $J = \emptyset$. Details omitted.
+\end{proof}
+
+
+
+\section{Cofiltered limits}
+\label{section-codirected-limits}
+
+\noindent
+Limits are easier to compute or describe when they
+are over a cofiltered diagram. Here is the definition.
+
+\begin{definition}
+\label{definition-codirected}
+We say that a diagram $M : \mathcal{I} \to \mathcal{C}$ is {\it codirected}
+or {\it cofiltered} if the following conditions hold:
+\begin{enumerate}
+\item the category $\mathcal{I}$ has at least one object,
+\item for every pair of objects $x, y$ of $\mathcal{I}$
+there exist an object $z$ and morphisms $z \to x$,
+$z \to y$, and
+\item for every pair of objects $x, y$ of $\mathcal{I}$
+and every pair of morphisms $a, b : x \to y$ of $\mathcal{I}$
+there exists a morphism $c : w \to x$ of $\mathcal{I}$
+such that $M(a \circ c) = M(b \circ c)$ as morphisms in $\mathcal{C}$.
+\end{enumerate}
+We say that an index category $\mathcal{I}$ is {\it codirected}, or
+{\it cofiltered} if $\text{id} : \mathcal{I} \to \mathcal{I}$ is
+cofiltered (in other words you erase the $M$ in part (3) above).
+\end{definition}
+
+\noindent
+We observe that any diagram with cofiltered index category is cofiltered,
+and this is how this situation usually occurs.
+
+\medskip\noindent
+As an example of why cofiltered limits of sets are ``easier'' than
+general ones, we mention the fact that a cofiltered diagram of finite
+nonempty sets has nonempty limit (Lemma \ref{lemma-nonempty-limit}).
+This result does not hold for a general limit of finite
+nonempty sets.
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Limits and colimits over preordered sets}
+\label{section-posets-limits}
+
+\noindent
+A special case of diagrams is given by systems over preordered sets.
+
+\begin{definition}
+\label{definition-directed-set}
+Let $I$ be a set and let $\leq$ be a binary relation on $I$.
+\begin{enumerate}
+\item We say $\leq$ is a {\it preorder} if it is
+transitive (if $i \leq j$ and $j \leq k$ then $i \leq k$) and
+reflexive ($i \leq i$ for all $i \in I$).
+\item A {\it preordered set} is a set endowed with a preorder.
+\item A {\it directed set} is a preordered set $(I, \leq)$
+such that $I$ is not empty and such that $\forall i, j \in I$,
+there exists $k \in I$ with $i \leq k, j \leq k$.
+\item We say $\leq$ is a {\it partial order} if it is a preorder
+which is antisymmetric (if $i \leq j$ and $j \leq i$, then $i = j$).
+\item A {\it partially ordered set} is a set endowed with a partial order.
+\item A {\it directed partially ordered set} is a directed set
+whose ordering is a partial order.
+\end{enumerate}
+\end{definition}
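For finite data the conditions in the definition can be checked mechanically. A small Python sketch (the predicate names are ours), using divisibility as an example of a directed partial order:

```python
# Sketch: direct transcription of the definition of preorder and
# directed set for a finite set I with relation leq (a set of pairs).

def is_preorder(I, leq):
    reflexive = all((i, i) in leq for i in I)
    transitive = all((i, k) in leq
                     for (i, j) in leq for (j2, k) in leq if j == j2)
    return reflexive and transitive

def is_directed(I, leq):
    # Nonempty, a preorder, and every pair has an upper bound.
    return (is_preorder(I, leq) and len(I) > 0 and
            all(any((i, k) in leq and (j, k) in leq for k in I)
                for i in I for j in I))

# Divisibility on {1, 2, 3, 6} is directed: least common multiples
# provide upper bounds (here 6 bounds every pair).
I = [1, 2, 3, 6]
leq = {(i, j) for i in I for j in I if j % i == 0}
print(is_directed(I, leq))  # True
```

Removing $6$ breaks directedness: the pair $2, 3$ then has no upper bound.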
+
+\noindent
+It is customary to drop the $\leq$ from the notation when talking
+about preordered sets, that is, one speaks of
+the preordered set $I$ rather than of the preordered set $(I, \leq)$.
+Given a preordered set $I$ the symbol $\geq$ is defined by
+the rule $i \geq j \Leftrightarrow j \leq i$ for all $i, j \in I$.
+The phrase ``partially ordered set'' is sometimes abbreviated to ``poset''.
+
+\medskip\noindent
+Given a preordered set $I$ we can construct a category: the objects are
+the elements of $I$, there is exactly one morphism $i \to i'$
+if $i \leq i'$, and otherwise none. Conversely, given a category $\mathcal{C}$
+with at most one arrow between any two objects, the set
+$\Ob(\mathcal{C})$ is endowed with a preorder defined by the rule
+$x \leq y \Leftrightarrow \Mor_\mathcal{C}(x, y) \not = \emptyset$.
+
+\begin{definition}
+\label{definition-system-over-poset}
+Let $(I, \leq)$ be a preordered set. Let $\mathcal{C}$ be a category.
+\begin{enumerate}
\item A {\it system over $I$ in $\mathcal{C}$}, sometimes called an
{\it inductive system over $I$ in $\mathcal{C}$}, is given by
+objects $M_i$ of $\mathcal{C}$ and for every $i \leq i'$ a
+morphism $f_{ii'} : M_i \to M_{i'}$ such that $f_{ii}
+= \text{id}$ and such that $f_{ii''} = f_{i'i''} \circ f_{i i'}$
+whenever $i \leq i' \leq i''$.
+\item An {\it inverse system over $I$ in $\mathcal{C}$},
sometimes called a {\it projective system over $I$ in $\mathcal{C}$},
is given by objects $M_i$ of $\mathcal{C}$ and for every $i' \leq i$ a
+morphism $f_{ii'} : M_i \to M_{i'}$ such that $f_{ii}
+= \text{id}$ and such that $f_{ii''} = f_{i'i''} \circ f_{i i'}$
+whenever $i'' \leq i' \leq i$. (Note reversal of inequalities.)
+\end{enumerate}
We will say $(M_i, f_{ii'})$ is an (inverse) system over $I$ to
+denote this. The maps $f_{ii'}$ are sometimes
+called the {\it transition maps}.
+\end{definition}
+
+\noindent
+In other words a system over $I$ is just a diagram
+$M : \mathcal{I} \to \mathcal{C}$ where $\mathcal{I}$ is the category
+we associated to $I$ above: objects are elements of $I$ and
+there is a unique arrow $i \to i'$ in $\mathcal{I}$ if and only if $i \leq i'$.
+An inverse system is a diagram $M : \mathcal{I}^{opp} \to \mathcal{C}$.
+From this point of view we could take (co)limits of any (inverse)
+system over $I$. However, it is customary to take
+{\it only colimits of systems over $I$} and
+{\it only limits of inverse systems over $I$}.
+More precisely: Given a system $(M_i, f_{ii'})$
+over $I$ the colimit of the system
+$(M_i, f_{ii'})$ is defined as
+$$
+\colim_{i \in I} M_i = \colim_\mathcal{I} M,
+$$
+i.e., as the colimit of the corresponding diagram.
Given an inverse system $(M_i, f_{ii'})$ over $I$ the limit
+of the inverse system $(M_i, f_{ii'})$ is defined as
+$$
+\lim_{i \in I} M_i = \lim_{\mathcal{I}^{opp}} M,
+$$
+i.e., as the limit of the corresponding diagram.
+
+\begin{remark}
+\label{remark-preorder-versus-partial-order}
+Let $I$ be a preordered set. From $I$ we can construct a canonical
+partially ordered set $\overline{I}$ and an order preserving map
+$\pi : I \to \overline{I}$. Namely, we can define an equivalence
+relation $\sim$ on $I$ by the rule
+$$
+i \sim j \Leftrightarrow (i \leq j\text{ and }j \leq i).
+$$
+We set $\overline{I} = I/\sim$ and we let $\pi : I \to \overline{I}$
+be the quotient map. Finally, $\overline{I}$ comes with a unique
+partial ordering such that
+$\pi(i) \leq \pi(j) \Leftrightarrow i \leq j$.
+Observe that if $I$ is a directed set, then $\overline{I}$
+is a directed partially ordered set.
+Given an (inverse) system $N$ over $\overline{I}$ we obtain an
+(inverse) system $M$ over $I$ by setting $M_i = N_{\pi(i)}$.
This construction defines a functor from the category of (inverse)
systems over $\overline{I}$ to the category of (inverse) systems over
$I$. In fact, this functor is an equivalence.
+The reason is that if $i \sim j$, then for any system
+$M$ over $I$ the maps $M_i \to M_j$ and $M_j \to M_i$ are
+mutually inverse isomorphisms. More precisely, choosing
+a section $s : \overline{I} \to I$ of $\pi$ a quasi-inverse
+of the functor above sends $M$ to $N$ with
+$N_{\overline{i}} = M_{s(\overline{i})}$.
+Finally, this correspondence is compatible with colimits of systems:
+if $M$ and $N$ are related as above and
+if either $\colim_{\overline{I}} N$ or $\colim_I M$ exists
+then so does the other and
+$\colim_{\overline{I}} N = \colim_I M$.
+Similar results hold for inverse systems and limits of inverse systems.
+\end{remark}
+
+\noindent
+The upshot of Remark \ref{remark-preorder-versus-partial-order}
+is that while computing a colimit of a system or a limit of
+an inverse system, we may always assume the preorder is a partial order.
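The construction of $\overline{I}$ in the remark is algorithmic for a finite preorder. A Python sketch (the names are ours) computing the quotient and checking that antisymmetry holds on it:

```python
# Sketch: collapse i ~ j (meaning i <= j and j <= i) in a finite
# preordered set to obtain the partially ordered set I-bar.

def poset_quotient(I, leq):
    classes = []
    for i in I:
        cls = frozenset(j for j in I if (i, j) in leq and (j, i) in leq)
        if cls not in classes:
            classes.append(cls)
    # The induced order is independent of the chosen representatives,
    # by transitivity of leq, so any representative works.
    qleq = {(c, d) for c in classes for d in classes
            if (next(iter(c)), next(iter(d))) in leq}
    return classes, qleq

# Preorder on {0, 1, 2} with 0 <= 1 and 1 <= 0 (so 0 ~ 1), and 0, 1 <= 2.
I = [0, 1, 2]
leq = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0), (0, 2), (1, 2)}
classes, qleq = poset_quotient(I, leq)
print(len(classes))  # 2: the classes {0, 1} and {2}
# Antisymmetry holds on the quotient:
print(all(c == d for (c, d) in qleq if (d, c) in qleq))  # True
```

On the quotient the only cycles are identities, which is exactly the antisymmetry required of a partial order.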
+
+\begin{definition}
+\label{definition-directed-system}
+Let $I$ be a preordered set. We say a system (resp.\ inverse system)
+$(M_i, f_{ii'})$ is a
+{\it directed system} (resp.\ {\it directed inverse system})
+if $I$ is a directed set
+(Definition \ref{definition-directed-set}): $I$ is nonempty and
+for all $i_1, i_2 \in I$ there exists $i\in I$ such that
+$i_1 \leq i$ and $i_2 \leq i$.
+\end{definition}
+
+\noindent
+In this case the colimit is sometimes (unfortunately)
+called the ``direct limit''. We will not use this last
+terminology. It turns out that diagrams over a filtered
+category are no more general than directed systems in the
+following sense.
+
+\begin{lemma}
+\label{lemma-directed-category-system}
+Let $\mathcal{I}$ be a filtered index category.
+There exist a directed set $I$
+and a system $(x_i, \varphi_{ii'})$ over $I$ in $\mathcal{I}$
+with the following properties:
+\begin{enumerate}
+\item For every category $\mathcal{C}$ and every diagram
+$M : \mathcal{I} \to \mathcal{C}$ with values in $\mathcal{C}$,
+denote $(M(x_i), M(\varphi_{ii'}))$
+the corresponding system over $I$. If
+$\colim_{i \in I} M(x_i)$ exists then so does
+$\colim_\mathcal{I} M$ and the transformation
+$$
+\theta :
+\colim_{i \in I} M(x_i)
+\longrightarrow
+\colim_\mathcal{I} M
+$$
+of Lemma \ref{lemma-functorial-colimit} is an isomorphism.
+\item For every category $\mathcal{C}$ and every diagram
+$M : \mathcal{I}^{opp} \to \mathcal{C}$ in $\mathcal{C}$, denote
+$(M(x_i), M(\varphi_{ii'}))$ the corresponding inverse system
+over $I$. If $\lim_{i \in I} M(x_i)$ exists then so does
+$\lim_\mathcal{I} M$ and the transformation
+$$
+\theta :
+\lim_{\mathcal{I}^{opp}} M
+\longrightarrow
+\lim_{i \in I} M(x_i)
+$$
+of Lemma \ref{lemma-functorial-limit} is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+As explained in the text following
+Definition \ref{definition-system-over-poset}, we may view
+preordered sets as categories and systems as functors.
+Throughout the proof, we will freely shift between these two points of view.
+We prove the first statement by constructing a category
+$\mathcal{I}_0$, corresponding to a directed set\footnote{In fact,
+our construction will produce a directed partially ordered set.}, and a cofinal
+functor $M_0 : \mathcal{I}_0 \to \mathcal{I}$. Then, by
+Lemma \ref{lemma-cofinal}, the colimit of a diagram
+$M : \mathcal{I} \to \mathcal{C}$ coincides with the
+colimit of the diagram $M \circ M_0 : \mathcal{I}_0 \to \mathcal{C}$,
+from which the statement follows. The second statement is dual to the
+first and may be proved by interpreting a limit in $\mathcal{C}$ as
+a colimit in $\mathcal{C}^{opp}$. We omit the details.
+
+\medskip\noindent
+A category $\mathcal{F}$ is called {\em finitely generated} if
+there exists a finite set $F$ of arrows in $\mathcal{F}$, such that
+each arrow in $\mathcal{F}$ may be obtained by composing
+arrows from $F$. In particular, this implies that $\mathcal{F}$ has
+finitely many objects. We start the proof by reducing to the case
+when $\mathcal{I}$ has the property that every finitely generated
+subcategory of $\mathcal{I}$ may be extended to a finitely
+generated subcategory with a unique final object.
+
+\medskip\noindent
+Let $\omega$ denote the directed set of finite ordinals, which
+we view as a filtered category. It is easy to verify that the
+product category $\mathcal{I}\times \omega$ is also filtered,
+and the projection
+$\Pi : \mathcal{I} \times \omega \to \mathcal{I}$
+is cofinal.
+
+\medskip\noindent
+Now let $\mathcal{F}$ be any finitely generated
+subcategory of $\mathcal{I}\times \omega$.
+By using the axioms of a filtered category and a simple induction
+argument on a finite set of generators of $\mathcal{F}$,
+we may construct a cocone $(\{f_i\}, i_\infty)$ in $\mathcal{I}$
+for the diagram $\mathcal{F} \to \mathcal{I}$. That is, a morphism
+$f_i : i \to i_\infty$ for every object $i$ in $\mathcal{F}$
+such that for each arrow $f : i \to i'$ in $\mathcal{F}$
we have $f_i = f_{i'} \circ f$. We can also choose $i_\infty$ such
+that there are no arrows from $i_\infty$ to an object in $\mathcal{F}$.
+This is possible since
+we may always post-compose the arrows $f_i$ with an arrow
+which is the identity on the $\mathcal{I}$-component and
+strictly increasing on the $\omega$-component.
+Now let $\mathcal{F}^+$ denote the category consisting of all
+objects and arrows in $\mathcal{F}$
+together with the object $i_\infty$, the identity
+arrow $\text{id}_{i_\infty}$ and the arrows $f_i$.
+Since there are no arrows from $i_\infty$ in $\mathcal{F}^+$
+to any object of $\mathcal{F}$, the arrow set in $\mathcal{F}^+$
+is closed under composition, so $\mathcal{F}^+$ is indeed
+a category. By construction, it is a finitely
+generated subcategory of $\mathcal{I}$ which has $i_\infty$ as
+unique final object. Since, by Lemma \ref{lemma-cofinal},
+the colimit of any diagram $M : \mathcal{I} \to \mathcal{C}$
coincides with the colimit of $M \circ \Pi$, this gives the desired
+reduction.
+
+\medskip\noindent
+The set of all finitely generated subcategories of $\mathcal{I}$
+with a unique final object is naturally ordered by inclusion.
+We take $\mathcal{I}_0$ to be the category corresponding
+to this set. We also have a functor
+$M_0 : \mathcal{I}_0 \to \mathcal{I}$, which takes an
+arrow $\mathcal{F} \subset \mathcal{F'}$ in
+$\mathcal{I}_0$ to the unique map from the final object of
+$\mathcal{F}$ to the final object of $\mathcal{F}'$.
+Given any two finitely generated subcategories of
+$\mathcal{I}$, the category generated by these two categories is
+also finitely generated. By our assumption on $\mathcal{I}$, it is
+also contained in a finitely generated subcategory of $\mathcal{I}$
+with a unique final object. This shows that $\mathcal{I}_0$ is directed.
+
+\medskip\noindent
+Finally, we verify that $M_0$ is cofinal. Since any
+object of $\mathcal{I}$ is the final object in the subcategory
+consisting of only that object and its identity arrow, the functor
+$M_0$ is surjective on objects. In particular, Condition (1) of
+Definition \ref{definition-cofinal} is satisfied. Given
+an object $i$ of $\mathcal{I}$, objects $\mathcal{F}_1, \mathcal{F}_2$ in
+$\mathcal{I}_0$ and maps $\varphi_1 : i \to M_0(\mathcal{F}_1)$
+and $\varphi_2 : i \to M_0(\mathcal{F}_2)$ in
+$\mathcal{I}$, we can take $\mathcal{F}_{12}$ to be a finitely
+generated category with a unique final object containing
+$\mathcal{F}_1$, $\mathcal{F}_2$ and the morphisms $\varphi_1, \varphi_2$.
+The resulting diagram commutes
+$$
+\xymatrix{
+& M_0(\mathcal{F}_{12}) & \\
+M_0(\mathcal{F}_{1}) \ar[ru] & & M_0(\mathcal{F}_{2}) \ar[lu] \\
+& i \ar[lu] \ar[ru]
+}
+$$
+since it lives in the category $\mathcal{F}_{12}$ and
+$M_0(\mathcal{F}_{12})$ is final in
+this category. Hence also Condition (2) is satisfied, which concludes
+the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-trick-needed}
Note that a finite directed set $(I, \geq)$ always has a greatest element
+$i_\infty$. Hence any colimit of a system $(M_i, f_{ii'})$ over such a set
+is trivial in the sense that the colimit equals $M_{i_\infty}$. In contrast,
+a colimit indexed by a finite filtered category need not
+be trivial. For instance, let $\mathcal{I}$ be the category with a single object
+$i$ and a single non-trivial morphism $e$ satisfying $e = e \circ e$. The
+colimit of a diagram $M : \mathcal{I} \to Sets$ is the image of the
+idempotent $M(e)$. This illustrates that something like the trick of passing
+to $\mathcal{I}\times \omega$ in the proof of
+Lemma \ref{lemma-directed-category-system} is essential.
+\end{remark}
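The idempotent example of the remark can be checked in \textit{Sets}: the following sketch (using a hypothetical idempotent map, not from the text) computes the colimit as $M(i)$ modulo $x \sim M(e)(x)$ and compares it with the image of $M(e)$.

```python
# Sketch: colimit in Sets of a diagram over the one-object category
# {id, e} with e o e = e.  The colimit is M(i) modulo x ~ M(e)(x); the
# classes are the fibres of M(e), so it is in bijection with im M(e).
# The map e below (hypothetical example) rounds down to an even number.
Mi = list(range(6))
def e(x):
    return x - (x % 2)          # idempotent: e(e(x)) == e(x)

classes = {frozenset(y for y in Mi if e(y) == e(x)) for x in Mi}
image = sorted({e(x) for x in Mi})
```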
+
+\begin{lemma}
+\label{lemma-nonempty-limit}
+If $S : \mathcal{I} \to \textit{Sets}$ is a cofiltered diagram of sets
+and all the $S_i$ are finite nonempty, then $\lim_i S_i$ is nonempty.
+In other words, the limit of a directed inverse system of finite nonempty sets
+is nonempty.
+\end{lemma}
+
+\begin{proof}
+The two statements are equivalent by
+Lemma \ref{lemma-directed-category-system}.
+Let $I$ be a directed set and let $(S_i)_{i \in I}$
+be an inverse system of finite nonempty sets over $I$.
+Let us say that a {\it subsystem} $T$ is a family $T = (T_i)_{i \in I}$
+of nonempty subsets $T_i \subset S_i$ such that $T_{i'}$ is mapped
+into $T_i$ by the transition map $S_{i'} \to S_i$ for all $i' \geq i$.
+Denote $\mathcal{T}$ the set of subsystems. We order $\mathcal{T}$
+by inclusion. Suppose $T_\alpha$, $\alpha \in A$ is a totally ordered family
+of elements of $\mathcal{T}$. Say $T_\alpha = (T_{\alpha, i})_{i \in I}$.
+Then we can find a lower bound $T = (T_i)_{i \in I}$ by setting
+$T_i = \bigcap_{\alpha \in A} T_{\alpha, i}$ which is manifestly a
+finite nonempty subset of $S_i$ as all the $T_{\alpha, i}$ are nonempty
+and as the $T_\alpha$ form a totally ordered family. Thus we may
+apply Zorn's lemma to see that $\mathcal{T}$ has minimal elements.
+
+\medskip\noindent
+Let's analyze what a minimal element $T \in \mathcal{T}$ looks like.
+First observe that the maps $T_{i'} \to T_i$ are all surjective.
+Namely, as $I$ is a directed set and $T_i$ is finite,
+the intersection $T'_i = \bigcap_{i' \geq i} \Im(T_{i'} \to T_i)$
+is nonempty. Thus $T' = (T'_i)$ is a subsystem contained in $T$ and
+by minimality $T' = T$. Finally, we claim that $T_i$ is a singleton
+for each $i$. Namely, if $x \in T_i$, then we can define
+$T'_{i'} = (T_{i'} \to T_i)^{-1}(\{x\})$ for $i' \geq i$ and
+$T'_j = T_j$ if $j \not \geq i$. This is another subsystem as we've seen
+above that the transition maps of the subsystem $T$ are surjective.
+By minimality we see that $T = T'$ which indeed implies that $T_i$
+is a singleton. This holds for every $i \in I$, hence we see that
+$T_i = \{x_i\}$ for some $x_i \in S_i$ with $x_{i'} \mapsto x_i$
+under the map $S_{i'} \to S_i$ for every $i' \geq i$. In other words,
+$(x_i) \in \lim S_i$ and the lemma is proved.
+\end{proof}
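For a finite truncation one can compute such a limit by hand; the sketch below (an illustrative example, not from the text) finds the compatible families in the inverse system $\mathbf{Z}/8 \to \mathbf{Z}/4 \to \mathbf{Z}/2$ of finite nonempty sets and confirms the limit is nonempty.

```python
# Sketch (finite truncation): compatible families in an inverse system
# of finite nonempty sets.  S_n = Z/2^n for n = 1, 2, 3 with reduction
# as transition maps; the limit is the set of compatible tuples.
from itertools import product

S = {n: range(2 ** n) for n in (1, 2, 3)}

def red(x, n):                   # transition map: reduce x modulo 2^n
    return x % (2 ** n)

limit = [(x1, x2, x3)
         for x1, x2, x3 in product(S[1], S[2], S[3])
         if red(x3, 2) == x2 and red(x2, 1) == x1]
```

Each element of $S_3$ determines the whole family, so the limit here has $8$ elements.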
+
+
+
+
+
+
+\section{Essentially constant systems}
+\label{section-essentially-constant}
+
+\noindent
+Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram in a category $\mathcal{C}$.
+Assume the index category $\mathcal{I}$ is filtered. In this case
+there are three successively stronger notions which pick out an object
+$X$ of $\mathcal{C}$. The first is just
+$$
+X = \colim_{i \in \mathcal{I}} M_i.
+$$
+Then $X$ comes equipped with the coprojections $M_i \to X$.
+A stronger condition would be to require that $X$ is the colimit and
+that there exist an $i \in \mathcal{I}$ and a morphism $X \to M_i$ such
+that the composition $X \to M_i \to X$ is $\text{id}_X$. An even
+stronger condition is the following.
+
+\begin{definition}
+\label{definition-essentially-constant-diagram}
+Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram in a category
+$\mathcal{C}$.
+\begin{enumerate}
+\item Assume the index category $\mathcal{I}$ is filtered and
+let $(X, \{M_i \to X\}_i)$ be a cocone for $M$, see
+Remark \ref{remark-cones-and-cocones}. We say $M$ is
+{\it essentially constant} with {\it value} $X$ if there exist an
+$i \in \mathcal{I}$ and a morphism $X \to M_i$ such that
+\begin{enumerate}
+\item $X \to M_i \to X$ is $\text{id}_X$, and
+\item for all $j$ there exist $k$ and morphisms $i \to k$ and $j \to k$
+such that the morphism $M_j \to M_k$ equals the composition
+$M_j \to X \to M_i \to M_k$.
+\end{enumerate}
+\item Assume the index category $\mathcal{I}$ is cofiltered and let
+$(X, \{X \to M_i\}_i)$ be a cone for $M$, see
+Remark \ref{remark-cones-and-cocones}. We say
+$M$ is {\it essentially constant} with {\it value} $X$ if
+there exist an $i \in \mathcal{I}$
+and a morphism $M_i \to X$ such that
+\begin{enumerate}
+\item $X \to M_i \to X$ is $\text{id}_X$, and
+\item for all $j$ there exist $k$ and morphisms $k \to i$ and $k \to j$
+such that the morphism $M_k \to M_j$ equals the composition
+$M_k \to M_i \to X \to M_j$.
+\end{enumerate}
+\end{enumerate}
+Please keep in mind Lemma \ref{lemma-essentially-constant-is-limit-colimit}
+when using this definition.
+\end{definition}
+
+\noindent
+Which of the two versions is meant will be clear from context. If there is
+any confusion we will distinguish between these by saying that the first
+version means $M$ is essentially constant as an {\it ind-object}, and in
+the second case we will say it is essentially constant as a {\it pro-object}.
+This terminology is further explained in
+Remarks \ref{remark-ind-category} and \ref{remark-pro-category}.
+In fact we will often use the terminology ``essentially constant system''
+which formally speaking is only defined for systems over directed sets.
+
+\begin{definition}
+\label{definition-essentially-constant-system}
+Let $\mathcal{C}$ be a category. A directed system
+$(M_i, f_{ii'})$ is an {\it essentially constant system}
+if $M$ viewed as a functor $I \to \mathcal{C}$
+defines an essentially constant diagram. A directed inverse system
+$(M_i, f_{ii'})$ is an {\it essentially constant inverse system} if
+$M$ viewed as a functor $I^{opp} \to \mathcal{C}$ defines an
+essentially constant inverse diagram.
+\end{definition}
+
+\noindent
+If $(M_i, f_{ii'})$ is an essentially constant system and the morphisms
+$f_{ii'}$ are monomorphisms, then for all $i \leq i'$ sufficiently large the
+morphisms $f_{ii'}$ are isomorphisms. On the other hand, consider the system
+$$
+\mathbf{Z}^2 \to \mathbf{Z}^2 \to \mathbf{Z}^2 \to \ldots
+$$
+with maps given by $(a, b) \mapsto (a + b, 0)$. This system is essentially
constant with value $\mathbf{Z}$, but every transition map has a nonzero kernel.
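A quick sanity check of this example: the transition map is idempotent, so every composition of transition maps equals the map itself, and the system stabilizes to $\mathbf{Z} \times \{0\} \cong \mathbf{Z}$ even though each single map kills the elements $(a, -a)$.

```python
# Sketch: the transition map f(a, b) = (a + b, 0) on Z^2 from the
# example above is idempotent, so all compositions of transition maps
# agree with f itself, yet f has kernel {(a, -a)}.
def f(v):
    a, b = v
    return (a + b, 0)

samples = [(0, 0), (1, 2), (-3, 5), (7, -7)]
```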
+
+\medskip\noindent
+Here is an example of a system which is not essentially constant. Let
$M = \bigoplus_{n \geq 0} \mathbf{Z}$ and let $S : M \to M$ be the
+shift operator $(a_0, a_1, \ldots) \mapsto (a_1, a_2, \ldots)$. In this
+case the system $M \to M \to M \to \ldots$ with transition maps $S$
+has colimit $0$ and the composition $0 \to M \to 0$ is the identity,
+but the system is not essentially constant.
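The reason the colimit is $0$ can be seen concretely: the shift kills every finitely supported sequence after finitely many steps. A small sketch (modelling elements of the direct sum as finite tuples):

```python
# Sketch: the shift S maps each finitely supported sequence to 0 after
# finitely many steps, which is why the colimit of M -> M -> M -> ...
# with transition maps S is 0.
def shift(a):
    return a[1:]

def steps_to_zero(a):
    # number of shifts needed to map a to 0 in the direct sum
    n = 0
    while any(a):
        a = shift(a)
        n += 1
    return n
```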
+
+\medskip\noindent
+The following lemma is a sanity check.
+
+\begin{lemma}
+\label{lemma-essentially-constant-is-limit-colimit}
+Let $M : \mathcal{I} \to \mathcal{C}$ be a diagram.
+If $\mathcal{I}$ is filtered and $M$ is essentially
+constant as an ind-object, then $X = \colim M_i$ exists and $M$
+is essentially constant with value $X$.
+If $\mathcal{I}$ is cofiltered and $M$ is essentially
+constant as a pro-object, then $X = \lim M_i$ exists and $M$ is
+essentially constant with value $X$.
+\end{lemma}
+
+\begin{proof}
Omitted. This is a good exercise in the definitions.
+\end{proof}
+
+\begin{remark}
+\label{remark-ind-category}
+Let $\mathcal{C}$ be a category. There exists a big category
+$\text{Ind-}\mathcal{C}$ of {\it ind-objects of} $\mathcal{C}$.
+Namely, if $F : \mathcal{I} \to \mathcal{C}$ and
+$G : \mathcal{J} \to \mathcal{C}$ are filtered diagrams in $\mathcal{C}$,
+then we can define
+$$
+\Mor_{\text{Ind-}\mathcal{C}}(F, G) =
+\lim_i \colim_j \Mor_\mathcal{C}(F(i), G(j)).
+$$
+There is a canonical functor $\mathcal{C} \to \text{Ind-}\mathcal{C}$
+which maps $X$ to the {\it constant system} on $X$. This is a fully
+faithful embedding. In this language one sees that a diagram $F$ is
+essentially constant if and only if $F$ is isomorphic to a constant system.
+If we ever need this material, then we will formulate this into a lemma
+and prove it here.
+\end{remark}
+
+\begin{remark}
+\label{remark-pro-category}
+Let $\mathcal{C}$ be a category. There exists a big category
+$\text{Pro-}\mathcal{C}$ of {\it pro-objects} of $\mathcal{C}$.
+Namely, if $F : \mathcal{I} \to \mathcal{C}$ and
+$G : \mathcal{J} \to \mathcal{C}$ are cofiltered diagrams in $\mathcal{C}$,
+then we can define
+$$
+\Mor_{\text{Pro-}\mathcal{C}}(F, G) =
+\lim_j \colim_i \Mor_\mathcal{C}(F(i), G(j)).
+$$
+There is a canonical functor $\mathcal{C} \to \text{Pro-}\mathcal{C}$
+which maps $X$ to the {\it constant system} on $X$. This is a fully
+faithful embedding. In this language one sees that a diagram $F$ is
+essentially constant if and only if $F$ is isomorphic to a constant system.
+If we ever need this material, then we will formulate this into a lemma
+and prove it here.
+\end{remark}
+
+\begin{example}
+\label{example-pro-morphism-inverse-systems}
+Let $\mathcal{C}$ be a category. Let $(X_n)$ and $(Y_n)$ be inverse
+systems in $\mathcal{C}$ over $\mathbf{N}$ with the usual ordering.
+Picture:
+$$
+\ldots \to X_3 \to X_2 \to X_1
+\quad\text{and}\quad
+\ldots \to Y_3 \to Y_2 \to Y_1
+$$
+Let $a : (X_n) \to (Y_n)$ be a morphism of pro-objects of $\mathcal{C}$.
+What does $a$ amount to? Well, for each $n \in \mathbf{N}$ there should
+exist an $m(n)$ and a morphism $a_n : X_{m(n)} \to Y_n$. These morphisms
+ought to agree in the following sense: for all $n' \geq n$ there exists an
$m(n, n') \geq m(n'), m(n)$ such that the diagram
+$$
+\xymatrix{
+X_{m(n, n')} \ar[rr] \ar[d] & & X_{m(n)} \ar[d]^{a_n} \\
+X_{m(n')} \ar[r]^{a_{n'}} & Y_{n'} \ar[r] & Y_n
+}
+$$
+commutes. After replacing $m(n)$ by $\max_{k, l \leq n}\{m(n, k), m(k, l)\}$
+we see that we obtain $\ldots \geq m(3) \geq m(2) \geq m(1)$ and a commutative
+diagram
+$$
+\xymatrix{
+\ldots \ar[r] &
+X_{m(3)} \ar[d]^{a_3} \ar[r] &
+X_{m(2)} \ar[d]^{a_2} \ar[r] &
+X_{m(1)} \ar[d]^{a_1} \\
+\ldots \ar[r] &
+Y_3 \ar[r] &
+Y_2 \ar[r] &
+Y_1
+}
+$$
+Given an increasing map $m' : \mathbf{N} \to \mathbf{N}$ with $m' \geq m$
and setting $a'_i : X_{m'(i)} \to X_{m(i)} \to Y_i$, the pair
+$(m', a')$ defines the same morphism of pro-systems. Conversely, given
two pairs $(m_1, a_1)$ and $(m_2, a_2)$ as above, then they define the same
+morphism of pro-objects if and only if we can find $m' \geq m_1, m_2$
+such that $a'_1 = a'_2$.
+\end{example}
+
+\begin{remark}
+\label{remark-pro-category-copresheaves}
+Let $\mathcal{C}$ be a category. Let $F : \mathcal{I} \to \mathcal{C}$ and
+$G : \mathcal{J} \to \mathcal{C}$ be cofiltered diagrams in $\mathcal{C}$.
+Consider the functors $A, B : \mathcal{C} \to \textit{Sets}$ defined by
+$$
+A(X) = \colim_i \Mor_\mathcal{C}(F(i), X)
+\quad\text{and}\quad
+B(X) = \colim_j \Mor_\mathcal{C}(G(j), X)
+$$
+We claim that a morphism of pro-systems from $F$ to $G$ is the same thing
+as a transformation of functors $t : B \to A$. Namely, given $t$
+we can apply $t$ to the class of $\text{id}_{G(j)}$ in $B(G(j))$
+to get a compatible system of elements
+$\xi_j \in A(G(j)) = \colim_i \Mor_\mathcal{C}(F(i), G(j))$
+which is exactly our definition of a morphism in $\text{Pro-}\mathcal{C}$ in
+Remark \ref{remark-pro-category}. We omit the construction of a
+transformation $B \to A$ given a morphism of pro-objects from $F$ to $G$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-image-essentially-constant}
+Let $\mathcal{C}$ be a category. Let $M : \mathcal{I} \to \mathcal{C}$
+be a diagram with filtered (resp.\ cofiltered) index category $\mathcal{I}$.
+Let $F : \mathcal{C} \to \mathcal{D}$ be a functor.
+If $M$ is essentially constant as an ind-object (resp.\ pro-object),
+then so is $F \circ M : \mathcal{I} \to \mathcal{D}$.
+\end{lemma}
+
+\begin{proof}
+If $X$ is a value for $M$, then it follows immediately from the
+definition that $F(X)$ is a value for $F \circ M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-essentially-constant-ind}
+Let $\mathcal{C}$ be a category. Let $M : \mathcal{I} \to \mathcal{C}$
+be a diagram with filtered index category $\mathcal{I}$.
+The following are equivalent
+\begin{enumerate}
+\item $M$ is an essentially constant ind-object, and
+\item $X = \colim_i M_i$ exists and for any $W$ in $\mathcal{C}$
+the map
+$$
+\colim_i \Mor_\mathcal{C}(W, M_i) \longrightarrow
+\Mor_\mathcal{C}(W, X)
+$$
+is bijective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2) holds. Then $\text{id}_X \in \Mor_\mathcal{C}(X, X)$
+comes from a morphism $X \to M_i$ for some $i$, i.e., $X \to M_i \to X$
+is the identity. Then both maps
+$$
+\Mor_\mathcal{C}(W, X)
+\longrightarrow
+\colim_i \Mor_\mathcal{C}(W, M_i)
+\longrightarrow
+\Mor_\mathcal{C}(W, X)
+$$
+are bijective for all $W$ where the first one is induced by the morphism
+$X \to M_i$ we found above, and the composition is the identity. This means
+that the composition
+$$
+\colim_i \Mor_\mathcal{C}(W, M_i)
+\longrightarrow
+\Mor_\mathcal{C}(W, X)
+\longrightarrow
+\colim_i \Mor_\mathcal{C}(W, M_i)
+$$
+is the identity too. Setting $W = M_j$ and starting with $\text{id}_{M_j}$
+in the colimit, we see that $M_j \to X \to M_i \to M_k$ is equal to
+$M_j \to M_k$ for some $k$ large enough. This proves (1) holds.
+The proof of (1) $\Rightarrow$ (2) is omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-essentially-constant-pro}
+Let $\mathcal{C}$ be a category. Let $M : \mathcal{I} \to \mathcal{C}$
+be a diagram with cofiltered index category $\mathcal{I}$.
+The following are equivalent
+\begin{enumerate}
+\item $M$ is an essentially constant pro-object, and
+\item $X = \lim_i M_i$ exists and for any $W$ in $\mathcal{C}$
+the map
+$$
+\colim_{i \in \mathcal{I}^{opp}} \Mor_\mathcal{C}(M_i, W)
+\longrightarrow
+\Mor_\mathcal{C}(X, W)
+$$
+is bijective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2) holds. Then $\text{id}_X \in \Mor_\mathcal{C}(X, X)$
+comes from a morphism $M_i \to X$ for some $i$, i.e., $X \to M_i \to X$
+is the identity. Then both maps
+$$
+\Mor_\mathcal{C}(X, W)
+\longrightarrow
+\colim_i \Mor_\mathcal{C}(M_i, W)
+\longrightarrow
+\Mor_\mathcal{C}(X, W)
+$$
+are bijective for all $W$ where the first one is induced by the morphism
+$M_i \to X$ we found above, and the composition is the identity. This means
+that the composition
+$$
+\colim_i \Mor_\mathcal{C}(M_i, W)
+\longrightarrow
+\Mor_\mathcal{C}(X, W)
+\longrightarrow
+\colim_i \Mor_\mathcal{C}(M_i, W)
+$$
+is the identity too. Setting $W = M_j$ and starting with $\text{id}_{M_j}$
+in the colimit, we see that $M_k \to M_i \to X \to M_j$ is equal to
+$M_k \to M_j$ for some $k$ large enough. This proves (1) holds.
+The proof of (1) $\Rightarrow$ (2) is omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cofinal-essentially-constant}
+Let $\mathcal{C}$ be a category. Let $H : \mathcal{I} \to \mathcal{J}$
+be a functor of filtered index categories. If $H$ is cofinal, then
+any diagram $M : \mathcal{J} \to \mathcal{C}$ is essentially constant
+if and only if $M \circ H$ is essentially constant.
+\end{lemma}
+
+\begin{proof}
+This follows formally from
+Lemmas \ref{lemma-characterize-essentially-constant-ind} and
+\ref{lemma-cofinal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-essentially-constant-over-product}
+Let $\mathcal{I}$ and $\mathcal{J}$ be filtered categories and denote
+$p : \mathcal{I} \times \mathcal{J} \to \mathcal{J}$ the projection.
+Then $\mathcal{I} \times \mathcal{J}$ is filtered and a diagram
+$M : \mathcal{J} \to \mathcal{C}$ is essentially constant if and only
+if $M \circ p : \mathcal{I} \times \mathcal{J} \to \mathcal{C}$
+is essentially constant.
+\end{lemma}
+
+\begin{proof}
+We omit the verification that $\mathcal{I} \times \mathcal{J}$ is
+filtered. The equivalence follows from
+Lemma \ref{lemma-cofinal-essentially-constant}
+because $p$ is cofinal (verification omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-initial-essentially-constant}
+Let $\mathcal{C}$ be a category. Let $H : \mathcal{I} \to \mathcal{J}$
+be a functor of cofiltered index categories. If $H$ is initial, then
+any diagram $M : \mathcal{J} \to \mathcal{C}$ is essentially constant
+if and only if $M \circ H$ is essentially constant.
+\end{lemma}
+
+\begin{proof}
+This follows formally from
+Lemmas \ref{lemma-characterize-essentially-constant-pro},
+\ref{lemma-initial}, \ref{lemma-cofinal}, and
+the fact that if $\mathcal{I}$ is initial in $\mathcal{J}$,
+then $\mathcal{I}^{opp}$ is cofinal in $\mathcal{J}^{opp}$.
+\end{proof}
+
+
+
+
+
+\section{Exact functors}
+\label{section-exact-functor}
+
+\noindent
+In this section we define exact functors.
+
+\begin{definition}
+\label{definition-exact}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a functor.
+\begin{enumerate}
+\item Suppose all finite limits exist in $\mathcal{A}$.
+We say $F$ is {\it left exact} if it commutes
+with all finite limits.
+\item Suppose all finite colimits exist in $\mathcal{A}$.
+We say $F$ is {\it right exact} if it commutes
+with all finite colimits.
+\item We say $F$ is {\it exact} if it is both left and right
+exact.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-characterize-left-exact}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a functor.
+Suppose all finite limits exist in $\mathcal{A}$,
+see Lemma \ref{lemma-finite-limits-exist}.
+The following are equivalent:
+\begin{enumerate}
+\item $F$ is left exact,
+\item $F$ commutes with finite products and equalizers, and
+\item $F$ transforms a final object of $\mathcal{A}$
+into a final object of $\mathcal{B}$, and commutes with fibre products.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Lemma \ref{lemma-limits-products-equalizers} shows that (2) implies (1).
+Suppose (3) holds. The fibre product over the final object is the product.
+If $a, b : A \to B$ are morphisms of $\mathcal{A}$, then the
+equalizer of $a, b$ is
+$$
+(A \times_{a, B, b} A)\times_{(\text{pr}_1, \text{pr}_2), A \times A, \Delta} A.
+$$
+Thus (3) implies (2). Finally (1) implies (3) because
+the empty limit is a final object, and fibre products are limits.
+\end{proof}
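The formula in the proof can be tested in \textit{Sets}, where fibre products are sets of pairs. The sketch below (with hypothetical maps $a, b$) recovers the equalizer of $a, b : A \to B$ from fibre products.

```python
# Sketch in Sets of the formula from the proof: recovering the
# equalizer of a, b : A -> B from fibre products.  (Maps hypothetical.)
A = [0, 1, 2, 3]
a = lambda x: x % 2
b = lambda x: 0

# A x_{a,B,b} A = {(x, y) : a(x) = b(y)}; intersecting with the
# diagonal via (pr1, pr2) and Delta keeps the pairs (x, x), which
# are exactly the points of the equalizer.
fibre = [(x, y) for x in A for y in A if a(x) == b(y)]
via_fibre_products = sorted(x for (x, y) in fibre if x == y)
equalizer = [x for x in A if a(x) == b(x)]
```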
+
+\begin{lemma}
+\label{lemma-characterize-right-exact}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a functor.
+Suppose all finite colimits exist in $\mathcal{A}$,
+see Lemma \ref{lemma-colimits-exist}.
+The following are equivalent:
+\begin{enumerate}
+\item $F$ is right exact,
+\item $F$ commutes with finite coproducts and coequalizers, and
+\item $F$ transforms an initial object of $\mathcal{A}$
+into an initial object of $\mathcal{B}$, and commutes with pushouts.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Dual to Lemma \ref{lemma-characterize-left-exact}.
+\end{proof}
+
+
+
+
+\section{Adjoint functors}
+\label{section-adjoint}
+
+\begin{definition}
+\label{definition-adjoint}
+Let $\mathcal{C}$, $\mathcal{D}$ be categories.
+Let $u : \mathcal{C} \to \mathcal{D}$ and
+$v : \mathcal{D} \to \mathcal{C}$ be functors.
+We say that $u$ is a {\it left adjoint} of $v$, or that
+$v$ is a {\it right adjoint} to $u$ if there are bijections
+$$
+\Mor_\mathcal{D}(u(X), Y)
+\longrightarrow
+\Mor_\mathcal{C}(X, v(Y))
+$$
+functorial in $X \in \Ob(\mathcal{C})$, and
+$Y \in \Ob(\mathcal{D})$.
+\end{definition}
+
+\noindent
+In other words, this means that there is a {\it given} isomorphism of functors
+$\mathcal{C}^{opp} \times \mathcal{D} \to \textit{Sets}$ from
+$\Mor_\mathcal{D}(u(-), -)$ to $\Mor_\mathcal{C}(-, v(-))$. For any object
+$X$ of $\mathcal{C}$ we obtain a morphism $X \to v(u(X))$ corresponding to
+$\text{id}_{u(X)}$. Similarly, for any object $Y$ of $\mathcal{D}$ we obtain
+a morphism $u(v(Y)) \to Y$ corresponding to $\text{id}_{v(Y)}$.
+These maps are called the {\it adjunction maps}. The adjunction maps
+are functorial in $X$ and $Y$, hence we obtain morphisms of functors
+$$
+\eta : \text{id}_\mathcal{C} \to v \circ u\quad (\text{unit})
+\quad\text{and}\quad
+\epsilon : u \circ v \to \text{id}_\mathcal{D}\quad (\text{counit}).
+$$
+Moreover, if $\alpha : u(X) \to Y$
+and $\beta : X \to v(Y)$ are morphisms, then the following are equivalent
+\begin{enumerate}
+\item $\alpha$ and $\beta$ correspond to each other via the
+bijection of the definition,
+\item $\beta$ is the composition $X \to v(u(X)) \xrightarrow{v(\alpha)} v(Y)$,
+and
+\item $\alpha$ is the composition $u(X) \xrightarrow{u(\beta)} u(v(Y)) \to Y$.
+\end{enumerate}
+In this way one can reformulate the notion of adjoint functors in terms
+of adjunction maps.
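As a concrete illustration (a hypothetical example, not from the text), the free-monoid/forgetful adjunction between \textit{Sets} and monoids exhibits this correspondence: a monoid map $\alpha : u(X) \to M$ and the set map $\beta = v(\alpha) \circ \eta_X$ determine each other.

```python
# Sketch of the alpha <-> beta correspondence for the free-monoid /
# forgetful adjunction: u(X) = words over X, v(M) = underlying set.
# (The sets and maps below are hypothetical illustrations.)
X = ["p", "q"]
beta = {"p": "ab", "q": "c"}    # beta : X -> v(M), M = (strings, concat)

def eta(x):                     # unit X -> v(u(X)), x |-> the word [x]
    return [x]

def alpha(word):                # induced monoid map u(X) -> M
    return "".join(beta[x] for x in word)
```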
+
+\begin{lemma}
+\label{lemma-adjoint-exists}
+Let $u : \mathcal{C} \to \mathcal{D}$ be a functor between categories.
+If for each $y \in \Ob(\mathcal{D})$ the functor
+$x \mapsto \Mor_\mathcal{D}(u(x), y)$ is representable, then
+$u$ has a right adjoint.
+\end{lemma}
+
+\begin{proof}
+For each $y$ choose an object $v(y)$ and an isomorphism
+$\Mor_\mathcal{C}(-, v(y)) \to \Mor_\mathcal{D}(u(-), y)$
+of functors. By Yoneda's lemma (Lemma \ref{lemma-yoneda})
+for any morphism $g : y \to y'$ the transformation of functors
+$$
+\Mor_\mathcal{C}(-, v(y)) \to \Mor_\mathcal{D}(u(-), y) \to
+\Mor_\mathcal{D}(u(-), y') \to \Mor_\mathcal{C}(-, v(y'))
+$$
+corresponds to a unique morphism $v(g) : v(y) \to v(y')$.
+We omit the verification that $v$ is a functor and that
+it is right adjoint to $u$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-left-adjoint-composed-fully-faithful}
+\begin{reference}
+Bhargav Bhatt, private communication.
+\end{reference}
+Let $u$ be a left adjoint to $v$ as in Definition \ref{definition-adjoint}.
+\begin{enumerate}
+\item If $v \circ u$ is fully faithful, then $u$ is fully faithful.
+\item If $u \circ v$ is fully faithful, then $v$ is fully faithful.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (2). Assume $u \circ v$ is fully faithful.
+Say we have $X$, $Y$ in $\mathcal{D}$.
+Then the natural composite map
+$$
+\Mor(X,Y) \to \Mor(v(X),v(Y)) \to \Mor(u(v(X)), u(v(Y)))
+$$
+is a bijection, so $v$ is at least faithful. To show full faithfulness,
+we must show that the second map above is injective.
+But the adjunction between $u$ and $v$ says that
+$$
+\Mor(v(X), v(Y)) \to \Mor(u(v(X)), u(v(Y))) \to \Mor(u(v(X)), Y)
+$$
is a bijection, where the first map is the natural one and
+the second map comes from the counit $u(v(Y)) \to Y$ of the adjunction.
+So this says that
+$\Mor(v(X), v(Y)) \to \Mor(u(v(X)), u(v(Y)))$
+is also injective, as wanted. The proof of (1) is dual to this.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-adjoint-fully-faithful}
+Let $u$ be a left adjoint to $v$ as in Definition \ref{definition-adjoint}.
+Then
+\begin{enumerate}
+\item $u$ is fully faithful $\Leftrightarrow$ $\text{id} \cong v \circ u$
+$\Leftrightarrow$ $\eta : \text{id} \to v \circ u$ is an isomorphism,
+\item $v$ is fully faithful $\Leftrightarrow$
+$u \circ v \cong \text{id}$ $\Leftrightarrow$
+$\epsilon : u \circ v \to \text{id}$ is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1).
+Assume $u$ is fully faithful. We will show $\eta_X : X \to v(u(X))$
+is an isomorphism. Let $X' \to v(u(X))$ be any morphism.
+By adjointness this corresponds to a morphism $u(X') \to u(X)$. By fully
+faithfulness of $u$ this corresponds to a unique morphism $X' \to X$.
+Thus we see that post-composing by $\eta_X$ defines a bijection
+$\Mor(X', X) \to \Mor(X', v(u(X)))$. Hence $\eta_X$ is an isomorphism.
+If there exists an isomorphism $\text{id} \cong v \circ u$ of functors,
+then $v \circ u$ is fully faithful. By
+Lemma \ref{lemma-left-adjoint-composed-fully-faithful} we see
+that $u$ is fully faithful. By the above this implies $\eta$
+is an isomorphism. Thus all $3$ conditions are equivalent (and these
+conditions are also equivalent to $v \circ u$ being fully faithful).
+
+\medskip\noindent
+Part (2) is dual to part (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-adjoint-exact}
+Let $u$ be a left adjoint to $v$ as in Definition \ref{definition-adjoint}.
+\begin{enumerate}
+\item Suppose that $M : \mathcal{I} \to \mathcal{C}$ is a diagram,
+and suppose that $\colim_\mathcal{I} M$ exists in
+$\mathcal{C}$. Then $u(\colim_\mathcal{I} M) =
+\colim_\mathcal{I} u \circ M$. In other words,
+$u$ commutes with (representable) colimits.
+\item Suppose that $M : \mathcal{I} \to \mathcal{D}$ is a diagram,
+and suppose that $\lim_\mathcal{I} M$ exists in
+$\mathcal{D}$. Then $v(\lim_\mathcal{I} M) =
+\lim_\mathcal{I} v \circ M$. In other words $v$ commutes
+with representable limits.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+A morphism from a colimit into an object is the same as a compatible
system of morphisms from the constituents of the colimit into the
+object, see Remark \ref{remark-limit-colim}. So
+$$
+\begin{matrix}
+\Mor_\mathcal{D}(u(\colim_{i \in \mathcal{I}} M_i), Y) &
+= & \Mor_\mathcal{C}(\colim_{i \in \mathcal{I}} M_i, v(Y)) \\
+& = &
+\lim_{i \in \mathcal{I}^{opp}}
+\Mor_\mathcal{C}(M_i, v(Y)) \\
+& = &
+\lim_{i \in \mathcal{I}^{opp}}
+\Mor_\mathcal{D}(u(M_i), Y)
+\end{matrix}
+$$
+proves that $u(\colim_{i \in \mathcal{I}} M_i)$ is
+the colimit we are looking for.
+A similar argument works for the other statement.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-adjoint}
+Let $u$ be a left adjoint of $v$ as in Definition \ref{definition-adjoint}.
+\begin{enumerate}
+\item If $\mathcal{C}$ has finite colimits, then $u$ is right exact.
+\item If $\mathcal{D}$ has finite limits, then $v$ is left exact.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Obvious from the definitions and Lemma \ref{lemma-adjoint-exact}.
+\end{proof}
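
\noindent
As a concrete illustration of
Lemmas \ref{lemma-adjoint-exact} and \ref{lemma-exact-adjoint}
consider the adjunction between the free abelian group functor and the
forgetful functor (a standard example, not taken from the discussion above):
$$
\Mor_{\textit{Ab}}(\mathbf{Z}[E], A) = \text{Map}(E, A)
$$
where $\mathbf{Z}[E]$ is the free abelian group on the set $E$. The left
adjoint $E \mapsto \mathbf{Z}[E]$ commutes with colimits; for example it
turns disjoint unions into direct sums:
$$
\mathbf{Z}[E \amalg E'] = \mathbf{Z}[E] \oplus \mathbf{Z}[E'].
$$
The right adjoint (the forgetful functor) commutes with limits; for example
the underlying set of a product of abelian groups is the product of the
underlying sets.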
+
+\begin{lemma}
+\label{lemma-unit-counit-relations}
+Let $u : \mathcal{C} \to \mathcal{D}$ be a left adjoint to the functor
+$v : \mathcal{D} \to \mathcal{C}$. Let $\eta_X : X \to v(u(X))$ be the unit
+and $\epsilon_Y : u(v(Y)) \to Y$ be the counit. Then
+$$
u(X) \xrightarrow{u(\eta_X)} u(v(u(X)))
+\xrightarrow{\epsilon_{u(X)}} u(X)
+\quad\text{and}\quad
+v(Y) \xrightarrow{\eta_{v(Y)}} v(u(v(Y))) \xrightarrow{v(\epsilon_Y)}
+v(Y)
+$$
+are the identity morphisms.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
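
\noindent
To illustrate the lemma, consider the free abelian group functor
$u : \textit{Sets} \to \textit{Ab}$, $E \mapsto \mathbf{Z}[E]$, left adjoint
to the forgetful functor $v$ (a standard example, not taken from the text).
The unit $\eta_E : E \to \mathbf{Z}[E]$ sends $e$ to the generator $[e]$
and the counit $\epsilon_A : \mathbf{Z}[A] \to A$ sends a formal sum
$\sum n_a[a]$ to the actual sum $\sum n_a a$ computed in $A$. The composition
$$
\mathbf{Z}[E] \xrightarrow{u(\eta_E)} \mathbf{Z}[\mathbf{Z}[E]]
\xrightarrow{\epsilon_{\mathbf{Z}[E]}} \mathbf{Z}[E]
$$
sends a generator $[e]$ to $[[e]]$ and then back to $[e]$, hence is the
identity, as the lemma asserts.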
+
+\begin{lemma}
+\label{lemma-transformation-between-functors-and-adjoints}
+Let $u_1, u_2 : \mathcal{C} \to \mathcal{D}$ be functors with right
+adjoints $v_1, v_2 : \mathcal{D} \to \mathcal{C}$. Let $\beta : u_2 \to u_1$
+be a transformation of functors. Let $\beta^\vee : v_1 \to v_2$ be
+the corresponding transformation of adjoint functors. Then
+$$
+\xymatrix{
+u_2 \circ v_1 \ar[r]_\beta \ar[d]_{\beta^\vee} &
+u_1 \circ v_1 \ar[d] \\
+u_2 \circ v_2 \ar[r] & \text{id}
+}
+$$
+is commutative where the unlabeled arrows are the counit transformations.
+\end{lemma}
+
+\begin{proof}
+This is true because $\beta^\vee_D : v_1D \to v_2D$ is the unique
morphism such that the induced map $\Mor(C, v_1D) \to \Mor(C, v_2D)$
is the map $\Mor(u_1C, D) \to \Mor(u_2C, D)$ induced by
+$\beta_C : u_2C \to u_1C$. Namely, this means the map
+$$
+\Mor(u_1 v_1 D, D') \to \Mor(u_2 v_1 D, D')
+$$
+induced by $\beta_{v_1 D}$ is the same as the map
+$$
+\Mor(v_1 D, v_1 D') \to \Mor(v_1 D, v_2 D')
+$$
+induced by $\beta^\vee_{D'}$. Taking $D' = D$ we find that the counit
+$u_1 v_1 D \to D$ precomposed by $\beta_{v_1D}$ corresponds to $\beta^\vee_D$
+under adjunction. This exactly means that the diagram commutes when
+evaluated on $D$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-counits}
+Let $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$ be categories.
+Let $v : \mathcal{A} \to \mathcal{B}$ and
+$v' : \mathcal{B} \to \mathcal{C}$ be functors
+with left adjoints $u$ and $u'$ respectively. Then
+\begin{enumerate}
+\item The functor $v'' = v' \circ v$ has a left adjoint equal to
+$u'' = u \circ u'$.
+\item Given $X$ in $\mathcal{A}$ we have
+\begin{equation}
+\label{equation-compose-counits}
+\epsilon_X^v \circ u(\epsilon^{v'}_{v(X)}) = \epsilon^{v''}_X :
+u''(v''(X)) \to X
+\end{equation}
where $\epsilon$ denotes the counits of the adjunctions involved.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let us unwind the formula in (2) because this will also immediately
+prove (1). First, the counit of the adjunctions for the pairs
+$(u, v)$ and $(u', v')$ are maps
+$\epsilon_X^v : u(v(X)) \to X$ and
+$\epsilon_Y^{v'} : u'(v'(Y)) \to Y$, see discussion following
+Definition \ref{definition-adjoint}.
+With $u''$ and $v''$ as in (1) we unwind everything
+$$
+u''(v''(X)) = u(u'(v'(v(X)))) \xrightarrow{u(\epsilon_{v(X)}^{v'})}
+u(v(X)) \xrightarrow{\epsilon_X^v} X
+$$
+to get the map on the left hand side of (\ref{equation-compose-counits}).
+Let us denote this by $\epsilon_X^{v''}$ for now.
+To see that this is the counit of an adjoint pair
+$(u'', v'')$ we have to show that given $Z$ in $\mathcal{C}$
+the rule that sends a morphism $\beta : Z \to v''(X)$
+to $\alpha = \epsilon_X^{v''} \circ u''(\beta) : u''(Z) \to X$
+is a bijection on sets of morphisms.
This is true because it is the composition of two bijections: first, the
rule sending $\beta$ to
$\epsilon_{v(X)}^{v'} \circ u'(\beta) : u'(Z) \to v(X)$,
which is a bijection by the adjunction for the pair $(u', v')$, and second,
the rule sending the result to
$\epsilon_X^v \circ u(\epsilon_{v(X)}^{v'} \circ u'(\beta))$,
which is a bijection by the adjunction for the pair $(u, v)$.
+\end{proof}
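
\noindent
Part (1) can be illustrated by a standard chain of forgetful functors
(an example not taken from the text above): let
$v : \mathbf{Q}\textit{-Vect} \to \textit{Ab}$ and
$v' : \textit{Ab} \to \textit{Sets}$ be the forgetful functors with left
adjoints $u = - \otimes_{\mathbf{Z}} \mathbf{Q}$ and
$u' : E \mapsto \mathbf{Z}[E]$ (free abelian group). The lemma recovers the
fact that the free $\mathbf{Q}$-vector space on a set $E$ is
$$
u''(E) = u(u'(E)) = \mathbf{Z}[E] \otimes_{\mathbf{Z}} \mathbf{Q},
$$
the $\mathbf{Q}$-vector space with basis $E$.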
+
+
+
+
+
+
+\section{A criterion for representability}
+\label{section-representable}
+
+\noindent
+The following lemma is often useful to prove the existence of
+universal objects in big categories, please see the discussion
+in Remark \ref{remark-how-to-use-it}.
+
+\begin{lemma}
+\label{lemma-a-version-of-brown}
+Let $\mathcal{C}$ be a big\footnote{See Remark \ref{remark-big-categories}.}
+category which has limits. Let $F : \mathcal{C} \to \textit{Sets}$ be a
+functor. Assume that
+\begin{enumerate}
+\item $F$ commutes with limits,
\item there exists a family $\{x_i\}_{i \in I}$ of objects of $\mathcal{C}$
+and for each $i \in I$ an element $f_i \in F(x_i)$
+such that for $y \in \Ob(\mathcal{C})$ and $g \in F(y)$
+there exist an $i$ and a morphism $\varphi : x_i \to y$
+with $F(\varphi)(f_i) = g$.
+\end{enumerate}
+Then $F$ is representable, i.e., there exists an object $x$
+of $\mathcal{C}$ such that
+$$
+F(y) = \Mor_\mathcal{C}(x, y)
+$$
+functorially in $y$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I}$ be the category whose objects are the pairs $(x_i, f_i)$
+and whose morphisms $(x_i, f_i) \to (x_{i'}, f_{i'})$ are maps
+$\varphi : x_i \to x_{i'}$ in $\mathcal{C}$
+such that $F(\varphi)(f_i) = f_{i'}$. Set
+$$
+x = \lim_{(x_i, f_i) \in \mathcal{I}} x_i
+$$
+(this will not be the $x$ we are looking for, see below).
+The limit exists by assumption. As $F$ commutes with limits
+we have
+$$
+F(x) = \lim_{(x_i, f_i) \in \mathcal{I}} F(x_i).
+$$
+Hence there is a universal element $f \in F(x)$ which maps to $f_i \in F(x_i)$
+under $F$ applied to the projection map $x \to x_i$.
+Using $f$ we obtain a transformation of functors
+$$
+\xi : \Mor_\mathcal{C}(x, - ) \longrightarrow F(-)
+$$
+see Section \ref{section-opposite}. Let $y$ be an arbitrary object of
+$\mathcal{C}$ and let $g \in F(y)$. Choose $x_i \to y$ such that $f_i$
+maps to $g$ which is possible by assumption. Then $F$ applied to the maps
+$$
+x \longrightarrow x_i \longrightarrow y
+$$
+(the first being the projection map of the limit defining $x$)
+sends $f$ to $g$. Hence the transformation $\xi$ is surjective.
+
+\medskip\noindent
+In order to find the object representing $F$ we let $e : x' \to x$ be the
+equalizer of all self maps $\varphi : x \to x$ with $F(\varphi)(f) = f$.
+Since $F$ commutes with limits, it commutes with equalizers, and
+we see there exists an $f' \in F(x')$ mapping to $f$ in $F(x)$.
+Since $\xi$ is surjective and since $f'$ maps to $f$ we see that
+also $\xi' : \Mor_\mathcal{C}(x', -) \to F(-)$ is surjective.
Finally, suppose that $a, b : x' \to y$ are two maps such that
$F(a)(f') = F(b)(f')$. We have to show $a = b$. Consider the equalizer
$e' : x'' \to x'$ of $a$ and $b$. Again we find $f'' \in F(x'')$ mapping
to $f'$. Choose a map $\psi : x \to x''$ such that $F(\psi)(f) = f''$.
Then we see that $e \circ e' \circ \psi : x \to x$ is a morphism
with $F(e \circ e' \circ \psi)(f) = f$. Hence
$e \circ e' \circ \psi \circ e = e$. Since $e$ is a monomorphism,
this implies $e' \circ \psi \circ e = \text{id}_{x'}$, so $e'$ is a
(split) epimorphism. As $a \circ e' = b \circ e'$ by construction of
the equalizer, we conclude $a = b$ as desired.
+\end{proof}
+
+\begin{remark}
+\label{remark-how-to-use-it}
+The lemma above is often used to construct the free something on something.
+For example the free abelian group on a set, the free group on a set, etc.
+The idea, say in the case of the free group on a set $E$ is to
+consider the functor
+$$
+F : \textit{Groups} \to \textit{Sets},\quad
+G \longmapsto \text{Map}(E, G)
+$$
+This functor commutes with limits. As our family of objects
+we can take a family $E \to G_i$ consisting of groups $G_i$
+of cardinality at most $\max(\aleph_0, |E|)$ and set maps
+$E \to G_i$ such that every isomorphism class of such a structure
+occurs at least once. Namely, if $E \to G$ is a map from $E$ to
+a group $G$, then the subgroup $G'$ generated by the image has
+cardinality at most $\max(\aleph_0, |E|)$. The lemma tells us
+the functor is representable, hence there exists a group
+$F_E$ such that $\Mor_{\textit{Groups}}(F_E, G) = \text{Map}(E, G)$.
+In particular, the identity morphism of $F_E$ corresponds to
+a map $E \to F_E$ and one can show that $F_E$ is generated by
+the image without imposing any relations.
+
+\medskip\noindent
+Another typical application is that we can use the lemma to construct
+colimits once it is known that limits exist. We illustrate it using
+the category of topological spaces which has limits by
+Topology, Lemma \ref{topology-lemma-limits}. Namely, suppose
+that $\mathcal{I} \to \textit{Top}$, $i \mapsto X_i$ is a functor.
+Then we can consider
+$$
+F : \textit{Top} \longrightarrow \textit{Sets},\quad
+Y \longmapsto \lim_\mathcal{I} \Mor_{\textit{Top}}(X_i, Y)
+$$
+This functor commutes with limits. Moreover, given any topological space
+$Y$ and an element $(\varphi_i : X_i \to Y)$ of $F(Y)$, there is
+a subspace $Y' \subset Y$ of cardinality at most $|\coprod X_i|$
+such that the morphisms $\varphi_i$ map into $Y'$. Namely, we can
+take the induced topology on the union of the images of the $\varphi_i$.
+Thus it is clear that the hypotheses of the lemma are satisfied and we find a
+topological space $X$
+representing the functor $F$, which precisely means that $X$ is
+the colimit of the diagram $i \mapsto X_i$.
+\end{remark}
+
+\begin{theorem}[Adjoint functor theorem]
+\label{theorem-adjoint-functor}
+Let $G : \mathcal{C} \to \mathcal{D}$ be a functor of big categories.
+Assume $\mathcal{C}$ has limits, $G$ commutes with them, and for
+every object $y$ of $\mathcal{D}$ there exists a set of pairs
+$(x_i, f_i)_{i \in I}$ with $x_i \in \Ob(\mathcal{C})$,
+$f_i \in \Mor_\mathcal{D}(y, G(x_i))$ such that for any
+pair $(x, f)$ with $x \in \Ob(\mathcal{C})$,
+$f \in \Mor_\mathcal{D}(y, G(x))$ there are an $i$ and a morphism
+$h : x_i \to x$ such that $f = G(h) \circ f_i$.
+Then $G$ has a left adjoint $F$.
+\end{theorem}
+
+\begin{proof}
+The assumptions imply that for every object $y$ of $\mathcal{D}$
+the functor $x \mapsto \Mor_\mathcal{D}(y, G(x))$ satisfies the
+assumptions of Lemma \ref{lemma-a-version-of-brown}.
+Thus it is representable by an object, let's call it $F(y)$.
+An application of Yoneda's lemma (Lemma \ref{lemma-yoneda})
+turns the rule $y \mapsto F(y)$ into a functor which
+by construction is an adjoint to $G$. We omit the details.
+\end{proof}
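
\noindent
For example (compare Remark \ref{remark-how-to-use-it}), the theorem applies
to the forgetful functor $G : \textit{Groups} \to \textit{Sets}$: it commutes
with limits, and given a set $y$ we may take the set of all pairs $(x_i, f_i)$
where $x_i$ runs through a set of representatives of isomorphism classes of
groups of cardinality at most $\max(\aleph_0, |y|)$ and $f_i$ through the
maps $y \to G(x_i)$. Any $f : y \to G(x)$ factors through the subgroup of $x$
generated by the image of $f$, which has cardinality at most
$\max(\aleph_0, |y|)$, so the hypothesis is satisfied. The left adjoint $F$
produced by the theorem is the free group functor.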
+
+
+
+
+
+\section{Categorically compact objects}
+\label{section-compact}
+
+\noindent
+A little bit about ``small'' objects of a category.
+
+\begin{definition}
+\label{definition-compact-object}
+Let $\mathcal{C}$ be a big\footnote{See Remark \ref{remark-big-categories}.}
category. An object $X$ of $\mathcal{C}$ is called {\it categorically compact}
if we have
+$$
+\Mor_\mathcal{C}(X, \colim_i M_i) =
+\colim_i \Mor_\mathcal{C}(X, M_i)
+$$
+for every filtered diagram $M : \mathcal{I} \to \mathcal{C}$ such that
+$\colim_i M_i$ exists.
+\end{definition}
+
+\noindent
+Often this definition is made only under the assumption that $\mathcal{C}$
+has all filtered colimits.
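
\noindent
For example (a standard fact, not proved in the text), a finite set $X$ is a
categorically compact object of the category of sets: given a filtered
diagram $M : \mathcal{I} \to \textit{Sets}$, a map $X \to \colim_i M_i$ hits
finitely many elements of the colimit, each represented at some stage, so by
filteredness the map factors through some $M_i$; similarly, two maps
$X \to M_i$ becoming equal in the colimit are already equal after a
transition map. Hence
$$
\Mor_{\textit{Sets}}(X, \colim_i M_i) =
\colim_i \Mor_{\textit{Sets}}(X, M_i).
$$
By contrast, an infinite set is not categorically compact: the identity of
$\mathbf{N} = \colim_n \{0, \ldots, n\}$ does not factor through any finite
stage.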
+
+\begin{lemma}
+\label{lemma-extend-functor-by-colim}
+Let $\mathcal{C}$ and $\mathcal{D}$ be big categories having filtered
+colimits. Let $\mathcal{C}' \subset \mathcal{C}$ be a small full subcategory
+consisting of categorically compact objects of $\mathcal{C}$ such that every
+object of $\mathcal{C}$ is a filtered colimit of objects of $\mathcal{C}'$.
+Then every functor $F' : \mathcal{C}' \to \mathcal{D}$ has a unique
+extension $F : \mathcal{C} \to \mathcal{D}$ commuting with filtered colimits.
+\end{lemma}
+
+\begin{proof}
+For every object $X$ of $\mathcal{C}$ we may write $X$ as a filtered
+colimit $X = \colim X_i$ with $X_i \in \Ob(\mathcal{C}')$. Then we set
+$$
+F(X) = \colim F'(X_i)
+$$
+in $\mathcal{D}$. We will show below that this construction does not
+depend on the choice of the colimit presentation of $X$.
+
+\medskip\noindent
Suppose given a morphism $\alpha : X \to Y$ of $\mathcal{C}$
where $X = \colim_{i \in I} X_i$ and $Y = \colim_{j \in J} Y_j$
are written as filtered colimits of objects in $\mathcal{C}'$.
+For each $i \in I$ since $X_i$ is a categorically compact object of
+$\mathcal{C}$ we can find a $j \in J$ and a commutative diagram
+$$
+\xymatrix{
+X_i \ar[r] \ar[d] & X \ar[d]^\alpha \\
+Y_j \ar[r] & Y
+}
+$$
+Then we obtain a morphism $F'(X_i) \to F'(Y_j) \to F(Y)$ where the second
+morphism is the coprojection into $F(Y) = \colim F'(Y_j)$. The arrow
+$\beta_i : F'(X_i) \to F(Y)$ does not depend on the choice of $j$.
+For $i \leq i'$ the composition
+$$
+F'(X_i) \to F'(X_{i'}) \xrightarrow{\beta_{i'}} F(Y)
+$$
is equal to $\beta_i$. Thus we obtain a well defined arrow
$$
F(\alpha) : F(X) = \colim F'(X_i) \to F(Y)
$$
by the universal property of the colimit. If $\alpha' : Y \to Z$ is a
second morphism of $\mathcal{C}$ and $Z = \colim Z_k$ is also
written as a filtered colimit of objects in $\mathcal{C}'$, then
+it is a pleasant exercise to show that the induced
+morphisms $F(\alpha) : F(X) \to F(Y)$ and $F(\alpha') : F(Y) \to F(Z)$
+compose to the morphism $F(\alpha' \circ \alpha)$. Details omitted.
+
+\medskip\noindent
+In particular, if we are given two presentations
+$X = \colim X_i$ and $X = \colim X'_{i'}$ as filtered
+colimits of systems in $\mathcal{C}'$, then we get mutually inverse
+arrows $\colim F'(X_i) \to \colim F'(X'_{i'})$ and
+$\colim F'(X'_{i'}) \to \colim F'(X_i)$. In other words, the
+value $F(X)$ is well defined independent of the choice of the
+presentation of $X$ as a filtered colimit of objects of $\mathcal{C}'$.
+Together with the functoriality of $F$ discussed in the previous
+paragraph, we find that $F$ is a functor. Also, it is clear that
+$F(X) = F'(X)$ if $X \in \Ob(\mathcal{C}')$.
+
+\medskip\noindent
+The uniqueness statement in the lemma is clear, provided
+we show that $F$ commutes with filtered colimits (because this
+statement doesn't make sense otherwise). To show this, suppose that
+$X = \colim_{\lambda \in \Lambda} X_\lambda$
+is a filtered colimit of $\mathcal{C}$. Since $F$ is a functor
+we certainly get a map
+$$
+\colim_\lambda F(X_\lambda) \longrightarrow F(X)
+$$
+On the other hand, write $X = \colim X_i$
+as a filtered colimit of objects of $\mathcal{C}'$.
+As above, for each $i \in I$ we can choose a $\lambda \in \Lambda$
+and a commutative diagram
+$$
+\xymatrix{
+X_i \ar[rr] \ar[rd] & & X_\lambda \ar[ld] \\
+& X
+}
+$$
+As above this determines a well defined morphism
+$F'(X_i) \to \colim_\lambda F(X_\lambda)$ compatible
+with transition morphisms and hence a morphism
+$$
+F(X) = \colim_i F'(X_i) \longrightarrow \colim_\lambda F(X_\lambda)
+$$
+This morphism is inverse to the morphism above (details omitted)
+and proves that $F(X) = \colim_\lambda F(X_\lambda)$ as desired.
+\end{proof}
+
+
+
+
+
+
+
+\section{Localization in categories}
+\label{section-localization}
+
+\noindent
+The basic idea of this section is given a category $\mathcal{C}$
+and a set of arrows $S$ to construct a functor
+$F : \mathcal{C} \to S^{-1}\mathcal{C}$
+such that all elements of $S$ become invertible in $S^{-1}\mathcal{C}$
+and such that $F$ is universal among all functors with this property.
+References for this section are \cite[Chapter I, Section 2]{GZ}
+and \cite[Chapter II, Section 2]{Verdier}.
+
+\begin{definition}
+\label{definition-multiplicative-system}
+Let $\mathcal{C}$ be a category. A set of arrows $S$ of $\mathcal{C}$ is
+called a {\it left multiplicative system} if it has the following properties:
+\begin{enumerate}
+\item[LMS1] The identity of every object of $\mathcal{C}$ is in $S$ and
+the composition of two composable elements of $S$ is in $S$.
+\item[LMS2] Every solid diagram
+$$
+\xymatrix{
+X \ar[d]_t \ar[r]_g & Y \ar@{..>}[d]^s \\
+Z \ar@{..>}[r]^f & W
+}
+$$
+with $t \in S$ can be completed to a commutative dotted square with
+$s \in S$.
+\item[LMS3] For every pair of morphisms $f, g : X \to Y$ and
+$t \in S$ with target $X$ such that $f \circ t = g \circ t$
+there exists an $s \in S$ with source $Y$ such that
+$s \circ f = s \circ g$.
+\end{enumerate}
+A set of arrows $S$ of $\mathcal{C}$ is
+called a {\it right multiplicative system}
+if it has the following properties:
+\begin{enumerate}
+\item[RMS1] The identity of every object of $\mathcal{C}$ is in $S$ and
+the composition of two composable elements of $S$ is in $S$.
+\item[RMS2] Every solid diagram
+$$
+\xymatrix{
+X \ar@{..>}[d]_t \ar@{..>}[r]_g & Y \ar[d]^s \\
+Z \ar[r]^f & W
+}
+$$
+with $s \in S$ can be completed to a commutative dotted square with
+$t \in S$.
+\item[RMS3] For every pair of morphisms $f, g : X \to Y$ and
+$s \in S$ with source $Y$ such that $s \circ f = s \circ g$
+there exists a $t \in S$ with target $X$ such that
+$f \circ t = g \circ t$.
+\end{enumerate}
+A set of arrows $S$ of $\mathcal{C}$ is called a {\it multiplicative system}
+if it is both a left multiplicative system and a right multiplicative system.
+In other words, this means that MS1, MS2, MS3 hold, where
+MS1 $=$ LMS1 $+$ RMS1, MS2 $=$ LMS2 $+$ RMS2, and
+MS3 $=$ LMS3 $+$ RMS3. (That said, of course LMS1 $=$ RMS1
+$=$ MS1.)
+\end{definition}
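
\noindent
For example (not discussed in the text), the set $S$ of surjections in the
category of sets is a left multiplicative system: LMS1 is clear, LMS2 holds
by taking $W$ to be the pushout $Z \amalg_X Y$ (a pushout of a surjection of
sets is a surjection), and LMS3 holds trivially since $f \circ t = g \circ t$
with $t$ surjective forces $f = g$, so we may take $s = \text{id}$. However,
$S$ is not a right multiplicative system: RMS3 fails for the surjection
$s : \{1, 2\} \to \{*\}$ and the two distinct maps
$f, g : \{x\} \to \{1, 2\}$, since $s \circ f = s \circ g$ but
$f \circ t \not= g \circ t$ for every surjection $t$ with target $\{x\}$.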
+
+\noindent
+These conditions are useful to construct the categories $S^{-1}\mathcal{C}$
+as follows.
+
+\medskip\noindent
+{\bf Left calculus of fractions.}
+Let $\mathcal{C}$ be a category and let $S$ be a left multiplicative
+system. We define a new category $S^{-1}\mathcal{C}$ as follows
+(we verify this works in the proof of
+Lemma \ref{lemma-left-localization}):
+\begin{enumerate}
+\item We set $\Ob(S^{-1}\mathcal{C}) = \Ob(\mathcal{C})$.
+\item Morphisms $X \to Y$ of $S^{-1}\mathcal{C}$ are given by pairs
+$(f : X \to Y', s : Y \to Y')$ with $s \in S$ up to equivalence.
+(The equivalence is defined below. Think of the equivalence class
+of a pair $(f, s)$ as $s^{-1}f : X \to Y$.)
+\item Two pairs $(f_1 : X \to Y_1, s_1 : Y \to Y_1)$ and
+$(f_2 : X \to Y_2, s_2 : Y \to Y_2)$ are said to be equivalent
+if there exist a third pair $(f_3 : X \to Y_3, s_3 : Y \to Y_3)$
+and morphisms $u : Y_1 \to Y_3$ and $v : Y_2 \to Y_3$ of $\mathcal{C}$
+fitting into the commutative diagram
+$$
+\xymatrix{
+ & Y_1 \ar[d]^u & \\
+X \ar[ru]^{f_1} \ar[r]^{f_3} \ar[rd]_{f_2} &
+Y_3 &
+Y \ar[lu]_{s_1} \ar[l]_{s_3} \ar[ld]^{s_2} \\
+& Y_2 \ar[u]_v &
+}
+$$
+\item The composition of the equivalence classes of the pairs
+$(f : X \to Y', s : Y \to Y')$ and $(g : Y \to Z', t : Z \to Z')$
+is defined as the equivalence class of a pair
+$(h \circ f : X \to Z'', u \circ t : Z \to Z'')$
+where $h$ and $u \in S$ are chosen to fit into a commutative diagram
+$$
+\xymatrix{
+Y \ar[d]_s \ar[r]_g & Z' \ar[d]^u \\
+Y' \ar[r]^h & Z''
+}
+$$
+which exists by assumption.
+\item The identity morphism $X \to X$ in $S^{-1} \mathcal{C}$ is the
+equivalence class of the pair $(\text{id} : X \to X,
+\text{id} : X \to X)$.
+\end{enumerate}
+
+\begin{lemma}
+\label{lemma-left-localization}
+Let $\mathcal{C}$ be a category and let $S$ be a left multiplicative
+system.
+\begin{enumerate}
+\item The relation on pairs defined above is an equivalence relation.
+\item The composition rule given above is well defined on equivalence
+classes.
+\item Composition is associative (and the identity morphisms satisfy
+the identity axioms), and hence $S^{-1}\mathcal{C}$ is a category.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Proof of (1). Let us say two pairs $p_1 = (f_1 : X \to Y_1, s_1 : Y \to Y_1)$
and $p_2 = (f_2 : X \to Y_2, s_2 : Y \to Y_2)$ are {\it elementarily
equivalent} if there exists a morphism $a : Y_1 \to Y_2$ of $\mathcal{C}$
such that $a \circ f_1 = f_2$ and $a \circ s_1 = s_2$. Diagram:
+$$
+\xymatrix{
+X \ar@{=}[d] \ar[r]_{f_1} & Y_1 \ar[d]^a & Y \ar[l]^{s_1} \ar@{=}[d] \\
+X \ar[r]^{f_2} & Y_2 & Y \ar[l]_{s_2}
+}
+$$
+Let us denote this property by saying $p_1Ep_2$.
+Note that $pEp$ and $aEb, bEc \Rightarrow aEc$.
+(Despite its name, $E$ is not an equivalence
+relation.)
+Part (1) claims that the relation
+$p \sim p' \Leftrightarrow \exists q: pEq \wedge p'Eq$
+(where $q$ is supposed to be a pair satisfying the
+same conditions as $p$ and $p'$)
+is an equivalence relation. A simple formal argument, using the properties
+of $E$ above, shows that it suffices to prove
+$p_3Ep_1, p_3Ep_2 \Rightarrow p_1 \sim p_2$.
+Thus suppose that we are given a commutative diagram
+$$
+\xymatrix{
+ & Y_1 & \\
+X \ar[ru]^{f_1} \ar[r]^{f_3} \ar[rd]_{f_2} &
+Y_3 \ar[u]_{a_{31}} \ar[d]^{a_{32}} &
+Y \ar[lu]_{s_1} \ar[l]_{s_3} \ar[ld]^{s_2} \\
+& Y_2 &
+}
+$$
+with $s_i \in S$.
+First we apply LMS2 to get a commutative diagram
+$$
+\xymatrix{
+Y \ar[d]_{s_1} \ar[r]_{s_2} & Y_2 \ar@{..>}[d]^{a_{24}} \\
+Y_1 \ar@{..>}[r]^{a_{14}} & Y_4
+}
+$$
+with $a_{24} \in S$. Then, we have
+$$
+a_{14} \circ a_{31} \circ s_3 =
+a_{14} \circ s_1 =
+a_{24} \circ s_2 =
+a_{24} \circ a_{32} \circ s_3.
+$$
+Hence, by LMS3, there exists a
+morphism $s_{44} : Y_4 \to Y'_4$ such that $s_{44} \in S$ and
+$s_{44} \circ a_{14} \circ a_{31}
+= s_{44} \circ a_{24} \circ a_{32}$.
+Hence, after replacing $Y_4$, $a_{14}$ and $a_{24}$ by $Y'_4$,
+$s_{44} \circ a_{14}$ and $s_{44} \circ a_{24}$, we may assume
+that $a_{14} \circ a_{31} = a_{24} \circ a_{32}$ (and
+we still have $a_{24} \in S$ and
+$a_{14} \circ s_1 = a_{24} \circ s_2$). Set
+$$
+f_4 =
+a_{14} \circ f_1 =
+a_{14} \circ a_{31} \circ f_3 =
+a_{24} \circ a_{32} \circ f_3 =
+a_{24} \circ f_2
+$$
+and
+$s_4 = a_{14} \circ s_1 = a_{24} \circ s_2$. Then, the diagram
+$$
+\xymatrix{
+X \ar@{=}[d] \ar[r]_{f_1} & Y_1 \ar[d]^{a_{14}} & Y \ar[l]^{s_1} \ar@{=}[d] \\
+X \ar[r]^{f_4} & Y_4 & Y \ar[l]_{s_4}
+}
+$$
+commutes, and we have $s_4 \in S$ (by LMS1). Thus, $p_1 E p_4$,
+where $p_4 = (f_4, s_4)$. Similarly, $p_2 E p_4$. Combining these,
+we find $p_1 \sim p_2$.
+
+\medskip\noindent
+Proof of (2). Let $p = (f : X \to Y', s : Y \to Y')$ and
+$q = (g : Y \to Z', t : Z \to Z')$ be pairs as in the definition of composition
+above. To compose we choose a diagram
+$$
+\xymatrix{
+Y \ar[d]_s \ar[r]_g & Z' \ar[d]^{u_2} \\
+Y' \ar[r]^{h_2} & Z_2
+}
+$$
+with $u_2 \in S$. We first show that the equivalence class of the pair
+$r_2 = (h_2 \circ f : X \to Z_2, u_2 \circ t : Z \to Z_2)$
+is independent of the choice of $(Z_2, h_2, u_2)$. Namely, suppose
+that $(Z_3, h_3, u_3)$ is another choice with corresponding composition
+$r_3 = (h_3 \circ f : X \to Z_3, u_3 \circ t : Z \to Z_3)$.
+Then by LMS2 we can choose a diagram
+$$
+\xymatrix{
+Z' \ar[d]_{u_2} \ar[r]_{u_3} & Z_3 \ar[d]^{u_{34}} \\
+Z_2 \ar[r]^{h_{24}} & Z_4
+}
+$$
+with $u_{34} \in S$. We have $h_2 \circ s = u_2 \circ g$
+and similarly $h_3 \circ s = u_3 \circ g$. Now,
+$$
+u_{34} \circ h_3 \circ s
+= u_{34} \circ u_3 \circ g
+= h_{24} \circ u_2 \circ g
+= h_{24} \circ h_2 \circ s.
+$$
Hence, LMS3 shows that there
exist a $Z'_4$ and a morphism $s_{44} : Z_4 \to Z'_4$ in $S$ such that
$s_{44} \circ u_{34} \circ h_3 = s_{44} \circ h_{24} \circ h_2$.
+Replacing $Z_4$, $h_{24}$ and $u_{34}$ by $Z'_4$,
+$s_{44} \circ h_{24}$ and $s_{44} \circ u_{34}$, we may
+assume that $u_{34} \circ h_3 = h_{24} \circ h_2$.
+Meanwhile, the relations $u_{34} \circ u_3 = h_{24} \circ u_2$
+and $u_{34} \in S$ continue to hold. We can now set
+$h_4 = u_{34} \circ h_3 = h_{24} \circ h_2$ and
+$u_4 = u_{34} \circ u_3 = h_{24} \circ u_2$. Then, we have a
+commutative diagram
+$$
+\xymatrix{
+X \ar@{=}[d] \ar[r]_{h_2\circ f} &
+Z_2 \ar[d]^{h_{24}} &
+Z \ar[l]^{u_2 \circ t} \ar@{=}[d] \\
+X \ar@{=}[d] \ar[r]^{h_4\circ f} &
+Z_4 &
+Z \ar@{=}[d] \ar[l]_{u_4 \circ t} \\
+X \ar[r]^{h_3 \circ f} &
+Z_3 \ar[u]^{u_{34}} &
+Z \ar[l]_{u_3 \circ t}
+}
+$$
+Hence we obtain a pair
+$r_4 =
+(h_4 \circ f : X \to Z_4, u_4 \circ t : Z \to Z_4)$
+and the above diagram shows that we have $r_2Er_4$ and
+$r_3Er_4$, whence $r_2 \sim r_3$, as desired. Thus it now
+makes sense to define $p \circ q$ as the equivalence class of
+all possible pairs $r$ obtained as above.
+
+\medskip\noindent
To finish the proof of (2) we have to show that given pairs
$p_1, p_2, q$ with $p_1Ep_2$, we have $p_1 \circ q = p_2 \circ q$ and
$q \circ p_1 = q \circ p_2$ whenever the compositions make sense.
+To do this, write $p_1 = (f_1 : X \to Y_1, s_1 : Y \to Y_1)$ and
+$p_2 = (f_2 : X \to Y_2, s_2 : Y \to Y_2)$ and let
+$a : Y_1 \to Y_2$ be a morphism of $\mathcal{C}$ such that
+$f_2 = a \circ f_1$ and $s_2 = a \circ s_1$.
+First assume that $q = (g : Y \to Z', t : Z \to Z')$.
+In this case choose a commutative diagram as the one on the left
+$$
+\vcenter{
+\xymatrix{
+Y \ar[d]_{s_2} \ar[r]^g & Z' \ar[d]^u \\
+Y_2 \ar[r]^h & Z''
+}
+}
+\quad
+\Rightarrow
+\quad
+\vcenter{
+\xymatrix{
+Y \ar[d]_{s_1} \ar[r]^g & Z' \ar[d]^u \\
+Y_1 \ar[r]^{h \circ a} & Z''
+}
+}
+$$
+(with $u \in S$),
+which implies the diagram on the right is commutative as well.
+Using these diagrams we see that both compositions $q \circ p_1$
+and $q \circ p_2$ are the equivalence class of
+$(h \circ a \circ f_1 : X \to Z'', u \circ t : Z \to Z'')$.
+Thus $q \circ p_1 = q \circ p_2$.
+The proof of the other case, in which we have to show
+$p_1 \circ q = p_2 \circ q$, is omitted. (It is similar to the
+case we did.)
+
+\medskip\noindent
+Proof of (3). We have to prove associativity of composition.
+Consider a solid diagram
+$$
+\xymatrix{
+& & & Z \ar[d] \\
+& & Y \ar[d] \ar[r] & Z' \ar@{..>}[d] \\
+& X \ar[d] \ar[r] & Y' \ar@{..>}[d] \ar@{..>}[r] & Z'' \ar@{..>}[d] \\
+W \ar[r] & X' \ar@{..>}[r] & Y'' \ar@{..>}[r] & Z'''
+}
+$$
+(whose vertical arrows belong to $S$)
+which gives rise to three composable pairs.
+Using LMS2 we can choose the dotted arrows making the squares commutative
+and such that the vertical arrows are in $S$.
+Then it is clear that the composition of the three pairs
+is the equivalence class of the pair
+$(W \to Z''', Z \to Z''')$ gotten by composing the
+horizontal arrows on the bottom row and the vertical arrows
+on the right column.
+
+\medskip\noindent
+We leave it to the reader to check the identity axioms.
+\end{proof}
+
+\begin{remark}
+\label{remark-motivation-localization}
+The motivation for the construction of $S^{-1} \mathcal{C}$ is to
+``force'' the morphisms in $S$ to be invertible by artificially
+creating inverses to them (at the cost of some existing morphisms
+possibly becoming identified with each other). This is similar to
+the localization of a commutative ring at a multiplicative subset,
+and more generally to the localization of a noncommutative ring
+at a right denominator set (see \cite[Section 10A]{Lam}). This is
+more than just a similarity: The construction of
+$S^{-1} \mathcal{C}$ (or, more precisely, its version for
+additive categories $\mathcal{C}$) actually generalizes the
+latter type of localization. Namely, a noncommutative ring can be
+viewed as a pre-additive category with a single object (the morphisms
+being the elements of the ring); a multiplicative subset of this
+ring then becomes a set $S$ of morphisms satisfying LMS1 (aka
+RMS1). Then, the conditions RMS2 and RMS3 for this category and
+this subset $S$ translate into the two conditions
+(``right permutable'' and ``right reversible'') of a right
+denominator set (and similarly for LMS and left denominator sets),
+and $S^{-1} \mathcal{C}$ (with a properly defined additive
+structure) is the one-object category corresponding to the
+localization of the ring.
+\end{remark}
+
+\begin{definition}
+\label{definition-left-localization-as-fraction}
+Let $\mathcal{C}$ be a category and let $S$ be a left multiplicative
+system of morphisms of $\mathcal{C}$. Given any morphism
+$f : X \to Y'$ in $\mathcal{C}$ and any morphism $s : Y \to Y'$ in
+$S$, we denote by {\it $s^{-1} f$} the equivalence class of the pair
+$(f : X \to Y', s : Y \to Y')$. This is a morphism from $X$ to $Y$
+in $S^{-1} \mathcal{C}$.
+\end{definition}
+
+\noindent
+This notation is suggestive, and the things it suggests are true:
+Given any morphism $f : X \to Y'$ in $\mathcal{C}$ and any two
+morphisms $s : Y \to Y'$ and $t : Y' \to Y''$ in $S$, we have
+$\left(t \circ s\right)^{-1} \left(t \circ f\right) = s^{-1} f$.
+Also, for any
+$f : X \to Y'$ and $g : Y' \to Z'$ in $\mathcal{C}$ and all
+$s : Z \to Z'$ in $S$, we have
+$s^{-1} \left(g \circ f\right) = \left(s^{-1} g\right) \circ
+\left(\text{id}_{Y'}^{-1} f\right)$.
+Finally, for any $f : X \to Y'$ in $\mathcal{C}$, all
+$s : Y \to Y'$ in $S$, and $t : Z \to Y$ in $S$, we have
+$\left(s \circ t\right)^{-1} f
+= \left(t^{-1} \text{id}_Y\right)
+\circ \left(s^{-1} f\right)$.
+This is all clear from the definition.
+We can ``write any finite collection of morphisms with the same target
+as fractions with common denominator''.
+
+\begin{lemma}
+\label{lemma-morphisms-left-localization}
+Let $\mathcal{C}$ be a category and let $S$ be a left multiplicative
+system of morphisms of $\mathcal{C}$. Given any finite collection
+$g_i : X_i \to Y$ of morphisms of $S^{-1}\mathcal{C}$
+(indexed by $i$),
+we can find an element $s : Y \to Y'$ of $S$ and
+a family of morphisms $f_i : X_i \to Y'$ of $\mathcal{C}$ such that
+each $g_i$ is the equivalence class of the pair
+$(f_i : X_i \to Y', s : Y \to Y')$.
+\end{lemma}
+
+\begin{proof}
+For each $i$ choose a representative $(X_i \to Y_i, s_i : Y \to Y_i)$
+of $g_i$.
+The lemma follows if we can find a morphism $s : Y \to Y'$ in $S$ such that
+for each $i$ there is a morphism $a_i : Y_i \to Y'$ with
+$a_i \circ s_i = s$. If we have two indices $i = 1, 2$, then we can
+do this by completing the square
+$$
+\xymatrix{
+Y \ar[d]_{s_1} \ar[r]_{s_2} & Y_2 \ar[d]^{t_2} \\
+Y_1 \ar[r]^{a_1} & Y'
+}
+$$
+with $t_2 \in S$ as is possible by
+Definition \ref{definition-multiplicative-system}.
+Then $s = t_2 \circ s_2 \in S$ works.
+If we have $n > 2$ morphisms, then we use the above trick to reduce
+to the case of $n - 1$ morphisms, and we win by induction.
+\end{proof}
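
\noindent
In the setting of Remark \ref{remark-motivation-localization}, where
$\mathcal{C}$ has a single object with morphisms the elements of a
commutative ring $R$ and $S \subset R$ is a multiplicative subset, the lemma
recovers the familiar fact that finitely many fractions can be put over a
common denominator:
$$
\frac{a}{s} = \frac{ta}{st},
\qquad
\frac{b}{t} = \frac{sb}{st}.
$$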
+
+\noindent
+There is an easy characterization of equality of morphisms if they
+have the same denominator.
+
+\begin{lemma}
+\label{lemma-equality-morphisms-left-localization}
+Let $\mathcal{C}$ be a category and let $S$ be a left multiplicative
+system of morphisms of $\mathcal{C}$. Let $A, B : X \to Y$ be morphisms
+of $S^{-1}\mathcal{C}$ which are the equivalence classes of
+$(f : X \to Y', s : Y \to Y')$ and $(g : X \to Y', s : Y \to Y')$.
+The following are equivalent
+\begin{enumerate}
+\item $A = B$
+\item there exists a morphism $t : Y' \to Y''$
+in $S$ with $t \circ f = t \circ g$, and
+\item there exists a morphism $a : Y' \to Y''$
+such that $a \circ f = a \circ g$ and $a \circ s \in S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We are going to use that $S^{-1}\mathcal{C}$ is a category
+(Lemma \ref{lemma-left-localization}) and we will
+use the notation of Definition \ref{definition-left-localization-as-fraction}
+as well as the discussion following that definition to identify some
+morphisms in $S^{-1}\mathcal{C}$. Thus we write $A = s^{-1}f$ and $B = s^{-1}g$.
+
+\medskip\noindent
+If $A = B$ then
+$(\text{id}_{Y'}^{-1}s) \circ A = (\text{id}_{Y'}^{-1}s) \circ B$.
+We have $(\text{id}_{Y'}^{-1}s) \circ A = \text{id}_{Y'}^{-1}f$
+and $(\text{id}_{Y'}^{-1}s) \circ B = \text{id}_{Y'}^{-1}g$.
+The equality of $\text{id}_{Y'}^{-1}f$ and $\text{id}_{Y'}^{-1}g$
+means by definition that there exists a commutative diagram
+$$
+\xymatrix{
+ & Y' \ar[d]^u & \\
+X \ar[ru]^f \ar[r]^h \ar[rd]_g &
+Z &
+Y' \ar[lu]_{\text{id}_{Y'}} \ar[l]_t \ar[ld]^{\text{id}_{Y'}} \\
+& Y' \ar[u]_v &
+}
+$$
+with $t \in S$. In particular $u = v = t \in S$ and $t \circ f = t\circ g$.
+Thus (1) implies (2).
+
+\medskip\noindent
+The implication (2) $\Rightarrow$ (3) is immediate. Assume $a$ is as in (3).
+Denote $s' = a \circ s \in S$.
+Then $\text{id}_{Y''}^{-1}s'$ is an isomorphism in the category
+$S^{-1}\mathcal{C}$ (with inverse $(s')^{-1}\text{id}_{Y''}$).
+Thus to check $A = B$ it suffices to check that
+$\text{id}_{Y''}^{-1}s' \circ A = \text{id}_{Y''}^{-1}s' \circ B$.
+We compute using the rules discussed in the text following
+Definition \ref{definition-left-localization-as-fraction} that
+$\text{id}_{Y''}^{-1}s' \circ A =
+\text{id}_{Y''}^{-1}(a \circ s) \circ s^{-1}f =
+\text{id}_{Y''}^{-1}(a \circ f) =
+\text{id}_{Y''}^{-1}(a \circ g) =
+\text{id}_{Y''}^{-1}(a \circ s) \circ s^{-1}g =
+\text{id}_{Y''}^{-1}s' \circ B$ and we see that (1) is true.
+\end{proof}
+
+\begin{remark}
+\label{remark-left-localization-morphisms-colimit}
+Let $\mathcal{C}$ be a category. Let $S$ be a left multiplicative system.
+Given an object $Y$ of $\mathcal{C}$ we denote $Y/S$ the category whose
+objects are $s : Y \to Y'$ with $s \in S$ and whose morphisms are
+commutative diagrams
+$$
+\xymatrix{
+& Y \ar[ld]_s \ar[rd]^t & \\
+Y' \ar[rr]^a & & Y''
+}
+$$
+where $a : Y' \to Y''$ is arbitrary. We claim that the category
+$Y/S$ is filtered (see
+Definition \ref{definition-directed}).
+Namely, LMS1 implies that $\text{id}_Y : Y \to Y$
+is in $Y/S$; hence $Y/S$ is nonempty. LMS2 implies that given
+$s_1 : Y \to Y_1$ and $s_2 : Y \to Y_2$ we can find a diagram
+$$
+\xymatrix{
+Y \ar[d]_{s_1} \ar[r]_{s_2} & Y_2 \ar[d]^t \\
+Y_1 \ar[r]^a & Y_3
+}
+$$
+with $t \in S$. Hence $s_1 : Y \to Y_1$ and $s_2 : Y \to Y_2$
+both have maps to $t \circ s_2 : Y \to Y_3$ in $Y/S$. Finally, given
+two morphisms $a, b$ from $s_1 : Y \to Y_1$ to $s_2 : Y \to Y_2$
+in $Y/S$ we see that $a \circ s_1 = b \circ s_1$; hence by LMS3
+there exists a $t : Y_2 \to Y_3$ in $S$ such that
+$t \circ a = t \circ b$.
+Now the combined results of
+Lemmas \ref{lemma-morphisms-left-localization} and
+\ref{lemma-equality-morphisms-left-localization}
+tell us that
+\begin{equation}
+\label{equation-left-localization-morphisms-colimit}
+\Mor_{S^{-1}\mathcal{C}}(X, Y) =
+\colim_{(s : Y \to Y') \in Y/S} \Mor_\mathcal{C}(X, Y')
+\end{equation}
+This formula expressing morphism sets in $S^{-1}\mathcal{C}$ as a filtered
+colimit of morphism sets in $\mathcal{C}$ is occasionally useful.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-properties-left-localization}
+Let $\mathcal{C}$ be a category and let $S$ be a left multiplicative
+system of morphisms of $\mathcal{C}$.
+\begin{enumerate}
+\item The rules $X \mapsto X$ and
+$(f : X \to Y) \mapsto (f : X \to Y, \text{id}_Y : Y \to Y)$
+define a functor $Q : \mathcal{C} \to S^{-1}\mathcal{C}$.
+\item For any $s \in S$ the morphism $Q(s)$ is an isomorphism in
+$S^{-1}\mathcal{C}$.
+\item If $G : \mathcal{C} \to \mathcal{D}$ is any functor such that
+$G(s)$ is invertible for every $s \in S$, then there exists a
+unique functor $H : S^{-1}\mathcal{C} \to \mathcal{D}$
+such that $H \circ Q = G$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (1) and (2) are clear. (In (2), the inverse of $Q(s)$ is
+the equivalence class of the pair $(\text{id}_Y, s)$.)
+To see (3) just set $H(X) = G(X)$
+and set $H((f : X \to Y', s : Y \to Y')) = G(s)^{-1} \circ G(f)$.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-left-localization-limits}
+Let $\mathcal{C}$ be a category and let $S$ be a left multiplicative
+system of morphisms of $\mathcal{C}$. The localization functor
+$Q : \mathcal{C} \to S^{-1}\mathcal{C}$ commutes with finite colimits.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I}$ be a finite category and let
+$\mathcal{I} \to \mathcal{C}$, $i \mapsto X_i$
+be a functor whose colimit exists. Then using
+(\ref{equation-left-localization-morphisms-colimit}),
+the fact that $Y/S$ is filtered, and
+Lemma \ref{lemma-directed-commutes} we have
+\begin{align*}
+\Mor_{S^{-1}\mathcal{C}}(Q(\colim X_i), Q(Y))
+& =
+\colim_{(s : Y \to Y') \in Y/S} \Mor_\mathcal{C}(\colim X_i, Y') \\
+& =
+\colim_{(s : Y \to Y') \in Y/S} \lim_i \Mor_\mathcal{C}(X_i, Y') \\
+& =
+\lim_i \colim_{(s : Y \to Y') \in Y/S} \Mor_\mathcal{C}(X_i, Y') \\
+& =
+\lim_i \Mor_{S^{-1}\mathcal{C}}(Q(X_i), Q(Y))
+\end{align*}
+and this isomorphism commutes with the projections
+from both sides to the set
+$\Mor_{S^{-1}\mathcal{C}}(Q(X_j), Q(Y))$ for each
+$j \in \Ob(\mathcal{I})$. Thus, $Q(\colim X_i)$ satisfies
+the universal property for the colimit of the functor
+$i \mapsto Q(X_i)$; hence, it is this colimit, as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-left-localization-diagram}
+Let $\mathcal{C}$ be a category. Let $S$ be a left multiplicative
+system. If $f : X \to Y$, $f' : X' \to Y'$ are two morphisms of
+$\mathcal{C}$ and if
+$$
+\xymatrix{
+Q(X) \ar[d]_{Q(f)} \ar[r]_a & Q(X') \ar[d]^{Q(f')} \\
+Q(Y) \ar[r]^b & Q(Y')
+}
+$$
+is a commutative diagram in $S^{-1}\mathcal{C}$, then there exist
+a morphism $f'' : X'' \to Y''$ in $\mathcal{C}$ and a commutative
+diagram
+$$
+\xymatrix{
+X \ar[d]_f \ar[r]_g & X'' \ar[d]^{f''} & X' \ar[d]^{f'} \ar[l]^s \\
+Y \ar[r]^h & Y'' & Y' \ar[l]_t
+}
+$$
+in $\mathcal{C}$ with $s, t \in S$ and $a = s^{-1}g$, $b = t^{-1}h$.
+\end{lemma}
+
+\begin{proof}
+We choose maps and objects in the following way:
+First write $a = s^{-1}g$ for some $s : X' \to X''$ in $S$ and
+$g : X \to X''$. By LMS2 we can find $t : Y' \to Y''$ in $S$ and
+$f'' : X'' \to Y''$ such that
+$$
+\xymatrix{
+X \ar[d]_f \ar[r]_g & X'' \ar[d]^{f''} & X' \ar[d]^{f'} \ar[l]^s \\
+Y & Y'' & Y' \ar[l]_t
+}
+$$
+commutes. Now in this diagram we are going to repeatedly change our
+choice of
+$$
+X'' \xrightarrow{f''} Y'' \xleftarrow{t} Y'
+$$
+by postcomposing both $t$ and $f''$ by a morphism $d : Y'' \to Y'''$
+with the property that $d \circ t \in S$. According to
+Remark \ref{remark-left-localization-morphisms-colimit}
+we may after such a replacement assume that there exists a morphism
+$h : Y \to Y''$ such that $b = t^{-1}h$ holds\footnote{Here is a
+more down-to-earth way to see this:
+Write $b = q^{-1}i$ for some $q : Y' \to Z$ in $S$ and some
+$i : Y \to Z$. By LMS2 we can find $r : Y'' \to Y'''$ in $S$ and
+$j : Z \to Y'''$ such that $j \circ q = r \circ t$. Now, set
+$d = r$ and $h = j \circ i$.}. At this point we have everything
+as in the lemma except that we don't know that the left square of the
+diagram commutes.
But the definition of composition in $S^{-1} \mathcal{C}$ shows that
$b \circ Q(f)$ is the equivalence class of the pair
$(h \circ f : X \to Y'', t : Y' \to Y'')$ (since $b$ is the
equivalence class of the pair $(h : Y \to Y'', t : Y' \to Y'')$,
while $Q(f)$ is the equivalence class of the pair
$(f : X \to Y, \text{id} : Y \to Y)$), while
$Q(f') \circ a$ is the equivalence class of the pair
$(f'' \circ g : X \to Y'', t : Y' \to Y'')$ (since $a$ is the
equivalence class of the pair $(g : X \to X'', s : X' \to X'')$,
while $Q(f')$ is the equivalence class of the pair
$(f' : X' \to Y', \text{id} : Y' \to Y')$).
Since we know that
$b \circ Q(f) = Q(f') \circ a$, we thus
conclude that the equivalence classes of the pairs
$(h \circ f : X \to Y'', t : Y' \to Y'')$ and
$(f'' \circ g : X \to Y'', t : Y' \to Y'')$ are equal.
+Hence using
+Lemma \ref{lemma-equality-morphisms-left-localization}
+we can find a morphism $d : Y'' \to Y'''$ such that
+$d \circ t \in S$ and $d \circ h \circ f = d \circ f'' \circ g$.
+Hence we make one more replacement of the kind described
+above and we win.
+\end{proof}
+
+\noindent
+{\bf Right calculus of fractions.}
+Let $\mathcal{C}$ be a category and let $S$ be a right multiplicative
+system. We define a new category $S^{-1}\mathcal{C}$ as follows
+(we verify this works in the proof of
+Lemma \ref{lemma-right-localization}):
+\begin{enumerate}
+\item We set $\Ob(S^{-1}\mathcal{C}) = \Ob(\mathcal{C})$.
+\item Morphisms $X \to Y$ of $S^{-1}\mathcal{C}$ are given by pairs
+$(f : X' \to Y, s : X' \to X)$ with $s \in S$ up to equivalence.
+(The equivalence is defined below. Think of the equivalence class
+of a pair $(f, s)$ as $fs^{-1} : X \to Y$.)
+\item Two pairs $(f_1 : X_1 \to Y, s_1 : X_1 \to X)$ and
+$(f_2 : X_2 \to Y, s_2 : X_2 \to X)$ are said to be equivalent
+if there exist a third pair $(f_3 : X_3 \to Y, s_3 : X_3 \to X)$
+and morphisms $u : X_3 \to X_1$ and $v : X_3 \to X_2$ of $\mathcal{C}$
+fitting into the commutative diagram
+$$
+\xymatrix{
+ & X_1 \ar[ld]_{s_1} \ar[rd]^{f_1} & \\
+X &
+X_3 \ar[l]_{s_3} \ar[u]_u \ar[d]^v \ar[r]^{f_3} &
+Y \\
+& X_2 \ar[lu]^{s_2} \ar[ru]_{f_2} &
+}
+$$
+\item The composition of the equivalence classes of the pairs
+$(f : X' \to Y, s : X' \to X)$ and $(g : Y' \to Z, t : Y' \to Y)$
+is defined as the equivalence class of a pair
+$(g \circ h : X'' \to Z, s \circ u : X'' \to X)$
+where $h$ and $u \in S$ are chosen to fit into a commutative diagram
+$$
+\xymatrix{
+X'' \ar[d]_u \ar[r]^h & Y' \ar[d]^t \\
+X' \ar[r]^f & Y
+}
+$$
+which exists by assumption.
+\item The identity morphism $X \to X$ in $S^{-1} \mathcal{C}$ is the
+equivalence class of the pair $(\text{id} : X \to X,
+\text{id} : X \to X)$.
+\end{enumerate}
+
+\begin{lemma}
+\label{lemma-right-localization}
+Let $\mathcal{C}$ be a category and let $S$ be a right multiplicative
+system.
+\begin{enumerate}
+\item The relation on pairs defined above is an equivalence relation.
+\item The composition rule given above is well defined on equivalence
+classes.
+\item Composition is associative (and the identity morphisms satisfy
+the identity axioms), and hence $S^{-1}\mathcal{C}$ is a category.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma is dual to
+Lemma \ref{lemma-left-localization}.
+It follows formally from that lemma by replacing
+$\mathcal{C}$ by its opposite category in which
+$S$ is a left multiplicative system.
+\end{proof}
+
+\begin{definition}
+\label{definition-right-localization-as-fraction}
+Let $\mathcal{C}$ be a category and let $S$ be a right multiplicative
+system of morphisms of $\mathcal{C}$. Given any morphism
+$f : X' \to Y$ in $\mathcal{C}$ and any morphism $s : X' \to X$ in
+$S$, we denote by {\it $f s^{-1}$} the equivalence class of the pair
+$(f : X' \to Y, s : X' \to X)$. This is a morphism from $X$ to $Y$
+in $S^{-1} \mathcal{C}$.
+\end{definition}
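\medskip\noindent
For a concrete example, let $R$ be a commutative ring, viewed as a
category with a single object whose morphisms are the elements of $R$
composing by multiplication, and let $S \subset R$ be a multiplicative
subset containing $1$. By commutativity $S$ is a right multiplicative
system, and the construction above recovers the usual localization of
rings: the fraction $f s^{-1}$ corresponds to the element
$f/s \in S^{-1}R$, and the pairs $(f_1, s_1)$ and $(f_2, s_2)$ are
equivalent if and only if $w(s_2 f_1 - s_1 f_2) = 0$ for some
$w \in S$, which is the usual equality of fractions in $S^{-1}R$.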
+
+\noindent
+Identities similar (actually, dual) to the ones in Definition
+\ref{definition-left-localization-as-fraction} hold.
+We can ``write any finite collection of morphisms with the same source
+as fractions with common denominator''.
+
+\begin{lemma}
+\label{lemma-morphisms-right-localization}
+Let $\mathcal{C}$ be a category and let $S$ be a right multiplicative
+system of morphisms of $\mathcal{C}$. Given any finite collection
+$g_i : X \to Y_i$ of morphisms of $S^{-1}\mathcal{C}$
+(indexed by $i$),
+we can find an element $s : X' \to X$ of $S$ and a family
+of morphisms $f_i : X' \to Y_i$ of $\mathcal{C}$ such that
+$g_i$ is the equivalence class of the pair
+$(f_i : X' \to Y_i, s : X' \to X)$.
+\end{lemma}
+
+\begin{proof}
+This lemma is the dual of
+Lemma \ref{lemma-morphisms-left-localization}
+and follows formally from that lemma by replacing all
+categories in sight by their opposites.
+\end{proof}
+
+\noindent
+There is an easy characterization of equality of morphisms if they
+have the same denominator.
+
+\begin{lemma}
+\label{lemma-equality-morphisms-right-localization}
+Let $\mathcal{C}$ be a category and let $S$ be a right multiplicative
+system of morphisms of $\mathcal{C}$. Let $A, B : X \to Y$ be
+morphisms of $S^{-1}\mathcal{C}$ which are the equivalence
+classes of $(f : X' \to Y, s : X' \to X)$ and
+$(g : X' \to Y, s : X' \to X)$. The following are equivalent
+\begin{enumerate}
+\item $A = B$,
+\item there exists a morphism $t : X'' \to X'$ in $S$ with
+$f \circ t = g \circ t$, and
+\item there exists a morphism $a : X'' \to X'$ with
+$f \circ a = g \circ a$ and $s \circ a \in S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is dual to
+Lemma \ref{lemma-equality-morphisms-left-localization}.
+\end{proof}
+
+\begin{remark}
+\label{remark-right-localization-morphisms-colimit}
+Let $\mathcal{C}$ be a category. Let $S$ be a right multiplicative system.
+Given an object $X$ of $\mathcal{C}$ we denote $S/X$ the category whose
+objects are $s : X' \to X$ with $s \in S$ and whose morphisms are
+commutative diagrams
+$$
+\xymatrix{
+X' \ar[rd]_s \ar[rr]_a & & X'' \ar[ld]^t \\
+& X
+}
+$$
+where $a : X' \to X''$ is arbitrary. The category
+$S/X$ is cofiltered (see
+Definition \ref{definition-codirected}).
+(This is dual to the corresponding statement in
+Remark \ref{remark-left-localization-morphisms-colimit}.)
+Now the combined results of
+Lemmas \ref{lemma-morphisms-right-localization} and
+\ref{lemma-equality-morphisms-right-localization}
+tell us that
+\begin{equation}
+\label{equation-right-localization-morphisms-colimit}
+\Mor_{S^{-1}\mathcal{C}}(X, Y) =
+\colim_{(s : X' \to X) \in (S/X)^{opp}} \Mor_\mathcal{C}(X', Y)
+\end{equation}
+This formula expressing morphisms in $S^{-1}\mathcal{C}$ as a filtered
+colimit of morphisms in $\mathcal{C}$ is occasionally useful.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-properties-right-localization}
+Let $\mathcal{C}$ be a category and let $S$ be a right multiplicative
+system of morphisms of $\mathcal{C}$.
+\begin{enumerate}
+\item The rules $X \mapsto X$ and
+$(f : X \to Y) \mapsto (f : X \to Y, \text{id}_X : X \to X)$
+define a functor $Q : \mathcal{C} \to S^{-1}\mathcal{C}$.
+\item For any $s \in S$ the morphism $Q(s)$ is an isomorphism in
+$S^{-1}\mathcal{C}$.
+\item If $G : \mathcal{C} \to \mathcal{D}$ is any functor such that
+$G(s)$ is invertible for every $s \in S$, then there exists a
+unique functor $H : S^{-1}\mathcal{C} \to \mathcal{D}$
+such that $H \circ Q = G$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma is the dual of
+Lemma \ref{lemma-properties-left-localization}
+and follows formally from that lemma by replacing all
+categories in sight by their opposites.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-localization-limits}
+Let $\mathcal{C}$ be a category and let $S$ be a right multiplicative
+system of morphisms of $\mathcal{C}$. The localization functor
+$Q : \mathcal{C} \to S^{-1}\mathcal{C}$ commutes with finite limits.
+\end{lemma}
+
+\begin{proof}
+This is dual to Lemma \ref{lemma-left-localization-limits}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-localization-diagram}
+Let $\mathcal{C}$ be a category. Let $S$ be a right multiplicative
+system. If $f : X \to Y$, $f' : X' \to Y'$ are two morphisms of
+$\mathcal{C}$ and if
+$$
+\xymatrix{
+Q(X) \ar[d]_{Q(f)} \ar[r]_a & Q(X') \ar[d]^{Q(f')} \\
+Q(Y) \ar[r]^b & Q(Y')
+}
+$$
+is a commutative diagram in $S^{-1}\mathcal{C}$, then there exist
+a morphism $f'' : X'' \to Y''$ in $\mathcal{C}$ and a commutative
+diagram
+$$
+\xymatrix{
+X \ar[d]_f & X'' \ar[l]^s \ar[d]^{f''} \ar[r]_g & X' \ar[d]^{f'} \\
+Y & Y'' \ar[l]_t \ar[r]^h & Y'
+}
+$$
+in $\mathcal{C}$ with $s, t \in S$ and $a = gs^{-1}$, $b = ht^{-1}$.
+\end{lemma}
+
+\begin{proof}
+This lemma is dual to
+Lemma \ref{lemma-left-localization-diagram}.
+\end{proof}
+
+\noindent
+{\bf Multiplicative systems and two sided calculus of fractions.}
+If $S$ is a multiplicative system then left and right calculus of
+fractions give canonically isomorphic categories.
+
+\begin{lemma}
+\label{lemma-multiplicative-system}
+Let $\mathcal{C}$ be a category and let $S$ be a multiplicative system.
+The category of left fractions and the category of right fractions
+$S^{-1}\mathcal{C}$ are canonically isomorphic.
+\end{lemma}
+
+\begin{proof}
+Denote $\mathcal{C}_{left}$, $\mathcal{C}_{right}$ the two categories
+of fractions. By the universal properties of
+Lemmas \ref{lemma-properties-left-localization} and
+\ref{lemma-properties-right-localization}
+we obtain functors $\mathcal{C}_{left} \to \mathcal{C}_{right}$
+and $\mathcal{C}_{right} \to \mathcal{C}_{left}$.
+By the uniqueness statement in the universal properties, these
+functors are each other's inverse.
+\end{proof}
+
+\begin{definition}
+\label{definition-saturated-multiplicative-system}
+Let $\mathcal{C}$ be a category and let $S$ be a multiplicative system.
+We say $S$ is {\it saturated} if, in addition to MS1, MS2, MS3, we
+also have
+\begin{enumerate}
+\item[MS4] Given three composable morphisms $f, g, h$, if
+$fg, gh \in S$, then $g \in S$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that a saturated multiplicative system contains all isomorphisms.
+Moreover, if $f, g, h$ are composable morphisms in a category and
+$fg, gh$ are isomorphisms, then $g$ is an isomorphism (because then $g$
+has both a left and a right inverse, hence is invertible).
+
+\begin{lemma}
+\label{lemma-what-gets-inverted}
+Let $\mathcal{C}$ be a category and let $S$ be a multiplicative system.
+Denote $Q : \mathcal{C} \to S^{-1}\mathcal{C}$ the localization functor.
+The set
+$$
+\hat S = \{f \in \text{Arrows}(\mathcal{C}) \mid
+Q(f) \text{ is an isomorphism}\}
+$$
+is equal to
+$$
+S' = \{f \in \text{Arrows}(\mathcal{C}) \mid
+\text{there exist }g, h\text{ such that }gf, fh \in S\}
+$$
+and is the smallest saturated multiplicative system containing $S$.
+In particular, if $S$ is saturated, then $\hat S = S$.
+\end{lemma}
+
+\begin{proof}
+It is clear that $S \subset S' \subset \hat S$ because elements of
+$S'$ map to morphisms in $S^{-1}\mathcal{C}$ which have both left
+and right inverses. Note that $S'$ satisfies MS4, and that
+$\hat S$ satisfies MS1. Next, we prove that $S' = \hat S$.
+
+\medskip\noindent
+Let $f \in \hat S$. Let $s^{-1}g = ht^{-1}$ be the inverse morphism
+in $S^{-1}\mathcal{C}$. (We may use both left fractions and right
+fractions to describe morphisms in $S^{-1}\mathcal{C}$, see
+Lemma \ref{lemma-multiplicative-system}.)
+The relation $\text{id}_X = s^{-1}gf$ in $S^{-1}\mathcal{C}$ means
+there exists a commutative diagram
+$$
+\xymatrix{
+ & X' \ar[d]^u & \\
+X \ar[ru]^{gf} \ar[r]^{f'} \ar[rd]_{\text{id}_X} &
+X'' &
+X \ar[lu]_s \ar[l]_{s'} \ar[ld]^{\text{id}_X} \\
+& X \ar[u]_v &
+}
+$$
+for some morphisms $f', u, v$ and $s' \in S$. Hence $ugf = s' \in S$.
+Similarly, using that $\text{id}_Y = fht^{-1}$ one proves that
$fhw \in S$ for some $w$. We conclude that $f \in S'$. Thus
$S' = \hat S$. Once we prove that $S' = \hat S$ is a
multiplicative system, it follows that it is the smallest saturated
multiplicative system containing $S$, since by MS4 any saturated
multiplicative system containing $S$ contains $S'$.
+
+\medskip\noindent
+Our remarks above take care of MS1 and MS4, so to finish the proof of the
+lemma we have to show that LMS2, RMS2, LMS3, RMS3 hold for $\hat S$.
+Let us check that LMS2 holds for $\hat S$. Suppose we have a solid diagram
+$$
+\xymatrix{
+X \ar[d]_t \ar[r]_g & Y \ar@{..>}[d]^s \\
+Z \ar@{..>}[r]^f & W
+}
+$$
+with $t \in \hat S$. Pick a morphism $a : Z \to Z'$ such that
+$at \in S$. Then we can use LMS2 for $S$ to find a commutative diagram
+$$
+\xymatrix{
+X \ar[d]_t \ar[r]_g & Y \ar[dd]^s \\
+Z \ar[d]_a \\
+Z' \ar[r]^{f'} & W
+}
+$$
+and setting $f = f' \circ a$ we win. The proof of RMS2 is dual to this.
+Finally, suppose given a pair of morphisms $f, g : X \to Y$ and
+$t \in \hat S$ with target $X$ such that $ft = gt$.
+Then we pick a morphism $b$ such that $tb \in S$. Then
+$ftb = gtb$ which implies by LMS3 for $S$ that there exists an $s \in S$
+with source $Y$ such that $sf = sg$ as desired. The proof of
+RMS3 is dual to this.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Formal properties}
+\label{section-formal-cat-cat}
+
+\noindent
+In this section we discuss some formal properties of the
+$2$-category of categories. This will lead us to the definition
+of a (strict) $2$-category later.
+
+\medskip\noindent
+Let us denote $\Ob(\textit{Cat})$ the class of all categories.
+For every pair of categories
+$\mathcal{A}, \mathcal{B} \in \Ob(\textit{Cat})$
+we have the ``small'' category of functors
+$\text{Fun}(\mathcal{A}, \mathcal{B})$.
Composition of transformations of functors such as
+$$
+\xymatrix{
+\mathcal{A}
+\rruppertwocell^{F''}{t'}
+\ar[rr]_(.3){F'}
+\rrlowertwocell_F{t}
+& &
+\mathcal{B}
+}
+\text{ composes to }
+\xymatrix{
+\mathcal{A}
+\rrtwocell^{F''}_F{\ \ t \circ t'}
+& &
+\mathcal{B}
+}
+$$
+is called {\it vertical} composition. We will use the usual
+symbol $\circ$ for this. Next, we will define {\it horizontal}
+composition. In order to do this we explain a bit more
+of the structure at hand.
+
+\medskip\noindent
+Namely for every triple
+of categories $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$
+there is a composition law
+$$
+\circ : \Ob(\text{Fun}(\mathcal{B}, \mathcal{C}))
+\times
+\Ob(\text{Fun}(\mathcal{A}, \mathcal{B}))
+\longrightarrow
+\Ob(\text{Fun}(\mathcal{A}, \mathcal{C}))
+$$
+coming from composition of functors. This composition law
+is associative, and identity functors act as units. In other
+words -- forgetting about transformations of functors --
+we see that $\textit{Cat}$ forms a category. How does
+this structure interact with the morphisms between functors?
+
+\medskip\noindent
+Well, given $t : F \to F'$ a transformation of
+functors $F, F' : \mathcal{A} \to \mathcal{B}$ and
+a functor
+$G : \mathcal{B} \to \mathcal{C}$ we can define
+a transformation of functors
+$G\circ F \to G \circ F'$. We will denote this
+transformation ${}_Gt$. It is given by the formula
+$({}_Gt)_x = G(t_x) : G(F(x)) \to G(F'(x))$
for all $x \in \Ob(\mathcal{A})$.
+In this way composition
+with $G$ becomes a functor
+$$
+\text{Fun}(\mathcal{A}, \mathcal{B})
+\longrightarrow
+\text{Fun}(\mathcal{A}, \mathcal{C}).
+$$
+To see this you just have to check that
+${}_G(\text{id}_F) = \text{id}_{G \circ F}$ and that
+${}_G(t_1 \circ t_2) = {}_Gt_1 \circ {}_Gt_2$.
+Of course we also have that ${}_{\text{id}_\mathcal{A}}t = t$.
+
+\medskip\noindent
+Similarly, given $s : G \to G'$ a transformation of
+functors $G, G' : \mathcal{B} \to \mathcal{C}$ and
+$F : \mathcal{A} \to \mathcal{B}$ a functor we can define
+$s_F$ to be the transformation of functors
+$G\circ F \to G' \circ F$ given by
+$(s_F)_x = s_{F(x)} : G(F(x)) \to G'(F(x))$
for all $x \in \Ob(\mathcal{A})$. In this way
+composition with $F$ becomes a functor
+$$
+\text{Fun}(\mathcal{B}, \mathcal{C})
+\longrightarrow
+\text{Fun}(\mathcal{A}, \mathcal{C}).
+$$
+To see this you just have to check that
+$(\text{id}_G)_F = \text{id}_{G\circ F}$ and that
+$(s_1 \circ s_2)_F = s_{1, F} \circ s_{2, F}$.
+Of course we also have that $s_{\text{id}_\mathcal{B}} = s$.
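The two whiskering operations can be sketched in Haskell, where a polymorphic function between \texttt{Functor}s models a natural transformation. This is only an illustration for endofunctors on the category of types, not the general situation; the names \texttt{Nat}, \texttt{whiskerL}, and \texttt{whiskerR} are chosen here and are not standard library functions.

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Maybe (maybeToList)

-- A natural transformation t : F -> F' between Haskell Functors:
-- a function polymorphic in the object a.
type Nat f f' = forall a. f a -> f' a

-- Left whiskering ({}_G t)_x = G(t_x): apply t inside g via fmap.
whiskerL :: Functor g => Nat f f' -> g (f a) -> g (f' a)
whiskerL t = fmap t

-- Right whiskering (s_F)_x = s_{F(x)}: instantiate s at the object f a.
whiskerR :: Nat g g' -> g (f a) -> g' (f a)
whiskerR s = s

-- maybeToList is a natural transformation Maybe -> [].
exampleL :: [[Int]]
exampleL = whiskerL maybeToList [Just 1, Nothing]   -- [[1],[]]

exampleR :: [String]
exampleR = whiskerR maybeToList (Just "ab")         -- ["ab"]
```

Note that \texttt{whiskerR} is just instantiation of the polymorphic component, while \texttt{whiskerL} genuinely uses the functoriality of $G$, mirroring the formulas in the text.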
+
+\medskip\noindent
+These constructions satisfy the additional properties
+$$
+{}_{G_1}({}_{G_2}t) = {}_{G_1\circ G_2}t,
+\ (s_{F_1})_{F_2} = s_{F_1 \circ F_2},
+\text{ and }{}_H(s_F) = ({}_Hs)_F
+$$
+whenever these make sense.
+Finally, given functors $F, F' : \mathcal{A} \to \mathcal{B}$,
+and $G, G' : \mathcal{B} \to \mathcal{C}$ and transformations
+$t : F \to F'$, and $s : G \to G'$ the following
+diagram is commutative
+$$
+\xymatrix{
+G \circ F \ar[r]^{{}_Gt} \ar[d]_{s_F}
+&
+G \circ F' \ar[d]^{s_{F'}} \\
+G' \circ F \ar[r]_{{}_{G'}t}
+&
+G' \circ F'
+}
+$$
+in other words ${}_{G'}t \circ s_F = s_{F'}\circ {}_Gt$.
+To prove this we just consider what happens on
+any object $x \in \Ob(\mathcal{A})$:
+$$
+\xymatrix{
+G(F(x)) \ar[r]^{G(t_x)} \ar[d]_{s_{F(x)}}
+&
+G(F'(x)) \ar[d]^{s_{F'(x)}} \\
+G'(F(x)) \ar[r]_{G'(t_x)}
+&
+G'(F'(x))
+}
+$$
+which is commutative because $s$ is a transformation
+of functors. This compatibility relation allows us
+to define horizontal composition.
+
+\begin{definition}
+\label{definition-horizontal-composition}
+Given a diagram as in the left hand side of:
+$$
+\xymatrix{
+\mathcal{A}
+\rtwocell^F_{F'}{t}
+&
+\mathcal{B}
+\rtwocell^G_{G'}{s}
+&
+\mathcal{C}
+}
+\text{ gives }
+\xymatrix{
+\mathcal{A}
+\rrtwocell^{G \circ F} _{G' \circ F'}{\ \ s \star t}
+& &
+\mathcal{C}
+}
+$$
+we define the {\it horizontal} composition $s \star t$ to be the
+transformation of functors ${}_{G'}t \circ s_F = s_{F'}\circ {}_Gt$.
+\end{definition}
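Continuing the Haskell sketch (again only for endofunctors on the category of types, with ad hoc names \texttt{Nat}, \texttt{horiz1}, \texttt{horiz2}), both formulas for $s \star t$ can be written down directly; \texttt{Compose} from \texttt{base} models the composite functor $G \circ F$, and the compatibility relation says the two formulas always agree.

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Compose (Compose(..))
import Data.Maybe (maybeToList)

type Nat f f' = forall a. f a -> f' a

-- The two formulas for s * t from the definition; by the
-- compatibility relation in the text they always agree.
horiz1, horiz2 :: (Functor g, Functor g')
               => Nat g g' -> Nat f f' -> Compose g f a -> Compose g' f' a
horiz1 s t (Compose x) = Compose (s (fmap t x))   -- s_{F'} after {}_G t
horiz2 s t (Compose x) = Compose (fmap t (s x))   -- {}_{G'} t after s_F

-- Example: s = maybeToList : Maybe -> [], t = reverse : [] -> [].
ex1, ex2 :: [[Int]]
ex1 = getCompose (horiz1 maybeToList reverse (Compose (Just [1,2,3])))
ex2 = getCompose (horiz2 maybeToList reverse (Compose (Just [1,2,3])))
-- both are [[3,2,1]]
```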
+
+\noindent
+Now we see that we may recover our previously constructed
+transformations ${}_Gt$ and $s_F$ as
+$ {}_Gt = \text{id}_G \star t $ and $ s_F = s \star \text{id}_F $.
+Furthermore, all of the rules we found above are consequences of
+the properties stated in the lemma that follows.
+
+\begin{lemma}
+\label{lemma-properties-2-cat-cats}
+The horizontal and vertical compositions have the following
+properties
+\begin{enumerate}
+\item $\circ$ and $\star$ are associative,
+\item the identity transformations $\text{id}_F$
+are units for $\circ$,
+\item the identity transformations of the identity functors
+$\text{id}_{\text{id}_\mathcal{A}}$
+are units for $\star$ and $\circ$, and
+\item given a diagram
+$$
+\xymatrix{
+\mathcal{A}
+\rruppertwocell^F{t}
+\ar[rr]_(.3){F'}
+\rrlowertwocell_{F''}{t'}
+& &
+\mathcal{B}
+\rruppertwocell^G{s}
+\ar[rr]_(.3){G'}
+\rrlowertwocell_{G''}{s'}
+& &
+\mathcal{C}
+}
+$$
+we have $ (s' \circ s) \star (t' \circ t) = (s' \star t') \circ (s \star t)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
In terms of our previous notation, the last statement translates into
the following equation
+$$
+s'_{F''}
+\circ
+{}_{G'}t'
+\circ
+s_{F'}
+\circ
+{}_Gt
+=
+(s' \circ s)_{F''}
+\circ
+{}_G(t' \circ t).
+$$
Applying the relation ${}_{G'}t' \circ s_{F'} = s_{F''} \circ {}_Gt'$
proved above to the middle two terms, we may rewrite the left hand side as
$
s'_{F''}
\circ
s_{F''}
\circ
{}_Gt'
\circ
{}_Gt
$
which equals the right hand side by the identities
$(s_1 \circ s_2)_F = s_{1, F} \circ s_{2, F}$ and
${}_G(t_1 \circ t_2) = {}_Gt_1 \circ {}_Gt_2$ established earlier.
+\end{proof}
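The interchange law of part (4) can also be tested in the same Haskell sketch (illustrative names, endofunctors on types only): vertical composition is ordinary function composition of components, and both sides of the law can be compared on a sample input.

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Compose (Compose(..))
import Data.Maybe (maybeToList)

type Nat f f' = forall a. f a -> f' a

-- Vertical composition: componentwise function composition.
vert :: Nat g h -> Nat f g -> Nat f h
vert s t = s . t

-- Horizontal composition s * t (one of the two equal formulas).
horiz :: Functor g => Nat g g' -> Nat f f' -> Nat (Compose g f) (Compose g' f')
horiz s t (Compose x) = Compose (s (fmap t x))

-- Sample transformations: t, s : Maybe -> [] and t', s' : [] -> [].
-- Interchange: (s' o s) * (t' o t)  =  (s' * t') o (s * t).
lhs, rhs :: Compose Maybe Maybe a -> Compose [] [] a
lhs = horiz (vert (take 1) maybeToList) (vert reverse maybeToList)
rhs = vert (horiz (take 1) reverse) (horiz maybeToList maybeToList)
```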
+
+\noindent
+Another way of formulating condition (4) of the lemma is
+that composition of functors and horizontal composition
of transformations of functors gives rise to a functor
+$$
+(\circ, \star) :
+\text{Fun}(\mathcal{B}, \mathcal{C})
+\times
+\text{Fun}(\mathcal{A}, \mathcal{B})
+\longrightarrow
+\text{Fun}(\mathcal{A}, \mathcal{C})
+$$
+whose source is the product category,
+see Definition \ref{definition-product-category}.
+
+\section{2-categories}
+\label{section-2-categories}
+
+\noindent
+We will give a definition of (strict) $2$-categories as they appear
+in the setting of stacks. Before you read this take a look at
+Section \ref{section-formal-cat-cat} and
+Example \ref{example-2-1-category-of-categories}.
+Basically, you take this example
+and you write out all the rules satisfied by the objects, $1$-morphisms
+and $2$-morphisms in that example.
+
+\begin{definition}
+\label{definition-2-category}
+A (strict) {\it $2$-category} $\mathcal{C}$ consists of the following data
+\begin{enumerate}
+\item A set of objects $\Ob(\mathcal{C})$.
+\item For each pair $x, y \in \Ob(\mathcal{C})$
+a category $\Mor_\mathcal{C}(x, y)$. The objects of
+$\Mor_\mathcal{C}(x, y)$ will be called {\it $1$-morphisms}
+and denoted $F : x \to y$. The morphisms between these $1$-morphisms
+will be called {\it $2$-morphisms} and denoted $t : F' \to F$.
+The composition of $2$-morphisms in $\Mor_\mathcal{C}(x, y)$
+will be called {\it vertical} composition and will be
+denoted $t \circ t'$ for $t : F' \to F$ and $t' : F'' \to F'$.
+\item For each triple $x, y, z\in \Ob(\mathcal{C})$ a
+functor
+$$
+(\circ, \star) :
+\Mor_\mathcal{C}(y, z) \times \Mor_\mathcal{C}(x, y)
+\longrightarrow
+\Mor_\mathcal{C}(x, z).
+$$
+The image of the pair of $1$-morphisms $(F, G)$ on the left hand side
+will be called the {\it composition} of $F$ and $G$, and denoted
+$F\circ G$. The image of the pair of $2$-morphisms $(t, s)$ will
+be called the {\it horizontal} composition and denoted $t \star s$.
+\end{enumerate}
+These data are to satisfy the following rules:
+\begin{enumerate}
+\item The set of objects together with the set of $1$-morphisms endowed
+with composition of $1$-morphisms forms a category.
+\item Horizontal composition of $2$-morphisms is associative.
+\item The identity $2$-morphism $\text{id}_{\text{id}_x}$
+of the identity $1$-morphism $\text{id}_x$ is a unit for
+horizontal composition.
+\end{enumerate}
+\end{definition}
+
+\noindent
+This is obviously not a very pleasant type of object to work with.
+On the other hand, there are lots of examples where it is quite clear
+how you work with it. The only example we have so far is that of
+the $2$-category whose objects are a given collection of categories,
+$1$-morphisms are functors between these categories,
+and $2$-morphisms are natural transformations of functors, see
+Section \ref{section-formal-cat-cat}.
+As far as this text is concerned all $2$-categories will be
+sub $2$-categories of this example. Here is what it means to be
+a sub $2$-category.
+
+\begin{definition}
+\label{definition-sub-2-category}
+Let $\mathcal{C}$ be a $2$-category.
A {\it sub $2$-category} $\mathcal{C}'$ of $\mathcal{C}$ is given by a subset
$\Ob(\mathcal{C}')$ of $\Ob(\mathcal{C})$
and subcategories $\Mor_{\mathcal{C}'}(x, y)$ of the
categories $\Mor_\mathcal{C}(x, y)$ for all
$x, y \in \Ob(\mathcal{C}')$ such that these, together with
the operations $\circ$ (composition of $1$-morphisms), $\circ$ (vertical
composition of $2$-morphisms), and $\star$ (horizontal composition),
form a $2$-category.
+\end{definition}
+
+\begin{remark}
+\label{remark-big-2-categories}
+Big $2$-categories.
+In many texts a $2$-category is allowed to have a class of
+objects (but hopefully a ``class of classes'' is not allowed).
+We will allow these ``big'' $2$-categories as well, but only
+in the following list of cases (to be updated as we go along):
+\begin{enumerate}
+\item The $2$-category of categories $\textit{Cat}$.
+\item The $(2, 1)$-category of categories $\textit{Cat}$.
+\item The $2$-category of groupoids $\textit{Groupoids}$;
+this is a $(2, 1)$-category.
+\item The $2$-category of fibred categories over a fixed category.
+\item The $(2, 1)$-category of fibred categories over a fixed category.
+\end{enumerate}
+See Definition \ref{definition-2-1-category}.
+Note that in each case the class of objects of the $2$-category
$\mathcal{C}$ is a proper class, but for all objects
$x, y \in \Ob(\mathcal{C})$
+the category $\Mor_\mathcal{C}(x, y)$ is ``small'' (according to
+our conventions).
+\end{remark}
+
+\noindent
+The notion of equivalence of categories that we defined in Section
+\ref{section-definition-categories} extends to the more general setting of
+$2$-categories as follows.
+
+\begin{definition}
+\label{definition-equivalence}
+Two objects $x, y$ of a $2$-category are {\it equivalent} if there exist
+$1$-morphisms $F : x \to y$ and $G : y \to x$ such that $F \circ G$ is
+$2$-isomorphic to $\text{id}_y$ and $G \circ F$ is $2$-isomorphic to
+$\text{id}_x$.
+\end{definition}
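\noindent
For instance, in the $2$-category $\textit{Cat}$ two categories are
equivalent as objects if and only if they are equivalent categories in
the sense of Section \ref{section-definition-categories}: a
$2$-isomorphism between functors is exactly an isomorphism of functors,
so the condition above says $F \circ G \cong \text{id}_y$ and
$G \circ F \cong \text{id}_x$.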
+
+\noindent
+Sometimes we need to say what it means to have a functor from a
+category into a $2$-category.
+
+\begin{definition}
+\label{definition-functor-into-2-category}
+Let $\mathcal{A}$ be a category and let $\mathcal{C}$ be a $2$-category.
+\begin{enumerate}
\item A {\it functor} from an ordinary category into a $2$-category
will ignore the
$2$-morphisms unless mentioned otherwise. In other words, it will be a
``usual'' functor into the category obtained from the $2$-category by
forgetting all the $2$-morphisms.
+\item A {\it weak functor}, or
+a {\it pseudo functor} $\varphi$ from $\mathcal{A}$ into the 2-category
+$\mathcal{C}$ is given by the following data
+\begin{enumerate}
+\item a map $\varphi : \Ob(\mathcal{A}) \to \Ob(\mathcal{C})$,
+\item for every pair $x, y\in \Ob(\mathcal{A})$, and every
+morphism $f : x \to y$ a $1$-morphism $\varphi(f) : \varphi(x) \to \varphi(y)$,
\item for every $x \in \Ob(\mathcal{A})$ a $2$-morphism
+$\alpha_x : \text{id}_{\varphi(x)} \to \varphi(\text{id}_x)$, and
+\item for every pair of composable morphisms $f : x \to y$,
+$g : y \to z$ of $\mathcal{A}$ a $2$-morphism
+$\alpha_{g, f} : \varphi(g \circ f) \to \varphi(g) \circ \varphi(f)$.
+\end{enumerate}
+These data are subject to the following conditions:
+\begin{enumerate}
+\item the $2$-morphisms $\alpha_x$ and $\alpha_{g, f}$ are all
+isomorphisms,
+\item for any morphism $f : x \to y$ in $\mathcal{A}$ we have
+$\alpha_{\text{id}_y, f} = \alpha_y \star \text{id}_{\varphi(f)}$:
+$$
+\xymatrix{
+\varphi(x)
+\rrtwocell^{\varphi(f)}_{\varphi(f)}{\ \ \ \ \text{id}_{\varphi(f)}}
+& &
+\varphi(y)
+\rrtwocell^{\text{id}_{\varphi(y)}}_{\varphi(\text{id}_y)}{\alpha_y}
+& &
+\varphi(y)
+}
+=
+\xymatrix{
+\varphi(x)
+\rrtwocell^{\varphi(f)}_{\varphi(\text{id}_y) \circ \varphi(f)}{\ \ \ \ \alpha_{\text{id}_y, f}}
+& &
+\varphi(y)
+}
+$$
+\item for any morphism $f : x \to y$ in $\mathcal{A}$ we have
+$\alpha_{f, \text{id}_x} = \text{id}_{\varphi(f)} \star \alpha_x$,
+\item for any triple of composable morphisms
+$f : w \to x$, $g : x \to y$, and $h : y \to z$ of $\mathcal{A}$
+we have
+$$
+(\text{id}_{\varphi(h)} \star \alpha_{g, f})
+\circ
+\alpha_{h, g \circ f}
+=
+(\alpha_{h, g} \star \text{id}_{\varphi(f)})
+\circ
+\alpha_{h \circ g, f}
+$$
in other words the following diagram, whose objects are
$1$-morphisms and whose arrows are $2$-morphisms, commutes
+$$
+\xymatrix{
+\varphi(h \circ g \circ f)
+\ar[d]_{\alpha_{h, g \circ f}}
+\ar[rr]_{\alpha_{h \circ g, f}}
+& &
+\varphi(h \circ g) \circ \varphi(f)
+\ar[d]^{\alpha_{h, g} \star \text{id}_{\varphi(f)}} \\
+\varphi(h) \circ \varphi(g \circ f)
+\ar[rr]^{\text{id}_{\varphi(h)} \star \alpha_{g, f}}
+& &
+\varphi(h) \circ \varphi(g) \circ \varphi(f)
+}
+$$
+\end{enumerate}
+\end{enumerate}
+\end{definition}
+
+\noindent
+Again this is not a very workable notion, but it does sometimes come up.
There is a theorem that says that any pseudo functor is equivalent to
a functor. Finally, there are the notions of
+{\it functor between $2$-categories}, and
+{\it pseudo functor between $2$-categories}.
+This last notion leads us into $3$-category territory.
+We would like to avoid having to define this at almost any cost!
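\medskip\noindent
To keep a concrete example in mind (a standard illustration, not spelled
out in the text above): let $\mathcal{C}$ be a category with fibre
products. One obtains a pseudo functor from $\mathcal{C}^{opp}$ into the
$2$-category $\textit{Cat}$ by sending an object $U$ to the category
$\mathcal{C}/U$ of objects over $U$, and a morphism $f : V \to U$ to a
base change functor $f^* : \mathcal{C}/U \to \mathcal{C}/V$ defined by
choosing a fibre product $V \times_U x$ for every object $x \to U$ of
$\mathcal{C}/U$. These choices are unique only up to unique isomorphism,
so for composable morphisms $g : W \to V$ and $f : V \to U$ of
$\mathcal{C}$ the functors $(f \circ g)^*$ and $g^* \circ f^*$ are in
general not equal; instead the universal property of fibre products
supplies canonical invertible $2$-morphisms
$$
(f \circ g)^* \longrightarrow g^* \circ f^*,
\quad
\text{id}_{\mathcal{C}/U} \longrightarrow (\text{id}_U)^*
$$
playing the roles of the $\alpha_{g, f}$ and $\alpha_x$, and one checks
that these satisfy the conditions of
Definition \ref{definition-functor-into-2-category}.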
+
+\section{(2, 1)-categories}
+\label{section-2-1-categories}
+
+\noindent
+Some $2$-categories have
+the property that all $2$-morphisms are isomorphisms. These will
+play an important role in the following, and they are easier to work with.
+
+\begin{definition}
+\label{definition-2-1-category}
+A (strict) {\it $(2, 1)$-category} is a $2$-category in which all
+$2$-morphisms are isomorphisms.
+\end{definition}
+
+\begin{example}
+\label{example-2-1-category-of-categories}
+The $2$-category $\textit{Cat}$, see Remark \ref{remark-big-2-categories},
+can be turned into a $(2, 1)$-category by only allowing isomorphisms of
+functors as $2$-morphisms.
+\end{example}
+
+\noindent
+In fact, more generally any $2$-category
+$\mathcal{C}$ produces a $(2, 1)$-category by considering the sub $2$-category
+$\mathcal{C}'$ with the same objects and $1$-morphisms but whose
+$2$-morphisms are the invertible $2$-morphisms of $\mathcal{C}$.
+In this situation we will say ``{\it let $\mathcal{C}'$ be
+the $(2, 1)$-category associated to $\mathcal{C}$}'' or similar.
+For example, the $(2, 1)$-category of groupoids means the
+$2$-category whose objects are groupoids, whose
+$1$-morphisms are functors and whose $2$-morphisms are
+isomorphisms of functors. Except that this is a bad example as a
+transformation between functors between groupoids is automatically
+an isomorphism!
+
+\begin{remark}
+\label{remark-other-2-categories}
+Thus there are variants of the construction of
+Example \ref{example-2-1-category-of-categories}
+above where we look at the $2$-category of groupoids,
+or categories fibred in groupoids over a fixed
+category, or stacks. And so on.
+\end{remark}
+
+
+
+
+
+
+\section{2-fibre products}
+\label{section-2-fibre-products}
+
+\noindent
+In this section we introduce $2$-fibre products. Suppose that $\mathcal{C}$
+is a 2-category. We say that a diagram
+$$
+\xymatrix{
+w \ar[r] \ar[d] & y \ar[d] \\
+x \ar[r] & z }
+$$
+2-commutes if the two 1-morphisms $w \to y \to z$ and $w \to x \to z$ are
+2-isomorphic. In a 2-category it is more natural to ask for 2-commutativity
+of diagrams than for actually commuting diagrams. (Indeed, some may say that
+we should not work with strict 2-categories at all, and in a ``weak''
+2-category the notion of a commutative diagram of 1-morphisms does not even
+make sense.) Correspondingly the notion of a fibre product has to be adjusted.
+
+\medskip\noindent
+Let $\mathcal{C}$ be a $2$-category. Let $x, y, z\in \Ob(\mathcal{C})$ and
+$f\in \Mor_\mathcal{C}(x, z)$ and $g\in \Mor_{\mathcal C}(y, z)$.
+In order to define the 2-fibre product of $f$ and $g$ we are going to look at
+2-commutative diagrams
+$$
+\xymatrix{
+& w \ar[r]_a \ar[d]_b & x \ar[d]^{f} \\
+& y \ar[r]^{g} & z. }
+$$
+Now in the case of categories, the fibre product is a final object in the
+category of such diagrams. Correspondingly a 2-fibre product is a final object
+in a 2-category (see definition below). The {\it $2$-category
+of $2$-commutative diagrams over $f$ and $g$}
+is the $2$-category defined as follows:
+\begin{enumerate}
+\item Objects are quadruples $(w, a, b, \phi)$ as above where $\phi$
+is an invertible 2-morphism $\phi : f \circ a \to g \circ b$,
+\item 1-morphisms from $(w', a', b', \phi')$ to $(w, a, b, \phi)$ are given by
+$(k : w' \to w, \alpha : a' \to a \circ k, \beta : b' \to b \circ k)$
+such that
+$$
+\xymatrix{
+f \circ a'
+\ar[rr]_{\text{id}_f \star \alpha}
+\ar[d]_{\phi'}
+& &
+f \circ a \circ k
+\ar[d]^{\phi \star \text{id}_k}
+\\
+g \circ b'
+\ar[rr]^{\text{id}_g \star \beta}
+& &
+g \circ b \circ k
+}
+$$
+is commutative,
+\item given a second $1$-morphism
+$(k', \alpha', \beta') : (w'', a'', b'', \phi'') \to
(w', a', b', \phi')$ the composition of $1$-morphisms
+is given by the rule
+$$
+(k, \alpha, \beta) \circ (k', \alpha', \beta') =
+(k \circ k',
+(\alpha \star \text{id}_{k'}) \circ \alpha',
+(\beta \star \text{id}_{k'}) \circ \beta'),
+$$
+\item a 2-morphism between $1$-morphisms
+$(k_i, \alpha_i, \beta_i)$, $i = 1, 2$ with the same source and target
+is given by a 2-morphism $\delta : k_1 \to k_2$ such that
+$$
+\xymatrix{
+a'
+\ar[rd]_{\alpha_2}
+\ar[r]_{\alpha_1} &
+a \circ k_1
+\ar[d]^{\text{id}_a \star \delta} &
+&
+b \circ k_1
+\ar[d]_{\text{id}_b \star \delta} &
+b'
+\ar[l]^{\beta_1}
+\ar[ld]^{\beta_2}
+\\
+&
+a \circ k_2
+&
+&
+b \circ k_2
+&
+}
+$$
+commute,
+\item vertical composition of $2$-morphisms is given by
+vertical composition of the morphisms $\delta$ in $\mathcal{C}$, and
+\item horizontal composition of the diagram
+$$
+\xymatrix{
+(w'', a'', b'', \phi'')
+\rrtwocell^{(k'_1, \alpha'_1, \beta'_1)}_{(k'_2, \alpha'_2, \beta'_2)}{\delta'}
+& &
+(w', a', b', \phi')
+\rrtwocell^{(k_1, \alpha_1, \beta_1)}_{(k_2, \alpha_2, \beta_2)}{\delta}
+& &
+(w, a, b, \phi)
+}
+$$
+is given by the diagram
+$$
+\xymatrix@C=12pc{
+(w'', a'', b'', \phi'')
+\rtwocell^{(k_1 \circ k'_1, (\alpha_1 \star \text{id}_{k'_1}) \circ \alpha'_1, (\beta_1 \star \text{id}_{k'_1}) \circ \beta'_1)}_{(k_2 \circ k'_2, (\alpha_2 \star \text{id}_{k'_2}) \circ \alpha'_2, (\beta_2 \star \text{id}_{k'_2}) \circ \beta'_2)}{\ \ \ \delta \star \delta'}
+&
+(w, a, b, \phi)
+}
+$$
+\end{enumerate}
+Note that if $\mathcal{C}$ is actually a $(2, 1)$-category,
+the morphisms $\alpha$ and $\beta$ in (2) above are automatically
+also isomorphisms\footnote{In fact it seems in the $2$-category case
+that one could define another 2-category of 2-commutative diagrams where
+the direction of the arrows $\alpha$, $\beta$ is reversed, or even
+where the direction of only one of them is reversed. This is why
+we restrict to $(2, 1)$-categories later on.}.
+In addition the $2$-category of
+$2$-commutative diagrams is also a $(2, 1)$-category if $\mathcal{C}$ is
+a $(2, 1)$-category.
+
+\begin{definition}
+\label{definition-final-object-2-category}
+A {\it final object} of a $(2, 1)$-category
+$\mathcal{C}$ is an object $x$ such that
+\begin{enumerate}
+\item for every $y \in \Ob(\mathcal{C})$ there is a morphism $y \to x$,
+and
+\item every two morphisms $y \to x$ are isomorphic by a unique 2-morphism.
+\end{enumerate}
+\end{definition}
+
+\noindent
In the more general case of $2$-categories there are likely different
+flavours of final objects. We do not want to get into this and hence
+we only define $2$-fibre products in the $(2, 1)$-case.
+
+\begin{definition}
+\label{definition-2-fibre-products}
+Let $\mathcal{C}$ be a $(2, 1)$-category.
+Let $x, y, z\in \Ob(\mathcal{C})$ and
+$f\in \Mor_\mathcal{C}(x, z)$
+and $g\in \Mor_{\mathcal C}(y, z)$. A
+{\it 2-fibre product of $f$ and $g$} is
a final object in the $(2, 1)$-category of 2-commutative diagrams
+described above. If a 2-fibre product exists we
+will denote it $x \times_z y\in \Ob(\mathcal{C})$, and denote the
+required morphisms $p\in \Mor_{\mathcal C}(x \times_z y, x)$ and
+$q\in \Mor_{\mathcal C}(x \times_z y, y)$ making the diagram
+$$
+\xymatrix{
+& x \times_z y \ar[r]^{p} \ar[d]_q & x \ar[d]^{f} \\
+& y \ar[r]^{g} & z }
+$$
+2-commute and we will denote the given invertible
+2-morphism exhibiting this by $\psi : f \circ p \to g \circ q$.
+\end{definition}
+
+\noindent
+Thus the following universal property holds: for any
+$w\in \Ob(\mathcal{C})$ and morphisms
+$a \in \Mor_{\mathcal C}(w, x)$ and
+$b \in \Mor_\mathcal{C}(w, y)$ with a given 2-isomorphism
+$\phi : f \circ a \to g\circ b$
+there is a $\gamma \in \Mor_{\mathcal C}(w, x \times_z y)$
+making the diagram
+$$
+\xymatrix{
+w\ar[rrrd]^a \ar@{-->}[rrd]_\gamma \ar[rrdd]_b & & \\
+& & x \times_z y \ar[r]_p \ar[d]_q & x \ar[d]^{f} \\
+& & y \ar[r]^{g} & z }
+$$
2-commute, such that for suitable choices of $2$-isomorphisms
$a \to p \circ \gamma$ and $b \to q \circ \gamma$
+the diagram
+$$
+\xymatrix{
+f \circ a \ar[r] \ar[d]_\phi &
+f \circ p \circ \gamma
+\ar[d]^{\psi \star \text{id}_\gamma}
+\\
+g\circ b
+\ar[r]
+&
+g \circ q \circ \gamma
+}
+$$
+commutes. Moreover $\gamma$ is unique up to isomorphism.
+Of course the exact properties are finer than this. All of the
+cases of 2-fibre products that we will need later on come from the following
+example of 2-fibre products in the 2-category of categories.
+
+\begin{example}
+\label{example-2-fibre-product-categories}
+Let $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$ be categories.
+Let $F : \mathcal{A} \to \mathcal{C}$ and $G : \mathcal{B} \to \mathcal{C}$
+be functors. We define a category
+$\mathcal{A} \times_\mathcal{C} \mathcal{B}$ as follows:
+\begin{enumerate}
+\item an object of $\mathcal{A} \times_\mathcal{C} \mathcal{B}$ is a triple
+$(A, B, f)$, where $A\in \Ob(\mathcal{A})$, $B\in \Ob(\mathcal{B})$,
+and $f : F(A) \to G(B)$ is an isomorphism in $\mathcal{C}$,
+\item a morphism $(A, B, f) \to (A', B', f')$ is given by a pair $(a, b)$, where
+$a : A \to A'$ is a morphism in $\mathcal{A}$, and $b : B \to B'$ is a
+morphism in $\mathcal{B}$ such that the diagram
+$$
+\xymatrix{
+F(A) \ar[r]^f \ar[d]^{F(a)} & G(B) \ar[d]^{G(b)} \\
+F(A') \ar[r]^{f'} & G(B')
+}
+$$
+is commutative.
+\end{enumerate}
+Moreover, we define functors
+$p : \mathcal{A} \times_\mathcal{C}\mathcal{B} \to \mathcal{A}$
+and
+$q : \mathcal{A} \times_\mathcal{C}\mathcal{B} \to \mathcal{B}$
+by setting
+$$
+p(A, B, f) = A, \quad q(A, B, f) = B,
+$$
+in other words, these are the forgetful functors.
+We define a transformation of functors $\psi : F \circ p \to G \circ q$.
+On the object $\xi = (A, B, f)$ it is given by
+$\psi_\xi = f : F(p(\xi)) = F(A) \to G(B) = G(q(\xi))$.
+\end{example}
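\medskip\noindent
As a sanity check (an observation, not part of the example above): if
$\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$ are discrete categories,
i.e., sets viewed as categories with only identity morphisms, then the
only isomorphisms in $\mathcal{C}$ are identities and the construction of
Example \ref{example-2-fibre-product-categories} returns the discrete
category with objects
$$
\{(A, B) \in \Ob(\mathcal{A}) \times \Ob(\mathcal{B})
\mid F(A) = G(B)\},
$$
i.e., the usual fibre product of sets. In general the $2$-fibre product
replaces the equality $F(A) = G(B)$ by the datum of an isomorphism
$f : F(A) \to G(B)$.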
+
+\begin{lemma}
+\label{lemma-2-fibre-product-categories}
+In the $(2, 1)$-category of categories $2$-fibre products exist and
+are given by the construction of
+Example \ref{example-2-fibre-product-categories}.
+\end{lemma}
+
+\begin{proof}
+Let us check the universal property:
+let $\mathcal{W}$ be a category, let
+$a : \mathcal{W} \to \mathcal{A}$ and
+$b : \mathcal{W} \to \mathcal{B}$ be functors, and
+let $t : F \circ a \to G \circ b$ be an isomorphism of functors.
+
+\medskip\noindent
+Consider the functor
+$\gamma : \mathcal{W} \to \mathcal{A} \times_\mathcal{C}\mathcal{B}$
+given by $W \mapsto (a(W), b(W), t_W)$.
(The verification that this is a functor is omitted.)
+Moreover, consider $\alpha : a \to p \circ \gamma$ and
+$\beta : b \to q \circ \gamma$ obtained from the identities
+$p \circ \gamma = a$ and $q \circ \gamma = b$. Then it is
+clear that $(\gamma, \alpha, \beta)$ is a morphism
from $(\mathcal{W}, a, b, t)$ to
+$(\mathcal{A} \times_\mathcal{C} \mathcal{B}, p, q, \psi)$.
+
+\medskip\noindent
+Let
+$(k, \alpha', \beta') :
(\mathcal{W}, a, b, t) \to (\mathcal{A} \times_\mathcal{C} \mathcal{B}, p, q, \psi)$
+be a second such morphism. For an object $W$ of $\mathcal{W}$ let us write
+$k(W) = (a_k(W), b_k(W), t_{k, W})$. Hence $p(k(W)) = a_k(W)$ and so on.
The $2$-morphism $\alpha'$ corresponds to functorial maps
$\alpha'_W : a(W) \to a_k(W)$. Since we are working in the
+$(2, 1)$-category of categories, in fact each of the maps
+$a(W) \to a_k(W)$ is an isomorphism. We can use these
+(and their counterparts $b(W) \to b_k(W)$) to get isomorphisms
+$$
+\delta_W :
+\gamma(W) = (a(W), b(W), t_W)
+\longrightarrow
+(a_k(W), b_k(W), t_{k, W}) = k(W).
+$$
+It is straightforward to show that $\delta$ defines a
+$2$-isomorphism between $\gamma$ and $k$ in the $2$-category
+of $2$-commutative diagrams as desired.
+\end{proof}
+
+\begin{remark}
+\label{remark-other-description-2-fibre-product}
+Let $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$ be categories.
+Let $F : \mathcal{A} \to \mathcal{C}$ and $G : \mathcal{B} \to \mathcal{C}$
+be functors. Another, slightly more symmetrical, construction of a $2$-fibre
+product $\mathcal{A} \times_\mathcal{C} \mathcal{B}$ is as follows.
+An object is a quintuple $(A, B, C, a, b)$ where $A, B, C$ are objects
+of $\mathcal{A}, \mathcal{B}, \mathcal{C}$ and where $a : F(A) \to C$
+and $b : G(B) \to C$ are isomorphisms. A morphism
+$(A, B, C, a, b) \to (A', B', C', a', b')$ is given by a triple
+of morphisms $A \to A', B \to B', C \to C'$ compatible with the morphisms
+$a, b, a', b'$. We can prove directly that this leads to a $2$-fibre
+product. However, it is easier to observe that the functor
+$(A, B, C, a, b) \mapsto (A, B, b^{-1} \circ a)$ gives an equivalence
+from the category of quintuples to the category constructed in
+Example \ref{example-2-fibre-product-categories}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-functoriality-2-fibre-product}
+Let
+$$
+\xymatrix{
+& \mathcal{Y} \ar[d]_I \ar[rd]^K & \\
+\mathcal{X} \ar[r]^H \ar[rd]^L &
+\mathcal{Z} \ar[rd]^M & \mathcal{B} \ar[d]^G \\
+& \mathcal{A} \ar[r]^F & \mathcal{C}
+}
+$$
+be a $2$-commutative diagram of categories.
+A choice of isomorphisms
+$\alpha : G \circ K \to M \circ I$ and
+$\beta : M \circ H \to F \circ L$
+determines a morphism
+$$
+\mathcal{X} \times_\mathcal{Z} \mathcal{Y}
+\longrightarrow
+\mathcal{A} \times_\mathcal{C} \mathcal{B}
+$$
+of $2$-fibre products associated to this situation.
+\end{lemma}
+
+\begin{proof}
+Just use the functor
+$$
+(X, Y, \phi) \longmapsto (L(X), K(Y),
+\alpha^{-1}_Y \circ M(\phi) \circ \beta^{-1}_X)
+$$
+on objects and
+$$
+(a, b) \longmapsto (L(a), K(b))
+$$
+on morphisms.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equivalence-2-fibre-product}
+Assumptions as in Lemma \ref{lemma-functoriality-2-fibre-product}.
+\begin{enumerate}
+\item If $K$ and $L$ are faithful
+then the morphism
+$\mathcal{X} \times_\mathcal{Z} \mathcal{Y} \to
+\mathcal{A} \times_\mathcal{C} \mathcal{B}$
+is faithful.
+\item If $K$ and $L$ are fully faithful and $M$ is faithful
+then the morphism
+$\mathcal{X} \times_\mathcal{Z} \mathcal{Y} \to
+\mathcal{A} \times_\mathcal{C} \mathcal{B}$
+is fully faithful.
+\item If $K$ and $L$ are equivalences and $M$ is fully faithful
+then the morphism
+$\mathcal{X} \times_\mathcal{Z} \mathcal{Y} \to
+\mathcal{A} \times_\mathcal{C} \mathcal{B}$
+is an equivalence.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $(X, Y, \phi)$ and $(X', Y', \phi')$ be objects of
+$\mathcal{X} \times_\mathcal{Z} \mathcal{Y}$.
+Set $Z = H(X)$ and identify it with $I(Y)$ via $\phi$.
Also, identify $M(Z)$ with $F(L(X))$ via $\beta_X$ and
identify $M(Z)$ with $G(K(Y))$ via $\alpha_Y$. Similarly for
+$Z' = H(X')$ and $M(Z')$.
+The map on morphisms is the map
+$$
+\xymatrix{
+\Mor_\mathcal{X}(X, X')
+\times_{\Mor_\mathcal{Z}(Z, Z')}
+\Mor_\mathcal{Y}(Y, Y')
+\ar[d] \\
+\Mor_\mathcal{A}(L(X), L(X'))
+\times_{\Mor_\mathcal{C}(M(Z), M(Z'))}
+\Mor_\mathcal{B}(K(Y), K(Y'))
+}
+$$
+Hence parts (1) and (2) follow. Moreover, if $K$ and $L$
+are equivalences and $M$ is fully faithful, then any object
+$(A, B, \phi)$ is in the essential image for the following reasons:
+Pick $X$, $Y$ such that $L(X) \cong A$ and $K(Y) \cong B$.
+Then the fully faithfulness of $M$ guarantees that we can
+find an isomorphism $H(X) \cong I(Y)$. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-associativity-2-fibre-product}
+Let
+$$
+\xymatrix{
+\mathcal{A} \ar[rd] & & \mathcal{C} \ar[ld] \ar[rd] & & \mathcal{E} \ar[ld] \\
+& \mathcal{B} & & \mathcal{D}
+}
+$$
+be a diagram of categories and functors.
+Then there is a canonical isomorphism
+$$
+(\mathcal{A} \times_\mathcal{B} \mathcal{C}) \times_\mathcal{D} \mathcal{E}
+\cong
+\mathcal{A} \times_\mathcal{B} (\mathcal{C} \times_\mathcal{D} \mathcal{E})
+$$
+of categories.
+\end{lemma}
+
+\begin{proof}
+Just use the functor
+$$
+((A, C, \phi), E, \psi)
+\longmapsto
+(A, (C, E, \psi), \phi)
+$$
+if you know what I mean.
+\end{proof}
+
+\noindent
+Henceforth we do not write the parentheses when dealing with fibre products
+of more than 2 categories.
+
+\begin{lemma}
+\label{lemma-triple-2-fibre-product-pr02}
+Let
+$$
+\xymatrix{
+\mathcal{A} \ar[rd] & & \mathcal{C} \ar[ld] \ar[rd] & & \mathcal{E} \ar[ld] \\
+& \mathcal{B} \ar[rd]_F & & \mathcal{D} \ar[ld]^G \\
+& & \mathcal{F} &
+}
+$$
+be a commutative diagram of categories and functors.
+Then there is a canonical functor
+$$
+\text{pr}_{02} :
+\mathcal{A} \times_\mathcal{B} \mathcal{C} \times_\mathcal{D} \mathcal{E}
+\longrightarrow
+\mathcal{A} \times_\mathcal{F} \mathcal{E}
+$$
+of categories.
+\end{lemma}
+
+\begin{proof}
+If we write
+$\mathcal{A} \times_\mathcal{B} \mathcal{C}
+\times_\mathcal{D} \mathcal{E}$
+as
+$(\mathcal{A} \times_\mathcal{B} \mathcal{C})
+\times_\mathcal{D} \mathcal{E}$
+then we can just use the functor
+$$
+((A, C, \phi), E, \psi)
+\longmapsto
+(A, E, G(\psi) \circ F(\phi))
+$$
+if you know what I mean.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-2-fibre-product-erase-factor}
+Let
+$$
+\mathcal{A} \to
+\mathcal{B} \leftarrow \mathcal{C} \leftarrow \mathcal{D}
+$$
+be a diagram of categories and functors.
+Then there is a canonical isomorphism
+$$
+\mathcal{A} \times_\mathcal{B} \mathcal{C} \times_\mathcal{C} \mathcal{D}
+\cong
+\mathcal{A} \times_\mathcal{B} \mathcal{D}
+$$
+of categories.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
+We claim that this means you can work with these $2$-fibre products
+just like with ordinary fibre products. Here are some further lemmas
+that actually come up later.
+
+\begin{lemma}
+\label{lemma-diagonal-1}
+Let
+$$
+\xymatrix{
+\mathcal{C}_3 \ar[r] \ar[d] & \mathcal{S} \ar[d]^\Delta \\
+\mathcal{C}_1 \times \mathcal{C}_2 \ar[r]^{G_1 \times G_2} &
+\mathcal{S} \times \mathcal{S}
+}
+$$
+be a $2$-fibre product of categories.
+Then there is a canonical isomorphism
+$\mathcal{C}_3 \cong
+\mathcal{C}_1 \times_{G_1, \mathcal{S}, G_2} \mathcal{C}_2$.
+\end{lemma}
+
+\begin{proof}
+We may assume that $\mathcal{C}_3$ is the category
+$(\mathcal{C}_1 \times \mathcal{C}_2)\times_{\mathcal{S} \times \mathcal{S}}
+\mathcal{S}$ constructed in Example \ref{example-2-fibre-product-categories}.
+Hence an object is a triple
+$((X_1, X_2), S, \phi)$ where
+$\phi = (\phi_1, \phi_2) : (G_1(X_1), G_2(X_2)) \to (S, S)$
+is an isomorphism. Thus we can associate to this the triple
+$(X_1, X_2, \phi_2^{-1} \circ \phi_1)$.
+Conversely, if $(X_1, X_2, \psi)$ is an object of
+$\mathcal{C}_1 \times_{G_1, \mathcal{S}, G_2} \mathcal{C}_2$,
+then we can associate to this the triple
+$((X_1, X_2), G_2(X_2), (\psi, \text{id}_{G_2(X_2)}))$.
We claim that these constructions give mutually inverse functors.
+We omit describing how to deal with morphisms
+and showing they are mutually inverse.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal-2}
+Let
+$$
+\xymatrix{
+\mathcal{C}' \ar[r] \ar[d] & \mathcal{S} \ar[d]^\Delta \\
+\mathcal{C} \ar[r]^{G_1 \times G_2} &
+\mathcal{S} \times \mathcal{S}
+}
+$$
+be a $2$-fibre product of categories.
+Then there is a canonical isomorphism
+$$
+\mathcal{C}' \cong
+(\mathcal{C} \times_{G_1, \mathcal{S}, G_2} \mathcal{C})
+\times_{(p, q), \mathcal{C} \times \mathcal{C}, \Delta}
+\mathcal{C}.
+$$
+\end{lemma}
+
+\begin{proof}
+An object of the right hand side is given by
+$((C_1, C_2, \phi), C_3, \psi)$ where
+$\phi : G_1(C_1) \to G_2(C_2)$ is an isomorphism
+and $\psi = (\psi_1, \psi_2) : (C_1, C_2) \to (C_3, C_3)$ is
+an isomorphism. Hence we can associate to this the triple
+$(C_3, G_1(C_1), (G_1(\psi_1^{-1}), \phi^{-1} \circ G_2(\psi_2^{-1})))$
+which is an object of $\mathcal{C}'$.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibre-product-after-map}
+Let $\mathcal{A} \to \mathcal{C}$, $\mathcal{B} \to \mathcal{C}$
+and $\mathcal{C} \to \mathcal{D}$ be functors between categories.
+Then the diagram
+$$
+\xymatrix{
+\mathcal{A} \times_\mathcal{C} \mathcal{B} \ar[d] \ar[r] &
+\mathcal{A} \times_\mathcal{D} \mathcal{B} \ar[d] \\
\mathcal{C} \ar[r]^-{\Delta_{\mathcal{C}/\mathcal{D}}} &
+\mathcal{C} \times_\mathcal{D} \mathcal{C}
+}
+$$
+is a $2$-fibre product diagram.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
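\medskip\noindent
Here is a sketch of the omitted proof; we write $F : \mathcal{A} \to
\mathcal{C}$, $G : \mathcal{B} \to \mathcal{C}$, and $H : \mathcal{C}
\to \mathcal{D}$ for the given functors (these names are not part of the
statement). By Lemma \ref{lemma-2-fibre-product-categories} an object of
$\mathcal{C} \times_{(\mathcal{C} \times_\mathcal{D} \mathcal{C})}
(\mathcal{A} \times_\mathcal{D} \mathcal{B})$ consists of an object $C$
of $\mathcal{C}$, an object $(A, B, \chi)$ of
$\mathcal{A} \times_\mathcal{D} \mathcal{B}$, and isomorphisms
$\alpha : C \to F(A)$ and $\beta : C \to G(B)$ in $\mathcal{C}$ such
that $\chi \circ H(\alpha) = H(\beta)$. The comparison functor
$$
\mathcal{A} \times_\mathcal{C} \mathcal{B}
\longrightarrow
\mathcal{C} \times_{(\mathcal{C} \times_\mathcal{D} \mathcal{C})}
(\mathcal{A} \times_\mathcal{D} \mathcal{B}),
\quad
(A, B, \psi)
\longmapsto
(F(A), (A, B, H(\psi)), (\text{id}_{F(A)}, \psi))
$$
is fully faithful, and it is essentially surjective because an object as
above is isomorphic to the image of $(A, B, \beta \circ \alpha^{-1})$,
using $H(\beta \circ \alpha^{-1}) = \chi$. Hence the diagram is a
$2$-fibre product diagram.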
+
+\begin{lemma}
+\label{lemma-base-change-diagonal}
+Let
+$$
+\xymatrix{
+\mathcal{U} \ar[d] \ar[r] & \mathcal{V} \ar[d] \\
+\mathcal{X} \ar[r] & \mathcal{Y}
+}
+$$
+be a $2$-fibre product of categories. Then the diagram
+$$
+\xymatrix{
+\mathcal{U} \ar[d] \ar[r] &
+\mathcal{U} \times_\mathcal{V} \mathcal{U} \ar[d] \\
+\mathcal{X} \ar[r] &
+\mathcal{X} \times_\mathcal{Y} \mathcal{X}
+}
+$$
+is $2$-cartesian.
+\end{lemma}
+
+\begin{proof}
+This is a purely $2$-category theoretic statement, valid in any
+$(2, 1)$-category with $2$-fibre products. Explicitly, it follows
+from the following chain of equivalences:
+\begin{align*}
+\mathcal{X} \times_{(\mathcal{X} \times_\mathcal{Y} \mathcal{X})}
+(\mathcal{U} \times_\mathcal{V} \mathcal{U})
+& =
+\mathcal{X} \times_{(\mathcal{X} \times_\mathcal{Y} \mathcal{X})}
+((\mathcal{X} \times_\mathcal{Y} \mathcal{V})
+\times_\mathcal{V} (\mathcal{X} \times_\mathcal{Y} \mathcal{V})) \\
+& =
+\mathcal{X} \times_{(\mathcal{X} \times_\mathcal{Y} \mathcal{X})}
+(\mathcal{X} \times_\mathcal{Y} \mathcal{X}
+\times_\mathcal{Y} \mathcal{V}) \\
+& =
+\mathcal{X} \times_\mathcal{Y} \mathcal{V} = \mathcal{U}
+\end{align*}
+see
+Lemmas \ref{lemma-associativity-2-fibre-product} and
+\ref{lemma-2-fibre-product-erase-factor}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Categories over categories}
+\label{section-categories-over-categories}
+
+\noindent
+In this section we have a functor $p : \mathcal{S} \to \mathcal{C}$.
+We think of $\mathcal{S}$ as being on top and of $\mathcal{C}$ as being
+at the bottom. To make sure that everybody knows what we are talking about
+we define the $2$-category of categories over $\mathcal{C}$.
+
+\begin{definition}
+\label{definition-categories-over-C}
+Let $\mathcal{C}$ be a category.
+The {\it $2$-category of categories over $\mathcal{C}$}
+is the $2$-category defined as follows:
+\begin{enumerate}
+\item Its objects will be functors $p : \mathcal{S} \to \mathcal{C}$.
+\item Its $1$-morphisms $(\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be functors $G : \mathcal{S} \to \mathcal{S}'$ such that
+$p' \circ G = p$.
+\item Its $2$-morphisms $t : G \to H$ for
+$G, H : (\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be morphisms of functors
+such that $p'(t_x) = \text{id}_{p(x)}$
+for all $x \in \Ob(\mathcal{S})$.
+\end{enumerate}
+In this situation we will denote
+$$
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{S}, \mathcal{S}')
+$$
+the category of $1$-morphisms between
$(\mathcal{S}, p)$ and $(\mathcal{S}', p')$.
+\end{definition}
+
+\noindent
+In this $2$-category we define horizontal and vertical composition
+exactly as is done for $\textit{Cat}$ in Section \ref{section-formal-cat-cat}.
+The axioms of a $2$-category are satisfied for the same reason
that they hold in $\textit{Cat}$. To see this one can also use that
+the axioms hold in $\textit{Cat}$ and verify
+things such as ``vertical composition of $2$-morphisms over $\mathcal{C}$
+gives another $2$-morphism over $\mathcal{C}$''. This is clear.
+
+\medskip\noindent
+Analogously to the fibre of a map of spaces, we have the notion of a
+fibre category, and some notions of lifting associated to this
+situation.
+
+\begin{definition}
+\label{definition-fibre-category}
+Let $\mathcal{C}$ be a category.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category over $\mathcal{C}$.
+\begin{enumerate}
+\item The {\it fibre category} over an object $U\in \Ob(\mathcal{C})$
+is the category $\mathcal{S}_U$ with objects
+$$
+\Ob(\mathcal{S}_U) = \{x\in \Ob(\mathcal{S}) :
+p(x) = U\}
+$$
+and morphisms
+$$
+\Mor_{\mathcal{S}_U}(x, y) = \{ \phi \in \Mor_\mathcal{S}(x, y) :
+p(\phi) = \text{id}_U\}.
+$$
+\item A {\it lift} of an object $U \in \Ob(\mathcal{C})$
+is an object $x\in \Ob(\mathcal{S})$ such that $p(x) = U$, i.e.,
$x\in \Ob(\mathcal{S}_U)$. We will also sometimes say
+that {\it $x$ lies over $U$}.
+\item Similarly, a {\it lift} of a morphism $f : V \to U$ in $\mathcal{C}$
+is a morphism $\phi : y \to x$ in $\mathcal{S}$ such that $p(\phi) = f$.
+We sometimes say that {\it $\phi$ lies over $f$}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+There are some observations we could make here. For example if
+$F : (\mathcal{S}, p) \to (\mathcal{S}', p')$ is a $1$-morphism
+of categories over $\mathcal{C}$, then $F$ induces functors
+of fibre categories $F : \mathcal{S}_U \to \mathcal{S}'_U$.
+Similarly for $2$-morphisms.
+
+\medskip\noindent
+Here is the obligatory lemma describing the $2$-fibre product in the
+$(2, 1)$-category of categories over $\mathcal{C}$.
+
+\begin{lemma}
+\label{lemma-2-product-categories-over-C}
+Let $\mathcal{C}$ be a category.
+The $(2, 1)$-category of categories
+over $\mathcal{C}$ has 2-fibre products.
+Suppose that
+$F : \mathcal{X} \to \mathcal{S}$ and
+$G : \mathcal{Y} \to \mathcal{S}$ are morphisms of categories over
+$\mathcal{C}$.
+An explicit 2-fibre product
+$\mathcal{X} \times_\mathcal{S}\mathcal{Y}$ is given by the following
+description
+\begin{enumerate}
+\item an object of $\mathcal{X} \times_\mathcal{S} \mathcal{Y}$ is a quadruple
+$(U, x, y, f)$, where $U \in \Ob(\mathcal{C})$,
+$x\in \Ob(\mathcal{X}_U)$, $y\in \Ob(\mathcal{Y}_U)$,
+and $f : F(x) \to G(y)$ is an isomorphism in $\mathcal{S}_U$,
+\item a morphism $(U, x, y, f) \to (U', x', y', f')$ is given by a pair
+$(a, b)$, where $a : x \to x'$ is a morphism in $\mathcal{X}$, and
+$b : y \to y'$ is a
+morphism in $\mathcal{Y}$ such that
+\begin{enumerate}
+\item $a$ and $b$ induce the same morphism $U \to U'$, and
+\item the diagram
+$$
+\xymatrix{
+F(x) \ar[r]^f \ar[d]^{F(a)} & G(y) \ar[d]^{G(b)} \\
+F(x') \ar[r]^{f'} & G(y')
+}
+$$
+is commutative.
+\end{enumerate}
+\end{enumerate}
+The functors $p : \mathcal{X} \times_\mathcal{S}\mathcal{Y} \to \mathcal{X}$
+and $q : \mathcal{X} \times_\mathcal{S}\mathcal{Y} \to \mathcal{Y}$ are the
+forgetful functors in this case. The transformation $\psi : F \circ p \to
+G \circ q$ is given on the object $\xi = (U, x, y, f)$ by
+$\psi_\xi = f : F(p(\xi)) = F(x) \to G(y) = G(q(\xi))$.
+\end{lemma}
+
+\begin{proof}
+Let us check the universal property: let
+$p_\mathcal{W} : \mathcal{W}\to \mathcal{C}$
+be a category over $\mathcal{C}$, let $X : \mathcal{W} \to \mathcal{X}$ and
+$Y : \mathcal{W} \to \mathcal{Y}$ be functors over $\mathcal{C}$, and let
+$t : F \circ X \to G \circ Y$ be an isomorphism of functors over $\mathcal{C}$.
+The desired functor
+$\gamma : \mathcal{W} \to \mathcal{X} \times_\mathcal{S} \mathcal{Y}$
+is given by $W \mapsto (p_\mathcal{W}(W), X(W), Y(W), t_W)$.
+Details omitted; compare with Lemma \ref{lemma-2-fibre-product-categories}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibre-2-fibre-product-categories-over-C}
+Let $\mathcal{C}$ be a category.
+Let $f : \mathcal{X} \to \mathcal{S}$ and
+$g : \mathcal{Y} \to \mathcal{S}$ be morphisms of categories over
+$\mathcal{C}$. For any object $U$ of $\mathcal{C}$ we have
+the following identity of
+fibre categories
+$$
+\left(\mathcal{X} \times_\mathcal{S}\mathcal{Y}\right)_U
+=
+\mathcal{X}_U \times_{\mathcal{S}_U} \mathcal{Y}_U
+$$
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
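\medskip\noindent
A sketch of the omitted argument: in the explicit construction of
Lemma \ref{lemma-2-product-categories-over-C} an object $(V, x, y, h)$
of $\mathcal{X} \times_\mathcal{S} \mathcal{Y}$ lies over $U$ exactly
when $V = U$, in which case it is given by objects
$x \in \Ob(\mathcal{X}_U)$ and $y \in \Ob(\mathcal{Y}_U)$ together with
an isomorphism $h : f(x) \to g(y)$ in $\mathcal{S}_U$, that is, by an
object of $\mathcal{X}_U \times_{\mathcal{S}_U} \mathcal{Y}_U$ as in
Example \ref{example-2-fibre-product-categories}. Similarly, a morphism
$(a, b)$ lies over $\text{id}_U$ exactly when $a$ and $b$ are morphisms
of the fibre categories $\mathcal{X}_U$ and $\mathcal{Y}_U$.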
+
+
+
+
+
+
+
+\section{Fibred categories}
+\label{section-fibred-categories}
+
+\noindent
+A very brief discussion of fibred categories is warranted.
+
+\medskip\noindent
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category over $\mathcal{C}$.
+Given an object $x \in \mathcal{S}$ with $p(x) = U$, and given a morphism
+$f : V \to U$, we can try to take some kind of ``fibre product
+$V \times_U x$'' (or a {\it base change} of $x$ via $V \to U$).
+Namely, a morphism from an object $z \in \mathcal{S}$
+into ``$V \times_U x$'' should be given by a pair
+$(\varphi, g)$, where
+$\varphi : z \to x$, $g : p(z) \to V$ such that
+$p(\varphi) = f \circ g$. Pictorially:
+$$
+\xymatrix{
+z \ar@{~>}[d]^p \ar@{-}[r] &
+? \ar[r] \ar@{~>}[d]^p &
+x \ar@{~>}[d]^p \\
+p(z) \ar[r] & V \ar[r]^f & U
+}
+$$
+If such a morphism $V \times_U x \to x$ exists then it is called
+a strongly cartesian morphism.
+
+\begin{definition}
+\label{definition-cartesian-over-C}
+Let $\mathcal{C}$ be a category.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category over $\mathcal{C}$.
+A {\it strongly cartesian morphism}, or more precisely a
+{\it strongly $\mathcal{C}$-cartesian morphism} is a
+morphism $\varphi : y \to x$ of $\mathcal{S}$ such that
+for every $z \in \Ob(\mathcal{S})$ the map
+$$
+\Mor_\mathcal{S}(z, y)
+\longrightarrow
+\Mor_\mathcal{S}(z, x)
+\times_{\Mor_\mathcal{C}(p(z), p(x))}
+\Mor_\mathcal{C}(p(z), p(y)),
+$$
+given by $\psi \longmapsto (\varphi \circ \psi, p(\psi))$
+is bijective.
+\end{definition}
+
+\noindent
+Note that by the Yoneda Lemma \ref{lemma-yoneda}, given
+$x \in \Ob(\mathcal{S})$ lying over $U \in \Ob(\mathcal{C})$
+and the morphism $f : V \to U$ of $\mathcal{C}$, if there is a
+strongly cartesian morphism $\varphi : y \to x$ with $p(\varphi) = f$,
+then $(y, \varphi)$ is unique up to unique isomorphism. This is
+clear from the definition above, as the functor
+$$
+z
+\longmapsto
+\Mor_\mathcal{S}(z, x)
+\times_{\Mor_\mathcal{C}(p(z), U)}
+\Mor_\mathcal{C}(p(z), V)
+$$
+only depends on the data $(x, U, f : V \to U)$. Hence we
+will sometimes use $V \times_U x \to x$ or $f^*x \to x$
+to denote a strongly cartesian morphism which is a lift of $f$.
+
+\begin{lemma}
+\label{lemma-composition-cartesian}
+Let $\mathcal{C}$ be a category.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category over $\mathcal{C}$.
+\begin{enumerate}
+\item The composition of two strongly cartesian morphisms
+is strongly cartesian.
+\item Any isomorphism of $\mathcal{S}$ is strongly cartesian.
+\item Any strongly cartesian morphism $\varphi$ such that $p(\varphi)$
+is an isomorphism, is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $\varphi : y \to x$ and $\psi : z \to y$ be
+strongly cartesian. Let $t$ be an arbitrary object of $\mathcal{S}$.
+Then we have
+\begin{align*}
+& \Mor_\mathcal{S}(t, z) \\
+& =
+\Mor_\mathcal{S}(t, y)
+\times_{\Mor_\mathcal{C}(p(t), p(y))}
+\Mor_\mathcal{C}(p(t), p(z)) \\
+& =
+\Mor_\mathcal{S}(t, x)
+\times_{\Mor_\mathcal{C}(p(t), p(x))}
+\Mor_\mathcal{C}(p(t), p(y))
+\times_{\Mor_\mathcal{C}(p(t), p(y))}
+\Mor_\mathcal{C}(p(t), p(z)) \\
+& =
+\Mor_\mathcal{S}(t, x)
+\times_{\Mor_\mathcal{C}(p(t), p(x))}
+\Mor_\mathcal{C}(p(t), p(z))
+\end{align*}
+hence $z \to x$ is strongly cartesian.
+
+\medskip\noindent
+Proof of (2). Let $y \to x$ be an isomorphism. Then $p(y) \to p(x)$
+is an isomorphism too. Hence
+$\Mor_\mathcal{C}(p(z), p(y)) \to
+\Mor_\mathcal{C}(p(z), p(x))$
+is a bijection. Hence
+$\Mor_\mathcal{S}(z, x)
+\times_{\Mor_\mathcal{C}(p(z), p(x))}
+\Mor_\mathcal{C}(p(z), p(y))$ is bijective to
+$\Mor_\mathcal{S}(z, x)$.
+Hence the displayed map of
+Definition \ref{definition-cartesian-over-C}
+is a bijection as $y \to x$ is an isomorphism, and we conclude that
+$y \to x$ is strongly cartesian.
+
+\medskip\noindent
+Proof of (3). Assume $\varphi : y \to x$ is strongly cartesian with
+$p(\varphi) : p(y) \to p(x)$ an isomorphism. Applying the definition with
+$z = x$ shows that $(\text{id}_x, p(\varphi)^{-1})$ comes from a unique
+morphism $\chi : x \to y$. We omit the verification that $\chi$ is the
+inverse of $\varphi$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cartesian-over-cartesian}
+Let $F : \mathcal{A} \to \mathcal{B}$ and $G : \mathcal{B} \to \mathcal{C}$
+be composable functors between categories. Let $x \to y$ be a morphism of
+$\mathcal{A}$. If $x \to y$ is strongly $\mathcal{B}$-cartesian
+and $F(x) \to F(y)$ is strongly $\mathcal{C}$-cartesian, then
+$x \to y$ is strongly $\mathcal{C}$-cartesian.
+\end{lemma}
+
+\begin{proof}
+This follows directly from the definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strongly-cartesian-fibre-product}
+Let $\mathcal{C}$ be a category.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category over $\mathcal{C}$.
+Let $x \to y$ and $z \to y$ be morphisms of $\mathcal{S}$.
+Assume
+\begin{enumerate}
+\item $x \to y$ is strongly cartesian,
+\item $p(x) \times_{p(y)} p(z)$ exists, and
+\item there exists a strongly cartesian morphism $a : w \to z$ in
+$\mathcal{S}$ with $p(w) = p(x) \times_{p(y)} p(z)$ and
+$p(a) = \text{pr}_2 : p(x) \times_{p(y)} p(z) \to p(z)$.
+\end{enumerate}
+Then the fibre product $x \times_y z$ exists and is isomorphic to $w$.
+\end{lemma}
+
+\begin{proof}
+Since $x \to y$ is strongly cartesian there exists a unique morphism
+$b : w \to x$ such that $p(b) = \text{pr}_1$. To see that $w$ is the
+fibre product we compute
+\begin{align*}
+& \Mor_\mathcal{S}(t, w) \\
+& = \Mor_\mathcal{S}(t, z)
+\times_{\Mor_\mathcal{C}(p(t), p(z))}
+\Mor_\mathcal{C}(p(t), p(w)) \\
+& = \Mor_\mathcal{S}(t, z)
+\times_{\Mor_\mathcal{C}(p(t), p(z))}
+(\Mor_\mathcal{C}(p(t), p(x))
+\times_{\Mor_\mathcal{C}(p(t), p(y))}
+\Mor_\mathcal{C}(p(t), p(z))) \\
+& = \Mor_\mathcal{S}(t, z)
+\times_{\Mor_\mathcal{C}(p(t), p(y))}
+\Mor_\mathcal{C}(p(t), p(x)) \\
+& = \Mor_\mathcal{S}(t, z)
+\times_{\Mor_\mathcal{S}(t, y)}
+\Mor_\mathcal{S}(t, y)
+\times_{\Mor_\mathcal{C}(p(t), p(y))}
+\Mor_\mathcal{C}(p(t), p(x)) \\
+& = \Mor_\mathcal{S}(t, z)
+\times_{\Mor_\mathcal{S}(t, y)}
+\Mor_\mathcal{S}(t, x)
+\end{align*}
+as desired. The first equality holds because $a : w \to z$ is strongly
+cartesian, the second because $p(w) = p(x) \times_{p(y)} p(z)$, and the
+last equality holds because $x \to y$ is strongly cartesian.
+\end{proof}
+
+\begin{definition}
+\label{definition-fibred-category}
+Let $\mathcal{C}$ be a category.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category over $\mathcal{C}$.
+We say $\mathcal{S}$ is a {\it fibred category over $\mathcal{C}$}
+if given any $x \in \Ob(\mathcal{S})$ lying over
+$U \in \Ob(\mathcal{C})$ and any morphism $f : V \to U$ of
+$\mathcal{C}$, there exists a strongly cartesian morphism $f^*x \to x$
+lying over $f$.
+\end{definition}
+
+\noindent
+Assume $p : \mathcal{S} \to \mathcal{C}$ is a fibred category.
+For every $f : V \to U$ and $x\in \Ob(\mathcal{S}_U)$
+as in the definition we may choose a strongly cartesian morphism
+$f^\ast x \to x$ lying over $f$. By the axiom of choice we may choose
+$f^*x \to x$ for all $f: V \to U = p(x)$ simultaneously.
+We claim that for every morphism $\phi : x \to x'$ in $\mathcal{S}_U$
+and $f : V \to U$ there is a unique
+morphism $f^\ast \phi : f^\ast x \to f^\ast x'$ in $\mathcal{S}_V$
+such that
+$$
+\xymatrix{
+f^\ast x \ar[r]_{f^\ast \phi} \ar[d] & f^\ast x' \ar[d] \\
+x \ar[r]^{\phi} & x' }
+$$
+commutes. Namely, the arrow exists and is unique because $f^*x' \to x'$ is
+strongly cartesian. The uniqueness of this arrow guarantees that
+$f^\ast$ (now also defined on morphisms) is a
+functor $ f^\ast : \mathcal{S}_U \to \mathcal{S}_V$.
+
+\begin{definition}
+\label{definition-pullback-functor-fibred-category}
+Assume $p : \mathcal{S} \to \mathcal{C}$ is a fibred category.
+\begin{enumerate}
+\item A {\it choice of pullbacks}\footnote{This is probably nonstandard
+terminology. In some texts this is called a ``cleavage'' but it conjures up
+the wrong image. Maybe a ``cleaving'' would be a better word.
+A related notion is that of a ``splitting'', but in many texts a ``splitting''
+means a choice of pullbacks such that $g^*f^* = (f \circ g)^*$
+for any composable pair of morphisms. Compare
+also with Definition \ref{definition-split-fibred-category}.}
+for $p : \mathcal{S} \to \mathcal{C}$
+is given by a choice of a strongly cartesian morphism
+$f^\ast x \to x$ lying over $f$ for any morphism
+$f: V \to U$ of $\mathcal{C}$ and any $x \in \Ob(\mathcal{S}_U)$.
+\item Given a choice of pullbacks,
+for any morphism $f : V \to U$ of $\mathcal{C}$
+the functor $f^* : \mathcal{S}_U \to \mathcal{S}_V$ described
+above is called a {\it pullback functor} (associated to the choices
+$f^*x \to x$ made above).
+\end{enumerate}
+\end{definition}
+
+\noindent
+Of course we may always assume our choice of pullbacks has the property that
+$\text{id}_U^*x = x$, although in practice this is a useless property
+without imposing further assumptions on the pullbacks.
+
+\begin{lemma}
+\label{lemma-fibred}
+Assume $p : \mathcal{S} \to \mathcal{C}$ is a fibred category.
+Assume given a choice of pullbacks for $p : \mathcal{S} \to \mathcal{C}$.
+\begin{enumerate}
+\item For any pair of composable morphisms $f : V \to U$,
+$g : W \to V$ there is a unique isomorphism
+$$
+\alpha_{g, f} :
+(f \circ g)^\ast
+\longrightarrow
+g^\ast \circ f^\ast
+$$
+as functors $\mathcal{S}_U \to \mathcal{S}_W$
+such that for every $y\in \Ob(\mathcal{S}_U)$ the following
+diagram commutes
+$$
+\xymatrix{
+g^\ast f^\ast y \ar[r]
+&
+f^\ast y \ar[d] \\
+(f \circ g)^\ast y \ar[r]
+\ar[u]^{(\alpha_{g, f})_y}
+&
+y
+}
+$$
+\item If $f = \text{id}_U$, then there is a canonical isomorphism
+$\alpha_U : \text{id} \to (\text{id}_U)^*$ as functors
+$\mathcal{S}_U \to \mathcal{S}_U$.
+\item The quadruple
+$(U \mapsto \mathcal{S}_U, f \mapsto f^*, \alpha_{g, f}, \alpha_U)$
+defines a pseudo functor from $\mathcal{C}^{opp}$ to
+the $(2, 1)$-category of categories, see
+Definition \ref{definition-functor-into-2-category}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In fact, it is clear that the commutative diagram of
+part (1) uniquely determines the morphism
+$(\alpha_{g, f})_y$ in the fibre category
+$\mathcal{S}_W$. It is an isomorphism since both
+the morphism $(f \circ g)^*y \to y$
+and the composition $g^*f^*y \to f^*y \to y$ are strongly
+cartesian morphisms lifting $f \circ g$ (see discussion
+following Definition \ref{definition-cartesian-over-C} and
+Lemma \ref{lemma-composition-cartesian}). In the same way,
+since $\text{id}_x : x \to x$ is clearly strongly cartesian
+over $\text{id}_U$ (with $U = p(x)$) we see that there exists
+an isomorphism $(\alpha_U)_x : x \to (\text{id}_U)^*x$.
+(Of course we could have assumed beforehand that $f^*x = x$
+whenever $f$ is an identity morphism, but it is better for
+the sake of generality not to assume this.)
+We omit the verification that $\alpha_{g, f}$ and
+$\alpha_U$ so obtained are transformations of functors.
+We also omit the verification of (3).
+\end{proof}
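+
+\noindent
+Concretely, the compatibility implicit in part (3) of the lemma includes the
+usual cocycle condition for the transformations $\alpha_{g, f}$: for
+composable morphisms $h : T \to W$, $g : W \to V$, $f : V \to U$ and every
+$y \in \Ob(\mathcal{S}_U)$ one has
+$$
+(\alpha_{h, g})_{f^\ast y} \circ (\alpha_{g \circ h, f})_y
+=
+h^\ast\big((\alpha_{g, f})_y\big) \circ (\alpha_{h, f \circ g})_y
+$$
+as morphisms $(f \circ g \circ h)^\ast y \to h^\ast g^\ast f^\ast y$.
+This follows from the uniqueness in part (1), since both composites are
+compatible with the chosen strongly cartesian morphisms down to $y$.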
+
+\begin{lemma}
+\label{lemma-fibred-equivalent}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{S}_1$, $\mathcal{S}_2$ be categories over $\mathcal{C}$.
+Suppose that $\mathcal{S}_1$ and $\mathcal{S}_2$ are equivalent
+as categories over $\mathcal{C}$.
+Then $\mathcal{S}_1$ is fibred over $\mathcal{C}$ if and only if
+$\mathcal{S}_2$ is fibred over $\mathcal{C}$.
+\end{lemma}
+
+\begin{proof}
+Denote $p_i : \mathcal{S}_i \to \mathcal{C}$ the given functors.
+Let $F : \mathcal{S}_1 \to \mathcal{S}_2$,
+$G : \mathcal{S}_2 \to \mathcal{S}_1$ be functors over $\mathcal{C}$, and let
+$i : F \circ G \to \text{id}_{\mathcal{S}_2}$,
+$j : G \circ F \to \text{id}_{\mathcal{S}_1}$ be isomorphisms of
+functors over $\mathcal{C}$.
+We claim that in this case $F$ maps strongly cartesian morphisms
+to strongly cartesian morphisms. Namely, suppose that
+$\varphi : y \to x$ is strongly cartesian in $\mathcal{S}_1$.
+Set $f : V \to U$ equal to $p_1(\varphi)$. Suppose that
+$z' \in \Ob(\mathcal{S}_2)$, with $W = p_2(z')$, and we are given
+$g : W \to V$ and $\psi' : z' \to F(x)$ such that
+$p_2(\psi') = f \circ g$. Then
+$$
+\psi = j_x \circ G(\psi') : G(z') \to G(F(x)) \to x
+$$
+is a morphism in $\mathcal{S}_1$ with $p_1(\psi) = f \circ g$.
+Hence by assumption there exists a unique morphism $\xi : G(z') \to y$
+lying over $g$ such that $\psi = \varphi \circ \xi$. This in turn gives a
+morphism
+$$
+\xi' = F(\xi) \circ i_{z'}^{-1} : z' \to F(G(z')) \to F(y)
+$$
+lying over $g$ with $\psi' = F(\varphi) \circ \xi'$. We omit the verification
+that $\xi'$ is unique.
+\end{proof}
+
+\noindent
+A conclusion from the proof of Lemma \ref{lemma-fibred-equivalent} is that
+equivalences map strongly cartesian morphisms to strongly cartesian
+morphisms. But this may not be the case for an arbitrary functor between
+fibred categories over $\mathcal{C}$. Hence we define the $2$-category
+of fibred categories as follows.
+
+\begin{definition}
+\label{definition-fibred-categories-over-C}
+Let $\mathcal{C}$ be a category.
+The {\it $2$-category of fibred categories over $\mathcal{C}$}
+is the sub $2$-category of the $2$-category of categories
+over $\mathcal{C}$ (see Definition \ref{definition-categories-over-C})
+defined as follows:
+\begin{enumerate}
+\item Its objects will be fibred categories
+$p : \mathcal{S} \to \mathcal{C}$.
+\item Its $1$-morphisms $(\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be functors $G : \mathcal{S} \to \mathcal{S}'$ such that
+$p' \circ G = p$ and such that $G$ maps strongly cartesian
+morphisms to strongly cartesian morphisms.
+\item Its $2$-morphisms $t : G \to H$ for
+$G, H : (\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be morphisms of functors
+such that $p'(t_x) = \text{id}_{p(x)}$
+for all $x \in \Ob(\mathcal{S})$.
+\end{enumerate}
+In this situation we will denote
+$$
+\Mor_{\textit{Fib}/\mathcal{C}}(\mathcal{S}, \mathcal{S}')
+$$
+the category of $1$-morphisms between
+$(\mathcal{S}, p)$ and $(\mathcal{S}', p')$.
+\end{definition}
+
+\noindent
+Note the condition on $1$-morphisms.
+Note also that this is a true $2$-category and
+not a $(2, 1)$-category. Hence when taking $2$-fibre
+products we first pass to the associated $(2, 1)$-category.
+
+\begin{lemma}
+\label{lemma-2-product-fibred-categories-over-C}
+Let $\mathcal{C}$ be a category.
+The $(2, 1)$-category of fibred categories
+over $\mathcal{C}$ has $2$-fibre products, and
+they are described as in
+Lemma \ref{lemma-2-product-categories-over-C}.
+\end{lemma}
+
+\begin{proof}
+Basically what one has to show here is that given
+$F : \mathcal{X} \to \mathcal{S}$ and
+$G : \mathcal{Y} \to \mathcal{S}$ morphisms of fibred
+categories over $\mathcal{C}$, then the category
+$\mathcal{X} \times_\mathcal{S} \mathcal{Y}$
+described in Lemma \ref{lemma-2-product-categories-over-C} is fibred.
+Let us show that $\mathcal{X} \times_\mathcal{S} \mathcal{Y}$
+has plenty of strongly cartesian morphisms.
+Namely, suppose we have an object $(U, x, y, \phi)$ of
+$\mathcal{X} \times_\mathcal{S} \mathcal{Y}$
+and a morphism $f : V \to U$ in $\mathcal{C}$.
+Choose strongly cartesian morphisms $a : f^*x \to x$ in $\mathcal{X}$
+lying over $f$ and $b : f^*y \to y$ in $\mathcal{Y}$ lying over $f$.
+By assumption $F(a)$ and $G(b)$ are strongly cartesian.
+Since $\phi : F(x) \to G(y)$ is an isomorphism, by the uniqueness
+of strongly cartesian morphisms we find a unique isomorphism
+$f^*\phi : F(f^*x) \to G(f^*y)$ such that
+$G(b) \circ f^*\phi = \phi \circ F(a)$. In other words
+$(a, b) : (V, f^*x, f^*y, f^*\phi) \to (U, x, y, \phi)$
+is a morphism in $\mathcal{X} \times_\mathcal{S} \mathcal{Y}$.
+We omit the verification that this is a strongly cartesian morphism
+(and that these are in fact the only strongly cartesian morphisms).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cute}
+Let $\mathcal{C}$ be a category. Let $U \in \Ob(\mathcal{C})$.
+If $p : \mathcal{S} \to \mathcal{C}$ is a fibred category
+and $p$ factors through $p' : \mathcal{S} \to \mathcal{C}/U$
+then $p' : \mathcal{S} \to \mathcal{C}/U$ is a fibred category.
+\end{lemma}
+
+\begin{proof}
+Suppose that $\varphi : x' \to x$ is strongly cartesian with respect to $p$.
+We claim that $\varphi$ is strongly cartesian with respect to $p'$ also.
+Set $g = p'(\varphi)$, so that $g : V'/U \to V/U$
+for some morphisms $f : V \to U$ and $f' : V' \to U$.
+Let $z \in \Ob(\mathcal{S})$. Set $p'(z) = (W \to U)$.
+To show that $\varphi$ is strongly cartesian for $p'$ we have to show that the map
+$$
+\Mor_\mathcal{S}(z, x')
+\longrightarrow
+\Mor_\mathcal{S}(z, x)
+\times_{\Mor_{\mathcal{C}/U}(W/U, V/U)}
+\Mor_{\mathcal{C}/U}(W/U, V'/U),
+$$
+given by $\psi' \longmapsto (\varphi \circ \psi', p'(\psi'))$
+is bijective. Suppose we are given an element $(\psi, h)$ of the
+right hand side. Then in particular $g \circ h = p(\psi)$,
+and by the condition that $\varphi$ is strongly cartesian we
+get a unique morphism $\psi' : z \to x'$ with $\psi = \varphi \circ \psi'$
+and $p(\psi') = h$. Now $p'(\psi') : W/U \to V'/U$
+is a morphism of $\mathcal{C}/U$ whose underlying morphism $W \to V'$ is $h$,
+hence it equals $h$ as a morphism in $\mathcal{C}/U$. Thus $\psi'$ is
+the unique morphism $z \to x'$ which maps to the given pair $(\psi, h)$.
+This proves the claim.
+
+\medskip\noindent
+Finally, suppose given $g : V'/U \to V/U$ and $x$ with $p'(x) = V/U$.
+Since $p : \mathcal{S} \to \mathcal{C}$ is a fibred category we
+see there exists a strongly cartesian morphism $\varphi : x' \to x$
+with $p(\varphi) = g$. By the same argument as above it follows
+that $p'(\varphi) = g : V'/U \to V/U$. And as seen above the morphism
+$\varphi$ is strongly cartesian. Thus the conditions of
+Definition \ref{definition-fibred-category} are satisfied and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibred-over-fibred}
+Let $\mathcal{A} \to \mathcal{B} \to \mathcal{C}$ be functors between
+categories. If $\mathcal{A}$ is fibred over $\mathcal{B}$ and
+$\mathcal{B}$ is fibred over $\mathcal{C}$, then $\mathcal{A}$
+is fibred over $\mathcal{C}$.
+\end{lemma}
+
+\begin{proof}
+This follows from the definitions and
+Lemma \ref{lemma-cartesian-over-cartesian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibred-category-representable-goes-up}
+Let $p : \mathcal{S} \to \mathcal{C}$ be a fibred category.
+Let $x \to y$ and $z \to y$ be morphisms of $\mathcal{S}$
+with $x \to y$ strongly cartesian. If $p(x) \times_{p(y)} p(z)$ exists,
+then $x \times_y z$ exists, $p(x \times_y z) = p(x) \times_{p(y)} p(z)$,
+and $x \times_y z \to z$ is strongly cartesian.
+\end{lemma}
+
+\begin{proof}
+Pick a strongly cartesian morphism
+$\text{pr}_2^*z \to z$ lying over
+$\text{pr}_2 : p(x) \times_{p(y)} p(z) \to p(z)$. Then
+$\text{pr}_2^*z = x \times_y z$ by
+Lemma \ref{lemma-strongly-cartesian-fibre-product}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ameliorate-morphism-fibred-categories}
+Let $\mathcal{C}$ be a category. Let $F : \mathcal{X} \to \mathcal{Y}$
+be a $1$-morphism of fibred categories over $\mathcal{C}$.
+There exist $1$-morphisms of fibred categories over $\mathcal{C}$
+$$
+\xymatrix{
+\mathcal{X} \ar@<1ex>[r]^u &
+\mathcal{X}' \ar[r]^v \ar@<1ex>[l]^w & \mathcal{Y}
+}
+$$
+such that $F = v \circ u$ and such that
+\begin{enumerate}
+\item $u : \mathcal{X} \to \mathcal{X}'$ is fully faithful,
+\item $w$ is left adjoint to $u$, and
+\item $v : \mathcal{X}' \to \mathcal{Y}$ is a fibred category.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $p : \mathcal{X} \to \mathcal{C}$ and $q : \mathcal{Y} \to \mathcal{C}$
+the structure functors. We construct $\mathcal{X}'$ explicitly as follows.
+An object of $\mathcal{X}'$ is a quadruple $(U, x, y, f)$ where
+$x \in \Ob(\mathcal{X}_U)$, $y \in \Ob(\mathcal{Y}_U)$
+and $f : y \to F(x)$ is a morphism in $\mathcal{Y}_U$.
+A morphism $(a, b) : (U, x, y, f) \to (U', x', y', f')$ is given
+by $a : x \to x'$ and $b : y \to y'$ with $p(a) = q(b) : U \to U'$
+and such that $f' \circ b = F(a) \circ f$.
+
+\medskip\noindent
+Let us make a choice of pullbacks for both $p$ and $q$ and let us
+use the same notation to indicate them.
+Let $(U, x, y, f)$ be an object and let $h : V \to U$ be a morphism.
+Consider the morphism $c : (V, h^*x, h^*y, h^*f) \to (U, x, y, f)$
+coming from the given strongly cartesian maps $h^*x \to x$ and $h^*y \to y$.
+We claim $c$ is strongly cartesian in $\mathcal{X}'$ over $\mathcal{C}$.
+Namely, suppose we are given an object $(W, x', y', f')$ of $\mathcal{X}'$,
+a morphism $(a, b) : (W, x', y', f') \to (U, x, y, f)$ lying over
+$W \to U$, and a factorization $W \to V \to U$ of $W \to U$ through $h$.
+As $h^*x \to x$ and $h^*y \to y$ are strongly cartesian we obtain morphisms
+$a' : x' \to h^*x$ and $b' : y' \to h^*y$ lying over the given morphism
+$W \to V$. Consider the diagram
+$$
+\xymatrix{
+y' \ar[d]_{f'} \ar[r] & h^*y \ar[r] \ar[d]_{h^*f} & y \ar[d]_f \\
+F(x') \ar[r] & F(h^*x) \ar[r] & F(x)
+}
+$$
+The outer rectangle and the right square commute.
+Since $F$ is a $1$-morphism of fibred categories the morphism
+$F(h^*x) \to F(x)$ is strongly cartesian.
+Hence the left square commutes by the universal property
+of strongly cartesian morphisms. This proves that $\mathcal{X}'$
+is fibred over $\mathcal{C}$.
+
+\medskip\noindent
+The functor $u : \mathcal{X} \to \mathcal{X}'$ is given by
+$x \mapsto (p(x), x, F(x), \text{id})$. This is fully faithful.
+The functor $\mathcal{X}' \to \mathcal{Y}$ is given by
+$(U, x, y, f) \mapsto y$. The functor $w : \mathcal{X}' \to \mathcal{X}$
+is given by $(U, x, y, f) \mapsto x$. Each of these functors is
+a $1$-morphism of fibred categories over $\mathcal{C}$ by our
+description of strongly cartesian morphisms of $\mathcal{X}'$ over
+$\mathcal{C}$. Adjointness of $w$ and $u$ means that
+$$
+\Mor_\mathcal{X}(x, x') =
+\Mor_{\mathcal{X}'}((U, x, y, f), (p(x'), x', F(x'), \text{id})),
+$$
+which follows immediately from the definitions.
+
+\medskip\noindent
+Finally, we have to show that $\mathcal{X}' \to \mathcal{Y}$ is a fibred
+category. Let $c : y' \to y$ be a morphism in $\mathcal{Y}$
+and let $(U, x, y, f)$ be an object of $\mathcal{X}'$ lying over $y$.
+Set $V = q(y')$ and let $h = q(c) : V \to U$. Let $a : h^*x \to x$
+and $b : h^*y \to y$ be the strongly cartesian morphisms covering $h$.
+Since $F$ is a $1$-morphism of fibred categories we may identify
+$h^*F(x) = F(h^*x)$ with the strongly cartesian morphism
+$F(a) : F(h^*x) \to F(x)$. By the universal property
+of $b : h^*y \to y$ there is a morphism $c' : y' \to h^*y$ in
+$\mathcal{Y}_V$ such that $c = b \circ c'$. We claim that
+$$
+(a, c) : (V, h^*x, y', h^*f \circ c') \longrightarrow (U, x, y, f)
+$$
+is strongly cartesian in $\mathcal{X}'$ over $\mathcal{Y}$. To see this
+let $(W, x_1, y_1, f_1)$ be an object of $\mathcal{X}'$, let
+$(a_1, b_1) : (W, x_1, y_1, f_1) \to (U, x, y, f)$ be a morphism
+and suppose that $b_1 = c \circ b_1'$ for some morphism $b_1' : y_1 \to y'$.
+Then
+$$
+(a_1', b_1') : (W, x_1, y_1, f_1) \longrightarrow (V, h^*x, y', h^*f \circ c')
+$$
+(where $a_1' : x_1 \to h^*x$ is the unique morphism lying over the
+given morphism $q(b_1') : W \to V$ such that $a_1 = a \circ a_1'$)
+is the desired morphism.
+\end{proof}
+
+
+
+
+
+\section{Inertia}
+\label{section-inertia}
+
+\noindent
+Given fibred categories $p : \mathcal{S} \to \mathcal{C}$ and
+$p' : \mathcal{S}' \to \mathcal{C}$ over a category $\mathcal{C}$
+and a $1$-morphism $F : \mathcal{S} \to \mathcal{S}'$
+we have the diagonal morphism
+$$
+\Delta = \Delta_{\mathcal{S}/\mathcal{S}'} :
+\mathcal{S} \longrightarrow \mathcal{S} \times_{\mathcal{S}'} \mathcal{S}
+$$
+in the $(2, 1)$-category of fibred categories over $\mathcal{C}$.
+
+\begin{lemma}
+\label{lemma-inertia-fibred-category}
+Let $\mathcal{C}$ be a category. Let
+$p : \mathcal{S} \to \mathcal{C}$ and
+$p' : \mathcal{S}' \to \mathcal{C}$ be fibred categories.
+Let $F : \mathcal{S} \to \mathcal{S}'$ be a $1$-morphism of
+fibred categories over $\mathcal{C}$. Consider the category
+$\mathcal{I}_{\mathcal{S}/\mathcal{S}'}$ over $\mathcal{C}$ whose
+\begin{enumerate}
+\item objects are pairs $(x, \alpha)$ where $x \in \Ob(\mathcal{S})$
+and $\alpha : x \to x$ is an automorphism with $F(\alpha) = \text{id}$,
+\item morphisms $(x, \alpha) \to (y, \beta)$ are given by morphisms
+$\phi : x \to y$ such that
+$$
+\xymatrix{
+x\ar[r]_\phi\ar[d]_\alpha &
+y\ar[d]^{\beta} \\
+x\ar[r]^\phi &
+y \\
+}
+$$
+commutes, and
+\item the functor $\mathcal{I}_{\mathcal{S}/\mathcal{S}'} \to \mathcal{C}$
+is given by $(x, \alpha) \mapsto p(x)$.
+\end{enumerate}
+Then
+\begin{enumerate}
+\item there is an equivalence
+$$
+\mathcal{I}_{\mathcal{S}/\mathcal{S}'} \longrightarrow
+\mathcal{S}
+\times_{\Delta, (\mathcal{S} \times_{\mathcal{S}'} \mathcal{S}), \Delta}
+\mathcal{S}
+$$
+in the $(2, 1)$-category of categories over $\mathcal{C}$, and
+\item $\mathcal{I}_{\mathcal{S}/\mathcal{S}'}$ is a fibred category over
+$\mathcal{C}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that (2) follows from (1) by
+Lemmas \ref{lemma-2-product-fibred-categories-over-C} and
+\ref{lemma-fibred-equivalent}. Thus it suffices to prove (1).
+We will use without further mention the construction of the $2$-fibre product
+from
+Lemma \ref{lemma-2-product-fibred-categories-over-C}.
+In particular an object of
+$\mathcal{S}
+\times_{\Delta, (\mathcal{S} \times_{\mathcal{S}'} \mathcal{S}), \Delta}
+\mathcal{S}$
+is a triple $(x, y, (\iota, \kappa))$ where $x$ and $y$ are objects of
+$\mathcal{S}$, and
+$(\iota, \kappa) : (x, x, \text{id}_{F(x)}) \to (y, y, \text{id}_{F(y)})$
+is an isomorphism in $\mathcal{S} \times_{\mathcal{S}'} \mathcal{S}$.
+This just means that $\iota, \kappa : x \to y$ are isomorphisms and that
+$F(\iota) = F(\kappa)$. Consider the functor
+$$
+\mathcal{I}_{\mathcal{S}/\mathcal{S}'}
+\longrightarrow
+\mathcal{S}
+\times_{\Delta, (\mathcal{S} \times_{\mathcal{S}'} \mathcal{S}), \Delta}
+\mathcal{S}
+$$
+which to an object $(x, \alpha)$ of the left hand side assigns the object
+$(x, x, (\alpha, \text{id}_x))$ of the right hand side
+and to a morphism $\phi$ of the left hand side
+assigns the morphism $(\phi, \phi)$ of the right hand side.
+We claim that a quasi-inverse to that morphism is given by the
+functor
+$$
+\mathcal{S}
+\times_{\Delta, (\mathcal{S} \times_{\mathcal{S}'} \mathcal{S}), \Delta}
+\mathcal{S}
+\longrightarrow
+\mathcal{I}_{\mathcal{S}/\mathcal{S}'}
+$$
+which to an object $(x, y, (\iota, \kappa))$ of the left hand side
+assigns the object $(x, \kappa^{-1} \circ \iota)$ of the right hand side
+and to a morphism
+$(\phi, \phi') : (x, y, (\iota, \kappa)) \to (z, w, (\lambda, \mu))$
+of the left hand side assigns the morphism $\phi$.
+Indeed, the endo-functor of $\mathcal{I}_{\mathcal{S}/\mathcal{S}'}$ induced
+by composing the two functors above is the identity on the nose, and
+the endo-functor induced on
+$\mathcal{S}
+\times_{\Delta, (\mathcal{S} \times_{\mathcal{S}'} \mathcal{S}), \Delta}
+\mathcal{S}$
+is isomorphic to
+the identity via the natural isomorphism
+$$
+(\text{id}_x, \kappa) :
+(x, x, (\kappa^{-1} \circ \iota, \text{id}_x))
+\longrightarrow
+(x, y, (\iota, \kappa)).
+$$
+Some details omitted.
+\end{proof}
+
+\begin{definition}
+\label{definition-inertia-fibred-category}
+Let $\mathcal{C}$ be a category.
+\begin{enumerate}
+\item Let $F : \mathcal{S} \to \mathcal{S}'$ be a $1$-morphism of
+fibred categories over $\mathcal{C}$. The {\it relative inertia
+of $\mathcal{S}$ over $\mathcal{S}'$} is the fibred category
+$\mathcal{I}_{\mathcal{S}/\mathcal{S}'} \to \mathcal{C}$ of
+Lemma \ref{lemma-inertia-fibred-category}.
+\item By the {\it inertia fibred category $\mathcal{I}_\mathcal{S}$
+of $\mathcal{S}$} we mean
+$\mathcal{I}_\mathcal{S} = \mathcal{I}_{\mathcal{S}/\mathcal{C}}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that there are canonical $1$-morphisms
+\begin{equation}
+\label{equation-inertia-structure-map}
+\mathcal{I}_{\mathcal{S}/\mathcal{S}'} \longrightarrow \mathcal{S}
+\quad\text{and}\quad
+\mathcal{I}_\mathcal{S} \longrightarrow \mathcal{S}
+\end{equation}
+of fibred categories over $\mathcal{C}$. In terms of the description of
+Lemma \ref{lemma-inertia-fibred-category}
+these simply map the object $(x, \alpha)$ to the object $x$ and the morphism
+$\phi : (x, \alpha) \to (y, \beta)$ to the morphism $\phi : x \to y$.
+There is also a {\it neutral section}
+\begin{equation}
+\label{equation-neutral-section}
+e : \mathcal{S} \to \mathcal{I}_{\mathcal{S}/\mathcal{S}'}
+\quad\text{and}\quad
+e : \mathcal{S} \to \mathcal{I}_\mathcal{S}
+\end{equation}
+defined by the rules $x \mapsto (x, \text{id}_x)$ and
+$(\phi : x \to y) \mapsto \phi$. This is a right inverse to
+(\ref{equation-inertia-structure-map}). Given a $2$-commutative
+square
+$$
+\xymatrix{
+\mathcal{S}_1 \ar[d]_{F_1} \ar[r]_G & \mathcal{S}_2 \ar[d]^{F_2} \\
+\mathcal{S}'_1 \ar[r]^{G'} & \mathcal{S}'_2
+}
+$$
+there are {\it functoriality maps}
+\begin{equation}
+\label{equation-functorial}
+\mathcal{I}_{\mathcal{S}_1/\mathcal{S}'_1}
+\longrightarrow
+\mathcal{I}_{\mathcal{S}_2/\mathcal{S}'_2}
+\quad\text{and}\quad
+\mathcal{I}_{\mathcal{S}_1}
+\longrightarrow
+\mathcal{I}_{\mathcal{S}_2}
+\end{equation}
+defined by the rules $(x, \alpha) \mapsto (G(x), G(\alpha))$
+and $\phi \mapsto G(\phi)$. In particular there is always a
+comparison map
+\begin{equation}
+\label{equation-comparison}
+\mathcal{I}_{\mathcal{S}/\mathcal{S}'}
+\longrightarrow
+\mathcal{I}_\mathcal{S}
+\end{equation}
+and all the maps above are compatible with this.
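+
+\noindent
+As a simple illustration: for $x \in \Ob(\mathcal{S}_U)$ the objects of
+$\mathcal{I}_\mathcal{S}$ lying over $U$ which map to $x$ under the second
+map of (\ref{equation-inertia-structure-map}) are exactly the pairs
+$(x, \alpha)$ with $\alpha \in \text{Aut}_{\mathcal{S}_U}(x)$, since
+$p(\alpha) = \text{id}_U$ is precisely the condition that $\alpha$ lies in
+the fibre category. For the relative inertia
+$\mathcal{I}_{\mathcal{S}/\mathcal{S}'}$ one keeps only those $\alpha$
+with $F(\alpha) = \text{id}_{F(x)}$.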
+
+\begin{lemma}
+\label{lemma-relative-inertia-as-fibre-product}
+Let $F : \mathcal{S} \to \mathcal{S}'$ be a $1$-morphism of categories
+fibred over a category $\mathcal{C}$. Then the diagram
+$$
+\xymatrix{
+\mathcal{I}_{\mathcal{S}/\mathcal{S}'}
+\ar[d]_{F \circ (\ref{equation-inertia-structure-map})}
+\ar[rr]_{(\ref{equation-comparison})} & &
+\mathcal{I}_\mathcal{S} \ar[d]^{(\ref{equation-functorial})} \\
+\mathcal{S}' \ar[rr]^e & &
+\mathcal{I}_{\mathcal{S}'}
+}
+$$
+is a $2$-fibre product.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+
+
+
+
+\section{Categories fibred in groupoids}
+\label{section-fibred-groupoids}
+
+\noindent
+In this section we explain how to think about categories fibred in groupoids
+and we see how they are basically the same as functors with
+values in the $(2, 1)$-category of groupoids.
+
+\begin{definition}
+\label{definition-fibred-groupoids}
+Let $p : \mathcal{S} \to \mathcal{C}$ be a functor.
+We say that $\mathcal{S}$ is {\it fibred in groupoids} over $\mathcal{C}$ if
+the following two conditions hold:
+\begin{enumerate}
+\item For every morphism $f : V \to U$ in $\mathcal{C}$ and every
+lift $x$ of $U$ there is a lift $\phi : y \to x$ of $f$ with
+target $x$.
+\item For every pair of morphisms $\phi : y \to x$ and $ \psi : z \to x$
+and any morphism $f : p(z) \to p(y)$ such that $p(\phi) \circ f = p(\psi)$
+there exists a unique lift $\chi : z \to y$ of $f$ such that
+$\phi \circ \chi = \psi$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Condition (2) phrased differently says that
+applying the functor $p$ gives a bijection between the sets
+of dotted arrows in the following commutative diagram:
+$$
+\xymatrix{
+y \ar[r] & x & p(y) \ar[r] & p(x) \\
+z \ar@{-->}[u] \ar[ru] & & p(z) \ar@{-->}[u]\ar[ru] & \\
+}
+$$
+Another way to think about the second condition is the following.
+Suppose that $g : W \to V$ and $f : V \to U$ are morphisms in $\mathcal{C}$.
+Let $x \in \Ob(\mathcal{S}_U)$. By the first condition we can lift
+$f$ to $ \phi : y \to x$ and then we can lift $g$ to $\psi : z \to y$.
+Instead of doing this two-step process we can directly lift $f \circ g$ to
+$\gamma : z' \to x$. This gives the solid arrows in the diagram
+\begin{equation}
+\label{equation-fibred-groupoids}
+\vcenter{
+\xymatrix{
+z' \ar@{-->}[d]\ar[rrd]^\gamma & & \\
+z \ar@{-->}[u] \ar[r]^\psi \ar@{~>}[d]^p &
+y \ar[r]^\phi \ar@{~>}[d]^p &
+x \ar@{~>}[d]^p
+\\
+W \ar[r]^g & V \ar[r]^f & U \\
+}
+}
+\end{equation}
+where the squiggly arrows represent not morphisms but the functor $p$.
+Applying the second condition to the arrows $\phi \circ \psi$, $\gamma$
+and $\text{id}_W$ we conclude that there is a unique morphism
+$\chi : z \to z'$ in $\mathcal{S}_W$ such that
+$\gamma \circ \chi = \phi \circ \psi$. Similarly there is a unique morphism
+$z' \to z$. The uniqueness implies that the morphisms $z' \to z$ and
+$z\to z'$ are mutually inverse, in other words isomorphisms.
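+
+\medskip\noindent
+In terms of the earlier constructions (assuming, as Lemma
+\ref{lemma-fibred-groupoids} below justifies, that $p$ is a fibred category
+with a choice of pullbacks): if $\phi$, $\psi$, $\gamma$ are the chosen
+strongly cartesian morphisms $f^\ast x \to x$,
+$g^\ast f^\ast x \to f^\ast x$ and $(f \circ g)^\ast x \to x$, then the
+isomorphism $z' \to z$ above is precisely the map
+$(\alpha_{g, f})_x : (f \circ g)^\ast x \to g^\ast f^\ast x$
+of Lemma \ref{lemma-fibred}.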
+
+\medskip\noindent
+It should be clear from this discussion that a
+category fibred in groupoids is very closely related
+to a fibred category. Here is the result.
+
+\begin{lemma}
+\label{lemma-fibred-groupoids}
+Let $p : \mathcal{S} \to \mathcal{C}$ be a functor.
+The following are equivalent
+\begin{enumerate}
+\item $p : \mathcal{S} \to \mathcal{C}$ is a category
+fibred in groupoids, and
+\item all fibre categories are groupoids and
+$\mathcal{S}$ is a fibred category over $\mathcal{C}$.
+\end{enumerate}
+Moreover, in this case every morphism of $\mathcal{S}$ is
+strongly cartesian. In addition, given $f^\ast x \to x$
+lying over $f$ for all $f: V \to U = p(x)$ the data
+$(U \mapsto \mathcal{S}_U, f \mapsto f^*, \alpha_{g, f}, \alpha_U)$
+constructed in Lemma \ref{lemma-fibred}
+defines a pseudo functor from $\mathcal{C}^{opp}$ into
+the $(2, 1)$-category of groupoids.
+\end{lemma}
+
+\begin{proof}
+Assume $p : \mathcal{S} \to \mathcal{C}$ is fibred in groupoids.
+To show all fibre categories $\mathcal{S}_U$ for
+$U \in \Ob(\mathcal{C})$
+are groupoids, we must exhibit for every $f : y \to x$ in $\mathcal{S}_U$ an
+inverse morphism. The diagram on the left (in $\mathcal{S}_U$) is mapped by
+$p$ to the diagram on the right:
+$$
+\xymatrix{
+y \ar[r]^f & x & U \ar[r]^{\text{id}_U} & U \\
+x \ar@{-->}[u] \ar[ru]_{\text{id}_x} & &
+U \ar@{-->}[u]\ar[ru]_{\text{id}_U} & \\
+}
+$$
+Since only $\text{id}_U$ makes the diagram on the right commute, there is a
+unique $g : x \to y$ making the diagram on the left commute, so
+$fg = \text{id}_x$. By a similar argument there is a unique $h : y \to x$ so
+that $gh = \text{id}_y$. Then $fgh = f : y \to x$. We have $fg = \text{id}_x$,
+so $h = f$. Condition (2) of Definition \ref{definition-fibred-groupoids} says
+exactly that every morphism of $\mathcal{S}$ is strongly cartesian. Hence
+condition (1) of Definition \ref{definition-fibred-groupoids} implies that
+$\mathcal{S}$ is a fibred category over $\mathcal{C}$.
+
+\medskip\noindent
+Conversely, assume all fibre categories are groupoids and
+$\mathcal{S}$ is a fibred category over $\mathcal{C}$.
+We have to check conditions (1) and (2) of
+Definition \ref{definition-fibred-groupoids}.
+The first condition follows trivially. Let $\phi : y \to x$,
+$\psi : z \to x$ and $f : p(z) \to p(y)$ such that
+$p(\phi) \circ f = p(\psi)$ be as in condition (2) of
+Definition \ref{definition-fibred-groupoids}.
+Write $U = p(x)$, $V = p(y)$, $W = p(z)$, $p(\phi) = g : V \to U$,
+$p(\psi) = h : W \to U$. Choose a strongly cartesian $g^*x \to x$
+lying over $g$. Then we get a morphism $i : y \to g^*x$ in
+$\mathcal{S}_V$, which is therefore an isomorphism. We
+also get a morphism $j : z \to g^*x$ corresponding to
+the pair $(\psi, f)$ as $g^*x \to x$ is strongly cartesian.
+Then one checks that $\chi = i^{-1} \circ j$ is a solution.
+
+\medskip\noindent
+We have seen in the proof of (1) $\Rightarrow$ (2) that
+every morphism of $\mathcal{S}$ is strongly cartesian.
+The final statement follows directly from Lemma \ref{lemma-fibred}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibred-gives-fibred-groupoids}
+Let $\mathcal{C}$ be a category.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a fibred category.
+Let $\mathcal{S}'$ be the subcategory of $\mathcal{S}$ defined
+as follows
+\begin{enumerate}
+\item $\Ob(\mathcal{S}') = \Ob(\mathcal{S})$, and
+\item for $x, y \in \Ob(\mathcal{S}')$ the set of morphisms between $x$
+and $y$ in $\mathcal{S}'$ is the set of strongly cartesian morphisms between
+$x$ and $y$ in $\mathcal{S}$.
+\end{enumerate}
+Let $p' : \mathcal{S}' \to \mathcal{C}$ be the restriction of $p$
+to $\mathcal{S}'$. Then $p' : \mathcal{S}' \to \mathcal{C}$ is fibred
+in groupoids.
+\end{lemma}
+
+\begin{proof}
+Note that the construction makes sense since by
+Lemma \ref{lemma-composition-cartesian}
+the identity morphism of any object of $\mathcal{S}$ is strongly cartesian,
+and the composition of strongly cartesian morphisms is strongly cartesian.
+The first lifting property of
+Definition \ref{definition-fibred-groupoids}
+follows from the condition that in a fibred category
+given any morphism $f : V \to U$ and $x$ lying over $U$ there exists
+a strongly cartesian morphism $\varphi : y \to x$ lying over $f$.
+Let us check the second lifting property of
+Definition \ref{definition-fibred-groupoids}
+for the category $p' : \mathcal{S}' \to \mathcal{C}$ over $\mathcal{C}$.
+To do this we argue as in the discussion following
+Definition \ref{definition-fibred-groupoids}.
+Thus in Diagram \ref{equation-fibred-groupoids} the
+morphisms $\phi$, $\psi$ and $\gamma$ are strongly cartesian morphisms
+of $\mathcal{S}$.
+Hence $\gamma$ and $\phi \circ \psi$ are strongly cartesian morphisms
+of $\mathcal{S}$ lying over the same arrow of $\mathcal{C}$ and
+having the same target in $\mathcal{S}$. By the discussion following
+Definition \ref{definition-cartesian-over-C}
+this means these two arrows are isomorphic as desired (here we use also
+that any isomorphism in $\mathcal{S}$ is strongly cartesian, by
+Lemma \ref{lemma-composition-cartesian} again).
+\end{proof}
+
+\begin{example}
+\label{example-group-homomorphism-fibreedingroupoids}
+A homomorphism of groups $p : G \to H$ gives rise to a functor
+$p : \mathcal{S}\to\mathcal{C}$ as in Example
+\ref{example-group-homomorphism-functor}. This functor
+$p : \mathcal{S}\to\mathcal{C}$ is fibred in groupoids if and only if
+$p$ is surjective. The fibre category $\mathcal{S}_U$ over the (unique)
+object $U\in \Ob(\mathcal{C})$ is the category associated to the
+kernel of $p$ as in Example \ref{example-group-groupoid}.
+\end{example}
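
\noindent
As a concrete illustration of this example (added here for convenience and
not used elsewhere): the quotient map $p : \mathbf{Z} \to \mathbf{Z}/2\mathbf{Z}$
is surjective, so the associated functor of one-object categories is fibred
in groupoids, and the fibre category over the unique object of $\mathcal{C}$
is the one-object category whose set of morphisms is the kernel $2\mathbf{Z}$.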
+
+\noindent
+Given $p : \mathcal{S} \to \mathcal{C}$, we can ask: if the fibre
+category $\mathcal{S}_U$ is a groupoid for all $U \in \Ob(\mathcal{C})$,
+must $\mathcal{S}$ be fibred in groupoids over $\mathcal{C}$? We can see the
+answer is no as follows. Start with a category fibred in groupoids
+$p : \mathcal{S} \to \mathcal{C}$. Altering the morphisms in $\mathcal{S}$
+which do not map to the identity morphism on some object does not alter the
+categories $\mathcal{S}_U$. Hence we can violate the existence and uniqueness
+conditions on lifts. One example is the functor from Example
+\ref{example-group-homomorphism-fibreedingroupoids} when $G \to H$ is not
+surjective. Here is another example.
+
+\begin{example}
+\label{example-not-fibred-in-groupoids-but-fibre-cats-are}
+Let $\Ob(\mathcal{C}) = \{A, B, T\}$ and
+$\Mor_\mathcal{C}(A, B) = \{f\}$, $\Mor_\mathcal{C}(B, T) = \{g\}$,
$\Mor_\mathcal{C}(A, T) = \{h\} = \{gf\}$, plus the identity morphism
+for each object. See the diagram below for a picture of this category. Now let
+$\Ob(\mathcal{S}) = \{A', B', T'\}$ and
+$\Mor_\mathcal{S}(A', B') = \emptyset$,
+$\Mor_\mathcal{S}(B', T') = \{g'\}$,
$\Mor_\mathcal{S}(A', T') = \{h'\}$, plus the identity morphisms. The
+functor $p : \mathcal{S} \to \mathcal{C}$ is obvious. Then for every
+$U \in \Ob(\mathcal{C})$, $\mathcal{S}_U$ is the category with one
+object and the identity morphism on that object, so a groupoid, but the
+morphism $f: A \to B$ cannot be lifted. Similarly, if we declare
+$\Mor_\mathcal{S}(A', B') = \{f'_1, f'_2\}$ and
$\Mor_\mathcal{S}(A', T') = \{h'\} = \{g'f'_1\} = \{g'f'_2\}$, then
+the fibre categories are the same and $f: A \to B$ in the diagram below has
+two lifts.
+$$
+\xymatrix{
+B' \ar[r]^{g'} & T' & & B \ar[r]^g & T & \\
+A' \ar@{-->}[u]^{??} \ar[ru]_{h'} & & \ar@{}[u]^{above} &
+A \ar[u]^f \ar[ru]_{gf = h} & \\
+}
+$$
+\end{example}
+
+\noindent
+Later we would like to make assertions such as ``any category fibred in
+groupoids over $\mathcal{C}$ is equivalent to a split one'', or
+``any category fibred in groupoids whose fibre categories are setlike
+is equivalent to a category fibred in sets''. The notion of equivalence
+depends on the $2$-category we are working with.
+
+\begin{definition}
+\label{definition-categories-fibred-in-groupoids-over-C}
+Let $\mathcal{C}$ be a category.
+The {\it $2$-category of categories fibred in groupoids over $\mathcal{C}$}
+is the sub $2$-category of the $2$-category of fibred categories
+over $\mathcal{C}$ (see Definition \ref{definition-fibred-categories-over-C})
+defined as follows:
+\begin{enumerate}
+\item Its objects will be categories
+$p : \mathcal{S} \to \mathcal{C}$ fibred in groupoids.
+\item Its $1$-morphisms $(\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be functors $G : \mathcal{S} \to \mathcal{S}'$ such that
$p' \circ G = p$ (since every morphism is strongly cartesian,
$G$ automatically preserves strongly cartesian morphisms).
+\item Its $2$-morphisms $t : G \to H$ for
+$G, H : (\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be morphisms of functors
+such that $p'(t_x) = \text{id}_{p(x)}$
+for all $x \in \Ob(\mathcal{S})$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that every $2$-morphism is automatically an isomorphism!
+Hence this is actually a $(2, 1)$-category and not just a
+$2$-category. Here is the obligatory lemma on $2$-fibre products.
+
+\begin{lemma}
+\label{lemma-2-product-fibred-categories}
+Let $\mathcal{C}$ be a category.
+The $2$-category of categories fibred in groupoids
over $\mathcal{C}$ has $2$-fibre products, and they are described as in
+Lemma \ref{lemma-2-product-categories-over-C}.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-2-product-fibred-categories-over-C}
+the fibre product as described in
+Lemma \ref{lemma-2-product-categories-over-C} is a fibred category.
+Hence it suffices to prove that the fibre categories are
+groupoids, see Lemma \ref{lemma-fibred-groupoids}.
+By Lemma \ref{lemma-fibre-2-fibre-product-categories-over-C}
+it is enough to show that the $2$-fibre product of groupoids
+is a groupoid, which is clear (from the construction in
+Lemma \ref{lemma-2-fibre-product-categories} for example).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equivalence-fibred-categories}
+Let $p : \mathcal{S}\to \mathcal{C}$ and
$p' : \mathcal{S}'\to \mathcal{C}$ be categories fibred in groupoids, and
suppose that $G : \mathcal{S}\to \mathcal{S}'$ is a functor over
+$\mathcal{C}$.
+\begin{enumerate}
+\item Then $G$ is faithful (resp.\ fully faithful, resp.\ an equivalence)
+if and only if for each $U\in\Ob(\mathcal{C})$ the induced functor
+$G_U : \mathcal{S}_U\to \mathcal{S}'_U$ is faithful
+(resp.\ fully faithful, resp.\ an equivalence).
+\item If $G$ is an equivalence, then $G$ is an equivalence in the
+$2$-category of categories fibred in groupoids over $\mathcal{C}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $x, y$ be objects of $\mathcal{S}$ lying over the same object $U$.
+Consider the commutative diagram
+$$
+\xymatrix{
+\Mor_\mathcal{S}(x, y) \ar[rd]_p \ar[rr]_G & &
+\Mor_{\mathcal{S}'}(G(x), G(y)) \ar[ld]^{p'} \\
+& \Mor_\mathcal{C}(U, U) &
+}
+$$
+From this diagram it is clear that if $G$ is faithful (resp.\ fully faithful)
+then so is each $G_U$.
+
+\medskip\noindent
+Suppose $G$ is an equivalence. For every object
+$x'$ of $\mathcal{S}'$ there exists an object $x$ of $\mathcal{S}$
+such that $G(x)$ is isomorphic to $x'$. Suppose that $x'$ lies
+over $U'$ and $x$ lies over $U$. Then there is an isomorphism
+$f : U' \to U$ in $\mathcal{C}$, namely, $p'$ applied to the
+isomorphism $x' \to G(x)$. By the axioms of a category fibred
+in groupoids there exists an arrow $f^*x \to x$ of $\mathcal{S}$
+lying over $f$. Hence there exists an isomorphism
+$\alpha : x' \to G(f^*x)$ such that $p'(\alpha) = \text{id}_{U'}$
+(this time by the axioms for $\mathcal{S}'$). All in all we conclude
+that for every object $x'$ of $\mathcal{S}'$ we can choose
+a pair $(o_{x'}, \alpha_{x'})$ consisting of an object
+$o_{x'}$ of $\mathcal{S}$ and an isomorphism $\alpha_{x'} : x' \to G(o_{x'})$
+with $p'(\alpha_{x'}) = \text{id}_{p'(x')}$.
+From this point on we proceed as usual (see proof of
+Lemma \ref{lemma-equivalence-categories}) to produce an inverse
+functor $F : \mathcal{S}' \to \mathcal{S}$, by taking
+$x' \mapsto o_{x'}$ and $\varphi' : x' \to y'$ to the unique
+arrow $\varphi_{\varphi'} : o_{x'} \to o_{y'}$ with
+$\alpha_{y'}^{-1} \circ G(\varphi_{\varphi'}) \circ \alpha_{x'} = \varphi'$.
+With these choices $F$ is a functor over $\mathcal{C}$.
+We omit the verification that $G \circ F$ and $F \circ G$ are
+$2$-isomorphic to the respective identity functors
+(in the $2$-category of categories fibred in groupoids over $\mathcal{C}$).
+
+\medskip\noindent
+Suppose that $G_U$ is faithful (resp.\ fully faithful)
+for all $U\in\Ob(\mathcal C)$. To
+show that $G$ is faithful (resp.\ fully faithful)
+we have to show for any objects
+$x, y\in\Ob(\mathcal{S})$ that $G$ induces an
+injection (resp.\ bijection) between
+$\Mor_\mathcal{S}(x, y)$ and
+$\Mor_{\mathcal{S}'}(G(x), G(y))$.
+Set $U = p(x)$ and $V = p(y)$.
+It suffices to prove that $G$
induces an injection (resp.\ bijection) between the set of morphisms
$x \to y$ lying over $f$ and the set of morphisms $G(x) \to G(y)$ lying over
$f$, for any morphism $f : U \to V$.
+Now fix $f : U \to V$. Denote $f^*y \to y$ a pullback.
+Then also $G(f^*y) \to G(y)$ is a pullback.
The set of morphisms from $x$ to $y$ lying over $f$
is in bijection with the set of morphisms between
$x$ and $f^*y$ lying over $\text{id}_U$, by the second axiom
of a category fibred in groupoids. Similarly
the set of morphisms from $G(x)$ to $G(y)$ lying over $f$
is in bijection with the set of morphisms between
$G(x)$ and $G(f^*y)$ lying over $\text{id}_U$.
+Hence the fact that $G_U$ is faithful (resp.\ fully faithful)
+gives the desired result.
+
+\medskip\noindent
Finally suppose that $G_U$ is an equivalence for all $U$, so each $G_U$ is
fully faithful and essentially surjective. We have seen this implies $G$ is
+fully faithful, and thus to prove it is an equivalence we have to prove that
+it is essentially surjective. This is clear, for if $z'\in
+\Ob(\mathcal{S}')$ then $z'\in \Ob(\mathcal{S}'_U)$ where
+$U = p'(z')$. Since $G_U$ is essentially surjective we know that
+$z'$ is isomorphic, in $\mathcal{S}'_U$, to an object of the form
+$G_U(z)$ for some $z\in \Ob(\mathcal{S}_U)$. But morphisms
+in $\mathcal{S}'_U$ are morphisms in $\mathcal{S}'$ and hence $z'$ is
+isomorphic to $G(z)$ in $\mathcal{S}'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-diagonal-equivalence}
+Let $\mathcal{C}$ be a category. Let $p : \mathcal{S}\to \mathcal{C}$ and
$p' : \mathcal{S}'\to \mathcal{C}$ be categories fibred in groupoids.
Let $G : \mathcal{S}\to \mathcal{S}'$ be a functor over $\mathcal{C}$.
+Then $G$ is fully faithful if and only if the diagonal
+$$
+\Delta_G :
+\mathcal{S}
+\longrightarrow
+\mathcal{S} \times_{G, \mathcal{S}', G} \mathcal{S}
+$$
+is an equivalence.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-equivalence-fibred-categories}
+it suffices to look at fibre categories over an object $U$ of $\mathcal{C}$.
+An object of the right hand side is a triple $(x, x', \alpha)$ where
+$\alpha : G(x) \to G(x')$ is a morphism in $\mathcal{S}'_U$.
+The functor $\Delta_G$ maps the object $x$ of $\mathcal{S}_U$
+to the triple $(x, x, \text{id}_{G(x)})$. Note that $(x, x', \alpha)$
+is in the essential image of $\Delta_G$ if and only if $\alpha = G(\beta)$
+for some morphism $\beta : x \to x'$ in $\mathcal{S}_U$ (details omitted).
+Hence in order for $\Delta_G$ to be an equivalence, every $\alpha$ has to
+be the image of a morphism $\beta : x \to x'$, and also every two
+distinct morphisms $\beta, \beta' : x \to x'$ have to give distinct
+morphisms $G(\beta), G(\beta')$. This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphisms-equivalent-fibred-groupoids}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{S}_i$, $i = 1, 2, 3, 4$ be categories fibred in
+groupoids over $\mathcal{C}$.
+Suppose that $\varphi : \mathcal{S}_1 \to \mathcal{S}_2$ and
+$\psi : \mathcal{S}_3 \to \mathcal{S}_4$ are equivalences
+over $\mathcal{C}$. Then
+$$
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{S}_2, \mathcal{S}_3)
+\longrightarrow
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{S}_1, \mathcal{S}_4),
+\quad \alpha \longmapsto \psi \circ \alpha \circ \varphi
+$$
+is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+This is a generality and holds in any $2$-category.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inertia-fibred-groupoids}
+Let $\mathcal{C}$ be a category.
+If $p : \mathcal{S} \to \mathcal{C}$ is fibred in groupoids, then
+so is the inertia fibred category $\mathcal{I}_\mathcal{S} \to \mathcal{C}$.
+\end{lemma}
+
+\begin{proof}
+Clear from the construction in
+Lemma \ref{lemma-inertia-fibred-category}
+or by using (from the same lemma) that
+$I_\mathcal{S} \to \mathcal{S}
+\times_{\Delta, \mathcal{S} \times_\mathcal{C} \mathcal{S}, \Delta}\mathcal{S}$
+is an equivalence and appealing to
+Lemma \ref{lemma-2-product-fibred-categories}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cute-groupoids}
+Let $\mathcal{C}$ be a category. Let $U \in \Ob(\mathcal{C})$.
+If $p : \mathcal{S} \to \mathcal{C}$ is a category fibred in groupoids
+and $p$ factors through $p' : \mathcal{S} \to \mathcal{C}/U$
+then $p' : \mathcal{S} \to \mathcal{C}/U$ is fibred in groupoids.
+\end{lemma}
+
+\begin{proof}
+We have already seen in Lemma \ref{lemma-cute} that $p'$ is a fibred
+category. Hence it suffices to prove the fibre categories are groupoids,
+see Lemma \ref{lemma-fibred-groupoids}.
+For $V \in \Ob(\mathcal{C})$ we have
+$$
+\mathcal{S}_V = \coprod\nolimits_{f : V \to U} \mathcal{S}_{(f : V \to U)}
+$$
where the left hand side is the fibre category of $p$ over $V$ and the right
hand side is the disjoint union of the fibre categories of $p'$ over the
objects $(f : V \to U)$ of $\mathcal{C}/U$. Since morphisms in a disjoint
union only connect objects in the same part, each part is a groupoid as soon
as the whole is. As $\mathcal{S}_V$ is a groupoid (because $p$ is fibred in
groupoids), the result follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibred-in-groupoids-over-fibred-in-groupoids}
+Let $\mathcal{A} \to \mathcal{B} \to \mathcal{C}$ be functors between
+categories. If $\mathcal{A}$ is fibred in groupoids over $\mathcal{B}$
+and $\mathcal{B}$ is fibred in groupoids over $\mathcal{C}$, then
+$\mathcal{A}$ is fibred in groupoids over $\mathcal{C}$.
+\end{lemma}
+
+\begin{proof}
+One can prove this directly from the definition. However, we will argue
+using the criterion of Lemma \ref{lemma-fibred-groupoids}.
+By Lemma \ref{lemma-fibred-over-fibred} we see that $\mathcal{A}$
+is fibred over $\mathcal{C}$. To finish the proof we show that the fibre
+category $\mathcal{A}_U$ is a groupoid for $U$ in $\mathcal{C}$.
+Namely, if $x \to y$ is a morphism of $\mathcal{A}_U$, then its
+image in $\mathcal{B}$ is an isomorphism as $\mathcal{B}_U$ is
+a groupoid. But then $x \to y$ is an isomorphism, for example by
+Lemma \ref{lemma-composition-cartesian} and the fact that every
+morphism of $\mathcal{A}$ is strongly $\mathcal{B}$-cartesian
+(see Lemma \ref{lemma-fibred-groupoids}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibred-groupoids-fibre-product-goes-up}
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category fibred in groupoids.
+Let $x \to y$ and $z \to y$ be morphisms of $\mathcal{S}$.
+If $p(x) \times_{p(y)} p(z)$ exists, then
+$x \times_y z$ exists and $p(x \times_y z) = p(x) \times_{p(y)} p(z)$.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemma \ref{lemma-fibred-category-representable-goes-up}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ameliorate-morphism-categories-fibred-groupoids}
+Let $\mathcal{C}$ be a category. Let $F : \mathcal{X} \to \mathcal{Y}$
+be a $1$-morphism of categories fibred in groupoids over $\mathcal{C}$.
+There exists a factorization $\mathcal{X} \to \mathcal{X}' \to \mathcal{Y}$
+by $1$-morphisms of categories fibred in groupoids over $\mathcal{C}$ such
+that $\mathcal{X} \to \mathcal{X}'$ is an equivalence over $\mathcal{C}$
+and such that $\mathcal{X}'$ is a category fibred in groupoids over
+$\mathcal{Y}$.
+\end{lemma}
+
+\begin{proof}
+Denote $p : \mathcal{X} \to \mathcal{C}$ and $q : \mathcal{Y} \to \mathcal{C}$
+the structure functors. We construct $\mathcal{X}'$ explicitly as follows.
+An object of $\mathcal{X}'$ is a quadruple $(U, x, y, f)$ where
+$x \in \Ob(\mathcal{X}_U)$, $y \in \Ob(\mathcal{Y}_U)$
+and $f : F(x) \to y$ is an isomorphism in $\mathcal{Y}_U$.
+A morphism $(a, b) : (U, x, y, f) \to (U', x', y', f')$ is given
+by $a : x \to x'$ and $b : y \to y'$ with $p(a) = q(b)$ and
+such that $f' \circ F(a) = b \circ f$. In other words
+$\mathcal{X}' = \mathcal{X} \times_{F, \mathcal{Y}, \text{id}} \mathcal{Y}$
+with the construction of the $2$-fibre product from
+Lemma \ref{lemma-2-product-categories-over-C}.
+By
+Lemma \ref{lemma-2-product-fibred-categories}
+we see that $\mathcal{X}'$ is a category fibred in groupoids over
+$\mathcal{C}$ and that $\mathcal{X}' \to \mathcal{Y}$ is a morphism of
+categories over $\mathcal{C}$. As functor $\mathcal{X} \to \mathcal{X}'$ we take
+$x \mapsto (p(x), x, F(x), \text{id}_{F(x)})$ on objects and
+$(a : x \to x') \mapsto (a, F(a))$ on morphisms. It is clear that
+the composition $\mathcal{X} \to \mathcal{X}' \to \mathcal{Y}$
+equals $F$. We omit the verification that
+$\mathcal{X} \to \mathcal{X}'$ is an equivalence of fibred categories over
+$\mathcal{C}$.
+
+\medskip\noindent
+Finally, we have to show that $\mathcal{X}' \to \mathcal{Y}$ is a category
+fibred in groupoids. Let $b : y' \to y$ be a morphism in $\mathcal{Y}$
+and let $(U, x, y, f)$ be an object of $\mathcal{X}'$ lying over $y$.
+Because $\mathcal{X}$ is fibred in groupoids over $\mathcal{C}$ we
+can find a morphism $a : x' \to x$ lying over $U' = q(y') \to q(y) = U$.
+Since $\mathcal{Y}$ is fibred in groupoids over $\mathcal{C}$ and since
+both $F(x') \to F(x)$ and $y' \to y$ lie over the same morphism $U' \to U$
+we can find $f' : F(x') \to y'$ lying over $\text{id}_{U'}$ such that
+$f \circ F(a) = b \circ f'$. Hence we obtain
+$(a, b) : (U', x', y', f') \to (U, x, y, f)$.
This verifies condition (1) of
+Definition \ref{definition-fibred-groupoids}.
+To see (2) let
+$(a, b) : (U', x', y', f') \to (U, x, y, f)$ and
+$(a', b') : (U'', x'', y'', f'') \to (U, x, y, f)$ be morphisms of
+$\mathcal{X}'$ and let $b'' : y' \to y''$ be a morphism of $\mathcal{Y}$
+such that $b' \circ b'' = b$. We have to show that there exists
+a unique morphism $a'' : x' \to x''$ such that
+$f'' \circ F(a'') = b'' \circ f'$ and such that
+$(a', b') \circ (a'', b'') = (a, b)$. Because $\mathcal{X}$ is fibred
+in groupoids we know there exists a unique morphism
+$a'' : x' \to x''$ such that $a' \circ a'' = a$ and $p(a'') = q(b'')$.
+Because $\mathcal{Y}$ is fibred in groupoids we see that
+$F(a'')$ is the unique morphism $F(x') \to F(x'')$ such that
+$F(a') \circ F(a'') = F(a)$ and $q(F(a'')) = q(b'')$. The relation
+$f'' \circ F(a'') = b'' \circ f'$ follows from this and the given
+relations $f \circ F(a) = b \circ f'$ and $f \circ F(a') = b' \circ f''$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-amelioration-unique}
+Let $\mathcal{C}$ be a category. Let $F : \mathcal{X} \to \mathcal{Y}$
+be a $1$-morphism of categories fibred in groupoids over $\mathcal{C}$.
+Assume we have a $2$-commutative diagram
+$$
+\xymatrix{
+\mathcal{X}' \ar[rd]_f &
+\mathcal{X} \ar[l]^a \ar[d]^F \ar[r]_b &
+\mathcal{X}'' \ar[ld]^g \\
+& \mathcal{Y}
+}
+$$
+where $a$ and $b$ are equivalences of categories over $\mathcal{C}$
+and $f$ and $g$ are categories fibred in groupoids. Then there exists
+an equivalence $h : \mathcal{X}'' \to \mathcal{X}'$ of categories over
+$\mathcal{Y}$ such that $h \circ b$ is $2$-isomorphic to $a$ as $1$-morphisms
+of categories over $\mathcal{C}$. If the diagram above actually commutes, then
+we can arrange it so that $h \circ b$ is $2$-isomorphic to $a$ as
+$1$-morphisms of categories over $\mathcal{Y}$.
+\end{lemma}
+
+\begin{proof}
+We will show that both $\mathcal{X}'$ and $\mathcal{X}''$ over $\mathcal{Y}$
+are equivalent to the category fibred in groupoids
+$\mathcal{X} \times_{F, \mathcal{Y}, \text{id}} \mathcal{Y}$
+over $\mathcal{Y}$, see proof of
+Lemma \ref{lemma-ameliorate-morphism-categories-fibred-groupoids}.
+Choose a quasi-inverse $b^{-1} : \mathcal{X}'' \to \mathcal{X}$ in the
+$2$-category of categories over $\mathcal{C}$.
+Since the right triangle of the diagram is $2$-commutative we see that
+$$
+\xymatrix{
+\mathcal{X} \ar[d]_F & \mathcal{X}'' \ar[l]^{b^{-1}} \ar[d]^g \\
+\mathcal{Y} & \mathcal{Y} \ar[l]
+}
+$$
+is $2$-commutative. Hence we obtain a $1$-morphism
+$c : \mathcal{X}'' \to
+\mathcal{X} \times_{F, \mathcal{Y}, \text{id}} \mathcal{Y}$
+by the universal property of the $2$-fibre product. Moreover $c$
+is a morphism of categories over $\mathcal{Y}$ (!) and an equivalence
+(by the assumption that $b$ is an equivalence, see
+Lemma \ref{lemma-equivalence-2-fibre-product}).
+Hence $c$ is an equivalence in the $2$-category of categories fibred
+in groupoids over $\mathcal{Y}$ by
+Lemma \ref{lemma-equivalence-fibred-categories}.
+
+\medskip\noindent
+We still have to construct a $2$-isomorphism between $c \circ b$ and
+the functor $d : \mathcal{X} \to
+\mathcal{X} \times_{F, \mathcal{Y}, \text{id}} \mathcal{Y}$,
+$x \mapsto (p(x), x, F(x), \text{id}_{F(x)})$
+constructed in the proof of
+Lemma \ref{lemma-ameliorate-morphism-categories-fibred-groupoids}.
+Let $\alpha : F \to g \circ b$ and $\beta : b^{-1} \circ b \to \text{id}$
+be $2$-isomorphisms between $1$-morphisms of categories over $\mathcal{C}$.
+Note that $c \circ b$ is given by the rule
+$$
+x \mapsto (p(x), b^{-1}(b(x)), g(b(x)), \alpha_x \circ F(\beta_x))
+$$
+on objects. Then we see that
+$$
+(\beta_x, \alpha_x) :
+(p(x), x, F(x), \text{id}_{F(x)})
+\longrightarrow
+(p(x), b^{-1}(b(x)), g(b(x)), \alpha_x \circ F(\beta_x))
+$$
+is a functorial isomorphism which gives our $2$-morphism
$d \to c \circ b$. Finally, if the diagram commutes then
+$\alpha_x$ is the identity for all $x$ and we see that this
+$2$-morphism is a $2$-morphism in the $2$-category of categories
+over $\mathcal{Y}$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Presheaves of categories}
+\label{section-presheaves-categories}
+
+\noindent
+In this section we compare the notion of fibred categories
+with the closely related notion of a ``presheaf of categories''.
+The basic construction is explained in the following example.
+
+\begin{example}
+\label{example-functor-categories}
+Let $\mathcal{C}$ be a category.
+Suppose that $F : \mathcal{C}^{opp} \to \textit{Cat}$ is a functor
+to the $2$-category of categories, see
+Definition \ref{definition-functor-into-2-category}.
+For $f : V \to U$ in $\mathcal{C}$ we will
+suggestively write $F(f) = f^\ast$ for the functor from $F(U)$ to $F(V)$.
+From this we can construct a fibred category $\mathcal{S}_F$ over
+$\mathcal{C}$ as follows. Define
+$$
+\Ob(\mathcal{S}_F) =
+\{(U, x) \mid U\in \Ob(\mathcal{C}), x\in \Ob(F(U))\}.
+$$
+For $(U, x), (V, y) \in \Ob(\mathcal{S}_F)$ we define
+\begin{align*}
+\Mor_{\mathcal{S}_F}((V, y), (U, x)) & =
+\{ (f, \phi) \mid f \in \Mor_\mathcal{C}(V, U),
+\phi \in \Mor_{F(V)}(y, f^\ast x)\} \\
+& =
+\coprod\nolimits_{f \in \Mor_\mathcal{C}(V, U)}
+\Mor_{F(V)}(y, f^\ast x)
+\end{align*}
+In order to define composition we use that $g^\ast \circ f^\ast =
+(f \circ g)^\ast$ for a pair of composable morphisms of $\mathcal{C}$
+(by definition of a functor into a $2$-category).
+Namely, we define the composition of $\psi : z \to g^\ast y$ and
+$ \phi : y \to f^\ast x$ to be $ g^\ast(\phi) \circ \psi$. The functor
+$p_F : \mathcal{S}_F \to \mathcal{C}$ is given by the rule
+$(U, x) \mapsto U$.
+Let us check that this is indeed a fibred category.
+Given $f: V \to U$ in $\mathcal{C}$ and $(U, x)$ a lift of $U$, then
+we claim $(f, \text{id}_{f^\ast x}): (V, {f^\ast x}) \to (U, x)$ is a
+strongly cartesian lift of $f$.
We have to show that a morphism $h$ as in the diagram on the left
determines a unique morphism $(h, \nu)$ as in the diagram on the right:
+$$
+\xymatrix{
+V \ar[r]^f &
+U &
+(V, f^*x) \ar[r]^{(f, \text{id}_{f^*x})} &
+(U, x) \\
+W \ar@{-->}[u]^h \ar[ru]_g & &
+(W, z) \ar@{-->}[u]^{(h, \nu)} \ar[ru]_{(g, \psi)} &
+}
+$$
+Just take $\nu = \psi$ which works because $f \circ h = g$
+and hence $g^*x = h^*f^*x$. Moreover, this is the only lift
+making the diagram (on the right) commute.
+\end{example}
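
\noindent
A special case worth recording (an aside, viewing a set as a discrete
category): if each $F(U)$ is a discrete category, i.e., $F$ is a presheaf of
sets, then each set $\Mor_{F(V)}(y, f^\ast x)$ is empty or a singleton, and
$\mathcal{S}_F$ is the category whose objects are pairs $(U, x)$ with
$x \in F(U)$ and whose morphisms $(V, y) \to (U, x)$ are the morphisms
$f : V \to U$ of $\mathcal{C}$ with $f^\ast x = y$.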
+
+\begin{definition}
+\label{definition-split-fibred-category}
+Let $\mathcal{C}$ be a category.
+Suppose that $F : \mathcal{C}^{opp} \to \textit{Cat}$ is a functor
+to the $2$-category of categories.
+We will write $p_F : \mathcal{S}_F \to \mathcal{C}$ for the
+fibred category constructed in
+Example \ref{example-functor-categories}.
+A {\it split fibred category} is a fibred category isomorphic (!)
+over $\mathcal{C}$ to one of these categories {\it $\mathcal{S}_F$}.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-when-split}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{S}$ be a fibred category over $\mathcal{C}$.
+Then $\mathcal{S}$ is split if and only if for some choice
+of pullbacks (see Definition \ref{definition-pullback-functor-fibred-category})
+the pullback functors
+$(f \circ g)^*$ and $g^* \circ f^*$ are equal.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definitions.
+\end{proof}
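
\noindent
For instance (this merely makes one direction of the proof explicit), for the
fibred category $\mathcal{S}_F$ of Example \ref{example-functor-categories}
we may choose the strongly cartesian morphisms
$(f, \text{id}_{f^\ast x}) : (V, f^\ast x) \to (U, x)$ as pullbacks. The
resulting pullback functors send $(U, x)$ to $(V, f^\ast x)$, and the equality
$(f \circ g)^\ast = g^\ast \circ f^\ast$ holds on the nose exactly because
$F$ is a functor.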
+
+\begin{lemma}
+\label{lemma-fibred-strict}
+Let $ p : \mathcal{S} \to \mathcal{C}$ be a fibred category.
+There exists a contravariant functor $F : \mathcal{C} \to \textit{Cat}$
+such that $\mathcal{S}$ is equivalent to $\mathcal{S}_F$
+in the $2$-category of fibred categories over $\mathcal{C}$. In other
+words, every fibred category is equivalent to a split one.
+\end{lemma}
+
+\begin{proof}
+Let us make a choice of pullbacks (see
+Definition \ref{definition-pullback-functor-fibred-category}).
+By Lemma \ref{lemma-fibred} we get pullback functors $f^*$ for
+every morphism $f$ of $\mathcal{C}$.
+
+\medskip\noindent
+We construct a new category $\mathcal{S}'$ as follows.
+The objects of $\mathcal{S}'$ are pairs $(x, f)$
+consisting of a morphism $f : V \to U$ of $\mathcal{C}$
+and an object $x$ of $\mathcal{S}$ over $U$, i.e.,
+$x\in \Ob(\mathcal{S}_U)$. The functor
+$p' : \mathcal{S}' \to \mathcal{C}$ will map the pair $(x, f)$ to the source
+of the morphism $f$, in other words $p'(x, f : V\to U) = V$. A morphism
+$\varphi : (x_1, f_1: V_1 \to U_1) \to (x_2, f_2 : V_2 \to U_2)$ is given by a
+pair $(\varphi, g)$ consisting of a morphism $g : V_1 \to V_2$ and a morphism
+$\varphi : f_1^\ast x_1 \to f_2^\ast x_2$ with $p(\varphi) = g$. It is no
+problem to define the composition law: $(\varphi, g) \circ (\psi, h) =
+(\varphi \circ \psi, g\circ h)$ for any pair of composable morphisms.
+There is a natural functor $\mathcal{S} \to \mathcal{S}'$ which simply maps
+$x$ over $U$ to the pair $(x, \text{id}_U)$.
+
+\medskip\noindent
+At this point we need to check that $p'$ makes $\mathcal{S}'$ into a
+fibred category over $\mathcal{C}$, and we need to check that
+$\mathcal{S} \to \mathcal{S}'$ is an equivalence of categories over
+$\mathcal{C}$ which maps strongly cartesian morphisms to strongly
+cartesian morphisms. We omit the verifications.
+
+\medskip\noindent
+Finally, we can define pullback functors on $\mathcal{S}'$
+by setting $g^\ast(x, f) = (x, f \circ g)$ on objects if
+$g : V' \to V$ and $f : V \to U$. On morphisms
+$(\varphi, \text{id}_V) : (x_1, f_1) \to (x_2, f_2)$
+between morphisms in $\mathcal{S}'_V$ we set $g^\ast(\varphi, \text{id}_V) =
+(g^\ast\varphi, \text{id}_{V'})$ where we use the unique identifications
+$g^\ast f_i^\ast x_i = (f_i \circ g)^\ast x_i$ from Lemma
+\ref{lemma-fibred} to think of $g^\ast\varphi$ as a morphism from
+$(f_1 \circ g)^\ast x_1$ to $(f_2 \circ g)^\ast x_2$. Clearly, these pullback
+functors $g^\ast$ have the property that
+$g_1^\ast \circ g_2^\ast = (g_2\circ g_1)^\ast$, in other words $\mathcal{S}'$
+is split as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Presheaves of groupoids}
+\label{section-presheaves-groupoids}
+
+\noindent
+In this section we compare the notion of categories fibred in groupoids
+with the closely related notion of a ``presheaf of groupoids''. The basic
+construction is explained in the following example.
+
+\begin{example}
+\label{example-functor-groupoids}
+This example is the analogue of
+Example \ref{example-functor-categories},
+for ``presheaves of groupoids'' instead of ``presheaves of categories''.
+The output will be a category fibred in groupoids instead of a fibred category.
+Suppose that $F : \mathcal{C}^{opp} \to \textit{Groupoids}$ is a functor
to the $2$-category of groupoids, see
+Definition \ref{definition-functor-into-2-category}.
+For $f : V \to U$ in $\mathcal{C}$ we will
+suggestively write $F(f) = f^\ast$ for the functor from $F(U)$ to $F(V)$.
+We construct a category $\mathcal{S}_F$ fibred in groupoids over $\mathcal{C}$
+as follows. Define
+$$
+\Ob(\mathcal{S}_F) =
+\{(U, x) \mid U\in \Ob(\mathcal{C}), x\in \Ob(F(U))\}.
+$$
+For $(U, x), (V, y) \in \Ob(\mathcal{S}_F)$ we define
+\begin{align*}
+\Mor_{\mathcal{S}_F}((V, y), (U, x))
+& =
+\{ (f, \phi) \mid f \in \Mor_\mathcal{C}(V, U),
+\phi \in \Mor_{F(V)}(y, f^\ast x)\} \\
+& =
+\coprod\nolimits_{f \in \Mor_\mathcal{C}(V, U)}
+\Mor_{F(V)}(y, f^\ast x)
+\end{align*}
+In order to define composition we use that $g^\ast \circ f^\ast =
+(f \circ g)^\ast$ for a pair of composable morphisms of $\mathcal{C}$
+(by definition of a functor into a $2$-category).
+Namely, we define the composition of $\psi : z \to g^\ast y$ and
+$ \phi : y \to f^\ast x$ to be $ g^\ast(\phi) \circ \psi$. The functor
+$p_F : \mathcal{S}_F \to \mathcal{C}$ is given by the rule $(U, x) \mapsto U$.
+The condition that $F(U)$ is a groupoid for every $U$ guarantees that
+$\mathcal{S}_F$ is fibred in groupoids over $\mathcal{C}$, as we have
+already seen in
+Example \ref{example-functor-categories}
+that $\mathcal{S}_F$ is a fibred category, see
+Lemma \ref{lemma-fibred-groupoids}.
+But we can also prove conditions (1), (2) of
+Definition \ref{definition-fibred-groupoids}
+directly as follows: (1) Lifts of
+morphisms exist since given $f: V \to U$ in $\mathcal{C}$ and $(U, x)$
+an object of $\mathcal{S}_F$ over $U$, then
+$(f, \text{id}_{f^\ast x}): (V, {f^\ast x}) \to (U, x)$ is a lift of $f$.
+(2) Suppose given solid diagrams as follows
+$$
+\xymatrix{
+V \ar[r]^f & U & (V, y) \ar[r]^{(f, \phi)} & (U, x) \\
+W \ar@{-->}[u]^h \ar[ru]_g & &
+(W, z) \ar@{-->}[u]^{(h, \nu)} \ar[ru]_{(g, \psi)} & \\
+}
+$$
Then for the diagram on the right to commute we need
$h^\ast\phi \circ \nu = \psi$, i.e., $\nu = (h^\ast \phi)^{-1} \circ \psi$
(note that $h^\ast\phi$ is invertible as $F(W)$ is a groupoid),
so given $h$ the lift $\nu$ exists and is unique.
+\end{example}
+
+\begin{definition}
+\label{definition-split-category-fibred-in-groupoids}
+Let $\mathcal{C}$ be a category.
+Suppose that $F : \mathcal{C}^{opp} \to \textit{Groupoids}$ is a functor
+to the $2$-category of groupoids.
+We will write $p_F : \mathcal{S}_F \to \mathcal{C}$ for the
+category fibred in groupoids constructed in
+Example \ref{example-functor-groupoids}.
+A {\it split category fibred in groupoids} is a
+category fibred in groupoids isomorphic (!)
+over $\mathcal{C}$ to one of these categories {\it $\mathcal{S}_F$}.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-fibred-groupoids-strict}
+Let $ p : \mathcal{S} \to \mathcal{C}$ be a category fibred in groupoids.
+There exists a contravariant functor $F : \mathcal{C} \to \textit{Groupoids}$
+such that $\mathcal{S}$ is equivalent to $\mathcal{S}_F$ over $\mathcal{C}$.
+In other words, every category fibred in groupoids is equivalent to a split one.
+\end{lemma}
+
+\begin{proof}
+Make a choice of pullbacks (see
+Definition \ref{definition-pullback-functor-fibred-category}).
+By Lemmas \ref{lemma-fibred} and \ref{lemma-fibred-groupoids}
+we get pullback functors $f^*$ for
+every morphism $f$ of $\mathcal{C}$.
+
+\medskip\noindent
+We construct a new category $\mathcal{S}'$ as follows.
+The objects of $\mathcal{S}'$ are pairs $(x, f)$
+consisting of a morphism $f : V \to U$ of $\mathcal{C}$
+and an object $x$ of $\mathcal{S}$ over $U$, i.e.,
+$x\in \Ob(\mathcal{S}_U)$. The functor
+$p' : \mathcal{S}' \to \mathcal{C}$ will map the pair $(x, f)$ to the source
+of the morphism $f$, in other words $p'(x, f : V\to U) = V$. A morphism
+$\varphi : (x_1, f_1: V_1 \to U_1) \to (x_2, f_2 : V_2 \to U_2)$ is given by a
+pair $(\varphi, g)$ consisting of a morphism $g : V_1 \to V_2$ and a morphism
+$\varphi : f_1^\ast x_1 \to f_2^\ast x_2$ with $p(\varphi) = g$. It is no
+problem to define the composition law: $(\varphi, g) \circ (\psi, h) =
+(\varphi \circ \psi, g\circ h)$ for any pair of composable morphisms.
+There is a natural functor $\mathcal{S} \to \mathcal{S}'$ which simply maps
+$x$ over $U$ to the pair $(x, \text{id}_U)$.
+
+\medskip\noindent
+At this point we need to check that $p'$ makes $\mathcal{S}'$ into a category
+fibred in groupoids over $\mathcal{C}$, and we need to check that
+$\mathcal{S} \to \mathcal{S}'$ is an equivalence of categories over
+$\mathcal{C}$. We omit the verifications.
+
+\medskip\noindent
+Finally, we can define pullback functors on $\mathcal{S}'$
+by setting $g^\ast(x, f) = (x, f \circ g)$ on objects if
+$g : V' \to V$ and $f : V \to U$. On morphisms
+$(\varphi, \text{id}_V) : (x_1, f_1) \to (x_2, f_2)$
+in the fibre category $\mathcal{S}'_V$ we set $g^\ast(\varphi, \text{id}_V) =
+(g^\ast\varphi, \text{id}_{V'})$, where we use the unique identifications
+$g^\ast f_i^\ast x_i = (f_i \circ g)^\ast x_i$ from Lemma
+\ref{lemma-fibred-groupoids} to think of $g^\ast\varphi$ as a morphism from
+$(f_1 \circ g)^\ast x_1$ to $(f_2 \circ g)^\ast x_2$. Clearly, these pullback
+functors $g^\ast$ have the property that
+$g_1^\ast \circ g_2^\ast = (g_2\circ g_1)^\ast$, in other words $\mathcal{S}'$
+is split as desired.
+\end{proof}
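+
+\noindent
+To spell out the last claim (a verification added here for the reader),
+for composable morphisms $g_1 : V'' \to V'$, $g_2 : V' \to V$ and an object
+$(x, f : V \to U)$ of $\mathcal{S}'$ we have
+$$
+g_1^\ast(g_2^\ast(x, f))
+= g_1^\ast(x, f \circ g_2)
+= (x, (f \circ g_2) \circ g_1)
+= (g_2 \circ g_1)^\ast(x, f),
+$$
+an equality of objects, not merely an isomorphism, which is exactly what
+splitness requires.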
+
+\noindent
+We will see an alternative proof of this lemma in
+Section \ref{section-representable-1-morphisms}.
+
+
+
+
+
+
+
+
+\section{Categories fibred in sets}
+\label{section-fibred-in-sets}
+
+\begin{definition}
+\label{definition-discrete}
+A category is called {\it discrete} if the only morphisms are the identity
+morphisms.
+\end{definition}
+
+\noindent
+A discrete category has only one interesting piece of information:
+its set of objects. Thus we sometimes confuse discrete categories
+with sets.
+
+\begin{definition}
+\label{definition-category-fibred-sets}
+Let $\mathcal{C}$ be a category.
+A {\it category fibred in sets}, or a {\it category fibred
+in discrete categories} is a category fibred in groupoids all
+of whose fibre categories are discrete.
+\end{definition}
+
+\noindent
+We want to clarify the relationship between categories fibred in sets
+and presheaves (see Definition \ref{definition-presheaf}).
+To do this it makes sense to first make the following definition.
+
+\begin{definition}
+\label{definition-categories-fibred-in-sets-over-C}
+Let $\mathcal{C}$ be a category.
+The {\it $2$-category of categories fibred in sets over $\mathcal{C}$}
+is the sub $2$-category of the category of categories fibred in groupoids
+over $\mathcal{C}$ (see
+Definition \ref{definition-categories-fibred-in-groupoids-over-C})
+defined as follows:
+\begin{enumerate}
+\item Its objects will be categories
+$p : \mathcal{S} \to \mathcal{C}$ fibred in sets.
+\item Its $1$-morphisms $(\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be functors $G : \mathcal{S} \to \mathcal{S}'$ such that
+$p' \circ G = p$ (since every morphism is strongly cartesian,
+$G$ automatically preserves them).
+\item Its $2$-morphisms $t : G \to H$ for
+$G, H : (\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be morphisms of functors
+such that $p'(t_x) = \text{id}_{p(x)}$
+for all $x \in \Ob(\mathcal{S})$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that every $2$-morphism is automatically an isomorphism.
+Hence this $2$-category is actually a $(2, 1)$-category.
+Here is the obligatory lemma on the existence of $2$-fibre products.
+
+\begin{lemma}
+\label{lemma-2-product-categories-fibred-sets}
+Let $\mathcal{C}$ be a category.
+The 2-category of categories fibred in sets over $\mathcal{C}$
+has 2-fibre products. More precisely, the 2-fibre product described in
+Lemma \ref{lemma-2-product-categories-over-C}
+returns a category fibred in sets if one starts out with such.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{example}
+\label{example-presheaf}
+This example is the analogue of
+Examples \ref{example-functor-categories} and
+\ref{example-functor-groupoids}
+for presheaves instead of ``presheaves of categories''.
+The output will be a category fibred in sets instead of a fibred category.
+Suppose that $F : \mathcal{C}^{opp} \to \textit{Sets}$ is a presheaf.
+For $f : V \to U$ in $\mathcal{C}$ we will
+suggestively write $F(f) = f^\ast : F(U) \to F(V)$.
+We construct a category $\mathcal{S}_F$ fibred in sets over $\mathcal{C}$
+as follows. Define
+$$
+\Ob(\mathcal{S}_F) =
+\{(U, x) \mid U \in \Ob(\mathcal{C}), x \in F(U)\}.
+$$
+For $(U, x), (V, y) \in \Ob(\mathcal{S}_F)$ we define
+\begin{align*}
+\Mor_{\mathcal{S}_F}((V, y), (U, x))
+& =
+\{f \in \Mor_\mathcal{C}(V, U) \mid f^*x = y\}
+\end{align*}
+Composition is inherited from composition in $\mathcal{C}$
+which works as $g^\ast \circ f^\ast = (f \circ g)^\ast$
+for a pair of composable morphisms of $\mathcal{C}$.
+The functor $p_F : \mathcal{S}_F \to \mathcal{C}$
+is given by the rule $(U, x) \mapsto U$.
+As every fibre category $\mathcal{S}_{F, U}$ is discrete with underlying
+set $F(U)$, and we have already seen in
+Example \ref{example-functor-groupoids}
+that $\mathcal{S}_F$ is a category fibred in groupoids,
+we conclude that $\mathcal{S}_F$ is fibred in sets.
+\end{example}
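+
+\noindent
+For a concrete instance of this construction (our illustration, not part
+of the original text), let $\mathcal{C}$ be the category of open subsets
+of a topological space $X$ with inclusions as morphisms, and let $F(U)$
+be the set of continuous functions $U \to \mathbf{R}$, with $f^\ast$
+given by restriction. Then
+$$
+\Ob(\mathcal{S}_F) =
+\{(U, s) \mid U \subset X \text{ open},\ s : U \to \mathbf{R}
+\text{ continuous}\},
+$$
+a morphism $(V, t) \to (U, s)$ exists if and only if $V \subset U$ and
+$s|_V = t$ (and is then unique), and the fibre category
+$\mathcal{S}_{F, U}$ is the discrete category on the set $F(U)$.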
+
+\begin{lemma}
+\label{lemma-2-category-fibred-sets}
+\begin{slogan}
+Categories fibred in sets are precisely presheaves.
+\end{slogan}
+Let $\mathcal{C}$ be a category.
+The only $2$-morphisms between categories fibred in sets are identities.
+In other words, the $2$-category of categories fibred in sets is a category.
+Moreover, there is an equivalence of categories
+$$
+\left\{
+\begin{matrix}
+\text{the category of presheaves}\\
+\text{of sets over }\mathcal{C}
+\end{matrix}
+\right\}
+\leftrightarrow
+\left\{
+\begin{matrix}
+\text{the category of categories}\\
+\text{fibred in sets over }\mathcal{C}
+\end{matrix}
+\right\}
+$$
+The functor from left to right is the construction
+$F \mapsto \mathcal{S}_F$ discussed in
+Example \ref{example-presheaf}.
+The functor from right to left assigns to $p : \mathcal{S} \to \mathcal{C}$
+the presheaf of objects $U \mapsto \Ob(\mathcal{S}_U)$.
+\end{lemma}
+
+\begin{proof}
+The first assertion is clear, as the only morphisms in the fibre
+categories are identities.
+
+\medskip\noindent
+Suppose that $p :
+\mathcal{S} \to \mathcal{C}$ is fibred in sets. Let $f : V \to U$
+be a morphism in $\mathcal{C}$ and let $x \in \Ob(\mathcal{S}_U)$.
+Then there is exactly one choice for the object $f^\ast x$. Thus we see that
+$(f \circ g)^\ast x = g^\ast(f^\ast x)$ for $f, g$ as in Lemma
+\ref{lemma-fibred-groupoids}. It follows that we may think of the
+assignments $U \mapsto \Ob(\mathcal{S}_U)$ and $f \mapsto f^\ast$
+as a presheaf on $\mathcal{C}$.
+\end{proof}
+
+\noindent
+Here is an important example of a category fibred in sets.
+
+\begin{example}
+\label{example-fibred-category-from-functor-of-points}
+Let $\mathcal{C}$ be a category. Let $X \in \Ob(\mathcal{C})$.
+Consider the representable presheaf $h_X = \Mor_\mathcal{C}(-, X)$
+(see Example \ref{example-hom-functor}).
+On the other hand, consider the category $p : \mathcal{C}/X \to \mathcal{C}$
+from Example \ref{example-category-over-X}.
+The fibre category $(\mathcal{C}/X)_U$ has as objects morphisms
+$h : U \to X$, and only identities as morphisms. Hence we see that
+under the correspondence of
+Lemma \ref{lemma-2-category-fibred-sets}
+we have
+$$
+h_X \longleftrightarrow \mathcal{C}/X.
+$$
+In other words, the category $\mathcal{C}/X$ is canonically equivalent
+to the category $\mathcal{S}_{h_X}$ associated
+to $h_X$ in
+Example \ref{example-presheaf}.
+\end{example}
+
+\noindent
+For this reason it is tempting to define a ``representable'' object in the
+2-category of categories fibred in groupoids to be a category fibred in
+sets whose associated presheaf is representable. However, this would not
+be a good definition to use, since we prefer to have a notion which is
+invariant under equivalences. To make this precise we study exactly
+which categories fibred in groupoids are equivalent to categories
+fibred in sets.
+
+
+
+
+
+
+
+
+
+\section{Categories fibred in setoids}
+\label{section-fibred-in-setoids}
+
+\begin{definition}
+\label{definition-setoid}
+Let us call a category a {\it setoid}\footnote{A set on steroids!?}
+if it is a groupoid where every object
+has exactly one automorphism: the identity.
+\end{definition}
+
+\noindent
+If $C$ is a set with an equivalence relation $\sim$, then we can make a setoid
+$\mathcal{C}$ as follows: $\Ob(\mathcal{C}) = C$ and
+$\Mor_\mathcal{C}(x, y) = \emptyset$ unless $x \sim y$ in which
+case we set $\Mor_\mathcal{C}(x, y) = \{1\}$. Transitivity of
+$\sim$ means that we can compose morphisms. Conversely, any setoid
+defines an equivalence relation on its objects (isomorphism)
+such that one recovers the category (up to unique isomorphism -- not
+merely equivalence) by the procedure just described.
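+
+\medskip\noindent
+For instance (an illustration added here, not part of the original text),
+take $C = \mathbf{Z}$ with $x \sim y$ if and only if $n \mid x - y$ for
+some fixed $n \geq 1$. The resulting setoid $\mathcal{C}$ has
+$$
+\Ob(\mathcal{C}) = \mathbf{Z},
+\qquad
+\Mor_\mathcal{C}(x, y) =
+\begin{cases}
+\{1\} & \text{if } n \mid x - y, \\
+\emptyset & \text{else,}
+\end{cases}
+$$
+and the discrete category equivalent to it has object set
+$\mathbf{Z}/n\mathbf{Z}$.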
+
+\medskip\noindent
+Discrete categories are setoids. For any setoid $\mathcal{C}$ there is a
+canonical procedure to make a discrete category equivalent to it, namely
+one replaces $\Ob(\mathcal{C})$ by the set of isomorphism
+classes (and adds identity morphisms). In terms of sets endowed
+with an equivalence relation this corresponds to taking the quotient
+by the equivalence relation.
+
+\begin{definition}
+\label{definition-category-fibred-setoids}
+Let $\mathcal{C}$ be a category. A {\it category fibred in setoids}
+is a category fibred in groupoids all of whose fibre categories are
+setoids.
+\end{definition}
+
+\noindent
+Below we will clarify the relationship between categories fibred in setoids
+and categories fibred in sets.
+
+\begin{definition}
+\label{definition-categories-fibred-in-setoids-over-C}
+Let $\mathcal{C}$ be a category.
+The {\it $2$-category of categories fibred in setoids over $\mathcal{C}$}
+is the sub $2$-category of the category of categories fibred in groupoids
+over $\mathcal{C}$ (see
+Definition \ref{definition-categories-fibred-in-groupoids-over-C})
+defined as follows:
+\begin{enumerate}
+\item Its objects will be categories
+$p : \mathcal{S} \to \mathcal{C}$ fibred in setoids.
+\item Its $1$-morphisms $(\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be functors $G : \mathcal{S} \to \mathcal{S}'$ such that
+$p' \circ G = p$ (since every morphism is strongly cartesian,
+$G$ automatically preserves them).
+\item Its $2$-morphisms $t : G \to H$ for
+$G, H : (\mathcal{S}, p) \to (\mathcal{S}', p')$
+will be morphisms of functors
+such that $p'(t_x) = \text{id}_{p(x)}$
+for all $x \in \Ob(\mathcal{S})$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that every $2$-morphism is automatically an isomorphism.
+Hence this $2$-category is actually a $(2, 1)$-category.
+Here is the obligatory lemma on the existence of $2$-fibre products.
+
+\begin{lemma}
+\label{lemma-2-product-categories-fibred-setoids}
+Let $\mathcal{C}$ be a category.
+The 2-category of categories fibred in setoids over $\mathcal{C}$
+has 2-fibre products. More precisely, the 2-fibre product described in
+Lemma \ref{lemma-2-product-categories-over-C} returns a category fibred in
+setoids if one starts out with such.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-setoid-fibres}
+Let $\mathcal{C}$ be a category. Let $\mathcal{S}$ be a category
+over $\mathcal{C}$.
+\begin{enumerate}
+\item If $\mathcal{S} \to \mathcal{S}'$ is an equivalence
+over $\mathcal{C}$ with $\mathcal{S}'$ fibred in sets over $\mathcal{C}$,
+then
+\begin{enumerate}
+\item $\mathcal{S}$ is fibred in setoids over $\mathcal{C}$, and
+\item for each $U \in \Ob(\mathcal{C})$ the map
+$\Ob(\mathcal{S}_U) \to \Ob(\mathcal{S}'_U)$
+identifies the target as the set of isomorphism classes of the source.
+\end{enumerate}
+\item If $p : \mathcal{S} \to \mathcal{C}$ is a category fibred in setoids,
+then there exists a category fibred in sets
+$p' : \mathcal{S}' \to \mathcal{C}$ and an equivalence
+$\text{can} : \mathcal{S} \to \mathcal{S}'$ over $\mathcal{C}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let us prove (2).
+An object of the category $\mathcal{S}'$ will be a pair $(U, \xi)$, where
+$U \in \Ob(\mathcal{C})$ and $\xi$ is an isomorphism class of objects
+of $\mathcal{S}_U$. A morphism $(U, \xi) \to (V, \psi)$ is given by a
+morphism $x \to y$, where $x \in \xi$ and $y \in \psi$. Here we identify
+two morphisms $x \to y$ and $x' \to y'$ if they induce the same morphism
+$U \to V$, and if for some choices of isomorphisms $x \to x'$ in
+$\mathcal{S}_U$ and $y \to y'$ in $\mathcal{S}_V$ the compositions
+$x \to x' \to y'$ and $x \to y \to y'$ agree. By construction there are
+surjective maps on objects and morphisms from $\mathcal{S}$ to
+$\mathcal{S}'$. We define composition of morphisms in $\mathcal{S}'$ to
+be the unique law that turns $\mathcal{S} \to \mathcal{S}'$ into a functor.
+Some details omitted.
+\end{proof}
+
+\noindent
+Thus categories fibred in setoids are exactly the categories fibred
+in groupoids which are equivalent to categories fibred in sets.
+Moreover, an equivalence of categories fibred in sets is an isomorphism
+by Lemma \ref{lemma-2-category-fibred-sets}.
+
+\begin{lemma}
+\label{lemma-2-category-fibred-setoids}
+Let $\mathcal{C}$ be a category. The construction of
+Lemma \ref{lemma-setoid-fibres}
+part (2) gives a functor
+$$
+F :
+\left\{
+\begin{matrix}
+\text{the 2-category of categories}\\
+\text{fibred in setoids over }\mathcal{C}
+\end{matrix}
+\right\}
+\longrightarrow
+\left\{
+\begin{matrix}
+\text{the category of categories}\\
+\text{fibred in sets over }\mathcal{C}
+\end{matrix}
+\right\}
+$$
+(see
+Definition \ref{definition-functor-into-2-category}).
+This functor is an equivalence in the following sense:
+\begin{enumerate}
+\item for any two 1-morphisms $f, g : \mathcal{S}_1 \to \mathcal{S}_2$
+with $F(f) = F(g)$ there exists a unique 2-isomorphism $f \to g$,
+\item for any morphism $h : F(\mathcal{S}_1) \to F(\mathcal{S}_2)$
+there exists a 1-morphism $f : \mathcal{S}_1 \to \mathcal{S}_2$
+with $F(f) = h$, and
+\item any category fibred in sets $\mathcal{S}$ is equal to $F(\mathcal{S})$.
+\end{enumerate}
+In particular, defining $F_i \in \textit{PSh}(\mathcal{C})$ by the
+rule $F_i(U) = \Ob(\mathcal{S}_{i, U})/\cong$, we have
+$$
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{S}_1, \mathcal{S}_2)
+\Big/
+2\text{-isomorphism}
+=
+\Mor_{\textit{PSh}(\mathcal{C})}(F_1, F_2)
+$$
+More precisely, given any map $\phi : F_1 \to F_2$ there exists a
+$1$-morphism $f : \mathcal{S}_1 \to \mathcal{S}_2$ which induces
+$\phi$ on isomorphism classes of objects and
+which is unique up to unique $2$-isomorphism.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-2-category-fibred-sets}
+the target of $F$ is a category hence the assertion makes sense.
+The construction of
+Lemma \ref{lemma-setoid-fibres} part (2)
+assigns to $\mathcal{S}$ the category fibred in sets whose value over
+$U$ is the set of isomorphism classes in $\mathcal{S}_U$. Hence it
+is clear that it defines a functor as indicated.
+Let $f, g : \mathcal{S}_1 \to \mathcal{S}_2$
+with $F(f) = F(g)$ be as in (1). For each object $U$ of $\mathcal{C}$
+and each object $x$ of $\mathcal{S}_{1, U}$ we see that $f(x) \cong g(x)$
+by assumption. As $\mathcal{S}_2$ is fibred in setoids there exists
+a unique isomorphism $t_x : f(x) \to g(x)$ in $\mathcal{S}_{2, U}$.
+Clearly the rule $x \mapsto t_x$ gives the desired $2$-isomorphism
+$f \to g$. We omit the proofs of (2) and (3).
+To see the final assertion use
+Lemma \ref{lemma-2-category-fibred-sets}
+to see that the right hand side is equal to
+$\Mor_{\textit{Cat}/\mathcal{C}}(F(\mathcal{S}_1), F(\mathcal{S}_2))$
+and apply (1) and (2) above.
+\end{proof}
+
+\noindent
+Here is another characterization of categories fibred in setoids
+among all categories fibred in groupoids.
+
+\begin{lemma}
+\label{lemma-characterize-fibred-setoids-inertia}
+Let $\mathcal{C}$ be a category.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category fibred in groupoids.
+The following are equivalent:
+\begin{enumerate}
+\item $p : \mathcal{S} \to \mathcal{C}$ is a category fibred in setoids, and
+\item the canonical $1$-morphism $\mathcal{I}_\mathcal{S} \to \mathcal{S}$,
+see (\ref{equation-inertia-structure-map}), is an equivalence (of categories
+over $\mathcal{C}$).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2). The category $\mathcal{I}_\mathcal{S}$ has objects
+$(x, \alpha)$ where $x \in \mathcal{S}$, say with $p(x) = U$, and
+$\alpha : x \to x$ is a morphism in $\mathcal{S}_U$. Hence if
+$\mathcal{I}_\mathcal{S} \to \mathcal{S}$ is an equivalence over $\mathcal{C}$
+then any two objects $(x, \alpha)$, $(x, \alpha')$ are isomorphic
+in the fibre category of $\mathcal{I}_\mathcal{S}$ over $U$.
+Looking at the definition of morphisms in $\mathcal{I}_\mathcal{S}$
+we conclude that $\alpha$, $\alpha'$ are conjugate in the group
+of automorphisms of $x$. Hence taking $\alpha' = \text{id}_x$ we conclude
+that every automorphism of $x$ is equal to the identity.
+Since $\mathcal{S} \to \mathcal{C}$ is fibred in groupoids this
+implies that $\mathcal{S} \to \mathcal{C}$ is fibred in setoids.
+We omit the proof of (1) $\Rightarrow$ (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-category-fibred-setoids-presheaves-products}
+Let $\mathcal{C}$ be a category.
+The construction of
+Lemma \ref{lemma-2-category-fibred-setoids}
+which associates to a category fibred in setoids a presheaf is
+compatible with products, in the sense that the presheaf associated
+to a $2$-fibre product $\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$
+is the fibre product of the presheaves associated to
+$\mathcal{X}, \mathcal{Y}, \mathcal{Z}$.
+\end{lemma}
+
+\begin{proof}
+Let $U \in \Ob(\mathcal{C})$. The lemma just says that
+$$
+\Ob((\mathcal{X} \times_\mathcal{Y} \mathcal{Z})_U)/\!\cong
+\quad \text{equals} \quad
+\Ob(\mathcal{X}_U)/\!\cong
+\ \times_{\Ob(\mathcal{Y}_U)/\!\cong}
+\ \Ob(\mathcal{Z}_U)/\!\cong
+$$
+the proof of which we omit. (But note that this would not be true
+in general if the category $\mathcal{Y}_U$ is not a setoid.)
+\end{proof}
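+
+\noindent
+To see that the setoid hypothesis cannot be dropped, here is a standard
+counterexample (added for illustration): take $\mathcal{C}$ to be the
+trivial category with one object and one morphism, so that categories
+fibred in groupoids over $\mathcal{C}$ are just groupoids. Let
+$\mathcal{Y}$ be the groupoid with one object whose automorphism group
+is a nontrivial group $G$, and let $\mathcal{X} = \mathcal{Z}$ be the
+trivial groupoid mapping to $\mathcal{Y}$. Then
+$$
+\Ob(\mathcal{X} \times_\mathcal{Y} \mathcal{Z})/\!\cong \ = G
+\quad \text{whereas} \quad
+\Ob(\mathcal{X})/\!\cong
+\ \times_{\Ob(\mathcal{Y})/\!\cong}
+\ \Ob(\mathcal{Z})/\!\cong
+\ = \{\ast\}.
+$$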
+
+
+
+
+
+
+
+
+
+
+\section{Representable categories fibred in groupoids}
+\label{section-representable-fibred-groupoids}
+
+\noindent
+Here is our definition of a representable category fibred in groupoids.
+As promised this is invariant under equivalences.
+
+\begin{definition}
+\label{definition-representable-fibred-category}
+Let $\mathcal{C}$ be a category.
+A category fibred in groupoids $p : \mathcal{S} \to \mathcal{C}$ is
+called {\it representable} if there exist an object
+$X$ of $\mathcal{C}$ and an equivalence $j : \mathcal{S} \to \mathcal{C}/X$
+(in the $2$-category of groupoids over $\mathcal{C}$).
+\end{definition}
+
+\noindent
+The usual abuse of notation is to say that {\it $X$ represents $\mathcal{S}$}
+and not mention the equivalence $j$. We spell out what this entails.
+
+\begin{lemma}
+\label{lemma-characterize-representable-fibred-category}
+Let $\mathcal{C}$ be a category.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category fibred in groupoids.
+\begin{enumerate}
+\item $\mathcal{S}$ is representable if and only if
+the following conditions are satisfied:
+\begin{enumerate}
+\item $\mathcal{S}$ is fibred in setoids, and
+\item the presheaf $U \mapsto \Ob(\mathcal{S}_U)/\cong$ is
+representable.
+\end{enumerate}
+\item If $\mathcal{S}$ is representable, then the pair $(X, j)$, where $j$ is the
+equivalence $j : \mathcal{S} \to \mathcal{C}/X$, is uniquely determined
+up to isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first assertion follows immediately from
+Lemma \ref{lemma-setoid-fibres}.
+For the second, suppose that $j' : \mathcal{S} \to \mathcal{C}/X'$ is
+a second such pair. Choose a $1$-morphism
+$t' : \mathcal{C}/X' \to \mathcal{S}$ such that
+$j' \circ t' \cong \text{id}_{\mathcal{C}/X'}$ and
+$t' \circ j' \cong \text{id}_\mathcal{S}$. Then
+$j \circ t' : \mathcal{C}/X' \to \mathcal{C}/X$ is an equivalence.
+Hence it is an isomorphism, see Lemma \ref{lemma-2-category-fibred-sets}.
+Hence by the Yoneda Lemma \ref{lemma-yoneda} (via
+Example \ref{example-fibred-category-from-functor-of-points} for example)
+it is given by an isomorphism $X' \to X$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphisms-representable-fibred-categories}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids
+over $\mathcal{C}$. Assume that $\mathcal{X}$, $\mathcal{Y}$
+are representable by objects $X$, $Y$ of $\mathcal{C}$.
+Then
+$$
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{X}, \mathcal{Y})
+\Big/
+2\text{-isomorphism}
+=
+\Mor_\mathcal{C}(X, Y)
+$$
+More precisely, given $\phi : X \to Y$ there exists a
+$1$-morphism $f : \mathcal{X} \to \mathcal{Y}$ which induces
+$\phi$ on isomorphism classes of objects and
+which is unique up to unique $2$-isomorphism.
+\end{lemma}
+
+\begin{proof}
+By
+Example \ref{example-fibred-category-from-functor-of-points}
+we have $\mathcal{C}/X = \mathcal{S}_{h_X}$ and
+$\mathcal{C}/Y = \mathcal{S}_{h_Y}$. By
+Lemma \ref{lemma-2-category-fibred-setoids}
+we have
+$$
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{X}, \mathcal{Y})
+\Big/
+2\text{-isomorphism}
+=
+\Mor_{\textit{PSh}(\mathcal{C})}(h_X, h_Y)
+$$
+By the Yoneda
+Lemma \ref{lemma-yoneda}
+we have $\Mor_{\textit{PSh}(\mathcal{C})}(h_X, h_Y)
+= \Mor_\mathcal{C}(X, Y)$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{The 2-Yoneda lemma}
+\label{section-2-yoneda}
+
+\noindent
+Let $\mathcal{C}$ be a category. The $2$-category of fibred categories
+over $\mathcal{C}$ was constructed/defined in
+Definition \ref{definition-fibred-categories-over-C}.
+If $\mathcal{S}$, $\mathcal{S}'$ are fibred categories
+over $\mathcal{C}$ then
+$$
+\Mor_{\textit{Fib}/\mathcal{C}}(\mathcal{S}, \mathcal{S}')
+$$
+denotes the category of $1$-morphisms in this $2$-category.
+Here is the $2$-category analogue of the Yoneda lemma
+in the setting of fibred categories.
+
+\begin{lemma}[2-Yoneda lemma for fibred categories]
+\label{lemma-yoneda-2category-fibred}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{S} \to \mathcal{C}$ be a fibred category over $\mathcal{C}$.
+Let $U \in \Ob(\mathcal{C})$.
+The functor
+$$
+\Mor_{\textit{Fib}/\mathcal{C}}(\mathcal{C}/U, \mathcal{S})
+\longrightarrow
+\mathcal{S}_U
+$$
+given by $G \mapsto G(\text{id}_U)$ is an equivalence.
+\end{lemma}
+
+\begin{proof}
+Make a choice of pullbacks for $\mathcal{S}$
+(see Definition \ref{definition-pullback-functor-fibred-category}).
+We define a functor
+$$
+\mathcal{S}_U
+\longrightarrow
+\Mor_{\textit{Fib}/\mathcal{C}}(\mathcal{C}/U, \mathcal{S})
+$$
+as follows. Given
+$x \in \Ob(\mathcal{S}_U)$
+the associated functor is
+\begin{enumerate}
+\item on objects: $(f : V \to U) \mapsto f^*x$, and
+\item on morphisms: the arrow $(g : V'/U \to V/U)$ maps to
+the composition
+$$
+(f \circ g)^*x \xrightarrow{(\alpha_{g, f})_x} g^*f^*x \rightarrow f^*x
+$$
+where $\alpha_{g, f}$ is as in Lemma \ref{lemma-fibred}.
+\end{enumerate}
+We omit the verification that this is an inverse to the functor
+of the lemma.
+\end{proof}
+
+\noindent
+Let $\mathcal{C}$ be a category. The $2$-category of categories
+fibred in groupoids over $\mathcal{C}$ is a
+``full'' sub $2$-category of the $2$-category of categories over
+$\mathcal{C}$ (see
+Definition \ref{definition-categories-fibred-in-groupoids-over-C}).
+Hence if $\mathcal{S}$, $\mathcal{S}'$ are fibred in groupoids
+over $\mathcal{C}$ then
+$$
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{S}, \mathcal{S}')
+$$
+denotes the category of $1$-morphisms in this $2$-category
+(see Definition \ref{definition-categories-over-C}).
+These are all groupoids, see remarks following
+Definition \ref{definition-categories-fibred-in-groupoids-over-C}.
+Here is the $2$-category analogue of the Yoneda lemma.
+
+\begin{lemma}[2-Yoneda lemma]
+\label{lemma-yoneda-2category}
+Let $\mathcal{S}\to \mathcal{C}$ be fibred in groupoids.
+Let $U \in \Ob(\mathcal{C})$.
+The functor
+$$
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{C}/U, \mathcal{S})
+\longrightarrow
+\mathcal{S}_U
+$$
+given by $G \mapsto G(\text{id}_U)$ is an equivalence.
+\end{lemma}
+
+\begin{proof}
+Make a choice of pullbacks for $\mathcal{S}$
+(see Definition \ref{definition-pullback-functor-fibred-category}).
+We define a functor
+$$
+\mathcal{S}_U
+\longrightarrow
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{C}/U, \mathcal{S})
+$$
+as follows. Given
+$x \in \Ob(\mathcal{S}_U)$
+the associated functor is
+\begin{enumerate}
+\item on objects: $(f : V \to U) \mapsto f^*x$, and
+\item on morphisms: the arrow $(g : V'/U \to V/U)$ maps to
+the composition
+$$
+(f \circ g)^*x \xrightarrow{(\alpha_{g, f})_x} g^*f^*x \rightarrow f^*x
+$$
+where $\alpha_{g, f}$ is as in Lemma \ref{lemma-fibred-groupoids}.
+\end{enumerate}
+We omit the verification that this is an inverse to the functor
+of the lemma.
+\end{proof}
+
+\begin{remark}
+\label{remark-alternative-fibred-groupoids-strict}
+We can use the $2$-Yoneda lemma to give an alternative proof of
+Lemma \ref{lemma-fibred-groupoids-strict}.
+Let $p : \mathcal{S} \to \mathcal{C}$ be a category fibred in groupoids.
+We define a contravariant functor $F$ from $\mathcal{C}$ to the
+category of groupoids as follows: for $U\in \Ob(\mathcal{C})$
+let
+$$
+F(U) = \Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{C}/U, \mathcal{S}).
+$$
+If $f : U \to V$ is a morphism of $\mathcal{C}$, then composition with the
+induced functor $\mathcal{C}/U \to \mathcal{C}/V$ gives the morphism
+$F(f) : F(V) \to F(U)$. Clearly $F$ is a functor.
+Let $\mathcal{S}'$ be the associated category fibred in groupoids from
+Example \ref{example-functor-groupoids}.
+There is an obvious functor $G : \mathcal{S}' \to \mathcal{S}$
+over $\mathcal{C}$ given by taking the pair $(U, x)$, where
+$U \in \Ob(\mathcal{C})$ and $x \in F(U)$, to
+$x(\text{id}_U) \in \mathcal{S}$. Now
+Lemma \ref{lemma-yoneda-2category}
+implies that for each $U$,
+$$
+G_U : \mathcal{S}'_U = F(U)=
+\Mor_{\textit{Cat}/\mathcal{C}}(\mathcal{C}/U, \mathcal{S})
+\to
+\mathcal{S}_U
+$$
+is an equivalence, and thus $G$ is an equivalence between $\mathcal{S}'$ and
+$\mathcal{S}$ by Lemma \ref{lemma-equivalence-fibred-categories}.
+\end{remark}
+
+
+
+
+
+\section{Representable 1-morphisms}
+\label{section-representable-1-morphisms}
+
+\noindent
+Let $\mathcal{C}$ be a category.
+In this section we explain what it means for a $1$-morphism
+between categories fibred in groupoids over $\mathcal{C}$
+to be representable.
+
+\medskip\noindent
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids
+over $\mathcal{C}$.
+Let $U \in \Ob(\mathcal{C})$.
+Let $F : \mathcal{X} \to \mathcal{Y}$ and
+$G : \mathcal{C}/U \to \mathcal{Y}$ be $1$-morphisms of categories
+fibred in groupoids over $\mathcal{C}$.
+We want to describe
+the $2$-fibre product
+$$
+\xymatrix{
+(\mathcal{C}/U) \times_\mathcal{Y} \mathcal{X} \ar[r] \ar[d] &
+\mathcal{X} \ar[d]^F \\
+\mathcal{C}/U \ar[r]^G &
+\mathcal{Y}
+}
+$$
+Let $y = G(\text{id}_U) \in \mathcal{Y}_U$.
+Make a choice of pullbacks for $\mathcal{Y}$
+(see Definition \ref{definition-pullback-functor-fibred-category}).
+Then $G$ is isomorphic to the functor $(f : V \to U) \mapsto f^*y$,
+see Lemma \ref{lemma-yoneda-2category} and its proof.
+We may think of an object of
+$(\mathcal{C}/U) \times_\mathcal{Y} \mathcal{X}$
+as a quadruple $(V, f : V \to U, x, \phi)$, see
+Lemma \ref{lemma-2-product-categories-over-C}.
+Using the description of $G$ above we may think of $\phi$ as
+an isomorphism $\phi : f^*y \to F(x)$ in $\mathcal{Y}_V$.
+
+\begin{lemma}
+\label{lemma-identify-fibre-product}
+In the situation above the fibre category of
+$(\mathcal{C}/U) \times_\mathcal{Y} \mathcal{X}$ over
+an object $f : V \to U$ of $\mathcal{C}/U$
+is the category described as follows:
+\begin{enumerate}
+\item objects are pairs $(x, \phi)$,
+where $x \in \Ob(\mathcal{X}_V)$, and
+$\phi : f^*y \to F(x)$ is a morphism in $\mathcal{Y}_V$,
+\item the set of morphisms between $(x, \phi)$ and $(x', \phi')$
+is the set of morphisms $\psi : x \to x'$ in $\mathcal{X}_V$
+such that $F(\psi) = \phi' \circ \phi^{-1}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+See discussion above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-prepare-representable-map-stack-in-groupoids}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids
+over $\mathcal{C}$.
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism.
+Let $G : \mathcal{C}/U \to \mathcal{Y}$ be a $1$-morphism.
+Then
+$$
+(\mathcal{C}/U) \times_\mathcal{Y} \mathcal{X}
+\longrightarrow
+\mathcal{C}/U
+$$
+is a category fibred in groupoids.
+\end{lemma}
+
+\begin{proof}
+We have already seen in Lemma \ref{lemma-2-product-fibred-categories}
+that the composition
+$$
+(\mathcal{C}/U) \times_\mathcal{Y} \mathcal{X}
+\longrightarrow
+\mathcal{C}/U
+\longrightarrow
+\mathcal{C}
+$$
+is a category fibred in groupoids. Then the lemma follows from
+Lemma \ref{lemma-cute-groupoids}.
+\end{proof}
+
+\begin{definition}
+\label{definition-representable-map-categories-fibred-in-groupoids}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids
+over $\mathcal{C}$.
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism.
+We say $F$ is {\it representable}, or that
+{\it $\mathcal{X}$ is relatively representable over $\mathcal{Y}$},
+if for every $U \in \Ob(\mathcal{C})$
+and any $G : \mathcal{C}/U \to \mathcal{Y}$
+the category fibred in groupoids
+$$
+(\mathcal{C}/U) \times_\mathcal{Y} \mathcal{X}
+\longrightarrow
+\mathcal{C}/U
+$$
+is representable.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-spell-out-representable-map-stack-in-groupoids}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids
+over $\mathcal{C}$.
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism.
+If $F$ is representable then every one of the functors
+$$
+F_U : \mathcal{X}_U \longrightarrow \mathcal{Y}_U
+$$
+between fibre categories is faithful.
+\end{lemma}
+
+\begin{proof}
+Clear from the description of fibre categories in
+Lemma \ref{lemma-identify-fibre-product} and the characterization
+of representable fibred categories in
+Lemma \ref{lemma-characterize-representable-fibred-category}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-criterion-representable-map-stack-in-groupoids}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids
+over $\mathcal{C}$.
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism.
+Make a choice of pullbacks for $\mathcal{Y}$.
+Assume
+\begin{enumerate}
+\item each functor $F_U : \mathcal{X}_U \longrightarrow \mathcal{Y}_U$
+between fibre categories is faithful, and
+\item for each $U$ and each $y \in \mathcal{Y}_U$ the presheaf
+$$
+(f : V \to U)
+\longmapsto
+\{(x, \phi) \mid x \in \mathcal{X}_V, \phi : f^*y \to F(x)\}/\cong
+$$
+is a representable presheaf on $\mathcal{C}/U$.
+\end{enumerate}
+Then $F$ is representable.
+\end{lemma}
+
+\begin{proof}
+Clear from the description of fibre categories in
+Lemma \ref{lemma-identify-fibre-product} and the characterization
+of representable fibred categories in
+Lemma \ref{lemma-characterize-representable-fibred-category}.
+\end{proof}
+
+\noindent
+Before we state the next lemma we point out that the $2$-category
+of categories fibred in groupoids is a $(2, 1)$-category, and hence
+we know what it means to say that it has a final object (see
+Definition \ref{definition-final-object-2-category}). And it has
+a final object, namely $\text{id} : \mathcal{C} \to \mathcal{C}$.
+Thus we define {\it $2$-products} of categories fibred in groupoids
+over $\mathcal{C}$ as the $2$-fibre products
+$$
+\mathcal{X} \times \mathcal{Y} :=
+\mathcal{X} \times_\mathcal{C} \mathcal{Y}.
+$$
+With this definition in place the following lemma makes sense.
+
+\begin{lemma}
+\label{lemma-representable-diagonal-groupoids}
+Let $\mathcal{C}$ be a category.
+Let $\mathcal{S} \to \mathcal{C}$ be a category fibred in groupoids.
+Assume $\mathcal{C}$ has products of pairs of objects and fibre products.
+The following are equivalent:
+\begin{enumerate}
+\item The diagonal $\mathcal{S} \to \mathcal{S} \times \mathcal{S}$
+is representable.
+\item For every $U$ in $\mathcal{C}$, any $G : \mathcal{C}/U \to \mathcal{S}$
+is representable.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Suppose the diagonal is representable, and let $U, G$ be given.
+Consider any $V \in \Ob(\mathcal{C})$ and any
+$G' : \mathcal{C}/V \to \mathcal{S}$.
+Note that $\mathcal{C}/U \times \mathcal{C}/V = \mathcal{C}/U \times V$
+is representable. Hence the fibre product
+$$
+\xymatrix{
+(\mathcal{C}/U \times V)
+\times_{(\mathcal{S} \times \mathcal{S})}
+\mathcal{S}
+\ar[r] \ar[d] &
+\mathcal{S} \ar[d] \\
+\mathcal{C}/U \times V \ar[r]^{(G, G')} &
+\mathcal{S} \times \mathcal{S}
+}
+$$
+is representable by assumption.
+This means there exists $W \to U \times V$ in $\mathcal{C}$,
+such that
+$$
+\xymatrix{
+\mathcal{C}/W \ar[d] \ar[r] & \mathcal{S} \ar[d] \\
+\mathcal{C}/U \times \mathcal{C}/V \ar[r] & \mathcal{S} \times \mathcal{S}
+}
+$$
+is cartesian. This implies that
+$\mathcal{C}/W \cong \mathcal{C}/U \times_\mathcal{S} \mathcal{C}/V$
+(see Lemma \ref{lemma-diagonal-1})
+as desired.
+
+\medskip\noindent
+Assume (2) holds. Consider any $V \in \Ob(\mathcal{C})$
+and any $(G, G') : \mathcal{C}/V \to \mathcal{S} \times \mathcal{S}$.
+We have to show that
+$\mathcal{C}/V \times_{\mathcal{S} \times \mathcal{S}} \mathcal{S}$
+is representable. What we know is that
+$\mathcal{C}/V \times_{G, \mathcal{S}, G'} \mathcal{C}/V$
+is representable, say by $a : W \to V$ in $\mathcal{C}/V$.
+The equivalence
+$$
+\mathcal{C}/W \to \mathcal{C}/V \times_{G, \mathcal{S}, G'} \mathcal{C}/V
+$$
+followed by the second projection to $\mathcal{C}/V$ gives a
+second morphism $a' : W \to V$. Consider
+$W' = W \times_{(a, a'), V \times V} V$.
+There exists an equivalence
+$$
+\mathcal{C}/W' \cong
+\mathcal{C}/V \times_{\mathcal{S} \times \mathcal{S}} \mathcal{S}
+$$
+namely
+\begin{eqnarray*}
+\mathcal{C}/W' & \cong &
+\mathcal{C}/W \times_{(\mathcal{C}/V \times \mathcal{C}/V)} \mathcal{C}/V \\
+& \cong &
+\left(\mathcal{C}/V \times_{(G, \mathcal{S}, G')} \mathcal{C}/V\right)
+\times_{(\mathcal{C}/V \times \mathcal{C}/V)} \mathcal{C}/V \\
+& \cong &
+\mathcal{C}/V \times_{(\mathcal{S} \times \mathcal{S})} \mathcal{S}
+\end{eqnarray*}
+(for the last isomorphism see Lemma \ref{lemma-diagonal-2})
+which proves the lemma.
+\end{proof}
+
+\noindent
+{\bf Bibliographic notes:}
+Parts of this have been taken from Vistoli's notes \cite{Vis2}.
+
+
+
+
+
+\section{Monoidal categories}
+\label{section-monoidal}
+
+\noindent
+Let $\mathcal{C}$ be a category. Suppose we are given a functor
+$$
+\otimes : \mathcal{C} \times \mathcal{C} \longrightarrow \mathcal{C}
+$$
+We often want to know whether $\otimes$ is associative
+and whether there is a unit for $\otimes$.
+
+\medskip\noindent
+An {\it associativity constraint} for $(\mathcal{C}, \otimes)$ is
+a functorial isomorphism
+$$
+\phi_{X, Y, Z} : X \otimes (Y \otimes Z) \to (X \otimes Y) \otimes Z
+$$
+such that for all objects $X, Y, Z, W$ the diagram
+$$
+\xymatrix{
+X \otimes (Y \otimes ( Z \otimes W)) \ar[r] \ar[d] &
+(X \otimes Y) \otimes (Z \otimes W) \ar[r] &
+((X \otimes Y) \otimes Z) \otimes W \\
+X \otimes ((Y \otimes Z) \otimes W) \ar[rr] & &
+(X \otimes (Y \otimes Z)) \otimes W \ar[u]
+}
+$$
+is commutative where every arrow is determined by a suitable application
+of $\phi$ and functoriality of $\otimes$. Given an associativity constraint
+there are well defined functors
+$$
+\mathcal{C} \times \ldots \times \mathcal{C} \longrightarrow \mathcal{C},
+\quad
+(X_1, \ldots, X_n) \longmapsto X_1 \otimes \ldots \otimes X_n
+$$
+for all $n \geq 1$.
+
+\medskip\noindent
+Let $\phi$ be an associativity constraint. A {\it unit} for
+$(\mathcal{C}, \otimes, \phi)$ is an object $\mathbf{1}$
+of $\mathcal{C}$ together with functorial isomorphisms
+$$
+\mathbf{1} \otimes X \to X
+\quad\text{and}\quad
+X \otimes \mathbf{1} \to X
+$$
+such that for all objects $X, Y$ the diagram
+$$
+\xymatrix{
+X \otimes (\mathbf{1} \otimes Y) \ar[rr]_\phi \ar[rd] & &
+(X \otimes \mathbf{1}) \otimes Y \ar[ld] \\
+& X \otimes Y
+}
+$$
+is commutative where the diagonal arrows are given by the isomorphisms
+introduced above.
+
+\medskip\noindent
+An equivalent definition would be that a unit is a pair
+$(\mathbf{1}, 1)$ where $\mathbf{1}$ is an object of $\mathcal{C}$ and
+$1 : \mathbf{1} \otimes \mathbf{1} \to \mathbf{1}$
+is an isomorphism such that the functors $L : X \mapsto \mathbf{1} \otimes X$
+and $R : X \mapsto X \otimes \mathbf{1}$ are equivalences.
+Certainly, given a unit as above we get the isomorphism
+$1 : \mathbf{1} \otimes \mathbf{1} \to \mathbf{1}$ for free
+and $L$ and $R$ are equivalences as they are isomorphic to the
+identity functor. Conversely, given $(\mathbf{1}, 1)$ such that
+$L$ and $R$ are equivalences, we obtain functorial isomorphisms
+$l : \mathbf{1} \otimes X \to X$ and $r : X \otimes \mathbf{1} \to X$
+characterized by $L(l) = 1 \otimes \text{id}_X$ and
+$R(r) = \text{id}_X \otimes 1$. Then we can use $r$ and $l$
+in the notion of unit as above.
+
+\medskip\noindent
+A unit is unique up to unique isomorphism if it exists (exercise).
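+Here is a sketch: if $(\mathbf{1}, l, r)$ and $(\mathbf{1}', l', r')$ are
+both units, then applying the unit constraints to
+$\mathbf{1} \otimes \mathbf{1}'$ gives isomorphisms
+$$
+\mathbf{1}
+\longleftarrow
+\mathbf{1} \otimes \mathbf{1}'
+\longrightarrow
+\mathbf{1}'
+$$
+(the first using $r'$, the second using $l$), whence an isomorphism
+$\mathbf{1} \cong \mathbf{1}'$; checking that it is the unique isomorphism
+compatible with the unit constraints is part of the exercise.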
+
+\begin{definition}
+\label{definition-monoidal-category}
+A triple $(\mathcal{C}, \otimes, \phi)$ where $\mathcal{C}$ is a category,
+$\otimes : \mathcal{C} \times \mathcal{C} \to \mathcal{C}$ is a functor,
+and $\phi$ is an associativity constraint is called a {\it monoidal category}
+if there exists a unit $\mathbf{1}$.
+\end{definition}
+
+\noindent
+We always write $\mathbf{1}$ to denote a unit of a monoidal category; as
+it is determined up to unique isomorphism there is no harm in choosing one.
+From now on we no longer write the brackets when taking tensor
+products in monoidal categories and we always identify
+$X \otimes \mathbf{1}$ and $\mathbf{1} \otimes X$ with $X$.
+Moreover, we will say ``let $\mathcal{C}$ be a monoidal category''
+with $\otimes, \phi, \mathbf{1}$ understood.
+
+\begin{definition}
+\label{definition-functor-monoidal-categories}
+Let $\mathcal{C}$ and $\mathcal{C}'$ be monoidal categories.
+A {\it functor of monoidal categories} $F : \mathcal{C} \to \mathcal{C}'$
+is given by a functor $F$ as indicated and an isomorphism
+$$
+F(X) \otimes F(Y) \to F(X \otimes Y)
+$$
+functorial in $X$ and $Y$
+such that for all objects $X$, $Y$, and $Z$ the diagram
+$$
+\xymatrix{
+F(X) \otimes (F(Y) \otimes F(Z)) \ar[r] \ar[d] &
+F(X) \otimes F(Y \otimes Z) \ar[r] &
+F(X \otimes (Y \otimes Z)) \ar[d] \\
+(F(X) \otimes F(Y)) \otimes F(Z) \ar[r] &
+F(X \otimes Y) \otimes F(Z) \ar[r] &
+F((X \otimes Y) \otimes Z)
+}
+$$
+commutes and such that $F(\mathbf{1})$ is a unit in $\mathcal{C}'$.
+\end{definition}
+
+\noindent
+By our conventions about units, we may always assume
+$F(\mathbf{1}) = \mathbf{1}$ if $F$ is a functor of monoidal categories.
+As an example, if $A \to B$ is a ring homomorphism, then
+the functor $M \mapsto M \otimes_A B$ is a functor of monoidal
+categories from $\text{Mod}_A$ to $\text{Mod}_B$.
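+In this example the functoriality isomorphism required by the
+definition can be taken to be the canonical map
+$$
+(M \otimes_A B) \otimes_B (N \otimes_A B)
+\longrightarrow
+(M \otimes_A N) \otimes_A B,
+\quad
+(m \otimes b) \otimes (n \otimes b')
+\longmapsto
+(m \otimes n) \otimes bb'
+$$
+which is bijective by the usual base change property of tensor products.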
+
+\begin{lemma}
+\label{lemma-invertible}
+Let $\mathcal{C}$ be a monoidal category. Let $X$ be an object of
+$\mathcal{C}$. The following are equivalent
+\begin{enumerate}
+\item the functor $L : Y \mapsto X \otimes Y$ is an equivalence,
+\item the functor $R : Y \mapsto Y \otimes X$ is an equivalence,
+\item there exists an object $X'$ such that
+$X \otimes X' \cong X' \otimes X \cong \mathbf{1}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Choose $X'$ such that $L(X') = \mathbf{1}$, i.e.,
+$X \otimes X' \cong \mathbf{1}$. Denote $L'$ and $R'$ the functors
+corresponding to $X'$. The equation $X \otimes X' \cong \mathbf{1}$
+implies $L \circ L' \cong \text{id}$. Thus $L'$ must be the quasi-inverse
+to $L$ (which exists by assumption). Hence $L' \circ L \cong \text{id}$.
+Hence $X' \otimes X \cong \mathbf{1}$. Thus (3) holds.
+
+\medskip\noindent
+The proof of (2) $\Rightarrow$ (3) is dual to what we just said.
+
+\medskip\noindent
+Assume (3). Then it is clear that $L'$ and $L$ are quasi-inverse
+to each other and it is clear that $R'$ and $R$ are quasi-inverse
+to each other. Thus (1) and (2) hold.
+\end{proof}
+
+\begin{definition}
+\label{definition-invertible}
+Let $\mathcal{C}$ be a monoidal category. An object $X$ of $\mathcal{C}$
+is called {\it invertible} if any (or all) of the equivalent conditions of
+Lemma \ref{lemma-invertible} hold.
+\end{definition}
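+
+\noindent
+For example, in $\text{Mod}_A$ the invertible objects are the invertible
+$A$-modules, i.e., the modules $M$ such that $M \otimes_A N \cong A$
+for some $A$-module $N$; these are exactly the finite projective
+$A$-modules of rank $1$.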
+
+\noindent
+Observe that if $F : \mathcal{C} \to \mathcal{C}'$ is a functor of
+monoidal categories, then $F$ sends invertible objects to invertible
+objects.
+
+\begin{definition}
+\label{definition-dual}
+Given a monoidal category $(\mathcal{C}, \otimes, \phi)$
+and an object $X$ a {\it left dual} is an object $Y$ together with
+morphisms $\eta : \mathbf{1} \to X \otimes Y$ and
+$\epsilon : Y \otimes X \to \mathbf{1}$
+such that the diagrams
+$$
+\vcenter{
+\xymatrix{
+X \ar[rd]_1 \ar[r]_-{\eta \otimes 1} &
+X \otimes Y \otimes X \ar[d]^{1 \otimes \epsilon} \\
+& X
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+Y \ar[rd]_1 \ar[r]_-{1 \otimes \eta} &
+Y \otimes X \otimes Y \ar[d]^{\epsilon \otimes 1} \\
+& Y
+}
+}
+$$
+commute. In this situation we say that $X$ is a {\it right dual} of $Y$.
+\end{definition}
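+
+\noindent
+A standard example: in $\text{Mod}_A$, if $X = M$ is finite free with
+basis $e_1, \ldots, e_n$ and $Y = \Hom_A(M, A)$ with dual basis
+$e_1^\vee, \ldots, e_n^\vee$, then
+$$
+\eta : 1 \longmapsto \sum\nolimits_i e_i \otimes e_i^\vee
+\quad\text{and}\quad
+\epsilon : \lambda \otimes m \longmapsto \lambda(m)
+$$
+make $Y$ a left dual of $X$. The first displayed triangle translates
+into the identity $m = \sum_i e_i \, e_i^\vee(m)$ for $m \in M$.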
+
+\noindent
+Observe that if $F : \mathcal{C} \to \mathcal{C}'$ is a functor of
+monoidal categories, then $F(Y)$ is a left dual of $F(X)$ if
+$Y$ is a left dual of $X$.
+
+\begin{lemma}
+\label{lemma-left-dual}
+Let $\mathcal{C}$ be a monoidal category. If $Y$ is a left dual to $X$,
+then
+$$
+\Mor(Z' \otimes X, Z) = \Mor(Z', Z \otimes Y)
+\quad\text{and}\quad
+\Mor(Y \otimes Z', Z) = \Mor(Z', X \otimes Z)
+$$
+functorially in $Z$ and $Z'$.
+\end{lemma}
+
+\begin{proof}
+Consider the maps
+$$
+\Mor(Z' \otimes X, Z) \to
+\Mor(Z' \otimes X \otimes Y, Z \otimes Y) \to
+\Mor(Z', Z \otimes Y)
+$$
+where we use $\eta$ in the second arrow
+and the sequence of maps
+$$
+\Mor(Z', Z \otimes Y) \to
+\Mor(Z' \otimes X, Z \otimes Y \otimes X) \to
+\Mor(Z' \otimes X, Z)
+$$
+where we use $\epsilon$ in the second arrow. A straightforward calculation
+using the properties of $\eta$ and $\epsilon$
+shows that the compositions of these are mutually inverse.
+Similarly for the other equality.
+\end{proof}
+
+\begin{remark}
+\label{remark-left-dual-adjoint}
+Lemma \ref{lemma-left-dual} says in particular that $Z \mapsto Z \otimes Y$
+is the right adjoint of $Z' \mapsto Z' \otimes X$. In particular, uniqueness
+of adjoint functors guarantees that a left dual of $X$, if it exists, is
+unique up to unique isomorphism.
+Conversely, assume the functor $Z \mapsto Z \otimes Y$ is a right adjoint of
+the functor $Z' \mapsto Z' \otimes X$, i.e., we are given a bijection
+$$
+\Mor(Z' \otimes X, Z) \longrightarrow \Mor(Z', Z \otimes Y)
+$$
+functorial in both $Z$ and $Z'$. The unit of the adjunction produces
+maps
+$$
+\eta_Z : Z \to Z \otimes X \otimes Y
+$$
+functorial in $Z$ and the counit of the adjoint produces maps
+$$
+\epsilon_{Z'} : Z' \otimes Y \otimes X \to Z'
+$$
+functorial in $Z'$. In particular, we find
+$\eta = \eta_\mathbf{1} : \mathbf{1} \to X \otimes Y$ and
+$\epsilon = \epsilon_\mathbf{1} : Y \otimes X \to \mathbf{1}$.
+As an exercise in the relationship between units, counits, and
+the adjunction isomorphism, the reader can show that we have
+$$
+(\epsilon \otimes \text{id}_Y) \circ \eta_Y = \text{id}_Y
+\quad\text{and}\quad
+\epsilon_X \circ (\eta \otimes \text{id}_X) = \text{id}_X
+$$
+However, this isn't enough to show that
+$(\epsilon \otimes \text{id}_Y) \circ (\text{id}_Y \otimes \eta) =
+\text{id}_Y$ and
+$(\text{id}_X \otimes \epsilon) \circ (\eta \otimes \text{id}_X) =
+\text{id}_X$, because we don't know in general that
+$\eta_Y = \text{id}_Y \otimes \eta$ and we don't know that
+$\epsilon_X = \epsilon \otimes \text{id}_X$. For this it would suffice
+to know that our adjunction isomorphism has the following property:
+for every $W, Z, Z'$ the diagram
+$$
+\xymatrix{
+\Mor(Z' \otimes X, Z) \ar[r] \ar[d]_{\text{id}_W \otimes -} &
+\Mor(Z', Z \otimes Y) \ar[d]^{\text{id}_W \otimes -} \\
+\Mor(W \otimes Z' \otimes X, W \otimes Z) \ar[r] &
+\Mor(W \otimes Z', W \otimes Z \otimes Y)
+}
+$$
+commutes. If this holds, we will say {\it the adjunction is compatible with
+the given tensor structure}. Thus the requirement that
+$Z \mapsto Z \otimes Y$ be the right adjoint of $Z' \mapsto Z' \otimes X$
+compatible with the given tensor structure is an equivalent formulation of the
+property of being a left dual.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-tensor-dual}
+Let $\mathcal{C}$ be a monoidal category. If $Y_i$, $i = 1, 2$
+are left duals of $X_i$, $i = 1, 2$, then $Y_2 \otimes Y_1$ is
+a left dual of $X_1 \otimes X_2$.
+\end{lemma}
+
+\begin{proof}
+Follows from uniqueness of adjoints and Remark \ref{remark-left-dual-adjoint}.
+\end{proof}
+
+\noindent
+A {\it commutativity constraint} for $(\mathcal{C}, \otimes)$ is a
+functorial isomorphism
+$$
+\psi : X \otimes Y \longrightarrow Y \otimes X
+$$
+such that the composition
+$$
+X \otimes Y \xrightarrow{\psi} Y \otimes X \xrightarrow{\psi} X \otimes Y
+$$
+is the identity. We say $\psi$ is {\it compatible} with a given associativity
+constraint $\phi$ if for all objects $X, Y, Z$ the diagram
+$$
+\xymatrix{
+X \otimes (Y \otimes Z) \ar[r]_\phi \ar[d]^\psi &
+(X \otimes Y) \otimes Z \ar[r]_\psi &
+Z \otimes (X \otimes Y) \ar[d]^\phi \\
+X \otimes (Z \otimes Y) \ar[r]^\phi &
+(X \otimes Z) \otimes Y \ar[r]^\psi &
+(Z \otimes X) \otimes Y
+}
+$$
+commutes.
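+
+\medskip\noindent
+The basic example is $\text{Mod}_A$ with
+$\psi(x \otimes y) = y \otimes x$. For graded modules one often instead
+uses the Koszul sign rule
+$\psi(x \otimes y) = (-1)^{\deg(x)\deg(y)} y \otimes x$,
+which is also a commutativity constraint compatible with the obvious
+associativity constraint.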
+
+\begin{definition}
+\label{definition-symmetric-monoidal-category}
+A quadruple $(\mathcal{C}, \otimes, \phi, \psi)$ where
+$\mathcal{C}$ is a category,
+$\otimes : \mathcal{C} \times \mathcal{C} \to \mathcal{C}$ is a functor,
+$\phi$ is an associativity constraint, and
+$\psi$ is a commutativity constraint compatible with $\phi$
+is called a {\it symmetric monoidal category} if there exists
+a unit.
+\end{definition}
+
+\noindent
+To be sure, if $(\mathcal{C}, \otimes, \phi, \psi)$ is a
+symmetric monoidal category, then $(\mathcal{C}, \otimes, \phi)$
+is a monoidal category.
+
+\begin{lemma}
+\label{lemma-dual-symmetric}
+Let $(\mathcal{C}, \otimes, \phi, \psi)$ be a symmetric monoidal category.
+Let $X$ be an object of $\mathcal{C}$ and let $Y$,
+$\eta : \mathbf{1} \to X \otimes Y$, and
+$\epsilon : Y \otimes X \to \mathbf{1}$
+be a left dual of $X$ as in Definition \ref{definition-dual}.
+Then $\eta' = \psi \circ \eta : \mathbf{1} \to Y \otimes X$
+and $\epsilon' = \epsilon \circ \psi : X \otimes Y \to \mathbf{1}$
+make $X$ into a left dual of $Y$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: pleasant exercise in the definitions.
+\end{proof}
+
+\begin{definition}
+\label{definition-functor-symmetric-monoidal-categories}
+Let $\mathcal{C}$ and $\mathcal{C}'$ be symmetric monoidal categories.
+A {\it functor of symmetric monoidal categories}
+$F : \mathcal{C} \to \mathcal{C}'$
+is given by a functor $F$ as indicated and an isomorphism
+$$
+F(X) \otimes F(Y) \to F(X \otimes Y)
+$$
+functorial in $X$ and $Y$
+such that $F$ is a functor of monoidal categories and such that
+for all objects $X$ and $Y$ the diagram
+$$
+\xymatrix{
+F(X) \otimes F(Y) \ar[r] \ar[d] &
+F(X \otimes Y) \ar[d] \\
+F(Y) \otimes F(X) \ar[r] &
+F(Y \otimes X)
+}
+$$
+commutes.
+\end{definition}
+
+\begin{remark}
+\label{remark-internal-hom-monoidal}
+Let $\mathcal{C}$ be a monoidal category. We say $\mathcal{C}$ has
+an {\it internal hom} if for every pair of objects $X, Y$ of $\mathcal{C}$
+there is an object $hom(X, Y)$ of $\mathcal{C}$ such that we have
+$$
+\Mor(X, hom(Y, Z)) = \Mor(X \otimes Y, Z)
+$$
+functorially in $X, Y, Z$. By the Yoneda lemma the bifunctor
+$(X, Y) \mapsto hom(X, Y)$ is determined up to unique isomorphism
+if it exists. Given an internal hom we obtain canonical maps
+\begin{enumerate}
+\item $hom(X, Y) \otimes X \to Y$,
+\item $hom(Y, Z) \otimes hom(X, Y) \to hom(X, Z)$,
+\item $Z \otimes hom(X, Y) \to hom(X, Z \otimes Y)$,
+\item $Y \to hom(X, Y \otimes X)$, and
+\item $hom(Y, Z) \otimes X \to hom(hom(X, Y), Z)$ in case
+$\mathcal{C}$ is symmetric monoidal.
+\end{enumerate}
+Namely, the map in (1) is the image of $\text{id}_{hom(X, Y)}$
+under $\Mor(hom(X, Y), hom(X, Y)) \to \Mor(hom(X, Y) \otimes X, Y)$.
+To construct the map in (2) by the defining property of $hom(X, Z)$
+we need to construct a map
+$$
+hom(Y, Z) \otimes hom(X, Y) \otimes X \longrightarrow Z
+$$
+and such a map exists since by (1) we have
+maps $hom(X, Y) \otimes X \to Y$ and $hom(Y, Z) \otimes Y \to Z$.
+To construct the map in (3) by the defining property of $hom(X, Z \otimes Y)$
+we need to construct a map
+$$
+Z \otimes hom(X, Y) \otimes X \to Z \otimes Y
+$$
+for which we use $\text{id}_Z \otimes a$ where
+$a$ is the map in (1). To construct the map in (4)
+we note that we already have the map
+$Y \otimes hom(X, X) \to hom(X, Y \otimes X)$ by (3).
+Thus it suffices to construct a map $\mathbf{1} \to hom(X, X)$
+and for this we take the element in $\Mor(\mathbf{1}, hom(X, X))$
+corresponding to the canonical isomorphism $\mathbf{1} \otimes X \to X$
+in $\Mor(\mathbf{1} \otimes X, X)$.
+Finally, we come to (5). By the universal property of
+$hom(hom(X, Y), Z)$ it suffices to construct a map
+$$
+hom(Y, Z) \otimes X \otimes hom(X, Y) \longrightarrow Z
+$$
+We do this by swapping the last two tensor products using the
+commutativity constraint and then using the maps
+$hom(X, Y) \otimes X \to Y$ and $hom(Y, Z) \otimes Y \to Z$.
+\end{remark}
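+
+\noindent
+The basic example is $\mathcal{C} = \text{Mod}_A$ with
+$hom(X, Y) = \Hom_A(X, Y)$; the defining property is then the usual
+tensor-hom adjunction
+$$
+\Hom_A(X, \Hom_A(Y, Z)) = \Hom_A(X \otimes_A Y, Z).
+$$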
+
+
+
+
+
+\section{Categories of dotted arrows}
+\label{section-dotted-arrows}
+
+\noindent
+We discuss certain ``categories of dotted arrows'' in $(2,1)$-categories.
+These will appear when formulating various lifting criteria for
+algebraic stacks, see for example Morphisms of Stacks,
+Section \ref{stacks-morphisms-section-valuative} and
+More on Morphisms of Stacks, Section
+\ref{stacks-more-morphisms-section-formally-smooth}.
+
+\begin{definition}
+\label{definition-dotted-arrows}
+Let $\mathcal{C}$ be a $(2,1)$-category. Consider a $2$-commutative
+solid diagram
+\begin{equation}
+\label{equation-dotted-arrows}
+\vcenter{
+\xymatrix{
+S \ar[r]_-x \ar[d]_j & X \ar[d]^f \\
+T \ar[r]^-y \ar@{..>}[ru] & Y
+}
+}
+\end{equation}
+in $\mathcal{C}$. Fix a $2$-isomorphism
+$$
+\gamma : y \circ j \rightarrow f \circ x
+$$
+witnessing the $2$-commutativity of the diagram.
+Given (\ref{equation-dotted-arrows}) and $\gamma$, a \emph{dotted arrow}
+is a triple $(a, \alpha, \beta)$ consisting of a morphism
+$a \colon T \to X$ and $2$-isomorphisms
+$\alpha : a \circ j \to x$, $\beta : y \to f \circ a$
+such that
+$\gamma = (\text{id}_f \star \alpha) \circ (\beta \star \text{id}_j)$,
+in other words such that
+$$
+\xymatrix{
+& f \circ a \circ j \ar[rd]^{\text{id}_f \star \alpha} \\
+y \circ j \ar[ru]^{\beta \star \text{id}_j} \ar[rr]^\gamma & &
+f \circ x
+}
+$$
+is commutative. A {\it morphism of dotted arrows}
+$(a, \alpha, \beta) \to (a', \alpha', \beta')$ is a
+$2$-arrow $\theta : a \to a'$ such that
+$\alpha = \alpha' \circ (\theta \star \text{id}_j)$ and
+$\beta' = (\text{id}_f \star \theta) \circ \beta$.
+\end{definition}
+
+\noindent
+In the situation of Definition \ref{definition-dotted-arrows}, there
+is an associated \emph{category of dotted arrows}.
+This category is a groupoid. It may depend on $\gamma$ in general.
+The next two lemmas say that categories of dotted arrows
+are well-behaved with respect to base change and composition for $f$.
+
+\begin{lemma}
+\label{lemma-cat-dotted-arrows-base-change}
+Let $\mathcal{C}$ be a $(2,1)$-category. Assume given a $2$-commutative diagram
+$$
+\xymatrix{
+S \ar[r]_-{x'} \ar[d]_j &
+X' \ar[d]^p \ar[r]_q &
+X \ar[d]^f \\
+T \ar[r]^-{y'} &
+Y' \ar[r]^g &
+Y
+}
+$$
+in $\mathcal{C}$, where the right square is $2$-cartesian
+with respect to a $2$-isomorphism $\phi \colon g \circ p \to f \circ q$.
+Choose a $2$-arrow
+$\gamma' : y' \circ j \to p \circ x'$. Set
+$x = q \circ x'$, $y = g \circ y'$ and let
+$\gamma : y \circ j \to f \circ x$ be the $2$-isomorphism
+$\gamma = (\phi \star \text{id}_{x'}) \circ (\text{id}_g \star \gamma')$.
+Then the category $\mathcal{D}'$ of dotted arrows
+for the left square and $\gamma'$ is equivalent to the category
+$\mathcal{D}$ of dotted
+arrows for the outer rectangle and $\gamma$.
+\end{lemma}
+
+\begin{proof}
+There is a functor $\mathcal{D}' \to \mathcal{D}$ which is
+$(a, \alpha, \beta) \mapsto (q \circ a, \text{id}_q \star \alpha,
+(\phi \star \text{id}_a) \circ (\text{id}_g \star \beta))$
+on objects and $\theta \mapsto \text{id}_q \star \theta$ on arrows.
+Checking that this functor $\mathcal{D}' \to \mathcal{D}$ is an equivalence
+follows formally from the universal property for $2$-fibre products
+as in Section \ref{section-2-fibre-products}. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cat-dotted-arrows-composition}
+Let $\mathcal{C}$ be a $(2,1)$-category. Assume given a solid $2$-commutative
+diagram
+$$
+\xymatrix{
+S \ar[r]_-x \ar[dd]_j & X \ar[d]^f \\
+& Y \ar[d]^g \\
+T \ar[r]^-z \ar@{..>}[ruu] & Z
+}
+$$
+in $\mathcal{C}$.
+Choose a $2$-isomorphism $\gamma \colon z \circ j \to g \circ f \circ x$.
+Let $\mathcal{D}$ be the category of dotted arrows for
+the outer rectangle and $\gamma$. Let $\mathcal{D}'$ be
+the category of dotted arrows for the solid square
+$$
+\xymatrix{
+S \ar[r]_-{f \circ x} \ar[d]_j & Y \ar[d]^g \\
+T \ar[r]^-z \ar@{..>}[ru] & Z
+}
+$$
+and $\gamma$. Then $\mathcal{D}$ is equivalent to
+a category $\mathcal{D}''$ which has the following property:
+there is a functor $\mathcal{D}'' \to \mathcal{D}'$ which turns $\mathcal{D}''$
+into a category fibred in groupoids over $\mathcal{D}'$ and whose fibre
+categories are isomorphic to categories of dotted arrows for certain
+solid squares of the form
+$$
+\xymatrix{
+S \ar[r]_-x \ar[d]_j & X \ar[d]^f \\
+T \ar[r]^-y \ar@{..>}[ru] & Y
+}
+$$
+and some choices of $2$-isomorphism $y \circ j \to f \circ x$.
+\end{lemma}
+
+\begin{proof}
+Construct the category $\mathcal{D}''$ whose objects are tuples
+$(a,\alpha,\beta,b,\eta)$
+where $(a,\alpha,\beta)$ is an object of $\mathcal{D}$ and
+$b \colon T \rightarrow Y$ is a $1$-morphism
+and $\eta \colon b \rightarrow f \circ a$ is a $2$-isomorphism.
+Morphisms $(a,\alpha,\beta,b,\eta) \rightarrow (a',\alpha',\beta',b',\eta')$
+in $\mathcal{D}''$
+are pairs $(\theta_1,\theta_2)$, where $\theta_1 \colon a \rightarrow a'$
+defines an arrow $(a, \alpha, \beta) \rightarrow (a', \alpha', \beta')$
+in $\mathcal{D}$ and $\theta_2 \colon b \rightarrow b'$ is a $2$-isomorphism
+with the compatibility condition
+$\eta' \circ \theta_2 = (\text{id}_f \star \theta_1) \circ \eta$.
+
+\medskip\noindent
+There is a functor $\mathcal{D}'' \rightarrow \mathcal{D}'$ which is
+$(a, \alpha, \beta, b, \eta) \mapsto
+(b, (\text{id}_f \star \alpha) \circ (\eta \star \text{id}_j),
+(\text{id}_g \star \eta^{-1}) \circ \beta)$
+on objects and $(\theta_1,\theta_2) \mapsto \theta_2$ on arrows.
+Then $\mathcal{D}'' \rightarrow \mathcal{D}'$ is fibred in groupoids.
+
+\medskip\noindent
+If $(y, \delta, \epsilon)$ is an object of $\mathcal{D}'$, write
+$\mathcal{D}_{y,\delta}$
+for the category of dotted arrows for the last displayed diagram with
+$y \circ j \rightarrow f \circ x$ given by $\delta$.
+There is a functor $\mathcal{D}_{y,\delta} \rightarrow \mathcal{D}''$ given by
+$(a, \alpha, \eta) \mapsto
+(a, \alpha, (\text{id}_g \star \eta) \circ \epsilon, y, \eta)$
+on objects and $\theta \mapsto (\theta, \text{id}_y)$ on arrows.
+This exhibits an isomorphism from $\mathcal{D}_{y,\delta}$ to
+the fibre category of $\mathcal{D}'' \rightarrow \mathcal{D}'$ over
+$(y,\delta,\epsilon)$.
+
+\medskip\noindent
+There is also a functor $\mathcal{D} \rightarrow \mathcal{D}''$ which is
+$(a,\alpha,\beta) \mapsto (a,\alpha,\beta,f \circ a, \text{id}_{f \circ a})$
+on objects and $\theta \mapsto (\theta, \text{id}_f \star \theta)$ on arrows.
+This functor is fully faithful and essentially surjective, hence an
+equivalence. Details omitted.
+\end{proof}
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/chapters.tex b/books/stacks/chapters.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ec6ef30abd1c5273ed5c013d75a11d9fbbf08d82
--- /dev/null
+++ b/books/stacks/chapters.tex
@@ -0,0 +1,154 @@
+\begin{multicols}{2}[\section{Other chapters}]
+\noindent
+Preliminaries
+\begin{enumerate}
+\item \hyperref[introduction-section-phantom]{Introduction}
+\item \hyperref[conventions-section-phantom]{Conventions}
+\item \hyperref[sets-section-phantom]{Set Theory}
+\item \hyperref[categories-section-phantom]{Categories}
+\item \hyperref[topology-section-phantom]{Topology}
+\item \hyperref[sheaves-section-phantom]{Sheaves on Spaces}
+\item \hyperref[sites-section-phantom]{Sites and Sheaves}
+\item \hyperref[stacks-section-phantom]{Stacks}
+\item \hyperref[fields-section-phantom]{Fields}
+\item \hyperref[algebra-section-phantom]{Commutative Algebra}
+\item \hyperref[brauer-section-phantom]{Brauer Groups}
+\item \hyperref[homology-section-phantom]{Homological Algebra}
+\item \hyperref[derived-section-phantom]{Derived Categories}
+\item \hyperref[simplicial-section-phantom]{Simplicial Methods}
+\item \hyperref[more-algebra-section-phantom]{More on Algebra}
+\item \hyperref[smoothing-section-phantom]{Smoothing Ring Maps}
+\item \hyperref[modules-section-phantom]{Sheaves of Modules}
+\item \hyperref[sites-modules-section-phantom]{Modules on Sites}
+\item \hyperref[injectives-section-phantom]{Injectives}
+\item \hyperref[cohomology-section-phantom]{Cohomology of Sheaves}
+\item \hyperref[sites-cohomology-section-phantom]{Cohomology on Sites}
+\item \hyperref[dga-section-phantom]{Differential Graded Algebra}
+\item \hyperref[dpa-section-phantom]{Divided Power Algebra}
+\item \hyperref[sdga-section-phantom]{Differential Graded Sheaves}
+\item \hyperref[hypercovering-section-phantom]{Hypercoverings}
+\end{enumerate}
+Schemes
+\begin{enumerate}
+\setcounter{enumi}{25}
+\item \hyperref[schemes-section-phantom]{Schemes}
+\item \hyperref[constructions-section-phantom]{Constructions of Schemes}
+\item \hyperref[properties-section-phantom]{Properties of Schemes}
+\item \hyperref[morphisms-section-phantom]{Morphisms of Schemes}
+\item \hyperref[coherent-section-phantom]{Cohomology of Schemes}
+\item \hyperref[divisors-section-phantom]{Divisors}
+\item \hyperref[limits-section-phantom]{Limits of Schemes}
+\item \hyperref[varieties-section-phantom]{Varieties}
+\item \hyperref[topologies-section-phantom]{Topologies on Schemes}
+\item \hyperref[descent-section-phantom]{Descent}
+\item \hyperref[perfect-section-phantom]{Derived Categories of Schemes}
+\item \hyperref[more-morphisms-section-phantom]{More on Morphisms}
+\item \hyperref[flat-section-phantom]{More on Flatness}
+\item \hyperref[groupoids-section-phantom]{Groupoid Schemes}
+\item \hyperref[more-groupoids-section-phantom]{More on Groupoid Schemes}
+\item \hyperref[etale-section-phantom]{\'Etale Morphisms of Schemes}
+\end{enumerate}
+Topics in Scheme Theory
+\begin{enumerate}
+\setcounter{enumi}{41}
+\item \hyperref[chow-section-phantom]{Chow Homology}
+\item \hyperref[intersection-section-phantom]{Intersection Theory}
+\item \hyperref[pic-section-phantom]{Picard Schemes of Curves}
+\item \hyperref[weil-section-phantom]{Weil Cohomology Theories}
+\item \hyperref[adequate-section-phantom]{Adequate Modules}
+\item \hyperref[dualizing-section-phantom]{Dualizing Complexes}
+\item \hyperref[duality-section-phantom]{Duality for Schemes}
+\item \hyperref[discriminant-section-phantom]{Discriminants and Differents}
+\item \hyperref[derham-section-phantom]{de Rham Cohomology}
+\item \hyperref[local-cohomology-section-phantom]{Local Cohomology}
+\item \hyperref[algebraization-section-phantom]{Algebraic and Formal Geometry}
+\item \hyperref[curves-section-phantom]{Algebraic Curves}
+\item \hyperref[resolve-section-phantom]{Resolution of Surfaces}
+\item \hyperref[models-section-phantom]{Semistable Reduction}
+\item \hyperref[functors-section-phantom]{Functors and Morphisms}
+\item \hyperref[equiv-section-phantom]{Derived Categories of Varieties}
+\item \hyperref[pione-section-phantom]{Fundamental Groups of Schemes}
+\item \hyperref[etale-cohomology-section-phantom]{\'Etale Cohomology}
+\item \hyperref[crystalline-section-phantom]{Crystalline Cohomology}
+\item \hyperref[proetale-section-phantom]{Pro-\'etale Cohomology}
+\item \hyperref[more-etale-section-phantom]{More \'Etale Cohomology}
+\item \hyperref[trace-section-phantom]{The Trace Formula}
+\end{enumerate}
+Algebraic Spaces
+\begin{enumerate}
+\setcounter{enumi}{63}
+\item \hyperref[spaces-section-phantom]{Algebraic Spaces}
+\item \hyperref[spaces-properties-section-phantom]{Properties of Algebraic Spaces}
+\item \hyperref[spaces-morphisms-section-phantom]{Morphisms of Algebraic Spaces}
+\item \hyperref[decent-spaces-section-phantom]{Decent Algebraic Spaces}
+\item \hyperref[spaces-cohomology-section-phantom]{Cohomology of Algebraic Spaces}
+\item \hyperref[spaces-limits-section-phantom]{Limits of Algebraic Spaces}
+\item \hyperref[spaces-divisors-section-phantom]{Divisors on Algebraic Spaces}
+\item \hyperref[spaces-over-fields-section-phantom]{Algebraic Spaces over Fields}
+\item \hyperref[spaces-topologies-section-phantom]{Topologies on Algebraic Spaces}
+\item \hyperref[spaces-descent-section-phantom]{Descent and Algebraic Spaces}
+\item \hyperref[spaces-perfect-section-phantom]{Derived Categories of Spaces}
+\item \hyperref[spaces-more-morphisms-section-phantom]{More on Morphisms of Spaces}
+\item \hyperref[spaces-flat-section-phantom]{Flatness on Algebraic Spaces}
+\item \hyperref[spaces-groupoids-section-phantom]{Groupoids in Algebraic Spaces}
+\item \hyperref[spaces-more-groupoids-section-phantom]{More on Groupoids in Spaces}
+\item \hyperref[bootstrap-section-phantom]{Bootstrap}
+\item \hyperref[spaces-pushouts-section-phantom]{Pushouts of Algebraic Spaces}
+\end{enumerate}
+Topics in Geometry
+\begin{enumerate}
+\setcounter{enumi}{80}
+\item \hyperref[spaces-chow-section-phantom]{Chow Groups of Spaces}
+\item \hyperref[groupoids-quotients-section-phantom]{Quotients of Groupoids}
+\item \hyperref[spaces-more-cohomology-section-phantom]{More on Cohomology of Spaces}
+\item \hyperref[spaces-simplicial-section-phantom]{Simplicial Spaces}
+\item \hyperref[spaces-duality-section-phantom]{Duality for Spaces}
+\item \hyperref[formal-spaces-section-phantom]{Formal Algebraic Spaces}
+\item \hyperref[restricted-section-phantom]{Algebraization of Formal Spaces}
+\item \hyperref[spaces-resolve-section-phantom]{Resolution of Surfaces Revisited}
+\end{enumerate}
+Deformation Theory
+\begin{enumerate}
+\setcounter{enumi}{88}
+\item \hyperref[formal-defos-section-phantom]{Formal Deformation Theory}
+\item \hyperref[defos-section-phantom]{Deformation Theory}
+\item \hyperref[cotangent-section-phantom]{The Cotangent Complex}
+\item \hyperref[examples-defos-section-phantom]{Deformation Problems}
+\end{enumerate}
+Algebraic Stacks
+\begin{enumerate}
+\setcounter{enumi}{92}
+\item \hyperref[algebraic-section-phantom]{Algebraic Stacks}
+\item \hyperref[examples-stacks-section-phantom]{Examples of Stacks}
+\item \hyperref[stacks-sheaves-section-phantom]{Sheaves on Algebraic Stacks}
+\item \hyperref[criteria-section-phantom]{Criteria for Representability}
+\item \hyperref[artin-section-phantom]{Artin's Axioms}
+\item \hyperref[quot-section-phantom]{Quot and Hilbert Spaces}
+\item \hyperref[stacks-properties-section-phantom]{Properties of Algebraic Stacks}
+\item \hyperref[stacks-morphisms-section-phantom]{Morphisms of Algebraic Stacks}
+\item \hyperref[stacks-limits-section-phantom]{Limits of Algebraic Stacks}
+\item \hyperref[stacks-cohomology-section-phantom]{Cohomology of Algebraic Stacks}
+\item \hyperref[stacks-perfect-section-phantom]{Derived Categories of Stacks}
+\item \hyperref[stacks-introduction-section-phantom]{Introducing Algebraic Stacks}
+\item \hyperref[stacks-more-morphisms-section-phantom]{More on Morphisms of Stacks}
+\item \hyperref[stacks-geometry-section-phantom]{The Geometry of Stacks}
+\end{enumerate}
+Topics in Moduli Theory
+\begin{enumerate}
+\setcounter{enumi}{106}
+\item \hyperref[moduli-section-phantom]{Moduli Stacks}
+\item \hyperref[moduli-curves-section-phantom]{Moduli of Curves}
+\end{enumerate}
+Miscellany
+\begin{enumerate}
+\setcounter{enumi}{108}
+\item \hyperref[examples-section-phantom]{Examples}
+\item \hyperref[exercises-section-phantom]{Exercises}
+\item \hyperref[guide-section-phantom]{Guide to Literature}
+\item \hyperref[desirables-section-phantom]{Desirables}
+\item \hyperref[coding-section-phantom]{Coding Style}
+\item \hyperref[obsolete-section-phantom]{Obsolete}
+\item \hyperref[fdl-section-phantom]{GNU Free Documentation License}
+\item \hyperref[index-section-phantom]{Auto Generated Index}
+\end{enumerate}
+\end{multicols}
diff --git a/books/stacks/chow.tex b/books/stacks/chow.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c0b7eab7a8d8bea0acdae94e8c7261e43f0dd58e
--- /dev/null
+++ b/books/stacks/chow.tex
@@ -0,0 +1,18863 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Chow Homology and Chern Classes}
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+
+\tableofcontents
+
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we discuss Chow homology groups and the construction
+of Chern classes of vector bundles as elements of operational
+Chow cohomology groups (everything with $\mathbf{Z}$-coefficients).
+
+\medskip\noindent
+We start this chapter by giving the shortest possible
+algebraic proof of the Key Lemma \ref{lemma-milnor-gersten-low-degree}.
+We first define the Herbrand quotient
+(Section \ref{section-periodic-complexes})
+and we compute it in some cases
+(Section \ref{section-calculation}).
+Next, we prove some simple algebra lemmas on
+existence of suitable factorizations after modifications
+(Section \ref{section-preparation-tame-symbol}).
+Using these we construct/define the tame symbol in
+Section \ref{section-tame-symbol}.
+Only the most basic properties of the tame symbol
+are needed to prove the Key Lemma, which we do
+in Section \ref{section-key-lemma}.
+
+\medskip\noindent
+Next, we introduce the basic setup we work with in the rest of this
+chapter in Section \ref{section-setup}. To make the material a little
+bit more challenging we decided to treat a somewhat more general case
+than is usually done. Namely we assume our schemes $X$ are locally of
+finite type over a fixed locally Noetherian base scheme which is universally
+catenary and is endowed with a dimension function. These assumptions suffice
+to be able to define the Chow homology groups $\CH_*(X)$ and the action of
+capping with Chern classes on them. This is an indication that we should
+be able to define these also for algebraic stacks locally of finite type
+over such a base.
+
+\medskip\noindent
+Next, we follow the first few chapters of \cite{F} in order to define
+cycles, flat pullback, proper pushforward, and rational equivalence,
+except that we have been less precise about the supports of the cycles
+involved.
+
+\medskip\noindent
+We diverge from the presentation given in \cite{F} by using the
+Key lemma mentioned above to prove a basic commutativity relation in
+Section \ref{section-key}. Using this we prove that the operation
+of intersecting with an invertible sheaf passes through rational
+equivalence and is commutative, see Section \ref{section-commutativity}.
+One more application of the Key
+lemma proves that the Gysin map of an effective Cartier divisor
+passes through rational equivalence, see Section \ref{section-gysin}.
+Having proved this, it is straightforward to define Chern
+classes of vector bundles, prove additivity, prove the splitting principle,
+introduce Chern characters, Todd classes, and state the
+Grothendieck-Riemann-Roch theorem.
+
+\medskip\noindent
+There are two appendices. In Appendix A (Section \ref{section-appendix-A})
+we discuss an alternative (longer) construction of the
+tame symbol and corresponding proof of the Key Lemma.
+Finally, in Appendix B (Section \ref{section-appendix-chow})
+we briefly discuss the relationship with $K$-theory of coherent
+sheaves and we discuss some blowup lemmas.
+We suggest the reader look at their introductions for
+more information.
+
+\medskip\noindent
+We will return to the Chow groups $\CH_*(X)$ for smooth projective varieties
+over algebraically closed fields in the next chapter. Using a moving
+lemma as in \cite{Samuel}, \cite{ChevalleyI}, and \cite{ChevalleyII}
+and Serre's Tor-formula
+(see \cite{Serre_local_algebra} or \cite{Serre_algebre_locale})
+we will define a ring structure on $\CH_*(X)$. See
+Intersection Theory, Section \ref{intersection-section-introduction} ff.
+
+
+
+
+
+
+
+
+\section{Periodic complexes and Herbrand quotients}
+\label{section-periodic-complexes}
+
+\noindent
+Of course there is a very general notion of periodic complexes.
+We can require periodicity of the maps, or periodicity of the objects.
+We will add these here as needed. For the moment we only need
+the following cases.
+
+\begin{definition}
+\label{definition-periodic-complex}
+Let $R$ be a ring.
+\begin{enumerate}
+\item A {\it $2$-periodic complex} over $R$ is given
+by a quadruple $(M, N, \varphi, \psi)$ consisting of
+$R$-modules $M$, $N$ and $R$-module maps $\varphi : M \to N$,
+$\psi : N \to M$ such that
+$$
+\xymatrix{
+\ldots \ar[r] &
+M \ar[r]^\varphi &
+N \ar[r]^\psi &
+M \ar[r]^\varphi &
+N \ar[r] & \ldots
+}
+$$
+is a complex. In this setting we define the {\it cohomology modules}
+of the complex to be the $R$-modules
+$$
+H^0(M, N, \varphi, \psi) = \Ker(\varphi)/\Im(\psi)
+\quad\text{and}\quad
+H^1(M, N, \varphi, \psi) = \Ker(\psi)/\Im(\varphi).
+$$
+We say the $2$-periodic complex is {\it exact} if the cohomology
+groups are zero.
+\item A {\it $(2, 1)$-periodic complex} over $R$ is given
+by a triple $(M, \varphi, \psi)$ consisting of an $R$-module $M$ and
+$R$-module maps $\varphi : M \to M$, $\psi : M \to M$
+such that
+$$
+\xymatrix{
+\ldots \ar[r] &
+M \ar[r]^\varphi &
+M \ar[r]^\psi &
+M \ar[r]^\varphi &
+M \ar[r] & \ldots
+}
+$$
+is a complex. Since this is a special case of a $2$-periodic complex
+we have its {\it cohomology modules} $H^0(M, \varphi, \psi)$,
+$H^1(M, \varphi, \psi)$ and a notion of exactness.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In the following we will use any result proved for $2$-periodic
+complexes without further mention for $(2, 1)$-periodic complexes.
+It is clear that the collection of $2$-periodic complexes forms a
+category with morphisms
+$(f, g) : (M, N, \varphi, \psi) \to (M', N', \varphi', \psi')$
+pairs of morphisms $f : M \to M'$ and $g : N \to N'$ such
+that $\varphi' \circ f = g \circ \varphi$ and $\psi' \circ g = f \circ \psi$.
+We obtain an abelian category, with kernels and cokernels as in
+Homology, Lemma \ref{homology-lemma-cat-chain-abelian}.
+
+\begin{definition}
+\label{definition-periodic-length}
+Let $(M, N, \varphi, \psi)$ be a $2$-periodic complex
+over a ring $R$ whose cohomology modules have finite length.
+In this case we define the {\it multiplicity} of $(M, N, \varphi, \psi)$
+to be the integer
+$$
+e_R(M, N, \varphi, \psi) =
+\text{length}_R(H^0(M, N, \varphi, \psi))
+-
+\text{length}_R(H^1(M, N, \varphi, \psi))
+$$
+In the case of a $(2, 1)$-periodic complex $(M, \varphi, \psi)$,
+we denote this by $e_R(M, \varphi, \psi)$ and we will sometimes call this
+the {\it (additive) Herbrand quotient}.
+\end{definition}
+
+\noindent
+If the cohomology groups of $(M, \varphi, \psi)$
+are finite abelian groups, then it is customary to call the
+{\it (multiplicative) Herbrand quotient}
+$$
+q(M, \varphi, \psi) =
+\frac{\# H^0(M, \varphi, \psi)}{\# H^1(M, \varphi, \psi)}
+$$
+In words: the multiplicative Herbrand quotient is the number of elements of
+$H^0$ divided by the number of elements of $H^1$. If $R$ is local and if
+the residue field of $R$ is finite with $q$ elements, then we see that
+$$
+q(M, \varphi, \psi) = q^{e_R(M, \varphi, \psi)}
+$$
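+
+\medskip\noindent
+For example, take $R = \mathbf{Z}_{(p)}$, which is local with residue
+field of $p$ elements, and consider the $(2, 1)$-periodic complex
+$(\mathbf{Z}/p^2\mathbf{Z}, 0, p)$ where the second map is multiplication
+by $p$. Then $H^0 = \Ker(0)/\Im(p) \cong \mathbf{Z}/p\mathbf{Z}$ and
+$H^1 = \Ker(p)/\Im(0) = p\mathbf{Z}/p^2\mathbf{Z} \cong
+\mathbf{Z}/p\mathbf{Z}$, so that
+$$
+e_R = 1 - 1 = 0
+\quad\text{and}\quad
+q = \frac{p}{p} = 1 = p^{e_R}
+$$
+as the displayed formula predicts.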
+
+\medskip\noindent
+An example of a $(2, 1)$-periodic complex over a ring $R$ is any triple of
+the form $(M, 0, \psi)$ where $M$ is an $R$-module and $\psi$ is an
+$R$-linear map. If the kernel and cokernel of $\psi$ have finite length,
+then we obtain
+\begin{equation}
+\label{equation-multiplicity-coker-ker}
+e_R(M, 0, \psi) = \text{length}_R(\Coker(\psi)) - \text{length}_R(\Ker(\psi))
+\end{equation}
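+
+\medskip\noindent
+For instance, if $R = k[[t]]$, $M = R$, and $\psi$ is multiplication by
+$t^n$ for some $n \geq 0$, then $\psi$ is injective with cokernel of
+length $n$, so (\ref{equation-multiplicity-coker-ker}) gives
+$$
+e_R(R, 0, t^n) = \text{length}_R(R/t^nR) = n.
+$$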
+We state and prove the obligatory lemmas on these notations.
+
+\begin{lemma}
+\label{lemma-additivity-periodic-length}
+Let $R$ be a ring. Suppose that we have a short exact sequence of
+$2$-periodic complexes
+$$
+0 \to (M_1, N_1, \varphi_1, \psi_1)
+\to (M_2, N_2, \varphi_2, \psi_2)
+\to (M_3, N_3, \varphi_3, \psi_3)
+\to 0
+$$
+If two out of three have cohomology modules of finite length so does
+the third and we have
+$$
+e_R(M_2, N_2, \varphi_2, \psi_2) =
+e_R(M_1, N_1, \varphi_1, \psi_1) +
+e_R(M_3, N_3, \varphi_3, \psi_3).
+$$
+\end{lemma}
+
+\begin{proof}
+We abbreviate $A = (M_1, N_1, \varphi_1, \psi_1)$,
+$B = (M_2, N_2, \varphi_2, \psi_2)$ and $C = (M_3, N_3, \varphi_3, \psi_3)$.
+We have a long exact cohomology sequence
+$$
+\ldots
+\to H^1(C)
+\to H^0(A)
+\to H^0(B)
+\to H^0(C)
+\to H^1(A)
+\to H^1(B)
+\to H^1(C)
+\to \ldots
+$$
+This gives a finite exact sequence
+$$
+0 \to I
+\to H^0(A)
+\to H^0(B)
+\to H^0(C)
+\to H^1(A)
+\to H^1(B)
+\to K \to 0
+$$
+with $0 \to K \to H^1(C) \to I \to 0$ a filtration. By additivity of
+the length function (Algebra, Lemma \ref{algebra-lemma-length-additive})
+we see the result.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-periodic-length}
+Let $R$ be a ring. If $(M, N, \varphi, \psi)$ is a $2$-periodic complex
+such that $M$, $N$ have finite length, then
+$e_R(M, N, \varphi, \psi) = \text{length}_R(M) - \text{length}_R(N)$.
+In particular, if $(M, \varphi, \psi)$ is a $(2, 1)$-periodic complex
+such that $M$ has finite length, then
+$e_R(M, \varphi, \psi) = 0$.
+\end{lemma}
+
+\begin{proof}
+By additivity of length (Algebra, Lemma \ref{algebra-lemma-length-additive})
+we have
+$\text{length}_R(M) =
+\text{length}_R(\Ker(\varphi)) + \text{length}_R(\Im(\varphi))$
+and
+$\text{length}_R(N) =
+\text{length}_R(\Ker(\psi)) + \text{length}_R(\Im(\psi))$.
+Hence
+$$
+e_R(M, N, \varphi, \psi) =
+\text{length}_R(\Ker(\varphi)) - \text{length}_R(\Im(\psi))
+- \text{length}_R(\Ker(\psi)) + \text{length}_R(\Im(\varphi))
+= \text{length}_R(M) - \text{length}_R(N).
+$$
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-periodic-lengths}
+Let $R$ be a ring. Let $f : (M, \varphi, \psi) \to (M', \varphi', \psi')$
+be a map of $(2, 1)$-periodic complexes whose cohomology modules
+have finite length. If $\Ker(f)$ and $\Coker(f)$ have finite length,
+then $e_R(M, \varphi, \psi) = e_R(M', \varphi', \psi')$.
+\end{lemma}
+
+\begin{proof}
+Apply the additivity of Lemma \ref{lemma-additivity-periodic-length}
+and observe that $(\Ker(f), \varphi, \psi)$ and
+$(\Coker(f), \varphi', \psi')$ have vanishing multiplicity by
+Lemma \ref{lemma-finite-periodic-length}.
+\end{proof}
+
+
+
+
+\section{Calculation of some multiplicities}
+\label{section-calculation}
+
+\noindent
+To prove equality of certain cycles later on we need to
+compute some multiplicities. Our main tool, besides the
+elementary lemmas on multiplicities given in the previous section,
+will be Algebra, Lemma \ref{algebra-lemma-order-vanishing-determinant}.
+
+\begin{lemma}
+\label{lemma-length-multiplication}
+Let $R$ be a Noetherian local ring.
+Let $M$ be a finite $R$-module. Let $x \in R$. Assume that
+\begin{enumerate}
+\item $\dim(\text{Supp}(M)) \leq 1$, and
+\item $\dim(\text{Supp}(M/xM)) \leq 0$.
+\end{enumerate}
+Write
+$\text{Supp}(M) = \{\mathfrak m, \mathfrak q_1, \ldots, \mathfrak q_t\}$.
+Then
+$$
+e_R(M, 0, x) =
+\sum\nolimits_{i = 1, \ldots, t}
+\text{ord}_{R/\mathfrak q_i}(x)
+\text{length}_{R_{\mathfrak q_i}}(M_{\mathfrak q_i}).
+$$
+\end{lemma}
+
+\begin{proof}
+We first make some preparatory remarks.
+The result of the lemma holds if $M$ has finite length, i.e., if $t = 0$,
+because both the left hand side and the right hand side are zero
+in this case, see Lemma \ref{lemma-finite-periodic-length}.
+Also, if we have a short exact sequence $0 \to M \to M' \to M'' \to 0$
+of modules satisfying (1) and (2), then the lemma for two out of three
+of these implies the lemma for the third by the
+additivity of length (Algebra, Lemma \ref{algebra-lemma-length-additive}) and
+additivity of multiplicities (Lemma \ref{lemma-additivity-periodic-length}).
+
+\medskip\noindent
+Denote $M_i$ the image of $M$ in $M_{\mathfrak q_i}$, so
+$\text{Supp}(M_i) = \{\mathfrak m, \mathfrak q_i\}$.
+The kernel and cokernel of the map $M \to \bigoplus M_i$
+have support $\{\mathfrak m\}$ and hence have finite length.
+By our preparatory remarks, it follows that it suffices to
+prove the lemma for each $M_i$. Thus we may assume that
+$\text{Supp}(M) = \{\mathfrak m, \mathfrak q\}$.
+In this case we have a finite filtration
+$M \supset \mathfrak qM \supset \mathfrak q^2M \supset \ldots \supset
+\mathfrak q^nM = 0$ by Algebra, Lemma
+\ref{algebra-lemma-Noetherian-power-ideal-kills-module}.
+Again additivity shows that it suffices to prove the lemma
+in the case $M$ is annihilated by $\mathfrak q$.
+In this case we can view $M$ as an $R/\mathfrak q$-module,
+i.e., we may assume that $R$ is a Noetherian local domain
+of dimension $1$ with fraction field $K$.
+Dividing by the torsion submodule, i.e., by the
+kernel of $M \to M \otimes_R K = V$ (the torsion has
+finite length hence is handled by our preparatory remarks)
+we may assume that $M \subset V$ is a lattice
+(Algebra, Definition \ref{algebra-definition-lattice}).
+Then $x : M \to M$ is injective and
+$\text{length}_R(M/xM) = d(M, xM)$
+(Algebra, Definition \ref{algebra-definition-distance}). Since
+$\text{length}_K(V) = \dim_K(V)$
+we see that $\det(x : V \to V) = x^{\dim_K(V)}$ and
+$\text{ord}_R(\det(x : V \to V)) = \dim_K(V) \text{ord}_R(x)$.
+Thus the desired equality follows from
+Algebra, Lemma \ref{algebra-lemma-order-vanishing-determinant}
+in this case.
+\end{proof}
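+
+\noindent
+To illustrate Lemma \ref{lemma-length-multiplication}, let
+$R = k[[x, y]]/(xy)$, $M = R$, and take the nonzerodivisor $x - y \in R$.
+Then $\text{Supp}(M) = \{\mathfrak m, (x), (y)\}$ and
+$M/(x - y)M \cong k[x]/(x^2)$ has length $2$. On the other hand
+$\text{ord}_{R/(x)}(x - y) = \text{ord}_{k[[y]]}(-y) = 1$ and similarly
+for the prime $(y)$, while $M_{(x)}$ and $M_{(y)}$ have length $1$, so
+both sides of the formula equal $2$.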
+
+\begin{lemma}
+\label{lemma-additivity-divisors-restricted}
+Let $R$ be a Noetherian local ring.
+Let $x \in R$. If $M$ is a finite Cohen-Macaulay module over $R$
+with $\dim(\text{Supp}(M)) = 1$ and $\dim(\text{Supp}(M/xM)) = 0$, then
+$$
+\text{length}_R(M/xM)
+=
+\sum\nolimits_i \text{length}_R(R/(x, \mathfrak q_i))
+\text{length}_{R_{\mathfrak q_i}}(M_{\mathfrak q_i}).
+$$
+where $\mathfrak q_1, \ldots, \mathfrak q_t$ are the
+minimal primes of the support of $M$. If $I \subset R$ is an ideal
+such that $x$ is a nonzerodivisor on $R/I$ and $\dim(R/I) = 1$, then
+$$
+\text{length}_R(R/(x, I))
+=
+\sum\nolimits_i \text{length}_R(R/(x, \mathfrak q_i))
+\text{length}_{R_{\mathfrak q_i}}((R/I)_{\mathfrak q_i})
+$$
+where $\mathfrak q_1, \ldots, \mathfrak q_n$ are the minimal
+primes over $I$.
+\end{lemma}
+
+\begin{proof}
+These are special cases of Lemma \ref{lemma-length-multiplication}.
+\end{proof}
+
+\noindent
+Here is another case where we can determine the value of a multiplicity.
+
+\begin{lemma}
+\label{lemma-powers-period-length-zero}
+Let $R$ be a ring. Let $M$ be an $R$-module.
+Let $\varphi : M \to M$ be an endomorphism and $n > 0$
+such that $\varphi^n = 0$ and such that $\Ker(\varphi)/\Im(\varphi^{n - 1})$
+has finite length as an $R$-module.
+Then
+$$
+e_R(M, \varphi^i, \varphi^{n - i}) = 0
+$$
+for $i = 0, \ldots, n$.
+\end{lemma}
+
+\begin{proof}
+The cases $i = 0, n$ are trivial as $\varphi^0 = \text{id}_M$ by convention.
+Let us think of $M$ as an $R[t]$-module where multiplication by $t$
+is given by $\varphi$. Let us write
+$K_i = \Ker(t^i : M \to M)$ and
+$$
+a_i = \text{length}_R(K_i/t^{n - i}M),\quad
+b_i = \text{length}_R(K_i/tK_{i + 1}),\quad
+c_i = \text{length}_R(K_1/t^iK_{i + 1})
+$$
+Boundary values are $a_0 = a_n = b_0 = c_0 = 0$.
+The $c_i$ are integers for $i < n$ as $K_1/t^iK_{i + 1}$
+is a quotient of $K_1/t^{n - 1}M$ which is assumed to have finite length.
+We will use frequently that $K_i \cap t^jM = t^jK_{i + j}$.
+For $0 < i < n - 1$ we have an exact sequence
+$$
+0 \to
+K_1/t^{n - i - 1}K_{n - i} \to
+K_{i + 1}/t^{n - i - 1}M \xrightarrow{t} K_i/t^{n - i}M
+\to K_i/tK_{i + 1} \to 0
+$$
+By induction on $i$ we conclude that $a_i$ and $b_i$ are
+integers for $i < n$ and that
+$$
+c_{n - i - 1} - a_{i + 1} + a_i - b_i = 0
+$$
+For $0 < i < n - 1$ there is a short exact sequence
+$$
+0 \to
+K_i/tK_{i + 1} \to
+K_{i + 1}/tK_{i + 2} \xrightarrow{t^i}
+K_1/t^{i + 1}K_{i + 2} \to
+K_1/t^iK_{i + 1} \to 0
+$$
+which gives
+$$
+b_i - b_{i + 1} + c_{i + 1} - c_i = 0
+$$
+Since $b_0 = c_0$ we conclude that $b_i = c_i$ for $i < n$.
+Then we see that
+$$
+a_2 = a_1 + b_{n - 2} - b_1,\quad
+a_3 = a_2 + b_{n - 3} - b_2,\quad \ldots
+$$
+It is straightforward to see that this implies $a_i = a_{n - i}$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-multiply-period-length}
+Let $(R, \mathfrak m)$ be a Noetherian local ring. Let
+$(M, \varphi, \psi)$ be a $(2, 1)$-periodic complex over $R$
+with $M$ finite and with cohomology groups of finite length over $R$.
+Let $x \in R$ be such that $\dim(\text{Supp}(M/xM)) \leq 0$. Then
+$$
+e_R(M, x\varphi, \psi) = e_R(M, \varphi, \psi) - e_R(\Im(\varphi), 0, x)
+$$
+and
+$$
+e_R(M, \varphi, x\psi) = e_R(M, \varphi, \psi) + e_R(\Im(\psi), 0, x)
+$$
+\end{lemma}
+
+\begin{proof}
+We will only prove the first formula as the second is proved
+in exactly the same manner.
+Let $M' = M[x^\infty]$ be the $x$-power torsion submodule of $M$.
+Consider the short exact sequence $0 \to M' \to M \to M'' \to 0$.
+Then $M''$ is $x$-power torsion free (More on Algebra, Lemma
+\ref{more-algebra-lemma-divide-by-torsion}).
+Since $\varphi$, $\psi$ map $M'$ into $M'$
+we obtain a short exact sequence
+$$
+0 \to (M', \varphi', \psi') \to (M, \varphi, \psi) \to
+(M'', \varphi'', \psi'') \to 0
+$$
+of $(2, 1)$-periodic complexes. Also, we get a short exact sequence
+$0 \to M' \cap \Im(\varphi) \to \Im(\varphi) \to \Im(\varphi'') \to 0$.
+We have
+$e_R(M', \varphi, \psi) = e_R(M', x\varphi, \psi) =
+e_R(M' \cap \Im(\varphi), 0, x) = 0$
+by Lemma \ref{lemma-compare-periodic-lengths}.
+By additivity (Lemma \ref{lemma-additivity-periodic-length})
+we see that it suffices to prove the lemma for $(M'', \varphi'', \psi'')$.
+This reduces us to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $x : M \to M$ is injective.
+In this case $\Ker(x\varphi) = \Ker(\varphi)$.
+On the other hand we have a short exact sequence
+$$
+0 \to \Im(\varphi)/x\Im(\varphi) \to
+\Ker(\psi)/\Im(x\varphi) \to \Ker(\psi)/\Im(\varphi) \to 0
+$$
+This together with (\ref{equation-multiplicity-coker-ker}) proves the formula.
+\end{proof}
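+
+\noindent
+As a sanity check, take $R = k[[t]]$, $(M, \varphi, \psi) = (R, 0, t)$
+and $x = t$. The second formula of
+Lemma \ref{lemma-multiply-period-length} gives
+$$
+e_R(R, 0, t^2) = e_R(R, 0, t) + e_R(tR, 0, t) = 1 + 1 = 2
+$$
+in agreement with (\ref{equation-multiplicity-coker-ker}) applied to
+multiplication by $t^2$.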
+
+
+
+
+
+
+
+\section{Preparation for tame symbols}
+\label{section-preparation-tame-symbol}
+
+\noindent
+In this section we collect some lemmas that will help us define the
+tame symbol in the next section.
+
+\begin{lemma}
+\label{lemma-glue-at-max}
+Let $A$ be a Noetherian ring. Let $\mathfrak m_1, \ldots, \mathfrak m_r$
+be pairwise distinct maximal ideals of $A$. For $i = 1, \ldots, r$ let
+$\varphi_i : A_{\mathfrak m_i} \to B_i$ be a ring map whose
+kernel and cokernel are annihilated by a power
+of $\mathfrak m_i$. Then there exists a ring map $\varphi : A \to B$ such
+that
+\begin{enumerate}
+\item the localization of $\varphi$ at $\mathfrak m_i$ is
+isomorphic to $\varphi_i$, and
+\item $\Ker(\varphi)$ and $\Coker(\varphi)$ are annihilated
+by a power of $\mathfrak m_1 \cap \ldots \cap \mathfrak m_r$.
+\end{enumerate}
+Moreover, if each $\varphi_i$ is finite, injective, or
+surjective then so is $\varphi$.
+\end{lemma}
+
+\begin{proof}
+Set $I = \mathfrak m_1 \cap \ldots \cap \mathfrak m_r$. Set
+$A_i = A_{\mathfrak m_i}$ and $A' = \prod A_i$.
+Then $IA' = \prod \mathfrak m_i A_i$ and $A \to A'$
+is a flat ring map such that $A/I \cong A'/IA'$.
+Thus we may use More on Algebra, Lemma
+\ref{more-algebra-lemma-application-formal-glueing}
+to see that there exists an $A$-module map $\varphi : A \to B$
+with $\varphi_i$ isomorphic to the localization of $\varphi$
+at $\mathfrak m_i$. Then we can use the discussion in
+More on Algebra, Remark \ref{more-algebra-remark-formal-glueing-algebras}
+to endow $B$ with an $A$-algebra structure
+matching the given $A$-algebra structure on $B_i$.
+The final statement of the lemma follows easily from
+the fact that $\Ker(\varphi)_{\mathfrak m_i} \cong \Ker(\varphi_i)$
+and $\Coker(\varphi)_{\mathfrak m_i} \cong \Coker(\varphi_i)$.
+\end{proof}
+
+\noindent
+The following lemma is very similar to
+Algebra, Lemma \ref{algebra-lemma-nonregular-dimension-one}.
+
+\begin{lemma}
+\label{lemma-Noetherian-domain-dim-1-two-elements}
+Let $(R, \mathfrak m)$ be a Noetherian local ring of dimension $1$.
+Let $a, b \in R$ be nonzerodivisors.
+There exists a finite ring extension $R \subset R'$
+with $R'/R$ annihilated by a power of $\mathfrak m$
+and nonzerodivisors $t, a', b' \in R'$ such that
+$a = ta'$ and $b = tb'$ and $R' = a'R' + b'R'$.
+\end{lemma}
+
+\begin{proof}
+If $a$ or $b$ is a unit, then the lemma is true with $R = R'$.
+Thus we may assume $a, b \in \mathfrak m$.
+Set $I = (a, b)$. The idea is to blow up $R$ in $I$.
+Instead of doing the algebraic argument we work geometrically.
+Let $X = \text{Proj}(\bigoplus_{d \geq 0} I^d)$.
+By Divisors, Lemma
+\ref{divisors-lemma-blowing-up-gives-effective-Cartier-divisor}
+the morphism $X \to \Spec(R)$ is an isomorphism over
+the punctured spectrum $U = \Spec(R) \setminus \{\mathfrak m\}$.
+Thus we may and do view $U$ as an open subscheme of $X$.
+The morphism $X \to \Spec(R)$ is projective by
+Divisors, Lemma \ref{divisors-lemma-blowing-up-projective}.
+Also, every generic point of $X$ lies in $U$, for example
+by Divisors, Lemma \ref{divisors-lemma-blow-up-and-irreducible-components}.
+It follows from Varieties, Lemma \ref{varieties-lemma-finite-in-codim-1}
+that $X \to \Spec(R)$ is finite. Thus $X = \Spec(R')$ is
+affine and $R \to R'$ is finite. We have $R_a \cong R'_a$ as $U = D(a)$.
+Hence a power of $a$ annihilates the finite $R$-module $R'/R$.
+As $\mathfrak m = \sqrt{(a)}$ we see that $R'/R$ is annihilated
+by a power of $\mathfrak m$. By
+Divisors, Lemma \ref{divisors-lemma-blowing-up-gives-effective-Cartier-divisor}
+we see that $IR'$ is a locally principal ideal.
+Since $R'$ is semi-local we see that $IR'$ is principal,
+see Algebra, Lemma \ref{algebra-lemma-locally-free-semi-local-free},
+say $IR' = (t)$. Then we have $a = a't$ and $b = b't$ and everything is
+clear.
+\end{proof}
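+
+\noindent
+A typical example: let $R = k[[s^2, s^3]]$, $a = s^2$, $b = s^3$.
+Blowing up $I = (s^2, s^3)$ yields $R' = k[[s]]$, which is finite over
+$R$ with $R'/R$ annihilated by $\mathfrak m$, and $IR' = (s^2)$.
+Thus the lemma holds with $t = s^2$, $a' = 1$, $b' = s$, and indeed
+$a'R' + b'R' = R'$.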
+
+\begin{lemma}
+\label{lemma-not-infinitely-divisible}
+Let $(R, \mathfrak m)$ be a Noetherian local ring of dimension $1$.
+Let $a, b \in R$ be nonzerodivisors with $a \in \mathfrak m$.
+There exists an integer $n = n(R, a, b)$ such that for every finite ring
+extension $R \subset R'$, if $b = a^m c$ for some $c \in R'$, then $m \leq n$.
+\end{lemma}
+
+\begin{proof}
+Choose a minimal prime $\mathfrak q \subset R$. Observe that
+$\dim(R/\mathfrak q) = 1$, in particular $R/\mathfrak q$ is not a field.
+We can choose a discrete valuation ring $A$ dominating $R/\mathfrak q$
+with the same fraction field, see
+Algebra, Lemma \ref{algebra-lemma-dominate-by-dimension-1}. Observe that
+$a$ and $b$ map to nonzero elements of $A$ as nonzerodivisors in $R$
+are not contained in $\mathfrak q$. Let $v$ be the discrete valuation on $A$.
+Then $v(a) > 0$ as $a \in \mathfrak m$.
+We claim $n = v(b)/v(a)$ works.
+
+\medskip\noindent
+Let $R \subset R'$ be given. Set $A' = A \otimes_R R'$.
+Since $\Spec(R') \to \Spec(R)$ is surjective
+(Algebra, Lemma \ref{algebra-lemma-integral-overring-surjective})
+also $\Spec(A') \to \Spec(A)$ is surjective
+(Algebra, Lemma \ref{algebra-lemma-surjective-spec-radical-ideal}).
+Pick a prime $\mathfrak q' \subset A'$ lying over $(0) \subset A$.
+Then $A \subset A'' = A'/\mathfrak q'$ is a finite extension of rings
+(again inducing a surjection on spectra).
+Pick a maximal ideal $\mathfrak m'' \subset A''$
+lying over the maximal ideal of $A$ and a discrete valuation ring
+$A'''$ dominating $A''_{\mathfrak m''}$ (see lemma cited above).
+Then $A \to A'''$ is an extension of discrete valuation rings
+and we have $b = a^m c$ in $A'''$. Thus $v'''(b) \geq mv'''(a)$.
+Since $v''' = ev$ where $e$ is the ramification index
+of $A'''/A$, we find that $m \leq n$ as desired.
+\end{proof}
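+
+\noindent
+For example, let $R = k[[t]]$, $a = t^2$, and $b = t^3$. Here
+$v = \text{ord}_t$ and $v(b)/v(a) = 3/2$, so the proof shows that in any
+finite ring extension $R \subset R'$ an equality $b = a^m c$ forces
+$m \leq 1$: any discrete valuation $v'''$ as in the proof satisfies
+$3e \geq 2me$ with $e$ the ramification index. The bound is attained in
+$R$ itself since $t^3 = (t^2)^1 \cdot t$.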
+
+\begin{lemma}
+\label{lemma-prepare-tame-symbol}
+Let $(A, \mathfrak m)$ be a Noetherian local ring of dimension $1$.
+Let $r \geq 2$ and let $a_1, \ldots, a_r \in A$ be nonzerodivisors
+not all units.
+Then there exist
+\begin{enumerate}
+\item a finite ring extension $A \subset B$ with
+$B/A$ annihilated by a power of $\mathfrak m$,
+\item for each maximal ideal $\mathfrak m_j \subset B$
+a nonzerodivisor $\pi_j \in B_j = B_{\mathfrak m_j}$, and
+\item factorizations $a_i = u_{i, j} \pi_j^{e_{i, j}}$ in $B_j$
+with $u_{i, j} \in B_j$ units and $e_{i, j} \geq 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since at least one $a_i$ is not a unit we find that $\mathfrak m$
+is not an associated prime of $A$. Moreover, for any $A \subset B$
+as in the statement $\mathfrak m$ is not an associated prime of $B$
+and $\mathfrak m_j$ is not an associated prime of $B_j$.
+Keeping this in mind will help check the arguments below.
+
+\medskip\noindent
+First, we claim that it suffices to prove the lemma for $r = 2$.
+We will argue this by induction on $r$; we suggest the reader
+skip the proof. Suppose we are given $A \subset B$ and $\pi_j$ in
+$B_j = B_{\mathfrak m_j}$ and factorizations
+$a_i = u_{i, j} \pi_j^{e_{i, j}}$ for $i = 1, \ldots, r - 1$ in $B_j$
+with $u_{i, j} \in B_j$ units and $e_{i, j} \geq 0$.
+Then by the case $r = 2$ for $\pi_j$ and $a_r$ in $B_j$
+we can find extensions $B_j \subset C_j$ and for every maximal ideal
+$\mathfrak m_{j, k}$ of $C_j$ a nonzerodivisor
+$\pi_{j, k} \in C_{j, k} = (C_j)_{\mathfrak m_{j, k}}$
+and factorizations
+$$
+\pi_j = v_{j, k} \pi_{j, k}^{f_{j, k}}
+\quad\text{and}\quad
+a_r = w_{j, k} \pi_{j, k}^{g_{j, k}}
+$$
+as in the lemma. There exists a unique finite extension $B \subset C$
+with $C/B$ annihilated by a power of $\mathfrak m$ such
+that $C_j \cong C_{\mathfrak m_j}$ for all $j$, see
+Lemma \ref{lemma-glue-at-max}.
+The maximal ideals of $C$ correspond $1$-to-$1$
+to the maximal ideals $\mathfrak m_{j, k}$ in the localizations
+and in these localizations we have
+$$
+a_i = u_{i, j} \pi_j^{e_{i, j}} =
+u_{i, j} v_{j, k}^{e_{i, j}} \pi_{j, k}^{e_{i, j}f_{j, k}}
+$$
+for $i \leq r - 1$. Since $a_r$ factors correctly too the
+proof of the induction step is complete.
+
+\medskip\noindent
+Proof of the case $r = 2$. We will use induction on
+$$
+\ell = \min(\text{length}_A(A/a_1A),\ \text{length}_A(A/a_2A)).
+$$
+If $\ell = 0$, then either $a_1$ or $a_2$ is a unit and
+the lemma holds with $A = B$. Thus we may and do assume $\ell > 0$.
+
+\medskip\noindent
+Suppose we have a finite extension of rings $A \subset A'$ such that
+$A'/A$ is annihilated by a power of $\mathfrak m$ and such that
+$\mathfrak m$ is not an associated prime of $A'$.
+Let $\mathfrak m_1, \ldots, \mathfrak m_r \subset A'$
+be the maximal ideals and set $A'_i = A'_{\mathfrak m_i}$.
+If we can solve the problem for $a_1, a_2$ in each $A'_i$,
+then we can apply Lemma \ref{lemma-glue-at-max}
+to produce a solution for $a_1, a_2$ in $A$.
+Choose $x \in \{a_1, a_2\}$ such that $\ell = \text{length}_A(A/xA)$.
+By Lemma \ref{lemma-compare-periodic-lengths}
+and (\ref{equation-multiplicity-coker-ker})
+we have $\text{length}_A(A/xA) = \text{length}_A(A'/xA')$.
+On the other hand, we have
+$$
+\text{length}_A(A'/xA') =
+\sum [\kappa(\mathfrak m_i) : \kappa(\mathfrak m)]
+\text{length}_{A'_i}(A'_i/xA'_i)
+$$
+by Algebra, Lemma \ref{algebra-lemma-pushdown-module}.
+Since $x \in \mathfrak m$ we see that each term on the right hand side
+is positive. We conclude that the induction hypothesis applies
+to $a_1, a_2$ in each $A'_i$ if $r > 1$ or if $r = 1$ and
+$[\kappa(\mathfrak m_1) : \kappa(\mathfrak m)] > 1$.
+We conclude that we may assume each $A'$ as above is local with
+the same residue field as $A$.
+
+\medskip\noindent
+Applying the discussion of the previous paragraph,
+we may replace $A$ by the ring constructed in
+Lemma \ref{lemma-Noetherian-domain-dim-1-two-elements}
+for $a_1, a_2 \in A$. Then since $A$ is local we find,
+after possibly switching $a_1$ and $a_2$, that $a_2 \in (a_1)$.
+Write $a_2 = a_1^m c$ with $m > 0$ maximal. In fact, by
+Lemma \ref{lemma-not-infinitely-divisible}
+we may assume $m$ is maximal even after replacing $A$
+by any finite extension $A \subset A'$ as in the previous paragraph.
+If $c$ is a unit, then we are done. If not, then we replace
+$A$ by the ring constructed in
+Lemma \ref{lemma-Noetherian-domain-dim-1-two-elements}
+for $a_1, c \in A$. Then either (1) $c = a_1 c'$ or
+(2) $a_1 = c a'_1$. The first case cannot happen since
+it would give $a_2 = a_1^{m + 1} c'$ contradicting the
+maximality of $m$. In the second case we get
+$a_1 = c a'_1$ and $a_2 = c^{m + 1} (a'_1)^m$.
+Then it suffices to prove the lemma for $A$ and $c, a'_1$.
+If $a'_1$ is a unit we're done and if not, then
+$\text{length}_A(A/cA) < \ell$ because $cA$ is a strictly
+bigger ideal than $a_1A$. Thus we win by induction hypothesis.
+\end{proof}
+
+
+
+
+
+
+\section{Tame symbols}
+\label{section-tame-symbol}
+
+\noindent
+Consider a Noetherian local ring $(A, \mathfrak m)$ of dimension $1$.
+We denote $Q(A)$ the total ring of fractions of $A$, see
+Algebra, Example \ref{algebra-example-localize-at-prime}.
+The {\it tame symbol} will be a map
+$$
+\partial_A(-, -) : Q(A)^* \times Q(A)^* \longrightarrow \kappa(\mathfrak m)^*
+$$
+satisfying the following properties:
+\begin{enumerate}
+\item $\partial_A(f, gh) = \partial_A(f, g) \partial_A(f, h)$
+\label{item-bilinear}
+for $f, g, h \in Q(A)^*$,
+\item $\partial_A(f, g) \partial_A(g, f) = 1$
+\label{item-skew}
+for $f, g \in Q(A)^*$,
+\item $\partial_A(f, 1 - f) = 1$
+\label{item-1-x}
+for $f \in Q(A)^*$ such that $1 - f \in Q(A)^*$,
+\item $\partial_A(aa', b) = \partial_A(a, b)\partial_A(a', b)$
+\label{item-bilinear-better}
+and $\partial_A(a, bb') = \partial_A(a, b)\partial_A(a, b')$
+for $a, a', b, b' \in A$ nonzerodivisors,
+\item $\partial_A(b, b) = (-1)^m$
+\label{item-skew-better}
+with $m = \text{length}_A(A/bA)$
+for $b \in A$ a nonzerodivisor,
+\item $\partial_A(u, b) = u^m \bmod \mathfrak m$
+\label{item-normalization}
+with $m = \text{length}_A(A/bA)$ for $u \in A$ a unit and
+$b \in A$ a nonzerodivisor, and
+\item
+\label{item-1-x-better}
+$\partial_A(a, b - a)\partial_A(b, b) = \partial_A(b, b - a)\partial_A(a, b)$
+for $a, b \in A$ such that $a, b, b - a$ are nonzerodivisors.
+\end{enumerate}
+Since it is easier to work with elements of $A$ we will
+often think of $\partial_A$ as a map defined on pairs of
+nonzerodivisors of $A$ satisfying (\ref{item-bilinear-better}),
+(\ref{item-skew-better}), (\ref{item-normalization}),
+(\ref{item-1-x-better}). It is an exercise to see that
+setting
+$$
+\partial_A(\frac{a}{b}, \frac{c}{d}) =
+\partial_A(a, c) \partial_A(a, d)^{-1} \partial_A(b, c)^{-1} \partial_A(b, d)
+$$
+we get a well defined map $Q(A)^* \times Q(A)^* \to \kappa(\mathfrak m)^*$
+satisfying (\ref{item-bilinear}), (\ref{item-skew}), (\ref{item-1-x})
+as well as the other properties.
+
+\medskip\noindent
+We do not claim there is a unique map with these properties.
+Instead, we will give a recipe for constructing such a map.
+Namely, given $a_1, a_2 \in A$ nonzerodivisors, we choose
+a ring extension $A \subset B$ and local factorizations
+as in Lemma \ref{lemma-prepare-tame-symbol}.
+Then we define
+\begin{equation}
+\label{equation-tame-symbol}
+\partial_A(a_1, a_2) = \prod\nolimits_j
+\text{Norm}_{\kappa(\mathfrak m_j)/\kappa(\mathfrak m)}
+((-1)^{e_{1, j}e_{2, j}}u_{1, j}^{e_{2, j}}u_{2, j}^{-e_{1, j}}
+\bmod \mathfrak m_j)^{m_j}
+\end{equation}
+where $m_j = \text{length}_{B_j}(B_j/\pi_j B_j)$ and the product
+is taken over the maximal ideals $\mathfrak m_1, \ldots, \mathfrak m_r$ of $B$.
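+
+\medskip\noindent
+To illustrate the definition in the simplest case: if $A$ is a discrete
+valuation ring with uniformizer $\pi$, then we may take $B = A$ and use the
+factorizations $a_i = u_i \pi^{e_i}$ with $u_i \in A$ units. Since
+$\text{length}_A(A/\pi A) = 1$ and the norm is trivial, the formula
+(\ref{equation-tame-symbol}) reduces to the classical tame symbol
+$$
+\partial_A(a_1, a_2) =
+(-1)^{e_1 e_2} u_1^{e_2} u_2^{-e_1} \bmod \mathfrak m =
+(-1)^{e_1 e_2} \frac{a_1^{e_2}}{a_2^{e_1}} \bmod \mathfrak m
+$$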
+
+\begin{lemma}
+\label{lemma-well-defined-tame-symbol}
+The formula (\ref{equation-tame-symbol}) determines a
+well defined element of $\kappa(\mathfrak m)^*$. In other words, the
+right hand side does not depend on the choice of the
+local factorizations or the choice of $B$.
+\end{lemma}
+
+\begin{proof}
+Independence of choice of factorizations. Suppose we have
+a Noetherian $1$-dimensional local ring $B$, elements $a_1, a_2 \in B$,
+and nonzerodivisors $\pi, \theta$ such that we can write
+$$
+a_1 = u_1 \pi^{e_1} = v_1 \theta^{f_1},\quad
+a_2 = u_2 \pi^{e_2} = v_2 \theta^{f_2}
+$$
+with $e_i, f_i \geq 0$ integers and $u_i, v_i$ units in $B$.
+Observe that this implies
+$$
+a_1^{e_2} = u_1^{e_2}u_2^{-e_1}a_2^{e_1},\quad
+a_1^{f_2} = v_1^{f_2}v_2^{-f_1}a_2^{f_1}
+$$
+On the other hand, setting
+$m = \text{length}_B(B/\pi B)$ and $k = \text{length}_B(B/\theta B)$
+we find $e_2 m = \text{length}_B(B/a_2 B) = f_2 k$.
+Expanding $a_1^{e_2m} = a_1^{f_2 k}$ using the above we find
+$$
+(u_1^{e_2}u_2^{-e_1})^m = (v_1^{f_2}v_2^{-f_1})^k
+$$
+This proves the desired equality up to signs. To see the signs
+work out we have to show $me_1e_2$ is even if and only if
+$kf_1f_2$ is even. This follows as both $me_2 = kf_2$ and
+$me_1 = kf_1$ (same argument as above).
+
+\medskip\noindent
+Independence of choice of $B$. Suppose given two extensions
+$A \subset B$ and $A \subset B'$ as in Lemma \ref{lemma-prepare-tame-symbol}.
+Then
+$$
+C = (B \otimes_A B')/(\mathfrak m\text{-power torsion})
+$$
+will be a third one. Thus we may assume we have
+$A \subset B \subset C$ and factorizations over the
+local rings of $B$ and we have to show that using
+the same factorizations over the local rings of $C$
+gives the same element of $\kappa(\mathfrak m)^*$.
+By transitivity of norms
+(Fields, Lemma \ref{fields-lemma-trace-and-norm-tower})
+this comes down to the following problem:
+if $B$ is a Noetherian local ring of dimension $1$,
+$B \subset C$ is a finite ring extension as above,
+and $\pi \in B$ is a nonzerodivisor, then
+$$
+\lambda^m = \prod \text{Norm}_{\kappa_k/\kappa}(\lambda)^{m_k}
+$$
+Here we have used the following notation:
+(1) $\kappa$ is the residue field of $B$,
+(2) $\lambda$ is an element of $\kappa$,
+(3) $\mathfrak m_k \subset C$ are the maximal ideals of $C$,
+(4) $\kappa_k = \kappa(\mathfrak m_k)$ is the residue field of
+$C_k = C_{\mathfrak m_k}$,
+(5) $m = \text{length}_B(B/\pi B)$, and
+(6) $m_k = \text{length}_{C_k}(C_k/\pi C_k)$.
+The displayed equality holds because
+$\text{Norm}_{\kappa_k/\kappa}(\lambda) = \lambda^{[\kappa_k : \kappa]}$
+as $\lambda \in \kappa$ and because $m = \sum m_k[\kappa_k:\kappa]$.
+First, we have $m = \text{length}_B(B/\pi B) = \text{length}_B(C/\pi C)$
+by Lemma \ref{lemma-compare-periodic-lengths}
+and (\ref{equation-multiplicity-coker-ker}).
+Finally, we have $\text{length}_B(C/\pi C) = \sum m_k[\kappa_k:\kappa]$
+by Algebra, Lemma \ref{algebra-lemma-pushdown-module}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tame-symbol}
+The tame symbol (\ref{equation-tame-symbol}) satisfies
+(\ref{item-bilinear-better}), (\ref{item-skew-better}),
+(\ref{item-normalization}), (\ref{item-1-x-better}) and hence
+gives a map $\partial_A : Q(A)^* \times Q(A)^* \to \kappa(\mathfrak m)^*$
+satisfying (\ref{item-bilinear}), (\ref{item-skew}), (\ref{item-1-x}).
+\end{lemma}
+
+\begin{proof}
+Let us prove (\ref{item-bilinear-better}).
+Let $a_1, a_2, a_3 \in A$ be nonzerodivisors.
+Choose $A \subset B$ as in Lemma \ref{lemma-prepare-tame-symbol}
+for $a_1, a_2, a_3$. Then the equality
+$$
+\partial_A(a_1a_2, a_3) = \partial_A(a_1, a_3) \partial_A(a_2, a_3)
+$$
+follows from the equality
+$$
+(-1)^{(e_{1, j} + e_{2, j})e_{3, j}}
+(u_{1, j}u_{2, j})^{e_{3, j}}u_{3, j}^{-e_{1, j} - e_{2, j}} =
+(-1)^{e_{1, j}e_{3, j}}
+u_{1, j}^{e_{3, j}}u_{3, j}^{-e_{1, j}}
+(-1)^{e_{2, j}e_{3, j}}
+u_{2, j}^{e_{3, j}}u_{3, j}^{-e_{2, j}}
+$$
+in $B_j$. Properties (\ref{item-skew-better}) and
+(\ref{item-normalization}) are equally immediate.
+
+\medskip\noindent
+Let us prove (\ref{item-1-x-better}). Let $a_1, a_2, a_1 - a_2 \in A$
+be nonzerodivisors and set $a_3 = a_1 - a_2$.
+Choose $A \subset B$ as in Lemma \ref{lemma-prepare-tame-symbol}
+for $a_1, a_2, a_3$. Then it suffices to show
+$$
+(-1)^{e_{1, j}e_{2, j} + e_{1, j}e_{3, j} + e_{2, j}e_{3, j} + e_{2, j}}
+u_{1, j}^{e_{2, j} - e_{3, j}}
+u_{2, j}^{e_{3, j} - e_{1, j}}
+u_{3, j}^{e_{1, j} - e_{2, j}} \bmod \mathfrak m_j = 1
+$$
+This is clear if $e_{1, j} = e_{2, j} = e_{3, j}$.
+Say $e_{1, j} > e_{2, j}$. Then we see that $e_{3, j} = e_{2, j}$
+because $a_3 = a_1 - a_2$ and we see that $u_{3, j}$
+has the same residue class as $-u_{2, j}$. Hence
+the formula is true -- the signs work out as well
+and this verification is the reason for the choice of signs
+in (\ref{equation-tame-symbol}).
+The other cases are handled in exactly the same manner.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-norm-down-tame-symbol}
+Let $(A, \mathfrak m)$ be a Noetherian local ring of dimension $1$.
+Let $A \subset B$ be a finite ring extension with $B/A$
+annihilated by a power of $\mathfrak m$ and $\mathfrak m$ not
+an associated prime of $B$.
+For $a, b \in A$ nonzerodivisors we have
+$$
+\partial_A(a, b) = \prod
+\text{Norm}_{\kappa(\mathfrak m_j)/\kappa(\mathfrak m)}(\partial_{B_j}(a, b))
+$$
+where the product is over the maximal ideals $\mathfrak m_j$ of $B$
+and $B_j = B_{\mathfrak m_j}$.
+\end{lemma}
+
+\begin{proof}
+Choose $B_j \subset C_j$ as in
+Lemma \ref{lemma-prepare-tame-symbol} for $a, b$.
+By Lemma \ref{lemma-glue-at-max} we can choose a finite ring
+extension $B \subset C$ with $C_j \cong C_{\mathfrak m_j}$ for all $j$.
+Let $\mathfrak m_{j, k} \subset C$ be the maximal ideals of $C$
+lying over $\mathfrak m_j$. Let
+$$
+a = u_{j, k}\pi_{j, k}^{f_{j, k}},\quad
+b = v_{j, k}\pi_{j, k}^{g_{j, k}}
+$$
+be the local factorizations which exist by our choice of
+$C_j \cong C_{\mathfrak m_j}$. By definition we have
+$$
+\partial_A(a, b) =
+\prod\nolimits_{j, k}
+\text{Norm}_{\kappa(\mathfrak m_{j, k})/\kappa(\mathfrak m)}
+((-1)^{f_{j, k}g_{j, k}}u_{j, k}^{g_{j, k}}v_{j, k}^{-f_{j, k}}
+\bmod \mathfrak m_{j, k})^{m_{j, k}}
+$$
+and
+$$
+\partial_{B_j}(a, b) =
+\prod\nolimits_k
+\text{Norm}_{\kappa(\mathfrak m_{j, k})/\kappa(\mathfrak m_j)}
+((-1)^{f_{j, k}g_{j, k}}u_{j, k}^{g_{j, k}}v_{j, k}^{-f_{j, k}}
+\bmod \mathfrak m_{j, k})^{m_{j, k}}
+$$
+The result follows by transitivity of norms
+for $\kappa(\mathfrak m_{j, k})/\kappa(\mathfrak m_j)/\kappa(\mathfrak m)$, see
+Fields, Lemma \ref{fields-lemma-trace-and-norm-tower}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tame-symbol-formally-smooth}
+Let $(A, \mathfrak m, \kappa) \to (A', \mathfrak m', \kappa')$
+be a local homomorphism of Noetherian local rings. Assume $A \to A'$
+is flat and $\dim(A) = \dim(A') = 1$. Set
+$m = \text{length}_{A'}(A'/\mathfrak mA')$.
+For $a_1, a_2 \in A$ nonzerodivisors
+$\partial_A(a_1, a_2)^m$ maps to $\partial_{A'}(a_1, a_2)$
+via $\kappa \to \kappa'$.
+\end{lemma}
+
+\begin{proof}
+If $a_1, a_2$ are both units, then $\partial_A(a_1, a_2) = 1$
+and $\partial_{A'}(a_1, a_2) = 1$ and the result is true.
+If not, then we can choose a ring extension $A \subset B$ and
+local factorizations as in Lemma \ref{lemma-prepare-tame-symbol}.
+Let $\mathfrak m_1, \ldots, \mathfrak m_r$
+be the maximal ideals of $B$ with residue fields $\kappa_1, \ldots, \kappa_r$.
+For each $j \in \{1, \ldots, r\}$ denote $\pi_j \in B_j = B_{\mathfrak m_j}$
+a nonzerodivisor such that we have factorizations
+$a_i = u_{i, j}\pi_j^{e_{i, j}}$ as in the lemma.
+By definition we have
+$$
+\partial_A(a_1, a_2) = \prod\nolimits_j
+\text{Norm}_{\kappa_j/\kappa}
+((-1)^{e_{1, j}e_{2, j}}u_{1, j}^{e_{2, j}}u_{2, j}^{-e_{1, j}}
+\bmod \mathfrak m_j)^{m_j}
+$$
+where $m_j = \text{length}_{B_j}(B_j/\pi_j B_j)$.
+
+\medskip\noindent
+Set $B' = A' \otimes_A B$. Since $A'$ is flat over $A$ we see
+that $A' \subset B'$ is a ring extension with $B'/A'$ annihilated
+by a power of $\mathfrak m'$. Let
+$$
+\mathfrak m'_{j, l},\quad l = 1, \ldots, n_j
+$$
+be the maximal ideals of $B'$ lying over $\mathfrak m_j$. Denote
+$\kappa'_{j, l}$ the residue field of $\mathfrak m'_{j, l}$. Denote
+$B'_{j, l}$ the localization of $B'$ at $\mathfrak m'_{j, l}$.
+As factorizations of $a_1$ and $a_2$ in $B'_{j, l}$
+we use the image of the factorizations
+$a_i = u_{i, j} \pi_j^{e_{i, j}}$ given to us in $B_j$.
+By definition we have
+$$
+\partial_{A'}(a_1, a_2) = \prod\nolimits_{j, l}
+\text{Norm}_{\kappa'_{j, l}/\kappa'}
+((-1)^{e_{1, j}e_{2, j}}u_{1, j}^{e_{2, j}}u_{2, j}^{-e_{1, j}}
+\bmod \mathfrak m'_{j, l})^{m'_{j, l}}
+$$
+where $m'_{j, l} = \text{length}_{B'_{j, l}}(B'_{j, l}/\pi_j B'_{j, l})$.
+
+\medskip\noindent
+Comparing the formulae above we see that it suffices to show that
+for each $j$ and for any unit $u \in B_j$ we have
+\begin{equation}
+\label{equation-to-prove}
+\left(\text{Norm}_{\kappa_j/\kappa}(u \bmod \mathfrak m_j)^{m_j}\right)^m
+=
+\prod\nolimits_l
+\text{Norm}_{\kappa'_{j, l}/\kappa'}(u \bmod \mathfrak m'_{j, l})^{m'_{j, l}}
+\end{equation}
+in $\kappa'$. We are going to use the construction of determinants of
+endomorphisms of finite length modules in
+More on Algebra, Section \ref{more-algebra-section-determinants-finite-length}
+to prove this. Set $M = B_j/\pi_j B_j$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-multiplication} we have
+$$
+\text{Norm}_{\kappa_j/\kappa}(u \bmod \mathfrak m_j)^{m_j} =
+\det\nolimits_\kappa(u : M \to M)
+$$
+Thus, by
+More on Algebra, Lemma \ref{more-algebra-lemma-flat-base-change-det},
+the left hand side of (\ref{equation-to-prove}) is equal to
+$\det_{\kappa'}(u : M \otimes_A A' \to M \otimes_A A')$.
+We have an isomorphism
+$$
+M \otimes_A A' = (B_j/\pi_j B_j) \otimes_A A' =
+\bigoplus\nolimits_l B'_{j, l}/\pi_j B'_{j, l}
+$$
+of $A'$-modules. Setting $M'_l = B'_{j, l}/\pi_j B'_{j, l}$ we see that
+$\text{Norm}_{\kappa'_{j, l}/\kappa'}(u \bmod \mathfrak m'_{j, l})^{m'_{j, l}}
+= \det_{\kappa'}(u : M'_l \to M'_l)$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-multiplication} again.
+Hence (\ref{equation-to-prove}) holds by multiplicativity of the determinant
+construction, see More on Algebra, Lemma \ref{more-algebra-lemma-ses}.
+\end{proof}
+
+
+
+
+
+
+\section{A key lemma}
+\label{section-key-lemma}
+
+\noindent
+In this section we apply the results above to prove
+Lemma \ref{lemma-milnor-gersten-low-degree}.
+This lemma is a low degree case of the statement
+that there is a complex for Milnor K-theory similar
+to the Gersten-Quillen complex in Quillen's K-theory.
+See Remark \ref{remark-gersten-complex-milnor}.
+
+\begin{lemma}
+\label{lemma-perpare-key}
+Let $(A, \mathfrak m)$ be a $2$-dimensional Noetherian local ring.
+Let $t \in \mathfrak m$ be a nonzerodivisor. Say
+$V(t) = \{\mathfrak m, \mathfrak q_1, \ldots, \mathfrak q_r\}$.
+Let $A_{\mathfrak q_i} \subset B_i$ be a finite ring
+extension with $B_i/A_{\mathfrak q_i}$ annihilated by a power of
+$t$. Then there exists a finite extension $A \subset B$ of
+local rings identifying residue fields, such that
+$B_{\mathfrak q_i} \cong B_i$ for all $i$ and $B/A$ is annihilated
+by a power of $t$.
+\end{lemma}
+
+\begin{proof}
+Choose $n > 0$ such that $B_i \subset t^{-n}A_{\mathfrak q_i}$.
+Let $M \subset t^{-n}A$, resp.\ $M' \subset t^{-2n}A$ be the
+$A$-submodule consisting of elements mapping to $B_i$ in
+$t^{-n}A_{\mathfrak q_i}$, resp.\ $t^{-2n}A_{\mathfrak q_i}$.
+Then $M \subset M'$ are finite $A$-modules as $A$ is Noetherian
+and $M_{\mathfrak q_i} = M'_{\mathfrak q_i} = B_i$ as localization
+is exact. Thus $M'/M$ is annihilated by $\mathfrak m^c$ for some
+$c > 0$. Observe that $M \cdot M \subset M'$ under the multiplication
+$t^{-n}A \times t^{-n}A \to t^{-2n}A$. Hence
+$B = A + \mathfrak m^{c + 1}M$ is a finite $A$-algebra with the correct
+localizations. We omit the verification that $B$ is local with
+maximal ideal $\mathfrak m + \mathfrak m^{c + 1}M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-key-nonzerodivisors}
+Let $(A, \mathfrak m)$ be a $2$-dimensional Noetherian local ring.
+Let $a, b \in A$ be nonzerodivisors.
+Then we have
+$$
+\sum
+\text{ord}_{A/\mathfrak q}(\partial_{A_{\mathfrak q}}(a, b))
+=
+0
+$$
+where the sum is over the height $1$ primes $\mathfrak q$ of $A$.
+\end{lemma}
+
+\begin{proof}
+If $\mathfrak q$ is a height $1$ prime of $A$ such that $a, b$
+map to a unit of $A_\mathfrak q$, then $\partial_{A_\mathfrak q}(a, b) = 1$.
+Thus the sum is finite. In fact, if
+$V(ab) = \{\mathfrak m, \mathfrak q_1, \ldots, \mathfrak q_r\}$
+then the sum is over $i = 1, \ldots, r$.
+For each $i$ we pick an extension $A_{\mathfrak q_i} \subset B_i$
+as in Lemma \ref{lemma-prepare-tame-symbol} for $a, b$.
+By Lemma \ref{lemma-perpare-key} with $t = ab$ and the given list of primes
+we may assume we have a finite local extension $A \subset B$
+with $B/A$ annihilated by a power of $ab$ and such that
+$B_{\mathfrak q_i} \cong B_i$ for each $i$.
+Observe that if $\mathfrak q_{i, j}$ are the primes of $B$
+lying over $\mathfrak q_i$ then we have
+$$
+\text{ord}_{A/\mathfrak q_i}(\partial_{A_{\mathfrak q_i}}(a, b))
+=
+\sum\nolimits_j
+\text{ord}_{B/\mathfrak q_{i, j}}(\partial_{B_{\mathfrak q_{i, j}}}(a, b))
+$$
+by Lemma \ref{lemma-norm-down-tame-symbol} and
+Algebra, Lemma \ref{algebra-lemma-finite-extension-dim-1}.
+Thus we may replace $A$ by $B$ and
+reduce to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume for each $i$ there is a nonzerodivisor
+$\pi_i \in A_{\mathfrak q_i}$ and units $u_i, v_i \in A_{\mathfrak q_i}$
+such that for some integers $e_i, f_i \geq 0$ we have
+$$
+a = u_i \pi_i^{e_i},\quad b = v_i \pi_i^{f_i}
+$$
+in $A_{\mathfrak q_i}$. Setting
+$m_i = \text{length}_{A_{\mathfrak q_i}}(A_{\mathfrak q_i}/\pi_i A_{\mathfrak q_i})$
+we have
+$\partial_{A_{\mathfrak q_i}}(a, b) =
+((-1)^{e_if_i}u_i^{f_i}v_i^{-e_i})^{m_i}$ by definition.
+Since $a, b$ are nonzerodivisors the
+$(2, 1)$-periodic complex $(A/(ab), a, b)$ has vanishing cohomology.
+Denote $M_i$ the image of $A/(ab)$ in $A_{\mathfrak q_i}/(ab)$.
+Then we have a map
+$$
+A/(ab) \longrightarrow \bigoplus M_i
+$$
+whose kernel and cokernel are supported in $\{\mathfrak m\}$
+and hence have finite length. Thus we see that
+$$
+\sum e_A(M_i, a, b) = 0
+$$
+by Lemma \ref{lemma-compare-periodic-lengths}. Hence it suffices to show
+$e_A(M_i, a, b) =
+- \text{ord}_{A/\mathfrak q_i}(\partial_{A_{\mathfrak q_i}}(a, b))$.
+
+\medskip\noindent
+Let us first prove this in the case where $\pi_i, u_i, v_i$ are the images
+of elements $\pi_i, u_i, v_i \in A$ (using the same symbols
+should not cause any confusion). In this case we get
+\begin{align*}
+e_A(M_i, a, b) & =
+e_A(M_i, u_i\pi_i^{e_i}, v_i\pi_i^{f_i}) \\
+& =
+e_A(M_i, \pi_i^{e_i}, \pi_i^{f_i}) -
+e_A(\pi_i^{e_i}M_i, 0, u_i) +
+e_A(\pi_i^{f_i}M_i, 0, v_i) \\
+& =
+0 -
+f_im_i\text{ord}_{A/\mathfrak q_i}(u_i) +
+e_im_i\text{ord}_{A/\mathfrak q_i}(v_i) \\
+& =
+-m_i\text{ord}_{A/\mathfrak q_i}(u_i^{f_i}v_i^{-e_i}) =
+-\text{ord}_{A/\mathfrak q_i}(\partial_{A_{\mathfrak q_i}}(a, b))
+\end{align*}
+The second equality holds by Lemma \ref{lemma-multiply-period-length}.
+Observe that
+$M_i \subset (M_i)_{\mathfrak q_i} = A_{\mathfrak q_i}/(\pi_i^{e_i + f_i})$
+and
+$(\pi_i^{e_i}M_i)_{\mathfrak q_i} \cong A_{\mathfrak q_i}/\pi_i^{f_i}$ and
+$(\pi_i^{f_i}M_i)_{\mathfrak q_i} \cong A_{\mathfrak q_i}/\pi_i^{e_i}$.
+The $0$ in the third equality comes from
+Lemma \ref{lemma-powers-period-length-zero}
+and the other two terms come from
+Lemma \ref{lemma-length-multiplication}.
+The last two equalities follow from multiplicativity of
+the order function and from the definition of our tame symbol.
+
+\medskip\noindent
+In general, we may first choose $c \in A$, $c \not \in \mathfrak q_i$
+such that $c\pi_i \in A$. After replacing $\pi_i$ by $c\pi_i$
+and $u_i$ by $c^{-e_i}u_i$ and $v_i$ by $c^{-f_i}v_i$
+we may and do assume $\pi_i$ is in $A$.
+Next, choose a new $c \in A$, $c \not \in \mathfrak q_i$
+with $cu_i, cv_i \in A$. Then we observe that
+$$
+e_A(M_i, ca, cb) = e_A(M_i, a, b) - e_A(aM_i, 0, c) + e_A(bM_i, 0, c)
+$$
+by Lemma \ref{lemma-length-multiplication}.
+On the other hand, we have
+$$
+\partial_{A_{\mathfrak q_i}}(ca, cb) =
+c^{m_i(f_i - e_i)}\partial_{A_{\mathfrak q_i}}(a, b)
+$$
+in $\kappa(\mathfrak q_i)^*$ because $c$ is a unit in $A_{\mathfrak q_i}$.
+The arguments in the previous paragraph show that
+$e_A(M_i, ca, cb) = -
+\text{ord}_{A/\mathfrak q_i}(\partial_{A_{\mathfrak q_i}}(ca, cb))$.
+Thus it suffices to prove
+$$
+e_A(aM_i, 0, c) = \text{ord}_{A/\mathfrak q_i}(c^{m_if_i})
+\quad\text{and}\quad
+e_A(bM_i, 0, c) = \text{ord}_{A/\mathfrak q_i}(c^{m_ie_i})
+$$
+and this follows from Lemma \ref{lemma-length-multiplication}
+by the description (see above)
+of what happens when we localize at $\mathfrak q_i$.
+\end{proof}
+
+\begin{lemma}[Key Lemma]
+\label{lemma-milnor-gersten-low-degree}
+\begin{reference}
+When $A$ is an excellent ring this is \cite[Proposition 1]{Kato-Milnor-K}.
+\end{reference}
+Let $A$ be a $2$-dimensional Noetherian local domain with fraction field $K$.
+Let $f, g \in K^*$.
+Let $\mathfrak q_1, \ldots, \mathfrak q_t$ be the height
+$1$ primes $\mathfrak q$ of $A$ such that either $f$ or $g$ is not an
+element of $A^*_{\mathfrak q}$.
+Then we have
+$$
+\sum\nolimits_{i = 1, \ldots, t}
+\text{ord}_{A/\mathfrak q_i}(\partial_{A_{\mathfrak q_i}}(f, g))
+=
+0
+$$
+We can also write this as
+$$
+\sum\nolimits_{\text{height}(\mathfrak q) = 1}
+\text{ord}_{A/\mathfrak q}(\partial_{A_{\mathfrak q}}(f, g))
+=
+0
+$$
+since at any height $1$ prime $\mathfrak q$
+of $A$ where $f, g \in A^*_{\mathfrak q}$
+we have $\partial_{A_{\mathfrak q}}(f, g) = 1$.
+\end{lemma}
+
+\begin{proof}
+Since the tame symbols $\partial_{A_{\mathfrak q}}(f, g)$ are
+bilinear and the order functions $\text{ord}_{A/\mathfrak q}$
+are additive it suffices to prove the formula when
+$f$ and $g$ are elements of $A$. This case is proven in
+Lemma \ref{lemma-key-nonzerodivisors}.
+\end{proof}
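+
+\medskip\noindent
+As an illustration, consider $A = k[x, y]_{(x, y)}$ for a field $k$
+and $f = x$, $g = y$. At $\mathfrak q = (x)$ the local ring
+$A_{\mathfrak q}$ is a discrete valuation ring with uniformizer $x$
+in which $y$ is a unit, and (\ref{equation-tame-symbol}) gives
+$\partial_{A_{\mathfrak q}}(x, y) = y^{-1} \bmod \mathfrak q$, which has
+order $-1$ along $A/\mathfrak q = k[y]_{(y)}$. Symmetrically, at
+$\mathfrak q = (y)$ we get
+$\partial_{A_{\mathfrak q}}(x, y) = x \bmod \mathfrak q$ of order $1$.
+At every other height $1$ prime both $x$ and $y$ are units and the
+symbol is $1$. The sum $-1 + 1 = 0$ as asserted by
+Lemma \ref{lemma-milnor-gersten-low-degree}.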
+
+\begin{remark}[Milnor K-theory]
+\label{remark-gersten-complex-milnor}
+For a field $k$ let us denote $K^M_*(k)$ the quotient of
+the tensor algebra on $k^*$ divided by the two-sided ideal
+generated by the elements $x \otimes (1 - x)$ for $x \in k \setminus \{0, 1\}$.
+Thus $K^M_0(k) = \mathbf{Z}$, $K_1^M(k) = k^*$, and
+$$
+K^M_2(k) = k^* \otimes_\mathbf{Z} k^* / \langle x \otimes (1 - x) \rangle
+$$
+If $A$ is a discrete valuation ring with fraction field $F = \text{Frac}(A)$
+and residue field $\kappa$, there is a tame symbol
+$$
+\partial_A : K_{i + 1}^M(F) \to K_i^M(\kappa)
+$$
+defined as in Section \ref{section-tame-symbol}; see \cite{Kato-Milnor-K}.
+More generally, this map can be extended to the case where $A$ is an
+excellent local domain of dimension $1$ using normalization and norm
+maps on $K_i^M$, see \cite{Kato-Milnor-K}; presumably the method in
+Section \ref{section-tame-symbol} can be used to extend the construction
+of the tame symbol $\partial_A$ to arbitrary Noetherian local domains $A$
+of dimension $1$. Next, let $X$ be a Noetherian scheme with a
+dimension function $\delta$. Then we can use these tame symbols to get
+the arrows in the following:
+$$
+\bigoplus\nolimits_{\delta(x) = j + 1} K^M_{i + 1}(\kappa(x))
+\longrightarrow
+\bigoplus\nolimits_{\delta(x) = j} K^M_i(\kappa(x))
+\longrightarrow
+\bigoplus\nolimits_{\delta(x) = j - 1} K^M_{i - 1}(\kappa(x))
+$$
+However, it is not clear that the composition is zero, i.e., that
+we obtain a complex of abelian groups.
+For excellent $X$ this is shown in \cite{Kato-Milnor-K}.
+When $i = 1$ and $j$ arbitrary, this follows from
+Lemma \ref{lemma-milnor-gersten-low-degree}.
+\end{remark}
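+
+\noindent
+Concretely, if $A$ is a discrete valuation ring with valuation $v$,
+fraction field $F$, and residue field $\kappa$, then the map
+$\partial_A : K_2^M(F) \to K_1^M(\kappa) = \kappa^*$ of the remark
+above is given on a symbol $\{f, g\}$ by
+$$
+\partial_A(\{f, g\}) =
+(-1)^{v(f)v(g)}\, \overline{f^{v(g)} g^{-v(f)}}
+$$
+where the bar denotes the image in $\kappa^*$; note that
+$f^{v(g)} g^{-v(f)}$ has valuation zero, so this makes sense, and
+it agrees with the symbol constructed in
+Section \ref{section-tame-symbol}.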
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Setup}
+\label{section-setup}
+
+\noindent
+We will throughout work over a locally Noetherian universally
+catenary base $S$ endowed with a dimension function $\delta$.
+Although it is likely possible to generalize (parts of) the
+discussion in the chapter, it seems that this is a good first
+approximation. It is exactly the generality discussed in \cite{Thorup}.
+We usually do not assume our schemes are
+separated or quasi-compact. Many interesting algebraic stacks
+are non-separated and/or non-quasi-compact and this is a good
+case study to see how to develop a reasonable theory for those as well.
+In order to reference these hypotheses we give them a number.
+
+\begin{situation}
+\label{situation-setup}
+Here $S$ is a locally Noetherian and universally catenary scheme.
+Moreover, we assume $S$ is endowed with a dimension function
+$\delta : S \longrightarrow \mathbf{Z}$.
+\end{situation}
+
+\noindent
+See Morphisms, Definition \ref{morphisms-definition-universally-catenary}
+for the notion of a universally catenary scheme, and see
+Topology, Definition \ref{topology-definition-dimension-function}
+for the notion of a dimension function. Recall that any locally
+Noetherian catenary scheme locally has a dimension function, see
+Properties, Lemma \ref{properties-lemma-catenary-dimension-function}.
+Moreover, there are lots of schemes which are universally catenary,
+see Morphisms, Lemma \ref{morphisms-lemma-ubiquity-uc}.
+
+\medskip\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Any scheme $X$ locally of finite type over $S$ is locally Noetherian
+and catenary. In fact, $X$ has a canonical dimension function
+$$
+\delta = \delta_{X/S} : X \longrightarrow \mathbf{Z}
+$$
+associated to $(f : X \to S, \delta)$ given by the rule
+$\delta_{X/S}(x) = \delta(f(x)) + \text{trdeg}_{\kappa(f(x))}\kappa(x)$.
+See Morphisms, Lemma \ref{morphisms-lemma-dimension-function-propagates}.
+Moreover, if $h : X \to Y$ is a morphism of schemes locally of finite
+type over $S$, and $x \in X$, $y = h(x)$,
+then obviously
+$\delta_{X/S}(x) = \delta_{Y/S}(y) + \text{trdeg}_{\kappa(y)}\kappa(x)$.
+We will freely use this function and its properties in the following.
+
+\medskip\noindent
+Here are the basic examples of setups as above.
+In fact, the main interest lies in the case where the base
+is the spectrum of a field, or the case where the base
+is the spectrum of a Dedekind ring (e.g.\ $\mathbf{Z}$,
+or a discrete valuation ring).
+
+\begin{example}
+\label{example-field}
+Here $S = \Spec(k)$ and $k$ is a field.
+We set $\delta(pt) = 0$ where $pt$ indicates the unique point of $S$.
+The pair $(S, \delta)$ is an example of a situation as in
+Situation \ref{situation-setup} by
+Morphisms, Lemma \ref{morphisms-lemma-ubiquity-uc}.
+\end{example}
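+
+\noindent
+For instance, for $X = \mathbf{A}^1_k = \Spec(k[t])$ over $S = \Spec(k)$
+the canonical dimension function $\delta_{X/S}$ satisfies
+$\delta_{X/S}(\eta) = \text{trdeg}_k\, k(t) = 1$ at the generic point
+$\eta$ and $\delta_{X/S}(x) = 0$ at every closed point $x$,
+as $\kappa(x)/k$ is finite.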
+
+\begin{example}
+\label{example-domain-dimension-1}
+Here $S = \Spec(A)$, where $A$ is a Noetherian domain
+of dimension $1$.
+For example we could consider $A = \mathbf{Z}$.
+We set $\delta(\mathfrak p) = 0$ if
+$\mathfrak p$ is a maximal ideal and $\delta(\mathfrak p) = 1$
+if $\mathfrak p = (0)$ corresponds to the generic point.
+This is an example of Situation \ref{situation-setup} by
+Morphisms, Lemma \ref{morphisms-lemma-ubiquity-uc}.
+\end{example}
+
+\begin{example}
+\label{example-CM-irreducible}
+Here $S$ is a Cohen-Macaulay scheme. Then $S$ is universally catenary by
+Morphisms, Lemma \ref{morphisms-lemma-ubiquity-uc}.
+We set $\delta(s) = -\dim(\mathcal{O}_{S, s})$.
+If $s' \leadsto s$ is a nontrivial specialization of points of $S$,
+then $\mathcal{O}_{S, s'}$ is the localization of $\mathcal{O}_{S, s}$
+at a nonmaximal prime ideal $\mathfrak p \subset \mathcal{O}_{S, s}$, see
+Schemes, Lemma \ref{schemes-lemma-specialize-points}.
+Thus $\dim(\mathcal{O}_{S, s}) =
+\dim(\mathcal{O}_{S, s'}) + \dim(\mathcal{O}_{S, s}/\mathfrak p) >
+\dim(\mathcal{O}_{S, s'})$ by
+Algebra, Lemma \ref{algebra-lemma-CM-dim-formula}.
+Hence $\delta(s') > \delta(s)$. If $s' \leadsto s$ is
+an immediate specialization, then there is no prime
+ideal strictly between $\mathfrak p$ and $\mathfrak m_s$
+and we find $\delta(s') = \delta(s) + 1$. Thus $\delta$
+is a dimension function. In other words, the pair $(S, \delta)$
+is an example of Situation \ref{situation-setup}.
+\end{example}
+
+\noindent
+If $S$ is Jacobson and $\delta$ sends closed points to zero, then $\delta$
+is the function sending a point to the dimension of its closure.
+
+\begin{lemma}
+\label{lemma-delta-is-dimension}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Assume in addition $S$ is a Jacobson scheme, and $\delta(s) = 0$ for every
+closed point $s$ of $S$. Let $X$ be locally of finite type over $S$.
+Let $Z \subset X$ be an integral closed subscheme and let
+$\xi \in Z$ be its generic point. The following integers are the same:
+\begin{enumerate}
+\item $\delta_{X/S}(\xi)$,
+\item $\dim(Z)$, and
+\item $\dim(\mathcal{O}_{Z, z})$ where $z$ is a closed point of $Z$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $X \to S$, $\xi \in Z \subset X$ be as in the lemma.
+Since $X$ is locally of finite type over $S$ we see that
+$X$ is Jacobson, see
+Morphisms, Lemma \ref{morphisms-lemma-Jacobson-universally-Jacobson}.
+Hence closed points of $X$ are dense in every closed subset of $Z$
+and map to closed points of $S$. Hence given any chain
+of irreducible closed subsets of $Z$ we can end it with a closed point of $Z$.
+It follows that $\dim(Z) = \sup_z \dim(\mathcal{O}_{Z, z})$
+(see Properties, Lemma \ref{properties-lemma-codimension-local-ring})
+where $z \in Z$ runs over the closed points of $Z$.
+Note that $\dim(\mathcal{O}_{Z, z}) = \delta(\xi) - \delta(z)$
+by the properties of a dimension function.
+For each closed $z \in Z$ the field extension
+$\kappa(z)/\kappa(f(z))$ is finite, see Morphisms,
+Lemma \ref{morphisms-lemma-jacobson-finite-type-points}.
+Hence $\delta_{X/S}(z) = \delta(f(z)) = 0$ for $z \in Z$ closed.
+It follows that all three integers are equal.
+\end{proof}
+
+\noindent
+In the situation of the lemma above the
+value of $\delta$ at the generic point of a closed irreducible subset
+is the dimension of the irreducible closed subset.
+However, in general we cannot expect the equality to hold.
+For example if $S = \Spec(\mathbf{C}[[t]])$ and
+$X = \Spec(\mathbf{C}((t)))$ then we would get
+$\delta(x) = 1$ for the unique point of $X$, but $\dim(X) = 0$.
+Still we want to think of $\delta_{X/S}$ as giving the
+dimension of the irreducible closed subschemes. Thus we introduce
+the following terminology.
+
+\begin{definition}
+\label{definition-delta-dimension}
+Let $(S, \delta)$ as in Situation \ref{situation-setup}.
+For any scheme $X$ locally of finite type over $S$
+and any irreducible closed subset $Z \subset X$ we define
+$$
+\dim_\delta(Z) = \delta(\xi)
+$$
+where $\xi \in Z$ is the generic point of $Z$.
+We will call this the {\it $\delta$-dimension of $Z$}.
+If $Z$ is a closed subscheme of $X$, then we define
+$\dim_\delta(Z)$ as the supremum of the $\delta$-dimensions
+of its irreducible components.
+\end{definition}
+
+
+
+
+
+
+
+\section{Cycles}
+\label{section-cycles}
+
+\noindent
+Since we are not assuming our schemes are quasi-compact we have
+to be a little careful when defining cycles. We have to allow
+infinite sums because a rational function may have infinitely many
+poles for example. In any case, if $X$ is quasi-compact then a
+cycle is a finite sum as usual.
+
+\begin{definition}
+\label{definition-cycles}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $k \in \mathbf{Z}$.
+\begin{enumerate}
+\item A {\it cycle on $X$} is a formal sum
+$$
+\alpha = \sum n_Z [Z]
+$$
+where the sum is over integral closed subschemes $Z \subset X$,
+each $n_Z \in \mathbf{Z}$, and the collection
+$\{Z; n_Z \not = 0\}$ is locally finite
+(Topology, Definition \ref{topology-definition-locally-finite}).
+\item A {\it $k$-cycle} on $X$ is a cycle
+$$
+\alpha = \sum n_Z [Z]
+$$
+where $n_Z \not = 0 \Rightarrow \dim_\delta(Z) = k$.
+\item The abelian group of all $k$-cycles on $X$ is denoted $Z_k(X)$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In other words, a $k$-cycle on $X$
+is a locally finite formal $\mathbf{Z}$-linear
+combination of integral closed subschemes of $\delta$-dimension $k$.
+Addition of $k$-cycles $\alpha = \sum n_Z[Z]$ and
+$\beta = \sum m_Z[Z]$ is given by
+$$
+\alpha + \beta = \sum (n_Z + m_Z)[Z],
+$$
+i.e., by adding the coefficients.
+
+\begin{remark}
+\label{remark-cycles-pointwise}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $k \in \mathbf{Z}$.
+Then we can write
+$$
+Z_k(X) = \bigoplus\nolimits_{\delta(x) = k}' K_0^M(\kappa(x))
+\quad\subset\quad
+\bigoplus\nolimits_{\delta(x) = k} K_0^M(\kappa(x))
+$$
+with the following notation and conventions:
+\begin{enumerate}
+\item $K_0^M(\kappa(x)) = \mathbf{Z}$ is the degree $0$ part of
+the Milnor K-theory of the residue field $\kappa(x)$ of the point
+$x \in X$ (see Remark \ref{remark-gersten-complex-milnor}), and
+\item the direct sum on the right is over all points $x \in X$
+with $\delta(x) = k$,
+\item the notation $\bigoplus'_x$ signifies that we consider the
+subgroup consisting of locally finite elements; namely, elements
+$\sum_x n_x$ such that for every quasi-compact open $U \subset X$
+the set of $x \in U$ with $n_x \not = 0$ is finite.
+\end{enumerate}
+\end{remark}
+
+
+
+
+\section{Cycle associated to a closed subscheme}
+\label{section-cycle-of-closed-subscheme}
+
+\begin{lemma}
+\label{lemma-multiplicity-finite}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $Z \subset X$ be a closed subscheme.
+\begin{enumerate}
+\item Let $Z' \subset Z$ be an irreducible component and
+let $\xi \in Z'$ be its generic point.
+Then
+$$
+\text{length}_{\mathcal{O}_{X, \xi}} \mathcal{O}_{Z, \xi} < \infty
+$$
+\item If $\dim_\delta(Z) \leq k$ and $\xi \in Z$ with
+$\delta(\xi) = k$, then $\xi$ is a generic point of an
+irreducible component of $Z$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $Z' \subset Z$, $\xi \in Z'$ be as in (1).
+Then $\dim(\mathcal{O}_{Z, \xi}) = 0$ (for example by
+Properties, Lemma \ref{properties-lemma-codimension-local-ring}).
+Hence $\mathcal{O}_{Z, \xi}$ is a Noetherian
+local ring of dimension zero, and hence has finite length over
+itself (see
+Algebra, Proposition \ref{algebra-proposition-dimension-zero-ring}).
+Hence, it also has finite length over $\mathcal{O}_{X, \xi}$, see
+Algebra, Lemma \ref{algebra-lemma-pushdown-module}.
+
+\medskip\noindent
+Assume $\xi \in Z$ and $\delta(\xi) = k$.
+Consider the closure $Z' = \overline{\{\xi\}}$. It is an irreducible
+closed subscheme with $\dim_\delta(Z') = k$ by definition.
+Since $\dim_\delta(Z) \leq k$ it must be an irreducible component
+of $Z$. Hence we see (2) holds.
+\end{proof}
+
+\begin{definition}
+\label{definition-cycle-associated-to-closed-subscheme}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $Z \subset X$ be a closed subscheme.
+\begin{enumerate}
+\item For any irreducible component $Z' \subset Z$ with generic point $\xi$
+the integer
+$m_{Z', Z} = \text{length}_{\mathcal{O}_{X, \xi}} \mathcal{O}_{Z, \xi}$
+(Lemma \ref{lemma-multiplicity-finite})
+is called the {\it multiplicity of $Z'$ in $Z$}.
+\item Assume $\dim_\delta(Z) \leq k$.
+The {\it $k$-cycle associated to $Z$} is
+$$
+[Z]_k
+=
+\sum m_{Z', Z}[Z']
+$$
+where the sum is over the irreducible components of $Z$
+of $\delta$-dimension $k$. (This is a $k$-cycle by
+Divisors, Lemma \ref{divisors-lemma-components-locally-finite}.)
+\end{enumerate}
+\end{definition}
+
+\noindent
+It is important to note that we only define $[Z]_k$ if the $\delta$-dimension
+of $Z$ does not exceed $k$. In other words, by convention, if we write
+$[Z]_k$ then this implies that $\dim_\delta(Z) \leq k$.
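+
+\noindent
+As a simple illustration of the definition, take $S = \Spec(k)$ for a
+field $k$ with $\delta$ given by the transcendence degree of the
+residue field (a standard instance of Situation \ref{situation-setup}),
+let $X = \mathbf{A}^1_k = \Spec(k[x])$, and let
+$Z = \Spec(k[x]/(x^2)) \subset X$, so that $\dim_\delta(Z) = 0$.
+The only irreducible component of $Z$ is the origin $Z' = \{0\}$
+with generic point $\xi$, and
+$$
+m_{Z', Z}
+= \text{length}_{\mathcal{O}_{X, \xi}} \mathcal{O}_{Z, \xi}
+= \text{length}_{k[x]_{(x)}}\left(k[x]/(x^2)\right)
+= 2.
+$$
+Hence $[Z]_0 = 2[Z']$; the cycle records the nonreduced structure
+of $Z$ through the multiplicity.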
+
+
+
+\section{Cycle associated to a coherent sheaf}
+\label{section-cycle-of-coherent-sheaf}
+
+
+
+\begin{lemma}
+\label{lemma-length-finite}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item The collection of irreducible components of the support of
+$\mathcal{F}$ is locally finite.
+\item Let $Z' \subset \text{Supp}(\mathcal{F})$
+be an irreducible component and
+let $\xi \in Z'$ be its generic point.
+Then
+$$
+\text{length}_{\mathcal{O}_{X, \xi}} \mathcal{F}_\xi < \infty
+$$
+\item If $\dim_\delta(\text{Supp}(\mathcal{F})) \leq k$
+and $\xi \in \text{Supp}(\mathcal{F})$ with $\delta(\xi) = k$, then $\xi$ is a
+generic point of an irreducible component of $\text{Supp}(\mathcal{F})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-support-closed}
+the support $Z$ of $\mathcal{F}$ is a closed subset of $X$.
+We may think of $Z$ as a reduced closed subscheme of $X$
+(Schemes, Lemma \ref{schemes-lemma-reduced-closed-subscheme}).
+Hence (1) follows from
+Divisors, Lemma \ref{divisors-lemma-components-locally-finite} applied to $Z$
+and (3) follows from
+Lemma \ref{lemma-multiplicity-finite} applied to $Z$.
+
+\medskip\noindent
+Let $\xi \in Z'$ be as in (2). In this case for any nontrivial
+specialization $\xi' \leadsto \xi$ in $X$ we have $\mathcal{F}_{\xi'} = 0$.
+Recall that the non-maximal primes of $\mathcal{O}_{X, \xi}$ correspond
+to the points of $X$ specializing to $\xi$
+(Schemes, Lemma \ref{schemes-lemma-specialize-points}).
+Hence $\mathcal{F}_\xi$ is a finite $\mathcal{O}_{X, \xi}$-module
+whose support is $\{\mathfrak m_\xi\}$. Hence it has finite length
+by Algebra, Lemma \ref{algebra-lemma-support-point}.
+\end{proof}
+
+\begin{definition}
+\label{definition-cycle-associated-to-coherent-sheaf}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item For any irreducible component $Z' \subset \text{Supp}(\mathcal{F})$
+with generic point $\xi$ the integer
+$m_{Z', \mathcal{F}} = \text{length}_{\mathcal{O}_{X, \xi}} \mathcal{F}_\xi$
+(Lemma \ref{lemma-length-finite})
+is called the {\it multiplicity of $Z'$ in $\mathcal{F}$}.
+\item Assume $\dim_\delta(\text{Supp}(\mathcal{F})) \leq k$.
+The {\it $k$-cycle associated to $\mathcal{F}$} is
+$$
+[\mathcal{F}]_k
+=
+\sum m_{Z', \mathcal{F}}[Z']
+$$
+where the sum is over the irreducible components of
+$\text{Supp}(\mathcal{F})$ of $\delta$-dimension $k$.
+(This is a $k$-cycle by Lemma \ref{lemma-length-finite}.)
+\end{enumerate}
+\end{definition}
+
+\noindent
+It is important to note that we only define $[\mathcal{F}]_k$
+if $\mathcal{F}$ is coherent and the $\delta$-dimension
+of $\text{Supp}(\mathcal{F})$ does not exceed $k$. In other words,
+by convention, if we write $[\mathcal{F}]_k$ then this implies that
+$\mathcal{F}$ is coherent on $X$ and
+$\dim_\delta(\text{Supp}(\mathcal{F})) \leq k$.
+
+\begin{lemma}
+\label{lemma-cycle-closed-coherent}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $Z \subset X$ be a closed subscheme.
+If $\dim_\delta(Z) \leq k$, then $[Z]_k = [{\mathcal O}_Z]_k$.
+\end{lemma}
+
+\begin{proof}
+This is because in this case the multiplicities $m_{Z', Z}$ and
+$m_{Z', \mathcal{O}_Z}$ agree by definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-additivity-sheaf-cycle}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $0 \to \mathcal{F} \to \mathcal{G} \to \mathcal{H} \to 0$
+be a short exact sequence of coherent sheaves on $X$.
+Assume that the $\delta$-dimension of the supports
+of $\mathcal{F}$, $\mathcal{G}$, and $\mathcal{H}$ is $\leq k$.
+Then $[\mathcal{G}]_k = [\mathcal{F}]_k + [\mathcal{H}]_k$.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from additivity of lengths, see
+Algebra, Lemma \ref{algebra-lemma-length-additive}.
+\end{proof}
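+
+\noindent
+For example, with $S = \Spec(k)$ for a field $k$ and
+$X = \mathbf{A}^1_k = \Spec(k[x])$, the short exact sequence of
+coherent sheaves associated to the sequence of $k[x]$-modules
+$$
+0 \to (x)/(x^2) \to k[x]/(x^2) \to k[x]/(x) \to 0
+$$
+has all three terms supported at the origin $0 \in X$, and
+$(x)/(x^2) \cong k[x]/(x)$ as $k[x]$-modules. The lemma then reads
+$2[\{0\}] = [\{0\}] + [\{0\}]$ in $Z_0(X)$, matching the length
+computation $\text{length}(k[x]/(x^2)) = 2$.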
+
+
+
+
+
+
+
+
+
+\section{Preparation for proper pushforward}
+\label{section-preparation-pushforward}
+
+\begin{lemma}
+\label{lemma-equal-dimension}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a morphism.
+Assume $X$, $Y$ integral and $\dim_\delta(X) = \dim_\delta(Y)$.
+Then either $f(X)$ is contained in a proper closed subscheme
+of $Y$, or $f$ is dominant and the extension of function fields
+$R(X)/R(Y)$ is finite.
+\end{lemma}
+
+\begin{proof}
+The closure $\overline{f(X)} \subset Y$ is irreducible as $X$
+is irreducible (Topology, Lemmas
+\ref{topology-lemma-image-irreducible-space} and
+\ref{topology-lemma-irreducible}).
+If $\overline{f(X)} \not = Y$, then we are done.
+If $\overline{f(X)} = Y$, then $f$ is dominant and by
+Morphisms,
+Lemma \ref{morphisms-lemma-dominant-finite-number-irreducible-components}
+we see that the generic point $\eta_Y$ of $Y$ is in the image of $f$.
+Of course this implies that $f(\eta_X) = \eta_Y$, where $\eta_X \in X$
+is the generic point of $X$. Since $\delta(\eta_X) = \delta(\eta_Y)$
+we see that $R(Y) = \kappa(\eta_Y) \subset \kappa(\eta_X) = R(X)$
+is an extension of transcendence degree $0$.
+Hence $R(Y) \subset R(X)$ is a finite extension by
+Morphisms, Lemma \ref{morphisms-lemma-finite-degree}
+(which applies by
+Morphisms, Lemma \ref{morphisms-lemma-permanence-finite-type}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-compact-locally-finite}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a morphism.
+Assume $f$ is quasi-compact, and $\{Z_i\}_{i \in I}$ is a locally
+finite collection of closed subsets of $X$.
+Then $\{\overline{f(Z_i)}\}_{i \in I}$ is a locally finite
+collection of closed subsets of $Y$.
+\end{lemma}
+
+\begin{proof}
+Let $V \subset Y$ be a quasi-compact open subset.
+Since $f$ is quasi-compact the open $f^{-1}(V)$ is
+quasi-compact. Hence the set
+$\{i \in I \mid Z_i \cap f^{-1}(V) \not = \emptyset \}$
+is finite by a simple topological argument which we omit.
+Since this is the same as the set
+$$
+\{i \in I \mid f(Z_i) \cap V \not = \emptyset \} =
+\{i \in I \mid \overline{f(Z_i)} \cap V \not = \emptyset \}
+$$
+the lemma is proved.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Proper pushforward}
+\label{section-proper-pushforward}
+
+\begin{definition}
+\label{definition-proper-pushforward}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a morphism.
+Assume $f$ is proper.
+\begin{enumerate}
+\item Let $Z \subset X$ be an integral closed subscheme
+with $\dim_\delta(Z) = k$. We define
+$$
+f_*[Z] =
+\left\{
+\begin{matrix}
+0 & \text{if} & \dim_\delta(f(Z))< k, \\
+\deg(Z/f(Z)) [f(Z)] & \text{if} & \dim_\delta(f(Z)) = k.
+\end{matrix}
+\right.
+$$
+Here we think of $f(Z) \subset Y$ as an integral closed subscheme.
+The degree of $Z$ over $f(Z)$ is finite if
+$\dim_\delta(f(Z)) = \dim_\delta(Z)$
+by Lemma \ref{lemma-equal-dimension}.
+\item Let $\alpha = \sum n_Z [Z]$ be a $k$-cycle on $X$. We define the
+{\it pushforward} of $\alpha$ as the sum
+$$
+f_* \alpha = \sum n_Z f_*[Z]
+$$
+where each $f_*[Z]$ is defined as above. The sum is locally finite
+by Lemma \ref{lemma-quasi-compact-locally-finite} above.
+\end{enumerate}
+\end{definition}
+
+\noindent
+By definition the proper pushforward of cycles
+$$
+f_* : Z_k(X) \longrightarrow Z_k(Y)
+$$
+is a homomorphism of abelian groups. It turns $X \mapsto Z_k(X)$
+into a covariant functor on the category of schemes locally of
+finite type over $S$ with morphisms equal to proper morphisms.
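+
+\noindent
+For example, take $S = Y = \Spec(k)$ for a field $k$ and let
+$f : X = \mathbf{P}^1_k \to \Spec(k)$ be the structure morphism,
+which is proper. For the $1$-cycle $[X]$ we get $f_*[X] = 0$ because
+$\dim_\delta(f(X)) = 0 < 1$, while for a closed point $p \in X$ the
+two $\delta$-dimensions agree and
+$$
+f_*[p] = \deg(p/\Spec(k))\,[\Spec(k)] = [\kappa(p) : k]\,[\Spec(k)]
+$$
+in $Z_0(\Spec(k))$.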
+
+\begin{lemma}
+\label{lemma-compose-pushforward}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$, and $Z$ be locally of finite type over $S$.
+Let $f : X \to Y$ and $g : Y \to Z$ be proper morphisms.
+Then $g_* \circ f_* = (g \circ f)_*$ as maps $Z_k(X) \to Z_k(Z)$.
+\end{lemma}
+
+\begin{proof}
+Let $W \subset X$ be an integral closed subscheme of $\delta$-dimension $k$.
+Consider $W' = f(W) \subset Y$ and $W'' = g(f(W)) \subset Z$.
+Since $f$, $g$ are proper we see that $W'$ (resp.\ $W''$) is
+an integral closed subscheme of $Y$ (resp.\ $Z$).
+We have to show that $g_*(f_*[W]) = (g \circ f)_*[W]$.
+If $\dim_\delta(W'') < k$, then both sides are zero.
+If $\dim_\delta(W'') = k$, then we see the induced morphisms
+$$
+W \longrightarrow
+W' \longrightarrow
+W''
+$$
+both satisfy the hypotheses of Lemma \ref{lemma-equal-dimension}. Hence
+$$
+g_*(f_*[W]) = \deg(W/W')\deg(W'/W'')[W''],
+\quad
+(g \circ f)_*[W] = \deg(W/W'')[W''].
+$$
+Then we can apply
+Morphisms, Lemma \ref{morphisms-lemma-degree-composition}
+to conclude.
+\end{proof}
+
+\noindent
+A closed immersion is proper. If $i : Z \to X$ is a closed immersion
+then the maps
+$$
+i_* : Z_k(Z) \longrightarrow Z_k(X)
+$$
+are all {\it injective}.
+
+\begin{lemma}
+\label{lemma-exact-sequence-closed}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $X_1, X_2 \subset X$
+be closed subschemes such that $X = X_1 \cup X_2$ set theoretically.
+For every $k \in \mathbf{Z}$ the sequence of abelian groups
+$$
+\xymatrix{
+Z_k(X_1 \cap X_2) \ar[r] &
+Z_k(X_1) \oplus Z_k(X_2) \ar[r] &
+Z_k(X) \ar[r] &
+0
+}
+$$
+is exact. Here $X_1 \cap X_2$ is the scheme theoretic intersection and
+the maps are the pushforward maps with one multiplied by $-1$.
+\end{lemma}
+
+\begin{proof}
+First assume $X$ is quasi-compact. Then $Z_k(X)$ is a free $\mathbf{Z}$-module
+with basis given by the elements $[Z]$ where $Z \subset X$ is integral
+closed of $\delta$-dimension $k$. The groups
+$Z_k(X_1)$, $Z_k(X_2)$, $Z_k(X_1 \cap X_2)$ are free on the subset of these
+$Z$ such that $Z \subset X_1$, $Z \subset X_2$, $Z \subset X_1 \cap X_2$.
+This immediately proves the lemma in this case. The general case is similar
+and the proof is omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cycle-push-sheaf}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a proper morphism of schemes which are
+locally of finite type over $S$.
+\begin{enumerate}
+\item Let $Z \subset X$ be a closed subscheme with $\dim_\delta(Z) \leq k$.
+Then
+$$
+f_*[Z]_k = [f_*{\mathcal O}_Z]_k.
+$$
+\item Let $\mathcal{F}$ be a coherent sheaf on $X$ such that
+$\dim_\delta(\text{Supp}(\mathcal{F})) \leq k$. Then
+$$
+f_*[\mathcal{F}]_k = [f_*{\mathcal F}]_k.
+$$
+\end{enumerate}
+Note that the statement makes sense since $f_*\mathcal{F}$ and
+$f_*\mathcal{O}_Z$ are coherent $\mathcal{O}_Y$-modules by
+Cohomology of Schemes, Proposition
+\ref{coherent-proposition-proper-pushforward-coherent}.
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from (2) and Lemma \ref{lemma-cycle-closed-coherent}.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Assume that $\dim_\delta(\text{Supp}(\mathcal{F})) \leq k$.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-support-closed}
+there exists a closed subscheme $i : Z \to X$ and a coherent
+$\mathcal{O}_Z$-module $\mathcal{G}$ such that
+$i_*\mathcal{G} \cong \mathcal{F}$ and such that the support
+of $\mathcal{F}$ is $Z$. Let $Z' \subset Y$ be the scheme theoretic image
+of $f|_Z : Z \to Y$. Consider the commutative diagram of schemes
+$$
+\xymatrix{
+Z \ar[r]_i \ar[d]_{f|_Z} &
+X \ar[d]^f \\
+Z' \ar[r]^{i'} & Y
+}
+$$
+We have $f_*\mathcal{F} = f_*i_*\mathcal{G} = i'_*(f|_Z)_*\mathcal{G}$
+by going around the diagram in two ways. Suppose we know the result holds
+for closed immersions and for $f|_Z$. Then we see that
+$$
+f_*[\mathcal{F}]_k = f_*i_*[\mathcal{G}]_k
+= (i')_*(f|_Z)_*[\mathcal{G}]_k =
+(i')_*[(f|_Z)_*\mathcal{G}]_k =
+[(i')_*(f|_Z)_*\mathcal{G}]_k = [f_*\mathcal{F}]_k
+$$
+as desired. The case of a closed immersion is straightforward (omitted).
+Note that $f|_Z : Z \to Z'$ is a dominant morphism (see
+Morphisms, Lemma \ref{morphisms-lemma-quasi-compact-scheme-theoretic-image}).
+Thus we have reduced to the case where
+$\dim_\delta(X) \leq k$ and $f : X \to Y$ is proper and dominant.
+
+\medskip\noindent
+Assume $\dim_\delta(X) \leq k$ and $f : X \to Y$ is proper and dominant.
+Since $f$ is dominant, for every irreducible component $Z \subset Y$
+with generic point $\eta$ there exists a point $\xi \in X$ such
+that $f(\xi) = \eta$. Hence $\delta(\eta) \leq \delta(\xi) \leq k$.
+Thus we see that in the expressions
+$$
+f_*[\mathcal{F}]_k = \sum n_Z[Z],
+\quad
+\text{and}
+\quad
+[f_*\mathcal{F}]_k = \sum m_Z[Z]
+$$
+whenever $n_Z \not = 0$ or $m_Z \not = 0$ the integral closed
+subscheme $Z$ is actually an irreducible component of $Y$ of
+$\delta$-dimension $k$. Pick such an integral closed subscheme
+$Z \subset Y$ and denote $\eta$ its generic point. Note that for
+any $\xi \in X$ with $f(\xi) = \eta$ we have $\delta(\xi) \geq k$
+and hence $\xi$ is a generic point of an irreducible component
+of $X$ of $\delta$-dimension $k$ as well
+(see Lemma \ref{lemma-multiplicity-finite}). Since $f$ is quasi-compact
+and $X$ is locally Noetherian, there can be only finitely many of
+these and hence $f^{-1}(\{\eta\})$ is finite.
+By Morphisms, Lemma \ref{morphisms-lemma-generically-finite} there exists
+an open neighbourhood $\eta \in V \subset Y$ such that $f^{-1}(V) \to V$
+is finite. Replacing $Y$ by $V$ and $X$ by $f^{-1}(V)$ we reduce to the
+case where $Y$ is affine, and $f$ is finite.
+
+\medskip\noindent
+Write $Y = \Spec(R)$ and $X = \Spec(A)$ (possible as
+a finite morphism is affine).
+Then $R$ and $A$ are Noetherian rings and $A$ is finite over $R$.
+Moreover $\mathcal{F} = \widetilde{M}$ for some finite $A$-module
+$M$. Note that $f_*\mathcal{F}$ corresponds to $M$ viewed as an $R$-module.
+Let $\mathfrak p \subset R$ be the minimal prime corresponding
+to $\eta \in Y$. The coefficient of $Z$ in $[f_*\mathcal{F}]_k$
+is clearly $\text{length}_{R_{\mathfrak p}}(M_{\mathfrak p})$.
+Let $\mathfrak q_i$, $i = 1, \ldots, t$ be the primes of $A$
+lying over $\mathfrak p$. Then $A_{\mathfrak p} = \prod A_{\mathfrak q_i}$
+since $A_{\mathfrak p}$ is an Artinian ring being finite over the
+dimension zero local Noetherian ring $R_{\mathfrak p}$.
+Clearly the coefficient of $Z$ in $f_*[\mathcal{F}]_k$ is
+$$
+\sum\nolimits_{i = 1, \ldots, t}
+[\kappa(\mathfrak q_i) : \kappa(\mathfrak p)]
+\text{length}_{A_{\mathfrak q_i}}(M_{\mathfrak q_i})
+$$
+Hence the desired equality follows from
+Algebra, Lemma \ref{algebra-lemma-pushdown-module}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Preparation for flat pullback}
+\label{section-preparation-flat-pullback}
+
+
+\noindent
+Recall that a morphism $f : X \to Y$ which is locally of finite type
+is said to have relative dimension $r$ if every nonempty fibre
+is equidimensional of dimension $r$. See
+Morphisms, Definition \ref{morphisms-definition-relative-dimension-d}.
+
+\begin{lemma}
+\label{lemma-flat-inverse-image-dimension}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a morphism.
+Assume $f$ is flat of relative dimension $r$.
+For any closed subset $Z \subset Y$ we have
+$$
+\dim_\delta(f^{-1}(Z)) = \dim_\delta(Z) + r
+$$
+provided $f^{-1}(Z)$ is nonempty.
+If $Z$ is irreducible and $Z' \subset f^{-1}(Z)$ is an irreducible
+component, then $Z'$ dominates $Z$ and
+$\dim_\delta(Z') = \dim_\delta(Z) + r$.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove the final statement.
+We may replace $Y$ by the integral closed subscheme $Z$ and
+$X$ by the scheme theoretic inverse image $f^{-1}(Z) = Z \times_Y X$.
+Hence we may assume $Z = Y$ is integral and $f$ is a flat morphism
+of relative dimension $r$. Since $Y$ is locally Noetherian the
+morphism $f$ which is locally of finite type,
+is actually locally of finite presentation. Hence
+Morphisms, Lemma \ref{morphisms-lemma-fppf-open}
+applies and we see that $f$ is open.
+Let $\xi \in X$ be a generic point of an irreducible component
+of $X$. By the openness of $f$ we see that $f(\xi)$ is the
+generic point $\eta$ of $Z = Y$. Note that $\dim_\xi(X_\eta) = r$
+by assumption that $f$ has relative dimension $r$. On the other
+hand, since $\xi$ is a generic point of $X$ we see that
+$\mathcal{O}_{X, \xi} = \mathcal{O}_{X_\eta, \xi}$ has only one
+prime ideal and hence has dimension $0$. Thus by
+Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-at-a-point}
+we conclude that the transcendence
+degree of $\kappa(\xi)$ over $\kappa(\eta)$ is $r$.
+In other words, $\delta(\xi) = \delta(\eta) + r$ as desired.
+\end{proof}
+
+\noindent
+Here is the lemma that we will use to prove that the flat pullback
+of a locally finite collection of closed subschemes is locally finite.
+
+\begin{lemma}
+\label{lemma-inverse-image-locally-finite}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a morphism.
+Assume $\{Z_i\}_{i \in I}$ is a locally
+finite collection of closed subsets of $Y$.
+Then $\{f^{-1}(Z_i)\}_{i \in I}$ is a locally finite
+collection of closed subsets of $X$.
+\end{lemma}
+
+\begin{proof}
+Let $U \subset X$ be a quasi-compact open subset.
+Since the image $f(U) \subset Y$ is a quasi-compact subset
+there exists a quasi-compact open $V \subset Y$ such that
+$f(U) \subset V$. Note that
+$$
+\{i \in I \mid f^{-1}(Z_i) \cap U \not = \emptyset \}
+\subset
+\{i \in I \mid Z_i \cap V \not = \emptyset \}.
+$$
+Since the right hand side is finite by assumption we win.
+\end{proof}
+
+
+
+\section{Flat pullback}
+\label{section-flat-pullback}
+
+\noindent
+In the following we use $f^{-1}(Z)$ to denote the
+{\it scheme theoretic inverse image} of a closed subscheme
+$Z \subset Y$ for a morphism of schemes $f : X \to Y$.
+We recall that the scheme theoretic inverse image is the fibre product
+$$
+\xymatrix{
+f^{-1}(Z) \ar[r] \ar[d] & X \ar[d] \\
+Z \ar[r] & Y
+}
+$$
+and it is also the closed subscheme of $X$ cut out by the
+quasi-coherent sheaf of ideals $f^{-1}(\mathcal{I})\mathcal{O}_X$, if
+$\mathcal{I} \subset \mathcal{O}_Y$ is the quasi-coherent sheaf of ideals
+corresponding to $Z$ in $Y$.
+(This is discussed in
+Schemes, Section \ref{schemes-section-closed-immersion} and
+Lemma \ref{schemes-lemma-fibre-product-immersion}
+and Definition \ref{schemes-definition-inverse-image-closed-subscheme}.)
+
+\begin{definition}
+\label{definition-flat-pullback}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a morphism.
+Assume $f$ is flat of relative dimension $r$.
+\begin{enumerate}
+\item Let $Z \subset Y$ be an integral closed subscheme of
+$\delta$-dimension $k$. We define $f^*[Z]$ to be the
+$(k+r)$-cycle on $X$ associated to the scheme theoretic inverse image
+$$
+f^*[Z] = [f^{-1}(Z)]_{k+r}.
+$$
+This makes sense since $\dim_\delta(f^{-1}(Z)) = k + r$
+by Lemma \ref{lemma-flat-inverse-image-dimension}.
+\item Let $\alpha = \sum n_i [Z_i]$ be
+a $k$-cycle on $Y$. The {\it flat pullback of $\alpha$ by $f$}
+is the sum
+$$
+f^* \alpha = \sum n_i f^*[Z_i]
+$$
+where each $f^*[Z_i]$ is defined as above.
+The sum is locally finite by Lemma \ref{lemma-inverse-image-locally-finite}.
+\item We denote $f^* : Z_k(Y) \to Z_{k + r}(X)$ the map of abelian
+groups so obtained.
+\end{enumerate}
+\end{definition}
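+
+\noindent
+For example, let $S = \Spec(k)$ for a field $k$ and let
+$pr : \mathbf{A}^2_k = \Spec(k[x, y]) \to \mathbf{A}^1_k = \Spec(k[x])$
+be the projection, a flat morphism of relative dimension $1$. For
+the origin $Z = \{x = 0\}$ in $\mathbf{A}^1_k$, a $0$-cycle, the
+scheme theoretic inverse image is $pr^{-1}(Z) = \Spec(k[x, y]/(x))$,
+which is integral of $\delta$-dimension $1$, so
+$$
+pr^*[Z] = [pr^{-1}(Z)]_1 = [V(x)]
+$$
+in $Z_1(\mathbf{A}^2_k)$: pulling back a point along the projection
+gives the line over it with multiplicity $1$.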
+
+\noindent
+An open immersion is flat. This is an important though trivial special
+case of a flat morphism. If $U \subset X$ is open then sometimes the
+pullback by $j : U \to X$ of a cycle is called the {\it restriction} of the
+cycle to $U$. Note that in this case the maps
+$$
+j^* : Z_k(X) \longrightarrow Z_k(U)
+$$
+are all {\it surjective}. The reason is that given any integral closed
+subscheme $Z' \subset U$, we can take the closure $Z$ of $Z'$ in $X$
+and think of it as a reduced closed subscheme of $X$ (see
+Schemes, Lemma \ref{schemes-lemma-reduced-closed-subscheme}).
+And clearly $Z \cap U = Z'$, in other words
+$j^*[Z] = [Z']$ whence the surjectivity. In fact a little bit more
+is true.
+
+\begin{lemma}
+\label{lemma-exact-sequence-open}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $U \subset X$ be an open subscheme with open immersion
+$j : U \to X$, and let $i : Y = X \setminus U \to X$ be the inclusion
+of the complement viewed as a reduced closed subscheme of $X$.
+For every $k \in \mathbf{Z}$ the sequence
+$$
+\xymatrix{
+Z_k(Y) \ar[r]^{i_*} & Z_k(X) \ar[r]^{j^*} & Z_k(U) \ar[r] & 0
+}
+$$
+is an exact complex of abelian groups.
+\end{lemma}
+
+\begin{proof}
+First assume $X$ is quasi-compact. Then $Z_k(X)$ is a free $\mathbf{Z}$-module
+with basis given by the elements $[Z]$ where $Z \subset X$ is integral
+closed of $\delta$-dimension $k$. Such a basis element maps
+either to the basis element $[Z \cap U]$ or to zero if $Z \subset Y$.
+Hence the lemma is clear in this case. The general case is similar
+and the proof is omitted.
+\end{proof}
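+
+\noindent
+For example, take $S = \Spec(k)$ for a field $k$,
+$X = \mathbf{P}^1_k$, $U = \mathbf{A}^1_k$ and $Y = \{\infty\}$ with
+its reduced structure. In degree $0$ the group $Z_0(X)$ is free on
+the closed points of $\mathbf{P}^1_k$, the map $j^*$ forgets the
+coefficient of $[\infty]$, and the kernel of $j^*$ is exactly
+$\mathbf{Z}[\infty] = i_*Z_0(Y)$, so the sequence of the lemma is
+exact.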
+
+\begin{lemma}
+\label{lemma-compose-flat-pullback}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X, Y, Z$ be locally of finite type over $S$.
+Let $f : X \to Y$ and $g : Y \to Z$ be flat morphisms of relative dimensions
+$r$ and $s$. Then $g \circ f$ is flat of relative dimension
+$r + s$ and
+$$
+f^* \circ g^* = (g \circ f)^*
+$$
+as maps $Z_k(Z) \to Z_{k + r + s}(X)$.
+\end{lemma}
+
+\begin{proof}
+The composition is flat of relative dimension $r + s$ by
+Morphisms, Lemma \ref{morphisms-lemma-composition-relative-dimension-d}.
+Suppose that
+\begin{enumerate}
+\item $W \subset Z$ is a closed integral subscheme of $\delta$-dimension $k$,
+\item $W' \subset Y$ is a closed integral subscheme of $\delta$-dimension
+$k + s$ with $W' \subset g^{-1}(W)$, and
+\item $W'' \subset X$ is a closed integral subscheme of $\delta$-dimension
+$k + s + r$ with $W'' \subset f^{-1}(W')$.
+\end{enumerate}
+We have to show that the coefficient $n$ of $[W'']$ in
+$(g \circ f)^*[W]$ agrees with the coefficient $m$ of
+$[W'']$ in $f^*(g^*[W])$. That it suffices to check the lemma in these
+cases follows from Lemma \ref{lemma-flat-inverse-image-dimension}.
+Let $\xi'' \in W''$, $\xi' \in W'$
+and $\xi \in W$ be the generic points. Consider the local rings
+$A = \mathcal{O}_{Z, \xi}$, $B = \mathcal{O}_{Y, \xi'}$
+and $C = \mathcal{O}_{X, \xi''}$. Then we have local flat ring maps
+$A \to B$, $B \to C$ and moreover
+$$
+n = \text{length}_C(C/\mathfrak m_AC),
+\quad
+\text{and}
+\quad
+m = \text{length}_C(C/\mathfrak m_BC) \text{length}_B(B/\mathfrak m_AB)
+$$
+Hence the equality follows from
+Algebra, Lemma \ref{algebra-lemma-pullback-transitive}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-coherent}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X, Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a flat morphism of relative dimension $r$.
+\begin{enumerate}
+\item Let $Z \subset Y$ be a closed subscheme with
+$\dim_\delta(Z) \leq k$. Then we have
+$\dim_\delta(f^{-1}(Z)) \leq k + r$
+and $[f^{-1}(Z)]_{k + r} = f^*[Z]_k$ in $Z_{k + r}(X)$.
+\item Let $\mathcal{F}$ be a coherent sheaf on $Y$ with
+$\dim_\delta(\text{Supp}(\mathcal{F})) \leq k$.
+Then we have $\dim_\delta(\text{Supp}(f^*\mathcal{F})) \leq k + r$
+and
+$$
+f^*[{\mathcal F}]_k = [f^*{\mathcal F}]_{k+r}
+$$
+in $Z_{k + r}(X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The statements on dimensions follow immediately from
+Lemma \ref{lemma-flat-inverse-image-dimension}.
+Part (1) follows from part (2) by Lemma \ref{lemma-cycle-closed-coherent}
+and the fact that $f^*\mathcal{O}_Z = \mathcal{O}_{f^{-1}(Z)}$.
+
+\medskip\noindent
+Proof of (2).
+As $X$, $Y$ are locally Noetherian we may apply
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-Noetherian} to see
+that $\mathcal{F}$ is of finite type, hence $f^*\mathcal{F}$ is
+of finite type (Modules, Lemma \ref{modules-lemma-pullback-finite-type}),
+hence $f^*\mathcal{F}$ is coherent
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-Noetherian} again).
+Thus the lemma makes sense. Let $W \subset Y$ be an integral closed
+subscheme of $\delta$-dimension $k$, and let $W' \subset X$ be
+an integral closed subscheme of $\delta$-dimension $k + r$ mapping into $W$
+under $f$. We have to show that the coefficient $n$ of
+$[W']$ in $f^*[{\mathcal F}]_k$ agrees with the coefficient
+$m$ of $[W']$ in $[f^*{\mathcal F}]_{k+r}$. Let $\xi \in W$ and
+$\xi' \in W'$ be the generic points. Let
+$A = \mathcal{O}_{Y, \xi}$, $B = \mathcal{O}_{X, \xi'}$
+and set $M = \mathcal{F}_\xi$ as an $A$-module. (Note that
+$M$ has finite length by our dimension assumptions, but we
+actually do not need to verify this. See
+Lemma \ref{lemma-length-finite}.)
+We have $f^*\mathcal{F}_{\xi'} = B \otimes_A M$.
+Thus we see that
+$$
+n = \text{length}_B(B \otimes_A M)
+\quad
+\text{and}
+\quad
+m = \text{length}_A(M) \text{length}_B(B/\mathfrak m_AB)
+$$
+Thus the equality follows from
+Algebra, Lemma \ref{algebra-lemma-pullback-module}.
+\end{proof}
+
+
+
+\section{Push and pull}
+\label{section-push-pull}
+
+\noindent
+In this section we verify that proper pushforward and flat pullback
+are compatible when this makes sense. By the work we did above this
+is a consequence of cohomology and base change.
+
+\begin{lemma}
+\label{lemma-flat-pullback-proper-pushforward}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+be a fibre product diagram of schemes locally of finite type over $S$.
+Assume $f : X \to Y$ proper and $g : Y' \to Y$ flat of relative dimension $r$.
+Then also $f'$ is proper and $g'$ is flat of relative dimension $r$.
+For any $k$-cycle $\alpha$ on $X$ we have
+$$
+g^*f_*\alpha = f'_*(g')^*\alpha
+$$
+in $Z_{k + r}(Y')$.
+\end{lemma}
+
+\begin{proof}
+The assertion that $f'$ is proper follows from
+Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}.
+The assertion that $g'$ is flat of relative dimension $r$ follows from
+Morphisms, Lemmas \ref{morphisms-lemma-base-change-relative-dimension-d}
+and \ref{morphisms-lemma-base-change-flat}.
+It suffices to prove the equality of cycles when $\alpha = [W]$
+for some integral closed subscheme $W \subset X$ of $\delta$-dimension $k$.
+Note that in this case we have $\alpha = [\mathcal{O}_W]_k$, see
+Lemma \ref{lemma-cycle-closed-coherent}.
+By Lemmas \ref{lemma-cycle-push-sheaf} and
+\ref{lemma-pullback-coherent} it therefore suffices
+to show that $f'_*(g')^*\mathcal{O}_W$ is isomorphic to
+$g^*f_*\mathcal{O}_W$. This follows from cohomology and
+base change, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-flat}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a finite locally free morphism
+of degree $d$ (see
+Morphisms, Definition \ref{morphisms-definition-finite-locally-free}).
+Then $f$ is both proper and flat of relative dimension $0$, and
+$$
+f_*f^*\alpha = d\alpha
+$$
+for every $\alpha \in Z_k(Y)$.
+\end{lemma}
+
+\begin{proof}
+A finite locally free morphism is flat and finite by
+Morphisms, Lemma \ref{morphisms-lemma-finite-flat},
+and a finite morphism is proper
+by Morphisms, Lemma \ref{morphisms-lemma-finite-proper}.
+We omit showing that a finite
+morphism has relative dimension $0$. Thus the formula makes sense.
+To prove it, let $Z \subset Y$ be an integral closed subscheme
+of $\delta$-dimension $k$. It suffices to prove the formula
+for $\alpha = [Z]$. Since the base change of a finite locally free
+morphism is finite locally free
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-finite-locally-free})
+we see that $f_*f^*\mathcal{O}_Z$ is a finite locally free sheaf of
+rank $d$ on $Z$. Hence
+$$
+f_*f^*[Z] = f_*f^*[\mathcal{O}_Z]_k =
+[f_*f^*\mathcal{O}_Z]_k = d[Z]
+$$
+where we have used Lemmas \ref{lemma-pullback-coherent} and
+\ref{lemma-cycle-push-sheaf}.
+\end{proof}
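+
+\noindent
+A concrete example: let $S = \Spec(k)$ for a field $k$ and let
+$f : \mathbf{A}^1_k \to \mathbf{A}^1_k$ be given by
+$k[t] \to k[x]$, $t \mapsto x^2$. Then $k[x] = k[t] \oplus k[t]x$ is
+free of rank $2$, so $f$ is finite locally free of degree $2$. For
+$\alpha = [\{t = 0\}]$ the scheme theoretic fibre is
+$\Spec(k[x]/(x^2))$, whence
+$$
+f^*\alpha = [\Spec(k[x]/(x^2))]_0 = 2[\{x = 0\}]
+\quad\text{and}\quad
+f_*f^*\alpha = 2[\{t = 0\}] = 2\alpha
+$$
+as predicted by the lemma.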
+
+
+
+
+
+
+
+
+\section{Preparation for principal divisors}
+\label{section-preparation-principal-divisors}
+
+\noindent
+Some of the material in this section partially overlaps with the
+discussion in Divisors, Section \ref{divisors-section-Weil-divisors}.
+
+\begin{lemma}
+\label{lemma-divisor-delta-dimension}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Assume $X$ is
+integral.
+\begin{enumerate}
+\item If $Z \subset X$ is an integral closed subscheme, then
+the following are equivalent:
+\begin{enumerate}
+\item $Z$ is a prime divisor,
+\item $Z$ has codimension $1$ in $X$, and
+\item $\dim_\delta(Z) = \dim_\delta(X) - 1$.
+\end{enumerate}
+\item If $Z$ is an irreducible component of an effective Cartier
+divisor on $X$, then $\dim_\delta(Z) = \dim_\delta(X) - 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from the definition of a prime divisor
+(Divisors, Definition \ref{divisors-definition-Weil-divisor})
+and the definition of a dimension function
+(Topology, Definition \ref{topology-definition-dimension-function}).
+Let $\xi \in Z$ be the generic point of an irreducible component $Z$ of
+an effective Cartier divisor $D \subset X$.
+Then $\dim(\mathcal{O}_{D, \xi}) = 0$ and
+$\mathcal{O}_{D, \xi} = \mathcal{O}_{X, \xi}/(f)$ for some
+nonzerodivisor $f \in \mathcal{O}_{X, \xi}$ (Divisors,
+Lemma \ref{divisors-lemma-effective-Cartier-in-points}).
+Then $\dim(\mathcal{O}_{X, \xi}) = 1$ by
+Algebra, Lemma \ref{algebra-lemma-one-equation}. Hence $Z$ is as in (1) by
+Properties, Lemma \ref{properties-lemma-codimension-local-ring}
+and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-in-codimension-one}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\xi \in Y$ be a point.
+Assume that
+\begin{enumerate}
+\item $X$, $Y$ are integral,
+\item $Y$ is locally Noetherian
+\item $f$ is proper, dominant and $R(Y) \subset R(X)$ is finite, and
+\item $\dim(\mathcal{O}_{Y, \xi}) = 1$.
+\end{enumerate}
+Then there exists an open neighbourhood $V \subset Y$ of $\xi$
+such that $f|_{f^{-1}(V)} : f^{-1}(V) \to V$ is finite.
+\end{lemma}
+
+\begin{proof}
+This lemma is a special case of
+Varieties, Lemma \ref{varieties-lemma-finite-in-codim-1}.
+Here is a direct argument in this case.
+By Cohomology of Schemes,
+Lemma \ref{coherent-lemma-proper-finite-fibre-finite-in-neighbourhood}
+it suffices to prove that $f^{-1}(\{\xi\})$ is finite.
+We replace $Y$ by an affine open, say $Y = \Spec(R)$.
+Note that $R$ is Noetherian, as $Y$ is assumed locally Noetherian.
+Since $f$ is proper it is quasi-compact. Hence we can find a finite
+affine open covering $X = U_1 \cup \ldots \cup U_n$ with
+each $U_i = \Spec(A_i)$. Note that $R \to A_i$ is a
+finite type injective homomorphism of domains such that
+the induced extension of fraction fields is finite.
+Thus the lemma follows
+from Algebra, Lemma \ref{algebra-lemma-finite-in-codim-1}.
+\end{proof}
+
+
+\section{Principal divisors}
+\label{section-principal-divisors}
+
+\noindent
+The following definition is the analogue of
+Divisors, Definition \ref{divisors-definition-principal-divisor}
+in our current setup.
+
+\begin{definition}
+\label{definition-principal-divisor}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Assume $X$ is
+integral with $\dim_\delta(X) = n$.
+Let $f \in R(X)^*$. The {\it principal divisor
+associated to $f$} is the $(n - 1)$-cycle
+$$
+\text{div}(f) = \text{div}_X(f) = \sum \text{ord}_Z(f) [Z]
+$$
+defined in Divisors, Definition \ref{divisors-definition-principal-divisor}.
+This makes sense because prime divisors have $\delta$-dimension $n - 1$ by
+Lemma \ref{lemma-divisor-delta-dimension}.
+\end{definition}
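
\noindent
For a concrete illustration, take $S = \Spec(k)$ for a field $k$
with $\delta$ as in Example \ref{example-field} and
$X = \mathbf{A}^1_k = \Spec(k[t])$, so that $n = 1$ and
$R(X) = k(t)$. For $f = t^2/(t - 1)$ we get
$$
\text{div}_X(f) = 2[V(t)] - [V(t - 1)]
$$
since $\text{ord}_{V(t)}(f) = 2$ (as $t - 1$ is a unit in
$k[t]_{(t)}$), $\text{ord}_{V(t - 1)}(f) = -1$, and $f$ is a unit
in the local ring at every other prime divisor.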
+
+\noindent
+In the situation of the definition for $f, g \in R(X)^*$ we have
+$$
+\text{div}_X(fg) = \text{div}_X(f) + \text{div}_X(g)
+$$
+in $Z_{n - 1}(X)$. See Divisors, Lemma \ref{divisors-lemma-div-additive}.
+The following lemma will be superseded by the more general
+Lemma \ref{lemma-flat-pullback-rational-equivalence}.
+
+\begin{lemma}
+\label{lemma-flat-pullback-principal-divisor}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$. Assume $X$, $Y$
+are integral and $n = \dim_\delta(Y)$.
+Let $f : X \to Y$ be a flat morphism of relative dimension $r$.
+Let $g \in R(Y)^*$. Then
+$$
+f^*(\text{div}_Y(g)) = \text{div}_X(g)
+$$
+in $Z_{n + r - 1}(X)$.
+\end{lemma}
+
+\begin{proof}
+Note that since $f$ is flat it is dominant so that
+$f$ induces an embedding $R(Y) \subset R(X)$, and hence
+we may think of $g$ as an element of $R(X)^*$.
+Let $Z \subset X$ be an integral closed subscheme of
+$\delta$-dimension $n + r - 1$. Let $\xi \in Z$
+be its generic point. If $\dim_\delta(f(Z)) > n - 1$,
+then we see that the coefficient of $[Z]$ in the left and
+right hand side of the equation is zero.
+Hence we may assume that $Z' = \overline{f(Z)}$ is an
+integral closed subscheme of $Y$ of $\delta$-dimension $n - 1$.
+Let $\xi' = f(\xi)$. It is the generic point of $Z'$.
+Set $A = \mathcal{O}_{Y, \xi'}$, $B = \mathcal{O}_{X, \xi}$.
+The ring map $A \to B$ is a flat local homomorphism of
+Noetherian local domains of dimension $1$.
+We have $g$ in the fraction field of $A$. What we have to show is that
+$$
+\text{ord}_A(g) \text{length}_B(B/\mathfrak m_AB)
+=
+\text{ord}_B(g).
+$$
+This follows from Algebra, Lemma \ref{algebra-lemma-pullback-module}
+(details omitted).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Principal divisors and pushforward}
+\label{section-two-fun}
+
+\noindent
+The first lemma implies that the pushforward of a principal
+divisor along a generically finite morphism is a principal divisor.
+
+\begin{lemma}
+\label{lemma-proper-pushforward-alteration}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$. Assume $X$, $Y$
+are integral and $n = \dim_\delta(X) = \dim_\delta(Y)$.
+Let $p : X \to Y$ be a dominant proper morphism.
+Let $f \in R(X)^*$. Set
+$$
+g = \text{Nm}_{R(X)/R(Y)}(f).
+$$
+Then we have
+$p_*\text{div}(f) = \text{div}(g)$.
+\end{lemma}
+
+\begin{proof}
+Let $Z \subset Y$ be an integral closed subscheme of $\delta$-dimension
+$n - 1$. We want to show that the coefficient of $[Z]$ in
+$p_*\text{div}(f)$ and $\text{div}(g)$ are equal. We may apply
+Lemma \ref{lemma-finite-in-codimension-one}
+to the morphism $p : X \to Y$ and the generic point $\xi \in Z$.
+Hence we may replace $Y$ by an
+affine open neighbourhood of $\xi$ and assume that $p : X \to Y$ is finite.
+Write $Y = \Spec(R)$ and $X = \Spec(A)$ with $p$ induced
+by a finite homomorphism $R \to A$ of Noetherian domains which induces
a finite field extension $L/K$ of fraction fields.
+Now we have $f \in L$, $g = \text{Nm}(f) \in K$,
+and a prime $\mathfrak p \subset R$ with $\dim(R_{\mathfrak p}) = 1$.
+The coefficient of $[Z]$ in $\text{div}_Y(g)$ is
+$\text{ord}_{R_\mathfrak p}(g)$.
+The coefficient of $[Z]$ in $p_*\text{div}_X(f)$ is
+$$
+\sum\nolimits_{\mathfrak q\text{ lying over }\mathfrak p}
+[\kappa(\mathfrak q) : \kappa(\mathfrak p)]
+\text{ord}_{A_{\mathfrak q}}(f)
+$$
+The desired equality therefore follows from
+Algebra, Lemma \ref{algebra-lemma-finite-extension-dim-1}.
+\end{proof}
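
\noindent
As a simple illustration of the lemma, suppose $S = \Spec(k)$ for a
field $k$ and let
$p : X = \Spec(k[t]) \to Y = \Spec(k[s])$ be the finite morphism
given by $s \mapsto t^2$, so that $R(Y) = k(s) \subset R(X) = k(t)$
is an extension of degree $2$. For $f = t - a$ with $a \in k$ a
direct computation (the determinant of multiplication by $t - a$ on
the basis $1, t$ of $k(t)$ over $k(s)$) gives
$g = \text{Nm}(t - a) = a^2 - s$. And indeed
$$
p_*\text{div}_X(t - a) = p_*[V(t - a)] = [V(s - a^2)] =
\text{div}_Y(a^2 - s)
$$
as predicted by the lemma.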
+
+\noindent
+An important role in the discussion of principal divisors
+is played by the ``universal'' principal divisor $[0] - [\infty]$
+on $\mathbf{P}^1_S$. To make this more precise, let us denote
+\begin{equation}
+\label{equation-zero-infty}
+D_0, D_\infty \subset
+\mathbf{P}^1_S = \underline{\text{Proj}}_S(\mathcal{O}_S[T_0, T_1])
+\end{equation}
+the closed subscheme cut out by the section $T_1$, resp.\ $T_0$
+of $\mathcal{O}(1)$. These are effective Cartier divisors, see
+Divisors, Definition \ref{divisors-definition-effective-Cartier-divisor}
+and Lemma \ref{divisors-lemma-characterize-OD}.
+The following lemma says that loosely speaking we have
``$\text{div}(T_1/T_0) = [D_0] - [D_\infty]$'' and that this is the
+universal principal divisor.
+
+\begin{lemma}
+\label{lemma-rational-function}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Assume $X$ is
+integral and $n = \dim_\delta(X)$. Let $f \in R(X)^*$.
+Let $U \subset X$ be a nonempty open such that $f$
+corresponds to a section $f \in \Gamma(U, \mathcal{O}_X^*)$.
+Let $Y \subset X \times_S \mathbf{P}^1_S$ be the
+closure of the graph of $f : U \to \mathbf{P}^1_S$.
+Then
+\begin{enumerate}
+\item the projection morphism $p : Y \to X$ is proper,
+\item $p|_{p^{-1}(U)} : p^{-1}(U) \to U$ is an isomorphism,
+\item the pullbacks $Y_0 = q^{-1}D_0$ and $Y_\infty = q^{-1}D_\infty$
+via the morphism $q : Y \to \mathbf{P}^1_S$ are defined
+(Divisors, Definition
+\ref{divisors-definition-pullback-effective-Cartier-divisor}),
+\item we have
+$$
+\text{div}_Y(f) = [Y_0]_{n - 1} - [Y_\infty]_{n - 1}
+$$
+\item we have
+$$
+\text{div}_X(f) = p_*\text{div}_Y(f)
+$$
+\item if we view $Y_0$ and $Y_\infty$ as closed subschemes of $X$
+via the morphism $p$ then we have
+$$
+\text{div}_X(f) = [Y_0]_{n - 1} - [Y_\infty]_{n - 1}
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $X$ is integral, we see that $U$ is integral.
+Hence $Y$ is integral, and $(1, f)(U) \subset Y$ is an open dense subscheme.
+Also, note that the closed subscheme $Y \subset X \times_S \mathbf{P}^1_S$
+does not depend on the choice of the open $U$, since after all it is
+the closure of the one point set $\{\eta'\} = \{(1, f)(\eta)\}$
+where $\eta \in X$ is the generic point. Having said this let us
+prove the assertions of the lemma.
+
+\medskip\noindent
+For (1) note that $p$ is the composition of the closed immersion
+$Y \to X \times_S \mathbf{P}^1_S = \mathbf{P}^1_X$ with the proper
+morphism $\mathbf{P}^1_X \to X$. As a composition of proper morphisms
+is proper (Morphisms, Lemma \ref{morphisms-lemma-composition-proper})
+we conclude.
+
+\medskip\noindent
+It is clear that $Y \cap U \times_S \mathbf{P}^1_S = (1, f)(U)$.
+Thus (2) follows. It also follows that $\dim_\delta(Y) = n$.
+
+\medskip\noindent
+Note that $q(\eta') = f(\eta)$ is not contained in $D_0$ or $D_\infty$
+since $f \in R(X)^*$. Hence (3) by
+Divisors, Lemma \ref{divisors-lemma-pullback-effective-Cartier-defined}.
+We obtain $\dim_\delta(Y_0) = n - 1$
+and $\dim_\delta(Y_\infty) = n - 1$ from
+Lemma \ref{lemma-divisor-delta-dimension}.
+
+\medskip\noindent
+Consider the effective Cartier divisor $Y_0$.
+At every point $\xi \in Y_0$ we have $f \in \mathcal{O}_{Y, \xi}$ and
+the local equation for $Y_0$ is given by $f$.
+In particular, if $\delta(\xi) = n - 1$ so $\xi$ is the generic point
of an integral closed subscheme $Z$ of $\delta$-dimension $n - 1$,
+then we see that the coefficient of $[Z]$ in $\text{div}_Y(f)$ is
+$$
+\text{ord}_Z(f) =
+\text{length}_{\mathcal{O}_{Y, \xi}}
+(\mathcal{O}_{Y, \xi}/f\mathcal{O}_{Y, \xi}) =
+\text{length}_{\mathcal{O}_{Y, \xi}}
+(\mathcal{O}_{Y_0, \xi})
+$$
+which is the coefficient of $[Z]$ in $[Y_0]_{n - 1}$. A similar
+argument using the rational function $1/f$ shows that
+$-[Y_\infty]$ agrees with the terms with negative coefficients in
+the expression for $\text{div}_Y(f)$. Hence (4) follows.
+
+\medskip\noindent
+Note that $D_0 \to S$ is an isomorphism. Hence we see that
+$X \times_S D_0 \to X$ is an isomorphism as well. Clearly
+we have $Y_0 = Y \cap X \times_S D_0$ (scheme theoretic intersection)
+inside $X \times_S \mathbf{P}^1_S$. Hence it is really the case that
+$Y_0 \to X$ is a closed immersion. It follows that
+$$
+p_*\mathcal{O}_{Y_0} = \mathcal{O}_{Y'_0}
+$$
+where $Y'_0 \subset X$ is the image of $Y_0 \to X$.
+By Lemma \ref{lemma-cycle-push-sheaf} we
+have $p_*[Y_0]_{n - 1} = [Y'_0]_{n - 1}$. The same
+is true for $D_\infty$ and $Y_\infty$. Hence (6) is a consequence of (5).
+Finally, (5) follows immediately from
+Lemma \ref{lemma-proper-pushforward-alteration}.
+\end{proof}
+
+\noindent
+The following lemma says that the degree of a principal divisor on
+a proper curve is zero.
+
+\begin{lemma}
+\label{lemma-curve-principal-divisor}
+Let $K$ be any field. Let $X$ be a $1$-dimensional integral scheme
+endowed with a proper morphism $c : X \to \Spec(K)$.
+Let $f \in K(X)^*$ be an invertible rational function.
+Then
+$$
+\sum\nolimits_{x \in X \text{ closed}}
+[\kappa(x) : K] \text{ord}_{\mathcal{O}_{X, x}}(f)
+=
+0
+$$
+where $\text{ord}$ is as in
+Algebra, Definition \ref{algebra-definition-ord}.
+In other words, $c_*\text{div}(f) = 0$.
+\end{lemma}
+
+\begin{proof}
+Consider the diagram
+$$
+\xymatrix{
+Y \ar[r]_p \ar[d]_q & X \ar[d]^c \\
+\mathbf{P}^1_K \ar[r]^-{c'} & \Spec(K)
+}
+$$
+that we constructed in Lemma \ref{lemma-rational-function}
+starting with $X$ and the rational function $f$ over $S = \Spec(K)$.
+We will use all the results of this lemma without further mention.
+We have to show that $c_*\text{div}_X(f) = c_*p_*\text{div}_Y(f) = 0$.
+This is the same as proving that $c'_*q_*\text{div}_Y(f) = 0$.
If $q(Y)$ is a closed point of $\mathbf{P}^1_K$, then $f$ is
algebraic over $K$, so $\text{ord}_{\mathcal{O}_{X, x}}(f) = 0$
for every closed point $x$; thus
$\text{div}_X(f) = 0$ and the lemma holds.
+Thus we may assume that $q$ is dominant.
+Suppose we can show that $q : Y \to \mathbf{P}^1_K$ is finite
+locally free of degree $d$ (see
+Morphisms, Definition \ref{morphisms-definition-finite-locally-free}).
+Since $\text{div}_Y(f) = [q^{-1}D_0]_0 - [q^{-1}D_\infty]_0$
+we see (by definition of flat pullback) that
+$\text{div}_Y(f) = q^*([D_0]_0 - [D_\infty]_0)$.
+Then by Lemma \ref{lemma-finite-flat} we get
+$q_*\text{div}_Y(f) = d([D_0]_0 - [D_\infty]_0)$.
+Since clearly $c'_*[D_0]_0 = c'_*[D_\infty]_0$ we win.
+
+\medskip\noindent
+It remains to show that $q$ is finite locally free.
(The degree $d$ is then automatically constant as $\mathbf{P}^1_K$
is connected.)
+Since $\dim(\mathbf{P}^1_K) = 1$ we see that $q$ is finite for example
+by Lemma \ref{lemma-finite-in-codimension-one}.
+All local rings of $\mathbf{P}^1_K$ at
+closed points are regular local rings of dimension $1$
+(in other words discrete valuation rings), since they are
+localizations of $K[T]$ (see
+Algebra, Lemma \ref{algebra-lemma-dim-affine-space}).
+Hence for $y\in Y$ closed the local ring $\mathcal{O}_{Y, y}$
+will be flat over $\mathcal{O}_{\mathbf{P}^1_K, q(y)}$ as soon as
+it is torsion free (More on Algebra, Lemma
+\ref{more-algebra-lemma-dedekind-torsion-free-flat}).
+This is obviously the case as
+$\mathcal{O}_{Y, y}$ is a domain and $q$ is dominant.
+Thus $q$ is flat. Hence $q$ is finite locally free by
+Morphisms, Lemma \ref{morphisms-lemma-finite-flat}.
+\end{proof}
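
\noindent
For example take $K = \mathbf{R}$, $X = \mathbf{P}^1_{\mathbf{R}}$
with affine coordinate $t$, and $f = t^2 + 1$. Then $f$ has a single
zero at the closed point $x$ cut out by $t^2 + 1$, where
$[\kappa(x) : \mathbf{R}] = [\mathbf{C} : \mathbf{R}] = 2$ and
$\text{ord}_{\mathcal{O}_{X, x}}(f) = 1$, and a pole of order $2$
at $\infty$, where $\kappa(\infty) = \mathbf{R}$. The weighted sum
of the lemma is $2 \cdot 1 + 1 \cdot (-2) = 0$, which shows why the
degrees of the residue fields have to be taken into account.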
+
+
+
+
+
+\section{Rational equivalence}
+\label{section-rational-equivalence}
+
+\noindent
+In this section we define {\it rational equivalence} on $k$-cycles.
+We will allow locally finite sums of images of
+principal divisors (under closed immersions). This leads to some
+pretty strange phenomena, see Example \ref{example-weird}.
+However, if we do not allow these then we do not know how to prove that
+capping with Chern classes of line bundles factors through rational
+equivalence.
+
+\begin{definition}
+\label{definition-rational-equivalence}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Let $k \in \mathbf{Z}$.
+\begin{enumerate}
+\item Given any locally finite collection $\{W_j \subset X\}$
+of integral closed subschemes with $\dim_\delta(W_j) = k + 1$,
+and any $f_j \in R(W_j)^*$ we may consider
+$$
+\sum (i_j)_*\text{div}(f_j) \in Z_k(X)
+$$
+where $i_j : W_j \to X$ is the inclusion morphism.
+This makes sense as the morphism
+$\coprod i_j : \coprod W_j \to X$ is proper.
+\item We say that $\alpha \in Z_k(X)$ is {\it rationally equivalent to zero}
+if $\alpha$ is a cycle of the form displayed above.
+\item We say $\alpha, \beta \in Z_k(X)$ are
+{\it rationally equivalent} and we write $\alpha \sim_{rat} \beta$
+if $\alpha - \beta$ is rationally equivalent to zero.
+\item We define
+$$
+\CH_k(X) = Z_k(X) / \sim_{rat}
+$$
+to be the {\it Chow group of $k$-cycles on $X$}. This is sometimes called
+the {\it Chow group of $k$-cycles modulo rational equivalence on $X$}.
+\end{enumerate}
+\end{definition}
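
\noindent
As a first example, let $S = \Spec(k)$ for a field $k$ with $\delta$
as in Example \ref{example-field} and $X = \mathbf{A}^1_k$. Any
closed point $x \in X$ is cut out by a monic irreducible polynomial
$P_x \in k[t]$ and $[x] = \text{div}(P_x)$ (taking $W = X$ in the
definition). Since $Z_0(X)$ is free on the closed points we conclude
that $\CH_0(\mathbf{A}^1_k) = 0$.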
+
+\noindent
+There are many other interesting (adequate) equivalence relations.
Rational equivalence is the finest one of them all.
+
+\begin{remark}
+\label{remark-chow-group-pointwise}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $k \in \mathbf{Z}$.
+Let us show that we have a presentation
+$$
+\bigoplus\nolimits_{\delta(x) = k + 1}' K_1^M(\kappa(x))
+\xrightarrow{\partial}
+\bigoplus\nolimits_{\delta(x) = k}' K_0^M(\kappa(x)) \to
+\CH_k(X) \to 0
+$$
+Here we use the notation and conventions introduced in
+Remark \ref{remark-cycles-pointwise} and in addition
+\begin{enumerate}
+\item $K_1^M(\kappa(x)) = \kappa(x)^*$ is the degree $1$ part of
+the Milnor K-theory of the residue field $\kappa(x)$ of the point
+$x \in X$ (see Remark \ref{remark-gersten-complex-milnor}), and
+\item the differential $\partial$ is defined as follows:
+given an element $\xi = \sum_x f_x$ we denote $W_x = \overline{x}$
+the integral closed subscheme of $X$ with generic point $x$ and we set
+$$
+\partial(\xi) = \sum (W_x \to X)_*\text{div}(f_x)
+$$
+in $Z_k(X)$ which makes sense as we have seen that the second
+term of the complex is equal to $Z_k(X)$ by
+Remark \ref{remark-cycles-pointwise}.
+\end{enumerate}
+The fact that we obtain a presentation of $\CH_k(X)$ follows
+immediately by comparing with Definition \ref{definition-rational-equivalence}.
+\end{remark}
+
+\noindent
+A very simple but important lemma is the following.
+
+\begin{lemma}
+\label{lemma-restrict-to-open}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
Let $U \subset X$ be an open subscheme and let
$i : Y \to X$ be the inclusion of the reduced closed subscheme
$Y = X \setminus U$.
+Let $k \in \mathbf{Z}$.
+Suppose $\alpha, \beta \in Z_k(X)$.
If $\alpha|_U \sim_{rat} \beta|_U$ then there exists a cycle
+$\gamma \in Z_k(Y)$ such that
+$$
+\alpha \sim_{rat} \beta + i_*\gamma.
+$$
+In other words, the sequence
+$$
+\xymatrix{
+\CH_k(Y) \ar[r]^{i_*} & \CH_k(X) \ar[r]^{j^*} & \CH_k(U) \ar[r] & 0
+}
+$$
is an exact complex of abelian groups, where $j : U \to X$
is the open immersion.
+\end{lemma}
+
+\begin{proof}
+Let $\{W_j\}_{j \in J}$ be a locally finite collection of integral closed
+subschemes of $U$ of $\delta$-dimension $k + 1$, and let $f_j \in R(W_j)^*$
+be elements such that $(\alpha - \beta)|_U = \sum (i_j)_*\text{div}(f_j)$
+as in the definition. Set $W_j' \subset X$ equal
+to the closure of $W_j$. Suppose that $V \subset X$ is a quasi-compact
+open. Then also $V \cap U$ is quasi-compact open in $U$ as
+$V$ is Noetherian. Hence the set
+$\{j \in J \mid W_j \cap V \not = \emptyset\}
+= \{j \in J \mid W'_j \cap V \not = \emptyset\}$
+is finite since $\{W_j\}$ is locally finite. In other words we see that
+$\{W'_j\}$ is also locally finite. Since $R(W_j) = R(W'_j)$ we see
+that
+$$
+\alpha - \beta - \sum (i'_j)_*\text{div}(f_j)
+$$
+is a cycle supported on $Y$ and the lemma follows (see
+Lemma \ref{lemma-exact-sequence-open}).
+\end{proof}
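
\noindent
To see the sequence in action, take $S = \Spec(k)$,
$X = \mathbf{P}^1_k$, $U = \mathbf{A}^1_k$ and $Y = \{\infty\}$.
Then $\CH_0(Y) = \mathbf{Z} \cdot [\infty]$ and $\CH_0(U) = 0$
(every closed point of $\mathbf{A}^1_k$ is the divisor of its monic
irreducible polynomial), so the sequence tells us that
$\CH_0(\mathbf{P}^1_k)$ is generated by $[\infty]$; in fact the
degree map shows $\CH_0(\mathbf{P}^1_k) \cong \mathbf{Z}$ by
Lemma \ref{lemma-curve-principal-divisor}.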
+
+\begin{lemma}
+\label{lemma-exact-sequence-closed-chow}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $X_1, X_2 \subset X$
+be closed subschemes such that $X = X_1 \cup X_2$ set theoretically.
+For every $k \in \mathbf{Z}$ the sequence of abelian groups
+$$
+\xymatrix{
+\CH_k(X_1 \cap X_2) \ar[r] &
+\CH_k(X_1) \oplus \CH_k(X_2) \ar[r] &
+\CH_k(X) \ar[r] &
+0
+}
+$$
+is exact. Here $X_1 \cap X_2$ is the scheme theoretic intersection and the
+maps are the pushforward maps with one multiplied by $-1$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-exact-sequence-closed} the arrow
+$\CH_k(X_1) \oplus \CH_k(X_2) \to \CH_k(X)$ is surjective.
+Suppose that $(\alpha_1, \alpha_2)$ maps to zero under this map.
+Write $\alpha_1 = \sum n_{1, i}[W_{1, i}]$ and
+$\alpha_2 = \sum n_{2, i}[W_{2, i}]$. Then we obtain a locally
+finite collection $\{W_j\}_{j \in J}$ of integral closed
+subschemes of $X$ of $\delta$-dimension $k + 1$ and $f_j \in R(W_j)^*$
+such that
+$$
+\sum n_{1, i}[W_{1, i}] + \sum n_{2, i}[W_{2, i}] = \sum (i_j)_*\text{div}(f_j)
+$$
+as cycles on $X$ where $i_j : W_j \to X$ is the inclusion morphism.
+Choose a disjoint union decomposition $J = J_1 \amalg J_2$ such that
+$W_j \subset X_1$ if $j \in J_1$ and $W_j \subset X_2$ if $j \in J_2$.
+(This is possible because the $W_j$ are integral.) Then we can write
+the equation above as
+$$
+\sum n_{1, i}[W_{1, i}] - \sum\nolimits_{j \in J_1} (i_j)_*\text{div}(f_j) =
+- \sum n_{2, i}[W_{2, i}] + \sum\nolimits_{j \in J_2} (i_j)_*\text{div}(f_j)
+$$
+Hence this expression is a cycle (!) on $X_1 \cap X_2$. In other words
+the element $(\alpha_1, \alpha_2)$ is in the image of the first arrow
+and the proof is complete.
+\end{proof}
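
\noindent
For instance, let $X = \Spec(k[x, y]/(xy))$ be the union of the
coordinate axes $X_1 = V(y)$ and $X_2 = V(x)$ in $\mathbf{A}^2_k$,
so that $X_1 \cap X_2$ is the origin. Since
$\CH_0(X_1) = \CH_0(X_2) = \CH_0(\mathbf{A}^1_k) = 0$
the lemma shows $\CH_0(X) = 0$ as well, even though $X$ is
not integral.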
+
+\begin{example}
+\label{example-weird}
+Here is a ``strange'' example.
+Suppose that $S$ is the spectrum of a field $k$
+with $\delta$ as in Example \ref{example-field}.
+Suppose that $X = C_1 \cup C_2 \cup \ldots$ is an infinite
+union of curves $C_j \cong \mathbf{P}^1_k$ glued together
+in the following way: The point $\infty \in C_j$ is glued
+transversally to the point $0 \in C_{j + 1}$ for $j = 1, 2, 3, \ldots$.
+Take the point $0 \in C_1$. This gives a zero cycle
+$[0] \in Z_0(X)$. The ``strangeness'' in this situation is
+that actually $[0] \sim_{rat} 0$! Namely we can choose
the rational function $f_j \in R(C_j)^*$ to be the function
+which has a simple zero at $0$ and a simple pole at $\infty$
+and no other zeros or poles. Then we see that the sum
+$\sum (i_j)_*\text{div}(f_j)$ is exactly the $0$-cycle
+$[0]$. In fact it turns out that $\CH_0(X) = 0$ in this example.
+If you find this too bizarre, then you can just
+make sure your spaces are always quasi-compact
+(so $X$ does not even exist for you).
+\end{example}
+
+\begin{remark}
+\label{remark-infinite-sums-rational-equivalences}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Suppose we have infinite collections $\alpha_i, \beta_i \in Z_k(X)$,
+$i \in I$ of $k$-cycles on $X$. Suppose that the supports
+of $\alpha_i$ and $\beta_i$ form locally finite collections
+of closed subsets of $X$ so that $\sum \alpha_i$
+and $\sum \beta_i$ are defined as cycles. Moreover, assume that
+$\alpha_i \sim_{rat} \beta_i$ for each $i$. Then it is not
+clear that $\sum \alpha_i \sim_{rat} \sum \beta_i$. Namely,
+the problem is that the rational equivalences may be
+given by locally finite
+families $\{W_{i, j}, f_{i, j} \in R(W_{i, j})^*\}_{j \in J_i}$
+but the union $\{W_{i, j}\}_{i \in I, j\in J_i}$ may not
+be locally finite.
+
+\medskip\noindent
+In many cases in practice, one has a locally finite family of closed
+subsets $\{T_i\}_{i \in I}$ such that $\alpha_i, \beta_i$
+are supported on $T_i$ and such that $\alpha_i = \beta_i$
+in $\CH_k(T_i)$, in other words, the families
+$\{W_{i, j}, f_{i, j} \in R(W_{i, j})^*\}_{j \in J_i}$
+consist of subschemes $W_{i, j} \subset T_i$. In this case it is true that
+$\sum \alpha_i \sim_{rat} \sum \beta_i$ on $X$, simply because
+the family $\{W_{i, j}\}_{i \in I, j\in J_i}$ is automatically
+locally finite in this case.
+\end{remark}
+
+
+
+
+
+
+\section{Rational equivalence and push and pull}
+\label{section-properties-rational-equivalence}
+
+\noindent
+In this section we show that flat pullback and proper pushforward
+commute with rational equivalence.
+
+\begin{lemma}
+\label{lemma-prepare-flat-pullback-rational-equivalence}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be schemes locally of finite type over $S$.
+Assume $Y$ integral with $\dim_\delta(Y) = k$.
+Let $f : X \to Y$ be a flat morphism of
+relative dimension $r$. Then for $g \in R(Y)^*$ we have
+$$
+f^*\text{div}_Y(g) =
+\sum n_j i_{j, *}\text{div}_{X_j}(g \circ f|_{X_j})
+$$
+as $(k + r - 1)$-cycles on $X$ where the sum is over the irreducible
+components $X_j$ of $X$ and $n_j$ is the multiplicity of $X_j$ in $X$.
+\end{lemma}
+
+\begin{proof}
+Let $Z \subset X$ be an integral closed subscheme of $\delta$-dimension
+$k + r - 1$. We have to show that the coefficient $n$ of $[Z]$ in
+$f^*\text{div}(g)$ is equal to the coefficient
+$m$ of $[Z]$ in $\sum i_{j, *} \text{div}(g \circ f|_{X_j})$.
+Let $Z'$ be the closure of $f(Z)$ which is an integral closed
+subscheme of $Y$. By Lemma \ref{lemma-flat-inverse-image-dimension}
+we have $\dim_\delta(Z') \geq k - 1$. Thus either $Z' = Y$
+or $Z'$ is a prime divisor on $Y$. If $Z' = Y$, then the coefficients
+$n$ and $m$ are both zero: this is clear for $n$ by definition
+of $f^*$ and follows for $m$ because $g \circ f|_{X_j}$ is
a unit at every point of $X_j$ mapping to the generic point of $Y$.
+Hence we may assume that $Z' \subset Y$ is a prime divisor.
+
+\medskip\noindent
+We are going to translate the equality of $n$ and $m$ into algebra.
+Namely, let $\xi' \in Z'$ and $\xi \in Z$ be the generic points.
+Set $A = \mathcal{O}_{Y, \xi'}$ and $B = \mathcal{O}_{X, \xi}$.
+Note that $A$, $B$ are Noetherian, $A \to B$ is flat, local,
+$A$ is a domain, and $\mathfrak m_AB$ is an ideal of definition
+of the local ring $B$. The rational function $g$ is an element
+of the fraction field $Q(A)$ of $A$.
+By construction, the closed subschemes $X_j$
+which meet $\xi$ correspond $1$-to-$1$ with minimal primes
+$$
+\mathfrak q_1, \ldots, \mathfrak q_s \subset B
+$$
Under this correspondence the multiplicity of the component
corresponding to $\mathfrak q_i$ is
$$
n_i = \text{length}_{B_{\mathfrak q_i}}(B_{\mathfrak q_i})
$$
and the rational function $g \circ f|_{X_j}$ on this component
corresponds to the image $g_i \in \kappa(\mathfrak q_i)^*$
of $g \in Q(A)$.
+Putting everything together we see that
+$$
+n = \text{ord}_A(g) \text{length}_B(B/\mathfrak m_AB)
+$$
+and that
+$$
+m = \sum \text{ord}_{B/\mathfrak q_i}(g_i)
+\text{length}_{B_{\mathfrak q_i}}(B_{\mathfrak q_i})
+$$
Writing $g = x/y$ for some nonzero $x, y \in A$ we see that it suffices
to prove
$$
\text{length}_A(A/(x)) \text{length}_B(B/\mathfrak m_AB)
=
\sum\nolimits_{i = 1, \ldots, s}
\text{length}_{B/\mathfrak q_i}(B/(x, \mathfrak q_i))
\text{length}_{B_{\mathfrak q_i}}(B_{\mathfrak q_i})
$$
and similarly for $y$. By Algebra,
Lemma \ref{algebra-lemma-pullback-module} the left hand side is equal
to $\text{length}_B(B/xB)$. As $A \to B$ is flat it follows that $x$
is a nonzerodivisor in $B$. Hence the desired equality follows from
Lemma \ref{lemma-additivity-divisors-restricted}.
+\end{proof}
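
\noindent
Here is a simple instance of the formula in the lemma. Let
$f : X = \Spec(k[t]) \to Y = \Spec(k[s])$ be the flat finite
morphism given by $s \mapsto t^2$ and take $g = s$. Here $X$ is
integral, so the right hand side is
$\text{div}_X(s \circ f) = \text{div}_X(t^2) = 2[V(t)]$, while on
the left hand side we find
$$
f^*\text{div}_Y(s) = f^*[V(s)] = [\Spec(k[t]/(t^2))]_0 = 2[V(t)]
$$
as well.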
+
+\begin{lemma}
+\label{lemma-flat-pullback-rational-equivalence}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be schemes locally of finite type over $S$.
+Let $f : X \to Y$ be a flat morphism of relative dimension $r$.
+Let $\alpha \sim_{rat} \beta$ be rationally equivalent $k$-cycles on $Y$.
+Then $f^*\alpha \sim_{rat} f^*\beta$ as $(k + r)$-cycles on $X$.
+\end{lemma}
+
+\begin{proof}
+What do we have to show? Well, suppose we are given a collection
+$$
+i_j : W_j \longrightarrow Y
+$$
+of closed immersions, with each $W_j$ integral of $\delta$-dimension $k + 1$
+and rational functions $g_j \in R(W_j)^*$. Moreover, assume that
+the collection $\{i_j(W_j)\}_{j \in J}$ is locally finite on $Y$.
+Then we have to show that
+$$
+f^*(\sum i_{j, *}\text{div}(g_j)) = \sum f^*i_{j, *}\text{div}(g_j)
+$$
+is rationally equivalent to zero on $X$. The sum on the right
+makes sense as $\{W_j\}$ is locally finite in $X$ by
+Lemma \ref{lemma-inverse-image-locally-finite}.
+
+\medskip\noindent
+Consider the fibre products
+$$
+i'_j : W'_j = W_j \times_Y X \longrightarrow X.
+$$
+and denote $f_j : W'_j \to W_j$ the first projection.
+By Lemma \ref{lemma-flat-pullback-proper-pushforward}
+we can write the sum above as
+$$
+\sum i'_{j, *}(f_j^*\text{div}(g_j))
+$$
+By Lemma \ref{lemma-prepare-flat-pullback-rational-equivalence}
+we see that each $f_j^*\text{div}(g_j)$ is rationally equivalent
+to zero on $W'_j$. Hence each $i'_{j, *}(f_j^*\text{div}(g_j))$
+is rationally equivalent to zero. Then the same is true for
+the displayed sum by the discussion in
+Remark \ref{remark-infinite-sums-rational-equivalences}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-pushforward-rational-equivalence}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be schemes locally of finite type over $S$.
+Let $p : X \to Y$ be a proper morphism.
+Suppose $\alpha, \beta \in Z_k(X)$ are rationally equivalent.
+Then $p_*\alpha$ is rationally equivalent to $p_*\beta$.
+\end{lemma}
+
+\begin{proof}
+What do we have to show? Well, suppose we are given a collection
+$$
+i_j : W_j \longrightarrow X
+$$
+of closed immersions, with each $W_j$ integral of $\delta$-dimension $k + 1$
+and rational functions $f_j \in R(W_j)^*$.
+Moreover, assume that
+the collection $\{i_j(W_j)\}_{j \in J}$ is locally finite on $X$.
+Then we have to show that
+$$
+p_*\left(\sum i_{j, *}\text{div}(f_j)\right)
+$$
is rationally equivalent to zero on $Y$.
+
+\medskip\noindent
+Note that the sum is equal to
+$$
+\sum p_*i_{j, *}\text{div}(f_j).
+$$
+Let $W'_j \subset Y$ be the integral closed subscheme which is the
+image of $p \circ i_j$. The collection $\{W'_j\}$ is locally finite
+in $Y$ by Lemma \ref{lemma-quasi-compact-locally-finite}.
+Hence it suffices to show, for a given $j$, that either
+$p_*i_{j, *}\text{div}(f_j) = 0$ or that it
+is equal to $i'_{j, *}\text{div}(g_j)$ for some $g_j \in R(W'_j)^*$.
+
+\medskip\noindent
The arguments above therefore reduce us to the case of a single
+integral closed subscheme $W \subset X$ of $\delta$-dimension $k + 1$.
+Let $f \in R(W)^*$. Let $W' = p(W)$ as above.
+We get a commutative diagram of morphisms
+$$
+\xymatrix{
+W \ar[r]_i \ar[d]_{p'} & X \ar[d]^p \\
+W' \ar[r]^{i'} & Y
+}
+$$
+Note that $p_*i_*\text{div}(f) = i'_*(p')_*\text{div}(f)$ by
+Lemma \ref{lemma-compose-pushforward}. As explained above
+we have to show that $(p')_*\text{div}(f)$
+is the divisor of a rational function on $W'$ or zero.
+There are three cases to distinguish.
+
+\medskip\noindent
+The case $\dim_\delta(W') < k$. In this case automatically
+$(p')_*\text{div}(f) = 0$ and there is nothing to prove.
+
+\medskip\noindent
+The case $\dim_\delta(W') = k$. Let us show that $(p')_*\text{div}(f) = 0$
+in this case. Let $\eta \in W'$ be the generic point.
+Note that $c : W_\eta \to \Spec(K)$
+is a proper integral curve over $K = \kappa(\eta)$
+whose function field $K(W_\eta)$ is identified with $R(W)$.
+Here is a diagram
+$$
+\xymatrix{
+W_\eta \ar[r] \ar[d]_c & W \ar[d]^{p'} \\
+\Spec(K) \ar[r] & W'
+}
+$$
+Let us denote $f_\eta \in K(W_\eta)^*$ the rational function
+corresponding to $f \in R(W)^*$.
Moreover, the closed points $\xi$ of $W_\eta$ correspond $1$-to-$1$ with the
+closed integral subschemes $Z = Z_\xi \subset W$ of $\delta$-dimension $k$
+with $p'(Z) = W'$. Note that the multiplicity
+of $Z_\xi$ in $\text{div}(f)$ is equal to
+$\text{ord}_{\mathcal{O}_{W_\eta, \xi}}(f_\eta)$ simply because the
+local rings $\mathcal{O}_{W_\eta, \xi}$ and $\mathcal{O}_{W, \xi}$
+are identified (as subrings of their fraction fields).
+Hence we see that the multiplicity of $[W']$ in
+$(p')_*\text{div}(f)$ is equal to the multiplicity of
+$[\Spec(K)]$ in $c_*\text{div}(f_\eta)$.
+By Lemma \ref{lemma-curve-principal-divisor} this is zero.
+
+\medskip\noindent
+The case $\dim_\delta(W') = k + 1$. In this case
+Lemma \ref{lemma-proper-pushforward-alteration} applies,
+and we see that indeed $p'_*\text{div}(f) = \text{div}(g)$
+for some $g \in R(W')^*$ as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Rational equivalence and the projective line}
+\label{section-different-rational-equivalence}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Given any closed subscheme
+$Z \subset X \times_S \mathbf{P}^1_S = X \times \mathbf{P}^1$
we let $Z_0$, resp.\ $Z_\infty$ be the scheme theoretic inverse image
$Z_0 = \text{pr}_2^{-1}(D_0)$,
resp.\ $Z_\infty = \text{pr}_2^{-1}(D_\infty)$,
where $\text{pr}_2 : Z \to \mathbf{P}^1_S$ is the restriction of
the second projection.
+Here $D_0$, $D_\infty$ are as in (\ref{equation-zero-infty}).
+
+\begin{lemma}
+\label{lemma-rational-equivalence-family}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Let $W \subset X \times_S \mathbf{P}^1_S$ be an integral
+closed subscheme of $\delta$-dimension $k + 1$.
Assume $W \not = W_0$ and $W \not = W_\infty$. Then
+\begin{enumerate}
+\item $W_0$, $W_\infty$ are effective Cartier divisors of $W$,
+\item $W_0$, $W_\infty$ can be viewed as closed subschemes
+of $X$ and
+$$
+[W_0]_k \sim_{rat} [W_\infty]_k,
+$$
+\item for any locally finite family of
+integral closed subschemes
+$W_i \subset X \times_S \mathbf{P}^1_S$
+of $\delta$-dimension $k + 1$ with $W_i \not = (W_i)_0$ and
+$W_i \not = (W_i)_\infty$ we have
+$\sum ([(W_i)_0]_k - [(W_i)_\infty]_k) \sim_{rat} 0$
+on $X$, and
+\item for any $\alpha \in Z_k(X)$ with $\alpha \sim_{rat} 0$
+there exists a locally finite family of
+integral closed subschemes $W_i \subset X \times_S \mathbf{P}^1_S$
+as above such that $\alpha = \sum ([(W_i)_0]_k - [(W_i)_\infty]_k)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from
+Divisors, Lemma \ref{divisors-lemma-pullback-effective-Cartier-defined}
+since the generic point
+of $W$ is not mapped into $D_0$ or $D_\infty$ under the projection
+$X \times_S \mathbf{P}^1_S \to \mathbf{P}^1_S$ by assumption.
+
+\medskip\noindent
+Since $X \times_S D_0 \to X$ is a closed immersion, we see that $W_0$
+is isomorphic to a closed subscheme of $X$. Similarly for $W_\infty$.
+The morphism $p : W \to X$ is proper as a composition of
+the closed immersion $W \to X \times_S \mathbf{P}^1_S$ and the
+proper morphism $X \times_S \mathbf{P}^1_S \to X$. By
+Lemma \ref{lemma-rational-function} we have
+$[W_0]_k \sim_{rat} [W_\infty]_k$ as cycles on $W$. Hence part (2) follows from
+Lemma \ref{lemma-proper-pushforward-rational-equivalence} as clearly
+$p_*[W_0]_k = [W_0]_k$ and similarly for $W_\infty$.
+
+\medskip\noindent
+The only content of statement (3) is, given parts (1) and (2), that
+the collection $\{(W_i)_0, (W_i)_\infty\}$ is a locally finite collection
+of closed subschemes of $X$. This is clear.
+
+\medskip\noindent
+Suppose that $\alpha \sim_{rat} 0$.
+By definition this means there exist integral closed subschemes
+$V_i \subset X$ of $\delta$-dimension $k + 1$ and rational
+functions $f_i \in R(V_i)^*$ such that the family
+$\{V_i\}_{i \in I}$ is locally finite in $X$ and such that
+$\alpha = \sum (V_i \to X)_*\text{div}(f_i)$.
+Let
+$$
+W_i \subset V_i \times_S \mathbf{P}^1_S \subset X \times_S \mathbf{P}^1_S
+$$
+be the closure of the graph of the rational map $f_i$ as in
+Lemma \ref{lemma-rational-function}.
+Then we have that $(V_i \to X)_*\text{div}(f_i)$
+is equal to $[(W_i)_0]_k - [(W_i)_\infty]_k$ by that same lemma.
+Hence the result is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-subscheme-cross-p1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Let $Z$ be a closed subscheme of $X \times \mathbf{P}^1$.
+Assume
+\begin{enumerate}
+\item $\dim_\delta(Z) \leq k + 1$,
+\item $\dim_\delta(Z_0) \leq k$, $\dim_\delta(Z_\infty) \leq k$, and
+\item for any embedded point $\xi$ (Divisors, Definition
+\ref{divisors-definition-embedded}) of $Z$ either
+$\xi \not \in Z_0 \cup Z_\infty$ or $\delta(\xi) < k$.
+\end{enumerate}
+Then $[Z_0]_k \sim_{rat} [Z_\infty]_k$ as $k$-cycles on $X$.
+\end{lemma}
+
+\begin{proof}
+Let $\{W_i\}_{i \in I}$ be the collection of irreducible
+components of $Z$ which have $\delta$-dimension $k + 1$.
+Write
+$$
+[Z]_{k + 1} = \sum n_i[W_i]
+$$
+with $n_i > 0$ as per definition. Note that $\{W_i\}$
+is a locally finite collection of closed subsets of
+$X \times_S \mathbf{P}^1_S$ by
+Divisors, Lemma \ref{divisors-lemma-components-locally-finite}.
+We claim that
+$$
+[Z_0]_k = \sum n_i[(W_i)_0]_k
+$$
+and similarly for $[Z_\infty]_k$. If we prove this then the lemma
+follows from Lemma \ref{lemma-rational-equivalence-family}.
+
+\medskip\noindent
+Let $Z' \subset X$ be an integral closed subscheme of $\delta$-dimension $k$.
+To prove the equality above it suffices to show that the coefficient $n$
+of $[Z']$ in $[Z_0]_k$ is the same as the coefficient $m$ of
+$[Z']$ in $\sum n_i[(W_i)_0]_k$. Let $\xi' \in Z'$ be the generic point.
+Set $\xi = (\xi', 0) \in X \times_S \mathbf{P}^1_S$.
+Consider the local ring $A = \mathcal{O}_{X \times_S \mathbf{P}^1_S, \xi}$.
+Let $I \subset A$ be the ideal cutting out $Z$, in other words so that
+$A/I = \mathcal{O}_{Z, \xi}$. Let $t \in A$ be the element cutting
+out $X \times_S D_0$ (i.e., the coordinate of $\mathbf{P}^1$ at zero
+pulled back). By our choice of $\xi' \in Z'$ we have $\delta(\xi) = k$
+and hence $\dim(A/I) = 1$. Since $\xi$ is not an embedded point by
+assumption (3) we see that $A/I$ is Cohen-Macaulay. Since $\dim_\delta(Z_0)
+= k$ we see that $\dim(A/(t, I)) = 0$ which implies that $t$
+is a nonzerodivisor on $A/I$. Finally, the irreducible closed subschemes
+$W_i$ passing through $\xi$ correspond to the minimal primes
$\mathfrak q_i$ of $A$ over $I$. The multiplicities $n_i$ correspond
+to the lengths $\text{length}_{A_{\mathfrak q_i}}(A/I)_{\mathfrak q_i}$.
+Hence we see that
+$$
+n = \text{length}_A(A/(t, I))
+$$
+and
+$$
+m = \sum
+\text{length}_A(A/(t, \mathfrak q_i))
+\text{length}_{A_{\mathfrak q_i}}(A/I)_{\mathfrak q_i}
+$$
+Thus the result follows from
+Lemma \ref{lemma-additivity-divisors-restricted}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-coherent-sheaf-cross-p1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Let $\mathcal{F}$ be a coherent sheaf on $X \times \mathbf{P}^1$.
Let $i_0, i_\infty : X \to X \times \mathbf{P}^1$ be the closed immersions
such that $i_t(x) = (x, t)$ for $t = 0, \infty$. Denote
$\mathcal{F}_0 = i_0^*\mathcal{F}$ and
+$\mathcal{F}_\infty = i_\infty^*\mathcal{F}$.
+Assume
+\begin{enumerate}
+\item $\dim_\delta(\text{Supp}(\mathcal{F})) \leq k + 1$,
+\item $\dim_\delta(\text{Supp}(\mathcal{F}_0)) \leq k$,
+$\dim_\delta(\text{Supp}(\mathcal{F}_\infty)) \leq k$, and
+\item for any embedded associated point $\xi$ of $\mathcal{F}$ either
+$\xi \not \in (X \times \mathbf{P}^1)_0 \cup (X \times \mathbf{P}^1)_\infty$
+or $\delta(\xi) < k$.
+\end{enumerate}
+Then $[\mathcal{F}_0]_k \sim_{rat} [\mathcal{F}_\infty]_k$ as $k$-cycles on $X$.
+\end{lemma}
+
+\begin{proof}
+Let $\{W_i\}_{i \in I}$ be the collection of irreducible
+components of $\text{Supp}(\mathcal{F})$
+which have $\delta$-dimension $k + 1$.
+Write
+$$
+[\mathcal{F}]_{k + 1} = \sum n_i[W_i]
+$$
+with $n_i > 0$ as per definition. Note that $\{W_i\}$
+is a locally finite collection of closed subsets of
+$X \times_S \mathbf{P}^1_S$ by Lemma \ref{lemma-length-finite}.
+We claim that
+$$
+[\mathcal{F}_0]_k = \sum n_i[(W_i)_0]_k
+$$
+and similarly for $[\mathcal{F}_\infty]_k$. If we prove this then the lemma
+follows from Lemma \ref{lemma-rational-equivalence-family}.
+
+\medskip\noindent
+Let $Z' \subset X$ be an integral closed subscheme of $\delta$-dimension $k$.
+To prove the equality above it suffices to show that the coefficient $n$
+of $[Z']$ in $[\mathcal{F}_0]_k$ is the same as the coefficient $m$ of
+$[Z']$ in $\sum n_i[(W_i)_0]_k$. Let $\xi' \in Z'$ be the generic point.
+Set $\xi = (\xi', 0) \in X \times_S \mathbf{P}^1_S$.
+Consider the local ring $A = \mathcal{O}_{X \times_S \mathbf{P}^1_S, \xi}$.
+Let $M = \mathcal{F}_\xi$ as an $A$-module.
+Let $t \in A$ be the element cutting out $X \times_S D_0$
+(i.e., the coordinate of $\mathbf{P}^1$ at zero pulled back).
+By our choice of $\xi' \in Z'$ we have $\delta(\xi) = k$
+and hence $\dim(\text{Supp}(M)) = 1$. Since $\xi$ is not an associated point
+of $\mathcal{F}$ by assumption (3) we see that $M$ is a Cohen-Macaulay module.
+Since $\dim_\delta(\text{Supp}(\mathcal{F}_0)) = k$
+we see that $\dim(\text{Supp}(M/tM)) = 0$ which implies that $t$
+is a nonzerodivisor on $M$. Finally, the irreducible closed subschemes
+$W_i$ passing through $\xi$ correspond to the minimal primes
$\mathfrak q_i$ of $\text{Supp}(M)$. The multiplicities $n_i$ correspond
+to the lengths $\text{length}_{A_{\mathfrak q_i}}M_{\mathfrak q_i}$.
+Hence we see that
+$$
+n = \text{length}_A(M/tM)
+$$
+and
+$$
+m = \sum
+\text{length}_A(A/(t, \mathfrak q_i)A)
+\text{length}_{A_{\mathfrak q_i}}M_{\mathfrak q_i}
+$$
+Thus the result follows from
+Lemma \ref{lemma-additivity-divisors-restricted}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Chow groups and envelopes}
+\label{section-envelopes}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-envelope}
+\begin{reference}
+\cite[Definition 18.3]{F}
+\end{reference}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+An {\it envelope} is a proper morphism $f : Y \to X$
+which is completely decomposed
+(More on Morphisms, Definition \ref{more-morphisms-definition-cd-morphism}).
+\end{definition}
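\noindent
As a simple illustration (not taken from \cite{F}): suppose the
irreducible components $X_i$ of $X$, endowed with their reduced induced
structure, form a locally finite family. Then the morphism
$$
f : \coprod\nolimits_i X_i \longrightarrow X
$$
is an envelope. Namely, $f$ is finite because locally on $X$ it is a
disjoint union of finitely many closed immersions, hence $f$ is proper;
and every point $x \in X$ lies on some component $X_i$, where it lifts
to a point mapping to $x$ with trivial residue field extension.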
+
+\noindent
+The exact sequence of Lemma \ref{lemma-envelope}
+is the main motivation for the definition.
+
+\begin{lemma}
+\label{lemma-composition-envelope}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+If $f : Y \to X$ and $g : Z \to Y$ are envelopes, then
+$f \circ g$ is an envelope.
+\end{lemma}
+
+\begin{proof}
+Follows from Morphisms, Lemma \ref{morphisms-lemma-composition-proper}
+and More on Morphisms, Lemma \ref{more-morphisms-lemma-composition-cd}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-envelope}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X' \to X$ be a morphism of schemes locally of finite type over $S$.
+If $f : Y \to X$ is an envelope, then the base change $f' : Y' \to X'$
+of $f$ is an envelope too.
+\end{lemma}
+
+\begin{proof}
+Follows from Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}
+and More on Morphisms, Lemma \ref{more-morphisms-lemma-base-change-cd}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-envelope}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Let $f : Y \to X$ be an envelope. Then
+we have an exact sequence
+$$
+\CH_k(Y \times_X Y) \xrightarrow{p_* - q_*}
+\CH_k(Y) \xrightarrow{f_*}
+\CH_k(X) \to 0
+$$
+for all $k \in \mathbf{Z}$. Here $p, q : Y \times_X Y \to Y$ are
+the projections.
+\end{lemma}
+
+\begin{proof}
+Since $f$ is an envelope, $f$ is proper and hence pushforward on
+cycles and cycle classes is defined, see
+Sections \ref{section-proper-pushforward} and \ref{section-push-pull}.
+Similarly, the morphisms $p$ and $q$ are proper as base changes of $f$.
+The composition of the arrows is zero as
$f_* \circ p_* = (f \circ p)_* = (f \circ q)_* = f_* \circ q_*$, see
+Lemma \ref{lemma-compose-pushforward}.
+
+\medskip\noindent
+Let us show that $f_* : Z_k(Y) \to Z_k(X)$ is surjective.
+Namely, suppose that we have $\alpha = \sum n_i[Z_i] \in Z_k(X)$
+where $Z_i \subset X$ is a locally finite family of integral
+closed subschemes. Let $x_i \in Z_i$ be the generic point.
+Since $f$ is an envelope and hence completely decomposed,
+there exists a point $y_i \in Y$ with $f(y_i) = x_i$
+and with $\kappa(y_i)/\kappa(x_i)$ trivial. Let $W_i \subset Y$
+be the integral closed subscheme with generic point $y_i$.
+Since $f$ is closed, we see that $f(W_i) = Z_i$.
+It follows that the family of closed subschemes $W_i$ is locally finite
+on $Y$. Since $\kappa(y_i)/\kappa(x_i)$ is trivial we see
+that $\dim_\delta(W_i) = \dim_\delta(Z_i) = k$. Hence
+$\beta = \sum n_i[W_i]$ is in $Z_k(Y)$. Finally, since
+$\kappa(y_i)/\kappa(x_i)$ is trivial, the degree of the dominant
+morphism $f|_{W_i} : W_i \to Z_i$ is $1$ and we conclude
+that $f_*\beta = \alpha$.
+
+\medskip\noindent
+Since $f_* : Z_k(Y) \to Z_k(X)$ is surjective, a fortiori the map
+$f_* : \CH_k(Y) \to \CH_k(X)$ is surjective.
+
+\medskip\noindent
+Let $\beta \in Z_k(Y)$ be an element such that $f_*\beta$ is zero in
+$\CH_k(X)$. This means we can find a locally finite family of
+integral closed subschemes $Z_j \subset X$ with $\dim_\delta(Z_j) = k + 1$
+and $f_j \in R(Z_j)^*$ such that
+$$
+f_*\beta = \sum (Z_j \to X)_*\text{div}(f_j)
+$$
+as cycles where $i_j : Z_j \to X$ is the given closed immersion.
+Arguing exactly as above, we can find a locally finite
+family of integral closed subschemes $W_j \subset Y$
+with $f(W_j) = Z_j$ and such that $W_j \to Z_j$ is birational, i.e.,
+induces an isomorphism $R(Z_j) = R(W_j)$. Denote $g_j \in R(W_j)^*$
+the element corresponding to $f_j$. Observe that $W_j \to Z_j$
+is proper and that $(W_j \to Z_j)_*\text{div}(g_j) = \text{div}(f_j)$
+as cycles on $Z_j$. It follows from this that if we replace
+$\beta$ by the rationally equivalent cycle
+$$
+\beta' = \beta - \sum (W_j \to Y)_*\text{div}(g_j)
+$$
+then we find that $f_*\beta' = 0$.
+(This uses Lemma \ref{lemma-compose-pushforward}.)
+Thus to finish the proof
+of the lemma it suffices to show the claim in the following paragraph.
+
+\medskip\noindent
+Claim: if $\beta \in Z_k(Y)$ and $f_*\beta = 0$, then
$\beta = p_*\gamma - q_*\gamma$ in $Z_k(Y)$ for some
+$\gamma \in Z_k(Y \times_X Y)$. Namely, write $\beta = \sum_{j \in J} n_j[W_j]$
+with $\{W_j\}_{j \in J}$ a locally finite family of integral closed
+subschemes of $Y$ with $\dim_\delta(W_j) = k$.
+Fix an integral closed subscheme $Z \subset X$. Consider the subset
+$J_Z = \{j \in J : f(W_j) = Z\}$. This is a finite set. There are three
+cases:
+\begin{enumerate}
+\item $J_Z = \emptyset$. In this case we set $\gamma_Z = 0$.
+\item $J_Z \not = \emptyset$ and $\dim_\delta(Z) = k$.
+The condition $f_*\beta = 0$ implies by looking at the
+coefficient of $Z$ that $\sum_{j \in J_Z} n_j\deg(W_j/Z) = 0$.
+In this case we choose an integral closed subscheme $W \subset Y$
+which maps birationally onto $Z$ (see above). Looking at generic
+points, we see that $W_j \times_Z W$ has a unique irreducible
+component $W'_j \subset W_j \times_Z W \subset Y \times_X Y$
+mapping birationally to $W_j$. Then $W'_j \to W$ is dominant
and $\deg(W'_j/W) = \deg(W_j/Z)$. Thus if we set
+$\gamma_Z = \sum_{j \in J_Z} n_j[W'_j]$
+then we see that
+$p_*\gamma_Z = \sum_{j \in J_Z} n_j[W_j]$ and
+$q_*\gamma_Z = \sum_{j \in J_Z} n_j\deg(W'_j/W)[W] = 0$.
+\item $J_Z \not = \emptyset$ and $\dim_\delta(Z) < k$.
+In this case we choose an integral closed subscheme $W \subset Y$
+which maps birationally onto $Z$ (see above). Looking at generic
+points, we see that $W_j \times_Z W$ has a unique irreducible
+component $W'_j \subset W_j \times_Z W \subset Y \times_X Y$
+mapping birationally to $W_j$. Then $W'_j \to W$ is dominant
+and $k = \dim_\delta(W'_j) > \dim_\delta(W) = \dim_\delta(Z)$.
+Thus if we set $\gamma_Z = \sum_{j \in J_Z} n_j[W'_j]$
+then we see that
+$p_*\gamma_Z = \sum_{j \in J_Z} n_j[W_j]$ and
+$q_*\gamma_Z = 0$.
+\end{enumerate}
+Since the family of integral closed subschemes $\{f(W_j)\}$
+is locally finite on $X$
+(Lemma \ref{lemma-quasi-compact-locally-finite})
+we see that the $k$-cycle
+$$
+\gamma = \sum\nolimits_{Z \subset X\text{ integral closed}} \gamma_Z
+$$
+on $Y \times_X Y$ is well defined. By our computations above it follows that
$p_*\gamma = \beta$ and $q_*\gamma = 0$ which implies
+what we wanted to prove.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Chow groups and K-groups}
+\label{section-chow-and-K}
+
+\noindent
+In this section we are going to compare $K_0$ of the
+category of coherent sheaves to the chow groups.
+
+\medskip\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+We denote $\textit{Coh}(X) = \textit{Coh}(\mathcal{O}_X)$
+the category of coherent sheaves on $X$.
+It is an abelian category, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-abelian-Noetherian}.
+For any $k \in \mathbf{Z}$ we let $\textit{Coh}_{\leq k}(X)$
+be the full subcategory of $\textit{Coh}(X)$
+consisting of those coherent sheaves $\mathcal{F}$
+having $\dim_\delta(\text{Supp}(\mathcal{F})) \leq k$.
+
+\begin{lemma}
+\label{lemma-Serre-subcategories}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+The categories $\textit{Coh}_{\leq k}(X)$ are Serre subcategories
+of the abelian category $\textit{Coh}(X)$.
+\end{lemma}
+
+\begin{proof}
+The definition of a Serre subcategory is
+Homology, Definition \ref{homology-definition-serre-subcategory}.
+The proof of the lemma is straightforward and omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cycles-k-group}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+The maps
+$$
+Z_k(X)
+\longrightarrow
+K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X)),
+\quad
+\sum n_Z[Z] \mapsto
+\left[\bigoplus\nolimits_{n_Z > 0} \mathcal{O}_Z^{\oplus n_Z}\right]
+-
+\left[\bigoplus\nolimits_{n_Z < 0} \mathcal{O}_Z^{\oplus -n_Z}\right]
+$$
+and
+$$
+K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))
+\longrightarrow
+Z_k(X),\quad
+\mathcal{F} \longmapsto [\mathcal{F}]_k
+$$
+are mutually inverse isomorphisms.
+\end{lemma}
+
+\begin{proof}
+Note that if $\sum n_Z[Z]$ is in $Z_k(X)$, then
+the direct sums
+$\bigoplus\nolimits_{n_Z > 0} \mathcal{O}_Z^{\oplus n_Z}$ and
+$\bigoplus\nolimits_{n_Z < 0} \mathcal{O}_Z^{\oplus -n_Z}$
+are coherent sheaves on $X$ since the family $\{Z \mid n_Z > 0\}$
+is locally finite on $X$.
+The map $\mathcal{F} \to [\mathcal{F}]_k$ is additive
+on $\textit{Coh}_{\leq k}(X)$, see
+Lemma \ref{lemma-additivity-sheaf-cycle}. And $[\mathcal{F}]_k = 0$
+if $\mathcal{F} \in \textit{Coh}_{\leq k - 1}(X)$. By part (1)
+of Homology, Lemma \ref{homology-lemma-serre-subcategory-K-groups}
+this implies that the second map is well defined too.
+It is clear that the composition of the first map with the second
+map is the identity.
+
+\medskip\noindent
+Conversely, say we start with a coherent sheaf $\mathcal{F}$
+on $X$. Write $[\mathcal{F}]_k = \sum_{i \in I} n_i[Z_i]$
+with $n_i > 0$ and $Z_i \subset X$, $i \in I$
+pairwise distinct integral closed subschemes of $\delta$-dimension $k$.
+We have to show that
+$$
+[\mathcal{F}] = [\bigoplus\nolimits_{i \in I} \mathcal{O}_{Z_i}^{\oplus n_i}]
+$$
+in $K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+Denote $\xi_i \in Z_i$ the generic point.
+If we set
+$$
+\mathcal{F}' = \Ker(\mathcal{F} \to \bigoplus \xi_{i, *}\mathcal{F}_{\xi_i})
+$$
+then $\mathcal{F}'$ is the maximal coherent submodule of $\mathcal{F}$
+whose support has dimension $\leq k - 1$. In particular $\mathcal{F}$
+and $\mathcal{F}/\mathcal{F}'$ have the same class in
+$K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+Thus after replacing $\mathcal{F}$ by $\mathcal{F}/\mathcal{F}'$
+we may and do assume that the kernel $\mathcal{F}'$ displayed
+above is zero.
+
+\medskip\noindent
+For each $i \in I$ we choose a filtration
+$$
+\mathcal{F}_{\xi_i} = \mathcal{F}_i^0 \supset \mathcal{F}_i^1 \supset
+\ldots \supset \mathcal{F}_i^{n_i} = 0
+$$
+such that the successive quotients are of dimension $1$ over the residue
+field at $\xi_i$. This is possible as the length of $\mathcal{F}_{\xi_i}$
+over $\mathcal{O}_{X, \xi_i}$ is $n_i$.
+For $p > n_i$ set $\mathcal{F}_i^p = 0$. For $p \geq 0$ we denote
+$$
+\mathcal{F}^p =
+\Ker\left(\mathcal{F} \longrightarrow \bigoplus
+\xi_{i, *}(\mathcal{F}_{\xi_i}/\mathcal{F}_i^p)\right)
+$$
+Then $\mathcal{F}^p$ is coherent, $\mathcal{F}^0 = \mathcal{F}$, and
+$\mathcal{F}^p/\mathcal{F}^{p + 1}$ is isomorphic to a free
+$\mathcal{O}_{Z_i}$-module of rank $1$ (if $n_i > p$) or $0$
+(if $n_i \leq p$) in an open neighbourhood of $\xi_i$. Moreover,
+$\mathcal{F}' = \bigcap \mathcal{F}^p = 0$. Since every quasi-compact
+open $U \subset X$ contains only a finite number of $\xi_i$
+we conclude that $\mathcal{F}^p|_U$ is zero for $p \gg 0$.
+Hence $\bigoplus_{p \geq 0} \mathcal{F}^p$ is a coherent
+$\mathcal{O}_X$-module. Consider the short exact sequences
+$$
+0 \to
+\bigoplus\nolimits_{p > 0} \mathcal{F}^p \to
+\bigoplus\nolimits_{p \geq 0} \mathcal{F}^p \to
+\bigoplus\nolimits_{p > 0} \mathcal{F}^p/\mathcal{F}^{p + 1} \to 0
+$$
+and
+$$
+0 \to
+\bigoplus\nolimits_{p > 0} \mathcal{F}^p \to
+\bigoplus\nolimits_{p \geq 0} \mathcal{F}^p \to
+\mathcal{F} \to 0
+$$
+of coherent $\mathcal{O}_X$-modules. This already shows that
+$$
+[\mathcal{F}] = [\bigoplus \mathcal{F}^p/\mathcal{F}^{p + 1}]
+$$
+in $K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+Next, for every $p \geq 0$ and $i \in I$ such that $n_i > p$
+we choose a nonzero ideal sheaf $\mathcal{I}_{i, p} \subset \mathcal{O}_{Z_i}$
+and a map $\mathcal{I}_{i, p} \to \mathcal{F}^p/\mathcal{F}^{p + 1}$ on $X$
+which is an isomorphism over the open neighbourhood of $\xi_i$
+mentioned above. This is possible by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-extend-coherent}.
+Then we consider the short exact sequence
+$$
+0 \to
+\bigoplus\nolimits_{p \geq 0, i \in I, n_i > p} \mathcal{I}_{i, p}
+\to
+\bigoplus \mathcal{F}^p/\mathcal{F}^{p + 1} \to
+\mathcal{Q} \to 0
+$$
+and the short exact sequence
+$$
+0 \to
+\bigoplus\nolimits_{p \geq 0, i \in I, n_i > p} \mathcal{I}_{i, p}
+\to
+\bigoplus\nolimits_{p \geq 0, i \in I, n_i > p} \mathcal{O}_{Z_i}
+\to
+\mathcal{Q}' \to 0
+$$
+Observe that both $\mathcal{Q}$ and $\mathcal{Q}'$ are zero in a neighbourhood
+of the points $\xi_i$ and that they are supported on $\bigcup Z_i$.
+Hence $\mathcal{Q}$ and $\mathcal{Q}'$ are in
+$\textit{Coh}_{\leq k - 1}(X)$.
+Since
+$$
+\bigoplus\nolimits_{i \in I} \mathcal{O}_{Z_i}^{\oplus n_i} \cong
+\bigoplus\nolimits_{p \geq 0, i \in I, n_i > p} \mathcal{O}_{Z_i}
+$$
+this concludes the proof.
+\end{proof}
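\noindent
Here is a toy example of the correspondence, included only as an
illustration. Take $S = \Spec(k)$ with $\delta$ the usual dimension
function, $X = \mathbf{A}^1_k = \Spec(k[x])$, and
$\mathcal{F} = \mathcal{O}_X/(x^2(x - 1))$. Setting $Z_1 = V(x)$
and $Z_2 = V(x - 1)$ we find
$$
[\mathcal{F}]_0 =
\text{length}_{k[x]_{(x)}}\left(k[x]/(x^2)\right)[Z_1] +
\text{length}_{k[x]_{(x - 1)}}\left(k[x]/(x - 1)\right)[Z_2] =
2[Z_1] + [Z_2]
$$
so under the isomorphisms of Lemma \ref{lemma-cycles-k-group} the class
of $\mathcal{F}$ in
$K_0(\textit{Coh}_{\leq 0}(X)/\textit{Coh}_{\leq -1}(X))$
agrees with that of
$\mathcal{O}_{Z_1}^{\oplus 2} \oplus \mathcal{O}_{Z_2}$,
even though $\mathcal{F}$ is not a module over
$\mathcal{O}_{Z_1 \cup Z_2}$.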
+
+\begin{lemma}
+\label{lemma-finite-cycles-k-group}
+Let $\pi : X \to Y$ be a finite morphism of schemes locally of finite type
+over $(S, \delta)$ as in Situation \ref{situation-setup}. Then
+$\pi_* : \textit{Coh}(X) \to \textit{Coh}(Y)$ is an exact functor
+which sends $\textit{Coh}_{\leq k}(X)$ into $\textit{Coh}_{\leq k}(Y)$
+and induces homomorphisms on $K_0$ of these categories and
+their quotients. The maps of Lemma \ref{lemma-cycles-k-group}
+fit into a commutative diagram
+$$
+\xymatrix{
+Z_k(X) \ar[d]^{\pi_*} \ar[r] &
+K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))
+\ar[d]^{\pi_*} \ar[r] &
+Z_k(X) \ar[d]^{\pi_*} \\
+Z_k(Y) \ar[r] &
+K_0(\textit{Coh}_{\leq k}(Y)/\textit{Coh}_{\leq k - 1}(Y)) \ar[r] &
+Z_k(Y)
+}
+$$
+\end{lemma}
+
+\begin{proof}
+A finite morphism is affine, hence pushforward of quasi-coherent
+modules along $\pi$ is an exact functor by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-relative-affine-vanishing}.
+A finite morphism is proper, hence $\pi_*$ sends coherent sheaves
+to coherent sheaves, see Cohomology of Schemes, Proposition
+\ref{coherent-proposition-proper-pushforward-coherent}.
+The statement on dimensions of supports is clear.
+Commutativity on the right follows immediately from
+Lemma \ref{lemma-cycle-push-sheaf}.
+Since the horizontal arrows are bijections, we find that
+we have commutativity on the left as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-from-chow-to-K}
+Let $X$ be a scheme locally of finite type over $(S, \delta)$
+as in Situation \ref{situation-setup}. There is a canonical map
+$$
+\CH_k(X)
+\longrightarrow
+K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))
+$$
+induced by the map
+$Z_k(X) \to K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))$
+from Lemma \ref{lemma-cycles-k-group}.
+\end{lemma}
+
+\begin{proof}
+We have to show that an element $\alpha$ of $Z_k(X)$ which is rationally
+equivalent to zero, is mapped to zero in
+$K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+Write $\alpha = \sum (i_j)_*\text{div}(f_j)$ as in
+Definition \ref{definition-rational-equivalence}.
+Observe that
+$$
+\pi = \coprod i_j : W = \coprod W_j \longrightarrow X
+$$
+is a finite morphism as each $i_j : W_j \to X$ is a closed immersion
+and the family of $W_j$ is locally finite in $X$. Hence we may use
+Lemma \ref{lemma-finite-cycles-k-group} to reduce to the case of $W$.
Since $W$ is a disjoint union of integral schemes, we reduce
+to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $X$ is integral of $\delta$-dimension $k + 1$.
+Let $f$ be a nonzero rational function on $X$.
+Let $\alpha = \text{div}(f)$. We have to show that
+$\alpha$ is mapped to zero in
+$K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the ideal of denominators
+of $f$, see Divisors, Definition
+\ref{divisors-definition-regular-meromorphic-ideal-denominators}.
+Then we have short exact sequences
+$$
+0 \to \mathcal{I} \to \mathcal{O}_X \to \mathcal{O}_X/\mathcal{I} \to 0
+$$
+and
+$$
+0 \to \mathcal{I} \xrightarrow{f} \mathcal{O}_X \to
+\mathcal{O}_X/f\mathcal{I} \to 0
+$$
+See Divisors, Lemma
+\ref{divisors-lemma-regular-meromorphic-ideal-denominators}.
We claim that
$$
[\mathcal{O}_X/f\mathcal{I}]_k - [\mathcal{O}_X/\mathcal{I}]_k =
\text{div}(f)
$$
The claim implies the element $\alpha = \text{div}(f)$ is represented by
$[\mathcal{O}_X/f\mathcal{I}] - [\mathcal{O}_X/\mathcal{I}]$
in $K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+Then the short exact sequences show that this element maps to
+zero in $K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+
+\medskip\noindent
+To prove the claim, let $Z \subset X$ be an integral closed subscheme
+of $\delta$-dimension $k$ and let $\xi \in Z$ be its generic point.
+Then $I = \mathcal{I}_\xi \subset A = \mathcal{O}_{X, \xi}$
+is an ideal such that $fI \subset A$. Now the coefficient of
+$[Z]$ in $\text{div}(f)$ is $\text{ord}_A(f)$. (Of course as usual
+we identify the function field of $X$ with the fraction field of $A$.)
On the other hand, the coefficient of $[Z]$ in
$[\mathcal{O}_X/f\mathcal{I}] - [\mathcal{O}_X/\mathcal{I}]$
is
$$
\text{length}_A(A/fI) - \text{length}_A(A/I)
$$
Using the distance function of
Algebra, Definition \ref{algebra-definition-distance}
we can rewrite this as
$$
d(A, fI) - d(A, I) = d(I, fI) = \text{ord}_A(f)
$$
+The equalities hold by Algebra, Lemmas
+\ref{algebra-lemma-properties-distance-function} and
+\ref{algebra-lemma-order-vanishing-determinant}.
+(Using these lemmas isn't necessary, but convenient.)
+\end{proof}
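\noindent
A quick sanity check of the claim in the proof above, for illustration
only: take $X = \mathbf{A}^1_k$ with coordinate $x$ and $f = x$. Then
the ideal of denominators is $\mathcal{I} = \mathcal{O}_X$ and
$f\mathcal{I} = (x)$, so with $Z = V(x)$ we find
$$
[\mathcal{O}_X/f\mathcal{I}]_0 - [\mathcal{O}_X/\mathcal{I}]_0 =
[Z] - 0 = \text{div}(x)
$$
as it should be. Similarly $f = 1/x$ has $\mathcal{I} = (x)$ and
$f\mathcal{I} = \mathcal{O}_X$, which gives $-[Z] = \text{div}(1/x)$.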
+
+\begin{remark}
+\label{remark-good-cases-K-A}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+We will see later (in Lemma \ref{lemma-cycles-rational-equivalence-K-group})
+that the map
+$$
+\CH_k(X)
+\longrightarrow
K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))
+$$
+of Lemma \ref{lemma-from-chow-to-K} is injective.
+Composing with the canonical map
+$$
K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))
+\longrightarrow
+K_0(\textit{Coh}(X)/\textit{Coh}_{\leq k - 1}(X))
+$$
+we obtain a canonical map
+$$
+\CH_k(X)
+\longrightarrow
+K_0(\textit{Coh}(X)/\textit{Coh}_{\leq k - 1}(X)).
+$$
+We have not been able to find a statement or conjecture in the
literature as to whether this map should be injective or not.
+It seems reasonable to expect the kernel of this map to be torsion.
+We will return to this question (insert future reference).
+\end{remark}
+
+\begin{lemma}
+\label{lemma-K-coherent-supported-on-closed}
+Let $X$ be a locally Noetherian scheme. Let $Z \subset X$ be a closed
+subscheme. Denote $\textit{Coh}_Z(X) \subset \textit{Coh}(X)$
+the Serre subcategory of coherent $\mathcal{O}_X$-modules whose
+set theoretic support is contained in $Z$. Then the exact inclusion
+functor $\textit{Coh}(Z) \to \textit{Coh}_Z(X)$ induces
+an isomorphism
+$$
+K'_0(Z) = K_0(\textit{Coh}(Z)) \longrightarrow K_0(\textit{Coh}_Z(X))
+$$
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be an object of $\textit{Coh}_Z(X)$.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the quasi-coherent
+ideal sheaf of $Z$. Consider the descending filtration
+$$
+\ldots \subset
+\mathcal{F}^p = \mathcal{I}^p \mathcal{F} \subset
+\mathcal{F}^{p - 1} \subset \ldots \subset \mathcal{F}^0 = \mathcal{F}
+$$
+Exactly as in the proof of Lemma \ref{lemma-from-chow-to-K} this filtration
+is locally finite and hence
+$\bigoplus_{p \geq 0} \mathcal{F}^p$,
+$\bigoplus_{p \geq 1} \mathcal{F}^p$, and
+$\bigoplus_{p \geq 0} \mathcal{F}^p/\mathcal{F}^{p + 1}$
+are coherent $\mathcal{O}_X$-modules supported on $Z$.
+Hence we get
+$$
+[\mathcal{F}] =
+[\bigoplus\nolimits_{p \geq 0} \mathcal{F}^p/\mathcal{F}^{p + 1}]
+$$
+in $K_0(\textit{Coh}_Z(X))$ exactly as in the proof of
+Lemma \ref{lemma-from-chow-to-K}. Since the coherent module
+$\bigoplus_{p \geq 0} \mathcal{F}^p/\mathcal{F}^{p + 1}$
+is annihilated by $\mathcal{I}$ we conclude that
+$[\mathcal{F}]$ is in the image. Actually, we claim that the map
+$$
+\mathcal{F} \longmapsto
+c(\mathcal{F}) =
+[\bigoplus\nolimits_{p \geq 0} \mathcal{F}^p/\mathcal{F}^{p + 1}]
+$$
+factors through $K_0(\textit{Coh}_Z(X))$ and is an inverse to
+the map in the statement of the lemma. To see this all we have
+to show is that if
+$$
+0 \to \mathcal{F} \to \mathcal{G} \to \mathcal{H} \to 0
+$$
+is a short exact sequence in $\textit{Coh}_Z(X)$, then we
+get $c(\mathcal{G}) = c(\mathcal{F}) + c(\mathcal{H})$.
+Observe that for all $q \geq 0$ we have a short exact sequence
+$$
+0 \to
+(\mathcal{F} \cap \mathcal{I}^q\mathcal{G})/
+(\mathcal{F} \cap \mathcal{I}^{q + 1}\mathcal{G}) \to
+\mathcal{G}^q/\mathcal{G}^{q + 1} \to
+\mathcal{H}^q/\mathcal{H}^{q + 1} \to 0
+$$
+For $p, q \geq 0$ consider the coherent submodule
+$$
+\mathcal{F}^{p, q} = \mathcal{I}^p\mathcal{F} \cap \mathcal{I}^q\mathcal{G}
+$$
+Arguing exactly as above and using that the filtrations
+$\mathcal{F}^p = \mathcal{I}^p\mathcal{F}$ and
+$\mathcal{F} \cap \mathcal{I}^q\mathcal{G}$ are locally finite,
+we find that
+$$
+[\bigoplus\nolimits_{p \geq 0} \mathcal{F}^p/\mathcal{F}^{p + 1}] =
+[\bigoplus\nolimits_{p, q \geq 0}
+\mathcal{F}^{p, q}/(\mathcal{F}^{p + 1, q} + \mathcal{F}^{p, q + 1})] =
+[\bigoplus\nolimits_{q \geq 0}
+(\mathcal{F} \cap \mathcal{I}^q\mathcal{G})/
+(\mathcal{F} \cap \mathcal{I}^{q + 1}\mathcal{G})]
+$$
+in $K_0(\textit{Coh}(Z))$. Combined with the exact sequences above we obtain
+the desired result. Some details omitted.
+\end{proof}
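\noindent
For instance (an illustration only), let $X = \mathbf{A}^1_k$,
$Z = V(x)$, and $\mathcal{F} = \mathcal{O}_X/(x^2)$, which lies in
$\textit{Coh}_Z(X)$ but is not an $\mathcal{O}_Z$-module. The
filtration used in the proof is
$$
\mathcal{F} \supset x\mathcal{F} \supset 0
$$
with both graded pieces isomorphic to $\mathcal{O}_Z$, so the inverse
map constructed above sends $[\mathcal{F}]$ to
$c(\mathcal{F}) = [\mathcal{O}_Z^{\oplus 2}]$ in $K'_0(Z)$.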
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{The divisor associated to an invertible sheaf}
+\label{section-divisor-invertible-sheaf}
+
+\noindent
+The following definition is the analogue of
+Divisors, Definition \ref{divisors-definition-divisor-invertible-sheaf}
+in our current setup.
+
+\begin{definition}
+\label{definition-divisor-invertible-sheaf}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Assume $X$ is
+integral and $n = \dim_\delta(X)$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item For any nonzero meromorphic section $s$ of $\mathcal{L}$
we define the {\it Weil divisor associated to $s$} to be the
+$(n - 1)$-cycle
+$$
+\text{div}_\mathcal{L}(s) =
+\sum \text{ord}_{Z, \mathcal{L}}(s) [Z]
+$$
+defined in Divisors, Definition
+\ref{divisors-definition-divisor-invertible-sheaf}.
+This makes sense because Weil divisors have $\delta$-dimension $n - 1$
+by Lemma \ref{lemma-divisor-delta-dimension}.
\item We define the {\it Weil divisor associated to $\mathcal{L}$} as
+$$
+c_1(\mathcal{L}) \cap [X] =
+\text{class of }\text{div}_\mathcal{L}(s) \in \CH_{n - 1}(X)
+$$
+where $s$ is any nonzero meromorphic section of $\mathcal{L}$ over
+$X$. This is well defined by
+Divisors, Lemma \ref{divisors-lemma-divisor-meromorphic-well-defined}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Let $X$ and $S$ be as in Definition \ref{definition-divisor-invertible-sheaf}
+above. Set $n = \dim_\delta(X)$. It is clear from the definitions that
+$Cl(X) = \CH_{n - 1}(X)$ where $Cl(X)$ is the Weil divisor class group of $X$
+as defined in Divisors, Definition \ref{divisors-definition-class-group}.
+The map
+$$
+\Pic(X) \longrightarrow \CH_{n - 1}(X), \quad
+\mathcal{L} \longmapsto c_1(\mathcal{L}) \cap [X]
+$$
+is the same as the map $\Pic(X) \to Cl(X)$ constructed in
+Divisors, Equation (\ref{divisors-equation-c1}) for arbitrary
+locally Noetherian integral schemes. In particular, this map
+is a homomorphism of abelian groups, it is injective if $X$ is
+a normal scheme, and an isomorphism if all local rings of $X$
+are UFDs. See Divisors, Lemmas \ref{divisors-lemma-normal-c1-injective} and
+\ref{divisors-lemma-local-rings-UFD-c1-bijective}.
+There are some cases where it is easy to compute the
+Weil divisor associated to an invertible sheaf.
+
+\begin{lemma}
+\label{lemma-compute-c1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Assume $X$ is
+integral and $n = \dim_\delta(X)$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $s \in \Gamma(X, \mathcal{L})$ be a nonzero global section.
+Then
+$$
+\text{div}_\mathcal{L}(s) = [Z(s)]_{n - 1}
+$$
+in $Z_{n - 1}(X)$ and
+$$
+c_1(\mathcal{L}) \cap [X] = [Z(s)]_{n - 1}
+$$
+in $\CH_{n - 1}(X)$.
+\end{lemma}
+
+\begin{proof}
+Let $Z \subset X$ be an integral closed subscheme of
+$\delta$-dimension $n - 1$. Let $\xi \in Z$ be its generic
+point. Choose a generator $s_\xi \in \mathcal{L}_\xi$.
+Write $s = fs_\xi$ for some $f \in \mathcal{O}_{X, \xi}$.
+By definition of $Z(s)$, see
+Divisors, Definition \ref{divisors-definition-zero-scheme-s}
+we see that $Z(s)$ is cut out by a quasi-coherent
+sheaf of ideals $\mathcal{I} \subset \mathcal{O}_X$ such
+that $\mathcal{I}_\xi = (f)$. Hence
$\text{length}_{\mathcal{O}_{X, \xi}}(\mathcal{O}_{Z(s), \xi})
=
\text{length}_{\mathcal{O}_{X, \xi}}(\mathcal{O}_{X, \xi}/(f))
=
\text{ord}_{\mathcal{O}_{X, \xi}}(f)$ as desired.
+\end{proof}
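\noindent
For example (an illustration only): on $X = \mathbf{P}^1_k$ with
$\mathcal{L} = \mathcal{O}_{\mathbf{P}^1_k}(1)$ and $s = T_0$ a
homogeneous coordinate, the zero scheme $Z(s)$ is the reduced point
$D_0$, whence
$$
c_1(\mathcal{O}_{\mathbf{P}^1_k}(1)) \cap [\mathbf{P}^1_k] =
[Z(s)]_0 = [D_0]
$$
in $\CH_0(\mathbf{P}^1_k)$. Taking instead
$s = T_0^2 \in \Gamma(X, \mathcal{O}_{\mathbf{P}^1_k}(2))$
gives $[Z(s)]_0 = 2[D_0]$.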
+
+\noindent
+The following lemma will be superseded by the more general
+Lemma \ref{lemma-flat-pullback-cap-c1}.
+
+\begin{lemma}
+\label{lemma-flat-pullback-divisor-invertible-sheaf}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
Let $X$, $Y$ be locally of finite type over $S$. Assume $X$, $Y$
are integral and $n = \dim_\delta(Y)$.
Let $f : X \to Y$ be a flat morphism of relative dimension $r$.
Let $\mathcal{L}$ be an invertible $\mathcal{O}_Y$-module. Then
+$$
+f^*(c_1(\mathcal{L}) \cap [Y]) = c_1(f^*\mathcal{L}) \cap [X]
+$$
+in $\CH_{n + r - 1}(X)$.
+\end{lemma}
+
+\begin{proof}
+Let $s$ be a nonzero meromorphic section of $\mathcal{L}$.
+We will show that actually
+$f^*\text{div}_\mathcal{L}(s) = \text{div}_{f^*\mathcal{L}}(f^*s)$
+and hence the lemma holds.
+To see this let $\xi \in Y$ be a point and let $s_\xi \in \mathcal{L}_\xi$
+be a generator. Write $s = gs_\xi$ with $g \in R(Y)^*$.
+Then there is an open neighbourhood $V \subset Y$ of $\xi$
+such that $s_\xi \in \mathcal{L}(V)$ and such that $s_\xi$ generates
+$\mathcal{L}|_V$. Hence we see that
+$$
+\text{div}_\mathcal{L}(s)|_V = \text{div}_Y(g)|_V.
+$$
+In exactly the same way, since $f^*s_\xi$ generates $f^*\mathcal{L}$
+over $f^{-1}(V)$ and since $f^*s = g f^*s_\xi$ we also
+have
+$$
+\text{div}_{f^*\mathcal{L}}(f^*s)|_{f^{-1}(V)}
+=
+\text{div}_X(g)|_{f^{-1}(V)}.
+$$
+Thus the desired equality of cycles over $f^{-1}(V)$ follows from the
+corresponding result for pullbacks of principal divisors, see
+Lemma \ref{lemma-flat-pullback-principal-divisor}.
+\end{proof}
+
+
+
+\section{Intersecting with an invertible sheaf}
+\label{section-intersecting-with-divisors}
+
+\noindent
+In this section we study the following construction.
+
+\begin{definition}
+\label{definition-cap-c1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+We define, for every integer $k$, an operation
+$$
+c_1(\mathcal{L}) \cap - :
+Z_{k + 1}(X) \to \CH_k(X)
+$$
+called {\it intersection with the first Chern class of $\mathcal{L}$}.
+\begin{enumerate}
+\item Given an integral closed subscheme $i : W \to X$ with
+$\dim_\delta(W) = k + 1$ we define
+$$
+c_1(\mathcal{L}) \cap [W] = i_*(c_1({i^*\mathcal{L}}) \cap [W])
+$$
+where the right hand side is defined in
+Definition \ref{definition-divisor-invertible-sheaf}.
+\item For a general $(k + 1)$-cycle $\alpha = \sum n_i [W_i]$ we set
+$$
+c_1(\mathcal{L}) \cap \alpha = \sum n_i c_1(\mathcal{L}) \cap [W_i]
+$$
+\end{enumerate}
+\end{definition}
+
+\noindent
+Write each $c_1(\mathcal{L}) \cap [W_i] = \sum_j n_{i, j} [Z_{i, j}]$
+with $\{Z_{i, j}\}_j$ a locally finite collection
+of integral closed subschemes of $W_i$. Since $\{W_i\}$ is a locally
+finite collection of integral closed subschemes on $X$, it follows
+easily that $\{Z_{i, j}\}_{i, j}$ is a locally finite collection
+of closed subschemes of $X$. Hence
+$c_1(\mathcal{L}) \cap \alpha = \sum_{i, j} n_i n_{i, j}[Z_{i, j}]$
+is a cycle. Another, more convenient, way to think about this
+is to observe that the morphism $\coprod W_i \to X$ is
+proper. Hence $c_1(\mathcal{L}) \cap \alpha$ can be viewed
+as the pushforward of a class in $\CH_k(\coprod W_i) = \prod \CH_k(W_i)$.
+This also explains why the result is well defined up to rational
+equivalence on $X$.
+
+\medskip\noindent
+The main goal for the next few sections is to show that intersecting with
+$c_1(\mathcal{L})$ factors through rational equivalence.
+This is not a triviality.
+
+\begin{lemma}
+\label{lemma-c1-cap-additive}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$, $\mathcal{N}$ be invertible sheaves on $X$.
+Then
+$$
+c_1(\mathcal{L}) \cap \alpha + c_1(\mathcal{N}) \cap \alpha =
+c_1(\mathcal{L} \otimes_{\mathcal{O}_X} \mathcal{N}) \cap \alpha
+$$
+in $\CH_k(X)$ for every $\alpha \in Z_{k + 1}(X)$. Moreover,
+$c_1(\mathcal{O}_X) \cap \alpha = 0$ for all $\alpha$.
+\end{lemma}
+
+\begin{proof}
+The additivity follows directly from
+Divisors, Lemma \ref{divisors-lemma-c1-additive}
+and the definitions. To see that $c_1(\mathcal{O}_X) \cap \alpha = 0$
+consider the section $1 \in \Gamma(X, \mathcal{O}_X)$. This restricts
+to an everywhere nonzero section on any integral closed subscheme
+$W \subset X$. Hence $c_1(\mathcal{O}_X) \cap [W] = 0$ as desired.
+\end{proof}
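+
+\noindent
+For example, combining Lemma \ref{lemma-c1-cap-additive} with
+Lemma \ref{lemma-compute-c1}: if $X = \mathbf{P}^1_k$ over a field $k$
+(with $\delta$ the usual dimension function) and $a \geq 0$, then
+additivity gives
+$$
+c_1(\mathcal{O}_{\mathbf{P}^1_k}(a)) \cap [X]
+= a \left(c_1(\mathcal{O}_{\mathbf{P}^1_k}(1)) \cap [X]\right)
+= a[\text{pt}]
+$$
+in $\CH_0(X)$, where $[\text{pt}]$ denotes the class of a $k$-rational
+point. Equivalently, the zero scheme of the section $T_0^a$ of
+$\mathcal{O}_{\mathbf{P}^1_k}(a)$ is a point of multiplicity $a$.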
+
+\noindent
+Recall that $Z(s) \subset X$ denotes the zero scheme of a global section
+$s$ of an invertible sheaf on a scheme $X$, see
+Divisors, Definition \ref{divisors-definition-zero-scheme-s}.
+
+\begin{lemma}
+\label{lemma-prepare-geometric-cap}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $Y$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_Y$-module.
+Let $s \in \Gamma(Y, \mathcal{L})$.
+Assume
+\begin{enumerate}
+\item $\dim_\delta(Y) \leq k + 1$,
+\item $\dim_\delta(Z(s)) \leq k$, and
+\item for every generic point $\xi$ of an irreducible component of
+$Z(s)$ of $\delta$-dimension $k$ the multiplication by $s$
+induces an injection $\mathcal{O}_{Y, \xi} \to \mathcal{L}_\xi$.
+\end{enumerate}
+Write $[Y]_{k + 1} = \sum n_i[Y_i]$ where $Y_i \subset Y$ are the
+irreducible components of $Y$ of $\delta$-dimension $k + 1$.
+Set $s_i = s|_{Y_i} \in \Gamma(Y_i, \mathcal{L}|_{Y_i})$. Then
+\begin{equation}
+\label{equation-equal-as-cycles}
+[Z(s)]_k = \sum n_i[Z(s_i)]_k
+\end{equation}
+as $k$-cycles on $Y$.
+\end{lemma}
+
+\begin{proof}
+Let $Z \subset Y$ be an integral closed subscheme of
+$\delta$-dimension $k$. Let $\xi \in Z$ be its generic point.
+We want to compare the coefficient $n$ of $[Z]$ in the expression
+$\sum n_i[Z(s_i)]_k$ with the coefficient $m$ of $[Z]$ in the
+expression $[Z(s)]_k$. Choose a generator $s_\xi \in \mathcal{L}_\xi$.
+Write $A = \mathcal{O}_{Y, \xi}$, $L = \mathcal{L}_\xi$.
+Then $L = As_\xi$. Write $s = f s_\xi$ for some (unique) $f \in A$.
+Hypothesis (3) means that $f : A \to A$ is injective.
+Since $\dim_\delta(Y) \leq k + 1$ and $\dim_\delta(Z) = k$
+we have $\dim(A) = 0$ or $1$. We have
+$$
+m = \text{length}_A(A/(f))
+$$
+which is finite in either case.
+
+\medskip\noindent
+If $\dim(A) = 0$, then $f : A \to A$ being injective
+implies that $f \in A^*$. Hence in this case $m$ is zero.
+Moreover, the condition $\dim(A) = 0$ means that $\xi$
+does not lie on any irreducible component of $\delta$-dimension
+$k + 1$, i.e., $n = 0$ as well.
+
+\medskip\noindent
+Now, let $\dim(A) = 1$.
+Since $A$ is a Noetherian local ring it has finitely
+many minimal primes $\mathfrak q_1, \ldots, \mathfrak q_t$.
+These correspond $1$-to-$1$ with the $Y_i$ passing through $\xi$.
+Moreover $n_i = \text{length}_{A_{\mathfrak q_i}}(A_{\mathfrak q_i})$.
+Also, the multiplicity of $[Z]$ in $[Z(s_i)]_k$ is
+$\text{length}_A(A/(f, \mathfrak q_i))$.
+Hence the equation to prove in this case is
+$$
+\text{length}_A(A/(f))
+=
+\sum \text{length}_{A_{\mathfrak q_i}}(A_{\mathfrak q_i})
+\text{length}_A(A/(f, \mathfrak q_i))
+$$
+which follows from
+Lemma \ref{lemma-additivity-divisors-restricted}.
+\end{proof}
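+
+\noindent
+A simple example of the length computation in the proof above: let
+$A = k[x, y]_{(x, y)}/(xy)$ over a field $k$. This $A$ has the two minimal
+primes $\mathfrak q_1 = (x)$ and $\mathfrak q_2 = (y)$ with
+$\text{length}_{A_{\mathfrak q_i}}(A_{\mathfrak q_i}) = 1$. Take
+$f = x - y$, which acts injectively on $A$. Then
+$A/(f) \cong k[x]_{(x)}/(x^2)$ has length $2$, in agreement with
+$$
+\text{length}_A(A/(f, \mathfrak q_1)) +
+\text{length}_A(A/(f, \mathfrak q_2)) = 1 + 1 = 2.
+$$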
+
+\noindent
+The following lemma is a useful result in order to compute the intersection
+product of the $c_1$ of an invertible sheaf and the cycle associated
+to a closed subscheme.
+Recall that $Z(s) \subset X$ denotes the zero scheme of a global section
+$s$ of an invertible sheaf on a scheme $X$, see
+Divisors, Definition \ref{divisors-definition-zero-scheme-s}.
+
+\begin{lemma}
+\label{lemma-geometric-cap}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $Y \subset X$ be a closed subscheme.
+Let $s \in \Gamma(Y, \mathcal{L}|_Y)$.
+Assume
+\begin{enumerate}
+\item $\dim_\delta(Y) \leq k + 1$,
+\item $\dim_\delta(Z(s)) \leq k$, and
+\item for every generic point $\xi$ of an irreducible component of
+$Z(s)$ of $\delta$-dimension $k$ the multiplication by $s$
+induces an injection
+$\mathcal{O}_{Y, \xi} \to (\mathcal{L}|_Y)_\xi$\footnote{For example,
+this holds if $s$ is a regular section of $\mathcal{L}|_Y$.}.
+\end{enumerate}
+Then
+$$
+c_1(\mathcal{L}) \cap [Y]_{k + 1} = [Z(s)]_k
+$$
+in $\CH_k(X)$.
+\end{lemma}
+
+\begin{proof}
+Write
+$$
+[Y]_{k + 1} = \sum n_i[Y_i]
+$$
+where $Y_i \subset Y$ are the irreducible components of
+$Y$ of $\delta$-dimension $k + 1$ and $n_i > 0$.
+By assumption the restriction
+$s_i = s|_{Y_i} \in \Gamma(Y_i, \mathcal{L}|_{Y_i})$ is not
+zero, and hence is a regular section. By Lemma \ref{lemma-compute-c1}
+we see that $[Z(s_i)]_k$ represents $c_1(\mathcal{L}|_{Y_i}) \cap [Y_i]$.
+Hence by definition
+$$
+c_1(\mathcal{L}) \cap [Y]_{k + 1} = \sum n_i[Z(s_i)]_k
+$$
+Thus the result follows from Lemma \ref{lemma-prepare-geometric-cap}.
+\end{proof}
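+
+\noindent
+For example, let $Y \subset X = \mathbf{P}^2_k$ be the closed subscheme
+cut out by a nonzero homogeneous form $F$ of degree $d$, let
+$\mathcal{L} = \mathcal{O}_{\mathbf{P}^2_k}(1)$, and let $s = \ell|_Y$
+for a linear form $\ell$ not vanishing identically on any irreducible
+component of $Y$ and avoiding the associated points of $Y$ (so that
+$s$ is a regular section and hypothesis (3) holds). Then the lemma gives
+$$
+c_1(\mathcal{L}) \cap [Y]_1 = [Z(s)]_0 = [Y \cap Z(\ell)]_0
+$$
+in $\CH_0(X)$, a zero-cycle of degree $d$ as predicted by
+B\'ezout's theorem.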
+
+
+
+
+\section{Intersecting with an invertible sheaf and push and pull}
+\label{section-intersecting-with-divisors-push-pull}
+
+\noindent
+In this section we prove that the operation $c_1(\mathcal{L}) \cap -$
+commutes with flat pullback and proper pushforward.
+
+\begin{lemma}
+\label{lemma-prepare-flat-pullback-cap-c1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a flat morphism of relative dimension $r$.
+Let $\mathcal{L}$ be an invertible sheaf on $Y$.
+Assume $Y$ is integral and $n = \dim_\delta(Y)$.
+Let $s$ be a nonzero meromorphic section of $\mathcal{L}$.
+Then we have
+$$
+f^*\text{div}_\mathcal{L}(s) = \sum n_i\text{div}_{f^*\mathcal{L}|_{X_i}}(s_i)
+$$
+in $Z_{n + r - 1}(X)$. Here the sum is over the irreducible
+components $X_i \subset X$ of $\delta$-dimension $n + r$,
+the section $s_i = f|_{X_i}^*(s)$ is the pullback of $s$, and
+$n_i = m_{X_i, X}$ is the multiplicity of $X_i$ in $X$.
+\end{lemma}
+
+\begin{proof}
+To prove this equality of cycles, we may work locally on $Y$.
+Hence we may assume $Y$ is affine and $s = p/q$ for some nonzero
+sections $p \in \Gamma(Y, \mathcal{L})$ and $q \in \Gamma(Y, \mathcal{O})$.
+If we can show both
+$$
+f^*\text{div}_\mathcal{L}(p) =
+\sum n_i\text{div}_{f^*\mathcal{L}|_{X_i}}(p_i)
+\quad\text{and}\quad
+f^*\text{div}_\mathcal{O}(q) =
+\sum n_i\text{div}_{\mathcal{O}_{X_i}}(q_i)
+$$
+(with obvious notations) then we win by the
+additivity, see Divisors, Lemma \ref{divisors-lemma-c1-additive}.
+Thus we may assume that $s \in \Gamma(Y, \mathcal{L})$.
+In this case we may apply the equality
+(\ref{equation-equal-as-cycles}) to see that
+$$
+[Z(f^*(s))]_{n + r - 1} =
+\sum n_i\text{div}_{f^*\mathcal{L}|_{X_i}}(s_i)
+$$
+where $f^*(s) \in f^*\mathcal{L}$ denotes the pullback of $s$ to $X$.
+On the other hand we have
+$$
+f^*\text{div}_\mathcal{L}(s) = f^*[Z(s)]_{n - 1}
+= [f^{-1}(Z(s))]_{n + r - 1},
+$$
+by Lemmas \ref{lemma-compute-c1} and \ref{lemma-pullback-coherent}.
+Since $Z(f^*(s)) = f^{-1}(Z(s))$ we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-pullback-cap-c1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a flat morphism of relative dimension $r$.
+Let $\mathcal{L}$ be an invertible sheaf on $Y$.
+Let $\alpha$ be a $k$-cycle on $Y$.
+Then
+$$
+f^*(c_1(\mathcal{L}) \cap \alpha) = c_1(f^*\mathcal{L}) \cap f^*\alpha
+$$
+in $\CH_{k + r - 1}(X)$.
+\end{lemma}
+
+\begin{proof}
+Write $\alpha = \sum n_i[W_i]$. We will show that
+$$
+f^*(c_1(\mathcal{L}) \cap [W_i]) = c_1(f^*\mathcal{L}) \cap f^*[W_i]
+$$
+in $\CH_{k + r - 1}(X)$ by producing a rational equivalence
+on the closed subscheme $f^{-1}(W_i)$ of $X$.
+By the discussion in
+Remark \ref{remark-infinite-sums-rational-equivalences}
+this will prove that the equality of the lemma holds.
+
+\medskip\noindent
+Let $W \subset Y$ be an integral closed subscheme of $\delta$-dimension $k$.
+Consider the closed subscheme $W' = f^{-1}(W) = W \times_Y X$
+so that we have the fibre product diagram
+$$
+\xymatrix{
+W' \ar[r] \ar[d]_h & X \ar[d]^f \\
+W \ar[r] & Y
+}
+$$
+We have to show that
+$f^*(c_1(\mathcal{L}) \cap [W]) = c_1(f^*\mathcal{L}) \cap f^*[W]$.
+Choose a nonzero meromorphic section $s$ of $\mathcal{L}|_W$.
+Let $W'_i \subset W'$ be the irreducible components of
+$\delta$-dimension $k + r$. Write $[W']_{k + r} = \sum n_i[W'_i]$
+with $n_i$ the multiplicity of $W'_i$ in $W'$ as per definition.
+So $f^*[W] = \sum n_i[W'_i]$ in $Z_{k + r}(X)$.
+Since each $W'_i \to W$ is dominant we
+see that $s_i = s|_{W'_i}$ is a nonzero meromorphic section for
+each $i$. By Lemma \ref{lemma-prepare-flat-pullback-cap-c1}
+we have the following equality of cycles
+$$
+h^*\text{div}_{\mathcal{L}|_W}(s) =
+\sum n_i\text{div}_{f^*\mathcal{L}|_{W'_i}}(s_i)
+$$
+in $Z_{k + r - 1}(W')$. This finishes the proof since
+the left hand side is a cycle on $W'$ which pushes to
+$f^*(c_1(\mathcal{L}) \cap [W])$ in $\CH_{k + r - 1}(X)$
+and the right hand side is a cycle on $W'$ which pushes to
+$c_1(f^*\mathcal{L}) \cap f^*[W]$ in $\CH_{k + r - 1}(X)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equal-c1-as-cycles}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a proper morphism.
+Let $\mathcal{L}$ be an invertible sheaf on $Y$.
+Let $s$ be a nonzero meromorphic section of $\mathcal{L}$ on $Y$.
+Assume $X$, $Y$ integral, $f$ dominant, and $\dim_\delta(X) = \dim_\delta(Y)$.
+Then
+$$
+f_*\left(\text{div}_{f^*\mathcal{L}}(f^*s)\right) =
+[R(X) : R(Y)]\text{div}_\mathcal{L}(s).
+$$
+as cycles on $Y$. In particular
+$$
+f_*(c_1(f^*\mathcal{L}) \cap [X]) =
+[R(X) : R(Y)] c_1(\mathcal{L}) \cap [Y] =
+c_1(\mathcal{L}) \cap f_*[X]
+$$
+\end{lemma}
+
+\begin{proof}
+The last equation follows from the first since $f_*[X] = [R(X) : R(Y)][Y]$
+by definition. It turns out that we can re-use
+Lemma \ref{lemma-proper-pushforward-alteration}
+to prove this. Namely, since we are trying to prove an equality
+of cycles, we may work locally on $Y$. Hence we may assume
+that $\mathcal{L} = \mathcal{O}_Y$. In this case $s$
+corresponds to a rational function $g \in R(Y)$, and
+we are simply trying to prove
+$$
+f_*\left(\text{div}_X(g)\right) =
+[R(X) : R(Y)]\text{div}_Y(g).
+$$
+Comparing with the result of the aforementioned
+Lemma \ref{lemma-proper-pushforward-alteration}
+we see that this is true since
+$\text{Nm}_{R(X)/R(Y)}(g) = g^{[R(X) : R(Y)]}$
+as $g \in R(Y)^*$.
+\end{proof}
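+
+\noindent
+A concrete instance of Lemma \ref{lemma-equal-c1-as-cycles}: over a field
+$k$ let $f : X \to Y$ be the morphism
+$\mathbf{P}^1_k \to \mathbf{P}^1_k$,
+$(T_0 : T_1) \mapsto (T_0^2 : T_1^2)$, and let
+$\mathcal{L} = \mathcal{O}_Y(1)$. Then $f$ is proper and dominant with
+$[R(X) : R(Y)] = [k(t) : k(t^2)] = 2$ and
+$f^*\mathcal{L} \cong \mathcal{O}_X(2)$, so the lemma asserts
+$$
+f_*(c_1(\mathcal{O}_X(2)) \cap [X]) = 2\, c_1(\mathcal{O}_Y(1)) \cap [Y]
+$$
+in $\CH_0(Y)$; indeed both sides are zero-cycles of degree $2$.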
+
+\begin{lemma}
+\label{lemma-pushforward-cap-c1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $p : X \to Y$ be a proper morphism.
+Let $\alpha \in Z_{k + 1}(X)$.
+Let $\mathcal{L}$ be an invertible sheaf on $Y$.
+Then
+$$
+p_*(c_1(p^*\mathcal{L}) \cap \alpha) = c_1(\mathcal{L}) \cap p_*\alpha
+$$
+in $\CH_k(Y)$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $p$ has the property that for every integral
+closed subscheme $W \subset X$ the map $p|_W : W \to Y$
+is a closed immersion. Then, by definition of capping
+with $c_1(\mathcal{L})$ the lemma holds.
+
+\medskip\noindent
+We will use this remark to reduce to a special case. Namely,
+write $\alpha = \sum n_i[W_i]$ with $n_i \not = 0$ and $W_i$ pairwise
+distinct. Let $W'_i \subset Y$ be the image of $W_i$ (as an integral
+closed subscheme). Consider the diagram
+$$
+\xymatrix{
+X' = \coprod W_i \ar[r]_-q \ar[d]_{p'} & X \ar[d]^p \\
+Y' = \coprod W'_i \ar[r]^-{q'} & Y.
+}
+$$
+Since $\{W_i\}$ is locally finite on $X$, and $p$ is proper
+we see that $\{W'_i\}$ is locally finite on $Y$ and that
+$q, q', p'$ are also proper morphisms.
+We may think of $\sum n_i[W_i]$ also as a $k$-cycle
+$\alpha' \in Z_k(X')$. Clearly $q_*\alpha' = \alpha$.
+We have
+$q_*(c_1(q^*p^*\mathcal{L}) \cap \alpha')
+= c_1(p^*\mathcal{L}) \cap q_*\alpha'$
+and
+$(q')_*(c_1((q')^*\mathcal{L}) \cap p'_*\alpha') =
+c_1(\mathcal{L}) \cap q'_*p'_*\alpha'$ by the initial
+remark of the proof. Hence it suffices to prove the lemma
+for the morphism $p'$ and the cycle $\sum n_i[W_i]$.
+Clearly, this means we may assume $X$, $Y$ integral,
+$p : X \to Y$ dominant and $\alpha = [X]$.
+In this case the result follows from
+Lemma \ref{lemma-equal-c1-as-cycles}.
+\end{proof}
+
+
+
+
+
+
+\section{The key formula}
+\label{section-key}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Assume
+$X$ is integral and $\dim_\delta(X) = n$.
+Let $\mathcal{L}$ and $\mathcal{N}$ be invertible sheaves on $X$.
+Let $s$ be a nonzero meromorphic section of $\mathcal{L}$ and
+let $t$ be a nonzero meromorphic section of $\mathcal{N}$.
+Let $Z_i \subset X$, $i \in I$ be a locally finite set of irreducible
+closed subsets of codimension $1$ with the following property:
+If $Z \not \in \{Z_i\}$ with generic point $\xi$, then $s$ is a generator
+for $\mathcal{L}_\xi$ and $t$ is a generator for $\mathcal{N}_\xi$.
+Such a set exists by
+Divisors, Lemma \ref{divisors-lemma-divisor-meromorphic-locally-finite}.
+Then
+$$
+\text{div}_\mathcal{L}(s) = \sum \text{ord}_{Z_i, \mathcal{L}}(s) [Z_i]
+$$
+and similarly
+$$
+\text{div}_\mathcal{N}(t) = \sum \text{ord}_{Z_i, \mathcal{N}}(t) [Z_i]
+$$
+Unwinding the definitions more, we pick for each $i$ generators
+$s_i \in \mathcal{L}_{\xi_i}$ and $t_i \in \mathcal{N}_{\xi_i}$
+where $\xi_i$ is the generic point of $Z_i$. Then we can write
+$$
+s = f_i s_i
+\quad\text{and}\quad
+t = g_i t_i
+$$
+Set $B_i = \mathcal{O}_{X, \xi_i}$. Then by definition
+$$
+\text{ord}_{Z_i, \mathcal{L}}(s) = \text{ord}_{B_i}(f_i)
+\quad\text{and}\quad
+\text{ord}_{Z_i, \mathcal{N}}(t) = \text{ord}_{B_i}(g_i)
+$$
+Since $t_i$ is a generator of $\mathcal{N}_{\xi_i}$ we see that
+its image in the fibre $\mathcal{N}_{\xi_i} \otimes \kappa(\xi_i)$
+is a nonzero meromorphic section of $\mathcal{N}|_{Z_i}$. We will denote
+this image $t_i|_{Z_i}$. From our definitions it follows that
+$$
+c_1(\mathcal{N}) \cap \text{div}_\mathcal{L}(s) =
+\sum \text{ord}_{B_i}(f_i)
+(Z_i \to X)_*\text{div}_{\mathcal{N}|_{Z_i}}(t_i|_{Z_i})
+$$
+and similarly
+$$
+c_1(\mathcal{L}) \cap \text{div}_\mathcal{N}(t) =
+\sum \text{ord}_{B_i}(g_i)
+(Z_i \to X)_*\text{div}_{\mathcal{L}|_{Z_i}}(s_i|_{Z_i})
+$$
+in $\CH_{n - 2}(X)$. We are going to find a rational equivalence between
+these two cycles. To do this we consider the tame symbol
+$$
+\partial_{B_i}(f_i, g_i) \in \kappa(\xi_i)^*
+$$
+see Section \ref{section-tame-symbol}.
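+
+\noindent
+For instance, when $B_i$ is a discrete valuation ring with valuation
+$\text{ord}$ and residue field $\kappa$, the tame symbol is given (with
+the conventions of Section \ref{section-tame-symbol}) by the classical
+formula
+$$
+\partial_{B_i}(f_i, g_i) =
+(-1)^{\text{ord}(f_i)\text{ord}(g_i)}
+\overline{\left(\frac{f_i^{\text{ord}(g_i)}}{g_i^{\text{ord}(f_i)}}\right)}
+\in \kappa^*
+$$
+where the bar denotes the image in the residue field.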
+
+\begin{lemma}[Key formula]
+\label{lemma-key-formula}
+In the situation above the cycle
+$$
+\sum
+(Z_i \to X)_*\left(
+\text{ord}_{B_i}(f_i) \text{div}_{\mathcal{N}|_{Z_i}}(t_i|_{Z_i}) -
+\text{ord}_{B_i}(g_i) \text{div}_{\mathcal{L}|_{Z_i}}(s_i|_{Z_i}) \right)
+$$
+is equal to the cycle
+$$
+\sum (Z_i \to X)_*\text{div}(\partial_{B_i}(f_i, g_i))
+$$
+\end{lemma}
+
+\begin{proof}
+First, let us examine what happens if we replace $s_i$ by $us_i$
+for some unit $u$ in $B_i$. Then $f_i$ gets replaced by $u^{-1} f_i$.
+Thus the first part of the first expression of the lemma is unchanged
+and in the second part we add
+$$
+-\text{ord}_{B_i}(g_i)\text{div}(u|_{Z_i})
+$$
+(where $u|_{Z_i}$ is the image of $u$ in the residue field) by
+Divisors, Lemma \ref{divisors-lemma-divisor-meromorphic-well-defined}
+and in the second expression we add
+$$
+\text{div}(\partial_{B_i}(u^{-1}, g_i))
+$$
+by bi-linearity of the tame symbol. These terms agree by property
+(\ref{item-normalization}) of the tame symbol.
+
+\medskip\noindent
+Let $Z \subset X$ be an irreducible closed subset with
+$\dim_\delta(Z) = n - 2$.
+To show that the coefficient of $Z$ in each of the two cycles of the lemma
+is the same, we may do a replacement $s_i \mapsto us_i$ as in the previous
+paragraph. In exactly the same way one shows that we may do a replacement
+$t_i \mapsto vt_i$ for some unit $v$ of $B_i$.
+
+\medskip\noindent
+Since we are proving the equality of cycles we may argue one coefficient
+at a time. Thus we choose an irreducible closed subset $Z \subset X$
+with $\dim_\delta(Z) = n - 2$ and compare coefficients. Let $\xi \in Z$
+be the generic point and set $A = \mathcal{O}_{X, \xi}$. This is a Noetherian
+local domain of dimension $2$. Choose generators $\sigma$ and $\tau$
+for $\mathcal{L}_\xi$ and $\mathcal{N}_\xi$. After shrinking $X$, we may
+and do assume $\sigma$ and $\tau$ define trivializations
+of the invertible sheaves $\mathcal{L}$ and $\mathcal{N}$ over all of $X$.
+Because the family $\{Z_i\}$ is locally
+finite, after shrinking $X$ we may assume $Z \subset Z_i$ for all $i \in I$
+and that $I$ is finite. Then $\xi_i$ corresponds to a prime
+$\mathfrak q_i \subset A$ of height $1$.
+We may write $s_i = a_i \sigma$ and $t_i = b_i \tau$
+for some $a_i$ and $b_i$ units in $A_{\mathfrak q_i}$.
+By the remarks above, it suffices to prove the lemma when
+$a_i = b_i = 1$ for all $i$.
+
+\medskip\noindent
+Assume $a_i = b_i = 1$ for all $i$. Then the first expression of the
+lemma is zero, because we choose $\sigma$ and $\tau$ to be trivializing
+sections. Write $s = f\sigma$ and $t = g \tau$ with $f$ and $g$ in the
+fraction field of $A$. By the previous paragraph we have reduced to the case
+$f_i = f$ and $g_i = g$ for all $i$. Moreover, for a height $1$ prime
+$\mathfrak q$ of $A$ which is not in $\{\mathfrak q_i\}$ we have
+that both $f$ and $g$ are units in $A_\mathfrak q$ (by our choice of
+the family $\{Z_i\}$ in the discussion preceding the lemma). Thus
+the coefficient of $Z$ in the second expression of the lemma is
+$$
+\sum\nolimits_i \text{ord}_{A/\mathfrak q_i}(\partial_{B_i}(f, g))
+$$
+which is zero by the key Lemma \ref{lemma-milnor-gersten-low-degree}.
+\end{proof}
+
+\begin{remark}
+\label{remark-higher-chow-pointwise}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $k \in \mathbf{Z}$.
+We claim that there is a complex
+$$
+\bigoplus\nolimits_{\delta(x) = k + 2}' K_2^M(\kappa(x))
+\xrightarrow{\partial}
+\bigoplus\nolimits_{\delta(x) = k + 1}' K_1^M(\kappa(x))
+\xrightarrow{\partial}
+\bigoplus\nolimits_{\delta(x) = k}' K_0^M(\kappa(x))
+$$
+Here we use notation and conventions introduced in
+Remark \ref{remark-chow-group-pointwise} and in addition
+\begin{enumerate}
+\item $K_2^M(\kappa(x))$ is the degree $2$ part of
+the Milnor K-theory of the residue field $\kappa(x)$ of the point
+$x \in X$ (see Remark \ref{remark-gersten-complex-milnor}) which
+is the quotient of $\kappa(x)^* \otimes_\mathbf{Z} \kappa(x)^*$
+by the subgroup generated by elements of the form
+$\lambda \otimes (1 - \lambda)$ for
+$\lambda \in \kappa(x) \setminus \{0, 1\}$, and
+\item the first differential $\partial$ is defined as follows:
+given an element $\xi = \sum_x \alpha_x$ in the first term
+we set
+$$
+\partial(\xi) = \sum\nolimits_{x \leadsto x',\ \delta(x') = k + 1}
+\partial_{\mathcal{O}_{W_x, x'}}(\alpha_x)
+$$
+where
+$\partial_{\mathcal{O}_{W_x, x'}} : K_2^M(\kappa(x)) \to K_1^M(\kappa(x'))$
+is the tame symbol constructed in Section \ref{section-tame-symbol}.
+\end{enumerate}
+We claim that we get a complex, i.e., that $\partial \circ \partial = 0$.
+To see this it suffices to take an element $\xi$ as above and a point
+$x'' \in X$ with $\delta(x'') = k$ and check that the coefficient of
+$x''$ in the element $\partial(\partial(\xi))$ is zero.
+Because $\xi = \sum \alpha_x$ is a locally finite sum, we
+may in fact assume by additivity that $\xi = \alpha_x$ for
+some $x \in X$ with $\delta(x) = k + 2$ and $\alpha_x \in K_2^M(\kappa(x))$.
+By linearity again we may assume that $\alpha_x = f \otimes g$ for
+some $f, g \in \kappa(x)^*$. Denote $W \subset X$ the integral closed
+subscheme with generic point $x$. If $x'' \not \in W$, then it is
+immediately clear that the coefficient of $x''$ in $\partial(\partial(\xi))$
+is zero. If $x'' \in W$, then we see that the coefficient of $x''$
+in $\partial(\partial(\xi))$ is equal to
+$$
+\sum\nolimits_{x \leadsto x' \leadsto x'',\ \delta(x') = k + 1}
+\text{ord}_{\mathcal{O}_{\overline{\{x'\}}, x''}}(
+\partial_{\mathcal{O}_{W, x'}}(f, g))
+$$
+The key algebraic Lemma \ref{lemma-milnor-gersten-low-degree}
+says exactly that this is zero.
+\end{remark}
+
+\begin{remark}
+\label{remark-higher-chow}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $k \in \mathbf{Z}$.
+The complex in Remark \ref{remark-higher-chow-pointwise} and the
+presentation of $\CH_k(X)$ in Remark \ref{remark-chow-group-pointwise}
+suggests that we can define a first higher Chow group
+$$
+\CH^M_k(X, 1) =
+H_1(\text{the complex of Remark \ref{remark-higher-chow-pointwise}})
+$$
+We use the superscript ${}^M$ to distinguish our notation from the
+higher Chow groups defined in the literature, e.g., in the papers
+by Spencer Bloch (\cite{Bloch} and \cite{Bloch-moving}).
+Let $U \subset X$ be open with complement $Y \subset X$ (viewed as reduced
+closed subscheme). Then we find a split short exact sequence
+$$
+0 \to
+\bigoplus\nolimits_{y \in Y, \delta(y) = k + i}' K_i^M(\kappa(y)) \to
+\bigoplus\nolimits_{x \in X, \delta(x) = k + i}' K_i^M(\kappa(x)) \to
+\bigoplus\nolimits_{u \in U, \delta(u) = k + i}' K_i^M(\kappa(u)) \to 0
+$$
+for $i = 2, 1, 0$ compatible with the boundary maps in the complexes
+of Remark \ref{remark-higher-chow-pointwise}. Applying the snake lemma
+(see Homology, Lemma \ref{homology-lemma-long-exact-sequence-chain})
+we obtain a six term exact sequence
+$$
+\CH^M_k(Y, 1) \to \CH^M_k(X, 1) \to \CH^M_k(U, 1) \to
+\CH_k(Y) \to \CH_k(X) \to \CH_k(U) \to 0
+$$
+extending the canonical exact sequence of Lemma \ref{lemma-restrict-to-open}.
+With some work, one may also define flat pullback and proper pushforward
+for the first higher Chow group $\CH^M_k(X, 1)$. We will return to this
+later (insert future reference here).
+\end{remark}
+
+
+
+
+
+
+\section{Intersecting with an invertible sheaf and rational equivalence}
+\label{section-commutativity}
+
+\noindent
+Applying the key lemma we obtain the fundamental properties of intersecting
+with invertible sheaves. In particular, we will see that
+$c_1(\mathcal{L}) \cap -$ factors through rational equivalence and
+that these operations for different invertible sheaves commute.
+
+\begin{lemma}
+\label{lemma-commutativity-on-integral}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Assume $X$ integral and $\dim_\delta(X) = n$.
+Let $\mathcal{L}$, $\mathcal{N}$ be invertible on $X$.
+Choose a nonzero meromorphic section $s$ of $\mathcal{L}$
+and a nonzero meromorphic section $t$ of $\mathcal{N}$.
+Set $\alpha = \text{div}_\mathcal{L}(s)$ and
+$\beta = \text{div}_\mathcal{N}(t)$.
+Then
+$$
+c_1(\mathcal{N}) \cap \alpha
+=
+c_1(\mathcal{L}) \cap \beta
+$$
+in $\CH_{n - 2}(X)$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the key Lemma \ref{lemma-key-formula}
+and the discussion preceding it.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-factors}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be invertible on $X$.
+The operation $\alpha \mapsto c_1(\mathcal{L}) \cap \alpha$
+factors through rational equivalence to give an operation
+$$
+c_1(\mathcal{L}) \cap - : \CH_{k + 1}(X) \to \CH_k(X)
+$$
+\end{lemma}
+
+\begin{proof}
+Let $\alpha \in Z_{k + 1}(X)$ be a cycle with $\alpha \sim_{rat} 0$.
+We have to show that $c_1(\mathcal{L}) \cap \alpha$
+as defined in Definition \ref{definition-cap-c1} is zero.
+By Definition \ref{definition-rational-equivalence} there
+exists a locally finite family $\{W_j\}$ of integral closed
+subschemes with $\dim_\delta(W_j) = k + 2$ and rational functions
+$f_j \in R(W_j)^*$ such that
+$$
+\alpha = \sum (i_j)_*\text{div}_{W_j}(f_j)
+$$
+Note that $p : \coprod W_j \to X$ is a proper morphism,
+and hence $\alpha = p_*\alpha'$ where $\alpha' \in Z_{k + 1}(\coprod W_j)$
+is the sum of the principal divisors $\text{div}_{W_j}(f_j)$.
+By Lemma \ref{lemma-pushforward-cap-c1} we have
+$c_1(\mathcal{L}) \cap \alpha = p_*(c_1(p^*\mathcal{L}) \cap \alpha')$.
+Hence it suffices to show that each
+$c_1(\mathcal{L}|_{W_j}) \cap \text{div}_{W_j}(f_j)$ is zero.
+In other words we may assume that $X$ is integral and
+$\alpha = \text{div}_X(f)$ for some $f \in R(X)^*$.
+
+\medskip\noindent
+Assume $X$ is integral and $\alpha = \text{div}_X(f)$ for some $f \in R(X)^*$.
+We can think of $f$ as a regular meromorphic section of the invertible
+sheaf $\mathcal{N} = \mathcal{O}_X$. Choose a nonzero meromorphic section
+$s$ of $\mathcal{L}$ and denote $\beta = \text{div}_\mathcal{L}(s)$.
+By Lemma \ref{lemma-commutativity-on-integral}
+we conclude that
+$$
+c_1(\mathcal{L}) \cap \alpha = c_1(\mathcal{O}_X) \cap \beta.
+$$
+However, by Lemma \ref{lemma-c1-cap-additive} we see that the right hand side
+is zero in $\CH_k(X)$ as desired.
+\end{proof}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be invertible on $X$.
+We will denote
+$$
+c_1(\mathcal{L}) \cap - : \CH_{k + 1}(X) \to \CH_k(X)
+$$
+the operation on Chow groups induced by $c_1(\mathcal{L}) \cap -$ on
+cycles. This makes sense by
+Lemma \ref{lemma-factors}. We will denote $c_1(\mathcal{L})^s \cap -$
+the $s$-fold iterate of this operation for all $s \geq 0$.
+
+\begin{lemma}
+\label{lemma-cap-commutative}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$, $\mathcal{N}$ be invertible on $X$.
+For any $\alpha \in \CH_{k + 2}(X)$ we have
+$$
+c_1(\mathcal{L}) \cap c_1(\mathcal{N}) \cap \alpha
+=
+c_1(\mathcal{N}) \cap c_1(\mathcal{L}) \cap \alpha
+$$
+as elements of $\CH_k(X)$.
+\end{lemma}
+
+\begin{proof}
+Write $\alpha = \sum m_j[Z_j]$ for some locally finite
+collection of integral closed subschemes $Z_j \subset X$
+with $\dim_\delta(Z_j) = k + 2$.
+Consider the proper morphism $p : \coprod Z_j \to X$.
+Set $\alpha' = \sum m_j[Z_j]$ as a $(k + 2)$-cycle on
+$\coprod Z_j$. By several applications of
+Lemma \ref{lemma-pushforward-cap-c1} we see that
+$c_1(\mathcal{L}) \cap c_1(\mathcal{N}) \cap \alpha
+= p_*(c_1(p^*\mathcal{L}) \cap c_1(p^*\mathcal{N}) \cap \alpha')$
+and
+$c_1(\mathcal{N}) \cap c_1(\mathcal{L}) \cap \alpha
+= p_*(c_1(p^*\mathcal{N}) \cap c_1(p^*\mathcal{L}) \cap \alpha')$.
+Hence it suffices to prove the formula in case $X$ is integral
+and $\alpha = [X]$. In this case the result follows
+from Lemma \ref{lemma-commutativity-on-integral} and the definitions.
+\end{proof}
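+
+\noindent
+On $X = \mathbf{P}^2_k$ over a field $k$, for example, both sides of the
+equality of Lemma \ref{lemma-cap-commutative} with
+$\mathcal{L} = \mathcal{O}(a)$, $\mathcal{N} = \mathcal{O}(b)$, and
+$\alpha = [X]$ compute the class
+$$
+c_1(\mathcal{O}(a)) \cap c_1(\mathcal{O}(b)) \cap [X] = ab\, [\text{pt}]
+$$
+in $\CH_0(X)$, which is visibly symmetric in $a$ and $b$.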
+
+
+
+
+
+
+\section{Gysin homomorphisms}
+\label{section-intersecting-effective-Cartier}
+
+\noindent
+In this section we define the Gysin map for the zero locus $D$ of a
+section of an invertible sheaf. An interesting case occurs when $D$
+is an effective Cartier divisor, but the generalization to arbitrary $D$
+allows us a flexibility to formulate various compatibilities, see
+Remark \ref{remark-pullback-pairs} and
+Lemmas \ref{lemma-closed-in-X-gysin}, \ref{lemma-gysin-flat-pullback}, and
+\ref{lemma-gysin-commutes-gysin}.
+These results can be generalized to locally principal closed subschemes
+endowed with a virtual normal bundle
+(Remark \ref{remark-generalize-to-virtual}) or to
+pseudo-divisors (Remark \ref{remark-generalize-to-pseudo-divisor}).
+
+\medskip\noindent
+Recall that effective Cartier divisors correspond $1$-to-$1$ to
+isomorphism classes of pairs $(\mathcal{L}, s)$ where $\mathcal{L}$
+is an invertible sheaf and $s$ is a regular global section, see
+Divisors, Lemma \ref{divisors-lemma-characterize-OD}.
+If $D$ corresponds to $(\mathcal{L}, s)$, then
+$\mathcal{L} = \mathcal{O}_X(D)$. Please keep this in mind while
+reading this section.
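+
+\noindent
+For example, a hyperplane $D = Z(T_0) \subset \mathbf{P}^n_k$ corresponds
+to the pair $(\mathcal{O}_{\mathbf{P}^n_k}(1), T_0)$, and indeed
+$\mathcal{O}_{\mathbf{P}^n_k}(D) \cong \mathcal{O}_{\mathbf{P}^n_k}(1)$.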
+
+\begin{definition}
+\label{definition-gysin-homomorphism}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $(\mathcal{L}, s)$ be a pair consisting of an invertible
+sheaf and a global section $s \in \Gamma(X, \mathcal{L})$.
+Let $D = Z(s)$ be the zero scheme of $s$, and
+denote $i : D \to X$ the closed immersion.
+We define, for every integer $k$, a {\it Gysin homomorphism}
+$$
+i^* : Z_{k + 1}(X) \to \CH_k(D).
+$$
+by the following rules:
+\begin{enumerate}
+\item Given an integral closed subscheme $W \subset X$ with
+$\dim_\delta(W) = k + 1$ we define
+\begin{enumerate}
+\item if $W \not \subset D$, then $i^*[W] = [D \cap W]_k$ as a
+$k$-cycle on $D$, and
+\item if $W \subset D$, then
+$i^*[W] = i'_*(c_1(\mathcal{L}|_W) \cap [W])$,
+where $i' : W \to D$ is the induced closed immersion.
+\end{enumerate}
+\item For a general $(k + 1)$-cycle $\alpha = \sum n_j[W_j]$
+we set
+$$
+i^*\alpha = \sum n_j i^*[W_j].
+$$
+\item If $D$ is an effective Cartier divisor, then we denote
+$D \cdot \alpha = i_*i^*\alpha$ the pushforward of the class $i^*\alpha$
+to a class on $X$.
+\end{enumerate}
+\end{definition}
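+
+\medskip\noindent
+As a simple illustration (not needed for the development), take
+$X = \mathbf{P}^2_k$ over a field $k$, $\mathcal{L} = \mathcal{O}_X(1)$,
+and $s$ a nonzero global section with zero scheme a line $D \subset X$.
+If $W \subset X$ is a line different from $D$, then case (1)(a) applies and
+$$
+i^*[W] = [D \cap W]_0
+$$
+is the class of the intersection point. If $W = D$, then case (1)(b)
+applies and
+$$
+i^*[D] = c_1(\mathcal{O}_X(1)|_D) \cap [D]
+$$
+which is again the class of a point in $\CH_0(D)$, as one expects from
+the self-intersection of a line in the plane.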
+
+\noindent
+In fact, as we will see later, this Gysin homomorphism $i^*$ can be viewed
+as an example of a non-flat pullback. Thus we will sometimes informally
+call the class $i^*\alpha$ the {\it pullback} of the class $\alpha$.
+
+\begin{remark}
+\label{remark-generalize-to-virtual}
+Let $X$ be a scheme locally of finite type over $S$ as in
+Situation \ref{situation-setup}. Let $(D, \mathcal{N}, \sigma)$
+be a triple consisting of a locally principal (Divisors, Definition
+\ref{divisors-definition-effective-Cartier-divisor}) closed subscheme
+$i : D \to X$, an invertible $\mathcal{O}_D$-module $\mathcal{N}$, and
+a surjection $\sigma : \mathcal{N}^{\otimes -1} \to i^*\mathcal{I}_D$
+of $\mathcal{O}_D$-modules\footnote{This condition assures us that if
+$D$ is an effective Cartier divisor, then $\mathcal{N} = \mathcal{O}_X(D)|_D$.}.
+Here $\mathcal{N}$ should be thought of as
+a {\it virtual normal bundle of $D$ in $X$}. The construction of
+$i^* : Z_{k + 1}(X) \to \CH_k(D)$ in
+Definition \ref{definition-gysin-homomorphism}
+generalizes to such triples, see
+Section \ref{section-gysin-higher-codimension}.
+\end{remark}
+
+\begin{remark}
+\label{remark-generalize-to-pseudo-divisor}
+Let $X$ be a scheme locally of finite type over $S$ as in
+Situation \ref{situation-setup}. In \cite{F} a {\it pseudo-divisor} on $X$
+is defined as a triple $D = (\mathcal{L}, Z, s)$ where $\mathcal{L}$
+is an invertible $\mathcal{O}_X$-module, $Z \subset X$ is a closed subset,
+and $s \in \Gamma(X \setminus Z, \mathcal{L})$ is a nowhere vanishing
+section. Similarly to the above, one can define for every $\alpha$
+in $\CH_{k + 1}(X)$ a product $D \cdot \alpha$ in $\CH_k(Z \cap |\alpha|)$
+where $|\alpha|$ is the support of $\alpha$.
+\end{remark}
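+
+\medskip\noindent
+For example, an effective Cartier divisor $D \subset X$ corresponding
+to a pair $(\mathcal{L}, s)$ as above determines a pseudo-divisor
+$(\mathcal{L}, \text{Supp}(D), s|_{X \setminus \text{Supp}(D)})$,
+since $s$ is nowhere vanishing outside its zero scheme.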
+
+\begin{lemma}
+\label{lemma-support-cap-effective-Cartier}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be locally
+of finite type over $S$. Let $(\mathcal{L}, s, i : D \to X)$ be as in
+Definition \ref{definition-gysin-homomorphism}. Let $\alpha$ be a
+$(k + 1)$-cycle on $X$. Then $i_*i^*\alpha = c_1(\mathcal{L}) \cap \alpha$
+in $\CH_k(X)$. In particular, if $D$ is an effective Cartier divisor, then
+$D \cdot \alpha = c_1(\mathcal{O}_X(D)) \cap \alpha$.
+\end{lemma}
+
+\begin{proof}
+Write $\alpha = \sum n_j[W_j]$ where $i_j : W_j \to X$ are integral closed
+subschemes with $\dim_\delta(W_j) = k + 1$.
+Since $D$ is the zero scheme of $s$ we see that $D \cap W_j$ is the zero scheme
+of the restriction $s|_{W_j}$. Hence for each $j$ such that
+$W_j \not \subset D$ we have
+$c_1(\mathcal{L}) \cap [W_j] = [D \cap W_j]_k$
+by Lemma \ref{lemma-geometric-cap}. So we have
+$$
+c_1(\mathcal{L}) \cap \alpha
+=
+\sum\nolimits_{W_j \not \subset D} n_j[D \cap W_j]_k
++
+\sum\nolimits_{W_j \subset D}
+n_j i_{j, *}(c_1(\mathcal{L}|_{W_j}) \cap [W_j])
+$$
+in $\CH_k(X)$ by Definition \ref{definition-cap-c1}.
+The right hand side matches (termwise) the pushforward of the class
+$i^*\alpha$ on $D$ from Definition \ref{definition-gysin-homomorphism}.
+Hence we win.
+\end{proof}
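+
+\medskip\noindent
+For instance, if $X$ is integral of $\delta$-dimension $n$ and $D$ is an
+effective Cartier divisor, then applying the lemma to $\alpha = [X]$ gives
+$$
+D \cdot [X] = c_1(\mathcal{O}_X(D)) \cap [X] = [D]_{n - 1}
+$$
+in $\CH_{n - 1}(X)$, the last equality by Lemma \ref{lemma-geometric-cap}
+as $X \not \subset D$.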
+
+\begin{lemma}
+\label{lemma-easy-gysin}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $(\mathcal{L}, s, i : D \to X)$ be as in
+Definition \ref{definition-gysin-homomorphism}.
+\begin{enumerate}
+\item Let $Z \subset X$ be a closed subscheme such
+that $\dim_\delta(Z) \leq k + 1$ and such that
+$D \cap Z$ is an effective Cartier divisor on $Z$. Then
+$i^*[Z]_{k + 1} = [D \cap Z]_k$.
+\item Let $\mathcal{F}$ be a coherent sheaf on $X$
+such that $\dim_\delta(\text{Supp}(\mathcal{F})) \leq k + 1$ and
+$s : \mathcal{F} \to \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}$
+is injective. Then
+$$
+i^*[\mathcal{F}]_{k + 1} = [i^*\mathcal{F}]_k
+$$
+in $\CH_k(D)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $Z \subset X$ as in (1). Then set $\mathcal{F} = \mathcal{O}_Z$.
+The assumption that $D \cap Z$ is an effective Cartier divisor is
+equivalent to the assumption that
+$s : \mathcal{F} \to \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}$
+is injective. Moreover $[Z]_{k + 1} = [\mathcal{F}]_{k + 1}$
+and $[D \cap Z]_k = [\mathcal{O}_{D \cap Z}]_k = [i^*\mathcal{F}]_k$.
+See Lemma \ref{lemma-cycle-closed-coherent}.
+Hence part (1) follows from part (2).
+
+\medskip\noindent
+Write $[\mathcal{F}]_{k + 1} = \sum m_j[W_j]$ with $m_j > 0$
+and pairwise distinct integral closed subschemes $W_j \subset X$
+of $\delta$-dimension $k + 1$. The assumption that
+$s : \mathcal{F} \to \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}$
+is injective implies that $W_j \not \subset D$ for all $j$.
+By definition we see that
+$$
+i^*[\mathcal{F}]_{k + 1} = \sum m_j [D \cap W_j]_k.
+$$
+We claim that
+$$
+\sum m_j [D \cap W_j]_k = [i^*\mathcal{F}]_k
+$$
+as cycles.
+Let $Z \subset D$ be an integral closed subscheme of $\delta$-dimension
+$k$. Let $\xi \in Z$ be its generic point. Let $A = \mathcal{O}_{X, \xi}$.
+Let $M = \mathcal{F}_\xi$. Let $f \in A$ be an element generating the
+ideal of $D$, i.e., such that $\mathcal{O}_{D, \xi} = A/fA$.
+By assumption $\dim(\text{Supp}(M)) = 1$,
+the map $f : M \to M$ is injective, and
+$\text{length}_A(M/fM) < \infty$. Moreover, $\text{length}_A(M/fM)$
+is the coefficient of $[Z]$ in $[i^*\mathcal{F}]_k$. On the
+other hand, let $\mathfrak q_1, \ldots, \mathfrak q_t$ be the minimal
+primes in the support of $M$. Then
+$$
+\sum
+\text{length}_{A_{\mathfrak q_i}}(M_{\mathfrak q_i})
+\text{ord}_{A/\mathfrak q_i}(f)
+$$
+is the coefficient of $[Z]$ in $\sum m_j [D \cap W_j]_k$.
+Hence we see the equality by
+Lemma \ref{lemma-additivity-divisors-restricted}.
+\end{proof}
+
+\begin{remark}
+\label{remark-gysin-on-cycles}
+Let $X \to S$, $\mathcal{L}$, $s$, $i : D \to X$ be as in
+Definition \ref{definition-gysin-homomorphism} and assume
+that $\mathcal{L}|_D \cong \mathcal{O}_D$. In this case we
+can define a canonical map $i^* : Z_{k + 1}(X) \to Z_k(D)$
+on cycles, by requiring that $i^*[W] = 0$ whenever $W \subset D$
+is an integral closed subscheme.
+The possibility to do this will be useful later on.
+\end{remark}
+
+\begin{remark}
+\label{remark-pullback-pairs}
+Let $f : X' \to X$ be a morphism of schemes locally of finite type over $S$
+as in Situation \ref{situation-setup}. Let $(\mathcal{L}, s, i : D \to X)$
+be a triple as in Definition \ref{definition-gysin-homomorphism}.
+Then we can set $\mathcal{L}' = f^*\mathcal{L}$, $s' = f^*s$, and
+$D' = X' \times_X D = Z(s')$. This gives a commutative diagram
+$$
+\xymatrix{
+D' \ar[d]_g \ar[r]_{i'} & X' \ar[d]^f \\
+D \ar[r]^i & X
+}
+$$
+and we can ask for various compatibilities between $i^*$ and $(i')^*$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-closed-in-X-gysin}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X' \to X$ be a proper morphism of schemes
+locally of finite type over $S$.
+Let $(\mathcal{L}, s, i : D \to X)$ be as in
+Definition \ref{definition-gysin-homomorphism}.
+Form the diagram
+$$
+\xymatrix{
+D' \ar[d]_g \ar[r]_{i'} & X' \ar[d]^f \\
+D \ar[r]^i & X
+}
+$$
+as in Remark \ref{remark-pullback-pairs}.
+For any $(k + 1)$-cycle $\alpha'$ on $X'$ we have
+$i^*f_*\alpha' = g_*(i')^*\alpha'$ in $\CH_k(D)$
+(this makes sense as $f_*$ is defined on the level of cycles).
+\end{lemma}
+
+\begin{proof}
+Suppose $\alpha' = [W']$ for some integral closed subscheme
+$W' \subset X'$. Let $W = f(W') \subset X$. In case $W' \not \subset D'$,
+then $W \not \subset D$ and we see that
+$$
+[W' \cap D']_k = \text{div}_{\mathcal{L}'|_{W'}}({s'|_{W'}})
+\quad\text{and}\quad
+[W \cap D]_k = \text{div}_{\mathcal{L}|_W}(s|_W)
+$$
+and hence $f_*$ of the first cycle equals the second cycle by
+Lemma \ref{lemma-equal-c1-as-cycles}. Hence the
+equality holds as cycles. In case $W' \subset D'$, then
+$W \subset D$ and $f_*(c_1(\mathcal{L}|_{W'}) \cap [W'])$
+is equal to $c_1(\mathcal{L}|_W) \cap [W]$ in $\CH_k(W)$ by the second
+assertion of Lemma \ref{lemma-equal-c1-as-cycles}.
+By Remark \ref{remark-infinite-sums-rational-equivalences}
+the result follows for general $\alpha'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-flat-pullback}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $f : X' \to X$
+be a flat morphism of relative dimension $r$ of schemes locally of finite type
+over $S$. Let $(\mathcal{L}, s, i : D \to X)$ be as in
+Definition \ref{definition-gysin-homomorphism}. Form the diagram
+$$
+\xymatrix{
+D' \ar[d]_g \ar[r]_{i'} & X' \ar[d]^f \\
+D \ar[r]^i & X
+}
+$$
+as in Remark \ref{remark-pullback-pairs}.
+For any $(k + 1)$-cycle $\alpha$ on $X$ we have
+$(i')^*f^*\alpha = g^*i^*\alpha$ in $\CH_{k + r}(D')$
+(this makes sense as $f^*$ is defined on the level of cycles).
+\end{lemma}
+
+\begin{proof}
+Suppose $\alpha = [W]$ for some integral closed subscheme
+$W \subset X$. Let $W' = f^{-1}(W) \subset X'$. In case $W \not \subset D$,
+then $W' \not \subset D'$ and we see that
+$$
+W' \cap D' = g^{-1}(W \cap D)
+$$
+as closed subschemes of $D'$. Hence the
+equality holds as cycles, see Lemma \ref{lemma-pullback-coherent}.
+In case $W \subset D$, then $W' \subset D'$ and $W' = g^{-1}(W)$
+with $[W']_{k + 1 + r} = g^*[W]$ and equality holds in
+$\CH_{k + r}(D')$ by Lemma \ref{lemma-flat-pullback-cap-c1}.
+By Remark \ref{remark-infinite-sums-rational-equivalences}
+the result follows for general $\alpha$.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Gysin homomorphisms and rational equivalence}
+\label{section-gysin}
+
+\noindent
+In this section we use the key formula to show that the Gysin homomorphism
+factors through rational equivalence. We also prove an important
+commutativity property.
+
+\begin{lemma}
+\label{lemma-gysin-factors-general}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $X$ be integral and $n = \dim_\delta(X)$.
+Let $i : D \to X$ be an effective Cartier divisor.
+Let $\mathcal{N}$ be an invertible $\mathcal{O}_X$-module
+and let $t$ be a nonzero meromorphic section of $\mathcal{N}$.
+Then $i^*\text{div}_\mathcal{N}(t) = c_1(\mathcal{N}|_D) \cap [D]_{n - 1}$
+in $\CH_{n - 2}(D)$.
+\end{lemma}
+
+\begin{proof}
+Write $\text{div}_\mathcal{N}(t) = \sum \text{ord}_{Z_i, \mathcal{N}}(t)[Z_i]$
+for some integral closed subschemes $Z_i \subset X$ of $\delta$-dimension
+$n - 1$. We may assume that the family $\{Z_i\}$ is locally
+finite, that $t \in \Gamma(U, \mathcal{N}|_U)$ is a generator
+where $U = X \setminus \bigcup Z_i$, and that every irreducible component
+of $D$ is one of the $Z_i$, see
+Divisors, Lemmas \ref{divisors-lemma-components-locally-finite},
+\ref{divisors-lemma-divisor-locally-finite}, and
+\ref{divisors-lemma-divisor-meromorphic-locally-finite}.
+
+\medskip\noindent
+Set $\mathcal{L} = \mathcal{O}_X(D)$. Denote
+$s \in \Gamma(X, \mathcal{O}_X(D)) = \Gamma(X, \mathcal{L})$
+the canonical section. We will apply the discussion of
+Section \ref{section-key} to our current situation.
+For each $i$ let $\xi_i \in Z_i$ be its generic point. Let
+$B_i = \mathcal{O}_{X, \xi_i}$. For each $i$ we pick generators
+$s_i \in \mathcal{L}_{\xi_i}$ and $t_i \in \mathcal{N}_{\xi_i}$
+over $B_i$ but we insist that we pick $s_i = s$ if $Z_i \not \subset D$.
+Write $s = f_i s_i$ and $t = g_i t_i$ with $f_i, g_i \in B_i$.
+Then $\text{ord}_{Z_i, \mathcal{N}}(t) = \text{ord}_{B_i}(g_i)$.
+On the other hand, we have $f_i \in B_i$ and
+$$
+[D]_{n - 1} = \sum \text{ord}_{B_i}(f_i)[Z_i]
+$$
+because of our choices of $s_i$. We claim that
+$$
+i^*\text{div}_\mathcal{N}(t) =
+\sum \text{ord}_{B_i}(g_i) \text{div}_{\mathcal{L}|_{Z_i}}(s_i|_{Z_i})
+$$
+as cycles. More precisely, the right hand side is a cycle
+representing the left hand side. Namely, this is clear by our
+formula for $\text{div}_\mathcal{N}(t)$ and the fact that
+$\text{div}_{\mathcal{L}|_{Z_i}}(s_i|_{Z_i}) = [Z(s_i|_{Z_i})]_{n - 2} =
+[Z_i \cap D]_{n - 2}$ when $Z_i \not \subset D$ because in
+that case $s_i|_{Z_i} = s|_{Z_i}$ is a regular section, see
+Lemma \ref{lemma-compute-c1}. Similarly,
+$$
+c_1(\mathcal{N}) \cap [D]_{n - 1} =
+\sum \text{ord}_{B_i}(f_i) \text{div}_{\mathcal{N}|_{Z_i}}(t_i|_{Z_i})
+$$
+The key formula (Lemma \ref{lemma-key-formula}) gives the equality
+$$
+\sum \left(
+\text{ord}_{B_i}(f_i) \text{div}_{\mathcal{N}|_{Z_i}}(t_i|_{Z_i}) -
+\text{ord}_{B_i}(g_i) \text{div}_{\mathcal{L}|_{Z_i}}(s_i|_{Z_i}) \right) =
+\sum \text{div}_{Z_i}(\partial_{B_i}(f_i, g_i))
+$$
+of cycles. If $Z_i \not \subset D$, then $f_i = 1$ and hence
+$\text{div}_{Z_i}(\partial_{B_i}(f_i, g_i)) = 0$. Thus we get a rational
+equivalence between our specific cycles representing
+$i^*\text{div}_\mathcal{N}(t)$ and $c_1(\mathcal{N}) \cap [D]_{n - 1}$
+on $D$. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-factors}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $(\mathcal{L}, s, i : D \to X)$ be as in
+Definition \ref{definition-gysin-homomorphism}.
+The Gysin homomorphism factors through rational equivalence to
+give a map $i^* : \CH_{k + 1}(X) \to \CH_k(D)$.
+\end{lemma}
+
+\begin{proof}
+Let $\alpha \in Z_{k + 1}(X)$ and assume that $\alpha \sim_{rat} 0$.
+This means there exists a locally finite collection of integral
+closed subschemes $W_j \subset X$ of $\delta$-dimension $k + 2$
+and $f_j \in R(W_j)^*$ such that
+$\alpha = \sum i_{j, *}\text{div}_{W_j}(f_j)$.
+Set $X' = \coprod W_j$ and consider the diagram
+$$
+\xymatrix{
+D' \ar[d]_q \ar[r]_{i'} & X' \ar[d]^p \\
+D \ar[r]^i & X
+}
+$$
+of Remark \ref{remark-pullback-pairs}. Since $X' \to X$ is proper
+we see that $i^*p_* = q_*(i')^*$ by Lemma \ref{lemma-closed-in-X-gysin}.
+As we know that $q_*$ factors through rational equivalence
+(Lemma \ref{lemma-proper-pushforward-rational-equivalence}), it suffices
+to prove the result for $\alpha' = \sum \text{div}_{W_j}(f_j)$
+on $X'$. Clearly this reduces us to the case where $X$ is integral
+and $\alpha = \text{div}(f)$ for some $f \in R(X)^*$.
+
+\medskip\noindent
+Assume $X$ is integral and $\alpha = \text{div}(f)$ for some $f \in R(X)^*$.
+If $X = D$, then we see that $i^*\alpha$ is equal
+to $c_1(\mathcal{L}) \cap \alpha$.
+This is rationally equivalent to zero by Lemma \ref{lemma-factors}.
+If $D \not = X$, then we see that $i^*\text{div}_X(f)$ is equal to
+$c_1(\mathcal{O}_D) \cap [D]_{n - 1}$ in $\CH_{n - 2}(D)$, where
+$n = \dim_\delta(X)$, by
+Lemma \ref{lemma-gysin-factors-general}. Of course
+capping with $c_1(\mathcal{O}_D)$ is the zero map
+(Lemma \ref{lemma-c1-cap-additive}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-back}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be locally
+of finite type over $S$. Let $(\mathcal{L}, s, i : D \to X)$ be as in
+Definition \ref{definition-gysin-homomorphism}. Then
+$i^*i_* : \CH_k(D) \to \CH_{k - 1}(D)$ sends $\alpha$ to
+$c_1(\mathcal{L}|_D) \cap \alpha$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definition of $i_*$ on cycles
+and the definition of $i^*$ given in
+Definition \ref{definition-gysin-homomorphism}.
+\end{proof}
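+
+\medskip\noindent
+For example, if $X = \mathbf{P}^n_k$ and $D$ is a hyperplane with its
+canonical section of $\mathcal{L} = \mathcal{O}_X(1)$, then for
+$\alpha = [D] \in \CH_{n - 1}(D)$ the lemma gives
+$$
+i^*i_*[D] = c_1(\mathcal{O}(1)|_D) \cap [D]
+$$
+which is the class of a hyperplane in $D \cong \mathbf{P}^{n - 1}_k$,
+the familiar self-intersection formula for a hyperplane.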
+
+\begin{lemma}
+\label{lemma-gysin-commutes-cap-c1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be
+locally of finite type over $S$. Let $(\mathcal{L}, s, i : D \to X)$
+be a triple as in Definition \ref{definition-gysin-homomorphism}.
+Let $\mathcal{N}$ be an invertible $\mathcal{O}_X$-module.
+Then $i^*(c_1(\mathcal{N}) \cap \alpha) = c_1(i^*\mathcal{N}) \cap i^*\alpha$
+in $\CH_{k - 2}(D)$ for all $\alpha \in \CH_k(X)$.
+\end{lemma}
+
+\begin{proof}
+With exactly the same proof as in Lemma \ref{lemma-gysin-factors}
+this follows from Lemmas
+\ref{lemma-pushforward-cap-c1},
+\ref{lemma-cap-commutative}, and
+\ref{lemma-gysin-factors-general}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-commutes-gysin}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be locally
+of finite type over $S$. Let $(\mathcal{L}, s, i : D \to X)$ and
+$(\mathcal{L}', s', i' : D' \to X)$ be two triples as in
+Definition \ref{definition-gysin-homomorphism}. Then the diagram
+$$
+\xymatrix{
+\CH_k(X) \ar[r]_{i^*} \ar[d]_{(i')^*} & \CH_{k - 1}(D) \ar[d]^{j^*} \\
+\CH_{k - 1}(D') \ar[r]^{(j')^*} & \CH_{k - 2}(D \cap D')
+}
+$$
+commutes where each of the maps is a Gysin map.
+\end{lemma}
+
+\begin{proof}
+Denote $j : D \cap D' \to D$ and $j' : D \cap D' \to D'$ the closed
+immersions corresponding to the pairs $(\mathcal{L}'|_D, s'|_D)$ and
+$(\mathcal{L}|_{D'}, s|_{D'})$. We have to show that
+$j^*i^*\alpha = (j')^*(i')^*\alpha$ for all $\alpha \in \CH_k(X)$.
+Let $W \subset X$ be an integral closed subscheme of $\delta$-dimension $k$.
+Let us prove the equality in case $\alpha = [W]$. We will deduce
+it from the key formula.
+
+\medskip\noindent
+We let $\sigma$ be a nonzero meromorphic section of $\mathcal{L}|_W$
+which we require to be equal to $s|_W$ if $W \not \subset D$.
+We let $\sigma'$ be a nonzero meromorphic section of $\mathcal{L}'|_W$
+which we require to be equal to $s'|_W$ if $W \not \subset D'$.
+Write
+$$
+\text{div}_{\mathcal{L}|_W}(\sigma) =
+\sum \text{ord}_{Z_i, \mathcal{L}|_W}(\sigma)[Z_i] = \sum n_i[Z_i]
+$$
+and similarly
+$$
+\text{div}_{\mathcal{L}'|_W}(\sigma') =
+\sum \text{ord}_{Z_i, \mathcal{L}'|_W}(\sigma')[Z_i] = \sum n'_i[Z_i]
+$$
+as in the discussion in Section \ref{section-key}.
+Then we see that $Z_i \subset D$ if $n_i \not = 0$ and
+$Z_i \subset D'$ if $n'_i \not = 0$. For each $i$, let $\xi_i \in Z_i$
+be the generic point. As in Section \ref{section-key} we choose
+for each $i$ an element
+$\sigma_i \in \mathcal{L}_{\xi_i}$, resp.\ $\sigma'_i \in \mathcal{L}'_{\xi_i}$
+which generates over $B_i = \mathcal{O}_{W, \xi_i}$
+and which is equal to the image of
+$s$, resp.\ $s'$ if $Z_i \not \subset D$, resp.\ $Z_i \not \subset D'$.
+Write $\sigma = f_i \sigma_i$ and $\sigma' = f'_i\sigma'_i$ so that
+$n_i = \text{ord}_{B_i}(f_i)$ and
+$n'_i = \text{ord}_{B_i}(f'_i)$.
+From our definitions it follows that
+$$
+j^*i^*[W] =
+\sum \text{ord}_{B_i}(f_i) \text{div}_{\mathcal{L}'|_{Z_i}}(\sigma'_i|_{Z_i})
+$$
+as cycles and
+$$
+(j')^*(i')^*[W] =
+\sum \text{ord}_{B_i}(f'_i) \text{div}_{\mathcal{L}|_{Z_i}}(\sigma_i|_{Z_i})
+$$
+The key formula (Lemma \ref{lemma-key-formula}) now gives the equality
+$$
+\sum \left(
+\text{ord}_{B_i}(f_i) \text{div}_{\mathcal{L}'|_{Z_i}}(\sigma'_i|_{Z_i}) -
+\text{ord}_{B_i}(f'_i) \text{div}_{\mathcal{L}|_{Z_i}}(\sigma_i|_{Z_i})
+\right) =
+\sum \text{div}_{Z_i}(\partial_{B_i}(f_i, f'_i))
+$$
+of cycles. Note that $\text{div}_{Z_i}(\partial_{B_i}(f_i, f'_i)) = 0$ if
+$Z_i \not \subset D \cap D'$ because in this case either $f_i = 1$
+or $f'_i = 1$. Thus we get a rational equivalence between our specific
+cycles representing $j^*i^*[W]$ and $(j')^*(i')^*[W]$ on $D \cap D' \cap W$.
+By Remark \ref{remark-infinite-sums-rational-equivalences}
+the result follows for general $\alpha$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Relative effective Cartier divisors}
+\label{section-relative-effective-cartier}
+
+\noindent
+Relative effective Cartier divisors are defined and studied
+in Divisors, Section \ref{divisors-section-effective-Cartier-morphisms}.
+To develop the basic results on Chern classes of vector bundles
+we only need the case where both the ambient scheme and the effective
+Cartier divisor are flat over the base.
+
+\begin{lemma}
+\label{lemma-relative-effective-cartier}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $p : X \to Y$ be a flat morphism of relative dimension $r$.
+Let $i : D \to X$ be a relative effective Cartier divisor
+(Divisors, Definition
+\ref{divisors-definition-relative-effective-Cartier-divisor}).
+Let $\mathcal{L} = \mathcal{O}_X(D)$.
+For any $\alpha \in \CH_{k + 1}(Y)$ we have
+$$
+i^*p^*\alpha = (p|_D)^*\alpha
+$$
+in $\CH_{k + r}(D)$ and
+$$
+c_1(\mathcal{L}) \cap p^*\alpha = i_* ((p|_D)^*\alpha)
+$$
+in $\CH_{k + r}(X)$.
+\end{lemma}
+
+\begin{proof}
+Let $W \subset Y$ be an integral closed subscheme of $\delta$-dimension
+$k + 1$. By Divisors, Lemma \ref{divisors-lemma-relative-Cartier}
+we see that $D \cap p^{-1}W$ is an effective
+Cartier divisor on $p^{-1}W$. By Lemma \ref{lemma-easy-gysin}
+we get the first equality in
+$$
+i^*[p^{-1}W]_{k + r + 1} =
+[D \cap p^{-1}W]_{k + r} =
+[(p|_D)^{-1}(W)]_{k + r}
+$$
+and the second because $D \cap p^{-1}(W) = (p|_D)^{-1}(W)$ as schemes.
+Since by definition $p^*[W] = [p^{-1}W]_{k + r + 1}$ we see that
+$i^*p^*[W] = (p|_D)^*[W]$ as cycles. If $\alpha = \sum m_j[W_j]$ is a
+general $(k + 1)$-cycle, then we get
+$i^*p^*\alpha = \sum m_j i^*p^*[W_j] = \sum m_j(p|_D)^*[W_j]$ as cycles.
+This proves the first equality. To deduce the second from the
+first apply Lemma \ref{lemma-support-cap-effective-Cartier}.
+\end{proof}
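+
+\medskip\noindent
+A basic example: take $X = Y \times \mathbf{P}^1$ with $p$ the projection
+(flat of relative dimension $1$) and $D = Y \times \{0\}$. Then
+$p|_D : D \to Y$ is an isomorphism and the first formula
+$i^*p^*\alpha = (p|_D)^*\alpha$ says that pulling back to $X$ and then
+restricting to the slice $D$ recovers $\alpha$ under $D \cong Y$, while
+the second formula says that capping $p^*\alpha$ with
+$c_1(\mathcal{O}_X(D))$ produces the class $\alpha$ placed on the slice
+$Y \times \{0\}$.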
+
+
+
+
+
+
+
+
+\section{Affine bundles}
+\label{section-affine-vector}
+
+\noindent
+For an affine bundle the pullback map is surjective on Chow groups.
+
+\begin{lemma}
+\label{lemma-pullback-affine-fibres-surjective}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $f : X \to Y$ be a flat morphism of relative dimension $r$.
+Assume that for every $y \in Y$, there exists an open neighbourhood
+$U \subset Y$ such that $f|_{f^{-1}(U)} : f^{-1}(U) \to U$
+is identified with the morphism $U \times \mathbf{A}^r \to U$.
+Then $f^* : \CH_k(Y) \to \CH_{k + r}(X)$ is surjective for all
+$k \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Let $\alpha \in \CH_{k + r}(X)$.
+Write $\alpha = \sum m_j[W_j]$ with $m_j \not = 0$ and
+$W_j$ pairwise distinct integral closed subschemes of
+$\delta$-dimension $k + r$. Then the family $\{W_j\}$
+is locally finite in $X$. For any quasi-compact open
+$V \subset Y$ we see that $f^{-1}(V) \cap W_j$
+is nonempty only for finitely many $j$. Hence the
+collection $Z_j = \overline{f(W_j)}$ of closures
+of images is a locally finite collection of integral
+closed subschemes of $Y$.
+
+\medskip\noindent
+Consider the fibre product diagrams
+$$
+\xymatrix{
+f^{-1}(Z_j) \ar[r] \ar[d]_{f_j} & X \ar[d]^f \\
+Z_j \ar[r] & Y
+}
+$$
+Suppose that $[W_j] \in Z_{k + r}(f^{-1}(Z_j))$
+is rationally equivalent to $f_j^*\beta_j$ for some
+$k$-cycle $\beta_j \in \CH_k(Z_j)$. Then
+$\beta = \sum m_j \beta_j$ will be a $k$-cycle on $Y$
+and $f^*\beta = \sum m_j f_j^*\beta_j$ will be rationally
+equivalent to $\alpha$ (see
+Remark \ref{remark-infinite-sums-rational-equivalences}).
+This reduces us to the case $Y$ integral, and
+$\alpha = [W]$ for some integral closed subscheme
+$W \subset X$ dominating $Y$. In particular we may
+assume that $d = \dim_\delta(Y) < \infty$.
+
+\medskip\noindent
+Hence we can use induction on $d = \dim_\delta(Y)$.
+If $d < k$, then $\CH_{k + r}(X) = 0$ and the lemma holds.
+By assumption there exists a dense open $V \subset Y$ such
+that $f^{-1}(V) \cong V \times \mathbf{A}^r$ as schemes over $V$.
+Suppose that we can show that $\alpha|_{f^{-1}(V)} = f^*\beta$
+for some $\beta \in Z_k(V)$. By Lemma \ref{lemma-exact-sequence-open}
+we see that
+$\beta = \beta'|_V$ for some $\beta' \in Z_k(Y)$.
+By the exact sequence
+$\CH_{k + r}(f^{-1}(Y \setminus V)) \to \CH_{k + r}(X) \to \CH_{k + r}(f^{-1}(V))$
+of Lemma \ref{lemma-restrict-to-open}
+we see that $\alpha - f^*\beta'$ comes from
+a cycle $\alpha' \in \CH_{k + r}(f^{-1}(Y \setminus V))$.
+Since $\dim_\delta(Y \setminus V) < d$ we win by
+induction on $d$.
+
+\medskip\noindent
+Thus we may assume that $X = Y \times \mathbf{A}^r$.
+In this case we can factor $f$ as
+$$
+X = Y \times \mathbf{A}^r \to
+Y \times \mathbf{A}^{r - 1} \to \ldots \to
+Y \times \mathbf{A}^1 \to Y.
+$$
+Hence it suffices to do the case $r = 1$. By the argument in the
+second paragraph of the proof we are reduced to the case
+$\alpha = [W]$, $Y$ integral, and $W \to Y$ dominant.
+Again we can do induction on $d = \dim_\delta(Y)$.
+If $W = Y \times \mathbf{A}^1$, then $[W] = f^*[Y]$. Lastly, if
+$W \subset Y \times \mathbf{A}^1$ is a proper inclusion,
+then $W \to Y$ induces a finite field extension $R(W)/R(Y)$.
+Let $P(T) \in R(Y)[T]$ be the monic irreducible polynomial such
+that the generic fibre of $W \to Y$ is cut out by $P$ in
+$\mathbf{A}^1_{R(Y)}$. Let $V \subset Y$ be a nonempty open such
+that $P \in \Gamma(V, \mathcal{O}_Y)[T]$, and such that
+$W \cap f^{-1}(V)$ is still cut out by $P$. Then we see that
+$\alpha|_{f^{-1}(V)} \sim_{rat} 0$ and hence $\alpha \sim_{rat} \alpha'$
+for some cycle $\alpha'$ on $(Y \setminus V) \times \mathbf{A}^1$.
+By induction on the dimension we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-linebundle}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let
+$$
+p :
+L = \underline{\Spec}(\text{Sym}^*(\mathcal{L}))
+\longrightarrow
+X
+$$
+be the associated vector bundle over $X$.
+Then $p^* : \CH_k(X) \to \CH_{k + 1}(L)$ is an isomorphism for all $k$.
+\end{lemma}
+
+\begin{proof}
+For surjectivity see Lemma \ref{lemma-pullback-affine-fibres-surjective}.
+Let $o : X \to L$ be the zero section of $L \to X$, i.e., the morphism
+corresponding to the surjection $\text{Sym}^*(\mathcal{L}) \to \mathcal{O}_X$
+which maps $\mathcal{L}^{\otimes n}$ to zero for all $n > 0$.
+Then $p \circ o = \text{id}_X$ and $o(X)$ is an effective
+Cartier divisor on $L$. Hence by Lemma \ref{lemma-relative-effective-cartier}
+we see that $o^* \circ p^* = \text{id}$ and we conclude that $p^*$ is
+injective too.
+\end{proof}
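+
+\medskip\noindent
+For $\mathcal{L} = \mathcal{O}_X$ the lemma specializes to homotopy
+invariance for the affine line: the flat pullback
+$$
+p^* : \CH_k(X) \longrightarrow \CH_{k + 1}(X \times \mathbf{A}^1)
+$$
+is an isomorphism, since in this case
+$L = \underline{\Spec}(\text{Sym}^*(\mathcal{O}_X)) = X \times \mathbf{A}^1$.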
+
+\begin{remark}
+\label{remark-when-isomorphism}
+We will see later (Lemma \ref{lemma-vectorbundle}) that if $X$ is a
+vector bundle of rank $r$ over $Y$ then the pullback map
+$\CH_k(Y) \to \CH_{k + r}(X)$
+is an isomorphism. This is true whenever $X \to Y$ satisfies
+the assumptions of Lemma \ref{lemma-pullback-affine-fibres-surjective}, see
+\cite[Lemma 2.2]{Totaro-group}. We will sketch a proof in
+Remark \ref{remark-higher-chow-isomorphism} using higher Chow groups.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-linebundle-formulae}
+In the situation of Lemma \ref{lemma-linebundle} denote $o : X \to L$
+the zero section (see proof of the lemma). Then we have
+\begin{enumerate}
+\item $o(X)$ is the zero scheme of a regular global section of
+$p^*\mathcal{L}^{\otimes -1}$,
+\item $o_* : \CH_k(X) \to \CH_k(L)$ as $o$ is a closed immersion,
+\item $o^* : \CH_{k + 1}(L) \to \CH_k(X)$ as $o(X)$
+is an effective Cartier divisor,
+\item $o^* p^* : \CH_k(X) \to \CH_k(X)$ is the identity map,
+\item $o_*\alpha = - p^*(c_1(\mathcal{L}) \cap \alpha)$ for any
+$\alpha \in \CH_k(X)$, and
+\item $o^* o_* : \CH_k(X) \to \CH_{k - 1}(X)$ is equal to the map
+$\alpha \mapsto - c_1(\mathcal{L}) \cap \alpha$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $p_*\mathcal{O}_L = \text{Sym}^*(\mathcal{L})$ we have
+$p_*(p^*\mathcal{L}^{\otimes -1}) =
+\text{Sym}^*(\mathcal{L}) \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes -1}$
+by the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula})
+and the section mentioned in (1) is
+the canonical trivialization $\mathcal{O}_X \to
+\mathcal{L} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes -1}$.
+We omit the proof that the vanishing locus of
+this section is precisely $o(X)$. This proves (1).
+
+\medskip\noindent
+Parts (2), (3), and (4) we've seen in the course of the proof of
+Lemma \ref{lemma-linebundle}. Of course (4) is the first
+formula in Lemma \ref{lemma-relative-effective-cartier}.
+
+\medskip\noindent
+Part (5) follows from the second formula in
+Lemma \ref{lemma-relative-effective-cartier},
+additivity of capping with $c_1$ (Lemma \ref{lemma-c1-cap-additive}),
+and the fact that capping with $c_1$ commutes with flat pullback
+(Lemma \ref{lemma-flat-pullback-cap-c1}).
+
+\medskip\noindent
+Part (6) follows from Lemma \ref{lemma-gysin-back}
+and the fact that $o^*p^*\mathcal{L} = \mathcal{L}$.
+\end{proof}
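+
+\medskip\noindent
+To illustrate part (5), take $\mathcal{L} = \mathcal{O}_X$, so that
+$L = X \times \mathbf{A}^1$. Then $c_1(\mathcal{O}_X) \cap \alpha = 0$
+and part (5) says $o_*\alpha = 0$ in $\CH_k(X \times \mathbf{A}^1)$
+for all $\alpha \in \CH_k(X)$. For example, if $X$ is integral, then
+the class of the zero section satisfies
+$$
+[X \times \{0\}] = \text{div}(t) \sim_{rat} 0
+$$
+where $t$ is the coordinate on $\mathbf{A}^1$ viewed as a rational
+function on $X \times \mathbf{A}^1$.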
+
+\begin{lemma}
+\label{lemma-decompose-section}
+Let $Y$ be a scheme. Let $\mathcal{L}_i$, $i = 1, 2$ be invertible
+$\mathcal{O}_Y$-modules. Let $s$ be a global section of
+$\mathcal{L}_1 \otimes_{\mathcal{O}_Y} \mathcal{L}_2$.
+Denote $i : D \to Y$ the zero scheme of $s$.
+Then there exists a commutative diagram
+$$
+\xymatrix{
+D_1 \ar[r]_{i_1} \ar[d]_{p_1} &
+L \ar[d]^p &
+D_2 \ar[l]^{i_2} \ar[d]^{p_2} \\
+D \ar[r]^i &
+Y &
+D \ar[l]_i
+}
+$$
+and sections $s_i$ of $p^*\mathcal{L}_i$ such that
+the following hold:
+\begin{enumerate}
+\item $p^*s = s_1 \otimes s_2$,
+\item $p$ is of finite type and flat of relative dimension $1$,
+\item $D_i$ is the zero scheme of $s_i$,
+\item $D_i \cong
+\underline{\Spec}(\text{Sym}^*(\mathcal{L}_{3 - i}^{\otimes -1}|_D))$
+over $D$ for $i = 1, 2$,
+\item $p^{-1}D = D_1 \cup D_2$ (scheme theoretic union),
+\item $D_1 \cap D_2$ (scheme theoretic intersection) maps
+isomorphically to $D$, and
+\item $D_1 \cap D_2 \to D_i$
+is the zero section of the line bundle $D_i \to D$ for $i = 1, 2$.
+\end{enumerate}
+Moreover, the formation of this diagram and the sections $s_i$
+commutes with arbitrary base change.
+\end{lemma}
+
+\begin{proof}
+Let $p : L \to Y$ be the relative spectrum of the quasi-coherent
+sheaf of $\mathcal{O}_Y$-algebras
+$$
+\mathcal{A} =
+\left(\bigoplus\nolimits_{a_1, a_2 \geq 0}
+\mathcal{L}_1^{\otimes -a_1} \otimes_{\mathcal{O}_Y}
+\mathcal{L}_2^{\otimes -a_2}\right)/\mathcal{J}
+$$
+where $\mathcal{J}$ is the ideal generated by local sections of
+the form $st - t$ for $t$ a local section of any summand
+$\mathcal{L}_1^{\otimes -a_1} \otimes \mathcal{L}_2^{\otimes -a_2}$
+with $a_1, a_2 > 0$. The sections $s_i$ viewed as maps
+$p^*\mathcal{L}_i^{\otimes -1} \to \mathcal{O}_L$ are defined as the adjoints
+of the maps $\mathcal{L}_i^{\otimes -1} \to \mathcal{A} = p_*\mathcal{O}_L$.
+For any $y \in Y$ we can choose an affine
+open $V \subset Y$, say $V = \Spec(B)$, containing $y$ and
+trivializations $z_i : \mathcal{O}_V \to \mathcal{L}_i^{\otimes -1}|_V$.
+Observe that $f = s(z_1z_2) \in B$ cuts out the closed subscheme $D \cap V$ in $V$.
+Then clearly
+$$
+p^{-1}(V) = \Spec(B[z_1, z_2]/(z_1 z_2 - f))
+$$
+Since $D_i$ is cut out by $z_i$ everything is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-decompose-section-formulae}
+In the situation of Lemma \ref{lemma-decompose-section}
+assume $Y$ is locally of finite type over $(S, \delta)$ as in
+Situation \ref{situation-setup}. Then we have
+$i_1^*p^*\alpha = p_1^*i^*\alpha$
+in $\CH_k(D_1)$ for all $\alpha \in \CH_k(Y)$.
+\end{lemma}
+
+\begin{proof}
+Let $W \subset Y$ be an integral closed subscheme of $\delta$-dimension $k$.
+We distinguish two cases.
+
+\medskip\noindent
+Assume $W \subset D$. Then
+$i^*[W] = c_1(\mathcal{L}_1) \cap [W] + c_1(\mathcal{L}_2) \cap [W]$
+in $\CH_{k - 1}(D)$ by our definition of Gysin homomorphisms and the
+additivity of Lemma \ref{lemma-c1-cap-additive}.
+Hence $p_1^*i^*[W] =
+p_1^*(c_1(\mathcal{L}_1) \cap [W]) + p_1^*(c_1(\mathcal{L}_2) \cap [W])$.
+On the other hand, we have
+$p^*[W] = [p^{-1}(W)]_{k + 1}$ by construction of flat pullback.
+And $p^{-1}(W) = W_1 \cup W_2$ (scheme theoretically)
+where $W_i = p_i^{-1}(W)$ is a line bundle over $W$
+by the lemma (since formation of the diagram commutes with base change).
+Then $[p^{-1}(W)]_{k + 1} = [W_1] + [W_2]$ as $W_i$ are integral closed
+subschemes of $L$ of $\delta$-dimension $k + 1$. Hence
+\begin{align*}
+i_1^*p^*[W]
+& =
+i_1^*[p^{-1}(W)]_{k + 1} \\
+& =
+i_1^*([W_1] + [W_2]) \\
+& =
+c_1(p_1^*\mathcal{L}_1) \cap [W_1] + [W_1 \cap W_2]_k \\
+& =
+c_1(p_1^*\mathcal{L}_1) \cap p_1^*[W] + [W_1 \cap W_2]_k \\
+& =
+p_1^*(c_1(\mathcal{L}_1) \cap [W]) + [W_1 \cap W_2]_k
+\end{align*}
+by construction of gysin homomorphisms, the definition of flat pullback
+(for the second equality), and compatibility of $c_1 \cap -$
+with flat pullback (Lemma \ref{lemma-flat-pullback-cap-c1}).
+Since $W_1 \cap W_2$ is the zero section of the line bundle
+$W_1 \to W$ we see from Lemma \ref{lemma-linebundle-formulae}
+that $[W_1 \cap W_2]_k = p_1^*(c_1(\mathcal{L}_2) \cap [W])$.
+Note that here we use the fact that $D_1$ is the line bundle
+which is the relative spectrum of the inverse of $\mathcal{L}_2$.
+Thus we get the same thing as before.
+
+\medskip\noindent
+Assume $W \not \subset D$. In this case, both $i_1^*p^*[W]$
+and $p_1^*i^*[W]$ are represented by the $k$-cycle associated
+to the scheme theoretic inverse image of $W$ in $D_1$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-normal-cone-effective-Cartier}
+In Situation \ref{situation-setup} let $X$ be a scheme locally
+of finite type over $S$. Let $(\mathcal{L}, s, i : D \to X)$
+be a triple as in Definition \ref{definition-gysin-homomorphism}.
+There exists a commutative diagram
+$$
+\xymatrix{
+D' \ar[r]_{i'} \ar[d]_p & X' \ar[d]^g \\
+D \ar[r]^i & X
+}
+$$
+such that
+\begin{enumerate}
+\item $p$ and $g$ are of finite type and flat of relative dimension $1$,
+\item $p^* : \CH_k(D) \to \CH_{k + 1}(D')$ is injective for all $k$,
+\item $D' \subset X'$ is the zero scheme of a global section
+$s' \in \Gamma(X', \mathcal{O}_{X'})$,
+\item $p^*i^* = (i')^*g^*$ as maps $\CH_k(X) \to \CH_k(D')$.
+\end{enumerate}
+Moreover, these properties remain true after arbitrary base change
+by morphisms $Y \to X$ which are locally of finite type.
+\end{lemma}
+
+\begin{proof}
+Observe that $(i')^*$ is defined because we have the triple
+$(\mathcal{O}_{X'}, s', i' : D' \to X')$ as in
+Definition \ref{definition-gysin-homomorphism}. Thus the statement makes sense.
+
+\medskip\noindent
+Set $\mathcal{L}_1 = \mathcal{O}_X$, $\mathcal{L}_2 = \mathcal{L}$
+and apply Lemma \ref{lemma-decompose-section} with the section $s$ of
+$\mathcal{L} = \mathcal{L}_1 \otimes_{\mathcal{O}_X} \mathcal{L}_2$.
+Take $D' = D_1$. The results now follow from the lemma, from
+Lemma \ref{lemma-decompose-section-formulae}
+and injectivity by
+Lemma \ref{lemma-linebundle}.
+\end{proof}
+
+\begin{remark}
+\label{remark-higher-chow-isomorphism}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $Y$ be locally of finite type over $S$. Let $r \geq 0$. Let
+$f : X \to Y$ be a morphism of schemes. Assume every $y \in Y$
+is contained in an open $V \subset Y$ such that
+$f^{-1}(V) \cong V \times \mathbf{A}^r$ as schemes over $V$.
+In this remark we sketch a proof of the fact that
+$f^* : \CH_k(Y) \to \CH_{k + r}(X)$ is an isomorphism.
+First, by Lemma \ref{lemma-pullback-affine-fibres-surjective}
+the map is surjective. Let $\alpha \in \CH_k(Y)$ with $f^*\alpha = 0$.
+We will prove that $\alpha = 0$.
+
+\medskip\noindent
+Step 1. We may assume that $\dim_\delta(Y) < \infty$. (This is immediate
+in practice, so we suggest the reader skip this step.)
+Namely, any rational equivalence witnessing that $f^*\alpha = 0$ on $X$,
+will use a locally finite collection of integral closed subschemes
+of dimension $k + r + 1$. Taking the union of the closures of the images
+of these in $Y$ we get a closed subscheme $Y' \subset Y$
+of $\dim_\delta(Y') \leq k + r + 1$ such that
+$\alpha$ is the image of some $\alpha' \in \CH_k(Y')$
+and such that $(f')^*\alpha' = 0$ where $f'$ is the base change of $f$ to $Y'$.
+
+\medskip\noindent
+Step 2. Assume $d = \dim_\delta(Y) < \infty$. Then we can use induction on
+$d$. If $d < k$, then $\alpha = 0$ and we are done; this is the base case
+of the induction.
+In general, our assumption on $f$ shows we can choose a dense open
+$V \subset Y$ such that $U = f^{-1}(V) = \mathbf{A}^r_V$.
+Denote $Y' \subset Y$ the complement of $V$ as a reduced closed
+subscheme and set $X' = f^{-1}(Y')$. Consider
+$$
+\xymatrix{
+\CH^M_{k + r}(U, 1) \ar[r] &
+\CH_{k + r}(X') \ar[r] &
+\CH_{k + r}(X) \ar[r] &
+\CH_{k + r}(U) \ar[r] &
+0 \\
+\CH^M_k(V, 1) \ar[r] \ar[u] &
+\CH_k(Y') \ar[r] \ar[u] &
+\CH_k(Y) \ar[r] \ar[u] &
+\CH_k(V) \ar[r] \ar[u] &
+0
+}
+$$
+Here we use the first higher Chow groups of $V$ and $U$ and the
+six term exact sequences constructed in Remark \ref{remark-higher-chow},
+as well as flat pullback for these higher Chow groups and
+compatibility of flat pullback with these six term exact sequences.
+Since $U = \mathbf{A}^r_V$ the vertical map on the right is an isomorphism.
+The map $\CH_k(Y') \to \CH_{k + r}(X')$ is bijective by induction on $d$.
+Hence to finish the argument it suffices to show that
+$$
+\CH^M_k(V, 1) \longrightarrow \CH^M_{k + r}(U, 1)
+$$
+is surjective. Arguing as in the proof of
+Lemma \ref{lemma-pullback-affine-fibres-surjective}
+this reduces to Step 3 below.
+
+\medskip\noindent
+Step 3. Let $F$ be a field. Then $\CH^M_0(\mathbf{A}^1_F, 1) = 0$.
+(In the proof of the lemma cited above we proved analogously that
+$\CH_0(\mathbf{A}^1_F) = 0$.) We have
+$$
+\CH^M_0(\mathbf{A}^1_F, 1) = \Coker\left(
+\partial : K^M_2(F(T)) \longrightarrow
+\bigoplus\nolimits_{\mathfrak p \subset F[T]\text{ maximal}}
+\kappa(\mathfrak p)^*\right)
+$$
+The classical argument for the vanishing of the cokernel is to
+show by induction on the degree of $\kappa(\mathfrak p)/F$
+that the summand corresponding to $\mathfrak p$ is in the image.
+If $\mathfrak p$ is generated by the
+irreducible monic polynomial $P(T) \in F[T]$ and if
+$u \in \kappa(\mathfrak p)^*$ is the residue class of some
+$Q(T) \in F[T]$ with $\deg(Q) < \deg(P)$ then one shows that
+$\partial(Q, P)$ produces the element $u$ at $\mathfrak p$
+and perhaps some other units at primes dividing $Q$ which
+have lower degree. This finishes the sketch of the proof.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+\section{Bivariant intersection theory}
+\label{section-bivariant}
+
+\noindent
+In order to intelligently talk about higher Chern classes of vector
+bundles we introduce bivariant chow classes as in \cite{F}.
+Our definition differs from \cite{F} in two respects:
+(1) we work in a different setting, and (2) we only require
+our bivariant classes commute with the gysin homomorphisms for
+zero schemes of sections of invertible modules
+(Section \ref{section-intersecting-effective-Cartier}).
+We will see later, in Lemma \ref{lemma-gysin-commutes}, that our
+bivariant classes commute with all higher codimension gysin homomorphisms
+and hence satisfy all properties required of them in \cite{F}; see
+also \cite[Theorem 17.1]{F}.
+
+\begin{definition}
+\label{definition-bivariant-class}
+\begin{reference}
+Similar to \cite[Definition 17.1]{F}
+\end{reference}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite type over $S$.
+Let $p \in \mathbf{Z}$.
+A {\it bivariant class $c$ of degree $p$ for $f$} is given by a rule
+which assigns to every locally of finite type morphism $Y' \to Y$
+and every $k$ a map
+$$
+c \cap - : \CH_k(Y') \longrightarrow \CH_{k - p}(X')
+$$
+where $X' = Y' \times_Y X$, satisfying the following conditions
+\begin{enumerate}
+\item if $Y'' \to Y'$ is proper, then
+$c \cap (Y'' \to Y')_*\alpha'' = (X'' \to X')_*(c \cap \alpha'')$
+for all $\alpha''$ on $Y''$ where $X'' = Y'' \times_Y X$,
+\item if $Y'' \to Y'$ is flat locally of finite type of
+fixed relative dimension, then
+$c \cap (Y'' \to Y')^*\alpha' = (X'' \to X')^*(c \cap \alpha')$
+for all $\alpha'$ on $Y'$, and
+\item if $(\mathcal{L}', s', i' : D' \to Y')$ is as in
+Definition \ref{definition-gysin-homomorphism}
+with pullback $(\mathcal{N}', t', j' : E' \to X')$ to $X'$,
+then we have $c \cap (i')^*\alpha' = (j')^*(c \cap \alpha')$
+for all $\alpha'$ on $Y'$.
+\end{enumerate}
+The collection of all bivariant classes of degree $p$ for $f$ is
+denoted $A^p(X \to Y)$.
+\end{definition}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X \to Y$
+and $Y \to Z$
+be morphisms of schemes locally of finite type over $S$. Let
+$p \in \mathbf{Z}$. It is clear that $A^p(X \to Y)$ is an abelian group.
+Moreover, it is clear that we have a bilinear composition
+$$
+A^p(X \to Y) \times A^q(Y \to Z) \to A^{p + q}(X \to Z)
+$$
+which is associative.
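+
+\noindent
+Explicitly, this composition is given by the rule one would guess: if
+$c \in A^p(X \to Y)$ and $c' \in A^q(Y \to Z)$, then for any $Z' \to Z$
+locally of finite type and any $\alpha \in \CH_k(Z')$ we have
+$$
+(c \circ c') \cap \alpha = c \cap (c' \cap \alpha)
+\in \CH_{k - p - q}(X \times_Z Z')
+$$
+This makes sense because $c' \cap \alpha$ lives on $Y \times_Z Z'$, which is
+locally of finite type over $Y$, and
+$X \times_Y (Y \times_Z Z') = X \times_Z Z'$.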
+
+\begin{lemma}
+\label{lemma-flat-pullback-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a flat morphism of relative dimension $r$
+between schemes locally of finite type over $S$.
+Then the rule that to $Y' \to Y$ assigns
+$(f')^* : \CH_k(Y') \to \CH_{k + r}(X')$ where $X' = X \times_Y Y'$
+is a bivariant class of degree $-r$.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Lemmas \ref{lemma-flat-pullback-rational-equivalence},
+\ref{lemma-compose-flat-pullback},
+\ref{lemma-flat-pullback-proper-pushforward}, and
+\ref{lemma-gysin-flat-pullback}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $(\mathcal{L}, s, i : D \to X)$ be a triple as in
+Definition \ref{definition-gysin-homomorphism}.
+Then the rule that to $f : X' \to X$ assigns
+$(i')^* : \CH_k(X') \to \CH_{k - 1}(D')$ where $D' = D \times_X X'$
+is a bivariant class of degree $1$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-gysin-factors},
+\ref{lemma-closed-in-X-gysin},
+\ref{lemma-gysin-flat-pullback}, and
+\ref{lemma-gysin-commutes-gysin}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-push-proper-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of
+schemes locally of finite type over $S$.
+Let $c \in A^p(X \to Z)$ and assume $f$ is proper.
+Then the rule that to $Z' \to Z$ assigns
+$\alpha \longmapsto f'_*(c \cap \alpha)$
+is a bivariant class denoted $f_* \circ c \in A^p(Y \to Z)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-compose-pushforward},
+\ref{lemma-flat-pullback-proper-pushforward}, and
+\ref{lemma-closed-in-X-gysin}.
+\end{proof}
+
+\begin{remark}
+\label{remark-restriction-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X \to Y$
+and $Y' \to Y$ be morphisms of schemes locally of finite type over $S$.
+Let $X' = Y' \times_Y X$. Then there is an obvious restriction map
+$$
+A^p(X \to Y) \longrightarrow A^p(X' \to Y'),\quad
+c \longmapsto res(c)
+$$
+obtained by viewing a scheme $Y''$ locally of finite type over $Y'$
+as a scheme locally of finite type over $Y$ and setting
+$res(c) \cap \alpha'' = c \cap \alpha''$ for any $\alpha'' \in \CH_k(Y'')$.
+This restriction operation is compatible with compositions in an
+obvious manner.
+\end{remark}
+
+\begin{remark}
+\label{remark-bivariant-commute}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be
+locally of finite type over $S$. For $i = 1, 2$ let $Z_i \to X$
+be a morphism of schemes locally of finite type. Let
+$c_i \in A^{p_i}(Z_i \to X)$, $i = 1, 2$ be bivariant classes.
+For any $\alpha \in \CH_k(X)$ we can ask whether
+$$
+c_1 \cap c_2 \cap \alpha = c_2 \cap c_1 \cap \alpha
+$$
+in $\CH_{k - p_1 - p_2}(Z_1 \times_X Z_2)$. If this is true and if it holds
+after any base change by $X' \to X$ locally of finite type, then we say
+$c_1$ and $c_2$ {\it commute}. Of course this is the same thing as saying that
+$$
+res(c_1) \circ c_2 = res(c_2) \circ c_1
+$$
+in $A^{p_1 + p_2}(Z_1 \times_X Z_2 \to X)$. Here
+$res(c_1) \in A^{p_1}(Z_1 \times_X Z_2 \to Z_2)$ is the restriction of $c_1$
+as in Remark \ref{remark-restriction-bivariant}; similarly for $res(c_2)$.
+\end{remark}
+
+\begin{example}
+\label{example-gysin-commute}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be locally
+of finite type over $S$. Let $(\mathcal{L}, s, i : D \to X)$ a triple as in
+Definition \ref{definition-gysin-homomorphism}. Let $Z \to X$ be a morphism
+of schemes locally of finite type and let $c \in A^p(Z \to X)$ be a
+bivariant class. Then the bivariant gysin class $c' \in A^1(D \to X)$ of
+Lemma \ref{lemma-gysin-bivariant} commutes with $c$ in the sense of
+Remark \ref{remark-bivariant-commute}. Namely, this is a restatement of
+condition (3) of Definition \ref{definition-bivariant-class}.
+\end{example}
+
+\begin{remark}
+\label{remark-more-general-bivariant}
+There is a more general type of bivariant class that doesn't seem to be
+considered in the literature. Namely, suppose we are given a diagram
+$$
+X \longrightarrow Z \longleftarrow Y
+$$
+of schemes locally of finite type over $(S, \delta)$ as in
+Situation \ref{situation-setup}. Let $p \in \mathbf{Z}$.
+Then we can consider a rule $c$ which assigns to every $Z' \to Z$
+locally of finite type maps
+$$
+c \cap - : \CH_k(Y') \longrightarrow \CH_{k - p}(X')
+$$
+for all $k \in \mathbf{Z}$
+where $X' = X \times_Z Z'$ and $Y' = Z' \times_Z Y$ compatible with
+\begin{enumerate}
+\item proper pushforward if given $Z'' \to Z'$ proper,
+\item flat pullback if given $Z'' \to Z'$ flat
+of fixed relative dimension, and
+\item gysin maps if given $D' \subset Z'$ as in
+Definition \ref{definition-gysin-homomorphism}.
+\end{enumerate}
+We omit the detailed formulations. Suppose we denote the collection
+of all such operations $A^p(X \to Z \leftarrow Y)$. A simple example
+of the utility of this concept is when we have a proper morphism
+$f : X_2 \to X_1$. Then $f_*$ isn't a bivariant operation in the sense of
+Definition \ref{definition-bivariant-class} but it is in the
+above generalized sense, namely, $f_* \in A^0(X_1 \to X_1 \leftarrow X_2)$.
+\end{remark}
+
+
+
+
+
+\section{Chow cohomology and the first Chern class}
+\label{section-chow-cohomology}
+
+\noindent
+We will be most interested in $A^p(X) = A^p(X \to X)$, which will always mean
+the bivariant cohomology classes for $\text{id}_X$. Namely, that is where
+Chern classes will live.
+
+\begin{definition}
+\label{definition-chow-cohomology}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. The {\it Chow cohomology}
+of $X$ is the graded $\mathbf{Z}$-algebra $A^*(X)$ whose degree
+$p$ component is $A^p(X \to X)$.
+\end{definition}
+
+\noindent
+Warning: It is not clear that the $\mathbf{Z}$-algebra structure
+on $A^*(X)$ is commutative, but we will see that Chern classes live
+in its center.
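+
+\noindent
+For example, the rule assigning to every $Y' \to X$ locally of finite type
+the identity maps $\CH_k(Y') \to \CH_k(Y')$ trivially satisfies conditions
+(1), (2), and (3) of Definition \ref{definition-bivariant-class} and hence
+defines a class $1 \in A^0(X)$ which is a two-sided unit for the algebra
+structure: $1 \circ c = c \circ 1 = c$ for all $c \in A^p(X)$.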
+
+\begin{remark}
+\label{remark-pullback-cohomology}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : Y' \to Y$ be a morphism of schemes locally of finite type over $S$.
+As a special case of Remark \ref{remark-restriction-bivariant}
+there is a canonical $\mathbf{Z}$-algebra map $res : A^*(Y) \to A^*(Y')$.
+This map is often denoted $f^*$ in the literature.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-cap-c1-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Then the rule that to $f : X' \to X$ assigns
+$c_1(f^*\mathcal{L}) \cap - : \CH_k(X') \to \CH_{k - 1}(X')$
+is a bivariant class of degree $1$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-factors},
+\ref{lemma-pushforward-cap-c1},
+\ref{lemma-flat-pullback-cap-c1}, and
+\ref{lemma-gysin-commutes-cap-c1}.
+\end{proof}
+
+\noindent
+The lemma above finally allows us to make the following definition.
+
+\begin{definition}
+\label{definition-first-chern-class}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$
+be locally of finite type over $S$. Let $\mathcal{L}$ be an invertible
+$\mathcal{O}_X$-module. The {\it first Chern class}
+$c_1(\mathcal{L}) \in A^1(X)$ of $\mathcal{L}$
+is the bivariant class of Lemma \ref{lemma-cap-c1-bivariant}.
+\end{definition}
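+
+\noindent
+Observe that the construction is additive in $\mathcal{L}$: if
+$\mathcal{N}$ is a second invertible $\mathcal{O}_X$-module, then
+$$
+c_1(\mathcal{L} \otimes_{\mathcal{O}_X} \mathcal{N}) =
+c_1(\mathcal{L}) + c_1(\mathcal{N})
+\quad\text{in}\quad A^1(X)
+$$
+by the additivity of Lemma \ref{lemma-c1-cap-additive} applied after
+pulling back to any $X' \to X$ locally of finite type.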
+
+\noindent
+For finite locally free modules we construct the Chern classes in
+Section \ref{section-intersecting-chern-classes}.
+Let us prove that $c_1(\mathcal{L})$ is in the center of $A^*(X)$.
+
+\begin{lemma}
+\label{lemma-c1-center}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Then
+\begin{enumerate}
+\item $c_1(\mathcal{L}) \in A^1(X)$ is in the center of $A^*(X)$ and
+\item if $f : X' \to X$ is locally of finite type and $c \in A^*(X' \to X)$,
+then $c \circ c_1(\mathcal{L}) = c_1(f^*\mathcal{L}) \circ c$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Of course (2) implies (1).
+Let $p : L \to X$ be as in Lemma \ref{lemma-linebundle} and let $o : X \to L$
+be the zero section. Denote $p' : L' \to X'$ and $o' : X' \to L'$
+their base changes. By Lemma \ref{lemma-linebundle-formulae} we have
+$$
+p^*(c_1(\mathcal{L}) \cap \alpha) = - o_* \alpha
+\quad\text{and}\quad
+(p')^*(c_1(f^*\mathcal{L}) \cap \alpha') = - o'_* \alpha'
+$$
+Since $c$ is a bivariant class we have
+\begin{align*}
+(p')^*(c \cap c_1(\mathcal{L}) \cap \alpha)
+& =
+c \cap p^*(c_1(\mathcal{L}) \cap \alpha) \\
+& =
+- c \cap o_* \alpha \\
+& =
+- o'_*(c \cap \alpha) \\
+& =
+(p')^*(c_1(f^*\mathcal{L}) \cap c \cap \alpha)
+\end{align*}
+Since $(p')^*$ is injective by one of the lemmas cited above we obtain
+$c \cap c_1(\mathcal{L}) \cap \alpha =
+c_1(f^*\mathcal{L}) \cap c \cap \alpha$.
+The same is true after any base change by $Y \to X$ locally of finite type
+and hence we have the equality of bivariant classes stated in (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanish-above-dimension}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a
+finite type scheme over $S$ which has an ample invertible sheaf.
+Assume $d = \dim(X) < \infty$ (here we really mean dimension and
+not $\delta$-dimension).
+Then for any invertible sheaves $\mathcal{L}_1, \ldots, \mathcal{L}_{d + 1}$
+on $X$ we have
+$c_1(\mathcal{L}_1) \circ \ldots \circ c_1(\mathcal{L}_{d + 1}) = 0$
+in $A^{d + 1}(X)$.
+\end{lemma}
+
+\begin{proof}
+We prove this by induction on $d$. The base case $d = 0$ is true because
+in this case $X$ is a finite set of closed points and hence every invertible
+module is trivial. Assume $d > 0$. By Divisors, Lemma
+\ref{divisors-lemma-quasi-projective-Noetherian-pic-effective-Cartier}
+we can write $\mathcal{L}_{d + 1} \cong \mathcal{O}_X(D) \otimes
+\mathcal{O}_X(D')^{\otimes -1}$ for some effective Cartier divisors
+$D, D' \subset X$. Then $c_1(\mathcal{L}_{d + 1})$ is the difference
+of $c_1(\mathcal{O}_X(D))$ and $c_1(\mathcal{O}_X(D'))$ and hence
+we may assume $\mathcal{L}_{d + 1} = \mathcal{O}_X(D)$ for some
+effective Cartier divisor.
+
+\medskip\noindent
+Denote $i : D \to X$ the inclusion morphism and denote
+$i^* \in A^1(D \to X)$ the bivariant class given by the
+gysin homomorphism as in Lemma \ref{lemma-gysin-bivariant}.
+We have $i_* \circ i^* = c_1(\mathcal{L}_{d + 1})$
+in $A^1(X)$ by Lemma \ref{lemma-support-cap-effective-Cartier}
+(and Lemma \ref{lemma-push-proper-bivariant}
+to make sense of the left hand side).
+Since $c_1(\mathcal{L}_i)$ commutes with
+both $i_*$ and $i^*$ (by definition of bivariant classes)
+we conclude that
+$$
+c_1(\mathcal{L}_1) \circ \ldots \circ c_1(\mathcal{L}_{d + 1}) =
+i_* \circ c_1(\mathcal{L}_1) \circ \ldots \circ c_1(\mathcal{L}_d) \circ i^* =
+i_* \circ c_1(\mathcal{L}_1|_D) \circ \ldots \circ c_1(\mathcal{L}_d|_D)
+\circ i^*
+$$
+Thus we conclude by induction on $d$. Namely, we have $\dim(D) < d$
+as none of the generic points of $X$ are in $D$.
+\end{proof}
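+
+\noindent
+For example, take $S = \Spec(k)$ for a field $k$ with the usual $\delta$
+and $X = \mathbf{P}^d_k$. Then $X$ is of finite type over $S$, has the ample
+invertible sheaf $\mathcal{O}_X(1)$, and $\dim(X) = d$. Hence the lemma
+tells us that any product of $d + 1$ first Chern classes of invertible
+modules on $X$, such as $c_1(\mathcal{O}_X(1))^{d + 1}$, vanishes in
+$A^{d + 1}(X)$.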
+
+\begin{remark}
+\label{remark-ring-loc-classes}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $Z \to X$ be a closed immersion of schemes locally of
+finite type over $S$ and let $p \geq 0$. In this setting we define
+$$
+A^{(p)}(Z \to X) =
+\prod\nolimits_{i \leq p - 1} A^i(X) \times
+\prod\nolimits_{i \geq p} A^i(Z \to X).
+$$
+Then $A^{(p)}(Z \to X)$ canonically comes equipped with the structure
+of a graded algebra. In fact, more generally there is a multiplication
+$$
+A^{(p)}(Z \to X) \times A^{(q)}(Z \to X)
+\longrightarrow A^{(\max(p, q))}(Z \to X)
+$$
+In order to define these we define maps
+\begin{align*}
+A^i(Z \to X) \times A^j(X) & \to A^{i + j}(Z \to X) \\
+A^i(X) \times A^j(Z \to X) & \to A^{i + j}(Z \to X) \\
+A^i(Z \to X) \times A^j(Z \to X) & \to A^{i + j}(Z \to X)
+\end{align*}
+For the first we use composition of bivariant classes.
+For the second we use restriction
+$A^i(X) \to A^i(Z)$ (Remark \ref{remark-restriction-bivariant}) and
+composition $A^i(Z) \times A^j(Z \to X) \to A^{i + j}(Z \to X)$.
+For the third, we send $(c, c')$ to $res(c) \circ c'$
+where $res : A^i(Z \to X) \to A^i(Z)$ is the restriction map (see
+Remark \ref{remark-restriction-bivariant}). We omit the
+verification that these multiplications are associative in a suitable sense.
+\end{remark}
+
+\begin{remark}
+\label{remark-res-push}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $Z \to X$ be a closed immersion of schemes locally of
+finite type over $S$. Denote $res : A^p(Z \to X) \to A^p(Z)$
+the restriction map of Remark \ref{remark-restriction-bivariant}.
+For $c \in A^p(Z \to X)$ we have
+$res(c) \cap \alpha = c \cap i_*\alpha$ for $\alpha \in \CH_*(Z)$.
+Namely $res(c) \cap \alpha = c \cap \alpha$
+and compatibility of $c$ with proper pushforward
+gives $(Z \to Z)_*(c \cap \alpha) = c \cap (Z \to X)_*\alpha$.
+\end{remark}
+
+
+
+
+
+
+\section{Lemmas on bivariant classes}
+\label{section-bivariant-II}
+
+\noindent
+In this section we prove some elementary results on bivariant classes.
+Here is a criterion to see that an operation
+passes through rational equivalence.
+
+\begin{lemma}
+\label{lemma-factors-through-rational-equivalence}
+\begin{reference}
+Very weak form of \cite[Theorem 17.1]{F}
+\end{reference}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite type over $S$.
+Let $p \in \mathbf{Z}$. Suppose given a rule
+which assigns to every locally of finite type morphism $Y' \to Y$
+and every $k$ a map
+$$
+c \cap - : Z_k(Y') \longrightarrow \CH_{k - p}(X')
+$$
+where $X' = Y' \times_Y X$, satisfying condition (3) of
+Definition \ref{definition-bivariant-class}
+whenever $\mathcal{L}'|_{D'} \cong \mathcal{O}_{D'}$. Then
+$c \cap -$ factors through rational equivalence.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense because given a triple
+$(\mathcal{L}, s, i : D \to X)$ as in
+Definition \ref{definition-gysin-homomorphism}
+such that $\mathcal{L}|_D \cong \mathcal{O}_D$, then
+the operation $i^*$ is defined on the level of cycles, see
+Remark \ref{remark-gysin-on-cycles}.
+Let $\alpha \in Z_k(Y')$ be a cycle which is rationally equivalent to zero.
+We have to show that $c \cap \alpha = 0$. By
+Lemma \ref{lemma-rational-equivalence-family}
+there exists a cycle $\beta \in Z_{k + 1}(Y' \times \mathbf{P}^1)$
+such that $\alpha = i_0^*\beta - i_\infty^*\beta$
+where $i_0, i_\infty : Y' \to Y' \times \mathbf{P}^1$ are the
+closed immersions of $Y'$ over $0, \infty$. Since these are
+examples of effective Cartier divisors with trivial normal
+bundles, we see that $c \cap i_0^*\beta = j_0^*(c \cap \beta)$
+and $c \cap i_\infty^*\beta = j_\infty^*(c \cap \beta)$
+where $j_0, j_\infty : X' \to X' \times \mathbf{P}^1$ are
+closed immersions as before. Since
+$j_0^*(c \cap \beta) \sim_{rat} j_\infty^*(c \cap \beta)$
+(follows from Lemma \ref{lemma-rational-equivalence-family}) we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bivariant-weaker}
+\begin{reference}
+Weak form of \cite[Theorem 17.1]{F}
+\end{reference}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite type over $S$.
+Let $p \in \mathbf{Z}$. Suppose given a rule
+which assigns to every locally of finite type morphism $Y' \to Y$
+and every $k$ a map
+$$
+c \cap - : \CH_k(Y') \longrightarrow \CH_{k - p}(X')
+$$
+where $X' = Y' \times_Y X$, satisfying conditions (1), (2) of
+Definition \ref{definition-bivariant-class}
+and condition (3) whenever $\mathcal{L}'|_{D'} \cong \mathcal{O}_{D'}$. Then
+$c \cap -$ is a bivariant class.
+\end{lemma}
+
+\begin{proof}
+Let $Y' \to Y$ be a morphism of schemes which is locally of finite type.
+Let $(\mathcal{L}', s', i' : D' \to Y')$ be as in
+Definition \ref{definition-gysin-homomorphism}
+with pullback $(\mathcal{N}', t', j' : E' \to X')$ to $X'$.
+We have to show that $c \cap (i')^*\alpha' = (j')^*(c \cap \alpha')$
+for all $\alpha' \in \CH_k(Y')$.
+
+\medskip\noindent
+Denote $g : Y'' \to Y'$ the smooth morphism of relative
+dimension $1$ with $i'' : D'' \to Y''$ and $p : D'' \to D'$
+constructed in Lemma \ref{lemma-normal-cone-effective-Cartier}.
+(Warning: $D''$ isn't the full inverse image of $D'$.)
+Denote $f : X'' \to X'$ and $E'' \subset X''$
+their base changes by $X' \to Y'$. Picture
+$$
+\xymatrix{
+& X'' \ar[rr] \ar'[d][dd]_h & & Y'' \ar[dd]^g \\
+E'' \ar[rr] \ar[dd]_q \ar[ru]^{j''} & & D'' \ar[dd]^p \ar[ru]^{i''} & \\
+& X' \ar'[r][rr] & & Y' \\
+E' \ar[rr] \ar[ru]^{j'} & & D' \ar[ru]^{i'}
+}
+$$
+By the properties given in the lemma we know that $\beta' = (i')^*\alpha'$
+is the unique element of $\CH_{k - 1}(D')$ such that
+$p^*\beta' = (i'')^*g^*\alpha'$. Similarly, we know that
+$\gamma' = (j')^*(c \cap \alpha')$ is the unique element of
+$\CH_{k - 1 - p}(E')$ such that $q^*\gamma' = (j'')^*h^*(c \cap \alpha')$.
+Now observe that
+$$
+(j'')^*h^*(c \cap \alpha') =
+(j'')^*(c \cap g^*\alpha') =
+c \cap (i'')^*g^*\alpha'
+$$
+by our assumptions on $c$; note that the modified version of (3)
+assumed in the statement of the lemma applies to $i''$
+and its base change $j''$. We similarly know that
+$$
+q^*(c \cap \beta') = c \cap p^*\beta'
+$$
+We conclude that $\gamma' = c \cap \beta'$ by the uniqueness pointed
+out above.
+\end{proof}
+
+\noindent
+Here is a criterion for when a bivariant class is zero.
+
+\begin{lemma}
+\label{lemma-bivariant-zero}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite type over $S$.
+Let $c \in A^p(X \to Y)$. For $Y'' \to Y' \to Y$ set
+$X'' = Y'' \times_Y X$ and $X' = Y' \times_Y X$.
+The following are equivalent
+\begin{enumerate}
+\item $c$ is zero,
+\item $c \cap [Y'] = 0$ in $\CH_*(X')$ for every integral scheme $Y'$
+locally of finite type over $Y$, and
+\item for every integral scheme $Y'$ locally of finite type over $Y$,
+there exists a proper birational morphism $Y'' \to Y'$ such that
+$c \cap [Y''] = 0$ in $\CH_*(X'')$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implications (1) $\Rightarrow$ (2) $\Rightarrow$ (3) are clear.
+Assumption (3) implies (2) because $(Y'' \to Y')_*[Y''] = [Y']$
+and hence $c \cap [Y'] = (X'' \to X')_*(c \cap [Y''])$ as $c$
+is a bivariant class. Assume (2).
+Let $Y' \to Y$ be locally of finite type. Let $\alpha \in \CH_k(Y')$.
+Write $\alpha = \sum n_i [Y'_i]$ with $Y'_i \subset Y'$ a locally finite
+collection of integral closed subschemes of $\delta$-dimension $k$.
+Then we see that $\alpha$ is pushforward of the cycle
+$\alpha' = \sum n_i[Y'_i]$ on $Y'' = \coprod Y'_i$ under the
+proper morphism $Y'' \to Y'$. By the properties of bivariant
+classes it suffices to prove that $c \cap \alpha' = 0$ in $\CH_{k - p}(X'')$.
+We have $\CH_{k - p}(X'') = \prod \CH_{k - p}(X'_i)$ where
+$X'_i = Y'_i \times_Y X$. This follows immediately
+from the definitions. The projection maps
+$\CH_{k - p}(X'') \to \CH_{k - p}(X'_i)$ are given by flat pullback.
+Since capping with $c$ commutes with
+flat pullback, we see that it suffices to show that $c \cap [Y'_i]$
+is zero in $\CH_{k - p}(X'_i)$ which is true by assumption.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-disjoint-decomposition-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite type over $S$.
+Assume we have disjoint union decompositions
+$X = \coprod_{i \in I} X_i$ and $Y = \coprod_{j \in J} Y_j$
+by open and closed subschemes
+and a map $a : I \to J$ of sets such that $f(X_i) \subset Y_{a(i)}$.
+Then
+$$
+A^p(X \to Y) = \prod\nolimits_{i \in I} A^p(X_i \to Y_{a(i)})
+$$
+\end{lemma}
+
+\begin{proof}
+Suppose given an element $(c_i) \in \prod_i A^p(X_i \to Y_{a(i)})$.
+Then given $\beta \in \CH_k(Y)$ we can map this to the element of
+$\CH_{k - p}(X)$ whose restriction to $X_i$ is $c_i \cap \beta|_{Y_{a(i)}}$.
+This works because $\CH_{k - p}(X) = \prod_i \CH_{k - p}(X_i)$.
+The same construction works after base change by any $Y' \to Y$
+locally of finite type and we get $c \in A^p(X \to Y)$.
+Thus we obtain a map $\Psi$ from the right hand side of the formula
+to the left hand side of the formula.
+Conversely, given $c \in A^p(X \to Y)$ and an element
+$\beta_i \in \CH_k(Y_{a(i)})$ we can consider the element
+$(c \cap (Y_{a(i)} \to Y)_*\beta_i)|_{X_i}$ in $\CH_{k - p}(X_i)$.
+The same thing works after base change by any $Y' \to Y$
+locally of finite type and we get $c_i \in A^p(X_i \to Y_{a(i)})$.
+Thus we obtain a map $\Phi$ from the left hand
+side of the formula to the right hand side of the formula.
+It is immediate that $\Phi \circ \Psi = \text{id}$.
+For the converse, suppose that $c \in A^p(X \to Y)$ and
+$\beta \in \CH_k(Y)$. Say $\Phi(c) = (c_i)$. Let $j \in J$.
+Because $c$ commutes with flat pullback we get
+$$
+(c \cap \beta)|_{\coprod_{a(i) = j} X_i} =
+c \cap \beta|_{Y_j}
+$$
+Because $c$ commutes with proper pushforward we get
+$$
+(\coprod\nolimits_{a(i) = j} X_i \to X)_*
+((c \cap \beta)|_{\coprod_{a(i) = j} X_i})
+=
+c \cap (Y_j \to Y)_*\beta|_{Y_j}
+$$
+The left hand side is the cycle on $X$ restricting to $(c \cap \beta)|_{X_i}$
+on $X_i$ for $i \in I$ with $a(i) = j$ and $0$ else.
+The right hand side is a cycle on $X$ whose restriction to $X_i$
+is $c_i \cap \beta|_{Y_j}$ for $i \in I$ with $a(i) = j$.
+Thus $c \cap \beta = \Psi((c_i))$ as desired.
+\end{proof}
+
+\begin{remark}
+\label{remark-completion-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite type over $S$.
+Let $X = \coprod_{i \in I} X_i$ and $Y = \coprod_{j \in J} Y_j$
+be the decomposition of $X$ and $Y$ into their connected components
+(the connected components are open as $X$ and $Y$ are locally Noetherian, see
+Topology, Lemma \ref{topology-lemma-locally-Noetherian-locally-connected} and
+Properties, Lemma \ref{properties-lemma-Noetherian-topology}).
+Let $a(i) \in J$ be the index such that $f(X_i) \subset Y_{a(i)}$.
+Then $A^p(X \to Y) = \prod A^p(X_i \to Y_{a(i)})$ by
+Lemma \ref{lemma-disjoint-decomposition-bivariant}.
+In this setting it is convenient to set
+$$
+A^*(X \to Y)^\wedge = \prod\nolimits_i A^*(X_i \to Y_{a(i)})
+$$
+This ``completed'' bivariant group is the subset
+$$
+A^*(X \to Y)^\wedge \quad\subset\quad \prod\nolimits_{p \geq 0} A^p(X \to Y)
+$$
+consisting of elements $c = (c_0, c_1, c_2, \ldots)$ such that
+for each connected component $X_i$ the image of $c_p$ in
+$A^p(X_i \to Y_{a(i)})$ is zero for almost all $p$.
+If $Y \to Z$ is a second morphism, then the
+composition $A^*(X \to Y) \times A^*(Y \to Z) \to A^*(X \to Z)$
+extends to a composition
+$A^*(X \to Y)^\wedge \times A^*(Y \to Z)^\wedge \to A^*(X \to Z)^\wedge$
+of completions.
+We sometimes call $A^*(X)^\wedge = A^*(X \to X)^\wedge$ the
+{\it completed bivariant cohomology ring} of $X$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-envelope-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite type over $S$.
+Let $g : Y' \to Y$ be an envelope (Definition \ref{definition-envelope})
+and denote $X' = Y' \times_Y X$. Let $p \in \mathbf{Z}$ and let
+$c' \in A^p(X' \to Y')$. If the two restrictions
+$$
+res_1(c') = res_2(c') \in A^p(X' \times_X X' \to Y' \times_Y Y')
+$$
+are equal (see proof), then there exists a unique $c \in A^p(X \to Y)$
+with restriction $res(c) = c'$ in $A^p(X' \to Y')$.
+\end{lemma}
+
+\begin{proof}
+We have a commutative diagram
+$$
+\xymatrix{
+X' \times_X X' \ar[d]^{f''} \ar@<1ex>[r]^-a \ar@<-1ex>[r]_-b &
+X' \ar[d]^{f'} \ar[r]_h &
+X \ar[d]^f \\
+Y' \times_Y Y' \ar@<1ex>[r]^-p \ar@<-1ex>[r]_-q &
+Y' \ar[r]^g &
+Y
+}
+$$
+The element $res_1(c')$ is the restriction (see
+Remark \ref{remark-restriction-bivariant}) of $c'$ for the cartesian square
+with morphisms $a, f', p, f''$ and the element $res_2(c')$ is the restriction
+of $c'$ for the cartesian square with morphisms $b, f', q, f''$.
+Assume $res_1(c') = res_2(c')$ and let $\beta \in \CH_k(Y)$.
+By Lemma \ref{lemma-envelope} we can find a $\beta' \in \CH_k(Y')$
+with $g_*\beta' = \beta$. Then we set
+$$
+c \cap \beta = h_*(c' \cap \beta')
+$$
+To see that this is independent of the choice of
+$\beta'$ it suffices to show that
+$h_*(c' \cap (p_*\gamma - q_*\gamma))$ is zero
+for $\gamma \in \CH_k(Y' \times_Y Y')$.
+Since $c'$ is a bivariant class we have
+$$
+h_*(c' \cap (p_*\gamma - q_*\gamma)) =
+h_*(a_*(c' \cap \gamma) - b_*(c' \cap \gamma)) = 0
+$$
+the last equality since $h_* \circ a_* = h_* \circ b_*$
+as $h \circ a = h \circ b$.
+
+\medskip\noindent
+Observe that our choice for $c \cap \beta$ is forced
+by the requirement that $res(c) = c'$ and the compatibility
+of bivariant classes with proper pushforward.
+
+\medskip\noindent
+Of course, in order to define the bivariant class $c$ we need
+to construct maps $c \cap -: \CH_k(Y_1) \to \CH_{k - p}(Y_1 \times_Y X)$
+for any morphism $Y_1 \to Y$ locally of finite type satisfying the
+conditions listed in Definition \ref{definition-bivariant-class}.
+Denote $Y'_1 = Y' \times_Y Y_1$, $X_1 = X \times_Y Y_1$, and
+$X'_1 = X' \times_Y Y_1$.
+The morphism $Y'_1 \to Y_1$ is an envelope by
+Lemma \ref{lemma-base-change-envelope}. Hence we can use the base
+changed diagram
+$$
+\xymatrix{
+X'_1 \times_{X_1} X'_1 \ar[d]^{f''_1} \ar@<1ex>[r]^-{a_1} \ar@<-1ex>[r]_-{b_1} &
+X'_1 \ar[d]^{f'_1} \ar[r]_{h_1} &
+X_1 \ar[d]^{f_1} \\
+Y'_1 \times_{Y_1} Y'_1 \ar@<1ex>[r]^-{p_1} \ar@<-1ex>[r]_-{q_1} &
+Y'_1 \ar[r]^{g_1} &
+Y_1
+}
+$$
+and the same arguments to get a well defined map
+$c \cap - : \CH_k(Y_1) \to \CH_{k - p}(X_1)$ as before.
+
+\medskip\noindent
+Next, we have to check conditions (1), (2), and (3) of
+Definition \ref{definition-bivariant-class} for $c$.
+For example, suppose that $t : Y_2 \to Y_1$ is a proper morphism
+of schemes locally of finite type over $Y$. Denote as above
+the base changes of the first diagram to $Y_1$, resp.\ $Y_2$,
+by subscripts ${}_1$, resp.\ ${}_2$. Denote $t' : Y'_2 \to Y'_1$,
+$s : X_2 \to X_1$, and $s' : X'_2 \to X'_1$ the base changes of $t$
+to $Y'$, $X$, and $X'$. We have to show that
+$$
+s_*(c \cap \beta_2) = c \cap t_*\beta_2
+$$
+for $\beta_2 \in \CH_k(Y_2)$. Choose $\beta'_2 \in \CH_k(Y'_2)$
+with $g_{2, *}\beta'_2 = \beta_2$. Since $c'$ is a bivariant class
+and the diagrams
+$$
+\vcenter{
+\xymatrix{
+X'_2 \ar[d]_{s'} \ar[r]_{h_2} & X_2 \ar[d]^s \\
+X'_1 \ar[r]^{h_1} & X_1
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+X'_2 \ar[d]_{s'} \ar[r]_{f'_2} & Y'_2 \ar[d]^{t'} \\
+X'_1 \ar[r]^{f'_1} & Y'_1
+}
+}
+$$
+are cartesian we have
+$$
+s_*(c \cap \beta_2) =
+s_*(h_{2, *}(c' \cap \beta'_2)) =
+h_{1, *}s'_*(c' \cap \beta'_2) =
+h_{1, *}(c' \cap (t'_*\beta'_2))
+$$
+and the final expression computes $c \cap t_*\beta_2$ by construction:
+$t'_*\beta'_2 \in \CH_k(Y'_1)$ is a class whose image by $g_{1, *}$ is
+$t_*\beta_2$. This proves condition (1).
+The other conditions are proved in the same manner
+and we omit the detailed arguments.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Projective space bundle formula}
+\label{section-projective-space-bundle-formula}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Consider a finite locally free $\mathcal{O}_X$-module
+$\mathcal{E}$ of rank $r$.
+Our convention is that the {\it projective bundle associated to
+$\mathcal{E}$} is the morphism
+$$
+\xymatrix{
+\mathbf{P}(\mathcal{E}) =
+\underline{\text{Proj}}_X(\text{Sym}^*(\mathcal{E}))
+\ar[r]^-\pi
+& X
+}
+$$
+over $X$ with
+$\mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)$ normalized so that
+$\pi_*(\mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)) = \mathcal{E}$.
+In particular there is a surjection
+$\pi^*\mathcal{E} \to \mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)$.
+We will say informally ``let $(\pi : P \to X, \mathcal{O}_P(1))$
+be the projective bundle associated to $\mathcal{E}$'' to denote
+the situation where $P = \mathbf{P}(\mathcal{E})$ and
+$\mathcal{O}_P(1) = \mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)$.
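+
+\noindent
+For example, if $\mathcal{E} = \mathcal{O}_X^{\oplus r}$, then
+$\mathbf{P}(\mathcal{E}) = \mathbf{P}^{r - 1}_X$ with its usual
+Serre twist $\mathcal{O}(1)$, and indeed
+$\pi_*\mathcal{O}(1) = \mathcal{O}_X^{\oplus r}$.
+Note also that with this normalization, for an invertible
+$\mathcal{O}_X$-module $\mathcal{L}$ there is a canonical isomorphism
+$\mathbf{P}(\mathcal{E} \otimes \mathcal{L}) \cong \mathbf{P}(\mathcal{E})$
+under which $\mathcal{O}_{\mathbf{P}(\mathcal{E} \otimes \mathcal{L})}(1)$
+corresponds to
+$\mathcal{O}_{\mathbf{P}(\mathcal{E})}(1) \otimes \pi^*\mathcal{L}$.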
+
+\begin{lemma}
+\label{lemma-cap-projective-bundle}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module
+of rank $r$. Let $(\pi : P \to X, \mathcal{O}_P(1))$
+be the projective bundle associated to $\mathcal{E}$.
+For any $\alpha \in \CH_k(X)$ the element
+$$
+\pi_*\left(
+c_1(\mathcal{O}_P(1))^s \cap \pi^*\alpha
+\right)
+\in
+\CH_{k + r - 1 - s}(X)
+$$
+is $0$ if $s < r - 1$ and is equal to $\alpha$ when $s = r - 1$.
+\end{lemma}
+
+\begin{proof}
+Let $Z \subset X$ be an integral closed subscheme of $\delta$-dimension $k$.
+Note that $\pi^*[Z] = [\pi^{-1}(Z)]$ as $\pi^{-1}(Z)$ is integral of
+$\delta$-dimension $k + r - 1$.
+If $s < r - 1$, then by construction
+$c_1(\mathcal{O}_P(1))^s \cap \pi^*[Z]$
+is represented by a $(k + r - 1 - s)$-cycle supported on
+$\pi^{-1}(Z)$. Hence the pushforward of this cycle
+is zero for dimension reasons.
+
+\medskip\noindent
+Let $s = r - 1$. By the argument given above we see that
+$\pi_*(c_1(\mathcal{O}_P(1))^s \cap \pi^*[Z]) = n [Z]$
+for some $n \in \mathbf{Z}$. We want to show that $n = 1$.
+For the same dimension reasons
+as above it suffices to prove this result after replacing $X$ by
+$X \setminus T$ where $T \subset Z$ is a proper closed subset.
+Let $\xi$ be the generic point of $Z$.
+We can choose elements $e_1, \ldots, e_{r - 1} \in \mathcal{E}_\xi$
+which form part of a basis of $\mathcal{E}_\xi$.
+These give rational sections $s_1, \ldots, s_{r - 1}$
+of $\mathcal{O}_P(1)|_{\pi^{-1}(Z)}$ whose common zero set
+is the closure of the image of a rational section of
+$\mathbf{P}(\mathcal{E}|_Z) \to Z$ union a closed subset whose
+support maps to a proper closed subset $T$ of $Z$.
+After removing $T$ from $X$ (and correspondingly $\pi^{-1}(T)$
+from $P$), we see that $s_1, \ldots, s_{r - 1}$ form a sequence
+of global sections
+$s_i \in \Gamma(\pi^{-1}(Z), \mathcal{O}_{\pi^{-1}(Z)}(1))$
+whose common zero set is the image of a section $Z \to \pi^{-1}(Z)$.
+Hence we see successively that
+\begin{eqnarray*}
+\pi^*[Z] & = & [\pi^{-1}(Z)] \\
+c_1(\mathcal{O}_P(1)) \cap \pi^*[Z] & = & [Z(s_1)] \\
+c_1(\mathcal{O}_P(1))^2 \cap \pi^*[Z] & = & [Z(s_1) \cap Z(s_2)] \\
+\ldots & = & \ldots \\
+c_1(\mathcal{O}_P(1))^{r - 1} \cap \pi^*[Z] & = &
+[Z(s_1) \cap \ldots \cap Z(s_{r - 1})]
+\end{eqnarray*}
+by repeated applications of Lemma \ref{lemma-geometric-cap}.
+Since the pushforward by $\pi$ of the image of a
+section of $\pi$ over $Z$ is clearly $[Z]$ we see the result
+when $\alpha = [Z]$. We omit the verification that these
+arguments imply the result for a general cycle $\alpha = \sum n_j [Z_j]$.
+\end{proof}
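+
+\noindent
+As a sanity check, take $S = X = \Spec(k)$ for a field $k$ with
+$\delta$-dimension $0$, and $\mathcal{E} = \mathcal{O}_X^{\oplus r}$,
+so that $P = \mathbf{P}^{r - 1}_k$. For $\alpha = [X]$ the lemma says
+$$
+\pi_*\left(c_1(\mathcal{O}(1))^s \cap [\mathbf{P}^{r - 1}_k]\right)
+=
+\begin{cases}
+0 & \text{if } s < r - 1, \\
+{[X]} & \text{if } s = r - 1,
+\end{cases}
+$$
+which matches the fact that
+$c_1(\mathcal{O}(1))^s \cap [\mathbf{P}^{r - 1}_k]$ is the class of a
+linear subspace $\mathbf{P}^{r - 1 - s}_k$, whose pushforward to the
+point vanishes for dimension reasons unless $s = r - 1$.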
+
+\begin{lemma}[Projective space bundle formula]
+\label{lemma-chow-ring-projective-bundle}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module
+of rank $r$. Let $(\pi : P \to X, \mathcal{O}_P(1))$
+be the projective bundle associated to $\mathcal{E}$.
+The map
+$$
+\bigoplus\nolimits_{i = 0}^{r - 1}
+\CH_{k + i}(X)
+\longrightarrow
+\CH_{k + r - 1}(P),
+$$
+$$
+(\alpha_0, \ldots, \alpha_{r-1})
+\longmapsto
+\pi^*\alpha_0 +
+c_1(\mathcal{O}_P(1)) \cap \pi^*\alpha_1
++ \ldots +
+c_1(\mathcal{O}_P(1))^{r - 1} \cap \pi^*\alpha_{r-1}
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Fix $k \in \mathbf{Z}$. We first show the map is injective.
+Suppose that $(\alpha_0, \ldots, \alpha_{r - 1})$ is an element
+of the left hand side that maps to zero.
+By Lemma \ref{lemma-cap-projective-bundle} we see that
+$$
+0 = \pi_*(\pi^*\alpha_0 +
+c_1(\mathcal{O}_P(1)) \cap \pi^*\alpha_1
++ \ldots +
+c_1(\mathcal{O}_P(1))^{r - 1} \cap \pi^*\alpha_{r-1})
+= \alpha_{r - 1}
+$$
+Next, we see that
+$$
+0 = \pi_*(c_1(\mathcal{O}_P(1)) \cap (\pi^*\alpha_0 +
+c_1(\mathcal{O}_P(1)) \cap \pi^*\alpha_1
++ \ldots +
+c_1(\mathcal{O}_P(1))^{r - 2} \cap \pi^*\alpha_{r - 2}))
+= \alpha_{r - 2}
+$$
+and so on. Hence the map is injective.
+
+\medskip\noindent
+It remains to show the map is surjective.
+Let $X_i$, $i \in I$ be the irreducible components of $X$.
+Then $P_i = \mathbf{P}(\mathcal{E}|_{X_i})$, $i \in I$
+are the irreducible components of $P$. Consider the commutative
+diagram
+$$
+\xymatrix{
+\coprod P_i \ar[d]_{\coprod \pi_i} \ar[r]_p & P \ar[d]^\pi \\
+\coprod X_i \ar[r]^q & X
+}
+$$
+Observe that $p_*$ is surjective. If $\beta \in \CH_k(\coprod X_i)$
+then $\pi^* q_* \beta = p_*(\coprod \pi_i)^* \beta$, see
+Lemma \ref{lemma-flat-pullback-proper-pushforward}. Similarly for
+capping with $c_1(\mathcal{O}(1))$ by
+Lemma \ref{lemma-pushforward-cap-c1}.
+Hence, if the map of the lemma is surjective for each
+of the morphisms $\pi_i : P_i \to X_i$, then the map is
+surjective for $\pi : P \to X$. Hence we may assume $X$ is irreducible.
+Thus $\dim_\delta(X) < \infty$ and in particular we may use
+induction on $\dim_\delta(X)$.
+
+\medskip\noindent
+The result is clear if $\dim_\delta(X) < k$.
+Let $\alpha \in \CH_{k + r - 1}(P)$.
+For any locally closed subscheme $T \subset X$ denote
+$\gamma_T : \bigoplus \CH_{k + i}(T) \to \CH_{k + r - 1}(\pi^{-1}(T))$
+the map
+$$
+\gamma_T(\alpha_0, \ldots, \alpha_{r - 1})
+= \pi^*\alpha_0 + \ldots +
+c_1(\mathcal{O}_{\pi^{-1}(T)}(1))^{r - 1} \cap \pi^*\alpha_{r - 1}.
+$$
+Suppose for some nonempty open $U \subset X$ we have
+$\alpha|_{\pi^{-1}(U)} = \gamma_U(\alpha_0, \ldots, \alpha_{r - 1})$.
+Then we may choose lifts $\alpha'_i \in \CH_{k + i}(X)$ and we
+see that $\alpha - \gamma_X(\alpha'_0, \ldots, \alpha'_{r - 1})$
+is by Lemma \ref{lemma-restrict-to-open}
+rationally equivalent to a $(k + r - 1)$-cycle on
+$P_Y = \mathbf{P}(\mathcal{E}|_Y)$
+where $Y = X \setminus U$ as a reduced closed subscheme.
+Note that $\dim_\delta(Y) < \dim_\delta(X)$.
+By induction the result holds
+for $P_Y \to Y$ and hence the result holds for $\alpha$.
+Hence we may replace $X$ by any nonempty open of $X$.
+
+\medskip\noindent
+In particular we may assume that $\mathcal{E} \cong \mathcal{O}_X^{\oplus r}$.
+In this case $\mathbf{P}(\mathcal{E}) = X \times \mathbf{P}^{r - 1}$.
+Let us use the stratification
+$$
+\mathbf{P}^{r - 1} = \mathbf{A}^{r - 1}
+\amalg \mathbf{A}^{r - 2}
+\amalg \ldots
+\amalg \mathbf{A}^0
+$$
+The closure of each stratum is a $\mathbf{P}^{r - 1 - i}$ which is a
+representative of $c_1(\mathcal{O}(1))^i \cap [\mathbf{P}^{r - 1}]$.
+Hence $P$ has a similar stratification
+$$
+P = U^{r - 1} \amalg U^{r - 2} \amalg \ldots \amalg U^0
+$$
+Let $P^i$ be the closure of $U^i$. Let $\pi^i : P^i \to X$
+be the restriction of $\pi$ to $P^i$.
+Let $\alpha \in \CH_{k + r - 1}(P)$. By
+Lemma \ref{lemma-pullback-affine-fibres-surjective}
+we can write $\alpha|_{U^{r - 1}} = \pi^*\alpha_0|_{U^{r - 1}}$
+for some $\alpha_0 \in \CH_k(X)$. Hence the difference
+$\alpha - \pi^*\alpha_0$ is the image of some
+$\alpha' \in \CH_{k + r - 1}(P^{r - 2})$.
+By Lemma \ref{lemma-pullback-affine-fibres-surjective}
+again we can write
+$\alpha'|_{U^{r - 2}} = (\pi^{r - 2})^*\alpha_1|_{U^{r - 2}}$
+for some $\alpha_1 \in \CH_{k + 1}(X)$.
+By Lemma \ref{lemma-relative-effective-cartier}
+we see that the image of $(\pi^{r - 2})^*\alpha_1$
+represents $c_1(\mathcal{O}_P(1)) \cap \pi^*\alpha_1$.
+We also see that
+$\alpha - \pi^*\alpha_0 - c_1(\mathcal{O}_P(1)) \cap \pi^*\alpha_1$
+is the image of some $\alpha'' \in \CH_{k + r - 1}(P^{r - 3})$.
+And so on.
+\end{proof}
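+
+\noindent
+For example, applied to $X = \Spec(k)$ for a field $k$
+(with $\delta$-dimension $0$) and
+$\mathcal{E} = \mathcal{O}_X^{\oplus r}$, so that
+$P = \mathbf{P}^{r - 1}_k$, the formula gives the well known
+computation of the Chow groups of projective space:
+$$
+\CH_k(\mathbf{P}^{r - 1}_k) \cong \mathbf{Z}
+\text{ for } 0 \leq k \leq r - 1
+\quad\text{and}\quad
+\CH_k(\mathbf{P}^{r - 1}_k) = 0
+\text{ otherwise,}
+$$
+with $\CH_k(\mathbf{P}^{r - 1}_k)$ generated by the class of a linear
+subspace, namely a representative of
+$c_1(\mathcal{O}(1))^{r - 1 - k} \cap [\mathbf{P}^{r - 1}_k]$.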
+
+\begin{lemma}
+\label{lemma-vectorbundle}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free sheaf of rank $r$ on $X$.
+Let
+$$
+p :
+E = \underline{\Spec}(\text{Sym}^*(\mathcal{E}))
+\longrightarrow
+X
+$$
+be the associated vector bundle over $X$.
+Then $p^* : \CH_k(X) \to \CH_{k + r}(E)$ is an isomorphism for all $k$.
+\end{lemma}
+
+\begin{proof}
+(For the case of line bundles, see Lemma \ref{lemma-linebundle}.)
+For surjectivity see Lemma \ref{lemma-pullback-affine-fibres-surjective}.
+Let $(\pi : P \to X, \mathcal{O}_P(1))$
+be the projective space bundle associated
+to the finite locally free sheaf $\mathcal{E} \oplus \mathcal{O}_X$.
+Let $s \in \Gamma(P, \mathcal{O}_P(1))$ correspond to the global
+section $(0, 1) \in \Gamma(X, \mathcal{E} \oplus \mathcal{O}_X)$.
+Let $D = Z(s) \subset P$. Note that
+$(\pi|_D : D \to X , \mathcal{O}_P(1)|_D)$
+is the projective space bundle associated
+to $\mathcal{E}$. We denote $\pi_D = \pi|_D$ and
+$\mathcal{O}_D(1) = \mathcal{O}_P(1)|_D$.
+Moreover, $D$ is an effective
+Cartier divisor on $P$. Hence $\mathcal{O}_P(D) = \mathcal{O}_P(1)$
+(see Divisors, Lemma \ref{divisors-lemma-characterize-OD}).
+Also there is an isomorphism
+$E \cong P \setminus D$. Denote $j : E \to P$ the
+corresponding open immersion.
+For injectivity we use that the kernel of
+$$
+j^* :
+\CH_{k + r}(P)
+\longrightarrow
+\CH_{k + r}(E)
+$$
+consists of the classes of cycles supported in the effective
+Cartier divisor $D$, see Lemma \ref{lemma-restrict-to-open}.
+So if $p^*\alpha = 0$, then $\pi^*\alpha = i_*\beta$ for some
+$\beta \in \CH_{k + r}(D)$, where $i : D \to P$ denotes the
+inclusion morphism.
+By Lemma \ref{lemma-chow-ring-projective-bundle} we may write
+$$
+\beta = \pi_D^*\beta_0 +
+\ldots + c_1(\mathcal{O}_D(1))^{r - 1} \cap \pi_D^* \beta_{r - 1}.
+$$
+for some $\beta_i \in \CH_{k + i}(X)$.
+By Lemmas \ref{lemma-relative-effective-cartier}
+and \ref{lemma-pushforward-cap-c1}
+this implies
+$$
+\pi^*\alpha = i_*\beta =
+c_1(\mathcal{O}_P(1)) \cap \pi^*\beta_0 +
+\ldots +
+c_1(\mathcal{O}_P(1))^r \cap \pi^*\beta_{r - 1}.
+$$
+Since the rank of $\mathcal{E} \oplus \mathcal{O}_X$ is $r + 1$
+this contradicts Lemma \ref{lemma-chow-ring-projective-bundle} unless
+$\alpha$ and all $\beta_i$ are zero.
+\end{proof}
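+
+\noindent
+As a special case, take $X = \Spec(k)$ for a field $k$
+(with $\delta$-dimension $0$) and
+$\mathcal{E} = \mathcal{O}_X^{\oplus r}$, so that $E = \mathbf{A}^r_k$.
+The lemma then says
+$$
+\CH_k(\mathbf{A}^r_k) =
+\begin{cases}
+\mathbf{Z} \cdot [\mathbf{A}^r_k] & \text{if } k = r, \\
+0 & \text{if } k < r,
+\end{cases}
+$$
+i.e., every cycle of dimension $< r$ on affine space is rationally
+equivalent to zero.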
+
+
+
+
+
+
+
+
+\section{The Chern classes of a vector bundle}
+\label{section-chern-classes-vector-bundles}
+
+\noindent
+We can use the projective space bundle formula to define the
+Chern classes of a rank $r$ vector bundle in terms of the expansion
+of $c_1(\mathcal{O}(1))^r$ in terms of the lower powers, see
+formula (\ref{equation-chern-classes}).
+The reason for the signs will be explained later.
+
+\begin{definition}
+\label{definition-chern-classes}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Assume $X$ is integral and $n = \dim_\delta(X)$.
+Let $\mathcal{E}$ be a finite locally free sheaf of rank $r$
+on $X$. Let $(\pi : P \to X, \mathcal{O}_P(1))$ be the projective space
+bundle associated to $\mathcal{E}$.
+\begin{enumerate}
+\item By Lemma \ref{lemma-chow-ring-projective-bundle} there are
+elements $c_i \in \CH_{n - i}(X)$, $i = 0, \ldots, r$
+such that $c_0 = [X]$, and
+\begin{equation}
+\label{equation-chern-classes}
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_P(1))^i \cap \pi^*c_{r - i}
+= 0.
+\end{equation}
+\item With notation as above we set
+$c_i(\mathcal{E}) \cap [X] = c_i$
+as an element of $\CH_{n - i}(X)$.
+We call these the {\it Chern classes of $\mathcal{E}$ on $X$}.
+\item The {\it total Chern class of $\mathcal{E}$ on $X$}
+is the combination
+$$
+c({\mathcal E}) \cap [X] =
+c_0({\mathcal E}) \cap [X]
++ c_1({\mathcal E}) \cap [X] + \ldots
++ c_r({\mathcal E}) \cap [X]
+$$
+which is an element of
+$\CH_*(X) = \bigoplus_{k \in \mathbf{Z}} \CH_k(X)$.
+\end{enumerate}
+\end{definition}
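+
+\noindent
+To unwind the definition in a small case, suppose $\mathcal{E}$ has
+rank $r = 2$, so $\pi : P \to X$ has fibres $\mathbf{P}^1$. Then
+(\ref{equation-chern-classes}) reads
+$$
+\pi^*c_2 - c_1(\mathcal{O}_P(1)) \cap \pi^*c_1
++ c_1(\mathcal{O}_P(1))^2 \cap \pi^*[X] = 0,
+$$
+in other words the Chern classes $c_1(\mathcal{E}) \cap [X]$ and
+$c_2(\mathcal{E}) \cap [X]$ are exactly the coefficients needed to
+express $c_1(\mathcal{O}_P(1))^2 \cap \pi^*[X]$ in terms of the
+lower powers of $c_1(\mathcal{O}_P(1))$ via the projective space
+bundle formula.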
+
+\noindent
+Let us check that this does not give a new notion in case the
+vector bundle has rank $1$.
+
+\begin{lemma}
+\label{lemma-first-chern-class}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Assume $X$ is integral and $n = \dim_\delta(X)$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+The first Chern class of $\mathcal{L}$ on $X$ of
+Definition \ref{definition-chern-classes}
+is equal to the Weil divisor associated to $\mathcal{L}$
+by Definition \ref{definition-divisor-invertible-sheaf}.
+\end{lemma}
+
+\begin{proof}
+In this proof we use $c_1(\mathcal{L}) \cap [X]$ to denote the
+construction of Definition \ref{definition-divisor-invertible-sheaf}.
+Since $\mathcal{L}$ has rank $1$ we have
+$\mathbf{P}(\mathcal{L}) = X$ and
+$\mathcal{O}_{\mathbf{P}(\mathcal{L})}(1) = \mathcal{L}$
+by our normalizations. Hence (\ref{equation-chern-classes})
+reads
+$$
+(-1)^1 c_1(\mathcal{L}) \cap c_0 + (-1)^0 c_1 = 0
+$$
+Since $c_0 = [X]$, we conclude $c_1 = c_1(\mathcal{L}) \cap [X]$
+as desired.
+\end{proof}
+
+\begin{remark}
+\label{remark-equation-signs}
+We could also rewrite equation (\ref{equation-chern-classes}) as
+\begin{equation}
+\label{equation-signs}
+\sum\nolimits_{i = 0}^r
+c_1(\mathcal{O}_P(-1))^i \cap \pi^*c_{r - i}
+= 0
+\end{equation}
+but we find it easier to work with the tautological quotient
+sheaf $\mathcal{O}_P(1)$ instead of
+its dual.
+\end{remark}
+
+
+
+
+\section{Intersecting with Chern classes}
+\label{section-intersecting-chern-classes}
+
+\noindent
+In this section we define Chern classes of vector bundles on $X$ as
+bivariant classes on $X$, see Lemma \ref{lemma-cap-cp-bivariant}
+and the discussion following this lemma. Our construction follows the familiar
+pattern of first defining the operation on prime cycles and then
+summing. In Lemma \ref{lemma-determine-intersections} we show
+that the result is determined by the usual formula on the associated
+projective bundle. Next, we show that capping with Chern classes
+passes through rational equivalence, commutes with proper pushforward,
+commutes with flat pullback, and commutes with the gysin maps for
+inclusions of effective Cartier divisors. These lemmas could have been
+avoided by directly using the characterization in
+Lemma \ref{lemma-determine-intersections} and using
+Lemma \ref{lemma-push-proper-bivariant}; the reader who wishes to
+see this worked out should consult
+Chow Groups of Spaces, Lemma \ref{spaces-chow-lemma-segre-classes}.
+
+\begin{definition}
+\label{definition-cap-chern-classes}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free sheaf of rank $r$ on $X$.
+We define, for every integer $k$ and any $0 \leq j \leq r$,
+an operation
+$$
+c_j(\mathcal{E}) \cap - : Z_k(X) \to \CH_{k - j}(X)
+$$
+called {\it intersection with the $j$th Chern class of $\mathcal{E}$}.
+\begin{enumerate}
+\item Given an integral closed subscheme $i : W \to X$ of $\delta$-dimension
+$k$ we define
+$$
+c_j(\mathcal{E}) \cap [W] = i_*(c_j({i^*\mathcal{E}}) \cap [W])
+\in
+\CH_{k - j}(X)
+$$
+where $c_j({i^*\mathcal{E}}) \cap [W]$ is as defined in
+Definition \ref{definition-chern-classes}.
+\item For a general $k$-cycle $\alpha = \sum n_i [W_i]$ we set
+$$
+c_j(\mathcal{E}) \cap \alpha = \sum n_i c_j(\mathcal{E}) \cap [W_i]
+$$
+\end{enumerate}
+\end{definition}
+
+\noindent
+If $\mathcal{E}$ has rank $1$ then this agrees with our
+previous definition (Definition \ref{definition-cap-c1})
+by Lemma \ref{lemma-first-chern-class}.
+
+\begin{lemma}
+\label{lemma-determine-intersections}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free sheaf of rank $r$ on $X$.
+Let $(\pi : P \to X, \mathcal{O}_P(1))$ be the projective bundle
+associated to $\mathcal{E}$.
+For $\alpha \in Z_k(X)$ the elements
+$c_j(\mathcal{E}) \cap \alpha$ are the unique elements
+$\alpha_j$ of $\CH_{k - j}(X)$
+such that $\alpha_0 = \alpha$ and
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_P(1))^i \cap
+\pi^*(\alpha_{r - i}) = 0
+$$
+holds in the Chow group of $P$.
+\end{lemma}
+
+\begin{proof}
+The uniqueness of $\alpha_0, \ldots, \alpha_r$ such that
+$\alpha_0 = \alpha$ and such that
+the displayed equation holds follows from
+the projective space bundle formula
+Lemma \ref{lemma-chow-ring-projective-bundle}.
+The identity holds by definition for $\alpha = [W]$ where $W$
+is an integral closed subscheme of $X$.
+For a general $k$-cycle $\alpha$ on $X$ write
+$\alpha = \sum n_a[W_a]$ with $n_a \not = 0$, and
+$i_a : W_a \to X$ pairwise distinct integral closed subschemes.
+Then the family $\{W_a\}$ is locally finite on $X$.
+Set $P_a = \pi^{-1}(W_a) = \mathbf{P}(\mathcal{E}|_{W_a})$.
+Denote $i'_a : P_a \to P$ the corresponding closed immersions.
+Consider the fibre product diagram
+$$
+\xymatrix{
+P' \ar@{=}[r] \ar[d]_{\pi'} &
+\coprod P_a \ar[d]_{\coprod \pi_a} \ar[r]_{\coprod i'_a} &
+P \ar[d]^\pi \\
+X' \ar@{=}[r] &
+\coprod W_a \ar[r]^{\coprod i_a} &
+X
+}
+$$
+The morphism $p : X' \to X$ is proper. Moreover
+$\pi' : P' \to X'$ together with the invertible sheaf
+$\mathcal{O}_{P'}(1) = \coprod \mathcal{O}_{P_a}(1)$
+which is also the pullback of $\mathcal{O}_P(1)$
+is the projective bundle associated to
+$\mathcal{E}' = p^*\mathcal{E}$. By definition
+$$
+c_j(\mathcal{E}) \cap \alpha
+=
+\sum i_{a, *}(c_j(\mathcal{E}|_{W_a}) \cap [W_a]).
+$$
+Write $\beta_{a, j} = c_j(\mathcal{E}|_{W_a}) \cap [W_a]$
+which is an element of $\CH_{k - j}(W_a)$. We have
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_{P_a}(1))^i \cap \pi_a^*(\beta_{a, r - i}) = 0
+$$
+for each $a$ by definition. Thus clearly we have
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_{P'}(1))^i \cap (\pi')^*(\beta_{r - i}) = 0
+$$
+with $\beta_j = \sum n_a\beta_{a, j} \in \CH_{k - j}(X')$. Denote
+$p' : P' \to P$ the morphism $\coprod i'_a$.
+We have $\pi^*p_*\beta_j = p'_*(\pi')^*\beta_j$
+by Lemma \ref{lemma-flat-pullback-proper-pushforward}.
+By the projection formula of Lemma \ref{lemma-pushforward-cap-c1}
+we conclude that
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_P(1))^i \cap \pi^*(p_*\beta_{r - i}) = 0
+$$
+Since $p_*\beta_j$ is a representative of $c_j(\mathcal{E}) \cap \alpha$
+we win.
+\end{proof}
+
+\noindent
+We will consistently use this characterization of Chern classes
+to prove many more properties.
+
+\begin{lemma}
+\label{lemma-cap-chern-class-factors-rational-equivalence}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free sheaf of rank $r$ on $X$.
+If $\alpha \sim_{rat} \beta$ are rationally equivalent $k$-cycles
+on $X$ then $c_j(\mathcal{E}) \cap \alpha = c_j(\mathcal{E}) \cap \beta$
+in $\CH_{k - j}(X)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-determine-intersections} the elements
+$\alpha_j = c_j(\mathcal{E}) \cap \alpha$, $j \geq 1$ and
+$\beta_j = c_j(\mathcal{E}) \cap \beta$, $j \geq 1$ are uniquely determined
+by the {\it same} equation in the chow group of the projective
+bundle associated to $\mathcal{E}$. (This of course relies on the fact that
+flat pullback is compatible with rational equivalence, see
+Lemma \ref{lemma-flat-pullback-rational-equivalence}.) Hence they are equal.
+\end{proof}
+
+\noindent
+In other words capping with Chern classes of
+finite locally free sheaves factors through rational equivalence
+to give maps
+$$
+c_j(\mathcal{E}) \cap - : \CH_k(X) \to \CH_{k - j}(X).
+$$
+Our next task is to show that Chern classes are bivariant classes, see
+Definition \ref{definition-bivariant-class}.
+
+\begin{lemma}
+\label{lemma-pushforward-cap-cj}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $p : X \to Y$ be a proper morphism.
+Let $\alpha$ be a $k$-cycle on $X$.
+Let $\mathcal{E}$ be a finite locally free sheaf of rank $r$ on $Y$.
+Then
+$$
+p_*(c_j(p^*\mathcal{E}) \cap \alpha) = c_j(\mathcal{E}) \cap p_*\alpha
+$$
+\end{lemma}
+
+\begin{proof}
+Let $(\pi : P \to Y, \mathcal{O}_P(1))$ be the projective bundle associated
+to $\mathcal{E}$. Then $P_X = X \times_Y P$ is the projective bundle associated
+to $p^*\mathcal{E}$ and $\mathcal{O}_{P_X}(1)$ is the pullback of
+$\mathcal{O}_P(1)$. Write $\alpha_j = c_j(p^*\mathcal{E}) \cap \alpha$, so
+$\alpha_0 = \alpha$. By Lemma \ref{lemma-determine-intersections} we have
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_{P_X}(1))^i \cap
+\pi_X^*(\alpha_{r - i}) = 0
+$$
+in the chow group of $P_X$. Consider the fibre product diagram
+$$
+\xymatrix{
+P_X \ar[r]_-{p'} \ar[d]_{\pi_X} & P \ar[d]^\pi \\
+X \ar[r]^p & Y
+}
+$$
+Apply proper pushforward $p'_*$
+(Lemma \ref{lemma-proper-pushforward-rational-equivalence})
+to the displayed equality above. Using
+Lemmas \ref{lemma-pushforward-cap-c1} and
+\ref{lemma-flat-pullback-proper-pushforward} we obtain
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_P(1))^i \cap
+\pi^*(p_*\alpha_{r - i}) = 0
+$$
+in the chow group of $P$. By the characterization of
+Lemma \ref{lemma-determine-intersections} we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-pullback-cap-cj}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$, $Y$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free sheaf of rank $r$ on $Y$.
+Let $f : X \to Y$ be a flat morphism of relative dimension $s$.
+Let $\alpha$ be a $k$-cycle on $Y$.
+Then
+$$
+f^*(c_j(\mathcal{E}) \cap \alpha) = c_j(f^*\mathcal{E}) \cap f^*\alpha
+$$
+\end{lemma}
+
+\begin{proof}
+Write $\alpha_j = c_j(\mathcal{E}) \cap \alpha$, so $\alpha_0 = \alpha$.
+By Lemma \ref{lemma-determine-intersections} we have
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_P(1))^i \cap
+\pi^*(\alpha_{r - i}) = 0
+$$
+in the chow group of the projective bundle
+$(\pi : P \to Y, \mathcal{O}_P(1))$
+associated to $\mathcal{E}$. Consider the fibre product diagram
+$$
+\xymatrix{
+P_X = \mathbf{P}(f^*\mathcal{E}) \ar[r]_-{f'} \ar[d]_{\pi_X} &
+P \ar[d]^\pi \\
+X \ar[r]^f & Y
+}
+$$
+Note that $\mathcal{O}_{P_X}(1)$ is the pullback of $\mathcal{O}_P(1)$.
+Apply flat pullback $(f')^*$
+(Lemma \ref{lemma-flat-pullback-rational-equivalence}) to the displayed
+equation above. By Lemmas \ref{lemma-flat-pullback-cap-c1} and
+\ref{lemma-compose-flat-pullback} we see that
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_{P_X}(1))^i \cap
+\pi_X^*(f^*\alpha_{r - i}) = 0
+$$
+holds in the chow group of $P_X$. By the characterization of
+Lemma \ref{lemma-determine-intersections} we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cap-chern-class-commutes-with-gysin}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free sheaf of rank $r$ on $X$.
+Let $(\mathcal{L}, s, i : D \to X)$ be as in
+Definition \ref{definition-gysin-homomorphism}.
+Then $c_j(\mathcal{E}|_D) \cap i^*\alpha = i^*(c_j(\mathcal{E}) \cap \alpha)$
+for all $\alpha \in \CH_k(X)$.
+\end{lemma}
+
+\begin{proof}
+Write $\alpha_j = c_j(\mathcal{E}) \cap \alpha$, so $\alpha_0 = \alpha$.
+By Lemma \ref{lemma-determine-intersections} we have
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_P(1))^i \cap
+\pi^*(\alpha_{r - i}) = 0
+$$
+in the chow group of the projective bundle
+$(\pi : P \to X, \mathcal{O}_P(1))$
+associated to $\mathcal{E}$. Consider the fibre product diagram
+$$
+\xymatrix{
+P_D = \mathbf{P}(\mathcal{E}|_D) \ar[r]_-{i'} \ar[d]_{\pi_D} &
+P \ar[d]^\pi \\
+D \ar[r]^i & X
+}
+$$
+Note that $\mathcal{O}_{P_D}(1)$ is the pullback of $\mathcal{O}_P(1)$.
+Apply the gysin map $(i')^*$ (Lemma \ref{lemma-gysin-factors}) to the
+displayed equation above.
+Applying Lemmas \ref{lemma-gysin-commutes-cap-c1} and
+\ref{lemma-gysin-flat-pullback} we obtain
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_{P_D}(1))^i \cap
+\pi_D^*(i^*\alpha_{r - i}) = 0
+$$
+in the chow group of $P_D$.
+By the characterization of Lemma \ref{lemma-determine-intersections}
+we conclude.
+\end{proof}
+
+\noindent
+At this point we have enough material to be able to prove that
+capping with Chern classes defines a bivariant class.
+
+\begin{lemma}
+\label{lemma-cap-cp-bivariant}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a locally free $\mathcal{O}_X$-module
+of rank $r$. Let $0 \leq p \leq r$.
+Then the rule that to $f : X' \to X$ assigns
+$c_p(f^*\mathcal{E}) \cap - : \CH_k(X') \to \CH_{k - p}(X')$
+is a bivariant class of degree $p$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemmas
+\ref{lemma-cap-chern-class-factors-rational-equivalence},
+\ref{lemma-pushforward-cap-cj},
+\ref{lemma-flat-pullback-cap-cj}, and
+\ref{lemma-cap-chern-class-commutes-with-gysin}
+and Definition \ref{definition-bivariant-class}.
+\end{proof}
+
+\noindent
+This lemma allows us to define the Chern classes of a finite
+locally free module as follows.
+
+\begin{definition}
+\label{definition-chern-classes-final}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a locally free $\mathcal{O}_X$-module
+of rank $r$. For $i = 0, \ldots, r$ the {\it $i$th Chern class}
+of $\mathcal{E}$ is the bivariant class
+$c_i(\mathcal{E}) \in A^i(X)$ of degree $i$
+constructed in Lemma \ref{lemma-cap-cp-bivariant}. The
+{\it total Chern class} of $\mathcal{E}$ is the formal sum
+$$
+c(\mathcal{E}) =
+c_0(\mathcal{E}) + c_1(\mathcal{E}) + \ldots + c_r(\mathcal{E})
+$$
+which is viewed as a nonhomogeneous bivariant class on $X$.
+\end{definition}
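+
+\noindent
+For example, if $\mathcal{E} = \mathcal{L}$ is an invertible
+$\mathcal{O}_X$-module, then
+$$
+c(\mathcal{L}) = 1 + c_1(\mathcal{L})
+$$
+in $A^*(X)$, where $1 = c_0(\mathcal{L}) \in A^0(X)$ acts as the
+identity on Chow groups.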
+
+\noindent
+By the remark following Definition \ref{definition-cap-chern-classes}
+if $\mathcal{E}$ is invertible, then this definition agrees with
+Definition \ref{definition-first-chern-class}.
+Next we see that Chern classes are in the center of the bivariant
+Chow cohomology ring $A^*(X)$.
+
+\begin{lemma}
+\label{lemma-cap-commutative-chern}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a locally free $\mathcal{O}_X$-module of rank $r$.
+Then
+\begin{enumerate}
+\item $c_j(\mathcal{E}) \in A^j(X)$ is in the center of $A^*(X)$ and
+\item if $f : X' \to X$ is locally of finite type and $c \in A^*(X' \to X)$,
+then $c \circ c_j(\mathcal{E}) = c_j(f^*\mathcal{E}) \circ c$.
+\end{enumerate}
+In particular, if $\mathcal{F}$ is a second locally free
+$\mathcal{O}_X$-module on $X$ of rank $s$, then
+$$
+c_i(\mathcal{E}) \cap c_j(\mathcal{F}) \cap \alpha
+=
+c_j(\mathcal{F}) \cap c_i(\mathcal{E}) \cap \alpha
+$$
+as elements of $\CH_{k - i - j}(X)$ for all $\alpha \in \CH_k(X)$.
+\end{lemma}
+
+\begin{proof}
+It is immediate that (2) implies (1).
+Let $\alpha \in \CH_k(X)$. Write $\alpha_j = c_j(\mathcal{E}) \cap \alpha$, so
+$\alpha_0 = \alpha$. By Lemma \ref{lemma-determine-intersections} we have
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_P(1))^i \cap
+\pi^*(\alpha_{r - i}) = 0
+$$
+in the chow group of the projective bundle
+$(\pi : P \to X, \mathcal{O}_P(1))$
+associated to $\mathcal{E}$. Denote $\pi' : P' \to X'$ the base change
+of $\pi$ by $f$. Using Lemma \ref{lemma-c1-center} and
+the properties of bivariant classes we obtain
+\begin{align*}
+0 & = c \cap \left(\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_P(1))^i \cap
+\pi^*(\alpha_{r - i})\right) \\
+& =
+\sum\nolimits_{i = 0}^r
+(-1)^i c_1(\mathcal{O}_{P'}(1))^i \cap
+(\pi')^*(c \cap \alpha_{r - i})
+\end{align*}
+in the Chow group of $P'$ (calculation omitted).
+Hence we see that $c \cap \alpha_j$ is
+equal to $c_j(f^*\mathcal{E}) \cap (c \cap \alpha)$ by the characterization
+of Lemma \ref{lemma-determine-intersections}.
+This proves the lemma.
+\end{proof}
+
+\begin{remark}
+\label{remark-extend-to-finite-locally-free}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module.
+If the rank of $\mathcal{E}$ is not constant then we can
+still define the Chern classes of $\mathcal{E}$. Namely, in this
+case we can write
+$$
+X = X_0 \amalg X_1 \amalg X_2 \amalg \ldots
+$$
where $X_r \subset X$ is the open and closed subscheme where
+the rank of $\mathcal{E}$ is $r$. By
+Lemma \ref{lemma-disjoint-decomposition-bivariant}
+we have $A^p(X) = \prod A^p(X_r)$.
+Hence we can define $c_p(\mathcal{E})$ to be the
+product of the classes $c_p(\mathcal{E}|_{X_r})$ in $A^p(X_r)$.
+Explicitly, if $X' \to X$ is a morphism locally of finite type,
+then we obtain by pullback a corresponding decomposition of $X'$
+and we find that
+$$
+\CH_*(X') = \prod\nolimits_{r \geq 0} \CH_*(X'_r)
+$$
+by our definitions. Then $c_p(\mathcal{E}) \in A^p(X)$
+is the bivariant class which preserves these direct
+product decompositions and acts by the already defined
+operations $c_i(\mathcal{E}|_{X_r}) \cap -$
+on the factors. Observe that in this setting it may happen
+that $c_p(\mathcal{E})$ is nonzero for infinitely many $p$.
It follows that the total Chern class is an element
+$$
+c(\mathcal{E}) =
+c_0(\mathcal{E}) + c_1(\mathcal{E}) + c_2(\mathcal{E}) + \ldots
+\in A^*(X)^\wedge
+$$
+of the completed bivariant cohomology ring, see
+Remark \ref{remark-completion-bivariant}.
In this setting we define the ``rank'' of $\mathcal{E}$
to be the element $r(\mathcal{E}) \in A^0(X)$
given by the bivariant operation which sends
$(\alpha_r) \in \prod \CH_*(X'_r)$
to $(r\alpha_r) \in \prod \CH_*(X'_r)$.
+Note that it is still true that $c_p(\mathcal{E})$ and $r(\mathcal{E})$
+are in the center of $A^*(X)$.
+\end{remark}
+
+\begin{remark}
+\label{remark-top-chern-class}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$
+be locally of finite type over $S$. Let $\mathcal{E}$ be a
+finite locally free $\mathcal{O}_X$-module. In general
+we write $X = \coprod X_r$ as in
+Remark \ref{remark-extend-to-finite-locally-free}.
+If only a finite number of the $X_r$ are nonempty, then
+we can set
+$$
+c_{top}(\mathcal{E}) = \sum\nolimits_r c_r(\mathcal{E}|_{X_r})
+\in A^*(X) = \bigoplus A^*(X_r)
+$$
+where the equality is Lemma \ref{lemma-disjoint-decomposition-bivariant}.
+If infinitely many $X_r$ are nonempty, we will use the same
+notation to denote
+$$
+c_{top}(\mathcal{E}) = \prod c_r(\mathcal{E}|_{X_r})
+\in \prod A^r(X_r) \subset A^*(X)^\wedge
+$$
+see Remark \ref{remark-completion-bivariant} for notation.
+\end{remark}
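\noindent
For example, if $\mathcal{E}$ has constant rank $r$, then $X = X_r$ and
$$
c_{top}(\mathcal{E}) = c_r(\mathcal{E}) \in A^r(X)
$$
recovering the top Chern class in the usual sense.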
+
+
+
+
+
+
+
+
+
+
+
+\section{Polynomial relations among Chern classes}
+\label{section-relations-chern-classes}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be locally
+of finite type over $S$. Let $\mathcal{E}_i$ be a finite collection of finite
+locally free sheaves on $X$. By Lemma \ref{lemma-cap-commutative-chern}
+we see that the Chern classes
+$$
+c_j(\mathcal{E}_i) \in A^*(X)
+$$
+generate a commutative (and even central) $\mathbf{Z}$-subalgebra of the
+Chow cohomology algebra $A^*(X)$.
+Thus we can say what it means for a polynomial in these Chern classes
+to be zero, or for two polynomials to be the same. As an example, saying that
+$c_1(\mathcal{E}_1)^5 + c_2(\mathcal{E}_2)c_3(\mathcal{E}_3) = 0$
+means that the operations
+$$
+\CH_k(Y) \longrightarrow \CH_{k - 5}(Y), \quad
+\alpha \longmapsto
+c_1(\mathcal{E}_1)^5 \cap \alpha +
+c_2(\mathcal{E}_2) \cap c_3(\mathcal{E}_3) \cap \alpha
+$$
+are zero for all morphisms $f : Y \to X$ which are locally of finite type.
+By Lemma \ref{lemma-bivariant-zero}
+this is equivalent to the requirement that given any morphism
+$f : Y \to X$ where $Y$ is an integral scheme
+locally of finite type over $S$ the cycle
+$$
+c_1(\mathcal{E}_1)^5 \cap [Y] +
+c_2(\mathcal{E}_2) \cap c_3(\mathcal{E}_3) \cap [Y]
+$$
+is zero in $\CH_{\dim(Y) - 5}(Y)$.
+
+\medskip\noindent
+A specific example is the relation
+$$
+c_1(\mathcal{L} \otimes_{\mathcal{O}_X} \mathcal{N})
+=
+c_1(\mathcal{L}) + c_1(\mathcal{N})
+$$
+proved in Lemma \ref{lemma-c1-cap-additive}.
+More generally, here is what happens when we tensor an
+arbitrary locally free sheaf by an invertible sheaf.
+
+\begin{lemma}
+\label{lemma-chern-classes-E-tensor-L}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free sheaf of
+rank $r$ on $X$. Let $\mathcal{L}$ be an invertible
+sheaf on $X$. Then we have
+\begin{equation}
+\label{equation-twist}
+c_i({\mathcal E} \otimes {\mathcal L})
+=
+\sum\nolimits_{j = 0}^i
+\binom{r - i + j}{j} c_{i - j}({\mathcal E}) c_1({\mathcal L})^j
+\end{equation}
+in $A^*(X)$.
+\end{lemma}
+
+\begin{proof}
The relation (\ref{equation-twist}) should hold for any triple
$(X, \mathcal{E}, \mathcal{L})$. In particular it should hold when
$X$ is integral, and by Lemma \ref{lemma-bivariant-zero}
it is enough to prove that it holds after
capping with $[X]$ for such $X$. Thus assume
+that $X$ is integral. Let $(\pi : P \to X, \mathcal{O}_P(1))$,
+resp.\ $(\pi' : P' \to X, \mathcal{O}_{P'}(1))$ be the
+projective space bundle associated to $\mathcal{E}$,
+resp.\ $\mathcal{E} \otimes \mathcal{L}$. Consider the canonical morphism
+$$
+\xymatrix{
+P \ar[rd]_\pi \ar[rr]_g & & P' \ar[ld]^{\pi'} \\
+& X &
+}
+$$
+see Constructions, Lemma \ref{constructions-lemma-twisting-and-proj}.
+It has the property that
+$g^*\mathcal{O}_{P'}(1)
+= \mathcal{O}_P(1) \otimes \pi^* {\mathcal L}$.
+This means that we have
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i
+(\xi + x)^i \cap \pi^*(c_{r - i}(\mathcal{E} \otimes \mathcal{L}) \cap [X])
+=
+0
+$$
+in $\CH_*(P)$, where $\xi$ represents
+$c_1(\mathcal{O}_P(1))$ and $x$
+represents $c_1(\pi^*\mathcal{L})$. By simple algebra this
+is equivalent to
+$$
+\sum\nolimits_{i = 0}^r
+(-1)^i \xi^i \left(
+\sum\nolimits_{j = i}^r
+(-1)^{j - i}
+\binom{j}{i}
+x^{j - i} \cap
+\pi^*(c_{r - j}(\mathcal{E} \otimes \mathcal{L}) \cap [X])
+\right)
+=
+0
+$$
+Comparing with
+Equation (\ref{equation-chern-classes}) it follows from this that
+$$
+c_{r - i}(\mathcal{E}) \cap [X] =
+\sum\nolimits_{j = i}^r
+\binom{j}{i}
+(-c_1(\mathcal{L}))^{j - i} \cap
+c_{r - j}(\mathcal{E} \otimes \mathcal{L}) \cap [X]
+$$
+Reworking this (getting rid of minus signs, and renumbering) we get
+the desired relation.
+\end{proof}
+
+\noindent
+Some example cases of (\ref{equation-twist}) are
+\begin{align*}
+c_1(\mathcal{E} \otimes \mathcal{L})
+& =
+c_1(\mathcal{E}) +
+r c_1(\mathcal{L}) \\
+c_2(\mathcal{E} \otimes \mathcal{L})
+& =
+c_2(\mathcal{E}) +
+(r - 1) c_1(\mathcal{E}) c_1(\mathcal{L}) +
+\binom{r}{2} c_1(\mathcal{L})^2 \\
+c_3(\mathcal{E} \otimes \mathcal{L})
+& =
+c_3(\mathcal{E}) +
+(r - 2) c_2(\mathcal{E})c_1(\mathcal{L}) +
+\binom{r - 1}{2} c_1(\mathcal{E})c_1(\mathcal{L})^2 +
+\binom{r}{3} c_1(\mathcal{L})^3
+\end{align*}
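\medskip\noindent
As a consistency check, assuming the splitting principle of
Section \ref{section-splitting-principle} one can recover
(\ref{equation-twist}) symbolically: if
$c(\mathcal{E}) = \prod_{k = 1}^r (1 + x_k)$ with Chern roots $x_k$
and $t = c_1(\mathcal{L})$, then the Chern roots of
$\mathcal{E} \otimes \mathcal{L}$ are $x_1 + t, \ldots, x_r + t$ and
$$
c_i(\mathcal{E} \otimes \mathcal{L})
=
e_i(x_1 + t, \ldots, x_r + t)
=
\sum\nolimits_{j = 0}^i
\binom{r - i + j}{j} e_{i - j}(x_1, \ldots, x_r)\, t^j
$$
where $e_i$ denotes the $i$th elementary symmetric polynomial.
Indeed, a term $e_{i - j}(x_1, \ldots, x_r)\, t^j$ arises by first
choosing $i - j$ of the factors to contribute an $x_k$ and then choosing
$j$ of the remaining $r - i + j$ factors to contribute a $t$.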
+
+
+
+
+
+
+
+
+\section{Additivity of Chern classes}
+\label{section-additivity-chern-classes}
+
+\noindent
+All of the preliminary lemmas follow trivially from the
+final result.
+
+\begin{lemma}
+\label{lemma-get-rid-of-trivial-subbundle}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$, $\mathcal{F}$ be finite locally free sheaves
+on $X$ of ranks $r$, $r - 1$ which fit into a short
+exact sequence
+$$
+0 \to \mathcal{O}_X \to \mathcal{E} \to \mathcal{F} \to 0
+$$
+Then we have
+$$
+c_r(\mathcal{E}) = 0, \quad
+c_j(\mathcal{E}) = c_j(\mathcal{F}), \quad j = 0, \ldots, r - 1
+$$
+in $A^*(X)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-bivariant-zero}
+it suffices to show that if $X$ is integral
+then $c_j(\mathcal{E}) \cap [X] = c_j(\mathcal{F}) \cap [X]$.
+Let $(\pi : P \to X, \mathcal{O}_P(1))$,
+resp.\ $(\pi' : P' \to X, \mathcal{O}_{P'}(1))$ denote the
+projective space bundle associated to $\mathcal{E}$, resp.\ $\mathcal{F}$.
+The surjection $\mathcal{E} \to \mathcal{F}$ gives rise
+to a closed immersion
+$$
+i : P' \longrightarrow P
+$$
+over $X$. Moreover, the element
+$1 \in \Gamma(X, \mathcal{O}_X) \subset \Gamma(X, \mathcal{E})$
+gives rise to a global section $s \in \Gamma(P, \mathcal{O}_P(1))$
+whose zero set is exactly $P'$. Hence $P'$ is an effective Cartier
+divisor on $P$ such that $\mathcal{O}_P(P') \cong \mathcal{O}_P(1)$.
+Hence we see that
+$$
+c_1(\mathcal{O}_P(1)) \cap \pi^*\alpha = i_*((\pi')^*\alpha)
+$$
+for any cycle class $\alpha$ on $X$ by
+Lemma \ref{lemma-relative-effective-cartier}.
+By Lemma \ref{lemma-determine-intersections} we see that
+$\alpha_j = c_j(\mathcal{F}) \cap [X]$, $j = 0, \ldots, r - 1$
+satisfy
+$$
+\sum\nolimits_{j = 0}^{r - 1} (-1)^jc_1(\mathcal{O}_{P'}(1))^j
+\cap (\pi')^*\alpha_j = 0
+$$
+Pushing this to $P$ and using the remark above as well as
+Lemma \ref{lemma-pushforward-cap-c1} we get
+$$
+\sum\nolimits_{j = 0}^{r - 1}
+(-1)^j c_1(\mathcal{O}_P(1))^{j + 1}
+\cap \pi^*\alpha_j = 0
+$$
+By the uniqueness of Lemma \ref{lemma-determine-intersections}
+we conclude that
+$c_r(\mathcal{E}) \cap [X] = 0$ and
+$c_j(\mathcal{E}) \cap [X] = \alpha_j = c_j(\mathcal{F}) \cap [X]$
+for $j = 0, \ldots, r - 1$. Hence the lemma holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-additivity-invertible-subsheaf}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$, $\mathcal{F}$ be finite locally free sheaves
+on $X$ of ranks $r$, $r - 1$ which fit into a short
+exact sequence
+$$
+0 \to \mathcal{L} \to \mathcal{E} \to \mathcal{F} \to 0
+$$
+where $\mathcal{L}$ is an invertible sheaf.
+Then
+$$
+c(\mathcal{E}) = c(\mathcal{L}) c(\mathcal{F})
+$$
+in $A^*(X)$.
+\end{lemma}
+
+\begin{proof}
+This relation really just says that
+$c_i(\mathcal{E}) = c_i(\mathcal{F}) + c_1(\mathcal{L})c_{i - 1}(\mathcal{F})$.
+By Lemma \ref{lemma-get-rid-of-trivial-subbundle}
+we have $c_j(\mathcal{E} \otimes \mathcal{L}^{\otimes -1})
+= c_j(\mathcal{F} \otimes \mathcal{L}^{\otimes -1})$ for
$j = 0, \ldots, r$ where we set
$c_r(\mathcal{F} \otimes \mathcal{L}^{\otimes -1}) = 0$ by convention.
+Applying Lemma \ref{lemma-chern-classes-E-tensor-L} we deduce
+$$
+\sum_{j = 0}^i
+\binom{r - i + j}{j} (-1)^j c_{i - j}({\mathcal E}) c_1({\mathcal L})^j
+=
+\sum_{j = 0}^i
+\binom{r - 1 - i + j}{j} (-1)^j c_{i - j}({\mathcal F}) c_1({\mathcal L})^j
+$$
+Setting
+$c_i(\mathcal{E}) = c_i(\mathcal{F}) + c_1(\mathcal{L})c_{i - 1}(\mathcal{F})$
gives a ``solution'' of this equation. The lemma follows because
this is the only possible solution: the equation for a given $i$
expresses $c_i(\mathcal{E})$ (the term with $j = 0$) in terms of the
classes $c_{i'}(\mathcal{E})$ with $i' < i$ and the Chern classes of
$\mathcal{F}$ and $\mathcal{L}$, so induction on $i$ applies.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-additivity-chern-classes}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Suppose that ${\mathcal E}$ sits in an
+exact sequence
+$$
+0
+\to
+{\mathcal E}_1
+\to
+{\mathcal E}
+\to
+{\mathcal E}_2
+\to
+0
+$$
+of finite locally free sheaves $\mathcal{E}_i$ of rank $r_i$.
+The total Chern classes satisfy
+$$
+c({\mathcal E}) = c({\mathcal E}_1) c({\mathcal E}_2)
+$$
+in $A^*(X)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-bivariant-zero} we may assume that $X$ is integral
+and we have to show the identity when capping against $[X]$.
We argue by induction on $r_1$. The case $r_1 = 1$ is
+Lemma \ref{lemma-additivity-invertible-subsheaf}.
+Assume $r_1 > 1$. Let $(\pi : P \to X, \mathcal{O}_P(1))$
+denote the projective space bundle associated to $\mathcal{E}_1$. Note that
+\begin{enumerate}
+\item $\pi^* : \CH_*(X) \to \CH_*(P)$ is injective, and
+\item $\pi^*\mathcal{E}_1$ sits in a short exact sequence
+$0 \to \mathcal{F} \to \pi^*\mathcal{E}_1 \to \mathcal{L} \to 0$
+where $\mathcal{L}$ is invertible.
+\end{enumerate}
+The first assertion follows from the projective space bundle formula
+and the second follows from the definition of a projective space bundle.
+(In fact $\mathcal{L} = \mathcal{O}_P(1)$.)
+Let $Q = \pi^*\mathcal{E}/\mathcal{F}$, which sits in an
+exact sequence $0 \to \mathcal{L} \to Q \to \pi^*\mathcal{E}_2 \to 0$.
+By induction we have
+\begin{eqnarray*}
+c(\pi^*\mathcal{E}) \cap [P]
+& = &
+c(\mathcal{F}) \cap c(\pi^*\mathcal{E}/\mathcal{F}) \cap [P] \\
+& = &
+c(\mathcal{F}) \cap c(\mathcal{L}) \cap c(\pi^*\mathcal{E}_2) \cap [P] \\
+& = &
+c(\pi^*\mathcal{E}_1) \cap c(\pi^*\mathcal{E}_2) \cap [P]
+\end{eqnarray*}
+Since $[P] = \pi^*[X]$ we
+win by Lemma \ref{lemma-flat-pullback-cap-cj}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-chern-filter-by-linebundles}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let ${\mathcal L}_i$, $i = 1, \ldots, r$ be invertible
+$\mathcal{O}_X$-modules on $X$.
Let $\mathcal{E}$ be a finite locally free
$\mathcal{O}_X$-module of rank $r$ endowed with a filtration
+$$
+0 = \mathcal{E}_0 \subset \mathcal{E}_1 \subset \mathcal{E}_2
+\subset \ldots \subset \mathcal{E}_r = \mathcal{E}
+$$
+such that $\mathcal{E}_i/\mathcal{E}_{i - 1} \cong \mathcal{L}_i$.
+Set $c_1({\mathcal L}_i) = x_i$. Then
+$$
+c(\mathcal{E})
+=
+\prod\nolimits_{i = 1}^r (1 + x_i)
+$$
+in $A^*(X)$.
+\end{lemma}
+
+\begin{proof}
+Apply Lemma \ref{lemma-additivity-invertible-subsheaf} and induction.
+\end{proof}
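\noindent
In other words, in the situation of
Lemma \ref{lemma-chern-filter-by-linebundles} the class
$c_j(\mathcal{E})$ is the $j$th elementary symmetric polynomial
in $x_1, \ldots, x_r$; for example
$$
c_1(\mathcal{E}) = x_1 + \ldots + x_r
\quad\text{and}\quad
c_r(\mathcal{E}) = x_1 x_2 \ldots x_r
$$
This is the observation underlying the splitting principle
discussed in Section \ref{section-splitting-principle}.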
+
+
+
+
+\section{Degrees of zero cycles}
+\label{section-degree-zero-cycles}
+
+\noindent
We begin by defining the degree of a zero cycle on a proper scheme over a field.
+One approach is to define it directly as in
+Lemma \ref{lemma-spell-out-degree-zero-cycle} and then show
+it is well defined by
+Lemma \ref{lemma-curve-principal-divisor}.
+Instead we define it as follows.
+
+\begin{definition}
+\label{definition-degree-zero-cycle}
+Let $k$ be a field (Example \ref{example-field}). Let $p : X \to \Spec(k)$
+be proper. The {\it degree of a zero cycle} on $X$ is given by proper
+pushforward
+$$
+p_* : \CH_0(X) \to \CH_0(\Spec(k))
+$$
+(Lemma \ref{lemma-proper-pushforward-rational-equivalence})
+combined with the natural isomorphism $\CH_0(\Spec(k)) = \mathbf{Z}$
+which maps $[\Spec(k)]$ to $1$. Notation: $\deg(\alpha)$.
+\end{definition}
+
+\noindent
+Let us spell this out further.
+
+\begin{lemma}
+\label{lemma-spell-out-degree-zero-cycle}
+Let $k$ be a field. Let $X$ be proper over $k$. Let $\alpha = \sum n_i[Z_i]$
+be in $Z_0(X)$. Then
+$$
+\deg(\alpha) = \sum n_i\deg(Z_i)
+$$
+where $\deg(Z_i)$ is the degree of $Z_i \to \Spec(k)$, i.e.,
+$\deg(Z_i) = \dim_k \Gamma(Z_i, \mathcal{O}_{Z_i})$.
+\end{lemma}
+
+\begin{proof}
+This is the definition of proper pushforward
+(Definition \ref{definition-proper-pushforward}).
+\end{proof}
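\noindent
For example, if $x \in X$ is a closed point and $Z = \{x\}$ with the
reduced induced scheme structure, then
$\Gamma(Z, \mathcal{O}_Z) = \kappa(x)$ and hence
$$
\deg([x]) = [\kappa(x) : k]
$$
In particular, a $k$-rational point has degree $1$.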
+
+\noindent
+Next, we make the connection with degrees of vector bundles
+over $1$-dimensional proper schemes over fields as defined in
+Varieties, Section \ref{varieties-section-divisors-curves}.
+
+\begin{lemma}
+\label{lemma-degree-vector-bundle}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ of dimension $\leq 1$.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module of constant
+rank. Then
+$$
+\deg(\mathcal{E}) = \deg(c_1(\mathcal{E}) \cap [X]_1)
+$$
+where the left hand side is defined in
+Varieties, Definition \ref{varieties-definition-degree-invertible-sheaf}.
+\end{lemma}
+
+\begin{proof}
+Let $C_i \subset X$, $i = 1, \ldots, t$ be the irreducible components
+of dimension $1$ with reduced induced scheme structure and let $m_i$ be the
+multiplicity of $C_i$ in $X$. Then $[X]_1 = \sum m_i[C_i]$ and
+$c_1(\mathcal{E}) \cap [X]_1$ is the sum of the pushforwards of the cycles
+$m_i c_1(\mathcal{E}|_{C_i}) \cap [C_i]$. Since we have a similar decomposition
+of the degree of $\mathcal{E}$ by
+Varieties, Lemma \ref{varieties-lemma-degree-in-terms-of-components}
+it suffices to prove the lemma in case $X$ is a proper curve over $k$.
+
+\medskip\noindent
+Assume $X$ is a proper curve over $k$.
+By Divisors, Lemma \ref{divisors-lemma-filter-after-modification}
+there exists a modification $f : X' \to X$ such that $f^*\mathcal{E}$
+has a filtration whose successive quotients are invertible
+$\mathcal{O}_{X'}$-modules. Since $f_*[X']_1 = [X]_1$ we conclude
+from Lemma \ref{lemma-pushforward-cap-cj} that
+$$
+\deg(c_1(\mathcal{E}) \cap [X]_1) = \deg(c_1(f^*\mathcal{E}) \cap [X']_1)
+$$
+Since we have a similar relationship for the degree by
+Varieties, Lemma \ref{varieties-lemma-degree-birational-pullback}
+we reduce to the case where $\mathcal{E}$ has a filtration whose
+successive quotients are invertible $\mathcal{O}_X$-modules.
+In this case, we may use additivity of the degree
+(Varieties, Lemma \ref{varieties-lemma-degree-additive})
+and of first Chern classes (Lemma \ref{lemma-additivity-chern-classes})
+to reduce to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $X$ is a proper curve over $k$ and $\mathcal{E}$ is an
+invertible $\mathcal{O}_X$-module. By
+Divisors, Lemma
+\ref{divisors-lemma-quasi-projective-Noetherian-pic-effective-Cartier}
+we see that $\mathcal{E}$ is isomorphic to
+$\mathcal{O}_X(D) \otimes \mathcal{O}_X(D')^{\otimes -1}$
+for some effective Cartier divisors $D, D'$ on $X$ (this also uses
+that $X$ is projective, see
+Varieties, Lemma \ref{varieties-lemma-dim-1-proper-projective} for example).
+By additivity of degree under tensor product of invertible sheaves
+(Varieties, Lemma \ref{varieties-lemma-degree-tensor-product})
+and additivity of $c_1$ under tensor product of invertible sheaves
+(Lemma \ref{lemma-c1-cap-additive} or \ref{lemma-chern-classes-E-tensor-L})
+we reduce to the case $\mathcal{E} = \mathcal{O}_X(D)$.
+In this case the left hand side gives $\deg(D)$
+(Varieties, Lemma \ref{varieties-lemma-degree-effective-Cartier-divisor})
+and the right hand side gives $\deg([D]_0)$ by
+Lemma \ref{lemma-geometric-cap}.
+Since
+$$
+[D]_0 = \sum\nolimits_{x \in D}
+\text{length}_{\mathcal{O}_{X, x}}(\mathcal{O}_{D, x}) [x] =
+\sum\nolimits_{x \in D}
+\text{length}_{\mathcal{O}_{D, x}}(\mathcal{O}_{D, x}) [x]
+$$
+by definition, we see
+$$
+\deg([D]_0) = \sum\nolimits_{x \in D}
+\text{length}_{\mathcal{O}_{D, x}}(\mathcal{O}_{D, x}) [\kappa(x) : k] =
+\dim_k \Gamma(D, \mathcal{O}_D) = \deg(D)
+$$
+The penultimate equality by
+Algebra, Lemma \ref{algebra-lemma-pushdown-module}
+using that $D$ is affine.
+\end{proof}
+
+\noindent
+Finally, we can tie everything up with the numerical intersections
+defined in Varieties, Section \ref{varieties-section-num}.
+
+\begin{lemma}
+\label{lemma-degrees-and-numerical-intersections}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$.
+Let $Z \subset X$ be a closed subscheme of dimension $d$.
+Let $\mathcal{L}_1, \ldots, \mathcal{L}_d$ be invertible
+$\mathcal{O}_X$-modules. Then
+$$
+(\mathcal{L}_1 \cdots \mathcal{L}_d \cdot Z) =
+\deg(
+c_1(\mathcal{L}_1) \cap \ldots \cap c_1(\mathcal{L}_d) \cap [Z]_d)
+$$
+where the left hand side is defined in
+Varieties, Definition \ref{varieties-definition-intersection-number}.
+In particular,
+$$
+\deg_\mathcal{L}(Z) = \deg(c_1(\mathcal{L})^d \cap [Z]_d)
+$$
+if $\mathcal{L}$ is an ample invertible $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+We will prove this by induction on $d$. If $d = 0$, then the result is
+true by Varieties, Lemma \ref{varieties-lemma-chi-tensor-finite}.
+Assume $d > 0$.
+
+\medskip\noindent
+Let $Z_i \subset Z$, $i = 1, \ldots, t$ be the irreducible components
+of dimension $d$ with reduced induced scheme structure and let $m_i$ be the
+multiplicity of $Z_i$ in $Z$. Then $[Z]_d = \sum m_i[Z_i]$ and
+$c_1(\mathcal{L}_1) \cap \ldots \cap c_1(\mathcal{L}_d) \cap [Z]_d$
+is the sum of the cycles
+$m_i c_1(\mathcal{L}_1) \cap \ldots \cap c_1(\mathcal{L}_d) \cap [Z_i]$.
+Since we have a similar decomposition for
+$(\mathcal{L}_1 \cdots \mathcal{L}_d \cdot Z)$ by
+Varieties, Lemma \ref{varieties-lemma-numerical-polynomial-leading-term}
+it suffices to prove the lemma in case $Z = X$
+is a proper variety of dimension $d$ over $k$.
+
+\medskip\noindent
+By Chow's lemma there exists a birational proper morphism $f : Y \to X$
+with $Y$ H-projective over $k$. See Cohomology of Schemes, Lemma
+\ref{coherent-lemma-chow-Noetherian} and Remark
+\ref{coherent-remark-chow-Noetherian}. Then
+$$
+(f^*\mathcal{L}_1 \cdots f^*\mathcal{L}_d \cdot Y) =
+(\mathcal{L}_1 \cdots \mathcal{L}_d \cdot X)
+$$
+by Varieties, Lemma \ref{varieties-lemma-intersection-number-and-pullback}
+and we have
+$$
+f_*(c_1(f^*\mathcal{L}_1) \cap \ldots \cap c_1(f^*\mathcal{L}_d) \cap [Y]) =
+c_1(\mathcal{L}_1) \cap \ldots \cap c_1(\mathcal{L}_d) \cap [X]
+$$
+by Lemma \ref{lemma-pushforward-cap-c1}. Thus we may replace $X$ by $Y$
+and assume that $X$ is projective over $k$.
+
+\medskip\noindent
+If $X$ is a proper $d$-dimensional projective variety, then we can
+write $\mathcal{L}_1 = \mathcal{O}_X(D) \otimes \mathcal{O}_X(D')^{\otimes -1}$
+for some effective Cartier divisors $D, D' \subset X$
+by Divisors, Lemma
+\ref{divisors-lemma-quasi-projective-Noetherian-pic-effective-Cartier}.
+By additivity for both sides of the equation
+(Varieties, Lemma \ref{varieties-lemma-intersection-number-additive} and
+Lemma \ref{lemma-c1-cap-additive})
+we reduce to the case $\mathcal{L}_1 = \mathcal{O}_X(D)$ for some
+effective Cartier divisor $D$.
+By Varieties, Lemma
+\ref{varieties-lemma-numerical-intersection-effective-Cartier-divisor}
+we have
+$$
+(\mathcal{L}_1 \cdots \mathcal{L}_d \cdot X) =
+(\mathcal{L}_2 \cdots \mathcal{L}_d \cdot D)
+$$
+and by Lemma \ref{lemma-geometric-cap} we have
+$$
+c_1(\mathcal{L}_1) \cap \ldots \cap c_1(\mathcal{L}_d) \cap [X] =
+c_1(\mathcal{L}_2) \cap \ldots \cap c_1(\mathcal{L}_d) \cap [D]_{d - 1}
+$$
+Thus we obtain the result from our induction hypothesis.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Cycles of given codimension}
+\label{section-cycles-codimension}
+
+\noindent
+In some cases there is a second grading on the abelian group
+of all cycles given by codimension.
+
+\begin{lemma}
+\label{lemma-locally-equidimensional}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Write
+$\delta = \delta_{X/S}$ as in Section \ref{section-setup}.
+The following are equivalent
+\begin{enumerate}
+\item There exists a decomposition $X = \coprod_{n \in \mathbf{Z}} X_n$
+into open and closed subschemes such that $\delta(\xi) = n$ whenever
+$\xi \in X_n$ is a generic point of an irreducible component of $X_n$.
+\item For all $x \in X$ there exists an open neighbourhood $U \subset X$
+of $x$ and an integer $n$ such that $\delta(\xi) = n$ whenever
+$\xi \in U$ is a generic point of an irreducible component of $U$.
+\item For all $x \in X$ there exists an integer $n_x$ such that
+$\delta(\xi) = n_x$ for any generic point $\xi$ of an irreducible
+component of $X$ containing $x$.
+\end{enumerate}
+The conditions are satisfied if $X$ is either
+normal or Cohen-Macaulay\footnote{In fact, it suffices if
+$X$ is $(S_2)$. Compare with Local Cohomology, Lemma
+\ref{local-cohomology-lemma-catenary-S2-equidimensional}.}.
+\end{lemma}
+
+\begin{proof}
+It is clear that (1) $\Rightarrow$ (2) $\Rightarrow$ (3).
+Conversely, if (3) holds, then we set $X_n = \{x \in X \mid n_x = n\}$
+and we get a decomposition as in (1). Namely, $X_n$ is open because
+given $x$ the union of the irreducible components of $X$ passing through $x$
+minus the union of the irreducible components of $X$ not passing through $x$
+is an open neighbourhood of $x$. If $X$ is normal, then $X$ is a
+disjoint union of integral schemes
+(Properties, Lemma \ref{properties-lemma-normal-locally-Noetherian})
+and hence the properties hold.
+If $X$ is Cohen-Macaulay, then
+$\delta' : X \to \mathbf{Z}$, $x \mapsto -\dim(\mathcal{O}_{X, x})$
+is a dimension function on $X$ (see Example \ref{example-CM-irreducible}).
+Since $\delta - \delta'$ is locally constant
+(Topology, Lemma \ref{topology-lemma-dimension-function-unique})
+and since $\delta'(\xi) = 0$ for every generic point $\xi$ of $X$
+we see that (2) holds.
+\end{proof}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be
+locally of finite type over $S$ satisfying the equivalent
+conditions of Lemma \ref{lemma-locally-equidimensional}. For an integral
+closed subscheme $Z \subset X$ we have the codimension $\text{codim}(Z, X)$
+of $Z$ in $X$, see Topology, Definition \ref{topology-definition-codimension}.
+We define a {\it codimension $p$-cycle} to be a cycle $\alpha = \sum n_Z[Z]$
+on $X$ such that $n_Z \not = 0 \Rightarrow \text{codim}(Z, X) = p$.
+The abelian group of all codimension $p$-cycles is denoted $Z^p(X)$.
+Let $X = \coprod X_n$ be the decomposition given in
+Lemma \ref{lemma-locally-equidimensional} part (1).
+Recalling that our cycles are defined as locally finite sums, it is clear that
+$$
+Z^p(X) = \prod\nolimits_n Z_{n - p}(X_n)
+$$
+Moreover, we see that $\prod_p Z^p(X) = \prod_k Z_k(X)$. We could now define
+rational equivalence of codimension $p$ cycles on $X$ in exactly the same
+manner as before and in fact we could redevelop the whole theory from scratch
+for cycles of a given codimension for $X$ as in
+Lemma \ref{lemma-locally-equidimensional}. However, instead we simply
+define the {\it Chow group of codimension $p$-cycles} as
+$$
+\CH^p(X) = \prod\nolimits_n \CH_{n - p}(X_n)
+$$
+As before we have $\prod_p \CH^p(X) = \prod_k \CH_k(X)$.
+If $X$ is quasi-compact, then the product in the formula is finite
+(and hence is a direct sum) and we have
+$\bigoplus_p \CH^p(X) = \bigoplus_k \CH_k(X)$. If $X$ is quasi-compact
+and finite dimensional, then only a finite number of these groups
are nonzero.
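\medskip\noindent
For example, if $X$ is integral with $\dim_\delta(X) = n$, then the
decomposition above is simply $X = X_n$ and we have
$$
\CH^p(X) = \CH_{n - p}(X)
$$
so that the codimension and dimension gradings match as expected.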
+
+\medskip\noindent
+Many of the constructions and results for Chow groups proved above have
+natural counterparts for the Chow groups $\CH^*(X)$. Each of these is
+shown by decomposing the relevant schemes into ``equidimensional'' pieces
+as in Lemma \ref{lemma-locally-equidimensional}
+and applying the results already proved for the
+factors in the product decomposition given above.
+Let us list some of them.
+\begin{enumerate}
+\item If $f : X \to Y$ is a flat morphism of schemes locally of finite type
+over $S$ and $X$ and $Y$ satisfy the equivalent conditions of
+Lemma \ref{lemma-locally-equidimensional} then flat pullback determines a map
+$$
+f^* : \CH^p(Y) \to \CH^p(X)
+$$
+\item If $f : X \to Y$ is a morphism of schemes locally of finite type
+over $S$ and $X$ and $Y$ satisfy the equivalent conditions of
+Lemma \ref{lemma-locally-equidimensional} let us say $f$ has
+{\it codimension} $r \in \mathbf{Z}$ if for all pairs of irreducible components
+$Z \subset X$, $W \subset Y$ with $f(Z) \subset W$ we have
+$\dim_\delta(W) - \dim_\delta(Z) = r$.
+\item If $f : X \to Y$ is a proper morphism of schemes locally of finite type
+over $S$ and $X$ and $Y$ satisfy the equivalent conditions of
+Lemma \ref{lemma-locally-equidimensional} and $f$ has codimension $r$,
+then proper pushforward is a map
+$$
+f_* : \CH^p(X) \to \CH^{p + r}(Y)
+$$
+\item If $f : X \to Y$ is a morphism of schemes locally of finite type over $S$
+and $X$ and $Y$ satisfy the equivalent conditions of
+Lemma \ref{lemma-locally-equidimensional} and $f$ has codimension $r$
+and $c \in A^q(X \to Y)$, then $c$ induces maps
+$$
+c \cap - : \CH^p(Y) \to \CH^{p + q - r}(X)
+$$
+\item If $X$ is a scheme locally of finite type over $S$
+satisfying the equivalent conditions of
+Lemma \ref{lemma-locally-equidimensional} and
+$\mathcal{L}$ is an invertible $\mathcal{O}_X$-module,
+then
+$$
+c_1(\mathcal{L}) \cap - : \CH^p(X) \to \CH^{p + 1}(X)
+$$
+\item If $X$ is a scheme locally of finite type over $S$
+satisfying the equivalent conditions of
+Lemma \ref{lemma-locally-equidimensional} and
+$\mathcal{E}$ is a finite locally free $\mathcal{O}_X$-module,
+then
+$$
+c_i(\mathcal{E}) \cap - : \CH^p(X) \to \CH^{p + i}(X)
+$$
+\end{enumerate}
+Warning: the property for a morphism to have codimension $r$
+is not preserved by base change.
+
+\begin{remark}
+\label{remark-fundamental-class}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$ satisfying the
+equivalent conditions of Lemma \ref{lemma-locally-equidimensional}.
+Let $X = \coprod X_n$ be the decomposition into open and closed
+subschemes such that every irreducible component of $X_n$ has
+$\delta$-dimension $n$. In this situation we sometimes set
+$$
+[X] = \sum\nolimits_n [X_n]_n \in \CH^0(X)
+$$
+This class is a kind of ``fundamental class'' of $X$ in Chow theory.
+\end{remark}
+
+
+
+
+\section{The splitting principle}
+\label{section-splitting-principle}
+
+\noindent
In our setting it is not so easy to say exactly what the splitting
principle asserts. Here is a possible formulation.
+
+\begin{lemma}
+\label{lemma-splitting-principle}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be locally
+of finite type over $S$. Let $\mathcal{E}_i$ be a finite collection of
+locally free $\mathcal{O}_X$-modules of rank $r_i$. There exists a projective
+flat morphism $\pi : P \to X$ of relative dimension $d$ such that
+\begin{enumerate}
+\item for any morphism $f : Y \to X$ the map
+$\pi_Y^* : \CH_*(Y) \to \CH_{* + d}(Y \times_X P)$ is injective, and
+\item each $\pi^*\mathcal{E}_i$ has a filtration
+whose successive quotients $\mathcal{L}_{i, 1}, \ldots, \mathcal{L}_{i, r_i}$
+are invertible ${\mathcal O}_P$-modules.
+\end{enumerate}
+Moreover, when (1) holds the restriction map $A^*(X) \to A^*(P)$
+(Remark \ref{remark-pullback-cohomology}) is injective.
+\end{lemma}
+
+\begin{proof}
+We may assume $r_i \geq 1$ for all $i$. We will prove the lemma by induction
+on $\sum (r_i - 1)$. If this integer is $0$, then $\mathcal{E}_i$
+is invertible for all $i$ and we conclude by taking $\pi = \text{id}_X$.
+If not, then we can pick an $i$ such that $r_i > 1$ and consider the
+morphism $\pi_i : P_i = \mathbf{P}(\mathcal{E}_i) \to X$.
+We have a short exact sequence
+$$
+0 \to \mathcal{F} \to \pi_i^*\mathcal{E}_i \to \mathcal{O}_{P_i}(1) \to 0
+$$
+of finite locally free $\mathcal{O}_{P_i}$-modules of ranks $r_i - 1$,
$r_i$, and $1$. Observe that $\pi_i^*$ is injective on Chow groups
+after any base change by the projective bundle formula
+(Lemma \ref{lemma-chow-ring-projective-bundle}).
+By the induction hypothesis applied to the finite locally free
+$\mathcal{O}_{P_i}$-modules $\mathcal{F}$ and $\pi_{i'}^*\mathcal{E}_{i'}$
+for $i' \not = i$, we find a morphism $\pi : P \to P_i$ with
+properties stated as in the lemma. Then the composition
+$\pi_i \circ \pi : P \to X$ does the job. Some details omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-the-proof-shows-more}
+The proof of Lemma \ref{lemma-splitting-principle}
+shows that the morphism $\pi : P \to X$ has the following additional
+properties:
+\begin{enumerate}
+\item $\pi$ is a finite composition of projective space bundles
+associated to locally free modules of finite constant rank, and
+\item for every $\alpha \in \CH_k(X)$ we have
+$\alpha = \pi_*(\xi_1 \cap \ldots \cap \xi_d \cap \pi^*\alpha)$
+where $\xi_i$ is the first Chern class of some invertible
+$\mathcal{O}_P$-module.
+\end{enumerate}
+The second observation follows from the first and
+Lemma \ref{lemma-cap-projective-bundle}.
+We will add more observations here as needed.
+\end{remark}
+
+\noindent
+Let $(S, \delta)$, $X$, and $\mathcal{E}_i$ be as in
+Lemma \ref{lemma-splitting-principle}.
+The {\it splitting principle} refers to the practice of symbolically writing
+$$
+c(\mathcal{E}_i) = \prod (1 + x_{i, j})
+$$
+The symbols $x_{i, 1}, \ldots, x_{i, r_i}$ are called the {\it Chern roots}
+of $\mathcal{E}_i$. In other words, the $p$th Chern class of $\mathcal{E}_i$
+is the $p$th elementary symmetric function in the Chern roots.
+The usefulness of the splitting principle comes from the assertion that
+in order to prove a polynomial relation among Chern classes of the
+$\mathcal{E}_i$ it is enough to prove the corresponding relation among the
+Chern roots.
+
+\medskip\noindent
+Namely, let $\pi : P \to X$ be as in Lemma \ref{lemma-splitting-principle}.
+Recall that there is a canonical $\mathbf{Z}$-algebra map
+$\pi^* : A^*(X) \to A^*(P)$, see
+Remark \ref{remark-pullback-cohomology}. The injectivity of $\pi_Y^*$
+on Chow groups for every $Y$ over $X$, implies that the map
+$\pi^* : A^*(X) \to A^*(P)$ is injective (details omitted).
+We have
+$$
+\pi^*c(\mathcal{E}_i) = \prod (1 + c_1(\mathcal{L}_{i, j}))
+$$
+by Lemma \ref{lemma-chern-filter-by-linebundles}. Thus we may think of the
+Chern roots $x_{i, j}$ as the elements $c_1(\mathcal{L}_{i, j}) \in A^*(P)$
+and the displayed equation as taking place in $A^*(P)$ after
+applying the injective map $\pi^* : A^*(X) \to A^*(P)$ to the left
+hand side of the equation.
+
+\medskip\noindent
+To see how this works, it is best to give some examples.
+
+\begin{lemma}
+\label{lemma-chern-classes-dual}
+In Situation \ref{situation-setup} let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module
+with dual $\mathcal{E}^\vee$. Then
+$$
+c_i(\mathcal{E}^\vee) = (-1)^i c_i(\mathcal{E})
+$$
+in $A^i(X)$.
+\end{lemma}
+
+\begin{proof}
+Choose a morphism $\pi : P \to X$ as in
+Lemma \ref{lemma-splitting-principle}.
+By the injectivity of $\pi^*$ (after any base change)
+it suffices to prove the relation between
+the Chern classes of $\mathcal{E}$ and $\mathcal{E}^\vee$
+after pulling back to $P$. Thus we may assume there
+exist invertible $\mathcal{O}_X$-modules
+${\mathcal L}_i$, $i = 1, \ldots, r$
+and a filtration
+$$
+0 = \mathcal{E}_0 \subset \mathcal{E}_1 \subset \mathcal{E}_2
+\subset \ldots \subset \mathcal{E}_r = \mathcal{E}
+$$
+such that $\mathcal{E}_i/\mathcal{E}_{i - 1} \cong \mathcal{L}_i$.
+Then we obtain the dual filtration
+$$
0 = \mathcal{E}_r^\perp \subset \mathcal{E}_{r - 1}^\perp \subset \mathcal{E}_{r - 2}^\perp
+\subset \ldots \subset \mathcal{E}_0^\perp = \mathcal{E}^\vee
+$$
+such that $\mathcal{E}_{i - 1}^\perp/\mathcal{E}_i^\perp \cong
+\mathcal{L}_i^{\otimes -1}$.
+Set $x_i = c_1(\mathcal{L}_i)$.
+Then $c_1(\mathcal{L}_i^{\otimes -1}) = - x_i$
+by Lemma \ref{lemma-c1-cap-additive}.
+By Lemma \ref{lemma-chern-filter-by-linebundles}
+we have
+$$
+c(\mathcal{E}) = \prod\nolimits_{i = 1}^r (1 + x_i)
+\quad\text{and}\quad
+c(\mathcal{E}^\vee) = \prod\nolimits_{i = 1}^r (1 - x_i)
+$$
in $A^*(X)$. Here $c_i(\mathcal{E}^\vee)$ is the $i$th elementary
symmetric function in $-x_1, \ldots, -x_r$, which is
$(-1)^i$ times the $i$th elementary symmetric function in
$x_1, \ldots, x_r$, i.e., equal to $(-1)^i c_i(\mathcal{E})$.
+\end{proof}
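The identity $e_i(-x_1, \ldots, -x_r) = (-1)^i e_i(x_1, \ldots, x_r)$ for elementary symmetric functions used in the proof is easy to check by hand; as an informal sanity check (not part of the formal development) one can also verify it numerically on sample roots, for instance in Python:

```python
from itertools import combinations
from math import prod

def e(i, xs):
    """i-th elementary symmetric function of the numbers xs (e_0 = 1)."""
    return sum(prod(c) for c in combinations(xs, i))

# Sample Chern roots; any values work since the identity is polynomial.
x = [2, 3, 5, 7]
r = len(x)

# c_i(E) = e_i(x) and c_i(E^vee) = e_i(-x); check e_i(-x) = (-1)^i e_i(x).
for i in range(r + 1):
    assert e(i, [-t for t in x]) == (-1) ** i * e(i, x)
```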
+
+\begin{lemma}
+\label{lemma-chern-classes-tensor-product}
+In Situation \ref{situation-setup} let $X$ be locally of finite type over $S$.
Let $\mathcal{E}$ and $\mathcal{F}$ be finite locally free
+$\mathcal{O}_X$-modules of ranks $r$ and $s$. Then we have
+$$
+c_1(\mathcal{E} \otimes \mathcal{F})
+=
+r c_1(\mathcal{F}) + s c_1(\mathcal{E})
+$$
+$$
+c_2(\mathcal{E} \otimes \mathcal{F})
+=
+r c_2(\mathcal{F}) + s c_2(\mathcal{E}) +
+{r \choose 2} c_1(\mathcal{F})^2 +
+(rs - 1) c_1(\mathcal{F})c_1(\mathcal{E}) +
+{s \choose 2} c_1(\mathcal{E})^2
+$$
+and so on in $A^*(X)$.
+\end{lemma}
+
+\begin{proof}
+Arguing exactly as in the proof of Lemma \ref{lemma-chern-classes-dual}
+we may assume we have
+invertible $\mathcal{O}_X$-modules
${\mathcal L}_i$, $i = 1, \ldots, r$ and
${\mathcal N}_j$, $j = 1, \ldots, s$ and
filtrations
+$$
+0 = \mathcal{E}_0 \subset \mathcal{E}_1 \subset \mathcal{E}_2
+\subset \ldots \subset \mathcal{E}_r = \mathcal{E}
+\quad\text{and}\quad
+0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \mathcal{F}_2
+\subset \ldots \subset \mathcal{F}_s = \mathcal{F}
+$$
+such that $\mathcal{E}_i/\mathcal{E}_{i - 1} \cong \mathcal{L}_i$
+and such that $\mathcal{F}_j/\mathcal{F}_{j - 1} \cong \mathcal{N}_j$.
+Ordering pairs $(i, j)$ lexicographically
+we obtain a filtration
+$$
+0 \subset \ldots \subset
+\mathcal{E}_i \otimes \mathcal{F}_j
++
+\mathcal{E}_{i - 1} \otimes \mathcal{F}
+\subset \ldots \subset \mathcal{E} \otimes \mathcal{F}
+$$
+with successive quotients
+$$
+\mathcal{L}_1 \otimes \mathcal{N}_1,
+\mathcal{L}_1 \otimes \mathcal{N}_2,
+\ldots,
+\mathcal{L}_1 \otimes \mathcal{N}_s,
+\mathcal{L}_2 \otimes \mathcal{N}_1,
+\ldots,
+\mathcal{L}_r \otimes \mathcal{N}_s
+$$
+By Lemma \ref{lemma-chern-filter-by-linebundles}
+we have
+$$
+c(\mathcal{E}) = \prod (1 + x_i),
+\quad
+c(\mathcal{F}) = \prod (1 + y_j),
+\quad\text{and}\quad
c(\mathcal{E} \otimes \mathcal{F}) = \prod (1 + x_i + y_j)
+$$
+in $A^*(X)$. The result follows from a formal computation
+which we omit.
+\end{proof}
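As an informal check of the displayed formulas for $c_1$ and $c_2$ of a tensor product, one can evaluate both sides on sample numeric Chern roots; the following Python sketch does this for ranks $r = 3$ and $s = 2$:

```python
from itertools import combinations
from math import comb, prod

def e(i, xs):
    """i-th elementary symmetric function of the numbers xs."""
    return sum(prod(c) for c in combinations(xs, i))

x = [2, 3, 5]    # sample Chern roots of E, rank r = 3
y = [7, 11]      # sample Chern roots of F, rank s = 2
r, s = len(x), len(y)
z = [xi + yj for xi in x for yj in y]   # Chern roots of E tensor F

c1E, c2E, c1F, c2F = e(1, x), e(2, x), e(1, y), e(2, y)

# c_1(E tensor F) = r c_1(F) + s c_1(E)
assert e(1, z) == r * c1F + s * c1E
# c_2(E tensor F) as in the lemma
assert e(2, z) == (r * c2F + s * c2E + comb(r, 2) * c1F**2
                   + (r * s - 1) * c1F * c1E + comb(s, 2) * c1E**2)
```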
+
+\begin{remark}
+\label{remark-equalities-nonconstant-rank}
+The equalities proven above remain true even when we work with
+finite locally free
+$\mathcal{O}_X$-modules whose rank is allowed to be nonconstant.
+In fact, we can work with polynomials in the rank and the
+Chern classes as follows. Consider the graded polynomial ring
+$\mathbf{Z}[r, c_1, c_2, c_3, \ldots]$
+where $r$ has degree $0$ and $c_i$ has degree $i$. Let
+$$
+P \in \mathbf{Z}[r, c_1, c_2, c_3, \ldots]
+$$
+be a homogeneous polynomial of degree $p$. Then for any finite locally
+free $\mathcal{O}_X$-module $\mathcal{E}$ on $X$ we can consider
+$$
+P(\mathcal{E}) =
+P(r(\mathcal{E}), c_1(\mathcal{E}), c_2(\mathcal{E}), c_3(\mathcal{E}), \ldots)
+\in A^p(X)
+$$
+see Remark \ref{remark-extend-to-finite-locally-free} for notation and
+conventions. To prove relations among these polynomials (for multiple
+finite locally free modules) we can work locally on $X$ and use the splitting
+principle as above. For example, we claim that
+$$
+c_2(\SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{E})) =
+P(\mathcal{E})
+$$
+where $P = 2rc_2 - (r - 1)c_1^2$.
+Namely, since $\SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{E}) =
+\mathcal{E} \otimes \mathcal{E}^\vee$ this follows easily from
+Lemmas \ref{lemma-chern-classes-dual} and
+\ref{lemma-chern-classes-tensor-product}
+above by decomposing $X$ into parts where the rank
+of $\mathcal{E}$ is constant as in
+Remark \ref{remark-extend-to-finite-locally-free}.
+\end{remark}
+
+\begin{example}
+\label{example-power-sum}
+For every $p \geq 1$ there is a unique homogeneous polynomial
+$P_p \in \mathbf{Z}[c_1, c_2, c_3, \ldots]$ of degree $p$
such that for any $n \geq p$ we have
+$$
+P_p(s_1, s_2, \ldots, s_p) = \sum x_i^p
+$$
+in $\mathbf{Z}[x_1, \ldots, x_n]$ where $s_1, \ldots, s_p$ are the
+elementary symmetric polynomials in $x_1, \ldots, x_n$, so
+$$
+s_i = \sum\nolimits_{1 \leq j_1 < \ldots < j_i \leq n}
+x_{j_1}x_{j_2} \ldots x_{j_i}
+$$
+The existence of $P_p$ comes from the well known fact that
+the elementary symmetric functions generate the ring of
+all symmetric functions over the integers. Another way to
+characterize $P_p \in \mathbf{Z}[c_1, c_2, c_3, \ldots]$ is that we have
+$$
+\log(1 + c_1 + c_2 + c_3 + \ldots) =
+\sum\nolimits_{p \geq 1} (-1)^{p - 1}\frac{P_p}{p}
+$$
+as formal power series. This is clear by writing
+$1 + c_1 + c_2 + \ldots = \prod (1 + x_i)$ and applying
+the power series for the logarithm function. Expanding the left
+hand side we get
+\begin{align*}
+& (c_1 + c_2 + \ldots) - (1/2)(c_1 + c_2 + \ldots)^2 +
+(1/3)(c_1 + c_2 + \ldots)^3 - \ldots \\
+& =
+c_1 + (c_2 - (1/2)c_1^2) + (c_3 - c_1c_2 + (1/3)c_1^3) + \ldots
+\end{align*}
+In this way we find that
+\begin{align*}
+P_1 & = c_1, \\
+P_2 & = c_1^2 - 2c_2, \\
+P_3 & = c_1^3 - 3c_1c_2 + 3c_3, \\
+P_4 & = c_1^4 - 4c_1^2c_2 + 4c_1c_3 + 2c_2^2 - 4c_4,
+\end{align*}
+and so on. Since the Chern classes of a finite locally free
+$\mathcal{O}_X$-module $\mathcal{E}$ are the elementary symmetric
+polynomials in the Chern roots $x_i$, we see that
+$$
+P_p(\mathcal{E}) = \sum x_i^p
+$$
+For convenience we set $P_0 = r$ in $\mathbf{Z}[r, c_1, c_2, c_3, \ldots]$
+so that $P_0(\mathcal{E}) = r(\mathcal{E})$ as a bivariant class
+(as in Remarks \ref{remark-extend-to-finite-locally-free} and
+\ref{remark-equalities-nonconstant-rank}).
+\end{example}
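As an informal check that the polynomials $P_1, \ldots, P_4$ above really recover the power sums, one can evaluate them on sample values of $x_1, \ldots, x_n$; a short Python computation:

```python
from itertools import combinations
from math import prod

def e(i, xs):
    """i-th elementary symmetric function of the numbers xs."""
    return sum(prod(c) for c in combinations(xs, i))

x = [2, 3, 5, 7, 11]   # sample values for x_1, ..., x_n with n = 5
c1, c2, c3, c4 = (e(i, x) for i in (1, 2, 3, 4))

P = {
    1: c1,
    2: c1**2 - 2*c2,
    3: c1**3 - 3*c1*c2 + 3*c3,
    4: c1**4 - 4*c1**2*c2 + 4*c1*c3 + 2*c2**2 - 4*c4,
}
# each P_p should equal the p-th power sum of the x_i
for p, value in P.items():
    assert value == sum(t**p for t in x)
```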
+
+
+
+
+
+
+\section{Chern classes and sections}
+\label{section-top-chern-class}
+
+\noindent
+A brief section whose main result is that we may compute the
+top Chern class of a finite locally free module using the
vanishing locus of a ``regular section''.
+
+\medskip\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
+locally of finite type over $S$. Let $\mathcal{E}$ be a finite locally
+free $\mathcal{O}_X$-module. Let $f : X' \to X$ be locally of finite type. Let
+$$
+s \in \Gamma(X', f^*\mathcal{E})
+$$
+be a global section of the pullback of $\mathcal{E}$ to $X'$. Let
+$Z(s) \subset X'$ be the zero scheme of $s$. More precisely, we define
+$Z(s)$ to be the closed subscheme whose quasi-coherent sheaf
+of ideals is the image of the map
+$s : f^*\mathcal{E}^\vee \to \mathcal{O}_{X'}$.
+
+\begin{lemma}
+\label{lemma-top-chern-class}
+In the situation described just above assume $\dim_\delta(X') = n$,
+that $f^*\mathcal{E}$ has constant rank $r$, that
+$\dim_\delta(Z(s)) \leq n - r$, and that for every generic point
+$\xi \in Z(s)$ with $\delta(\xi) = n - r$ the ideal of $Z(s)$
+in $\mathcal{O}_{X', \xi}$ is generated by a regular sequence
+of length $r$. Then
+$$
+c_r(\mathcal{E}) \cap [X']_n = [Z(s)]_{n - r}
+$$
+in $\CH_*(X')$.
+\end{lemma}
+
+\begin{proof}
+Since $c_r(\mathcal{E})$ is a bivariant class
+(Lemma \ref{lemma-cap-cp-bivariant})
+we may assume $X = X'$ and we have to show that
+$c_r(\mathcal{E}) \cap [X]_n = [Z(s)]_{n - r}$ in $\CH_{n - r}(X)$.
+We will prove the lemma by induction on $r \geq 0$. (The case
+$r = 0$ is trivial.) The case $r = 1$
+is handled by Lemma \ref{lemma-geometric-cap}. Assume $r > 1$.
+
+\medskip\noindent
+Let $\pi : P \to X$ be the projective space bundle associated to
+$\mathcal{E}$ and consider the short exact sequence
+$$
+0 \to \mathcal{E}' \to \pi^*\mathcal{E} \to \mathcal{O}_P(1) \to 0
+$$
+By the projective space bundle formula
+(Lemma \ref{lemma-chow-ring-projective-bundle})
+it suffices to prove the equality after pulling back by $\pi$.
+Observe that $\pi^{-1}Z(s) = Z(\pi^*s)$ has $\delta$-dimension
+$\leq n - 1$ and that the assumption on regular sequences at
+generic points of $\delta$-dimension $n - 1$ holds by
+flat pullback, see
+Algebra, Lemma \ref{algebra-lemma-flat-increases-depth}.
+Let $t \in \Gamma(P, \mathcal{O}_P(1))$ be the image of $\pi^*s$.
+We claim
+$$
+[Z(t)]_{n + r - 2} = c_1(\mathcal{O}_P(1)) \cap [P]_{n + r - 1}
+$$
+Assuming the claim we finish the proof as follows.
+The restriction $\pi^*s|_{Z(t)}$ maps to zero in
+$\mathcal{O}_P(1)|_{Z(t)}$ hence comes from a unique
+element $s' \in \Gamma(Z(t), \mathcal{E}'|_{Z(t)})$.
+Note that $Z(s') = Z(\pi^*s)$ as closed subschemes of $P$.
+If $\xi \in Z(s')$ is a generic point with $\delta(\xi) = n - 1$,
+then the ideal of $Z(s')$ in $\mathcal{O}_{Z(t), \xi}$
+can be generated by a regular sequence of length $r - 1$: it is generated by
+$r - 1$ elements which are the images of $r - 1$ elements in
+$\mathcal{O}_{P, \xi}$ which together with a generator of the
+ideal of $Z(t)$ in $\mathcal{O}_{P, \xi}$ form a regular sequence
+of length $r$ in $\mathcal{O}_{P, \xi}$. Hence we can apply the
+induction hypothesis to $s'$ on $Z(t)$ to get
+$c_{r - 1}(\mathcal{E}') \cap [Z(t)]_{n + r - 2} = [Z(s')]_{n - 1}$.
+Combining all of the above we obtain
+\begin{align*}
+c_r(\pi^*\mathcal{E}) \cap [P]_{n + r - 1}
+& =
+c_{r - 1}(\mathcal{E}') \cap c_1(\mathcal{O}_P(1)) \cap [P]_{n + r - 1} \\
+& =
+c_{r - 1}(\mathcal{E}') \cap [Z(t)]_{n + r - 2} \\
+& =
+[Z(s')]_{n - 1} \\
+& = [Z(\pi^*s)]_{n - 1}
+\end{align*}
+which is what we had to show.
+
+\medskip\noindent
+Proof of the claim. This will follow from an application of
+the already used Lemma \ref{lemma-geometric-cap}.
+We have $\pi^{-1}(Z(s)) = Z(\pi^*s) \subset Z(t)$.
+On the other hand, for $x \in X$ if $P_x \subset Z(t)$, then
+$t|_{P_x} = 0$ which implies that $s$ is zero in the fibre
+$\mathcal{E} \otimes \kappa(x)$, which implies $x \in Z(s)$.
+It follows that $\dim_\delta(Z(t)) \leq n + (r - 1) - 1$.
+Finally, let $\xi \in Z(t)$ be a generic point with
+$\delta(\xi) = n + r - 2$. If $\xi$ is not the generic point
+of the fibre of $P \to X$ it is immediate that
+a local equation of $Z(t)$ is a nonzerodivisor in $\mathcal{O}_{P, \xi}$
+(because we can check this on the fibre by
+Algebra, Lemma \ref{algebra-lemma-grothendieck}).
+If $\xi$ is the generic point of a fibre, then $x = \pi(\xi) \in Z(s)$
+and $\delta(x) = n + r - 2 - (r - 1) = n - 1$. This is a contradiction
+with $\dim_\delta(Z(s)) \leq n - r$ because $r > 1$
+so this case doesn't happen.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-easy-virtual-class}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$
+be a scheme locally of finite type over $S$. Let
+$$
+0 \to \mathcal{N}' \to \mathcal{N} \to \mathcal{E} \to 0
+$$
+be a short exact sequence of finite locally free $\mathcal{O}_X$-modules.
+Consider the closed embedding
+$$
+i :
+N' = \underline{\Spec}_X(\text{Sym}((\mathcal{N}')^\vee))
+\longrightarrow
+N = \underline{\Spec}_X(\text{Sym}(\mathcal{N}^\vee))
+$$
+For $\alpha \in \CH_k(X)$ we have
+$$
+i_*(p')^*\alpha = p^*(c_{top}(\mathcal{E}) \cap \alpha)
+$$
+where $p' : N' \to X$ and $p : N \to X$ are the structure morphisms.
+\end{lemma}
+
+\begin{proof}
+Here $c_{top}(\mathcal{E})$ is the bivariant class defined in
+Remark \ref{remark-top-chern-class}. By its very definition, in
+order to verify the formula, we may assume that $\mathcal{E}$
+has constant rank. We may similarly assume $\mathcal{N}'$ and
+$\mathcal{N}$ have constant ranks, say $r'$ and $r$, so
+$\mathcal{E}$ has rank $r - r'$ and
+$c_{top}(\mathcal{E}) = c_{r - r'}(\mathcal{E})$.
+Observe that $p^*\mathcal{E}$ has a canonical section
+$$
+s \in \Gamma(N, p^*\mathcal{E}) = \Gamma(X, p_*p^*\mathcal{E}) =
\Gamma(X, \mathcal{E} \otimes_{\mathcal{O}_X} \text{Sym}(\mathcal{N}^\vee))
+\supset \Gamma(X, \SheafHom(\mathcal{N}, \mathcal{E}))
+$$
+corresponding to the surjection $\mathcal{N} \to \mathcal{E}$ given
+in the statement of the lemma. The vanishing scheme of this section
+is exactly $N' \subset N$. Let $Y \subset X$ be an integral closed
+subscheme of $\delta$-dimension $n$. Then we have
+\begin{enumerate}
+\item $p^*[Y] = [p^{-1}(Y)]$ since $p^{-1}(Y)$ is integral of
+$\delta$-dimension $n + r$,
+\item $(p')^*[Y] = [(p')^{-1}(Y)]$ since $(p')^{-1}(Y)$ is integral of
+$\delta$-dimension $n + r'$,
+\item the restriction of $s$ to $p^{-1}Y$ has vanishing scheme
+$(p')^{-1}Y$ and the closed immersion $(p')^{-1}Y \to p^{-1}Y$
+is a regular immersion (locally cut out by a regular sequence).
+\end{enumerate}
+We conclude that
+$$
+(p')^*[Y] = c_{r - r'}(p^*\mathcal{E}) \cap p^*[Y]
+\quad\text{in}\quad \CH_*(N)
+$$
+by Lemma \ref{lemma-top-chern-class}. This proves the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{The Chern character and tensor products}
+\label{section-chern-classes-tensor}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+We define the {\it Chern character} of a finite locally free
+$\mathcal{O}_X$-module to be the formal expression
+$$
+ch({\mathcal E}) = \sum\nolimits_{i=1}^r e^{x_i}
+$$
+if the $x_i$ are the Chern roots of ${\mathcal E}$. Writing this
+as a polynomial in the Chern classes we obtain
+\begin{align*}
+ch(\mathcal{E})
+& =
+r(\mathcal{E})
++
+c_1(\mathcal{E}) +
+\frac{1}{2}(c_1(\mathcal{E})^2 - 2c_2(\mathcal{E}))
++
+\frac{1}{6}(c_1(\mathcal{E})^3 - 3c_1(\mathcal{E})c_2(\mathcal{E}) + 3c_3(\mathcal{E})) \\
+& \quad\quad +
+\frac{1}{24}(c_1(\mathcal{E})^4 - 4c_1(\mathcal{E})^2c_2(\mathcal{E}) + 4c_1(\mathcal{E})c_3(\mathcal{E}) + 2c_2(\mathcal{E})^2 - 4c_4(\mathcal{E}))
++
+\ldots \\
+& =
+\sum\nolimits_{p = 0, 1, 2, \ldots} \frac{P_p(\mathcal{E})}{p!}
+\end{align*}
+with $P_p$ polynomials in the Chern classes as in
+Example \ref{example-power-sum}. The degree $p$ component of
+the above is
+$$
+ch_p(\mathcal{E}) = \frac{P_p(\mathcal{E})}{p!} \in A^p(X) \otimes \mathbf{Q}
+$$
+What does it mean that the coefficients are rational numbers?
Well, this simply means that we think of
+$ch_p(\mathcal{E})$ as an element of $A^p(X) \otimes \mathbf{Q}$.
+
+\begin{remark}
+\label{remark-extend-chern-character-to-finite-locally-free}
+In the discussion above we have defined the components of the Chern character
+$ch_p(\mathcal{E}) \in A^p(X) \otimes \mathbf{Q}$
+of $\mathcal{E}$ even if the rank of $\mathcal{E}$
+is not constant. See Remarks \ref{remark-extend-to-finite-locally-free} and
+\ref{remark-equalities-nonconstant-rank}. Thus the full Chern character
+of $\mathcal{E}$ is
+an element of $\prod_{p \geq 0} (A^p(X) \otimes \mathbf{Q})$. If $X$
+is quasi-compact and $\dim(X) < \infty$ (usual dimension), then one can show
+using Lemma \ref{lemma-vanish-above-dimension} and the splitting principle
+that $ch(\mathcal{E}) \in A^*(X) \otimes \mathbf{Q}$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-chern-character-additive}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be locally
+of finite type over $S$. Let
+$
+0 \to \mathcal{E}_1 \to \mathcal{E} \to \mathcal{E}_2 \to 0
+$
+be a short exact sequence of finite locally free $\mathcal{O}_X$-modules.
+Then we have the equality
+$$
+ch(\mathcal{E}) = ch(\mathcal{E}_1) + ch(\mathcal{E}_2)
+$$
+More precisely, we have
+$P_p(\mathcal{E}) = P_p(\mathcal{E}_1) + P_p(\mathcal{E}_2)$
+in $A^p(X)$ where $P_p$ is as in Example \ref{example-power-sum}.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove the more precise statement. By
+Section \ref{section-splitting-principle}
+this follows because if $x_{1, i}$, $i = 1, \ldots, r_1$
+and $x_{2, i}$, $i = 1, \ldots, r_2$ are the
+Chern roots of $\mathcal{E}_1$ and $\mathcal{E}_2$, then
+$x_{1, 1}, \ldots, x_{1, r_1}, x_{2, 1}, \ldots, x_{2, r_2}$
+are the Chern roots of $\mathcal{E}$. Hence we get the result
+from our choice of $P_p$ in Example \ref{example-power-sum}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-chern-character-multiplicative}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be locally
+of finite type over $S$. Let $\mathcal{E}_1$ and $\mathcal{E}_2$
+be finite locally free $\mathcal{O}_X$-modules.
+Then we have the equality
+$$
+ch(\mathcal{E}_1 \otimes_{\mathcal{O}_X} \mathcal{E}_2) =
+ch(\mathcal{E}_1) ch(\mathcal{E}_2)
+$$
+More precisely, we have
+$$
+P_p(\mathcal{E}_1 \otimes_{\mathcal{O}_X} \mathcal{E}_2) =
+\sum\nolimits_{p_1 + p_2 = p}
+{p \choose p_1} P_{p_1}(\mathcal{E}_1) P_{p_2}(\mathcal{E}_2)
+$$
+in $A^p(X)$ where $P_p$ is as in Example \ref{example-power-sum}.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove the more precise statement. By
+Section \ref{section-splitting-principle}
+this follows because if $x_{1, i}$, $i = 1, \ldots, r_1$
+and $x_{2, i}$, $i = 1, \ldots, r_2$ are the
+Chern roots of $\mathcal{E}_1$ and $\mathcal{E}_2$, then
+$x_{1, i} + x_{2, j}$, $1 \leq i \leq r_1$, $1 \leq j \leq r_2$
+are the Chern roots of $\mathcal{E}_1 \otimes \mathcal{E}_2$.
+Hence we get the result from the binomial formula for
+$(x_{1, i} + x_{2, j})^p$ and the
+shape of our polynomials $P_p$ in Example \ref{example-power-sum}.
+\end{proof}
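The binomial identity underlying the proof can also be checked numerically on sample Chern roots; the following Python sketch verifies $P_p(\mathcal{E}_1 \otimes \mathcal{E}_2) = \sum {p \choose p_1} P_{p_1}(\mathcal{E}_1) P_{p_2}(\mathcal{E}_2)$ for small $p$, including $p = 0$ where $P_0$ is the rank:

```python
from math import comb

def P(p, roots):
    """P_p evaluated at Chern roots: the p-th power sum, with P_0 the rank."""
    return sum(t**p for t in roots) if p > 0 else len(roots)

x = [2, 3, 5]   # sample Chern roots of E_1
y = [7, 11]     # sample Chern roots of E_2
z = [xi + yj for xi in x for yj in y]   # Chern roots of E_1 tensor E_2

# binomial expansion of (x_i + y_j)^p summed over all pairs (i, j)
for p in range(5):
    assert P(p, z) == sum(comb(p, p1) * P(p1, x) * P(p - p1, y)
                          for p1 in range(p + 1))
```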
+
+\begin{lemma}
+\label{lemma-chern-character-dual}
+In Situation \ref{situation-setup} let $X$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module
+with dual $\mathcal{E}^\vee$. Then
+$ch_i(\mathcal{E}^\vee) = (-1)^i ch_i(\mathcal{E})$ in
+$A^i(X) \otimes \mathbf{Q}$.
+\end{lemma}
+
+\begin{proof}
+Follows from the corresponding result for Chern classes
+(Lemma \ref{lemma-chern-classes-dual}).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Chern classes and the derived category}
+\label{section-pre-derived}
+
+\noindent
+In this section we define the total Chern class of an object
+of the derived category which may be represented globally
+by a finite complex of finite locally free modules.
+
+\medskip\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let
+$$
+\mathcal{E}^a \to \mathcal{E}^{a + 1} \to \ldots \to \mathcal{E}^b
+$$
+be a bounded complex of finite locally free $\mathcal{O}_X$-modules
+of constant rank.
+Then we define the {\it total Chern class of the complex} by the formula
+$$
+c(\mathcal{E}^\bullet) = \prod\nolimits_{n = a, \ldots, b}
+c(\mathcal{E}^n)^{(-1)^n} \in \prod\nolimits_{p \geq 0} A^p(X)
+$$
+Here the inverse is the formal inverse, so
+$$
+(1 + c_1 + c_2 + c_3 + \ldots)^{-1} =
+1 - c_1 + c_1^2 - c_2 - c_1^3 + 2c_1 c_2 - c_3 + \ldots
+$$
+We will denote $c_p(\mathcal{E}^\bullet) \in A^p(X)$
+the degree $p$ part of $c(\mathcal{E}^\bullet)$.
+We similarly define the {\it Chern character of the complex} by
+the formula
+$$
+ch(\mathcal{E}^\bullet) = \sum\nolimits_{n = a, \ldots, b}
+(-1)^n ch(\mathcal{E}^n) \in
+\prod\nolimits_{p \geq 0} (A^p(X) \otimes \mathbf{Q})
+$$
+We will denote $ch_p(\mathcal{E}^\bullet) \in A^p(X) \otimes \mathbf{Q}$
+the degree $p$ part of $ch(\mathcal{E}^\bullet)$.
+Finally, for $P_p \in \mathbf{Z}[r, c_1, c_2, c_3, \ldots]$
+as in Example \ref{example-power-sum} we define
+$$
+P_p(\mathcal{E}^\bullet) = \sum\nolimits_{n = a, \ldots, b}
+(-1)^n P_p(\mathcal{E}^n)
+$$
+in $A^p(X)$. Then we have
+$ch_p(\mathcal{E}^\bullet) = (1/p!)P_p(\mathcal{E}^\bullet)$
as usual. The next lemma shows that these constructions only depend
+on the image of the complex in the derived category.
+
+\begin{lemma}
+\label{lemma-pre-derived-chern-class}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $E \in D(\mathcal{O}_X)$
+be an object such that there exists a locally bounded complex
+$\mathcal{E}^\bullet$ of finite locally free $\mathcal{O}_X$-modules
+representing $E$. Then a slight generalization of the above constructions
+$$
+c(\mathcal{E}^\bullet) \in \prod\nolimits_{p \geq 0} A^p(X),\quad
+ch(\mathcal{E}^\bullet) \in
+\prod\nolimits_{p \geq 0} A^p(X) \otimes \mathbf{Q},\quad
+P_p(\mathcal{E}^\bullet) \in A^p(X)
+$$
+are independent of the choice of the complex $\mathcal{E}^\bullet$.
+\end{lemma}
+
+\begin{proof}
+We prove this for the total Chern class; the other two cases follow
+by the same arguments using
+Lemma \ref{lemma-chern-character-additive}
+instead of
+Lemma \ref{lemma-additivity-chern-classes}.
+
+\medskip\noindent
+As in Remark \ref{remark-extend-to-finite-locally-free} in order
to define the total Chern class $c(\mathcal{E}^\bullet)$
+we decompose $X$ into open and closed subschemes
+$$
+X = \coprod\nolimits_{i \in I} X_i
+$$
such that the rank of $\mathcal{E}^n$ is constant on $X_i$ for
+all $n$ and $i$. (Since these ranks are locally constant functions
+on $X$ we can do this.) Since $\mathcal{E}^\bullet$ is locally
+bounded, we see that only a finite number of the sheaves
+$\mathcal{E}^n|_{X_i}$ are nonzero for a fixed $i$. Hence we
+can define
+$$
+c(\mathcal{E}^\bullet|_{X_i}) =
+\prod\nolimits_n c(\mathcal{E}^n|_{X_i})^{(-1)^n}
+\in \prod\nolimits_{p \geq 0} A^p(X_i)
+$$
+as above. By Lemma \ref{lemma-disjoint-decomposition-bivariant}
+we have $A^p(X) = \prod_i A^p(X_i)$. Hence for each $p \in \mathbf{Z}$
+we have a unique element $c_p(\mathcal{E}^\bullet) \in A^p(X)$ restricting
+to $c_p(\mathcal{E}^\bullet|_{X_i})$ on $X_i$ for all $i$.
+
+\medskip\noindent
+Suppose we have a second locally bounded complex
+$\mathcal{F}^\bullet$ of finite locally free $\mathcal{O}_X$-modules
+representing $E$.
+Let $g : Y \to X$ be a morphism locally of finite type with $Y$ integral.
+By Lemma \ref{lemma-bivariant-zero} it suffices to show that
$c(g^*\mathcal{E}^\bullet) \cap [Y]$ is the same as
+$c(g^*\mathcal{F}^\bullet) \cap [Y]$ and it even suffices to prove
+this after replacing $Y$ by an integral scheme proper and birational
+over $Y$. Then first we conclude that $g^*\mathcal{E}^\bullet$
+and $g^*\mathcal{F}^\bullet$ are bounded complexes of finite locally
+free $\mathcal{O}_Y$-modules of constant rank. Next, by
+More on Flatness, Lemma \ref{flat-lemma-blowup-complex-integral}
+we may assume that $H^i(Lg^*E)$ is perfect of tor dimension $\leq 1$
+for all $i \in \mathbf{Z}$.
+This reduces us to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $X$ is integral, $\mathcal{E}^\bullet$ and $\mathcal{F}^\bullet$
+are bounded complexes of finite locally free modules of constant rank, and
+$H^i(E)$ is a perfect $\mathcal{O}_X$-module of tor dimension $\leq 1$
+for all $i \in \mathbf{Z}$. We have to
+show that $c(\mathcal{E}^\bullet) \cap [X]$ is the same as
+$c(\mathcal{F}^\bullet) \cap [X]$. Denote
+$d_\mathcal{E}^i : \mathcal{E}^i \to \mathcal{E}^{i + 1}$ and
+$d_\mathcal{F}^i : \mathcal{F}^i \to \mathcal{F}^{i + 1}$
+the differentials of our complexes. By
+More on Flatness, Remark \ref{flat-remark-when-you-have-a-complex}
+we know that $\Im(d_\mathcal{E}^i)$, $\Ker(d_\mathcal{E}^i)$,
+$\Im(d_\mathcal{F}^i)$, and $\Ker(d_\mathcal{F}^i)$
+are finite locally free $\mathcal{O}_X$-modules for all $i$.
+By additivity (Lemma \ref{lemma-additivity-chern-classes}) we see that
+$$
+c(\mathcal{E}^\bullet) = \prod\nolimits_i
+c(\Ker(d_\mathcal{E}^i))^{(-1)^i} c(\Im(d_\mathcal{E}^i))^{(-1)^i}
+$$
+and similarly for $\mathcal{F}^\bullet$. Since we have the
+short exact sequences
+$$
+0 \to \Im(d_\mathcal{E}^i) \to \Ker(d_\mathcal{E}^i) \to H^i(E) \to 0
+\quad\text{and}\quad
+0 \to \Im(d_\mathcal{F}^i) \to \Ker(d_\mathcal{F}^i) \to H^i(E) \to 0
+$$
+we reduce to the problem stated and solved in the next paragraph.
+
+\medskip\noindent
+Assume $X$ is integral and we have two short exact sequences
+$$
+0 \to \mathcal{E}' \to \mathcal{E} \to \mathcal{Q} \to 0
+\quad\text{and}\quad
+0 \to \mathcal{F}' \to \mathcal{F} \to \mathcal{Q} \to 0
+$$
+with $\mathcal{E}$, $\mathcal{E}'$, $\mathcal{F}$, $\mathcal{F}'$
+finite locally free. Problem: show that
+$c(\mathcal{E})c(\mathcal{E}')^{-1} \cap [X] =
+c(\mathcal{F})c(\mathcal{F}')^{-1} \cap [X]$.
+To do this, consider the short exact sequence
+$$
+0 \to \mathcal{G} \to \mathcal{E} \oplus \mathcal{F} \to \mathcal{Q} \to 0
+$$
+defining $\mathcal{G}$. Since $\mathcal{Q}$ has tor dimension $\leq 1$
+we see that $\mathcal{G}$ is finite locally free. A diagram chase
+shows that the kernel of the surjection $\mathcal{G} \to \mathcal{F}$
+maps isomorphically to $\mathcal{E}'$ in $\mathcal{E}$ and
+the kernel of the surjection $\mathcal{G} \to \mathcal{E}$ maps
+isomorphically to $\mathcal{F}'$ in $\mathcal{F}$. (Working affine
+locally this follows from or is equivalent to Schanuel's lemma, see
+Algebra, Lemma \ref{algebra-lemma-Schanuel}.)
+We conclude that
+$$
+c(\mathcal{E})c(\mathcal{F}') = c(\mathcal{G}) =
+c(\mathcal{F})c(\mathcal{E}')
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-defined-by-envelope}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $E \in D(\mathcal{O}_X)$
+be a perfect object. Assume there exists an envelope
+$f : Y \to X$ (Definition \ref{definition-envelope})
+such that $Lf^*E$ is isomorphic in $D(\mathcal{O}_Y)$
+to a locally bounded complex $\mathcal{E}^\bullet$ of finite locally free
$\mathcal{O}_Y$-modules. Then there exist unique bivariant classes
+$c(E) \in A^*(X)$, $ch(E) \in A^*(X) \otimes \mathbf{Q}$, and
+$P_p(E) \in A^p(X)$, independent of the choice of $f : Y \to X$
and $\mathcal{E}^\bullet$, such that the restrictions of these classes
+to $Y$ are equal to $c(\mathcal{E}^\bullet) \in A^*(Y)$,
+$ch(\mathcal{E}^\bullet) \in A^*(Y) \otimes \mathbf{Q}$, and
+$P_p(\mathcal{E}^\bullet) \in A^p(Y)$.
+\end{lemma}
+
+\begin{proof}
Fix $p \in \mathbf{Z}$. We will prove the lemma for the Chern
+class $c_p(E) \in A^p(X)$ and omit the arguments for the other cases.
+
+\medskip\noindent
+Let $g : T \to X$ be a morphism locally of finite type such that
+there exists a locally bounded complex $\mathcal{E}^\bullet$ of finite locally
+free $\mathcal{O}_T$-modules representing $Lg^*E$ in $D(\mathcal{O}_T)$.
+The bivariant class $c_p(\mathcal{E}^\bullet) \in A^p(T)$
+is independent of the choice of $\mathcal{E}^\bullet$ by
+Lemma \ref{lemma-pre-derived-chern-class}.
+Let us write $c_p(Lg^*E) \in A^p(T)$ for this class.
+For any further morphism $h : T' \to T$ which is locally
+of finite type, setting $g' = g \circ h$ we see that
+$L(g')^*E = L(g \circ h)^*E = Lh^*Lg^*E$ is represented by
+$h^*\mathcal{E}^\bullet$ in $D(\mathcal{O}_{T'})$.
+We conclude that $c_p(L(g')^*E)$ makes sense and is equal to the
+restriction (Remark \ref{remark-restriction-bivariant})
+of $c_p(Lg^*E)$ to $T'$ (strictly speaking this requires an application of
+Lemma \ref{lemma-cap-cp-bivariant}).
+
+\medskip\noindent
+Let $f : Y \to X$ and $\mathcal{E}^\bullet$ be as in the statement
+of the lemma. We obtain a bivariant class $c_p(E) \in A^p(X)$ from
+an application of Lemma \ref{lemma-envelope-bivariant}
+to $f : Y \to X$ and the class $c' = c_p(Lf^*E)$ we constructed
+in the previous paragraph. The assumption in the lemma is
+satisfied because by the discussion in the previous paragraph we have
+$res_1(c') = c_p(Lg^*E) = res_2(c')$ where
+$g = f \circ p = f \circ q : Y \times_X Y \to X$.
+
+\medskip\noindent
+Finally, suppose that $f' : Y' \to X$ is a second envelope
such that $L(f')^*E$ is represented by a locally bounded complex of
+finite locally free $\mathcal{O}_{Y'}$-modules. Then it follows
+that the restrictions of $c_p(Lf^*E)$ and $c_p(L(f')^*E)$
+to $Y \times_X Y'$ are equal. Since $Y \times_X Y' \to X$
+is an envelope (Lemmas \ref{lemma-base-change-envelope} and
+\ref{lemma-composition-envelope}), we see that our two candidates for $c_p(E)$
+agree by the unicity in Lemma \ref{lemma-envelope-bivariant}.
+\end{proof}
+
+\begin{definition}
+\label{definition-defined-on-perfect}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $E \in D(\mathcal{O}_X)$
+be a perfect object.
+\begin{enumerate}
+\item We say the {\it Chern classes of $E$ are defined}\footnote{See
+Lemma \ref{lemma-chern-classes-defined} for some criteria.} if there exists
+an envelope $f : Y \to X$ such that $Lf^*E$ is isomorphic in
+$D(\mathcal{O}_Y)$ to a locally bounded complex of finite locally free
+$\mathcal{O}_Y$-modules.
+\item If the Chern classes of $E$ are defined, then we define
+$$
+c(E) \in \prod\nolimits_{p \geq 0} A^p(X),\quad
+ch(E) \in
+\prod\nolimits_{p \geq 0} A^p(X) \otimes \mathbf{Q},\quad
+P_p(E) \in A^p(X)
+$$
+by an application of Lemma \ref{lemma-defined-by-envelope}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+This definition applies in many but not all situations envisioned
+in this chapter, see Lemma \ref{lemma-chern-classes-defined}.
+Perhaps an elementary construction of these bivariant classes for general
+$E/X/(S,\delta)$ as in the definition exists; we don't know.
+
+\begin{lemma}
+\label{lemma-chern-classes-defined}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $E \in D(\mathcal{O}_X)$
be a perfect object. If one of the following conditions holds, then
+the Chern classes of $E$ are defined:
+\begin{enumerate}
+\item there exists an envelope $f : Y \to X$ such that $Lf^*E$
+is isomorphic in $D(\mathcal{O}_Y)$ to a locally bounded complex of finite
+locally free $\mathcal{O}_Y$-modules,
+\item $E$ can be represented by a bounded complex of finite locally
+free $\mathcal{O}_X$-modules,
+\item the irreducible components of $X$ are quasi-compact,
+\item $X$ is quasi-compact,
+\item there exists a morphism $X \to X'$ of schemes locally of finite type
+over $S$ such that $E$ is the pullback of a perfect object $E'$ on $X'$
whose Chern classes are defined, or
+\item add more here.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Condition (1) is just Definition \ref{definition-defined-on-perfect} part (1).
+Condition (2) implies (1).
+
+\medskip\noindent
+As in (3) assume the irreducible components $X_i$ of $X$ are quasi-compact.
+We view $X_i$ as a reduced integral closed subscheme of $X$.
+The morphism $\coprod X_i \to X$ is an envelope. For each $i$ there
+exists an envelope $X'_i \to X_i$ such that $X'_i$ has an ample family
+of invertible modules, see More on Morphisms, Proposition
+\ref{more-morphisms-proposition-envelope-with-resolution-property}.
+Observe that $f : Y = \coprod X'_i \to X$ is an envelope; small detail omitted.
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-resolution-property-ample-family}
+each $X'_i$ has the resolution property.
+Thus the perfect object $L(f|_{X'_i})^*E$ of $D(\mathcal{O}_{X'_i})$
+can be represented by a bounded
+complex of finite locally free $\mathcal{O}_{X'_i}$-modules, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-resolution-property-perfect-complex}.
+This proves (3) implies (1).
+
+\medskip\noindent
+Part (4) implies (3).
+
+\medskip\noindent
+Let $g : X \to X'$ and $E'$ be as in part (5). Then there exists an
+envelope $f' : Y' \to X'$ such that $L(f')^*E'$ is represented by a
+locally bounded complex $(\mathcal{E}')^\bullet$ of finite locally free
+$\mathcal{O}_{Y'}$-modules. The base change $f : Y = Y' \times_{X'} X \to X$
+of $f'$ is an envelope by Lemma \ref{lemma-base-change-envelope}.
+Moreover, the pullback $\mathcal{E}^\bullet$ of $(\mathcal{E}')^\bullet$
+by the induced morphism $Y \to Y'$ represents $Lf^*E$
+and we see that the Chern classes of $E$ are defined.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-chern-classes-computed}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $E \in D(\mathcal{O}_X)$
+be a perfect object. Assume the Chern classes of $E$ are defined.
+For $g : W \to X$ locally of finite type with $W$ integral, there exists
+a commutative diagram
+$$
+\xymatrix{
+W' \ar[rd]_{g'} \ar[rr]_b & & W \ar[ld]^g \\
+& X
+}
+$$
+with $W'$ integral and $b : W' \to W$ proper birational such that $L(g')^*E$
+is represented by a bounded complex $\mathcal{E}^\bullet$ of locally free
+$\mathcal{O}_{W'}$-modules of constant rank and we have
+$res(c_p(E)) = c_p(\mathcal{E}^\bullet)$ in $A^p(W')$.
+\end{lemma}
+
+\begin{proof}
+Choose an envelope $f : Y \to X$ such that $Lf^*E$ is isomorphic in
+$D(\mathcal{O}_Y)$ to a locally bounded complex $\mathcal{E}^\bullet$
+of finite locally free $\mathcal{O}_Y$-modules. The base change
+$Y \times_X W \to W$ of $f$ is an envelope by
+Lemma \ref{lemma-base-change-envelope}. Choose a point
+$\xi \in Y \times_X W$ mapping to the generic point of $W$
+with the same residue field. Consider the integral closed subscheme
+$W' \subset Y \times_X W$ with generic point $\xi$. The restriction
+of the projection $Y \times_X W \to W$ to $W'$ is a proper birational
+morphism $b : W' \to W$. Set $g' = g \circ b$. Finally, consider the
+pullback $(W' \to Y)^*\mathcal{E}^\bullet$. This is a locally bounded
+complex of finite locally free modules on $W'$. Since $W'$ is integral
+it follows that it is bounded and that the terms have constant rank.
+Finally, by construction $(W' \to Y)^*\mathcal{E}^\bullet$
+represents $L(g')^*E$ and by construction its $p$th Chern class
+gives the restriction of $c_p(E)$ by $W' \to X$. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-commutative-chern-perfect}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $E \in D(\mathcal{O}_X)$ be perfect. If the Chern classes
+of $E$ are defined then
+\begin{enumerate}
+\item $c_p(E)$ is in the center of the algebra $A^*(X)$, and
+\item if $g : X' \to X$ is locally of finite type and $c \in A^*(X' \to X)$,
+then $c \circ c_p(E) = c_p(Lg^*E) \circ c$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows immediately from part (2). Let $g : X' \to X$ and
+$c \in A^*(X' \to X)$ be as in (2). To show that
+$c \circ c_p(E) - c_p(Lg^*E) \circ c = 0$ we use the criterion of
+Lemma \ref{lemma-bivariant-zero}. Thus we may assume that $X$
+is integral and by Lemma \ref{lemma-chern-classes-computed}
+we may even assume that $E$ is represented
+by a bounded complex $\mathcal{E}^\bullet$
+of finite locally free $\mathcal{O}_X$-modules of constant rank.
+Then we have to show that
+$$
+c \cap c_p(\mathcal{E}^\bullet) \cap [X] =
+c_p(\mathcal{E}^\bullet) \cap c \cap [X]
+$$
+in $\CH_*(X')$. This is immediate from
+Lemma \ref{lemma-cap-commutative-chern} and the construction
+of $c_p(\mathcal{E}^\bullet)$ as a polynomial in the
+Chern classes of the locally free modules $\mathcal{E}^n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-additivity-on-perfect}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let
+$$
+E_1 \to E_2 \to E_3 \to E_1[1]
+$$
+be a distinguished triangle of perfect objects in $D(\mathcal{O}_X)$.
+If one of the following conditions holds
+\begin{enumerate}
+\item there exists an envelope $f : Y \to X$ such that
+$Lf^*E_1 \to Lf^*E_2$ can be represented by a map of locally
+bounded complexes of finite locally free $\mathcal{O}_Y$-modules,
+\item $E_1 \to E_2$ can be represented by a map of locally bounded complexes
+of finite locally free $\mathcal{O}_X$-modules,
+\item the irreducible components of $X$ are quasi-compact,
+\item $X$ is quasi-compact, or
+\item add more here,
+\end{enumerate}
+then the Chern classes of $E_1$, $E_2$, $E_3$ are defined and we have
+$c(E_2) = c(E_1) c(E_3)$, $ch(E_2) = ch(E_1) + ch(E_3)$, and
+$P_p(E_2) = P_p(E_1) + P_p(E_3)$.
+\end{lemma}
+
+\begin{proof}
+Let $f : Y \to X$ be an envelope and let
+$\alpha^\bullet : \mathcal{E}_1^\bullet \to \mathcal{E}_2^\bullet$
+be a map of locally bounded complexes of finite locally free
+$\mathcal{O}_Y$-modules representing $Lf^*E_1 \to Lf^*E_2$.
+Then the cone $C(\alpha)^\bullet$ represents $Lf^*E_3$. Since
+$C(\alpha)^n = \mathcal{E}_2^n \oplus \mathcal{E}_1^{n + 1}$
+we see that $C(\alpha)^\bullet$ is a locally bounded complex
+of finite locally free $\mathcal{O}_Y$-modules.
+We conclude that the Chern classes of $E_1$, $E_2$, $E_3$
+are defined. Moreover, recall that $c_p(E_1)$ is defined
+as the unique element of $A^p(X)$ which restricts to
+$c_p(\mathcal{E}_1^\bullet)$ in $A^p(Y)$. Similarly for
+$E_2$ and $E_3$. Hence it suffices
+to prove $c(\mathcal{E}_2^\bullet) =
+c(\mathcal{E}_1^\bullet) c(C(\alpha)^\bullet)$
+in $\prod_{p \geq 0} A^p(Y)$.
+In turn, it suffices to prove this after restricting
+to a connected component of $Y$. Hence we may assume
+the complexes $\mathcal{E}_1^\bullet$ and $\mathcal{E}_2^\bullet$
+are bounded complexes of finite locally free $\mathcal{O}_Y$-modules
+of fixed rank. In this case the desired equality follows from the
+multiplicativity of Lemma \ref{lemma-additivity-chern-classes}.
+In the case of $ch$ or $P_p$ we use the additivity of
+Lemma \ref{lemma-chern-character-additive}.
+
+\medskip\noindent
+In the previous paragraph we have seen that the lemma holds if
+condition (1) is satisfied. Since (2) implies (1) this deals with
+the second case. Assume (3). Arguing exactly as in the proof of
+Lemma \ref{lemma-chern-classes-defined} we find an envelope
+$f : Y \to X$ such that $Y$ is a disjoint union $Y = \coprod Y_i$
+of quasi-compact (and quasi-separated) schemes each having the
+resolution property. Then we may represent the restriction of
+$Lf^*E_1 \to Lf^*E_2$ to $Y_i$ by a map of bounded complexes
+of finite locally free modules, see
+Derived Categories of Schemes, Proposition
+\ref{perfect-proposition-perfect-resolution-property}.
+In this way we see that condition (3) implies condition (1).
+Of course condition (4) implies condition (3) and the proof
+is complete.
+\end{proof}
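The Whitney-type relation $c(E_2) = c(E_1)c(E_3)$ for the cone can be sanity-checked numerically in the Chern root model, where the total Chern series of a bounded complex is the alternating product of the series of its terms and the cone has terms $C(\alpha)^n = \mathcal{E}_2^n \oplus \mathcal{E}_1^{n + 1}$. The following Python sketch, with arbitrarily chosen integer Chern roots and series truncated at order $5$, is only an illustration of the formal identity, not part of the proof.

```python
ORDER = 5  # truncate all power series modulo s^ORDER

def mul(a, b):
    # product of two truncated integer power series
    c = [0] * ORDER
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < ORDER:
                c[i + j] += ai * bj
    return c

def inv(a):
    # inverse of a truncated series with constant term 1
    b = [1] + [0] * (ORDER - 1)
    for n in range(1, ORDER):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def total_chern(terms):
    # terms: {cohomological degree n: list of Chern roots of the term};
    # the total Chern series multiplies (1 + x s) for even n and the
    # inverse series for odd n
    c = [1] + [0] * (ORDER - 1)
    for n, roots in terms.items():
        for x in roots:
            lin = [1, x] + [0] * (ORDER - 2)
            c = mul(c, lin if n % 2 == 0 else inv(lin))
    return c

# hypothetical terms of E_1 and E_2; the cone has C^n = E_2^n + E_1^(n+1)
E1 = {0: [2], 1: [3, 5]}
E2 = {0: [2, 7], 1: [5]}
cone = {n: E2.get(n, []) + E1.get(n + 1, []) for n in range(-1, 2)}

# Whitney-type relation c(E_2) = c(E_1) c(E_3) for the triangle
assert mul(total_chern(E1), total_chern(cone)) == total_chern(E2)
```

Since the identity holds for the untruncated rational functions in the roots, it holds at every truncation order.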
+
+\begin{remark}
+\label{remark-splitting-principle-perfect}
+The Chern classes of a perfect complex, when defined, satisfy a kind of
+splitting principle. Namely, suppose that $(S, \delta), X, E$ are as in
+Definition \ref{definition-defined-on-perfect}
+such that the Chern classes of $E$ are defined.
+Say we want to prove a relation between the bivariant classes
+$c_p(E)$, $P_p(E)$, and $ch_p(E)$. To do this, we may choose an
+envelope $f : Y \to X$ and a locally bounded
+complex $\mathcal{E}^\bullet$ of finite locally free $\mathcal{O}_Y$-modules
+representing $Lf^*E$. By the uniqueness in Lemma \ref{lemma-defined-by-envelope}
+it suffices to prove the desired relation between the bivariant classes
+$c_p(\mathcal{E}^\bullet)$, $P_p(\mathcal{E}^\bullet)$, and
+$ch_p(\mathcal{E}^\bullet)$. Thus we may replace $X$ by a connected
+component of $Y$ and assume that $E$ is represented by a bounded
+complex $\mathcal{E}^\bullet$ of finite locally free modules of fixed rank.
+Using the splitting principle
+(Lemma \ref{lemma-splitting-principle}) we may assume each
+$\mathcal{E}^i$ has a filtration whose successive
+quotients $\mathcal{L}_{i, j}$ are invertible modules.
+Setting $x_{i, j} = c_1(\mathcal{L}_{i, j})$ we see that
+$$
+c(E) =
+\prod\nolimits_{i\text{ even}} \prod\nolimits_j (1 + x_{i, j})
+\prod\nolimits_{i\text{ odd}} \prod\nolimits_j (1 + x_{i, j})^{-1}
+$$
+and
+$$
+P_p(E) = \sum\nolimits_{i\text{ even}} \sum\nolimits_j (x_{i, j})^p -
+\sum\nolimits_{i\text{ odd}} \sum\nolimits_j (x_{i, j})^p
+$$
+Formally taking the logarithm of the expression for $c(E)$ above
+we find that
+$$
+\log(c(E)) = \sum\nolimits_{p \geq 1} (-1)^{p - 1}\frac{P_p(E)}{p}
+$$
+Looking at the construction of the polynomials $P_p$ in
+Example \ref{example-power-sum} it follows that $P_p(E)$
+is the exact same expression in the Chern classes of $E$
+as in the case of vector bundles, in other words, we have
+\begin{align*}
+P_1(E) & = c_1(E), \\
+P_2(E) & = c_1(E)^2 - 2c_2(E), \\
+P_3(E) & = c_1(E)^3 - 3c_1(E)c_2(E) + 3c_3(E), \\
+P_4(E) & = c_1(E)^4 - 4c_1(E)^2c_2(E) + 4c_1(E)c_3(E) + 2c_2(E)^2 - 4c_4(E),
+\end{align*}
+and so on. On the other hand, the bivariant class $P_0(E) = r(E) = ch_0(E)$
+cannot be recovered from the Chern class $c(E)$ of $E$; the Chern class
+does not know about the rank of the complex.
+\end{remark}
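The displayed relations between $P_p(E)$ and the Chern classes are Newton-type identities and can be checked numerically in the splitting model above: read off $c_i$ as the coefficients of the truncated series $\prod (1 + x s) \prod (1 + y s)^{-1}$ and compare with the signed power sums. A small Python sketch with arbitrarily chosen integer Chern roots, purely illustrative:

```python
ORDER = 5  # truncate power series modulo s^ORDER

def mul(a, b):
    # product of two truncated integer power series
    c = [0] * ORDER
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < ORDER:
                c[i + j] += ai * bj
    return c

def inv(a):
    # inverse of a truncated series with constant term 1
    b = [1] + [0] * (ORDER - 1)
    for n in range(1, ORDER):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

# hypothetical Chern roots: xs from even degree terms, ys from odd ones
xs, ys = [2, 3], [5]

# c_i = coefficient of s^i in prod (1 + x s) / prod (1 + y s)
c = [1] + [0] * (ORDER - 1)
for x in xs:
    c = mul(c, [1, x] + [0] * (ORDER - 2))
for y in ys:
    c = mul(c, inv([1, y] + [0] * (ORDER - 2)))

# signed power sums P_p of the Chern roots
P = [sum(x**p for x in xs) - sum(y**p for y in ys) for p in range(ORDER)]
c1, c2, c3, c4 = c[1:5]

assert P[1] == c1
assert P[2] == c1**2 - 2*c2
assert P[3] == c1**3 - 3*c1*c2 + 3*c3
assert P[4] == c1**4 - 4*c1**2*c2 + 4*c1*c3 + 2*c2**2 - 4*c4
```

Note that $P_0 = \#\{x_a\} - \#\{y_b\}$ is the rank, which, as remarked above, does not appear in the coefficients $c_i$.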
+
+\begin{lemma}
+\label{lemma-chern-classes-perfect-dual}
+In Situation \ref{situation-setup} let $X$ be locally of finite type over $S$.
+Let $E \in D(\mathcal{O}_X)$ be a perfect object whose Chern classes are
+defined. Then $c_i(E^\vee) = (-1)^i c_i(E)$, $P_i(E^\vee) = (-1)^iP_i(E)$,
+and $ch_i(E^\vee) = (-1)^ich_i(E)$ in $A^i(X)$.
+\end{lemma}
+
+\begin{proof}
+First proof: argue as in the proof of
+Lemma \ref{lemma-commutative-chern-perfect}
+to reduce to the case where $E$ is represented
+by a bounded complex of finite locally free modules
+of fixed rank and apply Lemma \ref{lemma-chern-classes-dual}.
+Second proof: use the splitting principle discussed
+in Remark \ref{remark-splitting-principle-perfect}
+and use that the Chern roots of $E^\vee$ are the negatives
+of the Chern roots of $E$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-chern-class-perfect-tensor-invertible}
+In Situation \ref{situation-setup} let $X$ be locally of finite type over $S$.
+Let $E$ be a perfect object of $D(\mathcal{O}_X)$ whose Chern classes
+are defined.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module. Then
+$$
+c_i(E \otimes \mathcal{L}) =
+\sum\nolimits_{j = 0}^i
+\binom{r - i + j}{j} c_{i - j}(E) c_1(\mathcal{L})^j
+$$
+provided $E$ has constant rank $r \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+In the case where $E$ is locally free of rank $r$ this is
+Lemma \ref{lemma-chern-classes-E-tensor-L}. The reader can deduce
+the lemma from this special case by a formal computation.
+An alternative is to use the splitting principle of
+Remark \ref{remark-splitting-principle-perfect}.
+In this case one ends up having to prove the following
+algebra fact: if we write formally
+$$
+\frac{\prod_{a = 1, \ldots, n} (1 + x_a)}{\prod_{b = 1, \ldots, m} (1 + y_b)}
+= 1 + c_1 + c_2 + c_3 + \ldots
+$$
+with $c_i$ homogeneous of degree $i$
+in $\mathbf{Z}[x_a, y_b]$ then we have
+$$
+\frac{\prod_{a = 1, \ldots, n} (1 + x_a + t)}{\prod_{b = 1, \ldots, m} (1 + y_b + t)}
+= \sum\nolimits_{i \geq 0} \sum\nolimits_{j = 0}^i
+\binom{r - i + j}{j} c_{i - j} t^j
+$$
+where $r = n - m$. We omit the details.
+\end{proof}
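The omitted algebra fact can be tested numerically: specialize the $x_a$, $y_b$ and $t$ to rational numbers, keep track of degrees with an auxiliary truncation variable $s$ (replace $x_a \mapsto s x_a$, $y_b \mapsto s y_b$, $t \mapsto s t$), and compare coefficients of $s^i$. A Python sketch, using a generalized binomial coefficient since $r - i + j$ may be negative; all specific values below are arbitrary choices:

```python
from fractions import Fraction
from math import factorial

ORDER = 5  # truncate power series modulo s^ORDER

def mul(a, b):
    # product of two truncated power series with rational coefficients
    c = [Fraction(0)] * ORDER
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < ORDER:
                c[i + j] += ai * bj
    return c

def inv(a):
    # inverse of a truncated series with constant term 1
    b = [Fraction(1)] + [Fraction(0)] * (ORDER - 1)
    for n in range(1, ORDER):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def gbinom(a, j):
    # binomial coefficient binom(a, j) for an arbitrary integer a
    num = 1
    for k in range(j):
        num *= a - k
    return Fraction(num, factorial(j))

def series(xs, ys):
    # coefficients c_i of prod_a (1 + x_a s) / prod_b (1 + y_b s)
    c = [Fraction(1)] + [Fraction(0)] * (ORDER - 1)
    for x in xs:
        c = mul(c, [Fraction(1), Fraction(x)] + [Fraction(0)] * (ORDER - 2))
    for y in ys:
        c = mul(c, inv([Fraction(1), Fraction(y)] + [Fraction(0)] * (ORDER - 2)))
    return c

xs, ys = [2, 3, 4], [1]  # arbitrary numeric roots; r = n - m = 2
r = len(xs) - len(ys)
c = series(xs, ys)

# shifting every root by t realizes prod (1 + x_a + t) / prod (1 + y_b + t)
for t in (1, 3):
    shifted = series([x + t for x in xs], [y + t for y in ys])
    for i in range(ORDER):
        assert shifted[i] == sum(
            gbinom(r - i + j, j) * c[i - j] * t**j for j in range(i + 1))
```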
+
+\begin{lemma}
+\label{lemma-chern-classes-perfect-tensor-product}
+In Situation \ref{situation-setup} let $X$ be locally of finite type over $S$.
+Let $E$ and $F$ be perfect objects of $D(\mathcal{O}_X)$ whose Chern classes
+are defined. Then we have
+$$
+c_1(E \otimes_{\mathcal{O}_X}^\mathbf{L} F) =
+r(E) c_1(F) + r(F) c_1(E)
+$$
+and for $c_2(E \otimes_{\mathcal{O}_X}^\mathbf{L} F)$ we have the expression
+$$
+r(E) c_2(F) + r(F) c_2(E) + {r(E) \choose 2} c_1(F)^2 +
+(r(E)r(F) - 1) c_1(F)c_1(E) + {r(F) \choose 2} c_1(E)^2
+$$
+and so on for higher Chern classes in $A^*(X)$. Similarly, we have
+$ch(E \otimes_{\mathcal{O}_X}^\mathbf{L} F) = ch(E) ch(F)$
+in $A^*(X) \otimes \mathbf{Q}$. More precisely, we have
+$$
+P_p(E \otimes_{\mathcal{O}_X}^\mathbf{L} F) = \sum\nolimits_{p_1 + p_2 = p}
+{p \choose p_1} P_{p_1}(E) P_{p_2}(F)
+$$
+in $A^p(X)$.
+\end{lemma}
+
+\begin{proof}
+After choosing an envelope $f : Y \to X$ such that $Lf^*E$ and $Lf^*F$
+can be represented by locally bounded complexes of finite locally
+free $\mathcal{O}_Y$-modules this follows by a computation from the
+corresponding result for vector bundles in
+Lemmas \ref{lemma-chern-classes-tensor-product} and
+\ref{lemma-chern-character-multiplicative}.
+A better proof is probably to use the splitting principle as in
+Remark \ref{remark-splitting-principle-perfect}
+and reduce the lemma to computations in polynomial rings
+which we describe in the next paragraph.
+
+\medskip\noindent
+Let $A$ be a commutative ring (for us this will be the subring of the
+bivariant Chow ring of $X$ generated by Chern classes).
+Let $S$ be a finite set together with maps $\epsilon : S \to \{\pm 1\}$
+and $f : S \to A$. Define
+$$
+P_p(S, f , \epsilon) = \sum\nolimits_{s \in S} \epsilon(s) f(s)^p
+$$
+in $A$. Given a second triple $(S', f', \epsilon')$
+the equality that has to be shown for $P_p$ is the equality
+$$
+P_p(S \times S', f + f' , \epsilon \epsilon') =
+\sum\nolimits_{p_1 + p_2 = p}
+{p \choose p_1} P_{p_1}(S, f, \epsilon) P_{p_2}(S', f', \epsilon')
+$$
+To see this is true, one reduces to the polynomial ring on variables
+$S \amalg S'$ and one shows that each term $f(s)^if'(s')^j$ occurs
+on the left and right hand side with the same coefficient.
+To verify the formulas for $c_1(E \otimes_{\mathcal{O}_X}^\mathbf{L} F)$
+and $c_2(E \otimes_{\mathcal{O}_X}^\mathbf{L} F)$ we use the splitting
+principle to reduce to checking these formulae in a torsion free ring.
+Then we use the relationship between $P_j(E)$ and $c_i(E)$ proved
+in Remark \ref{remark-splitting-principle-perfect}. For example
+$$
+c_1(E \otimes F) = P_1(E \otimes F) = r(F)P_1(E) + r(E)P_1(F) =
+r(F)c_1(E) + r(E)c_1(F)
+$$
+the middle equation because $r(E) = P_0(E)$ by definition. Similarly, we have
+\begin{align*}
+& 2c_2(E \otimes F) \\
+& = c_1(E \otimes F)^2 - P_2(E \otimes F) \\
+& =
+(r(F)c_1(E) + r(E)c_1(F))^2 -
+r(F)P_2(E) - 2P_1(E)P_1(F) - r(E)P_2(F) \\
+& =
+(r(F)c_1(E) + r(E)c_1(F))^2 -
+r(F)(c_1(E)^2 - 2c_2(E)) - 2c_1(E)c_1(F) - \\
+& \quad r(E)(c_1(F)^2 - 2c_2(F))
+\end{align*}
+which the reader can verify agrees with the formula in the statement
+of the lemma up to a factor of $2$.
+\end{proof}
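The combinatorial identity for $P_p$ used in the proof is the binomial theorem with signs and can be checked directly on small examples. A Python sketch of the $(S, f, \epsilon)$ bookkeeping from the proof; the chosen sets and values are arbitrary:

```python
from math import comb

def P(p, T):
    # T is a list of pairs (epsilon(s), f(s)) with epsilon(s) in {+1, -1}
    return sum(e * v**p for e, v in T)

# two hypothetical triples (S, f, epsilon) and (S', f', epsilon')
S1 = [(1, 2), (-1, 5)]
S2 = [(1, 3), (1, 7), (-1, 4)]

# the product triple (S x S', f + f', epsilon * epsilon')
S12 = [(e1 * e2, v1 + v2) for e1, v1 in S1 for e2, v2 in S2]

# P_p of the product is the binomial convolution of the two P's
for p in range(6):
    lhs = P(p, S12)
    rhs = sum(comb(p, k) * P(k, S1) * P(p - k, S2) for k in range(p + 1))
    assert lhs == rhs
```

Expanding $(f(s) + f'(s'))^p$ by the binomial theorem and summing over $S \times S'$ with the product signs gives the right hand side term by term.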
+
+
+
+
+
+
+
+\section{A baby case of localized Chern classes}
+\label{section-preparation-localized-chern}
+
+\noindent
+In this section we discuss some properties of the bivariant classes
+constructed in the following lemma; most of these properties follow
+immediately from the characterization given in the lemma. We urge the
+reader to skip the rest of the section.
+
+\begin{lemma}
+\label{lemma-silly}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be
+locally of finite type over $S$. Let $i_j : X_j \to X$, $j = 1, 2$
+be closed immersions such that $X = X_1 \cup X_2$ set theoretically. Let
+$E_2 \in D(\mathcal{O}_{X_2})$ be a perfect object. Assume
+\begin{enumerate}
+\item Chern classes of $E_2$ are defined,
+\item the restriction $E_2|_{X_1 \cap X_2}$ is zero,
+resp.\ isomorphic to a finite locally free $\mathcal{O}_{X_1 \cap X_2}$-module
+of rank $< p$ sitting in cohomological degree $0$.
+\end{enumerate}
+Then there is a canonical bivariant class
+$$
+P'_p(E_2),\text{ resp. }c'_p(E_2) \in A^p(X_2 \to X)
+$$
+characterized by the property
+$$
+P'_p(E_2) \cap i_{2, *} \alpha_2 = P_p(E_2) \cap \alpha_2
+\quad\text{and}\quad
+P'_p(E_2) \cap i_{1, *} \alpha_1 = 0,
+$$
+respectively
+$$
+c'_p(E_2) \cap i_{2, *} \alpha_2 = c_p(E_2) \cap \alpha_2
+\quad\text{and}\quad
+c'_p(E_2) \cap i_{1, *} \alpha_1 = 0
+$$
+for $\alpha_i \in \CH_k(X_i)$ and similarly after any base change
+$X' \to X$ locally of finite type.
+\end{lemma}
+
+\begin{proof}
+We are going to use the material of Section \ref{section-pre-derived}
+without further mention.
+
+\medskip\noindent
+Assume $E_2|_{X_1 \cap X_2}$ is zero.
+Consider a morphism of schemes $X' \to X$
+which is locally of finite type and denote $i'_j : X'_j \to X'$ the
+base change of $i_j$. By Lemma \ref{lemma-exact-sequence-closed-chow}
+we can write any element $\alpha' \in \CH_k(X')$ as
+$i'_{1, *}\alpha'_1 + i'_{2, *}\alpha'_2$ where
+$\alpha'_2 \in \CH_k(X'_2)$
+is well defined up to an element in the image of pushforward
+by $X'_1 \cap X'_2 \to X'_2$. Then we can set
+$P'_p(E_2) \cap \alpha' = P_p(E_2) \cap \alpha'_2 \in \CH_{k - p}(X'_2)$. This
+is well defined by our assumption that $E_2$ restricts
+to zero on $X_1 \cap X_2$.
+
+\medskip\noindent
+If $E_2|_{X_1 \cap X_2}$ is isomorphic to a finite locally free
+$\mathcal{O}_{X_1 \cap X_2}$-module of rank $< p$ sitting in
+cohomological degree $0$, then $c_p(E_2|_{X_1 \cap X_2}) = 0$
+by rank considerations and we can argue in exactly the same manner.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-independent}
+In Lemma \ref{lemma-silly} the bivariant class
+$P'_p(E_2)$, resp.\ $c'_p(E_2)$ in $A^p(X_2 \to X)$
+does not depend on the choice of $X_1$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $X'_1 \subset X$ is another closed subscheme such that
+$X = X'_1 \cup X_2$ set theoretically and the restriction
+$E_2|_{X'_1 \cap X_2}$ is zero, resp.\ isomorphic to a
+finite locally free $\mathcal{O}_{X'_1 \cap X_2}$-module
+of rank $< p$ sitting in cohomological degree $0$.
+Then $X = (X_1 \cap X'_1) \cup X_2$. Hence we can write
+any element $\alpha \in \CH_k(X)$ as $i_*\beta + i_{2, *}\alpha_2$ with
+$\alpha_2 \in \CH_k(X_2)$ and $\beta \in \CH_k(X_1 \cap X'_1)$.
+Thus it is clear that
+$P'_p(E_2) \cap \alpha = P_p(E_2) \cap \alpha_2 \in \CH_{k - p}(X_2)$,
+resp.\ $c'_p(E_2) \cap \alpha = c_p(E_2) \cap \alpha_2 \in \CH_{k - p}(X_2)$,
+is independent of whether we use $X_1$ or $X'_1$. Similarly
+after any base change.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-silly}
+In Lemma \ref{lemma-silly} let $X' \to X$ be a morphism
+which is locally of finite type. Denote $X' = X'_1 \cup X'_2$
+and $E'_2 \in D(\mathcal{O}_{X'_2})$ the pullbacks to $X'$.
+Then the class $P'_p(E_2')$, resp.\ $c'_p(E_2')$ in
+$A^p(X_2' \to X')$ constructed in Lemma \ref{lemma-silly} using
+$X' = X'_1 \cup X'_2$ and $E_2'$ is the restriction
+(Remark \ref{remark-restriction-bivariant})
+of the class $P'_p(E_2)$, resp.\ $c'_p(E_2)$ in $A^p(X_2 \to X)$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the characterization of these classes in
+Lemma \ref{lemma-silly}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-silly}
+In Lemma \ref{lemma-silly} say $E_2$ is the restriction of a
+perfect $E \in D(\mathcal{O}_X)$ such that $E|_{X_1}$ is zero,
+resp.\ isomorphic to a finite locally free $\mathcal{O}_{X_1}$-module
+of rank $< p$ sitting in cohomological degree $0$.
+If Chern classes of $E$ are defined, then
+$i_{2, *} \circ P'_p(E_2) = P_p(E)$,
+resp.\ $i_{2, *} \circ c'_p(E_2) = c_p(E)$
+(with $\circ$ as in Lemma \ref{lemma-push-proper-bivariant}).
+\end{lemma}
+
+\begin{proof}
+First, assume $E|_{X_1}$ is zero.
+With notations as in the proof of Lemma \ref{lemma-silly}
+the lemma in this case follows from
+\begin{align*}
+P_p(E) \cap \alpha'
+& =
+i'_{1, *}(P_p(E) \cap \alpha'_1) +
+i'_{2, *}(P_p(E) \cap \alpha'_2) \\
+& =
+i'_{1, *}(P_p(E|_{X_1}) \cap \alpha'_1) +
+i'_{2, *}(P'_p(E_2) \cap \alpha') \\
+& =
+i'_{2, *}(P'_p(E_2) \cap \alpha')
+\end{align*}
+The case where $E|_{X_1}$ is isomorphic to a finite locally free
+$\mathcal{O}_{X_1}$-module of rank $< p$ sitting in cohomological degree $0$
+is similar.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-shrink}
+In Lemma \ref{lemma-silly} suppose we have closed subschemes
+$X'_2 \subset X_2$ and $X_1 \subset X'_1 \subset X$ such that
+$X = X'_1 \cup X'_2$ set theoretically. Assume $E_2|_{X'_1 \cap X_2}$
+is zero, resp.\ isomorphic to a finite locally free module
+of rank $< p$ placed in degree $0$. Then we have
+$(X'_2 \to X_2)_* \circ P'_p(E_2|_{X'_2}) = P'_p(E_2)$,
+resp.\ $(X'_2 \to X_2)_* \circ c'_p(E_2|_{X'_2}) = c'_p(E_2)$
+(with $\circ$ as in Lemma \ref{lemma-push-proper-bivariant}).
+\end{lemma}
+
+\begin{proof}
+This follows immediately from the characterization of these classes
+in Lemma \ref{lemma-silly}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-commutes}
+In Lemma \ref{lemma-silly} let $f : Y \to X$ be locally of finite type
+and say $c \in A^*(Y \to X)$. Then
+$$
+c \circ P'_p(E_2) = P'_p(Lf_2^*E_2) \circ c
+\quad\text{resp.}\quad
+c \circ c'_p(E_2) = c'_p(Lf_2^*E_2) \circ c
+$$
+in $A^*(Y_2 \to Y)$ where $f_2 : Y_2 \to X_2$ is the base change of $f$.
+\end{lemma}
+
+\begin{proof}
+Let $\alpha \in \CH_k(X)$. We may write
+$$
+\alpha = \alpha_1 + \alpha_2
+$$
+with $\alpha_i \in \CH_k(X_i)$; we are omitting the pushforwards
+by the closed immersions $X_i \to X$. The reader then checks that
+$c'_p(E_2) \cap \alpha = c_p(E_2) \cap \alpha_2$,
+$c \cap c'_p(E_2) \cap \alpha = c \cap c_p(E_2) \cap \alpha_2$,
+$c \cap \alpha = c \cap \alpha_1 + c \cap \alpha_2$, and
+$c'_p(Lf_2^*E_2) \cap c \cap \alpha = c_p(Lf_2^*E_2) \cap c \cap \alpha_2$.
+We conclude by Lemma \ref{lemma-commutative-chern-perfect}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-compose}
+In Lemma \ref{lemma-silly} assume $E_2|_{X_1 \cap X_2}$ is zero. Then
+\begin{align*}
+P'_1(E_2) & = c'_1(E_2), \\
+P'_2(E_2) & = c'_1(E_2)^2 - 2c'_2(E_2), \\
+P'_3(E_2) & = c'_1(E_2)^3 - 3c'_1(E_2)c'_2(E_2) + 3c'_3(E_2), \\
+P'_4(E_2) & = c'_1(E_2)^4 - 4c'_1(E_2)^2c'_2(E_2) +
+4c'_1(E_2)c'_3(E_2) + 2c'_2(E_2)^2 - 4c'_4(E_2),
+\end{align*}
+and so on with multiplication as in Remark \ref{remark-ring-loc-classes}.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense because the zero sheaf has rank $< 1$ and
+hence the classes $c'_p(E_2)$ are defined for all $p \geq 1$. The equalities
+follow immediately from the characterization of the classes produced
+by Lemma \ref{lemma-silly} and the corresponding result for
+capping with the Chern classes of $E_2$ given in
+Remark \ref{remark-splitting-principle-perfect}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-sum-c}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be
+locally of finite type over $S$. Let $i_j : X_j \to X$, $j = 1, 2$
+be closed immersions such that $X = X_1 \cup X_2$ set theoretically. Let
+$E, F \in D(\mathcal{O}_X)$ be perfect objects. Assume
+\begin{enumerate}
+\item Chern classes of $E$ and $F$ are defined,
+\item the restrictions $E|_{X_1 \cap X_2}$ and $F|_{X_1 \cap X_2}$
+are isomorphic to finite locally free $\mathcal{O}_{X_1 \cap X_2}$-modules
+of rank $< p$ and $< q$ sitting in cohomological degree $0$.
+\end{enumerate}
+With notation as in Remark \ref{remark-ring-loc-classes} set
+$$
+c^{(p)}(E) = 1 + c_1(E) + \ldots + c_{p - 1}(E) +
+c'_p(E|_{X_2}) + c'_{p + 1}(E|_{X_2}) + \ldots \in A^{(p)}(X_2 \to X)
+$$
+with $c'_p(E|_{X_2})$ as in Lemma \ref{lemma-silly}. Similarly
+for $c^{(q)}(F)$ and $c^{(p + q)}(E \oplus F)$.
+Then $c^{(p + q)}(E \oplus F) = c^{(p)}(E)c^{(q)}(F)$
+in $A^{(p + q)}(X_2 \to X)$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the characterization of the classes in
+Lemma \ref{lemma-silly} and the additivity in
+Lemma \ref{lemma-additivity-on-perfect}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-sum-P}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be
+locally of finite type over $S$. Let $i_j : X_j \to X$, $j = 1, 2$
+be closed immersions such that $X = X_1 \cup X_2$ set theoretically. Let
+$E, F \in D(\mathcal{O}_{X_2})$ be perfect objects. Assume
+\begin{enumerate}
+\item Chern classes of $E$ and $F$ are defined,
+\item the restrictions $E|_{X_1 \cap X_2}$ and $F|_{X_1 \cap X_2}$ are zero.
+\end{enumerate}
+Denote $P'_p(E), P'_p(F), P'_p(E \oplus F) \in A^p(X_2 \to X)$ for $p \geq 0$
+the classes constructed in Lemma \ref{lemma-silly}. Then
+$P'_p(E \oplus F) = P'_p(E) + P'_p(F)$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the characterization of the classes in
+Lemma \ref{lemma-silly} and the additivity in
+Lemma \ref{lemma-additivity-on-perfect}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-tensor-invertible}
+In Lemma \ref{lemma-silly} assume $E_2$ has constant rank $0$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module. Then
+$$
+c'_i(E_2 \otimes \mathcal{L}) =
+\sum\nolimits_{j = 0}^i
+\binom{- i + j}{j} c'_{i - j}(E_2) c_1(\mathcal{L})^j
+$$
+\end{lemma}
+
+\begin{proof}
+The assumption on rank implies that $E_2|_{X_1 \cap X_2}$ is zero.
+Hence $c'_i(E_2)$ is defined for all $i \geq 1$ and the statement
+makes sense. The actual equality follows
+immediately from Lemma \ref{lemma-chern-class-perfect-tensor-invertible}
+and the characterization of $c'_i$ in Lemma \ref{lemma-silly}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-silly-tensor-product}
+In Situation \ref{situation-setup} let $X$ be locally of finite type over $S$.
+Let
+$$
+X = X_1 \cup X_2 = X'_1 \cup X'_2
+$$
+be two ways of writing $X$ as a set theoretic union of closed subschemes.
+Let $E$, $E'$ be perfect objects of $D(\mathcal{O}_X)$
+whose Chern classes are defined.
+Assume that $E|_{X_1}$ and $E'|_{X'_1}$ are zero\footnote{Presumably there
+is a variant of this lemma where we only assume these restrictions are
+isomorphic to finite locally free modules
+of rank $< p$ and $< p'$.}. Denote
+\begin{enumerate}
+\item $r = P'_0(E) \in A^0(X_2 \to X)$ and
+$r' = P'_0(E') \in A^0(X'_2 \to X)$,
+\item $\gamma_p = c'_p(E|_{X_2}) \in A^p(X_2 \to X)$ and
+$\gamma'_p = c'_p(E'|_{X'_2}) \in A^p(X'_2 \to X)$,
+\item $\chi_p = P'_p(E|_{X_2}) \in A^p(X_2 \to X)$ and
+$\chi'_p = P'_p(E'|_{X'_2}) \in A^p(X'_2 \to X)$
+\end{enumerate}
+the classes constructed in Lemma \ref{lemma-silly}. Then we have
+$$
+c'_1((E \otimes_{\mathcal{O}_X}^\mathbf{L} E')|_{X_2 \cap X'_2}) =
+r \gamma'_1 + r' \gamma_1
+$$
+in $A^1(X_2 \cap X'_2 \to X)$ and
+$$
+c'_2((E \otimes_{\mathcal{O}_X}^\mathbf{L} E')|_{X_2 \cap X'_2}) =
+r \gamma'_2 + r' \gamma_2 + {r \choose 2} (\gamma'_1)^2 +
+(rr' - 1) \gamma'_1\gamma_1 + {r' \choose 2} \gamma_1^2
+$$
+in $A^2(X_2 \cap X'_2 \to X)$ and so on for higher Chern classes.
+Similarly, we have
+$$
+P'_p((E \otimes_{\mathcal{O}_X}^\mathbf{L} E')|_{X_2 \cap X'_2}) =
+\sum\nolimits_{p_1 + p_2 = p}
+{p \choose p_1} \chi_{p_1} \chi'_{p_2}
+$$
+in $A^p(X_2 \cap X'_2 \to X)$.
+\end{lemma}
+
+\begin{proof}
+First we observe that the statement makes sense. Namely, we have
+$X = (X_2 \cap X'_2) \cup Y$ where
+$Y = (X_1 \cap X'_1) \cup (X_1 \cap X'_2) \cup (X_2 \cap X'_1)$
+and the object $E \otimes_{\mathcal{O}_X}^\mathbf{L} E'$
+restricts to zero on $Y$.
+The actual equalities follow from the characterization
+of our classes in Lemma \ref{lemma-silly}
+and the equalities of Lemma \ref{lemma-chern-classes-perfect-tensor-product}.
+We omit the details.
+\end{proof}
+
+
+
+
+
+
+
+\section{Gysin at infinity}
+\label{section-gysin-at-infty}
+
+\noindent
+This section is about the bivariant class constructed in the next
+lemma. We urge the reader to skip the rest of the section.
+
+\begin{lemma}
+\label{lemma-gysin-at-infty}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let
+$b : W \to \mathbf{P}^1_X$ be a proper morphism of schemes
+which is an isomorphism over $\mathbf{A}^1_X$.
+Denote $i_\infty : W_\infty \to W$ the inverse image of the divisor
+$D_\infty \subset \mathbf{P}^1_X$ with complement $\mathbf{A}^1_X$.
+Then there is a canonical bivariant class
+$$
+C \in A^0(W_\infty \to X)
+$$
+with the property that
+$i_{\infty, *}(C \cap \alpha) = i_{0, *}\alpha$
+for $\alpha \in \CH_k(X)$ and similarly after any base change by
+$X' \to X$ locally of finite type.
+\end{lemma}
+
+\begin{proof}
+Given $\alpha \in \CH_k(X)$ there exists a $\beta \in \CH_{k + 1}(W)$
+restricting to the flat pullback of $\alpha$ on $b^{-1}(\mathbf{A}^1_X)$, see
+Lemma \ref{lemma-exact-sequence-open}.
+A second choice of $\beta$ differs from $\beta$ by a cycle
+supported on $W_\infty$, see
+Lemma \ref{lemma-restrict-to-open}. Since the normal bundle of the effective
+Cartier divisor $D_\infty \subset \mathbf{P}^1_X$ of
+(\ref{equation-zero-infty}) is trivial,
+the gysin homomorphism $i_\infty^*$ kills cycle classes
+supported on $W_\infty$, see Remark \ref{remark-gysin-on-cycles}.
+Hence setting $C \cap \alpha = i_\infty^*\beta$ is well defined.
+
+\medskip\noindent
+Since $W_\infty$ and $W_0 = X \times \{0\}$
+are the pullbacks of the rationally equivalent effective Cartier divisors
+$D_0, D_\infty$ in $\mathbf{P}^1_X$, we see that $i_\infty^*\beta$ and
+$i_0^*\beta$ map to the same cycle class on $W$; namely, both
+represent the class $c_1(\mathcal{O}_{\mathbf{P}^1_X}(1)) \cap \beta$ by
+Lemma \ref{lemma-support-cap-effective-Cartier}. By our choice of
+$\beta$ we have $i_0^*\beta = \alpha$ as cycles on
+$W_0 = X \times \{0\}$, see for example
+Lemma \ref{lemma-relative-effective-cartier}.
+Thus we see that $i_{\infty, *}(C \cap \alpha) = i_{0, *}\alpha$
+as stated in the lemma.
+
+\medskip\noindent
+Observe that the assumptions on $b$ are preserved by any base change
+by $X' \to X$ locally of finite type. Hence we get an operation
+$C \cap - : \CH_k(X') \to \CH_k(W'_\infty)$ by the same construction as above.
+To see that this family of operations defines a bivariant class,
+we consider the diagram
+$$
+\xymatrix{
+& & & \CH_*(X) \ar[d]^{\text{flat pullback}} \\
+\CH_{* + 1}(W_\infty) \ar[r] \ar[rd]^0 &
+\CH_{* + 1}(W) \ar[d]^{i_\infty^*} \ar[rr]^{\text{flat pullback}} & &
+\CH_{* + 1}(\mathbf{A}^1_X) \ar[r] \ar@{..>}[lld]^{C \cap -} &
+0 \\
+& \CH_*(W_\infty)
+}
+$$
+for $X$ as indicated and the base change of this diagram for any $X' \to X$.
+We know that flat pullback and $i_\infty^*$ are bivariant operations, see
+Lemmas \ref{lemma-flat-pullback-bivariant} and \ref{lemma-gysin-bivariant}.
+Then a formal argument (involving huge diagrams of schemes and their
+chow groups) shows that the dotted arrow is a bivariant operation.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-gysin-at-infty}
+In Lemma \ref{lemma-gysin-at-infty} let $X' \to X$ be a morphism
+which is locally of finite type. Denote $b' : W' \to \mathbf{P}^1_{X'}$
+and $i'_\infty : W'_\infty \to W'$ the base changes of $b$ and $i_\infty$.
+Then the class $C' \in A^0(W'_\infty \to X')$ constructed as in
+Lemma \ref{lemma-gysin-at-infty} using $b'$ is the restriction
+(Remark \ref{remark-restriction-bivariant}) of $C$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the construction and the fact that a similar
+statement holds for flat pullback and $i_\infty^*$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-at-infty-independent}
+In Lemma \ref{lemma-gysin-at-infty} let $g : W' \to W$ be a proper morphism
+which is an isomorphism over $\mathbf{A}^1_X$. Let
+$C' \in A^0(W'_\infty \to X)$ and $C \in A^0(W_\infty \to X)$
+be the classes constructed in Lemma \ref{lemma-gysin-at-infty}.
+Then $g_{\infty, *} \circ C' = C$ in $A^0(W_\infty \to X)$.
+\end{lemma}
+
+\begin{proof}
+Set $b' = b \circ g : W' \to \mathbf{P}^1_X$. Denote
+$i'_\infty : W'_\infty \to W'$ the inclusion morphism.
+Denote $g_\infty : W'_\infty \to W_\infty$ the restriction of $g$.
+Given $\alpha \in \CH_k(X)$ choose $\beta' \in \CH_{k + 1}(W')$
+restricting to the flat pullback of $\alpha$ on $(b')^{-1}\mathbf{A}^1_X$.
+Then $\beta = g_*\beta' \in \CH_{k + 1}(W)$ restricts to the
+flat pullback of $\alpha$ on $b^{-1}\mathbf{A}^1_X$.
+Then $i_\infty^*\beta = g_{\infty, *}(i'_\infty)^*\beta'$
+by Lemma \ref{lemma-closed-in-X-gysin}.
This, together with the corresponding fact after base change by
morphisms $X' \to X$ locally of finite type, proves
the assertion of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homomorphism-pre}
+In Lemma \ref{lemma-gysin-at-infty} we have
+$C \circ (W_\infty \to X)_* \circ i_\infty^* = i_\infty^*$.
+\end{lemma}
+
+\begin{proof}
+Let $\beta \in \CH_{k + 1}(W)$. Denote $i_0 : X = X \times \{0\} \to W$
+the closed immersion of the fibre over $0$ in $\mathbf{P}^1$. Then
+$(W_\infty \to X)_* i_\infty^* \beta = i_0^*\beta$ in $\CH_k(X)$ because
+$i_{\infty, *}i_\infty^*\beta$ and $i_{0, *}i_0^*\beta$
+represent the same class on $W$ (for example by
+Lemma \ref{lemma-support-cap-effective-Cartier})
+and hence pushforward to the same class on $X$.
+The restriction of $\beta$ to $b^{-1}(\mathbf{A}^1_X)$
+restricts to the flat pullback of
+$i_0^*\beta = (W_\infty \to X)_* i_\infty^* \beta$ because we can check
+this after pullback by $i_0$, see
+Lemmas \ref{lemma-linebundle} and \ref{lemma-linebundle-formulae}.
+Hence we may use $\beta$ when computing the image of
+$(W_\infty \to X)_*i_\infty^*\beta$ under $C$
+and we get the desired result.
+\end{proof}
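
\noindent
Spelled out on cycle classes (this is merely the lemma applied to a class),
the statement reads: for $\beta \in \CH_{k + 1}(W)$ we have
$$
C \cap (W_\infty \to X)_*(i_\infty^*\beta) = i_\infty^*\beta
\quad\text{in}\quad \CH_k(W_\infty).
$$
This is the form in which the lemma is used in the proofs below.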
+
+\begin{lemma}
+\label{lemma-gysin-at-infty-commutes}
+In Lemma \ref{lemma-gysin-at-infty} let $f : Y \to X$ be a morphism
+locally of finite type and $c \in A^*(Y \to X)$. Then $C \circ c = c \circ C$
+in $A^*(W_\infty \times_X Y \to X)$.
+\end{lemma}
+
+\begin{proof}
+Consider the commutative diagram
+$$
+\xymatrix{
+W_\infty \times_X Y \ar@{=}[r] &
+W_{Y, \infty} \ar[r]_{i_{Y, \infty}} \ar[d] &
+W_Y \ar[r]_{b_Y} \ar[d] &
+\mathbf{P}^1_Y \ar[r]_{p_Y} \ar[d] &
+Y \ar[d]^f \\
+& W_\infty \ar[r]^{i_\infty} &
+W \ar[r]^b &
+\mathbf{P}^1_X \ar[r]^p &
+X
+}
+$$
with cartesian squares. For an element $\alpha \in \CH_k(X)$
+choose $\beta \in \CH_{k + 1}(W)$ whose restriction to $b^{-1}(\mathbf{A}^1_X)$
+is the flat pullback of $\alpha$. Then $c \cap \beta$ is a class
+in $\CH_*(W_Y)$ whose restriction to $b_Y^{-1}(\mathbf{A}^1_Y)$
+is the flat pullback of $c \cap \alpha$. Next, we have
+$$
+i_{Y, \infty}^*(c \cap \beta) = c \cap i_\infty^*\beta
+$$
+because $c$ is a bivariant class. This exactly says that
+$C \cap c \cap \alpha = c \cap C \cap \alpha$. The same argument
+works after any base change by $X' \to X$ locally of finite type.
+This proves the lemma.
+\end{proof}
+
+
+
+
+
+\section{Preparation for localized Chern classes}
+\label{section-preparation-localized-chern-II}
+
+\noindent
+In this section we discuss some properties of the bivariant classes
+constructed in the following lemma. We urge the
+reader to skip the rest of the section.
+
+\begin{lemma}
+\label{lemma-localized-chern-pre}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be
+locally of finite type over $S$. Let $Z \subset X$ be a closed subscheme.
+Let
+$$
+b : W \longrightarrow \mathbf{P}^1_X
+$$
+be a proper morphism of schemes. Let $Q \in D(\mathcal{O}_W)$ be a
+perfect object. Denote $W_\infty \subset W$ the inverse image of the divisor
+$D_\infty \subset \mathbf{P}^1_X$ with complement $\mathbf{A}^1_X$.
+We assume
+\begin{enumerate}
+\item[(A0)] Chern classes of $Q$ are defined
+(Section \ref{section-pre-derived}),
+\item[(A1)] $b$ is an isomorphism over $\mathbf{A}^1_X$,
+\item[(A2)] there exists a closed subscheme $T \subset W_\infty$
+containing all points of $W_\infty$ lying over $X \setminus Z$ such that
+$Q|_T$ is zero, resp.\ isomorphic to a finite locally free
+$\mathcal{O}_T$-module of rank $< p$ sitting in cohomological degree $0$.
+\end{enumerate}
+Then there exists a canonical bivariant class
+$$
+P'_p(Q),\text{ resp. }c'_p(Q) \in A^p(Z \to X)
+$$
+with
+$(Z \to X)_* \circ P'_p(Q) = P_p(Q|_{X \times \{0\}})$,
+resp.\ $(Z \to X)_* \circ c'_p(Q) = c_p(Q|_{X \times \{0\}})$.
+\end{lemma}
+
+\begin{proof}
+Denote $E \subset W_\infty$ the inverse image of $Z$. Then
+$W_\infty = T \cup E$ and $b$ induces a proper morphism $E \to Z$.
+Denote $C \in A^0(W_\infty \to X)$ the bivariant class constructed
+in Lemma \ref{lemma-gysin-at-infty}. Denote $P'_p(Q|_E)$, resp.\ $c'_p(Q|_E)$
+in $A^p(E \to W_\infty)$ the bivariant class constructed
+in Lemma \ref{lemma-silly}. This makes sense because
+$(Q|_E)|_{T \cap E}$ is zero, resp.\ isomorphic to a finite locally free
+$\mathcal{O}_{E \cap T}$-module of rank $< p$ sitting in
+cohomological degree $0$ by assumption (A2). Then we define
+$$
+P'_p(Q) = (E \to Z)_* \circ P'_p(Q|_E) \circ C,\text{ resp. }
+c'_p(Q) = (E \to Z)_* \circ c'_p(Q|_E) \circ C
+$$
+This is a bivariant class, see Lemma \ref{lemma-push-proper-bivariant}.
+Since $E \to Z \to X$ is equal to $E \to W_\infty \to W \to X$ we see that
+\begin{align*}
+(Z \to X)_* \circ c'_p(Q)
+& =
+(W \to X)_* \circ i_{\infty, *} \circ (E \to W_\infty)_*
+\circ c'_p(Q|_E) \circ C \\
+& =
+(W \to X)_* \circ i_{\infty, *} \circ c_p(Q|_{W_\infty}) \circ C \\
+& =
+(W \to X)_* \circ c_p(Q) \circ i_{\infty, *} \circ C \\
+& =
+(W \to X)_*\circ c_p(Q) \circ i_{0, *} \\
+& =
+(W \to X)_* \circ i_{0, *} \circ c_p(Q|_{X \times \{0\}}) \\
+& =
+c_p(Q|_{X \times \{0\}})
+\end{align*}
+The second equality holds by Lemma \ref{lemma-silly-silly}.
+The third equality because $c_p(Q)$ is a bivariant class.
+The fourth equality by Lemma \ref{lemma-gysin-at-infty}.
+The fifth equality because $c_p(Q)$ is a bivariant class.
The final equality holds because the composition $W_0 \to W \to X$
is the identity on $X$ if we identify $W_0$ with $X$ as we've
done above. The exact same sequence of equations works to
+prove the property for $P'_p(Q)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-localized-chern-pre}
+In Lemma \ref{lemma-localized-chern-pre} let $X' \to X$ be a morphism
+which is locally of finite type. Denote
+$Z'$, $b' : W' \to \mathbf{P}^1_{X'}$, and $T' \subset W'_\infty$
+the base changes of $Z$, $b : W \to \mathbf{P}^1_X$, and $T \subset W_\infty$.
+Set $Q' = (W' \to W)^*Q$. Then the class
+$P'_p(Q')$, resp.\ $c'_p(Q')$ in $A^p(Z' \to X')$ constructed as in
+Lemma \ref{lemma-localized-chern-pre} using $b'$, $Q'$, and $T'$
+is the restriction (Remark \ref{remark-restriction-bivariant})
+of the class $P'_p(Q)$, resp.\ $c'_p(Q)$ in $A^p(Z \to X)$.
+\end{lemma}
+
+\begin{proof}
+Recall that the construction is as follows
+$$
+P'_p(Q) = (E \to Z)_* \circ P'_p(Q|_E) \circ C,\text{ resp. }
+c'_p(Q) = (E \to Z)_* \circ c'_p(Q|_E) \circ C
+$$
+Thus the lemma follows from the corresponding base change property
+for $C$ (Lemma \ref{lemma-base-change-gysin-at-infty})
+and the fact that the same base change property holds for the classes
+constructed in Lemma \ref{lemma-silly} (small detail omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localized-chern-pre-independent}
+In Lemma \ref{lemma-localized-chern-pre} the bivariant class
+$P'_p(Q)$, resp.\ $c'_p(Q)$
+is independent of the choice of the closed subscheme $T$.
+Moreover, given a proper morphism $g : W' \to W$ which is an
+isomorphism over $\mathbf{A}^1_X$, then setting $Q' = g^*Q$
+we have $P'_p(Q) = P'_p(Q')$, resp.\ $c'_p(Q) = c'_p(Q')$.
+\end{lemma}
+
+\begin{proof}
+The independence of $T$ follows immediately from
+Lemma \ref{lemma-silly-independent}.
+
+\medskip\noindent
+Let $g : W' \to W$ be a proper morphism which is an isomorphism over
+$\mathbf{A}^1_X$. Observe that taking $T' = g^{-1}(T) \subset W'_\infty$
+is a closed subscheme satisfying (A2) hence the operator
+$P'_p(Q')$, resp.\ $c'_p(Q')$ in $A^p(Z \to X)$
+corresponding to $b' = b \circ g : W' \to \mathbf{P}^1_X$
+and $Q'$ is defined. Denote $E' \subset W'_\infty$
+the inverse image of $Z$ in $W'_\infty$. Recall that
+$$
+c'_p(Q') = (E' \to Z)_* \circ c'_p(Q'|_{E'}) \circ C'
+$$
+with $C' \in A^0(W'_\infty \to X)$ and
+$c'_p(Q'|_{E'}) \in A^p(E' \to W'_\infty)$.
+By Lemma \ref{lemma-gysin-at-infty-independent} we have
+$g_{\infty, *} \circ C' = C$. Observe that $E'$ is also
+the inverse image of $E$ in $W'_\infty$ by $g_\infty$.
+Since moreover $Q' = g^*Q$ we find that $c'_p(Q'|_{E'})$ is simply the
+restriction of $c'_p(Q|_E)$ to schemes lying over $W'_\infty$, see
+Remark \ref{remark-restriction-bivariant}. Thus we obtain
+\begin{align*}
+c'_p(Q')
+& =
+(E' \to Z)_* \circ c'_p(Q'|_{E'}) \circ C' \\
+& =
+(E \to Z)_* \circ (E' \to E)_* \circ c'_p(Q|_E) \circ C' \\
+& =
+(E \to Z)_* \circ c'_p(Q|_E) \circ g_{\infty, *} \circ C' \\
+& =
+(E \to Z)_* \circ c'_p(Q|_E) \circ C \\
+& =
+c'_p(Q)
+\end{align*}
+In the third equality we used that $c'_p(Q|_E)$
+commutes with proper pushforward as it is a
+bivariant class. The equality $P'_p(Q) = P'_p(Q')$
+is proved in exactly the same way.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homomorphism}
+In Lemma \ref{lemma-localized-chern-pre} assume $Q|_T$ is isomorphic
+to a finite locally free $\mathcal{O}_T$-module of rank $< p$.
+Denote $C \in A^0(W_\infty \to X)$ the class of
+Lemma \ref{lemma-gysin-at-infty}. Then
+$$
+C \circ c_p(Q|_{X \times \{0\}}) =
+C \circ (Z \to X)_* \circ c'_p(Q) = c_p(Q|_{W_\infty}) \circ C
+$$
+\end{lemma}
+
+\begin{proof}
+The first equality holds because $c_p(Q|_{X \times \{0\}}) =
+(Z \to X)_* \circ c'_p(Q)$ by Lemma \ref{lemma-localized-chern-pre}.
+We may prove the second equality one cycle class at a time
+(see Lemma \ref{lemma-bivariant-zero}). Since the construction of
+the bivariant classes in the lemma is compatible with base change,
+we may assume we have some $\alpha \in \CH_k(X)$ and we have to show that
+$C \cap (Z \to X)_*(c'_p(Q) \cap \alpha) =
+c_p(Q|_{W_\infty}) \cap C \cap \alpha$. Observe that
+\begin{align*}
+C \cap (Z \to X)_*(c'_p(Q) \cap \alpha)
+& =
+C \cap (Z \to X)_* (E \to Z)_*(c'_p(Q|_E) \cap C \cap \alpha) \\
+& =
+C \cap (W_\infty \to X)_*(E \to W_\infty)_*(c'_p(Q|_E) \cap C \cap \alpha) \\
+& =
+C \cap (W_\infty \to X)_*(E \to W_\infty)_*(c'_p(Q|_E) \cap i_\infty^*\beta) \\
+& =
+C \cap (W_\infty \to X)_*(c_p(Q|_{W_\infty}) \cap i_\infty^*\beta) \\
+& =
+C \cap (W_\infty \to X)_*i_\infty^*(c_p(Q) \cap \beta) \\
+& =
+i_\infty^*(c_p(Q) \cap \beta) \\
+& =
+c_p(Q|_{W_\infty}) \cap i_\infty^*\beta \\
+& =
+c_p(Q|_{W_\infty}) \cap C \cap \alpha
+\end{align*}
+as desired. For the first equality we used that
+$c'_p(Q) = (E \to Z)_* \circ c'_p(Q|_E) \circ C$ where $E \subset W_\infty$
+is the inverse image of $Z$ and $c'_p(Q|_E)$ is the class constructed
+in Lemma \ref{lemma-silly}. The second equality is just the statement
+that $E \to Z \to X$ is equal to $E \to W_\infty \to X$.
+For the third equality we choose $\beta \in \CH_{k + 1}(W)$ whose restriction to
+$b^{-1}(\mathbf{A}^1_X)$ is the flat pullback of $\alpha$ so that
+$C \cap \alpha = i_\infty^*\beta$ by construction. The fourth equality is
+Lemma \ref{lemma-silly-silly}. The fifth equality is the fact that
+$c_p(Q)$ is a bivariant class and hence commutes with $i_\infty^*$.
+The sixth equality is Lemma \ref{lemma-homomorphism-pre}.
+The seventh uses again that $c_p(Q)$ is a bivariant class.
+The final holds as $C \cap \alpha = i_\infty^*\beta$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homomorphism-commute}
+In Lemma \ref{lemma-localized-chern-pre} let $Y \to X$ be a morphism
+locally of finite type and let $c \in A^*(Y \to X)$ be a bivariant class.
+Then
+$$
+P'_p(Q) \circ c = c \circ P'_p(Q)
+\quad\text{resp.}\quad
+c'_p(Q) \circ c = c \circ c'_p(Q)
+$$
+in $A^*(Y \times_X Z \to X)$.
+\end{lemma}
+
+\begin{proof}
+Let $E \subset W_\infty$ be the inverse image of $Z$.
+Recall that $P'_p(Q) = (E \to Z)_* \circ P'_p(Q|_E) \circ C$,
+resp.\ $c'_p(Q) = (E \to Z)_* \circ c'_p(Q|_E) \circ C$
+where $C$ is as in Lemma \ref{lemma-gysin-at-infty} and
+$P'_p(Q|_E)$, resp.\ $c'_p(Q|_E)$ are as in
+Lemma \ref{lemma-silly}.
+By Lemma \ref{lemma-gysin-at-infty-commutes}
+we see that $C$ commutes with $c$
+and by Lemma \ref{lemma-silly-commutes} we see that
+$P'_p(Q|_E)$, resp.\ $c'_p(Q|_E)$ commutes with $c$.
+Since $c$ is a bivariant class it commutes with proper
+pushforward by $E \to Z$ by definition. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localized-chern-pre-compose}
+In Lemma \ref{lemma-localized-chern-pre} assume $Q|_T$ is zero. In
+$A^*(Z \to X)$ we have
+\begin{align*}
+P'_1(Q) & = c'_1(Q), \\
+P'_2(Q) & = c'_1(Q)^2 - 2c'_2(Q), \\
+P'_3(Q) & = c'_1(Q)^3 - 3c'_1(Q)c'_2(Q) + 3c'_3(Q), \\
+P'_4(Q) & = c'_1(Q)^4 - 4c'_1(Q)^2c'_2(Q) +
+4c'_1(Q)c'_3(Q) + 2c'_2(Q)^2 - 4c'_4(Q),
+\end{align*}
+and so on with multiplication as in Remark \ref{remark-ring-loc-classes}.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense because the zero sheaf has rank $< 1$ and
+hence the classes $c'_p(Q)$ are defined for all $p \geq 1$.
+In the proof of Lemma \ref{lemma-localized-chern-pre} we have constructed
+the classes $P'_p(Q)$ and $c'_p(Q)$ using the bivariant class
+$C \in A^0(W_\infty \to X)$ of Lemma \ref{lemma-gysin-at-infty}
+and the bivariant classes
+$P'_p(Q|_E)$ and $c'_p(Q|_E)$ of Lemma \ref{lemma-silly} for the restriction
+$Q|_E$ of $Q$ to the inverse image $E$ of $Z$ in $W_\infty$.
+Observe that by Lemma \ref{lemma-silly-compose} we have the desired
+relationship between $P'_p(Q|_E)$ and $c'_p(Q|_E)$. Recall that
+$$
+P'_p(Q) = (E \to Z)_* \circ P'_p(Q|_E) \circ C
+\quad\text{and}\quad
+c'_p(Q) = (E \to Z)_* \circ c'_p(Q|_E) \circ C
+$$
+To finish the proof it suffices to show the multiplications defined
+in Remark \ref{remark-ring-loc-classes} on the classes $a_p = c'_p(Q)$
+and on the classes $b_p = c'_p(Q|_E)$ agree:
+$$
+a_{p_1}a_{p_2} \ldots a_{p_r} =
+(E \to Z)_* \circ b_{p_1}b_{p_2} \ldots b_{p_r} \circ C
+$$
+Some details omitted. If $r = 1$, then this is true.
+For $r > 1$ note that by Remark \ref{remark-res-push} the multiplication in
+Remark \ref{remark-ring-loc-classes} proceeds
+by inserting $(Z \to X)_*$, resp.\ $(E \to W_\infty)_*$ in between
+the factors of the product
+$a_{p_1}a_{p_2} \ldots a_{p_r}$, resp.\ $b_{p_1}b_{p_2} \ldots b_{p_r}$
+and taking compositions as bivariant classes.
+Now by Lemma \ref{lemma-silly} we have
+$$
+(E \to W_\infty)_* \circ b_{p_i} = c_{p_i}(Q|_{W_\infty})
+$$
+and by Lemma \ref{lemma-homomorphism} we have
+$$
+C \circ (Z \to X)_* \circ a_{p_i} = c_{p_i}(Q|_{W_\infty}) \circ C
+$$
+for $i = 2, \ldots, r$. A calculation
+shows that the left and right hand side of the desired
+equality both simplify to
+$$
+(E \to Z)_* \circ c'_{p_1}(Q|_E) \circ
+c_{p_2}(Q|_{W_\infty}) \circ \ldots \circ
+c_{p_r}(Q|_{W_\infty}) \circ C
+$$
+and the proof is complete.
+\end{proof}
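
\noindent
For example, with notation as in the proof above, for $r = 2$ the omitted
calculation is merely an unwinding of the displayed identities:
\begin{align*}
a_{p_1}a_{p_2}
& =
(E \to Z)_* \circ c'_{p_1}(Q|_E) \circ C \circ
(Z \to X)_* \circ c'_{p_2}(Q) \\
& =
(E \to Z)_* \circ c'_{p_1}(Q|_E) \circ c_{p_2}(Q|_{W_\infty}) \circ C \\
& =
(E \to Z)_* \circ c'_{p_1}(Q|_E) \circ
(E \to W_\infty)_* \circ c'_{p_2}(Q|_E) \circ C \\
& =
(E \to Z)_* \circ b_{p_1}b_{p_2} \circ C
\end{align*}
where the first and last equalities use the description of the products
in Remark \ref{remark-res-push}, the second is
Lemma \ref{lemma-homomorphism}, and the third is Lemma \ref{lemma-silly}.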
+
+\begin{lemma}
+\label{lemma-localized-chern-pre-sum-c}
+In Lemma \ref{lemma-localized-chern-pre} assume $Q|_T$ is isomorphic
+to a finite locally free $\mathcal{O}_T$-module of rank $< p$.
+Assume we have another perfect object $Q' \in D(\mathcal{O}_W)$
+whose Chern classes are defined with $Q'|_T$ isomorphic to a
+finite locally free $\mathcal{O}_T$-module of rank $< p'$ placed
+in cohomological degree $0$. With notation as in
+Remark \ref{remark-ring-loc-classes} set
+$$
+c^{(p)}(Q) = 1 + c_1(Q|_{X \times \{0\}}) + \ldots +
+c_{p - 1}(Q|_{X \times \{0\}}) +
+c'_{p}(Q) + c'_{p + 1}(Q) + \ldots
+$$
+in $A^{(p)}(Z \to X)$ with $c'_i(Q)$ for $i \geq p$ as in
+Lemma \ref{lemma-localized-chern-pre}. Similarly for $c^{(p')}(Q')$ and
+$c^{(p + p')}(Q \oplus Q')$.
+Then $c^{(p + p')}(Q \oplus Q') = c^{(p)}(Q)c^{(p')}(Q')$
+in $A^{(p + p')}(Z \to X)$.
+\end{lemma}
+
+\begin{proof}
Recall that the image of $c'_i(Q)$ in $A^i(X)$ is equal to
+$c_i(Q|_{X \times \{0\}})$ for $i \geq p$ and similarly for
+$Q'$ and $Q \oplus Q'$, see Lemma \ref{lemma-localized-chern-pre}.
+Hence the equality in degrees $< p + p'$ follows from the
+additivity of Lemma \ref{lemma-additivity-on-perfect}.
+
+\medskip\noindent
+Let's take $n \geq p + p'$.
+As in the proof of Lemma \ref{lemma-localized-chern-pre}
+let $E \subset W_\infty$ denote the inverse image of $Z$.
+Observe that we have the equality
+$$
+c^{(p + p')}(Q|_E \oplus Q'|_E) =
+c^{(p)}(Q|_E)c^{(p')}(Q'|_E)
+$$
+in $A^{(p + p')}(E \to W_\infty)$ by Lemma \ref{lemma-silly-sum-c}.
+Since by construction
+$$
+c'_p(Q \oplus Q') = (E \to Z)_* \circ c'_p(Q|_E \oplus Q'|_E) \circ C
+$$
we conclude it suffices to show that for all $i + j = n$ we have
+$$
+(E \to Z)_* \circ c^{(p)}_i(Q|_E)c^{(p')}_j(Q'|_E) \circ C
+=
+c^{(p)}_i(Q)c^{(p')}_j(Q')
+$$
+in $A^n(Z \to X)$ where the multiplication is the one from
+Remark \ref{remark-ring-loc-classes} on both sides. There are
+three cases, depending on whether $i \geq p$, $j \geq p'$, or both.
+
+\medskip\noindent
+Assume $i \geq p$ and $j \geq p'$. In this case the products are
+defined by inserting $(E \to W_\infty)_*$, resp.\ $(Z \to X)_*$ in between
+the two factors and taking compositions as bivariant classes, see
+Remark \ref{remark-res-push}.
+In other words, we have to show
+$$
+(E \to Z)_* \circ c'_i(Q|_E) \circ
+(E \to W_\infty)_* \circ c'_j(Q'|_E) \circ C =
+c'_i(Q) \circ (Z \to X)_* \circ c'_j(Q')
+$$
+By Lemma \ref{lemma-silly} the left hand side is equal to
+$$
+(E \to Z)_* \circ c'_i(Q|_E) \circ c_j(Q'|_{W_\infty}) \circ C
+$$
+Since $c'_i(Q) = (E \to Z)_* \circ c'_i(Q|_E) \circ C$
+the right hand side is equal to
+$$
+(E \to Z)_* \circ c'_i(Q|_E) \circ C \circ (Z \to X)_* \circ c'_j(Q')
+$$
+which is immediately seen to be equal to the above
+by Lemma \ref{lemma-homomorphism}.
+
+\medskip\noindent
Assume $i \geq p$ and $j < p'$. Unwinding the products
+in this case we have to show
+$$
+(E \to Z)_* \circ c'_i(Q|_E) \circ c_j(Q'|_{W_\infty}) \circ C =
+c'_i(Q) \circ c_j(Q'|_{X \times \{0\}})
+$$
+Again using that $c'_i(Q) = (E \to Z)_* \circ c'_i(Q|_E) \circ C$
+we see that it suffices to show $c_j(Q'|_{W_\infty}) \circ C =
+C \circ c_j(Q'|_{X \times \{0\}})$ which is part of
+Lemma \ref{lemma-homomorphism}.
+
+\medskip\noindent
+Assume $i < p$ and $j \geq p'$. Unwinding the products
+in this case we have to show
+$$
+(E \to Z)_* \circ c_i(Q|_E) \circ c'_j(Q'|_E) \circ C =
c_i(Q|_{X \times \{0\}}) \circ c'_j(Q')
+$$
However, since $c'_j(Q'|_E)$ and $c'_j(Q')$ are
+bivariant classes, they commute with capping with Chern classes
+(Lemma \ref{lemma-cap-commutative-chern}). Hence it suffices to prove
+$$
+(E \to Z)_* \circ c'_j(Q'|_E) \circ c_i(Q|_{W_\infty}) \circ C =
+c'_j(Q') \circ c_i(Q|_{X \times \{0\}})
+$$
which reduces us to the case discussed in the preceding paragraph.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localized-chern-pre-sum-P}
+In Lemma \ref{lemma-localized-chern-pre} assume $Q|_T$ is zero.
+Assume we have another perfect object $Q' \in D(\mathcal{O}_W)$
+whose Chern classes are defined such that the restriction $Q'|_T$ is zero.
+In this case the classes
+$P'_p(Q), P'_p(Q'), P'_p(Q \oplus Q') \in A^p(Z \to X)$
+constructed in Lemma \ref{lemma-localized-chern-pre}
+satisfy $P'_p(Q \oplus Q') = P'_p(Q) + P'_p(Q')$.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from the construction of these
+classes and Lemma \ref{lemma-silly-sum-P}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Localized Chern classes}
+\label{section-localized-chern}
+
+\noindent
+Outline of the construction. Let $F$ be a field,
+let $X$ be a variety over $F$, let $E$ be a perfect object of
+$D(\mathcal{O}_X)$, and let $Z \subset X$ be a closed subscheme
+such that $E|_{X \setminus Z} = 0$. Then we want to construct elements
+$$
+c_p(Z \to X, E) \in A^p(Z \to X)
+$$
+We will do this by constructing a diagram
+$$
+\xymatrix{
+W \ar[d]_f \ar[r]_q & X \\
+\mathbf{P}^1_F
+}
+$$
+and a perfect object $Q$ of $D(\mathcal{O}_W)$ such that
+\begin{enumerate}
+\item $f$ is flat, and $f$, $q$ are proper; for $t \in \mathbf{P}^1_F$
+denote $W_t$ the fibre of $f$,
+$q_t : W_t \to X$ the restriction of $q$, and $Q_t = Q|_{W_t}$,
+\item $q_t : W_t \to X$ is an isomorphism and $Q_t = q_t^*E$
+for $t \in \mathbf{A}^1_F$,
+\item $q_\infty : W_\infty \to X$ is an isomorphism over $X \setminus Z$,
+\item if $T \subset W_\infty$ is the closure of
+$q_\infty^{-1}(X \setminus Z)$ then $Q_\infty|_T$ is zero.
+\end{enumerate}
+The idea is to think of this as a family $\{(W_t, Q_t)\}$
+parametrized by $t \in \mathbf{P}^1$.
For $t \not = \infty$ we see that $c_p(Q_t)$ is just $c_p(E)$
on the chow groups of $W_t = X$. But for $t = \infty$ we see that
$c_p(Q_\infty)$ sends classes on $W_\infty$ to classes supported on
$E = q_\infty^{-1}(Z)$ since $Q_\infty|_T = 0$.
+We think of $E$ as the exceptional locus of $q_\infty : W_\infty \to X$.
+Since any $\alpha \in \CH_*(X)$ gives rise to a ``family''
+of cycles $\alpha_t \in \CH_*(W_t)$ it makes sense to define
+$c_p(Z \to X, E) \cap \alpha$ as
+the pushforward $(E \to Z)_*(c_p(Q_\infty) \cap \alpha_\infty)$.
+
+\medskip\noindent
+To make this work there are two main ingredients: (1) the construction of
$W$ and $Q$ is an algebraic version of MacPherson's graph construction; it
+is done in More on Flatness, Section \ref{flat-section-blowup-complexes-III}.
+(2) the construction of the actual class given $W$ and $Q$ is done in
+Section \ref{section-preparation-localized-chern-II} relying on
+Sections \ref{section-gysin-at-infty} and
+\ref{section-preparation-localized-chern}.
+
+\begin{situation}
+\label{situation-loc-chern}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
+locally of finite type over $S$. Let $i : Z \to X$ be a closed immersion.
+Let $E \in D(\mathcal{O}_X)$ be an object. Let $p \geq 0$. Assume
+\begin{enumerate}
+\item $E$ is a perfect object of $D(\mathcal{O}_X)$,
+\item the restriction $E|_{X \setminus Z}$ is zero, resp.\ isomorphic to a
+finite locally free $\mathcal{O}_{X \setminus Z}$-module of rank $< p$
+sitting in cohomological degree $0$, and
+\item at least one\footnote{Please ignore this technical condition on a
+first reading; see discussion in Remark \ref{remark-loc-chern}.}
+of the following is true:
+(a) $X$ is quasi-compact,
+(b) $X$ has quasi-compact irreducible components,
+(c) there exists a locally bounded complex of finite locally free
+$\mathcal{O}_X$-modules representing $E$, or
(d) there exists a morphism $X \to X'$ of schemes locally of finite type
+over $S$ such that $E$ is the pullback of a perfect object on $X'$ and
+the irreducible components of $X'$ are quasi-compact.
+\end{enumerate}
+\end{situation}
+
+\begin{lemma}
+\label{lemma-independent-loc-chern}
+In Situation \ref{situation-loc-chern} there exists a canonical bivariant class
+$$
+P_p(Z \to X, E) \in A^p(Z \to X),
+\quad\text{resp.}\quad
+c_p(Z \to X, E) \in A^p(Z \to X)
+$$
+with the property that
+\begin{equation}
+\label{equation-defining-property-localized-classes}
+i_* \circ P_p(Z \to X, E) = P_p(E),
+\quad\text{resp.}\quad
+i_* \circ c_p(Z \to X, E) = c_p(E)
+\end{equation}
+as bivariant classes on $X$ (with $\circ$ as in
+Lemma \ref{lemma-push-proper-bivariant}).
+\end{lemma}
+
+\begin{proof}
+The construction of these bivariant classes is as follows. Let
+$$
+b : W \longrightarrow \mathbf{P}^1_X
+\quad\text{and}\quad
+T \longrightarrow W_\infty
+\quad\text{and}\quad
+Q
+$$
be the blowing up, the closed immersion, and the perfect object $Q$
in $D(\mathcal{O}_W)$ constructed in
+More on Flatness, Section \ref{flat-section-blowup-complexes-III}
+and Lemma \ref{flat-lemma-graph-construction}.
+Let $T' \subset T$ be the open and closed subscheme such that
+$Q|_{T'}$ is zero, resp.\ isomorphic to a
+finite locally free $\mathcal{O}_{T'}$-module of rank $< p$
+sitting in cohomological degree $0$. By condition (2) of
+Situation \ref{situation-loc-chern} the morphisms
+$$
+T' \to T \to W_\infty \to X
+$$
+are all isomorphisms of schemes over the open subscheme $X \setminus Z$ of $X$.
Below we check that the chern classes of $Q$ are defined.
+Recalling that $Q|_{X \times \{0\}} \cong E$ by construction, we conclude
+that the bivariant class constructed in Lemma \ref{lemma-localized-chern-pre}
+using $W, b, Q, T'$ gives us classes
+$$
+P_p(Z \to X, E) = P'_p(Q) \in A^p(Z \to X)
+$$
+and
+$$
+c_p(Z \to X, E) = c'_p(Q) \in A^p(Z \to X)
+$$
+satisfying (\ref{equation-defining-property-localized-classes}).
+
+\medskip\noindent
+In this paragraph we prove that the chern classes of $Q$ are defined
+(Definition \ref{definition-defined-on-perfect}); we suggest the reader skip
+this. If assumption (3)(a) or (3)(b) of Situation \ref{situation-loc-chern}
+holds, i.e., if $X$ has quasi-compact irreducible components, then the
+same is true for $W$ (because $W \to X$ is proper). Hence we conclude that
+the chern classes of any perfect object of $D(\mathcal{O}_W)$ are defined by
Lemma \ref{lemma-chern-classes-defined}. If (3)(c) holds, i.e., if
+$E$ can be represented by a locally bounded complex of finite locally
+free modules, then the object $Q$ can be represented by a locally bounded
+complex of finite locally free $\mathcal{O}_W$-modules by part (5) of
+More on Flatness, Lemma \ref{flat-lemma-graph-construction}. Hence the
+chern classes of $Q$ are defined. Finally, assume (3)(d) holds, i.e.,
+assume we have a morphism $X \to X'$ of schemes locally of finite type
+over $S$ such that $E$ is the pullback of a perfect object $E'$ on $X'$ and
+the irreducible components of $X'$ are quasi-compact.
+Let $b' : W' \to \mathbf{P}^1_{X'}$ and $Q' \in D(\mathcal{O}_{W'})$
+be the morphism and perfect object constructed as
+in More on Flatness, Section \ref{flat-section-blowup-complexes-III}
+starting with the triple
+$(\mathbf{P}^1_{X'}, (\mathbf{P}^1_{X'})_\infty, L(p')^*E')$.
+By the discussion above we see that the chern classes of $Q'$
+are defined. Since $b$ and $b'$ were constructed via an application of
+More on Flatness, Lemma \ref{flat-lemma-complex-and-divisor-blowup-pre}
+it follows from
+More on Flatness, Lemma \ref{flat-lemma-complex-and-divisor-blowup-base-change}
+that there exists a morphism $W \to W'$ such that
+$Q = L(W \to W')^*Q'$. Then it follows from
+Lemma \ref{lemma-chern-classes-defined}
+that the chern classes of $Q$ are defined.
+\end{proof}
+
+\begin{definition}
+\label{definition-localized-chern}
+With $(S, \delta)$, $X$, $E \in D(\mathcal{O}_X)$, and $i : Z \to X$ as in
+Situation \ref{situation-loc-chern}.
+\begin{enumerate}
+\item If the restriction $E|_{X \setminus Z}$ is zero, then for all
+$p \geq 0$ we define
+$$
+P_p(Z \to X, E) \in A^p(Z \to X)
+$$
+by the construction in Lemma \ref{lemma-independent-loc-chern}
+and we define the {\it localized Chern character} by the formula
+$$
+ch(Z \to X, E) =
+\sum\nolimits_{p = 0, 1, 2, \ldots} \frac{P_p(Z \to X, E)}{p!}
+\quad\text{in}\quad A^*(Z \to X) \otimes \mathbf{Q}
+$$
+\item If the restriction $E|_{X \setminus Z}$ is isomorphic to a
+finite locally free $\mathcal{O}_{X \setminus Z}$-module of rank $< p$
+sitting in cohomological degree $0$, then we define the
+{\it localized $p$th Chern class} $c_p(Z \to X, E)$ by the construction
+in Lemma \ref{lemma-independent-loc-chern}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In the situation of the definition assume $E|_{X \setminus Z}$ is zero.
+Then, to be sure, we have the equality
+$$
+i_* \circ ch(Z \to X, E) = ch(E)
+$$
+in $A^*(X) \otimes \mathbf{Q}$ because we have shown the
+equality (\ref{equation-defining-property-localized-classes}) above.
+
+\medskip\noindent
+Here is an important sanity check.
+
+\begin{lemma}
+\label{lemma-base-change-loc-chern}
+In Situation \ref{situation-loc-chern}
+let $f : X' \to X$ be a morphism of schemes which is locally of finite type.
+Denote $E' = f^*E$ and $Z' = f^{-1}(Z)$. Then the bivariant class
+of Definition \ref{definition-localized-chern}
+$$
+P_p(Z' \to X', E') \in A^p(Z' \to X'),
+\quad\text{resp.}\quad
+c_p(Z' \to X', E') \in A^p(Z' \to X')
+$$
+constructed as in Lemma \ref{lemma-independent-loc-chern}
+using $X', Z', E'$ is the restriction
+(Remark \ref{remark-restriction-bivariant}) of the
+bivariant class $P_p(Z \to X, E) \in A^p(Z \to X)$,
+resp.\ $c_p(Z \to X, E) \in A^p(Z \to X)$.
+\end{lemma}
+
+\begin{proof}
+Denote $p : \mathbf{P}^1_X \to X$ and $p' : \mathbf{P}^1_{X'} \to X'$
+the structure morphisms.
+Recall that $b : W \to \mathbf{P}^1_X$ and $b' : W' \to \mathbf{P}^1_{X'}$
are the morphisms constructed from the triples
$(\mathbf{P}^1_X, (\mathbf{P}^1_X)_\infty, p^*E)$ and
$(\mathbf{P}^1_{X'}, (\mathbf{P}^1_{X'})_\infty, (p')^*E')$
+in More on Flatness, Lemma \ref{flat-lemma-complex-and-divisor-blowup-pre}.
Furthermore $Q = L\eta_{\mathcal{I}_\infty}p^*E$ and
$Q' = L\eta_{\mathcal{I}'_\infty}(p')^*E'$
+where
+$\mathcal{I}_\infty \subset \mathcal{O}_W$ is the
+ideal sheaf of $W_\infty$ and
+$\mathcal{I}'_\infty \subset \mathcal{O}_{W'}$ is the
+ideal sheaf of $W'_\infty$.
+Next, $h : \mathbf{P}^1_{X'} \to \mathbf{P}^1_X$ is a morphism of
+schemes such that the pullback of the effective Cartier divisor
+$(\mathbf{P}^1_X)_\infty$ is the effective Cartier divisor
+$(\mathbf{P}^1_{X'})_\infty$ and such that $h^*p^*E = (p')^*E'$.
+By More on Flatness, Lemma
+\ref{flat-lemma-complex-and-divisor-blowup-base-change}
+we obtain a commutative diagram
+$$
+\xymatrix{
+W' \ar[rd]_{b'} \ar[r]_-g &
+\mathbf{P}^1_{X'} \times_{\mathbf{P}^1_X} W \ar[d]_r \ar[r]_-q &
+W \ar[d]^b \\
+&
+\mathbf{P}^1_{X'} \ar[r] &
+\mathbf{P}^1_X
+}
+$$
+such that $W'$ is the ``strict transform'' of $\mathbf{P}^1_{X'}$
+with respect to $b$ and such that $Q' = (q \circ g)^*Q$.
+Now recall that $P_p(Z \to X, E) = P'_p(Q)$,
+resp.\ $c_p(Z \to X, E) = c'_p(Q)$ where $P'_p(Q)$, resp.\ $c'_p(Q)$
+are constructed in Lemma \ref{lemma-localized-chern-pre}
+using $b, Q, T'$ where $T'$ is a closed subscheme $T' \subset W_\infty$
+with the following two properties:
+(a) $T'$ contains all points of $W_\infty$ lying over $X \setminus Z$,
+and (b) $Q|_{T'}$ is zero, resp.\ isomorphic to a finite locally free
+module of rank $< p$ placed in degree $0$.
+In the construction of Lemma \ref{lemma-localized-chern-pre}
+we chose a particular closed subscheme $T'$ with properties (a) and (b)
+but the precise choice of $T'$ is immaterial, see
+Lemma \ref{lemma-localized-chern-pre-independent}.
+
+\medskip\noindent
+Next, by Lemma \ref{lemma-base-change-localized-chern-pre}
+the restriction of the bivariant class $P_p(Z \to X, E) = P'_p(Q)$,
resp.\ $c_p(Z \to X, E) = c'_p(Q)$
+to $X'$ corresponds to the class $P'_p(q^*Q)$, resp.\ $c'_p(q^*Q)$
+constructed as in Lemma \ref{lemma-localized-chern-pre} using
+$r : \mathbf{P}^1_{X'} \times_{\mathbf{P}^1_X} W \to \mathbf{P}^1_{X'}$,
+the complex $q^*Q$, and the inverse image $q^{-1}(T')$.
+
+\medskip\noindent
+Now by the second statement of
+Lemma \ref{lemma-localized-chern-pre-independent}
+we have $P'_p(Q') = P'_p(q^*Q)$, resp.\ $c'_p(q^*Q) = c'_p(Q')$.
+Since $P_p(Z' \to X', E') = P'_p(Q')$, resp.\ $c_p(Z' \to X', E') = c'_p(Q')$
+we conclude that the lemma is true.
+\end{proof}
+
+\begin{remark}
+\label{remark-loc-chern}
+In Situation \ref{situation-loc-chern} it would have been more natural
+to replace assumption (3) with the assumption: ``the chern classes
+of $E$ are defined''. In fact, combining
+Lemmas \ref{lemma-independent-loc-chern} and
+\ref{lemma-base-change-loc-chern}
+with Lemma \ref{lemma-envelope-bivariant}
+it is easy to extend the definition to this (slightly) more general case.
+If we ever need this we will do so here.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-loc-chern-after-pushforward}
+In Situation \ref{situation-loc-chern} we have
+$$
+P_p(Z \to X, E) \cap i_*\alpha = P_p(E|_Z) \cap \alpha,
+\quad\text{resp.}\quad
+c_p(Z \to X, E) \cap i_*\alpha = c_p(E|_Z) \cap \alpha
+$$
+in $\CH_*(Z)$ for any $\alpha \in \CH_*(Z)$.
+\end{lemma}
+
+\begin{proof}
+We only prove the second equality and we omit the proof of the first.
+Since $c_p(Z \to X, E)$ is a bivariant class and since the base
+change of $Z \to X$ by $Z \to X$ is $\text{id} : Z \to Z$ we have
+$c_p(Z \to X, E) \cap i_*\alpha = c_p(Z \to X, E) \cap \alpha$.
+By Lemma \ref{lemma-base-change-loc-chern} the restriction of
+$c_p(Z \to X, E)$ to $Z$ (!) is the localized Chern class for
+$\text{id} : Z \to Z$ and $E|_Z$. Thus the result follows from
+(\ref{equation-defining-property-localized-classes}) with $X = Z$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-loc-chern-disjoint}
+In Situation \ref{situation-loc-chern}
+if $\alpha \in \CH_k(X)$ has support disjoint from $Z$, then
+$P_p(Z \to X, E) \cap \alpha = 0$, resp.\ $c_p(Z \to X, E) \cap \alpha = 0$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the construction of the localized Chern classes.
+It also follows from the fact that we can compute
+$c_p(Z \to X, E) \cap \alpha$ by first restricting $c_p(Z \to X, E)$ to
+the support of $\alpha$, and then using
+Lemma \ref{lemma-base-change-loc-chern}
+to see that this restriction is zero.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-loc-chern-shrink-Z}
+In Situation \ref{situation-loc-chern}
+assume $Z \subset Z' \subset X$ where $Z'$ is a closed subscheme of $X$.
+Then
+$P_p(Z' \to X, E) = (Z \to Z')_* \circ P_p(Z \to X, E)$,
+resp.\ $c_p(Z' \to X, E) = (Z \to Z')_* \circ c_p(Z \to X, E)$
+(with $\circ$ as in Lemma \ref{lemma-push-proper-bivariant}).
+\end{lemma}
+
+\begin{proof}
+The construction of $P_p(Z' \to X, E)$,
+resp.\ $c_p(Z' \to X, E)$ in Lemma \ref{lemma-independent-loc-chern}
+uses the exact same morphism
+$b : W \to \mathbf{P}^1_X$ and perfect object $Q$ of $D(\mathcal{O}_W)$.
+Then we can use Lemma \ref{lemma-silly-shrink} to conclude.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-loc-chern-agree}
+In Lemma \ref{lemma-silly} say $E_2$ is the restriction of a perfect
+$E \in D(\mathcal{O}_X)$ whose restriction to $X_1$ is zero,
+resp.\ isomorphic to a finite locally free $\mathcal{O}_{X_1}$-module
+of rank $< p$ sitting in cohomological degree $0$. Then the class
+$P'_p(E_2)$, resp.\ $c'_p(E_2)$ of Lemma \ref{lemma-silly} agrees with
+$P_p(X_2 \to X, E)$, resp.\ $c_p(X_2 \to X, E)$ of
+Definition \ref{definition-localized-chern} provided $E$ satisfies
+assumption (3) of Situation \ref{situation-loc-chern}.
+\end{lemma}
+
+\begin{proof}
+The assumptions on $E$ imply that there is an open $U \subset X$
+containing $X_1$ such that $E|_U$ is zero, resp.\ isomorphic to a finite locally
+free $\mathcal{O}_U$-module of rank $< p$. See More on Algebra, Lemma
+\ref{more-algebra-lemma-lift-perfect-from-residue-field}.
+Let $Z \subset X$ be the complement of $U$ in $X$ endowed with
+the reduced induced closed subscheme structure. Then
+$P_p(X_2 \to X, E) = (Z \to X_2)_* \circ P_p(Z \to X, E)$,
+resp.\ $c_p(X_2 \to X, E) = (Z \to X_2)_* \circ c_p(Z \to X, E)$
+by Lemma \ref{lemma-loc-chern-shrink-Z}.
+Now we can prove that $P_p(X_2 \to X, E)$, resp.\ $c_p(X_2 \to X, E)$
+satisfies the characterization of $P'_p(E_2)$, resp.\ $c'_p(E_2)$
+given in Lemma \ref{lemma-silly}. Namely, by the relation
+$P_p(X_2 \to X, E) = (Z \to X_2)_* \circ P_p(Z \to X, E)$,
+resp.\ $c_p(X_2 \to X, E) = (Z \to X_2)_* \circ c_p(Z \to X, E)$
+just proven and the fact that $X_1 \cap Z = \emptyset$,
+the composition $P_p(X_2 \to X, E) \circ i_{1, *}$,
+resp.\ $c_p(X_2 \to X, E) \circ i_{1, *}$ is zero
+by Lemma \ref{lemma-loc-chern-disjoint}.
+On the other hand,
+$P_p(X_2 \to X, E) \circ i_{2, *} = P_p(E_2)$,
+resp.\ $c_p(X_2 \to X, E) \circ i_{2, *} = c_p(E_2)$
+by Lemma \ref{lemma-loc-chern-after-pushforward}.
+\end{proof}
+
+
+
+
+
+\section{Two technical lemmas}
+\label{section-tools-loc-chern}
+
+\noindent
+In this section we develop some additional tools to allow us to work
+more comfortably with localized Chern classes. The following lemma
+is a more precise version of something we've already encountered in
+the proofs of Lemmas \ref{lemma-localized-chern-pre-compose} and
+\ref{lemma-localized-chern-pre-sum-c}.
+
+\begin{lemma}
+\label{lemma-homomorphism-final}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be
+locally of finite type over $S$. Let $b : W \longrightarrow \mathbf{P}^1_X$
+be a proper morphism of schemes. Let $n \geq 1$. For $i = 1, \ldots, n$
+let $Z_i \subset X$ be a closed subscheme, let $Q_i \in D(\mathcal{O}_W)$
be a perfect object, let $p_i \geq 0$ be an integer, and let
$T_i \subset W_\infty$ be a closed subset.
+Denote $W_i = b^{-1}(\mathbf{P}^1_{Z_i})$. Assume
+\begin{enumerate}
\item for $i = 1, \ldots, n$ the assumptions of
Lemma \ref{lemma-localized-chern-pre} hold for
+$b, Z_i, Q_i, T_i, p_i$,
+\item $Q_i|_{W \setminus W_i}$ is zero, resp.\ isomorphic to a finite
+locally free module of rank $< p_i$ placed in cohomological degree $0$,
+\item $Q_i$ on $W$ satisfies
+assumption (3) of Situation \ref{situation-loc-chern}.
+\end{enumerate}
+Then $P'_{p_n}(Q_n) \circ \ldots \circ P'_{p_1}(Q_1)$ is equal to
+$$
+(W_{n, \infty} \cap \ldots \cap W_{1, \infty} \to
+Z_n \cap \ldots \cap Z_1)_* \circ
+P'_{p_n}(Q_n|_{W_{n, \infty}}) \circ \ldots \circ P'_{p_1}(Q_1|_{W_{1, \infty}})
+\circ C
+$$
+in $A^{p_n + \ldots + p_1}(Z_n \cap \ldots \cap Z_1 \to X)$,
+resp.\ $c'_{p_n}(Q_n) \circ \ldots \circ c'_{p_1}(Q_1)$ is equal to
+$$
+(W_{n, \infty} \cap \ldots \cap W_{1, \infty} \to
+Z_n \cap \ldots \cap Z_1)_* \circ
+c'_{p_n}(Q_n|_{W_{n, \infty}}) \circ \ldots \circ c'_{p_1}(Q_1|_{W_{1, \infty}})
+\circ C
+$$
+in $A^{p_n + \ldots + p_1}(Z_n \cap \ldots \cap Z_1 \to X)$.
+\end{lemma}
+
+\begin{proof}
+Let us prove the statement on Chern classes by induction on $n$;
the statement on $P'_p(-)$ is proved in the exact same manner.
+The case $n = 1$ is the construction of $c'_{p_1}(Q_1)$ because
+$W_{1, \infty}$ is the inverse image of $Z_1$ in $W_\infty$.
+For $n > 1$ we have by induction that
+$c'_{p_n}(Q_n) \circ \ldots \circ c'_{p_1}(Q_1)$ is equal to
+$$
+c'_{p_n}(Q_n) \circ
+(W_{n - 1, \infty} \cap \ldots \cap W_{1, \infty} \to
+Z_{n - 1} \cap \ldots \cap Z_1)_* \circ
c'_{p_{n - 1}}(Q_{n - 1}|_{W_{n - 1, \infty}}) \circ \ldots \circ
+c'_{p_1}(Q_1|_{W_{1, \infty}})
+\circ C
+$$
+By Lemma \ref{lemma-base-change-localized-chern-pre} the restriction of
+$c'_{p_n}(Q_n)$ to $Z_{n - 1} \cap \ldots \cap Z_1$ is computed by
+the closed subset $Z_n \cap \ldots \cap Z_1$, the morphism
+$b' : W_{n - 1} \cap \ldots \cap W_1 \to
+\mathbf{P}^1_{Z_{n - 1} \cap \ldots \cap Z_1}$
+and the restriction of $Q_n$ to $W_{n - 1} \cap \ldots \cap W_1$.
+Observe that $(b')^{-1}(Z_n) = W_n \cap \ldots \cap W_1$
+and that $(W_n \cap \ldots \cap W_1)_\infty =
+W_{n, \infty} \cap \ldots \cap W_{1, \infty}$.
+Denote $C_{n - 1} \in A^0(W_{n - 1, \infty} \cap \ldots \cap W_{1, \infty} \to
+Z_{n - 1} \cap \ldots \cap Z_1)$ the class of Lemma \ref{lemma-gysin-at-infty}.
+We conclude the restriction of $c'_{p_n}(Q_n)$ to
+$Z_{n - 1} \cap \ldots \cap Z_1$ is
+\begin{align*}
+&
+(W_{n, \infty} \cap \ldots \cap W_{1, \infty} \to Z_n \cap \ldots \cap Z_1)_*
+\circ
+c'_{p_n}(Q_n|_{(W_n \cap \ldots \cap W_1)_\infty})
+\circ
+C_{n - 1} \\
+& =
+(W_{n, \infty} \cap \ldots \cap W_{1, \infty} \to Z_n \cap \ldots \cap Z_1)_*
+\circ
+c'_{p_n}(Q_n|_{W_{n, \infty}})
+\circ
+C_{n - 1}
+\end{align*}
+where the equality follows from Lemma \ref{lemma-base-change-silly}
+(we omit writing the restriction on the right). Hence the above becomes
+\begin{align*}
+(W_{n, \infty} \cap \ldots \cap W_{1, \infty} \to
+Z_n \cap \ldots \cap Z_1)_* \circ
c'_{p_n}(Q_n|_{W_{n, \infty}}) \circ \\
+C_{n - 1} \circ
+(W_{n - 1, \infty} \cap \ldots \cap W_{1, \infty} \to
+Z_{n - 1} \cap \ldots \cap Z_1)_* \\
+\circ
c'_{p_{n - 1}}(Q_{n - 1}|_{W_{n - 1, \infty}}) \circ \ldots \circ
+c'_{p_1}(Q_1|_{W_{1, \infty}})
+\circ C
+\end{align*}
+By Lemma \ref{lemma-homomorphism-pre}
+we know that the composition
+$C_{n - 1} \circ (W_{n - 1, \infty} \cap \ldots \cap W_{1, \infty} \to
+Z_{n - 1} \cap \ldots \cap Z_1)_*$
+is the identity on elements in the image of the gysin map
+$$
+(W_{n - 1, \infty} \cap \ldots \cap W_{1, \infty} \to
+W_{n - 1} \cap \ldots \cap W_1)^*
+$$
+Thus it suffices to show that any element in the image of
$c'_{p_{n - 1}}(Q_{n - 1}|_{W_{n - 1, \infty}}) \circ \ldots \circ$
+c'_{p_1}(Q_1|_{W_{1, \infty}}) \circ C$
+is in the image of the gysin map. We may write
+$$
+c'_{p_i}(Q_i|_{W_{i, \infty}}) = \text{restriction of } c_{p_i}(W_i \to W, Q_i)
+\text{ to } W_{i, \infty}
+$$
+by Lemma \ref{lemma-loc-chern-agree} and assumptions (2) and (3) on $Q_i$
+in the statement of the lemma. Thus, if $\beta \in \CH_{k + 1}(W)$
+restricts to the flat pullback of $\alpha$ on $b^{-1}(\mathbf{A}^1_X)$,
+then
+\begin{align*}
& c'_{p_{n - 1}}(Q_{n - 1}|_{W_{n - 1, \infty}}) \cap \ldots \cap
+c'_{p_1}(Q_1|_{W_{1, \infty}})
+\cap C \cap \alpha \\
+& =
c'_{p_{n - 1}}(Q_{n - 1}|_{W_{n - 1, \infty}}) \cap \ldots \cap
+c'_{p_1}(Q_1|_{W_{1, \infty}})
+\cap i_\infty^* \beta \\
+& =
c_{p_{n - 1}}(W_{n - 1} \to W, Q_{n - 1}) \cap \ldots \cap
c_{p_1}(W_1 \to W, Q_1) \cap i_\infty^* \beta \\
+& =
+(W_{n - 1, \infty} \cap \ldots \cap W_{1, \infty} \to
+W_{n - 1} \cap \ldots \cap W_1)^*
+\left(c_{p_{n - 1}}(W_{n - 1} \to W, Q_{n - 1}) \cap \ldots \cap
+c_{p_1}(W_1 \to W, Q_1) \cap \beta\right)
+\end{align*}
+as desired. Namely, for the last equality we use that
+$c_{p_i}(W_i \to W, Q_i)$ is a bivariant class and hence
+commutes with $i_\infty^*$ by definition.
+\end{proof}
+
+\noindent
+The following lemma gives us a tremendous amount of flexibility
+if we want to compute the localized Chern classes of a complex.
+
+\begin{lemma}
+\label{lemma-independent-loc-chern-bQ}
+Assume $(S, \delta), X, Z, b : W \to \mathbf{P}^1_X, Q, T, p$
+satisfy the assumptions of Lemma \ref{lemma-localized-chern-pre}.
+Let $F \in D(\mathcal{O}_X)$ be a perfect object such that
+\begin{enumerate}
+\item the restriction of $Q$ to $b^{-1}(\mathbf{A}^1_X)$ is
+isomorphic to the pullback of $F$,
+\item $F|_{X \setminus Z}$ is zero, resp.\ isomorphic to a finite
+locally free $\mathcal{O}_{X \setminus Z}$-module of rank $< p$
+sitting in cohomological degree $0$, and
+\item $Q$ on $W$ and $F$ on $X$ satisfy assumption (3) of
+Situation \ref{situation-loc-chern}.
+\end{enumerate}
+Then the class $P'_p(Q)$, resp.\ $c'_p(Q)$ in $A^p(Z \to X)$ constructed
+in Lemma \ref{lemma-localized-chern-pre}
+is equal to $P_p(Z \to X, F)$, resp.\ $c_p(Z \to X, F)$
+from Definition \ref{definition-localized-chern}.
+\end{lemma}
+
+\begin{proof}
+The assumptions are preserved by base change with a morphism
+$X' \to X$ locally of finite type. Hence it suffices to show that
+$P_p(Z \to X, F) \cap \alpha = P'_p(Q) \cap \alpha$,
+resp.\ $c_p(Z \to X, F) \cap \alpha = c'_p(Q) \cap \alpha$
+for any $\alpha \in \CH_k(X)$. Choose $\beta \in \CH_{k + 1}(W)$
+whose restriction to $b^{-1}(\mathbf{A}^1_X)$ is equal to
+the flat pullback of $\alpha$ as in the construction of
+$C$ in Lemma \ref{lemma-gysin-at-infty}.
Denote $W' = b^{-1}(\mathbf{P}^1_Z)$ and denote
+$E = W'_\infty \subset W_\infty$ the inverse image of $Z$
+by $W_\infty \to X$.
+The lemma follows from
+the following sequence of equalities (the case of $P_p$ is similar)
+\begin{align*}
+c'_p(Q) \cap \alpha
+& =
+(E \to Z)_*(c'_p(Q|_E) \cap i_\infty^*\beta) \\
+& =
+(E \to Z)_*(c_p(E \to W_\infty, Q|_{W_\infty}) \cap i_\infty^*\beta) \\
+& =
+(W'_\infty \to Z)_*(c_p(W' \to W, Q) \cap i_\infty^*\beta) \\
+& =
+(W'_\infty \to Z)_*((i'_\infty)^*(c_p(W' \to W, Q) \cap \beta)) \\
+& =
(W'_\infty \to Z)_*((i'_\infty)^*(c_p(Z \to X, F) \cap \beta)) \\
+& =
(W'_0 \to Z)_*((i'_0)^*(c_p(Z \to X, F) \cap \beta)) \\
+& =
(W'_0 \to Z)_*(c_p(Z \to X, F) \cap i_0^*\beta) \\
+& =
+c_p(Z \to X, F) \cap \alpha
+\end{align*}
+The first equality is the construction of $c'_p(Q)$ in
+Lemma \ref{lemma-localized-chern-pre}.
+The second is Lemma \ref{lemma-loc-chern-agree}.
+The base change of $W' \to W$ by $W_\infty \to W$ is the
+morphism $E = W'_\infty \to W_\infty$. Hence the third equality holds
+by Lemma \ref{lemma-base-change-loc-chern}. The fourth
+equality, in which $i'_\infty : W'_\infty \to W'$ is the
+inclusion morphism, follows from the fact that $c_p(W' \to W, Q)$
is a bivariant class. For the fifth equality, observe that
$c_p(W' \to W, Q)$ and $c_p(Z \to X, F)$
restrict to the same bivariant class in
$A^p(W' \cap b^{-1}(\mathbf{A}^1_X) \to b^{-1}(\mathbf{A}^1_X))$ by
+assumption (1) of the lemma which says that $Q$ and $F$ restrict
+to the same object of $D(\mathcal{O}_{b^{-1}(\mathbf{A}^1_X)})$;
+use Lemma \ref{lemma-base-change-loc-chern}.
+Since $(i'_\infty)^*$ annihilates cycles supported on $W'_\infty$
+(see Remark \ref{remark-gysin-on-cycles}) we conclude the fifth equality
+is true. The sixth equality holds because $W'_\infty$ and $W'_0$
+are the pullbacks of the rationally equivalent effective Cartier divisors
+$D_0, D_\infty$ in $\mathbf{P}^1_Z$ and hence $i_\infty^*\beta$ and
+$i_0^*\beta$ map to the same cycle class on $W'$; namely, both
+represent the class
$c_1(\mathcal{O}_{\mathbf{P}^1_Z}(1)) \cap c_p(Z \to X, F) \cap \beta$ by
+Lemma \ref{lemma-support-cap-effective-Cartier}.
+The seventh equality holds because $c_p(Z \to X, F)$ is
+a bivariant class. By construction $W'_0 = Z$ and $i_0^*\beta = \alpha$
+which explains why the final equality holds.
+\end{proof}
+
+
+
+
+
+
+
+\section{Properties of localized Chern classes}
+\label{section-properties-loc-chern}
+
+\noindent
+The main results in this section are additivity and multiplicativity
+for localized Chern classes.
+
+\begin{lemma}
+\label{lemma-loc-chern-character}
+In Situation \ref{situation-loc-chern} assume $E|_{X \setminus Z}$ is zero.
+Then
+\begin{align*}
+P_1(Z \to X, E) & = c_1(Z \to X, E), \\
+P_2(Z \to X, E) & = c_1(Z \to X, E)^2 - 2c_2(Z \to X, E), \\
+P_3(Z \to X, E) & = c_1(Z \to X, E)^3 - 3c_1(Z \to X, E)c_2(Z \to X, E)
++ 3c_3(Z \to X, E),
+\end{align*}
+and so on where the products are taken in the algebra $A^{(1)}(Z \to X)$
+of Remark \ref{remark-ring-loc-classes}.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense because the zero sheaf has rank $< 1$ and
+hence the classes $c_p(Z \to X, E)$ are defined for all $p \geq 1$.
+The result itself follows immediately from the more general
Lemma \ref{lemma-localized-chern-pre-compose} as the localized Chern
classes were defined using the procedure of
+Lemma \ref{lemma-localized-chern-pre}
+in Section \ref{section-localized-chern}.
+\end{proof}
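\noindent
The displayed relations are the universal Newton identities relating the
power sums $P_p$ and the elementary symmetric functions $c_p$ of the Chern
roots. As a purely illustrative sanity check (not part of the formal
development), one can verify them numerically for sample integer roots:

```python
from itertools import combinations
from math import prod

def e(roots, i):
    # i-th elementary symmetric function of the roots (plays the role of c_i)
    return sum(prod(c) for c in combinations(roots, i))

def P(roots, k):
    # k-th power sum of the roots (plays the role of P_k)
    return sum(r**k for r in roots)

roots = [3, -5, 7]  # sample integer "Chern roots"; any values work
c1, c2, c3 = e(roots, 1), e(roots, 2), e(roots, 3)

assert P(roots, 1) == c1
assert P(roots, 2) == c1**2 - 2*c2
assert P(roots, 3) == c1**3 - 3*c1*c2 + 3*c3
```

Since the identities are universal polynomial identities, checking them on
arbitrary sample values is enough to make the pattern plausible; the lemma
itself of course proves them at the level of bivariant classes.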
+
+\begin{lemma}
+\label{lemma-loc-chern-classes-commute}
+In Situation \ref{situation-loc-chern}
+let $Y \to X$ be locally of finite type and $c \in A^*(Y \to X)$.
+Then
+$$
+P_p(Z \to X, E) \circ c = c \circ P_p(Z \to X, E),
+$$
+respectively
+$$
+c_p(Z \to X, E) \circ c = c \circ c_p(Z \to X, E)
+$$
+in $A^*(Y \times_X Z \to X)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-homomorphism-commute}.
+More precisely, let
+$$
+b : W \to \mathbf{P}^1_X
+\quad\text{and}\quad
+Q
+\quad\text{and}\quad
+T' \subset T \subset W_\infty
+$$
+be as in the proof of Lemma \ref{lemma-independent-loc-chern}.
By definition $P_p(Z \to X, E) = P'_p(Q)$, resp.\ $c_p(Z \to X, E) = c'_p(Q)$
as bivariant classes
+where the right hand side is the bivariant class constructed in
+Lemma \ref{lemma-localized-chern-pre} using $W, b, Q, T'$.
+By Lemma \ref{lemma-homomorphism-commute} we have
+$P'_p(Q) \circ c = c \circ P'_p(Q)$, resp.\ $c'_p(Q) \circ c = c \circ c'_p(Q)$
+in $A^*(Y \times_X Z \to X)$ and we conclude.
+\end{proof}
+
+\begin{remark}
+\label{remark-loc-chern-classes}
+In Situation \ref{situation-loc-chern} it is convenient to define
+$$
+c^{(p)}(Z \to X, E) = 1 + c_1(E) + \ldots + c_{p - 1}(E) +
+c_p(Z \to X, E) + c_{p + 1}(Z \to X, E) + \ldots
+$$
+as an element of the algebra $A^{(p)}(Z \to X)$ considered in
+Remark \ref{remark-ring-loc-classes}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-additivity-loc-chern-c}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $Z \to X$ be
+a closed immersion. Let
+$$
+E_1 \to E_2 \to E_3 \to E_1[1]
+$$
+be a distinguished triangle of perfect objects in $D(\mathcal{O}_X)$.
+Assume
+\begin{enumerate}
+\item the restrictions $E_1|_{X \setminus Z}$ and $E_3|_{X \setminus Z}$
+are isomorphic to finite locally free $\mathcal{O}_{X \setminus Z}$-modules
+of rank $< p_1$ and $< p_3$ placed in degree $0$, and
+\item at least one of the following is true:
+(a) $X$ is quasi-compact,
+(b) $X$ has quasi-compact irreducible components,
+(c) $E_3 \to E_1[1]$ can be represented by a map of locally
+bounded complexes of finite locally free $\mathcal{O}_X$-modules, or
+(d) there exists an envelope $f : Y \to X$ such that $Lf^*E_3 \to Lf^*E_1[1]$
+can be represented by a map of locally bounded complexes of
+finite locally free $\mathcal{O}_Y$-modules.
+\end{enumerate}
+With notation as in Remark \ref{remark-loc-chern-classes} we have
+$$
+c^{(p_1 + p_3)}(Z \to X, E_2) = c^{(p_1)}(Z \to X, E_1)c^{(p_3)}(Z \to X, E_3)
+$$
+in $A^{(p_1 + p_3)}(Z \to X)$.
+\end{lemma}
+
+\begin{proof}
+Observe that the assumptions imply that $E_2|_{X \setminus Z}$ is zero,
+resp.\ isomorphic to a finite locally free $\mathcal{O}_{X \setminus Z}$-module
+of rank $< p_1 + p_3$. Thus the statement makes sense.
+
+\medskip\noindent
Let $f : Y \to X$ be an envelope. Expanding the left and right hand sides
+of the formula in the statement of the lemma we see that we have to prove
+some equalities of classes in $A^*(X)$ and in $A^*(Z \to X)$. By the
+uniqueness in Lemma \ref{lemma-envelope-bivariant} it suffices to prove the
+corresponding relations in $A^*(Y)$ and $A^*(Z \to Y)$. Since moreover
+the construction of the classes involved is compatible with base change
+(Lemma \ref{lemma-base-change-loc-chern}) we may replace $X$ by $Y$
+and the distinguished triangle by its pullback.
+
+\medskip\noindent
+In the proof of Lemma \ref{lemma-additivity-on-perfect} we have
+seen that conditions (2)(a), (2)(b), and (2)(c) imply condition
+(2)(d). Combined with the discussion in the previous paragraph we
+reduce to the case discussed in the next paragraph.
+
+\medskip\noindent
+Let $\varphi^\bullet : \mathcal{E}_3^\bullet[-1] \to \mathcal{E}_1^\bullet$
+be a map of locally bounded complexes of finite locally free
+$\mathcal{O}_X$-modules representing the map $E_3[-1] \to E_1$
+in the derived category. Consider the scheme
+$X' = \mathbf{A}^1 \times X$ with projection
+$g : X' \to X$. Let $Z' = g^{-1}(Z) = \mathbf{A}^1 \times Z$.
+Denote $t$ the coordinate on $\mathbf{A}^1$. Consider the cone
+$\mathcal{C}^\bullet$ of the map of complexes
+$$
+t g^*\varphi^\bullet :
+g^*\mathcal{E}_3^\bullet[-1]
+\longrightarrow
+g^*\mathcal{E}_1^\bullet
+$$
+over $X'$. We obtain a distinguished triangle
+$$
+g^*\mathcal{E}_1^\bullet \to \mathcal{C}^\bullet \to
+g^*\mathcal{E}_3^\bullet \to g^*\mathcal{E}_1^\bullet[1]
+$$
+where the first three terms form a termwise split short exact
sequence of complexes. Clearly $\mathcal{C}^\bullet$ is a
locally bounded complex of finite locally free $\mathcal{O}_{X'}$-modules
+whose restriction to $X' \setminus Z'$ is isomorphic to a
+finite locally free
+$\mathcal{O}_{X' \setminus Z'}$-module of rank $< p_1 + p_3$
+placed in degree $0$. Thus we have the localized Chern classes
+$$
+c_p(Z' \to X', \mathcal{C}^\bullet) \in A^p(Z' \to X')
+$$
+for $p \geq p_1 + p_3$. For any $\alpha \in \CH_k(X)$ consider
+$$
+c_p(Z' \to X', \mathcal{C}^\bullet) \cap g^*\alpha
\in \CH_{k + 1 - p}(\mathbf{A}^1 \times Z)
+$$
+If we restrict to $t = 0$, then the map $t g^*\varphi^\bullet$
+restricts to zero and $\mathcal{C}^\bullet|_{t = 0}$
+is the direct sum of $\mathcal{E}_1^\bullet$ and $\mathcal{E}_3^\bullet$.
+By compatibility of localized Chern classes with base change
+(Lemma \ref{lemma-base-change-loc-chern}) we conclude that
+$$
+i_0^* \circ c^{(p_1 + p_3)}(Z' \to X', \mathcal{C}^\bullet) \circ g^* =
c^{(p_1 + p_3)}(Z \to X, E_1 \oplus E_3)
+$$
+in $A^{(p_1 + p_3)}(Z \to X)$. On the other hand, if we restrict to $t = 1$,
+then the map $t g^*\varphi^\bullet$
restricts to $\varphi^\bullet$ and $\mathcal{C}^\bullet|_{t = 1}$
is a locally bounded complex of finite locally free modules representing $E_2$.
+We conclude that
+$$
+i_1^* \circ c^{(p_1 + p_3)}(Z' \to X', \mathcal{C}^\bullet) \circ g^* =
c^{(p_1 + p_3)}(Z \to X, E_2)
+$$
+in $A^{(p_1 + p_3)}(Z \to X)$. Since $i_0^* = i_1^*$ by definition of
+rational equivalence (more precisely this follows from the formulae in
+Lemma \ref{lemma-linebundle-formulae}) we conclude that
+$$
c^{(p_1 + p_3)}(Z \to X, E_2) = c^{(p_1 + p_3)}(Z \to X, E_1 \oplus E_3)
+$$
+This reduces us to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $E_2 = E_1 \oplus E_3$ and the triples $(X, Z, E_i)$
+are as in Situation \ref{situation-loc-chern}.
+For $i = 1, 3$ let
+$$
+b_i : W_i \to \mathbf{P}^1_X
+\quad\text{and}\quad
+Q_i
+\quad\text{and}\quad
+T'_i \subset T_i \subset W_{i, \infty}
+$$
+be as in the proof of Lemma \ref{lemma-independent-loc-chern}.
+By definition
+$$
+c_p(Z \to X, E_i) = c'_p(Q_i)
+$$
+where the right hand side is the bivariant class constructed in
+Lemma \ref{lemma-localized-chern-pre} using $W_i, b_i, Q_i, T'_i$.
Set $W = W_1 \times_{b_1, \mathbf{P}^1_X, b_3} W_3$ and consider
+the cartesian diagram
+$$
+\xymatrix{
+W \ar[d]_{g_1} \ar[rd]^b \ar[r]_{g_3} & W_3 \ar[d]^{b_3} \\
+W_1 \ar[r]^{b_1} & \mathbf{P}^1_X
+}
+$$
Of course $b^{-1}(\mathbf{A}^1_X)$ maps isomorphically to $\mathbf{A}^1_X$.
Observe that $T' = g_1^{-1}(T'_1) \cap g_3^{-1}(T'_3)$ still contains
+all the points of $W_\infty$ lying over $X \setminus Z$.
+By Lemma \ref{lemma-localized-chern-pre-independent} we may use
$W$, $b$, $g_i^*Q_i$, and
+$T'$ to construct $c_p(Z \to X, E_i)$ for $i = 1, 3$.
+Also, by the stronger independence given in
+Lemma \ref{lemma-independent-loc-chern-bQ} we may use
+$W$, $b$, $g_1^*Q_1 \oplus g_3^*Q_3$, and $T'$
+to compute the classes $c_p(Z \to X, E_2)$.
+Thus the desired equality follows from
+Lemma \ref{lemma-localized-chern-pre-sum-c}.
+\end{proof}
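\noindent
The direct sum case at the end of the proof ultimately rests on the Whitney
sum principle: the Chern roots of $E_1 \oplus E_3$ are the union of the roots
of the summands, so the total Chern classes multiply, i.e., the coefficients
form a Cauchy product. A toy numerical check with made-up integer roots
(illustration only, outside the formal argument):

```python
from itertools import combinations
from math import prod

def chern_coeffs(roots, top):
    # coefficients [1, c_1, ..., c_top] of the total Chern class prod(1 + r*t)
    return [sum(prod(c) for c in combinations(roots, i)) for i in range(top + 1)]

xs = [2, -3]       # sample roots standing in for E_1
ys = [1, 4, -6]    # sample roots standing in for E_3
n = len(xs) + len(ys)

cx = chern_coeffs(xs, n)
cy = chern_coeffs(ys, n)
csum = chern_coeffs(xs + ys, n)

# Whitney sum: c(E_1 + E_3) = c(E_1) c(E_3), so coefficients are the
# Cauchy product of the coefficient lists.
for k in range(n + 1):
    assert csum[k] == sum(cx[i] * cy[k - i] for i in range(k + 1))
```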
+
+\begin{lemma}
+\label{lemma-additivity-loc-chern-P}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. Let $Z \to X$ be
+a closed immersion. Let
+$$
+E_1 \to E_2 \to E_3 \to E_1[1]
+$$
+be a distinguished triangle of perfect objects in $D(\mathcal{O}_X)$.
+Assume
+\begin{enumerate}
+\item the restrictions $E_1|_{X \setminus Z}$ and $E_3|_{X \setminus Z}$
+are zero, and
+\item at least one of the following is true:
+(a) $X$ is quasi-compact,
+(b) $X$ has quasi-compact irreducible components,
+(c) $E_3 \to E_1[1]$ can be represented by a map of locally
+bounded complexes of finite locally free $\mathcal{O}_X$-modules, or
+(d) there exists an envelope $f : Y \to X$ such that $Lf^*E_3 \to Lf^*E_1[1]$
+can be represented by a map of locally bounded complexes of
+finite locally free $\mathcal{O}_Y$-modules.
+\end{enumerate}
+Then we have
+$$
+P_p(Z \to X, E_2) = P_p(Z \to X, E_1) + P_p(Z \to X, E_3)
+$$
+for all $p \in \mathbf{Z}$ and consequently
+$ch(Z \to X, E_2) = ch(Z \to X, E_1) + ch(Z \to X, E_3)$.
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-additivity-loc-chern-c}
+except it uses
+Lemma \ref{lemma-localized-chern-pre-sum-P}
+at the very end. For $p > 0$ we can deduce this lemma
+from Lemma \ref{lemma-additivity-loc-chern-c} with $p_1 = p_3 = 1$
+and the relationship between $P_p(Z \to X, E)$ and $c_p(Z \to X, E)$ given in
+Lemma \ref{lemma-loc-chern-character}. The case $p = 0$ can be shown
+directly (it is only interesting if $X$ has a connected component
+entirely contained in $Z$).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-loc-chern-tensor-product}
+In Situation \ref{situation-setup} let $X$ be locally of finite type over $S$.
+Let $Z_i \subset X$, $i = 1, 2$ be closed subschemes. Let $F_i$, $i = 1, 2$
+be perfect objects of $D(\mathcal{O}_X)$. Assume for $i = 1, 2$ that
+$F_i|_{X \setminus Z_i}$ is zero\footnote{Presumably there
+is a variant of this lemma where we only assume $F_i|_{X \setminus Z_i}$
+is isomorphic to a finite locally free $\mathcal{O}_{X \setminus Z_i}$-module
+of rank $< p_i$.} and that $F_i$ on $X$ satisfies assumption
+(3) of Situation \ref{situation-loc-chern}. Denote
+$r_i = P_0(Z_i \to X, F_i) \in A^0(Z_i \to X)$.
+Then we have
+$$
+c_1(Z_1 \cap Z_2 \to X, F_1 \otimes_{\mathcal{O}_X}^\mathbf{L} F_2) =
+r_1 c_1(Z_2 \to X, F_2) + r_2 c_1(Z_1 \to X, F_1)
+$$
+in $A^1(Z_1 \cap Z_2 \to X)$ and
+\begin{align*}
+c_2(Z_1 \cap Z_2 \to X, F_1 \otimes_{\mathcal{O}_X}^\mathbf{L} F_2)
+& =
+r_1 c_2(Z_2 \to X, F_2) +
+r_2 c_2(Z_1 \to X, F_1) + \\
+& {r_1 \choose 2} c_1(Z_2 \to X, F_2)^2 + \\
+& (r_1r_2 - 1) c_1(Z_2 \to X, F_2)c_1(Z_1 \to X, F_1) + \\
+& {r_2 \choose 2} c_1(Z_1 \to X, F_1)^2
+\end{align*}
+in $A^2(Z_1 \cap Z_2 \to X)$ and so on for higher Chern classes.
+Similarly, we have
+$$
+ch(Z_1 \cap Z_2 \to X, F_1 \otimes_{\mathcal{O}_X}^\mathbf{L} F_2) =
+ch(Z_1 \to X, F_1) ch(Z_2 \to X, F_2)
+$$
+in $A^*(Z_1 \cap Z_2 \to X) \otimes \mathbf{Q}$. More precisely, we have
+$$
+P_p(Z_1 \cap Z_2 \to X, F_1 \otimes_{\mathcal{O}_X}^\mathbf{L} F_2) =
+\sum\nolimits_{p_1 + p_2 = p}
+{p \choose p_1} P_{p_1}(Z_1 \to X, F_1) P_{p_2}(Z_2 \to X, F_2)
+$$
+in $A^p(Z_1 \cap Z_2 \to X)$.
+\end{lemma}
+
+\begin{proof}
+Choose proper morphisms $b_i : W_i \to \mathbf{P}^1_X$ and
+$Q_i \in D(\mathcal{O}_{W_i})$ as well as closed subschemes
+$T_i \subset W_{i, \infty}$ as in the construction of
+the localized Chern classes for $F_i$ or more generally as in
+Lemma \ref{lemma-independent-loc-chern-bQ}. Choose a commutative
+diagram
+$$
+\xymatrix{
+W \ar[d]^{g_1} \ar[rd]^b \ar[r]_{g_2} & W_2 \ar[d]^{b_2} \\
+W_1 \ar[r]^{b_1} & \mathbf{P}^1_X
+}
+$$
+where all morphisms are proper and isomorphisms over
+$\mathbf{A}^1_X$. For example, we can take $W$ to be the closure
+of the graph of the isomorphism between
+$b_1^{-1}(\mathbf{A}^1_X)$ and $b_2^{-1}(\mathbf{A}^1_X)$.
+By Lemma \ref{lemma-independent-loc-chern-bQ} we may work with
+$W$, $b = b_i \circ g_i$, $Lg_i^*Q_i$, and
+$g_i^{-1}(T_i)$ to construct the localized Chern classes
+$c_p(Z_i \to X, F_i)$. Thus we reduce to the situation described
+in the next paragraph.
+
+\medskip\noindent
+Assume we have
+\begin{enumerate}
+\item a proper morphism $b : W \to \mathbf{P}^1_X$ which is an isomorphism
+over $\mathbf{A}^1_X$,
+\item $E_i \subset W_\infty$ is the inverse image of $Z_i$,
+\item perfect objects $Q_i \in D(\mathcal{O}_W)$ whose Chern classes
+are defined, such that
+\begin{enumerate}
+\item the restriction of $Q_i$ to $b^{-1}(\mathbf{A}^1_X)$ is
+the pullback of $F_i$, and
+\item there exists a closed subscheme $T_i \subset W_\infty$ containing
+all points of $W_\infty$ lying over $X \setminus Z_i$ such that
+$Q_i|_{T_i}$ is zero.
+\end{enumerate}
+\end{enumerate}
+By Lemma \ref{lemma-independent-loc-chern-bQ} we have
+$$
+c_p(Z_i \to X, F_i) = c'_p(Q_i) =
+(E_i \to Z_i)_* \circ c'_p(Q_i|_{E_i}) \circ C
+$$
+and
+$$
+P_p(Z_i \to X, F_i) = P'_p(Q_i) =
+(E_i \to Z_i)_* \circ P'_p(Q_i|_{E_i}) \circ C
+$$
+for $i = 1, 2$. Next, we observe that
+$Q = Q_1 \otimes_{\mathcal{O}_W}^\mathbf{L} Q_2$
+satisfies (3)(a) and (3)(b) for $F_1 \otimes_{\mathcal{O}_X}^\mathbf{L} F_2$
+and $T_1 \cup T_2$. Hence we see that
+$$
+c_p(Z_1 \cap Z_2 \to X, F_1 \otimes_{\mathcal{O}_X}^\mathbf{L} F_2) =
+(E_1 \cap E_2 \to Z_1 \cap Z_2)_* \circ
+c'_p(Q|_{E_1 \cap E_2}) \circ C
+$$
+and
+$$
+P_p(Z_1 \cap Z_2 \to X, F_1 \otimes_{\mathcal{O}_X}^\mathbf{L} F_2) =
+(E_1 \cap E_2 \to Z_1 \cap Z_2)_* \circ
+P'_p(Q|_{E_1 \cap E_2}) \circ C
+$$
+by the same lemma. By Lemma \ref{lemma-silly-tensor-product}
+the classes $c'_p(Q|_{E_1 \cap E_2})$ and $P'_p(Q|_{E_1 \cap E_2})$
+can be expanded in the correct manner in terms of the classes
+$c'_p(Q_i|_{E_i})$ and $P'_p(Q_i|_{E_i})$. Then finally
+Lemma \ref{lemma-homomorphism-final}
+tells us that polynomials in $c'_p(Q_i|_{E_i})$ and $P'_p(Q_i|_{E_i})$
+agree with the corresponding polynomials in
+$c'_p(Q_i)$ and $P'_p(Q_i)$ as desired.
+\end{proof}
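\noindent
The final displayed formula reflects the fact that the ``Chern roots'' of
$F_1 \otimes_{\mathcal{O}_X}^\mathbf{L} F_2$ are the pairwise sums of the
roots of the factors, so the binomial convolution of power sums is just the
binomial theorem. A numerical illustration with sample integer roots (a
sketch, not part of the argument):

```python
from math import comb

def P(roots, k):
    # k-th power sum of the roots
    return sum(r**k for r in roots)

xs = [2, -3]      # sample roots standing in for F_1 (so r_1 = 2)
ys = [1, 4, -6]   # sample roots standing in for F_2 (so r_2 = 3)

# roots of the tensor product: all pairwise sums x_i + y_j
tensor = [x + y for x in xs for y in ys]

for p in range(6):
    lhs = P(tensor, p)
    rhs = sum(comb(p, p1) * P(xs, p1) * P(ys, p - p1) for p1 in range(p + 1))
    assert lhs == rhs
```

Note that the $p = 0$ case recovers $r_1 r_2$, the rank of the tensor
product, matching the role of the factors $r_i = P_0(Z_i \to X, F_i)$ in
the statement.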
+
+
+
+
+
+
+\section{Blowing up at infinity}
+\label{section-blowup-Z-first}
+
+\noindent
+Let $X$ be a scheme. Let $Z \subset X$ be a closed subscheme cut out
+by a finite type quasi-coherent sheaf of ideals. Denote $X' \to X$
+the blowing up with center $Z$. Let $b : W \to \mathbf{P}^1_X$ be the
+blowing up with center $\infty(Z)$. Denote $E \subset W$ the exceptional
+divisor. There is a commutative diagram
+$$
+\xymatrix{
+X' \ar[r] \ar[d] & W \ar[d]^b \\
+X \ar[r]^\infty & \mathbf{P}^1_X
+}
+$$
whose horizontal arrows are closed immersions
(Divisors, Lemma \ref{divisors-lemma-strict-transform}).
Denote $W_\infty \subset W$ the inverse image
of $(\mathbf{P}^1_X)_\infty$. Then the following are true
+\begin{enumerate}
+\item $b$ is an isomorphism over
+$\mathbf{A}^1_X \cup \mathbf{P}^1_{X \setminus Z}$,
+\item $X'$ is an effective Cartier divisor on $W$,
+\item $X' \cap E$ is the exceptional divisor of $X' \to X$,
+\item $W_\infty = X' + E$ as effective Cartier divisors on $W$,
+\item $E = \underline{\text{Proj}}_Z(\mathcal{C}_{Z/X, *}[S])$ where $S$
+is a variable placed in degree $1$,
+\item $X' \cap E = \underline{\text{Proj}}_Z(\mathcal{C}_{Z/X, *})$,
+\item
+\label{item-cone-is-open}
+$E \setminus X' = E \setminus (X' \cap E) =
+\underline{\Spec}_Z(\mathcal{C}_{Z/X, *}) = C_ZX$,
+\item
+\label{item-find-Z-in-blowup}
+there is a closed immersion $\mathbf{P}^1_Z \to W$ whose
+composition with $b$ is the inclusion morphism
+$\mathbf{P}^1_Z \to \mathbf{P}^1_X$ and whose base change by $\infty$
+is the composition $Z \to C_ZX \to E \to W_\infty$ where the first
+arrow is the vertex of the cone.
+\end{enumerate}
+We recall that $\mathcal{C}_{Z/X, *}$ is the conormal algebra of $Z$ in $X$,
+see Divisors, Definition \ref{divisors-definition-conormal-sheaf} and
+that $C_ZX$ is the normal cone of $Z$ in $X$, see
+Divisors, Definition \ref{divisors-definition-normal-cone}.
+
+\medskip\noindent
+We now give the proof of the numbered assertions above. We strongly
+urge the reader to work through some examples instead of reading the
+proofs.
+
+\medskip\noindent
+Part (1) follows from the corresponding assertion of Divisors, Lemma
+\ref{divisors-lemma-blowing-up-gives-effective-Cartier-divisor}.
+Observe that $E \subset W$ is an effective Cartier divisor by the same lemma.
+
+\medskip\noindent
+Observe that $W_\infty$ is an effective Cartier divisor by
+Divisors, Lemma \ref{divisors-lemma-blow-up-pullback-effective-Cartier}.
+Since $E \subset W_\infty$ we can write $W_\infty = D + E$ for some
+effective Cartier divisor $D$, see
+Divisors, Lemma \ref{divisors-lemma-difference-effective-Cartier-divisors}.
+We will see below that $D = X'$ which will prove (2) and (4).
+
+\medskip\noindent
+Since $X'$ is the strict transform of the closed immersion
+$\infty : X \to \mathbf{P}^1_X$ (see above) it follows that the exceptional
+divisor of $X' \to X$ is equal to the intersection $X' \cap E$
+(for example because both are cut out by the pullback of the
+ideal sheaf of $Z$ to $X'$). This proves (3).
+
+\medskip\noindent
+The intersection of $\infty(Z)$ with $\mathbf{P}^1_Z$ is the effective
+Cartier divisor $(\mathbf{P}^1_Z)_\infty$ hence the strict transform
+of $\mathbf{P}^1_Z$ by the blowing up $b$ maps isomorphically to
+$\mathbf{P}^1_Z$ (see Divisors, Lemmas \ref{divisors-lemma-strict-transform}
+and \ref{divisors-lemma-blow-up-effective-Cartier-divisor}).
+This gives us the morphism $\mathbf{P}^1_Z \to W$ mentioned in (8).
+It is a closed immersion as $b$ is separated, see
+Schemes, Lemma \ref{schemes-lemma-section-immersion}.
+
+\medskip\noindent
+Suppose that $\Spec(A) \subset X$ is an affine open and that $Z \cap \Spec(A)$
+corresponds to the finitely generated ideal $I \subset A$.
+An affine neighbourhood of $\infty(Z \cap \Spec(A))$ is the
+affine space over $A$ with coordinate $s = T_0/T_1$. Denote
+$J = (I, s) \subset A[s]$ the ideal generated by $I$ and $s$.
+Let $B = A[s] \oplus J \oplus J^2 \oplus \ldots$ be the Rees algebra
+of $(A[s], J)$. Observe that
+$$
+J^n =
I^n \oplus sI^{n - 1} \oplus s^2I^{n - 2} \oplus \ldots \oplus s^nA
+\oplus s^{n + 1}A \oplus \ldots
+$$
+as an $A$-submodule of $A[s]$ for all $n \geq 0$. Consider the open subscheme
+$$
+\text{Proj}(B) = \text{Proj}(A[s] \oplus J \oplus J^2 \oplus \ldots)
+\subset W
+$$
+Finally, denote $S$ the element $s \in J$ viewed as a degree $1$ element
+of $B$.
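
\medskip\noindent
As a quick sanity check (not part of the argument), the case $n = 1$
of the displayed formula for $J^n$ reads
$$
J = I \oplus sA \oplus s^2A \oplus \ldots
$$
which, as an $A$-submodule of $A[s]$, is just $I \cdot A[s] + s \cdot A[s]$,
in agreement with $J = (I, s)$.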
+
+\medskip\noindent
+Since formation of $\text{Proj}$ commutes with base change
+(Constructions, Lemma \ref{constructions-lemma-base-change-map-proj})
+we see that
+$$
+E = \text{Proj}(B \otimes_{A[s]} A/I) =
+\text{Proj}((A/I \oplus I/I^2 \oplus I^2/I^3 \oplus \ldots)[S])
+$$
+The verification that $B \otimes_{A[s]} A/I = \bigoplus J^n/J^{n + 1}$
+is as given
+follows immediately from our description of the powers $J^n$ above.
+This proves (5) because the conormal algebra of $Z \cap \Spec(A)$
+in $\Spec(A)$ corresponds to the graded $A$-algebra
+$A/I \oplus I/I^2 \oplus I^2/I^3 \oplus \ldots$ by
+Divisors, Lemma \ref{divisors-lemma-affine-conormal-sheaf}.
+
+\medskip\noindent
+Recall that $\text{Proj}(B)$ is covered by the affine opens
+$D_+(S)$ and $D_+(f^{(1)})$ for $f \in I$ which are
+the spectra of affine blowup algebras $A[s][\frac{J}{s}]$
+and $A[s][\frac{J}{f}]$, see
+Divisors, Lemma \ref{divisors-lemma-blowing-up-affine} and
+Algebra, Definition \ref{algebra-definition-blow-up}.
+We will describe each of these affine opens and this will finish the
+proof.
+
+\medskip\noindent
+The open $D_+(S)$, i.e., the spectrum of $A[s][\frac{J}{s}]$.
+It follows from the description of the powers of $J$ above
+that
+$$
+A[s][\textstyle{\frac{J}{s}}] = \sum s^{-n}I^n[s] \subset A[s, s^{-1}]
+$$
The element $s$ is a nonzerodivisor in this ring and defines both the
exceptional divisor $E$ and $W_\infty$. Hence $D \cap D_+(S) = \emptyset$.
+Finally, the quotient of $A[s][\frac{J}{s}]$ by $s$ is the conormal algebra
+$$
+A/I \oplus I/I^2 \oplus I^2/I^3 \oplus \ldots
+$$
+This proves (7).
+
+\medskip\noindent
+The open $D_+(f^{(1)})$, i.e., the spectrum of $A[s][\frac{J}{f}]$.
+It follows from the description of the powers of $J$ above that
+$$
+A[s][\textstyle{\frac{J}{f}}] =
+A[\textstyle{\frac{I}{f}}][\textstyle{\frac{s}{f}}]
+$$
+where $\frac{s}{f}$ is a variable. The element $f$ is a nonzerodivisor
+in this ring whose zero scheme defines the exceptional divisor $E$.
+Since $s$ defines $W_\infty$ and $s = f \cdot \frac{s}{f}$
+we conclude that $\frac{s}{f}$ defines
+the divisor $D$ constructed above. Then we see that
+$$
+D \cap D_+(f^{(1)}) = \Spec(A[\textstyle{\frac{I}{f}}])
+$$
+which is the corresponding open of the blowup $X'$ over $\Spec(A)$.
+Namely, the surjective graded $A[s]$-algebra map
+$B \to A \oplus I \oplus I^2 \oplus \ldots$
+to the Rees algebra of $(A, I)$ corresponds to the closed
+immersion $X' \to W$ over $\Spec(A[s])$.
+This proves $D = X'$ as desired.
+
+\medskip\noindent
+Let us prove (6). Observe that the zero scheme of $\frac{s}{f}$
+in the previous paragraph is the restriction of the zero scheme of $S$
+on the affine open $D_+(f^{(1)})$. Hence we see that $S = 0$ defines
+$X' \cap E$ on $E$. Thus (6) follows from (5).
+
+\medskip\noindent
+Finally, we have to prove the last part of (8). This is clear
+because the map $\mathbf{P}^1_Z \to W$ is affine locally
+given by the surjection
+$$
+B \to B \otimes_{A[s]} A/I =
+(A/I \oplus I/I^2 \oplus I^2/I^3 \oplus \ldots)[S] \to
+A/I[S]
+$$
+and the identification $\text{Proj}(A/I[S]) = \Spec(A/I)$.
+Some details omitted.
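
\medskip\noindent
To illustrate the constructions above in the simplest case (this example
is not used elsewhere), take $A = k[x]$ a polynomial ring over a field $k$
and $I = (x)$, i.e., $Z \subset X = \Spec(k[x])$ is the origin. Then
$J = (x, s) \subset k[x, s]$ and the two affine charts considered above are
$$
A[s][\textstyle{\frac{J}{s}}] = k[s, \textstyle{\frac{x}{s}}]
\quad\text{and}\quad
A[s][\textstyle{\frac{J}{x}}] = k[x, \textstyle{\frac{s}{x}}]
$$
In the first chart $s = 0$ cuts out $\Spec(k[\frac{x}{s}])$, which is
indeed the spectrum of the conormal algebra
$k[x]/(x) \oplus (x)/(x^2) \oplus \ldots$ of $Z$ in $X$, that is, the
normal cone $C_ZX$. In the second chart $\frac{s}{x} = 0$ cuts out
$D = \Spec(k[x])$, matching the fact that the blowup $X' \to X$ in the
effective Cartier divisor $Z$ is an isomorphism.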
+
+
+
+
+
+
+\section{Higher codimension gysin homomorphisms}
+\label{section-gysin-higher-codimension}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
+locally of finite type over $S$. In this section we are going to consider
+triples
+$$
+(Z \to X, \mathcal{N}, \sigma : \mathcal{N}^\vee \to \mathcal{C}_{Z/X})
+$$
consisting of a closed immersion $Z \to X$, a locally free
$\mathcal{O}_Z$-module $\mathcal{N}$, and a surjection
+$\sigma : \mathcal{N}^\vee \to \mathcal{C}_{Z/X}$ from the dual
+of $\mathcal{N}$ to the conormal sheaf of $Z$ in $X$, see
+Morphisms, Section \ref{morphisms-section-conormal-sheaf}.
+We will say
+$\mathcal{N}$ is a {\it virtual normal sheaf for $Z$ in $X$}.
+
+\begin{lemma}
+\label{lemma-pullback-virtual-normal-sheaf}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let
+$$
+\xymatrix{
+Z' \ar[r] \ar[d]_g & X' \ar[d]^f \\
+Z \ar[r] & X
+}
+$$
+be a cartesian diagram of schemes locally of finite type over $S$
+whose horizontal arrows are closed immersions.
+If $\mathcal{N}$ is a virtual normal sheaf for $Z$ in $X$, then
+$\mathcal{N}' = g^*\mathcal{N}$ is a virtual normal sheaf for
+$Z'$ in $X'$.
+\end{lemma}
+
+\begin{proof}
+This follows from the surjectivity of the map
+$g^*\mathcal{C}_{Z/X} \to \mathcal{C}_{Z'/X'}$ proved in
+Morphisms, Lemma \ref{morphisms-lemma-conormal-functorial-flat}.
+\end{proof}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
locally of finite type over $S$. Let $\mathcal{N}$ be a virtual normal sheaf
for a closed immersion $Z \to X$. In this situation we set
+$$
+p : N = \underline{\Spec}_Z(\text{Sym}(\mathcal{N}^\vee)) \longrightarrow Z
+$$
+equal to the vector bundle over $Z$
+whose sections correspond to sections of $\mathcal{N}$.
+In this situation we have canonical closed immersions
+$$
+C_ZX \longrightarrow N_ZX \longrightarrow N
+$$
+The first closed immersion is Divisors, Equation
+(\ref{divisors-equation-normal-cone-in-normal-bundle})
+and the second closed immersion corresponds to the surjection
+$\text{Sym}(\mathcal{N}^\vee) \to \text{Sym}(\mathcal{C}_{Z/X})$
+induced by $\sigma$.
+Let
+$$
+b : W \longrightarrow \mathbf{P}^1_X
+$$
+be the blowing up in $\infty(Z)$ constructed in
+Section \ref{section-blowup-Z-first}. By
+Lemma \ref{lemma-gysin-at-infty}
we have a canonical bivariant class
+$$
+C \in A^0(W_\infty \to X)
+$$
+Consider the open immersion $j : C_ZX \to W_\infty$ of
+(\ref{item-cone-is-open}) and the closed immersion
+$i : C_ZX \to N$ constructed above. By Lemma \ref{lemma-vectorbundle}
+for every $\alpha \in \CH_k(X)$ there exists a unique
+$\beta \in \CH_*(Z)$ such that
+$$
+i_*j^*(C \cap \alpha) = p^*\beta
+$$
+We set $c(Z \to X, \mathcal{N}) \cap \alpha = \beta$.
+
+\begin{lemma}
+\label{lemma-construction-gysin}
+The construction above defines a bivariant class\footnote{The
+notation $A^*(Z \to X)^\wedge$ is discussed in
+Remark \ref{remark-completion-bivariant}.
+If $X$ is quasi-compact, then $A^*(Z \to X)^\wedge = A^*(Z \to X)$.}
+$$
+c(Z \to X, \mathcal{N}) \in A^*(Z \to X)^\wedge
+$$
+and moreover the construction is compatible with base change
+as in Lemma \ref{lemma-pullback-virtual-normal-sheaf}.
+If $\mathcal{N}$ has constant rank $r$, then
+$c(Z \to X, \mathcal{N}) \in A^r(Z \to X)$.
+\end{lemma}
+
+\begin{proof}
+Since both $i_* \circ j^* \circ C$ and $p^*$ are bivariant classes
+(see Lemmas \ref{lemma-flat-pullback-bivariant} and
+\ref{lemma-push-proper-bivariant}) we can use the equation
+$$
+i_* \circ j^* \circ C = p^* \circ c(Z \to X, \mathcal{N})
+$$
+(suitably interpreted) to define $c(Z \to X, \mathcal{N})$
as a bivariant class. This works because $p^*$ is always
bijective on Chow groups by Lemma \ref{lemma-vectorbundle}.
+
+\medskip\noindent
+Let $X' \to X$, $Z' \to X'$, and $\mathcal{N}'$ be as in
+Lemma \ref{lemma-pullback-virtual-normal-sheaf}. Write
+$c = c(Z \to X, \mathcal{N})$ and $c' = c(Z' \to X', \mathcal{N}')$.
+The second statement of the lemma means that $c'$ is the restriction of $c$
+as in Remark \ref{remark-restriction-bivariant}. Since we claim this
+is true for all $X'/X$ locally of finite type, a formal argument
+shows that it suffices to check that $c' \cap \alpha' = c \cap \alpha'$
+for $\alpha' \in \CH_k(X')$.
+To see this, note that we have a commutative diagram
+$$
+\xymatrix{
+C_{Z'}X' \ar[d] \ar[r] &
+W'_\infty \ar[d] \ar[r] &
+W' \ar[d] \ar[r] &
+\mathbf{P}^1_{X'} \ar[d] \\
+C_ZX \ar[r] &
+W_\infty \ar[r] &
+W \ar[r] &
+\mathbf{P}^1_X
+}
+$$
+which induces closed immersions:
+$$
+W' \to W \times_{\mathbf{P}^1_X} \mathbf{P}^1_{X'},\quad
+W'_\infty \to W_\infty \times_X X',\quad
+C_{Z'}X' \to C_ZX \times_Z Z'
+$$
+To get $c \cap \alpha'$ we use the class $C \cap \alpha'$
+defined using the morphism
+$W \times_{\mathbf{P}^1_X} \mathbf{P}^1_{X'} \to \mathbf{P}^1_{X'}$
+in Lemma \ref{lemma-gysin-at-infty}.
+To get $c' \cap \alpha'$ on the other hand, we use the class
+$C' \cap \alpha'$ defined using the morphism $W' \to \mathbf{P}^1_{X'}$.
+By Lemma \ref{lemma-gysin-at-infty-independent} the pushforward of
+$C' \cap \alpha'$ by the closed immersion
+$W'_\infty \to (W \times_{\mathbf{P}^1_X} \mathbf{P}^1_{X'})_\infty$,
+is equal to $C \cap \alpha'$. Hence the same is true for the pullbacks
+to the opens
+$$
+C_{Z'}X' \subset W'_\infty,\quad
+C_ZX \times_Z Z' \subset (W \times_{\mathbf{P}^1_X} \mathbf{P}^1_{X'})_\infty
+$$
+by Lemma \ref{lemma-flat-pullback-proper-pushforward}.
+Since we have a commutative diagram
+$$
+\xymatrix{
+C_{Z'} X' \ar[d] \ar[r] & N' \ar@{=}[d] \\
+C_ZX \times_Z Z' \ar[r] & N \times_Z Z'
+}
+$$
+these classes pushforward to the same class on $N'$ which
+proves that we obtain the same element $c \cap \alpha' = c' \cap \alpha'$
+in $\CH_*(Z')$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-decompose}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
+locally of finite type over $S$. Let $\mathcal{N}$ be a virtual normal
+sheaf for a closed subscheme $Z$ of $X$. Suppose that we have a short
+exact sequence $0 \to \mathcal{N}' \to \mathcal{N} \to \mathcal{E} \to 0$
+of finite locally free $\mathcal{O}_Z$-modules such that the given surjection
+$\sigma : \mathcal{N}^\vee \to \mathcal{C}_{Z/X}$ factors through a map
+$\sigma' : (\mathcal{N}')^\vee \to \mathcal{C}_{Z/X}$.
+Then
+$$
+c(Z \to X, \mathcal{N}) = c_{top}(\mathcal{E}) \circ c(Z \to X, \mathcal{N}')
+$$
+as bivariant classes.
+\end{lemma}
+
+\begin{proof}
+Denote $N' \to N$ the closed immersion of vector bundles corresponding
+to the surjection $\mathcal{N}^\vee \to (\mathcal{N}')^\vee$. Then we
+have closed immersions
+$$
+C_ZX \to N' \to N
+$$
+Thus the desired relationship between the bivariant classes follows
+immediately from Lemma \ref{lemma-easy-virtual-class}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-excess}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Consider
+a cartesian diagram
+$$
+\xymatrix{
+Z' \ar[r] \ar[d]_g & X' \ar[d]^f \\
+Z \ar[r] & X
+}
+$$
+of schemes locally of finite type over $S$ whose horizontal arrows
+are closed immersions. Let $\mathcal{N}$, resp.\ $\mathcal{N}'$
be a virtual normal sheaf for $Z \subset X$, resp.\ $Z' \subset X'$.
+Assume given a short exact sequence
+$0 \to \mathcal{N}' \to g^*\mathcal{N} \to \mathcal{E} \to 0$
+of finite locally free modules on $Z'$ such that the diagram
+$$
+\xymatrix{
+g^*\mathcal{N}^\vee \ar[r] \ar[d] &
+(\mathcal{N}')^\vee \ar[d] \\
+g^*\mathcal{C}_{Z/X} \ar[r] &
+\mathcal{C}_{Z'/X'}
+}
+$$
+commutes. Then we have
+$$
+res(c(Z \to X, \mathcal{N})) =
+c_{top}(\mathcal{E}) \circ c(Z' \to X', \mathcal{N}')
+$$
+in $A^*(Z' \to X')^\wedge$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-construction-gysin} we have
+$res(c(Z \to X, \mathcal{N})) = c(Z' \to X', g^*\mathcal{N})$
+and the equality follows from Lemma \ref{lemma-gysin-decompose}.
+\end{proof}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
+locally of finite type over $S$. Let $\mathcal{N}$ be a virtual normal
+sheaf for a closed subscheme $Z$ of $X$. Let $Y \to X$ be a morphism
+which is locally of finite type. Assume $Z \times_X Y \to Y$ is a
+regular closed immersion, see
+Divisors, Section \ref{divisors-section-regular-immersions}.
+In this case the conormal sheaf $\mathcal{C}_{Z \times_X Y/Y}$ is a finite
+locally free $\mathcal{O}_{Z \times_X Y}$-module and we obtain a short
+exact sequence
+$$
+0 \to \mathcal{E}^\vee \to
+\mathcal{N}^\vee|_{Z \times_X Y} \to \mathcal{C}_{Z \times_X Y/Y} \to 0
+$$
The quotient $\mathcal{N}|_{Z \times_X Y} \to \mathcal{E}$ is called the
+{\it excess normal sheaf} of the situation.
+
+\begin{lemma}
+\label{lemma-gysin-fundamental}
+In the situation described just above assume $\dim_\delta(Y) = n$
and that $\mathcal{C}_{Z \times_X Y/Y}$ has constant rank $r$.
+Then
+$$
+c(Z \to X, \mathcal{N}) \cap [Y]_n =
+c_{top}(\mathcal{E}) \cap [Z \times_X Y]_{n - r}
+$$
+in $\CH_*(Z \times_X Y)$.
+\end{lemma}
+
+\begin{proof}
+The bivariant class $c_{top}(\mathcal{E}) \in A^*(Z \times_X Y)$ was
+defined in Remark \ref{remark-top-chern-class}.
+By Lemma \ref{lemma-construction-gysin} we may replace $X$ by $Y$.
+Thus we may assume $Z \to X$ is a regular closed immersion
+of codimension $r$, we have $\dim_\delta(X) = n$, and we have
+to show that $c(Z \to X, \mathcal{N}) \cap [X]_n =
+c_{top}(\mathcal{E}) \cap [Z]_{n - r}$ in $\CH_*(Z)$.
+By Lemma \ref{lemma-gysin-decompose} we may even assume
+$\mathcal{N}^\vee \to \mathcal{C}_{Z/X}$ is an isomorphism.
+In other words, we have to show
+$c(Z \to X, \mathcal{C}_{Z/X}^\vee) \cap [X]_n = [Z]_{n - r}$ in $\CH_*(Z)$.
+
+\medskip\noindent
+Let us trace through the steps in the definition of
+$c(Z \to X, \mathcal{C}_{Z/X}^\vee) \cap [X]_n$. Let
+$b : W \to \mathbf{P}^1_X$
+be the blowing up of $\infty(Z)$. We first have to compute
+$C \cap [X]_n$ where $C \in A^0(W_\infty \to X)$ is
+the class of Lemma \ref{lemma-gysin-at-infty}.
+To do this, note that $[W]_{n + 1}$
+is a cycle on $W$ whose restriction to $\mathbf{A}^1_X$ is
+equal to the flat pullback of $[X]_n$. Hence $C \cap [X]_n$
+is equal to $i_\infty^*[W]_{n + 1}$. Since $W_\infty$ is an
+effective Cartier divisor on $W$ we have
+$i_\infty^*[W]_{n + 1} = [W_\infty]_n$, see Lemma \ref{lemma-easy-gysin}.
+The restriction of this class to the open $C_ZX \subset W_\infty$
+is of course just $[C_ZX]_n$. Because $Z \subset X$ is regularly
+embedded we have
+$$
+\mathcal{C}_{Z/X, *} = \text{Sym}(\mathcal{C}_{Z/X})
+$$
+as graded $\mathcal{O}_Z$-algebras, see
+Divisors, Lemma \ref{divisors-lemma-quasi-regular-immersion}.
+Hence $p : N = C_ZX \to Z$ is the structure morphism of the
+vector bundle associated to the finite locally free module
+$\mathcal{C}_{Z/X}$ of rank $r$. Then it is clear that
+$p^*[Z]_{n - r} = [C_ZX]_n$ and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-easy}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
+locally of finite type over $S$. Let $\mathcal{N}$ be a virtual normal
+sheaf for a closed subscheme $Z$ of $X$. Let $Y \to X$ be a morphism
+which is locally of finite type. Given integers $r$, $n$ assume
+\begin{enumerate}
+\item $\mathcal{N}$ is locally free of rank $r$,
+\item every irreducible component of $Y$ has $\delta$-dimension $n$,
+\item $\dim_\delta(Z \times_X Y) \leq n - r$, and
+\item for $\xi \in Z \times_X Y$ with $\delta(\xi) = n - r$
+the local ring $\mathcal{O}_{Y, \xi}$ is Cohen-Macaulay.
+\end{enumerate}
+Then $c(Z \to X, \mathcal{N}) \cap [Y]_n = [Z \times_X Y]_{n - r}$
+in $\CH_{n - r}(Z \times_X Y)$.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense as $Z \times_X Y$ is a closed subscheme of $Y$.
+Because $\mathcal{N}$ has rank $r$ we know that
+$c(Z \to X, \mathcal{N}) \cap [Y]_n$ is in $\CH_{n - r}(Z \times_X Y)$.
Since $\dim_\delta(Z \times_X Y) \leq n - r$ the Chow group
$\CH_{n - r}(Z \times_X Y)$ is freely generated by the
+cycle classes of the irreducible components $W \subset Z \times_X Y$
+of $\delta$-dimension $n - r$. Let $\xi \in W$ be the generic point.
+By assumption (2) we see that $\dim(\mathcal{O}_{Y, \xi}) = r$.
+On the other hand, since $\mathcal{N}$ has rank $r$ and since
+$\mathcal{N}^\vee \to \mathcal{C}_{Z/X}$ is surjective, we see that
+the ideal sheaf of $Z$ is locally cut out by $r$ equations.
+Hence the quasi-coherent ideal sheaf $\mathcal{I} \subset \mathcal{O}_Y$
+of $Z \times_X Y$ in $Y$ is locally generated by $r$ elements.
+Since $\mathcal{O}_{Y, \xi}$ is Cohen-Macaulay of dimension $r$
+and since $\mathcal{I}_\xi$ is an ideal of definition (as $\xi$ is
+a generic point of $Z \times_X Y$) it follows that $\mathcal{I}_\xi$
+is generated by a regular sequence
+(Algebra, Lemma \ref{algebra-lemma-reformulate-CM}).
+By Divisors, Lemma \ref{divisors-lemma-Noetherian-scheme-regular-ideal}
+we see that $\mathcal{I}$ is generated by a regular sequence over
+an open neighbourhood $V \subset Y$ of $\xi$. By our description of
+$\CH_{n - r}(Z \times_X Y)$ it suffices to show that
+$c(Z \to X, \mathcal{N}) \cap [V]_n = [Z \times_X V]_{n - r}$
+in $\CH_{n - r}(Z \times_X V)$. This follows from
+Lemma \ref{lemma-gysin-fundamental}
+because the excess normal sheaf is $0$ over $V$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-agrees}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
+locally of finite type over $S$. Let $(\mathcal{L}, s, i : D \to X)$
+be a triple as in Definition \ref{definition-gysin-homomorphism}.
+The gysin homomorphism $i^*$ viewed as an element of $A^1(D \to X)$
+(see Lemma \ref{lemma-gysin-bivariant}) is the same as the bivariant class
+$c(D \to X, \mathcal{N}) \in A^1(D \to X)$
+constructed using $\mathcal{N} = i^*\mathcal{L}$
+viewed as a virtual normal sheaf for $D$ in $X$.
+\end{lemma}
+
+\begin{proof}
+We will use the criterion of Lemma \ref{lemma-bivariant-zero}.
+Thus we may assume that $X$ is an integral scheme and
+we have to show that $i^*[X]$ is equal to $c \cap [X]$.
+Let $n = \dim_\delta(X)$. As usual, there are two cases.
+
+\medskip\noindent
+If $X = D$, then we see that both classes are represented by
+$c_1(\mathcal{N}) \cap [X]_n$. See Lemma \ref{lemma-gysin-fundamental}
+and Definition \ref{definition-gysin-homomorphism}.
+
+\medskip\noindent
+If $D \not = X$, then $D \to X$ is an effective Cartier divisor
+and in particular a regular closed immersion of codimension $1$.
+Again by Lemma \ref{lemma-gysin-fundamental} we conclude
+$c(D \to X, \mathcal{N}) \cap [X]_n = [D]_{n - 1}$. The same
+is true by definition for the gysin homomorphism and we conclude
+once again.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-commutes}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be a scheme
+locally of finite type over $S$. Let $Z \subset X$ be a closed subscheme
+with virtual normal sheaf $\mathcal{N}$. Let $Y \to X$ be locally of
+finite type and $c \in A^*(Y \to X)$. Then $c$ and $c(Z \to X, \mathcal{N})$
+commute (Remark \ref{remark-bivariant-commute}).
+\end{lemma}
+
+\begin{proof}
+To check this we may use Lemma \ref{lemma-bivariant-zero}.
+Thus we may assume $X$ is an integral scheme and we have to show
+$c \cap c(Z \to X, \mathcal{N}) \cap [X] =
+c(Z \to X, \mathcal{N}) \cap c \cap [X]$ in $\CH_*(Z \times_X Y)$.
+
+\medskip\noindent
+If $Z = X$, then $c(Z \to X, \mathcal{N}) = c_{top}(\mathcal{N})$ by
+Lemma \ref{lemma-gysin-fundamental} which commutes
+with the bivariant class $c$, see Lemma \ref{lemma-cap-commutative-chern}.
+
+\medskip\noindent
+Assume that $Z$ is not equal to $X$. By Lemma \ref{lemma-bivariant-zero}
+it even suffices to prove the result after blowing up $X$ (in a nonzero ideal).
Let us blow up $X$ in the ideal sheaf of $Z$. This reduces us to the case
+where $Z$ is an effective Cartier divisor, see
+Divisors, Lemma
\ref{divisors-lemma-blowing-up-gives-effective-Cartier-divisor}.
+
+\medskip\noindent
+If $Z$ is an effective Cartier divisor, then we have
+$$
+c(Z \to X, \mathcal{N}) =
+c_{top}(\mathcal{E}) \circ i^*
+$$
+where $i^* \in A^1(Z \to X)$ is the gysin homomorphism
+associated to $i : Z \to X$ (Lemma \ref{lemma-gysin-bivariant})
+and $\mathcal{E}$ is the dual of the kernel of
+$\mathcal{N}^\vee \to \mathcal{C}_{Z/X}$, see
+Lemmas \ref{lemma-gysin-decompose} and \ref{lemma-gysin-agrees}.
+Then we conclude because Chern classes are in the center of the
+bivariant ring (in the strong sense formulated in
+Lemma \ref{lemma-cap-commutative-chern}) and $c$ commutes
+with the gysin homomorphism $i^*$ by definition of bivariant classes.
+\end{proof}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$ be an
+integral scheme locally of finite type over $S$ of $\delta$-dimension $n$.
+Let $Z \subset Y \subset X$ be closed subschemes which are both effective
Cartier divisors in $X$. Denote $o : Y \to C_Y X$ the zero section of the
normal cone of $Y$ in $X$. As $C_YX$ is a line bundle over $Y$
+we obtain a bivariant class $o^* \in A^1(Y \to C_YX)$, see
+Lemma \ref{lemma-gysin-bivariant}.
+
+\begin{lemma}
+\label{lemma-relation-normal-cones}
+With notation as above we have
+$$
+o^*[C_ZX]_n = [C_Z Y]_{n - 1}
+$$
+in $\CH_{n - 1}(Y \times_{o, C_Y X} C_ZX)$.
+\end{lemma}
+
+\begin{proof}
+Denote $W \to \mathbf{P}^1_X$ the blowing up of $\infty(Z)$ as in
+Section \ref{section-blowup-Z-first}.
+Similarly, denote $W' \to \mathbf{P}^1_X$ the blowing up of $\infty(Y)$.
+Since $\infty(Z) \subset \infty(Y)$ we get an opposite inclusion
+of ideal sheaves and hence a map of the graded algebras
+defining these blowups. This produces a rational morphism from $W$
+to $W'$ which in fact has a canonical representative
+$$
+W \supset U \longrightarrow W'
+$$
+See Constructions, Lemma \ref{constructions-lemma-morphism-relative-proj}.
+A local calculation (omitted) shows that $U$ contains at least all points
+of $W$ not lying over $\infty$ and the open subscheme $C_Z X$ of the special
+fibre. After shrinking $U$ we may assume $U_\infty = C_Z X$ and
+$\mathbf{A}^1_X \subset U$. Another local calculation (omitted)
+shows that the morphism $U_\infty \to W'_\infty$
+induces the canonical morphism $C_Z X \to C_Y X \subset W'_\infty$
+of normal cones induced by the inclusion of ideals sheaves
+coming from $Z \subset Y$. Denote $W'' \subset W$ the strict transform of
+$\mathbf{P}^1_Y \subset \mathbf{P}^1_X$ in $W$. Then $W''$ is the blowing
+up of $\mathbf{P}^1_Y$ in $\infty(Z)$ by
+Divisors, Lemma \ref{divisors-lemma-strict-transform}
+and hence $(W'' \cap U)_\infty = C_ZY$.
+
+\medskip\noindent
+Consider the effective Cartier divisor $i : \mathbf{P}^1_Y \to W'$
+from (\ref{item-find-Z-in-blowup}) and its associated bivariant class
+$i^* \in A^1(\mathbf{P}^1_Y \to W')$ from Lemma \ref{lemma-gysin-bivariant}.
+We similarly denote $(i'_\infty)^* \in A^1(W'_\infty \to W')$ the
gysin map at infinity. Observe that the restriction of $(i'_\infty)^*$
+(Remark \ref{remark-restriction-bivariant}) to $U$ is the restriction of
+$i_\infty^* \in A^1(W_\infty \to W)$ to $U$. On the one hand we have
+$$
+(i'_\infty)^* i^* [U]_{n + 1} =
+i_\infty^* i^* [U]_{n + 1} =
+i_\infty^* [(W'' \cap U)_\infty]_{n + 1} =
+[C_ZY]_n
+$$
+because $i_\infty^*$ kills all classes supported over $\infty$, because
+$i^*[U]$ and $[W'']$ agree as cycles over $\mathbf{A}^1$, and because
+$C_ZY$ is the fibre of $W'' \cap U$ over $\infty$.
+On the other hand, we have
+$$
+(i'_\infty)^* i^* [U]_{n + 1} =
+i^* i_\infty^*[U]_{n + 1} =
i^* [U_\infty]_n =
o^*[C_ZX]_n
+$$
+because $(i'_\infty)^*$ and $i^*$ commute
+(Lemma \ref{lemma-gysin-commutes-gysin})
+and because the fibre of $i : \mathbf{P}^1_Y \to W'$ over $\infty$
+factors as $o : Y \to C_YX$ and the open immersion $C_YX \to W'_\infty$.
+The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-composition}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $Z \subset Y \subset X$ be closed subschemes of a scheme locally
+of finite type over $S$.
+Let $\mathcal{N}$ be a virtual normal sheaf for $Z \subset X$.
+Let $\mathcal{N}'$ be a virtual normal sheaf for $Z \subset Y$.
+Let $\mathcal{N}''$ be a virtual normal sheaf for $Y \subset X$.
+Assume there is a commutative diagram
+$$
+\xymatrix{
+(\mathcal{N}'')^\vee|_Z \ar[r] \ar[d] &
+\mathcal{N}^\vee \ar[r] \ar[d] &
+(\mathcal{N}')^\vee \ar[d] \\
+\mathcal{C}_{Y/X}|_Z \ar[r] &
+\mathcal{C}_{Z/X} \ar[r] &
+\mathcal{C}_{Z/Y}
+}
+$$
+where the sequence at the bottom is from More on Morphisms, Lemma
+\ref{more-morphisms-lemma-transitivity-conormal} and the top
+sequence is a short exact sequence. Then
+$$
+c(Z \to X, \mathcal{N}) =
+c(Z \to Y, \mathcal{N}') \circ c(Y \to X, \mathcal{N}'')
+$$
+in $A^*(Z \to X)^\wedge$.
+\end{lemma}
+
+\begin{proof}
+Observe that the assumptions remain satisfied after any base change
+by a morphism $X' \to X$ which is locally of finite type (the short
+exact sequence of virtual normal sheaves is locally split hence
+remains exact after any base change). Thus to check the
+equality of bivariant classes we may use Lemma \ref{lemma-bivariant-zero}.
+Thus we may assume $X$ is an integral scheme and we have to show
+$c(Z \to X, \mathcal{N}) \cap [X] =
+c(Z \to Y, \mathcal{N}') \cap c(Y \to X, \mathcal{N}'') \cap [X]$.
+
+\medskip\noindent
+If $Y = X$, then we have
+\begin{align*}
+c(Z \to Y, \mathcal{N}') \cap c(Y \to X, \mathcal{N}'') \cap [X]
+& =
+c(Z \to Y, \mathcal{N}') \cap c_{top}(\mathcal{N}'') \cap [Y] \\
+& =
+c_{top}(\mathcal{N}''|_Z) \cap c(Z \to Y, \mathcal{N}') \cap [Y] \\
+& =
+c(Z \to X, \mathcal{N}) \cap [X]
+\end{align*}
+The first equality by Lemma \ref{lemma-gysin-decompose}.
+The second because Chern classes commute with bivariant classes
+(Lemma \ref{lemma-cap-commutative-chern}).
+The third equality by Lemma \ref{lemma-gysin-decompose}.
+
+\medskip\noindent
+Assume $Y \not = X$. By Lemma \ref{lemma-bivariant-zero}
+it even suffices to prove the result after blowing up $X$ in a nonzero ideal.
Let us blow up $X$ in the product of the ideal sheaf of $Y$ and the ideal
+sheaf of $Z$. This reduces us to the case where both $Y$ and $Z$ are
+effective Cartier divisors on $X$, see
+Divisors, Lemmas
+\ref{divisors-lemma-blowing-up-gives-effective-Cartier-divisor} and
+\ref{divisors-lemma-blowing-up-two-ideals}.
+
+\medskip\noindent
Denote $\mathcal{N}'' \to \mathcal{E}$ the surjection of finite locally
free $\mathcal{O}_Y$-modules such that
+$0 \to \mathcal{E}^\vee \to (\mathcal{N}'')^\vee \to \mathcal{C}_{Y/X} \to 0$
+is a short exact sequence. Then $\mathcal{N} \to \mathcal{E}|_Z$
+is a surjection as well. Denote $\mathcal{N}_1$ the finite locally free kernel
+of this map and observe that $\mathcal{N}^\vee \to \mathcal{C}_{Z/X}$
+factors through $\mathcal{N}_1$.
+By Lemma \ref{lemma-gysin-decompose} we have
+$$
+c(Y \to X, \mathcal{N}'') = c_{top}(\mathcal{E}) \circ
+c(Y \to X, \mathcal{C}_{Y/X}^\vee)
+$$
+and
+$$
+c(Z \to X, \mathcal{N}) = c_{top}(\mathcal{E}|_Z) \circ
+c(Z \to X, \mathcal{N}_1)
+$$
+Since Chern classes of bundles commute with bivariant classes
+(Lemma \ref{lemma-cap-commutative-chern})
+it suffices to prove
+$$
+c(Z \to X, \mathcal{N}_1) =
+c(Z \to Y, \mathcal{N}') \circ c(Y \to X, \mathcal{C}_{Y/X}^\vee)
+$$
in $A^*(Z \to X)$. Thus we may assume that $\mathcal{N}'' = \mathcal{C}_{Y/X}$.
+This reduces us to the case discussed in the next paragraph.
+
+\medskip\noindent
In this paragraph $Z$ and $Y$ are effective Cartier divisors on the integral
scheme $X$ of $\delta$-dimension $n$ and we have
$\mathcal{N}'' = \mathcal{C}_{Y/X}$.
+In this case $c(Y \to X, \mathcal{C}_{Y/X}^\vee) \cap [X] = [Y]_{n - 1}$ by
+Lemma \ref{lemma-gysin-fundamental}. Thus we have to prove that
+$c(Z \to X, \mathcal{N}) \cap [X] = c(Z \to Y, \mathcal{N}') \cap [Y]_{n - 1}$.
+Denote $N$ and $N'$ the vector bundles over $Z$ associated to
+$\mathcal{N}$ and $\mathcal{N}'$. Consider the commutative diagram
+$$
+\xymatrix{
+N' \ar[r]_i &
+N \ar[r] &
+(C_Y X) \times_Y Z \\
+C_Z Y \ar[r] \ar[u] &
+C_Z X \ar[u]
+}
+$$
+of cones and vector bundles over $Z$. Observe that $N'$ is a relative
+effective Cartier divisor in $N$ over $Z$ and that
+$$
+\xymatrix{
+N' \ar[d] \ar[r]_i & N \ar[d] \\
+Z \ar[r]^-o & (C_Y X) \times_Y Z
+}
+$$
+is cartesian where $o$ is the zero section of the line bundle
+$C_Y X$ over $Y$. By
+Lemma \ref{lemma-relation-normal-cones} we have $o^*[C_ZX]_n = [C_Z Y]_{n - 1}$
+in
+$$
+\CH_{n - 1}(Y \times_{o, C_Y X} C_ZX) =
+\CH_{n - 1}(Z \times_{o, (C_Y X) \times_Y Z} C_ZX)
+$$
+By the cartesian property of
+the square above this implies that
+$$
+i^*[C_ZX]_n = [C_Z Y]_{n - 1}
+$$
+in $\CH_{n - 1}(N')$. Now observe that
+$\gamma = c(Z \to X, \mathcal{N}) \cap [X]$ and
+$\gamma' = c(Z \to Y, \mathcal{N}') \cap [Y]_{n - 1}$
+are characterized by $p^*\gamma = [C_Z X]_n$ in $\CH_n(N)$
+and by $(p')^*\gamma' = [C_Z Y]_{n - 1}$ in $\CH_{n - 1}(N')$.
+Hence the proof is finished as $i^* \circ p^* = (p')^*$ by
+Lemma \ref{lemma-relative-effective-cartier}.
+\end{proof}
+
+\begin{remark}[Variant for immersions]
+\label{remark-gysin-for-immersion}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Let $i : Z \to X$ be an immersion of schemes.
+In this situation
+\begin{enumerate}
+\item the conormal sheaf $\mathcal{C}_{Z/X}$
+of $Z$ in $X$ is defined
+(Morphisms, Definition \ref{morphisms-definition-conormal-sheaf}),
+\item we say a pair consisting of a finite locally free $\mathcal{O}_Z$-module
+$\mathcal{N}$ and a surjection $\sigma : \mathcal{N}^\vee \to \mathcal{C}_{Z/X}$
is a virtual normal sheaf for the immersion $Z \to X$,
+\item choose an open subscheme $U \subset X$ such that $Z \to X$
+factors through a closed immersion $Z \to U$ and set
+$c(Z \to X, \mathcal{N}) = c(Z \to U, \mathcal{N}) \circ (U \to X)^*$.
+\end{enumerate}
+The bivariant class $c(Z \to X, \mathcal{N})$ does not depend on the choice
+of the open subscheme $U$. All of the lemmas have immediate counterparts
+for this slightly more general construction. We omit the details.
+\end{remark}
+
+
+
+
+
+
+
+\section{Calculating some classes}
+\label{section-calculate}
+
+\noindent
To get further we need to compute the values of some of the
classes we have constructed above.
+
+\begin{lemma}
+\label{lemma-compute-koszul}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$
+be a scheme locally of finite type over $S$. Let $\mathcal{E}$
+be a locally free $\mathcal{O}_X$-module of rank $r$.
+Then
+$$
+\prod\nolimits_{n = 0, \ldots, r} c(\wedge^n \mathcal{E})^{(-1)^n} =
+1 - (r - 1)! c_r(\mathcal{E}) + \ldots
+$$
+\end{lemma}
+
+\begin{proof}
+By the splitting principle we can turn this into a calculation in the
+polynomial ring on the Chern roots $x_1, \ldots, x_r$ of $\mathcal{E}$. See
+Section \ref{section-splitting-principle}. Observe that
+$$
+c(\wedge^n \mathcal{E}) =
+\prod\nolimits_{1 \leq i_1 < \ldots < i_n \leq r}
+(1 + x_{i_1} + \ldots + x_{i_n})
+$$
+Thus the logarithm of the left hand side of the equation in the lemma is
+$$
+-
+\sum\nolimits_{p \geq 1}
+\sum\nolimits_{n = 0}^r
+\sum\nolimits_{1 \leq i_1 < \ldots < i_n \leq r}
+\frac{(-1)^{p + n}}{p}(x_{i_1} + \ldots + x_{i_n})^p
+$$
+Please notice the minus sign in front. However, we have
+$$
+\sum\nolimits_{p \geq 0}
+\sum\nolimits_{n = 0}^r
+\sum\nolimits_{1 \leq i_1 < \ldots < i_n \leq r}
+\frac{(-1)^{p + n}}{p!}(x_{i_1} + \ldots + x_{i_n})^p
+=
+\prod (1 - e^{-x_i})
+$$
For fixed $p$ the inner double sums in the last two displayed formulas
agree up to replacing $\frac{1}{p}$ by $\frac{1}{p!}$. Since the right
hand side $\prod (1 - e^{-x_i})$ has no nonzero terms in degrees $< r$
and has $x_1 \ldots x_r$ as its term in degree $r$, the logarithm starts
in degree $r$ with the term
$-\frac{r!}{r} x_1 \ldots x_r = -(r - 1)! c_r(\mathcal{E})$.
Taking exponentials we see that the first nonzero term of our Chern class
beyond $1$ is in degree $r$ and equal to the predicted value.
+\end{proof}
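
\medskip\noindent
As a direct check of Lemma \ref{lemma-compute-koszul} (not needed in the
sequel), take $r = 2$ with Chern roots $x_1, x_2$. Then
$$
c(\mathcal{O}_X) c(\mathcal{E})^{-1} c(\wedge^2 \mathcal{E}) =
\frac{1 + x_1 + x_2}{(1 + x_1)(1 + x_2)} =
1 - x_1 x_2 + \ldots = 1 - 1!\, c_2(\mathcal{E}) + \ldots
$$
Indeed, the degree $1$ terms of numerator and denominator cancel, and the
degree $2$ term of the quotient is
$(x_1^2 + x_1x_2 + x_2^2) - (x_1 + x_2)^2 = -x_1x_2$.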
+
+\begin{lemma}
+\label{lemma-compute-section}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $X$
+be a scheme locally of finite type over $S$. Let $\mathcal{C}$
+be a locally free $\mathcal{O}_X$-module of rank $r$. Consider the
+morphisms
+$$
+X = \underline{\text{Proj}}_X(\mathcal{O}_X[T])
+\xrightarrow{i}
+E = \underline{\text{Proj}}_X(\text{Sym}^*(\mathcal{C})[T])
+\xrightarrow{\pi}
+X
+$$
+Then $c_t(i_*\mathcal{O}_X) = 0$ for $t = 1, \ldots, r - 1$ and in
+$A^0(C \to E)$ we have
+$$
+p^* \circ \pi_* \circ c_r(i_*\mathcal{O}_X) = (-1)^{r - 1}(r - 1)! j^*
+$$
+where
+$j : C \to E$ and $p : C \to X$ are the inclusion and structure
+morphism of the vector bundle
+$C = \underline{\Spec}(\text{Sym}^*(\mathcal{C}))$.
+\end{lemma}
+
+\begin{proof}
+The canonical map $\pi^*\mathcal{C} \to \mathcal{O}_E(1)$ vanishes
+exactly along $i(X)$. Hence the Koszul complex on the map
+$$
+\pi^*\mathcal{C} \otimes \mathcal{O}_E(-1) \to \mathcal{O}_E
+$$
+is a resolution of $i_*\mathcal{O}_X$. In particular we see that
+$i_*\mathcal{O}_X$ is a perfect object of $D(\mathcal{O}_E)$
+whose Chern classes are defined. The vanishing of $c_t(i_*\mathcal{O}_X)$
for $t = 1, \ldots, r - 1$ follows from Lemma \ref{lemma-compute-koszul}.
+This lemma also gives
+$$
+c_r(i_*\mathcal{O}_X) = - (r - 1)!
+c_r(\pi^*\mathcal{C} \otimes \mathcal{O}_E(-1))
+$$
+On the other hand, by Lemma \ref{lemma-chern-classes-dual} we have
+$$
+c_r(\pi^*\mathcal{C} \otimes \mathcal{O}_E(-1)) =
+(-1)^r c_r(\pi^*\mathcal{C}^\vee \otimes \mathcal{O}_E(1))
+$$
+and $\pi^*\mathcal{C}^\vee \otimes \mathcal{O}_E(1)$ has a section $s$
+vanishing exactly along $i(X)$.
+
+\medskip\noindent
+After replacing $X$ by a scheme locally of finite type over $X$,
+it suffices to prove that both sides of the equality have the
+same effect on an element $\alpha \in \CH_*(E)$. Since $C \to X$
+is a vector bundle, every cycle class on $C$ is of the form $p^*\beta$
+for some $\beta \in \CH_*(X)$ (Lemma \ref{lemma-vectorbundle}).
+Hence by Lemma \ref{lemma-restrict-to-open}
+we can write $\alpha = \pi^*\beta + \gamma$ where $\gamma$
+is supported on $E \setminus C$. Using the equalities above
+it suffices to show that
+$$
+p^*(\pi_*(c_r(\pi^*\mathcal{C}^\vee \otimes \mathcal{O}_E(1)) \cap [W])) =
+j^*[W]
+$$
+when $W \subset E$ is an integral closed subscheme which
+is either (a) disjoint from $C$ or (b) is of the form $W = \pi^{-1}Y$
+for some integral closed subscheme $Y \subset X$.
+Using the section $s$ and Lemma \ref{lemma-top-chern-class} we find
+in case (a) $c_r(\pi^*\mathcal{C}^\vee \otimes \mathcal{O}_E(1)) \cap [W] = 0$
+and in case (b)
+$c_r(\pi^*\mathcal{C}^\vee \otimes \mathcal{O}_E(1)) \cap [W] = [i(Y)]$.
+The result follows easily from this; details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-agreement-with-loc-chern}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $i : Z \to X$
+be a regular closed immersion of codimension $r$
+between schemes locally of finite type over $S$.
+Let $\mathcal{N} = \mathcal{C}_{Z/X}^\vee$ be the normal sheaf. If $X$
+is quasi-compact (or has quasi-compact irreducible components), then
+$c_t(Z \to X, i_*\mathcal{O}_Z) = 0$ for $t = 1, \ldots, r - 1$ and
+$$
+c_r(Z \to X, i_*\mathcal{O}_Z) = (-1)^{r - 1} (r - 1)! c(Z \to X, \mathcal{N})
+\quad\text{in}\quad
+A^r(Z \to X)
+$$
+where $c_t(Z \to X, i_*\mathcal{O}_Z)$
+is the localized Chern class
+of Definition \ref{definition-localized-chern}.
+\end{lemma}
+
+\begin{proof}
+For any $x \in Z$ we can choose an affine open neighbourhood
+$\Spec(A) \subset X$ such that $Z \cap \Spec(A) = V(f_1, \ldots, f_r)$
+where $f_1, \ldots, f_r \in A$ is a regular sequence.
+See Divisors, Definition \ref{divisors-definition-regular-immersion} and
+Lemma \ref{divisors-lemma-Noetherian-scheme-regular-ideal}.
+Then we see that the Koszul complex on $f_1, \ldots, f_r$ is
+a resolution of $A/(f_1, \ldots, f_r)$ for example by
+More on Algebra, Lemma \ref{more-algebra-lemma-regular-koszul-regular}.
+Hence $A/(f_1, \ldots, f_r)$ is perfect as an $A$-module.
+It follows that $F = i_*\mathcal{O}_Z$ is a perfect object of
+$D(\mathcal{O}_X)$ whose restriction to $X \setminus Z$ is zero.
+The assumption that $X$ is quasi-compact (or has quasi-compact
+irreducible components) means that the localized Chern classes
+$c_t(Z \to X, i_*\mathcal{O}_Z)$ are defined, see
+Situation \ref{situation-loc-chern} and
+Definition \ref{definition-localized-chern}. All in all
+we conclude that the statement makes sense.
+
+\medskip\noindent
+Denote $b : W \to \mathbf{P}^1_X$ the blowing up in $\infty(Z)$
+as in Section \ref{section-blowup-Z-first}. By (\ref{item-find-Z-in-blowup})
+we have a closed immersion
+$$
+i' : \mathbf{P}^1_Z \longrightarrow W
+$$
+We claim that $Q = i'_*\mathcal{O}_{\mathbf{P}^1_Z}$
+is a perfect object of
+$D(\mathcal{O}_W)$ and that $F$ and $Q$ satisfy the assumptions of
+Lemma \ref{lemma-independent-loc-chern-bQ}.
+
+\medskip\noindent
+Assume the claim. The output of Lemma \ref{lemma-independent-loc-chern-bQ}
+is that we have
+$$
+c_p(Z \to X, F) = c'_p(Q) = (E \to Z)_* \circ c'_p(Q|_E) \circ C
+$$
+for all $p \geq 1$. Observe that $Q|_E$ is equal to the pushforward of
+the structure sheaf of $Z$ via the morphism $Z \to E$ which is the
+base change of $i'$ by $\infty$.
+Thus the vanishing of $c_t(Z \to X, F)$ for $1 \leq t \leq r - 1$
follows from Lemma \ref{lemma-compute-section} applied to $E \to Z$.
+Because $\mathcal{C}_{Z/X} = \mathcal{N}^\vee$
+is locally free the bivariant class $c(Z \to X, \mathcal{N})$
+is characterized by the relation
+$$
+j^* \circ C = p^* \circ c(Z \to X, \mathcal{N})
+$$
+where $j : C_ZX \to W_\infty$ and $p : C_ZX \to Z$ are the given maps.
+(Recall $C \in A^0(W_\infty \to X)$ is the class of
+Lemma \ref{lemma-gysin-at-infty}.)
+Thus the displayed equation in the statement of the lemma
+follows from the corresponding equation in Lemma \ref{lemma-compute-section}.
+
+\medskip\noindent
+Proof of the claim. Let $A$ and $f_1, \ldots, f_r$ be as above.
+Consider the affine open $\Spec(A[s]) \subset \mathbf{P}^1_X$
+as in Section \ref{section-blowup-Z-first}. Recall that $s = 0$
+defines $(\mathbf{P}^1_X)_\infty$ over this open. Hence over
+$\Spec(A[s])$ we are blowing up in the ideal generated by
+the regular sequence $s, f_1, \ldots, f_r$. By More on Algebra, Lemma
+\ref{more-algebra-lemma-blowup-regular-sequence} the $r + 1$
+affine charts are global complete intersections over $A[s]$.
+The chart corresponding to the affine blowup algebra
+$$
+A[s][f_1/s, \ldots, f_r/s] = A[s, y_1, \ldots, y_r]/(sy_i - f_i)
+$$
+contains $i'(Z \cap \Spec(A))$ as the closed subscheme cut out by
+$y_1, \ldots, y_r$. Since $y_1, \ldots, y_r, sy_1 - f_1, \ldots, sy_r - f_r$
+is a regular sequence in the polynomial ring $A[s, y_1, \ldots, y_r]$
+we find that $i'$ is a regular immersion. Some details omitted.
+As above we conclude that $Q = i'_*\mathcal{O}_{\mathbf{P}^1_Z}$
+is a perfect object of $D(\mathcal{O}_W)$. All the
+other assumptions on $F$ and $Q$ in Lemma \ref{lemma-independent-loc-chern-bQ}
+(and Lemma \ref{lemma-localized-chern-pre}) are immediately verified.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-actual-computation}
+In the situation of Lemma \ref{lemma-agreement-with-loc-chern}
+say $\dim_\delta(X) = n$. Then we have
+\begin{enumerate}
+\item $c_t(Z \to X, i_*\mathcal{O}_Z) \cap [X]_n = 0$ for
+$t = 1, \ldots, r - 1$,
+\item $c_r(Z \to X, i_*\mathcal{O}_Z) \cap [X]_n =
+(-1)^{r - 1}(r - 1)![Z]_{n - r}$,
+\item $ch_t(Z \to X, i_*\mathcal{O}_Z) \cap [X]_n = 0$ for
+$t = 0, \ldots, r - 1$, and
+\item $ch_r(Z \to X, i_*\mathcal{O}_Z) \cap [X]_n = [Z]_{n - r}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (1) and (2) follow immediately from
+Lemma \ref{lemma-agreement-with-loc-chern}
+combined with Lemma \ref{lemma-gysin-fundamental}.
+Then we deduce parts (3) and (4) using the relationship
+between $ch_p = (1/p!)P_p$ and $c_p$ given in
+Lemma \ref{lemma-loc-chern-character}. (Namely,
+$(-1)^{r - 1}(r - 1)!ch_r = c_r$ provided
+$c_1 = c_2 = \ldots = c_{r - 1} = 0$.)
+\end{proof}
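
\noindent
The relation $(-1)^{r - 1}(r - 1)! ch_r = c_r$ used in the proof above
can be checked directly, via the splitting principle, from Newton's
identity for the power sums $P_p$ in the Chern roots:
$$
P_r - c_1 P_{r - 1} + c_2 P_{r - 2} - \ldots +
(-1)^{r - 1} c_{r - 1} P_1 + (-1)^r r c_r = 0
$$
If $c_1 = \ldots = c_{r - 1} = 0$ this reduces to
$P_r = (-1)^{r - 1} r c_r$, and hence
$ch_r = P_r/r! = (-1)^{r - 1} c_r/(r - 1)!$.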
+
+
+
+
+
+
+
+\section{An Adams operator}
+\label{section-adams}
+
+\noindent
We do the minimal amount of work to define the second Adams operator.
+Let $X$ be a scheme. Recall that $\textit{Vect}(X)$ denotes the
+category of finite locally free $\mathcal{O}_X$-modules.
+Moreover, recall that we have constructed a zeroth $K$-group
+$K_0(\textit{Vect}(X))$ associated to this category in
+Derived Categories of Schemes, Section \ref{perfect-section-K-groups}.
+Finally, $K_0(\textit{Vect}(X))$ is a ring, see
+Derived Categories of Schemes, Remark \ref{perfect-remark-K-ring}.
+
+\begin{lemma}
+\label{lemma-second-adams-operator}
+Let $X$ be a scheme. There is a ring map
+$$
+\psi^2 :
+K_0(\textit{Vect}(X))
+\longrightarrow
+K_0(\textit{Vect}(X))
+$$
+which sends $[\mathcal{L}]$ to $[\mathcal{L}^{\otimes 2}]$
+when $\mathcal{L}$ is invertible and is compatible with pullbacks.
+\end{lemma}
+
+\begin{proof}
+Let $X$ be a scheme.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module.
+We will consider the element
+$$
+\psi^2(\mathcal{E}) = [\text{Sym}^2(\mathcal{E})] - [\wedge^2(\mathcal{E})]
+$$
+of $K_0(\textit{Vect}(X))$.
+
+\medskip\noindent
+Let $X$ be a scheme and consider a short exact sequence
+$$
+0 \to \mathcal{E} \to \mathcal{F} \to \mathcal{G} \to 0
+$$
+of finite locally free $\mathcal{O}_X$-modules. Let us think of
+this as a filtration on $\mathcal{F}$ with $2$ steps. The induced
+filtration on $\text{Sym}^2(\mathcal{F})$ has $3$ steps with
+graded pieces $\text{Sym}^2(\mathcal{E})$, $\mathcal{E} \otimes \mathcal{F}$,
+and $\text{Sym}^2(\mathcal{G})$. Hence
+$$
+[\text{Sym}^2(\mathcal{F})] =
+[\text{Sym}^2(\mathcal{E})] +
+[\mathcal{E} \otimes \mathcal{F}] +
+[\text{Sym}^2(\mathcal{G})]
+$$
+In exactly the same manner one shows that
+$$
+[\wedge^2(\mathcal{F})] =
+[\wedge^2(\mathcal{E})] +
+[\mathcal{E} \otimes \mathcal{F}] +
+[\wedge^2(\mathcal{G})]
+$$
+Thus we see that
+$\psi^2(\mathcal{F}) = \psi^2(\mathcal{E}) + \psi^2(\mathcal{G})$.
+We conclude that we obtain a well defined additive map
+$\psi^2 : K_0(\textit{Vect}(X)) \to K_0(\textit{Vect}(X))$.
+
+\medskip\noindent
+It is clear that this map commutes with pullbacks.
+
+\medskip\noindent
+We still have to show that $\psi^2$ is a ring map.
+Let $X$ be a scheme and let $\mathcal{E}$ and $\mathcal{F}$
+be finite locally free $\mathcal{O}_X$-modules.
+Observe that there is a short exact sequence
+$$
+0 \to \wedge^2(\mathcal{E}) \otimes \wedge^2(\mathcal{F}) \to
+\text{Sym}^2(\mathcal{E} \otimes \mathcal{F}) \to
+\text{Sym}^2(\mathcal{E}) \otimes \text{Sym}^2(\mathcal{F}) \to 0
+$$
+where the first map sends $(e \wedge e') \otimes (f \wedge f')$ to
+$(e \otimes f)(e' \otimes f') - (e' \otimes f)(e \otimes f')$ and
+the second map sends $(e \otimes f) (e' \otimes f')$ to $ee' \otimes ff'$.
+Similarly, there is a short exact sequence
+$$
+0 \to \text{Sym}^2(\mathcal{E}) \otimes \wedge^2(\mathcal{F}) \to
+\wedge^2(\mathcal{E} \otimes \mathcal{F}) \to
+\wedge^2(\mathcal{E}) \otimes \text{Sym}^2(\mathcal{F}) \to 0
+$$
+where the first map sends $e e' \otimes f \wedge f'$ to
+$(e \otimes f) \wedge (e' \otimes f') + (e' \otimes f) \wedge (e \otimes f')$
+and the second map sends
+$(e \otimes f) \wedge (e' \otimes f')$ to
+$(e \wedge e') \otimes (f f')$.
As above this proves that the map $\psi^2$ is multiplicative
and it is clear that $\psi^2(1) = 1$. Finally, if $\mathcal{L}$ is
invertible, then $\text{Sym}^2(\mathcal{L}) = \mathcal{L}^{\otimes 2}$
and $\wedge^2(\mathcal{L}) = 0$, so indeed
$\psi^2([\mathcal{L}]) = [\mathcal{L}^{\otimes 2}]$.
This concludes the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-adams-derived}
+Let $X$ be a scheme such that $2$ is invertible on $X$.
+Then the Adams operator $\psi^2$ can be defined on the $K$-group
+$K_0(X) = K_0(D_{perf}(\mathcal{O}_X))$
+(Derived Categories of Schemes, Definition \ref{perfect-definition-K-group})
+in a straightforward manner.
+Namely, given a perfect complex $L$ on $X$ we get an action
+of the group $\{\pm 1\}$ on $L \otimes^\mathbf{L} L$ by switching
+the factors. Then we can set
+$$
+\psi^2(L) = [(L \otimes^\mathbf{L} L)^+] -
+[(L \otimes^\mathbf{L} L)^-]
+$$
+where $(-)^+$ denotes taking invariants and $(-)^-$ denotes taking
+anti-invariants (suitably defined).
+Using exactness of taking invariants and anti-invariants one can
+argue similarly to the proof of Lemma \ref{lemma-second-adams-operator}
+to show that this is well defined.
+When $2$ is not invertible on $X$ the situation is a good deal more
+complicated and another approach has to be used.
+\end{remark}
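
\noindent
For example, with $2$ invertible on $X$ as in the remark, if
$L = \mathcal{E}[0]$ for a finite locally free $\mathcal{O}_X$-module
$\mathcal{E}$, then
$(L \otimes^\mathbf{L} L)^+ = \text{Sym}^2(\mathcal{E})[0]$ and
$(L \otimes^\mathbf{L} L)^- = \wedge^2(\mathcal{E})[0]$, so that this
definition of $\psi^2(L)$ recovers the class
$[\text{Sym}^2(\mathcal{E})] - [\wedge^2(\mathcal{E})]$
used in the proof of Lemma \ref{lemma-second-adams-operator}.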
+
+\begin{lemma}
+\label{lemma-minus-adams-operator}
+Let $X$ be a scheme. There is a ring map
+$\psi^{-1} : K_0(\textit{Vect}(X)) \to K_0(\textit{Vect}(X))$
+which sends $[\mathcal{E}]$ to $[\mathcal{E}^\vee]$
+when $\mathcal{E}$ is finite locally free
+and is compatible with pullbacks.
+\end{lemma}
+
+\begin{proof}
+The only thing to check is that taking duals is compatible with
+short exact sequences and with pullbacks. This is clear.
+\end{proof}
+
+\begin{remark}
+\label{remark-chern-classes-K}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. The Chern class
+map defines a canonical map
+$$
+c : K_0(\textit{Vect}(X)) \longrightarrow \prod\nolimits_{i \geq 0} A^i(X)
+$$
+by sending a generator $[\mathcal{E}]$ on the left hand side to
+$c(\mathcal{E}) = 1 + c_1(\mathcal{E}) + c_2(\mathcal{E}) + \ldots$
+and extending multiplicatively. Thus $-[\mathcal{E}]$ is sent to
+the formal inverse $c(\mathcal{E})^{-1}$ which is why we have the
+infinite product on the right hand side. This is well defined by
+Lemma \ref{lemma-additivity-chern-classes}.
+\end{remark}
+
+\begin{remark}
+\label{remark-chern-character-K}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. The Chern character
+map defines a canonical ring map
+$$
+ch : K_0(\textit{Vect}(X)) \longrightarrow
+\prod\nolimits_{i \geq 0} A^i(X) \otimes \mathbf{Q}
+$$
+by sending a generator $[\mathcal{E}]$ on the left hand side to
+$ch(\mathcal{E})$ and extending additively. This is well defined
+by Lemma \ref{lemma-chern-character-additive} and a ring homomorphism by
+Lemma \ref{lemma-chern-character-multiplicative}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-adams-and-chern}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. If $\psi^2$ is
+as in Lemma \ref{lemma-second-adams-operator} and $c$ and $ch$ are as in
+Remarks \ref{remark-chern-classes-K} and \ref{remark-chern-character-K}
+then we have $c_i(\psi^2(\alpha)) = 2^i c_i(\alpha)$ and
+$ch_i(\psi^2(\alpha)) = 2^i ch_i(\alpha)$
+for all $\alpha \in K_0(\textit{Vect}(X))$.
+\end{lemma}
+
+\begin{proof}
+Observe that the map $\prod_{i \geq 0} A^i(X) \to \prod_{i \geq 0} A^i(X)$
+multiplying by $2^i$ on $A^i(X)$ is a ring map. Hence, since $\psi^2$
+is also a ring map, it suffices to prove the formulas for additive generators
+of $K_0(\textit{Vect}(X))$. Thus we may assume $\alpha = [\mathcal{E}]$
+for some finite locally free $\mathcal{O}_X$-module $\mathcal{E}$.
+By construction of the Chern classes of $\mathcal{E}$ we immediately
+reduce to the case where $\mathcal{E}$ has constant rank $r$, see
+Remark \ref{remark-extend-to-finite-locally-free}.
+In this case, we can choose a projective smooth morphism $p : P \to X$
+such that restriction $A^*(X) \to A^*(P)$ is injective
+and such that $p^*\mathcal{E}$ has a finite filtration whose
+graded parts are invertible $\mathcal{O}_P$-modules $\mathcal{L}_j$, see
+Lemma \ref{lemma-splitting-principle}. Then
+$[p^*\mathcal{E}] = \sum [\mathcal{L}_j]$ and hence
$\psi^2([p^*\mathcal{E}]) = \sum [\mathcal{L}_j^{\otimes 2}]$
+by definition of $\psi^2$. Setting $x_j = c_1(\mathcal{L}_j)$
+we have
+$$
+c(\alpha) = \prod (1 + x_j)
+\quad\text{and}\quad
+c(\psi^2(\alpha)) = \prod (1 + 2 x_j)
+$$
+in $\prod A^i(P)$ and we have
+$$
+ch(\alpha) = \sum \exp(x_j)
+\quad\text{and}\quad
+ch(\psi^2(\alpha)) = \sum \exp(2 x_j)
+$$
+in $\prod A^i(P)$. From these formulas the desired result follows.
+\end{proof}
+
+\begin{remark}
+\label{remark-perf-Z-cohomology-K}
+Let $X$ be a locally Noetherian scheme.
+Let $Z \subset X$ be a closed subscheme. Consider the strictly
+full, saturated, triangulated subcategory
+$$
+D_{Z, perf}(\mathcal{O}_X) \subset D(\mathcal{O}_X)
+$$
+consisting of perfect complexes of $\mathcal{O}_X$-modules
whose cohomology sheaves are set-theoretically supported on $Z$.
+Denote $\textit{Coh}_Z(X) \subset \textit{Coh}(X)$
+the Serre subcategory of coherent $\mathcal{O}_X$-modules whose set theoretic
+support is contained in $Z$. Observe that given
+$E \in D_{Z, perf}(\mathcal{O}_X)$ Zariski locally on $X$
+only a finite number of the cohomology sheaves $H^i(E)$ are nonzero
(and they are all set-theoretically supported on $Z$).
+Hence we can define
+$$
+K_0(D_{Z, perf}(\mathcal{O}_X))
+\longrightarrow
+K_0(\textit{Coh}_Z(X)) = K'_0(Z)
+$$
+(equality by Lemma \ref{lemma-K-coherent-supported-on-closed}) by the rule
+$$
+E \longmapsto
+[\bigoplus\nolimits_{i \in \mathbf{Z}} H^{2i}(E)] -
+[\bigoplus\nolimits_{i \in \mathbf{Z}} H^{2i + 1}(E)]
+$$
+This works because given a distinguished triangle in
+$D_{Z, perf}(\mathcal{O}_X)$ we have a long exact sequence of
+cohomology sheaves.
+\end{remark}
+
+\begin{remark}
+\label{remark-perf-Z-regular}
+Let $X$, $Z$, $D_{Z, perf}(\mathcal{O}_X)$ be as in
+Remark \ref{remark-perf-Z-cohomology-K}.
+Assume $X$ is Noetherian regular of finite dimension.
+Then there is a canonical map
+$$
+K_0(\textit{Coh}(Z)) \longrightarrow K_0(D_{Z, perf}(\mathcal{O}_X))
+$$
+defined as follows. For any coherent $\mathcal{O}_Z$-module
+$\mathcal{F}$ denote $\mathcal{F}[0]$ the object of $D(\mathcal{O}_X)$
+which has $\mathcal{F}$ in degree $0$ and is zero in other degrees.
+Then $\mathcal{F}[0]$ is a perfect complex on $X$ by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-on-regular}.
+Hence $\mathcal{F}[0]$ is an object of $D_{Z, perf}(\mathcal{O}_X)$.
+On the other hand, given a short exact sequence
+$0 \to \mathcal{F} \to \mathcal{F}' \to \mathcal{F}'' \to 0$ of
+coherent $\mathcal{O}_Z$-modules we obtain a distinguished triangle
+$\mathcal{F}[0] \to \mathcal{F}'[0] \to \mathcal{F}''[0] \to \mathcal{F}[1]$,
+see Derived Categories, Section \ref{derived-section-canonical-delta-functor}.
+This shows that we obtain a map
+$K_0(\textit{Coh}(Z)) \to K_0(D_{Z, perf}(\mathcal{O}_X))$
+by sending $[\mathcal{F}]$ to $[\mathcal{F}[0]]$
+with apologies for the horrendous notation.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-perf-Z-regular}
+Let $X$ be a Noetherian regular scheme of finite dimension.
+Let $Z \subset X$ be a closed subscheme. The maps constructed
+in Remarks \ref{remark-perf-Z-cohomology-K} and
+\ref{remark-perf-Z-regular} are mutually inverse and we get
+$K'_0(Z) = K_0(D_{Z, perf}(\mathcal{O}_X))$.
+\end{lemma}
+
+\begin{proof}
+Clearly the composition
+$$
+K_0(\textit{Coh}(Z)) \longrightarrow
+K_0(D_{Z, perf}(\mathcal{O}_X)) \longrightarrow
+K_0(\textit{Coh}(Z))
+$$
+is the identity map. Thus it suffices to show the first arrow is
+surjective. Let $E$ be an object of $D_{Z, perf}(\mathcal{O}_X)$.
+We are going to use without further mention that $E$ is bounded
+with coherent cohomology and that any such complex is a perfect complex.
+Using the distinguished triangles of canonical truncations the
+reader sees that
+$$
+[E] = \sum (-1)^i[H^i(E)[0]]
+$$
+in $K_0(D_{Z, perf}(\mathcal{O}_X))$. Then it suffices to
show that $[\mathcal{F}[0]]$ is in the image of the map
for any coherent $\mathcal{O}_X$-module $\mathcal{F}$ set-theoretically
supported on $Z$. Since we can find a finite filtration on
+$\mathcal{F}$ whose subquotients are $\mathcal{O}_Z$-modules,
+the proof is complete.
+\end{proof}
+
+\begin{remark}
+\label{remark-localized-chern-classes-K}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $Z \subset X$ be a closed subscheme and let
+$D_{Z, perf}(\mathcal{O}_X)$ be as in
+Remark \ref{remark-perf-Z-cohomology-K}.
+If $X$ is quasi-compact (or more generally the irreducible
+components of $X$ are quasi-compact), then
+the localized Chern classes define a canonical map
+$$
+c(Z \to X, -) : K_0(D_{Z, perf}(\mathcal{O}_X)) \longrightarrow
+A^0(X) \times \prod\nolimits_{i \geq 1} A^i(Z \to X)
+$$
+by sending a generator $[E]$ on the left hand side to
+$$
+c(Z \to X, E) = 1 + c_1(Z \to X, E) + c_2(Z \to X, E) + \ldots
+$$
+and extending multiplicatively (with product on the right hand
+side as in Remark \ref{remark-ring-loc-classes}).
+The quasi-compactness condition on $X$ guarantees that the
localized Chern classes are defined (Situation \ref{situation-loc-chern} and
+Definition \ref{definition-localized-chern})
and that these localized Chern classes convert distinguished triangles into
the corresponding products in the bivariant Chow rings
+(Lemma \ref{lemma-additivity-loc-chern-c}).
+\end{remark}
+
+\begin{remark}
+\label{remark-localized-chern-character-K}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $Z \subset X$ be a closed subscheme and let
+$D_{Z, perf}(\mathcal{O}_X)$ be as in
+Remark \ref{remark-perf-Z-cohomology-K}.
+If $X$ is quasi-compact (or more generally the irreducible
+components of $X$ are quasi-compact), then
+the localized Chern character defines a canonical additive
+and multiplicative map
+$$
+ch(Z \to X, -) : K_0(D_{Z, perf}(\mathcal{O}_X)) \longrightarrow
+\prod\nolimits_{i \geq 0} A^i(Z \to X)
+$$
+by sending a generator $[E]$ on the left hand side to
+$ch(Z \to X, E)$ and extending additively.
+The quasi-compactness condition on $X$ guarantees that the
localized Chern character is defined (Situation \ref{situation-loc-chern} and
+Definition \ref{definition-localized-chern})
and that these localized Chern characters
convert distinguished triangles into the corresponding
sums in the bivariant Chow rings
+(Lemma \ref{lemma-additivity-loc-chern-P}).
+The multiplication on
+$K_0(D_{Z, perf}(X))$ is defined using derived tensor product
+(Derived Categories of Schemes, Remark \ref{perfect-remark-perf-Z})
+hence $ch(Z \to X, \alpha \beta) = ch(Z \to X, \alpha) ch(Z \to X, \beta)$ by
+Lemma \ref{lemma-loc-chern-tensor-product}.
+\end{remark}
+
+\begin{remark}
+\label{remark-chern-classes-agree}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$ and assume $X$
+is quasi-compact (or more generally the irreducible components
+of $X$ are quasi-compact). With $Z = X$ and notation as in
+Remarks \ref{remark-localized-chern-classes-K} and
+\ref{remark-localized-chern-character-K}
+we have $D_{Z, perf}(\mathcal{O}_X) = D_{perf}(\mathcal{O}_X)$
+and we see that
+$$
+K_0(D_{Z, perf}(\mathcal{O}_X)) = K_0(D_{perf}(\mathcal{O}_X)) = K_0(X)
+$$
+see
+Derived Categories of Schemes, Definition \ref{perfect-definition-K-group}.
+Hence we get
+$$
+c : K_0(X) \to \prod A^i(X)
+\quad\text{and}\quad
+ch : K_0(X) \to \prod A^i(X)
+$$
+as a special case of Remarks \ref{remark-localized-chern-classes-K} and
+\ref{remark-localized-chern-character-K}. Of course, instead we could
+have just directly used Definition \ref{definition-defined-on-perfect} and
+Lemmas \ref{lemma-additivity-on-perfect} and
+\ref{lemma-chern-classes-perfect-tensor-product} to construct these maps
(as this is immediately seen to produce the same classes).
+Recall that there is a canonical map $K_0(\textit{Vect}(X)) \to K_0(X)$
+which sends a finite locally free module to itself viewed
+as a perfect complex (placed in degree $0$), see
+Derived Categories of Schemes, Section \ref{perfect-section-K-groups}.
+Then the diagram
+$$
+\xymatrix{
K_0(\textit{Vect}(X)) \ar[rd]_c \ar[rr] & &
+K_0(D_{perf}(\mathcal{O}_X)) = K_0(X) \ar[ld]^c \\
+& \prod A^i(X)
+}
+$$
+commutes where the south-east arrow is the one constructed in
+Remark \ref{remark-chern-classes-K}. Similarly, the diagram
+$$
+\xymatrix{
K_0(\textit{Vect}(X)) \ar[rd]_{ch} \ar[rr] & &
+K_0(D_{perf}(\mathcal{O}_X)) = K_0(X) \ar[ld]^{ch} \\
+& \prod A^i(X)
+}
+$$
+commutes where the south-east arrow is the one constructed in
+Remark \ref{remark-chern-character-K}.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+\section{Chow groups and K-groups revisited}
+\label{section-chow-and-K-II}
+
+\noindent
+This section is the continuation of Section \ref{section-chow-and-K}.
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$. The K-group
+$K'_0(X) = K_0(\textit{Coh}(X))$ of coherent sheaves on $X$
+has a canonical increasing filtration
+$$
+F_kK'_0(X) =
\Im\Big(K_0(\textit{Coh}_{\leq k}(X)) \to K_0(\textit{Coh}(X))\Big)
+$$
+This is called the filtration by dimension of supports. Observe that
+$$
+\text{gr}_k K'_0(X) \subset K'_0(X)/F_{k - 1}K'_0(X) =
+K_0(\textit{Coh}(X)/\textit{Coh}_{\leq k - 1}(X))
+$$
+where the equality holds
+by Homology, Lemma \ref{homology-lemma-serre-subcategory-K-groups}.
+The discussion in Remark \ref{remark-good-cases-K-A} shows
+that there are canonical maps
+$$
+\CH_k(X) \longrightarrow \text{gr}_k K'_0(X)
+$$
+defined by sending the class of an integral closed subscheme
+$Z \subset X$ of $\delta$-dimension $k$ to the class of
+$[\mathcal{O}_Z]$ on the right hand side.
+
+\begin{proposition}
+\label{proposition-K-tensor-Q}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Assume given a
+closed immersion $X \to Y$ of schemes locally of finite type over $S$
+with $Y$ regular, quasi-compact, affine diagonal, and
+$\delta_{Y/S} : Y \to \mathbf{Z}$ bounded. Then the composition
+$$
+K'_0(X) \to
+K_0(D_{X, perf}(\mathcal{O}_Y)) \to
+A^*(X \to Y) \to
+\CH_*(X)
+$$
+of the map $\mathcal{F} \mapsto \mathcal{F}[0]$ from
+Remark \ref{remark-perf-Z-regular}, the map $ch(X \to Y, -)$ from
+Remark \ref{remark-localized-chern-character-K}, and
+the map $c \mapsto c \cap [Y]$ induces an isomorphism
+$$
+K'_0(X) \otimes \mathbf{Q}
+\longrightarrow
+\CH_*(X) \otimes \mathbf{Q}
+$$
+which depends on the choice of $Y$. Moreover, the canonical map
+$$
+\CH_k(X) \otimes \mathbf{Q}
+\longrightarrow
+\text{gr}_k K'_0(X) \otimes \mathbf{Q}
+$$
+(see above) is an isomorphism of $\mathbf{Q}$-vector spaces for all
+$k \in \mathbf{Z}$.
+\end{proposition}
+
+\begin{proof}
+Since $Y$ is regular of finite dimension, the construction in
+Remark \ref{remark-perf-Z-regular} applies.
+Since $Y$ is quasi-compact, the construction in
+Remark \ref{remark-localized-chern-character-K} applies.
+We have that $Y$ is locally equidimensional
+(Lemma \ref{lemma-locally-equidimensional}) and
+thus the ``fundamental cycle'' $[Y]$ is defined
+as an element of $\CH_*(Y)$, see Remark \ref{remark-fundamental-class}.
+Combining this with the map $\CH_k(X) \to \text{gr}_kK'_0(X)$
+constructed above we see that it suffices to prove
+\begin{enumerate}
+\item If $\mathcal{F}$ is a coherent $\mathcal{O}_X$-module
+whose support has $\delta$-dimension $\leq k$, then
+the composition above sends $[\mathcal{F}]$ into
+$\bigoplus_{k' \leq k} \CH_{k'}(X) \otimes \mathbf{Q}$.
+\item If $Z \subset X$ is an integral closed subscheme
+of $\delta$-dimension $k$, then the composition above
+sends $[\mathcal{O}_Z]$ to an element whose degree $k$
+part is the class of $[Z]$ in $\CH_k(X) \otimes \mathbf{Q}$.
+\end{enumerate}
+Namely, if this holds, then our maps induce maps
$\text{gr}_k K'_0(X) \otimes \mathbf{Q} \to \CH_k(X) \otimes \mathbf{Q}$
+which are inverse to the canonical maps
+$\CH_k(X) \otimes \mathbf{Q} \to \text{gr}_k K'_0(X) \otimes \mathbf{Q}$
+given above the proposition.
+
+\medskip\noindent
+Given a coherent $\mathcal{O}_X$-module $\mathcal{F}$
+the composition above sends $[\mathcal{F}]$ to
+$$
+ch(X \to Y, \mathcal{F}[0]) \cap [Y] \in \CH_*(X) \otimes \mathbf{Q}
+$$
+If $\mathcal{F}$ is (set theoretically) supported on a closed subscheme
+$Z \subset X$, then we have
+$$
+ch(X \to Y, \mathcal{F}[0]) = (Z \to X)_* \circ ch(Z \to Y, \mathcal{F}[0])
+$$
+by Lemma \ref{lemma-loc-chern-shrink-Z}. We conclude that in this
+case we end up in the image of $\CH_*(Z) \to \CH_*(X)$. Hence
+we get condition (1).
+
+\medskip\noindent
+Let $Z \subset X$ be an integral closed subscheme of $\delta$-dimension $k$.
+The composition above sends $[\mathcal{O}_Z]$ to the element
+$$
+ch(X \to Y, \mathcal{O}_Z[0]) \cap [Y] =
+(Z \to X)_* ch(Z \to Y, \mathcal{O}_Z[0]) \cap [Y]
+$$
+by the same argument as above.
+Thus it suffices to prove that the degree $k$ part of
+$ch(Z \to Y, \mathcal{O}_Z[0]) \cap [Y] \in
+\CH_*(Z) \otimes \mathbf{Q}$ is $[Z]$.
+Since $\CH_k(Z) = \mathbf{Z}$, in order
+to prove this we may replace $Y$ by an open neighbourhood of the
+generic point $\xi$ of $Z$. Since the maximal ideal of the regular
local ring $\mathcal{O}_{Y, \xi}$ is generated by a
+regular sequence (Algebra, Lemma \ref{algebra-lemma-regular-ring-CM})
+we may assume the ideal of $Z$ is generated by a regular sequence, see
+Divisors, Lemma \ref{divisors-lemma-Noetherian-scheme-regular-ideal}.
+Thus we deduce the result from Lemma \ref{lemma-actual-computation}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Rational intersection products on regular schemes}
+\label{section-intersection-regular}
+
+\noindent
+We will show that $\CH_*(X) \otimes \mathbf{Q}$ has an intersection
+product if $X$ is Noetherian, regular, finite dimensional, with
+affine diagonal. The basis for the construction is the following result
+(which is a corollary of the proposition in the previous section).
+
+\begin{lemma}
+\label{lemma-K-tensor-Q}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a quasi-compact regular scheme of finite type over $S$ with
+affine diagonal and $\delta_{X/S} : X \to \mathbf{Z}$ bounded.
+Then the composition
+$$
+K_0(\textit{Vect}(X)) \otimes \mathbf{Q}
+\longrightarrow
+A^*(X) \otimes \mathbf{Q}
+\longrightarrow
+\CH_*(X) \otimes \mathbf{Q}
+$$
+of the map $ch$ from Remark \ref{remark-chern-character-K} and
+the map $c \mapsto c \cap [X]$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+We have $K'_0(X) = K_0(X) = K_0(\textit{Vect}(X))$ by
+Derived Categories of Schemes, Lemmas \ref{perfect-lemma-Kprime-K} and
+\ref{perfect-lemma-K-is-old-K}.
+By Remark \ref{remark-chern-classes-agree}
+the composition given agrees with the map of
+Proposition \ref{proposition-K-tensor-Q} for $X = Y$.
+Thus the result follows from the proposition.
+\end{proof}
+
+\noindent
+Let $X, S, \delta$ be as in Lemma \ref{lemma-K-tensor-Q}.
+For simplicity let us work with cycles of a given codimension, see
+Section \ref{section-cycles-codimension}.
+Let $[X]$ be the fundamental cycle of $X$, see
+Remark \ref{remark-fundamental-class}.
Pick $\alpha \in \CH^i(X)$ and
$\beta \in \CH^j(X)$. By the lemma we can find a unique
+$\alpha' \in K_0(\textit{Vect}(X)) \otimes \mathbf{Q}$ with
+$ch(\alpha') \cap [X] = \alpha$.
+Of course this means that $ch_{i'}(\alpha') \cap [X] = 0$
+if $i' \not = i$ and $ch_i(\alpha') \cap [X] = \alpha$.
+By Lemma \ref{lemma-adams-and-chern} we see that
+$\alpha'' = 2^{-i}\psi^2(\alpha')$ is another solution.
+By uniqueness we get $\alpha'' = \alpha'$ and we conclude
that $ch_{i'}(\alpha') = 0$ in $A^{i'}(X) \otimes \mathbf{Q}$
+for $i' \not = i$. Then we can define
+$$
+\alpha \cdot \beta = ch(\alpha') \cap \beta =
+ch_i(\alpha') \cap \beta
+$$
+in $\CH^{i + j}(X) \otimes \mathbf{Q}$ by the property of $\alpha'$
+we observed above. This is a symmetric pairing: namely, if we pick
+$\beta' \in K_0(\textit{Vect}(X)) \otimes \mathbf{Q}$ lifting
+$\beta$, then we get
+$$
+\alpha \cdot \beta = ch(\alpha') \cap \beta =
+ch(\alpha') \cap ch(\beta') \cap [X]
+$$
+and we know that Chern classes commute.
+The intersection product is associative for the same reason
+$$
+(\alpha \cdot \beta) \cdot \gamma =
+ch(\alpha') \cap ch(\beta') \cap ch(\gamma') \cap [X]
+$$
+because we know composition of bivariant classes is associative.
+Perhaps a better way to formulate this is as follows: there is
+a unique commutative, associative intersection product on
+$\CH^*(X) \otimes \mathbf{Q}$ compatible with grading such that
the map
+$K_0(\textit{Vect}(X)) \otimes \mathbf{Q} \to \CH^*(X) \otimes \mathbf{Q}$
+is an isomorphism of rings.
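
\noindent
As a sanity check, the fundamental cycle
$[X] \in \CH^0(X) \otimes \mathbf{Q}$ is a unit for this product:
it lifts to $\alpha' = [\mathcal{O}_X]$, because
$ch([\mathcal{O}_X]) = 1$ and hence
$ch([\mathcal{O}_X]) \cap [X] = [X]$. Consequently
$$
[X] \cdot \beta = ch([\mathcal{O}_X]) \cap \beta = \beta
$$
for every $\beta \in \CH^*(X) \otimes \mathbf{Q}$.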
+
+
+
+
+
+
+\section{Gysin maps for local complete intersection morphisms}
+\label{section-koszul}
+
+\noindent
+Before reading this section, we suggest the reader read up on
+regular immersions
+(Divisors, Section \ref{divisors-section-regular-immersions}) and
+local complete intersection morphisms
+(More on Morphisms, Section \ref{more-morphisms-section-lci}).
+
+\medskip\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $i : X \to Y$ be a
+regular immersion\footnote{See
+Divisors, Definition \ref{divisors-definition-regular-immersion}.
+Observe that regular immersions are the same thing as
+Koszul-regular immersions or quasi-regular immersions
+for locally Noetherian schemes, see
+Divisors, Lemma \ref{divisors-lemma-regular-immersion-noetherian}.
+We will use this without further mention in this section.}
+of schemes locally of finite type over $S$.
+In particular, the conormal sheaf $\mathcal{C}_{X/Y}$ is finite locally free
+(see Divisors, Lemma \ref{divisors-lemma-quasi-regular-immersion}). Hence the
+normal sheaf
+$$
+\mathcal{N}_{X/Y} = \SheafHom_{\mathcal{O}_X}(\mathcal{C}_{X/Y}, \mathcal{O}_X)
+$$
+is finite locally free as well and we have a surjection
+$\mathcal{N}_{X/Y}^\vee \to \mathcal{C}_{X/Y}$ (in fact the canonical map
+is an isomorphism, as $\mathcal{C}_{X/Y}$ is finite locally free, and an
+isomorphism is in particular a surjection).
+The construction in Section \ref{section-gysin-higher-codimension}
+gives us a canonical bivariant class
+$$
+i^! = c(X \to Y, \mathcal{N}_{X/Y}) \in A^*(X \to Y)^\wedge
+$$
+We need a couple of lemmas about this notion.
+
+\begin{lemma}
+\label{lemma-composition-regular-immersion}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $i : X \to Y$ and $j : Y \to Z$ be regular immersions
+of schemes locally of finite type over $S$. Then
+$j \circ i$ is a regular immersion and
+$(j \circ i)^! = i^! \circ j^!$.
+\end{lemma}
+
+\begin{proof}
+The first statement is
+Divisors, Lemma \ref{divisors-lemma-composition-regular-immersion}.
+By Divisors, Lemma \ref{divisors-lemma-transitivity-conormal-quasi-regular}
+there is a short exact sequence
+$$
+0 \to
+i^*(\mathcal{C}_{Y/Z}) \to
+\mathcal{C}_{X/Z} \to
+\mathcal{C}_{X/Y} \to 0
+$$
+Thus the result follows from the more general Lemma \ref{lemma-gysin-composition}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-section-smooth}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $p : P \to X$ be a smooth morphism of schemes locally of finite type
+over $S$ and let $s : X \to P$ be a section. Then $s$ is a
+regular immersion and $1 = s^! \circ p^*$ in $A^*(X)^\wedge$
+where $p^* \in A^*(P \to X)^\wedge$ is the bivariant class
+of Lemma \ref{lemma-flat-pullback-bivariant}.
+\end{lemma}
+
+\begin{proof}
+The first statement is Divisors, Lemma
+\ref{divisors-lemma-section-smooth-regular-immersion}.
+It suffices to show that $s^! \cap p^*[Z] = [Z]$ in
+$\CH_*(X)$ for any integral closed subscheme $Z \subset X$
+as the assumptions are preserved by base change along morphisms $X' \to X$
+locally of finite type. After replacing $P$ by an open neighbourhood
+of $s(Z)$ we may assume $P \to X$ is smooth of fixed relative dimension $r$.
+Say $\dim_\delta(Z) = n$. Then every irreducible component of $p^{-1}(Z)$
+has dimension $r + n$ and $p^*[Z]$ is given by $[p^{-1}(Z)]_{n + r}$.
+Observe that $s(X) \cap p^{-1}(Z) = s(Z)$ scheme theoretically. Hence by the
+same reference as used above $s(X) \cap p^{-1}(Z)$ is a closed subscheme
+regularly embedded in $p^{-1}(Z)$ of codimension $r$.
+We conclude by Lemma \ref{lemma-gysin-fundamental}.
+\end{proof}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Consider a
+commutative diagram
+$$
+\xymatrix{
+X \ar[rd]_f \ar[rr]_i & & P \ar[ld]^g \\
+& Y
+}
+$$
+of schemes locally of finite type over $S$ such that $g$ is smooth
+and $i$ is a regular immersion. Combining the bivariant class
+$i^!$ discussed above with the bivariant class $g^* \in A^*(P \to Y)^\wedge$
+of Lemma \ref{lemma-flat-pullback-bivariant} we obtain
+$$
+f^! = i^! \circ g^* \in A^*(X \to Y)
+$$
+Observe that the morphism $f$ is a local complete intersection morphism, see
+More on Morphisms, Definition \ref{more-morphisms-definition-lci}.
+Conversely, if $f : X \to Y$ is a local complete intersection morphism
+of locally Noetherian schemes and $f = g \circ i$ with $g$ smooth, then
+$i$ is a regular immersion. We claim that our construction of $f^!$
+only depends on the morphism $f$ and not on the choice of factorization
+$f = g \circ i$.
+
+\begin{lemma}
+\label{lemma-lci-gysin-well-defined}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a local complete intersection morphism
+of schemes locally of finite type over $S$.
+The bivariant class $f^!$ is independent of the choice of
+the factorization $f = g \circ i$ with $g$ smooth (provided
+one exists).
+\end{lemma}
+
+\begin{proof}
+Given a second such factorization $f = g' \circ i'$ we can
+consider the smooth morphism $g'' : P \times_Y P' \to Y$, the
+immersion $i'' : X \to P \times_Y P'$ and the factorization
+$f = g'' \circ i''$. Thus we may assume that we have a diagram
+$$
+\xymatrix{
+& P' \ar[d]^p \ar[rd]^{g'} \\
+X \ar[r]^i \ar[ru]^{i'} & P \ar[r]^g & Y
+}
+$$
+where $p$ is a smooth morphism. Then $(g')^* = p^* \circ g^*$
+(Lemma \ref{lemma-compose-flat-pullback}) and hence it suffices
+to show that $i^! = (i')^! \circ p^*$
+in $A^*(X \to P)$. Consider the commutative diagram
+$$
+\xymatrix{
+& X \times_P P' \ar[d]^{\overline{p}} \ar[r]_j & P' \ar[d]^p \\
+X \ar[ru]^s \ar[r]^1 & X \ar[r]^i & P
+}
+$$
+where $s = (1, i')$. Then $s$ and $j$ are regular immersions
+(by Divisors, Lemma \ref{divisors-lemma-section-smooth-regular-immersion}
+and Divisors, Lemma \ref{divisors-lemma-flat-base-change-regular-immersion})
+and $i' = j \circ s$. By Lemma \ref{lemma-composition-regular-immersion}
+we have $(i')^! = s^! \circ j^!$.
+Since the square is cartesian, the bivariant class $j^!$
+is the restriction (Remark \ref{remark-restriction-bivariant})
+of $i^!$ to $P'$, see Lemma \ref{lemma-construction-gysin}.
+Since bivariant classes commute with flat pullbacks
+we find $j^! \circ p^* = \overline{p}^* \circ i^!$.
+Thus it suffices to show that $s^! \circ \overline{p}^* = \text{id}$
+which is done in Lemma \ref{lemma-section-smooth}.
+\end{proof}
+
+\begin{definition}
+\label{definition-lci-gysin}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a local complete intersection morphism
+of schemes locally of finite type over $S$. We say
+{\it the gysin map for $f$ exists} if we can write
+$f = g \circ i$ with $g$ smooth and $i$ an immersion.
+In this case we define the
+{\it gysin map} $f^! = i^! \circ g^* \in A^*(X \to Y)$ as above.
+\end{definition}
+
+\noindent
+It follows from the definition that for a regular immersion
+this agrees with the construction earlier and for a smooth
+morphism this agrees with flat pullback. In fact, this agreement
+holds for all syntomic morphisms.
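+
+\medskip\noindent
+For example (a standard factorization, included as an illustration and
+not needed in what follows): if $X$ and $Y$ are smooth over a field $k$,
+then any $k$-morphism $f : X \to Y$ factors as
+$$
+X \xrightarrow{\Gamma_f} X \times_k Y \xrightarrow{\text{pr}_2} Y
+$$
+where the graph $\Gamma_f$ is a section of the smooth projection
+$\text{pr}_1 : X \times_k Y \to X$ and hence a regular immersion
+(Divisors, Lemma \ref{divisors-lemma-section-smooth-regular-immersion})
+and $\text{pr}_2$ is smooth. Thus $f$ is a local complete intersection
+morphism and the gysin map exists for $f$.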
+
+\begin{lemma}
+\label{lemma-lci-gysin-flat}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a local complete intersection morphism
+of schemes locally of finite type over $S$. If the gysin map
+exists for $f$ and $f$ is flat, then $f^!$ is equal to the
+bivariant class of Lemma \ref{lemma-flat-pullback-bivariant}.
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $f = g \circ i$ with $i : X \to P$
+an immersion and $g : P \to Y$ smooth. Observe that for
+any morphism $Y' \to Y$ which is locally of finite type,
+the base changes $f'$, $g'$, $i'$ of $f$, $g$, $i$ satisfy the same
+assumptions (see Morphisms, Lemmas \ref{morphisms-lemma-base-change-smooth}
+and \ref{morphisms-lemma-base-change-syntomic} and
+More on Morphisms, Lemma \ref{more-morphisms-lemma-flat-lci}).
+Thus we reduce to proving that $f^*[Y] = i^!(g^*[Y])$ in case $Y$
+is integral, see Lemma \ref{lemma-bivariant-zero}. Set $n = \dim_\delta(Y)$.
+After decomposing $X$ and $P$ into connected components we
+may assume $f$ is flat of relative dimension $s$ and
+$g$ is smooth of relative dimension $t$.
+Then $f^*[Y] = [X]_{n + s}$ and $g^*[Y] = [P]_{n + t}$.
+On the other hand $i$ is a regular immersion of codimension $t - s$.
+Thus $i^![P]_{n + t} = [X]_{n + s}$ (Lemma \ref{lemma-gysin-fundamental})
+and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-gysin-composition}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ and $g : Y \to Z$ be local complete intersection morphisms
+of schemes locally of finite type over $S$. Assume the gysin
+map exists for $g \circ f$ and $g$. Then the gysin map exists for $f$
+and $(g \circ f)^! = f^! \circ g^!$.
+\end{lemma}
+
+\begin{proof}
+Observe that $g \circ f$ is a local complete intersection morphism
+by More on Morphisms, Lemma \ref{more-morphisms-lemma-composition-lci}
+and hence the statement of the lemma makes sense.
+If $X \to P$ is an immersion of $X$ into a scheme $P$ smooth over $Z$
+then $X \to P \times_Z Y$ is an immersion of $X$ into a scheme smooth
+over $Y$. This proves the first assertion of the lemma.
+Let $Y \to P'$ be an immersion of $Y$ into a scheme $P'$ smooth over $Z$.
+Consider the commutative diagram
+$$
+\xymatrix{
+X \ar[r] \ar[d] &
+P \times_Z Y \ar[r]_a \ar[ld]^p &
+P \times_Z P' \ar[ld]^q \\
+Y \ar[r]_b \ar[d] &
+P' \ar[ld] \\
+Z
+}
+$$
+Here the horizontal arrows are regular immersions, the south-west arrows
+are smooth, and the square is cartesian. Whence
+$a^! \circ q^* = p^* \circ b^!$ as bivariant classes commute
+with flat pullback. Combining this fact with
+Lemmas \ref{lemma-composition-regular-immersion} and
+\ref{lemma-compose-flat-pullback}
+the reader finds that the statement of the lemma holds true.
+Small detail omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-gysin-commutes}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Consider a commutative diagram
+$$
+\xymatrix{
+X'' \ar[d] \ar[r] &
+X' \ar[d] \ar[r] &
+X \ar[d]^f \\
+Y'' \ar[r] &
+Y' \ar[r] &
+Y
+}
+$$
+of schemes locally of finite type over $S$ with both squares cartesian.
+Assume $f : X \to Y$ is a local complete intersection morphism
+such that the gysin map exists for $f$. Let $c \in A^*(Y'' \to Y')$. Denote
+$res(f^!) \in A^*(X' \to Y')$ the restriction of $f^!$ to $Y'$
+(Remark \ref{remark-restriction-bivariant}). Then $c$ and $res(f^!)$ commute
+(Remark \ref{remark-bivariant-commute}).
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $f = g \circ i$ with $g$ smooth and $i$ an immersion.
+Since $f^! = i^! \circ g^!$ it suffices to prove the lemma for $g^!$
+(which is given by flat pullback) and for $i^!$. The result for flat pullback
+is part of the definition of a bivariant class. The case of $i^!$ follows
+immediately from Lemma \ref{lemma-gysin-commutes}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-gysin-easy}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Consider a cartesian diagram
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r] &
+X \ar[d]^f \\
+Y' \ar[r] &
+Y
+}
+$$
+of schemes locally of finite type over $S$. Assume
+\begin{enumerate}
+\item $f$ is a local complete intersection morphism and
+the gysin map exists for $f$,
+\item $X$, $X'$, $Y$, $Y'$ satisfy the equivalent conditions of
+Lemma \ref{lemma-locally-equidimensional},
+\item for $x' \in X'$ with images $x$, $y'$, and $y$
+in $X$, $Y'$, and $Y$ we have $n_{x'} - n_{y'} = n_x - n_y$
+where $n_{x'}$, $n_x$, $n_{y'}$, and $n_y$ are as in the lemma, and
+\item for every generic point $\xi \in X'$ the local ring
+$\mathcal{O}_{Y', f'(\xi)}$ is Cohen-Macaulay.
+\end{enumerate}
+Then $f^![Y'] = [X']$ where $[Y']$ and $[X']$ are as in
+Remark \ref{remark-fundamental-class}.
+\end{lemma}
+
+\begin{proof}
+Recall that $n_{x'}$ is the common value of $\delta(\xi)$
+where $\xi$ is the generic point of an irreducible component
+passing through $x'$. Moreover, the functions
+$x' \mapsto n_{x'}$, $x \mapsto n_x$, $y' \mapsto n_{y'}$, and
+$y \mapsto n_y$ are locally constant. Let $X'_n$, $X_n$, $Y'_n$,
+and $Y_n$ be the open and closed subschemes of $X'$, $X$, $Y'$, and
+$Y$ where the function has value $n$. Recall that
+$[X'] = \sum [X'_n]_n$ and $[Y'] = \sum [Y'_n]_n$.
+Having said this, it is clear that to prove the lemma we
+may replace $X'$ by one of its connected components
+and $X$, $Y'$, $Y$ by the connected component that
+it maps into. Then we know that $X'$, $X$, $Y'$, and
+$Y$ are $\delta$-equidimensional in the sense that
+each irreducible component has the same $\delta$-dimension.
+Say $n'$, $n$, $m'$, and $m$ are these common values
+for $X'$, $X$, $Y'$, and $Y$. Assumption (3)
+means that $n' - m' = n - m$.
+
+\medskip\noindent
+Choose a factorization $f = g \circ i$ where $i : X \to P$
+is an immersion and $g : P \to Y$ is smooth. As $X$ is connected,
+we see that the relative dimension of $P \to Y$ at points of $i(X)$
+is constant. Hence after replacing $P$ by an open neighbourhood
+of $i(X)$, we may assume that $P \to Y$ has constant relative dimension
+and $i : X \to P$ is a closed immersion.
+Denote $g' : Y' \times_Y P \to Y'$ the base change of $g$ and denote
+$i' : X' \to Y' \times_Y P$ the base change of $i$.
+It is clear that $g^*[Y] = [P]$ and $(g')^*[Y'] = [Y' \times_Y P]$.
+Finally, if $\xi' \in X'$ is a generic point, then
+$\mathcal{O}_{Y' \times_Y P, i'(\xi')}$ is Cohen-Macaulay.
+Namely, the local ring map
+$\mathcal{O}_{Y', f'(\xi')} \to \mathcal{O}_{Y' \times_Y P, i'(\xi')}$
+is flat with regular fibre
+(see Algebra, Section \ref{algebra-section-smooth-overview}),
+a regular local ring is Cohen-Macaulay
+(Algebra, Lemma \ref{algebra-lemma-regular-ring-CM}),
+$\mathcal{O}_{Y', f'(\xi')}$ is Cohen-Macaulay by assumption
+(4) and we get what we want from
+Algebra, Lemma \ref{algebra-lemma-CM-goes-up}.
+Thus we reduce to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $f$ is a regular closed immersion and $X'$, $X$, $Y'$, and
+$Y$ are $\delta$-equidimensional of $\delta$-dimensions
+$n'$, $n$, $m'$, and $m$ and $m' - n' = m - n$.
+In this case we obtain the result immediately from
+Lemma \ref{lemma-gysin-easy}.
+\end{proof}
+
+\begin{remark}
+\label{remark-gysin-chern-classes}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a local complete intersection morphism
+of schemes locally of finite type over $S$. Assume the gysin
+map exists for $f$. Then
+$f^! \circ c_i(\mathcal{E}) = c_i(f^*\mathcal{E}) \circ f^!$
+and similarly for the Chern character, see
+Lemma \ref{lemma-lci-gysin-commutes}.
+If $X$ and $Y$ satisfy the equivalent conditions of
+Lemma \ref{lemma-locally-equidimensional} and $Y$ is Cohen-Macaulay
+(for example), then $f^![Y] = [X]$ by Lemma \ref{lemma-lci-gysin-easy}.
+In this case we also get
+$f^!(c_i(\mathcal{E}) \cap [Y]) = c_i(f^*\mathcal{E}) \cap [X]$
+and similarly for the Chern character.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-compare-gysin-base-change}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Consider a cartesian square
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_{g'} &
+X \ar[d]^f \\
+Y' \ar[r]^g &
+Y
+}
+$$
+of schemes locally of finite type over $S$. Assume
+\begin{enumerate}
+\item both $f$ and $f'$ are local complete intersection morphisms, and
+\item the gysin map exists for $f$
+\end{enumerate}
+Then $\mathcal{C} = \Ker(H^{-1}((g')^*\NL_{X/Y}) \to H^{-1}(\NL_{X'/Y'}))$
+is a finite locally free $\mathcal{O}_{X'}$-module, the gysin map
+exists for $f'$, and we have
+$$
+res(f^!) = c_{top}(\mathcal{C}^\vee) \circ (f')^!
+$$
+in $A^*(X' \to Y')$.
+\end{lemma}
+
+\begin{proof}
+The fact that $\mathcal{C}$ is finite locally free follows immediately
+from More on Algebra, Lemma \ref{more-algebra-lemma-base-change-lci-bis}.
+Choose a factorization $f = g \circ i$ with $g : P \to Y$ smooth and $i$
+an immersion. Then we can factor $f' = g' \circ i'$ where $g' : P' \to Y'$
+and $i' : X' \to P'$ the base changes. Picture
+$$
+\xymatrix{
+X' \ar[r] \ar[d] &
+P' \ar[r] \ar[d] &
+Y' \ar[d] \\
+X \ar[r] &
+P \ar[r] &
+Y
+}
+$$
+In particular, we see that the gysin map exists for $f'$. By
+More on Morphisms, Lemma \ref{more-morphisms-lemma-get-NL}
+we have
+$$
+\NL_{X/Y} = \left( \mathcal{C}_{X/P} \to i^*\Omega_{P/Y} \right)
+$$
+where $\mathcal{C}_{X/P}$ is the conormal sheaf of the embedding $i$.
+Similarly for the primed version. We have
+$(g')^*i^*\Omega_{P/Y} = (i')^*\Omega_{P'/Y'}$ because
+$\Omega_{P/Y}$ pulls back to $\Omega_{P'/Y'}$ by
+Morphisms, Lemma \ref{morphisms-lemma-base-change-differentials}.
+Also, recall that $(g')^*\mathcal{C}_{X/P} \to \mathcal{C}_{X'/P'}$
+is surjective, see
+Morphisms, Lemma \ref{morphisms-lemma-conormal-functorial-flat}.
+We deduce that the sheaf $\mathcal{C}$ is canonically
+isomorphic to the kernel of the map
+$(g')^*\mathcal{C}_{X/P} \to \mathcal{C}_{X'/P'}$
+of finite locally free modules. Recall that $i^!$ is defined
+using $\mathcal{N} = \mathcal{C}_{X/P}^\vee$ and similarly
+for $(i')^!$. Thus we have
+$$
+res(i^!) = c_{top}(\mathcal{C}^\vee) \circ (i')^!
+$$
+in $A^*(X' \to P')$ by an application of Lemma \ref{lemma-gysin-excess}.
+Since finally we have $f^! = i^! \circ g^*$,
+$(f')^! = (i')^! \circ (g')^*$, and $(g')^* = res(g^*)$ we conclude.
+\end{proof}
+
+\begin{lemma}[Blow up formula]
+\label{lemma-blow-up-formula}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $i : Z \to X$ be a regular closed immersion of schemes
+locally of finite type over $S$. Let $b : X' \to X$ be the
+blowing up with center $Z$. Picture
+$$
+\xymatrix{
+E \ar[r]_j \ar[d]_\pi & X' \ar[d]^b \\
+Z \ar[r]^i & X
+}
+$$
+Assume that the gysin map exists for $b$. Then we have
+$$
+res(b^!) = c_{top}(\mathcal{F}^\vee) \circ \pi^*
+$$
+in $A^*(E \to Z)$ where $\mathcal{F}$ is the kernel of the canonical map
+$\pi^*\mathcal{C}_{Z/X} \to \mathcal{C}_{E/X'}$.
+\end{lemma}
+
+\begin{proof}
+Observe that the morphism $b$ is a local complete intersection morphism
+by More on Algebra, Lemma \ref{more-algebra-lemma-blowup-regular-sequence}
+and hence the statement makes sense. Since $Z \to X$ is a regular
+immersion (and hence a fortiori quasi-regular) we see that $\mathcal{C}_{Z/X}$
+is finite locally free and the map
+$\text{Sym}^*(\mathcal{C}_{Z/X}) \to \mathcal{C}_{Z/X, *}$
+is an isomorphism, see
+Divisors, Lemma \ref{divisors-lemma-quasi-regular-immersion}.
+Since $E = \text{Proj}(\mathcal{C}_{Z/X, *})$ we conclude
+that $E = \mathbf{P}(\mathcal{C}_{Z/X})$
+is a projective space bundle over $Z$.
+Thus $E \to Z$ is smooth and certainly a local complete intersection
+morphism. Thus Lemma \ref{lemma-compare-gysin-base-change}
+applies and we see that
+$$
+res(b^!) = c_{top}(\mathcal{C}^\vee) \circ \pi^!
+$$
+with $\mathcal{C}$ as in the statement there.
+Of course $\pi^* = \pi^!$ by Lemma \ref{lemma-lci-gysin-flat}.
+It remains to show that $\mathcal{F}$ is equal to
+the kernel $\mathcal{C}$ of the map
+$H^{-1}(j^*\NL_{X'/X}) \to H^{-1}(\NL_{E/Z})$.
+
+\medskip\noindent
+Since $E \to Z$ is smooth we have $H^{-1}(\NL_{E/Z}) = 0$, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-NL-smooth}.
+Hence it suffices to show that $\mathcal{F}$ can be identified
+with $H^{-1}(j^*\NL_{X'/X})$. By More on Morphisms, Lemmas
+\ref{more-morphisms-lemma-get-triangle-NL} and
+\ref{more-morphisms-lemma-NL-immersion} we have an exact sequence
+$$
+0 \to H^{-1}(j^*\NL_{X'/X}) \to H^{-1}(\NL_{E/X}) \to
+\mathcal{C}_{E/X'} \to \ldots
+$$
+By the same lemmas applied to $E \to Z \to X$ we obtain an isomorphism
+$\pi^*\mathcal{C}_{Z/X} = H^{-1}(\pi^*\NL_{Z/X}) \to H^{-1}(\NL_{E/X})$.
+Thus we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-gysin-product-regular}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite
+type over $S$ such that both $X$ and $Y$ are quasi-compact,
+regular, of finite dimension, and have affine diagonal.
+Then $f$ is a local complete intersection morphism.
+Assume moreover the gysin map exists for $f$. Then
+$$
+f^!(\alpha \cdot \beta) = f^!\alpha \cdot f^!\beta
+$$
+in $\CH^*(X) \otimes \mathbf{Q}$ where the intersection product
+is as in Section \ref{section-intersection-regular}.
+\end{lemma}
+
+\begin{proof}
+The first statement follows from
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-morphism-regular-schemes-is-lci}.
+Observe that $f^![Y] = [X]$, see Lemma \ref{lemma-lci-gysin-easy}.
+Write $\alpha = ch(\alpha') \cap [Y]$ and $\beta = ch(\beta') \cap [Y]$
+where $\alpha', \beta' \in K_0(\textit{Vect}(Y)) \otimes \mathbf{Q}$
+as in Section \ref{section-intersection-regular}.
+Setting $c = ch(\alpha')$ and $c' = ch(\beta')$ we find
+$\alpha \cdot \beta = c \cap c' \cap [Y]$ by construction.
+By Lemma \ref{lemma-lci-gysin-commutes} we know that $f^!$
+commutes with both $c$ and $c'$. Hence
+\begin{align*}
+f^!(\alpha \cdot \beta)
+& =
+f^!(c \cap c' \cap [Y]) \\
+& =
+c \cap c' \cap f^![Y] \\
+& =
+c \cap c' \cap [X] \\
+& =
+(c \cap [X]) \cdot (c' \cap [X]) \\
+& =
+(c \cap f^![Y]) \cdot (c' \cap f^![Y]) \\
+& =
+f^!(\alpha) \cdot f^!(\beta)
+\end{align*}
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projection-formula-regular}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ be a morphism of schemes locally of finite
+type over $S$ such that both $X$ and $Y$ are quasi-compact,
+regular, of finite dimension, and have affine diagonal.
+Then $f$ is a local complete intersection morphism.
+Assume moreover the gysin map exists for $f$
+and that $f$ is proper. Then
+$$
+f_*(\alpha \cdot f^!\beta) = f_*\alpha \cdot \beta
+$$
+in $\CH^*(Y) \otimes \mathbf{Q}$ where the intersection product
+is as in Section \ref{section-intersection-regular}.
+\end{lemma}
+
+\begin{proof}
+The first statement follows from
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-morphism-regular-schemes-is-lci}.
+Observe that $f^![Y] = [X]$, see Lemma \ref{lemma-lci-gysin-easy}.
+Write $\alpha = ch(\alpha') \cap [X]$ and $\beta = ch(\beta') \cap [Y]$ where
+$\alpha' \in K_0(\textit{Vect}(X)) \otimes \mathbf{Q}$ and
+$\beta' \in K_0(\textit{Vect}(Y)) \otimes \mathbf{Q}$
+as in Section \ref{section-intersection-regular}.
+Set $c = ch(\alpha')$ and $c' = ch(\beta')$. We have
+\begin{align*}
+f_*(\alpha \cdot f^!\beta)
+& =
+f_*(c \cap f^!(c' \cap [Y])) \\
+& =
+f_*(c \cap c' \cap f^![Y]) \\
+& =
+f_*(c \cap c' \cap [X]) \\
+& =
+f_*(c' \cap c \cap [X]) \\
+& =
+c' \cap f_*(c \cap [X]) \\
+& =
+\beta \cdot f_*(\alpha)
+\end{align*}
+The first equality by the construction of the intersection product.
+By Lemma \ref{lemma-lci-gysin-commutes} we know that $f^!$
+commutes with $c'$. The fact that Chern classes are in the center
+of the bivariant ring justifies switching the order of capping
+$[X]$ with $c$ and $c'$. Commuting $c'$ with $f_*$ is allowed as $c'$
+is a bivariant class. The final equality is again the construction
+of the intersection product.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Gysin maps for diagonals}
+\label{section-gysin-for-diagonal}
+
+\noindent
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}. Let $f : X \to Y$
+be a smooth morphism of schemes locally of finite type over $S$. Then the
+diagonal morphism $\Delta : X \longrightarrow X \times_Y X$
+is a regular immersion, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-smooth-diagonal-perfect}.
+Thus we have the gysin map
+$$
+\Delta^! \in A^*(X \to X \times_Y X)^\wedge
+$$
+constructed in Section \ref{section-koszul}. If $X \to Y$ has constant
+relative dimension $d$, then $\Delta^! \in A^d(X \to X \times_Y X)$.
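+
+\medskip\noindent
+For example (a coordinate illustration, not used below): if
+$X = \mathbf{A}^d_Y$ with coordinates $x_1, \ldots, x_d$, then
+$X \times_Y X = \mathbf{A}^{2d}_Y$ with coordinates
+$x_1, \ldots, x_d, y_1, \ldots, y_d$ and $\Delta$ is the closed
+immersion cut out by the regular sequence
+$$
+x_1 - y_1, \ldots, x_d - y_d
+$$
+which exhibits $\Delta$ as a regular immersion of codimension $d$.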
+
+\begin{lemma}
+\label{lemma-diagonal-identity}
+In the situation above we have $\Delta^! \circ \text{pr}_i^! = 1$ in $A^0(X)$.
+\end{lemma}
+
+\begin{proof}
+Observe that the projections $\text{pr}_i : X \times_Y X \to X$ are
+smooth and hence we have gysin maps for these projections as well.
+Thus the lemma makes sense. Since $\text{pr}_i \circ \Delta = \text{id}_X$
+it is a special case of Lemma \ref{lemma-lci-gysin-composition}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-compute-bivariant}
+\begin{reference}
+\cite[Proposition 17.4.2]{F}
+\end{reference}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of schemes locally
+of finite type over $S$. If $g$ is smooth of relative dimension $d$, then
+$A^p(X \to Y) = A^{p - d}(X \to Z)$.
+\end{proposition}
+
+\begin{proof}
+We will use that smooth morphisms are local complete intersection
+morphisms whose gysin maps exist (see Section \ref{section-koszul}).
+In particular we have $g^! \in A^{-d}(Y \to Z)$. Then we can send
+$c \in A^p(X \to Y)$ to $c \circ g^! \in A^{p - d}(X \to Z)$.
+
+\medskip\noindent
+Conversely, let $c' \in A^{p - d}(X \to Z)$. Denote $res(c')$ the restriction
+(Remark \ref{remark-restriction-bivariant}) of $c'$ by the morphism $Y \to Z$.
+Since the diagram
+$$
+\xymatrix{
+X \times_Z Y \ar[r]_{\text{pr}_2} \ar[d]_{\text{pr}_1} & Y \ar[d]^g \\
+X \ar[r]^f & Z
+}
+$$
+is cartesian we find $res(c') \in A^{p - d}(X \times_Z Y \to Y)$.
+Let $\Delta : Y \to Y \times_Z Y$ be the diagonal and denote
+$res(\Delta^!)$ the restriction of $\Delta^!$
+to $X \times_Z Y$ by the morphism $X \times_Z Y \to Y \times_Z Y$.
+Since the diagram
+$$
+\xymatrix{
+X \ar[r] \ar[d] & X \times_Z Y \ar[d] \\
+Y \ar[r]^-\Delta & Y \times_Z Y
+}
+$$
+is cartesian we see that $res(\Delta^!) \in A^d(X \to X \times_Z Y)$.
+Combining these two restrictions we obtain
+$$
+res(\Delta^!) \circ res(c') \in A^p(X \to Y)
+$$
+Thus we have produced maps $A^p(X \to Y) \to A^{p - d}(X \to Z)$
+and $A^{p - d}(X \to Z) \to A^p(X \to Y)$. To finish the proof we
+will show these maps are mutually inverse.
+
+\medskip\noindent
+Let us start with $c \in A^p(X \to Y)$. Consider the diagram
+$$
+\xymatrix{
+X \ar[d] \ar[r] & Y \ar[d] \\
+X \times_Z Y \ar[r] \ar[d]^{\text{pr}_1} &
+Y \times_Z Y \ar[r]_{p_2} \ar[d]^{p_1} &
+Y \ar[d]^g \\
+X \ar[r]^f &
+Y \ar[r]^g &
+Z
+}
+$$
+whose squares are cartesian. The lower two squares of this diagram
+show that $res(c \circ g^!) = res(c) \circ p_2^!$ where in this formula
+$res(c)$ means the restriction of $c$ via $p_1$. Looking at the upper
+square of the diagram and using Lemma \ref{lemma-lci-gysin-commutes}
+we get $c \circ \Delta^! = res(\Delta^!) \circ res(c)$.
+We compute
+\begin{align*}
+res(\Delta^!) \circ res(c \circ g^!)
+& =
+res(\Delta^!) \circ res(c) \circ p_2^! \\
+& =
+c \circ \Delta^! \circ p_2^! \\
+& =
+c
+\end{align*}
+The final equality by Lemma \ref{lemma-diagonal-identity}.
+
+\medskip\noindent
+Conversely, let us start with $c' \in A^{p - d}(X \to Z)$. Looking
+at the lower rectangle of the diagram above we find
+$res(c') \circ g^! = \text{pr}_1^! \circ c'$.
+We compute
+\begin{align*}
+res(\Delta^!) \circ res(c') \circ g^!
+& =
+res(\Delta^!) \circ \text{pr}_1^! \circ c' \\
+& =
+c'
+\end{align*}
+The final equality holds because the left two squares of
+the diagram show that
+$\text{id} = res(\Delta^! \circ p_1^!) = res(\Delta^!) \circ \text{pr}_1^!$.
+This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Exterior product}
+\label{section-exterior-product}
+
+\noindent
+Let $k$ be a field. Set $S = \Spec(k)$ and define $\delta : S \to \mathbf{Z}$
+by sending the unique point to $0$. Then $(S, \delta)$ is a special case of
+our general Situation \ref{situation-setup}, see
+Example \ref{example-field}.
+
+\medskip\noindent
+Consider a cartesian square
+$$
+\xymatrix{
+X \times_k Y \ar[r] \ar[d] & Y \ar[d] \\
+X \ar[r] & \Spec(k) = S
+}
+$$
+of schemes locally of finite type over $k$. Then there is a canonical map
+$$
+\times :
+\CH_n(X) \otimes_{\mathbf{Z}} \CH_m(Y)
+\longrightarrow
+\CH_{n + m}(X \times_k Y)
+$$
+which is uniquely determined by the following rule:
+given integral closed subschemes $X' \subset X$
+and $Y' \subset Y$ of dimensions $n$ and $m$ we have
+$$
+[X'] \times [Y'] = [X' \times_k Y']_{n + m}
+$$
+in $\CH_{n + m}(X \times_k Y)$.
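+
+\medskip\noindent
+The subscript in $[X' \times_k Y']_{n + m}$ cannot be dropped, because a
+product of integral schemes over $k$ need not be reduced. As an
+illustration (this example is not used in what follows): let
+$k = \mathbf{F}_p(t)$, let $k' = k(t^{1/p})$, and let
+$X = Y = X' = Y' = \Spec(k')$ viewed as schemes finite over $k$. Then
+$$
+X' \times_k Y' = \Spec(k' \otimes_k k') =
+\Spec\left(k'[x]/(x - t^{1/p})^p\right)
+$$
+is Artinian of length $p$ and we get
+$[X'] \times [Y'] = p\left[(X' \times_k Y')_{red}\right]$
+in $\CH_0(X' \times_k Y')$.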
+
+\begin{lemma}
+\label{lemma-exterior-product-well-defined}
+The map
+$\times : \CH_n(X) \otimes_{\mathbf{Z}} \CH_m(Y) \to \CH_{n + m}(X \times_k Y)$
+is well defined.
+\end{lemma}
+
+\begin{proof}
+A first remark is that if $\alpha = \sum n_i[X_i]$
+and $\beta = \sum m_j[Y_j]$ with $X_i \subset X$ and $Y_j \subset Y$
+locally finite families of integral closed subschemes of
+dimensions $n$ and $m$, then
+the $X_i \times_k Y_j$ form a locally finite
+collection of closed subschemes of $X \times_k Y$ of
+dimension $n + m$ and we can indeed consider
+$$
+\alpha \times \beta = \sum n_i m_j [X_i \times_k Y_j]_{n + m}
+$$
+as an $(n + m)$-cycle on $X \times_k Y$. In this way we obtain an
+additive map
+$\times : Z_n(X) \otimes_{\mathbf{Z}} Z_m(Y) \to Z_{n + m}(X \times_k Y)$.
+The problem is to show that
+this procedure is compatible with rational equivalence.
+
+\medskip\noindent
+Let $i : X' \to X$ be the inclusion morphism of
+an integral closed subscheme of dimension $n$.
+Then flat pullback along the morphism $p' : X' \to \Spec(k)$ is an element
+$(p')^* \in A^{-n}(X' \to \Spec(k))$ by
+Lemma \ref{lemma-flat-pullback-bivariant}
+and hence $c' = i_* \circ (p')^* \in A^{-n}(X \to \Spec(k))$ by
+Lemma \ref{lemma-push-proper-bivariant}.
+This produces maps
+$$
+c' \cap - : \CH_m(Y) \longrightarrow \CH_{m + n}(X \times_k Y)
+$$
+which, as the reader easily verifies, send $[Y']$ to $[X' \times_k Y']_{n + m}$
+for any integral closed subscheme $Y' \subset Y$ of dimension
+$m$. Hence the construction
+$([X'], [Y']) \mapsto [X' \times_k Y']_{n + m}$
+factors through rational equivalence in the second variable, i.e.,
+gives a well defined map
+$Z_n(X) \otimes_{\mathbf{Z}} \CH_m(Y) \to \CH_{n + m}(X \times_k Y)$.
+By symmetry the same is true for the other variable and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-chow-cohomology-towards-point}
+Let $k$ be a field. Let $X$ be a scheme locally of finite type over $k$.
+Then we have a canonical identification
+$$
+A^p(X \to \Spec(k)) = \CH_{-p}(X)
+$$
+for all $p \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Consider the element $[\Spec(k)] \in \CH_0(\Spec(k))$. We get a map
+$A^p(X \to \Spec(k)) \to \CH_{-p}(X)$ by sending $c$ to $c \cap [\Spec(k)]$.
+
+\medskip\noindent
+Conversely, suppose we have $\alpha \in \CH_{-p}(X)$.
+Then we can define $c_\alpha \in A^p(X \to \Spec(k))$ as
+follows: given $X' \to \Spec(k)$ and $\alpha' \in \CH_n(X')$
+we let
+$$
+c_\alpha \cap \alpha' = \alpha \times \alpha'
+$$
+in $\CH_{n - p}(X \times_k X')$. To show that this is a bivariant
+class we write $\alpha = \sum n_i[X_i]$ as in
+Definition \ref{definition-cycles}. Consider the composition
+$$
+\coprod X_i \xrightarrow{g} X \to \Spec(k)
+$$
+and denote $f : \coprod X_i \to \Spec(k)$ the composition.
+Then $g$ is proper and $f$ is flat of relative dimension $-p$.
+Pullback along $f$ is a bivariant class
+$f^* \in A^p(\coprod X_i \to \Spec(k))$ by
+Lemma \ref{lemma-flat-pullback-bivariant}.
+Denote $\nu \in A^0(\coprod X_i)$ the bivariant class
+which multiplies a cycle by $n_i$ on the $i$th component.
+Thus $\nu \circ f^* \in A^p(\coprod X_i \to \Spec(k))$.
+Finally, we have a bivariant class
+$$
+g_* \circ \nu \circ f^*
+$$
+by Lemma \ref{lemma-push-proper-bivariant}. The reader easily
+verifies that $c_\alpha$ is equal to this class and hence
+is itself a bivariant class.
+
+\medskip\noindent
+To finish the proof we have to show that the two constructions
+are mutually inverse. Since $c_\alpha \cap [\Spec(k)] = \alpha$
+this is clear for one of the two directions. For the other, let
+$c \in A^p(X \to \Spec(k))$ and set $\alpha = c \cap [\Spec(k)]$.
+It suffices to prove that
+$$
+c \cap [X'] = c_\alpha \cap [X']
+$$
+when $X'$ is an integral scheme locally of finite type over $\Spec(k)$,
+see Lemma \ref{lemma-bivariant-zero}. However, then $p' : X' \to \Spec(k)$
+is flat of relative dimension $\dim(X')$ and hence
+$[X'] = (p')^*[\Spec(k)]$. Thus the fact that the bivariant classes
+$c$ and $c_\alpha$ agree on $[\Spec(k)]$ implies they
+agree when capped against $[X']$ and the proof is complete.
+\end{proof}
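+
+\medskip\noindent
+As a sanity check (not used in what follows), take $X = \Spec(k)$
+in Lemma \ref{lemma-chow-cohomology-towards-point}. Then we obtain
+$$
+A^p(\Spec(k) \to \Spec(k)) = \CH_{-p}(\Spec(k)) =
+\left\{
+\begin{matrix}
+\mathbf{Z} & \text{if} & p = 0 \\
+0 & \text{if} & p \not = 0
+\end{matrix}
+\right.
+$$
+and the class corresponding to $[\Spec(k)]$ is the identity
+bivariant class.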
+
+\begin{lemma}
+\label{lemma-chow-cohomology-towards-point-commutes}
+Let $k$ be a field. Let $X$ be a scheme locally of finite type over $k$.
+Let $c \in A^p(X \to \Spec(k))$. Let $Y \to Z$ be a morphism of schemes
+locally of finite type over $k$. Let $c' \in A^q(Y \to Z)$. Then
+$c \circ c' = c' \circ c$ in $A^{p + q}(X \times_k Y \to X \times_k Z)$.
+\end{lemma}
+
+\begin{proof}
+In the proof of Lemma \ref{lemma-chow-cohomology-towards-point}
+we have seen that $c$ is given by a combination of
+proper pushforward, multiplying by integers over connected
+components, and flat pullback. Since $c'$ commutes with each of
+these operations by definition of bivariant classes, we conclude.
+Some details omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-commuting-exterior}
+The upshot of Lemmas \ref{lemma-chow-cohomology-towards-point}
+and \ref{lemma-chow-cohomology-towards-point-commutes} is the following.
+Let $k$ be a field. Let $X$ be a scheme locally of finite type over $k$.
+Let $\alpha \in \CH_*(X)$. Let $Y \to Z$ be a morphism of schemes
+locally of finite type over $k$. Let $c' \in A^q(Y \to Z)$. Then
+$$
+\alpha \times (c' \cap \beta) = c' \cap (\alpha \times \beta)
+$$
+in $\CH_*(X \times_k Y)$ for any $\beta \in \CH_*(Z)$. Namely, this
+follows by taking $c = c_\alpha \in A^*(X \to \Spec(k))$ the bivariant class
+corresponding to $\alpha$, see proof of
+Lemma \ref{lemma-chow-cohomology-towards-point}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-exterior-product-associative}
+Exterior product is associative. More precisely, let $k$ be a field,
+let $X, Y, Z$ be schemes locally of finite type over $k$, let
+$\alpha \in \CH_*(X)$, $\beta \in \CH_*(Y)$, $\gamma \in \CH_*(Z)$.
+Then $(\alpha \times \beta) \times \gamma =
+\alpha \times (\beta \times \gamma)$ in $\CH_*(X \times_k Y \times_k Z)$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: associativity of fibre product of schemes.
+\end{proof}
+
+
+
+
+
+
+
+\section{Intersection products}
+\label{section-intersection-product}
+
+\noindent
+Let $k$ be a field. Set $S = \Spec(k)$ and define $\delta : S \to \mathbf{Z}$
+by sending the unique point to $0$. Then $(S, \delta)$ is a special case of
+our general Situation \ref{situation-setup}, see
+Example \ref{example-field}.
+
+\medskip\noindent
+Let $X$ be a smooth scheme over $k$. The bivariant class $\Delta^!$
+of Section \ref{section-gysin-for-diagonal} allows us to define a kind of
+intersection product on Chow groups of schemes locally of finite type over $X$.
+Namely, suppose that $Y \to X$ and $Z \to X$ are morphisms of schemes
+which are locally of finite type. Then observe that
+$$
+Y \times_X Z = (Y \times_k Z) \times_{X \times_k X, \Delta} X
+$$
+Hence we can consider the following sequence of maps
+$$
+\CH_n(Y) \otimes_\mathbf{Z} \CH_m(Z)
+\xrightarrow{\times}
+\CH_{n + m}(Y \times_k Z)
+\xrightarrow{\Delta^!}
+\CH_{n + m - *}(Y \times_X Z)
+$$
+Here the first arrow is the exterior product constructed in
+Section \ref{section-exterior-product} and the second arrow
+is the gysin map for the diagonal studied in
+Section \ref{section-gysin-for-diagonal}. If $X$ is equidimensional
+of dimension $d$, then we end up in $\CH_{n + m - d}(Y \times_X Z)$
+and in general we can decompose into the parts lying over the open
+and closed subschemes of $X$ where $X$ has a given dimension.
+Given $\alpha \in \CH_*(Y)$ and $\beta \in \CH_*(Z)$ we will denote
+$$
+\alpha \cdot \beta = \Delta^!(\alpha \times \beta)
+\in \CH_*(Y \times_X Z)
+$$
+In the special case where $X = Y = Z$ we obtain a multiplication
+$$
+\CH_*(X) \times \CH_*(X) \to \CH_*(X),\quad
+(\alpha, \beta) \mapsto \alpha \cdot \beta
+$$
+which is called the {\it intersection product}. We observe that
+this product is clearly symmetric. Associativity follows from
+the next lemma.
+
+\begin{lemma}
+\label{lemma-associative}
+The product defined above is associative. More precisely, let $k$ be a field,
+let $X$ be smooth over $k$,
+let $Y, Z, W$ be schemes locally of finite type over $X$, let
+$\alpha \in \CH_*(Y)$, $\beta \in \CH_*(Z)$, $\gamma \in \CH_*(W)$.
+Then $(\alpha \cdot \beta) \cdot \gamma =
+\alpha \cdot (\beta \cdot \gamma)$ in $\CH_*(Y \times_X Z \times_X W)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-exterior-product-associative} we have
+$(\alpha \times \beta) \times \gamma =
+\alpha \times (\beta \times \gamma)$ in $\CH_*(Y \times_k Z \times_k W)$.
+Consider the closed immersions
+$$
+\Delta_{12} : X \times_k X \longrightarrow X \times_k X \times_k X,
+\quad (x, x') \mapsto (x, x, x')
+$$
+and
+$$
+\Delta_{23} : X \times_k X \longrightarrow X \times_k X \times_k X,
+\quad (x, x') \mapsto (x, x', x')
+$$
+Denote $\Delta_{12}^!$ and $\Delta_{23}^!$ the corresponding bivariant
+classes; observe that $\Delta_{12}^!$ is the restriction
+(Remark \ref{remark-restriction-bivariant}) of $\Delta^!$
+to $X \times_k X \times_k X$ by the map $\text{pr}_{12}$ and that
+$\Delta_{23}^!$ is the restriction of $\Delta^!$
+to $X \times_k X \times_k X$ by the map $\text{pr}_{23}$.
+Thus clearly the restriction of $\Delta_{12}^!$ by $\Delta_{23}$
+is $\Delta^!$ and the restriction of $\Delta_{23}^!$ by $\Delta_{12}$ is
+$\Delta^!$ too. Thus by Lemma \ref{lemma-gysin-commutes} we have
+$$
+\Delta^! \circ \Delta_{12}^! =
+\Delta^! \circ \Delta_{23}^!
+$$
+Now we can prove the lemma by the following sequence of equalities:
+\begin{align*}
+(\alpha \cdot \beta) \cdot \gamma
+& =
+\Delta^!(\Delta^!(\alpha \times \beta) \times \gamma) \\
+& =
+\Delta^!(\Delta_{12}^!((\alpha \times \beta) \times \gamma)) \\
+& =
+\Delta^!(\Delta_{23}^!((\alpha \times \beta) \times \gamma)) \\
+& =
+\Delta^!(\Delta_{23}^!(\alpha \times (\beta \times \gamma))) \\
+& =
+\Delta^!(\alpha \times \Delta^!(\beta \times \gamma)) \\
+& =
+\alpha \cdot (\beta \cdot \gamma)
+\end{align*}
+All equalities are clear from the above except perhaps
+for the second and penultimate one. The equation
+$\Delta_{23}^!(\alpha \times (\beta \times \gamma)) =
+\alpha \times \Delta^!(\beta \times \gamma)$ holds by
+Remark \ref{remark-commuting-exterior}. Similarly for the second
+equation.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-identify-chow-for-smooth}
+Let $k$ be a field. Let $X$ be a smooth scheme over $k$, equidimensional
+of dimension $d$. The map
+$$
+A^p(X) \longrightarrow \CH_{d - p}(X),\quad
+c \longmapsto c \cap [X]_d
+$$
+is an isomorphism. Via this isomorphism composition of bivariant
+classes turns into the intersection product defined above.
+\end{lemma}
+
+\begin{proof}
+Denote $g : X \to \Spec(k)$ the structure morphism.
+The map is the composition of the isomorphisms
+$$
+A^p(X) \to A^{p - d}(X \to \Spec(k)) \to \CH_{d - p}(X)
+$$
+The first is the isomorphism $c \mapsto c \circ g^*$ of
+Proposition \ref{proposition-compute-bivariant}
+and the second is the isomorphism $c \mapsto c \cap [\Spec(k)]$ of
+Lemma \ref{lemma-chow-cohomology-towards-point}.
+From the proof of Lemma \ref{lemma-chow-cohomology-towards-point}
+we see that the inverse to the second arrow sends $\alpha \in \CH_{d - p}(X)$
+to the bivariant class $c_\alpha$ which sends $\beta \in \CH_*(Y)$
+for $Y$ locally of finite type over $k$
+to $\alpha \times \beta$ in $\CH_*(X \times_k Y)$. From the proof of
+Proposition \ref{proposition-compute-bivariant} we see the inverse
+to the first arrow in turn sends $c_\alpha$ to the bivariant class
+which sends $\beta \in \CH_*(Y)$ for $Y \to X$ locally of finite type
+to $\Delta^!(\alpha \times \beta) = \alpha \cdot \beta$.
+From this the final result of the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-gysin-product}
+Let $k$ be a field. Let $f : X \to Y$ be a morphism of schemes smooth
+over $k$. Then the gysin map exists for $f$ and
+$f^!(\alpha \cdot \beta) = f^!\alpha \cdot f^!\beta$.
+\end{lemma}
+
+\begin{proof}
+Observe that $X \to X \times_k Y$ is an immersion of $X$ into a scheme
+smooth over $Y$. Hence the gysin map exists for $f$
+(Definition \ref{definition-lci-gysin}).
+To prove the formula we may decompose $X$ and $Y$ into their
+connected components, hence we may assume $X$ is smooth over $k$
+and equidimensional of dimension $d$ and $Y$ is smooth over $k$
+and equidimensional of dimension $e$. Observe that
+$f^![Y]_e = [X]_d$ (see for example Lemma \ref{lemma-lci-gysin-easy}).
+Write $\alpha = c \cap [Y]_e$ and $\beta = c' \cap [Y]_e$
+and hence $\alpha \cdot \beta = c \cap c' \cap [Y]_e$,
+see Lemma \ref{lemma-identify-chow-for-smooth}.
+By Lemma \ref{lemma-lci-gysin-commutes} we know that $f^!$
+commutes with both $c$ and $c'$. Hence
+\begin{align*}
+f^!(\alpha \cdot \beta)
+& =
+f^!(c \cap c' \cap [Y]_e) \\
+& =
+c \cap c' \cap f^![Y]_e \\
+& =
+c \cap c' \cap [X]_d \\
+& =
+(c \cap [X]_d) \cdot (c' \cap [X]_d) \\
+& =
+(c \cap f^![Y]_e) \cdot (c' \cap f^![Y]_e) \\
+& =
+f^!(\alpha) \cdot f^!(\beta)
+\end{align*}
+as desired where we have used Lemma \ref{lemma-identify-chow-for-smooth}
+for $X$ as well.
+
+\medskip\noindent
+An alternative proof can be given by proving that
+$(f \times f)^!(\alpha \times \beta) = f^!\alpha \times f^!\beta$
+and using Lemma \ref{lemma-lci-gysin-composition}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projection-formula}
+Let $k$ be a field. Let $f : X \to Y$ be a proper morphism of schemes smooth
+over $k$. Then the gysin map exists for $f$ and
+$f_*(\alpha \cdot f^!\beta) = f_*\alpha \cdot \beta$.
+\end{lemma}
+
+\begin{proof}
+Observe that $X \to X \times_k Y$ is an immersion of $X$ into a scheme
+smooth over $Y$. Hence the gysin map exists for $f$
+(Definition \ref{definition-lci-gysin}).
+To prove the formula we may decompose $X$ and $Y$ into their
+connected components, hence we may assume $X$ is smooth over $k$
+and equidimensional of dimension $d$ and $Y$ is smooth over $k$
+and equidimensional of dimension $e$. Observe that
+$f^![Y]_e = [X]_d$ (see for example Lemma \ref{lemma-lci-gysin-easy}).
+Write $\alpha = c \cap [X]_d$ and $\beta = c' \cap [Y]_e$,
+see Lemma \ref{lemma-identify-chow-for-smooth}. We have
+\begin{align*}
+f_*(\alpha \cdot f^!\beta)
+& =
+f_*(c \cap f^!(c' \cap [Y]_e)) \\
+& =
+f_*(c \cap c' \cap f^![Y]_e) \\
+& =
+f_*(c \cap c' \cap [X]_d) \\
+& =
+f_*(c' \cap c \cap [X]_d) \\
+& =
+c' \cap f_*(c \cap [X]_d) \\
+& =
+\beta \cdot f_*(\alpha)
+\end{align*}
+The first equality by the result of Lemma \ref{lemma-identify-chow-for-smooth}
+for $X$. By Lemma \ref{lemma-lci-gysin-commutes} we know that $f^!$
+commutes with $c'$. The commutativity of the intersection
+product justifies switching the order of capping $[X]_d$ with $c$ and $c'$
+(via the lemma). Commuting $c'$ with $f_*$ is allowed as $c'$
+is a bivariant class. The final equality is again the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-intersect-properly}
+Let $k$ be a field. Let $X$ be an integral scheme smooth over $k$.
+Let $Y, Z \subset X$ be integral closed subschemes. Set
+$d = \dim(Y) + \dim(Z) - \dim(X)$. Assume
+\begin{enumerate}
+\item $\dim(Y \cap Z) \leq d$, and
+\item $\mathcal{O}_{Y, \xi}$ and $\mathcal{O}_{Z, \xi}$
+are Cohen-Macaulay for every $\xi \in Y \cap Z$ with
+$\delta(\xi) = d$.
+\end{enumerate}
+Then $[Y] \cdot [Z] = [Y \cap Z]_d$ in $\CH_d(X)$.
+\end{lemma}
+
+\begin{proof}
+Recall that $[Y] \cdot [Z] = \Delta^!([Y \times Z])$ where
+$\Delta^! = c(\Delta : X \to X \times X, \mathcal{T}_{X/k})$
+is a higher codimension gysin map
+(Section \ref{section-gysin-higher-codimension}) with
+$\mathcal{T}_{X/k} = \SheafHom(\Omega_{X/k}, \mathcal{O}_X)$
+locally free of rank $\dim(X)$. We have the equality of schemes
+$$
+Y \cap Z = X \times_{\Delta, (X \times X)} (Y \times Z)
+$$
+and $\dim(Y \times Z) = \dim(Y) + \dim(Z)$ and hence conditions
+(1), (2), and (3) of Lemma \ref{lemma-gysin-easy} hold.
+Finally, if $\xi \in Y \cap Z$, then we have a flat local
+homomorphism
+$$
+\mathcal{O}_{Y, \xi} \longrightarrow
+\mathcal{O}_{Y \times Z, \xi}
+$$
+whose ``fibre'' is $\mathcal{O}_{Z, \xi}$. It follows that if both
+$\mathcal{O}_{Y, \xi}$ and $\mathcal{O}_{Z, \xi}$
+are Cohen-Macaulay, then so is $\mathcal{O}_{Y \times Z, \xi}$, see
+Algebra, Lemma \ref{algebra-lemma-CM-goes-up}.
+In this way we see that all the hypotheses of
+Lemma \ref{lemma-gysin-easy} are satisfied and we conclude.
+\end{proof}
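+
+\medskip\noindent
+A classical special case: let $X = \mathbf{P}^2_k$ and let
+$Y, Z \subset X$ be two distinct lines. Then $d = 1 + 1 - 2 = 0$
+and $Y \cap Z$ is a single $k$-rational point $x$. The local rings
+$\mathcal{O}_{Y, x}$ and $\mathcal{O}_{Z, x}$ are regular, hence
+Cohen-Macaulay, so Lemma \ref{lemma-intersect-properly} gives
+$$
+[Y] \cdot [Z] = [Y \cap Z]_0 = [x]
+$$
+recovering the expected B\'ezout count for two lines in the plane.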
+
+\begin{lemma}
+\label{lemma-intersect-regularly-embedded}
+Let $k$ be a field. Let $X$ be a scheme smooth over $k$. Let $i : Y \to X$ be
+a regular closed immersion. Let $\alpha \in \CH_*(X)$. If $Y$ is
+equidimensional of dimension $e$, then
+$\alpha \cdot [Y]_e = i_*(i^!(\alpha))$ in $\CH_*(X)$.
+\end{lemma}
+
+\begin{proof}
+After decomposing $X$ into connected components we may and do assume $X$
+is equidimensional of dimension $d$. Write $\alpha = c \cap [X]_d$
+with $c \in A^*(X)$, see Lemma \ref{lemma-identify-chow-for-smooth}. Then
+$$
+i_*(i^!(\alpha)) = i_*(i^!(c \cap [X]_d)) =
+i_*(c \cap i^![X]_d) = i_*(c \cap [Y]_e) =
+c \cap i_*[Y]_e = \alpha \cdot [Y]_e
+$$
+The first equality by choice of $c$. The second equality by
+Lemma \ref{lemma-lci-gysin-commutes}. The third because
+$i^![X]_d = [Y]_e$ in $\CH_*(Y)$ (Lemma \ref{lemma-lci-gysin-easy}).
+The fourth because bivariant classes commute with proper pushforward.
+The last equality by Lemma \ref{lemma-identify-chow-for-smooth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-intersection-regular-smooth}
+Let $k$ be a field. Let $X$ be a smooth scheme over $k$ which is
+quasi-compact and has affine diagonal. Then the intersection
+product on $\CH^*(X)$ constructed in this section agrees
+after tensoring with $\mathbf{Q}$ with the intersection product
+constructed in Section \ref{section-intersection-regular}.
+\end{lemma}
+
+\begin{proof}
+Let $\alpha \in \CH^i(X)$ and $\beta \in \CH^j(X)$. Write
+$\alpha = ch(\alpha') \cap [X]$ and $\beta = ch(\beta') \cap [X]$
+with $\alpha', \beta' \in K_0(\textit{Vect}(X)) \otimes \mathbf{Q}$
+as in Section \ref{section-intersection-regular}.
+Set $c = ch(\alpha')$ and $c' = ch(\beta')$.
+Then the intersection product in Section \ref{section-intersection-regular}
+produces $c \cap c' \cap [X]$. This is the same as $\alpha \cdot \beta$
+by Lemma \ref{lemma-identify-chow-for-smooth} (or rather the
+generalization that $A^i(X) \to \CH^i(X)$, $c \mapsto c \cap [X]$
+is an isomorphism for any smooth scheme $X$ over $k$).
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Exterior product over Dedekind domains}
+\label{section-exterior-product-dim-1}
+
+\noindent
+Let $S$ be a locally Noetherian scheme which has an open covering
+by spectra of Dedekind domains. Set $\delta(s) = 0$ for $s \in S$ closed
+and $\delta(s) = 1$ otherwise. Then $(S, \delta)$ is a special case of our
+general Situation \ref{situation-setup}; see
+Example \ref{example-domain-dimension-1}.
+Observe that $S$ is normal
+(Algebra, Lemma \ref{algebra-lemma-characterize-Dedekind})
+and hence a disjoint union of normal integral schemes
+(Properties, Lemma \ref{properties-lemma-normal-locally-Noetherian}).
+Thus all of the arguments below reduce to the case where $S$ is
+irreducible. On the other hand, we allow $S$ to be nonseparated (so $S$
+could be the affine line with $0$ doubled for example).
+
+\medskip\noindent
+Consider a cartesian square
+$$
+\xymatrix{
+X \times_S Y \ar[r] \ar[d] & Y \ar[d] \\
+X \ar[r] & S
+}
+$$
+of schemes locally of finite type over $S$. We claim there is a canonical map
+$$
+\times :
+\CH_n(X) \otimes_{\mathbf{Z}} \CH_m(Y)
+\longrightarrow
+\CH_{n + m - 1}(X \times_S Y)
+$$
+which is uniquely determined by the following rule:
+given integral closed subschemes $X' \subset X$
+and $Y' \subset Y$ of $\delta$-dimensions $n$ and $m$ we set
+\begin{enumerate}
+\item $[X'] \times [Y'] = [X' \times_S Y']_{n + m - 1}$ if
+$X'$ or $Y'$ dominates an irreducible component of $S$,
+\item $[X'] \times [Y'] = 0$ if neither $X'$ nor $Y'$ dominates an
+irreducible component of $S$.
+\end{enumerate}
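+
+\medskip\noindent
+To illustrate the dichotomy in this rule, take $S = \Spec(\mathbf{Z})$
+and $X = Y = S$. A prime number $p$ gives the ``vertical'' integral
+closed subscheme $\Spec(\mathbf{F}_p) \subset S$ of $\delta$-dimension
+$0$, whereas $S$ itself is ``horizontal'' of $\delta$-dimension $1$.
+The rule above gives
+$$
+[S] \times [\Spec(\mathbf{F}_p)] = [\Spec(\mathbf{F}_p)]
+\quad\text{and}\quad
+[\Spec(\mathbf{F}_p)] \times [\Spec(\mathbf{F}_q)] = 0
+$$
+for primes $p$ and $q$: the first because $S$ dominates $S$, the
+second because neither factor dominates an irreducible component
+of $S$.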
+
+\begin{lemma}
+\label{lemma-exterior-product-well-defined-dim-1}
+The map $\times : \CH_n(X) \otimes_{\mathbf{Z}} \CH_m(Y) \to
+\CH_{n + m - 1}(X \times_S Y)$ is well defined.
+\end{lemma}
+
+\begin{proof}
+Consider $n$ and $m$ cycles $\alpha = \sum_{i \in I} n_i[X_i]$
+and $\beta = \sum_{j \in J} m_j[Y_j]$ with $X_i \subset X$ and $Y_j \subset Y$
+locally finite families of integral closed subschemes of
+$\delta$-dimensions $n$ and $m$. Let $K \subset I \times J$ be the set
+of pairs $(i, j) \in I \times J$ such that $X_i$ or $Y_j$ dominates
+an irreducible component of $S$.
+Then $\{X_i \times_S Y_j\}_{(i, j) \in K}$ is a locally finite
+collection of closed subschemes of $X \times_S Y$ of
+$\delta$-dimension $n + m - 1$. This means we can indeed consider
+$$
+\alpha \times \beta =
+\sum\nolimits_{(i, j) \in K} n_i m_j [X_i \times_S Y_j]_{n + m - 1}
+$$
+as an $(n + m - 1)$-cycle on $X \times_S Y$. In this way we obtain an
+additive map
+$\times : Z_n(X) \otimes_{\mathbf{Z}} Z_m(Y) \to Z_{n + m - 1}(X \times_S Y)$.
+The problem is to show that
+this procedure is compatible with rational equivalence.
+
+\medskip\noindent
+Let $i : X' \to X$ be the inclusion morphism of an integral closed subscheme
+of $\delta$-dimension $n$ which dominates an irreducible component
+of $S$. Then $p' : X' \to S$ is flat of relative dimension $n - 1$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-dedekind-torsion-free-flat}.
+Hence flat pullback along $p'$ is an element
+$(p')^* \in A^{-n + 1}(X' \to S)$ by
+Lemma \ref{lemma-flat-pullback-bivariant}
+and hence $c' = i_* \circ (p')^* \in A^{-n + 1}(X \to S)$ by
+Lemma \ref{lemma-push-proper-bivariant}.
+This produces maps
+$$
+c' \cap - : \CH_m(Y) \longrightarrow \CH_{m + n - 1}(X \times_S Y)
+$$
+which sends $[Y']$ to $[X' \times_S Y']_{n + m - 1}$ for any
+integral closed subscheme $Y' \subset Y$ of $\delta$-dimension $m$.
+
+\medskip\noindent
+Let $i : X' \to X$ be the inclusion morphism of an integral closed subscheme
+of $\delta$-dimension $n$ such that the composition $X' \to X \to S$
+factors through a closed point $s \in S$. Since $s$ is a closed point
+of the spectrum of a Dedekind domain, we see that $s$ is an effective
+Cartier divisor on $S$ whose normal bundle is trivial. Denote
+$c \in A^1(s \to S)$ the gysin homomorphism, see
+Lemma \ref{lemma-gysin-bivariant}. The morphism $p' : X' \to s$
+is flat of relative dimension $n$. Hence flat pullback along $p'$
+is an element $(p')^* \in A^{-n}(X' \to s)$ by
+Lemma \ref{lemma-flat-pullback-bivariant}.
+Thus
+$$
+c' = i_* \circ (p')^* \circ c \in A^{-n + 1}(X \to S)
+$$
+by Lemma \ref{lemma-push-proper-bivariant}. This produces maps
+$$
+c' \cap - : \CH_m(Y) \longrightarrow \CH_{m + n - 1}(X \times_S Y)
+$$
+which for any integral closed subscheme $Y' \subset Y$
+of $\delta$-dimension $m$
+sends $[Y']$ to either $[X' \times_S Y']_{n + m - 1}$ if $Y'$ dominates
+an irreducible component of $S$ or to $0$ if not.
+
+\medskip\noindent
+From the previous two paragraphs we conclude
+the construction $([X'], [Y']) \mapsto [X' \times_S Y']_{n + m - 1}$
+factors through rational equivalence in the second variable, i.e.,
+gives a well defined map
+$Z_n(X) \otimes_{\mathbf{Z}} \CH_m(Y) \to \CH_{n + m - 1}(X \times_S Y)$.
+By symmetry the same is true for the other variable and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-chow-cohomology-towards-base-dim-1}
+Let $(S, \delta)$ be as above. Let $X$ be a scheme locally of finite type
+over $S$. Then we have a canonical identification
+$$
+A^p(X \to S) = \CH_{1 - p}(X)
+$$
+for all $p \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Consider the element $[S]_1 \in \CH_1(S)$. We get a map
+$A^p(X \to S) \to \CH_{1 - p}(X)$ by sending $c$ to $c \cap [S]_1$.
+
+\medskip\noindent
+Conversely, suppose we have $\alpha \in \CH_{1 - p}(X)$.
+Then we can define $c_\alpha \in A^p(X \to S)$ as
+follows: given $X' \to S$ and $\alpha' \in \CH_n(X')$
+we let
+$$
+c_\alpha \cap \alpha' = \alpha \times \alpha'
+$$
+in $\CH_{n - p}(X \times_S X')$. To show that this is a bivariant
+class we write $\alpha = \sum_{i \in I} n_i[X_i]$ as in
+Definition \ref{definition-cycles}. In particular the morphism
+$$
+g : \coprod\nolimits_{i \in I} X_i \longrightarrow X
+$$
+is proper. Pick $i \in I$. If $X_i$ dominates an irreducible component
+of $S$, then the structure morphism $p_i : X_i \to S$ is flat and we have
+$\xi_i = p_i^* \in A^p(X_i \to S)$. On the other hand, if $p_i$ factors
+as $p'_i : X_i \to s_i$ followed by the inclusion $s_i \to S$
+of a closed point, then we have
+$\xi_i = (p'_i)^* \circ c_i \in A^p(X_i \to S)$
+where $c_i \in A^1(s_i \to S)$ is the gysin homomorphism and
+$(p'_i)^*$ is flat pullback. Observe that
+$$
+A^p(\coprod\nolimits_{i \in I} X_i \to S) =
+\prod\nolimits_{i \in I} A^p(X_i \to S)
+$$
+Thus we have
+$$
+\xi = \sum n_i \xi_i \in A^p(\coprod\nolimits_{i \in I} X_i \to S)
+$$
+Finally, since $g$ is proper we have a bivariant class
+$$
+g_* \circ \xi \in A^p(X \to S)
+$$
+by Lemma \ref{lemma-push-proper-bivariant}. The reader easily
+verifies that $c_\alpha$ is equal to this class
+(please compare with the proof of
+Lemma \ref{lemma-exterior-product-well-defined-dim-1})
+and hence is itself a bivariant class.
+
+\medskip\noindent
+To finish the proof we have to show that the two constructions
+are mutually inverse. Since $c_\alpha \cap [S]_1 = \alpha$
+this is clear for one of the two directions. For the other, let
+$c \in A^p(X \to S)$ and set $\alpha = c \cap [S]_1$.
+It suffices to prove that
+$$
+c \cap [X'] = c_\alpha \cap [X']
+$$
+when $X'$ is an integral scheme locally of finite type over $S$,
+see Lemma \ref{lemma-bivariant-zero}. However, either $p' : X' \to S$
+is flat of relative dimension $\dim_\delta(X') - 1$ and hence
+$[X'] = (p')^*[S]_1$ or $X' \to S$ factors as $X' \to s \to S$
+and hence $[X'] = (p')^*(s \to S)^*[S]_1$. Thus the fact that the
+bivariant classes $c$ and $c_\alpha$ agree on $[S]_1$
+implies they agree when capped against $[X']$ (since bivariant classes
+commute with flat pullback and gysin maps) and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-chow-cohomology-towards-base-dim-1-commutes}
+Let $(S, \delta)$ be as above. Let $X$ be a scheme locally of finite type
+over $S$. Let $c \in A^p(X \to S)$. Let $Y \to Z$ be a morphism of schemes
+locally of finite type over $S$. Let $c' \in A^q(Y \to Z)$. Then
+$c \circ c' = c' \circ c$ in $A^{p + q}(X \times_S Y \to X \times_S Z)$.
+\end{lemma}
+
+\begin{proof}
+In the proof of Lemma \ref{lemma-chow-cohomology-towards-base-dim-1}
+we have seen that $c$ is given by a combination of
+proper pushforward, multiplying by integers over connected
+components, flat pullback, and gysin maps. Since $c'$ commutes with each of
+these operations by definition of bivariant classes, we conclude.
+Some details omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-commuting-exterior-dim-1}
+The upshot of Lemmas \ref{lemma-chow-cohomology-towards-base-dim-1}
+and \ref{lemma-chow-cohomology-towards-base-dim-1-commutes} is the following.
+Let $(S, \delta)$ be as above. Let $X$ be a scheme locally of finite type
+over $S$.
+Let $\alpha \in \CH_*(X)$. Let $Y \to Z$ be a morphism of schemes
+locally of finite type over $S$. Let $c' \in A^q(Y \to Z)$. Then
+$$
+\alpha \times (c' \cap \beta) = c' \cap (\alpha \times \beta)
+$$
+in $\CH_*(X \times_S Y)$ for any $\beta \in \CH_*(Z)$. Namely, this
+follows by taking $c = c_\alpha \in A^*(X \to S)$ the bivariant class
+corresponding to $\alpha$, see proof of
+Lemma \ref{lemma-chow-cohomology-towards-base-dim-1}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-exterior-product-associative-dim-1}
+Exterior product is associative. More precisely, let $(S, \delta)$ be
+as above, let $X, Y, Z$ be schemes locally of finite type over $S$, let
+$\alpha \in \CH_*(X)$, $\beta \in \CH_*(Y)$, $\gamma \in \CH_*(Z)$.
+Then $(\alpha \times \beta) \times \gamma =
+\alpha \times (\beta \times \gamma)$ in $\CH_*(X \times_S Y \times_S Z)$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: associativity of fibre product of schemes.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Intersection products over Dedekind domains}
+\label{section-intersection-product-dim-1}
+
+\noindent
+Let $S$ be a locally Noetherian scheme which has an open covering
+by spectra of Dedekind domains. Set $\delta(s) = 0$ for $s \in S$ closed
+and $\delta(s) = 1$ otherwise. Then $(S, \delta)$ is a special case of our
+general Situation \ref{situation-setup}; see
+Example \ref{example-domain-dimension-1} and discussion in
+Section \ref{section-exterior-product-dim-1}.
+
+\medskip\noindent
+Let $X$ be a smooth scheme over $S$. The bivariant class $\Delta^!$
+of Section \ref{section-gysin-for-diagonal} allows us to define a kind of
+intersection product on Chow groups of schemes locally of finite type over $X$.
+Namely, suppose that $Y \to X$ and $Z \to X$ are morphisms of schemes
+which are locally of finite type. Then observe that
+$$
+Y \times_X Z = (Y \times_S Z) \times_{X \times_S X, \Delta} X
+$$
+Hence we can consider the following sequence of maps
+$$
+\CH_n(Y) \otimes_\mathbf{Z} \CH_m(Z)
+\xrightarrow{\times}
+\CH_{n + m - 1}(Y \times_S Z)
+\xrightarrow{\Delta^!}
+\CH_{n + m - *}(Y \times_X Z)
+$$
+Here the first arrow is the exterior product constructed in
+Section \ref{section-exterior-product-dim-1} and the second arrow
+is the gysin map for the diagonal studied in
+Section \ref{section-gysin-for-diagonal}. If $X$ is equidimensional
+of dimension $d$, then $X \to S$ is smooth of relative dimension $d - 1$
+and hence we end up in $\CH_{n + m - d}(Y \times_X Z)$.
+In general we can decompose into the parts lying over the open
+and closed subschemes of $X$ where $X$ has a given dimension.
+Given $\alpha \in \CH_*(Y)$ and $\beta \in \CH_*(Z)$ we will denote
+$$
+\alpha \cdot \beta = \Delta^!(\alpha \times \beta)
+\in \CH_*(Y \times_X Z)
+$$
+In the special case where $X = Y = Z$ we obtain a multiplication
+$$
+\CH_*(X) \times \CH_*(X) \to \CH_*(X),\quad
+(\alpha, \beta) \mapsto \alpha \cdot \beta
+$$
+which is called the {\it intersection product}. We observe that
+this product is clearly symmetric. Associativity follows from
+the next lemma.
+
+\begin{lemma}
+\label{lemma-associative-dim-1}
+The product defined above is associative. More precisely, with
+$(S, \delta)$ as above, let $X$ be smooth over $S$,
+let $Y, Z, W$ be schemes locally of finite type over $X$, let
+$\alpha \in \CH_*(Y)$, $\beta \in \CH_*(Z)$, $\gamma \in \CH_*(W)$.
+Then $(\alpha \cdot \beta) \cdot \gamma =
+\alpha \cdot (\beta \cdot \gamma)$ in $\CH_*(Y \times_X Z \times_X W)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-exterior-product-associative-dim-1} we have
+$(\alpha \times \beta) \times \gamma =
+\alpha \times (\beta \times \gamma)$ in $\CH_*(Y \times_S Z \times_S W)$.
+Consider the closed immersions
+$$
+\Delta_{12} : X \times_S X \longrightarrow X \times_S X \times_S X,
+\quad (x, x') \mapsto (x, x, x')
+$$
+and
+$$
+\Delta_{23} : X \times_S X \longrightarrow X \times_S X \times_S X,
+\quad (x, x') \mapsto (x, x', x')
+$$
+Denote $\Delta_{12}^!$ and $\Delta_{23}^!$ the corresponding bivariant
+classes; observe that $\Delta_{12}^!$ is the restriction
+(Remark \ref{remark-restriction-bivariant}) of $\Delta^!$
+to $X \times_S X \times_S X$ by the map $\text{pr}_{12}$ and that
+$\Delta_{23}^!$ is the restriction of $\Delta^!$
+to $X \times_S X \times_S X$ by the map $\text{pr}_{23}$.
+Thus clearly the restriction of $\Delta_{12}^!$ by $\Delta_{23}$
+is $\Delta^!$ and the restriction of $\Delta_{23}^!$ by $\Delta_{12}$ is
+$\Delta^!$ too. Thus by Lemma \ref{lemma-gysin-commutes} we have
+$$
+\Delta^! \circ \Delta_{12}^! =
+\Delta^! \circ \Delta_{23}^!
+$$
+Now we can prove the lemma by the following sequence of equalities:
+\begin{align*}
+(\alpha \cdot \beta) \cdot \gamma
+& =
+\Delta^!(\Delta^!(\alpha \times \beta) \times \gamma) \\
+& =
+\Delta^!(\Delta_{12}^!((\alpha \times \beta) \times \gamma)) \\
+& =
+\Delta^!(\Delta_{23}^!((\alpha \times \beta) \times \gamma)) \\
+& =
+\Delta^!(\Delta_{23}^!(\alpha \times (\beta \times \gamma))) \\
+& =
+\Delta^!(\alpha \times \Delta^!(\beta \times \gamma)) \\
+& =
+\alpha \cdot (\beta \cdot \gamma)
+\end{align*}
+All equalities are clear from the above except perhaps
+for the second and penultimate one. The equation
+$\Delta_{23}^!(\alpha \times (\beta \times \gamma)) =
+\alpha \times \Delta^!(\beta \times \gamma)$ holds by
+Remark \ref{remark-commuting-exterior-dim-1}. Similarly for the second
+equation.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-identify-chow-for-smooth-dim-1}
+Let $(S, \delta)$ be as above. Let $X$ be a smooth scheme over $S$,
+equidimensional of dimension $d$. The map
+$$
+A^p(X) \longrightarrow \CH_{d - p}(X),\quad
+c \longmapsto c \cap [X]_d
+$$
+is an isomorphism. Via this isomorphism composition of bivariant
+classes turns into the intersection product defined above.
+\end{lemma}
+
+\begin{proof}
+Denote $g : X \to S$ the structure morphism.
+The map is the composition of the isomorphisms
+$$
+A^p(X) \to A^{p - d + 1}(X \to S) \to \CH_{d - p}(X)
+$$
+The first is the isomorphism $c \mapsto c \circ g^*$ of
+Proposition \ref{proposition-compute-bivariant}
+and the second is the isomorphism $c \mapsto c \cap [S]_1$ of
+Lemma \ref{lemma-chow-cohomology-towards-base-dim-1}.
+From the proof of Lemma \ref{lemma-chow-cohomology-towards-base-dim-1}
+we see that the inverse to the second arrow sends $\alpha \in \CH_{d - p}(X)$
+to the bivariant class $c_\alpha$ which sends $\beta \in \CH_*(Y)$
+for $Y$ locally of finite type over $S$
+to $\alpha \times \beta$ in $\CH_*(X \times_S Y)$. From the proof of
+Proposition \ref{proposition-compute-bivariant} we see the inverse
+to the first arrow in turn sends $c_\alpha$ to the bivariant class
+which sends $\beta \in \CH_*(Y)$ for $Y \to X$ locally of finite type
+to $\Delta^!(\alpha \times \beta) = \alpha \cdot \beta$.
+From this the final result of the lemma follows.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Todd classes}
+\label{section-todd-classes}
+
+\noindent
+A final class associated to a vector bundle $\mathcal{E}$
+of rank $r$ is its {\it Todd class} $Todd(\mathcal{E})$.
+In terms of the Chern roots $x_1, \ldots, x_r$ it is
+defined as
+$$
+Todd(\mathcal{E})
+=
+\prod\nolimits_{i = 1}^r
+\frac{x_i}{1 - e^{-x_i}}
+$$
+In terms of the Chern classes $c_i = c_i(\mathcal{E})$
+we have
+$$
+Todd(\mathcal{E})
+=
+1
++
+\frac{1}{2}c_1
++
+\frac{1}{12}(c_1^2 + c_2)
++
+\frac{1}{24}c_1c_2
++
+\frac{1}{720}(-c_1^4 + 4c_1^2c_2 + 3c_2^2 + c_1c_3 - c_4)
++
+\ldots
+$$
+We have made the appropriate remarks about denominators
+in the previous section. It is the case that
+given an exact sequence
+$$
+0
+\to
+{\mathcal E}_1
+\to
+{\mathcal E}
+\to
+{\mathcal E}_2
+\to
+0
+$$
+we have
+$$
+Todd({\mathcal E}) = Todd({\mathcal E}_1) Todd({\mathcal E}_2).
+$$
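+
+\medskip\noindent
+For an invertible module $\mathcal{L}$ with $x = c_1(\mathcal{L})$
+the definition reads
+$$
+Todd(\mathcal{L}) = \frac{x}{1 - e^{-x}}
+= 1 + \frac{1}{2}x + \frac{1}{12}x^2 - \frac{1}{720}x^4 + \ldots
+$$
+which agrees with the expansion in terms of Chern classes above upon
+setting $c_1 = x$ and $c_i = 0$ for $i \geq 2$; note in particular
+the vanishing of the term of degree $3$.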
+
+
+
+
+
+
+
+\section{Grothendieck-Riemann-Roch}
+\label{section-grr}
+
+\noindent
+Let $(S, \delta)$ be as in
+Situation \ref{situation-setup}.
+Let $X, Y$ be locally of finite type over $S$.
+Let $\mathcal{E}$ be a finite locally free sheaf on $X$ of rank $r$.
+Let $f : X \to Y$ be a proper smooth morphism.
+Assume that $R^if_*\mathcal{E}$ are locally free
+sheaves on $Y$ of finite rank.
+The Grothendieck-Riemann-Roch theorem says in this
+case that
+$$
+f_*(Todd(T_{X/Y}) ch(\mathcal{E}))
+=
+\sum (-1)^i ch(R^if_*\mathcal{E})
+$$
+Here
+$$
+T_{X/Y} = \SheafHom_{\mathcal{O}_X}(\Omega_{X/Y}, \mathcal{O}_X)
+$$
+is the relative tangent bundle of $X$ over $Y$. If $Y = \Spec(k)$
+where $k$ is a field, then we can restate this as
+$$
+\chi(X, \mathcal{E}) = \deg(Todd(T_{X/k}) ch(\mathcal{E}))
+$$
+The theorem is more general and becomes easier to prove
+when formulated in correct generality. We will return to
+this elsewhere (insert future reference here).
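+
+\medskip\noindent
+For example, if $X$ is a smooth projective curve of genus $g$ over a
+field $k$ and $\mathcal{E} = \mathcal{L}$ is an invertible module, then
+$Todd(T_{X/k}) = 1 + \frac{1}{2}c_1(T_{X/k})$ and
+$ch(\mathcal{L}) = 1 + c_1(\mathcal{L})$, so the displayed formula
+becomes
+$$
+\chi(X, \mathcal{L}) =
+\deg\left(c_1(\mathcal{L}) + \frac{1}{2}c_1(T_{X/k})\right) =
+\deg(\mathcal{L}) + 1 - g
+$$
+using $\deg(c_1(T_{X/k})) = 2 - 2g$. This is the Riemann-Roch theorem
+for curves.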
+
+
+
+
+
+
+
+\section{Change of base scheme}
+\label{section-change-base}
+
+\noindent
+In this section we explain how to compare theories for different
+base schemes.
+
+\begin{situation}
+\label{situation-setup-base-change}
+Here $(S, \delta)$ and $(S', \delta')$ are as in
+Situation \ref{situation-setup}. Furthermore $g : S' \to S$
+is a flat morphism of schemes and $c \in \mathbf{Z}$
+is an integer such that: for all $s \in S$ and every generic point
+$s' \in S'$ of an irreducible component of $g^{-1}(\{s\})$ we have
+$\delta'(s') = \delta(s) + c$.
+\end{situation}
+
+\noindent
+We will see that for a scheme $X$ locally of finite type over $S$
+there is a well defined map $\CH_k(X) \to \CH_{k + c}(X \times_S S')$
+of Chow groups which (by and large) commutes with the operations
+we have defined in this chapter.
+
+\begin{lemma}
+\label{lemma-dimension-base-change}
+In Situation \ref{situation-setup-base-change} let $X \to S$ be locally
+of finite type. Denote $X' \to S'$ the base change by $S' \to S$.
+If $X$ is integral with $\dim_\delta(X) = k$, then
+every irreducible component $Z'$ of $X'$ has $\dim_{\delta'}(Z') = k + c$.
+
+\begin{proof}
+The projection $X' \to X$ is flat as a base change of the flat morphism
+$S' \to S$ (Morphisms, Lemma \ref{morphisms-lemma-base-change-flat}).
+Hence every generic point $x'$ of an irreducible
+component of $X'$ maps to the generic point $x \in X$ (because generalizations
+lift along $X' \to X$ by
+Morphisms, Lemma \ref{morphisms-lemma-generalizations-lift-flat}).
+Let $s \in S$ be the image of $x$.
+Recall that the scheme $S'_s = S' \times_S s$
+has the same underlying topological space as $g^{-1}(\{s\})$
+(Schemes, Lemma \ref{schemes-lemma-fibre-topological}).
+We may view $x'$ as a point of the scheme $S'_s \times_s x$ which
+comes equipped with a monomorphism $S'_s \times_s x \to S' \times_S X$.
+Of course, $x'$ is a generic point of an irreducible component
+of $S'_s \times_s x$ as well.
+Using the flatness of $\Spec(\kappa(x)) \to \Spec(\kappa(s)) = s$
+and arguing as above, we see that $x'$ maps to a generic point $s'$
+of an irreducible component of $g^{-1}(\{s\})$. Hence
+$\delta'(s') = \delta(s) + c$ by assumption.
+We have $\dim_x(X_s) = \dim_{x'}(X'_{s'})$ by
+Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}.
+Since $x$ is a generic point of an irreducible component of $X_s$
+(this is an irreducible scheme but we don't need this) and
+$x'$ is a generic point of an irreducible component of $X'_{s'}$ we conclude
+that $\text{trdeg}_{\kappa(s)}(\kappa(x)) =
+\text{trdeg}_{\kappa(s')}(\kappa(x'))$
+by Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-at-a-point}.
+Then
+$$
+\delta_{X'/S'}(x') = \delta(s') + \text{trdeg}_{\kappa(s')}(\kappa(x')) =
+\delta(s) + c + \text{trdeg}_{\kappa(s)}(\kappa(x)) = \delta_{X/S}(x) + c
+$$
+This proves what we want by Definition \ref{definition-delta-dimension}.
+\end{proof}
+
+\noindent
+In Situation \ref{situation-setup-base-change} let $X \to S$ be locally
+of finite type. Denote $X' \to S'$ the base change by $g : S' \to S$.
+There is a unique homomorphism
+$$
+g^* : Z_k(X) \longrightarrow Z_{k + c}(X')
+$$
+which given an integral closed subscheme $Z \subset X$ of
+$\delta$-dimension $k$ sends $[Z]$ to $[Z \times_S S']_k$.
+This makes sense by Lemma \ref{lemma-dimension-base-change}.
+
+\begin{lemma}
+\label{lemma-pullback-coherent-base-change}
+In Situation \ref{situation-setup-base-change} let $X \to S$ be
+locally of finite type and let $X' \to S'$ be the base change by $S' \to S$.
+\begin{enumerate}
+\item Let $Z \subset X$ be a closed subscheme with
+$\dim_\delta(Z) \leq k$ and base change $Z' \subset X'$. Then we have
+$\dim_{\delta'}(Z') \leq k + c$
+and $[Z']_{k + c} = g^*[Z]_k$ in $Z_{k + c}(X')$.
+\item Let $\mathcal{F}$ be a coherent sheaf on $X$ with
+$\dim_\delta(\text{Supp}(\mathcal{F})) \leq k$ and base
+change $\mathcal{F}'$ on $X'$.
+Then we have $\dim_{\delta'}(\text{Supp}(\mathcal{F}')) \leq k + c$
+and $g^*[\mathcal{F}]_k = [\mathcal{F}']_{k + c}$
+in $Z_{k + c}(X')$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-pullback-coherent}
+and we suggest the reader skip it.
+
+\medskip\noindent
+The statements on dimensions follow from
+Lemma \ref{lemma-dimension-base-change}.
+Part (1) follows from part (2) by Lemma \ref{lemma-cycle-closed-coherent}
+and the fact that the base change of the coherent module $\mathcal{O}_Z$
+is $\mathcal{O}_{Z'}$.
+
+\medskip\noindent
+Proof of (2). As $X$, $X'$ are locally Noetherian we may apply
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-Noetherian} to see
+that $\mathcal{F}$ is of finite type, hence $\mathcal{F}'$ is
+of finite type (Modules, Lemma \ref{modules-lemma-pullback-finite-type}),
+hence $\mathcal{F}'$ is coherent
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-Noetherian} again).
+Thus the lemma makes sense. Let $W \subset X$ be an integral closed
+subscheme of $\delta$-dimension $k$, and let $W' \subset X'$ be
+an integral closed subscheme of $\delta'$-dimension $k + c$ mapping into $W$
+under $X' \to X$. We have to show that the coefficient $n$ of
+$[W']$ in $g^*[\mathcal{F}]_k$ agrees with the coefficient
+$m$ of $[W']$ in $[\mathcal{F}']_{k + c}$. Let $\xi \in W$ and
+$\xi' \in W'$ be the generic points. Let
+$A = \mathcal{O}_{X, \xi}$, $B = \mathcal{O}_{X', \xi'}$
+and set $M = \mathcal{F}_\xi$ as an $A$-module. (Note that
+$M$ has finite length by our dimension assumptions, but we
+actually do not need to verify this. See
+Lemma \ref{lemma-length-finite}.)
+We have $\mathcal{F}'_{\xi'} = B \otimes_A M$.
+Thus we see that
+$$
+n = \text{length}_A(M) \text{length}_B(B/\mathfrak m_AB)
+\quad
+\text{and}
+\quad
+m = \text{length}_B(B \otimes_A M)
+$$
+Thus the equality follows from
+Algebra, Lemma \ref{algebra-lemma-pullback-module}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-base-change}
+In Situation \ref{situation-setup-base-change} let $X \to S$ be locally
+of finite type and let $X' \to S'$ be the base change by $S' \to S$.
+The map $g^* : Z_k(X) \to Z_{k + c}(X')$ above factors through rational
+equivalence to give a map
+$$
+g^* : \CH_k(X) \longrightarrow \CH_{k + c}(X')
+$$
+of chow groups.
+\end{lemma}
+
+\begin{proof}
+Suppose that $\alpha \in Z_k(X)$ is a $k$-cycle which is rationally equivalent
+to zero. By Lemma \ref{lemma-rational-equivalence-family}
+there exists a locally finite family of integral closed subschemes
+$W_i \subset X \times \mathbf{P}^1$ of $\delta$-dimension $k$
+not contained in the divisors
+$(X \times \mathbf{P}^1)_0$ or $(X \times \mathbf{P}^1)_\infty$
+of $X \times \mathbf{P}^1$ such that
+$\alpha = \sum ([(W_i)_0]_k - [(W_i)_\infty]_k)$.
+Thus it suffices to prove for $W \subset X \times \mathbf{P}^1$
+integral closed of $\delta$-dimension $k$ not contained in the divisors
+$(X \times \mathbf{P}^1)_0$ or $(X \times \mathbf{P}^1)_\infty$
+of $X \times \mathbf{P}^1$ we have
+\begin{enumerate}
+\item the base change $W' \subset X' \times \mathbf{P}^1$ satisfies the
+assumptions of Lemma \ref{lemma-closed-subscheme-cross-p1} with
+$k$ replaced by $k + c$, and
+\item $g^*[W_0]_k = [(W')_0]_{k + c}$ and
+$g^*[W_\infty]_k = [(W')_\infty]_{k + c}$.
+\end{enumerate}
+Part (2) follows immediately from
+Lemma \ref{lemma-pullback-coherent-base-change} and the fact that
+$(W')_0$ is the base change of $W_0$ (by associativity of fibre products).
+For part (1), first the statement on dimensions follows
+from Lemma \ref{lemma-dimension-base-change}.
+Then let $w' \in (W')_0$ with image $w \in W_0$
+and $z \in \mathbf{P}^1_S$. Denote $t \in \mathcal{O}_{\mathbf{P}^1_S, z}$
+the usual equation for $0 : S \to \mathbf{P}^1_S$.
+Since $\mathcal{O}_{W, w} \to \mathcal{O}_{W', w'}$ is flat
+and since $t$ is a nonzerodivisor on $\mathcal{O}_{W, w}$
+(as $W$ is integral and $W \not = W_0$) we see that also
+$t$ is a nonzerodivisor in $\mathcal{O}_{W', w'}$. Hence
+$W'$ has no associated points lying on $W'_0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-base-change-pullback}
+In Situation \ref{situation-setup-base-change} let $Y \to X \to S$ be locally
+of finite type and let $Y' \to X' \to S'$ be the base change by $S' \to S$.
+Assume $f : Y \to X$ is flat of relative dimension $r$. Then $f' : Y' \to X'$
+is flat of relative dimension $r$ and the diagram
+$$
+\xymatrix{
+\CH_{k + r}(Y) \ar[r]_{g^*} & \CH_{k + c + r}(Y') \\
+\CH_k(X) \ar[r]^{g^*} \ar[u]^{f^*} & \CH_{k + c}(X') \ar[u]_{(f')^*}
+}
+$$
+of chow groups commutes.
+\end{lemma}
+
+\begin{proof}
+In fact, we claim it commutes on the level of cycles. Namely, let
+$Z \subset X$ be an integral closed subscheme of $\delta$-dimension $k$
+and denote $Z' \subset X'$ its base change. By construction
+we have $g^*[Z] = [Z']_{k + c}$. By Lemma \ref{lemma-pullback-coherent}
+we have $(f')^*g^*[Z] = [Z' \times_{X'} Y']_{k + c + r}$.
+Conversely, we have $f^*[Z] = [Z \times_X Y]_{k + r}$ by
+Definition \ref{definition-flat-pullback}. By
+Lemma \ref{lemma-pullback-coherent-base-change}
+we have $g^*f^*[Z] = [(Z \times_X Y)']_{k + r + c}$.
+Since $(Z \times_X Y)' = Z' \times_{X'} Y'$ by
+associativity of fibre product we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-base-change-pushforward}
+In Situation \ref{situation-setup-base-change} let $Y \to X \to S$ be locally
+of finite type and let $Y' \to X' \to S'$ be the base change by $S' \to S$.
+Assume $f : Y \to X$ is proper. Then $f' : Y' \to X'$ is proper and the diagram
+$$
+\xymatrix{
+\CH_k(Y) \ar[r]_{g^*} \ar[d]_{f_*} & \CH_{k + c}(Y') \ar[d]^{f'_*} \\
+\CH_k(X) \ar[r]^{g^*} & \CH_{k + c}(X')
+}
+$$
+of chow groups commutes.
+\end{lemma}
+
+\begin{proof}
+In fact, we claim it commutes on the level of cycles. Namely, let
+$Z \subset Y$ be an integral closed subscheme of $\delta$-dimension $k$
+and denote $Z' \subset Y'$ its base change. By construction
+we have $g^*[Z] = [Z']_{k + c}$. By Lemma \ref{lemma-cycle-push-sheaf}
+we have $(f')_*g^*[Z] = [f'_*\mathcal{O}_{Z'}]_{k + c}$.
+By the same lemma we have $f_*[Z] = [f_*\mathcal{O}_Z]_k$. By
+Lemma \ref{lemma-pullback-coherent-base-change}
+we have $g^*f_*[Z] = [(X' \to X)^*f_*\mathcal{O}_Z]_{k + c}$.
+Thus it suffices to show that
+$$
+(X' \to X)^*f_*\mathcal{O}_Z \cong f'_*\mathcal{O}_{Z'}
+$$
+as coherent modules on $X'$. As $X' \to X$ is flat and as
+$\mathcal{O}_{Z'} = (Y' \to Y)^*\mathcal{O}_Z$, this
+follows from flat base change, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-base-change-c1}
+In Situation \ref{situation-setup-base-change} let $X \to S$ be locally
+of finite type and let $X' \to S'$ be the base change by $S' \to S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module with
+base change $\mathcal{L}'$ on $X'$. Then the
+diagram
+$$
+\xymatrix{
+\CH_k(X) \ar[r]_{g^*} \ar[d]_{c_1(\mathcal{L}) \cap -} &
+\CH_{k + c}(X') \ar[d]^{c_1(\mathcal{L}') \cap -} \\
+\CH_{k - 1}(X) \ar[r]^{g^*} & \CH_{k + c - 1}(X')
+}
+$$
+of chow groups commutes.
+\end{lemma}
+
+\begin{proof}
+Let $p : L \to X$ be the line bundle associated to $\mathcal{L}$
+with zero section $o : X \to L$. For $\alpha \in \CH_k(X)$ we
+know that $\beta = c_1(\mathcal{L}) \cap \alpha$
+is the unique element of $\CH_{k - 1}(X)$ such that
+$o_*\alpha = - p^*\beta$, see Lemmas \ref{lemma-linebundle} and
+\ref{lemma-linebundle-formulae}.
+The same characterization holds after pullback. Hence the lemma follows from
+Lemmas \ref{lemma-pullback-base-change-pullback} and
+\ref{lemma-pullback-base-change-pushforward}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-base-change-chern-classes}
+In Situation \ref{situation-setup-base-change} let $X \to S$ be locally
+of finite type and let $X' \to S'$ be the base change by $S' \to S$.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module of
+rank $r$ with base change $\mathcal{E}'$ on $X'$. Then the
+diagram
+$$
+\xymatrix{
+\CH_k(X) \ar[r]_{g^*} \ar[d]_{c_i(\mathcal{E}) \cap -} &
+\CH_{k + c}(X') \ar[d]^{c_i(\mathcal{E}') \cap -} \\
+\CH_{k - i}(X) \ar[r]^{g^*} & \CH_{k + c - i}(X')
+}
+$$
+of chow groups commutes for all $i$.
+\end{lemma}
+
+\begin{proof}
+Set $P = \mathbf{P}(\mathcal{E})$. The base change $P'$ of $P$
+is equal to $\mathbf{P}(\mathcal{E}')$. Since we already know that
+flat pullback and cupping with $c_1$ of an invertible module
+commute with base change (Lemmas \ref{lemma-pullback-base-change-pullback} and
+\ref{lemma-pullback-base-change-c1})
+the lemma follows from the characterization of capping
+with $c_i(\mathcal{E})$ given in Lemma \ref{lemma-determine-intersections}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-base-change}
+Let $(S, \delta)$, $(S', \delta')$, $(S'', \delta'')$ be as in
+Situation \ref{situation-setup}. Let $g : S' \to S$ and $g' : S'' \to S'$
+be flat morphisms of schemes and let $c, c' \in \mathbf{Z}$
+be integers such that $S, \delta, S', \delta', g, c$ and
+$S', \delta', S'', g', c'$ are as in
+Situation \ref{situation-setup-base-change}.
+Let $X \to S$ be locally of finite type and denote $X' \to S'$
+and $X'' \to S''$ the base changes by $S' \to S$ and $S'' \to S$.
+Then $S, \delta, S'', \delta'', g \circ g', c + c'$ is as in
+Situation \ref{situation-setup-base-change} and
+the maps $g^* : \CH_k(X) \to \CH_{k + c}(X')$ and
+$(g')^* : \CH_{k + c}(X') \to \CH_{k + c + c'}(X'')$ of
+Lemma \ref{lemma-pullback-base-change}
+compose to give the map $(g \circ g')^* : \CH_k(X) \to \CH_{k + c + c'}(X'')$
+of Lemma \ref{lemma-pullback-base-change}.
+\end{lemma}
+
+\begin{proof}
+Let $s \in S$ and let $s'' \in S''$ be a generic point of an irreducible
+component of $(g \circ g')^{-1}(\{s\})$. Set $s' = g'(s'')$.
+Clearly, $s''$ is a generic point of an irreducible component of
+$(g')^{-1}(\{s'\})$. Moreover, since $g'$ is flat and hence generalizations
+lift along $g'$ (Morphisms, Lemma \ref{morphisms-lemma-base-change-flat})
+we see that also $s'$ is a generic point of an irreducible component
+of $g^{-1}(\{s\})$. Thus by assumption $\delta'(s') = \delta(s) + c$
+and $\delta''(s'') = \delta'(s') + c'$. We conclude
+$\delta''(s'') = \delta(s) + c + c'$ and the first part of the
+statement is true.
+
+\medskip\noindent
+For the second part, let $Z \subset X$ be an integral closed subscheme
+of $\delta$-dimension $k$. Denote $Z' \subset X'$ and $Z'' \subset X''$
+the base changes. By definition we have $g^*[Z] = [Z']_{k + c}$.
+By Lemma \ref{lemma-pullback-coherent-base-change} we have
+$(g')^*[Z']_{k + c} = [Z'']_{k + c + c'}$. This proves the final statement.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-chow-limit}
+In Situation \ref{situation-setup-base-change} assume $c = 0$
+and assume that $S' = \lim_{i \in I} S_i$ is a filtered limit
+of schemes $S_i$ affine over $S$ such that
+\begin{enumerate}
+\item with $\delta_i$ equal to $S_i \to S \xrightarrow{\delta} \mathbf{Z}$
+the pair $(S_i, \delta_i)$ is as in Situation \ref{situation-setup},
+\item $S_i, \delta_i, S, \delta, S \to S_i, c = 0$ is as in
+Situation \ref{situation-setup-base-change},
+\item $S_i, \delta_i, S_{i'}, \delta_{i'}, S_i \to S_{i'}, c = 0$
+for $i \geq i'$ is as in Situation \ref{situation-setup-base-change}.
+\end{enumerate}
+Then for a quasi-compact scheme $X$ of finite type over $S$
+with base change $X'$ and $X_i$ by $S' \to S$ and $S_i \to S$ we have
+$\CH_k(X') = \colim \CH_k(X_i)$.
+\end{lemma}
+
+\begin{proof}
+By the result of Lemma \ref{lemma-compose-base-change} we obtain
+a directed system of chow groups $\CH_k(X_i)$ and a map
+$\colim \CH_k(X_i) \to \CH_k(X')$.
+We may replace $S$ by a quasi-compact open through which $X \to S$
+factors, hence we may and do assume all the schemes occurring in
+this proof are Noetherian (and hence quasi-compact and quasi-separated).
+
+\medskip\noindent
+Let us show that this map is surjective. Namely, let $Z' \subset X'$
+be an integral closed subscheme of $\delta'$-dimension $k$. By
+Limits, Lemma \ref{limits-lemma-descend-finite-presentation}
+we can find an $i$ and a morphism $Z_i \to X_i$ of finite presentation
+whose base change is $Z'$. After increasing $i$ we may assume $Z_i$
+is a closed subscheme of $X_i$, see
+Limits, Lemma \ref{limits-lemma-descend-closed-immersion-finite-presentation}.
+Then $Z' \to X_i$ factors through $Z_i$ and we may replace $Z_i$
+by the scheme theoretic image of $Z' \to X_i$. In this way we see
+that we may assume $Z_i$ is an integral closed subscheme of $X_i$.
+By Lemma \ref{lemma-dimension-base-change} we conclude that
+$\dim_{\delta_i}(Z_i) = \dim_{\delta'}(Z') = k$.
+Thus $\CH_k(X_i) \to \CH_k(X')$ maps $[Z_i]$ to $[Z']$ and
+we conclude surjectivity holds.
+
+\medskip\noindent
+Let us show that our map is injective. Let $\alpha_i \in \CH_k(X_i)$
+be a cycle whose image $\alpha' \in \CH_k(X')$ is zero.
+Then there exist integral closed subschemes
+$W'_l \subset X'$, $l = 1, \ldots, r$ of $\delta'$-dimension $k + 1$
+and nonzero rational functions $f'_l$ on $W'_l$
+such that $\alpha' = \sum_{l = 1, \ldots, r} \text{div}_{W'_l}(f'_l)$.
+Arguing as above we can find an $i$ and integral closed subschemes
+$W_{i, l} \subset X_i$ of $\delta_i$-dimension $k + 1$
+whose base change is $W'_l$.
+After increasing $i$ we may assume we have rational functions
+$f_{i, l}$ on $W_{i, l}$. Namely, we may think of $f'_l$ as a
+section of the structure sheaf over a nonempty open $U'_l \subset W'_l$,
+we can descend these opens by Limits, Lemma \ref{limits-lemma-descend-opens}
+and after increasing $i$ we may descend $f'_l$ by
+Limits, Lemma \ref{limits-lemma-descend-section}.
+We claim that
+$$
+\alpha_i = \sum\nolimits_{l = 1, \ldots, r} \text{div}_{W_{i, l}}(f_{i, l})
+$$
+after possibly increasing $i$.
+
+\medskip\noindent
+To prove the claim, let $Z'_{l, j} \subset W'_l$ be a finite
+collection of integral closed subschemes of $\delta'$-dimension $k$
+such that $f'_l$ is an invertible regular function outside
+$\bigcup_j Z'_{l, j}$. After increasing $i$ (by the arguments above)
+we may assume there exist integral closed subschemes $Z_{i, l, j} \subset W_{i, l}$
+of $\delta_i$-dimension $k$ such that $f_{i, l}$ is an
+invertible regular function outside $\bigcup_j Z_{i, l, j}$.
+Then we may write
+$$
+\text{div}_{W'_l}(f'_l) = \sum n_{l, j} [Z'_{l, j}]
+$$
+and
+$$
+\text{div}_{W_{i, l}}(f_{i, l}) = \sum n_{i, l, j} [Z_{i, l, j}]
+$$
+To prove the claim it suffices to show that $n_{l, j} = n_{i, l, j}$.
+Namely, this will imply that $\beta_i =
+\alpha_i - \sum\nolimits_{l = 1, \ldots, r} \text{div}_{W_{i, l}}(f_{i, l})$
+is a cycle on $X_i$ whose pullback to $X'$ is zero as a cycle!
+It follows that $\beta_i$ pulls back to zero as a cycle on $X_{i'}$
+for some $i' \geq i$ by an easy argument we omit.
+
+\medskip\noindent
+To prove the equality $n_{l, j} = n_{i, l, j}$ we choose a
+generic point $\xi' \in Z'_{l, j}$ and we denote
+$\xi \in Z_{i, l, j}$ the image which is a generic point also.
+Then the local ring map
+$$
+\mathcal{O}_{W_{i, l}, \xi}
+\longrightarrow
+\mathcal{O}_{W'_l, \xi'}
+$$
+is flat as $W'_l \to W_{i, l}$ is the base change of the flat
+morphism $S' \to S_i$. We also have
+$\mathfrak m_\xi \mathcal{O}_{W'_l, \xi'} = \mathfrak m_{\xi'}$
+because $Z_{i, l, j}$ pulls back to $Z'_{l, j}$! Thus the equality of
+$$
+n_{l, j} = \text{ord}_{Z'_{l, j}}(f'_l) =
+\text{ord}_{\mathcal{O}_{W'_l, \xi'}}(f'_l)
+\quad\text{and}\quad
+n_{i, l, j} = \text{ord}_{Z_{i, l, j}}(f_{i, l}) =
+\text{ord}_{\mathcal{O}_{W_{i, l}, \xi}}(f_{i, l})
+$$
+follows from Algebra, Lemma \ref{algebra-lemma-pullback-module}
+and the construction of $\text{ord}$ in
+Algebra, Section \ref{algebra-section-orders-of-vanishing}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Appendix A: Alternative approach to key lemma}
+\label{section-appendix-A}
+
+\noindent
+In this appendix we first define determinants $\det_\kappa(M)$
+of finite length modules $M$ over local rings $(R, \mathfrak m, \kappa)$,
+see Subsection \ref{subsection-determinants-finite-length}.
+The determinant $\det_\kappa(M)$ is a $1$-dimensional $\kappa$-vector space.
+We use this in Subsection \ref{subsection-periodic-complexes-determinants}
+to define the determinant $\det_\kappa(M, \varphi, \psi) \in \kappa^*$
+of an exact $(2, 1)$-periodic complex $(M, \varphi, \psi)$
+with $M$ of finite length. In Subsection \ref{subsection-symbols}
+we use these determinants to construct a tame symbol
+$d_R(a, b) = \det_\kappa(R/ab, a, b)$ for a pair of nonzerodivisors
+$a, b \in R$ when $R$ is Noetherian of dimension $1$.
+Although there is no doubt that
+$$
+d_R(a, b) = \partial_R(a, b)
+$$
+where $\partial_R$ is as in Section \ref{section-tame-symbol},
+we have not (yet) added the verification. The advantage of the
+tame symbol as constructed in this appendix is that it extends
+(for example) to pairs of injective endomorphisms $\varphi, \psi$
+of a finite $R$-module $M$ of dimension $1$ such that
+$\varphi(\psi(M)) = \psi(\varphi(M))$. In
+Subsection \ref{subsection-length-determinant}
+we relate Herbrand quotients and determinants.
+An easy to state version of the main result
+(Proposition \ref{proposition-length-determinant-periodic-complex})
+is the formula
+$$
+-e_R(M, \varphi, \psi) =
+\text{ord}_R(\det\nolimits_K(M_K, \varphi, \psi))
+$$
+when $(M, \varphi, \psi)$ is a $(2, 1)$-periodic complex
+whose Herbrand quotient $e_R$ (Definition \ref{definition-periodic-length})
+is defined
+over a $1$-dimensional Noetherian local domain $R$ with fraction field $K$.
+We use this proposition to give an alternative proof of the key lemma
+(Lemma \ref{lemma-milnor-gersten-low-degree})
+for the tame symbol constructed in this appendix, see
+Lemma \ref{lemma-secondary-ramification}.
+
+
+\subsection{Determinants of finite length modules}
+\label{subsection-determinants-finite-length}
+
+\noindent
+The material in this section is related to the material in
+the paper \cite{determinant} and to the material in the
+thesis \cite{Joe}.
+
+\medskip\noindent
+Let $(R, \mathfrak m, \kappa)$ be a local ring. Let
+$\varphi : M \to M$ be an $R$-linear endomorphism of
+a finite length $R$-module $M$. In More on Algebra, Section
+\ref{more-algebra-section-determinants-finite-length}
+we have already defined the determinant $\det_\kappa(\varphi)$
+(and the trace and the characteristic polynomial)
+of $\varphi$ relative to $\kappa$. In this section, we will
+construct a canonical $1$-dimensional $\kappa$-vector space
+$\det_\kappa(M)$ such that
+$\det_\kappa(\varphi : M \to M) : \det_\kappa(M) \to \det_\kappa(M)$
+is equal to multiplication by $\det_\kappa(\varphi)$.
+If $M$ is annihilated by $\mathfrak m$, then $M$ can be viewed
+as a finite dimensional $\kappa$-vector space and then we have
+$\det_\kappa(M) = \wedge^n_\kappa(M)$ where $n = \dim_\kappa(M)$.
+Our construction will generalize this to all finite length modules
+over $R$ and if $R$ contains its residue field, then the determinant
+$\det_\kappa(M)$ will be given by the usual determinant in a suitable
+sense, see Remark \ref{remark-explain-determinant}.
+
+\begin{definition}
+\label{definition-determinant}
+Let $R$ be a local ring with maximal ideal $\mathfrak m$ and
+residue field $\kappa$. Let $M$ be a finite length $R$-module.
+Say $l = \text{length}_R(M)$.
+\begin{enumerate}
+\item Given elements $x_1, \ldots, x_r \in M$ we denote
+$\langle x_1, \ldots, x_r \rangle = Rx_1 + \ldots + Rx_r$ the
+$R$-submodule of $M$ generated by $x_1, \ldots, x_r$.
+\item We will say an $l$-tuple of elements
+$(e_1, \ldots, e_l)$ of $M$ is {\it admissible} if
+$\mathfrak m e_i \subset \langle e_1, \ldots, e_{i - 1} \rangle$
+for $i = 1, \ldots, l$.
+\item A {\it symbol} $[e_1, \ldots, e_l]$ will mean
+$(e_1, \ldots, e_l)$ is an admissible $l$-tuple.
+\item An {\it admissible relation} between symbols is one of the following:
+\begin{enumerate}
+\item if $(e_1, \ldots, e_l)$ is an admissible sequence and
+for some $1 \leq a \leq l$ we have
+$e_a \in \langle e_1, \ldots, e_{a - 1}\rangle$, then
+$[e_1, \ldots, e_l] = 0$,
+\item if $(e_1, \ldots, e_l)$ is an admissible sequence and
+for some $1 \leq a \leq l$ we have $e_a = \lambda e'_a + x$
+with $\lambda \in R^*$, and
+$x \in \langle e_1, \ldots, e_{a - 1}\rangle$, then
+$$
+[e_1, \ldots, e_l] =
+\overline{\lambda} [e_1, \ldots, e_{a - 1}, e'_a, e_{a + 1}, \ldots, e_l]
+$$
+where $\overline{\lambda} \in \kappa^*$ is the image of $\lambda$ in
+the residue field, and
+\item if $(e_1, \ldots, e_l)$ is an admissible sequence and
+$\mathfrak m e_a \subset \langle e_1, \ldots, e_{a - 2}\rangle$ then
+$$
+[e_1, \ldots, e_l] =
+- [e_1, \ldots, e_{a - 2}, e_a, e_{a - 1}, e_{a + 1}, \ldots, e_l].
+$$
+\end{enumerate}
+\item
+We define the {\it determinant of the finite length $R$-module $M$} to be
+$$
+\det\nolimits_\kappa(M) =
+\left\{
+\frac{\kappa\text{-vector space generated by symbols}}
+{\kappa\text{-linear combinations of admissible relations}}
+\right\}
+$$
+\end{enumerate}
+\end{definition}
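+
+\noindent
+To get a feeling for the definition, take $R = \mathbf{Z}_{(p)}$,
+so that $\kappa = \mathbf{F}_p$, and take $M = \mathbf{Z}/p^2\mathbf{Z}$,
+which has length $l = 2$. The pair $(\overline{p}, \overline{1})$ is
+admissible because $\mathfrak m \overline{p} = 0$ and
+$\mathfrak m \overline{1} \subset \langle \overline{p} \rangle$, whereas
+$(\overline{1}, \overline{p})$ is not admissible because
+$p \cdot \overline{1} = \overline{p}$ is nonzero. The symbol
+$[\overline{p}, \overline{1}]$ generates $\det_\kappa(M)$; see the
+lemmas below.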
+
+\noindent
+We stress that always $l = \text{length}_R(M)$. We also stress that
+it does not follow that the symbol $[e_1, \ldots, e_l]$ is
+additive in the entries (this will typically not be the case).
+Before we can show that the determinant $\det_\kappa(M)$ actually
+has dimension $1$ we have to show that it has dimension at most $1$.
+
+\begin{lemma}
+\label{lemma-dimension-at-most-one}
+With notations as above we have $\dim_\kappa(\det_\kappa(M)) \leq 1$.
+\end{lemma}
+
+\begin{proof}
+Fix an admissible sequence $(f_1, \ldots, f_l)$ of $M$ such that
+$$
+\text{length}_R(\langle f_1, \ldots, f_i\rangle) = i
+$$
+for $i = 1, \ldots, l$. Such an admissible sequence exists exactly because
+$M$ has length $l$. We will show that any element of
+$\det_\kappa(M)$ is a $\kappa$-multiple of the symbol
+$[f_1, \ldots, f_l]$. This will prove the lemma.
+
+\medskip\noindent
+Let $(e_1, \ldots, e_l)$ be an admissible sequence of $M$.
+It suffices to show that $[e_1, \ldots, e_l]$ is a multiple
+of $[f_1, \ldots, f_l]$. First assume that
+$\langle e_1, \ldots, e_l\rangle \not = M$. Then there exists
+an $i \in \{1, \ldots, l\}$ such that
+$e_i \in \langle e_1, \ldots, e_{i - 1}\rangle$. It immediately
+follows from the first admissible relation that
+$[e_1, \ldots, e_l] = 0$ in $\det_\kappa(M)$.
+Hence we may assume that $\langle e_1, \ldots, e_l\rangle = M$.
+In particular there exists a smallest index $i \in \{1, \ldots, l\}$
+such that $f_1 \in \langle e_1, \ldots, e_i\rangle$. This means
+that $e_i = \lambda f_1 + x$ with
+$x \in \langle e_1, \ldots, e_{i - 1}\rangle$ and $\lambda \in R^*$.
+By the second admissible relation this means that
+$[e_1, \ldots, e_l] =
+\overline{\lambda}[e_1, \ldots, e_{i - 1}, f_1, e_{i + 1}, \ldots, e_l]$.
+Note that $\mathfrak m f_1 = 0$. Hence by applying the third
+admissible relation $i - 1$ times we see that
+$$
+[e_1, \ldots, e_l] =
+(-1)^{i - 1}\overline{\lambda}
+[f_1, e_1, \ldots, e_{i - 1}, e_{i + 1}, \ldots, e_l].
+$$
+Note that it is also the case that
+$ \langle f_1, e_1, \ldots, e_{i - 1}, e_{i + 1}, \ldots, e_l\rangle = M$.
+By induction suppose we have proven that our original
+symbol is equal to a scalar times
+$$
+[f_1, \ldots, f_j, e_{j + 1}, \ldots, e_l]
+$$
+for some admissible sequence $(f_1, \ldots, f_j, e_{j + 1}, \ldots, e_l)$
+whose elements generate $M$, i.e., with
+$\langle f_1, \ldots, f_j, e_{j + 1}, \ldots, e_l\rangle = M$.
+Then we find the smallest $i$ such that
+$f_{j + 1} \in \langle f_1, \ldots, f_j, e_{j + 1}, \ldots, e_i\rangle$
+and we go through the same process as above to see that
+$$
+[f_1, \ldots, f_j, e_{j + 1}, \ldots, e_l]
+=
+(\text{scalar}) [f_1, \ldots, f_j, f_{j + 1}, e_{j + 1},
+\ldots, \hat{e_i}, \ldots, e_l]
+$$
+Continuing in this vein we obtain the desired result.
+\end{proof}
+
+\noindent
+Before we show that $\det_\kappa(M)$ always has dimension $1$,
+let us show that it agrees with the usual top exterior power in
+the case the module is a vector space over $\kappa$.
+
+\begin{lemma}
+\label{lemma-compare-det}
+Let $R$ be a local ring with maximal ideal $\mathfrak m$ and
+residue field $\kappa$. Let $M$ be a finite length $R$-module
+which is annihilated by $\mathfrak m$. Let $l = \dim_\kappa(M)$.
+Then the map
+$$
+\det\nolimits_\kappa(M) \longrightarrow \wedge^l_\kappa(M),
+\quad
+[e_1, \ldots, e_l] \longmapsto e_1 \wedge \ldots \wedge e_l
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+It is clear that the rule described in the lemma gives a $\kappa$-linear
+map since all of the admissible relations are satisfied by the usual
+symbols $e_1 \wedge \ldots \wedge e_l$. It is also clearly a surjective
+map. Since by Lemma \ref{lemma-dimension-at-most-one} the left hand side
+has dimension at most one
+we see that the map is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-determinant-dimension-one}
+Let $R$ be a local ring with maximal ideal $\mathfrak m$ and
+residue field $\kappa$. Let $M$ be a finite length $R$-module.
+The determinant $\det_\kappa(M)$ defined above is a $\kappa$-vector
+space of dimension $1$. It is generated by the symbol
+$[f_1, \ldots, f_l]$ for any admissible sequence such
+that $\langle f_1, \ldots, f_l \rangle = M$.
+\end{lemma}
+
+\begin{proof}
+We know $\det_\kappa(M)$ has dimension at most $1$, and in fact that it
+is generated by $[f_1, \ldots, f_l]$, by
+Lemma \ref{lemma-dimension-at-most-one} and its proof.
+We will show by induction on $l = \text{length}(M)$
+that it is nonzero. For $l = 1$ it follows from Lemma \ref{lemma-compare-det}.
+Assume $l > 1$ and choose a nonzero element $f \in M$
+with $\mathfrak m f = 0$. Set $\overline{M} = M /\langle f \rangle$,
+and denote the quotient map $x \mapsto \overline{x}$.
+We will define a surjective map
+$$
+\psi : \det\nolimits_\kappa(M) \to \det\nolimits_\kappa(\overline{M})
+$$
+which will prove the lemma since by induction the determinant of
+$\overline{M}$ is nonzero.
+
+\medskip\noindent
+We define $\psi$ on symbols as follows.
+Let $(e_1, \ldots, e_l)$ be an admissible sequence.
+If $f \not \in \langle e_1, \ldots, e_l \rangle$ then
+we simply set $\psi([e_1, \ldots, e_l]) = 0$.
+If $f \in \langle e_1, \ldots, e_l \rangle$ then we choose
+an $i$ minimal such that $f \in \langle e_1, \ldots, e_i \rangle$.
+We may write $e_i = \lambda f + x$ for some unit $\lambda \in R$
+and $x \in \langle e_1, \ldots, e_{i - 1} \rangle$.
+In this case we set
+$$
+\psi([e_1, \ldots, e_l]) =
+(-1)^i
+\overline{\lambda}[\overline{e}_1, \ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1}, \ldots, \overline{e}_l].
+$$
+Note that it is indeed the case that
+$(\overline{e}_1, \ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1}, \ldots, \overline{e}_l)$
+is an admissible sequence in $\overline{M}$, so this makes sense.
+Let us show that extending this rule $\kappa$-linearly to
+linear combinations of symbols does indeed lead to a map on
+determinants. To do this we have to show that the admissible
+relations are mapped to zero.
+
+\medskip\noindent
+Type (a) relations. Suppose we have $(e_1, \ldots, e_l)$ an
+admissible sequence and for some $1 \leq a \leq l$ we have
+$e_a \in \langle e_1, \ldots, e_{a - 1}\rangle$.
+Suppose that $f \in \langle e_1, \ldots, e_i\rangle$ with $i$ minimal.
+Then $i \not = a$ and
+$\overline{e}_a \in \langle \overline{e}_1, \ldots,
+\hat{\overline{e}_i}, \ldots, \overline{e}_{a - 1}\rangle$ if $i < a$
+or
+$\overline{e}_a \in \langle \overline{e}_1, \ldots,
+\overline{e}_{a - 1}\rangle$ if $i > a$.
+Thus the same admissible relation for $\det_\kappa(\overline{M})$ forces
+the symbol $[\overline{e}_1, \ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1}, \ldots, \overline{e}_l]$
+to be zero as desired.
+
+\medskip\noindent
+Type (b) relations. Suppose we have $(e_1, \ldots, e_l)$ an
+admissible sequence and for some $1 \leq a \leq l$ we have
+$e_a = \lambda e'_a + x$ with $\lambda \in R^*$, and
+$x \in \langle e_1, \ldots, e_{a - 1}\rangle$.
+Suppose that $f \in \langle e_1, \ldots, e_i\rangle$ with $i$ minimal.
+Say $e_i = \mu f + y$ with $y \in \langle e_1, \ldots, e_{i - 1}\rangle$.
+If $i < a$ then the desired equality is
+$$
+(-1)^i
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1},
+\ldots,
+\overline{e}_l]
+=
+(-1)^i
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1},
+\ldots,
+\overline{e}_{a - 1},
+\overline{e}'_a,
+\overline{e}_{a + 1},
+\ldots,
+\overline{e}_l]
+$$
+which follows from $\overline{e}_a = \lambda \overline{e}'_a + \overline{x}$
+and the corresponding admissible relation for $\det_\kappa(\overline{M})$.
+If $i > a$ then the desired equality is
+$$
+(-1)^i
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1},
+\ldots,
+\overline{e}_l]
+=
+(-1)^i
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{a - 1},
+\overline{e}'_a,
+\overline{e}_{a + 1},
+\ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1},
+\ldots,
+\overline{e}_l]
+$$
+which follows from $\overline{e}_a = \lambda \overline{e}'_a + \overline{x}$
+and the corresponding admissible relation for $\det_\kappa(\overline{M})$.
+The interesting case is when $i = a$. In this case we have
+$e_a = \lambda e'_a + x = \mu f + y$. Hence also
+$e'_a = \lambda^{-1}(\mu f + y - x)$. Thus we see that
+$$
+\psi([e_1, \ldots, e_l])
+= (-1)^i \overline{\mu}
+[\overline{e}_1, \ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1}, \ldots, \overline{e}_l]
+=
+\psi(
+\overline{\lambda}
+[e_1, \ldots, e_{a - 1}, e'_a, e_{a + 1}, \ldots, e_l]
+)
+$$
+as desired.
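+Indeed, since $\lambda$ is a unit we have
+$\langle e_1, \ldots, e_{a - 1}, e'_a\rangle =
+\langle e_1, \ldots, e_{a - 1}, e_a\rangle$, so $i = a$ is still the
+minimal index for the sequence
+$(e_1, \ldots, e_{a - 1}, e'_a, e_{a + 1}, \ldots, e_l)$, and the
+coefficient of $f$ in $e'_a = \lambda^{-1}(\mu f + y - x)$ is
+$\lambda^{-1}\mu$. Hence
+$$
+\psi(
+\overline{\lambda}
+[e_1, \ldots, e_{a - 1}, e'_a, e_{a + 1}, \ldots, e_l]
+)
+=
+\overline{\lambda}
+(-1)^i
+\overline{\lambda^{-1}\mu}
+[\overline{e}_1, \ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1}, \ldots, \overline{e}_l]
+=
+(-1)^i
+\overline{\mu}
+[\overline{e}_1, \ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1}, \ldots, \overline{e}_l].
+$$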
+
+\medskip\noindent
+Type (c) relations. Suppose that $(e_1, \ldots, e_l)$
+is an admissible sequence and
+$\mathfrak m e_a \subset \langle e_1, \ldots, e_{a - 2}\rangle$.
+Suppose that $f \in \langle e_1, \ldots, e_i\rangle$ with $i$ minimal.
+Say $e_i = \lambda f + x$ with $x \in \langle e_1, \ldots, e_{i - 1}\rangle$.
+We distinguish $4$ cases:
+
+\medskip\noindent
+Case 1: $i < a - 1$. The desired equality is
+\begin{align*}
+& (-1)^i
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1},
+\ldots,
+\overline{e}_l] \\
+& =
+(-1)^{i + 1}
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1},
+\ldots,
+\overline{e}_{a - 2},
+\overline{e}_a,
+\overline{e}_{a - 1},
+\overline{e}_{a + 1},
+\ldots,
+\overline{e}_l]
+\end{align*}
+which follows from the type (c) admissible relation for
+$\det_\kappa(\overline{M})$.
+
+\medskip\noindent
+Case 2: $i > a$. The desired equality is
+\begin{align*}
+& (-1)^i
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1},
+\ldots,
+\overline{e}_l] \\
+& =
+(-1)^{i + 1}
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{a - 2},
+\overline{e}_a,
+\overline{e}_{a - 1},
+\overline{e}_{a + 1},
+\ldots,
+\overline{e}_{i - 1},
+\overline{e}_{i + 1},
+\ldots,
+\overline{e}_l]
+\end{align*}
+which follows from the type (c) admissible relation for
+$\det_\kappa(\overline{M})$.
+
+\medskip\noindent
+Case 3: $i = a$. We write $e_a = \lambda f + \mu e_{a - 1} + y$
+with $y \in \langle e_1, \ldots, e_{a - 2}\rangle$. Then
+$$
+\psi([e_1, \ldots, e_l]) =
+(-1)^a
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{a - 1},
+\overline{e}_{a + 1},
+\ldots,
+\overline{e}_l]
+$$
+by definition. If $\overline{\mu}$ is nonzero, then we have
+$e_{a - 1} = - \mu^{-1} \lambda f + \mu^{-1}e_a - \mu^{-1} y$
+and we obtain
+$$
+\psi(-[e_1, \ldots, e_{a - 2}, e_a, e_{a - 1}, e_{a + 1}, \ldots, e_l]) =
+(-1)^a
+\overline{\mu^{-1}\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{a - 2},
+\overline{e}_a,
+\overline{e}_{a + 1},
+\ldots,
+\overline{e}_l]
+$$
+by definition. Since in $\overline{M}$ we have
+$\overline{e}_a = \mu \overline{e}_{a - 1} + \overline{y}$ we see
+the two outcomes are equal by relation (b) for $\det_\kappa(\overline{M})$.
+If on the other hand $\overline{\mu}$ is zero, then we can write
+$e_a = \lambda f + y$ with $y \in \langle e_1, \ldots, e_{a - 2}\rangle$
+and we have
+$$
+\psi(-[e_1, \ldots, e_{a - 2}, e_a, e_{a - 1}, e_{a + 1}, \ldots, e_l]) =
+(-1)^a
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{a - 1},
+\overline{e}_{a + 1},
+\ldots,
+\overline{e}_l]
+$$
+which is equal to $\psi([e_1, \ldots, e_l])$.
+
+\medskip\noindent
+Case 4: $i = a - 1$. Here we have
+$$
+\psi([e_1, \ldots, e_l]) =
+(-1)^{a - 1}
+\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{a - 2},
+\overline{e}_a,
+\ldots,
+\overline{e}_l]
+$$
+by definition. If $f \not \in \langle e_1, \ldots, e_{a - 2}, e_a \rangle$
+then
+$$
+\psi(-[e_1, \ldots, e_{a - 2}, e_a, e_{a - 1}, e_{a + 1}, \ldots, e_l]) =
+(-1)^{a + 1}\overline{\lambda}
+[\overline{e}_1,
+\ldots,
+\overline{e}_{a - 2},
+\overline{e}_a,
+\ldots,
+\overline{e}_l]
+$$
+Since $(-1)^{a - 1} = (-1)^{a + 1}$ the two expressions are the same.
+Finally, assume $f \in \langle e_1, \ldots, e_{a - 2}, e_a \rangle$.
+In this case we see that $e_{a - 1} = \lambda f + x$ with
+$x \in \langle e_1, \ldots, e_{a - 2}\rangle$ and
+$e_a = \mu f + y$ with $y \in \langle e_1, \ldots, e_{a - 2}\rangle$
+for units $\lambda, \mu \in R$.
+We conclude that both
+$e_a \in \langle e_1, \ldots, e_{a - 1} \rangle$ and
+$e_{a - 1} \in \langle e_1, \ldots, e_{a - 2}, e_a\rangle$.
+In this case a relation of type (a) applies to both
+$[e_1, \ldots, e_l]$ and
+$[e_1, \ldots, e_{a - 2}, e_a, e_{a - 1}, e_{a + 1}, \ldots, e_l]$
+and by the compatibility of $\psi$ with these relations, shown above, we see that both
+$$
+\psi([e_1, \ldots, e_l])
+\quad\text{and}\quad
+\psi([e_1, \ldots, e_{a - 2}, e_a, e_{a - 1}, e_{a + 1}, \ldots, e_l])
+$$
+are zero, as desired.
+
+\medskip\noindent
+At this point we have shown that $\psi$ is well defined, and all that remains
+is to show that it is surjective. To see this let
+$(\overline{f}_2, \ldots, \overline{f}_l)$ be an admissible sequence
+in $\overline{M}$. We can choose lifts $f_2, \ldots, f_l \in M$, and
+then $(f, f_2, \ldots, f_l)$ is an admissible sequence in $M$.
+Since $\psi([f, f_2, \ldots, f_l]) = [f_2, \ldots, f_l]$ we win.
+\end{proof}
+
+\noindent
+Let $R$ be a local ring with maximal ideal $\mathfrak m$ and
+residue field $\kappa$. Note that if $\varphi : M \to N$ is an
+isomorphism of finite length $R$-modules, then we get an
+isomorphism
+$$
+\det\nolimits_\kappa(\varphi) :
+\det\nolimits_\kappa(M)
+\to
+\det\nolimits_\kappa(N)
+$$
+simply by the rule
+$$
+\det\nolimits_\kappa(\varphi)([e_1, \ldots, e_l])
+=
+[\varphi(e_1), \ldots, \varphi(e_l)]
+$$
+for any symbol $[e_1, \ldots, e_l]$ for $M$.
+Hence we see that $\det\nolimits_\kappa$ is a functor
+\begin{equation}
+\label{equation-functor}
+\left\{
+\begin{matrix}
+\text{finite length }R\text{-modules}\\
+\text{with isomorphisms}
+\end{matrix}
+\right\}
+\longrightarrow
+\left\{
+\begin{matrix}
+1\text{-dimensional }\kappa\text{-vector spaces}\\
+\text{with isomorphisms}
+\end{matrix}
+\right\}
+\end{equation}
+This is typical for a ``determinant functor''
+(see \cite{Knudsen}), as is the following additivity
+property.
+
+\begin{lemma}
+\label{lemma-det-exact-sequences}
+Let $(R, \mathfrak m, \kappa)$ be a local ring.
+For every short exact sequence
+$$
+0 \to K \to L \to M \to 0
+$$
+of finite length $R$-modules there exists a canonical isomorphism
+$$
+\gamma_{K \to L \to M} :
+\det\nolimits_\kappa(K) \otimes_\kappa \det\nolimits_\kappa(M)
+\longrightarrow
+\det\nolimits_\kappa(L)
+$$
+defined by the rule on nonzero symbols
+$$
+[e_1, \ldots, e_k]
+\otimes
+[\overline{f}_1, \ldots, \overline{f}_m]
+\longmapsto
+[e_1, \ldots, e_k, f_1, \ldots, f_m]
+$$
+with the following properties:
+\begin{enumerate}
+\item For every isomorphism of short exact sequences, i.e., for
+every commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+K \ar[r] \ar[d]^u &
+L \ar[r] \ar[d]^v &
+M \ar[r] \ar[d]^w &
+0 \\
+0 \ar[r] &
+K' \ar[r] &
+L' \ar[r] &
+M' \ar[r] &
+0
+}
+$$
+with short exact rows and isomorphisms $u, v, w$ we have
+$$
+\gamma_{K' \to L' \to M'} \circ
+(\det\nolimits_\kappa(u) \otimes \det\nolimits_\kappa(w))
+=
+\det\nolimits_\kappa(v) \circ
+\gamma_{K \to L \to M},
+$$
+\item for every commutative square of finite length $R$-modules
+with exact rows and columns
+$$
+\xymatrix{
+& 0 \ar[d] & 0 \ar[d] & 0 \ar[d] & \\
+0 \ar[r] & A \ar[r] \ar[d] & B \ar[r] \ar[d] & C \ar[r] \ar[d] & 0 \\
+0 \ar[r] & D \ar[r] \ar[d] & E \ar[r] \ar[d] & F \ar[r] \ar[d] & 0 \\
+0 \ar[r] & G \ar[r] \ar[d] & H \ar[r] \ar[d] & I \ar[r] \ar[d] & 0 \\
+& 0 & 0 & 0 &
+}
+$$
+the following diagram is commutative
+$$
+\xymatrix{
+\det\nolimits_\kappa(A) \otimes
+\det\nolimits_\kappa(C) \otimes
+\det\nolimits_\kappa(G) \otimes
+\det\nolimits_\kappa(I)
+\ar[dd]_{\epsilon}
+\ar[rrr]_-{\gamma_{A \to B \to C} \otimes \gamma_{G \to H \to I}}
+& & &
+\det\nolimits_\kappa(B) \otimes
+\det\nolimits_\kappa(H)
+\ar[d]^{\gamma_{B \to E \to H}}
+\\
+& & & \det\nolimits_\kappa(E)
+\\
+\det\nolimits_\kappa(A) \otimes
+\det\nolimits_\kappa(G) \otimes
+\det\nolimits_\kappa(C) \otimes
+\det\nolimits_\kappa(I)
+\ar[rrr]^-{\gamma_{A \to D \to G} \otimes \gamma_{C \to F \to I}}
+& & &
+\det\nolimits_\kappa(D) \otimes
+\det\nolimits_\kappa(F)
+\ar[u]_{\gamma_{D \to E \to F}}
+}
+$$
+where $\epsilon$ is the switch of the factors in the tensor product
+times $(-1)^{cg}$ with $c = \text{length}_R(C)$ and $g = \text{length}_R(G)$,
+and
+\item the map $\gamma_{K \to L \to M}$ agrees with the usual isomorphism
+if $0 \to K \to L \to M \to 0$ is actually a short exact sequence
+of $\kappa$-vector spaces.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The significance of taking nonzero symbols in the explicit description
+of the map $\gamma_{K \to L \to M}$ is simply that if $(e_1, \ldots, e_k)$
+is an admissible sequence in $K$, and
+$(\overline{f}_1, \ldots, \overline{f}_m)$ is an admissible sequence in
+$M$, then it is not guaranteed that $(e_1, \ldots, e_k, f_1, \ldots, f_m)$
+is an admissible sequence in $L$ (where of course $f_i \in L$ signifies
+a lift of $\overline{f}_i$). However, if the symbol
+$[e_1, \ldots, e_k]$ is nonzero in $\det_\kappa(K)$, then
+necessarily $K = \langle e_1, \ldots, e_k\rangle$ (see
+proof of Lemma \ref{lemma-dimension-at-most-one}), and
+in this case it is true that $(e_1, \ldots, e_k, f_1, \ldots, f_m)$
+is an admissible sequence.
+Moreover, by the admissible relations of type (b) for $\det_\kappa(L)$
+we see that the value of $[e_1, \ldots, e_k, f_1, \ldots, f_m]$ in
+$\det_\kappa(L)$ is independent of the choice of the lifts
+$f_i$ in this case also. Given this remark, it is clear
+that an admissible relation for $e_1, \ldots, e_k$ in $K$
+translates into an admissible relation among
+$e_1, \ldots, e_k, f_1, \ldots, f_m$ in $L$, and
+similarly for an admissible relation among the
+$\overline{f}_1, \ldots, \overline{f}_m$.
+Thus $\gamma$ defines a linear map of vector spaces as claimed in the lemma.
+
+\medskip\noindent
+By Lemma \ref{lemma-determinant-dimension-one} we know
+$\det_\kappa(L)$ is generated by any single
+symbol $[x_1, \ldots, x_{k + m}]$ such that
+$(x_1, \ldots, x_{k + m})$ is an admissible sequence
+with $L = \langle x_1, \ldots, x_{k + m}\rangle$. Hence it is
+clear that the map $\gamma_{K \to L \to M}$ is surjective and
+hence an isomorphism.
+
+\medskip\noindent
+Property (1) holds because
+\begin{eqnarray*}
+& & \det\nolimits_\kappa(v)([e_1, \ldots, e_k, f_1, \ldots, f_m]) \\
+& = &
+[v(e_1), \ldots, v(e_k), v(f_1), \ldots, v(f_m)] \\
+& = &
+\gamma_{K' \to L' \to M'}([u(e_1), \ldots, u(e_k)]
+\otimes [w(f_1), \ldots, w(f_m)]).
+\end{eqnarray*}
+Property (2) means that given a symbol
+$[\alpha_1, \ldots, \alpha_a]$ generating $\det_\kappa(A)$,
+a symbol $[\gamma_1, \ldots, \gamma_c]$ generating $\det_\kappa(C)$,
+a symbol $[\zeta_1, \ldots, \zeta_g]$ generating $\det_\kappa(G)$, and
+a symbol $[\iota_1, \ldots, \iota_i]$ generating $\det_\kappa(I)$
+we have
+\begin{eqnarray*}
+& & [\alpha_1, \ldots, \alpha_a, \tilde\gamma_1, \ldots, \tilde\gamma_c,
+\tilde\zeta_1, \ldots, \tilde\zeta_g, \tilde\iota_1, \ldots, \tilde\iota_i] \\
+& = &
+(-1)^{cg} [\alpha_1, \ldots, \alpha_a, \tilde\zeta_1, \ldots, \tilde\zeta_g,
+\tilde\gamma_1, \ldots, \tilde\gamma_c, \tilde\iota_1, \ldots, \tilde\iota_i]
+\end{eqnarray*}
+(for suitable lifts $\tilde{x}$ in $E$) in $\det_\kappa(E)$.
+This holds because we may use the admissible relations of type (c)
+$cg$ times in the following order: move the
+$\tilde\zeta_1$ past the elements
+$\tilde\gamma_c, \ldots, \tilde\gamma_1$
+(allowed since $\mathfrak m\tilde\zeta_1 \subset A$),
+then move $\tilde\zeta_2$ past the elements
+$\tilde\gamma_c, \ldots, \tilde\gamma_1$
+(allowed since $\mathfrak m\tilde\zeta_2 \subset A + R\tilde\zeta_1$),
+and so on.
+
+\medskip\noindent
+Part (3) of the lemma is obvious.
+This finishes the proof.
+\end{proof}
+
+\noindent
+We can use the maps $\gamma$ of the lemma to define more general maps
+$\gamma$ as follows. Suppose that $(R, \mathfrak m, \kappa)$ is a
+local ring. Let $M$ be a finite length $R$-module and suppose we
+are given a finite filtration (see
+Homology, Definition \ref{homology-definition-filtered})
+$$
+0 = F^m \subset F^{m - 1} \subset \ldots \subset F^{n + 1} \subset F^n = M
+$$
+then there is a well defined and canonical isomorphism
+$$
+\gamma_{(M, F)} :
+\det\nolimits_\kappa(F^{m - 1}/F^m) \otimes_\kappa \ldots \otimes_\kappa
+\det\nolimits_\kappa(F^n/F^{n + 1})
+\longrightarrow
+\det\nolimits_\kappa(M)
+$$
+To construct it we use isomorphisms of Lemma \ref{lemma-det-exact-sequences}
+coming from the short exact sequences
+$0 \to F^{i - 1}/F^i \to M/F^i \to M/F^{i - 1} \to 0$.
+Part (2) of Lemma \ref{lemma-det-exact-sequences} with $G = 0$ shows
+we obtain the same isomorphism if we use the short exact sequences
+$0 \to F^i \to F^{i - 1} \to F^{i - 1}/F^i \to 0$.
+
+\medskip\noindent
+Here is another typical result for determinant functors.
+It is not hard to show. The tricky part is usually to show the
+existence of a determinant functor.
+
+\begin{lemma}
+\label{lemma-uniqueness-det}
+Let $(R, \mathfrak m, \kappa)$ be any local ring.
+The functor
+$$
+\det\nolimits_\kappa :
+\left\{
+\begin{matrix}
+\text{finite length }R\text{-modules} \\
+\text{with isomorphisms}
+\end{matrix}
+\right\}
+\longrightarrow
+\left\{
+\begin{matrix}
+1\text{-dimensional }\kappa\text{-vector spaces} \\
+\text{with isomorphisms}
+\end{matrix}
+\right\}
+$$
+endowed with the maps $\gamma_{K \to L \to M}$ is characterized by
+the following properties
+\begin{enumerate}
+\item its restriction to the subcategory of modules annihilated
+by $\mathfrak m$ is isomorphic to the usual determinant functor
+(see Lemma \ref{lemma-compare-det}), and
+\item (1), (2) and (3) of Lemma \ref{lemma-det-exact-sequences}
+hold.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-determinant-quotient-ring}
+Let $(R', \mathfrak m') \to (R, \mathfrak m)$ be a local ring
+homomorphism which induces an isomorphism on residue fields $\kappa$.
+Then for every finite length $R$-module $M$ the restriction $M_{R'}$
+is a finite length $R'$-module and there is a canonical isomorphism
+$$
+\det\nolimits_{R, \kappa}(M)
+\longrightarrow
+\det\nolimits_{R', \kappa}(M_{R'})
+$$
+This isomorphism is functorial in $M$ and compatible with the
+isomorphisms $\gamma_{K \to L \to M}$ of Lemma \ref{lemma-det-exact-sequences}
+defined for $\det_{R, \kappa}$ and $\det_{R', \kappa}$.
+\end{lemma}
+
+\begin{proof}
+If the length of $M$ as an $R$-module is $l$, then the length
+of $M$ as an $R'$-module (i.e., $M_{R'}$) is $l$ as well, see
+Algebra, Lemma \ref{algebra-lemma-pushdown-module}.
+Note that an admissible sequence $x_1, \ldots, x_l$ of $M$
+over $R$ is an admissible sequence of $M$ over $R'$ as $\mathfrak m'$
+maps into $\mathfrak m$.
+The isomorphism is obtained by mapping the symbol
+$[x_1, \ldots, x_l] \in \det\nolimits_{R, \kappa}(M)$
+to the corresponding symbol
+$[x_1, \ldots, x_l] \in \det\nolimits_{R', \kappa}(M_{R'})$.
+It is immediate to verify that this is functorial for
+isomorphisms and compatible with the isomorphisms
+$\gamma$ of Lemma \ref{lemma-det-exact-sequences}.
+\end{proof}
+
+\begin{remark}
+\label{remark-explain-determinant}
+Let $(R, \mathfrak m, \kappa)$ be a local ring and assume either
+the characteristic of $\kappa$ is zero or it is $p$ and $p R = 0$.
+Let $M_1, \ldots, M_n$ be finite length $R$-modules.
+We will show below that there exists an
+ideal $I \subset \mathfrak m$ annihilating $M_i$ for $i = 1, \ldots, n$
+and a section $\sigma : \kappa \to R/I$ of the canonical surjection
+$R/I \to \kappa$. The restriction $M_{i, \kappa}$ of $M_i$ via $\sigma$
+is a $\kappa$-vector space of dimension $l_i = \text{length}_R(M_i)$ and
+using Lemma \ref{lemma-determinant-quotient-ring} we see that
+$$
+\det\nolimits_\kappa(M_i) = \wedge_\kappa^{l_i}(M_{i, \kappa})
+$$
+These isomorphisms are compatible with the isomorphisms
+$\gamma_{K \to L \to M}$ of Lemma \ref{lemma-det-exact-sequences}
+for short exact sequences of finite length $R$-modules annihilated
+by $I$. The conclusion is that verifying a property of
+$\det_\kappa$ often reduces to verifying corresponding properties
+of the usual determinant on the category of finite dimensional vector
+spaces.
+
+\medskip\noindent
+For $I$ we can take the annihilator
+(Algebra, Definition \ref{algebra-definition-annihilator})
+of the module $M = \bigoplus M_i$. In this case we see that
+$R/I \subset \text{End}_R(M)$ hence has finite length.
+Thus $R/I$ is an Artinian local ring with residue field $\kappa$.
+Since an Artinian local ring is complete we see that $R/I$
+has a coefficient ring by the Cohen structure theorem
+(Algebra, Theorem \ref{algebra-theorem-cohen-structure-theorem})
+which is a field by our assumption on $R$.
+\end{remark}
+
+\noindent
+Here is a case where we can compute the determinant of a linear map.
+In fact there is nothing mysterious about this in any case, see
+Example \ref{example-determinant-map} for a random example.
+
+\begin{lemma}
+\label{lemma-times-u-determinant}
+Let $R$ be a local ring with residue field $\kappa$.
+Let $u \in R^*$ be a unit.
+Let $M$ be a module of finite length over $R$.
+Denote $u_M : M \to M$ the map multiplication by $u$.
+Then
+$$
+\det\nolimits_\kappa(u_M) :
+\det\nolimits_\kappa(M)
+\longrightarrow
+\det\nolimits_\kappa(M)
+$$
+is multiplication by $\overline{u}^l$ where $l = \text{length}_R(M)$
+and $\overline{u} \in \kappa^*$ is the image of $u$.
+\end{lemma}
+
+\begin{proof}
+Denote $f_M \in \kappa^*$ the element such that
+$\det\nolimits_\kappa(u_M) = f_M \text{id}_{\det\nolimits_\kappa(M)}$.
+Suppose that $0 \to K \to L \to M \to 0$ is a short
+exact sequence of finite length $R$-modules. Then we see that
+$u_K$, $u_L$, $u_M$ give an isomorphism of short exact sequences.
+Hence by Lemma \ref{lemma-det-exact-sequences} (1) we conclude that
+$f_K f_M = f_L$.
+This means that by induction on length it suffices to prove the
+lemma in the case of length $1$ where it is trivial.
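+Explicitly, if $\text{length}_R(M) = 1$, then any nonzero $x \in M$
+gives a generating symbol $[x]$ of $\det_\kappa(M)$ and the admissible
+relation of type (b) gives
+$$
+\det\nolimits_\kappa(u_M)([x]) = [ux] = \overline{u}[x],
+$$
+that is, $f_M = \overline{u}$.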
+\end{proof}
+
+\begin{example}
+\label{example-determinant-map}
+Consider the local ring $R = \mathbf{Z}_p$.
+Set $M = \mathbf{Z}_p/(p^2) \oplus \mathbf{Z}_p/(p^3)$.
+Let $u : M \to M$ be the map given by the matrix
+$$
+u =
+\left(
+\begin{matrix}
+a & b \\
+pc & d
+\end{matrix}
+\right)
+$$
+where $a, b, c, d \in \mathbf{Z}_p$, and $a, d \in \mathbf{Z}_p^*$.
+In this case $\det_\kappa(u)$ equals multiplication by
+$a^2d^3 \bmod p \in \mathbf{F}_p^*$. This can easily be seen
+by considering the effect of $u$ on the symbol
+$[p^2e, pe, pf, e, f]$ where $e = (0 , 1) \in M$ and
+$f = (1, 0) \in M$.
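+Alternatively, the filtration $0 \subset p^2M \subset pM \subset M$ is
+preserved by $u$, and using the isomorphism $\gamma_{(M, F)}$ above
+together with parts (1) and (3) of Lemma \ref{lemma-det-exact-sequences}
+one sees that $\det_\kappa(u)$ is the product of the usual determinants
+of the maps induced by $u$ on the graded pieces $M/pM$, $pM/p^2M$, and
+$p^2M/p^3M$. In the bases $(f, e)$, $(pf, pe)$, and $(p^2e)$ these maps
+have matrices
+$$
+\left(
+\begin{matrix}
+a & b \\
+0 & d
+\end{matrix}
+\right),
+\quad
+\left(
+\begin{matrix}
+a & b \\
+0 & d
+\end{matrix}
+\right),
+\quad
+(d)
+$$
+whose determinants multiply to $a^2d^3 \bmod p$.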
+\end{example}
+
+
+
+
+
+\subsection{Periodic complexes and determinants}
+\label{subsection-periodic-complexes-determinants}
+
+\noindent
+Let $R$ be a local ring with residue field $\kappa$.
+Let $(M, \varphi, \psi)$ be a $(2, 1)$-periodic complex over $R$.
+Assume that $M$ has finite length and that $(M, \varphi, \psi)$ is
+exact. We are going to use the determinant construction to define
+an invariant of this situation. See
+Subsection \ref{subsection-determinants-finite-length}.
+Let us abbreviate
+$K_\varphi = \Ker(\varphi)$,
+$I_\varphi = \Im(\varphi)$,
+$K_\psi = \Ker(\psi)$, and
+$I_\psi = \Im(\psi)$.
+The short exact sequences
+$$
+0 \to K_\varphi \to M \to I_\varphi \to 0, \quad
+0 \to K_\psi \to M \to I_\psi \to 0
+$$
+give isomorphisms
+$$
+\gamma_\varphi :
+\det\nolimits_\kappa(K_\varphi)
+\otimes
+\det\nolimits_\kappa(I_\varphi)
+\longrightarrow
+\det\nolimits_\kappa(M), \quad
+\gamma_\psi :
+\det\nolimits_\kappa(K_\psi)
+\otimes
+\det\nolimits_\kappa(I_\psi)
+\longrightarrow
+\det\nolimits_\kappa(M),
+$$
+see Lemma \ref{lemma-det-exact-sequences}.
+On the other hand the exactness of the complex gives equalities
+$K_\varphi = I_\psi$, and $K_\psi = I_\varphi$
+and hence an isomorphism
+$$
+\sigma :
+\det\nolimits_\kappa(K_\varphi)
+\otimes
+\det\nolimits_\kappa(I_\varphi)
+\longrightarrow
+\det\nolimits_\kappa(K_\psi)
+\otimes
+\det\nolimits_\kappa(I_\psi)
+$$
+by switching the factors. Using this notation we can define our invariant.
+
+\begin{definition}
+\label{definition-periodic-determinant}
+Let $R$ be a local ring with residue field $\kappa$.
+Let $(M, \varphi, \psi)$ be a $(2, 1)$-periodic complex over $R$.
+Assume that $M$ has finite length and that $(M, \varphi, \psi)$ is
+exact. The {\it determinant of $(M, \varphi, \psi)$} is
+the element
+$$
+\det\nolimits_\kappa(M, \varphi, \psi) \in \kappa^*
+$$
+such that the composition
+$$
+\det\nolimits_\kappa(M)
+\xrightarrow{\gamma_\psi \circ \sigma \circ \gamma_\varphi^{-1}}
+\det\nolimits_\kappa(M)
+$$
+is multiplication by
+$(-1)^{\text{length}_R(I_\varphi)\text{length}_R(I_\psi)}
+\det\nolimits_\kappa(M, \varphi, \psi)$.
+\end{definition}
+
+\begin{remark}
+\label{remark-more-elementary}
+Here is a more down to earth description of the determinant
+introduced above. Let $R$ be a local ring with residue field $\kappa$.
+Let $(M, \varphi, \psi)$ be a $(2, 1)$-periodic complex over $R$.
+Assume that $M$ has finite length and that $(M, \varphi, \psi)$ is
+exact. Let us abbreviate $I_\varphi = \Im(\varphi)$,
+$I_\psi = \Im(\psi)$ as above.
+Assume that $\text{length}_R(I_\varphi) = a$ and
+$\text{length}_R(I_\psi) = b$, so that $a + b = \text{length}_R(M)$
+by exactness. Choose admissible sequences
+$x_1, \ldots, x_a \in I_\varphi$ and $y_1, \ldots, y_b \in I_\psi$
+such that the symbol $[x_1, \ldots, x_a]$ generates $\det_\kappa(I_\varphi)$
+and the symbol $[y_1, \ldots, y_b]$ generates $\det_\kappa(I_\psi)$.
+Choose $\tilde x_i \in M$ such that $\varphi(\tilde x_i) = x_i$.
+Choose $\tilde y_j \in M$ such that $\psi(\tilde y_j) = y_j$.
+Then $\det_\kappa(M, \varphi, \psi)$ is characterized
+by the equality
+$$
+[x_1, \ldots, x_a, \tilde y_1, \ldots, \tilde y_b]
+=
+(-1)^{ab} \det\nolimits_\kappa(M, \varphi, \psi)
+[y_1, \ldots, y_b, \tilde x_1, \ldots, \tilde x_a]
+$$
+in $\det_\kappa(M)$. This also explains the sign.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-periodic-determinant-shift}
+Let $R$ be a local ring with residue field $\kappa$.
+Let $(M, \varphi, \psi)$ be a $(2, 1)$-periodic complex over $R$.
+Assume that $M$ has finite length and that $(M, \varphi, \psi)$ is
+exact. Then
+$$
+\det\nolimits_\kappa(M, \varphi, \psi)
+\det\nolimits_\kappa(M, \psi, \varphi)
+= 1.
+$$
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-periodic-determinant-sign}
+Let $R$ be a local ring with residue field $\kappa$.
+Let $(M, \varphi, \varphi)$ be a $(2, 1)$-periodic complex over $R$.
+Assume that $M$ has finite length and that $(M, \varphi, \varphi)$ is
+exact. Then $\text{length}_R(M) = 2 \text{length}_R(\Im(\varphi))$
+and
+$$
+\det\nolimits_\kappa(M, \varphi, \varphi)
+=
+(-1)^{\text{length}_R(\Im(\varphi))}
+=
+(-1)^{\frac{1}{2}\text{length}_R(M)}
+$$
+\end{lemma}
+
+\begin{proof}
+Follows directly from the sign rule in the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-periodic-determinant-easy-case}
+Let $R$ be a local ring with residue field $\kappa$.
+Let $M$ be a finite length $R$-module.
+\begin{enumerate}
+\item if $\varphi : M \to M$ is an isomorphism then
+$\det_\kappa(M, \varphi, 0) = \det_\kappa(\varphi)$.
+\item if $\psi : M \to M$ is an isomorphism then
+$\det_\kappa(M, 0, \psi) = \det_\kappa(\psi)^{-1}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let us prove (1). Set $\psi = 0$. Then we may, with notation
+as above Definition \ref{definition-periodic-determinant}, identify
+$K_\varphi = I_\psi = 0$, $I_\varphi = K_\psi = M$.
+With these identifications, the map
+$$
+\gamma_\varphi :
+\kappa \otimes \det\nolimits_\kappa(M)
+=
+\det\nolimits_\kappa(K_\varphi)
+\otimes
+\det\nolimits_\kappa(I_\varphi)
+\longrightarrow
+\det\nolimits_\kappa(M)
+$$
+is identified with $\det_\kappa(\varphi^{-1})$. On the other hand the
+map $\gamma_\psi$ is identified with the identity map. Hence
+$\gamma_\psi \circ \sigma \circ \gamma_\varphi^{-1}$ is equal
+to $\det_\kappa(\varphi)$ in this case. Whence the result.
+We omit the proof of (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-periodic-determinant}
+Let $R$ be a local ring with residue field $\kappa$.
+Suppose that we have a short exact sequence of
+$(2, 1)$-periodic complexes
+$$
+0 \to (M_1, \varphi_1, \psi_1)
+\to (M_2, \varphi_2, \psi_2)
+\to (M_3, \varphi_3, \psi_3)
+\to 0
+$$
+with all $M_i$ of finite length, and each $(M_i, \varphi_i, \psi_i)$ exact.
+Then
+$$
+\det\nolimits_\kappa(M_2, \varphi_2, \psi_2) =
+\det\nolimits_\kappa(M_1, \varphi_1, \psi_1)
+\det\nolimits_\kappa(M_3, \varphi_3, \psi_3)
+$$
+in $\kappa^*$.
+\end{lemma}
+
+\begin{proof}
+Let us abbreviate
+$I_{\varphi, i} = \Im(\varphi_i)$,
+$K_{\varphi, i} = \Ker(\varphi_i)$,
+$I_{\psi, i} = \Im(\psi_i)$, and
+$K_{\psi, i} = \Ker(\psi_i)$.
+Observe that we have a commutative square
+$$
+\xymatrix{
+& 0 \ar[d] & 0 \ar[d] & 0 \ar[d] & \\
+0 \ar[r] &
+K_{\varphi, 1} \ar[r] \ar[d] &
+K_{\varphi, 2} \ar[r] \ar[d] &
+K_{\varphi, 3} \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+M_1 \ar[r] \ar[d] &
+M_2 \ar[r] \ar[d] &
+M_3 \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+I_{\varphi, 1} \ar[r] \ar[d] &
+I_{\varphi, 2} \ar[r] \ar[d] &
+I_{\varphi, 3} \ar[r] \ar[d] &
+0 \\
+& 0 & 0 & 0 &
+}
+$$
+of finite length $R$-modules with exact rows and columns.
+The top row is exact since it can be identified with the
+sequence $I_{\psi, 1} \to I_{\psi, 2} \to I_{\psi, 3} \to 0$
+of images, and similarly for the bottom row. There is a similar diagram
+involving the modules $I_{\psi, i}$ and $K_{\psi, i}$.
+By definition $\det_\kappa(M_2, \varphi_2, \psi_2)$
+corresponds, up to a sign, to the composition of the left vertical maps
+in the following diagram
+$$
+\xymatrix{
+\det_\kappa(M_1) \otimes
+\det_\kappa(M_3) \ar[r]^\gamma
+\ar[d]^{\gamma^{-1} \otimes \gamma^{-1}} &
+\det_\kappa(M_2)
+\ar[d]^{\gamma^{-1}} \\
+\det\nolimits_\kappa(K_{\varphi, 1})
+\otimes
+\det\nolimits_\kappa(I_{\varphi, 1})
+\otimes
+\det\nolimits_\kappa(K_{\varphi, 3})
+\otimes
+\det\nolimits_\kappa(I_{\varphi, 3})
+\ar[d]^{\sigma \otimes \sigma}
+\ar[r]^-{\gamma \otimes \gamma} &
+\det\nolimits_\kappa(K_{\varphi, 2})
+\otimes
+\det\nolimits_\kappa(I_{\varphi, 2})
+\ar[d]^\sigma
+\\
+\det\nolimits_\kappa(K_{\psi, 1})
+\otimes
+\det\nolimits_\kappa(I_{\psi, 1})
+\otimes
+\det\nolimits_\kappa(K_{\psi, 3})
+\otimes
+\det\nolimits_\kappa(I_{\psi, 3})
+\ar[d]^{\gamma \otimes \gamma}
+\ar[r]^-{\gamma \otimes \gamma}
+&
+\det\nolimits_\kappa(K_{\psi, 2})
+\otimes
+\det\nolimits_\kappa(I_{\psi, 2})
+\ar[d]^\gamma \\
+\det_\kappa(M_1)
+\otimes
+\det_\kappa(M_3) \ar[r]^\gamma
+&
+\det_\kappa(M_2)
+}
+$$
+The top and bottom squares are commutative up to sign
+by applying Lemma \ref{lemma-det-exact-sequences} (2).
+The middle square is trivially
+commutative (we are just switching factors). Hence we see
+that
+$\det\nolimits_\kappa(M_2, \varphi_2, \psi_2) =
+\epsilon \det\nolimits_\kappa(M_1, \varphi_1, \psi_1)
+\det\nolimits_\kappa(M_3, \varphi_3, \psi_3)
+$
+for some sign $\epsilon$. And the sign can be worked out, namely
+the outer rectangle in the diagram above commutes up to
+\begin{eqnarray*}
+\epsilon & = &
+(-1)^{\text{length}(I_{\varphi, 1})\text{length}(K_{\varphi, 3})
++ \text{length}(I_{\psi, 1})\text{length}(K_{\psi, 3})} \\
+& = &
+(-1)^{\text{length}(I_{\varphi, 1})\text{length}(I_{\psi, 3})
++ \text{length}(I_{\psi, 1})\text{length}(I_{\varphi, 3})}
+\end{eqnarray*}
+(proof omitted). It follows easily from this that the signs
+work out as well.
+\end{proof}
+
+\begin{example}
+\label{example-dual-numbers}
+Let $k$ be a field.
+Consider the ring $R = k[T]/(T^2)$ of dual numbers over $k$.
+Denote $t$ the class of $T$ in $R$.
+Let $M = R$ and $\varphi = ut$, $\psi = vt$ with $u, v \in k^*$.
+In this case $\det_k(M)$ has generator $e = [t, 1]$.
+We identify $I_\varphi = K_\varphi = I_\psi = K_\psi = (t)$.
+Then $\gamma_\varphi(t \otimes t) = u^{-1}[t, 1]$
+(since $u^{-1} \in M$ is a lift of $t \in I_\varphi$)
+and $\gamma_\psi(t \otimes t) = v^{-1}[t, 1]$ (same reason).
+Hence we see that $\det_k(M, \varphi, \psi) = -u/v \in k^*$.
+\end{example}
+
+\begin{example}
+\label{example-Zp}
+Let $R = \mathbf{Z}_p$ and let $M = \mathbf{Z}_p/(p^l)$.
+Let $\varphi = p^b u$ and $\psi = p^a v$ with $a, b \geq 0$,
+$a + b = l$ and $u, v \in \mathbf{Z}_p^*$.
+Then a computation as in Example \ref{example-dual-numbers}
+shows that
+\begin{eqnarray*}
+\det\nolimits_{\mathbf{F}_p}(\mathbf{Z}_p/(p^l), p^bu, p^av) & = &
+(-1)^{ab}u^a/v^b \bmod p \\
+& = &
+(-1)^{\text{ord}_p(\alpha)\text{ord}_p(\beta)}
+\frac{\alpha^{\text{ord}_p(\beta)}}{\beta^{\text{ord}_p(\alpha)}} \bmod p
+\end{eqnarray*}
+with $\alpha = p^bu, \beta = p^av \in \mathbf{Z}_p$.
+See Lemma \ref{lemma-symbol-is-usual-tame-symbol}
+for a more general case (and a proof).
+\end{example}
+
+\begin{example}
+\label{example-generic-vector-space}
+Let $R = k$ be a field.
+Let $M = k^{\oplus a} \oplus k^{\oplus b}$ be $l = a + b$ dimensional.
+Let $\varphi$ and $\psi$ be the following diagonal matrices
+$$
+\varphi = \text{diag}(u_1, \ldots, u_a, 0, \ldots, 0),
+\quad
+\psi = \text{diag}(0, \ldots, 0, v_1, \ldots, v_b)
+$$
+with $u_i, v_j \in k^*$. In this case we have
+$$
+\det\nolimits_k(M, \varphi, \psi)
+=
+\frac{u_1 \ldots u_a}{v_1 \ldots v_b}.
+$$
+This can be seen by a direct computation or by computing in case $l = 1$
+and using the additivity of Lemma \ref{lemma-periodic-determinant}.
+\end{example}
+
+\begin{example}
+\label{example-special-vector-space}
+Let $R = k$ be a field.
+Let $M = k^{\oplus a} \oplus k^{\oplus a}$ be $l = 2a$ dimensional.
+Let $\varphi$ and $\psi$ be the following block matrices
+$$
+\varphi =
+\left(
+\begin{matrix}
+0 & U \\
+0 & 0
+\end{matrix}
+\right),
+\quad
+\psi =
+\left(
+\begin{matrix}
+0 & V \\
+0 & 0
+\end{matrix}
+\right),
+$$
+with $U, V \in \text{Mat}(a \times a, k)$ invertible.
+In this case we have
+$$
+\det\nolimits_k(M, \varphi, \psi)
+=
+(-1)^a\frac{\det(U)}{\det(V)}.
+$$
+This can be seen by a direct computation.
+The case $a = 1$ is similar to the computation in
+Example \ref{example-dual-numbers}.
+\end{example}
+
+\begin{example}
+\label{example-a-la-oort}
+Let $R = k$ be a field.
+Let $M = k^{\oplus 4}$.
+Let
+$$
+\varphi =
+\left(
+\begin{matrix}
+ 0 & 0 & 0 & 0 \\
+u_1 & 0 & 0 & 0 \\
+ 0 & 0 & 0 & 0 \\
+ 0 & 0 & u_2 & 0
+\end{matrix}
+\right)
+\quad
+\psi =
+\left(
+\begin{matrix}
+ 0 & 0 & 0 & 0 \\
+ 0 & 0 & v_2 & 0 \\
+ 0 & 0 & 0 & 0 \\
+v_1 & 0 & 0 & 0
+\end{matrix}
+\right)
+$$
+with $u_1, u_2, v_1, v_2 \in k^*$.
+Then we have
+$$
+\det\nolimits_k(M, \varphi, \psi) = -\frac{u_1u_2}{v_1v_2}.
+$$
+\end{example}
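The three examples over a field above can be checked mechanically. The following Python sketch (an illustration only, not part of the text; the helper names are ad hoc) implements the characterization of Remark \ref{remark-more-elementary} for $R = k = \mathbf{Q}$, where a symbol $[v_1, \ldots, v_l]$ may be evaluated as the ordinary determinant of the matrix with columns $v_1, \ldots, v_l$:

```python
from fractions import Fraction

def det(m):
    """Exact determinant via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if m[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            m[c], m[p] = m[p], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * d

def pivot_columns(a):
    """Indices of a maximal linearly independent set of columns."""
    m = [[Fraction(x) for x in row] for row in a]
    rows, cols, piv, r = len(m), len(m[0]), [], 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                for k in range(cols):
                    m[i][k] -= f * m[r][k]
        piv.append(c)
        r += 1
    return piv

def periodic_det(phi, psi):
    """det_k(M, phi, psi) for an exact (2,1)-periodic complex (Q^l, phi, psi).

    For each pivot column j of phi the vector phi(e_j) lies in Im(phi) and
    has the obvious preimage e_j, and similarly for psi.  The value is then
    read off from the characterization
        [x_1..x_a, y~_1..y~_b] = (-1)^{ab} det * [y_1..y_b, x~_1..x~_a]
    with symbols evaluated as ordinary determinants.
    """
    l = len(phi)
    e = lambda j: [Fraction(int(i == j)) for i in range(l)]
    col = lambda m, j: [Fraction(m[i][j]) for i in range(l)]
    jp, jq = pivot_columns(phi), pivot_columns(psi)
    a, b = len(jp), len(jq)
    assert a + b == l  # necessary for exactness of the complex
    X, Xt = [col(phi, j) for j in jp], [e(j) for j in jp]
    Y, Yt = [col(psi, j) for j in jq], [e(j) for j in jq]
    mat = lambda v: [[v[j][i] for j in range(l)] for i in range(l)]
    return (-1) ** (a * b) * det(mat(X + Yt)) / det(mat(Y + Xt))
```

For the diagonal example with $u_1 = 2$, $u_2 = 3$, $v_1 = 5$ this returns $6/5$; for the block example with $U = \left(\begin{smallmatrix}1 & 2\\ 3 & 4\end{smallmatrix}\right)$, $V = \left(\begin{smallmatrix}5 & 6\\ 7 & 8\end{smallmatrix}\right)$ it returns $\det(U)/\det(V) = 1$; and for the last example with $u_1 = 2$, $u_2 = 3$, $v_1 = 5$, $v_2 = 7$ it returns $-6/35$.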
+
+\noindent
+Next we come to the analogue of the fact that the determinant
+of a composition of linear endomorphisms is the product of
+the determinants. To avoid very long formulae we
+write $I_\varphi = \Im(\varphi)$, and
+$K_\varphi = \Ker(\varphi)$
+for any $R$-module map $\varphi : M \to M$.
+We also denote $\varphi\psi = \varphi \circ \psi$
+for a pair of morphisms $\varphi, \psi : M \to M$.
+
+\begin{lemma}
+\label{lemma-multiplicativity-determinant}
+Let $R$ be a local ring with residue field $\kappa$.
+Let $M$ be a finite length $R$-module.
+Let $\alpha, \beta, \gamma$ be endomorphisms of $M$.
+Assume that
+\begin{enumerate}
+\item $I_\alpha = K_{\beta\gamma}$, and similarly for any permutation
+of $\alpha, \beta, \gamma$,
+\item $K_\alpha = I_{\beta\gamma}$, and similarly for any permutation
+of $\alpha, \beta, \gamma$.
+\end{enumerate}
+Then
+\begin{enumerate}
+\item The triple $(M, \alpha, \beta\gamma)$
+is an exact $(2, 1)$-periodic complex.
+\item The triple $(I_\gamma, \alpha, \beta)$
+is an exact $(2, 1)$-periodic complex.
+\item The triple $(M/K_\beta, \alpha, \gamma)$
+is an exact $(2, 1)$-periodic complex.
+\item We have
+$$
+\det\nolimits_\kappa(M, \alpha, \beta\gamma)
+=
+\det\nolimits_\kappa(I_\gamma, \alpha, \beta)
+\det\nolimits_\kappa(M/K_\beta, \alpha, \gamma).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that the assumptions imply part (1) of the lemma.
+
+\medskip\noindent
+To see part (2) note that the assumptions imply that
+$I_{\gamma\alpha} = I_{\alpha\gamma}$, and similarly for kernels
+and any other pair of morphisms.
+Moreover, we see that
+$I_{\gamma\beta} = I_{\beta\gamma} = K_\alpha \subset I_\gamma$ and
+similarly for any other pair. In particular we get a short exact sequence
+$$
+0 \to I_{\beta\gamma} \to I_\gamma \xrightarrow{\alpha} I_{\alpha\gamma} \to 0
+$$
+and similarly we get a short exact sequence
+$$
+0 \to I_{\alpha\gamma} \to I_\gamma \xrightarrow{\beta} I_{\beta\gamma} \to 0.
+$$
+This proves $(I_\gamma, \alpha, \beta)$ is an exact $(2, 1)$-periodic
+complex. Hence part (2) of the lemma holds.
+
+\medskip\noindent
+To see that $\alpha$, $\gamma$ give well defined endomorphisms
+of $M/K_\beta$ we have to check that $\alpha(K_\beta) \subset K_\beta$
+and $\gamma(K_\beta) \subset K_\beta$. This is true because
+$\alpha(K_\beta) = \alpha(I_{\gamma\alpha}) = I_{\alpha\gamma\alpha}
+\subset I_{\alpha\gamma} = K_\beta$, and similarly in the other case.
+The kernel of the map $\alpha : M/K_\beta \to M/K_\beta$ is
+$K_{\beta\alpha}/K_\beta = I_\gamma/K_\beta$. Similarly,
+the kernel of $\gamma : M/K_\beta \to M/K_\beta$ is equal to
+$I_\alpha/K_\beta$. Hence we conclude that (3) holds.
+
+\medskip\noindent
+We introduce $r = \text{length}_R(K_\alpha)$,
+$s = \text{length}_R(K_\beta)$ and $t = \text{length}_R(K_\gamma)$.
+By the exact sequences above and our hypotheses we have
+$\text{length}_R(I_\alpha) = s + t$, $\text{length}_R(I_\beta) = r + t$,
+$\text{length}_R(I_\gamma) = r + s$, and
+$\text{length}_R(M) = r + s + t$.
+Choose
+\begin{enumerate}
+\item an admissible sequence $x_1, \ldots, x_r \in K_\alpha$
+generating $K_\alpha$
+\item an admissible sequence $y_1, \ldots, y_s \in K_\beta$
+generating $K_\beta$,
+\item an admissible sequence $z_1, \ldots, z_t \in K_\gamma$
+generating $K_\gamma$,
+\item elements $\tilde x_i \in M$ such that $\beta\gamma\tilde x_i = x_i$,
+\item elements $\tilde y_i \in M$ such that $\alpha\gamma\tilde y_i = y_i$,
+\item elements $\tilde z_i \in M$ such that $\beta\alpha\tilde z_i = z_i$.
+\end{enumerate}
+With these choices the sequence
+$y_1, \ldots, y_s, \alpha\tilde z_1, \ldots, \alpha\tilde z_t$
+is an admissible sequence in $I_\alpha$ generating it.
+Hence, by Remark \ref{remark-more-elementary} the determinant
+$D = \det_\kappa(M, \alpha, \beta\gamma)$ is the
+unique element of $\kappa^*$ such that
+\begin{align*}
+[y_1, \ldots, y_s,
+\alpha\tilde z_1, \ldots, \alpha\tilde z_t,
+\tilde x_1, \ldots, \tilde x_r] \\
+= (-1)^{r(s + t)} D
+[x_1, \ldots, x_r,
+\gamma\tilde y_1, \ldots, \gamma\tilde y_s,
+\tilde z_1, \ldots, \tilde z_t]
+\end{align*}
+By the same remark, we see that
+$D_1 = \det_\kappa(M/K_\beta, \alpha, \gamma)$
+is characterized by
+$$
+[y_1, \ldots, y_s,
+\alpha\tilde z_1, \ldots, \alpha\tilde z_t,
+\tilde x_1, \ldots, \tilde x_r]
+=
+(-1)^{rt} D_1
+[y_1, \ldots, y_s,
+\gamma\tilde x_1, \ldots, \gamma\tilde x_r,
+\tilde z_1, \ldots, \tilde z_t]
+$$
+By the same remark, we see that
+$D_2 = \det_\kappa(I_\gamma, \alpha, \beta)$ is characterized by
+$$
+[y_1, \ldots, y_s,
+\gamma\tilde x_1, \ldots, \gamma\tilde x_r,
+\tilde z_1, \ldots, \tilde z_t]
+=
+(-1)^{rs} D_2
+[x_1, \ldots, x_r,
+\gamma\tilde y_1, \ldots, \gamma\tilde y_s,
+\tilde z_1, \ldots, \tilde z_t]
+$$
+Combining the formulas above we see that $D = D_1 D_2$
+as desired.
+\end{proof}
+
+
+\begin{lemma}
+\label{lemma-tricky}
+Let $R$ be a local ring with residue field $\kappa$.
+Let $\alpha : (M, \varphi, \psi) \to (M', \varphi', \psi')$
+be a morphism of $(2, 1)$-periodic complexes over $R$.
+Assume
+\begin{enumerate}
+\item $M$, $M'$ have finite length,
+\item $(M, \varphi, \psi)$, $(M', \varphi', \psi')$ are exact,
+\item the maps $\varphi$, $\psi$ induce the zero map on
+$K = \Ker(\alpha)$, and
+\item the maps $\varphi$, $\psi$ induce the zero map on
+$Q = \Coker(\alpha)$.
+\end{enumerate}
+Denote $N = \alpha(M) \subset M'$. We obtain two short exact sequences
+of $(2, 1)$-periodic complexes
+$$
+\begin{matrix}
+0 \to (N, \varphi', \psi') \to (M', \varphi', \psi') \to (Q, 0, 0) \to 0 \\
+0 \to (K, 0, 0) \to (M, \varphi, \psi) \to (N, \varphi', \psi') \to 0
+\end{matrix}
+$$
+which induce two isomorphisms $\alpha_i : Q \to K$, $i = 0, 1$. Then
+$$
+\det\nolimits_\kappa(M, \varphi, \psi)
+=
+\det\nolimits_\kappa(\alpha_0^{-1} \circ \alpha_1)
+\det\nolimits_\kappa(M', \varphi', \psi')
+$$
+In particular, if $\alpha_0 = \alpha_1$, then
+$\det\nolimits_\kappa(M, \varphi, \psi) =
+\det\nolimits_\kappa(M', \varphi', \psi')$.
+\end{lemma}
+
+\begin{proof}
+There are (at least) two ways to prove this lemma. One is to produce an
+enormous commutative diagram using the properties of the determinants.
+The other is to use the characterization of the determinants in terms
+of admissible sequences of elements. It is the second approach that we
+will use.
+
+\medskip\noindent
+First let us explain precisely what the maps $\alpha_i$ are.
+Namely, $\alpha_0$ is the composition
+$$
+\alpha_0 : Q = H^0(Q, 0, 0) \to H^1(N, \varphi', \psi') \to H^2(K, 0, 0) = K
+$$
+and $\alpha_1$ is the composition
+$$
+\alpha_1 : Q = H^1(Q, 0, 0) \to H^2(N, \varphi', \psi') \to H^3(K, 0, 0) = K
+$$
+coming from the boundary maps of the short exact sequences of complexes
+displayed in the lemma. The fact that the
+complexes $(M, \varphi, \psi)$, $(M', \varphi', \psi')$ are exact
+implies these maps are isomorphisms.
+
+\medskip\noindent
+We will use the notation $I_\varphi = \Im(\varphi)$,
+$K_\varphi = \Ker(\varphi)$ and similarly for the other maps.
+Exactness for $M$ and $M'$
+means that $K_\varphi = I_\psi$ and three similar equalities.
+We introduce $k = \text{length}_R(K)$, $a = \text{length}_R(I_\varphi)$,
+$b = \text{length}_R(I_\psi)$. Then we see that $\text{length}_R(M) = a + b$,
+and $\text{length}_R(N) = a + b - k$, $\text{length}_R(Q) = k$
+and $\text{length}_R(M') = a + b$. The exact sequences below will show
+that also $\text{length}_R(I_{\varphi'}) = a$ and
+$\text{length}_R(I_{\psi'}) = b$.
+
+\medskip\noindent
+The assumption that $K \subset K_\varphi = I_\psi$ means that
+$\varphi$ factors through $N$ to give an exact sequence
+$$
+0 \to \alpha(I_\psi) \to N \xrightarrow{\varphi\alpha^{-1}} I_\varphi \to 0.
+$$
+Here $\varphi\alpha^{-1}(x') = y$ means $x' = \alpha(x)$ and $y = \varphi(x)$.
+Similarly, we have
+$$
+0 \to \alpha(I_\varphi) \to N \xrightarrow{\psi\alpha^{-1}} I_\psi \to 0.
+$$
+The assumption that $\psi'$ induces the zero map on
+$Q$ means that $I_{\psi'} = K_{\varphi'} \subset N$.
+This means the quotient $I_{\varphi'}/\varphi'(N)$
+is identified with $Q$. Note that $\varphi'(N) = \alpha(I_\varphi)$.
+Hence we conclude there is an isomorphism
+$$
+\varphi' : Q \to I_{\varphi'}/\alpha(I_\varphi)
+$$
+simply described by
+$\varphi'(x' \bmod N) = \varphi'(x') \bmod \alpha(I_\varphi)$.
+In exactly the same way we get
+$$
+\psi' : Q \to I_{\psi'}/\alpha(I_\psi)
+$$
+Finally, note that $\alpha_0$ is the composition
+$$
+\xymatrix{
+Q \ar[r]^-{\varphi'} &
+I_{\varphi'}/\alpha(I_\varphi)
+\ar[rrr]^-{\psi\alpha^{-1}|_{I_{\varphi'}/\alpha(I_\varphi)}} & & &
+K
+}
+$$
+and similarly
+$\alpha_1 = \varphi\alpha^{-1}|_{I_{\psi'}/\alpha(I_\psi)} \circ \psi'$.
+
+\medskip\noindent
+To shorten the formulas below we are going to write $\alpha x$ instead
+of $\alpha(x)$ in the following. No confusion should result since
+all maps are indicated by Greek letters and elements by Roman letters.
+We are going to choose
+\begin{enumerate}
+\item an admissible sequence $z_1, \ldots, z_k \in K$
+generating $K$,
+\item elements $z'_i \in M$ such that $\varphi z'_i = z_i$,
+\item elements $z''_i \in M$ such that $\psi z''_i = z_i$,
+\item elements $x_{k + 1}, \ldots, x_a \in I_\varphi$ such
+that $z_1, \ldots, z_k, x_{k + 1}, \ldots, x_a$ is an admissible
+sequence generating $I_\varphi$,
+\item elements $\tilde x_i \in M$ such that $\varphi \tilde x_i = x_i$,
+\item elements $y_{k + 1}, \ldots, y_b \in I_\psi$ such that
+$z_1, \ldots, z_k, y_{k + 1}, \ldots, y_b$ is an admissible
+sequence generating $I_\psi$,
+\item elements $\tilde y_i \in M$ such that $\psi \tilde y_i = y_i$, and
+\item elements $w_1, \ldots, w_k \in M'$ such that
+$w_1 \bmod N, \ldots, w_k \bmod N$ are an admissible sequence
+in $Q$ generating $Q$.
+\end{enumerate}
+By Remark \ref{remark-more-elementary} the element
+$D = \det_\kappa(M, \varphi, \psi) \in \kappa^*$ is
+characterized by
+\begin{eqnarray*}
+& &
+[z_1, \ldots, z_k,
+x_{k + 1}, \ldots, x_a,
+z''_1, \ldots, z''_k,
+\tilde y_{k + 1}, \ldots, \tilde y_b] \\
+& = &
+(-1)^{ab} D
+[z_1, \ldots, z_k,
+y_{k + 1}, \ldots, y_b,
+z'_1, \ldots, z'_k,
+\tilde x_{k + 1}, \ldots, \tilde x_a]
+\end{eqnarray*}
+Note that by the discussion above
+$\alpha x_{k + 1}, \ldots, \alpha x_a, \varphi w_1, \ldots, \varphi w_k$
+is an admissible sequence generating $I_{\varphi'}$ and
+$\alpha y_{k + 1}, \ldots, \alpha y_b, \psi w_1, \ldots, \psi w_k$
+is an admissible sequence generating $I_{\psi'}$.
+Hence by Remark \ref{remark-more-elementary} the element
+$D' = \det_\kappa(M', \varphi', \psi') \in \kappa^*$ is
+characterized by
+\begin{eqnarray*}
+& &
+[\alpha x_{k + 1}, \ldots, \alpha x_a,
+\varphi' w_1, \ldots, \varphi' w_k,
+\alpha \tilde y_{k + 1}, \ldots, \alpha \tilde y_b,
+w_1, \ldots, w_k]
+\\
+& = &
+(-1)^{ab} D'
+[\alpha y_{k + 1}, \ldots, \alpha y_b,
+\psi' w_1, \ldots, \psi' w_k,
+\alpha \tilde x_{k + 1}, \ldots, \alpha \tilde x_a,
+w_1, \ldots, w_k]
+\end{eqnarray*}
+Note how in the first, resp.\ second displayed formula
+the first, resp.\ last $k$ entries of the symbols on both sides
+are the same. Hence these formulas are really equivalent to the
+equalities
+\begin{eqnarray*}
+& &
+[\alpha x_{k + 1}, \ldots, \alpha x_a,
+\alpha z''_1, \ldots, \alpha z''_k,
+\alpha \tilde y_{k + 1}, \ldots, \alpha \tilde y_b] \\
+& = &
+(-1)^{ab} D
+[\alpha y_{k + 1}, \ldots, \alpha y_b,
+\alpha z'_1, \ldots, \alpha z'_k,
+\alpha \tilde x_{k + 1}, \ldots, \alpha \tilde x_a]
+\end{eqnarray*}
+and
+\begin{eqnarray*}
+& &
+[\alpha x_{k + 1}, \ldots, \alpha x_a,
+\varphi' w_1, \ldots, \varphi' w_k,
+\alpha \tilde y_{k + 1}, \ldots, \alpha \tilde y_b]
+\\
+& = &
+(-1)^{ab} D'
+[\alpha y_{k + 1}, \ldots, \alpha y_b,
+\psi' w_1, \ldots, \psi' w_k,
+\alpha \tilde x_{k + 1}, \ldots, \alpha \tilde x_a]
+\end{eqnarray*}
+in $\det_\kappa(N)$. Note that
+$\varphi' w_1, \ldots, \varphi' w_k$
+and
+$\alpha z''_1, \ldots, \alpha z''_k$
+are admissible sequences generating the module
+$I_{\varphi'}/\alpha(I_\varphi)$. Write
+$$
+[\varphi' w_1, \ldots, \varphi' w_k]
+= \lambda_0 [\alpha z''_1, \ldots, \alpha z''_k]
+$$
+in $\det_\kappa(I_{\varphi'}/\alpha(I_\varphi))$
+for some $\lambda_0 \in \kappa^*$. Similarly,
+write
+$$
+[\psi' w_1, \ldots, \psi' w_k]
+= \lambda_1 [\alpha z'_1, \ldots, \alpha z'_k]
+$$
+in $\det_\kappa(I_{\psi'}/\alpha(I_\psi))$
+for some $\lambda_1 \in \kappa^*$. On the one hand
+it is clear that
+$$
+\alpha_i([w_1, \ldots, w_k]) = \lambda_i[z_1, \ldots, z_k]
+$$
+for $i = 0, 1$ by our description of $\alpha_i$ above,
+which means that
+$$
+\det\nolimits_\kappa(\alpha_0^{-1} \circ \alpha_1)
+=
+\lambda_1/\lambda_0
+$$
+and
+on the other hand it is clear that
+\begin{eqnarray*}
+& &
+\lambda_0 [\alpha x_{k + 1}, \ldots, \alpha x_a,
+\alpha z''_1, \ldots, \alpha z''_k,
+\alpha \tilde y_{k + 1}, \ldots, \alpha \tilde y_b] \\
+& = &
+[\alpha x_{k + 1}, \ldots, \alpha x_a,
+\varphi' w_1, \ldots, \varphi' w_k,
+\alpha \tilde y_{k + 1}, \ldots, \alpha \tilde y_b]
+\end{eqnarray*}
+and
+\begin{eqnarray*}
+& &
+\lambda_1[\alpha y_{k + 1}, \ldots, \alpha y_b,
+\alpha z'_1, \ldots, \alpha z'_k,
+\alpha \tilde x_{k + 1}, \ldots, \alpha \tilde x_a] \\
+& = &
+[\alpha y_{k + 1}, \ldots, \alpha y_b,
+\psi' w_1, \ldots, \psi' w_k,
+\alpha \tilde x_{k + 1}, \ldots, \alpha \tilde x_a]
+\end{eqnarray*}
+which imply $\lambda_0 D = \lambda_1 D'$. The lemma follows.
+\end{proof}
+
+
+
+
+
+
+
+
+\subsection{Symbols}
+\label{subsection-symbols}
+
+\noindent
+The correct generality for this construction is perhaps the
+situation of the following lemma.
+
+\begin{lemma}
+\label{lemma-pre-symbol}
+Let $A$ be a Noetherian local ring.
+Let $M$ be a finite $A$-module of dimension $1$.
+Assume $\varphi, \psi : M \to M$ are two injective
+$A$-module maps, and assume $\varphi(\psi(M)) = \psi(\varphi(M))$,
+for example if $\varphi$ and $\psi$ commute.
+Then $\text{length}_A(M/\varphi\psi M) < \infty$
+and $(M/\varphi\psi M, \varphi, \psi)$ is an exact
+$(2, 1)$-periodic complex.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q$ be a minimal prime of the support of $M$.
+Then $M_{\mathfrak q}$ is a finite length $A_{\mathfrak q}$-module,
+see Algebra, Lemma \ref{algebra-lemma-support-point}.
+Hence both $\varphi$ and $\psi$
+induce isomorphisms $M_{\mathfrak q} \to M_{\mathfrak q}$.
+Thus the support of $M/\varphi\psi M$ is $\{\mathfrak m_A\}$
+and hence it has finite length (see lemma cited above).
+Finally, the kernel of $\varphi$ on $M/\varphi\psi M$
+is clearly $\psi M/\varphi\psi M$, and hence the kernel
+of $\varphi$ is the image of $\psi$ on $M/\varphi\psi M$.
+Similarly the other way since $M/\varphi\psi M = M/\psi\varphi M$
+by assumption.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-symbol-defined}
+Let $A$ be a Noetherian local ring. Let $a, b \in A$.
+\begin{enumerate}
+\item If $M$ is a finite $A$-module of dimension $1$
+such that $a, b$ are nonzerodivisors on $M$, then
+$\text{length}_A(M/abM) < \infty$ and
+$(M/abM, a, b)$ is a $(2, 1)$-periodic exact complex.
+\item If $a, b$ are nonzerodivisors and $\dim(A) = 1$
+then $\text{length}_A(A/(ab)) < \infty$ and
+$(A/(ab), a, b)$ is a $(2, 1)$-periodic exact complex.
+\end{enumerate}
+In particular, in these cases
+$\det_\kappa(M/abM, a, b) \in \kappa^*$,
+resp.\ $\det_\kappa(A/(ab), a, b) \in \kappa^*$
+are defined.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-pre-symbol}.
+\end{proof}
+
+\begin{definition}
+\label{definition-symbol-M}
+Let $A$ be a Noetherian local ring with residue field $\kappa$.
+Let $a, b \in A$.
+Let $M$ be a finite $A$-module of dimension $1$
+such that $a, b$ are nonzerodivisors on $M$.
+We define the {\it symbol associated to $M, a, b$}
+to be the element
+$$
+d_M(a, b) =
+\det\nolimits_\kappa(M/abM, a, b) \in \kappa^*
+$$
+\end{definition}
+
+\begin{lemma}
+\label{lemma-multiplicativity-symbol}
+Let $A$ be a Noetherian local ring.
+Let $a, b, c \in A$. Let $M$ be a finite $A$-module
+with $\dim(\text{Supp}(M)) = 1$. Assume $a, b, c$ are nonzerodivisors on $M$.
+Then
+$$
+d_M(a, bc) = d_M(a, b) d_M(a, c)
+$$
+and $d_M(a, b)d_M(b, a) = 1$.
+\end{lemma}
+
+\begin{proof}
+The first statement follows from Lemma \ref{lemma-multiplicativity-determinant}
+applied to $M/abcM$ and endomorphisms $\alpha, \beta, \gamma$ given by
+multiplication by $a, b, c$.
+The second comes from Lemma \ref{lemma-periodic-determinant-shift}.
+\end{proof}
+
+\begin{definition}
+\label{definition-tame-symbol}
+Let $A$ be a Noetherian local domain of dimension $1$
+with residue field $\kappa$.
+Let $K$ be the fraction field of $A$.
+We define the {\it tame symbol} of $A$ to be the map
+$$
+K^* \times K^* \longrightarrow \kappa^*,
+\quad
+(x, y) \longmapsto d_A(x, y)
+$$
+where $d_A(x, y)$ is extended to $K^* \times K^*$ by the multiplicativity of
+Lemma \ref{lemma-multiplicativity-symbol}.
+\end{definition}
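+
+\noindent
+Concretely, if $x = a/b$ and $y = c/d$ with $a, b, c, d \in A$
+nonzero, then the multiplicativity of
+Lemma \ref{lemma-multiplicativity-symbol} forces
+$$
+d_A(x, y) = d_A(a, c)d_A(a, d)^{-1}d_A(b, c)^{-1}d_A(b, d)
+$$
+and one checks that this expression is independent of the chosen
+fractions.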
+
+\noindent
+It is clear that we may extend more generally $d_M(-, -)$ to
+certain rings of fractions of $A$ (even if $A$ is not a domain).
+
+\begin{lemma}
+\label{lemma-symbol-when-equal}
+Let $A$ be a Noetherian local ring and $M$ a finite $A$-module of
+dimension $1$. Let $a \in A$ be a nonzerodivisor on $M$.
+Then $d_M(a, a) = (-1)^{\text{length}_A(M/aM)}$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-periodic-determinant-sign}.
+\end{proof}
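+
+\noindent
+For instance, if $A$ is a discrete valuation ring with uniformizer $t$
+and $M = A$, then $\text{length}_A(A/tA) = 1$ and the lemma gives
+$$
+d_A(t, t) = \det\nolimits_\kappa(A/t^2A, t, t) = -1.
+$$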
+
+\begin{lemma}
+\label{lemma-symbol-when-one-is-a-unit}
+Let $A$ be a Noetherian local ring.
+Let $M$ be a finite $A$-module of dimension $1$.
+Let $b \in A$ be a nonzerodivisor on $M$, and let $u \in A^*$.
+Then
+$$
+d_M(u, b) = u^{\text{length}_A(M/bM)} \bmod \mathfrak m_A.
+$$
+In particular, if $M = A$, then
+$d_A(u, b) = u^{\text{ord}_A(b)} \bmod \mathfrak m_A$.
+\end{lemma}
+
+\begin{proof}
+Note that in this case $M/ubM = M/bM$ on which multiplication
+by $b$ is zero. Hence $d_M(u, b) = \det_\kappa(u|_{M/bM})$
+by Lemma \ref{lemma-periodic-determinant-easy-case}. The lemma
+then follows from Lemma \ref{lemma-times-u-determinant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-symbol-short-exact-sequence}
+Let $A$ be a Noetherian local ring.
+Let $a, b \in A$.
+Let
+$$
+0 \to M \to M' \to M'' \to 0
+$$
+be a short exact sequence of $A$-modules of dimension $1$
+such that $a, b$ are nonzerodivisors on
+all three $A$-modules.
+Then
+$$
+d_{M'}(a, b) = d_M(a, b) d_{M''}(a, b)
+$$
+in $\kappa^*$.
+\end{lemma}
+
+\begin{proof}
+It is easy to see that this leads to a short exact sequence
+of exact $(2, 1)$-periodic complexes
+$$
+0 \to
+(M/abM, a, b) \to
+(M'/abM', a, b) \to
+(M''/abM'', a, b) \to 0
+$$
+Hence the lemma follows from Lemma \ref{lemma-periodic-determinant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-symbol-compare-modules}
+Let $A$ be a Noetherian local ring.
+Let $\alpha : M \to M'$ be a homomorphism of
+finite $A$-modules of dimension $1$.
+Let $a, b \in A$. Assume
+\begin{enumerate}
+\item $a$, $b$ are nonzerodivisors on both $M$ and $M'$, and
+\item $\dim(\Ker(\alpha)), \dim(\Coker(\alpha)) \leq 0$.
+\end{enumerate}
+Then $d_M(a, b) = d_{M'}(a, b)$.
+\end{lemma}
+
+\begin{proof}
+If $a \in A^*$, then the equality follows from the
+equality $\text{length}(M/bM) = \text{length}(M'/bM')$
+and Lemma \ref{lemma-symbol-when-one-is-a-unit}.
+Similarly if $b$ is a unit the lemma holds as well
+(by the symmetry of Lemma \ref{lemma-multiplicativity-symbol}).
+Hence we may assume that $a, b \in \mathfrak m_A$.
+This in particular implies that $\mathfrak m$ is not
+an associated prime of $M$, and hence $\alpha : M \to M'$
+is injective. This permits us to think of $M$ as a submodule of $M'$.
+By assumption $M'/M$ is a finite $A$-module with support
+$\{\mathfrak m_A\}$ and hence has finite length.
+Note that for any third module $M''$ with $M \subset M'' \subset M'$
+the maps $M \to M''$ and $M'' \to M'$ satisfy the assumptions of the lemma
+as well. This reduces us, by induction on the length of $M'/M$,
+to the case where $\text{length}_A(M'/M) = 1$.
+Finally, in this case consider the map
+$$
+\overline{\alpha} : M/abM \longrightarrow M'/abM'.
+$$
+By construction the cokernel $Q$ of $\overline{\alpha}$ has
+length $1$. Since $a, b \in \mathfrak m_A$, they act trivially on
+$Q$. It also follows that the kernel $K$ of $\overline{\alpha}$ has
+length $1$ and hence also $a$, $b$ act trivially on $K$.
+Hence we may apply Lemma \ref{lemma-tricky}. Thus it suffices to see
+that the two maps $\alpha_i : Q \to K$ are the same.
+In fact, both maps send the class $q = x' \bmod \Im(\overline{\alpha})$
+to the element $abx' \in K$.
+We omit the verification.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-symbol-M}
+Let $A$ be a Noetherian local ring.
+Let $M$ be a finite $A$-module with $\dim(\text{Supp}(M)) = 1$.
+Let $a, b \in A$ nonzerodivisors on $M$.
+Let $\mathfrak q_1, \ldots, \mathfrak q_t$ be the minimal
+primes in the support of $M$. Then
+$$
+d_M(a, b)
+=
+\prod\nolimits_{i = 1, \ldots, t}
+d_{A/\mathfrak q_i}(a, b)^{
+\text{length}_{A_{\mathfrak q_i}}(M_{\mathfrak q_i})}
+$$
+as elements of $\kappa^*$.
+\end{lemma}
+
+\begin{proof}
+Choose a filtration by $A$-submodules
+$$
+0 = M_0 \subset M_1 \subset \ldots \subset M_n = M
+$$
+such that each quotient $M_j/M_{j - 1}$ is isomorphic
+to $A/\mathfrak p_j$ for some prime ideal $\mathfrak p_j$
+of $A$. See Algebra, Lemma \ref{algebra-lemma-filter-Noetherian-module}.
+For each $j$ we have either $\mathfrak p_j = \mathfrak q_i$
+for some $i$, or $\mathfrak p_j = \mathfrak m_A$. Moreover,
+for a fixed $i$, the number of $j$ such that
+$\mathfrak p_j = \mathfrak q_i$ is equal to
+$\text{length}_{A_{\mathfrak q_i}}(M_{\mathfrak q_i})$ by
+Algebra, Lemma \ref{algebra-lemma-filter-minimal-primes-in-support}.
+Hence $d_{M_j}(a, b)$ is defined for each $j$ and
+$$
+d_{M_j}(a, b)
+=
+\left\{
+\begin{matrix}
+d_{M_{j - 1}}(a, b) d_{A/\mathfrak q_i}(a, b) &
+\text{if} & \mathfrak p_j = \mathfrak q_i \\
+d_{M_{j - 1}}(a, b) & \text{if} & \mathfrak p_j = \mathfrak m_A
+\end{matrix}
+\right.
+$$
+by Lemma \ref{lemma-symbol-short-exact-sequence} in the first instance
+and Lemma \ref{lemma-symbol-compare-modules} in the second. Hence the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-symbol-is-usual-tame-symbol}
+Let $A$ be a discrete valuation ring with fraction field $K$.
+For nonzero $x, y \in K$ we have
+$$
+d_A(x, y)
+=
+(-1)^{\text{ord}_A(x)\text{ord}_A(y)}
+\frac{x^{\text{ord}_A(y)}}{y^{\text{ord}_A(x)}} \bmod \mathfrak m_A,
+$$
+in other words the symbol is equal to the usual tame symbol.
+\end{lemma}
+
+\begin{proof}
+By multiplicativity it suffices to prove this when $x, y \in A$.
+Let $t \in A$ be a uniformizer.
+Write $x = t^au$ and $y = t^bv$ for some $a, b \geq 0$
+and $u, v \in A^*$. Set $l = a + b$. Then
+$t^{l - 1}, \ldots, t^a$ is an admissible sequence in
+$(x)/(xy)$ and $t^{l - 1}, \ldots, t^b$ is an admissible
+sequence in $(y)/(xy)$. Hence by Remark \ref{remark-more-elementary}
+we see that $d_A(x, y)$ is characterized by the equation
+$$
+[t^{l - 1}, \ldots, t^a, v^{-1}t^{a - 1}, \ldots, v^{-1}]
+=
+(-1)^{ab} d_A(x, y)
+[t^{l - 1}, \ldots, t^b, u^{-1}t^{b - 1}, \ldots, u^{-1}].
+$$
+Hence by the admissible relations for the
+symbols $[x_1, \ldots, x_l]$ we see that
+$$
+d_A(x, y) = (-1)^{ab} u^b/v^a \bmod \mathfrak m_A
+$$
+as desired.
+\end{proof}
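+
+\noindent
+As a sanity check, if $u \in A^*$ and $t$ is a uniformizer, then for
+$y = t^n$ the formula of the lemma gives
+$d_A(u, t^n) = u^n \bmod \mathfrak m_A$, in agreement with
+Lemma \ref{lemma-symbol-when-one-is-a-unit}.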
+
+\begin{lemma}
+\label{lemma-symbol-is-steinberg-prepare}
+Let $A$ be a Noetherian local ring.
+Let $a, b \in A$.
+Let $M$ be a finite $A$-module of dimension $1$ on
+which each of $a$, $b$, $b - a$ are nonzerodivisors.
+Then
+$$
+d_M(a, b - a)d_M(b, b) = d_M(b, b - a)d_M(a, b)
+$$
+in $\kappa^*$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-compute-symbol-M} it suffices to show the relation when
+$M = A/\mathfrak q$ for some prime $\mathfrak q \subset A$ with
+$\dim(A/\mathfrak q) = 1$.
+
+\medskip\noindent
+In case $M = A/\mathfrak q$ we may replace $A$ by $A/\mathfrak q$ and
+$a, b$ by their images in $A/\mathfrak q$. Hence we may assume $A = M$
+and that $A$ is a Noetherian local domain of dimension $1$. The reason is
+that the residue fields of $A$ and $A/\mathfrak q$ are
+the same and that for any $A/\mathfrak q$-module $M$ the determinants
+taken over $A$ or over $A/\mathfrak q$ are canonically identified.
+See Lemma \ref{lemma-determinant-quotient-ring}.
+
+\medskip\noindent
+It suffices to show the relation when both
+$a, b$ are in the maximal ideal. Namely, the case where one
+or both are units follows from Lemmas \ref{lemma-symbol-when-one-is-a-unit}
+and \ref{lemma-symbol-when-equal}.
+
+\medskip\noindent
+Choose an extension $A \subset A'$ and factorizations
+$a = ta'$, $b = tb'$ as in
+Lemma \ref{lemma-Noetherian-domain-dim-1-two-elements}.
+Note that also $b - a = t(b' - a')$ and that
+$A' = (a', b') = (a', b' - a') = (b' - a', b')$.
+Here and in the following we think of $A'$ as an $A$-module and
+$a, b, a', b', t$ as $A$-module endomorphisms of $A'$.
+We will use the notation $d^A_{A'}(a', b')$ and so on to indicate
+$$
+d^A_{A'}(a', b')
+=
+\det\nolimits_\kappa(A'/a'b'A', a', b')
+$$
+which is defined by Lemma \ref{lemma-pre-symbol}. The upper index ${}^A$
+is used to distinguish this from the already defined symbol
+$d_{A'}(a', b')$ which is different (for example because it has values
+in the residue field of $A'$ which may be different from $\kappa$).
+By Lemma \ref{lemma-symbol-compare-modules} we see that
+$d_A(a, b) = d^A_{A'}(a, b)$,
+and similarly for the other combinations.
+Using this and multiplicativity we see that it suffices to prove
+$$
+d^A_{A'}(a', b' - a') d^A_{A'}(b', b')
+=
+d^A_{A'}(b', b' - a') d^A_{A'}(a', b')
+$$
+Now, since $(a', b') = A'$ and so on we have
+$$
+\begin{matrix}
+A'/(a'(b' - a')) & \cong & A'/(a') \oplus A'/(b' - a') \\
+A'/(b'(b' - a')) & \cong & A'/(b') \oplus A'/(b' - a') \\
+A'/(a'b') & \cong & A'/(a') \oplus A'/(b')
+\end{matrix}
+$$
+Moreover, note that multiplication by $b' - a'$ on
+$A'/(a')$ is equal to multiplication by $b'$, and that
+multiplication by $b' - a'$ on $A'/(b')$ is equal to multiplication by $-a'$.
+Using Lemmas
+\ref{lemma-periodic-determinant-easy-case} and
+\ref{lemma-periodic-determinant}
+we conclude
+$$
+\begin{matrix}
+d^A_{A'}(a', b' - a') & = &
+\det\nolimits_\kappa(b'|_{A'/(a')})^{-1}
+\det\nolimits_\kappa(a'|_{A'/(b' - a')}) \\
+d^A_{A'}(b', b' - a') & = &
+\det\nolimits_\kappa(-a'|_{A'/(b')})^{-1}
+\det\nolimits_\kappa(b'|_{A'/(b' - a')}) \\
+d^A_{A'}(a', b') & = &
+\det\nolimits_\kappa(b'|_{A'/(a')})^{-1}
+\det\nolimits_\kappa(a'|_{A'/(b')})
+\end{matrix}
+$$
+Hence we conclude that
+$$
+(-1)^{\text{length}_A(A'/(b'))}
+d^A_{A'}(a', b' - a')
+=
+d^A_{A'}(b', b' - a') d^A_{A'}(a', b')
+$$
+the sign coming from the $-a'$ in the second equality above.
+On the other hand, by Lemma \ref{lemma-periodic-determinant-sign} we have
+$d^A_{A'}(b', b') = (-1)^{\text{length}_A(A'/(b'))}$ and the lemma
+is proved.
+\end{proof}
+
+\noindent
+The tame symbol is a Steinberg symbol.
+
+\begin{lemma}
+\label{lemma-symbol-is-steinberg}
+Let $A$ be a Noetherian local domain of dimension $1$
+with fraction field $K$. For $x \in K \setminus \{0, 1\}$
+we have
+$$
+d_A(x, 1 -x) = 1
+$$
+\end{lemma}
+
+\begin{proof}
+Write $x = a/b$ with $a, b \in A$ nonzero.
+The hypothesis implies, since $1 - x = (b - a)/b$,
+that also $b - a \not = 0$. Hence we compute
+$$
+d_A(x, 1 - x)
+=
+d_A(a, b - a)d_A(a, b)^{-1}d_A(b, b - a)^{-1}d_A(b, b)
+$$
+Thus we have to show that
+$d_A(a, b - a) d_A(b, b) = d_A(b, b - a) d_A(a, b)$.
+This is Lemma \ref{lemma-symbol-is-steinberg-prepare}.
+\end{proof}
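+
+\noindent
+When $A$ is a discrete valuation ring this can also be deduced from
+Lemma \ref{lemma-symbol-is-usual-tame-symbol}. For example, if
+$a = \text{ord}_A(x) > 0$, then $\text{ord}_A(1 - x) = 0$ and
+$1 - x \equiv 1 \bmod \mathfrak m_A$, so that
+$$
+d_A(x, 1 - x) = \frac{x^0}{(1 - x)^a} \equiv 1 \bmod \mathfrak m_A.
+$$
+The cases $\text{ord}_A(x) = 0$ and $\text{ord}_A(x) < 0$ are treated
+similarly.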
+
+
+
+
+
+
+
+
+
+
+\subsection{Lengths and determinants}
+\label{subsection-length-determinant}
+
+\noindent
+In this section we use the determinant to compare lattices.
+The key lemma is the following.
+
+\begin{lemma}
+\label{lemma-key-lemma}
+Let $R$ be a Noetherian local ring.
+Let $\mathfrak q \subset R$ be a prime with $\dim(R/\mathfrak q) = 1$.
+Let $\varphi : M \to N$ be a homomorphism of finite $R$-modules.
+Assume there exist $x_1, \ldots, x_l \in M$ and $y_1, \ldots, y_l \in N$
+with the following properties
+\begin{enumerate}
+\item $M = \langle x_1, \ldots, x_l\rangle$,
+\item $\langle x_1, \ldots, x_i\rangle / \langle x_1, \ldots, x_{i - 1}\rangle
+\cong R/\mathfrak q$ for $i = 1, \ldots, l$,
+\item $N = \langle y_1, \ldots, y_l\rangle$, and
+\item $\langle y_1, \ldots, y_i\rangle / \langle y_1, \ldots, y_{i - 1}\rangle
+\cong R/\mathfrak q$ for $i = 1, \ldots, l$.
+\end{enumerate}
+Then $\varphi$ is injective if and only if $\varphi_{\mathfrak q}$ is an
+isomorphism, and in this case we have
+$$
+\text{length}_R(\Coker(\varphi)) = \text{ord}_{R/\mathfrak q}(f)
+$$
+where $f \in \kappa(\mathfrak q)$ is the element such that
+$$
+[\varphi(x_1), \ldots, \varphi(x_l)] = f [y_1, \ldots, y_l]
+$$
+in $\det_{\kappa(\mathfrak q)}(N_{\mathfrak q})$.
+\end{lemma}
+
+\begin{proof}
+First, note that the lemma holds in case $l = 1$.
+Namely, in this case $x_1$ is a basis of $M$ over $R/\mathfrak q$
+and $y_1$ is a basis of $N$ over $R/\mathfrak q$ and we have
+$\varphi(x_1) = fy_1$ for some $f \in R$. Thus $\varphi$ is injective
+if and only if $f \not \in \mathfrak q$. Moreover,
+$\Coker(\varphi) = R/(f, \mathfrak q)$ and hence the lemma
+holds by definition of $\text{ord}_{R/\mathfrak q}(f)$
+(see Algebra, Definition \ref{algebra-definition-ord}).
+
+\medskip\noindent
+In fact, suppose more generally that $\varphi(x_i) = f_iy_i$ for some
+$f_i \in R$, $f_i \not \in \mathfrak q$. Then the induced maps
+$$
+\langle x_1, \ldots, x_i\rangle / \langle x_1, \ldots, x_{i - 1}\rangle
+\longrightarrow
+\langle y_1, \ldots, y_i\rangle / \langle y_1, \ldots, y_{i - 1}\rangle
+$$
+are all injective and have cokernels isomorphic to
+$R/(f_i, \mathfrak q)$. Hence we see that
+$$
+\text{length}_R(\Coker(\varphi)) = \sum \text{ord}_{R/\mathfrak q}(f_i).
+$$
+On the other hand it is clear that
+$$
+[\varphi(x_1), \ldots, \varphi(x_l)] = f_1 \ldots f_l [y_1, \ldots, y_l]
+$$
+in this case from the admissible relation (b) for symbols.
+Hence we see the result holds in this case also.
+
+\medskip\noindent
+We prove the general case by induction on $l$. Assume $l > 1$.
+Let $i \in \{1, \ldots, l\}$ be minimal such that
+$\varphi(x_1) \in \langle y_1, \ldots, y_i\rangle$.
+We will argue by induction on $i$.
+If $i = 1$, then we get a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\langle x_1 \rangle \ar[r] \ar[d] &
+\langle x_1, \ldots, x_l \rangle \ar[r] \ar[d] &
+\langle x_1, \ldots, x_l \rangle / \langle x_1 \rangle \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\langle y_1 \rangle \ar[r] &
+\langle y_1, \ldots, y_l \rangle \ar[r] &
+\langle y_1, \ldots, y_l \rangle / \langle y_1 \rangle \ar[r] &
+0
+}
+$$
+and the lemma follows from the snake lemma and induction on $l$.
+Assume now that $i > 1$.
+Write $\varphi(x_1) = a_1 y_1 + \ldots + a_{i - 1} y_{i - 1} + a y_i$
+with $a_j, a \in R$ and $a \not \in \mathfrak q$ (since otherwise
+$i$ was not minimal). Set
+$$
+x'_j =
+\left\{
+\begin{matrix}
+x_j & \text{if} & j = 1 \\
+ax_j & \text{if} & j \geq 2
+\end{matrix}
+\right.
+\quad\text{and}\quad
+y'_j =
+\left\{
+\begin{matrix}
+y_j & \text{if} & j < i \\
+ay_j & \text{if} & j \geq i
+\end{matrix}
+\right.
+$$
+Let $M' = \langle x'_1, \ldots, x'_l \rangle$ and
+$N' = \langle y'_1, \ldots, y'_l \rangle$.
+Since $\varphi(x'_1) = a_1 y'_1 + \ldots + a_{i - 1} y'_{i - 1} + y'_i$
+by construction and since for $j > 1$ we have
+$\varphi(x'_j) = a\varphi(x_j) \in \langle y'_1, \ldots, y'_l\rangle$
+we get a commutative diagram of $R$-modules and maps
+$$
+\xymatrix{
+M' \ar[d] \ar[r]_{\varphi'} & N' \ar[d] \\
+M \ar[r]^\varphi & N
+}
+$$
+By the result of the second paragraph of the proof we know
+that $\text{length}_R(M/M') = (l - 1)\text{ord}_{R/\mathfrak q}(a)$
+and similarly
+$\text{length}_R(N/N') = (l - i + 1)\text{ord}_{R/\mathfrak q}(a)$.
+By a diagram chase this implies that
+$$
+\text{length}_R(\Coker(\varphi')) =
+\text{length}_R(\Coker(\varphi)) + (i - 2)\text{ord}_{R/\mathfrak q}(a).
+$$
+On the other hand, it is clear that writing
+$$
+[\varphi(x_1), \ldots, \varphi(x_l)] = f [y_1, \ldots, y_l],
+\quad
+[\varphi'(x'_1), \ldots, \varphi'(x'_l)] = f' [y'_1, \ldots, y'_l]
+$$
+we have $f' = a^{i - 2}f$. Hence it suffices to prove the lemma for the
+case that $\varphi(x_1) = a_1y_1 + \ldots + a_{i - 1}y_{i - 1} + y_i$,
+i.e., in the case that $a = 1$. Next, recall that
+$$
+[y_1, \ldots, y_l] = [y_1, \ldots, y_{i - 1},
+a_1y_1 + \ldots + a_{i - 1}y_{i - 1} + y_i, y_{i + 1}, \ldots, y_l]
+$$
+by the admissible relations for symbols. The sequence
+$y_1, \ldots, y_{i - 1},
+a_1y_1 + \ldots + a_{i - 1}y_{i - 1} + y_i, y_{i + 1}, \ldots, y_l$
+satisfies the conditions (3), (4) of the lemma also.
+Hence, we may actually
+assume that $\varphi(x_1) = y_i$. In this case, note that we have
+$\mathfrak q x_1 = 0$ which implies also $\mathfrak q y_i = 0$.
+We have
+$$
+[y_1, \ldots, y_l] =
+- [y_1, \ldots, y_{i - 2}, y_i, y_{i - 1}, y_{i + 1}, \ldots, y_l]
+$$
+by the third of the admissible relations defining
+$\det_{\kappa(\mathfrak q)}(N_{\mathfrak q})$. Hence we may
+replace $y_1, \ldots, y_l$ by
+the sequence
+$y'_1, \ldots, y'_l =
+y_1, \ldots, y_{i - 2}, y_i, y_{i - 1}, y_{i + 1}, \ldots, y_l$
+(which also satisfies conditions (3) and (4) of the lemma).
+Clearly this decreases the invariant $i$ by $1$ and we win by induction
+on $i$.
+\end{proof}
+
+\noindent
+To use the previous lemma we show that often sequences of elements
+with the required properties exist.
+
+\begin{lemma}
+\label{lemma-good-sequence-exists}
+Let $R$ be a local Noetherian ring.
+Let $\mathfrak q \subset R$ be a prime ideal.
+Let $M$ be a finite $R$-module such that
+$\mathfrak q$ is one of the minimal primes of the support of $M$.
+Then there exist $x_1, \ldots, x_l \in M$ such that
+\begin{enumerate}
+\item the support of $M / \langle x_1, \ldots, x_l\rangle$ does not contain
+$\mathfrak q$, and
+\item $\langle x_1, \ldots, x_i\rangle / \langle x_1, \ldots, x_{i - 1}\rangle
+\cong R/\mathfrak q$ for $i = 1, \ldots, l$.
+\end{enumerate}
+Moreover, in this case $l = \text{length}_{R_\mathfrak q}(M_\mathfrak q)$.
+\end{lemma}
+
+\begin{proof}
+The condition that $\mathfrak q$ is a minimal prime in the support
+of $M$ implies that $l = \text{length}_{R_\mathfrak q}(M_\mathfrak q)$
+is finite (see Algebra, Lemma \ref{algebra-lemma-support-point}).
+Hence we can find $y_1, \ldots, y_l \in M_{\mathfrak q}$
+such that
+$\langle y_1, \ldots, y_i\rangle / \langle y_1, \ldots, y_{i - 1}\rangle
+\cong \kappa(\mathfrak q)$ for $i = 1, \ldots, l$.
+We can find $f_i \in R$, $f_i \not \in \mathfrak q$ such that
+$f_i y_i$ is the image of some element $z_i \in M$.
+Moreover, as $R$ is Noetherian we can write
+$\mathfrak q = (g_1, \ldots, g_t)$ for some $g_j \in R$.
+By assumption $g_j y_i \in \langle y_1, \ldots, y_{i - 1} \rangle$
+inside the module $M_{\mathfrak q}$.
+By our choice of $z_i$ we can find some further elements
+$f_{ij} \in R$, $f_{ij} \not \in \mathfrak q$ such that
+$f_{ij} g_j z_i \in \langle z_1, \ldots, z_{i - 1} \rangle$
+(a containment inside the module $M$).
+The lemma follows by taking
+$$
+x_1 = f_{11}f_{12}\ldots f_{1t}z_1,
+\quad
+x_2 = f_{11}f_{12}\ldots f_{1t}f_{21}f_{22}\ldots f_{2t}z_2,
+$$
+and so on. Namely, since all the elements $f_i, f_{ij}$ are invertible
+in $R_{\mathfrak q}$ we still have that
+$(R_{\mathfrak q}x_1 + \ldots + R_{\mathfrak q}x_i) /
+(R_{\mathfrak q}x_1 + \ldots + R_{\mathfrak q}x_{i - 1})
+\cong \kappa(\mathfrak q)$ for $i = 1, \ldots, l$.
+By construction, $\mathfrak q x_i \subset \langle x_1, \ldots, x_{i - 1}\rangle$.
+Thus $\langle x_1, \ldots, x_i\rangle / \langle x_1, \ldots, x_{i - 1}\rangle$
+is an $R$-module generated by one element, annihilated by $\mathfrak q$,
+and such that localizing at $\mathfrak q$ gives a one-dimensional
+vector space over $\kappa(\mathfrak q)$.
+Hence it is isomorphic to $R/\mathfrak q$.
+\end{proof}
+
+\noindent
+Here is the main result of this section.
+We will see below various consequences of this proposition.
+The reader is encouraged to first prove the easier
+Lemma \ref{lemma-application-herbrand-quotient} for themselves.
+
+\begin{proposition}
+\label{proposition-length-determinant-periodic-complex}
+Let $R$ be a local Noetherian ring with residue field $\kappa$.
+Suppose that $(M, \varphi, \psi)$ is a $(2, 1)$-periodic
+complex over $R$. Assume
+\begin{enumerate}
+\item $M$ is a finite $R$-module,
+\item the cohomology modules of $(M, \varphi, \psi)$ are of finite length, and
+\item $\dim(\text{Supp}(M)) = 1$.
+\end{enumerate}
+Let $\mathfrak q_i$, $i = 1, \ldots, t$ be the minimal
+primes of the support of $M$. Then we have\footnote{
+Obviously we could get rid of the minus sign by redefining
+$\det_\kappa(M, \varphi, \psi)$ as the inverse of its
+current value, see Definition \ref{definition-periodic-determinant}.}
+$$
+- e_R(M, \varphi, \psi) =
+\sum\nolimits_{i = 1, \ldots, t}
+\text{ord}_{R/\mathfrak q_i}\left(
+\det\nolimits_{\kappa(\mathfrak q_i)}
+(M_{\mathfrak q_i}, \varphi_{\mathfrak q_i}, \psi_{\mathfrak q_i})
+\right)
+$$
+\end{proposition}
+
+\begin{proof}
+We first reduce to the case $t = 1$ in the following way.
+Note that
+$\text{Supp}(M) = \{\mathfrak m, \mathfrak q_1, \ldots, \mathfrak q_t\}$,
+where $\mathfrak m \subset R$ is the maximal ideal.
+Let $M_i$ denote the image of $M \to M_{\mathfrak q_i}$,
+so $\text{Supp}(M_i) = \{\mathfrak m, \mathfrak q_i\}$.
+The map $\varphi$ (resp.\ $\psi$) induces an $R$-module map
+$\varphi_i : M_i \to M_i$ (resp.\ $\psi_i : M_i \to M_i$).
+Thus we get a morphism of $(2, 1)$-periodic complexes
+$$
+(M, \varphi, \psi) \longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, t} (M_i, \varphi_i, \psi_i).
+$$
+The kernel and cokernel of this map have support contained in
+$\{\mathfrak m\}$. Hence by Lemma \ref{lemma-compare-periodic-lengths}
+we have
+$$
+e_R(M, \varphi, \psi) =
+\sum\nolimits_{i = 1, \ldots, t}
+e_R(M_i, \varphi_i, \psi_i)
+$$
+On the other hand we clearly have $M_{\mathfrak q_i} = M_{i, \mathfrak q_i}$,
+and hence the terms of the right hand side of the formula of the
+proposition are equal to the expressions
+$$
+\text{ord}_{R/\mathfrak q_i}\left(
+\det\nolimits_{\kappa(\mathfrak q_i)}
+(M_{i, \mathfrak q_i}, \varphi_{i, \mathfrak q_i}, \psi_{i, \mathfrak q_i})
+\right)
+$$
+In other words, if we can prove the proposition for each of the modules
+$M_i$, then the proposition holds. This reduces us to the case $t = 1$.
+
+\medskip\noindent
+Assume we have a $(2, 1)$-periodic complex $(M, \varphi, \psi)$
+over the Noetherian local ring $R$ with $M$ a finite $R$-module,
+$\text{Supp}(M) = \{\mathfrak m, \mathfrak q\}$, and
+finite length cohomology modules. The proof in this case
+follows from Lemma \ref{lemma-key-lemma} and careful bookkeeping.
+Denote
+$K_\varphi = \Ker(\varphi)$,
+$I_\varphi = \Im(\varphi)$,
+$K_\psi = \Ker(\psi)$, and
+$I_\psi = \Im(\psi)$.
+Since $R$ is Noetherian these are all finite $R$-modules.
+Set
+$$
+a = \text{length}_{R_{\mathfrak q}}(I_{\varphi, \mathfrak q})
+= \text{length}_{R_{\mathfrak q}}(K_{\psi, \mathfrak q}),
+\quad
+b = \text{length}_{R_{\mathfrak q}}(I_{\psi, \mathfrak q})
+= \text{length}_{R_{\mathfrak q}}(K_{\varphi, \mathfrak q}).
+$$
+These equalities hold because the complex becomes exact after localizing
+at $\mathfrak q$. Note that
+$l = \text{length}_{R_{\mathfrak q}}(M_{\mathfrak q})$ is equal to $a + b$.
+
+\medskip\noindent
+We are going to use Lemma \ref{lemma-good-sequence-exists}
+to choose sequences of elements in finite $R$-modules
+$N$ with support contained in $\{\mathfrak m, \mathfrak q\}$.
+In this case $N_{\mathfrak q}$ has finite length, say $n \in \mathbf{N}$.
+Let us call a sequence $w_1, \ldots, w_n \in N$
+with properties (1) and (2) of Lemma \ref{lemma-good-sequence-exists}
+a ``good sequence''. Note that the quotient
+$N/\langle w_1, \ldots, w_n \rangle$ of $N$ by the submodule generated by
+a good sequence has support (contained in) $\{\mathfrak m\}$
+and hence has finite length (Algebra, Lemma \ref{algebra-lemma-support-point}).
+Moreover, the symbol
+$[w_1, \ldots, w_n] \in \det_{\kappa(\mathfrak q)}(N_{\mathfrak q})$
+is a generator, see Lemma \ref{lemma-determinant-dimension-one}.
+
+\medskip\noindent
+Having said this we choose good sequences
+$$
+\begin{matrix}
+x_1, \ldots, x_b & \text{in} & K_\varphi, &
+t_1, \ldots, t_a & \text{in} & K_\psi, \\
+y_1, \ldots, y_a & \text{in} & I_\varphi \cap \langle t_1, \ldots, t_a\rangle, &
+s_1, \ldots, s_b & \text{in} & I_\psi \cap \langle x_1, \ldots, x_b\rangle.
+\end{matrix}
+$$
+We will adjust our choices a little bit as follows.
+Choose lifts $\tilde y_i \in M$ of $y_i \in I_\varphi$
+and $\tilde s_i \in M$ of $s_i \in I_\psi$. It may not be the case
+that $\mathfrak q \tilde y_1 \subset \langle x_1, \ldots, x_b\rangle$
+and it may not be the case that
+$\mathfrak q \tilde s_1 \subset \langle t_1, \ldots, t_a\rangle$.
+However, using that $\mathfrak q$ is finitely generated (as in the proof
+of Lemma \ref{lemma-good-sequence-exists}) we can find a
+$d \in R$, $d \not \in \mathfrak q$ such that
+$\mathfrak q d\tilde y_1 \subset \langle x_1, \ldots, x_b\rangle$
+and
+$\mathfrak q d\tilde s_1 \subset \langle t_1, \ldots, t_a\rangle$.
+Thus after replacing $y_i$ by $dy_i$,
+$\tilde y_i$ by $d\tilde y_i$, $s_i$ by $ds_i$ and $\tilde s_i$
+by $d\tilde s_i$ we see that we may assume also that
+$x_1, \ldots, x_b, \tilde y_1, \ldots, \tilde y_a$
+and $t_1, \ldots, t_a, \tilde s_1, \ldots, \tilde s_b$
+are good sequences in $M$.
+
+\medskip\noindent
+Finally, we choose a good sequence
+$z_1, \ldots, z_l$ in the finite $R$-module
+$$
+\langle
+x_1, \ldots, x_b, \tilde y_1, \ldots, \tilde y_a
+\rangle
+\cap
+\langle
+t_1, \ldots, t_a, \tilde s_1, \ldots, \tilde s_b
+\rangle.
+$$
+Note that this is also a good sequence in $M$.
+
+\medskip\noindent
+Since $I_{\varphi, \mathfrak q} = K_{\psi, \mathfrak q}$
+there is a unique element $h \in \kappa(\mathfrak q)$ such that
+$[y_1, \ldots, y_a] = h [t_1, \ldots, t_a]$
+inside $\det_{\kappa(\mathfrak q)}(K_{\psi, \mathfrak q})$.
+Similarly, as $I_{\psi, \mathfrak q} = K_{\varphi, \mathfrak q}$
+there is a unique element $g \in \kappa(\mathfrak q)$ such that
+$[s_1, \ldots, s_b] = g [x_1, \ldots, x_b]$
+inside $\det_{\kappa(\mathfrak q)}(K_{\varphi, \mathfrak q})$.
+We can also do this with the three good sequences we have
+in $M$. All in all we get the following identities
+\begin{align*}
+[y_1, \ldots, y_a]
+& =
+h [t_1, \ldots, t_a] \\
+[s_1, \ldots, s_b]
+& =
+g [x_1, \ldots, x_b] \\
+[z_1, \ldots, z_l]
+& =
+f_\varphi [x_1, \ldots, x_b, \tilde y_1, \ldots, \tilde y_a] \\
+[z_1, \ldots, z_l]
+& =
+f_\psi [t_1, \ldots, t_a, \tilde s_1, \ldots, \tilde s_b]
+\end{align*}
+for some $g, h, f_\varphi, f_\psi \in \kappa(\mathfrak q)^*$.
+
+\medskip\noindent
+Having set up all this
+notation let us compute $\det_{\kappa(\mathfrak q)}(M, \varphi, \psi)$.
+Namely, consider the element $[z_1, \ldots, z_l]$.
+Under the map $\gamma_\psi \circ \sigma \circ \gamma_\varphi^{-1}$
+of Definition \ref{definition-periodic-determinant} we have
+\begin{eqnarray*}
+[z_1, \ldots, z_l] & = &
+f_\varphi [x_1, \ldots, x_b, \tilde y_1, \ldots, \tilde y_a] \\
+& \mapsto & f_\varphi [x_1, \ldots, x_b] \otimes [y_1, \ldots, y_a] \\
+& \mapsto &
+f_\varphi h/g [t_1, \ldots, t_a] \otimes [s_1, \ldots, s_b] \\
+& \mapsto &
+f_\varphi h/g [t_1, \ldots, t_a, \tilde s_1, \ldots, \tilde s_b] \\
+& = &
+f_\varphi h/f_\psi g [z_1, \ldots, z_l]
+\end{eqnarray*}
+This means that
+$\det_{\kappa(\mathfrak q)}
+(M_{\mathfrak q}, \varphi_{\mathfrak q}, \psi_{\mathfrak q})$
+is equal to $f_\varphi h/f_\psi g$ up to a sign.
+
+\medskip\noindent
+We abbreviate the following quantities
+\begin{eqnarray*}
+k_\varphi & = & \text{length}_R(K_\varphi/\langle x_1, \ldots, x_b\rangle) \\
+k_\psi & = & \text{length}_R(K_\psi/\langle t_1, \ldots, t_a\rangle) \\
+i_\varphi & = & \text{length}_R(I_\varphi/\langle y_1, \ldots, y_a\rangle) \\
+i_\psi & = & \text{length}_R(I_\psi/\langle s_1, \ldots, s_b\rangle) \\
+m_\varphi & = & \text{length}_R(M/
+\langle x_1, \ldots, x_b, \tilde y_1, \ldots, \tilde y_a\rangle) \\
+m_\psi & = & \text{length}_R(M/
+\langle t_1, \ldots, t_a, \tilde s_1, \ldots, \tilde s_b\rangle) \\
+\delta_\varphi & = & \text{length}_R(
+\langle x_1, \ldots, x_b, \tilde y_1, \ldots, \tilde y_a\rangle /
+\langle z_1, \ldots, z_l\rangle) \\
+\delta_\psi & = & \text{length}_R(
+\langle t_1, \ldots, t_a, \tilde s_1, \ldots, \tilde s_b\rangle /
+\langle z_1, \ldots, z_l\rangle)
+\end{eqnarray*}
+Using the exact sequence $0 \to K_\varphi \to M \to I_\varphi \to 0$
+we get $m_\varphi = k_\varphi + i_\varphi$. Similarly we have
+$m_\psi = k_\psi + i_\psi$. We have
+$\delta_\varphi + m_\varphi = \delta_\psi + m_\psi$ since this
+is equal to the colength of $\langle z_1, \ldots, z_l \rangle$
+in $M$. Finally, we have
+$$
+\delta_\varphi = \text{ord}_{R/\mathfrak q}(f_\varphi),
+\quad
+\delta_\psi = \text{ord}_{R/\mathfrak q}(f_\psi)
+$$
+by our first application of the key Lemma \ref{lemma-key-lemma}.
+
+\medskip\noindent
+Next, let us compute the multiplicity of the periodic complex
+\begin{eqnarray*}
+e_R(M, \varphi, \psi) & = &
+\text{length}_R(K_\varphi/I_\psi) - \text{length}_R(K_\psi/I_\varphi) \\
+& = &
+\text{length}_R(
+\langle x_1, \ldots, x_b\rangle/
+\langle s_1, \ldots, s_b\rangle)
++ k_\varphi - i_\psi \\
+& & -
+\text{length}_R(
+\langle t_1, \ldots, t_a\rangle/
+\langle y_1, \ldots, y_a\rangle)
+- k_\psi + i_\varphi \\
+& = &
+\text{ord}_{R/\mathfrak q}(g/h) + k_\varphi - i_\psi - k_\psi + i_\varphi \\
+& = &
+\text{ord}_{R/\mathfrak q}(g/h) + m_\varphi - m_\psi \\
+& = &
+\text{ord}_{R/\mathfrak q}(g/h) + \delta_\psi - \delta_\varphi \\
+& = &
+\text{ord}_{R/\mathfrak q}(f_\psi g/f_\varphi h)
+\end{eqnarray*}
+where we used the key Lemma \ref{lemma-key-lemma} twice in the third equality.
+By our computation of $\det_{\kappa(\mathfrak q)}
+(M_{\mathfrak q}, \varphi_{\mathfrak q}, \psi_{\mathfrak q})$
+this proves the proposition.
+\end{proof}
+
+\noindent
+In most applications the following lemma suffices.
+
+\begin{lemma}
+\label{lemma-application-herbrand-quotient}
+Let $R$ be a Noetherian local ring with maximal ideal $\mathfrak m$.
+Let $M$ be a finite $R$-module, and let $\psi : M \to M$ be an
+$R$-module map. Assume that
+\begin{enumerate}
+\item $\Ker(\psi)$ and $\Coker(\psi)$ have finite length, and
+\item $\dim(\text{Supp}(M)) \leq 1$.
+\end{enumerate}
+Write
+$\text{Supp}(M) = \{\mathfrak m, \mathfrak q_1, \ldots, \mathfrak q_t\}$
+and denote $f_i \in \kappa(\mathfrak q_i)^*$ the element such that
+$\det_{\kappa(\mathfrak q_i)}(\psi_{\mathfrak q_i}) :
+\det_{\kappa(\mathfrak q_i)}(M_{\mathfrak q_i})
+\to \det_{\kappa(\mathfrak q_i)}(M_{\mathfrak q_i})$
+is multiplication by $f_i$. Then
+we have
+$$
+\text{length}_R(\Coker(\psi))
+-
+\text{length}_R(\Ker(\psi))
+=
+\sum\nolimits_{i = 1, \ldots, t}
+\text{ord}_{R/\mathfrak q_i}(f_i).
+$$
+\end{lemma}
+
+\begin{proof}
+Recall that $H^0(M, 0, \psi) = \Coker(\psi)$ and
+$H^1(M, 0, \psi) = \Ker(\psi)$, see remarks above
+Definition \ref{definition-periodic-length}.
+The lemma follows by combining
+Proposition \ref{proposition-length-determinant-periodic-complex} with
+Lemma \ref{lemma-periodic-determinant-easy-case}.
+
+\medskip\noindent
+Alternative proof. Reduce to the case
+$\text{Supp}(M) = \{\mathfrak m, \mathfrak q\}$
+as in the proof of
+Proposition \ref{proposition-length-determinant-periodic-complex}.
+Then directly combine
+Lemmas \ref{lemma-key-lemma} and
+\ref{lemma-good-sequence-exists}
+to prove this specific case of
+Proposition \ref{proposition-length-determinant-periodic-complex}.
+There is much less bookkeeping in this case, and the reader is
+encouraged to work this out. Details omitted.
+\end{proof}
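+
+\noindent
+The following example is a simple sanity check of the formula of
+Lemma \ref{lemma-application-herbrand-quotient} in the case of a
+discrete valuation ring; it is not used in what follows.
+
+\begin{example}
+\label{example-herbrand-dvr}
+Let $R = k[[t]]$ be a power series ring over a field $k$, let $M = R$,
+and let $\psi : M \to M$ be multiplication by $t^n$ for some $n \geq 1$.
+Then $\Ker(\psi) = 0$ and $\Coker(\psi) = R/(t^n)$ has length $n$.
+We have $\text{Supp}(M) = \Spec(R) = \{(0), (t)\}$, so the only
+prime $\mathfrak q_i$ occurring in the lemma is $\mathfrak q_1 = (0)$
+with $\kappa(\mathfrak q_1) = k((t))$. The map
+$\det_{\kappa(\mathfrak q_1)}(\psi_{\mathfrak q_1})$ on the
+one-dimensional vector space
+$\det_{\kappa(\mathfrak q_1)}(M_{\mathfrak q_1})$ is multiplication by
+$f_1 = t^n$, and
+$\text{ord}_{R/\mathfrak q_1}(t^n) = \text{length}_R(R/t^nR) = n$.
+Thus both sides of the formula equal $n$.
+\end{example}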
+
+
+
+
+\subsection{Application to the key lemma}
+\label{subsection-application-tame-symbol}
+
+\noindent
+In this section we apply the results above to show the
+analogue of the key lemma (Lemma \ref{lemma-milnor-gersten-low-degree})
+with the tame symbol $d_A$ constructed above. Please
+see Remark \ref{remark-gersten-complex-milnor}
+for the relationship with Milnor $K$-theory.
+
+\begin{lemma}[Key Lemma]
+\label{lemma-secondary-ramification}
+\begin{reference}
+When $A$ is an excellent ring this is \cite[Proposition 1]{Kato-Milnor-K}.
+\end{reference}
+Let $A$ be a $2$-dimensional Noetherian local domain with fraction field $K$.
+Let $f, g \in K^*$.
+Let $\mathfrak q_1, \ldots, \mathfrak q_t$ be the height
+$1$ primes $\mathfrak q$ of $A$ such that either $f$ or $g$ is not an
+element of $A^*_{\mathfrak q}$.
+Then we have
+$$
+\sum\nolimits_{i = 1, \ldots, t}
+\text{ord}_{A/\mathfrak q_i}(d_{A_{\mathfrak q_i}}(f, g))
+=
+0
+$$
+We can also write this as
+$$
+\sum\nolimits_{\text{height}(\mathfrak q) = 1}
+\text{ord}_{A/\mathfrak q}(d_{A_{\mathfrak q}}(f, g))
+=
+0
+$$
+since at any height one prime $\mathfrak q$
+of $A$ where $f, g \in A^*_{\mathfrak q}$
+we have $d_{A_{\mathfrak q}}(f, g) = 1$ by
+Lemma \ref{lemma-symbol-when-one-is-a-unit}.
+\end{lemma}
+
+\begin{proof}
+Since the tame symbols $d_{A_{\mathfrak q}}(f, g)$ are multiplicative
+(Lemma \ref{lemma-multiplicativity-symbol}) and the order
+functions $\text{ord}_{A/\mathfrak q}$
+are additive (Algebra, Lemma \ref{algebra-lemma-ord-additive})
+it suffices to prove the formula when $f = a \in A$ and
+$g = b \in A$. In this case we see that we have to show
+$$
+\sum\nolimits_{\text{height}(\mathfrak q) = 1}
+\text{ord}_{A/\mathfrak q}(\det\nolimits_{\kappa(\mathfrak q)}(A_{\mathfrak q}/(ab), a, b))
+= 0
+$$
+By Proposition \ref{proposition-length-determinant-periodic-complex}
+this is equivalent to showing that
+$$
+e_A(A/(ab), a, b) = 0.
+$$
+Since the complex
+$A/(ab) \xrightarrow{a} A/(ab) \xrightarrow{b} A/(ab) \xrightarrow{a} A/(ab)$
+is exact we win.
+\end{proof}
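+
+\noindent
+To see the content of Lemma \ref{lemma-secondary-ramification} in a
+concrete case, consider the following example. For the illustration we
+assume that at a prime $\mathfrak q$ where $A_{\mathfrak q}$ is a
+discrete valuation ring the symbol $d_{A_{\mathfrak q}}$ is given by
+the usual tame symbol formula.
+
+\begin{example}
+\label{example-reciprocity-regular}
+Let $A = k[[x, y]]$ with fraction field $K$, and take $f = x$ and
+$g = y$. The height $1$ primes $\mathfrak q$ with $f$ or $g$ not in
+$A_{\mathfrak q}^*$ are $(x)$ and $(y)$, and both localizations are
+discrete valuation rings. Using the tame symbol formula
+$$
+d(f, g) = (-1)^{\text{ord}(f)\text{ord}(g)}
+f^{\text{ord}(g)} g^{-\text{ord}(f)} \bmod \mathfrak q
+$$
+we find $d_{A_{(x)}}(x, y) = y^{-1}$ in $\kappa((x)) = k((y))$
+and $d_{A_{(y)}}(x, y) = x$ in $\kappa((y)) = k((x))$. Hence
+$$
+\text{ord}_{A/(x)}(y^{-1}) + \text{ord}_{A/(y)}(x) = -1 + 1 = 0
+$$
+as predicted by the lemma.
+\end{example}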
+
+
+
+
+
+\section{Appendix B: Alternative approaches}
+\label{section-appendix-chow}
+
+\noindent
+In this appendix we first briefly try to connect the material
+in the main text with $K$-theory of coherent sheaves. In
+particular we describe how cupping with $c_1$ of an invertible
+module is related to tensoring by this invertible module, see
+Lemma \ref{lemma-coherent-sheaf-cap-c1}.
+This material is obviously very interesting and
+deserves a much more detailed and expansive exposition.
+
+
+\subsection{Rational equivalence and K-groups}
+\label{subsection-rational-equivalence-K-groups}
+
+\noindent
+This section is a continuation of Section \ref{section-chow-and-K}.
+The motivation for the following lemma is
+Homology, Lemma \ref{homology-lemma-serre-subcategory-K-groups}.
+
+\begin{lemma}
+\label{lemma-maps-between-coherent-sheaves}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Let
+$$
+\xymatrix{
+\ldots \ar[r] &
+\mathcal{F} \ar[r]^\varphi &
+\mathcal{F} \ar[r]^\psi &
+\mathcal{F} \ar[r]^\varphi &
+\mathcal{F} \ar[r] & \ldots
+}
+$$
+be a complex as in Homology, Equation (\ref{homology-equation-cyclic-complex}).
+Assume that
+\begin{enumerate}
+\item $\dim_\delta(\text{Supp}(\mathcal{F})) \leq k + 1$.
+\item $\dim_\delta(\text{Supp}(H^i(\mathcal{F}, \varphi, \psi))) \leq k$
+for $i = 0, 1$.
+\end{enumerate}
+Then we have
+$$
+[H^0(\mathcal{F}, \varphi, \psi)]_k
+\sim_{rat}
+[H^1(\mathcal{F}, \varphi, \psi)]_k
+$$
+as $k$-cycles on $X$.
+\end{lemma}
+
+\begin{proof}
+Let $\{W_j\}_{j \in J}$ be the collection of irreducible
+components of $\text{Supp}(\mathcal{F})$
+which have $\delta$-dimension $k + 1$. Note that $\{W_j\}$
+is a locally finite collection of closed subsets of
+$X$ by Lemma \ref{lemma-length-finite}.
+For every $j$, let $\xi_j \in W_j$ be the generic point.
+Set
+$$
+f_j = \det\nolimits_{\kappa(\xi_j)}
+(\mathcal{F}_{\xi_j}, \varphi_{\xi_j}, \psi_{\xi_j})
+\in
+R(W_j)^*.
+$$
+See Definition \ref{definition-periodic-determinant} for notation.
+We claim that
+$$
+[H^0(\mathcal{F}, \varphi, \psi)]_k - [H^1(\mathcal{F}, \varphi, \psi)]_k
+=
+\sum (W_j \to X)_*\text{div}(f_j)
+$$
+If we prove this then the lemma follows.
+
+\medskip\noindent
+Let $Z \subset X$ be an integral closed subscheme of $\delta$-dimension $k$.
+To prove the equality above it suffices to show that the coefficient $n$
+of $[Z]$ in
+$
+[H^0(\mathcal{F}, \varphi, \psi)]_k - [H^1(\mathcal{F}, \varphi, \psi)]_k
+$
+is the same as the coefficient $m$ of $[Z]$ in
+$
+\sum (W_j \to X)_*\text{div}(f_j)
+$.
+Let $\xi \in Z$ be the generic point.
+Consider the local ring $A = \mathcal{O}_{X, \xi}$.
+Let $M = \mathcal{F}_\xi$ as an $A$-module.
+Denote $\varphi, \psi : M \to M$ the action of $\varphi, \psi$ on
+the stalk.
+By our choice of $\xi \in Z$ we have $\delta(\xi) = k$
+and hence $\dim(\text{Supp}(M)) = 1$.
+Finally, the integral closed subschemes
+$W_j$ passing through $\xi$ correspond to the minimal primes
+$\mathfrak q_i$ of $\text{Supp}(M)$.
+In each case the element $f_j \in R(W_j)^*$ corresponds to
+the element $\det_{\kappa(\mathfrak q_i)}(M_{\mathfrak q_i}, \varphi, \psi)$
+in $\kappa(\mathfrak q_i)^*$. Hence we see that
+$$
+n = - e_A(M, \varphi, \psi)
+$$
+and
+$$
+m =
+\sum
+\text{ord}_{A/\mathfrak q_i}
+(\det\nolimits_{\kappa(\mathfrak q_i)}(M_{\mathfrak q_i}, \varphi, \psi))
+$$
+Thus the result follows from
+Proposition \ref{proposition-length-determinant-periodic-complex}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cycles-rational-equivalence-K-group}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be a scheme locally of finite type over $S$.
+The map
+$$
+\CH_k(X) \longrightarrow
+K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))
+$$
+from Lemma \ref{lemma-from-chow-to-K} induces a bijection from
+$\CH_k(X)$ onto the image $B_k(X)$ of the map
+$$
+K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))
+\longrightarrow
+K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X)).
+$$
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-cycles-k-group} we have
+$Z_k(X) = K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))$
+compatible with the map of Lemma \ref{lemma-from-chow-to-K}.
+Thus, suppose we have an element $[A] - [B]$ of
+$K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))$
+which maps to zero in $B_k(X)$, i.e., maps to zero in
+$K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+We have to show that $[A] - [B]$ corresponds to a cycle
+rationally equivalent to zero on $X$.
+Suppose $[A] = [\mathcal{A}]$ and $[B] = [\mathcal{B}]$
+for some coherent sheaves $\mathcal{A}, \mathcal{B}$ on
+$X$ supported in $\delta$-dimension $\leq k$.
+The assumption that $[A] - [B]$ maps to zero in the group
+$K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))$
+means that there exist coherent sheaves
+$\mathcal{A}', \mathcal{B}'$ on $X$ supported in
+$\delta$-dimension $\leq k - 1$ such that
+$[\mathcal{A} \oplus \mathcal{A}'] - [\mathcal{B} \oplus \mathcal{B}']$
+is zero in $K_0(\textit{Coh}_{\leq k + 1}(X))$ (use part (1) of
+Homology, Lemma \ref{homology-lemma-serre-subcategory-K-groups}).
+By part (2) of
+Homology, Lemma \ref{homology-lemma-serre-subcategory-K-groups}
+this means there exists a $(2, 1)$-periodic complex
+$(\mathcal{F}, \varphi, \psi)$ in the category $\textit{Coh}_{\leq k + 1}(X)$
+such that
+$\mathcal{A} \oplus \mathcal{A}' = H^0(\mathcal{F}, \varphi, \psi)$
+and $\mathcal{B} \oplus \mathcal{B}' = H^1(\mathcal{F}, \varphi, \psi)$.
+By Lemma \ref{lemma-maps-between-coherent-sheaves}
+this implies that
+$$
+[\mathcal{A} \oplus \mathcal{A}']_k
+\sim_{rat}
+[\mathcal{B} \oplus \mathcal{B}']_k
+$$
+This proves that $[A] - [B]$ maps to a cycle rationally
+equivalent to zero by the map
+$$
+K_0(\textit{Coh}_{\leq k}(X)/\textit{Coh}_{\leq k - 1}(X))
+\longrightarrow Z_k(X)
+$$
+of Lemma \ref{lemma-cycles-k-group}. This is what we
+had to prove and the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\subsection{Cartier divisors and K-groups}
+\label{subsection-cartier-coherent}
+
+\noindent
+In this section we describe how the intersection with the
+first Chern class of an invertible sheaf $\mathcal{L}$
+corresponds to tensoring with $\mathcal{L} - \mathcal{O}$
+in $K$-groups.
+
+\begin{lemma}
+\label{lemma-no-embedded-points-modules}
+Let $A$ be a Noetherian local ring.
+Let $M$ be a finite $A$-module.
+Let $a, b \in A$.
+Assume
+\begin{enumerate}
+\item $\dim(A) = 1$,
+\item both $a$ and $b$ are nonzerodivisors in $A$,
+\item $A$ has no embedded primes,
+\item $M$ has no embedded associated primes,
+\item $\text{Supp}(M) = \Spec(A)$.
+\end{enumerate}
+Let $I = \{x \in A \mid x(a/b) \in A\}$.
+Let $\mathfrak q_1, \ldots, \mathfrak q_t$ be the minimal
+primes of $A$. Then $(a/b)IM \subset M$ and
+$$
+\text{length}_A(M/(a/b)IM)
+-
+\text{length}_A(M/IM)
+=
+\sum\nolimits_i
+\text{length}_{A_{\mathfrak q_i}}(M_{\mathfrak q_i})
+\text{ord}_{A/\mathfrak q_i}(a/b)
+$$
+\end{lemma}
+
+\begin{proof}
+Since $M$ has no embedded associated primes, and since
+the support of $M$ is $\Spec(A)$ we see that
+$\text{Ass}(M) = \{\mathfrak q_1, \ldots, \mathfrak q_t\}$.
+Hence $a$, $b$ are nonzerodivisors on $M$. Note that
+\begin{align*}
+& \text{length}_A(M/(a/b)IM) \\
+& = \text{length}_A(bM/aIM) \\
+& = \text{length}_A(M/aIM)
+-
+\text{length}_A(M/bM) \\
+& = \text{length}_A(M/aM) + \text{length}_A(aM/aIM) - \text{length}_A(M/bM) \\
+& = \text{length}_A(M/aM) + \text{length}_A(M/IM) - \text{length}_A(M/bM)
+\end{align*}
+as the injective map $b : M \to bM$ maps $(a/b)IM$ to $aIM$
+and the injective map $a : M \to aM$ maps $IM$ to $aIM$.
+Hence the left hand side of the equation of the lemma is
+equal to
+$$
+\text{length}_A(M/aM) - \text{length}_A(M/bM).
+$$
+Applying the second formula of
+Lemma \ref{lemma-additivity-divisors-restricted}
+with $x = a, b$ respectively
+and using Algebra, Definition \ref{algebra-definition-ord}
+of the $\text{ord}$-functions we get the result.
+\end{proof}
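+
+\noindent
+The formula of Lemma \ref{lemma-no-embedded-points-modules} can be
+verified by hand when $A$ is a discrete valuation ring; we record one
+such computation as an illustration.
+
+\begin{example}
+\label{example-ideal-denominators-dvr}
+Let $A = k[[t]]$, $M = A$, $a = t$ and $b = t^2$. Then
+$a/b = t^{-1}$ and $I = \{x \in A \mid x t^{-1} \in A\} = (t)$.
+Hence $(a/b)IM = A$ and $IM = tA$, so the left hand side of the
+formula equals
+$\text{length}_A(A/A) - \text{length}_A(A/tA) = 0 - 1 = -1$.
+On the right hand side the only minimal prime is $\mathfrak q_1 = (0)$,
+we have
+$\text{length}_{A_{\mathfrak q_1}}(M_{\mathfrak q_1}) = 1$, and
+$\text{ord}_{A/\mathfrak q_1}(t^{-1}) = -1$.
+Thus both sides equal $-1$.
+\end{example}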
+
+\begin{lemma}
+\label{lemma-no-embedded-points}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Let $s \in \Gamma(X, \mathcal{K}_X(\mathcal{L}))$ be a
+meromorphic section of $\mathcal{L}$.
+Assume
+\begin{enumerate}
+\item $\dim_\delta(X) \leq k + 1$,
+\item $X$ has no embedded points,
+\item $\mathcal{F}$ has no embedded associated points,
+\item the support of $\mathcal{F}$ is $X$, and
+\item the section $s$ is regular meromorphic.
+\end{enumerate}
+In this situation let $\mathcal{I} \subset \mathcal{O}_X$
+be the ideal of denominators of $s$, see
+Divisors,
+Definition \ref{divisors-definition-regular-meromorphic-ideal-denominators}.
+Then we have the following:
+\begin{enumerate}
+\item there are short exact sequences
+$$
+\begin{matrix}
+0 &
+\to &
+\mathcal{I}\mathcal{F} &
+\xrightarrow{1} &
+\mathcal{F} &
+\to &
+\mathcal{Q}_1 &
+\to &
+0 \\
+0 &
+\to &
+\mathcal{I}\mathcal{F} &
+\xrightarrow{s} &
+\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L} &
+\to &
+\mathcal{Q}_2 &
+\to &
+0
+\end{matrix}
+$$
+\item the coherent sheaves $\mathcal{Q}_1$, $\mathcal{Q}_2$
+are supported in $\delta$-dimension $\leq k$,
+\item the section $s$ restricts to a regular meromorphic
+section $s_i$ on every irreducible component $X_i$ of
+$X$ of $\delta$-dimension $k + 1$, and
+\item writing $[\mathcal{F}]_{k + 1} = \sum m_i[X_i]$ we have
+$$
+[\mathcal{Q}_2]_k - [\mathcal{Q}_1]_k
+=
+\sum m_i(X_i \to X)_*\text{div}_{\mathcal{L}|_{X_i}}(s_i)
+$$
+in $Z_k(X)$, in particular
+$$
+[\mathcal{Q}_2]_k - [\mathcal{Q}_1]_k
+=
+c_1(\mathcal{L}) \cap [\mathcal{F}]_{k + 1}
+$$
+in $\CH_k(X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall from Divisors, Lemma \ref{divisors-lemma-make-maps-regular-section}
+the existence of injective maps
+$1 : \mathcal{I}\mathcal{F} \to \mathcal{F}$ and
+$s : \mathcal{I}\mathcal{F} \to \mathcal{F} \otimes_{\mathcal{O}_X}\mathcal{L}$
+whose cokernels are supported on a closed nowhere dense subset $T$.
+Denote these cokernels by $\mathcal{Q}_i$ as in the lemma.
+We conclude that $\dim_\delta(\text{Supp}(\mathcal{Q}_i)) \leq k$.
+By Divisors, Lemmas \ref{divisors-lemma-pullback-meromorphic-sections-defined}
+and \ref{divisors-lemma-meromorphic-sections-pullback} the pullbacks $s_i$
+are defined and are regular meromorphic sections for $\mathcal{L}|_{X_i}$.
+The equality of cycles in (4) implies the equality of cycle classes
+in (4). Hence the only remaining thing to show is that
+$$
+[\mathcal{Q}_2]_k - [\mathcal{Q}_1]_k
+=
+\sum m_i(X_i \to X)_*\text{div}_{\mathcal{L}|_{X_i}}(s_i)
+$$
+holds in $Z_k(X)$. To see this, let $Z \subset X$ be an integral closed
+subscheme of $\delta$-dimension $k$. Let $\xi \in Z$ be the generic point.
+Let $A = \mathcal{O}_{X, \xi}$ and $M = \mathcal{F}_\xi$.
+Moreover, choose a generator $s_\xi \in \mathcal{L}_\xi$.
+Then we can write $s = (a/b) s_\xi$ where $a, b \in A$ are
+nonzerodivisors. In this case
+$I = \mathcal{I}_\xi = \{x \in A \mid x(a/b) \in A\}$.
+In this case the coefficient of $[Z]$ in the left hand side is
+$$
+\text{length}_A(M/(a/b)IM) - \text{length}_A(M/IM)
+$$
+and the coefficient of $[Z]$ in the right hand side
+is
+$$
+\sum
+\text{length}_{A_{\mathfrak q_i}}(M_{\mathfrak q_i})
+\text{ord}_{A/\mathfrak q_i}(a/b)
+$$
+where $\mathfrak q_1, \ldots, \mathfrak q_t$ are the minimal
+primes of the $1$-dimensional local ring $A$. Hence the result
+follows from Lemma \ref{lemma-no-embedded-points-modules}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-coherent-sheaf-cap-c1}
+Let $(S, \delta)$ be as in Situation \ref{situation-setup}.
+Let $X$ be locally of finite type over $S$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Assume $\dim_\delta(\text{Supp}(\mathcal{F})) \leq k + 1$.
+Then the element
+$$
+[\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}]
+-
+[\mathcal{F}]
+\in
+K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))
+$$
+lies in the subgroup $B_k(X)$ of
+Lemma \ref{lemma-cycles-rational-equivalence-K-group} and maps to
+the element $c_1(\mathcal{L}) \cap [\mathcal{F}]_{k + 1}$ via
+the map $B_k(X) \to \CH_k(X)$.
+\end{lemma}
+
+\begin{proof}
+Let
+$$
+0 \to \mathcal{K} \to \mathcal{F} \to \mathcal{F}' \to 0
+$$
+be the short exact sequence constructed in
+Divisors, Lemma \ref{divisors-lemma-remove-embedded-points}.
+This in particular means that $\mathcal{F}'$ has no embedded
+associated points.
+Since the support of $\mathcal{K}$ is nowhere dense in the
+support of $\mathcal{F}$ we see that
+$\dim_\delta(\text{Supp}(\mathcal{K})) \leq k$. We may
+re-apply
+Divisors, Lemma \ref{divisors-lemma-remove-embedded-points}
+starting with $\mathcal{K}$ to get a short exact sequence
+$$
+0 \to \mathcal{K}'' \to \mathcal{K} \to \mathcal{K}' \to 0
+$$
+where now $\dim_\delta(\text{Supp}(\mathcal{K}'')) < k$
+and $\mathcal{K}'$ has no embedded associated points.
+Suppose we can prove the lemma for the coherent sheaves
+$\mathcal{F}'$ and $\mathcal{K}'$. Then we see
+from the equations
+$$
+[\mathcal{F}]_{k + 1}
+=
+[\mathcal{F}']_{k + 1}
++ [\mathcal{K}']_{k + 1}
++ [\mathcal{K}'']_{k + 1}
+$$
+(use Lemma \ref{lemma-additivity-sheaf-cycle}),
+$$
+[\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}]
+-
+[\mathcal{F}]
+=
+[\mathcal{F}' \otimes_{\mathcal{O}_X} \mathcal{L}]
+-
+[\mathcal{F}']
++
+[\mathcal{K}' \otimes_{\mathcal{O}_X} \mathcal{L}]
+-
+[\mathcal{K}']
++
+[\mathcal{K}'' \otimes_{\mathcal{O}_X} \mathcal{L}]
+-
+[\mathcal{K}'']
+$$
+(using that tensoring with $\mathcal{L}$ is exact)
+and the trivial vanishing of $[\mathcal{K}'']_{k + 1}$ and
+$[\mathcal{K}'' \otimes_{\mathcal{O}_X} \mathcal{L}]
+- [\mathcal{K}'']$ in
+$K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))$
+that the result holds
+for $\mathcal{F}$. What this means is that we may assume that
+the sheaf $\mathcal{F}$ has no embedded associated points.
+
+\medskip\noindent
+Assume $X$, $\mathcal{F}$ as in the lemma, and assume in addition
+that $\mathcal{F}$ has no embedded associated points. Consider the
+sheaf of ideals $\mathcal{I} \subset \mathcal{O}_X$, the corresponding
+closed subscheme $i : Z \to X$ and the coherent $\mathcal{O}_Z$-module
+$\mathcal{G}$ constructed in
+Divisors, Lemma \ref{divisors-lemma-no-embedded-points-endos}.
+Recall that $Z$ is a locally Noetherian scheme without embedded points,
+$\mathcal{G}$ is a coherent sheaf without embedded
+associated points, with $\text{Supp}(\mathcal{G}) = Z$
+and such that $i_*\mathcal{G} = \mathcal{F}$.
+Moreover, set $\mathcal{N} = \mathcal{L}|_Z$.
+
+\medskip\noindent
+By Divisors, Lemma \ref{divisors-lemma-regular-meromorphic-section-exists}
+the invertible sheaf $\mathcal{N}$ has a regular meromorphic section $s$
+over $Z$. Let us denote $\mathcal{J} \subset \mathcal{O}_Z$ the sheaf
+of denominators of $s$. By Lemma \ref{lemma-no-embedded-points}
+there exist short exact sequences
+$$
+\begin{matrix}
+0 &
+\to &
+\mathcal{J}\mathcal{G} &
+\xrightarrow{1} &
+\mathcal{G} &
+\to &
+\mathcal{Q}_1 &
+\to &
+0 \\
+0 &
+\to &
+\mathcal{J}\mathcal{G} &
+\xrightarrow{s} &
+\mathcal{G} \otimes_{\mathcal{O}_Z} \mathcal{N} &
+\to &
+\mathcal{Q}_2 &
+\to &
+0
+\end{matrix}
+$$
+such that $\dim_\delta(\text{Supp}(\mathcal{Q}_i)) \leq k$ and
+such that the cycle
+$
+[\mathcal{Q}_2]_k - [\mathcal{Q}_1]_k
+$
+is a representative of $c_1(\mathcal{N}) \cap [\mathcal{G}]_{k + 1}$.
+We see (using the fact that
+$i_*(\mathcal{G} \otimes \mathcal{N}) = \mathcal{F} \otimes \mathcal{L}$
+by the projection formula, see
+Cohomology, Lemma \ref{cohomology-lemma-projection-formula})
+that
+$$
+[\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}]
+-
+[\mathcal{F}]
+=
+[i_*\mathcal{Q}_2] - [i_*\mathcal{Q}_1]
+$$
+in $K_0(\textit{Coh}_{\leq k + 1}(X)/\textit{Coh}_{\leq k - 1}(X))$.
+This already shows that
+$[\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}] - [\mathcal{F}]$
+is an element of $B_k(X)$. Moreover we have
+\begin{eqnarray*}
+[i_*\mathcal{Q}_2]_k - [i_*\mathcal{Q}_1]_k
+& = &
+i_*\left( [\mathcal{Q}_2]_k - [\mathcal{Q}_1]_k \right) \\
+& = &
+i_*\left(c_1(\mathcal{N}) \cap [\mathcal{G}]_{k + 1} \right) \\
+& = &
+c_1(\mathcal{L}) \cap i_*[\mathcal{G}]_{k + 1} \\
+& = &
+c_1(\mathcal{L}) \cap [\mathcal{F}]_{k + 1}
+\end{eqnarray*}
+by the above and Lemmas \ref{lemma-pushforward-cap-c1}
+and \ref{lemma-cycle-push-sheaf}. And this agrees with the
+image of the element under $B_k(X) \to \CH_k(X)$ by definition.
+Hence the lemma is proved.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/coding.tex b/books/stacks/coding.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f2f4e7c1d2f7ad671384d18fedbb106fe295ebe9
--- /dev/null
+++ b/books/stacks/coding.tex
@@ -0,0 +1,182 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Coding Style}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{List of style comments}
+\label{section-style}
+
+\noindent
+These will be changed over time, but having some here now
+will hopefully encourage a consistent LaTeX style.
+We will call ``code\footnote{It is all Knuth's fault. See \cite{Knuth}.}''
+the contents of the source files.
+
+\begin{enumerate}
+\item Keep all lines in all tex files to at most 80 characters.
+\item Do not use indentation in the tex file. Use syntax highlighting in your
+editor, instead of indentation, to visualize environments, etc.
+\item Use
+\begin{verbatim}
+\medskip\noindent
+\end{verbatim}
+to start a new paragraph, and use
+\begin{verbatim}
+\noindent
+\end{verbatim}
+to start a new paragraph just after an environment.
+\item Do not break the code for mathematical formulas across
+lines if possible. If the complete code, including the enclosing
+dollar signs, does not fit on the line, then start with the first
+dollar sign on the first character of the next line. If it still
+does not fit, find a mathematically reasonable spot to break
+the code.
+\item Displayed math equations should be coded as follows
+\begin{verbatim}
+$$
+...
+...
+$$
+\end{verbatim}
+In other words, start with a double dollar sign on a line by itself
+and end similarly.
+\item {\it Do not use any macros}. Rationale: This makes it easier
+to read the tex file, and to start editing an arbitrary part
+without having to learn innumerable macros.
+And it doesn't make it harder or more time-consuming to write.
+Of course the disadvantage is that the same mathematical object
+may be TeXed differently in different places in the text, but
+this should be easy to spot.
+\item The theorem environments we use are:
+``theorem'', ``proposition'', ``lemma'' (plain),
+``definition'', ``example'', ``exercise'', ``situation'' (definition),
+``remark'', ``remarks'' (remark). Of course there is also
+a ``proof'' environment.
+\item An environment ``foo'' should be coded as follows
+\begin{verbatim}
+\begin{foo}
+...
+...
+\end{foo}
+\end{verbatim}
+similarly to the way displayed equations are coded.
+\item Instead of a ``corollary'', just use ``lemma'' environment
+since likely the result will be used to prove the next bigger
+theorem anyway.
+\item Directly following each lemma, proposition, or theorem
+is the proof of said lemma, proposition, or theorem. No nested
+proofs please.
+\item The files preamble.tex, chapters.tex and fdl.tex are special
+tex files. Apart from these, each tex file
+has the following structure
+\begin{verbatim}
+\input{preamble}
+\begin{document}
+\title{Title}
+\maketitle
+\tableofcontents
+...
+...
+\input{chapters}
+\bibliography{my}
+\bibliographystyle{amsalpha}
+\end{document}
+\end{verbatim}
+\item Try to add labels to lemmas, propositions, theorems, and even
+remarks, exercises, and other environments.
+If labelling a lemma use something like
+\begin{verbatim}
+\begin{lemma}
+\label{lemma-bar}
+...
+\end{lemma}
+\end{verbatim}
+Similarly for all other environments. In other words, the label
+of an environment named ``foo'' starts with ``foo-''. In addition to
+this please make all labels consist only of lower case letters,
+digits, and the symbol ``-''.
+\item Never refer to ``the lemma above'' (or proposition, etc).
+Instead use:
+\begin{verbatim}
+Lemma \ref{lemma-bar} above
+\end{verbatim}
+This means that later
+moving lemmas around is basically harmless.
+\item Cross-file referencing. To reference a lemma labeled
+``lemma-bar'' in the file foo.tex which has title
+``Foo'', please use the following code
+\begin{verbatim}
+Foo, Lemma \ref{foo-lemma-bar}
+\end{verbatim}
+If this does not work, then take a look at the file
+preamble.tex to find the correct expression to use.
+This will produce ``Foo, Lemma $<$link$>$'' in the
+output file so it will be clear that the link points
+out of the file.
+\item If at all possible avoid forward references in proof
+environments. (It should be possible to write an automated
+test for this.)
+\item Do not start any sentence with a mathematical symbol.
+\item Do not have a sentence of the type ``This follows from
+the following'' just before a lemma, proposition, or theorem.
+Every sentence ends with a period.
+\item State all hypotheses in each lemma, proposition, theorem.
+This makes it easier for readers to see if a given
+lemma, proposition, or theorem applies to their particular
+problem.
+\item Keep proofs short; less than 1 page in pdf or dvi.
+You can always achieve this by splitting out the proof in lemmas
+etc.
+\item When defining a property foobar use
+\begin{verbatim}
+{\it foobar}
+\end{verbatim}
+in the code inside the definition environment.
+Similarly if the definition occurs in the text of the document.
+This will make it easier for the reader to see what it is
+that is being defined.
+\item Put any definition that will be used outside the section
+it is in, in its own definition environment. Temporary definitions
+may be made in the text. A tricky case is that of mathematical
+constructions (which are often definitions involving interrelated
+lemmas). Maybe a good solution is to have them in their own
+short section so users can refer to the section instead of
+a definition.
+\item Do not number equations unless they are actually being
+referenced somewhere in the text. We can always add labels later.
+\item In statements of lemmas, propositions and theorems and
+in proofs keep the sentences short. For example, instead of
+``Let $R$ be a ring and let $M$ be an $R$-module.'' write
+``Let $R$ be a ring. Let $M$ be an $R$-module.''. Rationale:
+This makes it easier to parse the trickier parts of proofs and
+statements.
+\item Use the
+\begin{verbatim}
+\section
+\end{verbatim}
+command to make sections, but try to avoid using subsections
+and subsubsections.
+\item Avoid using complicated latex constructions.
+\end{enumerate}
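+
+\noindent
+Putting several of these conventions together, a hypothetical lemma
+and its proof might be coded as follows
+\begin{verbatim}
+\begin{lemma}
+\label{lemma-bar}
+Let $R$ be a ring. Let $M$ be an $R$-module. Then ...
+\end{lemma}
+
+\begin{proof}
+...
+\end{proof}
+\end{verbatim}
+Note the label starting with ``lemma-'', the short sentences in the
+statement, and the proof directly following the lemma.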
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/coherent.tex b/books/stacks/coherent.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f162f4bd796d4a788176f805a6bab62df1dfa68b
--- /dev/null
+++ b/books/stacks/coherent.tex
@@ -0,0 +1,8147 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Cohomology of Schemes}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we first prove a number of results on the cohomology of
+quasi-coherent sheaves. A fundamental reference is \cite{EGA}.
+Having done this we will elaborate on cohomology of
+coherent sheaves in the Noetherian setting. See \cite{FAC}.
+
+
+
+
+
+
+
+\section{{\v C}ech cohomology of quasi-coherent sheaves}
+\label{section-cech-quasi-coherent}
+
+\noindent
+Let $X$ be a scheme.
+Let $U \subset X$ be an affine open.
+Recall that a {\it standard open covering} of $U$ is a covering
+of the form $\mathcal{U} : U = \bigcup_{i = 1}^n D(f_i)$
+where $f_1, \ldots, f_n \in \Gamma(U, \mathcal{O}_X)$ generate
+the unit ideal, see
+Schemes, Definition \ref{schemes-definition-standard-covering}.
+
+\begin{lemma}
+\label{lemma-cech-cohomology-quasi-coherent-trivial}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\mathcal{U} : U = \bigcup_{i = 1}^n D(f_i)$ be a standard
+open covering of an affine open of $X$.
+Then $\check{H}^p(\mathcal{U}, \mathcal{F}) = 0$ for
+all $p > 0$.
+\end{lemma}
+
+\begin{proof}
+Write $U = \Spec(A)$ for some ring $A$.
+In other words, $f_1, \ldots, f_n$ are elements of $A$
+which generate the unit ideal of $A$.
+Write $\mathcal{F}|_U = \widetilde{M}$ for some $A$-module $M$.
+Clearly the {\v C}ech complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$
+is identified with the complex
+$$
+\prod\nolimits_{i_0} M_{f_{i_0}} \to
+\prod\nolimits_{i_0i_1} M_{f_{i_0}f_{i_1}} \to
+\prod\nolimits_{i_0i_1i_2} M_{f_{i_0}f_{i_1}f_{i_2}} \to
+\ldots
+$$
+We are asked to show that the extended complex
+\begin{equation}
+\label{equation-extended}
+0 \to
+M \to
+\prod\nolimits_{i_0} M_{f_{i_0}} \to
+\prod\nolimits_{i_0i_1} M_{f_{i_0}f_{i_1}} \to
+\prod\nolimits_{i_0i_1i_2} M_{f_{i_0}f_{i_1}f_{i_2}} \to
+\ldots
+\end{equation}
+(whose truncation we have studied in
+Algebra, Lemma \ref{algebra-lemma-cover-module}) is exact.
+It suffices to show that (\ref{equation-extended})
+is exact after localizing at a prime $\mathfrak p$, see
+Algebra, Lemma \ref{algebra-lemma-characterize-zero-local}.
+In fact we will show that the extended complex localized
+at $\mathfrak p$ is homotopic to zero.
+
+\medskip\noindent
+There exists an index $i$ such that $f_i \not \in \mathfrak p$.
+Choose and fix such an element $i_{\text{fix}}$. Note that
+$M_{f_{i_{\text{fix}}}, \mathfrak p} = M_{\mathfrak p}$. Similarly
+for a localization at a product $f_{i_0} \ldots f_{i_p}$ and $\mathfrak p$
+we can drop any $f_{i_j}$ for which $i_j = i_{\text{fix}}$.
+Let us define a homotopy
+$$
+h :
+\prod\nolimits_{i_0 \ldots i_{p + 1}}
+M_{f_{i_0} \ldots f_{i_{p + 1}}, \mathfrak p}
+\longrightarrow
+\prod\nolimits_{i_0 \ldots i_p}
+M_{f_{i_0} \ldots f_{i_p}, \mathfrak p}
+$$
+by the rule
+$$
+h(s)_{i_0 \ldots i_p} = s_{i_{\text{fix}} i_0 \ldots i_p}
+$$
+(This is ``dual'' to the homotopy in the proof of
+Cohomology, Lemma \ref{cohomology-lemma-homology-complex}.)
+In other words, $h : \prod_{i_0} M_{f_{i_0}, \mathfrak p} \to M_\mathfrak p$
+is projection onto the factor
+$M_{f_{i_{\text{fix}}}, \mathfrak p} = M_{\mathfrak p}$ and in general
+the map $h$ is equal to projection onto the factors
+$M_{f_{i_{\text{fix}}} f_{i_1} \ldots f_{i_{p + 1}}, \mathfrak p}
+= M_{f_{i_1} \ldots f_{i_{p + 1}}, \mathfrak p}$. We compute
+\begin{align*}
+(dh + hd)(s)_{i_0 \ldots i_p}
+& =
+\sum\nolimits_{j = 0}^p
+(-1)^j
+h(s)_{i_0 \ldots \hat i_j \ldots i_p}
++
+d(s)_{i_{\text{fix}} i_0 \ldots i_p}\\
+& =
+\sum\nolimits_{j = 0}^p
+(-1)^j
+s_{i_{\text{fix}} i_0 \ldots \hat i_j \ldots i_p}
++
+s_{i_0 \ldots i_p}
++
+\sum\nolimits_{j = 0}^p
+(-1)^{j + 1}
+s_{i_{\text{fix}} i_0 \ldots \hat i_j \ldots i_p} \\
+& =
+s_{i_0 \ldots i_p}
+\end{align*}
+This proves the identity map is homotopic to zero as desired.
+\end{proof}
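+
+\noindent
+For example, for a standard open covering $U = D(f) \cup D(g)$ by
+two opens, the vanishing in the lemma comes down (via the comparison
+with the alternating {\v C}ech complex, see
+Cohomology, Lemma \ref{cohomology-lemma-alternating-usual})
+to the exactness of the sequence
+$$
+0 \to M \to M_f \oplus M_g \to M_{fg} \to 0
+$$
+where the second map sends $(m_1, m_2)$ to the difference of the
+images of $m_1$ and $m_2$ in $M_{fg}$.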
+
+\noindent
+The following lemma says in particular that for any affine scheme
+$X$ and any quasi-coherent sheaf $\mathcal{F}$ on $X$ we have
+$$
+H^p(X, \mathcal{F}) = 0
+$$
+for all $p > 0$.
+
+\begin{lemma}
+\label{lemma-quasi-coherent-affine-cohomology-zero}
+\begin{slogan}
+Serre vanishing: Higher cohomology vanishes on affine schemes
+for quasi-coherent modules.
+\end{slogan}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+For any affine open $U \subset X$ we have
+$H^p(U, \mathcal{F}) = 0$ for all $p > 0$.
+\end{lemma}
+
+\begin{proof}
+We are going to apply
+Cohomology, Lemma \ref{cohomology-lemma-cech-vanish-basis}.
+As our basis $\mathcal{B}$ for the topology of $X$ we are going to use
+the affine opens of $X$.
+As our set $\text{Cov}$ of open coverings we are going to use the standard
+open coverings of affine opens of $X$.
+Next we check that conditions (1), (2) and (3) of
+Cohomology, Lemma \ref{cohomology-lemma-cech-vanish-basis}
+hold. Note that the intersection of standard opens in an affine is
+another standard open. Hence property (1) holds.
+The coverings form a cofinal system of open coverings of any element
+of $\mathcal{B}$, see
+Schemes, Lemma \ref{schemes-lemma-standard-open}.
+Hence (2) holds.
+Finally, condition (3) of the lemma follows from
+Lemma \ref{lemma-cech-cohomology-quasi-coherent-trivial}.
+\end{proof}
+
+\noindent
+Here is a relative version of the vanishing of cohomology of quasi-coherent
+sheaves on affines.
+
+\begin{lemma}
+\label{lemma-relative-affine-vanishing}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+If $f$ is affine then $R^if_*\mathcal{F} = 0$ for all $i > 0$.
+\end{lemma}
+
+\begin{proof}
+According to
+Cohomology, Lemma \ref{cohomology-lemma-describe-higher-direct-images}
+the sheaf
+$R^if_*\mathcal{F}$ is the sheaf associated to the presheaf
+$V \mapsto H^i(f^{-1}(V), \mathcal{F}|_{f^{-1}(V)})$.
+By assumption, whenever $V$ is affine we have that $f^{-1}(V)$ is
+affine, see Morphisms, Definition \ref{morphisms-definition-affine}.
+By Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero} we conclude that
+$H^i(f^{-1}(V), \mathcal{F}|_{f^{-1}(V)}) = 0$
+whenever $V$ is affine. Since $S$ has a basis consisting of affine
+opens we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-affine-cohomology}
+Let $f : X \to S$ be an affine morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Then $H^i(X, \mathcal{F}) = H^i(S, f_*\mathcal{F})$ for all $i \geq 0$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-relative-affine-vanishing}
+and the Leray spectral sequence. See
+Cohomology, Lemma \ref{cohomology-lemma-apply-Leray}.
+\end{proof}
+
+\noindent
+The following two lemmas explain when {\v C}ech cohomology
+can be used to compute cohomology of quasi-coherent modules.
+
+\begin{lemma}
+\label{lemma-affine-diagonal}
+Let $X$ be a scheme. The following are equivalent
+\begin{enumerate}
+\item $X$ has affine diagonal $\Delta : X \to X \times X$,
+\item for $U, V \subset X$ affine open, the intersection
+$U \cap V$ is affine, and
+\item there exists an open covering $\mathcal{U} : X = \bigcup_{i \in I} U_i$
+such that $U_{i_0 \ldots i_p}$ is affine open for all $p \ge 0$ and all
+$i_0, \ldots, i_p \in I$.
+\end{enumerate}
+In particular this holds if $X$ is separated.
+\end{lemma}
+
+\begin{proof}
+Assume $X$ has affine diagonal. Let $U, V \subset X$ be affine opens.
+Then $U \cap V = \Delta^{-1}(U \times V)$ is affine. Thus (2) holds.
+It is immediate that (2) implies (3). Conversely, if there is a
+covering of $X$ as in (3), then $X \times X = \bigcup U_i \times U_{i'}$
+is an affine open covering, and we see that
+$\Delta^{-1}(U_i \times U_{i'}) = U_i \cap U_{i'}$
+is affine. Then $\Delta$ is an affine morphism by
+Morphisms, Lemma \ref{morphisms-lemma-characterize-affine}.
+The final assertion follows from Schemes, Lemma
+\ref{schemes-lemma-characterize-separated}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-cohomology-quasi-coherent}
+Let $X$ be a scheme.
+Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be an open covering such that
+$U_{i_0 \ldots i_p}$ is affine open for all $p \ge 0$ and all
+$i_0, \ldots, i_p \in I$.
+In this case for any quasi-coherent sheaf $\mathcal{F}$ we have
+$$
+\check{H}^p(\mathcal{U}, \mathcal{F}) = H^p(X, \mathcal{F})
+$$
+as $\Gamma(X, \mathcal{O}_X)$-modules for all $p$.
+\end{lemma}
+
+\begin{proof}
+In view of
+Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}
+this is a special case of
+Cohomology, Lemma
+\ref{cohomology-lemma-cech-spectral-sequence-application}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Vanishing of cohomology}
+\label{section-vanishing}
+
+\noindent
+We have seen that on an affine scheme the higher cohomology groups
+of any quasi-coherent sheaf vanish
+(Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}).
+It turns out that this also
+characterizes affine schemes. We give two versions.
+
+\begin{lemma}
+\label{lemma-quasi-compact-h1-zero-covering}
+\begin{reference}
+\cite{Serre-criterion}, \cite[II, Theorem 5.2.1 (d') and IV (1.7.17)]{EGA}
+\end{reference}
+\begin{slogan}
+Serre's criterion for affineness.
+\end{slogan}
+Let $X$ be a scheme.
+Assume that
+\begin{enumerate}
+\item $X$ is quasi-compact,
+\item for every quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_X$ we have $H^1(X, \mathcal{I}) = 0$.
+\end{enumerate}
+Then $X$ is affine.
+\end{lemma}
+
+\begin{proof}
+Let $x \in X$ be a closed point. Let $U \subset X$ be an affine open
+neighbourhood of $x$. Write $U = \Spec(A)$ and let
+$\mathfrak m \subset A$ be the maximal ideal corresponding to $x$.
+Set $Z = X \setminus U$ and $Z' = Z \cup \{x\}$.
+By Schemes, Lemma \ref{schemes-lemma-reduced-closed-subscheme} there
+are quasi-coherent sheaves of ideals
+$\mathcal{I}$, resp.\ $\mathcal{I}'$ cutting out
+the reduced closed subschemes $Z$, resp.\ $Z'$.
+Consider the short exact sequence
+$$
+0 \to \mathcal{I}' \to \mathcal{I} \to \mathcal{I}/\mathcal{I}' \to 0.
+$$
+Since $x$ is a closed point of $X$ and $x \not \in Z$ we see that
+$\mathcal{I}/\mathcal{I}'$ is supported at $x$. In fact, the restriction
+of $\mathcal{I}/\mathcal{I}'$ to $U$ corresponds to the $A$-module
+$A/\mathfrak m$. Hence we see that $\Gamma(X, \mathcal{I}/\mathcal{I}')
+= A/\mathfrak m$. Since by assumption $H^1(X, \mathcal{I}') = 0$
+we see there exists a global section $f \in \Gamma(X, \mathcal{I})$
+which maps to the element $1 \in A/\mathfrak m$ as a section of
+$\mathcal{I}/\mathcal{I}'$. Clearly we have
+$x \in X_f \subset U$. This implies that $X_f = D(f_A)$ where
+$f_A$ is the image of $f$ in $A = \Gamma(U, \mathcal{O}_X)$.
+In particular $X_f$ is affine.
+
+\medskip\noindent
+Consider the union $W = \bigcup X_f$ over all $f \in \Gamma(X, \mathcal{O}_X)$
+such that $X_f$ is affine. Obviously $W$ is open in $X$.
+By the arguments above every closed point of
+$X$ is contained in $W$. The closed subset $X \setminus W$ of $X$
+is also quasi-compact
+(see Topology, Lemma \ref{topology-lemma-closed-in-quasi-compact}).
+Hence it has a closed point if it is nonempty (see
+Topology, Lemma \ref{topology-lemma-quasi-compact-closed-point}).
+This would contradict the fact that all closed points are in
+$W$. Hence we conclude $X = W$.
+
+\medskip\noindent
+Choose finitely many $f_1, \ldots, f_n \in \Gamma(X, \mathcal{O}_X)$
+such that $X = X_{f_1} \cup \ldots \cup X_{f_n}$ and such that each
+$X_{f_i}$ is affine. This is possible as we've seen above.
+By Properties, Lemma \ref{properties-lemma-characterize-affine}
+to finish the proof it suffices
+to show that $f_1, \ldots, f_n$ generate the unit ideal in
+$\Gamma(X, \mathcal{O}_X)$. Consider the short exact sequence
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{F} \ar[r] &
+\mathcal{O}_X^{\oplus n} \ar[rr]^{f_1, \ldots, f_n} & &
+\mathcal{O}_X \ar[r] &
+0
+}
+$$
+The arrow defined by $f_1, \ldots, f_n$ is surjective since the
+opens $X_{f_i}$ cover $X$. We let $\mathcal{F}$ be the kernel
+of this surjective map.
+Observe that $\mathcal{F}$ has a filtration
+$$
+0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset
+\ldots \subset \mathcal{F}_n = \mathcal{F}
+$$
+so that each subquotient $\mathcal{F}_i/\mathcal{F}_{i - 1}$ is
+isomorphic to a quasi-coherent sheaf of ideals.
+Namely we can take $\mathcal{F}_i$ to be the intersection of
+$\mathcal{F}$ with the first $i$ direct summands of
+$\mathcal{O}_X^{\oplus n}$.
+The assumption
+of the lemma implies that $H^1(X, \mathcal{F}_i/\mathcal{F}_{i - 1}) = 0$
+for all $i$. This implies that
+$H^1(X, \mathcal{F}_2) = 0$ because it is sandwiched between
+$H^1(X, \mathcal{F}_1)$ and $H^1(X, \mathcal{F}_2/\mathcal{F}_1)$.
+Continuing like this we deduce that $H^1(X, \mathcal{F}) = 0$.
+Therefore we conclude that the map
+$$
+\xymatrix{
+\bigoplus\nolimits_{i = 1, \ldots, n} \Gamma(X, \mathcal{O}_X)
+\ar[rr]^{f_1, \ldots, f_n} & &
+\Gamma(X, \mathcal{O}_X)
+}
+$$
+is surjective as desired.
+\end{proof}
+
+\noindent
+Note that if $X$ is a Noetherian scheme then every quasi-coherent
+sheaf of ideals is automatically a coherent sheaf of ideals and a
+finite type quasi-coherent sheaf of ideals. Hence
+the preceding lemma and the next lemma both apply in this case.
+
+\begin{lemma}
+\label{lemma-quasi-separated-h1-zero-covering}
+\begin{reference}
+\cite{Serre-criterion}, \cite[II, Theorem 5.2.1]{EGA}
+\end{reference}
+\begin{slogan}
+Serre's criterion for affineness.
+\end{slogan}
+Let $X$ be a scheme. Assume that
+\begin{enumerate}
+\item $X$ is quasi-compact,
+\item $X$ is quasi-separated, and
+\item $H^1(X, \mathcal{I}) = 0$ for every quasi-coherent sheaf
+of ideals $\mathcal{I}$ of finite type.
+\end{enumerate}
+Then $X$ is affine.
+\end{lemma}
+
+\begin{proof}
+By
+Properties, Lemma \ref{properties-lemma-quasi-coherent-colimit-finite-type}
+every quasi-coherent sheaf of ideals is a directed colimit of
+quasi-coherent sheaves of ideals of finite type.
+By Cohomology, Lemma \ref{cohomology-lemma-quasi-separated-cohomology-colimit}
+taking cohomology on $X$ commutes with directed colimits.
+Hence we see that $H^1(X, \mathcal{I}) = 0$
+for every quasi-coherent sheaf of ideals on $X$. In other words
+we see that Lemma \ref{lemma-quasi-compact-h1-zero-covering} applies.
+\end{proof}
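+
+\noindent
+To illustrate that the hypothesis on $H^1$ is a nontrivial condition,
+consider the punctured plane $U = \mathbf{A}^2_k \setminus \{0\}$
+over a field $k$, covered by the affine opens $D(x)$ and $D(y)$ of
+$\Spec(k[x, y])$. Since $D(x) \cap D(y) = D(xy)$ is affine,
+Lemma \ref{lemma-cech-cohomology-quasi-coherent} applies and one
+computes
+$$
+H^1(U, \mathcal{O}_U) =
+\text{Coker}\left(
+k[x, y]_x \oplus k[x, y]_y \longrightarrow k[x, y]_{xy}
+\right) =
+\bigoplus\nolimits_{i < 0,\ j < 0} k \cdot x^i y^j
+$$
+which is nonzero. Hence $U$ is not affine, as also follows from
+Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}.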
+
+\noindent
+We can use the arguments given above to find a sufficient condition
+for an invertible sheaf to be ample. However, we warn the reader that
+this condition is not necessary.
+
+\begin{lemma}
+\label{lemma-quasi-compact-h1-zero-invertible}
+Let $X$ be a scheme. Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Assume that
+\begin{enumerate}
+\item $X$ is quasi-compact,
+\item for every quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_X$
+there exists an $n \geq 1$ such that
+$H^1(X, \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}) = 0$.
+\end{enumerate}
+Then $\mathcal{L}$ is ample.
+\end{lemma}
+
+\begin{proof}
+This is proved in exactly the same way as
+Lemma \ref{lemma-quasi-compact-h1-zero-covering}.
+Let $x \in X$ be a closed point. Let $U \subset X$ be an affine open
+neighbourhood of $x$ such that $\mathcal{L}|_U \cong \mathcal{O}_U$.
+Write $U = \Spec(A)$ and let
+$\mathfrak m \subset A$ be the maximal ideal corresponding to $x$.
+Set $Z = X \setminus U$ and $Z' = Z \cup \{x\}$.
+By Schemes, Lemma \ref{schemes-lemma-reduced-closed-subscheme} there
+are quasi-coherent sheaves of ideals
+$\mathcal{I}$, resp.\ $\mathcal{I}'$ cutting out
+the reduced closed subschemes $Z$, resp.\ $Z'$.
+Consider the short exact sequence
+$$
+0 \to \mathcal{I}' \to \mathcal{I} \to \mathcal{I}/\mathcal{I}' \to 0.
+$$
+For every $n \geq 1$ we obtain a short exact sequence
+$$
+0 \to \mathcal{I}' \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}
+\to \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n} \to
+\mathcal{I}/\mathcal{I}' \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n} \to 0.
+$$
+By our assumption we may pick $n$ such that
+$H^1(X, \mathcal{I}' \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}) = 0$.
+Since $x$ is a closed point of $X$ and $x \not \in Z$ we see that
+$\mathcal{I}/\mathcal{I}'$ is supported at $x$. In fact, the restriction
+of $\mathcal{I}/\mathcal{I}'$ to $U$ corresponds to the $A$-module
+$A/\mathfrak m$. Since $\mathcal{L}$ is trivial on $U$
+we see that the restriction of
+$\mathcal{I}/\mathcal{I}' \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}$
+to $U$ also corresponds to the $A$-module $A/\mathfrak m$.
+Hence we see that
+$\Gamma(X, \mathcal{I}/\mathcal{I}' \otimes_{\mathcal{O}_X}
+\mathcal{L}^{\otimes n}) = A/\mathfrak m$.
+By our choice of $n$ we see there exists a global section
+$s \in \Gamma(X, \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n})$
+which maps to the element $1 \in A/\mathfrak m$. Clearly we have
+$x \in X_s \subset U$ because $s$ vanishes at points of $Z$.
+This implies that $X_s = D(f)$ where
+$f \in A$ is the image of $s$ in $A \cong \Gamma(U, \mathcal{L}^{\otimes n})$.
+In particular $X_s$ is affine.
+
+\medskip\noindent
+Consider the union $W = \bigcup X_s$ over all
+$s \in \Gamma(X, \mathcal{L}^{\otimes n})$ for $n \geq 1$
+such that $X_s$ is affine. Obviously $W$ is open in $X$.
+By the arguments above every closed point of
+$X$ is contained in $W$. The closed subset $X \setminus W$ of $X$
+is also quasi-compact
+(see Topology, Lemma \ref{topology-lemma-closed-in-quasi-compact}).
+Hence it has a closed point if it is nonempty (see
+Topology, Lemma \ref{topology-lemma-quasi-compact-closed-point}).
+This would contradict the fact that all closed points are in
+$W$. Hence we conclude $X = W$. This means that $\mathcal{L}$
+is ample by Properties, Definition \ref{properties-definition-ample}.
+\end{proof}
+
+\noindent
+There is a variant of Lemma \ref{lemma-quasi-compact-h1-zero-invertible}
+with finite type ideal sheaves which we will formulate and prove here if
+we ever need it.
+
+\begin{lemma}
+\label{lemma-criterion-affine-morphism}
+Let $f : X \to Y$ be a quasi-compact morphism with $X$ and $Y$ quasi-separated.
+If $R^1f_*\mathcal{I} = 0$ for every quasi-coherent sheaf of ideals
+$\mathcal{I}$ on $X$, then $f$ is affine.
+\end{lemma}
+
+\begin{proof}
+Let $V \subset Y$ be an affine open subscheme. We have to show that
+$U = f^{-1}(V)$ is affine. The inclusion morphism $V \to Y$ is quasi-compact
+by Schemes, Lemma \ref{schemes-lemma-quasi-compact-permanence}.
+Hence the base change $U \to X$ is quasi-compact, see
+Schemes, Lemma \ref{schemes-lemma-quasi-compact-preserved-base-change}.
+Thus any quasi-coherent sheaf of ideals $\mathcal{I}$ on $U$
+extends to a quasi-coherent sheaf of ideals on $X$, see
+Properties, Lemma \ref{properties-lemma-extend-trivial}.
+Since the formation of $R^1f_*$ is local on $Y$
+(Cohomology, Section \ref{cohomology-section-locality})
+we conclude that $R^1(U \to V)_*\mathcal{I} = 0$ by the assumption
+in the lemma. Hence by the Leray spectral sequence
+(Cohomology, Lemma \ref{cohomology-lemma-Leray})
+we conclude that $H^1(U, \mathcal{I}) = H^1(V, (U \to V)_*\mathcal{I})$.
+Since $(U \to V)_*\mathcal{I}$ is quasi-coherent by
+Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}, we have
+$H^1(V, (U \to V)_*\mathcal{I}) = 0$ by
+Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}.
+Thus we find that $U$ is affine by
+Lemma \ref{lemma-quasi-compact-h1-zero-covering}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Quasi-coherence of higher direct images}
+\label{section-quasi-coherence}
+
+\noindent
+We have seen that the higher cohomology groups of a quasi-coherent module on
+an affine scheme vanish. For (quasi-)separated quasi-compact schemes
+$X$ this implies
+vanishing of cohomology groups of quasi-coherent sheaves beyond a certain
+degree. However, it may not be the case that $X$ has finite cohomological
+dimension, because that is defined in terms of vanishing of cohomology
+of {\it all} $\mathcal{O}_X$-modules.
+
+\begin{lemma}[Induction Principle]
+\label{lemma-induction-principle}
+\begin{reference}
+\cite[Proposition 3.3.1]{BvdB}
+\end{reference}
+Let $X$ be a quasi-compact and quasi-separated scheme. Let $P$ be a property
+of the quasi-compact opens of $X$. Assume that
+\begin{enumerate}
+\item $P$ holds for every affine open of $X$,
+\item if $U$ is quasi-compact open, $V$ affine open,
+$P$ holds for $U$, $V$, and $U \cap V$, then
+$P$ holds for $U \cup V$.
+\end{enumerate}
+Then $P$ holds for every quasi-compact open of $X$
+and in particular for $X$.
+\end{lemma}
+
+\begin{proof}
+First we argue by induction that $P$ holds for {\it separated} quasi-compact
+opens $W \subset X$. Namely, such an open can be written as a union
+$W = U_1 \cup \ldots \cup U_n$ of affine opens and we can do
+induction on $n$ using
+property (2) with $U = U_1 \cup \ldots \cup U_{n - 1}$ and $V = U_n$.
+This is allowed because
+$U \cap V = (U_1 \cap U_n) \cup \ldots \cup (U_{n - 1} \cap U_n)$
+is also a union of $n - 1$ affine open subschemes by
+Schemes, Lemma \ref{schemes-lemma-characterize-separated}
+applied to the affine opens $U_i$ and $U_n$ of $W$.
+Having said this, for any quasi-compact open $W \subset X$ we can
+do induction on the number of affine opens needed to cover $W$
+using the same trick as before and using that the quasi-compact open
+$U_i \cap U_n$ is separated as an open subscheme of the affine scheme $U_n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-nr-affines}
+\begin{slogan}
+For schemes with affine diagonal, the cohomology of quasi-coherent
+modules vanishes in degrees bigger than the number of affine
+opens needed in a covering.
+\end{slogan}
+Let $X$ be a quasi-compact scheme with affine diagonal (for example
+if $X$ is separated).
+Let $t = t(X)$ be the minimal number of affine opens needed to
+cover $X$. Then $H^n(X, \mathcal{F}) = 0$ for all $n \geq t$ and all
+quasi-coherent sheaves $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+First proof.
+By induction on $t$.
+If $t = 1$ the result follows from
+Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}.
+If $t > 1$ write $X = U \cup V$ with $V$ affine open and
+$U = U_1 \cup \ldots \cup U_{t - 1}$ a union of $t - 1$ open affines.
+Note that in this case
+$U \cap V = (U_1 \cap V) \cup \ldots \cup (U_{t - 1} \cap V)$
+is also a union of $t - 1$ affine open subschemes.
+Namely, since the diagonal is affine, the intersection of two
+affine opens is affine, see Lemma \ref{lemma-affine-diagonal}.
+We apply the Mayer-Vietoris long exact sequence
+$$
+0 \to
+H^0(X, \mathcal{F}) \to
+H^0(U, \mathcal{F}) \oplus H^0(V, \mathcal{F}) \to
+H^0(U \cap V, \mathcal{F}) \to
+H^1(X, \mathcal{F}) \to \ldots
+$$
+see Cohomology, Lemma \ref{cohomology-lemma-mayer-vietoris}.
+By induction we see that the groups $H^i(U, \mathcal{F})$,
+$H^i(V, \mathcal{F})$, $H^i(U \cap V, \mathcal{F})$ are zero for
+$i \geq t - 1$. It follows immediately that $H^i(X, \mathcal{F})$
+is zero for $i \geq t$.
+
+\medskip\noindent
+Second proof.
+Let $\mathcal{U} : X = \bigcup_{i = 1}^t U_i$ be a finite affine open
+covering. Since $X$ has affine diagonal the multiple intersections
+$U_{i_0 \ldots i_p}$ are all affine, see
+Lemma \ref{lemma-affine-diagonal}.
+By Lemma \ref{lemma-cech-cohomology-quasi-coherent} the {\v C}ech
+cohomology groups $\check{H}^p(\mathcal{U}, \mathcal{F})$
+agree with the cohomology groups. By
+Cohomology, Lemma \ref{cohomology-lemma-alternating-usual}
+the {\v C}ech cohomology groups may be computed using the alternating
+{\v C}ech complex $\check{\mathcal{C}}_{alt}^\bullet(\mathcal{U}, \mathcal{F})$.
+As the covering consists of $t$ elements we see immediately
+that $\check{\mathcal{C}}_{alt}^p(\mathcal{U}, \mathcal{F}) = 0$
+for all $p \geq t$. Hence the result follows.
+\end{proof}
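+
+\noindent
+For example, $\mathbf{P}^1_k$ is covered by two affine opens, hence
+the lemma gives $H^n(\mathbf{P}^1_k, \mathcal{F}) = 0$ for all
+$n \geq 2$ and all quasi-coherent modules $\mathcal{F}$. Concretely,
+the alternating {\v C}ech complex of the standard covering
+$\mathcal{U} : \mathbf{P}^1_k = U_0 \cup U_1$ is
+$$
+\Gamma(U_0, \mathcal{F}) \oplus \Gamma(U_1, \mathcal{F})
+\longrightarrow
+\Gamma(U_0 \cap U_1, \mathcal{F})
+$$
+which has no terms in degrees $\geq 2$.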
+
+\begin{lemma}
+\label{lemma-affine-diagonal-universal-delta-functor}
+Let $X$ be a quasi-compact scheme with affine diagonal
+(for example if $X$ is separated). Then
+\begin{enumerate}
+\item given a quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$
+there exists an embedding $\mathcal{F} \to \mathcal{F}'$ of
+quasi-coherent $\mathcal{O}_X$-modules
+such that $H^p(X, \mathcal{F}') = 0$ for all $p \geq 1$, and
+\item $\{H^n(X, -)\}_{n \geq 0}$
+is a universal $\delta$-functor from $\QCoh(\mathcal{O}_X)$ to
+$\textit{Ab}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $X = \bigcup U_i$ be a finite affine open covering.
+Set $U = \coprod U_i$ and denote $j : U \to X$
+the morphism inducing the given open immersions $U_i \to X$.
+Since $U$ is an affine scheme and $X$ has affine diagonal,
+the morphism $j$ is affine, see
+Morphisms, Lemma \ref{morphisms-lemma-affine-permanence}.
+For every $\mathcal{O}_X$-module $\mathcal{F}$ there is
+a canonical map $\mathcal{F} \to j_*j^*\mathcal{F}$.
+This map is injective as can be seen by checking on stalks:
+if $x \in U_i$, then we have a factorization
+$$
+\mathcal{F}_x \to (j_*j^*\mathcal{F})_x
+\to (j^*\mathcal{F})_{x'} = \mathcal{F}_x
+$$
+where $x' \in U$ is the point $x$ viewed as a point of $U_i \subset U$.
+Now if $\mathcal{F}$ is quasi-coherent, then $j^*\mathcal{F}$
+is quasi-coherent on the affine scheme $U$ hence has vanishing
+higher cohomology by
+Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}.
+Then $H^p(X, j_*j^*\mathcal{F}) = 0$ for
+$p > 0$ by Lemma \ref{lemma-relative-affine-cohomology}
+as $j$ is affine. This proves (1).
Finally, for $p > 0$ the map
$H^p(X, \mathcal{F}) \to H^p(X, j_*j^*\mathcal{F})$
is zero since the target vanishes, and part (2) follows from
+Homology, Lemma \ref{homology-lemma-efface-implies-universal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-nr-affines-quasi-separated}
+Let $X$ be a quasi-compact quasi-separated scheme.
+Let $X = U_1 \cup \ldots \cup U_t$ be an affine open covering.
+Set
+$$
+d = \max\nolimits_{I \subset \{1, \ldots, t\}}
+\left(|I| + t(\bigcap\nolimits_{i \in I} U_i)\right)
+$$
+where $t(U)$ is the minimal number of affines needed to cover
+the scheme $U$. Then $H^n(X, \mathcal{F}) = 0$ for all $n \geq d$ and all
+quasi-coherent sheaves $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Note that since $X$ is quasi-separated the numbers
+$t(\bigcap_{i \in I} U_i)$ are finite.
+Let $\mathcal{U} : X = \bigcup_{i = 1}^t U_i$.
+By
+Cohomology, Lemma \ref{cohomology-lemma-cech-spectral-sequence}
+there is a spectral sequence
+$$
+E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))
+$$
converging to $H^{p + q}(X, \mathcal{F})$. By
+Cohomology, Lemma \ref{cohomology-lemma-alternating-usual}
+we have
+$$
+E_2^{p, q} =
+H^p(\check{\mathcal{C}}_{alt}^\bullet(
\mathcal{U}, \underline{H}^q(\mathcal{F})))
+$$
+The alternating {\v C}ech complex with values in the presheaf
+$\underline{H}^q(\mathcal{F})$ vanishes in high degrees by
+Lemma \ref{lemma-vanishing-nr-affines},
+more precisely $E_2^{p, q} = 0$ for $p + q \geq d$.
+Hence the result follows.
+\end{proof}
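\medskip\noindent
As a sanity check of the bound, suppose $X$ is separated. Then every
intersection $\bigcap_{i \in I} U_i$ with $I$ nonempty is affine, so
$t(\bigcap_{i \in I} U_i) \leq 1$ and hence
$$
d \leq \max\nolimits_{I \subset \{1, \ldots, t\}}
\left(|I| + 1\right) = t + 1.
$$
Thus the lemma gives $H^n(X, \mathcal{F}) = 0$ for $n \geq t + 1$,
slightly weaker than the bound $n \geq t$ of
Lemma \ref{lemma-vanishing-nr-affines}, which applies in this case
since separated schemes have affine diagonal.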
+
+\begin{lemma}
+\label{lemma-quasi-coherence-higher-direct-images}
+Let $f : X \to S$ be a morphism of schemes.
+Assume that $f$ is quasi-separated and quasi-compact.
+\begin{enumerate}
+\item For any quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ the
+higher direct images $R^pf_*\mathcal{F}$ are quasi-coherent on $S$.
+\item If $S$ is quasi-compact, there exists an integer $n = n(X, S, f)$
+such that $R^pf_*\mathcal{F} = 0$ for all $p \geq n$ and any
+quasi-coherent sheaf $\mathcal{F}$ on $X$.
+\item In fact, if $S$ is quasi-compact we can find $n = n(X, S, f)$
+such that for every
+morphism of schemes $S' \to S$ we have $R^p(f')_*\mathcal{F}' = 0$
+for $p \geq n$ and any quasi-coherent sheaf $\mathcal{F}'$
+on $X'$. Here $f' : X' = S' \times_S X \to S'$ is the base change of $f$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We first prove (1). Note that under the hypotheses of the lemma the sheaf
+$R^0f_*\mathcal{F} = f_*\mathcal{F}$ is quasi-coherent by
+Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}.
+Using
+Cohomology, Lemma \ref{cohomology-lemma-localize-higher-direct-images}
+we see that forming higher direct images commutes with restriction
+to open subschemes. Since being quasi-coherent is local on $S$ we
+may assume $S$ is affine.
+
+\medskip\noindent
+Assume $S$ is affine and $f$ quasi-compact and separated.
+Let $t \geq 1$ be the minimal number of affine opens needed to cover $X$.
+We will prove this case of (1) by induction on $t$.
+If $t = 1$ then the morphism $f$ is affine by
+Morphisms, Lemma \ref{morphisms-lemma-morphism-affines-affine}
+and (1) follows from
+Lemma \ref{lemma-relative-affine-vanishing}.
+If $t > 1$ write $X = U \cup V$ with $V$ affine open and
+$U = U_1 \cup \ldots \cup U_{t - 1}$ a union of $t - 1$ open affines.
+Note that in this case
$U \cap V = (U_1 \cap V) \cup \ldots \cup (U_{t - 1} \cap V)$
+is also a union of $t - 1$ affine open subschemes, see
+Schemes, Lemma \ref{schemes-lemma-characterize-separated}.
+We will apply the relative Mayer-Vietoris sequence
+$$
+0 \to
+f_*\mathcal{F} \to
+a_*(\mathcal{F}|_U) \oplus b_*(\mathcal{F}|_V) \to
+c_*(\mathcal{F}|_{U \cap V}) \to
+R^1f_*\mathcal{F} \to \ldots
+$$
+see Cohomology, Lemma \ref{cohomology-lemma-relative-mayer-vietoris}.
+By induction we see that
+$R^pa_*\mathcal{F}$, $R^pb_*\mathcal{F}$ and $R^pc_*\mathcal{F}$
are all quasi-coherent. Each of the sheaves
$R^pf_*\mathcal{F}$ sits in the middle of a short exact sequence
with the cokernel of a map between quasi-coherent sheaves
on the left and the kernel of a map between quasi-coherent sheaves
on the right.
Using the results on quasi-coherent sheaves in
Schemes, Section \ref{schemes-section-quasi-coherent} we conclude
that $R^pf_*\mathcal{F}$ is quasi-coherent.
+
+\medskip\noindent
+Assume $S$ is affine and $f$ quasi-compact and quasi-separated.
+Let $t \geq 1$ be the minimal number of affine opens needed to cover $X$.
+We will prove (1) by induction on $t$.
+In case $t = 1$ the morphism $f$ is separated and we are back
+in the previous case (see previous paragraph).
+If $t > 1$ write $X = U \cup V$ with $V$ affine open and
+$U$ a union of $t - 1$ open affines.
+Note that in this case $U \cap V$ is an open subscheme of an affine
+scheme and hence separated (see
+Schemes, Lemma \ref{schemes-lemma-affine-separated}).
+We will apply the relative Mayer-Vietoris sequence
+$$
+0 \to
+f_*\mathcal{F} \to
+a_*(\mathcal{F}|_U) \oplus b_*(\mathcal{F}|_V) \to
+c_*(\mathcal{F}|_{U \cap V}) \to
+R^1f_*\mathcal{F} \to \ldots
+$$
+see Cohomology, Lemma \ref{cohomology-lemma-relative-mayer-vietoris}.
+By induction and the result of the previous paragraph we see that
+$R^pa_*\mathcal{F}$, $R^pb_*\mathcal{F}$ and $R^pc_*\mathcal{F}$
are quasi-coherent. As in the previous paragraph this implies that
each of the sheaves $R^pf_*\mathcal{F}$ is quasi-coherent.
+
+\medskip\noindent
+Next, we prove (3) and a fortiori (2). Choose a finite affine open
covering $S = \bigcup_{j = 1, \ldots, m} S_j$. For each $j$ choose
a finite affine open covering
$f^{-1}(S_j) = \bigcup_{i = 1, \ldots, t_j} U_{ji}$.
+Let
+$$
+d_j = \max\nolimits_{I \subset \{1, \ldots, t_j\}}
+\left(|I| + t(\bigcap\nolimits_{i \in I} U_{ji})\right)
+$$
+be the integer found in
+Lemma \ref{lemma-vanishing-nr-affines-quasi-separated}.
+We claim that $n(X, S, f) = \max d_j$ works.
+
+\medskip\noindent
+Namely, let $S' \to S$ be a morphism of schemes and let
+$\mathcal{F}'$ be a quasi-coherent sheaf on $X' = S' \times_S X$.
+We want to show that $R^pf'_*\mathcal{F}' = 0$ for $p \geq n(X, S, f)$.
+Since this question is local on $S'$ we may assume that $S'$ is affine
+and maps into $S_j$ for some $j$. Then $X' = S' \times_{S_j} f^{-1}(S_j)$
is covered by the open affines $S' \times_{S_j} U_{ji}$, $i = 1, \ldots, t_j$
+and the intersections
+$$
+\bigcap\nolimits_{i \in I} S' \times_{S_j} U_{ji} =
+S' \times_{S_j} \bigcap\nolimits_{i \in I} U_{ji}
+$$
+are covered by the same number of affines as before the base change.
+Applying
+Lemma \ref{lemma-vanishing-nr-affines-quasi-separated}
we get $H^p(X', \mathcal{F}') = 0$ for $p \geq d_j$, hence for
$p \geq n(X, S, f)$. By the first part of the proof
+we already know that each $R^qf'_*\mathcal{F}'$ is quasi-coherent
+hence has vanishing higher cohomology groups on our affine scheme $S'$,
+thus we see that $H^0(S', R^pf'_*\mathcal{F}') = H^p(X', \mathcal{F}') = 0$
+by Cohomology, Lemma \ref{cohomology-lemma-apply-Leray}.
+Since $R^pf'_*\mathcal{F}'$ is quasi-coherent
+we conclude that $R^pf'_*\mathcal{F}' = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-coherence-higher-direct-images-application}
+Let $f : X \to S$ be a morphism of schemes.
+Assume that $f$ is quasi-separated and quasi-compact.
+Assume $S$ is affine.
+For any quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$
+we have
+$$
+H^q(X, \mathcal{F}) = H^0(S, R^qf_*\mathcal{F})
+$$
+for all $q \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Consider the Leray spectral sequence $E_2^{p, q} = H^p(S, R^qf_*\mathcal{F})$
+converging to $H^{p + q}(X, \mathcal{F})$, see
+Cohomology, Lemma \ref{cohomology-lemma-Leray}.
+By Lemma \ref{lemma-quasi-coherence-higher-direct-images}
+we see that the sheaves $R^qf_*\mathcal{F}$ are quasi-coherent.
+By Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}
+we see that $E_2^{p, q} = 0$ when $p > 0$.
+Hence the spectral sequence degenerates at $E_2$ and we win.
+See also
+Cohomology, Lemma \ref{cohomology-lemma-apply-Leray} (2)
+for the general principle.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Cohomology and base change, I}
+\label{section-cohomology-and-base-change}
+
+\noindent
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+Suppose further that $g : S' \to S$ is any morphism of schemes. Denote
+$X' = X_{S'} = S' \times_S X$ the base change of $X$ and denote
+$f' : X' \to S'$ the base change of $f$.
+Also write $g' : X' \to X$ the projection,
+and set $\mathcal{F}' = (g')^*\mathcal{F}$.
+Here is a diagram representing the situation:
+\begin{equation}
+\label{equation-base-change-diagram}
+\vcenter{
+\xymatrix{
+\mathcal{F}' = (g')^*\mathcal{F} &
+X' \ar[r]_{g'} \ar[d]_{f'} &
+X \ar[d]^f &
+\mathcal{F} \\
+Rf'_*\mathcal{F}' &
+S' \ar[r]^g &
+S &
+Rf_*\mathcal{F}
+}
+}
+\end{equation}
+Here is the simplest case of the base change property we have in mind.
+
+\begin{lemma}
+\label{lemma-affine-base-change}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Assume $f$ is affine.
+In this case $f_*\mathcal{F} \cong Rf_*\mathcal{F}$ is
+a quasi-coherent sheaf, and for every base change diagram
+(\ref{equation-base-change-diagram})
+we have
+$$
+g^*f_*\mathcal{F} = f'_*(g')^*\mathcal{F}.
+$$
+\end{lemma}
+
+\begin{proof}
+The vanishing of higher direct images is
+Lemma \ref{lemma-relative-affine-vanishing}.
+The statement is local on $S$ and $S'$. Hence we may
+assume $X = \Spec(A)$, $S = \Spec(R)$,
+$S' = \Spec(R')$ and $\mathcal{F} = \widetilde{M}$
+for some $A$-module $M$.
+We use Schemes, Lemma \ref{schemes-lemma-widetilde-pullback}
+to describe pullbacks and pushforwards of $\mathcal{F}$.
+Namely, $X' = \Spec(R' \otimes_R A)$ and
+$\mathcal{F}'$ is the quasi-coherent sheaf associated
+to $(R' \otimes_R A) \otimes_A M$.
+Thus we see that the lemma boils down to the
+equality
+$$
+(R' \otimes_R A) \otimes_A M = R' \otimes_R M
+$$
+as $R'$-modules.
+\end{proof}
+
+\noindent
+In many situations it is sufficient to know about the following
+special case of cohomology and base change. It follows immediately
+from the stronger results in
+Section \ref{section-cohomology-and-base-change-derived},
+but since it is so important it deserves its own proof.
+
+\begin{lemma}[Flat base change]
+\label{lemma-flat-base-change-cohomology}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_{g'} & X \ar[d]^f \\
+S' \ar[r]^g & S
+}
+$$
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module
+with pullback $\mathcal{F}' = (g')^*\mathcal{F}$.
+Assume that $g$ is flat and that $f$ is quasi-compact and quasi-separated.
+For any $i \geq 0$
+\begin{enumerate}
+\item the base change map of
+Cohomology, Lemma \ref{cohomology-lemma-base-change-map-flat-case}
+is an isomorphism
+$$
+g^*R^if_*\mathcal{F} \longrightarrow R^if'_*\mathcal{F}',
+$$
+\item if $S = \Spec(A)$ and $S' = \Spec(B)$, then
+$H^i(X, \mathcal{F}) \otimes_A B = H^i(X', \mathcal{F}')$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Using Cohomology, Lemma \ref{cohomology-lemma-base-change-map-flat-case} in (1)
+is allowed since $g'$ is flat by
+Morphisms, Lemma \ref{morphisms-lemma-base-change-flat}.
+Having said this, part (1) follows from part (2). Namely,
+part (1) is local on $S'$ and hence we may assume $S$
+and $S'$ are affine. In other words, we have $S = \Spec(A)$
+and $S' = \Spec(B)$ as in (2).
+Then since $R^if_*\mathcal{F}$ is quasi-coherent
+(Lemma \ref{lemma-quasi-coherence-higher-direct-images}),
+it is the quasi-coherent $\mathcal{O}_S$-module associated to the
+$A$-module $H^0(S, R^if_*\mathcal{F}) = H^i(X, \mathcal{F})$
+(equality by
+Lemma \ref{lemma-quasi-coherence-higher-direct-images-application}).
+Similarly, $R^if'_*\mathcal{F}'$ is the quasi-coherent
+$\mathcal{O}_{S'}$-module associated to the $B$-module
+$H^i(X', \mathcal{F}')$. Since pullback by $g$ corresponds
+to $- \otimes_A B$ on modules
+(Schemes, Lemma \ref{schemes-lemma-widetilde-pullback})
+we see that it suffices to prove (2).
+
+\medskip\noindent
+Let $A \to B$ be a flat ring homomorphism.
+Let $X$ be a quasi-compact and quasi-separated scheme over $A$.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Set $X_B = X \times_{\Spec(A)} \Spec(B)$ and denote
+$\mathcal{F}_B$ the pullback of $\mathcal{F}$.
+We are trying to show that the map
+$$
+H^i(X, \mathcal{F}) \otimes_A B \longrightarrow H^i(X_B, \mathcal{F}_B)
+$$
+(given by the reference in the statement of the lemma)
+is an isomorphism.
+
+\medskip\noindent
+In case $X$ is separated, choose an affine open covering
+$\mathcal{U} : X = U_1 \cup \ldots \cup U_t$ and recall that
+$$
+\check{H}^p(\mathcal{U}, \mathcal{F}) = H^p(X, \mathcal{F}),
+$$
+see
+Lemma \ref{lemma-cech-cohomology-quasi-coherent}.
Let $\mathcal{U}_B : X_B = (U_1)_B \cup \ldots \cup (U_t)_B$ be the
covering obtained by base change. Then each $(U_i)_B$ is still affine
and $X_B$ is still separated. Thus we obtain
+$$
+\check{H}^p(\mathcal{U}_B, \mathcal{F}_B) = H^p(X_B, \mathcal{F}_B).
+$$
+We have the following relation between the {\v C}ech complexes
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}_B, \mathcal{F}_B) =
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \otimes_A B
+$$
+as follows from
+Lemma \ref{lemma-affine-base-change}.
+Since $A \to B$ is flat, the same thing remains true on taking cohomology.
+
+\medskip\noindent
+In case $X$ is quasi-separated, choose an affine open covering
+$\mathcal{U} : X = U_1 \cup \ldots \cup U_t$. We will use the
+{\v C}ech-to-cohomology spectral sequence
+Cohomology, Lemma \ref{cohomology-lemma-cech-spectral-sequence}.
+The reader who wishes to avoid this spectral sequence
+can use Mayer-Vietoris and induction on $t$ as in the proof of
+Lemma \ref{lemma-quasi-coherence-higher-direct-images}.
+The spectral sequence has $E_2$-page
+$E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))$
+and converges to $H^{p + q}(X, \mathcal{F})$.
+Similarly, we have a spectral sequence with $E_2$-page
+$E_2^{p, q} = \check{H}^p(\mathcal{U}_B, \underline{H}^q(\mathcal{F}_B))$
+which converges to $H^{p + q}(X_B, \mathcal{F}_B)$.
+Since the intersections $U_{i_0 \ldots i_p}$ are quasi-compact
+and separated, the result of the second paragraph of the proof gives
+$\check{H}^p(\mathcal{U}_B, \underline{H}^q(\mathcal{F}_B)) =
+\check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F})) \otimes_A B$.
+Using that $A \to B$ is flat we conclude that
+$H^i(X, \mathcal{F}) \otimes_A B \to H^i(X_B, \mathcal{F}_B)$
+is an isomorphism for all $i$ and we win.
+\end{proof}
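\medskip\noindent
A typical application of part (2) of
Lemma \ref{lemma-flat-base-change-cohomology} is localization: for
$f \in A$ the ring map $A \to A_f$ is flat, hence
$$
H^i(X, \mathcal{F}) \otimes_A A_f = H^i(X_{A_f}, \mathcal{F}_{A_f})
$$
where $X_{A_f} = X \times_{\Spec(A)} \Spec(A_f)$ and
$\mathcal{F}_{A_f}$ is the pullback of $\mathcal{F}$. In other words,
cohomology of quasi-coherent modules commutes with localization
on the base.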
+
+\begin{lemma}[Finite locally free base change]
+\label{lemma-finite-locally-free-base-change-cohomology}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+Y \ar[d]_{g} \ar[r]_h & X \ar[d]^f \\
+\Spec(B) \ar[r] & \Spec(A)
+}
+$$
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module
+with pullback $\mathcal{G} = h^*\mathcal{F}$.
+If $B$ is a finite locally free $A$-module, then
+$H^i(X, \mathcal{F}) \otimes_A B = H^i(Y, \mathcal{G})$.
+\end{lemma}
+
+\noindent
+{\bf Warning}: Do not use this lemma unless you understand the difference
+between this and Lemma \ref{lemma-flat-base-change-cohomology}.
+
+\begin{proof}
+In case $X$ is separated, choose an affine open covering
+$\mathcal{U} : X = \bigcup_{i \in I} U_i$ and recall that
+$$
+\check{H}^p(\mathcal{U}, \mathcal{F}) = H^p(X, \mathcal{F}),
+$$
+see
+Lemma \ref{lemma-cech-cohomology-quasi-coherent}.
+Let $\mathcal{V} : Y = \bigcup_{i \in I} g^{-1}(U_i)$
+be the corresponding affine open covering of $Y$.
+The opens $V_i = g^{-1}(U_i) = U_i \times_{\Spec(A)} \Spec(B)$
+are affine and $Y$ is separated. Thus we obtain
+$$
+\check{H}^p(\mathcal{V}, \mathcal{G}) = H^p(Y, \mathcal{G}).
+$$
+We claim the map of {\v C}ech complexes
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \otimes_A B
+\longrightarrow
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{G})
+$$
+is an isomorphism. Namely, as $B$ is finitely presented as an $A$-module
+we see that tensoring with $B$ over $A$ commutes with products, see
+Algebra, Proposition \ref{algebra-proposition-fp-tensor}.
+Thus it suffices to show that the maps
+$\Gamma(U_{i_0 \ldots i_p}, \mathcal{F}) \otimes_A B \to
+\Gamma(V_{i_0 \ldots i_p}, \mathcal{G})$
+are isomorphisms which follows from
+Lemma \ref{lemma-affine-base-change}.
+Since $A \to B$ is flat, the same thing remains true on taking cohomology.
+
+\medskip\noindent
+In the general case we argue in exactly the same way using affine
+open covering $\mathcal{U} : X = \bigcup_{i \in I} U_i$ and the
+corresponding covering $\mathcal{V} : Y = \bigcup_{i \in I} V_i$
+with $V_i = g^{-1}(U_i)$ as above. We will use the
+{\v C}ech-to-cohomology spectral sequence
+Cohomology, Lemma \ref{cohomology-lemma-cech-spectral-sequence}.
+The spectral sequence has $E_2$-page
+$E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))$
+and converges to $H^{p + q}(X, \mathcal{F})$.
+Similarly, we have a spectral sequence with $E_2$-page
+$E_2^{p, q} = \check{H}^p(\mathcal{V}, \underline{H}^q(\mathcal{G}))$
+which converges to $H^{p + q}(Y, \mathcal{G})$.
+Since the intersections $U_{i_0 \ldots i_p}$ are separated, the result
+of the previous paragraph gives isomorphisms
+$\Gamma(U_{i_0 \ldots i_p}, \underline{H}^q(\mathcal{F})) \otimes_A B
+\to \Gamma(V_{i_0 \ldots i_p}, \underline{H}^q(\mathcal{G}))$.
+Using that $- \otimes_A B$ commutes with products and is exact, we conclude
+that
+$\check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F})) \otimes_A B
+\to \check{H}^p(\mathcal{V}, \underline{H}^q(\mathcal{G}))$
+is an isomorphism. Using that $A \to B$ is flat we conclude that
+$H^i(X, \mathcal{F}) \otimes_A B \to H^i(Y, \mathcal{G})$
+is an isomorphism for all $i$ and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Colimits and higher direct images}
+\label{section-colimits}
+
+\noindent
+General results of this nature can be found in
+Cohomology, Section \ref{cohomology-section-limits},
+Sheaves, Lemma \ref{sheaves-lemma-directed-colimits-sections}, and
+Modules, Lemma \ref{modules-lemma-finite-presentation-quasi-compact-colimit}.
+
+\begin{lemma}
+\label{lemma-colimit-cohomology}
+Let $f : X \to S$ be a quasi-compact and quasi-separated morphism of schemes.
+Let $\mathcal{F} = \colim \mathcal{F}_i$ be a filtered colimit
+of quasi-coherent sheaves on $X$.
+Then for any $p \geq 0$ we have
+$$
+R^pf_*\mathcal{F} = \colim R^pf_*\mathcal{F}_i.
+$$
+\end{lemma}
+
+\begin{proof}
+Recall that $R^pf_*\mathcal{F}$ is the sheaf associated to
+$U \mapsto H^p(f^{-1}U, \mathcal{F})$, see
+Cohomology, Lemma \ref{cohomology-lemma-describe-higher-direct-images}.
+Recall that the colimit is the sheaf associated to the presheaf colimit
+(taking colimits over opens). Hence we can apply
+Cohomology, Lemma \ref{cohomology-lemma-quasi-separated-cohomology-colimit}
+to $H^p(f^{-1}U, -)$ where $U$ is affine to conclude. (Because the
+basis of affine opens in $f^{-1}U$ satisfies the assumptions of that
+lemma.)
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Cohomology and base change, II}
+\label{section-cohomology-and-base-change-derived}
+
+\noindent
+Let $f : X \to S$ be a morphism of schemes and let $\mathcal{F}$
+be a quasi-coherent $\mathcal{O}_X$-module. If $f$ is quasi-compact
+and quasi-separated we would like to represent $Rf_*\mathcal{F}$
+by a complex of quasi-coherent sheaves on $S$. This follows
+from the fact that the sheaves $R^if_*\mathcal{F}$ are quasi-coherent
+if $S$ is quasi-compact and has affine diagonal,
+using that $D_\QCoh(S)$ is equivalent to
+$D(\QCoh(\mathcal{O}_S))$, see
+Derived Categories of Schemes, Proposition
+\ref{perfect-proposition-quasi-compact-affine-diagonal}.
+
+\medskip\noindent
+In this section we will use a different approach which produces an
+explicit complex having a good base change property. The construction
+is particularly easy if $f$ and $S$ are separated, or more generally
have affine diagonal. Since this is by far the most frequently
used case, we treat it separately.
+
+\begin{lemma}
+\label{lemma-separated-case-relative-cech}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Assume $X$ is quasi-compact and $X$ and $S$ have affine diagonal
+(e.g., if $X$ and $S$ are separated).
+In this case we can compute $Rf_*\mathcal{F}$ as follows:
+\begin{enumerate}
+\item Choose a finite affine open covering
+$\mathcal{U} : X = \bigcup_{i = 1, \ldots, n} U_i$.
+\item For $i_0, \ldots, i_p \in \{1, \ldots, n\}$ denote
+$f_{i_0 \ldots i_p} : U_{i_0 \ldots i_p} \to S$ the restriction of $f$
+to the intersection $U_{i_0 \ldots i_p} = U_{i_0} \cap \ldots \cap U_{i_p}$.
+\item Set $\mathcal{F}_{i_0 \ldots i_p}$ equal to the restriction
+of $\mathcal{F}$ to $U_{i_0 \ldots i_p}$.
+\item Set
+$$
+\check{\mathcal{C}}^p(\mathcal{U}, f, \mathcal{F}) =
+\bigoplus\nolimits_{i_0 \ldots i_p}
+f_{i_0 \ldots i_p *} \mathcal{F}_{i_0 \ldots i_p}
+$$
+and define differentials
+$d : \check{\mathcal{C}}^p(\mathcal{U}, f, \mathcal{F})
+\to \check{\mathcal{C}}^{p + 1}(\mathcal{U}, f, \mathcal{F})$
+as in Cohomology, Equation (\ref{cohomology-equation-d-cech}).
+\end{enumerate}
+Then the complex $\check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F})$
+is a complex of quasi-coherent sheaves on $S$ which comes equipped with an
+isomorphism
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F})
+\longrightarrow
+Rf_*\mathcal{F}
+$$
+in $D^{+}(S)$. This isomorphism is functorial in the quasi-coherent
+sheaf $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Consider the resolution
+$\mathcal{F} \to {\mathfrak C}^\bullet(\mathcal{U}, \mathcal{F})$
+of Cohomology, Lemma \ref{cohomology-lemma-covering-resolution}.
+We have an equality of complexes
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F}) =
+f_*{\mathfrak C}^\bullet(\mathcal{U}, \mathcal{F})$
+of quasi-coherent $\mathcal{O}_S$-modules.
+The morphisms $j_{i_0 \ldots i_p} : U_{i_0 \ldots i_p} \to X$
+and the morphisms $f_{i_0 \ldots i_p} : U_{i_0 \ldots i_p} \to S$
+are affine by Morphisms, Lemma \ref{morphisms-lemma-affine-permanence}
+and Lemma \ref{lemma-affine-diagonal}.
+Hence $R^qj_{i_0 \ldots i_p *}\mathcal{F}_{i_0 \ldots i_p}$
+as well as $R^qf_{i_0 \ldots i_p *}\mathcal{F}_{i_0 \ldots i_p}$
+are zero for $q > 0$ (Lemma \ref{lemma-relative-affine-vanishing}).
+Using $f \circ j_{i_0 \ldots i_p} = f_{i_0 \ldots i_p}$ and
+the spectral sequence of
+Cohomology, Lemma \ref{cohomology-lemma-relative-Leray}
+we conclude that
+$R^qf_*(j_{i_0 \ldots i_p *}\mathcal{F}_{i_0 \ldots i_p}) = 0$
+for $q > 0$.
+Since the terms of the complex
+${\mathfrak C}^\bullet(\mathcal{U}, \mathcal{F})$ are finite direct
+sums of the sheaves $j_{i_0 \ldots i_p *}\mathcal{F}_{i_0 \ldots i_p}$
+we conclude using Leray's acyclicity lemma
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity})
+that
+$$
+Rf_* \mathcal{F} = f_*{\mathfrak C}^\bullet(\mathcal{U}, \mathcal{F}) =
+\check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F})
+$$
+as desired.
+\end{proof}
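\medskip\noindent
For example, if the covering consists of two affine opens
$X = U_1 \cup U_2$, then (discarding the degenerate summands coming
from repeated indices) the complex of
Lemma \ref{lemma-separated-case-relative-cech} reduces to
$$
f_{1 *}\mathcal{F}_1 \oplus f_{2 *}\mathcal{F}_2
\longrightarrow
f_{1 2 *}\mathcal{F}_{1 2}
$$
in degrees $0$ and $1$, whose cohomology sheaves recover the terms
of the relative Mayer-Vietoris sequence for $R^pf_*\mathcal{F}$
used earlier.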
+
+\noindent
+Next, we are going to consider what happens if we do a base change.
+
+\begin{lemma}
+\label{lemma-base-change-complex}
+With notation as in diagram (\ref{equation-base-change-diagram}).
+Assume $f : X \to S$ and $\mathcal{F}$ satisfy the hypotheses of
+Lemma \ref{lemma-separated-case-relative-cech}. Choose a finite
+affine open covering $\mathcal{U} : X = \bigcup U_i$ of $X$.
+There is a canonical isomorphism
+$$
+g^*\check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F})
+\longrightarrow
+Rf'_*\mathcal{F}'
+$$
+in $D^{+}(S')$. Moreover, if $S' \to S$ is affine, then in fact
+$$
+g^*\check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F})
+=
+\check{\mathcal{C}}^\bullet(\mathcal{U}', f', \mathcal{F}')
+$$
+with $\mathcal{U}' : X' = \bigcup U_i'$ where
+$U_i' = (g')^{-1}(U_i) = U_{i, S'}$ is also affine.
+\end{lemma}
+
+\begin{proof}
+In fact we may define $U_i' = (g')^{-1}(U_i) = U_{i, S'}$ no matter
+whether $S'$ is affine over $S$ or not.
+Let $\mathcal{U}' : X' = \bigcup U_i'$ be the induced covering of $X'$.
+In this case we claim that
+$$
+g^*\check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F})
+=
+\check{\mathcal{C}}^\bullet(\mathcal{U}', f', \mathcal{F}')
+$$
+with $\check{\mathcal{C}}^\bullet(\mathcal{U}', f', \mathcal{F}')$
+defined in exactly the same manner as in
+Lemma \ref{lemma-separated-case-relative-cech}.
+This is clear from the case of affine morphisms
+(Lemma \ref{lemma-affine-base-change}) by working locally on $S'$.
+Moreover, exactly as in the proof of
+Lemma \ref{lemma-separated-case-relative-cech}
+one sees that there is an isomorphism
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}', f', \mathcal{F}')
+\longrightarrow
+Rf'_*\mathcal{F}'
+$$
+in $D^{+}(S')$ since the morphisms $U_i' \to X'$ and $U_i' \to S'$
+are still affine (being base changes of affine morphisms).
+Details omitted.
+\end{proof}
+
+\noindent
+The lemma above says that the complex
+$$
+\mathcal{K}^\bullet = \check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F})
+$$
+is a bounded below complex of quasi-coherent sheaves on $S$ which
+{\it universally} computes the higher direct images of $f : X \to S$.
This is a property of this particular complex: in general it
is not preserved by replacing
$\check{\mathcal{C}}^\bullet(\mathcal{U}, f, \mathcal{F})$ by
a quasi-isomorphic complex. In other words, this is
+not a statement that makes sense in the derived category.
+The reason is that the pullback $g^*\mathcal{K}^\bullet$ is
+{\it not} equal to the derived pullback $Lg^*\mathcal{K}^\bullet$
+of $\mathcal{K}^\bullet$ in general!
+
+\medskip\noindent
+Here is a more general case where we can prove this statement.
+We remark that the condition of $S$ being separated is harmless
+in most applications, since this is usually used to prove some
+local property of the total derived image.
+The proof is significantly more involved and uses hypercoverings;
+it is a nice example of how you can use them sometimes.
+
+\begin{lemma}
+\label{lemma-hypercoverings}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+Assume that $f$ is quasi-compact and quasi-separated and
+that $S$ is quasi-compact and separated.
+There exists a bounded below complex $\mathcal{K}^\bullet$
+of quasi-coherent $\mathcal{O}_S$-modules with the
+following property: For every morphism
+$g : S' \to S$ the complex $g^*\mathcal{K}^\bullet$ is
+a representative for $Rf'_*\mathcal{F}'$ with notation as in
+diagram (\ref{equation-base-change-diagram}).
+\end{lemma}
+
+\begin{proof}
+(If $f$ is separated as well, please see
+Lemma \ref{lemma-base-change-complex}.)
+The assumptions imply in particular that $X$
+is quasi-compact and quasi-separated as a scheme.
+Let $\mathcal{B}$ be the set of affine opens of $X$. By
+Hypercoverings,
+Lemma \ref{hypercovering-lemma-quasi-separated-quasi-compact-hypercovering}
+we can find a hypercovering $K = (I, \{U_i\})$ such that each
+$I_n$ is finite and each $U_i$ is an affine open of $X$. By
+Hypercoverings, Lemma \ref{hypercovering-lemma-cech-spectral-sequence}
+there is a spectral sequence with $E_2$-page
+$$
+E_2^{p, q} = \check{H}^p(K, \underline{H}^q(\mathcal{F}))
+$$
+converging to $H^{p + q}(X, \mathcal{F})$. Note that
+$\check{H}^p(K, \underline{H}^q(\mathcal{F}))$ is the $p$th cohomology
+group of the complex
+$$
+\prod\nolimits_{i \in I_0} H^q(U_i, \mathcal{F})
+\to
+\prod\nolimits_{i \in I_1} H^q(U_i, \mathcal{F})
+\to
+\prod\nolimits_{i \in I_2} H^q(U_i, \mathcal{F})
+\to \ldots
+$$
+Since each $U_i$ is affine we see that this is zero unless $q = 0$
+in which case we obtain
+$$
+\prod\nolimits_{i \in I_0} \mathcal{F}(U_i)
+\to
+\prod\nolimits_{i \in I_1} \mathcal{F}(U_i)
+\to
+\prod\nolimits_{i \in I_2} \mathcal{F}(U_i)
+\to \ldots
+$$
+Thus we conclude that $R\Gamma(X, \mathcal{F})$ is computed by
+this complex.
+
+\medskip\noindent
+For any $n$ and $i \in I_n$ denote $f_i : U_i \to S$ the restriction of
+$f$ to $U_i$. As $S$ is separated and $U_i$ is affine this morphism
+is affine. Consider the complex of quasi-coherent sheaves
+$$
+\mathcal{K}^\bullet = (
+\prod\nolimits_{i \in I_0} f_{i, *}\mathcal{F}|_{U_i}
+\to
+\prod\nolimits_{i \in I_1} f_{i, *}\mathcal{F}|_{U_i}
+\to
+\prod\nolimits_{i \in I_2} f_{i, *}\mathcal{F}|_{U_i}
+\to \ldots )
+$$
+on $S$. As in
+Hypercoverings, Lemma \ref{hypercovering-lemma-cech-spectral-sequence}
+we obtain a map $\mathcal{K}^\bullet \to Rf_*\mathcal{F}$ in
+$D(\mathcal{O}_S)$ by choosing an injective resolution of $\mathcal{F}$
+(details omitted). Consider any affine scheme $V$ and a morphism
+$g : V \to S$. Then the base change $X_V$ has a hypercovering
+$K_V = (I, \{U_{i, V}\})$ obtained by base change. Moreover,
$g^*f_{i, *}(\mathcal{F}|_{U_i}) = f_{i, V, *}((g')^*\mathcal{F}|_{U_{i, V}})$
by Lemma \ref{lemma-affine-base-change}.
+Thus the arguments above prove that $\Gamma(V, g^*\mathcal{K}^\bullet)$
+computes $R\Gamma(X_V, (g')^*\mathcal{F})$.
+This finishes the proof of the lemma as it suffices to prove
+the equality of complexes Zariski locally on $S'$.
+\end{proof}
+
+\noindent
+The following lemma is a variant to flat base change.
+
+\begin{lemma}
+\label{lemma-base-change-tensor-with-flat}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_{g'} & X \ar[d]^f \\
+S' \ar[r]^g & S
+}
+$$
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\mathcal{G}$ be a quasi-coherent $\mathcal{O}_{S'}$-module
+flat over $S$. Assume $f$ is quasi-compact and quasi-separated.
+For any $i \geq 0$ there is an identification
+$$
+\mathcal{G} \otimes_{\mathcal{O}_{S'}} g^*R^if_*\mathcal{F} =
+R^if'_*\left((f')^*\mathcal{G}
+\otimes_{\mathcal{O}_{X'}} (g')^*\mathcal{F}\right)
+$$
+\end{lemma}
+
+\begin{proof}
+Let us construct a map from left to right. First, we have the base change map
+$Lg^*Rf_*\mathcal{F} \to Rf'_*L(g')^*\mathcal{F}$. There is also the adjunction
map $\mathcal{G} \to Rf'_*L(f')^*\mathcal{G}$. We obtain
+\begin{align*}
+\mathcal{G}
+\otimes_{\mathcal{O}_{S'}}^\mathbf{L}
+Lg^*Rf_*\mathcal{F}
+& \to
+Rf'_*L(f')^*\mathcal{G}
+\otimes_{\mathcal{O}_{S'}}^\mathbf{L}
+Rf'_*L(g')^*\mathcal{F} \\
+& \to
+Rf'_*\left(L(f')^*\mathcal{G}
+\otimes_{\mathcal{O}_{X'}}^\mathbf{L}
+L(g')^*\mathcal{F}\right) \\
+& \to
+Rf'_*\left((f')^*\mathcal{G}
+\otimes_{\mathcal{O}_{X'}} (g')^*\mathcal{F}\right)
+\end{align*}
+where for the middle arrow we used the relative cup product, see
+Cohomology, Remark \ref{cohomology-remark-cup-product}.
+The source of the composition is
+$$
+\mathcal{G} \otimes_{\mathcal{O}_{S'}}^\mathbf{L} Lg^*Rf_*\mathcal{F} =
+\mathcal{G} \otimes_{g^{-1}\mathcal{O}_S}^\mathbf{L} g^{-1}Rf_*\mathcal{F}
+$$
+by Cohomology, Lemma \ref{cohomology-lemma-variant-derived-pullback}.
+Since $\mathcal{G}$ is flat as a sheaf of $g^{-1}\mathcal{O}_S$-modules
+and since $g^{-1}$ is an exact functor, this is a complex whose
+$i$th cohomology sheaf is
+$\mathcal{G} \otimes_{g^{-1}\mathcal{O}_S} g^{-1}R^if_*\mathcal{F} =
+\mathcal{G} \otimes_{\mathcal{O}_{S'}} g^*R^if_*\mathcal{F}$.
+In this way we obtain, for each $i \geq 0$, a global map from left to right
+in the equality of the lemma. To show this map is an isomorphism we may
+work locally on $S'$. Thus we may and do assume that $S$ and $S'$
+are affine schemes.
+
+\medskip\noindent
+Proof in case $S$ and $S'$ are affine. Say $S = \Spec(A)$ and $S' = \Spec(B)$
+and say $\mathcal{G}$ corresponds to the $B$-module $N$ which is assumed to be
+$A$-flat. Since $S$ is affine, $X$ is quasi-compact and quasi-separated.
+We will use a hypercovering argument to finish the proof; if $X$ is separated
+or has affine diagonal, then one can use a {\v C}ech covering instead.
+Let $\mathcal{B}$ be the set of affine opens of $X$. By
+Hypercoverings,
+Lemma \ref{hypercovering-lemma-quasi-separated-quasi-compact-hypercovering}
+we can find a hypercovering $K = (I, \{U_i\})$ of $X$ such that each
+$I_n$ is finite and each $U_i$ is an affine open of $X$. By
+Hypercoverings, Lemma \ref{hypercovering-lemma-cech-spectral-sequence}
+there is a spectral sequence with $E_2$-page
+$$
+E_2^{p, q} = \check{H}^p(K, \underline{H}^q(\mathcal{F}))
+$$
+converging to $H^{p + q}(X, \mathcal{F})$. Since each $U_i$ is affine
+and $\mathcal{F}$ is quasi-coherent the value of
+$\underline{H}^q(\mathcal{F})$ is zero on $U_i$ for $q > 0$. Thus the
+spectral sequence degenerates and we conclude that the cohomology
+modules $H^q(X, \mathcal{F})$ are computed by
+$$
+\prod\nolimits_{i \in I_0} \mathcal{F}(U_i)
+\to
+\prod\nolimits_{i \in I_1} \mathcal{F}(U_i)
+\to
+\prod\nolimits_{i \in I_2} \mathcal{F}(U_i)
+\to \ldots
+$$
+Next, note that the base change of our hypercovering to
+$S'$ is a hypercovering of $X' = S' \times_S X$.
+The schemes $S' \times_S U_i$ are affine too
+and we have
+$$
+\left((f')^*\mathcal{G} \otimes_{\mathcal{O}_{S'}} (g')^*\mathcal{F}\right)
+(S' \times_S U_i) =
+N \otimes_A \mathcal{F}(U_i)
+$$
+In this way we conclude that the cohomology modules
+$H^q(X', (f')^*\mathcal{G} \otimes_{\mathcal{O}_{S'}} (g')^*\mathcal{F})$
+are computed by
+$$
+N \otimes_A \left(
+\prod\nolimits_{i \in I_0} \mathcal{F}(U_i)
+\to
+\prod\nolimits_{i \in I_1} \mathcal{F}(U_i)
+\to
+\prod\nolimits_{i \in I_2} \mathcal{F}(U_i)
+\to \ldots
+\right)
+$$
+Since $N$ is flat over $A$, we conclude that
+$$
+H^q(X', (f')^*\mathcal{G} \otimes_{\mathcal{O}_{S'}} (g')^*\mathcal{F})
+=
+N \otimes_A H^q(X, \mathcal{F})
+$$
+Since this is the translation into algebra of the statement
+we had to show, the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+\section{Cohomology of projective space}
+\label{section-cohomology-projective-space}
+
+\noindent
+In this section we compute the cohomology of the twists of the
+structure sheaf on $\mathbf{P}^n_S$ over a scheme $S$.
+Recall that $\mathbf{P}^n_S$ was defined as the fibre product
+$
+\mathbf{P}^n_S = S \times_{\Spec(\mathbf{Z})} \mathbf{P}^n_{\mathbf{Z}}
+$
+in Constructions, Definition \ref{constructions-definition-projective-space}.
+It was shown to be equal to
+$$
+\mathbf{P}^n_S = \underline{\text{Proj}}_S(\mathcal{O}_S[T_0, \ldots, T_n])
+$$
+in Constructions, Lemma \ref{constructions-lemma-projective-space-bundle}.
+In particular, projective space is a particular case of a projective bundle.
+If $S = \Spec(R)$ is affine then we have
+$$
+\mathbf{P}^n_S = \mathbf{P}^n_R = \text{Proj}(R[T_0, \ldots, T_n]).
+$$
+All these identifications are compatible and compatible with the constructions
+of the twisted structure sheaves $\mathcal{O}_{\mathbf{P}^n_S}(d)$.
+
+\medskip\noindent
+Before we state the result we need some notation.
+Let $R$ be a ring.
+Recall that $R[T_0, \ldots, T_n]$ is a graded
+$R$-algebra where each $T_i$ is homogeneous of degree $1$.
+Denote $(R[T_0, \ldots, T_n])_d$ the degree $d$ summand.
+It is a finite free $R$-module of rank $\binom{n + d}{d}$
+when $d \geq 0$ and zero otherwise.
+It has a basis consisting of monomials $T_0^{e_0} \ldots T_n^{e_n}$
+with $\sum e_i = d$. We will also use the following notation:
+$R[\frac{1}{T_0}, \ldots, \frac{1}{T_n}]$ denotes the $\mathbf{Z}$-graded
+ring with $\frac{1}{T_i}$ in degree $-1$. In particular the
+$\mathbf{Z}$-graded $R[\frac{1}{T_0}, \ldots, \frac{1}{T_n}]$ module
+$$
+\frac{1}{T_0 \ldots T_n} R[\frac{1}{T_0}, \ldots, \frac{1}{T_n}]
+$$
+which shows up in the statement below is zero in degrees
+$\geq -n$, is free on the generator $\frac{1}{T_0 \ldots T_n}$
+in degree $-n - 1$ and is free of rank
+$\binom{-d - 1}{n} = (-1)^n\binom{n + d}{n}$ in degree $d$ for
+$d \leq -n - 1$.
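As a quick sanity check on these ranks (not part of the formal development), one can count the monomial basis directly: the degree $d$ part has basis the monomials $T_0^{e_0} \ldots T_n^{e_n}$ with all $e_i \leq -1$ and $\sum e_i = d$, and substituting $f_i = -e_i - 1$ identifies this basis with the monomials of total degree $-d - n - 1$ in $n + 1$ variables, of which there are $\binom{-d - 1}{n}$. A short Python sketch (the helper name is ad hoc):

```python
from itertools import product
from math import comb

def rank_degree_d(n, d):
    # Rank of the degree d part of (1/(T_0...T_n)) R[1/T_0, ..., 1/T_n]:
    # count monomials T_0^{e_0} ... T_n^{e_n} with every e_i <= -1 and
    # sum e_i = d.  Each e_i is forced into [d + n, -1] since the other
    # n exponents contribute at most -n to the sum.
    if d >= -n:
        return 0  # a sum of n + 1 exponents that are all <= -1 is <= -n - 1
    return sum(1 for e in product(range(d + n, 0), repeat=n + 1)
               if sum(e) == d)

for n in range(1, 4):
    assert rank_degree_d(n, -n) == 0        # zero in degrees >= -n
    assert rank_degree_d(n, -n - 1) == 1    # free on 1/(T_0 ... T_n)
    for d in range(-n - 1, -n - 5, -1):
        assert rank_degree_d(n, d) == comb(-d - 1, n)
```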
+
+\begin{lemma}
+\label{lemma-cohomology-projective-space-over-ring}
+\begin{reference}
+\cite[III Proposition 2.1.12]{EGA}
+\end{reference}
+Let $R$ be a ring.
+Let $n \geq 0$ be an integer.
+We have
+$$
+H^q(\mathbf{P}^n_R, \mathcal{O}_{\mathbf{P}^n_R}(d)) =
+\left\{
+\begin{matrix}
+(R[T_0, \ldots, T_n])_d & \text{if} & q = 0 \\
+0 & \text{if} & q \not = 0, n \\
+\left(\frac{1}{T_0 \ldots T_n} R[\frac{1}{T_0}, \ldots, \frac{1}{T_n}]\right)_d
+& \text{if} & q = n
+\end{matrix}
+\right.
+$$
+as $R$-modules.
+\end{lemma}
+
+\begin{proof}
+We will use the standard affine open covering
+$$
+\mathcal{U} : \mathbf{P}^n_R = \bigcup\nolimits_{i = 0}^n D_{+}(T_i)
+$$
+to compute the cohomology using the {\v C}ech complex.
+This is permissible by Lemma \ref{lemma-cech-cohomology-quasi-coherent}
+since any intersection of finitely many affine $D_{+}(T_i)$ is also a
+standard affine open (see
+Constructions, Section \ref{constructions-section-proj}).
+In fact, we can use the alternating or ordered {\v C}ech complex according to
+Cohomology, Lemmas \ref{cohomology-lemma-ordered-alternating} and
+\ref{cohomology-lemma-alternating-usual}.
+
+\medskip\noindent
+The ordering we will use on $\{0, \ldots, n\}$ is the usual one.
+Hence the complex we are looking at has terms
+$$
+\check{\mathcal{C}}_{ord}^p(\mathcal{U}, \mathcal{O}_{\mathbf{P}_R}(d))
+=
+\bigoplus\nolimits_{i_0 < \ldots < i_p}
+(R[T_0, \ldots, T_n, \frac{1}{T_{i_0} \ldots T_{i_p}}])_d
+$$
+Moreover, the maps are given by the usual formula
+$$
+d(s)_{i_0 \ldots i_{p + 1}} =
+\sum\nolimits_{j = 0}^{p + 1} (-1)^j s_{i_0 \ldots \hat i_j \ldots i_{p + 1}}
+$$
+see Cohomology, Section \ref{cohomology-section-alternating-cech}.
+Note that each term of this complex has a natural
+$\mathbf{Z}^{n + 1}$-grading. Namely, we get this by declaring a monomial
+$T_0^{e_0} \ldots T_n^{e_n}$ to be homogeneous with weight
+$(e_0, \ldots, e_n) \in \mathbf{Z}^{n + 1}$. It is clear that the differential
+given above respects the grading. In a formula we have
+$$
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{O}_{\mathbf{P}_R}(d))
+=
+\bigoplus\nolimits_{\vec{e} \in \mathbf{Z}^{n + 1}}
+\check{\mathcal{C}}^\bullet(\vec{e})
+$$
+where not all summands on the right hand side occur (see below).
+Hence in order to compute the cohomology
+modules of the complex it suffices to compute the cohomology of the graded
+pieces and take the direct sum at the end.
+
+\medskip\noindent
+Fix $\vec{e} = (e_0, \ldots, e_n) \in \mathbf{Z}^{n + 1}$. In order for this
+weight to occur in the complex above we need to assume
+$e_0 + \ldots + e_n = d$ (if not then it occurs for a different twist of
+the structure sheaf of course). Assuming this, set
+$$
+NEG(\vec{e}) = \{i \in \{0, \ldots, n\} \mid e_i < 0\}.
+$$
+With this notation the weight $\vec{e}$ summand
+$\check{\mathcal{C}}^\bullet(\vec{e})$ of the {\v C}ech complex above has
+the following terms
+$$
+\check{\mathcal{C}}^p(\vec{e})
+=
+\bigoplus\nolimits_{i_0 < \ldots < i_p,
+\ NEG(\vec{e}) \subset \{i_0, \ldots, i_p\}}
+R \cdot T_0^{e_0} \ldots T_n^{e_n}
+$$
+In other words, the terms corresponding to $i_0 < \ldots < i_p$ such
+that $NEG(\vec{e})$ is not contained in $\{i_0, \ldots, i_p\}$ are zero.
+The differential of the complex $\check{\mathcal{C}}^\bullet(\vec{e})$
+is still given by the exact same formula as above.
+
+\medskip\noindent
+Suppose that $NEG(\vec{e}) = \{0, \ldots, n\}$, i.e., that all
+exponents $e_i$ are negative.
+In this case the complex $\check{\mathcal{C}}^\bullet(\vec{e})$ has
+only one term, namely $\check{\mathcal{C}}^n(\vec{e}) =
+R \cdot \frac{1}{T_0^{-e_0} \ldots T_n^{-e_n}}$. Hence in this
+case
+$$
+H^q(\check{\mathcal{C}}^\bullet(\vec{e})) =
+\left\{
+\begin{matrix}
+R \cdot \frac{1}{T_0^{-e_0} \ldots T_n^{-e_n}} & \text{if} & q = n \\
+0 & & \text{else}
+\end{matrix}
+\right.
+$$
+The direct sum of all of these terms clearly gives the value
+$$
+\left(\frac{1}{T_0 \ldots T_n} R[\frac{1}{T_0}, \ldots, \frac{1}{T_n}]\right)_d
+$$
+in degree $n$ as stated in the lemma. Moreover these terms do not contribute
+to cohomology in other degrees (also in accordance with the statement of the
+lemma).
+
+\medskip\noindent
+Assume $NEG(\vec{e}) = \emptyset$. In this case the complex
+$\check{\mathcal{C}}^\bullet(\vec{e})$ has a summand $R$ corresponding
+to all $i_0 < \ldots < i_p$.
+Let us compare the complex $\check{\mathcal{C}}^\bullet(\vec{e})$
+to another complex. Namely, consider the affine open covering
+$$
+\mathcal{V} : \Spec(R) = \bigcup\nolimits_{i \in \{0, \ldots, n\}} V_i
+$$
+where $V_i = \Spec(R)$ for all $i$. Consider the ordered
+{\v C}ech complex
+$$
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{V}, \mathcal{O}_{\Spec(R)})
+$$
+By the same reasoning as above this computes the cohomology of the
+structure sheaf on $\Spec(R)$. Hence we see that
+$H^p(
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{V}, \mathcal{O}_{\Spec(R)})
+) = R$ if $p = 0$ and is $0$ whenever $p > 0$.
+For these facts, see
+Lemma \ref{lemma-cech-cohomology-quasi-coherent-trivial} and its proof.
+Note that also
+$\check{\mathcal{C}}_{ord}^\bullet(\mathcal{V}, \mathcal{O}_{\Spec(R)})$
+has a summand $R$ for every $i_0 < \ldots < i_p$ and has exactly the same
+differential as $\check{\mathcal{C}}^\bullet(\vec{e})$. In other words
+these complexes are isomorphic complexes and hence have the same cohomology.
+We conclude that
+$$
+H^q(\check{\mathcal{C}}^\bullet(\vec{e})) =
+\left\{
+\begin{matrix}
+R \cdot T_0^{e_0} \ldots T_n^{e_n} & \text{if} & q = 0 \\
+0 & & \text{else}
+\end{matrix}
+\right.
+$$
+in the case that $NEG(\vec{e}) = \emptyset$.
+The direct sum of all of these terms clearly gives the value
+$$
+(R[T_0, \ldots, T_n])_d
+$$
+in degree $0$ as stated in the lemma. Moreover these terms do not contribute
+to cohomology in other degrees (also in accordance with the statement of the
+lemma).
+
+\medskip\noindent
+To finish the proof of the lemma we have to show that the complexes
+$\check{\mathcal{C}}^\bullet(\vec{e})$ are acyclic when
+$NEG(\vec{e})$ is neither empty nor equal to $\{0, \ldots, n\}$.
+Pick an index $i_{\text{fix}} \not \in NEG(\vec{e})$ (such an index exists).
+Consider the map
+$$
+h :
+\check{\mathcal{C}}^{p + 1}(\vec{e})
+\to
+\check{\mathcal{C}}^p(\vec{e})
+$$
+given by the rule that for $i_0 < \ldots < i_p$ we have
+$$
+h(s)_{i_0 \ldots i_p} =
+\left\{
+\begin{matrix}
+0 & \text{if} & p \not \in \{0, \ldots, n - 1\} \\
+0 & \text{if} & i_{\text{fix}} \in \{i_0, \ldots, i_p\} \\
+s_{i_{\text{fix}} i_0 \ldots i_p} & \text{if} & i_{\text{fix}} < i_0 \\
+(-1)^a s_{i_0 \ldots i_{a - 1} i_{\text{fix}} i_a \ldots i_p} &
+\text{if} & i_{a - 1} < i_{\text{fix}} < i_a \\
+(-1)^p s_{i_0 \ldots i_p} & \text{if} & i_p < i_{\text{fix}}
+\end{matrix}
+\right.
+$$
+Please compare with the proof of
+Lemma \ref{lemma-cech-cohomology-quasi-coherent-trivial}.
+This makes sense because we have
+$$
+NEG(\vec{e}) \subset \{i_0, \ldots, i_p\}
+\Leftrightarrow
+NEG(\vec{e}) \subset \{i_{\text{fix}}, i_0, \ldots, i_p\}
+$$
+The exact same (combinatorial) computation\footnote{
+For example, suppose that $i_0 < \ldots < i_p$ is such that
+$i_{\text{fix}} \not \in \{i_0, \ldots, i_p\}$ and that
+$i_{a - 1} < i_{\text{fix}} < i_a$ for some $1 \leq a \leq p$. Then we have
+\begin{align*}
+&
+(dh + hd)(s)_{i_0 \ldots i_p} \\
+& =
+\sum\nolimits_{j = 0}^p
+(-1)^j
+h(s)_{i_0 \ldots \hat i_j \ldots i_p}
++
+(-1)^a d(s)_{i_0 \ldots i_{a - 1} i_{\text{fix}} i_a \ldots i_p}\\
+& =
+\sum\nolimits_{j = 0}^{a - 1}
+(-1)^{j + a - 1}
+s_{i_0 \ldots \hat i_j \ldots i_{a - 1} i_{\text{fix}} i_a \ldots i_p} +
+\sum\nolimits_{j = a}^p
+(-1)^{j + a}
+s_{i_0 \ldots i_{a - 1} i_{\text{fix}} i_a \ldots \hat i_j \ldots i_p}
++ \\
+&
+\sum\nolimits_{j = 0}^{a - 1}
+(-1)^{a + j}
+s_{i_0 \ldots \hat i_j \ldots i_{a - 1} i_{\text{fix}} i_a \ldots i_p} +
+(-1)^{2a} s_{i_0 \ldots i_p} +
+\sum\nolimits_{j = a}^p
+(-1)^{a + j + 1}
+s_{i_0 \ldots i_{a - 1} i_{\text{fix}} i_a \ldots \hat i_j \ldots i_p}
+\\
+& =
+s_{i_0 \ldots i_p}
+\end{align*}
+as desired. The other cases are similar.}
+as in the proof of Lemma \ref{lemma-cech-cohomology-quasi-coherent-trivial}
+shows that
+$$
+(hd + dh)(s)_{i_0 \ldots i_p}
+=
+s_{i_0 \ldots i_p}
+$$
+Hence we see that the identity map of the complex
+$\check{\mathcal{C}}^\bullet(\vec{e})$ is homotopic to zero
+which implies that it is acyclic.
+\end{proof}
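For $n = 1$ the complex in the proof above is short enough to check by machine. The following Python sketch (illustrative only; the names are ad hoc) works with the monomial bases of the two-term complex $R[T_0, T_1, \frac{1}{T_0}]_d \oplus R[T_0, T_1, \frac{1}{T_1}]_d \to R[T_0, T_1, \frac{1}{T_0 T_1}]_d$, truncated to bounded exponents. Since the differential matches basis monomials up to sign, $H^0$ and $H^1$ can be computed by pure counting:

```python
def cech_ranks_P1(d, bound=10):
    # Degree d monomials T_0^{e0} T_1^{e1}, truncated to |e_i| <= bound.
    # A = basis of R[T0, T1, 1/T0]_d  (condition e1 >= 0)
    # B = basis of R[T0, T1, 1/T1]_d  (condition e0 >= 0)
    # C = basis of R[T0, T1, 1/(T0 T1)]_d  (no condition)
    C = {(e0, d - e0) for e0 in range(-bound, bound + 1)
         if -bound <= d - e0 <= bound}
    A = {e for e in C if e[1] >= 0}
    B = {e for e in C if e[0] >= 0}
    # The differential sends (f, g) to the difference g - f on D_+(T0 T1);
    # on monomial bases it is an inclusion up to sign, so its rank is |A u B|.
    h0 = len(A & B)             # kernel: monomials regular on all of P^1
    h1 = len(C) - len(A | B)    # cokernel: monomials with e0 < 0 and e1 < 0
    return h0, h1

for d in range(-8, 8):
    assert cech_ranks_P1(d) == (max(d + 1, 0), max(-d - 1, 0))
```

The answer matches the lemma: $h^0$ is the rank $\max(d + 1, 0)$ of $(R[T_0, T_1])_d$ and $h^1$ is the rank $\max(-d - 1, 0)$ of $\left(\frac{1}{T_0 T_1} R[\frac{1}{T_0}, \frac{1}{T_1}]\right)_d$.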
+
+\noindent
+In the following lemma we are going to use the pairing of free
+$R$-modules
+$$
+R[T_0, \ldots, T_n]
+\times
+\frac{1}{T_0 \ldots T_n} R[\frac{1}{T_0}, \ldots, \frac{1}{T_n}]
+\longrightarrow
+R
+$$
+which is defined by the rule
+$$
+(f, g)
+\longmapsto
+\text{coefficient of }
+\frac{1}{T_0 \ldots T_n}
+\text{ in }fg.
+$$
+In other words, the basis element $T_0^{e_0} \ldots T_n^{e_n}$ pairs
+with the basis element $T_0^{d_0} \ldots T_n^{d_n}$ to give $1$ if and only
+if $e_i + d_i = -1$ for all $i$, and pairs to zero in all other cases.
+Using this pairing we get an identification
+$$
+\left(\frac{1}{T_0 \ldots T_n} R[\frac{1}{T_0}, \ldots, \frac{1}{T_n}]\right)_d
+=
+\Hom_R((R[T_0, \ldots, T_n])_{-n - 1 - d}, R)
+$$
+Thus we can reformulate the result of
+Lemma \ref{lemma-cohomology-projective-space-over-ring} as saying that
+\begin{equation}
+\label{equation-identify}
+H^q(\mathbf{P}^n_R, \mathcal{O}_{\mathbf{P}^n_R}(d)) =
+\left\{
+\begin{matrix}
+(R[T_0, \ldots, T_n])_d & \text{if} & q = 0 \\
+0 & \text{if} & q \not = 0, n \\
+\Hom_R((R[T_0, \ldots, T_n])_{-n - 1 - d}, R)
+& \text{if} & q = n
+\end{matrix}
+\right.
+\end{equation}
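The pairing above matches dual bases exactly: $e \mapsto (-1 - e_0, \ldots, -1 - e_n)$ is a bijection from the monomial basis in degree $d$ (all exponents $\leq -1$) onto the monomial basis of $(R[T_0, \ldots, T_n])_{-n - 1 - d}$, which is what makes the identification with the $R$-linear dual work. A brief Python check of this bijection (illustrative helper names):

```python
from itertools import product
from math import comb

def monomials_negative(n, d):
    # Exponent vectors with every e_i <= -1 and sum d; each e_i lies
    # in [d + n, -1] since the other exponents contribute at most -n.
    return {e for e in product(range(d + n, 0), repeat=n + 1)
            if sum(e) == d}

def monomials_polynomial(n, m):
    # Exponent vectors with every e_i >= 0 and sum m.
    return {e for e in product(range(m + 1), repeat=n + 1)
            if sum(e) == m}

# T^e pairs to 1 with T^f exactly when e_i + f_i = -1 for all i, so
# e -> (-1 - e_0, ..., -1 - e_n) must carry one basis onto the other.
for n in range(1, 4):
    for d in range(-n - 1, -n - 5, -1):
        neg = monomials_negative(n, d)
        dual = {tuple(-1 - ei for ei in e) for e in neg}
        assert dual == monomials_polynomial(n, -n - 1 - d)
        assert len(neg) == comb(-d - 1, n)
```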
+
+\begin{lemma}
+\label{lemma-identify-functorially}
+The identifications of Equation (\ref{equation-identify}) are
+compatible with base change w.r.t.\ ring maps $R \to R'$.
+Moreover, for any $f \in R[T_0, \ldots, T_n]$ homogeneous
+of degree $m$, the map given by multiplication by $f$
+$$
+\mathcal{O}_{\mathbf{P}^n_R}(d)
+\longrightarrow
+\mathcal{O}_{\mathbf{P}^n_R}(d + m)
+$$
+induces, via the identifications
+of Equation (\ref{equation-identify}), multiplication by
+$f$ on $H^0$ and the contragredient of multiplication by $f$
+$$
+(R[T_0, \ldots, T_n])_{-n - 1 - (d + m)}
+\longrightarrow
+(R[T_0, \ldots, T_n])_{-n - 1 - d}
+$$
+on $H^n$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $R \to R'$ is a ring map.
+Let $\mathcal{U}$ be the standard affine open covering of $\mathbf{P}^n_R$,
+and let $\mathcal{U}'$ be the standard affine open covering of
+$\mathbf{P}^n_{R'}$. Note that $\mathcal{U}'$ is the pullback of the covering
+$\mathcal{U}$ under the canonical morphism
+$\mathbf{P}^n_{R'} \to \mathbf{P}^n_R$. Hence there
+is a map of {\v C}ech complexes
+$$
+\gamma :
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U},
+\mathcal{O}_{\mathbf{P}_R}(d))
+\longrightarrow
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}',
+\mathcal{O}_{\mathbf{P}_{R'}}(d))
+$$
+which is compatible with the map on cohomology by
+Cohomology, Lemma \ref{cohomology-lemma-functoriality-cech}.
+It is clear from the computations in the proof of
+Lemma \ref{lemma-cohomology-projective-space-over-ring}
+that this map of {\v C}ech complexes is compatible with the identifications
+of the cohomology groups in question. (Namely the basis elements for
+the {\v C}ech complex over $R$ simply map to the corresponding basis elements
+for the {\v C}ech complex over $R'$.) Whence the first statement of the lemma.
+
+\medskip\noindent
+Now fix the ring $R$ and consider two homogeneous polynomials
+$f, g \in R[T_0, \ldots, T_n]$ both of the same degree $m$.
+Since cohomology is an additive functor, it is clear that the
+map induced by multiplication by $f + g$ is the sum
+of the maps induced by multiplication by $f$
+and by $g$. Moreover, since cohomology is a functor,
+a similar result holds for multiplication by a product $fg$ where
+$f, g$ are both homogeneous (but not necessarily of the same degree).
+Hence to verify the second statement of the lemma it suffices to
+prove this when $f = x \in R$ or when $f = T_i$.
+In the case of multiplication by an element $x \in R$ the result
+follows since every cohomology group or complex in sight has the
+structure of an $R$-module or complex of $R$-modules.
+Finally, we consider the case of multiplication by $T_i$
+as an $\mathcal{O}_{\mathbf{P}^n_R}$-linear map
+$$
+\mathcal{O}_{\mathbf{P}^n_R}(d)
+\longrightarrow
+\mathcal{O}_{\mathbf{P}^n_R}(d + 1)
+$$
+The statement on $H^0$ is clear. For the statement on $H^n$
+consider multiplication by $T_i$ as a map on {\v C}ech complexes
+$$
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U},
+\mathcal{O}_{\mathbf{P}_R}(d))
+\longrightarrow
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U},
+\mathcal{O}_{\mathbf{P}_R}(d + 1))
+$$
+We are going to use the notation introduced in the proof of
+Lemma \ref{lemma-cohomology-projective-space-over-ring}.
+We consider the effect of multiplication by $T_i$
+in terms of the decompositions
+$$
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{O}_{\mathbf{P}_R}(d))
+=
+\bigoplus\nolimits_{\vec{e} \in \mathbf{Z}^{n + 1}, \ \sum e_i = d}
+\check{\mathcal{C}}^\bullet(\vec{e})
+$$
+and
+$$
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U},
+\mathcal{O}_{\mathbf{P}_R}(d + 1))
+=
+\bigoplus\nolimits_{\vec{e} \in \mathbf{Z}^{n + 1}, \ \sum e_i = d + 1}
+\check{\mathcal{C}}^\bullet(\vec{e})
+$$
+It is clear that multiplication by $T_i$ maps the subcomplex
+$\check{\mathcal{C}}^\bullet(\vec{e})$ to the subcomplex
+$\check{\mathcal{C}}^\bullet(\vec{e} + \vec{b}_i)$ where
+$\vec{b}_i = (0, \ldots, 0, 1, 0, \ldots, 0)$ is the $i$th basis vector.
+In other words, it maps the summand of $H^n$ corresponding to
+$\vec{e}$ with $e_i < 0$ and $\sum e_i = d$
+to the summand of $H^n$ corresponding to
+$\vec{e} + \vec{b}_i$ (which is zero if $e_i + 1 \geq 0$).
+It is easy to see that this corresponds exactly to the action
+of the contragredient of multiplication by $T_i$ as a map
+$$
+(R[T_0, \ldots, T_n])_{-n - 1 - (d + 1)}
+\longrightarrow
+(R[T_0, \ldots, T_n])_{-n - 1 - d}
+$$
+This proves the lemma.
+\end{proof}
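The contragredient description in the proof can also be tested numerically: under the monomial pairing, multiplication by $T_i$ on the $H^n$ side is adjoint to multiplication by $T_i$ on polynomials, because $\langle T_i \cdot T^e, T^f\rangle$ and $\langle T^e, T_i \cdot T^f\rangle$ impose the same condition $e + f + \vec{b}_i = (-1, \ldots, -1)$. A small Python sketch (hypothetical helpers, not notation from the text) testing this adjointness on $\mathbf{P}^2$:

```python
from itertools import product

def pairing(e, f):
    # The monomial pairing: 1 iff e_i + f_i = -1 for every i, else 0.
    return int(all(ei + fi == -1 for ei, fi in zip(e, f)))

def mult_Ti_Hn(e, i):
    # Multiplication by T_i on an H^n basis monomial: raise the i-th
    # exponent; the class is zero unless all exponents stay negative.
    e2 = list(e)
    e2[i] += 1
    return tuple(e2) if all(x < 0 for x in e2) else None

n, d = 2, -5
Hn = [e for e in product(range(d + n, 0), repeat=n + 1) if sum(e) == d]
P = [f for f in product(range(-n - 2 - d + 1), repeat=n + 1)
     if sum(f) == -n - 2 - d]  # polynomials of degree -n - 1 - (d + 1)
for i in range(n + 1):
    for e in Hn:
        for f in P:
            Te = mult_Ti_Hn(e, i)
            lhs = pairing(Te, f) if Te is not None else 0
            Tf = tuple(fj + (1 if j == i else 0) for j, fj in enumerate(f))
            assert lhs == pairing(e, Tf)  # adjointness of T_i
```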
+
+\noindent
+Before we state the relative version we need some notation.
+Namely, recall that $\mathcal{O}_S[T_0, \ldots, T_n]$ is a graded
+$\mathcal{O}_S$-module where each $T_i$ is homogeneous of degree $1$.
+Denote $(\mathcal{O}_S[T_0, \ldots, T_n])_d$ the degree $d$ summand.
+It is a finite locally free sheaf of rank $\binom{n + d}{d}$ on $S$.
+
+\begin{lemma}
+\label{lemma-cohomology-projective-space-over-base}
+Let $S$ be a scheme.
+Let $n \geq 0$ be an integer.
+Consider the structure morphism
+$$
+f : \mathbf{P}^n_S \longrightarrow S.
+$$
+We have
+$$
+R^qf_*(\mathcal{O}_{\mathbf{P}^n_S}(d)) =
+\left\{
+\begin{matrix}
+(\mathcal{O}_S[T_0, \ldots, T_n])_d & \text{if} & q = 0 \\
+0 & \text{if} & q \not = 0, n \\
+\SheafHom_{\mathcal{O}_S}(
+(\mathcal{O}_S[T_0, \ldots, T_n])_{- n - 1 - d}, \mathcal{O}_S)
+& \text{if} & q = n
+\end{matrix}
+\right.
+$$
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This follows since the identifications in
+(\ref{equation-identify}) are compatible with affine base change
+by Lemma \ref{lemma-identify-functorially}.
+\end{proof}
+
+\noindent
+Next we state the version for projective bundles associated to finite locally
+free sheaves. Let $S$ be a scheme. Let $\mathcal{E}$ be a finite locally
+free $\mathcal{O}_S$-module of constant rank $n + 1$, see
+Modules, Section \ref{modules-section-locally-free}.
+In this case we think of $\text{Sym}(\mathcal{E})$ as a graded
+$\mathcal{O}_S$-module where $\mathcal{E}$ is the graded part of degree $1$.
+And $\text{Sym}^d(\mathcal{E})$ is the degree $d$ summand.
+It is a finite locally free sheaf of rank $\binom{n + d}{d}$ on $S$.
+Recall that our normalization is that
+$$
+\pi :
+\mathbf{P}(\mathcal{E})
+=
+\underline{\text{Proj}}_S(\text{Sym}(\mathcal{E}))
+\longrightarrow
+S
+$$
+and that there are natural maps
+$\text{Sym}^d(\mathcal{E}) \to \pi_*\mathcal{O}_{\mathbf{P}(\mathcal{E})}(d)$.
+
+\begin{lemma}
+\label{lemma-cohomology-projective-bundle}
+Let $S$ be a scheme. Let $n \geq 1$.
+Let $\mathcal{E}$ be a finite locally
+free $\mathcal{O}_S$-module of constant rank $n + 1$.
+Consider the structure morphism
+$$
+\pi : \mathbf{P}(\mathcal{E}) \longrightarrow S.
+$$
+We have
+$$
+R^q\pi_*(\mathcal{O}_{\mathbf{P}(\mathcal{E})}(d)) =
+\left\{
+\begin{matrix}
+\text{Sym}^d(\mathcal{E}) & \text{if} & q = 0 \\
+0 & \text{if} & q \not = 0, n \\
+\SheafHom_{\mathcal{O}_S}(
+\text{Sym}^{- n - 1 - d}(\mathcal{E})
+\otimes_{\mathcal{O}_S}
+\wedge^{n + 1}\mathcal{E},
+\mathcal{O}_S)
+& \text{if} & q = n
+\end{matrix}
+\right.
+$$
+These identifications are compatible with base change and
+with isomorphisms of locally free sheaves.
+\end{lemma}
+
+\begin{proof}
+Consider the canonical map
+$$
+\pi^*\mathcal{E} \longrightarrow \mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)
+$$
+and twist down by $1$ to get
+$$
+\pi^*(\mathcal{E})(-1) \longrightarrow \mathcal{O}_{\mathbf{P}(\mathcal{E})}
+$$
+This is a surjective map from a locally free rank $n + 1$ sheaf onto
+the structure sheaf. Hence the corresponding Koszul complex is
+exact (More on Algebra, Lemma
+\ref{more-algebra-lemma-homotopy-koszul-abstract}).
+In other words there is an exact complex
+$$
+0 \to
+\pi^*(\wedge^{n + 1}\mathcal{E})(-n - 1) \to
+\ldots \to
+\pi^*(\wedge^i\mathcal{E})(-i) \to
+\ldots \to
+\pi^*\mathcal{E}(-1) \to
+\mathcal{O}_{\mathbf{P}(\mathcal{E})} \to 0
+$$
+We will think of the term $\pi^*(\wedge^i\mathcal{E})(-i)$ as being
+in degree $-i$.
+We are going to compute the higher direct images
+of this acyclic complex using the first spectral sequence of
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}.
+Namely, we see that there is a spectral sequence with terms
+$$
+E_1^{p, q} = R^q\pi_*\left(\pi^*(\wedge^{-p}\mathcal{E})(p)\right)
+$$
+converging to zero! By the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula})
+we have
+$$
+E_1^{p, q} = \wedge^{-p} \mathcal{E} \otimes_{\mathcal{O}_S}
+R^q\pi_*\left(\mathcal{O}_{\mathbf{P}(\mathcal{E})}(p)\right).
+$$
+Note that locally on $S$ the sheaf $\mathcal{E}$ is trivial,
+i.e., isomorphic to $\mathcal{O}_S^{\oplus n + 1}$, hence locally on
+$S$ the morphism $\mathbf{P}(\mathcal{E}) \to S$ can be identified
+with $\mathbf{P}^n_S \to S$. Hence
+locally on $S$ we can use the result of Lemmas
+\ref{lemma-cohomology-projective-space-over-ring},
+\ref{lemma-identify-functorially}, or
+\ref{lemma-cohomology-projective-space-over-base}.
+It follows that $E_1^{p, q} = 0$ unless $(p, q)$ is $(0, 0)$
+or $(-n - 1, n)$. The nonzero terms are
+\begin{align*}
+E_1^{0, 0} & = \pi_*\mathcal{O}_{\mathbf{P}(\mathcal{E})} = \mathcal{O}_S \\
+E_1^{-n - 1, n} & =
+R^n\pi_*\left(\pi^*(\wedge^{n + 1}\mathcal{E})(-n - 1)\right) =
+\wedge^{n + 1}\mathcal{E} \otimes_{\mathcal{O}_S}
+R^n\pi_*\left(\mathcal{O}_{\mathbf{P}(\mathcal{E})}(-n - 1)\right)
+\end{align*}
+Hence there can only be one nonzero
+differential in the spectral sequence, namely the map
+$d_{n + 1}^{-n - 1, n} : E_{n + 1}^{-n - 1, n} \to E_{n + 1}^{0, 0}$
+which has to be an isomorphism (because the spectral sequence converges
+to the $0$ sheaf). Thus $E_1^{p, q} = E_{n + 1}^{p, q}$ and
+we obtain a canonical isomorphism
+$$
+\wedge^{n + 1}\mathcal{E} \otimes_{\mathcal{O}_S}
+R^n\pi_*\left(\mathcal{O}_{\mathbf{P}(\mathcal{E})}(-n - 1)\right) =
+R^n\pi_*\left(\pi^*(\wedge^{n + 1}\mathcal{E})(-n - 1)\right)
+\xrightarrow{d_{n + 1}^{-n - 1, n}}
+\mathcal{O}_S
+$$
+Since $\wedge^{n + 1}\mathcal{E}$ is an invertible
+sheaf, this implies that
+$R^n\pi_*\mathcal{O}_{\mathbf{P}(\mathcal{E})}(-n - 1)$ is invertible
+as well and canonically isomorphic to the inverse of
+$\wedge^{n + 1}\mathcal{E}$. In other words we have proved the case
+$d = - n - 1$ of the lemma.
+
+\medskip\noindent
+Working locally on $S$ we see immediately from the computation of
+cohomology in Lemmas \ref{lemma-cohomology-projective-space-over-ring},
+\ref{lemma-identify-functorially}, or
+\ref{lemma-cohomology-projective-space-over-base} the statements on
+vanishing of the lemma. Moreover the result on $R^0\pi_*$ is clear
+as well, since there are canonical maps
+$\text{Sym}^d(\mathcal{E}) \to \pi_* \mathcal{O}_{\mathbf{P}(\mathcal{E})}(d)$
+for all $d$. It remains to show that the description of
+$R^n\pi_*\mathcal{O}_{\mathbf{P}(\mathcal{E})}(d)$ is correct
+for $d < -n - 1$. In order to do this we consider the map
+$$
+\pi^*(\text{Sym}^{-d - n - 1}(\mathcal{E}))
+\otimes_{\mathcal{O}_{\mathbf{P}(\mathcal{E})}}
+\mathcal{O}_{\mathbf{P}(\mathcal{E})}(d)
+\longrightarrow
+\mathcal{O}_{\mathbf{P}(\mathcal{E})}(-n - 1)
+$$
+Applying $R^n\pi_*$ and the projection formula (see above) we get a map
+$$
+\text{Sym}^{-d - n - 1}(\mathcal{E})
+\otimes_{\mathcal{O}_S}
+R^n\pi_*(\mathcal{O}_{\mathbf{P}(\mathcal{E})}(d))
+\longrightarrow
+R^n\pi_*\mathcal{O}_{\mathbf{P}(\mathcal{E})}(-n - 1) =
+(\wedge^{n + 1}\mathcal{E})^{\otimes -1}
+$$
+(the last equality we have shown above).
+Again by the local calculations of Lemmas
+\ref{lemma-cohomology-projective-space-over-ring},
+\ref{lemma-identify-functorially}, or
+\ref{lemma-cohomology-projective-space-over-base}
+it follows that this map induces a perfect pairing between
+$R^n\pi_*(\mathcal{O}_{\mathbf{P}(\mathcal{E})}(d))$ and
+$\text{Sym}^{-d - n - 1}(\mathcal{E}) \otimes \wedge^{n + 1}(\mathcal{E})$
+as desired.
+\end{proof}
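As a consistency check on the Koszul resolution used in the proof (for trivial $\mathcal{E}$, so $\mathbf{P}(\mathcal{E}) = \mathbf{P}^n_S$): exactness forces the alternating sum of the Euler characteristics of its terms, twisted by any $d$, to vanish. With $\chi(\mathcal{O}_{\mathbf{P}^n}(d)) = \binom{n + d}{n}$ read as the polynomial $(d + 1) \cdots (d + n)/n!$, this becomes a finite difference identity that a few lines of Python confirm (illustrative only):

```python
from fractions import Fraction
from math import comb, factorial

def chi(n, d):
    # Euler characteristic of O(d) on P^n over a field: the polynomial
    # (d + 1)(d + 2)...(d + n)/n!, equal to h^0 for d >= 0 and to
    # (-1)^n h^n for d <= -n - 1, with all other h^q zero.
    num = 1
    for j in range(1, n + 1):
        num *= d + j
    return Fraction(num, factorial(n))

# Alternating sum over the Koszul terms O(d - i)^{binom(n+1, i)} vanishes,
# consistent with the exactness of the Koszul complex.
for n in range(1, 5):
    for d in range(-6, 7):
        total = sum((-1) ** i * comb(n + 1, i) * chi(n, d - i)
                    for i in range(n + 2))
        assert total == 0
```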
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Coherent sheaves on locally Noetherian schemes}
+\label{section-coherent-sheaves}
+
+\noindent
+We have defined the notion of a coherent module on any ringed space in
+Modules, Section \ref{modules-section-coherent}.
+Although it is possible to consider coherent sheaves on non-Noetherian
+schemes, we will always assume the base scheme is locally Noetherian when
+we consider coherent sheaves.
+sheaves on locally Noetherian schemes.
+
+\begin{lemma}
+\label{lemma-coherent-Noetherian}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is coherent,
+\item $\mathcal{F}$ is a quasi-coherent, finite type $\mathcal{O}_X$-module,
+\item $\mathcal{F}$ is a finitely presented $\mathcal{O}_X$-module,
+\item for any affine open $\Spec(A) = U \subset X$ we have
+$\mathcal{F}|_U = \widetilde M$ with $M$ a finite $A$-module, and
+\item there exists an affine open covering $X = \bigcup U_i$,
+$U_i = \Spec(A_i)$ such that each
+$\mathcal{F}|_{U_i} = \widetilde M_i$ with $M_i$ a finite $A_i$-module.
+\end{enumerate}
+In particular $\mathcal{O}_X$ is coherent, any invertible
+$\mathcal{O}_X$-module is coherent, and more generally any
+finite locally free $\mathcal{O}_X$-module is coherent.
+\end{lemma}
+
+\begin{proof}
+The implications (1) $\Rightarrow$ (2) and (1) $\Rightarrow$ (3) hold
+in general, see
+Modules, Lemma \ref{modules-lemma-coherent-finite-presentation}.
+If $\mathcal{F}$ is finitely presented then $\mathcal{F}$ is
+quasi-coherent, see
+Modules, Lemma \ref{modules-lemma-finite-presentation-quasi-coherent}.
+Hence also (3) $\Rightarrow$ (2).
+
+\medskip\noindent
+Assume $\mathcal{F}$ is a quasi-coherent, finite type $\mathcal{O}_X$-module.
+By
+Properties, Lemma \ref{properties-lemma-finite-type-module}
+we see that on any affine open
+$\Spec(A) = U \subset X$ we have $\mathcal{F}|_U = \widetilde M$
+with $M$ a finite $A$-module. Since $A$ is Noetherian we see that
+$M$ has a finite resolution
+$$
+A^{\oplus m} \to A^{\oplus n} \to M \to 0.
+$$
+Hence $\mathcal{F}$ is of finite presentation by
+Properties, Lemma \ref{properties-lemma-finite-presentation-module}.
+In other words (2) $\Rightarrow$ (3).
+
+\medskip\noindent
+By Modules, Lemma \ref{modules-lemma-coherent-structure-sheaf} it suffices
+to show that $\mathcal{O}_X$ is coherent in order to show that (3)
+implies (1). Thus we have to show: given any open $U \subset X$ and
+any finite collection of sections $f_i \in \mathcal{O}_X(U)$,
+$i = 1, \ldots, n$, the kernel of the map
+$\bigoplus_{i = 1, \ldots, n} \mathcal{O}_U \to \mathcal{O}_U$
+is of finite type. Since being of finite type is a local property
+it suffices to check this in a neighbourhood of any $x \in U$.
+Thus we may assume $U = \Spec(A)$ is affine. In this case
+$f_1, \ldots, f_n$ are elements of $A$. Since $A$ is
+Noetherian, see
+Properties, Lemma \ref{properties-lemma-locally-Noetherian}
+the kernel $K$ of the map $\bigoplus_{i = 1, \ldots, n} A \to A$
+is a finite $A$-module. See for example
+Algebra, Lemma \ref{algebra-lemma-Noetherian-basic}.
+As the functor\ $\widetilde{ }$\ is exact (see
+Schemes, Lemma \ref{schemes-lemma-spec-sheaves})
+we get an exact sequence
+$$
+\widetilde K \to
+\bigoplus\nolimits_{i = 1, \ldots, n} \mathcal{O}_U \to
+\mathcal{O}_U
+$$
+and by
+Properties, Lemma \ref{properties-lemma-finite-type-module}
+again we see that $\widetilde K$ is of finite type. We conclude
+that (1), (2) and (3) are all equivalent.
+
+\medskip\noindent
+It follows from
+Properties, Lemma \ref{properties-lemma-finite-type-module}
+that (2) implies (4). It is trivial that (4) implies (5).
+The discussion in
+Schemes, Section \ref{schemes-section-quasi-coherent}
+shows that (5) implies
+that $\mathcal{F}$ is quasi-coherent and it is clear that (5)
+implies that $\mathcal{F}$ is of finite type. Hence (5) implies
+(2) and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-coherent-abelian-Noetherian}
+Let $X$ be a locally Noetherian scheme.
+The category of coherent $\mathcal{O}_X$-modules is abelian.
+More precisely, the kernel and cokernel of a map of coherent
+$\mathcal{O}_X$-modules are coherent. Any extension
+of coherent sheaves is coherent.
+\end{lemma}
+
+\begin{proof}
+This is a restatement of
+Modules, Lemma \ref{modules-lemma-coherent-abelian}
+in a particular case.
+\end{proof}
+
+\noindent
+The following lemma does not always hold for the category of coherent
+$\mathcal{O}_X$-modules on a general ringed space $X$.
+
+\begin{lemma}
+\label{lemma-coherent-Noetherian-quasi-coherent-sub-quotient}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Any quasi-coherent submodule of $\mathcal{F}$ is coherent.
+Any quasi-coherent quotient module of $\mathcal{F}$ is coherent.
+\end{lemma}
+
+\begin{proof}
+We may assume that $X$ is affine, say $X = \Spec(A)$.
+Properties, Lemma \ref{properties-lemma-locally-Noetherian}
+implies that $A$ is Noetherian. Lemma \ref{lemma-coherent-Noetherian}
+turns this into algebra. The algebraic counterpart of
+the lemma is that a quotient or a submodule of a finite $A$-module
+is a finite $A$-module, see for example
+Algebra, Lemma \ref{algebra-lemma-Noetherian-basic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-hom-coherent}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$, $\mathcal{G}$ be coherent $\mathcal{O}_X$-modules.
+The $\mathcal{O}_X$-modules $\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G}$
+and $\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ are
+coherent.
+\end{lemma}
+
+\begin{proof}
+It is shown in
+Modules, Lemma \ref{modules-lemma-internal-hom-locally-kernel-direct-sum} that
+$\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ is coherent.
+The result for tensor products is
+Modules, Lemma \ref{modules-lemma-tensor-product-permanence}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-isomorphism}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$, $\mathcal{G}$ be coherent $\mathcal{O}_X$-modules.
+Let $\varphi : \mathcal{G} \to \mathcal{F}$ be a homomorphism
+of $\mathcal{O}_X$-modules. Let $x \in X$.
+\begin{enumerate}
+\item If $\mathcal{F}_x = 0$ then there exists an open neighbourhood
+$U \subset X$ of $x$ such that $\mathcal{F}|_U = 0$.
+\item If $\varphi_x : \mathcal{G}_x \to \mathcal{F}_x$ is injective,
+then there exists an open neighbourhood $U \subset X$ of $x$ such that
+$\varphi|_U$ is injective.
+\item If $\varphi_x : \mathcal{G}_x \to \mathcal{F}_x$ is surjective,
+then there exists an open neighbourhood $U \subset X$ of $x$ such that
+$\varphi|_U$ is surjective.
+\item If $\varphi_x : \mathcal{G}_x \to \mathcal{F}_x$ is bijective,
+then there exists an open neighbourhood $U \subset X$ of $x$ such that
+$\varphi|_U$ is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+See Modules, Lemmas
+\ref{modules-lemma-finite-type-stalk-zero},
+\ref{modules-lemma-finite-type-to-coherent-injective-on-stalk}, and
+\ref{modules-lemma-finite-type-surjective-on-stalk}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-stalks-local-map}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$, $\mathcal{G}$ be coherent $\mathcal{O}_X$-modules.
+Let $x \in X$.
+Suppose $\psi : \mathcal{G}_x \to \mathcal{F}_x$ is a map of
+$\mathcal{O}_{X, x}$-modules.
+Then there exists an open neighbourhood $U \subset X$ of $x$ and a map
+$\varphi : \mathcal{G}|_U \to \mathcal{F}|_U$ such that
+$\varphi_x = \psi$.
+\end{lemma}
+
+\begin{proof}
+In view of Lemma \ref{lemma-coherent-Noetherian}
+this is a reformulation of
+Modules, Lemma \ref{modules-lemma-stalk-internal-hom}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-coherent-support-closed}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{F}$ be a coherent
+$\mathcal{O}_X$-module. Then $\text{Supp}(\mathcal{F})$ is closed, and
+$\mathcal{F}$ comes from a coherent sheaf on the scheme theoretic support
+of $\mathcal{F}$, see
+Morphisms, Definition \ref{morphisms-definition-scheme-theoretic-support}.
+\end{lemma}
+
+\begin{proof}
+Let $i : Z \to X$ be the scheme theoretic support of $\mathcal{F}$ and
+let $\mathcal{G}$ be the finite type quasi-coherent sheaf on $Z$
+such that $i_*\mathcal{G} \cong \mathcal{F}$.
+Since $Z = \text{Supp}(\mathcal{F})$ we see that the support is closed.
+The scheme $Z$ is locally Noetherian by
+Morphisms, Lemmas \ref{morphisms-lemma-immersion-locally-finite-type}
+and \ref{morphisms-lemma-finite-type-noetherian}.
+Finally, $\mathcal{G}$ is a coherent $\mathcal{O}_Z$-module by
+Lemma \ref{lemma-coherent-Noetherian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-i-star-equivalence}
+Let $i : Z \to X$ be a closed immersion of locally Noetherian schemes.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the quasi-coherent sheaf of ideals
+cutting out $Z$. The functor $i_*$ induces an equivalence between the
+category of coherent $\mathcal{O}_X$-modules annihilated by $\mathcal{I}$
+and the category of coherent $\mathcal{O}_Z$-modules.
+\end{lemma}
+
+\begin{proof}
+The functor is fully faithful by
+Morphisms, Lemma \ref{morphisms-lemma-i-star-equivalence}.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module
+annihilated by $\mathcal{I}$. By
+Morphisms, Lemma \ref{morphisms-lemma-i-star-equivalence}
+we can write $\mathcal{F} = i_*\mathcal{G}$ for some quasi-coherent
+sheaf $\mathcal{G}$ on $Z$. By
+Modules, Lemma \ref{modules-lemma-i-star-reflects-finite-type}
+we see that $\mathcal{G}$ is of finite type.
+Hence $\mathcal{G}$ is coherent by
+Lemma \ref{lemma-coherent-Noetherian}.
+Thus the functor is also essentially surjective as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-pushforward-coherent}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Assume $f$ is finite and $Y$ locally Noetherian.
+Then $R^pf_*\mathcal{F} = 0$ for $p > 0$ and
+$f_*\mathcal{F}$ is coherent if $\mathcal{F}$ is coherent.
+\end{lemma}
+
+\begin{proof}
+The higher direct images vanish by
+Lemma \ref{lemma-relative-affine-vanishing} and because
+a finite morphism is affine (by definition).
+Note that the assumptions imply that also $X$ is locally Noetherian
+(see Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian})
+and hence the statement makes sense.
+Let $\Spec(A) = V \subset Y$ be an affine open subset.
+By Morphisms, Definition \ref{morphisms-definition-integral}
+we see that $f^{-1}(V) = \Spec(B)$ with $A \to B$ finite.
+Lemma \ref{lemma-coherent-Noetherian}
+turns the statement of the lemma into the following algebra
+fact: if $M$ is a finite $B$-module, then $M$ is also finite
+viewed as an $A$-module, see
+Algebra, Lemma \ref{algebra-lemma-finite-module-over-finite-extension}.
+\end{proof}
+
+\noindent
+In the situation of the lemma the higher direct images are
+coherent as well, since they vanish.
+We will show that this is always the case for a proper morphism
+between locally Noetherian schemes
+(Proposition \ref{proposition-proper-pushforward-coherent}).
+
+\begin{lemma}
+\label{lemma-coherent-support-dimension-0}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{F}$
+be a coherent sheaf with $\dim(\text{Supp}(\mathcal{F})) \leq 0$.
+Then $\mathcal{F}$ is generated by global sections and
+$H^i(X, \mathcal{F}) = 0$ for $i > 0$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-coherent-support-closed} we see that
+$\mathcal{F} = i_*\mathcal{G}$ where $i : Z \to X$ is the inclusion
+of the scheme theoretic support of $\mathcal{F}$ and where $\mathcal{G}$
+is a coherent $\mathcal{O}_Z$-module. Since the dimension of $Z$ is
+$0$, we see $Z$ is a disjoint union of affines (Properties, Lemma
+\ref{properties-lemma-locally-Noetherian-dimension-0}).
+Hence $\mathcal{G}$ is globally generated and the higher
+cohomology groups of $\mathcal{G}$ are zero
+(Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}).
+Hence $\mathcal{F} = i_*\mathcal{G}$ is globally generated.
+Since the cohomologies of $\mathcal{F}$ and $\mathcal{G}$ agree
+(Lemma \ref{lemma-relative-affine-cohomology} applies as a
+closed immersion is affine)
+we conclude that the higher cohomology groups of $\mathcal{F}$ are zero.
+\end{proof}
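+
+\noindent
+The simplest instance of the lemma is a skyscraper sheaf: if $x \in X$
+is a closed point and $i : \Spec(\kappa(x)) \to X$ is the canonical
+morphism, then $\mathcal{F} = i_*\kappa(x)$ is a coherent sheaf whose
+support $\{x\}$ has dimension $0$. It is visibly generated by its
+global sections, and its higher cohomology vanishes because the
+pushforward of a sheaf from a one point space is flasque.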
+
+\begin{lemma}
+\label{lemma-pushforward-coherent-on-open}
+Let $X$ be a scheme. Let $j : U \to X$ be the inclusion of an open.
+Let $T \subset X$ be a closed subset contained in $U$.
+If $\mathcal{F}$ is a coherent $\mathcal{O}_U$-module
+with $\text{Supp}(\mathcal{F}) \subset T$, then
+$j_*\mathcal{F}$ is a coherent $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+Consider the open covering $X = U \cup (X \setminus T)$.
+Then $j_*\mathcal{F}|_U = \mathcal{F}$ is coherent and
+$j_*\mathcal{F}|_{X \setminus T} = 0$ is also coherent.
+Hence $j_*\mathcal{F}$ is coherent.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Coherent sheaves on Noetherian schemes}
+\label{section-coherent-quasi-compact}
+
+\noindent
+In this section we mention some properties of coherent sheaves on
+Noetherian schemes.
+
+\begin{lemma}
+\label{lemma-acc-coherent}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+The ascending chain condition holds for quasi-coherent submodules
+of $\mathcal{F}$. In other words, given any sequence
+$$
+\mathcal{F}_1 \subset \mathcal{F}_2 \subset \ldots \subset \mathcal{F}
+$$
+of quasi-coherent submodules, we have
+$\mathcal{F}_n = \mathcal{F}_{n + 1} = \ldots $ for some $n \geq 0$.
+\end{lemma}
+
+\begin{proof}
+Choose a finite affine open covering.
+On each member of the covering we get stabilization by
+Algebra, Lemma \ref{algebra-lemma-Noetherian-basic}.
+Hence the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-power-ideal-kills-sheaf}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be a quasi-coherent
+sheaf of ideals corresponding to a closed subscheme $Z \subset X$.
+Then there is some $n \geq 0$ such that $\mathcal{I}^n\mathcal{F} = 0$
+if and only if $\text{Supp}(\mathcal{F}) \subset Z$ (set theoretically).
+\end{lemma}
+
+\begin{proof}
+This follows immediately from
+Algebra, Lemma \ref{algebra-lemma-Noetherian-power-ideal-kills-module}
+because $X$ has a finite covering by spectra of Noetherian rings.
+\end{proof}
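+
+\noindent
+Concretely, in the affine case the lemma says: for a Noetherian ring
+$A$, a finite $A$-module $M$ and an ideal $I \subset A$ we have
+$I^nM = 0$ for some $n \geq 0$ if and only if
+$V(\text{Ann}(M)) \subset V(I)$. For instance, if $A = k[x]$,
+$M = A/(x^2)$ and $I = (x)$, then $\text{Supp}(\widetilde M) = V(x)$
+and indeed $I^2M = 0$, although $I M \not = 0$.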
+
+\begin{lemma}[Artin-Rees]
+\label{lemma-Artin-Rees}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Let $\mathcal{G} \subset \mathcal{F}$ be a quasi-coherent subsheaf.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be a quasi-coherent sheaf of
+ideals.
+Then there exists a $c \geq 0$ such that for all $n \geq c$ we
+have
+$$
+\mathcal{I}^{n - c}(\mathcal{I}^c\mathcal{F} \cap \mathcal{G})
+=
+\mathcal{I}^n\mathcal{F} \cap \mathcal{G}
+$$
+\end{lemma}
+
+\begin{proof}
+This follows immediately from
+Algebra, Lemma \ref{algebra-lemma-Artin-Rees}
+because $X$ has a finite covering by spectra of Noetherian rings.
+\end{proof}
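+
+\noindent
+A simple example illustrating the lemma: take $X = \Spec(k[x])$,
+$\mathcal{F} = \mathcal{O}_X$, and let $\mathcal{G}$ and $\mathcal{I}$
+correspond to the ideals $(x^k)$ and $(x)$ of $k[x]$. Then
+$\mathcal{I}^n\mathcal{F} \cap \mathcal{G}$ corresponds to
+$(x^{\max(n, k)})$ and with $c = k$ we get, for all $n \geq c$,
+$$
+\mathcal{I}^{n - c}(\mathcal{I}^c\mathcal{F} \cap \mathcal{G})
+= (x^{n - k}) \cdot (x^k) = (x^n)
+= \mathcal{I}^n\mathcal{F} \cap \mathcal{G}.
+$$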
+
+\begin{lemma}
+\label{lemma-directed-colimit-coherent}
+Let $X$ be a Noetherian scheme. Every quasi-coherent $\mathcal{O}_X$-module
+is the filtered colimit of its coherent submodules.
+\end{lemma}
+
+\begin{proof}
+This is a reformulation of Properties, Lemma
+\ref{properties-lemma-quasi-coherent-colimit-finite-type}
+in view of the fact that a finite type quasi-coherent
+$\mathcal{O}_X$-module is coherent by
+Lemma \ref{lemma-coherent-Noetherian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homs-over-open}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\mathcal{G}$ be a coherent $\mathcal{O}_X$-module.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be a quasi-coherent sheaf of
+ideals. Denote $Z \subset X$ the corresponding closed subscheme and
+set $U = X \setminus Z$.
+There is a canonical isomorphism
+$$
+\colim_n \Hom_{\mathcal{O}_X}(\mathcal{I}^n\mathcal{G}, \mathcal{F})
+\longrightarrow
+\Hom_{\mathcal{O}_U}(\mathcal{G}|_U, \mathcal{F}|_U).
+$$
+In particular we have an isomorphism
+$$
+\colim_n \Hom_{\mathcal{O}_X}(
+\mathcal{I}^n, \mathcal{F})
+\longrightarrow
+\Gamma(U, \mathcal{F}).
+$$
+\end{lemma}
+
+\begin{proof}
+We first prove the second map is an isomorphism. It is injective by
+Properties, Lemma \ref{properties-lemma-sections-over-quasi-compact-open}.
+Since $\mathcal{F}$ is the union of its coherent submodules, see
+Properties, Lemma \ref{properties-lemma-quasi-coherent-colimit-finite-type}
+(and Lemma \ref{lemma-coherent-Noetherian}),
+we may and do assume that $\mathcal{F}$ is coherent to prove surjectivity.
+Let $\mathcal{F}_n$ denote the quasi-coherent subsheaf of $\mathcal{F}$
+consisting of sections annihilated by $\mathcal{I}^n$,
+see Properties, Lemma \ref{properties-lemma-sections-over-quasi-compact-open}.
+Since $\mathcal{F}_1 \subset \mathcal{F}_2 \subset \ldots$ we see that
+$\mathcal{F}_n = \mathcal{F}_{n + 1} = \ldots $ for some $n \geq 0$
+by Lemma \ref{lemma-acc-coherent}. Set $\mathcal{H} = \mathcal{F}_n$
+for this $n$. By Artin-Rees (Lemma \ref{lemma-Artin-Rees})
+there exists a $c \geq 0$ such that
+$\mathcal{I}^m\mathcal{F} \cap \mathcal{H}
+\subset \mathcal{I}^{m - c}\mathcal{H}$ for all $m \geq c$.
+Picking $m = n + c$ we get
+$\mathcal{I}^m\mathcal{F} \cap \mathcal{H} \subset \mathcal{I}^n\mathcal{H}
+= 0$. Thus if we set $\mathcal{F}' = \mathcal{I}^m\mathcal{F}$ then we
+see that $\mathcal{F}' \cap \mathcal{F}_n = 0$ and
+$\mathcal{F}'|_U = \mathcal{F}|_U$. Note in particular that the subsheaf
+$(\mathcal{F}')_N$ of sections annihilated by $\mathcal{I}^N$ is zero
+for all $N \geq 0$. Hence by
+Properties, Lemma \ref{properties-lemma-sections-over-quasi-compact-open}
+we deduce that
+the top horizontal arrow in the following commutative
+diagram is a bijection:
+$$
+\xymatrix{
+\colim_n \Hom_{\mathcal{O}_X}(
+\mathcal{I}^n, \mathcal{F}')
+\ar[r] \ar[d] &
+\Gamma(U, \mathcal{F}') \ar[d] \\
+\colim_n \Hom_{\mathcal{O}_X}(
+\mathcal{I}^n, \mathcal{F})
+\ar[r] &
+\Gamma(U, \mathcal{F})
+}
+$$
+Since also the right vertical arrow is a bijection we conclude that
+the bottom horizontal arrow is surjective as desired.
+
+\medskip\noindent
+Next, we prove the first arrow of the lemma is a bijection.
+By Lemma \ref{lemma-coherent-Noetherian} the sheaf $\mathcal{G}$
+is of finite presentation and hence the sheaf
+$\mathcal{H} = \SheafHom_{\mathcal{O}_X}(\mathcal{G}, \mathcal{F})$
+is quasi-coherent, see
+Schemes, Section \ref{schemes-section-quasi-coherent}.
+By definition we have
+$$
+\mathcal{H}(U)
+=
+\Hom_{\mathcal{O}_U}(\mathcal{G}|_U, \mathcal{F}|_U)
+$$
+Pick a $\psi$ in the right hand side of the first arrow of the
+lemma, i.e., $\psi \in \mathcal{H}(U)$. The result just proved applies
+to $\mathcal{H}$ and hence there exists an $n \geq 0$ and an
+$\varphi : \mathcal{I}^n \to \mathcal{H}$ which recovers
+$\psi$ on restriction to $U$. By
+Modules, Lemma \ref{modules-lemma-internal-hom}
+$\varphi$ corresponds to a map
+$$
+\varphi :
+\mathcal{I}^n \otimes_{\mathcal{O}_X} \mathcal{G}
+\longrightarrow
+\mathcal{F}.
+$$
+This is almost what we want except that the source of the arrow
+is the tensor product $\mathcal{I}^n \otimes_{\mathcal{O}_X} \mathcal{G}$
+and not the product $\mathcal{I}^n\mathcal{G}$.
+We will show that, at the cost of increasing $n$,
+the difference is irrelevant. Consider the short exact sequence
+$$
+0 \to \mathcal{K} \to
+\mathcal{I}^n \otimes_{\mathcal{O}_X} \mathcal{G} \to
+\mathcal{I}^n\mathcal{G} \to 0
+$$
+where $\mathcal{K}$ is defined as the kernel. Note that
+$\mathcal{I}^n\mathcal{K} = 0$ (proof omitted). By Artin-Rees
+again we see that
+$$
+\mathcal{K}
+\cap
+\mathcal{I}^m(\mathcal{I}^n \otimes_{\mathcal{O}_X} \mathcal{G})
+=
+0
+$$
+for some $m$ large enough. In other words we see that
+$$
+\mathcal{I}^m(\mathcal{I}^n \otimes_{\mathcal{O}_X} \mathcal{G})
+\longrightarrow
+\mathcal{I}^{n + m}\mathcal{G}
+$$
+is an isomorphism. Let $\varphi'$ be the restriction of
+$\varphi$ to this submodule thought of as a map
+$\mathcal{I}^{m + n}\mathcal{G} \to \mathcal{F}$.
+Then $\varphi'$ gives an element
+of the left hand side of the first arrow of the lemma which
+maps to $\psi$ via the arrow. In other words we have proved surjectivity
+of the arrow. We omit the proof of injectivity.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extend-coherent}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{F}$, $\mathcal{G}$
+be coherent $\mathcal{O}_X$-modules. Let $U \subset X$ be open and
+let $\varphi : \mathcal{F}|_U \to \mathcal{G}|_U$ be an
+$\mathcal{O}_U$-module map. Then there exists a coherent
+submodule $\mathcal{F}' \subset \mathcal{F}$ agreeing with
+$\mathcal{F}$ over $U$ such that $\varphi$ extends to
+$\varphi' : \mathcal{F}' \to \mathcal{G}$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the coherent sheaf of ideals
+cutting out the reduced induced scheme structure on $X \setminus U$.
+If $X$ is Noetherian, then Lemma \ref{lemma-homs-over-open} tells us that
+we can take $\mathcal{F}' = \mathcal{I}^n\mathcal{F}$
+for some $n$. The general case will follow from this using Zorn's lemma.
+
+\medskip\noindent
+Consider the set of triples $(U', \mathcal{F}', \varphi')$ where
+$U \subset U' \subset X$ is open, $\mathcal{F}' \subset \mathcal{F}|_{U'}$
+is a coherent subsheaf agreeing with $\mathcal{F}$ over $U$, and
+$\varphi' : \mathcal{F}' \to \mathcal{G}|_{U'}$ restricts to
+$\varphi$ over $U$. We say
+$(U'', \mathcal{F}'', \varphi'') \geq (U', \mathcal{F}', \varphi')$
+if and only if $U'' \supset U'$, $\mathcal{F}''|_{U'} = \mathcal{F}'$,
+and $\varphi''|_{U'} = \varphi'$.
+It is clear that if we have a totally ordered collection
+of triples $(U_i, \mathcal{F}_i, \varphi_i)$, then we
+can glue the $\mathcal{F}_i$ to a subsheaf $\mathcal{F}'$ of $\mathcal{F}$
+over $U' = \bigcup U_i$ and extend $\varphi$ to a map
+$\varphi' : \mathcal{F}' \to \mathcal{G}|_{U'}$.
+Hence any totally ordered subset of triples has an upper bound.
+Finally, suppose that $(U', \mathcal{F}', \varphi')$
+is any triple but $U' \not = X$. Then we can choose an
+affine open $W \subset X$ which is not contained in $U'$.
+By the result of the first paragraph we can extend
+the subsheaf $\mathcal{F}'|_{W \cap U'}$ and the restriction
+$\varphi'|_{W \cap U'}$ to some subsheaf $\mathcal{F}'' \subset \mathcal{F}|_W$
+and map $\varphi'' : \mathcal{F}'' \to \mathcal{G}|_W$.
+Of course the agreement between $(\mathcal{F}', \varphi')$ and
+$(\mathcal{F}'', \varphi'')$ over $W \cap U'$ exactly means that
+we can extend this to a triple $(U' \cup W, \mathcal{F}''', \varphi''')$.
+Hence any maximal triple $(U', \mathcal{F}', \varphi')$
+(which exists by Zorn's lemma) must have $U' = X$ and the
+proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Depth}
+\label{section-depth}
+
+\noindent
+In this section we talk a little bit about depth and property
+$(S_k)$ for coherent modules on locally Noetherian schemes.
+Note that we have already discussed this notion for locally
+Noetherian schemes in Properties, Section \ref{properties-section-Rk}.
+
+\begin{definition}
+\label{definition-depth}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Let $k \geq 0$ be an integer.
+\begin{enumerate}
+\item We say $\mathcal{F}$ has {\it depth $k$ at a point}
+$x$ of $X$ if $\text{depth}_{\mathcal{O}_{X, x}}(\mathcal{F}_x) = k$.
+\item We say $X$ has {\it depth $k$ at a point} $x$ of $X$ if
+$\text{depth}(\mathcal{O}_{X, x}) = k$.
+\item We say $\mathcal{F}$ has property {\it $(S_k)$} if
+$$
+\text{depth}_{\mathcal{O}_{X, x}}(\mathcal{F}_x)
+\geq \min(k, \dim(\text{Supp}(\mathcal{F}_x)))
+$$
+for all $x \in X$.
+\item We say $X$ has property {\it $(S_k)$} if $\mathcal{O}_X$ has
+property $(S_k)$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Any coherent sheaf satisfies condition $(S_0)$.
+Condition $(S_1)$ is equivalent to having no embedded associated
+points, see Divisors, Lemma \ref{divisors-lemma-S1-no-embedded}.
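+
+\noindent
+A standard example of failure of $(S_1)$: let
+$X = \Spec(k[x, y]/(x^2, xy))$, an affine line with an embedded point
+at the origin. At the point $x_0$ corresponding to the maximal ideal
+$(x, y)$ the class of $x$ is annihilated by $(x, y)$, so
+$\text{depth}(\mathcal{O}_{X, x_0}) = 0$ while
+$\dim(\mathcal{O}_{X, x_0}) = 1$. Hence $(S_1)$ fails for
+$\mathcal{O}_X$, in agreement with the fact that the origin is an
+embedded associated point of $X$.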
+
+\begin{lemma}
+\label{lemma-hom-into-depth}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{F}$, $\mathcal{G}$
+be coherent $\mathcal{O}_X$-modules and $x \in X$.
+\begin{enumerate}
+\item If $\mathcal{G}_x$ has depth $\geq 1$, then
+$\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})_x$
+has depth $\geq 1$.
+\item If $\mathcal{G}_x$ has depth $\geq 2$, then
+$\Hom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})_x$ has depth $\geq 2$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Observe that $\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ is
+a coherent $\mathcal{O}_X$-module by Lemma \ref{lemma-tensor-hom-coherent}.
+Coherent modules are of finite presentation
+(Lemma \ref{lemma-coherent-Noetherian}) hence taking stalks commutes
+with taking $\SheafHom$ and $\Hom$, see
+Modules, Lemma \ref{modules-lemma-stalk-internal-hom}.
+Thus we reduce to the case of finite modules over local
+rings which is More on Algebra, Lemma \ref{more-algebra-lemma-hom-into-depth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-into-S2}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{F}$, $\mathcal{G}$
+be coherent $\mathcal{O}_X$-modules.
+\begin{enumerate}
+\item If $\mathcal{G}$ has property $(S_1)$, then
+$\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ has property $(S_1)$.
+\item If $\mathcal{G}$ has property $(S_2)$, then
+$\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ has property $(S_2)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-hom-into-depth}
+and the definitions.
+\end{proof}
+
+\noindent
+We have seen in Properties, Lemma \ref{properties-lemma-scheme-CM-iff-all-Sk}
+that a locally Noetherian
+scheme is Cohen-Macaulay if and only if $(S_k)$ holds for all $k$.
+Thus it makes sense to introduce the following definition, which
+is equivalent to the condition that all stalks are Cohen-Macaulay modules.
+
+\begin{definition}
+\label{definition-Cohen-Macaulay}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+We say $\mathcal{F}$ is {\it Cohen-Macaulay} if and only
+if $(S_k)$ holds for all $k \geq 0$.
+\end{definition}
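+
+\noindent
+For example, the structure sheaf of the nodal curve
+$X = \Spec(k[x, y]/(xy))$ is Cohen-Macaulay even though $X$ is not
+regular: at the point $x_0$ corresponding to the origin the class of
+$x + y$ is a nonzerodivisor (it is nonzero on each irreducible
+component of the reduced scheme $X$), so
+$\text{depth}(\mathcal{O}_{X, x_0}) = 1 = \dim(\mathcal{O}_{X, x_0})$,
+and at all other points the local ring is regular of dimension $\leq 1$.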
+
+\begin{lemma}
+\label{lemma-Cohen-Macaulay-over-regular}
+Let $X$ be a regular scheme. Let $\mathcal{F}$ be a coherent
+$\mathcal{O}_X$-module. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is Cohen-Macaulay and $\text{Supp}(\mathcal{F}) = X$,
+\item $\mathcal{F}$ is finite locally free of rank $> 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $x \in X$. If (2) holds, then $\mathcal{F}_x$ is a free
+$\mathcal{O}_{X, x}$-module of rank $> 0$. Hence
+$\text{depth}(\mathcal{F}_x) = \dim(\mathcal{O}_{X, x})$
+because a regular local ring is Cohen-Macaulay
+(Algebra, Lemma \ref{algebra-lemma-regular-ring-CM}).
+Since moreover $\mathcal{F}_x \not = 0$ for every $x \in X$ we see
+that $\text{Supp}(\mathcal{F}) = X$, so (1) holds.
+Conversely, if (1) holds, then $\mathcal{F}_x$ is a
+maximal Cohen-Macaulay module over $\mathcal{O}_{X, x}$
+(Algebra, Definition \ref{algebra-definition-maximal-CM}).
+Hence $\mathcal{F}_x$ is free by
+Algebra, Lemma \ref{algebra-lemma-regular-mcm-free}.
+As this holds at every point and $\mathcal{F}$ is coherent, we conclude
+that $\mathcal{F}$ is finite locally free of rank $> 0$, i.e., (2) holds.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Devissage of coherent sheaves}
+\label{section-devissage}
+
+\noindent
+Let $X$ be a Noetherian scheme. Consider an integral closed subscheme
+$i : Z \to X$. It is often convenient to consider coherent sheaves of
+the form $i_*\mathcal{G}$ where $\mathcal{G}$ is a coherent sheaf on
+$Z$. In particular we are interested in these sheaves when $\mathcal{G}$
+is a torsion free rank $1$ sheaf. For example $\mathcal{G}$ could be
+a nonzero sheaf of ideals on $Z$, or even more specifically
+$\mathcal{G} = \mathcal{O}_Z$.
+
+\medskip\noindent
+Throughout this section we will use that a coherent sheaf is the
+same thing as a finite type quasi-coherent sheaf and that a
+quasi-coherent subquotient of a coherent sheaf is coherent, see
+Section \ref{section-coherent-sheaves}.
+The support of a coherent sheaf is closed, see
+Modules, Lemma \ref{modules-lemma-support-finite-type-closed}.
+
+\begin{lemma}
+\label{lemma-prepare-filter-support}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Suppose that $\text{Supp}(\mathcal{F}) = Z \cup Z'$ with $Z$, $Z'$ closed.
+Then there exists a short exact sequence of coherent sheaves
+$$
+0 \to \mathcal{G}' \to \mathcal{F} \to \mathcal{G} \to 0
+$$
+with $\text{Supp}(\mathcal{G}') \subset Z'$ and
+$\text{Supp}(\mathcal{G}) \subset Z$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the sheaf of ideals
+defining the reduced induced closed subscheme structure on $Z$, see
+Schemes, Lemma \ref{schemes-lemma-reduced-closed-subscheme}.
+Consider the subsheaves
+$\mathcal{G}'_n = \mathcal{I}^n\mathcal{F}$ and the
+quotients $\mathcal{G}_n = \mathcal{F}/\mathcal{I}^n\mathcal{F}$.
+For each $n$ we have a short exact sequence
+$$
+0 \to \mathcal{G}'_n \to \mathcal{F} \to \mathcal{G}_n \to 0
+$$
+For every point $x$ of $Z' \setminus Z$ we have
+$\mathcal{I}_x = \mathcal{O}_{X, x}$
+and hence $\mathcal{G}_{n, x} = 0$. Thus we see that
+$\text{Supp}(\mathcal{G}_n) \subset Z$. Note that $X \setminus Z'$
+is a Noetherian scheme. Hence by Lemma \ref{lemma-power-ideal-kills-sheaf}
+there exists an $n$ such that
+$\mathcal{G}'_n|_{X \setminus Z'} =
+\mathcal{I}^n\mathcal{F}|_{X \setminus Z'} = 0$.
+For such an $n$ we see that $\text{Supp}(\mathcal{G}'_n) \subset Z'$.
+Thus setting
+$\mathcal{G}' = \mathcal{G}'_n$ and $\mathcal{G} = \mathcal{G}_n$
+works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-prepare-filter-irreducible}
+Let $X$ be a Noetherian scheme.
+Let $i : Z \to X$ be an integral closed subscheme.
+Let $\xi \in Z$ be the generic point.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Assume that $\mathcal{F}_\xi$ is annihilated by
+$\mathfrak m_\xi$. Then there exist an integer
+$r \geq 0$ and a coherent sheaf of ideals $\mathcal{I} \subset \mathcal{O}_Z$
+and an injective map of coherent sheaves
+$$
+i_*\left(\mathcal{I}^{\oplus r}\right) \to \mathcal{F}
+$$
+which is an isomorphism in a neighbourhood of $\xi$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{J} \subset \mathcal{O}_X$ be the ideal sheaf of $Z$.
+Let $\mathcal{F}' \subset \mathcal{F}$ be the subsheaf of
+local sections of $\mathcal{F}$ which are annihilated by
+$\mathcal{J}$. It is a quasi-coherent sheaf by
+Properties, Lemma \ref{properties-lemma-sections-annihilated-by-ideal}.
+Moreover, $\mathcal{F}'_\xi = \mathcal{F}_\xi$ because
+$\mathcal{J}_\xi = \mathfrak m_\xi$ and part (3) of
+Properties, Lemma \ref{properties-lemma-sections-annihilated-by-ideal}.
+By Lemma \ref{lemma-local-isomorphism} we see that
+$\mathcal{F}' \to \mathcal{F}$
+induces an isomorphism in a neighbourhood of $\xi$.
+Hence we may replace $\mathcal{F}$ by $\mathcal{F}'$ and assume
+that $\mathcal{F}$ is annihilated by $\mathcal{J}$.
+
+\medskip\noindent
+Assume $\mathcal{J}\mathcal{F} = 0$. By
+Lemma \ref{lemma-i-star-equivalence} we can write
+$\mathcal{F} = i_*\mathcal{G}$ for some coherent
+sheaf $\mathcal{G}$ on $Z$. Suppose we can find a morphism
+$\mathcal{I}^{\oplus r} \to \mathcal{G}$ which is an isomorphism
+in a neighbourhood of the generic point $\xi$ of $Z$.
+Then applying $i_*$ (which is left exact) we get the result of the lemma.
+Hence we have reduced to the case $X = Z$.
+
+\medskip\noindent
+Suppose $Z = X$ is an integral Noetherian scheme with generic point $\xi$.
+Note that $\mathcal{O}_{X, \xi} = \kappa(\xi)$ is the function field of $X$
+in this case.
+Since $\mathcal{F}_\xi$ is a finite $\mathcal{O}_{X, \xi}$-module we see
+that $r = \dim_{\kappa(\xi)} \mathcal{F}_\xi$ is finite.
+Hence the sheaves $\mathcal{O}_X^{\oplus r}$ and $\mathcal{F}$
+have isomorphic stalks at $\xi$.
+By Lemma \ref{lemma-map-stalks-local-map} there exists a nonempty
+open $U \subset X$ and a morphism
+$\psi : \mathcal{O}_X^{\oplus r}|_U \to \mathcal{F}|_U$
+which is an isomorphism
+at $\xi$, and hence an isomorphism in a neighbourhood of $\xi$ by
+Lemma \ref{lemma-local-isomorphism}.
+By Schemes, Lemma \ref{schemes-lemma-reduced-closed-subscheme}
+there exists a quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_X$
+whose associated closed subscheme $Z \subset X$ is the complement
+of $U$.
+By Lemma \ref{lemma-homs-over-open} there exists an $n \geq 0$ and a morphism
+$\mathcal{I}^n(\mathcal{O}_X^{\oplus r}) \to \mathcal{F}$
+which recovers our $\psi$ over $U$. Since
+$\mathcal{I}^n(\mathcal{O}_X^{\oplus r}) = (\mathcal{I}^n)^{\oplus r}$
+we get a map as in the lemma. It is injective because it is injective
+at the generic point and $X$ is integral, so that a nonzero local
+section of $(\mathcal{I}^n)^{\oplus r}$ has nonzero image in the stalk
+at the generic point (easy proof omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-coherent-filter}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+There exists a filtration
+$$
+0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset
+\ldots \subset \mathcal{F}_m = \mathcal{F}
+$$
+by coherent subsheaves such that for each $j = 1, \ldots, m$
+there exist an integral closed subscheme $Z_j \subset X$ and
+a nonzero coherent sheaf of ideals $\mathcal{I}_j \subset \mathcal{O}_{Z_j}$
+such that
+$$
+\mathcal{F}_j/\mathcal{F}_{j - 1}
+\cong (Z_j \to X)_* \mathcal{I}_j
+$$
+\end{lemma}
+
+\begin{proof}
+Consider the collection
+$$
+\mathcal{T} =
+\left\{
+\begin{matrix}
+Z \subset X
+\text{ closed such that there exists a coherent sheaf }
+\mathcal{F} \\
+\text{ with }
+\text{Supp}(\mathcal{F}) = Z
+\text{ for which the lemma is wrong}
+\end{matrix}
+\right\}
+$$
+We are trying to show that $\mathcal{T}$ is empty. If not, then
+because $X$ is Noetherian we can choose a minimal element
+$Z \in \mathcal{T}$. This means that there exists a coherent
+sheaf $\mathcal{F}$ on $X$ whose support is $Z$ and for which the
+lemma does not hold. Clearly $Z \not = \emptyset$ since the only
+sheaf whose support is empty is the zero sheaf for which the
+lemma does hold (with $m = 0$).
+
+\medskip\noindent
+If $Z$ is not irreducible, then we can write $Z = Z_1 \cup Z_2$
+with $Z_1, Z_2$ closed and strictly smaller than $Z$.
+Then we can apply Lemma \ref{lemma-prepare-filter-support}
+to get a short exact sequence of coherent sheaves
+$$
+0 \to
+\mathcal{G}_1 \to
+\mathcal{F} \to
+\mathcal{G}_2 \to 0
+$$
+with $\text{Supp}(\mathcal{G}_i) \subset Z_i$. By minimality of
+$Z$ each of $\mathcal{G}_i$ has a filtration as in the statement
+of the lemma. By considering the induced filtration on $\mathcal{F}$
+we arrive at a contradiction. Hence we conclude
+that $Z$ is irreducible.
+
+\medskip\noindent
+Suppose $Z$ is irreducible. Let $\mathcal{J}$ be the sheaf of ideals
+cutting out the reduced induced closed subscheme structure of $Z$,
+see Schemes, Lemma \ref{schemes-lemma-reduced-closed-subscheme}.
+By Lemma \ref{lemma-power-ideal-kills-sheaf} we see there exists
+an $n \geq 0$ such that $\mathcal{J}^n\mathcal{F} = 0$. Hence we obtain
+a filtration
+$$
+0 = \mathcal{J}^n\mathcal{F} \subset \mathcal{J}^{n - 1}\mathcal{F}
+\subset \ldots \subset \mathcal{J}\mathcal{F} \subset \mathcal{F}
+$$
+each of whose successive subquotients is annihilated by $\mathcal{J}$.
+Hence if each of these subquotients has a filtration as in the statement
+of the lemma then also $\mathcal{F}$ does. In other words we may
+assume that $\mathcal{J}$ does annihilate $\mathcal{F}$.
+
+\medskip\noindent
+In the case where $Z$ is irreducible and $\mathcal{J}\mathcal{F} = 0$
+we can apply Lemma \ref{lemma-prepare-filter-irreducible}.
+This gives a short exact sequence
+$$
+0 \to
+i_*(\mathcal{I}^{\oplus r}) \to
+\mathcal{F} \to
+\mathcal{Q} \to 0
+$$
+where $\mathcal{Q}$ is defined as the quotient.
+Since $\mathcal{Q}$ is zero in a neighbourhood of $\xi$ by
+the lemma just cited we see that the support of $\mathcal{Q}$
+is strictly smaller than $Z$. Hence we see that $\mathcal{Q}$
+has a filtration of the desired type by minimality of $Z$.
+But then clearly $\mathcal{F}$ does too, which is our final contradiction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-property-initial}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{P}$ be a property of coherent sheaves on $X$. Assume
+\begin{enumerate}
+\item For any short exact sequence of coherent sheaves
+$$
+0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{F}_2 \to 0
+$$
+if $\mathcal{F}_i$, $i = 1, 2$ have property $\mathcal{P}$
+then so does $\mathcal{F}$.
+\item For every integral closed subscheme $Z \subset X$
+and every quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_Z$ we have
+$\mathcal{P}$ for $(Z \to X)_*\mathcal{I}$.
+\end{enumerate}
+Then property $\mathcal{P}$ holds for every coherent sheaf
+on $X$.
+\end{lemma}
+
+\begin{proof}
+First note that if $\mathcal{F}$ is a coherent sheaf with a filtration
+$$
+0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset
+\ldots \subset \mathcal{F}_m = \mathcal{F}
+$$
+by coherent subsheaves such that each of $\mathcal{F}_i/\mathcal{F}_{i - 1}$
+has property $\mathcal{P}$, then so does $\mathcal{F}$.
+This follows from the property (1) for $\mathcal{P}$.
+On the other hand, by Lemma \ref{lemma-coherent-filter}
+we can filter any $\mathcal{F}$
+with successive subquotients as in (2).
+Hence the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-property-irreducible}
+Let $X$ be a Noetherian scheme. Let $Z_0 \subset X$ be an irreducible closed
+subset with generic point $\xi$. Let $\mathcal{P}$ be a property of coherent
+sheaves on $X$ with support contained in $Z_0$ such that
+\begin{enumerate}
+\item For any short exact sequence of coherent sheaves if two
+out of three of them have property $\mathcal{P}$ then so does the
+third.
+\item For every integral closed subscheme $Z \subset Z_0 \subset X$,
+$Z \not = Z_0$ and every quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_Z$ we have
+$\mathcal{P}$ for $(Z \to X)_*\mathcal{I}$.
+\item There exists some coherent sheaf $\mathcal{G}$ on $X$ such that
+\begin{enumerate}
+\item $\text{Supp}(\mathcal{G}) = Z_0$,
+\item $\mathcal{G}_\xi$ is annihilated by $\mathfrak m_\xi$,
+\item $\dim_{\kappa(\xi)} \mathcal{G}_\xi = 1$, and
+\item property $\mathcal{P}$ holds for $\mathcal{G}$.
+\end{enumerate}
+\end{enumerate}
+Then property $\mathcal{P}$ holds for every coherent sheaf
+$\mathcal{F}$ on $X$ whose support is contained in $Z_0$.
+\end{lemma}
+
+\begin{proof}
+First note that if $\mathcal{F}$ is a coherent sheaf with support
+contained in $Z_0$ with a filtration
+$$
+0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset
+\ldots \subset \mathcal{F}_m = \mathcal{F}
+$$
+by coherent subsheaves such that each of $\mathcal{F}_i/\mathcal{F}_{i - 1}$
has property $\mathcal{P}$, then so does $\mathcal{F}$. Or, if $\mathcal{F}$
has property $\mathcal{P}$ and all but one of the
$\mathcal{F}_i/\mathcal{F}_{i - 1}$ have property $\mathcal{P}$, then
so does the remaining one. This follows from assumption (1).
+
+\medskip\noindent
+As a first application we conclude that any coherent sheaf whose support
+is strictly contained in $Z_0$ has property $\mathcal{P}$. Namely, such a
+sheaf has a filtration (see Lemma \ref{lemma-coherent-filter})
+whose subquotients have property $\mathcal{P}$ according to (2).
+
+\medskip\noindent
+Let $\mathcal{G}$ be as in (3). By Lemma \ref{lemma-prepare-filter-irreducible}
+there exist a sheaf of ideals $\mathcal{I}$ on $Z_0$, an
+integer $r \geq 1$, and a short exact sequence
+$$
+0 \to
+\left((Z_0 \to X)_*\mathcal{I}\right)^{\oplus r} \to
+\mathcal{G} \to
+\mathcal{Q} \to 0
+$$
+where the support of $\mathcal{Q}$ is strictly contained in $Z_0$.
+By (3)(c) we see that $r = 1$. Since $\mathcal{Q}$ has property $\mathcal{P}$
+too we conclude that $(Z_0 \to X)_*\mathcal{I}$ has property
+$\mathcal{P}$.
+
+\medskip\noindent
+Next, suppose that $\mathcal{I}' \not = 0$ is another quasi-coherent
+sheaf of ideals on $Z_0$. Then we can consider the intersection
+$\mathcal{I}'' = \mathcal{I}' \cap \mathcal{I}$ and we get
+two short exact sequences
+$$
+0 \to
+(Z_0 \to X)_*\mathcal{I}'' \to
+(Z_0 \to X)_*\mathcal{I} \to
+\mathcal{Q} \to 0
+$$
+and
+$$
+0 \to
+(Z_0 \to X)_*\mathcal{I}'' \to
+(Z_0 \to X)_*\mathcal{I}' \to
+\mathcal{Q}' \to 0.
+$$
Note that the supports of the coherent sheaves $\mathcal{Q}$ and
$\mathcal{Q}'$ are strictly contained in $Z_0$.
+Hence $\mathcal{Q}$ and $\mathcal{Q}'$ have property $\mathcal{P}$
+(see above). Hence we conclude using (1)
+that $(Z_0 \to X)_*\mathcal{I}''$ and $(Z_0 \to X)_*\mathcal{I}'$
+both have $\mathcal{P}$ as well.
+
+\medskip\noindent
+The final step of the proof is to note that any coherent sheaf
+$\mathcal{F}$ on $X$ whose support is contained in $Z_0$ has a filtration
+(see Lemma \ref{lemma-coherent-filter} again) whose subquotients
+all have property $\mathcal{P}$ by what we just said.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-property}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{P}$ be a property of coherent sheaves on $X$ such that
+\begin{enumerate}
+\item For any short exact sequence of coherent sheaves if two
+out of three of them have property $\mathcal{P}$ then so does the
+third.
+\item For every integral closed subscheme $Z \subset X$
+with generic point $\xi$ there exists
+some coherent sheaf $\mathcal{G}$ such that
+\begin{enumerate}
+\item $\text{Supp}(\mathcal{G}) = Z$,
+\item $\mathcal{G}_\xi$ is annihilated by $\mathfrak m_\xi$,
+\item $\dim_{\kappa(\xi)} \mathcal{G}_\xi = 1$, and
+\item property $\mathcal{P}$ holds for $\mathcal{G}$.
+\end{enumerate}
+\end{enumerate}
+Then property $\mathcal{P}$ holds for every coherent sheaf
+on $X$.
+\end{lemma}
+
+\begin{proof}
+According to Lemma \ref{lemma-property-initial} it suffices to show that
+for all integral closed subschemes $Z \subset X$ and all quasi-coherent
+ideal sheaves $\mathcal{I} \subset \mathcal{O}_Z$ we have $\mathcal{P}$
+for $(Z \to X)_*\mathcal{I}$. If this fails, then since $X$ is Noetherian
+there is a minimal integral closed subscheme $Z_0 \subset X$ such that
+$\mathcal{P}$ fails for $(Z_0 \to X)_*\mathcal{I}_0$ for some
+quasi-coherent sheaf of ideals $\mathcal{I}_0 \subset \mathcal{O}_{Z_0}$,
+but $\mathcal{P}$ does hold for $(Z \to X)_*\mathcal{I}$ for all integral
+closed subschemes $Z \subset Z_0$, $Z \not = Z_0$ and quasi-coherent
+ideal sheaves $\mathcal{I} \subset \mathcal{O}_Z$. Since we have the
+existence of $\mathcal{G}$ for $Z_0$ by part (2), according to
+Lemma \ref{lemma-property-irreducible} this cannot happen.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-property-irreducible-higher-rank-cohomological}
+Let $X$ be a Noetherian scheme. Let $Z_0 \subset X$ be an irreducible
+closed subset with generic point $\xi$. Let $\mathcal{P}$ be a property
+of coherent sheaves on $X$ such that
+\begin{enumerate}
+\item For any short exact sequence of coherent sheaves
+$$
+0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{F}_2 \to 0
+$$
+if $\mathcal{F}_i$, $i = 1, 2$ have property $\mathcal{P}$
+then so does $\mathcal{F}$.
+\item If $\mathcal{P}$ holds for $\mathcal{F}^{\oplus r}$ for
+some $r \geq 1$, then it holds for $\mathcal{F}$.
+\item For every integral closed subscheme $Z \subset Z_0 \subset X$,
+$Z \not = Z_0$ and every quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_Z$ we have
+$\mathcal{P}$ for $(Z \to X)_*\mathcal{I}$.
+\item There exists some coherent sheaf $\mathcal{G}$ such that
+\begin{enumerate}
+\item $\text{Supp}(\mathcal{G}) = Z_0$,
+\item $\mathcal{G}_\xi$ is annihilated by $\mathfrak m_\xi$, and
+\item for every quasi-coherent sheaf of ideals
+$\mathcal{J} \subset \mathcal{O}_X$ such that
+$\mathcal{J}_\xi = \mathcal{O}_{X, \xi}$ there exists a quasi-coherent
+subsheaf $\mathcal{G}' \subset \mathcal{J}\mathcal{G}$ with
+$\mathcal{G}'_\xi = \mathcal{G}_\xi$ and such that
+$\mathcal{P}$ holds for $\mathcal{G}'$.
+\end{enumerate}
+\end{enumerate}
+Then property $\mathcal{P}$ holds for every coherent sheaf
+$\mathcal{F}$ on $X$ whose support is contained in $Z_0$.
+\end{lemma}
+
+\begin{proof}
+Note that if $\mathcal{F}$ is a coherent sheaf with a filtration
+$$
+0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset
+\ldots \subset \mathcal{F}_m = \mathcal{F}
+$$
+by coherent subsheaves such that each of $\mathcal{F}_i/\mathcal{F}_{i - 1}$
+has property $\mathcal{P}$, then so does $\mathcal{F}$.
+This follows from assumption (1).
+
+\medskip\noindent
+As a first application we conclude that any coherent sheaf whose support
+is strictly contained in $Z_0$ has property $\mathcal{P}$. Namely, such a
+sheaf has a filtration (see Lemma \ref{lemma-coherent-filter})
+whose subquotients have property $\mathcal{P}$ according to (3).
+
+\medskip\noindent
+Let us denote $i : Z_0 \to X$ the closed immersion.
+Consider a coherent sheaf $\mathcal{G}$ as in (4).
+By Lemma \ref{lemma-prepare-filter-irreducible}
+there exists a sheaf of ideals $\mathcal{I}$ on $Z_0$ and
+a short exact sequence
+$$
+0 \to
+i_*\mathcal{I}^{\oplus r} \to
+\mathcal{G} \to
+\mathcal{Q} \to 0
+$$
+where the support of $\mathcal{Q}$ is strictly contained in $Z_0$.
+In particular $r > 0$ and $\mathcal{I}$ is nonzero
+because the support of $\mathcal{G}$ is equal to $Z_0$.
Let $\mathcal{I}' \subset \mathcal{I}$ be any nonzero quasi-coherent
sheaf of ideals on $Z_0$.
+Then we also get a short exact sequence
+$$
+0 \to
+i_*(\mathcal{I}')^{\oplus r} \to
+\mathcal{G} \to
+\mathcal{Q}' \to 0
+$$
+where $\mathcal{Q}'$ has support properly contained in $Z_0$.
+Let $\mathcal{J} \subset \mathcal{O}_X$ be a quasi-coherent sheaf
+of ideals cutting out the support of $\mathcal{Q}'$ (for example
+the ideal corresponding to the reduced induced closed subscheme
+structure on the support of $\mathcal{Q}'$). Then
+$\mathcal{J}_\xi = \mathcal{O}_{X, \xi}$. By
+Lemma \ref{lemma-power-ideal-kills-sheaf}
+we see that $\mathcal{J}^n\mathcal{Q}' = 0$ for some $n$.
+Hence $\mathcal{J}^n\mathcal{G} \subset i_*(\mathcal{I}')^{\oplus r}$.
+By assumption (4)(c) of the lemma we see there exists
+a quasi-coherent subsheaf $\mathcal{G}' \subset \mathcal{J}^n\mathcal{G}$
+with $\mathcal{G}'_\xi = \mathcal{G}_\xi$
+for which property $\mathcal{P}$ holds.
+Hence we get a short exact sequence
+$$
+0 \to \mathcal{G}' \to
+i_*(\mathcal{I}')^{\oplus r} \to
+\mathcal{Q}'' \to 0
+$$
+where $\mathcal{Q}''$ has support properly contained in $Z_0$.
+Thus by our initial remarks and property (1) of the lemma
+we conclude that $i_*(\mathcal{I}')^{\oplus r}$ satisfies
+$\mathcal{P}$. Hence we see that $i_*\mathcal{I}'$ satisfies
+$\mathcal{P}$ by (2). Finally, for an arbitrary quasi-coherent
+sheaf of ideals $\mathcal{I}'' \subset \mathcal{O}_{Z_0}$ we can set
+$\mathcal{I}' = \mathcal{I}'' \cap \mathcal{I}$ and we get
+a short exact sequence
+$$
+0 \to
+i_*(\mathcal{I}') \to
+i_*(\mathcal{I}'') \to
+\mathcal{Q}''' \to 0
+$$
+where $\mathcal{Q}'''$ has support properly contained in $Z_0$.
+Hence we conclude that property $\mathcal{P}$ holds for
+$i_*\mathcal{I}''$ as well.
+
+\medskip\noindent
+The final step of the proof is to note that any coherent sheaf
+$\mathcal{F}$ on $X$ whose support is contained in $Z_0$ has a filtration
+(see Lemma \ref{lemma-coherent-filter} again) whose subquotients
+all have property $\mathcal{P}$ by what we just said.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-property-higher-rank-cohomological}
+Let $X$ be a Noetherian scheme.
+Let $\mathcal{P}$ be a property of coherent sheaves on $X$ such that
+\begin{enumerate}
+\item For any short exact sequence of coherent sheaves
+$$
+0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{F}_2 \to 0
+$$
+if $\mathcal{F}_i$, $i = 1, 2$ have property $\mathcal{P}$
+then so does $\mathcal{F}$.
+\item If $\mathcal{P}$ holds for $\mathcal{F}^{\oplus r}$ for
+some $r \geq 1$, then it holds for $\mathcal{F}$.
+\item For every integral closed subscheme $Z \subset X$
+with generic point $\xi$ there exists
+some coherent sheaf $\mathcal{G}$ such that
+\begin{enumerate}
+\item $\text{Supp}(\mathcal{G}) = Z$,
+\item $\mathcal{G}_\xi$ is annihilated by $\mathfrak m_\xi$, and
+\item for every quasi-coherent sheaf of ideals
+$\mathcal{J} \subset \mathcal{O}_X$ such that
+$\mathcal{J}_\xi = \mathcal{O}_{X, \xi}$ there exists a quasi-coherent
+subsheaf $\mathcal{G}' \subset \mathcal{J}\mathcal{G}$ with
+$\mathcal{G}'_\xi = \mathcal{G}_\xi$ and such that
+$\mathcal{P}$ holds for $\mathcal{G}'$.
+\end{enumerate}
+\end{enumerate}
+Then property $\mathcal{P}$ holds for every coherent sheaf
+on $X$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-property-irreducible-higher-rank-cohomological}
+in exactly the same way that Lemma \ref{lemma-property} follows from
+Lemma \ref{lemma-property-irreducible}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Finite morphisms and affines}
+\label{section-finite-affine}
+
+\noindent
+In this section we use the results of the preceding sections
+to show that the image of a Noetherian affine scheme under a finite
+morphism is affine. We will see later that this result holds more
+generally (see Limits, Lemma \ref{limits-lemma-affine} and
+Proposition \ref{limits-proposition-affine}).
+
+\begin{lemma}
+\label{lemma-finite-morphism-Noetherian}
+Let $f : Y \to X$ be a morphism of schemes.
+Assume $f$ is finite, surjective and $X$ locally Noetherian.
+Let $Z \subset X$ be an integral closed subscheme with
+generic point $\xi$. Then
+there exists a coherent sheaf $\mathcal{F}$ on $Y$
+such that the support of $f_*\mathcal{F}$ is equal to $Z$
+and $(f_*\mathcal{F})_\xi$ is annihilated by $\mathfrak m_\xi$.
+\end{lemma}
+
+\begin{proof}
+Note that $Y$ is locally Noetherian by
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}.
+Because $f$ is surjective the fibre $Y_\xi$ is not empty.
+Pick $\xi' \in Y$ mapping to $\xi$. Let $Z' = \overline{\{\xi'\}}$.
+We may think of $Z' \subset Y$ as a reduced closed subscheme,
+see Schemes, Lemma \ref{schemes-lemma-reduced-closed-subscheme}.
+Hence the sheaf $\mathcal{F} = (Z' \to Y)_*\mathcal{O}_{Z'}$
+is a coherent sheaf on $Y$ (see
+Lemma \ref{lemma-finite-pushforward-coherent}).
+Look at the commutative diagram
+$$
+\xymatrix{
+Z' \ar[r]_{i'} \ar[d]_{f'} &
+Y \ar[d]^f \\
+Z \ar[r]^i &
+X
+}
+$$
+We see that $f_*\mathcal{F} = i_*f'_*\mathcal{O}_{Z'}$.
+Hence the stalk of $f_*\mathcal{F}$ at $\xi$ is the stalk
+of $f'_*\mathcal{O}_{Z'}$ at $\xi$. Note that since $Z'$ is
+integral with generic point $\xi'$ we have that
+$\xi'$ is the only point of $Z'$ lying over $\xi$, see
+Algebra, Lemmas \ref{algebra-lemma-finite-is-integral} and
+\ref{algebra-lemma-integral-no-inclusion}.
Hence the stalk of $f'_*\mathcal{O}_{Z'}$ at $\xi$
equals $\mathcal{O}_{Z', \xi'} = \kappa(\xi')$. In particular
+the stalk of $f_*\mathcal{F}$ at $\xi$ is not zero.
+This combined with the fact that $f_*\mathcal{F}$ is
+of the form $i_*f'_*(\text{something})$ implies the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-affine-morphism-projection-ideal}
+Let $f : Y \to X$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $Y$.
+Let $\mathcal{I}$ be a quasi-coherent sheaf of ideals on $X$.
+If the morphism $f$ is affine then
+$\mathcal{I}f_*\mathcal{F} = f_*(f^{-1}\mathcal{I}\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+The notation means the following. Since $f^{-1}$ is an exact functor
+we see that $f^{-1}\mathcal{I}$ is a sheaf
+of ideals of $f^{-1}\mathcal{O}_X$. Via the map
+$f^\sharp : f^{-1}\mathcal{O}_X \to \mathcal{O}_Y$ this acts on
+$\mathcal{F}$. Then $f^{-1}\mathcal{I}\mathcal{F}$ is the subsheaf
+generated by sums of local sections of the form $as$ where $a$
+is a local section of $f^{-1}\mathcal{I}$ and $s$ is a local section
+of $\mathcal{F}$. It is a quasi-coherent $\mathcal{O}_Y$-submodule
+of $\mathcal{F}$ because it is also the image of a natural map
+$f^*\mathcal{I} \otimes_{\mathcal{O}_Y} \mathcal{F} \to \mathcal{F}$.
+
+\medskip\noindent
+Having said this the proof is straightforward. Namely, the question is local
+and hence we may assume $X$ is affine. Since $f$ is affine we see that
+$Y$ is affine too. Thus we may write
+$Y = \Spec(B)$, $X = \Spec(A)$, $\mathcal{F} = \widetilde{M}$,
+and $\mathcal{I} = \widetilde{I}$. The assertion of the lemma in this
+case boils down to the statement that
+$$
+I(M_A) = ((IB)M)_A
+$$
+where $M_A$ indicates the $A$-module associated to the $B$-module $M$.
+\end{proof}
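
\noindent
One way to verify the displayed module identity (here $\varphi : A \to B$
denotes the ring map): both sides are the same subset of $M$, namely
$$
I(M_A) = \sum\nolimits_{a \in I} \varphi(a)M = (IB)M,
$$
where the first equality holds by definition of the $A$-action on $M_A$,
and the second because the middle term is already a $B$-submodule of $M$:
for $b \in B$, $a \in I$, and $m \in M$ we have
$b(\varphi(a)m) = \varphi(a)(bm)$.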
+
+\begin{lemma}
+\label{lemma-image-affine-finite-morphism-affine-Noetherian}
+Let $f : Y \to X$ be a morphism of schemes.
+Assume
+\begin{enumerate}
+\item $f$ finite,
+\item $f$ surjective,
+\item $Y$ affine, and
+\item $X$ Noetherian.
+\end{enumerate}
+Then $X$ is affine.
+\end{lemma}
+
+\begin{proof}
+We will prove that under the assumptions of the lemma for any coherent
+$\mathcal{O}_X$-module $\mathcal{F}$ we have $H^1(X, \mathcal{F}) = 0$.
+This will in particular imply that $H^1(X, \mathcal{I}) = 0$
+for every quasi-coherent sheaf of ideals of $\mathcal{O}_X$. Then it
+follows that $X$ is affine from either
+Lemma \ref{lemma-quasi-compact-h1-zero-covering} or
+Lemma \ref{lemma-quasi-separated-h1-zero-covering}.
+
+\medskip\noindent
+Let $\mathcal{P}$ be the property of coherent sheaves
+$\mathcal{F}$ on $X$ defined by the rule
+$$
+\mathcal{P}(\mathcal{F}) \Leftrightarrow H^1(X, \mathcal{F}) = 0.
+$$
+We are going to apply Lemma \ref{lemma-property-higher-rank-cohomological}.
+Thus we have to verify (1), (2) and (3) of that lemma for $\mathcal{P}$.
+Property (1) follows from the long exact cohomology sequence associated
+to a short exact sequence of sheaves. Property (2) follows since
+$H^1(X, -)$ is an additive functor. To see (3) let $Z \subset X$ be
+an integral closed subscheme with generic point $\xi$.
+Let $\mathcal{F}$ be a coherent sheaf on $Y$ such that
+the support of $f_*\mathcal{F}$ is equal to $Z$
+and $(f_*\mathcal{F})_\xi$ is annihilated by $\mathfrak m_\xi$,
+see Lemma \ref{lemma-finite-morphism-Noetherian}. We claim that
+taking $\mathcal{G} = f_*\mathcal{F}$ works. We only have to verify
+part (3)(c) of Lemma \ref{lemma-property-higher-rank-cohomological}.
+Hence assume that $\mathcal{J} \subset \mathcal{O}_X$ is a
+quasi-coherent sheaf of ideals such that
+$\mathcal{J}_\xi = \mathcal{O}_{X, \xi}$.
+A finite morphism is affine hence by
+Lemma \ref{lemma-affine-morphism-projection-ideal} we see that
+$\mathcal{J}\mathcal{G} = f_*(f^{-1}\mathcal{J}\mathcal{F})$.
+Also, as pointed out in the proof of
+Lemma \ref{lemma-affine-morphism-projection-ideal} the sheaf
+$f^{-1}\mathcal{J}\mathcal{F}$ is a quasi-coherent $\mathcal{O}_Y$-module.
+Since $Y$ is affine we see that $H^1(Y, f^{-1}\mathcal{J}\mathcal{F}) = 0$,
+see Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}.
+Since $f$ is finite, hence affine, we see that
+$$
+H^1(X, \mathcal{J}\mathcal{G}) =
+H^1(X, f_*(f^{-1}\mathcal{J}\mathcal{F})) =
+H^1(Y, f^{-1}\mathcal{J}\mathcal{F}) = 0
+$$
+by Lemma \ref{lemma-relative-affine-cohomology}.
+Hence the quasi-coherent subsheaf $\mathcal{G}' = \mathcal{J}\mathcal{G}$
+satisfies $\mathcal{P}$. This verifies property (3)(c) of
+Lemma \ref{lemma-property-higher-rank-cohomological} as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Coherent sheaves on Proj, I}
+\label{section-coherent-proj}
+
+\noindent
+In this section we discuss coherent sheaves on $\text{Proj}(A)$
+where $A$ is a Noetherian graded ring generated by $A_1$ over $A_0$.
+In the next section we discuss what happens if $A$ is not generated
+by degree $1$ elements. First, we formulate an all-in-one result for
+projective space over a Noetherian ring.
+
+\begin{lemma}
+\label{lemma-coherent-projective}
+Let $R$ be a Noetherian ring.
+Let $n \geq 0$ be an integer.
+For every coherent sheaf $\mathcal{F}$ on $\mathbf{P}^n_R$
+we have the following:
+\begin{enumerate}
+\item There exists an $r \geq 0$ and
+$d_1, \ldots, d_r \in \mathbf{Z}$ and a surjection
+$$
+\bigoplus\nolimits_{j = 1, \ldots, r}
+\mathcal{O}_{\mathbf{P}^n_R}(d_j)
+\longrightarrow
+\mathcal{F}.
+$$
+\item We have $H^i(\mathbf{P}^n_R, \mathcal{F}) = 0$ unless
+$0 \leq i \leq n$.
+\item For any $i$ the cohomology group $H^i(\mathbf{P}^n_R, \mathcal{F})$
+is a finite $R$-module.
+\item If $i > 0$, then
+$H^i(\mathbf{P}^n_R, \mathcal{F}(d)) = 0$ for all $d$ large enough.
+\item For any $k \in \mathbf{Z}$ the graded $R[T_0, \ldots, T_n]$-module
+$$
+\bigoplus\nolimits_{d \geq k} H^0(\mathbf{P}^n_R, \mathcal{F}(d))
+$$
+is a finite $R[T_0, \ldots, T_n]$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use that $\mathcal{O}_{\mathbf{P}^n_R}(1)$ is an ample invertible
+sheaf on
+the scheme $\mathbf{P}^n_R$. This follows directly from the definition
since $\mathbf{P}^n_R$ is covered by the standard affine opens $D_{+}(T_i)$.
+Hence by
+Properties, Proposition \ref{properties-proposition-characterize-ample}
+every finite type quasi-coherent $\mathcal{O}_{\mathbf{P}^n_R}$-module
+is a quotient of a finite direct sum of tensor powers of
+$\mathcal{O}_{\mathbf{P}^n_R}(1)$. On the other hand coherent sheaves
+and finite type quasi-coherent sheaves are the same thing on projective
+space over $R$ by Lemma \ref{lemma-coherent-Noetherian}. Thus we see (1).
+
+\medskip\noindent
+Projective $n$-space $\mathbf{P}^n_R$ is covered by $n + 1$ affines,
+namely the standard opens $D_{+}(T_i)$, $i = 0, \ldots, n$, see Constructions,
+Lemma \ref{constructions-lemma-standard-covering-projective-space}.
+Hence we see that for any quasi-coherent
+sheaf $\mathcal{F}$ on $\mathbf{P}^n_R$
+we have $H^i(\mathbf{P}^n_R, \mathcal{F}) = 0$ for $i \geq n + 1$,
+see Lemma \ref{lemma-vanishing-nr-affines}. Hence (2) holds.
+
+\medskip\noindent
+Let us prove (3) and (4) simultaneously for all coherent sheaves
+on $\mathbf{P}^n_R$ by descending induction on $i$. Clearly the result
+holds for $i \geq n + 1$ by (2). Suppose we know the result for
+$i + 1$ and we want to show the result for $i$. (If $i = 0$, then
+part (4) is vacuous.) Let $\mathcal{F}$ be a coherent sheaf on
+$\mathbf{P}^n_R$. Choose a surjection as in (1) and denote
+$\mathcal{G}$ the kernel so that we have a short exact sequence
+$$
+0 \to \mathcal{G} \to
+\bigoplus\nolimits_{j = 1, \ldots, r}
+\mathcal{O}_{\mathbf{P}^n_R}(d_j)
+\to
+\mathcal{F} \to 0
+$$
+By Lemma \ref{lemma-coherent-abelian-Noetherian}
+we see that $\mathcal{G}$ is coherent. The long exact
+cohomology sequence gives an exact sequence
+$$
+H^i(\mathbf{P}^n_R, \bigoplus\nolimits_{j = 1, \ldots, r}
+\mathcal{O}_{\mathbf{P}^n_R}(d_j))
+\to
+H^i(\mathbf{P}^n_R, \mathcal{F})
+\to
+H^{i + 1}(\mathbf{P}^n_R, \mathcal{G}).
+$$
+By induction assumption the right $R$-module is finite and by
+Lemma \ref{lemma-cohomology-projective-space-over-ring} the left
+$R$-module is finite. Since $R$ is Noetherian it follows immediately
+that $H^i(\mathbf{P}^n_R, \mathcal{F})$ is a finite $R$-module.
+This proves the induction step for assertion (3).
+Since $\mathcal{O}_{\mathbf{P}^n_R}(d)$ is invertible
+we see that twisting on $\mathbf{P}^n_R$ is an exact functor (since
+you get it by tensoring with an invertible sheaf, see
+Constructions, Definition \ref{constructions-definition-twist}).
+This means that for all $d \in \mathbf{Z}$ the sequence
+$$
+0 \to \mathcal{G}(d) \to
+\bigoplus\nolimits_{j = 1, \ldots, r}
+\mathcal{O}_{\mathbf{P}^n_R}(d_j + d)
+\to
+\mathcal{F}(d) \to 0
+$$
+is short exact. The resulting cohomology sequence is
+$$
+H^i(\mathbf{P}^n_R, \bigoplus\nolimits_{j = 1, \ldots, r}
+\mathcal{O}_{\mathbf{P}^n_R}(d_j + d))
+\to
+H^i(\mathbf{P}^n_R, \mathcal{F}(d))
+\to
+H^{i + 1}(\mathbf{P}^n_R, \mathcal{G}(d)).
+$$
+By induction assumption we see the module on the right is zero
+for $d \gg 0$ and by the computation in
+Lemma \ref{lemma-cohomology-projective-space-over-ring}
+the module on the left is zero as soon as $d \geq -\min\{d_j\}$
and $i \geq 1$. This proves the induction step for assertion (4).
This concludes the proof of (3) and (4).
+
+\medskip\noindent
+In order to prove (5) note that for all sufficiently large $d$
+the map
+$$
+H^0(\mathbf{P}^n_R, \bigoplus\nolimits_{j = 1, \ldots, r}
+\mathcal{O}_{\mathbf{P}^n_R}(d_j + d))
+\to
+H^0(\mathbf{P}^n_R, \mathcal{F}(d))
+$$
+is surjective by the vanishing of $H^1(\mathbf{P}^n_R, \mathcal{G}(d))$
+we just proved. In other words, the module
+$$
+M_k
+=
+\bigoplus\nolimits_{d \geq k} H^0(\mathbf{P}^n_R, \mathcal{F}(d))
+$$
is, for $k$ large enough, a quotient of the corresponding module
+$$
+N_k
+=
+\bigoplus\nolimits_{d \geq k} H^0(\mathbf{P}^n_R,
+\bigoplus\nolimits_{j = 1, \ldots, r}
+\mathcal{O}_{\mathbf{P}^n_R}(d_j + d)
+)
+$$
+When $k$ is sufficiently small (e.g.\ $k < -d_j$ for all $j$) then
+$$
+N_k = \bigoplus\nolimits_{j = 1, \ldots, r}
+R[T_0, \ldots, T_n](d_j)
+$$
+by our computations in Section \ref{section-cohomology-projective-space}.
+In particular it is finitely generated.
+Suppose $k \in \mathbf{Z}$ is arbitrary.
+Choose $k_{-} \ll k \ll k_{+}$.
+Consider the diagram
+$$
+\xymatrix{
+N_{k_{-}} & N_{k_{+}} \ar[d] \ar[l] \\
+M_k & M_{k_{+}} \ar[l]
+}
+$$
+where the vertical arrow is the surjective map above and
+the horizontal arrows are the obvious inclusion maps.
+By what was said above we see that $N_{k_{-}}$ is a finitely
+generated $R[T_0, \ldots, T_n]$-module. Hence $N_{k_{+}}$ is
+a finitely generated $R[T_0, \ldots, T_n]$-module because it
+is a submodule of a finitely generated module and the ring
+$R[T_0, \ldots, T_n]$ is Noetherian. Since the vertical arrow
+is surjective we conclude that $M_{k_{+}}$ is a finitely
+generated $R[T_0, \ldots, T_n]$-module. The quotient
+$M_k/M_{k_{+}}$ is finite as an $R$-module since it is a
+finite direct sum of the finite $R$-modules
+$H^0(\mathbf{P}^n_R, \mathcal{F}(d))$ for $k \leq d < k_{+}$.
+Note that we use part (3) for $i = 0$ here. Hence
+$M_k/M_{k_{+}}$ is a fortiori a finite $R[T_0, \ldots, T_n]$-module.
+In other words, we have sandwiched $M_k$ between two finite
+$R[T_0, \ldots, T_n]$-modules and we win.
+\end{proof}
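
\noindent
For the reader's convenience we recall the standard computation used
repeatedly in the proof above, in the form given by
Lemma \ref{lemma-cohomology-projective-space-over-ring}.
With $S = R[T_0, \ldots, T_n]$ one has, for $0 \leq i \leq n$,
$$
H^i(\mathbf{P}^n_R, \mathcal{O}_{\mathbf{P}^n_R}(d)) =
\left\{
\begin{matrix}
S_d & \text{if } i = 0, \\
0 & \text{if } 0 < i < n, \\
\bigoplus\nolimits_{e_0, \ldots, e_n < 0,\ \sum e_j = d}
R \cdot T_0^{e_0} \cdots T_n^{e_n} & \text{if } i = n.
\end{matrix}
\right.
$$
In particular $H^n(\mathbf{P}^n_R, \mathcal{O}_{\mathbf{P}^n_R}(d)) = 0$
as soon as $d \geq -n$, which is the vanishing for $d \gg 0$ used in the
induction step for (4).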
+
+\begin{lemma}
+\label{lemma-coherent-on-proj}
+Let $A$ be a graded ring such that $A_0$ is Noetherian and
+$A$ is generated by finitely many elements of $A_1$ over $A_0$.
+Set $X = \text{Proj}(A)$. Then $X$ is a Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item There exists an $r \geq 0$ and
+$d_1, \ldots, d_r \in \mathbf{Z}$ and a surjection
+$$
+\bigoplus\nolimits_{j = 1, \ldots, r} \mathcal{O}_X(d_j)
+\longrightarrow \mathcal{F}.
+$$
+\item For any $p$ the cohomology group $H^p(X, \mathcal{F})$ is a finite
+$A_0$-module.
+\item If $p > 0$, then $H^p(X, \mathcal{F}(d)) = 0$ for all $d$ large enough.
+\item For any $k \in \mathbf{Z}$ the graded $A$-module
+$$
+\bigoplus\nolimits_{d \geq k} H^0(X, \mathcal{F}(d))
+$$
+is a finite $A$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By assumption there exists a surjection of graded $A_0$-algebras
+$$
+A_0[T_0, \ldots, T_n] \longrightarrow A
+$$
+where $\deg(T_j) = 1$ for $j = 0, \ldots, n$. By Constructions, Lemma
+\ref{constructions-lemma-surjective-graded-rings-generated-degree-1-map-proj}
+this defines a closed immersion $i : X \to \mathbf{P}^n_{A_0}$
+such that $i^*\mathcal{O}_{\mathbf{P}^n_{A_0}}(1) = \mathcal{O}_X(1)$.
+In particular, $X$ is Noetherian as a closed subscheme of the Noetherian
+scheme $\mathbf{P}^n_{A_0}$. We claim that the results of the lemma for
+$\mathcal{F}$ follow from the corresponding
+results of Lemma \ref{lemma-coherent-projective} for the coherent sheaf
+$i_*\mathcal{F}$ (Lemma \ref{lemma-i-star-equivalence}) on
+$\mathbf{P}^n_{A_0}$. For example, by this lemma there
+exists a surjection
+$$
+\bigoplus\nolimits_{j = 1, \ldots, r} \mathcal{O}_{\mathbf{P}^n_{A_0}}(d_j)
+\longrightarrow i_*\mathcal{F}.
+$$
+By adjunction this corresponds to a map
+$\bigoplus_{j = 1, \ldots, r} \mathcal{O}_X(d_j) \longrightarrow \mathcal{F}$
+which is surjective as well. The statements on cohomology follow from the
+fact that
+$H^p(X, \mathcal{F}(d)) = H^p(\mathbf{P}^n_{A_0}, i_*\mathcal{F}(d))$
+by Lemma \ref{lemma-relative-affine-cohomology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-recover-tail-graded-module}
+Let $A$ be a graded ring such that $A_0$ is Noetherian and $A$ is generated
+by finitely many elements of $A_1$ over $A_0$. Let $M$ be a
+finite graded $A$-module. Set $X = \text{Proj}(A)$ and let $\widetilde{M}$
+be the quasi-coherent $\mathcal{O}_X$-module on $X$ associated to $M$.
+The maps
+$$
+M_n \longrightarrow \Gamma(X, \widetilde{M}(n))
+$$
+from Constructions, Lemma \ref{constructions-lemma-apply-modules}
+are isomorphisms for all sufficiently large $n$.
+\end{lemma}
+
+\begin{proof}
+Because $M$ is a finite $A$-module we see that
+$\widetilde{M}$ is a finite type $\mathcal{O}_X$-module,
+i.e., a coherent $\mathcal{O}_X$-module.
+Set $N = \bigoplus_{n \geq 0} \Gamma(X, \widetilde{M}(n))$.
+We have to show that the map $M \to N$ of graded $A$-modules
+is an isomorphism in all sufficiently large degrees.
+By Properties, Lemma \ref{properties-lemma-proj-quasi-coherent}
we have a canonical isomorphism $\widetilde{N} \to \widetilde{M}$
such that the induced maps $N_n \to \Gamma(X, \widetilde{M}(n)) = N_n$
are the identity maps. Thus we have maps
+$\widetilde{M} \to \widetilde{N} \to \widetilde{M}$
+such that for all $n$ the diagram
+$$
+\xymatrix{
+M_n \ar[d] \ar[r] & N_n \ar[d] \ar@{=}[rd] \\
+\Gamma(X, \widetilde{M}(n)) \ar[r] &
+\Gamma(X, \widetilde{N}(n)) \ar[r]^{\cong} &
+\Gamma(X, \widetilde{M}(n))
+}
+$$
is commutative. This means that the composition
$$
M_n \to \Gamma(X, \widetilde{M}(n))
\to \Gamma(X, \widetilde{N}(n)) \to \Gamma(X, \widetilde{M}(n))
$$
is equal to the canonical map $M_n \to \Gamma(X, \widetilde{M}(n))$.
+Clearly this implies that the composition
+$\widetilde{M} \to \widetilde{N} \to \widetilde{M}$
+is the identity. Hence $\widetilde{M} \to \widetilde{N}$ is an isomorphism.
+Let $K = \Ker(M \to N)$ and $Q = \Coker(M \to N)$.
+Recall that the functor $M \mapsto \widetilde{M}$ is exact, see
+Constructions, Lemma \ref{constructions-lemma-proj-sheaves}.
+Hence we see that $\widetilde{K} = 0$ and $\widetilde{Q} = 0$.
+Recall that $A$ is a Noetherian ring, $M$ is a finitely generated
+$A$-module, and $N$ is a graded $A$-module such that
$N' = \bigoplus_{n \geq 0} N_n$ is finitely generated by
the last part of Lemma \ref{lemma-coherent-on-proj}.
Hence $K' = \bigoplus_{n \geq 0} K_n$ and
+$Q' = \bigoplus_{n \geq 0} Q_n$ are finite $A$-modules.
+Observe that $\widetilde{Q} = \widetilde{Q'}$ and
+$\widetilde{K} = \widetilde{K'}$.
+Thus to finish the proof it suffices to show
+that a finite $A$-module $K$ with $\widetilde{K} = 0$
+has only finitely many nonzero homogeneous parts $K_d$ with $d \geq 0$.
To do this, let $x_1, \ldots, x_r \in K$ be homogeneous generators,
say sitting in degrees $d_1, \ldots, d_r$.
Let $f_1, \ldots, f_n \in A_1$ be elements generating $A$ over $A_0$.
For each $i$ and $j$ there exists an $n_{ij} \geq 0$ such that
$f_i^{n_{ij}} x_j = 0$ in $K_{d_j + n_{ij}}$: if not, then
$x_j/f_i^{d_j} \in K_{(f_i)}$ would be nonzero, i.e., $\widetilde{K}$
would be nonzero.
Then we see that $K_d$ is zero for $d > \max_j(d_j + \sum_i n_{ij})$.
Namely, every element of $K_d$ is a sum of terms, each a monomial in
the $f_i$ of degree $d - d_j$ times one of the $x_j$; if
$d - d_j > \sum_i n_{ij}$, then some $f_i$ occurs in this monomial
with exponent at least $n_{ij}$, so the term is zero.
+\end{proof}
+
+\noindent
+Let $A$ be a graded ring such that $A_0$ is Noetherian and $A$ is generated
+by finitely many elements of $A_1$ over $A_0$. Recall that
+$A_+ = \bigoplus_{n > 0} A_n$ is the irrelevant ideal.
+Let $M$ be a graded $A$-module. Recall that
+$M$ is an $A_+$-power torsion module if for all $x \in M$
+there is an $n \geq 1$ such that $(A_+)^n x = 0$, see
+More on Algebra, Definition \ref{more-algebra-definition-f-power-torsion}.
+If $M$ is finitely generated, then we see that
+this is equivalent to $M_n = 0$ for $n \gg 0$.
+Sometimes $A_+$-power torsion modules are called torsion modules.
Sometimes a graded $A$-module $M$ is called torsion free if for
$x \in M$ the condition $(A_+)^n x = 0$ for some $n > 0$ implies $x = 0$.
+Denote $\text{Mod}_A$ the category of graded $A$-modules,
+$\text{Mod}^{fg}_A$ the full subcategory of finitely generated ones,
+and $\text{Mod}^{fg}_{A, torsion}$ the full subcategory of
+modules $M$ such that $M_n = 0$ for $n \gg 0$.
+
+\begin{proposition}
+\label{proposition-coherent-modules-on-proj}
+Let $A$ be a graded ring such that $A_0$ is Noetherian and $A$ is generated
+by finitely many elements of $A_1$ over $A_0$.
+Set $X = \text{Proj}(A)$. The functor $M \mapsto \widetilde M$
+induces an equivalence
+$$
+\text{Mod}^{fg}_A/\text{Mod}^{fg}_{A, torsion}
+\longrightarrow
+\textit{Coh}(\mathcal{O}_X)
+$$
+whose quasi-inverse is given by
+$\mathcal{F} \longmapsto \bigoplus_{n \geq 0} \Gamma(X, \mathcal{F}(n))$.
+\end{proposition}
+
+\begin{proof}
+The subcategory $\text{Mod}^{fg}_{A, torsion}$ is a Serre subcategory
+of $\text{Mod}^{fg}_A$, see
+Homology, Definition \ref{homology-definition-serre-subcategory}.
+This is clear from the description of objects given above but it also follows
+from More on Algebra, Lemma \ref{more-algebra-lemma-I-power-torsion}.
+Hence the quotient category on the left of the arrow is defined
+in Homology, Lemma \ref{homology-lemma-serre-subcategory-is-kernel}.
+To define the functor of the proposition, it suffices to show that
+the functor $M \mapsto \widetilde M$ sends torsion modules to $0$.
+This is clear because for any $f \in A_+$ homogeneous the
+module $M_f$ is zero and hence the value $M_{(f)}$ of $\widetilde M$
+on $D_+(f)$ is zero too.
+
+\medskip\noindent
+By Lemma \ref{lemma-coherent-on-proj} the proposed quasi-inverse
+makes sense. Namely, the lemma shows that
+$\mathcal{F} \longmapsto \bigoplus_{n \geq 0} \Gamma(X, \mathcal{F}(n))$
+is a functor $\textit{Coh}(\mathcal{O}_X) \to \text{Mod}^{fg}_A$
+which we can compose with the quotient functor
+$\text{Mod}^{fg}_A \to \text{Mod}^{fg}_A/\text{Mod}^{fg}_{A, torsion}$.
+
+\medskip\noindent
+By Lemma \ref{lemma-recover-tail-graded-module}
+the composite left to right to left is isomorphic to the identity functor.
+
+\medskip\noindent
+Finally, let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Set $M = \bigoplus_{n \in \mathbf{Z}} \Gamma(X, \mathcal{F}(n))$
+viewed as a graded $A$-module, so that our functor sends $\mathcal{F}$ to
+$M_{\geq 0} = \bigoplus_{n \geq 0} M_n$.
+By Properties, Lemma \ref{properties-lemma-proj-quasi-coherent}
+the canonical map $\widetilde M \to \mathcal{F}$
+is an isomorphism. Since the inclusion map
+$M_{\geq 0} \to M$ defines an isomorphism
+$\widetilde{M_{\geq 0}} \to \widetilde M$ we conclude that
+the composite right to left to right is isomorphic to the identity
+functor as well.
+\end{proof}
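\noindent
In the special case $A = k[T_0, \ldots, T_r]$ over a field $k$ the
proposition specializes to Serre's classical description of coherent
sheaves on projective space:
$$
\text{Mod}^{fg}_{k[T_0, \ldots, T_r]}/
\text{Mod}^{fg}_{k[T_0, \ldots, T_r], torsion}
\cong
\textit{Coh}(\mathcal{O}_{\mathbf{P}^r_k})
$$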
+
+
+
+
+
+\section{Coherent sheaves on Proj, II}
+\label{section-coherent-proj-general}
+
+\noindent
+In this section we discuss coherent sheaves on $\text{Proj}(A)$
+where $A$ is a Noetherian graded ring. Most of the results will
+be deduced by sleight of hand from the corresponding result in the
+previous section where we discussed what happens if $A$ is generated
+by degree $1$ elements.
+
+\begin{lemma}
+\label{lemma-coherent-on-proj-general}
+Let $A$ be a Noetherian graded ring. Set $X = \text{Proj}(A)$. Then $X$
+is a Noetherian scheme. Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item There exists an $r \geq 0$ and
+$d_1, \ldots, d_r \in \mathbf{Z}$ and a surjection
+$$
+\bigoplus\nolimits_{j = 1, \ldots, r} \mathcal{O}_X(d_j)
+\longrightarrow \mathcal{F}.
+$$
+\item For any $p$ the cohomology group $H^p(X, \mathcal{F})$ is a finite
+$A_0$-module.
+\item If $p > 0$, then $H^p(X, \mathcal{F}(d)) = 0$ for all $d$ large enough.
+\item For any $k \in \mathbf{Z}$ the graded $A$-module
+$$
+\bigoplus\nolimits_{d \geq k} H^0(X, \mathcal{F}(d))
+$$
+is a finite $A$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will prove this by reducing the statement to
+Lemma \ref{lemma-coherent-on-proj}.
+By Algebra, Lemmas \ref{algebra-lemma-graded-Noetherian} and
+\ref{algebra-lemma-S-plus-generated} the ring $A_0$ is Noetherian
+and $A$ is generated over $A_0$ by finitely many elements
+$f_1, \ldots, f_r$ homogeneous of positive degree.
+Let $d$ be a sufficiently divisible integer. Set $A' = A^{(d)}$ with notation
+as in Algebra, Section \ref{algebra-section-graded}.
+Then $A'$ is generated over $A'_0 = A_0$ by elements of
+degree $1$, see Algebra, Lemma \ref{algebra-lemma-uple-generated-degree-1}.
+Thus Lemma \ref{lemma-coherent-on-proj} applies to $X' = \text{Proj}(A')$.
+
+\medskip\noindent
+By Constructions, Lemma \ref{constructions-lemma-d-uple} there exist
+an isomorphism of schemes $i : X \to X'$ and
+isomorphisms $\mathcal{O}_X(nd) \to i^*\mathcal{O}_{X'}(n)$
+compatible with the map $A' \to A$ and the maps
+$A_n \to H^0(X, \mathcal{O}_X(n))$ and $A'_n \to H^0(X', \mathcal{O}_{X'}(n))$.
+Thus Lemma \ref{lemma-coherent-on-proj} implies $X$ is Noetherian and that
+(1) and (2) hold. To see (3) and (4)
+we can use that for any fixed $k$, $p$, and $q$ we have
+$$
+\bigoplus\nolimits_{dn + q \geq k} H^p(X, \mathcal{F}(dn + q)) =
\bigoplus\nolimits_{dn + q \geq k} H^p(X', (i_*\mathcal{F}(q))(n))
+$$
+by the compatibilities above. If $p > 0$, we have the vanishing of the right
+hand side for $k$ depending on $q$ large enough by
+Lemma \ref{lemma-coherent-on-proj}. Since there are only a finite number
+of congruence classes of integers modulo $d$, we see that (3) holds for
+$\mathcal{F}$ on $X$. If $p = 0$, then we have that the right hand side
+is a finite $A'$-module by Lemma \ref{lemma-coherent-on-proj}. Using
+the finiteness of congruence classes once more, we find that
+$\bigoplus_{n \geq k} H^0(X, \mathcal{F}(n))$ is a finite $A'$-module too.
+Since the $A'$-module structure comes from the $A$-module structure
+(by the compatibilities mentioned above), we conclude it is finite
+as an $A$-module as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-recover-tail-graded-module-general}
Let $A$ be a Noetherian graded ring and let $d$ be the lcm of the degrees
of generators of $A$ over $A_0$. Let $M$ be a finite graded $A$-module.
+Set $X = \text{Proj}(A)$ and let $\widetilde{M}$ be
+the quasi-coherent $\mathcal{O}_X$-module on $X$ associated to $M$.
+Let $k \in \mathbf{Z}$.
+\begin{enumerate}
+\item $N' = \bigoplus_{n \geq k} H^0(X, \widetilde{M(n)})$
+is a finite $A$-module,
+\item $N = \bigoplus_{n \geq k} H^0(X, \widetilde{M}(n))$
+is a finite $A$-module,
+\item there is a canonical map $N \to N'$,
+\item if $k$ is small enough there is a canonical map $M \to N'$,
+\item the map $M_n \to N'_n$ is an isomorphism for $n \gg 0$,
+\item $N_n \to N'_n$ is an isomorphism for $d | n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The map $N \to N'$ in (3) comes from
+Constructions, Equation (\ref{constructions-equation-multiply-more-generally})
+by taking global sections.
+
+\medskip\noindent
+By
+Constructions, Equation
+(\ref{constructions-equation-global-sections-more-generally})
+there is a map of graded $A$-modules
+$M \to \bigoplus_{n \in \mathbf{Z}} H^0(X, \widetilde{M(n)})$.
+If the generators of $M$ sit in degrees $\geq k$, then the image
+is contained in the submodule
+$N' \subset \bigoplus_{n \in \mathbf{Z}} H^0(X, \widetilde{M(n)})$
+and we get the map in (4).
+
+\medskip\noindent
+By Algebra, Lemmas \ref{algebra-lemma-graded-Noetherian} and
+\ref{algebra-lemma-S-plus-generated} the ring $A_0$ is Noetherian
+and $A$ is generated over $A_0$ by finitely many elements
+$f_1, \ldots, f_r$ homogeneous of positive degree.
+Let $d = \text{lcm}(\deg(f_i))$. Then we see that (6) holds
+for example by
+Constructions, Lemma \ref{constructions-lemma-where-invertible}.
+
+\medskip\noindent
+Because $M$ is a finite $A$-module we see that
+$\widetilde{M}$ is a finite type $\mathcal{O}_X$-module,
+i.e., a coherent $\mathcal{O}_X$-module.
+Thus part (2) follows from Lemma \ref{lemma-coherent-on-proj-general}.
+
+\medskip\noindent
+We will deduce (1) from (2) using a trick. For $q \in \{0, \ldots, d - 1\}$
+write
+$$
+{}^qN = \bigoplus\nolimits_{n + q \geq k} H^0(X, \widetilde{M(q)}(n))
+$$
+By part (2) these are finite $A$-modules. The Noetherian ring $A$
+is finite over $A^{(d)} = \bigoplus_{n \geq 0} A_{dn}$, because
+it is generated by $f_i$ over $A^{(d)}$ and $f_i^d \in A^{(d)}$.
+Hence ${}^qN$ is a finite $A^{(d)}$-module.
+Moreover, $A^{(d)}$ is Noetherian (follows from
+Algebra, Lemma \ref{algebra-lemma-dehomogenize-finite-type}).
+It follows that the $A^{(d)}$-submodule
+${}^qN^{(d)} = \bigoplus_{n \in \mathbf{Z}} {}^qN_{dn}$
+is a finite module over $A^{(d)}$. Using the isomorphisms
+$\widetilde{M(dn + q)} = \widetilde{M(q)}(dn)$ we can write
+$$
+N' =
+\bigoplus\nolimits_{q \in \{0, \ldots, d - 1\}}
+\bigoplus\nolimits_{dn + q \geq k}
+H^0(X, \widetilde{M(q)}(dn)) =
+\bigoplus\nolimits_{q \in \{0, \ldots, d - 1\}} {}^qN^{(d)}
+$$
+Thus $N'$ is finite over $A^{(d)}$ and a fortiori finite over $A$.
+Thus (1) is true.
+
+\medskip\noindent
+Let $K$ be a finite $A$-module such that $\widetilde{K} = 0$. We claim
+that $K_n = 0$ for $d|n$ and $n \gg 0$. Arguing as above we see that
+$K^{(d)}$ is a finite $A^{(d)}$-module. Let
+$x_1, \ldots, x_m \in K$ be homogeneous generators of $K^{(d)}$
+over $A^{(d)}$, say sitting in degrees $d_1, \ldots, d_m$ with $d | d_j$.
For each $i$ and $j$ there exists an $n_{ij} \geq 0$ such that
$f_i^{n_{ij}} x_j = 0$ in $K_{d_j + n_{ij}\deg(f_i)}$: if not then
$x_j/f_i^{d_j/\deg(f_i)} \in K_{(f_i)}$ would not be zero, i.e.,
$\widetilde{K}$ would not be zero. Here we use that $\deg(f_i) | d | d_j$
+for all $i, j$. We conclude that $K_n$ is zero for
+$n$ with $d | n$ and $n > \max_j (d_j + \sum_i n_{ij} \deg(f_i))$
as every element of $K_n$ is a sum of terms, each of which is a
monomial in the $f_i$ times one of the $x_j$ of total degree $n$.
+
+\medskip\noindent
+To finish the proof, we have to show that $M \to N'$ is an isomorphism
+in all sufficiently large degrees. The map $N \to N'$ induces an
+isomorphism $\widetilde{N} \to \widetilde{N'}$ because on the affine
+opens $D_+(f_i) = D_+(f_i^d)$ the corresponding modules are isomorphic:
+$N_{(f_i)} \cong N_{(f_i^d)} \cong N'_{(f_i^d)} \cong N'_{(f_i)}$
+by property (6).
+By Properties, Lemma \ref{properties-lemma-proj-quasi-coherent}
+we have a canonical isomorphism $\widetilde{N} \to \widetilde{M}$.
+The composition $\widetilde{N} \to \widetilde{M} \to \widetilde{N'}$
+is the isomorphism above (proof omitted; hint: look on standard
+affine opens to check this). Thus the map $M \to N'$ induces an isomorphism
+$\widetilde{M} \to \widetilde{N'}$.
+Let $K = \Ker(M \to N')$ and $Q = \Coker(M \to N')$.
+Recall that the functor
+$M \mapsto \widetilde{M}$ is exact, see
+Constructions, Lemma \ref{constructions-lemma-proj-sheaves}.
+Hence we see that $\widetilde{K} = 0$ and $\widetilde{Q} = 0$.
+By the result of the previous paragraph we see that $K_n = 0$ and
+$Q_n = 0$ for $d | n$ and $n \gg 0$. At this point we finally see
+the advantage of using $N'$ over $N$: the functor $M \leadsto N'$
+is compatible with shifts (immediate from the construction).
+Thus, repeating the whole argument with $M$ replaced by $M(q)$
+we find that $K_n = 0$ and $Q_n = 0$ for $n \equiv q \bmod d$
+and $n \gg 0$. Since there are
only finitely many congruence classes modulo $d$ the proof is finished.
+\end{proof}
+
+\noindent
+Let $A$ be a Noetherian graded ring. Recall that
+$A_+ = \bigoplus_{n > 0} A_n$ is the irrelevant ideal.
+By Algebra, Lemmas \ref{algebra-lemma-graded-Noetherian} and
+\ref{algebra-lemma-S-plus-generated} the ring $A_0$ is Noetherian
+and $A$ is generated over $A_0$ by finitely many elements
+$f_1, \ldots, f_r$ homogeneous of positive degree.
+Let $d = \text{lcm}(\deg(f_i))$. Let $M$ be a graded $A$-module.
+In this situation we say a homogeneous element $x \in M$
+is {\it irrelevant}\footnote{This is nonstandard notation.} if
+$$
+(A_+ x)_{nd} = 0\text{ for all }n \gg 0
+$$
+If $x \in M$ is homogeneous and irrelevant and $f \in A$
+is homogeneous, then $fx$ is irrelevant too. Hence
the set of irrelevant elements generates a graded submodule
+$M_{irrelevant} \subset M$. We will say $M$ is {\it irrelevant}
+if every homogeneous element of $M$ is irrelevant, i.e.,
+if $M_{irrelevant} = M$.
+If $M$ is finitely generated, then we see that
+this is equivalent to $M_{nd} = 0$ for $n \gg 0$.
+Denote $\text{Mod}_A$ the category of graded $A$-modules,
+$\text{Mod}^{fg}_A$ the full subcategory of finitely generated ones,
+and $\text{Mod}^{fg}_{A, irrelevant}$ the full subcategory of
+irrelevant modules.
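To see why torsion (in the sense of the previous section) is replaced by
irrelevance here, consider $A = k[x]$ with $\deg(x) = 2$, so that $d = 2$
and $X = \text{Proj}(A) = \Spec(k)$. The finite graded module
$$
M = A(-1), \quad M_n = A_{n - 1}
$$
is concentrated in odd degrees, hence $M_{2n} = 0$ for all $n$ and $M$ is
irrelevant, although $M_n \not= 0$ for arbitrarily large $n$.
Correspondingly $M_{(x)} = 0$ and $\widetilde{M} = 0$.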
+
+\begin{proposition}
+\label{proposition-coherent-modules-on-proj-general}
+Let $A$ be a Noetherian graded ring. Set $X = \text{Proj}(A)$. The functor
+$M \mapsto \widetilde M$ induces an equivalence
+$$
+\text{Mod}^{fg}_A/\text{Mod}^{fg}_{A, irrelevant}
+\longrightarrow
+\textit{Coh}(\mathcal{O}_X)
+$$
+whose quasi-inverse is given by
+$\mathcal{F} \longmapsto \bigoplus_{n \geq 0} \Gamma(X, \mathcal{F}(n))$.
+\end{proposition}
+
+\begin{proof}
+We urge the reader to read the proof in the case where $A$
+is generated in degree $1$ first, see
+Proposition \ref{proposition-coherent-modules-on-proj}.
+Let $f_1, \ldots, f_r \in A$ be homogeneous elements of positive degree
+which generate $A$ over $A_0$. Let $d$ be the lcm of the degrees
+$d_i$ of $f_i$. Let $M$ be a finite $A$-module.
+Let us show that $\widetilde{M}$ is zero if and
+only if $M$ is an irrelevant graded $A$-module (as defined above
+the statement of the proposition). Namely, let $x \in M$ be a
+homogeneous element.
+Choose $k \in \mathbf{Z}$ sufficiently small and let
+$N \to N'$ and $M \to N'$ be as in
+Lemma \ref{lemma-recover-tail-graded-module-general}.
+We may also pick $l$ sufficiently large such that
$M_n \to N'_n$ is an isomorphism for $n \geq l$.
+If $\widetilde{M}$ is zero, then $N = 0$.
+Thus for any $f \in A_+$ homogeneous with
+$\deg(f) + \deg(x) = nd$ and $nd > l$ we see that $fx$ is zero
+because $N_{nd} \to N'_{nd}$ and $M_{nd} \to N'_{nd}$ are isomorphisms.
+Hence $x$ is irrelevant.
+Conversely, assume $M$ is irrelevant. Then $M_{nd}$ is zero
+for $n \gg 0$ (see discussion above proposition).
+Clearly this implies that $M_{(f_i)} = M_{(f_i^{d/\deg(f_i)})} = 0$,
+whence $\widetilde{M} = 0$ by construction.
+
+\medskip\noindent
+It follows that the subcategory $\text{Mod}^{fg}_{A, irrelevant}$
+is a Serre subcategory of $\text{Mod}^{fg}_A$ as the kernel
+of the exact functor $M \mapsto \widetilde M$, see
+Homology, Lemma \ref{homology-lemma-kernel-exact-functor}
+and Constructions, Lemma \ref{constructions-lemma-proj-sheaves}.
+Hence the quotient category on the left of the arrow is defined
+in Homology, Lemma \ref{homology-lemma-serre-subcategory-is-kernel}.
+To define the functor of the proposition, it suffices to show that
+the functor $M \mapsto \widetilde M$ sends irrelevant modules to $0$
+which we have shown above.
+
+\medskip\noindent
+By Lemma \ref{lemma-coherent-on-proj-general} the proposed quasi-inverse
+makes sense. Namely, the lemma shows that
+$\mathcal{F} \longmapsto \bigoplus_{n \geq 0} \Gamma(X, \mathcal{F}(n))$
+is a functor $\textit{Coh}(\mathcal{O}_X) \to \text{Mod}^{fg}_A$
+which we can compose with the quotient functor
+$\text{Mod}^{fg}_A \to \text{Mod}^{fg}_A/\text{Mod}^{fg}_{A, irrelevant}$.
+
+\medskip\noindent
+By Lemma \ref{lemma-recover-tail-graded-module-general}
+the composite left to right to left is isomorphic to the identity functor.
+Namely, let $M$ be a finite graded $A$-module and let
$k \in \mathbf{Z}$ be sufficiently small and let
+$N \to N'$ and $M \to N'$ be as in
+Lemma \ref{lemma-recover-tail-graded-module-general}.
+Then the kernel and cokernel of $M \to N'$ are nonzero in
+only finitely many degrees, hence are irrelevant. Moreover, the
+kernel and cokernel of the map $N \to N'$ are zero in all sufficiently
+large degrees divisible by $d$, hence these are irrelevant modules too.
+Thus $M \to N'$ and $N \to N'$ are both isomorphisms in the quotient
+category, as desired.
+
+\medskip\noindent
+Finally, let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Set $M = \bigoplus_{n \in \mathbf{Z}} \Gamma(X, \mathcal{F}(n))$
+viewed as a graded $A$-module, so that our functor sends $\mathcal{F}$ to
+$M_{\geq 0} = \bigoplus_{n \geq 0} M_n$.
+By Properties, Lemma \ref{properties-lemma-proj-quasi-coherent}
+the canonical map $\widetilde M \to \mathcal{F}$
+is an isomorphism. Since the inclusion map
+$M_{\geq 0} \to M$ defines an isomorphism
+$\widetilde{M_{\geq 0}} \to \widetilde M$ we conclude that
+the composite right to left to right is isomorphic to the identity
+functor as well.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Higher direct images along projective morphisms}
+\label{section-projective-pushforward}
+
+\noindent
+We first state and prove a result for when the base is affine
+and then we deduce some results for projective morphisms.
+
+\begin{lemma}
+\label{lemma-coherent-proper-ample}
+Let $R$ be a Noetherian ring. Let $X \to \Spec(R)$ be a proper morphism.
+Let $\mathcal{L}$ be an ample invertible sheaf on $X$. Let $\mathcal{F}$
+be a coherent $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item The graded ring
+$A = \bigoplus_{d \geq 0} H^0(X, \mathcal{L}^{\otimes d})$
+is a finitely generated $R$-algebra.
+\item There exists an $r \geq 0$ and
+$d_1, \ldots, d_r \in \mathbf{Z}$ and a surjection
+$$
+\bigoplus\nolimits_{j = 1, \ldots, r} \mathcal{L}^{\otimes d_j}
+\longrightarrow \mathcal{F}.
+$$
+\item For any $p$ the cohomology group $H^p(X, \mathcal{F})$ is a finite
+$R$-module.
+\item If $p > 0$, then
+$H^p(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d}) = 0$
+for all $d$ large enough.
+\item For any $k \in \mathbf{Z}$ the graded $A$-module
+$$
+\bigoplus\nolimits_{d \geq k}
+H^0(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d})
+$$
+is a finite $A$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-over-affine-ample-very-ample}
+there exists a $d > 0$ and an immersion $i : X \to \mathbf{P}^n_R$
+such that $\mathcal{L}^{\otimes d} \cong i^*\mathcal{O}_{\mathbf{P}^n_R}(1)$.
+Since $X$ is proper over $R$ the morphism $i$ is a closed immersion
+(Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}).
+Thus we have $H^i(X, \mathcal{G}) = H^i(\mathbf{P}^n_R, i_*\mathcal{G})$
+for any quasi-coherent sheaf $\mathcal{G}$ on $X$
+(by Lemma \ref{lemma-relative-affine-cohomology} and the fact that
+closed immersions are affine, see
+Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-affine}).
+Moreover, if $\mathcal{G}$ is coherent, then $i_*\mathcal{G}$
+is coherent as well (Lemma \ref{lemma-i-star-equivalence}).
+We will use these facts without further mention.
+
+\medskip\noindent
+Proof of (1). Set $S = R[T_0, \ldots, T_n]$ so that
+$\mathbf{P}^n_R = \text{Proj}(S)$.
+Observe that $A$ is an $S$-algebra (but the ring map $S \to A$ is not
+a homomorphism of graded rings because $S_n$ maps into $A_{dn}$).
+By the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula})
+we have
+$$
+i_*(\mathcal{L}^{\otimes nd + q}) =
+i_*(\mathcal{L}^{\otimes q})
+\otimes_{\mathcal{O}_{\mathbf{P}^n_R}}
+\mathcal{O}_{\mathbf{P}^n_R}(n)
+$$
+for all $n \in \mathbf{Z}$. We conclude that $\bigoplus_{n \geq 0} A_{nd + q}$
+is a finite graded $S$-module by Lemma \ref{lemma-coherent-projective}.
+Since
$A = \bigoplus_{q \in \{0, \ldots, d - 1\}} \bigoplus_{n \geq 0} A_{nd + q}$
+we see that $A$ is finite as an $S$-algebra, hence (1) is true.
+
+\medskip\noindent
+Proof of (2). This follows from
+Properties, Proposition \ref{properties-proposition-characterize-ample}.
+
+\medskip\noindent
+Proof of (3). Apply Lemma \ref{lemma-coherent-projective}
+and use $H^p(X, \mathcal{F}) = H^p(\mathbf{P}^n_R, i_*\mathcal{F})$.
+
+\medskip\noindent
+Proof of (4). Fix $p > 0$. By the projection formula we have
+$$
+i_*(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes nd + q}) =
+i_*(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes q})
+\otimes_{\mathcal{O}_{\mathbf{P}^n_R}}
+\mathcal{O}_{\mathbf{P}^n_R}(n)
+$$
+for all $n \in \mathbf{Z}$. By Lemma \ref{lemma-coherent-projective}
we conclude that
$H^p(X, \mathcal{F} \otimes \mathcal{L}^{\otimes nd + q}) = 0$
+for $n \gg 0$. Since there are only finitely many congruence classes
+of integers modulo $d$ this proves (4).
+
+\medskip\noindent
+Proof of (5). Fix an integer $k$. Set
+$M = \bigoplus_{n \geq k} H^0(X, \mathcal{F} \otimes \mathcal{L}^{\otimes n})$.
+Arguing as above we conclude that $\bigoplus_{nd + q \geq k} M_{nd + q}$
+is a finite graded $S$-module. Since
+$M = \bigoplus_{q \in \{0, \ldots, d - 1\}}
+\bigoplus_{nd + q \geq k} M_{nd + q}$
+we see that $M$ is finite as an $S$-module. Since the $S$-module structure
+factors through the ring map $S \to A$, we conclude that $M$ is finite
+as an $A$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kill-by-twisting}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\mathcal{L}$ be an invertible sheaf on $X$.
+Assume that
+\begin{enumerate}
+\item $S$ is Noetherian,
+\item $f$ is proper,
+\item $\mathcal{F}$ is coherent, and
+\item $\mathcal{L}$ is relatively ample on $X/S$.
+\end{enumerate}
+Then there exists an $n_0$ such that for all $n \geq n_0$
+we have
+$$
+R^pf_*\left(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}\right)
+=
+0
+$$
+for all $p > 0$.
+\end{lemma}
+
+\begin{proof}
+Choose a finite affine open covering $S = \bigcup V_j$ and
+set $X_j = f^{-1}(V_j)$.
+Clearly, if we solve the question for each of the finitely many
systems $(X_j \to V_j, \mathcal{L}|_{X_j}, \mathcal{F}|_{X_j})$
+then the result follows. Thus we may assume $S$ is affine.
+In this case the vanishing of
+$R^pf_*(\mathcal{F} \otimes \mathcal{L}^{\otimes n})$
+is equivalent to the vanishing of
+$H^p(X, \mathcal{F} \otimes \mathcal{L}^{\otimes n})$, see
+Lemma \ref{lemma-quasi-coherence-higher-direct-images-application}.
+Thus the required vanishing follows
+from Lemma \ref{lemma-coherent-proper-ample} (which applies because
+$\mathcal{L}$ is ample on $X$ by Morphisms, Lemma
+\ref{morphisms-lemma-finite-type-over-affine-ample-very-ample}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-projective-pushforward}
+Let $S$ be a locally Noetherian scheme.
+Let $f : X \to S$ be a locally projective morphism.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Then $R^if_*\mathcal{F}$ is a coherent $\mathcal{O}_S$-module
+for all $i \geq 0$.
+\end{lemma}
+
+\begin{proof}
+We first remark that a locally projective morphism is proper
+(Morphisms, Lemma \ref{morphisms-lemma-locally-projective-proper})
+and hence of finite type.
+In particular $X$ is locally Noetherian
+(Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian})
+and hence the statement makes sense.
+Moreover, by Lemma \ref{lemma-quasi-coherence-higher-direct-images}
+the sheaves $R^pf_*\mathcal{F}$ are quasi-coherent.
+
+\medskip\noindent
+Having said this the statement is local on $S$ (for example by
+Cohomology, Lemma \ref{cohomology-lemma-localize-higher-direct-images}).
+Hence we may assume $S = \Spec(R)$ is the spectrum of
+a Noetherian ring, and $X$ is a closed subscheme of
+$\mathbf{P}^n_R$ for some $n$, see
+Morphisms, Lemma \ref{morphisms-lemma-characterize-locally-projective}.
+In this case, the sheaves $R^pf_*\mathcal{F}$ are the quasi-coherent
+sheaves associated to the $R$-modules $H^p(X, \mathcal{F})$, see
+Lemma \ref{lemma-quasi-coherence-higher-direct-images-application}.
+Hence it suffices to show that $R$-modules $H^p(X, \mathcal{F})$
+are finite $R$-modules (Lemma \ref{lemma-coherent-Noetherian}).
+This follows from Lemma \ref{lemma-coherent-proper-ample}
+(because the restriction of $\mathcal{O}_{\mathbf{P}^n_R}(1)$
+to $X$ is ample on $X$).
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Ample invertible sheaves and cohomology}
+\label{section-ample-cohomology}
+
+\noindent
+Here is a criterion for ampleness on proper schemes over affine bases
+in terms of vanishing of cohomology after twisting.
+
+\begin{lemma}
+\label{lemma-vanshing-gives-ample}
+\begin{reference}
+\cite[III Proposition 2.6.1]{EGA}
+\end{reference}
+Let $R$ be a Noetherian ring. Let $f : X \to \Spec(R)$ be a proper morphism.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{L}$ is ample on $X$ (this is equivalent to many other
+things, see
+Properties, Proposition \ref{properties-proposition-characterize-ample} and
+Morphisms, Lemma
+\ref{morphisms-lemma-finite-type-over-affine-ample-very-ample}),
+\item for every coherent $\mathcal{O}_X$-module $\mathcal{F}$ there exists
+an $n_0 \geq 0$ such that
+$H^p(X, \mathcal{F} \otimes \mathcal{L}^{\otimes n}) = 0$ for all $n \geq n_0$
+and $p > 0$, and
+\item for every quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_X$, there exists an $n \geq 1$
+such that $H^1(X, \mathcal{I} \otimes \mathcal{L}^{\otimes n}) = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) follows from
+Lemma \ref{lemma-coherent-proper-ample}.
+The implication (2) $\Rightarrow$ (3) is trivial.
+The implication (3) $\Rightarrow$ (1) is
+Lemma \ref{lemma-quasi-compact-h1-zero-invertible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-finite-morphism-ample}
+Let $R$ be a Noetherian ring. Let $f : Y \to X$ be a morphism of
+schemes proper over $R$. Let $\mathcal{L}$ be an
+invertible $\mathcal{O}_X$-module. Assume $f$ is finite and surjective.
+Then $\mathcal{L}$ is ample if and only if $f^*\mathcal{L}$ is ample.
+\end{lemma}
+
+\begin{proof}
+The pullback of an ample invertible sheaf by a quasi-affine morphism
+is ample, see Morphisms, Lemma
+\ref{morphisms-lemma-pullback-ample-tensor-relatively-ample}.
+This proves one of the implications as a finite morphism is affine
+by definition.
+
+\medskip\noindent
+Assume that $f^*\mathcal{L}$ is ample. Let $P$ be the following property on
+coherent $\mathcal{O}_X$-modules $\mathcal{F}$: there exists an $n_0$
+such that $H^p(X, \mathcal{F} \otimes \mathcal{L}^{\otimes n}) = 0$
+for all $n \geq n_0$ and $p > 0$. We will prove that $P$ holds
+for any coherent $\mathcal{O}_X$-module $\mathcal{F}$, which implies
+$\mathcal{L}$ is ample by Lemma \ref{lemma-vanshing-gives-ample}.
+We are going to apply Lemma \ref{lemma-property-higher-rank-cohomological}.
+Thus we have to verify (1), (2) and (3) of that lemma for $P$.
+Property (1) follows from the long exact cohomology sequence associated
+to a short exact sequence of sheaves and the fact that tensoring with
+an invertible sheaf is an exact functor. Property (2) follows since
+$H^p(X, -)$ is an additive functor. To see (3) let $Z \subset X$ be
+an integral closed subscheme with generic point $\xi$.
+Let $\mathcal{F}$ be a coherent sheaf on $Y$ such that
+the support of $f_*\mathcal{F}$ is equal to $Z$
+and $(f_*\mathcal{F})_\xi$ is annihilated by $\mathfrak m_\xi$,
+see Lemma \ref{lemma-finite-morphism-Noetherian}. We claim that
+taking $\mathcal{G} = f_*\mathcal{F}$ works. We only have to verify
+part (3)(c) of Lemma \ref{lemma-property-higher-rank-cohomological}.
+Hence assume that $\mathcal{J} \subset \mathcal{O}_X$ is a
+quasi-coherent sheaf of ideals such that
+$\mathcal{J}_\xi = \mathcal{O}_{X, \xi}$.
+A finite morphism is affine hence by
+Lemma \ref{lemma-affine-morphism-projection-ideal} we see that
+$\mathcal{J}\mathcal{G} = f_*(f^{-1}\mathcal{J}\mathcal{F})$.
+Also, as pointed out in the proof of
+Lemma \ref{lemma-affine-morphism-projection-ideal} the sheaf
+$f^{-1}\mathcal{J}\mathcal{F}$ is a coherent $\mathcal{O}_Y$-module.
+As $\mathcal{L}$ is ample we see from Lemma \ref{lemma-vanshing-gives-ample}
+that there exists an $n_0$ such that
+$$
+H^p(Y, f^{-1}\mathcal{J}\mathcal{F}
+\otimes_{\mathcal{O}_Y} f^*\mathcal{L}^{\otimes n}) = 0,
+$$
+for $n \geq n_0$ and $p > 0$. Since $f$ is finite, hence affine, we see that
+\begin{align*}
+H^p(X, \mathcal{J}\mathcal{G} \otimes_{\mathcal{O}_X}
+\mathcal{L}^{\otimes n})
+& =
+H^p(X, f_*(f^{-1}\mathcal{J}\mathcal{F}) \otimes_{\mathcal{O}_X}
+\mathcal{L}^{\otimes n}) \\
+& =
+H^p(X, f_*(f^{-1}\mathcal{J}\mathcal{F} \otimes_{\mathcal{O}_Y}
+f^*\mathcal{L}^{\otimes n})) \\
+& =
+H^p(Y, f^{-1}\mathcal{J}\mathcal{F} \otimes_{\mathcal{O}_Y}
+f^*\mathcal{L}^{\otimes n}) = 0
+\end{align*}
+Here we have used the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula}) and
+Lemma \ref{lemma-relative-affine-cohomology}.
+Hence the quasi-coherent subsheaf $\mathcal{G}' = \mathcal{J}\mathcal{G}$
+satisfies $P$. This verifies property (3)(c) of
+Lemma \ref{lemma-property-higher-rank-cohomological} as desired.
+\end{proof}
+
+\noindent
+Cohomology is functorial. In particular, given a ringed space $X$,
+an invertible $\mathcal{O}_X$-module $\mathcal{L}$, a section
+$s \in \Gamma(X, \mathcal{L})$ we get maps
+$$
+H^p(X, \mathcal{F})
+\longrightarrow
+H^p(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}), \quad
+\xi \longmapsto s\xi
+$$
+induced by the map
+$\mathcal{F} \to \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}$
+which is multiplication by $s$. We set
+$\Gamma_*(X, \mathcal{L}) =
+\bigoplus_{n \geq 0} \Gamma(X, \mathcal{L}^{\otimes n})$
+as a graded ring, see
+Modules, Definition \ref{modules-definition-gamma-star}.
+Given a sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$ and an integer
+$p \geq 0$ we set
+$$
+H^p_*(X, \mathcal{L}, \mathcal{F}) =
+\bigoplus\nolimits_{n \in \mathbf{Z}} H^p(X,
+\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n})
+$$
+This is a graded $\Gamma_*(X, \mathcal{L})$-module by the multiplication
+defined above. Warning: the notation $H^p_*(X, \mathcal{L}, \mathcal{F})$
+is nonstandard.
+
+\begin{lemma}
+\label{lemma-invert-s-cohomology}
+Let $X$ be a scheme. Let $\mathcal{L}$ be an invertible sheaf on $X$.
+Let $s \in \Gamma(X, \mathcal{L})$.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+If $X$ is quasi-compact and quasi-separated, the canonical map
+$$
+H^p_*(X, \mathcal{L}, \mathcal{F})_{(s)}
+\longrightarrow
+H^p(X_s, \mathcal{F})
+$$
+which maps $\xi/s^n$ to $s^{-n}\xi$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Note that for $p = 0$ this is
+Properties, Lemma \ref{properties-lemma-invert-s-sections}.
+We will prove the statement using the induction
+principle (Lemma \ref{lemma-induction-principle}) where for
+$U \subset X$ quasi-compact open we let $P(U)$ be the property:
+for all $p \geq 0$ the map
+$$
+H^p_*(U, \mathcal{L}, \mathcal{F})_{(s)}
+\longrightarrow
+H^p(U_s, \mathcal{F})
+$$
+is an isomorphism.
+
+\medskip\noindent
+If $U$ is affine, then both sides of the arrow displayed above
+are zero for $p > 0$ by
+Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}
+and
+Properties, Lemma \ref{properties-lemma-affine-cap-s-open}
+and the statement is true. If $P$ is true for $U$, $V$, and $U \cap V$,
+then we can use the Mayer-Vietoris sequences
+(Cohomology, Lemma \ref{cohomology-lemma-mayer-vietoris}) to obtain
+a map of long exact sequences
+$$
+\xymatrix{
+H^{p - 1}_*(U \cap V, \mathcal{L}, \mathcal{F})_{(s)} \ar[r] \ar[d] &
+H^p_*(U \cup V, \mathcal{L}, \mathcal{F})_{(s)} \ar[r] \ar[d] &
+H^p_*(U, \mathcal{L}, \mathcal{F})_{(s)}
+\oplus
+H^p_*(V, \mathcal{L}, \mathcal{F})_{(s)} \ar[d] \\
+H^{p - 1}(U_s \cap V_s, \mathcal{F}) \ar[r]&
+H^p(U_s \cup V_s, \mathcal{F}) \ar[r] &
+H^p(U_s, \mathcal{F})
+\oplus
+H^p(V_s, \mathcal{F})
+}
+$$
+(only a snippet shown). Observe that $U_s \cap V_s = (U \cap V)_s$ and
+that $U_s \cup V_s = (U \cup V)_s$. Thus the left and right vertical
+maps are isomorphisms (as well as one more to the right and one more
+to the left which are not shown in the diagram).
+We conclude that $P(U \cup V)$ holds by
+the 5-lemma (Homology, Lemma \ref{homology-lemma-five-lemma}).
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-section-affine-open-kills-classes}
+Let $X$ be a scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $s \in \Gamma(X, \mathcal{L})$ be a section.
+Assume that
+\begin{enumerate}
+\item $X$ is quasi-compact and quasi-separated, and
+\item $X_s$ is affine.
+\end{enumerate}
+Then for every quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ and
+every $p > 0$ and all $\xi \in H^p(X, \mathcal{F})$ there exists
+an $n \geq 0$ such that $s^n\xi = 0$ in
+$H^p(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n})$.
+\end{lemma}
+
+\begin{proof}
+Recall that $H^p(X_s, \mathcal{G})$ is zero for every quasi-coherent
+module $\mathcal{G}$ by
+Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}.
+Hence the lemma follows from
+Lemma \ref{lemma-invert-s-cohomology}.
+\end{proof}
+
+\noindent
+For a more general version of the following lemma see
+Limits, Lemma \ref{limits-lemma-ample-on-reduction}.
+
+\begin{lemma}
+\label{lemma-ample-on-reduction}
+Let $i : Z \to X$ be a closed immersion of Noetherian schemes
+inducing a homeomorphism of underlying topological spaces.
+Let $\mathcal{L}$ be an invertible sheaf on $X$.
Then $i^*\mathcal{L}$ is ample on $Z$ if and only if
+$\mathcal{L}$ is ample on $X$.
+\end{lemma}
+
+\begin{proof}
+If $\mathcal{L}$ is ample, then $i^*\mathcal{L}$ is ample for
+example by Morphisms, Lemma
+\ref{morphisms-lemma-pullback-ample-tensor-relatively-ample}.
+Assume $i^*\mathcal{L}$ is ample. We have to show that $\mathcal{L}$
+is ample on $X$.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the coherent sheaf of ideals
+cutting out the closed subscheme $Z$. Since $i(Z) = X$ set theoretically
+we see that $\mathcal{I}^n = 0$ for some $n$ by
+Lemma \ref{lemma-power-ideal-kills-sheaf}.
+Consider the sequence
+$$
+X = Z_n \supset Z_{n - 1} \supset Z_{n - 2} \supset \ldots \supset Z_1 = Z
+$$
+of closed subschemes cut out by
+$0 = \mathcal{I}^n \subset \mathcal{I}^{n - 1} \subset \ldots \subset
$\mathcal{I}$. Then each of the closed immersions $Z_{i - 1} \to Z_i$
+is defined by a coherent sheaf of ideals of square zero. In this way
+we reduce to the case that $\mathcal{I}^2 = 0$.
+
+\medskip\noindent
+Consider the short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}_X \to i_*\mathcal{O}_Z \to 0
+$$
+of quasi-coherent $\mathcal{O}_X$-modules. Tensoring with
+$\mathcal{L}^{\otimes n}$ we obtain short exact sequences
+\begin{equation}
+\label{equation-ses}
+0 \to \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}
+\to \mathcal{L}^{\otimes n} \to i_*i^*\mathcal{L}^{\otimes n} \to 0
+\end{equation}
+As $\mathcal{I}^2 = 0$, we can use
+Morphisms, Lemma \ref{morphisms-lemma-i-star-equivalence}
+to think of $\mathcal{I}$ as a quasi-coherent $\mathcal{O}_Z$-module
+and then $\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n} =
+\mathcal{I} \otimes_{\mathcal{O}_Z} i^*\mathcal{L}^{\otimes n}$ with
+obvious abuse of notation.
+Moreover, the cohomology of this sheaf over $Z$ is canonically
+the same as the cohomology of this sheaf over $X$ (as $i$ is a
+homeomorphism).
+
+\medskip\noindent
+Let $x \in X$ be a point and denote $z \in Z$ the corresponding point.
+Because $i^*\mathcal{L}$ is ample there exists an $n$ and a section
+$s \in \Gamma(Z, i^*\mathcal{L}^{\otimes n})$ with $z \in Z_s$
+and with $Z_s$ affine. The obstruction to lifting $s$ to a section
+of $\mathcal{L}^{\otimes n}$ over $X$ is the boundary
+$$
+\xi = \partial s \in
+H^1(X, \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}) =
+H^1(Z, \mathcal{I} \otimes_{\mathcal{O}_Z} i^*\mathcal{L}^{\otimes n})
+$$
+coming from the short exact sequence of sheaves (\ref{equation-ses}).
+If we replace $s$ by $s^{e + 1}$ then $\xi$ is replaced by
+$\partial(s^{e + 1}) = (e + 1) s^e \xi$ in
+$H^1(Z, \mathcal{I} \otimes_{\mathcal{O}_Z} i^*\mathcal{L}^{\otimes (e + 1)n})$
+because the boundary map for
+$$
+0 \to
+\bigoplus\nolimits_{m \geq 0}
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes m} \to
+\bigoplus\nolimits_{m \geq 0}
+\mathcal{L}^{\otimes m} \to
+\bigoplus\nolimits_{m \geq 0}
+i_*i^*\mathcal{L}^{\otimes m} \to 0
+$$
+is a derivation by Cohomology, Lemma
+\ref{cohomology-lemma-boundary-derivation-over-cup-product}. By
+Lemma \ref{lemma-section-affine-open-kills-classes}
+we see that $s^e \xi$ is zero for $e$ large enough.
+Hence, after replacing $s$ by a power, we can assume $s$ is the image
+of a section $s' \in \Gamma(X, \mathcal{L}^{\otimes n})$.
+Then $X_{s'}$ is an open subscheme and $Z_s \to X_{s'}$ is a surjective
+closed immersion of Noetherian schemes with $Z_s$ affine. Hence
$X_{s'}$ is affine by
+Lemma \ref{lemma-image-affine-finite-morphism-affine-Noetherian} and
+we conclude that $\mathcal{L}$ is ample.
+\end{proof}
+
+\noindent
+For a more general version of the following lemma see
+Limits, Lemma \ref{limits-lemma-thickening-quasi-affine}.
+
+\begin{lemma}
+\label{lemma-thickening-quasi-affine}
+Let $i : Z \to X$ be a closed immersion of Noetherian schemes
+inducing a homeomorphism of underlying topological spaces.
+Then $X$ is quasi-affine if and only if $Z$ is quasi-affine.
+\end{lemma}
+
+\begin{proof}
+Recall that a scheme is quasi-affine
+if and only if the structure sheaf is ample, see
+Properties, Lemma \ref{properties-lemma-quasi-affine-O-ample}.
+Hence if $Z$ is quasi-affine, then $\mathcal{O}_Z$ is ample,
+hence $\mathcal{O}_X$ is ample by
+Lemma \ref{lemma-ample-on-reduction}, hence
$X$ is quasi-affine. The converse, which can also be seen in an
elementary way, follows by reading the argument just given backwards.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-affine-in-presence-ample}
+Let $X$ be a scheme. Let $\mathcal{L}$ be an ample invertible
+$\mathcal{O}_X$-module. Let $n_0$ be an integer.
+If $H^p(X, \mathcal{L}^{\otimes -n}) = 0$ for $n \geq n_0$ and $p > 0$,
+then $X$ is affine.
+\end{lemma}
+
+\begin{proof}
We claim $H^p(X, \mathcal{F}) = 0$ for every quasi-coherent
$\mathcal{O}_X$-module $\mathcal{F}$ and all $p > 0$. Since $X$ is
quasi-compact by Properties, Definition \ref{properties-definition-ample},
the claim finishes the proof
by Lemma \ref{lemma-quasi-compact-h1-zero-covering}.
+The scheme $X$ is separated by
+Properties, Lemma \ref{properties-lemma-ample-separated}.
+Say $X$ is covered by $e + 1$ affine opens. Then
+$H^p(X, \mathcal{F}) = 0$ for $p > e$, see
+Lemma \ref{lemma-vanishing-nr-affines}. Thus we may use descending
+induction on $p$ to prove the claim. Writing $\mathcal{F}$
+as a filtered colimit of finite type quasi-coherent
+modules (Properties, Lemma
+\ref{properties-lemma-quasi-coherent-colimit-finite-type})
+and using Cohomology, Lemma
+\ref{cohomology-lemma-quasi-separated-cohomology-colimit}
+we may assume $\mathcal{F}$ is of finite type.
+Then we can choose $n > n_0$ such that
+$\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}$
+is globally generated, see Properties, Proposition
+\ref{properties-proposition-characterize-ample}.
+This means there is a short exact sequence
+$$
+0 \to \mathcal{F}' \to
+\bigoplus\nolimits_{i \in I} \mathcal{L}^{\otimes -n}
+\to \mathcal{F} \to 0
+$$
+for some set $I$ (in fact we can choose $I$ finite). By
+induction hypothesis we have $H^{p + 1}(X, \mathcal{F}') = 0$
+and by assumption (combined with the already used
+commutation of cohomology with colimits)
+we have $H^p(X, \bigoplus_{i \in I} \mathcal{L}^{\otimes -n}) = 0$.
+From the long exact cohomology sequence we conclude that
+$H^p(X, \mathcal{F}) = 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-affine-if-quasi-affine}
+Let $X$ be a quasi-affine scheme.
+If $H^p(X, \mathcal{O}_X) = 0$ for $p > 0$,
+then $X$ is affine.
+\end{lemma}
+
+\begin{proof}
+Since $\mathcal{O}_X$ is ample by
+Properties, Lemma \ref{properties-lemma-quasi-affine-O-ample}
+this follows from Lemma \ref{lemma-affine-in-presence-ample}.
+\end{proof}
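\medskip\noindent
As an illustration of Lemma \ref{lemma-affine-if-quasi-affine}, consider the
punctured plane $X = \mathbf{A}^2_k \setminus \{0\}$ over a field $k$,
a standard example of a quasi-affine scheme which is not affine.
Computing {\v C}ech cohomology with respect to the covering of $X$ by
the two affine opens $D(x)$ and $D(y)$ one finds
$$
H^1(X, \mathcal{O}_X) =
k[x, y, x^{-1}, y^{-1}]/
\left(k[x, y, x^{-1}] + k[x, y, y^{-1}]\right) =
\bigoplus\nolimits_{i, j < 0} k \cdot x^iy^j
$$
which is nonzero, as it must be by the lemma.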
+
+
+
+
+
+
+
+\section{Chow's Lemma}
+\label{section-chows-lemma}
+
+\noindent
+In this section we prove Chow's lemma in the Noetherian
+case (Lemma \ref{lemma-chow-Noetherian}).
+In
+Limits, Section \ref{limits-section-chows-lemma}
+we prove some variants for the non-Noetherian case.
+
+\begin{lemma}
+\label{lemma-chow-Noetherian}
+\begin{reference}
+\cite[II Theorem 5.6.1(a)]{EGA}
+\end{reference}
+Let $S$ be a Noetherian scheme.
+Let $f : X \to S$ be a separated morphism of finite type.
+Then there exist an $n \geq 0$ and a diagram
+$$
+\xymatrix{
+X \ar[rd] & X' \ar[d] \ar[l]^\pi \ar[r] & \mathbf{P}^n_S \ar[dl] \\
+& S &
+}
+$$
+where $X' \to \mathbf{P}^n_S$ is an immersion, and
+$\pi : X' \to X$ is proper and surjective. Moreover, we may
+arrange it such that there exists a dense open subscheme
+$U \subset X$ such that $\pi^{-1}(U) \to U$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+All of the schemes we will encounter during the rest of the proof
+are going to be of finite type over the Noetherian scheme $S$ and
+hence Noetherian
+(see Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}).
+All morphisms between them will automatically be quasi-compact, locally of
+finite type and quasi-separated, see
+Morphisms, Lemma \ref{morphisms-lemma-permanence-finite-type} and
+Properties,
+Lemmas \ref{properties-lemma-locally-Noetherian-quasi-separated} and
+\ref{properties-lemma-morphism-Noetherian-schemes-quasi-compact}.
+
+\medskip\noindent
+The scheme $X$ has only finitely many irreducible components
+(Properties, Lemma \ref{properties-lemma-Noetherian-irreducible-components}).
+Say $X = X_1 \cup \ldots \cup X_r$ is the decomposition
+of $X$ into irreducible components.
+Let $\eta_i \in X_i$ be the generic point.
+For every point $x \in X$ there exists an affine open
+$U_x \subset X$ which contains $x$ and each of the generic
+points $\eta_i$. See
+Properties, Lemma \ref{properties-lemma-point-and-maximal-points-affine}.
+Since $X$ is quasi-compact, we can find a finite affine open
+covering $X = U_1 \cup \ldots \cup U_m$ such that
+each $U_i$ contains $\eta_1, \ldots, \eta_r$.
+In particular we conclude that the open
+$U = U_1 \cap \ldots \cap U_m \subset X$ is
+a dense open. This and the fact that the $U_i$ are affine opens
+covering $X$ are all that we will use below.
+
+\medskip\noindent
+Let $X^* \subset X$ be the scheme theoretic closure of $U \to X$, see
+Morphisms, Definition \ref{morphisms-definition-scheme-theoretic-image}.
+Let $U_i^* = X^* \cap U_i$. Note that $U_i^*$ is a closed subscheme
+of $U_i$. Hence $U_i^*$ is affine. Since $U$ is dense in $X$ the
+morphism $X^* \to X$ is a surjective closed immersion. It is an
+isomorphism over $U$. Hence we may replace $X$ by $X^*$ and
+$U_i$ by $U_i^*$ and assume that $U$ is scheme theoretically dense
+in $X$, see
+Morphisms, Definition \ref{morphisms-definition-scheme-theoretically-dense}.
+
+\medskip\noindent
+By Morphisms, Lemma \ref{morphisms-lemma-quasi-projective-finite-type-over-S}
+we can find an immersion $j_i : U_i \to \mathbf{P}_S^{n_i}$
+for each $i$. By
+Morphisms, Lemma \ref{morphisms-lemma-quasi-compact-immersion} we can find
+closed subschemes $Z_i \subset \mathbf{P}_S^{n_i}$
+such that $j_i : U_i \to Z_i$ is a scheme theoretically
+dense open immersion. Note that $Z_i \to S$ is proper, see
+Morphisms, Lemma \ref{morphisms-lemma-locally-projective-proper}.
+Consider the morphism
+$$
+j = (j_1|_U, \ldots, j_m|_U) : U \longrightarrow
+\mathbf{P}_S^{n_1} \times_S \ldots \times_S \mathbf{P}_S^{n_m}.
+$$
+By the lemma cited above we can find a closed subscheme
+$Z$ of $\mathbf{P}_S^{n_1} \times_S \ldots \times_S \mathbf{P}_S^{n_m}$
+such that $j : U \to Z$ is an open immersion and such that $U$
+is scheme theoretically dense in $Z$. The morphism $Z \to S$
+is proper. Consider the $i$th projection
+$$
+\text{pr}_i|_Z : Z \longrightarrow \mathbf{P}^{n_i}_S.
+$$
+This morphism factors through $Z_i$ (see Morphisms,
+Lemma \ref{morphisms-lemma-factor-factor}). Denote $p_i : Z \to Z_i$
+the induced morphism. This is a proper morphism, see
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}
+for example. At this point we have that
+$U \subset U_i \subset Z_i$ are scheme theoretically
+dense open immersions. Moreover, we can think of $Z$ as the
+scheme theoretic image of the ``diagonal'' morphism
+$U \to Z_1 \times_S \ldots \times_S Z_m$.
+
+\medskip\noindent
+Set $V_i = p_i^{-1}(U_i)$. Note that $p_i|_{V_i} : V_i \to U_i$ is proper.
+Set $X' = V_1 \cup \ldots \cup V_m$. By construction $X'$ has an immersion
+into the scheme
+$\mathbf{P}^{n_1}_S \times_S \ldots \times_S \mathbf{P}^{n_m}_S$.
+Thus by the Segre embedding (see
+Constructions, Lemma \ref{constructions-lemma-segre-embedding})
+we see that $X'$ has
+an immersion into a projective space over $S$.
+
+\medskip\noindent
We claim that the morphisms $p_i|_{V_i} : V_i \to U_i$ glue to a morphism
+$X' \to X$. Namely, it is clear that $p_i|_U$ is the identity map
+from $U$ to $U$. Since $U \subset X'$ is scheme theoretically
+dense by construction, it is also scheme theoretically dense
+in the open subscheme $V_i \cap V_j$. Thus we see that
+$p_i|_{V_i \cap V_j} = p_j|_{V_i \cap V_j}$ as morphisms into the
+separated $S$-scheme $X$, see
+Morphisms, Lemma \ref{morphisms-lemma-equality-of-morphisms}.
+We denote the resulting morphism $\pi : X' \to X$.
+
+\medskip\noindent
+We claim that $\pi^{-1}(U_i) = V_i$.
+Since $\pi|_{V_i} = p_i|_{V_i}$ it follows that
+$V_i \subset \pi^{-1}(U_i)$. Consider the diagram
+$$
+\xymatrix{
+V_i \ar[r] \ar[rd]_{p_i|_{V_i}} & \pi^{-1}(U_i) \ar[d] \\
+& U_i
+}
+$$
+Since $V_i \to U_i$ is proper we see that the image of
+the horizontal arrow is closed, see
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}.
+Since $V_i \subset \pi^{-1}(U_i)$ is scheme
+theoretically dense (as it contains $U$)
+we conclude that $V_i = \pi^{-1}(U_i)$ as claimed.
+
+\medskip\noindent
+This shows that $\pi^{-1}(U_i) \to U_i$ is identified with the proper
+morphism $p_i|_{V_i} : V_i \to U_i$. Hence we see that $X$ has a
+finite affine covering $X = \bigcup U_i$ such that the restriction
+of $\pi$ is proper on each member of the covering.
+Thus by Morphisms, Lemma \ref{morphisms-lemma-proper-local-on-the-base}
+we see that $\pi$ is proper.
+
+\medskip\noindent
+Finally we have to show that $\pi^{-1}(U) = U$. To see this we argue in the
+same way as above using the diagram
+$$
+\xymatrix{
+U \ar[r] \ar[rd] & \pi^{-1}(U) \ar[d] \\
+& U
+}
+$$
+and using that $\text{id}_U : U \to U$ is proper and that
+$U$ is scheme theoretically dense in $\pi^{-1}(U)$.
+\end{proof}
+
+\begin{remark}
+\label{remark-chow-Noetherian}
+In the situation of Chow's
+Lemma \ref{lemma-chow-Noetherian}:
+\begin{enumerate}
+\item The morphism $\pi$ is actually H-projective (hence projective, see
+Morphisms, Lemma \ref{morphisms-lemma-H-projective})
+since the morphism $X' \to \mathbf{P}^n_S \times_S X = \mathbf{P}^n_X$
+is a closed immersion (use the fact that $\pi$ is proper, see
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}).
+\item We may assume that $\pi^{-1}(U)$ is scheme theoretically dense
+in $X'$. Namely, we can simply replace $X'$ by the scheme theoretic
+closure of $\pi^{-1}(U)$. In this case we can think of $U$ as a
+scheme theoretically dense open subscheme of $X'$.
+See Morphisms, Section \ref{morphisms-section-scheme-theoretic-image}.
+\item If $X$ is reduced then we may choose $X'$ reduced. This is clear
+from (2).
+\end{enumerate}
+\end{remark}
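\medskip\noindent
For example, if $X$ admits an immersion $X \to \mathbf{P}^n_S$ over $S$
for some $n$, then we may simply take $X' = X$ and $\pi = \text{id}_X$ in
Chow's Lemma \ref{lemma-chow-Noetherian}. The content of the lemma is that
an arbitrary separated morphism of finite type can be dominated, via a
proper surjective morphism which is an isomorphism over a dense open,
by a scheme admitting such an immersion.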
+
+
+
+
+
+
+\section{Higher direct images of coherent sheaves}
+\label{section-proper-pushforward}
+
+\noindent
+In this section we prove the fundamental fact that the higher
+direct images of a coherent sheaf under a proper morphism
+are coherent.
+
+\begin{proposition}
+\label{proposition-proper-pushforward-coherent}
+\begin{reference}
+\cite[III Theorem 3.2.1]{EGA}
+\end{reference}
+Let $S$ be a locally Noetherian scheme.
+Let $f : X \to S$ be a proper morphism.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Then $R^if_*\mathcal{F}$ is a coherent $\mathcal{O}_S$-module
+for all $i \geq 0$.
+\end{proposition}
+
+\begin{proof}
+Since the problem is local on $S$ we may assume that $S$ is
+a Noetherian scheme. Since a proper morphism is of finite type
we see that in this case $X$ is also a Noetherian scheme.
+Consider the property $\mathcal{P}$ of coherent sheaves
+on $X$ defined by the rule
+$$
+\mathcal{P}(\mathcal{F}) \Leftrightarrow
+R^pf_*\mathcal{F}\text{ is coherent for all }p \geq 0
+$$
+We are going to use the result of
+Lemma \ref{lemma-property} to prove that
+$\mathcal{P}$ holds for every coherent sheaf on $X$.
+
+\medskip\noindent
+Let
+$$
+0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0
+$$
+be a short exact sequence of coherent sheaves on $X$.
+Consider the long exact sequence of higher direct images
+$$
+R^{p - 1}f_*\mathcal{F}_3 \to
+R^pf_*\mathcal{F}_1 \to
+R^pf_*\mathcal{F}_2 \to
+R^pf_*\mathcal{F}_3 \to
+R^{p + 1}f_*\mathcal{F}_1
+$$
Then it is clear that if two out of three of the sheaves $\mathcal{F}_i$
have property $\mathcal{P}$, then the higher direct images of the
third are sandwiched in this exact complex between two coherent
sheaves. Hence these higher direct images are also coherent by
Lemmas \ref{lemma-coherent-abelian-Noetherian} and
\ref{lemma-coherent-Noetherian-quasi-coherent-sub-quotient}.
Thus property $\mathcal{P}$ holds for the third as well.
+
+\medskip\noindent
+Let $Z \subset X$ be an integral closed subscheme.
+We have to find a coherent sheaf $\mathcal{F}$ on $X$ whose support is
+contained in $Z$, whose stalk at the generic point $\xi$ of $Z$ is a
+$1$-dimensional vector space over $\kappa(\xi)$ such that $\mathcal{P}$
+holds for $\mathcal{F}$. Denote $g = f|_Z : Z \to S$ the restriction of $f$.
+Suppose we can find a coherent sheaf $\mathcal{G}$ on $Z$ such
+that
+(a) $\mathcal{G}_\xi$ is a $1$-dimensional vector space over $\kappa(\xi)$,
+(b) $R^pg_*\mathcal{G} = 0$ for $p > 0$, and
+(c) $g_*\mathcal{G}$ is coherent. Then we can consider
+$\mathcal{F} = (Z \to X)_*\mathcal{G}$. As $Z \to X$ is a closed immersion
+we see that $(Z \to X)_*\mathcal{G}$ is coherent on $X$
+and $R^p(Z \to X)_*\mathcal{G} = 0$ for $p > 0$
+(Lemma \ref{lemma-finite-pushforward-coherent}).
+Hence by the relative Leray spectral sequence
+(Cohomology, Lemma \ref{cohomology-lemma-relative-Leray})
+we will have $R^pf_*\mathcal{F} = R^pg_*\mathcal{G} = 0$ for $p > 0$
+and $f_*\mathcal{F} = g_*\mathcal{G}$ is coherent.
+Finally $\mathcal{F}_\xi = ((Z \to X)_*\mathcal{G})_\xi = \mathcal{G}_\xi$
+which verifies the condition on the stalk at $\xi$.
+Hence everything depends on finding a coherent sheaf $\mathcal{G}$
+on $Z$ which has properties (a), (b), and (c).
+
+\medskip\noindent
+We can apply Chow's Lemma \ref{lemma-chow-Noetherian}
+to the morphism $Z \to S$. Thus we get a diagram
+$$
+\xymatrix{
+Z \ar[rd]_g & Z' \ar[d]^-{g'} \ar[l]^\pi \ar[r]_i & \mathbf{P}^m_S \ar[dl] \\
+& S &
+}
+$$
+as in the statement of Chow's lemma. Also, let $U \subset Z$ be
+the dense open subscheme such that $\pi^{-1}(U) \to U$ is an isomorphism.
+By the discussion in Remark \ref{remark-chow-Noetherian} we see that
+$i' = (i, \pi) : Z' \to \mathbf{P}^m_Z$ is
+a closed immersion. Hence
+$$
+\mathcal{L} = i^*\mathcal{O}_{\mathbf{P}^m_S}(1) \cong
+(i')^*\mathcal{O}_{\mathbf{P}^m_Z}(1)
+$$
+is $g'$-relatively ample and $\pi$-relatively ample (for example by
+Morphisms, Lemma \ref{morphisms-lemma-characterize-ample-on-finite-type}).
+Hence by Lemma \ref{lemma-kill-by-twisting}
+there exists an $n \geq 0$ such that
+both $R^p\pi_*\mathcal{L}^{\otimes n} = 0$ for all $p > 0$ and
+$R^p(g')_*\mathcal{L}^{\otimes n} = 0$ for all $p > 0$.
+Set $\mathcal{G} = \pi_*\mathcal{L}^{\otimes n}$.
+Property (a) holds because $\pi_*\mathcal{L}^{\otimes n}|_U$ is
+an invertible sheaf (as $\pi^{-1}(U) \to U$ is an isomorphism).
+Properties (b) and (c) hold because by the relative Leray
+spectral sequence
+(Cohomology, Lemma \ref{cohomology-lemma-relative-Leray})
+we have
+$$
+E_2^{p, q} = R^pg_* R^q\pi_*\mathcal{L}^{\otimes n}
+\Rightarrow
+R^{p + q}(g')_*\mathcal{L}^{\otimes n}
+$$
+and by choice of $n$ the only nonzero terms in $E_2^{p, q}$ are
+those with $q = 0$ and the only nonzero terms of
+$R^{p + q}(g')_*\mathcal{L}^{\otimes n}$ are those with $p = q = 0$.
+This implies that $R^pg_*\mathcal{G} = 0$ for $p > 0$ and
+that $g_*\mathcal{G} = (g')_*\mathcal{L}^{\otimes n}$.
+Finally, applying the previous
+Lemma \ref{lemma-locally-projective-pushforward}
+we see that $g_*\mathcal{G} = (g')_*\mathcal{L}^{\otimes n}$ is
+coherent as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-over-affine-cohomology-finite}
+Let $S = \Spec(A)$ with $A$ a Noetherian ring.
+Let $f : X \to S$ be a proper morphism.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
Then $H^i(X, \mathcal{F})$ is a finite $A$-module for all $i \geq 0$.
+\end{lemma}
+
+\begin{proof}
+This is just the affine case of
+Proposition \ref{proposition-proper-pushforward-coherent}.
+Namely, by Lemmas \ref{lemma-quasi-coherence-higher-direct-images} and
+\ref{lemma-quasi-coherence-higher-direct-images-application} we know that
+$R^if_*\mathcal{F}$ is the quasi-coherent sheaf associated
+to the $A$-module $H^i(X, \mathcal{F})$
+and by Lemma \ref{lemma-coherent-Noetherian} this is
+a coherent sheaf if and only if $H^i(X, \mathcal{F})$
+is an $A$-module of finite type.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-graded-finiteness}
+Let $A$ be a Noetherian ring.
+Let $B$ be a finitely generated graded $A$-algebra.
+Let $f : X \to \Spec(A)$ be a proper morphism.
+Set $\mathcal{B} = f^*\widetilde B$.
+Let $\mathcal{F}$ be a quasi-coherent
+graded $\mathcal{B}$-module of finite type.
+\begin{enumerate}
+\item For every $p \geq 0$ the graded $B$-module $H^p(X, \mathcal{F})$
+is a finite $B$-module.
+\item If $\mathcal{L}$ is an ample invertible $\mathcal{O}_X$-module,
+then there exists an integer $d_0$ such that
+$H^p(X, \mathcal{F} \otimes \mathcal{L}^{\otimes d}) = 0$
+for all $p > 0$ and $d \geq d_0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove this we consider the fibre product diagram
+$$
+\xymatrix{
+X' = \Spec(B) \times_{\Spec(A)} X
+\ar[r]_-\pi \ar[d]_{f'} &
+X \ar[d]^f \\
+\Spec(B) \ar[r] &
+\Spec(A)
+}
+$$
+Note that $f'$ is a proper morphism, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}.
+Also, $B$ is a finitely generated $A$-algebra, and hence
+Noetherian (Algebra, Lemma \ref{algebra-lemma-Noetherian-permanence}).
+This implies that $X'$ is a Noetherian scheme
+(Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}).
+Note that $X'$ is the relative spectrum of the quasi-coherent
+$\mathcal{O}_X$-algebra $\mathcal{B}$ by
+Constructions, Lemma \ref{constructions-lemma-spec-properties}.
+Since $\mathcal{F}$ is a quasi-coherent $\mathcal{B}$-module
+we see that there is a unique quasi-coherent
+$\mathcal{O}_{X'}$-module $\mathcal{F}'$ such that
+$\pi_*\mathcal{F}' = \mathcal{F}$, see
Morphisms, Lemma \ref{morphisms-lemma-affine-equivalence-modules}.
+Since $\mathcal{F}$ is finite type as a $\mathcal{B}$-module we
+conclude that $\mathcal{F}'$ is a finite type
+$\mathcal{O}_{X'}$-module (details omitted). In other words,
+$\mathcal{F}'$ is a coherent $\mathcal{O}_{X'}$-module
+(Lemma \ref{lemma-coherent-Noetherian}).
+Since the morphism $\pi : X' \to X$ is affine we have
+$$
+H^p(X, \mathcal{F}) = H^p(X', \mathcal{F}')
+$$
+by Lemma \ref{lemma-relative-affine-cohomology}.
+Thus (1) follows from
+Lemma \ref{lemma-proper-over-affine-cohomology-finite}.
+Given $\mathcal{L}$ as in (2) we set
+$\mathcal{L}' = \pi^*\mathcal{L}$. Note that $\mathcal{L}'$ is
+ample on $X'$ by
+Morphisms, Lemma \ref{morphisms-lemma-pullback-ample-tensor-relatively-ample}.
+By the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula}) we have
+$\pi_*(\mathcal{F}' \otimes \mathcal{L}') = \mathcal{F} \otimes \mathcal{L}$.
+Thus part (2) follows by the same reasoning as above from
+Lemma \ref{lemma-kill-by-twisting}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{The theorem on formal functions}
+\label{section-theorem-formal-functions}
+
+\noindent
+In this section we study the behaviour of cohomology of
+sequences of sheaves either of the form $\{I^n\mathcal{F}\}_{n \geq 0}$
+or of the form $\{\mathcal{F}/I^n\mathcal{F}\}_{n \geq 0}$ as
+$n$ varies.
+
+\medskip\noindent
+Here and below we use the following notation.
+Given a morphism of schemes $f : X \to Y$, a quasi-coherent sheaf
+$\mathcal{F}$ on $X$, and a quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_Y$ we denote
+$\mathcal{I}^n\mathcal{F}$ the quasi-coherent subsheaf generated
+by products of local sections of $f^{-1}(\mathcal{I}^n)$ and
+$\mathcal{F}$. In a formula
+$$
+\mathcal{I}^n\mathcal{F}
+=
+\Im\left(
+f^*(\mathcal{I}^n) \otimes_{\mathcal{O}_X} \mathcal{F}
+\longrightarrow
+\mathcal{F}
+\right).
+$$
+Note that there are natural maps
+$$
+f^{-1}(\mathcal{I}^n) \otimes_{f^{-1}\mathcal{O}_Y} \mathcal{I}^m\mathcal{F}
+\longrightarrow
+f^*(\mathcal{I}^n) \otimes_{\mathcal{O}_X} \mathcal{I}^m\mathcal{F}
+\longrightarrow
+\mathcal{I}^{n + m}\mathcal{F}
+$$
+Hence a section of $\mathcal{I}^n$ will give rise to a
+map $R^pf_*(\mathcal{I}^m\mathcal{F}) \to
+R^pf_*(\mathcal{I}^{n + m}\mathcal{F})$ by functoriality
+of higher direct images. Localizing and then sheafifying we
+see that there are $\mathcal{O}_Y$-module maps
+$$
+\mathcal{I}^n \otimes_{\mathcal{O}_Y} R^pf_*(\mathcal{I}^m\mathcal{F})
+\longrightarrow
+R^pf_*(\mathcal{I}^{n + m}\mathcal{F}).
+$$
+In other words we see that
+$\bigoplus_{n \geq 0} R^pf_*(\mathcal{I}^n\mathcal{F})$
+is a graded $\bigoplus_{n \geq 0} \mathcal{I}^n$-module.
+
+\medskip\noindent
+If $Y = \Spec(A)$ and $\mathcal{I} = \widetilde{I}$ we denote
$\mathcal{I}^n\mathcal{F}$ simply by $I^n\mathcal{F}$. The maps
+introduced above give $M = \bigoplus H^p(X, I^n\mathcal{F})$ the
+structure of a graded $S = \bigoplus I^n$-module. If $f$ is proper,
+$A$ is Noetherian and $\mathcal{F}$ is coherent, then this turns out
+to be a module of finite type.
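For example, if $f$ is the identity morphism of $\Spec(A)$ and
$\mathcal{F} = \widetilde{N}$ for a finite $A$-module $N$, then
$\bigoplus H^0(X, I^n\mathcal{F}) = \bigoplus I^nN$ is the Rees module
of $N$, which is generated in degree $0$ by $N$ over $S = \bigoplus I^n$
and hence is indeed a finite $S$-module.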
+
+\begin{lemma}
+\label{lemma-cohomology-powers-ideal-times-F}
+\begin{reference}
+\cite[III Cor 3.3.2]{EGA}
+\end{reference}
+Let $A$ be a Noetherian ring.
+Let $I \subset A$ be an ideal.
+Set $B = \bigoplus_{n \geq 0} I^n$.
+Let $f : X \to \Spec(A)$ be a proper morphism.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Then for every $p \geq 0$ the graded $B$-module
+$\bigoplus_{n \geq 0} H^p(X, I^n\mathcal{F})$ is
+a finite $B$-module.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{B} = \bigoplus I^n\mathcal{O}_X = f^*\widetilde{B}$.
+Then $\bigoplus I^n\mathcal{F}$ is a finite type
+graded $\mathcal{B}$-module. Hence the result follows
+from Lemma \ref{lemma-graded-finiteness} part (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-powers-ideal-times-sheaf}
Let $f : X \to Y$ be a morphism of schemes, let $\mathcal{F}$ be a
quasi-coherent sheaf on $X$, and let $\mathcal{I} \subset \mathcal{O}_Y$
be a quasi-coherent sheaf of ideals. Assume $Y$ locally
Noetherian, $f$ proper, and $\mathcal{F}$ coherent.
+Then
+$$
+\mathcal{M} =
+\bigoplus\nolimits_{n \geq 0} R^pf_*(\mathcal{I}^n\mathcal{F})
+$$
+is a graded $\mathcal{A} = \bigoplus_{n \geq 0} \mathcal{I}^n$-module
+which is quasi-coherent and of finite type.
+\end{lemma}
+
+\begin{proof}
+The statement is local on $Y$, hence this reduces to the
+case where $Y$ is affine. In the affine case the result follows
+from Lemma \ref{lemma-cohomology-powers-ideal-times-F}.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-powers-ideal-application}
+Let $A$ be a Noetherian ring.
+Let $I \subset A$ be an ideal.
+Let $f : X \to \Spec(A)$ be a proper morphism.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Then for every $p \geq 0$ there exists an integer $c \geq 0$
+such that
+\begin{enumerate}
+\item the multiplication map
+$I^{n - c} \otimes H^p(X, I^c\mathcal{F}) \to H^p(X, I^n\mathcal{F})$
+is surjective for all $n \geq c$,
+\item the image of $H^p(X, I^{n + m}\mathcal{F}) \to H^p(X, I^n\mathcal{F})$
+is contained in the submodule $I^{m - e} H^p(X, I^n\mathcal{F})$
+where $e = \max(0, c - n)$ for $n + m \geq c$, $n, m \geq 0$,
+\item we have
+$$
+\Ker(H^p(X, I^n\mathcal{F}) \to H^p(X, \mathcal{F})) =
+\Ker(H^p(X, I^n\mathcal{F}) \to H^p(X, I^{n - c}\mathcal{F}))
+$$
+for $n \geq c$,
+\item there are maps $I^nH^p(X, \mathcal{F}) \to H^p(X, I^{n - c}\mathcal{F})$
+for $n \geq c$ such that the compositions
+$$
+H^p(X, I^n\mathcal{F}) \to
+I^{n - c}H^p(X, \mathcal{F}) \to
+H^p(X, I^{n - 2c}\mathcal{F})
+$$
+and
+$$
+I^nH^p(X, \mathcal{F}) \to
+H^p(X, I^{n - c}\mathcal{F}) \to
+I^{n - 2c}H^p(X, \mathcal{F})
+$$
+for $n \geq 2c$ are the canonical ones, and
+\item the inverse systems $(H^p(X, I^n\mathcal{F}))$ and
+$(I^nH^p(X, \mathcal{F}))$ are pro-isomorphic.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Write $M_n = H^p(X, I^n\mathcal{F})$ for $n \geq 1$ and
+$M_0 = H^p(X, \mathcal{F})$ so that we have maps
$\ldots \to M_3 \to M_2 \to M_1 \to M_0$. Setting
$B = \bigoplus_{n \geq 0} I^n$ we find that $M = \bigoplus_{n \geq 0} M_n$
is a finite graded $B$-module, see
+Lemma \ref{lemma-cohomology-powers-ideal-times-F}.
+Observe that the products
+$B_n \otimes M_m \to M_{m + n}$, $a \otimes m \mapsto a \cdot m$
+are compatible with the maps in our inverse system in the sense
+that the diagrams
+$$
+\xymatrix{
+B_n \otimes_A M_m \ar[r] \ar[d] & M_{n + m} \ar[d] \\
+B_n \otimes_A M_{m'} \ar[r] & M_{n + m'}
+}
+$$
+commute for $n, m' \geq 0$ and $m \geq m'$.
+
+\medskip\noindent
+Proof of (1). Choose $d_1, \ldots, d_t \geq 0$ and $x_i \in M_{d_i}$
+such that $M$ is generated by $x_1, \ldots, x_t$ over $B$.
For any $c \geq \max\{d_i\}$ we then have
$B_{n - c} \cdot M_c = M_n$ for $n \geq c$, which proves (1).
+
+\medskip\noindent
+Proof of (2). Let $c$ be as in the proof of (1). Let $n + m \geq c$.
+We have $M_{n + m} = B_{n + m - c} \cdot M_c$.
+If $c > n$ then we use $M_c \to M_n$ and the compatibility of products
+with transition maps pointed out above to conclude that
+the image of $M_{n + m} \to M_n$ is contained in $I^{n + m - c}M_n$.
+If $c \leq n$, then we write $M_{n + m} = B_m \cdot B_{n - c} \cdot M_c =
+B_m \cdot M_n$ to see that the image is contained in $I^m M_n$.
+This proves (2).
+
+\medskip\noindent
+Let $K_n \subset M_n$ be the kernel of the map $M_n \to M_0$. The
+compatibility of products with transition maps pointed out above
+shows that $K = \bigoplus K_n \subset M$ is a graded $B$-submodule.
+As $B$ is Noetherian and $M$ is a finitely generated graded $B$-module,
+this shows that $K$ is a finitely generated graded $B$-module.
+Choose $d'_1, \ldots, d'_{t'} \geq 0$ and $y_i \in K_{d'_i}$
+such that $K$ is generated by $y_1, \ldots, y_{t'}$ over $B$.
Set $c = \max\{d'_1, \ldots, d'_{t'}\}$. Since $y_i \in \Ker(M_{d'_i} \to M_0)$
+we see that $B_n \cdot y_i \subset \Ker(M_{n + d'_i} \to M_n)$.
+In this way we see that $K_n = \Ker(M_n \to M_{n - c})$ for $n \geq c$.
+This proves (3).
+
+\medskip\noindent
+Consider the following commutative solid diagram
+$$
+\xymatrix{
+I^n \otimes_A M_0 \ar[r] \ar[d] &
+I^nM_0 \ar[r] \ar@{..>}[d] &
+M_0 \ar[d] \\
+M_n \ar[r] &
+M_{n - c} \ar[r] &
+M_0
+}
+$$
+Since the kernel of the surjective arrow $I^n \otimes_A M_0 \to I^nM_0$
+maps into $K_n$ by the above we obtain the dotted arrow and the
+composition $I^nM_0 \to M_{n - c} \to M_0$ is the canonical map.
+Then clearly the composition $I^nM_0 \to M_{n - c} \to I^{n - 2c}M_0$
+is the canonical map for $n \geq 2c$. Consider the composition
+$M_n \to I^{n - c}M_0 \to M_{n - 2c}$. The first map
+sends an element of the form $a \cdot m$
+with $a \in I^{n - c}$ and $m \in M_c$
+to $a m'$ where $m'$ is the image of $m$ in $M_0$.
+Then the second map sends this to $a \cdot m'$ in $M_{n - 2c}$ and
+we see (4) is true.
+
+\medskip\noindent
+Part (5) is an immediate consequence of (4) and the definition of
+morphisms of pro-objects.
+\end{proof}
+
+\noindent
+In the situation of Lemmas \ref{lemma-cohomology-powers-ideal-times-F} and
+\ref{lemma-cohomology-powers-ideal-application} consider the inverse
+system
+$$
+\mathcal{F}/I\mathcal{F} \leftarrow
+\mathcal{F}/I^2\mathcal{F} \leftarrow
+\mathcal{F}/I^3\mathcal{F} \leftarrow \ldots
+$$
+We would like to know what happens to the cohomology groups.
+Here is a first result.
+
+\begin{lemma}
+\label{lemma-ML-cohomology-powers-ideal}
+Let $A$ be a Noetherian ring.
+Let $I \subset A$ be an ideal.
+Let $f : X \to \Spec(A)$ be a proper morphism.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Fix $p \geq 0$. There exists a $c \geq 0$ such that
+\begin{enumerate}
+\item for all $n \geq c$ we have
+$$
+\Ker(H^p(X, \mathcal{F}) \to H^p(X, \mathcal{F}/I^n\mathcal{F})) \subset
+I^{n - c}H^p(X, \mathcal{F}).
+$$
+\item the inverse system
+$$
+\left(H^p(X, \mathcal{F}/I^n\mathcal{F})\right)_{n \in \mathbf{N}}
+$$
+satisfies the Mittag-Leffler condition (see
+Homology, Definition \ref{homology-definition-Mittag-Leffler}), and
+\item we have
+$$
+\Im(H^p(X, \mathcal{F}/I^k\mathcal{F})
+\to H^p(X, \mathcal{F}/I^n\mathcal{F}))
+=
+\Im(H^p(X, \mathcal{F})
+\to H^p(X, \mathcal{F}/I^n\mathcal{F}))
+$$
+for all $k \geq n + c$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $c = \max\{c_p, c_{p + 1}\}$, where $c_p, c_{p + 1}$ are the integers
+found in Lemma \ref{lemma-cohomology-powers-ideal-application} for
+$H^p$ and $H^{p + 1}$.
+
+\medskip\noindent
+Let us prove part (1). Consider the short exact sequence
+$$
+0 \to I^n\mathcal{F} \to \mathcal{F} \to \mathcal{F}/I^n\mathcal{F} \to 0
+$$
+From the long exact cohomology sequence we see that
+$$
+\Ker(
+H^p(X, \mathcal{F}) \to H^p(X, \mathcal{F}/I^n\mathcal{F})
+)
+=
+\Im(
+H^p(X, I^n\mathcal{F}) \to H^p(X, \mathcal{F})
+)
+$$
+Hence by Lemma \ref{lemma-cohomology-powers-ideal-application} part (2)
+we see that this is contained in $I^{n - c}H^p(X, \mathcal{F})$ for $n \geq c$.
+
+\medskip\noindent
Note that part (3) implies part (2) by the definition of Mittag-Leffler
systems.
+
+\medskip\noindent
+Let us prove part (3). Fix an $n$. Consider the commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+I^n\mathcal{F} \ar[r] &
+\mathcal{F} \ar[r] &
+\mathcal{F}/I^n\mathcal{F} \ar[r] &
+0 \\
+0 \ar[r] &
+I^{n + m}\mathcal{F} \ar[r] \ar[u] &
+\mathcal{F} \ar[r] \ar[u] &
+\mathcal{F}/I^{n + m}\mathcal{F} \ar[r] \ar[u] &
+0
+}
+$$
+This gives rise to the following commutative diagram
+$$
+\xymatrix{
+H^p(X, \mathcal{F}) \ar[r] &
+H^p(X, \mathcal{F}/I^n\mathcal{F}) \ar[r]_\delta &
+H^{p + 1}(X, I^n\mathcal{F}) \ar[r] &
+H^{p + 1}(X, \mathcal{F}) \\
+H^p(X, \mathcal{F}) \ar[r] \ar[u]^1 &
+H^p(X, \mathcal{F}/I^{n + m}\mathcal{F}) \ar[r] \ar[u]^\gamma &
+H^{p + 1}(X, I^{n + m}\mathcal{F}) \ar[u]^\alpha \ar[r]^-\beta &
+H^{p + 1}(X, \mathcal{F}) \ar[u]_1
+}
+$$
+with exact rows. By
+Lemma \ref{lemma-cohomology-powers-ideal-application} part (4) the kernel
+of $\beta$ is equal to the kernel of $\alpha$ for $m \geq c$.
+By a diagram chase this shows that the image of $\gamma$ is contained
+in the kernel of $\delta$ which shows that part (3) is true
+(set $k = n + m$ to get it).
+\end{proof}
+
+\begin{theorem}[Theorem on formal functions]
+\label{theorem-formal-functions}
+Let $A$ be a Noetherian ring.
+Let $I \subset A$ be an ideal.
+Let $f : X \to \Spec(A)$ be a proper morphism.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Fix $p \geq 0$.
+The system of maps
+$$
+H^p(X, \mathcal{F})/I^nH^p(X, \mathcal{F})
+\longrightarrow
+H^p(X, \mathcal{F}/I^n\mathcal{F})
+$$
defines an isomorphism of limits
+$$
+H^p(X, \mathcal{F})^\wedge
+\longrightarrow
+\lim_n H^p(X, \mathcal{F}/I^n\mathcal{F})
+$$
+where the left hand side is the completion of the $A$-module
+$H^p(X, \mathcal{F})$ with respect to the ideal $I$, see
+Algebra, Section \ref{algebra-section-completion}.
+Moreover, this is in fact a homeomorphism for the limit topologies.
+\end{theorem}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-ML-cohomology-powers-ideal} as follows.
+Set $M = H^p(X, \mathcal{F})$, $M_n = H^p(X, \mathcal{F}/I^n\mathcal{F})$,
+and denote $N_n = \Im(M \to M_n)$. By
+Lemma \ref{lemma-ML-cohomology-powers-ideal} parts (2) and (3) we see that
+$(M_n)$ is a Mittag-Leffler system with
+$N_n \subset M_n$ equal to the image of $M_k$ for all $k \gg n$.
+It follows that $\lim M_n = \lim N_n$ as topological modules (with limit
+topologies). On the other hand, the $N_n$ form an inverse system of quotients
+of the module $M$ and hence $\lim N_n$ is the completion of $M$
+with respect to the topology given by the kernels $K_n = \Ker(M \to N_n)$.
+By Lemma \ref{lemma-ML-cohomology-powers-ideal} part (1)
+we have $K_n \subset I^{n - c}M$ and since $N_n \subset M_n$
+is annihilated by $I^n$ we have $I^n M \subset K_n$.
+Thus the topology defined using the submodules $K_n$
+as a fundamental system of open neighbourhoods of $0$
+is the same as the $I$-adic topology and we find that the
+induced map $M^\wedge = \lim M/I^nM \to \lim N_n = \lim M_n$
+is an isomorphism of topological modules\footnote{
+To be sure, the limit topology on $M^\wedge$ is the same as its
+$I$-adic topology as follows from
+Algebra, Lemma \ref{algebra-lemma-hathat-finitely-generated}.
+See More on Algebra, Section \ref{more-algebra-section-topological-ring}.}.
+\end{proof}
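\noindent
As a sanity check of the theorem (not needed in what follows), consider the
simplest case: $f$ the identity of $X = \Spec(A)$ and
$\mathcal{F} = \widetilde{M}$ for a finite $A$-module $M$. Then
$H^0(X, \mathcal{F}/I^n\mathcal{F}) = M/I^nM$ and the higher cohomology
vanishes, so the theorem reduces to the defining property of completion
$$
M^\wedge = \lim_n M/I^nM.
$$
For instance, with $A = M = \mathbf{Z}$ and $I = (p)$ this recovers
$\mathbf{Z}_p = \lim_n \mathbf{Z}/p^n\mathbf{Z}$.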
+
+\begin{lemma}
+\label{lemma-spell-out-theorem-formal-functions}
+Let $A$ be a ring. Let $I \subset A$ be an ideal. Assume $A$ is
+Noetherian and complete with respect to $I$.
+Let $f : X \to \Spec(A)$ be a proper morphism.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Then
+$$
+H^p(X, \mathcal{F}) = \lim_n H^p(X, \mathcal{F}/I^n\mathcal{F})
+$$
+for all $p \geq 0$.
+\end{lemma}
+
+\begin{proof}
+This is a reformulation of the theorem on formal functions
+(Theorem \ref{theorem-formal-functions}) in the
+case of a complete Noetherian base ring. Namely, in this case the
+$A$-module $H^p(X, \mathcal{F})$ is finite
+(Lemma \ref{lemma-proper-over-affine-cohomology-finite}) hence
+$I$-adically complete (Algebra, Lemma \ref{algebra-lemma-completion-tensor})
+and we see that completion on the left hand side is not necessary.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formal-functions-stalk}
+Given a morphism of schemes $f : X \to Y$ and a quasi-coherent sheaf
+$\mathcal{F}$ on $X$. Assume
+\begin{enumerate}
+\item $Y$ locally Noetherian,
+\item $f$ proper, and
+\item $\mathcal{F}$ coherent.
+\end{enumerate}
+Let $y \in Y$ be a point. Consider the infinitesimal neighbourhoods
+$$
+\xymatrix{
+X_n =
+\Spec(\mathcal{O}_{Y, y}/\mathfrak m_y^n) \times_Y X
+\ar[r]_-{i_n} \ar[d]_{f_n} &
+X \ar[d]^f \\
+\Spec(\mathcal{O}_{Y, y}/\mathfrak m_y^n) \ar[r]^-{c_n} & Y
+}
+$$
+of the fibre $X_1 = X_y$ and set $\mathcal{F}_n = i_n^*\mathcal{F}$.
+Then we have
+$$
+\left(R^pf_*\mathcal{F}\right)_y^\wedge
+\cong
+\lim_n H^p(X_n, \mathcal{F}_n)
+$$
+as $\mathcal{O}_{Y, y}^\wedge$-modules.
+\end{lemma}
+
+\begin{proof}
+This is just a reformulation of a special case of the theorem
+on formal functions, Theorem \ref{theorem-formal-functions}.
+Let us spell it out. Note that $\mathcal{O}_{Y, y}$ is a Noetherian
+local ring. Consider the canonical morphism
+$c : \Spec(\mathcal{O}_{Y, y}) \to Y$, see
+Schemes, Equation (\ref{schemes-equation-canonical-morphism}).
+This is a flat morphism as it identifies local rings.
+Denote momentarily $f' : X' \to \Spec(\mathcal{O}_{Y, y})$
+the base change of $f$ to this local ring. We see that
+$c^*R^pf_*\mathcal{F} = R^pf'_*\mathcal{F}'$ by
+Lemma \ref{lemma-flat-base-change-cohomology}.
+Moreover, the infinitesimal neighbourhoods of
+the fibre $X_y$ and $X'_y$ are identified (verification omitted; hint:
+the morphisms $c_n$ factor through $c$).
+
+\medskip\noindent
+Hence we may assume that $Y = \Spec(A)$ is the spectrum of
+a Noetherian local ring $A$ with maximal ideal $\mathfrak m$
+and that $y \in Y$ corresponds to the closed point (i.e., to $\mathfrak m$).
+In particular it follows that
+$$
+\left(R^pf_*\mathcal{F}\right)_y =
+\Gamma(Y, R^pf_*\mathcal{F}) =
+H^p(X, \mathcal{F}).
+$$
+
+\medskip\noindent
+In this case also, the morphisms $c_n$ are each closed immersions.
+Hence their base changes $i_n$ are closed immersions as well.
+Note that $i_{n, *}\mathcal{F}_n = i_{n, *}i_n^*\mathcal{F}
+= \mathcal{F}/\mathfrak m^n\mathcal{F}$. By the Leray spectral sequence
+for $i_n$, and Lemma \ref{lemma-finite-pushforward-coherent} we see that
+$$
+H^p(X_n, \mathcal{F}_n) =
+H^p(X, i_{n, *}\mathcal{F}_n) =
+H^p(X, \mathcal{F}/\mathfrak m^n\mathcal{F})
+$$
+Hence we may indeed apply the theorem on formal functions to compute
+the limit in the statement of the lemma and we win.
+\end{proof}
+
+\noindent
Here is a lemma which we will generalize to fibres of positive
dimension in the lemma following it.
+
+\begin{lemma}
+\label{lemma-higher-direct-images-zero-finite-fibre}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $y \in Y$.
+Assume
+\begin{enumerate}
+\item $Y$ locally Noetherian,
+\item $f$ is proper, and
+\item $f^{-1}(\{y\})$ is finite.
+\end{enumerate}
+Then for any coherent sheaf $\mathcal{F}$ on $X$ we have
+$(R^pf_*\mathcal{F})_y = 0$ for all $p > 0$.
+\end{lemma}
+
+\begin{proof}
+The fibre $X_y$ is finite, and by
+Morphisms, Lemma \ref{morphisms-lemma-finite-fibre} it
+is a finite discrete space. Moreover, the underlying topological
+space of each infinitesimal neighbourhood $X_n$ is the same.
+Hence each of the schemes $X_n$ is affine according to
+Schemes, Lemma \ref{schemes-lemma-scheme-finite-discrete-affine}.
+Hence it follows that $H^p(X_n, \mathcal{F}_n) = 0$ for all
+$p > 0$. Hence we see that $(R^pf_*\mathcal{F})_y^\wedge = 0$
+by Lemma \ref{lemma-formal-functions-stalk}.
+Note that $R^pf_*\mathcal{F}$ is coherent by
+Proposition \ref{proposition-proper-pushforward-coherent} and
hence $(R^pf_*\mathcal{F})_y$ is a finite
+$\mathcal{O}_{Y, y}$-module. By Nakayama's lemma
+(Algebra, Lemma \ref{algebra-lemma-NAK})
+if the completion of a finite module over a local ring
+is zero, then the module is zero. Whence
+$(R^pf_*\mathcal{F})_y = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-higher-direct-images-zero-above-dimension-fibre}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $y \in Y$.
+Assume
+\begin{enumerate}
+\item $Y$ locally Noetherian,
+\item $f$ is proper, and
+\item $\dim(X_y) = d$.
+\end{enumerate}
+Then for any coherent sheaf $\mathcal{F}$ on $X$ we have
+$(R^pf_*\mathcal{F})_y = 0$ for all $p > d$.
+\end{lemma}
+
+\begin{proof}
+The fibre $X_y$ is of finite type over $\Spec(\kappa(y))$.
+Hence $X_y$ is a Noetherian scheme by
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}.
+Hence the underlying topological space of $X_y$ is Noetherian, see
+Properties, Lemma \ref{properties-lemma-Noetherian-topology}.
+Moreover, the underlying topological space of each infinitesimal
+neighbourhood $X_n$ is the same as that of $X_y$.
+Hence $H^p(X_n, \mathcal{F}_n) = 0$ for all $p > d$ by
+Cohomology, Proposition \ref{cohomology-proposition-vanishing-Noetherian}.
+Hence we see that $(R^pf_*\mathcal{F})_y^\wedge = 0$
+by Lemma \ref{lemma-formal-functions-stalk} for $p > d$.
+Note that $R^pf_*\mathcal{F}$ is coherent by
+Proposition \ref{proposition-proper-pushforward-coherent} and
hence $(R^pf_*\mathcal{F})_y$ is a finite
+$\mathcal{O}_{Y, y}$-module. By Nakayama's lemma
+(Algebra, Lemma \ref{algebra-lemma-NAK})
+if the completion of a finite module over a local ring
+is zero, then the module is zero. Whence
+$(R^pf_*\mathcal{F})_y = 0$.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Applications of the theorem on formal functions}
+\label{section-applications-formal-functions}
+
+
+\noindent
+We will add more here as needed. For the moment we need the
+following characterization of finite morphisms in the Noetherian case.
+
+\begin{lemma}
+\label{lemma-characterize-finite}
+(For a more general version see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-characterize-finite}.)
+Let $f : X \to S$ be a morphism of schemes.
+Assume $S$ is locally Noetherian.
+The following are equivalent
+\begin{enumerate}
+\item $f$ is finite, and
+\item $f$ is proper with finite fibres.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+A finite morphism is proper according to
+Morphisms, Lemma \ref{morphisms-lemma-finite-proper}.
+A finite morphism is quasi-finite according to
+Morphisms, Lemma \ref{morphisms-lemma-finite-quasi-finite}.
+A quasi-finite morphism has finite fibres, see
+Morphisms, Lemma \ref{morphisms-lemma-quasi-finite}.
+Hence a finite morphism is proper and has finite fibres.
+
+\medskip\noindent
+Assume $f$ is proper with finite fibres.
+We want to show $f$ is finite.
+In fact it suffices to prove $f$ is affine.
+Namely, if $f$ is affine, then it follows that
+$f$ is integral by
+Morphisms, Lemma \ref{morphisms-lemma-integral-universally-closed}
+whereupon it follows from
+Morphisms, Lemma \ref{morphisms-lemma-finite-integral}
+that $f$ is finite.
+
+\medskip\noindent
+To show that $f$ is affine we may assume that $S$ is affine, and our
+goal is to show that $X$ is affine too.
+Since $f$ is proper we see that $X$ is separated and quasi-compact.
+Hence we may use the criterion of
+Lemma \ref{lemma-quasi-separated-h1-zero-covering} to prove that $X$
+is affine. To see this let $\mathcal{I} \subset \mathcal{O}_X$
+be a finite type ideal sheaf. In particular $\mathcal{I}$ is
+a coherent sheaf on $X$. By
+Lemma \ref{lemma-higher-direct-images-zero-finite-fibre} we conclude that
$(R^1f_*\mathcal{I})_s = 0$ for all $s \in S$.
+In other words, $R^1f_*\mathcal{I} = 0$. Hence we see from
+the Leray Spectral Sequence for $f$ that
+$H^1(X , \mathcal{I}) = H^1(S, f_*\mathcal{I})$.
+Since $S$ is affine, and $f_*\mathcal{I}$ is quasi-coherent
+(Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent})
we conclude $H^1(S, f_*\mathcal{I}) = 0$
from Lemma \ref{lemma-quasi-coherent-affine-cohomology-zero}.
Hence $H^1(X, \mathcal{I}) = 0$ as desired.
+\end{proof}
+
+\noindent
+As a consequence we have the following useful result.
+
+\begin{lemma}
+\label{lemma-proper-finite-fibre-finite-in-neighbourhood}
+\begin{slogan}
+A proper morphism is finite in a neighbourhood of a finite fiber.
+\end{slogan}
+(For a more general version see
+More on Morphisms,
+Lemma \ref{more-morphisms-lemma-proper-finite-fibre-finite-in-neighbourhood}.)
+Let $f : X \to S$ be a morphism of schemes.
+Let $s \in S$.
+Assume
+\begin{enumerate}
+\item $S$ is locally Noetherian,
+\item $f$ is proper, and
+\item $f^{-1}(\{s\})$ is a finite set.
+\end{enumerate}
+Then there exists an open neighbourhood $V \subset S$ of $s$
+such that $f|_{f^{-1}(V)} : f^{-1}(V) \to V$ is finite.
+\end{lemma}
+
+\begin{proof}
+The morphism $f$ is quasi-finite at all the points of $f^{-1}(\{s\})$
+by Morphisms, Lemma \ref{morphisms-lemma-finite-fibre}.
+By Morphisms, Lemma \ref{morphisms-lemma-quasi-finite-points-open} the
+set of points at which $f$ is quasi-finite is an open $U \subset X$.
+Let $Z = X \setminus U$. Then $s \not \in f(Z)$. Since $f$ is proper
+the set $f(Z) \subset S$ is closed. Choose any open neighbourhood
+$V \subset S$ of $s$ with $Z \cap V = \emptyset$. Then
+$f^{-1}(V) \to V$ is locally quasi-finite and proper.
+Hence it is quasi-finite
+(Morphisms, Lemma \ref{morphisms-lemma-quasi-finite-locally-quasi-compact}),
+hence has finite fibres
+(Morphisms, Lemma \ref{morphisms-lemma-quasi-finite}), hence
+is finite by Lemma \ref{lemma-characterize-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ample-on-fibre}
+Let $f : X \to Y$ be a proper morphism of schemes with $Y$ Noetherian.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Let $y \in Y$ be a point such that $\mathcal{L}_y$ is ample on $X_y$.
+Then there exists a $d_0$ such that for all $d \geq d_0$ we have
+$$
+R^pf_*(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d})_y = 0
+\text{ for }p > 0
+$$
+and the map
+$$
+f_*(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d})_y
+\longrightarrow
+H^0(X_y, \mathcal{F}_y \otimes_{\mathcal{O}_{X_y}} \mathcal{L}_y^{\otimes d})
+$$
+is surjective.
+\end{lemma}
+
+\begin{proof}
+Note that $\mathcal{O}_{Y, y}$ is a Noetherian local ring.
+Consider the canonical morphism
+$c : \Spec(\mathcal{O}_{Y, y}) \to Y$, see
+Schemes, Equation (\ref{schemes-equation-canonical-morphism}).
+This is a flat morphism as it identifies local rings.
+Denote momentarily $f' : X' \to \Spec(\mathcal{O}_{Y, y})$
+the base change of $f$ to this local ring. We see that
+$c^*R^pf_*\mathcal{F} = R^pf'_*\mathcal{F}'$ by
+Lemma \ref{lemma-flat-base-change-cohomology}.
+Moreover, the fibres $X_y$ and $X'_y$ are identified.
+Hence we may assume that $Y = \Spec(A)$ is the spectrum of
+a Noetherian local ring $(A, \mathfrak m, \kappa)$ and $y \in Y$
+corresponds to $\mathfrak m$. In this case
+$R^pf_*(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d})_y =
+H^p(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d})$
+for all $p \geq 0$. Denote $f_y : X_y \to \Spec(\kappa)$ the projection.
+
+\medskip\noindent
+Let $B = \text{Gr}_\mathfrak m(A) =
+\bigoplus_{n \geq 0} \mathfrak m^n/\mathfrak m^{n + 1}$.
+Consider the sheaf $\mathcal{B} = f_y^*\widetilde{B}$
+of quasi-coherent graded $\mathcal{O}_{X_y}$-algebras.
+We will use notation as in Section \ref{section-theorem-formal-functions}
+with $I$ replaced by $\mathfrak m$.
+Since $X_y$ is the closed subscheme of $X$ cut out by
+$\mathfrak m\mathcal{O}_X$ we may think of
+$\mathfrak m^n\mathcal{F}/\mathfrak m^{n + 1}\mathcal{F}$
+as a coherent $\mathcal{O}_{X_y}$-module, see
+Lemma \ref{lemma-i-star-equivalence}. Then
+$\bigoplus_{n \geq 0} \mathfrak m^n\mathcal{F}/\mathfrak m^{n + 1}\mathcal{F}$
+is a quasi-coherent graded $\mathcal{B}$-module of finite type
+because it is generated in degree zero over $\mathcal{B}$
and because the degree zero part is
+$\mathcal{F}_y = \mathcal{F}/\mathfrak m \mathcal{F}$
+which is a coherent $\mathcal{O}_{X_y}$-module.
+Hence by Lemma \ref{lemma-graded-finiteness} part (2)
+we see that
+$$
+H^p(X_y, \mathfrak m^n \mathcal{F}/ \mathfrak m^{n + 1}\mathcal{F}
+\otimes_{\mathcal{O}_{X_y}} \mathcal{L}_y^{\otimes d}) = 0
+$$
+for all $p > 0$, $d \geq d_0$, and $n \geq 0$. By
+Lemma \ref{lemma-relative-affine-cohomology}
+this is the same as the statement that
+$
+H^p(X, \mathfrak m^n \mathcal{F}/ \mathfrak m^{n + 1}\mathcal{F}
+\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d}) = 0
+$
+for all $p > 0$, $d \geq d_0$, and $n \geq 0$.
+
+\medskip\noindent
+Consider the short exact sequences
+$$
+0 \to \mathfrak m^n\mathcal{F}/\mathfrak m^{n + 1} \mathcal{F}
+\to \mathcal{F}/\mathfrak m^{n + 1} \mathcal{F}
+\to \mathcal{F}/\mathfrak m^n \mathcal{F} \to 0
+$$
+of coherent $\mathcal{O}_X$-modules. Tensoring with $\mathcal{L}^{\otimes d}$
+is an exact functor and we obtain short exact sequences
+$$
+0 \to
+\mathfrak m^n\mathcal{F}/\mathfrak m^{n + 1} \mathcal{F}
+\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d}
+\to \mathcal{F}/\mathfrak m^{n + 1} \mathcal{F}
+\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d}
+\to \mathcal{F}/\mathfrak m^n \mathcal{F}
+\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d} \to 0
+$$
+Using the long exact cohomology sequence and the vanishing above
+we conclude (using induction) that
+\begin{enumerate}
+\item $H^p(X, \mathcal{F}/\mathfrak m^n \mathcal{F}
+\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d}) = 0$
+for all $p > 0$, $d \geq d_0$, and $n \geq 0$, and
+\item $H^0(X, \mathcal{F}/\mathfrak m^n \mathcal{F}
+\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d}) \to
+H^0(X_y, \mathcal{F}_y \otimes_{\mathcal{O}_{X_y}} \mathcal{L}_y^{\otimes d})$
+is surjective for all $d \geq d_0$ and $n \geq 1$.
+\end{enumerate}
+By the theorem on formal functions (Theorem \ref{theorem-formal-functions})
+we find that the $\mathfrak m$-adic completion of
+$H^p(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d})$
+is zero for all $d \geq d_0$ and $p > 0$.
+Since $H^p(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d})$
+is a finite $A$-module by
+Lemma \ref{lemma-proper-over-affine-cohomology-finite}
+it follows from Nakayama's lemma (Algebra, Lemma \ref{algebra-lemma-NAK})
+that $H^p(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d})$
+is zero for all $d \geq d_0$ and $p > 0$.
+For $p = 0$ we deduce from
+Lemma \ref{lemma-ML-cohomology-powers-ideal} part (3)
+that $H^0(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes d}) \to
+H^0(X_y, \mathcal{F}_y \otimes_{\mathcal{O}_{X_y}} \mathcal{L}_y^{\otimes d})$
+is surjective, which gives the final statement of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ample-in-neighbourhood}
+(For a more general version see
+More on Morphisms,
+Lemma \ref{more-morphisms-lemma-ample-in-neighbourhood}.)
+Let $f : X \to Y$ be a proper morphism of schemes with $Y$ Noetherian.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $y \in Y$ be a point such that $\mathcal{L}_y$ is ample
+on $X_y$. Then there is an open neighbourhood $V \subset Y$
+of $y$ such that $\mathcal{L}|_{f^{-1}(V)}$ is ample on $f^{-1}(V)/V$.
+\end{lemma}
+
+\begin{proof}
+Pick $d_0$ as in Lemma \ref{lemma-ample-on-fibre} for
+$\mathcal{F} = \mathcal{O}_X$. Pick $d \geq d_0$
+so that we can find $r \geq 0$ and sections
+$s_{y, 0}, \ldots, s_{y, r} \in H^0(X_y, \mathcal{L}_y^{\otimes d})$
+which define a closed immersion
+$$
+\varphi_y =
+\varphi_{\mathcal{L}_y^{\otimes d}, (s_{y, 0}, \ldots, s_{y, r})} :
+X_y \to \mathbf{P}^r_{\kappa(y)}.
+$$
+This is possible by Morphisms, Lemma
+\ref{morphisms-lemma-finite-type-over-affine-ample-very-ample}
+but we also use
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}
+to see that $\varphi_y$ is a closed immersion and
+Constructions, Section \ref{constructions-section-projective-space}
+for the description of morphisms into projective
+space in terms of invertible sheaves and sections.
+By our choice of $d_0$, after replacing $Y$ by an open neighbourhood
+of $y$, we can choose
+$s_0, \ldots, s_r \in H^0(X, \mathcal{L}^{\otimes d})$
+mapping to $s_{y, 0}, \ldots, s_{y, r}$.
+Let $X_{s_i} \subset X$ be the open subset where $s_i$
+is a generator of $\mathcal{L}^{\otimes d}$. Since
+the $s_{y, i}$ generate $\mathcal{L}_y^{\otimes d}$ we see that
+$X_y \subset U = \bigcup X_{s_i}$.
+Since $X \to Y$ is closed, we see that
+there is an open neighbourhood $y \in V \subset Y$
+such that $f^{-1}(V) \subset U$.
+After replacing $Y$ by $V$ we may assume that
+the $s_i$ generate $\mathcal{L}^{\otimes d}$. Thus we
+obtain a morphism
+$$
+\varphi = \varphi_{\mathcal{L}^{\otimes d}, (s_0, \ldots, s_r)} :
+X \longrightarrow \mathbf{P}^r_Y
+$$
+with $\mathcal{L}^{\otimes d} \cong \varphi^*\mathcal{O}_{\mathbf{P}^r_Y}(1)$
+whose base change to $y$ gives $\varphi_y$.
+
+\medskip\noindent
+We will finish the proof by a sleight of hand; the ``correct'' proof
+proceeds by directly showing that $\varphi$ is a closed
+immersion after base changing to an open neighbourhood of $y$.
+Namely, by Lemma \ref{lemma-proper-finite-fibre-finite-in-neighbourhood}
we see that $\varphi$ is finite over an open neighbourhood
+of the fibre $\mathbf{P}^r_{\kappa(y)}$ of $\mathbf{P}^r_Y \to Y$
+above $y$. Using that $\mathbf{P}^r_Y \to Y$ is closed, after
+shrinking $Y$ we may assume that $\varphi$ is finite.
+Then $\mathcal{L}^{\otimes d} \cong \varphi^*\mathcal{O}_{\mathbf{P}^r_Y}(1)$
+is ample by the very general
+Morphisms, Lemma \ref{morphisms-lemma-pullback-ample-tensor-relatively-ample}.
+\end{proof}
+
+
+
+
+
+\section{Cohomology and base change, III}
+\label{section-cohomology-and-base-change-perfect}
+
+\noindent
+In this section we prove the simplest case of a very general phenomenon
+that will be discussed in
+Derived Categories of Schemes, Section
+\ref{perfect-section-cohomology-and-base-change-perfect}.
+Please see Remark \ref{remark-explain-perfect-direct-image} for a translation
+of the following lemma into algebra.
+
+\begin{lemma}
+\label{lemma-perfect-direct-image}
+Let $A$ be a Noetherian ring and set $S = \Spec(A)$. Let $f : X \to S$ be a
+proper morphism of schemes. Let $\mathcal{F}$ be a coherent
+$\mathcal{O}_X$-module flat over $S$. Then
+\begin{enumerate}
+\item $R\Gamma(X, \mathcal{F})$ is a perfect object of $D(A)$, and
+\item for any ring map $A \to A'$ the base change map
+$$
+R\Gamma(X, \mathcal{F}) \otimes_A^{\mathbf{L}} A'
+\longrightarrow
+R\Gamma(X_{A'}, \mathcal{F}_{A'})
+$$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose a finite affine open covering $X = \bigcup_{i = 1, \ldots, n} U_i$.
+By Lemmas \ref{lemma-separated-case-relative-cech} and
+\ref{lemma-base-change-complex} the {\v C}ech complex
+$K^\bullet = {\check C}^\bullet(\mathcal{U}, \mathcal{F})$ satisfies
+$$
+K^\bullet \otimes_A A' = R\Gamma(X_{A'}, \mathcal{F}_{A'})
+$$
+for all ring maps $A \to A'$. Let
+$K_{alt}^\bullet = {\check C}_{alt}^\bullet(\mathcal{U}, \mathcal{F})$
+be the alternating {\v C}ech complex. By
+Cohomology, Lemma \ref{cohomology-lemma-alternating-usual}
+there is a homotopy equivalence $K_{alt}^\bullet \to K^\bullet$
+of $A$-modules. In particular, we have
+$$
+K_{alt}^\bullet \otimes_A A' = R\Gamma(X_{A'}, \mathcal{F}_{A'})
+$$
+as well. Since $\mathcal{F}$ is flat over $A$ we see that each $K_{alt}^n$
+is flat over $A$ (see
+Morphisms, Lemma \ref{morphisms-lemma-flat-module-characterize}).
+Since moreover $K_{alt}^\bullet$ is bounded above (this is why we switched
+to the alternating {\v C}ech complex)
+$K_{alt}^\bullet \otimes_A A' = K_{alt}^\bullet \otimes_A^{\mathbf{L}} A'$
+by the definition of derived tensor products (see
+More on Algebra, Section \ref{more-algebra-section-derived-tensor-product}).
+By
+Lemma \ref{lemma-proper-over-affine-cohomology-finite}
+the cohomology groups $H^i(K_{alt}^\bullet)$ are finite $A$-modules.
+As $K_{alt}^\bullet$ is bounded, we conclude that $K_{alt}^\bullet$
+is pseudo-coherent, see
+More on Algebra, Lemma \ref{more-algebra-lemma-Noetherian-pseudo-coherent}.
+Given any $A$-module $M$ set $A' = A \oplus M$ where $M$ is a square zero
+ideal, i.e., $(a, m) \cdot (a', m') = (aa', am' + a'm)$. By the
+above we see that $K_{alt}^\bullet \otimes_A^\mathbf{L} A'$ has cohomology
+in degrees $0, \ldots, n$. Hence $K_{alt}^\bullet \otimes_A^\mathbf{L} M$
+has cohomology in degrees $0, \ldots, n$. Hence $K_{alt}^\bullet$ has
+finite Tor dimension, see
+More on Algebra, Definition \ref{more-algebra-definition-tor-amplitude}.
+We win by More on Algebra, Lemma \ref{more-algebra-lemma-perfect}.
+\end{proof}
+
+\begin{remark}
+\label{remark-explain-perfect-direct-image}
+A consequence of Lemma \ref{lemma-perfect-direct-image} is that there
+exists a finite complex of finite projective $A$-modules $M^\bullet$ such
+that we have
+$$
+H^i(X_{A'}, \mathcal{F}_{A'}) = H^i(M^\bullet \otimes_A A')
+$$
+functorially in $A'$. The condition that $\mathcal{F}$ is
+flat over $A$ is essential, see \cite{Hartshorne}.
+\end{remark}
+
+
+
+
+
+
+
+\section{Coherent formal modules}
+\label{section-coherent-formal}
+
+\noindent
As we do not yet have the theory of formal schemes at our disposal, we
+develop a bit of language that replaces the notion of a ``coherent module on a
+Noetherian adic formal scheme''.
+
+\medskip\noindent
+Let $X$ be a Noetherian scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. We will consider inverse
+systems $(\mathcal{F}_n)$ of coherent $\mathcal{O}_X$-modules such that
+\begin{enumerate}
+\item $\mathcal{F}_n$ is annihilated by $\mathcal{I}^n$, and
+\item the transition maps induce isomorphisms
+$\mathcal{F}_{n + 1}/\mathcal{I}^n\mathcal{F}_{n + 1} \to \mathcal{F}_n$.
+\end{enumerate}
+A morphism of such inverse systems is defined as usual.
+Let us denote the category of these inverse systems with
+$\textit{Coh}(X, \mathcal{I})$. We are going to proceed by proving
+a bunch of lemmas about objects in this category. In fact, most
+of the lemmas that follow are straightforward consequences of the following
+description of the category in the affine case.
+
+\begin{lemma}
+\label{lemma-inverse-systems-affine}
+If $X = \Spec(A)$ is the spectrum of a Noetherian ring and
+$\mathcal{I}$ is the quasi-coherent sheaf of ideals associated to the ideal
+$I \subset A$, then $\textit{Coh}(X, \mathcal{I})$ is equivalent to the
+category of finite $A^\wedge$-modules where $A^\wedge$ is the completion
+of $A$ with respect to $I$.
+\end{lemma}
+
+\begin{proof}
+Let $\text{Mod}^{fg}_{A, I}$ be the category of inverse systems $(M_n)$
+of finite $A$-modules satisfying: (1) $M_n$ is annihilated by $I^n$ and (2)
+$M_{n + 1}/I^nM_{n + 1} = M_n$. By the correspondence between coherent
+sheaves on $X$ and finite $A$-modules (Lemma \ref{lemma-coherent-Noetherian})
+it suffices to show $\text{Mod}^{fg}_{A, I}$ is equivalent to the category of
+finite $A^\wedge$-modules. To see this it suffices to prove that given
+an object $(M_n)$ of $\text{Mod}^{fg}_{A, I}$ the module
+$$
+M = \lim M_n
+$$
+is a finite $A^\wedge$-module and that $M/I^nM = M_n$. As the transition
+maps are surjective, we see that $M \to M_1$ is surjective.
+Pick $x_1, \ldots, x_t \in M$ which map to generators of $M_1$.
+This induces a map of systems $(A/I^n)^{\oplus t} \to M_n$.
+By Nakayama's lemma (Algebra, Lemma \ref{algebra-lemma-NAK}) these maps are
+surjective. Let $K_n \subset (A/I^n)^{\oplus t}$ be the kernel.
+Property (2) implies that $K_{n + 1} \to K_n$ is surjective, in particular
+the system $(K_n)$ satisfies the Mittag-Leffler condition.
+By Homology, Lemma \ref{homology-lemma-Mittag-Leffler}
+we obtain an exact sequence
+$0 \to K \to (A^\wedge)^{\oplus t} \to M \to 0$
+with $K = \lim K_n$.
+Hence $M$ is a finite $A^\wedge$-module.
+As $K \to K_n$ is surjective it follows that
+$$
+M/I^nM = \Coker(K \to (A/I^n)^{\oplus t}) =
+(A/I^n)^{\oplus t}/K_n = M_n
+$$
+as desired.
+\end{proof}
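\noindent
For example, take $A = k[x]$ and $I = (x)$ for a field $k$. Then the lemma
says that the category of inverse systems $(M_n)$ of finite $k[x]$-modules
with $M_n$ killed by $x^n$ and $M_{n + 1}/x^nM_{n + 1} = M_n$ is equivalent
to the category of finite modules over
$$
A^\wedge = \lim_n k[x]/(x^n) = k[[x]].
$$
Under this equivalence the system $(k[x]/(x^n))$ corresponds to the
$k[[x]]$-module $k[[x]]$ itself.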
+
+\begin{lemma}
+\label{lemma-inverse-systems-abelian}
+Let $X$ be a Noetherian scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals.
+\begin{enumerate}
+\item The category $\textit{Coh}(X, \mathcal{I})$ is abelian.
+\item For $U \subset X$ open the restriction functor
+$\textit{Coh}(X, \mathcal{I}) \to \textit{Coh}(U, \mathcal{I}|_U)$
+is exact.
+\item Exactness in $\textit{Coh}(X, \mathcal{I})$ may be checked by
+restricting to the members of an open covering of $X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\alpha =(\alpha_n) : (\mathcal{F}_n) \to (\mathcal{G}_n)$ be a morphism of
+$\textit{Coh}(X, \mathcal{I})$. The cokernel of $\alpha$ is the inverse system
+$(\Coker(\alpha_n))$ (details omitted). To describe the kernel let
+$$
+\mathcal{K}'_{l, m} = \Im(\Ker(\alpha_l) \to \mathcal{F}_m)
+$$
+for $l \geq m$.
+We claim:
+\begin{enumerate}
+\item[(a)] the inverse system $(\mathcal{K}'_{l, m})_{l \geq m}$ is
+eventually constant, say with value $\mathcal{K}'_m$,
+\item[(b)] the system $(\mathcal{K}'_m/\mathcal{I}^n\mathcal{K}'_m)_{m \geq n}$
+is eventually constant, say with value $\mathcal{K}_n$,
+\item[(c)] the system $(\mathcal{K}_n)$ forms an object of
+$\textit{Coh}(X, \mathcal{I})$, and
+\item[(d)] this object is the kernel of $\alpha$.
+\end{enumerate}
+To see (a), (b), and (c) we may work affine locally, say $X = \Spec(A)$
+and $\mathcal{I}$ corresponds to the ideal $I \subset A$. By
+Lemma \ref{lemma-inverse-systems-affine}
+$\alpha$ corresponds to a map $f : M \to N$ of finite $A^\wedge$-modules.
+Denote $K = \Ker(f)$. Note that $A^\wedge$ is a Noetherian
+ring (Algebra, Lemma \ref{algebra-lemma-completion-Noetherian-Noetherian}).
+Choose an integer $c \geq 0$ such that
+$K \cap I^n M \subset I^{n - c}K$ for $n \geq c$
+(Algebra, Lemma \ref{algebra-lemma-Artin-Rees})
+and which satisfies Algebra, Lemma \ref{algebra-lemma-map-AR}
+for the map $f$ and the ideal $I^\wedge = IA^\wedge$. Then
+$\mathcal{K}'_{l, m}$ corresponds to the $A$-module
+$$
K'_{l, m} = \frac{f^{-1}(I^lN) + I^mM}{I^mM} =
+\frac{K + I^{l - c}f^{-1}(I^cN) + I^mM}{I^mM} =
+\frac{K + I^mM}{I^mM}
+$$
+where the last equality holds if $l \geq m + c$. So $\mathcal{K}'_m$
+corresponds to the $A$-module $K/K \cap I^mM$ and
+$\mathcal{K}'_m/\mathcal{I}^n\mathcal{K}'_m$ corresponds to
+$$
+\frac{K}{K \cap I^mM + I^nK} = \frac{K}{I^nK}
+$$
+for $m \geq n + c$ by our choice of $c$ above. Hence $\mathcal{K}_n$
+corresponds to $K/I^nK$.
+
+\medskip\noindent
+We prove (d). It is clear from the description on affines above that
+the composition $(\mathcal{K}_n) \to (\mathcal{F}_n) \to (\mathcal{G}_n)$
+is zero. Let $\beta : (\mathcal{H}_n) \to (\mathcal{F}_n)$
+be a morphism such that $\alpha \circ \beta = 0$. Then
+$\mathcal{H}_l \to \mathcal{F}_l$ maps into $\Ker(\alpha_l)$.
+Since $\mathcal{H}_m = \mathcal{H}_l/\mathcal{I}^m\mathcal{H}_l$
+for $l \geq m$ we obtain a system of maps
+$\mathcal{H}_m \to \mathcal{K}'_{l, m}$. Thus a map
$\mathcal{H}_m \to \mathcal{K}'_m$. Since
+$\mathcal{H}_n = \mathcal{H}_m/\mathcal{I}^n\mathcal{H}_m$ we obtain
+a system of maps $\mathcal{H}_n \to \mathcal{K}'_m/\mathcal{I}^n\mathcal{K}'_m$
+and hence a map $\mathcal{H}_n \to \mathcal{K}_n$ as desired.
+
+\medskip\noindent
+To finish the proof of (1) we still have to show that $\Coim = \Im$
+in $\textit{Coh}(X, \mathcal{I})$. We have seen above that taking
+kernels and cokernels commutes, over affines, with the description
+of $\textit{Coh}(X, \mathcal{I})$ as a category of modules. Since
+$\Im = \Coim$ holds in the category of modules
+this gives $\Coim = \Im$ in $\textit{Coh}(X, \mathcal{I})$.
+Parts (2) and (3) of the lemma are immediate from our construction
+of kernels and cokernels.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inverse-systems-surjective}
+Let $X$ be a Noetherian scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. A map
+$(\mathcal{F}_n) \to (\mathcal{G}_n)$ is surjective in
+$\textit{Coh}(X, \mathcal{I})$
+if and only if $\mathcal{F}_1 \to \mathcal{G}_1$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Look on affine opens, use
+Lemma \ref{lemma-inverse-systems-affine}, and use
+Algebra, Lemma \ref{algebra-lemma-NAK}.
+\end{proof}
+
+\noindent
+Let $X$ be a Noetherian scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. There is a functor
+\begin{equation}
+\label{equation-completion-functor}
+\textit{Coh}(\mathcal{O}_X) \longrightarrow \textit{Coh}(X, \mathcal{I}), \quad
+\mathcal{F} \longmapsto \mathcal{F}^\wedge
+\end{equation}
+which associates to the coherent $\mathcal{O}_X$-module $\mathcal{F}$
+the object $\mathcal{F}^\wedge = (\mathcal{F}/\mathcal{I}^n\mathcal{F})$
+of $\textit{Coh}(X, \mathcal{I})$.
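+
+\medskip\noindent
+For example, suppose $X = \Spec(A)$ is affine with $\mathcal{I}$
+corresponding to an ideal $I \subset A$ and $\mathcal{F}$ to a finite
+$A$-module $M$. Then, via the equivalence of
+Lemma \ref{lemma-inverse-systems-affine}, the object
+$\mathcal{F}^\wedge = (\mathcal{F}/\mathcal{I}^n\mathcal{F})$
+corresponds to the finite $A^\wedge$-module
+$$
+M^\wedge = \lim M/I^nM = M \otimes_A A^\wedge.
+$$
+Concretely, for $A = \mathbf{Z}$, $I = (p)$, and $M = \mathbf{Z}$
+the object $\mathcal{F}^\wedge$ corresponds to the ring of $p$-adic
+integers $\mathbf{Z}_p = \lim \mathbf{Z}/p^n\mathbf{Z}$.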
+
+\begin{lemma}
+\label{lemma-exact}
+The functor (\ref{equation-completion-functor}) is exact.
+\end{lemma}
+
+\begin{proof}
+It suffices to check this locally on $X$. Hence we may assume $X$ is
+affine, i.e., we have a situation as in
+Lemma \ref{lemma-inverse-systems-affine}.
+The functor is the functor $\text{Mod}^{fg}_A \to \text{Mod}^{fg}_{A^\wedge}$
+which associates to a finite $A$-module $M$ the completion $M^\wedge$.
+Thus the result follows from
+Algebra, Lemma \ref{algebra-lemma-completion-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-internal-hom}
+Let $X$ be a Noetherian scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. Let $\mathcal{F}$, $\mathcal{G}$ be
+coherent $\mathcal{O}_X$-modules. Set
+$\mathcal{H} = \SheafHom_{\mathcal{O}_X}(\mathcal{G}, \mathcal{F})$.
+Then
+$$
+\lim H^0(X, \mathcal{H}/\mathcal{I}^n\mathcal{H}) =
+\Mor_{\textit{Coh}(X, \mathcal{I})}
+(\mathcal{G}^\wedge, \mathcal{F}^\wedge).
+$$
+\end{lemma}
+
+\begin{proof}
+To prove this we may work affine locally on $X$.
+Hence we may assume $X = \Spec(A)$ with $\mathcal{G}$, $\mathcal{F}$
+given by finite $A$-modules $M$ and $N$. Then $\mathcal{H}$
+corresponds to the finite $A$-module $H = \Hom_A(M, N)$.
+The statement of the lemma becomes the statement
+$$
+H^\wedge = \Hom_{A^\wedge}(M^\wedge, N^\wedge)
+$$
+via the equivalence of Lemma \ref{lemma-inverse-systems-affine}.
+By Algebra, Lemma \ref{algebra-lemma-completion-flat}
+(used 3 times) we have
+$$
+H^\wedge = \Hom_A(M, N) \otimes_A A^\wedge =
+\Hom_{A^\wedge}(M \otimes_A A^\wedge, N \otimes_A A^\wedge) =
+\Hom_{A^\wedge}(M^\wedge, N^\wedge)
+$$
+where the second equality uses that $A^\wedge$ is flat over $A$
+(see More on Algebra, Lemma
+\ref{more-algebra-lemma-pseudo-coherence-and-base-change-ext}).
+The lemma follows.
+\end{proof}
+
+\noindent
+Let $X$ be a Noetherian scheme. Let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. We say an object $(\mathcal{F}_n)$ of
+$\textit{Coh}(X, \mathcal{I})$ is {\it $\mathcal{I}$-power torsion}
+or is {\it annihilated by a power of $\mathcal{I}$} if there exists
+a $c \geq 1$ such that $\mathcal{F}_n = \mathcal{F}_c$ for all $n \geq c$.
+If this is the case we will say that $(\mathcal{F}_n)$ is {\it annihilated by
+$\mathcal{I}^c$}. If $X = \Spec(A)$ is affine, then, via the equivalence of
+Lemma \ref{lemma-inverse-systems-affine},
+these objects correspond exactly to the finite $A$-modules
+annihilated by a power of $I$ or by $I^c$.
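+
+\medskip\noindent
+For example, take $X = \Spec(\mathbf{Z})$ and let $\mathcal{I}$
+correspond to $I = (p)$ for a prime $p$. The system $(\mathcal{F}_n)$
+with $\mathcal{F}_n$ the coherent module associated to
+$\mathbf{Z}/p^{\min(n, c)}\mathbf{Z}$ and the obvious transition maps
+is an object of $\textit{Coh}(X, \mathcal{I})$ satisfying
+$\mathcal{F}_n = \mathcal{F}_c$ for $n \geq c$. It is annihilated by
+$\mathcal{I}^c$ and corresponds to the finite $\mathbf{Z}$-module
+$\mathbf{Z}/p^c\mathbf{Z}$.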
+
+\begin{lemma}
+\label{lemma-existence-easy}
+Let $X$ be a Noetherian scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. Let $\mathcal{G}$ be a coherent
+$\mathcal{O}_X$-module. Let $(\mathcal{F}_n)$ be an object of
+$\textit{Coh}(X, \mathcal{I})$.
+\begin{enumerate}
+\item If $\alpha : (\mathcal{F}_n) \to \mathcal{G}^\wedge$ is
+a map whose kernel and cokernel are annihilated by a power of $\mathcal{I}$,
+then there exists a unique (up to unique isomorphism) triple
+$(\mathcal{F}, a, \beta)$ where
+\begin{enumerate}
+\item $\mathcal{F}$ is a coherent $\mathcal{O}_X$-module,
+\item $a : \mathcal{F} \to \mathcal{G}$ is an $\mathcal{O}_X$-module map
+whose kernel and cokernel are annihilated by a power of $\mathcal{I}$,
+\item $\beta : (\mathcal{F}_n) \to \mathcal{F}^\wedge$ is an isomorphism, and
+\item $\alpha = a^\wedge \circ \beta$.
+\end{enumerate}
+\item If $\alpha : \mathcal{G}^\wedge \to (\mathcal{F}_n)$ is
+a map whose kernel and cokernel are annihilated by a power of $\mathcal{I}$,
+then there exists a unique (up to unique isomorphism) triple
+$(\mathcal{F}, a, \beta)$ where
+\begin{enumerate}
+\item $\mathcal{F}$ is a coherent $\mathcal{O}_X$-module,
+\item $a : \mathcal{G} \to \mathcal{F}$ is an $\mathcal{O}_X$-module map
+whose kernel and cokernel are annihilated by a power of $\mathcal{I}$,
+\item $\beta : \mathcal{F}^\wedge \to (\mathcal{F}_n)$ is an isomorphism, and
+\item $\alpha = \beta \circ a^\wedge$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). The uniqueness implies that it suffices to construct
+$(\mathcal{F}, a, \beta)$
+Zariski locally on $X$. Thus we may assume $X = \Spec(A)$ and $\mathcal{I}$
+corresponds to the ideal $I \subset A$. In this situation
+Lemma \ref{lemma-inverse-systems-affine} applies.
+Let $M'$ be the finite $A^\wedge$-module corresponding
+to $(\mathcal{F}_n)$. Let $N$ be the finite $A$-module corresponding to
+$\mathcal{G}$. Then $\alpha$ corresponds to a map
+$$
+\varphi : M' \longrightarrow N^\wedge
+$$
+whose kernel and cokernel are annihilated by $I^t$ for some $t$. Recall that
+$N^\wedge = N \otimes_A A^\wedge$
+(Algebra, Lemma \ref{algebra-lemma-completion-tensor}).
+By More on Algebra, Lemma \ref{more-algebra-lemma-application-formal-glueing}
+there is an $A$-module map $\psi : M \to N$ whose kernel and cokernel are
+$I$-power torsion and an isomorphism
+$M \otimes_A A^\wedge = M'$ compatible with $\varphi$.
+As $N$ and $M'$ are finite modules, we conclude that $M$
+is a finite $A$-module, see
+More on Algebra, Remark \ref{more-algebra-remark-formal-glueing-algebras}.
+Hence $M \otimes_A A^\wedge = M^\wedge$. We omit the verification
+that the triple $(M, M \to N, M^\wedge \to M')$ so obtained
+is unique up to unique isomorphism.
+
+\medskip\noindent
+The proof of (2) is exactly the same and we omit it.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion-hom-ext}
+Let $X$ be a Noetherian scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. Any object of
+$\textit{Coh}(X, \mathcal{I})$ which is annihilated
+by a power of $\mathcal{I}$ is in the essential image of
+(\ref{equation-completion-functor}).
+Moreover, if $\mathcal{F}$, $\mathcal{G}$ are in $\textit{Coh}(\mathcal{O}_X)$
+and either $\mathcal{F}$ or $\mathcal{G}$ is annihilated by a power of
+$\mathcal{I}$, then the maps
+$$
+\xymatrix{
+\Hom_X(\mathcal{F}, \mathcal{G}) \ar[d] &
+\Ext_X(\mathcal{F}, \mathcal{G}) \ar[d] \\
+\Hom_{\textit{Coh}(X, \mathcal{I})}(\mathcal{F}^\wedge, \mathcal{G}^\wedge) &
+\Ext_{\textit{Coh}(X, \mathcal{I})}(\mathcal{F}^\wedge, \mathcal{G}^\wedge)
+}
+$$
+are isomorphisms.
+\end{lemma}
+
+\begin{proof}
+Suppose $(\mathcal{F}_n)$ is an object of $\textit{Coh}(X, \mathcal{I})$
+which is annihilated by $\mathcal{I}^c$ for some $c \geq 1$. Then
+$\mathcal{F}_n \to \mathcal{F}_c$ is an isomorphism for $n \geq c$.
+Hence if we set $\mathcal{F} = \mathcal{F}_c$, then we see that
+$\mathcal{F}^\wedge \cong (\mathcal{F}_n)$. This proves the first assertion.
+
+\medskip\noindent
+Let $\mathcal{F}$, $\mathcal{G}$ be objects of $\textit{Coh}(\mathcal{O}_X)$
+such that either $\mathcal{F}$ or $\mathcal{G}$ is annihilated by
+$\mathcal{I}^c$ for some $c \geq 1$. Then
+$\mathcal{H} = \SheafHom_{\mathcal{O}_X}(\mathcal{G}, \mathcal{F})$
+is a coherent $\mathcal{O}_X$-module annihilated by $\mathcal{I}^c$.
+Hence we see that
+$$
+\Hom_X(\mathcal{G}, \mathcal{F}) =
+H^0(X, \mathcal{H}) =
+\lim H^0(X, \mathcal{H}/\mathcal{I}^n\mathcal{H}) =
+\Mor_{\textit{Coh}(X, \mathcal{I})}
+(\mathcal{G}^\wedge, \mathcal{F}^\wedge).
+$$
+See Lemma \ref{lemma-completion-internal-hom}.
+This proves the statement on homomorphisms.
+
+\medskip\noindent
+The notation $\Ext$ refers to extensions as defined in
+Homology, Section \ref{homology-section-extensions}.
+The injectivity of the map on $\Ext$'s follows immediately
+from the bijectivity of the map on $\Hom$'s.
+For surjectivity, assume $\mathcal{F}$ is annihilated
+by a power of $\mathcal{I}$. Then part (2) of
+Lemma \ref{lemma-existence-easy}
+shows that given an extension
+$$
+0 \to \mathcal{G}^\wedge \to (\mathcal{E}_n) \to \mathcal{F}^\wedge \to 0
+$$
+in $\textit{Coh}(X, \mathcal{I})$
+the morphism $\mathcal{G}^\wedge \to (\mathcal{E}_n)$ is
+isomorphic to $\mathcal{G}^\wedge \to \mathcal{E}^\wedge$
+for some $\mathcal{G} \to \mathcal{E}$ in $\textit{Coh}(\mathcal{O}_X)$.
+Similarly in the other case.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-over-rees-algebra}
+Let $X$ be a Noetherian scheme and let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. If $(\mathcal{F}_n)$ is an object of
+$\textit{Coh}(X, \mathcal{I})$ then
+$\bigoplus \Ker(\mathcal{F}_{n + 1} \to \mathcal{F}_n)$ is
+a finite type, graded, quasi-coherent
+$\bigoplus \mathcal{I}^n/\mathcal{I}^{n + 1}$-module.
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$ hence we may assume $X$ is affine, i.e.,
+we have a situation as in Lemma \ref{lemma-inverse-systems-affine}.
+In this case, if $(\mathcal{F}_n)$ corresponds to the finite
+$A^\wedge$-module $M$, then $\bigoplus \Ker(\mathcal{F}_{n + 1} \to \mathcal{F}_n)$
+corresponds to $\bigoplus I^nM/I^{n + 1}M$ which is clearly a finite
+module over $\bigoplus I^n/I^{n + 1}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inverse-systems-pullback}
+Let $f : X \to Y$ be a morphism of Noetherian schemes.
+Let $\mathcal{J} \subset \mathcal{O}_Y$ be a quasi-coherent sheaf
+of ideals and set $\mathcal{I} = f^{-1}\mathcal{J} \mathcal{O}_X$.
+Then there is a right exact functor
+$$
+f^* : \textit{Coh}(Y, \mathcal{J}) \longrightarrow \textit{Coh}(X, \mathcal{I})
+$$
+which sends $(\mathcal{G}_n)$ to $(f^*\mathcal{G}_n)$. If $f$ is flat,
+then $f^*$ is an exact functor.
+\end{lemma}
+
+\begin{proof}
+Since $f^* : \textit{Coh}(\mathcal{O}_Y) \to \textit{Coh}(\mathcal{O}_X)$
+is right exact we have
+$$
+f^*\mathcal{G}_n =
+f^*(\mathcal{G}_{n + 1}/\mathcal{J}^n\mathcal{G}_{n + 1}) =
+f^*\mathcal{G}_{n + 1}/f^{-1}\mathcal{J}^nf^*\mathcal{G}_{n + 1} =
+f^*\mathcal{G}_{n + 1}/\mathcal{I}^nf^*\mathcal{G}_{n + 1}
+$$
+hence the pullback of a system is a system. The construction of
+cokernels in the proof of Lemma \ref{lemma-inverse-systems-abelian}
+shows that
+$f^* : \textit{Coh}(Y, \mathcal{J}) \to \textit{Coh}(X, \mathcal{I})$
+is always right exact. If $f$ is flat, then
+$f^* : \textit{Coh}(\mathcal{O}_Y) \to \textit{Coh}(\mathcal{O}_X)$
+is an exact functor. It follows from the construction of kernels
+in the proof of Lemma \ref{lemma-inverse-systems-abelian}
+that in this case
+$f^* : \textit{Coh}(Y, \mathcal{J}) \to \textit{Coh}(X, \mathcal{I})$
+also transforms kernels into kernels.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inverse-systems-pullback-equivalence}
+Let $f : X' \to X$ be a morphism of Noetherian schemes. Let $Z \subset X$
+be a closed subscheme and denote $Z' = f^{-1}Z$ the scheme theoretic
+inverse image. Let $\mathcal{I} \subset \mathcal{O}_X$,
+$\mathcal{I}' \subset \mathcal{O}_{X'}$ be the corresponding
+quasi-coherent sheaves of ideals.
+If $f$ is flat and the induced morphism $Z' \to Z$
+is an isomorphism, then the pullback functor
+$f^* : \textit{Coh}(X, \mathcal{I}) \to \textit{Coh}(X', \mathcal{I}')$
+(Lemma \ref{lemma-inverse-systems-pullback})
+is an equivalence.
+\end{lemma}
+
+\begin{proof}
+If $X$ and $X'$ are affine, then this follows immediately from
+More on Algebra, Lemma \ref{more-algebra-lemma-neighbourhood-equivalence}.
+To prove it in general we let $Z_n \subset X$, $Z'_n \subset X'$
+be the $n$th infinitesimal neighbourhoods of $Z$, $Z'$.
+The induced morphism $Z'_n \to Z_n$ is a homeomorphism on
+underlying topological spaces. On the other hand, if $z' \in Z'$
+maps to $z \in Z$, then the ring map
+$\mathcal{O}_{X, z} \to \mathcal{O}_{X', z'}$ is flat
+and induces an isomorphism
+$\mathcal{O}_{X, z}/\mathcal{I}_z \to \mathcal{O}_{X', z'}/\mathcal{I}'_{z'}$.
+Hence it induces an isomorphism
+$\mathcal{O}_{X, z}/\mathcal{I}_z^n \to
+\mathcal{O}_{X', z'}/(\mathcal{I}'_{z'})^n$
+for all $n \geq 1$ for example by
+More on Algebra, Lemma \ref{more-algebra-lemma-neighbourhood-isomorphism}.
+Thus $Z'_n \to Z_n$ is an isomorphism of schemes.
+Thus $f^*$ induces an equivalence between the
+category of coherent $\mathcal{O}_X$-modules annihilated by $\mathcal{I}^n$
+and the
+category of coherent $\mathcal{O}_{X'}$-modules annihilated by
+$(\mathcal{I}')^n$, see
+Lemma \ref{lemma-i-star-equivalence}.
+This clearly implies the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inverse-systems-ideals-equivalence}
+Let $X$ be a Noetherian scheme. Let
+$\mathcal{I}, \mathcal{J} \subset \mathcal{O}_X$
+be quasi-coherent sheaves of ideals.
+If $V(\mathcal{I}) = V(\mathcal{J})$ is the same closed subset
+of $X$, then $\textit{Coh}(X, \mathcal{I})$ and $\textit{Coh}(X, \mathcal{J})$
+are equivalent.
+\end{lemma}
+
+\begin{proof}
+First, assume $X = \Spec(A)$ is affine. Let $I, J \subset A$ be the ideals
+corresponding to $\mathcal{I}, \mathcal{J}$. Then $V(I) = V(J)$
+implies we have $I^c \subset J$ and $J^d \subset I$ for some $c, d \geq 1$
+by elementary properties of the Zariski topology
+(see Algebra, Section \ref{algebra-section-spectrum-ring} and
+Lemma \ref{algebra-lemma-Noetherian-power}).
+Hence the $I$-adic and $J$-adic completions of $A$ agree, see
+Algebra, Lemma \ref{algebra-lemma-change-ideal-completion}.
+Thus the equivalence follows from Lemma \ref{lemma-inverse-systems-affine}
+in this case.
+
+\medskip\noindent
+In general, using what we said above and the fact that
+$X$ is quasi-compact, we may choose $c, d \geq 1$ such that
+$\mathcal{I}^c \subset \mathcal{J}$ and $\mathcal{J}^d \subset \mathcal{I}$.
+Then given an object $(\mathcal{F}_n)$ in
+$\textit{Coh}(X, \mathcal{I})$ we claim that the
+inverse system
+$$
+(\mathcal{F}_{cn}/\mathcal{J}^n\mathcal{F}_{cn})
+$$
+is in $\textit{Coh}(X, \mathcal{J})$. This may be checked on the members
+of an affine covering: with notation as in
+Lemma \ref{lemma-inverse-systems-affine}, if $(\mathcal{F}_n)$ corresponds
+to the finite $A^\wedge$-module $M'$, then the system above corresponds to
+$M'/(I^{cn} + J^n)M' = M'/J^nM'$, where the equality holds because
+$I^{cn} \subset J^n$.
+In the same manner we can construct an object of
+$\textit{Coh}(X, \mathcal{I})$ starting with an object of
+$\textit{Coh}(X, \mathcal{J})$. We omit the verification
+that these constructions define mutually quasi-inverse functors.
+\end{proof}
+
+
+
+
+
+\section{Grothendieck's existence theorem, I}
+\label{section-existence}
+
+\noindent
+In this section we discuss Grothendieck's existence theorem for the
+projective case. We will use the notion of coherent formal modules
+developed in Section \ref{section-coherent-formal}. The reader who is familiar
+with formal schemes is encouraged to read the statement and proof
+of the theorem in \cite{EGA}.
+
+\begin{lemma}
+\label{lemma-fully-faithful}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Let $f : X \to \Spec(A)$ be a proper morphism. Let
+$\mathcal{I} = I\mathcal{O}_X$.
+Then the functor (\ref{equation-completion-functor}) is fully faithful.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$, $\mathcal{G}$ be coherent $\mathcal{O}_X$-modules.
+Then $\mathcal{H} = \SheafHom_{\mathcal{O}_X}(\mathcal{G}, \mathcal{F})$
+is a coherent $\mathcal{O}_X$-module, see
+Modules, Lemma \ref{modules-lemma-internal-hom-locally-kernel-direct-sum}.
+By Lemma \ref{lemma-completion-internal-hom} the map
+$$
+\lim_n H^0(X, \mathcal{H}/\mathcal{I}^n\mathcal{H})
+\to
+\Mor_{\textit{Coh}(X, \mathcal{I})}
+(\mathcal{G}^\wedge, \mathcal{F}^\wedge)
+$$
+is bijective. Hence full faithfulness of
+(\ref{equation-completion-functor}) follows from the theorem on formal
+functions (Lemma \ref{lemma-spell-out-theorem-formal-functions})
+for the coherent sheaf $\mathcal{H}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-projective}
+Let $A$ be a Noetherian ring and $I \subset A$ an ideal.
+Let $f : X \to \Spec(A)$ be a proper morphism and let
+$\mathcal{L}$ be an $f$-ample invertible sheaf. Let
+$\mathcal{I} = I\mathcal{O}_X$. Let $(\mathcal{F}_n)$ be an
+object of $\textit{Coh}(X, \mathcal{I})$. Then there exists an
+integer $d_0$ such that
+$$
+H^1(X, \Ker(\mathcal{F}_{n + 1} \to \mathcal{F}_n)
+\otimes \mathcal{L}^{\otimes d} )
+= 0
+$$
+for all $n \geq 0$ and all $d \geq d_0$.
+\end{lemma}
+
+\begin{proof}
+Set $B = \bigoplus I^n/I^{n + 1}$ and
+$\mathcal{B} = \bigoplus \mathcal{I}^n/\mathcal{I}^{n + 1} = f^*\widetilde{B}$.
+By Lemma \ref{lemma-finite-over-rees-algebra} the graded quasi-coherent
+$\mathcal{B}$-module
+$\mathcal{G} = \bigoplus \Ker(\mathcal{F}_{n + 1} \to \mathcal{F}_n)$
+is of finite type. Hence the lemma follows from
+Lemma \ref{lemma-graded-finiteness} part (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-existence-projective}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Let $f : X \to \Spec(A)$ be a projective morphism. Let
+$\mathcal{I} = I\mathcal{O}_X$.
+Then the functor (\ref{equation-completion-functor}) is an equivalence.
+\end{lemma}
+
+\begin{proof}
+We have already seen that (\ref{equation-completion-functor}) is
+fully faithful in Lemma \ref{lemma-fully-faithful}. Thus it suffices
+to show that the functor is essentially surjective.
+
+\medskip\noindent
+We first show that every object $(\mathcal{F}_n)$ of
+$\textit{Coh}(X, \mathcal{I})$ is the quotient of an object
+in the image of (\ref{equation-completion-functor}).
+Let $\mathcal{L}$ be an $f$-ample invertible sheaf on $X$.
+Choose $d_0$ as in Lemma \ref{lemma-vanishing-projective}.
+Choose a $d \geq d_0$ such that
+$\mathcal{F}_1 \otimes \mathcal{L}^{\otimes d}$
+is globally generated by some sections $s_{1, 1}, \ldots, s_{t, 1}$.
+Since the transition maps of the system
+$$
+H^0(X, \mathcal{F}_{n + 1} \otimes \mathcal{L}^{\otimes d})
+\longrightarrow
+H^0(X, \mathcal{F}_n \otimes \mathcal{L}^{\otimes d})
+$$
+are surjective by the vanishing of $H^1$ we can lift
+$s_{1, 1}, \ldots, s_{t, 1}$ to a compatible system of global sections
+$s_{1, n}, \ldots, s_{t, n}$ of
+$\mathcal{F}_n \otimes \mathcal{L}^{\otimes d}$.
+These determine a compatible system of maps
+$$
+(s_{1, n}, \ldots, s_{t, n}) :
+(\mathcal{L}^{\otimes -d})^{\oplus t} \longrightarrow \mathcal{F}_n
+$$
+Using Lemma \ref{lemma-inverse-systems-surjective}
+we deduce that we have a surjective map
+$$
+\left((\mathcal{L}^{\otimes -d})^{\oplus t}\right)^\wedge
+\longrightarrow
+(\mathcal{F}_n)
+$$
+as desired.
+
+\medskip\noindent
+The result of the previous paragraph and the fact that
+$\textit{Coh}(X, \mathcal{I})$ is abelian
+(Lemma \ref{lemma-inverse-systems-abelian})
+implies that
+every object of $\textit{Coh}(X, \mathcal{I})$ is a cokernel
+of a map between objects coming from $\textit{Coh}(\mathcal{O}_X)$.
+As (\ref{equation-completion-functor}) is fully faithful and exact by
+Lemmas \ref{lemma-fully-faithful} and \ref{lemma-exact}
+we conclude.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Grothendieck's existence theorem, II}
+\label{section-existence-proper}
+
+\noindent
+In this section we discuss Grothendieck's existence theorem in the proper case.
+Before we give the statement and proof, we need to develop a bit
+more theory regarding the categories $\textit{Coh}(X, \mathcal{I})$
+of coherent formal modules
+introduced in Section \ref{section-coherent-formal}.
+
+\begin{remark}
+\label{remark-inverse-systems-kernel-cokernel-annihilated-by}
+Let $X$ be a Noetherian scheme and let
+$\mathcal{I}, \mathcal{K} \subset \mathcal{O}_X$
+be quasi-coherent sheaves of ideals. Let
+$\alpha : (\mathcal{F}_n) \to (\mathcal{G}_n)$ be a morphism of
+$\textit{Coh}(X, \mathcal{I})$.
+Given an affine open $\Spec(A) = U \subset X$ with
+$\mathcal{I}|_U, \mathcal{K}|_U$ corresponding to ideals $I, K \subset A$
+denote by $\alpha_U : M \to N$ the map of finite $A^\wedge$-modules which
+corresponds to $\alpha|_U$ via Lemma \ref{lemma-inverse-systems-affine}.
+We claim the following are equivalent
+\begin{enumerate}
+\item there exists an integer $t \geq 1$ such that
+$\Ker(\alpha_n)$ and $\Coker(\alpha_n)$
+are annihilated by $\mathcal{K}^t$ for all $n \geq 1$,
+\item for any affine open $\Spec(A) = U \subset X$ as above
+the modules $\Ker(\alpha_U)$ and $\Coker(\alpha_U)$
+are annihilated by $K^t$ for some integer $t \geq 1$, and
+\item there exists a finite affine open covering $X = \bigcup U_i$
+such that the conclusion of (2) holds for $\alpha_{U_i}$.
+\end{enumerate}
+If these equivalent conditions hold we will say that
+$\alpha$ is a
+{\it map whose kernel and cokernel are annihilated by a power of
+$\mathcal{K}$}.
+To see the equivalence we use the following commutative algebra fact:
+suppose given an exact sequence
+$$
+0 \to T \to M \to N \to Q \to 0
+$$
+of $A$-modules with $T$ and $Q$ annihilated by $K^t$ for some
+ideal $K \subset A$. Then for every $f, g \in K^t$ there exists a
+canonical map $"fg": N \to M$ such that $M \to N \to M$ is equal to
+multiplication by $fg$. Namely, for $y \in N$ the image of $fy$ in $Q$
+is zero as $f \in K^t$ kills $Q$, so we can pick $x \in M$ mapping to
+$fy$ in $N$ and set $"fg"(y) = gx$; this is independent of the choice
+of $x$ because two choices differ by an element of $T$, which
+$g \in K^t$ kills. Thus it is
+clear that $\Ker(M/JM \to N/JN)$ and $\Coker(M/JM \to N/JN)$
+are annihilated by $K^{2t}$ for any ideal $J \subset A$.
+
+\medskip\noindent
+Applying the commutative algebra fact to $\alpha_{U_i}$ and $J = I^n$
+we see that (3) implies (1). Conversely,
+suppose (1) holds and $M \to N$ is equal to $\alpha_U$. Then there is
+a $t \geq 1$ such that
+$\Ker(M/I^nM \to N/I^nN)$ and $\Coker(M/I^nM \to N/I^nN)$
+are annihilated by $K^t$ for all $n$. We obtain maps
+$"fg" : N/I^nN \to M/I^nM$ which in the limit induce a map $N \to M$
+as $N$ and $M$ are $I$-adically complete. Since the composition
+$N \to M \to N$ is multiplication by $fg$ we conclude that $fg$
+annihilates $T$ and $Q$. In other words $T$ and $Q$ are annihilated by
+$K^{2t}$ as desired.
+\end{remark}
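+
+\medskip\noindent
+As a simple illustration of the commutative algebra fact in the remark
+above, take the exact sequence of $\mathbf{Z}$-modules
+$$
+0 \to \mathbf{Z}/p\mathbf{Z} \to \mathbf{Z}/p^2\mathbf{Z}
+\xrightarrow{p} \mathbf{Z}/p^2\mathbf{Z} \to \mathbf{Z}/p\mathbf{Z} \to 0
+$$
+so that $T$ and $Q$ are annihilated by $K = (p)$, i.e., $t = 1$.
+For $f = g = p$ the recipe gives $"fg"(y) = py$: indeed $fy = py$
+lifts to $x = y$ because $M \to N$ is multiplication by $p$. The
+composition $M \to N \to M$ is then multiplication by $fg = p^2$,
+which is zero on $M = \mathbf{Z}/p^2\mathbf{Z}$, as it must be.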
+
+\begin{lemma}
+\label{lemma-existence-tricky}
+Let $X$ be a Noetherian scheme. Let
+$\mathcal{I}, \mathcal{K} \subset \mathcal{O}_X$
+be quasi-coherent sheaves of ideals.
+Let $X_e \subset X$ be the closed subscheme cut out by $\mathcal{K}^e$.
+Let $\mathcal{I}_e = \mathcal{I}\mathcal{O}_{X_e}$.
+Let $(\mathcal{F}_n)$ be an object of $\textit{Coh}(X, \mathcal{I})$.
+Assume
+\begin{enumerate}
+\item the functor
+$\textit{Coh}(\mathcal{O}_{X_e}) \to \textit{Coh}(X_e, \mathcal{I}_e)$
+is an equivalence for all $e \geq 1$, and
+\item there exists a coherent sheaf $\mathcal{H}$ on $X$ and a map
+$\alpha : (\mathcal{F}_n) \to \mathcal{H}^\wedge$ whose
+kernel and cokernel are annihilated by a power of $\mathcal{K}$.
+\end{enumerate}
+Then $(\mathcal{F}_n)$ is in the essential image of
+(\ref{equation-completion-functor}).
+\end{lemma}
+
+\begin{proof}
+During this proof we will use without further mention that for a closed
+immersion $i : Z \to X$ the functor $i_*$ gives an equivalence between the
+category of coherent modules on $Z$ and coherent modules on $X$ annihilated
+by the ideal sheaf of $Z$, see Lemma \ref{lemma-i-star-equivalence}.
+In particular we may identify $\textit{Coh}(\mathcal{O}_{X_e})$
+with the category of coherent $\mathcal{O}_X$-modules annihilated by
+$\mathcal{K}^e$ and $\textit{Coh}(X_e, \mathcal{I}_e)$ as the full subcategory
+of $\textit{Coh}(X, \mathcal{I})$ of objects annihilated by $\mathcal{K}^e$.
+Moreover (1) tells us these two categories are equivalent under the
+completion functor (\ref{equation-completion-functor}).
+
+\medskip\noindent
+Applying this equivalence we get a coherent $\mathcal{O}_X$-module
+$\mathcal{G}_e$ annihilated by $\mathcal{K}^e$ corresponding to the system
+$(\mathcal{F}_n/\mathcal{K}^e\mathcal{F}_n)$ of
+$\textit{Coh}(X, \mathcal{I})$. The maps
+$\mathcal{F}_n/\mathcal{K}^{e + 1}\mathcal{F}_n \to
+\mathcal{F}_n/\mathcal{K}^e\mathcal{F}_n$ correspond to canonical maps
+$\mathcal{G}_{e + 1} \to \mathcal{G}_e$ which induce isomorphisms
+$\mathcal{G}_{e + 1}/\mathcal{K}^e\mathcal{G}_{e + 1} \to \mathcal{G}_e$.
+Hence $(\mathcal{G}_e)$ is an object of $\textit{Coh}(X, \mathcal{K})$.
+The map $\alpha$ induces a system of maps
+$$
+\mathcal{F}_n/\mathcal{K}^e\mathcal{F}_n
+\longrightarrow
+\mathcal{H}/(\mathcal{I}^n + \mathcal{K}^e)\mathcal{H}
+$$
+whence maps $\mathcal{G}_e \to \mathcal{H}/\mathcal{K}^e\mathcal{H}$
+(by the equivalence of categories again).
+Let $t \geq 1$ be an integer, which exists by assumption (2),
+such that $\mathcal{K}^t$ annihilates the kernel and cokernel of all the maps
+$\mathcal{F}_n \to \mathcal{H}/\mathcal{I}^n\mathcal{H}$.
+Then $\mathcal{K}^{2t}$ annihilates the kernel and cokernel of the maps
+$\mathcal{F}_n/\mathcal{K}^e\mathcal{F}_n \to
+\mathcal{H}/(\mathcal{I}^n + \mathcal{K}^e)\mathcal{H}$, see
+Remark \ref{remark-inverse-systems-kernel-cokernel-annihilated-by}.
+It follows that $\mathcal{K}^{4t}$ annihilates the kernel and
+the cokernel of the maps
+$$
+\mathcal{G}_e
+\longrightarrow
+\mathcal{H}/\mathcal{K}^e\mathcal{H},
+$$
+see Remark \ref{remark-inverse-systems-kernel-cokernel-annihilated-by}.
+We apply Lemma \ref{lemma-existence-easy} to obtain a coherent
+$\mathcal{O}_X$-module $\mathcal{F}$, a map
+$a : \mathcal{F} \to \mathcal{H}$ and an isomorphism
+$\beta : (\mathcal{G}_e) \to (\mathcal{F}/\mathcal{K}^e\mathcal{F})$
+in $\textit{Coh}(X, \mathcal{K})$. Working backwards, for a given $n$
+the triple
+$(\mathcal{F}/\mathcal{I}^n\mathcal{F}, a \bmod \mathcal{I}^n, \beta
+\bmod \mathcal{I}^n)$ is a triple as in the lemma for the morphism
+$\alpha_n \bmod \mathcal{K}^e :
+(\mathcal{F}_n/\mathcal{K}^e\mathcal{F}_n) \to
+(\mathcal{H}/(\mathcal{I}^n + \mathcal{K}^e)\mathcal{H})$
+of $\textit{Coh}(X, \mathcal{K})$. Thus the uniqueness in
+Lemma \ref{lemma-existence-easy}
+gives a canonical isomorphism
+$\mathcal{F}/\mathcal{I}^n\mathcal{F} \to \mathcal{F}_n$
+compatible with all the morphisms in sight. This finishes the proof
+of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inverse-systems-push-pull}
+Let $Y$ be a Noetherian scheme. Let
+$\mathcal{J}, \mathcal{K} \subset \mathcal{O}_Y$
+be quasi-coherent sheaves of ideals.
+Let $f : X \to Y$ be a proper morphism which is an isomorphism
+over $V = Y \setminus V(\mathcal{K})$.
+Set $\mathcal{I} = f^{-1}\mathcal{J} \mathcal{O}_X$.
+Let $(\mathcal{G}_n)$ be an object of $\textit{Coh}(Y, \mathcal{J})$,
+let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module, and let
+$\beta : (f^*\mathcal{G}_n) \to \mathcal{F}^\wedge$ be an isomorphism in
+$\textit{Coh}(X, \mathcal{I})$. Then there exists a map
+$$
+\alpha :
+(\mathcal{G}_n)
+\longrightarrow
+(f_*\mathcal{F})^\wedge
+$$
+in $\textit{Coh}(Y, \mathcal{J})$ whose kernel and cokernel
+are annihilated by a power of $\mathcal{K}$.
+\end{lemma}
+
+\begin{proof}
+Since $f$ is a proper morphism we see that $f_*\mathcal{F}$
+is a coherent $\mathcal{O}_Y$-module
+(Proposition \ref{proposition-proper-pushforward-coherent}).
+Thus the statement of the lemma makes sense.
+Consider the compositions
+$$
+\gamma_n : \mathcal{G}_n \to
+f_*f^*\mathcal{G}_n \to
+f_*(\mathcal{F}/\mathcal{I}^n\mathcal{F}).
+$$
+Here the first map is the adjunction map and the second is $f_*\beta_n$.
+We claim that there exists a unique $\alpha$ as in the lemma
+such that the compositions
+$$
+\mathcal{G}_n \xrightarrow{\alpha_n}
+f_*\mathcal{F}/\mathcal{J}^nf_*\mathcal{F} \to
+f_*(\mathcal{F}/\mathcal{I}^n\mathcal{F})
+$$
+equal $\gamma_n$ for all $n$. Because of the uniqueness we may assume
+that $Y = \Spec(B)$ is affine. Let $J \subset B$ be the ideal
+corresponding to $\mathcal{J}$. Set
+$$
+M_n = H^0(X, \mathcal{F}/\mathcal{I}^n\mathcal{F})
+\quad\text{and}\quad
+M = H^0(X, \mathcal{F})
+$$
+By Lemma \ref{lemma-ML-cohomology-powers-ideal} and
+Theorem \ref{theorem-formal-functions}
+the inverse limit of the modules
+$M_n$ equals the completion $M^\wedge = \lim M/J^nM$.
+Set $N_n = H^0(Y, \mathcal{G}_n)$ and $N = \lim N_n$.
+Via the equivalence of categories of
+Lemma \ref{lemma-inverse-systems-affine}
+the finite $B^\wedge$-modules $N$ and $M^\wedge$ correspond
+to $(\mathcal{G}_n)$ and $(f_*\mathcal{F})^\wedge$.
+It follows from this that $\alpha$ has to be the morphism of
+$\textit{Coh}(Y, \mathcal{J})$ corresponding to the homomorphism
+$$
+\lim \gamma_n : N = \lim_n N_n \longrightarrow \lim M_n = M^\wedge
+$$
+of finite $B^\wedge$-modules.
+
+\medskip\noindent
+We still have to show that the kernel and cokernel of $\alpha$ are
+annihilated by a power of $\mathcal{K}$. Set $Y' = \Spec(B^\wedge)$
+and $X' = Y' \times_Y X$. Let $\mathcal{K}'$, $\mathcal{J}'$, $\mathcal{G}'_n$
+and $\mathcal{I}'$, $\mathcal{F}'$ be the pullback of
+$\mathcal{K}$, $\mathcal{J}$, $\mathcal{G}_n$ and
+$\mathcal{I}$, $\mathcal{F}$, to $Y'$ and $X'$.
+The projection morphism $f' : X' \to Y'$ is the base change of
+$f$ by $Y' \to Y$. Note that $Y' \to Y$ is a flat morphism of schemes
+as $B \to B^\wedge$ is flat by
+Algebra, Lemma \ref{algebra-lemma-completion-flat}.
+Hence $f'_*\mathcal{F}'$, resp.\ $f'_*(f')^*\mathcal{G}_n'$
+is the pullback of $f_*\mathcal{F}$, resp.\ $f_*f^*\mathcal{G}_n$
+to $Y'$ by Lemma \ref{lemma-flat-base-change-cohomology}.
+The uniqueness of our construction shows the pullback of $\alpha$ to $Y'$
+is the corresponding map $\alpha'$ constructed for the situation on $Y'$.
+Moreover, to check that the kernel and cokernel of $\alpha$ are
+annihilated by $\mathcal{K}^t$ it suffices to check that the
+kernel and cokernel of $\alpha'$ are annihilated by
+$(\mathcal{K}')^t$. Namely, to see this we need to check this for
+kernels and cokernels of the maps $\alpha_n$ and $\alpha'_n$
+(see Remark \ref{remark-inverse-systems-kernel-cokernel-annihilated-by})
+and the ring map $B \to B^\wedge$ induces
+an equivalence of categories between modules annihilated by
+$J^n$ and $(J')^n$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-neighbourhood-equivalence}.
+Thus we may assume $B$ is complete with respect to $J$.
+
+\medskip\noindent
+Assume $Y = \Spec(B)$ is affine, $\mathcal{J}$ corresponds to the ideal
+$J \subset B$, and $B$ is complete with respect to $J$.
+In this case $(\mathcal{G}_n)$ is in the essential image of the functor
+$\textit{Coh}(\mathcal{O}_Y) \to \textit{Coh}(Y, \mathcal{J})$.
+Say $\mathcal{G}$ is a coherent $\mathcal{O}_Y$-module such that
+$(\mathcal{G}_n) = \mathcal{G}^\wedge$. Note that
+$f^*(\mathcal{G}^\wedge) = (f^*\mathcal{G})^\wedge$. Hence
+Lemma \ref{lemma-fully-faithful}
+tells us that $\beta$ comes from an isomorphism
+$b : f^*\mathcal{G} \to \mathcal{F}$
+and $\alpha$ is the completion functor applied to
+$$
+\mathcal{G} \to f_*f^*\mathcal{G} \cong f_*\mathcal{F}
+$$
+Hence we are trying to verify that the kernel and cokernel of the
+adjunction map $c : \mathcal{G} \to f_*f^*\mathcal{G}$ are annihilated by
+a power of $\mathcal{K}$. However, since the restriction
+$f|_{f^{-1}(V)} : f^{-1}(V) \to V$ is an isomorphism
+we see that $c|_V$ is an isomorphism. Thus the coherent sheaves
+$\Ker(c)$ and $\Coker(c)$ are supported on $V(\mathcal{K})$
+hence are annihilated by a power of $\mathcal{K}$
+(Lemma \ref{lemma-power-ideal-kills-sheaf}) as desired.
+\end{proof}
+
+\noindent
+The following proposition is the form of Grothendieck's existence
+theorem which is most often used in practice.
+
+\begin{proposition}
+\label{proposition-existence-proper}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Let $f : X \to \Spec(A)$ be a proper morphism of schemes.
+Set $\mathcal{I} = I\mathcal{O}_X$.
+Then the functor (\ref{equation-completion-functor}) is an equivalence.
+\end{proposition}
+
+\begin{proof}
+We have already seen that (\ref{equation-completion-functor}) is
+fully faithful in Lemma \ref{lemma-fully-faithful}. Thus it suffices
+to show that the functor is essentially surjective.
+
+\medskip\noindent
+Consider the collection $\Xi$ of quasi-coherent sheaves of ideals
+$\mathcal{K} \subset \mathcal{O}_X$ such that every object
+$(\mathcal{F}_n)$ annihilated by $\mathcal{K}$ is in the essential image.
+We want to show $(0)$ is in $\Xi$. If not, then since $X$ is Noetherian
+there exists a maximal quasi-coherent sheaf of ideals $\mathcal{K}$
+not in $\Xi$, see
+Lemma \ref{lemma-acc-coherent}.
+After replacing $X$ by the closed subscheme of $X$
corresponding to $\mathcal{K}$ we may assume that every nonzero
quasi-coherent sheaf of ideals is in $\Xi$. (This uses the correspondence
between coherent modules annihilated by $\mathcal{K}$ and coherent modules
+on the closed subscheme corresponding to $\mathcal{K}$, see
+Lemma \ref{lemma-i-star-equivalence}.)
+Let $(\mathcal{F}_n)$ be an object of
+$\textit{Coh}(X, \mathcal{I})$.
+We will show that this object is in the essential image of the
functor (\ref{equation-completion-functor}), thereby completing the
proof of the proposition.
+
+\medskip\noindent
Apply Chow's lemma (Lemma \ref{lemma-chow-Noetherian}) to find a
proper surjective morphism $g : X' \to X$ which is an isomorphism
over a dense open $U \subset X$ such that $X'$ is projective over $A$.
(We write $g$ rather than $f$ as the notation $f$ is taken by the
morphism $X \to \Spec(A)$.)
Let $\mathcal{K}$ be the quasi-coherent sheaf of ideals cutting
out the reduced complement $X \setminus U$. By the projective
case of Grothendieck's existence theorem
(Lemma \ref{lemma-existence-projective})
there exists a coherent module $\mathcal{F}'$ on $X'$ such
that $(\mathcal{F}')^\wedge \cong (g^*\mathcal{F}_n)$. By
Proposition \ref{proposition-proper-pushforward-coherent}
the $\mathcal{O}_X$-module $\mathcal{H} = g_*\mathcal{F}'$ is coherent
+and by Lemma \ref{lemma-inverse-systems-push-pull}
+there exists a morphism $(\mathcal{F}_n) \to \mathcal{H}^\wedge$
+of $\textit{Coh}(X, \mathcal{I})$ whose kernel and cokernel are
+annihilated by a power of $\mathcal{K}$. The powers $\mathcal{K}^e$
+are all in $\Xi$ so that (\ref{equation-completion-functor})
+is an equivalence for the closed subschemes $X_e = V(\mathcal{K}^e)$.
+We conclude by Lemma \ref{lemma-existence-tricky}.
+\end{proof}
+
+
+
+
+
+\section{Being proper over a base}
+\label{section-proper-over-base}
+
+\noindent
This is just a short section to point out some useful features
of closed subsets proper over a base and of finite type, quasi-coherent
modules with support proper over a base.
+
+\begin{lemma}
+\label{lemma-closed-proper-over-base}
+Let $f : X \to S$ be a morphism of schemes which is locally of finite type.
+Let $Z \subset X$ be a closed subset. The following are equivalent
+\begin{enumerate}
+\item the morphism $Z \to S$ is proper if $Z$ is endowed with the reduced
+induced closed subscheme structure
+(Schemes, Definition \ref{schemes-definition-reduced-induced-scheme}),
+\item for some closed subscheme structure on $Z$ the morphism $Z \to S$
+is proper,
+\item for any closed subscheme structure on $Z$ the morphism
+$Z \to S$ is proper.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implications (3) $\Rightarrow$ (1) and (1) $\Rightarrow$ (2)
+are immediate. Thus it suffices to prove that (2) implies (3).
+We urge the reader to find their own proof of this fact.
+Let $Z'$ and $Z''$ be closed subscheme structures on $Z$
+such that $Z' \to S$ is proper. We have to show that $Z'' \to S$ is proper.
+Let $Z''' = Z' \cup Z''$ be the scheme theoretic union, see
+Morphisms, Definition
+\ref{morphisms-definition-scheme-theoretic-intersection-union}.
+Then $Z'''$ is another closed subscheme structure on $Z$.
+This follows for example from the description of scheme theoretic unions in
+Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-union}.
+Since $Z'' \to Z'''$ is a closed immersion it suffices to prove
+that $Z''' \to S$ is proper (see
+Morphisms, Lemmas \ref{morphisms-lemma-closed-immersion-proper} and
+\ref{morphisms-lemma-composition-proper}).
+The morphism $Z' \to Z'''$ is a bijective closed immersion
+and in particular surjective and universally closed.
+Then the fact that $Z' \to S$ is separated implies that
+$Z''' \to S$ is separated, see
+Morphisms, Lemma \ref{morphisms-lemma-image-universally-closed-separated}.
+Moreover $Z''' \to S$ is locally of finite type
+as $X \to S$ is locally of finite type
+(Morphisms, Lemmas \ref{morphisms-lemma-immersion-locally-finite-type} and
+\ref{morphisms-lemma-composition-finite-type}).
+Since $Z' \to S$ is quasi-compact and $Z' \to Z'''$ is a homeomorphism
+we see that $Z''' \to S$ is quasi-compact.
+Finally, since $Z' \to S$ is universally closed, we see that
+the same thing is true for $Z''' \to S$ by
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-is-proper}.
+This finishes the proof.
+\end{proof}
+
+\begin{definition}
+\label{definition-proper-over-base}
+Let $f : X \to S$ be a morphism of schemes which is locally of finite type.
+Let $Z \subset X$ be a closed subset.
+We say {\it $Z$ is proper over $S$}
+if the equivalent conditions of Lemma \ref{lemma-closed-proper-over-base}
+are satisfied.
+\end{definition}
+
+\noindent
+The lemma used in the definition above is false if the morphism
+$f : X \to S$ is not locally of finite type. Therefore we urge
+the reader not to use this terminology if $f$ is not locally of
+finite type.
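\medskip\noindent
For example (a simple illustration, not used in what follows), take
$S = \Spec(k)$ for a field $k$ and $X = \mathbf{A}^1_k$. A closed point
$x \in X$ is proper over $S$: with its reduced induced structure it is
the spectrum of a finite field extension of $k$, hence finite and a
fortiori proper over $S$. On the other hand, the closed subset $Z = X$
is not proper over $S$, since $\mathbf{A}^1_k \to \Spec(k)$ is not
universally closed.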
+
+\begin{lemma}
+\label{lemma-closed-closed-proper-over-base}
+Let $f : X \to S$ be a morphism of schemes which is locally of finite type.
+Let $Y \subset Z \subset X$ be closed subsets.
+If $Z$ is proper over $S$, then the same is true for $Y$.
+\end{lemma}
+
+\begin{proof}
Omitted. Hint: endow $Y$ and $Z$ with their reduced induced closed
subscheme structures; then $Y \to Z$ is a closed immersion, hence proper,
and $Y \to S$ is proper as the composition of the proper morphisms
$Y \to Z$ and $Z \to S$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-closed-proper-over-base}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_{g'} & X \ar[d]^f \\
+S' \ar[r]^g & S
+}
+$$
+with $f$ locally of finite type.
+If $Z$ is a closed subset of $X$ proper over $S$, then
+$(g')^{-1}(Z)$ is a closed subset of $X'$ proper over $S'$.
+\end{lemma}
+
+\begin{proof}
+Observe that the statement makes sense as $f'$ is locally of
+finite type by Morphisms, Lemma \ref{morphisms-lemma-base-change-finite-type}.
+Endow $Z$ with the reduced induced closed subscheme structure.
+Denote $Z' = (g')^{-1}(Z)$ the scheme theoretic inverse image
+(Schemes, Definition \ref{schemes-definition-inverse-image-closed-subscheme}).
+Then $Z' = X' \times_X Z = (S' \times_S X) \times_X Z = S' \times_S Z$
+is proper over $S'$ as a base change of $Z$ over $S$
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-functoriality-closed-proper-over-base}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of schemes which
+are locally of finite type over $S$.
+\begin{enumerate}
+\item If $Y$ is separated over $S$ and $Z \subset X$ is a closed subset
+proper over $S$, then $f(Z)$ is a closed subset of $Y$ proper over $S$.
+\item If $f$ is universally closed and $Z \subset X$ is a
+closed subset proper over $S$, then $f(Z)$ is a closed subset
+of $Y$ proper over $S$.
+\item If $f$ is proper and $Z \subset Y$ is a closed subset
+proper over $S$, then $f^{-1}(Z)$ is a closed subset of $X$ proper over $S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Assume $Y$ is separated over $S$ and $Z \subset X$
+is a closed subset proper over $S$. Endow $Z$ with the reduced induced
+closed subscheme structure and apply
+Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-image-is-proper}
+to $Z \to Y$ over $S$ to conclude.
+
+\medskip\noindent
+Proof of (2). Assume $f$ is universally closed and $Z \subset X$ is a
+closed subset proper over $S$. Endow $Z$ and $Z' = f(Z)$ with their reduced
+induced closed subscheme structures. We obtain an induced
+morphism $Z \to Z'$.
+Denote $Z'' = f^{-1}(Z')$ the scheme theoretic inverse image
+(Schemes, Definition \ref{schemes-definition-inverse-image-closed-subscheme}).
+Then $Z'' \to Z'$ is universally closed as a base change of $f$
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}).
+Hence $Z \to Z'$ is universally closed as a composition of
+the closed immersion $Z \to Z''$ and $Z'' \to Z'$
+(Morphisms, Lemmas
+\ref{morphisms-lemma-closed-immersion-proper} and
+\ref{morphisms-lemma-composition-proper}).
+We conclude that $Z' \to S$ is separated by
+Morphisms, Lemma \ref{morphisms-lemma-image-universally-closed-separated}.
+Since $Z \to S$ is quasi-compact and $Z \to Z'$ is surjective
+we see that $Z' \to S$ is quasi-compact.
+Since $Z' \to S$ is the composition of $Z' \to Y$ and $Y \to S$
+we see that $Z' \to S$ is locally of finite type
+(Morphisms, Lemmas \ref{morphisms-lemma-immersion-locally-finite-type} and
+\ref{morphisms-lemma-composition-finite-type}).
+Finally, since $Z \to S$ is universally closed, we see that
+the same thing is true for $Z' \to S$ by
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-is-proper}.
+This finishes the proof.
+
+\medskip\noindent
+Proof of (3). Assume $f$ is proper and $Z \subset Y$ is a closed subset
+proper over $S$. Endow $Z$ with the reduced induced closed subscheme
+structure. Denote $Z' = f^{-1}(Z)$ the scheme theoretic inverse image
+(Schemes, Definition \ref{schemes-definition-inverse-image-closed-subscheme}).
+Then $Z' \to Z$ is proper as a base change of $f$
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}).
+Whence $Z' \to S$ is proper as the composition of $Z' \to Z$
+and $Z \to S$
+(Morphisms, Lemma \ref{morphisms-lemma-composition-proper}).
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-union-closed-proper-over-base}
+Let $f : X \to S$ be a morphism of schemes which is locally of finite type.
+Let $Z_i \subset X$, $i = 1, \ldots, n$ be closed subsets.
+If $Z_i$, $i = 1, \ldots, n$ are proper over $S$, then the same is
+true for $Z_1 \cup \ldots \cup Z_n$.
+\end{lemma}
+
+\begin{proof}
+Endow $Z_i$ with their reduced induced closed subscheme structures.
+The morphism
+$$
+Z_1 \amalg \ldots \amalg Z_n \longrightarrow X
+$$
+is finite by Morphisms, Lemmas
+\ref{morphisms-lemma-closed-immersion-finite} and
+\ref{morphisms-lemma-finite-union-finite}.
+As finite morphisms are universally closed
+(Morphisms, Lemma \ref{morphisms-lemma-finite-proper})
+and since $Z_1 \amalg \ldots \amalg Z_n$ is proper over $S$
+we conclude by
+Lemma \ref{lemma-functoriality-closed-proper-over-base} part (2)
+that the image $Z_1 \cup \ldots \cup Z_n$ is proper over $S$.
+\end{proof}
+
+\noindent
+Let $f : X \to S$ be a morphism of schemes which is locally
+of finite type. Let $\mathcal{F}$ be a finite type, quasi-coherent
+$\mathcal{O}_X$-module. Then the support $\text{Supp}(\mathcal{F})$
+of $\mathcal{F}$ is a closed subset of $X$, see
+Morphisms, Lemma \ref{morphisms-lemma-support-finite-type}.
+Hence it makes sense to say
+``the support of $\mathcal{F}$ is proper over $S$''.
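\medskip\noindent
For example (an illustration only), if $S = \Spec(k)$ for a field $k$
and $X = \mathbf{A}^1_k$, then $\mathcal{F} = \mathcal{O}_X$ has support
$X$, which is not proper over $S$. By contrast, if
$\mathcal{I}_x \subset \mathcal{O}_X$ is the ideal sheaf of a closed
point $x \in X$, then $\mathcal{F} = \mathcal{O}_X/\mathcal{I}_x$ is a
finite type, quasi-coherent module whose support $\{x\}$ is proper
over $S$.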
+
+\begin{lemma}
+\label{lemma-module-support-proper-over-base}
+Let $f : X \to S$ be a morphism of schemes which is locally
+of finite type. Let $\mathcal{F}$ be a finite type, quasi-coherent
+$\mathcal{O}_X$-module. The following are equivalent
+\begin{enumerate}
+\item the support of $\mathcal{F}$ is proper over $S$,
+\item the scheme theoretic support of $\mathcal{F}$
+(Morphisms, Definition \ref{morphisms-definition-scheme-theoretic-support})
+is proper over $S$, and
+\item there exists a closed subscheme $Z \subset X$ and
+a finite type, quasi-coherent $\mathcal{O}_Z$-module
+$\mathcal{G}$ such that (a) $Z \to S$ is proper, and (b)
+$(Z \to X)_*\mathcal{G} = \mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The support $\text{Supp}(\mathcal{F})$ of $\mathcal{F}$ is a closed subset
+of $X$, see Morphisms, Lemma \ref{morphisms-lemma-support-finite-type}.
+Hence we can apply Definition \ref{definition-proper-over-base}.
+Since the scheme theoretic support of $\mathcal{F}$ is a closed
+subscheme whose underlying closed subset is $\text{Supp}(\mathcal{F})$
+we see that (1) and (2) are equivalent by
+Definition \ref{definition-proper-over-base}.
+It is clear that (2) implies (3).
+Conversely, if (3) is true, then
+$\text{Supp}(\mathcal{F}) \subset Z$
+(an inclusion of closed subsets of $X$)
+and hence $\text{Supp}(\mathcal{F})$
+is proper over $S$ for example by
+Lemma \ref{lemma-closed-closed-proper-over-base}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-module-support-proper-over-base}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_{g'} & X \ar[d]^f \\
+S' \ar[r]^g & S
+}
+$$
+with $f$ locally of finite type. Let $\mathcal{F}$ be a
+finite type, quasi-coherent $\mathcal{O}_X$-module.
+If the support of $\mathcal{F}$ is proper over $S$, then
+the support of $(g')^*\mathcal{F}$ is proper over $S'$.
+\end{lemma}
+
+\begin{proof}
+Observe that the statement makes sense because
$(g')^*\mathcal{F}$ is of finite type by
+Modules, Lemma \ref{modules-lemma-pullback-finite-type}.
+We have $\text{Supp}((g')^*\mathcal{F}) = (g')^{-1}(\text{Supp}(\mathcal{F}))$
+by Morphisms, Lemma \ref{morphisms-lemma-support-finite-type}.
+Thus the lemma follows from
+Lemma \ref{lemma-base-change-closed-proper-over-base}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cat-module-support-proper-over-base}
+Let $f : X \to S$ be a morphism of schemes which is locally
+of finite type. Let $\mathcal{F}$, $\mathcal{G}$
be finite type, quasi-coherent $\mathcal{O}_X$-modules.
+\begin{enumerate}
+\item If the supports of $\mathcal{F}$, $\mathcal{G}$
+are proper over $S$, then the same is true
+for $\mathcal{F} \oplus \mathcal{G}$, for any extension
+of $\mathcal{G}$ by $\mathcal{F}$, for $\Im(u)$ and $\Coker(u)$
+given any $\mathcal{O}_X$-module map $u : \mathcal{F} \to \mathcal{G}$,
+and for any quasi-coherent quotient of $\mathcal{F}$ or $\mathcal{G}$.
+\item If $S$ is locally Noetherian, then the category of
+coherent $\mathcal{O}_X$-modules with support proper over
+$S$ is a Serre subcategory (Homology, Definition
+\ref{homology-definition-serre-subcategory})
+of the abelian category of
+coherent $\mathcal{O}_X$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $Z$, $Z'$ be the support of $\mathcal{F}$
+and $\mathcal{G}$. Then all the sheaves mentioned in (1)
+have support contained in $Z \cup Z'$. Thus the assertion itself
+is clear from Lemmas \ref{lemma-closed-closed-proper-over-base} and
+\ref{lemma-union-closed-proper-over-base}
+provided we check that these sheaves are finite type
+and quasi-coherent. For quasi-coherence we refer the reader to
+Schemes, Section \ref{schemes-section-quasi-coherent}.
+For ``finite type'' we suggest the reader take a look at
+Modules, Section \ref{modules-section-finite-type}.
+
+\medskip\noindent
+Proof of (2). The proof is the same as the proof of (1). Note that
+the assertions make sense as $X$ is locally Noetherian by
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}
+and by the description of the category of coherent
+modules in Section \ref{section-coherent-sheaves}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-proper-over-base-pushforward}
+Let $S$ be a locally Noetherian scheme.
+Let $f : X \to S$ be a morphism of schemes which is locally of finite type.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module
+with support proper over $S$. Then $R^pf_*\mathcal{F}$
+is a coherent $\mathcal{O}_S$-module for all $p \geq 0$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-module-support-proper-over-base}
+there exists a closed immersion $i : Z \to X$ and
+a finite type, quasi-coherent $\mathcal{O}_Z$-module
+$\mathcal{G}$ such that (a) $g = f \circ i : Z \to S$ is proper, and (b)
+$i_*\mathcal{G} = \mathcal{F}$.
+We see that $R^pg_*\mathcal{G}$ is coherent on $S$ by
+Proposition \ref{proposition-proper-pushforward-coherent}.
+On the other hand, $R^qi_*\mathcal{G} = 0$ for $q > 0$
+(Lemma \ref{lemma-finite-pushforward-coherent}).
+By Cohomology, Lemma \ref{cohomology-lemma-relative-Leray}
+we get $R^pf_*\mathcal{F} = R^pg_*\mathcal{G}$ which concludes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-systems-with-proper-support}
+Let $S$ be a Noetherian scheme. Let $f : X \to S$ be a finite type morphism.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be
+a quasi-coherent sheaf of ideals. The following are Serre subcategories
+of $\textit{Coh}(X, \mathcal{I})$
+\begin{enumerate}
+\item the full subcategory of $\textit{Coh}(X, \mathcal{I})$
+consisting of those objects $(\mathcal{F}_n)$ such that
+the support of $\mathcal{F}_1$ is proper over $S$,
+\item the full subcategory of $\textit{Coh}(X, \mathcal{I})$
+consisting of those objects $(\mathcal{F}_n)$ such that
+there exists a closed subscheme $Z \subset X$ proper over $S$
+with $\mathcal{I}_Z \mathcal{F}_n = 0$ for all $n \geq 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use the criterion of
+Homology, Lemma \ref{homology-lemma-characterize-serre-subcategory}.
+Moreover, we will use that if
+$0 \to (\mathcal{G}_n) \to (\mathcal{F}_n) \to (\mathcal{H}_n) \to 0$
+is a short exact sequence of $\textit{Coh}(X, \mathcal{I})$, then
+(a) $\mathcal{G}_n \to \mathcal{F}_n \to \mathcal{H}_n \to 0$
+is exact for all $n \geq 1$ and
+(b) $\mathcal{G}_n$ is a quotient of $\Ker(\mathcal{F}_m \to \mathcal{H}_m)$
+for some $m \geq n$. See proof of Lemma \ref{lemma-inverse-systems-abelian}.
+
+\medskip\noindent
+Proof of (1). Let $(\mathcal{F}_n)$ be an object of
+$\textit{Coh}(X, \mathcal{I})$. Then
+$\text{Supp}(\mathcal{F}_n) = \text{Supp}(\mathcal{F}_1)$ for all $n \geq 1$.
+Hence by remarks (a) and (b) above we see that
+for any short exact sequence
+$0 \to (\mathcal{G}_n) \to (\mathcal{F}_n) \to (\mathcal{H}_n) \to 0$
+of $\textit{Coh}(X, \mathcal{I})$ we have
+$\text{Supp}(\mathcal{G}_1) \cup \text{Supp}(\mathcal{H}_1) =
+\text{Supp}(\mathcal{F}_1)$.
+This proves that the category defined in (1)
+is a Serre subcategory of $\textit{Coh}(X, \mathcal{I})$.
+
+\medskip\noindent
+Proof of (2). Here we argue the same way. Let
+$0 \to (\mathcal{G}_n) \to (\mathcal{F}_n) \to (\mathcal{H}_n) \to 0$
+be a short exact sequence of $\textit{Coh}(X, \mathcal{I})$.
+If $Z \subset X$ is a closed subscheme and $\mathcal{I}_Z$
+annihilates $\mathcal{F}_n$ for all $n$, then
+$\mathcal{I}_Z$ annihilates $\mathcal{G}_n$ and $\mathcal{H}_n$
+for all $n$ by (a) and (b) above.
+Hence if $Z \to S$ is proper, then we conclude that the category
+defined in (2) is closed under taking sub and quotient objects
+inside of $\textit{Coh}(X, \mathcal{I})$.
+Finally, suppose that $Z \subset X$ and $Y \subset X$ are
+closed subschemes proper over $S$ such that
+$\mathcal{I}_Z \mathcal{G}_n = 0$ and
+$\mathcal{I}_Y \mathcal{H}_n = 0$ for all $n \geq 1$.
+Then it follows from (a) above that
+$\mathcal{I}_{Z \cup Y} = \mathcal{I}_Z \cdot \mathcal{I}_Y$
+annihilates $\mathcal{F}_n$ for all $n$.
+By Lemma \ref{lemma-union-closed-proper-over-base}
(and via Definition \ref{definition-proper-over-base} which
tells us we may use an arbitrary closed subscheme structure on the union)
+we see that $Z \cup Y \to S$ is proper and the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Grothendieck's existence theorem, III}
+\label{section-existence-proper-support}
+
+\noindent
+To state the general version of Grothendieck's existence theorem
+we introduce a bit more notation. Let $A$ be a Noetherian ring
+complete with respect to an ideal $I$. Let $f : X \to \Spec(A)$
+be a separated finite type morphism of schemes. Set
+$\mathcal{I} = I\mathcal{O}_X$. In this situation we let
+$$
+\textit{Coh}_{\text{support proper over } A}(\mathcal{O}_X)
+$$
+be the full subcategory of $\textit{Coh}(\mathcal{O}_X)$
+consisting of those coherent $\mathcal{O}_X$-modules whose
+support is proper over $\Spec(A)$. This is a Serre subcategory of
+$\textit{Coh}(\mathcal{O}_X)$, see
+Lemma \ref{lemma-cat-module-support-proper-over-base}.
+Similarly, we let
+$$
+\textit{Coh}_{\text{support proper over } A}(X, \mathcal{I})
+$$
+be the full subcategory of $\textit{Coh}(X, \mathcal{I})$
+consisting of those objects $(\mathcal{F}_n)$ such that
+the support of $\mathcal{F}_1$ is proper over $\Spec(A)$.
+This is a Serre subcategory of $\textit{Coh}(X, \mathcal{I})$
+by Lemma \ref{lemma-systems-with-proper-support} part (1).
+Since the support of a quotient module is contained in the support
+of the module, it follows that (\ref{equation-completion-functor})
+induces a functor
+\begin{equation}
+\label{equation-completion-functor-proper-over-A}
+\textit{Coh}_{\text{support proper over }A}(\mathcal{O}_X)
+\longrightarrow
+\textit{Coh}_{\text{support proper over }A}(X, \mathcal{I})
+\end{equation}
+We are now ready to state the main theorem of this section.
+
+\begin{theorem}[Grothendieck's existence theorem]
+\label{theorem-grothendieck-existence}
+\begin{reference}
+\cite[III Theorem 5.1.5]{EGA}
+\end{reference}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Let $X$ be a separated, finite type scheme over $A$. Then
+the functor
+(\ref{equation-completion-functor-proper-over-A})
+$$
+\textit{Coh}_{\text{support proper over }A}(\mathcal{O}_X)
+\longrightarrow
+\textit{Coh}_{\text{support proper over }A}(X, \mathcal{I})
+$$
+is an equivalence.
+\end{theorem}
+
+\begin{proof}
+We will use the equivalence of categories of
+Lemma \ref{lemma-i-star-equivalence}
+without further mention.
+For a closed subscheme $Z \subset X$ proper over $A$
+in this proof we will say a coherent module on $X$ is
+``supported on $Z$'' if it is annihilated by the ideal
+sheaf of $Z$ or equivalently if it is the pushforward
+of a coherent module on $Z$.
+By Proposition \ref{proposition-existence-proper} we know
+that the result is true for
+the functor between coherent modules and systems of coherent
+modules supported on $Z$. Hence it suffices to show that
+every object of
+$\textit{Coh}_{\text{support proper over }A}(\mathcal{O}_X)$
+and every object of
+$\textit{Coh}_{\text{support proper over }A}(X, \mathcal{I})$ is
+supported on a closed subscheme $Z \subset X$ proper over $A$.
+This holds by definition for objects of
+$\textit{Coh}_{\text{support proper over }A}(\mathcal{O}_X)$.
+We will prove this statement for objects of
+$\textit{Coh}_{\text{support proper over }A}(X, \mathcal{I})$
+using the method of proof of Proposition \ref{proposition-existence-proper}.
+We urge the reader to read that proof first.
+
+\medskip\noindent
+Consider the collection $\Xi$ of quasi-coherent sheaves of ideals
+$\mathcal{K} \subset \mathcal{O}_X$ such that the statement holds
+for every object $(\mathcal{F}_n)$ of
+$\textit{Coh}_{\text{support proper over }A}(X, \mathcal{I})$
+annihilated by $\mathcal{K}$. We want to show $(0)$ is in $\Xi$.
+If not, then since $X$ is Noetherian there exists a maximal
+quasi-coherent sheaf of ideals $\mathcal{K}$ not in $\Xi$, see
+Lemma \ref{lemma-acc-coherent}.
+After replacing $X$ by the closed subscheme of $X$
corresponding to $\mathcal{K}$ we may assume that every nonzero
quasi-coherent sheaf of ideals is in $\Xi$. Let $(\mathcal{F}_n)$ be an object of
+$\textit{Coh}_{\text{support proper over }A}(X, \mathcal{I})$.
+We will show that this object is supported on a closed subscheme
+$Z \subset X$ proper over $A$, thereby completing the
+proof of the theorem.
+
+\medskip\noindent
+Apply Chow's lemma (Lemma \ref{lemma-chow-Noetherian}) to find a
+proper surjective morphism $f : Y \to X$ which is an isomorphism
+over a dense open $U \subset X$ such that $Y$ is H-quasi-projective
+over $A$. Choose an open immersion $j : Y \to Y'$ with
+$Y'$ projective over $A$, see
+Morphisms, Lemma \ref{morphisms-lemma-H-quasi-projective-open-H-projective}.
+Observe that
+$$
+\text{Supp}(f^*\mathcal{F}_n) = f^{-1}\text{Supp}(\mathcal{F}_n) =
+f^{-1}\text{Supp}(\mathcal{F}_1)
+$$
+The first equality by
+Morphisms, Lemma \ref{morphisms-lemma-support-finite-type}.
+By assumption and
+Lemma \ref{lemma-functoriality-closed-proper-over-base} part (3)
+we see that $f^{-1}\text{Supp}(\mathcal{F}_1)$ is proper over $A$.
+Hence the image of $f^{-1}\text{Supp}(\mathcal{F}_1)$
+under $j$ is closed in $Y'$ by
+Lemma \ref{lemma-functoriality-closed-proper-over-base} part (1).
+Thus $\mathcal{F}'_n = j_*f^*\mathcal{F}_n$ is coherent on
+$Y'$ by Lemma \ref{lemma-pushforward-coherent-on-open}.
It follows that $(\mathcal{F}'_n)$
+is an object of $\textit{Coh}(Y', I\mathcal{O}_{Y'})$.
+By the projective case of Grothendieck's existence theorem
+(Lemma \ref{lemma-existence-projective})
+there exists a coherent $\mathcal{O}_{Y'}$-module
+$\mathcal{F}'$ and an isomorphism
+$(\mathcal{F}')^\wedge \cong (\mathcal{F}'_n)$ in
+$\textit{Coh}(Y', I\mathcal{O}_{Y'})$.
+Since $\mathcal{F}'/I\mathcal{F}' = \mathcal{F}'_1$ we see that
+$$
+\text{Supp}(\mathcal{F}') \cap V(I\mathcal{O}_{Y'}) =
+\text{Supp}(\mathcal{F}'_1) = j(f^{-1}\text{Supp}(\mathcal{F}_1))
+$$
+The structure morphism $p' : Y' \to \Spec(A)$ is proper, hence
+$p'(\text{Supp}(\mathcal{F}') \setminus j(Y))$
+is closed in $\Spec(A)$. A nonempty closed subset of $\Spec(A)$
+contains a point of $V(I)$ as $I$ is contained in the Jacobson radical
+of $A$ by Algebra, Lemma \ref{algebra-lemma-radical-completion}.
+The displayed equation shows that
+$\text{Supp}(\mathcal{F}') \cap (p')^{-1}V(I) \subset j(Y)$
+hence we conclude that $\text{Supp}(\mathcal{F}') \subset j(Y)$.
+Thus $\mathcal{F}'|_Y = j^*\mathcal{F}'$
+is supported on a closed subscheme $Z'$ of $Y$ proper over $A$
+and $(\mathcal{F}'|_Y)^\wedge = (f^*\mathcal{F}_n)$.
+
+\medskip\noindent
+Let $\mathcal{K}$ be the quasi-coherent sheaf of ideals cutting
+out the reduced complement $X \setminus U$. By
+Proposition \ref{proposition-proper-pushforward-coherent}
+the $\mathcal{O}_X$-module $\mathcal{H} = f_*(\mathcal{F}'|_Y)$ is coherent
+and by Lemma \ref{lemma-inverse-systems-push-pull}
+there exists a morphism $\alpha : (\mathcal{F}_n) \to \mathcal{H}^\wedge$
+of $\textit{Coh}(X, \mathcal{I})$ whose kernel and cokernel are
+annihilated by a power $\mathcal{K}^t$ of $\mathcal{K}$.
+We obtain an exact sequence
+$$
+0 \to \Ker(\alpha) \to (\mathcal{F}_n) \to
+\mathcal{H}^\wedge \to \Coker(\alpha) \to 0
+$$
+in $\textit{Coh}(X, \mathcal{I})$. If $Z_0 \subset X$ is the scheme theoretic
+support of $\mathcal{H}$, then it is clear that $Z_0 \subset f(Z')$
+set-theoretically. Hence $Z_0$ is proper over $A$ by
+Lemma \ref{lemma-closed-closed-proper-over-base} and
+Lemma \ref{lemma-functoriality-closed-proper-over-base} part (2).
+Hence $\mathcal{H}^\wedge$ is in the subcategory defined in
+Lemma \ref{lemma-systems-with-proper-support} part (2)
+and a fortiori in
+$\textit{Coh}_{\text{support proper over }A}(X, \mathcal{I})$.
+We conclude that $\Ker(\alpha)$ and $\Coker(\alpha)$
+are in $\textit{Coh}_{\text{support proper over }A}(X, \mathcal{I})$
+by Lemma \ref{lemma-systems-with-proper-support} part (1).
+By induction hypothesis, more precisely because $\mathcal{K}^t$ is in $\Xi$,
+we see that $\Ker(\alpha)$ and $\Coker(\alpha)$ are in
+the subcategory defined in
+Lemma \ref{lemma-systems-with-proper-support} part (2).
+Since this is a Serre subcategory by the lemma, we conclude that the
+same is true for $(\mathcal{F}_n)$ which is what we wanted to show.
+\end{proof}
+
+\begin{remark}[Unwinding Grothendieck's existence theorem]
+\label{remark-reformulate-existence-theorem}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Write $S = \Spec(A)$ and $S_n = \Spec(A/I^n)$.
+Let $X \to S$ be a separated morphism of finite type.
+For $n \geq 1$ we set $X_n = X \times_S S_n$.
+Picture:
+$$
+\xymatrix{
+X_1 \ar[r]_{i_1} \ar[d] & X_2 \ar[r]_{i_2} \ar[d] & X_3 \ar[r] \ar[d] &
+\ldots & X \ar[d] \\
+S_1 \ar[r] & S_2 \ar[r] & S_3 \ar[r] & \ldots & S
+}
+$$
+In this situation we consider systems $(\mathcal{F}_n, \varphi_n)$
+where
+\begin{enumerate}
+\item $\mathcal{F}_n$ is a coherent $\mathcal{O}_{X_n}$-module,
+\item $\varphi_n : i_n^*\mathcal{F}_{n + 1} \to \mathcal{F}_n$
+is an isomorphism, and
+\item $\text{Supp}(\mathcal{F}_1)$ is proper over $S_1$.
+\end{enumerate}
+Theorem \ref{theorem-grothendieck-existence} says that the
+completion functor
+$$
+\begin{matrix}
+\text{coherent }\mathcal{O}_X\text{-modules }\mathcal{F} \\
+\text{with support proper over }A
+\end{matrix}
+\quad
+\longrightarrow
+\quad
+\begin{matrix}
+\text{systems }(\mathcal{F}_n) \\
+\text{as above}
+\end{matrix}
+$$
+is an equivalence of categories. In the special case that $X$ is
+proper over $A$ we can omit the conditions on the supports.
+\end{remark}
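\medskip\noindent
A typical instance of the remark (spelled out for illustration): take
$A = k[[t]]$ with $I = (t)$, so that $S_n = \Spec(k[t]/(t^n))$, and let
$X \to S$ be proper. Then giving a coherent $\mathcal{O}_X$-module
$\mathcal{F}$ is the same as giving a compatible system
$(\mathcal{F}_n, \varphi_n)$ of coherent modules on the thickenings
$X_n$, the comparison being
$$
\mathcal{F}_n = \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{O}_{X_n}
$$
with no condition on supports since $X$ is proper over $A$.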
+
+
+
+\section{Grothendieck's algebraization theorem}
+\label{section-algebraization}
+
+\noindent
+Our first result is a translation of Grothendieck's existence
+theorem in terms of closed subschemes and finite morphisms.
+
+\begin{lemma}
+\label{lemma-algebraize-formal-closed-subscheme}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Write $S = \Spec(A)$ and $S_n = \Spec(A/I^n)$.
+Let $X \to S$ be a separated morphism of finite type.
+For $n \geq 1$ we set $X_n = X \times_S S_n$.
+Suppose given a commutative diagram
+$$
+\xymatrix{
+Z_1 \ar[r] \ar[d] & Z_2 \ar[r] \ar[d] & Z_3 \ar[r] \ar[d] & \ldots \\
+X_1 \ar[r]^{i_1} & X_2 \ar[r]^{i_2} & X_3 \ar[r] & \ldots
+}
+$$
+of schemes with cartesian squares. Assume that
+\begin{enumerate}
+\item $Z_1 \to X_1$ is a closed immersion, and
+\item $Z_1 \to S_1$ is proper.
+\end{enumerate}
+Then there exists a closed immersion of schemes $Z \to X$ such that
+$Z_n = Z \times_S S_n$. Moreover, $Z$ is proper over $S$.
+\end{lemma}
+
+\begin{proof}
+Let's write $j_n : Z_n \to X_n$ for the vertical morphisms.
+As the squares in the statement are cartesian
+we see that the base change of $j_n$ to $X_1$ is $j_1$.
+Thus Morphisms, Lemma \ref{morphisms-lemma-check-closed-infinitesimally}
+shows that $j_n$ is a closed immersion.
+Set $\mathcal{F}_n = j_{n, *}\mathcal{O}_{Z_n}$, so that
+$j_n^\sharp$ is a surjection $\mathcal{O}_{X_n} \to \mathcal{F}_n$.
+Again using that the squares are cartesian we see that
+the pullback of $\mathcal{F}_{n + 1}$ to $X_n$ is $\mathcal{F}_n$.
+Hence Grothendieck's existence theorem, as reformulated in
+Remark \ref{remark-reformulate-existence-theorem},
+tells us there exists a map
+$\mathcal{O}_X \to \mathcal{F}$
+of coherent $\mathcal{O}_X$-modules whose restriction to
+$X_n$ recovers $\mathcal{O}_{X_n} \to \mathcal{F}_n$.
+Moreover, the support of $\mathcal{F}$ is proper over $S$.
+As the completion functor is exact (Lemma \ref{lemma-exact})
+we see that the cokernel $\mathcal{Q}$ of $\mathcal{O}_X \to \mathcal{F}$
+has vanishing completion. Since $\mathcal{F}$ has support
+proper over $S$ and so does $\mathcal{Q}$ this implies that
+$\mathcal{Q} = 0$ for example because the functor
+(\ref{equation-completion-functor-proper-over-A}) is an equivalence
+by Grothendieck's existence theorem.
+Thus $\mathcal{F} = \mathcal{O}_X/\mathcal{J}$
+for some quasi-coherent sheaf of ideals $\mathcal{J}$.
+Setting $Z = V(\mathcal{J})$ finishes the proof.
+\end{proof}
+
+\noindent
+In the following lemma it is actually enough to assume that $Y_1 \to X_1$
+is finite as it will imply that $Y_n \to X_n$ is finite too
+(see More on Morphisms, Lemma
+\ref{more-morphisms-lemma-thicken-property-morphisms-cartesian}).
+
+\begin{lemma}
+\label{lemma-algebraize-formal-scheme-finite-over-proper}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Write $S = \Spec(A)$ and $S_n = \Spec(A/I^n)$.
+Let $X \to S$ be a separated morphism of finite type.
+For $n \geq 1$ we set $X_n = X \times_S S_n$.
+Suppose given a commutative diagram
+$$
+\xymatrix{
+Y_1 \ar[r] \ar[d] & Y_2 \ar[r] \ar[d] & Y_3 \ar[r] \ar[d] & \ldots \\
+X_1 \ar[r]^{i_1} & X_2 \ar[r]^{i_2} & X_3 \ar[r] & \ldots
+}
+$$
+of schemes with cartesian squares. Assume that
+\begin{enumerate}
+\item $Y_n \to X_n$ is a finite morphism, and
+\item $Y_1 \to S_1$ is proper.
+\end{enumerate}
+Then there exists a finite morphism of schemes $Y \to X$ such that
+$Y_n = Y \times_S S_n$. Moreover, $Y$ is proper over $S$.
+\end{lemma}
+
+\begin{proof}
+Let's write $f_n : Y_n \to X_n$ for the vertical morphisms.
+Set $\mathcal{F}_n = f_{n, *}\mathcal{O}_{Y_n}$. This is
+a coherent $\mathcal{O}_{X_n}$-module as $f_n$ is finite
+(Lemma \ref{lemma-finite-pushforward-coherent}).
+Using that the squares are cartesian we see that
+the pullback of $\mathcal{F}_{n + 1}$ to $X_n$ is $\mathcal{F}_n$.
+Hence Grothendieck's existence theorem, as reformulated in
+Remark \ref{remark-reformulate-existence-theorem},
+tells us there exists a coherent $\mathcal{O}_X$-module
+$\mathcal{F}$ whose restriction to $X_n$ recovers $\mathcal{F}_n$.
+Moreover, the support of $\mathcal{F}$ is proper over $S$.
+As the completion functor is fully faithful
+(Theorem \ref{theorem-grothendieck-existence})
+we see that the multiplication maps
+$\mathcal{F}_n \otimes_{\mathcal{O}_{X_n}} \mathcal{F}_n \to
+\mathcal{F}_n$ fit together to give an algebra structure on $\mathcal{F}$.
+Setting $Y = \underline{\Spec}_X(\mathcal{F})$ finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-algebraize-morphism}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Write $S = \Spec(A)$ and $S_n = \Spec(A/I^n)$. Let $X$, $Y$ be schemes
+over $S$. For $n \geq 1$ we set $X_n = X \times_S S_n$ and
+$Y_n = Y \times_S S_n$. Suppose given a compatible system of
+commutative diagrams
+$$
+\xymatrix{
+& & X_{n + 1} \ar[rd] \ar[rr]_{g_{n + 1}} & & Y_{n + 1} \ar[ld] \\
+X_n \ar[rru] \ar[rd] \ar[rr]_{g_n} & & Y_n \ar[rru] \ar[ld] & S_{n + 1} \\
+& S_n \ar[rru]
+}
+$$
+Assume that
+\begin{enumerate}
+\item $X \to S$ is proper, and
+\item $Y \to S$ is separated of finite type.
+\end{enumerate}
+Then there exists a unique morphism of schemes $g : X \to Y$
+over $S$ such that $g_n$ is the base change of $g$ to $S_n$.
+\end{lemma}
+
+\begin{proof}
+The morphisms $(1, g_n) : X_n \to X_n \times_S Y_n$ are closed immersions
+because $Y_n \to S_n$ is separated
+(Schemes, Lemma \ref{schemes-lemma-section-immersion}). Thus by
+Lemma \ref{lemma-algebraize-formal-closed-subscheme}
+there exists a closed subscheme $Z \subset X \times_S Y$
+proper over $S$ whose base change to $S_n$ recovers
+$X_n \subset X_n \times_S Y_n$. The first projection $p : Z \to X$
+is a proper morphism (as $Z$ is proper over $S$, see
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed})
+whose base change to $S_n$ is an isomorphism
+for all $n$. In particular, $p : Z \to X$ is finite over an open
+neighbourhood of $X_1$ by
+Lemma \ref{lemma-proper-finite-fibre-finite-in-neighbourhood}.
+As $X$ is proper over $S$ this open neighbourhood is all of $X$
+and we conclude $p : Z \to X$ is finite.
+Applying the equivalence of Proposition \ref{proposition-existence-proper}
+we see that $p_*\mathcal{O}_Z = \mathcal{O}_X$ as this is true
+modulo $I^n$ for all $n$. Hence $p$ is an isomorphism and we obtain
+the morphism $g$ as the composition $X \cong Z \to Y$.
+We omit the proof of uniqueness.
+\end{proof}
+
+\noindent
+In order to prove an ``abstract'' algebraization theorem we need
+to assume we have an ample invertible sheaf, as the result is false
+without such an assumption.
+
+\begin{theorem}[Grothendieck's algebraization theorem]
+\label{theorem-algebraization}
+Let $A$ be a Noetherian ring complete with respect to an ideal $I$.
+Set $S = \Spec(A)$ and $S_n = \Spec(A/I^n)$. Consider a commutative
+diagram
+$$
+\xymatrix{
+X_1 \ar[r]_{i_1} \ar[d] & X_2 \ar[r]_{i_2} \ar[d] & X_3 \ar[r] \ar[d] &
+\ldots \\
+S_1 \ar[r] & S_2 \ar[r] & S_3 \ar[r] & \ldots
+}
+$$
+of schemes with cartesian squares. Suppose given $(\mathcal{L}_n, \varphi_n)$
+where each $\mathcal{L}_n$ is an invertible sheaf on $X_n$ and
+$\varphi_n : i_n^*\mathcal{L}_{n + 1} \to \mathcal{L}_n$
+is an isomorphism. If
+\begin{enumerate}
+\item $X_1 \to S_1$ is proper, and
+\item $\mathcal{L}_1$ is ample on $X_1$
+\end{enumerate}
+then there exists a proper morphism of schemes $X \to S$
+and an ample invertible $\mathcal{O}_X$-module $\mathcal{L}$
+and isomorphisms $X_n \cong X \times_S S_n$ and
+$\mathcal{L}_n \cong \mathcal{L}|_{X_n}$ compatible with
+the morphisms $i_n$ and $\varphi_n$.
+\end{theorem}
+
+\begin{proof}
+Since the squares in the diagram are cartesian and since the morphisms
+$S_n \to S_{n + 1}$ are closed immersions, we see that the morphisms
+$i_n$ are closed immersions too. In particular we may think of
+$X_m$ as a closed subscheme of $X_n$ for $m < n$. In fact $X_m$ is
+the closed subscheme cut out by the quasi-coherent sheaf of ideals
+$I^m\mathcal{O}_{X_n}$. Moreover, the underlying topological spaces
+of the schemes $X_1, X_2, X_3, \ldots$ are all identified, hence we
+may (and do) think of sheaves $\mathcal{O}_{X_n}$ as living on the
+same underlying topological space; similarly for coherent
+$\mathcal{O}_{X_n}$-modules. Set
+$$
+\mathcal{F}_n =
+\Ker(\mathcal{O}_{X_{n + 1}} \to \mathcal{O}_{X_n})
+$$
+so that we obtain short exact sequences
+$$
+0 \to \mathcal{F}_n \to \mathcal{O}_{X_{n + 1}} \to \mathcal{O}_{X_n} \to 0
+$$
+By the above we have $\mathcal{F}_n = I^n\mathcal{O}_{X_{n + 1}}$.
+It follows that $\mathcal{F}_n$ is a coherent sheaf on $X_{n + 1}$
+annihilated by $I$, hence we may (and do) think of it as a coherent
+$\mathcal{O}_{X_1}$-module. Observe that for $m > n$ the sheaf
+$$
+I^n\mathcal{O}_{X_m}/I^{n + 1}\mathcal{O}_{X_m}
+$$
+maps isomorphically to $\mathcal{F}_n$ under the map
+$\mathcal{O}_{X_m} \to \mathcal{O}_{X_{n + 1}}$. Hence given
+$n_1, n_2 \geq 0$ we can pick an $m > n_1 + n_2$ and consider the
+multiplication map
+$$
+I^{n_1}\mathcal{O}_{X_m} \times I^{n_2}\mathcal{O}_{X_m}
+\longrightarrow
+I^{n_1 + n_2}\mathcal{O}_{X_m} \to \mathcal{F}_{n_1 + n_2}
+$$
+This induces an $\mathcal{O}_{X_1}$-bilinear map
+$$
+\mathcal{F}_{n_1} \times \mathcal{F}_{n_2} \longrightarrow
+\mathcal{F}_{n_1 + n_2}
+$$
+which in turn defines the structure of a graded $\mathcal{O}_{X_1}$-algebra
+on $\mathcal{F} = \bigoplus_{n \geq 0} \mathcal{F}_n$.
+
+\medskip\noindent
+Set $B = \bigoplus I^n/I^{n + 1}$; this is a finitely generated
+graded $A/I$-algebra. Set $\mathcal{B} = (X_1 \to S_1)^*\widetilde{B}$.
+The discussion above provides us with a canonical surjection
+$$
+\mathcal{B} \longrightarrow \mathcal{F}
+$$
+of graded $\mathcal{O}_{X_1}$-algebras. In particular we see that
+$\mathcal{F}$ is a finite type quasi-coherent graded $\mathcal{B}$-module.
+By Lemma \ref{lemma-graded-finiteness} we can find an integer $d_0$
+such that $H^1(X_1, \mathcal{F} \otimes \mathcal{L}_1^{\otimes d}) = 0$
+for all $d \geq d_0$. Pick a $d \geq d_0$ such that there exist sections
+$s_{0, 1}, \ldots, s_{N, 1} \in \Gamma(X_1, \mathcal{L}_1^{\otimes d})$
+which induce an immersion
+$$
+\psi_1 : X_1 \to \mathbf{P}^N_{S_1}
+$$
+over $S_1$, see
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-over-affine-ample-very-ample}.
+As $X_1$ is proper over $S_1$ we see that $\psi_1$
+is a closed immersion, see
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}
+and
+Schemes, Lemma \ref{schemes-lemma-immersion-when-closed}.
+We are going to ``lift'' $\psi_1$ to a compatible system of
+closed immersions of $X_n$ into $\mathbf{P}^N$.
+
+\medskip\noindent
+Upon tensoring the short exact sequences of the first paragraph
+of the proof
+by $\mathcal{L}_{n + 1}^{\otimes d}$ we obtain short exact sequences
+$$
+0 \to \mathcal{F}_n \otimes \mathcal{L}_{n + 1}^{\otimes d} \to
+\mathcal{L}_{n + 1}^{\otimes d} \to
+\mathcal{L}_{n + 1}^{\otimes d} \otimes \mathcal{O}_{X_n} \to 0
+$$
+Using the isomorphisms $\varphi_n$ we obtain isomorphisms
+$\mathcal{L}_{n + 1} \otimes \mathcal{O}_{X_l} = \mathcal{L}_l$
+for $l \leq n$. Whence the sequence above becomes
+$$
+0 \to \mathcal{F}_n \otimes \mathcal{L}_1^{\otimes d} \to
+\mathcal{L}_{n + 1}^{\otimes d} \to \mathcal{L}_n^{\otimes d} \to 0
+$$
+The vanishing of $H^1(X_1, \mathcal{F}_n \otimes \mathcal{L}_1^{\otimes d})$
+implies we can inductively lift
+$s_{0, 1}, \ldots, s_{N, 1} \in \Gamma(X_1, \mathcal{L}_1^{\otimes d})$
+to sections
+$s_{0, n}, \ldots, s_{N, n} \in \Gamma(X_n, \mathcal{L}_n^{\otimes d})$.
+Thus we obtain a commutative diagram
+$$
+\xymatrix{
+X_1 \ar[r]_{i_1} \ar[d]_{\psi_1} &
+X_2 \ar[r]_{i_2} \ar[d]_{\psi_2} &
+X_3 \ar[r] \ar[d]_{\psi_3} &
+\ldots \\
+\mathbf{P}^N_{S_1} \ar[r] &
+\mathbf{P}^N_{S_2} \ar[r] &
+\mathbf{P}^N_{S_3} \ar[r] & \ldots
+}
+$$
+where
+$\psi_n = \varphi_{(\mathcal{L}_n, (s_{0, n}, \ldots, s_{N, n}))}$
+in the notation of Constructions, Section
+\ref{constructions-section-projective-space}.
+As the squares in the statement of the theorem are cartesian
+we see that the squares in the above diagram are cartesian.
+We win by applying Lemma \ref{lemma-algebraize-formal-closed-subscheme}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/cohomology.tex b/books/stacks/cohomology.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7e50443287f88bca3ba69285028d3b4a46546a72
--- /dev/null
+++ b/books/stacks/cohomology.tex
@@ -0,0 +1,14084 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Cohomology of Sheaves}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this document we work out some topics on cohomology of sheaves
+on topological spaces. We mostly work in the generality of modules
+over a sheaf of rings and we work with morphisms of ringed spaces.
+To see what happens for sheaves on sites take a look at the chapter
+Cohomology on Sites, Section \ref{sites-cohomology-section-introduction}.
+Basic references are \cite{Godement} and \cite{Iversen}.
+
+
+
+
+
+\section{Cohomology of sheaves}
+\label{section-cohomology-sheaves}
+
+\noindent
+Let $X$ be a topological space. Let $\mathcal{F}$ be an abelian sheaf.
+We know that the category of abelian sheaves on $X$ has enough injectives, see
+Injectives, Lemma \ref{injectives-lemma-abelian-sheaves-space}.
+Hence we can choose an injective resolution
+$\mathcal{F}[0] \to \mathcal{I}^\bullet$. As is customary we define
+\begin{equation}
+\label{equation-cohomology}
+H^i(X, \mathcal{F}) = H^i(\Gamma(X, \mathcal{I}^\bullet))
+\end{equation}
+to be the {\it $i$th cohomology group of the abelian sheaf $\mathcal{F}$}.
+The family of functors $H^i(X, -)$ forms a universal $\delta$-functor
+from $\textit{Ab}(X) \to \textit{Ab}$.
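+
+\medskip\noindent
+For instance, if $\mathcal{F}$ is an injective abelian sheaf, then we may
+take $\mathcal{I}^\bullet$ to be $\mathcal{F}$ placed in degree $0$, and
+(\ref{equation-cohomology}) gives
+$H^0(X, \mathcal{F}) = \Gamma(X, \mathcal{F})$ and
+$H^i(X, \mathcal{F}) = 0$ for $i > 0$. In general we still have
+$H^0(X, \mathcal{F}) = \Gamma(X, \mathcal{F})$ because $\Gamma(X, -)$
+is left exact.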
+
+\medskip\noindent
+Let $f : X \to Y$ be a continuous map of topological spaces. With
+$\mathcal{F}[0] \to \mathcal{I}^\bullet$ as above
+we define
+\begin{equation}
+\label{equation-higher-direct-image}
+R^if_*\mathcal{F} = H^i(f_*\mathcal{I}^\bullet)
+\end{equation}
+to be the {\it $i$th higher direct image of $\mathcal{F}$}.
+The family of functors $R^if_*$ forms a universal $\delta$-functor
+from $\textit{Ab}(X) \to \textit{Ab}(Y)$.
+
+\medskip\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}$ be an
+$\mathcal{O}_X$-module. We know that the category of $\mathcal{O}_X$-modules
+on $X$ has enough injectives, see
+Injectives, Lemma \ref{injectives-lemma-sheaves-modules-space}.
+Hence we can choose an injective resolution
+$\mathcal{F}[0] \to \mathcal{I}^\bullet$. As is customary we define
+\begin{equation}
+\label{equation-cohomology-modules}
+H^i(X, \mathcal{F}) = H^i(\Gamma(X, \mathcal{I}^\bullet))
+\end{equation}
+to be the {\it $i$th cohomology group of $\mathcal{F}$}.
+The family of functors $H^i(X, -)$ forms a universal $\delta$-functor
+from $\textit{Mod}(\mathcal{O}_X) \to \text{Mod}_{\mathcal{O}_X(X)}$.
+
+\medskip\noindent
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of ringed
+spaces. With $\mathcal{F}[0] \to \mathcal{I}^\bullet$ as above
+we define
+\begin{equation}
+\label{equation-higher-direct-image-modules}
+R^if_*\mathcal{F} = H^i(f_*\mathcal{I}^\bullet)
+\end{equation}
+to be the {\it $i$th higher direct image of $\mathcal{F}$}.
+The family of functors $R^if_*$ forms a universal $\delta$-functor
+from $\textit{Mod}(\mathcal{O}_X) \to \textit{Mod}(\mathcal{O}_Y)$.
+
+
+
+
+
+\section{Derived functors}
+\label{section-derived-functors}
+
+\noindent
+We briefly explain how to get right derived functors using resolution
+functors. For the unbounded derived functors, please see
+Section \ref{section-unbounded}.
+
+\medskip\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. The category
+$\textit{Mod}(\mathcal{O}_X)$ is abelian, see
+Modules, Lemma \ref{modules-lemma-abelian}.
+In this chapter we will write
+$$
+K(\mathcal{O}_X) = K(\textit{Mod}(\mathcal{O}_X))
+\quad
+\text{and}
+\quad
+D(\mathcal{O}_X) = D(\textit{Mod}(\mathcal{O}_X))
+$$
+and similarly for the bounded versions of the triangulated categories
+introduced in
+Derived Categories, Definition \ref{derived-definition-complexes-notation} and
+Definition \ref{derived-definition-unbounded-derived-category}.
+By
+Derived Categories, Remark \ref{derived-remark-big-abelian-category}
+there exists a resolution functor
+$$
+j = j_X :
+K^{+}(\textit{Mod}(\mathcal{O}_X))
+\longrightarrow
+K^{+}(\mathcal{I})
+$$
+where $\mathcal{I}$ is the strictly full additive subcategory of
+$\textit{Mod}(\mathcal{O}_X)$ consisting of injective sheaves.
+For any left exact functor
+$F : \textit{Mod}(\mathcal{O}_X) \to \mathcal{B}$
+into any abelian category $\mathcal{B}$ we will denote $RF$ the
+right derived functor described in
+Derived Categories, Section \ref{derived-section-right-derived-functor}
+and constructed using the resolution functor $j_X$ just described:
+\begin{equation}
+\label{equation-RF}
+RF = F \circ j_X' : D^{+}(X) \longrightarrow D^{+}(\mathcal{B})
+\end{equation}
+see
+Derived Categories, Lemma \ref{derived-lemma-right-derived-functor}
+for notation. Note that we may think of $RF$ as defined on
+$\textit{Mod}(\mathcal{O}_X)$,
+$\text{Comp}^{+}(\textit{Mod}(\mathcal{O}_X))$,
+$K^{+}(X)$, or $D^{+}(X)$
+depending on the situation. According to
+Derived Categories, Definition \ref{derived-definition-higher-derived-functors}
+we obtain the $i$th right derived functor
+\begin{equation}
+\label{equation-RFi}
+R^iF = H^i \circ RF : \textit{Mod}(\mathcal{O}_X) \longrightarrow \mathcal{B}
+\end{equation}
+so that $R^0F = F$ and $\{R^iF, \delta\}_{i \geq 0}$ is a universal
+$\delta$-functor, see
+Derived Categories, Lemma \ref{derived-lemma-higher-derived-functors}.
+
+\medskip\noindent
+Here are two special cases of this construction.
+Given a ring $R$ we write $K(R) = K(\text{Mod}_R)$ and
+$D(R) = D(\text{Mod}_R)$ and similarly for bounded versions.
+For any open $U \subset X$ we have a left exact functor
+$
+\Gamma(U, -) :
+\textit{Mod}(\mathcal{O}_X)
+\longrightarrow
+\text{Mod}_{\mathcal{O}_X(U)}
+$
+which gives rise to
+\begin{equation}
+\label{equation-total-derived-cohomology}
+R\Gamma(U, -) :
+D^{+}(X)
+\longrightarrow
+D^{+}(\mathcal{O}_X(U))
+\end{equation}
+by the discussion above. We set $H^i(U, -) = R^i\Gamma(U, -)$.
+If $U = X$ we recover (\ref{equation-cohomology-modules}).
+If $f : X \to Y$ is a morphism of ringed spaces, then we have
+the left exact functor
+$
+f_* :
+\textit{Mod}(\mathcal{O}_X)
+\longrightarrow
+\textit{Mod}(\mathcal{O}_Y)
+$
+which gives rise to the {\it derived pushforward}
+\begin{equation}
+\label{equation-total-derived-direct-image}
+Rf_* :
+D^{+}(X)
+\longrightarrow
+D^{+}(Y)
+\end{equation}
+The $i$th cohomology sheaf of $Rf_*\mathcal{F}^\bullet$ is denoted
+$R^if_*\mathcal{F}^\bullet$ and called the $i$th {\it higher direct image}
+in accordance with (\ref{equation-higher-direct-image-modules}).
+The two displayed functors above are exact functors
+of derived categories.
+
+\medskip\noindent
+{\bf Abuse of notation:} When the functor $Rf_*$, or any other
+derived functor, is applied to a sheaf $\mathcal{F}$ on $X$ or a complex
+of sheaves it is understood that $\mathcal{F}$ has been replaced by a
+suitable resolution of $\mathcal{F}$. To facilitate this kind of
+operation we will say, given an object
+$\mathcal{F}^\bullet \in D(\mathcal{O}_X)$,
+that a bounded below complex $\mathcal{I}^\bullet$ of injectives of
+$\textit{Mod}(\mathcal{O}_X)$
+{\it represents $\mathcal{F}^\bullet$ in the derived category}
+if there exists a quasi-isomorphism
+$\mathcal{F}^\bullet \to \mathcal{I}^\bullet$. In the same vein the phrase
+``let $\alpha : \mathcal{F}^\bullet \to \mathcal{G}^\bullet$ be
+a morphism of $D(\mathcal{O}_X)$''
+does not mean that $\alpha$ is represented by a
+morphism of complexes. If we have an actual morphism of complexes we will
+say so.
+
+
+
+
+
+
+
+
+
+\section{First cohomology and torsors}
+\label{section-h1-torsors}
+
+\begin{definition}
+\label{definition-torsor}
+Let $X$ be a topological space.
+Let $\mathcal{G}$ be a sheaf of (possibly non-commutative) groups on $X$.
+A {\it torsor}, or more precisely a {\it $\mathcal{G}$-torsor}, is a sheaf
+of sets $\mathcal{F}$ on $X$ endowed with an action
+$\mathcal{G} \times \mathcal{F} \to \mathcal{F}$ such that
+\begin{enumerate}
+\item whenever $\mathcal{F}(U)$ is nonempty the action
+$\mathcal{G}(U) \times \mathcal{F}(U) \to \mathcal{F}(U)$
+is simply transitive, and
+\item for every $x \in X$ the stalk $\mathcal{F}_x$ is nonempty.
+\end{enumerate}
+A {\it morphism of $\mathcal{G}$-torsors} $\mathcal{F} \to \mathcal{F}'$
+is simply a morphism of sheaves of sets compatible with the
+$\mathcal{G}$-actions. The {\it trivial $\mathcal{G}$-torsor}
+is the sheaf $\mathcal{G}$ endowed with the obvious left
+$\mathcal{G}$-action.
+\end{definition}
+
+\noindent
+It is clear that a morphism of torsors is automatically an isomorphism;
+indeed, this may be checked on stalks.
+
+\begin{lemma}
+\label{lemma-trivial-torsor}
+Let $X$ be a topological space.
+Let $\mathcal{G}$ be a sheaf of (possibly non-commutative) groups on $X$.
+A $\mathcal{G}$-torsor $\mathcal{F}$ is trivial if and only if
+$\mathcal{F}(X) \not = \emptyset$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
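+
+\noindent
+A sketch of the omitted argument: given $s \in \mathcal{F}(X)$, the rule
+$g \mapsto g \cdot s|_U$ for $g \in \mathcal{G}(U)$ defines a morphism of
+$\mathcal{G}$-torsors $\mathcal{G} \to \mathcal{F}$, which is an isomorphism
+since every morphism of torsors is one. Conversely, the trivial torsor has
+the global section $1 \in \mathcal{G}(X)$.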
+
+\begin{lemma}
+\label{lemma-torsors-h1}
+Let $X$ be a topological space.
+Let $\mathcal{H}$ be an abelian sheaf on $X$.
+There is a canonical bijection between the set of isomorphism
+classes of $\mathcal{H}$-torsors and $H^1(X, \mathcal{H})$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be an $\mathcal{H}$-torsor.
+Consider the free abelian sheaf $\mathbf{Z}[\mathcal{F}]$
+on $\mathcal{F}$. It is the sheafification of the rule
+which associates to $U \subset X$ open the collection of finite
+formal sums $\sum n_i[s_i]$ with $n_i \in \mathbf{Z}$
+and $s_i \in \mathcal{F}(U)$. There is a natural map
+$$
+\sigma : \mathbf{Z}[\mathcal{F}] \longrightarrow \underline{\mathbf{Z}}
+$$
+which to a local section $\sum n_i[s_i]$ associates $\sum n_i$.
+The kernel of $\sigma$ is generated by the local sections of the form
+$[s] - [s']$. There is a canonical map
+$a : \Ker(\sigma) \to \mathcal{H}$
+which maps $[s] - [s'] \mapsto h$ where $h$ is the local section of
+$\mathcal{H}$ such that $h \cdot s = s'$. Consider the pushout diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\Ker(\sigma) \ar[r] \ar[d]^a &
+\mathbf{Z}[\mathcal{F}] \ar[r] \ar[d] &
+\underline{\mathbf{Z}} \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\mathcal{H} \ar[r] &
+\mathcal{E} \ar[r] &
+\underline{\mathbf{Z}} \ar[r] &
+0
+}
+$$
+Here $\mathcal{E}$ is the extension obtained by pushout.
+From the long exact cohomology sequence associated to the lower
+short exact sequence we obtain an element
+$\xi = \xi_\mathcal{F} \in H^1(X, \mathcal{H})$
+by applying the boundary operator to $1 \in H^0(X, \underline{\mathbf{Z}})$.
+
+\medskip\noindent
+Conversely, given $\xi \in H^1(X, \mathcal{H})$ we can associate to
+$\xi$ a torsor as follows. Choose an embedding $\mathcal{H} \to \mathcal{I}$
+of $\mathcal{H}$ into an injective abelian sheaf $\mathcal{I}$. We set
+$\mathcal{Q} = \mathcal{I}/\mathcal{H}$ so that we have a short exact
+sequence
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{H} \ar[r] &
+\mathcal{I} \ar[r] &
+\mathcal{Q} \ar[r] &
+0
+}
+$$
+The element $\xi$ is the image of a global section $q \in H^0(X, \mathcal{Q})$
+because $H^1(X, \mathcal{I}) = 0$ (see
+Derived Categories, Lemma \ref{derived-lemma-higher-derived-functors}).
+Let $\mathcal{F} \subset \mathcal{I}$ be the subsheaf (of sets) of sections
+that map to $q$ in the sheaf $\mathcal{Q}$. It is easy to verify that
+$\mathcal{F}$ is a torsor.
+
+\medskip\noindent
+We omit the verification that the two constructions given
+above are mutually inverse.
+\end{proof}
+
+
+
+
+
+
+
+\section{First cohomology and extensions}
+\label{section-h1-extensions}
+
+\begin{lemma}
+\label{lemma-h1-extensions}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}$ be a sheaf of
+$\mathcal{O}_X$-modules. There is a canonical bijection
+$$
+\Ext^1_{\textit{Mod}(\mathcal{O}_X)}(\mathcal{O}_X, \mathcal{F})
+\longrightarrow
+H^1(X, \mathcal{F})
+$$
+which associates to the extension
+$$
+0 \to \mathcal{F} \to \mathcal{E} \to \mathcal{O}_X \to 0
+$$
+the image of $1 \in \Gamma(X, \mathcal{O}_X)$ in $H^1(X, \mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+Let us construct the inverse of the map given in the lemma. Let
+$\xi \in H^1(X, \mathcal{F})$. Choose an injection
+$\mathcal{F} \subset \mathcal{I}$ with $\mathcal{I}$ injective in
+$\textit{Mod}(\mathcal{O}_X)$.
+Set $\mathcal{Q} = \mathcal{I}/\mathcal{F}$.
+By the long exact sequence of cohomology, we see that
+$\xi$ is the image of a section
+$\tilde \xi \in \Gamma(X, \mathcal{Q}) =
+\Hom_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{Q})$.
+Now, we just form the pullback
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{F} \ar[r] \ar@{=}[d] &
+\mathcal{E} \ar[r] \ar[d] &
+\mathcal{O}_X \ar[r] \ar[d]^{\tilde \xi} &
+0 \\
+0 \ar[r] &
+\mathcal{F} \ar[r] &
+\mathcal{I} \ar[r] &
+\mathcal{Q} \ar[r] &
+0
+}
+$$
+see Homology, Section \ref{homology-section-extensions}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{First cohomology and invertible sheaves}
+\label{section-invertible-sheaves}
+
+\noindent
+The Picard group of a ringed space is defined in
+Modules, Section \ref{modules-section-invertible}.
+
+\begin{lemma}
+\label{lemma-h1-invertible}
+Let $(X, \mathcal{O}_X)$ be a locally ringed space.
+There is a canonical isomorphism
+$$
+H^1(X, \mathcal{O}_X^*) = \Pic(X)
+$$
+of abelian groups.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Consider the presheaf $\mathcal{L}^*$ defined by the rule
+$$
+U \longmapsto \{s \in \mathcal{L}(U)
+\text{ such that } \mathcal{O}_U \xrightarrow{s \cdot -} \mathcal{L}_U
+\text{ is an isomorphism}\}
+$$
+This presheaf satisfies the sheaf condition. Moreover, if
+$f \in \mathcal{O}_X^*(U)$ and $s \in \mathcal{L}^*(U)$, then clearly
+$fs \in \mathcal{L}^*(U)$. By the same token, if $s, s' \in \mathcal{L}^*(U)$
+then there exists a unique $f \in \mathcal{O}_X^*(U)$ such that
+$fs = s'$. Moreover, the sheaf $\mathcal{L}^*$ has sections locally
+by Modules, Lemma \ref{modules-lemma-invertible-is-locally-free-rank-1}.
+In other words we
+see that $\mathcal{L}^*$ is an $\mathcal{O}_X^*$-torsor. Thus we get
+a map
+$$
+\begin{matrix}
+\text{invertible sheaves on }(X, \mathcal{O}_X) \\
+\text{ up to isomorphism}
+\end{matrix}
+\longrightarrow
+\begin{matrix}
+\mathcal{O}_X^*\text{-torsors} \\
+\text{ up to isomorphism}
+\end{matrix}
+$$
+We omit the verification that this is a homomorphism of abelian groups.
+By
+Lemma \ref{lemma-torsors-h1}
+the right hand side is canonically
+bijective to $H^1(X, \mathcal{O}_X^*)$.
+Thus we have to show this map is injective and surjective.
+
+\medskip\noindent
+Injective. If the torsor $\mathcal{L}^*$ is trivial, this means by
+Lemma \ref{lemma-trivial-torsor}
+that $\mathcal{L}^*$ has a global section.
+Hence this means exactly that $\mathcal{L} \cong \mathcal{O}_X$ is
+the neutral element in $\Pic(X)$.
+
+\medskip\noindent
+Surjective. Let $\mathcal{F}$ be an $\mathcal{O}_X^*$-torsor.
+Consider the presheaf of sets
+$$
+\mathcal{L}_1 : U \longmapsto
+(\mathcal{F}(U) \times \mathcal{O}_X(U))/\mathcal{O}_X^*(U)
+$$
+where the action of $f \in \mathcal{O}_X^*(U)$ on
+$(s, g)$ is $(fs, f^{-1}g)$. Then $\mathcal{L}_1$ is a presheaf
+of $\mathcal{O}_X$-modules by setting
+$(s, g) + (s', g') = (s, g + (s'/s)g')$ where $s'/s$ is the local
+section $f$ of $\mathcal{O}_X^*$ such that $fs = s'$, and
+$h(s, g) = (s, hg)$ for $h$ a local section of $\mathcal{O}_X$.
+We omit the verification that the sheafification
+$\mathcal{L} = \mathcal{L}_1^\#$ is an invertible $\mathcal{O}_X$-module
+whose associated $\mathcal{O}_X^*$-torsor $\mathcal{L}^*$ is isomorphic
+to $\mathcal{F}$.
+\end{proof}
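+
+\noindent
+Combining with Lemma \ref{lemma-trivial-torsor} we see, for example, that an
+invertible $\mathcal{O}_X$-module $\mathcal{L}$ is trivial if and only if
+there exists a section $s \in \mathcal{L}(X)$ such that
+$\mathcal{O}_X \xrightarrow{s \cdot -} \mathcal{L}$ is an isomorphism, i.e.,
+if and only if the class of $\mathcal{L}$ in $H^1(X, \mathcal{O}_X^*)$
+vanishes.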
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Locality of cohomology}
+\label{section-locality}
+
+\noindent
+The following lemma says there is no ambiguity in defining the cohomology
+of a sheaf $\mathcal{F}$ over an open.
+
+\begin{lemma}
+\label{lemma-cohomology-of-open}
+Let $X$ be a ringed space.
+Let $U \subset X$ be an open subspace.
+\begin{enumerate}
+\item If $\mathcal{I}$ is an injective $\mathcal{O}_X$-module
+then $\mathcal{I}|_U$ is an injective $\mathcal{O}_U$-module.
+\item For any sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$ we have
+$H^p(U, \mathcal{F}) = H^p(U, \mathcal{F}|_U)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $j : U \to X$ the open immersion.
+Recall that the functor $j^{-1}$ of restriction to $U$ is a right adjoint
+to the functor $j_!$ of extension by $0$, see
+Sheaves, Lemma \ref{sheaves-lemma-j-shriek-modules}.
+Moreover, $j_!$ is exact. Hence (1) follows from
+Homology, Lemma \ref{homology-lemma-adjoint-preserve-injectives}.
+
+\medskip\noindent
+By definition $H^p(U, \mathcal{F}) = H^p(\Gamma(U, \mathcal{I}^\bullet))$
+where $\mathcal{F} \to \mathcal{I}^\bullet$ is an injective resolution
+in $\textit{Mod}(\mathcal{O}_X)$.
+By the above we see that $\mathcal{F}|_U \to \mathcal{I}^\bullet|_U$
+is an injective resolution in $\textit{Mod}(\mathcal{O}_U)$.
+Hence $H^p(U, \mathcal{F}|_U)$ is equal to
+$H^p(\Gamma(U, \mathcal{I}^\bullet|_U))$.
+Of course $\Gamma(U, \mathcal{F}) = \Gamma(U, \mathcal{F}|_U)$ for
+any sheaf $\mathcal{F}$ on $X$.
+Hence the equality
+in (2).
+\end{proof}
+
+\noindent
+Let $X$ be a ringed space.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_X$-modules.
+Let $U \subset V \subset X$ be open subsets.
+Then there is a canonical {\it restriction mapping}
+\begin{equation}
+\label{equation-restriction-mapping}
+H^n(V, \mathcal{F})
+\longrightarrow
+H^n(U, \mathcal{F}), \quad
+\xi \longmapsto \xi|_U
+\end{equation}
+functorial in $\mathcal{F}$. Namely, choose any injective
+resolution $\mathcal{F} \to \mathcal{I}^\bullet$. The restriction
+mappings of the sheaves $\mathcal{I}^p$ give a morphism of complexes
+$$
+\Gamma(V, \mathcal{I}^\bullet)
+\longrightarrow
+\Gamma(U, \mathcal{I}^\bullet)
+$$
+The LHS is a complex representing $R\Gamma(V, \mathcal{F})$
+and the RHS is a complex representing $R\Gamma(U, \mathcal{F})$.
+We get the map on cohomology groups by applying the functor $H^n$.
+As indicated we will use the notation $\xi \mapsto \xi|_U$ to denote this map.
+Thus the rule $U \mapsto H^n(U, \mathcal{F})$ is a presheaf of
+$\mathcal{O}_X$-modules. This presheaf is customarily denoted
+$\underline{H}^n(\mathcal{F})$. We will give another interpretation
+of this presheaf in Lemma \ref{lemma-include}.
+
+\begin{lemma}
+\label{lemma-kill-cohomology-class-on-covering}
+Let $X$ be a ringed space.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_X$-modules.
+Let $U \subset X$ be an open subspace.
+Let $n > 0$ and let $\xi \in H^n(U, \mathcal{F})$.
+Then there exists an open covering
+$U = \bigcup_{i\in I} U_i$ such that $\xi|_{U_i} = 0$ for
+all $i \in I$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F} \to \mathcal{I}^\bullet$ be an injective resolution.
+Then
+$$
+H^n(U, \mathcal{F}) =
+\frac{\Ker(\mathcal{I}^n(U) \to \mathcal{I}^{n + 1}(U))}
+{\Im(\mathcal{I}^{n - 1}(U) \to \mathcal{I}^n(U))}.
+$$
+Pick an element $\tilde \xi \in \mathcal{I}^n(U)$ representing the
+cohomology class in the presentation above. Since $\mathcal{I}^\bullet$
+is an injective resolution of $\mathcal{F}$ and $n > 0$ we see that
+the complex $\mathcal{I}^\bullet$ is exact in degree $n$. Hence
+$\Im(\mathcal{I}^{n - 1} \to \mathcal{I}^n) =
+\Ker(\mathcal{I}^n \to \mathcal{I}^{n + 1})$ as sheaves.
+Since $\tilde \xi$ is a section of the kernel sheaf over $U$
+we conclude there exists an open covering $U = \bigcup_{i \in I} U_i$
+such that $\tilde \xi|_{U_i}$ is the image under $d$ of a section
+$\xi_i \in \mathcal{I}^{n - 1}(U_i)$. By our definition of the
+restriction $\xi|_{U_i}$ as corresponding to the class of
+$\tilde \xi|_{U_i}$ we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-describe-higher-direct-images}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+The sheaves $R^if_*\mathcal{F}$ are the sheaves
+associated to the presheaves
+$$
+V \longmapsto H^i(f^{-1}(V), \mathcal{F})
+$$
+with restriction mappings as in Equation (\ref{equation-restriction-mapping}).
+There is a similar statement for $R^if_*$ applied to a
+bounded below complex $\mathcal{F}^\bullet$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F} \to \mathcal{I}^\bullet$ be an injective resolution.
+Then $R^if_*\mathcal{F}$ is by definition the $i$th cohomology sheaf
+of the complex
+$$
+f_*\mathcal{I}^0 \to f_*\mathcal{I}^1 \to f_*\mathcal{I}^2 \to \ldots
+$$
+By definition of the abelian category structure on $\mathcal{O}_Y$-modules
+this cohomology sheaf is the sheaf associated to the presheaf
+$$
+V
+\longmapsto
+\frac{\Ker(f_*\mathcal{I}^i(V) \to f_*\mathcal{I}^{i + 1}(V))}
+{\Im(f_*\mathcal{I}^{i - 1}(V) \to f_*\mathcal{I}^i(V))}
+$$
+and this is obviously equal to
+$$
+\frac{\Ker(\mathcal{I}^i(f^{-1}(V)) \to \mathcal{I}^{i + 1}(f^{-1}(V)))}
+{\Im(\mathcal{I}^{i - 1}(f^{-1}(V)) \to \mathcal{I}^i(f^{-1}(V)))}
+$$
+which is equal to $H^i(f^{-1}(V), \mathcal{F})$
+and we win.
+\end{proof}
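+
+\noindent
+For example, if $Y$ is a one point space, then $f_*$ is identified with the
+global sections functor, and the lemma recovers the fact that
+$R^if_*\mathcal{F}$ is the sheaf on $Y$ corresponding to the module
+$H^i(X, \mathcal{F})$.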
+
+\begin{lemma}
+\label{lemma-localize-higher-direct-images}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Let $V \subset Y$ be an open subspace.
+Denote $g : f^{-1}(V) \to V$ the restriction of $f$.
+Then we have
+$$
+R^pg_*(\mathcal{F}|_{f^{-1}(V)}) = (R^pf_*\mathcal{F})|_V
+$$
+There is a similar statement for the
+derived image $Rf_*\mathcal{F}^\bullet$ where $\mathcal{F}^\bullet$
+is a bounded below complex of $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+First proof. Apply Lemmas \ref{lemma-describe-higher-direct-images}
+and \ref{lemma-cohomology-of-open} to see the displayed equality.
+Second proof. Choose an injective resolution
+$\mathcal{F} \to \mathcal{I}^\bullet$
+and use that $\mathcal{F}|_{f^{-1}(V)} \to \mathcal{I}^\bullet|_{f^{-1}(V)}$
+is an injective resolution also.
+\end{proof}
+
+\begin{remark}
+\label{remark-daniel}
+Here is a different approach to the proofs of
+Lemmas \ref{lemma-kill-cohomology-class-on-covering} and
+\ref{lemma-describe-higher-direct-images} above.
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $i_X : \textit{Mod}(\mathcal{O}_X) \to \textit{PMod}(\mathcal{O}_X)$
+be the inclusion functor and let $\#$ be the sheafification functor.
+Recall that $i_X$ is left exact and $\#$ is exact.
+\begin{enumerate}
+\item First prove Lemma \ref{lemma-include} below which says that the
+right derived functors of $i_X$ are given by
+$R^pi_X\mathcal{F} = \underline{H}^p(\mathcal{F})$.
+Here is another proof: The equality is clear for $p = 0$.
+Both $(R^pi_X)_{p \geq 0}$ and $(\underline{H}^p)_{p \geq 0}$
+are delta functors vanishing on injectives, hence both are universal,
+hence they are isomorphic. See Homology,
+Section \ref{homology-section-cohomological-delta-functor}.
+\item A restatement of Lemma \ref{lemma-kill-cohomology-class-on-covering}
+is that $(\underline{H}^p(\mathcal{F}))^\# = 0$, $p > 0$ for any sheaf of
+$\mathcal{O}_X$-modules $\mathcal{F}$.
+To see this is true, use that ${}^\#$ is exact so
+$$
+(\underline{H}^p(\mathcal{F}))^\# =
+(R^pi_X\mathcal{F})^\# =
+R^p(\# \circ i_X)(\mathcal{F}) = 0
+$$
+because $\# \circ i_X$ is the identity functor.
+\item Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module. The presheaf
+$V \mapsto H^p(f^{-1}V, \mathcal{F})$ is equal to
+$R^p (i_Y \circ f_*)\mathcal{F}$. You can prove this by noticing that
+both give universal delta functors as in the argument of (1) above.
+Hence Lemma \ref{lemma-describe-higher-direct-images}
+says that $R^p f_* \mathcal{F}= (R^p (i_Y \circ f_*)\mathcal{F})^\#$.
+Again using that $\#$ is exact and that $\# \circ i_Y$ is the identity
+functor we see that
+$$
+R^p f_* \mathcal{F} =
+R^p(\# \circ i_Y \circ f_*)\mathcal{F} =
+(R^p (i_Y \circ f_*)\mathcal{F})^\#
+$$
+as desired.
+\end{enumerate}
+\end{remark}
+
+
+
+
+
+
+\section{Mayer-Vietoris}
+\label{section-mayer-vietoris}
+
+\noindent
+Below we will construct the {\v C}ech-to-cohomology spectral sequence, see
+Lemma \ref{lemma-cech-spectral-sequence}.
+A special case of that spectral sequence is the Mayer-Vietoris
+long exact sequence. Since it is such a basic, useful and easy to understand
+variant of the spectral sequence we treat it here separately.
+
+\begin{lemma}
+\label{lemma-injective-restriction-surjective}
+\begin{slogan}
+Local sections in injective sheaves can be extended globally.
+\end{slogan}
+Let $X$ be a ringed space.
+Let $U' \subset U \subset X$ be open subspaces.
+For any injective $\mathcal{O}_X$-module $\mathcal{I}$ the
+restriction mapping
+$\mathcal{I}(U) \to \mathcal{I}(U')$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $j : U \to X$ and $j' : U' \to X$ be the open immersions.
+Recall that $j_!\mathcal{O}_U$ is the extension by zero of
+$\mathcal{O}_U = \mathcal{O}_X|_U$, see
+Sheaves, Section \ref{sheaves-section-open-immersions}.
+Since $j_!$ is a left adjoint to restriction we see that
+for any sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules
+$$
+\Hom_{\mathcal{O}_X}(j_!\mathcal{O}_U, \mathcal{F})
+=
+\Hom_{\mathcal{O}_U}(\mathcal{O}_U, \mathcal{F}|_U)
+=
+\mathcal{F}(U)
+$$
+see Sheaves, Lemma \ref{sheaves-lemma-j-shriek-modules}.
+Similarly, the sheaf $j'_!\mathcal{O}_{U'}$ represents the
+functor $\mathcal{F} \mapsto \mathcal{F}(U')$.
+Moreover there
+is an obvious canonical map of $\mathcal{O}_X$-modules
+$$
+j'_!\mathcal{O}_{U'} \longrightarrow j_!\mathcal{O}_U
+$$
+which corresponds to the restriction mapping
+$\mathcal{F}(U) \to \mathcal{F}(U')$ via Yoneda's lemma
+(Categories, Lemma \ref{categories-lemma-yoneda}). By the description
+of the stalks of the sheaves
+$j'_!\mathcal{O}_{U'}$, $j_!\mathcal{O}_U$
+we see that the displayed map above is injective (see lemma cited above).
+Hence if $\mathcal{I}$ is an injective $\mathcal{O}_X$-module,
+then the map
+$$
+\Hom_{\mathcal{O}_X}(j_!\mathcal{O}_U, \mathcal{I})
+\longrightarrow
+\Hom_{\mathcal{O}_X}(j'_!\mathcal{O}_{U'}, \mathcal{I})
+$$
+is surjective, see
+Homology, Lemma \ref{homology-lemma-characterize-injectives}.
+Putting everything together we obtain the lemma.
+\end{proof}
+
+\begin{lemma}[Mayer-Vietoris]
+\label{lemma-mayer-vietoris}
+Let $X$ be a ringed space. Suppose that $X = U \cup V$ is a
+union of two open subsets. For every $\mathcal{O}_X$-module $\mathcal{F}$
+there exists a long exact cohomology sequence
+$$
+0 \to
+H^0(X, \mathcal{F}) \to
+H^0(U, \mathcal{F}) \oplus H^0(V, \mathcal{F}) \to
+H^0(U \cap V, \mathcal{F}) \to
+H^1(X, \mathcal{F}) \to \ldots
+$$
+This long exact sequence is functorial in $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+The sheaf condition says that the kernel of
+$(1, -1) : \mathcal{F}(U) \oplus \mathcal{F}(V) \to \mathcal{F}(U \cap V)$
+is equal to the image of the map
+$\mathcal{F}(X) \to \mathcal{F}(U) \oplus \mathcal{F}(V)$,
+$s \mapsto (s|_U, s|_V)$,
+for any abelian sheaf $\mathcal{F}$.
+Lemma \ref{lemma-injective-restriction-surjective} above implies that the map
+$(1, -1) : \mathcal{I}(U) \oplus \mathcal{I}(V) \to \mathcal{I}(U \cap V)$
+is surjective whenever $\mathcal{I}$ is an injective $\mathcal{O}_X$-module.
+Hence if $\mathcal{F} \to \mathcal{I}^\bullet$ is an injective resolution
+of $\mathcal{F}$, then we get a short exact sequence of complexes
+$$
+0 \to
+\mathcal{I}^\bullet(X) \to
+\mathcal{I}^\bullet(U) \oplus \mathcal{I}^\bullet(V) \to
+\mathcal{I}^\bullet(U \cap V) \to
+0.
+$$
+Taking cohomology gives the result (use
+Homology, Lemma \ref{homology-lemma-long-exact-sequence-cochain}).
+We omit the proof of the functoriality of the sequence.
+\end{proof}
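+
+\medskip\noindent
+As a quick sanity check of the sequence (not part of the lemma), suppose
+that $U \cap V = \emptyset$. Then the groups $H^p(U \cap V, \mathcal{F})$
+all vanish and the long exact sequence breaks up into isomorphisms
+$$
+H^p(X, \mathcal{F}) \cong H^p(U, \mathcal{F}) \oplus H^p(V, \mathcal{F})
+$$
+as one expects, since $X$ is then the disjoint union of $U$ and $V$.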
+
+\begin{lemma}[Relative Mayer-Vietoris]
+\label{lemma-relative-mayer-vietoris}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Suppose that $X = U \cup V$ is a union of two open subsets.
+Denote $a = f|_U : U \to Y$, $b = f|_V : V \to Y$, and
+$c = f|_{U \cap V} : U \cap V \to Y$.
+For every $\mathcal{O}_X$-module $\mathcal{F}$
+there exists a long exact sequence
+$$
+0 \to
+f_*\mathcal{F} \to
+a_*(\mathcal{F}|_U) \oplus b_*(\mathcal{F}|_V) \to
+c_*(\mathcal{F}|_{U \cap V}) \to
+R^1f_*\mathcal{F} \to \ldots
+$$
+This long exact sequence is functorial in $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F} \to \mathcal{I}^\bullet$ be an injective resolution
+of $\mathcal{F}$. We claim that we
+get a short exact sequence of complexes
+$$
+0 \to
+f_*\mathcal{I}^\bullet \to
+a_*\mathcal{I}^\bullet|_U \oplus b_*\mathcal{I}^\bullet|_V \to
+c_*\mathcal{I}^\bullet|_{U \cap V} \to
+0.
+$$
+Namely, for any open $W \subset Y$, and for any $n \geq 0$ the
+corresponding sequence of groups of sections over $W$
+$$
+0 \to
+\mathcal{I}^n(f^{-1}(W)) \to
+\mathcal{I}^n(U \cap f^{-1}(W))
+\oplus \mathcal{I}^n(V \cap f^{-1}(W)) \to
+\mathcal{I}^n(U \cap V \cap f^{-1}(W)) \to
+0
+$$
+was shown to be short exact in the proof of Lemma \ref{lemma-mayer-vietoris}.
+The lemma follows by taking cohomology sheaves and using the fact that
+$\mathcal{I}^\bullet|_U$ is an injective resolution of $\mathcal{F}|_U$
+and similarly for $\mathcal{I}^\bullet|_V$, $\mathcal{I}^\bullet|_{U \cap V}$,
+see Lemma \ref{lemma-cohomology-of-open}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{The {\v C}ech complex and {\v C}ech cohomology}
+\label{section-cech}
+
+\noindent
+Let $X$ be a topological space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering,
+see Topology, Basic notion (\ref{topology-item-covering}).
+As is customary we denote
+$U_{i_0\ldots i_p} = U_{i_0} \cap \ldots \cap U_{i_p}$ for the
+$(p + 1)$-fold intersection of members of $\mathcal{U}$.
+Let $\mathcal{F}$ be an abelian presheaf on $X$.
+Set
+$$
+\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})
+=
+\prod\nolimits_{(i_0, \ldots, i_p) \in I^{p + 1}}
+\mathcal{F}(U_{i_0\ldots i_p}).
+$$
+This is an abelian group. For
+$s \in \check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})$ we denote by
+$s_{i_0\ldots i_p}$ its value in $\mathcal{F}(U_{i_0\ldots i_p})$.
+Note that if $s \in \check{\mathcal{C}}^1(\mathcal{U}, \mathcal{F})$
+and $i, j \in I$ then $s_{ij}$ and $s_{ji}$ are both elements
+of $\mathcal{F}(U_i \cap U_j)$ but there is no imposed
+relation between $s_{ij}$ and $s_{ji}$. In other words, we are {\it not}
+working with alternating cochains (these will be defined
+in Section \ref{section-alternating-cech}). We define
+$$
+d : \check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}^{p + 1}(\mathcal{U}, \mathcal{F})
+$$
+by the formula
+\begin{equation}
+\label{equation-d-cech}
+d(s)_{i_0\ldots i_{p + 1}}
+=
+\sum\nolimits_{j = 0}^{p + 1}
+(-1)^j
+s_{i_0\ldots \hat i_j \ldots i_{p + 1}}|_{U_{i_0\ldots i_{p + 1}}}
+\end{equation}
+It is straightforward to see that $d \circ d = 0$. In other words
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$ is a complex.
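+
+\medskip\noindent
+For example, in low degrees Equation (\ref{equation-d-cech}) reads
+$$
+d(s)_{i_0 i_1} = s_{i_1}|_{U_{i_0 i_1}} - s_{i_0}|_{U_{i_0 i_1}}
+\quad\text{and}\quad
+d(s)_{i_0 i_1 i_2} =
+s_{i_1 i_2}|_{U_{i_0 i_1 i_2}} -
+s_{i_0 i_2}|_{U_{i_0 i_1 i_2}} +
+s_{i_0 i_1}|_{U_{i_0 i_1 i_2}}.
+$$
+Thus a $1$-cochain $s$ is a cocycle if and only if
+$s_{i_0 i_2} = s_{i_0 i_1} + s_{i_1 i_2}$ on triple intersections
+(suppressing the restriction maps from the notation).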
+
+\begin{definition}
+\label{definition-cech-complex}
+Let $X$ be a topological space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering.
+Let $\mathcal{F}$ be an abelian presheaf on $X$.
+The complex $\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$
+is the {\it {\v C}ech complex} associated to $\mathcal{F}$ and the
+open covering $\mathcal{U}$. Its cohomology groups
+$H^i(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}))$ are
+called the {\it {\v C}ech cohomology groups} associated to
+$\mathcal{F}$ and the covering $\mathcal{U}$.
+They are denoted $\check H^i(\mathcal{U}, \mathcal{F})$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-cech-h0}
+Let $X$ be a topological space.
+Let $\mathcal{F}$ be an abelian presheaf on $X$.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is an abelian sheaf and
+\item for every open covering $\mathcal{U} : U = \bigcup_{i \in I} U_i$
+the natural map
+$$
+\mathcal{F}(U) \to \check{H}^0(\mathcal{U}, \mathcal{F})
+$$
+is bijective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is true since the sheaf condition is exactly that
+$\mathcal{F}(U) \to \check{H}^0(\mathcal{U}, \mathcal{F})$
+is bijective for every open covering.
+\end{proof}
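+
+\medskip\noindent
+To spell the equivalence out, note that
+$$
+\check{H}^0(\mathcal{U}, \mathcal{F}) =
+\Ker\left(
+\prod\nolimits_{i_0} \mathcal{F}(U_{i_0})
+\longrightarrow
+\prod\nolimits_{i_0 i_1} \mathcal{F}(U_{i_0 i_1})
+\right)
+=
+\{ (s_i)_{i \in I} \mid
+s_{i_0}|_{U_{i_0 i_1}} = s_{i_1}|_{U_{i_0 i_1}} \}
+$$
+so the map of the lemma is bijective exactly when every compatible
+family of local sections glues to a unique section of $\mathcal{F}$
+over $U$, which is the sheaf condition.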
+
+\begin{lemma}
+\label{lemma-cech-trivial}
+Let $X$ be a topological space. Let $\mathcal{F}$ be an abelian presheaf on $X$.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering. If
+$U_i = U$ for some $i \in I$, then the extended {\v C}ech complex
+$$
+\mathcal{F}(U) \to \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+obtained by putting $\mathcal{F}(U)$ in degree $-1$ with differential given by
+the canonical map of $\mathcal{F}(U)$ into
+$\check{\mathcal{C}}^0(\mathcal{U}, \mathcal{F})$
+is homotopy equivalent to $0$.
+\end{lemma}
+
+\begin{proof}
+Fix an element $i \in I$ with $U = U_i$. Observe that
+$U_{i_0 \ldots i_p} = U_{i_0 \ldots \hat i_j \ldots i_p}$ if $i_j = i$.
+Let us define a homotopy
+$$
+h :
+\prod\nolimits_{i_0 \ldots i_{p + 1}} \mathcal{F}(U_{i_0 \ldots i_{p + 1}})
+\longrightarrow
+\prod\nolimits_{i_0 \ldots i_p} \mathcal{F}(U_{i_0 \ldots i_p})
+$$
+by the rule
+$$
+h(s)_{i_0 \ldots i_p} = s_{i i_0 \ldots i_p}
+$$
+In other words, $h : \prod_{i_0} \mathcal{F}(U_{i_0}) \to \mathcal{F}(U)$
+is projection onto the factor $\mathcal{F}(U_i) = \mathcal{F}(U)$
+and in general the map $h$ equals the projection onto the factors
+$\mathcal{F}(U_{i i_1 \ldots i_{p + 1}}) =
+\mathcal{F}(U_{i_1 \ldots i_{p + 1}})$.
+We compute
+\begin{align*}
+(dh + hd)(s)_{i_0 \ldots i_p}
+& =
+\sum\nolimits_{j = 0}^p
+(-1)^j
+h(s)_{i_0 \ldots \hat i_j \ldots i_p}
++
+d(s)_{i i_0 \ldots i_p}\\
+& =
+\sum\nolimits_{j = 0}^p
+(-1)^j
+s_{i i_0 \ldots \hat i_j \ldots i_p}
++
+s_{i_0 \ldots i_p}
++
+\sum\nolimits_{j = 0}^p
+(-1)^{j + 1}
+s_{i i_0 \ldots \hat i_j \ldots i_p} \\
+& =
+s_{i_0 \ldots i_p}
+\end{align*}
+This proves the identity map is homotopic to zero as desired.
+\end{proof}
+
+
+
+
+
+
+\section{{\v C}ech cohomology as a functor on presheaves}
+\label{section-cech-functor}
+
+\noindent
+Warning: In this section we work almost exclusively with presheaves and
+categories of presheaves and the results are completely wrong in the
+setting of sheaves and categories of sheaves!
+
+\medskip\noindent
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering.
+Let $\mathcal{F}$ be a presheaf of $\mathcal{O}_X$-modules.
+We have the {\v C}ech complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$
+of $\mathcal{F}$ just by thinking of $\mathcal{F}$
+as a presheaf of abelian groups. However, each term
+$\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})$ has a natural
+structure of an $\mathcal{O}_X(U)$-module and the differential is given by
+$\mathcal{O}_X(U)$-module maps. Moreover, it is clear that the
+construction
+$$
+\mathcal{F} \longmapsto \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+is functorial in $\mathcal{F}$. In fact, it is a functor
+\begin{equation}
+\label{equation-cech-functor}
+\check{\mathcal{C}}^\bullet(\mathcal{U}, -) :
+\textit{PMod}(\mathcal{O}_X)
+\longrightarrow
+\text{Comp}^{+}(\text{Mod}_{\mathcal{O}_X(U)})
+\end{equation}
+see
+Derived Categories, Definition \ref{derived-definition-complexes-notation}
+for notation. Recall that the category of bounded below complexes
+in an abelian category is an abelian category, see
+Homology, Lemma \ref{homology-lemma-cat-cochain-abelian}.
+
+\begin{lemma}
+\label{lemma-cech-exact-presheaves}
+The functor given by Equation (\ref{equation-cech-functor})
+is an exact functor (see Homology, Lemma \ref{homology-lemma-exact-functor}).
+\end{lemma}
+
+\begin{proof}
+For any open $W \subset U$ the functor
+$\mathcal{F} \mapsto \mathcal{F}(W)$ is an additive exact functor
+from $\textit{PMod}(\mathcal{O}_X)$ to $\text{Mod}_{\mathcal{O}_X(U)}$.
+The terms
+$\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})$
+of the complex are products of these exact functors and hence exact.
+Moreover a sequence of complexes is exact if and only if the sequence
+of terms in a given degree is exact. Hence the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-cohomology-delta-functor-presheaves}
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering.
+The functors $\mathcal{F} \mapsto \check{H}^n(\mathcal{U}, \mathcal{F})$
+form a $\delta$-functor from the abelian category of
+presheaves of $\mathcal{O}_X$-modules to the category
+of $\mathcal{O}_X(U)$-modules (see
+Homology, Definition \ref{homology-definition-cohomological-delta-functor}).
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-cech-exact-presheaves}
+a short exact sequence of presheaves of
+$\mathcal{O}_X$-modules
+$0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
+is turned into a short exact sequence of complexes of
+$\mathcal{O}_X(U)$-modules. Hence we can use
+Homology, Lemma \ref{homology-lemma-long-exact-sequence-cochain}
+to get the boundary maps
+$\delta_{\mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3} :
+\check{H}^n(\mathcal{U}, \mathcal{F}_3) \to
+\check{H}^{n + 1}(\mathcal{U}, \mathcal{F}_1)$
+and a corresponding long exact sequence. We omit the verification
+that these maps are compatible with maps between short exact
+sequences of presheaves.
+\end{proof}
+
+
+\noindent
+In the formulation of the following lemma we use the functor $j_{p!}$ of
+extension by $0$ for presheaves of modules
+relative to an open immersion $j : U \to X$.
+See Sheaves, Section \ref{sheaves-section-open-immersions}. For any open
+$W \subset X$ and any presheaf $\mathcal{G}$ of $\mathcal{O}_X|_U$-modules
+we have
+$$
+(j_{p!}\mathcal{G})(W) =
+\left\{
+\begin{matrix}
+\mathcal{G}(W) & \text{if } W \subset U \\
+0 & \text{else.}
+\end{matrix}
+\right.
+$$
+Moreover, the functor $j_{p!}$ is a left adjoint to the restriction functor
+see Sheaves, Lemma \ref{sheaves-lemma-j-shriek-modules}.
+In particular we have the following formula
+$$
+\Hom_{\mathcal{O}_X}(j_{p!}\mathcal{O}_U, \mathcal{F})
+=
+\Hom_{\mathcal{O}_U}(\mathcal{O}_U, \mathcal{F}|_U)
+=
+\mathcal{F}(U).
+$$
+Since the functor $\mathcal{F} \mapsto \mathcal{F}(U)$ is an exact functor
+on the category of presheaves we conclude that the presheaf
+$j_{p!}\mathcal{O}_U$ is a projective object in the category
+$\textit{PMod}(\mathcal{O}_X)$, see
+Homology, Lemma \ref{homology-lemma-characterize-projectives}.
+
+\medskip\noindent
+Note that if we are given open subsets $U \subset V \subset X$
+with associated open immersions $j_U, j_V$, then we have a canonical
+map $(j_U)_{p!}\mathcal{O}_U \to (j_V)_{p!}\mathcal{O}_V$. It is the
+identity on sections over any open $W \subset U$ and zero otherwise.
+In terms of the identification
+$\Hom_{\mathcal{O}_X}((j_U)_{p!}\mathcal{O}_U, (j_V)_{p!}\mathcal{O}_V) =
+(j_V)_{p!}\mathcal{O}_V(U) = \mathcal{O}_V(U)$ it corresponds to
+the element $1 \in \mathcal{O}_V(U)$.
+
+\begin{lemma}
+\label{lemma-cech-map-into}
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
+Denote $j_{i_0\ldots i_p} : U_{i_0 \ldots i_p} \to X$ the open immersion.
+Consider the chain complex $K(\mathcal{U})_\bullet$
+of presheaves of $\mathcal{O}_X$-modules
+$$
+\ldots
+\to
+\bigoplus_{i_0i_1i_2} (j_{i_0i_1i_2})_{p!}\mathcal{O}_{U_{i_0i_1i_2}}
+\to
+\bigoplus_{i_0i_1} (j_{i_0i_1})_{p!}\mathcal{O}_{U_{i_0i_1}}
+\to
+\bigoplus_{i_0} (j_{i_0})_{p!}\mathcal{O}_{U_{i_0}}
+\to 0 \to \ldots
+$$
+where the last nonzero term is placed in degree $0$
+and where the map
+$$
+(j_{i_0\ldots i_{p + 1}})_{p!}\mathcal{O}_{U_{i_0\ldots i_{p + 1}}}
+\longrightarrow
+(j_{i_0\ldots \hat i_j \ldots i_{p + 1}})_{p!}
+\mathcal{O}_{U_{i_0\ldots \hat i_j \ldots i_{p + 1}}}
+$$
+is given by $(-1)^j$ times the canonical map.
+Then there is an isomorphism
+$$
+\Hom_{\mathcal{O}_X}(K(\mathcal{U})_\bullet, \mathcal{F})
+=
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+functorial in $\mathcal{F} \in \Ob(\textit{PMod}(\mathcal{O}_X))$.
+\end{lemma}
+
+\begin{proof}
+We saw in the discussion just above the lemma that
+$$
+\Hom_{\mathcal{O}_X}(
+(j_{i_0\ldots i_p})_{p!}\mathcal{O}_{U_{i_0\ldots i_p}},
+\mathcal{F})
+=
+\mathcal{F}(U_{i_0\ldots i_p}).
+$$
+Hence we see that it is indeed the case that the direct sum
+$$
+\bigoplus\nolimits_{i_0 \ldots i_p}
+(j_{i_0 \ldots i_p})_{p!}\mathcal{O}_{U_{i_0 \ldots i_p}}
+$$
+represents the functor
+$$
+\mathcal{F}
+\longmapsto
+\prod\nolimits_{i_0\ldots i_p} \mathcal{F}(U_{i_0\ldots i_p}).
+$$
+Hence by Categories, Yoneda Lemma \ref{categories-lemma-yoneda}
+we see that there is a complex $K(\mathcal{U})_\bullet$ with terms
+as given. It is a simple matter to see that the maps are as given
+in the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homology-complex}
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
+Let $\mathcal{O}_\mathcal{U} \subset \mathcal{O}_X$
+be the image presheaf of the map
+$\bigoplus (j_i)_{p!}\mathcal{O}_{U_i} \to \mathcal{O}_X$
+where $j_i : U_i \to X$ is the open immersion.
+The chain complex $K(\mathcal{U})_\bullet$ of presheaves
+of Lemma \ref{lemma-cech-map-into} above has homology presheaves
+$$
+H_i(K(\mathcal{U})_\bullet) =
+\left\{
+\begin{matrix}
+0 & \text{if} & i \not = 0 \\
+\mathcal{O}_\mathcal{U} & \text{if} & i = 0
+\end{matrix}
+\right.
+$$
+\end{lemma}
+
+\begin{proof}
+Consider the extended complex $K^{ext}_\bullet$ one gets by putting
+$\mathcal{O}_\mathcal{U}$ in degree $-1$ with the obvious map
+$K(\mathcal{U})_0 =
+\bigoplus_{i_0} (j_{i_0})_{p!}\mathcal{O}_{U_{i_0}} \to
+\mathcal{O}_\mathcal{U}$.
+It suffices to show that taking sections of this extended complex over
+any open $W \subset X$ leads to an acyclic complex.
+In fact, we claim that for every $W \subset X$ the complex
+$K^{ext}_\bullet(W)$ is homotopy equivalent to the zero complex.
+Write $I = I_1 \amalg I_2$ where $W \subset U_i$ if and only
+if $i \in I_1$.
+
+\medskip\noindent
+If $I_1 = \emptyset$, then the complex $K^{ext}_\bullet(W) = 0$ so there is
+nothing to prove.
+
+\medskip\noindent
+If $I_1 \not = \emptyset$, then
+$\mathcal{O}_\mathcal{U}(W) = \mathcal{O}_X(W)$
+and
+$$
+K^{ext}_p(W) =
+\bigoplus\nolimits_{i_0 \ldots i_p \in I_1} \mathcal{O}_X(W).
+$$
+This is true because of the simple description of the presheaves
+$(j_{i_0 \ldots i_p})_{p!}\mathcal{O}_{U_{i_0 \ldots i_p}}$.
+Moreover, the differential of the complex $K^{ext}_\bullet(W)$
+is given by
+$$
+d(s)_{i_0 \ldots i_p} =
+\sum\nolimits_{j = 0, \ldots, p + 1} \sum\nolimits_{i \in I_1}
+(-1)^j s_{i_0 \ldots i_{j - 1} i i_j \ldots i_p}.
+$$
+The sum is finite as the element $s$ has finite support.
+Fix an element $i_{\text{fix}} \in I_1$. Define a map
+$$
+h : K^{ext}_p(W) \longrightarrow K^{ext}_{p + 1}(W)
+$$
+by the rule
+$$
+h(s)_{i_0 \ldots i_{p + 1}} =
+\left\{
+\begin{matrix}
+0 & \text{if} & i_0 \not = i_{\text{fix}} \\
+s_{i_1 \ldots i_{p + 1}} & \text{if} & i_0 = i_{\text{fix}}
+\end{matrix}
+\right.
+$$
+We will use the shorthand
+$h(s)_{i_0 \ldots i_{p + 1}} = (i_0 = i_{\text{fix}}) s_{i_1 \ldots i_{p + 1}}$
+for this. Then we compute
+\begin{eqnarray*}
+& & (dh + hd)(s)_{i_0 \ldots i_p} \\
+& = &
+\sum_j \sum_{i \in I_1} (-1)^j h(s)_{i_0 \ldots i_{j - 1} i i_j \ldots i_p}
++
+(i_0 = i_{\text{fix}}) d(s)_{i_1 \ldots i_p} \\
+& = &
+s_{i_0 \ldots i_p} +
+\sum_{j \geq 1}\sum_{i \in I_1}
+(-1)^j (i_0 = i_{\text{fix}}) s_{i_1 \ldots i_{j - 1} i i_j \ldots i_p}
++
+(i_0 = i_{\text{fix}}) d(s)_{i_1 \ldots i_p}
+\end{eqnarray*}
+which is equal to $s_{i_0 \ldots i_p}$ as desired, since the second and
+third terms cancel.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-cohomology-derived-presheaves}
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$
+be an open covering of $U \subset X$.
+The {\v C}ech cohomology functors $\check{H}^p(\mathcal{U}, -)$
+are canonically isomorphic as a $\delta$-functor to
+the right derived functors of the functor
+$$
+\check{H}^0(\mathcal{U}, -) :
+\textit{PMod}(\mathcal{O}_X)
+\longrightarrow
+\text{Mod}_{\mathcal{O}_X(U)}.
+$$
+Moreover, there is a functorial quasi-isomorphism
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+R\check{H}^0(\mathcal{U}, \mathcal{F})
+$$
+where the right hand side indicates the right derived functor
+$$
+R\check{H}^0(\mathcal{U}, -) :
+D^{+}(\textit{PMod}(\mathcal{O}_X))
+\longrightarrow
+D^{+}(\mathcal{O}_X(U))
+$$
+of the left exact functor $\check{H}^0(\mathcal{U}, -)$.
+\end{lemma}
+
+\begin{proof}
+Note that the category of presheaves of $\mathcal{O}_X$-modules
+has enough injectives, see
+Injectives, Proposition \ref{injectives-proposition-presheaves-modules}.
+Note that $\check{H}^0(\mathcal{U}, -)$ is a left exact functor
+from the category of presheaves of $\mathcal{O}_X$-modules
+to the category of $\mathcal{O}_X(U)$-modules.
+Hence the derived functor and the right derived functor exist, see
+Derived Categories, Section \ref{derived-section-right-derived-functor}.
+
+\medskip\noindent
+Let $\mathcal{I}$ be an injective presheaf of $\mathcal{O}_X$-modules.
+In this case the functor $\Hom_{\mathcal{O}_X}(-, \mathcal{I})$
+is exact on $\textit{PMod}(\mathcal{O}_X)$. By
+Lemma \ref{lemma-cech-map-into} we have
+$$
+\Hom_{\mathcal{O}_X}(K(\mathcal{U})_\bullet, \mathcal{I})
+=
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}).
+$$
+By Lemma \ref{lemma-homology-complex} we have that $K(\mathcal{U})_\bullet$ is
+quasi-isomorphic to $\mathcal{O}_\mathcal{U}[0]$. Hence by
+the exactness of Hom into $\mathcal{I}$ mentioned above we see
+that $\check{H}^i(\mathcal{U}, \mathcal{I}) = 0$ for all
+$i > 0$. Thus the $\delta$-functor $(\check{H}^n, \delta)$
+(see Lemma \ref{lemma-cech-cohomology-delta-functor-presheaves})
+satisfies the assumptions of
+Homology, Lemma \ref{homology-lemma-efface-implies-universal},
+and hence is a universal $\delta$-functor.
+
+\medskip\noindent
+By
+Derived Categories, Lemma \ref{derived-lemma-higher-derived-functors}
+also the sequence $R^i\check{H}^0(\mathcal{U}, -)$
+forms a universal $\delta$-functor. By the uniqueness of universal
+$\delta$-functors, see
+Homology, Lemma \ref{homology-lemma-uniqueness-universal-delta-functor}
+we conclude that
+$R^i\check{H}^0(\mathcal{U}, -) = \check{H}^i(\mathcal{U}, -)$.
+This is enough for most applications
+and the reader is advised to skip the rest of the proof.
+
+\medskip\noindent
+Let $\mathcal{F}$ be any presheaf of $\mathcal{O}_X$-modules.
+Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$
+in the category $\textit{PMod}(\mathcal{O}_X)$.
+Consider the double complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)$ with terms
+$\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{I}^q)$.
+Consider the associated total complex
+$\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))$,
+see Homology, Definition \ref{homology-definition-associated-simple-complex}.
+There is a map of complexes
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))
+$$
+coming from the maps
+$\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})
+\to \check{\mathcal{C}}^p(\mathcal{U}, \mathcal{I}^0)$
+and there is a map of complexes
+$$
+\check{H}^0(\mathcal{U}, \mathcal{I}^\bullet)
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))
+$$
+coming from the maps
+$\check{H}^0(\mathcal{U}, \mathcal{I}^q) \to
+\check{\mathcal{C}}^0(\mathcal{U}, \mathcal{I}^q)$.
+Both of these maps are quasi-isomorphisms by an application of
+Homology, Lemma \ref{homology-lemma-double-complex-gives-resolution}.
+Namely, the columns of the double complex are exact in positive degrees
+because the {\v C}ech complex as a functor is exact
+(Lemma \ref{lemma-cech-exact-presheaves})
+and the rows of the double complex are exact in positive degrees
+since as we just saw the higher {\v C}ech cohomology groups of the injective
+presheaves $\mathcal{I}^q$ are zero.
+Since quasi-isomorphisms become invertible
+in $D^{+}(\mathcal{O}_X(U))$ this gives the last displayed morphism
+of the lemma. We omit the verification that this morphism is
+functorial.
+\end{proof}
+
+
+
+
+
+\section{{\v C}ech cohomology and cohomology}
+\label{section-cech-cohomology-cohomology}
+
+\begin{lemma}
+\label{lemma-injective-trivial-cech}
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
+Let $\mathcal{I}$ be an injective $\mathcal{O}_X$-module.
+Then
+$$
+\check{H}^p(\mathcal{U}, \mathcal{I}) =
+\left\{
+\begin{matrix}
+\mathcal{I}(U) & \text{if} & p = 0 \\
+0 & \text{if} & p > 0
+\end{matrix}
+\right.
+$$
+\end{lemma}
+
+\begin{proof}
+An injective $\mathcal{O}_X$-module is also injective as an object in
+the category $\textit{PMod}(\mathcal{O}_X)$ (for example since
+sheafification is an exact left adjoint to the inclusion functor,
+using Homology, Lemma \ref{homology-lemma-adjoint-preserve-injectives}).
+Hence we can apply Lemma \ref{lemma-cech-cohomology-derived-presheaves}
+(or its proof) to see the result.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-cohomology}
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
+There is a transformation
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}, -)
+\longrightarrow
+R\Gamma(U, -)
+$$
+of functors
+$\textit{Mod}(\mathcal{O}_X) \to D^{+}(\mathcal{O}_X(U))$.
+In particular this provides canonical maps
+$\check{H}^p(\mathcal{U}, \mathcal{F}) \to H^p(U, \mathcal{F})$ for
+$\mathcal{F}$ ranging over $\textit{Mod}(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module. Choose an injective resolution
+$\mathcal{F} \to \mathcal{I}^\bullet$. Consider the double complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)$ with terms
+$\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{I}^q)$.
+There is a map of complexes
+$$
+\alpha :
+\Gamma(U, \mathcal{I}^\bullet)
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))
+$$
+coming from the maps
+$\mathcal{I}^q(U) \to \check{H}^0(\mathcal{U}, \mathcal{I}^q)$
+and a map of complexes
+$$
+\beta :
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))
+$$
+coming from the map $\mathcal{F} \to \mathcal{I}^0$.
+We can apply
+Homology, Lemma \ref{homology-lemma-double-complex-gives-resolution}
+to see that $\alpha$ is a quasi-isomorphism.
+Namely, Lemma \ref{lemma-injective-trivial-cech} implies that
+the $q$th row of the double complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)$ is a
+resolution of $\Gamma(U, \mathcal{I}^q)$.
+Hence $\alpha$ becomes invertible in $D^{+}(\mathcal{O}_X(U))$ and
+the transformation of the lemma is the composition of $\beta$
+followed by the inverse of $\alpha$. We omit the verification
+that this is functorial.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-h1}
+Let $X$ be a topological space. Let $\mathcal{H}$ be an abelian sheaf
+on $X$. Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be an open covering.
+The map
+$$
+\check{H}^1(\mathcal{U}, \mathcal{H}) \longrightarrow H^1(X, \mathcal{H})
+$$
+is injective and identifies $\check{H}^1(\mathcal{U}, \mathcal{H})$ via
+the bijection of Lemma \ref{lemma-torsors-h1}
+with the set of isomorphism classes of $\mathcal{H}$-torsors
+which restrict to trivial torsors over each $U_i$.
+\end{lemma}
+
+\begin{proof}
+To see this we construct an inverse map. Namely, let $\mathcal{F}$ be a
+$\mathcal{H}$-torsor whose restriction to $U_i$ is trivial. By
+Lemma \ref{lemma-trivial-torsor} this means there
+exists a section $s_i \in \mathcal{F}(U_i)$. On $U_{i_0} \cap U_{i_1}$
+there is a unique section $s_{i_0i_1}$ of $\mathcal{H}$ such that
+$s_{i_0i_1} \cdot s_{i_0}|_{U_{i_0} \cap U_{i_1}} =
+s_{i_1}|_{U_{i_0} \cap U_{i_1}}$. A computation shows
+that $s_{i_0i_1}$ is a {\v C}ech cocycle and that its class is well
+defined (i.e., does not depend on the choice of the sections $s_i$).
+The inverse maps the isomorphism class of $\mathcal{F}$ to the cohomology
+class of the cocycle $(s_{i_0i_1})$.
+We omit the verification that this map is indeed an inverse.
+\end{proof}
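+
+\medskip\noindent
+Here is the cocycle computation alluded to in the proof above.
+On $U_{i_0} \cap U_{i_1} \cap U_{i_2}$ we have (suppressing restrictions)
+$$
+s_{i_0 i_2} \cdot s_{i_0} = s_{i_2} =
+s_{i_1 i_2} \cdot s_{i_1} =
+s_{i_1 i_2} \cdot (s_{i_0 i_1} \cdot s_{i_0})
+$$
+and since the action of $\mathcal{H}$ on the torsor $\mathcal{F}$ is free
+this forces $s_{i_0 i_2} = s_{i_0 i_1} + s_{i_1 i_2}$, i.e., $d(s) = 0$.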
+
+\begin{lemma}
+\label{lemma-include}
+Let $X$ be a ringed space.
+Consider the functor
+$i : \textit{Mod}(\mathcal{O}_X) \to \textit{PMod}(\mathcal{O}_X)$.
+It is a left exact functor with right derived functors given by
+$$
+R^pi(\mathcal{F}) = \underline{H}^p(\mathcal{F}) :
+U \longmapsto H^p(U, \mathcal{F})
+$$
+see discussion in Section \ref{section-locality}.
+\end{lemma}
+
+\begin{proof}
+It is clear that $i$ is left exact.
+Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$.
+By definition $R^pi$ is the $p$th cohomology {\it presheaf}
+of the complex $\mathcal{I}^\bullet$. In other words, the
+sections of $R^pi(\mathcal{F})$ over an open $U$ are given by
+$$
+\frac{\Ker(\mathcal{I}^p(U) \to \mathcal{I}^{p + 1}(U))}
+{\Im(\mathcal{I}^{p - 1}(U) \to \mathcal{I}^p(U))}
+$$
+which is the definition of $H^p(U, \mathcal{F})$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-spectral-sequence}
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
+For any sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$ there
+is a spectral sequence $(E_r, d_r)_{r \geq 0}$ with
+$$
+E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))
+$$
+converging to $H^{p + q}(U, \mathcal{F})$.
+This spectral sequence is functorial in $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+This is a Grothendieck spectral sequence
+(see
+Derived Categories, Lemma \ref{derived-lemma-grothendieck-spectral-sequence})
+for the functors
+$$
+i : \textit{Mod}(\mathcal{O}_X) \to \textit{PMod}(\mathcal{O}_X)
+\quad\text{and}\quad
+\check{H}^0(\mathcal{U}, - ) : \textit{PMod}(\mathcal{O}_X)
+\to \text{Mod}_{\mathcal{O}_X(U)}.
+$$
+Namely, we have $\check{H}^0(\mathcal{U}, i(\mathcal{F})) = \mathcal{F}(U)$
+by Lemma \ref{lemma-cech-h0}. We have that $i(\mathcal{I})$ is
+{\v C}ech acyclic by Lemma \ref{lemma-injective-trivial-cech}. And we
+have that $\check{H}^p(\mathcal{U}, -) = R^p\check{H}^0(\mathcal{U}, -)$
+as functors on $\textit{PMod}(\mathcal{O}_X)$
+by Lemma \ref{lemma-cech-cohomology-derived-presheaves}.
+Putting everything together gives the lemma.
+\end{proof}
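+
+\medskip\noindent
+The exact sequence of low degree terms of this spectral sequence is
+$$
+0 \to \check{H}^1(\mathcal{U}, \mathcal{F}) \to
+H^1(U, \mathcal{F}) \to
+\check{H}^0(\mathcal{U}, \underline{H}^1(\mathcal{F})) \to
+\check{H}^2(\mathcal{U}, \mathcal{F}) \to
+H^2(U, \mathcal{F}).
+$$
+In particular the map
+$\check{H}^1(\mathcal{U}, \mathcal{F}) \to H^1(U, \mathcal{F})$
+is always injective; compare with Lemma \ref{lemma-cech-h1}.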
+
+\begin{lemma}
+\label{lemma-cech-spectral-sequence-application}
+Let $X$ be a ringed space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Assume that $H^i(U_{i_0 \ldots i_p}, \mathcal{F}) = 0$
+for all $i > 0$, all $p \geq 0$ and all $i_0, \ldots, i_p \in I$.
+Then $\check{H}^p(\mathcal{U}, \mathcal{F}) = H^p(U, \mathcal{F})$
+as $\mathcal{O}_X(U)$-modules.
+\end{lemma}
+
+\begin{proof}
+We will use the spectral sequence of
+Lemma \ref{lemma-cech-spectral-sequence}.
+The assumptions mean that $E_2^{p, q} = 0$ for all $(p, q)$ with
+$q \not = 0$. Hence the spectral sequence degenerates at $E_2$
+and the result follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ses-cech-h1}
+Let $X$ be a ringed space.
+Let
+$$
+0 \to \mathcal{F} \to \mathcal{G} \to \mathcal{H} \to 0
+$$
+be a short exact sequence of $\mathcal{O}_X$-modules.
+Let $U \subset X$ be an open subset.
+If there exists a cofinal system of open coverings $\mathcal{U}$
+of $U$ such that $\check{H}^1(\mathcal{U}, \mathcal{F}) = 0$,
+then the map $\mathcal{G}(U) \to \mathcal{H}(U)$ is
+surjective.
+\end{lemma}
+
+\begin{proof}
+Take an element $s \in \mathcal{H}(U)$. Choose an open covering
+$\mathcal{U} : U = \bigcup_{i \in I} U_i$ such that
+(a) $\check{H}^1(\mathcal{U}, \mathcal{F}) = 0$ and (b)
+$s|_{U_i}$ is the image of a section $s_i \in \mathcal{G}(U_i)$.
+Since we can certainly find a covering such that (b) holds
+it follows from the assumptions of the lemma that we can find
+a covering such that (a) and (b) both hold.
+Consider the sections
+$$
+s_{i_0i_1} = s_{i_1}|_{U_{i_0i_1}} - s_{i_0}|_{U_{i_0i_1}}.
+$$
+Since $s_i$ lifts $s$ we see that $s_{i_0i_1} \in \mathcal{F}(U_{i_0i_1})$.
+By the vanishing of $\check{H}^1(\mathcal{U}, \mathcal{F})$ we can
+find sections $t_i \in \mathcal{F}(U_i)$ such that
+$$
+s_{i_0i_1} = t_{i_1}|_{U_{i_0i_1}} - t_{i_0}|_{U_{i_0i_1}}.
+$$
Then the sections $s_i - t_i$ agree on the overlaps $U_{i_0i_1}$,
hence glue to a section of $\mathcal{G}$ over $U$ which maps to $s$.
+Hence we win.
+\end{proof}
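
\noindent
In the proof above the cochain $(s_{i_0i_1})$ is a {\v C}ech
$1$-cocycle: on $U_{i_0i_1i_2}$ we have
$$
s_{i_1i_2} - s_{i_0i_2} + s_{i_0i_1}
= (s_{i_2} - s_{i_1}) - (s_{i_2} - s_{i_0}) + (s_{i_1} - s_{i_0}) = 0.
$$
This is what allows us to apply the vanishing of
$\check{H}^1(\mathcal{U}, \mathcal{F})$ to write it as a coboundary.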
+
+\begin{lemma}
+\label{lemma-cech-vanish}
+\begin{slogan}
+If higher {\v C}ech cohomology of an abelian sheaf vanishes for all open covers,
+then higher cohomology vanishes.
+\end{slogan}
+Let $X$ be a ringed space.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module such that
+$$
+\check{H}^p(\mathcal{U}, \mathcal{F}) = 0
+$$
+for all $p > 0$ and any open covering $\mathcal{U} : U = \bigcup_{i \in I} U_i$
+of an open of $X$. Then $H^p(U, \mathcal{F}) = 0$ for all $p > 0$
+and any open $U \subset X$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be a sheaf satisfying the assumption of the lemma.
+We will indicate this by saying ``$\mathcal{F}$ has vanishing higher
+{\v C}ech cohomology for any open covering''.
+Choose an embedding $\mathcal{F} \to \mathcal{I}$ into an
+injective $\mathcal{O}_X$-module.
+By Lemma \ref{lemma-injective-trivial-cech} $\mathcal{I}$ has vanishing higher
+{\v C}ech cohomology for any open covering.
+Let $\mathcal{Q} = \mathcal{I}/\mathcal{F}$
+so that we have a short exact sequence
+$$
+0 \to \mathcal{F} \to \mathcal{I} \to \mathcal{Q} \to 0.
+$$
+By Lemma \ref{lemma-ses-cech-h1} and our assumptions
+this sequence is actually exact as a sequence of presheaves!
+In particular we have a long exact sequence of {\v C}ech cohomology
+groups for any open covering $\mathcal{U}$, see
+Lemma \ref{lemma-cech-cohomology-delta-functor-presheaves}
+for example. This implies that $\mathcal{Q}$ is also an $\mathcal{O}_X$-module
+with vanishing higher {\v C}ech cohomology for all open coverings.
+
+\medskip\noindent
+Next, we look at the long exact cohomology sequence
+$$
+\xymatrix{
+0 \ar[r] &
+H^0(U, \mathcal{F}) \ar[r] &
+H^0(U, \mathcal{I}) \ar[r] &
+H^0(U, \mathcal{Q}) \ar[lld] \\
+&
+H^1(U, \mathcal{F}) \ar[r] &
+H^1(U, \mathcal{I}) \ar[r] &
+H^1(U, \mathcal{Q}) \ar[lld] \\
+&
+\ldots & \ldots & \ldots \\
+}
+$$
+for any open $U \subset X$. Since $\mathcal{I}$ is injective we
+have $H^n(U, \mathcal{I}) = 0$ for $n > 0$ (see
+Derived Categories, Lemma \ref{derived-lemma-higher-derived-functors}).
+By the above we see that $H^0(U, \mathcal{I}) \to H^0(U, \mathcal{Q})$
+is surjective and hence $H^1(U, \mathcal{F}) = 0$.
+Since $\mathcal{F}$ was an arbitrary $\mathcal{O}_X$-module with
+vanishing higher {\v C}ech cohomology we conclude that also
+$H^1(U, \mathcal{Q}) = 0$ since $\mathcal{Q}$ is another of these
+sheaves (see above). By the long exact sequence this in turn implies
+that $H^2(U, \mathcal{F}) = 0$. And so on and so forth.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-vanish-basis}
+(Variant of Lemma \ref{lemma-cech-vanish}.)
+Let $X$ be a ringed space.
+Let $\mathcal{B}$ be a basis for the topology on $X$.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Assume there exists a set of open coverings $\text{Cov}$
+with the following properties:
+\begin{enumerate}
+\item For every $\mathcal{U} \in \text{Cov}$
+with $\mathcal{U} : U = \bigcup_{i \in I} U_i$ we have
+$U, U_i \in \mathcal{B}$ and every $U_{i_0 \ldots i_p} \in \mathcal{B}$.
\item For every $U \in \mathcal{B}$ the open coverings of $U$
occurring in $\text{Cov}$ form a cofinal system of open coverings
of $U$.
+\item For every $\mathcal{U} \in \text{Cov}$ we have
+$\check{H}^p(\mathcal{U}, \mathcal{F}) = 0$ for all $p > 0$.
+\end{enumerate}
+Then $H^p(U, \mathcal{F}) = 0$ for all $p > 0$ and any $U \in \mathcal{B}$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ and $\text{Cov}$ be as in the lemma.
+We will indicate this by saying ``$\mathcal{F}$ has vanishing higher
+{\v C}ech cohomology for any $\mathcal{U} \in \text{Cov}$''.
+Choose an embedding $\mathcal{F} \to \mathcal{I}$ into an
+injective $\mathcal{O}_X$-module.
+By Lemma \ref{lemma-injective-trivial-cech} $\mathcal{I}$
+has vanishing higher {\v C}ech cohomology for any $\mathcal{U} \in \text{Cov}$.
+Let $\mathcal{Q} = \mathcal{I}/\mathcal{F}$
+so that we have a short exact sequence
+$$
+0 \to \mathcal{F} \to \mathcal{I} \to \mathcal{Q} \to 0.
+$$
+By Lemma \ref{lemma-ses-cech-h1} and our assumption (2)
+this sequence gives rise to an exact sequence
+$$
0 \to \mathcal{F}(U) \to \mathcal{I}(U) \to \mathcal{Q}(U) \to 0
+$$
+for every $U \in \mathcal{B}$. Hence for any $\mathcal{U} \in \text{Cov}$
+we get a short exact sequence of {\v C}ech complexes
+$$
+0 \to
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \to
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}) \to
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{Q}) \to 0
+$$
+since each term in the {\v C}ech complex is made up out of a product of
+values over elements of $\mathcal{B}$ by assumption (1).
+In particular we have a long exact sequence of {\v C}ech cohomology
+groups for any open covering $\mathcal{U} \in \text{Cov}$.
+This implies that $\mathcal{Q}$ is also an $\mathcal{O}_X$-module
+with vanishing higher {\v C}ech cohomology for all
+$\mathcal{U} \in \text{Cov}$.
+
+\medskip\noindent
+Next, we look at the long exact cohomology sequence
+$$
+\xymatrix{
+0 \ar[r] &
+H^0(U, \mathcal{F}) \ar[r] &
+H^0(U, \mathcal{I}) \ar[r] &
+H^0(U, \mathcal{Q}) \ar[lld] \\
+&
+H^1(U, \mathcal{F}) \ar[r] &
+H^1(U, \mathcal{I}) \ar[r] &
+H^1(U, \mathcal{Q}) \ar[lld] \\
+&
+\ldots & \ldots & \ldots \\
+}
+$$
+for any $U \in \mathcal{B}$. Since $\mathcal{I}$ is injective we
+have $H^n(U, \mathcal{I}) = 0$ for $n > 0$ (see
+Derived Categories, Lemma \ref{derived-lemma-higher-derived-functors}).
+By the above we see that $H^0(U, \mathcal{I}) \to H^0(U, \mathcal{Q})$
+is surjective and hence $H^1(U, \mathcal{F}) = 0$.
+Since $\mathcal{F}$ was an arbitrary $\mathcal{O}_X$-module with
+vanishing higher {\v C}ech cohomology for all $\mathcal{U} \in \text{Cov}$
+we conclude that also $H^1(U, \mathcal{Q}) = 0$ since $\mathcal{Q}$ is
+another of these sheaves (see above). By the long exact sequence this in
+turn implies that $H^2(U, \mathcal{F}) = 0$. And so on and so forth.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-injective}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{I}$ be an injective $\mathcal{O}_X$-module.
+Then
+\begin{enumerate}
+\item $\check{H}^p(\mathcal{V}, f_*\mathcal{I}) = 0$
for all $p > 0$ and any open covering
$\mathcal{V} : V = \bigcup_{j \in J} V_j$ of an open $V \subset Y$.
+\item $H^p(V, f_*\mathcal{I}) = 0$ for all $p > 0$ and
+every open $V \subset Y$.
+\end{enumerate}
+In other words, $f_*\mathcal{I}$ is right acyclic for $\Gamma(V, -)$
+(see
+Derived Categories, Definition \ref{derived-definition-derived-functor})
+for any $V \subset Y$ open.
+\end{lemma}
+
+\begin{proof}
Set $\mathcal{U} : f^{-1}(V) = \bigcup_{j \in J} f^{-1}(V_j)$.
It is an open covering of the open subset $f^{-1}(V)$ of $X$ and
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{V}, f_*\mathcal{I}) =
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}).
+$$
+This is true because
+$$
+f_*\mathcal{I}(V_{j_0 \ldots j_p})
+= \mathcal{I}(f^{-1}(V_{j_0 \ldots j_p})) =
+\mathcal{I}(f^{-1}(V_{j_0}) \cap \ldots \cap f^{-1}(V_{j_p}))
+= \mathcal{I}(U_{j_0 \ldots j_p}).
+$$
+Thus the first statement of the lemma follows from
+Lemma \ref{lemma-injective-trivial-cech}. The second statement
+follows from the first and Lemma \ref{lemma-cech-vanish}.
+\end{proof}
+
+\noindent
+The following lemma implies in particular that
+$f_* : \textit{Ab}(X) \to \textit{Ab}(Y)$ transforms injective
+abelian sheaves into injective abelian sheaves.
+
+\begin{lemma}
+\label{lemma-pushforward-injective-flat}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Assume $f$ is flat.
+Then $f_*\mathcal{I}$ is an injective $\mathcal{O}_Y$-module
+for any injective $\mathcal{O}_X$-module $\mathcal{I}$.
+\end{lemma}
+
+\begin{proof}
+In this case the functor $f^*$ transforms injections into injections
+(Modules, Lemma \ref{modules-lemma-pullback-flat}).
+Hence the result follows from
+Homology, Lemma \ref{homology-lemma-adjoint-preserve-injectives}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-products}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $I$ be a set.
+For $i \in I$ let $\mathcal{F}_i$ be an $\mathcal{O}_X$-module.
+Let $U \subset X$ be open. The canonical map
+$$
+H^p(U, \prod\nolimits_{i \in I} \mathcal{F}_i)
+\longrightarrow
+\prod\nolimits_{i \in I} H^p(U, \mathcal{F}_i)
+$$
+is an isomorphism for $p = 0$ and injective for $p = 1$.
+\end{lemma}
+
+\begin{proof}
+The statement for $p = 0$ is true because the product of sheaves
+is equal to the product of the underlying presheaves, see
+Sheaves, Section \ref{sheaves-section-limits-sheaves}.
+Proof for $p = 1$. Set $\mathcal{F} = \prod \mathcal{F}_i$.
+Let $\xi \in H^1(U, \mathcal{F})$ map to zero in
+$\prod H^1(U, \mathcal{F}_i)$. By locality of cohomology, see
+Lemma \ref{lemma-kill-cohomology-class-on-covering},
+there exists an open covering $\mathcal{U} : U = \bigcup U_j$ such that
+$\xi|_{U_j} = 0$ for all $j$. By Lemma \ref{lemma-cech-h1} this means
+$\xi$ comes from an element
+$\check \xi \in \check H^1(\mathcal{U}, \mathcal{F})$.
+Since the maps
+$\check H^1(\mathcal{U}, \mathcal{F}_i) \to H^1(U, \mathcal{F}_i)$
+are injective for all $i$ (by Lemma \ref{lemma-cech-h1}), and since
+the image of $\xi$ is zero in $\prod H^1(U, \mathcal{F}_i)$ we see
+that the image
+$\check \xi_i = 0$ in $\check H^1(\mathcal{U}, \mathcal{F}_i)$.
+However, since $\mathcal{F} = \prod \mathcal{F}_i$ we see that
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$ is the
+product of the complexes
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}_i)$,
+hence by
+Homology, Lemma \ref{homology-lemma-product-abelian-groups-exact}
+we conclude that $\check \xi = 0$ as desired.
+\end{proof}
+
+
+
+
+
+
+
+\section{Flasque sheaves}
+\label{section-flasque}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-flasque}
+Let $X$ be a topological space. We say a presheaf of sets
+$\mathcal{F}$ is {\it flasque} or {\it flabby} if for every
+$U \subset V$ open in $X$ the restriction map
+$\mathcal{F}(V) \to \mathcal{F}(U)$ is surjective.
+\end{definition}
+
+\noindent
+We will use this terminology also for abelian sheaves and
+sheaves of modules if $X$ is a ringed space.
Clearly it suffices to check that the restriction maps
$\mathcal{F}(X) \to \mathcal{F}(U)$ are surjective for every
open $U \subset X$.
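
\medskip\noindent
A basic example: if $S$ is a nonempty set, then the sheaf
$$
U \longmapsto \text{Map}(U, S)
$$
of arbitrary (not necessarily continuous) $S$-valued functions on $X$
is flasque, since a function on $U$ extends to $V$ by choosing
arbitrary values on $V \setminus U$.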
+
+\begin{lemma}
+\label{lemma-injective-flasque}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Then any injective $\mathcal{O}_X$-module is flasque.
+\end{lemma}
+
+\begin{proof}
+This is a reformulation of Lemma \ref{lemma-injective-restriction-surjective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flasque-acyclic}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Any flasque $\mathcal{O}_X$-module
+is acyclic for $R\Gamma(X, -)$ as well as $R\Gamma(U, -)$ for any
+open $U$ of $X$.
+\end{lemma}
+
+\begin{proof}
+We will prove this using
+Derived Categories, Lemma \ref{derived-lemma-subcategory-right-acyclics}.
+Since every injective module is flasque we see that we can embed
+every $\mathcal{O}_X$-module into a flasque module, see
+Injectives, Lemma \ref{injectives-lemma-abelian-sheaves-space}.
+Thus it suffices to show that given a short exact sequence
+$$
+0 \to \mathcal{F} \to \mathcal{G} \to \mathcal{H} \to 0
+$$
+with $\mathcal{F}$, $\mathcal{G}$ flasque, then $\mathcal{H}$
+is flasque and the sequence remains short exact after taking sections
on any open of $X$. In fact, the second statement implies the first:
if $U \subset V$ are opens of $X$ and $s \in \mathcal{H}(U)$, then we
may lift $s$ to a section of $\mathcal{G}(U)$, extend the lift to a
section of $\mathcal{G}(V)$ because $\mathcal{G}$ is flasque, and take
the image in $\mathcal{H}(V)$ to get a section whose restriction to
$U$ is $s$.
+Thus, let $U \subset X$ be an open subspace. Let $s \in \mathcal{H}(U)$.
+We will show that we can lift $s$ to a section of $\mathcal{G}$
+over $U$. To do this consider the set $T$ of pairs $(V, t)$
+where $V \subset U$ is open and $t \in \mathcal{G}(V)$ is a section
+mapping to $s|_V$ in $\mathcal{H}$.
+We put a partial ordering on $T$ by setting
+$(V, t) \leq (V', t')$ if and only if $V \subset V'$ and $t'|_V = t$.
+If $(V_\alpha, t_\alpha)$, $\alpha \in A$
+is a totally ordered subset of $T$, then $V = \bigcup V_\alpha$
+is open and there is a unique section $t \in \mathcal{G}(V)$
+restricting to $t_\alpha$ over $V_\alpha$ by the sheaf condition on
+$\mathcal{G}$. Thus by Zorn's lemma there exists a maximal element
+$(V, t)$ in $T$. We will show that $V = U$ thereby finishing the proof.
+Namely, pick any $x \in U$. We can find a small open neighbourhood
+$W \subset U$ of $x$ and $t' \in \mathcal{G}(W)$ mapping to $s|_W$
+in $\mathcal{H}$. Then $t'|_{W \cap V} - t|_{W \cap V}$ maps to
+zero in $\mathcal{H}$, hence comes from some section
+$r' \in \mathcal{F}(W \cap V)$. Using that $\mathcal{F}$ is flasque
+we find a section $r \in \mathcal{F}(W)$ restricting to $r'$
+over $W \cap V$. Modifying $t'$ by the image of $r$ we may
+assume that $t$ and $t'$ restrict to the same section over
+$W \cap V$. By the sheaf condition of $\mathcal{G}$
+we can find a section $\tilde t$ of $\mathcal{G}$ over
+$W \cup V$ restricting to $t$ and $t'$.
+By maximality of $(V, t)$ we see that $V \cup W = V$.
+Thus $x \in V$ and we are done.
+\end{proof}
+
+\noindent
+The following lemma does not hold for flasque presheaves.
+
+\begin{lemma}
+\label{lemma-flasque-acyclic-cech}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_X$-modules.
+Let $\mathcal{U} : U = \bigcup U_i$ be an open covering.
+If $\mathcal{F}$ is flasque, then
+$\check{H}^p(\mathcal{U}, \mathcal{F}) = 0$ for $p > 0$.
+\end{lemma}
+
+\begin{proof}
+The presheaves $\underline{H}^q(\mathcal{F})$ used in the statement
+of Lemma \ref{lemma-cech-spectral-sequence} are zero by
+Lemma \ref{lemma-flasque-acyclic}.
Hence $\check{H}^p(\mathcal{U}, \mathcal{F}) = H^p(U, \mathcal{F}) = 0$
+by Lemma \ref{lemma-flasque-acyclic} again.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flasque-acyclic-pushforward}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism
+of ringed spaces. Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_X$-modules.
+If $\mathcal{F}$ is flasque, then $R^pf_*\mathcal{F} = 0$ for $p > 0$.
+\end{lemma}
+
+\begin{proof}
+Immediate from
+Lemma \ref{lemma-describe-higher-direct-images} and
+Lemma \ref{lemma-flasque-acyclic}.
+\end{proof}
+
+\noindent
+The following lemma can be proved by an elementary induction
+argument for finite coverings, compare with the discussion
+of {\v C}ech cohomology in \cite{FOAG}.
+
+\begin{lemma}
+\label{lemma-vanishing-ravi}
+Let $X$ be a topological space. Let $\mathcal{F}$ be an abelian sheaf
+on $X$. Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an
+open covering. Assume the restriction mappings
+$\mathcal{F}(U) \to \mathcal{F}(U')$ are surjective
+for $U'$ an arbitrary union of opens of the form $U_{i_0 \ldots i_p}$.
+Then $\check{H}^p(\mathcal{U}, \mathcal{F})$
+vanishes for $p > 0$.
+\end{lemma}
+
+\begin{proof}
+Let $Y$ be the set of nonempty subsets of $I$. We will use the letters
+$A, B, C, \ldots$ to denote elements of $Y$, i.e., nonempty subsets of $I$.
+For a finite nonempty subset $J \subset I$ let
+$$
+V_J = \{A \in Y \mid J \subset A\}
+$$
+This means that $V_{\{i\}} = \{A \in Y \mid i \in A\}$ and
+$V_J = \bigcap_{j \in J} V_{\{j\}}$.
+Then $V_J \subset V_K$ if and only if $J \supset K$.
+There is a unique topology on $Y$ such that the collection of
+subsets $V_J$ is a basis for the topology on $Y$. Any open is of the form
+$$
+V = \bigcup\nolimits_{t \in T} V_{J_t}
+$$
+for some family of finite subsets $J_t$. If $J_t \subset J_{t'}$
+then we may remove $J_{t'}$ from the family without changing $V$.
+Thus we may assume there are no inclusions among the $J_t$.
+In this case the minimal elements of $V$ are the sets $A = J_t$.
+Hence we can read off the family $(J_t)_{t \in T}$ from the open $V$.
+
+\medskip\noindent
+We can completely understand open coverings in $Y$. First, because
+the elements $A \in Y$ are nonempty subsets of $I$ we have
+$$
+Y = \bigcup\nolimits_{i \in I} V_{\{i\}}
+$$
+To understand other coverings, let $V$ be as above and let $V_s \subset Y$
+be an open corresponding to the family $(J_{s, t})_{t \in T_s}$. Then
+$$
+V = \bigcup\nolimits_{s \in S} V_s
+$$
if and only if for each $t \in T$ there exists an $s \in S$ and
$t_s \in T_s$ such that $J_t = J_{s, t_s}$. Namely, as the family
$(J_t)_{t \in T}$ is minimal, the minimal element $A = J_t$
has to be in $V_s$ for some $s$, hence $A \in V_{J_{s, t_s}}$ for some
$t_s \in T_s$. But since $A$ is also minimal in $V_s$ we conclude
that $J_{s, t_s} = J_t$.
+
+\medskip\noindent
+Next we map the set of opens of $Y$ to opens of $X$. Namely, we send
+$Y$ to $U$, we use the rule
+$$
+V_J \mapsto U_J = \bigcap\nolimits_{i \in J} U_i
+$$
+on the opens $V_J$, and we extend it to arbitrary opens $V$ by the rule
+$$
+V = \bigcup\nolimits_{t \in T} V_{J_t}
+\mapsto
+\bigcup\nolimits_{t \in T} U_{J_t}
+$$
+The classification of open coverings of $Y$ given above shows that
+this rule transforms open coverings into open coverings. Thus we obtain
+an abelian sheaf $\mathcal{G}$ on $Y$ by setting
+$\mathcal{G}(Y) = \mathcal{F}(U)$ and for
+$V = \bigcup\nolimits_{t \in T} V_{J_t}$ setting
+$$
+\mathcal{G}(V) = \mathcal{F}\left(\bigcup\nolimits_{t \in T} U_{J_t}\right)
+$$
+and using the restriction maps of $\mathcal{F}$.
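
\medskip\noindent
For example, if $I = \{1, 2\}$, then
$Y = \{\{1\}, \{2\}, \{1, 2\}\}$ with opens
$\emptyset$, $V_{\{1, 2\}} = \{\{1, 2\}\}$,
$V_{\{1\}} = \{\{1\}, \{1, 2\}\}$,
$V_{\{2\}} = \{\{2\}, \{1, 2\}\}$, and $Y$ itself,
and the construction gives
$$
\mathcal{G}(V_{\{1\}}) = \mathcal{F}(U_1), \quad
\mathcal{G}(V_{\{2\}}) = \mathcal{F}(U_2), \quad
\mathcal{G}(V_{\{1, 2\}}) = \mathcal{F}(U_1 \cap U_2), \quad
\mathcal{G}(Y) = \mathcal{F}(U).
$$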
+
+\medskip\noindent
+With these preliminaries out of the way we can prove our lemma as follows.
+We have an open covering
+$\mathcal{V} : Y = \bigcup_{i \in I} V_{\{i\}}$ of $Y$.
+By construction we have an equality
+$$
\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{G}) =
\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+of {\v C}ech complexes. Since the sheaf $\mathcal{G}$ is flasque on $Y$
+(by our assumption on $\mathcal{F}$ in the statement of the lemma)
+the vanishing follows from
+Lemma \ref{lemma-flasque-acyclic-cech}.
+\end{proof}
+
+
+
+
+\section{The Leray spectral sequence}
+\label{section-Leray}
+
+\begin{lemma}
+\label{lemma-before-Leray}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+There is a commutative diagram
+$$
+\xymatrix{
+D^{+}(X) \ar[rr]_-{R\Gamma(X, -)} \ar[d]_{Rf_*} & &
+D^{+}(\mathcal{O}_X(X)) \ar[d]^{\text{restriction}} \\
+D^{+}(Y) \ar[rr]^-{R\Gamma(Y, -)} & &
+D^{+}(\mathcal{O}_Y(Y))
+}
+$$
+More generally for any $V \subset Y$ open and $U = f^{-1}(V)$ there
+is a commutative diagram
+$$
+\xymatrix{
+D^{+}(X) \ar[rr]_-{R\Gamma(U, -)} \ar[d]_{Rf_*} & &
+D^{+}(\mathcal{O}_X(U)) \ar[d]^{\text{restriction}} \\
+D^{+}(Y) \ar[rr]^-{R\Gamma(V, -)} & &
+D^{+}(\mathcal{O}_Y(V))
+}
+$$
+See also Remark \ref{remark-elucidate-lemma} for more explanation.
+\end{lemma}
+
+\begin{proof}
+Let
+$\Gamma_{res} : \textit{Mod}(\mathcal{O}_X) \to \text{Mod}_{\mathcal{O}_Y(Y)}$
+be the functor which associates to an $\mathcal{O}_X$-module $\mathcal{F}$
+the global sections of $\mathcal{F}$ viewed as an $\mathcal{O}_Y(Y)$-module
+via the map $f^\sharp : \mathcal{O}_Y(Y) \to \mathcal{O}_X(X)$. Let
+$restriction : \text{Mod}_{\mathcal{O}_X(X)} \to \text{Mod}_{\mathcal{O}_Y(Y)}$
+be the restriction functor induced by
+$f^\sharp : \mathcal{O}_Y(Y) \to \mathcal{O}_X(X)$. Note that $restriction$
+is exact so that
+its right derived functor is computed by simply applying the restriction
+functor, see
+Derived Categories, Lemma \ref{derived-lemma-right-derived-exact-functor}.
+It is clear that
+$$
+\Gamma_{res}
+=
+restriction \circ \Gamma(X, -)
+=
+\Gamma(Y, -) \circ f_*
+$$
+We claim that
+Derived Categories, Lemma \ref{derived-lemma-compose-derived-functors}
+applies to both compositions. For the first this is clear by our remarks
+above. For the second, it follows from
+Lemma \ref{lemma-pushforward-injective} which implies that
+injective $\mathcal{O}_X$-modules are mapped to $\Gamma(Y, -)$-acyclic
+sheaves on $Y$.
+\end{proof}
+
+\begin{remark}
+\label{remark-elucidate-lemma}
+Here is a down-to-earth explanation of the meaning of
+Lemma \ref{lemma-before-Leray}. It says that given
+$f : X \to Y$ and $\mathcal{F} \in \textit{Mod}(\mathcal{O}_X)$
+and given an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$
+we have
+$$
+\begin{matrix}
+R\Gamma(X, \mathcal{F}) & \text{is represented by} &
+\Gamma(X, \mathcal{I}^\bullet) \\
+Rf_*\mathcal{F} & \text{is represented by} & f_*\mathcal{I}^\bullet \\
+R\Gamma(Y, Rf_*\mathcal{F}) & \text{is represented by} &
+\Gamma(Y, f_*\mathcal{I}^\bullet)
+\end{matrix}
+$$
+the last fact coming from Leray's acyclicity lemma
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity})
+and Lemma \ref{lemma-pushforward-injective}.
+Finally, it combines this with the trivial observation that
+$$
+\Gamma(X, \mathcal{I}^\bullet)
+=
\Gamma(Y, f_*\mathcal{I}^\bullet)
+$$
+to arrive at the commutativity of the diagram of the lemma.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-modules-abelian}
+Let $X$ be a ringed space.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item The cohomology groups $H^i(U, \mathcal{F})$ for $U \subset X$ open
+of $\mathcal{F}$ computed as an $\mathcal{O}_X$-module, or computed as an
+abelian sheaf are identical.
+\item Let $f : X \to Y$ be a morphism of ringed spaces.
+The higher direct images $R^if_*\mathcal{F}$ of $\mathcal{F}$
+computed as an $\mathcal{O}_X$-module, or computed as an abelian sheaf
+are identical.
+\end{enumerate}
+There are similar statements in the case of bounded below
+complexes of $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+Consider the morphism of ringed spaces
+$(X, \mathcal{O}_X) \to (X, \underline{\mathbf{Z}}_X)$ given
+by the identity on the underlying topological space and by
+the unique map of sheaves of rings
+$\underline{\mathbf{Z}}_X \to \mathcal{O}_X$.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Denote $\mathcal{F}_{ab}$ the same sheaf seen as an
+$\underline{\mathbf{Z}}_X$-module, i.e., seen as a sheaf of
+abelian groups. Let
+$\mathcal{F} \to \mathcal{I}^\bullet$ be an injective resolution.
+By Remark \ref{remark-elucidate-lemma} we see that
+$\Gamma(X, \mathcal{I}^\bullet)$ computes both
+$R\Gamma(X, \mathcal{F})$ and $R\Gamma(X, \mathcal{F}_{ab})$.
+This proves (1).
+
+\medskip\noindent
+To prove (2) we use (1) and Lemma \ref{lemma-describe-higher-direct-images}.
+The result follows immediately.
+\end{proof}
+
+\begin{lemma}[Leray spectral sequence]
+\label{lemma-Leray}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{F}^\bullet$ be
+a bounded below complex of $\mathcal{O}_X$-modules.
+There is a spectral sequence
+$$
+E_2^{p, q} = H^p(Y, R^qf_*(\mathcal{F}^\bullet))
+$$
+converging to $H^{p + q}(X, \mathcal{F}^\bullet)$.
+\end{lemma}
+
+\begin{proof}
+This is just the Grothendieck spectral sequence
+Derived Categories, Lemma \ref{derived-lemma-grothendieck-spectral-sequence}
+coming from the composition of functors
+$\Gamma_{res} = \Gamma(Y, -) \circ f_*$ where $\Gamma_{res}$ is as
+in the proof of Lemma \ref{lemma-before-Leray}.
+To see that the assumptions of
+Derived Categories, Lemma \ref{derived-lemma-grothendieck-spectral-sequence}
+are satisfied, see the proof of Lemma \ref{lemma-before-Leray} or
+Remark \ref{remark-elucidate-lemma}.
+\end{proof}
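
\noindent
In low degrees the Leray spectral sequence for a single
$\mathcal{O}_X$-module $\mathcal{F}$ gives the usual five term
exact sequence
$$
0 \to H^1(Y, f_*\mathcal{F}) \to H^1(X, \mathcal{F}) \to
H^0(Y, R^1f_*\mathcal{F}) \to H^2(Y, f_*\mathcal{F}) \to
H^2(X, \mathcal{F})
$$
of a first quadrant spectral sequence.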
+
+\begin{remark}
+\label{remark-Leray-ss-more-structure}
+The Leray spectral sequence, the way we proved it in Lemma \ref{lemma-Leray}
+is a spectral sequence of $\Gamma(Y, \mathcal{O}_Y)$-modules. However, it
+is quite easy to see that it is in fact a spectral sequence of
+$\Gamma(X, \mathcal{O}_X)$-modules. For example $f$ gives rise to
+a morphism of ringed spaces
+$f' : (X, \mathcal{O}_X) \to (Y, f_*\mathcal{O}_X)$.
+By Lemma \ref{lemma-modules-abelian} the terms $E_r^{p, q}$ of the
+Leray spectral sequence for an $\mathcal{O}_X$-module $\mathcal{F}$
+and $f$ are identical with those for $\mathcal{F}$ and $f'$
+at least for $r \geq 2$. Namely, they both agree with the terms of the Leray
+spectral sequence for $\mathcal{F}$ as an abelian sheaf.
+And since $(f_*\mathcal{O}_X)(Y) = \mathcal{O}_X(X)$ we see the result.
+It is often the case
+that the Leray spectral sequence carries additional structure.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-apply-Leray}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item If $R^qf_*\mathcal{F} = 0$ for $q > 0$, then
+$H^p(X, \mathcal{F}) = H^p(Y, f_*\mathcal{F})$ for all $p$.
+\item If $H^p(Y, R^qf_*\mathcal{F}) = 0$ for all $q$ and $p > 0$, then
+$H^q(X, \mathcal{F}) = H^0(Y, R^qf_*\mathcal{F})$ for all $q$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+These are two simple conditions that force the Leray spectral sequence to
+degenerate at $E_2$. You can also prove these facts directly (without using
+the spectral sequence) which is a good exercise in cohomology of sheaves.
+\end{proof}
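
\noindent
For instance, in case (1) the $E_2$ page is concentrated in the row
$q = 0$, so all differentials vanish and
$$
H^p(Y, f_*\mathcal{F}) = E_2^{p, 0} = E_\infty^{p, 0} =
H^p(X, \mathcal{F})
$$
for all $p$; in case (2) the $E_2$ page is concentrated in the column
$p = 0$ and one argues in the same way.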
+
+\begin{lemma}
+\label{lemma-higher-direct-images-compose}
+\begin{slogan}
+The total derived functor of a composition is the
+composition of the total derived functors.
+\end{slogan}
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of ringed spaces.
+In this case $Rg_* \circ Rf_* = R(g \circ f)_*$ as functors
+from $D^{+}(X) \to D^{+}(Z)$.
+\end{lemma}
+
+\begin{proof}
+We are going to apply
+Derived Categories, Lemma \ref{derived-lemma-compose-derived-functors}.
+It is clear that $g_* \circ f_* = (g \circ f)_*$, see
+Sheaves, Lemma \ref{sheaves-lemma-pushforward-composition}.
+It remains to show that $f_*\mathcal{I}$ is $g_*$-acyclic.
+This follows from Lemma \ref{lemma-pushforward-injective}
+and the description of the
+higher direct images $R^ig_*$ in
+Lemma \ref{lemma-describe-higher-direct-images}.
+\end{proof}
+
+\begin{lemma}[Relative Leray spectral sequence]
+\label{lemma-relative-Leray}
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of ringed spaces.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+There is a spectral sequence with
+$$
+E_2^{p, q} = R^pg_*(R^qf_*\mathcal{F})
+$$
+converging to $R^{p + q}(g \circ f)_*\mathcal{F}$.
+This spectral sequence is functorial in $\mathcal{F}$, and there
+is a version for bounded below complexes of $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+This is a Grothendieck spectral sequence for composition of functors
+and follows from Lemma \ref{lemma-higher-direct-images-compose} and
+Derived Categories, Lemma \ref{derived-lemma-grothendieck-spectral-sequence}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Functoriality of cohomology}
+\label{section-functoriality}
+
+\begin{lemma}
+\label{lemma-functoriality}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{G}^\bullet$, resp.\ $\mathcal{F}^\bullet$ be
+a bounded below complex of $\mathcal{O}_Y$-modules,
+resp.\ $\mathcal{O}_X$-modules. Let
+$\varphi : \mathcal{G}^\bullet \to f_*\mathcal{F}^\bullet$
+be a morphism of complexes. There is a canonical morphism
+$$
+\mathcal{G}^\bullet
+\longrightarrow
+Rf_*(\mathcal{F}^\bullet)
+$$
+in $D^{+}(Y)$. Moreover this construction is functorial in the triple
+$(\mathcal{G}^\bullet, \mathcal{F}^\bullet, \varphi)$.
+\end{lemma}
+
+\begin{proof}
+Choose an injective resolution $\mathcal{F}^\bullet \to \mathcal{I}^\bullet$.
+By definition $Rf_*(\mathcal{F}^\bullet)$ is represented by
+$f_*\mathcal{I}^\bullet$ in $K^{+}(\mathcal{O}_Y)$.
+The composition
+$$
+\mathcal{G}^\bullet \to f_*\mathcal{F}^\bullet \to f_*\mathcal{I}^\bullet
+$$
+is a morphism in $K^{+}(Y)$ which turns
+into the morphism of the lemma upon applying the
+localization functor $j_Y : K^{+}(Y) \to D^{+}(Y)$.
+\end{proof}
+
+\noindent
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{G}$ be an $\mathcal{O}_Y$-module and let
+$\mathcal{F}$ be an $\mathcal{O}_X$-module. Recall that an
+$f$-map $\varphi$ from $\mathcal{G}$ to $\mathcal{F}$ is a map
+$\varphi : \mathcal{G} \to f_*\mathcal{F}$, or what is the same
+thing, a map $\varphi : f^*\mathcal{G} \to \mathcal{F}$.
+See Sheaves, Definition \ref{sheaves-definition-f-map}.
+Such an $f$-map gives rise to a morphism of complexes
+\begin{equation}
+\label{equation-functorial-derived}
+\varphi :
+R\Gamma(Y, \mathcal{G})
+\longrightarrow
+R\Gamma(X, \mathcal{F})
+\end{equation}
+in $D^{+}(\mathcal{O}_Y(Y))$. Namely, we use the morphism
+$\mathcal{G} \to Rf_*\mathcal{F}$ in $D^{+}(Y)$ of
+Lemma \ref{lemma-functoriality}, and we apply $R\Gamma(Y, -)$.
+By Lemma \ref{lemma-before-Leray} we see that
+$R\Gamma(X, \mathcal{F}) = R\Gamma(Y, Rf_*\mathcal{F})$
+and we get the displayed arrow. We spell this out completely in
+Remark \ref{remark-explain-arrow} below.
+In particular it gives
+rise to maps on cohomology
+\begin{equation}
+\label{equation-functorial}
+\varphi : H^i(Y, \mathcal{G}) \longrightarrow H^i(X, \mathcal{F}).
+\end{equation}
+
+\begin{remark}
+\label{remark-explain-arrow}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{G}$ be an $\mathcal{O}_Y$-module.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Let $\varphi$ be an $f$-map from $\mathcal{G}$ to $\mathcal{F}$.
+Choose a resolution $\mathcal{F} \to \mathcal{I}^\bullet$
+by a complex of injective $\mathcal{O}_X$-modules.
+Choose resolutions $\mathcal{G} \to \mathcal{J}^\bullet$ and
+$f_*\mathcal{I}^\bullet \to (\mathcal{J}')^\bullet$ by complexes
+of injective $\mathcal{O}_Y$-modules. By
+Derived Categories, Lemma \ref{derived-lemma-morphisms-lift}
+there exists a map of complexes
+$\beta$ such that the diagram
+\begin{equation}
+\label{equation-choice}
+\xymatrix{
+\mathcal{G} \ar[d] \ar[r] &
+f_*\mathcal{F} \ar[r] &
+f_*\mathcal{I}^\bullet \ar[d] \\
+\mathcal{J}^\bullet \ar[rr]^\beta & &
+(\mathcal{J}')^\bullet
+}
+\end{equation}
+commutes. Applying global section functors we see
+that we get a diagram
+$$
+\xymatrix{
+ & & \Gamma(Y, f_*\mathcal{I}^\bullet) \ar[d]_{qis} \ar@{=}[r] &
+\Gamma(X, \mathcal{I}^\bullet) \\
+\Gamma(Y, \mathcal{J}^\bullet) \ar[rr]^\beta & &
+\Gamma(Y, (\mathcal{J}')^\bullet) &
+}
+$$
+The complex on the bottom left represents $R\Gamma(Y, \mathcal{G})$
+and the complex on the top right represents $R\Gamma(X, \mathcal{F})$.
+The vertical arrow is a quasi-isomorphism by
+Lemma \ref{lemma-before-Leray} which becomes invertible after
+applying the localization functor
+$K^{+}(\mathcal{O}_Y(Y)) \to D^{+}(\mathcal{O}_Y(Y))$.
The arrow (\ref{equation-functorial-derived}) is given by
composing the horizontal map with the inverse of the vertical map.
+\end{remark}
+
+
+
+
+
+\section{Refinements and {\v C}ech cohomology}
+\label{section-refinements-cech}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\mathcal{U} : X = \bigcup_{i \in I} U_i$ and
+$\mathcal{V} : X = \bigcup_{j \in J} V_j$ be open coverings.
+Assume that $\mathcal{U}$ is a refinement of $\mathcal{V}$.
+Choose a map $c : I \to J$ such that $U_i \subset V_{c(i)}$
+for all $i \in I$. This induces a map of {\v C}ech complexes
+$$
+\gamma :
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}),
+\quad
+(\xi_{j_0 \ldots j_p})
+\longmapsto
+(\xi_{c(i_0) \ldots c(i_p)}|_{U_{i_0 \ldots i_p}})
+$$
+functorial in the sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$.
+Suppose that $c' : I \to J$ is a second map such that
+$U_i \subset V_{c'(i)}$ for all $i \in I$. Then the corresponding maps
+$\gamma$ and $\gamma'$ are homotopic. Namely,
+$\gamma - \gamma' = \text{d} \circ h + h \circ \text{d}$
+with
+$h : \check{\mathcal{C}}^{p + 1}(\mathcal{V}, \mathcal{F}) \to
+\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})$
+given by the rule
+$$
+h(\alpha)_{i_0 \ldots i_p} =
+\sum\nolimits_{a = 0}^{p}
+(-1)^a
+\alpha_{c(i_0)\ldots c(i_a) c'(i_a) \ldots c'(i_p)}
+$$
+We omit the computation showing this works; please see the discussion
+following (\ref{equation-transformation}) for the proof in a more general
+case. In particular, the map on {\v C}ech cohomology groups is independent
+of the choice of $c$. Moreover, it is clear that if
+$\mathcal{W} : X = \bigcup_{k \in K} W_k$ is a third open covering
+and $\mathcal{V}$ is a refinement of $\mathcal{W}$, then the composition
+of the maps
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{W}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+associated to maps $I \to J$ and $J \to K$ is the map associated
+to the composition $I \to K$.
+In particular, we can define the {\v C}ech cohomology
+groups
+$$
+\check{H}^p(X, \mathcal{F}) =
+\colim_\mathcal{U} \check{H}^p(\mathcal{U}, \mathcal{F})
+$$
+where the colimit is over all open coverings of $X$ preordered by refinement.
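\medskip\noindent
For example, in degree zero the sheaf condition gives, for every open
covering $\mathcal{U}$,
$$
\check{H}^0(\mathcal{U}, \mathcal{F}) =
\text{Ker}\Big(
\prod\nolimits_i \mathcal{F}(U_i)
\longrightarrow
\prod\nolimits_{i_0 i_1} \mathcal{F}(U_{i_0 i_1})
\Big)
= \Gamma(X, \mathcal{F})
$$
so that the colimit is constant and
$\check{H}^0(X, \mathcal{F}) = \Gamma(X, \mathcal{F})$.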
+
+\medskip\noindent
+It turns out that the maps $\gamma$ defined above are compatible with
+the map to cohomology, in other words, the composition
+$$
+\check{H}^p(\mathcal{V}, \mathcal{F}) \to
+\check{H}^p(\mathcal{U}, \mathcal{F})
+\xrightarrow{\text{Lemma \ref{lemma-cech-cohomology}}}
+H^p(X, \mathcal{F})
+$$
+is the canonical map from the first group to cohomology of
+Lemma \ref{lemma-cech-cohomology}.
+In the lemma below we will prove this in a slightly more general
+setting. A consequence is that we obtain a well defined map
+\begin{equation}
+\label{equation-cech-to-cohomology}
+\check{H}^p(X, \mathcal{F}) =
+\colim_\mathcal{U} \check{H}^p(\mathcal{U}, \mathcal{F})
+\longrightarrow
+H^p(X, \mathcal{F})
+\end{equation}
+from {\v C}ech cohomology to cohomology.
+
+\begin{lemma}
+\label{lemma-functoriality-cech}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\varphi : f^*\mathcal{G} \to \mathcal{F}$ be an $f$-map
+from an $\mathcal{O}_Y$-module $\mathcal{G}$ to an
+$\mathcal{O}_X$-module $\mathcal{F}$.
+Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ and
+$\mathcal{V} : Y = \bigcup_{j \in J} V_j$ be open coverings.
+Assume that $\mathcal{U}$ is a refinement of
+$f^{-1}\mathcal{V} : X = \bigcup_{j \in J} f^{-1}(V_j)$.
+In this case there exists a commutative diagram
+$$
+\xymatrix{
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \ar[r] &
+R\Gamma(X, \mathcal{F}) \\
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{G}) \ar[r]
+\ar[u]^\gamma &
+R\Gamma(Y, \mathcal{G}) \ar[u]
+}
+$$
+in $D^{+}(\mathcal{O}_X(X))$ with horizontal arrows given by
+Lemma \ref{lemma-cech-cohomology} and right vertical arrow by
+(\ref{equation-functorial-derived}).
+In particular we get commutative diagrams of cohomology groups
+$$
+\xymatrix{
+\check{H}^p(\mathcal{U}, \mathcal{F}) \ar[r] &
+H^p(X, \mathcal{F}) \\
+\check{H}^p(\mathcal{V}, \mathcal{G}) \ar[r]
+\ar[u]^\gamma &
+H^p(Y, \mathcal{G}) \ar[u]
+}
+$$
where the right vertical arrow is (\ref{equation-functorial}).
+\end{lemma}
+
+\begin{proof}
+We first define the left vertical arrow. Namely, choose a map
+$c : I \to J$ such that $U_i \subset f^{-1}(V_{c(i)})$ for all
+$i \in I$. In degree $p$ we define the map by the rule
+$$
\gamma(s)_{i_0 \ldots i_p} = \varphi(s_{c(i_0) \ldots c(i_p)})
+$$
+This makes sense because $\varphi$ does indeed induce maps
+$\mathcal{G}(V_{c(i_0) \ldots c(i_p)}) \to \mathcal{F}(U_{i_0 \ldots i_p})$
+by assumption. It is also clear that this defines a morphism of complexes.
+Choose injective resolutions
+$\mathcal{F} \to \mathcal{I}^\bullet$ on $X$ and
$\mathcal{G} \to \mathcal{J}^\bullet$ on $Y$. According to
+the proof of Lemma \ref{lemma-cech-cohomology} we introduce the double
+complexes $A^{\bullet, \bullet}$ and $B^{\bullet, \bullet}$
+with terms
+$$
+B^{p, q} = \check{\mathcal{C}}^p(\mathcal{V}, \mathcal{J}^q)
+\quad
+\text{and}
+\quad
+A^{p, q} = \check{\mathcal{C}}^p(\mathcal{U}, \mathcal{I}^q).
+$$
+As in Remark \ref{remark-explain-arrow} above we also choose an
+injective resolution
$f_*\mathcal{I}^\bullet \to (\mathcal{J}')^\bullet$ on $Y$ and a morphism
of complexes $\beta : \mathcal{J}^\bullet \to (\mathcal{J}')^\bullet$
making (\ref{equation-choice}) commute. We introduce some more
double complexes, namely $(B')^{\bullet, \bullet}$ and
$(B'')^{\bullet, \bullet}$ with
+$$
+(B')^{p, q} = \check{\mathcal{C}}^p(\mathcal{V}, (\mathcal{J}')^q)
+\quad
+\text{and}
+\quad
+(B'')^{p, q} = \check{\mathcal{C}}^p(\mathcal{V}, f_*\mathcal{I}^q).
+$$
+Note that there is an $f$-map of complexes from
+$f_*\mathcal{I}^\bullet$ to $\mathcal{I}^\bullet$. Hence
+it is clear that the same rule as above defines a morphism
+of double complexes
+$$
+\gamma : (B'')^{\bullet, \bullet} \longrightarrow A^{\bullet, \bullet}.
+$$
+Consider the diagram of complexes
+$$
+\xymatrix{
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+\ar[r] &
+\text{Tot}(A^{\bullet, \bullet}) & & &
+\Gamma(X, \mathcal{I}^\bullet) \ar[lll]^{qis}
+\ar@{=}[ddl]\\
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{G})
+\ar[r] \ar[u]^\gamma &
+\text{Tot}(B^{\bullet, \bullet}) \ar[r]^\beta &
+\text{Tot}((B')^{\bullet, \bullet}) &
\text{Tot}((B'')^{\bullet, \bullet}) \ar[l] \ar[llu]_\gamma \\
+& \Gamma(Y, \mathcal{J}^\bullet) \ar[u]^{qis} \ar[r]^\beta &
+\Gamma(Y, (\mathcal{J}')^\bullet) \ar[u] &
+\Gamma(Y, f_*\mathcal{I}^\bullet) \ar[u] \ar[l]_{qis}
+}
+$$
+The two horizontal arrows with targets $\text{Tot}(A^{\bullet, \bullet})$ and
+$\text{Tot}(B^{\bullet, \bullet})$
+are the ones explained in Lemma \ref{lemma-cech-cohomology}.
+The left upper shape (a pentagon) is commutative simply
+because (\ref{equation-choice}) is commutative.
+The two lower squares are trivially commutative.
+It is also immediate from the definitions that the
+right upper shape (a square) is commutative.
+The result of the lemma now follows from the definitions
+and the fact that going around the diagram on the outer sides
+from $\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{G})$
+to $\Gamma(X, \mathcal{I}^\bullet)$ either on top or on bottom
+is the same (where you have to invert any quasi-isomorphisms along the way).
+\end{proof}
+
+
+
+
+
+\section{Cohomology on Hausdorff quasi-compact spaces}
+\label{section-cohomology-LC}
+
+\noindent
+For such a space {\v C}ech cohomology agrees with cohomology.
+
+\begin{lemma}
+\label{lemma-cech-always}
+Let $X$ be a topological space. Let $\mathcal{F}$ be an abelian sheaf. Then
+the map $\check{H}^1(X, \mathcal{F}) \to H^1(X, \mathcal{F})$ defined
+in (\ref{equation-cech-to-cohomology}) is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{U}$ be an open covering of $X$.
+By Lemma \ref{lemma-cech-spectral-sequence}
+there is an exact sequence
+$$
+0 \to \check{H}^1(\mathcal{U}, \mathcal{F}) \to H^1(X, \mathcal{F})
+\to \check{H}^0(\mathcal{U}, \underline{H}^1(\mathcal{F}))
+$$
+Thus the map is injective. To show surjectivity it suffices to show that
+any element of $\check{H}^0(\mathcal{U}, \underline{H}^1(\mathcal{F}))$
+maps to zero after replacing $\mathcal{U}$ by a refinement.
+This is immediate from the definitions and the fact that
+$\underline{H}^1(\mathcal{F})$ is a presheaf of abelian groups
+whose sheafification is zero by locality of cohomology, see
+Lemma \ref{lemma-kill-cohomology-class-on-covering}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-Hausdorff-quasi-compact}
+Let $X$ be a Hausdorff and quasi-compact topological space. Let
+$\mathcal{F}$ be an abelian sheaf on $X$. Then
+the map $\check{H}^n(X, \mathcal{F}) \to H^n(X, \mathcal{F})$ defined
+in (\ref{equation-cech-to-cohomology}) is an isomorphism for
+all $n$.
+\end{lemma}
+
+\begin{proof}
+We already know that $\check{H}^n(X, -) \to H^n(X, -)$
+is an isomorphism of functors for $n = 0, 1$, see
+Lemma \ref{lemma-cech-always}.
+The functors $H^n(X, -)$ form a universal $\delta$-functor, see
+Derived Categories, Lemma \ref{derived-lemma-higher-derived-functors}.
+If we show that $\check{H}^n(X, -)$ forms a universal $\delta$-functor
+and that $\check{H}^n(X, -) \to H^n(X, -)$ is compatible with boundary
+maps, then the map will automatically be an isomorphism by uniqueness
+of universal $\delta$-functors, see
+Homology, Lemma \ref{homology-lemma-uniqueness-universal-delta-functor}.
+
+\medskip\noindent
+Let $0 \to \mathcal{F} \to \mathcal{G} \to \mathcal{H} \to 0$
+be a short exact sequence of abelian sheaves on $X$.
+Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be an open covering.
+This gives a complex of complexes
+$$
+0 \to \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \to
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{G}) \to
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{H}) \to 0
+$$
+which is in general not exact on the right. The sequence defines
+the maps
+$$
+\check{H}^n(\mathcal{U}, \mathcal{F}) \to
+\check{H}^n(\mathcal{U}, \mathcal{G}) \to
+\check{H}^n(\mathcal{U}, \mathcal{H})
+$$
+but isn't good enough to define a boundary operator
+$\delta : \check{H}^n(\mathcal{U}, \mathcal{H}) \to
+\check{H}^{n + 1}(\mathcal{U}, \mathcal{F})$. Indeed
+such a thing will not exist in general. However, given an
+element $\overline{h} \in \check{H}^n(\mathcal{U}, \mathcal{H})$
+which is the cohomology class of a cocycle
+$h = (h_{i_0 \ldots i_n})$
+we can choose open coverings
+$$
+U_{i_0 \ldots i_n} = \bigcup W_{i_0 \ldots i_n, k}
+$$
+such that $h_{i_0 \ldots i_n}|_{W_{i_0 \ldots i_n, k}}$
+lifts to a section of $\mathcal{G}$ over $W_{i_0 \ldots i_n, k}$.
+By Topology, Lemma \ref{topology-lemma-refine-covering}
(this is where we use the assumption that $X$ is Hausdorff and quasi-compact)
+we can choose an open covering $\mathcal{V} : X = \bigcup_{j \in J} V_j$
+and $\alpha : J \to I$ such that $V_j \subset U_{\alpha(j)}$
+(it is a refinement) and such that for all $j_0, \ldots, j_n \in J$
+there is a $k$ such that
+$V_{j_0 \ldots j_n} \subset W_{\alpha(j_0) \ldots \alpha(j_n), k}$.
+We obtain maps of complexes
+$$
+\xymatrix{
+0 \ar[r] &
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \ar[d] \ar[r] &
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{G}) \ar[d] \ar[r] &
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{H}) \ar[d] \ar[r] &
+0 \\
+0 \ar[r] &
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{F}) \ar[r] &
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{G}) \ar[r] &
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{H}) \ar[r] &
+0
+}
+$$
+In fact, the vertical arrows are the maps of complexes used
+to define the transition maps between the {\v C}ech cohomology groups.
+Our choice of refinement shows that we may choose
+$$
+g_{j_0 \ldots j_n} \in
+\mathcal{G}(V_{j_0 \ldots j_n}),\quad
+g_{j_0 \ldots j_n} \longmapsto
+h_{\alpha(j_0) \ldots \alpha(j_n)}|_{V_{j_0 \ldots j_n}}
+$$
+The cochain $g = (g_{j_0 \ldots j_n})$ is not a cocycle
+in general but we know that its {\v C}ech boundary $\text{d}(g)$
+maps to zero in $\check{\mathcal{C}}^{n + 1}(\mathcal{V}, \mathcal{H})$
+(by the commutative diagram above and the fact that $h$ is a cocycle).
+Hence $\text{d}(g)$ is a cocycle in
+$\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{F})$.
+This allows us to define
+$$
+\delta(\overline{h}) = \text{class of }\text{d}(g)\text{ in }
+\check{H}^{n + 1}(\mathcal{V}, \mathcal{F})
+$$
Now, given an element $\xi \in \check{H}^n(X, \mathcal{H})$
we choose an open covering $\mathcal{U}$ and an element
$\overline{h} \in \check{H}^n(\mathcal{U}, \mathcal{H})$
mapping to $\xi$ in the colimit defining {\v C}ech cohomology.
Then we choose $\mathcal{V}$ and $g$ as above and set
$\delta(\xi)$ equal to the image of $\delta(\overline{h})$
in $\check{H}^{n + 1}(X, \mathcal{F})$.
+At this point a lot of properties have to be checked, all of which
+are straightforward. For example, we need to check that our construction
+is independent of the choice of
+$\mathcal{U}, \overline{h}, \mathcal{V}, \alpha : J \to I, g$.
+The class of $\text{d}(g)$ is independent of the choice of the lifts
$g_{j_0 \ldots j_n}$ because the difference will be a coboundary.
+Independence of $\alpha$ holds\footnote{This is an important
+check because the nonuniqueness of $\alpha$ is the only thing preventing
+us from taking the colimit of {\v C}ech complexes over all open
+coverings of $X$ to get a short exact sequence of complexes computing
+{\v C}ech cohomology.}
+because a different choice
+of $\alpha$ determines homotopic vertical maps of complexes
+in the diagram above, see Section \ref{section-refinements-cech}.
+For the other choices we use that given a finite collection
+of coverings of $X$ we can always find a covering refining all
+of them. We also need to check additivity which is shown in the same manner.
+Finally, we need to check that the maps
+$\check{H}^n(X, -) \to H^n(X, -)$ are compatible
+with boundary maps. To do this we choose injective
+resolutions
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{F} \ar[r] \ar[d] &
+\mathcal{G} \ar[r] \ar[d] &
+\mathcal{H} \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\mathcal{I}_1^\bullet \ar[r] &
+\mathcal{I}_2^\bullet \ar[r] &
+\mathcal{I}_3^\bullet \ar[r] &
+0
+}
+$$
+as in Derived Categories, Lemma \ref{derived-lemma-injective-resolution-ses}.
+This will give a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \ar[r] \ar[d] &
\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{G}) \ar[r] \ar[d] &
\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{H}) \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}_1^\bullet))
+\ar[r] &
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}_2^\bullet))
+\ar[r] &
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}_3^\bullet))
+\ar[r] &
+0
+}
+$$
+Here $\mathcal{U}$ is an open covering as above and
+the vertical maps are those used to define the maps
+$\check{H}^n(\mathcal{U}, -) \to H^n(X, -)$, see
+Lemma \ref{lemma-cech-cohomology}.
+The bottom complex is exact as the sequence of
+complexes of injectives is termwise split exact.
+Hence the boundary map in cohomology is computed
+by the usual procedure for this lower exact sequence, see
+Homology, Lemma \ref{homology-lemma-long-exact-sequence-cochain}.
+The same will be true after passing to the refinement
+$\mathcal{V}$ where the boundary map for {\v C}ech cohomology
+was defined. Hence the boundary maps agree because they
+use the same construction (whenever the first one is defined
+on an element in {\v C}ech cohomology on a given covering).
+This finishes our discussion of the construction of
+the structure of a $\delta$-functor on {\v C}ech cohomology
+and why this structure is compatible with the given
+$\delta$-functor structure on usual cohomology.
+
+\medskip\noindent
+Finally, we may apply Lemma \ref{lemma-injective-trivial-cech}
+to see that higher {\v C}ech cohomology is trivial on injective
+sheaves. Hence we see that {\v C}ech cohomology is a universal
+$\delta$-functor by
+Homology, Lemma \ref{homology-lemma-efface-implies-universal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-of-closed}
+\begin{reference}
+\cite[Expose V bis, 4.1.3]{SGA4}
+\end{reference}
+Let $X$ be a topological space. Let $Z \subset X$ be a quasi-compact subset
+such that any two points of $Z$ have disjoint open neighbourhoods in $X$.
+For every abelian sheaf $\mathcal{F}$ on $X$ the canonical
+map
+$$
+\colim H^p(U, \mathcal{F})
+\longrightarrow
+H^p(Z, \mathcal{F}|_Z)
+$$
+where the colimit is over open neighbourhoods $U$ of $Z$ in $X$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+We first prove this for $p = 0$. Injectivity follows from
+the definition of $\mathcal{F}|_Z$ and holds in general
+(for any subset of any topological space $X$). Next, suppose that
+$s \in H^0(Z, \mathcal{F}|_Z)$. Then we can find opens $U_i \subset X$
+such that $Z \subset \bigcup U_i$ and such that $s|_{Z \cap U_i}$
+comes from $s_i \in \mathcal{F}(U_i)$. It follows that
+there exist opens $W_{ij} \subset U_i \cap U_j$ with
+$W_{ij} \cap Z = U_i \cap U_j \cap Z$ such that
+$s_i|_{W_{ij}} = s_j|_{W_{ij}}$. Applying
+Topology, Lemma
+\ref{topology-lemma-lift-covering-of-quasi-compact-hausdorff-subset}
+we find opens $V_i$ of $X$ such that $V_i \subset U_i$ and
+such that $V_i \cap V_j \subset W_{ij}$. Hence we see that
+$s_i|_{V_i}$ glue to a section of $\mathcal{F}$ over the
+open neighbourhood $\bigcup V_i$ of $Z$.
+
+\medskip\noindent
+To finish the proof, it suffices to show that if $\mathcal{I}$ is an
+injective abelian sheaf on $X$, then $H^p(Z, \mathcal{I}|_Z) = 0$
+for $p > 0$. This follows using short exact sequences and dimension
+shifting; details omitted. Thus, suppose $\overline{\xi}$ is an element
+of $H^p(Z, \mathcal{I}|_Z)$ for some $p > 0$.
+By Lemma \ref{lemma-cech-Hausdorff-quasi-compact}
+the element $\overline{\xi}$ comes from
+$\check{H}^p(\mathcal{V}, \mathcal{I}|_Z)$
+for some open covering $\mathcal{V} : Z = \bigcup V_i$ of $Z$.
+Say $\overline{\xi}$ is the image of the class of a cocycle
+$\xi = (\xi_{i_0 \ldots i_p})$ in
+$\check{\mathcal{C}}^p(\mathcal{V}, \mathcal{I}|_Z)$.
+
+\medskip\noindent
+Let $\mathcal{I}' \subset \mathcal{I}|_Z$ be the subpresheaf
+defined by the rule
+$$
+\mathcal{I}'(V) =
+\{s \in \mathcal{I}|_Z(V) \mid
+\exists (U, t),\ U \subset X\text{ open},
+\ t \in \mathcal{I}(U),\ V = Z \cap U,\ s = t|_{Z \cap U} \}
+$$
+Then $\mathcal{I}|_Z$ is the sheafification of $\mathcal{I}'$.
+Thus for every $(p + 1)$-tuple $i_0 \ldots i_p$ we can find an
+open covering $V_{i_0 \ldots i_p} = \bigcup W_{i_0 \ldots i_p, k}$
+such that $\xi_{i_0 \ldots i_p}|_{W_{i_0 \ldots i_p, k}}$ is
+a section of $\mathcal{I}'$. Applying
+Topology, Lemma \ref{topology-lemma-refine-covering}
+we may after refining $\mathcal{V}$ assume that each
+$\xi_{i_0 \ldots i_p}$ is a section of the presheaf $\mathcal{I}'$.
+
+\medskip\noindent
+Write $V_i = Z \cap U_i$ for some opens $U_i \subset X$.
+Since $\mathcal{I}$ is flasque (Lemma \ref{lemma-injective-flasque})
+and since $\xi_{i_0 \ldots i_p}$ is a section of $\mathcal{I}'$
+for every $(p + 1)$-tuple $i_0 \ldots i_p$ we can choose
+a section $s_{i_0 \ldots i_p} \in \mathcal{I}(U_{i_0 \ldots i_p})$
+which restricts to $\xi_{i_0 \ldots i_p}$ on
+$V_{i_0 \ldots i_p} = Z \cap U_{i_0 \ldots i_p}$.
+(This appeal to injectives being flasque can be avoided by an
+additional application of
+Topology, Lemma
+\ref{topology-lemma-lift-covering-of-quasi-compact-hausdorff-subset}.)
+Let $s = (s_{i_0 \ldots i_p})$ be the corresponding cochain
+for the open covering $U = \bigcup U_i$.
+Since $\text{d}(\xi) = 0$ we see that the sections
+$\text{d}(s)_{i_0 \ldots i_{p + 1}}$ restrict to zero
+on $Z \cap U_{i_0 \ldots i_{p + 1}}$. Hence, by the initial
remarks of the proof, there exist open subsets
+$W_{i_0 \ldots i_{p + 1}} \subset U_{i_0 \ldots i_{p + 1}}$
+with $Z \cap W_{i_0 \ldots i_{p + 1}} = Z \cap U_{i_0 \ldots i_{p + 1}}$
+such that $\text{d}(s)_{i_0 \ldots i_{p + 1}}|_{W_{i_0 \ldots i_{p + 1}}} = 0$.
+By Topology, Lemma
+\ref{topology-lemma-lift-covering-of-quasi-compact-hausdorff-subset}
+we can find $U'_i \subset U_i$ such that $Z \subset \bigcup U'_i$
+and such that $U'_{i_0 \ldots i_{p + 1}} \subset W_{i_0 \ldots i_{p + 1}}$.
+Then $s' = (s'_{i_0 \ldots i_p})$ with
+$s'_{i_0 \ldots i_p} = s_{i_0 \ldots i_p}|_{U'_{i_0 \ldots i_p}}$
+is a cocycle for $\mathcal{I}$ for the open covering
+$U' = \bigcup U'_i$ of an open neighbourhood of $Z$.
+Since $\mathcal{I}$ has trivial higher {\v C}ech cohomology groups
+(Lemma \ref{lemma-injective-trivial-cech})
+we conclude that $s'$ is a coboundary. It follows that the image of
+$\xi$ in the {\v C}ech complex for the open covering
+$Z = \bigcup Z \cap U'_i$ is a coboundary and we are done.
+\end{proof}
+
+
+
+
+
+
+\section{The base change map}
+\label{section-base-change-map}
+
+\noindent
+We will need to know how to construct the base change map in some cases.
+Since we have not yet discussed derived pullback we only discuss
+this in the case of a base change by a flat morphism of ringed spaces.
+Before we state the result, let us discuss flat pullback on the derived
+category. Namely, suppose that $g : X \to Y$ is a flat morphism of
+ringed spaces. By Modules, Lemma \ref{modules-lemma-pullback-flat}
+the functor $g^* : \textit{Mod}(\mathcal{O}_Y) \to
+\textit{Mod}(\mathcal{O}_X)$ is exact. Hence it has a derived functor
+$$
+g^* : D^{+}(Y) \to D^{+}(X)
+$$
which is computed by simply pulling back a representative of a given
+object in $D^{+}(Y)$, see
+Derived Categories, Lemma \ref{derived-lemma-right-derived-exact-functor}.
Hence, as indicated, we denote this functor by $g^*$ rather than
$Lg^*$.
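For example, since $g^*$ is exact it commutes with taking cohomology
sheaves: if $K$ in $D^{+}(Y)$ is represented by the complex
$\mathcal{F}^\bullet$, then
$$
H^n(g^*K) = H^n(g^*\mathcal{F}^\bullet) =
g^*H^n(\mathcal{F}^\bullet) = g^*H^n(K)
$$
for every $n$.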
+
+\begin{lemma}
+\label{lemma-base-change-map-flat-case}
+Let
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} &
+X \ar[d]^f \\
+S' \ar[r]^g &
+S
+}
+$$
+be a commutative diagram of ringed spaces.
+Let $\mathcal{F}^\bullet$ be a bounded below complex of
+$\mathcal{O}_X$-modules.
+Assume both $g$ and $g'$ are flat.
+Then there exists a canonical base change map
+$$
+g^*Rf_*\mathcal{F}^\bullet
+\longrightarrow
+R(f')_*(g')^*\mathcal{F}^\bullet
+$$
+in $D^{+}(S')$.
+\end{lemma}
+
+\begin{proof}
+Choose injective resolutions $\mathcal{F}^\bullet \to \mathcal{I}^\bullet$
+and $(g')^*\mathcal{F}^\bullet \to \mathcal{J}^\bullet$.
+By Lemma \ref{lemma-pushforward-injective-flat} we see that
+$(g')_*\mathcal{J}^\bullet$ is a complex of injectives representing
+$R(g')_*(g')^*\mathcal{F}^\bullet$. Hence by
+Derived Categories, Lemmas \ref{derived-lemma-morphisms-lift}
+and \ref{derived-lemma-morphisms-equal-up-to-homotopy}
+the arrow $\beta$ in the diagram
+$$
+\xymatrix{
+(g')_*(g')^*\mathcal{F}^\bullet \ar[r] &
+(g')_*\mathcal{J}^\bullet \\
+\mathcal{F}^\bullet \ar[u]^{adjunction} \ar[r] &
+\mathcal{I}^\bullet \ar[u]_\beta
+}
+$$
+exists and is unique up to homotopy.
+Pushing down to $S$ we get
+$$
+f_*\beta :
+f_*\mathcal{I}^\bullet
+\longrightarrow
+f_*(g')_*\mathcal{J}^\bullet
+=
+g_*(f')_*\mathcal{J}^\bullet
+$$
+By adjunction of $g^*$ and $g_*$ we get a map of complexes
+$g^*f_*\mathcal{I}^\bullet \to (f')_*\mathcal{J}^\bullet$.
+Note that this map is unique up to homotopy since the only
+choice in the whole process was the choice of the map $\beta$
+and everything was done on the level of complexes.
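\medskip\noindent
Concretely (a restatement of the last step, not an extra one), the map
just constructed is the image of $f_*\beta$ under the adjunction bijection
$$
\Hom(f_*\mathcal{I}^\bullet, g_*(f')_*\mathcal{J}^\bullet)
=
\Hom(g^*f_*\mathcal{I}^\bullet, (f')_*\mathcal{J}^\bullet)
$$
for the adjoint pair $(g^*, g_*)$; exactness of $g^*$ guarantees that
$g^*f_*\mathcal{I}^\bullet$ computes $g^*Rf_*\mathcal{F}^\bullet$.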
+\end{proof}
+
+\begin{remark}
+\label{remark-correct-version-base-change-map}
+The ``correct'' version of the base change map is the map
+$$
+Lg^* Rf_* \mathcal{F}^\bullet
+\longrightarrow
+R(f')_* L(g')^*\mathcal{F}^\bullet.
+$$
+The construction of this map involves
+unbounded complexes, see Remark \ref{remark-base-change}.
+\end{remark}
+
+
+
+
+
+\section{Proper base change in topology}
+\label{section-proper-base-change}
+
+\noindent
+In this section we prove a very general version of the proper base change
+theorem in topology. It tells us that the stalks of the higher direct
+images $R^pf_*$ can be computed on the fibre.
+
+\begin{lemma}
+\label{lemma-proper-base-change}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of
+ringed spaces. Let $y \in Y$. Assume that
+\begin{enumerate}
+\item $f$ is closed,
+\item $f$ is separated, and
+\item $f^{-1}(y)$ is quasi-compact.
+\end{enumerate}
+Then for $E$ in $D^+(\mathcal{O}_X)$
+we have $(Rf_*E)_y = R\Gamma(f^{-1}(y), E|_{f^{-1}(y)})$ in
+$D^+(\mathcal{O}_{Y, y})$.
+\end{lemma}
+
+\begin{proof}
+The base change map of Lemma \ref{lemma-base-change-map-flat-case}
+gives a canonical map $(Rf_*E)_y \to R\Gamma(f^{-1}(y), E|_{f^{-1}(y)})$.
+To prove this map is an isomorphism, we represent $E$ by a bounded
+below complex of injectives $\mathcal{I}^\bullet$.
+Set $Z = f^{-1}(\{y\})$. The assumptions of
+Lemma \ref{lemma-cohomology-of-closed}
+are satisfied, see Topology, Lemma \ref{topology-lemma-separated}.
+Hence the restrictions
+$\mathcal{I}^n|_Z$ are acyclic for $\Gamma(Z, -)$.
+Thus $R\Gamma(Z, E|_Z)$ is represented by the
+complex $\Gamma(Z, \mathcal{I}^\bullet|_Z)$, see
+Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity}.
+In other words, we have to show the map
+$$
+\colim_V \mathcal{I}^\bullet(f^{-1}(V))
+\longrightarrow
+\Gamma(Z, \mathcal{I}^\bullet|_Z)
+$$
+is an isomorphism. Using Lemma \ref{lemma-cohomology-of-closed}
+we see that it suffices to show that the collection of open neighbourhoods
+$f^{-1}(V)$ of $Z = f^{-1}(\{y\})$
+is cofinal in the system of all open neighbourhoods.
+If $f^{-1}(\{y\}) \subset U$ is an open neighbourhood, then as $f$ is closed
+the set $V = Y \setminus f(X \setminus U)$ is an open neighbourhood
+of $y$ with $f^{-1}(V) \subset U$. This proves the lemma.
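\medskip\noindent
In more detail, $V$ is open because $f$ is closed, $y \in V$ because
$f^{-1}(\{y\}) \subset U$ implies $y \notin f(X \setminus U)$, and
$$
f^{-1}(V) = X \setminus f^{-1}\big(f(X \setminus U)\big)
\subset X \setminus (X \setminus U) = U.
$$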
+\end{proof}
+
+\begin{theorem}[Proper base change]
+\label{theorem-proper-base-change}
+\begin{reference}
+\cite[Expose V bis, 4.1.1]{SGA4}
+\end{reference}
+Consider a cartesian square of topological spaces
+$$
+\xymatrix{
+X' = Y' \times_Y X \ar[d]_{f'} \ar[r]_-{g'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+Assume that $f$ is proper.
+Let $E$ be an object of $D^+(X)$. Then the base change map
+$$
+g^{-1}Rf_*E \longrightarrow Rf'_*(g')^{-1}E
+$$
+of Lemma \ref{lemma-base-change-map-flat-case} is an isomorphism
+in $D^+(Y')$.
+\end{theorem}
+
+\begin{proof}
+Let $y' \in Y'$ be a point with image $y \in Y$. It suffices to show that
+the base change map induces an isomorphism on stalks at $y'$.
+As $f$ is proper it follows that $f'$ is proper, the
+fibres of $f$ and $f'$ are quasi-compact and $f$ and $f'$ are closed, see
+Topology, Theorem \ref{topology-theorem-characterize-proper} and
+Lemma \ref{topology-lemma-base-change-separated}.
+Thus we can apply Lemma \ref{lemma-proper-base-change} twice to see that
+$$
+(Rf'_*(g')^{-1}E)_{y'} = R\Gamma((f')^{-1}(y'), (g')^{-1}E|_{(f')^{-1}(y')})
+$$
+and
+$$
+(Rf_*E)_y = R\Gamma(f^{-1}(y), E|_{f^{-1}(y)})
+$$
+The induced map of fibres $(f')^{-1}(y') \to f^{-1}(y)$ is
+a homeomorphism of topological spaces and the pull back of
+$E|_{f^{-1}(y)}$ is $(g')^{-1}E|_{(f')^{-1}(y')}$. The
+desired result follows.
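\medskip\noindent
Spelled out, the conclusion is the chain of identifications
$$
(g^{-1}Rf'_*(g')^{-1}E)_{y'} \text{ aside, }\quad
(Rf'_*(g')^{-1}E)_{y'}
= R\Gamma((f')^{-1}(y'), (g')^{-1}E|_{(f')^{-1}(y')})
\cong R\Gamma(f^{-1}(y), E|_{f^{-1}(y)})
= (Rf_*E)_y
= (g^{-1}Rf_*E)_{y'}
$$
where the middle isomorphism comes from the homeomorphism of fibres.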
+\end{proof}
+
+\begin{lemma}[Proper base change for sheaves of sets]
+\label{lemma-proper-base-change-sheaves-of-sets}
+Consider a cartesian square of topological spaces
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_-{g'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+Assume that $f$ is proper. Then
+$g^{-1}f_*\mathcal{F} = f'_*(g')^{-1}\mathcal{F}$
+for any sheaf of sets $\mathcal{F}$ on $X$.
+\end{lemma}
+
+\begin{proof}
+We argue exactly as in the proof of Theorem \ref{theorem-proper-base-change}
+and we find it suffices to show
+$(f_*\mathcal{F})_y = \Gamma(X_y, \mathcal{F}|_{X_y})$.
+Then we argue as in Lemma \ref{lemma-proper-base-change}
+to reduce this to the $p = 0$ case of Lemma \ref{lemma-cohomology-of-closed}
+for sheaves of sets. The first part of the proof of
+Lemma \ref{lemma-cohomology-of-closed}
+works for sheaves of sets and this finishes the proof.
+Some details omitted.
+\end{proof}
+
+
+
+\section{Cohomology and colimits}
+\label{section-limits}
+
+\noindent
+Let $X$ be a ringed space. Let $(\mathcal{F}_i, \varphi_{ii'})$ be
+a system of sheaves of $\mathcal{O}_X$-modules over the directed set $I$, see
+Categories, Section \ref{categories-section-posets-limits}.
+Since for each $i$ there is a canonical map
+$\mathcal{F}_i \to \colim_i \mathcal{F}_i$ we get a
+canonical map
+$$
+\colim_i H^p(X, \mathcal{F}_i)
+\longrightarrow
+H^p(X, \colim_i \mathcal{F}_i)
+$$
+for every $p \geq 0$. Of course there is a similar map for
+every open $U \subset X$. These maps are in general not isomorphisms,
+even for $p = 0$. In this section we generalize the results of
+Sheaves, Lemma \ref{sheaves-lemma-directed-colimits-sections}.
+See also
+Modules, Lemma \ref{modules-lemma-finite-presentation-quasi-compact-colimit}
+(in the special case $\mathcal{G} = \mathcal{O}_X$).
+
+\begin{lemma}
+\label{lemma-quasi-separated-cohomology-colimit}
+Let $X$ be a ringed space. Assume that the underlying topological space
+of $X$ has the following properties:
+\begin{enumerate}
+\item there exists a basis of quasi-compact open subsets, and
+\item the intersection of any two quasi-compact opens is quasi-compact.
+\end{enumerate}
+Then for any directed system $(\mathcal{F}_i, \varphi_{ii'})$
+of sheaves of $\mathcal{O}_X$-modules and for any quasi-compact open
+$U \subset X$ the canonical map
+$$
+\colim_i H^q(U, \mathcal{F}_i)
+\longrightarrow
+H^q(U, \colim_i \mathcal{F}_i)
+$$
+is an isomorphism for every $q \geq 0$.
+\end{lemma}
+
+\begin{proof}
+It is important in this proof to argue for all quasi-compact opens
+$U \subset X$ at the same time.
+The result is true for $q = 0$ and any quasi-compact open $U \subset X$ by
+Sheaves, Lemma \ref{sheaves-lemma-directed-colimits-sections}
+(combined with
+Topology, Lemma \ref{topology-lemma-topology-quasi-separated-scheme}).
+Assume that we have proved the result for all $q \leq q_0$ and let
+us prove the result for $q = q_0 + 1$.
+
+\medskip\noindent
+By our conventions on directed systems the index set $I$ is directed,
+and any system of $\mathcal{O}_X$-modules $(\mathcal{F}_i, \varphi_{ii'})$
+over $I$ is directed.
+By Injectives, Lemma \ref{injectives-lemma-sheaves-modules-space} the category
+of $\mathcal{O}_X$-modules has functorial injective embeddings.
+Thus for any system $(\mathcal{F}_i, \varphi_{ii'})$ there exists a
+system $(\mathcal{I}_i, \varphi_{ii'})$ with each $\mathcal{I}_i$ an
+injective $\mathcal{O}_X$-module and a morphism of systems given
+by injective $\mathcal{O}_X$-module maps
+$\mathcal{F}_i \to \mathcal{I}_i$. Denote $\mathcal{Q}_i$ the
+cokernel so that we have short exact sequences
+$$
+0 \to
+\mathcal{F}_i \to
+\mathcal{I}_i \to
+\mathcal{Q}_i \to 0.
+$$
+We claim that the sequence
+$$
+0 \to
+\colim_i \mathcal{F}_i \to
+\colim_i \mathcal{I}_i \to
\colim_i \mathcal{Q}_i \to 0
+$$
+is also a short exact sequence of $\mathcal{O}_X$-modules.
+We may check this on stalks. By
+Sheaves, Sections \ref{sheaves-section-limits-presheaves}
+and \ref{sheaves-section-limits-sheaves}
+taking stalks commutes with colimits. Since a directed colimit
+of short exact sequences of abelian groups is short exact
+(see Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact})
+we deduce the result. We claim that
+$H^q(U, \colim_i \mathcal{I}_i) = 0$ for all quasi-compact
+open $U \subset X$ and all $q \geq 1$. Accepting this claim
+for the moment consider the diagram
+$$
+\xymatrix{
+\colim_i H^{q_0}(U, \mathcal{I}_i) \ar[d] \ar[r] &
+\colim_i H^{q_0}(U, \mathcal{Q}_i) \ar[d] \ar[r] &
+\colim_i H^{q_0 + 1}(U, \mathcal{F}_i) \ar[d] \ar[r] &
+0 \ar[d] \\
+H^{q_0}(U, \colim_i \mathcal{I}_i) \ar[r] &
+H^{q_0}(U, \colim_i \mathcal{Q}_i) \ar[r] &
+H^{q_0 + 1}(U, \colim_i \mathcal{F}_i) \ar[r] &
+0
+}
+$$
+The zero at the lower right corner comes from the claim and the
+zero at the upper right corner comes from the fact that the sheaves
+$\mathcal{I}_i$ are injective.
+The top row is exact by an application of
+Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}.
+Hence by the snake lemma we deduce the
+result for $q = q_0 + 1$.
+
+\medskip\noindent
+It remains to show that the claim is true. We will use
+Lemma \ref{lemma-cech-vanish-basis}.
+Let $\mathcal{B}$ be the collection of all quasi-compact open
+subsets of $X$. This is a basis for the topology on $X$ by assumption.
+Let $\text{Cov}$ be the collection of finite open coverings
+$\mathcal{U} : U = \bigcup_{j = 1, \ldots, m} U_j$ with each
+of $U$, $U_j$ quasi-compact open in $X$. By the result for $q = 0$
+we see that for $\mathcal{U} \in \text{Cov}$ we have
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \colim_i \mathcal{I}_i)
+=
+\colim_i \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}_i)
+$$
+because all the multiple intersections $U_{j_0 \ldots j_p}$
+are quasi-compact. By Lemma \ref{lemma-injective-trivial-cech}
+each of the complexes in the colimit of {\v C}ech complexes is
+acyclic in degree $\geq 1$. Hence by
+Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}
+we see that also the {\v C}ech complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \colim_i \mathcal{I}_i)$
+is acyclic in degrees $\geq 1$. In other words we see that
+$\check{H}^p(\mathcal{U}, \colim_i \mathcal{I}_i) = 0$
+for all $p \geq 1$. Thus the assumptions of
+Lemma \ref{lemma-cech-vanish-basis} are satisfied and the claim follows.
+\end{proof}
+
+\noindent
Next we formulate the analogue of
+Sheaves, Lemma \ref{sheaves-lemma-descend-opens}
+for cohomology.
+Let $X$ be a spectral space which is written as a cofiltered limit
+of spectral spaces $X_i$ for a diagram with spectral transition morphisms
+as in
+Topology, Lemma \ref{topology-lemma-directed-inverse-limit-spectral-spaces}.
+Assume given
+\begin{enumerate}
+\item an abelian sheaf $\mathcal{F}_i$ on $X_i$ for all
+$i \in \Ob(\mathcal{I})$,
+\item for $a : j \to i$ an $f_a$-map
+$\varphi_a : \mathcal{F}_i \to \mathcal{F}_j$ of abelian sheaves (see
+Sheaves, Definition \ref{sheaves-definition-f-map})
+\end{enumerate}
+such that $\varphi_c = \varphi_b \circ \varphi_a$
+whenever $c = a \circ b$. Set $\mathcal{F} = \colim p_i^{-1}\mathcal{F}_i$
+on $X$.
+
+\begin{lemma}
+\label{lemma-colimit}
+In the situation discussed above.
+Let $i \in \Ob(\mathcal{I})$ and let $U_i \subset X_i$ be quasi-compact open.
+Then
+$$
+\colim_{a : j \to i} H^p(f_a^{-1}(U_i), \mathcal{F}_j) =
+H^p(p_i^{-1}(U_i), \mathcal{F})
+$$
+for all $p \geq 0$. In particular we have
+$H^p(X, \mathcal{F}) = \colim H^p(X_i, \mathcal{F}_i)$.
+\end{lemma}
+
+\begin{proof}
+The case $p = 0$ is Sheaves, Lemma \ref{sheaves-lemma-descend-opens}.
+
+\medskip\noindent
+In this paragraph we show that we can find a map of systems
+$(\gamma_i) : (\mathcal{F}_i, \varphi_a) \to (\mathcal{G}_i, \psi_a)$
+with $\mathcal{G}_i$ an injective abelian sheaf and $\gamma_i$ injective.
+For each $i$ we pick an injection $\mathcal{F}_i \to \mathcal{I}_i$
+where $\mathcal{I}_i$ is an injective abelian sheaf on $X_i$.
+Then we can consider the family of maps
+$$
+\gamma_i :
+\mathcal{F}_i
+\longrightarrow
+\prod\nolimits_{b : k \to i} f_{b, *}\mathcal{I}_k = \mathcal{G}_i
+$$
+where the component maps are the maps adjoint to the maps
+$f_b^{-1}\mathcal{F}_i \to \mathcal{F}_k \to \mathcal{I}_k$.
+For $a : j \to i$ in $\mathcal{I}$ there is a canonical map
+$$
+\psi_a : f_a^{-1}\mathcal{G}_i \to \mathcal{G}_j
+$$
+whose components are the canonical maps
+$f_b^{-1}f_{a \circ b, *}\mathcal{I}_k \to f_{b, *}\mathcal{I}_k$
+for $b : k \to j$. Thus we find an injection
$(\gamma_i) : (\mathcal{F}_i, \varphi_a) \to (\mathcal{G}_i, \psi_a)$
+of systems of abelian sheaves. Note that $\mathcal{G}_i$ is an injective
+sheaf of abelian groups on $X_i$, see
+Lemma \ref{lemma-pushforward-injective-flat} and
+Homology, Lemma \ref{homology-lemma-product-injectives}.
+This finishes the construction.
+
+\medskip\noindent
+Arguing exactly as in the proof of
+Lemma \ref{lemma-quasi-separated-cohomology-colimit}
+we see that it suffices to prove that
$H^p(X, \colim p_i^{-1}\mathcal{G}_i) = 0$ for $p > 0$.

\medskip\noindent
Set $\mathcal{G} = \colim p_i^{-1}\mathcal{G}_i$.
+To show vanishing of cohomology of $\mathcal{G}$ on every quasi-compact
+open of $X$, it suffices to show that the {\v C}ech cohomology of
+$\mathcal{G}$ for any covering $\mathcal{U}$ of a quasi-compact open of
+$X$ by finitely many quasi-compact opens is zero, see
+Lemma \ref{lemma-cech-vanish-basis}.
Such a covering is the inverse image by $p_i$ of such a covering $\mathcal{U}_i$
+on the space $X_i$ for some $i$ by
+Topology, Lemma \ref{topology-lemma-descend-opens}. We have
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{G}) =
+\colim_{a : j \to i}
+\check{\mathcal{C}}^\bullet(f_a^{-1}(\mathcal{U}_i), \mathcal{G}_j)
+$$
+by the case $p = 0$. The right hand side is a filtered colimit of
+complexes each of which is acyclic in positive degrees by
+Lemma \ref{lemma-injective-trivial-cech}. Thus we conclude by
+Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Vanishing on Noetherian topological spaces}
+\label{section-vanishing-Noetherian}
+
+\noindent
+The aim is to prove a theorem of Grothendieck namely
+Proposition \ref{proposition-vanishing-Noetherian}. See \cite{Tohoku}.
+
+\begin{lemma}
+\label{lemma-cohomology-and-closed-immersions}
+Let $i : Z \to X$ be a closed immersion of topological spaces.
+For any abelian sheaf $\mathcal{F}$ on $Z$ we have
+$H^p(Z, \mathcal{F}) = H^p(X, i_*\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+This is true because $i_*$ is exact (see
+Modules, Lemma \ref{modules-lemma-i-star-exact}),
+and hence $R^pi_* = 0$ as a functor
+(Derived Categories, Lemma \ref{derived-lemma-right-derived-exact-functor}).
+Thus we may apply Lemma \ref{lemma-apply-Leray}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-irreducible-constant-cohomology-zero}
+Let $X$ be an irreducible topological space.
+Then $H^p(X, \underline{A}) = 0$ for all $p > 0$
+and any abelian group $A$.
+\end{lemma}
+
+\begin{proof}
+Recall that $\underline{A}$ is the constant sheaf as defined
+in Sheaves, Definition \ref{sheaves-definition-constant-sheaf}.
+It is clear that for any nonempty
+open $U \subset X$ we have $\underline{A}(U) = A$ as $X$ is
+irreducible (and hence $U$ is connected).
+We will show that the higher {\v C}ech cohomology groups
+$\check{H}^p(\mathcal{U}, \underline{A})$ are zero for
+any open covering $\mathcal{U} : U = \bigcup_{i\in I} U_i$
+of an open $U \subset X$. Then the lemma will follow
+from Lemma \ref{lemma-cech-vanish}.
+
+\medskip\noindent
+Recall that the value of an abelian
+sheaf on the empty open set is $0$. Hence we may clearly assume
+$U_i \not = \emptyset$ for all $i \in I$. In this case we see
+that $U_i \cap U_{i'} \not = \emptyset$ for all $i, i' \in I$.
+Hence we see that the {\v C}ech complex is simply the complex
+$$
+\prod_{i_0 \in I} A \to
+\prod_{(i_0, i_1) \in I^2} A \to
+\prod_{(i_0, i_1, i_2) \in I^3} A \to
+\ldots
+$$
+We have to see this has trivial higher cohomology groups.
+We can see this for example because this is the {\v C}ech complex for the
+covering of a $1$-point space and {\v C}ech cohomology agrees with cohomology
+on such a space. (You can also directly verify it
+by writing an explicit homotopy.)
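For completeness, here is one such homotopy; any fixed index $j \in I$
will do. Define
$$
h :
\prod\nolimits_{(i_0, \ldots, i_p) \in I^{p + 1}} A
\longrightarrow
\prod\nolimits_{(i_0, \ldots, i_{p - 1}) \in I^p} A,
\quad
h(s)_{i_0 \ldots i_{p - 1}} = s_{j i_0 \ldots i_{p - 1}}.
$$
Expanding the alternating sums defining $d$ one finds
$d(h(s)) + h(d(s)) = s$ for $s$ in degree $p \geq 1$, so the identity
map of the complex is homotopic to zero in positive degrees and the
higher cohomology groups vanish.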
+\end{proof}
+
+\begin{lemma}
+\label{lemma-subsheaf-of-constant-sheaf}
+\begin{reference}
+\cite[Page 168]{Tohoku}.
+\end{reference}
+Let $X$ be a topological space such that the intersection of any
+two quasi-compact opens is quasi-compact. Let
+$\mathcal{F} \subset \underline{\mathbf{Z}}$
+be a subsheaf generated by finitely many sections over quasi-compact opens.
+Then there exists a finite filtration
+$$
+(0) = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \ldots \subset
+\mathcal{F}_n = \mathcal{F}
+$$
+by abelian subsheaves such that for each $0 < i \leq n$
+there exists a short exact sequence
+$$
+0 \to j'_!\underline{\mathbf{Z}}_V \to j_!\underline{\mathbf{Z}}_U \to
+\mathcal{F}_i/\mathcal{F}_{i - 1} \to 0
+$$
+with $j : U \to X$ and $j' : V \to X$ the inclusion of quasi-compact opens
+into $X$.
+\end{lemma}
+
+\begin{proof}
+Say $\mathcal{F}$ is generated by the sections $s_1, \ldots, s_t$ over the
+quasi-compact opens $U_1, \ldots, U_t$. Since $U_i$ is quasi-compact and
+$s_i$ a locally constant function to $\mathbf{Z}$ we may assume, after
+possibly replacing $U_i$ by the parts of a finite decomposition into open
+and closed subsets, that $s_i$ is a constant section.
+Say $s_i = n_i$ with $n_i \in \mathbf{Z}$. Of course we can remove
+$(U_i, n_i)$ from the list if $n_i = 0$. Flipping signs if necessary
+we may also assume $n_i > 0$. Next, for any subset $I \subset \{1, \ldots, t\}$
+we may add $\bigcup_{i \in I} U_i$ and $\gcd(n_i, i \in I)$ to the list.
+After doing this we see that our list $(U_1, n_1), \ldots, (U_t, n_t)$
+satisfies the following property:
+For $x \in X$ set $I_x = \{i \in \{1, \ldots, t\} \mid x \in U_i\}$.
+Then $\gcd(n_i, i \in I_x)$ is attained by $n_i$ for some $i \in I_x$.
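
\medskip\noindent
For instance, if the list were $(U_1, 4)$ and $(U_2, 6)$ with
$U_1 \cap U_2 \not = \emptyset$, then for $x \in U_1 \cap U_2$ we would have
$$
\gcd(n_i, i \in I_x) = \gcd(4, 6) = 2
$$
which is attained by neither $n_1$ nor $n_2$. Adding
$(U_1 \cup U_2, 2)$ to the list repairs this, and one checks that the
enlarged list satisfies the property at every point.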
+
+\medskip\noindent
+As our filtration we take $\mathcal{F}_0 = (0)$ and
+$\mathcal{F}_n$ generated by the sections $n_i$ over $U_i$ for those
+$i$ such that $n_i \leq n$. It is clear that
+$\mathcal{F}_n = \mathcal{F}$ for $n \gg 0$. Moreover, the quotient
+$\mathcal{F}_n/\mathcal{F}_{n - 1}$ is generated by the section
+$n$ over $U = \bigcup_{n_i \leq n} U_i$ and the kernel of the map
+$j_!\underline{\mathbf{Z}}_U \to \mathcal{F}_n/\mathcal{F}_{n - 1}$
+is generated by the section $n$ over $V = \bigcup_{n_i \leq n - 1} U_i$.
Thus we obtain a short exact sequence as in the statement of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-generated-one-section}
+\begin{reference}
+This is a special case of \cite[Proposition 3.6.1]{Tohoku}.
+\end{reference}
+Let $X$ be a topological space. Let $d \geq 0$ be an integer. Assume
+\begin{enumerate}
+\item $X$ is quasi-compact,
\item the quasi-compact opens form a basis for $X$,
\item the intersection of two quasi-compact opens is quasi-compact, and
\item $H^p(X, j_!\underline{\mathbf{Z}}_U) = 0$ for all $p > d$
+and any quasi-compact open $j : U \to X$.
+\end{enumerate}
+Then $H^p(X, \mathcal{F}) = 0$ for all $p > d$
+and any abelian sheaf $\mathcal{F}$ on $X$.
+\end{lemma}
+
+\begin{proof}
+Let $S = \coprod_{U \subset X} \mathcal{F}(U)$ where $U$ runs over the
+quasi-compact opens of $X$.
+For any finite subset $A = \{s_1, \ldots, s_n\} \subset S$,
+let $\mathcal{F}_A$ be the subsheaf of $\mathcal{F}$ generated
+by all $s_i$ (see
+Modules, Definition \ref{modules-definition-generated-by-local-sections}).
+Note that if $A \subset A'$, then $\mathcal{F}_A \subset \mathcal{F}_{A'}$.
+Hence $\{\mathcal{F}_A\}$ forms a system over the
+directed partially ordered set of finite subsets of $S$.
+By Modules, Lemma \ref{modules-lemma-generated-by-local-sections-stalk}
+it is clear that
+$$
+\colim_A \mathcal{F}_A = \mathcal{F}
+$$
+by looking at stalks. By
+Lemma \ref{lemma-quasi-separated-cohomology-colimit} we have
+$$
+H^p(X, \mathcal{F}) =
+\colim_A H^p(X, \mathcal{F}_A)
+$$
+Hence it suffices to prove the vanishing for the abelian sheaves
+$\mathcal{F}_A$. In other words, it suffices to prove the
+result when $\mathcal{F}$ is generated by finitely many local sections
+over quasi-compact opens of $X$.
+
+\medskip\noindent
+Suppose that $\mathcal{F}$ is generated by the local sections
+$s_1, \ldots, s_n$. Let $\mathcal{F}' \subset \mathcal{F}$
+be the subsheaf generated by $s_1, \ldots, s_{n - 1}$.
+Then we have a short exact sequence
+$$
+0 \to \mathcal{F}' \to \mathcal{F} \to \mathcal{F}/\mathcal{F}' \to 0
+$$
+From the long exact sequence of cohomology we see that it suffices
+to prove the vanishing for the abelian sheaves $\mathcal{F}'$
+and $\mathcal{F}/\mathcal{F}'$ which are generated by fewer than
+$n$ local sections. Hence it suffices to prove the vanishing
+for sheaves generated by at most one local section. These sheaves
+are exactly the quotients of the sheaves $j_!\underline{\mathbf{Z}}_U$
+where $U$ is a quasi-compact open of $X$.
+
+\medskip\noindent
+Assume now that we have a short exact sequence
+$$
+0 \to \mathcal{K} \to j_!\underline{\mathbf{Z}}_U \to \mathcal{F} \to 0
+$$
+with $U$ quasi-compact open in $X$.
+It suffices to show that $H^q(X, \mathcal{K})$ is zero for $q \geq d + 1$.
+As above we can write $\mathcal{K}$ as the filtered colimit of
+subsheaves $\mathcal{K}'$ generated by finitely many sections over
+quasi-compact opens. Then $\mathcal{F}$ is the filtered colimit of the
+sheaves $j_!\underline{\mathbf{Z}}_U/\mathcal{K}'$. In this way we
+reduce to the case that $\mathcal{K}$ is generated by finitely many
+sections over quasi-compact opens. Note that $\mathcal{K}$
+is a subsheaf of $\underline{\mathbf{Z}}_X$. Thus by
+Lemma \ref{lemma-subsheaf-of-constant-sheaf} there exists a finite
+filtration of $\mathcal{K}$ whose successive quotients $\mathcal{Q}$ fit
+into a short exact sequence
+$$
+0 \to j''_!\underline{\mathbf{Z}}_W \to
+j'_!\underline{\mathbf{Z}}_V \to \mathcal{Q} \to 0
+$$
+with $j'' : W \to X$ and $j' : V \to X$ the inclusions of quasi-compact opens.
+Hence the vanishing of $H^p(X, \mathcal{Q})$ for $p > d$ follows
+from our assumption (in the lemma) on the vanishing of the cohomology groups
+of $j''_!\underline{\mathbf{Z}}_W$ and $j'_!\underline{\mathbf{Z}}_V$.
+Returning to $\mathcal{K}$ this, via an induction argument using the
+long exact cohomology sequence, implies the desired vanishing for it as well.
+\end{proof}
+
+\begin{example}
+\label{example-datta}
+Let $X = \mathbf{N}$ endowed with the topology whose opens are
+$\emptyset$, $X$, and $U_n = \{i \mid i \leq n\}$ for $n \geq 1$.
+An abelian sheaf $\mathcal{F}$ on $X$ is the same as an inverse
+system of abelian groups $A_n = \mathcal{F}(U_n)$ and
+$\Gamma(X, \mathcal{F}) = \lim A_n$. Since the inverse limit
+functor is not an exact functor on the category of inverse systems,
+we see that there is an abelian sheaf with nonzero $H^1$.
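Here is a concrete instance. The short exact sequence of inverse systems
$$
0 \to (2^n\mathbf{Z}) \to (\mathbf{Z}) \to (\mathbf{Z}/2^n\mathbf{Z}) \to 0
$$
(with transition maps the inclusions, the identities, and the projections)
is exact on stalks, as the stalk at the point $n$ is the value on the
minimal open $U_n$ containing it. Hence it corresponds to a short exact
sequence of abelian sheaves on $X$. Taking global sections, i.e.,
inverse limits, gives an exact sequence
$$
0 \to 0 \to \mathbf{Z} \to \lim \mathbf{Z}/2^n\mathbf{Z}
\to H^1(X, \mathcal{F}) \to \ldots
$$
where $\mathcal{F}$ is the sheaf corresponding to the system
$(2^n\mathbf{Z})$. Since $\mathbf{Z} \to \lim \mathbf{Z}/2^n\mathbf{Z}$
is not surjective we conclude that $H^1(X, \mathcal{F}) \not = 0$.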
Finally, the reader can check that $H^p(X, j_!\underline{\mathbf{Z}}_U) = 0$ for
$p \geq 1$ if $j : U = U_n \to X$ is the inclusion. Thus we see
+that $X$ is an example of a space satisfying conditions (2), (3), and (4) of
+Lemma \ref{lemma-vanishing-generated-one-section} for $d = 0$
+but not the conclusion.
+\end{example}
+
+\begin{lemma}
+\label{lemma-subsheaf-irreducible}
+Let $X$ be an irreducible topological space.
+Let $\mathcal{H} \subset \underline{\mathbf{Z}}$ be
+an abelian subsheaf of the constant sheaf.
+Then there exists a nonempty open $U \subset X$ such
+that $\mathcal{H}|_U = \underline{d\mathbf{Z}}_U$
+for some $d \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Recall that $\underline{\mathbf{Z}}(V) = \mathbf{Z}$
+for any nonempty open $V$ of $X$ (see proof of
+Lemma \ref{lemma-irreducible-constant-cohomology-zero}).
+If $\mathcal{H} = 0$, then the lemma holds with $d = 0$.
+If $\mathcal{H} \not = 0$, then there exists a nonempty open
+$U \subset X$ such that $\mathcal{H}(U) \not = 0$.
+Say $\mathcal{H}(U) = n\mathbf{Z}$ for some $n \geq 1$.
+Hence we see that
+$\underline{n\mathbf{Z}}_U
+\subset \mathcal{H}|_U \subset
+\underline{\mathbf{Z}}_U$. If the first inclusion is strict we
+can find a nonempty $U' \subset U$ and an integer $1 \leq n' < n$
+such that
+$\underline{n'\mathbf{Z}}_{U'}
+\subset \mathcal{H}|_{U'} \subset
+\underline{\mathbf{Z}}_{U'}$.
+This process has to stop after a finite number of steps, and
+hence we get the lemma.
+\end{proof}
+
+\begin{proposition}[Grothendieck]
+\label{proposition-vanishing-Noetherian}
+\begin{reference}
+\cite[Theorem 3.6.5]{Tohoku}.
+\end{reference}
+Let $X$ be a Noetherian topological space.
+If $\dim(X) \leq d$, then $H^p(X, \mathcal{F}) = 0$
+for all $p > d$ and any abelian sheaf $\mathcal{F}$
+on $X$.
+\end{proposition}
+
+\begin{proof}
+We prove this lemma by induction on $d$.
+So fix $d$ and assume the lemma holds for all
+Noetherian topological spaces of dimension $< d$.
+
+\medskip\noindent
+Let $\mathcal{F}$ be an abelian sheaf on $X$.
+Suppose $U \subset X$ is an open. Let $Z \subset X$
+denote the closed complement.
+Denote $j : U \to X$ and $i : Z \to X$ the inclusion maps.
+Then there is a short exact sequence
+$$
+0 \to j_{!}j^*\mathcal{F} \to \mathcal{F} \to i_*i^*\mathcal{F} \to 0
+$$
+see Modules, Lemma \ref{modules-lemma-canonical-exact-sequence}.
+Note that $j_!j^*\mathcal{F}$ is supported on
+the topological closure $Z'$ of $U$, i.e., it is of
+the form $i'_*\mathcal{F}'$ for some abelian sheaf $\mathcal{F}'$
+on $Z'$, where $i' : Z' \to X$ is the inclusion.
+
+\medskip\noindent
+We can use this to reduce to the case where $X$ is irreducible.
+Namely, according to
+Topology, Lemma \ref{topology-lemma-Noetherian}
+$X$ has finitely
+many irreducible components. If $X$ has more than one irreducible
+component, then let $Z \subset X$ be an irreducible component of $X$
+and set $U = X \setminus Z$. By the above, and the long exact sequence
+of cohomology, it suffices to prove the vanishing of
+$H^p(X, i_*i^*\mathcal{F})$ and $H^p(X, i'_*\mathcal{F}')$ for $p > d$.
+By Lemma \ref{lemma-cohomology-and-closed-immersions} it suffices to prove
+$H^p(Z, i^*\mathcal{F})$ and $H^p(Z', \mathcal{F}')$ vanish for $p > d$.
+Since $Z'$ and $Z$ have fewer irreducible components we indeed
+reduce to the case of an irreducible $X$.
+
+\medskip\noindent
+If $d = 0$ and $X$ is irreducible, then $X$ is the only nonempty
+open subset of $X$. Hence every sheaf is constant and higher cohomology
+groups vanish (for example by
+Lemma \ref{lemma-irreducible-constant-cohomology-zero}).
+
+\medskip\noindent
+Suppose $X$ is irreducible of dimension $d > 0$.
+By Lemma \ref{lemma-vanishing-generated-one-section}
+we reduce to the case where
+$\mathcal{F} = j_!\underline{\mathbf{Z}}_U$ for some open $U \subset X$.
+In this case we look at the short exact sequence
+$$
+0 \to j_!(\underline{\mathbf{Z}}_U) \to
+\underline{\mathbf{Z}}_X \to i_*\underline{\mathbf{Z}}_Z \to 0
+$$
+where $Z = X \setminus U$.
+By Lemma \ref{lemma-irreducible-constant-cohomology-zero}
+we have the vanishing of $H^p(X, \underline{\mathbf{Z}}_X)$
+for all $p \geq 1$. By induction we have
+$H^p(X, i_*\underline{\mathbf{Z}}_Z) = H^p(Z, \underline{\mathbf{Z}}_Z) = 0$
+for $p \geq d$. Hence we win by the long exact cohomology sequence.
+\end{proof}
+
+
+
+
+
+\section{Cohomology with support in a closed}
+\label{section-cohomology-support}
+
+\noindent
+This section just discusses the bare minimum -- the discussion
+will be continued in Section \ref{section-cohomology-support-bis}.
+
+\medskip\noindent
+Let $X$ be a topological space and let $Z \subset X$ be a closed subset.
+Let $\mathcal{F}$ be an abelian sheaf on $X$. We let
+$$
+\Gamma_Z(X, \mathcal{F}) =
+\{s \in \mathcal{F}(X) \mid \text{Supp}(s) \subset Z\}
+$$
+be the subset of sections whose support is contained in $Z$.
+The support of a section is defined in
+Modules, Definition \ref{modules-definition-support}.
+Modules, Lemma \ref{modules-lemma-support-section-closed}
+implies that $\Gamma_Z(X, \mathcal{F})$ is a subgroup of
+$\Gamma(X, \mathcal{F})$. The same lemma guarantees that
+the assignment $\mathcal{F} \mapsto \Gamma_Z(X, \mathcal{F})$
+is a functor in $\mathcal{F}$.
+This functor is left exact but not exact in general.
+
+\medskip\noindent
+Since the category of abelian sheaves has enough injectives
+(Injectives, Lemma \ref{injectives-lemma-abelian-sheaves-space})
we obtain a right derived functor
+$$
+R\Gamma_Z(X, -) : D^+(X) \longrightarrow D^+(\textit{Ab})
+$$
+by
+Derived Categories, Lemma \ref{derived-lemma-enough-injectives-right-derived}.
+The value of $R\Gamma_Z(X, -)$ on an object $K$ is computed by representing
+$K$ by a bounded below complex $\mathcal{I}^\bullet$ of injective abelian
+sheaves and taking $\Gamma_Z(X, \mathcal{I}^\bullet)$, see
+Derived Categories, Lemma \ref{derived-lemma-injective-acyclic}.
+The cohomology groups of an abelian sheaf $\mathcal{F}$
with support in $Z$ are defined by
+$H^q_Z(X, \mathcal{F}) = R^q\Gamma_Z(X, \mathcal{F})$.
+
+\medskip\noindent
+Let $\mathcal{I}$ be an injective abelian sheaf on $X$. Let
+$U = X \setminus Z$. Then the restriction map
+$\mathcal{I}(X) \to \mathcal{I}(U)$ is surjective
+(Lemma \ref{lemma-injective-restriction-surjective})
+with kernel $\Gamma_Z(X, \mathcal{I})$. It immediately follows that
+for $K \in D^+(X)$ there is a distinguished triangle
+$$
+R\Gamma_Z(X, K) \to R\Gamma(X, K) \to R\Gamma(U, K) \to R\Gamma_Z(X, K)[1]
+$$
+in $D^+(\textit{Ab})$. As a consequence we obtain a long exact cohomology
+sequence
+$$
+\ldots \to H^i_Z(X, K) \to H^i(X, K) \to H^i(U, K) \to
+H^{i + 1}_Z(X, K) \to \ldots
+$$
+for any $K$ in $D^+(X)$.
+
+\medskip\noindent
+For an abelian sheaf $\mathcal{F}$ on $X$ we can consider the
+{\it subsheaf of sections with support in $Z$}, denoted
+$\mathcal{H}_Z(\mathcal{F})$, defined by the rule
+$$
+\mathcal{H}_Z(\mathcal{F})(U) =
+\{s \in \mathcal{F}(U) \mid \text{Supp}(s) \subset U \cap Z\} =
+\Gamma_{Z \cap U}(U, \mathcal{F}|_U)
+$$
+Using the equivalence of Modules, Lemma \ref{modules-lemma-i-star-exact}
+we may view $\mathcal{H}_Z(\mathcal{F})$ as an abelian sheaf on $Z$, see
+Modules, Remark \ref{modules-remark-sections-support-in-closed}.
+Thus we obtain a functor
+$$
+\textit{Ab}(X) \longrightarrow \textit{Ab}(Z),\quad
+\mathcal{F} \longmapsto
+\mathcal{H}_Z(\mathcal{F})\text{ viewed as a sheaf on }Z
+$$
+This functor is left exact, but in general not exact. Exactly as above
+we obtain a right derived functor
+$$
+R\mathcal{H}_Z : D^+(X) \longrightarrow D^+(Z)
+$$
We set
+$\mathcal{H}^q_Z(\mathcal{F}) = R^q\mathcal{H}_Z(\mathcal{F})$ so that
+$\mathcal{H}^0_Z(\mathcal{F}) = \mathcal{H}_Z(\mathcal{F})$.
+
+\medskip\noindent
+Observe that we have
+$\Gamma_Z(X, \mathcal{F}) = \Gamma(Z, \mathcal{H}_Z(\mathcal{F}))$
+for any abelian sheaf $\mathcal{F}$. By
+Lemma \ref{lemma-sections-with-support-acyclic} below
+the functor $\mathcal{H}_Z$ transforms injective abelian sheaves
+into sheaves right acyclic for $\Gamma(Z, -)$. Thus by
+Derived Categories, Lemma \ref{derived-lemma-grothendieck-spectral-sequence}
+we obtain a convergent Grothendieck spectral sequence
+$$
+E_2^{p, q} = H^p(Z, \mathcal{H}^q_Z(K)) \Rightarrow H^{p + q}_Z(X, K)
+$$
+functorial in $K$ in $D^+(X)$.
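
\medskip\noindent
In low degrees, for an abelian sheaf $\mathcal{F}$ this spectral sequence
gives the usual five term exact sequence
$$
0 \to H^1(Z, \mathcal{H}_Z(\mathcal{F})) \to H^1_Z(X, \mathcal{F}) \to
H^0(Z, \mathcal{H}^1_Z(\mathcal{F})) \to
H^2(Z, \mathcal{H}_Z(\mathcal{F})) \to H^2_Z(X, \mathcal{F})
$$
of a first quadrant spectral sequence.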
+
+\begin{lemma}
+\label{lemma-sections-with-support-acyclic}
+Let $i : Z \to X$ be the inclusion of a closed subset.
+Let $\mathcal{I}$ be an injective abelian sheaf on $X$.
+Then $\mathcal{H}_Z(\mathcal{I})$ is an injective abelian sheaf on $Z$.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Homology, Lemma \ref{homology-lemma-adjoint-preserve-injectives}
+as $\mathcal{H}_Z(-)$ is right adjoint to the exact functor $i_*$.
+See Modules, Lemmas \ref{modules-lemma-i-star-exact} and
+\ref{modules-lemma-i-star-right-adjoint}.
+\end{proof}
+
+
+
+
+\section{Cohomology on spectral spaces}
+\label{section-spectral}
+
+\noindent
+A key result on the cohomology of spectral spaces is Lemma \ref{lemma-colimit}
+which loosely speaking says that cohomology commutes with cofiltered limits
+in the category of spectral spaces as defined in
+Topology, Definition \ref{topology-definition-spectral-space}.
+This can be applied to give analogues of
+Lemmas \ref{lemma-cohomology-of-closed} and \ref{lemma-proper-base-change}
+as follows.
+
+\begin{lemma}
+\label{lemma-cohomology-of-neighbourhoods-of-closed}
+Let $X$ be a spectral space. Let $\mathcal{F}$ be an abelian sheaf on $X$.
+Let $E \subset X$ be a quasi-compact subset. Let $W \subset X$ be the set of
+points of $X$ which specialize to a point of $E$.
+\begin{enumerate}
+\item $H^p(W, \mathcal{F}|_W) = \colim H^p(U, \mathcal{F})$
+where the colimit is over quasi-compact open neighbourhoods of $E$,
+\item $H^p(W \setminus E, \mathcal{F}|_{W \setminus E}) =
+\colim H^p(U \setminus E, \mathcal{F}|_{U \setminus E})$
+if $E$ is a constructible subset.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+From Topology, Lemma \ref{topology-lemma-make-spectral-space}
+we see that $W = \lim U$ where the limit is over the quasi-compact
+opens containing $E$. Each $U$ is a spectral space by
+Topology, Lemma \ref{topology-lemma-spectral-sub}.
+Thus we may apply Lemma \ref{lemma-colimit} to conclude that (1) holds.
+The same proof works for part (2) except we use
+Topology, Lemma \ref{topology-lemma-make-spectral-space-minus}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-base-change-spectral}
+Let $f : X \to Y$ be a spectral map of spectral spaces. Let $y \in Y$.
+Let $E \subset Y$ be the set of points specializing to $y$.
+Let $\mathcal{F}$ be an abelian sheaf on $X$.
+Then $(R^pf_*\mathcal{F})_y = H^p(f^{-1}(E), \mathcal{F}|_{f^{-1}(E)})$.
+\end{lemma}
+
+\begin{proof}
+Observe that $E = \bigcap V$ where $V$ runs over the quasi-compact
+open neighbourhoods of $y$ in $Y$. Hence $f^{-1}(E) = \bigcap f^{-1}(V)$.
+This implies that $f^{-1}(E) = \lim f^{-1}(V)$ as topological spaces.
+Since $f$ is spectral, each $f^{-1}(V)$ is a spectral space too
+(Topology, Lemma \ref{topology-lemma-spectral-sub}).
+We conclude that $f^{-1}(E)$ is a spectral space and that
+$$
+H^p(f^{-1}(E), \mathcal{F}|_{f^{-1}(E)}) =
+\colim H^p(f^{-1}(V), \mathcal{F})
+$$
+by Lemma \ref{lemma-colimit}. On the other hand, the stalk of
+$R^pf_*\mathcal{F}$ at $y$ is given by the colimit on the right.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-for-profinite}
+Let $X$ be a profinite topological space. Then $H^q(X, \mathcal{F}) = 0$
+for all $q > 0$ and all abelian sheaves $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Any open covering of $X$ can be refined by a finite disjoint union
+decomposition with open parts, see
+Topology, Lemma \ref{topology-lemma-profinite-refine-open-covering}.
+Hence if $\mathcal{F} \to \mathcal{G}$ is a surjection of abelian
+sheaves on $X$, then $\mathcal{F}(X) \to \mathcal{G}(X)$ is surjective.
+In other words, the global sections functor is an exact functor.
+Therefore its higher derived functors are zero, see
+Derived Categories, Lemma \ref{derived-lemma-right-derived-exact-functor}.
+\end{proof}
+
+\noindent
+The following result on cohomological vanishing
+improves Grothendieck's result
+(Proposition \ref{proposition-vanishing-Noetherian})
+and can be found in \cite{Scheiderer}.
+
+\begin{proposition}
+\label{proposition-cohomological-dimension-spectral}
+\begin{reference}
+Part (1) is the main theorem of \cite{Scheiderer}.
+\end{reference}
+Let $X$ be a spectral space of Krull dimension $d$.
+Let $\mathcal{F}$ be an abelian sheaf on $X$.
+\begin{enumerate}
+\item $H^q(X, \mathcal{F}) = 0$ for $q > d$,
+\item $H^d(X, \mathcal{F}) \to H^d(U, \mathcal{F})$ is surjective
+for every quasi-compact open $U \subset X$,
+\item $H^q_Z(X, \mathcal{F}) = 0$ for $q > d$ and any constructible
+closed subset $Z \subset X$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+We prove this result by induction on $d$.
+
+\medskip\noindent
+If $d = 0$, then $X$ is a profinite space, see
+Topology, Lemma \ref{topology-lemma-characterize-profinite-spectral}.
+Thus (1) holds by Lemma \ref{lemma-vanishing-for-profinite}.
+If $U \subset X$ is quasi-compact open, then $U$ is
+also closed as a quasi-compact subset of a Hausdorff space.
+Hence $X = U \amalg (X \setminus U)$ as a topological space
+and we see that (2) holds. Given $Z$ as in (3) we consider the
+long exact sequence
+$$
+H^{q - 1}(X, \mathcal{F}) \to
+H^{q - 1}(X \setminus Z, \mathcal{F}) \to
+H^q_Z(X, \mathcal{F}) \to H^q(X, \mathcal{F})
+$$
+Since $X$ and $U = X \setminus Z$ are profinite (namely $U$ is quasi-compact
+because $Z$ is constructible) and since
+we have (2) and (1) we obtain the desired vanishing of the
+cohomology groups with support in $Z$.
+
+\medskip\noindent
+Induction step. Assume $d \geq 1$ and assume
+the proposition is valid for all spectral
+spaces of dimension $< d$. We first prove part (2) for $X$.
+Let $U$ be a quasi-compact open. Let $\xi \in H^d(U, \mathcal{F})$.
+Set $Z = X \setminus U$. Let $W \subset X$ be the set of points
+specializing to $Z$. By
+Lemma \ref{lemma-cohomology-of-neighbourhoods-of-closed} we have
+$$
+H^d(W \setminus Z, \mathcal{F}|_{W \setminus Z}) =
+\colim_{Z \subset V} H^d(V \setminus Z, \mathcal{F})
+$$
+where the colimit is over the quasi-compact open neighbourhoods $V$
+of $Z$ in $X$.
+By Topology, Lemma \ref{topology-lemma-make-spectral-space} we see that
+$W \setminus Z$ is a spectral space.
+Since every point of $W$ specializes to a point of $Z$, we see that
+$W \setminus Z$ is a spectral space of Krull dimension $< d$.
+By induction hypothesis we see that the image of $\xi$ in
+$H^d(W \setminus Z, \mathcal{F}|_{W \setminus Z})$ is zero.
+By the displayed formula, there exists a $Z \subset V \subset X$
+quasi-compact open such that $\xi|_{V \setminus Z} = 0$.
Since $V \setminus Z = V \cap U$ we conclude by the Mayer-Vietoris sequence
+(Lemma \ref{lemma-mayer-vietoris}) for the covering $X = U \cup V$
+that there exists a $\tilde \xi \in H^d(X, \mathcal{F})$ which restricts
+to $\xi$ on $U$ and to zero on $V$. In other words, part (2) is true.
+
+\medskip\noindent
+Proof of part (1) assuming (2). Choose an injective resolution
+$\mathcal{F} \to \mathcal{I}^\bullet$. Set
+$$
+\mathcal{G} = \Im(\mathcal{I}^{d - 1} \to \mathcal{I}^d) =
+\Ker(\mathcal{I}^d \to \mathcal{I}^{d + 1})
+$$
+For $U \subset X$ quasi-compact open we have a map of exact sequences
+as follows
+$$
+\xymatrix{
+\mathcal{I}^{d - 1}(X) \ar[r] \ar[d] &
+\mathcal{G}(X) \ar[r] \ar[d] &
+H^d(X, \mathcal{F}) \ar[d] \ar[r] & 0 \\
+\mathcal{I}^{d - 1}(U) \ar[r] &
+\mathcal{G}(U) \ar[r] &
+H^d(U, \mathcal{F}) \ar[r] & 0
+}
+$$
+The sheaf $\mathcal{I}^{d - 1}$ is flasque by
+Lemma \ref{lemma-injective-flasque} and the fact that $d \geq 1$.
+By part (2) we see that the right vertical arrow is surjective.
+We conclude by a diagram chase that the map
+$\mathcal{G}(X) \to \mathcal{G}(U)$ is surjective.
+By Lemma \ref{lemma-vanishing-ravi} we conclude that
+$\check{H}^q(\mathcal{U}, \mathcal{G}) = 0$ for $q > 0$ and
+any finite covering $\mathcal{U} : U = U_1 \cup \ldots \cup U_n$
+of a quasi-compact open by quasi-compact opens. Applying
+Lemma \ref{lemma-cech-vanish-basis} we find that $H^q(U, \mathcal{G}) = 0$
+for all $q > 0$ and all quasi-compact opens $U$ of $X$.
+By Leray's acyclicity lemma
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity})
+we conclude that
+$$
+H^q(X, \mathcal{F}) =
+H^q\left(
+\Gamma(X, \mathcal{I}^0) \to \ldots \to
+\Gamma(X, \mathcal{I}^{d - 1}) \to \Gamma(X, \mathcal{G})
+\right)
+$$
+In particular the cohomology group vanishes if $q > d$.
+
+\medskip\noindent
+Proof of (3). Given $Z$ as in (3) we consider the long exact sequence
+$$
+H^{q - 1}(X, \mathcal{F}) \to
+H^{q - 1}(X \setminus Z, \mathcal{F}) \to
+H^q_Z(X, \mathcal{F}) \to H^q(X, \mathcal{F})
+$$
+Since $X$ and $U = X \setminus Z$ are spectral spaces
+(Topology, Lemma \ref{topology-lemma-spectral-sub})
+of dimension $\leq d$
+and since we have (2) and (1) we obtain the desired vanishing.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{The alternating {\v C}ech complex}
+\label{section-alternating-cech}
+
+\noindent
+This section compares the {\v C}ech complex with the alternating {\v C}ech
+complex and some related complexes.
+
+\medskip\noindent
+Let $X$ be a topological space. Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$
+be an open covering. For $p \geq 0$ set
+$$
+\check{\mathcal{C}}_{alt}^p(\mathcal{U}, \mathcal{F})
+=
+\left\{
+\begin{matrix}
+s \in \check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})
+\text{ such that }
+s_{i_0 \ldots i_p} = 0 \text{ if } i_n = i_m \text{ for some } n \not = m\\
+\text{ and }
+s_{i_0\ldots i_n \ldots i_m \ldots i_p}
+=
+-s_{i_0\ldots i_m \ldots i_n \ldots i_p}
+\text{ in any case.}
+\end{matrix}
+\right\}
+$$
+We omit the verification that the differential $d$ of
+Equation (\ref{equation-d-cech}) maps
+$\check{\mathcal{C}}^p_{alt}(\mathcal{U}, \mathcal{F})$ into
+$\check{\mathcal{C}}^{p + 1}_{alt}(\mathcal{U}, \mathcal{F})$.
+
+\begin{definition}
+\label{definition-alternating-cech-complex}
+Let $X$ be a topological space. Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$
+be an open covering. Let $\mathcal{F}$ be an abelian presheaf on $X$.
+The complex $\check{\mathcal{C}}_{alt}^\bullet(\mathcal{U}, \mathcal{F})$
+is the {\it alternating {\v C}ech complex} associated to $\mathcal{F}$ and the
+open covering $\mathcal{U}$.
+\end{definition}
+
+\noindent
+Hence there is a canonical morphism of complexes
+$$
+\check{\mathcal{C}}_{alt}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+namely the inclusion of the alternating {\v C}ech complex into the
+usual {\v C}ech complex.
+
+\medskip\noindent
+Suppose our covering $\mathcal{U} : U = \bigcup_{i \in I} U_i$ comes
+equipped with a total ordering $<$ on $I$. In this case, set
+$$
+\check{\mathcal{C}}_{ord}^p(\mathcal{U}, \mathcal{F})
+=
+\prod\nolimits_{(i_0, \ldots, i_p) \in I^{p + 1}, i_0 < \ldots < i_p}
+\mathcal{F}(U_{i_0\ldots i_p}).
+$$
+This is an abelian group. For
+$s \in \check{\mathcal{C}}_{ord}^p(\mathcal{U}, \mathcal{F})$ we denote
+$s_{i_0\ldots i_p}$ its value in $\mathcal{F}(U_{i_0\ldots i_p})$.
+We define
+$$
+d : \check{\mathcal{C}}_{ord}^p(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}_{ord}^{p + 1}(\mathcal{U}, \mathcal{F})
+$$
+by the formula
+$$
+d(s)_{i_0\ldots i_{p + 1}}
+=
+\sum\nolimits_{j = 0}^{p + 1}
+(-1)^j
+s_{i_0\ldots \hat i_j \ldots i_{p + 1}}|_{U_{i_0\ldots i_{p + 1}}}
+$$
+for any $i_0 < \ldots < i_{p + 1}$. Note that this formula is identical
+to Equation (\ref{equation-d-cech}).
+It is straightforward to see that $d \circ d = 0$. In other words
+$\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})$ is a complex.
+
+\begin{definition}
+\label{definition-ordered-cech-complex}
+Let $X$ be a topological space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering.
+Assume given a total ordering on $I$.
+Let $\mathcal{F}$ be an abelian presheaf on $X$.
+The complex $\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})$
+is the {\it ordered {\v C}ech complex} associated to $\mathcal{F}$, the
+open covering $\mathcal{U}$ and the given total ordering on $I$.
+\end{definition}
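\medskip\noindent
For example, if $I = \{1, 2\}$ with the ordering $1 < 2$, then the ordered
{\v C}ech complex is the two-term complex
$$
\mathcal{F}(U_1) \times \mathcal{F}(U_2)
\longrightarrow
\mathcal{F}(U_1 \cap U_2),
\quad
(s_1, s_2) \longmapsto s_2|_{U_1 \cap U_2} - s_1|_{U_1 \cap U_2}
$$
placed in degrees $0$ and $1$.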
+
+\noindent
+This complex is sometimes called the alternating {\v C}ech complex.
+The reason is that there is an obvious comparison map between
+the ordered {\v C}ech complex and the alternating {\v C}ech complex.
+Namely, consider the map
+$$
+c :
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+given by the rule
+$$
+c(s)_{i_0\ldots i_p} =
+\left\{
+\begin{matrix}
+0 &
+\text{if} &
+i_n = i_m \text{ for some } n \not = m\\
\text{sign}(\sigma) s_{i_{\sigma(0)}\ldots i_{\sigma(p)}} &
\text{if} &
i_{\sigma(0)} < i_{\sigma(1)} < \ldots < i_{\sigma(p)}
\end{matrix}
\right.
$$
Here $\sigma$ denotes a permutation of $\{0, \ldots, p\}$ and
$\text{sign}(\sigma)$ denotes its sign. The alternating and ordered
+{\v C}ech complexes are often identified in the literature via the map
+$c$. Namely we have the following easy lemma.
+
+\begin{lemma}
+\label{lemma-ordered-alternating}
+Let $X$ be a topological space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering.
+Assume $I$ comes equipped with a total ordering.
+The map $c$ is a morphism of complexes. In fact it induces
+an isomorphism
+$$
+c : \check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})
+\to \check{\mathcal{C}}_{alt}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+of complexes.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
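\medskip\noindent
For example, in degree $1$ the map $c$ sends an ordered $1$-cochain
$s = (s_{i_0 i_1})_{i_0 < i_1}$ to the alternating $1$-cochain with
$$
c(s)_{i_0 i_1} =
\left\{
\begin{matrix}
s_{i_0 i_1} & \text{if} & i_0 < i_1 \\
0 & \text{if} & i_0 = i_1 \\
- s_{i_1 i_0} & \text{if} & i_1 < i_0
\end{matrix}
\right.
$$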
+
+\noindent
+There is also a map
+$$
+\pi :
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+which is described by the rule
+$$
+\pi(s)_{i_0\ldots i_p} = s_{i_0\ldots i_p}
+$$
+whenever $i_0 < i_1 < \ldots < i_p$.
+
+\begin{lemma}
+\label{lemma-project-to-ordered}
+Let $X$ be a topological space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering.
+Assume $I$ comes equipped with a total ordering.
+The map $\pi : \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+\to \check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})$
+is a morphism of complexes. It induces an isomorphism
+$$
+\pi : \check{\mathcal{C}}_{alt}^\bullet(\mathcal{U}, \mathcal{F})
+\to \check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+of complexes which is a left inverse to the morphism $c$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-compared-ordered-complexes}
+This means that if we have two total orderings $<_1$ and $<_2$ on
+the index set $I$, then we get an isomorphism of complexes
+$\tau = \pi_2 \circ c_1 :
\check{\mathcal{C}}_{ord\text{-}1}^\bullet(\mathcal{U}, \mathcal{F}) \to
\check{\mathcal{C}}_{ord\text{-}2}^\bullet(\mathcal{U}, \mathcal{F})$.
+It is clear that
+$$
+\tau(s)_{i_0 \ldots i_p} =
+\text{sign}(\sigma) s_{i_{\sigma(0)} \ldots i_{\sigma(p)}}
+$$
+where $i_0 <_1 i_1 <_1 \ldots <_1 i_p$ and
+$i_{\sigma(0)} <_2 i_{\sigma(1)} <_2 \ldots <_2 i_{\sigma(p)}$.
+This is the sense in which the ordered {\v C}ech complex is independent
+of the chosen total ordering.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-alternating-usual}
+Let $X$ be a topological space.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering.
+Assume $I$ comes equipped with a total ordering.
+The map $c \circ \pi$ is homotopic to the identity on
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$.
+In particular the inclusion map
+$\check{\mathcal{C}}_{alt}^\bullet(\mathcal{U}, \mathcal{F}) \to
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$
+is a homotopy equivalence.
+\end{lemma}
+
+\begin{proof}
+For any multi-index $(i_0, \ldots, i_p) \in I^{p + 1}$ there exists
+a unique permutation $\sigma : \{0, \ldots, p\} \to \{0, \ldots, p\}$
+such that
+$$
+i_{\sigma(0)} \leq i_{\sigma(1)} \leq \ldots \leq i_{\sigma(p)}
+\quad
+\text{and}
+\quad
+\sigma(j) < \sigma(j + 1)
+\quad
+\text{if}
+\quad
+i_{\sigma(j)} = i_{\sigma(j + 1)}.
+$$
+We denote this permutation $\sigma = \sigma^{i_0 \ldots i_p}$.
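For example, if $(i_0, i_1, i_2) = (1, 0, 0)$ then
$$
\sigma^{i_0 i_1 i_2} :
0 \mapsto 1, \quad 1 \mapsto 2, \quad 2 \mapsto 0
$$
so that $i_{\sigma(0)} = i_{\sigma(1)} = 0 \leq i_{\sigma(2)} = 1$
and $\sigma(0) < \sigma(1)$ as required.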
+
+\medskip\noindent
For any permutation $\sigma : \{0, \ldots, p\} \to \{0, \ldots, p\}$
and any $a$, $0 \leq a \leq p$ we denote $\sigma_a$
the permutation of $\{0, \ldots, p\}$ which agrees with $\sigma$ on
$\{0, \ldots, a - 1\}$ and which takes the values not among
$\sigma(0), \ldots, \sigma(a - 1)$ in increasing order on
$\{a, \ldots, p\}$. In a formula
$$
\sigma_a(j) =
\left\{
\begin{matrix}
\sigma(j) & \text{if} & 0 \leq j < a, \\
\min\{j' \mid j' \not = \sigma_a(k), \forall k < j\}
& \text{if} & a \leq j
\end{matrix}
\right.
$$
+So if $p = 3$ and $\sigma$, $\tau$ are given by
+$$
+\begin{matrix}
+\text{id} & 0 & 1 & 2 & 3 \\
+\sigma & 3 & 2 & 1 & 0
+\end{matrix}
+\quad \text{and} \quad
+\begin{matrix}
+\text{id} & 0 & 1 & 2 & 3 \\
+\tau & 3 & 0 & 2 & 1
+\end{matrix}
+$$
+then we have
+$$
+\begin{matrix}
+\text{id} & 0 & 1 & 2 & 3 \\
+\sigma_0 & 0 & 1 & 2 & 3 \\
+\sigma_1 & 3 & 0 & 1 & 2 \\
+\sigma_2 & 3 & 2 & 0 & 1 \\
+\sigma_3 & 3 & 2 & 1 & 0 \\
+\end{matrix}
+\quad \text{and} \quad
+\begin{matrix}
+\text{id} & 0 & 1 & 2 & 3 \\
+\tau_0 & 0 & 1 & 2 & 3 \\
+\tau_1 & 3 & 0 & 1 & 2 \\
+\tau_2 & 3 & 0 & 1 & 2 \\
+\tau_3 & 3 & 0 & 2 & 1 \\
+\end{matrix}
+$$
+It is clear that always $\sigma_0 = \text{id}$ and $\sigma_p = \sigma$.
+
+\medskip\noindent
+Having introduced this notation we define for
+$s \in \check{\mathcal{C}}^{p + 1}(\mathcal{U}, \mathcal{F})$
+the element $h(s) \in \check{\mathcal{C}}^p(\mathcal{U}, \mathcal{F})$
+to be the element with components
+\begin{equation}
+\label{equation-first-homotopy}
+h(s)_{i_0\ldots i_p} =
+\sum\nolimits_{0 \leq a \leq p}
+(-1)^a \text{sign}(\sigma_a)
+s_{i_{\sigma(0)} \ldots i_{\sigma(a)} i_{\sigma_a(a)} \ldots i_{\sigma_a(p)}}
+\end{equation}
+where $\sigma = \sigma^{i_0 \ldots i_p}$. The index
+$i_{\sigma(a)}$ occurs twice in
+$i_{\sigma(0)} \ldots i_{\sigma(a)} i_{\sigma_a(a)} \ldots i_{\sigma_a(p)}$
+once in the first group of $a + 1$ indices and once in the second group
+of $p - a + 1$ indices since $\sigma_a(j) = \sigma(a)$ for some
+$j \geq a$ by definition of $\sigma_a$. Hence the sum makes sense since each
+of the elements
+$s_{i_{\sigma(0)} \ldots i_{\sigma(a)} i_{\sigma_a(a)} \ldots i_{\sigma_a(p)}}$
+is defined over the open $U_{i_0 \ldots i_p}$.
Note also that the term for $a = 0$ is
$s_{i_{\sigma(0)} i_0 \ldots i_p}$ and the term for $a = p$ is
$(-1)^p \text{sign}(\sigma)
s_{i_{\sigma(0)} \ldots i_{\sigma(p)} i_{\sigma(p)}}$.
+
+\medskip\noindent
+We claim that
+$$
+(dh + hd)(s)_{i_0 \ldots i_p} =
+s_{i_0 \ldots i_p} -
+\text{sign}(\sigma) s_{i_{\sigma(0)} \ldots i_{\sigma(p)}}
+$$
+where $\sigma = \sigma^{i_0 \ldots i_p}$. We omit the verification
+of this claim. (There is a PARI/gp script called first-homotopy.gp
+in the stacks-project subdirectory scripts which can be used to check
+finitely many instances of this claim.
+We wrote this script to make sure the signs are correct.)
+Write
+$$
+\kappa :
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+for the operator given by the rule
+$$
+\kappa(s)_{i_0 \ldots i_p} =
+\text{sign}(\sigma^{i_0 \ldots i_p}) s_{i_{\sigma(0)} \ldots i_{\sigma(p)}}.
+$$
+The claim above implies that $\kappa$ is a morphism of complexes and that
+$\kappa$ is homotopic to the identity map of the {\v C}ech complex.
+This does not immediately imply the lemma since
+the image of the operator $\kappa$ is not the alternating subcomplex.
+Namely, the image of $\kappa$ is the ``semi-alternating'' complex
+$\check{\mathcal{C}}_{semi\text{-}alt}^p(\mathcal{U}, \mathcal{F})$
+where $s$ is a $p$-cochain of this complex if and only if
+$$
+s_{i_0 \ldots i_p} = \text{sign}(\sigma) s_{i_{\sigma(0)} \ldots i_{\sigma(p)}}
+$$
+for any $(i_0, \ldots, i_p) \in I^{p + 1}$ with
+$\sigma = \sigma^{i_0 \ldots i_p}$.
+We introduce yet another variant {\v C}ech complex, namely the semi-ordered
+{\v C}ech complex defined by
+$$
+\check{\mathcal{C}}_{semi\text{-}ord}^p(\mathcal{U}, \mathcal{F})
+=
+\prod\nolimits_{i_0 \leq i_1 \leq \ldots \leq i_p}
+\mathcal{F}(U_{i_0 \ldots i_p})
+$$
+It is easy to see that Equation (\ref{equation-d-cech}) also defines
+a differential and hence that we get a complex. It is also clear
+(analogous to Lemma \ref{lemma-project-to-ordered}) that the projection map
+$$
+\check{\mathcal{C}}_{semi\text{-}alt}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}_{semi\text{-}ord}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+is an isomorphism of complexes.
+
+\medskip\noindent
Hence the lemma follows if we can show that the obvious inclusion map
+$$
+\check{\mathcal{C}}_{ord}^p(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}_{semi\text{-}ord}^p(\mathcal{U}, \mathcal{F})
+$$
+is a homotopy equivalence. To see this we use the homotopy
+\begin{equation}
+\label{equation-second-homotopy}
+h(s)_{i_0 \ldots i_p} =
+\left\{
+\begin{matrix}
+0 & \text{if} & i_0 < i_1 < \ldots < i_p \\
+(-1)^a s_{i_0 \ldots i_{a - 1} i_a i_a i_{a + 1} \ldots i_p}
+& \text{if} & i_0 < i_1 < \ldots < i_{a - 1} < i_a = i_{a + 1}
+\end{matrix}
+\right.
+\end{equation}
+We claim that
+$$
+(dh + hd)(s)_{i_0 \ldots i_p} =
+\left\{
+\begin{matrix}
+0 & \text{if} & i_0 < i_1 < \ldots < i_p \\
+s_{i_0 \ldots i_p}
+& \text{else} &
+\end{matrix}
+\right.
+$$
+We omit the verification. (There is a PARI/gp script called second-homotopy.gp
+in the stacks-project subdirectory scripts which can be used to check
+finitely many instances of this claim.
+We wrote this script to make sure the signs are correct.)
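As before, one can sketch such a check in Python, now encoding semi-ordered
cochains as formal sums over weakly increasing multi-indices; the helper
names are ours and the restriction maps are again dropped.

```python
from collections import Counter
from itertools import combinations_with_replacement

def clean(c):
    # drop zero coefficients from a formal sum
    return {k: v for k, v in c.items() if v != 0}

def universal(t):
    # the universal cochain: a multi-index maps to itself as a symbol
    return {tuple(t): 1}

def d(f):
    # Cech differential of Equation (equation-d-cech), restrictions dropped
    def df(t):
        out = Counter()
        for j in range(len(t)):
            for k, v in f(t[:j] + t[j + 1:]).items():
                out[k] += (-1) ** j * v
        return clean(out)
    return df

def h(f):
    # the homotopy of Equation (equation-second-homotopy); the input
    # multi-index t is weakly increasing
    def hf(t):
        for a in range(len(t) - 1):
            if t[a] == t[a + 1]:
                ext = t[:a + 1] + t[a:]  # repeat the first duplicated index
                return clean({k: (-1) ** a * v for k, v in f(ext).items()})
        return {}  # strictly increasing indices
    return hf

def check_second_claim(I, p):
    # verify (dh + hd)(s) = 0 on strictly increasing multi-indices
    # and (dh + hd)(s) = s on the remaining weakly increasing ones
    f = universal
    for t in combinations_with_replacement(I, p + 1):
        lhs = Counter(d(h(f))(t))
        for k, v in h(d(f))(t).items():
            lhs[k] += v
        strict = all(t[a] < t[a + 1] for a in range(len(t) - 1))
        rhs = {} if strict else {t: 1}
        if clean(lhs) != rhs:
            return False
    return True

print(all(check_second_claim(range(3), p) for p in range(3)))  # prints True
```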
+The claim clearly shows that the composition
+$$
+\check{\mathcal{C}}_{semi\text{-}ord}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}_{semi\text{-}ord}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+of the projection with the natural inclusion
+is homotopic to the identity map as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-alternating-cech-trivial}
+Let $X$ be a topological space. Let $\mathcal{F}$ be an abelian presheaf on $X$.
+Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be an open covering. If
+$U_i = U$ for some $i \in I$, then the extended alternating {\v C}ech complex
+$$
+\mathcal{F}(U) \to \check{\mathcal{C}}_{alt}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+obtained by putting $\mathcal{F}(U)$ in degree $-1$ with differential given by
+the canonical map of $\mathcal{F}(U)$ into
+$\check{\mathcal{C}}^0(\mathcal{U}, \mathcal{F})$
+is homotopy equivalent to $0$. Similarly, for any total ordering on $I$
+the extended ordered {\v C}ech complex
+$$
+\mathcal{F}(U) \to
+\check{\mathcal{C}}_{ord}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+is homotopy equivalent to $0$.
+\end{lemma}
+
+\begin{proof}[First proof]
+Combine Lemmas \ref{lemma-cech-trivial} and \ref{lemma-alternating-usual}.
+\end{proof}
+
+\begin{proof}[Second proof]
+Since the alternating and ordered {\v C}ech complexes are isomorphic
+it suffices to prove this for the ordered one.
+We will use standard notation: a cochain $s$ of degree $p$
+in the extended ordered {\v C}ech complex has the form
+$s = (s_{i_0 \ldots i_p})$ where $s_{i_0 \ldots i_p}$ is in
+$\mathcal{F}(U_{i_0 \ldots i_p})$ and $i_0 < \ldots < i_p$.
+With this notation we have
+$$
d(s)_{i_0 \ldots i_{p + 1}} =
\sum\nolimits_j (-1)^j s_{i_0 \ldots \hat i_j \ldots i_{p + 1}}
+Fix an index $i \in I$ with $U = U_i$.
+As homotopy we use the maps
+$$
+h : \text{cochains of degree }p + 1 \to \text{cochains of degree }p
+$$
+given by the rule
+$$
+h(s)_{i_0 \ldots i_p} = 0 \text{ if } i \in \{i_0, \ldots, i_p\}
+\text{ and }
+h(s)_{i_0 \ldots i_p} =
+(-1)^j s_{i_0 \ldots i_j i i_{j + 1} \ldots i_p} \text{ if not}
+$$
+Here $j$ is the unique index such that $i_j < i < i_{j + 1}$ in the
+second case; also, since $U = U_i$ we have the equality
+$$
+\mathcal{F}(U_{i_0 \ldots i_p}) =
+\mathcal{F}(U_{i_0 \ldots i_j i i_{j + 1} \ldots i_p})
+$$
which allows us to view
+$(-1)^j s_{i_0 \ldots i_j i i_{j + 1} \ldots i_p}$
+as an element of $\mathcal{F}(U_{i_0 \ldots i_p})$.
+We will show by a computation that $d h + h d$ equals
+the negative of the identity map which finishes the proof.
+To do this fix $s$ a cochain of degree $p$ and let
+$i_0 < \ldots < i_p$ be elements of $I$.
+
+\medskip\noindent
+Case I: $i \in \{i_0, \ldots, i_p\}$. Say $i = i_t$. Then we have
+$h(d(s))_{i_0 \ldots i_p} = 0$. On the other hand we have
+$$
+d(h(s))_{i_0 \ldots i_p} =
+\sum (-1)^j h(s)_{i_0 \ldots \hat i_j \ldots i_p} =
+(-1)^t h(s)_{i_0 \ldots \hat i \ldots i_p} =
+(-1)^t (-1)^{t - 1} s_{i_0 \ldots i_p}
+$$
+Thus $(dh + hd)(s)_{i_0 \ldots i_p} = -s_{i_0 \ldots i_p}$ as desired.
+
+\medskip\noindent
+Case II: $i \not \in \{i_0, \ldots, i_p\}$. Let $j$ be such that
+$i_j < i < i_{j + 1}$. Then we see that
+\begin{align*}
+h(d(s))_{i_0 \ldots i_p}
+& =
+(-1)^j d(s)_{i_0 \ldots i_j i i_{j + 1} \ldots i_p} \\
+& =
+\sum\nolimits_{j' \leq j} (-1)^{j + j'}
+s_{i_0 \ldots \hat i_{j'} \ldots i_j i i_{j + 1} \ldots i_p} -
+s_{i_0 \ldots i_p} \\
+&
++ \sum\nolimits_{j' > j} (-1)^{j + j' + 1}
+s_{i_0 \ldots i_j i i_{j + 1} \ldots \hat i_{j'} \ldots i_p}
+\end{align*}
+On the other hand we have
+\begin{align*}
+d(h(s))_{i_0 \ldots i_p}
+& =
+\sum\nolimits_{j'} (-1)^{j'} h(s)_{i_0 \ldots \hat i_{j'} \ldots i_p} \\
+& =
+\sum\nolimits_{j' \leq j} (-1)^{j' + j - 1}
+s_{i_0 \ldots \hat i_{j'} \ldots i_j i i_{j + 1} \ldots i_p} \\
+& +
+\sum\nolimits_{j' > j} (-1)^{j' + j}
+s_{i_0 \ldots i_j i i_{j + 1} \ldots \hat i_{j'} \ldots i_p}
+\end{align*}
+Adding these up we obtain
+$(dh + hd)(s)_{i_0 \ldots i_p} = - s_{i_0 \ldots i_p}$
+as desired.
+\end{proof}
+
+
+
+
+
+\section{Alternative view of the {\v C}ech complex}
+\label{section-locally-finite-cech}
+
+\noindent
+In this section we discuss an alternative way to establish the relationship
+between the {\v C}ech complex and cohomology.
+
+\begin{lemma}
+\label{lemma-covering-resolution}
+Let $X$ be a ringed space. Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$
+be an open covering of $X$. Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Denote $\mathcal{F}_{i_0 \ldots i_p}$ the restriction of
+$\mathcal{F}$ to $U_{i_0 \ldots i_p}$. There exists a complex
+${\mathfrak C}^\bullet(\mathcal{U}, \mathcal{F})$
+of $\mathcal{O}_X$-modules with
+$$
+{\mathfrak C}^p(\mathcal{U}, \mathcal{F}) =
+\prod\nolimits_{i_0 \ldots i_p}
+(j_{i_0 \ldots i_p})_* \mathcal{F}_{i_0 \ldots i_p}
+$$
+and differential
+$d : {\mathfrak C}^p(\mathcal{U}, \mathcal{F})
+\to {\mathfrak C}^{p + 1}(\mathcal{U}, \mathcal{F})$
+as in Equation (\ref{equation-d-cech}). Moreover, there exists a canonical
+map
+$$
+\mathcal{F} \to {\mathfrak C}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+which is a quasi-isomorphism, i.e.,
+${\mathfrak C}^\bullet(\mathcal{U}, \mathcal{F})$
+is a resolution of $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+We check
+$$
+0 \to \mathcal{F} \to \mathfrak{C}^0(\mathcal{U}, \mathcal{F}) \to
+\mathfrak{C}^1(\mathcal{U}, \mathcal{F}) \to \ldots
+$$
+is exact on stalks. Let $x \in X$ and choose $i_{\text{fix}} \in I$
+such that $x \in U_{i_{\text{fix}}}$. Then define
+$$
+h : \mathfrak{C}^p(\mathcal{U}, \mathcal{F})_x
+\to \mathfrak{C}^{p - 1}(\mathcal{U}, \mathcal{F})_x
+$$
+as follows: If $s \in \mathfrak{C}^p(\mathcal{U}, \mathcal{F})_x$, take
+a representative
+$$
+\widetilde{s} \in
+\mathfrak{C}^p(\mathcal{U}, \mathcal{F})(V) =
+\prod\nolimits_{i_0 \ldots i_p}
+\mathcal{F}(V \cap U_{i_0} \cap \ldots \cap U_{i_p})
+$$
+defined on some neighborhood $V$ of $x$, and set
+$$
+h(s)_{i_0 \ldots i_{p - 1}} =
+\widetilde{s}_{i_{\text{fix}} i_0 \ldots i_{p - 1}, x}.
+$$
+By the same formula (for $p = 0$) we get a map
+$\mathfrak{C}^{0}(\mathcal{U},\mathcal{F})_x \to \mathcal{F}_x$.
+We compute formally as follows:
+\begin{align*}
+(dh + hd)(s)_{i_0 \ldots i_p}
+& =
+\sum\nolimits_{j = 0}^p
+(-1)^j
+h(s)_{i_0 \ldots \hat i_j \ldots i_p}
++
+d(s)_{i_{\text{fix}} i_0 \ldots i_p}\\
+& =
+\sum\nolimits_{j = 0}^p
+(-1)^j
+s_{i_{\text{fix}} i_0 \ldots \hat i_j \ldots i_p}
++
+s_{i_0 \ldots i_p}
++
+\sum\nolimits_{j = 0}^p
+(-1)^{j + 1}
+s_{i_{\text{fix}} i_0 \ldots \hat i_j \ldots i_p} \\
+& =
+s_{i_0 \ldots i_p}
+\end{align*}
+This shows $h$ is a homotopy from the identity map of
+the extended complex
+$$
+0 \to \mathcal{F}_x \to \mathfrak{C}^0(\mathcal{U}, \mathcal{F})_x
+\to \mathfrak{C}^1(\mathcal{U}, \mathcal{F})_x \to \ldots
+$$
+to zero and we conclude.
+\end{proof}
+
+\noindent
+With this lemma it is easy to reprove the {\v C}ech to cohomology spectral
+sequence of Lemma \ref{lemma-cech-spectral-sequence}. Namely,
+let $X$, $\mathcal{U}$, $\mathcal{F}$ as in
+Lemma \ref{lemma-covering-resolution}
+and let $\mathcal{F} \to \mathcal{I}^\bullet$ be an injective resolution.
+Then we may consider the double complex
+$$
+A^{\bullet, \bullet} =
+\Gamma(X, {\mathfrak C}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)).
+$$
+By construction we have
+$$
+A^{p, q} = \prod\nolimits_{i_0 \ldots i_p} \mathcal{I}^q(U_{i_0 \ldots i_p})
+$$
+Consider the two spectral sequences of
+Homology, Section \ref{homology-section-double-complex} associated
+to this double complex, see especially
+Homology, Lemma \ref{homology-lemma-ss-double-complex}.
+For the spectral sequence $({}'E_r, {}'d_r)_{r \geq 0}$ we get
+${}'E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))$
+because taking products is exact
+(Homology, Lemma \ref{homology-lemma-product-abelian-groups-exact}).
+For the spectral sequence $({}''E_r, {}''d_r)_{r \geq 0}$ we get
+${}''E_2^{p, q} = 0$ if $p > 0$ and ${}''E_2^{0, q} = H^q(X, \mathcal{F})$.
+Namely, for fixed $q$ the complex of sheaves
+${\mathfrak C}^\bullet(\mathcal{U}, \mathcal{I}^q)$
+is a resolution (Lemma \ref{lemma-covering-resolution})
+of the injective sheaf $\mathcal{I}^q$
+by injective sheaves (by Lemmas \ref{lemma-cohomology-of-open} and
+\ref{lemma-pushforward-injective-flat}
+and
+Homology, Lemma \ref{homology-lemma-product-injectives}).
+Hence the cohomology of
+$\Gamma(X, {\mathfrak C}^\bullet(\mathcal{U}, \mathcal{I}^q))$
+is zero in positive degrees and equal to $\Gamma(X, \mathcal{I}^q)$
+in degree $0$. Taking cohomology of the next differential
+we get our claim about the spectral sequence $({}''E_r, {}''d_r)_{r \geq 0}$.
+Whence the result since both spectral sequences converge to the
+cohomology of the associated total complex of $A^{\bullet, \bullet}$.
+
+\begin{definition}
+\label{definition-covering-locally-finite}
+Let $X$ be a topological space.
+An open covering $X = \bigcup_{i \in I} U_i$ is said to be
+{\it locally finite} if for every $x \in X$ there exists an open neighbourhood
+$W$ of $x$ such that $\{i \in I \mid W \cap U_i \not = \emptyset\}$ is finite.
+\end{definition}
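\medskip\noindent
For example, the open covering
$\mathbf{R} = \bigcup_{n \in \mathbf{Z}} (n - 1, n + 1)$
is locally finite, whereas the open covering
$\mathbf{R} = \bigcup_{x > 0} (-x, x)$ is not: every neighbourhood of $0$
meets $(-x, x)$ for every $x > 0$.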
+
+\begin{remark}
+\label{remark-locally-finite-sections}
+Let $X = \bigcup_{i \in I} U_i$ be a locally finite open covering.
+Denote $j_i : U_i \to X$ the inclusion map. Suppose that for each $i$
+we are given an abelian sheaf $\mathcal{F}_i$ on $U_i$. Consider the
+abelian sheaf $\mathcal{G} = \bigoplus_{i \in I} (j_i)_*\mathcal{F}_i$.
+Then for $V \subset X$ open we actually have
+$$
+\Gamma(V, \mathcal{G}) = \prod\nolimits_{i \in I} \mathcal{F}_i(V \cap U_i).
+$$
+In other words we have
+$$
+\bigoplus\nolimits_{i \in I} (j_i)_*\mathcal{F}_i =
+\prod\nolimits_{i \in I} (j_i)_*\mathcal{F}_i
+$$
+This seems strange until you realize that the direct sum of a collection
+of sheaves is the sheafification of what you think it should be.
+See discussion in Modules, Section \ref{modules-section-kernels}.
+Thus we conclude that in this case the complex of
+Lemma \ref{lemma-covering-resolution} has terms
+$$
+{\mathfrak C}^p(\mathcal{U}, \mathcal{F}) =
+\bigoplus\nolimits_{i_0 \ldots i_p}
+(j_{i_0 \ldots i_p})_* \mathcal{F}_{i_0 \ldots i_p}
+$$
+which is sometimes useful.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+\section{{\v C}ech cohomology of complexes}
+\label{section-cech-cohomology-of-complexes}
+
+\noindent
+In general for sheaves of abelian groups
+${\mathcal F}$ and ${\mathcal G}$ on $X$ there is a cup product map
+$$
+H^i(X, {\mathcal F}) \times H^j(X, {\mathcal G})
+\longrightarrow
+H^{i + j}(X, {\mathcal F} \otimes_{\mathbf Z} {\mathcal G}).
+$$
In this section we define it on the level of {\v C}ech cocycles by an
explicit formula. If you are worried about the fact that cohomology may not
+equal {\v C}ech cohomology, then you can use hypercoverings and still
+use the cocycle notation. This also has the advantage that
+it works to define the cup product for hypercohomology on any topos (insert
+future reference here).
+
+\medskip\noindent
+Let ${\mathcal F}^\bullet$ be a bounded below complex of presheaves of abelian
+groups on $X$. We can often compute $H^n(X, {\mathcal F}^\bullet)$
+using {\v C}ech cocycles. Namely, let
+${\mathcal U} : X = \bigcup_{i \in I} U_i$
+be an open covering of $X$. Since the {\v C}ech complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$
+(Definition \ref{definition-cech-complex})
+is functorial in the presheaf $\mathcal{F}$ we obtain a double complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet)$.
+The associated total complex to
+$\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet)$
+is the complex with degree $n$ term
+$$
+\text{Tot}^n(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+=
+\bigoplus\nolimits_{p + q = n}
+\prod\nolimits_{i_0\ldots i_p} {\mathcal F}^q(U_{i_0\ldots i_p})
+$$
+see
+Homology, Definition \ref{homology-definition-associated-simple-complex}.
+A typical element in $\text{Tot}^n$ will be denoted
+$\alpha = \{\alpha_{i_0\ldots i_p}\}$ where
+$\alpha_{i_0 \ldots i_p} \in \mathcal{F}^q(U_{i_0\ldots i_p})$.
+In other words the $\mathcal{F}$-degree of $\alpha_{i_0\ldots i_p}$ is
+$q = n - p$. This notation requires us to be aware of the degree $\alpha$
+lives in at all times. We indicate this situation by the formula
+$\deg_{\mathcal F}(\alpha_{i_0\ldots i_p}) = q$.
+According to our conventions in
+Homology, Definition \ref{homology-definition-associated-simple-complex}
+the differential of an element $\alpha$ of degree $n$ is given by
+$$
+d(\alpha)_{i_0\ldots i_{p + 1}}
+=
+\sum\nolimits_{j = 0}^{p + 1}
+(-1)^j \alpha_{i_0 \ldots \hat i_j \ldots i_{p + 1}} +
+(-1)^{p + 1}d_{{\mathcal F}}(\alpha_{i_0 \ldots i_{p + 1}})
+$$
+where $d_\mathcal{F}$ denotes the differential on the complex
+$\mathcal{F}^\bullet$.
+The expression $\alpha_{i_0 \ldots \hat i_j \ldots i_{p + 1}}$ means the
+restriction of $\alpha_{i_0 \ldots \hat i_j \ldots i_{p + 1}}
+\in {\mathcal F}(U_{i_0\ldots\hat i_j\ldots i_{p + 1}})$ to
+$U_{i_0 \ldots i_{p + 1}}$.
+
+\medskip\noindent
+The construction of
+$\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))$
is functorial in ${\mathcal F}^\bullet$. There is also a functorial
+transformation
+\begin{equation}
+\label{equation-global-sections-to-cech}
+\Gamma(X, {\mathcal F}^\bullet)
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+\end{equation}
+of complexes defined by the following rule: The section
+$s\in \Gamma(X, {\mathcal F}^n)$
+is mapped to the element $\alpha = \{\alpha_{i_0\ldots i_p}\}$
+with $\alpha_{i_0} = s|_{U_{i_0}}$ and $\alpha_{i_0\ldots i_p} = 0$
+for $p > 0$.
+
+\medskip\noindent
+Refinements. Let ${\mathcal V} = \{ V_j \}_{j\in J}$ be a
+refinement of ${\mathcal U}$. This means there is a map $t: J \to I$
+such that $V_j \subset U_{t(j)}$ for all $j\in J$. This gives
+rise to a functorial transformation
+\begin{equation}
+\label{equation-transformation}
+T_t :
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+\longrightarrow
\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal V}, {\mathcal F}^\bullet))
+\end{equation}
+defined by the rule
+$$
+T_t(\alpha)_{j_0\ldots j_p}
+=
+\alpha_{t(j_0)\ldots t(j_p)}|_{V_{j_0\ldots j_p}}.
+$$
+Given two maps $t, t' : J \to I$ as above the maps
+$T_t$ and $T_{t'}$ constructed above are homotopic.
+The homotopy is given by
+$$
+h(\alpha)_{j_0\ldots j_p}
+=
+\sum\nolimits_{a = 0}^{p}
+(-1)^a
+\alpha_{t(j_0)\ldots t(j_a) t'(j_a) \ldots t'(j_p)}
+$$
+for an element $\alpha$ of degree $n$. This works
+because of the following computation, again with
+$\alpha$ an element of degree $n$ (so $d(\alpha)$
+has degree $n + 1$ and $h(\alpha)$ has degree $n - 1$):
+\begin{align*}
+(
+d(h(\alpha)) + h(d(\alpha))
+)_{j_0\ldots j_p}
+= &
+\sum\nolimits_{k = 0}^p
+(-1)^k
+h(\alpha)_{j_0 \ldots \hat j_k \ldots j_p}
++ \\
+&
+(-1)^p
+d_{\mathcal F}(h(\alpha)_{j_0 \ldots j_p})
++ \\
+&
+\sum\nolimits_{a = 0}^p
+(-1)^a
+d(\alpha)_{t(j_0) \ldots t(j_a) t'(j_a) \ldots t'(j_p)}
+\\
+= &
+\sum\nolimits_{k = 0}^p
+\sum\nolimits_{a = 0}^{k - 1}
+(-1)^{k + a}
+\alpha_{t(j_0)\ldots t(j_a)t'(j_a)\ldots \hat{t'(j_k)}\ldots t'(j_p)}
++ \\
+&
+\sum\nolimits_{k = 0}^p
+\sum\nolimits_{a = k + 1}^p
+(-1)^{k + a - 1}
+\alpha_{t(j_0)\ldots \hat{t(j_k)}\ldots t(j_a)t'(j_a)\ldots t'(j_p)}
++ \\
+&
+\sum\nolimits_{a = 0}^p
+(-1)^{p + a}
+d_{\mathcal F}(\alpha_{t(j_0)\ldots t(j_a) t'(j_a) \ldots t'(j_p)})
++ \\
+&
+\sum\nolimits_{a = 0}^p
+\sum\nolimits_{k = 0}^a
+(-1)^{a + k}
+\alpha_{t(j_0)\ldots\hat{t(j_k)}\ldots t(j_a)t'(j_a)\ldots t'(j_p)}
++ \\
+&
+\sum\nolimits_{a = 0}^p
+\sum\nolimits_{k = a}^p
+(-1)^{a + k + 1}
+\alpha_{t(j_0) \ldots t(j_a) t'(j_a) \ldots \hat{t'(j_k)} \ldots t'(j_p)}
++ \\
+&
+\sum\nolimits_{a = 0}^p
+(-1)^{a + p + 1}
+d_{\mathcal F}(\alpha_{t(j_0)\ldots t(j_a) t'(j_a) \ldots t'(j_p)})
+\\
+= &
+\alpha_{t'(j_0)\ldots t'(j_p)} +
+(-1)^{2p + 1}\alpha_{t(j_0)\ldots t(j_p)}
+\\
+= &
+T_{t'}(\alpha)_{j_0\ldots j_p} - T_t(\alpha)_{j_0\ldots j_p}
+\end{align*}
+We leave it to the reader to verify the cancellations. (Note that the
+terms having both $k$ and $a$ in the 1st, 2nd and 4th, 5th summands
+cancel, except the ones where $a = k$ which only occur in the 4th and 5th
+and these cancel against each other except for the two desired terms.)
+It follows that the induced map
+$$
+H^n(T_t) :
+H^n(
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+)
+\to
+H^n(
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal V}, {\mathcal F}^\bullet))
+)
+$$
+is independent of the choice of $t$. We define
{\it {\v C}ech hypercohomology} as the colimit of the
+{\v C}ech cohomology groups
+over all refinements via the maps $H^\bullet(T_t)$.
+
+\medskip\noindent
+In the limit (over all open coverings of $X$) the following lemma provides
+a map of {\v C}ech hypercohomology into cohomology, which is often an
+isomorphism and is always an isomorphism if we use hypercoverings.
+
+\begin{lemma}
+\label{lemma-cech-complex-complex}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be
+an open covering. For a bounded below complex $\mathcal{F}^\bullet$
+of $\mathcal{O}_X$-modules there is a canonical map
+$$
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet))
+\longrightarrow
+R\Gamma(X, \mathcal{F}^\bullet)
+$$
+functorial in $\mathcal{F}^\bullet$ and compatible with
+(\ref{equation-global-sections-to-cech}) and (\ref{equation-transformation}).
+There is a spectral sequence $(E_r, d_r)_{r \geq 0}$ with
+$$
+E_2^{p, q} =
+H^p(\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U},
\underline{H}^q(\mathcal{F}^\bullet))))
+$$
+converging to $H^{p + q}(X, \mathcal{F}^\bullet)$.
+\end{lemma}
+
+\begin{proof}
+Let ${\mathcal I}^\bullet$ be a bounded below complex of injectives.
+The map (\ref{equation-global-sections-to-cech}) for
+$\mathcal{I}^\bullet$ is a map
+$\Gamma(X, {\mathcal I}^\bullet) \to
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal I}^\bullet))$.
+This is a quasi-isomorphism of complexes of abelian groups
+as follows from
+Homology, Lemma \ref{homology-lemma-double-complex-gives-resolution}
+applied to the double complex
+$\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal I}^\bullet)$ using
+Lemma \ref{lemma-injective-trivial-cech}.
+Suppose ${\mathcal F}^\bullet \to {\mathcal I}^\bullet$ is a quasi-isomorphism
+of ${\mathcal F}^\bullet$ into a bounded below complex of injectives.
+Since $R\Gamma(X, {\mathcal F}^\bullet)$ is represented by the complex
+$\Gamma(X, {\mathcal I}^\bullet)$ we obtain the map of the lemma
+using
+$$
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal I}^\bullet)).
+$$
+We omit the verification of functoriality and compatibilities.
+To construct the spectral sequence of the lemma, choose a Cartan-Eilenberg
+resolution $\mathcal{F}^\bullet \to \mathcal{I}^{\bullet, \bullet}$, see
+Derived Categories, Lemma \ref{derived-lemma-cartan-eilenberg}. In this
+case $\mathcal{F}^\bullet \to \text{Tot}(\mathcal{I}^{\bullet, \bullet})$
+is an injective resolution and hence
+$$
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U},
+\text{Tot}({\mathcal I}^{\bullet, \bullet})))
+$$
+computes $R\Gamma(X, \mathcal{F}^\bullet)$ as we've seen above.
+By Homology, Remark \ref{homology-remark-triple-complex}
+we can view this as the total complex associated to the
+triple complex
+$\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal I}^{\bullet, \bullet})$
+hence, using the same remark we can view it as the total complex
associated to the double complex $A^{\bullet, \bullet}$ with terms
+$$
+A^{n, m} =
+\bigoplus\nolimits_{p + q = n}
+\check{\mathcal{C}}^p({\mathcal U}, \mathcal{I}^{q, m})
+$$
+Since $\mathcal{I}^{q, \bullet}$ is an injective resolution of
+$\mathcal{F}^q$ we can apply the first spectral sequence associated to
+$A^{\bullet, \bullet}$
+(Homology, Lemma \ref{homology-lemma-ss-double-complex})
+to get a spectral sequence with
+$$
+E_1^{n, m} =
+\bigoplus\nolimits_{p + q = n}
+\check{\mathcal{C}}^p(\mathcal{U}, \underline{H}^m(\mathcal{F}^q))
+$$
+which is the $n$th term of the complex
+$\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U},
\underline{H}^m(\mathcal{F}^\bullet)))$. Hence we obtain
+$E_2$ terms as described in the lemma. Convergence by
+Homology, Lemma \ref{homology-lemma-first-quadrant-ss}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cech-complex-complex-computes}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be
+an open covering. Let $\mathcal{F}^\bullet$ be a bounded below complex
+of $\mathcal{O}_X$-modules. If $H^i(U_{i_0 \ldots i_p}, \mathcal{F}^q) = 0$
+for all $i > 0$ and all $p, i_0, \ldots, i_p, q$, then the map
+$
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet))
+\to
+R\Gamma(X, \mathcal{F}^\bullet)
+$
+of Lemma \ref{lemma-cech-complex-complex} is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Immediate from the spectral sequence of Lemma \ref{lemma-cech-complex-complex}.
+\end{proof}
+
+\begin{remark}
+\label{remark-shift-complex-cech-complex}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\mathcal{U} : X = \bigcup_{i \in I} U_i$ be
+an open covering. Let $\mathcal{F}^\bullet$ be a bounded below complex
+of $\mathcal{O}_X$-modules. Let $b$ be an integer.
+We claim there is a commutative diagram
+$$
+\xymatrix{
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet))[b]
+\ar[r] \ar[d]_\gamma &
+R\Gamma(X, \mathcal{F}^\bullet)[b] \ar[d] \\
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet[b]))
+\ar[r] &
+R\Gamma(X, \mathcal{F}^\bullet[b])
+}
+$$
+in the derived category where the map $\gamma$ is the map on complexes
+constructed in Homology, Remark \ref{homology-remark-shift-double-complex}.
+This makes sense because the double complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet[b])$
+is clearly the same as the double complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet)[0, b]$
+introduced in Homology, Remark \ref{homology-remark-shift-double-complex}.
+To check that the diagram commutes, we may choose an injective resolution
+$\mathcal{F}^\bullet \to \mathcal{I}^\bullet$ as in the proof of
+Lemma \ref{lemma-cech-complex-complex}. Chasing diagrams, we see that
+it suffices to check the diagram commutes when we replace $\mathcal{F}^\bullet$
+by $\mathcal{I}^\bullet$. Then we consider the extended diagram
+$$
+\xymatrix{
+\Gamma(X, \mathcal{I}^\bullet)[b] \ar[r] \ar[d] &
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))[b]
+\ar[r] \ar[d]_\gamma &
+R\Gamma(X, \mathcal{I}^\bullet)[b] \ar[d] \\
+\Gamma(X, \mathcal{I}^\bullet[b]) \ar[r] &
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet[b]))
+\ar[r] &
+R\Gamma(X, \mathcal{I}^\bullet[b])
+}
+$$
+where the left horizontal arrows are (\ref{equation-global-sections-to-cech}).
Since in this case the horizontal arrows are isomorphisms in the derived
+category (see proof of Lemma \ref{lemma-cech-complex-complex}) it
+suffices to show that the left square commutes. This is true because
+the map $\gamma$ uses the sign $1$ on the summands
+$\check{\mathcal{C}}^0(\mathcal{U}, \mathcal{I}^{q + b})$, see
+formula in Homology, Remark \ref{homology-remark-shift-double-complex}.
+\end{remark}
+
+\noindent
+Let $X$ be a topological space, let $\mathcal{U} : X = \bigcup_{i \in I} U_i$
+be an open covering, and let $\mathcal{F}^\bullet$ be a bounded below
+complex of presheaves of abelian groups. Consider the map
+$\tau :
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+\to
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))$
+defined by
+$$
+\tau(\alpha)_{i_0 \ldots i_p} = (-1)^{p(p + 1)/2} \alpha_{i_p \ldots i_0}.
+$$
+Then we have for an element $\alpha$ of degree $n$ that
+\begin{align*}
+& d(\tau(\alpha))_{i_0 \ldots i_{p + 1}} \\
+& =
+\sum\nolimits_{j = 0}^{p + 1}
+(-1)^j
+\tau(\alpha)_{i_0 \ldots \hat i_j \ldots i_{p + 1}}
++
+(-1)^{p + 1}
+d_{\mathcal F}(\tau(\alpha)_{i_0 \ldots i_{p + 1}})
+\\
+& =
+\sum\nolimits_{j = 0}^{p + 1}
+(-1)^{j + \frac{p(p + 1)}{2}}
+\alpha_{i_{p + 1} \ldots \hat i_j \ldots i_0}
++
+(-1)^{p + 1 + \frac{(p + 1)(p + 2)}{2}}
+d_{\mathcal F}(\alpha_{i_{p + 1} \ldots i_0})
+\end{align*}
+On the other hand we have
+\begin{align*}
+& \tau(d(\alpha))_{i_0\ldots i_{p + 1}} \\
+& =
+(-1)^{\frac{(p + 1)(p + 2)}{2}} d(\alpha)_{i_{p + 1} \ldots i_0}
+\\
+& =
+(-1)^{\frac{(p + 1)(p + 2)}{2}}
+\left(
+\sum\nolimits_{j = 0}^{p + 1}
+(-1)^j
+\alpha_{i_{p + 1}\ldots \hat i_{p + 1 - j} \ldots i_0}
++
+(-1)^{p + 1}
+d_{\mathcal F}(\alpha_{i_{p + 1}\ldots i_0})
+\right)
+\end{align*}
+Thus we conclude that $d(\tau(\alpha)) = \tau(d(\alpha))$
+because $p(p + 1)/2 \equiv (p + 1)(p + 2)/2 + p + 1 \bmod 2$. In other words
+$\tau$ is an endomorphism of the complex
+$\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))$.
+Note that the diagram
+$$
+\begin{matrix}
+\Gamma(X, {\mathcal F}^\bullet) &
+\longrightarrow &
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet)) \\
+\downarrow \text{id} & & \downarrow \tau \\
+\Gamma(X, {\mathcal F}^\bullet) &
+\longrightarrow &
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+\end{matrix}
+$$
+commutes. In addition $\tau$ is clearly compatible with refinements.
+This suggests that $\tau$ acts as the identity on {\v C}ech cohomology
+(i.e., in the limit -- provided {\v C}ech hypercohomology agrees with
+hypercohomology, which is always the case if we use hypercoverings).
+We claim that $\tau$ actually is homotopic to the identity on the
+total {\v C}ech complex
+$\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))$.
+To prove this, we use as homotopy
+$$
+h(\alpha)_{i_0 \ldots i_p}
+=
+\sum\nolimits_{a = 0}^p
+\epsilon_p(a)
+\alpha_{i_0 \ldots i_a i_p \ldots i_a}
+\quad\text{with}\quad
+\epsilon_p(a) = (-1)^{\frac{(p - a)(p - a - 1)}{2} + p}
+$$
+for $\alpha$ of degree $n$. As usual we omit writing
+$|_{U_{i_0 \ldots i_p}}$. This works
+because of the following computation, again with
+$\alpha$ an element of degree $n$:
+\begin{align*}
+(d(h(\alpha)) + h(d(\alpha)))_{i_0 \ldots i_p}
+= &
+\sum\nolimits_{k = 0}^p
+(-1)^k
+h(\alpha)_{i_0 \ldots \hat i_k \ldots i_p}
++ \\
+&
+(-1)^p
+d_{\mathcal F}(h(\alpha)_{i_0 \ldots i_p})
++ \\
+&
+\sum\nolimits_{a = 0}^p
+\epsilon_p(a)
+d(\alpha)_{i_0 \ldots i_a i_p \ldots i_a}
+\\
+= &
+\sum\nolimits_{k = 0}^p
+\sum\nolimits_{a = 0}^{k - 1}
+(-1)^k \epsilon_{p - 1}(a)
+\alpha_{i_0 \ldots i_a i_p \ldots \hat{i_k} \ldots i_a}
++ \\
+&
+\sum\nolimits_{k = 0}^p
+\sum\nolimits_{a = k + 1}^p
+(-1)^k \epsilon_{p - 1}(a - 1)
+\alpha_{i_0 \ldots \hat{i_k} \ldots i_a i_p \ldots i_a}
++ \\
+&
+\sum\nolimits_{a = 0}^p
+(-1)^p \epsilon_p(a)
+d_{\mathcal F}(\alpha_{i_0 \ldots i_a i_p \ldots i_a})
++ \\
+&
+\sum\nolimits_{a = 0}^p
+\sum\nolimits_{k = 0}^a
+\epsilon_p(a) (-1)^k
+\alpha_{i_0 \ldots \hat{i_k} \ldots i_a i_p \ldots i_a}
++ \\
+&
+\sum\nolimits_{a = 0}^p
+\sum\nolimits_{k = a}^p
+\epsilon_p(a) (-1)^{p + a + 1 - k}
+\alpha_{i_0 \ldots i_a i_p \ldots \hat{i_k} \ldots i_a}
++ \\
+&
+\sum\nolimits_{a = 0}^p
+\epsilon_p(a) (-1)^{p + 1}
+d_{\mathcal F}(\alpha_{i_0 \ldots i_a i_p \ldots i_a})
+\\
+= &
+\epsilon_p(0) \alpha_{i_p \ldots i_0} +
+\epsilon_p(p) (-1)^{p + 1} \alpha_{i_0 \ldots i_p} \\
+= &
+(-1)^{\frac{p(p + 1)}{2}}\alpha_{i_p \ldots i_0}
+- \alpha_{i_0 \ldots i_p}
+\end{align*}
+The cancellations follow because
+$$
+(-1)^k \epsilon_{p - 1}(a) + \epsilon_p(a)(-1)^{p + a + 1 - k} = 0
+\quad\text{and}\quad
+(-1)^k\epsilon_{p - 1}(a - 1) + \epsilon_p(a) (-1)^k = 0
+$$
We leave it to the reader to verify these sign identities.
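For the skeptical reader, here is a short Python sketch (an informal aid of our own devising, not part of the Stacks project scripts) which verifies both cancellation identities by brute force for small $p$:

```python
# Check the cancellation identities for the homotopy h, where
# eps_p(a) = (-1)^((p - a)(p - a - 1)/2 + p) as in the text.

def eps(p, a):
    # (p - a)(p - a - 1) is a product of consecutive integers, hence even
    return (-1) ** (((p - a) * (p - a - 1)) // 2 + p)

def cancellations_hold(pmax=12):
    for p in range(1, pmax + 1):
        for k in range(0, p + 1):
            # (-1)^k eps_{p-1}(a) + eps_p(a) (-1)^(p + a + 1 - k) = 0
            for a in range(0, p):
                if (-1) ** k * eps(p - 1, a) \
                        + eps(p, a) * (-1) ** (p + a + 1 - k) != 0:
                    return False
            # (-1)^k eps_{p-1}(a - 1) + eps_p(a) (-1)^k = 0
            for a in range(1, p + 1):
                if (-1) ** k * eps(p - 1, a - 1) + eps(p, a) * (-1) ** k != 0:
                    return False
    return True
```

Both identities amount to elementary congruences mod $2$; the sketch simply checks them for all admissible $a$ and $k$ up to the given bound.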
+
+\medskip\noindent
+Suppose we have two bounded below complexes of abelian sheaves
+${\mathcal F}^\bullet$ and ${\mathcal G}^\bullet$. We define the complex
+$\text{Tot}({\mathcal F}^\bullet\otimes_{\mathbf Z} {\mathcal G}^\bullet)$
to be the complex with terms
$\bigoplus_{p + q = n} {\mathcal F}^p \otimes_{\mathbf Z} {\mathcal G}^q$
+and differential according to the rule
+\begin{equation}
+\label{equation-differential-tensor-product-complexes}
+d(\alpha \otimes \beta) =
+d(\alpha)\otimes \beta + (-1)^{\deg(\alpha)} \alpha \otimes d(\beta)
+\end{equation}
+when $\alpha$ and $\beta$ are homogeneous, see
+Homology, Definition \ref{homology-definition-associated-simple-complex}.
+
+\medskip\noindent
+Suppose that $M^\bullet$ and $N^\bullet$ are two bounded below
+complexes of abelian groups. Then if $m$, resp.\ $n$
+is a cocycle for $M^\bullet$, resp.\ $N^\bullet$, it is immediate
+that $m \otimes n$ is a cocycle for $\text{Tot}(M^\bullet\otimes N^\bullet)$.
Hence we obtain a cup product
+$$
+H^i(M^\bullet) \times H^j(N^\bullet)
+\longrightarrow
H^{i + j}(\text{Tot}(M^\bullet\otimes N^\bullet)).
+$$
+This is discussed also in
+More on Algebra, Section \ref{more-algebra-section-products-tor}.
+
+\medskip\noindent
+So the construction of the cup product in hypercohomology
+of complexes rests on a construction of a map of complexes
+\begin{equation}
+\label{equation-needs-signs}
+\text{Tot}\left(
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+\otimes_{\mathbf Z}
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal G}^\bullet))
+\right)
+\longrightarrow
+\text{Tot}(
+\check{\mathcal{C}}^\bullet({\mathcal U},
+\text{Tot}({\mathcal F}^\bullet\otimes {\mathcal G}^\bullet)
+))
+\end{equation}
+This map is denoted $\cup$ and is given by the rule
+$$
+(\alpha \cup \beta)_{i_0 \ldots i_p}
+=
+\sum\nolimits_{r = 0}^p
+\epsilon(n, m, p, r)
\alpha_{i_0 \ldots i_r} \otimes \beta_{i_r \ldots i_p}
+$$
+where $\alpha$ has degree $n$ and $\beta$ has degree $m$
+and with
+$$
+\epsilon(n, m, p, r) = (-1)^{(p + r)n + rp + r}.
+$$
+Note that $\epsilon(n, m, p, n) = 1$. Hence if
$\mathcal{F}^\bullet = \mathcal{F}[0]$ is the complex
consisting of a single abelian sheaf $\mathcal{F}$ placed in degree $0$,
then there are no signs in the formula for $\cup$ (as in that case
+$\alpha_{i_0 \ldots i_r} = 0$ unless $r = n$).
+For an explanation of why there has to be a sign and how to
compute it see \cite[Expos\'e XVII]{SGA4} by Deligne.
To check that (\ref{equation-needs-signs})
is a map of complexes we have to show that
+$$
+d(\alpha \cup \beta) =
+d(\alpha) \cup \beta +
+(-1)^{\deg(\alpha)} \alpha \cup d(\beta)
+$$
+by the definition of the differential on
+$\text{Tot}(
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+\otimes_{\mathbf Z}
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal G}^\bullet))
+)$
+as given in
+Homology, Definition \ref{homology-definition-associated-simple-complex}.
+We compute first
+\begin{align*}
+d(\alpha \cup \beta)_{i_0 \ldots i_{p + 1}}
+= &
+\sum\nolimits_{j = 0}^{p + 1}
+(-1)^j
+(\alpha \cup \beta)_{i_0 \ldots \hat i_j \ldots i_{p + 1}}
++
+(-1)^{p + 1}
+d_{{\mathcal F} \otimes {\mathcal G}}
+((\alpha \cup \beta)_{i_0 \ldots i_{p + 1}})
+\\
+= &
+\sum\nolimits_{j = 0}^{p + 1}
+\sum\nolimits_{r = 0}^{j - 1}
+(-1)^j \epsilon(n, m, p, r)
+\alpha_{i_0 \ldots i_r} \otimes \beta_{i_r \ldots \hat i_j \ldots i_{p + 1}}
++ \\
+&
+\sum\nolimits_{j = 0}^{p + 1}
+\sum\nolimits_{r = j + 1}^{p + 1}
+(-1)^j \epsilon(n, m, p, r - 1)
+\alpha_{i_0 \ldots \hat i_j \ldots i_r} \otimes \beta_{i_r \ldots i_{p + 1}}
++ \\
+&
+\sum\nolimits_{r = 0}^{p + 1}
+(-1)^{p + 1} \epsilon(n, m, p + 1, r)
+d_{{\mathcal F} \otimes {\mathcal G}}
+(\alpha_{i_0 \ldots i_r} \otimes \beta_{i_r \ldots i_{p + 1}})
+\end{align*}
+and note that the summands in the last term equal
+$$
+(-1)^{p + 1} \epsilon(n, m, p + 1, r)
+\left(
+d_{\mathcal F}(\alpha_{i_0 \ldots i_r}) \otimes
+\beta_{i_r \ldots i_{p + 1}} +
+(-1)^{n - r}
+\alpha_{i_0 \ldots i_r} \otimes d_{\mathcal G}(\beta_{i_r \ldots i_{p + 1}})
+\right).
+$$
+because $\deg_\mathcal{F}(\alpha_{i_0 \ldots i_r}) = n - r$.
+On the other hand
+\begin{align*}
+(d(\alpha) \cup \beta)_{i_0\ldots i_{p + 1}}
+= &
+\sum\nolimits_{r = 0}^{p + 1}
+\epsilon(n + 1, m, p + 1, r)
+d(\alpha)_{i_0\ldots i_r} \otimes \beta_{i_r\ldots i_{p + 1}}
+\\
+= &
+\sum\nolimits_{r = 0}^{p + 1}
+\sum\nolimits_{j = 0}^{r}
+\epsilon(n + 1, m, p + 1, r) (-1)^j
+\alpha_{i_0\ldots\hat{i_j}\ldots i_r} \otimes \beta_{i_r\ldots i_{p + 1}}
++ \\
+&
+\sum\nolimits_{r = 0}^{p + 1}
+\epsilon(n + 1, m, p + 1, r) (-1)^r
+d_{\mathcal F}(\alpha_{i_0 \ldots i_r}) \otimes \beta_{i_r\ldots i_{p + 1}}
+\end{align*}
+and
+\begin{align*}
+(\alpha \cup d(\beta))_{i_0\ldots i_{p + 1}}
+= &
+\sum\nolimits_{r = 0}^{p + 1}
+\epsilon(n, m + 1, p + 1, r)
+\alpha_{i_0 \ldots i_r} \otimes d(\beta)_{i_r \ldots i_{p + 1}}
+\\
+= &
+\sum\nolimits_{r = 0}^{p + 1}
+\sum\nolimits_{j = r}^{p + 1}
+\epsilon(n, m + 1, p + 1, r) (-1)^{j - r}
+\alpha_{i_0 \ldots i_r} \otimes \beta_{i_r \ldots \hat{i_j}\ldots i_{p + 1}}
++ \\
+&
+\sum\nolimits_{r = 0}^{p + 1}
+\epsilon(n, m + 1, p + 1, r) (-1)^{p + 1 - r}
+\alpha_{i_0 \ldots i_r} \otimes d_{\mathcal G}(\beta_{i_r \ldots i_{p + 1}})
+\end{align*}
+The desired equality holds if we have
+\begin{align*}
+(-1)^{p + 1} \epsilon(n, m, p + 1, r)
+& =
+\epsilon(n + 1, m, p + 1, r) (-1)^r \\
+(-1)^{p + 1} \epsilon(n, m, p + 1, r) (-1)^{n - r}
+& =
+(-1)^n \epsilon(n, m + 1, p + 1, r) (-1)^{p + 1 - r} \\
+\epsilon(n + 1, m, p + 1, r) (-1)^r
+& =
+(-1)^{1 + n} \epsilon(n, m + 1, p + 1, r - 1) \\
+(-1)^j \epsilon(n, m, p, r)
+& =
+(-1)^n \epsilon(n, m + 1, p + 1, r) (-1)^{j - r} \\
+(-1)^j \epsilon(n, m, p, r - 1)
+& =
+\epsilon(n + 1, m, p + 1, r) (-1)^j
+\end{align*}
+(The third equality is necessary to get the terms with $r = j$
+from $d(\alpha) \cup \beta$ and $(-1)^n \alpha \cup d(\beta)$
+to cancel each other.) We leave the verifications to the reader.
+(Alternatively, check the script signs.gp in the scripts subdirectory
+of the Stacks project.)
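As an informal complement to signs.gp, the following Python sketch (our own; the function names are made up) brute-forces the five identities over a small range, together with the earlier claim that $\epsilon(n, m, p, n) = 1$:

```python
# Verify the five sign identities for
# eps(n, m, p, r) = (-1)^((p + r) n + r p + r).

def sgn(e):
    # (-1)^e, computed mod 2 so that negative exponents are safe
    return (-1) ** (e % 2)

def eps(n, m, p, r):
    return sgn((p + r) * n + r * p + r)

def identities_hold(N=5):
    for n in range(N):
        for m in range(N):
            for p in range(N):
                for r in range(p + 2):
                    for j in range(p + 2):
                        checks = [
                            sgn(p + 1) * eps(n, m, p + 1, r)
                                == eps(n + 1, m, p + 1, r) * sgn(r),
                            sgn(p + 1) * eps(n, m, p + 1, r) * sgn(n - r)
                                == sgn(n) * eps(n, m + 1, p + 1, r) * sgn(p + 1 - r),
                            r == 0 or eps(n + 1, m, p + 1, r) * sgn(r)
                                == sgn(1 + n) * eps(n, m + 1, p + 1, r - 1),
                            sgn(j) * eps(n, m, p, r)
                                == sgn(n) * eps(n, m + 1, p + 1, r) * sgn(j - r),
                            r == 0 or sgn(j) * eps(n, m, p, r - 1)
                                == eps(n + 1, m, p + 1, r) * sgn(j),
                        ]
                        if not all(checks):
                            return False
    return True
```

Note that $\epsilon(n, m, p, r)$ does not actually depend on $m$, so each identity is a polynomial congruence mod $2$ in $n$, $p$, $r$, $j$ alone.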
+
+\medskip\noindent
+Associativity of the cup product. Suppose that ${\mathcal F}^\bullet$,
${\mathcal G}^\bullet$ and ${\mathcal H}^\bullet$ are bounded below
complexes of abelian sheaves on $X$. The obvious map
+(without the intervention of signs) is an isomorphism
+of complexes
+$$
+\text{Tot}(
+\text{Tot}({\mathcal F}^\bullet \otimes_{\mathbf Z} {\mathcal G}^\bullet)
+\otimes_{\mathbf Z} {\mathcal H}^\bullet
+)
+\longrightarrow
+\text{Tot}(
+{\mathcal F}^\bullet \otimes_{\mathbf Z}
+\text{Tot}({\mathcal G}^\bullet \otimes_{\mathbf Z} {\mathcal H}^\bullet)
+).
+$$
+Another way to say this is that the triple complex
+${\mathcal F}^\bullet \otimes_{\mathbf Z} {\mathcal G}^\bullet
+\otimes_{\mathbf Z} {\mathcal H}^\bullet$ gives rise to a well defined
+total complex with differential satisfying
+$$
+d(\alpha \otimes \beta \otimes \gamma) =
+d(\alpha) \otimes \beta \otimes \gamma +
+(-1)^{\deg(\alpha)} \alpha \otimes d(\beta) \otimes \gamma +
+(-1)^{\deg(\alpha) + \deg(\beta)} \alpha \otimes \beta \otimes d(\gamma)
+$$
+for homogeneous elements. Using this map it is easy to verify that
+$$
+(\alpha \cup \beta) \cup \gamma = \alpha \cup ( \beta \cup \gamma)
+$$
+namely, if $\alpha$ has degree $a$, $\beta$ has degree $b$ and
+$\gamma$ has degree $c$, then
+\begin{align*}
+((\alpha \cup \beta) \cup \gamma)_{i_0 \ldots i_p}
+= &
+\sum\nolimits_{r = 0}^p
+\epsilon(a + b, c, p, r)
+(\alpha \cup \beta)_{i_0 \ldots i_r} \otimes \gamma_{i_r \ldots i_p}
+\\
+= &
+\sum\nolimits_{r = 0}^p
+\sum\nolimits_{s = 0}^r
+\epsilon(a + b, c, p, r) \epsilon(a, b, r, s)
+\alpha_{i_0 \ldots i_s} \otimes
+\beta_{i_s \ldots i_r} \otimes
+\gamma_{i_r \ldots i_p}
+\end{align*}
+and
+\begin{align*}
(\alpha \cup (\beta \cup \gamma))_{i_0\ldots i_p}
+= &
+\sum\nolimits_{s = 0}^p
+\epsilon(a, b + c, p, s)
+\alpha_{i_0 \ldots i_s} \otimes (\beta \cup \gamma)_{i_s \ldots i_p}
+\\
+= &
+\sum\nolimits_{s = 0}^p
+\sum\nolimits_{r = s}^p
+\epsilon(a, b + c, p, s) \epsilon(b, c, p - s, r - s)
+\alpha_{i_0 \ldots i_s} \otimes \beta_{i_s \ldots i_r} \otimes
+\gamma_{i_r \ldots i_p}
+\end{align*}
+and a trivial mod $2$ calculation shows the signs match up.
+(Alternatively, check the script signs.gp in the scripts subdirectory
+of the Stacks project.)
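Again as an informal aid (a Python sketch of our own, not the signs.gp script itself), the mod $2$ calculation for associativity can be brute-forced:

```python
# Check eps(a+b, c, p, r) eps(a, b, r, s) = eps(a, b+c, p, s) eps(b, c, p-s, r-s)
# for 0 <= s <= r <= p, where eps(n, m, p, r) = (-1)^((p + r) n + r p + r).

def eps(n, m, p, r):
    return (-1) ** (((p + r) * n + r * p + r) % 2)

def associativity_signs_match(N=5):
    for a in range(N):
        for b in range(N):
            for c in range(N):
                for p in range(N):
                    for r in range(p + 1):
                        for s in range(r + 1):
                            lhs = eps(a + b, c, p, r) * eps(a, b, r, s)
                            rhs = eps(a, b + c, p, s) * eps(b, c, p - s, r - s)
                            if lhs != rhs:
                                return False
    return True
```

The range of $(r, s)$ matches the double sums above: $s$ cuts $\alpha$ from $\beta$ and $r$ cuts $\beta$ from $\gamma$.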
+
+\medskip\noindent
+Finally, we indicate why the cup product preserves a graded commutative
+structure, at least on a cohomological level. For this we use the operator
$\tau$ introduced above. Let ${\mathcal F}^\bullet$ be a bounded below
complex of abelian sheaves, and assume we are given a graded commutative
+multiplication
+$$
+\wedge^\bullet :
+\text{Tot}({\mathcal F}^\bullet\otimes {\mathcal F}^\bullet)
+\longrightarrow
+{\mathcal F}^\bullet.
+$$
+This means the following: For $s$ a local section of
+${\mathcal F}^a$, and $t$ a local section of ${\mathcal F}^b$
+we have $s \wedge t$ a local section of ${\mathcal F}^{a + b}$.
+Graded commutative means we have
+$s \wedge t = (-1)^{ab} t \wedge s$. Since $\wedge$ is a map
+of complexes we have
+$d(s\wedge t) = d(s) \wedge t + (-1)^a s \wedge d(t)$.
+The composition
+$$
+\text{Tot}(
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+\otimes
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+)
+\to
+\text{Tot}(
+\check{\mathcal{C}}^\bullet({\mathcal U},
+\text{Tot}({\mathcal F}^\bullet\otimes_{\mathbf Z}{\mathcal F}^\bullet))
+)
+\to
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+$$
+induces a cup product on cohomology
+$$
+H^n(
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+)
+\times
+H^m(
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+)
+\longrightarrow
+H^{n + m}(
+\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))
+)
+$$
+and so in the limit also a product on {\v C}ech cohomology
+and therefore (using hypercoverings if needed) a product
+in cohomology of ${\mathcal F}^\bullet$. We claim this product
+(on cohomology) is graded commutative as well. To prove this
+we first consider an element $\alpha$ of degree $n$ in
+$\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))$
+and an element $\beta$ of degree $m$ in
+$\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}^\bullet))$
+and we compute
+\begin{align*}
+\wedge^\bullet(\alpha \cup \beta)_{i_0 \ldots i_p}
+= &
+\sum\nolimits_{r = 0}^p
+\epsilon(n, m, p, r)
+\alpha_{i_0 \ldots i_r} \wedge \beta_{i_r \ldots i_p} \\
+= &
+\sum\nolimits_{r = 0}^p
+\epsilon(n, m, p, r)
+(-1)^{\deg(\alpha_{i_0 \ldots i_r})\deg(\beta_{i_r \ldots i_p})}
+\beta_{i_r \ldots i_p} \wedge \alpha_{i_0 \ldots i_r}
+\end{align*}
+because $\wedge$ is graded commutative. On the other hand we have
+\begin{align*}
+\tau(\wedge^\bullet(\tau(\beta) \cup \tau(\alpha)))_{i_0 \ldots i_p}
+= &
+\chi(p)
+\sum\nolimits_{r = 0}^p
+\epsilon(m, n, p, r)
+\tau(\beta)_{i_p \ldots i_{p - r}} \wedge \tau(\alpha)_{i_{p - r} \ldots i_0}
+\\
+= &
+\chi(p)
+\sum\nolimits_{r = 0}^p
+\epsilon(m, n, p, r) \chi(r) \chi(p - r)
+\beta_{i_{p - r} \ldots i_p} \wedge \alpha_{i_0 \ldots i_{p - r}}
+\\
+= &
+\chi(p)
+\sum\nolimits_{r = 0}^p
+\epsilon(m, n, p, p - r) \chi(r) \chi(p - r)
+\beta_{i_r \ldots i_p} \wedge \alpha_{i_0 \ldots i_r}
+\end{align*}
+where $\chi(t) = (-1)^{\frac{t(t + 1)}{2}}$. Since we proved earlier that
+$\tau$ acts as the identity on cohomology we have to verify that
+$$
+\epsilon(n, m, p, r)
+(-1)^{(n - r)(m - (p - r))}
+=
+(-1)^{nm} \chi(p)\epsilon(m, n, p, p - r) \chi(r) \chi(p - r)
+$$
+A trivial mod $2$ calculation shows these signs match up.
+(Alternatively, check the script signs.gp in the scripts subdirectory
+of the Stacks project.)
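One more informal Python sketch of our own, brute-forcing this last sign identity:

```python
# Check eps(n, m, p, r) (-1)^((n - r)(m - (p - r)))
#   = (-1)^(n m) chi(p) eps(m, n, p, p - r) chi(r) chi(p - r)
# where chi(t) = (-1)^(t(t + 1)/2) and
# eps(n, m, p, r) = (-1)^((p + r) n + r p + r).

def sgn(e):
    # (-1)^e, computed mod 2 so that negative exponents are safe
    return (-1) ** (e % 2)

def eps(n, m, p, r):
    return sgn((p + r) * n + r * p + r)

def chi(t):
    return sgn(t * (t + 1) // 2)

def commutativity_signs_match(N=6):
    for n in range(N):
        for m in range(N):
            for p in range(N):
                for r in range(p + 1):
                    lhs = eps(n, m, p, r) * sgn((n - r) * (m - (p - r)))
                    rhs = sgn(n * m) * chi(p) * eps(m, n, p, p - r) \
                        * chi(r) * chi(p - r)
                    if lhs != rhs:
                        return False
    return True
```

Expanding both exponents and using $t^2 \equiv t$ and $\chi(a + b) = \chi(a)\chi(b)(-1)^{ab}$ reduces both sides to $(-1)^{nm + rm}$, which is what the sketch confirms numerically.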
+
+\medskip\noindent
+Finally, we study the compatibility of cup product with boundary maps.
+Suppose that
+$$
+0
+\to
+{\mathcal F}_1^\bullet
+\to
+{\mathcal F}_2^\bullet
+\to
+{\mathcal F}_3^\bullet
+\to
+0
+\quad\text{and}\quad
+0
+\leftarrow
+{\mathcal G}_1^\bullet
+\leftarrow
+{\mathcal G}_2^\bullet
+\leftarrow
+{\mathcal G}_3^\bullet
+\leftarrow
+0
+$$
+are short exact sequences of bounded below complexes of abelian
+sheaves on $X$. Let ${\mathcal H}^\bullet$ be another bounded below
+complex of abelian sheaves, and suppose we have maps of complexes
+$$
+\gamma_i :
+\text{Tot}({\mathcal F}_i^\bullet \otimes_{\mathbf Z} {\mathcal G}_i^\bullet)
+\longrightarrow
+{\mathcal H}^\bullet
+$$
+which are compatible with the maps between the complexes, namely such that
+the diagrams
+$$
+\xymatrix{
+\text{Tot}({\mathcal F}_1^\bullet \otimes_{\mathbf Z} {\mathcal G}_1^\bullet)
+\ar[d]_{\gamma_1}
+&
+\text{Tot}({\mathcal F}_1^\bullet \otimes_{\mathbf Z} {\mathcal G}_2^\bullet)
+\ar[l] \ar[d]
+\\
+\mathcal{H}^\bullet &
+\text{Tot}({\mathcal F}_2^\bullet \otimes_{\mathbf Z} {\mathcal G}_2^\bullet)
+\ar[l]_-{\gamma_2}
+}
+$$
+and
+$$
+\xymatrix{
+\text{Tot}({\mathcal F}_2^\bullet \otimes_{\mathbf Z} {\mathcal G}_2^\bullet)
+\ar[d]_{\gamma_2}
+&
+\text{Tot}({\mathcal F}_2^\bullet \otimes_{\mathbf Z} {\mathcal G}_3^\bullet)
+\ar[l] \ar[d]
+\\
+\mathcal{H}^\bullet &
+\text{Tot}({\mathcal F}_3^\bullet \otimes_{\mathbf Z} {\mathcal G}_3^\bullet)
+\ar[l]_-{\gamma_3}
+}
+$$
+are commutative.
+
+\begin{lemma}
+\label{lemma-compute-sign-cup-product-boundaries}
+In the situation above, assume {\v C}ech cohomology agrees with cohomology
+for the sheaves $\mathcal{F}_i^p$ and $\mathcal{G}_j^q$.
+Let $a_3 \in H^n(X, \mathcal{F}_3^\bullet)$ and
+$b_1 \in H^m(X, \mathcal{G}_1^\bullet)$. Then we have
+$$
+\gamma_1( \partial a_3 \cup b_1) =
+(-1)^{n + 1} \gamma_3( a_3 \cup \partial b_1)
+$$
+in $H^{n + m}(X, \mathcal{H}^\bullet)$ where $\partial$ indicates the
+boundary map on cohomology associated to the short exact sequences of
+complexes above.
+\end{lemma}
+
+\begin{proof}
+We will use the following conventions and notation. We think of
+${\mathcal F}_1^p$ as a subsheaf of ${\mathcal F}_2^p$ and we think of
+${\mathcal G}_3^q$ as a subsheaf of ${\mathcal G}_2^q$. Hence if $s$ is
+a local section of ${\mathcal F}_1^p$ we use $s$ to denote
+the corresponding section of ${\mathcal F}_2^p$ as well. Similarly
+for local sections of ${\mathcal G}_3^q$. Furthermore,
+if $s$ is a local section of ${\mathcal F}_2^p$ then we denote
+$\bar s$ its image in ${\mathcal F}_3^p$. Similarly for the
+map ${\mathcal G}_2^q \to {\mathcal G}^q_1$. In particular if
+$s$ is a local section of ${\mathcal F}_2^p$ and $\bar s = 0$
+then $s$ is a local section of ${\mathcal F}_1^p$. The commutativity
+of the diagrams above implies, for local sections $s$ of
+${\mathcal F}_2^p$ and $t$ of ${\mathcal G}_3^q$ that
+$\gamma_2(s \otimes t) = \gamma_3(\bar s \otimes t)$ as sections of
+${\mathcal H}^{p + q}$.
+
+\medskip\noindent
+Let ${\mathcal U} : X = \bigcup_{i \in I} U_i$
+be an open covering of $X$. Suppose that $\alpha_3$,
+resp.\ $\beta_1$ is a degree $n$, resp.\ $m$ cocycle of
+$\text{Tot}(
+\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}_3^\bullet))$,
+resp.\ $\text{Tot}(
+\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal G}_1^\bullet))$
+representing $a_3$, resp.\ $b_1$. After refining $\mathcal{U}$ if necessary,
+we can find cochains $\alpha_2$, resp.\ $\beta_2$ of
+degree $n$, resp.\ $m$ in
+$\text{Tot}(
+\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}_2^\bullet))$,
+resp.\ $\text{Tot}(
+\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal G}_2^\bullet))$
+mapping to $\alpha_3$, resp.\ $\beta_1$.
+Then we see that
+$$
+\overline{d(\alpha_2)} = d(\bar \alpha_2) = 0
+\quad\text{and}\quad
+\overline{d(\beta_2)} = d(\bar \beta_2) = 0.
+$$
+This means that $\alpha_1 = d(\alpha_2)$ is a degree $n + 1$ cocycle in
+$\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal F}_1^\bullet))$
+representing $\partial a_3$. Similarly, $\beta_3 = d(\beta_2)$ is
+a degree $m + 1$ cocycle in
+$\text{Tot}(\check{\mathcal{C}}^\bullet({\mathcal U}, {\mathcal G}_3^\bullet))$
+representing $\partial b_1$.
+Thus we may compute
+\begin{align*}
+d(\gamma_2(\alpha_2 \cup \beta_2))
+& =
+\gamma_2(d(\alpha_2 \cup \beta_2))
+\\
+& =
+\gamma_2(d(\alpha_2) \cup \beta_2 + (-1)^n \alpha_2 \cup d(\beta_2) )
+\\
+& =
+\gamma_2( \alpha_1 \cup \beta_2) + (-1)^n \gamma_2( \alpha_2 \cup \beta_3)
+\\
+& =
+\gamma_1(\alpha_1 \cup \beta_1) + (-1)^n \gamma_3(\alpha_3 \cup \beta_3)
+\end{align*}
+So this even tells us that the sign is $(-1)^{n + 1}$ as indicated
+in the lemma\footnote{The sign depends on the convention for the
+signs in the long exact sequence in cohomology associated to a triangle
+in $D(X)$. The conventions in the Stacks project are (a) distinguished
+triangles correspond to termwise split exact sequences and (b) the boundary
+maps in the long exact sequence are given by the maps in the snake lemma
+without the intervention of signs. See
+Derived Categories, Section \ref{derived-section-homotopy-triangulated}.}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-boundary-derivation-over-cup-product}
+Let $X$ be a topological space. Let $\mathcal{O}' \to \mathcal{O}$ be a
+surjection of sheaves of rings whose kernel $\mathcal{I} \subset \mathcal{O}'$
has square zero. Then $M = H^1(X, \mathcal{I})$ is an
$R = H^0(X, \mathcal{O})$-module and the boundary map
+$\partial : R \to M$ associated to the short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}' \to \mathcal{O} \to 0
+$$
+is a derivation (Algebra, Definition \ref{algebra-definition-derivation}).
+\end{lemma}
+
+\begin{proof}
+The map $\mathcal{O}' \to \SheafHom(\mathcal{I}, \mathcal{I})$
+factors through $\mathcal{O}$ as $\mathcal{I} \cdot \mathcal{I} = 0$
+by assumption. Hence $\mathcal{I}$ is a sheaf of $\mathcal{O}$-modules
+and this defines the $R$-module structure on $M$.
The boundary map is additive, hence it suffices to prove
+the Leibniz rule. Let $f \in R$. Choose an open covering
+$\mathcal{U} : X = \bigcup U_i$ such that there exist
+$f_i \in \mathcal{O}'(U_i)$ lifting $f|_{U_i} \in \mathcal{O}(U_i)$.
+Observe that $f_i - f_j$ is an element of $\mathcal{I}(U_i \cap U_j)$.
+Then $\partial(f)$ corresponds to the {\v C}ech cohomology class of
+the $1$-cocycle $\alpha$ with $\alpha_{i_0i_1} = f_{i_0} - f_{i_1}$.
+(Observe that by Lemma \ref{lemma-cech-h1} the first {\v C}ech cohomology
+group with respect to $\mathcal{U}$ is a submodule of $M$.)
+Next, let $g \in R$ be a second element and assume (after possibly
+refining the open covering) that $g_i \in \mathcal{O}'(U_i)$ lifts
+$g|_{U_i} \in \mathcal{O}(U_i)$. Then we see that
+$\partial(g)$ is given by the cocycle $\beta$ with
+$\beta_{i_0i_1} = g_{i_0} - g_{i_1}$. Since $f_ig_i \in \mathcal{O}'(U_i)$
+lifts $fg|_{U_i}$ we see that
+$\partial(fg)$ is given by the cocycle $\gamma$ with
+$$
+\gamma_{i_0i_1} = f_{i_0}g_{i_0} - f_{i_1}g_{i_1} =
+(f_{i_0} - f_{i_1})g_{i_0} + f_{i_1}(g_{i_0} - g_{i_1}) =
+\alpha_{i_0i_1}g + f\beta_{i_0i_1}
+$$
+by our definition of the $\mathcal{O}$-module structure on $\mathcal{I}$.
+This proves the Leibniz rule and the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Flat resolutions}
+\label{section-flat}
+
+\noindent
+A reference for the material in this section is \cite{Spaltenstein}.
+Let $(X, \mathcal{O}_X)$ be a ringed space. By
+Modules, Lemma \ref{modules-lemma-module-quotient-flat}
+any $\mathcal{O}_X$-module is a quotient of a flat $\mathcal{O}_X$-module.
+By
+Derived Categories, Lemma \ref{derived-lemma-subcategory-left-resolution}
+any bounded above complex of $\mathcal{O}_X$-modules has a left
+resolution by a bounded above complex of flat $\mathcal{O}_X$-modules.
+However, for unbounded complexes, it turns out that flat resolutions
+aren't good enough.
+
+\begin{lemma}
+\label{lemma-derived-tor-exact}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{G}^\bullet$ be a complex of $\mathcal{O}_X$-modules.
+The functors
+$$
+K(\textit{Mod}(\mathcal{O}_X))
+\longrightarrow
+K(\textit{Mod}(\mathcal{O}_X)),
+\quad
+\mathcal{F}^\bullet \longmapsto
+\text{Tot}(\mathcal{G}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F}^\bullet)
+$$
+and
+$$
+K(\textit{Mod}(\mathcal{O}_X))
+\longrightarrow
+K(\textit{Mod}(\mathcal{O}_X)),
+\quad
+\mathcal{F}^\bullet \longmapsto
+\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathcal{G}^\bullet)
+$$
+are exact functors of triangulated categories.
+\end{lemma}
+
+\begin{proof}
+This follows from Derived Categories, Remark
+\ref{derived-remark-double-complex-as-tensor-product-of}.
+\end{proof}
+
+\begin{definition}
+\label{definition-K-flat}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+A complex $\mathcal{K}^\bullet$ of $\mathcal{O}_X$-modules is
+called {\it K-flat} if for every acyclic complex $\mathcal{F}^\bullet$
+of $\mathcal{O}_X$-modules the complex
+$$
+\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathcal{K}^\bullet)
+$$
+is acyclic.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-K-flat-quasi-isomorphism}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{K}^\bullet$ be a K-flat complex.
+Then the functor
+$$
+K(\textit{Mod}(\mathcal{O}_X))
+\longrightarrow
+K(\textit{Mod}(\mathcal{O}_X)), \quad
+\mathcal{F}^\bullet
+\longmapsto
+\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathcal{K}^\bullet)
+$$
+transforms quasi-isomorphisms into quasi-isomorphisms.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemma \ref{lemma-derived-tor-exact}
+and the fact that quasi-isomorphisms are characterized by having
+acyclic cones.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-K-flat-stalks}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{K}^\bullet$
+be a complex of $\mathcal{O}_X$-modules. Then $\mathcal{K}^\bullet$
+is K-flat if and only if for all $x \in X$ the complex
+$\mathcal{K}_x^\bullet$ of $\mathcal{O}_{X, x}$-modules is K-flat
+(More on Algebra, Definition \ref{more-algebra-definition-K-flat}).
+\end{lemma}
+
+\begin{proof}
+If $\mathcal{K}_x^\bullet$ is K-flat for all $x \in X$ then we see
+that $\mathcal{K}^\bullet$ is K-flat because $\otimes$ and
+direct sums commute with taking stalks and because we can check exactness
+at stalks, see
+Modules, Lemma \ref{modules-lemma-abelian}.
Conversely, assume $\mathcal{K}^\bullet$ is K-flat. Pick $x \in X$ and let
$M^\bullet$ be an acyclic complex of $\mathcal{O}_{X, x}$-modules.
+Then $i_{x, *}M^\bullet$ is an acyclic complex of $\mathcal{O}_X$-modules.
+Thus $\text{Tot}(i_{x, *}M^\bullet \otimes_{\mathcal{O}_X} \mathcal{K}^\bullet)$
+is acyclic. Taking stalks at $x$ shows that
+$\text{Tot}(M^\bullet \otimes_{\mathcal{O}_{X, x}} \mathcal{K}_x^\bullet)$
+is acyclic.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-product-K-flat}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+If $\mathcal{K}^\bullet$, $\mathcal{L}^\bullet$ are K-flat complexes
+of $\mathcal{O}_X$-modules, then
+$\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet)$
+is a K-flat complex of $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+Follows from the isomorphism
+$$
+\text{Tot}(\mathcal{M}^\bullet \otimes_{\mathcal{O}_X}
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet))
+=
+\text{Tot}(\text{Tot}(\mathcal{M}^\bullet \otimes_{\mathcal{O}_X}
+\mathcal{K}^\bullet) \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet)
+$$
+and the definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-K-flat-two-out-of-three}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $(\mathcal{K}_1^\bullet, \mathcal{K}_2^\bullet, \mathcal{K}_3^\bullet)$
+be a distinguished triangle in $K(\textit{Mod}(\mathcal{O}_X))$.
+If two out of three of $\mathcal{K}_i^\bullet$ are K-flat, so is the third.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemma \ref{lemma-derived-tor-exact}
+and the fact that in a distinguished triangle in
+$K(\textit{Mod}(\mathcal{O}_X))$
+if two out of three are acyclic, so is the third.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-K-flat-two-out-of-three-ses}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$0 \to \mathcal{K}_1^\bullet \to \mathcal{K}_2^\bullet \to
+\mathcal{K}_3^\bullet \to 0$ be a short exact sequence of complexes
+such that the terms of $\mathcal{K}_3^\bullet$ are flat $\mathcal{O}_X$-modules.
+If two out of three of $\mathcal{K}_i^\bullet$ are K-flat, so is the third.
+\end{lemma}
+
+\begin{proof}
+By Modules, Lemma \ref{modules-lemma-flat-tor-zero}
+for every complex $\mathcal{L}^\bullet$
+we obtain a short exact sequence
+$$
+0 \to
\text{Tot}(\mathcal{L}^\bullet \otimes_{\mathcal{O}_X} \mathcal{K}_1^\bullet)
\to
\text{Tot}(\mathcal{L}^\bullet \otimes_{\mathcal{O}_X} \mathcal{K}_2^\bullet)
\to
\text{Tot}(\mathcal{L}^\bullet \otimes_{\mathcal{O}_X} \mathcal{K}_3^\bullet)
+\to 0
+$$
+of complexes. Hence the lemma follows from the long exact sequence of
+cohomology sheaves and the definition of K-flat complexes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-K-flat}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of
+ringed spaces. The pullback of a K-flat complex of $\mathcal{O}_Y$-modules
+is a K-flat complex of $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+We can check this on stalks, see
+Lemma \ref{lemma-check-K-flat-stalks}.
+Hence this follows from
+Sheaves, Lemma \ref{sheaves-lemma-stalk-pullback-modules}
+and
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-K-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bounded-flat-K-flat}
+Let $(X, \mathcal{O}_X)$ be a ringed space. A bounded above complex
+of flat $\mathcal{O}_X$-modules is K-flat.
+\end{lemma}
+
+\begin{proof}
+We can check this on stalks, see
+Lemma \ref{lemma-check-K-flat-stalks}.
+Thus this lemma follows from
+Modules, Lemma \ref{modules-lemma-flat-stalks-flat}
+and
+More on Algebra, Lemma \ref{more-algebra-lemma-derived-tor-quasi-isomorphism}.
+\end{proof}
+
+\noindent
+In the following lemma by a colimit of a system of complexes we mean
+the termwise colimit.
+
+\begin{lemma}
+\label{lemma-colimit-K-flat}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{K}_1^\bullet \to \mathcal{K}_2^\bullet \to \ldots$
+be a system of K-flat complexes.
+Then $\colim_i \mathcal{K}_i^\bullet$ is K-flat.
+\end{lemma}
+
+\begin{proof}
+Because we are taking termwise colimits it is clear that
+$$
+\colim_i \text{Tot}(
+\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathcal{K}_i^\bullet)
+=
+\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X}
+\colim_i \mathcal{K}_i^\bullet)
+$$
+Hence the lemma follows from the fact that filtered colimits are
+exact.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-resolution-by-direct-sums-extensions-by-zero}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+For any complex $\mathcal{G}^\bullet$ of $\mathcal{O}_X$-modules
+there exists a commutative diagram of complexes of $\mathcal{O}_X$-modules
+$$
+\xymatrix{
+\mathcal{K}_1^\bullet \ar[d] \ar[r] &
+\mathcal{K}_2^\bullet \ar[d] \ar[r] & \ldots \\
+\tau_{\leq 1}\mathcal{G}^\bullet \ar[r] &
+\tau_{\leq 2}\mathcal{G}^\bullet \ar[r] & \ldots
+}
+$$
+with the following properties: (1) the vertical arrows are quasi-isomorphisms
+and termwise surjective,
+(2) each $\mathcal{K}_n^\bullet$ is a bounded above complex whose terms
+are direct sums of $\mathcal{O}_X$-modules of the form
+$j_{U!}\mathcal{O}_U$, and
+(3) the maps $\mathcal{K}_n^\bullet \to \mathcal{K}_{n + 1}^\bullet$ are
+termwise split injections whose cokernels are direct sums of
+$\mathcal{O}_X$-modules of the form $j_{U!}\mathcal{O}_U$. Moreover, the map
+$\colim \mathcal{K}_n^\bullet \to \mathcal{G}^\bullet$ is a quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+The existence of the diagram and properties (1), (2), (3) follows immediately
+from
+Modules, Lemma \ref{modules-lemma-module-quotient-flat}
+and
+Derived Categories, Lemma \ref{derived-lemma-special-direct-system}.
+The induced map
+$\colim \mathcal{K}_n^\bullet \to \mathcal{G}^\bullet$
+is a quasi-isomorphism because filtered colimits are exact.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-K-flat-resolution}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+For any complex $\mathcal{G}^\bullet$ there exists a K-flat complex
+$\mathcal{K}^\bullet$ whose terms are flat $\mathcal{O}_X$-modules
+and a quasi-isomorphism $\mathcal{K}^\bullet \to \mathcal{G}^\bullet$
+which is termwise surjective.
+\end{lemma}
+
+\begin{proof}
+Choose a diagram as in
+Lemma \ref{lemma-resolution-by-direct-sums-extensions-by-zero}.
+Each complex $\mathcal{K}_n^\bullet$ is a bounded
+above complex of flat modules, see
+Modules, Lemma \ref{modules-lemma-j-shriek-flat}.
+Hence $\mathcal{K}_n^\bullet$ is K-flat by
+Lemma \ref{lemma-bounded-flat-K-flat}.
+Thus $\colim \mathcal{K}_n^\bullet$ is K-flat by
+Lemma \ref{lemma-colimit-K-flat}.
+The induced map
+$\colim \mathcal{K}_n^\bullet \to \mathcal{G}^\bullet$
+is a quasi-isomorphism and termwise surjective by construction.
+Property (3) of Lemma \ref{lemma-resolution-by-direct-sums-extensions-by-zero}
+shows that $\colim \mathcal{K}_n^m$ is a direct sum of
+flat modules and hence flat, which proves the final assertion.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-tor-quasi-isomorphism-other-side}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\alpha : \mathcal{P}^\bullet \to \mathcal{Q}^\bullet$ be a
+quasi-isomorphism of K-flat complexes of $\mathcal{O}_X$-modules.
+For every complex $\mathcal{F}^\bullet$ of $\mathcal{O}_X$-modules
+the induced map
+$$
+\text{Tot}(\text{id}_{\mathcal{F}^\bullet} \otimes \alpha) :
+\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathcal{P}^\bullet)
+\longrightarrow
+\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathcal{Q}^\bullet)
+$$
+is a quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+Choose a quasi-isomorphism $\mathcal{K}^\bullet \to \mathcal{F}^\bullet$
+with $\mathcal{K}^\bullet$ a K-flat complex, see
+Lemma \ref{lemma-K-flat-resolution}.
+Consider the commutative diagram
+$$
+\xymatrix{
+\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X} \mathcal{P}^\bullet) \ar[r] \ar[d] &
+\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X} \mathcal{Q}^\bullet) \ar[d] \\
+\text{Tot}(\mathcal{F}^\bullet
+\otimes_{\mathcal{O}_X} \mathcal{P}^\bullet) \ar[r] &
+\text{Tot}(\mathcal{F}^\bullet
+\otimes_{\mathcal{O}_X} \mathcal{Q}^\bullet)
+}
+$$
+The result follows as by
+Lemma \ref{lemma-K-flat-quasi-isomorphism}
+the vertical arrows and the top horizontal arrow are quasi-isomorphisms.
+\end{proof}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{F}^\bullet$ be an object of $D(\mathcal{O}_X)$.
+Choose a K-flat resolution $\mathcal{K}^\bullet \to \mathcal{F}^\bullet$, see
+Lemma \ref{lemma-K-flat-resolution}.
+By
+Lemma \ref{lemma-derived-tor-exact}
+we obtain an exact functor of triangulated categories
+$$
+K(\mathcal{O}_X)
+\longrightarrow
+K(\mathcal{O}_X),
+\quad
+\mathcal{G}^\bullet
+\longmapsto
+\text{Tot}(\mathcal{G}^\bullet \otimes_{\mathcal{O}_X} \mathcal{K}^\bullet)
+$$
+By
+Lemma \ref{lemma-K-flat-quasi-isomorphism}
+this functor induces a functor
+$D(\mathcal{O}_X) \to D(\mathcal{O}_X)$ simply because
+$D(\mathcal{O}_X)$ is the localization of $K(\mathcal{O}_X)$
+at quasi-isomorphisms. By
+Lemma \ref{lemma-derived-tor-quasi-isomorphism-other-side}
+the resulting functor (up to isomorphism)
+does not depend on the choice of the K-flat resolution.
+
+\begin{definition}
+\label{definition-derived-tor}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{F}^\bullet$ be an object of $D(\mathcal{O}_X)$.
+The {\it derived tensor product}
+$$
+- \otimes_{\mathcal{O}_X}^{\mathbf{L}} \mathcal{F}^\bullet :
+D(\mathcal{O}_X)
+\longrightarrow
+D(\mathcal{O}_X)
+$$
+is the exact functor of triangulated categories described above.
+\end{definition}
+
+\noindent
+It is clear from our explicit constructions that
+there is a canonical isomorphism
+$$
+\mathcal{F}^\bullet \otimes_{\mathcal{O}_X}^{\mathbf{L}} \mathcal{G}^\bullet
+\cong
+\mathcal{G}^\bullet \otimes_{\mathcal{O}_X}^{\mathbf{L}} \mathcal{F}^\bullet
+$$
+for $\mathcal{G}^\bullet$ and $\mathcal{F}^\bullet$ in $D(\mathcal{O}_X)$.
+Here we use sign rules as given in More on Algebra, Section
+\ref{more-algebra-section-sign-rules}. Hence when we write
+$\mathcal{F}^\bullet \otimes_{\mathcal{O}_X}^{\mathbf{L}} \mathcal{G}^\bullet$
+we will usually be agnostic about which variable we use to
+define the derived tensor product.
+
+\begin{definition}
+\label{definition-tor}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{F}$, $\mathcal{G}$ be $\mathcal{O}_X$-modules.
+The {\it Tor}'s of $\mathcal{F}$ and $\mathcal{G}$ are defined by
+the formula
+$$
+\text{Tor}_p^{\mathcal{O}_X}(\mathcal{F}, \mathcal{G}) =
+H^{-p}(\mathcal{F} \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{G})
+$$
+with derived tensor product as defined above.
+\end{definition}
+
+\noindent
+This definition implies that for every short exact sequence
+of $\mathcal{O}_X$-modules
+$0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
+we have a long exact cohomology sequence
+$$
+\xymatrix{
+\mathcal{F}_1 \otimes_{\mathcal{O}_X} \mathcal{G} \ar[r] &
+\mathcal{F}_2 \otimes_{\mathcal{O}_X} \mathcal{G} \ar[r] &
+\mathcal{F}_3 \otimes_{\mathcal{O}_X} \mathcal{G} \ar[r] & 0 \\
+\text{Tor}_1^{\mathcal{O}_X}(\mathcal{F}_1, \mathcal{G}) \ar[r] &
+\text{Tor}_1^{\mathcal{O}_X}(\mathcal{F}_2, \mathcal{G}) \ar[r] &
+\text{Tor}_1^{\mathcal{O}_X}(\mathcal{F}_3, \mathcal{G}) \ar[ull]
+}
+$$
+for every $\mathcal{O}_X$-module $\mathcal{G}$. This will be called
+the long exact sequence of $\text{Tor}$ associated to the situation.
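+
+\noindent
+As a simple sanity check of the definition, if $X$ is a one point space, then
+$\textit{Mod}(\mathcal{O}_X)$ is equivalent to the category of modules over
+the ring $A = \Gamma(X, \mathcal{O}_X)$ and the construction above recovers
+the usual Tor modules of homological algebra:
+$$
+\text{Tor}_p^{\mathcal{O}_X}(\mathcal{F}, \mathcal{G}) =
+\text{Tor}_p^A(M, N)
+$$
+where $M$ and $N$ denote the $A$-modules corresponding to
+$\mathcal{F}$ and $\mathcal{G}$.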
+
+\begin{lemma}
+\label{lemma-flat-tor-zero}
+\begin{slogan}
+Tor measures the deviation from flatness.
+\end{slogan}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is a flat $\mathcal{O}_X$-module, and
+\item $\text{Tor}_1^{\mathcal{O}_X}(\mathcal{F}, \mathcal{G}) = 0$
+for every $\mathcal{O}_X$-module $\mathcal{G}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $\mathcal{F}$ is flat, then $\mathcal{F} \otimes_{\mathcal{O}_X} -$
+is an exact functor and the satellites vanish. Conversely assume (2)
+holds. Then if $\mathcal{G} \to \mathcal{H}$ is injective with cokernel
+$\mathcal{Q}$, the long exact sequence of $\text{Tor}$ shows that
+the kernel of
+$\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G} \to
+\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{H}$
+is a quotient of
+$\text{Tor}_1^{\mathcal{O}_X}(\mathcal{F}, \mathcal{Q})$
+which is zero by assumption. Hence $\mathcal{F}$ is flat.
+\end{proof}
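+
+\noindent
+For example, if $\mathcal{I} \subset \mathcal{O}_X$ is a sheaf of ideals,
+then combining the flatness of $\mathcal{O}_X$ with the long exact
+sequence of $\text{Tor}$ associated to
+$0 \to \mathcal{I} \to \mathcal{O}_X \to \mathcal{O}_X/\mathcal{I} \to 0$
+we obtain
+$$
+\text{Tor}_1^{\mathcal{O}_X}(\mathcal{O}_X/\mathcal{I}, \mathcal{G}) =
+\Ker(\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{G} \to \mathcal{G})
+$$
+for every $\mathcal{O}_X$-module $\mathcal{G}$.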
+
+\begin{lemma}
+\label{lemma-factor-through-K-flat}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $a : \mathcal{K}^\bullet \to \mathcal{L}^\bullet$ be a map of complexes
+of $\mathcal{O}_X$-modules. If $\mathcal{K}^\bullet$ is K-flat, then
+there exist a complex $\mathcal{N}^\bullet$ and maps of complexes
+$b : \mathcal{K}^\bullet \to \mathcal{N}^\bullet$
+and $c : \mathcal{N}^\bullet \to \mathcal{L}^\bullet$ such that
+\begin{enumerate}
+\item $\mathcal{N}^\bullet$ is K-flat,
+\item $c$ is a quasi-isomorphism,
+\item $a$ is homotopic to $c \circ b$.
+\end{enumerate}
+If the terms of $\mathcal{K}^\bullet$ are flat, then we may choose
+$\mathcal{N}^\bullet$, $b$, and $c$
+such that the same is true for $\mathcal{N}^\bullet$.
+\end{lemma}
+
+\begin{proof}
+We will use that the homotopy category $K(\textit{Mod}(\mathcal{O}_X))$
+is a triangulated category, see Derived Categories, Proposition
+\ref{derived-proposition-homotopy-category-triangulated}.
+Choose a distinguished triangle
+$\mathcal{K}^\bullet \to \mathcal{L}^\bullet \to
+\mathcal{C}^\bullet \to \mathcal{K}^\bullet[1]$.
+Choose a quasi-isomorphism $\mathcal{M}^\bullet \to \mathcal{C}^\bullet$ with
+$\mathcal{M}^\bullet$ K-flat with flat terms, see
+Lemma \ref{lemma-K-flat-resolution}.
+By the axioms of triangulated categories,
+we may fit the composition
+$\mathcal{M}^\bullet \to \mathcal{C}^\bullet \to \mathcal{K}^\bullet[1]$
+into a distinguished triangle
+$\mathcal{K}^\bullet \to \mathcal{N}^\bullet \to
+\mathcal{M}^\bullet \to \mathcal{K}^\bullet[1]$.
+By Lemma \ref{lemma-K-flat-two-out-of-three} we see that
+$\mathcal{N}^\bullet$ is K-flat.
+Again using the axioms of triangulated categories,
+we can choose a map $\mathcal{N}^\bullet \to \mathcal{L}^\bullet$ fitting into
+the following morphism of distinguished triangles
+$$
+\xymatrix{
+\mathcal{K}^\bullet \ar[r] \ar[d] &
+\mathcal{N}^\bullet \ar[r] \ar[d] &
+\mathcal{M}^\bullet \ar[r] \ar[d] &
+\mathcal{K}^\bullet[1] \ar[d] \\
+\mathcal{K}^\bullet \ar[r] &
+\mathcal{L}^\bullet \ar[r] &
+\mathcal{C}^\bullet \ar[r] &
+\mathcal{K}^\bullet[1]
+}
+$$
+Since two out of three of the arrows are quasi-isomorphisms, so is
+the third arrow $\mathcal{N}^\bullet \to \mathcal{L}^\bullet$
+by the long exact sequences
+of cohomology associated to these distinguished triangles
+(or you can look at the image of this diagram in $D(\mathcal{O}_X)$ and use
+Derived Categories, Lemma \ref{derived-lemma-third-isomorphism-triangle}
+if you like). This finishes the proof of (1), (2), and (3).
+To prove the final assertion, we may choose $\mathcal{N}^\bullet$
+such that $\mathcal{N}^n \cong \mathcal{M}^n \oplus \mathcal{K}^n$, see
+Derived Categories, Lemma
+\ref{derived-lemma-improve-distinguished-triangle-homotopy}.
+Hence we get the desired flatness
+if the terms of $\mathcal{K}^\bullet$ are flat.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Derived pullback}
+\label{section-derived-pullback}
+
+\noindent
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$
+be a morphism of ringed spaces. We can use K-flat resolutions to define
+a derived pullback functor
+$$
+Lf^* : D(\mathcal{O}_Y) \to D(\mathcal{O}_X)
+$$
+Namely, for every complex of $\mathcal{O}_Y$-modules $\mathcal{G}^\bullet$
+we can choose a K-flat resolution
+$\mathcal{K}^\bullet \to \mathcal{G}^\bullet$ and set
+$Lf^*\mathcal{G}^\bullet = f^*\mathcal{K}^\bullet$.
+You can use
+Lemmas \ref{lemma-pullback-K-flat},
+\ref{lemma-K-flat-resolution}, and
+\ref{lemma-derived-tor-quasi-isomorphism-other-side}
+to see that this is well defined. However, to dot all the i's and cross
+all the t's it is perhaps more convenient to use some general theory.
+
+\begin{lemma}
+\label{lemma-derived-base-change}
+The construction above is independent of choices and defines an exact
+functor of triangulated categories
+$Lf^* : D(\mathcal{O}_Y) \to D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+To see this we use the general theory developed in
+Derived Categories, Section \ref{derived-section-derived-functors}.
+Set $\mathcal{D} = K(\mathcal{O}_Y)$ and $\mathcal{D}' = D(\mathcal{O}_X)$.
+Let us write $F : \mathcal{D} \to \mathcal{D}'$ for the exact functor
+of triangulated categories defined by the rule
+$F(\mathcal{G}^\bullet) = f^*\mathcal{G}^\bullet$.
+We let $S$ be the set of quasi-isomorphisms in
+$\mathcal{D} = K(\mathcal{O}_Y)$.
+This gives a situation as in
+Derived Categories, Situation \ref{derived-situation-derived-functor}
+so that
+Derived Categories, Definition
+\ref{derived-definition-right-derived-functor-defined}
+applies. We claim that $LF$ is everywhere defined.
+This follows from
+Derived Categories, Lemma \ref{derived-lemma-find-existence-computes}
+with $\mathcal{P} \subset \Ob(\mathcal{D})$ the collection
+of K-flat complexes: (1) follows from
+Lemma \ref{lemma-K-flat-resolution}
+and to see (2) we have to show that for a quasi-isomorphism
+$\mathcal{K}_1^\bullet \to \mathcal{K}_2^\bullet$ between
+K-flat complexes of $\mathcal{O}_Y$-modules the map
+$f^*\mathcal{K}_1^\bullet \to f^*\mathcal{K}_2^\bullet$ is a
+quasi-isomorphism. To see this write this as
+$$
+f^{-1}\mathcal{K}_1^\bullet \otimes_{f^{-1}\mathcal{O}_Y} \mathcal{O}_X
+\longrightarrow
+f^{-1}\mathcal{K}_2^\bullet \otimes_{f^{-1}\mathcal{O}_Y} \mathcal{O}_X
+$$
+The functor $f^{-1}$ is exact, hence the map
+$f^{-1}\mathcal{K}_1^\bullet \to f^{-1}\mathcal{K}_2^\bullet$ is a
+quasi-isomorphism. By
+Lemma \ref{lemma-pullback-K-flat}
+applied to the morphism $(X, f^{-1}\mathcal{O}_Y) \to (Y, \mathcal{O}_Y)$
+the complexes $f^{-1}\mathcal{K}_1^\bullet$ and $f^{-1}\mathcal{K}_2^\bullet$
+are K-flat complexes of $f^{-1}\mathcal{O}_Y$-modules. Hence
+Lemma \ref{lemma-derived-tor-quasi-isomorphism-other-side}
+guarantees that the displayed map is a quasi-isomorphism.
+Thus we obtain a derived functor
+$$
+LF :
+D(\mathcal{O}_Y) = S^{-1}\mathcal{D}
+\longrightarrow
+\mathcal{D}' = D(\mathcal{O}_X)
+$$
+see
+Derived Categories, Equation (\ref{derived-equation-everywhere}).
+Finally,
+Derived Categories, Lemma \ref{derived-lemma-find-existence-computes}
+also guarantees that
+$LF(\mathcal{K}^\bullet) = F(\mathcal{K}^\bullet) = f^*\mathcal{K}^\bullet$
+when $\mathcal{K}^\bullet$ is K-flat, i.e., $Lf^* = LF$ is
+indeed computed in the way described above.
+\end{proof}
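+
+\noindent
+For example, the structure sheaf $\mathcal{O}_Y$, viewed as a complex
+placed in degree $0$, is K-flat by Lemma \ref{lemma-bounded-flat-K-flat}
+and hence
+$$
+Lf^*\mathcal{O}_Y = f^*\mathcal{O}_Y = \mathcal{O}_X
+$$
+in $D(\mathcal{O}_X)$. More generally,
+$Lf^*\mathcal{K}^\bullet = f^*\mathcal{K}^\bullet$ for any K-flat complex
+$\mathcal{K}^\bullet$ of $\mathcal{O}_Y$-modules, as shown at the end of
+the proof above.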
+
+\begin{lemma}
+\label{lemma-derived-pullback-composition}
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of ringed spaces.
+Then $Lf^* \circ Lg^* = L(g \circ f)^*$ as functors
+$D(\mathcal{O}_Z) \to D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Let $E$ be an object of $D(\mathcal{O}_Z)$.
+By construction $Lg^*E$ is computed by choosing a K-flat complex
+$\mathcal{K}^\bullet$ representing $E$ on $Z$ and
+setting $Lg^*E = g^*\mathcal{K}^\bullet$.
+By Lemma \ref{lemma-pullback-K-flat} we see that $g^*\mathcal{K}^\bullet$
+is K-flat on $Y$. Then $Lf^*Lg^*E$ is given by
+$f^*g^*\mathcal{K}^\bullet = (g \circ f)^*\mathcal{K}^\bullet$
+which also represents $L(g \circ f)^*E$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-tensor-product}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$
+be a morphism of ringed spaces. There is a canonical bifunctorial
+isomorphism
+$$
+Lf^*(
+\mathcal{F}^\bullet \otimes_{\mathcal{O}_Y}^{\mathbf{L}} \mathcal{G}^\bullet
+) =
+Lf^*\mathcal{F}^\bullet
+\otimes_{\mathcal{O}_X}^{\mathbf{L}}
+Lf^*\mathcal{G}^\bullet
+$$
+for $\mathcal{F}^\bullet, \mathcal{G}^\bullet \in \Ob(D(\mathcal{O}_Y))$.
+\end{lemma}
+
+\begin{proof}
+We may assume that $\mathcal{F}^\bullet$ and $\mathcal{G}^\bullet$
+are K-flat complexes. In this case
+$\mathcal{F}^\bullet \otimes_{\mathcal{O}_Y}^{\mathbf{L}} \mathcal{G}^\bullet$
+is just the total complex associated to the double complex
+$\mathcal{F}^\bullet \otimes_{\mathcal{O}_Y} \mathcal{G}^\bullet$.
+By
+Lemma \ref{lemma-tensor-product-K-flat}
+$\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_Y} \mathcal{G}^\bullet)$
+is K-flat also. Hence the isomorphism of the lemma comes from the
+isomorphism
+$$
+\text{Tot}(f^*\mathcal{F}^\bullet \otimes_{\mathcal{O}_X}
+f^*\mathcal{G}^\bullet)
+\longrightarrow
+f^*\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_Y} \mathcal{G}^\bullet)
+$$
+whose constituents are the isomorphisms
+$f^*\mathcal{F}^p \otimes_{\mathcal{O}_X} f^*\mathcal{G}^q \to
+f^*(\mathcal{F}^p \otimes_{\mathcal{O}_Y} \mathcal{G}^q)$ of
+Modules, Lemma \ref{modules-lemma-tensor-product-pullback}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-variant-derived-pullback}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$
+be a morphism of ringed spaces. There is a canonical bifunctorial
+isomorphism
+$$
+\mathcal{F}^\bullet
+\otimes_{\mathcal{O}_X}^{\mathbf{L}}
+Lf^*\mathcal{G}^\bullet
+=
+\mathcal{F}^\bullet
+\otimes_{f^{-1}\mathcal{O}_Y}^{\mathbf{L}}
+f^{-1}\mathcal{G}^\bullet
+$$
+for $\mathcal{F}^\bullet$ in $D(\mathcal{O}_X)$ and
+$\mathcal{G}^\bullet$ in $D(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module and let $\mathcal{G}$
+be an $\mathcal{O}_Y$-module. Then
+$\mathcal{F} \otimes_{\mathcal{O}_X} f^*\mathcal{G} =
+\mathcal{F} \otimes_{f^{-1}\mathcal{O}_Y} f^{-1}\mathcal{G}$
+because
+$f^*\mathcal{G} =
+\mathcal{O}_X \otimes_{f^{-1}\mathcal{O}_Y} f^{-1}\mathcal{G}$.
+The lemma follows from this and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-pull-compatibility}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism
+of ringed spaces. Let $\mathcal{K}^\bullet$ and $\mathcal{M}^\bullet$
+be complexes of $\mathcal{O}_Y$-modules. The diagram
+$$
+\xymatrix{
+Lf^*(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+\mathcal{M}^\bullet) \ar[r] \ar[d] &
+Lf^*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+\mathcal{M}^\bullet) \ar[d] \\
+Lf^*\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L}
+Lf^*\mathcal{M}^\bullet \ar[d] &
+f^*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+\mathcal{M}^\bullet) \ar[d] \\
+f^*\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L}
+f^*\mathcal{M}^\bullet \ar[r] &
+\text{Tot}(f^*\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}
+f^*\mathcal{M}^\bullet)
+}
+$$
+commutes.
+\end{lemma}
+
+\begin{proof}
+We will use the existence of K-flat resolutions as in
+Lemma \ref{lemma-K-flat-resolution}. If we choose such
+resolutions $\mathcal{P}^\bullet \to \mathcal{K}^\bullet$
+and $\mathcal{Q}^\bullet \to \mathcal{M}^\bullet$, then
+we see that
+$$
+\xymatrix{
+Lf^*\text{Tot}(\mathcal{P}^\bullet
+\otimes_{\mathcal{O}_Y}
+\mathcal{Q}^\bullet) \ar[r] \ar[d] &
+Lf^*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+\mathcal{M}^\bullet) \ar[d] \\
+f^*\text{Tot}(\mathcal{P}^\bullet
+\otimes_{\mathcal{O}_Y}
+\mathcal{Q}^\bullet) \ar[d] \ar[r] &
+f^*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+\mathcal{M}^\bullet) \ar[d] \\
+\text{Tot}(f^*\mathcal{P}^\bullet \otimes_{\mathcal{O}_X}
+f^*\mathcal{Q}^\bullet) \ar[r] &
+\text{Tot}(f^*\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}
+f^*\mathcal{M}^\bullet)
+}
+$$
+commutes. However, now the left hand column of this diagram
+agrees with the left hand column of the diagram of the lemma by our choice of
+$\mathcal{P}^\bullet$ and $\mathcal{Q}^\bullet$ and
+Lemma \ref{lemma-tensor-product-K-flat}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Cohomology of unbounded complexes}
+\label{section-unbounded}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+The category $\textit{Mod}(\mathcal{O}_X)$ is a Grothendieck
+abelian category: it has all colimits,
+filtered colimits are exact, and it has a generator, namely
+$$
+\bigoplus\nolimits_{U \subset X\text{ open}} j_{U!}\mathcal{O}_U,
+$$
+see Modules, Section \ref{modules-section-kernels} and
+Lemmas \ref{modules-lemma-j-shriek-flat} and
+\ref{modules-lemma-module-quotient-flat}.
+By Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}
+for every complex $\mathcal{F}^\bullet$ of $\mathcal{O}_X$-modules
+there exists an injective quasi-isomorphism
+$\mathcal{F}^\bullet \to \mathcal{I}^\bullet$ to a K-injective complex
+of $\mathcal{O}_X$-modules all of whose terms are injective
+$\mathcal{O}_X$-modules and moreover this embedding can be
+chosen functorial in the complex $\mathcal{F}^\bullet$.
+It follows from
+Derived Categories, Lemma \ref{derived-lemma-enough-K-injectives-implies}
+that
+\begin{enumerate}
+\item any exact functor $F : K(\textit{Mod}(\mathcal{O}_X)) \to \mathcal{D}$
+into a triangulated category $\mathcal{D}$ has a right derived functor
+$RF : D(\mathcal{O}_X) \to \mathcal{D}$,
+\item for any additive functor
+$F : \textit{Mod}(\mathcal{O}_X) \to \mathcal{A}$
+into an abelian category $\mathcal{A}$ we consider the exact functor
+$F : K(\textit{Mod}(\mathcal{O}_X)) \to D(\mathcal{A})$ induced by $F$
+and we obtain a right derived functor
+$RF : D(\mathcal{O}_X) \to D(\mathcal{A})$.
+\end{enumerate}
+By construction we have $RF(\mathcal{F}^\bullet) = F(\mathcal{I}^\bullet)$
+where $\mathcal{F}^\bullet \to \mathcal{I}^\bullet$ is as above.
+
+\medskip\noindent
+Here are some examples of the above:
+\begin{enumerate}
+\item The functor $\Gamma(X, -) : \textit{Mod}(\mathcal{O}_X) \to
+\text{Mod}_{\Gamma(X, \mathcal{O}_X)}$ gives rise to
+$$
+R\Gamma(X, -) : D(\mathcal{O}_X) \to D(\Gamma(X, \mathcal{O}_X))
+$$
+We shall use the notation $H^i(X, K) = H^i(R\Gamma(X, K))$ for cohomology.
+\item For an open $U \subset X$ we consider the functor
+$\Gamma(U, -) : \textit{Mod}(\mathcal{O}_X) \to
+\text{Mod}_{\Gamma(U, \mathcal{O}_X)}$. This gives rise to
+$$
+R\Gamma(U, -) : D(\mathcal{O}_X) \to D(\Gamma(U, \mathcal{O}_X))
+$$
+We shall use the notation $H^i(U, K) = H^i(R\Gamma(U, K))$ for cohomology.
+\item For a morphism of ringed spaces
+$f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ we consider the functor
+$f_* : \textit{Mod}(\mathcal{O}_X) \to \textit{Mod}(\mathcal{O}_Y)$
+which gives rise to the total direct image
+$$
+Rf_* : D(\mathcal{O}_X) \longrightarrow D(\mathcal{O}_Y)
+$$
+on unbounded derived categories.
+\end{enumerate}
+
+\begin{lemma}
+\label{lemma-adjoint}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of
+ringed spaces. The functor $Rf_*$ defined above and the functor $Lf^*$
+defined in Lemma \ref{lemma-derived-base-change} are adjoint:
+$$
+\Hom_{D(\mathcal{O}_X)}(Lf^*\mathcal{G}^\bullet, \mathcal{F}^\bullet)
+=
+\Hom_{D(\mathcal{O}_Y)}(\mathcal{G}^\bullet, Rf_*\mathcal{F}^\bullet)
+$$
+bifunctorially in $\mathcal{F}^\bullet \in \Ob(D(\mathcal{O}_X))$ and
+$\mathcal{G}^\bullet \in \Ob(D(\mathcal{O}_Y))$.
+\end{lemma}
+
+\begin{proof}
+This follows formally from the fact that $Rf_*$ and $Lf^*$ exist, see
+Derived Categories, Lemma \ref{derived-lemma-derived-adjoint-functors}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-pushforward-composition}
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of ringed spaces.
+Then $Rg_* \circ Rf_* = R(g \circ f)_*$ as functors
+$D(\mathcal{O}_X) \to D(\mathcal{O}_Z)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-adjoint} we see that $Rg_* \circ Rf_*$
+is adjoint to $Lf^* \circ Lg^*$. We have
+$Lf^* \circ Lg^* = L(g \circ f)^*$ by
+Lemma \ref{lemma-derived-pullback-composition}
+and hence by
+uniqueness of adjoint functors we have $Rg_* \circ Rf_* = R(g \circ f)_*$.
+\end{proof}
+
+\begin{remark}
+\label{remark-base-change}
+The construction of the unbounded derived functors $Lf^*$ and $Rf_*$
+allows one to construct the base change map in full generality.
+Namely, suppose that
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} &
+X \ar[d]^f \\
+S' \ar[r]^g &
+S
+}
+$$
+is a commutative diagram of ringed spaces. Let $K$ be an object of
+$D(\mathcal{O}_X)$. Then there exists a canonical base change
+map
+$$
+Lg^*Rf_*K \longrightarrow R(f')_*L(g')^*K
+$$
+in $D(\mathcal{O}_{S'})$. Namely, this map is adjoint to a map
+$L(f')^*Lg^*Rf_*K \to L(g')^*K$.
+Since $L(f')^*Lg^* = L(g')^*Lf^*$ we see that this is the same as a map
+$L(g')^*Lf^*Rf_*K \to L(g')^*K$,
+which we can take to be $L(g')^*$ of the adjunction map
+$Lf^*Rf_*K \to K$.
+\end{remark}
+
+\begin{remark}
+\label{remark-compose-base-change}
+Consider a commutative diagram
+$$
+\xymatrix{
+X' \ar[r]_k \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^l \ar[d]_{g'} & Y \ar[d]^g \\
+Z' \ar[r]^m & Z
+}
+$$
+of ringed spaces. Then the base change maps of
+Remark \ref{remark-base-change}
+for the two squares compose to give the base
+change map for the outer rectangle. More precisely,
+the composition
+\begin{align*}
+Lm^* \circ R(g \circ f)_*
+& =
+Lm^* \circ Rg_* \circ Rf_* \\
+& \to Rg'_* \circ Ll^* \circ Rf_* \\
+& \to Rg'_* \circ Rf'_* \circ Lk^* \\
+& = R(g' \circ f')_* \circ Lk^*
+\end{align*}
+is the base change map for the rectangle. We omit the verification.
+\end{remark}
+
+\begin{remark}
+\label{remark-compose-base-change-horizontal}
+Consider a commutative diagram
+$$
+\xymatrix{
+X'' \ar[r]_{g'} \ar[d]_{f''} & X' \ar[r]_g \ar[d]_{f'} & X \ar[d]^f \\
+Y'' \ar[r]^{h'} & Y' \ar[r]^h & Y
+}
+$$
+of ringed spaces. Then the base change maps of
+Remark \ref{remark-base-change}
+for the two squares compose to give the base
+change map for the outer rectangle. More precisely,
+the composition
+\begin{align*}
+L(h \circ h')^* \circ Rf_*
+& =
+L(h')^* \circ Lh^* \circ Rf_* \\
+& \to L(h')^* \circ Rf'_* \circ Lg^* \\
+& \to Rf''_* \circ L(g')^* \circ Lg^* \\
+& = Rf''_* \circ L(g \circ g')^*
+\end{align*}
+is the base change map for the rectangle. We omit the verification.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-adjoints-push-pull-compatibility}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$
+be a morphism of ringed spaces. Let $\mathcal{K}^\bullet$
+be a complex of $\mathcal{O}_X$-modules.
+The diagram
+$$
+\xymatrix{
+Lf^*f_*\mathcal{K}^\bullet \ar[r] \ar[d] &
+f^*f_*\mathcal{K}^\bullet \ar[d] \\
+Lf^*Rf_*\mathcal{K}^\bullet \ar[r] &
+\mathcal{K}^\bullet
+}
+$$
+coming from $Lf^* \to f^*$ on complexes, $f_* \to Rf_*$ on complexes,
+and adjunction $Lf^* \circ Rf_* \to \text{id}$
+commutes in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+We will use the existence of K-flat resolutions and
+K-injective resolutions, see Lemma \ref{lemma-K-flat-resolution}
+and the discussion above. Choose a quasi-isomorphism
+$\mathcal{K}^\bullet \to \mathcal{I}^\bullet$ where $\mathcal{I}^\bullet$
+is K-injective as a complex of $\mathcal{O}_X$-modules.
+Choose a quasi-isomorphism $\mathcal{Q}^\bullet \to f_*\mathcal{I}^\bullet$
+where $\mathcal{Q}^\bullet$ is K-flat as a complex of
+$\mathcal{O}_Y$-modules. We can choose a K-flat complex of
+$\mathcal{O}_Y$-modules $\mathcal{P}^\bullet$
+and a diagram of morphisms of complexes
+$$
+\xymatrix{
+\mathcal{P}^\bullet \ar[r] \ar[d] &
+f_*\mathcal{K}^\bullet \ar[d] \\
+\mathcal{Q}^\bullet \ar[r] & f_*\mathcal{I}^\bullet
+}
+$$
+commutative up to homotopy where the top horizontal arrow
+is a quasi-isomorphism. Namely, we can first choose such a
+diagram for some complex $\mathcal{P}^\bullet$ because
+the quasi-isomorphisms form a multiplicative system in
+the homotopy category of complexes and then we can replace
+$\mathcal{P}^\bullet$ by a K-flat complex.
+Taking pullbacks we obtain a diagram of morphisms of complexes
+$$
+\xymatrix{
+f^*\mathcal{P}^\bullet \ar[r] \ar[d] &
+f^*f_*\mathcal{K}^\bullet \ar[d] \ar[r] &
+\mathcal{K}^\bullet \ar[d] \\
+f^*\mathcal{Q}^\bullet \ar[r] &
+f^*f_*\mathcal{I}^\bullet \ar[r] &
+\mathcal{I}^\bullet
+}
+$$
+commutative up to homotopy. The outer rectangle witnesses the
+truth of the statement in the lemma.
+\end{proof}
+
+\begin{remark}
+\label{remark-cup-product}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of
+ringed spaces. The adjointness of $Lf^*$ and $Rf_*$ allows us to construct
+a relative cup product
+$$
+Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*L
+\longrightarrow
+Rf_*(K \otimes_{\mathcal{O}_X}^\mathbf{L} L)
+$$
+in $D(\mathcal{O}_Y)$ for all $K, L$ in $D(\mathcal{O}_X)$.
+Namely, this map is adjoint to a map
+$Lf^*(Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*L) \to
+K \otimes_{\mathcal{O}_X}^\mathbf{L} L$ for which we can take the
+composition of the isomorphism
+$Lf^*(Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*L) =
+Lf^*Rf_*K \otimes_{\mathcal{O}_X}^\mathbf{L} Lf^*Rf_*L$
+(Lemma \ref{lemma-pullback-tensor-product})
+with the map
+$Lf^*Rf_*K \otimes_{\mathcal{O}_X}^\mathbf{L} Lf^*Rf_*L
+\to K \otimes_{\mathcal{O}_X}^\mathbf{L} L$
+coming from the counit $Lf^* \circ Rf_* \to \text{id}$.
+\end{remark}
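+
+\noindent
+For example, taking $(Y, \mathcal{O}_Y)$ equal to a point endowed with the
+ring $A = \Gamma(X, \mathcal{O}_X)$, the relative cup product becomes a map
+$$
+R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(X, L)
+\longrightarrow
+R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} L)
+$$
+in $D(A)$, which on taking cohomology induces cup products
+$H^i(X, K) \otimes_A H^j(X, L) \to
+H^{i + j}(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} L)$.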
+
+
+
+
+
+
+\section{Cohomology of filtered complexes}
+\label{section-cohomology-filtered-object}
+
+\noindent
+Filtered complexes of sheaves frequently come up in a natural fashion
+when studying cohomology of algebraic varieties, for example the de Rham
+complex comes with its Hodge filtration. In this section we use the very general
+Injectives, Lemma \ref{injectives-lemma-K-injective-embedding-filtration}
+to construct spectral sequences on cohomology and we relate these to
+previously constructed spectral sequences.
+
+\begin{lemma}
+\label{lemma-spectral-sequence-filtered-object}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}^\bullet$ be a
+filtered complex of $\mathcal{O}_X$-modules. There exists a canonical
+spectral sequence $(E_r, \text{d}_r)_{r \geq 1}$ of bigraded
+$\Gamma(X, \mathcal{O}_X)$-modules with $\text{d}_r$ of bidegree $(r, -r + 1)$ and
+$$
+E_1^{p, q} = H^{p + q}(X, \text{gr}^p\mathcal{F}^\bullet)
+$$
+If for every $n$ we have
+$$
+H^n(X, F^p\mathcal{F}^\bullet) = 0\text{ for }p \gg 0
+\quad\text{and}\quad
+H^n(X, F^p\mathcal{F}^\bullet) = H^n(X, \mathcal{F}^\bullet)\text{ for }p \ll 0
+$$
+then the spectral sequence is bounded and converges to
+$H^*(X, \mathcal{F}^\bullet)$.
+\end{lemma}
+
+\begin{proof}
+(For a proof in case the complex is a bounded below complex
+of modules with finite filtrations, see the remark below.)
+Choose a map of filtered complexes
+$j : \mathcal{F}^\bullet \to \mathcal{J}^\bullet$ as in
+Injectives, Lemma
+\ref{injectives-lemma-K-injective-embedding-filtration}.
+The spectral sequence is the spectral sequence of
+Homology, Section \ref{homology-section-filtered-complex}
+associated to the filtered complex
+$$
+\Gamma(X, \mathcal{J}^\bullet)
+\quad\text{with}\quad
+F^p\Gamma(X, \mathcal{J}^\bullet) = \Gamma(X, F^p\mathcal{J}^\bullet)
+$$
+Since cohomology is computed by evaluating on K-injective representatives
+we see that the $E_1$ page is as stated in the lemma.
+The convergence and boundedness under the stated conditions
+follows from Homology, Lemma \ref{homology-lemma-ss-converges-trivial}.
+\end{proof}
+
+\begin{remark}
+\label{remark-spectral-sequence-filtered-object}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}^\bullet$ be a
+filtered complex of $\mathcal{O}_X$-modules. If $\mathcal{F}^\bullet$
+is bounded from below and for each $n$ the filtration on $\mathcal{F}^n$
+is finite, then there is a construction of the spectral sequence
+in Lemma \ref{lemma-spectral-sequence-filtered-object}
+avoiding Injectives, Lemma
+\ref{injectives-lemma-K-injective-embedding-filtration}.
+Namely, by
+Derived Categories, Lemma
+\ref{derived-lemma-right-resolution-by-filtered-injectives}
+there is a filtered quasi-isomorphism
+$i : \mathcal{F}^\bullet \to \mathcal{I}^\bullet$
+of filtered complexes
+with $\mathcal{I}^\bullet$ bounded below,
+the filtration on $\mathcal{I}^n$ is finite for all $n$,
+and with each $\text{gr}^p\mathcal{I}^n$ an
+injective $\mathcal{O}_X$-module.
+Then we take the spectral sequence associated to
+$$
+\Gamma(X, \mathcal{I}^\bullet)
+\quad\text{with}\quad
+F^p\Gamma(X, \mathcal{I}^\bullet) = \Gamma(X, F^p\mathcal{I}^\bullet)
+$$
+Since cohomology can be computed by evaluating on
+bounded below complexes of injectives
+we see that the $E_1$ page is as stated in the lemma.
+The convergence and boundedness under the stated conditions
+follows from
+Homology, Lemma \ref{homology-lemma-biregular-ss-converges}.
+In fact, this is a special case of the spectral sequence
+in Derived Categories, Lemma \ref{derived-lemma-ss-filtered-derived}.
+\end{remark}
+
+\begin{example}
+\label{example-spectral-sequence}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}^\bullet$ be a
+complex of $\mathcal{O}_X$-modules. We can apply
+Lemma \ref{lemma-spectral-sequence-filtered-object}
+with $F^p\mathcal{F}^\bullet = \tau_{\leq -p}\mathcal{F}^\bullet$.
+(If $\mathcal{F}^\bullet$ is bounded below we can use
+Remark \ref{remark-spectral-sequence-filtered-object}.)
+Then we get a spectral sequence
+$$
+E_1^{p, q} = H^{p + q}(X, H^{-p}(\mathcal{F}^\bullet)[p]) =
+H^{2p + q}(X, H^{-p}(\mathcal{F}^\bullet))
+$$
+After renumbering $p = -j$ and $q = i + 2j$ we find that for any
+$K \in D(\mathcal{O}_X)$ there is a spectral sequence
+$(E'_r, d'_r)_{r \geq 2}$ of bigraded modules with
+$d'_r$ of bidegree $(r, -r + 1)$, with
+$$
+(E'_2)^{i, j} = H^i(X, H^j(K))
+$$
+If $K$ is bounded below (for example), then this spectral sequence
+is bounded and converges to $H^{i + j}(X, K)$.
+In the bounded below case this spectral sequence is an example
+of the second spectral sequence of
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}
+(constructed using Cartan-Eilenberg resolutions).
+\end{example}
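+
+\medskip\noindent
+Let us verify the bidegrees in the renumbering of
+Example \ref{example-spectral-sequence}; this is a short computation
+included for convenience. With $i = 2p + q$ and $j = -p$ the differential
+$d_r : E_r^{p, q} \to E_r^{p + r, q - r + 1}$ becomes a map whose target
+has indices
+$$
+i' = 2(p + r) + (q - r + 1) = i + r + 1
+\quad\text{and}\quad
+j' = -(p + r) = j - r
+$$
+Hence $d_r$ has bidegree $(r + 1, -(r + 1) + 1)$ in the new indexing,
+which explains why the renumbered spectral sequence $(E'_r, d'_r)$
+starts with $r = 2$.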
+
+\begin{example}
+\label{example-spectral-sequence-bis}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}^\bullet$ be a
+complex of $\mathcal{O}_X$-modules. We can apply
+Lemma \ref{lemma-spectral-sequence-filtered-object}
+with $F^p\mathcal{F}^\bullet = \sigma_{\geq p}\mathcal{F}^\bullet$.
+Then we get a spectral sequence
+$$
+E_1^{p, q} = H^{p + q}(X, \mathcal{F}^p[-p]) = H^q(X, \mathcal{F}^p)
+$$
+If $\mathcal{F}^\bullet$ is bounded below, then
+\begin{enumerate}
+\item we can use
+Remark \ref{remark-spectral-sequence-filtered-object}
+to construct this spectral sequence,
+\item the spectral sequence is bounded and converges to
+$H^{p + q}(X, \mathcal{F}^\bullet)$, and
+\item the spectral sequence is equal to the first spectral sequence of
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}
+(constructed using Cartan-Eilenberg resolutions).
+\end{enumerate}
+\end{example}
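+
+\medskip\noindent
+The identification of the $E_1$ page in
+Example \ref{example-spectral-sequence-bis} rests on the computation of
+the graded pieces of the stupid filtration: for every $p$ we have
+$$
+\text{gr}^p\mathcal{F}^\bullet =
+\sigma_{\geq p}\mathcal{F}^\bullet/\sigma_{\geq p + 1}\mathcal{F}^\bullet =
+\mathcal{F}^p[-p]
+$$
+and therefore
+$H^{p + q}(X, \text{gr}^p\mathcal{F}^\bullet) =
+H^{p + q}(X, \mathcal{F}^p[-p]) = H^q(X, \mathcal{F}^p)$ as stated.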
+
+\begin{lemma}
+\label{lemma-relative-spectral-sequence-filtered-object}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of
+ringed spaces. Let $\mathcal{F}^\bullet$ be a filtered complex of
+$\mathcal{O}_X$-modules. There exists a canonical spectral sequence
+$(E_r, \text{d}_r)_{r \geq 1}$ of bigraded
+$\mathcal{O}_Y$-modules with $d_r$ of bidegree $(r, -r + 1)$ and
+$$
+E_1^{p, q} = R^{p + q}f_*\text{gr}^p\mathcal{F}^\bullet
+$$
+If for every $n$ we have
+$$
+R^nf_*F^p\mathcal{F}^\bullet = 0 \text{ for }p \gg 0
+\quad\text{and}\quad
+R^nf_*F^p\mathcal{F}^\bullet = R^nf_*\mathcal{F}^\bullet \text{ for }p \ll 0
+$$
+then the spectral sequence is bounded and converges to
+$Rf_*\mathcal{F}^\bullet$.
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-spectral-sequence-filtered-object}.
+\end{proof}
+
+
+
+
+
+\section{Godement resolution}
+\label{section-godement}
+
+\noindent
+A reference is \cite{Godement}.
+
+\medskip\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Denote $X_{disc}$ the discrete
+topological space with the same points as $X$. Denote $f : X_{disc} \to X$
+the obvious continuous map. Set $\mathcal{O}_{X_{disc}} = f^{-1}\mathcal{O}_X$.
+Then $f : (X_{disc}, \mathcal{O}_{X_{disc}}) \to (X, \mathcal{O}_X)$
+is a flat morphism of ringed spaces. We can apply the
+{\it dual} of the material in
+Simplicial, Section \ref{simplicial-section-standard} to the adjoint pair of
+functors $f^*, f_*$ on sheaves of modules. Thus we obtain an augmented
+cosimplicial object
+$$
+\xymatrix{
+\text{id} \ar[r] &
+f_*f^* \ar@<1ex>[r] \ar@<-1ex>[r] &
+f_*f^*f_*f^* \ar@<0ex>[l] \ar@<-2ex>[r] \ar@<0ex>[r] \ar@<2ex>[r] &
+f_*f^*f_*f^*f_*f^* \ar@<1ex>[l] \ar@<-1ex>[l]
+}
+$$
+in the category of functors from $\textit{Mod}(\mathcal{O}_X)$
+to itself, see Simplicial, Lemma \ref{simplicial-lemma-standard-simplicial}.
+Moreover, the augmentation
+$$
+\xymatrix{
+f^* \ar[r] &
+f^*f_*f^* \ar@<1ex>[r] \ar@<-1ex>[r] &
+f^*f_*f^*f_*f^* \ar@<0ex>[l] \ar@<-2ex>[r] \ar@<0ex>[r] \ar@<2ex>[r] &
+f^*f_*f^*f_*f^*f_*f^* \ar@<1ex>[l] \ar@<-1ex>[l]
+}
+$$
+is a homotopy equivalence, see
+Simplicial, Lemma \ref{simplicial-lemma-standard-simplicial-homotopy}.
+
+\begin{lemma}
+\label{lemma-godement-resolution}
+Let $(X, \mathcal{O}_X)$ be a ringed space. For every sheaf of
+$\mathcal{O}_X$-modules $\mathcal{F}$ there is a resolution
+$$
+0 \to
+\mathcal{F} \to
+f_*f^*\mathcal{F} \to
+f_*f^*f_*f^*\mathcal{F} \to
+f_*f^*f_*f^*f_*f^*\mathcal{F} \to \ldots
+$$
+functorial in $\mathcal{F}$ such that each term
+$f_*f^* \ldots f_*f^*\mathcal{F}$ is a flasque
+$\mathcal{O}_X$-module and such that for all $x \in X$ the
+map
+$$
+\mathcal{F}_x[0] \to \Big(
+(f_*f^*\mathcal{F})_x \to
+(f_*f^*f_*f^*\mathcal{F})_x \to
+(f_*f^*f_*f^*f_*f^*\mathcal{F})_x \to \ldots
+\Big)
+$$
+is a homotopy equivalence in the category of complexes
+of $\mathcal{O}_{X, x}$-modules.
+\end{lemma}
+
+\begin{proof}
+The complex $f_*f^*\mathcal{F} \to f_*f^*f_*f^*\mathcal{F} \to
+f_*f^*f_*f^*f_*f^*\mathcal{F} \to \ldots$ is the complex associated
+to the cosimplicial object with terms
+$f_*f^*\mathcal{F}, f_*f^*f_*f^*\mathcal{F},
+f_*f^*f_*f^*f_*f^*\mathcal{F}, \ldots$ described above, see
+Simplicial, Section \ref{simplicial-section-dold-kan-cosimplicial}.
+The augmentation gives rise to the map $\mathcal{F} \to f_*f^*\mathcal{F}$
+as indicated. For any abelian sheaf $\mathcal{H}$ on $X_{disc}$ the
+pushforward $f_*\mathcal{H}$ is flasque because $X_{disc}$
+is a discrete space and the pushforward of a flasque sheaf is flasque.
+Hence the terms of the complex are flasque $\mathcal{O}_X$-modules.
+
+\medskip\noindent
+If $x \in X_{disc} = X$ is a point, then $(f^*\mathcal{G})_x = \mathcal{G}_x$
+for any $\mathcal{O}_X$-module $\mathcal{G}$. Hence $f^*$ is an exact functor
+and a complex of $\mathcal{O}_X$-modules
+$\mathcal{G}_1 \to \mathcal{G}_2 \to \mathcal{G}_3$
+is exact if and only if
+$f^*\mathcal{G}_1 \to f^*\mathcal{G}_2 \to f^*\mathcal{G}_3$
+is exact (see Modules, Lemma \ref{modules-lemma-abelian}).
+The result mentioned in the introduction to this section
+proves that pullback by $f^*$ gives a homotopy equivalence from
+the constant cosimplicial object $f^*\mathcal{F}$ to the
+cosimplicial object with terms
+$f_*f^*\mathcal{F}, f_*f^*f_*f^*\mathcal{F},
+f_*f^*f_*f^*f_*f^*\mathcal{F}, \ldots$.
+By Simplicial, Lemma \ref{simplicial-lemma-homotopy-equivalence-s-Q}
+we obtain that
+$$
+f^*\mathcal{F}[0] \to \Big(
+f^*f_*f^*\mathcal{F} \to
+f^*f_*f^*f_*f^*\mathcal{F} \to
+f^*f_*f^*f_*f^*f_*f^*\mathcal{F} \to \ldots
+\Big)
+$$
+is a homotopy equivalence. This immediately implies the two remaining
+statements of the lemma.
+\end{proof}
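+
+\medskip\noindent
+Concretely, the first term of the resolution in
+Lemma \ref{lemma-godement-resolution} is the sheaf of discontinuous
+sections of $\mathcal{F}$: for every open $U \subset X$ we have
+$$
+(f_*f^*\mathcal{F})(U) =
+(f^*\mathcal{F})(f^{-1}(U)) =
+\prod\nolimits_{x \in U} \mathcal{F}_x
+$$
+because $f^{-1}(U)$ is discrete and $(f^*\mathcal{F})_x = \mathcal{F}_x$
+for all $x \in X$. In particular, the flasqueness of the terms of the
+resolution is evident from this description.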
+
+\begin{lemma}
+\label{lemma-godement-resolution-bounded-below}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\mathcal{F}^\bullet$ be a bounded below complex of
+$\mathcal{O}_X$-modules. There exists a quasi-isomorphism
+$\mathcal{F}^\bullet \to \mathcal{G}^\bullet$
+where $\mathcal{G}^\bullet$ is a bounded below complex of flasque
+$\mathcal{O}_X$-modules and for all $x \in X$ the
+map $\mathcal{F}^\bullet_x \to \mathcal{G}^\bullet_x$
+is a homotopy equivalence in the category of complexes
+of $\mathcal{O}_{X, x}$-modules.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{A}$ be the category of complexes of $\mathcal{O}_X$-modules and
+let $\mathcal{B}$ be the category of complexes of
+$\mathcal{O}_{X_{disc}}$-modules.
+Then we can apply the discussion above to the adjoint functors
+$f^*$ and $f_*$ between $\mathcal{A}$ and $\mathcal{B}$.
+Arguing exactly as in the proof of
+Lemma \ref{lemma-godement-resolution}
+we get a resolution
+$$
+0 \to
+\mathcal{F}^\bullet \to
+f_*f^*\mathcal{F}^\bullet \to
+f_*f^*f_*f^*\mathcal{F}^\bullet \to
+f_*f^*f_*f^*f_*f^*\mathcal{F}^\bullet \to \ldots
+$$
+in the abelian category $\mathcal{A}$ such that each term of each
+$f_*f^*\ldots f_*f^*\mathcal{F}^\bullet$ is a flasque
+$\mathcal{O}_X$-module and such that for all $x \in X$ the
+map
+$$
+\mathcal{F}^\bullet_x[0] \to \Big(
+(f_*f^*\mathcal{F}^\bullet)_x \to
+(f_*f^*f_*f^*\mathcal{F}^\bullet)_x \to
+(f_*f^*f_*f^*f_*f^*\mathcal{F}^\bullet)_x \to \ldots
+\Big)
+$$
+is a homotopy equivalence in the category of complexes of complexes
+of $\mathcal{O}_{X, x}$-modules. Since a complex of complexes is the
+same thing as a double complex, we can consider the induced map
+$$
+\mathcal{F}^\bullet \to
+\mathcal{G}^\bullet =
+\text{Tot}(
+f_*f^*\mathcal{F}^\bullet \to
+f_*f^*f_*f^*\mathcal{F}^\bullet \to
+f_*f^*f_*f^*f_*f^*\mathcal{F}^\bullet \to \ldots
+)
+$$
+Since the complex $\mathcal{F}^\bullet$ is bounded below, the
+same is true for $\mathcal{G}^\bullet$ and in fact each term
+of $\mathcal{G}^\bullet$ is a finite direct sum of
+terms of the complexes $f_*f^*\ldots f_*f^*\mathcal{F}^\bullet$
+and hence is flasque. The final assertion of the lemma
+now follows from
+Homology, Lemma \ref{homology-lemma-homotopy-complex-complexes}.
+Since this in particular shows that
+$\mathcal{F}^\bullet \to \mathcal{G}^\bullet$
+is a quasi-isomorphism, the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Cup product}
+\label{section-cup-product}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, M$ be objects
+of $D(\mathcal{O}_X)$. Set $A = \Gamma(X, \mathcal{O}_X)$.
+The (global) cup product in this setting is a map
+$$
+\mu :
+R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(X, M)
+\longrightarrow
+R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+$$
+in $D(A)$. We define it as the relative cup product for the
+morphism of ringed spaces $f : (X, \mathcal{O}_X) \to (pt, A)$
+as in Remark \ref{remark-cup-product} via $D(pt, A) = D(A)$.
+This map in particular defines pairings
+$$
+\cup :
+H^i(X, K) \times H^j(X, M)
+\longrightarrow
+H^{i + j}(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+$$
+Namely, given $\xi \in H^i(X, K) = H^i(R\Gamma(X, K))$ and
+$\eta \in H^j(X, M) = H^j(R\Gamma(X, M))$ we can
+first ``tensor'' them to get an element $\xi \otimes \eta$ in
+$H^{i + j}(R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(X, M))$, see
+More on Algebra, Section \ref{more-algebra-section-products-tor}.
+Then we can apply $\mu$ to get the desired element
+$\xi \cup \eta = \mu(\xi \otimes \eta)$
+of $H^{i + j}(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)$.
+
+\medskip\noindent
+Here is another way to think of the cup product of $\xi$ and $\eta$.
+Namely, we can write
+$$
+R\Gamma(X, K) = R\Hom_X(\mathcal{O}_X, K)
+\quad\text{and}\quad
+R\Gamma(X, M) = R\Hom_X(\mathcal{O}_X, M)
+$$
+because $\Hom(\mathcal{O}_X, -) = \Gamma(X, -)$.
+Thus $\xi$ and $\eta$ are the ``same'' thing as maps
+$$
+\tilde \xi : \mathcal{O}_X[-i] \to K
+\quad\text{and}\quad
+\tilde \eta : \mathcal{O}_X[-j] \to M
+$$
+Combining this with the functoriality of the derived tensor product
+we obtain
+$$
+\mathcal{O}_X[-i - j] =
+\mathcal{O}_X[-i] \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_X[-j]
+\xrightarrow{\tilde \xi \otimes \tilde \eta}
+K \otimes_{\mathcal{O}_X}^\mathbf{L} M
+$$
+which by the same token as above is an element of
+$H^{i + j}(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)$.
+
+\begin{lemma}
+\label{lemma-second-cup-equals-first}
+This construction gives the cup product.
+\end{lemma}
+
+\begin{proof}
+With $f : (X, \mathcal{O}_X) \to (pt, A)$ as above we have
+$Rf_*(-) = R\Gamma(X, -)$ and our map $\mu$ is adjoint to the map
+$$
+Lf^*(Rf_*K \otimes_A^\mathbf{L} Rf_*M) =
+Lf^*Rf_*K \otimes_{\mathcal{O}_X}^\mathbf{L} Lf^*Rf_*M
+\xrightarrow{\epsilon_K \otimes \epsilon_M}
+K \otimes_{\mathcal{O}_X}^\mathbf{L} M
+$$
+where $\epsilon$ is the counit of the adjunction between
+$Lf^*$ and $Rf_*$.
+If we think of $\xi$ and $\eta$ as maps $\xi : A[-i] \to R\Gamma(X, K)$
+and $\eta : A[-j] \to R\Gamma(X, M)$, then
+the tensor $\xi \otimes \eta$ corresponds to the map\footnote{There
+is a sign hidden here, namely, the equality is defined by
+the composition
+$$
+A[-i - j] \to (A \otimes_A^\mathbf{L} A)[-i - j] \to
+A[-i] \otimes_A^\mathbf{L} A[-j]
+$$
+where in the second step we use the identification of
+More on Algebra, Item (\ref{more-algebra-item-shift-tensor})
+which uses a sign in principle.
+Except, in this case the sign is $+1$ by our convention and even if it wasn't
+$+1$ it wouldn't matter since we used the same sign
+in the identification
+$\mathcal{O}_X[-i - j] =
+\mathcal{O}_X[-i] \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_X[-j]$.}
+$$
+A[-i - j] = A[-i] \otimes_A^\mathbf{L} A[-j]
+\xrightarrow{\xi \otimes \eta}
+R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(X, M)
+$$
+By definition the cup product $\xi \cup \eta$ is the map
+$A[-i - j] \to R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)$
+which is adjoint to
+$$
+(\epsilon_K \otimes \epsilon_M) \circ Lf^*(\xi \otimes \eta) =
+(\epsilon_K \circ Lf^*\xi) \otimes (\epsilon_M \circ Lf^*\eta)
+$$
+However, it is easy to see that
+$\epsilon_K \circ Lf^*\xi = \tilde \xi$ and
+$\epsilon_M \circ Lf^*\eta = \tilde \eta$.
+We conclude that $\widetilde{\xi \cup \eta} = \tilde \xi \otimes \tilde \eta$
+which means we have the desired agreement.
+\end{proof}
+
+\begin{remark}
+\label{remark-cup-with-element-map-total-cohomology}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, M$ be objects
+of $D(\mathcal{O}_X)$. Set $A = \Gamma(X, \mathcal{O}_X)$.
+Given $\xi \in H^i(X, K)$ we get an associated map
+$$
+\xi = ``\xi \cup -'' :
+R\Gamma(X, M)[-i]
+\to
+R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+$$
+by representing $\xi$ as a map $\xi : A[-i] \to R\Gamma(X, K)$ as in the
+proof of Lemma \ref{lemma-second-cup-equals-first}
+and then using the composition
+$$
+R\Gamma(X, M)[-i] = A[-i] \otimes_A^\mathbf{L} R\Gamma(X, M)
+\xrightarrow{\xi \otimes 1}
+R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(X, M)
+\to
+R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+$$
+where the second arrow is the global cup product $\mu$ above.
+On cohomology this recovers the cup product by $\xi$ as is clear
+from Lemma \ref{lemma-second-cup-equals-first} and its proof.
+\end{remark}
+
+\noindent
+Let us formulate and prove a natural compatibility of the
+relative cup product. Namely, suppose that we have a morphism
+$f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ of ringed spaces.
+Let $\mathcal{K}^\bullet$ and $\mathcal{M}^\bullet$
+be complexes of $\mathcal{O}_X$-modules.
+There is a naive cup product
+$$
+\text{Tot}(
+f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+f_*\mathcal{M}^\bullet)
+\longrightarrow
+f_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)
+$$
+We claim that this is related to the relative cup product.
+
+\begin{lemma}
+\label{lemma-cup-compatible-with-naive}
+In the situation above the following diagram commutes
+$$
+\xymatrix{
+f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+f_*\mathcal{M}^\bullet \ar[r] \ar[d]
+&
+Rf_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+Rf_*\mathcal{M}^\bullet \ar[d]^{\text{Remark \ref{remark-cup-product}}} \\
+\text{Tot}(
+f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+f_*\mathcal{M}^\bullet) \ar[d]_{\text{naive cup product}} &
+Rf_*(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+\mathcal{M}^\bullet) \ar[d] \\
+f_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet) \ar[r] &
+Rf_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)
+}
+$$
+\end{lemma}
+
+\begin{proof}
+By the construction in Remark \ref{remark-cup-product} we see that
+going around the diagram clockwise the map
+$$
+f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+f_*\mathcal{M}^\bullet
+\longrightarrow
+Rf_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)
+$$
+is adjoint to the map
+\begin{align*}
+Lf^*(f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+f_*\mathcal{M}^\bullet)
+& =
+Lf^*f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+Lf^*f_*\mathcal{M}^\bullet \\
+& \to
+Lf^*Rf_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+Lf^*Rf_*\mathcal{M}^\bullet \\
+& \to
+\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+\mathcal{M}^\bullet \\
+& \to
+\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)
+\end{align*}
+By Lemma \ref{lemma-adjoints-push-pull-compatibility} this is also equal to
+\begin{align*}
+Lf^*(f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+f_*\mathcal{M}^\bullet)
+& =
+Lf^*f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+Lf^*f_*\mathcal{M}^\bullet \\
+& \to
+f^*f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+f^*f_*\mathcal{M}^\bullet \\
+& \to
+\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+\mathcal{M}^\bullet \\
+& \to
+\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)
+\end{align*}
+Going around anti-clockwise we obtain the map adjoint to the map
+\begin{align*}
+Lf^*(f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+f_*\mathcal{M}^\bullet)
+& \to
+Lf^*\text{Tot}(
+f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+f_*\mathcal{M}^\bullet) \\
+& \to
+Lf^*f_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet) \\
+& \to
+Lf^*Rf_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet) \\
+& \to
+\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)
+\end{align*}
+By Lemma \ref{lemma-adjoints-push-pull-compatibility} this is also equal to
+\begin{align*}
+Lf^*(f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+f_*\mathcal{M}^\bullet)
+& \to
+Lf^*\text{Tot}(
+f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+f_*\mathcal{M}^\bullet) \\
+& \to
+Lf^*f_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet) \\
+& \to
+f^*f_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet) \\
+& \to
+\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)
+\end{align*}
+Now the proof is finished by a contemplation of the diagram
+$$
+\xymatrix{
+Lf^*(f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+f_*\mathcal{M}^\bullet) \ar[d] \ar[rr] & &
+Lf^*f_*\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L}
+Lf^*f_*\mathcal{M}^\bullet \ar[d] \\
+Lf^*\text{Tot}(
+f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+f_*\mathcal{M}^\bullet) \ar[d]_{naive} \ar[r] &
+f^*\text{Tot}(
+f_*\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_Y}
+f_*\mathcal{M}^\bullet) \ar[ldd]^{naive} \ar[dd] &
+f^*f_*\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L}
+f^*f_*\mathcal{M}^\bullet \ar[dd] \ar[ldd] \\
+Lf^*f_*\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet) \ar[d] \\
+f^*f_*\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet) \ar[rd] &
+\text{Tot}(f^*f_*\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}
+f^*f_*\mathcal{M}^\bullet) \ar[d] &
+\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L}
+\mathcal{M}^\bullet \ar[ld] \\
+& \text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)
+}
+$$
+All of the polygons in this diagram commute. The top one commutes
+by Lemma \ref{lemma-tensor-pull-compatibility}.
+The square with the two naive cup products commutes because
+$Lf^* \to f^*$ is functorial in the complex of modules.
+Similarly with the square involving the two maps
+$\mathcal{A}^\bullet \otimes^\mathbf{L} \mathcal{B}^\bullet \to
+\text{Tot}(\mathcal{A}^\bullet \otimes \mathcal{B}^\bullet)$.
+Finally, the commutativity of the remaining square
+is true on the level of complexes and may be viewed as the
+definition of the naive cup product (by the adjointness
+of $f^*$ and $f_*$). The proof is finished because
+going around the diagram on the outside are the two maps
+given above.
+\end{proof}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{K}^\bullet$ and
+$\mathcal{M}^\bullet$ be complexes of $\mathcal{O}_X$-modules.
+Then we have a ``naive'' cup product
+$$
+\mu' :
+\text{Tot}(
+\Gamma(X, \mathcal{K}^\bullet) \otimes_A \Gamma(X, \mathcal{M}^\bullet))
+\longrightarrow
+\Gamma(X, \text{Tot}(
+\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{M}^\bullet))
+$$
+By Lemma \ref{lemma-cup-compatible-with-naive}
+applied to the morphism $(X, \mathcal{O}_X) \to (pt, A)$
+this naive cup product is related to the cup product $\mu$
+defined in the first paragraph of this section by the
+following commutative diagram
+$$
+\xymatrix{
+\Gamma(X, \mathcal{K}^\bullet)
+\otimes_A^\mathbf{L}
+\Gamma(X, \mathcal{M}^\bullet) \ar[d] \ar[r] &
+R\Gamma(X, \mathcal{K}^\bullet)
+\otimes_A^\mathbf{L}
+R\Gamma(X, \mathcal{M}^\bullet) \ar[d]^-\mu \\
+\text{Tot}(\Gamma(X, \mathcal{K}^\bullet)
+\otimes_A
+\Gamma(X, \mathcal{M}^\bullet)) \ar[d]_-{\mu'} &
+R\Gamma(X, \mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+\mathcal{M}^\bullet) \ar[d] \\
+\Gamma(X, \text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)) \ar[r] &
+R\Gamma(X, \text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet))
+}
+$$
+in $D(A)$. On cohomology we obtain the commutative diagram
+$$
+\xymatrix{
+H^i(\Gamma(X, \mathcal{K}^\bullet)) \times
+H^j(\Gamma(X, \mathcal{M}^\bullet)) \ar[d] \ar[r] &
+H^{i + j}(X,
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{M}^\bullet)) \\
+H^i(X, \mathcal{K}^\bullet) \times
+H^j(X, \mathcal{M}^\bullet) \ar[r]^\cup &
+H^{i + j}(X, \mathcal{K}^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L}
+\mathcal{M}^\bullet) \ar[u]
+}
+$$
+relating the naive cup product with the actual cup product.
+
+\begin{lemma}
+\label{lemma-diagrams-commute}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\mathcal{K}^\bullet$ and $\mathcal{M}^\bullet$
+be bounded below complexes of $\mathcal{O}_X$-modules.
+Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be an open covering.
+Then
+$$
+\xymatrix{
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{K}^\bullet))
+\otimes_A^\mathbf{L}
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{M}^\bullet))
+\ar[d] \ar[r] &
+R\Gamma(X, \mathcal{K}^\bullet)
+\otimes_A^\mathbf{L}
+R\Gamma(X, \mathcal{M}^\bullet) \ar[d]^\mu \\
+\text{Tot}(
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{K}^\bullet))
+\otimes_A
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{M}^\bullet)))
+\ar[d]^{(\ref{equation-needs-signs})} &
+R\Gamma(X,
+\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{M}^\bullet)
+\ar[d] \\
+\text{Tot}(
+\check{\mathcal{C}}^\bullet({\mathcal U},
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{M}^\bullet)
+)) \ar[r] &
+R\Gamma(X,
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{M}^\bullet))
+}
+$$
+where the horizontal arrows are the ones in
+Lemma \ref{lemma-cech-complex-complex}
+commutes in $D(A)$.
+\end{lemma}
+
+\begin{proof}
+Choose quasi-isomorphisms of complexes
+$a : \mathcal{K}^\bullet \to \mathcal{K}_1^\bullet$ and
+$b : \mathcal{M}^\bullet \to \mathcal{M}_1^\bullet$
+as in Lemma \ref{lemma-godement-resolution-bounded-below}.
+Since the maps $a$ and $b$ on stalks are homotopy equivalences
+we see that the induced map
+$$
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{M}^\bullet)
+\to
+\text{Tot}(\mathcal{K}_1^\bullet \otimes_{\mathcal{O}_X} \mathcal{M}_1^\bullet)
+$$
+is a homotopy equivalence on stalks too (More on Algebra, Lemma
+\ref{more-algebra-lemma-derived-tor-homotopy}) and hence a quasi-isomorphism.
+Thus the targets
+$$
+R\Gamma(X,
+\text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X} \mathcal{M}^\bullet)) =
+R\Gamma(X,
+\text{Tot}(\mathcal{K}_1^\bullet
+\otimes_{\mathcal{O}_X} \mathcal{M}_1^\bullet))
+$$
+of the two diagrams are the same in $D(A)$. It follows that it suffices
+to prove the diagram commutes for $\mathcal{K}$ and $\mathcal{M}$
+replaced by $\mathcal{K}_1$ and $\mathcal{M}_1$. This reduces us to
+the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $\mathcal{K}^\bullet$ and $\mathcal{M}^\bullet$ are bounded
+below complexes of flasque $\mathcal{O}_X$-modules and
+consider the diagram relating the cup product with the cup product
+(\ref{equation-needs-signs}) on {\v C}ech complexes.
+Then we can consider the commutative diagram
+$$
+\xymatrix{
+\Gamma(X, \mathcal{K}^\bullet)
+\otimes_A^\mathbf{L}
+\Gamma(X, \mathcal{M}^\bullet) \ar[d] \ar[r] &
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{K}^\bullet))
+\otimes_A^\mathbf{L}
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{M}^\bullet))
+\ar[d] \\
+\text{Tot}(\Gamma(X, \mathcal{K}^\bullet)
+\otimes_A
+\Gamma(X, \mathcal{M}^\bullet)) \ar[d] \ar[r] &
+\text{Tot}(
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{K}^\bullet))
+\otimes_A
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{M}^\bullet)))
+\ar[d]^{(\ref{equation-needs-signs})} \\
+\Gamma(X, \text{Tot}(\mathcal{K}^\bullet
+\otimes_{\mathcal{O}_X}
+\mathcal{M}^\bullet)) \ar[r] &
+\text{Tot}(
+\check{\mathcal{C}}^\bullet({\mathcal U},
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{M}^\bullet)
+))
+}
+$$
+In this diagram the horizontal arrows are isomorphisms in $D(A)$ because
+for a bounded below complex of flasque modules such as $\mathcal{K}^\bullet$
+we have
+$$
+\Gamma(X, \mathcal{K}^\bullet) =
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{K}^\bullet)) =
+R\Gamma(X, \mathcal{K}^\bullet)
+$$
+in $D(A)$. This follows from
+Lemma \ref{lemma-flasque-acyclic},
+Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity}, and
+Lemma \ref{lemma-cech-complex-complex-computes}.
+Hence the commutativity of the diagram of the lemma involving
+(\ref{equation-needs-signs}) follows from the already proven
+commutativity of Lemma \ref{lemma-cup-compatible-with-naive}
+where $f$ is the morphism to a point (see discussion
+following Lemma \ref{lemma-cup-compatible-with-naive}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cup-product-associative}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$
+be a morphism of ringed spaces. The relative cup product of
+Remark \ref{remark-cup-product} is associative in the sense that
+the diagram
+$$
+\xymatrix{
+Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L}
+Rf_*L \otimes_{\mathcal{O}_Y}^\mathbf{L}
+Rf_*M \ar[r] \ar[d] &
+Rf_*(K \otimes_{\mathcal{O}_X}^\mathbf{L} L)
+\otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*M \ar[d] \\
+Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L}
+Rf_*(L \otimes_{\mathcal{O}_X}^\mathbf{L} M) \ar[r] &
+Rf_*(K \otimes_{\mathcal{O}_X}^\mathbf{L}
+L \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+}
+$$
+is commutative in $D(\mathcal{O}_Y)$ for all $K, L, M$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Going around either side we obtain the map adjoint to the obvious map
+\begin{align*}
+Lf^*(Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L}
+Rf_*L \otimes_{\mathcal{O}_Y}^\mathbf{L}
+Rf_*M) & =
+Lf^*(Rf_*K) \otimes_{\mathcal{O}_X}^\mathbf{L}
+Lf^*(Rf_*L) \otimes_{\mathcal{O}_X}^\mathbf{L}
+Lf^*(Rf_*M) \\
+& \to
+K \otimes_{\mathcal{O}_X}^\mathbf{L}
+L \otimes_{\mathcal{O}_X}^\mathbf{L} M
+\end{align*}
+in $D(\mathcal{O}_X)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cup-product-commutative}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$
+be a morphism of ringed spaces. The relative cup product of
+Remark \ref{remark-cup-product} is commutative in the sense that
+the diagram
+$$
+\xymatrix{
+Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*L \ar[r] \ar[d]_\psi &
+Rf_*(K \otimes_{\mathcal{O}_X}^\mathbf{L} L) \ar[d]^{Rf_*\psi} \\
+Rf_*L \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*K \ar[r] &
+Rf_*(L \otimes_{\mathcal{O}_X}^\mathbf{L} K)
+}
+$$
+is commutative in $D(\mathcal{O}_Y)$ for all $K, L$ in $D(\mathcal{O}_X)$.
+Here $\psi$ is the commutativity constraint on the derived category
+(Lemma \ref{lemma-symmetric-monoidal-derived}).
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-cup-product}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ and
+$g : (Y, \mathcal{O}_Y) \to (Z, \mathcal{O}_Z)$
+be morphisms of ringed spaces. The relative cup product of
+Remark \ref{remark-cup-product} is compatible with compositions
+in the sense that the diagram
+$$
+\xymatrix{
+R(g \circ f)_*K \otimes_{\mathcal{O}_Z}^\mathbf{L} R(g \circ f)_*L
+\ar@{=}[rr] \ar[d] & &
+Rg_*Rf_*K \otimes_{\mathcal{O}_Z}^\mathbf{L} Rg_*Rf_*L \ar[d] \\
+R(g \circ f)_*(K \otimes_{\mathcal{O}_X}^\mathbf{L} L) \ar@{=}[r] &
+Rg_*Rf_*(K \otimes_{\mathcal{O}_X}^\mathbf{L} L) &
+Rg_*(Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*L) \ar[l]
+}
+$$
+is commutative in $D(\mathcal{O}_Z)$ for all $K, L$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+This is true because going around the diagram either way we obtain the map
+adjoint to the map
+\begin{align*}
+& L(g \circ f)^*\left(R(g \circ f)_*K
+\otimes_{\mathcal{O}_Z}^\mathbf{L}
+R(g \circ f)_*L\right) \\
+& =
+L(g \circ f)^*R(g \circ f)_*K
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+L(g \circ f)^*R(g \circ f)_*L \\
+& \to
+K \otimes_{\mathcal{O}_X}^\mathbf{L} L
+\end{align*}
+in $D(\mathcal{O}_X)$. To see this one uses that the composition
+of the counits like so
+$$
+L(g \circ f)^*R(g \circ f)_* =
+Lf^* Lg^* Rg_* Rf_* \to
+Lf^* Rf_* \to \text{id}
+$$
+is the counit for $L(g \circ f)^*$ and $R(g \circ f)_*$. See
+Categories, Lemma \ref{categories-lemma-compose-counits}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Some properties of K-injective complexes}
+\label{section-properties-K-injective}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $U \subset X$ be an
+open subset. Denote $j : (U, \mathcal{O}_U) \to (X, \mathcal{O}_X)$
+the corresponding open immersion. The pullback functor $j^*$ is exact
+as it is just the restriction functor. Thus derived pullback $Lj^*$ is
+computed on any complex by simply restricting the complex. We often
+simply denote the corresponding functor
+$$
+D(\mathcal{O}_X) \to D(\mathcal{O}_U),
+\quad
+E \mapsto j^*E = E|_U
+$$
+Similarly, extension by zero
+$j_! : \textit{Mod}(\mathcal{O}_U) \to \textit{Mod}(\mathcal{O}_X)$
+(see Sheaves, Section \ref{sheaves-section-open-immersions})
+is an exact functor (Modules, Lemma \ref{modules-lemma-j-shriek-exact}).
+Thus it induces a functor
+$$
+j_! : D(\mathcal{O}_U) \to D(\mathcal{O}_X),\quad
+F \mapsto j_!F
+$$
+by simply applying $j_!$ to any complex representing the object $F$.
+
+\begin{lemma}
+\label{lemma-restrict-K-injective-to-open}
+Let $X$ be a ringed space. Let $U \subset X$ be an open subspace.
+The restriction of a K-injective complex of $\mathcal{O}_X$-modules
+to $U$ is a K-injective complex of $\mathcal{O}_U$-modules.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Derived Categories, Lemma \ref{derived-lemma-adjoint-preserve-K-injectives}
+and the fact that the restriction functor has the
+exact left adjoint $j_!$.
+For the construction of $j_!$ see
+Sheaves, Section \ref{sheaves-section-open-immersions}
+and for exactness see Modules, Lemma \ref{modules-lemma-j-shriek-exact}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unbounded-cohomology-of-open}
+Let $X$ be a ringed space. Let $U \subset X$ be an open subspace.
+For $K$ in $D(\mathcal{O}_X)$ we have
+$H^p(U, K) = H^p(U, K|_U)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I}^\bullet$ be a K-injective complex of $\mathcal{O}_X$-modules
+representing $K$. Then
+$$
+H^p(U, K) = H^p(\Gamma(U, \mathcal{I}^\bullet)) =
+H^p(\Gamma(U, \mathcal{I}^\bullet|_U))
+$$
+by construction of cohomology. By Lemma \ref{lemma-restrict-K-injective-to-open}
+the complex $\mathcal{I}^\bullet|_U$ is a K-injective complex
+representing $K|_U$ and the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sheafification-cohomology}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K$ be an object of
+$D(\mathcal{O}_X)$. The sheafification of
+$$
+U \mapsto H^q(U, K) = H^q(U, K|_U)
+$$
+is the $q$th cohomology sheaf $H^q(K)$ of $K$.
+\end{lemma}
+
+\begin{proof}
+The equality $H^q(U, K) = H^q(U, K|_U)$ holds by
+Lemma \ref{lemma-unbounded-cohomology-of-open}.
+Choose a K-injective complex $\mathcal{I}^\bullet$ representing $K$.
+Then
+$$
+H^q(U, K) =
+\frac{\Ker(\mathcal{I}^q(U) \to \mathcal{I}^{q + 1}(U))}
+{\Im(\mathcal{I}^{q - 1}(U) \to \mathcal{I}^q(U))}
+$$
+by our construction of cohomology. Since
+$H^q(K) = \Ker(\mathcal{I}^q \to \mathcal{I}^{q + 1})/
+\Im(\mathcal{I}^{q - 1} \to \mathcal{I}^q)$ the result is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restrict-direct-image-open}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of ringed
+spaces. Given an open subspace $V \subset Y$, set $U = f^{-1}(V)$ and denote
+$g : U \to V$ the induced morphism. Then
+$(Rf_*E)|_V = Rg_*(E|_U)$ for $E$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Represent $E$ by a K-injective complex $\mathcal{I}^\bullet$ of
+$\mathcal{O}_X$-modules. Then $Rf_*(E) = f_*\mathcal{I}^\bullet$
+and $Rg_*(E|_U) = g_*(\mathcal{I}^\bullet|_U)$ by
+Lemma \ref{lemma-restrict-K-injective-to-open}.
+Since it is clear that $(f_*\mathcal{F})|_V = g_*(\mathcal{F}|_U)$
+for any sheaf $\mathcal{F}$ on $X$ the result follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Leray-unbounded}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Then $R\Gamma(Y, -) \circ Rf_* = R\Gamma(X, -)$ as functors
+$D(\mathcal{O}_X) \to D(\Gamma(Y, \mathcal{O}_Y))$.
+More generally for $V \subset Y$ open and $U = f^{-1}(V)$
+we have $R\Gamma(U, -) = R\Gamma(V, -) \circ Rf_*$.
+\end{lemma}
+
+\begin{proof}
+Let $Z$ be the ringed space consisting of a singleton
+space with $\Gamma(Z, \mathcal{O}_Z) = \Gamma(Y, \mathcal{O}_Y)$.
+There is a canonical morphism $Y \to Z$ of ringed spaces
+inducing the identification on global sections of structure sheaves.
+Then $D(\mathcal{O}_Z) = D(\Gamma(Y, \mathcal{O}_Y))$.
+Hence the assertion $R\Gamma(Y, -) \circ Rf_* = R\Gamma(X, -)$
+follows from Lemma \ref{lemma-derived-pushforward-composition}
+applied to $X \to Y \to Z$.
+
+\medskip\noindent
+The second (more general) statement follows from the first statement
+after applying Lemma \ref{lemma-restrict-direct-image-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unbounded-describe-higher-direct-images}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of ringed
+spaces. Let $K$ be in $D(\mathcal{O}_X)$. Then $H^i(Rf_*K)$ is the sheaf
+associated to the presheaf
+$$
+V \mapsto H^i(f^{-1}(V), K) = H^i(V, Rf_*K)
+$$
+\end{lemma}
+
+\begin{proof}
+The equality $H^i(f^{-1}(V), K) = H^i(V, Rf_*K)$ follows upon taking
+cohomology from the second statement in
+Lemma \ref{lemma-Leray-unbounded}. Then the statement on sheafification
+follows from Lemma \ref{lemma-sheafification-cohomology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-modules-abelian-unbounded}
+Let $X$ be a ringed space. Let $K$ be an object of $D(\mathcal{O}_X)$
+and denote $K_{ab}$ its image in $D(\underline{\mathbf{Z}}_X)$.
+\begin{enumerate}
+\item For any open $U \subset X$ there is a canonical map
+$R\Gamma(U, K) \to R\Gamma(U, K_{ab})$
+which is an isomorphism in $D(\textit{Ab})$.
+\item Let $f : X \to Y$ be a morphism of ringed spaces.
+There is a canonical map $Rf_*K \to Rf_*(K_{ab})$ which
+is an isomorphism in $D(\underline{\mathbf{Z}}_Y)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The map is constructed as follows. Choose a K-injective complex
+$\mathcal{I}^\bullet$ representing $K$. Choose a quasi-isomorphism
+$\mathcal{I}^\bullet \to \mathcal{J}^\bullet$ where $\mathcal{J}^\bullet$
+is a K-injective complex of abelian groups. Then the map in (1)
+is given by $\Gamma(U, \mathcal{I}^\bullet) \to \Gamma(U, \mathcal{J}^\bullet)$
+and the map in (2) is given by
+$f_*\mathcal{I}^\bullet \to f_*\mathcal{J}^\bullet$.
+To show that these maps are isomorphisms, it suffices to prove
+they induce isomorphisms on cohomology groups and cohomology sheaves.
+By Lemmas \ref{lemma-unbounded-cohomology-of-open} and
+\ref{lemma-unbounded-describe-higher-direct-images}
+it suffices to show that the map
+$$
+H^0(X, K) \longrightarrow H^0(X, K_{ab})
+$$
+is an isomorphism. Observe that
+$$
+H^0(X, K) = \Hom_{D(\mathcal{O}_X)}(\mathcal{O}_X, K)
+$$
+and similarly for the other group. Choose any complex $\mathcal{K}^\bullet$
+of $\mathcal{O}_X$-modules representing $K$. By construction of the
+derived category as a localization we have
+$$
+\Hom_{D(\mathcal{O}_X)}(\mathcal{O}_X, K) =
+\colim_{s : \mathcal{F}^\bullet \to \mathcal{O}_X}
+\Hom_{K(\mathcal{O}_X)}(\mathcal{F}^\bullet, \mathcal{K}^\bullet)
+$$
+where the colimit is over quasi-isomorphisms $s$ of complexes of
+$\mathcal{O}_X$-modules. Similarly, we have
+$$
+\Hom_{D(\underline{\mathbf{Z}}_X)}(\underline{\mathbf{Z}}_X, K) =
+\colim_{s : \mathcal{G}^\bullet \to \underline{\mathbf{Z}}_X}
+\Hom_{K(\underline{\mathbf{Z}}_X)}(\mathcal{G}^\bullet, \mathcal{K}^\bullet)
+$$
+Next, we observe that the quasi-isomorphisms
+$s : \mathcal{G}^\bullet \to \underline{\mathbf{Z}}_X$
+with $\mathcal{G}^\bullet$ a bounded above complex of flat
+$\underline{\mathbf{Z}}_X$-modules are cofinal in the system.
+(This follows from Modules, Lemma \ref{modules-lemma-module-quotient-flat} and
+Derived Categories, Lemma \ref{derived-lemma-subcategory-left-resolution};
+see discussion in Section \ref{section-flat}.)
+Hence we can construct an inverse to the map
+$H^0(X, K) \longrightarrow H^0(X, K_{ab})$
+by representing an element $\xi \in H^0(X, K_{ab})$ by a pair
+$$
+(s : \mathcal{G}^\bullet \to \underline{\mathbf{Z}}_X,
+a : \mathcal{G}^\bullet \to \mathcal{K}^\bullet)
+$$
+with $\mathcal{G}^\bullet$ a bounded above complex of flat
+$\underline{\mathbf{Z}}_X$-modules and sending this to
+$$
+(\mathcal{G}^\bullet \otimes_{\underline{\mathbf{Z}}_X} \mathcal{O}_X
+\to \mathcal{O}_X,
+\mathcal{G}^\bullet \otimes_{\underline{\mathbf{Z}}_X} \mathcal{O}_X
+\to \mathcal{K}^\bullet)
+$$
+The only thing to note here is that the first arrow
+is a quasi-isomorphism by
+Lemmas \ref{lemma-derived-tor-quasi-isomorphism-other-side} and
+\ref{lemma-bounded-flat-K-flat}.
+We omit the detailed verification that this construction
+is indeed an inverse.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-adjoint-lower-shriek-restrict}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $U \subset X$ be an
+open subset. Denote $j : (U, \mathcal{O}_U) \to (X, \mathcal{O}_X)$
+the corresponding open immersion. The restriction functor
+$D(\mathcal{O}_X) \to D(\mathcal{O}_U)$ is a right adjoint to
+extension by zero $j_! : D(\mathcal{O}_U) \to D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+This follows formally from the fact that $j_!$ and $j^*$ are adjoint and
+exact (and hence $Lj_! = j_!$ and $Rj^* = j^*$ exist), see
+Derived Categories, Lemma \ref{derived-lemma-derived-adjoint-functors}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-K-injective-flat}
+Let $f : X \to Y$ be a flat morphism of ringed spaces.
+If $\mathcal{I}^\bullet$ is a K-injective complex of $\mathcal{O}_X$-modules,
+then $f_*\mathcal{I}^\bullet$ is K-injective as a complex of
+$\mathcal{O}_Y$-modules.
+\end{lemma}
+
+\begin{proof}
+This is true because
+$$
+\Hom_{K(\mathcal{O}_Y)}(\mathcal{F}^\bullet, f_*\mathcal{I}^\bullet)
+=
+\Hom_{K(\mathcal{O}_X)}(f^*\mathcal{F}^\bullet, \mathcal{I}^\bullet)
+$$
+by
+Sheaves, Lemma
+\ref{sheaves-lemma-adjoint-pullback-pushforward-modules}
+and the fact that $f^*$ is exact as $f$ is assumed to be flat.
+\end{proof}
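+
+\noindent
+For instance, the lemma applies to the open immersion
+$j : (U, \mathcal{O}_U) \to (X, \mathcal{O}_X)$ of an open subspace
+and to the inclusion $i : (Z, \mathcal{O}_X|_Z) \to (X, \mathcal{O}_X)$
+of a closed subset endowed with the restricted structure sheaf;
+in both cases pullback of modules is exact, so the morphism is flat.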
+
+
+
+
+
+\section{Unbounded Mayer-Vietoris}
+\label{section-unbounded-mayer-vietoris}
+
+\noindent
+There is a Mayer-Vietoris sequence for unbounded cohomology as well.
+
+\begin{lemma}
+\label{lemma-exact-sequence-lower-shriek}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $X = U \cup V$ be the union of two open subspaces.
+For any object $E$ of $D(\mathcal{O}_X)$ we have a distinguished
+triangle
+$$
+j_{U \cap V!}E|_{U \cap V} \to
+j_{U!}E|_U \oplus j_{V!}E|_V \to E \to
+j_{U \cap V!}E|_{U \cap V}[1]
+$$
+in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+We have seen in Section \ref{section-properties-K-injective}
+that the restriction functors and the extension
+by zero functors are computed by just applying the functors to
+any complex. Let $\mathcal{E}^\bullet$ be a complex of $\mathcal{O}_X$-modules
+representing $E$. The distinguished triangle of the lemma is the
+distinguished triangle associated (by
+Derived Categories, Section
+\ref{derived-section-canonical-delta-functor} and especially
+Lemma \ref{derived-lemma-derived-canonical-delta-functor})
+to the short exact sequence of complexes of $\mathcal{O}_X$-modules
+$$
+0 \to j_{U \cap V!}\mathcal{E}^\bullet|_{U \cap V} \to
+j_{U!}\mathcal{E}^\bullet|_U \oplus j_{V!}\mathcal{E}^\bullet|_V
+\to \mathcal{E}^\bullet \to 0
+$$
+To see this sequence is exact one checks on stalks using
+Sheaves, Lemma \ref{sheaves-lemma-j-shriek-modules}
+(computation omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-sequence-j-star}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $X = U \cup V$ be the union of two open subspaces.
+For any object $E$ of $D(\mathcal{O}_X)$ we have a distinguished
+triangle
+$$
+E \to
+Rj_{U, *}E|_U \oplus Rj_{V, *}E|_V \to
+Rj_{U \cap V, *}E|_{U \cap V} \to
+E[1]
+$$
+in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$ representing $E$
+whose terms $\mathcal{I}^n$ are injective objects of
+$\textit{Mod}(\mathcal{O}_X)$, see Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}.
+We have seen that $\mathcal{I}^\bullet|_U$ is a K-injective complex
+as well (Lemma \ref{lemma-restrict-K-injective-to-open}). Hence
+$Rj_{U, *}E|_U$ is represented by $j_{U, *}\mathcal{I}^\bullet|_U$.
+Similarly for $V$ and $U \cap V$. Hence the distinguished triangle
+of the lemma is the distinguished triangle associated (by
+Derived Categories, Section
+\ref{derived-section-canonical-delta-functor} and especially
+Lemma \ref{derived-lemma-derived-canonical-delta-functor})
+to the short exact sequence of complexes
+$$
+0 \to
+\mathcal{I}^\bullet \to
+j_{U, *}\mathcal{I}^\bullet|_U \oplus j_{V, *}\mathcal{I}^\bullet|_V \to
+j_{U \cap V, *}\mathcal{I}^\bullet|_{U \cap V} \to
+0.
+$$
+This sequence is exact because for any $W \subset X$ open
+and any $n$ the sequence
+$$
+0 \to
+\mathcal{I}^n(W) \to
+\mathcal{I}^n(W \cap U) \oplus \mathcal{I}^n(W \cap V) \to
+\mathcal{I}^n(W \cap U \cap V) \to
+0
+$$
+is exact (see proof of Lemma \ref{lemma-mayer-vietoris}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-mayer-vietoris-hom}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $X = U \cup V$ be
+the union of two open subspaces of $X$.
+For objects $E$, $F$ of $D(\mathcal{O}_X)$ we have a
+Mayer-Vietoris sequence
+$$
+\xymatrix{
+& \ldots \ar[r] & \Ext^{-1}(E_{U \cap V}, F_{U \cap V}) \ar[lld] \\
+\Hom(E, F) \ar[r] &
+\Hom(E_U, F_U) \oplus
+\Hom(E_V, F_V) \ar[r] &
+\Hom(E_{U \cap V}, F_{U \cap V})
+}
+$$
+where the subscripts denote restrictions to the relevant opens
+and the $\Hom$'s and $\Ext$'s are taken in the relevant
+derived categories.
+\end{lemma}
+
+\begin{proof}
+Use the distinguished triangle of
+Lemma \ref{lemma-exact-sequence-lower-shriek}
+to obtain a long exact sequence of $\Hom$'s
+(from Derived Categories, Lemma \ref{derived-lemma-representable-homological})
+and use that
+$$
+\Hom_{D(\mathcal{O}_X)}(j_{U!}E|_U, F) =
+\Hom_{D(\mathcal{O}_U)}(E|_U, F|_U)
+$$
+by Lemma \ref{lemma-adjoint-lower-shriek-restrict}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unbounded-mayer-vietoris}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Suppose that
+$X = U \cup V$ is a union of two open subsets. For an object $E$
+of $D(\mathcal{O}_X)$ we have a distinguished triangle
+$$
+R\Gamma(X, E) \to R\Gamma(U, E) \oplus R\Gamma(V, E) \to
+R\Gamma(U \cap V, E) \to R\Gamma(X, E)[1]
+$$
+and in particular a long exact cohomology sequence
+$$
+\ldots \to
+H^n(X, E) \to
+H^n(U, E) \oplus H^n(V, E) \to
+H^n(U \cap V, E) \to
+H^{n + 1}(X, E) \to \ldots
+$$
+The construction of the distinguished triangle and the
+long exact sequence is functorial in $E$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$
+representing $E$. We may assume $\mathcal{I}^n$ is an injective
+object of $\textit{Mod}(\mathcal{O}_X)$ for all $n$, see
+Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}.
+Then $R\Gamma(X, E)$ is computed by $\Gamma(X, \mathcal{I}^\bullet)$.
+Similarly for $U$, $V$, and $U \cap V$ by
+Lemma \ref{lemma-restrict-K-injective-to-open}.
+Hence the distinguished triangle of the lemma is the distinguished
+triangle associated (by
+Derived Categories, Section
+\ref{derived-section-canonical-delta-functor} and especially
+Lemma \ref{derived-lemma-derived-canonical-delta-functor})
+to the short exact sequence of complexes
+$$
+0 \to
+\mathcal{I}^\bullet(X) \to
+\mathcal{I}^\bullet(U) \oplus \mathcal{I}^\bullet(V) \to
+\mathcal{I}^\bullet(U \cap V) \to
+0.
+$$
+We have seen this is a short exact sequence in the proof of
+Lemma \ref{lemma-mayer-vietoris}.
+The final statement follows from the functoriality of the construction
+in Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}.
+\end{proof}
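+
+\noindent
+As a sanity check, not used in what follows, consider $X = S^1$ with
+$\mathcal{O}_X = \underline{\mathbf{Z}}_X$, $E = \underline{\mathbf{Z}}_X$,
+and $U, V \subset S^1$ two open arcs covering $S^1$ whose intersection
+$U \cap V$ has two contractible components. Using that the higher
+cohomology of the constant sheaf on an open arc vanishes, the long
+exact sequence of the lemma becomes
+$$
+0 \to H^0(S^1, \mathbf{Z}) \to \mathbf{Z} \oplus \mathbf{Z} \to
+\mathbf{Z} \oplus \mathbf{Z} \to H^1(S^1, \mathbf{Z}) \to 0
+$$
+where the middle map sends $(a, b)$ to $(a - b, a - b)$. Its kernel and
+cokernel are both isomorphic to $\mathbf{Z}$, recovering the familiar
+answer $H^0(S^1, \mathbf{Z}) = H^1(S^1, \mathbf{Z}) = \mathbf{Z}$.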
+
+\begin{lemma}
+\label{lemma-unbounded-relative-mayer-vietoris}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Suppose that $X = U \cup V$ is a union of two open subsets.
+Denote $a = f|_U : U \to Y$, $b = f|_V : V \to Y$, and
+$c = f|_{U \cap V} : U \cap V \to Y$.
+For every object $E$ of $D(\mathcal{O}_X)$ there exists a
+distinguished triangle
+$$
+Rf_*E \to
+Ra_*(E|_U) \oplus Rb_*(E|_V) \to
+Rc_*(E|_{U \cap V}) \to
+Rf_*E[1]
+$$
+This triangle is functorial in $E$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$
+representing $E$. We may assume $\mathcal{I}^n$ is an injective
+object of $\textit{Mod}(\mathcal{O}_X)$ for all $n$, see
+Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}.
+Then $Rf_*E$ is computed by $f_*\mathcal{I}^\bullet$.
+Similarly for $U$, $V$, and $U \cap V$ by
+Lemma \ref{lemma-restrict-K-injective-to-open}.
+Hence the distinguished triangle of the lemma is the distinguished
+triangle associated (by
+Derived Categories, Section
+\ref{derived-section-canonical-delta-functor} and especially
+Lemma \ref{derived-lemma-derived-canonical-delta-functor})
+to the short exact sequence of complexes
+$$
+0 \to
+f_*\mathcal{I}^\bullet \to
+a_*\mathcal{I}^\bullet|_U \oplus b_*\mathcal{I}^\bullet|_V \to
+c_*\mathcal{I}^\bullet|_{U \cap V} \to
+0.
+$$
+This is a short exact sequence of complexes by
+Lemma \ref{lemma-relative-mayer-vietoris}
+and the fact that $R^1f_*\mathcal{I} = 0$
+for an injective object $\mathcal{I}$ of $\textit{Mod}(\mathcal{O}_X)$.
+The final statement follows from the functoriality of the construction
+in Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-restriction}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $j : U \to X$ be an
+open subspace. Let $T \subset X$ be a closed subset contained in $U$.
+\begin{enumerate}
+\item If $E$ is an object of $D(\mathcal{O}_X)$ whose cohomology sheaves
+are supported on $T$, then $E \to Rj_*(E|_U)$ is an isomorphism.
+\item If $F$ is an object of $D(\mathcal{O}_U)$ whose cohomology sheaves
+are supported on $T$, then $j_!F \to Rj_*F$ is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $V = X \setminus T$ and $W = U \cap V$. Note that $X = U \cup V$ is an
+open covering of $X$. Denote $j_W : W \to V$ the open immersion.
+Let $E$ be an object of $D(\mathcal{O}_X)$ whose cohomology sheaves are
+supported on $T$. By
+Lemma \ref{lemma-restrict-direct-image-open} we have
+$(Rj_*(E|_U))|_V = Rj_{W, *}(E|_W) = 0$ because $E|_W = 0$ by our assumption.
+On the other hand, $(Rj_*(E|_U))|_U = E|_U$. Thus (1) is clear.
+Let $F$ be an object of $D(\mathcal{O}_U)$ whose cohomology sheaves
+are supported on $T$. By
+Lemma \ref{lemma-restrict-direct-image-open} we have
+$(Rj_*F)|_V = Rj_{W, *}(F|_W) = 0$ because $F|_W = 0$ by our assumption.
+We also have $(j_!F)|_V = j_{W!}(F|_W) = 0$ (the first equality is immediate
+from the definition of extension by zero). Since both
+$(Rj_*F)|_U = F$ and $(j_!F)|_U = F$ we see that (2) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-mayer-vietoris-cup}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Set $A = \Gamma(X, \mathcal{O}_X)$.
+Suppose that $X = U \cup V$ is a union of two open subsets. For objects
+$K$ and $M$ of $D(\mathcal{O}_X)$ we have a map of distinguished triangles
+$$
+\xymatrix{
+R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(X, M) \ar[r] \ar[d] &
+R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M) \ar[d] \\
+R\Gamma(X, K) \otimes_A^\mathbf{L}
+(R\Gamma(U, M) \oplus R\Gamma(V, M)) \ar[r] \ar[d] &
+R\Gamma(U, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+\oplus R\Gamma(V, K \otimes_{\mathcal{O}_X}^\mathbf{L} M) \ar[d] \\
+R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(U \cap V, M) \ar[r] \ar[d] &
+R\Gamma(U \cap V, K \otimes_{\mathcal{O}_X}^\mathbf{L} M) \ar[d] \\
+R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(X, M)[1] \ar[r] &
+R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)[1]
+}
+$$
+where
+\begin{enumerate}
+\item the horizontal arrows are given by cup product,
+\item on the right hand side we have the distinguished triangle
+of Lemma \ref{lemma-unbounded-mayer-vietoris} for
+$K \otimes_{\mathcal{O}_X}^\mathbf{L} M$, and
+\item on the left hand side we have the exact functor
+$R\Gamma(X, K) \otimes_A^\mathbf{L} - $ applied to the
+distinguished triangle of Lemma \ref{lemma-unbounded-mayer-vietoris} for $M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose a K-flat complex $T^\bullet$ of flat $A$-modules representing
+$R\Gamma(X, K)$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-K-flat-resolution}.
+Denote $T^\bullet \otimes_A \mathcal{O}_X$ the pullback of $T^\bullet$
+by the morphism of ringed spaces $(X, \mathcal{O}_X) \to (pt, A)$.
+There is a natural adjunction map
+$\epsilon : T^\bullet \otimes_A \mathcal{O}_X \to K$ in $D(\mathcal{O}_X)$.
+Observe that $T^\bullet \otimes_A \mathcal{O}_X$ is a K-flat
+complex of $\mathcal{O}_X$-modules with flat terms, see
+Lemma \ref{lemma-pullback-K-flat} and
+Modules, Lemma \ref{modules-lemma-pullback-flat}.
+By Lemma \ref{lemma-factor-through-K-flat} we can find a morphism of complexes
+$$
+T^\bullet \otimes_A \mathcal{O}_X \longrightarrow \mathcal{K}^\bullet
+$$
+of $\mathcal{O}_X$-modules representing $\epsilon$
+such that $\mathcal{K}^\bullet$ is a
+K-flat complex with flat terms. Namely, by the construction of
+$D(\mathcal{O}_X)$ we can first represent $\epsilon$ by some map of complexes
+$e : T^\bullet \otimes_A \mathcal{O}_X \to \mathcal{L}^\bullet$
+of $\mathcal{O}_X$-modules
+and then we can apply the lemma to $e$. Choose a K-injective
+complex $\mathcal{I}^\bullet$ whose terms are injective $\mathcal{O}_X$-modules
+representing $M$. Finally, choose a quasi-isomorphism
+$$
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{I}^\bullet)
+\longrightarrow
+\mathcal{J}^\bullet
+$$
+into a K-injective complex whose terms are injective $\mathcal{O}_X$-modules.
+Observe that source and target of this arrow represent
+$K \otimes_{\mathcal{O}_X}^\mathbf{L} M$ in $D(\mathcal{O}_X)$.
+At this point, for any open $W \subset X$ we obtain a map of complexes
+$$
+\text{Tot}(T^\bullet \otimes_A \mathcal{I}^\bullet(W))
+\to
+\text{Tot}(\mathcal{K}^\bullet(W) \otimes_A \mathcal{I}^\bullet(W))
+\to
+\mathcal{J}^\bullet(W)
+$$
+of $A$-modules whose composition represents the map
+$$
+R\Gamma(X, K) \otimes_A^\mathbf{L} R\Gamma(W, M)
+\longrightarrow
+R\Gamma(W, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+$$
+in $D(A)$. Clearly, these maps are compatible with restriction mappings.
+OK, so now we can consider the following commutative(!) diagram
+of complexes of $A$-modules
+$$
+\xymatrix{
+0 \ar[d] & 0 \ar[d] \\
+\text{Tot}(T^\bullet \otimes_A \mathcal{I}^\bullet(X)) \ar[d] \ar[r] &
+\mathcal{J}^\bullet(X) \ar[d] \\
+\text{Tot}(T^\bullet \otimes_A
+(\mathcal{I}^\bullet(U) \oplus \mathcal{I}^\bullet(V))) \ar[d] \ar[r] &
+\mathcal{J}^\bullet(U) \oplus \mathcal{J}^\bullet(V) \ar[d] \\
+\text{Tot}(T^\bullet \otimes_A \mathcal{I}^\bullet(U \cap V)) \ar[r] \ar[d] &
+\mathcal{J}^\bullet(U \cap V) \ar[d] \\
+0 & 0
+}
+$$
+By the proof of Lemma \ref{lemma-mayer-vietoris} the columns are
+exact sequences of complexes of $A$-modules (this also uses that
+$\text{Tot}(T^\bullet \otimes_A -)$ transforms short exact sequences
+of complexes of $A$-modules into short exact sequences as the terms
+of $T^\bullet$ are flat $A$-modules). Since the distinguished triangles
+of Lemma \ref{lemma-unbounded-mayer-vietoris}
+are the distinguished triangles associated to these
+short exact sequences of complexes, the desired result follows from
+the functoriality of ``taking the associated distinguished triangle''
+discussed in
+Derived Categories, Section \ref{derived-section-canonical-delta-functor}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Cohomology with support in a closed, II}
+\label{section-cohomology-support-bis}
+
+\noindent
+We continue the discussion started in
+Section \ref{section-cohomology-support}.
+
+\medskip\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $Z \subset X$ be
+a closed subset. In this situation we can consider the functor
+$\textit{Mod}(\mathcal{O}_X) \to \textit{Mod}(\mathcal{O}_X(X))$
+given by $\mathcal{F} \mapsto \Gamma_Z(X, \mathcal{F})$. See
+Modules, Definition \ref{modules-definition-support}
+and
+Modules, Lemma \ref{modules-lemma-support-section-closed}.
+Using K-injective resolutions, see Section \ref{section-unbounded},
+we obtain the right derived functor
+$$
+R\Gamma_Z(X, - ) : D(\mathcal{O}_X) \to D(\mathcal{O}_X(X))
+$$
+Given an object $K$ in $D(\mathcal{O}_X)$ we denote
+$H^q_Z(X, K) = H^q(R\Gamma_Z(X, K))$ the cohomology module with
+support in $Z$. We will see later
+(Lemma \ref{lemma-sections-support-abelian-unbounded}) that this
+agrees with the construction in Section \ref{section-cohomology-support}.
+
+\medskip\noindent
+For an $\mathcal{O}_X$-module $\mathcal{F}$ we can consider the
+{\it subsheaf of sections with support in $Z$}, denoted
+$\mathcal{H}_Z(\mathcal{F})$, defined by the rule
+$$
+\mathcal{H}_Z(\mathcal{F})(U) =
+\{s \in \mathcal{F}(U) \mid \text{Supp}(s) \subset U \cap Z\} =
+\Gamma_{Z \cap U}(U, \mathcal{F}|_U)
+$$
+As discussed in
+Modules, Remark \ref{modules-remark-sections-support-in-closed-modules}
+we may view $\mathcal{H}_Z(\mathcal{F})$ as an $\mathcal{O}_X|_Z$-module
+on $Z$ and we obtain a functor
+$$
+\textit{Mod}(\mathcal{O}_X) \longrightarrow \textit{Mod}(\mathcal{O}_X|_Z),
+\quad
+\mathcal{F} \longmapsto \mathcal{H}_Z(\mathcal{F})
+\text{ viewed as an }\mathcal{O}_X|_Z\text{-module on }Z
+$$
+This functor is left exact, but in general not exact. Exactly as above
+we obtain a right derived functor
+$$
+R\mathcal{H}_Z : D(\mathcal{O}_X) \longrightarrow D(\mathcal{O}_X|_Z)
+$$
+We set $\mathcal{H}^q_Z(K) = H^q(R\mathcal{H}_Z(K))$ so that
+$\mathcal{H}^0_Z(\mathcal{F}) = \mathcal{H}_Z(\mathcal{F})$
+for any sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$.
+
+\begin{lemma}
+\label{lemma-cohomology-with-support-sheaf-on-support}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $i : Z \to X$ be the
+inclusion of a closed subset.
+\begin{enumerate}
+\item $R\mathcal{H}_Z : D(\mathcal{O}_X) \to D(\mathcal{O}_X|_Z)$
+is right adjoint to $i_* : D(\mathcal{O}_X|_Z) \to D(\mathcal{O}_X)$.
+\item For $K$ in $D(\mathcal{O}_X|_Z)$ we have $R\mathcal{H}_Z(i_*K) = K$.
+\item Let $\mathcal{G}$ be a sheaf of
+$\mathcal{O}_X|_Z$-modules on $Z$. Then
+$\mathcal{H}^p_Z(i_*\mathcal{G}) = 0$ for $p > 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The functor $i_*$ is exact, so $i_* = Ri_* = Li_*$. Hence part (1)
+of the lemma follows from
+Modules, Lemma \ref{modules-lemma-adjoint-section-with-support}
+and
+Derived Categories, Lemma \ref{derived-lemma-derived-adjoint-functors}.
+Let $K$ be as in (2). We can represent $K$ by a K-injective complex
+$\mathcal{I}^\bullet$ of $\mathcal{O}_X|_Z$-modules. By
+Lemma \ref{lemma-K-injective-flat}
+the complex $i_*\mathcal{I}^\bullet$, which represents $i_*K$,
+is a K-injective complex of $\mathcal{O}_X$-modules. Thus
+$R\mathcal{H}_Z(i_*K)$ is computed by
+$\mathcal{H}_Z(i_*\mathcal{I}^\bullet) = \mathcal{I}^\bullet$
+which proves (2). Part (3) is a special case of (2).
+\end{proof}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space and let $Z \subset X$
+be a closed subset. The category of $\mathcal{O}_X$-modules whose
+support is contained in $Z$ is a Serre subcategory of the
+category of all $\mathcal{O}_X$-modules, see
+Homology, Definition \ref{homology-definition-serre-subcategory}
+and
+Modules, Lemma \ref{modules-lemma-support-section-closed}.
+We denote $D_Z(\mathcal{O}_X)$
+the strictly full saturated triangulated subcategory of
+$D(\mathcal{O}_X)$ consisting of complexes whose cohomology sheaves
+are supported on $Z$, see
+Derived Categories, Section \ref{derived-section-triangulated-sub}.
+
+\begin{lemma}
+\label{lemma-complexes-with-support-on-closed}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $i : Z \to X$ be the
+inclusion of a closed subset.
+\begin{enumerate}
+\item For $K$ in $D(\mathcal{O}_X|_Z)$ we have $i_*K$ in $D_Z(\mathcal{O}_X)$.
+\item The functor $i_* : D(\mathcal{O}_X|_Z) \to D_Z(\mathcal{O}_X)$
+is an equivalence with quasi-inverse
+$i^{-1}|_{D_Z(\mathcal{O}_X)} = R\mathcal{H}_Z|_{D_Z(\mathcal{O}_X)}$.
+\item The functor
+$i_* \circ R\mathcal{H}_Z : D(\mathcal{O}_X) \to D_Z(\mathcal{O}_X)$
+is right adjoint to the inclusion functor
+$D_Z(\mathcal{O}_X) \to D(\mathcal{O}_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is immediate from the definitions. Part (3) is a formal
+consequence of part (2) and
+Lemma \ref{lemma-cohomology-with-support-sheaf-on-support}.
+In the rest of the proof we prove part (2).
+
+\medskip\noindent
+Let us think of $i$ as the morphism of ringed spaces
+$i : (Z, \mathcal{O}_X|_Z) \to (X, \mathcal{O}_X)$.
+Recall that $i^*$ and $i_*$ is an adjoint pair of functors.
+Since $i$ is a closed immersion, $i_*$ is exact.
+Since $i^{-1}\mathcal{O}_X = \mathcal{O}_X|_Z$ is the structure
+sheaf of $(Z, \mathcal{O}_X|_Z)$ we see that $i^* = i^{-1}$
+is exact and we see that $i^*i_* = i^{-1}i_*$
+is isomorphic to the identity functor. See
+Modules, Lemmas \ref{modules-lemma-exactness-pushforward-pullback} and
+\ref{modules-lemma-i-star-exact}. Thus
+$i_* : D(\mathcal{O}_X|_Z) \to D_Z(\mathcal{O}_X)$
+is fully faithful and $i^{-1}$ determines
+a left inverse. On the other hand, suppose that $K$ is an object of
+$D_Z(\mathcal{O}_X)$ and consider the adjunction map
+$K \to i_*i^{-1}K$.
+Using exactness of $i_*$ and $i^{-1}$ this induces the adjunction maps
+$H^n(K) \to i_*i^{-1}H^n(K)$ on cohomology sheaves. Since these cohomology
+sheaves are supported on $Z$ we see these adjunction maps are isomorphisms
+and we conclude that $i_* : D(\mathcal{O}_X|_Z) \to D_Z(\mathcal{O}_X)$
+is an equivalence.
+
+\medskip\noindent
+To finish the proof it suffices to show that $R\mathcal{H}_Z(K) = i^{-1}K$ if
+$K$ is an object of $D_Z(\mathcal{O}_X)$. To do this we can use that
+$K = i_*i^{-1}K$ as we've just proved this is the case. Then
+Lemma \ref{lemma-cohomology-with-support-sheaf-on-support}
+tells us what we want.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sections-with-support-K-injective}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $i : Z \to X$
+be the inclusion of a closed subset. If $\mathcal{I}^\bullet$ is a K-injective
+complex of $\mathcal{O}_X$-modules, then
+$\mathcal{H}_Z(\mathcal{I}^\bullet)$ is a K-injective complex of
+$\mathcal{O}_X|_Z$-modules.
+\end{lemma}
+
+\begin{proof}
+Since $i_* : \textit{Mod}(\mathcal{O}_X|_Z) \to \textit{Mod}(\mathcal{O}_X)$
+is exact and left adjoint to $\mathcal{H}_Z$
+(Modules, Lemma \ref{modules-lemma-adjoint-section-with-support})
+this follows from
+Derived Categories, Lemma \ref{derived-lemma-adjoint-preserve-K-injectives}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-to-global-sections-with-support}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $i : Z \to X$ be the
+inclusion of a closed subset. Then
+$R\Gamma(Z, - ) \circ R\mathcal{H}_Z = R\Gamma_Z(X, - )$
+as functors $D(\mathcal{O}_X) \to D(\mathcal{O}_X(X))$.
+\end{lemma}
+
+\begin{proof}
+Follows from the construction of right derived functors using
+K-injective resolutions, Lemma \ref{lemma-sections-with-support-K-injective},
+and the fact that $\Gamma_Z(X, -) = \Gamma(Z, -) \circ \mathcal{H}_Z$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-triangle-sections-with-support}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $i : Z \to X$ be the
+inclusion of a closed subset. Let $U = X \setminus Z$.
+There is a distinguished triangle
+$$
+R\Gamma_Z(X, K) \to R\Gamma(X, K) \to R\Gamma(U, K) \to
+R\Gamma_Z(X, K)[1]
+$$
+in $D(\mathcal{O}_X(X))$ functorial for $K$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$ all of whose terms
+are injective $\mathcal{O}_X$-modules representing $K$. See
+Section \ref{section-unbounded}. Recall that $\mathcal{I}^\bullet|_U$
+is a K-injective complex of $\mathcal{O}_U$-modules, see
+Lemma \ref{lemma-restrict-K-injective-to-open}. Hence each
+of the derived functors in the distinguished triangle is gotten
+by applying the underlying functor to $\mathcal{I}^\bullet$.
+Hence we find that it suffices to prove that
+for an injective $\mathcal{O}_X$-module $\mathcal{I}$ we have
+a short exact sequence
+$$
+0 \to \Gamma_Z(X, \mathcal{I}) \to \Gamma(X, \mathcal{I})
+\to \Gamma(U, \mathcal{I}) \to 0
+$$
+This follows from Lemma \ref{lemma-injective-restriction-surjective}
+and the definitions.
+\end{proof}
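+
+\noindent
+Taking cohomology modules in the distinguished triangle of
+Lemma \ref{lemma-triangle-sections-with-support} yields a long
+exact sequence
+$$
+\ldots \to H^q_Z(X, K) \to H^q(X, K) \to H^q(U, K) \to
+H^{q + 1}_Z(X, K) \to \ldots
+$$
+relating cohomology with support in $Z$ to the cohomology of
+$X$ and of the open complement $U$.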
+
+\begin{lemma}
+\label{lemma-triangle-sections-with-support-sheaves}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $i : Z \to X$ be the
+inclusion of a closed subset. Denote $j : U = X \setminus Z \to X$
+the inclusion of the complement. There is a distinguished triangle
+$$
+i_*R\mathcal{H}_Z(K) \to K \to Rj_*(K|_U) \to
+i_*R\mathcal{H}_Z(K)[1]
+$$
+in $D(\mathcal{O}_X)$ functorial for $K$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$ all of whose terms
+are injective $\mathcal{O}_X$-modules representing $K$. See
+Section \ref{section-unbounded}. Recall that $\mathcal{I}^\bullet|_U$
+is a K-injective complex of $\mathcal{O}_U$-modules, see
+Lemma \ref{lemma-restrict-K-injective-to-open}. Hence each
+of the derived functors in the distinguished triangle is gotten
+by applying the underlying functor to $\mathcal{I}^\bullet$.
+Hence it suffices to prove that
+for an injective $\mathcal{O}_X$-module $\mathcal{I}$ we have
+a short exact sequence
+$$
+0 \to i_*\mathcal{H}_Z(\mathcal{I}) \to \mathcal{I}
+\to j_*(\mathcal{I}|_U) \to 0
+$$
+This follows from Lemma \ref{lemma-injective-restriction-surjective}
+and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sections-support-in-closed-disjoint-open}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $Z \subset X$
+be a closed subset. Let $j : U \to X$ be the inclusion of
+an open subset with $U \cap Z = \emptyset$. Then
+$R\mathcal{H}_Z(Rj_*K) = 0$ for all $K$ in $D(\mathcal{O}_U)$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$ of $\mathcal{O}_U$-modules
+representing $K$. Then $j_*\mathcal{I}^\bullet$ represents $Rj_*K$. By
+Lemma \ref{lemma-K-injective-flat} the complex $j_*\mathcal{I}^\bullet$ is a
+K-injective complex of $\mathcal{O}_X$-modules. Hence
+$\mathcal{H}_Z(j_*\mathcal{I}^\bullet)$ represents $R\mathcal{H}_Z(Rj_*K)$.
+Thus it suffices to show that $\mathcal{H}_Z(j_*\mathcal{G}) = 0$
+for any abelian sheaf $\mathcal{G}$ on $U$. Thus we have to show that
+a section $s$ of $j_*\mathcal{G}$ over some open $W$ which is supported
+on $W \cap Z$ is zero. The support condition means that
+$s|_{W \setminus W \cap Z} = 0$. Since $j_*\mathcal{G}(W) =
+\mathcal{G}(U \cap W) = j_*\mathcal{G}(W \setminus W \cap Z)$
+this implies that $s$ is zero as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sections-support-abelian-unbounded}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $Z \subset X$
+be a closed subset. Let $K$ be an object of $D(\mathcal{O}_X)$
+and denote $K_{ab}$ its image in $D(\underline{\mathbf{Z}}_X)$.
+\begin{enumerate}
+\item There is a canonical map $R\Gamma_Z(X, K) \to R\Gamma_Z(X, K_{ab})$
+which is an isomorphism in $D(\textit{Ab})$.
+\item There is a canonical map
+$R\mathcal{H}_Z(K) \to R\mathcal{H}_Z(K_{ab})$
+which is an isomorphism in $D(\underline{\mathbf{Z}}_Z)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). The map is constructed as follows. Choose a K-injective complex
+of $\mathcal{O}_X$-modules $\mathcal{I}^\bullet$ representing $K$.
Choose a quasi-isomorphism
+$\mathcal{I}^\bullet \to \mathcal{J}^\bullet$ where $\mathcal{J}^\bullet$
+is a K-injective complex of abelian groups. Then the map in (1)
+is given by
+$$
+\Gamma_Z(X, \mathcal{I}^\bullet) \to \Gamma_Z(X, \mathcal{J}^\bullet)
+$$
+determined by the fact that $\Gamma_Z$ is a functor on abelian sheaves.
+An easy check shows that the resulting map combined with the canonical
+maps of Lemma \ref{lemma-modules-abelian-unbounded}
+fit into a morphism of distinguished triangles
+$$
+\xymatrix{
+R\Gamma_Z(X, K) \ar[r] \ar[d] &
+R\Gamma(X, K) \ar[r] \ar[d] &
+R\Gamma(U, K) \ar[d] \\
+R\Gamma_Z(X, K_{ab}) \ar[r] &
+R\Gamma(X, K_{ab}) \ar[r] &
+R\Gamma(U, K_{ab})
+}
+$$
+of Lemma \ref{lemma-triangle-sections-with-support}.
+Since two of the three arrows are isomorphisms by the lemma cited,
+we conclude by Derived Categories, Lemma
+\ref{derived-lemma-third-isomorphism-triangle}.
+
+\medskip\noindent
+The proof of (2) is omitted. Hint: use the same argument with
+Lemma \ref{lemma-triangle-sections-with-support-sheaves}
+for the distinguished triangle.
+\end{proof}
+
+\begin{remark}
+\label{remark-support-cup-product}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $i : Z \to X$
+be the inclusion of a closed subset. Given $K$ and $M$ in
+$D(\mathcal{O}_X)$ there is a canonical map
+$$
+K|_Z \otimes_{\mathcal{O}_X|_Z}^\mathbf{L} R\mathcal{H}_Z(M)
+\longrightarrow
+R\mathcal{H}_Z(K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+$$
+in $D(\mathcal{O}_X|_Z)$. Here $K|_Z = i^{-1}K$ is the restriction of
+$K$ to $Z$ viewed as an object of $D(\mathcal{O}_X|_Z)$. By adjointness
+of $i_*$ and $R\mathcal{H}_Z$ of
Lemma \ref{lemma-cohomology-with-support-sheaf-on-support}, to construct this
+map it suffices to produce a canonical map
+$$
+i_*\left(K|_Z \otimes_{\mathcal{O}_X|_Z}^\mathbf{L} R\mathcal{H}_Z(M)\right)
+\longrightarrow
+K \otimes_{\mathcal{O}_X}^\mathbf{L} M
+$$
+To construct this map, we choose a K-injective complex $\mathcal{I}^\bullet$
+of $\mathcal{O}_X$-modules representing $M$ and a K-flat complex
+$\mathcal{K}^\bullet$ of $\mathcal{O}_X$-modules representing $K$.
+Observe that $\mathcal{K}^\bullet|_Z$ is a K-flat complex of
+$\mathcal{O}_X|_Z$-modules representing $K|_Z$, see
+Lemma \ref{lemma-pullback-K-flat}. Hence we need to produce a map
+of complexes
+$$
+i_*\text{Tot}\left(
+\mathcal{K}^\bullet|_Z \otimes_{\mathcal{O}_X|_Z}
+\mathcal{H}_Z(\mathcal{I}^\bullet)\right)
+\longrightarrow
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{I}^\bullet)
+$$
+of $\mathcal{O}_X$-modules. For this it suffices to produce maps
+$$
+i_*(\mathcal{K}^a|_Z \otimes_{\mathcal{O}_X|_Z}
+\mathcal{H}_Z(\mathcal{I}^b))
+\longrightarrow
+\mathcal{K}^a \otimes_{\mathcal{O}_X} \mathcal{I}^b
+$$
+Looking at stalks (for example), we see that the left hand side of this
+formula is equal to
+$\mathcal{K}^a \otimes_{\mathcal{O}_X} i_*\mathcal{H}_Z(\mathcal{I}^b)$
+and we can use the inclusion
+$\mathcal{H}_Z(\mathcal{I}^b) \to \mathcal{I}^b$ to get our map.
+\end{remark}
+
+\begin{remark}
+\label{remark-support-cup-product-global}
+With notation as in Remark \ref{remark-support-cup-product}
+we obtain a canonical cup product
+\begin{align*}
+H^a(X, K) \times H^b_Z(X, M)
+& =
+H^a(X, K) \times H^b(Z, R\mathcal{H}_Z(M)) \\
+& \to
+H^a(Z, K|_Z) \times H^b(Z, R\mathcal{H}_Z(M)) \\
+& \to
+H^{a + b}(Z, K|_Z \otimes_{\mathcal{O}_X|_Z}^\mathbf{L} R\mathcal{H}_Z(M)) \\
+& \to
+H^{a + b}(Z, R\mathcal{H}_Z(K \otimes_{\mathcal{O}_X}^\mathbf{L} M)) \\
+& =
+H^{a + b}_Z(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+\end{align*}
+Here the equal signs are given by
+Lemma \ref{lemma-local-to-global-sections-with-support},
+the first arrow is restriction to $Z$, the second
+arrow is the cup product (Section \ref{section-cup-product}),
+and the third arrow is the map from Remark \ref{remark-support-cup-product}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-support-cup-product}
+With notation as in Remark \ref{remark-support-cup-product} the diagram
+$$
+\xymatrix{
+H^i(X, K) \times H^j_Z(X, M) \ar[r] \ar[d] &
+H^{i + j}_Z(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M) \ar[d] \\
+H^i(X, K) \times H^j(X, M) \ar[r] &
+H^{i + j}(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+}
+$$
+commutes where the top horizontal arrow is the cup product of
+Remark \ref{remark-support-cup-product-global}.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-support-functorial}
+Let $f : (X', \mathcal{O}_{X'}) \to (X, \mathcal{O}_X)$ be a morphism
+of ringed spaces. Let $Z \subset X$ be a closed subset and $Z' = f^{-1}(Z)$.
Denote $f|_{Z'} : (Z', \mathcal{O}_{X'}|_{Z'}) \to (Z, \mathcal{O}_X|_Z)$ the
+induced morphism of ringed spaces. For any $K$ in $D(\mathcal{O}_X)$ there
+is a canonical map
+$$
+L(f|_{Z'})^*R\mathcal{H}_Z(K) \longrightarrow R\mathcal{H}_{Z'}(Lf^*K)
+$$
+in $D(\mathcal{O}_{X'}|_{Z'})$. Denote $i : Z \to X$ and $i' : Z' \to X'$
+the inclusion maps. By
+Lemma \ref{lemma-complexes-with-support-on-closed} part (2)
+applied to $i'$ it is the same thing to give a map
+$$
+i'_* L(f|_{Z'})^* R\mathcal{H}_Z(K)
+\longrightarrow
+i'_*R\mathcal{H}_{Z'}(Lf^*K)
+$$
+in $D_{Z'}(\mathcal{O}_{X'})$. The map of functors
+$Lf^* \circ i_* \to i'_* \circ L(f|_{Z'})^*$ of
+Remark \ref{remark-base-change} is an isomorphism in this case
+(follows by checking what happens on stalks using that $i_*$ and $i'_*$
+are exact and that $\mathcal{O}_{Z, z} = \mathcal{O}_{X, z}$
and similarly for $Z'$). Hence it suffices to construct the top
horizontal arrow in the following diagram
+$$
+\xymatrix{
+Lf^* i_* R\mathcal{H}_Z(K) \ar[rr] \ar[rd] & &
+i'_* R\mathcal{H}_{Z'}(Lf^*K) \ar[ld] \\
+& Lf^*K
+}
+$$
+The complex $Lf^* i_* R\mathcal{H}_Z(K)$ is supported on $Z'$. The south-east
+arrow comes from the adjunction mapping $i_*R\mathcal{H}_Z(K) \to K$
+(Lemma \ref{lemma-cohomology-with-support-sheaf-on-support}). Since the
+adjunction mapping $i'_* R\mathcal{H}_{Z'}(Lf^*K) \to Lf^*K$ is universal by
+Lemma \ref{lemma-complexes-with-support-on-closed} part (3), we find that
+the south-east arrow factors uniquely over the south-west arrow and
+we obtain the desired arrow.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-support-functorial}
+With notation and assumptions as in Remark \ref{remark-support-functorial}
+the diagram
+$$
+\xymatrix{
H^p_Z(X, K) \ar[r] \ar[d] & H^p_{Z'}(X', Lf^*K) \ar[d] \\
+H^p(X, K) \ar[r] & H^p(X', Lf^*K)
+}
+$$
+commutes. Here the top horizontal arrow comes from the identifications
+$H^p_Z(X, K) = H^p(Z, R\mathcal{H}_Z(K))$ and
$H^p_{Z'}(X', Lf^*K) = H^p(Z', R\mathcal{H}_{Z'}(Lf^*K))$,
+the pullback map
+$H^p(Z, R\mathcal{H}_Z(K)) \to H^p(Z', L(f|_{Z'})^*R\mathcal{H}_Z(K))$,
+and the map constructed in Remark \ref{remark-support-functorial}.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hints:
+Using that $H^p(Z, R\mathcal{H}_Z(K)) = H^p(X, i_*R\mathcal{H}_Z(K))$
+and similarly for $R\mathcal{H}_{Z'}(Lf^*K)$ this follows from
+the functoriality of the pullback maps and the commutative diagram
+used to define the map of Remark \ref{remark-support-functorial}.
+\end{proof}
+
+
+
+
+
+
+\section{Inverse systems and cohomology}
+\label{section-inverse-systems}
+
+\noindent
+We prove some results on inverse systems of sheaves of modules.
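For the convenience of the reader we sketch the standard definition used
throughout this section. An inverse system $(M_n)$ of abelian groups
satisfies the Mittag-Leffler condition if and only if for every $n$ the
decreasing family of images $\Im(M_m \to M_n)$, $m \geq n$, stabilizes,
i.e.,
$$
\Im(M_m \to M_n) = \Im(M_{m'} \to M_n)
\quad\text{for all } m' \geq m \gg n.
$$
The common value is called the stable image; this is the stable image
referred to in the footnotes of the lemmas below.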
+
+\begin{lemma}
+\label{lemma-ML-general}
+Let $I$ be an ideal of a ring $A$. Let $X$ be a topological space.
+Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of sheaves of $A$-modules on $X$
+such that $\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$.
+Let $p \geq 0$. Assume
+$$
+\bigoplus\nolimits_{n \geq 0} H^{p + 1}(X, I^n\mathcal{F}_{n + 1})
+$$
+satisfies the ascending chain condition as a graded
+$\bigoplus_{n \geq 0} I^n/I^{n + 1}$-module.
+Then the inverse system $M_n = H^p(X, \mathcal{F}_n)$ satisfies the
+Mittag-Leffler condition\footnote{In fact, there exists
+a $c \geq 0$ such that $\Im(M_n \to M_{n - c})$ is the stable image
+for all $n \geq c$.}.
+\end{lemma}
+
+\begin{proof}
+Set $N_n = H^{p + 1}(X, I^n\mathcal{F}_{n + 1})$ and let
+$\delta_n : M_n \to N_n$ be the boundary map on cohomology
+coming from the short exact sequence
+$0 \to I^n\mathcal{F}_{n + 1} \to \mathcal{F}_{n + 1} \to \mathcal{F}_n \to 0$.
+Then $\bigoplus \Im(\delta_n) \subset \bigoplus N_n$ is a graded submodule.
+Namely, if $s \in M_n$ and $f \in I^m$, then we have a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+I^n\mathcal{F}_{n + 1} \ar[d]_f \ar[r] &
+\mathcal{F}_{n + 1} \ar[d]_f \ar[r] &
+\mathcal{F}_n \ar[d]_f \ar[r] & 0 \\
+0 \ar[r] &
+I^{n + m}\mathcal{F}_{n + m + 1} \ar[r] &
+\mathcal{F}_{n + m + 1} \ar[r] &
+\mathcal{F}_{n + m} \ar[r] & 0
+}
+$$
+The middle vertical map is given by lifting a local section of
+$\mathcal{F}_{n + 1}$ to a section of $\mathcal{F}_{n + m + 1}$
+and then multiplying by $f$; similarly for the other vertical arrows.
+We conclude that $\delta_{n + m}(fs) = f \delta_n(s)$.
+By assumption we can find $s_j \in M_{n_j}$, $j = 1, \ldots, N$
+such that $\delta_{n_j}(s_j)$
+generate $\bigoplus \Im(\delta_n)$ as a graded module. Let $n > c = \max(n_j)$.
+Let $s \in M_n$. Then we can find $f_j \in I^{n - n_j}$ such that
+$\delta_n(s) = \sum f_j \delta_{n_j}(s_j)$. We conclude that
$\delta_n(s - \sum f_j s_j) = 0$, i.e., we can find $s' \in M_{n + 1}$
+mapping to $s - \sum f_js_j$ in $M_n$. It follows that
+$$
+\Im(M_{n + 1} \to M_{n - c}) = \Im(M_n \to M_{n - c})
+$$
+Namely, the elements $f_js_j$ map to zero in $M_{n - c}$.
+This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ML-general-better}
+Let $I$ be an ideal of a ring $A$. Let $X$ be a topological space. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
be an inverse system of sheaves of $A$-modules on $X$
+such that $\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$.
+Let $p \geq 0$. Given $n$ define
+$$
+N_n =
+\bigcap\nolimits_{m \geq n}
+\Im\left(
+H^{p + 1}(X, I^n\mathcal{F}_{m + 1}) \to H^{p + 1}(X, I^n\mathcal{F}_{n + 1})
+\right)
+$$
+If $\bigoplus N_n$ satisfies the ascending chain condition as a graded
+$\bigoplus_{n \geq 0} I^n/I^{n + 1}$-module, then the inverse system
+$M_n = H^p(X, \mathcal{F}_n)$ satisfies the Mittag-Leffler
+condition\footnote{In fact, there exists
+a $c \geq 0$ such that $\Im(M_n \to M_{n - c})$ is the stable image
+for all $n \geq c$.}.
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of Lemma \ref{lemma-ML-general}.
+In fact, the result will follow from the arguments given there
+as soon as we show that
+$\bigoplus N_n$ is a graded $\bigoplus_{n \geq 0} I^n/I^{n + 1}$-submodule
+of $\bigoplus H^{p + 1}(X, I^n\mathcal{F}_{n + 1})$
+and that the boundary maps
+$\delta_n : M_n \to H^{p + 1}(X, I^n\mathcal{F}_{n + 1})$
+have image contained in $N_n$.
+
+\medskip\noindent
+Suppose that $\xi \in N_n$ and $f \in I^k$.
+Choose $m \gg n + k$. Choose
+$\xi' \in H^{p + 1}(X, I^n\mathcal{F}_{m + 1})$ lifting
+$\xi$. We consider the diagram
+$$
+\xymatrix{
+0 \ar[r] &
+I^n\mathcal{F}_{m + 1} \ar[d]_f \ar[r] &
+\mathcal{F}_{m + 1} \ar[d]_f \ar[r] &
+\mathcal{F}_n \ar[d]_f \ar[r] & 0 \\
+0 \ar[r] &
+I^{n + k}\mathcal{F}_{m + 1} \ar[r] &
+\mathcal{F}_{m + 1} \ar[r] &
+\mathcal{F}_{n + k} \ar[r] & 0
+}
+$$
+constructed as in the proof of Lemma \ref{lemma-ML-general}.
+We get an induced map on cohomology and we see that
+$f \xi' \in H^{p + 1}(X, I^{n + k}\mathcal{F}_{m + 1})$
+maps to $f \xi$. Since this is true for all $m \gg n + k$
+we see that $f\xi$ is in $N_{n + k}$ as desired.
+
+\medskip\noindent
+To see the boundary maps $\delta_n$ have image contained in $N_n$
+we consider the diagrams
+$$
+\xymatrix{
+0 \ar[r] &
+I^n\mathcal{F}_{m + 1} \ar[d] \ar[r] &
+\mathcal{F}_{m + 1} \ar[d] \ar[r] &
+\mathcal{F}_n \ar[d] \ar[r] & 0 \\
+0 \ar[r] &
+I^n\mathcal{F}_{n + 1} \ar[r] &
+\mathcal{F}_{n + 1} \ar[r] &
+\mathcal{F}_n \ar[r] & 0
+}
+$$
+for $m \geq n$. Looking at the induced maps on cohomology we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-topology-I-adic-general}
+Let $I$ be an ideal of a ring $A$. Let $X$ be a topological space. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of sheaves of $A$-modules on $X$ such that
+$\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$.
+Let $p \geq 0$. Assume
+$$
+\bigoplus\nolimits_{n \geq 0} H^p(X, I^n\mathcal{F}_{n + 1})
+$$
+satisfies the ascending chain condition as a graded
+$\bigoplus_{n \geq 0} I^n/I^{n + 1}$-module.
+Then the limit topology on $M = \lim H^p(X, \mathcal{F}_n)$
+is the $I$-adic topology.
+\end{lemma}
+
+\begin{proof}
+Set $F^n = \Ker(M \to H^p(X, \mathcal{F}_n))$ for $n \geq 1$ and $F^0 = M$.
+Observe that $I F^n \subset F^{n + 1}$. In particular $I^n M \subset F^n$.
+Hence the $I$-adic topology is finer than the limit topology. For
+the converse, we will show that given $n$
there exists an $m \geq n$ such that $F^m \subset I^nM$\footnote{In fact,
there exists a $c \geq 0$ such that $F^{n + c} \subset I^nM$ for all $n$.}.
+We have injective maps
+$$
+F^n/F^{n + 1} \longrightarrow H^p(X, \mathcal{F}_{n + 1})
+$$
+whose image is contained in the image of
+$H^p(X, I^n\mathcal{F}_{n + 1}) \to H^p(X, \mathcal{F}_{n + 1})$.
+Denote
+$$
+E_n \subset H^p(X, I^n\mathcal{F}_{n + 1})
+$$
+the inverse image of $F^n/F^{n + 1}$. Then $\bigoplus E_n$ is
+a graded $\bigoplus I^n/I^{n + 1}$-submodule of
+$\bigoplus H^p(X, I^n\mathcal{F}_{n + 1})$ and
+$\bigoplus E_n \to \bigoplus F^n/F^{n + 1}$ is a homomorphism of graded
+modules; details omitted. By assumption $\bigoplus E_n$ is generated by
+finitely many homogeneous elements over $\bigoplus I^n/I^{n + 1}$.
+Since $E_n \to F^n/F^{n + 1}$ is surjective, we see that
+the same thing is true of $\bigoplus F^n/F^{n + 1}$.
+Hence we can find $r$ and $c_1, \ldots, c_r \geq 0$ and
+$a_i \in F^{c_i}$ whose images in $\bigoplus F^n/F^{n + 1}$ generate.
+Set $c = \max(c_i)$.
+
+\medskip\noindent
+For $n \geq c$ we claim that $I F^n = F^{n + 1}$. The claim shows that
+$F^{n + c} = I^nF^c \subset I^nM$ as desired. To prove the claim
+suppose $a \in F^{n + 1}$. The image of
+$a$ in $F^{n + 1}/F^{n + 2}$ is a linear combination
+of our $a_i$. Therefore $a - \sum f_i a_i \in F^{n + 2}$
+for some $f_i \in I^{n + 1 - c_i}$. Since
+$I^{n + 1 - c_i} = I \cdot I^{n - c_i}$ as $n \geq c_i$ we can write
$f_i = \sum g_{i, j} h_{i, j}$ with $g_{i, j} \in I$ and
$h_{i, j} \in I^{n - c_i}$, so that $h_{i, j}a_i \in F^n$. Thus we see that
+$F^{n + 1} = F^{n + 2} + IF^n$.
+A simple induction argument gives $F^{n + 1} = F^{n + e} + IF^n$
+for all $e > 0$. It follows that $IF^n$ is dense in $F^{n + 1}$.
+Choose generators $k_1, \ldots, k_r$ of $I$ and consider
+the continuous map
+$$
+u : (F^n)^{\oplus r} \longrightarrow F^{n + 1},\quad
+(x_1, \ldots, x_r) \mapsto \sum k_i x_i
+$$
+(in the limit topology).
+By the above the image of $(F^m)^{\oplus r}$ under $u$ is dense in
+$F^{m + 1}$ for all $m \geq n$. By the open mapping lemma
+(More on Algebra, Lemma \ref{more-algebra-lemma-open-mapping}) we find
+that $u$ is open. Hence $u$ is surjective. Hence $IF^n = F^{n + 1}$
+for $n \geq c$. This concludes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-topology-I-adic-general-better}
+Let $I$ be an ideal of a ring $A$. Let $X$ be a topological space. Let
+$$
+\ldots \to \mathcal{F}_3 \to \mathcal{F}_2 \to \mathcal{F}_1
+$$
+be an inverse system of sheaves of $A$-modules on $X$ such that
+$\mathcal{F}_n = \mathcal{F}_{n + 1}/I^n\mathcal{F}_{n + 1}$.
+Let $p \geq 0$. Given $n$ define
+$$
+N_n =
+\bigcap\nolimits_{m \geq n}
+\Im\left(
+H^p(X, I^n\mathcal{F}_{m + 1}) \to H^p(X, I^n\mathcal{F}_{n + 1})
+\right)
+$$
+If $\bigoplus N_n$ satisfies the ascending chain condition as a graded
+$\bigoplus_{n \geq 0} I^n/I^{n + 1}$-module, then
+the limit topology on $M = \lim H^p(X, \mathcal{F}_n)$
+is the $I$-adic topology.
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-topology-I-adic-general}.
+In fact, the result will follow from the arguments given there
+as soon as we show that
+$\bigoplus N_n$ is a graded $\bigoplus_{n \geq 0} I^n/I^{n + 1}$-submodule
of $\bigoplus H^p(X, I^n\mathcal{F}_{n + 1})$
+and that $F^n/F^{n + 1} \subset H^p(X, \mathcal{F}_{n + 1})$
+is contained in the image of $N_n \to H^p(X, \mathcal{F}_{n + 1})$.
+In the proof of Lemma \ref{lemma-ML-general-better}
+we have seen the statement on the module structure.
+
+\medskip\noindent
+Let $t \in F^n$. Choose an element $s \in H^p(X, I^n\mathcal{F}_{n + 1})$
+which maps to the image of $t$ in $H^p(X, \mathcal{F}_{n + 1})$. We have
to show that $s$ is in $N_n$. Now $F^n$ is the kernel of the map
$M \to H^p(X, \mathcal{F}_n)$, hence for all $m \geq n$ we can map $t$
+to an element $t_m \in H^p(X, \mathcal{F}_{m + 1})$ which maps to zero
+in $H^p(X, \mathcal{F}_n)$. Consider the cohomology sequence
+$$
+H^{p - 1}(X, \mathcal{F}_n) \to
+H^p(X, I^n\mathcal{F}_{m + 1}) \to
+H^p(X, \mathcal{F}_{m + 1}) \to
+H^p(X, \mathcal{F}_n)
+$$
+coming from the short exact sequence
+$0 \to I^n\mathcal{F}_{m + 1} \to \mathcal{F}_{m + 1} \to \mathcal{F}_n \to 0$.
+We can choose $s_m \in H^p(X, I^n\mathcal{F}_{m + 1})$ mapping to $t_m$.
+Comparing the sequence above with the one for $m = n$ we see that
+$s_m$ maps to $s$ up to an element in the image of
+$H^{p - 1}(X, \mathcal{F}_n) \to H^p(X, I^n\mathcal{F}_{n + 1})$.
+However, this map factors through the map
+$H^p(X, I^n\mathcal{F}_{m + 1}) \to H^p(X, I^n\mathcal{F}_{n + 1})$
+and we see that $s$ is in the image as desired.
+\end{proof}
+
+
+
+
+
+
+
+\section{Derived limits}
+\label{section-derived-limits}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Since the triangulated category
+$D(\mathcal{O}_X)$ has products
+(Injectives, Lemma \ref{injectives-lemma-derived-products})
+it follows that $D(\mathcal{O}_X)$ has derived limits, see
+Derived Categories, Definition \ref{derived-definition-derived-limit}.
+If $(K_n)$ is an inverse system in $D(\mathcal{O}_X)$ then we
+denote $R\lim K_n$ the derived limit.
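For the arguments below it is convenient to recall (a sketch following
the definition referred to above; sign conventions are as in that
reference) that $R\lim K_n$ is characterized by a distinguished triangle
$$
R\lim K_n \to \prod\nolimits_n K_n \to \prod\nolimits_n K_n \to
R\lim K_n[1]
$$
in which the middle arrow is the difference of the identity and the
transition maps $K_{n + 1} \to K_n$.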
+
+\begin{lemma}
+\label{lemma-RGamma-commutes-with-Rlim}
+Let $(X, \mathcal{O}_X)$ be a ringed space. For $U \subset X$ open the
+functor $R\Gamma(U, -)$ commutes with $R\lim$. Moreover, there are
+short exact sequences
+$$
+0 \to
+R^1\lim H^{m - 1}(U, K_n) \to H^m(U, R\lim K_n) \to
+\lim H^m(U, K_n) \to 0
+$$
+for any inverse system $(K_n)$ in $D(\mathcal{O}_X)$ and any $m \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+The first statement follows from
+Injectives, Lemma \ref{injectives-lemma-RF-commutes-with-Rlim}.
+Then we may apply
+More on Algebra, Remark \ref{more-algebra-remark-compare-derived-limit}
+to $R\lim R\Gamma(U, K_n) = R\Gamma(U, R\lim K_n)$ to get the short
+exact sequences.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Rf-commutes-with-Rlim}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of ringed
+spaces. Then $Rf_*$ commutes with $R\lim$, i.e., $Rf_*$ commutes with
+derived limits.
+\end{lemma}
+
+\begin{proof}
+Let $(K_n)$ be an inverse system in $D(\mathcal{O}_X)$. Consider the defining
+distinguished triangle
+$$
+R\lim K_n \to \prod K_n \to \prod K_n
+$$
+in $D(\mathcal{O}_X)$. Applying the exact functor $Rf_*$ we obtain
+the distinguished triangle
+$$
+Rf_*(R\lim K_n) \to Rf_*\left(\prod K_n\right) \to Rf_*\left(\prod K_n\right)
+$$
+in $D(\mathcal{O}_Y)$. Thus we see that it suffices to prove that
+$Rf_*$ commutes with products in the derived category (which are not just
+given by products of complexes, see
+Injectives, Lemma \ref{injectives-lemma-derived-products}).
+However, since $Rf_*$ is a right adjoint by Lemma \ref{lemma-adjoint}
+this follows formally (see
+Categories, Lemma \ref{categories-lemma-adjoint-exact}).
+Caution: Note that we cannot apply
+Categories, Lemma \ref{categories-lemma-adjoint-exact}
+directly as $R\lim K_n$ is not a limit in $D(\mathcal{O}_X)$.
+\end{proof}
+
+\begin{remark}
+\label{remark-discuss-derived-limit}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $(K_n)$ be an inverse
+system in $D(\mathcal{O}_X)$. Set $K = R\lim K_n$. For each $n$ and $m$
+let $\mathcal{H}^m_n = H^m(K_n)$ be the $m$th cohomology sheaf of
+$K_n$ and similarly set $\mathcal{H}^m = H^m(K)$. Let us denote
+$\underline{\mathcal{H}}^m_n$ the presheaf
+$$
+U \longmapsto \underline{\mathcal{H}}^m_n(U) = H^m(U, K_n)
+$$
+Similarly we set $\underline{\mathcal{H}}^m(U) = H^m(U, K)$.
+By Lemma \ref{lemma-sheafification-cohomology} we see that
+$\mathcal{H}^m_n$ is the sheafification of
+$\underline{\mathcal{H}}^m_n$ and $\mathcal{H}^m$ is the
+sheafification of $\underline{\mathcal{H}}^m$.
+Here is a diagram
+$$
+\xymatrix{
+K \ar@{=}[d] &
+\underline{\mathcal{H}}^m \ar[d] \ar[r] &
+\mathcal{H}^m \ar[d] \\
+R\lim K_n &
+\lim \underline{\mathcal{H}}^m_n \ar[r] &
+\lim \mathcal{H}^m_n
+}
+$$
+In general it may not be the case that
+$\lim \mathcal{H}^m_n$ is the sheafification of
+$\lim \underline{\mathcal{H}}^m_n$.
+If $U \subset X$ is an open, then we have short exact
+sequences
+\begin{equation}
+\label{equation-ses-Rlim-over-U}
+0 \to
+R^1\lim \underline{\mathcal{H}}^{m - 1}_n(U) \to
+\underline{\mathcal{H}}^m(U) \to
+\lim \underline{\mathcal{H}}^m_n(U) \to 0
+\end{equation}
+by Lemma \ref{lemma-RGamma-commutes-with-Rlim}.
+\end{remark}
+
+\noindent
+The following lemma applies to an inverse system of quasi-coherent
+modules with surjective transition maps on a scheme.
+
+\begin{lemma}
+\label{lemma-inverse-limit-is-derived-limit}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $(\mathcal{F}_n)$ be an
+inverse system of $\mathcal{O}_X$-modules. Let $\mathcal{B}$ be a set
+of opens of $X$. Assume
+\begin{enumerate}
+\item every open of $X$ has a covering whose members are elements of
+$\mathcal{B}$,
+\item $H^p(U, \mathcal{F}_n) = 0$ for $p > 0$ and $U \in \mathcal{B}$,
+\item the inverse system $\mathcal{F}_n(U)$ has vanishing $R^1\lim$
+for $U \in \mathcal{B}$.
+\end{enumerate}
+Then $R\lim \mathcal{F}_n = \lim \mathcal{F}_n$ and we have
+$H^p(U, \lim \mathcal{F}_n) = 0$ for $p > 0$ and $U \in \mathcal{B}$.
+\end{lemma}
+
+\begin{proof}
+Set $K_n = \mathcal{F}_n$ and $K = R\lim \mathcal{F}_n$. Using the notation
+of Remark \ref{remark-discuss-derived-limit} and assumption (2) we see that for
+$U \in \mathcal{B}$ we have $\underline{\mathcal{H}}_n^m(U) = 0$
+when $m \not = 0$ and $\underline{\mathcal{H}}_n^0(U) = \mathcal{F}_n(U)$.
+From Equation (\ref{equation-ses-Rlim-over-U}) and assumption (3)
+we see that $\underline{\mathcal{H}}^m(U) = 0$
+when $m \not = 0$ and equal to $\lim \mathcal{F}_n(U)$
+when $m = 0$. Sheafifying using (1) we find that
+$\mathcal{H}^m = 0$ when $m \not = 0$ and equal to
+$\lim \mathcal{F}_n$ when $m = 0$. Hence $K = \lim \mathcal{F}_n$.
+Since $H^m(U, K) = \underline{\mathcal{H}}^m(U) = 0$ for $m > 0$
+(see above) we see that the second assertion holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-derived-limit-injective}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $(K_n)$ be an
+inverse system in $D(\mathcal{O}_X)$. Let $x \in X$ and $m \in \mathbf{Z}$.
+Assume there exist an integer $n(x)$ and a fundamental system $\mathfrak{U}_x$
+of open neighbourhoods of $x$ such that for $U \in \mathfrak{U}_x$
+\begin{enumerate}
+\item $R^1\lim H^{m - 1}(U, K_n) = 0$, and
+\item $H^m(U, K_n) \to H^m(U, K_{n(x)})$ is injective
+for $n \geq n(x)$.
+\end{enumerate}
+Then the map on stalks $H^m(R\lim K_n)_x \to H^m(K_{n(x)})_x$ is injective.
+\end{lemma}
+
+\begin{proof}
+Let $\gamma$ be an element of $H^m(R\lim K_n)_x$ which maps to zero
+in $H^m(K_{n(x)})_x$. Since $H^m(R\lim K_n)$ is the sheafification
+of $U \mapsto H^m(U, R\lim K_n)$
+(by Lemma \ref{lemma-sheafification-cohomology})
+we can choose $U \in \mathfrak{U}_x$
+and an element $\tilde \gamma \in H^m(U, R\lim K_n)$ mapping to $\gamma$.
+Then $\tilde\gamma$ maps to $\tilde\gamma_{n(x)} \in H^m(U, K_{n(x)})$.
+Using that $H^m(K_{n(x)})$ is the sheafification of
+$U \mapsto H^m(U, K_{n(x)})$
+(by Lemma \ref{lemma-sheafification-cohomology} again)
+we see that after shrinking $U$ we may assume that $\tilde\gamma_{n(x)} = 0$.
+For this $U$ we consider the short exact sequence
+$$
+0 \to
+R^1\lim H^{m - 1}(U, K_n) \to H^m(U, R\lim K_n) \to
+\lim H^m(U, K_n) \to 0
+$$
+of Lemma \ref{lemma-RGamma-commutes-with-Rlim}.
+By assumption (1) the group on the left is zero and by
+assumption (2) the group on the right maps injectively
+into $H^m(U, K_{n(x)})$. We conclude $\tilde\gamma = 0$
+and hence $\gamma = 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-is-limit-per-point}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $E \in D(\mathcal{O}_X)$.
+Assume that for every $x \in X$ there exist
+a function $p(x, -) : \mathbf{Z} \to \mathbf{Z}$ and
+a fundamental system $\mathfrak{U}_x$ of open neighbourhoods of $x$
+such that
+$$
+H^p(U, H^{m - p}(E)) = 0 \text{ for }
+U \in \mathfrak{U}_x \text{ and } p > p(x, m)
+$$
+Then the canonical map $E \to R\lim \tau_{\geq -n} E$
+is an isomorphism in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Set $K_n = \tau_{\geq -n}E$ and $K = R\lim K_n$.
+The canonical map $E \to K$
+comes from the canonical maps $E \to K_n = \tau_{\geq -n}E$.
+We have to show that $E \to K$ induces an isomorphism
+$H^m(E) \to H^m(K)$ of cohomology sheaves. In the rest of the
+proof we fix $m$. If $n \geq -m$, then
+the map $E \to \tau_{\geq -n}E = K_n$ induces an isomorphism
+$H^m(E) \to H^m(K_n)$.
+To finish the proof it suffices to show that for every $x \in X$
+there exists an integer $n(x) \geq -m$ such that the map
+$H^m(K)_x \to H^m(K_{n(x)})_x$ is injective. Namely, then
+the composition
+$$
+H^m(E)_x \to H^m(K)_x \to H^m(K_{n(x)})_x
+$$
+is a bijection and the second arrow is injective, hence the
+first arrow is bijective. Set
+$$
n(x) = 1 + \max\{-m, p(x, m - 1) - m, -1 + p(x, m) - m, -2 + p(x, m + 1) - m\}
+$$
+so that in any case $n(x) \geq -m$. Claim: the maps
+$$
+H^{m - 1}(U, K_{n + 1}) \to H^{m - 1}(U, K_n)
+\quad\text{and}\quad
+H^m(U, K_{n + 1}) \to H^m(U, K_n)
+$$
+are isomorphisms for $n \geq n(x)$ and $U \in \mathfrak{U}_x$.
+The claim implies conditions
+(1) and (2) of Lemma \ref{lemma-cohomology-derived-limit-injective}
+are satisfied and hence implies the desired injectivity.
+Recall (Derived Categories, Remark
+\ref{derived-remark-truncation-distinguished-triangle})
+that we have distinguished triangles
+$$
+H^{-n - 1}(E)[n + 1] \to
+K_{n + 1} \to K_n \to H^{-n - 1}(E)[n + 2]
+$$
Looking at the associated long exact cohomology sequence the claim follows if
+$$
+H^{m + n}(U, H^{-n - 1}(E)),\quad
+H^{m + n + 1}(U, H^{-n - 1}(E)),\quad
+H^{m + n + 2}(U, H^{-n - 1}(E))
+$$
+are zero for $n \geq n(x)$ and $U \in \mathfrak{U}_x$.
+This follows from our choice of $n(x)$
+and the assumption in the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-is-limit-spaltenstein}
+\begin{reference}
+\cite[Proposition 3.13]{Spaltenstein}
+\end{reference}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $E \in D(\mathcal{O}_X)$.
+Assume that for every $x \in X$ there exist an integer $d_x \geq 0$ and
+a fundamental system $\mathfrak{U}_x$ of open neighbourhoods of $x$
+such that
+$$
+H^p(U, H^q(E)) = 0 \text{ for }
+U \in \mathfrak{U}_x,\ p > d_x, \text{ and }q < 0
+$$
+Then the canonical map $E \to R\lim \tau_{\geq -n} E$
+is an isomorphism in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-is-limit-per-point}
+with $p(x, m) = d_x + \max(0, m)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-is-limit}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $E \in D(\mathcal{O}_X)$.
+Assume there exist a function $p(-) : \mathbf{Z} \to \mathbf{Z}$
+and a set $\mathcal{B}$ of opens of $X$ such that
+\begin{enumerate}
+\item every open in $X$ has a covering whose members are
+elements of $\mathcal{B}$, and
+\item $H^p(U, H^{m - p}(E)) = 0$ for $p > p(m)$ and $U \in \mathcal{B}$.
+\end{enumerate}
+Then the canonical map $E \to R\lim \tau_{\geq -n} E$
+is an isomorphism in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Apply Lemma \ref{lemma-is-limit-per-point}
+with $p(x, m) = p(m)$ and
+$\mathfrak{U}_x = \{U \in \mathcal{B} \mid x \in U\}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-is-limit-dimension}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $E \in D(\mathcal{O}_X)$.
+Assume there exist an integer $d \geq 0$ and a basis $\mathcal{B}$ for the
+topology of $X$ such that
+$$
+H^p(U, H^q(E)) = 0 \text{ for }
+U \in \mathcal{B},\ p > d, \text{ and }q < 0
+$$
+Then the canonical map $E \to R\lim \tau_{\geq -n} E$
+is an isomorphism in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Apply Lemma \ref{lemma-is-limit-spaltenstein} with $d_x = d$
+and $\mathfrak{U}_x = \{U \in \mathcal{B} \mid x \in U\}$.
+\end{proof}
+
+\noindent
+The lemmas above can be used to compute cohomology
+in certain situations.
+
+\begin{lemma}
+\label{lemma-cohomology-over-U-trivial}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K$
+be an object of $D(\mathcal{O}_X)$.
+Let $\mathcal{B}$ be a set of opens of $X$. Assume
+\begin{enumerate}
+\item every open of $X$ has a covering whose members are
+elements of $\mathcal{B}$,
+\item $H^p(U, H^q(K)) = 0$ for all $p > 0$, $q \in \mathbf{Z}$, and
+$U \in \mathcal{B}$.
+\end{enumerate}
+Then $H^q(U, K) = H^0(U, H^q(K))$ for $q \in \mathbf{Z}$
+and $U \in \mathcal{B}$.
+\end{lemma}
+
+\begin{proof}
+Observe that $K = R\lim \tau_{\geq -n} K$ by
+Lemma \ref{lemma-is-limit-dimension} with $d = 0$.
+Let $U \in \mathcal{B}$. By Equation (\ref{equation-ses-Rlim-over-U})
+we get a short exact sequence
+$$
+0 \to R^1\lim H^{q - 1}(U, \tau_{\geq -n}K) \to
+H^q(U, K) \to \lim H^q(U, \tau_{\geq -n}K) \to 0
+$$
+Condition (2) implies
+$H^q(U, \tau_{\geq -n} K) = H^0(U, H^q(\tau_{\geq -n} K))$
+for all $q$ by using the spectral sequence of
+Example \ref{example-spectral-sequence}.
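In more detail, this spectral sequence has
$E_2^{p, q} = H^p(U, H^q(\tau_{\geq -n} K))$ converging to
$H^{p + q}(U, \tau_{\geq -n} K)$; since each sheaf
$H^q(\tau_{\geq -n} K)$ is either $H^q(K)$ or zero, condition (2)
gives $E_2^{p, q} = 0$ for $p > 0$, so the spectral sequence
degenerates and only the column $p = 0$ survives.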
+The spectral sequence converges because $\tau_{\geq -n}K$ is bounded
+below. If $n > -q$ then we have $H^q(\tau_{\geq -n}K) = H^q(K)$.
+Thus the systems on the left and the right of the displayed
+short exact sequence are eventually constant with values
+$H^0(U, H^{q - 1}(K))$ and $H^0(U, H^q(K))$. The lemma follows.
+\end{proof}
+
+\noindent
+Here is another case where we can describe the derived limit.
+
+\begin{lemma}
+\label{lemma-derived-limit-suitable-system}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $(K_n)$
+be an inverse system of objects of $D(\mathcal{O}_X)$.
+Let $\mathcal{B}$ be a set of opens of $X$. Assume
+\begin{enumerate}
+\item every open of $X$ has a covering whose members are
+elements of $\mathcal{B}$,
+\item for all $U \in \mathcal{B}$ and all $q \in \mathbf{Z}$ we have
+\begin{enumerate}
+\item $H^p(U, H^q(K_n)) = 0$ for $p > 0$,
+\item the inverse system $H^0(U, H^q(K_n))$ has vanishing $R^1\lim$.
+\end{enumerate}
+\end{enumerate}
+Then $H^q(R\lim K_n) = \lim H^q(K_n)$ for $q \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Set $K = R\lim K_n$. We will use notation as in
+Remark \ref{remark-discuss-derived-limit}. Let $U \in \mathcal{B}$.
+By Lemma \ref{lemma-cohomology-over-U-trivial} and (2)(a)
+we have $H^q(U, K_n) = H^0(U, H^q(K_n))$.
+Using that the functor $R\Gamma(U, -)$ commutes with
+derived limits we have
+$$
+H^q(U, K) = H^q(R\lim R\Gamma(U, K_n)) = \lim H^0(U, H^q(K_n))
+$$
+where the final equality follows from
+More on Algebra, Remark \ref{more-algebra-remark-compare-derived-limit}
and assumption (2)(b). Thus $H^q(U, K)$ is the inverse limit of
the sections of the sheaves $H^q(K_n)$ over $U$. Since
+$\lim H^q(K_n)$ is a sheaf we find using assumption (1) that $H^q(K)$,
+which is the sheafification of the presheaf $U \mapsto H^q(U, K)$,
+is equal to $\lim H^q(K_n)$. This proves the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Producing K-injective resolutions}
+\label{section-K-injective}
+
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{F}^\bullet$ be a complex of $\mathcal{O}_X$-modules.
+The category $\textit{Mod}(\mathcal{O}_X)$ has enough injectives, hence
+we can use
+Derived Categories, Lemma \ref{derived-lemma-special-inverse-system}
to produce a diagram
+$$
+\xymatrix{
+\ldots \ar[r] &
+\tau_{\geq -2}\mathcal{F}^\bullet \ar[r] \ar[d] &
+\tau_{\geq -1}\mathcal{F}^\bullet \ar[d] \\
+\ldots \ar[r] & \mathcal{I}_2^\bullet \ar[r] & \mathcal{I}_1^\bullet
+}
+$$
+in the category of complexes of $\mathcal{O}_X$-modules such that
+\begin{enumerate}
+\item the vertical arrows are quasi-isomorphisms,
+\item $\mathcal{I}_n^\bullet$ is a bounded below complex of injectives,
+\item the arrows $\mathcal{I}_{n + 1}^\bullet \to \mathcal{I}_n^\bullet$
+are termwise split surjections.
+\end{enumerate}
+The category of $\mathcal{O}_X$-modules has limits (they are computed
+on the level of presheaves), hence we can form the termwise limit
+$\mathcal{I}^\bullet = \lim_n \mathcal{I}_n^\bullet$. By
+Derived Categories, Lemmas
+\ref{derived-lemma-bounded-below-injectives-K-injective} and
+\ref{derived-lemma-limit-K-injectives}
+this is a K-injective complex. In general the canonical map
+\begin{equation}
+\label{equation-into-candidate-K-injective}
+\mathcal{F}^\bullet \to \mathcal{I}^\bullet
+\end{equation}
+may not be a quasi-isomorphism. In the following lemma we describe some
+conditions under which it is.
+
+\begin{lemma}
+\label{lemma-K-injective}
+In the situation described above.
+Denote $\mathcal{H}^m = H^m(\mathcal{F}^\bullet)$ the $m$th cohomology sheaf.
+Let $\mathcal{B}$ be a set of open subsets of $X$.
+Let $d \in \mathbf{N}$.
+Assume
+\begin{enumerate}
+\item every open in $X$ has a covering whose members are
+elements of $\mathcal{B}$,
+\item for every $U \in \mathcal{B}$ we have $H^p(U, \mathcal{H}^q) = 0$
for $p > d$ and $q < 0$\footnote{It suffices if for every $m$ there
exists a $p(m)$ such that $H^p(U, \mathcal{H}^{m - p}) = 0$ for
$p > p(m)$, see Lemma \ref{lemma-is-limit}.}.
+\end{enumerate}
+Then (\ref{equation-into-candidate-K-injective}) is a quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+By Derived Categories, Lemma \ref{derived-lemma-difficulty-K-injectives}
+it suffices to show that the canonical map
+$\mathcal{F}^\bullet \to R\lim \tau_{\geq -n} \mathcal{F}^\bullet$
+is an isomorphism. This is Lemma \ref{lemma-is-limit-dimension}.
+\end{proof}
+
+\noindent
+Here is a technical lemma about the cohomology sheaves of the inverse
+limit of a system of complexes of sheaves. In some sense this lemma
+is the wrong thing to try to prove as one should take derived
+limits and not actual inverse limits.
+
+\begin{lemma}
+\label{lemma-inverse-limit-complexes}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $(\mathcal{F}_n^\bullet)$
+be an inverse system of complexes of $\mathcal{O}_X$-modules.
+Let $m \in \mathbf{Z}$. Assume there exist a set $\mathcal{B}$
+of open subsets of $X$ and an integer $n_0$ such that
+\begin{enumerate}
+\item every open in $X$ has a covering whose members are
+elements of $\mathcal{B}$,
+\item for every $U \in \mathcal{B}$
+\begin{enumerate}
+\item the systems of abelian groups
+$\mathcal{F}_n^{m - 2}(U)$ and $\mathcal{F}_n^{m - 1}(U)$
+have vanishing $R^1\lim$ (for example these have the Mittag-Leffler
+condition),
+\item the system of abelian groups $H^{m - 1}(\mathcal{F}_n^\bullet(U))$
+has vanishing $R^1\lim$ (for example it has the Mittag-Leffler condition), and
+\item we have
+$H^m(\mathcal{F}_n^\bullet(U)) = H^m(\mathcal{F}_{n_0}^\bullet(U))$
+for all $n \geq n_0$.
+\end{enumerate}
+\end{enumerate}
+Then the maps
+$H^m(\mathcal{F}^\bullet) \to \lim H^m(\mathcal{F}_n^\bullet) \to
+H^m(\mathcal{F}_{n_0}^\bullet)$
+are isomorphisms of sheaves where
+$\mathcal{F}^\bullet = \lim \mathcal{F}_n^\bullet$ is the termwise
+inverse limit.
+\end{lemma}
+
+\begin{proof}
+Let $U \in \mathcal{B}$. Note that $H^m(\mathcal{F}^\bullet(U))$ is the
+cohomology of
+$$
+\lim_n \mathcal{F}_n^{m - 2}(U) \to
+\lim_n \mathcal{F}_n^{m - 1}(U) \to
+\lim_n \mathcal{F}_n^m(U) \to
+\lim_n \mathcal{F}_n^{m + 1}(U)
+$$
+in the third spot from the left. By assumptions (2)(a) and (2)(b)
+we may apply
+More on Algebra, Lemma \ref{more-algebra-lemma-apply-Mittag-Leffler-again}
+to conclude that
+$$
+H^m(\mathcal{F}^\bullet(U)) = \lim H^m(\mathcal{F}_n^\bullet(U))
+$$
+By assumption (2)(c) we conclude
+$$
+H^m(\mathcal{F}^\bullet(U)) = H^m(\mathcal{F}_n^\bullet(U))
+$$
+for all $n \geq n_0$. By assumption (1) we conclude that the sheafification of
+$U \mapsto H^m(\mathcal{F}^\bullet(U))$ is equal to the sheafification
+of $U \mapsto H^m(\mathcal{F}_n^\bullet(U))$ for all $n \geq n_0$.
+Thus the inverse system of sheaves $H^m(\mathcal{F}_n^\bullet)$ is
+constant for $n \geq n_0$ with value $H^m(\mathcal{F}^\bullet)$ which
+proves the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{{\v C}ech cohomology of unbounded complexes}
+\label{section-cech-cohomology-of-unbounded-complexes}
+
+\noindent
+The construction of Section \ref{section-cech-cohomology-of-complexes}
+isn't the ``correct'' one for unbounded complexes. The problem is that
+in the Stacks project we use direct sums in the totalization of a
+double complex and we would have to replace this by a product. Instead
of doing so, in this section we assume the covering is finite and
+we use the alternating {\v C}ech complex.
+
+\medskip\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let ${\mathcal F}^\bullet$ be a complex of presheaves of
+$\mathcal{O}_X$-modules. Let ${\mathcal U} : X = \bigcup_{i \in I} U_i$
+be a {\bf finite} open covering of $X$. Since the alternating
+{\v C}ech complex
+$\check{\mathcal{C}}_{alt}^\bullet(\mathcal{U}, \mathcal{F})$
+(Section \ref{section-alternating-cech})
+is functorial in the presheaf $\mathcal{F}$ we obtain a double complex
+$\check{\mathcal{C}}^\bullet_{alt}(\mathcal{U}, \mathcal{F}^\bullet)$.
+In this section we work with the associated total complex.
+The construction of
+$\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}({\mathcal U},
+{\mathcal F}^\bullet))$
+is functorial in ${\mathcal F}^\bullet$. As well there is a functorial
+transformation
+\begin{equation}
+\label{equation-global-sections-to-alternating-cech}
+\Gamma(X, {\mathcal F}^\bullet)
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}({\mathcal U},
+{\mathcal F}^\bullet))
+\end{equation}
+of complexes defined by the following rule: The section
+$s\in \Gamma(X, {\mathcal F}^n)$
+is mapped to the element $\alpha = \{\alpha_{i_0\ldots i_p}\}$
+with $\alpha_{i_0} = s|_{U_{i_0}}$ and $\alpha_{i_0\ldots i_p} = 0$
+for $p > 0$.
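With the usual sign conventions for the total complex, one checks
directly that this rule defines a map of complexes. Indeed, the
alternating {\v C}ech differential of $\alpha$ vanishes, as
$$
(\delta\alpha)_{i_0 i_1} =
\alpha_{i_1}|_{U_{i_0 i_1}} - \alpha_{i_0}|_{U_{i_0 i_1}} =
s|_{U_{i_0 i_1}} - s|_{U_{i_0 i_1}} = 0
$$
and the remaining component of the total differential applied to
$\alpha$ is $\{(\text{d}s)|_{U_{i_0}}\}$, which is exactly the image
of $\text{d}s \in \Gamma(X, \mathcal{F}^{n + 1})$.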
+
+\begin{lemma}
+\label{lemma-alternating-cech-complex-complex}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be
+a finite open covering. For a complex $\mathcal{F}^\bullet$
+of $\mathcal{O}_X$-modules there is a canonical map
+$$
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}(\mathcal{U}, \mathcal{F}^\bullet))
+\longrightarrow
+R\Gamma(X, \mathcal{F}^\bullet)
+$$
+functorial in $\mathcal{F}^\bullet$ and compatible with
+(\ref{equation-global-sections-to-alternating-cech}).
+\end{lemma}
+
+\begin{proof}
+Let ${\mathcal I}^\bullet$ be a K-injective complex whose terms
+are injective $\mathcal{O}_X$-modules.
+The map (\ref{equation-global-sections-to-alternating-cech}) for
+$\mathcal{I}^\bullet$ is a map
+$\Gamma(X, {\mathcal I}^\bullet) \to
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}({\mathcal U},
+{\mathcal I}^\bullet))$.
+This is a quasi-isomorphism of complexes of abelian groups
+as follows from
+Homology, Lemma \ref{homology-lemma-double-complex-gives-resolution}
+applied to the double complex
+$\check{\mathcal{C}}^\bullet_{alt}({\mathcal U},
+{\mathcal I}^\bullet)$ using
+Lemmas \ref{lemma-injective-trivial-cech} and \ref{lemma-alternating-usual}.
+Suppose ${\mathcal F}^\bullet \to {\mathcal I}^\bullet$ is a quasi-isomorphism
+of ${\mathcal F}^\bullet$ into a K-injective complex whose terms
+are injectives (Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}).
+Since $R\Gamma(X, {\mathcal F}^\bullet)$ is represented by the complex
+$\Gamma(X, {\mathcal I}^\bullet)$ we obtain the map of the lemma
+using
+$$
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}({\mathcal U},
+{\mathcal F}^\bullet))
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}({\mathcal U},
+{\mathcal I}^\bullet)).
+$$
+We omit the verification of functoriality and compatibilities.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-alternating-cech-complex-complex-ss}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\mathcal{U} : X = \bigcup_{i \in I} U_i$ be a finite open covering. Let
+$\mathcal{F}^\bullet$ be a complex of $\mathcal{O}_X$-modules.
+Let $\mathcal{B}$ be a set of open subsets of $X$. Assume
+\begin{enumerate}
+\item every open in $X$ has a covering whose members are
+elements of $\mathcal{B}$,
+\item we have $U_{i_0\ldots i_p} \in \mathcal{B}$ for all
+$i_0, \ldots, i_p \in I$,
+\item for every $U \in \mathcal{B}$ and $p > 0$ we have
+\begin{enumerate}
+\item $H^p(U, \mathcal{F}^q) = 0$,
+\item $H^p(U, \Coker(\mathcal{F}^{q - 1} \to \mathcal{F}^q)) = 0$, and
+\item $H^p(U, H^q(\mathcal{F})) = 0$.
+\end{enumerate}
+\end{enumerate}
+Then the map
+$$
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}(\mathcal{U}, \mathcal{F}^\bullet))
+\longrightarrow
+R\Gamma(X, \mathcal{F}^\bullet)
+$$
+of Lemma \ref{lemma-alternating-cech-complex-complex}
+is an isomorphism in $D(\textit{Ab})$.
+\end{lemma}
+
+\begin{proof}
+First assume $\mathcal{F}^\bullet$ is bounded below. In this case the map
+$$
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}(\mathcal{U}, \mathcal{F}^\bullet))
+\longrightarrow
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet))
+$$
+is a quasi-isomorphism by Lemma \ref{lemma-alternating-usual}.
+Namely, the map of double complexes
+$\check{\mathcal{C}}^\bullet_{alt}(\mathcal{U}, \mathcal{F}^\bullet) \to
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^\bullet)$
+induces an isomorphism between the first pages of the second spectral sequences
+associated to these complexes
+(by Homology, Lemma \ref{homology-lemma-ss-double-complex})
+and these spectral sequences converge
+(Homology, Lemma \ref{homology-lemma-first-quadrant-ss}).
Thus the conclusion in this case follows from
Lemma \ref{lemma-cech-complex-complex-computes} and assumption (3)(a).
+
+\medskip\noindent
+In general, by assumption (3)(c) we may choose a resolution
+$\mathcal{F}^\bullet \to \mathcal{I}^\bullet = \lim \mathcal{I}_n^\bullet$
+as in Lemma \ref{lemma-K-injective}.
+Then the map of the lemma becomes
+$$
+\lim_n
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}(\mathcal{U},
+\tau_{\geq -n}\mathcal{F}^\bullet))
+\longrightarrow
+\Gamma(X, \mathcal{I}^\bullet) =
+\lim_n \Gamma(X, \mathcal{I}_n^\bullet)
+$$
+Here the arrow is in the derived category, but the equality on the
+right holds on the level of complexes.
+Note that (3)(b) shows that $\tau_{\geq -n}\mathcal{F}^\bullet$
+is a bounded below complex satisfying the hypothesis of the lemma.
+Thus the case of bounded below complexes shows each of the maps
+$$
+\text{Tot}(\check{\mathcal{C}}^\bullet_{alt}(\mathcal{U},
+\tau_{\geq -n}\mathcal{F}^\bullet))
+\longrightarrow
+\Gamma(X, \mathcal{I}_n^\bullet)
+$$
is a quasi-isomorphism. The cohomologies of the complexes on the left
hand side in a given degree are eventually
constant (as the alternating {\v C}ech complex is finite).
+Hence the same is true on the right hand side.
+Thus the cohomology of the limit on the right hand side is
+this constant value by
+Homology, Lemma \ref{homology-lemma-apply-Mittag-Leffler-again}
+(or the stronger More on Algebra, Lemma
+\ref{more-algebra-lemma-apply-Mittag-Leffler-again})
+and we win.
+\end{proof}
+
+
+
+
+
+
+\section{Hom complexes}
+\label{section-hom-complexes}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\mathcal{L}^\bullet$ and $\mathcal{M}^\bullet$ be two complexes
+of $\mathcal{O}_X$-modules. We construct a complex
+of $\mathcal{O}_X$-modules
+$\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{M}^\bullet)$.
+Namely, for each $n$ we set
+$$
+\SheafHom^n(\mathcal{L}^\bullet, \mathcal{M}^\bullet) =
+\prod\nolimits_{n = p + q}
+\SheafHom_{\mathcal{O}_X}(\mathcal{L}^{-q}, \mathcal{M}^p)
+$$
+It is a good idea to think of $\SheafHom^n$ as the
+sheaf of $\mathcal{O}_X$-modules of all $\mathcal{O}_X$-linear
+maps from $\mathcal{L}^\bullet$ to $\mathcal{M}^\bullet$
(viewed as graded $\mathcal{O}_X$-modules) which are homogeneous
+of degree $n$. In this terminology, we define the differential by the rule
+$$
+\text{d}(f) =
+\text{d}_\mathcal{M} \circ f - (-1)^n f \circ \text{d}_\mathcal{L}
+$$
+for
+$f \in \SheafHom^n_{\mathcal{O}_X}(\mathcal{L}^\bullet, \mathcal{M}^\bullet)$.
+We omit the verification that $\text{d}^2 = 0$.
+This construction is a special case of
+Differential Graded Algebra, Example \ref{dga-example-category-complexes}.
+It follows immediately from the construction that we have
+\begin{equation}
+\label{equation-cohomology-hom-complex}
+H^n(\Gamma(U, \SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{M}^\bullet))) =
+\Hom_{K(\mathcal{O}_U)}(\mathcal{L}^\bullet, \mathcal{M}^\bullet[n])
+\end{equation}
+for all $n \in \mathbf{Z}$ and every open $U \subset X$.
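
\medskip\noindent
The omitted verification that $\text{d}^2 = 0$ is a direct computation
with the sign rule above: if $f$ is homogeneous of degree $n$, then
$\text{d}(f)$ is homogeneous of degree $n + 1$ and
\begin{align*}
\text{d}(\text{d}(f))
& =
\text{d}_\mathcal{M} \circ
(\text{d}_\mathcal{M} \circ f - (-1)^n f \circ \text{d}_\mathcal{L})
- (-1)^{n + 1}
(\text{d}_\mathcal{M} \circ f - (-1)^n f \circ \text{d}_\mathcal{L})
\circ \text{d}_\mathcal{L} \\
& =
- (-1)^n \text{d}_\mathcal{M} \circ f \circ \text{d}_\mathcal{L}
+ (-1)^n \text{d}_\mathcal{M} \circ f \circ \text{d}_\mathcal{L}
= 0
\end{align*}
using $\text{d}_\mathcal{M}^2 = 0$ and $\text{d}_\mathcal{L}^2 = 0$.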
+
+\begin{lemma}
+\label{lemma-compose}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Given complexes $\mathcal{K}^\bullet, \mathcal{L}^\bullet, \mathcal{M}^\bullet$
+of $\mathcal{O}_X$-modules there is an isomorphism
+$$
+\SheafHom^\bullet(\mathcal{K}^\bullet,
+\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{M}^\bullet))
+=
+\SheafHom^\bullet(\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}
+\mathcal{L}^\bullet), \mathcal{M}^\bullet)
+$$
+of complexes of $\mathcal{O}_X$-modules functorial in
+$\mathcal{K}^\bullet, \mathcal{L}^\bullet, \mathcal{M}^\bullet$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is proved in exactly the same way as
+More on Algebra, Lemma \ref{more-algebra-lemma-compose}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Given complexes
+$\mathcal{K}^\bullet, \mathcal{L}^\bullet, \mathcal{M}^\bullet$
+of $\mathcal{O}_X$-modules there is a canonical morphism
+$$
+\text{Tot}\left(
+\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{M}^\bullet)
+\otimes_{\mathcal{O}_X}
+\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{L}^\bullet)
+\right)
+\longrightarrow
+\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{M}^\bullet)
+$$
+of complexes of $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is proved in exactly the same way as
+More on Algebra, Lemma \ref{more-algebra-lemma-composition}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal-better}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Given complexes
+$\mathcal{K}^\bullet, \mathcal{L}^\bullet, \mathcal{M}^\bullet$
+of $\mathcal{O}_X$-modules there is a canonical morphism
+$$
+\text{Tot}\left(
+\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}
+\SheafHom^\bullet(\mathcal{M}^\bullet, \mathcal{L}^\bullet)
+\right)
+\longrightarrow
+\SheafHom^\bullet(\mathcal{M}^\bullet,
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet))
+$$
+of complexes of $\mathcal{O}_X$-modules functorial in all three complexes.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is proved in exactly the same way as
+More on Algebra, Lemma \ref{more-algebra-lemma-diagonal-better}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Given complexes
+$\mathcal{K}^\bullet, \mathcal{L}^\bullet$
+of $\mathcal{O}_X$-modules there is a canonical morphism
+$$
+\mathcal{K}^\bullet
+\longrightarrow
+\SheafHom^\bullet(\mathcal{L}^\bullet,
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet))
+$$
+of complexes of $\mathcal{O}_X$-modules functorial in both complexes.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is proved in exactly the same way as
+More on Algebra, Lemma \ref{more-algebra-lemma-diagonal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-evaluate-and-more}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Given complexes
+$\mathcal{K}^\bullet, \mathcal{L}^\bullet, \mathcal{M}^\bullet$
+of $\mathcal{O}_X$-modules there is a canonical morphism
+$$
+\text{Tot}(\SheafHom^\bullet(\mathcal{L}^\bullet,
+\mathcal{M}^\bullet) \otimes_{\mathcal{O}_X} \mathcal{K}^\bullet)
+\longrightarrow
+\SheafHom^\bullet(\SheafHom^\bullet(\mathcal{K}^\bullet,
+\mathcal{L}^\bullet), \mathcal{M}^\bullet)
+$$
+of complexes of $\mathcal{O}_X$-modules functorial in all three complexes.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is proved in exactly the same way as
+More on Algebra, Lemma \ref{more-algebra-lemma-evaluate-and-more}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RHom-into-K-injective}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{I}^\bullet$
+be a K-injective complex of $\mathcal{O}_X$-modules. Let
+$\mathcal{L}^\bullet$ be a complex of $\mathcal{O}_X$-modules.
+Then
+$$
+H^0(\Gamma(U, \SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet))) =
+\Hom_{D(\mathcal{O}_U)}(L|_U, M|_U)
+$$
for all open $U \subset X$, where $L$ and $M$ denote the objects of
$D(\mathcal{O}_X)$ represented by $\mathcal{L}^\bullet$ and
$\mathcal{I}^\bullet$ respectively.
+\end{lemma}
+
+\begin{proof}
+We have
+\begin{align*}
+H^0(\Gamma(U, \SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet)))
+& =
\Hom_{K(\mathcal{O}_U)}(\mathcal{L}^\bullet|_U, \mathcal{I}^\bullet|_U) \\
+& =
+\Hom_{D(\mathcal{O}_U)}(L|_U, M|_U)
+\end{align*}
+The first equality is (\ref{equation-cohomology-hom-complex}).
+The second equality is true because $\mathcal{I}^\bullet|_U$
+is K-injective by Lemma \ref{lemma-restrict-K-injective-to-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RHom-well-defined}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$(\mathcal{I}')^\bullet \to \mathcal{I}^\bullet$
+be a quasi-isomorphism of K-injective complexes of $\mathcal{O}_X$-modules.
+Let $(\mathcal{L}')^\bullet \to \mathcal{L}^\bullet$
+be a quasi-isomorphism of complexes of $\mathcal{O}_X$-modules.
+Then
+$$
+\SheafHom^\bullet(\mathcal{L}^\bullet, (\mathcal{I}')^\bullet)
+\longrightarrow
+\SheafHom^\bullet((\mathcal{L}')^\bullet, \mathcal{I}^\bullet)
+$$
+is a quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+Let $M$ be the object of $D(\mathcal{O}_X)$ represented by
+$\mathcal{I}^\bullet$ and $(\mathcal{I}')^\bullet$.
+Let $L$ be the object of $D(\mathcal{O}_X)$ represented by
+$\mathcal{L}^\bullet$ and $(\mathcal{L}')^\bullet$.
+By Lemma \ref{lemma-RHom-into-K-injective}
+we see that the sheaves
+$$
+H^0(\SheafHom^\bullet(\mathcal{L}^\bullet, (\mathcal{I}')^\bullet))
+\quad\text{and}\quad
+H^0(\SheafHom^\bullet((\mathcal{L}')^\bullet, \mathcal{I}^\bullet))
+$$
+are both equal to the sheaf associated to the presheaf
+$$
+U \longmapsto \Hom_{D(\mathcal{O}_U)}(L|_U, M|_U)
+$$
+Thus the map is a quasi-isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RHom-from-K-flat-into-K-injective}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{I}^\bullet$
+be a K-injective complex of $\mathcal{O}_X$-modules. Let
+$\mathcal{L}^\bullet$ be a K-flat complex of $\mathcal{O}_X$-modules.
+Then $\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet)$
+is a K-injective complex of $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+Namely, if $\mathcal{K}^\bullet$ is an acyclic complex of
+$\mathcal{O}_X$-modules, then
+\begin{align*}
+\Hom_{K(\mathcal{O}_X)}(\mathcal{K}^\bullet,
+\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet))
+& =
+H^0(\Gamma(X,
+\SheafHom^\bullet(\mathcal{K}^\bullet,
+\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet)))) \\
+& =
+H^0(\Gamma(X, \SheafHom^\bullet(\text{Tot}(
+\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet),
+\mathcal{I}^\bullet))) \\
+& =
+\Hom_{K(\mathcal{O}_X)}(
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet),
+\mathcal{I}^\bullet) \\
+& =
+0
+\end{align*}
+The first equality by (\ref{equation-cohomology-hom-complex}).
+The second equality by Lemma \ref{lemma-compose}.
+The third equality by (\ref{equation-cohomology-hom-complex}).
+The final equality because
+$\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet)$
+is acyclic because $\mathcal{L}^\bullet$ is K-flat
+(Definition \ref{definition-K-flat}) and because $\mathcal{I}^\bullet$
+is K-injective.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Internal hom in the derived category}
+\label{section-internal-hom}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $L, M$ be objects
+of $D(\mathcal{O}_X)$. We would like to construct an object
+$R\SheafHom(L, M)$ of $D(\mathcal{O}_X)$ such that for every third
+object $K$ of $D(\mathcal{O}_X)$ there exists a canonical bijection
+\begin{equation}
+\label{equation-internal-hom}
+\Hom_{D(\mathcal{O}_X)}(K, R\SheafHom(L, M))
+=
+\Hom_{D(\mathcal{O}_X)}(K \otimes_{\mathcal{O}_X}^\mathbf{L} L, M)
+\end{equation}
+Observe that this formula defines $R\SheafHom(L, M)$ up to unique
+isomorphism by the Yoneda lemma
+(Categories, Lemma \ref{categories-lemma-yoneda}).
+
+\medskip\noindent
+To construct such an object, choose a K-injective complex
+$\mathcal{I}^\bullet$ representing $M$ and any complex of
+$\mathcal{O}_X$-modules $\mathcal{L}^\bullet$ representing $L$.
+Then we set
+$$
+R\SheafHom(L, M) = \SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet)
+$$
+where the right hand side is the complex of $\mathcal{O}_X$-modules
+constructed in Section \ref{section-hom-complexes}.
+This is well defined by Lemma \ref{lemma-RHom-well-defined}.
+We get a functor
+$$
+D(\mathcal{O}_X)^{opp} \times D(\mathcal{O}_X) \longrightarrow D(\mathcal{O}_X),
+\quad
+(K, L) \longmapsto R\SheafHom(K, L)
+$$
+As a prelude to proving (\ref{equation-internal-hom})
+we compute the cohomology groups of $R\SheafHom(K, L)$.
+
+\begin{lemma}
+\label{lemma-section-RHom-over-U}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $L, M$ be objects
+of $D(\mathcal{O}_X)$. For every open $U$ we have
+$$
+H^0(U, R\SheafHom(L, M)) =
+\Hom_{D(\mathcal{O}_U)}(L|_U, M|_U)
+$$
+and in particular $H^0(X, R\SheafHom(L, M)) = \Hom_{D(\mathcal{O}_X)}(L, M)$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$ of
+$\mathcal{O}_X$-modules representing $M$ and a K-flat complex
+$\mathcal{L}^\bullet$ representing $L$. Then
+$\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet)$
+is K-injective by Lemma \ref{lemma-RHom-from-K-flat-into-K-injective}.
+Hence we can compute cohomology over $U$ by simply taking sections over $U$
+and the result follows from Lemma \ref{lemma-RHom-into-K-injective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-internal-hom}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, L, M$ be objects
+of $D(\mathcal{O}_X)$. With the construction as described above
+there is a canonical isomorphism
+$$
+R\SheafHom(K, R\SheafHom(L, M)) =
+R\SheafHom(K \otimes_{\mathcal{O}_X}^\mathbf{L} L, M)
+$$
+in $D(\mathcal{O}_X)$ functorial in $K, L, M$
+which recovers (\ref{equation-internal-hom}) by taking $H^0(X, -)$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$ representing
+$M$ and a K-flat complex of $\mathcal{O}_X$-modules $\mathcal{L}^\bullet$
+representing $L$. Let $\mathcal{K}^\bullet$ be any complex of
+$\mathcal{O}_X$-modules representing $K$. Then
+we have
+$$
+\SheafHom^\bullet(\mathcal{K}^\bullet,
+\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet))
+=
+\SheafHom^\bullet(
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet),
+\mathcal{I}^\bullet)
+$$
+by Lemma \ref{lemma-compose}.
+Note that the left hand side represents
+$R\SheafHom(K, R\SheafHom(L, M))$ (use
+Lemma \ref{lemma-RHom-from-K-flat-into-K-injective})
+and that the right hand side represents
+$R\SheafHom(K \otimes_{\mathcal{O}_X}^\mathbf{L} L, M)$.
+This proves the displayed formula of the lemma.
+Taking global sections and using Lemma \ref{lemma-section-RHom-over-U}
+we obtain (\ref{equation-internal-hom}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restriction-RHom-to-U}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, L$ be objects
+of $D(\mathcal{O}_X)$. The construction of $R\SheafHom(K, L)$
+commutes with restrictions to opens, i.e.,
+for every open $U$ we have
+$R\SheafHom(K|_U, L|_U) = R\SheafHom(K, L)|_U$.
+\end{lemma}
+
+\begin{proof}
+This is clear from the construction and
+Lemma \ref{lemma-restrict-K-injective-to-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RHom-triangulated}
+Let $(X, \mathcal{O}_X)$ be a ringed space. The bifunctor $R\SheafHom(- , -)$
+transforms distinguished triangles into distinguished triangles in both
+variables.
+\end{lemma}
+
+\begin{proof}
+This follows from the observation that the assignment
+$$
+(\mathcal{L}^\bullet, \mathcal{M}^\bullet) \longmapsto
+\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{M}^\bullet)
+$$
+transforms a termwise split short exact sequences of complexes in either
+variable into a termwise split short exact sequence. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-internal-hom-composition}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Given $K, L, M$ in
+$D(\mathcal{O}_X)$ there is a canonical morphism
+$$
+R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} R\SheafHom(K, L)
+\longrightarrow R\SheafHom(K, M)
+$$
+in $D(\mathcal{O}_X)$ functorial in $K, L, M$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $\mathcal{I}^\bullet$ representing $M$,
+a K-injective complex $\mathcal{J}^\bullet$ representing $L$, and
+any complex of $\mathcal{O}_X$-modules $\mathcal{K}^\bullet$ representing $K$.
+By Lemma \ref{lemma-composition} there is a map of complexes
+$$
+\text{Tot}\left(
+\SheafHom^\bullet(\mathcal{J}^\bullet, \mathcal{I}^\bullet)
+\otimes_{\mathcal{O}_X}
+\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{J}^\bullet)
+\right)
+\longrightarrow
+\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{I}^\bullet)
+$$
+The complexes of $\mathcal{O}_X$-modules
+$\SheafHom^\bullet(\mathcal{J}^\bullet, \mathcal{I}^\bullet)$,
+$\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{J}^\bullet)$, and
+$\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{I}^\bullet)$
+represent $R\SheafHom(L, M)$, $R\SheafHom(K, L)$, and $R\SheafHom(K, M)$.
+If we choose a K-flat complex $\mathcal{H}^\bullet$ and a quasi-isomorphism
+$\mathcal{H}^\bullet \to
+\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{J}^\bullet)$,
+then there is a map
+$$
+\text{Tot}\left(
+\SheafHom^\bullet(\mathcal{J}^\bullet, \mathcal{I}^\bullet)
+\otimes_{\mathcal{O}_X} \mathcal{H}^\bullet
+\right)
+\longrightarrow
+\text{Tot}\left(
+\SheafHom^\bullet(\mathcal{J}^\bullet, \mathcal{I}^\bullet)
+\otimes_{\mathcal{O}_X}
+\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{J}^\bullet)
+\right)
+$$
+whose source represents
+$R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} R\SheafHom(K, L)$.
+Composing the two displayed arrows gives the desired map. We omit the
+proof that the construction is functorial.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-internal-hom-diagonal-better}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Given $K, L, M$
+in $D(\mathcal{O}_X)$ there is a canonical morphism
+$$
+K \otimes_{\mathcal{O}_X}^\mathbf{L} R\SheafHom(M, L)
+\longrightarrow
+R\SheafHom(M, K \otimes_{\mathcal{O}_X}^\mathbf{L} L)
+$$
+in $D(\mathcal{O}_X)$ functorial in $K, L, M$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-flat complex $\mathcal{K}^\bullet$ representing $K$,
+and a K-injective complex $\mathcal{I}^\bullet$ representing $L$, and
+choose any complex of $\mathcal{O}_X$-modules $\mathcal{M}^\bullet$
+representing $M$. Choose a quasi-isomorphism
+$\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{I}^\bullet)
+\to \mathcal{J}^\bullet$
+where $\mathcal{J}^\bullet$ is K-injective. Then we use the map
+$$
+\text{Tot}\left(
+\mathcal{K}^\bullet \otimes_{\mathcal{O}_X}
+\SheafHom^\bullet(\mathcal{M}^\bullet, \mathcal{I}^\bullet)
+\right)
+\to
+\SheafHom^\bullet(\mathcal{M}^\bullet,
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{I}^\bullet))
+\to
+\SheafHom^\bullet(\mathcal{M}^\bullet, \mathcal{J}^\bullet)
+$$
+where the first map is the map from Lemma \ref{lemma-diagonal-better}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-internal-hom-diagonal}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Given $K, L$ in $D(\mathcal{O}_X)$
+there is a canonical morphism
+$$
+K \longrightarrow R\SheafHom(L, K \otimes_{\mathcal{O}_X}^\mathbf{L} L)
+$$
+in $D(\mathcal{O}_X)$ functorial in both $K$ and $L$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-flat complex $\mathcal{K}^\bullet$ representing $K$
+and any complex of $\mathcal{O}_X$-modules $\mathcal{L}^\bullet$
+representing $L$. Choose a K-injective complex $\mathcal{J}^\bullet$
+and a quasi-isomorphism
+$\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet)
+\to \mathcal{J}^\bullet$. Then we use
+$$
+\mathcal{K}^\bullet \to
+\SheafHom^\bullet(\mathcal{L}^\bullet,
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet))
+\to
+\SheafHom^\bullet(\mathcal{L}^\bullet, \mathcal{J}^\bullet)
+$$
+where the first map comes from Lemma \ref{lemma-diagonal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dual}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $L$ be an
+object of $D(\mathcal{O}_X)$. Set $L^\vee = R\SheafHom(L, \mathcal{O}_X)$.
+For $M$ in $D(\mathcal{O}_X)$ there is a canonical map
+\begin{equation}
+\label{equation-eval}
+M \otimes^\mathbf{L}_{\mathcal{O}_X} L^\vee
+\longrightarrow
+R\SheafHom(L, M)
+\end{equation}
+which induces a canonical map
+$$
+H^0(X, M \otimes^\mathbf{L}_{\mathcal{O}_X} L^\vee)
+\longrightarrow
+\Hom_{D(\mathcal{O}_X)}(L, M)
+$$
+functorial in $M$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+The map (\ref{equation-eval}) is a special case of
+Lemma \ref{lemma-internal-hom-composition}
+using the identification $M = R\SheafHom(\mathcal{O}_X, M)$.
+The displayed map is obtained from (\ref{equation-eval}) by applying
+$H^0(X, -)$ and using the identification
+$H^0(X, R\SheafHom(L, M)) = \Hom_{D(\mathcal{O}_X)}(L, M)$
+of Lemma \ref{lemma-section-RHom-over-U}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-internal-hom-evaluate}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, L, M$ be objects of
+$D(\mathcal{O}_X)$. There is a canonical morphism
+$$
+R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} K
+\longrightarrow
+R\SheafHom(R\SheafHom(K, L), M)
+$$
+in $D(\mathcal{O}_X)$ functorial in $K, L, M$.
+\end{lemma}
+
+\begin{proof}
+Choose
+a K-injective complex $\mathcal{I}^\bullet$ representing $M$,
+a K-injective complex $\mathcal{J}^\bullet$ representing $L$, and
+a K-flat complex $\mathcal{K}^\bullet$ representing $K$.
+The map is defined using the map
+$$
+\text{Tot}(\SheafHom^\bullet(\mathcal{J}^\bullet,
+\mathcal{I}^\bullet) \otimes_{\mathcal{O}_X} \mathcal{K}^\bullet)
+\longrightarrow
+\SheafHom^\bullet(\SheafHom^\bullet(\mathcal{K}^\bullet,
+\mathcal{J}^\bullet), \mathcal{I}^\bullet)
+$$
+of Lemma \ref{lemma-evaluate-and-more}. By our particular
+choice of complexes the left hand side represents
+$R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} K$
+and the right hand side represents
+$R\SheafHom(R\SheafHom(K, L), M)$. We omit the proof that
+this is functorial in all three objects of $D(\mathcal{O}_X)$.
+\end{proof}
+
+\begin{remark}
+\label{remark-tensor-internal-hom}
+Let $(X, \mathcal{O}_X)$ be a ringed space. For $K, K', M, M'$ in
+$D(\mathcal{O}_X)$ there is a canonical map
+$$
+R\SheafHom(K, K') \otimes_{\mathcal{O}_X}^\mathbf{L}
+R\SheafHom(M, M')
+\longrightarrow
+R\SheafHom(K \otimes_{\mathcal{O}_X}^\mathbf{L} M,
+K' \otimes_{\mathcal{O}_X}^\mathbf{L} M')
+$$
+Namely, by (\ref{equation-internal-hom}) this is the same thing as a map
+$$
+R\SheafHom(K, K') \otimes_{\mathcal{O}_X}^\mathbf{L}
+R\SheafHom(M, M') \otimes_{\mathcal{O}_X}^\mathbf{L}
+K \otimes_{\mathcal{O}_X}^\mathbf{L} M
+\longrightarrow
+K' \otimes_{\mathcal{O}_X}^\mathbf{L} M'
+$$
+For this we can first flip the middle two factors
+(with sign rules as in More on Algebra, Section
+\ref{more-algebra-section-sign-rules})
+and use the maps
+$$
+R\SheafHom(K, K') \otimes_{\mathcal{O}_X}^\mathbf{L} K \to K'
+\quad\text{and}\quad
+R\SheafHom(M, M') \otimes_{\mathcal{O}_X}^\mathbf{L} M \to M'
+$$
+from Lemma \ref{lemma-internal-hom-composition} when thinking
+of $K = R\SheafHom(\mathcal{O}_X, K)$ and similarly for
+$K'$, $M$, and $M'$.
+\end{remark}
+
+\begin{remark}
+\label{remark-projection-formula-for-internal-hom}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $K, L$ be objects of $D(\mathcal{O}_X)$. We claim there is a canonical map
+$$
+Rf_*R\SheafHom(L, K) \longrightarrow R\SheafHom(Rf_*L, Rf_*K)
+$$
+Namely, by (\ref{equation-internal-hom}) this is the same thing
+as a map
+$Rf_*R\SheafHom(L, K) \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*L \to Rf_*K$.
+For this we can use the composition
+$$
+Rf_*R\SheafHom(L, K) \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*L \to
+Rf_*(R\SheafHom(L, K) \otimes_{\mathcal{O}_X}^\mathbf{L} L) \to
+Rf_*K
+$$
+where the first arrow is the relative cup product
+(Remark \ref{remark-cup-product}) and the second arrow is $Rf_*$ applied
+to the canonical map
+$R\SheafHom(L, K) \otimes_{\mathcal{O}_X}^\mathbf{L} L \to K$
+coming from Lemma \ref{lemma-internal-hom-composition}
+(with $\mathcal{O}_X$ in one of the spots).
+\end{remark}
+
+\begin{remark}
+\label{remark-relative-cup-and-composition}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $K, M$ be objects of $D(\mathcal{O}_X)$.
+The diagram
+$$
+\xymatrix{
+Rf_*R\SheafHom_{\mathcal{O}_X}(K, M)
+\otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*M
+\ar[r] \ar[d] &
+Rf_*\left(R\SheafHom_{\mathcal{O}_X}(K, M)
+\otimes_{\mathcal{O}_X}^\mathbf{L} M\right)
+\ar[d] \\
+R\SheafHom_{\mathcal{O}_Y}(Rf_*K, Rf_*M) \otimes_{\mathcal{O}_Y}^\mathbf{L}
+Rf_*M \ar[r] &
+Rf_*M
+}
+$$
+is commutative. Here the left vertical arrow comes from
+Remark \ref{remark-projection-formula-for-internal-hom}.
+The top horizontal arrow is Remark \ref{remark-cup-product}.
+The other two arrows are instances of the map in
+Lemma \ref{lemma-internal-hom-composition} (with one of the entries
+replaced with $\mathcal{O}_X$ or $\mathcal{O}_Y$).
+\end{remark}
+
+\begin{remark}
+\label{remark-prepare-fancy-base-change}
+Let $h : X \to Y$ be a morphism of ringed spaces.
+Let $K, L$ be objects of $D(\mathcal{O}_Y)$. We claim there is a
+canonical map
+$$
+Lh^*R\SheafHom(K, L) \longrightarrow R\SheafHom(Lh^*K, Lh^*L)
+$$
+in $D(\mathcal{O}_X)$. Namely, by (\ref{equation-internal-hom})
+proved in Lemma \ref{lemma-internal-hom}
+such a map is the same thing as a map
+$$
+Lh^*R\SheafHom(K, L) \otimes^\mathbf{L} Lh^*K \longrightarrow Lh^*L
+$$
+The source of this arrow is $Lh^*(R\SheafHom(K, L) \otimes^\mathbf{L} K)$
+by Lemma \ref{lemma-pullback-tensor-product}
+hence it suffices to construct a canonical map
+$$
+R\SheafHom(K, L) \otimes^\mathbf{L} K \longrightarrow L.
+$$
+For this we take the arrow corresponding to
+$$
+\text{id} :
+R\SheafHom(K, L)
+\longrightarrow
+R\SheafHom(K, L)
+$$
+via (\ref{equation-internal-hom}).
+\end{remark}
+
+\begin{remark}
+\label{remark-fancy-base-change}
+Suppose that
+$$
+\xymatrix{
+X' \ar[r]_h \ar[d]_{f'} &
+X \ar[d]^f \\
+S' \ar[r]^g &
+S
+}
+$$
+is a commutative diagram of ringed spaces. Let $K, L$ be objects
+of $D(\mathcal{O}_X)$. We claim there exists a canonical base change
+map
+$$
+Lg^*Rf_*R\SheafHom(K, L)
+\longrightarrow
+R(f')_*R\SheafHom(Lh^*K, Lh^*L)
+$$
+in $D(\mathcal{O}_{S'})$. Namely, we take the map adjoint to
+the composition
+\begin{align*}
+L(f')^*Lg^*Rf_*R\SheafHom(K, L)
+& =
+Lh^*Lf^*Rf_*R\SheafHom(K, L) \\
+& \to
+Lh^*R\SheafHom(K, L) \\
+& \to
+R\SheafHom(Lh^*K, Lh^*L)
+\end{align*}
+where the first arrow uses the adjunction mapping
+$Lf^*Rf_* \to \text{id}$ and the second arrow is the canonical map
+constructed in Remark \ref{remark-prepare-fancy-base-change}.
+\end{remark}
+
+
+
+
+
+\section{Ext sheaves}
+\label{section-ext}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, L \in D(\mathcal{O}_X)$.
+Using the construction of the internal hom in the derived category we
+obtain well defined sheaves of $\mathcal{O}_X$-modules
+$$
+\SheafExt^n(K, L) = H^n(R\SheafHom(K, L))
+$$
+by taking the $n$th cohomology sheaf of the object $R\SheafHom(K, L)$
+of $D(\mathcal{O}_X)$. We will sometimes write
+$\SheafExt^n_{\mathcal{O}_X}(K, L)$ for this object.
+By Lemma \ref{lemma-section-RHom-over-U}
+we see that this $\SheafExt^n$-sheaf
+is the sheafification of the rule
+$$
+U \longmapsto \Ext^n_{D(\mathcal{O}_U)}(K|_U, L|_U)
+$$
+By Example \ref{example-spectral-sequence} there is always a spectral
+sequence
+$$
+E_2^{p, q} = H^p(X, \SheafExt^q(K, L))
+$$
+converging to $\Ext^{p + q}_{D(\mathcal{O}_X)}(K, L)$
+in favorable situations (for example if $L$ is bounded below and
+$K$ is bounded above).
+
+
+
+
+
+\section{Global derived hom}
+\label{section-global-RHom}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, L \in D(\mathcal{O}_X)$.
+Using the construction of the internal hom in the derived category we
+obtain a well defined object
+$$
+R\Hom_X(K, L) = R\Gamma(X, R\SheafHom(K, L))
+$$
+in $D(\Gamma(X, \mathcal{O}_X))$. We will sometimes write
+$R\Hom_{\mathcal{O}_X}(K, L)$ for this object.
+By Lemma \ref{lemma-section-RHom-over-U}
+we have
+$$
+H^0(R\Hom_X(K, L)) = \Hom_{D(\mathcal{O}_X)}(K, L),
+\quad
+H^p(R\Hom_X(K, L)) = \Ext_{D(\mathcal{O}_X)}^p(K, L)
+$$
+If $f : Y \to X$ is a morphism of ringed spaces, then there is
+a canonical map
+$$
+R\Hom_X(K, L) \longrightarrow R\Hom_Y(Lf^*K, Lf^*L)
+$$
+in $D(\Gamma(X, \mathcal{O}_X))$ by taking global sections of the map
+defined in Remark \ref{remark-prepare-fancy-base-change}.
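+
+\medskip\noindent
+Concretely, if $\mathcal{I}^\bullet$ is a K-injective complex of
+$\mathcal{O}_X$-modules representing $L$ and $\mathcal{K}^\bullet$ is any
+complex of $\mathcal{O}_X$-modules representing $K$, then
+$R\Hom_X(K, L)$ is represented by the complex
+$$
+\Hom^\bullet(\mathcal{K}^\bullet, \mathcal{I}^\bullet) =
+\Gamma(X, \SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{I}^\bullet))
+$$
+of $\Gamma(X, \mathcal{O}_X)$-modules, as
+$\SheafHom^\bullet(\mathcal{K}^\bullet, \mathcal{I}^\bullet)$
+is a K-injective complex and hence its global sections compute
+its derived global sections.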
+
+
+
+
+
+
+
+
+\section{Glueing complexes}
+\label{section-glueing-complexes}
+
+\noindent
+We can glue complexes! More precisely, in certain circumstances we can
+glue locally given objects of the derived category to a global object.
+We first prove some easy cases and then we'll prove the very general
+\cite[Theorem 3.2.4]{BBD}
+in the setting of topological spaces and open coverings.
+
+\begin{lemma}
+\label{lemma-glue}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $X = U \cup V$ be
+the union of two open subspaces of $X$. Suppose given
+\begin{enumerate}
+\item an object $A$ of $D(\mathcal{O}_U)$,
+\item an object $B$ of $D(\mathcal{O}_V)$, and
+\item an isomorphism $c : A|_{U \cap V} \to B|_{U \cap V}$.
+\end{enumerate}
+Then there exists an object $F$ of $D(\mathcal{O}_X)$
+and isomorphisms $f : F|_U \to A$, $g : F|_V \to B$ such
+that $c = g|_{U \cap V} \circ f^{-1}|_{U \cap V}$.
+Moreover, given
+\begin{enumerate}
+\item an object $E$ of $D(\mathcal{O}_X)$,
+\item a morphism $a : A \to E|_U$ of $D(\mathcal{O}_U)$,
+\item a morphism $b : B \to E|_V$ of $D(\mathcal{O}_V)$,
+\end{enumerate}
+such that
+$$
+a|_{U \cap V} = b|_{U \cap V} \circ c,
+$$
+there exists a morphism $F \to E$ in $D(\mathcal{O}_X)$
+whose restriction to $U$ is $a \circ f$
+and whose restriction to $V$ is $b \circ g$.
+\end{lemma}
+
+\begin{proof}
+Denote $j_U$, $j_V$, $j_{U \cap V}$ the corresponding open immersions.
+Choose a distinguished triangle
+$$
+F \to Rj_{U, *}A \oplus Rj_{V, *}B \to Rj_{U \cap V, *}(B|_{U \cap V})
+\to F[1]
+$$
+where the map $Rj_{V, *}B \to Rj_{U \cap V, *}(B|_{U \cap V})$ is the
+obvious one and where
+$Rj_{U, *}A \to Rj_{U \cap V, *}(B|_{U \cap V})$
+is the composition of
+$Rj_{U, *}A \to Rj_{U \cap V, *}(A|_{U \cap V})$
+with $Rj_{U \cap V, *}c$. Restricting to $U$ we obtain
+$$
+F|_U \to A \oplus (Rj_{V, *}B)|_U \to (Rj_{U \cap V, *}(B|_{U \cap V}))|_U
+\to F|_U[1]
+$$
+Denote $j : U \cap V \to U$. Compatibility of restriction to opens and
+cohomology shows that both
+$(Rj_{V, *}B)|_U$ and $(Rj_{U \cap V, *}(B|_{U \cap V}))|_U$
+are canonically isomorphic to $Rj_*(B|_{U \cap V})$.
+Hence the second arrow of the last displayed diagram has
+a section, and we conclude that the morphism $F|_U \to A$ is
+an isomorphism. Similarly, the morphism $F|_V \to B$ is an
+isomorphism. The existence of the morphism $F \to E$ follows
+from the Mayer-Vietoris sequence for $\Hom$, see
+Lemma \ref{lemma-mayer-vietoris-hom}.
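+In more detail, the relevant part of that sequence is
+$$
+\Hom_X(F, E) \to
+\Hom_U(F|_U, E|_U) \oplus \Hom_V(F|_V, E|_V) \to
+\Hom_{U \cap V}(F|_{U \cap V}, E|_{U \cap V})
+$$
+and the element $(a \circ f, b \circ g)$ is sent to
+$(a \circ f)|_{U \cap V} - (b \circ g)|_{U \cap V}$, which vanishes
+because $a|_{U \cap V} = b|_{U \cap V} \circ c$ and
+$c = g|_{U \cap V} \circ f^{-1}|_{U \cap V}$.
+Hence $(a \circ f, b \circ g)$ lifts to an element of $\Hom_X(F, E)$.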
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-and-glueing}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism
+of ringed spaces. Let $\mathcal{B}$ be a basis for the topology on $Y$.
+\begin{enumerate}
+\item Assume $K$ is in $D(\mathcal{O}_X)$ such that
+for $V \in \mathcal{B}$ we have $H^i(f^{-1}(V), K) = 0$ for $i < 0$.
+Then $Rf_*K$ has vanishing cohomology sheaves in negative degrees,
+$H^i(f^{-1}(V), K) = 0$ for $i < 0$ for all opens $V \subset Y$, and
+the rule $V \mapsto H^0(f^{-1}V, K)$ is a sheaf on $Y$.
+\item Assume $K, L$ are in $D(\mathcal{O}_X)$ such that
+for $V \in \mathcal{B}$ we have
+$\Ext^i(K|_{f^{-1}V}, L|_{f^{-1}V}) = 0$ for $i < 0$.
+Then $\Ext^i(K|_{f^{-1}V}, L|_{f^{-1}V}) = 0$ for $i < 0$
+for all opens $V \subset Y$ and
+the rule $V \mapsto \Hom(K|_{f^{-1}V}, L|_{f^{-1}V})$ is a sheaf on $Y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Lemma \ref{lemma-unbounded-describe-higher-direct-images} tells us
+$H^i(Rf_*K)$ is the sheaf associated to the presheaf
+$V \mapsto H^i(f^{-1}(V), K) = H^i(V, Rf_*K)$.
+The assumptions in (1) imply that $Rf_*K$ has vanishing cohomology
+sheaves in degrees $< 0$. We conclude that for any open $V \subset Y$
+the cohomology group $H^i(V, Rf_*K)$ is zero for $i < 0$ and is equal to
+$H^0(V, H^0(Rf_*K))$ for $i = 0$. This proves (1).
+
+\medskip\noindent
+To prove (2) apply (1) to the complex $R\SheafHom(K, L)$ using
+Lemma \ref{lemma-section-RHom-over-U} to do the translation.
+\end{proof}
+
+\begin{situation}
+\label{situation-locally-given}
+Let $(X, \mathcal{O}_X)$ be a ringed space. We are given
+\begin{enumerate}
+\item a collection of opens $\mathcal{B}$ of $X$,
+\item for $U \in \mathcal{B}$ an object $K_U$ in $D(\mathcal{O}_U)$,
+\item for $V \subset U$ with $V, U \in \mathcal{B}$ an isomorphism
+$\rho^U_V : K_U|_V \to K_V$ in $D(\mathcal{O}_V)$,
+\end{enumerate}
+such that whenever we have $W \subset V \subset U$ with $U, V, W$ in
+$\mathcal{B}$, then $\rho^U_W = \rho^V_W \circ \rho^U_V|_W$.
+\end{situation}
+
+\noindent
+We won't be able to prove anything about this without making more
+assumptions. An interesting case is where $\mathcal{B}$ is a basis
+for the topology on $X$. Another is the case where we have a morphism
+$f : X \to Y$ of topological spaces and the elements of $\mathcal{B}$
+are the inverse images of the elements of a basis for the topology of $Y$.
+
+\medskip\noindent
+In Situation \ref{situation-locally-given} a {\it solution}
+will be a pair $(K, \rho_U)$ where $K$ is an object of $D(\mathcal{O}_X)$
+and $\rho_U : K|_U \to K_U$, $U \in \mathcal{B}$
+are isomorphisms such that
+we have $\rho^U_V \circ \rho_U|_V = \rho_V$ for all $V \subset U$,
+$U, V \in \mathcal{B}$. In certain cases solutions are unique.
+
+\begin{lemma}
+\label{lemma-uniqueness}
+In Situation \ref{situation-locally-given} assume
+\begin{enumerate}
+\item $X = \bigcup_{U \in \mathcal{B}} U$ and
+for $U, V \in \mathcal{B}$ we have
+$U \cap V = \bigcup_{W \in \mathcal{B}, W \subset U \cap V} W$,
+\item for any $U \in \mathcal{B}$ we have $\Ext^i(K_U, K_U) = 0$
+for $i < 0$.
+\end{enumerate}
+If a solution $(K, \rho_U)$ exists, then it is unique up to unique isomorphism
+and moreover $\Ext^i(K, K) = 0$ for $i < 0$.
+\end{lemma}
+
+\begin{proof}
+Let $(K, \rho_U)$ and $(K', \rho'_U)$ be a pair of solutions.
+Let $f : X \to Y$ be the continuous map constructed
+in Topology, Lemma \ref{topology-lemma-create-map-from-subcollection}.
+Set $\mathcal{O}_Y = f_*\mathcal{O}_X$.
+Then $K, K'$ and $\mathcal{B}$ are as in
+Lemma \ref{lemma-vanishing-and-glueing} part (2).
+Hence we obtain the vanishing of negative exts for $K$ and we see that
+the rule
+$$
+V \longmapsto \Hom(K|_{f^{-1}V}, K'|_{f^{-1}V})
+$$
+is a sheaf on $Y$. As both $(K, \rho_U)$ and $(K', \rho'_U)$ are solutions
+the maps
+$$
+(\rho'_U)^{-1} \circ \rho_U : K|_U \longrightarrow K'|_U
+$$
+over $U = f^{-1}(f(U))$ agree on overlaps. Hence we get a unique global
+section of the sheaf above which defines the desired isomorphism
+$K \to K'$ compatible with all structure available.
+\end{proof}
+
+\begin{remark}
+\label{remark-uniqueness}
+With notation and assumptions as in Lemma \ref{lemma-uniqueness}.
+Suppose that $U, V \in \mathcal{B}$. Let $\mathcal{B}'$ be the set of
+elements of $\mathcal{B}$ contained in $U \cap V$. Then
+$$
+(\{K_{U'}\}_{U' \in \mathcal{B}'},
+\{\rho_{V'}^{U'}\}_{V' \subset U'\text{ with }U', V' \in \mathcal{B}'})
+$$
+is a system on the ringed space $U \cap V$
+satisfying the assumptions of Lemma \ref{lemma-uniqueness}.
+Moreover, both $(K_U|_{U \cap V}, \rho^U_{U'})$ and
+$(K_V|_{U \cap V}, \rho^V_{U'})$ are solutions to this system.
+By the lemma we find a unique isomorphism
+$$
+\rho_{U, V} : K_U|_{U \cap V} \longrightarrow K_V|_{U \cap V}
+$$
+such that for every $U' \subset U \cap V$, $U' \in \mathcal{B}$ the
+diagram
+$$
+\xymatrix{
+K_U|_{U'} \ar[rr]_{\rho_{U, V}|_{U'}} \ar[rd]_{\rho^U_{U'}} & &
+K_V|_{U'} \ar[ld]^{\rho^V_{U'}} \\
+& K_{U'}
+}
+$$
+commutes. Pick a third element $W \in \mathcal{B}$. We obtain isomorphisms
+$\rho_{U, W} : K_U|_{U \cap W} \to K_W|_{U \cap W}$ and
+$\rho_{V, W} : K_V|_{V \cap W} \to K_W|_{V \cap W}$ satisfying
+similar properties to those of $\rho_{U, V}$. Finally,
+we have
+$$
+\rho_{U, W}|_{U \cap V \cap W} =
+\rho_{V, W}|_{U \cap V \cap W} \circ \rho_{U, V}|_{U \cap V \cap W}
+$$
+This is true by the uniqueness in the lemma
+because both sides of the equality are the unique isomorphism
+compatible with the maps $\rho^U_{U''}$ and $\rho^W_{U''}$
+for $U'' \subset U \cap V \cap W$, $U'' \in \mathcal{B}$.
+Some minor details omitted.
+The collection $(K_U, \rho_{U, V})$ is a descent datum
+in the derived category for the open covering
+$\mathcal{U} : X = \bigcup_{U \in \mathcal{B}} U$ of $X$.
+In this language we are looking for ``effectiveness of the descent datum''
+when we look for the existence of a solution.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-solution-in-finite-case}
+In Situation \ref{situation-locally-given} assume
+\begin{enumerate}
+\item $X = U_1 \cup \ldots \cup U_n$ with $U_i \in \mathcal{B}$,
+\item for $U, V \in \mathcal{B}$ we have
+$U \cap V = \bigcup_{W \in \mathcal{B}, W \subset U \cap V} W$,
+\item for any $U \in \mathcal{B}$ we have $\Ext^i(K_U, K_U) = 0$
+for $i < 0$.
+\end{enumerate}
+Then a solution exists and is unique up to unique isomorphism.
+\end{lemma}
+
+\begin{proof}
+Uniqueness was seen in Lemma \ref{lemma-uniqueness}. We may prove the lemma
+by induction on $n$. The case $n = 1$ is immediate.
+
+\medskip\noindent
+The case $n = 2$.
+Consider the isomorphism
+$\rho_{U_1, U_2} : K_{U_1}|_{U_1 \cap U_2} \to K_{U_2}|_{U_1 \cap U_2}$
+constructed in Remark \ref{remark-uniqueness}.
+By Lemma \ref{lemma-glue} we obtain an object $K$ in $D(\mathcal{O}_X)$
+and isomorphisms $\rho_{U_1} : K|_{U_1} \to K_{U_1}$ and
+$\rho_{U_2} : K|_{U_2} \to K_{U_2}$ compatible with $\rho_{U_1, U_2}$.
+Take $U \in \mathcal{B}$. We will construct an isomorphism
+$\rho_U : K|_U \to K_U$ and we will leave it to the reader to verify
+that $(K, \rho_U)$ is a solution. Consider the set $\mathcal{B}'$
+of elements of $\mathcal{B}$ contained in either $U \cap U_1$ or contained in
+$U \cap U_2$. Then $(K_U, \rho^U_{U'})$ is a solution for the system
+$(\{K_{U'}\}_{U' \in \mathcal{B}'},
+\{\rho_{V'}^{U'}\}_{V' \subset U'\text{ with }U', V' \in \mathcal{B}'})$
+on the ringed space $U$.
+We claim that $(K|_U, \tau_{U'})$ is another solution where
+$\tau_{U'}$ for $U' \in \mathcal{B}'$ is chosen as follows:
+if $U' \subset U_1$ then we take the composition
+$$
+K|_{U'} \xrightarrow{\rho_{U_1}|_{U'}}
+K_{U_1}|_{U'} \xrightarrow{\rho^{U_1}_{U'}}
+K_{U'}
+$$
+and if $U' \subset U_2$ then we take the composition
+$$
+K|_{U'} \xrightarrow{\rho_{U_2}|_{U'}}
+K_{U_2}|_{U'} \xrightarrow{\rho^{U_2}_{U'}}
+K_{U'}.
+$$
+To verify this is a solution use the property of the map $\rho_{U_1, U_2}$
+described in Remark \ref{remark-uniqueness} and the compatibility of
+$\rho_{U_1}$ and $\rho_{U_2}$ with $\rho_{U_1, U_2}$. Having said this
+we apply Lemma \ref{lemma-uniqueness} to see that we obtain a unique
+isomorphism $\rho_U : K|_U \to K_U$ compatible with the maps $\tau_{U'}$ and
+$\rho^U_{U'}$ for $U' \in \mathcal{B}'$.
+
+\medskip\noindent
+The case $n > 2$. Consider the open subspace
+$X' = U_1 \cup \ldots \cup U_{n - 1}$ and let $\mathcal{B}'$ be the set of
+elements of $\mathcal{B}$ contained in $X'$. Then we find a system
+$(\{K_U\}_{U \in \mathcal{B}'}, \{\rho_V^U\}_{U, V \in \mathcal{B}'})$
+on the ringed space $X'$ to which we may apply our induction hypothesis.
+We find a solution $(K_{X'}, \rho^{X'}_U)$.
+Then we can consider the collection
+$\mathcal{B}^* = \mathcal{B} \cup \{X'\}$ of opens of $X$ and we see that
+we obtain a system
+$(\{K_U\}_{U \in \mathcal{B}^*},
+\{\rho_V^U\}_{V \subset U\text{ with }U, V \in \mathcal{B}^*})$.
+Note that this new system also satisfies condition (3)
+by Lemma \ref{lemma-uniqueness} applied to the solution $K_{X'}$.
+For this system we have $X = X' \cup U_n$.
+This reduces us to the case $n = 2$ we worked out above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glueing-increasing-union}
+Let $X$ be a ringed space. Let $E$ be a well ordered set and let
+$$
+X = \bigcup\nolimits_{\alpha \in E} W_\alpha
+$$
+be an open covering with $W_\alpha \subset W_{\alpha + 1}$
+and $W_\alpha = \bigcup_{\beta < \alpha} W_\beta$ if $\alpha$ is not
+a successor. Let $K_\alpha$ be an object of $D(\mathcal{O}_{W_\alpha})$
+with $\Ext^i(K_\alpha, K_\alpha) = 0$ for $i < 0$.
+Assume given isomorphisms
+$\rho_\beta^\alpha : K_\alpha|_{W_\beta} \to K_\beta$ in
+$D(\mathcal{O}_{W_\beta})$ for all $\beta < \alpha$ with
+$\rho_\gamma^\alpha = \rho_\gamma^\beta \circ \rho^\alpha_\beta|_{W_\gamma}$
+for $\gamma < \beta < \alpha$.
+Then there exists an object
+$K$ in $D(\mathcal{O}_X)$ and isomorphisms
+$K|_{W_\alpha} \to K_\alpha$ for $\alpha \in E$
+compatible with the isomorphisms $\rho_\beta^\alpha$.
+\end{lemma}
+
+\begin{proof}
+In this proof $\alpha, \beta, \gamma, \ldots$ represent elements of $E$.
+Choose a K-injective complex
+$I_\alpha^\bullet$ on $W_\alpha$ representing $K_\alpha$.
+For $\beta < \alpha$ denote $j_{\beta, \alpha} : W_\beta \to W_\alpha$
+the inclusion morphism. Using transfinite recursion we will construct for all
+$\beta < \alpha$ a map of complexes
+$$
+\tau_{\beta, \alpha} :
+(j_{\beta, \alpha})_!I_\beta^\bullet
+\longrightarrow
+I_\alpha^\bullet
+$$
+representing the adjoint to the inverse of the isomorphism
+$\rho^\alpha_\beta : K_\alpha|_{W_\beta} \to K_\beta$.
+Moreover, we will do this in such a way that for
+$\gamma < \beta < \alpha$ we have
+$$
+\tau_{\gamma, \alpha} = \tau_{\beta, \alpha} \circ
+(j_{\beta, \alpha})_!\tau_{\gamma, \beta}
+$$
+as maps of complexes. Namely, suppose already given $\tau_{\gamma, \beta}$
+composing correctly for all $\gamma < \beta < \alpha$.
+If $\alpha = \alpha' + 1$ is a successor, then we choose any map of complexes
+$$
+(j_{\alpha', \alpha})_!I_{\alpha'}^\bullet \to I_\alpha^\bullet
+$$
+which is adjoint to the inverse of the isomorphism
+$\rho^\alpha_{\alpha'} : K_\alpha|_{W_{\alpha'}} \to K_{\alpha'}$
+(possible because $I_\alpha^\bullet$ is K-injective)
+and for any $\beta < \alpha'$ we set
+$$
+\tau_{\beta, \alpha} = \tau_{\alpha', \alpha} \circ
+(j_{\alpha', \alpha})_!\tau_{\beta, \alpha'}
+$$
+If $\alpha$ is not a successor, then
+we can consider the complex on $W_\alpha$ given by
+$$
+C^\bullet = \colim_{\beta < \alpha} (j_{\beta, \alpha})_!I_\beta^\bullet
+$$
+(termwise colimit) where the transition maps of the sequence
+are given by the maps $\tau_{\beta', \beta}$ for
+$\beta' < \beta < \alpha$. We claim that $C^\bullet$
+represents $K_\alpha$. Namely, for $\beta < \alpha$ the restriction
+of the coprojection $(j_{\beta, \alpha})_!I_\beta^\bullet \to C^\bullet$
+gives a map
+$$
+\sigma_\beta : I_\beta^\bullet \longrightarrow C^\bullet|_{W_\beta}
+$$
+which is a quasi-isomorphism: if $x \in W_\beta$ then looking
+at stalks we get
+$$
+(C^\bullet)_x =
+\colim_{\beta' < \alpha}
+\left((j_{\beta', \alpha})_!I_{\beta'}^\bullet\right)_x =
+\colim_{\beta \leq \beta' < \alpha} (I_{\beta'}^\bullet)_x
+\longleftarrow
+(I_\beta^\bullet)_x
+$$
+which is a quasi-isomorphism. Here we used that taking stalks
+commutes with colimits, that filtered colimits are exact, and
+that the maps $(I_\beta^\bullet)_x \to (I_{\beta'}^\bullet)_x$
+are quasi-isomorphisms for $\beta \leq \beta' < \alpha$.
+Hence $(C^\bullet, \sigma_\beta^{-1})$ is a solution to the
+system $(\{K_\beta\}_{\beta < \alpha},
+\{\rho^\beta_{\beta'}\}_{\beta' < \beta < \alpha})$.
+Since $(K_\alpha, \rho^\alpha_\beta)$ is another solution
+we obtain a unique isomorphism $\sigma : K_\alpha \to C^\bullet$
+in $D(\mathcal{O}_{W_\alpha})$ compatible with all our maps, see
+Lemma \ref{lemma-solution-in-finite-case}
+(this is where we use the vanishing of negative ext groups).
+Choose a morphism $\tau : C^\bullet \to I_\alpha^\bullet$
+of complexes representing $\sigma$. Then we set
+$$
+\tau_{\beta, \alpha} = \tau|_{W_\beta} \circ \sigma_\beta
+$$
+to get the desired maps. Finally, we take $K$ to be the object of the derived
+category represented by the complex
+$$
+K^\bullet = \colim_{\alpha \in E} (W_\alpha \to X)_!I_\alpha^\bullet
+$$
+where the transition maps are given by our carefully constructed
+maps $\tau_{\beta, \alpha}$ for $\beta < \alpha$.
+Arguing exactly as above we see that for all $\alpha$
+the restriction of the coprojection determines an isomorphism
+$$
+K|_{W_\alpha} \longrightarrow K_\alpha
+$$
+compatible with the given maps $\rho^\alpha_\beta$.
+\end{proof}
+
+\noindent
+Using transfinite induction we can prove the result in the general case.
+
+\begin{theorem}[BBD gluing lemma]
+\label{theorem-glueing-bbd-general}
+\begin{reference}
+Special case of \cite[Theorem 3.2.4]{BBD}
+without boundedness assumption.
+\end{reference}
+In Situation \ref{situation-locally-given} assume
+\begin{enumerate}
+\item $X = \bigcup_{U \in \mathcal{B}} U$,
+\item for $U, V \in \mathcal{B}$ we have
+$U \cap V = \bigcup_{W \in \mathcal{B}, W \subset U \cap V} W$,
+\item for any $U \in \mathcal{B}$ we have $\Ext^i(K_U, K_U) = 0$
+for $i < 0$.
+\end{enumerate}
+Then there exists an object $K$ of $D(\mathcal{O}_X)$
+and isomorphisms $\rho_U : K|_U \to K_U$ in $D(\mathcal{O}_U)$ for
+$U \in \mathcal{B}$ such that $\rho^U_V \circ \rho_U|_V = \rho_V$
+for all $V \subset U$ with $U, V \in \mathcal{B}$.
+The pair $(K, \rho_U)$ is unique up to unique isomorphism.
+\end{theorem}
+
+\begin{proof}
+A pair $(K, \rho_U)$ is called a solution in the text above.
+The uniqueness follows from Lemma \ref{lemma-uniqueness}.
+If $X$ has a finite covering by elements of $\mathcal{B}$
+(for example if $X$ is quasi-compact), then the theorem
+is a consequence of Lemma \ref{lemma-solution-in-finite-case}.
+In the general case we argue in exactly the same manner,
+using transfinite induction and
+Lemma \ref{lemma-glueing-increasing-union}.
+
+\medskip\noindent
+First we use transfinite recursion to choose opens $W_\alpha \subset X$
+for any ordinal $\alpha$. Namely, we set $W_0 = \emptyset$.
+If $\alpha = \beta + 1$ is a successor, then either $W_\beta = X$
+and we set $W_\alpha = X$ or $W_\beta \not = X$ and we set
+$W_\alpha = W_\beta \cup U_\alpha$ where
+$U_\alpha \in \mathcal{B}$ is not contained in $W_\beta$.
+If $\alpha$ is a limit ordinal we set
+$W_\alpha = \bigcup_{\beta < \alpha} W_\beta$.
+Then for large enough $\alpha$ we have $W_\alpha = X$.
+Observe that for every $\alpha$ the open $W_\alpha$ is
+a union of elements of $\mathcal{B}$. Hence if
+$\mathcal{B}_\alpha = \{U \in \mathcal{B} \mid U \subset W_\alpha\}$, then
+$$
+S_\alpha = (\{K_U\}_{U \in \mathcal{B}_\alpha},
+\{\rho_V^U\}_{V \subset U\text{ with }U, V \in \mathcal{B}_\alpha})
+$$
+is a system as in Lemma \ref{lemma-uniqueness} on the ringed space $W_\alpha$.
+
+\medskip\noindent
+We will show by transfinite induction that for every $\alpha$
+the system $S_\alpha$ has a solution. This will prove the theorem
+as this system is the system given in the theorem for large $\alpha$.
+
+\medskip\noindent
+The case where $\alpha = \beta + 1$ is a successor ordinal.
+(This case was already treated in the proof of the lemma above
+but for clarity we repeat the argument.)
+Recall that $W_\alpha = W_\beta \cup U_\alpha$ for some
+$U_\alpha \in \mathcal{B}$ in this case.
+By induction hypothesis we have a solution
+$(K_{W_\beta}, \{\rho^{W_\beta}_U\}_{U \in \mathcal{B}_\beta})$
+for the system $S_\beta$.
+Then we can consider the collection
+$\mathcal{B}_\alpha^* = \mathcal{B}_\alpha \cup \{W_\beta\}$
+of opens of $W_\alpha$ and we see that we obtain a system
+$(\{K_U\}_{U \in \mathcal{B}_\alpha^*},
+\{\rho_V^U\}_{V \subset U\text{ with }U, V \in \mathcal{B}_\alpha^*})$.
+Note that this new system also satisfies condition (3)
+by Lemma \ref{lemma-uniqueness} applied to the solution $K_{W_\beta}$.
+For this system we have $W_\alpha = W_\beta \cup U_\alpha$.
+This reduces us to the case handled in
+Lemma \ref{lemma-solution-in-finite-case}.
+
+\medskip\noindent
+The case where $\alpha$ is a limit ordinal. Recall that
+$W_\alpha = \bigcup_{\beta < \alpha} W_\beta$ in this case.
+For $\beta < \alpha$ let
+$(K_{W_\beta}, \{\rho^{W_\beta}_U\}_{U \in \mathcal{B}_\beta})$
+be the solution for $S_\beta$.
+For $\gamma < \beta < \alpha$ the restriction
+$K_{W_\beta}|_{W_\gamma}$ endowed with the maps
+$\rho^{W_\beta}_U$, $U \in \mathcal{B}_\gamma$
+is a solution for $S_\gamma$. By uniqueness we get unique isomorphisms
+$\rho_{W_\gamma}^{W_\beta} : K_{W_\beta}|_{W_\gamma} \to K_{W_\gamma}$
+compatible with the maps $\rho^{W_\beta}_U$ and $\rho^{W_\gamma}_U$
+for $U \in \mathcal{B}_\gamma$. These maps compose in the correct manner,
+i.e., $\rho_{W_\delta}^{W_\gamma} \circ \rho_{W_\gamma}^{W_\beta}|_{W_\delta}
+= \rho_{W_\delta}^{W_\beta}$ for $\delta < \gamma < \beta < \alpha$.
+Thus we may apply Lemma \ref{lemma-glueing-increasing-union}
+(note that the vanishing of negative exts is true for
+$K_{W_\beta}$ by Lemma \ref{lemma-uniqueness} applied
+to the solution $K_{W_\beta}$)
+to obtain $K_{W_\alpha}$ and isomorphisms
+$$
+\rho_{W_\beta}^{W_\alpha} :
+K_{W_\alpha}|_{W_\beta}
+\longrightarrow
+K_{W_\beta}
+$$
+compatible with the maps $\rho_{W_\gamma}^{W_\beta}$ for
+$\gamma < \beta < \alpha$.
+
+\medskip\noindent
+To show that $K_{W_\alpha}$ is a solution we still need to construct the
+isomorphisms $\rho_U^{W_\alpha} : K_{W_\alpha}|_U \to K_U$ for
+$U \in \mathcal{B}_\alpha$ satisfying certain compatibilities.
+We choose $\rho_U^{W_\alpha}$ to be the unique map such that
+for any $\beta < \alpha$ and any $V \in \mathcal{B}_\beta$
+with $V \subset U$ the diagram
+$$
+\xymatrix{
+K_{W_\alpha}|_V \ar[r]_{\rho_U^{W_\alpha}|_V}
+\ar[d]_{\rho_{W_\beta}^{W_\alpha}|_V}
+& K_U|_V \ar[d]^{\rho_V^U} \\
+K_{W_\beta}|_V \ar[r]^{\rho_V^{W_\beta}}
+& K_V
+}
+$$
+commutes. This makes sense because
+$$
+(\{K_V\}_{V \subset U, V \in \mathcal{B}_\beta\text{ for some }\beta < \alpha},
+\{\rho_V^{V'}\}_{V \subset V'\text{ with }V, V' \subset U
+\text{ and }V, V' \in \mathcal{B}_\beta\text{ for some }\beta < \alpha})
+$$
+is a system as in Lemma \ref{lemma-uniqueness} on the ringed space $U$
+and because $(K_U, \rho^U_V)$ and
+$(K_{W_\alpha}|_U, \rho_V^{W_\beta}\circ \rho_{W_\beta}^{W_\alpha}|_V)$
+are both solutions for this system. This gives existence and uniqueness.
+We omit the proof that these
+maps satisfy the desired compatibilities (it is just bookkeeping).
+\end{proof}
+
+
+
+\section{Strictly perfect complexes}
+\label{section-strictly-perfect}
+
+\noindent
+Strictly perfect complexes of modules are used to define the notions
+of pseudo-coherent and perfect complexes later on. They are defined
+as follows.
+
+\begin{definition}
+\label{definition-strictly-perfect}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{E}^\bullet$ be a complex of $\mathcal{O}_X$-modules.
+We say $\mathcal{E}^\bullet$ is {\it strictly perfect}
+if $\mathcal{E}^i$ is zero for all but finitely many $i$ and
+$\mathcal{E}^i$ is a direct summand of a finite free
+$\mathcal{O}_X$-module for all $i$.
+\end{definition}
+
+\noindent
+Warning: Since we do not assume that $X$ is a locally ringed space,
+it may not be true that a direct summand of a finite free
+$\mathcal{O}_X$-module is finite locally free.
+
+\begin{lemma}
+\label{lemma-cone}
+The cone on a morphism of strictly perfect complexes is
+strictly perfect.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor}
+The total complex associated to the tensor product of two
+strictly perfect complexes is strictly perfect.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strictly-perfect-pullback}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$
+be a morphism of ringed spaces. If $\mathcal{F}^\bullet$ is a strictly
+perfect complex of $\mathcal{O}_Y$-modules, then
+$f^*\mathcal{F}^\bullet$ is a strictly perfect complex of
+$\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+The pullback of a finite free module is finite free. The functor
+$f^*$ is additive, hence preserves direct summands. The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-lift-map}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Given a solid diagram of $\mathcal{O}_X$-modules
+$$
+\xymatrix{
+\mathcal{E} \ar@{..>}[dr] \ar[r] & \mathcal{F} \\
+& \mathcal{G} \ar[u]_p
+}
+$$
+with $\mathcal{E}$ a direct summand of a finite free
+$\mathcal{O}_X$-module and $p$ surjective, a dotted arrow
+making the diagram commute exists locally on $X$.
+\end{lemma}
+
+\begin{proof}
+We may assume $\mathcal{E} = \mathcal{O}_X^{\oplus n}$ for some $n$.
+In this case finding the dotted arrow is equivalent to lifting the
+images of the basis elements in $\Gamma(X, \mathcal{F})$. This is
+locally possible by the characterization of surjective maps of
+sheaves (Sheaves, Section \ref{sheaves-section-exactness-points}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-homotopy}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+\begin{enumerate}
+\item Let $\alpha : \mathcal{E}^\bullet \to \mathcal{F}^\bullet$
+be a morphism of complexes of $\mathcal{O}_X$-modules
+with $\mathcal{E}^\bullet$ strictly perfect and $\mathcal{F}^\bullet$
+acyclic. Then $\alpha$ is locally on $X$ homotopic to zero.
+\item Let $\alpha : \mathcal{E}^\bullet \to \mathcal{F}^\bullet$
+be a morphism of complexes of $\mathcal{O}_X$-modules
+with $\mathcal{E}^\bullet$ strictly perfect, $\mathcal{E}^i = 0$
+for $i < a$, and $H^i(\mathcal{F}^\bullet) = 0$ for $i \geq a$.
+Then $\alpha$ is locally on $X$ homotopic to zero.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first statement follows from the second, hence we only prove (2).
+We will prove this by induction on the length of the complex
+$\mathcal{E}^\bullet$. If $\mathcal{E}^\bullet \cong \mathcal{E}[-n]$
+for some direct summand $\mathcal{E}$ of a finite free
+$\mathcal{O}_X$-module and integer $n \geq a$, then the result follows from
+Lemma \ref{lemma-local-lift-map} and the fact that
+$\mathcal{F}^{n - 1} \to \Ker(\mathcal{F}^n \to \mathcal{F}^{n + 1})$
+is surjective by the assumed vanishing of $H^n(\mathcal{F}^\bullet)$.
+If $\mathcal{E}^i$ is zero except for $i \in [a, b]$, then we have a
+split exact sequence of complexes
+$$
+0 \to \mathcal{E}^b[-b] \to \mathcal{E}^\bullet \to
+\sigma_{\leq b - 1}\mathcal{E}^\bullet \to 0
+$$
+which determines a distinguished triangle in
+$K(\mathcal{O}_X)$. Hence we obtain an exact sequence
+$$
+\Hom_{K(\mathcal{O}_X)}(
+\sigma_{\leq b - 1}\mathcal{E}^\bullet, \mathcal{F}^\bullet)
+\to
+\Hom_{K(\mathcal{O}_X)}(\mathcal{E}^\bullet, \mathcal{F}^\bullet)
+\to
+\Hom_{K(\mathcal{O}_X)}(\mathcal{E}^b[-b], \mathcal{F}^\bullet)
+$$
+by the axioms of triangulated categories. The composition
+$\mathcal{E}^b[-b] \to \mathcal{F}^\bullet$ is locally homotopic to
+zero, whence we may assume our map comes from an element in the
+left hand side of the displayed exact sequence above. This element
+is locally zero by the induction hypothesis.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-through-quasi-isomorphism}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Given a solid diagram of complexes of $\mathcal{O}_X$-modules
+$$
+\xymatrix{
+\mathcal{E}^\bullet \ar@{..>}[dr] \ar[r]_\alpha & \mathcal{F}^\bullet \\
+& \mathcal{G}^\bullet \ar[u]_f
+}
+$$
+with $\mathcal{E}^\bullet$ strictly perfect, $\mathcal{E}^j = 0$ for
+$j < a$ and $H^j(f)$ an isomorphism for $j > a$ and surjective for $j = a$,
+then a dotted arrow making the diagram commute up to homotopy
+exists locally on $X$.
+\end{lemma}
+
+\begin{proof}
+Our assumptions on $f$ imply the cone $C(f)^\bullet$ has vanishing
+cohomology sheaves in degrees $\geq a$.
+Hence Lemma \ref{lemma-local-homotopy} guarantees there is an open
+covering $X = \bigcup U_i$ such that the composition
+$\mathcal{E}^\bullet \to \mathcal{F}^\bullet \to C(f)^\bullet$
+is homotopic to zero over $U_i$. Since
+$$
+\mathcal{G}^\bullet \to \mathcal{F}^\bullet \to C(f)^\bullet \to
+\mathcal{G}^\bullet[1]
+$$
+restricts to a distinguished triangle in $K(\mathcal{O}_{U_i})$
+we see that we can lift $\alpha|_{U_i}$ up to homotopy to a map
+$\alpha_i : \mathcal{E}^\bullet|_{U_i} \to \mathcal{G}^\bullet|_{U_i}$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-actual}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{E}^\bullet$, $\mathcal{F}^\bullet$ be complexes
+of $\mathcal{O}_X$-modules with $\mathcal{E}^\bullet$ strictly perfect.
+\begin{enumerate}
+\item For any element
+$\alpha \in \Hom_{D(\mathcal{O}_X)}(\mathcal{E}^\bullet, \mathcal{F}^\bullet)$
+there exists an open covering $X = \bigcup U_i$ such that
+$\alpha|_{U_i}$ is given by a morphism of complexes
+$\alpha_i : \mathcal{E}^\bullet|_{U_i} \to \mathcal{F}^\bullet|_{U_i}$.
+\item Given a morphism of complexes
+$\alpha : \mathcal{E}^\bullet \to \mathcal{F}^\bullet$
+whose image in the group
+$\Hom_{D(\mathcal{O}_X)}(\mathcal{E}^\bullet, \mathcal{F}^\bullet)$
+is zero, there exists an open covering $X = \bigcup U_i$ such that
+$\alpha|_{U_i}$ is homotopic to zero.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1).
+By the construction of the derived category we can find a quasi-isomorphism
+$f : \mathcal{F}^\bullet \to \mathcal{G}^\bullet$ and a map of complexes
+$\beta : \mathcal{E}^\bullet \to \mathcal{G}^\bullet$ such that
+$\alpha = f^{-1}\beta$. Thus the result follows from
+Lemma \ref{lemma-lift-through-quasi-isomorphism}.
+We omit the proof of (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Rhom-strictly-perfect}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{E}^\bullet$, $\mathcal{F}^\bullet$ be complexes
+of $\mathcal{O}_X$-modules with $\mathcal{E}^\bullet$ strictly perfect.
+Then the internal hom $R\SheafHom(\mathcal{E}^\bullet, \mathcal{F}^\bullet)$
+is represented by the complex $\mathcal{H}^\bullet$ with terms
+$$
+\mathcal{H}^n =
+\bigoplus\nolimits_{n = p + q}
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{F}^p)
+$$
+and differential as described in Section \ref{section-hom-complexes}.
+\end{lemma}
+
+\begin{proof}
+Choose a quasi-isomorphism $\mathcal{F}^\bullet \to \mathcal{I}^\bullet$
+into a K-injective complex. Let $(\mathcal{H}')^\bullet$ be the
+complex with terms
+$$
+(\mathcal{H}')^n =
+\prod\nolimits_{n = p + q}
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{I}^p)
+$$
+which represents $R\SheafHom(\mathcal{E}^\bullet, \mathcal{F}^\bullet)$
+by the construction in Section \ref{section-internal-hom}. It suffices
+to show that the map
+$$
+\mathcal{H}^\bullet \longrightarrow (\mathcal{H}')^\bullet
+$$
+is a quasi-isomorphism. Given an open $U \subset X$ we have
+by inspection
+$$
+H^0(\mathcal{H}^\bullet(U)) =
+\Hom_{K(\mathcal{O}_U)}(\mathcal{E}^\bullet|_U, \mathcal{F}^\bullet|_U)
+\to
+H^0((\mathcal{H}')^\bullet(U)) =
+\Hom_{D(\mathcal{O}_U)}(\mathcal{E}^\bullet|_U, \mathcal{F}^\bullet|_U)
+$$
+By Lemma \ref{lemma-local-actual} the sheafification of
+$U \mapsto H^0(\mathcal{H}^\bullet(U))$
+is equal to the sheafification of
+$U \mapsto H^0((\mathcal{H}')^\bullet(U))$. A similar argument can be
+given for the other cohomology sheaves. Thus $\mathcal{H}^\bullet$
+is quasi-isomorphic to $(\mathcal{H}')^\bullet$ which proves the lemma.
+\end{proof}
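+
+\noindent
+Explicitly, up to the sign conventions of
+Section \ref{section-hom-complexes}, the differential of
+$\mathcal{H}^\bullet$ sends a local section $f$ of
+$\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{F}^p)$ with
+$n = p + q$ to
+$$
+d(f) = d_{\mathcal{F}} \circ f - (-1)^n f \circ d_{\mathcal{E}}
+$$
+In particular a cocycle of degree $0$ is exactly a morphism of
+complexes $\mathcal{E}^\bullet \to \mathcal{F}^\bullet$.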
+
+\begin{lemma}
+\label{lemma-Rhom-strictly-perfect-K-flat}
+In the situation of Lemma \ref{lemma-Rhom-strictly-perfect}
+if $\mathcal{F}^\bullet$ is K-flat, then $\mathcal{H}^\bullet$ is K-flat.
+\end{lemma}
+
+\begin{proof}
+Observe that $\mathcal{H}^\bullet$ is simply the hom complex
+$\SheafHom^\bullet(\mathcal{E}^\bullet, \mathcal{F}^\bullet)$
+since the boundedness of the strictly perfect complex
+$\mathcal{E}^\bullet$ ensures that the products in the definition
+of the hom complex turn into direct sums.
+Let $\mathcal{K}^\bullet$ be an acyclic complex of
+$\mathcal{O}_X$-modules. Consider the map
+$$
+\gamma :
+\text{Tot}(\mathcal{K}^\bullet \otimes
+\SheafHom^\bullet(\mathcal{E}^\bullet, \mathcal{F}^\bullet))
+\longrightarrow
+\SheafHom^\bullet(\mathcal{E}^\bullet,
+\text{Tot}(\mathcal{K}^\bullet \otimes \mathcal{F}^\bullet))
+$$
+of Lemma \ref{lemma-diagonal-better}. Since $\mathcal{F}^\bullet$ is K-flat,
+the complex $\text{Tot}(\mathcal{K}^\bullet \otimes \mathcal{F}^\bullet)$
+is acyclic, and hence by Lemma \ref{lemma-local-actual}
+(or Lemma \ref{lemma-Rhom-strictly-perfect} if you like)
+the target of $\gamma$ is acyclic too. Hence to prove the lemma it suffices to
+show that $\gamma$ is an isomorphism of complexes. To see this, we may argue
+by induction on the length of the complex $\mathcal{E}^\bullet$.
+If the length is $\leq 1$ then $\mathcal{E}^\bullet$
+is a direct summand of $\mathcal{O}_X^{\oplus n}[k]$ for some
+$n \geq 0$ and $k \in \mathbf{Z}$ and in this case the result
+follows by inspection. If the length is $> 1$, then we reduce
+to smaller length by considering the termwise split short exact
+sequence of complexes
+$$
+0 \to \sigma_{\geq a + 1} \mathcal{E}^\bullet \to
+\mathcal{E}^\bullet \to
+\sigma_{\leq a} \mathcal{E}^\bullet \to 0
+$$
+for a suitable $a \in \mathbf{Z}$, see
+Homology, Section \ref{homology-section-truncations}.
+Then $\gamma$ fits into a morphism of termwise split
+short exact sequences of complexes.
+By induction $\gamma$ is an isomorphism for
+$\sigma_{\geq a + 1} \mathcal{E}^\bullet$
+and $\sigma_{\leq a} \mathcal{E}^\bullet$ and hence the result for
+$\mathcal{E}^\bullet$ follows. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Rhom-complex-of-direct-summands-finite-free}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{E}^\bullet$, $\mathcal{F}^\bullet$ be complexes
+of $\mathcal{O}_X$-modules with
+\begin{enumerate}
+\item $\mathcal{F}^n = 0$ for $n \ll 0$,
+\item $\mathcal{E}^n = 0$ for $n \gg 0$, and
+\item $\mathcal{E}^n$ isomorphic to a direct summand of a finite
+free $\mathcal{O}_X$-module.
+\end{enumerate}
+Then the internal hom $R\SheafHom(\mathcal{E}^\bullet, \mathcal{F}^\bullet)$
+is represented by the complex $\mathcal{H}^\bullet$ with terms
+$$
+\mathcal{H}^n =
+\bigoplus\nolimits_{n = p + q}
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{F}^p)
+$$
+and differential as described in Section \ref{section-internal-hom}.
+\end{lemma}
+
+\begin{proof}
+Choose a quasi-isomorphism $\mathcal{F}^\bullet \to \mathcal{I}^\bullet$
+where $\mathcal{I}^\bullet$ is a bounded below complex of injectives.
+Note that $\mathcal{I}^\bullet$ is K-injective
+(Derived Categories, Lemma
+\ref{derived-lemma-bounded-below-injectives-K-injective}).
+Hence the construction in Section \ref{section-internal-hom}
+shows that
+$R\SheafHom(\mathcal{E}^\bullet, \mathcal{F}^\bullet)$ is
+represented by the complex $(\mathcal{H}')^\bullet$ with terms
+$$
+(\mathcal{H}')^n =
+\prod\nolimits_{n = p + q}
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{I}^p) =
+\bigoplus\nolimits_{n = p + q}
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{I}^p)
+$$
+(equality because there are only finitely many nonzero terms).
+Note that $\mathcal{H}^\bullet$ is the total complex associated to
+the double complex with terms
+$\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{F}^p)$
+and similarly for $(\mathcal{H}')^\bullet$.
+The natural map $(\mathcal{H}')^\bullet \to \mathcal{H}^\bullet$
+comes from a map of double complexes.
+Thus to show this map is a quasi-isomorphism, we may use the spectral
+sequence of a double complex
+(Homology, Lemma \ref{homology-lemma-first-quadrant-ss})
+$$
+{}'E_1^{p, q} =
+H^p(\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{F}^\bullet))
+$$
+converging to $H^{p + q}(\mathcal{H}^\bullet)$ and similarly for
+$(\mathcal{H}')^\bullet$. To finish the proof of the lemma it
+suffices to show that $\mathcal{F}^\bullet \to \mathcal{I}^\bullet$
+induces an isomorphism
+$$
+H^p(\SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{F}^\bullet))
+\longrightarrow
+H^p(\SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{I}^\bullet))
+$$
+on cohomology sheaves whenever $\mathcal{E}$ is a direct summand of a
+finite free $\mathcal{O}_X$-module. Since this is clear when $\mathcal{E}$
+is finite free the result follows.
+\end{proof}
+
+
+
+
+
+\section{Pseudo-coherent modules}
+\label{section-pseudo-coherent}
+
+\noindent
+In this section we discuss pseudo-coherent complexes.
+
+\begin{definition}
+\label{definition-pseudo-coherent}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{E}^\bullet$
+be a complex of $\mathcal{O}_X$-modules. Let $m \in \mathbf{Z}$.
+\begin{enumerate}
+\item We say $\mathcal{E}^\bullet$ is {\it $m$-pseudo-coherent}
+if there exists an open covering $X = \bigcup U_i$ and for each $i$
+a morphism of complexes
+$\alpha_i : \mathcal{E}_i^\bullet \to \mathcal{E}^\bullet|_{U_i}$
+where $\mathcal{E}_i^\bullet$ is strictly perfect on $U_i$ and
+$H^j(\alpha_i)$ is an isomorphism for $j > m$ and $H^m(\alpha_i)$
+is surjective.
+\item We say $\mathcal{E}^\bullet$ is {\it pseudo-coherent}
+if it is $m$-pseudo-coherent for all $m$.
+\item We say an object $E$ of $D(\mathcal{O}_X)$ is
+{\it $m$-pseudo-coherent} (resp.\ {\it pseudo-coherent})
+if and only if it can be represented by an $m$-pseudo-coherent
+(resp.\ pseudo-coherent) complex of $\mathcal{O}_X$-modules.
+\end{enumerate}
+\end{definition}
+
+\noindent
+If $X$ is quasi-compact, then an $m$-pseudo-coherent object
+of $D(\mathcal{O}_X)$ is in $D^-(\mathcal{O}_X)$. But this need
+not be the case if $X$ is not quasi-compact.
+
+\begin{lemma}
+\label{lemma-pseudo-coherent-independent-representative}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $E$ be an object
+of $D(\mathcal{O}_X)$.
+\begin{enumerate}
+\item If there exists an open covering $X = \bigcup U_i$,
+strictly perfect complexes $\mathcal{E}_i^\bullet$ on $U_i$, and
+maps $\alpha_i : \mathcal{E}_i^\bullet \to E|_{U_i}$ in
+$D(\mathcal{O}_{U_i})$ with $H^j(\alpha_i)$ an isomorphism for $j > m$
+and $H^m(\alpha_i)$ surjective, then $E$ is $m$-pseudo-coherent.
+\item If $E$ is $m$-pseudo-coherent, then any complex representing
+$E$ is $m$-pseudo-coherent.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}^\bullet$ be any complex representing $E$
+and let $X = \bigcup U_i$ and
+$\alpha_i : \mathcal{E}_i^\bullet \to E|_{U_i}$ be as in (1).
+We will show that $\mathcal{F}^\bullet$ is $m$-pseudo-coherent
+as a complex, which will prove (1) and (2) simultaneously.
+By Lemma \ref{lemma-local-actual}
+we can, after refining the open covering $X = \bigcup U_i$,
+represent the maps $\alpha_i$ by maps of complexes
+$\alpha_i : \mathcal{E}_i^\bullet \to \mathcal{F}^\bullet|_{U_i}$.
+By assumption the maps
+$H^j(\alpha_i)$ are isomorphisms for $j > m$ and $H^m(\alpha_i)$
+is surjective, whence $\mathcal{F}^\bullet$ is
+$m$-pseudo-coherent.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pseudo-coherent-pullback}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$
+be a morphism of ringed spaces. Let $E$ be an object of
+$D(\mathcal{O}_Y)$. If $E$ is $m$-pseudo-coherent,
+then $Lf^*E$ is $m$-pseudo-coherent.
+\end{lemma}
+
+\begin{proof}
+Represent $E$ by a complex $\mathcal{E}^\bullet$ of $\mathcal{O}_Y$-modules
+and choose an open covering $Y = \bigcup V_i$ and
+$\alpha_i : \mathcal{E}_i^\bullet \to \mathcal{E}^\bullet|_{V_i}$
+as in Definition \ref{definition-pseudo-coherent}.
+Set $U_i = f^{-1}(V_i)$.
+By Lemma \ref{lemma-pseudo-coherent-independent-representative}
+it suffices to show that $Lf^*\mathcal{E}^\bullet|_{U_i}$ is
+$m$-pseudo-coherent.
+Choose a distinguished triangle
+$$
+\mathcal{E}_i^\bullet \to
+\mathcal{E}^\bullet|_{V_i} \to
+C \to
+\mathcal{E}_i^\bullet[1]
+$$
+The assumption on $\alpha_i$ means exactly that the cohomology sheaves
+$H^j(C)$ are zero for all $j \geq m$. Denote $f_i : U_i \to V_i$
+the restriction of $f$. Note that $Lf^*\mathcal{E}^\bullet|_{U_i} =
+Lf_i^*(\mathcal{E}^\bullet|_{V_i})$. Applying $Lf_i^*$ we obtain
+the distinguished triangle
+$$
+Lf_i^*\mathcal{E}_i^\bullet \to
+Lf_i^*\mathcal{E}^\bullet|_{V_i} \to
+Lf_i^*C \to
+Lf_i^*\mathcal{E}_i^\bullet[1]
+$$
+By the construction of $Lf_i^*$ as a left derived functor we see that
+$H^j(Lf_i^*C) = 0$ for $j \geq m$ (by the dual of Derived Categories, Lemma
+\ref{derived-lemma-negative-vanishing}). Hence $H^j(Lf_i^*\alpha_i)$ is an
+isomorphism for $j > m$ and $H^m(Lf_i^*\alpha_i)$ is surjective.
+On the other hand,
+$Lf_i^*\mathcal{E}_i^\bullet = f_i^*\mathcal{E}_i^\bullet$
+is strictly perfect by Lemma \ref{lemma-strictly-perfect-pullback}.
+Thus we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cone-pseudo-coherent}
+Let $(X, \mathcal{O}_X)$ be a ringed space and $m \in \mathbf{Z}$.
+Let $(K, L, M, f, g, h)$ be a distinguished triangle in $D(\mathcal{O}_X)$.
+\begin{enumerate}
+\item If $K$ is $(m + 1)$-pseudo-coherent and $L$ is $m$-pseudo-coherent
+then $M$ is $m$-pseudo-coherent.
+\item If $K$ and $M$ are $m$-pseudo-coherent, then $L$ is $m$-pseudo-coherent.
+\item If $L$ is $(m + 1)$-pseudo-coherent and $M$
+is $m$-pseudo-coherent, then $K$ is $(m + 1)$-pseudo-coherent.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Choose an open covering $X = \bigcup U_i$ and
+maps $\alpha_i : \mathcal{K}_i^\bullet \to K|_{U_i}$ in $D(\mathcal{O}_{U_i})$
+with $\mathcal{K}_i^\bullet$ strictly perfect and $H^j(\alpha_i)$
+isomorphisms for $j > m + 1$ and surjective for $j = m + 1$.
+We may replace $\mathcal{K}_i^\bullet$ by
+$\sigma_{\geq m + 1}\mathcal{K}_i^\bullet$
+and hence we may assume that $\mathcal{K}_i^j = 0$
+for $j < m + 1$. After refining the open covering we may choose
+maps $\beta_i : \mathcal{L}_i^\bullet \to L|_{U_i}$ in $D(\mathcal{O}_{U_i})$
+with $\mathcal{L}_i^\bullet$ strictly perfect such that
+$H^j(\beta_i)$ is an isomorphism for $j > m$ and
+surjective for $j = m$. By
+Lemma \ref{lemma-lift-through-quasi-isomorphism}
+we can, after refining the covering, find maps of complexes
+$\gamma_i : \mathcal{K}_i^\bullet \to \mathcal{L}_i^\bullet$
+such that the diagrams
+$$
+\xymatrix{
+K|_{U_i} \ar[r] & L|_{U_i} \\
+\mathcal{K}_i^\bullet \ar[u]^{\alpha_i} \ar[r]^{\gamma_i} &
+\mathcal{L}_i^\bullet \ar[u]_{\beta_i}
+}
+$$
+are commutative in $D(\mathcal{O}_{U_i})$ (this requires representing the
+maps $\alpha_i$, $\beta_i$ and $K|_{U_i} \to L|_{U_i}$
+by actual maps of complexes; some details omitted).
+The cone $C(\gamma_i)^\bullet$ is strictly perfect (Lemma \ref{lemma-cone}).
+The commutativity of the diagram implies that there exists a morphism
+of distinguished triangles
+$$
+(\mathcal{K}_i^\bullet, \mathcal{L}_i^\bullet, C(\gamma_i)^\bullet)
+\longrightarrow
+(K|_{U_i}, L|_{U_i}, M|_{U_i}).
+$$
+It follows from the induced map on long exact cohomology sequences and
+Homology, Lemmas \ref{homology-lemma-four-lemma} and
+\ref{homology-lemma-five-lemma}
+that $C(\gamma_i)^\bullet \to M|_{U_i}$ induces an isomorphism
+on cohomology in degrees $> m$ and a surjection in degree $m$.
+Hence $M$ is $m$-pseudo-coherent by
+Lemma \ref{lemma-pseudo-coherent-independent-representative}.
+
+\medskip\noindent
+Assertions (2) and (3) follow from (1) by rotating the distinguished
+triangle.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-pseudo-coherent}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, L$ be objects
+of $D(\mathcal{O}_X)$.
+\begin{enumerate}
+\item If $K$ is $n$-pseudo-coherent and $H^i(K) = 0$ for $i > a$
+and $L$ is $m$-pseudo-coherent and $H^j(L) = 0$ for $j > b$, then
+$K \otimes_{\mathcal{O}_X}^\mathbf{L} L$ is $t$-pseudo-coherent
+with $t = \max(m + a, n + b)$.
+\item If $K$ and $L$ are pseudo-coherent, then
+$K \otimes_{\mathcal{O}_X}^\mathbf{L} L$ is pseudo-coherent.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). By replacing $X$ by the members of an open covering
+we may assume there exist strictly perfect complexes $\mathcal{K}^\bullet$
+and $\mathcal{L}^\bullet$ and maps
+$\alpha : \mathcal{K}^\bullet \to K$ and
+$\beta : \mathcal{L}^\bullet \to L$ with $H^i(\alpha)$ an isomorphism
+for $i > n$ and surjective for $i = n$, and with
+$H^i(\beta)$ an isomorphism for $i > m$ and surjective for $i = m$.
+Then the map
+$$
+\alpha \otimes^\mathbf{L} \beta :
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{L}^\bullet)
+\to K \otimes_{\mathcal{O}_X}^\mathbf{L} L
+$$
+induces isomorphisms on cohomology sheaves in degree $i$ for
+$i > t$ and a surjection for $i = t$. This follows from the
+spectral sequence of tors (details omitted).
+
+\medskip\noindent
+Proof of (2). We may first replace $X$ by the members of an open
+covering to reduce to the case that $K$ and $L$ are bounded above.
+Then the statement follows immediately from case (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-summands-pseudo-coherent}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $m \in \mathbf{Z}$.
+If $K \oplus L$ is $m$-pseudo-coherent (resp.\ pseudo-coherent)
+in $D(\mathcal{O}_X)$ so are $K$ and $L$.
+\end{lemma}
+
+\begin{proof}
+Assume that $K \oplus L$ is $m$-pseudo-coherent.
+After replacing $X$ by the members of an open covering we may
+assume $K \oplus L \in D^-(\mathcal{O}_X)$, hence
+$L \in D^-(\mathcal{O}_X)$.
+Note that there is a distinguished triangle
+$$
+(K \oplus L, K \oplus L, L \oplus L[1]) =
+(K, K, 0) \oplus (L, L, L \oplus L[1])
+$$
+see
+Derived Categories, Lemma \ref{derived-lemma-direct-sum-triangles}.
+By
+Lemma \ref{lemma-cone-pseudo-coherent}
+we see that $L \oplus L[1]$ is $m$-pseudo-coherent.
+Hence also $L[1] \oplus L[2]$ is $m$-pseudo-coherent.
+By induction $L[n] \oplus L[n + 1]$ is $m$-pseudo-coherent.
+Since $L$ is bounded above we see that $L[n]$ is $m$-pseudo-coherent
+for large $n$. Hence working backwards, using the distinguished triangles
+$$
+(L[n], L[n] \oplus L[n - 1], L[n - 1])
+$$
+we conclude that $L[n - 1], L[n - 2], \ldots, L$ are $m$-pseudo-coherent
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complex-pseudo-coherent-modules}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $m \in \mathbf{Z}$. Let $\mathcal{F}^\bullet$ be a (locally) bounded
+above complex of $\mathcal{O}_X$-modules such that
+$\mathcal{F}^i$ is $(m - i)$-pseudo-coherent for all $i$.
+Then $\mathcal{F}^\bullet$ is $m$-pseudo-coherent.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: use Lemma \ref{lemma-cone-pseudo-coherent} and truncations
+as in the proof of
+More on Algebra, Lemma \ref{more-algebra-lemma-complex-pseudo-coherent-modules}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-pseudo-coherent}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $m \in \mathbf{Z}$. Let
+$E$ be an object of $D(\mathcal{O}_X)$. If $E$ is (locally) bounded above
+and $H^i(E)$ is $(m - i)$-pseudo-coherent for all $i$, then
+$E$ is $m$-pseudo-coherent.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: use Lemma \ref{lemma-cone-pseudo-coherent} and truncations
+as in the proof of
+More on Algebra, Lemma \ref{more-algebra-lemma-cohomology-pseudo-coherent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-cohomology}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $K$ be an object of $D(\mathcal{O}_X)$.
+Let $m \in \mathbf{Z}$.
+\begin{enumerate}
+\item If $K$ is $m$-pseudo-coherent and $H^i(K) = 0$
+for $i > m$, then $H^m(K)$ is a finite type $\mathcal{O}_X$-module.
+\item If $K$ is $m$-pseudo-coherent and $H^i(K) = 0$
+for $i > m + 1$, then $H^{m + 1}(K)$ is a finitely presented
+$\mathcal{O}_X$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We may work locally on $X$. Hence we may assume there exists
+a strictly perfect complex $\mathcal{E}^\bullet$ and a map
+$\alpha : \mathcal{E}^\bullet \to K$ which induces
+an isomorphism on cohomology in degrees $> m$ and a surjection in degree $m$.
+It suffices to prove the result for $\mathcal{E}^\bullet$.
+Let $n$ be the largest integer such that $\mathcal{E}^n \not = 0$.
+If $n = m$, then $H^m(\mathcal{E}^\bullet)$ is a quotient of
+$\mathcal{E}^n$ and the result is clear.
+If $n > m$, then $\mathcal{E}^{n - 1} \to \mathcal{E}^n$ is surjective as
+$H^n(\mathcal{E}^\bullet) = 0$. By Lemma \ref{lemma-local-lift-map}
+we can locally find a section of this surjection and write
+$\mathcal{E}^{n - 1} = \mathcal{E}' \oplus \mathcal{E}^n$.
+Hence it suffices to prove the result for the complex
+$(\mathcal{E}')^\bullet$ which is the same as $\mathcal{E}^\bullet$
+except that it has $\mathcal{E}'$ in degree $n - 1$ and $0$ in degree $n$.
+We win by induction on $n$.
+
+\medskip\noindent
+Proof of (2). We may work locally on $X$. Hence we may assume there exists
+a strictly perfect complex $\mathcal{E}^\bullet$ and a map
+$\alpha : \mathcal{E}^\bullet \to K$ which induces
+an isomorphism on cohomology in degrees $> m$ and a surjection in degree $m$.
+As in the proof of (1) we can reduce to the case that $\mathcal{E}^i = 0$
+for $i > m + 1$. Then we see that
+$H^{m + 1}(K) \cong H^{m + 1}(\mathcal{E}^\bullet) =
+\Coker(\mathcal{E}^m \to \mathcal{E}^{m + 1})$
+which is of finite presentation.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-n-pseudo-module}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}$ be a sheaf
+of $\mathcal{O}_X$-modules.
+\begin{enumerate}
+\item $\mathcal{F}$ viewed as an object of $D(\mathcal{O}_X)$ is
+$0$-pseudo-coherent if and only if $\mathcal{F}$ is a finite type
+$\mathcal{O}_X$-module, and
+\item $\mathcal{F}$ viewed as an object of $D(\mathcal{O}_X)$ is
+$(-1)$-pseudo-coherent if and only if $\mathcal{F}$ is an
+$\mathcal{O}_X$-module of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Use Lemma \ref{lemma-finite-cohomology}
+to prove the implications in one direction and
+Lemma \ref{lemma-cohomology-pseudo-coherent} for the other.
+\end{proof}
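+
+\noindent
+To illustrate (1): if $\mathcal{F}$ is of finite type, then locally
+on $X$ there is a surjection
+$\alpha : \mathcal{O}_U^{\oplus n} \to \mathcal{F}|_U$. Viewing
+$\mathcal{O}_U^{\oplus n}$ as a strictly perfect complex placed in
+degree $0$, the map $\alpha$ has $H^0(\alpha)$ surjective and
+$H^j(\alpha)$ an isomorphism (between zero sheaves) for $j > 0$,
+which is exactly the condition that $\mathcal{F}$ is
+$0$-pseudo-coherent.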
+
+
+
+
+
+\section{Tor dimension}
+\label{section-tor}
+
+\noindent
+In this section we take a closer look at resolutions by flat modules.
+
+\begin{definition}
+\label{definition-tor-amplitude}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $E$ be an object of $D(\mathcal{O}_X)$.
+Let $a, b \in \mathbf{Z}$ with $a \leq b$.
+\begin{enumerate}
+\item We say $E$ has {\it tor-amplitude in $[a, b]$}
+if $H^i(E \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}) = 0$
+for all $\mathcal{O}_X$-modules $\mathcal{F}$ and all $i \not \in [a, b]$.
+\item We say $E$ has {\it finite tor dimension}
+if it has tor-amplitude in $[a, b]$ for some $a, b$.
+\item We say $E$ {\it locally has finite tor dimension}
+if there exists an open covering $X = \bigcup U_i$ such that
+$E|_{U_i}$ has finite tor dimension for all $i$.
+\end{enumerate}
+An $\mathcal{O}_X$-module $\mathcal{F}$ has {\it tor dimension $\leq d$}
+if $\mathcal{F}[0]$ viewed as an object of $D(\mathcal{O}_X)$ has
+tor-amplitude in $[-d, 0]$.
+\end{definition}
+
+\noindent
+Note that if $E$ as in the definition
+has finite tor dimension, then $E$ is an object of
+$D^b(\mathcal{O}_X)$ as can be seen by taking $\mathcal{F} = \mathcal{O}_X$
+in the definition above.
+
+\begin{lemma}
+\label{lemma-last-one-flat}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{E}^\bullet$ be a bounded above complex of flat
+$\mathcal{O}_X$-modules with tor-amplitude in $[a, b]$.
+Then $\Coker(d_{\mathcal{E}^\bullet}^{a - 1})$ is a flat
+$\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+As $\mathcal{E}^\bullet$ is a bounded above complex of flat modules we see that
+$\mathcal{E}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F} =
+\mathcal{E}^\bullet \otimes_{\mathcal{O}_X}^{\mathbf{L}} \mathcal{F}$
+for any $\mathcal{O}_X$-module $\mathcal{F}$.
+Hence for every $\mathcal{O}_X$-module $\mathcal{F}$ the sequence
+$$
+\mathcal{E}^{a - 2} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{E}^{a - 1} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{E}^a \otimes_{\mathcal{O}_X} \mathcal{F}
+$$
+is exact in the middle. Since
+$\mathcal{E}^{a - 2} \to \mathcal{E}^{a - 1} \to \mathcal{E}^a \to
+\Coker(d^{a - 1}) \to 0$
+is a flat resolution this implies that
+$\text{Tor}_1^{\mathcal{O}_X}(\Coker(d^{a - 1}), \mathcal{F}) = 0$
+for all $\mathcal{O}_X$-modules $\mathcal{F}$. This means that
+$\Coker(d^{a - 1})$ is flat, see Lemma \ref{lemma-flat-tor-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tor-amplitude}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $E$ be an object of $D(\mathcal{O}_X)$.
+Let $a, b \in \mathbf{Z}$ with $a \leq b$. The following are equivalent
+\begin{enumerate}
+\item $E$ has tor-amplitude in $[a, b]$.
+\item $E$ is represented by a complex
+$\mathcal{E}^\bullet$ of flat $\mathcal{O}_X$-modules with
+$\mathcal{E}^i = 0$ for $i \not \in [a, b]$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If (2) holds, then we may compute
+$E \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F} =
+\mathcal{E}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F}$
+and it is clear that (1) holds.
+
+\medskip\noindent
+Assume that (1) holds. We may represent $E$ by a bounded above complex
+of flat $\mathcal{O}_X$-modules $\mathcal{K}^\bullet$, see
+Section \ref{section-flat}.
+Let $n$ be the largest integer such that $\mathcal{K}^n \not = 0$.
+If $n > b$, then $\mathcal{K}^{n - 1} \to \mathcal{K}^n$ is surjective as
+$H^n(\mathcal{K}^\bullet) = 0$. As $\mathcal{K}^n$ is flat we see that
+$\Ker(\mathcal{K}^{n - 1} \to \mathcal{K}^n)$ is flat
+(Modules, Lemma \ref{modules-lemma-flat-ses}).
+Hence we may replace $\mathcal{K}^\bullet$ by
+$\tau_{\leq n - 1}\mathcal{K}^\bullet$. Thus, by induction on $n$, we
+reduce to the case that $\mathcal{K}^\bullet$ is a complex of flat
+$\mathcal{O}_X$-modules with $\mathcal{K}^i = 0$ for $i > b$.
+
+\medskip\noindent
+Set $\mathcal{E}^\bullet = \tau_{\geq a}\mathcal{K}^\bullet$.
+Everything is clear except that $\mathcal{E}^a$ is flat, which follows
+immediately from Lemma \ref{lemma-last-one-flat}
+and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tor-amplitude-pullback}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of ringed
+spaces. Let $E$ be an object of $D(\mathcal{O}_Y)$.
+If $E$ has tor amplitude in $[a, b]$, then $Lf^*E$ has tor amplitude in
+$[a, b]$.
+\end{lemma}
+
+\begin{proof}
+Assume $E$ has tor amplitude in $[a, b]$. By
+Lemma \ref{lemma-tor-amplitude}
+we can represent $E$ by a complex
+$\mathcal{E}^\bullet$ of flat $\mathcal{O}_Y$-modules with
+$\mathcal{E}^i = 0$ for $i \not \in [a, b]$. Then
+$Lf^*E$ is represented by $f^*\mathcal{E}^\bullet$.
+By Modules, Lemma \ref{modules-lemma-pullback-flat}
+the modules $f^*\mathcal{E}^i$ are flat.
+Thus by Lemma \ref{lemma-tor-amplitude}
+we conclude that $Lf^*E$ has tor amplitude in $[a, b]$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tor-amplitude-stalk}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $E$ be an object of $D(\mathcal{O}_X)$.
+Let $a, b \in \mathbf{Z}$ with $a \leq b$. The following are equivalent
+\begin{enumerate}
+\item $E$ has tor-amplitude in $[a, b]$.
+\item for every $x \in X$ the object $E_x$ of $D(\mathcal{O}_{X, x})$
+has tor-amplitude in $[a, b]$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Taking stalks at $x$ is the same thing as pulling back by the
+morphism of ringed spaces $(x, \mathcal{O}_{X, x}) \to (X, \mathcal{O}_X)$.
+Hence the implication (1) $\Rightarrow$ (2) follows from
+Lemma \ref{lemma-tor-amplitude-pullback}.
+For the converse, note that taking stalks commutes with tensor
+products (Modules, Lemma \ref{modules-lemma-stalk-tensor-product}).
+Hence
+$$
+(E \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F})_x =
+E_x \otimes_{\mathcal{O}_{X, x}}^\mathbf{L} \mathcal{F}_x
+$$
+On the other hand, taking stalks is exact, so
+$$
+H^i(E \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F})_x =
+H^i((E \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F})_x) =
+H^i(E_x \otimes_{\mathcal{O}_{X, x}}^\mathbf{L} \mathcal{F}_x)
+$$
+and we can check whether
+$H^i(E \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F})$ is zero
+by checking whether all of its stalks are zero
+(Modules, Lemma \ref{modules-lemma-abelian}). Thus (2) implies (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cone-tor-amplitude}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $(K, L, M, f, g, h)$ be a distinguished
+triangle in $D(\mathcal{O}_X)$. Let $a, b \in \mathbf{Z}$.
+\begin{enumerate}
+\item If $K$ has tor-amplitude in $[a + 1, b + 1]$ and
+$L$ has tor-amplitude in $[a, b]$ then $M$ has
+tor-amplitude in $[a, b]$.
+\item If $K$ and $M$ have tor-amplitude in $[a, b]$, then
+$L$ has tor-amplitude in $[a, b]$.
+\item If $L$ has tor-amplitude in $[a + 1, b + 1]$
+and $M$ has tor-amplitude in $[a, b]$, then
+$K$ has tor-amplitude in $[a + 1, b + 1]$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This just follows from the long exact cohomology sequence
+associated to a distinguished triangle and the fact that
+$- \otimes_{\mathcal{O}_X}^{\mathbf{L}} \mathcal{F}$
+preserves distinguished triangles.
+The easiest one to prove is (2) and the others follow from it by
+translation.
+\end{proof}
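+
+\noindent
+To illustrate the hint, here is a sketch of case (2). For an
+$\mathcal{O}_X$-module $\mathcal{F}$, applying
+$- \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}$ to the distinguished
+triangle produces a distinguished triangle, whence a long exact sequence
+$$
+\ldots \to
+H^i(K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}) \to
+H^i(L \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}) \to
+H^i(M \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}) \to \ldots
+$$
+of cohomology sheaves. If $K$ and $M$ have tor-amplitude in $[a, b]$,
+then the outer terms vanish for $i \not \in [a, b]$, hence so does the
+middle term, which gives (2).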
+
+\begin{lemma}
+\label{lemma-tensor-tor-amplitude}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, L$ be objects of
+$D(\mathcal{O}_X)$. If $K$ has tor-amplitude in $[a, b]$ and
+$L$ has tor-amplitude in $[c, d]$ then $K \otimes_{\mathcal{O}_X}^\mathbf{L} L$
+has tor amplitude in $[a + c, b + d]$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: use the spectral sequence for tors.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-summands-tor-amplitude}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $a, b \in \mathbf{Z}$.
+For objects $K$, $L$ of $D(\mathcal{O}_X)$, if $K \oplus L$ has tor
+amplitude in $[a, b]$, then so do $K$ and $L$.
+\end{lemma}
+
+\begin{proof}
+Clear from the fact that the Tor functors are additive.
+\end{proof}
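+
+\noindent
+Concretely: for any $\mathcal{O}_X$-module $\mathcal{F}$ and any $i$ we have
+$$
+H^i((K \oplus L) \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}) =
+H^i(K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}) \oplus
+H^i(L \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F})
+$$
+so the vanishing of the left hand side for $i \not \in [a, b]$ forces the
+vanishing of each summand.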
+
+
+
+
+
+
+\section{Perfect complexes}
+\label{section-perfect}
+
+\noindent
+In this section we discuss properties of perfect complexes on
+ringed spaces.
+
+\begin{definition}
+\label{definition-perfect}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $\mathcal{E}^\bullet$ be a complex of $\mathcal{O}_X$-modules.
+We say $\mathcal{E}^\bullet$ is {\it perfect} if there exists
+an open covering $X = \bigcup U_i$ such that for each $i$
+there exists a morphism of complexes
+$\mathcal{E}_i^\bullet \to \mathcal{E}^\bullet|_{U_i}$
+which is a quasi-isomorphism with $\mathcal{E}_i^\bullet$
+a strictly perfect complex of $\mathcal{O}_{U_i}$-modules.
+An object $E$ of $D(\mathcal{O}_X)$ is {\it perfect}
+if it can be represented by a perfect complex of $\mathcal{O}_X$-modules.
+\end{definition}
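+
+\noindent
+A basic example: if $\mathcal{E}$ is a finite locally free
+$\mathcal{O}_X$-module, then $\mathcal{E}$, viewed as a complex sitting
+in degree $0$, is perfect. Indeed, choosing an open covering
+$X = \bigcup U_i$ with
+$\mathcal{E}|_{U_i} \cong \mathcal{O}_{U_i}^{\oplus n_i}$
+we obtain strictly perfect complexes which are isomorphic (in particular
+quasi-isomorphic) to $\mathcal{E}|_{U_i}$.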
+
+\begin{lemma}
+\label{lemma-perfect-independent-representative}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $E$ be an object of $D(\mathcal{O}_X)$.
+\begin{enumerate}
+\item If there exists an open covering $X = \bigcup U_i$ and
+strictly perfect complexes $\mathcal{E}_i^\bullet$ on $U_i$
+such that $\mathcal{E}_i^\bullet$ represents $E|_{U_i}$ in
+$D(\mathcal{O}_{U_i})$, then $E$ is perfect.
+\item If $E$ is perfect, then any complex representing $E$ is perfect.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Identical to the proof of
+Lemma \ref{lemma-pseudo-coherent-independent-representative}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-on-locally-ringed}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $E$ be an object of
+$D(\mathcal{O}_X)$. Assume that all stalks $\mathcal{O}_{X, x}$
+are local rings. Then the following are equivalent
+\begin{enumerate}
+\item $E$ is perfect,
+\item there exists an open covering $X = \bigcup U_i$ such that
+$E|_{U_i}$ can be represented by a finite complex of finite locally
+free $\mathcal{O}_{U_i}$-modules, and
+\item there exists an open covering $X = \bigcup U_i$ such that
+$E|_{U_i}$ can be represented by a finite complex of finite
+free $\mathcal{O}_{U_i}$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-perfect-independent-representative}
+and the fact that on $X$ every direct summand of a finite free module
+is finite locally free. See Modules, Lemma
+\ref{modules-lemma-direct-summand-of-locally-free-is-locally-free}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-precise}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $E$ be an object of $D(\mathcal{O}_X)$.
+Let $a \leq b$ be integers. If $E$ has tor amplitude in $[a, b]$
+and is $(a - 1)$-pseudo-coherent, then $E$ is perfect.
+\end{lemma}
+
+\begin{proof}
+After replacing $X$ by the members of an open covering we may assume there
+exists a strictly perfect complex $\mathcal{E}^\bullet$ and a map
+$\alpha : \mathcal{E}^\bullet \to E$ such that $H^i(\alpha)$ is an isomorphism
+for $i \geq a$. We may and do replace $\mathcal{E}^\bullet$ by
+$\sigma_{\geq a - 1}\mathcal{E}^\bullet$. Choose a distinguished triangle
+$$
+\mathcal{E}^\bullet \to E \to C \to \mathcal{E}^\bullet[1]
+$$
+From the vanishing of cohomology sheaves of $E$ and $\mathcal{E}^\bullet$
+and the assumption on $\alpha$ we obtain $C \cong \mathcal{K}[2 - a]$ with
+$\mathcal{K} = \Ker(\mathcal{E}^{a - 1} \to \mathcal{E}^a)$.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Applying $- \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}$
+and using that $E$ has tor amplitude in $[a, b]$
+we see that $\mathcal{K} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{E}^{a - 1} \otimes_{\mathcal{O}_X} \mathcal{F}$ has image
+$\Ker(\mathcal{E}^{a - 1} \otimes_{\mathcal{O}_X} \mathcal{F}
+\to \mathcal{E}^a \otimes_{\mathcal{O}_X} \mathcal{F})$.
+It follows that $\text{Tor}_1^{\mathcal{O}_X}(\mathcal{E}', \mathcal{F}) = 0$
+where $\mathcal{E}' = \Coker(\mathcal{E}^{a - 1} \to \mathcal{E}^a)$.
+Hence $\mathcal{E}'$ is flat (Lemma \ref{lemma-flat-tor-zero}).
+Thus $\mathcal{E}'$ is locally a direct summand of a finite free module by
+Modules, Lemma \ref{modules-lemma-flat-locally-finite-presentation}.
+Thus locally the complex
+$$
+\mathcal{E}' \to \mathcal{E}^{a + 1} \to \ldots \to \mathcal{E}^b
+$$
+is quasi-isomorphic to $E$ and $E$ is perfect.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Let $E$ be an object of $D(\mathcal{O}_X)$.
+The following are equivalent
+\begin{enumerate}
+\item $E$ is perfect, and
+\item $E$ is pseudo-coherent and locally has finite tor dimension.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). By definition this means there exists an open covering
+$X = \bigcup U_i$ such that $E|_{U_i}$ is represented by a
+strictly perfect complex. Thus $E$ is pseudo-coherent (i.e.,
+$m$-pseudo-coherent for all $m$) by
+Lemma \ref{lemma-pseudo-coherent-independent-representative}.
+Moreover, a direct summand of a finite free module is flat, hence
+$E|_{U_i}$ has finite Tor dimension by
+Lemma \ref{lemma-tor-amplitude}. Thus (2) holds.
+
+\medskip\noindent
+Assume (2). After replacing $X$ by the members of an open covering
+we may assume there exist integers $a \leq b$ such that $E$
+has tor amplitude in $[a, b]$. Since $E$ is $m$-pseudo-coherent
+for all $m$ we conclude using Lemma \ref{lemma-perfect-precise}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-pullback}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of ringed
+spaces. Let $E$ be an object of $D(\mathcal{O}_Y)$. If $E$ is perfect in
+$D(\mathcal{O}_Y)$, then $Lf^*E$ is perfect in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-perfect},
+\ref{lemma-tor-amplitude-pullback}, and
+\ref{lemma-pseudo-coherent-pullback}.
+(An alternative proof is to copy the proof of
+Lemma \ref{lemma-pseudo-coherent-pullback}.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-two-out-of-three-perfect}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $(K, L, M, f, g, h)$
+be a distinguished triangle in $D(\mathcal{O}_X)$. If two out of three of
+$K, L, M$ are perfect then the third is also perfect.
+\end{lemma}
+
+\begin{proof}
+First proof: Combine
+Lemmas \ref{lemma-perfect}, \ref{lemma-cone-pseudo-coherent}, and
+\ref{lemma-cone-tor-amplitude}.
+Second proof (sketch): Say $K$ and $L$ are perfect. After replacing
+$X$ by the members of an open covering we may assume that $K$ and $L$
+are represented by strictly perfect complexes $\mathcal{K}^\bullet$
+and $\mathcal{L}^\bullet$. After replacing $X$ by the members
+of an open covering we may assume the map $K \to L$ is given by
+a map of complexes $\alpha : \mathcal{K}^\bullet \to \mathcal{L}^\bullet$,
+see Lemma \ref{lemma-local-actual}.
+Then $M$ is isomorphic to the cone of $\alpha$ which is strictly
+perfect by Lemma \ref{lemma-cone}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-perfect}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+If $K, L$ are perfect objects of $D(\mathcal{O}_X)$, then
+so is $K \otimes_{\mathcal{O}_X}^\mathbf{L} L$.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemmas \ref{lemma-perfect}, \ref{lemma-tensor-pseudo-coherent}, and
+\ref{lemma-tensor-tor-amplitude}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-summands-perfect}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+If $K \oplus L$ is a perfect object of $D(\mathcal{O}_X)$, then
+so are $K$ and $L$.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemmas \ref{lemma-perfect}, \ref{lemma-summands-pseudo-coherent}, and
+\ref{lemma-summands-tor-amplitude}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-perfect}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $j : U \to X$ be an
+open subspace. Let $E$ be a perfect object of $D(\mathcal{O}_U)$
+whose cohomology
+sheaves are supported on a closed subset $T \subset U$ with $j(T)$
+closed in $X$. Then $Rj_*E$ is a perfect object of $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Being a perfect complex is local on $X$. Since $j(T) \subset U$ the opens
+$U$ and $V = X \setminus j(T)$ cover $X$, so it suffices to check that
+$Rj_*E$ is perfect when restricted to $U$ and to $V$.
+We have $Rj_*E|_U = E$ which is perfect. We have
+$Rj_*E|_V = 0$ because $E|_{U \setminus T} = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-max-open-coh-loc-free}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $E$ in $D(\mathcal{O}_X)$
+be perfect. Assume that all stalks $\mathcal{O}_{X, x}$ are local rings.
+Then the set
+$$
+U =
+\{x \in X \mid
+H^i(E)_x\text{ is a finite free }
+\mathcal{O}_{X, x}\text{-module for all }i\in \mathbf{Z}\}
+$$
+is open in $X$ and is the maximal open set $U \subset X$ such that
+$H^i(E)|_U$ is finite locally free for all $i \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Note that if $V \subset X$ is some open such that $H^i(E)|_V$
+is finite locally free for all $i \in \mathbf{Z}$ then $V \subset U$.
+Let $x \in U$. We will show that an open neighbourhood of $x$
+is contained in $U$ and that $H^i(E)$ is finite locally free
+on this neighbourhood for all $i$. This will finish the proof.
+During the proof we may (finitely many times) replace
+$X$ by an open neighbourhood of $x$.
+Hence we may assume $E$ is represented
+by a strictly perfect complex $\mathcal{E}^\bullet$. Say
+$\mathcal{E}^i = 0$ for $i \not \in [a, b]$. We will
+prove the result by induction on $b - a$. The module
+$H^b(E) = \Coker(d^{b - 1} : \mathcal{E}^{b - 1} \to \mathcal{E}^b)$
+is of finite presentation. Since $H^b(E)_x$ is finite free,
+we conclude $H^b(E)$ is finite free in an open neighbourhood of $x$ by
+Modules, Lemma \ref{modules-lemma-finite-presentation-stalk-free}.
+Thus after replacing $X$ by a (possibly smaller) open
+neighbourhood we may assume we have a direct sum decomposition
+$\mathcal{E}^b = \Im(d^{b - 1}) \oplus H^b(E)$ and
+$H^b(E)$ is finite free, see Lemma \ref{lemma-local-lift-map}.
+Doing the same argument again, we see that we may
+assume $\mathcal{E}^{b - 1} = \Ker(d^{b - 1}) \oplus \Im(d^{b - 1})$.
+The complex
+$\mathcal{E}^a \to \ldots \to \mathcal{E}^{b - 2} \to \Ker(d^{b - 1})$
+is a strictly perfect complex representing a perfect
+object $E'$ with $H^i(E) = H^i(E')$ for $i \not = b$.
+Hence we conclude by our induction hypothesis.
+\end{proof}
+
+
+
+
+
+
+\section{Duals}
+\label{section-duals}
+
+\noindent
+In this section we characterize the dualizable objects of
+the category of complexes and of the derived category.
+In particular, we will see that an object of $D(\mathcal{O}_X)$
+has a dual if and only if it is perfect (this follows from
+Example \ref{example-dual-derived} and
+Lemma \ref{lemma-left-dual-derived}).
+
+\begin{lemma}
+\label{lemma-symmetric-monoidal-cat-complexes}
+Let $(X, \mathcal{O}_X)$ be a ringed space. The category of complexes
+of $\mathcal{O}_X$-modules with tensor product defined by
+$\mathcal{F}^\bullet \otimes \mathcal{G}^\bullet =
+\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathcal{G}^\bullet)$
+is a symmetric monoidal category (for sign rules, see
+More on Algebra, Section \ref{more-algebra-section-sign-rules}).
+\end{lemma}
+
+\begin{proof}
+Omitted. Hints: as unit $\mathbf{1}$ we take the complex having
+$\mathcal{O}_X$ in degree $0$ and zero in other degrees with
+obvious isomorphisms
+$\text{Tot}(\mathbf{1} \otimes_{\mathcal{O}_X} \mathcal{G}^\bullet) =
+\mathcal{G}^\bullet$ and
+$\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathbf{1}) =
+\mathcal{F}^\bullet$.
+To prove the lemma one has to check the commutativity
+of various diagrams, see Categories, Definitions
+\ref{categories-definition-monoidal-category} and
+\ref{categories-definition-symmetric-monoidal-category}.
+The verifications are straightforward in each case.
+\end{proof}
+
+\begin{example}
+\label{example-dual}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}^\bullet$
+be a locally bounded complex of $\mathcal{O}_X$-modules such that each
+$\mathcal{F}^n$ is locally a direct summand of a finite
+free $\mathcal{O}_X$-module. In other words, there is an open covering
+$X = \bigcup U_i$ such that $\mathcal{F}^\bullet|_{U_i}$ is a strictly
+perfect complex. Consider the complex
+$$
+\mathcal{G}^\bullet = \SheafHom^\bullet(\mathcal{F}^\bullet, \mathcal{O}_X)
+$$
+as in Section \ref{section-hom-complexes}. Let
+$$
+\eta :
+\mathcal{O}_X
+\to
+\text{Tot}(\mathcal{F}^\bullet \otimes_{\mathcal{O}_X} \mathcal{G}^\bullet)
+\quad\text{and}\quad
+\epsilon :
+\text{Tot}(\mathcal{G}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F}^\bullet)
+\to
+\mathcal{O}_X
+$$
+be $\eta = \sum \eta_n$ and $\epsilon = \sum \epsilon_n$
+where $\eta_n : \mathcal{O}_X \to
+\mathcal{F}^n \otimes_{\mathcal{O}_X} \mathcal{G}^{-n}$
+and
+$\epsilon_n : \mathcal{G}^{-n} \otimes_{\mathcal{O}_X} \mathcal{F}^n
+\to \mathcal{O}_X$ are as in Modules, Example \ref{modules-example-dual}.
+Then $\mathcal{G}^\bullet, \eta, \epsilon$
+is a left dual for $\mathcal{F}^\bullet$ as in
+Categories, Definition \ref{categories-definition-dual}.
+We omit the verification that
+$(1 \otimes \epsilon) \circ (\eta \otimes 1) = \text{id}_{\mathcal{F}^\bullet}$
+and
+$(\epsilon \otimes 1) \circ (1 \otimes \eta) =
+\text{id}_{\mathcal{G}^\bullet}$. Please compare with
+More on Algebra, Lemma \ref{more-algebra-lemma-left-dual-complex}.
+\end{example}
+
+\begin{lemma}
+\label{lemma-left-dual-complex}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}^\bullet$
+be a complex of $\mathcal{O}_X$-modules. If $\mathcal{F}^\bullet$
+has a left dual in the monoidal category of complexes of
+$\mathcal{O}_X$-modules
+(Categories, Definition \ref{categories-definition-dual})
+then $\mathcal{F}^\bullet$ is a locally bounded complex whose terms are
+locally direct summands of finite free $\mathcal{O}_X$-modules
+and the left dual is as constructed in Example \ref{example-dual}.
+\end{lemma}
+
+\begin{proof}
+By uniqueness of left duals
+(Categories, Remark \ref{categories-remark-left-dual-adjoint})
+we get the final statement provided we show that $\mathcal{F}^\bullet$
+is as stated. Let $\mathcal{G}^\bullet, \eta, \epsilon$ be a left dual.
+Write $\eta = \sum \eta_n$ and $\epsilon = \sum \epsilon_n$
+where $\eta_n : \mathcal{O}_X \to
+\mathcal{F}^n \otimes_{\mathcal{O}_X} \mathcal{G}^{-n}$
+and
+$\epsilon_n : \mathcal{G}^{-n} \otimes_{\mathcal{O}_X} \mathcal{F}^n
+\to \mathcal{O}_X$. Since
+$(1 \otimes \epsilon) \circ (\eta \otimes 1) = \text{id}_{\mathcal{F}^\bullet}$
+and
+$(\epsilon \otimes 1) \circ (1 \otimes \eta) = \text{id}_{\mathcal{G}^\bullet}$
+by Categories, Definition \ref{categories-definition-dual} we see immediately
+that we have
+$(1 \otimes \epsilon_n) \circ (\eta_n \otimes 1) = \text{id}_{\mathcal{F}^n}$
+and
+$(\epsilon_n \otimes 1) \circ (1 \otimes \eta_n) =
+\text{id}_{\mathcal{G}^{-n}}$.
+Hence we see that $\mathcal{F}^n$ is locally a direct summand of a finite
+free $\mathcal{O}_X$-module by
+Modules, Lemma \ref{modules-lemma-left-dual-module}.
+Since the sum $\eta = \sum \eta_n$ is locally finite, we conclude that
+$\mathcal{F}^\bullet$ is locally bounded.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-internal-hom-evaluate-isom}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K, L, M \in D(\mathcal{O}_X)$.
+If $K$ is perfect, then the map
+$$
+R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} K
+\longrightarrow
+R\SheafHom(R\SheafHom(K, L), M)
+$$
+of Lemma \ref{lemma-internal-hom-evaluate} is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Since the map is globally defined and since formation of the right and
+left hand sides commutes with restriction to opens
+(see Lemma \ref{lemma-restriction-RHom-to-U}), to prove this we may work
+locally on $X$. Thus we may assume $K$ is represented by a strictly
+perfect complex $\mathcal{E}^\bullet$.
+
+\medskip\noindent
+If $K_1 \to K_2 \to K_3$ is a distinguished triangle in $D(\mathcal{O}_X)$,
+then we get distinguished triangles
+$$
+R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} K_1 \to
+R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} K_2 \to
+R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} K_3
+$$
+and
+$$
+R\SheafHom(R\SheafHom(K_1, L), M) \to
+R\SheafHom(R\SheafHom(K_2, L), M) \to
+R\SheafHom(R\SheafHom(K_3, L), M)
+$$
+See Section \ref{section-flat} and
+Lemma \ref{lemma-RHom-triangulated}.
+The arrow of Lemma \ref{lemma-internal-hom-evaluate} is functorial in $K$
+hence we get a morphism between these distinguished triangles.
+Thus, if the result holds for $K_1$ and $K_3$, then the result holds for
+$K_2$ by Derived Categories, Lemma
+\ref{derived-lemma-third-isomorphism-triangle}.
+
+\medskip\noindent
+Combining the remarks above with the distinguished triangles
+$$
+\sigma_{\geq n}\mathcal{E}^\bullet \to \mathcal{E}^\bullet \to
+\sigma_{\leq n - 1}\mathcal{E}^\bullet
+$$
+of stupid truncations, we reduce to the case where $K$ consists
+of a direct summand of a finite free $\mathcal{O}_X$-module placed
+in some degree. By an obvious compatibility of the problem with direct sums
+(similar to what was said above) and shifts this reduces us to the case
+where $K = \mathcal{O}_X^{\oplus n}$ for some integer $n$.
+This case is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dual-perfect-complex}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K$ be a perfect object of
+$D(\mathcal{O}_X)$. Then $K^\vee = R\SheafHom(K, \mathcal{O}_X)$ is a
+perfect object too and $(K^\vee)^\vee \cong K$. There are
+functorial isomorphisms
+$$
+M \otimes^\mathbf{L}_{\mathcal{O}_X} K^\vee = R\SheafHom(K, M)
+$$
+and
+$$
+H^0(X, M \otimes^\mathbf{L}_{\mathcal{O}_X} K^\vee) =
+\Hom_{D(\mathcal{O}_X)}(K, M)
+$$
+for $M$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-internal-hom-evaluate} there is a canonical map
+$$
+K = R\SheafHom(\mathcal{O}_X, \mathcal{O}_X)
+\otimes_{\mathcal{O}_X}^\mathbf{L} K \longrightarrow
+R\SheafHom(R\SheafHom(K, \mathcal{O}_X), \mathcal{O}_X) =
+(K^\vee)^\vee
+$$
+which is an isomorphism by Lemma \ref{lemma-internal-hom-evaluate-isom}.
+To check the other statements we will use without further mention that
+formation of internal hom commutes with restriction to opens
+(Lemma \ref{lemma-restriction-RHom-to-U}).
+We may check $K^\vee$ is perfect locally on $X$.
+By Lemma \ref{lemma-dual}
+to see the final statement it suffices to check that the map
+(\ref{equation-eval})
+$$
+M \otimes^\mathbf{L}_{\mathcal{O}_X} K^\vee
+\longrightarrow
+R\SheafHom(K, M)
+$$
+is an isomorphism. This is local on $X$ as well.
+Hence it suffices to prove these two statements when $K$ is represented
+by a strictly perfect complex.
+
+\medskip\noindent
+Assume $K$ is represented by the strictly perfect complex
+$\mathcal{E}^\bullet$. Then it follows from
+Lemma \ref{lemma-Rhom-strictly-perfect}
+that $K^\vee$ is represented by the complex whose terms are
+$(\mathcal{E}^{-n})^\vee =
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-n}, \mathcal{O}_X)$
+in degree $n$. Since $\mathcal{E}^{-n}$ is a direct summand of a finite
+free $\mathcal{O}_X$-module, so is $(\mathcal{E}^{-n})^\vee$.
+Hence $K^\vee$ is represented by a strictly perfect complex too
+and we see that $K^\vee$ is perfect.
+To see that (\ref{equation-eval}) is an isomorphism, represent
+$M$ by a complex $\mathcal{F}^\bullet$.
+By Lemma \ref{lemma-Rhom-strictly-perfect} the complex
+$R\SheafHom(K, M)$ is represented by the complex with terms
+$$
+\bigoplus\nolimits_{n = p + q}
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}^{-q}, \mathcal{F}^p)
+$$
+On the other hand, the object $M \otimes^\mathbf{L}_{\mathcal{O}_X} K^\vee$
+is represented by the complex with terms
+$$
+\bigoplus\nolimits_{n = p + q}
+\mathcal{F}^p \otimes_{\mathcal{O}_X} (\mathcal{E}^{-q})^\vee
+$$
+Thus the assertion that (\ref{equation-eval}) is an isomorphism
+reduces to the assertion that the canonical map
+$$
+\mathcal{F}
+\otimes_{\mathcal{O}_X}
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{O}_X)
+\longrightarrow
+\SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{F})
+$$
+is an isomorphism when $\mathcal{E}$ is a direct summand of a finite
+free $\mathcal{O}_X$-module and $\mathcal{F}$ is any $\mathcal{O}_X$-module.
+This follows immediately from the corresponding statement when
+$\mathcal{E}$ is finite free.
+\end{proof}
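+
+\noindent
+For instance, if $\mathcal{E} = \mathcal{O}_X^{\oplus n}$, then both sides
+of the last displayed map identify with $\mathcal{F}^{\oplus n}$:
+$$
+\mathcal{F} \otimes_{\mathcal{O}_X}
+\SheafHom_{\mathcal{O}_X}(\mathcal{O}_X^{\oplus n}, \mathcal{O}_X) =
+\mathcal{F}^{\oplus n} =
+\SheafHom_{\mathcal{O}_X}(\mathcal{O}_X^{\oplus n}, \mathcal{F})
+$$
+and under these identifications the canonical map is an isomorphism,
+whence the claim for direct summands of finite free modules.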
+
+\begin{lemma}
+\label{lemma-symmetric-monoidal-derived}
+Let $(X, \mathcal{O}_X)$ be a ringed space. The derived category
+$D(\mathcal{O}_X)$ is a symmetric monoidal category with tensor product
+given by derived tensor product with usual associativity and
+commutativity constraints (for sign rules, see
+More on Algebra, Section \ref{more-algebra-section-sign-rules}).
+\end{lemma}
+
+\begin{proof}
+Omitted. Compare with Lemma \ref{lemma-symmetric-monoidal-cat-complexes}.
+\end{proof}
+
+\begin{example}
+\label{example-dual-derived}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K$ be a perfect object
+of $D(\mathcal{O}_X)$. Set $K^\vee = R\SheafHom(K, \mathcal{O}_X)$
+as in Lemma \ref{lemma-dual-perfect-complex}.
+Then the map
+$$
+K \otimes_{\mathcal{O}_X}^\mathbf{L} K^\vee \longrightarrow R\SheafHom(K, K)
+$$
+is an isomorphism (by the lemma). Denote
+$$
+\eta :
+\mathcal{O}_X
+\longrightarrow
+K \otimes_{\mathcal{O}_X}^\mathbf{L} K^\vee
+$$
+the map sending $1$ to the section corresponding to
+$\text{id}_K$ under the isomorphism above.
+Denote
+$$
+\epsilon :
+K^\vee
+\otimes_{\mathcal{O}_X}^\mathbf{L} K
+\longrightarrow
+\mathcal{O}_X
+$$
+the evaluation map (to construct it you can use
+Lemma \ref{lemma-internal-hom-composition} for example). Then
+$K^\vee, \eta, \epsilon$ is a left dual for $K$ as in
+Categories, Definition \ref{categories-definition-dual}.
+We omit the verification that
+$(1 \otimes \epsilon) \circ (\eta \otimes 1) = \text{id}_K$
+and
+$(\epsilon \otimes 1) \circ (1 \otimes \eta) =
+\text{id}_{K^\vee}$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-left-dual-derived}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $M$ be an object
+of $D(\mathcal{O}_X)$. If $M$ has a left dual in the monoidal category
+$D(\mathcal{O}_X)$ (Categories, Definition \ref{categories-definition-dual})
+then $M$ is perfect and the left dual is as constructed in
+Example \ref{example-dual-derived}.
+\end{lemma}
+
+\begin{proof}
+Let $x \in X$. It suffices to find an open neighbourhood $U$ of $x$
+such that $M$ restricts to a perfect complex over $U$. Hence during the
+proof we can (finitely often) replace $X$ by an open neighbourhood of $x$.
+Let $N, \eta, \epsilon$ be a left dual.
+
+\medskip\noindent
+We are going to use the following argument several times. Choose any
+complex $\mathcal{M}^\bullet$
+of $\mathcal{O}_X$-modules representing $M$. Choose a K-flat complex
+$\mathcal{N}^\bullet$ representing $N$ whose terms are flat
+$\mathcal{O}_X$-modules, see Lemma \ref{lemma-K-flat-resolution}.
+Consider the map
+$$
+\eta : \mathcal{O}_X \to
+\text{Tot}(\mathcal{M}^\bullet \otimes_{\mathcal{O}_X} \mathcal{N}^\bullet)
+$$
+After shrinking $X$ we can find an integer $N$ and for
+$i = 1, \ldots, N$ integers $n_i \in \mathbf{Z}$ and sections
+$f_i$ and $g_i$ of $\mathcal{M}^{n_i}$ and $\mathcal{N}^{-n_i}$
+such that
+$$
+\eta(1) = \sum\nolimits_i f_i \otimes g_i
+$$
+Let $\mathcal{K}^\bullet \subset \mathcal{M}^\bullet$ be any subcomplex
+of $\mathcal{O}_X$-modules containing the sections $f_i$
+for $i = 1, \ldots, N$.
+Since
+$\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{N}^\bullet)
+\subset
+\text{Tot}(\mathcal{M}^\bullet \otimes_{\mathcal{O}_X} \mathcal{N}^\bullet)$
+by flatness of the modules $\mathcal{N}^n$, we see that $\eta$ factors through
+$$
+\tilde \eta :
+\mathcal{O}_X \to
+\text{Tot}(\mathcal{K}^\bullet \otimes_{\mathcal{O}_X} \mathcal{N}^\bullet)
+$$
+Denoting $K$ the object of $D(\mathcal{O}_X)$ represented by
+$\mathcal{K}^\bullet$ we find a commutative diagram
+$$
+\xymatrix{
+M \ar[rr]_-{\eta \otimes 1} \ar[rrd]_{\tilde \eta \otimes 1} & &
+M \otimes^\mathbf{L} N \otimes^\mathbf{L} M
+\ar[r]_-{1 \otimes \epsilon} &
+M \\
+& &
+K \otimes^\mathbf{L} N \otimes^\mathbf{L} M
+\ar[u] \ar[r]^-{1 \otimes \epsilon} &
+K \ar[u]
+}
+$$
+Since the composition of the upper row is the identity on $M$
+we conclude that $M$ is a direct summand of $K$ in $D(\mathcal{O}_X)$.
+
+\medskip\noindent
+As a first use of the argument above, we can choose the subcomplex
+$\mathcal{K}^\bullet = \sigma_{\geq a} \tau_{\leq b}\mathcal{M}^\bullet$
+with $a < n_i < b$ for $i = 1, \ldots, N$. Thus $M$ is a direct
+summand in $D(\mathcal{O}_X)$ of a bounded complex and we conclude
+we may assume $M$ is in $D^b(\mathcal{O}_X)$. (Recall that the process
+above involves shrinking $X$.)
+
+\medskip\noindent
+Since $M$ is in $D^b(\mathcal{O}_X)$ we may choose
+$\mathcal{M}^\bullet$ to be a bounded above complex of
+flat modules (by Modules, Lemma \ref{modules-lemma-module-quotient-flat} and
+Derived Categories, Lemma \ref{derived-lemma-subcategory-left-resolution}).
+Then we can choose $\mathcal{K}^\bullet = \sigma_{\geq a}\mathcal{M}^\bullet$
+with $a < n_i$ for $i = 1, \ldots, N$ in the argument above.
+Thus we find that we may assume $M$ is a direct summand in
+$D(\mathcal{O}_X)$ of a bounded complex of flat modules.
+In particular, $M$ has finite tor amplitude.
+
+\medskip\noindent
+Say $M$ has tor amplitude in $[a, b]$. Assuming $M$ is $m$-pseudo-coherent
+we are going to show that (after shrinking $X$) we may assume $M$
+is $(m - 1)$-pseudo-coherent. This will finish the proof by
+Lemma \ref{lemma-perfect-precise} and the fact that
+$M$ is $(b + 1)$-pseudo-coherent in any case.
+After shrinking $X$ we may assume there exists a strictly perfect
+complex $\mathcal{E}^\bullet$ and a map $\alpha : \mathcal{E}^\bullet \to M$
+in $D(\mathcal{O}_X)$ such that $H^i(\alpha)$ is an isomorphism for
+$i > m$ and surjective for $i = m$. We may and do assume
+that $\mathcal{E}^i = 0$ for $i < m$. Choose a distinguished triangle
+$$
+\mathcal{E}^\bullet \to M \to L \to \mathcal{E}^\bullet[1]
+$$
+Observe that $H^i(L) = 0$ for $i \geq m$. Thus we may represent
+$L$ by a complex $\mathcal{L}^\bullet$ with $\mathcal{L}^i = 0$
+for $i \geq m$. The map $L \to \mathcal{E}^\bullet[1]$
+is given by a map of complexes
+$\mathcal{L}^\bullet \to \mathcal{E}^\bullet[1]$
+which is zero in all degrees except in degree $m - 1$
+where we obtain a map $\mathcal{L}^{m - 1} \to \mathcal{E}^m$, see
+Derived Categories, Lemma \ref{derived-lemma-negative-exts}.
+Then $M$ is represented by the complex
+$$
+\mathcal{M}^\bullet :
+\ldots \to
+\mathcal{L}^{m - 2} \to
+\mathcal{L}^{m - 1} \to
+\mathcal{E}^m \to
+\mathcal{E}^{m + 1} \to \ldots
+$$
+Apply the discussion in the second paragraph to this complex to get
+sections $f_i$ of $\mathcal{M}^{n_i}$ for $i = 1, \ldots, N$.
+For $n < m$ let $\mathcal{K}^n \subset \mathcal{L}^n$
+be the $\mathcal{O}_X$-submodule generated by the sections
+$f_i$ for $n_i = n$ and $d(f_i)$ for $n_i = n - 1$.
+For $n \geq m$ set $\mathcal{K}^n = \mathcal{E}^n$.
+Clearly, we have a morphism of
+distinguished triangles
+$$
+\xymatrix{
+\mathcal{E}^\bullet \ar[r] &
+\mathcal{M}^\bullet \ar[r] &
+\mathcal{L}^\bullet \ar[r] &
+\mathcal{E}^\bullet[1] \\
+\mathcal{E}^\bullet \ar[r] \ar[u] &
+\mathcal{K}^\bullet \ar[r] \ar[u] &
+\sigma_{\leq m - 1}\mathcal{K}^\bullet \ar[r] \ar[u] &
+\mathcal{E}^\bullet[1] \ar[u]
+}
+$$
+where all the morphisms are as indicated above.
+Denote $K$ the object of $D(\mathcal{O}_X)$ corresponding to the complex
+$\mathcal{K}^\bullet$.
+By the arguments in the second paragraph of the proof we obtain
+a morphism $s : M \to K$ in $D(\mathcal{O}_X)$ such that the composition
+$M \to K \to M$ is the identity on $M$. We don't know that the
+diagram
+$$
+\xymatrix{
+\mathcal{E}^\bullet \ar[r] &
+\mathcal{K}^\bullet \ar@{=}[r] &
+K \\
+\mathcal{E}^\bullet \ar[u]^{\text{id}} \ar[r]^i &
+\mathcal{M}^\bullet \ar@{=}[r] &
+M \ar[u]_s
+}
+$$
+commutes, but we do know it commutes after composing with the
+map $K \to M$. By Lemma \ref{lemma-local-actual} after shrinking $X$ we may
+assume that $s \circ i$ is given by a map of complexes
+$\sigma : \mathcal{E}^\bullet \to \mathcal{K}^\bullet$.
+By the same lemma we may assume the composition of $\sigma$
+with the inclusion $\mathcal{K}^\bullet \subset \mathcal{M}^\bullet$
+is homotopic to zero by some homotopy
+$\{h^i : \mathcal{E}^i \to \mathcal{M}^{i - 1}\}$.
+Thus, after replacing $\mathcal{K}^{m - 1}$ by
+$\mathcal{K}^{m - 1} + \Im(h^m)$ (note that after doing this
+it is still the case that $\mathcal{K}^{m - 1}$ is generated
+by finitely many global sections), we see that
+$\sigma$ itself is homotopic to zero!
+This means that we have a commutative solid diagram
+$$
+\xymatrix{
+\mathcal{E}^\bullet \ar[r] &
+M \ar[r] &
+\mathcal{L}^\bullet \ar[r] &
+\mathcal{E}^\bullet[1] \\
+\mathcal{E}^\bullet \ar[r] \ar[u] &
+K \ar[r] \ar[u] &
+\sigma_{\leq m - 1}\mathcal{K}^\bullet \ar[r] \ar[u] &
+\mathcal{E}^\bullet[1] \ar[u] \\
+\mathcal{E}^\bullet \ar[r] \ar[u] &
+M \ar[r] \ar[u]^s &
+\mathcal{L}^\bullet \ar[r] \ar@{..>}[u] &
+\mathcal{E}^\bullet[1] \ar[u]
+}
+$$
+By the axioms of triangulated categories we obtain a dotted
+arrow fitting into the diagram.
+Looking at cohomology sheaves in degree $m - 1$ we see that we obtain
+$$
+\xymatrix{
+H^{m - 1}(M) \ar[r] &
+H^{m - 1}(\mathcal{L}^\bullet) \ar[r] &
+H^m(\mathcal{E}^\bullet) \\
+H^{m - 1}(K) \ar[r] \ar[u] &
+H^{m - 1}(\sigma_{\leq m - 1}\mathcal{K}^\bullet) \ar[r] \ar[u] &
+H^m(\mathcal{E}^\bullet) \ar[u] \\
+H^{m - 1}(M) \ar[r] \ar[u] &
+H^{m - 1}(\mathcal{L}^\bullet) \ar[r] \ar[u] &
+H^m(\mathcal{E}^\bullet) \ar[u]
+}
+$$
+Since the vertical compositions are the identity in both the
+left and right column, we conclude the vertical composition
+$H^{m - 1}(\mathcal{L}^\bullet) \to
+H^{m - 1}(\sigma_{\leq m - 1}\mathcal{K}^\bullet) \to
+H^{m - 1}(\mathcal{L}^\bullet)$ in the middle is surjective!
+In particular $H^{m - 1}(\sigma_{\leq m - 1}\mathcal{K}^\bullet) \to
+H^{m - 1}(\mathcal{L}^\bullet)$ is surjective.
+Using the induced map of long exact sequences of cohomology
+sheaves from the morphism of triangles above, a diagram chase
+shows this implies $H^i(K) \to H^i(M)$ is an isomorphism
+for $i \geq m$ and surjective for $i = m - 1$.
+By construction we can choose an $r \geq 0$ and a surjection
+$\mathcal{O}_X^{\oplus r} \to \mathcal{K}^{m - 1}$. Then the
+composition
+$$
+(\mathcal{O}_X^{\oplus r} \to \mathcal{E}^m \to
+\mathcal{E}^{m + 1} \to \ldots ) \longrightarrow
+K \longrightarrow M
+$$
+induces an isomorphism on cohomology sheaves in degrees $\geq m$ and
+a surjection in degree $m - 1$ and the proof is complete.
+\end{proof}
+
+
+
+
+
+\section{Miscellany}
+\label{section-misc}
+
+\noindent
+Some results which do not fit anywhere else.
+
+\begin{lemma}
+\label{lemma-colim-and-lim-of-duals}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$(K_n)_{n \in \mathbf{N}}$ be a system of perfect objects of $D(\mathcal{O}_X)$.
+Let $K = \text{hocolim} K_n$ be the derived colimit
+(Derived Categories, Definition \ref{derived-definition-derived-colimit}).
+Then for any object $E$ of $D(\mathcal{O}_X)$ we have
+$$
+R\SheafHom(K, E) = R\lim E \otimes^\mathbf{L}_{\mathcal{O}_X} K_n^\vee
+$$
+where $(K_n^\vee)$ is the inverse system of dual perfect complexes.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-dual-perfect-complex} we have
+$R\lim E \otimes^\mathbf{L}_{\mathcal{O}_X} K_n^\vee =
+R\lim R\SheafHom(K_n, E)$
+which fits into the distinguished triangle
+$$
+R\lim R\SheafHom(K_n, E) \to
+\prod R\SheafHom(K_n, E) \to
+\prod R\SheafHom(K_n, E)
+$$
+Because $K$ similarly fits into the distinguished triangle
+$\bigoplus K_n \to \bigoplus K_n \to K$ it suffices to show that
+$\prod R\SheafHom(K_n, E) = R\SheafHom(\bigoplus K_n, E)$.
+This is a formal consequence of (\ref{equation-internal-hom})
+and the fact that derived tensor product commutes with direct sums.
+\end{proof}
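
\noindent
To make the inverse system in
Lemma \ref{lemma-colim-and-lim-of-duals} explicit: the transition
maps of $(K_n^\vee)$ are obtained by applying the contravariant functor
$R\SheafHom(-, \mathcal{O}_X)$ to the transition maps of $(K_n)$, i.e.,
$$
K_{n + 1}^\vee = R\SheafHom(K_{n + 1}, \mathcal{O}_X)
\longrightarrow
R\SheafHom(K_n, \mathcal{O}_X) = K_n^\vee
$$
Thus the lemma says that $R\SheafHom(-, E)$ turns the derived colimit
of the $K_n$ into the derived limit of the complexes
$E \otimes^\mathbf{L}_{\mathcal{O}_X} K_n^\vee$.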
+
+\begin{lemma}
+\label{lemma-ext-composition-is-cup}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $K$ and $E$ be objects
+of $D(\mathcal{O}_X)$ with $E$ perfect. The diagram
+$$
+\xymatrix{
+H^0(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} E^\vee) \times H^0(X, E)
+\ar[r] \ar[d] &
+H^0(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} E^\vee
+\otimes_{\mathcal{O}_X}^\mathbf{L} E) \ar[d] \\
+\Hom_X(E, K) \times H^0(X, E) \ar[r] &
+H^0(X, K)
+}
+$$
+commutes where the top horizontal arrow is the cup product, the
+right vertical arrow uses
+$\epsilon : E^\vee \otimes_{\mathcal{O}_X}^\mathbf{L} E \to \mathcal{O}_X$
+(Example \ref{example-dual-derived}), the left vertical arrow uses
+Lemma \ref{lemma-dual-perfect-complex}, and the bottom horizontal
+arrow is the obvious one.
+\end{lemma}
+
+\begin{proof}
+We will abbreviate $\otimes = \otimes_{\mathcal{O}_X}^\mathbf{L}$
+and $\mathcal{O} = \mathcal{O}_X$. We will identify $E$ and $K$
+with $R\SheafHom(\mathcal{O}, E)$ and $R\SheafHom(\mathcal{O}, K)$
+and we will identify $E^\vee$ with $R\SheafHom(E, \mathcal{O})$.
+
+\medskip\noindent
+Let $\xi \in H^0(X, K \otimes E^\vee)$ and $\eta \in H^0(X, E)$.
+Denote $\tilde \xi : \mathcal{O} \to K \otimes E^\vee$ and
+$\tilde \eta : \mathcal{O} \to E$ the corresponding maps in
+$D(\mathcal{O})$. By Lemma \ref{lemma-second-cup-equals-first}
+the cup product $\xi \cup \eta$ corresponds to
+$\tilde \xi \otimes \tilde \eta : \mathcal{O} \to
+K \otimes E^\vee \otimes E$.
+
+\medskip\noindent
+We claim the map $\xi' : E \to K$ corresponding to $\xi$ by
+Lemma \ref{lemma-dual-perfect-complex} is the composition
+$$
+E = \mathcal{O} \otimes E
+\xrightarrow{\tilde \xi \otimes 1_E}
+K \otimes E^\vee \otimes E
+\xrightarrow{1_K \otimes \epsilon}
+K
+$$
+The construction in Lemma \ref{lemma-dual-perfect-complex}
+uses the evaluation map (\ref{equation-eval}) which in turn
+is constructed using the identification of $E$ with
+$R\SheafHom(\mathcal{O}, E)$ and the composition
+$\underline{\circ}$ constructed
+in Lemma \ref{lemma-internal-hom-composition}.
+Hence $\xi'$ is the composition
+\begin{align*}
+E = \mathcal{O} \otimes
+R\SheafHom(\mathcal{O}, E)
+& \xrightarrow{\tilde \xi \otimes 1}
+R\SheafHom(\mathcal{O}, K) \otimes
+R\SheafHom(E, \mathcal{O}) \otimes
+R\SheafHom(\mathcal{O}, E) \\
+& \xrightarrow{\underline{\circ} \otimes 1}
+R\SheafHom(E, K) \otimes R\SheafHom(\mathcal{O}, E) \\
+& \xrightarrow{\underline{\circ}}
+R\SheafHom(\mathcal{O}, K) = K
+\end{align*}
+The claim follows immediately from this and the fact that
+the composition $\underline{\circ}$ constructed in
+Lemma \ref{lemma-internal-hom-composition} is associative
+(insert future reference here) and the fact that $\epsilon$
+is defined as the composition
+$\underline{\circ} : E^\vee \otimes E \to \mathcal{O}$ in
+Example \ref{example-dual-derived}.
+
+\medskip\noindent
+Using the results from the previous two paragraphs, we find
+the statement of the lemma is that
+$(1_K \otimes \epsilon) \circ (\tilde \xi \otimes \tilde \eta)$
+is equal to
$(1_K \otimes \epsilon) \circ (\tilde \xi \otimes 1_E)
\circ (1_\mathcal{O} \otimes \tilde \eta)$
which is immediate from the functoriality of the derived tensor
product, i.e., from the identity $\tilde \xi \otimes \tilde \eta =
(\tilde \xi \otimes 1_E) \circ (1_\mathcal{O} \otimes \tilde \eta)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-internal-hom}
+Let $h : X \to Y$ be a morphism of ringed spaces.
+Let $K, M$ be objects of $D(\mathcal{O}_Y)$. The
+canonical map
+$$
+Lh^*R\SheafHom(K, M) \longrightarrow R\SheafHom(Lh^*K, Lh^*M)
+$$
+of Remark \ref{remark-prepare-fancy-base-change}
+is an isomorphism in the following cases
+\begin{enumerate}
+\item $K$ is perfect,
+\item $h$ is flat, $K$ is pseudo-coherent, and $M$ is (locally) bounded below,
+\item $\mathcal{O}_X$ has finite tor dimension over $h^{-1}\mathcal{O}_Y$,
+$K$ is pseudo-coherent, and $M$ is (locally) bounded below,
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). The question is local on $Y$, hence we may assume that
+$K$ is represented by a strictly perfect complex
+$\mathcal{E}^\bullet$, see Section \ref{section-perfect}.
+Choose a K-flat complex $\mathcal{F}^\bullet$ representing $M$.
Apply Lemma \ref{lemma-Rhom-strictly-perfect} to see that
$R\SheafHom(K, M)$ is represented by the complex
$\mathcal{H}^\bullet =
\SheafHom^\bullet(\mathcal{E}^\bullet, \mathcal{F}^\bullet)$
with terms $\mathcal{H}^n = \bigoplus\nolimits_{n = p + q}
\SheafHom_{\mathcal{O}_Y}(\mathcal{E}^{-q}, \mathcal{F}^p)$.
+By the construction of $Lh^*$ in Section \ref{section-derived-pullback}
+we see that $Lh^*K$ is represented by the strictly perfect complex
+$h^*\mathcal{E}^\bullet$ (Lemma \ref{lemma-strictly-perfect-pullback}).
+Similarly, the object $Lh^*M$ is represented by
+the complex $h^*\mathcal{F}^\bullet$.
+Finally, the object $Lh^*R\SheafHom(K, M)$
+is represented by $h^*\mathcal{H}^\bullet$ as
+$\mathcal{H}^\bullet$ is K-flat by
+Lemma \ref{lemma-Rhom-strictly-perfect-K-flat}.
+Thus to finish the proof it suffices to show that
+$h^*\mathcal{H}^\bullet =
+\SheafHom^\bullet(h^*\mathcal{E}^\bullet, h^*\mathcal{F}^\bullet)$.
For this it suffices to note that
$h^*\SheafHom(\mathcal{E}, \mathcal{F}) =
\SheafHom(h^*\mathcal{E}, h^*\mathcal{F})$
whenever $\mathcal{E}$ is a direct summand of a finite free
$\mathcal{O}_Y$-module.
+
+\medskip\noindent
+Proof of (2). Since $h$ is flat, we can compute $Lh^*$
+by simply using $h^*$ on any complex of $\mathcal{O}_Y$-modules.
+In particular we have $H^i(Lh^*K) = h^*H^i(K)$ for all $i \in \mathbf{Z}$.
+Say $H^i(M) = 0$ for $i < a$. Let $K' \to K$ be a morphism
+of $D(\mathcal{O}_Y)$ which defines an isomorphism
+$H^i(K') \to H^i(K)$ for all $i \geq b$. Then the corresponding maps
+$$
+R\SheafHom(K, M) \to R\SheafHom(K', M)
+$$
+and
+$$
+R\SheafHom(Lh^*K, Lh^*M) \to R\SheafHom(Lh^*K', Lh^*M)
+$$
+are isomorphisms on cohomology sheaves in degrees $< a - b$ (details omitted).
+Thus to prove the map in the statement of the lemma induces an
+isomorphism on cohomology sheaves in degrees $< a - b$ it suffices to
+prove the result for $K'$ in those degrees. Also, as in the proof
of part (1) the question is local on $Y$. Since $K$ is pseudo-coherent,
we may thus assume $K'$ is represented by a strictly perfect complex, see
Section \ref{section-pseudo-coherent}. This reduces us to case (1).
+
+\medskip\noindent
+Proof of (3). The proof is the same as the proof of (2) except one
+uses that $Lh^*$ has bounded cohomological dimension to get the
+desired vanishing. We omit the details.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-stalk-internal-hom}
+Let $X$ be a ringed space. Let $K, M$ be objects of $D(\mathcal{O}_X)$.
+Let $x \in X$. The canonical map
+$$
+R\SheafHom(K, M)_x \longrightarrow
+R\Hom_{\mathcal{O}_{X, x}}(K_x, M_x)
+$$
+is an isomorphism in the following cases
+\begin{enumerate}
+\item $K$ is perfect,
+\item $K$ is pseudo-coherent and $M$ is (locally) bounded below.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $Y = \{x\}$ be the singleton ringed space with structure sheaf
+given by $\mathcal{O}_{X, x}$. Then apply
+Lemma \ref{lemma-pullback-internal-hom}
+to the flat inclusion morphism $Y \to X$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Invertible objects in the derived category}
+\label{section-invertible-D-or-R}
+
+\noindent
We characterize invertible objects in the derived category of
a ringed space, both in the case where the stalks of the
structure sheaf are local rings and in the general case.
+
+\begin{lemma}
+\label{lemma-category-summands-finite-free}
+Let $(X, \mathcal{O}_X)$ be a ringed space.
+Set $R = \Gamma(X, \mathcal{O}_X)$. The category of
+$\mathcal{O}_X$-modules which are summands of finite free
+$\mathcal{O}_X$-modules is equivalent to the category of
+finite projective $R$-modules.
+\end{lemma}
+
+\begin{proof}
+Observe that a finite projective $R$-module is the same thing
+as a summand of a finite free $R$-module.
+The equivalence is given by the functor $\mathcal{E} \mapsto
+\Gamma(X, \mathcal{E})$. The inverse functor is given by the construction of
+Modules, Lemma \ref{modules-lemma-construct-quasi-coherent-sheaves}.
+\end{proof}
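
\noindent
Concretely, a direct summand $\mathcal{E}$ of $\mathcal{O}_X^{\oplus n}$
corresponds to an idempotent
$$
e = e^2 \in
\Gamma(X, \SheafHom_{\mathcal{O}_X}(
\mathcal{O}_X^{\oplus n}, \mathcal{O}_X^{\oplus n}))
= \text{Mat}(n \times n, R)
$$
with $\mathcal{E} = \Im(e)$. Since $\Gamma(X, -)$ is left exact and
$\mathcal{E} = \Ker(1 - e)$ we find
$\Gamma(X, \mathcal{E}) = e(R^{\oplus n})$, which is the corresponding
finite projective $R$-module.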
+
+\begin{lemma}
+\label{lemma-invertible-derived}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $M$ be an object
+of $D(\mathcal{O}_X)$. The following are equivalent
+\begin{enumerate}
+\item $M$ is invertible in $D(\mathcal{O}_X)$, see
+Categories, Definition \ref{categories-definition-invertible}, and
+\item there is a locally finite direct product decomposition
+$$
+\mathcal{O}_X = \prod\nolimits_{n \in \mathbf{Z}} \mathcal{O}_n
+$$
+and for each $n$ there is an invertible $\mathcal{O}_n$-module
+$\mathcal{H}^n$ (Modules, Definition \ref{modules-definition-invertible})
+and $M = \bigoplus \mathcal{H}^n[-n]$ in $D(\mathcal{O}_X)$.
+\end{enumerate}
If (1) and (2) hold, then $M$ is a perfect object of $D(\mathcal{O}_X)$. If
$\mathcal{O}_{X, x}$ is a local ring for all $x \in X$, these conditions
are also equivalent to
+\begin{enumerate}
+\item[(3)] there exists an open covering $X = \bigcup U_i$
+and for each $i$ an integer $n_i$ such that $M|_{U_i}$
+is represented by an invertible $\mathcal{O}_{U_i}$-module
+placed in degree $n_i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2). Consider the object $R\SheafHom(M, \mathcal{O}_X)$
+and the composition map
+$$
+R\SheafHom(M, \mathcal{O}_X) \otimes_{\mathcal{O}_X}^\mathbf{L} M \to
+\mathcal{O}_X
+$$
+To prove this is an isomorphism, we may work locally. Thus we may
+assume $\mathcal{O}_X = \prod_{a \leq n \leq b} \mathcal{O}_n$
+and $M = \bigoplus_{a \leq n \leq b} \mathcal{H}^n[-n]$.
+Then it suffices to show that
+$$
+R\SheafHom(\mathcal{H}^m, \mathcal{O}_X)
+\otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{H}^n
+$$
+is zero if $n \not = m$ and equal to $\mathcal{O}_n$ if $n = m$.
+The case $n \not = m$ follows from the fact that $\mathcal{O}_n$ and
+$\mathcal{O}_m$ are flat $\mathcal{O}_X$-algebras with
+$\mathcal{O}_n \otimes_{\mathcal{O}_X} \mathcal{O}_m = 0$.
+Using the local structure of invertible $\mathcal{O}_X$-modules
+(Modules, Lemma \ref{modules-lemma-invertible}) and working locally
+the isomorphism in case $n = m$ follows in a straightforward manner;
+we omit the details. Because $D(\mathcal{O}_X)$ is symmetric monoidal,
+we conclude that $M$ is invertible.
+
+\medskip\noindent
+Assume (1). The description in (2) shows that we have a candidate
+for $\mathcal{O}_n$, namely,
+$\SheafHom_{\mathcal{O}_X}(H^n(M), H^n(M))$.
+If this is a locally finite family of sheaves of rings
+and if $\mathcal{O}_X = \prod \mathcal{O}_n$, then we immediately
+obtain the direct sum decomposition $M = \bigoplus H^n(M)[-n]$
+using the idempotents in $\mathcal{O}_X$ coming from the product
+decomposition.
+This shows that in order to prove (2) we may work locally on $X$.
+
+\medskip\noindent
+Choose an object $N$ of $D(\mathcal{O}_X)$
+and an isomorphism
+$M \otimes_{\mathcal{O}_X}^\mathbf{L} N \cong \mathcal{O}_X$.
+Let $x \in X$.
+Then $N$ is a left dual for $M$ in the monoidal category
+$D(\mathcal{O}_X)$ and we conclude that $M$ is perfect by
+Lemma \ref{lemma-left-dual-derived}. By symmetry we see that
+$N$ is perfect. After replacing $X$ by an open neighbourhood of $x$,
we may assume $M$ and $N$ are represented by strictly perfect
complexes $\mathcal{E}^\bullet$ and $\mathcal{F}^\bullet$.
+Then $M \otimes_{\mathcal{O}_X}^\mathbf{L} N$ is represented by
+$\text{Tot}(\mathcal{E}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F}^\bullet)$.
After another shrinking of $X$ we may assume the mutually inverse
+isomorphisms
+$\mathcal{O}_X \to M \otimes_{\mathcal{O}_X}^\mathbf{L} N$ and
+$M \otimes_{\mathcal{O}_X}^\mathbf{L} N \to \mathcal{O}_X$
+are given by maps of complexes
+$$
+\alpha : \mathcal{O}_X \to
+\text{Tot}(\mathcal{E}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F}^\bullet)
+\quad\text{and}\quad
+\beta :
+\text{Tot}(\mathcal{E}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F}^\bullet)
+\to \mathcal{O}_X
+$$
+See Lemma \ref{lemma-local-actual}. Then $\beta \circ \alpha = 1$
+as maps of complexes and $\alpha \circ \beta = 1$ as a morphism
+in $D(\mathcal{O}_X)$. After shrinking $X$
+we may assume the composition $\alpha \circ \beta$ is homotopic to $1$
+by some homotopy $\theta$ with components
+$$
+\theta^n :
+\text{Tot}^n(\mathcal{E}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F}^\bullet)
+\to
+\text{Tot}^{n - 1}(
+\mathcal{E}^\bullet \otimes_{\mathcal{O}_X} \mathcal{F}^\bullet)
+$$
+by the same lemma as before. Set $R = \Gamma(X, \mathcal{O}_X)$. By
+Lemma \ref{lemma-category-summands-finite-free}
+we find that we obtain
+\begin{enumerate}
+\item $M^\bullet = \Gamma(X, \mathcal{E}^\bullet)$ is a bounded complex
+of finite projective $R$-modules,
+\item $N^\bullet = \Gamma(X, \mathcal{F}^\bullet)$ is a bounded complex
+of finite projective $R$-modules,
+\item $\alpha$ and $\beta$ correspond to maps of complexes
+$a : R \to \text{Tot}(M^\bullet \otimes_R N^\bullet)$ and
+$b : \text{Tot}(M^\bullet \otimes_R N^\bullet) \to R$,
+\item $\theta^n$ corresponds to a map
+$h^n : \text{Tot}^n(M^\bullet \otimes_R N^\bullet) \to
+\text{Tot}^{n - 1}(M^\bullet \otimes_R N^\bullet)$, and
\item $b \circ a = 1$ and $a \circ b - 1 = dh + hd$.
\end{enumerate}
+It follows that $M^\bullet$ and $N^\bullet$ define
+mutually inverse objects of $D(R)$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-invertible-derived}
+we find a product decomposition $R = \prod_{a \leq n \leq b} R_n$
+and invertible $R_n$-modules $H^n$ such
+that $M^\bullet \cong \bigoplus_{a \leq n \leq b} H^n[-n]$.
This isomorphism in $D(R)$ can be lifted to a morphism
+$$
+\bigoplus H^n[-n] \longrightarrow M^\bullet
+$$
+of complexes because each $H^n$ is projective as an $R$-module.
Correspondingly, using Lemma \ref{lemma-category-summands-finite-free} again,
we obtain a morphism
+$$
+\bigoplus H^n \otimes_R \mathcal{O}_X[-n] \to \mathcal{E}^\bullet
+$$
+which is an isomorphism in $D(\mathcal{O}_X)$. Setting
+$\mathcal{O}_n = R_n \otimes_R \mathcal{O}_X$ we conclude (2) is true.
+
+\medskip\noindent
+If all stalks of $\mathcal{O}_X$ are local, then it is straightforward
+to prove the equivalence of (2) and (3). We omit the details.
+\end{proof}
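
\noindent
For example, if $X = U_0 \amalg U_1$ is a disjoint union of two open
and closed subspaces with inclusions $j_n : U_n \to X$, then
$\mathcal{O}_X = \mathcal{O}_0 \times \mathcal{O}_1$ with
$\mathcal{O}_n = j_{n, *}\mathcal{O}_{U_n}$. Given invertible
$\mathcal{O}_{U_n}$-modules $\mathcal{L}_n$, viewed as
$\mathcal{O}_n$-modules via this decomposition, the object
$$
M = \mathcal{L}_0[0] \oplus \mathcal{L}_1[-1]
$$
is invertible in $D(\mathcal{O}_X)$ by
Lemma \ref{lemma-invertible-derived}, even though it is not
concentrated in a single degree.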
+
+
+
+
+
+
+\section{Compact objects}
+\label{section-compact}
+
+\noindent
+In this section we study compact objects in the derived category of modules
+on a ringed space. We recall that compact objects are defined in
+Derived Categories, Definition \ref{derived-definition-compact-object}.
+On suitable ringed spaces the perfect objects are compact.
+
+\begin{lemma}
+\label{lemma-when-jshriek-compact}
+Let $X$ be a ringed space. Let $j : U \to X$ be the
+inclusion of an open. The $\mathcal{O}_X$-module $j_!\mathcal{O}_U$ is a
+compact object of $D(\mathcal{O}_X)$ if there exists an integer $d$ such that
+\begin{enumerate}
\item $H^p(U, \mathcal{F}) = 0$ for every $\mathcal{O}_X$-module
$\mathcal{F}$ and all $p > d$, and
+\item the functors $\mathcal{F} \mapsto H^p(U, \mathcal{F})$
+commute with direct sums.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1) and (2). Since
+$\Hom(j_!\mathcal{O}_U, \mathcal{F}) = \mathcal{F}(U)$
+by Sheaves, Lemma \ref{sheaves-lemma-j-shriek-modules}
+we have $\Hom(j_!\mathcal{O}_U, K) = R\Gamma(U, K)$ for
+$K$ in $D(\mathcal{O}_X)$. Thus we have to show that $R\Gamma(U, -)$
+commutes with direct sums. The first assumption means that the functor
+$F = H^0(U, -)$ has finite cohomological dimension. Moreover, the second
+assumption implies any direct sum of injective modules is acyclic for $F$.
+Let $K_i$ be a family of objects of $D(\mathcal{O}_X)$.
+Choose K-injective representatives $I_i^\bullet$ with injective terms
+representing $K_i$, see Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}.
+Since we may compute $RF$ by applying $F$ to any complex of acyclics
+(Derived Categories, Lemma \ref{derived-lemma-unbounded-right-derived})
+and since $\bigoplus K_i$ is represented by $\bigoplus I_i^\bullet$
+(Injectives, Lemma \ref{injectives-lemma-derived-products})
+we conclude that $R\Gamma(U, \bigoplus K_i)$ is represented by
+$\bigoplus H^0(U, I_i^\bullet)$. Hence $R\Gamma(U, -)$ commutes
+with direct sums as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-is-compact}
+Let $X$ be a ringed space. Assume that the underlying topological space
+of $X$ has the following properties:
+\begin{enumerate}
+\item $X$ is quasi-compact,
+\item there exists a basis of quasi-compact open subsets, and
+\item the intersection of any two quasi-compact opens is quasi-compact.
+\end{enumerate}
+Let $K$ be a perfect object of $D(\mathcal{O}_X)$. Then
+\begin{enumerate}
+\item[(a)] $K$ is a compact object of $D^+(\mathcal{O}_X)$
+in the following sense: if $M = \bigoplus_{i \in I} M_i$ is
+bounded below, then $\Hom(K, M) = \bigoplus_{i \in I} \Hom(K, M_i)$.
+\item[(b)] If $X$ has finite cohomological dimension, i.e., if there exists
+a $d$ such that $H^i(X, \mathcal{F}) = 0$ for $i > d$, then
+$K$ is a compact object of $D(\mathcal{O}_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $K^\vee$ be the dual of $K$, see
+Lemma \ref{lemma-dual-perfect-complex}. Then we have
+$$
+\Hom_{D(\mathcal{O}_X)}(K, M) =
+H^0(X, K^\vee \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+$$
+functorially in $M$ in $D(\mathcal{O}_X)$.
+Since $K^\vee \otimes_{\mathcal{O}_X}^\mathbf{L} -$ commutes with
+direct sums it suffices
+to show that $R\Gamma(X, -)$ commutes with the relevant direct sums.
+
+\medskip\noindent
+Proof of (b). Since $R\Gamma(X, K) = R\Hom(\mathcal{O}_X, K)$
+and since $H^p(X, -)$ commutes with direct sums by
+Lemma \ref{lemma-quasi-separated-cohomology-colimit}
+this is a special case of
+Lemma \ref{lemma-when-jshriek-compact}
+
+\medskip\noindent
+Proof of (a). Let $\mathcal{I}_i$, $i \in I$ be a collection of injective
+$\mathcal{O}_X$-modules. By Lemma \ref{lemma-quasi-separated-cohomology-colimit}
+we see that
+$$
+H^p(X, \bigoplus\nolimits_{i \in I} \mathcal{I}_i) =
+\bigoplus\nolimits_{i \in I} H^p(X, \mathcal{I}_i) = 0
+$$
+for all $p$. Now if $M = \bigoplus M_i$ is as in (a), then we
+see that there exists an $a \in \mathbf{Z}$ such that $H^n(M_i) = 0$
+for $n < a$. Thus we can choose complexes of injective $\mathcal{O}_X$-modules
+$\mathcal{I}_i^\bullet$ representing $M_i$
+with $\mathcal{I}_i^n = 0$ for $n < a$, see
+Derived Categories, Lemma \ref{derived-lemma-injective-resolutions-exist}.
+By Injectives, Lemma \ref{injectives-lemma-derived-products}
+we see that the direct sum complex $\bigoplus \mathcal{I}_i^\bullet$
+represents $M$. By Leray acyclicity
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity})
+we see that
+$$
R\Gamma(X, M) = \Gamma(X, \bigoplus \mathcal{I}_i^\bullet) =
\bigoplus \Gamma(X, \mathcal{I}_i^\bullet) =
\bigoplus R\Gamma(X, M_i)
+$$
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Projection formula}
+\label{section-projection-formula}
+
+\noindent
+In this section we collect variants of the projection formula.
+The most basic version is Lemma \ref{lemma-projection-formula}.
+After we state and prove it, we discuss a more general version
+involving perfect complexes.
+
+\begin{lemma}
+\label{lemma-injective-tensor-finite-locally-free}
+Let $X$ be a ringed space.
+Let $\mathcal{I}$ be an injective $\mathcal{O}_X$-module.
+Let $\mathcal{E}$ be an $\mathcal{O}_X$-module.
+Assume $\mathcal{E}$ is finite locally free on $X$, see
+Modules, Definition \ref{modules-definition-locally-free}.
+Then $\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{I}$ is
+an injective $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+This is true because under the assumptions of the lemma we have
+$$
+\Hom_{\mathcal{O}_X}(\mathcal{F},
+\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{I})
+=
+\Hom_{\mathcal{O}_X}(
+\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{E}^\vee, \mathcal{I})
+$$
+where
+$\mathcal{E}^\vee = \SheafHom_{\mathcal{O}_X}(\mathcal{E}, \mathcal{O}_X)$
+is the dual of $\mathcal{E}$ which is finite locally free also. Since tensoring
+with a finite locally free sheaf is an exact functor we win by
+Homology, Lemma \ref{homology-lemma-characterize-injectives}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projection-formula}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Let $\mathcal{E}$ be an $\mathcal{O}_Y$-module.
+Assume $\mathcal{E}$ is finite locally free on $Y$, see
+Modules, Definition \ref{modules-definition-locally-free}.
+Then there exist isomorphisms
+$$
+\mathcal{E} \otimes_{\mathcal{O}_Y} R^qf_*\mathcal{F}
+\longrightarrow
+R^qf_*(f^*\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{F})
+$$
+for all $q \geq 0$. In fact there exists an isomorphism
+$$
+\mathcal{E} \otimes_{\mathcal{O}_Y} Rf_*\mathcal{F}
+\longrightarrow
+Rf_*(f^*\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{F})
+$$
+in $D^{+}(Y)$ functorial in $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$
+on $X$. Note that $f^*\mathcal{E}$ is finite locally free also, hence
+we get a resolution
+$$
+f^*\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{F}
+\longrightarrow
+f^*\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{I}^\bullet
+$$
+which is an injective resolution by
+Lemma \ref{lemma-injective-tensor-finite-locally-free}.
+Apply $f_*$ to see that
+$$
+Rf_*(f^*\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{F})
+=
+f_*(f^*\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{I}^\bullet).
+$$
+Hence the lemma follows if we can show that
+$f_*(f^*\mathcal{E} \otimes_{\mathcal{O}_X} \mathcal{F}) =
+\mathcal{E} \otimes_{\mathcal{O}_Y} f_*(\mathcal{F})$ functorially
+in the $\mathcal{O}_X$-module $\mathcal{F}$. This is clear when
+$\mathcal{E} = \mathcal{O}_Y^{\oplus n}$, and follows in general
+by working locally on $Y$. Details omitted.
+\end{proof}
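
\noindent
In particular, taking $\mathcal{E} = \mathcal{L}$ an invertible
$\mathcal{O}_Y$-module (which is finite locally free of rank $1$)
Lemma \ref{lemma-projection-formula} recovers the classical
projection formula
$$
\mathcal{L} \otimes_{\mathcal{O}_Y} R^qf_*\mathcal{F}
\cong
R^qf_*(f^*\mathcal{L} \otimes_{\mathcal{O}_X} \mathcal{F})
$$
for all $q \geq 0$.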
+
+\noindent
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $E \in D(\mathcal{O}_X)$ and $K \in D(\mathcal{O}_Y)$.
+Without any further assumptions there is a map
+\begin{equation}
+\label{equation-projection-formula-map}
+Rf_*E \otimes^\mathbf{L}_{\mathcal{O}_Y} K
+\longrightarrow
+Rf_*(E \otimes^\mathbf{L}_{\mathcal{O}_X} Lf^*K)
+\end{equation}
+Namely, it is the adjoint to the canonical map
+$$
+Lf^*(Rf_*E \otimes^\mathbf{L}_{\mathcal{O}_Y} K) =
+Lf^*Rf_*E \otimes^\mathbf{L}_{\mathcal{O}_X} Lf^*K
+\longrightarrow
+E \otimes^\mathbf{L}_{\mathcal{O}_X} Lf^*K
+$$
+coming from the map $Lf^*Rf_*E \to E$ and Lemmas
+\ref{lemma-pullback-tensor-product} and \ref{lemma-adjoint}.
+A reasonably general version of the projection formula is the following.
+
+\begin{lemma}
+\label{lemma-projection-formula-perfect}
+Let $f : X \to Y$ be a morphism of ringed spaces.
+Let $E \in D(\mathcal{O}_X)$ and $K \in D(\mathcal{O}_Y)$.
+If $K$ is perfect, then
+$$
+Rf_*E \otimes^\mathbf{L}_{\mathcal{O}_Y} K =
+Rf_*(E \otimes^\mathbf{L}_{\mathcal{O}_X} Lf^*K)
+$$
+in $D(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+To check (\ref{equation-projection-formula-map}) is an isomorphism
we may work locally on $Y$, i.e., it suffices to find an open covering
$Y = \bigcup V_j$ such that the map restricts to an isomorphism on each
$V_j$. By definition
+of perfect objects, this means we may assume $K$ is represented by
+a strictly perfect complex of $\mathcal{O}_Y$-modules.
Note that, completely generally, the statement is true for
$K = K_1 \oplus K_2$ if and only if it is true for
$K_1$ and $K_2$. Hence we may assume $K$ is a finite
+complex of finite free $\mathcal{O}_Y$-modules.
+In this case a simple argument involving stupid truncations reduces
+the statement to the case where $K$ is represented by a finite
+free $\mathcal{O}_Y$-module. Since the statement is invariant
+under finite direct summands in the $K$ variable, we conclude
+it suffices to prove it for $K = \mathcal{O}_Y[n]$
+in which case it is trivial.
+\end{proof}
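
\noindent
The reduction by stupid truncations in the proof of
Lemma \ref{lemma-projection-formula-perfect} can be made explicit:
if $\mathcal{K}^\bullet$ is a finite complex of finite free
$\mathcal{O}_Y$-modules, then the termwise split short exact sequences
of complexes
$$
0 \to \sigma_{\geq n + 1}\mathcal{K}^\bullet \to \mathcal{K}^\bullet
\to \sigma_{\leq n}\mathcal{K}^\bullet \to 0
$$
give distinguished triangles. Both sides of
(\ref{equation-projection-formula-map}) are exact functors of $K$,
hence if the map is an isomorphism for two of the three entries of
such a triangle, it is an isomorphism for the third. Induction on the
number of nonzero terms of $\mathcal{K}^\bullet$ then reduces the
statement to the case of a finite free module placed in a single degree.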
+
+\noindent
+Here is a case where the projection formula is true in complete
+generality.
+
+\begin{lemma}
+\label{lemma-projection-formula-closed-immersion}
+Let $f : X \to Y$ be a morphism of ringed spaces such that $f$ is a
+homeomorphism onto a closed subset. Then
+(\ref{equation-projection-formula-map}) is an isomorphism always.
+\end{lemma}
+
+\begin{proof}
+Since $f$ is a homeomorphism onto a closed subset, the functor $f_*$
+is exact (Modules, Lemma \ref{modules-lemma-i-star-exact}). Hence
+$Rf_*$ is computed by applying $f_*$ to any representative complex.
+Choose a K-flat complex $\mathcal{K}^\bullet$ of $\mathcal{O}_Y$-modules
+representing $K$ and choose any complex $\mathcal{E}^\bullet$
+of $\mathcal{O}_X$-modules representing $E$. Then
+$Lf^*K$ is represented by $f^*\mathcal{K}^\bullet$ which is
+a K-flat complex of $\mathcal{O}_X$-modules
+(Lemma \ref{lemma-pullback-K-flat}). Thus the right hand side of
+(\ref{equation-projection-formula-map}) is represented by
+$$
+f_*\text{Tot}(\mathcal{E}^\bullet
+\otimes_{\mathcal{O}_X} f^*\mathcal{K}^\bullet)
+$$
+By the same reasoning we see that the left hand side is represented by
+$$
+\text{Tot}(f_*\mathcal{E}^\bullet \otimes_{\mathcal{O}_Y} \mathcal{K}^\bullet)
+$$
+Since $f_*$ commutes with direct sums
+(Modules, Lemma \ref{modules-lemma-i-star-right-adjoint})
+it suffices to show that
+$$
+f_*(\mathcal{E} \otimes_{\mathcal{O}_X} f^*\mathcal{K}) =
+f_*\mathcal{E} \otimes_{\mathcal{O}_Y} \mathcal{K}
+$$
+for any $\mathcal{O}_X$-module $\mathcal{E}$ and $\mathcal{O}_Y$-module
+$\mathcal{K}$. We will check this by checking on stalks.
+Let $y \in Y$. If $y \not \in f(X)$, then the stalks
+of both sides are zero. If $y = f(x)$, then we see that we have to show
+$$
+\mathcal{E}_x \otimes_{\mathcal{O}_{X, x}}
(\mathcal{O}_{X, x} \otimes_{\mathcal{O}_{Y, y}} \mathcal{K}_y) =
\mathcal{E}_x \otimes_{\mathcal{O}_{Y, y}} \mathcal{K}_y
+$$
+(using Sheaves, Lemma \ref{sheaves-lemma-stalks-closed-pushforward}
+and Lemma \ref{sheaves-lemma-stalk-pullback-modules}).
+This equality holds and therefore the lemma has been proved.
+\end{proof}
+
+\begin{remark}
+\label{remark-compatible-with-diagram}
+The map (\ref{equation-projection-formula-map}) is compatible with the
+base change map of Remark \ref{remark-base-change} in the following sense.
+Namely, suppose that
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} &
+X \ar[d]^f \\
+Y' \ar[r]^g &
+Y
+}
+$$
+is a commutative diagram of ringed spaces.
+Let $E \in D(\mathcal{O}_X)$ and $K \in D(\mathcal{O}_Y)$.
+Then the diagram
+$$
+\xymatrix{
+Lg^*(Rf_*E \otimes^\mathbf{L}_{\mathcal{O}_Y} K) \ar[r]_p \ar[d]_t &
+Lg^*Rf_*(E \otimes^\mathbf{L}_{\mathcal{O}_X} Lf^*K) \ar[d]_b \\
+Lg^*Rf_*E \otimes^\mathbf{L}_{\mathcal{O}_{Y'}} Lg^*K \ar[d]_b &
+Rf'_*L(g')^*(E \otimes^\mathbf{L}_{\mathcal{O}_X} Lf^*K) \ar[d]_t \\
+Rf'_*L(g')^*E \otimes^\mathbf{L}_{\mathcal{O}_{Y'}} Lg^*K \ar[rd]_p &
+Rf'_*(L(g')^*E \otimes^\mathbf{L}_{\mathcal{O}_{Y'}} L(g')^*Lf^*K) \ar[d]_c \\
+& Rf'_*(L(g')^*E \otimes^\mathbf{L}_{\mathcal{O}_{Y'}} L(f')^*Lg^*K)
+}
+$$
+is commutative. Here arrows labeled $t$ are gotten by an application of
+Lemma \ref{lemma-pullback-tensor-product}, arrows labeled $b$ by an
+application of Remark \ref{remark-base-change}, arrows labeled $p$
+by an application of (\ref{equation-projection-formula-map}), and
+$c$ comes from $L(g')^* \circ Lf^* = L(f')^* \circ Lg^*$.
+We omit the verification.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{An operator introduced by Berthelot and Ogus}
+\label{section-eta}
+
+\noindent
This section continues the discussion started in
+More on Algebra, Section \ref{more-algebra-section-eta}.
+We strongly encourage the reader to read that section first.
+
+\begin{lemma}
+\label{lemma-invertible-ideal-sheaf}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\mathcal{I} \subset \mathcal{O}_X$ be a sheaf of ideals.
+Consider the following two conditions
+\begin{enumerate}
+\item for every $x \in X$ there exists an open neighbourhood
+$U \subset X$ of $x$ and $f \in \mathcal{I}(U)$ such that
+$\mathcal{I}|_U = \mathcal{O}_U \cdot f$ and
+$f : \mathcal{O}_U \to \mathcal{O}_U$ is injective, and
+\item $\mathcal{I}$ is invertible as an $\mathcal{O}_X$-module.
+\end{enumerate}
+Then (1) implies (2) and the converse is true if all stalks
+$\mathcal{O}_{X, x}$ of the structure sheaf are local rings.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Use Modules, Lemma
+\ref{modules-lemma-invertible-is-locally-free-rank-1}.
+\end{proof}
+
+\begin{situation}
+\label{situation-eta}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$\mathcal{I} \subset \mathcal{O}_X$ be a sheaf of ideals
+satisfying condition (1) of
+Lemma \ref{lemma-invertible-ideal-sheaf}\footnote{The discussion
+in this section can be generalized to the case where all we
+require is that $\mathcal{I}$ is an invertible $\mathcal{O}_X$-module
+as defined in Modules, Section \ref{modules-section-invertible}.}.
+\end{situation}
+
+\begin{lemma}
+\label{lemma-I-torsion-free}
+In Situation \ref{situation-eta}
+let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item the subsheaf $\mathcal{F}[\mathcal{I}] \subset \mathcal{F}$
+of sections annihilated by $\mathcal{I}$ is zero,
+\item the subsheaf $\mathcal{F}[\mathcal{I}^n]$ is zero for all $n \geq 1$,
+\item the multiplication map
+$\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \to \mathcal{F}$
+is injective,
+\item for every open $U \subset X$ such that
+$\mathcal{I}|_U = \mathcal{O}_U \cdot f$
+for some $f \in \mathcal{I}(U)$
+the map $f : \mathcal{F}|_U \to \mathcal{F}|_U$ is injective,
+\item for every $x \in X$ and generator $f$ of the ideal
+$\mathcal{I}_x \subset \mathcal{O}_{X, x}$ the element $f$
+is a nonzerodivisor on the stalk $\mathcal{F}_x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
+In Situation \ref{situation-eta}
+let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+If the equivalent conditions of Lemma \ref{lemma-I-torsion-free} hold,
+then we will say that $\mathcal{F}$ is {\it $\mathcal{I}$-torsion free}.
+If so, then for any $i \in \mathbf{Z}$ we will denote
+$$
+\mathcal{I}^i\mathcal{F} =
+\mathcal{I}^{\otimes i} \otimes_{\mathcal{O}_X} \mathcal{F}
+$$
+so that we have inclusions
+$$
+\ldots \subset
+\mathcal{I}^{i + 1}\mathcal{F} \subset
+\mathcal{I}^i\mathcal{F} \subset
+\mathcal{I}^{i - 1}\mathcal{F} \subset \ldots
+$$
The modules $\mathcal{I}^i\mathcal{F}$ are locally isomorphic to
$\mathcal{F}$ as $\mathcal{O}_X$-modules, but in general not globally.
+
+\medskip\noindent
+Let $\mathcal{F}^\bullet$ be a complex of $\mathcal{I}$-torsion free
+$\mathcal{O}_X$-modules with
+differentials $d^i : \mathcal{F}^i \to \mathcal{F}^{i + 1}$.
+In this case we define $\eta_\mathcal{I}\mathcal{F}^\bullet$
+to be the complex with terms
+\begin{align*}
+(\eta_\mathcal{I}\mathcal{F})^i
+& =
+\Ker\left(
+d^i, -1 :
+\mathcal{I}^i\mathcal{F}^i \oplus \mathcal{I}^{i + 1}\mathcal{F}^{i + 1}
+\to
+\mathcal{I}^i\mathcal{F}^{i + 1}
+\right) \\
+& =
+\Ker\left(d^i :
+\mathcal{I}^i\mathcal{F}^i
+\to
+\mathcal{I}^i\mathcal{F}^{i + 1}/
+\mathcal{I}^{i + 1}\mathcal{F}^{i + 1}
+\right)
+\end{align*}
+and differential induced by $d^i$. In other words, a local section
+$s$ of $(\eta_\mathcal{I}\mathcal{F})^i$ is the same thing as a local section
+$s$ of $\mathcal{I}^i\mathcal{F}^i$ such that its image $d^i(s)$
+in $\mathcal{I}^i\mathcal{F}^{i + 1}$ is in the subsheaf
+$\mathcal{I}^{i + 1}\mathcal{F}^{i + 1}$.
+Observe that $\eta_\mathcal{I}\mathcal{F}^\bullet$
+is another complex of $\mathcal{I}$-torsion free modules.
+
+\medskip\noindent
+Let $a^\bullet : \mathcal{F}^\bullet \to \mathcal{G}^\bullet$ be a map of
+complexes of $\mathcal{I}$-torsion free $\mathcal{O}_X$-modules.
+Then we obtain a map of complexes
+$$
+\eta_\mathcal{I} a^\bullet :
+\eta_\mathcal{I}\mathcal{F}^\bullet
+\longrightarrow
+\eta_\mathcal{I}\mathcal{G}^\bullet
+$$
+induced by the maps
+$\mathcal{I}^i\mathcal{F}^i \to \mathcal{I}^i\mathcal{G}^i$.
+The reader checks that we obtain
+an endo-functor on the category of complexes of
+$\mathcal{I}$-torsion free $\mathcal{O}_X$-modules.
+
+\medskip\noindent
+If $a^\bullet, b^\bullet : \mathcal{F}^\bullet \to \mathcal{G}^\bullet$
+are two maps of
+complexes of $\mathcal{I}$-torsion free $\mathcal{O}_X$-modules
+and $h = \{h^i : \mathcal{F}^i \to \mathcal{G}^{i - 1}\}$ is a homotopy
+between $a^\bullet$ and $b^\bullet$, then we define
+$\eta_\mathcal{I}h$ to be the family of maps
+$(\eta_\mathcal{I}h)^i : (\eta_\mathcal{I}\mathcal{F})^i \to
+(\eta_\mathcal{I}\mathcal{G})^{i - 1}$
which sends $x$ to $h^i(x)$; this makes sense because if
$x$ is a local section of $\mathcal{I}^i\mathcal{F}^i$,
then $h^i(x)$ is a local section of $\mathcal{I}^i\mathcal{G}^{i - 1}$,
which is certainly contained in $(\eta_\mathcal{I}\mathcal{G})^{i - 1}$.
+The reader checks that $\eta_\mathcal{I}h$ is a homotopy
+between $\eta_\mathcal{I}a^\bullet$ and $\eta_\mathcal{I}b^\bullet$.
+All in all we see that we obtain a functor
+$$
\eta_\mathcal{I} :
+K(\mathcal{I}\text{-torsion free }\mathcal{O}_X\text{-modules})
+\longrightarrow
+K(\mathcal{I}\text{-torsion free }\mathcal{O}_X\text{-modules})
+$$
+on the homotopy category
+(Derived Categories, Section \ref{derived-section-homotopy})
+of the additive category of $\mathcal{I}$-torsion free
+$\mathcal{O}_X$-modules.
+There is no sense in which $\eta_\mathcal{I}$ is an exact functor of
+triangulated categories; compare with More on Algebra,
+Example \ref{more-algebra-example-eta-not-distinguished}.
+
+\begin{lemma}
+\label{lemma-eta-stalks}
+In Situation \ref{situation-eta}
+let $\mathcal{F}^\bullet$ be a complex of $\mathcal{I}$-torsion free
+$\mathcal{O}_X$-modules.
+For $x \in X$ choose a generator $f \in \mathcal{I}_x$. Then
+the stalk $(\eta_\mathcal{I}\mathcal{F}^\bullet)_x$ is canonically
+isomorphic to the complex $\eta_f\mathcal{F}^\bullet_x$ constructed
+in More on Algebra, Section \ref{more-algebra-section-eta}.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-eta-first-property}
+In Situation \ref{situation-eta}
+let $\mathcal{F}^\bullet$ be a complex of $\mathcal{I}$-torsion free
+$\mathcal{O}_X$-modules. There is a canonical isomorphism
+$$
+\mathcal{I}^{\otimes i} \otimes_{\mathcal{O}_X}
+\left(
+H^i(\mathcal{F}^\bullet)/H^i(\mathcal{F}^\bullet)[\mathcal{I}]
+\right)
+\longrightarrow H^i(\eta_\mathcal{I}\mathcal{F}^\bullet)
+$$
+of cohomology sheaves.
+\end{lemma}
+
+\begin{proof}
+We define a map
+$$
+\mathcal{I}^{\otimes i} \otimes_{\mathcal{O}_X} H^i(\mathcal{F}^\bullet)
+\longrightarrow
+H^i(\eta_\mathcal{I}\mathcal{F}^\bullet)
+$$
+as follows. Let $g$ be a local section of $\mathcal{I}^{\otimes i}$
+and let $\overline{s}$ be a local section of $H^i(\mathcal{F}^\bullet)$.
+Then $\overline{s}$ is (locally) the class of a local section $s$ of
+$\Ker(d^i : \mathcal{F}^i \to \mathcal{F}^{i + 1})$.
+Then we send $g \otimes \overline{s}$ to the local section
$gs$ of $(\eta_\mathcal{I}\mathcal{F})^i \subset \mathcal{I}^i\mathcal{F}^i$.
+Of course $gs$ is in the kernel of $d^i$ on
+$\eta_\mathcal{I}\mathcal{F}^\bullet$ and hence defines a local
+section of $H^i(\eta_\mathcal{I}\mathcal{F}^\bullet)$.
Checking that this is well defined is straightforward.
We claim that this map factors through an isomorphism
as given in the lemma. This we may check on stalks and hence
+via Lemma \ref{lemma-eta-stalks} this translates into the result of
+More on Algebra, Lemma \ref{more-algebra-lemma-eta-first-property}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-eta-qis}
+In Situation \ref{situation-eta}
let $\mathcal{F}^\bullet \to \mathcal{G}^\bullet$ be a quasi-isomorphism
of complexes of $\mathcal{I}$-torsion free $\mathcal{O}_X$-modules.
Then the induced map
$\eta_\mathcal{I}\mathcal{F}^\bullet \to \eta_\mathcal{I}\mathcal{G}^\bullet$
is a quasi-isomorphism too.
+\end{lemma}
+
+\begin{proof}
+This is true because the isomorphisms of Lemma \ref{lemma-eta-first-property}
+are compatible with maps of complexes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Leta}
+In Situation \ref{situation-eta} there is an additive
+functor\footnote{Beware that this functor isn't exact, i.e.,
+does not transform distinguished triangles into distinguished triangles.}
+$L\eta_\mathcal{I} : D(\mathcal{O}_X) \to D(\mathcal{O}_X)$
+such that if $M$ in $D(\mathcal{O}_X)$ is represented by a complex
+$\mathcal{F}^\bullet$ of $\mathcal{I}$-torsion free $\mathcal{O}_X$-modules,
+then $L\eta_\mathcal{I}M = \eta_\mathcal{I}\mathcal{F}^\bullet$.
+Similarly for morphisms.
+\end{lemma}
+
+\begin{proof}
+Denote $\mathcal{T} \subset \textit{Mod}(\mathcal{O}_X)$ the full subcategory
+of $\mathcal{I}$-torsion free $\mathcal{O}_X$-modules.
+We have a corresponding inclusion
+$$
+K(\mathcal{T})
+\quad\subset\quad
+K(\textit{Mod}(\mathcal{O}_X)) = K(\mathcal{O}_X)
+$$
+of $K(\mathcal{T})$ as a full triangulated subcategory of $K(\mathcal{O}_X)$.
+Let $S \subset \text{Arrows}(K(\mathcal{T}))$ be the quasi-isomorphisms.
+We will apply
+Derived Categories, Lemma \ref{derived-lemma-localization-subcategory}
+to show that the map
+$$
+S^{-1}K(\mathcal{T}) \longrightarrow D(\mathcal{O}_X)
+$$
+is an equivalence of triangulated categories. The lemma shows that
+it suffices to prove: given a complex $\mathcal{G}^\bullet$ of
+$\mathcal{O}_X$-modules,
+there exists a quasi-isomorphism $\mathcal{F}^\bullet \to \mathcal{G}^\bullet$
+with $\mathcal{F}^\bullet$
+a complex of $\mathcal{I}$-torsion free $\mathcal{O}_X$-modules.
+By Lemma \ref{lemma-K-flat-resolution} we can find a quasi-isomorphism
+$\mathcal{F}^\bullet \to \mathcal{G}^\bullet$ such that the complex
+$\mathcal{F}^\bullet$ is K-flat (we won't use this) and
+consists of flat $\mathcal{O}_X$-modules $\mathcal{F}^i$.
+By the third characterization of Lemma \ref{lemma-I-torsion-free}
+we see that a flat $\mathcal{O}_X$-module is an
+$\mathcal{I}$-torsion free $\mathcal{O}_X$-module
+and we are done.
+
+\medskip\noindent
With these preliminaries out of the way we can define $L\eta_\mathcal{I}$.
Namely, by the discussion following Lemma \ref{lemma-I-torsion-free}
earlier in this section we already have a well defined functor
+$$
K(\mathcal{T}) \xrightarrow{\eta_\mathcal{I}} K(\mathcal{T}) \to
+K(\mathcal{O}_X) \to D(\mathcal{O}_X)
+$$
+which according to Lemma \ref{lemma-eta-qis} sends quasi-isomorphisms
to quasi-isomorphisms. Hence this functor factors through
$S^{-1}K(\mathcal{T}) = D(\mathcal{O}_X)$ by
Categories, Lemma \ref{categories-lemma-properties-left-localization}.
+\end{proof}
+
+\noindent
+In Situation \ref{situation-eta} let us construct the Bockstein operators.
+First we observe that there is a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{I}^{i + 1} \ar[r] \ar[d] &
+\mathcal{I}^i \ar[r] \ar[d] &
+\mathcal{I}^i/\mathcal{I}^{i + 1}
+\ar[r] \ar@{=}[d] &
+0 \\
+0 \ar[r] &
+\mathcal{I}^{i + 1}/\mathcal{I}^{i + 2} \ar[r] &
+\mathcal{I}^i/\mathcal{I}^{i + 2} \ar[r] &
+\mathcal{I}^i/\mathcal{I}^{i + 1} \ar[r] &
+0
+}
+$$
+whose rows are short exact sequences of $\mathcal{O}_X$-modules.
+Let $M$ be an object of $D(\mathcal{O}_X)$.
+Tensoring the above diagram with $M$ gives a morphism
+$$
+\xymatrix{
+M \otimes^\mathbf{L} \mathcal{I}^{i + 1} \ar[r] \ar[d] &
+M \otimes^\mathbf{L} \mathcal{I}^i \ar[r] \ar[d] &
+M \otimes^\mathbf{L} \mathcal{I}^i/\mathcal{I}^{i + 1} \ar[d]^{\text{id}} \\
+M \otimes^\mathbf{L} \mathcal{I}^{i + 1}/\mathcal{I}^{i + 2} \ar[r] &
+M \otimes^\mathbf{L} \mathcal{I}^i/\mathcal{I}^{i + 2} \ar[r] &
+M \otimes^\mathbf{L} \mathcal{I}^i/\mathcal{I}^{i + 1}
+}
+$$
+of distinguished triangles. The long exact sequence of
cohomology sheaves associated to
the bottom triangle in particular determines the
+{\it Bockstein operator}
+$$
+\beta = \beta^i :
+H^i(M \otimes^\mathbf{L} \mathcal{I}^i/\mathcal{I}^{i + 1})
+\longrightarrow
+H^{i + 1}(M \otimes^\mathbf{L} \mathcal{I}^{i + 1}/\mathcal{I}^{i + 2})
+$$
+for all $i \in \mathbf{Z}$. For later use we record here that by
+the commutative diagram above there is a factorization
+\begin{equation}
+\label{equation-factorization-bockstein}
+\vcenter{
+\xymatrix{
+H^i(M \otimes^\mathbf{L} \mathcal{I}^i/\mathcal{I}^{i + 1})
+\ar[r]_\delta \ar[rd]_\beta &
+H^{i + 1}(M \otimes^\mathbf{L} \mathcal{I}^{i + 1}) \ar[d] \\
+&
+H^{i + 1}(M \otimes^\mathbf{L} \mathcal{I}^{i + 1}/\mathcal{I}^{i + 2})
+}
+}
+\end{equation}
+of the Bockstein operator where $\delta$ is the boundary operator
+coming from the top distinguished triangle in the commutative diagram above.
+We obtain a complex
+\begin{equation}
+\label{equation-complex-bocksteins}
+H^\bullet(M/\mathcal{I}) =
+\left[
+\begin{matrix}
+\ldots \\
+\downarrow \\
+H^{i - 1}(M \otimes^\mathbf{L} \mathcal{I}^{i - 1}/\mathcal{I}^i) \\
+\downarrow \beta \\
+H^i(M \otimes^\mathbf{L} \mathcal{I}^i/\mathcal{I}^{i + 1}) \\
+\downarrow \beta \\
+H^{i + 1}(M \otimes^\mathbf{L} \mathcal{I}^{i + 1}/\mathcal{I}^{i + 2}) \\
+\downarrow \\
+\ldots
+\end{matrix}
+\right]
+\end{equation}
+i.e., that $\beta \circ \beta = 0$. Namely, we can check this on stalks
+and in this case we can deduce it from the corresponding result in algebra
+shown in More on Algebra, Section \ref{more-algebra-section-eta}.
+Alternative proof: the short exact sequences
+$0 \to \mathcal{I}^{i + 1}/\mathcal{I}^{i + 2}
+\to \mathcal{I}^i/\mathcal{I}^{i + 2}
+\to \mathcal{I}^i/\mathcal{I}^{i + 1} \to 0$
+define maps
+$b^i : \mathcal{I}^i/\mathcal{I}^{i + 1} \to
+(\mathcal{I}^{i + 1}/\mathcal{I}^{i + 2})[1]$
+in $D(\mathcal{O}_X)$
+which induce the maps $\beta$ above by tensoring with $M$
+and taking cohomology sheaves.
+Then one shows that the composition
+$b^{i + 1}[1] \circ b^i : \mathcal{I}^i/\mathcal{I}^{i + 1} \to
+(\mathcal{I}^{i + 1}/\mathcal{I}^{i + 2})[1] \to
+(\mathcal{I}^{i + 2}/\mathcal{I}^{i + 3})[2]$
+is zero in $D(\mathcal{O}_X)$ by using the criterion in
+Derived Categories, Lemma \ref{derived-lemma-cup-ext-1-zero}
+using that the module $\mathcal{I}^i/\mathcal{I}^{i + 3}$
+is an extension of $\mathcal{I}^{i + 1}/\mathcal{I}^{i + 3}$
+by $\mathcal{I}^i/\mathcal{I}^{i + 1}$.
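
\medskip\noindent
Let us compute a Bockstein operator in a simple case. Take
$X = \Spec(\mathbf{Z}_p)$, $\mathcal{I} = p\mathcal{O}_X$, and let $M$
be the complex $\mathcal{O}_X \xrightarrow{p} \mathcal{O}_X$ in degrees
$0$ and $1$. As the terms of $M$ are free, on global sections
$M \otimes^\mathbf{L} \mathcal{O}_X/\mathcal{I}$ is the complex
$\mathbf{Z}/p\mathbf{Z} \xrightarrow{0} \mathbf{Z}/p\mathbf{Z}$.
The class of $1$ in $H^0(M \otimes^\mathbf{L} \mathcal{O}_X/\mathcal{I})$
lifts to the section $1$ in degree $0$ of
$M \otimes^\mathbf{L} \mathcal{O}_X/\mathcal{I}^2$, whose image
under $d^0$ is $p$. Hence
$$
\beta(1) = p \not= 0
\quad\text{in}\quad
H^1(M \otimes^\mathbf{L} \mathcal{I}/\mathcal{I}^2) = (p)/(p^2)
$$
so that $\beta = \beta^0$ is an isomorphism and the complex
$H^\bullet(M/\mathcal{I})$ is acyclic in this case.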
+
+\begin{lemma}
+\label{lemma-eta-second-property}
+In Situation \ref{situation-eta} let $M$ be an object of
+$D(\mathcal{O}_X)$. There is a canonical isomorphism
+$$
+L\eta_\mathcal{I}M \otimes^\mathbf{L} \mathcal{O}_X/\mathcal{I}
+\longrightarrow
+H^\bullet(M/\mathcal{I})
+$$
+in $D(\mathcal{O}_X)$ where the right hand side is the complex
+(\ref{equation-complex-bocksteins}).
+\end{lemma}
+
+\begin{proof}
By the construction of $L\eta_\mathcal{I}$ in Lemma \ref{lemma-Leta}
+we may assume $M$ is represented by a complex of $\mathcal{I}$-torsion
+free $\mathcal{O}_X$-modules $\mathcal{F}^\bullet$. Then
+$L\eta_\mathcal{I}M$ is represented by the complex
+$\eta_\mathcal{I}\mathcal{F}^\bullet$ which is a
+complex of $\mathcal{I}$-torsion free $\mathcal{O}_X$-modules as well.
+Thus $L\eta_\mathcal{I}M \otimes^\mathbf{L} \mathcal{O}_X/\mathcal{I}$
+is represented by the complex
+$\eta_\mathcal{I}\mathcal{F}^\bullet \otimes \mathcal{O}_X/\mathcal{I}$.
+Similarly, the complex $H^\bullet(M/\mathcal{I})$ has terms
+$H^i(\mathcal{F}^\bullet \otimes \mathcal{I}^i/\mathcal{I}^{i + 1})$.
+
+\medskip\noindent
+Let $f$ be a local generator for $\mathcal{I}$.
+Let $s$ be a local section of $(\eta_\mathcal{I}\mathcal{F})^i$.
+Then we can write $s = f^is'$ for a local section $s'$ of
+$\mathcal{F}^i$ and similarly $d^i(s) = f^{i + 1}t$ for a local
+section $t$ of $\mathcal{F}^{i + 1}$. Thus $d^i$ maps $f^is'$
+to zero in $\mathcal{F}^{i + 1} \otimes \mathcal{I}^i/\mathcal{I}^{i + 1}$.
+Hence we may map $s$ to the class of $f^is'$ in
+$H^i(\mathcal{F}^\bullet \otimes \mathcal{I}^i/\mathcal{I}^{i + 1})$.
+This rule defines a map
+$$
+(\eta_\mathcal{I}\mathcal{F})^i \otimes \mathcal{O}_X/\mathcal{I}
+\longrightarrow
+H^i(\mathcal{F}^\bullet \otimes \mathcal{I}^i/\mathcal{I}^{i + 1})
+$$
+of $\mathcal{O}_X$-modules. A calculation shows that these maps
+are compatible with differentials (essentially because $\beta$
+sends the class of $f^is'$ to the class of $f^{i + 1}t$), whence
+a map of complexes representing the arrow in the statement of the lemma.
+
+\medskip\noindent
+To finish the proof, we observe that the construction given
+in the previous paragraph agrees on stalks with the maps
+constructed in More on Algebra, Lemma
+\ref{more-algebra-lemma-eta-second-property}
and hence we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-eta-tensor-invertible}
+In Situation \ref{situation-eta}
+let $\mathcal{F}^\bullet$ be a complex of
+$\mathcal{I}$-torsion free $\mathcal{O}_X$-modules.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Then $\eta_\mathcal{I}(\mathcal{F}^\bullet \otimes \mathcal{L}) =
+(\eta_\mathcal{I}\mathcal{F}^\bullet) \otimes \mathcal{L}$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the construction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-eta-cohomology-locally-free}
+In Situation \ref{situation-eta} let $M$ be an object of $D(\mathcal{O}_X)$.
+Let $x \in X$ with $\mathcal{O}_{X, x}$ nonzero. If $H^i(M)_x$
+is finite free over $\mathcal{O}_{X, x}$, then $H^i(L\eta_\mathcal{I}M)_x$
+is finite free over $\mathcal{O}_{X, x}$ of the same rank.
+\end{lemma}
+
+\begin{proof}
+Namely, say $f \in \mathcal{O}_{X, x}$ generates the stalk $\mathcal{I}_x$.
+Then $f$ is a nonzerodivisor in $\mathcal{O}_{X, x}$ and hence
+$H^i(M)_x[f] = 0$. Thus by
+Lemma \ref{lemma-eta-first-property}
+we see that $H^i(L\eta_\mathcal{I}M)_x$
+is isomorphic to
+$\mathcal{I}^i_x \otimes_{\mathcal{O}_{X, x}} H^i(M)_x$
+which is free of the same rank as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/constructions.tex b/books/stacks/constructions.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d73c65d075d46b3475c9f8570cabc6d6e6921143
--- /dev/null
+++ b/books/stacks/constructions.tex
@@ -0,0 +1,5157 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Constructions of Schemes}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we introduce ways of constructing schemes out of others.
+A basic reference is \cite{EGA}.
+
+
+
+
+
+\section{Relative glueing}
+\label{section-relative-glueing}
+
+\noindent
+The following lemma is relevant in case we are trying to construct a
+scheme $X$ over $S$, and we already know how to construct the restriction
+of $X$ to the affine opens of $S$. The actual result is completely general
+and works in the setting of (locally) ringed spaces, although our proof
+is written in the language of schemes.
+
+\begin{lemma}
+\label{lemma-relative-glueing}
+Let $S$ be a scheme.
+Let $\mathcal{B}$ be a basis for the topology of $S$.
+Suppose given the following data:
+\begin{enumerate}
+\item For every $U \in \mathcal{B}$ a scheme $f_U : X_U \to U$ over $U$.
+\item For $U, V \in \mathcal{B}$ with $V \subset U$ a morphism
+$\rho^U_V : X_V \to X_U$ over $U$.
+\end{enumerate}
+Assume that
+\begin{enumerate}
+\item[(a)] each $\rho^U_V$ induces an isomorphism
+$X_V \to f_U^{-1}(V)$ of schemes over $V$,
+\item[(b)] whenever $W, V, U \in \mathcal{B}$, with
$W \subset V \subset U$ we have $\rho^U_W = \rho^U_V \circ \rho^V_W$.
+\end{enumerate}
+Then there exists a morphism $f : X \to S$ of schemes
+and isomorphisms $i_U : f^{-1}(U) \to X_U$ over $U \in \mathcal{B}$
+such that for $V, U \in \mathcal{B}$ with $V \subset U$ the composition
+$$
+\xymatrix{
+X_V \ar[r]^{i_V^{-1}} &
+f^{-1}(V) \ar[rr]^{inclusion} & &
+f^{-1}(U) \ar[r]^{i_U} &
+X_U
+}
+$$
+is the morphism $\rho^U_V$. Moreover $X$ is unique up to
+unique isomorphism over $S$.
+\end{lemma}
+
+\begin{proof}
+To prove this we will use Schemes, Lemma \ref{schemes-lemma-glue-functors}.
+First we define a contravariant functor $F$ from the category of schemes
+to the category of sets. Namely, for a scheme $T$ we set
+$$
+F(T) =
+\left\{
+\begin{matrix}
+(g, \{h_U\}_{U \in \mathcal{B}}),
+\ g : T \to S, \ h_U : g^{-1}(U) \to X_U, \\
+f_U \circ h_U = g|_{g^{-1}(U)},
+\ h_U|_{g^{-1}(V)} = \rho^U_V \circ h_V
+\ \forall\ V, U \in \mathcal{B}, V \subset U
+\end{matrix}
+\right\}.
+$$
The restriction mapping $F(T) \to F(T')$ corresponding to a morphism
$T' \to T$ is given by composition.
+For any $W \in \mathcal{B}$ we consider the subfunctor
+$F_W \subset F$ consisting of those systems $(g, \{h_U\})$
+such that $g(T) \subset W$.
+
+\medskip\noindent
+First we show $F$ satisfies the sheaf property for the Zariski topology.
+Suppose that $T$ is a scheme, $T = \bigcup V_i$ is an open covering,
+and $\xi_i \in F(V_i)$ is an element such that
+$\xi_i|_{V_i \cap V_j} = \xi_j|_{V_i \cap V_j}$.
+Say $\xi_i = (g_i, \{h_{i, U}\})$. Then we immediately see that
+the morphisms $g_i$ glue to a unique global morphism
+$g : T \to S$. Moreover, it is clear that
+$g^{-1}(U) = \bigcup g_i^{-1}(U)$. Hence the morphisms
+$h_{i, U} : g_i^{-1}(U) \to X_U$ glue to a unique morphism
+$h_U : g^{-1}(U) \to X_U$. It is easy to verify that the system
+$(g, \{h_U\})$ is an element of $F(T)$. Hence $F$ satisfies the
+sheaf property for the Zariski topology.
+
+\medskip\noindent
+Next we verify that each $F_W$, $W \in \mathcal{B}$ is representable.
+Namely, we claim that the transformation of functors
+$$
+F_W \longrightarrow \Mor(-, X_W), \ (g, \{h_U\}) \longmapsto h_W
+$$
+is an isomorphism. To see this suppose that $T$ is a scheme and
+$\alpha : T \to X_W$ is a morphism. Set $g = f_W \circ \alpha$.
+For any $U \in \mathcal{B}$ such that $U \subset W$ we can
define $h_U : g^{-1}(U) \to X_U$ to be the composition
$(\rho^W_U)^{-1} \circ \alpha|_{g^{-1}(U)}$. This makes sense because
the image $\alpha(g^{-1}(U))$ is contained in $f_W^{-1}(U)$ and
because $\rho^W_U$ identifies $X_U$ with $f_W^{-1}(U)$ by
condition (a) of the lemma. It is clear that
+$f_U \circ h_U = g|_{g^{-1}(U)}$ for such a $U$.
+Moreover, if also $V \in \mathcal{B}$ and $V \subset U \subset W$,
+then $\rho^U_V \circ h_V = h_U|_{g^{-1}(V)}$ by property (b)
+of the lemma. We still have to define $h_U$ for an arbitrary
+element $U \in \mathcal{B}$. Since $\mathcal{B}$ is a basis for
+the topology on $S$ we can find an open covering
+$U \cap W = \bigcup U_i$ with $U_i \in \mathcal{B}$. Since $g$ maps into $W$
+we have
+$g^{-1}(U) = g^{-1}(U \cap W) = \bigcup g^{-1}(U_i)$.
+Consider the morphisms
+$h_i = \rho^U_{U_i} \circ h_{U_i} : g^{-1}(U_i) \to X_U$.
+It is a simple matter to use condition (b) of the lemma
+to prove that
+$h_i|_{g^{-1}(U_i) \cap g^{-1}(U_j)} = h_j|_{g^{-1}(U_i) \cap g^{-1}(U_j)}$.
+Hence these morphisms glue to give the desired morphism
+$h_U : g^{-1}(U) \to X_U$. We omit the (easy) verification that
+the system $(g, \{h_U\})$ is an element of $F_W(T)$ which
+maps to $\alpha$ under the displayed arrow above.
+
+\medskip\noindent
+Next, we verify each $F_W \subset F$ is representable by open immersions.
+This is clear from the definitions.
+
+\medskip\noindent
+Finally we have to verify
+the collection $(F_W)_{W \in \mathcal{B}}$ covers $F$.
+This is clear by construction and the fact that $\mathcal{B}$ is
+a basis for the topology of $S$.
+
+\medskip\noindent
+Let $X$ be a scheme representing the functor $F$.
+Let $(f, \{i_U\}) \in F(X)$ be a ``universal family''.
+Since each $F_W$ is representable by $X_W$ (via the morphism of functors
+displayed above) we see that $i_W : f^{-1}(W) \to X_W$
+is an isomorphism as desired. The lemma is proved.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-glueing-sheaves}
+Let $S$ be a scheme.
+Let $\mathcal{B}$ be a basis for the topology of $S$.
+Suppose given the following data:
+\begin{enumerate}
+\item For every $U \in \mathcal{B}$ a scheme $f_U : X_U \to U$ over $U$.
+\item For every $U \in \mathcal{B}$ a quasi-coherent sheaf $\mathcal{F}_U$
+over $X_U$.
+\item For every pair $U, V \in \mathcal{B}$ such that
+$V \subset U$ a morphism $\rho^U_V : X_V \to X_U$.
+\item For every pair $U, V \in \mathcal{B}$ such that
+$V \subset U$ a morphism
+$\theta^U_V : (\rho^U_V)^*\mathcal{F}_U \to \mathcal{F}_V$.
+\end{enumerate}
+Assume that
+\begin{enumerate}
+\item[(a)] each $\rho^U_V$ induces an isomorphism
+$X_V \to f_U^{-1}(V)$ of schemes over $V$,
+\item[(b)] each $\theta^U_V$ is an isomorphism,
+\item[(c)] whenever $W, V, U \in \mathcal{B}$, with
$W \subset V \subset U$ we have $\rho^U_W = \rho^U_V \circ \rho^V_W$,
+\item[(d)] whenever $W, V, U \in \mathcal{B}$, with
+$W \subset V \subset U$ we have
+$\theta^U_W = \theta^V_W \circ (\rho^V_W)^*\theta^U_V$.
+\end{enumerate}
+Then there exists a morphism of schemes $f : X \to S$
+together with a quasi-coherent sheaf $\mathcal{F}$ on $X$
+and isomorphisms $i_U : f^{-1}(U) \to X_U$ and
+$\theta_U : i_U^*\mathcal{F}_U \to \mathcal{F}|_{f^{-1}(U)}$
+over $U \in \mathcal{B}$ such that
+for $V, U \in \mathcal{B}$ with $V \subset U$ the composition
+$$
+\xymatrix{
+X_V \ar[r]^{i_V^{-1}} &
+f^{-1}(V) \ar[rr]^{inclusion} & &
+f^{-1}(U) \ar[r]^{i_U} &
+X_U
+}
+$$
+is the morphism $\rho^U_V$, and the composition
+\begin{equation}
+\label{equation-glue}
+(\rho^U_V)^*\mathcal{F}_U
+=
+(i_V^{-1})^*((i_U^*\mathcal{F}_U)|_{f^{-1}(V)})
+\xrightarrow{\theta_U|_{f^{-1}(V)}}
+(i_V^{-1})^*(\mathcal{F}|_{f^{-1}(V)})
+\xrightarrow{\theta_V^{-1}}
+\mathcal{F}_V
+\end{equation}
+is equal to $\theta^U_V$. Moreover $(X, \mathcal{F})$ is unique
+up to unique isomorphism over $S$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-relative-glueing} we get the scheme $X$ over $S$
+and the isomorphisms $i_U$.
+Set $\mathcal{F}'_U = i_U^*\mathcal{F}_U$ for $U \in \mathcal{B}$.
+This is a quasi-coherent $\mathcal{O}_{f^{-1}(U)}$-module.
+The maps
+$$
+\mathcal{F}'_U|_{f^{-1}(V)} =
+i_U^*\mathcal{F}_U|_{f^{-1}(V)} =
+i_V^*(\rho^U_V)^*\mathcal{F}_U \xrightarrow{i_V^*\theta^U_V}
+i_V^*\mathcal{F}_V = \mathcal{F}'_V
+$$
+define isomorphisms
+$(\theta')^U_V : \mathcal{F}'_U|_{f^{-1}(V)} \to \mathcal{F}'_V$
+whenever $V \subset U$ are elements of $\mathcal{B}$.
+Condition (d) says exactly that this is compatible in case
+we have a triple of elements $W \subset V \subset U$ of $\mathcal{B}$.
+This allows us to get well defined isomorphisms
+$$
+\varphi_{12} :
+\mathcal{F}'_{U_1}|_{f^{-1}(U_1 \cap U_2)}
+\longrightarrow
+\mathcal{F}'_{U_2}|_{f^{-1}(U_1 \cap U_2)}
+$$
+whenever $U_1, U_2 \in \mathcal{B}$ by covering the intersection
+$U_1 \cap U_2 = \bigcup V_j$ by elements $V_j$ of $\mathcal{B}$
+and taking
+$$
+\varphi_{12}|_{V_j} =
+\left((\theta')^{U_2}_{V_j}\right)^{-1}
+\circ
+(\theta')^{U_1}_{V_j}.
+$$
+We omit the verification that these maps do indeed glue to a
+$\varphi_{12}$ and we omit the verification of the
+cocycle condition of a glueing datum for sheaves
+(as in Sheaves, Section \ref{sheaves-section-glueing-sheaves}).
+By Sheaves, Lemma \ref{sheaves-lemma-glue-sheaves}
+we get our $\mathcal{F}$ on $X$. We omit the verification
+of (\ref{equation-glue}).
+\end{proof}
+
+\begin{remark}
+\label{remark-relative-glueing-functorial}
+There is a functoriality property for the constructions explained
+in Lemmas \ref{lemma-relative-glueing} and
+\ref{lemma-relative-glueing-sheaves}. Namely, suppose given
+two collections of data $(f_U : X_U \to U, \rho^U_V)$ and
+$(g_U : Y_U \to U, \sigma^U_V)$ as in Lemma \ref{lemma-relative-glueing}.
+Suppose for every $U \in \mathcal{B}$ given
+a morphism $h_U : X_U \to Y_U$ over $U$ compatible with
+the restrictions $\rho^U_V$ and $\sigma^U_V$. Functoriality
+means that this gives rise to a morphism of schemes
+$h : X \to Y$ over $S$ restricting back to the morphisms $h_U$,
+where $f : X \to S$ is obtained from
+the datum $(f_U : X_U \to U, \rho^U_V)$ and $g : Y \to S$
+is obtained from the datum $(g_U : Y_U \to U, \sigma^U_V)$.
+
+\medskip\noindent
+Similarly, suppose given
+two collections of data
+$(f_U : X_U \to U, \mathcal{F}_U, \rho^U_V, \theta^U_V)$ and
+$(g_U : Y_U \to U, \mathcal{G}_U, \sigma^U_V, \eta^U_V)$
+as in Lemma \ref{lemma-relative-glueing-sheaves}.
+Suppose for every $U \in \mathcal{B}$ given
+a morphism $h_U : X_U \to Y_U$ over $U$ compatible with
+the restrictions $\rho^U_V$ and $\sigma^U_V$, and a morphism
+$\tau_U : h_U^*\mathcal{G}_U \to \mathcal{F}_U$ compatible with
+the maps $\theta^U_V$ and $\eta^U_V$. Functoriality
+means that these give rise to a morphism of schemes
+$h : X \to Y$ over $S$ restricting back to the morphisms $h_U$,
+and a morphism $h^*\mathcal{G} \to \mathcal{F}$ restricting back
to the maps $\tau_U$,
+where $(f : X \to S, \mathcal{F})$ is obtained from the datum
+$(f_U : X_U \to U, \mathcal{F}_U, \rho^U_V, \theta^U_V)$ and
+where $(g : Y \to S, \mathcal{G})$ is obtained from the datum
+$(g_U : Y_U \to U, \mathcal{G}_U, \sigma^U_V, \eta^U_V)$.
+
+\medskip\noindent
+We omit the verifications and we omit a suitable formulation of
+``equivalence of categories'' between relative glueing data
+and relative objects.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Relative spectrum via glueing}
+\label{section-spec-via-glueing}
+
+\begin{situation}
+\label{situation-relative-spec}
+Here $S$ is a scheme, and $\mathcal{A}$ is a quasi-coherent
+$\mathcal{O}_S$-algebra. This means that $\mathcal{A}$ is a
+sheaf of $\mathcal{O}_S$-algebras which is quasi-coherent as an
+$\mathcal{O}_S$-module.
+\end{situation}
+
+\noindent
+In this section we outline how to construct a morphism
+of schemes
+$$
+\underline{\Spec}_S(\mathcal{A}) \longrightarrow S
+$$
+by glueing the spectra $\Spec(\Gamma(U, \mathcal{A}))$
+where $U$ ranges over the affine opens of $S$. We first show that the
+spectra of the values of $\mathcal{A}$ over affines form a
+suitable collection of schemes, as in Lemma \ref{lemma-relative-glueing}.
+
+\begin{lemma}
+\label{lemma-spec-inclusion}
+In Situation \ref{situation-relative-spec}.
+Suppose $U \subset U' \subset S$ are affine opens.
+Let $A = \mathcal{A}(U)$ and $A' = \mathcal{A}(U')$.
+The map of rings $A' \to A$ induces a morphism
+$\Spec(A) \to \Spec(A')$, and the diagram
+$$
+\xymatrix{
+\Spec(A) \ar[r] \ar[d] &
+\Spec(A') \ar[d] \\
+U \ar[r] &
+U'
+}
+$$
+is cartesian.
+\end{lemma}
+
+\begin{proof}
+Let $R = \mathcal{O}_S(U)$ and $R' = \mathcal{O}_S(U')$.
+Note that the map $R \otimes_{R'} A' \to A$ is an isomorphism as
+$\mathcal{A}$ is quasi-coherent
+(see Schemes, Lemma \ref{schemes-lemma-widetilde-pullback} for example).
+The result follows from the description of the fibre product of
+affine schemes in
+Schemes, Lemma \ref{schemes-lemma-fibre-product-affine-schemes}.
+\end{proof}
+
+\noindent
+In particular the morphism $\Spec(A) \to \Spec(A')$
+of the lemma is an open immersion.
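
\medskip\noindent
For example, if $\mathcal{A} = \mathcal{O}_S[t]$ is the polynomial
algebra, then $A = \mathcal{O}_S(U)[t]$ and $A' = \mathcal{O}_S(U')[t]$,
and the cartesian diagram of the lemma becomes
$$
\xymatrix{
\mathbf{A}^1_U \ar[r] \ar[d] &
\mathbf{A}^1_{U'} \ar[d] \\
U \ar[r] &
U'
}
$$
expressing that the affine line over $U$ is the inverse image of $U$
in the affine line over $U'$.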
+
+\begin{lemma}
+\label{lemma-transitive-spec}
+In Situation \ref{situation-relative-spec}.
+Suppose $U \subset U' \subset U'' \subset S$ are affine opens.
+Let $A = \mathcal{A}(U)$, $A' = \mathcal{A}(U')$ and $A'' = \mathcal{A}(U'')$.
+The composition of the morphisms
+$\Spec(A) \to \Spec(A')$, and
+$\Spec(A') \to \Spec(A'')$ of
+Lemma \ref{lemma-spec-inclusion} gives the
+morphism $\Spec(A) \to \Spec(A'')$
+of Lemma \ref{lemma-spec-inclusion}.
+\end{lemma}
+
+\begin{proof}
+This follows as the map $A'' \to A$ is the composition of $A'' \to A'$ and
+$A' \to A$ (because $\mathcal{A}$ is a sheaf).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-relative-spec}
+In Situation \ref{situation-relative-spec}.
+There exists a morphism of schemes
+$$
+\pi : \underline{\Spec}_S(\mathcal{A}) \longrightarrow S
+$$
+with the following properties:
+\begin{enumerate}
+\item for every affine open $U \subset S$ there exists an isomorphism
+$i_U : \pi^{-1}(U) \to \Spec(\mathcal{A}(U))$, and
+\item for $U \subset U' \subset S$ affine open the composition
+$$
+\xymatrix{
+\Spec(\mathcal{A}(U)) \ar[r]^{i_U^{-1}} &
+\pi^{-1}(U) \ar[rr]^{inclusion} & &
+\pi^{-1}(U') \ar[r]^{i_{U'}} &
+\Spec(\mathcal{A}(U'))
+}
+$$
+is the open immersion of Lemma \ref{lemma-spec-inclusion} above.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+Lemmas \ref{lemma-relative-glueing},
+\ref{lemma-spec-inclusion}, and
+\ref{lemma-transitive-spec}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Relative spectrum as a functor}
+\label{section-spec}
+
+\noindent
+We place ourselves in Situation \ref{situation-relative-spec}, i.e.,
+$S$ is a scheme and $\mathcal{A}$ is a quasi-coherent sheaf of
+$\mathcal{O}_S$-algebras.
+
+\medskip\noindent
+For any $f : T \to S$ the pullback
+$f^*\mathcal{A}$ is a quasi-coherent sheaf of $\mathcal{O}_T$-algebras.
+We are going to consider pairs $(f : T \to S, \varphi)$ where
+$f$ is a morphism of schemes and $\varphi : f^*\mathcal{A} \to \mathcal{O}_T$
+is a morphism of $\mathcal{O}_T$-algebras. Note that this is the
+same as giving a $f^{-1}\mathcal{O}_S$-algebra homomorphism
+$\varphi : f^{-1}\mathcal{A} \to \mathcal{O}_T$, see
+Sheaves, Lemma \ref{sheaves-lemma-adjointness-tensor-restrict}.
+This is also the same as giving an $\mathcal{O}_S$-algebra map
+$\varphi : \mathcal{A} \to f_*\mathcal{O}_T$, see
+Sheaves, Lemma \ref{sheaves-lemma-adjoint-push-pull-modules}.
+We will use all three ways of thinking about $\varphi$,
+without further mention.
+
+\medskip\noindent
+Given such a
+pair $(f : T \to S, \varphi)$ and a morphism $a : T' \to T$ we get
+a second pair $(f' = f \circ a, \varphi' = a^*\varphi)$ which we
+call the pullback of $(f, \varphi)$. One way to describe
+$\varphi' = a^*\varphi$ is as the composition
+$\mathcal{A} \to f_*\mathcal{O}_T \to f'_*\mathcal{O}_{T'}$
+where the second map is $f_*a^\sharp$ with
+$a^\sharp : \mathcal{O}_T \to a_*\mathcal{O}_{T'}$.
+In this way we have defined a functor
+\begin{eqnarray}
+\label{equation-spec}
+F : \Sch^{opp} & \longrightarrow & \textit{Sets} \\
+T & \longmapsto & F(T) = \{\text{pairs }(f, \varphi) \text{ as above}\}
+\nonumber
+\end{eqnarray}
+
+\begin{lemma}
+\label{lemma-spec-base-change}
+In Situation \ref{situation-relative-spec}.
+Let $F$ be the functor
+associated to $(S, \mathcal{A})$ above.
+Let $g : S' \to S$ be a morphism of schemes.
+Set $\mathcal{A}' = g^*\mathcal{A}$. Let $F'$ be the
+functor associated to $(S', \mathcal{A}')$ above.
+Then there is a canonical isomorphism
+$$
+F' \cong h_{S'} \times_{h_S} F
+$$
+of functors.
+\end{lemma}
+
+\begin{proof}
+A pair $(f' : T \to S', \varphi' : (f')^*\mathcal{A}' \to \mathcal{O}_T)$
+is the same as a pair $(f, \varphi : f^*\mathcal{A} \to \mathcal{O}_T)$
+together with a factorization of $f$ as $f = g \circ f'$. Namely with
+this notation we have
+$(f')^* \mathcal{A}' = (f')^*g^*\mathcal{A} = f^*\mathcal{A}$.
+Hence the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-spec-affine}
+In Situation \ref{situation-relative-spec}.
+Let $F$ be the functor associated to $(S, \mathcal{A})$ above.
+If $S$ is affine, then $F$ is representable by the
+affine scheme $\Spec(\Gamma(S, \mathcal{A}))$.
+\end{lemma}
+
+\begin{proof}
+Write $S = \Spec(R)$ and $A = \Gamma(S, \mathcal{A})$.
+Then $A$ is an $R$-algebra and $\mathcal{A} = \widetilde A$.
+The ring map $R \to A$ gives rise to a canonical map
+$$
+f_{univ} : \Spec(A)
+\longrightarrow
+S = \Spec(R).
+$$
+We have
+$f_{univ}^*\mathcal{A} = \widetilde{A \otimes_R A}$
+by Schemes, Lemma \ref{schemes-lemma-widetilde-pullback}.
+Hence there is a canonical map
+$$
+\varphi_{univ} :
+f_{univ}^*\mathcal{A} = \widetilde{A \otimes_R A}
+\longrightarrow
+\widetilde A = \mathcal{O}_{\Spec(A)}
+$$
+coming from the $A$-module map $A \otimes_R A \to A$,
+$a \otimes a' \mapsto aa'$. We claim that the pair
+$(f_{univ}, \varphi_{univ})$ represents $F$ in this case.
+In other words we claim that for any scheme $T$ the map
+$$
+\Mor(T, \Spec(A)) \longrightarrow \{\text{pairs } (f, \varphi)\},\quad
+a \longmapsto (f_{univ} \circ a, a^*\varphi_{univ})
+$$
+is bijective.
+
+\medskip\noindent
+Let us construct the inverse map.
+For any pair $(f : T \to S, \varphi)$ we get the induced
+ring map
+$$
+\xymatrix{
+A = \Gamma(S, \mathcal{A}) \ar[r]^{f^*} &
+\Gamma(T, f^*\mathcal{A}) \ar[r]^{\varphi} &
+\Gamma(T, \mathcal{O}_T)
+}
+$$
+This induces a morphism of schemes $T \to \Spec(A)$
+by Schemes, Lemma \ref{schemes-lemma-morphism-into-affine}.
+
+\medskip\noindent
+The verification that this map is inverse to the map
+displayed above is omitted.
+\end{proof}
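\noindent
To make the bijection of the proof concrete, consider the following
illustration (not part of the proof above; we simply take $A = R[t]$,
a polynomial algebra in one variable, so that $\Spec(A)$ is the affine
line over $R$). A pair $(f : T \to S, \varphi)$ then amounts to $f$
together with the image $h \in \Gamma(T, \mathcal{O}_T)$ of $t$ under
the composition $A \to \Gamma(T, f^*\mathcal{A}) \to
\Gamma(T, \mathcal{O}_T)$ displayed in the proof, and the bijection of
the lemma reads
$$
\Mor(T, \Spec(R[t]))
\longleftrightarrow
\{(f : T \to \Spec(R),\ h \in \Gamma(T, \mathcal{O}_T))\}.
$$
This point of view returns in Section \ref{section-affine-n-space}.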
+
+\begin{lemma}
+\label{lemma-spec}
+In Situation \ref{situation-relative-spec}.
+The functor $F$ is representable by a scheme.
+\end{lemma}
+
+\begin{proof}
+We are going to use Schemes, Lemma \ref{schemes-lemma-glue-functors}.
+
+\medskip\noindent
+First we check that $F$ satisfies the sheaf property for the
+Zariski topology. Namely, suppose that $T$ is a scheme,
+that $T = \bigcup_{i \in I} U_i$ is an open covering,
+and that $(f_i, \varphi_i) \in F(U_i)$ such that
+$(f_i, \varphi_i)|_{U_i \cap U_j} = (f_j, \varphi_j)|_{U_i \cap U_j}$.
+This implies that the morphisms $f_i : U_i \to S$
+glue to a morphism of schemes $f : T \to S$ such that
f|_{U_i} = f_i, see Schemes, Section \ref{schemes-section-glueing-schemes}.
+Thus $f_i^*\mathcal{A} = f^*\mathcal{A}|_{U_i}$ and by assumption the
+morphisms $\varphi_i$ agree on $U_i \cap U_j$. Hence by Sheaves,
+Section \ref{sheaves-section-glueing-sheaves} these glue to a
+morphism of $\mathcal{O}_T$-algebras $f^*\mathcal{A} \to \mathcal{O}_T$.
+This proves that $F$ satisfies the sheaf condition with respect to
+the Zariski topology.
+
+\medskip\noindent
+Let $S = \bigcup_{i \in I} U_i$ be an affine open covering.
+Let $F_i \subset F$ be the subfunctor consisting of
+those pairs $(f : T \to S, \varphi)$ such that
+$f(T) \subset U_i$.
+
+\medskip\noindent
+We have to show each $F_i$ is representable.
+This is the case because $F_i$ is identified with
+the functor associated to $U_i$ equipped with
+the quasi-coherent $\mathcal{O}_{U_i}$-algebra $\mathcal{A}|_{U_i}$,
+by Lemma \ref{lemma-spec-base-change}.
+Thus the result follows from Lemma \ref{lemma-spec-affine}.
+
+\medskip\noindent
+Next we show that $F_i \subset F$ is representable by open immersions.
+Let $(f : T \to S, \varphi) \in F(T)$. Consider $V_i = f^{-1}(U_i)$.
+It follows from the definition of $F_i$ that given $a : T' \to T$
we have $a^*(f, \varphi) \in F_i(T')$ if and only if $a(T') \subset V_i$.
+This is what we were required to show.
+
+\medskip\noindent
+Finally, we have to show that the collection $(F_i)_{i \in I}$
+covers $F$. Let $(f : T \to S, \varphi) \in F(T)$.
+Consider $V_i = f^{-1}(U_i)$. Since $S = \bigcup_{i \in I} U_i$
+is an open covering of $S$ we see that $T = \bigcup_{i \in I} V_i$
+is an open covering of $T$. Moreover $(f, \varphi)|_{V_i} \in F_i(V_i)$.
+This finishes the proof of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glueing-gives-functor-spec}
+In Situation \ref{situation-relative-spec}.
+The scheme $\pi : \underline{\Spec}_S(\mathcal{A}) \to S$
+constructed in Lemma \ref{lemma-glue-relative-spec}
+and the scheme representing the functor $F$ are
+canonically isomorphic as schemes over $S$.
+\end{lemma}
+
+\begin{proof}
+Let $X \to S$ be the scheme representing the functor $F$.
+Consider the sheaf of $\mathcal{O}_S$-algebras
+$\mathcal{R} = \pi_*\mathcal{O}_{\underline{\Spec}_S(\mathcal{A})}$.
+By construction of $\underline{\Spec}_S(\mathcal{A})$
+we have isomorphisms $\mathcal{A}(U) \to \mathcal{R}(U)$
+for every affine open $U \subset S$; this follows from
+Lemma \ref{lemma-glue-relative-spec} part (1).
+For $U \subset U' \subset S$ open these isomorphisms are
+compatible with the restriction mappings; this follows from
+Lemma \ref{lemma-glue-relative-spec} part (2).
+Hence by Sheaves, Lemma \ref{sheaves-lemma-restrict-basis-equivalence-modules}
+these isomorphisms result from an isomorphism of $\mathcal{O}_S$-algebras
+$\varphi : \mathcal{A} \to \mathcal{R}$. Hence this gives an element
+$(\underline{\Spec}_S(\mathcal{A}), \varphi)
+\in F(\underline{\Spec}_S(\mathcal{A}))$.
+Since $X$ represents the functor $F$ we get a corresponding
+morphism of schemes $can : \underline{\Spec}_S(\mathcal{A}) \to X$
+over $S$.
+
+\medskip\noindent
+Let $U \subset S$ be any affine open. Let $F_U \subset F$ be
+the subfunctor of $F$ corresponding to pairs $(f, \varphi)$ over
+schemes $T$ with $f(T) \subset U$. Clearly the base change
+$X_U$ represents $F_U$. Moreover, $F_U$ is represented by
+$\Spec(\mathcal{A}(U)) = \pi^{-1}(U)$ according to
+Lemma \ref{lemma-spec-affine}. In other words $X_U \cong \pi^{-1}(U)$.
+We omit the verification that this identification is brought about
+by the base change of the morphism $can$ to $U$.
+\end{proof}
+
+\begin{definition}
+\label{definition-relative-spec}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent sheaf of
+$\mathcal{O}_S$-algebras. The {\it relative spectrum of $\mathcal{A}$ over
+$S$}, or simply the {\it spectrum of $\mathcal{A}$ over $S$} is the scheme
+constructed in Lemma \ref{lemma-glue-relative-spec} which represents the
+functor $F$ (\ref{equation-spec}), see
+Lemma \ref{lemma-glueing-gives-functor-spec}.
+We denote it $\pi : \underline{\Spec}_S(\mathcal{A}) \to S$.
+The ``universal family'' is a morphism of $\mathcal{O}_S$-algebras
+$$
+\mathcal{A}
+\longrightarrow
+\pi_*\mathcal{O}_{\underline{\Spec}_S(\mathcal{A})}
+$$
+\end{definition}
+
+\noindent
+The following lemma says among other things that forming the
+relative spectrum commutes with base change.
+
+\begin{lemma}
+\label{lemma-spec-properties}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent
+sheaf of $\mathcal{O}_S$-algebras. Let
+$\pi : \underline{\Spec}_S(\mathcal{A}) \to S$
+be the relative spectrum of $\mathcal{A}$ over $S$.
+\begin{enumerate}
+\item For every affine open $U \subset S$ the inverse image
+$\pi^{-1}(U)$ is affine.
+\item For every morphism $g : S' \to S$ we have
+$S' \times_S \underline{\Spec}_S(\mathcal{A}) =
+\underline{\Spec}_{S'}(g^*\mathcal{A})$.
+\item
+The universal map
+$$
+\mathcal{A}
+\longrightarrow
+\pi_*\mathcal{O}_{\underline{\Spec}_S(\mathcal{A})}
+$$
+is an isomorphism of $\mathcal{O}_S$-algebras.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) comes from the description of the relative spectrum
+by glueing, see Lemma \ref{lemma-glue-relative-spec}.
+Part (2) follows immediately from Lemma \ref{lemma-spec-base-change}.
Part (3) follows because the statement is local on $S$ and it holds
in case $S$ is affine by Lemma \ref{lemma-spec-affine} for example.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-canonical-morphism}
+Let $f : X \to S$ be a quasi-compact and quasi-separated morphism
+of schemes. By Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}
+the sheaf $f_*\mathcal{O}_X$ is a quasi-coherent sheaf of
+$\mathcal{O}_S$-algebras. There is a canonical morphism
+$$
+can : X \longrightarrow \underline{\Spec}_S(f_*\mathcal{O}_X)
+$$
+of schemes over $S$.
+For any affine open $U \subset S$ the restriction $can|_{f^{-1}(U)}$
+is identified with the canonical morphism
+$$
+f^{-1}(U) \longrightarrow \Spec(\Gamma(f^{-1}(U), \mathcal{O}_X))
+$$
+coming from Schemes, Lemma \ref{schemes-lemma-morphism-into-affine}.
+\end{lemma}
+
+\begin{proof}
+The morphism comes, via the definition of $\underline{\Spec}$
+as the scheme representing the functor $F$, from the canonical map
+$\varphi : f^*f_*\mathcal{O}_X \to \mathcal{O}_X$ (which by adjointness of
+push and pull corresponds to
+$\text{id} : f_*\mathcal{O}_X \to f_*\mathcal{O}_X$).
+The statement on the restriction to $f^{-1}(U)$
+follows from the description of the relative spectrum over
+affines, see Lemma \ref{lemma-spec-affine}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Affine n-space}
+\label{section-affine-n-space}
+
+\noindent
+As an application of the relative spectrum
+we define affine $n$-space over a base scheme
+$S$ as follows. For any integer $n \geq 0$ we can consider the
+quasi-coherent sheaf of $\mathcal{O}_S$-algebras
+$\mathcal{O}_S[T_1, \ldots, T_n]$. It is quasi-coherent because
+as a sheaf of $\mathcal{O}_S$-modules it is just the direct sum
+of copies of $\mathcal{O}_S$ indexed by multi-indices.
+
+\begin{definition}
+\label{definition-affine-n-space}
+Let $S$ be a scheme and $n \geq 0$.
+The scheme
+$$
+\mathbf{A}^n_S =
+\underline{\Spec}_S(\mathcal{O}_S[T_1, \ldots, T_n])
+$$
+over $S$ is called {\it affine $n$-space over $S$}.
+If $S = \Spec(R)$ is affine then we also call this
+{\it affine $n$-space over $R$} and we denote it $\mathbf{A}^n_R$.
+\end{definition}
+
+\noindent
+Note that $\mathbf{A}^n_R = \Spec(R[T_1, \ldots, T_n])$.
+For any morphism $g : S' \to S$ of schemes we have
+$g^*\mathcal{O}_S[T_1, \ldots, T_n] = \mathcal{O}_{S'}[T_1, \ldots, T_n]$
+and hence $\mathbf{A}^n_{S'} = S' \times_S \mathbf{A}^n_S$ is the base
+change. Therefore an alternative definition of affine $n$-space
+is the formula
+$$
+\mathbf{A}^n_S = S \times_{\Spec(\mathbf{Z})} \mathbf{A}^n_{\mathbf{Z}}.
+$$
+Also, a morphism from an $S$-scheme $f : X \to S$
+to $\mathbf{A}^n_S$ is given by a homomorphism of
+$\mathcal{O}_S$-algebras
+$\mathcal{O}_S[T_1, \ldots, T_n] \to f_*\mathcal{O}_X$.
+This is clearly the same thing as giving the images of the $T_i$.
+In other words, a morphism from $X$ to $\mathbf{A}^n_S$ over $S$
+is the same as giving $n$ elements
+$h_1, \ldots, h_n \in \Gamma(X, \mathcal{O}_X)$.
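\noindent
As a concrete illustration of the last statement (an added example;
the cuspidal parametrization is chosen purely for illustration), for
$S = \Spec(k)$ the pair of global functions
$h_1 = t^2, h_2 = t^3 \in \Gamma(\mathbf{A}^1_k, \mathcal{O}) = k[t]$
corresponds to the morphism
$$
\mathbf{A}^1_k \longrightarrow \mathbf{A}^2_k,
\quad
t \longmapsto (t^2, t^3),
$$
i.e.\ to the $k$-algebra map $k[T_1, T_2] \to k[t]$ sending
$T_1 \mapsto t^2$ and $T_2 \mapsto t^3$.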
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Vector bundles}
+\label{section-vector-bundle}
+
+\noindent
+Let $S$ be a scheme.
+Let $\mathcal{E}$ be a quasi-coherent sheaf of $\mathcal{O}_S$-modules.
+By Modules, Lemma \ref{modules-lemma-whole-tensor-algebra-permanence}
+the symmetric algebra $\text{Sym}(\mathcal{E})$ of
+$\mathcal{E}$ over $\mathcal{O}_S$
+is a quasi-coherent sheaf of $\mathcal{O}_S$-algebras.
+Hence it makes sense to apply the construction of the
+previous section to it.
+
+\begin{definition}
+\label{definition-vector-bundle}
+Let $S$ be a scheme. Let $\mathcal{E}$ be a quasi-coherent
+$\mathcal{O}_S$-module\footnote{The reader may expect here
+the condition that $\mathcal{E}$ is finite locally free. We do not
+do so in order to be consistent with \cite[II, Definition 1.7.8]{EGA}.}.
+The {\it vector bundle associated to $\mathcal{E}$} is
+$$
+\mathbf{V}(\mathcal{E}) = \underline{\Spec}_S(\text{Sym}(\mathcal{E})).
+$$
+\end{definition}
+
+\noindent
+The vector bundle associated to $\mathcal{E}$ comes with a bit
+of extra structure. Namely, we have a grading
+$$
+\pi_*\mathcal{O}_{\mathbf{V}(\mathcal{E})} =
\bigoplus\nolimits_{n \geq 0} \text{Sym}^n(\mathcal{E})
$$
which turns $\pi_*\mathcal{O}_{\mathbf{V}(\mathcal{E})}$
+into a graded $\mathcal{O}_S$-algebra. Conversely, we can recover
+$\mathcal{E}$ from the degree $1$ part of this.
+Thus we define an abstract vector bundle as follows.
+
+\begin{definition}
+\label{definition-abstract-vector-bundle}
+Let $S$ be a scheme. A {\it vector bundle $\pi : V \to S$ over $S$} is an
+affine morphism of schemes such that $\pi_*\mathcal{O}_V$ is endowed with
+the structure of a graded $\mathcal{O}_S$-algebra
+$\pi_*\mathcal{O}_V = \bigoplus\nolimits_{n \geq 0} \mathcal{E}_n$
+such that $\mathcal{E}_0 = \mathcal{O}_S$ and such that the maps
+$$
+\text{Sym}^n(\mathcal{E}_1) \longrightarrow \mathcal{E}_n
+$$
+are isomorphisms for all $n \geq 0$. A {\it morphism of vector bundles
+over $S$} is a morphism $f : V \to V'$ such that the induced map
+$$
+f^* : \pi'_*\mathcal{O}_{V'} \longrightarrow \pi_*\mathcal{O}_V
+$$
+is compatible with the given gradings.
+\end{definition}
+
+\noindent
+An example of a vector bundle over $S$ is affine $n$-space
+$\mathbf{A}^n_S$ over $S$, see Definition \ref{definition-affine-n-space}.
+This is true because
+$\mathcal{O}_S[T_1, \ldots, T_n] = \text{Sym}(\mathcal{O}_S^{\oplus n})$.
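\noindent
More generally, the following standard consequence of
Lemma \ref{lemma-spec-properties} may serve as an illustration:
if $\mathcal{E}$ is finite locally free of rank $n$ and $U \subset S$
is an open over which $\mathcal{E}$ is trivialized, say
$\mathcal{E}|_U \cong \mathcal{O}_U^{\oplus n}$, then
$$
\mathbf{V}(\mathcal{E})|_U =
\underline{\Spec}_U(\text{Sym}(\mathcal{E}|_U)) \cong
\underline{\Spec}_U(\mathcal{O}_U[T_1, \ldots, T_n]) =
\mathbf{A}^n_U.
$$
Thus $\mathbf{V}(\mathcal{E})$ is Zariski locally on $S$ isomorphic
to affine $n$-space, as the terminology suggests.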
+
+\begin{lemma}
+\label{lemma-category-vector-bundles}
+The category of vector bundles over a scheme $S$ is
+anti-equivalent to the category of quasi-coherent $\mathcal{O}_S$-modules.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: In one direction one uses the functor
+$\underline{\Spec}_S(\text{Sym}^*_{\mathcal{O}_S}(-))$
+and in the other the functor
+$(\pi : V \to S) \leadsto (\pi_*\mathcal{O}_V)_1$ where the subscript
+indicates we take the degree $1$ part.
+\end{proof}
+
+
+
+
+\section{Cones}
+\label{section-cone}
+
+\noindent
+In algebraic geometry cones correspond to graded algebras. By our conventions
+a graded ring or algebra $A$ comes with a grading
+$A = \bigoplus_{d \geq 0} A_d$ by the nonnegative integers, see
+Algebra, Section \ref{algebra-section-graded}.
+
+\begin{definition}
+\label{definition-cone}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent
+graded $\mathcal{O}_S$-algebra. Assume that $\mathcal{O}_S \to \mathcal{A}_0$
+is an isomorphism\footnote{Often one imposes the assumption that
+$\mathcal{A}$ is generated by $\mathcal{A}_1$ over $\mathcal{O}_S$. We do not
+assume this in order to be consistent with \cite[II, (8.3.1)]{EGA}.}.
+The {\it cone associated to $\mathcal{A}$} or the
+{\it affine cone associated to $\mathcal{A}$}
+is
+$$
+C(\mathcal{A}) = \underline{\Spec}_S(\mathcal{A}).
+$$
+\end{definition}
+
+\noindent
+The cone associated to a graded sheaf of $\mathcal{O}_S$-algebras
+comes with a bit of extra structure. Namely, we obtain a grading
+$$
+\pi_*\mathcal{O}_{C(\mathcal{A})} =
+\bigoplus\nolimits_{n \geq 0} \mathcal{A}_n
+$$
+Thus we can define an abstract cone as follows.
+
+\begin{definition}
+\label{definition-abstract-cone}
+Let $S$ be a scheme. A {\it cone $\pi : C \to S$ over $S$} is an
+affine morphism of schemes such that $\pi_*\mathcal{O}_C$ is endowed with
+the structure of a graded $\mathcal{O}_S$-algebra
+$\pi_*\mathcal{O}_C = \bigoplus\nolimits_{n \geq 0} \mathcal{A}_n$
+such that $\mathcal{A}_0 = \mathcal{O}_S$. A {\it morphism of cones}
+from $\pi : C \to S$ to $\pi' : C' \to S$
+is a morphism $f : C \to C'$ such that the induced map
+$$
+f^* : \pi'_*\mathcal{O}_{C'} \longrightarrow \pi_*\mathcal{O}_C
+$$
+is compatible with the given gradings.
+\end{definition}
+
+\noindent
+Any vector bundle is an example of a cone. In fact the category of
+vector bundles over $S$ is a full subcategory of the category of cones
+over $S$.
+
+
+
+
+
+
+
+
+
+
+\section{Proj of a graded ring}
+\label{section-proj}
+
+\noindent
+In this section we construct Proj of a graded ring
+following \cite[II, Section 2]{EGA}.
+
+\medskip\noindent
+Let $S$ be a graded ring. Consider the topological space $\text{Proj}(S)$
+associated to $S$, see Algebra, Section \ref{algebra-section-proj}.
+We will endow this space with a sheaf of rings $\mathcal{O}_{\text{Proj}(S)}$
+such that the resulting pair $(\text{Proj}(S), \mathcal{O}_{\text{Proj}(S)})$
+will be a scheme.
+
+\medskip\noindent
+Recall that $\text{Proj}(S)$ has a basis of open sets $D_{+}(f)$,
+$f \in S_d$, $d \geq 1$ which we call {\it standard opens}, see Algebra,
+Section \ref{algebra-section-proj}. This terminology will always
+imply that $f$ is homogeneous of positive degree even if we forget to
+mention it. In addition, the intersection of two standard opens is another:
+$D_{+}(f) \cap D_{+}(g) = D_{+}(fg)$, for $f, g \in S$ homogeneous of positive
+degree.
+
+\begin{lemma}
+\label{lemma-standard-open}
Let $S$ be a graded ring. Let $f \in S$ be homogeneous of positive degree.
+\begin{enumerate}
\item If $g \in S$ is homogeneous of positive degree
+and $D_{+}(g) \subset D_{+}(f)$, then
+\begin{enumerate}
+\item $f$ is invertible in $S_g$, and
+$f^{\deg(g)}/g^{\deg(f)}$ is invertible in $S_{(g)}$,
+\item $g^e = af$ for some $e \geq 1$ and $a \in S$ homogeneous,
+\item there is a canonical $S$-algebra map $S_f \to S_g$,
+\item there is a canonical $S_0$-algebra map $S_{(f)} \to S_{(g)}$
+compatible with the map $S_f \to S_g$,
+\item the map $S_{(f)} \to S_{(g)}$ induces an isomorphism
+$$
+(S_{(f)})_{g^{\deg(f)}/f^{\deg(g)}} \cong S_{(g)},
+$$
+\item these maps induce a commutative diagram of
+topological spaces
+$$
+\xymatrix{
+D_{+}(g) \ar[d] &
+\{\mathbf{Z}\text{-graded primes of }S_g\} \ar[l] \ar[r] \ar[d] &
+\Spec(S_{(g)}) \ar[d] \\
+D_{+}(f) &
+\{\mathbf{Z}\text{-graded primes of }S_f\} \ar[l] \ar[r] &
+\Spec(S_{(f)})
+}
+$$
+where the horizontal maps are homeomorphisms and the vertical maps
+are open immersions,
+\item there are compatible canonical $S_f$ and $S_{(f)}$-module
+maps $M_f \to M_g$ and $M_{(f)} \to M_{(g)}$ for any graded $S$-module $M$,
+and
+\item the map $M_{(f)} \to M_{(g)}$ induces an isomorphism
+$$
+(M_{(f)})_{g^{\deg(f)}/f^{\deg(g)}} \cong M_{(g)}.
+$$
+\end{enumerate}
+\item Any open covering of $D_{+}(f)$ can be refined to a finite
+open covering of the form $D_{+}(f) = \bigcup_{i = 1}^n D_{+}(g_i)$.
+\item Let $g_1, \ldots, g_n \in S$ be homogeneous of positive degree.
+Then $D_{+}(f) \subset \bigcup D_{+}(g_i)$
+if and only if
+$g_1^{\deg(f)}/f^{\deg(g_1)}, \ldots, g_n^{\deg(f)}/f^{\deg(g_n)}$
+generate the unit ideal in $S_{(f)}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall that $D_{+}(g) = \Spec(S_{(g)})$ with identification
+given by the ring maps $S \to S_g \leftarrow S_{(g)}$, see
+Algebra, Lemma \ref{algebra-lemma-topology-proj}.
+Thus $f^{\deg(g)}/g^{\deg(f)}$ is an element of $S_{(g)}$ which is not
+contained in any prime ideal, and hence invertible,
+see Algebra, Lemma \ref{algebra-lemma-Zariski-topology}.
+We conclude that (a) holds.
Write the inverse of $f$ in $S_g$ as $a/g^d$.
We may replace $a$ by its homogeneous part of degree $d\deg(g) - \deg(f)$.
This means $g^d - af$ is annihilated by $g^N$ for some $N \geq 0$, whence
$g^e = af$ with $e = N + d$, after replacing $a$ by $g^Na$, which is
homogeneous of degree $e\deg(g) - \deg(f)$.
+This proves (b).
+For (c), the map $S_f \to S_g$ exists by (a) from the universal property
+of localization, or we can define it by mapping $b/f^n$
+to $a^nb/g^{ne}$. This clearly induces a map of the subrings
+$S_{(f)} \to S_{(g)}$ of degree zero elements as well.
+We can similarly define $M_f \to M_g$ and $M_{(f)} \to M_{(g)}$ by mapping
+$x/f^n$ to $a^nx/g^{ne}$. The statements writing $S_{(g)}$
+resp.\ $M_{(g)}$ as principal localizations of $S_{(f)}$ resp.\ $M_{(f)}$
+are clear from the formulas above. The maps in the commutative diagram
+of topological spaces correspond to the ring maps given above. The
+horizontal arrows are homeomorphisms by
+Algebra, Lemma \ref{algebra-lemma-topology-proj}.
+The vertical arrows are open immersions since the left
+one is the inclusion of an open subset.
+
+\medskip\noindent
+The open $D_{+}(f)$ is quasi-compact because it is homeomorphic
+to $\Spec(S_{(f)})$, see Algebra, Lemma \ref{algebra-lemma-quasi-compact}.
+Hence the second statement follows directly
+from the fact that the standard opens form
+a basis for the topology.
+
+\medskip\noindent
+The third statement follows directly from
+Algebra, Lemma \ref{algebra-lemma-Zariski-topology}.
+\end{proof}
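\noindent
The following worked example may help orient the reader (an added
illustration, not needed in what follows): take $S = k[T_0, T_1]$
with the standard grading $\deg(T_0) = \deg(T_1) = 1$ and set
$t = T_1/T_0$. Then
$$
S_{(T_0)} = k[t],
\quad
S_{(T_1)} = k[t^{-1}],
\quad
S_{(T_0T_1)} = k[t, t^{-1}],
$$
so $D_{+}(T_0)$ and $D_{+}(T_1)$ are homeomorphic to affine lines over
$k$ glued along $D_{+}(T_0T_1)$, and part (1)(e) of
Lemma \ref{lemma-standard-open} for $f = T_0$, $g = T_0T_1$ becomes
$(S_{(T_0)})_t = k[t, t^{-1}] = S_{(T_0T_1)}$.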
+
+\noindent
+In Sheaves, Section \ref{sheaves-section-bases} we defined
+the notion of a sheaf on a basis, and we showed that it is
+essentially equivalent to the notion of a sheaf on the space,
+see Sheaves, Lemmas \ref{sheaves-lemma-extend-off-basis} and
+\ref{sheaves-lemma-extend-off-basis-structures}. Moreover,
+we showed in
+Sheaves, Lemma \ref{sheaves-lemma-cofinal-systems-coverings-standard-case}
+that it is sufficient to check the sheaf
+condition on a cofinal system of open coverings for each
+standard open. By the lemma above it suffices to check
+on the finite coverings by standard opens.
+
+\begin{definition}
+\label{definition-standard-covering}
+Let $S$ be a graded ring.
+Suppose that $D_{+}(f) \subset \text{Proj}(S)$ is a standard
+open. A {\it standard open covering} of $D_{+}(f)$
+is a covering $D_{+}(f) = \bigcup_{i = 1}^n D_{+}(g_i)$,
+where $g_1, \ldots, g_n \in S$ are homogeneous of positive degree.
+\end{definition}
+
+\noindent
+Let $S$ be a graded ring. Let $M$ be a graded $S$-module. We will define
+a presheaf $\widetilde M$ on the basis of standard opens.
+Suppose that $U \subset \text{Proj}(S)$ is a standard open.
+If $f, g \in S$ are homogeneous of positive degree
+such that $D_{+}(f) = D_{+}(g)$, then
+by Lemma \ref{lemma-standard-open} above there are canonical
+maps $M_{(f)} \to M_{(g)}$ and $M_{(g)} \to M_{(f)}$ which are
+mutually inverse. Hence we may choose any $f$ such that $U = D_{+}(f)$
+and define
+$$
+\widetilde M(U) = M_{(f)}.
+$$
+Note that if $D_{+}(g) \subset D_{+}(f)$, then by
+Lemma \ref{lemma-standard-open} above we have
+a canonical map
+$$
+\widetilde M(D_{+}(f)) = M_{(f)} \longrightarrow
+M_{(g)} = \widetilde M(D_{+}(g)).
+$$
+Clearly, this defines a presheaf of abelian groups on the basis
+of standard opens. If $M = S$, then $\widetilde S$ is a presheaf
+of rings on the basis of standard opens. And for general $M$ we
+see that $\widetilde M$ is a presheaf of $\widetilde S$-modules
+on the basis of standard opens.
+
+\medskip\noindent
+Let us compute the stalk of $\widetilde M$ at a point
+$x \in \text{Proj}(S)$.
+Suppose that $x$ corresponds to the homogeneous prime
+ideal $\mathfrak p \subset S$.
+By definition of the stalk we see that
+$$
+\widetilde M_x
+=
+\colim_{f\in S_d, d > 0, f\not\in \mathfrak p} M_{(f)}
+$$
+Here the set $\{f \in S_d, d > 0, f \not \in \mathfrak p\}$ is preordered by
+the rule $f \geq f' \Leftrightarrow D_{+}(f) \subset D_{+}(f')$.
+If $f_1, f_2 \in S \setminus \mathfrak p$ are homogeneous of positive
+degree, then we have
+$f_1f_2 \geq f_1$ in this ordering. In
+Algebra, Section \ref{algebra-section-proj}
+we defined $M_{(\mathfrak p)}$ as the module whose elements are fractions
+$x/f$ with $x, f$ homogeneous, $\deg(x) = \deg(f)$, $f \not \in \mathfrak p$.
+Since $\mathfrak p \in \text{Proj}(S)$ there exists at least
+one $f_0 \in S$ homogeneous of positive degree with $f_0 \not\in \mathfrak p$.
+Hence $x/f = f_0x/ff_0$ and we see that we may always assume
+the denominator of an element in $M_{(\mathfrak p)}$ has positive degree.
+From these remarks it follows easily that
+$$
+\widetilde M_x = M_{(\mathfrak p)}.
+$$
+
+\medskip\noindent
+Next, we check the sheaf condition for the standard open coverings.
+If $D_{+}(f) = \bigcup_{i = 1}^n D_{+}(g_i)$, then the sheaf condition
+for this covering is equivalent with the exactness of the
+sequence
+$$
+0 \to M_{(f)} \to \bigoplus M_{(g_i)} \to \bigoplus M_{(g_ig_j)}.
+$$
+Note that $D_{+}(g_i) = D_{+}(fg_i)$, and hence we can rewrite this
+sequence as the sequence
+$$
+0 \to M_{(f)} \to \bigoplus M_{(fg_i)} \to \bigoplus M_{(fg_ig_j)}.
+$$
+By Lemma \ref{lemma-standard-open} we see that
+$g_1^{\deg(f)}/f^{\deg(g_1)}, \ldots, g_n^{\deg(f)}/f^{\deg(g_n)}$
+generate the unit ideal in $S_{(f)}$, and that the modules
+$M_{(fg_i)}$, $M_{(fg_ig_j)}$ are the principal localizations
+of the $S_{(f)}$-module $M_{(f)}$ at these elements and their products.
+Thus we may apply Algebra, Lemma \ref{algebra-lemma-cover-module}
+to the module $M_{(f)}$ over $S_{(f)}$ and the elements
+$g_1^{\deg(f)}/f^{\deg(g_1)}, \ldots, g_n^{\deg(f)}/f^{\deg(g_n)}$.
+We conclude that the sequence is exact. By the remarks
+made above, we see that $\widetilde M$ is a sheaf
+on the basis of standard opens.
+
+\medskip\noindent
+Thus we conclude from the material in
+Sheaves, Section \ref{sheaves-section-bases}
+that there exists a
+unique sheaf of rings $\mathcal{O}_{\text{Proj}(S)}$
+which agrees with $\widetilde S$ on the standard opens.
+Note that by our computation of stalks above and
+Algebra, Lemma \ref{algebra-lemma-proj-prime} the
+stalks of this sheaf of rings are all local rings.
+
+\medskip\noindent
+Similarly, for any graded $S$-module $M$ there exists
+a unique sheaf of $\mathcal{O}_{\text{Proj}(S)}$-modules
+$\mathcal{F}$ which agrees with $\widetilde M$ on the
+standard opens, see
+Sheaves, Lemma \ref{sheaves-lemma-extend-off-basis-module}.
+
+\begin{definition}
+\label{definition-structure-sheaf}
+Let $S$ be a graded ring.
+\begin{enumerate}
+\item The {\it structure sheaf $\mathcal{O}_{\text{Proj}(S)}$ of the
+homogeneous spectrum of $S$} is the unique sheaf of rings
+$\mathcal{O}_{\text{Proj}(S)}$
+which agrees with $\widetilde S$ on the basis of standard opens.
+\item The locally ringed space
+$(\text{Proj}(S), \mathcal{O}_{\text{Proj}(S)})$ is called
+the {\it homogeneous spectrum} of $S$ and denoted $\text{Proj}(S)$.
+\item The sheaf of $\mathcal{O}_{\text{Proj}(S)}$-modules
+extending $\widetilde M$ to all opens of $\text{Proj}(S)$
+is called the sheaf of $\mathcal{O}_{\text{Proj}(S)}$-modules
+associated to $M$. This sheaf is denoted $\widetilde M$ as
+well.
+\end{enumerate}
+\end{definition}
+
+\noindent
+We summarize the results obtained so far.
+
+\begin{lemma}
+\label{lemma-proj-sheaves}
+Let $S$ be a graded ring. Let $M$ be a graded $S$-module.
+Let $\widetilde M$ be the sheaf of $\mathcal{O}_{\text{Proj}(S)}$-modules
+associated to $M$.
+\begin{enumerate}
+\item For every $f \in S$ homogeneous of positive degree we have
+$$
+\Gamma(D_{+}(f), \mathcal{O}_{\text{Proj}(S)}) = S_{(f)}.
+$$
+\item For every $f\in S$ homogeneous of positive degree
+we have $\Gamma(D_{+}(f), \widetilde M) = M_{(f)}$
+as an $S_{(f)}$-module.
+\item Whenever $D_{+}(g) \subset D_{+}(f)$ the restriction mappings
+on $\mathcal{O}_{\text{Proj}(S)}$ and $\widetilde M$
+are the maps
+$S_{(f)} \to S_{(g)}$ and $M_{(f)} \to M_{(g)}$ from Lemma
+\ref{lemma-standard-open}.
+\item Let $\mathfrak p$ be a homogeneous prime of $S$ not containing
+$S_{+}$, and let $x \in \text{Proj}(S)$
+be the corresponding point. We have
+$\mathcal{O}_{\text{Proj}(S), x} = S_{(\mathfrak p)}$.
+\item Let $\mathfrak p$ be a homogeneous prime of $S$ not containing
+$S_{+}$, and let $x \in \text{Proj}(S)$
be the corresponding point. We have $\widetilde M_x = M_{(\mathfrak p)}$
+as an $S_{(\mathfrak p)}$-module.
+\item
+\label{item-map}
+There is a canonical ring map
+$
+S_0 \longrightarrow \Gamma(\text{Proj}(S), \widetilde S)
+$
+and a canonical $S_0$-module map
+$
+M_0 \longrightarrow \Gamma(\text{Proj}(S), \widetilde M)
+$
+compatible with the descriptions of sections over standard opens
+and stalks above.
+\end{enumerate}
+Moreover, all these identifications are functorial in the graded
+$S$-module $M$. In particular, the functor $M \mapsto \widetilde M$
+is an exact functor from the category of graded $S$-modules
+to the category of $\mathcal{O}_{\text{Proj}(S)}$-modules.
+\end{lemma}
+
+\begin{proof}
Assertions (1) -- (5) are clear from the discussion above.
+We see (6) since there are canonical maps $M_0 \to M_{(f)}$,
+$x \mapsto x/1$ compatible with the restriction maps
+described in (3). The exactness of the functor $M \mapsto \widetilde M$
+follows from the fact that the functor $M \mapsto M_{(\mathfrak p)}$
+is exact (see Algebra, Lemma \ref{algebra-lemma-proj-prime})
+and the fact that exactness of short exact sequences
+may be checked on stalks, see
+Modules, Lemma \ref{modules-lemma-abelian}.
+\end{proof}
+
+\begin{remark}
+\label{remark-global-sections-not-isomorphism}
+The map from $M_0$ to the global sections of $\widetilde M$
+is generally far from being an isomorphism. A trivial
+example is to take $S = k[x, y, z]$ with $1 = \deg(x) = \deg(y) = \deg(z)$
+(or any number of variables) and to take $M = S/(x^{100}, y^{100}, z^{100})$.
+It is easy to see that $\widetilde M = 0$, but $M_0 = k$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-standard-open-proj}
+Let $S$ be a graded ring. Let $f \in S$ be homogeneous of positive degree.
+Suppose that $D(g) \subset \Spec(S_{(f)})$ is a standard open.
+Then there exists an $h \in S$ homogeneous of positive degree such that
+$D(g)$ corresponds to $D_{+}(h) \subset D_{+}(f)$ via the homeomorphism
+of Algebra, Lemma \ref{algebra-lemma-topology-proj}. In fact we can
+take $h$ such that $g = h/f^n$ for some $n$.
+\end{lemma}
+
+\begin{proof}
+Write $g = h/f^n$ for some $h$ homogeneous of positive degree
+and some $n \geq 1$. If $D_{+}(h)$ is not contained in
+$D_{+}(f)$ then we replace $h$ by $hf$ and $n$ by $n + 1$.
+Then $h$ has the required shape and $D_{+}(h) \subset D_{+}(f)$
+corresponds to $D(g) \subset \Spec(S_{(f)})$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proj-scheme}
+Let $S$ be a graded ring.
+The locally ringed space $\text{Proj}(S)$ is a scheme.
+The standard opens $D_{+}(f)$ are affine opens.
+For any graded $S$-module $M$ the sheaf
+$\widetilde M$ is a quasi-coherent sheaf of
+$\mathcal{O}_{\text{Proj}(S)}$-modules.
+\end{lemma}
+
+\begin{proof}
+Consider a standard open $D_{+}(f) \subset \text{Proj}(S)$.
+By Lemmas \ref{lemma-standard-open} and \ref{lemma-proj-sheaves}
+we have $\Gamma(D_{+}(f), \mathcal{O}_{\text{Proj}(S)}) = S_{(f)}$, and
+we have a homeomorphism $\varphi : D_{+}(f) \to \Spec(S_{(f)})$.
+For any standard open $D(g) \subset \Spec(S_{(f)})$ we may
+pick an $h \in S_{+}$ as in Lemma \ref{lemma-standard-open-proj}.
+Then $\varphi^{-1}(D(g)) = D_{+}(h)$, and by
+Lemmas \ref{lemma-proj-sheaves} and \ref{lemma-standard-open} we see
+$$
+\Gamma(D_{+}(h), \mathcal{O}_{\text{Proj}(S)})
+=
+S_{(h)}
+=
+(S_{(f)})_{h^{\deg(f)}/f^{\deg(h)}}
+=
+(S_{(f)})_g
+=
+\Gamma(D(g), \mathcal{O}_{\Spec(S_{(f)})}).
+$$
+Thus the restriction of $\mathcal{O}_{\text{Proj}(S)}$ to
+$D_{+}(f)$ corresponds via the homeomorphism $\varphi$
+exactly to the sheaf $\mathcal{O}_{\Spec(S_{(f)})}$
+as defined in Schemes, Section \ref{schemes-section-affine-schemes}.
+We conclude that $D_{+}(f)$ is an affine scheme isomorphic to
+$\Spec(S_{(f)})$ via $\varphi$ and
+hence that $\text{Proj}(S)$ is a scheme.
+
+\medskip\noindent
+In exactly the same way we show that $\widetilde M$ is a
+quasi-coherent sheaf of $\mathcal{O}_{\text{Proj}(S)}$-modules.
+Namely, the argument above will show that
+$$
+\widetilde M|_{D_{+}(f)} \cong \varphi^*\left(\widetilde{M_{(f)}}\right)
+$$
+which shows that $\widetilde M$ is quasi-coherent.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proj-separated}
+Let $S$ be a graded ring.
+The scheme $\text{Proj}(S)$ is separated.
+\end{lemma}
+
+\begin{proof}
+We have to show that the canonical morphism
+$\text{Proj}(S) \to \Spec(\mathbf{Z})$
+is separated.
+We will use Schemes, Lemma \ref{schemes-lemma-characterize-separated}.
+Thus it suffices to show given any pair of standard opens
+$D_{+}(f)$ and $D_{+}(g)$ that $D_{+}(f) \cap D_{+}(g) = D_{+}(fg)$
+is affine (clear) and that the ring map
+$$
+S_{(f)} \otimes_{\mathbf{Z}} S_{(g)} \longrightarrow S_{(fg)}
+$$
+is surjective. Any element $s$ in $S_{(fg)}$ is of
+the form $s = h/(f^ng^m)$ with $h \in S$ homogeneous of degree
+$n\deg(f) + m\deg(g)$. We may multiply $h$ by a suitable
monomial $f^ig^j$ and assume that $n = n' \deg(g)$ and
+$m = m' \deg(f)$. Then we can rewrite $s$ as
+$s = h/f^{(n' + m')\deg(g)} \cdot f^{m'\deg(g)}/g^{m'\deg(f)}$.
+So $s$ is indeed in the image of the displayed arrow.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proj-quasi-compact}
+Let $S$ be a graded ring.
+The scheme $\text{Proj}(S)$ is quasi-compact if and only
+if there exist finitely many homogeneous elements
+$f_1, \ldots, f_n \in S_{+}$ such that
+$S_{+} \subset \sqrt{(f_1, \ldots, f_n)}$. In this case
+$\text{Proj}(S) = D_+(f_1) \cup \ldots \cup D_+(f_n)$.
+\end{lemma}
+
+\begin{proof}
+Given such a collection of elements the standard affine opens
+$D_{+}(f_i)$ cover $\text{Proj}(S)$ by
+Algebra, Lemma \ref{algebra-lemma-topology-proj}.
+Conversely, if $\text{Proj}(S)$ is quasi-compact, then we
+may cover it by finitely many standard opens
+$D_{+}(f_i)$, $i = 1, \ldots, n$ and we see that
$S_{+} \subset \sqrt{(f_1, \ldots, f_n)}$ by
Algebra, Lemma \ref{algebra-lemma-topology-proj}.
+\end{proof}
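\noindent
For example (a standard illustration, not claimed above): let
$S = k[x_1, x_2, x_3, \ldots]$ be a polynomial ring in countably many
variables, each of degree $1$. Any finite collection of homogeneous
elements $f_1, \ldots, f_n \in S_{+}$ involves only finitely many of
the variables; if $x_m$ occurs in none of the $f_i$, then
$(f_1, \ldots, f_n)$ is contained in the prime ideal generated by the
variables other than $x_m$, whence
$x_m \not\in \sqrt{(f_1, \ldots, f_n)}$. Thus
$S_{+} \not\subset \sqrt{(f_1, \ldots, f_n)}$ and
$\text{Proj}(S)$ is not quasi-compact in this case.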
+
+\begin{lemma}
+\label{lemma-structure-morphism-proj}
+Let $S$ be a graded ring. The scheme $\text{Proj}(S)$ has a canonical morphism
+towards the affine scheme $\Spec(S_0)$, agreeing with the map on
+topological spaces coming from
+Algebra, Definition \ref{algebra-definition-proj}.
+\end{lemma}
+
+\begin{proof}
+We saw above that our construction of $\widetilde S$,
+resp.\ $\widetilde M$ gives a sheaf of $S_0$-algebras, resp.\ $S_0$-modules.
+Hence we get a morphism by
+Schemes, Lemma \ref{schemes-lemma-morphism-into-affine}.
+This morphism, when restricted to $D_{+}(f)$ comes from the
+canonical ring map $S_0 \to S_{(f)}$. The maps
+$S \to S_f$, $S_{(f)} \to S_f$ are $S_0$-algebra maps, see
+Lemma \ref{lemma-standard-open}.
+Hence if the homogeneous prime $\mathfrak p \subset S$
+corresponds to the $\mathbf{Z}$-graded prime $\mathfrak p' \subset S_f$
+and the (usual) prime $\mathfrak p'' \subset S_{(f)}$, then
+each of these has the same inverse image in $S_0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proj-valuative-criterion}
+Let $S$ be a graded ring. If $S$ is finitely generated as
+an algebra over $S_0$, then
+the morphism $\text{Proj}(S) \to \Spec(S_0)$ satisfies
+the existence and uniqueness parts of the valuative criterion,
+see Schemes, Definition \ref{schemes-definition-valuative-criterion}.
+\end{lemma}
+
+\begin{proof}
+The uniqueness part follows from the fact that $\text{Proj}(S)$ is
+separated (Lemma \ref{lemma-proj-separated} and
+Schemes, Lemma \ref{schemes-lemma-separated-implies-valuative}).
+Choose $x_i \in S_{+}$ homogeneous, $i = 1, \ldots, n$
+which generate $S$ over $S_0$. Let $d_i = \deg(x_i)$ and
+set $d = \text{lcm}\{d_i\}$. Suppose we are given a diagram
+$$
+\xymatrix{
+\Spec(K) \ar[r] \ar[d] & \text{Proj}(S) \ar[d] \\
+\Spec(A) \ar[r] & \Spec(S_0)
+}
+$$
+as in Schemes, Definition \ref{schemes-definition-valuative-criterion}.
+Denote $v : K^* \to \Gamma$ the valuation of $A$, see
+Algebra, Definition \ref{algebra-definition-value-group}.
+We may choose an $f \in S_{+}$ homogeneous such that
+$\Spec(K)$ maps into $D_{+}(f)$. Then we get a commutative
+diagram of ring maps
+$$
+\xymatrix{
+K & S_{(f)} \ar[l]^{\varphi} \\
+A \ar[u] & S_0 \ar[l] \ar[u]
+}
+$$
+After renumbering we may assume that $\varphi(x_i^{\deg(f)}/f^{d_i})$
+is nonzero for $i = 1, \ldots, r$ and zero for $i = r + 1, \ldots, n$.
+Since the open sets $D_{+}(x_i)$ cover $\text{Proj}(S)$ we see that $r \geq 1$.
+Let $i_0 \in \{1, \ldots, r\}$ be an index minimizing
+$\gamma_i = (d/d_i)v(\varphi(x_i^{\deg(f)}/f^{d_i}))$ in $\Gamma$.
+For convenience set $x_0 = x_{i_0}$ and $d_0 = d_{i_0}$.
+The ring map $\varphi$ factors through a map $\varphi' : S_{(fx_0)} \to K$
+which gives a ring map $S_{(x_0)} \to S_{(fx_0)} \to K$.
+The algebra $S_{(x_0)}$ is generated over $S_0$ by the elements
+$x_1^{e_1} \ldots x_n^{e_n}/x_0^{e_0}$, where $\sum e_i d_i = e_0 d_0$.
+If $e_i > 0$ for some $i > r$, then
+$\varphi'(x_1^{e_1} \ldots x_n^{e_n}/x_0^{e_0}) = 0$.
+If $e_i = 0$ for $i > r$, then, multiplying through by $d$ so that
+all coefficients are integers, we have
+\begin{align*}
+d\deg(f)\, v(\varphi'(x_1^{e_1} \ldots x_r^{e_r}/x_0^{e_0}))
+& =
+v(\varphi'(x_1^{e_1 d\deg(f)} \ldots x_r^{e_r d\deg(f)}/
+x_0^{e_0 d\deg(f)})) \\
+& =
+\sum e_i d\, v(\varphi'(x_i^{\deg(f)}/f^{d_i}))
+- e_0 d\, v(\varphi'(x_0^{\deg(f)}/f^{d_0})) \\
+& =
+\sum e_i d_i \gamma_i - e_0 d_0 \gamma_0 \\
+& \geq
+\sum e_i d_i \gamma_0 - e_0 d_0 \gamma_0 = 0
+\end{align*}
+because $\gamma_0$ is minimal among the $\gamma_i$. (In the second
+equality the powers of $f$ cancel because $\sum e_i d_i = e_0 d_0$;
+in the third we use
+$d_i \gamma_i = d\, v(\varphi'(x_i^{\deg(f)}/f^{d_i}))$.)
+Since $\Gamma$ is totally ordered, this gives
+$v(\varphi'(x_1^{e_1} \ldots x_r^{e_r}/x_0^{e_0})) \geq 0$.
+This implies that $S_{(x_0)}$ maps into $A$ via $\varphi'$.
+The corresponding morphism of schemes
+$\Spec(A) \to \Spec(S_{(x_0)}) = D_{+}(x_0)
+\subset \text{Proj}(S)$ provides the morphism fitting into
+the first commutative diagram of this proof.
+\end{proof}
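+
+\noindent
+The key step in the existence argument is the familiar trick of
+dividing by the coordinate of smallest valuation. As a sketch (our
+addition, with simplified notation), take $S = S_0[x_0, x_1]$ with the
+standard grading, so $d_0 = d_1 = d = 1$:
+
+```latex
+% A K-point of Proj(S) lies in some D_+(f) with deg(f) = 1 and is
+% given by the values a_i = \varphi(x_i/f) \in K.  Choose i_0 to
+% minimize \gamma_i = v(a_i) among the indices with a_i \neq 0.
+% Then for j = 0, 1 with a_j \neq 0 we get
+$$
+v(a_j/a_{i_0}) = \gamma_j - \gamma_{i_0} \geq 0,
+$$
+% (and if a_j = 0 then x_j/x_{i_0} maps to 0 \in A anyway), so
+% S_{(x_{i_0})} = S_0[x_j/x_{i_0}] maps into A and
+% Spec(A) \to D_+(x_{i_0}) extends the K-point, just as in the
+% general proof.
+```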
+
+\noindent
+We saw in the proof of Lemma \ref{lemma-proj-valuative-criterion}
+that, under the hypotheses of that lemma, the morphism
+$\text{Proj}(S) \to \Spec(S_0)$ is quasi-compact as well. Hence (by
+Schemes, Proposition \ref{schemes-proposition-characterize-universally-closed})
+we see that $\text{Proj}(S) \to \Spec(S_0)$ is universally closed in
+the situation of the lemma. We give several examples showing these results
+do not hold without some assumption on the graded ring $S$.
+
+\begin{example}
+\label{example-not-existence-valuative-big-proj}
+Let $\mathbf{C}[X_1, X_2, X_3, \ldots]$ be the graded $\mathbf{C}$-algebra
+with each $X_i$ in degree $1$. Consider the ring map
+$$
+\mathbf{C}[X_1, X_2, X_3, \ldots]
+\longrightarrow
+\mathbf{C}[t^\alpha ; \alpha \in \mathbf{Q}_{\geq 0}]
+$$
+which maps $X_i$ to $t^{1/i}$. The right hand side becomes a valuation ring
+$A$ upon localization at the ideal $\mathfrak m = (t^\alpha ; \alpha > 0)$.
+Let $K$ be the fraction field of $A$. The above gives a morphism
+$\Spec(K) \to \text{Proj}(\mathbf{C}[X_1, X_2, X_3, \ldots])$ which does not
+extend to a morphism defined on all of $\Spec(A)$.
+The reason is that the image of $\Spec(A)$ would be contained
+in one of the $D_{+}(X_i)$ but then $X_{i + 1}/X_i$ would map
+to an element of $A$, which it does not, since it maps to
+$t^{1/(i + 1) - 1/i} = t^{-1/i(i + 1)} \not \in A$.
+\end{example}
+
+\begin{example}
+\label{example-not-existence-valuative-small-proj}
+Let $R = \mathbf{C}[t]$ and
+$$
+S = R[X_1, X_2, X_3, \ldots]/(X_i^2 - tX_{i + 1}).
+$$
+The grading is such that $R = S_0$ and $\deg(X_i) = 2^{i - 1}$.
+Note that if $\mathfrak p \in \text{Proj}(S)$ then
+$t \not \in \mathfrak p$ (otherwise, since
+$X_i^2 = tX_{i + 1} \in \mathfrak p$ and $\mathfrak p$ is prime,
+$\mathfrak p$ would have to contain all of the $X_i$, which is not
+allowed for an element of the homogeneous spectrum). Thus we see that
+$D_{+}(X_i) = D_{+}(X_{i + 1})$ for all $i$. Hence
+$\text{Proj}(S)$ is quasi-compact; in fact it is affine
+since it is equal to $D_{+}(X_1)$. It is easy to see that
+the image of $\text{Proj}(S) \to \Spec(R)$ is
+$D(t)$. Hence the morphism $\text{Proj}(S) \to \Spec(R)$
+is not closed. Thus the valuative criterion cannot apply because
+it would imply that the morphism is closed (see
+Schemes, Proposition \ref{schemes-proposition-characterize-universally-closed}
+).
+\end{example}
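+
+\noindent
+For the convenience of the reader we spell out the computation behind
+the claim that the image is $D(t)$ (our addition, same notation as in
+the example):
+
+```latex
+% The relation X_1^2 = tX_2 gives, in the localization S_{X_1},
+$$
+\frac{X_2}{X_1^2} = \frac{1}{t},
+\qquad
+\frac{X_2}{X_1^2} \in S_{(X_1)}
+\ \text{since}\ \deg(X_2) = 2 = 2\deg(X_1).
+$$
+% Hence t is a unit in S_{(X_1)}, so the morphism
+% Proj(S) = D_+(X_1) = Spec(S_{(X_1)}) -> Spec(R) factors through
+% D(t), matching the description of the image in the example.
+```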
+
+\begin{example}
+\label{example-trivial-proj}
+Let $A$ be a ring.
+Let $S = A[T]$ as a graded $A$-algebra with $T$ in degree $1$.
+Then the canonical morphism $\text{Proj}(S) \to \Spec(A)$
+(see Lemma \ref{lemma-structure-morphism-proj})
+is an isomorphism.
+\end{example}
+
+\begin{example}
+\label{example-open-subset-proj}
+Let $X = \Spec(A)$ be an affine scheme, and let $U \subset X$
+be an open subscheme. Grade $A[T]$ by setting $\deg T = 1$. Define $S$
+to be the subring of $A[T]$ generated by $A$ and all $fT^i$, where $i \ge 0$
+and where $f \in A$ is such that $D(f) \subset U$. We claim that $S$
+is a graded ring with $S_0 = A$ such that $\text{Proj}(S) \cong U$,
+and this isomorphism identifies the canonical morphism
+$\text{Proj}(S) \to \Spec(A)$ of Lemma \ref{lemma-structure-morphism-proj}
+with the inclusion $U \subset X$.
+
+\medskip\noindent
+Suppose $\mathfrak p \in \text{Proj}(S)$ is such that every $fT \in S_1$
+is in $\mathfrak p$. Then every generator $fT^i$ with $i \ge 1$
+is in $\mathfrak p$ because $(fT^i)^2 = (fT)(fT^{2i-1}) \in \mathfrak p$
+and $\mathfrak p$ is radical. But then $\mathfrak p \supset S_+$, which
+is impossible. Consequently $\text{Proj}(S)$ is covered by the standard
+open affine subsets $\{D_+(fT)\}_{fT \in S_1}$.
+
+\medskip\noindent
+Observe that, if $fT \in S_1$, then the inclusion $S \subset A[T]$
+induces a graded isomorphism of $S[(fT)^{-1}]$ with $A[T, T^{-1}, f^{-1}]$.
+Hence the standard open subset $D_+(fT) \cong \Spec(S_{(fT)})$
+is isomorphic to $\Spec(A[T, T^{-1}, f^{-1}]_0) = \Spec(A[f^{-1}])$.
+It is clear that this isomorphism is a restriction of the canonical morphism
+$\text{Proj}(S) \to \Spec(A)$. If in addition $gT \in S_1$, then
+$S[(fT)^{-1}, (gT)^{-1}] \cong A[T, T^{-1}, f^{-1}, g^{-1}]$
+as graded rings, so $D_+(fT) \cap D_+(gT) \cong \Spec(A[f^{-1}, g^{-1}])$.
+Therefore $\text{Proj}(S)$ is the union of open subschemes $D_+(fT)$
+which are isomorphic to the open subschemes $D(f) \subset X$
+under the canonical morphism, and these open subschemes intersect
+in $\text{Proj}(S)$ in the same way they do in $X$.
+We conclude that the canonical morphism is an isomorphism of
+$\text{Proj}(S)$ with the union of all $D(f) \subset U$, which is $U$.
+\end{example}
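+
+\noindent
+A concrete instance of this construction (our addition, not part of
+the original text): let $A = k[u, v]$ and let
+$U = D(u) \cup D(v) \subset \mathbf{A}^2_k$ be the complement of the
+origin.
+
+```latex
+% Here D(f) \subset U if and only if f \in \sqrt{(u, v)} = (u, v),
+% so S is generated over A by the elements fT^i with f \in (u, v),
+% i \geq 1.  The standard opens of the example become
+$$
+D_{+}(uT) \cong \Spec(A[1/u]) = D(u),
+\qquad
+D_{+}(vT) \cong \Spec(A[1/v]) = D(v),
+$$
+% glued along Spec(A[1/u, 1/v]) = D(uv) exactly as inside A^2.  Thus
+% Proj(S) is the non-affine scheme A^2_k minus the origin; note that
+% this S is not finitely generated over A (one checks that the
+% elements uT^i and vT^i are needed as generators for every i).
+```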
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Quasi-coherent sheaves on Proj}
+\label{section-quasi-coherent-proj}
+
+\noindent
+Let $S$ be a graded ring. Let $M$ be a graded $S$-module.
+We saw in Lemma \ref{lemma-proj-sheaves} how to construct a quasi-coherent
+sheaf of modules $\widetilde{M}$ on $\text{Proj}(S)$ and a map
+\begin{equation}
+\label{equation-map-global-sections}
+M_0 \longrightarrow \Gamma(\text{Proj}(S), \widetilde{M})
+\end{equation}
+of the degree $0$ part of $M$ to the global sections of $\widetilde{M}$.
+The degree $0$ part of the $n$th twist $M(n)$ of the graded module $M$ (see
+Algebra, Section \ref{algebra-section-graded})
+is equal to $M_n$. Hence we can get maps
+\begin{equation}
+\label{equation-map-global-sections-degree-n}
+M_n \longrightarrow \Gamma(\text{Proj}(S), \widetilde{M(n)}).
+\end{equation}
+We would like to be able to perform this operation for any quasi-coherent
+sheaf $\mathcal{F}$ on $\text{Proj}(S)$. We will do this by tensoring
+with the $n$th twist of the structure sheaf, see
+Definition \ref{definition-twist}. In order to relate the two notions
+we will use the following lemma.
+
+\begin{lemma}
+\label{lemma-widetilde-tensor}
+Let $S$ be a graded ring.
+Let $(X, \mathcal{O}_X) = (\text{Proj}(S), \mathcal{O}_{\text{Proj}(S)})$
+be the scheme of Lemma \ref{lemma-proj-scheme}.
+Let $f \in S_{+}$ be homogeneous. Let $x \in X$ be a point
+corresponding to the homogeneous prime $\mathfrak p \subset S$.
+Let $M$, $N$ be graded $S$-modules.
+There is a canonical map of $\mathcal{O}_{\text{Proj}(S)}$-modules
+$$
+\widetilde M \otimes_{\mathcal{O}_X} \widetilde N
+\longrightarrow
+\widetilde{M \otimes_S N}
+$$
+which induces the canonical map
+$
+M_{(f)} \otimes_{S_{(f)}} N_{(f)}
+\to
+(M \otimes_S N)_{(f)}
+$
+on sections over $D_{+}(f)$ and the canonical map
+$
+M_{(\mathfrak p)} \otimes_{S_{(\mathfrak p)}} N_{(\mathfrak p)}
+\to
+(M \otimes_S N)_{(\mathfrak p)}
+$
+on stalks at $x$. Moreover, the following diagram
+$$
+\xymatrix{
+M_0 \otimes_{S_0} N_0 \ar[r] \ar[d] &
+(M \otimes_S N)_0 \ar[d] \\
+\Gamma(X, \widetilde M \otimes_{\mathcal{O}_X} \widetilde N) \ar[r] &
+\Gamma(X, \widetilde{M \otimes_S N})
+}
+$$
+is commutative where the vertical maps are given by
+(\ref{equation-map-global-sections}).
+\end{lemma}
+
+\begin{proof}
+To construct a morphism as displayed is the same as constructing
+a $\mathcal{O}_X$-bilinear map
+$$
+\widetilde M \times \widetilde N
+\longrightarrow
+\widetilde{M \otimes_S N}
+$$
+see Modules, Section \ref{modules-section-tensor-product}.
+It suffices to define this on sections over the opens $D_{+}(f)$
+compatible with restriction mappings. On $D_{+}(f)$ we use the
+$S_{(f)}$-bilinear map
+$M_{(f)} \times N_{(f)} \to (M \otimes_S N)_{(f)}$,
+$(x/f^n, y/f^m) \mapsto (x \otimes y)/f^{n + m}$. Details omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-not-isomorphism}
+In general the map constructed in Lemma \ref{lemma-widetilde-tensor}
+above is not an isomorphism. Here is an example. Let $k$
+be a field. Let $S = k[x, y, z]$ with $k$ in degree $0$ and
+$\deg(x) = 1$, $\deg(y) = 2$, $\deg(z) = 3$.
+Let $M = S(1)$ and $N = S(2)$, see
+Algebra, Section \ref{algebra-section-graded}
+for notation. Then $M \otimes_S N = S(3)$.
+Note that
+\begin{eqnarray*}
+S_z
+& = &
+k[x, y, z, 1/z] \\
+S_{(z)}
+& = &
+k[x^3/z, xy/z, y^3/z^2]
+\cong
+k[u, v, w]/(uw - v^3) \\
+M_{(z)} & = & S_{(z)} \cdot x + S_{(z)} \cdot y^2/z \subset S_z \\
+N_{(z)} & = & S_{(z)} \cdot y + S_{(z)} \cdot x^2 \subset S_z \\
+S(3)_{(z)} & = & S_{(z)} \cdot z \subset S_z
+\end{eqnarray*}
+Consider the maximal ideal $\mathfrak m = (u, v, w) \subset S_{(z)}$.
+It is not hard to see that both $M_{(z)}/\mathfrak mM_{(z)}$
+and $N_{(z)}/\mathfrak mN_{(z)}$ have dimension $2$ over
+$\kappa(\mathfrak m)$. But
+$S(3)_{(z)}/\mathfrak mS(3)_{(z)}$ has dimension $1$.
+Thus the map $M_{(z)} \otimes N_{(z)} \to S(3)_{(z)}$ is not
+an isomorphism.
+\end{remark}
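+
+\noindent
+The assertions in the remark can be checked by hand; the following
+verification is our addition.
+
+```latex
+% With u = x^3/z, v = xy/z, w = y^3/z^2 we have
+$$
+u \cdot w = \frac{x^3}{z} \cdot \frac{y^3}{z^2}
+          = \frac{(xy)^3}{z^3} = v^3,
+$$
+% which is the relation uw = v^3.  Modulo \mathfrak m = (u, v, w) one
+% checks that the fibre of M_{(z)} is spanned by the classes of x and
+% y^2/z, the fibre of N_{(z)} by y and x^2, and the fibre of
+% S(3)_{(z)} by z.  Since, for example,
+%   x \otimes y   maps to xy  = v \cdot z, and
+%   x \otimes x^2 maps to x^3 = u \cdot z,
+% these pure tensors die in the one-dimensional fibre of S(3)_{(z)},
+% so the map M_{(z)} \otimes N_{(z)} \to S(3)_{(z)} cannot be an
+% isomorphism.
+```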
+
+
+
+
+
+
+
+
+
+\section{Invertible sheaves on Proj}
+\label{section-invertible-on-proj}
+
+\noindent
+Recall from Algebra, Section \ref{algebra-section-graded}
+the construction of the twisted module $M(n)$ associated
+to a graded module over a graded ring.
+
+\begin{definition}
+\label{definition-twist}
+Let $S$ be a graded ring. Let $X = \text{Proj}(S)$.
+\begin{enumerate}
+\item We define $\mathcal{O}_X(n) = \widetilde{S(n)}$.
+This is called the $n$th
+{\it twist of the structure sheaf of $\text{Proj}(S)$}.
+\item For any sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$ we set
+$\mathcal{F}(n) = \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{O}_X(n)$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+We are going to use Lemma \ref{lemma-widetilde-tensor}
+to construct some canonical maps.
+Since $S(n) \otimes_S S(m) = S(n + m)$ we see that there
+are canonical maps
+\begin{equation}
+\label{equation-multiply}
+\mathcal{O}_X(n) \otimes_{\mathcal{O}_X} \mathcal{O}_X(m)
+\longrightarrow
+\mathcal{O}_X(n + m).
+\end{equation}
+These maps are not isomorphisms in general, see the example in
+Remark \ref{remark-not-isomorphism}. The same example shows
+that $\mathcal{O}_X(n)$ is {\it not} an invertible sheaf on $X$ in
+general. Tensoring with an arbitrary $\mathcal{O}_X$-module $\mathcal{F}$
+we get maps
+\begin{equation}
+\label{equation-multiply-on-sheaf}
+\mathcal{O}_X(n) \otimes_{\mathcal{O}_X} \mathcal{F}(m)
+\longrightarrow
+\mathcal{F}(n + m).
+\end{equation}
+The maps (\ref{equation-multiply}) on global sections give a map of graded
+rings
+\begin{equation}
+\label{equation-global-sections}
+S \longrightarrow \bigoplus\nolimits_{n \geq 0} \Gamma(X, \mathcal{O}_X(n)).
+\end{equation}
+And for an arbitrary $\mathcal{O}_X$-module $\mathcal{F}$ the maps
+(\ref{equation-multiply-on-sheaf}) give a graded module structure
+\begin{equation}
+\label{equation-global-sections-module}
+\bigoplus\nolimits_{n \geq 0} \Gamma(X, \mathcal{O}_X(n))
+\times
+\bigoplus\nolimits_{m \in \mathbf{Z}} \Gamma(X, \mathcal{F}(m))
+\longrightarrow
+\bigoplus\nolimits_{m \in \mathbf{Z}} \Gamma(X, \mathcal{F}(m))
+\end{equation}
+and via (\ref{equation-global-sections}) also an $S$-module structure.
+More generally, given any graded $S$-module
+$M$ we have $M(n) = M \otimes_S S(n)$. Hence we get maps
+\begin{equation}
+\label{equation-multiply-more-generally}
+\widetilde M(n)
+=
+\widetilde M
+\otimes_{\mathcal{O}_X}
+\mathcal{O}_X(n)
+\longrightarrow
+\widetilde{M(n)}.
+\end{equation}
+On global sections (\ref{equation-map-global-sections-degree-n})
+defines a map of graded $S$-modules
+\begin{equation}
+\label{equation-global-sections-more-generally}
+M \longrightarrow
+\bigoplus\nolimits_{n \in \mathbf{Z}} \Gamma(X, \widetilde{M(n)}).
+\end{equation}
+Here is an important fact which follows basically immediately from the
+definitions.
+
+\begin{lemma}
+\label{lemma-when-invertible}
+Let $S$ be a graded ring. Set $X = \text{Proj}(S)$.
+Let $f \in S$ be homogeneous of degree $d > 0$.
+The sheaves $\mathcal{O}_X(nd)|_{D_{+}(f)}$ are invertible,
+and in fact trivial for all $n \in \mathbf{Z}$
+(see Modules, Definition \ref{modules-definition-invertible}).
+The maps (\ref{equation-multiply}) restricted to $D_{+}(f)$
+$$
+\mathcal{O}_X(nd)|_{D_{+}(f)} \otimes_{\mathcal{O}_{D_{+}(f)}}
+\mathcal{O}_X(m)|_{D_{+}(f)}
+\longrightarrow
+\mathcal{O}_X(nd + m)|_{D_{+}(f)},
+$$
+the maps (\ref{equation-multiply-on-sheaf}) restricted to $D_+(f)$
+$$
+\mathcal{O}_X(nd)|_{D_{+}(f)} \otimes_{\mathcal{O}_{D_{+}(f)}}
+\mathcal{F}(m)|_{D_{+}(f)}
+\longrightarrow
+\mathcal{F}(nd + m)|_{D_{+}(f)},
+$$
+and the maps (\ref{equation-multiply-more-generally})
+restricted to $D_{+}(f)$
+$$
+\widetilde M(nd)|_{D_{+}(f)}
+=
+\widetilde M|_{D_{+}(f)}
+\otimes_{\mathcal{O}_{D_{+}(f)}}
+\mathcal{O}_X(nd)|_{D_{+}(f)}
+\longrightarrow
+\widetilde{M(nd)}|_{D_{+}(f)}
+$$
+are isomorphisms for all $n, m \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+The (not graded) $S$-module maps $S \to S(nd)$, and $M \to M(nd)$, given by
+$x \mapsto f^n x$ become isomorphisms after inverting $f$. The first shows that
+$S_{(f)} \cong S(nd)_{(f)}$ which gives an isomorphism
+$\mathcal{O}_{D_{+}(f)} \cong \mathcal{O}_X(nd)|_{D_{+}(f)}$.
+The second shows that the map
+$S(nd)_{(f)} \otimes_{S_{(f)}} M_{(f)} \to M(nd)_{(f)}$
+is an isomorphism. The case of the map (\ref{equation-multiply-on-sheaf})
+is a consequence of the case of the map (\ref{equation-multiply}).
+\end{proof}
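+
+\noindent
+Concretely, the proof exhibits $f^n$ as a trivializing section of
+$\mathcal{O}_X(nd)|_{D_{+}(f)}$. For instance (our addition), with
+$S = k[x_0, x_1]$ standard graded and $f = x_0$ (so $d = 1$):
+
+```latex
+% The isomorphism S_{(x_0)} -> S(n)_{(x_0)} is multiplication by
+% x_0^n:
+$$
+S_{(x_0)} = k[x_1/x_0]
+\xrightarrow{\ \cdot x_0^n\ }
+S(n)_{(x_0)} = x_0^n \cdot k[x_1/x_0] \subset S_{x_0},
+$$
+% so O_X(n)|_{D_+(x_0)} is free of rank 1 on the section x_0^n,
+% recovering the usual trivialization of O(n) on this chart of P^1_k.
+```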
+
+\begin{lemma}
+\label{lemma-apply-modules}
+Let $S$ be a graded ring. Let $M$ be a graded $S$-module.
+Set $X = \text{Proj}(S)$. Assume $X$ is covered by the standard
+opens $D_+(f)$ with $f \in S_1$, e.g., if $S$ is generated by $S_1$
+over $S_0$. Then the sheaves $\mathcal{O}_X(n)$
+are invertible and the maps
+(\ref{equation-multiply}), (\ref{equation-multiply-on-sheaf}), and
+(\ref{equation-multiply-more-generally}) are isomorphisms.
+In particular, these maps induce isomorphisms
+$$
+\mathcal{O}_X(1)^{\otimes n} \cong
+\mathcal{O}_X(n)
+\quad
+\text{and}
+\quad
+\widetilde{M} \otimes_{\mathcal{O}_X} \mathcal{O}_X(n) =
+\widetilde{M}(n) \cong \widetilde{M(n)}
+$$
+Thus (\ref{equation-map-global-sections-degree-n}) becomes a map
+\begin{equation}
+\label{equation-map-global-sections-degree-n-simplified}
+M_n \longrightarrow \Gamma(X, \widetilde{M}(n))
+\end{equation}
+and (\ref{equation-global-sections-more-generally}) becomes a map
+\begin{equation}
+\label{equation-global-sections-more-generally-simplified}
+M \longrightarrow
+\bigoplus\nolimits_{n \in \mathbf{Z}} \Gamma(X, \widetilde{M}(n)).
+\end{equation}
+\end{lemma}
+
+\begin{proof}
+Under the assumptions of the lemma $X$ is covered by the
+open subsets $D_{+}(f)$ with $f \in S_1$ and the
+lemma is a consequence of Lemma \ref{lemma-when-invertible} above.
+\end{proof}
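+
+\noindent
+The standard example to keep in mind (our addition): for
+$S = k[x_0, \ldots, x_r]$ with its usual grading, $S$ is generated by
+$S_1$ over $S_0 = k$, so the lemma applies to $X = \mathbf{P}^r_k$.
+
+```latex
+% In this case the lemma gives isomorphisms
+$$
+\mathcal{O}_X(1)^{\otimes n} \cong \mathcal{O}_X(n),
+\qquad
+\widetilde{M} \otimes_{\mathcal{O}_X} \mathcal{O}_X(n)
+\cong \widetilde{M(n)},
+$$
+% i.e. the O_X(n) are the classical Serre twisting sheaves; the map
+% S_n -> Gamma(X, O_X(n)) is in this case even an isomorphism (a
+% classical computation which we do not reprove here).
+```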
+
+\begin{lemma}
+\label{lemma-where-invertible}
+Let $S$ be a graded ring. Set $X = \text{Proj}(S)$. Fix an integer
+$d \geq 1$. The following open subsets of $X$ are equal:
+\begin{enumerate}
+\item The largest open subset $W = W_d \subset X$ such that
+each $\mathcal{O}_X(dn)|_W$ is invertible and all the
+multiplication maps
+$\mathcal{O}_X(nd)|_W \otimes_{\mathcal{O}_W} \mathcal{O}_X(md)|_W
+\to \mathcal{O}_X(nd + md)|_W$
+(see \ref{equation-multiply}) are isomorphisms.
+\item The union of the open subsets $D_{+}(fg)$ with
+$f, g \in S$ homogeneous and $\deg(f) = \deg(g) + d$.
+\end{enumerate}
+Moreover, all the maps
+$\widetilde M(nd)|_W = \widetilde M|_W \otimes_{\mathcal{O}_W}
+\mathcal{O}_X(nd)|_W \to \widetilde{M(nd)}|_W$
+(see \ref{equation-multiply-more-generally}) are isomorphisms.
+\end{lemma}
+
+\begin{proof}
+If $x \in D_{+}(fg)$ with $\deg(f) = \deg(g) + d$ then
+on $D_{+}(fg)$ the sheaves $\mathcal{O}_X(dn)$
+are generated by the element $(f/g)^n = f^{2n}/(fg)^n$. This implies $x$
+is in the open subset $W$ defined in (1) by arguing as in the
+proof of Lemma \ref{lemma-when-invertible}.
+
+\medskip\noindent
+Conversely, suppose that $\mathcal{O}_X(d)$ is free of rank 1
+in an open neighbourhood $V$ of $x \in X$ and all the
+multiplication maps
+$\mathcal{O}_X(nd)|_V \otimes_{\mathcal{O}_V} \mathcal{O}_X(md)|_V
+\to \mathcal{O}_X(nd + md)|_V$ are isomorphisms.
+We may choose $h \in S_{+}$ homogeneous such that $D_{+}(h) \subset V$.
+By the definition of the twists of the structure sheaf we conclude there
+exists an element $s$ of $(S_h)_d$ such that $s^n$ is a basis of $(S_h)_{nd}$
+as a module over $S_{(h)}$ for all $n \in \mathbf{Z}$.
+We may write
+$s = f/h^m$ for some $m \geq 1$ and $f \in S_{d + m \deg(h)}$.
+Set $g = h^m$ so $s = f/g$. Note that $x \in D_{+}(g)$ by construction.
+Note that $g^d \in (S_h)_{d\deg(g)}$.
+By assumption we can write this as a multiple of
+$s^{\deg(g)} = f^{\deg(g)}/g^{\deg(g)}$, say
+$g^d = a/g^e \cdot f^{\deg(g)}/g^{\deg(g)}$.
+Then we conclude that $g^{d + e + \deg(g)} = a f^{\deg(g)}$
+and hence also $x \in D_{+}(f)$. So $x$ is an element of the set defined
+in (2).
+
+\medskip\noindent
+The existence of the generating section $s = f/g$ over
+the affine open $D_{+}(fg)$ whose
+powers freely generate the sheaves of modules
+$\mathcal{O}_X(nd)$ easily implies that the multiplication maps
+$\widetilde M(nd)|_W = \widetilde M|_W \otimes_{\mathcal{O}_W}
+\mathcal{O}_X(nd)|_W \to \widetilde{M(nd)}|_W$
+(see \ref{equation-multiply-more-generally})
+are isomorphisms. Compare with the proof of Lemma \ref{lemma-when-invertible}.
+\end{proof}
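+
+\noindent
+The open $W_d$ can be a proper subset of $X$. Here is a small example
+(our addition): take $S = k[x, y]$ with $\deg(x) = 1$, $\deg(y) = 2$
+and $d = 1$.
+
+```latex
+% A homogeneous element of k[x, y] not divisible by x involves a pure
+% power of y and hence has even degree.  So if x divides neither f
+% nor g, then deg(f) and deg(g) are both even and cannot satisfy
+% deg(f) = deg(g) + 1.  The only point of Proj(S) outside D_+(x) is
+% the homogeneous prime (x), so every D_+(fg) occurring in (2) is
+% contained in D_+(x); conversely f = x, g = 1 shows D_+(x) occurs.
+% Hence
+$$
+W_1 = D_{+}(x) \subsetneq X,
+$$
+% the missing point being the prime (x), where the conditions in (1)
+% fail.
+```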
+
+\noindent
+Recall from Modules, Lemma \ref{modules-lemma-s-open}
+that given an invertible sheaf $\mathcal{L}$ on a locally ringed
+space $X$, and given a global section $s$ of $\mathcal{L}$
+the set $X_s = \{x \in X \mid s \not \in \mathfrak m_x\mathcal{L}_x\}$
+is open.
+
+\begin{lemma}
+\label{lemma-principal-open}
+Let $S$ be a graded ring. Set $X = \text{Proj}(S)$. Fix an integer
+$d \geq 1$. Let $W = W_d \subset X$ be the open subscheme defined in
+Lemma \ref{lemma-where-invertible}. Let $n \geq 1$ and $f \in S_{nd}$.
+Denote $s \in \Gamma(W, \mathcal{O}_W(nd))$ the section which is
+the image of $f$ via (\ref{equation-global-sections}) restricted to $W$. Then
+$$
+W_s = D_{+}(f) \cap W.
+$$
+\end{lemma}
+
+\begin{proof}
+Let $D_{+}(ab) \subset W$ be a standard affine open with
+$a, b \in S$ homogeneous and $\deg(a) = \deg(b) + d$.
+Note that $D_{+}(ab) \cap D_{+}(f) = D_{+}(abf)$.
+On the other hand the restriction of $s$ to $D_{+}(ab)$
+corresponds to the element
+$f/1 = (b^nf/a^n) \cdot (a/b)^n \in (S_{ab})_{nd}$.
+We have seen in the proof of Lemma \ref{lemma-where-invertible} that
+$(a/b)^n$ is a generator for $\mathcal{O}_W(nd)$ over $D_{+}(ab)$.
+We conclude that $W_s \cap D_{+}(ab)$ is the principal open
+associated to $b^nf/a^n \in \mathcal{O}_X(D_{+}(ab))$.
+Thus the result of the lemma is clear.
+\end{proof}
+
+\noindent
+The following lemma states the properties that we will later use to
+characterize schemes with an ample invertible sheaf.
+
+\begin{lemma}
+\label{lemma-ample-on-proj}
+Let $S$ be a graded ring.
+Let $X = \text{Proj}(S)$.
+Let $Y \subset X$ be a quasi-compact open subscheme.
+Denote $\mathcal{O}_Y(n)$ the restriction of
+$\mathcal{O}_X(n)$ to $Y$.
+There exists an integer $d \geq 1$ such that
+\begin{enumerate}
+\item the subscheme $Y$ is contained in the open $W_d$ defined
+in Lemma \ref{lemma-where-invertible},
+\item the sheaf $\mathcal{O}_Y(dn)$ is invertible for all $n \in \mathbf{Z}$,
+\item all the maps
+$\mathcal{O}_Y(nd) \otimes_{\mathcal{O}_Y} \mathcal{O}_Y(m)
+\longrightarrow
+\mathcal{O}_Y(nd + m)$
+of Equation (\ref{equation-multiply}) are isomorphisms,
+\item all the maps
+$\widetilde M(nd)|_Y = \widetilde M|_Y \otimes_{\mathcal{O}_Y}
+\mathcal{O}_X(nd)|_Y \to \widetilde{M(nd)}|_Y$
+(see \ref{equation-multiply-more-generally}) are isomorphisms,
+\item given $f \in S_{nd}$ denote $s \in \Gamma(Y, \mathcal{O}_Y(nd))$
+the image of $f$ via (\ref{equation-global-sections})
+restricted to $Y$, then $D_{+}(f) \cap Y = Y_s$,
+\item a basis for the topology on $Y$ is given
+by the collection of opens $Y_s$, where $s \in \Gamma(Y, \mathcal{O}_Y(nd))$,
+$n \geq 1$, and
+\item a basis for the topology of $Y$ is given
+by those opens $Y_s \subset Y$, for
+$s \in \Gamma(Y, \mathcal{O}_Y(nd))$, $n \geq 1$ which are affine.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $Y$ is quasi-compact there exist finitely many homogeneous
+$f_i \in S_{+}$, $i = 1, \ldots, n$ such that the standard opens
+$D_{+}(f_i)$ give an open covering of $Y$. Let $d_i = \deg(f_i)$ and set
+$d = d_1 \ldots d_n$. Note that $D_{+}(f_i) = D_{+}(f_i^{d/d_i})$
+and hence we see immediately that $Y \subset W_d$, by characterization
+(2) in Lemma \ref{lemma-where-invertible} or
+by (1) using Lemma \ref{lemma-when-invertible}.
+Note that (1) implies (2), (3) and (4) by Lemma \ref{lemma-where-invertible}.
+(Note that (3) is a special case of (4).)
+Assertion (5) follows from Lemma \ref{lemma-principal-open}.
+Assertions (6) and (7) follow because the open subsets $D_{+}(f)$
+form a basis for the topology of $X$ and are affine.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-comparison-proj-quasi-coherent}
+Let $S$ be a graded ring. Set $X = \text{Proj}(S)$.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Set $M = \bigoplus_{n \in \mathbf{Z}} \Gamma(X, \mathcal{F}(n))$ as
+a graded $S$-module, using
+(\ref{equation-global-sections-module}) and (\ref{equation-global-sections}).
+Then there is a canonical $\mathcal{O}_X$-module map
+$$
+\widetilde{M} \longrightarrow \mathcal{F}
+$$
+functorial in $\mathcal{F}$ such that the induced map
+$M_0 \to \Gamma(X, \mathcal{F})$ is the identity.
+\end{lemma}
+
+\begin{proof}
+Let $f \in S$ be homogeneous of degree $d > 0$. Recall that
+$\widetilde{M}|_{D_{+}(f)}$ corresponds to the
+$S_{(f)}$-module $M_{(f)}$ by Lemma \ref{lemma-proj-sheaves}.
+Thus we can define a canonical map
+$$
+M_{(f)} \longrightarrow \Gamma(D_+(f), \mathcal{F}),\quad
+m/f^n \longmapsto m|_{D_+(f)} \otimes f|_{D_+(f)}^{-n}
+$$
+which makes sense because $f|_{D_+(f)}$ is a trivializing
+section of the invertible sheaf $\mathcal{O}_X(d)|_{D_+(f)}$, see
+Lemma \ref{lemma-when-invertible} and its proof.
+Since $\widetilde{M}$ is quasi-coherent, this leads to a canonical
+map
+$$
+\widetilde{M}|_{D_+(f)} \longrightarrow \mathcal{F}|_{D_+(f)}
+$$
+via Schemes, Lemma \ref{schemes-lemma-compare-constructions}.
+We obtain a global map if we prove that the displayed maps glue on overlaps.
+Proof of this is omitted. We also omit the proof of the final statement.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Functoriality of Proj}
+\label{section-functoriality-proj}
+
+\noindent
+A graded ring map $\psi : A \to B$ does not always give rise to a morphism of
+associated projective homogeneous spectra. The reason is that
+the inverse image $\psi^{-1}(\mathfrak q)$
+of a homogeneous prime $\mathfrak q \subset B$ may
+contain the irrelevant prime $A_{+}$ even if $\mathfrak q$ does not
+contain $B_{+}$.
+The correct result is stated as follows.
+
+\begin{lemma}
+\label{lemma-morphism-proj}
+Let $A$, $B$ be two graded rings.
+Set $X = \text{Proj}(A)$ and $Y = \text{Proj}(B)$.
+Let $\psi : A \to B$ be a graded ring map.
+Set
+$$
+U(\psi)
+=
+\bigcup\nolimits_{f \in A_{+}\ \text{homogeneous}} D_{+}(\psi(f))
+\subset Y.
+$$
+Then there is a canonical morphism of schemes
+$$
+r_\psi :
+U(\psi)
+\longrightarrow
+X
+$$
+and a map of $\mathbf{Z}$-graded $\mathcal{O}_{U(\psi)}$-algebras
+$$
+\theta = \theta_\psi :
+r_\psi^*\left(
+\bigoplus\nolimits_{d \in \mathbf{Z}} \mathcal{O}_X(d)
+\right)
+\longrightarrow
+\bigoplus\nolimits_{d \in \mathbf{Z}} \mathcal{O}_{U(\psi)}(d).
+$$
+The triple $(U(\psi), r_\psi, \theta)$ is
+characterized by the following properties:
+\begin{enumerate}
+\item For every $d \geq 0$ the diagram
+$$
+\xymatrix{
+A_d \ar[d] \ar[rr]_{\psi} & &
+B_d \ar[d] \\
+\Gamma(X, \mathcal{O}_X(d)) \ar[r]^-\theta &
+\Gamma(U(\psi), \mathcal{O}_Y(d)) &
+\Gamma(Y, \mathcal{O}_Y(d)) \ar[l]
+}
+$$
+is commutative.
+\item For any $f \in A_{+}$ homogeneous
+we have $r_\psi^{-1}(D_{+}(f)) = D_{+}(\psi(f))$ and
+the restriction of $r_\psi$ to $D_{+}(\psi(f))$
+corresponds to the ring map
+$A_{(f)} \to B_{(\psi(f))}$ induced by $\psi$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Clearly condition (2) uniquely determines the morphism of schemes
+and the open subset $U(\psi)$. Pick $f \in A_d$ with $d \geq 1$.
+Note that
+$\mathcal{O}_X(n)|_{D_{+}(f)}$ corresponds to the
+$A_{(f)}$-module $(A_f)_n$ and that
+$\mathcal{O}_Y(n)|_{D_{+}(\psi(f))}$ corresponds to the
+$B_{(\psi(f))}$-module $(B_{\psi(f)})_n$. In other words $\theta$
+when restricted to $D_{+}(\psi(f))$ corresponds to a map of
+$\mathbf{Z}$-graded $B_{(\psi(f))}$-algebras
+$$
+A_f \otimes_{A_{(f)}} B_{(\psi(f))}
+\longrightarrow
+B_{\psi(f)}
+$$
+Condition (1) determines the images of all elements of $A$.
+Since $f$ is an invertible element which is mapped to $\psi(f)$
+we see that $1/f^m$ is mapped to $1/\psi(f)^m$. It easily follows
+from this that $\theta$ is uniquely determined, namely it is
+given by the rule
+$$
+a/f^m \otimes b/\psi(f)^e \longmapsto \psi(a)b/\psi(f)^{m + e}.
+$$
+To show existence we remark that the proof of uniqueness above gave
+a well defined prescription for the morphism $r$ and the map $\theta$
+when restricted to every standard open of the form
+$D_{+}(\psi(f)) \subset U(\psi)$ into $D_{+}(f)$.
+Call these $r_f$ and $\theta_f$.
+Hence we only need to verify that if $D_{+}(f) \subset D_{+}(g)$
+for some $f, g \in A_{+}$ homogeneous, then the restriction of
+$r_g$ to $D_{+}(\psi(f))$ matches $r_f$. This is clear from the
+formulas given for $r$ and $\theta$ above.
+\end{proof}
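+
+\noindent
+The open subset $U(\psi)$ really can be proper, as the discussion at
+the start of this section predicts; here is a minimal illustration
+(our addition).
+
+```latex
+% Take the inclusion psi : A = k[x] -> B = k[x, y], both standard
+% graded.  Every homogeneous element of A_+ is a multiple of a power
+% of x, so
+$$
+U(\psi) = \bigcup\nolimits_{n \geq 1} D_{+}(\psi(x^n)) = D_{+}(x)
+\subset Y = \mathbf{P}^1_k,
+$$
+% and r_psi : D_+(x) = Spec(k[y/x]) -> Proj(A) = Spec(k) is the
+% structure morphism of this affine line.  The missing point is the
+% homogeneous prime q = (x) \subset B: its inverse image
+% psi^{-1}(q) = (x) is the irrelevant ideal A_+, exactly the
+% phenomenon described at the beginning of the section.
+```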
+
+\begin{lemma}
+\label{lemma-morphism-proj-transitive}
+Let $A$, $B$, and $C$ be graded rings.
+Set $X = \text{Proj}(A)$, $Y = \text{Proj}(B)$ and $Z = \text{Proj}(C)$.
+Let $\varphi : A \to B$, $\psi : B \to C$ be graded ring maps.
+Then we have
+$$
+U(\psi \circ \varphi) = r_\psi^{-1}(U(\varphi))
+\quad
+\text{and}
+\quad
+r_{\psi \circ \varphi}
+=
+r_\varphi \circ r_\psi|_{U(\psi \circ \varphi)}.
+$$
+In addition we have
+$$
+\theta_\psi \circ r_\psi^*\theta_\varphi
+=
+\theta_{\psi \circ \varphi}
+$$
+with obvious notation.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-graded-rings-map-proj}
+With hypotheses and notation as in Lemma \ref{lemma-morphism-proj} above.
+Assume $A_d \to B_d$ is surjective for all $d \gg 0$. Then
+\begin{enumerate}
+\item $U(\psi) = Y$,
+\item $r_\psi : Y \to X$ is a closed immersion, and
+\item the maps $\theta : r_\psi^*\mathcal{O}_X(n) \to \mathcal{O}_Y(n)$
+are surjective but not isomorphisms in general (even if $A \to B$ is
+surjective).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from the definition of $U(\psi)$ and the fact that
+$D_{+}(f) = D_{+}(f^n)$ for any $n > 0$. For $f \in A_{+}$ homogeneous
+we see that $A_{(f)} \to B_{(\psi(f))}$ is surjective because
+any element of $B_{(\psi(f))}$ can be represented by a fraction
+$b/\psi(f)^n$ with $n$ arbitrarily large (which forces the degree of
+$b \in B$ to be large). This proves (2).
+The same argument shows the map
+$$
+A_f \to B_{\psi(f)}
+$$
+is surjective which proves the surjectivity of $\theta$.
+For an example where this map is not an isomorphism
+consider the graded ring $A = k[x, y]$ where $k$ is a field
+and $\deg(x) = 1$, $\deg(y) = 2$, and let $\psi$ be the quotient map
+$A \to B = A/(x) = k[y]$. Note that $\mathcal{O}_Y(1) = 0$ in this
+case (as $B_{2n + 1} = 0$ for all $n$).
+But it is easy to see that $r_\psi^*\mathcal{O}_X(1)$
+is not zero. (There are less silly examples.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-eventual-iso-graded-rings-map-proj}
+With hypotheses and notation as in Lemma \ref{lemma-morphism-proj} above.
+Assume $A_d \to B_d$ is an isomorphism for all $d \gg 0$. Then
+\begin{enumerate}
+\item $U(\psi) = Y$,
+\item $r_\psi : Y \to X$ is an isomorphism, and
+\item the maps $\theta : r_\psi^*\mathcal{O}_X(n) \to \mathcal{O}_Y(n)$
+are isomorphisms.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have (1) by Lemma \ref{lemma-surjective-graded-rings-map-proj}.
+Let $f \in A_{+}$ be homogeneous. The assumption on $\psi$ implies that
+$A_f \to B_{\psi(f)}$ is an isomorphism (details omitted).
+Thus it is clear that
+$r_\psi$ and $\theta$ restrict to isomorphisms over $D_{+}(f)$.
+The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-graded-rings-generated-degree-1-map-proj}
+With hypotheses and notation as in Lemma \ref{lemma-morphism-proj} above.
+Assume $A_d \to B_d$ is surjective for $d \gg 0$ and that $A$ is generated
+by $A_1$ over $A_0$. Then
+\begin{enumerate}
+\item $U(\psi) = Y$,
+\item $r_\psi : Y \to X$ is a closed immersion, and
+\item the maps $\theta : r_\psi^*\mathcal{O}_X(n) \to \mathcal{O}_Y(n)$
+are isomorphisms.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-eventual-iso-graded-rings-map-proj} and
+\ref{lemma-morphism-proj-transitive}
+we may replace $B$ by the image of $A \to B$
+without changing $Y$ or the sheaves $\mathcal{O}_Y(n)$.
+Thus we may assume that $A \to B$ is surjective. By
+Lemma \ref{lemma-surjective-graded-rings-map-proj} we get (1) and (2)
+and surjectivity in (3).
+By Lemma \ref{lemma-apply-modules} we see that both
+$\mathcal{O}_X(n)$ and $\mathcal{O}_Y(n)$
+are invertible. Hence $\theta$ is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-map-proj}
+With hypotheses and notation as in Lemma \ref{lemma-morphism-proj} above.
+Assume there exists a ring map $R \to A_0$ and a ring map
+$R \to R'$ such that $B = R' \otimes_R A$. Then
+\begin{enumerate}
+\item $U(\psi) = Y$,
+\item the diagram
+$$
+\xymatrix{
+Y = \text{Proj}(B) \ar[r]_{r_\psi} \ar[d] &
+\text{Proj}(A) = X \ar[d] \\
+\Spec(R') \ar[r] &
+\Spec(R)
+}
+$$
+is a fibre product square, and
+\item the maps $\theta : r_\psi^*\mathcal{O}_X(n) \to \mathcal{O}_Y(n)$
+are isomorphisms.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows immediately by looking at what happens over the standard
+opens $D_{+}(f)$ for $f \in A_{+}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization-map-proj}
+With hypotheses and notation as in Lemma \ref{lemma-morphism-proj} above.
+Assume there exists a $g \in A_0$ such that $\psi$ induces an
+isomorphism $A_g \to B$. Then
+$U(\psi) = Y$, $r_\psi : Y \to X$ is an open immersion
+which induces an isomorphism of $Y$ with the inverse image
+of $D(g) \subset \Spec(A_0)$. Moreover the map $\theta$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-base-change-map-proj} above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-d-uple}
+Let $S$ be a graded ring. Let $d \geq 1$. Set $S' = S^{(d)}$ with notation
+as in Algebra, Section \ref{algebra-section-graded}. Set
+$X = \text{Proj}(S)$ and $X' = \text{Proj}(S')$. There is a canonical
+isomorphism $i : X \to X'$ of schemes such that
+\begin{enumerate}
+\item for any graded $S$-module $M$ setting $M' = M^{(d)}$,
+we have a canonical isomorphism $\widetilde{M} \to i^*\widetilde{M'}$,
+\item we have canonical isomorphisms
+$\mathcal{O}_{X}(nd) \to i^*\mathcal{O}_{X'}(n)$
+\end{enumerate}
+and these isomorphisms are compatible with the multiplication maps
+of Lemma \ref{lemma-widetilde-tensor} and hence with the maps
+(\ref{equation-multiply}),
+(\ref{equation-multiply-on-sheaf}),
+(\ref{equation-global-sections}),
+(\ref{equation-global-sections-module}),
+(\ref{equation-multiply-more-generally}), and
+(\ref{equation-global-sections-more-generally}) (see proof for precise
+statements).
+\end{lemma}
+
+\begin{proof}
+The injective ring map $S' \to S$ (which is not a homomorphism of graded rings
+due to our conventions), induces a map $j : \Spec(S) \to \Spec(S')$.
+Given a graded prime ideal $\mathfrak p \subset S$ we see that
+$\mathfrak p' = j(\mathfrak p) = S' \cap \mathfrak p$
+is a graded prime ideal of $S'$.
+Moreover, if $f \in S_+$ is homogeneous and $f \not \in \mathfrak p$, then
+$f^d \in S'_+$ and $f^d \not \in \mathfrak p'$. Conversely, if
+$\mathfrak p' \subset S'$ is a graded prime ideal not containing some
+homogeneous element $f \in S'_+$, then
+$\mathfrak p = \{g \in S \mid g^d \in \mathfrak p'\}$ is a
+graded prime ideal of $S$ not containing $f$ whose image under $j$
+is $\mathfrak p'$. To see that $\mathfrak p$ is an ideal, note
+that if $g, h \in \mathfrak p$, then
+$(g + h)^{2d} \in \mathfrak p'$ by the binomial formula
+(every term of the expansion is divisible by $g^d$ or $h^d$),
+hence $(g + h)^d \in \mathfrak p'$ as $\mathfrak p'$ is a prime,
+and hence $g + h \in \mathfrak p$.
+In this way we see that $j$ induces a homeomorphism $i : X \to X'$.
+Moreover, given $f \in S_+$ homogeneous, then we have
+$S_{(f)} \cong S'_{(f^d)}$. Since these isomorphisms are compatible
+with the restriction mappings of
+Lemma \ref{lemma-standard-open}, we see that there exists an
+isomorphism $i^\sharp : i^{-1}\mathcal{O}_{X'} \to \mathcal{O}_X$ of
+structure sheaves on $X$ and $X'$, hence $i$ is an isomorphism
+of schemes.
+
+\medskip\noindent
+Let $M$ be a graded $S$-module. Given $f \in S_+$ homogeneous, we have
+$M_{(f)} \cong M'_{(f^d)}$, hence in exactly the same manner as above
+we obtain the isomorphism in (1). The isomorphisms in (2) are a special
+case of (1) for $M = S(nd)$ which gives $M' = S'(n)$. Let $M$ and $N$
+be graded $S$-modules. Then we have
+$$
+M' \otimes_{S'} N' =
+(M \otimes_S N)^{(d)} =
+(M \otimes_S N)'
+$$
+as can be verified directly from the definitions. Having said this
+the compatibility with the multiplication maps of
+Lemma \ref{lemma-widetilde-tensor} is the commutativity of the diagram
+$$
+\xymatrix{
+\widetilde M \otimes_{\mathcal{O}_X} \widetilde N
+\ar[d]_{(1) \otimes (1)} \ar[r] &
+\widetilde{M \otimes_S N} \ar[d]^{(1)} \\
+i^*(\widetilde{M'} \otimes_{\mathcal{O}_{X'}} \widetilde{N'}) \ar[r] &
+i^*(\widetilde{M' \otimes_{S'} N'})
+}
+$$
+This can be seen by looking at the construction of the maps
+over the open $D_+(f) = D_+(f^d)$ where the top horizontal
+arrow is given by the map
+$M_{(f)} \times N_{(f)} \to (M \otimes_S N)_{(f)}$
+and the lower horizontal arrow by the map
+$M'_{(f^d)} \times N'_{(f^d)} \to (M' \otimes_{S'} N')_{(f^d)}$.
+Since these maps agree via the identifications
+$M_{(f)} = M'_{(f^d)}$, etc, we get the desired compatibility.
+We omit the proof of the other compatibilities.
+\end{proof}
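+
+\medskip\noindent
+As an illustration of the two computational steps in the proof above
+(the binomial argument showing that $\mathfrak p$ is closed under
+addition, and the identification $S_{(f)} \cong S'_{(f^d)}$), here is
+a small sketch in Python using SymPy. It is purely illustrative, not
+part of the formal development, and assumes SymPy is available.
+
+```python
+from sympy import symbols, expand, simplify, Poly
+
+g, h, x, y = symbols('g h x y')
+d = 3
+
+# Binomial step: every monomial of (g + h)^(2d) is divisible by g^d or
+# h^d, so if g^d and h^d lie in the prime p', then so does (g + h)^(2d).
+p = Poly(expand((g + h)**(2*d)), g, h)
+assert all(a >= d or b >= d for (a, b), _ in p.terms())
+
+# Identification S_(f) = S'_(f^d) for S = Q[x, y], d = 2, f = x: the
+# degree-0 fraction y/x of S_(x) equals the fraction (x*y)/x^2 of
+# S'_(x^2), since x*y has degree 2 and hence lies in S' = S^(2).
+assert simplify(y/x - (x*y)/x**2) == 0
+```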
+
+
+
+
+
+
+
+
+
+
+
+\section{Morphisms into Proj}
+\label{section-morphisms-proj}
+
+\noindent
+Let $S$ be a graded ring.
+Let $X = \text{Proj}(S)$ be the homogeneous spectrum of $S$.
+Let $d \geq 1$ be an integer.
+Consider the open subscheme
+\begin{equation}
+\label{equation-Ud}
+U_d = \bigcup\nolimits_{f \in S_d} D_{+}(f)
+\quad\subset\quad
+X = \text{Proj}(S)
+\end{equation}
+Note that $d | d' \Rightarrow U_d \subset U_{d'}$ and
+$X = \bigcup_d U_d$. Neither $X$ nor $U_d$ need
+be quasi-compact, see Algebra, Lemma \ref{algebra-lemma-topology-proj}.
+Let us write $\mathcal{O}_{U_d}(n) = \mathcal{O}_X(n)|_{U_d}$.
+By Lemma \ref{lemma-when-invertible}
+we know that $\mathcal{O}_{U_d}(nd)$, $n \in \mathbf{Z}$
+is an invertible $\mathcal{O}_{U_d}$-module and
+that all the multiplication maps
+$\mathcal{O}_{U_d}(nd) \otimes_{\mathcal{O}_{U_d}} \mathcal{O}_{U_d}(m)
+\to \mathcal{O}_{U_d}(nd + m)$ of
+(\ref{equation-multiply}) are isomorphisms. In particular we have
+$\mathcal{O}_{U_d}(nd) \cong \mathcal{O}_{U_d}(d)^{\otimes n}$.
+The graded ring map (\ref{equation-global-sections}) on global sections
+combined with restriction to $U_d$ give a homomorphism of graded rings
+\begin{equation}
+\label{equation-psi-d}
+\psi^d : S^{(d)} \longrightarrow \Gamma_*(U_d, \mathcal{O}_{U_d}(d)).
+\end{equation}
+For the notation $S^{(d)}$, see Algebra, Section \ref{algebra-section-graded}.
+For the notation $\Gamma_*$ see
+Modules, Definition \ref{modules-definition-gamma-star}.
+Moreover, since $U_d$ is covered by the opens $D_{+}(f)$, $f \in S_d$
+we see that $\mathcal{O}_{U_d}(d)$ is globally generated
+by the sections in the image of
+$\psi^d_1 : S^{(d)}_1 = S_d \to \Gamma(U_d, \mathcal{O}_{U_d}(d))$, see
+Modules, Definition \ref{modules-definition-globally-generated}.
+
+\medskip\noindent
+Let $Y$ be a scheme, and let $\varphi : Y \to X$ be a morphism of schemes.
+Assume the image $\varphi(Y)$ is contained in the open subscheme
+$U_d$ of $X$.
+By the discussion following
+Modules, Definition \ref{modules-definition-gamma-star}
+we obtain a homomorphism of graded rings
+$$
+\Gamma_*(U_d, \mathcal{O}_{U_d}(d))
+\longrightarrow
+\Gamma_*(Y, \varphi^*\mathcal{O}_X(d)).
+$$
+The composition of this and $\psi^d$ gives a graded ring
+homomorphism
+\begin{equation}
+\label{equation-psi-phi-d}
+\psi_\varphi^d :
+S^{(d)}
+\longrightarrow
+\Gamma_*(Y, \varphi^*\mathcal{O}_X(d))
+\end{equation}
+which has the property that the invertible sheaf
+$\varphi^*\mathcal{O}_X(d)$ is globally generated
+by the sections in the image of
+$(S^{(d)})_1 = S_d \to \Gamma(Y, \varphi^*\mathcal{O}_X(d))$.
+
+\begin{lemma}
+\label{lemma-converse-construction}
+Let $S$ be a graded ring, and $X = \text{Proj}(S)$.
+Let $d \geq 1$ and $U_d \subset X$ as above.
+Let $Y$ be a scheme.
+Let $\mathcal{L}$ be an invertible sheaf on $Y$.
+Let $\psi : S^{(d)} \to \Gamma_*(Y, \mathcal{L})$ be
+a graded ring homomorphism such that $\mathcal{L}$ is
+generated by the sections in the image of
+$\psi|_{S_d} : S_d \to \Gamma(Y, \mathcal{L})$.
+Then there exist a morphism
+$\varphi : Y \to X$ such that $\varphi(Y) \subset U_d$ and
+an isomorphism $\alpha : \varphi^*\mathcal{O}_{U_d}(d) \to \mathcal{L}$
+such that $\psi_\varphi^d$ agrees with $\psi$ via $\alpha$:
+$$
+\xymatrix{
+\Gamma_*(Y, \mathcal{L}) &
+\Gamma_*(Y, \varphi^*\mathcal{O}_{U_d}(d)) \ar[l]^-\alpha &
+\Gamma_*(U_d, \mathcal{O}_{U_d}(d)) \ar[l]^-{\varphi^*} \\
+S^{(d)} \ar[u]^\psi & &
+S^{(d)} \ar[u]^{\psi^d} \ar[ul]^{\psi^d_\varphi} \ar[ll]_{\text{id}}
+}
+$$
+commutes. Moreover, the pair $(\varphi, \alpha)$ is unique.
+\end{lemma}
+
+\begin{proof}
+Pick $f \in S_d$. Denote $s = \psi(f) \in \Gamma(Y, \mathcal{L})$.
+On the open set $Y_s$ where $s$ does not vanish multiplication
+by $s$ induces an isomorphism $\mathcal{O}_{Y_s} \to \mathcal{L}|_{Y_s}$,
+see Modules, Lemma \ref{modules-lemma-s-open}. We will denote
+the inverse of this map $x \mapsto x/s$, and similarly for
+powers of $\mathcal{L}$. Using this we
+define a ring map $\psi_{(f)} : S_{(f)} \to \Gamma(Y_s, \mathcal{O})$
+by mapping the fraction $a/f^n$ to $\psi(a)/s^n$.
+By Schemes, Lemma \ref{schemes-lemma-morphism-into-affine}
+this corresponds to a morphism
+$\varphi_f : Y_s \to \Spec(S_{(f)}) = D_{+}(f)$.
+We also introduce the isomorphism
+$\alpha_f : \varphi_f^*\mathcal{O}_{D_{+}(f)}(d) \to \mathcal{L}|_{Y_s}$
+which maps the pullback of the trivializing section
+$f$ over $D_{+}(f)$ to the trivializing section $s$ over $Y_s$.
+With this choice the commutativity of the diagram in the lemma
+holds with $Y$ replaced by $Y_s$, $\varphi$ replaced by $\varphi_f$,
+and $\alpha$ replaced by $\alpha_f$; verification omitted.
+
+\medskip\noindent
+Suppose that $f' \in S_d$ is a second element, and denote
+$s' = \psi(f') \in \Gamma(Y, \mathcal{L})$. Then
+$Y_s \cap Y_{s'} = Y_{ss'}$ and similarly
+$D_{+}(f) \cap D_{+}(f') = D_{+}(ff')$.
+In Lemma \ref{lemma-ample-on-proj} we saw that
+$D_{+}(f') \cap D_{+}(f)$ is the same as the set
+of points of $D_{+}(f)$ where the section of
+$\mathcal{O}_X(d)$ defined by $f'$ does not vanish.
+Hence
+$\varphi_f^{-1}(D_{+}(f') \cap D_{+}(f)) = Y_s \cap Y_{s'}
+= \varphi_{f'}^{-1}(D_{+}(f') \cap D_{+}(f))$.
+On $D_{+}(f) \cap D_{+}(f')$ the fraction $f/f'$ is an
+invertible section of the structure sheaf with inverse
+$f'/f$. Note that $\psi_{(f')}(f/f') = \psi(f)/s' = s/s'$
+and $\psi_{(f)}(f'/f) = \psi(f')/s = s'/s$. We claim there
+is a unique ring map
+$S_{(ff')} \to \Gamma(Y_{ss'}, \mathcal{O})$ making the
+following diagram commute
+$$
+\xymatrix{
+\Gamma(Y_s, \mathcal{O}) \ar[r] &
+\Gamma(Y_{ss'}, \mathcal{O}) &
+\Gamma(Y_{s'}, \mathcal{O}) \ar[l]\\
+S_{(f)} \ar[r] \ar[u]^{\psi_{(f)}} &
+S_{(ff')} \ar[u] &
+S_{(f')} \ar[l] \ar[u]^{\psi_{(f')}}
+}
+$$
+It exists because we may use the rule
+$x/(ff')^n \mapsto \psi(x)/(ss')^n$, which ``works'' by the formulas
+above. Uniqueness follows as $\text{Proj}(S)$ is separated, see
+Lemma \ref{lemma-proj-separated} and its proof. This shows that the
+morphisms $\varphi_f$ and $\varphi_{f'}$ agree over $Y_s \cap Y_{s'}$.
+The restrictions of $\alpha_f$ and $\alpha_{f'}$ agree over
+$Y_s \cap Y_{s'}$ because the regular functions $s/s'$ and
+$\psi_{(f')}(f/f')$ agree. This proves that the morphisms $\varphi_f$
+glue to a global morphism from $Y$ into $U_d \subset X$, and
+that the maps $\alpha_f$ glue to an isomorphism satisfying the
+conditions of the lemma.
+
+\medskip\noindent
+We still have to show the pair $(\varphi, \alpha)$ is unique.
+Suppose $(\varphi', \alpha')$ is a second such pair.
+Let $f \in S_d$. By the commutativity of the diagrams in the lemma we have
+that the inverse images of $D_{+}(f)$ under both $\varphi$ and $\varphi'$
+are equal to $Y_{\psi(f)}$. Since the opens $D_{+}(f)$ are a basis
+for the topology on $X$, and since $X$ is a sober topological
+space (see Schemes, Lemma \ref{schemes-lemma-scheme-sober})
+this means the maps $\varphi$ and $\varphi'$ are the same
+on underlying topological spaces. Let us use $s = \psi(f)$ to
+trivialize the invertible sheaf $\mathcal{L}$ over $Y_{\psi(f)}$.
+By the commutativity of the diagrams we have that
+$\alpha^{\otimes n}(\psi^d_{\varphi}(x)) =
+\psi(x) = (\alpha')^{\otimes n}(\psi^d_{\varphi'}(x))$
+for all $x \in S_{nd}$. By construction of $\psi^d_{\varphi}$
+and $\psi^d_{\varphi'}$ we have
+$\psi^d_{\varphi}(x) = \varphi^\sharp(x/f^n) \psi^d_{\varphi}(f^n)$
+over $Y_{\psi(f)}$,
+and similarly for $\psi^d_{\varphi'}$. By the commutativity of
+the diagrams of the lemma we deduce that
+$\varphi^\sharp(x/f^n) = (\varphi')^\sharp(x/f^n)$.
+This proves that $\varphi$ and $\varphi'$ induce the same morphism
+from $Y_{\psi(f)}$ into the affine scheme $D_{+}(f) = \Spec(S_{(f)})$.
+Hence $\varphi$ and $\varphi'$ are the same as morphisms.
+Finally, it remains to show that the commutativity of the
+diagram of the lemma singles out, given $\varphi$, a unique
+$\alpha$. We omit the verification.
+\end{proof}
+
+\noindent
+We continue the discussion from above the lemma.
+Let $S$ be a graded ring.
+Let $Y$ be a scheme. We will consider {\it triples}
+$(d, \mathcal{L}, \psi)$ where
+\begin{enumerate}
+\item $d \geq 1$ is an integer,
+\item $\mathcal{L}$ is an invertible $\mathcal{O}_Y$-module, and
+\item $\psi : S^{(d)} \to \Gamma_*(Y, \mathcal{L})$ is a graded
+ring homomorphism such that $\mathcal{L}$ is generated by
+the global sections $\psi(f)$, with $f \in S_d$.
+\end{enumerate}
+Given a morphism $h : Y' \to Y$ and a triple
+$(d, \mathcal{L}, \psi)$ over $Y$ we can pull it back to the
+triple $(d, h^*\mathcal{L}, h^* \circ \psi)$.
+Given two triples $(d, \mathcal{L}, \psi)$ and
+$(d, \mathcal{L}', \psi')$ with the same integer $d$
+we say they are {\it strictly equivalent} if there exists
+an isomorphism $\beta : \mathcal{L} \to \mathcal{L}'$
+such that $\beta \circ \psi = \psi'$ as graded
+ring maps $S^{(d)} \to \Gamma_*(Y, \mathcal{L}')$.
+
+\medskip\noindent
+For each integer $d \geq 1$ we define
+\begin{eqnarray*}
+F_d : \Sch^{opp} & \longrightarrow & \textit{Sets}, \\
+Y & \longmapsto &
+\{\text{strict equivalence classes of triples }
+(d, \mathcal{L}, \psi)
+\text{ as above}\}
+\end{eqnarray*}
+with pullbacks as defined above.
+
+\begin{lemma}
+\label{lemma-proj-functor-strict}
+Let $S$ be a graded ring.
+Let $X = \text{Proj}(S)$.
+The open subscheme $U_d \subset X$ (\ref{equation-Ud}) represents the
+functor $F_d$ and the triple $(d, \mathcal{O}_{U_d}(d), \psi^d)$
+defined above is the universal family (see
+Schemes, Section \ref{schemes-section-representable}).
+\end{lemma}
+
+\begin{proof}
+This is a reformulation of Lemma \ref{lemma-converse-construction}
+\end{proof}
+
+\begin{lemma}
+\label{lemma-apply}
+Let $S$ be a graded ring generated as an $S_0$-algebra by
+the elements of $S_1$. In this case the scheme $X = \text{Proj}(S)$
+represents the functor which associates to a scheme
+$Y$ the set of pairs $(\mathcal{L}, \psi)$, where
+\begin{enumerate}
+\item $\mathcal{L}$ is an invertible $\mathcal{O}_Y$-module, and
+\item $\psi : S \to \Gamma_*(Y, \mathcal{L})$ is a graded
+ring homomorphism such that $\mathcal{L}$ is generated by
+the global sections $\psi(f)$, with $f \in S_1$
+\end{enumerate}
+up to strict equivalence as above.
+\end{lemma}
+
+\begin{proof}
+Under the assumptions of the lemma we have $X = U_1$ and the
+lemma is a reformulation of Lemma \ref{lemma-proj-functor-strict} above.
+\end{proof}
+
+\noindent
+We end this section with a discussion of a functor corresponding
+to $\text{Proj}(S)$ for a general graded ring $S$.
+We advise the reader to skip the rest of this section.
+
+\medskip\noindent
+Fix an arbitrary graded ring $S$. Let $T$ be a scheme.
+We will say two triples $(d, \mathcal{L}, \psi)$ and
+$(d', \mathcal{L}', \psi')$ over $T$ with possibly different integers
+$d$, $d'$ are {\it equivalent} if there exists
+an isomorphism
+$\beta : \mathcal{L}^{\otimes d'} \to (\mathcal{L}')^{\otimes d}$
+of invertible sheaves over $T$
+such that $\beta \circ \psi|_{S^{(dd')}}$ and $\psi'|_{S^{(dd')}}$ agree
+as graded ring maps $S^{(dd')} \to \Gamma_*(T, (\mathcal{L}')^{\otimes dd'})$.
+
+\begin{lemma}
+\label{lemma-equivalent}
+Let $S$ be a graded ring. Set $X = \text{Proj}(S)$. Let $T$ be a scheme.
+Let $(d, \mathcal{L}, \psi)$ and $(d', \mathcal{L}', \psi')$
+be two triples over $T$. The following are equivalent:
+\begin{enumerate}
+\item Let $n = \text{lcm}(d, d')$. Write $n = ad = a'd'$. There exists
+an isomorphism
+$\beta : \mathcal{L}^{\otimes a} \to (\mathcal{L}')^{\otimes a'}$
+with the property that
+$\beta \circ \psi|_{S^{(n)}}$ and $\psi'|_{S^{(n)}}$ agree
+as graded ring maps $S^{(n)} \to \Gamma_*(T, (\mathcal{L}')^{\otimes n})$.
+\item The triples $(d, \mathcal{L}, \psi)$ and $(d', \mathcal{L}', \psi')$
+are equivalent.
+\item For some positive integer $n = ad = a'd'$ there exists
+an isomorphism
+$\beta : \mathcal{L}^{\otimes a} \to (\mathcal{L}')^{\otimes a'}$
+with the property that
+$\beta \circ \psi|_{S^{(n)}}$ and $\psi'|_{S^{(n)}}$ agree
+as graded ring maps $S^{(n)} \to \Gamma_*(T, (\mathcal{L}')^{\otimes n})$.
+\item The morphisms $\varphi : T \to X$ and $\varphi' : T \to X$
+associated to $(d, \mathcal{L}, \psi)$ and $(d', \mathcal{L}', \psi')$
+are equal.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Clearly (1) implies (2) and (2) implies (3) by restricting to
+more divisible degrees and powers of invertible sheaves.
+Also (3) implies (4) by the uniqueness statement
+in Lemma \ref{lemma-converse-construction}.
+Thus we have to prove that (4) implies (1). Assume (4),
+in other words $\varphi = \varphi'$.
+Note that this implies that we may write
+$\mathcal{L} = \varphi^*\mathcal{O}_X(d)$ and
+$\mathcal{L}' = \varphi^*\mathcal{O}_X(d')$.
+Moreover, via these identifications we have that the graded ring
+maps $\psi$ and $\psi'$ correspond to the restriction of the canonical
+graded ring map
+$$
+S \longrightarrow \bigoplus\nolimits_{n \geq 0} \Gamma(X, \mathcal{O}_X(n))
+$$
+to $S^{(d)}$ and $S^{(d')}$ composed with pullback by $\varphi$
+(by Lemma \ref{lemma-converse-construction} again). Hence taking
+$\beta$ to be the isomorphism
+$$
+(\varphi^*\mathcal{O}_X(d))^{\otimes a} =
+\varphi^*\mathcal{O}_X(n) =
+(\varphi^*\mathcal{O}_X(d'))^{\otimes a'}
+$$
+works.
+\end{proof}
+
+\noindent
+Let $S$ be a graded ring.
+Let $X = \text{Proj}(S)$.
+Over the open subscheme $U_d \subset X = \text{Proj}(S)$
+(\ref{equation-Ud}) we have the triple
+$(d, \mathcal{O}_{U_d}(d), \psi^d)$. Clearly, if $d | d'$ the triples
+$(d, \mathcal{O}_{U_d}(d), \psi^d)$ and
+$(d', \mathcal{O}_{U_{d'}}(d'), \psi^{d'})$ are equivalent
+when restricted to the open $U_d$ (which is a subset of $U_{d'}$).
+This, combined with Lemma \ref{lemma-converse-construction} shows
+that morphisms $Y \to X$ correspond roughly to
+equivalence classes of triples over $Y$. This is not quite true since if $Y$ is
+not quasi-compact, then there may not be a single triple which works.
+Thus we have to be slightly careful in defining the corresponding functor.
+
+\medskip\noindent
+Here is one possible way to do this. Suppose $d' = ad$.
+Consider the transformation of functors $F_d \to F_{d'}$
+which assigns to the triple $(d, \mathcal{L}, \psi)$ over
+$T$ the triple $(d', \mathcal{L}^{\otimes a}, \psi|_{S^{(d')}})$.
+One of the implications of Lemma \ref{lemma-equivalent} is that the
+transformation $F_d \to F_{d'}$ is injective!
+For a quasi-compact scheme $T$ we define
+$$
+F(T) = \bigcup\nolimits_{d \in \mathbf{N}} F_d(T)
+$$
+with transition maps as explained above. This clearly defines a
+contravariant functor on the category of quasi-compact schemes
+with values in sets. For a general scheme
+$T$ we define
+$$
+F(T)
+=
+\lim_{V \subset T\text{ quasi-compact open}} F(V).
+$$
+In other words, an element $\xi$ of $F(T)$ corresponds to a compatible system
+of choices of elements $\xi_V \in F(V)$ where $V$ ranges over the
+quasi-compact opens of $T$.
+We omit the definition of the pullback map $F(T) \to F(T')$
+for a morphism $T' \to T$ of schemes.
+Thus we have defined our functor
+\begin{eqnarray*}
+F : \Sch^{opp} & \longrightarrow & \textit{Sets}
+\end{eqnarray*}
+
+\begin{lemma}
+\label{lemma-proj-functor}
+Let $S$ be a graded ring.
+Let $X = \text{Proj}(S)$.
+The functor $F$ defined above is representable by the scheme $X$.
+\end{lemma}
+
+\begin{proof}
+We have seen above that the functor $F_d$ corresponds to the
+open subscheme $U_d \subset X$. Moreover the transformation
+of functors $F_d \to F_{d'}$ (if $d | d'$) defined above
+corresponds to the inclusion morphism $U_d \to U_{d'}$
+(see discussion above). Hence to show that $F$ is represented
+by $X$ it suffices to show that $T \to X$ for a quasi-compact
+scheme $T$ ends up in some $U_d$, and that for a general scheme
+$T$ we have
+$$
+\Mor(T, X)
+=
+\lim_{V \subset T\text{ quasi-compact open}} \Mor(V, X).
+$$
+These verifications are omitted.
+\end{proof}
+
+
+
+
+
+
+
+\section{Projective space}
+\label{section-projective-space}
+
+\noindent
+Projective space is one of the fundamental objects studied in
+algebraic geometry. In this section we just give its construction
+as $\text{Proj}$ of a polynomial ring. Later we will discover many
+of its beautiful properties.
+
+\begin{lemma}
+\label{lemma-projective-space}
+Let $S = \mathbf{Z}[T_0, \ldots, T_n]$ with $\deg(T_i) = 1$.
+The scheme
+$$
+\mathbf{P}^n_{\mathbf{Z}} = \text{Proj}(S)
+$$
+represents the functor which associates to a scheme $Y$ the pairs
+$(\mathcal{L}, (s_0, \ldots, s_n))$ where
+\begin{enumerate}
+\item $\mathcal{L}$ is an invertible $\mathcal{O}_Y$-module, and
+\item $s_0, \ldots, s_n$ are global sections of $\mathcal{L}$
+which generate $\mathcal{L}$
+\end{enumerate}
+up to the following equivalence:
+$(\mathcal{L}, (s_0, \ldots, s_n)) \sim
+(\mathcal{N}, (t_0, \ldots, t_n))$ $\Leftrightarrow$ there exists
+an isomorphism $\beta : \mathcal{L} \to \mathcal{N}$
+with $\beta(s_i) = t_i$ for $i = 0, \ldots, n$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-apply} above.
+Namely, for any graded ring $A$ we have
+\begin{eqnarray*}
+\Mor_{\text{graded rings}}(\mathbf{Z}[T_0, \ldots, T_n], A)
+& = &
+A_1 \times \ldots \times A_1 \\
+\psi & \mapsto & (\psi(T_0), \ldots, \psi(T_n))
+\end{eqnarray*}
+and the degree $1$ part of $\Gamma_*(Y, \mathcal{L})$ is
+just $\Gamma(Y, \mathcal{L})$.
+\end{proof}
+
+\begin{definition}
+\label{definition-projective-space}
+The scheme
+$\mathbf{P}^n_{\mathbf{Z}} = \text{Proj}(\mathbf{Z}[T_0, \ldots, T_n])$
+is called {\it projective $n$-space over $\mathbf{Z}$}.
+Its base change $\mathbf{P}^n_S$ to a scheme $S$ is called
+{\it projective $n$-space over $S$}. If $R$ is a ring the base change
+to $\Spec(R)$ is denoted $\mathbf{P}^n_R$ and called
+{\it projective $n$-space over $R$}.
+\end{definition}
+
+\noindent
+Given a scheme $Y$ over $S$
+and a pair $(\mathcal{L}, (s_0, \ldots, s_n))$ as in
+Lemma \ref{lemma-projective-space}
+the induced morphism to $\mathbf{P}^n_S$ is denoted
+$$
+\varphi_{(\mathcal{L}, (s_0, \ldots, s_n))} :
+Y \longrightarrow \mathbf{P}^n_S
+$$
+This makes sense since the pair defines a morphism into
+$\mathbf{P}^n_{\mathbf{Z}}$ and we already have the structure
+morphism into $S$ so combined we get a morphism into
+$\mathbf{P}^n_S = \mathbf{P}^n_{\mathbf{Z}} \times S$.
+Note that this is the $S$-morphism characterized by
+$$
+\mathcal{L} =
+\varphi_{(\mathcal{L}, (s_0, \ldots, s_n))}^*\mathcal{O}_{\mathbf{P}^n_S}(1)
+\quad
+\text{and}
+\quad
+s_i = \varphi_{(\mathcal{L}, (s_0, \ldots, s_n))}^*T_i
+$$
+where we think of $T_i$ as a global section of
+$\mathcal{O}_{\mathbf{P}^n_S}(1)$ via (\ref{equation-global-sections}).
+
+\begin{lemma}
+\label{lemma-standard-covering-projective-space}
+Projective $n$-space over $\mathbf{Z}$ is covered by
+$n + 1$ standard opens
+$$
+\mathbf{P}^n_{\mathbf{Z}} =
+\bigcup\nolimits_{i = 0, \ldots, n} D_{+}(T_i)
+$$
+where each $D_{+}(T_i)$ is isomorphic to $\mathbf{A}^n_{\mathbf{Z}}$,
+affine $n$-space over $\mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+This is true because
+$\mathbf{Z}[T_0, \ldots, T_n]_{+} = (T_0, \ldots, T_n)$ and
+since
+$$
+\Spec
+\left(
+\mathbf{Z}
+\left[\frac{T_0}{T_i}, \ldots, \frac{T_n}{T_i}
+\right]
+\right)
+\cong
+\mathbf{A}^n_{\mathbf{Z}}
+$$
+in an obvious way.
+\end{proof}
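+
+\medskip\noindent
+The isomorphism in the proof is dehomogenization: a homogeneous $F$ of
+degree $d$ gives the regular function $F/T_i^d$ on $D_{+}(T_i)$, a
+polynomial in the affine coordinates $T_j/T_i$. A quick sanity check
+for $n = 2$, $i = 0$ in Python with SymPy (illustrative only; the
+variables \texttt{u1}, \texttt{u2} stand for $T_1/T_0$, $T_2/T_0$):
+
+```python
+from sympy import symbols, expand
+
+T0, T1, T2, u1, u2 = symbols('T0 T1 T2 u1 u2')
+
+# A homogeneous polynomial of degree 3 in T0, T1, T2 (case n = 2):
+F = 2*T0**3 - 5*T0*T1*T2 + T1**2*T2 + 7*T2**3
+
+# Writing T1 = u1*T0, T2 = u2*T0 (i.e. u_j = T_j/T_0 on D_+(T_0)),
+# F factors as T0^3 times its dehomogenization F(1, u1, u2):
+lhs = F.subs({T1: u1*T0, T2: u2*T0})
+rhs = T0**3 * F.subs({T0: 1, T1: u1, T2: u2})
+assert expand(lhs - rhs) == 0
+```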
+
+\begin{lemma}
+\label{lemma-projective-space-separated}
+Let $S$ be a scheme.
+The structure morphism $\mathbf{P}^n_S \to S$ is
+\begin{enumerate}
+\item separated,
+\item quasi-compact,
+\item satisfies the existence and uniqueness parts of the valuative criterion,
+and
+\item universally closed.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+All these properties are stable under base change (this is clear for the
+last two and for the other two see
+Schemes, Lemmas
+\ref{schemes-lemma-separated-permanence} and
+\ref{schemes-lemma-quasi-compact-preserved-base-change}).
+Hence it suffices to prove them for the morphism
+$\mathbf{P}^n_{\mathbf{Z}} \to \Spec(\mathbf{Z})$.
+Separatedness is Lemma \ref{lemma-proj-separated}. Quasi-compactness follows
+from Lemma \ref{lemma-standard-covering-projective-space}.
+Existence and uniqueness of the valuative criterion follow from
+Lemma \ref{lemma-proj-valuative-criterion}.
+Universally closed follows from the above and
+Schemes, Proposition \ref{schemes-proposition-characterize-universally-closed}.
+\end{proof}
+
+\begin{remark}
+\label{remark-missing-finite-type}
+What is missing from the list of properties above? To be sure, the property
+of being of finite type. The reason we do not list it here is that we have
+not yet defined the notion of finite type at this point. (Another property
+which is missing is ``smoothness''. And I'm sure there are many more you
+can think of.)
+\end{remark}
+
+\begin{lemma}[Segre embedding]
+\label{lemma-segre-embedding}
+Let $S$ be a scheme. There exists a closed immersion
+$$
+\mathbf{P}^n_S \times_S \mathbf{P}^m_S
+\longrightarrow
+\mathbf{P}^{nm + n + m}_S
+$$
+called the {\it Segre embedding}.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove this when $S = \Spec(\mathbf{Z})$.
+Hence we will drop the index $S$ and work in the absolute setting.
+Write $\mathbf{P}^n = \text{Proj}(\mathbf{Z}[X_0, \ldots, X_n])$,
+$\mathbf{P}^m = \text{Proj}(\mathbf{Z}[Y_0, \ldots, Y_m])$,
+and
+$\mathbf{P}^{nm + n + m} =
+\text{Proj}(\mathbf{Z}[Z_0, \ldots, Z_{nm + n + m}])$.
+In order to map into $\mathbf{P}^{nm + n + m}$ we have to
+write down an invertible sheaf $\mathcal{L}$ on the left hand
+side and $(n + 1)(m + 1)$ sections $s_i$ which generate it.
+See Lemma \ref{lemma-projective-space}.
+The invertible sheaf we take is
+$$
+\mathcal{L} =
+\text{pr}_1^*\mathcal{O}_{\mathbf{P}^n}(1)
+\otimes
+\text{pr}_2^*\mathcal{O}_{\mathbf{P}^m}(1)
+$$
+The sections we take are
+$$
+s_0 = X_0Y_0, \ s_1 = X_1Y_0, \ldots, \ s_n = X_nY_0,
+\ s_{n + 1} = X_0Y_1, \ldots, \ s_{nm + n + m} = X_nY_m.
+$$
+These generate $\mathcal{L}$ since the sections $X_i$ generate
+$\mathcal{O}_{\mathbf{P}^n}(1)$ and the sections $Y_j$ generate
+$\mathcal{O}_{\mathbf{P}^m}(1)$. The induced morphism
+$\varphi$ has the property that
+$$
+\varphi^{-1}(D_{+}(Z_{i + (n + 1)j})) = D_{+}(X_i) \times D_{+}(Y_j).
+$$
+Hence it is an affine morphism. The corresponding ring map in case
+$(i, j) = (0, 0)$ is the map
+$$
+\mathbf{Z}[Z_1/Z_0, \ldots, Z_{nm + n + m}/Z_0]
+\longrightarrow
+\mathbf{Z}[X_1/X_0, \ldots, X_n/X_0, Y_1/Y_0, \ldots, Y_m/Y_0]
+$$
+which maps $Z_i/Z_0$ to the element $X_i/X_0$ for $i \leq n$ and
+the element $Z_{(n + 1)j}/Z_0$ to the element $Y_j/Y_0$. Hence it
+is surjective. A similar argument works for the other affine
+open subsets. Hence the morphism $\varphi$ is a closed immersion
+(see Schemes, Lemma \ref{schemes-lemma-closed-local-target} and
+Example \ref{schemes-example-closed-immersion-affines}.)
+\end{proof}
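+
+\medskip\noindent
+For $n = m = 1$ the image of the Segre embedding is the quadric
+$V_{+}(Z_0Z_3 - Z_1Z_2) \subset \mathbf{P}^3$. The following Python
+computation with SymPy (an illustrative sketch, not part of the formal
+development) checks this on the affine cones by eliminating the $X_i$,
+$Y_j$ from the relations $Z_{i + 2j} = X_iY_j$:
+
+```python
+from sympy import symbols, groebner, expand, reduced
+
+X0, X1, Y0, Y1, Z0, Z1, Z2, Z3 = symbols('X0 X1 Y0 Y1 Z0 Z1 Z2 Z3')
+
+# s_{i + (n+1)j} = X_i Y_j with n = m = 1:
+subs = {Z0: X0*Y0, Z1: X1*Y0, Z2: X0*Y1, Z3: X1*Y1}
+q = Z0*Z3 - Z1*Z2
+
+# The quadric vanishes identically on the image:
+assert expand(q.subs(subs)) == 0
+
+# Eliminating X_i, Y_j from the graph relations computes the kernel of
+# Z[Z_0..Z_3] -> Z[X_0, X_1, Y_0, Y_1]; q lies in the elimination ideal
+# (remainder 0 on division by its lex Groebner basis):
+rels = [Z - m for Z, m in subs.items()]
+G = groebner(rels, X0, X1, Y0, Y1, Z0, Z1, Z2, Z3, order='lex')
+elim = [p for p in G.exprs if not p.free_symbols & {X0, X1, Y0, Y1}]
+_, r = reduced(q, elim, Z0, Z1, Z2, Z3, order='lex')
+assert r == 0
+```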
+
+\noindent
+The following two lemmas are special cases of more general results later, but
+perhaps it makes sense to prove these directly here now.
+
+\begin{lemma}
+\label{lemma-closed-in-projective-space}
+Let $R$ be a ring. Let $Z \subset \mathbf{P}^n_R$ be a closed subscheme.
+Let
+$$
+I_d = \Ker\left(
+R[T_0, \ldots, T_n]_d
+\longrightarrow
+\Gamma(Z, \mathcal{O}_{\mathbf{P}^n_R}(d)|_Z)\right)
+$$
+Then $I = \bigoplus I_d \subset R[T_0, \ldots, T_n]$ is
+a graded ideal and $Z = \text{Proj}(R[T_0, \ldots, T_n]/I)$.
+\end{lemma}
+
+\begin{proof}
+It is clear that $I$ is a graded ideal.
+Set $Z' = \text{Proj}(R[T_0, \ldots, T_n]/I)$.
+By Lemma \ref{lemma-surjective-graded-rings-generated-degree-1-map-proj}
+we see that $Z'$ is a closed subscheme of $\mathbf{P}^n_R$.
+To see the equality $Z = Z'$
+it suffices to check on a standard affine open
+$D_{+}(T_i)$. By renumbering the homogeneous coordinates we
+may assume $i = 0$. Say $Z \cap D_{+}(T_0)$, resp.\ $Z' \cap D_{+}(T_0)$
+is cut out by the ideal $J$, resp.\ $J'$ of $R[T_1/T_0, \ldots, T_n/T_0]$.
+Then $J'$ is the ideal generated by the elements $F/T_0^{\deg(F)}$ where
+$F \in I$ is homogeneous.
+Suppose the degree of $F \in I$ is $d$. Since $F$ vanishes as a section
+of $\mathcal{O}_{\mathbf{P}^n_R}(d)$ restricted to $Z$ we see that
+$F/T_0^d$ is an element of $J$. Thus $J' \subset J$.
+
+\medskip\noindent
+Conversely, suppose that $f \in J$. If $f$ has total degree
+$d$ in $T_1/T_0, \ldots, T_n/T_0$, then we can write
+$f = F/T_0^d$ for some $F \in R[T_0, \ldots, T_n]_d$.
+Pick $i \in \{1, \ldots, n\}$. Then $Z \cap D_{+}(T_i)$ is
+cut out by some ideal $J_i \subset R[T_0/T_i, \ldots, T_n/T_i]$.
+Moreover,
+$$
+J \cdot
+R\left[
+\frac{T_1}{T_0}, \ldots, \frac{T_n}{T_0},
+\frac{T_0}{T_i}, \ldots, \frac{T_n}{T_i}
+\right]
+=
+J_i \cdot
+R\left[
+\frac{T_1}{T_0}, \ldots, \frac{T_n}{T_0},
+\frac{T_0}{T_i}, \ldots, \frac{T_n}{T_i}
+\right]
+$$
+The left hand side is the localization of $J$ with respect to the
+element $T_i/T_0$ and the right hand side is the localization of $J_i$
+with respect to the element $T_0/T_i$. It follows that
+$T_0^{d_i}F/T_i^{d + d_i}$ is an element of $J_i$ for some $d_i$
+sufficiently large. This proves that $T_0^{\max(d_i)}F$ is an
+element of $I$, because its restriction to each standard affine
+open $D_{+}(T_i)$ vanishes on the closed subscheme
+$Z \cap D_{+}(T_i)$. Hence $f \in J'$ and we conclude $J \subset J'$
+as desired.
+\end{proof}
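+
+\medskip\noindent
+A concrete instance of the lemma (illustrative, computed in Python
+with SymPy): take $R = \mathbf{Z}$, $n = 3$, and let $Z$ be the twisted
+cubic, the image of $\mathbf{P}^1$ under $T_i \mapsto U^{3 - i}V^i$.
+Here $I$ is the kernel of $R[T_0, \ldots, T_3] \to R[U, V]$, a prime
+ideal generated by three quadrics; the sketch below checks that each
+quadric lies in the kernel, computed by elimination:
+
+```python
+from sympy import symbols, groebner, expand, reduced
+
+U, V, T0, T1, T2, T3 = symbols('U V T0 T1 T2 T3')
+
+# The parametrization T_i -> U^(3-i) * V^i of the twisted cubic:
+param = {T0: U**3, T1: U**2*V, T2: U*V**2, T3: V**3}
+rels = [T - m for T, m in param.items()]
+
+# A lex Groebner basis eliminating U, V computes the kernel ideal I:
+G = groebner(rels, U, V, T0, T1, T2, T3, order='lex')
+elim = [p for p in G.exprs if not p.free_symbols & {U, V}]
+
+# The three familiar quadrics cutting out the twisted cubic:
+quadrics = [T0*T2 - T1**2, T1*T3 - T2**2, T0*T3 - T1*T2]
+for q in quadrics:
+    assert expand(q.subs(param)) == 0          # q vanishes on Z
+    _, r = reduced(q, elim, T0, T1, T2, T3, order='lex')
+    assert r == 0                              # q lies in I
+```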
+
+\noindent
+The following lemma is a special case of the more general
+Properties, Lemmas \ref{properties-lemma-ample-quasi-coherent} or
+\ref{properties-lemma-proj-quasi-coherent}.
+
+\begin{lemma}
+\label{lemma-quasi-coherent-projective-space}
+Let $R$ be a ring.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $\mathbf{P}^n_R$.
+For $d \geq 0$ set
+$$
+M_d
+=
+\Gamma(\mathbf{P}^n_R,
+\mathcal{F} \otimes_{\mathcal{O}_{\mathbf{P}^n_R}}
+\mathcal{O}_{\mathbf{P}^n_R}(d))
+=
+\Gamma(\mathbf{P}^n_R, \mathcal{F}(d))
+$$
+Then $M = \bigoplus_{d \geq 0} M_d$ is a graded $R[T_0, \ldots, T_n]$-module
+and there is a canonical isomorphism $\mathcal{F} = \widetilde{M}$.
+\end{lemma}
+
+\begin{proof}
+The multiplication maps
+$$
+R[T_0, \ldots, T_n]_e \times M_d \longrightarrow M_{d + e}
+$$
+come from the natural isomorphisms
+$$
+\mathcal{O}_{\mathbf{P}^n_R}(e)
+\otimes_{\mathcal{O}_{\mathbf{P}^n_R}}
+\mathcal{F}(d)
+\longrightarrow
+\mathcal{F}(e + d)
+$$
+see Equation (\ref{equation-global-sections-module}). Let us construct the
+map $c : \widetilde{M} \to \mathcal{F}$. On each of the standard affines
+$U_i = D_{+}(T_i)$ we see that $\Gamma(U_i, \widetilde{M}) = (M[1/T_i])_0$
+where the subscript ${}_0$ means degree $0$ part. An element of this
+can be written as $m/T_i^d$ with $m \in M_d$. Since $T_i$ is a generator
+of $\mathcal{O}(1)$ over $U_i$ we can always write
+$m|_{U_i} = m_i \otimes T_i^d$ where $m_i \in \Gamma(U_i, \mathcal{F})$
+is a unique section. Thus a natural guess is $c(m/T_i^d) = m_i$.
+A small argument, which is omitted here, shows that this gives a
+well defined map $c : \widetilde{M} \to \mathcal{F}$ if we can
+show that
+$$
+(T_i/T_j)^d m_i|_{U_i \cap U_j} = m_j|_{U_i \cap U_j}
+$$
in $\Gamma(U_i \cap U_j, \mathcal{F})$.
+But this is clear since on the overlap the generators $T_i$ and
+$T_j$ of $\mathcal{O}(1)$ differ by the invertible function $T_i/T_j$.
+
+\medskip\noindent
+Injectivity of $c$. We may check for injectivity over the affine opens
+$U_i$. Let $i \in \{0, \ldots, n\}$
+and let $s$ be an element $s = m/T_i^d \in \Gamma(U_i, \widetilde{M})$
+such that $c(m/T_i^d) = 0$. By the description of $c$ above this means that
+$m_i = 0$, hence $m|_{U_i} = 0$. Hence $T_i^em = 0$ in $M$ for some
$e$. Hence $s = m/T_i^d = T_i^e m/T_i^{e + d} = 0$ as desired.
+
+\medskip\noindent
+Surjectivity of $c$. We may check for surjectivity over the affine opens
+$U_i$. By renumbering it suffices to check it over $U_0$.
+Let $s \in \mathcal{F}(U_0)$.
+Let us write $\mathcal{F}|_{U_i} = \widetilde{N_i}$ for some
$R[T_0/T_i, \ldots, T_n/T_i]$-module $N_i$, which is possible because
+$\mathcal{F}$ is quasi-coherent. So $s$ corresponds to an element
+$x \in N_0$. Then we have that
+$$
+(N_i)_{T_j/T_i} \cong (N_j)_{T_i/T_j}
+$$
+(where the subscripts mean ``principal localization at'')
+as modules over the ring
+$$
+R\left[
+\frac{T_0}{T_i}, \ldots, \frac{T_n}{T_i},
+\frac{T_0}{T_j}, \ldots, \frac{T_n}{T_j}
+\right].
+$$
+This means that for some large integer $d$ there exist elements
+$s_i \in N_i$, $i = 1, \ldots, n$ such that
+$$
+s = (T_i/T_0)^d s_i
+$$
+on $U_0 \cap U_i$. Next, we look at the difference
+$$
+t_{ij} = s_i - (T_j/T_i)^d s_j
+$$
+on $U_i \cap U_j$, $0 < i < j$. By our choice of $s_i$ we know that
+$t_{ij}|_{U_0 \cap U_i \cap U_j} = 0$. Hence there exists a large integer
+$e$ such that $(T_0/T_i)^et_{ij} = 0$. Set $s_i' = (T_0/T_i)^es_i$,
+and $s_0' = s$. Then we will have
+$$
+s_a' = (T_b/T_a)^{e + d} s_b'
+$$
+on $U_a \cap U_b$ for all $a, b$. This is exactly the condition that the
+elements $s'_a$ glue to a global section
+$m \in \Gamma(\mathbf{P}^n_R, \mathcal{F}(e + d))$.
+And moreover $c(m/T_0^{e + d}) = s$ by construction. Hence $c$ is
+surjective and we win.
+\end{proof}
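\medskip\noindent
As a sanity check (an illustration, not needed in what follows), take
$\mathcal{F} = \mathcal{O}_{\mathbf{P}^n_R}$ in the lemma above.
Then for $d \geq 0$ we have
$$
M_d
=
\Gamma(\mathbf{P}^n_R, \mathcal{O}_{\mathbf{P}^n_R}(d))
=
R[T_0, \ldots, T_n]_d
$$
so that $M = R[T_0, \ldots, T_n]$ with its standard grading, and the
canonical isomorphism of the lemma recovers
$\mathcal{O}_{\mathbf{P}^n_R} = \widetilde{R[T_0, \ldots, T_n]}$.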
+
+\begin{lemma}
+\label{lemma-globally-generated-omega-twist-1}
+Let $X$ be a scheme. Let $\mathcal{L}$ be an invertible sheaf
+and let $s_0, \ldots, s_n$ be global sections of $\mathcal{L}$
+which generate it. Let $\mathcal{F}$ be the kernel of the induced
+map $\mathcal{O}_X^{\oplus n + 1} \to \mathcal{L}$.
+Then $\mathcal{F} \otimes \mathcal{L}$ is globally generated.
+\end{lemma}
+
+\begin{proof}
+In fact the result is true if $X$ is any locally ringed space.
+The sheaf $\mathcal{F}$ is a finite locally free $\mathcal{O}_X$-module
+of rank $n$. The elements
+$$
+s_{ij} = (0, \ldots, 0, s_j, 0, \ldots, 0, -s_i, 0, \ldots, 0)
+\in \Gamma(X, \mathcal{L}^{\oplus n + 1})
+$$
+with $s_j$ in the $i$th spot and $-s_i$ in the $j$th spot map to zero
+in $\mathcal{L}^{\otimes 2}$. Hence
+$s_{ij} \in \Gamma(X, \mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{L})$.
+A local computation shows that these sections generate
+$\mathcal{F} \otimes \mathcal{L}$.
+
+\medskip\noindent
+Alternative proof. Consider the morphism
+$\varphi : X \to \mathbf{P}^n_\mathbf{Z}$ associated to
+the pair $(\mathcal{L}, (s_0, \ldots, s_n))$. Since the pullback
+of $\mathcal{O}(1)$ is $\mathcal{L}$ and since the pullback
+of $T_i$ is $s_i$, it suffices to prove the lemma in the
+case of $\mathbf{P}^n_\mathbf{Z}$. In this case the sheaf
+$\mathcal{F}$ corresponds to the graded $S = \mathbf{Z}[T_0, \ldots, T_n]$
+module $M$ which fits into the short exact sequence
+$$
+0 \to M \to S^{\oplus n + 1} \to S(1) \to 0
+$$
+where the second map is given by $T_0, \ldots, T_n$. In this
+case the statement above translates into the statement that
+the elements
+$$
+T_{ij} = (0, \ldots, 0, T_j, 0, \ldots, 0, -T_i, 0, \ldots, 0)
+\in M(1)_0
+$$
+generate the graded module $M(1)$ over $S$. We omit the details.
+\end{proof}
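\medskip\noindent
As an illustration of the lemma (via the Euler sequence, which is not
part of the statement above), take $X = \mathbf{P}^n_{\mathbf{Z}}$,
$\mathcal{L} = \mathcal{O}_X(1)$ and $s_i = T_i$. Twisting the Euler
sequence by $\mathcal{O}_X(1)$ gives the short exact sequence
$$
0 \to \Omega_{\mathbf{P}^n/\mathbf{Z}}(1)
\to \mathcal{O}_X^{\oplus n + 1}
\to \mathcal{O}_X(1) \to 0
$$
so that $\mathcal{F} = \Omega_{\mathbf{P}^n/\mathbf{Z}}(1)$. In this case
the lemma says that
$\mathcal{F} \otimes \mathcal{L} = \Omega_{\mathbf{P}^n/\mathbf{Z}}(2)$
is globally generated by the sections $s_{ij}$.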
+
+
+
+
+
+
+
+
+\section{Invertible sheaves and morphisms into Proj}
+\label{section-invertible-proj}
+
+\noindent
+Let $T$ be a scheme and let $\mathcal{L}$ be an invertible sheaf
+on $T$. For a section $s \in \Gamma(T, \mathcal{L})$ we denote
+$T_s$ the open subset of points where $s$ does not vanish. See
+Modules, Lemma \ref{modules-lemma-s-open}. We can view the following
+lemma as a slight generalization of Lemma \ref{lemma-apply}.
+It also is a generalization of Lemma \ref{lemma-morphism-proj}.
+
+\begin{lemma}
+\label{lemma-invertible-map-into-proj}
+Let $A$ be a graded ring.
+Set $X = \text{Proj}(A)$.
+Let $T$ be a scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_T$-module.
+Let $\psi : A \to \Gamma_*(T, \mathcal{L})$ be a homomorphism
+of graded rings. Set
+$$
+U(\psi) = \bigcup\nolimits_{f \in A_{+}\text{ homogeneous}} T_{\psi(f)}
+$$
+The morphism $\psi$ induces a canonical morphism of schemes
+$$
+r_{\mathcal{L}, \psi} :
+U(\psi) \longrightarrow X
+$$
+together with a map of $\mathbf{Z}$-graded $\mathcal{O}_T$-algebras
+$$
+\theta :
+r_{\mathcal{L}, \psi}^*\left(
+\bigoplus\nolimits_{d \in \mathbf{Z}} \mathcal{O}_X(d)
+\right)
+\longrightarrow
+\bigoplus\nolimits_{d \in \mathbf{Z}} \mathcal{L}^{\otimes d}|_{U(\psi)}.
+$$
+The triple $(U(\psi), r_{\mathcal{L}, \psi}, \theta)$ is
+characterized by the following properties:
+\begin{enumerate}
+\item For $f \in A_{+}$ homogeneous we have
+$r_{\mathcal{L}, \psi}^{-1}(D_{+}(f)) = T_{\psi(f)}$.
+\item For every $d \geq 0$ the diagram
+$$
+\xymatrix{
+A_d \ar[d]_{(\ref{equation-global-sections})} \ar[r]_{\psi} &
+\Gamma(T, \mathcal{L}^{\otimes d}) \ar[d]^{restrict} \\
+\Gamma(X, \mathcal{O}_X(d)) \ar[r]^{\theta} &
+\Gamma(U(\psi), \mathcal{L}^{\otimes d})
+}
+$$
+is commutative.
+\end{enumerate}
+Moreover, for any $d \geq 1$ and any open subscheme $V \subset T$
+such that the sections in $\psi(A_d)$ generate $\mathcal{L}^{\otimes d}|_V$
+the morphism $r_{\mathcal{L}, \psi}|_V$ agrees with the morphism
+$\varphi : V \to \text{Proj}(A)$ and the map $\theta|_V$ agrees with the map
+$\alpha : \varphi^*\mathcal{O}_X(d) \to \mathcal{L}^{\otimes d}|_V$
+where $(\varphi, \alpha)$ is the pair
+of Lemma \ref{lemma-converse-construction}
+associated to
+$\psi|_{A^{(d)}} : A^{(d)} \to \Gamma_*(V, \mathcal{L}^{\otimes d})$.
+\end{lemma}
+
+\begin{proof}
+Suppose that we have two triples $(U, r : U \to X, \theta)$
+and $(U', r' : U' \to X, \theta')$ satisfying (1) and (2).
+Property (1) implies that $U = U' = U(\psi)$ and that
+$r = r'$ as maps of underlying topological
+spaces, since the opens $D_{+}(f)$ form a basis for the topology
+on $X$, and since $X$ is a sober topological space (see
+Algebra, Section \ref{algebra-section-proj}
+and
+Schemes, Lemma \ref{schemes-lemma-scheme-sober}).
+Let $f \in A_{+}$ be homogeneous. Note that
+$\Gamma(D_{+}(f), \bigoplus_{n \in \mathbf{Z}} \mathcal{O}_X(n)) = A_f$
+as a $\mathbf{Z}$-graded algebra. Consider the two
+$\mathbf{Z}$-graded ring maps
+$$
+\theta, \theta' :
+A_f
+\longrightarrow
+\Gamma(T_{\psi(f)}, \bigoplus \mathcal{L}^{\otimes n}).
+$$
+We know that multiplication by $f$ (resp.\ $\psi(f)$)
+is an isomorphism on the left (resp.\ right) hand side.
+We also know that $\theta(x/1) = \theta'(x/1) = \psi(x)|_{T_{\psi(f)}}$
+by (2) for all $x \in A$. Hence we deduce easily that $\theta = \theta'$
+as desired. Considering the degree $0$ parts we deduce that
+$r^\sharp = (r')^\sharp$, i.e., that $r = r'$ as morphisms of schemes.
+This proves the uniqueness.
+
+\medskip\noindent
+Now we come to existence. By the uniqueness just proved, it is enough to
+construct the pair $(r, \theta)$ locally on $T$. Hence we may assume
+that $T = \Spec(R)$ is affine, that $\mathcal{L} = \mathcal{O}_T$
and that for some homogeneous $f \in A_{+}$ the section
$\psi(f)$ generates $\mathcal{O}_T = \mathcal{O}_T^{\otimes \deg(f)}$.
+In other words, $\psi(f) = u \in R^*$ is a unit. In this case the map
+$\psi$ is a graded ring map
+$$
+A \longrightarrow R[x] = \Gamma_*(T, \mathcal{O}_T)
+$$
+which maps $f$ to $ux^{\deg(f)}$. Clearly this extends (uniquely) to
+a $\mathbf{Z}$-graded ring map $\theta : A_f \to R[x, x^{-1}]$ by
+mapping $1/f$ to $u^{-1}x^{-\deg(f)}$. This map in degree zero gives
+the ring map $A_{(f)} \to R$ which gives the morphism
+$r : T = \Spec(R) \to \Spec(A_{(f)}) = D_{+}(f) \subset X$.
+Hence we have constructed $(r, \theta)$ in this special case.
+
+\medskip\noindent
+Let us show the last statement of the lemma.
+According to Lemma \ref{lemma-converse-construction}
+the morphism constructed there is the unique one such that
+the displayed diagram in its statement commutes.
+The commutativity of the diagram in the lemma implies the
+commutativity when restricted to $V$ and $A^{(d)}$.
+Whence the result.
+\end{proof}
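\medskip\noindent
As an example, let $A = R[T_0, \ldots, T_n]$ with the standard grading
and let $T$ be a scheme over $R$. A graded ring map
$\psi : A \to \Gamma_*(T, \mathcal{L})$ is determined by the sections
$s_i = \psi(T_i) \in \Gamma(T, \mathcal{L})$, which are not required
to generate $\mathcal{L}$. Since every homogeneous $f \in A_{+}$ is a
sum of monomials each involving some $T_i$, we see that
$$
U(\psi) = T_{s_0} \cup \ldots \cup T_{s_n}
$$
is the open where at least one $s_i$ does not vanish, and
$r_{\mathcal{L}, \psi} : U(\psi) \to \mathbf{P}^n_R$ is the morphism
classically associated to the linear system spanned by
$s_0, \ldots, s_n$.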
+
+\begin{remark}
+\label{remark-not-in-invertible-locus}
+Assumptions as in Lemma \ref{lemma-invertible-map-into-proj} above.
+The image of the morphism $r_{\mathcal{L}, \psi}$ need not be
+contained in the locus where the sheaf $\mathcal{O}_X(1)$
+is invertible.
+Here is an example.
+Let $k$ be a field.
+Let $S = k[A, B, C]$ graded by $\deg(A) = 1$, $\deg(B) = 2$, $\deg(C) = 3$.
+Set $X = \text{Proj}(S)$.
+Let $T = \mathbf{P}^2_k = \text{Proj}(k[X_0, X_1, X_2])$.
+Recall that $\mathcal{L} = \mathcal{O}_T(1)$ is invertible
+and that $\mathcal{O}_T(n) = \mathcal{L}^{\otimes n}$.
+Consider the composition $\psi$ of the maps
+$$
+S \to k[X_0, X_1, X_2] \to \Gamma_*(T, \mathcal{L}).
+$$
+Here the first map is $A \mapsto X_0$, $B \mapsto X_1^2$,
+$C \mapsto X_2^3$ and the second map is (\ref{equation-global-sections}).
+By the lemma this corresponds to a morphism
+$r_{\mathcal{L}, \psi} : T \to X = \text{Proj}(S)$
+which is easily seen to be surjective. On the other hand, in
+Remark \ref{remark-not-isomorphism} we showed that the sheaf
+$\mathcal{O}_X(1)$ is not invertible at all points of $X$.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+\section{Relative Proj via glueing}
+\label{section-relative-proj-via-glueing}
+
+\begin{situation}
+\label{situation-relative-proj}
+Here $S$ is a scheme, and $\mathcal{A}$
+is a quasi-coherent graded $\mathcal{O}_S$-algebra.
+\end{situation}
+
+\noindent
+In this section we outline how to construct a morphism
+of schemes
+$$
+\underline{\text{Proj}}_S(\mathcal{A}) \longrightarrow S
+$$
+by glueing the homogeneous spectra $\text{Proj}(\Gamma(U, \mathcal{A}))$
+where $U$ ranges over the affine opens of $S$. We first show that the
+homogeneous spectra of the values of $\mathcal{A}$ over affines form a
+suitable collection of schemes, as in Lemma \ref{lemma-relative-glueing}.
+
+\begin{lemma}
+\label{lemma-proj-inclusion}
+In Situation \ref{situation-relative-proj}.
+Suppose $U \subset U' \subset S$ are affine opens.
+Let $A = \mathcal{A}(U)$ and $A' = \mathcal{A}(U')$.
+The map of graded rings $A' \to A$ induces a morphism
+$r : \text{Proj}(A) \to \text{Proj}(A')$, and the diagram
+$$
+\xymatrix{
+\text{Proj}(A) \ar[r] \ar[d] &
+\text{Proj}(A') \ar[d] \\
+U \ar[r] &
+U'
+}
+$$
+is cartesian. Moreover there are canonical isomorphisms
+$\theta : r^*\mathcal{O}_{\text{Proj}(A')}(n) \to
+\mathcal{O}_{\text{Proj}(A)}(n)$ compatible with multiplication maps.
+\end{lemma}
+
+\begin{proof}
+Let $R = \mathcal{O}_S(U)$ and $R' = \mathcal{O}_S(U')$.
+Note that the map $R \otimes_{R'} A' \to A$ is an isomorphism as
+$\mathcal{A}$ is quasi-coherent
+(see Schemes, Lemma \ref{schemes-lemma-widetilde-pullback} for example).
+Hence the lemma follows from
+Lemma \ref{lemma-base-change-map-proj}.
+\end{proof}
+
+\noindent
+In particular the morphism $\text{Proj}(A) \to \text{Proj}(A')$
+of the lemma is an open immersion.
+
+\begin{lemma}
+\label{lemma-transitive-proj}
+In Situation \ref{situation-relative-proj}.
+Suppose $U \subset U' \subset U'' \subset S$ are affine opens.
+Let $A = \mathcal{A}(U)$, $A' = \mathcal{A}(U')$ and $A'' = \mathcal{A}(U'')$.
+The composition of the morphisms
+$r : \text{Proj}(A) \to \text{Proj}(A')$, and
+$r' : \text{Proj}(A') \to \text{Proj}(A'')$ of
+Lemma \ref{lemma-proj-inclusion} gives the
+morphism $r'' : \text{Proj}(A) \to \text{Proj}(A'')$
+of Lemma \ref{lemma-proj-inclusion}. A similar statement
+holds for the isomorphisms $\theta$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-morphism-proj-transitive} since
+the map $A'' \to A$ is the composition of $A'' \to A'$ and
+$A' \to A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-relative-proj}
+In Situation \ref{situation-relative-proj}.
+There exists a morphism of schemes
+$$
+\pi : \underline{\text{Proj}}_S(\mathcal{A}) \longrightarrow S
+$$
+with the following properties:
+\begin{enumerate}
+\item for every affine open $U \subset S$ there exists an isomorphism
+$i_U : \pi^{-1}(U) \to \text{Proj}(A)$ with $A = \mathcal{A}(U)$, and
+\item for $U \subset U' \subset S$ affine open the composition
+$$
+\xymatrix{
+\text{Proj}(A) \ar[r]^{i_U^{-1}} &
+\pi^{-1}(U) \ar[rr]^{inclusion} & &
+\pi^{-1}(U') \ar[r]^{i_{U'}} &
+\text{Proj}(A')
+}
+$$
+with $A = \mathcal{A}(U)$, $A' = \mathcal{A}(U')$
+is the open immersion of Lemma \ref{lemma-proj-inclusion} above.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+Lemmas \ref{lemma-relative-glueing},
+\ref{lemma-proj-inclusion}, and
+\ref{lemma-transitive-proj}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-relative-proj-twists}
+In Situation \ref{situation-relative-proj}.
+The morphism $\pi : \underline{\text{Proj}}_S(\mathcal{A}) \to S$
+of Lemma \ref{lemma-glue-relative-proj} comes with the following
+additional structure.
+There exists a quasi-coherent $\mathbf{Z}$-graded sheaf
+of $\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}$-algebras
+$\bigoplus\nolimits_{n \in \mathbf{Z}}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)$,
+and a morphism of graded $\mathcal{O}_S$-algebras
+$$
+\psi :
+\mathcal{A}
+\longrightarrow
+\bigoplus\nolimits_{n \geq 0}
+\pi_*\left(\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)\right)
+$$
+uniquely determined by the following property:
+For every affine open $U \subset S$ with $A = \mathcal{A}(U)$
+there is an isomorphism
+$$
+\theta_U :
+i_U^*\left(
+\bigoplus\nolimits_{n \in \mathbf{Z}} \mathcal{O}_{\text{Proj}(A)}(n)
+\right)
+\longrightarrow
+\left(
+\bigoplus\nolimits_{n \in \mathbf{Z}}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)
+\right)|_{\pi^{-1}(U)}
+$$
+of $\mathbf{Z}$-graded $\mathcal{O}_{\pi^{-1}(U)}$-algebras
+such that
+$$
+\xymatrix{
+A_n
+\ar[rr]_\psi
+\ar[dr]_-{(\ref{equation-global-sections})}
+& &
+\Gamma(\pi^{-1}(U),
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)) \\
+&
+\Gamma(\text{Proj}(A),
+\mathcal{O}_{\text{Proj}(A)}(n))
+\ar[ru]_-{\theta_U}
+&
+}
+$$
+is commutative.
+\end{lemma}
+
+\begin{proof}
+We are going to use Lemma \ref{lemma-relative-glueing-sheaves}
+to glue the sheaves of $\mathbf{Z}$-graded algebras
+$\bigoplus_{n \in \mathbf{Z}} \mathcal{O}_{\text{Proj}(A)}(n)$
+for $A = \mathcal{A}(U)$, $U \subset S$ affine open
+over the scheme $\underline{\text{Proj}}_S(\mathcal{A})$.
+We have constructed the data necessary for this in
+Lemma \ref{lemma-proj-inclusion} and we have checked condition (d) of
+Lemma \ref{lemma-relative-glueing-sheaves} in
+Lemma \ref{lemma-transitive-proj}. Hence we get the
+sheaf of $\mathbf{Z}$-graded
+$\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}$-algebras
+$\bigoplus_{n \in \mathbf{Z}}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)$
+together with the isomorphisms $\theta_U$ for all
+$U \subset S$ affine open and all $n \in \mathbf{Z}$.
+For every affine open $U \subset S$ with $A = \mathcal{A}(U)$ we have a map
+$A \to \Gamma(\text{Proj}(A),
+\bigoplus_{n \geq 0} \mathcal{O}_{\text{Proj}(A)}(n))$.
+Hence the map $\psi$ exists by functoriality
+of relative glueing, see Remark \ref{remark-relative-glueing-functorial}.
+The diagram of the lemma commutes by construction.
+This characterizes the sheaf of $\mathbf{Z}$-graded
+$\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}$-algebras
+$\bigoplus \mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)$
+because the proof of Lemma \ref{lemma-morphism-proj} shows that
+having these diagrams commute uniquely determines the maps $\theta_U$.
+Some details omitted.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Relative Proj as a functor}
+\label{section-relative-proj}
+
+\noindent
+We place ourselves in Situation \ref{situation-relative-proj}.
+So $S$ is a scheme and $\mathcal{A} = \bigoplus_{d \geq 0} \mathcal{A}_d$
+is a quasi-coherent graded $\mathcal{O}_S$-algebra.
+In this section we relativize the construction of
+$\text{Proj}$ by constructing a functor which the relative
+homogeneous spectrum will represent.
+As a result we will construct a morphism of schemes
+$$
+\underline{\text{Proj}}_S(\mathcal{A}) \longrightarrow S
+$$
+which above affine opens of $S$ will look like the homogeneous spectrum
+of a graded ring. The discussion will be modeled after our
+discussion of the relative spectrum in Section \ref{section-spec}.
+The easier method using glueing schemes of the form
+$\text{Proj}(A)$, $A = \Gamma(U, \mathcal{A})$, $U \subset S$
+affine open, is explained in Section \ref{section-relative-proj-via-glueing},
+and the result in this section will be shown to be isomorphic to that one.
+
+\medskip\noindent
+Fix for the moment an integer $d \geq 1$.
+We denote $\mathcal{A}^{(d)} = \bigoplus_{n \geq 0} \mathcal{A}_{nd}$
+similarly to the notation in Algebra, Section \ref{algebra-section-graded}.
+Let $T$ be a scheme.
+Let us consider {\it quadruples $(d, f : T \to S, \mathcal{L}, \psi)$
+over $T$} where
+\begin{enumerate}
+\item $d$ is the integer we fixed above,
+\item $f : T \to S$ is a morphism of schemes,
+\item $\mathcal{L}$ is an invertible $\mathcal{O}_T$-module, and
+\item
+$\psi : f^*\mathcal{A}^{(d)} \to \bigoplus_{n \geq 0}\mathcal{L}^{\otimes n}$
+is a homomorphism of graded $\mathcal{O}_T$-algebras
+such that $f^*\mathcal{A}_d \to \mathcal{L}$ is surjective.
+\end{enumerate}
+Given a morphism $h : T' \to T$ and a quadruple
+$(d, f, \mathcal{L}, \psi)$ over $T$ we can pull it back to the
+quadruple $(d, f \circ h, h^*\mathcal{L}, h^*\psi)$ over $T'$.
+Given two quadruples $(d, f, \mathcal{L}, \psi)$ and
+$(d, f', \mathcal{L}', \psi')$ over $T$ with the same integer $d$
+we say they are {\it strictly equivalent} if $f = f'$ and there exists
+an isomorphism $\beta : \mathcal{L} \to \mathcal{L}'$
+such that $\beta \circ \psi = \psi'$ as graded $\mathcal{O}_T$-algebra maps
+$f^*\mathcal{A}^{(d)} \to \bigoplus_{n \geq 0} (\mathcal{L}')^{\otimes n}$.
+
+\medskip\noindent
+For each integer $d \geq 1$ we define
+\begin{eqnarray*}
+F_d : \Sch^{opp} & \longrightarrow & \textit{Sets}, \\
+T & \longmapsto &
+\{\text{strict equivalence classes of }
+(d, f : T \to S, \mathcal{L}, \psi)
+\text{ as above}\}
+\end{eqnarray*}
+with pullbacks as defined above.
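\medskip\noindent
For example (this special case is not used in the arguments below), if
$S = \Spec(R)$ and $\mathcal{A} = \widetilde{A}$ with
$A = R[T_0, \ldots, T_n]$ in the standard grading, then for $d = 1$
a quadruple over $T$ is the same thing as a morphism
$f : T \to \Spec(R)$ together with an invertible
$\mathcal{O}_T$-module $\mathcal{L}$ and global sections
$\psi(T_0), \ldots, \psi(T_n)$ which generate $\mathcal{L}$. This is
exactly the data classifying morphisms $T \to \mathbf{P}^n_R$.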
+
+\begin{lemma}
+\label{lemma-proj-base-change}
+In Situation \ref{situation-relative-proj}. Let $d \geq 1$.
+Let $F_d$ be the functor
+associated to $(S, \mathcal{A})$ above.
+Let $g : S' \to S$ be a morphism of schemes.
+Set $\mathcal{A}' = g^*\mathcal{A}$. Let $F_d'$ be the
+functor associated to $(S', \mathcal{A}')$ above.
+Then there is a canonical isomorphism
+$$
+F'_d \cong h_{S'} \times_{h_S} F_d
+$$
+of functors.
+\end{lemma}
+
+\begin{proof}
+A quadruple
+$(d, f' : T \to S', \mathcal{L}',
+\psi' : (f')^*(\mathcal{A}')^{(d)} \to
+\bigoplus_{n \geq 0} (\mathcal{L}')^{\otimes n})$
+is the same as a quadruple
+$(d, f, \mathcal{L},
+\psi : f^*\mathcal{A}^{(d)} \to
+\bigoplus_{n \geq 0} \mathcal{L}^{\otimes n})$
+together with a factorization of $f$ as $f = g \circ f'$. Namely,
+the correspondence is $f = g \circ f'$, $\mathcal{L} = \mathcal{L}'$
+and $\psi = \psi'$ via the identifications
+$(f')^*(\mathcal{A}')^{(d)} = (f')^*g^*(\mathcal{A}^{(d)}) =
+f^*\mathcal{A}^{(d)}$. Hence the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-proj-affine}
+In Situation \ref{situation-relative-proj}. Let $F_d$ be the functor
+associated to $(d, S, \mathcal{A})$ above.
+If $S$ is affine, then $F_d$ is representable by the open subscheme
+$U_d$ (\ref{equation-Ud})
+of the scheme $\text{Proj}(\Gamma(S, \mathcal{A}))$.
+\end{lemma}
+
+\begin{proof}
+Write $S = \Spec(R)$ and $A = \Gamma(S, \mathcal{A})$.
+Then $A$ is a graded $R$-algebra and $\mathcal{A} = \widetilde A$.
+To prove the lemma we have to identify the functor $F_d$
+with the functor $F_d^{triples}$ of triples defined in Section
+\ref{section-morphisms-proj}.
+
+\medskip\noindent
+Let $(d, f : T \to S, \mathcal{L}, \psi)$ be a quadruple.
+We may think of $\psi$ as a $\mathcal{O}_S$-module map
+$\mathcal{A}^{(d)} \to \bigoplus_{n \geq 0} f_*\mathcal{L}^{\otimes n}$.
+Since $\mathcal{A}^{(d)}$ is quasi-coherent this is the same
+thing as an $R$-linear homomorphism of graded rings
+$A^{(d)} \to \Gamma(S, \bigoplus_{n \geq 0} f_*\mathcal{L}^{\otimes n})$.
+Clearly, $\Gamma(S, \bigoplus_{n \geq 0} f_*\mathcal{L}^{\otimes n}) =
+\Gamma_*(T, \mathcal{L})$. Thus we may associate to
+the quadruple the triple $(d, \mathcal{L}, \psi)$.
+
+\medskip\noindent
+Conversely, let $(d, \mathcal{L}, \psi)$ be a triple.
+The composition $R \to A_0 \to \Gamma(T, \mathcal{O}_T)$
+determines a morphism $f : T \to S = \Spec(R)$, see
+Schemes, Lemma \ref{schemes-lemma-morphism-into-affine}.
+With this choice of $f$ the map
+$A^{(d)} \to \Gamma(S, \bigoplus_{n \geq 0} f_*\mathcal{L}^{\otimes n})$
+is $R$-linear, and hence corresponds to a $\psi$ which we
+can use for a quadruple $(d, f : T \to S, \mathcal{L}, \psi)$.
+We omit the verification that this establishes an isomorphism
+of functors $F_d = F_d^{triples}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-proj-d}
+In Situation \ref{situation-relative-proj}.
+The functor $F_d$ is representable by a scheme.
+\end{lemma}
+
+\begin{proof}
+We are going to use Schemes, Lemma \ref{schemes-lemma-glue-functors}.
+
+\medskip\noindent
+First we check that $F_d$ satisfies the sheaf property for the
+Zariski topology. Namely, suppose that $T$ is a scheme,
+that $T = \bigcup_{i \in I} U_i$ is an open covering,
+and that $(d, f_i, \mathcal{L}_i, \psi_i) \in F_d(U_i)$ such that
+$(d, f_i, \mathcal{L}_i, \psi_i)|_{U_i \cap U_j}$ and
+$(d, f_j, \mathcal{L}_j, \psi_j)|_{U_i \cap U_j}$ are strictly
+equivalent. This implies that the morphisms $f_i : U_i \to S$
+glue to a morphism of schemes $f : T \to S$ such that
$f|_{U_i} = f_i$, see Schemes, Section \ref{schemes-section-glueing-schemes}.
+Thus $f_i^*\mathcal{A}^{(d)} = f^*\mathcal{A}^{(d)}|_{U_i}$.
+It also implies there exist isomorphisms
+$\beta_{ij} : \mathcal{L}_i|_{U_i \cap U_j} \to \mathcal{L}_j|_{U_i \cap U_j}$
+such that $\beta_{ij} \circ \psi_i = \psi_j$ on $U_i \cap U_j$.
+Note that the isomorphisms $\beta_{ij}$ are uniquely determined
+by this requirement because the maps $f_i^*\mathcal{A}_d \to \mathcal{L}_i$
+are surjective. In particular we see that
+$\beta_{jk} \circ \beta_{ij} = \beta_{ik}$ on $U_i \cap U_j \cap U_k$.
+Hence by Sheaves,
+Section \ref{sheaves-section-glueing-sheaves} the invertible sheaves
+$\mathcal{L}_i$ glue to an invertible $\mathcal{O}_T$-module
+$\mathcal{L}$ and the morphisms $\psi_i$ glue to
a morphism of $\mathcal{O}_T$-algebras
+$\psi : f^*\mathcal{A}^{(d)} \to \bigoplus_{n \geq 0} \mathcal{L}^{\otimes n}$.
+This proves that $F_d$ satisfies the sheaf condition with respect to
+the Zariski topology.
+
+\medskip\noindent
+Let $S = \bigcup_{i \in I} U_i$ be an affine open covering.
+Let $F_{d, i} \subset F_d$ be the subfunctor consisting of
+those pairs $(f : T \to S, \varphi)$ such that
+$f(T) \subset U_i$.
+
+\medskip\noindent
+We have to show each $F_{d, i}$ is representable.
+This is the case because $F_{d, i}$ is identified with
+the functor associated to $U_i$ equipped with
+the quasi-coherent graded $\mathcal{O}_{U_i}$-algebra
+$\mathcal{A}|_{U_i}$ by Lemma \ref{lemma-proj-base-change}.
+Thus the result follows from Lemma \ref{lemma-relative-proj-affine}.
+
+\medskip\noindent
+Next we show that $F_{d, i} \subset F_d$ is representable by open immersions.
+Let $(f : T \to S, \varphi) \in F_d(T)$. Consider $V_i = f^{-1}(U_i)$.
+It follows from the definition of $F_{d, i}$ that given $a : T' \to T$
we have $a^*(f, \varphi) \in F_{d, i}(T')$ if and only if $a(T') \subset V_i$.
+This is what we were required to show.
+
+\medskip\noindent
+Finally, we have to show that the collection $(F_{d, i})_{i \in I}$
+covers $F_d$. Let $(f : T \to S, \varphi) \in F_d(T)$.
+Consider $V_i = f^{-1}(U_i)$. Since $S = \bigcup_{i \in I} U_i$
+is an open covering of $S$ we see that $T = \bigcup_{i \in I} V_i$
+is an open covering of $T$. Moreover $(f, \varphi)|_{V_i} \in F_{d, i}(V_i)$.
+This finishes the proof of the lemma.
+\end{proof}
+
+\noindent
+At this point we can redo the material at the end of
+Section \ref{section-morphisms-proj} in the current
+relative setting and define a functor
+which is representable by
+$\underline{\text{Proj}}_S(\mathcal{A})$. To do this we introduce the
+notion of equivalence between two
+quadruples $(d, f : T \to S, \mathcal{L}, \psi)$ and
+$(d', f' : T \to S, \mathcal{L}', \psi')$ with possibly different
+values of the integers $d, d'$. Namely, we say these
+are {\it equivalent} if $f = f'$, and there exists an
+isomorphism $\beta : \mathcal{L}^{\otimes d'} \to (\mathcal{L}')^{\otimes d}$
+such that
+$\beta \circ \psi|_{f^*\mathcal{A}^{(dd')}} = \psi'|_{f^*\mathcal{A}^{(dd')}}$.
+The following lemma implies that this defines an equivalence relation.
+(This is not a complete triviality.)
+
+\begin{lemma}
+\label{lemma-equivalent-relative}
+In Situation \ref{situation-relative-proj}.
+Let $T$ be a scheme.
+Let $(d, f, \mathcal{L}, \psi)$, $(d', f', \mathcal{L}', \psi')$
+be two quadruples over $T$. The following are equivalent:
+\begin{enumerate}
+\item Let $m = \text{lcm}(d, d')$. Write $m = ad = a'd'$.
+We have $f = f'$ and there exists
+an isomorphism
+$\beta : \mathcal{L}^{\otimes a} \to (\mathcal{L}')^{\otimes a'}$
+with the property that $\beta \circ \psi|_{f^*\mathcal{A}^{(m)}}$
+and $\psi'|_{f^*\mathcal{A}^{(m)}}$ agree
+as graded ring maps
$f^*\mathcal{A}^{(m)} \to \bigoplus_{n \geq 0} (\mathcal{L}')^{\otimes a'n}$.
+\item The quadruples $(d, f, \mathcal{L}, \psi)$ and
+$(d', f', \mathcal{L}', \psi')$ are equivalent.
+\item We have $f = f'$ and
+for some positive integer $m = ad = a'd'$ there exists an isomorphism
+$\beta : \mathcal{L}^{\otimes a} \to (\mathcal{L}')^{\otimes a'}$
+with the property that $\beta \circ \psi|_{f^*\mathcal{A}^{(m)}}$
+and $\psi'|_{f^*\mathcal{A}^{(m)}}$ agree
+as graded ring maps
$f^*\mathcal{A}^{(m)} \to \bigoplus_{n \geq 0} (\mathcal{L}')^{\otimes a'n}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Clearly (1) implies (2) and (2) implies (3) by restricting to
+more divisible degrees and powers of invertible sheaves.
+Assume (3) for some integer $m = ad = a'd'$. Let
+$m_0 = \text{lcm}(d, d')$ and write it as $m_0 = a_0d = a'_0d'$.
+We are given an isomorphism
+$\beta : \mathcal{L}^{\otimes a} \to (\mathcal{L}')^{\otimes a'}$
+with the property described in (3). We want to find an isomorphism
+$\beta_0 : \mathcal{L}^{\otimes a_0} \to (\mathcal{L}')^{\otimes a'_0}$
+having that property as well.
+Since by assumption the maps $\psi : f^*\mathcal{A}_d \to \mathcal{L}$
+and $\psi' : (f')^*\mathcal{A}_{d'} \to \mathcal{L}'$ are surjective the
+same is true for the maps
+$\psi : f^*\mathcal{A}_{m_0} \to \mathcal{L}^{\otimes a_0}$
and $\psi' : (f')^*\mathcal{A}_{m_0} \to (\mathcal{L}')^{\otimes a'_0}$.
+Hence if $\beta_0$ exists it is uniquely determined by the
+condition that $\beta_0 \circ \psi = \psi'$. This means that
+we may work locally on $T$. Hence we may assume that
+$f = f' : T \to S$ maps into an affine open, in other words
+we may assume that $S$ is affine. In this case the result follows
+from the corresponding result for triples (see Lemma \ref{lemma-equivalent})
+and the fact that triples and quadruples correspond in the
+affine base case (see proof of Lemma \ref{lemma-relative-proj-affine}).
+\end{proof}
+
+\noindent
+Suppose $d' = ad$. Consider the transformation of functors $F_d \to F_{d'}$
+which assigns to the quadruple $(d, f, \mathcal{L}, \psi)$ over
+$T$ the quadruple
+$(d', f, \mathcal{L}^{\otimes a}, \psi|_{f^*\mathcal{A}^{(d')}})$.
+One of the implications of Lemma \ref{lemma-equivalent-relative} is that the
+transformation $F_d \to F_{d'}$ is injective!
+For a quasi-compact scheme $T$ we define
+$$
+F(T) = \bigcup\nolimits_{d \in \mathbf{N}} F_d(T)
+$$
+with transition maps as explained above. This clearly defines a
+contravariant functor on the category of quasi-compact schemes
+with values in sets. For a general scheme
+$T$ we define
+$$
+F(T)
+=
+\lim_{V \subset T\text{ quasi-compact open}} F(V).
+$$
+In other words, an element $\xi$ of $F(T)$ corresponds to a compatible system
+of choices of elements $\xi_V \in F(V)$ where $V$ ranges over the
+quasi-compact opens of $T$.
+We omit the definition of the pullback map $F(T) \to F(T')$
+for a morphism $T' \to T$ of schemes.
+Thus we have defined our functor
+\begin{equation}
+\label{equation-proj}
+F : \Sch^{opp} \longrightarrow \textit{Sets}
+\end{equation}
+
+\begin{lemma}
+\label{lemma-relative-proj}
+In Situation \ref{situation-relative-proj}.
+The functor $F$ above is representable by a scheme.
+\end{lemma}
+
+\begin{proof}
+Let $U_d \to S$ be the scheme representing the functor $F_d$
+defined above. Let $\mathcal{L}_d$,
+$\psi^d : \pi_d^*\mathcal{A}^{(d)} \to
+\bigoplus_{n \geq 0} \mathcal{L}_d^{\otimes n}$ be the universal object.
+If $d | d'$, then we may consider the quadruple
+$(d', \pi_d, \mathcal{L}_d^{\otimes d'/d}, \psi^d|_{\mathcal{A}^{(d')}})$
+which determines a canonical morphism $U_d \to U_{d'}$ over $S$.
+By construction this morphism corresponds to the transformation
+of functors $F_d \to F_{d'}$ defined above.
+
+\medskip\noindent
+For every affine open $\Spec(R) = V \subset S$
+setting $A = \Gamma(V, \mathcal{A})$ we have a canonical
+identification of the base change $U_{d, V}$ with the corresponding open
+subscheme of $\text{Proj}(A)$, see Lemma \ref{lemma-relative-proj-affine}.
+Moreover, the morphisms $U_{d, V} \to U_{d', V}$ constructed above
+correspond to the inclusions of opens in $\text{Proj}(A)$.
+Thus we conclude that $U_d \to U_{d'}$ is an open immersion.
+
+\medskip\noindent
+This allows us to construct $X$
+by glueing the schemes $U_d$ along the open immersions $U_d \to U_{d'}$.
+Technically, it is convenient to choose a sequence
+$d_1 | d_2 | d_3 | \ldots$ such that every positive integer
+divides one of the $d_i$ and to simply take
+$X = \bigcup U_{d_i}$ using the open immersions above.
+It is then a simple matter to prove that $X$ represents the
+functor $F$.
+\end{proof}
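\noindent
A concrete choice of the sequence used at the end of the proof above
is $d_i = i!$, that is
$$
d_1 = 1, \quad d_2 = 2, \quad d_3 = 6, \quad d_4 = 24, \quad \ldots
$$
Indeed $d_i$ divides $d_{i + 1}$ and every positive integer $k$
divides $k! = d_k$.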
+
+\begin{lemma}
+\label{lemma-glueing-gives-functor-proj}
+In Situation \ref{situation-relative-proj}.
+The scheme $\pi : \underline{\text{Proj}}_S(\mathcal{A}) \to S$
+constructed in Lemma \ref{lemma-glue-relative-proj}
+and the scheme representing the functor $F$
+are canonically isomorphic as schemes over $S$.
+\end{lemma}
+
+\begin{proof}
+Let $X$ be the scheme representing the functor $F$.
+Note that $X$ is a scheme over $S$ since the functor $F$
+comes equipped with a natural transformation $F \to h_S$.
+Write $Y = \underline{\text{Proj}}_S(\mathcal{A})$.
+We have to show that $X \cong Y$ as $S$-schemes.
+We give two arguments.
+
+\medskip\noindent
+The first argument uses the construction of $X$ as the union
+of the schemes $U_d$ representing $F_d$ in the
+proof of Lemma \ref{lemma-relative-proj}.
+Over each affine open of $S$ we can identify $X$ with the homogeneous spectrum
+of the sections of $\mathcal{A}$ over that open, since this was
+true for the opens $U_d$. Moreover, these identifications
+are compatible with further restrictions to smaller affine opens.
+On the other hand, $Y$ was constructed by glueing these
+homogeneous spectra.
+Hence we can glue these isomorphisms to an isomorphism
+between $X$ and $\underline{\text{Proj}}_S(\mathcal{A})$ as
+desired. Details omitted.
+
+\medskip\noindent
+Here is the second argument.
+Lemma \ref{lemma-glue-relative-proj-twists}
+shows that there exists a morphism of graded algebras
+$$
+\psi : \pi^*\mathcal{A}
+\longrightarrow
+\bigoplus\nolimits_{n \geq 0} \mathcal{O}_Y(n)
+$$
+over $Y$ which on sections over affine opens of $S$ agrees with
+(\ref{equation-global-sections}). Hence for every $y \in Y$
+there exists an open neighbourhood $V \subset Y$ of $y$
+and an integer $d \geq 1$ such that for $d | n$ the sheaf
+$\mathcal{O}_Y(n)|_V$ is invertible and the multiplication maps
+$\mathcal{O}_Y(n)|_V \otimes_{\mathcal{O}_V} \mathcal{O}_Y(m)|_V
+\to \mathcal{O}_Y(n + m)|_V$ are isomorphisms. Thus
+$\psi$ restricted to the sheaf $\pi^*\mathcal{A}^{(d)}|_V$
+gives an element of $F_d(V)$. Since the opens $V$ cover $Y$
+we see that ``$\psi$'' gives rise to an element of $F(Y)$,
+hence to a canonical morphism $Y \to X$ over $S$.
+Because this construction is completely canonical, to see
+that it is an isomorphism we may work locally on $S$.
+Hence we reduce to the case $S$ affine where the result is clear.
+\end{proof}
+
+\begin{definition}
+\label{definition-relative-proj}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent sheaf of
+graded $\mathcal{O}_S$-algebras. The
+{\it relative homogeneous spectrum of $\mathcal{A}$ over $S$},
+or the {\it homogeneous spectrum of $\mathcal{A}$ over $S$}, or the
+{\it relative Proj of $\mathcal{A}$ over $S$} is the scheme
+constructed in Lemma \ref{lemma-glue-relative-proj} which represents the
+functor $F$ (\ref{equation-proj}), see
+Lemma \ref{lemma-glueing-gives-functor-proj}.
+We denote it $\pi : \underline{\text{Proj}}_S(\mathcal{A}) \to S$.
+\end{definition}
+
+\noindent
+The relative Proj comes equipped with a quasi-coherent
+sheaf of $\mathbf{Z}$-graded algebras
+$\bigoplus_{n \in \mathbf{Z}}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)$
+(the twists of the structure sheaf) and
+a ``universal'' homomorphism of graded algebras
+$$
+\psi_{univ} :
+\mathcal{A}
+\longrightarrow
+\pi_*\left(
+\bigoplus\nolimits_{n \geq 0}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)
+\right)
+$$
+see Lemma \ref{lemma-glue-relative-proj-twists}. We may also think of this
+as a homomorphism
+$$
+\psi_{univ} :
+\pi^*\mathcal{A}
+\longrightarrow
+\bigoplus\nolimits_{n \geq 0}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n)
+$$
+if we like. The following lemma is a formulation of the
+universality of this object.
+
+\begin{lemma}
+\label{lemma-tie-up-psi}
+In Situation \ref{situation-relative-proj}.
+Let $(f : T \to S, d, \mathcal{L}, \psi)$
+be a quadruple. Let
+$r_{d, \mathcal{L}, \psi} : T \to \underline{\text{Proj}}_S(\mathcal{A})$
+be the associated $S$-morphism.
+There exists an isomorphism
+of $\mathbf{Z}$-graded $\mathcal{O}_T$-algebras
+$$
+\theta :
+r_{d, \mathcal{L}, \psi}^*\left(
+\bigoplus\nolimits_{n \in \mathbf{Z}}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(nd)
+\right)
+\longrightarrow
+\bigoplus\nolimits_{n \in \mathbf{Z}} \mathcal{L}^{\otimes n}
+$$
+such that the following diagram commutes
+$$
+\xymatrix{
+\mathcal{A}^{(d)} \ar[rr]_-{\psi}
+ \ar[rd]_-{\psi_{univ}} & &
+f_*\left(
+\bigoplus\nolimits_{n \in \mathbf{Z}}
+\mathcal{L}^{\otimes n}
+\right) \\
+ &
+\pi_*\left(
+\bigoplus\nolimits_{n \geq 0}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(nd)
+\right) \ar[ru]_\theta
+}
+$$
+The commutativity of this diagram uniquely determines $\theta$.
+\end{lemma}
+
+\begin{proof}
+Note that the quadruple $(f : T \to S, d, \mathcal{L}, \psi)$
+defines an element of $F_d(T)$. Let
+$U_d \subset \underline{\text{Proj}}_S(\mathcal{A})$
+be the locus
+where the sheaf $\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(d)$
+is invertible and generated by the image of
+$\psi_{univ} : \pi^*\mathcal{A}_d \to
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(d)$.
+Recall that $U_d$ represents the functor $F_d$, see the proof
+of Lemma \ref{lemma-relative-proj}. Hence the result will follow
+if we can show the quadruple
+$(U_d \to S, d, \mathcal{O}_{U_d}(d), \psi_{univ}|_{\mathcal{A}^{(d)}})$
+is the universal family, i.e., the representing object in $F_d(U_d)$.
+We may do this after restricting to an affine open of $S$ because
+(a) the formation of the functors $F_d$ commutes with base change
+(see Lemma \ref{lemma-proj-base-change}), and (b) the pair
+$(\bigoplus_{n \in \mathbf{Z}}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(n),
+\psi_{univ})$
+is constructed by glueing over affine opens in $S$
+(see Lemma \ref{lemma-glue-relative-proj-twists}).
+Hence we may assume that $S$ is affine. In this case the functor
+of quadruples $F_d$ and the functor of triples $F_d$ agree
+(see proof of Lemma \ref{lemma-relative-proj-affine}) and moreover
+Lemma \ref{lemma-proj-functor-strict}
+shows that $(d, \mathcal{O}_{U_d}(d), \psi^d)$
+is the universal triple over $U_d$.
+Going backwards through the identifications in the proof of
+Lemma \ref{lemma-relative-proj-affine} shows that
+$(U_d \to S, d, \mathcal{O}_{U_d}(d), \psi_{univ}|_{\mathcal{A}^{(d)}})$
+is the universal quadruple as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-proj-separated}
+Let $S$ be a scheme and $\mathcal{A}$ be a quasi-coherent sheaf
+of graded $\mathcal{O}_S$-algebras. The morphism
+$\pi : \underline{\text{Proj}}_S(\mathcal{A}) \to S$
+is separated.
+\end{lemma}
+
+\begin{proof}
+To prove a morphism is separated we may work locally on the base,
+see Schemes, Section \ref{schemes-section-separation-axioms}.
+By construction $\underline{\text{Proj}}_S(\mathcal{A})$ is
+over any affine $U \subset S$ isomorphic to
+$\text{Proj}(A)$ with $A = \mathcal{A}(U)$. By
+Lemma \ref{lemma-proj-separated} we see that $\text{Proj}(A)$ is separated.
+Hence $\text{Proj}(A) \to U$ is separated (see
+Schemes, Lemma \ref{schemes-lemma-compose-after-separated}) as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-proj-base-change}
+Let $S$ be a scheme and $\mathcal{A}$ be a quasi-coherent sheaf
+of graded $\mathcal{O}_S$-algebras. Let $g : S' \to S$ be any morphism
+of schemes. Then there is a canonical isomorphism
+$$
+r :
+\underline{\text{Proj}}_{S'}(g^*\mathcal{A})
+\longrightarrow
+S' \times_S \underline{\text{Proj}}_S(\mathcal{A})
+$$
+as well as a corresponding isomorphism
+$$
+\theta :
+r^*\text{pr}_2^*\left(\bigoplus\nolimits_{d \in \mathbf{Z}}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(d)\right)
+\longrightarrow
+\bigoplus\nolimits_{d \in \mathbf{Z}}
+\mathcal{O}_{\underline{\text{Proj}}_{S'}(g^*\mathcal{A})}(d)
+$$
+of $\mathbf{Z}$-graded
+$\mathcal{O}_{\underline{\text{Proj}}_{S'}(g^*\mathcal{A})}$-algebras.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-proj-base-change} and the construction
+of $\underline{\text{Proj}}_S(\mathcal{A})$ in
+Lemma \ref{lemma-relative-proj} as the union
+of the schemes $U_d$ representing the functors $F_d$.
+In terms of the construction of relative Proj via glueing
+this isomorphism is given by the isomorphisms constructed
+in Lemma \ref{lemma-base-change-map-proj} which provides us with
+the isomorphism $\theta$. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-apply-relative}
+Let $S$ be a scheme.
+Let $\mathcal{A}$ be a quasi-coherent sheaf of graded $\mathcal{O}_S$-algebras
+generated as an $\mathcal{A}_0$-algebra by $\mathcal{A}_1$.
+In this case the scheme $X = \underline{\text{Proj}}_S(\mathcal{A})$
+represents the functor $F_1$ which associates to a scheme
+$f : T \to S$ over $S$ the set of pairs $(\mathcal{L}, \psi)$, where
+\begin{enumerate}
+\item $\mathcal{L}$ is an invertible $\mathcal{O}_T$-module, and
+\item $\psi : f^*\mathcal{A} \to \bigoplus_{n \geq 0} \mathcal{L}^{\otimes n}$
+is a graded $\mathcal{O}_T$-algebra homomorphism such that
+$f^*\mathcal{A}_1 \to \mathcal{L}$ is surjective
+\end{enumerate}
+up to strict equivalence as above. Moreover, in this case all the
+quasi-coherent sheaves
+$\mathcal{O}_{\underline{\text{Proj}}(\mathcal{A})}(n)$
+are invertible
+$\mathcal{O}_{\underline{\text{Proj}}(\mathcal{A})}$-modules
+and the multiplication maps induce isomorphisms
+$
+\mathcal{O}_{\underline{\text{Proj}}(\mathcal{A})}(n)
+\otimes_{\mathcal{O}_{\underline{\text{Proj}}(\mathcal{A})}}
+\mathcal{O}_{\underline{\text{Proj}}(\mathcal{A})}(m) =
+\mathcal{O}_{\underline{\text{Proj}}(\mathcal{A})}(n + m)$.
+\end{lemma}
+
+\begin{proof}
+Under the assumptions of the lemma the sheaves
+$\mathcal{O}_{\underline{\text{Proj}}(\mathcal{A})}(n)$
+are invertible and the multiplication maps are isomorphisms
+by Lemma \ref{lemma-relative-proj} and
+Lemma \ref{lemma-apply}
+over affine opens of $S$. Thus $X$ actually represents the
+functor $F_1$, see proof of Lemma \ref{lemma-relative-proj}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Quasi-coherent sheaves on relative Proj}
+\label{section-quasi-coherent-relative-proj}
+
+\noindent
+We briefly discuss how to deal with graded modules in the relative
+setting.
+
+\medskip\noindent
+We place ourselves in Situation \ref{situation-relative-proj}.
+So $S$ is a scheme, and
+$\mathcal{A}$ is a quasi-coherent graded $\mathcal{O}_S$-algebra.
+Let $\mathcal{M} = \bigoplus_{n \in \mathbf{Z}} \mathcal{M}_n$
+be a graded $\mathcal{A}$-module, quasi-coherent as an $\mathcal{O}_S$-module.
+We are going to describe the associated quasi-coherent sheaf
+of modules on $\underline{\text{Proj}}_S(\mathcal{A})$.
+We first describe the value of this sheaf on schemes $T$ mapping
+into the relative Proj.
+
+\medskip\noindent
+Let $T$ be a scheme. Let $(d, f : T \to S, \mathcal{L}, \psi)$
+be a quadruple over $T$, as in Section \ref{section-relative-proj}.
+We define a quasi-coherent sheaf
+$\widetilde{\mathcal{M}}_T$ of $\mathcal{O}_T$-modules
+as follows
+\begin{equation}
+\label{equation-widetilde-M}
+\widetilde{\mathcal{M}}_T =
+\left(
+f^*\mathcal{M}^{(d)}
+\otimes_{f^*\mathcal{A}^{(d)}}
+\left(\bigoplus\nolimits_{n \in \mathbf{Z}} \mathcal{L}^{\otimes n}\right)
+\right)_0
+\end{equation}
+So $\widetilde{\mathcal{M}}_T$ is the degree $0$ part
+of the tensor product of the graded $f^*\mathcal{A}^{(d)}$-modules
+$\mathcal{M}^{(d)}$ and
+$\bigoplus\nolimits_{n \in \mathbf{Z}} \mathcal{L}^{\otimes n}$.
+Note that the sheaf $\widetilde{\mathcal{M}}_T$ depends on the quadruple
+even though we suppressed this in the notation.
+This construction has the pleasing property that
+given any morphism $g : T' \to T$ we have
+$\widetilde{\mathcal{M}}_{T'} = g^*\widetilde{\mathcal{M}}_T$
+where $\widetilde{\mathcal{M}}_{T'}$ denotes the quasi-coherent
+sheaf associated to the pullback quadruple
+$(d, f \circ g, g^*\mathcal{L}, g^*\psi)$.
+
+\medskip\noindent
+Since all sheaves in (\ref{equation-widetilde-M}) are quasi-coherent
+we can spell out the construction
+over an affine open $\Spec(C) = V \subset T$
+which maps into an affine open $\Spec(R) = U \subset S$.
+Namely, suppose that $\mathcal{A}|_U$ corresponds
+to the graded $R$-algebra $A$, that $\mathcal{M}|_U$ corresponds to the
+graded $A$-module $M$, and that $\mathcal{L}|_V$ corresponds to the
+invertible $C$-module $L$. The map $\psi$ gives
+rise to a graded $R$-algebra map
+$\gamma : A^{(d)} \to \bigoplus_{n \geq 0} L^{\otimes n}$.
+(Tensor powers of $L$ over $C$.)
+Then $(\widetilde{\mathcal{M}}_T)|_V$
+is the quasi-coherent sheaf associated to the $C$-module
+$$
+N_{R, C, A, M, \gamma} =
+\left(
+M^{(d)} \otimes_{A^{(d)}, \gamma}
+\left(\bigoplus\nolimits_{n \in \mathbf{Z}} L^{\otimes n}\right)
+\right)_0
+$$
+By assumption we may even cover $T$ by affine opens
+$V$ such that there exists some $a \in A_d$ such that
+$\gamma(a) \in L$ is a $C$-basis for the module $L$.
+In that case any element of $N_{R, C, A, M, \gamma}$ is a sum
+of pure tensors $\sum m_i \otimes \gamma(a)^{-n_i}$ with $m_i \in M_{n_id}$.
+In fact we may multiply each $m_i$ by a suitable positive power
+of $a$ and collect terms to see that each element of $N_{R, C, A, M, \gamma}$
+can be written as $m \otimes \gamma(a)^{-n}$ with $m \in M_{nd}$ and
+$n \gg 0$. In other words we see that in this case
+$$
+N_{R, C, A, M, \gamma} = M_{(a)} \otimes_{A_{(a)}} C
+$$
+where the map $A_{(a)} \to C$ is the map
+$x/a^n \mapsto \gamma(x)/\gamma(a)^n$. In other words, this is
+the value of $\widetilde{M}$ on $D_{+}(a) \subset \text{Proj}(A)$
+pulled back to $\Spec(C)$ via the morphism
+$\Spec(C) \to D_{+}(a)$ coming from $\gamma$.
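+
+\medskip\noindent
+For instance (a simple illustrative check), take $d = 1$,
+$A = R[x_0, x_1]$ with the standard grading, $M = A$ as a graded
+$A$-module, and $a = x_0$. Then $M_{(a)} = A_{(x_0)}$ and the formula
+above becomes
+$$
+N_{R, C, A, M, \gamma} = A_{(x_0)} \otimes_{A_{(x_0)}} C = C
+$$
+which recovers the fact that for $\mathcal{M} = \mathcal{A}$ the
+construction (\ref{equation-widetilde-M}) returns
+$\widetilde{\mathcal{M}}_T = \mathcal{O}_T$.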
+
+\begin{lemma}
+\label{lemma-relative-proj-modules}
+In Situation \ref{situation-relative-proj}.
+For any quasi-coherent sheaf of graded $\mathcal{A}$-modules
+$\mathcal{M}$ on $S$, there exists a canonical associated sheaf
+of $\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}$-modules
+$\widetilde{\mathcal{M}}$ with the following properties:
+\begin{enumerate}
+\item Given a scheme $T$ and a quadruple
+$(T \to S, d, \mathcal{L}, \psi)$ over $T$
+corresponding to a morphism
+$h : T \to \underline{\text{Proj}}_S(\mathcal{A})$ there is
+a canonical isomorphism
+$\widetilde{\mathcal{M}}_T = h^*\widetilde{\mathcal{M}}$
+where $\widetilde{\mathcal{M}}_T$ is defined by (\ref{equation-widetilde-M}).
+\item The isomorphisms of (1) are compatible with pullbacks.
+\item There is a canonical map
+$$
+\pi^*\mathcal{M}_0 \longrightarrow \widetilde{\mathcal{M}}.
+$$
+\item The construction $\mathcal{M} \mapsto \widetilde{\mathcal{M}}$
+is functorial in $\mathcal{M}$.
+\item The construction $\mathcal{M} \mapsto \widetilde{\mathcal{M}}$
+is exact.
+\item There are canonical maps
+$$
+\widetilde{\mathcal{M}}
+\otimes_{\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}}
+\widetilde{\mathcal{N}}
+\longrightarrow
+\widetilde{\mathcal{M} \otimes_\mathcal{A} \mathcal{N}}
+$$
+as in
+Lemma \ref{lemma-widetilde-tensor}.
+\item There exist canonical maps
+$$
+\pi^*\mathcal{M}
+\longrightarrow
+\bigoplus\nolimits_{n \in \mathbf{Z}}
+\widetilde{\mathcal{M}(n)}
+$$
+generalizing (\ref{equation-global-sections-more-generally}).
+\item The formation of $\widetilde{\mathcal{M}}$ commutes with base change.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. We should split this lemma into parts and prove the parts separately.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Functoriality of relative Proj}
+\label{section-functoriality-relative-proj}
+
+\noindent
+This section is the analogue of Section \ref{section-functoriality-proj}
+for the relative Proj. Let $S$ be a scheme. A graded $\mathcal{O}_S$-algebra
+map $\psi : \mathcal{A} \to \mathcal{B}$ does not always give rise to a
+morphism of associated relative Proj. The correct result is stated as follows.
+
+\begin{lemma}
+\label{lemma-morphism-relative-proj}
+Let $S$ be a scheme. Let $\mathcal{A}$, $\mathcal{B}$ be two graded
+quasi-coherent $\mathcal{O}_S$-algebras. Set
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ and
+$q : Y = \underline{\text{Proj}}_S(\mathcal{B}) \to S$. Let
+$\psi : \mathcal{A} \to \mathcal{B}$ be a homomorphism of
+graded $\mathcal{O}_S$-algebras. There is a canonical open
+$U(\psi) \subset Y$ and a canonical morphism of schemes
+$$
+r_\psi :
+U(\psi)
+\longrightarrow
+X
+$$
+over $S$ and a map of $\mathbf{Z}$-graded $\mathcal{O}_{U(\psi)}$-algebras
+$$
+\theta = \theta_\psi :
+r_\psi^*\left(
+\bigoplus\nolimits_{d \in \mathbf{Z}} \mathcal{O}_X(d)
+\right)
+\longrightarrow
+\bigoplus\nolimits_{d \in \mathbf{Z}} \mathcal{O}_{U(\psi)}(d).
+$$
+The triple $(U(\psi), r_\psi, \theta)$ is characterized by the property
+that for any affine open $W \subset S$ the triple
+$$
+(U(\psi) \cap p^{-1}W,\quad
+r_\psi|_{U(\psi) \cap p^{-1}W} : U(\psi) \cap p^{-1}W \to q^{-1}W,\quad
+\theta|_{U(\psi) \cap p^{-1}W})
+$$
+is equal to the triple associated to
+$\psi : \mathcal{A}(W) \to \mathcal{B}(W)$ in
+Lemma \ref{lemma-morphism-proj} via the identifications
+$p^{-1}W = \text{Proj}(\mathcal{A}(W))$ and
+$q^{-1}W = \text{Proj}(\mathcal{B}(W))$ of
+Section \ref{section-relative-proj-via-glueing}.
+\end{lemma}
+
+\begin{proof}
+This lemma proves itself by glueing the local triples.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphism-relative-proj-transitive}
+Let $S$ be a scheme. Let $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{C}$ be
+quasi-coherent graded $\mathcal{O}_S$-algebras.
+Set $X = \underline{\text{Proj}}_S(\mathcal{A})$,
+$Y = \underline{\text{Proj}}_S(\mathcal{B})$ and
+$Z = \underline{\text{Proj}}_S(\mathcal{C})$.
+Let $\varphi : \mathcal{A} \to \mathcal{B}$,
+$\psi : \mathcal{B} \to \mathcal{C}$ be graded $\mathcal{O}_S$-algebra maps.
+Then we have
+$$
+U(\psi \circ \varphi) = r_\psi^{-1}(U(\varphi))
+\quad
+\text{and}
+\quad
+r_{\psi \circ \varphi}
+=
+r_\varphi \circ r_\psi|_{U(\psi \circ \varphi)}.
+$$
+In addition we have
+$$
+\theta_\psi \circ r_\psi^*\theta_\varphi
+=
+\theta_{\psi \circ \varphi}
+$$
+with obvious notation.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-graded-rings-map-relative-proj}
+With hypotheses and notation as in Lemma \ref{lemma-morphism-relative-proj}
+above. Assume $\mathcal{A}_d \to \mathcal{B}_d$ is surjective for
+$d \gg 0$. Then
+\begin{enumerate}
+\item $U(\psi) = Y$,
+\item $r_\psi : Y \to X$ is a closed immersion, and
+\item the maps $\theta : r_\psi^*\mathcal{O}_X(n) \to \mathcal{O}_Y(n)$
+are surjective but not isomorphisms in general (even if
+$\mathcal{A} \to \mathcal{B}$ is surjective).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows on combining
+Lemma \ref{lemma-morphism-relative-proj}
+with
+Lemma \ref{lemma-surjective-graded-rings-map-proj}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-eventual-iso-graded-rings-map-relative-proj}
+With hypotheses and notation as in Lemma \ref{lemma-morphism-relative-proj}
+above. Assume $\mathcal{A}_d \to \mathcal{B}_d$ is an isomorphism for all
+$d \gg 0$. Then
+\begin{enumerate}
+\item $U(\psi) = Y$,
+\item $r_\psi : Y \to X$ is an isomorphism, and
+\item the maps $\theta : r_\psi^*\mathcal{O}_X(n) \to \mathcal{O}_Y(n)$
+are isomorphisms.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows on combining
+Lemma \ref{lemma-morphism-relative-proj}
+with
+Lemma \ref{lemma-eventual-iso-graded-rings-map-proj}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-surjective-generated-degree-1-map-relative-proj}
+With hypotheses and notation as in Lemma \ref{lemma-morphism-relative-proj}
+above. Assume $\mathcal{A}_d \to \mathcal{B}_d$ is surjective for $d \gg 0$
+and that $\mathcal{A}$ is generated by $\mathcal{A}_1$ over $\mathcal{A}_0$.
+Then
+\begin{enumerate}
+\item $U(\psi) = Y$,
+\item $r_\psi : Y \to X$ is a closed immersion, and
+\item the maps $\theta : r_\psi^*\mathcal{O}_X(n) \to \mathcal{O}_Y(n)$
+are isomorphisms.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows on combining
+Lemma \ref{lemma-morphism-relative-proj}
+with
+Lemma \ref{lemma-surjective-graded-rings-generated-degree-1-map-proj}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Invertible sheaves and morphisms into relative Proj}
+\label{section-invertible-relative-proj}
+
+\noindent
+It seems that we may need the following lemma somewhere.
+The situation is the following:
+\begin{enumerate}
+\item Let $S$ be a scheme.
+\item Let $\mathcal{A}$ be a quasi-coherent graded $\mathcal{O}_S$-algebra.
+\item Denote $\pi : \underline{\text{Proj}}_S(\mathcal{A}) \to S$ the relative
+homogeneous spectrum over $S$.
+\item Let $f : X \to S$ be a morphism of schemes.
+\item Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+\item Let $\psi : f^*\mathcal{A} \to
+\bigoplus_{d \geq 0} \mathcal{L}^{\otimes d}$
+be a homomorphism of graded $\mathcal{O}_X$-algebras.
+\end{enumerate}
+Given this data set
+$$
+U(\psi) = \bigcup\nolimits_{(U, V, a)} U_{\psi(a)}
+$$
+where $(U, V, a)$ satisfies:
+\begin{enumerate}
+\item $V \subset S$ affine open,
+\item $U = f^{-1}(V)$, and
+\item $a \in \mathcal{A}(V)_{+}$ is homogeneous.
+\end{enumerate}
+Namely, then $\psi(a) \in \Gamma(U, \mathcal{L}^{\otimes \deg(a)})$
+and $U_{\psi(a)}$ is the corresponding open (see
+Modules, Lemma \ref{modules-lemma-s-open}).
+
+\begin{lemma}
+\label{lemma-invertible-map-into-relative-proj}
+With assumptions and notation as above. The morphism
+$\psi$ induces a canonical morphism of schemes over $S$
+$$
+r_{\mathcal{L}, \psi} :
+U(\psi) \longrightarrow \underline{\text{Proj}}_S(\mathcal{A})
+$$
+together with a map of graded $\mathcal{O}_{U(\psi)}$-algebras
+$$
+\theta :
+r_{\mathcal{L}, \psi}^*\left(
+\bigoplus\nolimits_{d \geq 0}
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(d)
+\right)
+\longrightarrow
+\bigoplus\nolimits_{d \geq 0} \mathcal{L}^{\otimes d}|_{U(\psi)}
+$$
+characterized by the following properties:
+\begin{enumerate}
+\item For every open $V \subset S$ and every $d \geq 0$ the diagram
+$$
+\xymatrix{
+\mathcal{A}_d(V) \ar[d]_{\psi} \ar[r]_{\psi} &
+\Gamma(f^{-1}(V), \mathcal{L}^{\otimes d}) \ar[d]^{restrict} \\
+\Gamma(\pi^{-1}(V),
+\mathcal{O}_{\underline{\text{Proj}}_S(\mathcal{A})}(d)) \ar[r]^{\theta} &
+\Gamma(f^{-1}(V) \cap U(\psi), \mathcal{L}^{\otimes d})
+}
+$$
+is commutative.
+\item For any $d \geq 1$ and any open subscheme $W \subset X$
+such that $\psi|_W : f^*\mathcal{A}_d|_W \to \mathcal{L}^{\otimes d}|_W$
+is surjective the restriction of the morphism $r_{\mathcal{L}, \psi}$
+agrees with the morphism $W \to \underline{\text{Proj}}_S(\mathcal{A})$
+which exists by the construction of the relative homogeneous spectrum,
+see Definition \ref{definition-relative-proj}.
+\item For any affine open $V \subset S$, the restriction
+$$
+(U(\psi) \cap f^{-1}(V), r_{\mathcal{L}, \psi}|_{U(\psi) \cap f^{-1}(V)},
+\theta|_{U(\psi) \cap f^{-1}(V)})
+$$
+agrees via $i_V$ (see Lemma \ref{lemma-glue-relative-proj}) with the triple
+$(U(\psi'), r_{\mathcal{L}, \psi'}, \theta')$
+of Lemma \ref{lemma-invertible-map-into-proj} associated to the map
+$\psi' : A = \mathcal{A}(V) \to \Gamma_*(f^{-1}(V), \mathcal{L}|_{f^{-1}(V)})$
+induced by $\psi$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Use characterization (3) to construct the morphism $r_{\mathcal{L}, \psi}$
+and $\theta$ locally over $S$. Use the uniqueness of
+Lemma \ref{lemma-invertible-map-into-proj}
+to show that the construction glues. Details omitted.
+\end{proof}
+
+
+
+
+
+
+
+\section{Twisting by invertible sheaves and relative Proj}
+\label{section-twisting-and-proj}
+
+\noindent
+Let $S$ be a scheme.
+Let $\mathcal{A} = \bigoplus_{d \geq 0} \mathcal{A}_d$ be a
+quasi-coherent graded $\mathcal{O}_S$-algebra.
+Let $\mathcal{L}$ be an invertible sheaf on $S$.
+In this situation we obtain another quasi-coherent graded
+$\mathcal{O}_S$-algebra, namely
+$$
+\mathcal{B}
+=
+\bigoplus\nolimits_{d \geq 0}
+\mathcal{A}_d \otimes_{\mathcal{O}_S} \mathcal{L}^{\otimes d}
+$$
+It turns out that $\mathcal{A}$ and $\mathcal{B}$ have
+isomorphic relative homogeneous spectra.
+
+\begin{lemma}
+\label{lemma-twisting-and-proj}
+With notation $S$, $\mathcal{A}$, $\mathcal{L}$ and $\mathcal{B}$ as
+above. There is a canonical isomorphism
+$$
+\xymatrix{
+P = \underline{\text{Proj}}_S(\mathcal{A})
+\ar[rr]_g \ar[rd]_\pi & &
+\underline{\text{Proj}}_S(\mathcal{B}) = P'
+\ar[ld]^{\pi'} \\
+& S &
+}
+$$
+with the following properties
+\begin{enumerate}
+\item There are isomorphisms
+$\theta_n : g^*\mathcal{O}_{P'}(n)
+\to
+\mathcal{O}_P(n) \otimes \pi^*\mathcal{L}^{\otimes n}$
+which fit together to give an isomorphism of $\mathbf{Z}$-graded
+algebras
+$$
+\theta :
+g^*\left(
+\bigoplus\nolimits_{n \in \mathbf{Z}} \mathcal{O}_{P'}(n)
+\right)
+\longrightarrow
+\bigoplus\nolimits_{n \in \mathbf{Z}} \mathcal{O}_P(n)
+\otimes \pi^*\mathcal{L}^{\otimes n}
+$$
+\item For every open $V \subset S$ the diagrams
+$$
+\xymatrix{
+\mathcal{A}_n(V) \otimes \mathcal{L}^{\otimes n}(V)
+\ar[r]_{multiply} \ar[d]^{\psi \otimes \pi^*}
+&
+\mathcal{B}_n(V) \ar[dd]^\psi \\
+\Gamma(\pi^{-1}V, \mathcal{O}_P(n)) \otimes
+\Gamma(\pi^{-1}V, \pi^*\mathcal{L}^{\otimes n})
+\ar[d]^{multiply} \\
+\Gamma(\pi^{-1}V, \mathcal{O}_P(n) \otimes \pi^*\mathcal{L}^{\otimes n})
+&
+\Gamma(\pi'^{-1}V, \mathcal{O}_{P'}(n)) \ar[l]_-{\theta_n}
+}
+$$
+are commutative.
+\item Add more here as necessary.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is the identity map when $\mathcal{L} \cong \mathcal{O}_S$.
+In general choose an open covering of $S$ such that $\mathcal{L}$
+is trivialized over the pieces and glue the corresponding maps.
+Details omitted.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Projective bundles}
+\label{section-projective-bundle}
+
+\noindent
+Let $S$ be a scheme.
+Let $\mathcal{E}$ be a quasi-coherent sheaf of $\mathcal{O}_S$-modules.
+By Modules, Lemma \ref{modules-lemma-whole-tensor-algebra-permanence}
+the symmetric algebra $\text{Sym}(\mathcal{E})$ of
+$\mathcal{E}$ over $\mathcal{O}_S$
+is a quasi-coherent sheaf of $\mathcal{O}_S$-algebras.
+Note that it is generated in degree $1$ over $\mathcal{O}_S$.
+Hence it makes sense to apply the construction of the
+previous section to it, specifically Lemmas
+\ref{lemma-relative-proj} and \ref{lemma-apply-relative}.
+
+\begin{definition}
+\label{definition-projective-bundle}
+Let $S$ be a scheme. Let $\mathcal{E}$ be a quasi-coherent
+$\mathcal{O}_S$-module\footnote{The reader may expect here
+the condition that $\mathcal{E}$ is finite locally free. We do not
+do so in order to be consistent with
+\cite[II, Definition 4.1.1]{EGA}.}.
+We denote
+$$
+\pi :
+\mathbf{P}(\mathcal{E}) = \underline{\text{Proj}}_S(\text{Sym}(\mathcal{E}))
+\longrightarrow
+S
+$$
+and we call it the {\it projective bundle associated to $\mathcal{E}$}.
+The symbol $\mathcal{O}_{\mathbf{P}(\mathcal{E})}(n)$
+indicates the invertible $\mathcal{O}_{\mathbf{P}(\mathcal{E})}$-module
+of Lemma \ref{lemma-apply-relative} and is called the $n$th
+{\it twist of the structure sheaf}.
+\end{definition}
+
+\noindent
+According to Lemma \ref{lemma-glue-relative-proj-twists} there are
+canonical $\mathcal{O}_S$-module homomorphisms
+$$
+\text{Sym}^n(\mathcal{E})
+\longrightarrow
+\pi_*\mathcal{O}_{\mathbf{P}(\mathcal{E})}(n)
+\quad\text{equivalently}\quad
+\pi^*\text{Sym}^n(\mathcal{E})
+\longrightarrow
+\mathcal{O}_{\mathbf{P}(\mathcal{E})}(n)
+$$
+for all $n \geq 0$. In particular, for $n = 1$ we have
+$$
+\mathcal{E}
+\longrightarrow
+\pi_*\mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)
+\quad\text{equivalently}\quad
+\pi^*\mathcal{E}
+\longrightarrow
+\mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)
+$$
+and the map $\pi^*\mathcal{E} \to \mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)$
+is a surjection by Lemma \ref{lemma-apply-relative}.
+This is a good way to remember how we have normalized
+our construction of $\mathbf{P}(\mathcal{E})$.
+
+\medskip\noindent
+Warning: In some references the scheme $\mathbf{P}(\mathcal{E})$
+is only defined for $\mathcal{E}$ finite locally free on $S$.
+Moreover sometimes $\mathbf{P}(\mathcal{E})$ is actually defined as our
+$\mathbf{P}(\mathcal{E}^\vee)$ where $\mathcal{E}^\vee$
+is the dual of $\mathcal{E}$ (and this is done only when $\mathcal{E}$ is
+finite locally free).
+
+\medskip\noindent
+Let $S$, $\mathcal{E}$, $\mathbf{P}(\mathcal{E}) \to S$ be as in
+Definition \ref{definition-projective-bundle}. Let $f : T \to S$
+be a scheme over $S$. Let $\psi : f^*\mathcal{E} \to \mathcal{L}$
+be a surjection where $\mathcal{L}$ is an invertible $\mathcal{O}_T$-module.
+The induced graded $\mathcal{O}_T$-algebra map
+$$
+f^*\text{Sym}(\mathcal{E}) = \text{Sym}(f^*\mathcal{E}) \to
+\text{Sym}(\mathcal{L}) = \bigoplus\nolimits_{n \geq 0} \mathcal{L}^{\otimes n}
+$$
+corresponds to a morphism
+$$
+\varphi_{\mathcal{L}, \psi} : T \longrightarrow \mathbf{P}(\mathcal{E})
+$$
+over $S$ by our construction of the relative Proj as the scheme representing
+the functor $F$ in Section \ref{section-relative-proj}. On the other hand,
+given a morphism $\varphi : T \to \mathbf{P}(\mathcal{E})$ over $S$
+we can set $\mathcal{L} = \varphi^*\mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)$
+and $\psi : f^*\mathcal{E} \to \mathcal{L}$ equal to the pullback
+by $\varphi$ of the canonical surjection
+$\pi^*\mathcal{E} \to \mathcal{O}_{\mathbf{P}(\mathcal{E})}(1)$.
+By Lemma \ref{lemma-apply-relative} these constructions
+are inverse bijections between the set of isomorphism classes of pairs
+$(\mathcal{L}, \psi)$ and the set of morphisms
+$\varphi : T \to \mathbf{P}(\mathcal{E})$ over $S$.
+Thus we see that $\mathbf{P}(\mathcal{E})$ represents the functor
+which associates to $f : T \to S$ the set of $\mathcal{O}_T$-module
+quotients of $f^*\mathcal{E}$ which are locally free of rank $1$.
+
+\begin{example}[Projective space of a vector space]
+\label{example-projective-space}
+Let $k$ be a field. Let $V$ be a $k$-vector space. The corresponding
+{\it projective space} is the $k$-scheme
+$$
+\mathbf{P}(V) = \text{Proj}(\text{Sym}(V))
+$$
+where $\text{Sym}(V)$ is the symmetric algebra on $V$ over $k$.
+Of course we have $\mathbf{P}(V) \cong \mathbf{P}^n_k$ if $\dim(V) = n + 1$
+because then the symmetric algebra on $V$ is isomorphic to a polynomial
+ring in $n + 1$ variables. If we
+think of $V$ as a quasi-coherent module on $\Spec(k)$, then $\mathbf{P}(V)$
+is the corresponding projective space bundle over $\Spec(k)$. By the
+discussion above a $k$-valued point $p$ of $\mathbf{P}(V)$ corresponds to
+a surjection of $k$-vector spaces $V \to L_p$ with $\dim(L_p) = 1$.
+More generally, let $X$ be a scheme over $k$, let $\mathcal{L}$ be an
+invertible $\mathcal{O}_X$-module, and let
+$\psi : V \to \Gamma(X, \mathcal{L})$ be a $k$-linear map
+such that $\mathcal{L}$ is generated as an $\mathcal{O}_X$-module
+by the sections in the image of $\psi$. Then the discussion above
+gives a canonical morphism
+$$
+\varphi_{\mathcal{L}, \psi} : X \longrightarrow \mathbf{P}(V)
+$$
+of schemes over $k$ such that there is an isomorphism
+$\theta : \varphi_{\mathcal{L}, \psi}^*\mathcal{O}_{\mathbf{P}(V)}(1)
+\to \mathcal{L}$ and such that $\psi$ agrees with the composition
+$$
+V \to
+\Gamma(\mathbf{P}(V), \mathcal{O}_{\mathbf{P}(V)}(1))
+\to
+\Gamma(X, \varphi_{\mathcal{L}, \psi}^*\mathcal{O}_{\mathbf{P}(V)}(1))
+\to
+\Gamma(X, \mathcal{L})
+$$
+See Lemma \ref{lemma-invertible-map-into-proj}. If
+$V \subset \Gamma(X, \mathcal{L})$ is a subspace, then we will
+denote the morphism constructed above simply as
+$\varphi_{\mathcal{L}, V}$.
+If $\dim(V) = n + 1$ and we choose a basis $v_0, \ldots, v_n$ of $V$
+then the diagram
+$$
+\xymatrix{
+X \ar@{=}[d] \ar[rr]_{\varphi_{\mathcal{L}, \psi}} & &
+\mathbf{P}(V) \ar[d]^{\cong} \\
+X \ar[rr]^{\varphi_{(\mathcal{L}, (s_0, \ldots, s_n))}} & &
+\mathbf{P}^n_k
+}
+$$
+is commutative, where $s_i = \psi(v_i) \in \Gamma(X, \mathcal{L})$, where
+$\varphi_{(\mathcal{L}, (s_0, \ldots, s_n))}$
+is as in Section \ref{section-projective-space},
+and where the right vertical arrow corresponds
+to the isomorphism $k[T_0, \ldots, T_n] \to \text{Sym}(V)$ sending
+$T_i$ to $v_i$.
+\end{example}
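+
+\medskip\noindent
+A classical special case of the morphisms $\varphi_{\mathcal{L}, V}$
+(included here as an illustration) is the Veronese embedding. Take
+$X = \mathbf{P}^1_k$, let $\mathcal{L} = \mathcal{O}_{\mathbf{P}^1_k}(2)$,
+and let $V = \Gamma(X, \mathcal{L})$, a $3$-dimensional vector space with
+basis $s_0^2, s_0s_1, s_1^2$ where $s_0, s_1$ are the coordinate sections.
+Since $\mathcal{L}$ is globally generated we obtain a morphism
+$$
+\varphi_{\mathcal{L}, V} :
+\mathbf{P}^1_k \longrightarrow \mathbf{P}(V) \cong \mathbf{P}^2_k
+$$
+given on $k$-valued points by
+$(a_0 : a_1) \mapsto (a_0^2 : a_0a_1 : a_1^2)$, a closed immersion
+whose image is the conic $T_0T_2 = T_1^2$.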
+
+\begin{example}
+\label{example-projective-bundle}
+The map $\text{Sym}^n(\mathcal{E}) \to
+\pi_*(\mathcal{O}_{\mathbf{P}(\mathcal{E})}(n))$
+is an isomorphism if $\mathcal{E}$ is locally free, but in general
+need not be an isomorphism. In fact we will give an example where
+this map is not injective for $n = 1$. Set $S = \Spec(A)$ with
+$$
+A = k[u, v, s_1, s_2, t_1, t_2]/I
+$$
+where $k$ is a field and
+$$
+I = (-us_1 + vt_1 + ut_2, vs_1 + us_2 - vt_2, vs_2, ut_1).
+$$
+Denote $\overline{u}$ the class of $u$ in $A$ and similarly for
+the other variables.
+Let $M = (Ax \oplus Ay)/A(\overline{u}x + \overline{v}y)$ so that
+$$
+\text{Sym}(M) = A[x, y]/(\overline{u}x + \overline{v}y)
+= k[x, y, u, v, s_1, s_2, t_1, t_2]/J
+$$
+where
+$$
+J = (-us_1 + vt_1 + ut_2, vs_1 + us_2 - vt_2, vs_2, ut_1, ux + vy).
+$$
+In this case the projective bundle associated to the quasi-coherent
+sheaf $\mathcal{E} = \widetilde{M}$ on $S = \Spec(A)$ is the scheme
+$$
+P =
+\text{Proj}(\text{Sym}(M)).
+$$
+Note that this scheme has an affine open covering
+$P = D_{+}(x) \cup D_{+}(y)$.
+Consider the element
+$m \in M$ which is the image of the element
+$us_1x + vt_2y$. Note that
+$$
+x(us_1x + vt_2y) = (s_1x + s_2y)(ux + vy) \bmod I
+$$
+and
+$$
+y(us_1x + vt_2y) = (t_1x + t_2y)(ux + vy) \bmod I.
+$$
+The first equation implies that $m$ maps to zero as a
+section of $\mathcal{O}_P(1)$ on $D_{+}(x)$ and the second
+that it maps to zero as a section of $\mathcal{O}_P(1)$ on $D_{+}(y)$.
+This shows that $m$ maps to zero in $\Gamma(P, \mathcal{O}_P(1))$.
+On the other hand we claim that $m \not = 0$, so that $m$ gives
+an example of a nonzero global section of $\mathcal{E}$ mapping to zero
+in $\Gamma(P, \mathcal{O}_P(1))$. Assume $m = 0$
+to get a contradiction. In this case there exists
+an element $f \in k[u, v, s_1, s_2, t_1, t_2]$ such that
+$$
+us_1x + vt_2y = f(ux + vy) \bmod I
+$$
+Since $I$ is generated by homogeneous polynomials of degree $2$ we
+may decompose $f$ into its homogeneous components and take the
+degree 1 component. In other words we may assume that
+$$
+f = au + bv + \alpha_1s_1 + \alpha_2s_2 + \beta_1t_1 + \beta_2t_2
+$$
+for some $a, b, \alpha_1, \alpha_2, \beta_1, \beta_2 \in k$.
+The resulting conditions are that
+$$
+\begin{matrix}
+us_1 - u(au + bv + \alpha_1s_1 + \alpha_2s_2 + \beta_1t_1 + \beta_2t_2)
+\in I \\
+vt_2 - v(au + bv + \alpha_1s_1 + \alpha_2s_2 + \beta_1t_1 + \beta_2t_2)
+\in I
+\end{matrix}
+$$
+There are no terms $u^2, uv, v^2$ in the generators of $I$ and
+hence we see $a = b = 0$. Thus we get the relations
+$$
+\begin{matrix}
+us_1 - u(\alpha_1s_1 + \alpha_2s_2 + \beta_1t_1 + \beta_2t_2)
+\in I \\
+vt_2 - v(\alpha_1s_1 + \alpha_2s_2 + \beta_1t_1 + \beta_2t_2)
+\in I
+\end{matrix}
+$$
+We may use the first generator of $I$ to replace any occurrence of
+$us_1$ by $vt_1 + ut_2$, the second generator of $I$ to replace any
+occurrence of $vs_1$ by $-us_2 + vt_2$, the third generator
+to remove occurrences of $vs_2$, and the fourth generator to remove
+occurrences of $ut_1$. Then we get the relations
+$$
+\begin{matrix}
+(1 - \alpha_1)vt_1 + (1 - \alpha_1)ut_2 - \alpha_2us_2 - \beta_2ut_2 = 0 \\
+(1 - \alpha_1)vt_2 + \alpha_1us_2 - \beta_1vt_1 - \beta_2vt_2 = 0
+\end{matrix}
+$$
+This implies that $\alpha_1$ should be both $0$ and $1$ which is
+a contradiction as desired.
+\end{example}
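The two displayed congruences modulo $I$, and the claim that $m \neq 0$, can be double-checked by computer algebra. The following SymPy sketch is an illustration, not part of the argument; it verifies ideal membership by reducing modulo a Gröbner basis, using the observation that (for an element linear in $x, y$) the condition $us_1x + vt_2y = f(ux + vy) \bmod I$ for some $f$ is equivalent to membership in the ideal $J = I + (ux + vy)$ of the polynomial ring.

```python
# Illustrative sanity check of the example (assumes SymPy's groebner/reduced API):
# verify the two congruences mod I, and that m is nonzero in M, i.e. that
# u*s1*x + v*t2*y does not lie in J = I + (u*x + v*y).
from sympy import symbols, groebner, reduced

x, y, u, v, s1, s2, t1, t2 = symbols('x y u v s1 s2 t1 t2')
gens = (x, y, u, v, s1, s2, t1, t2)

I_gens = [-u*s1 + v*t1 + u*t2, v*s1 + u*s2 - v*t2, v*s2, u*t1]
m = u*s1*x + v*t2*y  # a representative of the section m

# x*m and y*m are multiples of u*x + v*y modulo I (remainders vanish):
GI = groebner(I_gens, *gens)
r1 = reduced(x*m - (s1*x + s2*y)*(u*x + v*y), GI.exprs, *gens)[1]
r2 = reduced(y*m - (t1*x + t2*y)*(u*x + v*y), GI.exprs, *gens)[1]

# ...but m itself is not in J, so m is nonzero in M (remainder nonzero):
GJ = groebner(I_gens + [u*x + v*y], *gens)
r3 = reduced(m, GJ.exprs, *gens)[1]

print(r1 == 0, r2 == 0, r3 != 0)
```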
+
+
+\begin{lemma}
+\label{lemma-projective-bundle-separated}
+Let $S$ be a scheme.
+The structure morphism $\mathbf{P}(\mathcal{E}) \to S$ of a
+projective bundle over $S$ is separated.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-relative-proj-separated}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projective-space-bundle}
+Let $S$ be a scheme. Let $n \geq 0$. Then
+$\mathbf{P}^n_S$ is a projective bundle over $S$.
+\end{lemma}
+
+\begin{proof}
+Note that
+$$
+\mathbf{P}^n_{\mathbf{Z}} =
+\text{Proj}(\mathbf{Z}[T_0, \ldots, T_n]) =
+\underline{\text{Proj}}_{\Spec(\mathbf{Z})}
+\left(\widetilde{\mathbf{Z}[T_0, \ldots, T_n]}\right)
+$$
+where the grading on the ring $\mathbf{Z}[T_0, \ldots, T_n]$ is given by
+$\deg(T_i) = 1$ and the elements of $\mathbf{Z}$ are in degree $0$.
+Recall that $\mathbf{P}^n_S$ is defined as
+$\mathbf{P}^n_{\mathbf{Z}} \times_{\Spec(\mathbf{Z})} S$.
+Moreover, forming the relative homogeneous spectrum commutes with base change,
+see Lemma \ref{lemma-relative-proj-base-change}.
+For any scheme $g : S \to \Spec(\mathbf{Z})$ we have
+$g^*\mathcal{O}_{\Spec(\mathbf{Z})}[T_0, \ldots, T_n]
+= \mathcal{O}_S[T_0, \ldots, T_n]$.
+Combining the above we see that
+$$
+\mathbf{P}^n_S = \underline{\text{Proj}}_S(\mathcal{O}_S[T_0, \ldots, T_n]).
+$$
+Finally, note that
+$\mathcal{O}_S[T_0, \ldots, T_n] = \text{Sym}(\mathcal{O}_S^{\oplus n + 1})$.
+Hence we see that $\mathbf{P}^n_S$ is a projective bundle over $S$.
+\end{proof}
+
+
+
+\section{Grassmannians}
+\label{section-grassmannian}
+
+\noindent
+In this section we introduce the standard Grassmannian functors and
+we show that they are represented by schemes. Pick integers $k$, $n$
+with $0 < k < n$. We will construct a functor
+\begin{equation}
+\label{equation-gkn}
+G(k, n) : \Sch \longrightarrow \textit{Sets}
+\end{equation}
+which will loosely speaking parametrize $k$-dimensional subspaces
+of $n$-space. However, for technical reasons it is more convenient
+to parametrize $(n - k)$-dimensional quotients and this is what we will
+do.
+
+\medskip\noindent
+More precisely, $G(k, n)$ associates to a scheme $S$ the set $G(k, n)(S)$
+of isomorphism classes of surjections
+$$
+q : \mathcal{O}_S^{\oplus n} \longrightarrow \mathcal{Q}
+$$
+where $\mathcal{Q}$ is a finite locally free $\mathcal{O}_S$-module
+of rank $n - k$. Note that this is indeed a set, for example by
+Modules, Lemma \ref{modules-lemma-set-isomorphism-classes-finite-type-modules}
+or by the observation that the isomorphism class of the surjection $q$
+is determined by the kernel of $q$ (and given a sheaf there is a set
+of subsheaves). Given a morphism of schemes $f : T \to S$ we let
+$G(k, n)(f) : G(k, n)(S) \to G(k, n)(T)$ be the map which sends the
+isomorphism class of $q : \mathcal{O}_S^{\oplus n} \longrightarrow \mathcal{Q}$
+to the isomorphism class of
+$f^*q : \mathcal{O}_T^{\oplus n} \longrightarrow f^*\mathcal{Q}$.
+This makes sense since (1) $f^*\mathcal{O}_S = \mathcal{O}_T$,
+(2) $f^*$ is additive, (3) $f^*$ preserves locally free modules
+(Modules, Lemma \ref{modules-lemma-pullback-locally-free}),
+and (4) $f^*$ is right exact
+(Modules, Lemma \ref{modules-lemma-exactness-pushforward-pullback}).
+
+\begin{lemma}
+\label{lemma-gkn-representable}
+Let $0 < k < n$.
+The functor $G(k, n)$ of (\ref{equation-gkn}) is representable by a scheme.
+\end{lemma}
+
+\begin{proof}
+Set $F = G(k, n)$. To prove the lemma we will use the criterion of
+Schemes, Lemma \ref{schemes-lemma-glue-functors}.
+The reason $F$ satisfies the sheaf property for the
+Zariski topology is that we can glue sheaves, see Sheaves,
+Section \ref{sheaves-section-glueing-sheaves} (some details omitted).
+
+\medskip\noindent
+The family of subfunctors $F_i$.
+Let $I$ be the set of subsets of $\{1, \ldots, n\}$ of cardinality $n - k$.
+Given a scheme $S$ and $j \in \{1, \ldots, n\}$ we denote $e_j$
+the global section
+$$
+e_j = (0, \ldots, 0, 1, 0, \ldots, 0)\quad(1\text{ in }j\text{th spot})
+$$
+of $\mathcal{O}_S^{\oplus n}$. Of course these sections freely generate
+$\mathcal{O}_S^{\oplus n}$. Similarly, for $j \in \{1, \ldots, n - k\}$
+we denote $f_j$ the global section of $\mathcal{O}_S^{\oplus n - k}$
+which is zero in all summands except the $j$th where we put a $1$.
+For $i \in I$ we let
+$$
+s_i : \mathcal{O}_S^{\oplus n - k} \longrightarrow \mathcal{O}_S^{\oplus n}
+$$
+be the direct sum of the coprojections
+$\mathcal{O}_S \to \mathcal{O}_S^{\oplus n}$ corresponding to elements of $i$.
+More precisely, if $i = \{i_1, \ldots, i_{n - k}\}$ with
+$i_1 < i_2 < \ldots < i_{n - k}$
+then $s_i$ maps $f_j$ to $e_{i_j}$ for $j \in \{1, \ldots, n - k\}$.
+With this notation we can set
+$$
+F_i(S) = \{q : \mathcal{O}_S^{\oplus n} \to \mathcal{Q} \in F(S) \mid
+q \circ s_i \text{ is surjective}\}
+\subset F(S)
+$$
+Given a morphism $f : T \to S$ of schemes the pullback $f^*s_i$
+is the corresponding map over $T$. Since $f^*$ is right exact
+(Modules, Lemma \ref{modules-lemma-exactness-pushforward-pullback})
+we conclude that $F_i$ is a subfunctor of $F$.
+
+\medskip\noindent
+Representability of $F_i$. To prove this we may assume (after renumbering)
+that $i = \{1, \ldots, n - k\}$. This means $s_i$ is the inclusion of
+the first $n - k$ summands. Observe that if $q \circ s_i$ is surjective,
+then $q \circ s_i$ is an isomorphism as a surjective map between finite
+locally free modules of the same rank
+(Modules, Lemma \ref{modules-lemma-map-finite-locally-free}).
+Thus if $q : \mathcal{O}_S^{\oplus n} \to \mathcal{Q}$ is an element of
+$F_i(S)$, then we can use $q \circ s_i$ to identify $\mathcal{Q}$ with
+$\mathcal{O}_S^{\oplus n - k}$. After doing so we obtain
+$$
+q : \mathcal{O}_S^{\oplus n} \longrightarrow \mathcal{O}_S^{\oplus n - k}
+$$
+mapping $e_j$ to $f_j$ (notation as above) for $j = 1, \ldots, n - k$.
+To determine $q$ completely we have to fix the images
+$q(e_{n - k + 1}), \ldots, q(e_n)$ in
+$\Gamma(S, \mathcal{O}_S^{\oplus n - k})$.
+It follows that $F_i$ is isomorphic to the functor
+$$
+S \longmapsto
+\prod\nolimits_{j = n - k + 1, \ldots, n}
+\Gamma(S, \mathcal{O}_S^{\oplus n - k})
+$$
+This functor is isomorphic to the $k(n - k)$-fold self product of the functor
+$S \mapsto \Gamma(S, \mathcal{O}_S)$. By
+Schemes, Example \ref{schemes-example-global-sections}
+the latter is representable by $\mathbf{A}^1_\mathbf{Z}$. It follows $F_i$
+is representable by $\mathbf{A}^{k(n - k)}_\mathbf{Z}$ since fibred product
+over $\Spec(\mathbf{Z})$ is the product in the category of schemes.
+
+\medskip\noindent
+The inclusion $F_i \subset F$ is representable by open immersions.
+Let $S$ be a scheme and let
+$q : \mathcal{O}_S^{\oplus n} \to \mathcal{Q}$ be an element of
+$F(S)$. By
+Modules, Lemma \ref{modules-lemma-finite-type-surjective-on-stalk}
+the set $U_i = \{s \in S \mid (q \circ s_i)_s\text{ surjective}\}$
+is open in $S$. Since $\mathcal{O}_{S, s}$ is a local ring
+and $\mathcal{Q}_s$ a finite $\mathcal{O}_{S, s}$-module
+by Nakayama's lemma (Algebra, Lemma \ref{algebra-lemma-NAK}) we have
+$$
+s \in U_i \Leftrightarrow
+\left(
+\text{the map }
+\kappa(s)^{\oplus n - k} \to \mathcal{Q}_s/\mathfrak m_s\mathcal{Q}_s
+\text{ induced by }
+(q \circ s_i)_s
+\text{ is surjective}
+\right)
+$$
+Let $f : T \to S$ be a morphism of schemes and let $t \in T$ be a point
+mapping to $s \in S$. We have
+$(f^*\mathcal{Q})_t =
+\mathcal{Q}_s \otimes_{\mathcal{O}_{S, s}} \mathcal{O}_{T, t}$
+(Sheaves, Lemma \ref{sheaves-lemma-stalk-pullback-modules})
+and so on. Thus the map
+$$
+\kappa(t)^{\oplus n - k} \to (f^*\mathcal{Q})_t/\mathfrak m_t(f^*\mathcal{Q})_t
+$$
+induced by $(f^*q \circ f^*s_i)_t$ is the base change of the map
+$\kappa(s)^{\oplus n - k} \to \mathcal{Q}_s/\mathfrak m_s\mathcal{Q}_s$
+above by the field extension $\kappa(t)/\kappa(s)$. It follows
+that $s \in U_i$ if and only if $t$ is in the corresponding open
+for $f^*q$. In particular $T \to S$ factors through $U_i$ if
+and only if $f^*q \in F_i(T)$ as desired.
+
+\medskip\noindent
+The collection $F_i$, $i \in I$ covers $F$. Let
+$q : \mathcal{O}_S^{\oplus n} \to \mathcal{Q}$ be an element of
+$F(S)$. We have to show that for every point $s$ of $S$ there exists
+an $i \in I$ such that $q \circ s_i$ is surjective in an open neighbourhood of $s$.
+Thus we have to show that one of the compositions
+$$
+\kappa(s)^{\oplus n - k} \xrightarrow{s_i}
+\kappa(s)^{\oplus n} \rightarrow
+\mathcal{Q}_s/\mathfrak m_s\mathcal{Q}_s
+$$
+is surjective (see the previous paragraph). Since the images of
+$e_1, \ldots, e_n$ generate the vector space
+$\mathcal{Q}_s/\mathfrak m_s\mathcal{Q}_s$, which has dimension $n - k$,
+some subset of $n - k$ of these images forms a basis, and the
+corresponding $i \in I$ works.
+\end{proof}
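The normalization step in the second paragraph of the proof can be illustrated concretely for field-valued points. The following SymPy sketch uses hypothetical example data (the names are ours): with $n = 3$, $k = 1$, $i = \{1, 2\}$ and a surjection $q$ over $\mathbf{Q}$ such that $q \circ s_i$ is invertible, it normalizes $q$ so that the columns in $i$ become the identity, exhibiting the remaining $k(n - k) = 2$ entries as free affine coordinates.

```python
# Illustrative example: a point of F_i for k = 1, n = 3, i = {1, 2}.
from sympy import Matrix, eye

# A surjection q : Q^3 -> Q^2; the condition defining F_i is that the
# submatrix on the columns in i (here the first two) is invertible.
q = Matrix([[2, 1, 5],
            [0, 3, 7]])
i_cols = [0, 1]                 # the subset i, 0-based
A = q[:, i_cols]                # the matrix of q o s_i
assert A.det() != 0             # q o s_i is an isomorphism

# Use (q o s_i)^{-1} to identify the quotient with O^{n-k}; afterwards the
# columns in i form the identity and the remaining entries are free.
q_normalized = A.inv() * q
assert q_normalized[:, i_cols] == eye(2)
coords = q_normalized[:, 2]     # the k(n - k) = 2 affine coordinates
print(coords.T)
```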
+
+\begin{definition}
+\label{definition-grassmannian}
+Let $0 < k < n$. The scheme $\mathbf{G}(k, n)$ representing the functor
+$G(k, n)$ is called {\it Grassmannian over $\mathbf{Z}$}.
+Its base change $\mathbf{G}(k, n)_S$ to a scheme $S$ is called
+{\it Grassmannian over $S$}. If $R$ is a ring the base change
+to $\Spec(R)$ is denoted $\mathbf{G}(k, n)_R$ and called
+{\it Grassmannian over $R$}.
+\end{definition}
+
+\noindent
+The definition makes sense as we've shown in
+Lemma \ref{lemma-gkn-representable}
+that these functors are indeed representable.
+
+\begin{lemma}
+\label{lemma-projective-space-grassmannian}
+Let $n \geq 1$. There is a canonical isomorphism
+$\mathbf{G}(n, n + 1) = \mathbf{P}^n_\mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+According to Lemma \ref{lemma-projective-space} the scheme
+$\mathbf{P}^n_\mathbf{Z}$ represents the functor
+which assigns to a scheme $S$ the set of isomorphism classes
+of pairs $(\mathcal{L}, (s_0, \ldots, s_n))$ consisting of
+an invertible module $\mathcal{L}$ and an $(n + 1)$-tuple
+of global sections generating $\mathcal{L}$.
+Given such a pair we obtain a quotient
+$$
+\mathcal{O}_S^{\oplus n + 1} \longrightarrow \mathcal{L},\quad
+(h_0, \ldots, h_n) \longmapsto \sum h_i s_i.
+$$
+Conversely, given an element
+$q : \mathcal{O}_S^{\oplus n + 1} \to \mathcal{Q}$ of $G(n, n + 1)(S)$
+we obtain such a pair, namely $(\mathcal{Q}, (q(e_1), \ldots, q(e_{n + 1})))$.
+Here $e_i$, $i = 1, \ldots, n + 1$ are the standard generating sections
+of the free module $\mathcal{O}_S^{\oplus n + 1}$.
+We omit the verification that these constructions define mutually
+inverse transformations of functors.
+\end{proof}
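The lemma can be sanity-checked at the level of points over a prime field: a rank-$1$ quotient of $\mathbf{F}_p^{\oplus n + 1}$ is determined, up to isomorphism of the target, by its kernel, i.e., by a nonzero linear functional up to scalar, and the number of these matches $|\mathbf{P}^n(\mathbf{F}_p)| = (p^{n + 1} - 1)/(p - 1)$. A small illustrative Python sketch (the helper name is ours):

```python
# Illustrative check: rank-1 quotients of F_p^(n+1) up to isomorphism are
# counted by nonzero functionals up to scalar, i.e. by points of P^n(F_p).
from itertools import product

def count_rank_one_quotients(p, n):
    """Count surjections F_p^(n+1) -> F_p up to isomorphism of the target
    (p must be prime, so that pow(c, p - 2, p) computes inverses in F_p)."""
    nonzero = [v for v in product(range(p), repeat=n + 1) if any(v)]
    def normalize(v):
        # scale by the inverse of the first nonzero entry; the kernel,
        # hence the isomorphism class of the quotient, only sees this
        c = next(a for a in v if a)
        cinv = pow(c, p - 2, p)
        return tuple((cinv * a) % p for a in v)
    return len({normalize(v) for v in nonzero})

for p, n in [(2, 2), (3, 1), (5, 2)]:
    assert count_rank_one_quotients(p, n) == (p**(n + 1) - 1) // (p - 1)
print('ok')
```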
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/conventions.tex b/books/stacks/conventions.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1a90697441c70dec5691463993b68859329ac015
--- /dev/null
+++ b/books/stacks/conventions.tex
@@ -0,0 +1,71 @@
+\input{preamble}
+
+% OK, start here
+%
+\begin{document}
+
+\title{Conventions}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Comments}
+\label{section-comments}
+
+\noindent
+The philosophy behind the conventions used in writing these documents is
+to choose those conventions that work.
+
+\section{Set theory}
+\label{section-sets}
+
+\noindent
+We use Zermelo-Fraenkel set theory with the axiom of choice.
+See \cite{Kunen}. We do not use
+universes (in contrast to SGA4). We do not stress set-theoretic issues,
+but we make sure everything is correct (of course) and so we do not ignore
+them either.
+
+
+\section{Categories}
+\label{section-categories}
+
+\noindent
+A category $\mathcal{C}$ consists of a set of objects and, for each pair
+of objects,
+a set of morphisms between them. In other words, it is what is called
+a ``small'' category in other texts. We will use ``big'' categories
+(categories whose objects form a proper class)
+as well, but only those that are listed in Categories,
+Remark \ref{categories-remark-big-categories}.
+
+\section{Algebra}
+\label{section-algebra}
+
+\noindent
+In these notes a ring is a commutative ring with a $1$. Hence the
+category of rings has an initial object $\mathbf{Z}$ and a final
+object $\{0\}$ (this is the unique ring where $1 = 0$). Modules are
+assumed unitary. See \cite{Eisenbud}.
+
+\section{Notation}
+\label{section-notation}
+
+\noindent
+The natural integers are elements of $\mathbf{N} = \{1, 2, 3, \ldots\}$.
+The integers are elements of $\mathbf{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$.
+The field of rational numbers is denoted $\mathbf{Q}$.
+The field of real numbers is denoted $\mathbf{R}$.
+The field of complex numbers is denoted $\mathbf{C}$.
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/cotangent.tex b/books/stacks/cotangent.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f5252baf15a290b3d527d5c19ee90e128cfa5571
--- /dev/null
+++ b/books/stacks/cotangent.tex
@@ -0,0 +1,4648 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{The Cotangent Complex}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+The goal of this chapter is to construct the cotangent complex of a
+ring map, of a morphism of schemes, and of a morphism of algebraic spaces.
+Some references are the notes \cite{quillenhomology}, the paper
+\cite{quillencohomology}, and the books
+\cite{Andre} and \cite{cotangent}.
+
+
+
+
+\section{Advice for the reader}
+\label{section-advice-reader}
+
+\noindent
+In writing this chapter we have tried to minimize
+the use of simplicial techniques. We view the choice of a {\it resolution}
+$P_\bullet$ of a ring $B$ over a ring $A$ as a tool for calculating the
+{\it homology} of abelian sheaves on the category $\mathcal{C}_{B/A}$, see
+Remark \ref{remark-resolution}. This is similar to the role played
+by a ``good cover'' to compute cohomology using the {\v C}ech complex.
+To read a bit on homology on categories, please visit
+Cohomology on Sites, Section \ref{sites-cohomology-section-homology}.
+The derived lower shriek functor $L\pi_!$ is to homology what
+$R\Gamma(\mathcal{C}_{B/A}, -)$ is to cohomology. The category
+$\mathcal{C}_{B/A}$, studied in Section \ref{section-compute-L-pi-shriek},
+is the opposite of the category of factorizations $A \to P \to B$ where $P$
+is a polynomial algebra over $A$. This category comes with maps of sheaves
+of rings
+$$
+\underline{A} \longrightarrow \mathcal{O} \longrightarrow \underline{B}
+$$
+where over the object $U = (P \to B)$ we have $\mathcal{O}(U) = P$.
+It turns out that we obtain the cotangent complex of $B$ over $A$ as
+$$
+L_{B/A} =
+L\pi_!(\Omega_{\mathcal{O}/\underline{A}} \otimes_\mathcal{O} \underline{B})
+$$
+see Lemma \ref{lemma-compute-cotangent-complex}. We have consistently tried
+to use this point of view to prove the basic properties of cotangent
+complexes of ring maps. In particular, all of the results can be proven
+without relying on the existence of standard resolutions, although we have
+not done so. The theory is quite satisfactory, except that
+perhaps the proof of the fundamental triangle
+(Proposition \ref{proposition-triangle}) uses just a little
+bit more theory on derived lower shriek functors.
+To provide the reader with an alternative,
+we give a rather complete sketch of an approach to this result
+based on simple properties of standard resolutions in
+Remarks \ref{remark-triangle} and \ref{remark-explicit-map}.
+
+\medskip\noindent
+Our approach to the cotangent complex for morphisms of ringed topoi,
+morphisms of schemes, morphisms of algebraic spaces, etc
+is to deduce as much as possible from the case of ``plain ring maps''
+discussed above.
+
+
+
+
+
+\section{The cotangent complex of a ring map}
+\label{section-cotangent-ring-map}
+
+\noindent
+Let $A$ be a ring. Let $\textit{Alg}_A$ be the category of $A$-algebras.
+Consider the pair of adjoint functors $(U, V)$ where
+$V : \textit{Alg}_A \to \textit{Sets}$ is the forgetful functor and
+$U : \textit{Sets} \to \textit{Alg}_A$ assigns to a set $E$ the polynomial
+algebra $A[E]$ on $E$ over $A$. Let $X_\bullet$ be the simplicial object of
+$\text{Fun}(\textit{Alg}_A, \textit{Alg}_A)$ constructed in
+Simplicial, Section \ref{simplicial-section-standard}.
+
+\medskip\noindent
+Consider an $A$-algebra $B$. Denote $P_\bullet = X_\bullet(B)$ the resulting
+simplicial $A$-algebra. Recall that $P_0 = A[B]$, $P_1 = A[A[B]]$, and so on.
+In particular each term $P_n$ is a polynomial $A$-algebra.
+Recall also that there is an augmentation
+$$
+\epsilon : P_\bullet \longrightarrow B
+$$
+where we view $B$ as a constant simplicial $A$-algebra.
+
+\begin{definition}
+\label{definition-standard-resolution}
+Let $A \to B$ be a ring map. The {\it standard resolution of $B$ over $A$}
+is the augmentation $\epsilon : P_\bullet \to B$ with terms
+$$
+P_0 = A[B],\quad P_1 = A[A[B]],\quad \ldots
+$$
+and maps as constructed above.
+\end{definition}
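+
+\noindent
+Concretely, in low degrees and up to the ordering conventions for the
+face maps of Simplicial, Section \ref{simplicial-section-standard},
+the maps are given on generators by
+$$
+d_0 : A[A[B]] \to A[B],\ [p] \mapsto p,\quad
+d_1 : A[A[B]] \to A[B],\ [p] \mapsto [\epsilon_0(p)],\quad
+s_0 : A[B] \to A[A[B]],\ [b] \mapsto [[b]]
+$$
+where $[e]$ denotes the polynomial generator corresponding to an element
+$e$ of the given set, and $\epsilon_0 : A[B] \to B$, $[b] \mapsto b$, is
+the augmentation in degree $0$.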
+
+\noindent
+It will turn out that we can use the standard resolution
+to compute left derived functors in certain settings.
+
+\begin{definition}
+\label{definition-cotangent-complex-ring-map}
+The {\it cotangent complex} $L_{B/A}$ of a ring map $A \to B$
+is the complex of $B$-modules associated to the simplicial $B$-module
+$$
+\Omega_{P_\bullet/A} \otimes_{P_\bullet, \epsilon} B
+$$
+where $\epsilon : P_\bullet \to B$ is the standard resolution
+of $B$ over $A$.
+\end{definition}
+
+\noindent
+In Simplicial, Section \ref{simplicial-section-complexes} we associate a
+chain complex to a simplicial module, but here we work with cochain complexes.
+Thus the term $L_{B/A}^{-n}$ in degree $-n$ is the $B$-module
+$\Omega_{P_n/A} \otimes_{P_n, \epsilon_n} B$ and $L_{B/A}^m = 0$
+for $m > 0$.
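+Spelled out, with the differential in each degree given by the
+alternating sum of the maps induced by the face maps of $P_\bullet$,
+the cotangent complex is the complex
+$$
+\ldots \to
+\Omega_{P_2/A} \otimes_{P_2, \epsilon_2} B \to
+\Omega_{P_1/A} \otimes_{P_1, \epsilon_1} B \xrightarrow{d_0 - d_1}
+\Omega_{P_0/A} \otimes_{P_0, \epsilon_0} B \to 0
+$$
+sitting in cohomological degrees $\ldots, -2, -1, 0$.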
+
+\begin{remark}
+\label{remark-variant-cotangent-complex}
+Let $A \to B$ be a ring map. Let $\mathcal{A}$ be the category of
+arrows $\psi : C \to B$ of $A$-algebras and let $\mathcal{S}$ be
+the category of maps $E \to B$ where $E$ is a set. There are adjoint
+functors $V : \mathcal{A} \to \mathcal{S}$ (the forgetful functor)
+and $U : \mathcal{S} \to \mathcal{A}$ which sends $E \to B$ to
+$A[E] \to B$. Let $X_\bullet$ be the simplicial object of
+$\text{Fun}(\mathcal{A}, \mathcal{A})$ constructed in
+Simplicial, Section \ref{simplicial-section-standard}.
+The diagram
+$$
+\xymatrix{
+\mathcal{A} \ar[d] \ar[r] & \mathcal{S} \ar@<1ex>[l] \ar[d] \\
+\textit{Alg}_A \ar[r] & \textit{Sets} \ar@<1ex>[l]
+}
+$$
+commutes. It follows that $X_\bullet(\text{id}_B : B \to B)$
+is equal to the standard resolution of $B$ over $A$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-colimit-cotangent-complex}
+Let $A_i \to B_i$ be a system of ring maps over a directed index
+set $I$. Then $\colim L_{B_i/A_i} = L_{\colim B_i/\colim A_i}$.
+\end{lemma}
+
+\begin{proof}
+This is true because the forgetful functor
+$V : A\textit{-Alg} \to \textit{Sets}$ and its adjoint
+$U : \textit{Sets} \to A\textit{-Alg}$ commute with filtered colimits.
+Moreover, the functor $B/A \mapsto \Omega_{B/A}$ does as well
+(Algebra, Lemma \ref{algebra-lemma-colimit-differentials}).
+\end{proof}
+
+
+
+
+
+\section{Simplicial resolutions and derived lower shriek}
+\label{section-compute-L-pi-shriek}
+
+\noindent
+Let $A \to B$ be a ring map. Consider the category whose objects are
+$A$-algebra maps $\alpha : P \to B$ where $P$ is a polynomial algebra over $A$
+(in some set\footnote{It suffices to consider sets of cardinality
+at most the cardinality of $B$.} of variables) and whose
+morphisms $s : (\alpha : P \to B) \to (\alpha' : P' \to B)$ are
+$A$-algebra homomorphisms $s : P \to P'$ with $\alpha' \circ s = \alpha$.
+Let $\mathcal{C} = \mathcal{C}_{B/A}$ denote the {\bf opposite}
+of this category. The reason for
+taking the opposite is that we want to think of objects
+$(P, \alpha)$ as corresponding to the diagram of affine schemes
+$$
+\xymatrix{
+\Spec(B) \ar[d] \ar[r] & \Spec(P) \ar[ld] \\
+\Spec(A)
+}
+$$
+We endow $\mathcal{C}$ with the chaotic topology
+(Sites, Example \ref{sites-example-indiscrete}), i.e., we endow
+$\mathcal{C}$ with the structure of a site where coverings are given by
+identities so that all presheaves are sheaves.
+Moreover, we endow $\mathcal{C}$ with two sheaves of rings. The first
+is the sheaf $\mathcal{O}$ which sends the object $(P, \alpha)$ to $P$.
+The second is the constant sheaf $B$, which we will denote
+$\underline{B}$. We obtain the following diagram of morphisms of
+ringed topoi
+\begin{equation}
+\label{equation-pi}
+\vcenter{
+\xymatrix{
+(\Sh(\mathcal{C}), \underline{B}) \ar[r]_i \ar[d]_\pi &
+(\Sh(\mathcal{C}), \mathcal{O}) \\
+(\Sh(*), B)
+}
+}
+\end{equation}
+The morphism $i$ is the identity on underlying topoi and
+$i^\sharp : \mathcal{O} \to \underline{B}$ is the obvious map.
+The map $\pi$ is as in Cohomology on Sites, Example
+\ref{sites-cohomology-example-category-to-point}.
+An important role will be played in the following
+by the derived functors
+$
+Li^* : D(\mathcal{O}) \longrightarrow D(\underline{B})
+$
+left adjoint to $Ri_* = i_* : D(\underline{B}) \to D(\mathcal{O})$ and
+$
+L\pi_! : D(\underline{B}) \longrightarrow D(B)
+$
+left adjoint to $\pi^* = \pi^{-1} : D(B) \to D(\underline{B})$.
+
+\begin{lemma}
+\label{lemma-identify-pi-shriek}
+With notation as above let $P_\bullet$ be a simplicial $A$-algebra
+endowed with an augmentation $\epsilon : P_\bullet \to B$.
+Assume each $P_n$ is a polynomial algebra over $A$ and $\epsilon$
+is a trivial Kan fibration on underlying simplicial sets. Then
+$$
+L\pi_!(\mathcal{F}) = \mathcal{F}(P_\bullet, \epsilon)
+$$
+in $D(\textit{Ab})$, resp.\ $D(B)$ functorially in $\mathcal{F}$ in
+$\textit{Ab}(\mathcal{C})$, resp.\ $\textit{Mod}(\underline{B})$.
+\end{lemma}
+
+\begin{proof}
+We will use the criterion of Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-by-cosimplicial-resolution} to prove this.
+Given an object $U = (Q, \beta)$ of $\mathcal{C}$ we have to show that
+$$
+S_\bullet = \Mor_\mathcal{C}((Q, \beta), (P_\bullet, \epsilon))
+$$
+is homotopy equivalent to a singleton.
+Write $Q = A[E]$ for some set $E$ (this is possible by our choice of
+the category $\mathcal{C}$). We see that
+$$
+S_\bullet = \Mor_{\textit{Sets}}((E, \beta|_E), (P_\bullet, \epsilon))
+$$
+Let $*$ be the constant simplicial set on a singleton. For $b \in B$
+let $F_{b, \bullet}$ be the simplicial set defined by the cartesian
+diagram
+$$
+\xymatrix{
+F_{b, \bullet} \ar[r] \ar[d] & P_\bullet \ar[d]_\epsilon \\
+{*} \ar[r]^b & B
+}
+$$
+With this notation $S_\bullet = \prod_{e \in E} F_{\beta(e), \bullet}$.
+Since we assumed $\epsilon$ is a trivial Kan fibration we see that
+$F_{b, \bullet} \to *$ is a trivial Kan fibration
+(Simplicial, Lemma \ref{simplicial-lemma-trivial-kan-base-change}).
+Thus $S_\bullet \to *$ is a trivial Kan fibration
+(Simplicial, Lemma \ref{simplicial-lemma-product-trivial-kan}).
+Therefore $S_\bullet$ is homotopy equivalent to $*$
+(Simplicial, Lemma \ref{simplicial-lemma-trivial-kan-homotopy}).
+\end{proof}
+
+\noindent
+In particular, we can use the standard resolution of $B$ over $A$
+to compute derived lower shriek.
+
+\begin{lemma}
+\label{lemma-pi-shriek-standard}
+Let $A \to B$ be a ring map. Let $\epsilon : P_\bullet \to B$
+be the standard resolution of $B$ over $A$. Let $\pi$ be as in
+(\ref{equation-pi}). Then
+$$
+L\pi_!(\mathcal{F}) = \mathcal{F}(P_\bullet, \epsilon)
+$$
+in $D(\textit{Ab})$, resp.\ $D(B)$ functorially in $\mathcal{F}$ in
+$\textit{Ab}(\mathcal{C})$, resp.\ $\textit{Mod}(\underline{B})$.
+\end{lemma}
+
+\begin{proof}[First proof]
+We will apply Lemma \ref{lemma-identify-pi-shriek}.
+Since the terms $P_n$ are polynomial algebras we see the first
+assumption of that lemma is satisfied. The second assumption is proved
+as follows. By
+Simplicial, Lemma \ref{simplicial-lemma-standard-simplicial-homotopy}
+the map $\epsilon$ is a homotopy equivalence of underlying
+simplicial sets. By
+Simplicial, Lemma \ref{simplicial-lemma-homotopy-equivalence}
+this implies $\epsilon$ induces a quasi-isomorphism of associated
+complexes of abelian groups. By
+Simplicial, Lemma \ref{simplicial-lemma-qis-simplicial-abelian-groups}
+this implies that $\epsilon$ is a trivial Kan fibration of underlying
+simplicial sets.
+\end{proof}
+
+\begin{proof}[Second proof]
+We will use the criterion of Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-by-cosimplicial-resolution}.
+Let $U = (Q, \beta)$ be an object of $\mathcal{C}$.
+We have to show that
+$$
+S_\bullet = \Mor_\mathcal{C}((Q, \beta), (P_\bullet, \epsilon))
+$$
+is homotopy equivalent to a singleton. Write $Q = A[E]$ for some set $E$
+(this is possible by our choice of the category $\mathcal{C}$). Using the
+notation of Remark \ref{remark-variant-cotangent-complex} we see that
+$$
+S_\bullet = \Mor_\mathcal{S}((E \to B), i(P_\bullet \to B))
+$$
+By Simplicial, Lemma \ref{simplicial-lemma-standard-simplicial-homotopy}
+the map $i(P_\bullet \to B) \to i(B \to B)$ is a homotopy equivalence
+in $\mathcal{S}$. Hence $S_\bullet$ is homotopy equivalent to
+$$
+\Mor_\mathcal{S}((E \to B), (B \to B)) = \{*\}
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-cotangent-complex}
+Let $A \to B$ be a ring map. Let $\pi$ and $i$ be as in (\ref{equation-pi}).
+There is a canonical isomorphism
+$$
+L_{B/A} = L\pi_!(Li^*\Omega_{\mathcal{O}/A}) =
+L\pi_!(i^*\Omega_{\mathcal{O}/A}) =
+L\pi_!(\Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B})
+$$
+in $D(B)$.
+\end{lemma}
+
+\begin{proof}
+For an object $\alpha : P \to B$ of the category $\mathcal{C}$
+the module $\Omega_{P/A}$ is a free $P$-module. Thus
+$\Omega_{\mathcal{O}/A}$ is a flat $\mathcal{O}$-module. Hence
+$Li^*\Omega_{\mathcal{O}/A} = i^*\Omega_{\mathcal{O}/A}$ is the sheaf
+of $\underline{B}$-modules which associates to $\alpha : P \to B$ the
+$B$-module $\Omega_{P/A} \otimes_{P, \alpha} B$.
+By Lemma \ref{lemma-pi-shriek-standard}
+we see that the right hand side is computed by
+the value of this sheaf on the standard resolution which is our
+definition of the left hand side
+(Definition \ref{definition-cotangent-complex-ring-map}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pi-lower-shriek-constant-sheaf}
+If $A \to B$ is a ring map, then $L\pi_!(\pi^{-1}M) = M$
+with $\pi$ as in (\ref{equation-pi}).
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-identify-pi-shriek} which tells us
+$L\pi_!(\pi^{-1}M)$ is computed by $(\pi^{-1}M)(P_\bullet, \epsilon)$
+which is the constant simplicial object on $M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-identify-H0}
+If $A \to B$ is a ring map, then $H^0(L_{B/A}) = \Omega_{B/A}$.
+\end{lemma}
+
+\begin{proof}
+We will prove this by a direct calculation.
+We will use the identification of Lemma \ref{lemma-compute-cotangent-complex}.
+There is clearly a map from $\Omega_{\mathcal{O}/A} \otimes \underline{B}$
+to the constant sheaf with value $\Omega_{B/A}$. Thus this map induces
+a map
+$$
+H^0(L_{B/A}) = H^0(L\pi_!(\Omega_{\mathcal{O}/A} \otimes \underline{B}))
+= \pi_!(\Omega_{\mathcal{O}/A} \otimes \underline{B}) \to \Omega_{B/A}
+$$
+By choosing an object $P \to B$ of $\mathcal{C}_{B/A}$ with $P \to B$
+surjective we see that this map is surjective (by
+Algebra, Lemma \ref{algebra-lemma-differential-surjective}).
+To show that it is injective, suppose that $P \to B$ is an object
+of $\mathcal{C}_{B/A}$ and that $\xi \in \Omega_{P/A} \otimes_P B$
+is an element which maps to zero in $\Omega_{B/A}$.
+We first choose a factorization $P \to P' \to B$ such that $P' \to B$
+is surjective and $P'$ is a polynomial algebra over $A$.
+We may replace $P$ by $P'$. If $B = P/I$, then the kernel
+$\Omega_{P/A} \otimes_P B \to \Omega_{B/A}$ is the image of
+$I/I^2$ (Algebra, Lemma \ref{algebra-lemma-differential-seq}).
+Say $\xi$ is the image of $f \in I$.
+Then we consider the two maps $a, b : P' = P[x] \to P$, the first of which
+maps $x$ to $0$ and the second of which maps $x$ to $f$ (in both
+cases $P[x] \to B$ maps $x$ to zero). We see that $\xi$ and $0$
+are the image of $\text{d}x \otimes 1$ in $\Omega_{P'/A} \otimes_{P'} B$.
+Thus $\xi$ and $0$ have the same image in the colimit (see
+Cohomology on Sites, Example \ref{sites-cohomology-example-category-to-point})
+$\pi_!(\Omega_{\mathcal{O}/A} \otimes \underline{B})$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pi-lower-shriek-polynomial-algebra}
+If $B$ is a polynomial algebra over the ring $A$, then
+with $\pi$ as in (\ref{equation-pi}) we have that
+$\pi_!$ is exact and $\pi_!\mathcal{F} = \mathcal{F}(B \to B)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-identify-pi-shriek} which tells us
+the constant simplicial algebra on $B$ can be used to compute $L\pi_!$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cotangent-complex-polynomial-algebra}
+If $B$ is a polynomial algebra over the ring $A$, then
+$L_{B/A}$ is quasi-isomorphic to $\Omega_{B/A}[0]$.
+\end{lemma}
+
+\begin{proof}
+Immediate from
+Lemmas \ref{lemma-compute-cotangent-complex} and
+\ref{lemma-pi-lower-shriek-polynomial-algebra}.
+\end{proof}
+
+
+
+
+
+\section{Constructing a resolution}
+\label{section-polynomial}
+
+\noindent
+In the Noetherian finite type case we can construct a ``small'' simplicial
+resolution for finite type ring maps.
+
+\begin{lemma}
+\label{lemma-polynomial}
+Let $A$ be a Noetherian ring. Let $A \to B$ be a finite type ring map.
+Let $\mathcal{A}$ be the category of $A$-algebra maps $C \to B$. Let
+$n \geq 0$ and let $P_\bullet$ be a simplicial object of $\mathcal{A}$
+such that
+\begin{enumerate}
+\item $P_\bullet \to B$ is a trivial Kan fibration of simplicial sets,
+\item $P_k$ is finite type over $A$ for $k \leq n$,
+\item $P_\bullet = \text{cosk}_n \text{sk}_n P_\bullet$ as simplicial
+objects of $\mathcal{A}$.
+\end{enumerate}
+Then $P_{n + 1}$ is a finite type $A$-algebra.
+\end{lemma}
+
+\begin{proof}
+Although the proof we give of this lemma is straightforward, it is a bit
+messy. To clarify the idea we explain what happens for low $n$ before giving
+the proof in general. For example, if $n = 0$, then (3) means that
+$P_1 = P_0 \times_B P_0$. Since the ring map $P_0 \to B$ is surjective, this
+is of finite type over $A$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-fibre-product-finite-type}.
+
+\medskip\noindent
+If $n = 1$, then (3) means that
+$$
+P_2 = \{(f_0, f_1, f_2) \in P_1^3 \mid
+d_0f_0 = d_0f_1,\ d_1f_0 = d_0f_2,\ d_1f_1 = d_1f_2 \}
+$$
+where the equalities take place in $P_0$. Observe that the triple
+$$
+(d_0f_0, d_1f_0, d_1f_1) = (d_0f_1, d_0f_2, d_1f_2)
+$$
+is an element of the fibre product $P_0 \times_B P_0 \times_B P_0$ over $B$
+because the maps $d_i : P_1 \to P_0$ are morphisms over $B$. Thus we get
+a map
+$$
+\psi : P_2 \longrightarrow P_0 \times_B P_0 \times_B P_0
+$$
+The fibre of $\psi$ over an element
+$(g_0, g_1, g_2) \in P_0 \times_B P_0 \times_B P_0$
+is the set of triples $(f_0, f_1, f_2)$ of $1$-simplices
+with $(d_0, d_1)(f_0) = (g_0, g_1)$, $(d_0, d_1)(f_1) = (g_0, g_2)$,
+and $(d_0, d_1)(f_2) = (g_1, g_2)$. As $P_\bullet \to B$ is a trivial
+Kan fibration the map $(d_0, d_1) : P_1 \to P_0 \times_B P_0$ is
+surjective. Thus we see that $P_2$ fits into the cartesian diagram
+$$
+\xymatrix{
+P_2 \ar[d] \ar[r] & P_1^3 \ar[d] \\
+P_0 \times_B P_0 \times_B P_0 \ar[r] & (P_0 \times_B P_0)^3
+}
+$$
+where the lower horizontal arrow sends $(g_0, g_1, g_2)$ to
+$((g_0, g_1), (g_0, g_2), (g_1, g_2))$ and the right vertical arrow
+is $(d_0, d_1)$ applied in each factor.
+By More on Algebra, Lemma \ref{more-algebra-lemma-formal-consequence}
+we conclude. The general case is similar, but requires a bit more notation.
+
+\medskip\noindent
+The case $n > 1$. By Simplicial, Lemma \ref{simplicial-lemma-cosk-above-object}
+the condition $P_\bullet = \text{cosk}_n \text{sk}_n P_\bullet$
+implies the same thing is true in the category of simplicial
+$A$-algebras and hence in the category of sets (as the forgetful
+functor from $A$-algebras to sets commutes with limits). Thus
+$$
+P_{n + 1} =
+\Mor(\Delta[n + 1], P_\bullet) =
+\Mor(\text{sk}_n \Delta[n + 1], \text{sk}_n P_\bullet)
+$$
+by Simplicial, Lemma \ref{simplicial-lemma-simplex-map} and
+Equation (\ref{simplicial-equation-cosk}). We will prove by induction
+on $1 \leq k < m \leq n + 1$ that the ring
+$$
+Q_{k, m} = \Mor(\text{sk}_k \Delta[m], \text{sk}_k P_\bullet)
+$$
+is of finite type over $A$. The case $k = 1$, $1 < m \leq n + 1$
+is entirely similar to the discussion above in the case $n = 1$.
+Namely, there is a cartesian diagram
+$$
+\xymatrix{
+Q_{1, m} \ar[d] \ar[r] & P_1^N \ar[d] \\
+P_0 \times_B \ldots \times_B P_0 \ar[r] & (P_0 \times_B P_0)^N
+}
+$$
+where $N = {m + 1 \choose 2}$ is the number of edges, i.e., nondegenerate
+$1$-simplices, of $\Delta[m]$. We conclude as before.
+
+\medskip\noindent
+Let $1 \leq k_0 \leq n$ and assume $Q_{k, m}$ is of finite type
+over $A$ for all $1 \leq k \leq k_0$ and $k < m \leq n + 1$.
+For $k_0 + 1 < m \leq n + 1$ we claim there is a cartesian square
+$$
+\xymatrix{
+Q_{k_0 + 1, m} \ar[d] \ar[r] & P_{k_0 + 1}^N \ar[d] \\
+Q_{k_0, m} \ar[r] & Q_{k_0, k_0 + 1}^N
+}
+$$
+where $N$ is the number of nondegenerate $(k_0 + 1)$-simplices
+of $\Delta[m]$. Namely, to see this is true, think of an element of
+$Q_{k_0 + 1, m}$ as a function $f$ from the $(k_0 + 1)$-skeleton
+of $\Delta[m]$ to $P_\bullet$. We can restrict $f$ to the $k_0$-skeleton
+which gives the left vertical map of the diagram. We can also restrict
+to each nondegenerate $(k_0 + 1)$-simplex which gives the top horizontal
+arrow. Moreover, to give such an $f$ is the same thing as giving its
+restriction to the $k_0$-skeleton and to each nondegenerate
+$(k_0 + 1)$-face, provided these agree on the overlap, and this
+is exactly the content of the diagram. Moreover, the fact that
+$P_\bullet \to B$ is a trivial Kan fibration implies that
+the map
+$$
+P_{k_0} \to Q_{k_0, k_0 + 1} = \Mor(\partial \Delta[k_0 + 1], P_\bullet)
+$$
+is surjective as every map $\partial \Delta[k_0 + 1] \to B$ can be extended
+to $\Delta[k_0 + 1] \to B$ for $k_0 \geq 1$ (small argument about constant
+simplicial sets omitted). Since by induction hypothesis the rings
+$Q_{k_0, m}$, $Q_{k_0, k_0 + 1}$ are finite type $A$-algebras, so is
+$Q_{k_0 + 1, m}$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-formal-consequence}
+once more.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-polynomial}
+Let $A$ be a Noetherian ring. Let $A \to B$ be a finite type ring map.
+There exists a simplicial $A$-algebra $P_\bullet$ with an augmentation
+$\epsilon : P_\bullet \to B$ such that each $P_n$ is a polynomial algebra
+of finite type over $A$ and such that $\epsilon$ is a trivial
+Kan fibration of simplicial sets.
+\end{proposition}
+
+\begin{proof}
+Let $\mathcal{A}$ be the category of $A$-algebra maps $C \to B$.
+In this proof our simplicial objects and skeleton and coskeleton
+functors will be taken in this category.
+
+\medskip\noindent
+Choose a polynomial algebra $P_0$ of finite type over $A$ and a surjection
+$P_0 \to B$. As a first approximation we take
+$P_\bullet = \text{cosk}_0(P_0)$. In other words, $P_\bullet$ is the simplicial
+$A$-algebra with terms $P_n = P_0 \times_B \ldots \times_B P_0$
+(fibre product of $n + 1$ copies of $P_0$ over $B$, the product in the
+category $\mathcal{A}$).
+(In the final paragraph of the proof this simplicial object will
+be denoted $P^0_\bullet$.) By
+Simplicial, Lemma \ref{simplicial-lemma-cosk-minus-one-equivalence}
+the map $P_\bullet \to B$ is a trivial Kan fibration of simplicial sets.
+Also, observe that $P_\bullet = \text{cosk}_0 \text{sk}_0 P_\bullet$.
+
+\medskip\noindent
+Suppose for some $n \geq 0$ we have constructed $P_\bullet$
+(in the final paragraph of the proof this will be $P^n_\bullet$)
+such that
+\begin{enumerate}
+\item[(a)] $P_\bullet \to B$ is a trivial Kan fibration of simplicial sets,
+\item[(b)] $P_k$ is a finitely generated polynomial algebra for
+$0 \leq k \leq n$, and
+\item[(c)] $P_\bullet = \text{cosk}_n \text{sk}_n P_\bullet$
+\end{enumerate}
+By Lemma \ref{lemma-polynomial}
+we can find a finitely generated polynomial algebra $Q$ over $A$
+and a surjection $Q \to P_{n + 1}$. Since $P_n$ is a polynomial algebra
+the $A$-algebra maps $s_i : P_n \to P_{n + 1}$ lift to maps
+$s'_i : P_n \to Q$. Set $d'_j : Q \to P_n$ equal to the composition of
+$Q \to P_{n + 1}$ and $d_j : P_{n + 1} \to P_n$.
+We obtain a truncated simplicial object $P'_\bullet$ of $\mathcal{A}$
+by setting $P'_k = P_k$ for $k \leq n$ and $P'_{n + 1} = Q$ and morphisms
+$d'_i = d_i$ and $s'_i = s_i$ in degrees $k \leq n - 1$ and using the
+morphisms $d'_j$ and $s'_i$ in degree $n$. Extend this to a full simplicial
+object $P'_\bullet$ of $\mathcal{A}$ using $\text{cosk}_{n + 1}$. By
+functoriality of the coskeleton functors there is a morphism
+$P'_\bullet \to P_\bullet$ of simplicial objects extending the
+given morphism of $(n + 1)$-truncated simplicial objects.
+(This morphism will be denoted $P^{n + 1}_\bullet \to P^n_\bullet$
+in the final paragraph of the proof.)
+
+\medskip\noindent
+Note that conditions (b) and (c) are satisfied for $P'_\bullet$ with $n$
+replaced by $n + 1$. We claim the map $P'_\bullet \to P_\bullet$ satisfies
+assumptions (1), (2), (3), and (4) of
+Simplicial, Lemma \ref{simplicial-lemma-section}
+with $n + 1$ instead of $n$. Conditions (1) and (2) hold by construction.
+By Simplicial, Lemma \ref{simplicial-lemma-cosk-above-object}
+we see that we have
+$P_\bullet = \text{cosk}_{n + 1}\text{sk}_{n + 1}P_\bullet$
+and
+$P'_\bullet = \text{cosk}_{n + 1}\text{sk}_{n + 1}P'_\bullet$
+not only in $\mathcal{A}$ but also in the category of $A$-algebras,
+whence in the category of sets (as the forgetful functor from $A$-algebras
+to sets commutes with all limits). This proves (3) and (4). Thus the lemma
+applies and $P'_\bullet \to P_\bullet$ is a trivial Kan fibration. By
+Simplicial, Lemma \ref{simplicial-lemma-trivial-kan-composition}
+we conclude that $P'_\bullet \to B$ is a trivial Kan fibration and (a)
+holds as well.
+
+\medskip\noindent
+To finish the proof we take the inverse limit $P_\bullet = \lim P^n_\bullet$
+of the sequence of simplicial algebras
+$$
+\ldots \to P^2_\bullet \to P^1_\bullet \to P^0_\bullet
+$$
+constructed above. The map $P_\bullet \to B$ is a trivial Kan fibration by
+Simplicial, Lemma \ref{simplicial-lemma-limit-trivial-kan}.
+However, the construction above stabilizes in each degree
+to a fixed finitely generated polynomial algebra as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pi-shriek-finite}
+Let $A$ be a Noetherian ring. Let $A \to B$ be a finite type ring map.
+Let $\pi$, $\underline{B}$ be as in (\ref{equation-pi}).
+If $\mathcal{F}$ is a $\underline{B}$-module such that
+$\mathcal{F}(P, \alpha)$ is a finite $B$-module for all
+$\alpha : P = A[x_1, \ldots, x_n] \to B$, then the cohomology modules
+of $L\pi_!(\mathcal{F})$ are finite $B$-modules.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-identify-pi-shriek} and
+Proposition \ref{proposition-polynomial}
+we can compute $L\pi_!(\mathcal{F})$ by a complex
+constructed out of the values of $\mathcal{F}$ on finite type
+polynomial algebras.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cotangent-finite}
+Let $A$ be a Noetherian ring. Let $A \to B$ be a finite type ring map.
+Then $H^n(L_{B/A})$ is a finite $B$-module for all $n \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Apply Lemmas \ref{lemma-compute-cotangent-complex} and
+\ref{lemma-pi-shriek-finite}.
+\end{proof}
+
+\begin{remark}[Resolutions]
+\label{remark-resolution}
+Let $A \to B$ be any ring map. Let us call an augmented simplicial $A$-algebra
+$\epsilon : P_\bullet \to B$ a {\it resolution of $B$ over $A$} if
+each $P_n$ is a polynomial algebra and $\epsilon$ is a trivial Kan fibration
+of simplicial sets. If $P_\bullet \to B$ is an augmentation of a simplicial
+$A$-algebra with each $P_n$ a polynomial algebra surjecting onto $B$, then
+the following are equivalent
+\begin{enumerate}
+\item $\epsilon : P_\bullet \to B$ is a resolution of $B$ over $A$,
+\item $\epsilon : P_\bullet \to B$ is a quasi-isomorphism on
+associated complexes,
+\item $\epsilon : P_\bullet \to B$ induces a homotopy equivalence
+of simplicial sets.
+\end{enumerate}
+To see this use Simplicial, Lemmas
+\ref{simplicial-lemma-trivial-kan-homotopy},
+\ref{simplicial-lemma-homotopy-equivalence}, and
+\ref{simplicial-lemma-qis-simplicial-abelian-groups}.
+A resolution $P_\bullet$ of $B$ over $A$ gives a cosimplicial object
+$U_\bullet$ of $\mathcal{C}_{B/A}$ as in Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-by-cosimplicial-resolution}
+and it follows that
+$$
+L\pi_!\mathcal{F} = \mathcal{F}(P_\bullet)
+$$
+functorially in $\mathcal{F}$, see Lemma \ref{lemma-identify-pi-shriek}.
+The (formal part of the) proof of Proposition \ref{proposition-polynomial}
+shows that resolutions exist. We have also seen in the first proof of
+Lemma \ref{lemma-pi-shriek-standard} that the standard resolution of $B$
+over $A$ is a resolution (so that this terminology doesn't lead to a conflict).
+However, the argument in the proof of Proposition \ref{proposition-polynomial}
+shows the existence of resolutions without appealing to the simplicial
+computations in Simplicial, Section \ref{simplicial-section-standard}.
+Moreover, for {\it any} choice of resolution we have a canonical isomorphism
+$$
+L_{B/A} = \Omega_{P_\bullet/A} \otimes_{P_\bullet, \epsilon} B
+$$
+in $D(B)$ by Lemma \ref{lemma-compute-cotangent-complex}. The freedom to
+choose an arbitrary resolution can be quite useful.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-O-homology-B-homology}
+Let $A \to B$ be a ring map. Let $\pi$, $\mathcal{O}$, $\underline{B}$
+be as in (\ref{equation-pi}). For any $\mathcal{O}$-module $\mathcal{F}$
+we have
+$$
+L\pi_!(\mathcal{F}) = L\pi_!(Li^*\mathcal{F}) =
+L\pi_!(\mathcal{F} \otimes_\mathcal{O}^\mathbf{L} \underline{B})
+$$
+in $D(\textit{Ab})$.
+\end{lemma}
+
+\begin{proof}
+It suffices to verify that the assumptions of Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-O-homology-qis}
+hold for $\mathcal{O} \to \underline{B}$ on $\mathcal{C}_{B/A}$.
+We will use the results of Remark \ref{remark-resolution} without
+further mention. Choose a resolution $P_\bullet$ of $B$ over $A$ to get a
+suitable cosimplicial object $U_\bullet$ of $\mathcal{C}_{B/A}$.
+Since $P_\bullet \to B$ induces a quasi-isomorphism on associated
+complexes of abelian groups we see that $L\pi_!\mathcal{O} = B$.
+On the other hand $L\pi_!\underline{B}$ is computed by
+$\underline{B}(U_\bullet) = B$. This verifies the second assumption of
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-O-homology-qis}
+and we are done with the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-apply-O-B-comparison}
+Let $A \to B$ be a ring map. Let $\pi$, $\mathcal{O}$, $\underline{B}$
+be as in (\ref{equation-pi}). We have
+$$
+L\pi_!(\mathcal{O}) = L\pi_!(\underline{B}) = B
+\quad\text{and}\quad
+L_{B/A} = L\pi_!(\Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B}) =
+L\pi_!(\Omega_{\mathcal{O}/A})
+$$
+in $D(\textit{Ab})$.
+\end{lemma}
+
+\begin{proof}
+This is just an application of Lemma \ref{lemma-O-homology-B-homology}
+(and the first equality on the right is
+Lemma \ref{lemma-compute-cotangent-complex}).
+\end{proof}
+
+\noindent
+Here is a special case of the fundamental triangle that is easy to prove.
+
+\begin{lemma}
+\label{lemma-special-case-triangle}
+Let $A \to B \to C$ be ring maps. If $B$ is a polynomial algebra over
+$A$, then there is a distinguished triangle
+$L_{B/A} \otimes_B^\mathbf{L} C \to L_{C/A} \to L_{C/B} \to
+L_{B/A} \otimes_B^\mathbf{L} C[1]$ in $D(C)$.
+\end{lemma}
+
+\begin{proof}
+We will use the observations of Remark \ref{remark-resolution}
+without further mention. Choose a resolution $\epsilon : P_\bullet \to C$
+of $C$ over $B$ (for example the standard resolution). Since $B$ is a
+polynomial algebra over $A$ we see that $P_\bullet$ is also a resolution of
+$C$ over $A$. Hence $L_{C/A}$ is computed by
+$\Omega_{P_\bullet/A} \otimes_{P_\bullet, \epsilon} C$
+and $L_{C/B}$ is computed by
+$\Omega_{P_\bullet/B} \otimes_{P_\bullet, \epsilon} C$.
+Since for each $n$ we have the short exact sequence
+$0 \to \Omega_{B/A} \otimes_B P_n \to \Omega_{P_n/A} \to \Omega_{P_n/B} \to 0$
+(Algebra, Lemma \ref{algebra-lemma-ses-formally-smooth})
+and since $L_{B/A} = \Omega_{B/A}[0]$
+(Lemma \ref{lemma-cotangent-complex-polynomial-algebra})
+we obtain the result.
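+
+\medskip\noindent
+In more detail: tensoring the short exact sequences above with $C$ over
+$P_n$ (this preserves exactness as the sequences are split) gives short
+exact sequences
+$$
+0 \to \Omega_{B/A} \otimes_B C \to \Omega_{P_n/A} \otimes_{P_n} C \to
+\Omega_{P_n/B} \otimes_{P_n} C \to 0
+$$
+compatible with the simplicial structure maps. The associated termwise
+split short exact sequence of complexes gives a distinguished triangle in
+$D(C)$ whose first member is the complex of the constant simplicial module
+with value $\Omega_{B/A} \otimes_B C$; this is quasi-isomorphic to
+$\Omega_{B/A} \otimes_B C$ placed in degree $0$, which equals
+$L_{B/A} \otimes_B^\mathbf{L} C$ as $\Omega_{B/A}$ is a free $B$-module.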
+\end{proof}
+
+\begin{example}
+\label{example-resolution-length-two}
+Let $A \to B$ be a ring map. In this example we
+will construct an ``explicit'' resolution $P_\bullet$ of $B$ over $A$ of
+length $2$. To do this we follow the procedure of the proof of
+Proposition \ref{proposition-polynomial}, see also the discussion in
+Remark \ref{remark-resolution}.
+
+\medskip\noindent
+We choose a surjection $P_0 = A[u_i] \to B$ where $\{u_i\}$ is a set of
+variables. Choose generators $f_t \in P_0$, $t \in T$, of the ideal
+$\Ker(P_0 \to B)$. We choose $P_1 = A[u_i, x_t]$ with face maps
+$d_0$ and $d_1$ the unique $A$-algebra maps with $d_j(u_i) = u_i$
+and $d_0(x_t) = 0$ and $d_1(x_t) = f_t$. The map $s_0 : P_0 \to P_1$
+is the unique $A$-algebra map with $s_0(u_i) = u_i$. It is clear that
+$$
+P_1 \xrightarrow{d_0 - d_1} P_0 \to B \to 0
+$$
+is exact, in particular the map $(d_0, d_1) : P_1 \to P_0 \times_B P_0$
+is surjective. Thus, if $P_\bullet$ denotes the $1$-truncated
+simplicial $A$-algebra given by $P_0$, $P_1$, $d_0$, $d_1$, and $s_0$, then
+the augmentation $\text{cosk}_1(P_\bullet) \to B$ is a trivial Kan fibration.
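+To spell out the exactness of the displayed sequence: for $g \in P_1$ we
+have $d_0(g) = g|_{x_t = 0}$ and $d_1(g) = g|_{x_t = f_t}$, hence
+$$
+(d_0 - d_1)(g) \in (f_t, t \in T) = \Ker(P_0 \to B)
+$$
+and conversely $h f_t = (d_1 - d_0)(h x_t)$ for every $h \in P_0$, so
+the image of $d_0 - d_1$ is exactly $\Ker(P_0 \to B)$.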
+The next step of the procedure in the proof of
+Proposition \ref{proposition-polynomial}
+is to choose a polynomial algebra $P_2$ and a surjection
+$$
+P_2 \longrightarrow \text{cosk}_1(P_\bullet)_2
+$$
+Recall that
+$$
+\text{cosk}_1(P_\bullet)_2 =
+\{(g_0, g_1, g_2) \in P_1^3 \mid d_0(g_0) = d_0(g_1),
+d_1(g_0) = d_0(g_2), d_1(g_1) = d_1(g_2)\}
+$$
+Thinking of $g_i \in P_1$ as a polynomial in $x_t$ the conditions
+are
+$$
+g_0(0) = g_1(0),\quad
+g_0(f_t) = g_2(0),\quad
+g_1(f_t) = g_2(f_t)
+$$
+Thus $\text{cosk}_1(P_\bullet)_2$ contains the elements
+$y_t = (x_t, x_t, f_t)$ and $z_t = (0, x_t, x_t)$.
+Every element $G$ in $\text{cosk}_1(P_\bullet)_2$ is
+of the form $G = H + (0, 0, g)$ where $H$ is in the image
+of $A[u_i, y_t, z_t] \to \text{cosk}_1(P_\bullet)_2$. Here
+$g \in P_1$ is a polynomial with vanishing
+constant term such that $g(f_t) = 0$ in $P_0$. Observe that
+\begin{enumerate}
+\item $g = x_t x_{t'} - f_t x_{t'}$ and
+\item $g = \sum r_t x_t$ with $r_t \in P_0$ if $\sum r_t f_t = 0$ in $P_0$
+\end{enumerate}
+are elements of $P_1$ of the desired form. Let
+$$
+Rel = \Ker(\bigoplus\nolimits_{t \in T} P_0 \longrightarrow P_0),\quad
+(r_t) \longmapsto \sum r_tf_t
+$$
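+Indeed, the elements in (1) and (2) have vanishing constant term, and
+substituting $x_t \mapsto f_t$ yields
+$$
+f_t f_{t'} - f_t f_{t'} = 0
+\quad\text{and}\quad
+\sum r_t f_t = 0
+$$
+respectively, the latter because $(r_t) \in Rel$.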
+We set $P_2 = A[u_i, y_t, z_t, v_r, w_{t, t'}]$ where
+$r = (r_t) \in Rel$, with map
+$$
+P_2 \longrightarrow \text{cosk}_1(P_\bullet)_2
+$$
+given by $y_t \mapsto (x_t, x_t, f_t)$,
+$z_t \mapsto (0, x_t, x_t)$,
+$v_r \mapsto (0, 0, \sum r_t x_t)$, and
+$w_{t, t'} \mapsto (0, 0, x_t x_{t'} - f_t x_{t'})$. A calculation
+(omitted) shows that this map is surjective. Our choice of the
+map displayed above determines the maps $d_0, d_1, d_2 : P_2 \to P_1$.
+Finally, the procedure in the proof of
+Proposition \ref{proposition-polynomial}
+tells us to choose the maps $s_0, s_1 : P_1 \to P_2$ lifting the two
+maps $P_1 \to \text{cosk}_1(P_\bullet)_2$. It is clear that we can take
+$s_i$ to be the unique $A$-algebra maps determined by
+$s_0(x_t) = y_t$ and $s_1(x_t) = z_t$.
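+
+\medskip\noindent
+To see that these are indeed lifts, recall that the two canonical maps
+$P_1 \to \text{cosk}_1(P_\bullet)_2$ send $g$ to the triples of faces of
+$s_0(g)$ and $s_1(g)$, that is, to $(g, g, s_0(d_1(g)))$ and
+$(s_0(d_0(g)), g, g)$ by the simplicial identities. Hence
+$$
+x_t \longmapsto (x_t, x_t, f_t)
+\quad\text{and}\quad
+x_t \longmapsto (0, x_t, x_t)
+$$
+which are exactly the images of $y_t$ and $z_t$.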
+\end{example}
+
+
+
+
+
+
+
+
+\section{Functoriality}
+\label{section-functoriality}
+
+\noindent
+In this section we consider a commutative square
+\begin{equation}
+\label{equation-commutative-square}
+\vcenter{
+\xymatrix{
+B \ar[r] & B' \\
+A \ar[u] \ar[r] & A' \ar[u]
+}
+}
+\end{equation}
+of ring maps. We claim there is a canonical $B$-linear map of complexes
+$$
+L_{B/A} \longrightarrow L_{B'/A'}
+$$
+associated to this diagram. Namely, if $P_\bullet \to B$ is the
+standard resolution of $B$ over $A$ and $P'_\bullet \to B'$ is the
+standard resolution of $B'$ over $A'$, then there is a canonical map
+$P_\bullet \to P'_\bullet$
+of simplicial $A$-algebras compatible with the augmentations
+$P_\bullet \to B$ and $P'_\bullet \to B'$. This can be seen in terms
+of the construction of standard resolutions in
+Simplicial, Section \ref{simplicial-section-standard}
+but in the special case at hand it probably suffices to say simply
+that the maps
+$$
+P_0 = A[B] \longrightarrow A'[B'] = P'_0,\quad
+P_1 = A[A[B]] \longrightarrow A'[A'[B']] = P'_1,
+$$
+and so on are induced by the given maps $A \to A'$ and $B \to B'$.
+The desired map $L_{B/A} \to L_{B'/A'}$ then comes from the associated
+maps $\Omega_{P_n/A} \to \Omega_{P'_n/A'}$.
+
+\medskip\noindent
+Another description of the functoriality map can be given as follows.
+Let $\mathcal{C} = \mathcal{C}_{B/A}$ and $\mathcal{C}' = \mathcal{C}_{B'/A'}$
+be the categories considered in Section \ref{section-compute-L-pi-shriek}.
+There is a functor
+$$
+u : \mathcal{C} \longrightarrow \mathcal{C}',\quad
+(P, \alpha) \longmapsto (P \otimes_A A', c \circ (\alpha \otimes 1))
+$$
+where $c : B \otimes_A A' \to B'$ is the obvious map. As discussed in
+Cohomology on Sites, Example
+\ref{sites-cohomology-example-morphism-categories}
+we obtain a morphism of topoi $g : \Sh(\mathcal{C}) \to \Sh(\mathcal{C}')$
+and a commutative diagram of maps of ringed topoi
+\begin{equation}
+\label{equation-double-square}
+\vcenter{
+\xymatrix{
+(\Sh(\mathcal{C}), \underline{B}) \ar[d]_\pi &
+(\Sh(\mathcal{C}), \underline{B'}) \ar[d]_\pi \ar[l]^h &
+(\Sh(\mathcal{C}'), \underline{B'}) \ar[d]_{\pi'} \ar[l]^g \\
+(\Sh(*), B) &
+(\Sh(*), B') \ar[l]_f &
+(\Sh(*), B') \ar[l]
+}
+}
+\end{equation}
+Here $h$ is the identity on underlying topoi and given by the ring map
+$B \to B'$ on sheaves of rings.
+By Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-morphism-fibred-categories}
+given $\mathcal{F}$ on $\mathcal{C}$ and $\mathcal{F}'$ on $\mathcal{C}'$
+and a transformation $t : \mathcal{F} \to g^{-1}\mathcal{F}'$
+we obtain a canonical map $L\pi_!(\mathcal{F}) \to L\pi'_!(\mathcal{F}')$.
+If we apply this to the sheaves
+$$
+\mathcal{F} : (P, \alpha) \mapsto \Omega_{P/A} \otimes_P B,\quad
+\mathcal{F}' : (P', \alpha') \mapsto \Omega_{P'/A'} \otimes_{P'} B',
+$$
+and the transformation $t$ given by the canonical maps
+$$
+\Omega_{P/A} \otimes_P B \longrightarrow
+\Omega_{P \otimes_A A'/A'} \otimes_{P \otimes_A A'} B'
+$$
+to get a canonical map
+$$
+L\pi_!(\Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B})
+\longrightarrow
+L\pi'_!(\Omega_{\mathcal{O}'/A'} \otimes_{\mathcal{O}'} \underline{B'})
+$$
+By Lemma \ref{lemma-compute-cotangent-complex} this gives
+$L_{B/A} \to L_{B'/A'}$. We omit the verification that this map
+agrees with the map defined above in terms of simplicial
+resolutions.
+
+\begin{lemma}
+\label{lemma-flat-base-change}
+Assume (\ref{equation-commutative-square}) induces a quasi-isomorphism
+$B \otimes_A^\mathbf{L} A' = B'$. Then, with notation as in
+(\ref{equation-double-square}) and
+$\mathcal{F}' \in \textit{Ab}(\mathcal{C}')$,
+we have $L\pi_!(g^{-1}\mathcal{F}') = L\pi'_!(\mathcal{F}')$.
+\end{lemma}
+
+\begin{proof}
+We use the results of Remark \ref{remark-resolution} without
+further mention. We will apply Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-get-it-now}. Let $P_\bullet \to B$ be a resolution.
+If we can show that $u(P_\bullet) = P_\bullet \otimes_A A' \to B'$
+is a quasi-isomorphism, then we are done. The complex of $A$-modules
+$s(P_\bullet)$ associated to $P_\bullet$ (viewed as a simplicial $A$-module)
+is a free $A$-module resolution of $B$. Namely, $P_n$ is a free $A$-module and
+$s(P_\bullet) \to B$ is a quasi-isomorphism. Thus $B \otimes_A^\mathbf{L} A'$
+is computed by $s(P_\bullet) \otimes_A A' = s(P_\bullet \otimes_A A')$.
+Therefore the assumption of the lemma signifies that
+$\epsilon' : P_\bullet \otimes_A A' \to B'$ is a quasi-isomorphism.
+\end{proof}
+
+\noindent
+The following lemma in particular applies when $A \to A'$ is flat
+and $B' = B \otimes_A A'$ (flat base change).
+
+\begin{lemma}
+\label{lemma-flat-base-change-cotangent-complex}
+If (\ref{equation-commutative-square}) induces a quasi-isomorphism
+$B \otimes_A^\mathbf{L} A' = B'$, then the functoriality map
+induces an isomorphism
+$$
+L_{B/A} \otimes_B^\mathbf{L} B' \longrightarrow L_{B'/A'}
+$$
+\end{lemma}
+
+\begin{proof}
+We will use the notation introduced in Equation (\ref{equation-double-square}).
+We have
+$$
+L_{B/A} \otimes_B^\mathbf{L} B' =
+L\pi_!(\Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B})
+\otimes_B^\mathbf{L} B' =
+L\pi_!(Lh^*(\Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B}))
+$$
+the first equality by Lemma \ref{lemma-compute-cotangent-complex}
+and the second by Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-change-of-rings}.
+Since $\Omega_{\mathcal{O}/A}$ is a flat $\mathcal{O}$-module,
+we see that $\Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B}$
+is a flat $\underline{B}$-module. Thus
+$Lh^*(\Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B}) =
+\Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B'}$
+which is equal to
+$g^{-1}(\Omega_{\mathcal{O}'/A'} \otimes_{\mathcal{O}'} \underline{B'})$
+by inspection.
+We conclude by Lemma \ref{lemma-flat-base-change}
+and the fact that $L_{B'/A'}$ is computed by
+$L\pi'_!(\Omega_{\mathcal{O}'/A'} \otimes_{\mathcal{O}'} \underline{B'})$.
+\end{proof}
+
+\begin{remark}
+\label{remark-homotopy-triangle}
+Suppose that we are given a square (\ref{equation-commutative-square})
+such that there exists an arrow $\kappa : B \to A'$ making the diagram
+commute:
+$$
+\xymatrix{
+B \ar[r]_\beta \ar[rd]_\kappa & B' \\
+A \ar[u] \ar[r]^\alpha & A' \ar[u]
+}
+$$
+In this case we claim the functoriality map $P_\bullet \to P'_\bullet$
+is homotopic to the composition $P_\bullet \to B \to A' \to P'_\bullet$.
+Namely, using $\kappa$ the functoriality map factors as
+$$
+P_\bullet \to P_{A'/A', \bullet} \to P'_\bullet
+$$
+where $P_{A'/A', \bullet}$ is the standard resolution of $A'$ over $A'$.
+Since $A'$ is the polynomial algebra on the empty set over $A'$ we
+see from Simplicial, Lemma \ref{simplicial-lemma-standard-simplicial-homotopy}
+that the augmentation $\epsilon_{A'/A'} : P_{A'/A', \bullet} \to A'$
+is a homotopy equivalence of simplicial rings. Observe that the homotopy
+inverse map $c : A' \to P_{A'/A', \bullet}$ constructed in the proof of
+that lemma is just the structure morphism, hence
+we conclude what we want because the two compositions
+$$
+\xymatrix{
+P_\bullet \ar[r] &
+P_{A'/A', \bullet} \ar@<1ex>[rr]^{\text{id}}
+\ar@<-1ex>[rr]_{c \circ \epsilon_{A'/A'}} & &
+P_{A'/A', \bullet} \ar[r] &
+P'_\bullet
+}
+$$
+are the two maps discussed above and these are homotopic
+(Simplicial, Remark \ref{simplicial-remark-homotopy-pre-post-compose}).
+Since the second map $P_\bullet \to P'_\bullet$ induces the zero
+map $\Omega_{P_\bullet/A} \to \Omega_{P'_\bullet/A'}$ we conclude
+that the functoriality map $L_{B/A} \to L_{B'/A'}$ is homotopic
+to zero in this case.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-cotangent-complex-product}
+Let $A \to B$ and $A \to C$ be ring maps.
+Then the map $L_{B \times C/A} \to L_{B/A} \oplus L_{C/A}$ is
+an isomorphism in $D(B \times C)$.
+\end{lemma}
+
+\begin{proof}
+Although this lemma can be deduced from the fundamental triangle,
+we will give a direct and elementary proof now.
+Factor the ring map $A \to B \times C$ as $A \to A[x] \to B \times C$
+where $x \mapsto (1, 0)$. By Lemma \ref{lemma-special-case-triangle}
+we have a distinguished triangle
+$$
+L_{A[x]/A} \otimes_{A[x]}^\mathbf{L} (B \times C) \to L_{B \times C/A} \to
+L_{B \times C/A[x]} \to L_{A[x]/A} \otimes_{A[x]}^\mathbf{L} (B \times C)[1]
+$$
+in $D(B \times C)$. Similarly we have the distinguished triangles
+$$
+\begin{matrix}
+L_{A[x]/A} \otimes_{A[x]}^\mathbf{L} B \to L_{B/A} \to L_{B/A[x]}
+\to L_{A[x]/A} \otimes_{A[x]}^\mathbf{L} B[1] \\
+L_{A[x]/A} \otimes_{A[x]}^\mathbf{L} C \to L_{C/A} \to L_{C/A[x]}
+\to L_{A[x]/A} \otimes_{A[x]}^\mathbf{L} C[1]
+\end{matrix}
+$$
+Thus it suffices to prove the result for $B \times C$ over $A[x]$.
+Note that $A[x] \to A[x, x^{-1}]$ is flat, that
+$(B \times C) \otimes_{A[x]} A[x, x^{-1}] = B \otimes_{A[x]} A[x, x^{-1}]$,
+and that $C \otimes_{A[x]} A[x, x^{-1}] = 0$.
+Thus by base change (Lemma \ref{lemma-flat-base-change-cotangent-complex})
+the map $L_{B \times C/A[x]} \to L_{B/A[x]} \oplus L_{C/A[x]}$
+becomes an isomorphism after inverting $x$.
+In the same way one shows that the map becomes an isomorphism after
+inverting $x - 1$. Since $x$ and $x - 1$ generate the unit ideal in
+$B \times C$, the two localizations cover $\Spec(B \times C)$ and
+the lemma follows.
+\end{proof}
+
+
+
+
+\section{The fundamental triangle}
+\label{section-triangle}
+
+\noindent
+In this section we consider a sequence of ring maps $A \to B \to C$.
+It is our goal to show that this triangle gives rise to a distinguished
+triangle
+\begin{equation}
+\label{equation-triangle}
+L_{B/A} \otimes_B^\mathbf{L} C \to L_{C/A} \to L_{C/B} \to
+L_{B/A} \otimes_B^\mathbf{L} C[1]
+\end{equation}
+in $D(C)$. This will be proved in Proposition \ref{proposition-triangle}.
+For an alternative approach see Remark \ref{remark-triangle}.
+
+\medskip\noindent
+Consider the category $\mathcal{C}_{C/B/A}$
+which is the {\bf opposite} of the category whose objects are
+$(P \to B, Q \to C)$ where
+\begin{enumerate}
+\item $P$ is a polynomial algebra over $A$,
+\item $P \to B$ is an $A$-algebra homomorphism,
+\item $Q$ is a polynomial algebra over $P$, and
+\item $Q \to C$ is a $P$-algebra homomorphism.
+\end{enumerate}
+We take the opposite as we want to think of $(P \to B, Q \to C)$
+as corresponding to the commutative diagram
+$$
+\xymatrix{
+\Spec(C) \ar[d] \ar[r] & \Spec(Q) \ar[d] \\
+\Spec(B) \ar[d] \ar[r] & \Spec(P) \ar[dl] \\
+\Spec(A)
+}
+$$
+Let $\mathcal{C}_{B/A}$, $\mathcal{C}_{C/A}$, $\mathcal{C}_{C/B}$
+be the categories considered in Section \ref{section-compute-L-pi-shriek}.
+There are functors
+$$
+\begin{matrix}
+u_1 : \mathcal{C}_{C/B/A} \to \mathcal{C}_{B/A}, &
+(P \to B, Q \to C) \mapsto (P \to B) \\
+u_2 : \mathcal{C}_{C/B/A} \to \mathcal{C}_{C/A}, &
+(P \to B, Q \to C) \mapsto (Q \to C) \\
+u_3 : \mathcal{C}_{C/B/A} \to \mathcal{C}_{C/B}, &
+(P \to B, Q \to C) \mapsto (Q \otimes_P B \to C)
+\end{matrix}
+$$
+These functors induce corresponding morphisms of topoi $g_i$.
+Let us denote $\mathcal{O}_i = g_i^{-1}\mathcal{O}$ so that we
+get morphisms of ringed topoi
+\begin{equation}
+\label{equation-three-maps}
+\begin{matrix}
+g_1 : (\Sh(\mathcal{C}_{C/B/A}), \mathcal{O}_1)
+\longrightarrow (\Sh(\mathcal{C}_{B/A}), \mathcal{O}) \\
+g_2 : (\Sh(\mathcal{C}_{C/B/A}), \mathcal{O}_2)
+\longrightarrow (\Sh(\mathcal{C}_{C/A}), \mathcal{O}) \\
+g_3 : (\Sh(\mathcal{C}_{C/B/A}), \mathcal{O}_3)
+\longrightarrow (\Sh(\mathcal{C}_{C/B}), \mathcal{O})
+\end{matrix}
+\end{equation}
+Let us denote
+$\pi : \Sh(\mathcal{C}_{C/B/A}) \to \Sh(*)$,
+$\pi_1 : \Sh(\mathcal{C}_{B/A}) \to \Sh(*)$,
+$\pi_2 : \Sh(\mathcal{C}_{C/A}) \to \Sh(*)$, and
+$\pi_3 : \Sh(\mathcal{C}_{C/B}) \to \Sh(*)$,
+so that $\pi = \pi_i \circ g_i$.
+We will obtain our distinguished triangle from the identification
+of the cotangent complex in Lemma \ref{lemma-compute-cotangent-complex}
+and the following lemmas.
+
+\begin{lemma}
+\label{lemma-triangle-ses}
+With notation as in (\ref{equation-three-maps}) set
+$$
+\begin{matrix}
+\Omega_1 = \Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B}
+\text{ on }\mathcal{C}_{B/A} \\
+\Omega_2 = \Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{C}
+\text{ on }\mathcal{C}_{C/A} \\
+\Omega_3 = \Omega_{\mathcal{O}/B} \otimes_\mathcal{O} \underline{C}
+\text{ on }\mathcal{C}_{C/B}
+\end{matrix}
+$$
+Then we have a canonical short exact sequence of sheaves
+of $\underline{C}$-modules
+$$
+0 \to g_1^{-1}\Omega_1 \otimes_{\underline{B}} \underline{C} \to
+g_2^{-1}\Omega_2 \to
+g_3^{-1}\Omega_3 \to 0
+$$
+on $\mathcal{C}_{C/B/A}$.
+\end{lemma}
+
+\begin{proof}
+Recall that $g_i^{-1}$ is gotten by simply precomposing with $u_i$.
+Given an object $U = (P \to B, Q \to C)$ we have a split
+short exact sequence
+$$
+0 \to \Omega_{P/A} \otimes Q \to \Omega_{Q/A} \to \Omega_{Q/P} \to 0
+$$
+for example by Algebra, Lemma \ref{algebra-lemma-ses-formally-smooth}.
+Tensoring with $C$ over $Q$ we obtain a short exact sequence
+$$
+0 \to \Omega_{P/A} \otimes C \to \Omega_{Q/A} \otimes C \to
+\Omega_{Q/P} \otimes C \to 0
+$$
+We have $\Omega_{P/A} \otimes C = \Omega_{P/A} \otimes B \otimes C$
+whence this is the value of
+$g_1^{-1}\Omega_1 \otimes_{\underline{B}} \underline{C}$
+on $U$. The module $\Omega_{Q/A} \otimes C$ is the value of
+$g_2^{-1}\Omega_2$ on $U$.
+We have $\Omega_{Q/P} \otimes C = \Omega_{Q \otimes_P B/B} \otimes C$
+by Algebra, Lemma \ref{algebra-lemma-differentials-base-change}
+hence this is the value of
+$g_3^{-1}\Omega_3$ on $U$. Thus the short exact sequence of the
+lemma comes from assigning to $U$ the last displayed short exact
+sequence.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-polynomial-on-top}
+With notation as in (\ref{equation-three-maps})
+suppose that $C$ is a polynomial algebra over $B$. Then
+$L\pi_!(g_3^{-1}\mathcal{F}) = L\pi_{3, !}\mathcal{F} = \pi_{3, !}\mathcal{F}$
+for any abelian sheaf $\mathcal{F}$ on $\mathcal{C}_{C/B}$.
+\end{lemma}
+
+\begin{proof}
+Write $C = B[E]$ for some set $E$. Choose a resolution
+$P_\bullet \to B$ of $B$ over $A$. For every $n$ consider
+the object $U_n = (P_n \to B, P_n[E] \to C)$ of $\mathcal{C}_{C/B/A}$.
+Then $U_\bullet$ is a cosimplicial object of $\mathcal{C}_{C/B/A}$.
+Note that $u_3(U_\bullet)$ is the constant cosimplicial
+object of $\mathcal{C}_{C/B}$ with value $(C \to C)$.
+We will prove that the object $U_\bullet$ of $\mathcal{C}_{C/B/A}$
+satisfies the hypotheses of
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-by-cosimplicial-resolution}.
+This implies the lemma as it shows that $L\pi_!(g_3^{-1}\mathcal{F})$
+is computed by the constant simplicial abelian group
+$\mathcal{F}(C \to C)$ which is the value of
+$L\pi_{3, !}\mathcal{F} = \pi_{3, !}\mathcal{F}$ by
+Lemma \ref{lemma-pi-lower-shriek-polynomial-algebra}.
+
+\medskip\noindent
+Let $U = (\beta : P \to B, \gamma : Q \to C)$ be an object of
+$\mathcal{C}_{C/B/A}$. We may write $P = A[S]$ and $Q = A[S \amalg T]$
+by the definition of our category $\mathcal{C}_{C/B/A}$. We have to show that
+$$
+\Mor_{\mathcal{C}_{C/B/A}}(U_\bullet, U)
+$$
+is homotopy equivalent to a singleton simplicial set $*$. Observe that this
+simplicial set is the product
+$$
+\prod\nolimits_{s \in S} F_s \times \prod\nolimits_{t \in T} F'_t
+$$
+where $F_s$ is the corresponding simplicial set for
+$U_s = (A[\{s\}] \to B, A[\{s\}] \to C)$
+and $F'_t$ is the corresponding simplicial set for
+$U_t = (A \to B, A[\{t\}] \to C)$. Namely, the object $U$
+is the product $\prod U_s \times \prod U_t$ in $\mathcal{C}_{C/B/A}$.
+It suffices to show that each $F_s$ and each $F'_t$ is homotopy
+equivalent to $*$, see
+Simplicial, Lemma \ref{simplicial-lemma-products-homotopy}.
+The case of $F_s$ follows as $P_\bullet \to B$ is a trivial Kan
+fibration (as a resolution) and $F_s$ is the fibre of this map over
+$\beta(s)$. (Use Simplicial, Lemmas
+\ref{simplicial-lemma-trivial-kan-base-change} and
+\ref{simplicial-lemma-trivial-kan-homotopy}).
+The case of $F'_t$ is more interesting. Here we are saying that
+the fibre of
+$$
+P_\bullet[E] \longrightarrow C = B[E]
+$$
+over $\gamma(t) \in C$ is homotopy equivalent to a point. In fact we
+will show this map is a trivial Kan fibration. Namely,
+$P_\bullet \to B$ is a trivial Kan fibration. For any ring $R$
+we have
+$$
+R[E] =
+\colim_{\Sigma \subset \text{Map}(E, \mathbf{Z}_{\geq 0})\text{ finite}}
+\prod\nolimits_{I \in \Sigma} R
+$$
+(filtered colimit). Thus the displayed map of simplicial sets is a
+filtered colimit of trivial Kan fibrations, whence a trivial Kan fibration
+by Simplicial, Lemma \ref{simplicial-lemma-filtered-colimit-trivial-kan}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-triangle-compute-lower-shriek}
+With notation as in (\ref{equation-three-maps}) we have
+$Lg_{i, !} \circ g_i^{-1} = \text{id}$ for $i = 1, 2, 3$
+and hence also $L\pi_! \circ g_i^{-1} = L\pi_{i, !}$ for
+$i = 1, 2, 3$.
+\end{lemma}
+
+\begin{proof}
+Proof for $i = 1$. We claim that $u_1$ turns $\mathcal{C}_{C/B/A}$
+into a fibred category over $\mathcal{C}_{B/A}$.
+Namely, suppose given $(P \to B, Q \to C)$
+and a morphism $(P' \to B) \to (P \to B)$ of $\mathcal{C}_{B/A}$.
+Recall that this means we have an $A$-algebra homomorphism
+$P \to P'$ compatible with maps to $B$. Then we set $Q' = Q \otimes_P P'$
+with induced map to $C$ and the morphism
+$$
+(P' \to B, Q' \to C) \longrightarrow (P \to B, Q \to C)
+$$
+in $\mathcal{C}_{C/B/A}$ (note reversal of arrows again) is strongly cartesian
+in $\mathcal{C}_{C/B/A}$ over $\mathcal{C}_{B/A}$. Moreover, observe
+that the fibre category of $u_1$ over $P \to B$ is the category
+$\mathcal{C}_{C/P}$. Let $\mathcal{F}$ be an abelian sheaf on
+$\mathcal{C}_{B/A}$. Since we have a fibred category we may apply
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-left-derived-pi-shriek}.
+Thus $L_ng_{1, !}g_1^{-1}\mathcal{F}$ is the (pre)sheaf
+which assigns to $U \in \Ob(\mathcal{C}_{B/A})$ the $n$th homology of
+$g_1^{-1}\mathcal{F}$ restricted to the fibre category over $U$.
+Since these restrictions are constant the desired result follows from
+Lemma \ref{lemma-pi-lower-shriek-constant-sheaf}
+via our identifications of fibre categories above.
+
+\medskip\noindent
+The case $i = 2$.
+We claim that $u_2$ turns $\mathcal{C}_{C/B/A}$ into a fibred
+category over $\mathcal{C}_{C/A}$. Namely, suppose given $(P \to B, Q \to C)$
+and a morphism $(Q' \to C) \to (Q \to C)$ of $\mathcal{C}_{C/A}$.
+Recall that this means we have a $B$-algebra homomorphism
+$Q \to Q'$ compatible with maps to $C$. Then
+$$
+(P \to B, Q' \to C) \longrightarrow (P \to B, Q \to C)
+$$
+is strongly cartesian in $\mathcal{C}_{C/B/A}$ over $\mathcal{C}_{C/A}$.
+Note that the fibre category of $u_2$ over $Q \to C$ has a final
+(beware reversal of arrows) object, namely, $(A \to B, Q \to C)$. Let
+$\mathcal{F}$ be an abelian sheaf on $\mathcal{C}_{C/A}$.
+Since we have a fibred category we may apply
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-left-derived-pi-shriek}.
+Thus $L_ng_{2, !}g_2^{-1}\mathcal{F}$ is the (pre)sheaf
+which assigns to $U \in \Ob(\mathcal{C}_{C/A})$ the $n$th homology of
+$g_2^{-1}\mathcal{F}$ restricted to the fibre category over $U$.
+Since these restrictions are constant the desired result follows from
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-initial-final}
+because the fibre categories all have final objects.
+
+\medskip\noindent
+The case $i = 3$. In this case we will apply
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-left-derived-g-shriek}
+to $u = u_3 : \mathcal{C}_{C/B/A} \to \mathcal{C}_{C/B}$
+and $\mathcal{F}' = g_3^{-1}\mathcal{F}$ for some abelian sheaf
+$\mathcal{F}$ on $\mathcal{C}_{C/B}$.
+Suppose $U = (\overline{Q} \to C)$ is an object of $\mathcal{C}_{C/B}$.
+Then $\mathcal{I}_U = \mathcal{C}_{\overline{Q}/B/A}$ (again beware
+of reversal of arrows). The sheaf $\mathcal{F}'_U$ is given by the
+rule $(P \to B, Q \to \overline{Q}) \mapsto \mathcal{F}(Q \otimes_P B \to C)$.
+In other words, this sheaf is the pullback of a sheaf
+on $\mathcal{C}_{\overline{Q}/B}$ via the morphism
+$\Sh(\mathcal{C}_{\overline{Q}/B/A}) \to \Sh(\mathcal{C}_{\overline{Q}/B})$.
+Thus Lemma \ref{lemma-polynomial-on-top} shows that
+$H_n(\mathcal{I}_U, \mathcal{F}'_U) = 0$ for $n > 0$
+and equal to $\mathcal{F}(\overline{Q} \to C)$ for $n = 0$.
+The aforementioned Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-left-derived-g-shriek}
+implies that $Lg_{3, !}(g_3^{-1}\mathcal{F}) = \mathcal{F}$ and
+the proof is done.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-triangle}
+Let $A \to B \to C$ be ring maps. There is a canonical distinguished
+triangle
+$$
+L_{B/A} \otimes_B^\mathbf{L} C \to L_{C/A} \to L_{C/B} \to
+L_{B/A} \otimes_B^\mathbf{L} C[1]
+$$
+in $D(C)$.
+\end{proposition}
+
+\begin{proof}
+Consider the short exact sequence of sheaves of
+Lemma \ref{lemma-triangle-ses}
+and apply the derived functor $L\pi_!$ to obtain a distinguished
+triangle
+$$
+L\pi_!(g_1^{-1}\Omega_1 \otimes_{\underline{B}} \underline{C}) \to
+L\pi_!(g_2^{-1}\Omega_2) \to
+L\pi_!(g_3^{-1}\Omega_3) \to
+L\pi_!(g_1^{-1}\Omega_1 \otimes_{\underline{B}} \underline{C})[1]
+$$
+in $D(C)$. Using Lemmas \ref{lemma-triangle-compute-lower-shriek} and
+\ref{lemma-compute-cotangent-complex}
+we see that the second and third terms agree with $L_{C/A}$ and $L_{C/B}$
+and the first one equals
+$$
+L\pi_{1, !}(\Omega_1 \otimes_{\underline{B}} \underline{C}) =
+L\pi_{1, !}(\Omega_1) \otimes_B^\mathbf{L} C =
+L_{B/A} \otimes_B^\mathbf{L} C
+$$
+The first equality by Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-change-of-rings}
+(and flatness of $\Omega_1$ as a sheaf of modules over $\underline{B}$)
+and the second by Lemma \ref{lemma-compute-cotangent-complex}.
+\end{proof}
+
+\begin{remark}
+\label{remark-triangle}
+We sketch an alternative, perhaps simpler, proof of the existence of
+the fundamental triangle.
+Let $A \to B \to C$ be ring maps and assume that $B \to C$ is injective.
+Let $P_\bullet \to B$ be the standard resolution of $B$ over $A$ and
+let $Q_\bullet \to C$ be the standard resolution of $C$ over $B$.
+Picture
+$$
+\xymatrix{
+P_\bullet : &
+A[A[A[B]]] \ar[d]
+\ar@<2ex>[r]
+\ar@<0ex>[r]
+\ar@<-2ex>[r]
+&
+A[A[B]] \ar[d]
+\ar@<1ex>[r]
+\ar@<-1ex>[r]
+\ar@<1ex>[l]
+\ar@<-1ex>[l]
+&
+A[B] \ar[d] \ar@<0ex>[l] \ar[r] &
+B \\
+Q_\bullet : &
+A[A[A[C]]]
+\ar@<2ex>[r]
+\ar@<0ex>[r]
+\ar@<-2ex>[r]
+&
+A[A[C]]
+\ar@<1ex>[r]
+\ar@<-1ex>[r]
+\ar@<1ex>[l]
+\ar@<-1ex>[l]
+&
+A[C] \ar@<0ex>[l] \ar[r] &
+C
+}
+$$
+Observe that since $B \to C$ is injective, the ring $Q_n$ is a
+polynomial algebra over $P_n$ for all $n$. Hence we obtain a cosimplicial
+object in $\mathcal{C}_{C/B/A}$ (beware reversal of arrows).
+Now set $\overline{Q}_\bullet = Q_\bullet \otimes_{P_\bullet} B$.
+The key to the proof of Proposition \ref{proposition-triangle}
+is to show that $\overline{Q}_\bullet$ is a resolution of $C$ over $B$.
+This follows from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-O-homology-qis}
+applied to $\mathcal{C} = \Delta$, $\mathcal{O} = P_\bullet$,
+$\mathcal{O}' = B$, and $\mathcal{F} = Q_\bullet$ (this uses that $Q_n$
+is flat over $P_n$; see Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-simplicial-modules} to relate simplicial modules
+to sheaves). The key fact implies that the distinguished triangle of
+Proposition \ref{proposition-triangle}
+is the distinguished triangle associated to the short exact sequence
+of simplicial $C$-modules
+$$
+0 \to
+\Omega_{P_\bullet/A} \otimes_{P_\bullet} C \to
+\Omega_{Q_\bullet/A} \otimes_{Q_\bullet} C \to
+\Omega_{\overline{Q}_\bullet/B} \otimes_{\overline{Q}_\bullet} C \to 0
+$$
+which is deduced from the short exact sequences
+$0 \to \Omega_{P_n/A} \otimes_{P_n} Q_n \to \Omega_{Q_n/A} \to
+\Omega_{Q_n/P_n} \to 0$ of
+Algebra, Lemma \ref{algebra-lemma-ses-formally-smooth}.
+Namely, by Remark \ref{remark-resolution} and the key fact the complex on the
+right hand side represents $L_{C/B}$ in $D(C)$.
+
+\medskip\noindent
+If $B \to C$ is not injective, then we can use the above to get a
+fundamental triangle for $A \to B \to B \times C$. Since
+$L_{B \times C/B} \to L_{B/B} \oplus L_{C/B}$ and
+$L_{B \times C/A} \to L_{B/A} \oplus L_{C/A}$
+are quasi-isomorphisms in $D(B \times C)$
+(Lemma \ref{lemma-cotangent-complex-product})
+this induces the desired distinguished triangle in $D(C)$
+by tensoring with the flat ring map $B \times C \to C$.
+\end{remark}
+
+\begin{remark}
+\label{remark-explicit-map}
+Let $A \to B \to C$ be ring maps with $B \to C$ injective.
+Recall the notation $P_\bullet$, $Q_\bullet$, $\overline{Q}_\bullet$ of
+Remark \ref{remark-triangle}.
+Let $R_\bullet$ be the standard resolution of $C$ over $B$.
+In this remark we explain how to get the canonical identification
+of $\Omega_{\overline{Q}_\bullet/B} \otimes_{\overline{Q}_\bullet} C$
+with $L_{C/B} = \Omega_{R_\bullet/B} \otimes_{R_\bullet} C$.
+Let $S_\bullet \to B$ be the standard resolution of $B$ over $B$.
+Note that the functoriality map $S_\bullet \to R_\bullet$ identifies
+$R_n$ as a polynomial algebra over $S_n$ because $B \to C$ is injective.
+For example in degree $0$ we have the map $B[B] \to B[C]$, in degree
+$1$ the map $B[B[B]] \to B[B[C]]$, and so on. Thus
+$\overline{R}_\bullet = R_\bullet \otimes_{S_\bullet} B$
+is a simplicial polynomial algebra
+over $B$ as well and it follows (as in Remark \ref{remark-triangle}) from
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-O-homology-qis}
+that $\overline{R}_\bullet \to C$ is a resolution. Since we have
+a commutative diagram
+$$
+\xymatrix{
+Q_\bullet \ar[r] & R_\bullet \\
+P_\bullet \ar[u] \ar[r] & S_\bullet \ar[u] \ar[r] & B
+}
+$$
+we obtain a canonical map
+$\overline{Q}_\bullet = Q_\bullet \otimes_{P_\bullet} B \to
+\overline{R}_\bullet$. Thus the maps
+$$
+L_{C/B} = \Omega_{R_\bullet/B} \otimes_{R_\bullet} C
+\longrightarrow
+\Omega_{\overline{R}_\bullet/B} \otimes_{\overline{R}_\bullet} C
+\longleftarrow
+\Omega_{\overline{Q}_\bullet/B} \otimes_{\overline{Q}_\bullet} C
+$$
+are quasi-isomorphisms (Remark \ref{remark-resolution}) and composing
+one with the inverse of the other gives the desired identification.
+\end{remark}
+
+
+
+
+
+\section{Localization and \'etale ring maps}
+\label{section-localization}
+
+\noindent
+In this section we study what happens if we localize our rings.
+Let $A \to A' \to B$ be ring maps such that $B = B \otimes_A^\mathbf{L} A'$.
+This happens for example if $A' = S^{-1}A$ is the localization of $A$
+at a multiplicative subset $S \subset A$. In this
+case for an abelian sheaf $\mathcal{F}'$ on $\mathcal{C}_{B/A'}$
+the homology of $g^{-1}\mathcal{F}'$ over $\mathcal{C}_{B/A}$ agrees with
+the homology of $\mathcal{F}'$ over $\mathcal{C}_{B/A'}$, see
+Lemma \ref{lemma-flat-base-change} for a precise statement.
+
+\begin{lemma}
+\label{lemma-localize-at-bottom}
+Let $A \to A' \to B$ be ring maps such that $B = B \otimes_A^\mathbf{L} A'$.
+Then $L_{B/A} = L_{B/A'}$ in $D(B)$.
+\end{lemma}
+
+\begin{proof}
+According to the discussion above (i.e., using
+Lemma \ref{lemma-flat-base-change})
+and Lemma \ref{lemma-compute-cotangent-complex}
+we have to show that the sheaf given
+by the rule $(P \to B) \mapsto \Omega_{P/A} \otimes_P B$ on $\mathcal{C}_{B/A}$
+is the pullback of the sheaf given by the rule
+$(P \to B) \mapsto \Omega_{P/A'} \otimes_P B$.
+The pullback functor $g^{-1}$ is given by precomposing with the
+functor $u : \mathcal{C}_{B/A} \to \mathcal{C}_{B/A'}$,
+$(P \to B) \mapsto (P \otimes_A A' \to B)$.
+Thus we have to show that
+$$
+\Omega_{P/A} \otimes_P B =
+\Omega_{P \otimes_A A'/A'} \otimes_{(P \otimes_A A')} B
+$$
+By Algebra, Lemma \ref{algebra-lemma-differentials-base-change}
+the right hand side is equal to
+$$
+(\Omega_{P/A} \otimes_A A') \otimes_{(P \otimes_A A')} B
+$$
+Since $P$ is a polynomial algebra over $A$ the module
+$\Omega_{P/A}$ is free and the equality is obvious.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-diagonal}
+Let $A \to B$ be a ring map such that $B = B \otimes_A^\mathbf{L} B$.
+Then $L_{B/A} = 0$ in $D(B)$.
+\end{lemma}
+
+\begin{proof}
+This is true because $L_{B/A} = L_{B/B} = 0$ by
+Lemmas \ref{lemma-localize-at-bottom} and
+\ref{lemma-cotangent-complex-polynomial-algebra}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bootstrap}
+Let $A \to B$ be a ring map such that $\text{Tor}^A_i(B, B) = 0$ for $i > 0$
+and such that $L_{B/B \otimes_A B} = 0$.
+Then $L_{B/A} = 0$ in $D(B)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-flat-base-change-cotangent-complex} we see that
+$L_{B/A} \otimes_B^\mathbf{L} (B \otimes_A B) = L_{B \otimes_A B/B}$.
+Now we use the distinguished triangle (\ref{equation-triangle})
+$$
+L_{B \otimes_A B/B} \otimes^\mathbf{L}_{(B \otimes_A B)} B \to
+L_{B/B} \to L_{B/B \otimes_A B} \to
+L_{B \otimes_A B/B} \otimes^\mathbf{L}_{(B \otimes_A B)} B[1]
+$$
+associated to the ring maps $B \to B \otimes_A B \to B$ and the vanishing of
+$L_{B/B}$ (Lemma \ref{lemma-cotangent-complex-polynomial-algebra}) and
+$L_{B/B \otimes_A B}$ (assumed) to see that
+$$
+0 =
+L_{B \otimes_A B/B} \otimes^\mathbf{L}_{(B \otimes_A B)} B =
+L_{B/A} \otimes_B^\mathbf{L} (B \otimes_A B)
+\otimes^\mathbf{L}_{(B \otimes_A B)} B = L_{B/A}
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-zero}
+The cotangent complex $L_{B/A}$ is zero in each of the following cases:
+\begin{enumerate}
+\item $A \to B$ and $B \otimes_A B \to B$ are flat, i.e., $A \to B$
+is weakly \'etale
+(More on Algebra, Definition \ref{more-algebra-definition-weakly-etale}),
+\item $A \to B$ is a flat epimorphism of rings,
+\item $B = S^{-1}A$ for some multiplicative subset $S \subset A$,
+\item $A \to B$ is unramified and flat,
+\item $A \to B$ is \'etale,
+\item $A \to B$ is a filtered colimit of ring maps for which
+the cotangent complex vanishes,
+\item $B$ is a henselization of a local ring of $A$,
+\item $B$ is a strict henselization of a local ring of $A$, and
+\item add more here.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In case (1) we may apply
+Lemma \ref{lemma-derived-diagonal}
+to the surjective flat ring map $B \otimes_A B \to B$
+to conclude that $L_{B/B \otimes_A B} = 0$ and then we use
+Lemma \ref{lemma-bootstrap}
+to conclude. The cases (2) -- (5) are each special cases of (1).
+Part (6) follows from Lemma \ref{lemma-colimit-cotangent-complex}.
+Parts (7) and (8) follow from the fact that (strict) henselizations
+are filtered colimits of \'etale ring extensions of $A$, see
+Algebra, Lemmas \ref{algebra-lemma-henselization-different} and
+\ref{algebra-lemma-strict-henselization-different}.
+\end{proof}
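+
+\noindent
+To illustrate case (3): if $B = S^{-1}A$, then
+$B \otimes_A B = S^{-1}A \otimes_A S^{-1}A = S^{-1}A = B$, so
+$B \otimes_A B \to B$ is an isomorphism and the flatness
+hypotheses of (1) hold trivially.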
+
+\begin{lemma}
+\label{lemma-localize-on-top}
+Let $A \to B \to C$ be ring maps such that $L_{C/B} = 0$.
+Then $L_{C/A} = L_{B/A} \otimes_B^\mathbf{L} C$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the distinguished triangle
+(\ref{equation-triangle}): its third term $L_{C/B}$ vanishes, hence
+the map $L_{B/A} \otimes_B^\mathbf{L} C \to L_{C/A}$ is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize}
+Let $A \to B$ be ring maps and $S \subset A$, $T \subset B$ multiplicative
+subsets such that $S$ maps into $T$.
+Then $L_{T^{-1}B/S^{-1}A} = L_{B/A} \otimes_B T^{-1}B$
+in $D(T^{-1}B)$.
+\end{lemma}
+
+\begin{proof}
+Lemma \ref{lemma-localize-on-top} shows that
+$L_{T^{-1}B/A} = L_{B/A} \otimes_B T^{-1}B$
+and Lemma \ref{lemma-localize-at-bottom}
+shows that $L_{T^{-1}B/A} = L_{T^{-1}B/S^{-1}A}$.
+\end{proof}
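+
+\noindent
+For example, taking $S = \{1\}$ and $T = \{1, g, g^2, \ldots\}$
+for some $g \in B$ the lemma gives
+$L_{B_g/A} = L_{B/A} \otimes_B B_g = (L_{B/A})_g$ in $D(B_g)$.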
+
+\begin{lemma}
+\label{lemma-cotangent-complex-henselization}
+Let $A \to B$ be a local ring homomorphism of local rings.
+Let $A^h \to B^h$, resp.\ $A^{sh} \to B^{sh}$ be the induced
+maps of henselizations, resp.\ strict henselizations.
+Then
+$$
+L_{B^h/A^h} = L_{B^h/A} = L_{B/A} \otimes_B^\mathbf{L} B^h
+\quad\text{resp.}\quad
+L_{B^{sh}/A^{sh}} = L_{B^{sh}/A} = L_{B/A} \otimes_B^\mathbf{L} B^{sh}
+$$
+in $D(B^h)$, resp.\ $D(B^{sh})$.
+\end{lemma}
+
+\begin{proof}
+The complexes $L_{A^h/A}$, $L_{A^{sh}/A}$, $L_{B^h/B}$, and
+$L_{B^{sh}/B}$ are all zero by Lemma \ref{lemma-when-zero}.
+Using the fundamental distinguished triangle (\ref{equation-triangle})
+for $A \to B \to B^h$ we obtain
+$L_{B^h/A} = L_{B/A} \otimes_B^\mathbf{L} B^h$.
+Using the fundamental triangle for $A \to A^h \to B^h$
+we obtain $L_{B^h/A^h} = L_{B^h/A}$.
+Similarly for strict henselizations.
+\end{proof}
+
+
+
+
+\section{Smooth ring maps}
+\label{section-smooth}
+
+\noindent
+Let $C \to B$ be a surjection of rings with kernel $I$. Let us call such
+a ring map ``weakly quasi-regular'' if $I/I^2$ is a flat $B$-module and
+$\text{Tor}_*^C(B, B)$ is the exterior algebra on $I/I^2$.
+The generalization to ``smooth ring maps'' of what is done in
+Lemma \ref{lemma-when-zero} for ``\'etale ring maps'' is to look
+at flat ring maps $A \to B$ such that the multiplication map
+$B \otimes_A B \to B$ is weakly quasi-regular. For the moment we just stick to
+smooth ring maps.
+
+\begin{lemma}
+\label{lemma-when-projective}
+If $A \to B$ is a smooth ring map, then $L_{B/A} = \Omega_{B/A}[0]$.
+\end{lemma}
+
+\begin{proof}
+We have the agreement in cohomological degree $0$ by
+Lemma \ref{lemma-identify-H0}.
+Thus it suffices to prove the other cohomology groups
+are zero. This may be checked locally on $\Spec(B)$ as
+$L_{B_g/A} = (L_{B/A})_g$ for $g \in B$ by Lemma \ref{lemma-localize-on-top}.
+Thus we may assume that $A \to B$ is standard smooth
+(Algebra, Lemma \ref{algebra-lemma-smooth-syntomic}), i.e.,
+that we can factor $A \to B$ as
+$A \to A[x_1, \ldots, x_n] \to B$ with $A[x_1, \ldots, x_n] \to B$
+\'etale. In this case Lemmas \ref{lemma-when-zero} and
+\ref{lemma-localize-on-top} show that
+$L_{B/A} = L_{A[x_1, \ldots, x_n]/A} \otimes B$
+whence the conclusion by
+Lemma \ref{lemma-cotangent-complex-polynomial-algebra}.
+\end{proof}
+
+
+
+
+
+\section{Positive characteristic}
+\label{section-positive-characteristic}
+
+\noindent
+In this section we fix a prime number $p$.
+If $A$ is a ring with $p = 0$ in $A$, then $F_A : A \to A$
+denotes the Frobenius endomorphism $a \mapsto a^p$.
+
+\begin{lemma}
+\label{lemma-frobenius-homotopy}
+Let $A \to B$ be a ring map with $p = 0$ in $A$. Let $P_\bullet$ be the
+standard resolution of $B$ over $A$. The map $P_\bullet \to P_\bullet$
+induced by the diagram
+$$
+\xymatrix{
+B \ar[r]_{F_B} & B \\
+A \ar[u] \ar[r]^{F_A} & A \ar[u]
+}
+$$
+discussed in Section \ref{section-functoriality} is homotopic to the Frobenius
+endomorphism $P_\bullet \to P_\bullet$ given by Frobenius on each $P_n$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{A}$ be the category of $\mathbf{F}_p$-algebra maps
+$A \to B$. Let $\mathcal{S}$ be the category of pairs $(A, E)$
+where $A$ is an $\mathbf{F}_p$-algebra and $E$ is a set. Consider the
+adjoint functors
+$$
+V : \mathcal{A} \to \mathcal{S}, \quad (A \to B) \mapsto (A, B)
+$$
+and
+$$
+U : \mathcal{S} \to \mathcal{A}, \quad (A, E) \mapsto (A \to A[E])
+$$
+Let $X$ be the simplicial object in
+the category of functors from $\mathcal{A}$ to $\mathcal{A}$
+constructed in Simplicial, Section \ref{simplicial-section-standard}.
+It is clear that $P_\bullet = X(A \to B)$ because if we fix $A$,
+then the adjoint functors $U$ and $V$ above reduce to the adjoint
+functors used to construct the standard resolution of $B$ over $A$.
+
+\medskip\noindent
+Set $Y = U \circ V$. Recall that $X$ is constructed from $Y$
+and certain maps and has terms $X_n = Y \circ \ldots \circ Y$
+with $n + 1$ terms; the construction is given in
+Simplicial, Example \ref{simplicial-example-godement} and please see
+proof of Simplicial, Lemma \ref{simplicial-lemma-standard-simplicial}
+for details.
+
+\medskip\noindent
+Let $f : \text{id}_\mathcal{A} \to \text{id}_\mathcal{A}$
+be the Frobenius endomorphism of the identity functor.
+In other words, we set $f_{A \to B} = (F_A, F_B) : (A \to B) \to (A \to B)$.
+Then our two maps on $X(A \to B)$ are given by the natural transformations
+$f \star 1_X$ and $1_X \star f$. Details omitted.
+Thus we conclude by Simplicial, Lemma
+\ref{simplicial-lemma-godement-before-after}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-frobenius-acts-as-zero}
+Let $p$ be a prime number. Let $A \to B$ be a ring homomorphism
+and assume that $p = 0$ in $A$. The map $L_{B/A} \to L_{B/A}$
+of Section \ref{section-functoriality} induced by the
+Frobenius maps $F_A$ and $F_B$ is homotopic to zero.
+\end{lemma}
+
+\begin{proof}
+Let $P_\bullet$ be the standard resolution of $B$ over $A$.
+By Lemma \ref{lemma-frobenius-homotopy} the map $P_\bullet \to P_\bullet$
+induced by $F_A$ and $F_B$ is homotopic to the map
+$F_{P_\bullet} : P_\bullet \to P_\bullet$ given by
+Frobenius on each term. Hence we obtain what we want as clearly
+$F_{P_\bullet}$ induces the zero map $\Omega_{P_n/A} \to \Omega_{P_n/A}$
+(since the derivative of a $p$th power is zero).
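+Indeed, for $f \in P_n$ we have
+$$
+\text{d}(F_{P_n}(f)) = \text{d}(f^p) = p f^{p - 1} \text{d}f = 0
+$$
+in $\Omega_{P_n/A}$ as $p = 0$ in $P_n$.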
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-zero}
+Let $p$ be a prime number. Let $A \to B$ be a ring homomorphism
+and assume that $p = 0$ in $A$. If $A$ and $B$ are perfect, then
+$L_{B/A}$ is zero in $D(B)$.
+\end{lemma}
+
+\begin{proof}
+The map $(F_A, F_B) : (A \to B) \to (A \to B)$ is an isomorphism
+hence induces an isomorphism on $L_{B/A}$ and on the other hand
+induces zero on $L_{B/A}$ by Lemma \ref{lemma-frobenius-acts-as-zero}.
+\end{proof}
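+
+\noindent
+For example, if $k$ is a perfect field of characteristic $p$, then
+$L_{k/\mathbf{F}_p} = 0$ in $D(k)$; in particular
+$\Omega_{k/\mathbf{F}_p} = H^0(L_{k/\mathbf{F}_p}) = 0$.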
+
+
+
+
+
+\section{Comparison with the naive cotangent complex}
+\label{section-surjections}
+
+\noindent
+The naive cotangent complex was introduced in
+Algebra, Section \ref{algebra-section-netherlander}.
+
+\begin{remark}
+\label{remark-make-map}
+Let $A \to B$ be a ring map. Working on $\mathcal{C}_{B/A}$ as in
+Section \ref{section-compute-L-pi-shriek} let
+$\mathcal{J} \subset \mathcal{O}$ be the kernel of
+$\mathcal{O} \to \underline{B}$. Note that $L\pi_!(\mathcal{J}) = 0$ by
+Lemma \ref{lemma-apply-O-B-comparison}. Set
+$\Omega = \Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B}$
+so that
+$L_{B/A} = L\pi_!(\Omega)$ by Lemma \ref{lemma-compute-cotangent-complex}.
+It follows that $L\pi_!(\mathcal{J} \to \Omega) = L\pi_!(\Omega) = L_{B/A}$.
+Thus, for any object $U = (P \to B)$ of $\mathcal{C}_{B/A}$ we obtain a map
+\begin{equation}
+\label{equation-comparison-map-A}
+(J \to \Omega_{P/A} \otimes_P B) \longrightarrow L_{B/A}
+\end{equation}
+where $J = \Ker(P \to B)$ in $D(A)$, see
+Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-map-evaluation-to-derived}.
+Continuing in this manner, note that
+$L\pi_!(\mathcal{J} \otimes_\mathcal{O}^\mathbf{L} \underline{B}) =
+L\pi_!(\mathcal{J}) = 0$ by
+Lemma \ref{lemma-O-homology-B-homology}.
+Since $\text{Tor}_0^\mathcal{O}(\mathcal{J}, \underline{B}) =
+\mathcal{J}/\mathcal{J}^2$
+the spectral sequence
+$$
+H_p(\mathcal{C}_{B/A}, \text{Tor}_q^\mathcal{O}(\mathcal{J}, \underline{B}))
+\Rightarrow
+H_{p + q}(\mathcal{C}_{B/A},
+\mathcal{J} \otimes_\mathcal{O}^\mathbf{L} \underline{B}) = 0
+$$
+(dual of
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor})
+implies that
+$H_0(\mathcal{C}_{B/A}, \mathcal{J}/\mathcal{J}^2) = 0$
+and $H_1(\mathcal{C}_{B/A}, \mathcal{J}/\mathcal{J}^2) = 0$.
+It follows that the complex of $\underline{B}$-modules
+$\mathcal{J}/\mathcal{J}^2 \to \Omega$ satisfies
+$\tau_{\geq -1}L\pi_!(\mathcal{J}/\mathcal{J}^2 \to \Omega) =
+\tau_{\geq -1}L_{B/A}$.
+Thus, for any object $U = (P \to B)$ of $\mathcal{C}_{B/A}$ we obtain a map
+\begin{equation}
+\label{equation-comparison-map}
+(J/J^2 \to \Omega_{P/A} \otimes_P B) \longrightarrow \tau_{\geq -1}L_{B/A}
+\end{equation}
+in $D(B)$, see
+Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-map-evaluation-to-derived}.
+\end{remark}
+
+\noindent
+The first case is where we have a surjection of rings.
+
+\begin{lemma}
+\label{lemma-surjection}
+\begin{slogan}
+The cohomology of the cotangent complex of a surjective ring map is trivial in
+degree zero; it is the kernel modulo its square in degree $-1$.
+\end{slogan}
+Let $A \to B$ be a surjective ring map with kernel $I$.
+Then $H^0(L_{B/A}) = 0$ and $H^{-1}(L_{B/A}) = I/I^2$.
+This isomorphism comes from the map (\ref{equation-comparison-map})
+for the object $(A \to B)$ of $\mathcal{C}_{B/A}$.
+\end{lemma}
+
+\begin{proof}
+We will show below (using the surjectivity of $A \to B$)
+that there exists a short exact sequence
+$$
+0 \to \pi^{-1}(I/I^2) \to \mathcal{J}/\mathcal{J}^2 \to \Omega \to 0
+$$
+of sheaves on $\mathcal{C}_{B/A}$. Taking $L\pi_!$ and
+the associated long exact sequence of homology, and using the
+vanishing of $H_1(\mathcal{C}_{B/A}, \mathcal{J}/\mathcal{J}^2)$ and
+$H_0(\mathcal{C}_{B/A}, \mathcal{J}/\mathcal{J}^2)$
+shown in Remark \ref{remark-make-map} we obtain what we want using
+Lemma \ref{lemma-pi-lower-shriek-constant-sheaf}.
+
+\medskip\noindent
+What is left is to verify the local statement mentioned above.
+For every object $U = (P \to B)$ of $\mathcal{C}_{B/A}$
+we can choose an isomorphism $P = A[E]$ such that the map
+$P \to B$ maps each $e \in E$ to zero. Then
+$J = \mathcal{J}(U) \subset P = \mathcal{O}(U)$
+is equal to $J = IP + (e; e \in E)$. The value on $U$ of the short exact
+sequence of sheaves above is the sequence
+$$
+0 \to I/I^2 \to J/J^2 \to \Omega_{P/A} \otimes_P B \to 0
+$$
+Verification omitted (hint: the only tricky point is that
+$IP \cap J^2 = IJ$, which follows for example from
+More on Algebra, Lemma \ref{more-algebra-lemma-conormal-sequence-H1-regular}).
+\end{proof}
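+
+\noindent
+For example, if $f \in A$ is a nonzerodivisor and $B = A/fA$, then
+$H^{-1}(L_{B/A}) = (f)/(f^2)$ is a free $B$-module of rank $1$
+generated by the class of $f$.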
+
+\begin{lemma}
+\label{lemma-relation-with-naive-cotangent-complex}
+Let $A \to B$ be a ring map. Then $\tau_{\geq -1}L_{B/A}$
+is canonically quasi-isomorphic to the naive cotangent complex.
+\end{lemma}
+
+\begin{proof}
+Consider $P = A[B] \to B$ with kernel $I$. The naive cotangent
+complex $\NL_{B/A}$ of $B$ over $A$ is the complex
+$I/I^2 \to \Omega_{P/A} \otimes_P B$,
+see Algebra, Definition \ref{algebra-definition-naive-cotangent-complex}.
+Observe that in (\ref{equation-comparison-map}) we have already
+constructed a canonical map
+$$
+c : \NL_{B/A} \longrightarrow \tau_{\geq -1}L_{B/A}
+$$
+Consider the distinguished triangle (\ref{equation-triangle})
+$$
+L_{P/A} \otimes_P^\mathbf{L} B \to L_{B/A} \to L_{B/P} \to
+(L_{P/A} \otimes_P^\mathbf{L} B)[1]
+$$
+associated to the ring maps $A \to A[B] \to B$. We know that
+$L_{P/A} = \Omega_{P/A}[0] = \NL_{P/A}$ in $D(P)$
+(Lemma \ref{lemma-cotangent-complex-polynomial-algebra}
+and
+Algebra, Lemma \ref{algebra-lemma-NL-polynomial-algebra})
+and that
+$\tau_{\geq -1}L_{B/P} = I/I^2[1] = \NL_{B/P}$ in $D(B)$
+(Lemma \ref{lemma-surjection} and
+Algebra, Lemma \ref{algebra-lemma-NL-surjection}).
+To show $c$ is a quasi-isomorphism it suffices by
+Algebra, Lemma \ref{algebra-lemma-exact-sequence-NL}
+and the long exact cohomology sequence associated to the
+distinguished triangle
+to show that the maps $L_{P/A} \to L_{B/A} \to L_{B/P}$ are compatible
+on cohomology groups with the corresponding maps
+$\NL_{P/A} \to \NL_{B/A} \to \NL_{B/P}$
+of the naive cotangent complex. We omit the verification.
+\end{proof}
+
+\begin{remark}
+\label{remark-explicit-comparison-map}
+We can make the comparison map of
+Lemma \ref{lemma-relation-with-naive-cotangent-complex}
+explicit in the following way.
+Let $P_\bullet$ be the standard resolution of $B$
+over $A$.
+Let $I = \Ker(A[B] \to B)$.
+Recall that $P_0 = A[B]$. The map of the
+lemma is given by the commutative diagram
+$$
+\xymatrix{
+L_{B/A} \ar[d] & \ldots \ar[r] &
+\Omega_{P_2/A} \otimes_{P_2} B
+\ar[r] \ar[d] &
+\Omega_{P_1/A} \otimes_{P_1} B
+\ar[r] \ar[d] &
+\Omega_{P_0/A} \otimes_{P_0} B
+\ar[d] \\
+\NL_{B/A} & \ldots \ar[r] &
+0 \ar[r] &
+I/I^2 \ar[r] &
+\Omega_{P_0/A} \otimes_{P_0} B
+}
+$$
+We construct the downward arrow with target $I/I^2$
+by sending $\text{d}f \otimes b$ to the class of
+$(d_0(f) - d_1(f))b$ in $I/I^2$. Here $d_i : P_1 \to P_0$,
+$i = 0, 1$ are the two face maps of the simplicial structure.
+This makes sense as $d_0 - d_1$ maps $P_1$ into $I = \Ker(P_0 \to B)$.
+We omit the verification that this rule is well defined.
+Our map is compatible with the differential
+$\Omega_{P_1/A} \otimes_{P_1} B \to \Omega_{P_0/A} \otimes_{P_0} B$
+as this differential maps $\text{d}f \otimes b$ to
+$\text{d}(d_0(f) - d_1(f)) \otimes b$. Moreover, the differential
+$\Omega_{P_2/A} \otimes_{P_2} B \to \Omega_{P_1/A} \otimes_{P_1} B$
+maps $\text{d}f \otimes b$ to $\text{d}(d_0(f) - d_1(f) + d_2(f)) \otimes b$
+which are annihilated by our downward arrow. Hence a map of complexes.
+We omit the verification that this is the same as the map of
+Lemma \ref{lemma-relation-with-naive-cotangent-complex}.
+\end{remark}
+
+\begin{remark}
+\label{remark-surjection}
+Adopt notation as in Remark \ref{remark-make-map}. The arguments given
+there show that the differential
+$$
+H_2(\mathcal{C}_{B/A}, \mathcal{J}/\mathcal{J}^2)
+\longrightarrow
+H_0(\mathcal{C}_{B/A}, \text{Tor}_1^\mathcal{O}(\mathcal{J}, \underline{B}))
+$$
+of the spectral sequence is an isomorphism. Let $\mathcal{C}'_{B/A}$
+denote the full subcategory of $\mathcal{C}_{B/A}$ consisting of surjective
+maps $P \to B$. The agreement of the cotangent complex with the naive
+cotangent complex (Lemma \ref{lemma-relation-with-naive-cotangent-complex})
+shows that we have an exact sequence of sheaves
+$$
+0 \to \underline{H_1(L_{B/A})} \to
+\mathcal{J}/\mathcal{J}^2 \xrightarrow{\text{d}} \Omega \to
+\underline{H_0(L_{B/A})} \to 0
+$$
+on $\mathcal{C}'_{B/A}$. It follows that $\Ker(d)$ and
+$\Coker(d)$ on the whole category $\mathcal{C}_{B/A}$ have
+vanishing higher homology groups, since
+these are computed by the homology groups of constant simplicial abelian
+groups by Lemma \ref{lemma-identify-pi-shriek}. Hence we conclude
+that
+$$
+H_n(\mathcal{C}_{B/A}, \mathcal{J}/\mathcal{J}^2) \to H_n(L_{B/A})
+$$
+is an isomorphism for all $n \geq 2$. Combined with the remark above
+we obtain the formula
+$H_2(L_{B/A}) =
+H_0(\mathcal{C}_{B/A}, \text{Tor}_1^\mathcal{O}(\mathcal{J}, \underline{B}))$.
+\end{remark}
+
+
+
+
+
+\section{A spectral sequence of Quillen}
+\label{section-spectral-sequence}
+
+\noindent
+In this section we discuss a spectral sequence relating derived
+tensor product to the cotangent complex.
+
+\begin{lemma}
+\label{lemma-vanishing-symmetric-powers}
+Notation and assumptions as in
+Cohomology on Sites, Example \ref{sites-cohomology-example-category-to-point}.
+Assume $\mathcal{C}$ has a cosimplicial object as in
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-by-cosimplicial-resolution}.
+Let $\mathcal{F}$ be a flat $\underline{B}$-module such that
+$H_0(\mathcal{C}, \mathcal{F}) = 0$.
+Then $H_l(\mathcal{C}, \text{Sym}_{\underline{B}}^k(\mathcal{F})) = 0$
+for $l < k$.
+\end{lemma}
+
+\begin{proof}
+We drop the subscript ${}_{\underline{B}}$ from tensor products, wedge powers,
+and symmetric powers. We will prove the lemma by induction on $k$.
+The cases $k = 0, 1$ follow from the assumptions. If $k > 1$ consider
+the exact complex
+$$
+\ldots \to
+\wedge^2\mathcal{F} \otimes \text{Sym}^{k - 2}\mathcal{F} \to
+\mathcal{F} \otimes \text{Sym}^{k - 1}\mathcal{F} \to
+\text{Sym}^k\mathcal{F} \to 0
+$$
+with differentials as in the Koszul complex. If we think of this as a
+resolution of $\text{Sym}^k\mathcal{F}$, then this gives a first quadrant
+spectral sequence
+$$
+E_1^{p, q} =
+H_p(\mathcal{C},
+\wedge^{q + 1}\mathcal{F} \otimes \text{Sym}^{k - q - 1}\mathcal{F})
+\Rightarrow
+H_{p + q}(\mathcal{C}, \text{Sym}^k(\mathcal{F}))
+$$
+By Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-eilenberg-zilber}
+we have
+$$
+L\pi_!(\wedge^{q + 1}\mathcal{F} \otimes \text{Sym}^{k - q - 1}\mathcal{F}) =
+L\pi_!(\wedge^{q + 1}\mathcal{F}) \otimes_B^\mathbf{L}
+L\pi_!(\text{Sym}^{k - q - 1}\mathcal{F})
+$$
+It follows (from the construction of derived tensor products) that
+the induction hypothesis combined with the vanishing
+$H_0(\mathcal{C}, \wedge^{q + 1}(\mathcal{F})) = 0$ will prove what we want.
+This is true because $\wedge^{q + 1}(\mathcal{F})$ is a quotient
+of $\mathcal{F}^{\otimes q + 1}$ and
+$H_0(\mathcal{C}, \mathcal{F}^{\otimes q + 1})$
+is a quotient of $H_0(\mathcal{C}, \mathcal{F})^{\otimes q + 1}$
+which is zero.
+\end{proof}
+
+\begin{remark}
+\label{remark-first-homology-symmetric-power}
+In the situation of Lemma \ref{lemma-vanishing-symmetric-powers}
+one can show that
+$H_k(\mathcal{C}, \text{Sym}^k(\mathcal{F})) =
+\wedge^k_B(H_1(\mathcal{C}, \mathcal{F}))$.
+Namely, it can be deduced from the proof that
+$H_k(\mathcal{C}, \text{Sym}^k(\mathcal{F}))$ is the $S_k$-coinvariants
+of
+$$
+H^{-k}(L\pi_!(\mathcal{F}) \otimes_B^\mathbf{L}
+L\pi_!(\mathcal{F}) \otimes_B^\mathbf{L}
+\ldots \otimes_B^\mathbf{L} L\pi_!(\mathcal{F})) =
+H_1(\mathcal{C}, \mathcal{F})^{\otimes k}
+$$
+Thus our claim is that this action is given by the usual action
+of $S_k$ on the tensor product multiplied by the sign character.
+To prove this one has to work through the sign conventions
+in the definition of the total complex associated to a
+multi-complex. We omit the verification.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-map-tors-zero}
+Let $A$ be a ring. Let $P = A[E]$ be a polynomial ring.
+Set $I = (e; e \in E) \subset P$. The maps
+$\text{Tor}_i^P(A, I^{n + 1}) \to \text{Tor}_i^P(A, I^n)$
+are zero for all $i$ and $n$.
+\end{lemma}
+
+\begin{proof}
+Denote $x_e \in P$ the variable corresponding to $e \in E$.
+A free resolution of $A$ over $P$ is given by the Koszul complex
+$K_\bullet$ on the $x_e$. Here $K_i$ has basis given by wedges
+$e_1 \wedge \ldots \wedge e_i$, $e_1, \ldots, e_i \in E$ and $d(e) = x_e$.
+Thus $K_\bullet \otimes_P I^n = I^nK_\bullet$ computes
+$\text{Tor}_i^P(A, I^n)$. Observe that everything is graded
+with $\deg(x_e) = 1$, $\deg(e) = 1$, and $\deg(a) = 0$ for $a \in A$.
+Suppose $\xi \in I^{n + 1}K_i$ is a cycle homogeneous of degree $m$.
+Note that $m \geq i + 1 + n$. Then $\xi = \text{d}\eta$ for some
+$\eta \in K_{i + 1}$ as $K_\bullet$ is exact in degrees $ > 0$.
+(The case $i = 0$ is left to the reader.)
+Now $\deg(\eta) = m \geq i + 1 + n$. Hence writing $\eta$
+in terms of the basis we see its coordinates are homogeneous of
+degree $m - (i + 1) \geq n$ and therefore lie in $I^n$.
+Thus $\xi$ maps to zero in the homology of $I^nK_\bullet$ as desired.
+\end{proof}
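To see the mechanism of the proof in the smallest case, here is the one-variable instance written out (an illustration added here, not part of the original argument):

```latex
% One variable: E = \{e\}, P = A[x], I = (x). Since I^n = x^nP is a
% free P-module, \text{Tor}_i^P(A, I^n) = 0 for i > 0, while in
% degree 0 the map
$$
\text{Tor}_0^P(A, I^{n + 1}) = I^{n + 1}/I^{n + 2}
\longrightarrow
I^n/I^{n + 1} = \text{Tor}_0^P(A, I^n)
$$
% sends the class of x^{n + 1} to the class of
% x^{n + 1} \in I \cdot I^n, which is zero in I^n/I^{n + 1}.
```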
+
+\begin{theorem}[Quillen spectral sequence]
+\label{theorem-quillen-spectral-sequence}
+Let $A \to B$ be a surjective ring map. Consider the sheaf
+$\Omega = \Omega_{\mathcal{O}/A} \otimes_\mathcal{O} \underline{B}$ of
+$\underline{B}$-modules on $\mathcal{C}_{B/A}$, see
+Section \ref{section-compute-L-pi-shriek}.
+Then there is a spectral sequence with $E_1$-page
+$$
+E_1^{p, q} =
+H_{- p - q}(\mathcal{C}_{B/A}, \text{Sym}^p_{\underline{B}}(\Omega))
+\Rightarrow \text{Tor}^A_{- p - q}(B, B)
+$$
+with $d_r$ of bidegree $(r, -r + 1)$.
+Moreover, $H_i(\mathcal{C}_{B/A}, \text{Sym}^k_{\underline{B}}(\Omega)) = 0$
+for $i < k$.
+\end{theorem}
+
+\begin{proof}
+Let $I \subset A$ be the kernel of $A \to B$. Let
+$\mathcal{J} \subset \mathcal{O}$
+be the kernel of $\mathcal{O} \to \underline{B}$. Then
+$I\mathcal{O} \subset \mathcal{J}$. Set
+$\mathcal{K} = \mathcal{J}/I\mathcal{O}$ and
+$\overline{\mathcal{O}} = \mathcal{O}/I\mathcal{O}$.
+
+\medskip\noindent
+For every object $U = (P \to B)$ of $\mathcal{C}_{B/A}$
+we can choose an isomorphism $P = A[E]$ such that the map
+$P \to B$ maps each $e \in E$ to zero. Then
+$J = \mathcal{J}(U) \subset P = \mathcal{O}(U)$
+is equal to $J = IP + (e; e \in E)$. Moreover
+$\overline{\mathcal{O}}(U) = B[E]$ and $K = \mathcal{K}(U) = (e; e \in E)$
+is the ideal generated by the variables in the polynomial ring $B[E]$.
+In particular it is clear that
+$$
+K/K^2 \xrightarrow{\text{d}} \Omega_{P/A} \otimes_P B
+$$
+is a bijection. In other words, $\Omega = \mathcal{K}/\mathcal{K}^2$
+and $\text{Sym}_B^k(\Omega) = \mathcal{K}^k/\mathcal{K}^{k + 1}$.
+Note that $\pi_!(\Omega) = \Omega_{B/A} = 0$ (Lemma \ref{lemma-identify-H0})
+as $A \to B$ is surjective
+(Algebra, Lemma \ref{algebra-lemma-trivial-differential-surjective}).
+By Lemma \ref{lemma-vanishing-symmetric-powers} we conclude that
+$$
+H_i(\mathcal{C}_{B/A}, \mathcal{K}^k/\mathcal{K}^{k + 1}) =
+H_i(\mathcal{C}_{B/A}, \text{Sym}^k_{\underline{B}}(\Omega)) = 0
+$$
+for $i < k$. This proves the final statement of the theorem.
+
+\medskip\noindent
+The approach to the theorem is to note that
+$$
+B \otimes_A^\mathbf{L} B = L\pi_!(\mathcal{O}) \otimes_A^\mathbf{L} B =
+L\pi_!(\mathcal{O} \otimes_{\underline{A}}^\mathbf{L} \underline{B}) =
+L\pi_!(\overline{\mathcal{O}})
+$$
+The first equality by Lemma \ref{lemma-apply-O-B-comparison},
+the second equality by
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-change-of-rings}, and
+the third equality as $\mathcal{O}$ is flat over $\underline{A}$.
+The sheaf $\overline{\mathcal{O}}$ has a filtration
+$$
+\ldots \subset
+\mathcal{K}^3 \subset
+\mathcal{K}^2 \subset
+\mathcal{K} \subset
+\overline{\mathcal{O}}
+$$
+This induces a filtration $F$ on a complex $C$ representing
+$L\pi_!(\overline{\mathcal{O}})$ with $F^pC$ representing
+$L\pi_!(\mathcal{K}^p)$ (construction of $C$ and $F$ omitted).
+Consider the spectral sequence of
+Homology, Section \ref{homology-section-filtered-complex}
+associated to $(C, F)$. It has $E_1$-page
+$$
+E_1^{p, q} = H_{- p - q}(\mathcal{C}_{B/A}, \mathcal{K}^p/\mathcal{K}^{p + 1})
+\quad\Rightarrow\quad
+H_{- p - q}(\mathcal{C}_{B/A}, \overline{\mathcal{O}}) =
+\text{Tor}_{- p - q}^A(B, B)
+$$
+and differentials $E_r^{p, q} \to E_r^{p + r, q - r + 1}$. To show convergence
+we will show that for every $k$ there exists a $c$ such that
+$H_i(\mathcal{C}_{B/A}, \mathcal{K}^n) = 0$
+for $i < k$ and $n > c$\footnote{A posteriori
+the ``correct'' vanishing $H_i(\mathcal{C}_{B/A}, \mathcal{K}^n) = 0$ for
+$i < n$ can be concluded.}.
+
+\medskip\noindent
+Given $k \geq 0$ set $c = k^2$. We claim that
+$$
+H_i(\mathcal{C}_{B/A}, \mathcal{K}^{n + c}) \to
+H_i(\mathcal{C}_{B/A}, \mathcal{K}^n)
+$$
+is zero for $i < k$ and all $n \geq 0$. Note that
+$\mathcal{K}^n/\mathcal{K}^{n + c}$ has a finite filtration whose successive
+quotients $\mathcal{K}^m/\mathcal{K}^{m + 1}$, $n \leq m < n + c$
+have $H_i(\mathcal{C}_{B/A}, \mathcal{K}^m/\mathcal{K}^{m + 1}) = 0$
+for $i < n$ (see above). Hence the claim implies
+$H_i(\mathcal{C}_{B/A}, \mathcal{K}^{n + c}) = 0$ for $i < k$ and all
+$n \geq k$ which is what we need to show.
+
+\medskip\noindent
+Proof of the claim. Recall that for any $\mathcal{O}$-module $\mathcal{F}$
+the map $\mathcal{F} \to \mathcal{F} \otimes_\mathcal{O}^\mathbf{L} B$
+induces an isomorphism on applying $L\pi_!$, see
+Lemma \ref{lemma-O-homology-B-homology}.
+Consider the map
+$$
+\mathcal{K}^{n + k} \otimes_\mathcal{O}^\mathbf{L} B
+\longrightarrow
+\mathcal{K}^n \otimes_\mathcal{O}^\mathbf{L} B
+$$
+We claim that this map induces the zero map on cohomology sheaves
+in degrees $0, -1, \ldots, - k + 1$. If this second claim holds, then
+the $k$-fold composition
+$$
+\mathcal{K}^{n + c} \otimes_\mathcal{O}^\mathbf{L} B
+\longrightarrow
+\mathcal{K}^n \otimes_\mathcal{O}^\mathbf{L} B
+$$
+factors through
+$\tau_{\leq -k}(\mathcal{K}^n \otimes_\mathcal{O}^\mathbf{L} B)$
+hence induces zero on $H_i(\mathcal{C}_{B/A}, -) = L_i\pi_!( - )$
+for $i < k$, see
+Derived Categories, Lemma \ref{derived-lemma-trick-vanishing-composition}.
+By the remark above this means the same thing is true for
+$H_i(\mathcal{C}_{B/A}, \mathcal{K}^{n + c}) \to
+H_i(\mathcal{C}_{B/A}, \mathcal{K}^n)$
+which proves the (first) claim.
+
+\medskip\noindent
+Proof of the second claim. The statement is local, hence we may work
+over an object $U = (P \to B)$ as above. We have to show
+the maps
+$$
+\text{Tor}_i^P(B, K^{n + k}) \to \text{Tor}_i^P(B, K^n)
+$$
+are zero for $i < k$. There is a spectral sequence
+$$
+\text{Tor}_a^P(P/IP, \text{Tor}_b^{P/IP}(B, K^n))
+\Rightarrow
+\text{Tor}_{a + b}^P(B, K^n),
+$$
+see More on Algebra, Example \ref{more-algebra-example-tor-change-rings}.
+Thus it suffices to prove the maps
+$$
+\text{Tor}_i^{P/IP}(B, K^{n + 1}) \to \text{Tor}_i^{P/IP}(B, K^n)
+$$
+are zero for all $i$. This is Lemma \ref{lemma-map-tors-zero}.
+\end{proof}
+
+\begin{remark}
+\label{remark-elucidate-ss}
+In the situation of Theorem \ref{theorem-quillen-spectral-sequence}
+let $I = \Ker(A \to B)$. Then
+$H^{-1}(L_{B/A}) = H_1(\mathcal{C}_{B/A}, \Omega) = I/I^2$, see
+Lemma \ref{lemma-surjection}.
+Hence $H_k(\mathcal{C}_{B/A}, \text{Sym}^k(\Omega)) = \wedge^k_B(I/I^2)$ by
+Remark \ref{remark-first-homology-symmetric-power}. Thus the
+$E_1$-page looks like
+$$
+\begin{matrix}
+B \\
+0 \\
+0 & I/I^2 \\
+0 & H^{-2}(L_{B/A}) \\
+0 & H^{-3}(L_{B/A}) & \wedge^2(I/I^2) \\
+0 & H^{-4}(L_{B/A}) & H_3(\mathcal{C}_{B/A}, \text{Sym}^2(\Omega)) \\
+0 & H^{-5}(L_{B/A}) & H_4(\mathcal{C}_{B/A}, \text{Sym}^2(\Omega)) &
+\wedge^3(I/I^2)
+\end{matrix}
+$$
+with horizontal differential. Thus we obtain edge maps
+$\text{Tor}_i^A(B, B) \to H^{-i}(L_{B/A})$, $i > 0$ and
+$\wedge^i_B(I/I^2) \to \text{Tor}_i^A(B, B)$. Finally, we have
+$\text{Tor}_1^A(B, B) = I/I^2$ and there is a
+five term exact sequence
+$$
+\text{Tor}_3^A(B, B) \to H^{-3}(L_{B/A}) \to \wedge^2_B(I/I^2) \to
+\text{Tor}_2^A(B, B) \to H^{-2}(L_{B/A}) \to 0
+$$
+of low degree terms.
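As a quick sanity check of this sequence, consider the simplest surjection, $A = \mathbf{Z} \to B = \mathbf{Z}/n\mathbf{Z}$ (an illustration added here, not in the original):

```latex
% A = \mathbf{Z}, B = \mathbf{Z}/n\mathbf{Z}, I = (n), so
% I/I^2 = n\mathbf{Z}/n^2\mathbf{Z} \cong \mathbf{Z}/n\mathbf{Z}. Then
$$
\text{Tor}_1^\mathbf{Z}(B, B) = \mathbf{Z}/n\mathbf{Z} = I/I^2,
\qquad
\text{Tor}_2^\mathbf{Z}(B, B) = 0 = \wedge^2_B(I/I^2)
$$
% and H^{-2}(L_{B/A}) = H^{-3}(L_{B/A}) = 0 because
% L_{B/A} = I/I^2[1] (n is a nonzerodivisor, so
% Lemma \ref{lemma-mod-regular-sequence} applies). Every term of the
% five term sequence vanishes except the identification
% \text{Tor}_1^A(B, B) = I/I^2.
```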
+\end{remark}
+
+\begin{remark}
+\label{remark-elucidate-degree-two}
+Let $A \to B$ be a ring map. Let $P_\bullet$ be a resolution of
+$B$ over $A$ (Remark \ref{remark-resolution}).
+Set $J_n = \Ker(P_n \to B)$. Note that
+$$
+\text{Tor}_2^{P_n}(B, B) =
+\text{Tor}_1^{P_n}(J_n, B) =
+\Ker(J_n \otimes_{P_n} J_n \to J_n^2).
+$$
+Hence $H_2(L_{B/A})$ is canonically equal to
+$$
+\Coker(\text{Tor}_2^{P_1}(B, B) \to \text{Tor}_2^{P_0}(B, B))
+$$
+by Remark \ref{remark-surjection}. To make this more explicit we choose
+$P_2$, $P_1$, $P_0$ as in Example \ref{example-resolution-length-two}.
+We claim that
+$$
+\text{Tor}_2^{P_1}(B, B) =
+\wedge^2(\bigoplus\nolimits_{t \in T} B)\ \oplus
+\ \bigoplus\nolimits_{t \in T} J_0\ \oplus
+\ \text{Tor}_2^{P_0}(B, B)
+$$
+Namely, the basis element $x_t \wedge x_{t'}$ of the first summand
+corresponds to the element $x_t \otimes x_{t'} - x_{t'} \otimes x_t$
+of $J_1 \otimes_{P_1} J_1$. For $f \in J_0$ the element $x_t \otimes f$
+of the second summand corresponds to the element
+$x_t \otimes s_0(f) - s_0(f) \otimes x_t$ of $J_1 \otimes_{P_1} J_1$.
+Finally, the map $\text{Tor}_2^{P_0}(B, B) \to \text{Tor}_2^{P_1}(B, B)$
+is given by $s_0$. The map
+$d_0 - d_1 : \text{Tor}_2^{P_1}(B, B) \to \text{Tor}_2^{P_0}(B, B)$
+is zero on the last summand, maps $x_t \otimes f$ to
+$f \otimes f_t - f_t \otimes f$, and maps $x_t \wedge x_{t'}$
+to $f_t \otimes f_{t'} - f_{t'} \otimes f_t$. All in all we conclude
+that there is an exact sequence
+$$
+\wedge^2_B(J_0/J_0^2) \to \text{Tor}_2^{P_0}(B, B) \to H^{-2}(L_{B/A}) \to 0
+$$
+In this way we obtain a direct proof of a consequence of Quillen's spectral
+sequence discussed in Remark \ref{remark-elucidate-ss}.
+\end{remark}
+
+
+
+
+
+
+\section{Comparison with Lichtenbaum-Schlessinger}
+\label{section-compare-higher}
+
+\noindent
+Let $A \to B$ be a ring map. In \cite{Lichtenbaum-Schlessinger}
+there is a fairly explicit determination of $\tau_{\geq -2}L_{B/A}$
+which is often used in calculations of versal deformation spaces of
+singularities. The construction follows.
+Choose a polynomial algebra $P$ over $A$
+and a surjection $P \to B$ with kernel $I$. Choose generators
+$f_t$, $t \in T$ for $I$; these induce a surjection
+$F = \bigoplus_{t \in T} P \to I$ with $F$ a free $P$-module.
+Let $Rel \subset F$ be the kernel of $F \to I$, in other words
+$Rel$ is the module of relations among the $f_t$. Let $TrivRel \subset Rel$
+be the submodule of trivial relations, i.e., the submodule of $Rel$
+generated by the elements $(\ldots, f_{t'}, 0, \ldots, 0, -f_t, 0, \ldots)$.
+Consider the complex of $B$-modules
+\begin{equation}
+\label{equation-lichtenbaum-schlessinger}
+Rel/TrivRel \longrightarrow
+F \otimes_P B \longrightarrow
+\Omega_{P/A} \otimes_P B
+\end{equation}
+where the last term is placed in degree $0$. The first map is the obvious
+one and the second map sends the basis element corresponding to $t \in T$
+to $\text{d}f_t \otimes 1$.
+
+\begin{definition}
+\label{definition-biderivation}
+Let $A \to B$ be a ring map. Let $M$ be a $(B, B)$-bimodule
+over $A$. An {\it $A$-biderivation} is an $A$-linear map $\lambda : B \to M$
+such that $\lambda(xy) = x\lambda(y) + \lambda(x)y$.
+\end{definition}
+
+\noindent
+For a polynomial algebra the biderivations are easy to describe.
+
+\begin{lemma}
+\label{lemma-polynomial-ring-unique}
+Let $P = A[S]$ be a polynomial ring over $A$. Let $M$ be a $(P, P)$-bimodule
+over $A$. Given $m_s \in M$ for $s \in S$, there exists a unique
+$A$-biderivation $\lambda : P \to M$ mapping $s$ to $m_s$ for $s \in S$.
+\end{lemma}
+
+\begin{proof}
+We set
+$$
+\lambda(s_1 \ldots s_t) =
+\sum s_1 \ldots s_{i - 1} m_{s_i} s_{i + 1} \ldots s_t
+$$
+in $M$. Extending by $A$-linearity we obtain a biderivation, and it
+is the unique one since $P$ is generated by $S$ as an $A$-algebra
+and the biderivation rule determines $\lambda$ on products.
+\end{proof}
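To see how the formula behaves on powers of a single variable, here is a small worked case (added for illustration):

```latex
% P = A[s] and \lambda(s) = m. The formula in the proof gives
$$
\lambda(s^n) = \sum\nolimits_{i = 1}^n s^{i - 1} m s^{n - i}
$$
% If the left and right P-module structures on M agree, this collapses
% to n s^{n - 1} m, i.e., \lambda is an ordinary derivation. The
% genuinely bimodule case is used below, where F is viewed as a
% (P_1, P_1)-bimodule via the two maps (\psi \circ d_0, \psi \circ d_1).
```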
+
+\noindent
+Here is the comparison statement. The reader may also read about this
+in \cite[page 206, Proposition 12]{Andre-Homologie} or in the paper
+\cite{Doncel} which extends the complex
+(\ref{equation-lichtenbaum-schlessinger}) by one term and the comparison
+to $\tau_{\geq -3}$.
+
+\begin{lemma}
+\label{lemma-compare-higher}
+In the situation above denote $L$ the complex
+(\ref{equation-lichtenbaum-schlessinger}).
+There is a canonical map $L_{B/A} \to L$ in $D(B)$ which
+induces an isomorphism $\tau_{\geq -2}L_{B/A} \to L$ in $D(B)$.
+\end{lemma}
+
+\begin{proof}
+Let $P_\bullet \to B$ be a resolution of $B$ over $A$
+(Remark \ref{remark-resolution}). We will identify $L_{B/A}$ with
+$\Omega_{P_\bullet/A} \otimes B$. To construct the map we
+make some choices.
+
+\medskip\noindent
+Choose an $A$-algebra map $\psi : P_0 \to P$ compatible with the
+given maps $P_0 \to B$ and $P \to B$.
+
+\medskip\noindent
+Write $P_1 = A[S]$ for some set $S$. For $s \in S$ we may write
+$$
+\psi(d_0(s) - d_1(s)) = \sum p_{s, t} f_t
+$$
+for some $p_{s, t} \in P$. Think of $F = \bigoplus_{t \in T} P$
+as a $(P_1, P_1)$-bimodule via the maps $(\psi \circ d_0, \psi \circ d_1)$.
+By Lemma \ref{lemma-polynomial-ring-unique} we obtain a unique
+$A$-biderivation $\lambda : P_1 \to F$ mapping $s$ to the vector with
+coordinates $p_{s, t}$. By construction the composition
+$$
+P_1 \longrightarrow F \longrightarrow P
+$$
+sends $f \in P_1$ to $\psi(d_0(f) - d_1(f))$ because the map
+$f \mapsto \psi(d_0(f) - d_1(f))$ is an $A$-biderivation agreeing with
+the composition on generators.
+
+\medskip\noindent
+For $g \in P_2$ we claim that $\lambda(d_0(g) - d_1(g) + d_2(g))$
+is an element of $Rel$. Namely, by the last remark of the previous
+paragraph the image of $\lambda(d_0(g) - d_1(g) + d_2(g))$ in $P$ is
+$$
+\psi((d_0 - d_1)(d_0(g) - d_1(g) + d_2(g)))
+$$
+which is zero by Simplicial, Section \ref{simplicial-section-complexes}.
+
+\medskip\noindent
+The choice of $\psi$ determines a map
+$$
+\text{d}\psi \otimes 1 :
+\Omega_{P_0/A} \otimes B
+\longrightarrow
+\Omega_{P/A} \otimes B
+$$
+Composing $\lambda$ with the map $F \to F \otimes B$ gives a
+usual $A$-derivation as the two $P_1$-module structures on
+$F \otimes B$ agree. Thus $\lambda$ determines a map
+$$
+\overline{\lambda} :
+\Omega_{P_1/A} \otimes B
+\longrightarrow
+F \otimes B
+$$
+Finally, we obtain a $B$-linear map
+$$
+q :
+\Omega_{P_2/A} \otimes B
+\longrightarrow
+Rel/TrivRel
+$$
+by mapping $\text{d}g$ to the class of $\lambda(d_0(g) - d_1(g) + d_2(g))$
+in the quotient.
+
+\medskip\noindent
+The diagram
+$$
+\xymatrix{
+\Omega_{P_3/A} \otimes B \ar[r] \ar[d] &
+\Omega_{P_2/A} \otimes B \ar[r] \ar[d]_q &
+\Omega_{P_1/A} \otimes B \ar[r] \ar[d]_{\overline{\lambda}} &
+\Omega_{P_0/A} \otimes B \ar[d]_{\text{d}\psi \otimes 1} \\
+0 \ar[r] &
+Rel/TrivRel \ar[r] &
+F \otimes B \ar[r] &
+\Omega_{P/A} \otimes B
+}
+$$
+commutes (calculation omitted) and we obtain the map of the lemma.
+By Remark \ref{remark-explicit-comparison-map} and
+Lemma \ref{lemma-relation-with-naive-cotangent-complex} we see that this map
+induces isomorphisms $H_1(L_{B/A}) \to H_1(L)$ and $H_0(L_{B/A}) \to H_0(L)$.
+
+\medskip\noindent
+It remains to see that our map $L_{B/A} \to L$ induces an isomorphism
+$H_2(L_{B/A}) \to H_2(L)$. Choose a resolution of $B$ over $A$ with
+$P_0 = P = A[u_i]$ and then $P_1$ and $P_2$ as in
+Example \ref{example-resolution-length-two}.
+In Remark \ref{remark-elucidate-degree-two} we have constructed an exact
+sequence
+$$
+\wedge^2_B(J_0/J_0^2) \to \text{Tor}_2^{P_0}(B, B) \to H^{-2}(L_{B/A}) \to 0
+$$
+where $P_0 = P$ and $J_0 = \Ker(P \to B) = I$.
+Calculating the Tor group using the short exact sequences
+$0 \to I \to P \to B \to 0$ and $0 \to Rel \to F \to I \to 0$
+we find that
+$\text{Tor}_2^P(B, B) = \Ker(Rel \otimes B \to F \otimes B)$.
+The image of the map $\wedge^2_B(I/I^2) \to \text{Tor}_2^P(B, B)$
+under this identification is exactly the image of $TrivRel \otimes B$.
+Thus we see that $H_2(L_{B/A}) \cong H_2(L)$.
+
+\medskip\noindent
+Finally, we have to check that our map $L_{B/A} \to L$ actually induces
+this isomorphism. We will use the notation and results discussed in
+Example \ref{example-resolution-length-two} and
+Remarks \ref{remark-elucidate-degree-two} and \ref{remark-surjection}
+without further mention. Pick an element $\xi$ of
+$\text{Tor}_2^{P_0}(B, B) = \Ker(I \otimes_P I \to I^2)$.
+Write $\xi = \sum h_{t', t}f_{t'} \otimes f_t$ for some
+$h_{t', t} \in P$. Tracing through the exact sequences above we
+find that $\xi$ corresponds to the image in $Rel \otimes B$
+of the element $r \in Rel \subset F = \bigoplus_{t \in T} P$ with
+$t$th coordinate $r_t = \sum_{t' \in T} h_{t', t}f_{t'}$.
+On the other hand, $\xi$ corresponds to the element of
+$H_2(L_{B/A}) = H_2(\Omega)$ which is the image
+via $\text{d} : H_2(\mathcal{J}/\mathcal{J}^2) \to H_2(\Omega)$
+of the boundary of $\xi$ under the $2$-extension
+$$
+0 \to
+\text{Tor}_2^\mathcal{O}(\underline{B}, \underline{B})
+\to
+\mathcal{J} \otimes_\mathcal{O} \mathcal{J} \to \mathcal{J}
+\to
+\mathcal{J}/\mathcal{J}^2 \to 0
+$$
+We compute the successive transgressions of our element. First we have
+$$
+\xi = (d_0 - d_1)(- \sum s_0(h_{t', t} f_{t'}) \otimes x_t)
+$$
+and next we have
+$$
+\sum s_0(h_{t', t} f_{t'}) x_t = d_0(v_r) - d_1(v_r) + d_2(v_r)
+$$
+by our choice of the variables $v$ in
+Example \ref{example-resolution-length-two}.
+We may choose our map $\lambda$ above such that
+$\lambda(u_i) = 0$ and $\lambda(x_t) = - e_t$ where $e_t \in F$
+denotes the basis vector corresponding to $t \in T$.
+Hence the construction of our map $q$ above sends $\text{d}v_r$ to
+$$
+\lambda(\sum s_0(h_{t', t} f_{t'}) x_t) =
+\sum\nolimits_t \left(\sum\nolimits_{t'} h_{t', t}f_{t'}\right) e_t
+$$
+matching the image of $\xi$ in $Rel \otimes B$ (the two minus signs
+we found above cancel out). This agreement finishes the proof.
+\end{proof}
+
+\begin{remark}[Functoriality of the Lichtenbaum-Schlessinger complex]
+\label{remark-functoriality-lichtenbaum-schlessinger}
+Consider a commutative square
+$$
+\xymatrix{
+A' \ar[r] & B' \\
+A \ar[u] \ar[r] & B \ar[u]
+}
+$$
+of ring maps. Choose a factorization
+$$
+\xymatrix{
+A' \ar[r] & P' \ar[r] & B' \\
+A \ar[u] \ar[r] & P \ar[u] \ar[r] & B \ar[u]
+}
+$$
+with $P$ a polynomial algebra over $A$ and $P'$ a polynomial algebra over $A'$.
+Choose generators $f_t$, $t \in T$ for $\Ker(P \to B)$.
+For $t \in T$ denote $f'_t$ the image of $f_t$ in $P'$.
+Choose a set $S$ and elements $f'_s \in P'$, $s \in S$, such that
+the elements $f'_t$ for $t \in T' = T \amalg S$ generate the kernel
+of $P' \to B'$. Set $F = \bigoplus_{t \in T} P$ and
+$F' = \bigoplus_{t' \in T'} P'$. Let $Rel = \Ker(F \to P)$
+and $Rel' = \Ker(F' \to P')$ where the maps are given
+by multiplication by $f_t$, resp.\ $f'_t$ on the coordinates.
+Finally, set $TrivRel$, resp.\ $TrivRel'$ equal to the submodule
+of $Rel$, resp.\ $Rel'$ generated by the elements
+$(\ldots, f_{t'}, 0, \ldots, 0, -f_t, 0, \ldots)$
+for $t, t' \in T$, resp.\ $T'$. Having made these choices we obtain a
+canonical commutative diagram
+$$
+\xymatrix{
+L' : &
+Rel'/TrivRel' \ar[r] &
+F' \otimes_{P'} B' \ar[r] &
+\Omega_{P'/A'} \otimes_{P'} B' \\
+L : \ar[u] &
+Rel/TrivRel \ar[r] \ar[u] &
+F \otimes_P B \ar[r] \ar[u] &
+\Omega_{P/A} \otimes_P B \ar[u]
+}
+$$
+Moreover, tracing through the choices made in the proof of
+Lemma \ref{lemma-compare-higher}
+the reader sees that one obtains a commutative diagram
+$$
+\xymatrix{
+L_{B'/A'} \ar[r] & L' \\
+L_{B/A} \ar[r] \ar[u] & L \ar[u]
+}
+$$
+\end{remark}
+
+
+
+
+
+\section{The cotangent complex of a local complete intersection}
+\label{section-lci}
+
+\noindent
+If $A \to B$ is a local complete intersection map, then
+$L_{B/A}$ is a perfect complex. The key to proving this is
+the following lemma.
+
+\begin{lemma}
+\label{lemma-special-case}
+Let $A = \mathbf{Z}[x_1, \ldots, x_n] \to B = \mathbf{Z}$
+be the ring map which sends $x_i$ to $0$ for $i = 1, \ldots, n$.
+Let $I = (x_1, \ldots, x_n) \subset A$. Then $L_{B/A}$ is quasi-isomorphic to
+$I/I^2[1]$.
+\end{lemma}
+
+\begin{proof}
+There are several ways to prove this. For example one can explicitly construct
+a resolution of $B$ over $A$ and compute. We will use (\ref{equation-triangle}).
+Namely, consider the distinguished triangle
+$$
+L_{\mathbf{Z}[x_1, \ldots, x_n]/\mathbf{Z}}
+\otimes_{\mathbf{Z}[x_1, \ldots, x_n]} \mathbf{Z} \to
+L_{\mathbf{Z}/\mathbf{Z}} \to
+L_{\mathbf{Z}/\mathbf{Z}[x_1, \ldots, x_n]}\to
+L_{\mathbf{Z}[x_1, \ldots, x_n]/\mathbf{Z}}
+\otimes_{\mathbf{Z}[x_1, \ldots, x_n]} \mathbf{Z}[1]
+$$
+The complex $L_{\mathbf{Z}[x_1, \ldots, x_n]/\mathbf{Z}}$
+is quasi-isomorphic to $\Omega_{\mathbf{Z}[x_1, \ldots, x_n]/\mathbf{Z}}$ by
+Lemma \ref{lemma-cotangent-complex-polynomial-algebra}.
+The complex $L_{\mathbf{Z}/\mathbf{Z}}$ is zero in $D(\mathbf{Z})$ by
+Lemma \ref{lemma-when-zero}.
+Thus we see that $L_{B/A}$ has only one nonzero cohomology group
+which is as described in the lemma by Lemma \ref{lemma-surjection}.
+\end{proof}
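Written out for $n = 1$, the triangle argument reads as follows (an unwinding added here for illustration):

```latex
% n = 1: A = \mathbf{Z}[x], B = \mathbf{Z}, I = (x). The triangle is
$$
\Omega_{\mathbf{Z}[x]/\mathbf{Z}} \otimes_{\mathbf{Z}[x]} \mathbf{Z}
\to 0 \to L_{\mathbf{Z}/\mathbf{Z}[x]} \to
\Omega_{\mathbf{Z}[x]/\mathbf{Z}} \otimes_{\mathbf{Z}[x]} \mathbf{Z}[1]
$$
% using L_{\mathbf{Z}[x]/\mathbf{Z}} = \Omega_{\mathbf{Z}[x]/\mathbf{Z}}
% = \mathbf{Z}[x]\,\text{d}x and L_{\mathbf{Z}/\mathbf{Z}} = 0. Hence
% L_{\mathbf{Z}/\mathbf{Z}[x]} \cong (\mathbf{Z}\,\text{d}x)[1] \cong
% (x)/(x^2)[1], the class of \text{d}x corresponding to the class of x.
```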
+
+\begin{lemma}
+\label{lemma-mod-regular-sequence}
+Let $A \to B$ be a surjective ring map whose kernel $I$ is generated
+by a Koszul-regular sequence (for example a regular sequence).
+Then $L_{B/A}$ is quasi-isomorphic to $I/I^2[1]$.
+\end{lemma}
+
+\begin{proof}
+Let $f_1, \ldots, f_r \in I$ be a Koszul regular sequence generating $I$.
+Consider the ring map $\mathbf{Z}[x_1, \ldots, x_r] \to A$ sending
+$x_i$ to $f_i$. Since $x_1, \ldots, x_r$ is a regular sequence in
+$\mathbf{Z}[x_1, \ldots, x_r]$ we see that the Koszul complex
+on $x_1, \ldots, x_r$ is a free resolution of
+$\mathbf{Z} = \mathbf{Z}[x_1, \ldots, x_r]/(x_1, \ldots, x_r)$
+over $\mathbf{Z}[x_1, \ldots, x_r]$
+(see More on Algebra, Lemma \ref{more-algebra-lemma-regular-koszul-regular}).
+Thus the assumption that $f_1, \ldots, f_r$ is Koszul regular
+exactly means that
+$B = A \otimes_{\mathbf{Z}[x_1, \ldots, x_r]}^\mathbf{L} \mathbf{Z}$.
+Hence
+$L_{B/A} = L_{\mathbf{Z}/\mathbf{Z}[x_1, \ldots, x_r]}
+\otimes_\mathbf{Z}^\mathbf{L} B$ by
+Lemmas \ref{lemma-flat-base-change-cotangent-complex} and
+\ref{lemma-special-case}. As $(x_1, \ldots, x_r)/(x_1, \ldots, x_r)^2$
+is a free $\mathbf{Z}$-module, the derived tensor product is underived
+and we obtain
+$L_{B/A} \cong
+\left((x_1, \ldots, x_r)/(x_1, \ldots, x_r)^2
+\otimes_\mathbf{Z} B\right)[1] = I/I^2[1]$.
+Here the identification sends the class of $x_i \otimes 1$ to the
+class of $f_i$; it is an isomorphism because $I/I^2$ is free on the
+classes of $f_1, \ldots, f_r$ (a Koszul-regular sequence is
+quasi-regular).
+\end{proof}
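The simplest instance of the lemma is a hypersurface (an example added here for illustration):

```latex
% Let f \in A be a nonzerodivisor and set B = A/(f). The one-element
% sequence f is regular, so the lemma gives
$$
L_{B/A} \simeq I/I^2[1] = \left( (f)/(f^2) \right)[1] \cong B[1]
$$
% where the last isomorphism sends the class of f to 1 (an
% isomorphism because f is a nonzerodivisor). In particular
% \Ext^1_B(L_{B/A}, N) \cong N for every B-module N.
```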
+
+\begin{lemma}
+\label{lemma-mod-Koszul-regular-ideal}
+Let $A \to B$ be a surjective ring map whose kernel $I$ is a
+Koszul-regular ideal.
+Then $L_{B/A}$ is quasi-isomorphic to $I/I^2[1]$.
+\end{lemma}
+
+\begin{proof}
+Locally on $\Spec(A)$ the ideal $I$ is generated by a Koszul-regular
+sequence, see More on Algebra, Definition
+\ref{more-algebra-definition-regular-ideal}.
+Hence this follows from Lemma \ref{lemma-mod-regular-sequence},
+using Lemma \ref{lemma-flat-base-change-cotangent-complex}
+to see that formation of the cotangent complex is compatible with
+the (flat) localizations $A \to A_f$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-cotangent-complex-local-complete-intersection}
+Let $A \to B$ be a local complete intersection map.
+Then $L_{B/A}$ is a perfect complex with tor amplitude in $[-1, 0]$.
+\end{proposition}
+
+\begin{proof}
+Choose a surjection $P = A[x_1, \ldots, x_n] \to B$ with kernel $J$.
+By Lemma \ref{lemma-relation-with-naive-cotangent-complex}
+we see that $J/J^2 \to \bigoplus B\text{d}x_i$
+is quasi-isomorphic to $\tau_{\geq -1}L_{B/A}$.
+Note that $J/J^2$ is finite projective
+(More on Algebra, Lemma
+\ref{more-algebra-lemma-quasi-regular-ideal-finite-projective}),
+hence $\tau_{\geq -1}L_{B/A}$ is a perfect complex with
+tor amplitude in $[-1, 0]$.
+Thus it suffices to show that $H^i(L_{B/A}) = 0$ for $i \not \in [-1, 0]$.
+This follows from (\ref{equation-triangle})
+$$
+L_{P/A} \otimes_P^\mathbf{L} B \to L_{B/A} \to L_{B/P} \to
+L_{P/A} \otimes_P^\mathbf{L} B[1]
+$$
+and Lemma \ref{lemma-mod-Koszul-regular-ideal}
+to see that $H^i(L_{B/P})$ is zero unless $i \in \{-1, 0\}$.
+(We also use Lemma \ref{lemma-cotangent-complex-polynomial-algebra}
+for the term on the left.)
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Tensor products and the cotangent complex}
+\label{section-tensor-product}
+
+\noindent
+Let $R$ be a ring and let $A$, $B$ be $R$-algebras. In this section we
+discuss $L_{A \otimes_R B/R}$. Most of the information we want is contained
+in the following diagram
+\begin{equation}
+\label{equation-tensor-product}
+\vcenter{
+\xymatrix{
+L_{A/R} \otimes_A^\mathbf{L} (A \otimes_R B) \ar[r] &
+L_{A \otimes_R B/B} \ar[r] &
+E \\
+L_{A/R} \otimes_A^\mathbf{L} (A \otimes_R B) \ar[r] \ar@{=}[u] &
+L_{A \otimes_R B/R} \ar[r] \ar[u] &
+L_{A \otimes_R B/A} \ar[u] \\
+ &
+L_{B/R} \otimes_B^\mathbf{L} (A \otimes_R B) \ar[u] \ar@{=}[r] &
+L_{B/R} \otimes_B^\mathbf{L} (A \otimes_R B) \ar[u]
+}
+}
+\end{equation}
+Explanation: The middle row is the fundamental triangle
+(\ref{equation-triangle}) for the ring maps $R \to A \to A \otimes_R B$.
+The middle column is the fundamental triangle
+(\ref{equation-triangle}) for the ring maps $R \to B \to A \otimes_R B$.
+Next, $E$ is an object of $D(A \otimes_R B)$ which ``fits'' into the
+upper right corner, i.e., which turns both the top row
+and the right column into distinguished triangles. Such an $E$
+exists by Derived Categories, Proposition \ref{derived-proposition-9}
+applied to the lower left square (with $0$ placed in the missing
+spot). To be more explicit, we could for example define $E$ as the cone
+(Derived Categories, Definition \ref{derived-definition-cone})
+of the map of complexes
+$$
+L_{A/R} \otimes_A^\mathbf{L} (A \otimes_R B) \oplus
+L_{B/R} \otimes_B^\mathbf{L} (A \otimes_R B)
+\longrightarrow
+L_{A \otimes_R B/R}
+$$
+and get the two maps with target $E$ by an application of TR3.
+In the Tor independent case the object $E$ is zero.
+
+\begin{lemma}
+\label{lemma-tensor-product-tor-independent}
+If $A$ and $B$ are Tor independent $R$-algebras, then the object $E$
+in (\ref{equation-tensor-product}) is zero. In this case we have
+$$
+L_{A \otimes_R B/R} =
+L_{A/R} \otimes_A^\mathbf{L} (A \otimes_R B) \oplus
+L_{B/R} \otimes_B^\mathbf{L} (A \otimes_R B)
+$$
+which is represented by the complex
+$L_{A/R} \otimes_R B \oplus L_{B/R} \otimes_R A $
+of $A \otimes_R B$-modules.
+\end{lemma}
+
+\begin{proof}
+The first two statements are immediate from
+Lemma \ref{lemma-flat-base-change-cotangent-complex}.
+The last statement follows as $L_{A/R}$ is a complex
+of free $A$-modules, hence $L_{A/R} \otimes_A^\mathbf{L} (A \otimes_R B)$
+is represented by
+$L_{A/R} \otimes_A (A \otimes_R B) = L_{A/R} \otimes_R B$.
+\end{proof}
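For example (an added sanity check), if $A = R[x]$ and $B = R[y]$, then $B$ is free over $R$, hence $A$ and $B$ are Tor independent, and the lemma gives:

```latex
$$
L_{R[x, y]/R} =
\Omega_{R[x]/R} \otimes_R R[y]\ \oplus\ \Omega_{R[y]/R} \otimes_R R[x]
= R[x, y]\,\text{d}x \oplus R[x, y]\,\text{d}y
$$
% in agreement with
% Lemma \ref{lemma-cotangent-complex-polynomial-algebra}, which
% identifies L_{P/R} with \Omega_{P/R} for a polynomial algebra P
% over R.
```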
+
+\noindent
+In general we can say this about the object $E$.
+
+\begin{lemma}
+\label{lemma-tensor-product}
+Let $R$ be a ring and let $A$, $B$ be $R$-algebras. The object $E$
+in (\ref{equation-tensor-product}) satisfies
+$$
+H^i(E) =
+\left\{
+\begin{matrix}
+0 & \text{if} & i \geq -1 \\
+\text{Tor}_1^R(A, B) & \text{if} & i = -2
+\end{matrix}
+\right.
+$$
+\end{lemma}
+
+\begin{proof}
+We use the description of $E$ as the cone on
+$L_{B/R} \otimes_B^\mathbf{L} (A \otimes_R B) \to L_{A \otimes_R B/A}$.
+By Lemma \ref{lemma-compare-higher} the canonical truncations
+$\tau_{\geq -2}L_{B/R}$ and $\tau_{\geq -2}L_{A \otimes_R B/A}$
+are computed by the Lichtenbaum-Schlessinger complex
+(\ref{equation-lichtenbaum-schlessinger}).
+These isomorphisms are compatible with functoriality
+(Remark \ref{remark-functoriality-lichtenbaum-schlessinger}).
+Thus in this proof we work with the Lichtenbaum-Schlessinger complexes.
+
+\medskip\noindent
+Choose a polynomial algebra $P$ over $R$ and a surjection $P \to B$.
+Choose generators $f_t \in P$, $t \in T$ of the kernel of this surjection.
+Let $Rel \subset F = \bigoplus_{t \in T} P$ be the kernel of the map
+$F \to P$ which maps the basis vector corresponding to $t$ to $f_t$.
+Set $P_A = A \otimes_R P$ and $F_A = A \otimes_R F = P_A \otimes_P F$.
+Let $Rel_A$ be the kernel of the map $F_A \to P_A$. Using the exact sequence
+$$
+0 \to Rel \to F \to P \to B \to 0
+$$
+and standard short exact sequences for Tor we obtain an exact sequence
+$$
+A \otimes_R Rel \to Rel_A \to \text{Tor}_1^R(A, B) \to 0
+$$
+Note that $P_A \to A \otimes_R B$ is a surjection whose kernel is generated
+by the elements $1 \otimes f_t$ in $P_A$. Denote $TrivRel_A \subset Rel_A$
+the $P_A$-submodule generated by the elements
+$(\ldots, 1 \otimes f_{t'}, 0, \ldots,
+0, - 1 \otimes f_t, 0, \ldots)$.
+Since $TrivRel \otimes_R A \to TrivRel_A$ is surjective, we find a
+canonical exact sequence
+$$
+A \otimes_R (Rel/TrivRel) \to Rel_A/TrivRel_A \to \text{Tor}_1^R(A, B) \to 0
+$$
+The map of Lichtenbaum-Schlessinger complexes is given by the diagram
+$$
+\xymatrix{
+Rel_A/TrivRel_A \ar[r] &
+F_A \otimes_{P_A} (A \otimes_R B) \ar[r] &
+\Omega_{P_A/A} \otimes_{P_A} (A \otimes_R B) \\
+Rel/TrivRel \ar[r] \ar[u]_{-2} &
+F \otimes_P B \ar[r] \ar[u]_{-1} &
+\Omega_{P/R} \otimes_P B \ar[u]_0
+}
+$$
+Note that the vertical maps labeled $-1$ and $0$ induce an isomorphism
+after applying the functor $A \otimes_R - = P_A \otimes_P -$ to their
+sources, and that the vertical map labeled $-2$ is exactly the map
+whose cokernel is the desired Tor module as we saw above.
+\end{proof}
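A minimal example where $E$ is nonzero (an illustration added here): take $R = \mathbf{Z}$ and $A = B = \mathbf{Z}/p\mathbf{Z}$ for a prime $p$.

```latex
% R = \mathbf{Z}, A = B = \mathbf{Z}/p\mathbf{Z} are not Tor
% independent:
$$
\text{Tor}_1^\mathbf{Z}(\mathbf{Z}/p\mathbf{Z}, \mathbf{Z}/p\mathbf{Z})
= \mathbf{Z}/p\mathbf{Z}
$$
% so the lemma gives H^i(E) = 0 for i \geq -1 and
% H^{-2}(E) = \mathbf{Z}/p\mathbf{Z}. In particular the direct sum
% decomposition of Lemma \ref{lemma-tensor-product-tor-independent}
% fails in this case.
```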
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Deformations of ring maps and the cotangent complex}
+\label{section-deformations}
+
+\noindent
+This section is the continuation of
+Deformation Theory, Section \ref{defos-section-deformations}
+which we urge the reader to read first.
+We start with a surjective ring map $A' \to A$
+whose kernel is an ideal $I$ of square zero. Moreover we assume
+given a ring map $A \to B$, a $B$-module $N$, and an $A$-module map
+$c : I \to N$. In this section we ask ourselves whether we can find
+the question mark fitting into the following diagram
+\begin{equation}
+\label{equation-to-solve}
+\vcenter{
+\xymatrix{
+0 \ar[r] & N \ar[r] & {?} \ar[r] & B \ar[r] & 0 \\
+0 \ar[r] & I \ar[u]^c \ar[r] & A' \ar[u] \ar[r] & A \ar[u] \ar[r] & 0
+}
+}
+\end{equation}
+and moreover how unique the solution is (if it exists). More precisely,
+we look for a surjection of $A'$-algebras $B' \to B$ whose kernel is
+an ideal of square zero and is
+identified with $N$ such that $A' \to B'$ induces the given map $c$.
+We will say $B'$ is a {\it solution} to (\ref{equation-to-solve}).
+
+\begin{lemma}
+\label{lemma-find-obstruction}
+In the situation above we have
+\begin{enumerate}
+\item There is a canonical element $\xi \in \Ext^2_B(L_{B/A}, N)$
+whose vanishing is a sufficient and necessary condition for the existence
+of a solution to (\ref{equation-to-solve}).
+\item If there exists a solution, then the set of
+isomorphism classes of solutions is principal homogeneous under
+$\Ext^1_B(L_{B/A}, N)$.
+\item Given a solution $B'$, the set of automorphisms of $B'$
+fitting into (\ref{equation-to-solve}) is canonically isomorphic
+to $\Ext^0_B(L_{B/A}, N)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Via the identifications $\NL_{B/A} = \tau_{\geq -1}L_{B/A}$
+(Lemma \ref{lemma-relation-with-naive-cotangent-complex}) and
+$H^0(L_{B/A}) = \Omega_{B/A}$ (Lemma \ref{lemma-identify-H0})
+we have seen parts (2) and (3) in
+Deformation Theory, Lemmas \ref{defos-lemma-huge-diagram} and
+\ref{defos-lemma-choices}.
+
+\medskip\noindent
+Proof of (1). Roughly speaking, this follows from the discussion in
+Deformation Theory, Remark \ref{defos-remark-parametrize-solutions}
+by replacing the naive cotangent complex by the full cotangent complex.
+Here is a more detailed explanation. By
+Deformation Theory, Lemma \ref{defos-lemma-parametrize-solutions}
+and Remark \ref{defos-remark-parametrize-solutions}
+there exists an element
+$$
+\xi' \in
+\Ext^1_A(\NL_{A/A'}, N) =
+\Ext^1_B(\NL_{A/A'} \otimes_A^\mathbf{L} B, N) =
+\Ext^1_B(L_{A/A'} \otimes_A^\mathbf{L} B, N)
+$$
+(for the equalities see Deformation Theory, Remark
+\ref{defos-remark-parametrize-solutions} and use that
+$\NL_{A/A'} = \tau_{\geq -1} L_{A/A'}$)
+such that a solution exists if and only if this element is in
+the image of the map
+$$
+\Ext^1_B(\NL_{B/A'}, N) = \Ext^1_B(L_{B/A'}, N)
+\longrightarrow
+\Ext^1_B(L_{A/A'} \otimes_A^\mathbf{L} B, N)
+$$
+The distinguished triangle (\ref{equation-triangle})
+for $A' \to A \to B$ gives rise to a long exact sequence
+$$
+\ldots \to
+\Ext^1_B(L_{B/A'}, N) \to
+\Ext^1_B(L_{A/A'} \otimes_A^\mathbf{L} B, N) \to
+\Ext^2_B(L_{B/A}, N) \to \ldots
+$$
+Hence taking $\xi$ to be the image of $\xi'$ works.
+\end{proof}
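+
+\noindent
+As an illustration (this special case is not needed in what follows),
+suppose that $B$ is a polynomial algebra over $A$. Then $L_{B/A}$ is
+quasi-isomorphic to the free module $\Omega_{B/A}$ placed in degree $0$,
+so that
+$$
+\Ext^2_B(L_{B/A}, N) = \Ext^1_B(L_{B/A}, N) = 0
+$$
+In this case Lemma \ref{lemma-find-obstruction} says that a solution to
+(\ref{equation-to-solve}) exists and is unique up to isomorphism, as one
+expects from the infinitesimal lifting property of smooth ring maps.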
+
+
+
+
+
+\section{The Atiyah class of a module}
+\label{section-atiyah}
+
+\noindent
+Let $A \to B$ be a ring map. Let $M$ be a $B$-module.
+Let $P \to B$ be an object of $\mathcal{C}_{B/A}$
+(Section \ref{section-compute-L-pi-shriek}).
+Consider the extension of principal parts
+$$
+0 \to \Omega_{P/A} \otimes_P M \to P^1_{P/A}(M) \to M \to 0
+$$
+see Algebra, Lemma \ref{algebra-lemma-sequence-of-principal-parts}.
+This sequence is functorial in $P$ by
+Algebra, Remark \ref{algebra-remark-functoriality-principal-parts}.
+Thus we obtain a short exact sequence of sheaves of $\mathcal{O}$-modules
+$$
+0 \to \Omega_{\mathcal{O}/\underline{A}} \otimes_\mathcal{O} \underline{M} \to
+P^1_{\mathcal{O}/\underline{A}}(M) \to \underline{M} \to 0
+$$
+on $\mathcal{C}_{B/A}$. We have
+$L\pi_!(\Omega_{\mathcal{O}/\underline{A}} \otimes_\mathcal{O} \underline{M})
+= L_{B/A} \otimes_B M = L_{B/A} \otimes_B^\mathbf{L} M$
+by Lemma \ref{lemma-pi-shriek-standard} and the flatness of
+the terms of $L_{B/A}$.
+We have $L\pi_!(\underline{M}) = M$ by
+Lemma \ref{lemma-pi-lower-shriek-constant-sheaf}.
+Thus we obtain a distinguished triangle
+\begin{equation}
+\label{equation-atiyah}
+L_{B/A} \otimes_B^\mathbf{L} M \to
+L\pi_!\left(P^1_{\mathcal{O}/\underline{A}}(M)\right) \to M
+\to L_{B/A} \otimes_B^\mathbf{L} M [1]
+\end{equation}
+in $D(B)$. Here we use Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-O-homology-B-homology-general}
+to get a distinguished triangle in $D(B)$ and not just in $D(A)$.
+
+\begin{definition}
+\label{definition-atiyah-class}
+Let $A \to B$ be a ring map. Let $M$ be a $B$-module.
+The map $M \to L_{B/A} \otimes_B^\mathbf{L} M[1]$
+in (\ref{equation-atiyah}) is called the {\it Atiyah class} of $M$.
+\end{definition}
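+
+\noindent
+To connect this with the classical notion, note that the truncation map
+$L_{B/A} \to \tau_{\geq 0}L_{B/A} = \Omega_{B/A}[0]$
+(using Lemma \ref{lemma-identify-H0}) and the map
+$\Omega_{B/A} \otimes_B^\mathbf{L} M \to \Omega_{B/A} \otimes_B M$
+turn the Atiyah class into an element of
+$\Ext^1_B(M, \Omega_{B/A} \otimes_B M)$. One can show that this element is
+the class of the extension of principal parts
+$0 \to \Omega_{B/A} \otimes_B M \to P^1_{B/A}(M) \to M \to 0$
+of Algebra, Lemma \ref{algebra-lemma-sequence-of-principal-parts};
+we will not use this.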
+
+
+
+
+\section{The cotangent complex}
+\label{section-cotangent-complex}
+
+\noindent
+In this section we discuss the cotangent complex of a map of sheaves
+of rings on a site. In later sections we specialize this to obtain
+the cotangent complex of a morphism of ringed topoi, a morphism of
+ringed spaces, a morphism of schemes, a morphism of algebraic spaces, etc.
+
+\medskip\noindent
+Let $\mathcal{C}$ be a site and let $\Sh(\mathcal{C})$ denote the
+associated topos. Let $\mathcal{A}$ denote a sheaf of rings
+on $\mathcal{C}$. Let $\mathcal{A}\textit{-Alg}$ be the category of
+$\mathcal{A}$-algebras. Consider the pair of adjoint functors $(U, V)$ where
+$V : \mathcal{A}\textit{-Alg} \to \Sh(\mathcal{C})$ is the forgetful functor and
+$U : \Sh(\mathcal{C}) \to \mathcal{A}\textit{-Alg}$ assigns to a sheaf of sets
+$\mathcal{E}$ the polynomial algebra $\mathcal{A}[\mathcal{E}]$ on
+$\mathcal{E}$ over $\mathcal{A}$.
+Let $X_\bullet$ be the simplicial object of
+$\text{Fun}(\mathcal{A}\textit{-Alg}, \mathcal{A}\textit{-Alg})$
+constructed in
+Simplicial, Section \ref{simplicial-section-standard}.
+
+\medskip\noindent
+Now assume that $\mathcal{A} \to \mathcal{B}$ is a homomorphism of sheaves
+of rings. Then $\mathcal{B}$ is an object of the category
+$\mathcal{A}\textit{-Alg}$. Denote
+$\mathcal{P}_\bullet = X_\bullet(\mathcal{B})$ the resulting
+simplicial $\mathcal{A}$-algebra.
+Recall that
+$\mathcal{P}_0 = \mathcal{A}[\mathcal{B}]$,
+$\mathcal{P}_1 = \mathcal{A}[\mathcal{A}[\mathcal{B}]]$, and so on.
+Recall also that there is an augmentation
+$$
+\epsilon : \mathcal{P}_\bullet \longrightarrow \mathcal{B}
+$$
+where we view $\mathcal{B}$ as a constant simplicial $\mathcal{A}$-algebra.
+
+\begin{definition}
+\label{definition-standard-resolution-sheaves-rings}
+Let $\mathcal{C}$ be a site.
+Let $\mathcal{A} \to \mathcal{B}$ be a homomorphism of sheaves of rings
+on $\mathcal{C}$. The {\it standard resolution of $\mathcal{B}$ over
+$\mathcal{A}$} is the augmentation
+$\epsilon : \mathcal{P}_\bullet \to \mathcal{B}$
+with terms
+$$
+\mathcal{P}_0 = \mathcal{A}[\mathcal{B}],\quad
+\mathcal{P}_1 = \mathcal{A}[\mathcal{A}[\mathcal{B}]],\quad \ldots
+$$
+and maps as constructed above.
+\end{definition}
+
+\noindent
+With this definition in hand the cotangent complex of a map of sheaves
+of rings is defined as follows.
+We will use the module of differentials as defined in
+Modules on Sites, Section \ref{sites-modules-section-differentials}.
+
+\begin{definition}
+\label{definition-cotangent-complex-morphism-sheaves-rings}
+Let $\mathcal{C}$ be a site.
+Let $\mathcal{A} \to \mathcal{B}$ be a homomorphism of sheaves of rings
+on $\mathcal{C}$.
+The {\it cotangent complex} $L_{\mathcal{B}/\mathcal{A}}$
+is the complex of $\mathcal{B}$-modules associated to the
+simplicial module
+$$
+\Omega_{\mathcal{P}_\bullet/\mathcal{A}}
+\otimes_{\mathcal{P}_\bullet, \epsilon} \mathcal{B}
+$$
+where $\epsilon : \mathcal{P}_\bullet \to \mathcal{B}$
+is the standard resolution of $\mathcal{B}$ over
+$\mathcal{A}$. We usually think of $L_{\mathcal{B}/\mathcal{A}}$
+as an object of $D(\mathcal{B})$.
+\end{definition}
+
+\noindent
+These constructions satisfy a functoriality similar to that discussed
+in Section \ref{section-functoriality}. Namely, given a commutative diagram
+\begin{equation}
+\label{equation-commutative-square-sheaves}
+\vcenter{
+\xymatrix{
+\mathcal{B} \ar[r] & \mathcal{B}' \\
+\mathcal{A} \ar[u] \ar[r] & \mathcal{A}' \ar[u]
+}
+}
+\end{equation}
+of sheaves of rings on $\mathcal{C}$ there is a canonical
+$\mathcal{B}$-linear map of complexes
+$$
+L_{\mathcal{B}/\mathcal{A}} \longrightarrow L_{\mathcal{B}'/\mathcal{A}'}
+$$
+constructed as follows. If $\mathcal{P}_\bullet \to \mathcal{B}$ is the
+standard resolution of $\mathcal{B}$ over $\mathcal{A}$ and
+$\mathcal{P}'_\bullet \to \mathcal{B}'$ is the
+standard resolution of $\mathcal{B}'$ over $\mathcal{A}'$,
+then there is a canonical map $\mathcal{P}_\bullet \to \mathcal{P}'_\bullet$
+of simplicial $\mathcal{A}$-algebras compatible with the augmentations
+$\mathcal{P}_\bullet \to \mathcal{B}$ and
+$\mathcal{P}'_\bullet \to \mathcal{B}'$. The maps
+$$
+\mathcal{P}_0 = \mathcal{A}[\mathcal{B}]
+\longrightarrow
+\mathcal{A}'[\mathcal{B}'] = \mathcal{P}'_0,
+\quad
+\mathcal{P}_1 = \mathcal{A}[\mathcal{A}[\mathcal{B}]]
+\longrightarrow
+\mathcal{A}'[\mathcal{A}'[\mathcal{B}']] = \mathcal{P}'_1
+$$
+and so on are given by the given maps $\mathcal{A} \to \mathcal{A}'$
+and $\mathcal{B} \to \mathcal{B}'$. The desired map
+$L_{\mathcal{B}/\mathcal{A}} \to L_{\mathcal{B}'/\mathcal{A}'}$
+then comes from the associated maps on sheaves of differentials.
+
+\begin{lemma}
+\label{lemma-pullback-cotangent-morphism-topoi}
+Let $f : \Sh(\mathcal{D}) \to \Sh(\mathcal{C})$ be a morphism of topoi.
+Let $\mathcal{A} \to \mathcal{B}$ be a homomorphism of sheaves of rings
+on $\mathcal{C}$. Then
+$f^{-1}L_{\mathcal{B}/\mathcal{A}} = L_{f^{-1}\mathcal{B}/f^{-1}\mathcal{A}}$.
+\end{lemma}
+
+\begin{proof}
+The diagram
+$$
+\xymatrix{
+\mathcal{A}\textit{-Alg} \ar[d]_{f^{-1}} \ar[r] &
+\Sh(\mathcal{C}) \ar@<1ex>[l] \ar[d]^{f^{-1}} \\
+f^{-1}\mathcal{A}\textit{-Alg} \ar[r] & \Sh(\mathcal{D}) \ar@<1ex>[l]
+}
+$$
+commutes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-L-morphism-sheaves-rings}
+Let $\mathcal{C}$ be a site. Let $\mathcal{A} \to \mathcal{B}$ be a
+homomorphism of sheaves of rings on $\mathcal{C}$. Then
+$H^i(L_{\mathcal{B}/\mathcal{A}})$ is the sheaf associated to the
+presheaf $U \mapsto H^i(L_{\mathcal{B}(U)/\mathcal{A}(U)})$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{C}'$ be the site we get by endowing $\mathcal{C}$ with the
+chaotic topology (presheaves are sheaves). There is a morphism of topoi
+$f : \Sh(\mathcal{C}) \to \Sh(\mathcal{C}')$ where $f_*$ is the inclusion
+of sheaves into presheaves and $f^{-1}$ is sheafification.
+By Lemma \ref{lemma-pullback-cotangent-morphism-topoi}
+it suffices to prove the result for $\mathcal{C}'$, i.e.,
+in case $\mathcal{C}$ has the chaotic topology.
+
+\medskip\noindent
+If $\mathcal{C}$ carries the chaotic topology, then
+$L_{\mathcal{B}/\mathcal{A}}(U)$ is equal to
+$L_{\mathcal{B}(U)/\mathcal{A}(U)}$ because
+$$
+\xymatrix{
+\mathcal{A}\textit{-Alg} \ar[d]_{\text{sections over }U} \ar[r] &
+\Sh(\mathcal{C}) \ar@<1ex>[l] \ar[d]^{\text{sections over }U} \\
+\mathcal{A}(U)\textit{-Alg} \ar[r] & \textit{Sets} \ar@<1ex>[l]
+}
+$$
+commutes.
+\end{proof}
+
+\begin{remark}
+\label{remark-map-sections-over-U}
+It is clear from the proof of
+Lemma \ref{lemma-compute-L-morphism-sheaves-rings}
+that for any $U \in \Ob(\mathcal{C})$ there is a canonical map
+$L_{\mathcal{B}(U)/\mathcal{A}(U)} \to L_{\mathcal{B}/\mathcal{A}}(U)$
+of complexes of $\mathcal{B}(U)$-modules. Moreover, these maps
+are compatible with restriction maps and the complex
+$L_{\mathcal{B}/\mathcal{A}}$
+is the sheafification of the rule $U \mapsto L_{\mathcal{B}(U)/\mathcal{A}(U)}$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-H0-L-morphism-sheaves-rings}
+Let $\mathcal{C}$ be a site. Let $\mathcal{A} \to \mathcal{B}$ be a
+homomorphism of sheaves of rings on $\mathcal{C}$. Then
+$H^0(L_{\mathcal{B}/\mathcal{A}}) = \Omega_{\mathcal{B}/\mathcal{A}}$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemmas \ref{lemma-compute-L-morphism-sheaves-rings}
+and \ref{lemma-identify-H0} and
+Modules on Sites, Lemma \ref{sites-modules-lemma-differentials-sheafify}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-L-product-sheaves-rings}
+Let $\mathcal{C}$ be a site. Let $\mathcal{A} \to \mathcal{B}$
+and $\mathcal{A} \to \mathcal{B}'$ be homomorphisms of sheaves of rings
+on $\mathcal{C}$. Then
+$$
+L_{\mathcal{B} \times \mathcal{B}'/\mathcal{A}}
+\longrightarrow
+L_{\mathcal{B}/\mathcal{A}} \oplus L_{\mathcal{B}'/\mathcal{A}}
+$$
+is an isomorphism in $D(\mathcal{B} \times \mathcal{B}')$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-compute-L-morphism-sheaves-rings}
+it suffices to prove this for ring maps.
+In the case of rings this is
+Lemma \ref{lemma-cotangent-complex-product}.
+\end{proof}
+
+\noindent
+The fundamental triangle for the cotangent complex of sheaves of rings
+is an easy consequence of the result for homomorphisms of rings.
+
+\begin{lemma}
+\label{lemma-triangle-sheaves-rings}
+Let $\mathcal{D}$ be a site. Let $\mathcal{A} \to \mathcal{B} \to \mathcal{C}$
+be homomorphisms of sheaves of rings on $\mathcal{D}$.
+There is a canonical distinguished triangle
+$$
+L_{\mathcal{B}/\mathcal{A}} \otimes_\mathcal{B}^\mathbf{L} \mathcal{C}
+\to L_{\mathcal{C}/\mathcal{A}} \to L_{\mathcal{C}/\mathcal{B}} \to
+L_{\mathcal{B}/\mathcal{A}} \otimes_\mathcal{B}^\mathbf{L} \mathcal{C}[1]
+$$
+in $D(\mathcal{C})$.
+\end{lemma}
+
+\begin{proof}
+We will use the method described in
+Remarks \ref{remark-triangle} and \ref{remark-explicit-map}
+to construct the triangle; we will freely use the results mentioned there.
+As in those remarks we first construct the triangle in case
+$\mathcal{B} \to \mathcal{C}$ is an injective map of sheaves of rings.
+In this case we set
+\begin{enumerate}
+\item $\mathcal{P}_\bullet$ is the standard resolution of $\mathcal{B}$
+over $\mathcal{A}$,
+\item $\mathcal{Q}_\bullet$ is the standard resolution of $\mathcal{C}$
+over $\mathcal{A}$,
+\item $\mathcal{R}_\bullet$ is the standard resolution of $\mathcal{C}$
+over $\mathcal{B}$,
+\item $\mathcal{S}_\bullet$ is the standard resolution of $\mathcal{B}$
+over $\mathcal{B}$,
+\item $\overline{\mathcal{Q}}_\bullet =
+\mathcal{Q}_\bullet \otimes_{\mathcal{P}_\bullet} \mathcal{B}$, and
+\item $\overline{\mathcal{R}}_\bullet =
+\mathcal{R}_\bullet \otimes_{\mathcal{S}_\bullet} \mathcal{B}$.
+\end{enumerate}
+The distinguished triangle is the distinguished triangle associated
+to the short exact sequence
+of simplicial $\mathcal{C}$-modules
+$$
+0 \to
+\Omega_{\mathcal{P}_\bullet/\mathcal{A}}
+\otimes_{\mathcal{P}_\bullet} \mathcal{C} \to
+\Omega_{\mathcal{Q}_\bullet/\mathcal{A}}
+\otimes_{\mathcal{Q}_\bullet} \mathcal{C} \to
+\Omega_{\overline{\mathcal{Q}}_\bullet/\mathcal{B}}
+\otimes_{\overline{\mathcal{Q}}_\bullet} \mathcal{C} \to 0
+$$
+The first two terms are equal to the first two terms of the triangle
+of the statement of the lemma. The identification of the last term with
+$L_{\mathcal{C}/\mathcal{B}}$ uses the quasi-isomorphisms of complexes
+$$
+L_{\mathcal{C}/\mathcal{B}} =
+\Omega_{\mathcal{R}_\bullet/\mathcal{B}}
+\otimes_{\mathcal{R}_\bullet} \mathcal{C}
+\longrightarrow
+\Omega_{\overline{\mathcal{R}}_\bullet/\mathcal{B}}
+\otimes_{\overline{\mathcal{R}}_\bullet} \mathcal{C}
+\longleftarrow
+\Omega_{\overline{\mathcal{Q}}_\bullet/\mathcal{B}}
+\otimes_{\overline{\mathcal{Q}}_\bullet} \mathcal{C}
+$$
+All the constructions used above can first be done on the level
+of presheaves and then sheafified. Hence to prove that sequences are exact,
+or that maps are quasi-isomorphisms, it suffices to prove the corresponding
+statement for the ring maps
+$\mathcal{A}(U) \to \mathcal{B}(U) \to \mathcal{C}(U)$
+which are known. This finishes the proof in the case that
+$\mathcal{B} \to \mathcal{C}$ is injective.
+
+\medskip\noindent
+In general, we reduce to the case where $\mathcal{B} \to \mathcal{C}$ is
+injective by replacing $\mathcal{C}$ by $\mathcal{B} \times \mathcal{C}$ if
+necessary. This is possible by the argument given in
+Remark \ref{remark-triangle} using
+Lemma \ref{lemma-compute-L-product-sheaves-rings}.
+\end{proof}
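+
+\noindent
+For example, if $\mathcal{B} \to \mathcal{C}$ is a surjection with kernel
+$\mathcal{I}$, then the long exact sequence of cohomology sheaves associated
+to the triangle of Lemma \ref{lemma-triangle-sheaves-rings}, combined with
+Lemma \ref{lemma-H0-L-morphism-sheaves-rings}, ends with the conormal
+sequence
+$$
+\mathcal{I}/\mathcal{I}^2 \to
+\Omega_{\mathcal{B}/\mathcal{A}} \otimes_\mathcal{B} \mathcal{C} \to
+\Omega_{\mathcal{C}/\mathcal{A}} \to 0
+$$
+since $H^0(L_{\mathcal{C}/\mathcal{B}}) = \Omega_{\mathcal{C}/\mathcal{B}}
+= 0$ and $H^{-1}(L_{\mathcal{C}/\mathcal{B}}) = \mathcal{I}/\mathcal{I}^2$
+in this case, see
+Lemma \ref{lemma-compare-cotangent-complex-with-naive} below.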
+
+\begin{lemma}
+\label{lemma-stalk-cotangent-complex}
+Let $\mathcal{C}$ be a site. Let $\mathcal{A} \to \mathcal{B}$ be a
+homomorphism of sheaves of rings on $\mathcal{C}$. If $p$ is a point
+of $\mathcal{C}$, then
+$(L_{\mathcal{B}/\mathcal{A}})_p = L_{\mathcal{B}_p/\mathcal{A}_p}$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-pullback-cotangent-morphism-topoi}.
+\end{proof}
+
+\noindent
+For the construction of the naive cotangent complex and its properties
+we refer to
+Modules on Sites, Section \ref{sites-modules-section-netherlander}.
+
+\begin{lemma}
+\label{lemma-compare-cotangent-complex-with-naive}
+Let $\mathcal{C}$ be a site. Let $\mathcal{A} \to \mathcal{B}$ be a
+homomorphism of sheaves of rings on $\mathcal{C}$.
+There is a canonical map
+$L_{\mathcal{B}/\mathcal{A}} \to \NL_{\mathcal{B}/\mathcal{A}}$
+which identifies the naive cotangent complex with the truncation
+$\tau_{\geq -1}L_{\mathcal{B}/\mathcal{A}}$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{P}_\bullet$ be the standard resolution of $\mathcal{B}$
+over $\mathcal{A}$.
+Let $\mathcal{I} = \Ker(\mathcal{A}[\mathcal{B}] \to \mathcal{B})$.
+Recall that $\mathcal{P}_0 = \mathcal{A}[\mathcal{B}]$. The map of the
+lemma is given by the commutative diagram
+$$
+\xymatrix{
+L_{\mathcal{B}/\mathcal{A}} \ar[d] & \ldots \ar[r] &
+\Omega_{\mathcal{P}_2/\mathcal{A}} \otimes_{\mathcal{P}_2} \mathcal{B}
+\ar[r] \ar[d] &
+\Omega_{\mathcal{P}_1/\mathcal{A}} \otimes_{\mathcal{P}_1} \mathcal{B}
+\ar[r] \ar[d] &
+\Omega_{\mathcal{P}_0/\mathcal{A}} \otimes_{\mathcal{P}_0} \mathcal{B}
+\ar[d] \\
+\NL_{\mathcal{B}/\mathcal{A}} & \ldots \ar[r] &
+0 \ar[r] &
+\mathcal{I}/\mathcal{I}^2 \ar[r] &
+\Omega_{\mathcal{P}_0/\mathcal{A}} \otimes_{\mathcal{P}_0} \mathcal{B}
+}
+$$
+We construct the downward arrow with target $\mathcal{I}/\mathcal{I}^2$
+by sending a local section $\text{d}f \otimes b$ to the class of
+$(d_0(f) - d_1(f))b$ in $\mathcal{I}/\mathcal{I}^2$.
+Here $d_i : \mathcal{P}_1 \to \mathcal{P}_0$,
+$i = 0, 1$ are the two face maps of the simplicial structure.
+This makes sense as $d_0 - d_1$ maps $\mathcal{P}_1$ into
+$\mathcal{I} = \Ker(\mathcal{P}_0 \to \mathcal{B})$.
+We omit the verification that this rule is well defined.
+Our map is compatible with the differential
+$\Omega_{\mathcal{P}_1/\mathcal{A}} \otimes_{\mathcal{P}_1} \mathcal{B}
+\to \Omega_{\mathcal{P}_0/\mathcal{A}} \otimes_{\mathcal{P}_0} \mathcal{B}$
+as this differential maps a local section $\text{d}f \otimes b$ to
+$\text{d}(d_0(f) - d_1(f)) \otimes b$. Moreover, the differential
+$\Omega_{\mathcal{P}_2/\mathcal{A}} \otimes_{\mathcal{P}_2} \mathcal{B}
+\to \Omega_{\mathcal{P}_1/\mathcal{A}} \otimes_{\mathcal{P}_1} \mathcal{B}$
+maps a local section $\text{d}f \otimes b$ to
+$\text{d}(d_0(f) - d_1(f) + d_2(f)) \otimes b$
+which is annihilated by our downward arrow. Hence we obtain a map of complexes.
+
+\medskip\noindent
+To see that our map induces an isomorphism on the cohomology sheaves
+$H^0$ and $H^{-1}$ we argue as follows. Let $\mathcal{C}'$ be the site
+with the same underlying category as $\mathcal{C}$ but endowed with the
+chaotic topology. Let $f : \Sh(\mathcal{C}) \to \Sh(\mathcal{C}')$ be
+the morphism of topoi whose pullback functor is sheafification.
+Let $\mathcal{A}' \to \mathcal{B}'$ be the given map, but thought of
+as a map of sheaves of rings on $\mathcal{C}'$. The construction above
+gives a map $L_{\mathcal{B}'/\mathcal{A}'} \to \NL_{\mathcal{B}'/\mathcal{A}'}$
+on $\mathcal{C}'$ whose value over any object $U$ of $\mathcal{C}'$
+is just the map
+$$
+L_{\mathcal{B}(U)/\mathcal{A}(U)} \to \NL_{\mathcal{B}(U)/\mathcal{A}(U)}
+$$
+of Remark \ref{remark-explicit-comparison-map} which induces an isomorphism
+on $H^0$ and $H^{-1}$. Since
+$f^{-1}L_{\mathcal{B}'/\mathcal{A}'} = L_{\mathcal{B}/\mathcal{A}}$
+(Lemma \ref{lemma-pullback-cotangent-morphism-topoi})
+and
+$f^{-1}\NL_{\mathcal{B}'/\mathcal{A}'} = \NL_{\mathcal{B}/\mathcal{A}}$
+(Modules on Sites, Lemma \ref{sites-modules-lemma-pullback-NL})
+the lemma is proved.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{The Atiyah class of a sheaf of modules}
+\label{section-atiyah-general}
+
+\noindent
+Let $\mathcal{C}$ be a site. Let $\mathcal{A} \to \mathcal{B}$ be a
+homomorphism of sheaves of rings. Let $\mathcal{F}$ be a
+sheaf of $\mathcal{B}$-modules. Let $\mathcal{P}_\bullet \to \mathcal{B}$
+be the standard resolution of $\mathcal{B}$ over $\mathcal{A}$
+(Section \ref{section-cotangent-complex}).
+For every $n \geq 0$ consider the extension of principal parts
+\begin{equation}
+\label{equation-atiyah-extension}
+0 \to
+\Omega_{\mathcal{P}_n/\mathcal{A}} \otimes_{\mathcal{P}_n} \mathcal{F} \to
+\mathcal{P}^1_{\mathcal{P}_n/\mathcal{A}}(\mathcal{F}) \to
+\mathcal{F} \to 0
+\end{equation}
+see
+Modules on Sites, Lemma \ref{sites-modules-lemma-sequence-of-principal-parts}.
+The functoriality of this construction
+(Modules on Sites, Remark
+\ref{sites-modules-remark-functoriality-principal-parts})
+tells us (\ref{equation-atiyah-extension}) is the degree $n$ part of
+a short exact sequence of simplicial $\mathcal{P}_\bullet$-modules
+(Cohomology on Sites, Section
+\ref{sites-cohomology-section-simplicial-modules}).
+Using the functor $L\pi_! : D(\mathcal{P}_\bullet) \to D(\mathcal{B})$
+of Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-homology-augmentation}
+(here we use that $\mathcal{P}_\bullet \to \mathcal{B}$ is a resolution)
+we obtain a distinguished triangle
+\begin{equation}
+\label{equation-atiyah-general}
+L_{\mathcal{B}/\mathcal{A}} \otimes_\mathcal{B}^\mathbf{L} \mathcal{F} \to
+L\pi_!\left(\mathcal{P}^1_{\mathcal{P}_\bullet/\mathcal{A}}(\mathcal{F})\right)
+\to \mathcal{F} \to
+L_{\mathcal{B}/\mathcal{A}} \otimes_\mathcal{B}^\mathbf{L} \mathcal{F} [1]
+\end{equation}
+in $D(\mathcal{B})$.
+
+\begin{definition}
+\label{definition-atiyah-class-general}
+Let $\mathcal{C}$ be a site.
+Let $\mathcal{A} \to \mathcal{B}$ be a homomorphism of sheaves of rings.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{B}$-modules.
+The map $\mathcal{F} \to
+L_{\mathcal{B}/\mathcal{A}} \otimes_\mathcal{B}^\mathbf{L} \mathcal{F}[1]$
+in (\ref{equation-atiyah-general}) is called the {\it Atiyah class} of
+$\mathcal{F}$.
+\end{definition}
+
+
+
+
+
+
+
+
+
+\section{The cotangent complex of a morphism of ringed spaces}
+\label{section-cotangent-morphism-ringed-spaces}
+
+\noindent
+The cotangent complex of a morphism of ringed spaces is defined
+in terms of the cotangent complex we defined above.
+
+\begin{definition}
+\label{definition-cotangent-complex-morphism-ringed-spaces}
+Let $f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$ be a morphism of
+ringed spaces. The {\it cotangent complex} $L_f$ of $f$ is
+$L_f = L_{\mathcal{O}_X/f^{-1}\mathcal{O}_S}$.
+We will also use the notation
+$L_f = L_{X/S} = L_{\mathcal{O}_X/\mathcal{O}_S}$.
+\end{definition}
+
+\noindent
+More precisely, this means that we consider the cotangent complex
+(Definition \ref{definition-cotangent-complex-morphism-sheaves-rings})
+of the homomorphism $f^\sharp : f^{-1}\mathcal{O}_S \to \mathcal{O}_X$
+of sheaves of rings on the site associated to the topological space $X$
+(Sites, Example \ref{sites-example-site-topological}).
+
+\begin{lemma}
+\label{lemma-H0-L-morphism-ringed-spaces}
+Let $f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$ be a morphism of
+ringed spaces. Then $H^0(L_{X/S}) = \Omega_{X/S}$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-H0-L-morphism-sheaves-rings}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-triangle-ringed-spaces}
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of ringed spaces.
+Then there is a canonical distinguished triangle
+$$
+Lf^* L_{Y/Z} \to L_{X/Z} \to L_{X/Y} \to Lf^*L_{Y/Z}[1]
+$$
+in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Set $h = g \circ f$ so that $h^{-1}\mathcal{O}_Z = f^{-1}g^{-1}\mathcal{O}_Z$.
+By Lemma \ref{lemma-pullback-cotangent-morphism-topoi} we have
+$f^{-1}L_{Y/Z} = L_{f^{-1}\mathcal{O}_Y/h^{-1}\mathcal{O}_Z}$
+and this is a complex of flat $f^{-1}\mathcal{O}_Y$-modules.
+Hence the distinguished triangle above is an example of the
+distinguished triangle of
+Lemma \ref{lemma-triangle-sheaves-rings}
+with $\mathcal{A} = h^{-1}\mathcal{O}_Z$, $\mathcal{B} = f^{-1}\mathcal{O}_Y$,
+and $\mathcal{C} = \mathcal{O}_X$.
+\end{proof}
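+
+\noindent
+Taking cohomology sheaves in degree $0$ in the triangle of
+Lemma \ref{lemma-triangle-ringed-spaces}, and using
+Lemma \ref{lemma-H0-L-morphism-ringed-spaces} together with the
+identification $H^0(Lf^*L_{Y/Z}) = f^*\Omega_{Y/Z}$, we recover the
+usual right exact sequence
+$$
+f^*\Omega_{Y/Z} \to \Omega_{X/Z} \to \Omega_{X/Y} \to 0
+$$
+of modules of differentials.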
+
+\begin{lemma}
+\label{lemma-compare-cotangent-complex-with-naive-ringed-spaces}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of
+ringed spaces. There is a canonical map $L_{X/Y} \to \NL_{X/Y}$ which
+identifies the naive cotangent complex with the truncation
+$\tau_{\geq -1}L_{X/Y}$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-compare-cotangent-complex-with-naive}.
+\end{proof}
+
+
+
+
+
+
+\section{Deformations of ringed spaces and the cotangent complex}
+\label{section-deformations-ringed-spaces}
+
+\noindent
+This section is the continuation of
+Deformation Theory, Section \ref{defos-section-deformations-ringed-spaces}
+which we urge the reader to read first. We briefly recall the setup.
+We have a first order thickening
+$t : (S, \mathcal{O}_S) \to (S', \mathcal{O}_{S'})$ of ringed spaces
+with $\mathcal{J} = \Ker(t^\sharp)$, a morphism of ringed spaces
+$f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$, an $\mathcal{O}_X$-module
+$\mathcal{G}$, and an $f$-map $c : \mathcal{J} \to \mathcal{G}$
+of sheaves of modules. We ask whether we can find
+the question mark fitting into the following diagram
+\begin{equation}
+\label{equation-to-solve-ringed-spaces}
+\vcenter{
+\xymatrix{
+0 \ar[r] & \mathcal{G} \ar[r] & {?} \ar[r] & \mathcal{O}_X \ar[r] & 0 \\
+0 \ar[r] & \mathcal{J} \ar[u]^c \ar[r] & \mathcal{O}_{S'} \ar[u] \ar[r] &
+\mathcal{O}_S \ar[u] \ar[r] & 0
+}
+}
+\end{equation}
+and moreover how unique the solution is (if it exists). More precisely,
+we look for a first order thickening
+$i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+and a morphism of thickenings $(f, f')$ as in
+Deformation Theory, Equation (\ref{defos-equation-morphism-thickenings})
+where $\Ker(i^\sharp)$ is identified with $\mathcal{G}$
+such that $(f')^\sharp$ induces the given map $c$.
+We will say $X'$ is a {\it solution} to
+(\ref{equation-to-solve-ringed-spaces}).
+
+\begin{lemma}
+\label{lemma-find-obstruction-ringed-spaces}
+In the situation above we have
+\begin{enumerate}
+\item There is a canonical element
+$\xi \in \Ext^2_{\mathcal{O}_X}(L_{X/S}, \mathcal{G})$
+whose vanishing is a sufficient and necessary condition for the existence
+of a solution to (\ref{equation-to-solve-ringed-spaces}).
+\item If there exists a solution, then the set of
+isomorphism classes of solutions is principal homogeneous under
+$\Ext^1_{\mathcal{O}_X}(L_{X/S}, \mathcal{G})$.
+\item Given a solution $X'$, the set of automorphisms of $X'$
+fitting into (\ref{equation-to-solve-ringed-spaces}) is canonically isomorphic
+to $\Ext^0_{\mathcal{O}_X}(L_{X/S}, \mathcal{G})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Via the identifications $\NL_{X/S} = \tau_{\geq -1}L_{X/S}$
+(Lemma \ref{lemma-compare-cotangent-complex-with-naive-ringed-spaces})
+and
+$H^0(L_{X/S}) = \Omega_{X/S}$
+(Lemma \ref{lemma-H0-L-morphism-ringed-spaces})
+we have seen parts (2) and (3) in
+Deformation Theory, Lemmas \ref{defos-lemma-huge-diagram-ringed-spaces} and
+\ref{defos-lemma-choices-ringed-spaces}.
+
+\medskip\noindent
+Proof of (1). Roughly speaking, this follows from the discussion in
+Deformation Theory, Remark
+\ref{defos-remark-parametrize-solutions-ringed-spaces}
+by replacing the naive cotangent complex by the full cotangent complex.
+Here is a more detailed explanation. By
+Deformation Theory, Lemma \ref{defos-lemma-parametrize-solutions-ringed-spaces}
+there exists an element
+$$
+\xi' \in
+\Ext^1_{\mathcal{O}_X}(Lf^*\NL_{S/S'}, \mathcal{G}) =
+\Ext^1_{\mathcal{O}_X}(Lf^*L_{S/S'}, \mathcal{G})
+$$
+such that a solution exists if and only if this element is in
+the image of the map
+$$
+\Ext^1_{\mathcal{O}_X}(\NL_{X/S'}, \mathcal{G}) =
+\Ext^1_{\mathcal{O}_X}(L_{X/S'}, \mathcal{G})
+\longrightarrow
+\Ext^1_{\mathcal{O}_X}(Lf^*L_{S/S'}, \mathcal{G})
+$$
+The distinguished triangle of Lemma \ref{lemma-triangle-ringed-spaces}
+for $X \to S \to S'$ gives rise to a long exact sequence
+$$
+\ldots \to
+\Ext^1_{\mathcal{O}_X}(L_{X/S'}, \mathcal{G}) \to
+\Ext^1_{\mathcal{O}_X}(Lf^*L_{S/S'}, \mathcal{G}) \to
+\Ext^2_{\mathcal{O}_X}(L_{X/S}, \mathcal{G}) \to \ldots
+$$
+Hence taking $\xi$ to be the image of $\xi'$ works.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{The cotangent complex of a morphism of ringed topoi}
+\label{section-cotangent-morphism-ringed-topoi}
+
+\noindent
+The cotangent complex of a morphism of ringed topoi is defined
+in terms of the cotangent complex we defined above.
+
+\begin{definition}
+\label{definition-cotangent-complex-morphism-ringed-topoi}
+Let $(f, f^\sharp) : (\Sh(\mathcal{C}), \mathcal{O}_\mathcal{C}) \to
+(\Sh(\mathcal{D}), \mathcal{O}_\mathcal{D})$ be a morphism of ringed topoi.
+The {\it cotangent complex} $L_f$ of $f$ is
+$L_f = L_{\mathcal{O}_\mathcal{C}/f^{-1}\mathcal{O}_\mathcal{D}}$.
+We sometimes write $L_f = L_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{D}}$.
+\end{definition}
+
+\noindent
+This definition applies in many situations, but it does not always produce
+the object one expects. For example, if $f : X \to Y$ is a morphism of
+schemes, then $f$ induces a morphism of big \'etale sites
+$f_{big} : (\Sch/X)_\etale \to (\Sch/Y)_\etale$
+which is a morphism of ringed topoi (Descent, Remark
+\ref{descent-remark-change-topologies-ringed}).
+However, $L_{f_{big}} = 0$ since $(f_{big})^\sharp$ is an isomorphism.
+On the other hand, if we take $L_f$ where we think of $f$ as a morphism
+between the underlying Zariski ringed topoi, then $L_f$ does agree with
+the cotangent complex $L_{X/Y}$ (as defined below)
+whose zeroth cohomology sheaf is $\Omega_{X/Y}$.
+
+\begin{lemma}
+\label{lemma-H0-L-morphism-ringed-topoi}
+Let $f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$ be a morphism of
+ringed topoi. Then $H^0(L_f) = \Omega_f$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-H0-L-morphism-sheaves-rings}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-triangle-ringed-topoi}
+Let $f : (\Sh(\mathcal{C}_1), \mathcal{O}_1) \to
+(\Sh(\mathcal{C}_2), \mathcal{O}_2)$ and
+$g : (\Sh(\mathcal{C}_2), \mathcal{O}_2) \to
+(\Sh(\mathcal{C}_3), \mathcal{O}_3)$ be morphisms of ringed topoi.
+Then there is a canonical distinguished triangle
+$$
+Lf^* L_g \to L_{g \circ f} \to L_f \to Lf^*L_g[1]
+$$
+in $D(\mathcal{O}_1)$.
+\end{lemma}
+
+\begin{proof}
+Set $h = g \circ f$ so that $h^{-1}\mathcal{O}_3 = f^{-1}g^{-1}\mathcal{O}_3$.
+By Lemma \ref{lemma-pullback-cotangent-morphism-topoi} we have
+$f^{-1}L_g = L_{f^{-1}\mathcal{O}_2/h^{-1}\mathcal{O}_3}$
+and this is a complex of flat $f^{-1}\mathcal{O}_2$-modules.
+Hence the distinguished triangle above is an example of the
+distinguished triangle of
+Lemma \ref{lemma-triangle-sheaves-rings}
+with $\mathcal{A} = h^{-1}\mathcal{O}_3$, $\mathcal{B} = f^{-1}\mathcal{O}_2$,
+and $\mathcal{C} = \mathcal{O}_1$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-cotangent-complex-with-naive-ringed-topoi}
+Let $f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$ be a morphism of
+ringed topoi. There is a canonical map $L_f \to \NL_f$ which
+identifies the naive cotangent complex with the truncation
+$\tau_{\geq -1}L_f$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-compare-cotangent-complex-with-naive}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Deformations of ringed topoi and the cotangent complex}
+\label{section-deformations-ringed-topoi}
+
+\noindent
+This section is the continuation of
+Deformation Theory, Section \ref{defos-section-deformations-ringed-topoi}
+which we urge the reader to read first. We briefly recall the setup.
+We have a first order thickening
+$t : (\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B}) \to
+(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})$ of ringed topoi
+with $\mathcal{J} = \Ker(t^\sharp)$, a morphism of ringed topoi
+$f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$, an $\mathcal{O}$-module
+$\mathcal{G}$, and a map $c : f^{-1}\mathcal{J} \to \mathcal{G}$
+of sheaves of $f^{-1}\mathcal{O}_\mathcal{B}$-modules.
+We ask whether we can fill in the question mark in the following diagram
+\begin{equation}
+\label{equation-to-solve-ringed-topoi}
+\vcenter{
+\xymatrix{
+0 \ar[r] & \mathcal{G} \ar[r] & {?} \ar[r] & \mathcal{O} \ar[r] & 0 \\
+0 \ar[r] & f^{-1}\mathcal{J} \ar[u]^c \ar[r] &
+f^{-1}\mathcal{O}_{\mathcal{B}'} \ar[u] \ar[r] &
+f^{-1}\mathcal{O}_\mathcal{B} \ar[u] \ar[r] & 0
+}
+}
+\end{equation}
+and moreover how unique the solution is (if it exists). More precisely,
+we look for a first order thickening
+$i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{C}'), \mathcal{O}')$
+and a morphism of thickenings $(f, f')$ as in
+Deformation Theory, Equation
+(\ref{defos-equation-morphism-thickenings-ringed-topoi})
+where $\Ker(i^\sharp)$ is identified with $\mathcal{G}$
+such that $(f')^\sharp$ induces the given map $c$.
+We will say $(\Sh(\mathcal{C}'), \mathcal{O}')$ is a {\it solution} to
+(\ref{equation-to-solve-ringed-topoi}).
+
+\begin{lemma}
+\label{lemma-find-obstruction-ringed-topoi}
+In the situation above we have
+\begin{enumerate}
+\item There is a canonical element
+$\xi \in \Ext^2_\mathcal{O}(L_f, \mathcal{G})$
+whose vanishing is a necessary and sufficient condition for the existence
+of a solution to (\ref{equation-to-solve-ringed-topoi}).
+\item If there exists a solution, then the set of
+isomorphism classes of solutions is principal homogeneous under
+$\Ext^1_\mathcal{O}(L_f, \mathcal{G})$.
+\item Given a solution $(\Sh(\mathcal{C}'), \mathcal{O}')$, the set of
+automorphisms of the solution fitting into
+(\ref{equation-to-solve-ringed-topoi}) is canonically isomorphic
+to $\Ext^0_\mathcal{O}(L_f, \mathcal{G})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Via the identifications $\NL_f = \tau_{\geq -1}L_f$
+(Lemma \ref{lemma-compare-cotangent-complex-with-naive-ringed-topoi}) and
+$H^0(L_f) = \Omega_f$
+(Lemma \ref{lemma-H0-L-morphism-ringed-topoi})
+we have seen parts (2) and (3) in
+Deformation Theory, Lemmas \ref{defos-lemma-huge-diagram-ringed-topoi} and
+\ref{defos-lemma-choices-ringed-topoi}.
+
+\medskip\noindent
+Proof of (1). To match notation with Deformation Theory, Section
+\ref{defos-section-deformations-ringed-topoi} we will write
+$\NL_f = \NL_{\mathcal{O}/\mathcal{O}_\mathcal{B}}$ and
+$L_f = L_{\mathcal{O}/\mathcal{O}_\mathcal{B}}$ and similarly
+for the morphisms $t$ and $t \circ f$. By
+Deformation Theory, Lemma \ref{defos-lemma-parametrize-solutions-ringed-topoi}
+there exists an element
+$$
+\xi' \in
+\Ext^1_\mathcal{O}(
+Lf^*\NL_{\mathcal{O}_\mathcal{B}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{G}) =
+\Ext^1_\mathcal{O}(
+Lf^*L_{\mathcal{O}_\mathcal{B}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{G})
+$$
+such that a solution exists if and only if this element is in
+the image of the map
+$$
+\Ext^1_\mathcal{O}(
+\NL_{\mathcal{O}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{G}) =
+\Ext^1_\mathcal{O}(
+L_{\mathcal{O}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{G})
+\longrightarrow
+\Ext^1_\mathcal{O}(
+Lf^*L_{\mathcal{O}_\mathcal{B}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{G})
+$$
+The distinguished triangle of Lemma \ref{lemma-triangle-ringed-topoi}
+for $f$ and $t$ gives rise to a long exact sequence
+$$
+\ldots \to
+\Ext^1_\mathcal{O}(
+L_{\mathcal{O}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{G}) \to
+\Ext^1_\mathcal{O}(
+Lf^*L_{\mathcal{O}_\mathcal{B}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{G})
+\to
+\Ext^2_\mathcal{O}(
+L_{\mathcal{O}/\mathcal{O}_\mathcal{B}}, \mathcal{G})
+$$
+Hence taking $\xi$ to be the image of $\xi'$ works.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{The cotangent complex of a morphism of schemes}
+\label{section-cotangent-morphism-schemes}
+
+\noindent
+As promised above we define the cotangent complex of a morphism of
+schemes as follows.
+
+\begin{definition}
+\label{definition-cotangent-morphism-schemes}
+Let $f : X \to Y$ be a morphism of schemes. The {\it cotangent complex
+$L_{X/Y}$ of $X$ over $Y$} is the cotangent complex of $f$ as a
+morphism of ringed spaces
+(Definition \ref{definition-cotangent-complex-morphism-ringed-spaces}).
+\end{definition}
+
+\noindent
+In particular, the results of
+Section \ref{section-cotangent-morphism-ringed-spaces} apply
+to cotangent complexes of morphisms of schemes.
+The next lemma shows this definition is compatible with the definition
+for ring maps and it also implies that $L_{X/Y}$ is an
+object of $D_\QCoh(\mathcal{O}_X)$.
+
+\begin{lemma}
+\label{lemma-morphism-affine-schemes}
+Let $f : X \to Y$ be a morphism of schemes. Let $U = \Spec(B) \subset X$
+and $V = \Spec(A) \subset Y$ be affine opens such that $f(U) \subset V$.
+There is a canonical map
+$$
+\widetilde{L_{B/A}} \longrightarrow L_{X/Y}|_U
+$$
+of complexes which is an isomorphism in $D(\mathcal{O}_U)$.
+This map is compatible with restricting to smaller affine opens
+of $X$ and $Y$.
+\end{lemma}
+
+\begin{proof}
+By Remark \ref{remark-map-sections-over-U}
+there is a canonical map of complexes
+$L_{\mathcal{O}_X(U)/f^{-1}\mathcal{O}_Y(U)} \to L_{X/Y}(U)$
+of $B = \mathcal{O}_X(U)$-modules, which is compatible
+with further restrictions. Using the canonical map
+$A \to f^{-1}\mathcal{O}_Y(U)$ we obtain a canonical map
+$L_{B/A} \to L_{\mathcal{O}_X(U)/f^{-1}\mathcal{O}_Y(U)}$
+of complexes of $B$-modules.
+Using the universal property of the $\widetilde{\ }$
+functor (see Schemes, Lemma \ref{schemes-lemma-compare-constructions})
+we obtain a map as in the statement of the lemma.
+We may check this map is an isomorphism on cohomology sheaves
+by checking it induces isomorphisms on stalks.
+This follows immediately from
+Lemmas \ref{lemma-stalk-cotangent-complex} and \ref{lemma-localize}
+(and the description of the stalks of
+$\mathcal{O}_X$ and $f^{-1}\mathcal{O}_Y$
+at a point $\mathfrak p \in \Spec(B)$ as $B_\mathfrak p$ and
+$A_\mathfrak q$ where $\mathfrak q = A \cap \mathfrak p$; references
+used are Schemes, Lemma \ref{schemes-lemma-spec-sheaves}
+and
+Sheaves, Lemma \ref{sheaves-lemma-stalk-pullback}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-scheme-over-ring}
+Let $\Lambda$ be a ring. Let $X$ be a scheme over $\Lambda$.
+Then
+$$
+L_{X/\Spec(\Lambda)} = L_{\mathcal{O}_X/\underline{\Lambda}}
+$$
+where $\underline{\Lambda}$ is the constant sheaf with value
+$\Lambda$ on $X$.
+\end{lemma}
+
+\begin{proof}
+Let $p : X \to \Spec(\Lambda)$ be the structure morphism.
+Let $q : \Spec(\Lambda) \to (*, \Lambda)$ be the obvious morphism.
+By the distinguished triangle of Lemma \ref{lemma-triangle-ringed-spaces}
+it suffices to show that $L_q = 0$. To see this it suffices to
+show for $\mathfrak p \in \Spec(\Lambda)$ that
+$$
+(L_q)_\mathfrak p =
+L_{\mathcal{O}_{\Spec(\Lambda), \mathfrak p}/\Lambda} =
+L_{\Lambda_\mathfrak p/\Lambda}
+$$
+(Lemma \ref{lemma-stalk-cotangent-complex})
+is zero which follows from Lemma \ref{lemma-when-zero}.
+\end{proof}
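+
+\noindent
+As a reminder rather than a new claim: combining the lemma just proved with
+Lemma \ref{lemma-morphism-affine-schemes} (applied with $U = X$ and
+$V = \Spec(\Lambda)$), the affine case becomes completely explicit.
+
+```latex
+% X = Spec(R) a scheme over Lambda; then in D(O_X) one has
+L_{X/\Lambda} \cong \widetilde{L_{R/\Lambda}}
+% i.e. the complex of quasi-coherent modules associated to the
+% cotangent complex of the ring map Lambda -> R.
+```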
+
+
+
+
+
+
+\section{The cotangent complex of a scheme over a ring}
+\label{section-cotangent-schemes-variant}
+
+\noindent
+Let $\Lambda$ be a ring and let $X$ be a scheme over $\Lambda$.
+Write $L_{X/\Spec(\Lambda)} = L_{X/\Lambda}$ which is justified
+by Lemma \ref{lemma-scheme-over-ring}.
+In this section we give a description of $L_{X/\Lambda}$ similar to
+Lemma \ref{lemma-compute-cotangent-complex}.
+Namely, we construct a category $\mathcal{C}_{X/\Lambda}$
+fibred over $X_{Zar}$ and endow it with a sheaf of (polynomial)
+$\Lambda$-algebras $\mathcal{O}$ such that
+$$
+L_{X/\Lambda} =
+L\pi_!(\Omega_{\mathcal{O}/\underline{\Lambda}} \otimes_\mathcal{O}
+\underline{\mathcal{O}}_X).
+$$
+We will later use the category $\mathcal{C}_{X/\Lambda}$ to construct
+a naive obstruction theory for the stack of coherent sheaves.
+
+\medskip\noindent
+Let $\Lambda$ be a ring. Let $X$ be a scheme over $\Lambda$.
+Let $\mathcal{C}_{X/\Lambda}$ be the category whose objects are
+commutative diagrams
+\begin{equation}
+\label{equation-object}
+\vcenter{
+\xymatrix{
+X \ar[d] & U \ar[l] \ar[d] \\
+\Spec(\Lambda) & \mathbf{A} \ar[l]
+}
+}
+\end{equation}
+of schemes where
+\begin{enumerate}
+\item $U$ is an open subscheme of $X$,
+\item there exists an isomorphism $\mathbf{A} = \Spec(P)$
+where $P$ is a polynomial algebra over $\Lambda$ (on some set
+of variables).
+\end{enumerate}
+In other words, $\mathbf{A}$ is an (infinite dimensional) affine space over
+$\Spec(\Lambda)$. Morphisms are given by commutative diagrams.
+Recall that $X_{Zar}$ denotes the small Zariski site of $X$.
+There is a forgetful functor
+$$
+u : \mathcal{C}_{X/\Lambda} \to X_{Zar},\ (U \to \mathbf{A}) \mapsto U
+$$
+Observe that the fibre category over $U$ is canonically equivalent
+to the category $\mathcal{C}_{\mathcal{O}_X(U)/\Lambda}$ introduced
+in Section \ref{section-compute-L-pi-shriek}.
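+
+\noindent
+To fix ideas, here is a sketch of a typical object of
+$\mathcal{C}_{X/\Lambda}$ over an open $U \subset X$; the particular
+polynomial algebra chosen below is just one convenient choice, not part
+of the definition.
+
+```latex
+% Any Lambda-algebra map P -> O_X(U) with P a polynomial Lambda-algebra
+% gives an object U -> Spec(P); for instance take
+% P = Lambda[x_s ; s in O_X(U)] with x_s mapping to s. The diagram is
+\xymatrix{
+X \ar[d] & U \ar[l] \ar[d] \\
+\Spec(\Lambda) & \Spec(P) \ar[l]
+}
+% where U -> Spec(P) is the morphism corresponding to P -> O_X(U).
+```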
+
+\begin{lemma}
+\label{lemma-category-fibred}
+In the situation above the category
+$\mathcal{C}_{X/\Lambda}$ is fibred over $X_{Zar}$.
+\end{lemma}
+
+\begin{proof}
+Given an object $U \to \mathbf{A}$ of $\mathcal{C}_{X/\Lambda}$ and a morphism
+$U' \to U$ of $X_{Zar}$ consider the object $U' \to \mathbf{A}$ of
+$\mathcal{C}_{X/\Lambda}$ where $U' \to \mathbf{A}$ is the composition of
+$U \to \mathbf{A}$ and $U' \to U$. The morphism
+$(U' \to \mathbf{A}) \to (U \to \mathbf{A})$ of
+$\mathcal{C}_{X/\Lambda}$ is strongly cartesian over $X_{Zar}$.
+\end{proof}
+
+\noindent
+We endow $\mathcal{C}_{X/\Lambda}$ with the topology inherited from
+$X_{Zar}$ (see Stacks, Section \ref{stacks-section-topology}).
+The functor $u$ defines a morphism of topoi
+$\pi : \Sh(\mathcal{C}_{X/\Lambda}) \to \Sh(X_{Zar})$.
+The site $\mathcal{C}_{X/\Lambda}$ comes with several sheaves of rings.
+\begin{enumerate}
+\item The sheaf $\mathcal{O}$ given by the rule
+$(U \to \mathbf{A}) \mapsto \Gamma(\mathbf{A}, \mathcal{O}_\mathbf{A})$.
+\item The sheaf $\underline{\mathcal{O}}_X = \pi^{-1}\mathcal{O}_X$ given by
+the rule $(U \to \mathbf{A}) \mapsto \mathcal{O}_X(U)$.
+\item The constant sheaf $\underline{\Lambda}$.
+\end{enumerate}
+We obtain morphisms of ringed topoi
+\begin{equation}
+\label{equation-pi-schemes}
+\vcenter{
+\xymatrix{
+(\Sh(\mathcal{C}_{X/\Lambda}), \underline{\mathcal{O}}_X) \ar[r]_i \ar[d]_\pi &
+(\Sh(\mathcal{C}_{X/\Lambda}), \mathcal{O}) \\
+(\Sh(X_{Zar}), \mathcal{O}_X)
+}
+}
+\end{equation}
+The morphism $i$ is the identity on underlying topoi and
+$i^\sharp : \mathcal{O} \to \underline{\mathcal{O}}_X$
+is the obvious map.
+The map $\pi$ is a special case of Cohomology on Sites, Situation
+\ref{sites-cohomology-situation-fibred-category}.
+An important role will be played in the following
+by the derived functors
+$
+Li^* : D(\mathcal{O}) \longrightarrow D(\underline{\mathcal{O}}_X)
+$
+left adjoint to $Ri_* = i_* : D(\underline{\mathcal{O}}_X) \to D(\mathcal{O})$
+and
+$
+L\pi_! : D(\underline{\mathcal{O}}_X) \longrightarrow D(\mathcal{O}_X)
+$
+left adjoint to
+$\pi^* = \pi^{-1} : D(\mathcal{O}_X) \to D(\underline{\mathcal{O}}_X)$.
+We can compute $L\pi_!$ thanks to our earlier work.
+
+\begin{remark}
+\label{remark-compute-L-pi-shriek}
+In the situation above, for every $U \subset X$ open let
+$P_{\bullet, U}$ be the standard resolution of $\mathcal{O}_X(U)$
+over $\Lambda$. Set $\mathbf{A}_{n, U} = \Spec(P_{n, U})$. Then
+$\mathbf{A}_{\bullet, U}$
+is a cosimplicial object of the fibre category
+$\mathcal{C}_{\mathcal{O}_X(U)/\Lambda}$ of
+$\mathcal{C}_{X/\Lambda}$ over $U$. Moreover, as discussed
+in Remark \ref{remark-resolution} we have that $\mathbf{A}_{\bullet, U}$
+is a cosimplicial object of $\mathcal{C}_{\mathcal{O}_X(U)/\Lambda}$
+as in Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-by-cosimplicial-resolution}.
+Since the construction $U \mapsto \mathbf{A}_{\bullet, U}$ is functorial
+in $U$, given any (abelian) sheaf $\mathcal{F}$ on $\mathcal{C}_{X/\Lambda}$
+we obtain a complex of presheaves
+$$
+U \longmapsto \mathcal{F}(\mathbf{A}_{\bullet, U})
+$$
+whose cohomology groups compute the homology of $\mathcal{F}$ on the fibre
+category. We conclude by
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-left-derived-pi-shriek}
+that the sheafification computes $L_n\pi_!(\mathcal{F})$.
+In other words, the complex of sheaves whose term in degree $-n$ is
+the sheafification of $U \mapsto \mathcal{F}(\mathbf{A}_{n, U})$ computes
+$L\pi_!(\mathcal{F})$.
+\end{remark}
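+
+\noindent
+In the lowest degree the recipe of the remark can be unwound by hand; the
+following display is a sketch, using that the two coface maps
+$\mathbf{A}_{0, U} \to \mathbf{A}_{1, U}$ induce two restriction maps on
+sections of $\mathcal{F}$.
+
+```latex
+% L_0 pi_!(F) is the sheafification of the presheaf
+U \longmapsto \Coker\left(
+\mathcal{F}(\mathbf{A}_{1, U})
+\xrightarrow{\ d^{0, *} - d^{1, *}\ }
+\mathcal{F}(\mathbf{A}_{0, U})
+\right)
+```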
+
+\noindent
+With this remark out of the way we can state the main
+result of this section.
+
+\begin{lemma}
+\label{lemma-cotangent-morphism-schemes}
+In the situation above there is a canonical isomorphism
+$$
+L_{X/\Lambda} =
+L\pi_!(Li^*\Omega_{\mathcal{O}/\underline{\Lambda}}) =
+L\pi_!(i^*\Omega_{\mathcal{O}/\underline{\Lambda}}) =
+L\pi_!(\Omega_{\mathcal{O}/\underline{\Lambda}}
+\otimes_\mathcal{O} \underline{\mathcal{O}}_X)
+$$
+in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+We first observe that for any object $(U \to \mathbf{A})$ of
+$\mathcal{C}_{X/\Lambda}$
+the value of the sheaf $\mathcal{O}$ is a polynomial algebra over $\Lambda$.
+Hence $\Omega_{\mathcal{O}/\underline{\Lambda}}$ is a flat $\mathcal{O}$-module
+and we conclude the second and third equalities of the statement of the
+lemma hold.
+
+\medskip\noindent
+By Remark \ref{remark-compute-L-pi-shriek} the object
+$L\pi_!(\Omega_{\mathcal{O}/\underline{\Lambda}}
+\otimes_\mathcal{O} \underline{\mathcal{O}}_X)$
+is computed as the sheafification of the complex of presheaves
+$$
+U \mapsto
+\left(\Omega_{\mathcal{O}/\underline{\Lambda}}
+\otimes_\mathcal{O} \underline{\mathcal{O}}_X\right)(\mathbf{A}_{\bullet, U})
+=
+\Omega_{P_{\bullet, U}/\Lambda} \otimes_{P_{\bullet, U}} \mathcal{O}_X(U) =
+L_{\mathcal{O}_X(U)/\Lambda}
+$$
+using notation as in Remark \ref{remark-compute-L-pi-shriek}.
+Now Remark \ref{remark-map-sections-over-U} shows that
+$L\pi_!(\Omega_{\mathcal{O}/\underline{\Lambda}}
+\otimes_\mathcal{O} \underline{\mathcal{O}}_X)$
+computes the cotangent complex of the map of sheaves of rings
+$\underline{\Lambda} \to \mathcal{O}_X$ on $X$.
+This is what we want by Lemma \ref{lemma-scheme-over-ring}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{The cotangent complex of a morphism of algebraic spaces}
+\label{section-cotangent-morphism-spaces}
+
+\noindent
+We define the cotangent complex of a morphism of algebraic spaces
+using the associated morphism between the small \'etale sites.
+
+\begin{definition}
+\label{definition-cotangent-morphism-spaces}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$. The {\it cotangent complex $L_{X/Y}$ of $X$ over $Y$} is the
+cotangent complex of the morphism of ringed topoi $f_{small}$
+between the small \'etale sites of $X$ and $Y$
+(see
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-morphism-ringed-topoi}
+and
+Definition \ref{definition-cotangent-complex-morphism-ringed-topoi}).
+\end{definition}
+
+\noindent
+In particular, the results of
+Section \ref{section-cotangent-morphism-ringed-topoi} apply
+to cotangent complexes of morphisms of algebraic spaces.
+The next lemmas show this definition is compatible with the definition
+for ring maps and for schemes and that $L_{X/Y}$ is an
+object of $D_\QCoh(\mathcal{O}_X)$.
+
+\begin{lemma}
+\label{lemma-etale-localization}
+Let $S$ be a scheme. Consider a commutative diagram
+$$
+\xymatrix{
+U \ar[d]_p \ar[r]_g & V \ar[d]^q \\
+X \ar[r]^f & Y
+}
+$$
+of algebraic spaces over $S$ with $p$ and $q$ \'etale.
+Then there is a canonical identification
+$L_{X/Y}|_{U_\etale} = L_{U/V}$ in $D(\mathcal{O}_U)$.
+\end{lemma}
+
+\begin{proof}
+Formation of the cotangent complex commutes with pullback
+(Lemma \ref{lemma-pullback-cotangent-morphism-topoi}) and
+we have $p_{small}^{-1}\mathcal{O}_X = \mathcal{O}_U$ and
+$g_{small}^{-1}\mathcal{O}_{V_\etale} =
+p_{small}^{-1}f_{small}^{-1}\mathcal{O}_{Y_\etale}$
+because $q_{small}^{-1}\mathcal{O}_{Y_\etale} =
+\mathcal{O}_{V_\etale}$
+(Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-etale-exact-pullback}).
+Tracing through the definitions we conclude that
+$L_{X/Y}|_{U_\etale} = L_{U/V}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-spaces-schemes}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$. Assume $X$ and $Y$ are representable by schemes $X_0$ and $Y_0$.
+Then there is a canonical identification
+$L_{X/Y} = \epsilon^*L_{X_0/Y_0}$ in $D(\mathcal{O}_X)$
+where $\epsilon$ is as in Derived Categories of Spaces, Section
+\ref{spaces-perfect-section-derived-quasi-coherent-etale}
+and $L_{X_0/Y_0}$ is as in
+ Definition \ref{definition-cotangent-morphism-schemes}.
+\end{lemma}
+
+\begin{proof}
+Let $f_0 : X_0 \to Y_0$ be the morphism of schemes corresponding to $f$.
+There is a canonical map
+$\epsilon^{-1}f_0^{-1}\mathcal{O}_{Y_0} \to f_{small}^{-1}\mathcal{O}_Y$
+compatible with
+$\epsilon^\sharp : \epsilon^{-1}\mathcal{O}_{X_0} \to \mathcal{O}_X$
+because there is a commutative diagram
+$$
+\xymatrix{
+X_{0, Zar} \ar[d]_{f_0} & X_\etale \ar[l]^\epsilon \ar[d]^f \\
+Y_{0, Zar} & Y_\etale \ar[l]_\epsilon
+}
+$$
+see Derived Categories of Spaces, Remark
+\ref{spaces-perfect-remark-match-total-direct-images}.
+Thus we obtain a canonical map
+$$
+\epsilon^{-1}L_{X_0/Y_0} =
+\epsilon^{-1}L_{\mathcal{O}_{X_0}/f_0^{-1}\mathcal{O}_{Y_0}} =
+L_{\epsilon^{-1}\mathcal{O}_{X_0}/\epsilon^{-1}f_0^{-1}\mathcal{O}_{Y_0}}
+\longrightarrow
+L_{\mathcal{O}_X/f^{-1}_{small}\mathcal{O}_Y} = L_{X/Y}
+$$
+by the functoriality discussed in Section \ref{section-cotangent-complex}
+and Lemma \ref{lemma-pullback-cotangent-morphism-topoi}.
+To see that the induced map $\epsilon^*L_{X_0/Y_0} \to L_{X/Y}$ is an
+isomorphism we may check on stalks at geometric points
+(Properties of Spaces, Theorem
+\ref{spaces-properties-theorem-exactness-stalks}).
+We will use Lemma \ref{lemma-stalk-cotangent-complex}
+to compute the stalks. Let $\overline{x} : \Spec(k) \to X_0$
+be a geometric point lying over $x \in X_0$, with
+$\overline{y} = f_0 \circ \overline{x}$ lying over $y \in Y_0$. Then
+$$
+L_{X/Y, \overline{x}} =
+L_{\mathcal{O}_{X, \overline{x}}/\mathcal{O}_{Y, \overline{y}}}
+$$
+and
+$$
+(\epsilon^*L_{X_0/Y_0})_{\overline{x}} =
+L_{X_0/Y_0, x} \otimes_{\mathcal{O}_{X_0, x}}
+\mathcal{O}_{X, \overline{x}} =
+L_{\mathcal{O}_{X_0, x}/\mathcal{O}_{Y_0, y}}
+\otimes_{\mathcal{O}_{X_0, x}} \mathcal{O}_{X, \overline{x}}
+$$
+Some details omitted (hint: use that the stalk of a pullback
+is the stalk at the image point, see
+Sites, Lemma \ref{sites-lemma-point-morphism-sites},
+as well as the corresponding result for modules, see
+Modules on Sites, Lemma \ref{sites-modules-lemma-pullback-stalk}).
+Observe that $\mathcal{O}_{X, \overline{x}}$ is the strict
+henselization of $\mathcal{O}_{X_0, x}$ and similarly
+for $\mathcal{O}_{Y, \overline{y}}$
+(Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-describe-etale-local-ring}).
+Thus the result follows from
+Lemma \ref{lemma-cotangent-complex-henselization}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-space-over-ring}
+Let $\Lambda$ be a ring. Let $X$ be an algebraic space over $\Lambda$.
+Then
+$$
+L_{X/\Spec(\Lambda)} = L_{\mathcal{O}_X/\underline{\Lambda}}
+$$
+where $\underline{\Lambda}$ is the constant sheaf with value
+$\Lambda$ on $X_\etale$.
+\end{lemma}
+
+\begin{proof}
+Let $p : X \to \Spec(\Lambda)$ be the structure morphism.
+Let $q : \Spec(\Lambda)_\etale \to (*, \Lambda)$
+be the obvious morphism. By the distinguished triangle of
+Lemma \ref{lemma-triangle-ringed-topoi}
+it suffices to show that $L_q = 0$. To see this it suffices to
+show
+(Properties of Spaces, Theorem
+\ref{spaces-properties-theorem-exactness-stalks})
+for a geometric point $\overline{t} : \Spec(k) \to \Spec(\Lambda)$ that
+$$
+(L_q)_{\overline{t}} =
+L_{\mathcal{O}_{\Spec(\Lambda)_\etale, \overline{t}}/\Lambda}
+$$
+(Lemma \ref{lemma-stalk-cotangent-complex})
+is zero. Since $\mathcal{O}_{\Spec(\Lambda)_\etale, \overline{t}}$
+is a strict henselization of a local ring of $\Lambda$
+(Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-describe-etale-local-ring})
+this follows from Lemma \ref{lemma-when-zero}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{The cotangent complex of an algebraic space over a ring}
+\label{section-cotangent-spaces-variant}
+
+\noindent
+Let $\Lambda$ be a ring and let $X$ be an algebraic space over $\Lambda$.
+Write $L_{X/\Spec(\Lambda)} = L_{X/\Lambda}$ which is justified
+by Lemma \ref{lemma-space-over-ring}.
+In this section we give a description of $L_{X/\Lambda}$ similar to
+Lemma \ref{lemma-compute-cotangent-complex}.
+Namely, we construct a category $\mathcal{C}_{X/\Lambda}$
+fibred over $X_\etale$ and endow it with a sheaf of (polynomial)
+$\Lambda$-algebras $\mathcal{O}$ such that
+$$
+L_{X/\Lambda} =
+L\pi_!(\Omega_{\mathcal{O}/\underline{\Lambda}} \otimes_\mathcal{O}
+\underline{\mathcal{O}}_X).
+$$
+We will later use the category $\mathcal{C}_{X/\Lambda}$ to construct
+a naive obstruction theory for the stack of coherent sheaves.
+
+\medskip\noindent
+Let $\Lambda$ be a ring. Let $X$ be an algebraic space over $\Lambda$.
+Let $\mathcal{C}_{X/\Lambda}$ be the category whose objects are
+commutative diagrams
+\begin{equation}
+\label{equation-object-space}
+\vcenter{
+\xymatrix{
+X \ar[d] & U \ar[l] \ar[d] \\
+\Spec(\Lambda) & \mathbf{A} \ar[l]
+}
+}
+\end{equation}
+of schemes where
+\begin{enumerate}
+\item $U$ is a scheme,
+\item $U \to X$ is \'etale,
+\item there exists an isomorphism $\mathbf{A} = \Spec(P)$
+where $P$ is a polynomial algebra over $\Lambda$ (on some set
+of variables).
+\end{enumerate}
+In other words, $\mathbf{A}$ is an (infinite dimensional) affine space over
+$\Spec(\Lambda)$. Morphisms are given by commutative diagrams.
+Recall that $X_\etale$ denotes the small \'etale site of $X$
+whose objects are schemes \'etale over $X$.
+There is a forgetful functor
+$$
+u : \mathcal{C}_{X/\Lambda} \to X_\etale,
+\quad
+(U \to \mathbf{A}) \mapsto U
+$$
+Observe that the fibre category over $U$ is canonically equivalent
+to the category $\mathcal{C}_{\mathcal{O}_X(U)/\Lambda}$ introduced
+in Section \ref{section-compute-L-pi-shriek}.
+
+\begin{lemma}
+\label{lemma-category-fibred-space}
+In the situation above the category
+$\mathcal{C}_{X/\Lambda}$ is fibred over $X_\etale$.
+\end{lemma}
+
+\begin{proof}
+Given an object $U \to \mathbf{A}$ of $\mathcal{C}_{X/\Lambda}$ and a morphism
+$U' \to U$ of $X_\etale$ consider the object $U' \to \mathbf{A}$ of
+$\mathcal{C}_{X/\Lambda}$ where $U' \to \mathbf{A}$ is the composition of
+$U \to \mathbf{A}$
+and $U' \to U$. The morphism $(U' \to \mathbf{A}) \to (U \to \mathbf{A})$ of
+$\mathcal{C}_{X/\Lambda}$ is strongly cartesian over $X_\etale$.
+\end{proof}
+
+\noindent
+We endow $\mathcal{C}_{X/\Lambda}$ with the topology inherited from
+$X_\etale$ (see Stacks, Section \ref{stacks-section-topology}).
+The functor $u$ defines a morphism of topoi
+$\pi : \Sh(\mathcal{C}_{X/\Lambda}) \to \Sh(X_\etale)$.
+The site $\mathcal{C}_{X/\Lambda}$ comes with several sheaves of rings.
+\begin{enumerate}
+\item The sheaf $\mathcal{O}$ given by the rule
+$(U \to \mathbf{A}) \mapsto \Gamma(\mathbf{A}, \mathcal{O}_\mathbf{A})$.
+\item The sheaf $\underline{\mathcal{O}}_X = \pi^{-1}\mathcal{O}_X$ given by
+the rule $(U \to \mathbf{A}) \mapsto \mathcal{O}_X(U)$.
+\item The constant sheaf $\underline{\Lambda}$.
+\end{enumerate}
+We obtain morphisms of ringed topoi
+\begin{equation}
+\label{equation-pi-spaces}
+\vcenter{
+\xymatrix{
+(\Sh(\mathcal{C}_{X/\Lambda}), \underline{\mathcal{O}}_X) \ar[r]_i \ar[d]_\pi &
+(\Sh(\mathcal{C}_{X/\Lambda}), \mathcal{O}) \\
+(\Sh(X_\etale), \mathcal{O}_X)
+}
+}
+\end{equation}
+The morphism $i$ is the identity on underlying topoi and
+$i^\sharp : \mathcal{O} \to \underline{\mathcal{O}}_X$
+is the obvious map.
+The map $\pi$ is a special case of Cohomology on Sites, Situation
+\ref{sites-cohomology-situation-fibred-category}.
+An important role will be played in the following
+by the derived functors
+$
+Li^* : D(\mathcal{O}) \longrightarrow D(\underline{\mathcal{O}}_X)
+$
+left adjoint to $Ri_* = i_* : D(\underline{\mathcal{O}}_X) \to D(\mathcal{O})$
+and
+$
+L\pi_! : D(\underline{\mathcal{O}}_X) \longrightarrow D(\mathcal{O}_X)
+$
+left adjoint to
+$\pi^* = \pi^{-1} : D(\mathcal{O}_X) \to D(\underline{\mathcal{O}}_X)$.
+We can compute $L\pi_!$ thanks to our earlier work.
+
+\begin{remark}
+\label{remark-compute-L-pi-shriek-spaces}
+In the situation above, for every object $U \to X$ of $X_\etale$
+let $P_{\bullet, U}$ be the standard resolution of $\mathcal{O}_X(U)$
+over $\Lambda$. Set $\mathbf{A}_{n, U} = \Spec(P_{n, U})$.
+Then $\mathbf{A}_{\bullet, U}$
+is a cosimplicial object of the fibre category
+$\mathcal{C}_{\mathcal{O}_X(U)/\Lambda}$ of
+$\mathcal{C}_{X/\Lambda}$ over $U$. Moreover, as discussed
+in Remark \ref{remark-resolution} we have that $\mathbf{A}_{\bullet, U}$
+is a cosimplicial object of $\mathcal{C}_{\mathcal{O}_X(U)/\Lambda}$
+as in Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-by-cosimplicial-resolution}.
+Since the construction $U \mapsto \mathbf{A}_{\bullet, U}$ is functorial
+in $U$, given any (abelian) sheaf $\mathcal{F}$ on $\mathcal{C}_{X/\Lambda}$
+we obtain a complex of presheaves
+$$
+U \longmapsto \mathcal{F}(\mathbf{A}_{\bullet, U})
+$$
+whose cohomology groups compute the homology of $\mathcal{F}$ on the fibre
+category. We conclude by
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-compute-left-derived-pi-shriek}
+that the sheafification computes $L_n\pi_!(\mathcal{F})$.
+In other words, the complex of sheaves whose term in degree $-n$ is
+the sheafification of $U \mapsto \mathcal{F}(\mathbf{A}_{n, U})$ computes
+$L\pi_!(\mathcal{F})$.
+\end{remark}
+
+\noindent
+With this remark out of the way we can state the main
+result of this section.
+
+\begin{lemma}
+\label{lemma-cotangent-morphism-spaces}
+In the situation above there is a canonical isomorphism
+$$
+L_{X/\Lambda} =
+L\pi_!(Li^*\Omega_{\mathcal{O}/\underline{\Lambda}}) =
+L\pi_!(i^*\Omega_{\mathcal{O}/\underline{\Lambda}}) =
+L\pi_!(\Omega_{\mathcal{O}/\underline{\Lambda}}
+\otimes_\mathcal{O} \underline{\mathcal{O}}_X)
+$$
+in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+We first observe that for any object $(U \to \mathbf{A})$ of
+$\mathcal{C}_{X/\Lambda}$
+the value of the sheaf $\mathcal{O}$ is a polynomial algebra over $\Lambda$.
+Hence $\Omega_{\mathcal{O}/\underline{\Lambda}}$ is a flat $\mathcal{O}$-module
+and we conclude the second and third equalities of the statement of the
+lemma hold.
+
+\medskip\noindent
+By Remark \ref{remark-compute-L-pi-shriek-spaces} the object
+$L\pi_!(\Omega_{\mathcal{O}/\underline{\Lambda}}
+\otimes_\mathcal{O} \underline{\mathcal{O}}_X)$
+is computed as the sheafification of the complex of presheaves
+$$
+U \mapsto
+\left(\Omega_{\mathcal{O}/\underline{\Lambda}}
+\otimes_\mathcal{O} \underline{\mathcal{O}}_X\right)(\mathbf{A}_{\bullet, U})
+=
+\Omega_{P_{\bullet, U}/\Lambda} \otimes_{P_{\bullet, U}} \mathcal{O}_X(U) =
+L_{\mathcal{O}_X(U)/\Lambda}
+$$
+using notation as in Remark \ref{remark-compute-L-pi-shriek-spaces}.
+Now Remark \ref{remark-map-sections-over-U} shows that
+$L\pi_!(\Omega_{\mathcal{O}/\underline{\Lambda}}
+\otimes_\mathcal{O} \underline{\mathcal{O}}_X)$
+computes the cotangent complex of the map of sheaves of rings
+$\underline{\Lambda} \to \mathcal{O}_X$ on $X_\etale$.
+This is what we want by Lemma \ref{lemma-space-over-ring}.
+\end{proof}
+
+
+
+
+
+
+\section{Fibre products of algebraic spaces and the cotangent complex}
+\label{section-fibre-product}
+
+\noindent
+Let $S$ be a scheme. Let $X \to B$ and $Y \to B$ be morphisms of algebraic
+spaces over $S$. Consider the fibre product $X \times_B Y$ with projection
+morphisms $p : X \times_B Y \to X$ and $q : X \times_B Y \to Y$.
+In this section we discuss $L_{X \times_B Y/B}$. Most of the
+information we want is contained in the following diagram
+\begin{equation}
+\label{equation-fibre-product}
+\vcenter{
+\xymatrix{
+Lp^*L_{X/B} \ar[r] &
+L_{X \times_B Y/Y} \ar[r] &
+E \\
+Lp^*L_{X/B} \ar[r] \ar@{=}[u] &
+L_{X \times_B Y/B} \ar[r] \ar[u] &
+L_{X \times_B Y/X} \ar[u] \\
+ &
+Lq^*L_{Y/B} \ar[u] \ar@{=}[r] &
+Lq^*L_{Y/B} \ar[u]
+}
+}
+\end{equation}
+Explanation: The middle row is the fundamental triangle of
+Lemma \ref{lemma-triangle-ringed-topoi} for the morphisms
+$X \times_B Y \to X \to B$. The middle column is the fundamental triangle
+for the morphisms $X \times_B Y \to Y \to B$.
+Next, $E$ is an object of $D(\mathcal{O}_{X \times_B Y})$ which ``fits''
+into the upper right corner, i.e., which turns both the top row
+and the right column into distinguished triangles. Such an $E$
+exists by Derived Categories, Proposition \ref{derived-proposition-9}
+applied to the lower left square (with $0$ placed in the missing
+spot). To be more explicit, we could for example define $E$ as the cone
+(Derived Categories, Definition \ref{derived-definition-cone})
+of the map of complexes
+$$
+Lp^*L_{X/B} \oplus Lq^*L_{Y/B} \longrightarrow L_{X \times_B Y/B}
+$$
+and get the two maps with target $E$ by an application of TR3.
+In the Tor independent case the object $E$ is zero.
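+
+\noindent
+Stalkwise the cone description above can be made concrete; the following
+is a sketch with $R$, $A$, $B$, $C$ the local rings at a geometric point
+as in Lemma \ref{lemma-fibre-product} below.
+
+```latex
+% At a geometric point of X x_B Y the object E becomes the cone of
+\left( L_{A/R} \otimes_A^{\mathbf{L}} C \right) \oplus
+\left( L_{B/R} \otimes_B^{\mathbf{L}} C \right)
+\longrightarrow L_{C/R}
+```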
+
+\begin{lemma}
+\label{lemma-fibre-product-tor-independent}
+In the situation above, if $X$ and $Y$ are Tor independent over $B$, then
+the object $E$ in (\ref{equation-fibre-product}) is zero. In this case we
+have
+$$
+L_{X \times_B Y/B} = Lp^*L_{X/B} \oplus Lq^*L_{Y/B}
+$$
+\end{lemma}
+
+\begin{proof}
+Choose a scheme $W$ and a surjective \'etale morphism $W \to B$.
+Choose a scheme $U$ and a surjective \'etale morphism $U \to X \times_B W$.
+Choose a scheme $V$ and a surjective \'etale morphism $V \to Y \times_B W$.
+Then $U \times_W V \to X \times_B Y$ is surjective \'etale too.
+Hence it suffices to prove that the restriction of $E$ to $U \times_W V$
+is zero. By Lemma \ref{lemma-compare-spaces-schemes} and
+Derived Categories of Spaces, Lemma \ref{spaces-perfect-lemma-tor-independent}
+this reduces us to the case of schemes.
+Taking suitable affine opens we reduce to the case of affine schemes.
+Using
+Lemma \ref{lemma-morphism-affine-schemes}
+we reduce to the case of a tensor product of rings, i.e., to
+Lemma \ref{lemma-tensor-product-tor-independent}.
+\end{proof}
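+
+\noindent
+A reminder of when the hypothesis of the lemma holds: Tor independence is
+automatic when one of the two factors is flat over the base. Affine
+locally this reads as follows.
+
+```latex
+% If A is flat as an R-module, then Tor_i^R(A, B) = 0 for i > 0 for
+% every R-algebra B, so with C = A tensor_R B the lemma gives
+L_{C/R} =
+\left( L_{A/R} \otimes_A^{\mathbf{L}} C \right) \oplus
+\left( L_{B/R} \otimes_B^{\mathbf{L}} C \right)
+```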
+
+\noindent
+In general we can say the following about the object $E$.
+
+\begin{lemma}
+\label{lemma-fibre-product}
+Let $S$ be a scheme. Let $X \to B$ and $Y \to B$ be morphisms of algebraic
+spaces over $S$. The object $E$ in (\ref{equation-fibre-product}) satisfies
+$H^i(E) = 0$ for $i = 0, -1$ and for a geometric point
+$(\overline{x}, \overline{y}) : \Spec(k) \to X \times_B Y$ we have
+$$
+H^{-2}(E)_{(\overline{x}, \overline{y})} =
+\text{Tor}_1^R(A, B) \otimes_{A \otimes_R B} C
+$$
+where $R = \mathcal{O}_{B, \overline{b}}$, $A = \mathcal{O}_{X, \overline{x}}$,
+$B = \mathcal{O}_{Y, \overline{y}}$, and
+$C = \mathcal{O}_{X \times_B Y, (\overline{x}, \overline{y})}$.
+\end{lemma}
+
+\begin{proof}
+The formation of the cotangent complex commutes with taking stalks
+and pullbacks, see
+Lemmas \ref{lemma-stalk-cotangent-complex} and
+\ref{lemma-pullback-cotangent-morphism-topoi}.
+Note that $C$ is a henselization of $A \otimes_R B$. Hence
+$L_{C/R} = L_{A \otimes_R B/R} \otimes_{A \otimes_R B} C$
+by the results of Section \ref{section-localization}.
+Thus the stalk of $E$ at our geometric point is the cone of the
+map $L_{A/R} \otimes C \to L_{A \otimes_R B/R} \otimes C$.
+Therefore the results of the lemma follow from
+the case of rings, i.e., Lemma \ref{lemma-tensor-product}.
+\end{proof}
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/criteria.tex b/books/stacks/criteria.tex
new file mode 100644
index 0000000000000000000000000000000000000000..979f6a1cde9193655ef22094a1aaccf79214437d
--- /dev/null
+++ b/books/stacks/criteria.tex
@@ -0,0 +1,3401 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Criteria for Representability}
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+
+
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+The purpose of this chapter is to find criteria guaranteeing that a
+stack in groupoids over the category of schemes with the fppf topology
+is an algebraic stack. Historically, this often involved proving that
+certain functors were representable, see Grothendieck's lectures
+\cite{Gr-I},
+\cite{Gr-II},
+\cite{Gr-III},
+\cite{Gr-IV},
+\cite{Gr-V}, and
+\cite{Gr-VI}.
+This explains the title of this chapter. Another important source
+of this material comes from the work of Artin, see
+\cite{ArtinI},
+\cite{ArtinII},
+\cite{Artin-Theorem-Representability},
+\cite{Artin-Construction-Techniques},
+\cite{Artin-Algebraic-Spaces},
+\cite{Artin-Algebraic-Approximation},
+\cite{Artin-Implicit-Function}, and
+\cite{ArtinVersal}.
+
+\medskip\noindent
+Some of the notation, conventions and terminology in this chapter is awkward
+and may seem backwards to the more experienced reader. This is intentional.
+Please see Quot, Section \ref{quot-section-conventions} for an
+explanation.
+
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+The conventions we use in this chapter are the same as those in the
+chapter on algebraic stacks, see
+Algebraic Stacks, Section \ref{algebraic-section-conventions}.
+
+
+
+
+\section{What we already know}
+\label{section-done-so-far}
+
+\noindent
+The analogue of this chapter for algebraic spaces is the chapter entitled
+``Bootstrap'', see
+Bootstrap, Section \ref{bootstrap-section-introduction}.
+That chapter already contains some representability results.
+Moreover, some of the preliminary material treated there we already
+have worked out in the chapter on algebraic stacks.
+Here is a list:
+\begin{enumerate}
+\item We discuss morphisms of presheaves representable by algebraic spaces in
+Bootstrap, Section
+\ref{bootstrap-section-morphism-representable-by-spaces}.
+In
+Algebraic Stacks, Section
+\ref{algebraic-section-morphisms-representable-by-algebraic-spaces}
+we discuss the notion of a $1$-morphism of categories fibred in groupoids
+being representable by algebraic spaces.
+\item We discuss properties of morphisms of presheaves representable by
+algebraic spaces in
+Bootstrap, Section
+\ref{bootstrap-section-representable-by-spaces-properties}.
+In
+Algebraic Stacks, Section
+\ref{algebraic-section-representable-properties}
+we discuss properties of $1$-morphisms of categories fibred in groupoids
+representable by algebraic spaces.
+\item We proved that if $F$ is a sheaf whose diagonal is representable
+by algebraic spaces and which has an \'etale covering by an algebraic
+space, then $F$ is an algebraic space, see
+Bootstrap, Theorem \ref{bootstrap-theorem-bootstrap}.
+(This is a weak version of the result in the next item on the list.)
+\item
+\label{item-bootstrap-final}
+We proved that if $F$ is a sheaf and if there exists an algebraic
+space $U$ and a morphism $U \to F$ which is representable by algebraic
+spaces, surjective, flat, and locally of finite presentation, then
+$F$ is an algebraic space, see
+Bootstrap, Theorem \ref{bootstrap-theorem-final-bootstrap}.
+\item We have also proved the ``smooth'' analogue of
+(\ref{item-bootstrap-final}) for algebraic
+stacks: If $\mathcal{X}$ is a stack in groupoids over
+$(\Sch/S)_{fppf}$ and if there exists a stack in groupoids
+$\mathcal{U}$ over $(\Sch/S)_{fppf}$ which is representable
+by an algebraic space and a $1$-morphism $u : \mathcal{U} \to \mathcal{X}$
+which is representable by algebraic spaces, surjective, and smooth
+then $\mathcal{X}$ is an algebraic stack, see
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-smooth-surjective-morphism-implies-algebraic}.
+\end{enumerate}
+Our first task now is to prove the analogue of
+(\ref{item-bootstrap-final}) for algebraic
+stacks in general; it is
+Theorem \ref{theorem-bootstrap}.
+
+
+
+\section{Morphisms of stacks in groupoids}
+\label{section-1-morphisms}
+
+\noindent
+This section is preliminary and should be skipped on a first reading.
+
+\begin{lemma}
+\label{lemma-etale-permanence}
+Let $\mathcal{X} \to \mathcal{Y} \to \mathcal{Z}$
+be $1$-morphisms of categories fibred in groupoids over
+$(\Sch/S)_{fppf}$.
+If $\mathcal{X} \to \mathcal{Z}$ and $\mathcal{Y} \to \mathcal{Z}$ are
+representable by algebraic spaces and \'etale so is
+$\mathcal{X} \to \mathcal{Y}$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{U}$ be a representable category fibred in groupoids over $S$.
+Let $f : \mathcal{U} \to \mathcal{Y}$ be a $1$-morphism. We have to show that
+$\mathcal{X} \times_\mathcal{Y} \mathcal{U}$ is representable by an
+algebraic space and \'etale over $\mathcal{U}$.
+Consider the composition $h : \mathcal{U} \to \mathcal{Z}$. Then
+$$
+\mathcal{X} \times_\mathcal{Z} \mathcal{U}
+\longrightarrow
+\mathcal{Y} \times_\mathcal{Z} \mathcal{U}
+$$
+is a $1$-morphism between categories fibred in groupoids which are both
+representable by algebraic spaces and both \'etale over $\mathcal{U}$.
+Hence by
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-etale-permanence}
+this is represented by an \'etale morphism of algebraic spaces.
+Finally, we obtain the result we want as the morphism $f$ induces
+a morphism $\mathcal{U} \to \mathcal{Y} \times_\mathcal{Z} \mathcal{U}$
+and we have
+$$
+\mathcal{X} \times_\mathcal{Y} \mathcal{U} =
+(\mathcal{X} \times_\mathcal{Z} \mathcal{U})
+\times_{(\mathcal{Y} \times_\mathcal{Z} \mathcal{U})}
+\mathcal{U}.
+$$
+\end{proof}
+
+\begin{lemma}
+\label{lemma-stack-in-setoids-descent}
+Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$ be stacks in groupoids
+over $(\Sch/S)_{fppf}$. Suppose that $\mathcal{X} \to \mathcal{Y}$
+and $\mathcal{Z} \to \mathcal{Y}$ are $1$-morphisms.
+If
+\begin{enumerate}
+\item $\mathcal{Y}$, $\mathcal{Z}$ are representable by algebraic spaces
+$Y$, $Z$ over $S$,
+\item the associated morphism of algebraic spaces $Y \to Z$ is surjective,
+flat and locally of finite presentation, and
+\item $\mathcal{Y} \times_\mathcal{Z} \mathcal{X}$ is a stack in
+setoids,
+\end{enumerate}
+then $\mathcal{X}$ is a stack in setoids.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Stacks, Lemma \ref{stacks-lemma-stack-in-setoids-descent}.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-smooth-surjective-morphism-implies-algebraic}
+and will be superseded by the stronger
+Theorem \ref{theorem-bootstrap}.
+
+\begin{lemma}
+\label{lemma-flat-finite-presentation-surjective-diagonal}
+Let $S$ be a scheme.
+Let $u : \mathcal{U} \to \mathcal{X}$ be a $1$-morphism of
+stacks in groupoids over $(\Sch/S)_{fppf}$. If
+\begin{enumerate}
+\item $\mathcal{U}$ is representable by an algebraic space, and
+\item $u$ is representable by algebraic spaces, surjective, flat and
+locally of finite presentation,
+\end{enumerate}
+then
+$\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Given two schemes $T_1$, $T_2$ over $S$ denote
+$\mathcal{T}_i = (\Sch/T_i)_{fppf}$ the associated representable
+fibre categories. Suppose given $1$-morphisms
+$f_i : \mathcal{T}_i \to \mathcal{X}$.
+According to
+Algebraic Stacks, Lemma \ref{algebraic-lemma-representable-diagonal}
+it suffices to prove that the $2$-fibered
+product $\mathcal{T}_1 \times_\mathcal{X} \mathcal{T}_2$
+is representable by an algebraic space. By
+Stacks, Lemma
+\ref{stacks-lemma-2-fibre-product-stacks-in-setoids-over-stack-in-groupoids}
+this is in any case a stack in setoids. Thus
+$\mathcal{T}_1 \times_\mathcal{X} \mathcal{T}_2$ corresponds
+to some sheaf $F$ on $(\Sch/S)_{fppf}$, see
+Stacks, Lemma \ref{stacks-lemma-stack-in-setoids-characterize}.
+Let $U$ be the algebraic space which represents $\mathcal{U}$.
+By assumption
+$$
+\mathcal{T}_i' = \mathcal{U} \times_{u, \mathcal{X}, f_i} \mathcal{T}_i
+$$
+is representable by an algebraic space $T'_i$ over $S$. Hence
+$\mathcal{T}_1' \times_\mathcal{U} \mathcal{T}_2'$ is representable
+by the algebraic space $T'_1 \times_U T'_2$.
+Consider the commutative diagram
+$$
+\xymatrix{
+&
+\mathcal{T}_1 \times_{\mathcal X} \mathcal{T}_2 \ar[rr]\ar'[d][dd] & &
+\mathcal{T}_1 \ar[dd] \\
+\mathcal{T}_1' \times_\mathcal{U} \mathcal{T}_2' \ar[ur]\ar[rr]\ar[dd] & &
+\mathcal{T}_1' \ar[ur]\ar[dd] \\
+&
+\mathcal{T}_2 \ar'[r][rr] & &
+\mathcal X \\
+\mathcal{T}_2' \ar[rr]\ar[ur] & &
+\mathcal{U} \ar[ur] }
+$$
+In this diagram the bottom square, the right square, the back square, and
+the front square are $2$-fibre products. A formal argument then shows
+that $\mathcal{T}_1' \times_\mathcal{U} \mathcal{T}_2' \to
+\mathcal{T}_1 \times_{\mathcal X} \mathcal{T}_2$
+is the ``base change'' of $\mathcal{U} \to \mathcal{X}$, more precisely
+the diagram
+$$
+\xymatrix{
+\mathcal{T}_1' \times_\mathcal{U} \mathcal{T}_2' \ar[d] \ar[r] &
+\mathcal{U} \ar[d] \\
+\mathcal{T}_1 \times_{\mathcal X} \mathcal{T}_2 \ar[r] &
+\mathcal{X}
+}
+$$
+is a $2$-fibre square.
+Hence $T'_1 \times_U T'_2 \to F$ is representable by algebraic spaces,
+flat, locally of finite presentation and surjective, see
+Algebraic Stacks, Lemmas
+\ref{algebraic-lemma-map-fibred-setoids-representable-algebraic-spaces},
+\ref{algebraic-lemma-base-change-representable-by-spaces},
+\ref{algebraic-lemma-map-fibred-setoids-property}, and
+\ref{algebraic-lemma-base-change-representable-transformations-property}.
+Therefore $F$ is an algebraic space by
+Bootstrap, Theorem \ref{bootstrap-theorem-final-bootstrap}
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-second-diagonal}
+Let $\mathcal{X}$ be a category fibred in groupoids over $(\Sch/S)_{fppf}$.
+The following are equivalent
+\begin{enumerate}
+\item $\Delta_\Delta : \mathcal{X} \to
+\mathcal{X} \times_{\mathcal{X} \times \mathcal{X}} \mathcal{X}$
+is representable by algebraic spaces,
+\item for every $1$-morphism $\mathcal{V} \to \mathcal{X} \times \mathcal{X}$
+with $\mathcal{V}$ representable (by a scheme) the fibre product
+$\mathcal{Y} =
+\mathcal{X} \times_{\Delta, \mathcal{X} \times \mathcal{X}} \mathcal{V}$
+has diagonal representable by algebraic spaces.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Although this is a bit of a brain twister, it is completely formal.
+Namely, recall that
+$\mathcal{X} \times_{\mathcal{X} \times \mathcal{X}} \mathcal{X} =
+\mathcal{I}_\mathcal{X}$ is the inertia of $\mathcal{X}$ and that
+$\Delta_\Delta$ is the identity section of $\mathcal{I}_\mathcal{X}$, see
+Categories, Section \ref{categories-section-inertia}.
+Thus condition (1) says the following: Given a scheme $V$, an object $x$ of
+$\mathcal{X}$ over $V$, and a morphism $\alpha : x \to x$ of $\mathcal{X}_V$
+the condition ``$\alpha = \text{id}_x$'' defines an algebraic space over $V$.
+(In other words, there exists a monomorphism of algebraic spaces $W \to V$
+such that a morphism of schemes $f : T \to V$ factors through $W$
+if and only if $f^*\alpha = \text{id}_{f^*x}$.)
+
+\medskip\noindent
+On the other hand, let $V$ be a scheme and let $x, y$ be objects of
+$\mathcal{X}$ over $V$. Then $(x, y)$ define a morphism
+$\mathcal{V} = (\Sch/V)_{fppf} \to \mathcal{X} \times \mathcal{X}$.
+Next, let $h : V' \to V$ be a morphism of schemes and let
+$\alpha : h^*x \to h^*y$ and $\beta : h^*x \to h^*y$ be morphisms
+of $\mathcal{X}_{V'}$. Then $(\alpha, \beta)$ define a morphism
+$\mathcal{V}' = (\Sch/V')_{fppf} \to \mathcal{Y} \times \mathcal{Y}$.
+Condition (2) now says that (with any choices as above) the
+condition ``$\alpha = \beta$'' defines an algebraic space over $V'$.
+
+\medskip\noindent
+To see the equivalence, given $(\alpha, \beta)$ as in (2) we see that
+(1) implies that ``$\alpha^{-1} \circ \beta = \text{id}_{h^*x}$''
+defines an algebraic space. The implication (2) $\Rightarrow$ (1)
+follows by taking $h = \text{id}_V$ and $\beta = \text{id}_x$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Limit preserving on objects}
+\label{section-limit-preserving}
+
+\noindent
+Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of categories fibred in groupoids over $(\Sch/S)_{fppf}$. We will say that
+$p$ is {\it limit preserving on objects} if the following condition holds:
+Given any data consisting of
+\begin{enumerate}
+\item an affine scheme $U = \lim_{i \in I} U_i$ which is written as the
+directed limit of affine schemes $U_i$ over $S$,
+\item an object $y_i$ of $\mathcal{Y}$ over $U_i$ for some $i$,
+\item an object $x$ of $\mathcal{X}$ over $U$, and
+\item an isomorphism $\gamma : p(x) \to y_i|_U$,
+\end{enumerate}
+then there exists an $i' \geq i$, an object $x_{i'}$ of
+$\mathcal{X}$ over $U_{i'}$, an isomorphism
+$\beta : x_{i'}|_U \to x$, and an isomorphism
+$\gamma_{i'} : p(x_{i'}) \to y_i|_{U_{i'}}$
+such that
+\begin{equation}
+\label{equation-limit-preserving}
+\vcenter{
+\xymatrix{
+p(x_{i'}|_U) \ar[d]_{p(\beta)} \ar[rr]_{\gamma_{i'}|_U} & &
+(y_i|_{U_{i'}})|_U \ar@{=}[d] \\
+p(x) \ar[rr]^\gamma & & y_i|_U
+}
+}
+\end{equation}
+commutes. In this situation we say that ``$(i', x_{i'}, \beta, \gamma_{i'})$
+is a {\it solution} to the problem posed by our data (1), (2), (3), (4)''.
+The motivation for this definition comes from
+Limits of Spaces,
+Lemma \ref{spaces-limits-lemma-characterize-relative-limit-preserving}.
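+
+\medskip\noindent
+For orientation, suppose that $\mathcal{X}$ and $\mathcal{Y}$ are
+representable by algebraic spaces $X$ and $Y$ over $S$. Unwinding the
+definition one finds that $p$ is limit preserving on objects if and only
+if for every affine scheme $U = \lim_{i \in I} U_i$ written as a directed
+limit of affine schemes over $Y$ the canonical map
+$$
+\colim_{i' \geq i} \Mor_Y(U_{i'}, X) \longrightarrow \Mor_Y(U, X)
+$$
+is surjective. By
+Limits of Spaces, Lemma \ref{spaces-limits-lemma-surjection-is-enough}
+such surjectivity characterizes the morphisms $X \to Y$ which are
+locally of finite presentation; compare with
+Lemma \ref{lemma-representable-by-spaces-limit-preserving} below.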
+
+\begin{lemma}
+\label{lemma-base-change-limit-preserving}
+Let $p : \mathcal{X} \to \mathcal{Y}$ and $q : \mathcal{Z} \to \mathcal{Y}$
+be $1$-morphisms of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $p : \mathcal{X} \to \mathcal{Y}$ is limit preserving on objects, then so
+is the base change
+$p' : \mathcal{X} \times_\mathcal{Y} \mathcal{Z} \to \mathcal{Z}$
+of $p$ by $q$.
+\end{lemma}
+
+\begin{proof}
+This is formal. Let $U = \lim_{i \in I} U_i$ be the directed limit
+of affine schemes $U_i$ over $S$, let $z_i$ be an object of $\mathcal{Z}$
+over $U_i$ for some $i$, let $w$ be an object of
+$\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$ over $U$, and let
+$\delta : p'(w) \to z_i|_U$ be an isomorphism.
+We may write
+$w = (U, x, z, \alpha)$ for some object $x$ of $\mathcal{X}$ over $U$
+and object $z$ of $\mathcal{Z}$ over $U$ and isomorphism
+$\alpha : p(x) \to q(z)$. Note that $p'(w) = z$ hence
+$\delta : z \to z_i|_U$. Set $y_i = q(z_i)$ and
+$\gamma = q(\delta) \circ \alpha : p(x) \to y_i|_U$.
+As $p$ is limit preserving on objects there exists an $i' \geq i$
+and an object $x_{i'}$ of $\mathcal{X}$ over $U_{i'}$ as well as
+isomorphisms $\beta : x_{i'}|_U \to x$ and
+$\gamma_{i'} : p(x_{i'}) \to y_i|_{U_{i'}}$ such that
+(\ref{equation-limit-preserving}) commutes. Then we consider the object
+$w_{i'} = (U_{i'}, x_{i'}, z_i|_{U_{i'}}, \gamma_{i'})$ of
+$\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$ over $U_{i'}$
+and define isomorphisms
+$$
+w_{i'}|_U = (U, x_{i'}|_U, z_i|_U, \gamma_{i'}|_U)
+\xrightarrow{(\beta, \delta^{-1})}
+(U, x, z, \alpha) = w
+$$
+and
+$$
+p'(w_{i'}) = z_i|_{U_{i'}} \xrightarrow{\text{id}} z_i|_{U_{i'}}.
+$$
+These combine to give a solution to the problem.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-limit-preserving}
+Let $p : \mathcal{X} \to \mathcal{Y}$ and $q : \mathcal{Y} \to \mathcal{Z}$
+be $1$-morphisms of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $p$ and $q$ are limit preserving on objects, then so is the composition
+$q \circ p$.
+\end{lemma}
+
+\begin{proof}
+This is formal. Let $U = \lim_{i \in I} U_i$ be the directed limit
+of affine schemes $U_i$ over $S$, let $z_i$ be an object of $\mathcal{Z}$
+over $U_i$ for some $i$, let $x$ be an object of $\mathcal{X}$ over $U$,
+and let $\gamma : q(p(x)) \to z_i|_U$ be an isomorphism. As $q$ is
+limit preserving on objects there exist an $i' \geq i$, an object
+$y_{i'}$ of $\mathcal{Y}$ over $U_{i'}$, an isomorphism
+$\beta : y_{i'}|_U \to p(x)$, and an isomorphism
+$\gamma_{i'} : q(y_{i'}) \to z_i|_{U_{i'}}$
+such that (\ref{equation-limit-preserving}) is commutative. As $p$ is
+limit preserving on objects there exist an $i'' \geq i'$, an object
+$x_{i''}$ of $\mathcal{X}$ over $U_{i''}$, an isomorphism
+$\beta' : x_{i''}|_U \to x$, and an isomorphism
+$\gamma'_{i''} : p(x_{i''}) \to y_{i'}|_{U_{i''}}$
+such that (\ref{equation-limit-preserving}) is commutative.
+The solution is to take $x_{i''}$ over $U_{i''}$ with isomorphism
+$$
+q(p(x_{i''})) \xrightarrow{q(\gamma'_{i''})}
+q(y_{i'})|_{U_{i''}} \xrightarrow{\gamma_{i'}|_{U_{i''}}}
+z_i|_{U_{i''}}
+$$
+and isomorphism $\beta' : x_{i''}|_U \to x$. We omit the verification
+that (\ref{equation-limit-preserving}) is commutative.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-limit-preserving}
+Let $p : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. If $p$ is
+representable by algebraic spaces, then the following are equivalent:
+\begin{enumerate}
+\item $p$ is limit preserving on objects, and
+\item $p$ is locally of finite presentation (see
+Algebraic Stacks,
+Definition \ref{algebraic-definition-relative-representable-property}).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2). Let $U = \lim_{i \in I} U_i$ be the directed limit
+of affine schemes $U_i$ over $S$, let $y_i$ be an object of $\mathcal{Y}$
+over $U_i$ for some $i$, let $x$ be an object of $\mathcal{X}$ over $U$,
+and let $\gamma : p(x) \to y_i|_U$ be an isomorphism. Let
+$X_{y_i}$ denote an algebraic space over $U_i$ representing the $2$-fibre
+product
+$$
+(\Sch/U_i)_{fppf} \times_{y_i, \mathcal{Y}, p} \mathcal{X}.
+$$
+Note that $\xi = (U, U \to U_i, x, \gamma^{-1})$ defines an object of
+this $2$-fibre product over $U$. Via the $2$-Yoneda lemma $\xi$ corresponds
+to a morphism $f_\xi : U \to X_{y_i}$ over $U_i$. By
+Limits of Spaces, Proposition
+\ref{spaces-limits-proposition-characterize-locally-finite-presentation}
+there exists an $i' \geq i$ and a morphism $f_{i'} : U_{i'} \to X_{y_i}$
+such that $f_\xi$ is the composition of $f_{i'}$ and the projection
+morphism $U \to U_{i'}$. Also, the $2$-Yoneda lemma tells us that
+$f_{i'}$ corresponds to an object
+$\xi_{i'} = (U_{i'}, U_{i'} \to U_i, x_{i'}, \alpha)$ of
+the displayed $2$-fibre product over $U_{i'}$ whose restriction to
+$U$ recovers $\xi$. In particular we obtain an isomorphism
+$\beta : x_{i'}|_U \to x$. Note that $\alpha : y_i|_{U_{i'}} \to p(x_{i'})$.
+Hence we see that taking $x_{i'}$, the isomorphism
+$\beta : x_{i'}|_U \to x$, and the isomorphism
+$\gamma_{i'} = \alpha^{-1} : p(x_{i'}) \to y_i|_{U_{i'}}$
+is a solution to the problem.
+
+\medskip\noindent
+Assume (1). Choose a scheme $T$ and a $1$-morphism
+$y : (\Sch/T)_{fppf} \to \mathcal{Y}$. Let
+$X_y$ be an algebraic space over $T$ representing the $2$-fibre product
+$(\Sch/T)_{fppf} \times_{y, \mathcal{Y}, p} \mathcal{X}$.
+We have to show that $X_y \to T$ is locally of finite presentation.
+To do this we will use the criterion in
+Limits of Spaces, Remark \ref{spaces-limits-remark-limit-preserving}.
+Consider an affine scheme $U = \lim_{i \in I} U_i$ written as the
+directed limit of affine schemes over $T$.
+Pick any $i \in I$ and set $y_i = y|_{U_i}$. Let $i'$ denote an
+element of $I$ with $i' \geq i$. By the $2$-Yoneda lemma
+morphisms $U \to X_y$ over $T$ correspond bijectively
+to isomorphism classes of pairs $(x, \alpha)$ where $x$ is an object
+of $\mathcal{X}$ over $U$ and $\alpha : y|_U \to p(x)$ is an isomorphism.
+Of course giving $\alpha$ is, up to an inverse, the same thing as giving
+an isomorphism $\gamma : p(x) \to y_i|_U$.
+Similarly for morphisms $U_{i'} \to X_y$ over $T$. Hence (1) guarantees
+that the canonical map
+$$
+\colim_{i' \geq i} X_y(U_{i'}) \longrightarrow X_y(U)
+$$
+is surjective in this situation. It follows from
+Limits of Spaces, Lemma \ref{spaces-limits-lemma-surjection-is-enough}
+that $X_y \to T$ is locally of finite presentation.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open-immersion-limit-preserving}
+Let $p : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. Assume $p$ is representable
+by algebraic spaces and an open immersion. Then $p$ is limit preserving
+on objects.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Lemma \ref{lemma-representable-by-spaces-limit-preserving}
+and (via the general principle
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-representable-transformations-property-implication})
+from the fact that an open immersion of algebraic spaces is
+locally of finite presentation, see
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-open-immersion-locally-finite-presentation}.
+\end{proof}
+
+\noindent
+Let $S$ be a scheme. In the following lemma we need the notion of the
+{\it size} of an algebraic space $X$ over $S$. Namely, given a cardinal
+$\kappa$ we will say $X$ has $\text{size}(X) \leq \kappa$ if and only
+if there exists a scheme $U$ with $\text{size}(U) \leq \kappa$ (see
+Sets, Section \ref{sets-section-categories-schemes}) and a surjective
+\'etale morphism $U \to X$.
+
+\begin{lemma}
+\label{lemma-check-representable-limit-preserving}
+Let $S$ be a scheme.
+Let $\kappa = \text{size}(T)$ for some $T \in \Ob((\Sch/S)_{fppf})$.
+Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of categories fibred in groupoids over $(\Sch/S)_{fppf}$
+such that
+\begin{enumerate}
+\item $\mathcal{Y} \to (\Sch/S)_{fppf}$ is limit preserving on objects,
+\item for an affine scheme $V$ locally of finite presentation over $S$ and
+$y \in \Ob(\mathcal{Y}_V)$ the fibre product
+$(\Sch/V)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}$ is representable
+by an algebraic space of size $\leq \kappa$\footnote{The condition on
+size can be dropped by those ignoring set theoretic issues.},
+\item $\mathcal{X}$ and $\mathcal{Y}$ are stacks for the Zariski topology.
+\end{enumerate}
+Then $f$ is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Let $V$ be a scheme over $S$ and $y \in \mathcal{Y}_V$. We have to prove
+$(\Sch/V)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}$ is representable
+by an algebraic space.
+
+\medskip\noindent
+Case I: $V$ is affine and maps into an affine open $\Spec(\Lambda) \subset S$.
+Then we can write $V = \lim V_i$ with each $V_i$ affine and of finite
+presentation over $\Spec(\Lambda)$, see
+Algebra, Lemma \ref{algebra-lemma-ring-colimit-fp}.
+Then $y$ comes from an object $y_i$ over $V_i$ for some $i$ by assumption (1).
+By assumption (3) the fibre product
+$(\Sch/V_i)_{fppf} \times_{y_i, \mathcal{Y}} \mathcal{X}$ is representable
+by an algebraic space $Z_i$. Then
+$(\Sch/V)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}$ is representable
+by $Z_i \times_{V_i} V$.
+
+\medskip\noindent
+Case II: $V$ is general. Choose an affine open covering
+$V = \bigcup_{i \in I} V_i$ such that each $V_i$ maps into an affine open
+of $S$. We first claim
+that $\mathcal{Z} = (\Sch/V)_{fppf} \times_{y, \mathcal{Y}} \mathcal{X}$
+is a stack in setoids for the Zariski topology. Namely, it is a stack in
+groupoids for the Zariski topology by
+Stacks, Lemma \ref{stacks-lemma-2-product-stacks-in-groupoids}.
+Then suppose that $z$ is an object of $\mathcal{Z}$ over a scheme $T$.
+Denote $g : T \to V$ the morphism corresponding to the
+projection of $z$ in $(\Sch/V)_{fppf}$. Consider the Zariski sheaf
+$\mathit{I} = \mathit{Isom}_{\mathcal{Z}}(z, z)$. By Case I we see that
+$\mathit{I}|_{g^{-1}(V_i)} = *$ (the singleton sheaf). Hence
+$\mathit{I} = *$. Thus $\mathcal{Z}$ is fibred in setoids. To finish
+the proof we have to show that the Zariski sheaf
+$Z : T \mapsto \Ob(\mathcal{Z}_T)/\cong$ is an algebraic space, see
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-characterize-representable-by-space}.
+There is a map $p : Z \to V$ (transformation of functors) and by Case I
+we know that $Z_i = p^{-1}(V_i)$ is an algebraic space. The morphisms
+$Z_i \to Z$ are representable by open immersions and
+$\coprod Z_i \to Z$ is surjective (in the Zariski topology).
+Hence $Z$ is a sheaf for the fppf topology by
+Bootstrap, Lemma \ref{bootstrap-lemma-glueing-sheaves}.
+Thus Spaces, Lemma \ref{spaces-lemma-glueing-algebraic-spaces}
+applies and we conclude that $Z$ is an algebraic space\footnote{
+To see that the set theoretic condition of that lemma is satisfied
+we argue as follows: First choose the open covering such that
+$|I| \leq \text{size}(V)$. Next, choose schemes $U_i$ of size
+$\leq \max(\kappa, \text{size}(V))$ and surjective \'etale morphisms
+$U_i \to Z_i$; we can do this by assumption (2) and
+Sets, Lemma \ref{sets-lemma-bound-size-fibre-product}
+(details omitted). Then
+Sets, Lemma \ref{sets-lemma-what-is-in-it}
+implies that $\coprod U_i$ is an object of $(\Sch/S)_{fppf}$.
+Hence $\coprod Z_i$ is an algebraic space by
+Spaces, Lemma \ref{spaces-lemma-coproduct-algebraic-spaces}.
+}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-property-limit-preserving}
+Let $S$ be a scheme. Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of categories fibred in groupoids over $(\Sch/S)_{fppf}$. Let $\mathcal{P}$
+be a property of morphisms of algebraic spaces as in
+Algebraic Stacks, Definition
+\ref{algebraic-definition-relative-representable-property}. If
+\begin{enumerate}
+\item $f$ is representable by algebraic spaces,
+\item $\mathcal{Y} \to (\Sch/S)_{fppf}$ is limit preserving on objects,
+\item for an affine scheme $V$ locally of finite presentation over $S$ and
+$y \in \mathcal{Y}_V$ the resulting morphism of algebraic spaces
+$f_y : F_y \to V$, see Algebraic Stacks, Equation
+(\ref{algebraic-equation-representable-by-algebraic-spaces}),
+has property $\mathcal{P}$.
+\end{enumerate}
+Then $f$ has property $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+Let $V$ be a scheme over $S$ and $y \in \mathcal{Y}_V$. We have to show
+that $F_y \to V$ has property $\mathcal{P}$. Since $\mathcal{P}$ is
+fppf local on the base we may assume that $V$ is an affine scheme which
+maps into an affine open $\Spec(\Lambda) \subset S$. Thus we can write
+$V = \lim V_i$ with each $V_i$ affine and of finite presentation over
+$\Spec(\Lambda)$, see Algebra, Lemma \ref{algebra-lemma-ring-colimit-fp}.
+Then $y$ comes from an object $y_i$ over $V_i$ for some $i$ by assumption (2).
+By assumption (3) the morphism $F_{y_i} \to V_i$ has property $\mathcal{P}$.
+As $\mathcal{P}$ is stable under arbitrary base change and since
+$F_y = F_{y_i} \times_{V_i} V$ we conclude that $F_y \to V$ has property
+$\mathcal{P}$ as desired.
+\end{proof}
+
+
+
+\section{Formally smooth on objects}
+\label{section-formally-smooth}
+
+\noindent
+Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of categories fibred in groupoids over $(\Sch/S)_{fppf}$. We will say that
+$p$ is {\it formally smooth on objects} if the following condition holds:
+Given any data consisting of
+\begin{enumerate}
+\item a first order thickening $U \subset U'$ of affine schemes over $S$,
+\item an object $y'$ of $\mathcal{Y}$ over $U'$,
+\item an object $x$ of $\mathcal{X}$ over $U$, and
+\item an isomorphism $\gamma : p(x) \to y'|_U$,
+\end{enumerate}
+then there exists an object $x'$ of
+$\mathcal{X}$ over $U'$ with an isomorphism
+$\beta : x'|_U \to x$ and an isomorphism $\gamma' : p(x') \to y'$
+such that
+\begin{equation}
+\label{equation-formally-smooth}
+\vcenter{
+\xymatrix{
+p(x'|_U) \ar[d]_{p(\beta)} \ar[rr]_{\gamma'|_U} & &
+y'|_U \ar@{=}[d] \\
+p(x) \ar[rr]^\gamma & & y'|_U
+}
+}
+\end{equation}
+commutes. In this situation we say that ``$(x', \beta, \gamma')$
+is a {\it solution} to the problem posed by our data (1), (2), (3), (4)''.
+
+\begin{lemma}
+\label{lemma-base-change-formally-smooth}
+Let $p : \mathcal{X} \to \mathcal{Y}$ and $q : \mathcal{Z} \to \mathcal{Y}$
+be $1$-morphisms of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $p : \mathcal{X} \to \mathcal{Y}$ is formally smooth on objects, then so
+is the base change
+$p' : \mathcal{X} \times_\mathcal{Y} \mathcal{Z} \to \mathcal{Z}$
+of $p$ by $q$.
+\end{lemma}
+
+\begin{proof}
+This is formal. Let $U \subset U'$ be a first order thickening
+of affine schemes over $S$, let $z'$ be an object of $\mathcal{Z}$
+over $U'$, let $w$ be an object of
+$\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$ over $U$, and let
+$\delta : p'(w) \to z'|_U$ be an isomorphism.
+We may write
+$w = (U, x, z, \alpha)$ for some object $x$ of $\mathcal{X}$ over $U$
+and object $z$ of $\mathcal{Z}$ over $U$ and isomorphism
+$\alpha : p(x) \to q(z)$. Note that $p'(w) = z$ hence
+$\delta : z \to z'|_U$. Set $y' = q(z')$ and
+$\gamma = q(\delta) \circ \alpha : p(x) \to y'|_U$.
+As $p$ is formally smooth on objects there exists an
+object $x'$ of $\mathcal{X}$ over $U'$ as well as
+isomorphisms $\beta : x'|_U \to x$ and $\gamma' : p(x') \to y'$ such that
+(\ref{equation-formally-smooth}) commutes. Then we consider the object
+$w' = (U', x', z', \gamma')$ of $\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$
+over $U'$ and define isomorphisms
+$$
+w'|_U = (U, x'|_U, z'|_U, \gamma'|_U)
+\xrightarrow{(\beta, \delta^{-1})}
+(U, x, z, \alpha) = w
+$$
+and
+$$
+p'(w') = z' \xrightarrow{\text{id}} z'.
+$$
+These combine to give a solution to the problem.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-formally-smooth}
+Let $p : \mathcal{X} \to \mathcal{Y}$ and $q : \mathcal{Y} \to \mathcal{Z}$
+be $1$-morphisms of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $p$ and $q$ are formally smooth on objects, then so is the composition
+$q \circ p$.
+\end{lemma}
+
+\begin{proof}
+This is formal. Let $U \subset U'$ be a first order thickening
+of affine schemes over $S$, let $z'$ be an object of $\mathcal{Z}$
+over $U'$, let $x$ be an object of $\mathcal{X}$ over $U$,
+and let $\gamma : q(p(x)) \to z'|_U$ be an isomorphism. As $q$ is
+formally smooth on objects there exist an object
+$y'$ of $\mathcal{Y}$ over $U'$, an isomorphism
+$\beta : y'|_U \to p(x)$, and an isomorphism $\gamma' : q(y') \to z'$
+such that (\ref{equation-formally-smooth}) is commutative. As $p$ is
+formally smooth on objects there exist an object
+$x'$ of $\mathcal{X}$ over $U'$, an isomorphism
+$\beta' : x'|_U \to x$, and an isomorphism $\gamma'' : p(x') \to y'$
+such that (\ref{equation-formally-smooth}) is commutative.
+The solution is to take $x'$ over $U'$ with isomorphism
+$$
+q(p(x')) \xrightarrow{q(\gamma'')} q(y') \xrightarrow{\gamma'} z'
+$$
+and isomorphism $\beta' : x'|_U \to x$. We omit the verification
+that (\ref{equation-formally-smooth}) is commutative.
+\end{proof}
+
+\noindent
+Note that the class of formally smooth morphisms of algebraic spaces is
+stable under arbitrary base change and local on the target in the
+fpqc topology, see
+More on Morphisms of Spaces,
+Lemma \ref{spaces-more-morphisms-lemma-base-change-formally-smooth} and
+\ref{spaces-more-morphisms-lemma-descending-property-formally-smooth}.
+Hence condition (2) in the lemma below makes sense.
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-formally-smooth}
+Let $p : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. If $p$ is
+representable by algebraic spaces, then the following are equivalent:
+\begin{enumerate}
+\item $p$ is formally smooth on objects, and
+\item $p$ is formally smooth (see
+Algebraic Stacks,
+Definition \ref{algebraic-definition-relative-representable-property}).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2). Let $U \subset U'$ be a first order thickening
+of affine schemes over $S$, let $y'$ be an object of $\mathcal{Y}$
+over $U'$, let $x$ be an object of $\mathcal{X}$ over $U$,
+and let $\gamma : p(x) \to y'|_U$ be an isomorphism. Let
+$X_{y'}$ denote an algebraic space over $U'$ representing the $2$-fibre
+product
+$$
+(\Sch/U')_{fppf} \times_{y', \mathcal{Y}, p} \mathcal{X}.
+$$
+Note that $\xi = (U, U \to U', x, \gamma^{-1})$ defines an object of
+this $2$-fibre product over $U$. Via the $2$-Yoneda lemma $\xi$ corresponds
+to a morphism $f_\xi : U \to X_{y'}$ over $U'$. As $X_{y'} \to U'$ is
+formally smooth by assumption there exists a morphism
+$f' : U' \to X_{y'}$ such that $f_\xi$ is the composition of $f'$
+and the morphism $U \to U'$. Also, the $2$-Yoneda lemma tells us that
+$f'$ corresponds to an object $\xi' = (U', U' \to U', x', \alpha)$ of
+the displayed $2$-fibre product over $U'$ whose restriction to
+$U$ recovers $\xi$. In particular we obtain an isomorphism
+$\beta : x'|_U \to x$. Note that $\alpha : y' \to p(x')$.
+Hence we see that taking $x'$, the isomorphism
+$\beta : x'|_U \to x$, and the isomorphism
+$\gamma' = \alpha^{-1} : p(x') \to y'$
+is a solution to the problem.
+
+\medskip\noindent
+Assume (1). Choose a scheme $T$ and a $1$-morphism
+$y : (\Sch/T)_{fppf} \to \mathcal{Y}$. Let
+$X_y$ be an algebraic space over $T$ representing the $2$-fibre product
+$(\Sch/T)_{fppf} \times_{y, \mathcal{Y}, p} \mathcal{X}$.
+We have to show that $X_y \to T$ is formally smooth.
+Hence it suffices to show that given a first order thickening
+$U \subset U'$ of affine schemes over $T$, then
+$X_y(U') \to X_y(U)$ is surjective (morphisms in the
+category of algebraic spaces over $T$). Set $y' = y|_{U'}$.
+By the $2$-Yoneda lemma morphisms $U \to X_y$ over $T$ correspond bijectively
+to isomorphism classes of pairs $(x, \alpha)$ where $x$ is an object
+of $\mathcal{X}$ over $U$ and $\alpha : y|_U \to p(x)$ is an isomorphism.
+Of course giving $\alpha$ is, up to an inverse, the same thing as giving
+an isomorphism $\gamma : p(x) \to y'|_U$.
+Similarly for morphisms $U' \to X_y$ over $T$. Hence (1) guarantees
+the surjectivity of $X_y(U') \to X_y(U)$
+in this situation and we win.
+\end{proof}
+
+
+
+
+
+
+
+\section{Surjective on objects}
+\label{section-formally-surjective}
+
+\noindent
+Let $S$ be a scheme. Let $p : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of categories fibred in groupoids over $(\Sch/S)_{fppf}$. We will say that
+$p$ is {\it surjective on objects} if the following condition holds:
+Given any data consisting of
+\begin{enumerate}
+\item a field $k$ over $S$, and
+\item an object $y$ of $\mathcal{Y}$ over $\Spec(k)$,
+\end{enumerate}
+then there exist an extension $K/k$ of fields over $S$ and an
+object $x$ of $\mathcal{X}$ over $\Spec(K)$
+such that $p(x) \cong y|_{\Spec(K)}$.
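+
+\noindent
+As a sanity check of this definition (we will not use this later),
+consider the case where $p$ comes from a morphism of schemes.
+
+\begin{example}
+\label{example-surjective-on-objects-schemes}
+Let $f : X \to Y$ be a morphism of schemes over $S$ and let
+$p : (\Sch/X)_{fppf} \to (\Sch/Y)_{fppf}$ be the associated $1$-morphism
+of categories fibred in groupoids. An object of $(\Sch/Y)_{fppf}$ over
+$\Spec(k)$ is simply a morphism $y : \Spec(k) \to Y$ over $S$. Thus $p$
+is surjective on objects if and only if for every such $y$ there exist a
+field extension $K/k$ and a morphism $x : \Spec(K) \to X$ fitting into a
+commutative diagram
+$$
+\xymatrix{
+\Spec(K) \ar[d] \ar[r]^x & X \ar[d]^f \\
+\Spec(k) \ar[r]^y & Y
+}
+$$
+i.e., if and only if $f$ is a surjective morphism of schemes.
+\end{example}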
+
+\begin{lemma}
+\label{lemma-base-change-surjective}
+Let $p : \mathcal{X} \to \mathcal{Y}$ and $q : \mathcal{Z} \to \mathcal{Y}$
+be $1$-morphisms of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $p : \mathcal{X} \to \mathcal{Y}$ is surjective on objects, then so
+is the base change
+$p' : \mathcal{X} \times_\mathcal{Y} \mathcal{Z} \to \mathcal{Z}$
+of $p$ by $q$.
+\end{lemma}
+
+\begin{proof}
+This is formal. Let $z$ be an object of $\mathcal{Z}$ over a field $k$.
+As $p$ is surjective on objects there exists an extension $K/k$
+and an object $x$ of $\mathcal{X}$ over $K$ and an isomorphism
+$\alpha : p(x) \to q(z)|_{\Spec(K)}$. Then
+$w = (\Spec(K), x, z|_{\Spec(K)}, \alpha)$ is an object of
+$\mathcal{X} \times_\mathcal{Y} \mathcal{Z}$ over $K$ with
+$p'(w) = z|_{\Spec(K)}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-surjective}
+Let $p : \mathcal{X} \to \mathcal{Y}$ and $q : \mathcal{Y} \to \mathcal{Z}$
+be $1$-morphisms of categories fibred in groupoids over $(\Sch/S)_{fppf}$.
+If $p$ and $q$ are surjective on objects, then so is the composition
+$q \circ p$.
+\end{lemma}
+
+\begin{proof}
+This is formal. Let $z$ be an object of $\mathcal{Z}$ over a field $k$.
+As $q$ is surjective on objects there exists a field extension $K/k$
+and an object $y$ of $\mathcal{Y}$ over $K$ such that
+$q(y) \cong z|_{\Spec(K)}$. As $p$ is surjective on objects there
+exists a field extension $L/K$ and an object $x$ of $\mathcal{X}$
+over $L$ such that $p(x) \cong y|_{\Spec(L)}$. Then the field extension
+$L/k$ and the object $x$ of $\mathcal{X}$ over $L$ satisfy
+$q(p(x)) \cong z|_{\Spec(L)}$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-by-spaces-surjective}
+Let $p : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories
+fibred in groupoids over $(\Sch/S)_{fppf}$. If $p$ is
+representable by algebraic spaces, then the following are equivalent:
+\begin{enumerate}
+\item $p$ is surjective on objects, and
+\item $p$ is surjective (see
+Algebraic Stacks,
+Definition \ref{algebraic-definition-relative-representable-property}).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2). Let $k$ be a field and let $y$ be an object of
+$\mathcal{Y}$ over $k$. Let $X_y$ denote an algebraic space over $k$
+representing the $2$-fibre product
+$$
+(\Sch/\Spec(k))_{fppf} \times_{y, \mathcal{Y}, p} \mathcal{X}.
+$$
+As we've assumed that $p$ is surjective we see that $X_y$ is not empty.
+Hence we can find a field extension $K/k$ and a $K$-valued point
+$x$ of $X_y$. Via the $2$-Yoneda lemma this corresponds to an object
+$x$ of $\mathcal{X}$ over $K$ together with an isomorphism
+$p(x) \cong y|_{\Spec(K)}$ and we see that (1) holds.
+
+\medskip\noindent
+Assume (1). Choose a scheme $T$ and a $1$-morphism
+$y : (\Sch/T)_{fppf} \to \mathcal{Y}$. Let
+$X_y$ be an algebraic space over $T$ representing the $2$-fibre product
+$(\Sch/T)_{fppf} \times_{y, \mathcal{Y}, p} \mathcal{X}$.
+We have to show that $X_y \to T$ is surjective. By
+Morphisms of Spaces, Definition \ref{spaces-morphisms-definition-surjective}
+we have to show that $|X_y| \to |T|$ is surjective.
+This means exactly that given a field $k$ over $T$ and a
+morphism $t : \Spec(k) \to T$ there exists a field extension
+$K/k$ and a morphism $x : \Spec(K) \to X_y$ such that
+$$
+\xymatrix{
+\Spec(K) \ar[d] \ar[r]_x & X_y \ar[d] \\
+\Spec(k) \ar[r]^t & T
+}
+$$
+commutes. By the $2$-Yoneda lemma this means exactly that we have to find
+$k \subset K$ and an object $x$ of $\mathcal{X}$ over $K$ such that
+$p(x) \cong t^*y|_{\Spec(K)}$. Hence (1) guarantees that this is
+the case and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Algebraic morphisms}
+\label{section-algebraic}
+
+\noindent
+The following notion is occasionally useful.
+
+\begin{definition}
+\label{definition-algebraic}
+Let $S$ be a scheme. Let $F : \mathcal{X} \to \mathcal{Y}$ be a
+$1$-morphism of stacks in groupoids over $(\Sch/S)_{fppf}$.
+We say that $F$ is {\it algebraic} if for every scheme $T$ and every
+object $\xi$ of $\mathcal{Y}$ over $T$ the $2$-fibre product
+$$
+(\Sch/T)_{fppf} \times_{\xi, \mathcal{Y}} \mathcal{X}
+$$
+is an algebraic stack over $S$.
+\end{definition}
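+
+\noindent
+As a trivial illustration of the definition (included only to fix ideas),
+note that a $1$-morphism which is representable by algebraic spaces is
+algebraic: in the situation of the definition the $2$-fibre product
+$$
+(\Sch/T)_{fppf} \times_{\xi, \mathcal{Y}} \mathcal{X}
+$$
+is then equivalent to an algebraic space, and an algebraic space is in
+particular an algebraic stack.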
+
+\noindent
+With this terminology in place we have the following result that generalizes
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-representable-morphism-to-algebraic}.
+
+\begin{lemma}
+\label{lemma-algebraic-morphism-to-algebraic}
+Let $S$ be a scheme.
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of
+stacks in groupoids over $(\Sch/S)_{fppf}$. If
+\begin{enumerate}
+\item $\mathcal{Y}$ is an algebraic stack, and
+\item $F$ is algebraic (see above),
+\end{enumerate}
+then $\mathcal{X}$ is an algebraic stack.
+\end{lemma}
+
+\begin{proof}
+By assumption (1) there exists a scheme $T$ and an object
+$\xi$ of $\mathcal{Y}$ over $T$ such that the corresponding
+$1$-morphism $\xi : (\Sch/T)_{fppf} \to \mathcal{Y}$
+is smooth and surjective. Then
+$\mathcal{U} = (\Sch/T)_{fppf} \times_{\xi, \mathcal{Y}} \mathcal{X}$
+is an algebraic stack by assumption (2).
+Choose a scheme $U$ and a surjective smooth $1$-morphism
+$(\Sch/U)_{fppf} \to \mathcal{U}$.
+The projection $\mathcal{U} \longrightarrow \mathcal{X}$
+is, as the base change of the morphism
+$\xi : (\Sch/T)_{fppf} \to \mathcal{Y}$,
+surjective and smooth, see
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-base-change-representable-transformations-property}.
+Then the composition
+$(\Sch/U)_{fppf} \to \mathcal{U} \to \mathcal{X}$
+is surjective and smooth as a composition of surjective and smooth
+morphisms, see
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-composition-representable-transformations-property}.
+Hence $\mathcal{X}$ is an algebraic stack by
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-smooth-surjective-morphism-implies-algebraic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-from-algebraic}
+Let $S$ be a scheme. Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of stacks in groupoids over $(\Sch/S)_{fppf}$. If $\mathcal{X}$ is an
+algebraic stack and $\Delta : \mathcal{Y} \to \mathcal{Y} \times \mathcal{Y}$
+is representable by algebraic spaces, then $F$ is algebraic.
+\end{lemma}
+
+\begin{proof}
+Choose a representable stack in groupoids $\mathcal{U}$ and a surjective
+smooth $1$-morphism $\mathcal{U} \to \mathcal{X}$. Let $T$ be a scheme and
+let $\xi$ be an object of $\mathcal{Y}$ over $T$. The morphism of
+$2$-fibre products
+$$
+(\Sch/T)_{fppf} \times_{\xi, \mathcal{Y}} \mathcal{U}
+\longrightarrow
+(\Sch/T)_{fppf} \times_{\xi, \mathcal{Y}} \mathcal{X}
+$$
+is representable by algebraic spaces, surjective, and smooth as a
+base change of $\mathcal{U} \to \mathcal{X}$, see
+Algebraic Stacks,
+Lemmas \ref{algebraic-lemma-base-change-representable-by-spaces} and
+\ref{algebraic-lemma-base-change-representable-transformations-property}.
+By our condition on the diagonal of $\mathcal{Y}$ we see that
+the source of this morphism is representable by an algebraic space, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-representable-diagonal}.
+Hence the target is an algebraic stack by
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-smooth-surjective-morphism-implies-algebraic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonals-and-algebraic-morphisms}
+Let $S$ be a scheme. Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of stacks in groupoids over $(\Sch/S)_{fppf}$.
+If $F$ is algebraic and
+$\Delta : \mathcal{Y} \to \mathcal{Y} \times \mathcal{Y}$
+is representable by algebraic spaces, then
+$\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Assume $F$ is algebraic and
+$\Delta : \mathcal{Y} \to \mathcal{Y} \times \mathcal{Y}$
+is representable by algebraic spaces.
+Take a scheme $U$ over $S$ and two objects $x_1, x_2$ of
+$\mathcal{X}$ over $U$.
+We have to show that $\mathit{Isom}(x_1, x_2)$ is an algebraic space
+over $U$, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-representable-diagonal}.
+Set $y_i = F(x_i)$. We have a morphism of sheaves of sets
+$$
+f : \mathit{Isom}(x_1, x_2) \to \mathit{Isom}(y_1, y_2)
+$$
+and the target is an algebraic space by assumption.
+Thus it suffices to show that $f$ is representable by
+algebraic spaces, see Bootstrap, Lemma
+\ref{bootstrap-lemma-representable-by-spaces-over-space}.
+Thus we can choose a scheme $V$ over $U$ and an
+isomorphism $\beta : y_{1, V} \to y_{2, V}$ and
+we have to show the functor
+$$
+(\Sch/V)_{fppf} \to \textit{Sets},\quad
+T/V \mapsto \{\alpha : x_{1, T} \to x_{2, T}
+\text{ in }\mathcal{X}_T \mid F(\alpha) = \beta|_T\}
+$$
+is an algebraic space. Consider the objects
+$z_1 = (V, x_{1, V}, \text{id})$ and
+$z_2 = (V, x_{2, V}, \beta)$ of
+$$
+\mathcal{Z} = (\Sch/V)_{fppf} \times_{y_{1, V}, \mathcal{Y}} \mathcal{X}.
+$$
+Then it is straightforward to verify that
+the functor above is equal to $\mathit{Isom}(z_1, z_2)$
+on $(\Sch/V)_{fppf}$. Hence this is an algebraic space
+by our assumption that $F$ is algebraic (and the definition
+of algebraic stacks).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Spaces of sections}
+\label{section-spaces-sections}
+
+\noindent
+Given morphisms $W \to Z \to U$ we can consider the functor that associates
+to a scheme $U'$ over $U$ the set of sections $\sigma : Z_{U'} \to W_{U'}$
+of the base change $W_{U'} \to Z_{U'}$ of the morphism $W \to Z$.
+In this section we prove some preliminary lemmas on this functor.
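+
+\noindent
+Before doing so we describe this functor in the simplest nontrivial case;
+the following example is only meant to build intuition.
+
+\begin{example}
+\label{example-sections-disjoint-union}
+Let $Z \to U$ be a morphism of schemes and let $W = Z \amalg Z$ with
+$W \to Z$ the morphism which is the identity on each summand. A section
+$\sigma : Z_{U'} \to W_{U'}$ of $W_{U'} = Z_{U'} \amalg Z_{U'} \to Z_{U'}$
+is the same thing as a decomposition $Z_{U'} = Z_1 \amalg Z_2$ into open
+and closed subschemes, namely $Z_1 = \sigma^{-1}(\text{first summand})$
+and $Z_2 = \sigma^{-1}(\text{second summand})$. Thus the functor of
+sections sends $U'$ to the set of such decompositions of $Z_{U'}$.
+If $Z \to U$ is finite locally free, then
+Lemma \ref{lemma-space-of-sections} below tells us that this functor
+is an algebraic space \'etale over $U$.
+\end{example}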
+
+\begin{lemma}
+\label{lemma-surjection-space-of-sections}
+Let $Z \to U$ be a finite morphism of schemes.
+Let $W$ be an algebraic space and let $W \to Z$ be a
+surjective \'etale morphism. Then there exists a surjective
+\'etale morphism $U' \to U$ and a section
+$$
+\sigma : Z_{U'} \to W_{U'}
+$$
+of the morphism $W_{U'} \to Z_{U'}$.
+\end{lemma}
+
+\begin{proof}
+We may choose a separated scheme $W'$ and a surjective \'etale morphism
+$W' \to W$. Hence after replacing $W$ by $W'$ we may assume that $W$
+is a separated scheme. Write $f : W \to Z$ and $\pi : Z \to U$.
+Note that $\pi \circ f : W \to U$ is separated as
+$W$ is separated (see
+Schemes, Lemma \ref{schemes-lemma-compose-after-separated}).
+Let $u \in U$ be a point. Clearly it suffices
+to find an \'etale neighbourhood $(U', u')$ of $(U, u)$ such that
+a section $\sigma$ exists over $U'$. Let $z_1, \ldots, z_r$
+be the points of $Z$ lying above $u$. For each $i$ choose a point
+$w_i \in W$ which maps to $z_i$. We may pick an \'etale neighbourhood
+$(U', u') \to (U, u)$ such that the conclusions of
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-etale-splits-off-quasi-finite-part-technical-variant}
+hold for both $Z \to U$ and the points $z_1, \ldots, z_r$
+and $W \to U$ and the points $w_1, \ldots, w_r$. Hence, after
+replacing $(U, u)$ by $(U', u')$ and relabeling, we may assume that
+all the field extensions $\kappa(z_i)/\kappa(u)$ and
+$\kappa(w_i)/\kappa(u)$ are purely inseparable, and moreover
+that there exist disjoint union decompositions
+$$
+Z = V_1 \amalg \ldots \amalg V_r \amalg A, \quad
+W = W_1 \amalg \ldots \amalg W_r \amalg B
+$$
+by open and closed subschemes
+with $z_i \in V_i$, $w_i \in W_i$ and $V_i \to U$, $W_i \to U$ finite.
+After replacing $U$ by $U \setminus \pi(A)$ we may assume that
+$A = \emptyset$, i.e., $Z = V_1 \amalg \ldots \amalg V_r$.
+After replacing $W_i$ by $W_i \cap f^{-1}(V_i)$ and
+$B$ by $B \cup \bigcup W_i \cap f^{-1}(Z \setminus V_i)$
+we may assume that $f$ maps $W_i$ into $V_i$.
+Then $f_i = f|_{W_i} : W_i \to V_i$ is a morphism of schemes finite over $U$,
+hence finite (see
+Morphisms, Lemma \ref{morphisms-lemma-finite-permanence}).
+It is also \'etale (by assumption),
+$f_i^{-1}(\{z_i\}) = \{w_i\}$, and induces an isomorphism of residue
+fields $\kappa(z_i) = \kappa(w_i)$ (because both are purely inseparable
+extensions of $\kappa(u)$ and $\kappa(w_i)/\kappa(z_i)$
+is separable as $f$ is \'etale). Hence by
+\'Etale Morphisms, Lemma \ref{etale-lemma-finite-etale-one-point}
+we see that $f_i$ is an isomorphism in a neighbourhood $V_i'$ of
+$z_i$. Since $\pi : Z \to U$ is closed, after shrinking $U$, we may assume
+that $W_i \to V_i$ is an isomorphism. This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-space-of-sections}
+Let $Z \to U$ be a finite locally free morphism of schemes.
+Let $W$ be an algebraic space and let $W \to Z$ be an \'etale morphism.
+Then the functor
+$$
+F : (\Sch/U)_{fppf}^{opp} \longrightarrow \textit{Sets},
+$$
+defined by the rule
+$$
+U' \longmapsto
+F(U') =
+\{\sigma : Z_{U'} \to W_{U'}\text{ section of }W_{U'} \to Z_{U'}\}
+$$
+is an algebraic space and the morphism $F \to U$ is \'etale.
+\end{lemma}
+
+\begin{proof}
+Assume first that $W \to Z$ is also separated.
+Let $U'$ be a scheme over $U$ and let $\sigma \in F(U')$. By
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-section-immersion}
+the morphism $\sigma$ is a closed immersion.
+Moreover, $\sigma$ is \'etale by
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-etale-permanence}.
+Hence $\sigma$ is also an open immersion, see
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-etale-universally-injective-open}.
+In other words, $Z_\sigma = \sigma(Z_{U'}) \subset W_{U'}$ is
+an open subspace such that the morphism $Z_\sigma \to Z_{U'}$
+is an isomorphism. In particular, the morphism $Z_\sigma \to U'$
+is finite. Hence we obtain a transformation of functors
+$$
+F \longrightarrow (W/U)_{fin}, \quad
+\sigma \longmapsto (U' \to U, Z_\sigma)
+$$
+where $(W/U)_{fin}$ is the finite part of the morphism $W \to U$
+introduced in
+More on Groupoids in Spaces, Section
+\ref{spaces-more-groupoids-section-finite}.
+It is clear that this transformation of functors is injective (since we can
+recover $\sigma$ from $Z_\sigma$ as the inverse of the isomorphism
+$Z_\sigma \to Z_{U'}$). By
+More on Groupoids in Spaces, Proposition
+\ref{spaces-more-groupoids-proposition-finite-algebraic-space}
+we know that $(W/U)_{fin}$ is an algebraic space \'etale over $U$.
+Hence to finish the proof in this case it suffices to show that
+$F \to (W/U)_{fin}$ is representable and an open immersion.
+To see this suppose that we are given a morphism of schemes $U' \to U$
+and an open subspace $Z' \subset W_{U'}$ such that $Z' \to U'$
+is finite. Then it suffices to show that there exists an
+open subscheme $U'' \subset U'$ such that a morphism
+$T \to U'$ factors through $U''$ if and only if $Z' \times_{U'} T$
+maps isomorphically to $Z_{U'} \times_{U'} T$. This follows from
+More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-where-isomorphism}
+(here we use that $Z \to U$ is flat and locally of finite presentation
+as well as finite).
+Hence we have proved the lemma in case $W \to Z$ is separated
+as well as \'etale.
+
+\medskip\noindent
+In the general case we choose a separated scheme $W'$ and a surjective
+\'etale morphism $W' \to W$. Note that the morphisms $W' \to W$ and
+$W' \to Z$ are separated as their source is separated. Denote $F'$ the
+functor associated to $W' \to Z \to U$ as in the lemma. In the first
+paragraph of the proof we showed that $F'$ is representable by an
+algebraic space \'etale over $U$. By
+Lemma \ref{lemma-surjection-space-of-sections}
+the map of functors $F' \to F$ is surjective for the \'etale topology
+on $\Sch/U$. Moreover, if $U'$ and $\sigma : Z_{U'} \to W_{U'}$
+define a point $\xi \in F(U')$, then the fibre product
+$$
+F'' = F' \times_{F, \xi} U'
+$$
+is the functor on $\Sch/U'$ associated to the morphisms
+$$
+W'_{U'} \times_{W_{U'}, \sigma} Z_{U'} \to Z_{U'} \to U'.
+$$
+Since the first morphism is separated as a base change of a separated
+morphism, we see that $F''$ is an algebraic space \'etale over $U'$
+by the result of the first paragraph. It follows that $F' \to F$ is a
+surjective \'etale transformation of functors, which is representable
+by algebraic spaces. Hence $F$ is an algebraic space by
+Bootstrap, Theorem \ref{bootstrap-theorem-final-bootstrap}.
+Since $F' \to F$ is an \'etale surjective morphism of algebraic spaces
+it follows that $F \to U$ is \'etale because $F' \to U$ is \'etale.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Relative morphisms}
+\label{section-relative-morphisms}
+
+\noindent
+We continue the discussion started in
+More on Morphisms, Section \ref{more-morphisms-section-relative-morphisms}.
+
+\medskip\noindent
+Let $S$ be a scheme. Let $Z \to B$ and $X \to B$ be morphisms of
+algebraic spaces over $S$. Given a scheme $T$ we can consider pairs
+$(a, b)$ where $a : T \to B$
+is a morphism and $b : T \times_{a, B} Z \to T \times_{a, B} X$
+is a morphism over $T$. Picture
+\begin{equation}
+\label{equation-hom}
+\vcenter{
+\xymatrix{
+T \times_{a, B} Z \ar[rd] \ar[rr]_b & &
+T \times_{a, B} X \ar[ld] & Z \ar[rd] & & X \ar[ld] \\
+& T \ar[rrr]^a & & & B
+}
+}
+\end{equation}
+Of course, we can also think of $b$ as a morphism
+$b : T \times_{a, B} Z \to X$ such that
+$$
+\xymatrix{
+T \times_{a, B} Z \ar[r] \ar[d] \ar@/^1pc/[rrr]_-b &
+Z \ar[rd] & & X \ar[ld] \\
+T \ar[rr]^a & & B
+}
+$$
+commutes. In this situation we can define a functor
+\begin{equation}
+\label{equation-hom-functor}
+\mathit{Mor}_B(Z, X) : (\Sch/S)^{opp} \longrightarrow \textit{Sets},
+\quad
+T \longmapsto \{(a, b)\text{ as above}\}
+\end{equation}
+Sometimes we think of this as a functor defined on the category
+of schemes over $B$, in which case we drop $a$ from the notation.
+
+\begin{lemma}
+\label{lemma-hom-functor-sheaf}
+Let $S$ be a scheme. Let $Z \to B$ and $X \to B$ be morphisms of
+algebraic spaces over $S$. Then
+\begin{enumerate}
+\item $\mathit{Mor}_B(Z, X)$ is a sheaf on
+$(\Sch/S)_{fppf}$.
+\item If $T$ is an algebraic space over $S$, then there is a
+canonical bijection
+$$
+\Mor_{\Sh((\Sch/S)_{fppf})}(T, \mathit{Mor}_B(Z, X))
+=
+\{(a, b)\text{ as in }(\ref{equation-hom})\}
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $T$ be an algebraic space over $S$. Let $\{T_i \to T\}$ be an fppf
+covering of $T$ (as in
+Topologies on Spaces, Section \ref{spaces-topologies-section-fppf}).
+Suppose that $(a_i, b_i) \in \mathit{Mor}_B(Z, X)(T_i)$ such
+that $(a_i, b_i)|_{T_i \times_T T_j} = (a_j, b_j)|_{T_i \times_T T_j}$
+for all $i, j$. Then by
+Descent on Spaces,
+Lemma \ref{spaces-descent-lemma-fpqc-universal-effective-epimorphisms}
+there exists a unique morphism $a : T \to B$ such that $a_i$ is the
+composition of $T_i \to T$ and $a$. Then
+$\{T_i \times_{a_i, B} Z \to T \times_{a, B} Z\}$ is an fppf covering
+too and the same lemma implies there exists a unique morphism
+$b : T \times_{a, B} Z \to T \times_{a, B} X$ such that $b_i$ is the
+composition of $T_i \times_{a_i, B} Z \to T \times_{a, B} Z$ and $b$. Hence
+$(a, b) \in \mathit{Mor}_B(Z, X)(T)$ restricts to $(a_i, b_i)$
+over $T_i$ for all $i$.
+
+\medskip\noindent
+Note that the result of the preceding paragraph in particular implies (1).
+
+\medskip\noindent
+Let $T$ be an algebraic space over $S$. In order to prove (2) we will
+construct mutually inverse maps between the displayed sets. In the
+following when we say ``pair'' we mean a pair $(a, b)$ fitting
+into (\ref{equation-hom}).
+
+\medskip\noindent
+Let $v : T \to \mathit{Mor}_B(Z, X)$ be a natural transformation.
+Choose a scheme $U$ and a surjective \'etale morphism $p : U \to T$.
+Then $v(p) \in \mathit{Mor}_B(Z, X)(U)$ corresponds to a pair $(a_U, b_U)$
+over $U$. Let $R = U \times_T U$ with projections $t, s : R \to U$.
+As $v$ is a transformation of functors we see that the pullbacks of
+$(a_U, b_U)$ by $s$ and $t$ agree. Hence, since $\{U \to T\}$ is an
+fppf covering, we may apply the result of the first paragraph to
+deduce that there exists a unique pair $(a, b)$ over $T$.
+
+\medskip\noindent
+Conversely, let $(a, b)$ be a pair over $T$.
+Let $U \to T$, $R = U \times_T U$, and $t, s : R \to U$ be as
+above. Then the restriction $(a, b)|_U$ gives rise to a
+transformation of functors $v : h_U \to \mathit{Mor}_B(Z, X)$ by the
+Yoneda lemma
+(Categories, Lemma \ref{categories-lemma-yoneda}).
+As the two pullbacks $s^*(a, b)|_U$ and $t^*(a, b)|_U$
+are equal, we see that $v$ coequalizes the two maps
+$h_t, h_s : h_R \to h_U$. Since $T = U/R$ is the fppf quotient sheaf by
+Spaces, Lemma \ref{spaces-lemma-space-presentation}
+and since $\mathit{Mor}_B(Z, X)$ is an fppf sheaf by (1) we conclude
+that $v$ factors through a map $T \to \mathit{Mor}_B(Z, X)$.
+
+\medskip\noindent
+We omit the verification that the two constructions above are mutually
+inverse.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-hom-functor}
+Let $S$ be a scheme. Let $Z \to B$, $X \to B$, and $B' \to B$
+be morphisms of algebraic spaces over $S$. Set $Z' = B' \times_B Z$
+and $X' = B' \times_B X$. Then
+$$
+\mathit{Mor}_{B'}(Z', X')
+=
+B' \times_B \mathit{Mor}_B(Z, X)
+$$
+in $\Sh((\Sch/S)_{fppf})$.
+\end{lemma}
+
+\begin{proof}
+The equality as functors follows immediately from the definitions.
+The equality as sheaves follows from this because both sides are
+sheaves according to
+Lemma \ref{lemma-hom-functor-sheaf}
+and the fact that a fibre product of sheaves is the same as the
+corresponding fibre product of pre-sheaves (i.e., functors).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-covering-hom-functor}
+Let $S$ be a scheme. Let $Z \to B$ and $X' \to X \to B$ be morphisms of
+algebraic spaces over $S$. Assume
+\begin{enumerate}
+\item $X' \to X$ is \'etale, and
+\item $Z \to B$ is finite locally free.
+\end{enumerate}
+Then $\mathit{Mor}_B(Z, X') \to \mathit{Mor}_B(Z, X)$ is representable
+by algebraic spaces and \'etale. If $X' \to X$ is also surjective,
+then $\mathit{Mor}_B(Z, X') \to \mathit{Mor}_B(Z, X)$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be a scheme and let $\xi = (a, b)$ be an element of
+$\mathit{Mor}_B(Z, X)(U)$. We have to prove that the functor
+$$
+h_U \times_{\xi, \mathit{Mor}_B(Z, X)} \mathit{Mor}_B(Z, X')
+$$
+is representable by an algebraic space \'etale over $U$. Set
+$Z_U = U \times_{a, B} Z$ and $W = Z_U \times_{b, X} X'$.
+Then $W \to Z_U \to U$ is as in
+Lemma \ref{lemma-space-of-sections}
+and the sheaf $F$ defined there is identified with the fibre product
+displayed above. Hence the first assertion of the lemma.
+The second assertion follows from this and
+Lemma \ref{lemma-surjection-space-of-sections}
+which guarantees that $F \to U$ is surjective in the situation above.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-hom-functor-algebraic-space}
+Let $S$ be a scheme. Let $Z \to B$ and $X \to B$ be morphisms of
+algebraic spaces over $S$. If $Z \to B$ is finite locally free
+then $\mathit{Mor}_B(Z, X)$ is an algebraic space.
+\end{proposition}
+
+\begin{proof}
+Choose a scheme $B' = \coprod B'_i$ which is a disjoint union of
+affine schemes $B'_i$ and an \'etale surjective morphism $B' \to B$.
+We may also assume that $B'_i \times_B Z$ is the spectrum of a ring
+which is finite free as a $\Gamma(B'_i, \mathcal{O}_{B'_i})$-module.
+By
+Lemma \ref{lemma-base-change-hom-functor}
+and
+Spaces, Lemma
+\ref{spaces-lemma-base-change-representable-transformations-property}
+the morphism $\mathit{Mor}_{B'}(Z', X') \to \mathit{Mor}_B(Z, X)$,
+where we set $Z' = B' \times_B Z$ and $X' = B' \times_B X$,
+is surjective \'etale. Hence by
+Bootstrap, Theorem \ref{bootstrap-theorem-final-bootstrap}
+it suffices to prove the proposition when $B = B'$ is a disjoint union of
+affine schemes $B'_i$ so that each $B'_i \times_B Z$ is finite free
+over $B'_i$. Then it actually suffices to prove the result for the restriction
+to each $B'_i$. Thus we may assume that $B$ is affine and that
+$\Gamma(Z, \mathcal{O}_Z)$ is a finite free $\Gamma(B, \mathcal{O}_B)$-module.
+
+\medskip\noindent
+Choose a scheme $X'$ which is a disjoint union of affine schemes and
+a surjective \'etale morphism $X' \to X$. By
+Lemma \ref{lemma-etale-covering-hom-functor}
+the morphism $\mathit{Mor}_B(Z, X') \to \mathit{Mor}_B(Z, X)$
+is representable by algebraic spaces, \'etale, and surjective.
+Hence by
+Bootstrap, Theorem \ref{bootstrap-theorem-final-bootstrap}
+it suffices to prove the proposition when $X$ is a disjoint union
+of affine schemes. This reduces us to the case discussed in the next
+paragraph.
+
+\medskip\noindent
+Assume $X = \coprod_{i \in I} X_i$ is a disjoint union of affine
+schemes, $B$ is affine, and that $\Gamma(Z, \mathcal{O}_Z)$ is a finite
+free $\Gamma(B, \mathcal{O}_B)$-module. For any finite subset
+$E \subset I$ set
+$$
+F_E = \mathit{Mor}_B(Z, \coprod\nolimits_{i \in E} X_i).
+$$
+By More on Morphisms,
+Lemma \ref{more-morphisms-lemma-hom-from-finite-free-into-affine}
+we see that $F_E$ is an algebraic space. Consider the morphism
+$$
+\coprod\nolimits_{E \subset I\text{ finite}} F_E
+\longrightarrow
+\mathit{Mor}_B(Z, X)
+$$
+Each of the morphisms
+$F_E \to \mathit{Mor}_B(Z, X)$ is an open immersion, because it is
+simply the locus parametrizing pairs $(a, b)$ where $b$ maps into
+the open subscheme $\coprod\nolimits_{i \in E} X_i$ of $X$. Moreover,
+if $T$ is quasi-compact, then for any pair $(a, b)$ the image
+of $b$ is contained in $\coprod\nolimits_{i \in E} X_i$ for some
+$E \subset I$ finite. Hence the displayed arrow is in fact an
+open covering and we win\footnote{Modulo
+some set theoretic arguments. Namely, we have to show that
+$\coprod F_E$ is an algebraic space. This follows because
+$|I| \leq \text{size}(X)$ and $\text{size}(F_E) \leq \text{size}(X)$
+as follows from the explicit description of $F_E$ in the proof of
+More on Morphisms,
+Lemma \ref{more-morphisms-lemma-hom-from-finite-free-into-affine}.
+Some details omitted.} by
+Spaces, Lemma \ref{spaces-lemma-glueing-algebraic-spaces}.
+\end{proof}
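+
+\noindent
+The following example, which we include only for illustration, shows
+what the proposition says in the most classical case.
+
+\begin{example}
+\label{example-hom-affine-line}
+Let $B = \Spec(R)$ be affine and let $Z \to B$ be finite locally free
+with $M = \Gamma(Z, \mathcal{O}_Z)$ free of rank $d$ as an $R$-module.
+Take $X = \mathbf{A}^1_B$. For an affine scheme $T = \Spec(A)$ over $B$
+giving a pair $(a, b)$ is the same as giving a global function on $Z_T$,
+and since $Z_T = \Spec(M \otimes_R A)$ we get
+$$
+\mathit{Mor}_B(Z, \mathbf{A}^1_B)(T)
+= \Gamma(Z_T, \mathcal{O}_{Z_T})
+= M \otimes_R A
+\cong A^{\oplus d}.
+$$
+One checks that these identifications are functorial, so that
+$\mathit{Mor}_B(Z, \mathbf{A}^1_B) \cong \mathbf{A}^d_B$,
+which is certainly an algebraic space.
+\end{example}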
+
+
+
+
+
+
+
+
+
+
+\section{Restriction of scalars}
+\label{section-restriction-of-scalars}
+
+\noindent
+Suppose $X \to Z \to B$ are morphisms of algebraic spaces over $S$.
+Given a scheme $T$ we can consider pairs $(a, b)$ where $a : T \to B$
+is a morphism and $b : T \times_{a, B} Z \to X$ is a morphism over $Z$.
+Picture
+\begin{equation}
+\label{equation-pairs}
+\vcenter{
+\xymatrix{
+& X \ar[d] \\
+T \times_{a, B} Z \ar[d] \ar[ru]^b \ar[r] & Z \ar[d] \\
+T \ar[r]^a & B
+}
+}
+\end{equation}
+In this situation we can define a
+functor
+\begin{equation}
+\label{equation-restriction-of-scalars}
+\text{Res}_{Z/B}(X) : (\Sch/S)^{opp} \longrightarrow \textit{Sets},
+\quad
+T \longmapsto \{(a, b)\text{ as above}\}
+\end{equation}
+Sometimes we think of this as a functor defined on the category
+of schemes over $B$, in which case we drop $a$ from the notation.
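+
+\noindent
+The classical example is the Weil restriction along a finite extension
+of fields; we spell out the simplest case only to fix ideas.
+
+\begin{example}
+\label{example-weil-restriction}
+Take $S = B = \Spec(\mathbf{R})$, $Z = \Spec(\mathbf{C})$, and
+$X = \mathbf{A}^1_{\mathbf{C}}$. For an affine scheme $T = \Spec(A)$
+over $\mathbf{R}$ a pair $(a, b)$ as above amounts to a morphism
+$T \times_{\Spec(\mathbf{R})} \Spec(\mathbf{C}) \to
+\mathbf{A}^1_{\mathbf{C}}$ over $\Spec(\mathbf{C})$, i.e., an element
+of $A \otimes_\mathbf{R} \mathbf{C} = A \oplus A i$. Hence
+$$
+\text{Res}_{\mathbf{C}/\mathbf{R}}(\mathbf{A}^1_{\mathbf{C}})
+\cong \mathbf{A}^2_{\mathbf{R}},
+$$
+the two coordinates being the ``real and imaginary parts'' of a function.
+\end{example}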
+
+\begin{lemma}
+\label{lemma-restriction-of-scalars-sheaf}
+Let $S$ be a scheme. Let $X \to Z \to B$ be morphisms of
+algebraic spaces over $S$. Then
+\begin{enumerate}
+\item $\text{Res}_{Z/B}(X)$ is a sheaf on
+$(\Sch/S)_{fppf}$.
+\item If $T$ is an algebraic space over $S$, then there is a
+canonical bijection
+$$
+\Mor_{\Sh((\Sch/S)_{fppf})}(T, \text{Res}_{Z/B}(X))
+=
+\{(a, b)\text{ as in }(\ref{equation-pairs})\}
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $T$ be an algebraic space over $S$. Let $\{T_i \to T\}$ be an fppf
+covering of $T$ (as in
+Topologies on Spaces, Section \ref{spaces-topologies-section-fppf}).
+Suppose that $(a_i, b_i) \in \text{Res}_{Z/B}(X)(T_i)$ such
+that $(a_i, b_i)|_{T_i \times_T T_j} = (a_j, b_j)|_{T_i \times_T T_j}$
+for all $i, j$. Then by
+Descent on Spaces,
+Lemma \ref{spaces-descent-lemma-fpqc-universal-effective-epimorphisms}
+there exists a unique morphism $a : T \to B$ such that $a_i$ is the
+composition of $T_i \to T$ and $a$. Then
+$\{T_i \times_{a_i, B} Z \to T \times_{a, B} Z\}$ is an fppf covering
+too and the same lemma implies there exists a unique morphism
+$b : T \times_{a, B} Z \to X$ such that $b_i$ is the composition
+of $T_i \times_{a_i, B} Z \to T \times_{a, B} Z$ and $b$. Hence
+$(a, b) \in \text{Res}_{Z/B}(X)(T)$ restricts to $(a_i, b_i)$
+over $T_i$ for all $i$.
+
+\medskip\noindent
+Note that the result of the preceding paragraph in particular implies (1).
+
+\medskip\noindent
+Let $T$ be an algebraic space over $S$. In order to prove (2) we will
+construct mutually inverse maps between the displayed sets. In the
+following when we say ``pair'' we mean a pair $(a, b)$ fitting
+into (\ref{equation-pairs}).
+
+\medskip\noindent
+Let $v : T \to \text{Res}_{Z/B}(X)$ be a natural transformation.
+Choose a scheme $U$ and a surjective \'etale morphism $p : U \to T$.
+Then $v(p) \in \text{Res}_{Z/B}(X)(U)$ corresponds to a pair $(a_U, b_U)$
+over $U$. Let $R = U \times_T U$ with projections $t, s : R \to U$.
+As $v$ is a transformation of functors we see that the pullbacks of
+$(a_U, b_U)$ by $s$ and $t$ agree. Hence, since $\{U \to T\}$ is an
+fppf covering, we may apply the result of the first paragraph to
+deduce that there exists a unique pair $(a, b)$ over $T$.
+
+\medskip\noindent
+Conversely, let $(a, b)$ be a pair over $T$.
+Let $U \to T$, $R = U \times_T U$, and $t, s : R \to U$ be as
+above. Then the restriction $(a, b)|_U$ gives rise to a
+transformation of functors $v : h_U \to \text{Res}_{Z/B}(X)$ by the
+Yoneda lemma
+(Categories, Lemma \ref{categories-lemma-yoneda}).
+As the two pullbacks $s^*(a, b)|_U$ and $t^*(a, b)|_U$
+are equal, we see that $v$ coequalizes the two maps
+$h_t, h_s : h_R \to h_U$. Since $T = U/R$ is the fppf quotient sheaf by
+Spaces, Lemma \ref{spaces-lemma-space-presentation}
+and since $\text{Res}_{Z/B}(X)$ is an fppf sheaf by (1) we conclude
+that $v$ factors through a map $T \to \text{Res}_{Z/B}(X)$.
+
+\medskip\noindent
+We omit the verification that the two constructions above are mutually
+inverse.
+\end{proof}
+
+\noindent
+Of course the sheaf $\text{Res}_{Z/B}(X)$ comes with a natural transformation
+of functors $\text{Res}_{Z/B}(X) \to B$. We will use this without further
+mention in the following.
+
+\begin{lemma}
+\label{lemma-etale-base-change-restriction-of-scalars}
+Let $S$ be a scheme. Let $X \to Z \to B$ and $B' \to B$
+be morphisms of algebraic spaces over $S$.
+Set $Z' = B' \times_B Z$ and $X' = B' \times_B X$. Then
+$$
+\text{Res}_{Z'/B'}(X')
+=
+B' \times_B \text{Res}_{Z/B}(X)
+$$
+in $\Sh((\Sch/S)_{fppf})$.
+\end{lemma}
+
+\begin{proof}
+The equality as functors follows immediately from the definitions.
+The equality as sheaves follows from this because both sides are
+sheaves according to
+Lemma \ref{lemma-restriction-of-scalars-sheaf}
+and the fact that a fibre product of sheaves is the same as the
+corresponding fibre product of pre-sheaves (i.e., functors).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-covering-restriction-of-scalars}
+Let $S$ be a scheme. Let $X' \to X \to Z \to B$ be morphisms of
+algebraic spaces over $S$. Assume
+\begin{enumerate}
+\item $X' \to X$ is \'etale, and
+\item $Z \to B$ is finite locally free.
+\end{enumerate}
+Then $\text{Res}_{Z/B}(X') \to \text{Res}_{Z/B}(X)$ is representable
+by algebraic spaces and \'etale. If $X' \to X$ is also surjective,
+then $\text{Res}_{Z/B}(X') \to \text{Res}_{Z/B}(X)$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be a scheme and let $\xi = (a, b)$ be an element of
+$\text{Res}_{Z/B}(X)(U)$. We have to prove that the functor
+$$
+h_U \times_{\xi, \text{Res}_{Z/B}(X)} \text{Res}_{Z/B}(X')
+$$
+is representable by an algebraic space \'etale over $U$. Set
+$Z_U = U \times_{a, B} Z$ and $W = Z_U \times_{b, X} X'$.
+Then $W \to Z_U \to U$ is as in
+Lemma \ref{lemma-space-of-sections}
+and the sheaf $F$ defined there is identified with the fibre product
+displayed above. Hence the first assertion of the lemma.
+The second assertion follows from this and
+Lemma \ref{lemma-surjection-space-of-sections}
+which guarantees that $F \to U$ is surjective in the situation above.
+\end{proof}
+
+\noindent
+At this point we can use the lemmas above to prove that $\text{Res}_{Z/B}(X)$
+is an algebraic space whenever $Z \to B$ is finite locally free in almost
+exactly the same way as in the proof that $\mathit{Mor}_B(Z, X)$ is an
algebraic space, see
+Proposition \ref{proposition-hom-functor-algebraic-space}.
+Instead we will directly deduce this result from the following lemma
+and the fact that $\mathit{Mor}_B(Z, X)$ is an algebraic space.
+
+\begin{lemma}
+\label{lemma-fibre-diagram}
+Let $S$ be a scheme. Let $X \to Z \to B$ be morphisms of
+algebraic spaces over $S$. The following diagram
+$$
+\xymatrix{
+\mathit{Mor}_B(Z, X) \ar[r] & \mathit{Mor}_B(Z, Z) \\
+\text{Res}_{Z/B}(X) \ar[r] \ar[u] & B \ar[u]_{\text{id}_Z}
+}
+$$
+is a cartesian diagram of sheaves on $(\Sch/S)_{fppf}$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Exercise in the functorial point of view in algebraic
+geometry.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-restriction-of-scalars-algebraic-space}
+Let $S$ be a scheme. Let $X \to Z \to B$ be morphisms of
+algebraic spaces over $S$. If $Z \to B$ is finite locally free
+then $\text{Res}_{Z/B}(X)$ is an algebraic space.
+\end{proposition}
+
+\begin{proof}
+By
+Proposition \ref{proposition-hom-functor-algebraic-space}
+the functors $\mathit{Mor}_B(Z, X)$ and $\mathit{Mor}_B(Z, Z)$
+are algebraic spaces. Hence this follows from the cartesian
+diagram of
+Lemma \ref{lemma-fibre-diagram}
+and the fact that fibre products of algebraic spaces exist and
+are given by the fibre product in the underlying category of
+sheaves of sets (see
+Spaces, Lemma
+\ref{spaces-lemma-fibre-product-spaces-over-sheaf-with-representable-diagonal}).
+\end{proof}
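The functor just shown to be an algebraic space is a version of classical Weil restriction. The following affine special case, included purely as an illustration (it is not asserted in the development above), may help fix ideas.

```latex
\medskip\noindent
For example, suppose $B = \Spec(R)$ and $Z = \Spec(R')$ where $R'$ is an
$R$-algebra which is finite locally free as an $R$-module, and suppose
$X$ is a scheme over $R'$. Unwinding the definition of the pairs
$(a, b)$ one finds, for a scheme $T$ over $B$, that
$$
\text{Res}_{Z/B}(X)(T) = \mathit{Mor}_Z(T \times_B Z, X),
$$
i.e., restricted to schemes over $B$ the functor $\text{Res}_{Z/B}(X)$
is the classical Weil restriction of $X$ along $R \to R'$.
```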
+
+
+
+
+
+
+\section{Finite Hilbert stacks}
+\label{section-finite-hilbert-stacks}
+
+\noindent
+In this section we prove some results concerning the finite
+Hilbert stacks $\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$
+introduced in
+Examples of Stacks, Section \ref{examples-stacks-section-hilbert-d-stack}.
+
+\begin{lemma}
+\label{lemma-map-hilbert}
+Consider a $2$-commutative diagram
+$$
+\xymatrix{
+\mathcal{X}' \ar[r]_G \ar[d]_{F'} & \mathcal{X} \ar[d]^F \\
+\mathcal{Y}' \ar[r]^H & \mathcal{Y}
+}
+$$
+of stacks in groupoids over $(\Sch/S)_{fppf}$ with a given
+$2$-isomorphism $\gamma : H \circ F' \to F \circ G$. In this situation we
+obtain a canonical $1$-morphism
+$\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}') \to
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$.
+This morphism is compatible with the forgetful $1$-morphisms of
+Examples of Stacks,
+Equation (\ref{examples-stacks-equation-diagram-hilbert-d-stack}).
+\end{lemma}
+
+\begin{proof}
+We map the object $(U, Z, y', x', \alpha')$ to the object
+$(U, Z, H(y'), G(x'), \gamma \star \text{id}_H \star \alpha')$
+where $\star$ denotes horizontal composition of $2$-morphisms, see
+Categories, Definition \ref{categories-definition-horizontal-composition}.
+To a morphism
+$(f, g, b, a) :
+(U_1, Z_1, y_1', x_1', \alpha_1') \to (U_2, Z_2, y_2', x_2', \alpha_2')$
+we assign
+$(f, g, H(b), G(a))$.
+We omit the verification that this defines a functor between categories over
+$(\Sch/S)_{fppf}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cartesian-map-hilbert}
+In the situation of
+Lemma \ref{lemma-map-hilbert}
+assume that the given square is $2$-cartesian. Then the diagram
+$$
+\xymatrix{
+\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}') \ar[r] \ar[d] &
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y}) \ar[d] \\
+\mathcal{Y}' \ar[r] &
+\mathcal{Y}
+}
+$$
+is $2$-cartesian.
+\end{lemma}
+
+\begin{proof}
+We get a $2$-commutative diagram by
+Lemma \ref{lemma-map-hilbert}
+and hence we get a $1$-morphism (i.e., a functor)
+$$
+\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}')
+\longrightarrow
+\mathcal{Y}' \times_\mathcal{Y} \mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+$$
+We indicate why this functor is essentially surjective. Namely, an object
+of the category on the right hand side is given by a scheme $U$ over $S$,
+an object $y'$ of $\mathcal{Y}'_U$, an object $(U, Z, y, x, \alpha)$
+of $\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ over $U$ and an isomorphism
+$H(y') \to y$ in $\mathcal{Y}_U$. The assumption means exactly that
+there exists an object $x'$ of $\mathcal{X}'_Z$ such that there exist
+isomorphisms $G(x') \cong x$ and $\alpha' : y'|_Z \to F'(x')$ compatible
+with $\alpha$. Then we see that $(U, Z, y', x', \alpha')$ is an
+object of $\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}')$ over $U$.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-covering-hilbert}
+In the situation of
+Lemma \ref{lemma-map-hilbert}
+assume
+\begin{enumerate}
+\item $\mathcal{Y}' = \mathcal{Y}$ and $H = \text{id}_\mathcal{Y}$,
+\item $G$ is representable by algebraic spaces and \'etale.
+\end{enumerate}
+Then $\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}) \to
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is representable by
+algebraic spaces and \'etale.
+If $G$ is also surjective, then
+$\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}) \to
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be a scheme and let $\xi = (U, Z, y, x, \alpha)$ be an object of
+$\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ over $U$.
+We have to prove that the $2$-fibre product
+\begin{equation}
+\label{equation-to-show}
+(\Sch/U)_{fppf}
+\times_{\xi, \mathcal{H}_d(\mathcal{X}/\mathcal{Y})}
+\mathcal{H}_d(\mathcal{X}'/\mathcal{Y})
+\end{equation}
+is representable by an algebraic space \'etale over $U$.
+An object of this over $U'$ corresponds to an object
+$x'$ in the fibre category of $\mathcal{X}'$ over $Z_{U'}$
+such that $G(x') \cong x|_{Z_{U'}}$.
+By assumption the $2$-fibre product
+$$
+(\Sch/Z)_{fppf} \times_{x, \mathcal{X}} \mathcal{X}'
+$$
+is representable by an algebraic space $W$ such that the projection
+$W \to Z$ is \'etale. Then (\ref{equation-to-show})
+is representable by the algebraic space $F$ parametrizing sections of
+$W \to Z$ over $U$ introduced in
+Lemma \ref{lemma-space-of-sections}.
+Since $F \to U$ is \'etale we conclude that
+$\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}) \to
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is representable by
+algebraic spaces and \'etale.
+Finally, if $\mathcal{X}' \to \mathcal{X}$ is surjective also,
+then $W \to Z$ is surjective, and hence $F \to U$ is surjective by
+Lemma \ref{lemma-surjection-space-of-sections}.
+Thus in this case
+$\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}) \to
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is also surjective.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-map-hilbert}
+In the situation of
Lemma \ref{lemma-map-hilbert}
assume that $G$ and $H$ are representable by algebraic spaces and \'etale.
+Then $\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}') \to
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is representable by
+algebraic spaces and \'etale.
+If also $H$ is surjective and the induced functor
+$\mathcal{X}' \to \mathcal{Y}' \times_\mathcal{Y} \mathcal{X}$
+is surjective, then
+$\mathcal{H}_d(\mathcal{X}'/\mathcal{Y}') \to
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Set $\mathcal{X}'' = \mathcal{Y}' \times_\mathcal{Y} \mathcal{X}$. By
+Lemma \ref{lemma-etale-permanence}
+the $1$-morphism $\mathcal{X}' \to \mathcal{X}''$ is representable by
+algebraic spaces and \'etale (in particular the condition in the second
+statement of the lemma that $\mathcal{X}' \to \mathcal{X}''$ be surjective
+makes sense). We obtain a $2$-commutative diagram
+$$
+\xymatrix{
+\mathcal{X}' \ar[r] \ar[d] &
+\mathcal{X}'' \ar[r] \ar[d] &
+\mathcal{X} \ar[d] \\
+\mathcal{Y}' \ar[r] &
+\mathcal{Y}' \ar[r] &
+\mathcal{Y}
+}
+$$
+It follows from
+Lemma \ref{lemma-cartesian-map-hilbert}
+that $\mathcal{H}_d(\mathcal{X}''/\mathcal{Y}')$ is the base change
+of $\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ by $\mathcal{Y}' \to \mathcal{Y}$.
+In particular we see that
+$\mathcal{H}_d(\mathcal{X}''/\mathcal{Y}') \to
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is
+representable by algebraic spaces and \'etale, see
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-base-change-representable-transformations-property}.
+Moreover, it is also surjective if $H$ is.
+ Hence if we can show that
+the result holds for the left square in the diagram, then we're done.
+In this way we reduce to the case where $\mathcal{Y}' = \mathcal{Y}$
+which is the content of
+Lemma \ref{lemma-etale-covering-hilbert}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-hilbert}
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of stacks in groupoids
+over $(\Sch/S)_{fppf}$. Assume that
+$\Delta : \mathcal{Y} \to \mathcal{Y} \times \mathcal{Y}$
+is representable by algebraic spaces. Then
+$$
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+\longrightarrow
+\mathcal{H}_d(\mathcal{X}) \times \mathcal{Y}
+$$
+see
+Examples of Stacks, Equation
+(\ref{examples-stacks-equation-diagram-hilbert-d-stack})
+is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be a scheme and let $\xi = (U, Z, p, x, 1)$ be an object of
+$\mathcal{H}_d(\mathcal{X}) = \mathcal{H}_d(\mathcal{X}/S)$ over $U$.
+Here $p$ is just the structure morphism of $U$.
+The fifth component $1$ exists and is unique
+since everything is over $S$.
+Also, let $y$ be an object of $\mathcal{Y}$ over $U$.
+We have to show the $2$-fibre product
+\begin{equation}
+\label{equation-res-isom}
+(\Sch/U)_{fppf}
+\times_{\xi \times y, \mathcal{H}_d(\mathcal{X}) \times \mathcal{Y}}
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+\end{equation}
+is representable by an algebraic space. To explain why this is so
+we introduce
+$$
+I = \mathit{Isom}_\mathcal{Y}(y|_Z, F(x))
+$$
+which is an algebraic space over $Z$ by assumption. Let $a : U' \to U$
+be a scheme over $U$. What does it mean to give an object of the fibre
+category of (\ref{equation-res-isom}) over $U'$? Well, it means that we
+have an object $\xi' = (U', Z', y', x', \alpha')$ of
+$\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ over $U'$ and isomorphisms
+$(U', Z', p', x', 1) \cong (U, Z, p, x, 1)|_{U'}$ and
+$y' \cong y|_{U'}$. Thus $\xi'$ is isomorphic to
+$(U', U' \times_{a, U} Z, a^*y, x|_{U' \times_{a, U} Z}, \alpha)$
+for some morphism
+$$
+\alpha :
+a^*y|_{U' \times_{a, U} Z}
+\longrightarrow
+F(x|_{U' \times_{a, U} Z})
+$$
+in the fibre category of $\mathcal{Y}$ over $U' \times_{a, U} Z$. Hence
+we can view $\alpha$ as a morphism $b : U' \times_{a, U} Z \to I$.
+In this way we see that (\ref{equation-res-isom})
+is representable by $\text{Res}_{Z/U}(I)$ which is an algebraic space by
+Proposition \ref{proposition-restriction-of-scalars-algebraic-space}.
+\end{proof}
+
+\noindent
+The following lemma is a (partial) generalization of
+Lemma \ref{lemma-etale-covering-hilbert}.
+
+\begin{lemma}
+\label{lemma-representable-on-top}
+Let $F : \mathcal{X} \to \mathcal{Y}$ and $G : \mathcal{X}' \to \mathcal{X}$
+be $1$-morphisms of stacks in groupoids over $(\Sch/S)_{fppf}$.
+If $G$ is representable by algebraic spaces, then the $1$-morphism
+$$
+\mathcal{H}_d(\mathcal{X}'/\mathcal{Y})
+\longrightarrow
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+$$
+is representable by algebraic spaces.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be a scheme and let $\xi = (U, Z, y, x, \alpha)$ be an object of
+$\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ over $U$.
+We have to prove that the $2$-fibre product
+\begin{equation}
+\label{equation-to-show-again}
+(\Sch/U)_{fppf}
+\times_{\xi, \mathcal{H}_d(\mathcal{X}/\mathcal{Y})}
+\mathcal{H}_d(\mathcal{X}'/\mathcal{Y})
+\end{equation}
is representable by an algebraic space.
+An object of this over $a : U' \to U$ corresponds to an object
+$x'$ of $\mathcal{X}'$ over $U' \times_{a, U} Z$ such that
+$G(x') \cong x|_{U' \times_{a, U} Z}$. By assumption the $2$-fibre product
+$$
+(\Sch/Z)_{fppf} \times_{x, \mathcal{X}} \mathcal{X}'
+$$
+is representable by an algebraic space $X$ over $Z$. It follows that
+(\ref{equation-to-show-again}) is representable by $\text{Res}_{Z/U}(X)$,
+which is an algebraic space by
+Proposition \ref{proposition-restriction-of-scalars-algebraic-space}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-preserving}
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of stacks in groupoids
+over $(\Sch/S)_{fppf}$. Assume $F$ is representable by algebraic
+spaces and locally of finite presentation. Then
+$$
+p : \mathcal{H}_d(\mathcal{X}/\mathcal{Y}) \to \mathcal{Y}
+$$
+is limit preserving on objects.
+\end{lemma}
+
+\begin{proof}
+This means we have to show the following: Given
+\begin{enumerate}
+\item an affine scheme $U = \lim_i U_i$ which is written as the
+directed limit of affine schemes $U_i$ over $S$,
+\item an object $y_i$ of $\mathcal{Y}$ over $U_i$ for some $i$, and
+\item an object $\Xi = (U, Z, y, x, \alpha)$ of
+$\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$
+over $U$ such that $y = y_i|_U$,
+\end{enumerate}
+then there exists an $i' \geq i$ and an object
+$\Xi_{i'} = (U_{i'}, Z_{i'}, y_{i'}, x_{i'}, \alpha_{i'})$ of
+$\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ over $U_{i'}$ with
+$\Xi_{i'}|_U = \Xi$ and $y_{i'} = y_i|_{U_{i'}}$.
+Namely, the last two equalities will take care of the commutativity of
+(\ref{equation-limit-preserving}).
+
+\medskip\noindent
+Let $X_{y_i} \to U_i$ be an algebraic space representing the $2$-fibre
+product
+$$
+(\Sch/U_i)_{fppf} \times_{y_i, \mathcal{Y}, F} \mathcal{X}.
+$$
+Note that $X_{y_i} \to U_i$ is locally of finite presentation by our
assumption on $F$. Recall that $\Xi = (U, Z, y, x, \alpha)$. It is clear that
+$\xi = (Z, Z \to U_i, x, \alpha)$ is an object of the $2$-fibre product
+displayed above, hence $\xi$ gives rise to a morphism
+$f_\xi : Z \to X_{y_i}$ of algebraic spaces over $U_i$
(since $X_{y_i}$ is the functor of isomorphism classes of objects of
$(\Sch/U_i)_{fppf} \times_{y_i, \mathcal{Y}, F} \mathcal{X}$, see
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-characterize-representable-by-space}).
+By
+Limits, Lemmas \ref{limits-lemma-descend-finite-presentation} and
+\ref{limits-lemma-descend-finite-locally-free}
+there exists an $i' \geq i$ and a finite locally free morphism
+$Z_{i'} \to U_{i'}$ of degree $d$ whose base change to $U$ is $Z$. By
+Limits of Spaces, Proposition
+\ref{spaces-limits-proposition-characterize-locally-finite-presentation}
+we may, after replacing $i'$ by a bigger index, assume there exists a
+morphism $f_{i'} : Z_{i'} \to X_{y_i}$ such that
+$$
+\xymatrix{
+Z \ar[d] \ar[r] \ar@/^3ex/[rr]^{f_\xi} &
+Z_{i'} \ar[d] \ar[r]_{f_{i'}} & X_{y_i} \ar[d] \\
+U \ar[r] & U_{i'} \ar[r] & U_i
+}
+$$
+is commutative. We set
+$\Xi_{i'} = (U_{i'}, Z_{i'}, y_{i'}, x_{i'}, \alpha_{i'})$
+where
+\begin{enumerate}
+\item $y_{i'}$ is the object of $\mathcal{Y}$ over $U_{i'}$
+which is the pullback of $y_i$ to $U_{i'}$,
+\item $x_{i'}$ is the object of $\mathcal{X}$ over $Z_{i'}$ corresponding
+via the $2$-Yoneda lemma to the $1$-morphism
+$$
+(\Sch/Z_{i'})_{fppf} \to
+\mathcal{S}_{X_{y_i}} \to
+(\Sch/U_i)_{fppf} \times_{y_i, \mathcal{Y}, F} \mathcal{X} \to
+\mathcal{X}
+$$
+where the middle arrow is the equivalence which defines $X_{y_i}$
+(notation as in
+Algebraic Stacks, Sections
+\ref{algebraic-section-representable-by-algebraic-spaces} and
+\ref{algebraic-section-split}).
+\item $\alpha_{i'} : y_{i'}|_{Z_{i'}} \to F(x_{i'})$ is the isomorphism
+coming from the $2$-commutativity of the diagram
+$$
+\xymatrix{
+(\Sch/Z_{i'})_{fppf} \ar[r] \ar[rd] &
+(\Sch/U_i)_{fppf} \times_{y_i, \mathcal{Y}, F} \mathcal{X}
+\ar[r] \ar[d] &
+\mathcal{X} \ar[d]^F \\
+& (\Sch/U_{i'})_{fppf} \ar[r] & \mathcal{Y}
+}
+$$
+\end{enumerate}
+Recall that $f_\xi : Z \to X_{y_i}$ was the morphism corresponding to
+the object $\xi = (Z, Z \to U_i, x, \alpha)$ of
+$(\Sch/U_i)_{fppf} \times_{y_i, \mathcal{Y}, F} \mathcal{X}$
+over $Z$. By construction $f_{i'}$ is the morphism corresponding to
+the object $\xi_{i'} = (Z_{i'}, Z_{i'} \to U_i, x_{i'}, \alpha_{i'})$.
+As $f_\xi = f_{i'} \circ (Z \to Z_{i'})$ we see that
$\xi_{i'}$ pulls
+back to $\xi$ over $Z$. Thus $x_{i'}$ pulls back to $x$ and $\alpha_{i'}$
+pulls back to $\alpha$. This means that $\Xi_{i'}$ pulls back
+to $\Xi$ over $U$ and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{The finite Hilbert stack of a point}
+\label{section-hilbert-point}
+
+\noindent
+Let $d \geq 1$ be an integer. In
+Examples of Stacks, Definition \ref{examples-stacks-definition-hilbert-d-stack}
+we defined a stack in groupoids $\mathcal{H}_d$.
+In this section we prove that $\mathcal{H}_d$ is an
+algebraic stack. We will throughout assume that
+$S = \Spec(\mathbf{Z})$.
+The general case will follow from this by base change.
+Recall that the fibre category of $\mathcal{H}_d$ over a scheme $T$
+is the category of finite locally free morphisms $\pi : Z \to T$ of
+degree $d$. Instead of classifying these directly we first
+study the quasi-coherent sheaves of algebras $\pi_*\mathcal{O}_Z$.
+
+\medskip\noindent
+Let $R$ be a ring. Let us temporarily make the following definition:
+A {\it free $d$-dimensional algebra over $R$}
+is given by a commutative $R$-algebra structure $m$ on $R^{\oplus d}$
+such that $e_1 = (1, 0, \ldots, 0)$ is a unit\footnote{It may be better
+to think of this as a pair consisting of a multiplication map
+$m : R^{\oplus d} \otimes_R R^{\oplus d} \to R^{\oplus d}$ and
+a ring map $\psi : R \to R^{\oplus d}$ satisfying a bunch of axioms.}.
+We think of $m$ as an $R$-linear map
+$$
+m : R^{\oplus d} \otimes_R R^{\oplus d} \longrightarrow R^{\oplus d}
+$$
+such that $m(e_1, x) = m(x, e_1) = x$ and such that $m$ defines a
+commutative and associative ring structure. If we write
+$m(e_i, e_j) = \sum a_{ij}^ke_k$ then we see this boils down
+to the conditions
+$$
+\left\{
+\begin{matrix}
+\sum_l a_{ij}^la_{lk}^m = \sum_l a_{il}^ma_{jk}^l & \forall i, j, k, m \\
+a_{ij}^k = a_{ji}^k & \forall i, j, k \\
+a_{i1}^j = \delta_{ij} & \forall i, j
+\end{matrix}
+\right.
+$$
+where $\delta_{ij}$ is the Kronecker $\delta$-function. OK, so let's define
+$$
+R_{univ} = \mathbf{Z}[a_{ij}^k]/J
+$$
+where the ideal $J$ is the ideal generated by the relations displayed above.
+Denote
+$$
+m_{univ} :
+R_{univ}^{\oplus d} \otimes_{R_{univ}} R_{univ}^{\oplus d}
+\longrightarrow
+R_{univ}^{\oplus d}
+$$
the free $d$-dimensional algebra over $R_{univ}$ whose structure
+constants are the classes of $a_{ij}^k$ modulo $J$.
+Then it is clear that given any free $d$-dimensional algebra $m$ over a ring
+$R$ there exists a unique $\mathbf{Z}$-algebra homomorphism
+$\psi : R_{univ} \to R$ such that $\psi_*m_{univ} = m$ (this means that
+$m$ is what you get by applying the base change functor
+$- \otimes_{R_{univ}} R$ to $m_{univ}$). In other words, setting
+$X = \Spec(R_{univ})$ we obtain a canonical identification
+$$
+X(T) = \{\text{free }d\text{-dimensional algebras }m\text{ over }R\}
+$$
+for varying $T = \Spec(R)$. By Zariski localization we obtain
+the following seemingly more general identification
+\begin{equation}
+\label{equation-objects}
+X(T) = \{\text{free }d\text{-dimensional algebras }
+m\text{ over }\Gamma(T, \mathcal{O}_T)\}
+\end{equation}
+for any scheme $T$.
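As a concrete illustration of the displayed conditions on structure constants (a sanity check added here, not part of the original text): for $d = 2$ the quadratic algebra $R[x]/(x^2 - bx - a)$, with basis $e_1 = 1$, $e_2 = x$, is a free $2$-dimensional algebra. The short script below, with arbitrary sample values $a = 2$, $b = 3$, verifies the three conditions for its structure constants.

```python
# Sanity check: the structure constants of R[x]/(x^2 - b*x - a), with
# basis e_1 = 1, e_2 = x, satisfy the associativity, commutativity and
# unit relations displayed in the text.  a = 2, b = 3 are sample values.
a, b = 2, 3
d = 2

# c[i][j][k] = a_{ij}^k with 0-based indices: e_i * e_j = sum_k c[i][j][k] e_k
c = [[[0] * d for _ in range(d)] for _ in range(d)]
c[0][0][0] = 1               # e_1 * e_1 = e_1
c[0][1][1] = c[1][0][1] = 1  # e_1 * e_2 = e_2 * e_1 = e_2
c[1][1][0] = a               # e_2 * e_2 = a e_1 + b e_2
c[1][1][1] = b

# Associativity: sum_l a_{ij}^l a_{lk}^m = sum_l a_{il}^m a_{jk}^l
assoc = all(
    sum(c[i][j][l] * c[l][k][m] for l in range(d))
    == sum(c[i][l][m] * c[j][k][l] for l in range(d))
    for i in range(d) for j in range(d) for k in range(d) for m in range(d)
)
# Commutativity: a_{ij}^k = a_{ji}^k
comm = all(c[i][j][k] == c[j][i][k]
           for i in range(d) for j in range(d) for k in range(d))
# Unit: a_{i1}^j = delta_{ij}
unit = all(c[i][0][j] == (1 if i == j else 0)
           for i in range(d) for j in range(d))

print(assoc, comm, unit)  # prints: True True True
```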
+
+\medskip\noindent
+Next we talk a little bit about {\it isomorphisms of free $d$-dimensional
+$R$-algebras}. Namely, suppose that $m$, $m'$ are two free $d$-dimensional
+algebras over a ring $R$. An {\it isomorphism from $m$ to $m'$} is given by
+an invertible $R$-linear map
+$$
+\varphi : R^{\oplus d} \longrightarrow R^{\oplus d}
+$$
+such that $\varphi(e_1) = e_1$ and such that
+$$
m \circ (\varphi \otimes \varphi) = \varphi \circ m'.
+$$
+Note that we can compose these so that the collection of
+free $d$-dimensional algebras over $R$ becomes a category.
+In this way we obtain a functor
+\begin{equation}
+\label{equation-FAd}
+FA_d : \Sch_{fppf}^{opp} \longrightarrow \textit{Groupoids}
+\end{equation}
+from the category of schemes to groupoids: to a scheme $T$ we associate the
+set of free $d$-dimensional algebras over $\Gamma(T, \mathcal{O}_T)$
+endowed with the structure
+of a category using the notion of isomorphisms just defined.
+
+\medskip\noindent
The above suggests we consider the functor in groups $G$
+which associates to any scheme $T$ the group
+$$
+G(T) = \{g \in \text{GL}_d(\Gamma(T, \mathcal{O}_T)) \mid g(e_1) = e_1\}
+$$
+It is clear that $G \subset \text{GL}_d$ (see
+Groupoids, Example \ref{groupoids-example-general-linear-group})
+is the closed subgroup scheme cut out by the equations
+$x_{11} = 1$ and $x_{i1} = 0$ for $i > 1$. Hence $G$ is a smooth
+affine group scheme over $\Spec(\mathbf{Z})$. Consider the
+action
+$$
+a : G \times_{\Spec(\mathbf{Z})} X \longrightarrow X
+$$
+which associates to a $T$-valued point $(g, m)$ with $T = \Spec(R)$
+on the left hand side the free $d$-dimensional algebra over $R$
+given by
+$$
a(g, m) = g^{-1} \circ m \circ (g \otimes g).
+$$
+Note that this means that $g$ defines an isomorphism $m \to a(g, m)$
+of $d$-dimensional free $R$-algebras. We omit the verification that
+$a$ indeed defines an action of the group scheme $G$ on the scheme $X$.
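To make the action concrete, here is the case $d = 2$, worked out as an illustration (this example is not in the original text; the computation is elementary linear algebra).

```latex
\medskip\noindent
For example, take $d = 2$. A free $2$-dimensional algebra $m$ over $R$
is determined by $m(e_2, e_2) = a e_1 + b e_2$ for some $a, b \in R$,
i.e., by the quadratic algebra $R[x]/(x^2 - bx - a)$ with $e_1 = 1$ and
$e_2 = x$; here the associativity, commutativity, and unit conditions
hold automatically. An element $g \in G(R)$ is given by $g(e_1) = e_1$
and $g(e_2) = c e_1 + u e_2$ with $c \in R$ and $u \in R^*$. A direct
computation gives
$$
a(g, m)(e_2, e_2) = (u^2 a - ubc - c^2) e_1 + (ub + 2c) e_2,
$$
so on structure constants the action reads
$(a, b) \mapsto (u^2 a - ubc - c^2,\ ub + 2c)$.
```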
+
+\begin{lemma}
+\label{lemma-represent-FAd}
+The functor in groupoids $FA_d$ defined in (\ref{equation-FAd})
+is isomorphic (!) to the functor in groupoids which associates
+to a scheme $T$ the category with
+\begin{enumerate}
\item set of objects $X(T)$,
\item set of morphisms $G(T) \times X(T)$,
+\item $s : G(T) \times X(T) \to X(T)$ is the projection map,
+\item $t : G(T) \times X(T) \to X(T)$ is $a(T)$, and
+\item composition $G(T) \times X(T) \times_{s, X(T), t} G(T) \times X(T)
+\to G(T) \times X(T)$ is given by $((g, m), (g', m')) \mapsto (gg', m')$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have seen the rule on objects in (\ref{equation-objects}).
+We have also seen above that $g \in G(T)$ can be viewed as
+a morphism from $m$ to $a(g, m)$ for any free $d$-dimensional algebra $m$.
+Conversely, any morphism $m \to m'$ is given by an invertible linear
+map $\varphi$ which corresponds to an element $g \in G(T)$ such
+that $m' = a(g, m)$.
+\end{proof}
+
+\noindent
+In fact the groupoid $(X, G \times X, s, t, c)$ described in the
+lemma above is the groupoid associated to the action $a : G \times X \to X$
+as defined in
+Groupoids, Lemma \ref{groupoids-lemma-groupoid-from-action}.
+Since $G$ is smooth over $\Spec(\mathbf{Z})$
+we see that the two morphisms $s, t : G \times X \to X$ are
+smooth: by symmetry it suffices to prove that one of them is, and
+$s$ is the base change of $G \to \Spec(\mathbf{Z})$.
Hence $(X, G \times X, s, t, c)$ is a smooth groupoid scheme,
+and the quotient stack $[X/G]$ is an algebraic stack by
+Algebraic Stacks,
+Theorem \ref{algebraic-theorem-smooth-groupoid-gives-algebraic-stack}.
+
+\begin{proposition}
+\label{proposition-finite-hilbert-point}
+The stack $\mathcal{H}_d$ is equivalent to the quotient stack
+$[X/G]$ described above. In particular $\mathcal{H}_d$ is an
+algebraic stack.
+\end{proposition}
+
+\begin{proof}
+Note that by
+Groupoids in Spaces, Definition
+\ref{spaces-groupoids-definition-quotient-stack}
+the quotient stack $[X/G]$ is the stackification of the
+category fibred in groupoids associated to the ``presheaf in groupoids''
+which associates to a scheme $T$ the groupoid
+$$
+(X(T), G(T) \times X(T), s, t, c).
+$$
+Since this ``presheaf in groupoids'' is isomorphic to $FA_d$ by
+Lemma \ref{lemma-represent-FAd}
it suffices to prove that $\mathcal{H}_d$ is the stackification
+of (the category fibred in groupoids associated to the
+``presheaf in groupoids'') $FA_d$. To do this we first define a
+functor
+$$
+\Spec : FA_d \longrightarrow \mathcal{H}_d
+$$
+Recall that the fibre category of $\mathcal{H}_d$ over a scheme $T$
+is the category of finite locally free morphisms $Z \to T$ of degree $d$.
+Thus given a scheme $T$ and a free $d$-dimensional
+$\Gamma(T, \mathcal{O}_T)$-algebra $m$ we may assign to this the object
+$$
+Z = \underline{\Spec}_T(\mathcal{A})
+$$
+of $\mathcal{H}_{d, T}$
where $\mathcal{A} = \mathcal{O}_T^{\oplus d}$ endowed with an
+$\mathcal{O}_T$-algebra structure via $m$. Moreover, if $m'$ is
+a second such free $d$-dimensional $\Gamma(T, \mathcal{O}_T)$-algebra
+and if $\varphi : m \to m'$ is an isomorphism of these, then
+the induced $\mathcal{O}_T$-linear map
+$\varphi : \mathcal{O}_T^{\oplus d} \to \mathcal{O}_T^{\oplus d}$
+induces an isomorphism
+$$
+\varphi : \mathcal{A}' \longrightarrow \mathcal{A}
+$$
+of quasi-coherent $\mathcal{O}_T$-algebras. Hence
+$$
+\underline{\Spec}_T(\varphi) :
+\underline{\Spec}_T(\mathcal{A})
+\longrightarrow
+\underline{\Spec}_T(\mathcal{A}')
+$$
+is a morphism in the fibre category $\mathcal{H}_{d, T}$. We omit the
verification that this construction is compatible with base change so
that we indeed obtain a functor $\Spec : FA_d \to \mathcal{H}_d$
+as claimed above.
+
+\medskip\noindent
+To show that $\Spec : FA_d \to \mathcal{H}_d$ induces an equivalence
+between the stackification of $FA_d$ and $\mathcal{H}_d$ it suffices to
+check that
+\begin{enumerate}
+\item $\mathit{Isom}(m, m') = \mathit{Isom}(\Spec(m), \Spec(m'))$
for any $m, m' \in FA_d(T)$, and
\item for any scheme $T$ and any object $Z \to T$ of $\mathcal{H}_{d, T}$
there exists a covering $\{T_i \to T\}$ such that $Z|_{T_i}$ is
isomorphic to $\Spec(m)$ for some $m \in FA_d(T_i)$,
+\end{enumerate}
+see
+Stacks, Lemma \ref{stacks-lemma-stackify-groupoids}.
+The first statement follows from the observation that any isomorphism
+$$
+\underline{\Spec}_T(\mathcal{A})
+\longrightarrow
+\underline{\Spec}_T(\mathcal{A}')
+$$
+is necessarily given by a global invertible matrix $g$ when
+$\mathcal{A} = \mathcal{A}' = \mathcal{O}_T^{\oplus d}$ as modules.
+To prove the second statement let $\pi : Z \to T$ be a finite
locally free morphism of degree $d$ and set
$\mathcal{A} = \pi_*\mathcal{O}_Z$. Then $\mathcal{A}$ is a locally
free sheaf of $\mathcal{O}_T$-modules of rank $d$.
+Consider the element $1 \in \Gamma(T, \mathcal{A})$. This element is
nonzero in $\mathcal{A}_t \otimes_{\mathcal{O}_{T, t}} \kappa(t)$
for every $t \in T$ since the scheme
$Z_t = \Spec(\mathcal{A}_t \otimes_{\mathcal{O}_{T, t}} \kappa(t))$
+is nonempty being of degree $d > 0$ over $\kappa(t)$. Thus
+$1 : \mathcal{O}_T \to \mathcal{A}$ can locally be used as the first
+basis element (for example you can use
+Algebra, Lemma \ref{algebra-lemma-cokernel-flat} parts (1) and (2)
+to see this). Thus, after localizing on
+$T$ we may assume that there exists an isomorphism
+$\varphi : \mathcal{A} \to \mathcal{O}_T^{\oplus d}$
such that $1 \in \Gamma(T, \mathcal{A})$ corresponds to the first basis element.
+In this situation the multiplication map
+$\mathcal{A} \otimes_{\mathcal{O}_T} \mathcal{A} \to \mathcal{A}$
+translates via $\varphi$ into a free $d$-dimensional algebra $m$ over
+$\Gamma(T, \mathcal{O}_T)$. This finishes the proof.
+\end{proof}
+
+
+
+
+\section{Finite Hilbert stacks of spaces}
+\label{section-spaces-hilbert}
+
+\noindent
+The finite Hilbert stack of an algebraic space is an algebraic stack.
+
+\begin{lemma}
+\label{lemma-hilbert-stack-of-space}
+Let $S$ be a scheme.
+Let $X$ be an algebraic space over $S$.
+Then $\mathcal{H}_d(X)$ is an algebraic stack.
+\end{lemma}
+
+\begin{proof}
+The $1$-morphism
+$$
+\mathcal{H}_d(X) \longrightarrow \mathcal{H}_d
+$$
+is representable by algebraic spaces according to
+Lemma \ref{lemma-representable-on-top}.
+The stack $\mathcal{H}_d$ is an algebraic stack according to
+Proposition \ref{proposition-finite-hilbert-point}.
+Hence $\mathcal{H}_d(X)$ is an algebraic stack by
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-representable-morphism-to-algebraic}.
+\end{proof}
+
+\noindent
+This lemma allows us to bootstrap.
+
+\begin{lemma}
+\label{lemma-hilbert-stack-relative-space}
+Let $S$ be a scheme. Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism
+of stacks in groupoids over $(\Sch/S)_{fppf}$ such that
+\begin{enumerate}
+\item $\mathcal{X}$ is representable by an algebraic space, and
+\item $F$ is representable by algebraic spaces, surjective, flat, and
+locally of finite presentation.
+\end{enumerate}
+Then $\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is an algebraic stack.
+\end{lemma}
+
+\begin{proof}
+Choose a representable stack in groupoids $\mathcal{U}$ over $S$ and a
+$1$-morphism $f : \mathcal{U} \to \mathcal{H}_d(\mathcal{X})$
+which is representable by algebraic spaces, smooth, and surjective.
+This is possible because $\mathcal{H}_d(\mathcal{X})$ is an algebraic stack by
+Lemma \ref{lemma-hilbert-stack-of-space}.
+Consider the $2$-fibre product
+$$
+\mathcal{W} =
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+\times_{\mathcal{H}_d(\mathcal{X}), f}
+\mathcal{U}.
+$$
+Since $\mathcal{U}$ is representable (in particular a stack in setoids)
+it follows from
+Examples of Stacks, Lemma \ref{examples-stacks-lemma-faithful-hilbert}
+and
+Stacks, Lemma \ref{stacks-lemma-2-fibre-product-gives-stack-in-setoids}
+that $\mathcal{W}$ is a stack in setoids. The $1$-morphism
+$\mathcal{W} \to \mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is
+representable by algebraic spaces, smooth, and surjective as a base
+change of the morphism $f$ (see
+Algebraic Stacks,
+Lemmas \ref{algebraic-lemma-base-change-representable-by-spaces} and
+\ref{algebraic-lemma-base-change-representable-transformations-property}).
+Thus, if we can show that $\mathcal{W}$ is representable by an algebraic space,
+then the lemma follows from
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-smooth-surjective-morphism-implies-algebraic}.
+
+\medskip\noindent
+The diagonal of $\mathcal{Y}$ is representable by algebraic spaces according to
+Lemma \ref{lemma-flat-finite-presentation-surjective-diagonal}.
+We may apply
+Lemma \ref{lemma-relative-hilbert}
+to see that the $1$-morphism
+$$
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+\longrightarrow
+\mathcal{H}_d(\mathcal{X}) \times \mathcal{Y}
+$$
+is representable by algebraic spaces. Consider the $2$-fibre product
+$$
+\mathcal{V} =
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+\times_{(\mathcal{H}_d(\mathcal{X}) \times \mathcal{Y}), f \times F}
+(\mathcal{U} \times \mathcal{X}).
+$$
+The projection morphism $\mathcal{V} \to \mathcal{U} \times \mathcal{X}$
+is representable by algebraic spaces as a base change of the last
+displayed morphism. Hence $\mathcal{V}$ is an algebraic space (see
+Bootstrap, Lemma \ref{bootstrap-lemma-representable-by-spaces-over-space}
+or
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-base-change-by-space-representable-by-space}).
+The $1$-morphism $\mathcal{V} \to \mathcal{U}$ fits into the following
+$2$-cartesian diagram
+$$
+\xymatrix{
+\mathcal{V} \ar[d] \ar[r] & \mathcal{X} \ar[d]^F \\
+\mathcal{W} \ar[r] & \mathcal{Y}
+}
+$$
+because
+$$
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+\times_{(\mathcal{H}_d(\mathcal{X}) \times \mathcal{Y}), f \times F}
+(\mathcal{U} \times \mathcal{X})
+=
+(\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+\times_{\mathcal{H}_d(\mathcal{X}), f}
+\mathcal{U}) \times_{\mathcal{Y}, F} \mathcal{X}.
+$$
+Hence $\mathcal{V} \to \mathcal{W}$ is representable by algebraic spaces,
+surjective, flat, and locally of finite presentation as a base change
of $F$. It follows that the same is true for the corresponding
+sheaves of sets associated to $\mathcal{V}$ and $\mathcal{W}$, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-map-fibred-setoids-property}.
+Thus we conclude that the sheaf associated to $\mathcal{W}$ is an
+algebraic space by
+Bootstrap, Theorem \ref{bootstrap-theorem-final-bootstrap}.
+\end{proof}
+
+
+
+
+\section{LCI locus in the Hilbert stack}
+\label{section-lci}
+
+\noindent
+Please consult
+Examples of Stacks, Section \ref{examples-stacks-section-hilbert-d-stack}
+for notation. Fix a $1$-morphism $F : \mathcal{X} \longrightarrow \mathcal{Y}$
+of stacks in groupoids over $(\Sch/S)_{fppf}$. Assume that
+$F$ is representable by algebraic spaces. Fix $d \geq 1$. Consider an
+object $(U, Z, y, x, \alpha)$ of $\mathcal{H}_d$. There is an
+induced $1$-morphism
+$$
+(\Sch/Z)_{fppf}
+\longrightarrow
+(\Sch/U)_{fppf} \times_{y, \mathcal{Y}, F} \mathcal{X}
+$$
+(by the universal property of $2$-fibre products) which is representable by
+a morphism of algebraic spaces over $U$.
+Namely, since $F$ is representable by algebraic spaces, we may choose
+an algebraic space $X_y$ over $U$ which represents the $2$-fibre product
+$(\Sch/U)_{fppf} \times_{y, \mathcal{Y}, F} \mathcal{X}$.
+Since $\alpha : y|_Z \to F(x)$ is an isomorphism we see that
+$\xi = (Z, Z \to U, x, \alpha)$ is an object of the $2$-fibre product
+$(\Sch/U)_{fppf} \times_{y, \mathcal{Y}, F} \mathcal{X}$ over $Z$.
+Hence $\xi$ gives rise to a morphism $x_\alpha : Z \to X_y$ of algebraic spaces
+over $U$ as $X_y$ is the functor of isomorphisms classes of objects of
+$(\Sch/U)_{fppf} \times_{y, \mathcal{Y}, F} \mathcal{X}$, see
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-characterize-representable-by-space}.
+Here is a picture
+\begin{equation}
+\label{equation-relative-map}
+\vcenter{
+\xymatrix{
+Z \ar[r]_{x_\alpha} \ar[rd] & X_y \ar[d] \\
+& U
+}
+}
+\quad\quad
+\vcenter{
+\xymatrix{
+(\Sch/Z)_{fppf} \ar[rd] \ar[r]_-{x, \alpha} &
+(\Sch/U)_{fppf} \times_{y, \mathcal{Y}, F} \mathcal{X} \ar[r] \ar[d] &
+\mathcal{X} \ar[d]^F \\
+& (\Sch/U)_{fppf} \ar[r]^y & \mathcal{Y}
+}
+}
+\end{equation}
+We remark that if
+$(f, g, b, a) : (U, Z, y, x, \alpha) \to (U', Z', y', x', \alpha')$
+is a morphism between objects of $\mathcal{H}_d$, then the morphism
+$x'_{\alpha'} : Z' \to X'_{y'}$ is the base change of the morphism
+$x_\alpha$ by the morphism $g : U' \to U$ (details omitted).
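In other words, with $g : U' \to U$ as above, the remark says that the square
$$
\xymatrix{
Z' \ar[d] \ar[r]_{x'_{\alpha'}} & X'_{y'} \ar[d] \\
Z \ar[r]^{x_\alpha} & X_y
}
$$
is cartesian, the vertical arrows being the projections coming from the
identifications $Z' = U' \times_U Z$ and $X'_{y'} = U' \times_U X_y$.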
+
+\medskip\noindent
+Now assume moreover that $F$ is flat and locally of finite presentation.
+In this situation we define a full subcategory
+$$
+\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}) \subset
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+$$
+consisting of those objects $(U, Z, y, x, \alpha)$ of
+$\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ such
+that the corresponding morphism $x_\alpha : Z \to X_y$ is unramified
+and a local complete intersection morphism (see
+Morphisms of Spaces, Definition \ref{spaces-morphisms-definition-unramified}
+and
+More on Morphisms of Spaces,
+Definition \ref{spaces-more-morphisms-definition-lci}
+for definitions).
+
+\begin{lemma}
+\label{lemma-lci-locus-stack-in-groupoids}
+Let $S$ be a scheme. Fix a $1$-morphism
+$F : \mathcal{X} \longrightarrow \mathcal{Y}$
+of stacks in groupoids over $(\Sch/S)_{fppf}$.
+Assume $F$ is representable by algebraic spaces, flat, and locally
+of finite presentation. Then $\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$
+is a stack in groupoids and the inclusion functor
+$$
+\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})
+\longrightarrow
+\mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+$$
+is representable and an open immersion.
+\end{lemma}
+
+\begin{proof}
+Let $\Xi = (U, Z, y, x, \alpha)$ be an object of $\mathcal{H}_d$. It follows
+from the remark following
+(\ref{equation-relative-map})
+that the pullback of $\Xi$ by $U' \to U$ belongs to
+$\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ if and only if the base
+change of $x_\alpha$ is unramified and a local complete intersection morphism.
+Note that $Z \to U$ is finite locally free (hence flat, locally of
+finite presentation and universally closed) and that $X_y \to U$ is
+flat and locally of finite presentation by our assumption on $F$. Then
+More on Morphisms of Spaces, Lemmas
+\ref{spaces-more-morphisms-lemma-where-unramified} and
+\ref{spaces-more-morphisms-lemma-where-lci}
imply that there exists an open subscheme $W \subset U$ such that a morphism
+$U' \to U$ factors through $W$ if and only if the base change of
+$x_\alpha$ via $U' \to U$ is unramified and a local complete intersection
+morphism. This implies that
+$$
+(\Sch/U)_{fppf}
+\times_{\Xi, \mathcal{H}_d(\mathcal{X}/\mathcal{Y})}
+\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})
+$$
+is representable by $W$. Hence the final statement of the lemma
+holds. The first statement (that
+$\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ is a stack in groupoids)
+follows from this and
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-open-fibred-category-is-algebraic}.
+\end{proof}
+
+\noindent
+Local complete intersection morphisms are ``locally unobstructed''.
+This holds in much greater generality than the special case
that we need in this chapter.
+
+\begin{lemma}
+\label{lemma-lci-unobstructed}
+Let $U \subset U'$ be a first order thickening of affine schemes.
+Let $X'$ be an algebraic space flat over $U'$. Set $X = U \times_{U'} X'$.
+Let $Z \to U$ be finite locally free of degree $d$. Finally, let
+$f : Z \to X$ be unramified and a local complete intersection morphism.
+Then there exists a commutative diagram
+$$
+\xymatrix{
+(Z \subset Z') \ar[rd] \ar[rr]_{(f, f')} & & (X \subset X') \ar[ld] \\
+& (U \subset U')
+}
+$$
+of algebraic spaces over $U'$ such that $Z' \to U'$ is finite locally free
+of degree $d$ and $Z = U \times_{U'} Z'$.
+\end{lemma}
+
+\begin{proof}
+By
+More on Morphisms of Spaces,
+Lemma \ref{spaces-more-morphisms-lemma-unramified-lci}
+the conormal sheaf $\mathcal{C}_{Z/X}$ of the unramified morphism $Z \to X$
+is a finite locally free $\mathcal{O}_Z$-module and by
+More on Morphisms of Spaces,
+Lemma \ref{spaces-more-morphisms-lemma-transitivity-conormal-lci}
+we have an exact sequence
+$$
+0 \to i^*\mathcal{C}_{X/X'} \to
+\mathcal{C}_{Z/X'} \to
+\mathcal{C}_{Z/X} \to 0
+$$
+of conormal sheaves. Since $Z$ is affine this sequence is split. Choose
+a splitting
+$$
+\mathcal{C}_{Z/X'} = i^*\mathcal{C}_{X/X'} \oplus \mathcal{C}_{Z/X}
+$$
+Let $Z \subset Z''$ be the universal first order thickening of $Z$
+over $X'$ (see
+More on Morphisms of Spaces,
+Section \ref{spaces-more-morphisms-section-universal-thickening}).
+Denote $\mathcal{I} \subset \mathcal{O}_{Z''}$ the quasi-coherent sheaf
of ideals corresponding to $Z \subset Z''$. By definition,
$\mathcal{C}_{Z/X'}$ is $\mathcal{I}$ viewed as a sheaf on $Z$.
+Hence the splitting above determines a splitting
+$$
+\mathcal{I} = i^*\mathcal{C}_{X/X'} \oplus \mathcal{C}_{Z/X}
+$$
+Let $Z' \subset Z''$ be the closed subscheme cut out by
+$\mathcal{C}_{Z/X} \subset \mathcal{I}$ viewed as a quasi-coherent sheaf
+of ideals on $Z''$. It is clear that $Z'$ is a first order thickening
+of $Z$ and that we obtain a commutative diagram of first order thickenings
+as in the statement of the lemma.
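Concretely, as $Z' \subset Z''$ is cut out by the direct summand
$\mathcal{C}_{Z/X} \subset \mathcal{I}$, the ideal sheaf of $Z$ in $Z'$
is the image of $\mathcal{I}$, i.e., the other summand:
$$
\mathcal{C}_{Z/Z'} = \mathcal{I}/\mathcal{C}_{Z/X} = i^*\mathcal{C}_{X/X'}.
$$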
+
+\medskip\noindent
+Since $X' \to U'$ is flat and since $X = U \times_{U'} X'$ we see that
+$\mathcal{C}_{X/X'}$ is the pullback of $\mathcal{C}_{U/U'}$ to $X$, see
+More on Morphisms of Spaces, Lemma \ref{spaces-more-morphisms-lemma-deform}.
+Note that by construction $\mathcal{C}_{Z/Z'} = i^*\mathcal{C}_{X/X'}$
+hence we conclude that $\mathcal{C}_{Z/Z'}$ is isomorphic to the pullback
+of $\mathcal{C}_{U/U'}$ to $Z$. Applying
+More on Morphisms of Spaces, Lemma \ref{spaces-more-morphisms-lemma-deform}
+once again (or its analogue for schemes, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-deform})
+we conclude that $Z' \to U'$ is flat and that $Z = U \times_{U'} Z'$.
+Finally,
+More on Morphisms, Lemma \ref{more-morphisms-lemma-deform-property}
+shows that $Z' \to U'$ is finite locally free of degree $d$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-formally-smooth}
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of stacks in groupoids
+over $(\Sch/S)_{fppf}$. Assume $F$ is representable by algebraic
+spaces, flat, and locally of finite presentation. Then
+$$
+p : \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}) \to \mathcal{Y}
+$$
+is formally smooth on objects.
+\end{lemma}
+
+\begin{proof}
+We have to show the following: Given
+\begin{enumerate}
+\item an object $(U, Z, y, x, \alpha)$ of
+$\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ over an affine scheme $U$,
+\item a first order thickening $U \subset U'$, and
+\item an object $y'$ of $\mathcal{Y}$ over $U'$ such that $y'|_U = y$,
+\end{enumerate}
+then there exists an object $(U', Z', y', x', \alpha')$ of
+$\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ over $U'$ with
+$Z = U \times_{U'} Z'$, with $x = x'|_Z$, and with
+$\alpha = \alpha'|_U$. Namely, the last two equalities will take care
+of the commutativity of (\ref{equation-formally-smooth}).
+
+\medskip\noindent
+Consider the morphism $x_\alpha : Z \to X_y$ constructed in
+Equation (\ref{equation-relative-map}). Denote similarly $X'_{y'}$
+the algebraic space over $U'$ representing the $2$-fibre product
+$(\Sch/U')_{fppf} \times_{y', \mathcal{Y}, F} \mathcal{X}$.
+By assumption the morphism $X'_{y'} \to U'$ is flat (and locally of finite
+presentation). As $y'|_U = y$ we see that $X_y = U \times_{U'} X'_{y'}$.
+Hence we may apply
+Lemma \ref{lemma-lci-unobstructed}
+to find $Z' \to U'$ finite locally free of degree $d$ with
+$Z = U \times_{U'} Z'$ and with $Z' \to X'_{y'}$ extending $x_\alpha$.
+By construction the morphism $Z' \to X'_{y'}$ corresponds to a pair
+$(x', \alpha')$. It is clear that $(U', Z', y', x', \alpha')$
+is an object of $\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ over $U'$
+with $Z = U \times_{U'} Z'$, with $x = x'|_Z$, and with
$\alpha = \alpha'|_U$. Since we've seen in
Lemma \ref{lemma-lci-locus-stack-in-groupoids}
that $\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}) \subset
\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$ is an ``open substack'',
it follows that $(U', Z', y', x', \alpha')$ is an object of
+$\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-surjective}
+Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of stacks in groupoids
+over $(\Sch/S)_{fppf}$. Assume $F$ is representable by algebraic
+spaces, flat, surjective, and locally of finite presentation. Then
+$$
+\coprod\nolimits_{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})
+\longrightarrow
+\mathcal{Y}
+$$
+is surjective on objects.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove the following: For any field $k$
+and object $y$ of $\mathcal{Y}$ over $\Spec(k)$ there exists
+an integer $d \geq 1$ and an object $(U, Z, y, x, \alpha)$ of
+$\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ with $U = \Spec(k)$.
+Namely, in this case we see that $p$ is surjective on objects in the
+strong sense that an extension of the field is not needed.
+
+\medskip\noindent
+Denote $X_y$ the algebraic space over $U = \Spec(k)$
+representing the $2$-fibre product
$(\Sch/U)_{fppf} \times_{y, \mathcal{Y}, F} \mathcal{X}$.
+By assumption the morphism $X_y \to \Spec(k)$ is surjective and
+locally of finite presentation (and flat). In particular $X_y$ is
+nonempty. Choose a nonempty affine scheme $V$ and an \'etale morphism
+$V \to X_y$. Note that $V \to \Spec(k)$ is (flat), surjective,
+and locally of finite presentation (by
+Morphisms of Spaces,
+Definition \ref{spaces-morphisms-definition-locally-finite-presentation}).
+Pick a closed point $v \in V$ where $V \to \Spec(k)$ is Cohen-Macaulay
+(i.e., $V$ is Cohen-Macaulay at $v$), see
+More on Morphisms,
+Lemma \ref{more-morphisms-lemma-flat-finite-presentation-CM-open}.
+Applying
+More on Morphisms,
+Lemma \ref{more-morphisms-lemma-slice-CM}
+we find a regular immersion $Z \to V$ with $Z = \{v\}$.
+This implies $Z \to V$ is a closed immersion. Moreover, it follows that
+$Z \to \Spec(k)$ is finite (for example by
+Algebra, Lemma \ref{algebra-lemma-isolated-point}).
+Hence $Z \to \Spec(k)$ is finite locally free of some degree $d$.
+Now $Z \to X_y$ is unramified as the composition
+of a closed immersion followed by an \'etale morphism
+(see
+Morphisms of Spaces, Lemmas \ref{spaces-morphisms-lemma-composition-unramified},
+\ref{spaces-morphisms-lemma-etale-unramified}, and
+\ref{spaces-morphisms-lemma-immersion-unramified}).
+Finally, $Z \to X_y$ is a local complete intersection morphism
+as a composition of a regular immersion of schemes and an \'etale
+morphism of algebraic spaces (see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-regular-immersion-lci}
+and
+Morphisms of Spaces, Lemmas \ref{spaces-morphisms-lemma-etale-smooth} and
+\ref{spaces-morphisms-lemma-smooth-syntomic} and
+More on Morphisms of Spaces,
+Lemmas \ref{spaces-more-morphisms-lemma-flat-lci} and
+\ref{spaces-more-morphisms-lemma-composition-lci}).
+The morphism $Z \to X_y$ corresponds to an object $x$ of $\mathcal{X}$
+over $Z$ together with an isomorphism $\alpha : y|_Z \to F(x)$.
+We obtain an object $(U, Z, y, x, \alpha)$ of
+$\mathcal{H}_d(\mathcal{X}/\mathcal{Y})$. By what was said above about
+the morphism $Z \to X_y$ we see that it actually is an object of the
+subcategory $\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$ and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Bootstrapping algebraic stacks}
+\label{section-bootstrap}
+
+\noindent
+The following theorem is one of the main results of this chapter.
+
+\begin{theorem}
+\label{theorem-bootstrap}
+Let $S$ be a scheme. Let $F : \mathcal{X} \to \mathcal{Y}$
+be a $1$-morphism of stacks in groupoids over $(\Sch/S)_{fppf}$. If
+\begin{enumerate}
+\item $\mathcal{X}$ is representable by an algebraic space, and
+\item $F$ is representable by algebraic spaces, surjective, flat and
+locally of finite presentation,
+\end{enumerate}
+then $\mathcal{Y}$ is an algebraic stack.
+\end{theorem}
+
+\begin{proof}
+By
+Lemma \ref{lemma-flat-finite-presentation-surjective-diagonal}
+we see that the diagonal of $\mathcal{Y}$ is representable by algebraic
+spaces. Hence we only need to verify the existence of a $1$-morphism
+$f : \mathcal{V} \to \mathcal{Y}$ of stacks in groupoids over
+$(\Sch/S)_{fppf}$ with $\mathcal{V}$ representable and
+$f$ surjective and smooth. By
+Lemma \ref{lemma-hilbert-stack-relative-space}
+we know that
+$$
+\coprod\nolimits_{d \geq 1} \mathcal{H}_d(\mathcal{X}/\mathcal{Y})
+$$
+is an algebraic stack. It follows from
+Lemma \ref{lemma-lci-locus-stack-in-groupoids}
+and
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-open-fibred-category-is-algebraic}
+that
+$$
+\coprod\nolimits_{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})
+$$
+is an algebraic stack as well. Choose a representable stack in groupoids
+$\mathcal{V}$ over $(\Sch/S)_{fppf}$ and a surjective and smooth
+$1$-morphism
+$$
+\mathcal{V}
+\longrightarrow
+\coprod\nolimits_{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}).
+$$
+We claim that the composition
+$$
+\mathcal{V}
+\longrightarrow
+\coprod\nolimits_{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})
+\longrightarrow
+\mathcal{Y}
+$$
is smooth and surjective, which finishes the proof of the theorem. In fact,
+the smoothness will be a consequence of
+Lemmas \ref{lemma-limit-preserving} and \ref{lemma-lci-formally-smooth}
+and the surjectivity a consequence of
+Lemma \ref{lemma-lci-surjective}.
+We spell out the details in the following paragraph.
+
+\medskip\noindent
+By construction $\mathcal{V} \to
+\coprod\nolimits_{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$
+is representable by algebraic spaces, surjective, and smooth (and hence
+also locally of finite presentation and formally smooth by the general
+principle
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-representable-transformations-property-implication}
+and
+More on Morphisms of Spaces,
+Lemma \ref{spaces-more-morphisms-lemma-smooth-formally-smooth}).
+Applying
+Lemmas \ref{lemma-representable-by-spaces-limit-preserving},
+\ref{lemma-representable-by-spaces-formally-smooth}, and
+\ref{lemma-representable-by-spaces-surjective}
+we see that $\mathcal{V} \to
+\coprod\nolimits_{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})$
+is limit preserving on objects, formally smooth on objects, and
+surjective on objects. The $1$-morphism
+$\coprod\nolimits_{d \geq 1} \mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y})
+\to \mathcal{Y}$ is
+\begin{enumerate}
+\item limit preserving on objects: this is
+Lemma \ref{lemma-limit-preserving}
+for $\mathcal{H}_d(\mathcal{X}/\mathcal{Y}) \to \mathcal{Y}$
+and we combine it with Lemmas
+\ref{lemma-lci-locus-stack-in-groupoids},
+\ref{lemma-open-immersion-limit-preserving}, and
+\ref{lemma-composition-limit-preserving}
+to get it for $\mathcal{H}_{d, lci}(\mathcal{X}/\mathcal{Y}) \to \mathcal{Y}$,
+\item formally smooth on objects by
+Lemma \ref{lemma-lci-formally-smooth},
+and
+\item surjective on objects by
+Lemma \ref{lemma-lci-surjective}.
+\end{enumerate}
+Using
+Lemmas \ref{lemma-composition-limit-preserving},
+\ref{lemma-composition-formally-smooth}, and
+\ref{lemma-composition-surjective}
+we conclude that the composition $\mathcal{V} \to \mathcal{Y}$ is
+limit preserving on objects, formally smooth on objects, and
+surjective on objects.
+Using
+Lemmas \ref{lemma-representable-by-spaces-limit-preserving},
+\ref{lemma-representable-by-spaces-formally-smooth}, and
+\ref{lemma-representable-by-spaces-surjective}
+we see that $\mathcal{V} \to \mathcal{Y}$ is
+locally of finite presentation, formally smooth, and surjective.
+Finally, using (via the general principle
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-representable-transformations-property-implication})
+the infinitesimal lifting criterion
+(More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-smooth-formally-smooth})
+we see that $\mathcal{V} \to \mathcal{Y}$ is smooth and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Applications}
+\label{section-applications}
+
+\noindent
+Our first task is to show that the quotient stack $[U/R]$ associated to
+a ``flat and locally finitely presented groupoid'' is an algebraic stack.
+See
+Groupoids in Spaces,
+Definition \ref{spaces-groupoids-definition-quotient-stack}
+for the definition of the quotient stack.
+The following lemma is preliminary and is the analogue of
+Algebraic Stacks,
+Lemma \ref{algebraic-lemma-smooth-quotient-smooth-presentation}.
+
+\begin{lemma}
+\label{lemma-flat-quotient-flat-presentation}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $(U, R, s, t, c)$ be a groupoid in algebraic spaces over $S$.
+Assume $s, t$ are flat and locally of finite presentation.
+Then the morphism $\mathcal{S}_U \to [U/R]$ is flat, locally of
+finite presentation, and surjective.
+\end{lemma}
+
+\begin{proof}
+Let $T$ be a scheme and let $x : (\Sch/T)_{fppf} \to [U/R]$
+be a $1$-morphism. We have to show that the projection
+$$
+\mathcal{S}_U \times_{[U/R]} (\Sch/T)_{fppf}
+\longrightarrow
+(\Sch/T)_{fppf}
+$$
+is surjective, flat, and locally of finite presentation.
+We already know that the left hand side
+is representable by an algebraic space $F$, see
+Algebraic Stacks, Lemmas \ref{algebraic-lemma-diagonal-quotient-stack} and
+\ref{algebraic-lemma-representable-diagonal}.
+Hence we have to show the corresponding morphism $F \to T$ of
+algebraic spaces is surjective, locally of finite presentation, and flat.
+Since we are working with properties of morphisms of algebraic
+spaces which are local on the target in the fppf topology we
+may check this fppf locally on $T$. By construction, there exists
+an fppf covering $\{T_i \to T\}$ of $T$ such that
+$x|_{(\Sch/T_i)_{fppf}}$ comes from a morphism
+$x_i : T_i \to U$. (Note that $F \times_T T_i$ represents the
+$2$-fibre product $\mathcal{S}_U \times_{[U/R]} (\Sch/T_i)_{fppf}$
+so everything is compatible with the base change via $T_i \to T$.)
+Hence we may assume that $x$ comes from $x : T \to U$.
+In this case we see that
+$$
+\mathcal{S}_U \times_{[U/R]} (\Sch/T)_{fppf}
+=
+(\mathcal{S}_U \times_{[U/R]} \mathcal{S}_U)
+\times_{\mathcal{S}_U} (\Sch/T)_{fppf}
+=
+\mathcal{S}_R \times_{\mathcal{S}_U} (\Sch/T)_{fppf}
+$$
+The first equality by
+Categories, Lemma \ref{categories-lemma-2-fibre-product-erase-factor}
+and the second equality by
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-2-cartesian}.
+Clearly the last $2$-fibre product is represented by the algebraic
+space $F = R \times_{s, U, x} T$ and the projection
+$R \times_{s, U, x} T \to T$ is flat and locally of finite presentation
+as the base change of the flat locally finitely presented
+morphism of algebraic spaces $s : R \to U$.
+It is also surjective as $s$ has a section (namely the identity
+$e : U \to R$ of the groupoid).
+This proves the lemma.
+\end{proof}
+
+\noindent
+Here is the first main result of this section.
+
+\begin{theorem}
+\label{theorem-flat-groupoid-gives-algebraic-stack}
+Let $S$ be a scheme contained in $\Sch_{fppf}$.
+Let $(U, R, s, t, c)$ be a groupoid in algebraic spaces over $S$.
+Assume $s, t$ are flat and locally of finite presentation.
+Then the quotient stack $[U/R]$ is an algebraic stack over $S$.
+\end{theorem}
+
+\begin{proof}
+We check the two conditions of
+Theorem \ref{theorem-bootstrap}
+for the morphism
+$$
+(\Sch/U)_{fppf} \longrightarrow [U/R].
+$$
+The first is trivial (as $U$ is an algebraic space).
+The second is
+Lemma \ref{lemma-flat-quotient-flat-presentation}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{When is a quotient stack algebraic?}
+\label{section-quotient-algebraic}
+
+\noindent
+In
+Groupoids in Spaces, Section \ref{spaces-groupoids-section-stacks}
+we have defined the quotient stack $[U/R]$ associated to a groupoid
+$(U, R, s, t, c)$ in algebraic spaces. Note that $[U/R]$ is a stack
+in groupoids whose diagonal is representable by algebraic spaces (see
+Bootstrap, Lemma \ref{bootstrap-lemma-quotient-stack-isom}
+and
+Algebraic Stacks, Lemma \ref{algebraic-lemma-representable-diagonal})
+and such that there exists an algebraic space $U$ and a $1$-morphism
+$(\Sch/U)_{fppf} \to [U/R]$ which is an ``fppf surjection''
+in the sense that it induces a map on presheaves of isomorphism classes of
+objects which becomes surjective after sheafification.
+However, it is not the case that $[U/R]$ is an algebraic
+stack in general. This is not a contradiction with
+Theorem \ref{theorem-bootstrap}
+as the $1$-morphism $(\Sch/U)_{fppf} \to [U/R]$ is not
representable by algebraic spaces in general, and even if it is, it may
not be flat and locally of finite presentation.
+
+\medskip\noindent
+The easiest way to make examples of non-algebraic quotient stacks is
+to look at quotients of the form $[S/G]$ where $S$ is a scheme and $G$
+is a group scheme over $S$ acting trivially on $S$. Namely, we will see
+below
+(Lemma \ref{lemma-BG-algebraic})
+that if $[S/G]$ is algebraic, then $G \to S$ has to be flat and locally
+of finite presentation. An explicit example can be found in
+Examples, Section \ref{examples-section-not-algebraic-stack}.
+
+\begin{lemma}
+\label{lemma-quotient-algebraic}
+Let $S$ be a scheme and let $B$ be an algebraic space over $S$.
+Let $(U, R, s, t, c)$ be a groupoid in algebraic spaces over $B$.
+The quotient stack $[U/R]$ is an algebraic stack if and only if
+there exists a morphism of algebraic spaces $g : U' \to U$ such that
+\begin{enumerate}
+\item the composition
+$U' \times_{g, U, t} R \to R \xrightarrow{s} U$ is a surjection of
+sheaves, and
+\item the morphisms $s', t' : R' \to U'$ are flat and locally of finite
+presentation where $(U', R', s', t', c')$ is the restriction of
+$(U, R, s, t, c)$ via $g$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First, assume that $g : U' \to U$ satisfies (1) and (2). Property (1)
+implies that $[U'/R'] \to [U/R]$ is an equivalence, see
+Groupoids in Spaces,
+Lemma \ref{spaces-groupoids-lemma-quotient-stack-restrict-equivalence}.
+By
+Theorem \ref{theorem-flat-groupoid-gives-algebraic-stack}
+the quotient stack $[U'/R']$ is an algebraic stack. Hence
+$[U/R]$ is an algebraic stack too, see
+Algebraic Stacks, Lemma \ref{algebraic-lemma-equivalent}.
+
+\medskip\noindent
+Conversely, assume that $[U/R]$ is an algebraic stack. We may choose a
+scheme $W$ and a surjective smooth $1$-morphism
+$$
+f : (\Sch/W)_{fppf} \longrightarrow [U/R].
+$$
+By the $2$-Yoneda lemma
+(Algebraic Stacks, Section \ref{algebraic-section-2-yoneda})
+this corresponds to an object $\xi$ of $[U/R]$ over $W$.
+By the description of $[U/R]$ in
+Groupoids in Spaces, Lemma \ref{spaces-groupoids-lemma-quotient-stack-objects}
+we can find a surjective, flat, locally finitely presented morphism
+$b : U' \to W$ of schemes such that $\xi' = b^*\xi$ corresponds to a morphism
+$g : U' \to U$. Note that the $1$-morphism
+$$
f' : (\Sch/U')_{fppf} \longrightarrow [U/R]
+$$
+corresponding to $\xi'$ is surjective, flat, and locally of finite
+presentation, see
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-composition-representable-transformations-property}.
+Hence
+$(\Sch/U')_{fppf} \times_{[U/R]} (\Sch/U')_{fppf}$
+which is represented by the algebraic space
+$$
+\mathit{Isom}_{[U/R]}(\text{pr}_0^*\xi', \text{pr}_1^*\xi') =
+(U' \times_S U')
+\times_{(g \circ \text{pr}_0, g \circ \text{pr}_1), U \times_S U} R = R'
+$$
+(see
+Groupoids in Spaces, Lemma
+\ref{spaces-groupoids-lemma-quotient-stack-morphisms}
+for the first equality; the second is the definition of restriction)
+is flat and locally of finite presentation over $U'$ via both $s'$ and $t'$
+(by base change, see
+Algebraic Stacks, Lemma
+\ref{algebraic-lemma-base-change-representable-transformations-property}).
+By this description of $R'$ and by
+Algebraic Stacks, Lemma \ref{algebraic-lemma-map-space-into-stack}
+we obtain a canonical fully faithful $1$-morphism $[U'/R'] \to [U/R]$.
+This $1$-morphism is essentially surjective because $f'$ is flat,
+locally of finite presentation, and surjective (see
+Stacks, Lemma \ref{stacks-lemma-characterize-essentially-surjective-when-ff});
+another way to prove this is to use
+Algebraic Stacks, Remark
+\ref{algebraic-remark-flat-fp-presentation}.
+Finally, we can use
+Groupoids in Spaces, Lemma
+\ref{spaces-groupoids-lemma-quotient-stack-restrict-equivalence}
+to conclude that the composition
+$U' \times_{g, U, t} R \to R \xrightarrow{s} U$ is a surjection of sheaves.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-group-quotient-algebraic}
+Let $S$ be a scheme and let $B$ be an algebraic space over $S$.
+Let $G$ be a group algebraic space over $B$.
+Let $X$ be an algebraic space over $B$ and let $a : G \times_B X \to X$
+be an action of $G$ on $X$ over $B$.
+The quotient stack $[X/G]$ is an algebraic stack if and only if
+there exists a morphism of algebraic spaces $\varphi : X' \to X$ such that
+\begin{enumerate}
+\item $G \times_B X' \to X$, $(g, x') \mapsto a(g, \varphi(x'))$ is a
+surjection of sheaves, and
+\item the two projections $X'' \to X'$ of the algebraic space $X''$
+given by the rule
+$$
+T \longmapsto \{(x'_1, g, x'_2) \in (X' \times_B G \times_B X')(T)
+\mid \varphi(x'_1) = a(g, \varphi(x'_2))\}
+$$
+are flat and locally of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma is a special case of
+Lemma \ref{lemma-quotient-algebraic}.
+Namely, the quotient stack $[X/G]$ is by
+Groupoids in Spaces, Definition \ref{spaces-groupoids-definition-quotient-stack}
+equal to the quotient stack $[X/G \times_B X]$ of the groupoid in
+algebraic spaces $(X, G \times_B X, s, t, c)$ associated to
+the group action in
+Groupoids in Spaces, Lemma \ref{spaces-groupoids-lemma-groupoid-from-action}.
+There is one small observation that is needed to get condition (1).
+Namely, the morphism $s : G \times_B X \to X$ is the second projection
+and the morphism $t : G \times_B X \to X$ is the action morphism $a$.
+Hence the morphism $h : U' \times_{g, U, t} R \to R \xrightarrow{s} U$ from
+Lemma \ref{lemma-quotient-algebraic}
+corresponds to the morphism
+$$
+X' \times_{\varphi, X, a} (G \times_B X) \xrightarrow{\text{pr}_1} X
+$$
+in the current setting. However, because of the symmetry given by
+the inverse of $G$ this morphism is isomorphic to the morphism
+$$
+(G \times_B X) \times_{\text{pr}_1, X, \varphi} X' \xrightarrow{a} X
+$$
+of the statement of the lemma. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-BG-algebraic}
+\begin{slogan}
+Gerbes are algebraic if and only if the associated groups are flat
+and locally of finite presentation
+\end{slogan}
+Let $S$ be a scheme and let $B$ be an algebraic space over $S$.
+Let $G$ be a group algebraic space over $B$.
+Endow $B$ with the trivial action of $G$.
+Then the quotient stack $[B/G]$ is an algebraic stack
+if and only if $G$ is flat and locally of finite presentation over $B$.
+\end{lemma}
+
+\begin{proof}
+If $G$ is flat and locally of finite presentation over $B$, then
+$[B/G]$ is an algebraic stack by
+Theorem \ref{theorem-flat-groupoid-gives-algebraic-stack}.
+
+\medskip\noindent
+Conversely, assume that $[B/G]$ is an algebraic stack. By
+Lemma \ref{lemma-group-quotient-algebraic}
+and because the action is trivial, we see
+there exists an algebraic space $B'$ and a morphism
+$B' \to B$ such that (1) $B' \to B$ is a surjection
+of sheaves and (2) the projections
+$$
+B' \times_B G \times_B B' \to B'
+$$
+are flat and locally of finite presentation. Note that the base change
+$B' \times_B G \times_B B' \to G \times_B B'$ of $B' \to B$
is also a surjection of sheaves. Thus it follows from
+Descent on Spaces, Lemma \ref{spaces-descent-lemma-curiosity}
+that the projection $G \times_B B' \to B'$ is flat and locally
+of finite presentation. By (1) we can find an fppf covering
+$\{B_i \to B\}$ such that $B_i \to B$ factors through $B' \to B$.
+Hence $G \times_B B_i \to B_i$ is flat and locally of finite presentation
+by base change. By
+Descent on Spaces, Lemmas
+\ref{spaces-descent-lemma-descending-property-flat} and
+\ref{spaces-descent-lemma-descending-property-locally-finite-presentation}
+we conclude that $G \to B$ is flat and locally of finite presentation.
+\end{proof}
+
+\noindent
+Later we will see that the quotient stack of a smooth $S$-space
+by a group algebraic space $G$ is smooth, even when $G$ is not smooth
+(Morphisms of Stacks, Lemma
+\ref{stacks-morphisms-lemma-smooth-quotient-stack}).
+
+
+
+
+\section{Algebraic stacks in the \'etale topology}
+\label{section-stacks-etale}
+
+\noindent
+Let $S$ be a scheme. Instead of working with stacks in groupoids over
+the big fppf site $(\Sch/S)_{fppf}$ we could work with stacks in groupoids
+over the big \'etale site $(\Sch/S)_\etale$. All of the material in
+Algebraic Stacks, Sections
+\ref{algebraic-section-representable},
+\ref{algebraic-section-2-yoneda},
+\ref{algebraic-section-representable-morphism},
+\ref{algebraic-section-split},
+\ref{algebraic-section-representable-by-algebraic-spaces},
+\ref{algebraic-section-morphisms-representable-by-algebraic-spaces},
+\ref{algebraic-section-representable-properties}, and
+\ref{algebraic-section-stacks}
+makes sense for categories fibred in groupoids over $(\Sch/S)_\etale$.
+Thus we get a second notion of an algebraic stack by working in the
+\'etale topology. This notion is (a priori) weaker than the notion introduced
+in Algebraic Stacks, Definition \ref{algebraic-definition-algebraic-stack}
+since a stack in the fppf topology is certainly a stack in the \'etale
+topology. However, the notions are equivalent as is shown by the following
+lemma.
+
+\begin{lemma}
+\label{lemma-stacks-etale}
+Denote the common underlying category of $\Sch_{fppf}$
+and $\Sch_\etale$ by $\Sch_\alpha$ (see
+Sheaves on Stacks, Section \ref{stacks-sheaves-section-sheaves} and
+Topologies, Remark \ref{topologies-remark-choice-sites}). Let $S$ be an object
+of $\Sch_\alpha$. Let
+$$
+p : \mathcal{X} \to \Sch_\alpha/S
+$$
+be a category fibred in groupoids with the following properties:
+\begin{enumerate}
+\item $\mathcal{X}$ is a stack in groupoids over $(\Sch/S)_\etale$,
+\item the diagonal $\Delta : \mathcal{X} \to \mathcal{X} \times \mathcal{X}$
+is representable by algebraic spaces\footnote{Here we can either mean
+sheaves in the \'etale topology whose diagonal is representable and which
+have an \'etale surjective covering by a scheme or algebraic spaces as
+defined in
+Algebraic Spaces, Definition \ref{spaces-definition-algebraic-space}.
+Namely, by Bootstrap, Lemma \ref{bootstrap-lemma-spaces-etale}
+there is no difference.}, and
+\item there exists $U \in \Ob(\Sch_\alpha/S)$
+and a $1$-morphism $(\Sch/U)_\etale \to \mathcal{X}$
+which is surjective and smooth.
+\end{enumerate}
+Then $\mathcal{X}$ is an algebraic stack in the sense of
+Algebraic Stacks, Definition \ref{algebraic-definition-algebraic-stack}.
+\end{lemma}
+
+\begin{proof}
+Note that properties (2) and (3) of the lemma and the corresponding
+properties (2) and (3) of
+Algebraic Stacks, Definition \ref{algebraic-definition-algebraic-stack}
+are independent of the topology. This is true because these properties
+involve only the notion of a $2$-fibre product of categories fibred in
+groupoids, $1$- and $2$-morphisms of categories fibred in groupoids, the
+notion of a $1$-morphism of categories fibred in groupoids representable
+by algebraic spaces, and what it means for such a $1$-morphism to be
+surjective and smooth.
+Thus all we have to prove is that an \'etale stack in groupoids
+$\mathcal{X}$ with properties (2) and (3) is also an fppf stack in groupoids.
+
+\medskip\noindent
+Using (2) let $R$ be an algebraic space representing
+$$
+(\Sch_\alpha/U) \times_\mathcal{X} (\Sch_\alpha/U)
+$$
+By (3) the projections $s, t : R \to U$ are smooth. Exactly as in the proof of
+Algebraic Stacks, Lemma \ref{algebraic-lemma-map-space-into-stack}
+there exists a groupoid in spaces $(U, R, s, t, c)$ and a canonical
+fully faithful $1$-morphism $[U/R]_\etale \to \mathcal{X}$
+where $[U/R]_\etale$ is the \'etale stackification of the presheaf
+in groupoids
+$$
+T \longmapsto (U(T), R(T), s(T), t(T), c(T))
+$$
+Claim: If $V \to T$ is a surjective smooth morphism from an algebraic space
+$V$ to a scheme $T$, then there exists an \'etale covering $\{T_i \to T\}$
+refining the covering $\{V \to T\}$. This follows from
+More on Morphisms, Lemma \ref{more-morphisms-lemma-etale-dominates-smooth}
+or the more general
+Sheaves on Stacks, Lemma
+\ref{stacks-sheaves-lemma-surjective-flat-locally-finite-presentation}.
+Using the claim and arguing exactly as in
+Algebraic Stacks, Lemma \ref{algebraic-lemma-stack-presentation}
+it follows that $[U/R]_\etale \to \mathcal{X}$ is an
+equivalence.
+
+\medskip\noindent
+Next, let $[U/R]$ denote the quotient stack in the fppf topology
+which is an algebraic stack by
+Algebraic Stacks, Theorem
+\ref{algebraic-theorem-smooth-groupoid-gives-algebraic-stack}.
+Thus we have $1$-morphisms
+$$
+U \to [U/R]_\etale \to [U/R].
+$$
+Both $U \to [U/R]_\etale \cong \mathcal{X}$ and
+$U \to [U/R]$ are surjective and smooth (the first by assumption
+and the second by the theorem) and in both cases the
+fibre products $U \times_\mathcal{X} U$ and $U \times_{[U/R]} U$
+are representable by $R$. Hence the $1$-morphism
+$[U/R]_\etale \to [U/R]$ is fully faithful (since morphisms
+in the quotient stacks are given by morphisms into $R$, see
+Groupoids in Spaces, Section
+\ref{spaces-groupoids-section-explicit-quotient-stacks}).
+
+\medskip\noindent
+Finally, for any scheme $T$ and morphism $t : T \to [U/R]$ the fibre product
+$V = T \times_{[U/R]} U$ is an algebraic space surjective and smooth over $T$.
+By the claim above there exists an \'etale covering $\{T_i \to T\}_{i \in I}$
+and morphisms $T_i \to V$ over $T$. This proves that the object
+$t$ of $[U/R]$ over $T$ comes \'etale locally from $U$. We conclude that
+$[U/R]_\etale \to [U/R]$ is an equivalence of stacks in
+groupoids over $(\Sch/S)_\etale$ by
+Stacks, Lemma \ref{stacks-lemma-characterize-essentially-surjective-when-ff}.
+This concludes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/crystalline.tex b/books/stacks/crystalline.tex
new file mode 100644
index 0000000000000000000000000000000000000000..51170299e5b8c6b71285c2845a70b4cd39984bbf
--- /dev/null
+++ b/books/stacks/crystalline.tex
@@ -0,0 +1,5645 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Crystalline Cohomology}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+This chapter is based on a lecture series given by Johan de Jong
+held in 2012 at Columbia University.
+The goal of this chapter is to give a quick introduction to
+crystalline cohomology. A reference is the book \cite{Berthelot}.
+
+\medskip\noindent
+We have moved the more elementary purely algebraic discussion of divided
+power rings to a preliminary chapter as it is also useful
+in discussing Tate resolutions in commutative algebra.
+Please see Divided Power Algebra, Section \ref{dpa-section-introduction}.
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Divided power envelope}
+\label{section-divided-power-envelope}
+
+\noindent
+The construction of the following lemma will be dubbed the
+divided power envelope. It will play an important role later.
+
+\begin{lemma}
+\label{lemma-divided-power-envelope}
+Let $(A, I, \gamma)$ be a divided power ring.
+Let $A \to B$ be a ring map. Let $J \subset B$ be an ideal
+with $IB \subset J$. There exists a homomorphism of
+divided power rings
+$$
+(A, I, \gamma) \longrightarrow (D, \bar J, \bar \gamma)
+$$
+such that
+$$
+\Hom_{(A, I, \gamma)}((D, \bar J, \bar \gamma), (C, K, \delta)) =
+\Hom_{(A, I)}((B, J), (C, K))
+$$
+functorially in the divided power algebra $(C, K, \delta)$ over
+$(A, I, \gamma)$. Here the LHS is morphisms of divided
+power rings over $(A, I, \gamma)$ and the RHS is morphisms of
+(ring, ideal) pairs over $(A, I)$.
+\end{lemma}
+
+\begin{proof}
+Denote $\mathcal{C}$ the category of divided power rings
+$(C, K, \delta)$. Consider the functor
+$F : \mathcal{C} \longrightarrow \textit{Sets}$ defined by
+$$
+F(C, K, \delta) =
+\left\{
+(\varphi, \psi)
+\middle|
+\begin{matrix}
+\varphi : (A, I, \gamma) \to (C, K, \delta)
+\text{ homomorphism of divided power rings} \\
+\psi : (B, J) \to (C, K)\text{ an }
+A\text{-algebra homomorphism with }\psi(J) \subset K
+\end{matrix}
+\right\}
+$$
+We will show that
+Divided Power Algebra, Lemma \ref{dpa-lemma-a-version-of-brown}
+applies to this functor which will
+prove the lemma. Suppose that $(\varphi, \psi) \in F(C, K, \delta)$.
+Let $C' \subset C$ be the subring generated by $\varphi(A)$,
+$\psi(B)$, and $\delta_n(\psi(f))$ for all $f \in J$.
+Let $K' \subset K \cap C'$ be the ideal of $C'$ generated by
+$\varphi(I)$ and $\delta_n(\psi(f))$ for $f \in J$.
+Then $(C', K', \delta|_{K'})$ is a divided power ring and
+$C'$ has cardinality bounded by the cardinal
+$\kappa = |A| \times |B|^{\aleph_0}$.
+Moreover, $\varphi$ factors as $A \to C' \to C$ and $\psi$ factors
+as $B \to C' \to C$. This proves assumption (1) of
+Divided Power Algebra, Lemma \ref{dpa-lemma-a-version-of-brown}
+holds. Assumption (2) is clear
+as limits in the category of divided power rings commute with the
+forgetful functor $(C, K, \delta) \mapsto (C, K)$, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-limits} and its proof.
+\end{proof}
+
+\begin{definition}
+\label{definition-divided-power-envelope}
+Let $(A, I, \gamma)$ be a divided power ring.
+Let $A \to B$ be a ring map. Let $J \subset B$ be an ideal
+with $IB \subset J$. The divided power algebra $(D, \bar J, \bar\gamma)$
+constructed in Lemma \ref{lemma-divided-power-envelope}
+is called the {\it divided power envelope of $J$ in $B$
+relative to $(A, I, \gamma)$} and is denoted $D_B(J)$ or $D_{B, \gamma}(J)$.
+\end{definition}
+
+\noindent
+Let $(A, I, \gamma) \to (C, K, \delta)$ be a homomorphism of divided
+power rings. The universal property of
+$D_{B, \gamma}(J) = (D, \bar J, \bar \gamma)$ is
+$$
+\begin{matrix}
+\text{ring maps }B \to C \\
+\text{ which map }J\text{ into }K
+\end{matrix}
+\longleftrightarrow
+\begin{matrix}
+\text{divided power homomorphisms} \\
+(D, \bar J, \bar \gamma) \to (C, K, \delta)
+\end{matrix}
+$$
+and the correspondence is given by precomposing with the map $B \to D$
+which corresponds to $\text{id}_D$. Here are some properties of
+$(D, \bar J, \bar \gamma)$ which follow directly from the universal
+property. There are $A$-algebra maps
+\begin{equation}
+\label{equation-divided-power-envelope}
+B \longrightarrow D \longrightarrow B/J
+\end{equation}
+The first arrow maps $J$ into $\bar J$ and $\bar J$ is the kernel
+of the second arrow. The elements $\bar\gamma_n(x)$ where $n > 0$
+and $x$ is an element in the image of $J \to D$ generate $\bar J$
+as an ideal in $D$ and generate $D$ as a $B$-algebra.
+
+\begin{lemma}
+\label{lemma-divided-power-envelop-quotient}
+Let $(A, I, \gamma)$ be a divided power ring.
+Let $\varphi : B' \to B$ be a surjection of $A$-algebras with kernel $K$.
+Let $IB \subset J \subset B$ be an ideal. Let $J' \subset B'$
+be the inverse image of $J$. Write
+$D_{B', \gamma}(J') = (D', \bar J', \bar\gamma)$.
+Then $D_{B, \gamma}(J) = (D'/K', \bar J'/K', \bar\gamma)$
+where $K'$ is the ideal generated by the elements $\bar\gamma_n(k)$
+for $n \geq 1$ and $k \in K$.
+\end{lemma}
+
+\begin{proof}
+Write $D_{B, \gamma}(J) = (D, \bar J, \bar \gamma)$.
+The universal property of $D'$ gives us a homomorphism $D' \to D$
+of divided power algebras. As $B' \to B$ and $J' \to J$ are surjective, we
+see that $D' \to D$ is surjective (see remarks above). It is clear that
+$\bar\gamma_n(k)$ is in the kernel for $n \geq 1$ and $k \in K$, i.e.,
+we obtain a homomorphism $D'/K' \to D$. Conversely, there exists a divided
+power structure on $\bar J'/K' \subset D'/K'$, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-kernel}.
+Hence the universal property of $D$ gives an inverse $D \to D'/K'$ and we win.
+\end{proof}
+
+\noindent
+In the situation of Definition \ref{definition-divided-power-envelope}
+we can choose a surjection $P \to B$ where $P$ is a polynomial
+algebra over $A$ and let $J' \subset P$ be the inverse image of $J$.
+The previous lemma describes $D_{B, \gamma}(J)$ in terms of
+$D_{P, \gamma}(J')$. Note that $\gamma$ extends to a divided power
+structure $\gamma'$ on $IP$ by
+Divided Power Algebra, Lemma \ref{dpa-lemma-gamma-extends}. Hence
+$D_{P, \gamma}(J') = D_{P, \gamma'}(J')$ is an instance of the special
+case of divided power envelopes described in the following lemma.
+
+\begin{lemma}
+\label{lemma-describe-divided-power-envelope}
+Let $(B, I, \gamma)$ be a divided power algebra. Let $I \subset J \subset B$
+be an ideal. Let $(D, \bar J, \bar \gamma)$ be the divided power envelope
+of $J$ relative to $\gamma$. Choose elements $f_t \in J$, $t \in T$ such
+that $J = I + (f_t)$. Then there exists a surjection
+$$
+\Psi : B\langle x_t \rangle \longrightarrow D
+$$
+of divided power rings mapping $x_t$ to the image of $f_t$ in $D$.
+The kernel of $\Psi$ is generated by the elements $x_t - f_t$ and
+all
+$$
+\delta_n\left(\sum r_t x_t - r_0\right)
+$$
+whenever $\sum r_t f_t = r_0$ in $B$ for some $r_t \in B$, $r_0 \in I$.
+\end{lemma}
+
+\begin{proof}
+In the statement of the lemma we think of $B\langle x_t \rangle$
+as a divided power ring with ideal
+$J' = IB\langle x_t \rangle + B\langle x_t \rangle_{+}$, see
+Divided Power Algebra, Remark \ref{dpa-remark-divided-power-polynomial-algebra}.
+The existence of $\Psi$ follows from the universal property of
+divided power polynomial rings. Surjectivity of $\Psi$ follows from
+the fact that its image is a divided power subring of $D$, hence equal to $D$
+by the universal property of $D$. It is clear that
+$x_t - f_t$ is in the kernel. Set
+$$
+\mathcal{R} = \{(r_0, r_t) \in I \oplus \bigoplus\nolimits_{t \in T} B
+\mid \sum r_t f_t = r_0 \text{ in }B\}
+$$
+If $(r_0, r_t) \in \mathcal{R}$ then it is clear that
+$\sum r_t x_t - r_0$ is in the kernel.
+As $\Psi$ is a homomorphism of divided power rings
+and $\sum r_tx_t - r_0 \in J'$
+it follows that $\delta_n(\sum r_t x_t - r_0)$ is in the kernel as well.
+Let $K \subset B\langle x_t \rangle$ be the ideal generated by
+$x_t - f_t$ and the elements $\delta_n(\sum r_t x_t - r_0)$ for
+$(r_0, r_t) \in \mathcal{R}$.
+To show that $K = \Ker(\Psi)$ it suffices to show that
+$\delta$ extends to $B\langle x_t \rangle/K$. Namely, if so the universal
+property of $D$ gives a map $D \to B\langle x_t \rangle/K$
+inverse to $\Psi$. Hence we have to show that $K \cap J'$ is
+preserved by $\delta_n$, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-kernel}.
+Let $K' \subset B\langle x_t \rangle$ be the ideal
+generated by the elements
+\begin{enumerate}
+\item $\delta_m(\sum r_t x_t - r_0)$ where $m > 0$ and
+$(r_0, r_t) \in \mathcal{R}$,
+\item $x_{t'}^{[m]}(x_t - f_t)$ where $m > 0$ and $t', t \in T$.
+\end{enumerate}
+We claim that $K' = K \cap J'$. The claim proves that $K \cap J'$
+is preserved by $\delta_n$, $n > 0$ by the criterion of
+Divided Power Algebra, Lemma \ref{dpa-lemma-kernel} (2)(c)
+and a computation of $\delta_n$
+of the elements listed which we leave to the reader.
+To prove the claim note that $K' \subset K \cap J'$.
+Conversely, if $h \in K \cap J'$ then, modulo $K'$ we can write
+$$
+h = \sum r_t (x_t - f_t)
+$$
+for some $r_t \in B$. As $h \in K \cap J' \subset J'$
+we see that $r_0 = \sum r_t f_t \in I$. Hence $(r_0, r_t) \in \mathcal{R}$
+and we see that
+$$
+h = \sum r_t x_t - r_0
+$$
+is in $K'$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-divided-power-envelope-add-variables}
+Let $(A, I, \gamma)$ be a divided power ring.
+Let $B$ be an $A$-algebra and $IB \subset J \subset B$ an ideal.
+Let $x_i$ be a set of variables. Then
+$$
+D_{B[x_i], \gamma}(JB[x_i] + (x_i)) = D_{B, \gamma}(J) \langle x_i \rangle
+$$
+\end{lemma}
+
+\begin{proof}
+One possible proof is to deduce this from
+Lemma \ref{lemma-describe-divided-power-envelope}
+as any relation between $x_i$ in $B[x_i]$ is trivial.
+On the other hand, the lemma follows from the universal property
+of the divided power polynomial algebra and the universal property of
+divided power envelopes.
+\end{proof}
+
+\noindent
+Conditions (1) and (2) of the following lemma hold if $B \to B'$ is flat
+at all primes of $V(IB') \subset \Spec(B')$, and together they are very
+closely related to that condition, see
+Algebra, Lemma \ref{algebra-lemma-what-does-it-mean}.
+In particular, the lemma shows that taking the divided power
+envelope commutes with localization.
+
+\begin{lemma}
+\label{lemma-flat-base-change-divided-power-envelope}
+Let $(A, I, \gamma)$ be a divided power ring.
+Let $B \to B'$ be a homomorphism of $A$-algebras.
+Assume that
+\begin{enumerate}
+\item $B/IB \to B'/IB'$ is flat, and
+\item $\text{Tor}_1^B(B', B/IB) = 0$.
+\end{enumerate}
+Then for any ideal $IB \subset J \subset B$ the canonical map
+$$
+D_B(J) \otimes_B B' \longrightarrow D_{B'}(JB')
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Set $D = D_B(J)$ and denote $\bar J \subset D$ its divided power ideal
+with divided power structure $\bar\gamma$. The universal property of
+$D$ produces a $B$-algebra map $D \to D_{B'}(JB')$, whence a map as in
+the lemma. It suffices to show that
+the divided powers $\bar\gamma$ extend to $D \otimes_B B'$ since then
+the universal property of $D_{B'}(JB')$ will produce a map
+$D_{B'}(JB') \to D \otimes_B B'$ inverse to the one in the lemma.
+
+\medskip\noindent
+Choose a surjection $P \to B'$ where $P$ is a polynomial algebra over $B$.
+In particular $B \to P$ is flat, hence $D \to D \otimes_B P$ is flat by
+Algebra, Lemma \ref{algebra-lemma-flat-base-change}.
+Then $\bar\gamma$ extends to $D \otimes_B P$ by
+Divided Power Algebra, Lemma \ref{dpa-lemma-gamma-extends}; we
+will denote this extension
+$\bar\gamma$ also. Set $\mathfrak a = \Ker(P \to B')$ so that
+we have the short exact sequence
+$$
+0 \to \mathfrak a \to P \to B' \to 0
+$$
+Thus $\text{Tor}_1^B(B', B/IB) = 0$ implies that
+$\mathfrak a \cap IP = I\mathfrak a$.
+Now we have the following commutative diagram
+$$
+\xymatrix{
+B/J \otimes_B \mathfrak a \ar[r]_\beta &
+B/J \otimes_B P \ar[r] &
+B/J \otimes_B B' \\
+D \otimes_B \mathfrak a \ar[r]^\alpha \ar[u] &
+D \otimes_B P \ar[r] \ar[u] &
+D \otimes_B B' \ar[u] \\
+\bar J \otimes_B \mathfrak a \ar[r] \ar[u] &
+\bar J \otimes_B P \ar[r] \ar[u] &
+\bar J \otimes_B B' \ar[u]
+}
+$$
+This diagram is exact even with $0$'s added at the top and the right.
+We have to show the divided powers on the ideal
+$\bar J \otimes_B P$ preserve the ideal
+$\Im(\alpha) \cap \bar J \otimes_B P$, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-kernel}.
+Consider the exact sequence
+$$
+0 \to \mathfrak a/I\mathfrak a \to P/IP \to B'/IB' \to 0
+$$
+(which uses that $\mathfrak a \cap IP = I\mathfrak a$ as seen above).
+As $B'/IB'$ is flat over $B/IB$ this sequence remains exact after
+applying $B/J \otimes_{B/IB} -$, see
+Algebra, Lemma \ref{algebra-lemma-flat-tor-zero}. Hence
+$$
+\Ker(B/J \otimes_{B/IB} \mathfrak a/I\mathfrak a \to
+B/J \otimes_{B/IB} P/IP) =
+\Ker(\mathfrak a/J\mathfrak a \to P/JP)
+$$
+is zero. Thus $\beta$ is injective. It follows that
+$\Im(\alpha) \cap \bar J \otimes_B P$ is the
+image of $\bar J \otimes \mathfrak a$. Now if
+$f \in \bar J$ and $a \in \mathfrak a$, then
+$\bar\gamma_n(f \otimes a) = \bar\gamma_n(f) \otimes a^n$
+hence the result is clear.
+\end{proof}
+
+\noindent
+The following lemma is a special case of
+\cite[Proposition 2.1.7]{dJ-crystalline} which in turn is a
+generalization of \cite[Proposition 2.8.2]{Berthelot}.
+
+\begin{lemma}
+\label{lemma-flat-extension-divided-power-envelope}
+Let $(B, I, \gamma) \to (B', I', \gamma')$ be a homomorphism of
+divided power rings. Let $I \subset J \subset B$ and
+$I' \subset J' \subset B'$ be ideals. Assume
+\begin{enumerate}
+\item $B/I \to B'/I'$ is flat, and
+\item $J' = JB' + I'$.
+\end{enumerate}
+Then the canonical map
+$$
+D_{B, \gamma}(J) \otimes_B B' \longrightarrow D_{B', \gamma'}(J')
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Set $D = D_{B, \gamma}(J)$. Choose elements $f_t \in J$ which generate $J/I$.
+Set $\mathcal{R} = \{(r_0, r_t) \in I \oplus \bigoplus\nolimits_{t \in T} B
+\mid \sum r_t f_t = r_0 \text{ in }B\}$ as in the proof of
+Lemma \ref{lemma-describe-divided-power-envelope}. This lemma shows that
+$$
+D = B\langle x_t \rangle/ K
+$$
+where $K$ is generated by the elements $x_t - f_t$ and
+$\delta_n(\sum r_t x_t - r_0)$ for $(r_0, r_t) \in \mathcal{R}$.
+Thus we see that
+\begin{equation}
+\label{equation-base-change}
+D \otimes_B B' = B'\langle x_t \rangle/K'
+\end{equation}
+where $K'$ is generated by the images in $B'\langle x_t \rangle$
+of the generators of $K$ listed above. Let $f'_t \in B'$ be the image
+of $f_t$. By assumption (2) we see that the elements $f'_t \in J'$
+generate $J'/I'$ and we see that $x_t - f'_t \in K'$. Set
+$$
+\mathcal{R}' =
+\{(r'_0, r'_t) \in I' \oplus \bigoplus\nolimits_{t \in T} B'
+\mid \sum r'_t f'_t = r'_0 \text{ in }B'\}
+$$
+To finish the proof we have to show that
+$\delta'_n(\sum r'_t x_t - r'_0) \in K'$ for
+$(r'_0, r'_t) \in \mathcal{R}'$, because then the presentation
+(\ref{equation-base-change}) of $D \otimes_B B'$ is identical
+to the presentation of $D_{B', \gamma'}(J')$ obtained in
+Lemma \ref{lemma-describe-divided-power-envelope} from the generators $f'_t$.
+Suppose that $(r'_0, r'_t) \in \mathcal{R}'$. Then
+$\sum r'_t f'_t = 0$ in $B'/I'$. As $B/I \to B'/I'$ is flat by
+assumption (1) we can apply the equational criterion of flatness
+(Algebra, Lemma \ref{algebra-lemma-flat-eq}) to see
+that there exist an integer $m > 0$ and elements
+$r_{jt} \in B$ and $c_j \in B'$, $j = 1, \ldots, m$, such
+that
+$$
+r_{j0} = \sum\nolimits_t r_{jt} f_t \in I \text{ for } j = 1, \ldots, m
+$$
+and
+$$
+i'_t = r'_t - \sum\nolimits_j c_j r_{jt} \in I' \text{ for all }t
+$$
+Note that this also implies that
+$r'_0 = \sum_t i'_t f_t + \sum_j c_j r_{j0}$.
+Then we have
+\begin{align*}
+\delta'_n(\sum\nolimits_t r'_t x_t - r'_0)
+& =
+\delta'_n(
+\sum\nolimits_t i'_t x_t +
+\sum\nolimits_{t, j} c_j r_{jt} x_t -
+\sum\nolimits_t i'_t f_t -
+\sum\nolimits_j c_j r_{j0}) \\
+& =
+\delta'_n(
+\sum\nolimits_t i'_t(x_t - f_t) +
+\sum\nolimits_j c_j (\sum\nolimits_t r_{jt} x_t - r_{j0}))
+\end{align*}
+Since $\delta_n(a + b) = \sum_{m = 0, \ldots, n} \delta_m(a) \delta_{n - m}(b)$
+and since $\delta_m(\sum i'_t(x_t - f_t))$ is in the ideal
+generated by $x_t - f_t \in K'$ for $m > 0$, it suffices to prove that
+$\delta_n(\sum c_j (\sum r_{jt} x_t - r_{j0}))$ is in $K'$.
+For this we use
+$$
+\delta_n(\sum\nolimits_j c_j (\sum\nolimits_t r_{jt} x_t - r_{j0}))
+=
+\sum c_1^{n_1} \ldots c_m^{n_m}
+\delta_{n_1}(\sum r_{1t} x_t - r_{10}) \ldots
+\delta_{n_m}(\sum r_{mt} x_t - r_{m0})
+$$
+where the sum is over $n_1 + \ldots + n_m = n$. This proves what we want.
+\end{proof}
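The final product formula for $\delta_n$ of the sum $\sum c_j(\sum r_{jt}x_t - r_{j0})$ can be spot-checked in the model case of the standard divided powers $\gamma_n(y) = y^n/n!$ on a $\mathbf{Q}$-algebra. This is an illustrative sketch only, with arbitrary rational test values standing in for the $c_j$ and for the elements $\sum_t r_{jt} x_t - r_{j0}$:

```python
from fractions import Fraction
from math import factorial

def gamma(n, y):
    # standard divided powers on a Q-algebra: gamma_n(y) = y^n / n!
    return Fraction(y)**n / factorial(n)

# arbitrary test data: c[j] plays the role of c_j, y[j] the role of
# (sum_t r_{jt} x_t - r_{j0})
c = [Fraction(2), Fraction(-3, 5)]
y = [Fraction(5, 7), Fraction(1, 4)]

# delta_n(sum c_j y_j) = sum over n_1 + n_2 = n of
#   c_1^{n_1} c_2^{n_2} delta_{n_1}(y_1) delta_{n_2}(y_2)
for n in range(7):
    lhs = gamma(n, c[0] * y[0] + c[1] * y[1])
    rhs = sum(c[0]**n1 * c[1]**(n - n1) * gamma(n1, y[0]) * gamma(n - n1, y[1])
              for n1 in range(n + 1))
    assert lhs == rhs
```

For two summands this is exactly the binomial theorem divided by $n!$; the formula in the proof is the $m$-fold multinomial version.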
+
+
+
+
+
+\section{Some explicit divided power thickenings}
+\label{section-explicit-thickenings}
+
+\noindent
+The constructions in this section will help us to define the connection
+on a crystal in modules on the crystalline site.
+
+\begin{lemma}
+\label{lemma-divided-power-first-order-thickening}
+Let $(A, I, \gamma)$ be a divided power ring. Let $M$ be an $A$-module.
+Let $B = A \oplus M$ as an $A$-algebra where $M$ is an ideal of square zero
+and set $J = I \oplus M$. Set
+$$
+\delta_n(x + z) = \gamma_n(x) + \gamma_{n - 1}(x)z
+$$
+for $x \in I$ and $z \in M$.
+Then $\delta$ is a divided power structure and
+$A \to B$ is a homomorphism of divided power rings from
+$(A, I, \gamma)$ to $(B, J, \delta)$.
+\end{lemma}
+
+\begin{proof}
+We have to check conditions (1) -- (5) of
+Divided Power Algebra, Definition \ref{dpa-definition-divided-powers}.
+We will prove this directly for this case, but please see the proof of
+the next lemma for a method which avoids calculations.
+Conditions (1) and (3) are clear. Condition (2) follows from
+\begin{align*}
+\delta_n(x + z)\delta_m(x + z)
+& =
+(\gamma_n(x) + \gamma_{n - 1}(x)z)(\gamma_m(x) + \gamma_{m - 1}(x)z) \\
+& = \gamma_n(x)\gamma_m(x) + \gamma_n(x)\gamma_{m - 1}(x)z +
+\gamma_{n - 1}(x)\gamma_m(x)z \\
+& =
+\frac{(n + m)!}{n!m!} \gamma_{n + m}(x) +
+\left(\frac{(n + m - 1)!}{n!(m - 1)!} +
+\frac{(n + m - 1)!}{(n - 1)!m!}\right)
+\gamma_{n + m - 1}(x) z \\
+& =
+\frac{(n + m)!}{n!m!}\delta_{n + m}(x + z)
+\end{align*}
+Condition (5) follows from
+\begin{align*}
+\delta_n(\delta_m(x + z))
+& =
+\delta_n(\gamma_m(x) + \gamma_{m - 1}(x)z) \\
+& =
+\gamma_n(\gamma_m(x)) + \gamma_{n - 1}(\gamma_m(x))\gamma_{m - 1}(x)z \\
+& =
+\frac{(nm)!}{n! (m!)^n} \gamma_{nm}(x) +
+\frac{((n - 1)m)!}{(n - 1)! (m!)^{n - 1}}
+\gamma_{(n - 1)m}(x) \gamma_{m - 1}(x) z \\
+& = \frac{(nm)!}{n! (m!)^n}(\gamma_{nm}(x) + \gamma_{nm - 1}(x) z)
+\end{align*}
+by elementary number theory. To prove (4) we have to see that
+$$
+\delta_n(x + x' + z + z')
+=
+\gamma_n(x + x') + \gamma_{n - 1}(x + x')(z + z')
+$$
+is equal to
+$$
+\sum\nolimits_{i = 0}^n
+(\gamma_i(x) + \gamma_{i - 1}(x)z)
+(\gamma_{n - i}(x') + \gamma_{n - i - 1}(x')z')
+$$
+This follows easily on collecting the coefficients of
+$1$, $z$, and $z'$ and using condition (4) for $\gamma$.
+\end{proof}
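Conditions (2) and (5) verified in the proof can be spot-checked numerically in the model case $\gamma_n(x) = x^n/n!$ over $\mathbf{Q}$, representing an element of $B = A \oplus M$ as a pair $(x, z)$ with $z^2 = 0$. This is a sketch with arbitrary rational test values, not a substitute for the proof:

```python
from fractions import Fraction
from math import comb, factorial

def gamma(n, x):
    # standard divided powers over Q: gamma_n(x) = x^n / n!  (0 for n < 0)
    return Fraction(x)**n / factorial(n) if n >= 0 else Fraction(0)

def mul(u, v):
    # multiplication in B = A + M with M of square zero:
    # (a + b z)(a' + b' z) = a a' + (a b' + a' b) z
    return (u[0] * v[0], u[0] * v[1] + u[1] * v[0])

def delta(n, u):
    # delta_n(x + z) = gamma_n(x) + gamma_{n-1}(x) z, as in the lemma
    x, z = u
    return (gamma(n, x), gamma(n - 1, x) * z)

u = (Fraction(3, 7), Fraction(5, 2))  # arbitrary x in I, z in M

# condition (2): delta_n(u) delta_m(u) = binom(n+m, n) delta_{n+m}(u)
for n in range(1, 6):
    for m in range(1, 6):
        assert mul(delta(n, u), delta(m, u)) == \
            tuple(comb(n + m, n) * c for c in delta(n + m, u))

# condition (5): delta_n(delta_m(u)) = (nm)!/(n! (m!)^n) delta_{nm}(u)
for n in range(1, 5):
    for m in range(1, 5):
        k = factorial(n * m) // (factorial(n) * factorial(m)**n)
        assert delta(n, delta(m, u)) == tuple(k * c for c in delta(n * m, u))
```

The integer coefficients appearing here are the ones computed in the displayed chains of equalities above.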
+
+\begin{lemma}
+\label{lemma-divided-power-second-order-thickening}
+Let $(A, I, \gamma)$ be a divided power ring. Let $M$, $N$ be $A$-modules.
+Let $q : M \times M \to N$ be an $A$-bilinear map.
+Let $B = A \oplus M \oplus N$ as an $A$-algebra with multiplication
+$$
+(x, z, w)\cdot (x', z', w') = (xx', xz' + x'z, xw' + x'w + q(z, z') + q(z', z))
+$$
+and set $J = I \oplus M \oplus N$. Set
+$$
+\delta_n(x, z, w) = (\gamma_n(x), \gamma_{n - 1}(x)z,
+\gamma_{n - 1}(x)w + \gamma_{n - 2}(x)q(z, z))
+$$
+for $(x, z, w) \in J$.
+Then $\delta$ is a divided power structure and
+$A \to B$ is a homomorphism of divided power rings from
+$(A, I, \gamma)$ to $(B, J, \delta)$.
+\end{lemma}
+
+\begin{proof}
+Suppose we want to prove that property (4) of
+Divided Power Algebra, Definition \ref{dpa-definition-divided-powers}
+is satisfied. Pick $(x, z, w)$ and $(x', z', w')$ in $J$.
+Pick a map
+$$
+A_0 = \mathbf{Z}\langle s, s'\rangle \longrightarrow A,\quad
+s \longmapsto x,
+s' \longmapsto x'
+$$
+which is possible by the universal property of divided power
+polynomial rings. Set $M_0 = A_0 \oplus A_0$ and
+$N_0 = A_0 \oplus A_0 \oplus M_0 \otimes_{A_0} M_0$.
+Let $q_0 : M_0 \times M_0 \to N_0$ be the obvious map.
+Define $M_0 \to M$ as the $A_0$-linear map which sends
+the basis vectors of $M_0$ to $z$ and $z'$. Define $N_0 \to N$
+as the $A_0$ linear map which sends the first two basis vectors
+of $N_0$ to $w$ and $w'$ and uses
+$M_0 \otimes_{A_0} M_0 \to M \otimes_A M \xrightarrow{q} N$
+on the last summand. Then we see that it suffices to prove the
+identity (4) for the situation $(A_0, M_0, N_0, q_0)$.
+Similarly for the other identities. This reduces us to the case
+of a $\mathbf{Z}$-torsion free ring $A$ and $A$-torsion free modules.
+In this case all we have to do is show that
+$$
+n! \delta_n(x, z, w) = (x, z, w)^n
+$$
+in the ring $B$, see Divided Power Algebra, Lemma \ref{dpa-lemma-silly}.
+To see this note that
+$$
+(x, z, w)^2 = (x^2, 2xz, 2xw + 2q(z, z))
+$$
+and by induction
+$$
+(x, z, w)^n = (x^n, nx^{n - 1}z, nx^{n - 1}w + n(n - 1)x^{n - 2}q(z, z))
+$$
+On the other hand,
+$$
+n! \delta_n(x, z, w) = (n!\gamma_n(x), n!\gamma_{n - 1}(x)z,
+n!\gamma_{n - 1}(x)w + n!\gamma_{n - 2}(x) q(z, z))
+$$
+which matches. This finishes the proof.
+\end{proof}
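The induction formula for $(x, z, w)^n$ and the comparison with $n!\,\delta_n(x, z, w)$ can be spot-checked over $\mathbf{Q}$ by taking $M = N = \mathbf{Q}$ and the bilinear map $q(z, z') = zz'$. These choices and the test values are arbitrary illustrations, not part of the proof:

```python
from fractions import Fraction
from math import factorial

def q(z, zp):
    # an arbitrary A-bilinear map M x M -> N chosen for the spot-check
    return z * zp

def gamma(n, x):
    # standard divided powers over Q (0 for n < 0)
    return Fraction(x)**n / factorial(n) if n >= 0 else Fraction(0)

def mul(u, v):
    # the multiplication on B = A + M + N from the lemma
    (x, z, w), (xp, zp, wp) = u, v
    return (x * xp, x * zp + xp * z, x * wp + xp * w + q(z, zp) + q(zp, z))

def delta(n, u):
    # delta_n(x, z, w) as defined in the lemma
    x, z, w = u
    return (gamma(n, x), gamma(n - 1, x) * z,
            gamma(n - 1, x) * w + gamma(n - 2, x) * q(z, z))

u = (Fraction(2, 3), Fraction(3), Fraction(5))  # arbitrary (x, z, w) in J
x, z, w = u

power = u
for n in range(2, 7):
    power = mul(power, u)
    # closed formula for (x, z, w)^n obtained by induction in the proof
    closed = (x**n, n * x**(n - 1) * z,
              n * x**(n - 1) * w + n * (n - 1) * x**(n - 2) * q(z, z))
    assert power == closed
    # ... and it agrees with n! delta_n(x, z, w)
    assert tuple(factorial(n) * c for c in delta(n, u)) == closed
```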
+
+
+
+
+\section{Compatibility}
+\label{section-compatibility}
+
+\noindent
+This section isn't required reading; it explains how our discussion
+fits with that of \cite{Berthelot}.
+Consider the following technical notion.
+
+\begin{definition}
+\label{definition-compatible}
+Let $(A, I, \gamma)$ and $(B, J, \delta)$ be divided power rings.
+Let $A \to B$ be a ring map. We say
+{\it $\delta$ is compatible with $\gamma$}
+if there exists a divided power structure $\bar\gamma$ on
+$J + IB$ such that
+$$
+(A, I, \gamma) \to (B, J + IB, \bar \gamma)\quad\text{and}\quad
+(B, J, \delta) \to (B, J + IB, \bar \gamma)
+$$
+are homomorphisms of divided power rings.
+\end{definition}
+
+\noindent
+Let $p$ be a prime number. Let $(A, I, \gamma)$ be a divided power ring.
+Let $A \to C$ be a ring map with $p$ nilpotent in $C$.
+Assume that $\gamma$ extends to $IC$ (see
+Divided Power Algebra, Lemma \ref{dpa-lemma-gamma-extends}).
+In this situation, the (big affine) crystalline site of
+$\Spec(C)$ over $\Spec(A)$
+as defined in \cite{Berthelot}
+is the opposite of the category of systems
+$$
+(B, J, \delta, A \to B, C \to B/J)
+$$
+where
+\begin{enumerate}
+\item $(B, J, \delta)$ is a divided power ring with $p$ nilpotent in $B$,
+\item $\delta$ is compatible with $\gamma$, and
+\item the diagram
+$$
+\xymatrix{
+B \ar[r] & B/J \\
+A \ar[u] \ar[r] & C \ar[u]
+}
+$$
+is commutative.
+\end{enumerate}
+The conditions
+``$\gamma$ extends to $IC$ and $\delta$ is compatible with $\gamma$''
+are used in \cite{Berthelot} to insure that
+the crystalline cohomology of $\Spec(C)$ is the same as the crystalline
+cohomology of $\Spec(C/IC)$. We will avoid this issue
+by working exclusively with $C$ such that $IC = 0$\footnote{Of course there
+will be a price to pay.}. In this case,
+for a system $(B, J, \delta, A \to B, C \to B/J)$ as above,
+the commutativity of the displayed diagram above implies $IB \subset J$ and
+compatibility is equivalent to the condition that
+$(A, I, \gamma) \to (B, J, \delta)$ is a homomorphism of divided
+power rings.
+
+
+
+
+\section{Affine crystalline site}
+\label{section-affine-site}
+
+\noindent
+In this section we discuss the algebraic variant of the crystalline site.
+Our basic situation in which we discuss this material will be as
+follows.
+
+\begin{situation}
+\label{situation-affine}
+Here $p$ is a prime number, $(A, I, \gamma)$ is a divided power
+ring such that $A$ is a $\mathbf{Z}_{(p)}$-algebra, and $A \to C$ is a
+ring map such that $IC = 0$ and such that $p$ is nilpotent in $C$.
+\end{situation}
+
+\noindent
+Usually the prime number $p$ will be contained in the
+divided power ideal $I$.
+
+\begin{definition}
+\label{definition-affine-thickening}
+In Situation \ref{situation-affine}.
+\begin{enumerate}
+\item A {\it divided power thickening} of $C$ over $(A, I, \gamma)$
+is a homomorphism of divided power algebras $(A, I, \gamma) \to (B, J, \delta)$
+such that $p$ is nilpotent in $B$ and a ring map $C \to B/J$ such that
+$$
+\xymatrix{
+B \ar[r] & B/J \\
+& C \ar[u] \\
+A \ar[uu] \ar[r] & A/I \ar[u]
+}
+$$
+is commutative.
+\item A {\it homomorphism of divided power thickenings}
+$$
+(B, J, \delta, C \to B/J) \longrightarrow (B', J', \delta', C \to B'/J')
+$$
+is a homomorphism $\varphi : B \to B'$ of divided power $A$-algebras such
+that $C \to B/J \to B'/J'$ is the given map $C \to B'/J'$.
+\item We denote $\text{CRIS}(C/A, I, \gamma)$ or simply $\text{CRIS}(C/A)$
+the category of divided power thickenings of $C$ over $(A, I, \gamma)$.
+\item We denote $\text{Cris}(C/A, I, \gamma)$ or simply $\text{Cris}(C/A)$
+the full subcategory consisting of $(B, J, \delta, C \to B/J)$ such that
+$C \to B/J$ is an isomorphism. We often denote such an object
+$(B \to C, \delta)$ with $J = \Ker(B \to C)$ being understood.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that for a divided power thickening $(B, J, \delta)$ as above
+the ideal $J$ is locally nilpotent, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-nil}.
+There is a canonical functor
+\begin{equation}
+\label{equation-forget-affine}
+\text{CRIS}(C/A) \longrightarrow C\text{-algebras},\quad
+(B, J, \delta) \longmapsto B/J
+\end{equation}
+This category does not have equalizers or fibre products in general.
+It also doesn't have an initial object ($=$ empty colimit) in general.
+
+\begin{lemma}
+\label{lemma-affine-thickenings-colimits}
+In Situation \ref{situation-affine}.
+\begin{enumerate}
+\item $\text{CRIS}(C/A)$ has finite products (but not infinite ones),
+\item $\text{CRIS}(C/A)$ has all finite nonempty colimits and
+(\ref{equation-forget-affine}) commutes with these, and
+\item $\text{Cris}(C/A)$ has all finite nonempty colimits and
+$\text{Cris}(C/A) \to \text{CRIS}(C/A)$ commutes with them.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The empty product, i.e., the final object in the category of divided
+power thickenings of $C$ over $(A, I, \gamma)$, is the zero ring viewed
+as an $A$-algebra endowed with the zero ideal and the unique divided powers
+on the zero ideal and finally endowed with the unique homomorphism of $C$ to
+the zero ring. If $(B_t, J_t, \delta_t)_{t \in T}$ is a family of objects of
+$\text{CRIS}(C/A)$ then we can form the product
+$(\prod_t B_t, \prod_t J_t, \prod_t \delta_t)$ as in
+Divided Power Algebra, Lemma \ref{dpa-lemma-limits}.
+The map $C \to \prod B_t/\prod J_t = \prod B_t/J_t$ is clear.
+However, we are only guaranteed that $p$ is nilpotent in $\prod_t B_t$
+if $T$ is finite.
+
+\medskip\noindent
+Given two objects $(B, J, \delta)$ and $(B', J', \delta')$ of
+$\text{CRIS}(C/A)$ we can form a cocartesian diagram
+$$
+\xymatrix{
+(B, J, \delta) \ar[r] & (B'', J'', \delta'') \\
+(A, I, \gamma) \ar[r] \ar[u] & (B', J', \delta') \ar[u]
+}
+$$
+in the category of divided power rings. Then we see that we have
+$$
+B''/J'' = B/J \otimes_{A/I} B'/J' \longleftarrow C \otimes_{A/I} C
+$$
+see Divided Power Algebra, Remark \ref{dpa-remark-forgetful}.
+Denote $J'' \subset K \subset B''$
+the ideal such that
+$$
+\xymatrix{
+B''/J'' \ar[r] & B''/K \\
+C \otimes_{A/I} C \ar[r] \ar[u] & C \ar[u]
+}
+$$
+is a pushout, i.e., $B''/K \cong B/J \otimes_C B'/J'$.
+Let $D_{B''}(K) = (D, \bar K, \bar \delta)$
+be the divided power envelope of $K$ in $B''$ relative to
+$(B'', J'', \delta'')$. Then it is easily verified that
+$(D, \bar K, \bar \delta)$ is a coproduct of $(B, J, \delta)$ and
+$(B', J', \delta')$ in $\text{CRIS}(C/A)$.
+
+\medskip\noindent
+Next, we come to coequalizers. Let
+$\alpha, \beta : (B, J, \delta) \to (B', J', \delta')$ be morphisms of
+$\text{CRIS}(C/A)$. Consider the quotient $B''$ of $B'$ by the ideal
+generated by the elements $\alpha(b) - \beta(b)$ for $b \in B$. Let
+$J'' \subset B''$ be the image of $J'$. Let
+$D_{B''}(J'') = (D, \bar J, \bar\delta)$ be the divided power envelope of
+$J''$ in $B''$ relative to $(B', J', \delta')$. Then it is easily verified
+that $(D, \bar J, \bar \delta)$ is the coequalizer of $(B, J, \delta)$ and
+$(B', J', \delta')$ in $\text{CRIS}(C/A)$.
+
+\medskip\noindent
+By Categories, Lemma \ref{categories-lemma-almost-finite-colimits-exist}
+we have all finite nonempty colimits in $\text{CRIS}(C/A)$. The constructions
+above show that (\ref{equation-forget-affine}) commutes with them.
+This formally implies part (3) as $\text{Cris}(C/A)$ is the fibre category
+of (\ref{equation-forget-affine}) over $C$.
+\end{proof}
+
+\begin{remark}
+\label{remark-completed-affine-site}
+In Situation \ref{situation-affine} we denote
+$\text{Cris}^\wedge(C/A)$ the category whose objects are
+pairs $(B \to C, \delta)$ such that
+\begin{enumerate}
+\item $B$ is a $p$-adically complete $A$-algebra,
+\item $B \to C$ is a surjection of $A$-algebras,
+\item $\delta$ is a divided power structure on $\Ker(B \to C)$,
+\item $A \to B$ is a homomorphism of divided power rings.
+\end{enumerate}
+Morphisms are defined as in Definition \ref{definition-affine-thickening}.
+Then $\text{Cris}(C/A) \subset \text{Cris}^\wedge(C/A)$ is the full
+subcategory consisting of those $B$ such that $p$ is nilpotent in $B$.
+Conversely, any object $(B \to C, \delta)$ of $\text{Cris}^\wedge(C/A)$
+is equal to the limit
+$$
+(B \to C, \delta) = \lim_e (B/p^eB \to C, \delta)
+$$
+where for $e \gg 0$ the object $(B/p^eB \to C, \delta)$ lies
+in $\text{Cris}(C/A)$, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-extend-to-completion}.
+In particular, we see that $\text{Cris}^\wedge(C/A)$ is a full subcategory
+of the category of pro-objects of $\text{Cris}(C/A)$, see
+Categories, Remark \ref{categories-remark-pro-category}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-list-properties}
+In Situation \ref{situation-affine}.
+Let $P \to C$ be a surjection of $A$-algebras with kernel $J$.
+Write $D_{P, \gamma}(J) = (D, \bar J, \bar\gamma)$.
+Let $(D^\wedge, J^\wedge, \bar\gamma^\wedge)$ be the
+$p$-adic completion of $D$, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-extend-to-completion}.
+For every $e \geq 1$ set $P_e = P/p^eP$ and $J_e \subset P_e$
+the image of $J$ and write
+$D_{P_e, \gamma}(J_e) = (D_e, \bar J_e, \bar\gamma)$.
+Then for all $e$ large enough we have
+\begin{enumerate}
+\item $p^eD \subset \bar J$ and $p^eD^\wedge \subset \bar J^\wedge$
+are preserved by divided powers,
+\item $D^\wedge/p^eD^\wedge = D/p^eD = D_e$ as divided power rings,
+\item $(D_e, \bar J_e, \bar\gamma)$ is an object of $\text{Cris}(C/A)$,
+\item $(D^\wedge, \bar J^\wedge, \bar\gamma^\wedge)$ is equal to
+$\lim_e (D_e, \bar J_e, \bar\gamma)$, and
+\item $(D^\wedge, \bar J^\wedge, \bar\gamma^\wedge)$ is an object of
+$\text{Cris}^\wedge(C/A)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from
+Divided Power Algebra, Lemma \ref{dpa-lemma-extend-to-completion}.
+It is a general property of $p$-adic completion that
+$D/p^eD = D^\wedge/p^eD^\wedge$. Since $D/p^eD$ is a divided power ring
+and since $P \to D/p^eD$ factors through $P_e$, the universal property of
+$D_e$ produces a map $D_e \to D/p^eD$. Conversely, the universal property
+of $D$ produces a map $D \to D_e$ which factors through $D/p^eD$. We omit
+the verification that these maps are mutually inverse. This proves (2).
+If $e$ is large enough, then $p^eC = 0$, hence we see (3) holds.
+Part (4) follows from
+Divided Power Algebra, Lemma \ref{dpa-lemma-extend-to-completion}.
+Part (5) is clear from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-set-generators}
+In Situation \ref{situation-affine}.
+Let $P$ be a polynomial algebra over $A$ and let
+$P \to C$ be a surjection of $A$-algebras with kernel $J$.
+With $(D_e, \bar J_e, \bar\gamma)$ as in Lemma \ref{lemma-list-properties}:
+for every object $(B, J_B, \delta)$ of $\text{CRIS}(C/A)$ there
+exists an $e$ and a morphism $D_e \to B$ of $\text{CRIS}(C/A)$.
+\end{lemma}
+
+\begin{proof}
+We can find an $A$-algebra homomorphism $P \to B$
+lifting the map $C \to B/J_B$. By our definition of
+$\text{CRIS}(C/A)$ we see that $p^eB = 0$ for
+some $e$ hence $P \to B$ factors as $P \to P_e \to B$.
+By the universal property of the divided power envelope we
+conclude that $P_e \to B$ factors through $D_e$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generator-completion}
+In Situation \ref{situation-affine}.
+Let $P$ be a polynomial algebra over $A$ and let
+$P \to C$ be a surjection of $A$-algebras with kernel $J$.
+Let $(D, \bar J, \bar\gamma)$ be the $p$-adic completion of
+$D_{P, \gamma}(J)$. For every object $(B \to C, \delta)$ of
+$\text{Cris}^\wedge(C/A)$ there
+exists a morphism $D \to B$ of $\text{Cris}^\wedge(C/A)$.
+\end{lemma}
+
+\begin{proof}
+We can find an $A$-algebra homomorphism $P \to B$ compatible
+with maps to $C$. By our definition of
+$\text{Cris}(C/A)$ we see that $P \to B$ factors as
+$P \to D_{P, \gamma}(J) \to B$. As $B$ is $p$-adically complete
+we can factor this map through $D$.
+\end{proof}
+
+
+
+\section{Module of differentials}
+\label{section-differentials}
+
+\noindent
+In this section we develop a theory of modules of differentials
+for divided power rings.
+
+\begin{definition}
+\label{definition-derivation}
+Let $A$ be a ring. Let $(B, J, \delta)$ be a divided power ring.
+Let $A \to B$ be a ring map. Let $M$ be a $B$-module.
+A {\it divided power $A$-derivation} into $M$ is a map
+$\theta : B \to M$ which is additive, annihilates the elements
+of $A$, satisfies the Leibniz rule
+$\theta(bb') = b\theta(b') + b'\theta(b)$ and satisfies
+$$
+\theta(\delta_n(x)) = \delta_{n - 1}(x)\theta(x)
+$$
+for all $n \geq 1$ and all $x \in J$.
+\end{definition}
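+
+\noindent
+For example, if $B$ is a $\mathbf{Q}$-algebra, then necessarily
+$\delta_n(x) = x^n/n!$ and the displayed condition is automatic: the
+Leibniz rule gives
+$$
+\theta(x^n/n!) = \frac{x^{n - 1}}{(n - 1)!}\theta(x) =
+\delta_{n - 1}(x)\theta(x).
+$$
+In general the condition on $\theta(\delta_n(x))$ is an extra constraint
+which does not follow from the other axioms.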
+
+\noindent
+In the situation of the definition, just as in the case of usual
+derivations, there exists a {\it universal divided power $A$-derivation}
+$$
+\text{d}_{B/A, \delta} : B \to \Omega_{B/A, \delta}
+$$
+such that any divided power $A$-derivation $\theta : B \to M$ is equal to
+$\theta = \xi \circ d_{B/A, \delta}$ for some unique $B$-linear map
+$\xi : \Omega_{B/A, \delta} \to M$. If $(A, I, \gamma) \to (B, J, \delta)$
+is a homomorphism of divided power rings, then we can forget the
+divided powers on $A$ and consider the divided power derivations of
+$B$ over $A$. Here are some basic properties of the universal
+module of (divided power) differentials.
+
+\begin{lemma}
+\label{lemma-omega}
+Let $A$ be a ring. Let $(B, J, \delta)$ be a divided power ring and
+$A \to B$ a ring map.
+\begin{enumerate}
+\item Consider $B[x]$ with divided power ideal $(JB[x], \delta')$
+where $\delta'$ is the extension of $\delta$ to $B[x]$. Then
+$$
+\Omega_{B[x]/A, \delta'} =
+\Omega_{B/A, \delta} \otimes_B B[x] \oplus B[x]\text{d}x.
+$$
+\item Consider $B\langle x \rangle$ with divided power ideal
+$(JB\langle x \rangle + B\langle x \rangle_{+}, \delta')$. Then
+$$
+\Omega_{B\langle x\rangle/A, \delta'} =
+\Omega_{B/A, \delta} \otimes_B B\langle x \rangle \oplus
+B\langle x\rangle \text{d}x.
+$$
+\item Let $K \subset J$ be an ideal preserved by $\delta_n$ for
+all $n > 0$. Set $B' = B/K$ and denote $\delta'$ the induced
+divided power on $J/K$. Then $\Omega_{B'/A, \delta'}$ is the quotient
+of $\Omega_{B/A, \delta} \otimes_B B'$ by the $B'$-submodule generated
+by $\text{d}k$ for $k \in K$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+These are proved directly from the construction of $\Omega_{B/A, \delta}$
+as the free $B$-module on the elements $\text{d}b$ modulo the relations
+\begin{enumerate}
+\item $\text{d}(b + b') = \text{d}b + \text{d}b'$, $b, b' \in B$,
+\item $\text{d}a = 0$, $a \in A$,
+\item $\text{d}(bb') = b \text{d}b' + b' \text{d}b$, $b, b' \in B$,
+\item $\text{d}\delta_n(f) = \delta_{n - 1}(f)\text{d}f$, $f \in J$, $n > 1$.
+\end{enumerate}
+Note that the last relation explains why we get ``the same'' answer for
+the divided power polynomial algebra and the usual polynomial algebra:
+in the first case $x$ is an element of the divided power ideal and hence
+$\text{d}x^{[n]} = x^{[n - 1]}\text{d}x$.
+\end{proof}
+
+\noindent
+Let $(A, I, \gamma)$ be a divided power ring. In this setting the
+correct version of the powers of $I$ is given by the divided powers
+$$
+I^{[n]} = \text{ideal generated by }
+\gamma_{e_1}(x_1) \ldots \gamma_{e_t}(x_t)
+\text{ with }\sum e_j \geq n\text{ and }x_j \in I.
+$$
+Of course we have $I^n \subset I^{[n]}$. Note that $I^{[1]} = I$.
+Sometimes we also set $I^{[0]} = A$.
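+
+\noindent
+The inclusion $I^n \subset I^{[n]}$ can be strict. For example, let
+$A = \mathbf{F}_p\langle x \rangle$ be the divided power polynomial
+algebra over $\mathbf{F}_p$ with $I = A_{+}$ and its standard divided
+powers. Then $x^{[p]} = \gamma_p(x)$ lies in $I^{[p]}$, but the degree
+$p$ part of $I^p$ consists of multiples of $x^p = p!\, x^{[p]} = 0$,
+so $x^{[p]} \not\in I^p$. On the other hand, if $A$ is a
+$\mathbf{Q}$-algebra, then $\gamma_n(x) = x^n/n!$ and $I^{[n]} = I^n$
+because $1/n!$ is a unit.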
+
+\begin{lemma}
+\label{lemma-diagonal-and-differentials}
+Let $(A, I, \gamma) \to (B, J, \delta)$ be a homomorphism
+of divided power rings. Let $(B(1), J(1), \delta(1))$ be the coproduct
+of $(B, J, \delta)$ with itself over $(A, I, \gamma)$, i.e.,
+such that
+$$
+\xymatrix{
+(B, J, \delta) \ar[r] & (B(1), J(1), \delta(1)) \\
+(A, I, \gamma) \ar[r] \ar[u] & (B, J, \delta) \ar[u]
+}
+$$
+is cocartesian. Denote $K = \Ker(B(1) \to B)$.
+Then $K \cap J(1) \subset J(1)$ is preserved by the divided power
+structure and
+$$
+\Omega_{B/A, \delta} = K/ \left(K^2 + (K \cap J(1))^{[2]}\right)
+$$
+canonically.
+\end{lemma}
+
+\begin{proof}
+The fact that $K \cap J(1) \subset J(1)$ is preserved by the divided power
+structure follows from the fact that $B(1) \to B$ is a homomorphism of
+divided power rings.
+
+\medskip\noindent
+Recall that $K/K^2$ has a canonical $B$-module structure.
+Denote $s_0, s_1 : B \to B(1)$ the two coprojections and consider
+the map $\text{d} : B \to K/\left(K^2 + (K \cap J(1))^{[2]}\right)$ given by
+$b \mapsto s_1(b) - s_0(b)$. It is clear that $\text{d}$ is additive,
+annihilates $A$, and satisfies the Leibniz rule.
+We claim that $\text{d}$ is a divided power $A$-derivation.
+Let $x \in J$. Set $y = s_1(x)$ and $z = s_0(x)$.
+Denote $\delta$ the divided power structure on $J(1)$.
+We have to show that $\delta_n(y) - \delta_n(z) = \delta_{n - 1}(y)(y - z)$
+modulo $K^2 +(K \cap J(1))^{[2]}$ for $n \geq 1$.
+The equality holds for $n = 1$. Assume $n > 1$.
+Note that $\delta_i(y - z)$ lies in $(K \cap J(1))^{[2]}$ for $i > 1$.
+Calculating modulo $K^2 + (K \cap J(1))^{[2]}$ we have
+$$
+\delta_n(z) = \delta_n(z - y + y) =
+\sum\nolimits_{i = 0}^n \delta_i(z - y)\delta_{n - i}(y) =
+\delta_{n - 1}(y) \delta_1(z - y) + \delta_n(y)
+$$
+This proves the desired equality.
+
+\medskip\noindent
+Let $M$ be a $B$-module. Let $\theta : B \to M$ be a divided power
+$A$-derivation.
+Set $D = B \oplus M$ where $M$ is an ideal of square zero. Define a
+divided power structure on $J \oplus M \subset D$ by setting
+$\delta_n(x + m) = \delta_n(x) + \delta_{n - 1}(x)m$ for $n > 1$, see
+Lemma \ref{lemma-divided-power-first-order-thickening}.
+There are two divided power algebra homomorphisms $B \to D$: the first
+is given by the inclusion and the second by the map $b \mapsto b + \theta(b)$.
+Hence we get a canonical homomorphism $B(1) \to D$ of divided power
+algebras over $(A, I, \gamma)$. This induces a map $K \to M$
+which annihilates $K^2$ (as $M$ is an ideal of square zero) and
+$(K \cap J(1))^{[2]}$ as $M^{[2]} = 0$. The composition
+$B \to K/\left(K^2 + (K \cap J(1))^{[2]}\right) \to M$ equals $\theta$ by construction.
+It follows that $\text{d}$
+is a universal divided power $A$-derivation and we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-filtration-differentials}
+Let $A \to B$ be a ring map and let $(J, \delta)$ be a divided
+power structure on $B$. The universal module $\Omega_{B/A, \delta}$
+comes with a little bit of extra structure, namely the $B$-submodule
+$N$ of $\Omega_{B/A, \delta}$ generated by $\text{d}_{B/A, \delta}(J)$.
+In terms of the isomorphism given in
+Lemma \ref{lemma-diagonal-and-differentials}
+this corresponds to the image of
+$K \cap J(1)$ in $\Omega_{B/A, \delta}$. Consider the $A$-algebra
+$D = B \oplus \Omega^1_{B/A, \delta}$ with ideal $\bar J = J \oplus N$
+and divided powers $\bar \delta$ as in the proof of the lemma.
+Then $(D, \bar J, \bar \delta)$ is a divided power ring
+and the two maps $B \to D$ given by $b \mapsto b$ and
+$b \mapsto b + \text{d}_{B/A, \delta}(b)$
+are homomorphisms of divided power rings over $A$. Moreover, $N$
+is the smallest submodule of $\Omega_{B/A, \delta}$ such that this is true.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-diagonal-and-differentials-affine-site}
+In Situation \ref{situation-affine}.
+Let $(B, J, \delta)$ be an object of $\text{CRIS}(C/A)$.
+Let $(B(1), J(1), \delta(1))$ be the coproduct of $(B, J, \delta)$
+with itself in $\text{CRIS}(C/A)$. Denote
+$K = \Ker(B(1) \to B)$. Then $K \cap J(1) \subset J(1)$
+is preserved by the divided power structure and
+$$
+\Omega_{B/A, \delta} = K/ \left(K^2 + (K \cap J(1))^{[2]}\right)
+$$
+canonically.
+\end{lemma}
+
+\begin{proof}
+Word for word the same as the proof of
+Lemma \ref{lemma-diagonal-and-differentials}.
+The only point that has to be checked is that the
+divided power ring $D = B \oplus M$ is an object of $\text{CRIS}(C/A)$
+and that the two maps $B \to D$ are morphisms of $\text{CRIS}(C/A)$.
+Since $D/(J \oplus M) = B/J$ we can use $C \to B/J$ to view
+$D$ as an object of $\text{CRIS}(C/A)$
+and the statement on morphisms is clear from the construction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-module-differentials-divided-power-envelope}
+Let $(A, I, \gamma)$ be a divided power ring. Let $A \to B$ be a ring
+map and let $IB \subset J \subset B$ be an ideal. Let
+$D_{B, \gamma}(J) = (D, \bar J, \bar \gamma)$ be the divided power envelope.
+Then we have
+$$
+\Omega_{D/A, \bar\gamma} = \Omega_{B/A} \otimes_B D
+$$
+\end{lemma}
+
+\begin{proof}[First proof]
+Let $M$ be a $D$-module. We claim that an $A$-derivation
+$\vartheta : B \to M$ is the same thing as a divided power
+$A$-derivation $\theta : D \to M$. The claim implies the
+statement by the Yoneda lemma.
+
+\medskip\noindent
+Consider the square zero thickening $D \oplus M$ of $D$.
+There is a divided power structure $\delta$ on $\bar J \oplus M$
+if we set the higher divided power operations zero on $M$.
+In other words, we set
+$\delta_n(x + m) = \bar\gamma_n(x) + \bar\gamma_{n - 1}(x)m$ for
+any $x \in \bar J$ and $m \in M$, see
+Lemma \ref{lemma-divided-power-first-order-thickening}.
+Consider the $A$-algebra map $B \to D \oplus M$ whose first
+component is given by the map $B \to D$ and whose second component
+is $\vartheta$. By the universal property we get a corresponding
+homomorphism $D \to D \oplus M$ of divided power algebras
+whose second component is the divided power
+$A$-derivation $\theta$ corresponding to $\vartheta$.
+\end{proof}
+
+\begin{proof}[Second proof]
+We will prove this first when $B$ is flat over $A$. In this case $\gamma$
+extends to a divided power structure $\gamma'$ on $IB$, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-gamma-extends}.
+Hence $D = D_{B, \gamma'}(J)$ is equal to a quotient of
+the divided power ring $(D', J', \delta)$ where $D' = B\langle x_t \rangle$
+and $J' = IB\langle x_t \rangle + B\langle x_t \rangle_{+}$
+by the elements $x_t - f_t$ and $\delta_n(\sum r_t x_t - r_0)$, see
+Lemma \ref{lemma-describe-divided-power-envelope} for notation
+and explanation. Write $\text{d} : D' \to \Omega_{D'/A, \delta}$
+for the universal derivation. Note that
+$$
+\Omega_{D'/A, \delta} =
+\Omega_{B/A} \otimes_B D' \oplus \bigoplus D' \text{d}x_t,
+$$
+see Lemma \ref{lemma-omega}. We conclude that $\Omega_{D/A, \bar\gamma}$
+is the quotient of $\Omega_{D'/A, \delta} \otimes_{D'} D$ by the submodule
+generated by $\text{d}$ applied to the generators of the
+kernel of $D' \to D$ listed above, see Lemma \ref{lemma-omega}.
+Since $\text{d}(x_t - f_t) = - \text{d}f_t + \text{d}x_t$
+we see that we have $\text{d}x_t = \text{d}f_t$ in the quotient.
+In particular we see that $\Omega_{B/A} \otimes_B D \to \Omega_{D/A, \bar\gamma}$
+is surjective with kernel given by the images of $\text{d}$
+applied to the elements $\delta_n(\sum r_t x_t - r_0)$.
+However, given a relation $\sum r_tf_t - r_0 = 0$ in $B$ with
+$r_t \in B$ and $r_0 \in IB$ we see that
+\begin{align*}
+\text{d}\delta_n(\sum r_t x_t - r_0)
+& =
+\delta_{n - 1}(\sum r_t x_t - r_0)\text{d}(\sum r_t x_t - r_0)
+\\
+& =
+\delta_{n - 1}(\sum r_t x_t - r_0)
+\left(
+\sum r_t\text{d}(x_t - f_t) + \sum (x_t - f_t)\text{d}r_t
+\right)
+\end{align*}
+because $\sum r_tf_t - r_0 = 0$ in $B$. Hence this is already zero in
+$\Omega_{B/A} \otimes_B D$ and we win in the case that $B$ is flat over $A$.
+
+\medskip\noindent
+In the general case we write $B$ as a quotient of a polynomial ring
+$P \to B$ and let $J' \subset P$ be the inverse image of $J$. Then
+$D = D'/K'$ with notation as in
+Lemma \ref{lemma-divided-power-envelop-quotient}.
+By the case handled in the first paragraph of the proof we have
+$\Omega_{D'/A, \bar\gamma'} = \Omega_{P/A} \otimes_P D'$. Then
+$\Omega_{D/A, \bar \gamma}$ is the quotient of $\Omega_{P/A} \otimes_P D$
+by the submodule generated by $\text{d}\bar\gamma_n'(k)$ where $k$
+is an element of the kernel of $P \to B$, see
+Lemma \ref{lemma-omega} and the description of $K'$ from
+Lemma \ref{lemma-divided-power-envelop-quotient}. Since
+$\text{d}\bar\gamma_n'(k) = \bar\gamma'_{n - 1}(k)\text{d}k$ we see
+again that it suffices to divide by the submodule generated by
+$\text{d}k$ with $k \in \Ker(P \to B)$ and since $\Omega_{B/A}$
+is the quotient of $\Omega_{P/A} \otimes_P B$ by these elements
+(Algebra, Lemma \ref{algebra-lemma-differential-seq}) we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-divided-powers-de-rham-complex}
+Let $A \to B$ be a ring map and let $(J, \delta)$ be a divided power
+structure on $B$. Set
+$\Omega_{B/A, \delta}^i = \wedge^i_B \Omega_{B/A, \delta}$
+where $\Omega_{B/A, \delta}$ is the target of the universal divided power
+$A$-derivation $\text{d} = \text{d}_{B/A, \delta} : B \to \Omega_{B/A, \delta}$.
+Note that $\Omega_{B/A, \delta}$ is the quotient of $\Omega_{B/A}$ by the
+$B$-submodule generated by the elements
+$\text{d}\delta_n(x) - \delta_{n - 1}(x)\text{d}x$ for $x \in J$.
+We claim Algebra, Lemma \ref{algebra-lemma-de-rham-complex} applies.
+To see this it suffices to verify that $\text{d}$ applied to the elements
+$\text{d}\delta_n(x) - \delta_{n - 1}(x)\text{d}x$
+of $\Omega_{B/A}$ gives zero in $\Omega^2_{B/A, \delta}$.
+We observe that
+$$
+\text{d}(\delta_{n - 1}(x)) \wedge \text{d}x
+= \delta_{n - 2}(x) \text{d}x \wedge \text{d}x = 0
+$$
+in $\Omega^2_{B/A, \delta}$ as desired. Hence we obtain a
+{\it divided power de Rham complex}
+$$
+\Omega^0_{B/A, \delta} \to \Omega^1_{B/A, \delta} \to
+\Omega^2_{B/A, \delta} \to \ldots
+$$
+which will play an important role in the sequel.
+\end{remark}
+
+\begin{remark}
+\label{remark-connection}
+Let $A \to B$ be a ring map. Let $\Omega_{B/A} \to \Omega$
+be a quotient satisfying the assumptions of
+Algebra, Lemma \ref{algebra-lemma-de-rham-complex}.
+Let $M$ be a $B$-module. A {\it connection} is an additive map
+$$
+\nabla : M \longrightarrow M \otimes_B \Omega
+$$
+such that $\nabla(bm) = b \nabla(m) + m \otimes \text{d}b$
+for $b \in B$ and $m \in M$. In this situation we can define maps
+$$
+\nabla : M \otimes_B \Omega^i \longrightarrow M \otimes_B \Omega^{i + 1}
+$$
+by the rule $\nabla(m \otimes \omega) = \nabla(m) \wedge \omega +
+m \otimes \text{d}\omega$. This works because if $b \in B$, then
+\begin{align*}
+\nabla(bm \otimes \omega) - \nabla(m \otimes b\omega)
+& =
+\nabla(bm) \wedge \omega + bm \otimes \text{d}\omega
+- \nabla(m) \wedge b\omega - m \otimes \text{d}(b\omega) \\
+& =
+b\nabla(m) \wedge \omega + m \otimes \text{d}b \wedge \omega
++ bm \otimes \text{d}\omega \\
+& \ \ \ \ \ \ - b\nabla(m) \wedge \omega - bm \otimes \text{d}(\omega)
+- m \otimes \text{d}b \wedge \omega = 0
+\end{align*}
+As is customary we say the connection is {\it integrable} if and
+only if the composition
+$$
+M \xrightarrow{\nabla} M \otimes_B \Omega^1
+\xrightarrow{\nabla} M \otimes_B \Omega^2
+$$
+is zero. In this case we obtain a complex
+$$
+M \xrightarrow{\nabla} M \otimes_B \Omega^1
+\xrightarrow{\nabla} M \otimes_B \Omega^2
+\xrightarrow{\nabla} M \otimes_B \Omega^3
+\xrightarrow{\nabla} M \otimes_B \Omega^4 \to \ldots
+$$
+which is called the de Rham complex of the connection.
+\end{remark}
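+
+\noindent
+For example, take $M = B$ and $\nabla(b) = \text{d}b + b\omega$ for some
+fixed $\omega \in \Omega^1$. The Leibniz rule for $\text{d}$ shows that
+this is a connection, and a direct computation gives
+$$
+\nabla(\nabla(b)) = \omega \wedge \text{d}b +
+(\text{d}b + b\omega) \wedge \omega + b\,\text{d}\omega = b\,\text{d}\omega
+$$
+since $\omega \wedge \text{d}b = -\text{d}b \wedge \omega$ and
+$\omega \wedge \omega = 0$. Hence this connection is integrable if and
+only if $\text{d}\omega = 0$; for $\omega = 0$ we recover the complex
+$\Omega^\bullet$ itself.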
+
+\begin{remark}
+\label{remark-base-change-connection}
+Consider a commutative diagram of rings
+$$
+\xymatrix{
+B \ar[r]_\varphi & B' \\
+A \ar[u] \ar[r] & A' \ar[u]
+}
+$$
+Let $\Omega_{B/A} \to \Omega$ and $\Omega_{B'/A'} \to \Omega'$
+be quotients satisfying the assumptions of
+Algebra, Lemma \ref{algebra-lemma-de-rham-complex}.
+Assume there is a map $\varphi : \Omega \to \Omega'$ which
+fits into a commutative diagram
+$$
+\xymatrix{
+\Omega_{B/A} \ar[r] \ar[d] &
+\Omega_{B'/A'} \ar[d] \\
+\Omega \ar[r]^{\varphi} &
+\Omega'
+}
+$$
+where the top horizontal arrow is the canonical map
+$\Omega_{B/A} \to \Omega_{B'/A'}$ induced by $\varphi : B \to B'$.
+In this situation, given any pair $(M, \nabla)$ where $M$ is a $B$-module
+and $\nabla : M \to M \otimes_B \Omega$ is a connection
+we obtain a {\it base change} $(M \otimes_B B', \nabla')$ where
+$$
+\nabla' :
+M \otimes_B B'
+\longrightarrow
+(M \otimes_B B') \otimes_{B'} \Omega' = M \otimes_B \Omega'
+$$
+is defined by the rule
+$$
+\nabla'(m \otimes b') =
+\sum m_i \otimes b'\text{d}\varphi(b_i) + m \otimes \text{d}b'
+$$
+if $\nabla(m) = \sum m_i \otimes \text{d}b_i$. If $\nabla$ is integrable,
+then so is $\nabla'$, and in this case there is a canonical map of
+de Rham complexes (Remark \ref{remark-connection})
+\begin{equation}
+\label{equation-base-change-map-complexes}
+M \otimes_B \Omega^\bullet
+\longrightarrow
+(M \otimes_B B') \otimes_{B'} (\Omega')^\bullet =
+M \otimes_B (\Omega')^\bullet
+\end{equation}
+which maps $m \otimes \eta$ to $m \otimes \varphi(\eta)$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-differentials-completion}
+Let $A \to B$ be a ring map and let $(J, \delta)$ be a divided power
+structure on $B$. Let $p$ be a prime number. Assume that $A$ is a
+$\mathbf{Z}_{(p)}$-algebra and that $p$ is nilpotent in $B/J$. Then
+we have
+$$
+\lim_e \Omega_{B_e/A, \bar\delta} =
+\lim_e \Omega_{B/A, \delta}/p^e\Omega_{B/A, \delta} =
+\lim_e \Omega_{B^\wedge/A, \delta^\wedge}/p^e \Omega_{B^\wedge/A, \delta^\wedge}
+$$
+see proof for notation and explanation.
+\end{lemma}
+
+\begin{proof}
+By Divided Power Algebra, Lemma \ref{dpa-lemma-extend-to-completion}
+we see that $\delta$ extends
+to $B_e = B/p^eB$ for all sufficiently large $e$. Hence the first limit
+makes sense. The lemma also produces a divided power structure $\delta^\wedge$
+on the completion $B^\wedge = \lim_e B_e$, hence the last limit makes
+sense. By Lemma \ref{lemma-omega}
+and the fact that $\text{d}p^e = 0$ (always)
+we see that the surjection
+$\Omega_{B/A, \delta} \to \Omega_{B_e/A, \bar\delta}$ has kernel
+$p^e\Omega_{B/A, \delta}$. Similarly for the kernel of
+$\Omega_{B^\wedge/A, \delta^\wedge} \to \Omega_{B_e/A, \bar\delta}$.
+Hence the lemma is clear.
+\end{proof}
+
+
+
+\section{Divided power schemes}
+\label{section-divided-power-schemes}
+
+\noindent
+Some remarks on how to globalize the previous notions.
+
+\begin{definition}
+\label{definition-divided-power-structure}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O}$ be a sheaf of rings
+on $\mathcal{C}$. Let $\mathcal{I} \subset \mathcal{O}$ be a
+sheaf of ideals. A {\it divided power structure $\gamma$} on $\mathcal{I}$
+is a sequence of maps $\gamma_n : \mathcal{I} \to \mathcal{I}$, $n \geq 1$
+such that for any object $U$ of $\mathcal{C}$ the triple
+$$
+(\mathcal{O}(U), \mathcal{I}(U), \gamma)
+$$
+is a divided power ring.
+\end{definition}
+
+\noindent
+To be sure this applies in particular to sheaves of rings on
+topological spaces. But it's good to be a little bit more general
+as the structure sheaf of the crystalline site lives on a... site!
+A triple $(\mathcal{C}, \mathcal{I}, \gamma)$ as in the
+definition above is sometimes called a {\it divided power topos}
+in this chapter. Given a second $(\mathcal{C}', \mathcal{I}', \gamma')$ and
+given a morphism of ringed topoi
+$(f, f^\sharp) : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{C}'), \mathcal{O}')$
+we say that $(f, f^\sharp)$ induces a {\it morphism of divided
+power topoi} if $f^\sharp(f^{-1}\mathcal{I}') \subset \mathcal{I}$
+and the diagrams
+$$
+\xymatrix{
+f^{-1}\mathcal{I}' \ar[d]_{f^{-1}\gamma'_n} \ar[r]_{f^\sharp} &
+\mathcal{I} \ar[d]^{\gamma_n} \\
+f^{-1}\mathcal{I}' \ar[r]^{f^\sharp} & \mathcal{I}
+}
+$$
+are commutative for all $n \geq 1$. If $f$ comes from a morphism of
+sites induced by a functor $u : \mathcal{C}' \to \mathcal{C}$ then
+this just means that
+$$
+(\mathcal{O}'(U'), \mathcal{I}'(U'), \gamma')
+\longrightarrow
+(\mathcal{O}(u(U')), \mathcal{I}(u(U')), \gamma)
+$$
+is a homomorphism of divided power rings for all $U' \in \Ob(\mathcal{C}')$.
+
+\medskip\noindent
+In the case of schemes we require the divided power ideal to be
+{\bf quasi-coherent}. But apart from this the definition is exactly
+the same as in the case of topoi. Here it is.
+
+\begin{definition}
+\label{definition-divided-power-scheme}
+A {\it divided power scheme} is a triple $(S, \mathcal{I}, \gamma)$
+where $S$ is a scheme, $\mathcal{I}$ is a quasi-coherent sheaf of
+ideals, and $\gamma$ is a divided power structure on $\mathcal{I}$.
+A {\it morphism of divided power schemes}
+$(S, \mathcal{I}, \gamma) \to (S', \mathcal{I}', \gamma')$ is
+a morphism of schemes $f : S \to S'$ such that
+$f^{-1}\mathcal{I}'\mathcal{O}_S \subset \mathcal{I}$ and such that
+$$
+(\mathcal{O}_{S'}(U'), \mathcal{I}'(U'), \gamma')
+\longrightarrow
+(\mathcal{O}_S(f^{-1}U'), \mathcal{I}(f^{-1}U'), \gamma)
+$$
+is a homomorphism of divided power rings for all $U' \subset S'$ open.
+\end{definition}
+
+\noindent
+Recall that there is a 1-to-1 correspondence between quasi-coherent
+sheaves of ideals and closed immersions, see
+Morphisms, Section \ref{morphisms-section-closed-immersions}.
+Thus given a divided power scheme $(T, \mathcal{J}, \gamma)$ we
+get a canonical closed immersion $U \to T$ defined by $\mathcal{J}$.
+Conversely, given a closed immersion $U \to T$ and a divided power
+structure $\gamma$ on the sheaf of ideals $\mathcal{J}$ associated
+to $U \to T$ we obtain a divided power scheme $(T, \mathcal{J}, \gamma)$.
+In many situations we only want to consider such triples
+$(U, T, \gamma)$ when the morphism $U \to T$ is a thickening, see
+More on Morphisms, Definition \ref{more-morphisms-definition-thickening}.
+
+\begin{definition}
+\label{definition-divided-power-thickening}
+A triple $(U, T, \gamma)$ as above is called a {\it divided power thickening}
+if $U \to T$ is a thickening.
+\end{definition}
+
+\noindent
+Fibre products of divided power schemes exist when one of the
+three is a divided power thickening. Here is a formal statement.
+
+\begin{lemma}
+\label{lemma-fibre-product}
+Let $(U', T', \delta') \to (S'_0, S', \gamma')$ and
+$(S_0, S, \gamma) \to (S'_0, S', \gamma')$ be morphisms of
+divided power schemes. If $(U', T', \delta')$ is a divided power
+thickening, then there exists a divided power scheme $(T_0, T, \delta)$
+and
+$$
+\xymatrix{
+T \ar[r] \ar[d] & T' \ar[d] \\
+S \ar[r] & S'
+}
+$$
+which is a cartesian diagram in the category of divided power schemes.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hints: If $T$ exists, then $T_0 = S_0 \times_{S'_0} U'$
+(argue as in Divided Power Algebra, Remark \ref{dpa-remark-forgetful}).
+Since $T'$ is a divided power thickening, we see that $T$
+(if it exists) will be a divided power thickening too.
+Hence we can define $T$ as the scheme with underlying topological
+space the underlying topological space of $T_0 = S_0 \times_{S'_0} U'$
+and as structure sheaf on affine pieces the ring given
+by Lemma \ref{lemma-affine-thickenings-colimits}.
+\end{proof}
+
+\noindent
+We make the following observation. Suppose that $(U, T, \gamma)$
+is a triple as above. Assume that $T$ is a scheme over $\mathbf{Z}_{(p)}$
+and that $p$ is locally nilpotent on $U$. Then
+\begin{enumerate}
+\item $p$ is locally nilpotent on $T$ if and only if $U \to T$
+is a thickening (see Divided Power Algebra, Lemma \ref{dpa-lemma-nil}), and
+\item $p^e\mathcal{O}_T$ is locally on $T$ preserved by $\gamma$
+for $e \gg 0$ (see
+Divided Power Algebra, Lemma \ref{dpa-lemma-extend-to-completion}).
+\end{enumerate}
+This suggests that good results on divided power thickenings will be
+available under the following hypotheses.
+
+\begin{situation}
+\label{situation-global}
+Here $p$ is a prime number and $(S, \mathcal{I}, \gamma)$ is a divided power
+scheme over $\mathbf{Z}_{(p)}$. We set $S_0 = V(\mathcal{I}) \subset S$.
+Finally, $X \to S_0$ is a morphism of schemes such that $p$ is
+locally nilpotent on $X$.
+\end{situation}
+
+\noindent
+It is in this situation that we will define the big and small
+crystalline sites.
+
+
+
+
+
+
+
+
+
+\section{The big crystalline site}
+\label{section-big-site}
+
+\noindent
+We first define the big site. Given a divided power scheme
+$(S, \mathcal{I}, \gamma)$ we say $(T, \mathcal{J}, \delta)$ is
+a divided power scheme over $(S, \mathcal{I}, \gamma)$ if
+$T$ comes endowed with a morphism $T \to S$ of divided power
+schemes. Similarly, we say a divided power thickening $(U, T, \delta)$
+is a divided power thickening over $(S, \mathcal{I}, \gamma)$
+if $T$ comes endowed with a morphism $T \to S$ of divided power
+schemes.
+
+\begin{definition}
+\label{definition-divided-power-thickening-X}
+In Situation \ref{situation-global}.
+\begin{enumerate}
+\item A {\it divided power thickening of $X$ relative to
+$(S, \mathcal{I}, \gamma)$} is given by a divided power thickening
+$(U, T, \delta)$ over $(S, \mathcal{I}, \gamma)$
+and an $S$-morphism $U \to X$.
+\item A {\it morphism of divided power thickenings of $X$
+relative to $(S, \mathcal{I}, \gamma)$} is defined in the obvious
+manner.
+\end{enumerate}
+The category of divided power thickenings of $X$ relative to
+$(S, \mathcal{I}, \gamma)$ is denoted $\text{CRIS}(X/S, \mathcal{I}, \gamma)$
+or simply $\text{CRIS}(X/S)$.
+\end{definition}
+
+\noindent
+For any $(U, T, \delta)$ in $\text{CRIS}(X/S)$
+we have that $p$ is locally nilpotent on $T$, see discussion preceding
+Situation \ref{situation-global}.
+A good way to visualize all the data associated to $(U, T, \delta)$
+is the commutative diagram
+$$
+\xymatrix{
+T \ar[dd] & U \ar[l] \ar[d] \\
+& X \ar[d] \\
+S & S_0 \ar[l]
+}
+$$
+where $S_0 = V(\mathcal{I}) \subset S$. Morphisms of $\text{CRIS}(X/S)$
+can be similarly visualized as huge commutative diagrams. In particular,
+there is a canonical forgetful functor
+\begin{equation}
+\label{equation-forget}
+\text{CRIS}(X/S) \longrightarrow \Sch/X,\quad
+(U, T, \delta) \longmapsto U
+\end{equation}
+as well as its one-sided inverse (and left adjoint)
+\begin{equation}
+\label{equation-endow-trivial}
+\Sch/X \longrightarrow \text{CRIS}(X/S),\quad
+U \longmapsto (U, U, \emptyset)
+\end{equation}
+which is sometimes useful.
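+
+\medskip\noindent
+To see why (\ref{equation-endow-trivial}) is left adjoint to
+(\ref{equation-forget}) note that a morphism out of a trivial
+thickening is determined by its underlying morphism of schemes:
+for $(U', T', \delta')$ in $\text{CRIS}(X/S)$ we have
+$$
+\Mor_{\text{CRIS}(X/S)}((U, U, \emptyset), (U', T', \delta'))
+=
+\Mor_{\Sch/X}(U, U')
+$$
+since the second component of such a morphism is forced to be the
+composition $U \to U' \to T'$.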
+
+\begin{lemma}
+\label{lemma-divided-power-thickening-fibre-products}
+In Situation \ref{situation-global}.
+The category $\text{CRIS}(X/S)$ has all finite nonempty limits,
+in particular products of pairs and fibre products.
+The functor (\ref{equation-forget}) commutes with limits.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: See Lemma \ref{lemma-affine-thickenings-colimits}
+for the affine case. See also
+Divided Power Algebra, Remark \ref{dpa-remark-forgetful}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-divided-power-thickening-base-change-flat}
+In Situation \ref{situation-global}. Let
+$$
+\xymatrix{
+(U_3, T_3, \delta_3) \ar[d] \ar[r] & (U_2, T_2, \delta_2) \ar[d] \\
+(U_1, T_1, \delta_1) \ar[r] & (U, T, \delta)
+}
+$$
+be a fibre square in the category of divided power thickenings of
+$X$ relative to $(S, \mathcal{I}, \gamma)$. If $T_2 \to T$ is
+flat and $U_2 = T_2 \times_T U$, then $T_3 = T_1 \times_T T_2$ (as schemes).
+\end{lemma}
+
+\begin{proof}
+This is true because a divided power structure extends uniquely
+along a flat ring map. See
+Divided Power Algebra, Lemma \ref{dpa-lemma-gamma-extends}.
+\end{proof}
+
+\noindent
+The lemma above means that the base change of a flat morphism
+of divided power thickenings is another flat morphism, and in
+fact is the ``usual'' base change of the morphism. This implies
+that the following definition makes sense.
+
+\begin{definition}
+\label{definition-big-crystalline-site}
+In Situation \ref{situation-global}.
+\begin{enumerate}
+\item A family of morphisms $\{(U_i, T_i, \delta_i) \to (U, T, \delta)\}$
+of divided power thickenings of $X/S$ is a
+{\it Zariski, \'etale, smooth, syntomic, or fppf covering}
+if and only if
+\begin{enumerate}
+\item $U_i = U \times_T T_i$ for all $i$ and
+\item $\{T_i \to T\}$ is a Zariski, \'etale, smooth, syntomic, or fppf covering.
+\end{enumerate}
+\item The {\it big crystalline site} of $X$ over $(S, \mathcal{I}, \gamma)$,
+is the category $\text{CRIS}(X/S)$ endowed with the Zariski topology.
+\item The topos of sheaves on $\text{CRIS}(X/S)$ is denoted
+$(X/S)_{\text{CRIS}}$ or sometimes
+$(X/S, \mathcal{I}, \gamma)_{\text{CRIS}}$\footnote{This clashes with
+our convention to denote the topos associated to a site $\mathcal{C}$
+by $\Sh(\mathcal{C})$.}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+There are some obvious functorialities concerning these topoi.
+
+\begin{remark}[Functoriality]
+\label{remark-functoriality-big-cris}
+Let $p$ be a prime number.
+Let $(S, \mathcal{I}, \gamma) \to (S', \mathcal{I}', \gamma')$ be a
+morphism of divided power schemes over $\mathbf{Z}_{(p)}$.
+Set $S_0 = V(\mathcal{I})$ and $S'_0 = V(\mathcal{I}')$.
+Let
+$$
+\xymatrix{
+X \ar[r]_f \ar[d] & Y \ar[d] \\
+S_0 \ar[r] & S'_0
+}
+$$
+be a commutative diagram of morphisms of schemes and assume $p$ is
+locally nilpotent on $X$ and $Y$. Then we get a continuous and
+cocontinuous functor
+$$
+\text{CRIS}(X/S) \longrightarrow \text{CRIS}(Y/S')
+$$
+by letting $(U, T, \delta)$ correspond to $(U, T, \delta)$
+with $U \to X \to Y$ as the $S'$-morphism from $U$ to $Y$.
+Hence we get a morphism of topoi
+$$
+f_{\text{CRIS}} : (X/S)_{\text{CRIS}} \longrightarrow (Y/S')_{\text{CRIS}}
+$$
+see Sites, Section \ref{sites-section-cocontinuous-morphism-topoi}.
+\end{remark}
+
+\begin{remark}[Comparison with Zariski site]
+\label{remark-compare-big-zariski}
+In Situation \ref{situation-global}.
+The functor (\ref{equation-forget}) is cocontinuous (details omitted) and
+commutes with products and fibred products
+(Lemma \ref{lemma-divided-power-thickening-fibre-products}).
+Hence we obtain a morphism of topoi
+$$
+U_{X/S} : (X/S)_{\text{CRIS}} \longrightarrow \Sh((\Sch/X)_{Zar})
+$$
+from the big crystalline topos of $X/S$ to the big Zariski topos of $X$.
+See Sites, Section \ref{sites-section-cocontinuous-morphism-topoi}.
+\end{remark}
+
+\begin{remark}[Structure morphism]
+\label{remark-big-structure-morphism}
+In Situation \ref{situation-global}.
+Consider the closed subscheme $S_0 = V(\mathcal{I}) \subset S$.
+If we assume that $p$ is locally nilpotent on $S_0$ (which is always
+the case in practice) then we obtain a situation as in
+Definition \ref{definition-divided-power-thickening-X} with $S_0$ instead
+of $X$. Hence we get a site $\text{CRIS}(S_0/S)$. If $f : X \to S_0$ is
+the structure morphism of $X$ over $S$, then we get a commutative diagram
+of morphisms of ringed topoi
+$$
+\xymatrix{
+(X/S)_{\text{CRIS}}
+\ar[r]_{f_{\text{CRIS}}} \ar[d]_{U_{X/S}} &
+(S_0/S)_{\text{CRIS}} \ar[d]^{U_{S_0/S}} \\
+\Sh((\Sch/X)_{Zar}) \ar[r]^{f_{big}} & \Sh((\Sch/S_0)_{Zar}) \ar[rd] \\
+& & \Sh((\Sch/S)_{Zar})
+}
+$$
+by Remark \ref{remark-functoriality-big-cris}. We think of the composition
+$(X/S)_{\text{CRIS}} \to \Sh((\Sch/S)_{Zar})$ as the structure morphism of
+the big crystalline site. Even if $p$ is not locally nilpotent on $S_0$
+the structure morphism
+$$
+(X/S)_{\text{CRIS}} \longrightarrow \Sh((\Sch/S)_{Zar})
+$$
+is defined as we can take the lower route through the diagram above. Thus it
+is the morphism of topoi corresponding to the cocontinuous
+functor $\text{CRIS}(X/S) \to (\Sch/S)_{Zar}$ given by the rule
+$(U, T, \delta)/S \mapsto U/S$, see
+Sites, Section \ref{sites-section-cocontinuous-morphism-topoi}.
+\end{remark}
+
+\begin{remark}[Compatibilities]
+\label{remark-compatibilities-big-cris}
+The morphisms defined above satisfy numerous compatibilities. For example,
+in the situation of Remark \ref{remark-functoriality-big-cris}
+we obtain a commutative diagram of ringed topoi
+$$
+\xymatrix{
+(X/S)_{\text{CRIS}} \ar[d] \ar[r] & (Y/S')_{\text{CRIS}} \ar[d] \\
+\Sh((\Sch/S)_{Zar}) \ar[r] & \Sh((\Sch/S')_{Zar})
+}
+$$
+where the vertical arrows are the structure morphisms.
+\end{remark}
+
+
+
+
+\section{The crystalline site}
+\label{section-site}
+
+\noindent
+Since (\ref{equation-forget}) commutes with products and fibre
+products, we see that looking at those $(U, T, \delta)$ such that
+$U \to X$ is an open immersion defines a full
+subcategory preserved under fibre products (and more generally
+finite nonempty limits). Hence the following
+definition makes sense.
+
+\begin{definition}
+\label{definition-crystalline-site}
+In Situation \ref{situation-global}.
+\begin{enumerate}
+\item The (small) {\it crystalline site} of $X$ over
+$(S, \mathcal{I}, \gamma)$, denoted $\text{Cris}(X/S, \mathcal{I}, \gamma)$
+or simply $\text{Cris}(X/S)$ is the full subcategory of $\text{CRIS}(X/S)$
+consisting of those $(U, T, \delta)$ in $\text{CRIS}(X/S)$ such that
+$U \to X$ is an open immersion. It comes endowed with the Zariski topology.
+\item The topos of sheaves on $\text{Cris}(X/S)$ is denoted
+$(X/S)_{\text{cris}}$ or sometimes
+$(X/S, \mathcal{I}, \gamma)_{\text{cris}}$\footnote{This clashes with
+our convention to denote the topos associated to a site $\mathcal{C}$
+by $\Sh(\mathcal{C})$.}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+For any $(U, T, \delta)$ in $\text{Cris}(X/S)$ the morphism $U \to X$
+defines an object of the small Zariski site $X_{Zar}$ of $X$. Hence
+a canonical forgetful functor
+\begin{equation}
+\label{equation-forget-small}
+\text{Cris}(X/S) \longrightarrow X_{Zar},\quad
+(U, T, \delta) \longmapsto U
+\end{equation}
+and a left adjoint
+\begin{equation}
+\label{equation-endow-trivial-small}
+X_{Zar} \longrightarrow \text{Cris}(X/S),\quad
+U \longmapsto (U, U, \emptyset)
+\end{equation}
+which is sometimes useful.
+
+\medskip\noindent
+We can compare the small and big crystalline sites, just like
+we can compare the small and big Zariski sites of a scheme, see
+Topologies, Lemma \ref{topologies-lemma-at-the-bottom}.
+
+\begin{lemma}
+\label{lemma-compare-big-small}
+Assumptions as in Definition \ref{definition-divided-power-thickening-X}.
+The inclusion functor
+$$
+\text{Cris}(X/S) \to \text{CRIS}(X/S)
+$$
+commutes with finite nonempty limits, is fully faithful, continuous,
+and cocontinuous. There are morphisms of topoi
+$$
+(X/S)_{\text{cris}} \xrightarrow{i} (X/S)_{\text{CRIS}}
+\xrightarrow{\pi} (X/S)_{\text{cris}}
+$$
+whose composition is the identity and of which the first is induced
+by the inclusion functor. Moreover, $\pi_* = i^{-1}$.
+\end{lemma}
+
+\begin{proof}
+For the first assertion see
+Lemma \ref{lemma-divided-power-thickening-fibre-products}.
+This gives us a morphism of topoi
+$i : (X/S)_{\text{cris}} \to (X/S)_{\text{CRIS}}$ and a left adjoint
+$i_!$ such that $i^{-1}i_! = i^{-1}i_* = \text{id}$, see
+Sites, Lemmas \ref{sites-lemma-when-shriek},
+\ref{sites-lemma-preserve-equalizers}, and
+\ref{sites-lemma-back-and-forth}.
+We claim that $i_!$ is exact. If this is true, then we can define
+$\pi$ by the rules $\pi^{-1} = i_!$ and $\pi_* = i^{-1}$
+and everything is clear. To prove the claim, note that we already know
+that $i_!$ is right exact and preserves fibre products (see references
+given). Hence it suffices to show that $i_! * = *$ where $*$ indicates
+the final object in the category of sheaves of sets.
+To see this it suffices to produce a set of objects
+$(U_i, T_i, \delta_i)$, $i \in I$ of $\text{Cris}(X/S)$ such that
+$$
+\coprod\nolimits_{i \in I} h_{(U_i, T_i, \delta_i)} \to *
+$$
+is surjective in $(X/S)_{\text{CRIS}}$ (details omitted; hint: use that
+$\text{Cris}(X/S)$ has products and that the functor
+$\text{Cris}(X/S) \to \text{CRIS}(X/S)$ commutes with them).
+In the affine case this
+follows from Lemma \ref{lemma-set-generators}. We omit the proof
+in general.
+\end{proof}
+
+\begin{remark}[Functoriality]
+\label{remark-functoriality-cris}
+Let $p$ be a prime number.
+Let $(S, \mathcal{I}, \gamma) \to (S', \mathcal{I}', \gamma')$
+be a morphism of divided power schemes over $\mathbf{Z}_{(p)}$.
+Let
+$$
+\xymatrix{
+X \ar[r]_f \ar[d] & Y \ar[d] \\
+S_0 \ar[r] & S'_0
+}
+$$
+be a commutative diagram of morphisms of schemes and assume $p$ is
+locally nilpotent on $X$ and $Y$. By analogy with
+Topologies, Lemma \ref{topologies-lemma-morphism-big-small} we define
+$$
+f_{\text{cris}} : (X/S)_{\text{cris}} \longrightarrow (Y/S')_{\text{cris}}
+$$
+by the formula $f_{\text{cris}} = \pi_Y \circ f_{\text{CRIS}} \circ i_X$
+where $i_X$ and $\pi_Y$ are as in Lemma \ref{lemma-compare-big-small}
+for $X$ and $Y$ and where $f_{\text{CRIS}}$ is as in
+Remark \ref{remark-functoriality-big-cris}.
+\end{remark}
+
+\begin{remark}[Comparison with Zariski site]
+\label{remark-compare-zariski}
+In Situation \ref{situation-global}.
+The functor (\ref{equation-forget-small}) is continuous, cocontinuous, and
+commutes with products and fibred products.
+Hence we obtain a morphism of topoi
+$$
+u_{X/S} : (X/S)_{\text{cris}} \longrightarrow \Sh(X_{Zar})
+$$
+relating the small crystalline topos of $X/S$ with
+the small Zariski topos of $X$.
+See Sites, Section \ref{sites-section-cocontinuous-morphism-topoi}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-localize}
+In Situation \ref{situation-global}.
+Let $X' \subset X$ and $S' \subset S$ be open subschemes such that
+$X'$ maps into $S'$. Then there is a fully faithful functor
+$\text{Cris}(X'/S') \to \text{Cris}(X/S)$
+which gives rise to a morphism of topoi fitting into the commutative
+diagram
+$$
+\xymatrix{
+(X'/S')_{\text{cris}} \ar[r] \ar[d]_{u_{X'/S'}} &
+(X/S)_{\text{cris}} \ar[d]^{u_{X/S}} \\
+\Sh(X'_{Zar}) \ar[r] & \Sh(X_{Zar})
+}
+$$
+Moreover, this diagram is an example of localization of morphisms of
+topoi as in Sites, Lemma \ref{sites-lemma-localize-morphism-topoi}.
+\end{lemma}
+
+\begin{proof}
+The fully faithful functor comes from thinking of
+objects of $\text{Cris}(X'/S')$ as divided power
+thickenings $(U, T, \delta)$ of $X$ where $U \to X$
+factors through $X' \subset X$ (since then automatically $T \to S$
+will factor through $S'$). This functor is clearly cocontinuous
+hence we obtain a morphism of topoi as indicated.
+Let $h_{X'} \in \Sh(X_{Zar})$ be the representable sheaf associated
+to $X'$ viewed as an object of $X_{Zar}$. It is clear that
+$\Sh(X'_{Zar})$ is the localization $\Sh(X_{Zar})/h_{X'}$.
+On the other hand, the category $\text{Cris}(X/S)/u_{X/S}^{-1}h_{X'}$
+(see Sites, Lemma \ref{sites-lemma-localize-topos-site})
+is canonically identified with $\text{Cris}(X'/S')$ by the functor above.
+This finishes the proof.
+\end{proof}
+
+\begin{remark}[Structure morphism]
+\label{remark-structure-morphism}
+In Situation \ref{situation-global}.
+Consider the closed subscheme $S_0 = V(\mathcal{I}) \subset S$.
+If we assume that $p$ is locally nilpotent on $S_0$ (which is always
+the case in practice) then we obtain a situation as in
+Definition \ref{definition-divided-power-thickening-X} with $S_0$ instead
+of $X$. Hence we get a site $\text{Cris}(S_0/S)$. If $f : X \to S_0$
+is the structure morphism of $X$ over $S$, then we get a
+commutative diagram of ringed topoi
+$$
+\xymatrix{
+(X/S)_{\text{cris}}
+\ar[r]_{f_{\text{cris}}} \ar[d]_{u_{X/S}} &
+(S_0/S)_{\text{cris}} \ar[d]^{u_{S_0/S}} \\
+\Sh(X_{Zar}) \ar[r]^{f_{small}} & \Sh(S_{0, Zar}) \ar[rd] \\
+& & \Sh(S_{Zar})
+}
+$$
+see Remark \ref{remark-functoriality-cris}. We think of the composition
+$(X/S)_{\text{cris}} \to \Sh(S_{Zar})$ as the structure morphism of the
+crystalline site. Even if $p$ is not locally nilpotent on $S_0$
+the structure morphism
+$$
+\tau_{X/S} : (X/S)_{\text{cris}} \longrightarrow \Sh(S_{Zar})
+$$
+is defined as we can take the lower route through the diagram above.
+\end{remark}
+
+\begin{remark}[Compatibilities]
+\label{remark-compatibilities}
+The morphisms defined above satisfy numerous compatibilities. For example,
+in the situation of Remark \ref{remark-functoriality-cris}
+we obtain a commutative diagram of ringed topoi
+$$
+\xymatrix{
+(X/S)_{\text{cris}} \ar[d] \ar[r] & (Y/S')_{\text{cris}} \ar[d] \\
+\Sh(S_{Zar}) \ar[r] & \Sh(S'_{Zar})
+}
+$$
+where the vertical arrows are the structure morphisms.
+\end{remark}
+
+
+
+\section{Sheaves on the crystalline site}
+\label{section-sheaves}
+
+\noindent
+Notation and assumptions as in Situation \ref{situation-global}.
+In order to discuss the small and big crystalline sites of $X/S$
+simultaneously in this section we let
+$$
+\mathcal{C} = \text{CRIS}(X/S)
+\quad\text{or}\quad
+\mathcal{C} = \text{Cris}(X/S).
+$$
+A sheaf $\mathcal{F}$ on $\mathcal{C}$ gives rise to
+a {\it restriction} $\mathcal{F}_T$ for every object $(U, T, \delta)$
+of $\mathcal{C}$. Namely, $\mathcal{F}_T$ is the Zariski sheaf on
+the scheme $T$ defined by the rule
+$$
+\mathcal{F}_T(W) = \mathcal{F}(U \cap W, W, \delta|_W)
+$$
+for every open $W \subset T$. Moreover, if $f : T \to T'$ is a morphism
+between objects
+$(U, T, \delta)$ and $(U', T', \delta')$ of $\mathcal{C}$, then there
+is a canonical {\it comparison} map
+\begin{equation}
+\label{equation-comparison}
+c_f : f^{-1}\mathcal{F}_{T'} \longrightarrow \mathcal{F}_T.
+\end{equation}
+Namely, if $W' \subset T'$ is open then $f$ induces a morphism
+$$
+f|_{f^{-1}W'} :
+(U \cap f^{-1}(W'), f^{-1}W', \delta|_{f^{-1}W'})
+\longrightarrow
+(U' \cap W', W', \delta|_{W'})
+$$
+of $\mathcal{C}$, hence we can use the restriction mapping
+$(f|_{f^{-1}W'})^*$ of $\mathcal{F}$ to define a map
+$\mathcal{F}_{T'}(W') \to \mathcal{F}_T(f^{-1}W')$.
+These maps are clearly compatible with further restriction, hence
+define an $f$-map from $\mathcal{F}_{T'}$ to $\mathcal{F}_T$ (see
+Sheaves, Section \ref{sheaves-section-presheaves-functorial}
+and especially
+Sheaves, Definition \ref{sheaves-definition-f-map}).
+Thus we obtain a map $c_f$ as in (\ref{equation-comparison}).
+Note that if $f$ is an open immersion, then $c_f$ is an
+isomorphism, because in that case $\mathcal{F}_T$ is just
+the restriction of $\mathcal{F}_{T'}$ to $T$.
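+
+\medskip\noindent
+The comparison maps are compatible with composition: if
+$g : T' \to T''$ underlies a morphism from $(U', T', \delta')$ to
+$(U'', T'', \delta'')$ of $\mathcal{C}$, then unwinding the definitions
+one finds
+$$
+c_{g \circ f} = c_f \circ f^{-1}(c_g) :
+(g \circ f)^{-1}\mathcal{F}_{T''} \longrightarrow \mathcal{F}_T
+$$
+This is the cocycle condition alluded to below.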
+
+\medskip\noindent
+Conversely, given Zariski sheaves $\mathcal{F}_T$ for every object
+$(U, T, \delta)$ of $\mathcal{C}$ and comparison maps
+$c_f$ as above which (a) are isomorphisms for open immersions, and (b)
+satisfy a suitable cocycle condition, we obtain a sheaf on
+$\mathcal{C}$. This is proved exactly as in
+Topologies, Lemma \ref{topologies-lemma-characterize-sheaf-big}.
+
+\medskip\noindent
+The {\it structure sheaf} on $\mathcal{C}$ is the sheaf
+$\mathcal{O}_{X/S}$ defined by the rule
+$$
+\mathcal{O}_{X/S} :
+(U, T, \delta)
+\longmapsto
+\Gamma(T, \mathcal{O}_T)
+$$
+This is a sheaf by the definition of coverings in $\mathcal{C}$.
+Suppose that $\mathcal{F}$ is a sheaf of $\mathcal{O}_{X/S}$-modules.
+In this case the comparison mappings (\ref{equation-comparison})
+define a comparison map
+\begin{equation}
+\label{equation-comparison-modules}
+c_f : f^*\mathcal{F}_{T'} \longrightarrow \mathcal{F}_T
+\end{equation}
+of $\mathcal{O}_T$-modules.
+
+\medskip\noindent
+Another type of example comes by starting with a sheaf
+$\mathcal{G}$ on $(\Sch/X)_{Zar}$ or $X_{Zar}$ (depending on whether
+$\mathcal{C} = \text{CRIS}(X/S)$ or $\mathcal{C} = \text{Cris}(X/S)$).
+Then $\underline{\mathcal{G}}$ defined by the rule
+$$
+\underline{\mathcal{G}} :
+(U, T, \delta)
+\longmapsto
+\mathcal{G}(U)
+$$
+is a sheaf on $\mathcal{C}$. In particular, if we take
+$\mathcal{G} = \mathbf{G}_a = \mathcal{O}_X$, then we obtain
+$$
+\underline{\mathbf{G}_a} :
+(U, T, \delta)
+\longmapsto
+\Gamma(U, \mathcal{O}_U)
+$$
+There is a surjective map of sheaves
+$\mathcal{O}_{X/S} \to \underline{\mathbf{G}_a}$ defined by the
+canonical maps $\Gamma(T, \mathcal{O}_T) \to \Gamma(U, \mathcal{O}_U)$
+for objects $(U, T, \delta)$. The kernel of this map is denoted
+$\mathcal{J}_{X/S}$, hence a short exact sequence
+$$
+0 \to
+\mathcal{J}_{X/S} \to
+\mathcal{O}_{X/S} \to
+\underline{\mathbf{G}_a} \to 0
+$$
+Note that $\mathcal{J}_{X/S}$ comes equipped with a canonical
+divided power structure. After all, for each object $(U, T, \delta)$
+the third component $\delta$ {\it is} a divided power structure on the
+kernel of $\mathcal{O}_T \to \mathcal{O}_U$. Hence the (big)
+crystalline topos is a divided power topos.
+
+
+
+
+
+\section{Crystals in modules}
+\label{section-crystals}
+
+\noindent
+It turns out that a crystal is a very general gadget. However, the
+definition may be a bit hard to parse, so we first give the definition
+in the case of modules on the crystalline sites.
+
+\begin{definition}
+\label{definition-modules}
+In Situation \ref{situation-global}.
+Let $\mathcal{C} = \text{CRIS}(X/S)$ or $\mathcal{C} = \text{Cris}(X/S)$.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_{X/S}$-modules on $\mathcal{C}$.
+\begin{enumerate}
+\item We say $\mathcal{F}$ is {\it locally quasi-coherent} if for every
+object $(U, T, \delta)$ of $\mathcal{C}$ the restriction $\mathcal{F}_T$
+is a quasi-coherent $\mathcal{O}_T$-module.
+\item We say $\mathcal{F}$ is {\it quasi-coherent} if it is quasi-coherent
+in the sense of
+Modules on Sites, Definition \ref{sites-modules-definition-site-local}.
+\item We say $\mathcal{F}$ is a {\it crystal in $\mathcal{O}_{X/S}$-modules}
+if all the comparison maps (\ref{equation-comparison-modules}) are
+isomorphisms.
+\end{enumerate}
+\end{definition}
+
+\noindent
+It turns out that we can relate these notions as follows.
+
+\begin{lemma}
+\label{lemma-crystal-quasi-coherent-modules}
+With notation $X/S, \mathcal{I}, \gamma, \mathcal{C}, \mathcal{F}$
+as in Definition \ref{definition-modules}. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is quasi-coherent, and
+\item $\mathcal{F}$ is locally quasi-coherent and a crystal in
+$\mathcal{O}_{X/S}$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Let $f : (U', T', \delta') \to (U, T, \delta)$ be a morphism of
+$\mathcal{C}$. We have to prove (a) $\mathcal{F}_T$ is a quasi-coherent
+$\mathcal{O}_T$-module and (b) $c_f : f^*\mathcal{F}_T \to \mathcal{F}_{T'}$
+is an isomorphism. The assumption means that we can find a covering
+$\{(U_i, T_i, \delta_i) \to (U, T, \delta)\}$ and for each $i$
+the restriction of $\mathcal{F}$ to $\mathcal{C}/(U_i, T_i, \delta_i)$
+has a global presentation. Since it suffices to prove (a) and (b)
+Zariski locally, we may replace $f : (U', T', \delta') \to (U, T, \delta)$
+by the base change to $(U_i, T_i, \delta_i)$ and assume that $\mathcal{F}$
+restricted to $\mathcal{C}/(U, T, \delta)$ has a global
+presentation
+$$
+\bigoplus\nolimits_{j \in J}
+\mathcal{O}_{X/S}|_{\mathcal{C}/(U, T, \delta)} \longrightarrow
+\bigoplus\nolimits_{i \in I}
+\mathcal{O}_{X/S}|_{\mathcal{C}/(U, T, \delta)} \longrightarrow
+\mathcal{F}|_{\mathcal{C}/(U, T, \delta)}
+\longrightarrow 0
+$$
+It is clear that this gives a presentation
+$$
+\bigoplus\nolimits_{j \in J} \mathcal{O}_T \longrightarrow
+\bigoplus\nolimits_{i \in I} \mathcal{O}_T \longrightarrow
+\mathcal{F}_T
+\longrightarrow 0
+$$
+and hence (a) holds. Moreover, the presentation restricts to $T'$
+to give a similar presentation of $\mathcal{F}_{T'}$, whence (b) holds.
+
+\medskip\noindent
+Assume (2). Let $(U, T, \delta)$ be an object of $\mathcal{C}$.
+We have to find a covering of $(U, T, \delta)$ such that $\mathcal{F}$ has a
+global presentation when we restrict to the localization of $\mathcal{C}$
+at the members of the covering. Thus we may assume that $T$ is affine.
+In this case we can choose a presentation
+$$
+\bigoplus\nolimits_{j \in J} \mathcal{O}_T \longrightarrow
+\bigoplus\nolimits_{i \in I} \mathcal{O}_T \longrightarrow
+\mathcal{F}_T
+\longrightarrow 0
+$$
+as $\mathcal{F}_T$ is assumed to be a quasi-coherent $\mathcal{O}_T$-module.
+Then by the crystal property of $\mathcal{F}$ we see that this pulls back
+to a presentation of $\mathcal{F}_{T'}$ for any morphism
+$f : (U', T', \delta') \to (U, T, \delta)$ of $\mathcal{C}$.
+Thus we obtain the desired presentation of
+$\mathcal{F}|_{\mathcal{C}/(U, T, \delta)}$.
+\end{proof}
+
+\begin{definition}
+\label{definition-crystal-quasi-coherent-modules}
+If $\mathcal{F}$ satisfies the equivalent conditions of
+Lemma \ref{lemma-crystal-quasi-coherent-modules}, then
+we say that $\mathcal{F}$ is a
+{\it crystal in quasi-coherent modules}.
+We say that $\mathcal{F}$ is a {\it crystal in finite locally free modules}
+if, in addition, $\mathcal{F}$ is finite locally free.
+\end{definition}
+
+\noindent
+Of course, as Lemma \ref{lemma-crystal-quasi-coherent-modules} shows, this
+terminology is somewhat redundant, since a quasi-coherent module is
+automatically a crystal. But it is standard in the literature.
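+
+\medskip\noindent
+The basic example is the structure sheaf itself: $\mathcal{O}_{X/S}$
+is a crystal in quasi-coherent modules, as each restriction
+$(\mathcal{O}_{X/S})_T = \mathcal{O}_T$ is quasi-coherent and each
+comparison map
+$$
+c_f : f^*(\mathcal{O}_{X/S})_{T'} = f^*\mathcal{O}_{T'}
+\longrightarrow \mathcal{O}_T = (\mathcal{O}_{X/S})_T
+$$
+is the identity. On the other hand, $\underline{\mathbf{G}_a}$ is
+locally quasi-coherent but its comparison maps need not be
+isomorphisms.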
+
+\begin{remark}
+\label{remark-crystal}
+To formulate the general notion of a crystal we use the language
+of stacks and strongly cartesian morphisms, see
+Stacks, Definition \ref{stacks-definition-stack} and
+Categories, Definition \ref{categories-definition-cartesian-over-C}.
+In Situation \ref{situation-global} let
+$p : \mathcal{C} \to \text{Cris}(X/S)$ be a stack.
+A {\it crystal in objects of $\mathcal{C}$ on $X$ relative to $S$}
+is a {\it cartesian section} $\sigma : \text{Cris}(X/S) \to \mathcal{C}$,
+i.e., a functor $\sigma$ such that $p \circ \sigma = \text{id}$
+and such that $\sigma(f)$ is strongly cartesian for all
+morphisms $f$ of $\text{Cris}(X/S)$. Similarly for the big crystalline site.
+\end{remark}
+
+
+
+
+
+\section{Sheaf of differentials}
+\label{section-differentials-sheaf}
+
+\noindent
+In this section we will stick with the (small) crystalline site
+as it seems more natural. We globalize
+Definition \ref{definition-derivation} as follows.
+
+\begin{definition}
+\label{definition-global-derivation}
+In Situation \ref{situation-global} let
+$\mathcal{F}$ be a sheaf of $\mathcal{O}_{X/S}$-modules on
+$\text{Cris}(X/S)$. An
+{\it $S$-derivation $D : \mathcal{O}_{X/S} \to \mathcal{F}$}
+is a map of sheaves such that for every object $(U, T, \delta)$ of
+$\text{Cris}(X/S)$ the map
+$$
+D : \Gamma(T, \mathcal{O}_T) \longrightarrow \Gamma(T, \mathcal{F}_T)
+$$
+is a divided power $\Gamma(V, \mathcal{O}_V)$-derivation where $V \subset S$
+is any open such that $T \to S$ factors through $V$.
+\end{definition}
+
+\noindent
+This means that $D$ is additive, satisfies the Leibniz rule, annihilates
+functions coming from $S$, and satisfies $D(f^{[n]}) = f^{[n - 1]}D(f)$
+for a local section $f$ of the divided power ideal $\mathcal{J}_{X/S}$.
+This is a special case of a very general notion which we now describe.
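+
+\medskip\noindent
+As a consistency check, the rule $D(f^{[n]}) = f^{[n - 1]}D(f)$ is
+compatible with the Leibniz rule: since $n!\, f^{[n]} = f^n$ we get,
+for example with $n = 2$,
+$$
+D(f^2) = 2fD(f)
+\quad\text{and}\quad
+D(2f^{[2]}) = 2f^{[1]}D(f) = 2fD(f)
+$$
+so the two ways of differentiating $f^2 = 2f^{[2]}$ agree.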
+
+\medskip\noindent
+Please compare the following discussion with
+Modules on Sites, Section \ref{sites-modules-section-differentials}. Let
+$\mathcal{C}$ be a site, let $\mathcal{A} \to \mathcal{B}$ be a
+map of sheaves of rings on $\mathcal{C}$, let $\mathcal{J} \subset \mathcal{B}$
+be a sheaf of ideals, let $\delta$ be a divided power structure on
+$\mathcal{J}$, and let $\mathcal{F}$ be a sheaf of $\mathcal{B}$-modules.
+Then there is a notion of a {\it divided power $\mathcal{A}$-derivation}
+$D : \mathcal{B} \to \mathcal{F}$. This means that $D$ is $\mathcal{A}$-linear,
+satisfies the Leibniz rule, and satisfies
+$D(\delta_n(x)) = \delta_{n - 1}(x)D(x)$ for local sections $x$ of
+$\mathcal{J}$. In this situation there exists a
+{\it universal divided power $\mathcal{A}$-derivation}
+$$
+\text{d}_{\mathcal{B}/\mathcal{A}, \delta} :
+\mathcal{B}
+\longrightarrow
+\Omega_{\mathcal{B}/\mathcal{A}, \delta}
+$$
+Moreover, $\text{d}_{\mathcal{B}/\mathcal{A}, \delta}$ is the composition
+$$
+\mathcal{B}
+\longrightarrow
+\Omega_{\mathcal{B}/\mathcal{A}}
+\longrightarrow
+\Omega_{\mathcal{B}/\mathcal{A}, \delta}
+$$
+where the first map is the universal derivation constructed in the proof
+of Modules on Sites, Lemma \ref{sites-modules-lemma-universal-module}
+and the second arrow is the quotient by the submodule generated by
+the local sections
+$\text{d}_{\mathcal{B}/\mathcal{A}}(\delta_n(x)) -
+\delta_{n - 1}(x)\text{d}_{\mathcal{B}/\mathcal{A}}(x)$.
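+
+\medskip\noindent
+For example, if $\mathcal{B}$ is a sheaf of $\mathbf{Q}$-algebras, then
+necessarily $\delta_n(x) = x^n/n!$ and the extra relations hold
+automatically:
+$$
+\text{d}_{\mathcal{B}/\mathcal{A}}(x^n/n!) =
+\frac{x^{n - 1}}{(n - 1)!}\,\text{d}_{\mathcal{B}/\mathcal{A}}(x) =
+\delta_{n - 1}(x)\,\text{d}_{\mathcal{B}/\mathcal{A}}(x)
+$$
+so that in this case
+$\Omega_{\mathcal{B}/\mathcal{A}, \delta} =
+\Omega_{\mathcal{B}/\mathcal{A}}$. In characteristic $p$ the quotient
+is genuinely smaller in general.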
+
+\medskip\noindent
+We translate this into a relative notion as follows. Suppose
+$(f, f^\sharp) : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{C}'), \mathcal{O}')$ is a morphism of ringed topoi,
+$\mathcal{J} \subset \mathcal{O}$ a sheaf of ideals, $\delta$ a
+divided power structure on $\mathcal{J}$, and $\mathcal{F}$ a sheaf
+of $\mathcal{O}$-modules. In this situation we say
+$D : \mathcal{O} \to \mathcal{F}$ is a divided power $\mathcal{O}'$-derivation
+if $D$ is a divided power $f^{-1}\mathcal{O}'$-derivation as defined above.
+Moreover, we write
+$$
+\Omega_{\mathcal{O}/\mathcal{O}', \delta} =
+\Omega_{\mathcal{O}/f^{-1}\mathcal{O}', \delta}
+$$
+which is the receptacle of the universal divided power
+$\mathcal{O}'$-derivation.
+
+\medskip\noindent
+Applying this to the structure morphism
+$$
+(X/S)_{\text{cris}} \longrightarrow \Sh(S_{Zar})
+$$
+(see Remark \ref{remark-structure-morphism}) we recover the notion of
+Definition \ref{definition-global-derivation} above.
+In particular, there is a universal divided power derivation
+$$
+d_{X/S} : \mathcal{O}_{X/S} \to \Omega_{X/S}
+$$
+Note that we omit from the notation the decoration indicating the
+module of differentials is compatible with divided powers (it seems
+unlikely anybody would ever consider the usual module of differentials
+of the structure sheaf on the crystalline site).
+
+\begin{lemma}
+\label{lemma-module-differentials-divided-power-scheme}
+Let $(T, \mathcal{J}, \delta)$ be a divided power scheme.
+Let $T \to S$ be a morphism of schemes.
+The quotient $\Omega_{T/S} \to \Omega_{T/S, \delta}$
+described above is a quasi-coherent $\mathcal{O}_T$-module.
+For $W \subset T$ affine open mapping into $V \subset S$ affine open
+we have
+$$
+\Gamma(W, \Omega_{T/S, \delta}) =
+\Omega_{\Gamma(W, \mathcal{O}_W)/\Gamma(V, \mathcal{O}_V), \delta}
+$$
+where the right hand side is
+as constructed in Section \ref{section-differentials}.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-module-of-differentials}
+In Situation \ref{situation-global}.
+For $(U, T, \delta)$ in $\text{Cris}(X/S)$ the restriction
+$(\Omega_{X/S})_T$ to $T$ is $\Omega_{T/S, \delta}$ and the restriction
+$\text{d}_{X/S}|_T$ is equal to $\text{d}_{T/S, \delta}$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-module-of-differentials-on-affine}
+In Situation \ref{situation-global}.
+For any affine object $(U, T, \delta)$ of $\text{Cris}(X/S)$
+mapping into an affine open $V \subset S$ we have
+$$
+\Gamma((U, T, \delta), \Omega_{X/S}) =
+\Omega_{\Gamma(T, \mathcal{O}_T)/\Gamma(V, \mathcal{O}_V), \delta}
+$$
+where the right hand side is
+as constructed in Section \ref{section-differentials}.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-module-differentials-divided-power-scheme} and
+\ref{lemma-module-of-differentials}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-describe-omega-small}
+In Situation \ref{situation-global}.
+Let $(U, T, \delta)$ be an object of $\text{Cris}(X/S)$.
+Let
+$$
+(U(1), T(1), \delta(1)) = (U, T, \delta) \times (U, T, \delta)
+$$
+in $\text{Cris}(X/S)$. Let $\mathcal{K} \subset \mathcal{O}_{T(1)}$
+be the quasi-coherent sheaf of ideals corresponding to the closed
+immersion $\Delta : T \to T(1)$. Then
+$\mathcal{K} \subset \mathcal{J}_{T(1)}$ is preserved by the
+divided power structure on $\mathcal{J}_{T(1)}$ and we have
+$$
+(\Omega_{X/S})_T = \mathcal{K}/\mathcal{K}^{[2]}
+$$
+\end{lemma}
+
+\begin{proof}
+Note that $U = U(1)$ as $U \to X$ is an open immersion and as
+(\ref{equation-forget-small}) commutes with products. Hence we see that
+$\mathcal{K} \subset \mathcal{J}_{T(1)}$. Given this fact the lemma follows
+by working affine locally on $T$ and using
+Lemmas \ref{lemma-module-of-differentials-on-affine} and
+\ref{lemma-diagonal-and-differentials-affine-site}.
+\end{proof}
+
+\noindent
+It turns out that $\Omega_{X/S}$ is not a crystal in quasi-coherent
+$\mathcal{O}_{X/S}$-modules. But it does satisfy two closely
+related properties (compare with
+Lemma \ref{lemma-crystal-quasi-coherent-modules}).
+
+\begin{lemma}
+\label{lemma-omega-locally-quasi-coherent}
+In Situation \ref{situation-global}.
+The sheaf of differentials $\Omega_{X/S}$ has the following two
+properties:
+\begin{enumerate}
+\item $\Omega_{X/S}$ is locally quasi-coherent, and
+\item for any morphism $(U, T, \delta) \to (U', T', \delta')$
+of $\text{Cris}(X/S)$ where $f : T \to T'$ is a closed immersion
+the map $c_f : f^*(\Omega_{X/S})_{T'} \to (\Omega_{X/S})_T$ is surjective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from a combination of
+Lemmas \ref{lemma-module-differentials-divided-power-scheme} and
+\ref{lemma-module-of-differentials}.
+Part (2) follows from the fact that
+$(\Omega_{X/S})_T = \Omega_{T/S, \delta}$
+is a quotient of $\Omega_{T/S}$ and that $f^*\Omega_{T'/S} \to \Omega_{T/S}$
+is surjective.
+\end{proof}
+
+
+
+
+
+
+
+\section{Two universal thickenings}
+\label{section-universal-thickenings}
+
+\noindent
+The constructions in this section will help us define a connection on
+a crystal in modules on the crystalline site. In some sense the constructions
+here are the ``sheafified, universal'' versions of the constructions in
+Section \ref{section-explicit-thickenings}.
+
+\begin{remark}
+\label{remark-first-order-thickening}
+In Situation \ref{situation-global}.
+Let $(U, T, \delta)$ be an object of $\text{Cris}(X/S)$.
+Write $\Omega_{T/S, \delta} = (\Omega_{X/S})_T$, see
+Lemma \ref{lemma-module-of-differentials}.
+We explicitly describe a first order thickening $T'$ of
+$T$. Namely, set
+$$
+\mathcal{O}_{T'} = \mathcal{O}_T \oplus \Omega_{T/S, \delta}
+$$
+with algebra structure such that $\Omega_{T/S, \delta}$ is an
+ideal of square zero. Let $\mathcal{J} \subset \mathcal{O}_T$
+be the ideal sheaf of the closed immersion $U \to T$. Set
+$\mathcal{J}' = \mathcal{J} \oplus \Omega_{T/S, \delta}$.
+Define a divided power structure on $\mathcal{J}'$ by setting
+$$
+\delta_n'(f, \omega) = (\delta_n(f), \delta_{n - 1}(f)\omega),
+$$
+see Lemma \ref{lemma-divided-power-first-order-thickening}.
+There are two ring maps
+$$
+p_0, p_1 : \mathcal{O}_T \to \mathcal{O}_{T'}
+$$
+The first is given by $f \mapsto (f, 0)$ and the second by
+$f \mapsto (f, \text{d}_{T/S, \delta}f)$. Note that both are compatible
+with the divided power structures on $\mathcal{J}$ and $\mathcal{J}'$
+and so is the quotient map $\mathcal{O}_{T'} \to \mathcal{O}_T$.
+Thus we get an object $(U, T', \delta')$ of $\text{Cris}(X/S)$
+and a commutative diagram
+$$
+\xymatrix{
+& T \ar[ld]_{\text{id}} \ar[d]^i \ar[rd]^{\text{id}} \\
+T & T' \ar[l]_{p_0} \ar[r]^{p_1} & T
+}
+$$
+of $\text{Cris}(X/S)$ such that $i$ is a first order thickening whose ideal
+sheaf is identified with $\Omega_{T/S, \delta}$ and such that
+$p_1^* - p_0^* : \mathcal{O}_T \to \mathcal{O}_{T'}$
+is identified with the universal derivation $\text{d}_{T/S, \delta}$
+composed with the inclusion $\Omega_{T/S, \delta} \to \mathcal{O}_{T'}$.
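+For example, that $p_1$ is a ring map comes down to the Leibniz
+rule for $\text{d} = \text{d}_{T/S, \delta}$:
+$$
+p_1(f)p_1(g) = (f, \text{d}f)(g, \text{d}g) =
+(fg, f\text{d}g + g\text{d}f) = (fg, \text{d}(fg)) = p_1(fg)
+$$
+since $\Omega_{T/S, \delta}$ is an ideal of square zero in
+$\mathcal{O}_{T'}$.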
+\end{remark}
+
+\begin{remark}
+\label{remark-second-order-thickening}
+In Situation \ref{situation-global}.
+Let $(U, T, \delta)$ be an object of $\text{Cris}(X/S)$.
+Write $\Omega_{T/S, \delta} = (\Omega_{X/S})_T$, see
+Lemma \ref{lemma-module-of-differentials}.
+We also write $\Omega^2_{T/S, \delta}$ for its second exterior
+power. We explicitly describe a second order thickening $T''$ of $T$.
+Namely, set
+$$
+\mathcal{O}_{T''} =
+\mathcal{O}_T \oplus \Omega_{T/S, \delta} \oplus \Omega_{T/S, \delta}
+\oplus \Omega^2_{T/S, \delta}
+$$
+with algebra structure defined in the following way
+$$
+(f, \omega_1, \omega_2, \eta) \cdot
+(f', \omega_1', \omega_2', \eta') =
+(ff', f\omega_1' + f'\omega_1, f\omega_2' + f'\omega_2,
+f\eta' + f'\eta + \omega_1 \wedge \omega_2' + \omega_1' \wedge \omega_2).
+$$
+Let $\mathcal{J} \subset \mathcal{O}_T$
+be the ideal sheaf of the closed immersion $U \to T$. Let
+$\mathcal{J}''$ be the inverse image of $\mathcal{J}$ under the
+projection $\mathcal{O}_{T''} \to \mathcal{O}_T$.
+Define a divided power structure on $\mathcal{J}''$ by setting
+$$
+\delta_n''(f, \omega_1, \omega_2, \eta) =
+(\delta_n(f), \delta_{n - 1}(f)\omega_1, \delta_{n - 1}(f)\omega_2,
+\delta_{n - 1}(f)\eta + \delta_{n - 2}(f)\omega_1 \wedge \omega_2)
+$$
+see Lemma \ref{lemma-divided-power-second-order-thickening}.
+There are three ring maps
+$q_0, q_1, q_2 : \mathcal{O}_T \to \mathcal{O}_{T''}$
+given by
+\begin{align*}
+q_0(f) & = (f, 0, 0, 0), \\
+q_1(f) & = (f, \text{d}f, 0, 0), \\
+q_2(f) & = (f, \text{d}f, \text{d}f, 0)
+\end{align*}
+where $\text{d} = \text{d}_{T/S, \delta}$.
+Note that all three are compatible with the divided power structures
+on $\mathcal{J}$ and $\mathcal{J}''$. There are three ring maps
+$q_{01}, q_{12}, q_{02} : \mathcal{O}_{T'} \to \mathcal{O}_{T''}$
+where $\mathcal{O}_{T'}$ is as in Remark \ref{remark-first-order-thickening}.
+Namely, set
+\begin{align*}
+q_{01}(f, \omega) & = (f, \omega, 0, 0), \\
+q_{12}(f, \omega) & =
+(f, \text{d}f, \omega, \text{d}\omega), \\
+q_{02}(f, \omega) & = (f, \omega, \omega, 0)
+\end{align*}
+These are also compatible with the given divided power
+structures. Let us verify this for $q_{12}$: note
+that $q_{12}$ is a ring homomorphism as
+\begin{align*}
+q_{12}(f, \omega)q_{12}(g, \eta) & =
+(f, \text{d}f, \omega, \text{d}\omega)(g, \text{d}g, \eta, \text{d}\eta) \\
+& =
+(fg, f\text{d}g + g \text{d}f, f\eta + g\omega,
+f\text{d}\eta + g\text{d}\omega + \text{d}f \wedge \eta +
+\text{d}g \wedge \omega) \\
+& = q_{12}(fg, f\eta + g\omega) = q_{12}((f, \omega)(g, \eta))
+\end{align*}
+Note that $q_{12}$ is compatible with divided powers because
+\begin{align*}
+\delta_n''(q_{12}(f, \omega)) & =
+\delta_n''((f, \text{d}f, \omega, \text{d}\omega)) \\
+& =
+(\delta_n(f), \delta_{n - 1}(f)\text{d}f, \delta_{n - 1}(f)\omega,
+\delta_{n - 1}(f)\text{d}\omega + \delta_{n - 2}(f)\text{d}(f) \wedge \omega)
+\\
+& = q_{12}((\delta_n(f), \delta_{n - 1}(f)\omega)) =
+q_{12}(\delta'_n(f, \omega))
+\end{align*}
+The verifications for $q_{01}$ and $q_{02}$ are easier.
+Note that $q_0 = q_{01} \circ p_0$, $q_1 = q_{01} \circ p_1$,
+$q_1 = q_{12} \circ p_0$, $q_2 = q_{12} \circ p_1$,
+$q_0 = q_{02} \circ p_0$, and $q_2 = q_{02} \circ p_1$.
+Thus $(U, T'', \delta'')$ is an object of $\text{Cris}(X/S)$
+and we get morphisms
+$$
+\xymatrix{
+T''
+\ar@<2ex>[r]
+\ar@<0ex>[r]
+\ar@<-2ex>[r]
+&
+T'
+\ar@<1ex>[r]
+\ar@<-1ex>[r]
+&
+T
+}
+$$
+of $\text{Cris}(X/S)$ satisfying the relations described above.
+In applications we will use $q_i : T'' \to T$ and
+$q_{ij} : T'' \to T'$ to denote the morphisms associated to the
+ring maps described above.
+\end{remark}
+
+
+
+
+
+
+\section{The de Rham complex}
+\label{section-de-Rham}
+
+\noindent
+In Situation \ref{situation-global}.
+Working on the (small) crystalline site, we define
+$\Omega^i_{X/S} = \wedge^i_{\mathcal{O}_{X/S}} \Omega_{X/S}$
+for $i \geq 0$. The universal $S$-derivation $\text{d}_{X/S}$ gives
+rise to the {\it de Rham complex}
+$$
+\mathcal{O}_{X/S} \to \Omega^1_{X/S} \to \Omega^2_{X/S} \to \ldots
+$$
+on $\text{Cris}(X/S)$, see
+Lemma \ref{lemma-module-of-differentials-on-affine} and
+Remark \ref{remark-divided-powers-de-rham-complex}.
+
+
+\section{Connections}
+\label{section-connections}
+
+\noindent
+In Situation \ref{situation-global}.
+Given an $\mathcal{O}_{X/S}$-module $\mathcal{F}$ on $\text{Cris}(X/S)$
+a {\it connection} is a map of abelian sheaves
+$$
+\nabla :
+\mathcal{F}
+\longrightarrow
+\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega_{X/S}
+$$
+such that $\nabla(f s) = f\nabla(s) + s \otimes \text{d}f$
+for local sections $s, f$ of $\mathcal{F}$ and $\mathcal{O}_{X/S}$.
+Given a connection there are canonical maps
+$
+\nabla :
+\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^i_{X/S}
+\longrightarrow
+\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^{i + 1}_{X/S}
+$
+defined by the rule $\nabla(s \otimes \omega) =
+\nabla(s) \wedge \omega + s \otimes \text{d}\omega$
+as in Remark \ref{remark-connection}. We say the connection is
+{\it integrable} if $\nabla \circ \nabla = 0$. If $\nabla$ is integrable
+we obtain the {\it de Rham complex}
+$$
+\mathcal{F} \to
+\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^1_{X/S} \to
+\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^2_{X/S} \to \ldots
+$$
+on $\text{Cris}(X/S)$. It turns out that any crystal in
+$\mathcal{O}_{X/S}$-modules comes equipped with a canonical
+integrable connection.
+
+\begin{lemma}
+\label{lemma-automatic-connection}
+In Situation \ref{situation-global}.
+Let $\mathcal{F}$ be a crystal in $\mathcal{O}_{X/S}$-modules
+on $\text{Cris}(X/S)$. Then $\mathcal{F}$ comes equipped with a
+canonical integrable connection.
+\end{lemma}
+
+\begin{proof}
+Say $(U, T, \delta)$ is an object of $\text{Cris}(X/S)$.
+Let $(U, T', \delta')$ be the infinitesimal thickening of $T$
+by $(\Omega_{X/S})_T = \Omega_{T/S, \delta}$
+constructed in Remark \ref{remark-first-order-thickening}.
+It comes with projections $p_0, p_1 : T' \to T$
+and a diagonal $i : T \to T'$. By assumption we get
+isomorphisms
+$$
+p_0^*\mathcal{F}_T \xrightarrow{c_0}
+\mathcal{F}_{T'} \xleftarrow{c_1}
+p_1^*\mathcal{F}_T
+$$
+of $\mathcal{O}_{T'}$-modules. Pulling $c = c_1^{-1} \circ c_0$
+back to $T$ by $i$ we obtain the identity map
+of $\mathcal{F}_T$. Hence if $s \in \Gamma(T, \mathcal{F}_T)$
+then $\nabla(s) = p_1^*s - c(p_0^*s)$ is a section of
+$p_1^*\mathcal{F}_T$ which vanishes on pulling back by $i$. Hence
+$\nabla(s)$ is a section of
+$$
+\mathcal{F}_T
+\otimes_{\mathcal{O}_T}
+\Omega_{T/S, \delta}
+$$
+because this is the kernel of $p_1^*\mathcal{F}_T \to \mathcal{F}_T$
+as $\mathcal{O}_{T'} = \mathcal{O}_T \oplus \Omega_{T/S, \delta}$
+by construction. It is easily verified that $\nabla(fs) =
+f\nabla(s) + s \otimes \text{d}(f)$ using the description of
+$\text{d}$ in Remark \ref{remark-first-order-thickening}.
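+In fact, since $c$ is $\mathcal{O}_{T'}$-linear, since
+$p_1^*f = p_0^*f + \text{d}f$ in $\mathcal{O}_{T'}$, and since the
+ideal $\Omega_{T/S, \delta}$ has square zero, we find
+$$
+\nabla(fs) = p_1^*(fs) - c(p_0^*(fs)) =
+p_0^*f\left(p_1^*s - c(p_0^*s)\right) + \text{d}f \cdot p_1^*s =
+f\nabla(s) + s \otimes \text{d}f
+$$
+where the final equality holds because $\text{d}f \cdot p_1^*s$
+only depends on the image of $p_1^*s$ in $\mathcal{F}_T$, i.e., on $s$.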
+
+\medskip\noindent
+The collection of maps
+$$
+\nabla : \Gamma(T, \mathcal{F}_T) \to
+\Gamma(T, \mathcal{F}_T \otimes_{\mathcal{O}_T} \Omega_{T/S, \delta})
+$$
+so obtained is functorial in $T$ because the construction of $T'$
+is functorial in $T$. Hence we obtain a connection.
+
+\medskip\noindent
+To show that the connection is integrable we consider the
+object $(U, T'', \delta'')$ constructed in
+Remark \ref{remark-second-order-thickening}.
+Because $\mathcal{F}$ is a sheaf we see that
+$$
+\xymatrix{
+q_0^*\mathcal{F}_T \ar[rr]_{q_{01}^*c} \ar[rd]_{q_{02}^*c} & &
+q_1^*\mathcal{F}_T \ar[ld]^{q_{12}^*c} \\
+& q_2^*\mathcal{F}_T
+}
+$$
+is a commutative diagram of $\mathcal{O}_{T''}$-modules. For
+$s \in \Gamma(T, \mathcal{F}_T)$ we have
+$c(p_0^*s) = p_1^*s - \nabla(s)$. Write
+$\nabla(s) = \sum p_1^*s_i \cdot \omega_i$ where $s_i$ is a local section
+of $\mathcal{F}_T$ and $\omega_i$ is a local section of $\Omega_{T/S, \delta}$.
+We think of $\omega_i$ as a local section of the structure
+sheaf of $\mathcal{O}_{T'}$ and hence we write product instead of tensor
+product. On the one hand
+\begin{align*}
+q_{12}^*c \circ q_{01}^*c(q_0^*s) & =
+q_{12}^*c(q_1^*s - \sum q_1^*s_i \cdot q_{01}^*\omega_i) \\
+& =
+q_2^*s - \sum q_2^*s_i \cdot q_{12}^*\omega_i -
+\sum q_2^*s_i \cdot q_{01}^*\omega_i +
+\sum q_{12}^*\nabla(s_i) \cdot q_{01}^*\omega_i
+\end{align*}
+and on the other hand
+$$
+q_{02}^*c(q_0^*s) = q_2^*s - \sum q_2^*s_i \cdot q_{02}^*\omega_i.
+$$
+From the formulae of Remark \ref{remark-second-order-thickening} we see
+that
+$q_{01}^*\omega_i + q_{12}^*\omega_i - q_{02}^*\omega_i = \text{d}\omega_i$.
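+Indeed, viewing $\omega_i$ as the local section $(0, \omega_i)$
+of $\mathcal{O}_{T'}$ we compute
+$$
+q_{01}(0, \omega_i) + q_{12}(0, \omega_i) - q_{02}(0, \omega_i) =
+(0, \omega_i, 0, 0) + (0, 0, \omega_i, \text{d}\omega_i) -
+(0, \omega_i, \omega_i, 0) = (0, 0, 0, \text{d}\omega_i)
+$$
+which is the image of $\text{d}\omega_i$ under the inclusion of
+$\Omega^2_{T/S, \delta}$ into $\mathcal{O}_{T''}$.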
+Hence the difference of the two expressions above is
+$$
+\sum q_2^*s_i \cdot \text{d}\omega_i -
+\sum q_{12}^*\nabla(s_i) \cdot q_{01}^*\omega_i
+$$
+Note that
+$q_{12}^*\omega \cdot q_{01}^*\omega' = \omega' \wedge \omega =
+- \omega \wedge \omega'$ by the definition of the multiplication on
+$\mathcal{O}_{T''}$. Thus the expression above is $\nabla^2(s)$ viewed
+as a section of the subsheaf $\mathcal{F}_T \otimes \Omega^2_{T/S, \delta}$ of
+$q_2^*\mathcal{F}$. Hence we get the integrability condition.
+\end{proof}
+
+
+
+
+\section{Cosimplicial algebra}
+\label{section-cosimplicial}
+
+\noindent
+This section should be moved somewhere else. A
+{\it cosimplicial ring} is a cosimplicial object
+in the category of rings. Given a ring $R$, a
+{\it cosimplicial $R$-algebra} is a cosimplicial object in the
+category of $R$-algebras. A {\it cosimplicial ideal} in a cosimplicial
+ring $A_*$ is given by an ideal $I_n \subset A_n$ for all $n$ such
+that $A(f)(I_n) \subset I_m$ for all $f : [n] \to [m]$ in $\Delta$.
+
+\medskip\noindent
+Let $A_*$ be a cosimplicial ring. Let $\mathcal{C}$ be the category
+of pairs $(A, M)$ where $A$ is a ring and $M$ is a module over $A$.
+A morphism $(A, M) \to (A', M')$ consists of a ring map $A \to A'$ and
+an $A$-module map $M \to M'$ where $M'$ is viewed as an $A$-module
+via $A \to A'$ and the $A'$-module structure on $M'$. Having said this
+we can define a {\it cosimplicial module $M_*$ over $A_*$} as a cosimplicial
+object $(A_*, M_*)$ of $\mathcal{C}$ whose first entry is equal to $A_*$.
+A {\it homomorphism $\varphi_* : M_* \to N_*$ of cosimplicial modules over
+$A_*$} is a morphism $(A_*, M_*) \to (A_*, N_*)$ of cosimplicial objects
+in $\mathcal{C}$ whose first component is $1_{A_*}$.
+
+\medskip\noindent
+A {\it homotopy} between homomorphisms $\varphi_*, \psi_* : M_* \to N_*$
+of cosimplicial modules over $A_*$ is a homotopy between the associated
+maps $(A_*, M_*) \to (A_*, N_*)$ whose first component is the
+trivial homotopy (dual to
+Simplicial, Example \ref{simplicial-example-trivial-homotopy}).
+We spell out what this means. Such a homotopy is a homotopy
+$$
+h : M_* \longrightarrow \Hom(\Delta[1], N_*)
+$$
+between $\varphi_*$ and $\psi_*$ as homomorphisms of cosimplicial abelian
+groups such that for each $n$ the map
+$h_n : M_n \to \prod_{\alpha \in \Delta[1]_n} N_n$ is $A_n$-linear.
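+Here $\Delta[1]_n = \Mor_\Delta([n], [1])$ is a finite set with
+$n + 2$ elements, as a nondecreasing map $\alpha : [n] \to [1]$ is
+determined by the number of elements of $[n]$ it sends to $0$:
+$$
+\Delta[1]_n = \{\alpha_0, \ldots, \alpha_{n + 1}\}, \quad
+\alpha_k^{-1}(\{0\}) = \{0, \ldots, k - 1\}.
+$$
+Thus $h_n$ is just a tuple of $n + 2$ maps $M_n \to N_n$, two of
+which recover $\varphi_n$ and $\psi_n$.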
+The following lemma is a version of
+Simplicial, Lemma \ref{simplicial-lemma-functorial-homotopy}
+for cosimplicial modules.
+
+\begin{lemma}
+\label{lemma-homotopy-tensor}
+Let $A_*$ be a cosimplicial ring. Let $\varphi_*, \psi_* : K_* \to M_*$
+be homomorphisms of cosimplicial $A_*$-modules.
+\begin{enumerate}
+\item
+\label{item-tensor}
+If $\varphi_*$ and $\psi_*$ are homotopic, then
+$$
+\varphi_* \otimes 1, \psi_* \otimes 1 :
+K_* \otimes_{A_*} L_* \longrightarrow M_* \otimes_{A_*} L_*
+$$
+are homotopic for any cosimplicial $A_*$-module $L_*$.
+\item
+\label{item-wedge}
+If $\varphi_*$ and $\psi_*$ are homotopic, then
+$$
+\wedge^i(\varphi_*), \wedge^i(\psi_*) :
+\wedge^i(K_*) \longrightarrow \wedge^i(M_*)
+$$
+are homotopic.
+\item
+\label{item-base-change}
+If $\varphi_*$ and $\psi_*$ are homotopic, and $A_* \to B_*$
+is a homomorphism of cosimplicial rings, then
+$$
+\varphi_* \otimes 1, \psi_* \otimes 1 :
+K_* \otimes_{A_*} B_* \longrightarrow M_* \otimes_{A_*} B_*
+$$
+are homotopic as homomorphisms of cosimplicial $B_*$-modules.
+\item
+\label{item-completion}
+If $I_* \subset A_*$ is a cosimplicial ideal, then the induced
+maps
+$$
+\varphi^\wedge_*, \psi^\wedge_* :
+K_*^\wedge \longrightarrow M_*^\wedge
+$$
+between completions are homotopic.
+\item Add more here as needed, for example symmetric powers.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $h : K_* \longrightarrow \Hom(\Delta[1], M_*)$ be the given
+homotopy. In degree $n$ we have
+$$
+h_n = (h_{n, \alpha}) :
+K_n \longrightarrow
+\prod\nolimits_{\alpha \in \Delta[1]_n} M_n
+$$
+see Simplicial, Section \ref{simplicial-section-homotopy-cosimplicial}.
+In order for a collection of $h_{n, \alpha}$ to form a homotopy,
+it is necessary and sufficient that for every $f : [n] \to [m]$ we
+have
+$$
+h_{m, \alpha} \circ K_*(f) = M_*(f) \circ h_{n, \alpha \circ f}
+$$
+see
+Simplicial, Equation (\ref{simplicial-equation-property-homotopy-cosimplicial}).
+We should also have $\psi_n = h_{n, 0 : [n] \to [1]}$ and
+$\varphi_n = h_{n, 1 : [n] \to [1]}$.
+
+\medskip\noindent
+In each of the cases of the lemma we can produce the corresponding maps.
+Case (\ref{item-tensor}). We can use the homotopy $h \otimes 1$ defined
+in degree $n$ by setting
+$$
+(h \otimes 1)_{n, \alpha} = h_{n, \alpha} \otimes 1_{L_n} :
+K_n \otimes_{A_n} L_n
+\longrightarrow
+M_n \otimes_{A_n} L_n.
+$$
+Case (\ref{item-wedge}). We can use the homotopy $\wedge^ih$ defined
+in degree $n$ by setting
+$$
+\wedge^i(h)_{n, \alpha} = \wedge^i(h_{n, \alpha}) :
+\wedge^i_{A_n}(K_n)
+\longrightarrow
+\wedge^i_{A_n}(M_n).
+$$
+Case (\ref{item-base-change}). We can use the homotopy $h \otimes 1$ defined
+in degree $n$ by setting
+$$
+(h \otimes 1)_{n, \alpha} = h_{n, \alpha} \otimes 1 :
+K_n \otimes_{A_n} B_n
+\longrightarrow
+M_n \otimes_{A_n} B_n.
+$$
+Case (\ref{item-completion}). We can use the homotopy $h^\wedge$ defined
+in degree $n$ by setting
+$$
+(h^\wedge)_{n, \alpha} = h_{n, \alpha}^\wedge :
+K_n^\wedge
+\longrightarrow
+M_n^\wedge.
+$$
+This works because each $h_{n, \alpha}$ is $A_n$-linear.
+\end{proof}
+
+
+
+
+
+
+\section{Crystals in quasi-coherent modules}
+\label{section-quasi-coherent-crystals}
+
+\noindent
+In Situation \ref{situation-affine}.
+Set $X = \Spec(C)$ and $S = \Spec(A)$. We are going to
+classify crystals in quasi-coherent modules on $\text{Cris}(X/S)$.
+Before we do so we fix some notation.
+
+\medskip\noindent
+Choose a polynomial ring $P = A[x_i]$ over $A$ and a surjection $P \to C$
+of $A$-algebras with kernel $J = \Ker(P \to C)$. Set
+\begin{equation}
+\label{equation-D}
+D = \lim_e D_{P, \gamma}(J) / p^eD_{P, \gamma}(J)
+\end{equation}
+for the $p$-adically completed divided power envelope.
+This ring comes with a divided power ideal $\bar J$ and divided power
+structure $\bar \gamma$, see Lemma \ref{lemma-list-properties}.
+Set $D_e = D/p^eD$ and denote $\bar J_e$ the image of $\bar J$ in $D_e$.
+We will use the shorthand
+\begin{equation}
+\label{equation-omega-D}
+\Omega_D = \lim_e \Omega_{D_e/A, \bar\gamma} =
+\lim_e \Omega_{D/A, \bar\gamma}/p^e\Omega_{D/A, \bar\gamma}
+\end{equation}
+for the $p$-adic completion of the module of divided power differentials,
+see Lemma \ref{lemma-differentials-completion}.
+It is also the $p$-adic completion of
+$\Omega_{D_{P, \gamma}(J)/A, \bar\gamma}$
+which is free on $\text{d}x_i$, see
+Lemma \ref{lemma-module-differentials-divided-power-envelope}.
+Hence any element of $\Omega_D$ can be written uniquely as a sum
+$\sum f_i\text{d}x_i$ such that for every $e$ only finitely many
+$f_i$ are not in $p^eD$. Moreover, the maps
+$\text{d}_{D_e/A, \bar\gamma} : D_e \to \Omega_{D_e/A, \bar\gamma}$
+fit together to define a divided power $A$-derivation
+\begin{equation}
+\label{equation-derivation-D}
+\text{d} : D \longrightarrow \Omega_D
+\end{equation}
+on $p$-adic completions.
+
+\medskip\noindent
+We will also need the ``products $\Spec(D(n))$ of $\Spec(D)$'', see
+Proposition \ref{proposition-compute-cohomology} and its proof for an
+explanation. Formally these are defined as follows. For $n \geq 0$ let
+$J(n) = \Ker(P \otimes_A \ldots \otimes_A P \to C)$ where
+the tensor product has $n + 1$ factors. We set
+\begin{equation}
+\label{equation-Dn}
+D(n) = \lim_e
+D_{P \otimes_A \ldots \otimes_A P, \gamma}(J(n))/
+p^eD_{P \otimes_A \ldots \otimes_A P, \gamma}(J(n))
+\end{equation}
+equal to the $p$-adic completion of the divided power envelope.
+We denote $\bar J(n)$ its divided power ideal and $\bar \gamma(n)$
+its divided powers. We also introduce $D(n)_e = D(n)/p^eD(n)$ as well
+as the $p$-adically completed module of differentials
+\begin{equation}
+\label{equation-omega-Dn}
+\Omega_{D(n)} = \lim_e \Omega_{D(n)_e/A, \bar\gamma} =
+\lim_e \Omega_{D(n)/A, \bar\gamma}/p^e\Omega_{D(n)/A, \bar\gamma}
+\end{equation}
+and derivation
+\begin{equation}
+\label{equation-derivation-Dn}
+\text{d} : D(n) \longrightarrow \Omega_{D(n)}
+\end{equation}
+Of course we have $D = D(0)$. Note that the rings $D(0), D(1), D(2), \ldots$
+form a cosimplicial object in the category of divided power rings.
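+For example, the two coface maps $D(0) \to D(1)$ are induced by
+the coprojections
+$$
+P \longrightarrow P \otimes_A P, \quad
+f \longmapsto f \otimes 1 \quad\text{and}\quad f \longmapsto 1 \otimes f
+$$
+and the codegeneracy $D(1) \to D(0)$ by the multiplication map
+$P \otimes_A P \to P$, $f \otimes g \mapsto fg$; all of these are
+compatible with the maps to $C$, hence act on the ($p$-adically
+completed) divided power envelopes by functoriality.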
+
+\begin{lemma}
+\label{lemma-structure-Dn}
+Let $D$ and $D(n)$ be as in (\ref{equation-D}) and (\ref{equation-Dn}).
+The coprojection $P \to P \otimes_A \ldots \otimes_A P$,
+$f \mapsto f \otimes 1 \otimes \ldots \otimes 1$
+induces an isomorphism
+\begin{equation}
+\label{equation-structure-Dn}
+D(n) = \lim_e D\langle \xi_i(j) \rangle/p^eD\langle \xi_i(j) \rangle
+\end{equation}
+of algebras over $D$ with
+$$
+\xi_i(j) = x_i \otimes 1 \otimes \ldots \otimes 1 -
+1 \otimes \ldots \otimes 1 \otimes x_i \otimes 1 \otimes \ldots \otimes 1
+$$
+for $j = 1, \ldots, n$ where the second $x_i$ is placed in the $j + 1$st
+slot; recall that $D(n)$ is constructed starting with the
+$n + 1$-fold tensor product of $P$ over $A$.
+\end{lemma}
+
+\begin{proof}
+We have
+$$
+P \otimes_A \ldots \otimes_A P = P[\xi_i(j)]
+$$
+and $J(n)$ is generated by $J$ and the elements $\xi_i(j)$.
+Hence the lemma follows from
+Lemma \ref{lemma-divided-power-envelope-add-variables}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-property-Dn}
+Let $D$ and $D(n)$ be as in (\ref{equation-D}) and (\ref{equation-Dn}).
+Then $(D, \bar J, \bar\gamma)$ and $(D(n), \bar J(n), \bar\gamma(n))$
+are objects of $\text{Cris}^\wedge(C/A)$, see
+Remark \ref{remark-completed-affine-site}, and
+$$
+D(n) = \coprod\nolimits_{j = 0, \ldots, n} D
+$$
+in $\text{Cris}^\wedge(C/A)$.
+\end{lemma}
+
+\begin{proof}
+The first assertion is clear. For the second, if $(B \to C, \delta)$ is an
+object of $\text{Cris}^\wedge(C/A)$, then we have
+$$
+\Mor_{\text{Cris}^\wedge(C/A)}(D, B) =
+\Hom_A((P, J), (B, \Ker(B \to C)))
+$$
+and similarly for $D(n)$ replacing $(P, J)$ by
+$(P \otimes_A \ldots \otimes_A P, J(n))$. The property on coproducts follows
+as $P \otimes_A \ldots \otimes_A P$ is a coproduct.
+\end{proof}
+
+\noindent
+In the lemma below we will consider pairs $(M, \nabla)$ satisfying the
+following conditions
+\begin{enumerate}
+\item
+\label{item-complete}
+$M$ is a $p$-adically complete $D$-module,
+\item
+\label{item-connection}
+$\nabla : M \to M \otimes^\wedge_D \Omega_D$ is a connection, i.e.,
+$\nabla(fm) = m \otimes \text{d}f + f\nabla(m)$,
+\item
+\label{item-integrable}
+$\nabla$ is integrable
+(see Remark \ref{remark-connection}), and
+\item
+\label{item-topologically-quasi-nilpotent}
+$\nabla$ is {\it topologically quasi-nilpotent}: If we write
+$\nabla(m) = \sum \theta_i(m)\text{d}x_i$ for some operators
+$\theta_i : M \to M$, then for any $m \in M$ there are only finitely
+many pairs $(i, k)$ such that $\theta_i^k(m) \not \in pM$.
+\end{enumerate}
+The operators $\theta_i$ are sometimes denoted
+$\nabla_{\partial/\partial x_i}$ in the literature.
+In the following lemma we construct a functor from crystals in quasi-coherent
+modules on $\text{Cris}(X/S)$ to the category of such pairs. We will show
+this functor is an equivalence in
+Proposition \ref{proposition-crystals-on-affine}.
+
+\begin{lemma}
+\label{lemma-crystals-on-affine}
+In the situation above there is a functor
+$$
+\begin{matrix}
+\text{crystals in quasi-coherent} \\
+\mathcal{O}_{X/S}\text{-modules on }\text{Cris}(X/S)
+\end{matrix}
+\longrightarrow
+\begin{matrix}
+\text{pairs }(M, \nabla)\text{ satisfying} \\
+\text{(\ref{item-complete}), (\ref{item-connection}),
+(\ref{item-integrable}), and (\ref{item-topologically-quasi-nilpotent})}
+\end{matrix}
+$$
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be a crystal in quasi-coherent modules on $X/S$.
+Set $T_e = \Spec(D_e)$ so that $(X, T_e, \bar\gamma)$ is an object
+of $\text{Cris}(X/S)$ for $e \gg 0$. We have morphisms
+$$
+(X, T_e, \bar\gamma) \to (X, T_{e + 1}, \bar\gamma) \to \ldots
+$$
+which are closed immersions. We set
+$$
+M =
+\lim_e \Gamma((X, T_e, \bar\gamma), \mathcal{F}) =
+\lim_e \Gamma(T_e, \mathcal{F}_{T_e}) = \lim_e M_e
+$$
+Note that since $\mathcal{F}$ is locally quasi-coherent we have
+$\mathcal{F}_{T_e} = \widetilde{M_e}$. Since $\mathcal{F}$ is a
+crystal we have $M_e = M_{e + 1}/p^eM_{e + 1}$. Hence we see that
+$M_e = M/p^eM$ and that $M$ is $p$-adically complete, see
+Algebra, Lemma \ref{algebra-lemma-limit-complete}.
+
+\medskip\noindent
+By Lemma \ref{lemma-automatic-connection} we know that $\mathcal{F}$
+comes endowed with a canonical integrable connection
+$\nabla : \mathcal{F} \to \mathcal{F} \otimes \Omega_{X/S}$.
+If we evaluate this connection on the objects $T_e$ constructed above
+we obtain a canonical integrable connection
+$$
+\nabla : M \longrightarrow M \otimes^\wedge_D \Omega_D
+$$
+To see that $\nabla$ is topologically quasi-nilpotent we work out
+what it means in coordinates.
+
+\medskip\noindent
+Now we can do the same procedure for the rings $D(n)$.
+This produces a $p$-adically complete $D(n)$-module $M(n)$. Again using
+the crystal property of $\mathcal{F}$ we obtain isomorphisms
+$$
+M \otimes^\wedge_{D, p_0} D(1) \rightarrow M(1)
+\leftarrow M \otimes^\wedge_{D, p_1} D(1)
+$$
+compare with the proof of Lemma \ref{lemma-automatic-connection}.
+Denote $c$ the composition from left to right. Pick $m \in M$.
+Write $\xi_i = x_i \otimes 1 - 1 \otimes x_i$.
+Using (\ref{equation-structure-Dn}) we can write uniquely
+$$
+c(m \otimes 1) = \sum\nolimits_K \theta_K(m) \otimes \prod \xi_i^{[k_i]}
+$$
+for some $\theta_K(m) \in M$ where the sum is over multi-indices
+$K = (k_i)$ with $k_i \geq 0$ and $\sum k_i < \infty$. Set
+$\theta_i = \theta_K$ where $K$ has a $1$ in the $i$th spot and
+zeros elsewhere. We have
+$$
+\nabla(m) = \sum \theta_i(m) \text{d}x_i.
+$$
+as can be seen by comparing with the definition of
+$\nabla$. Namely, the defining equation is
+$\nabla(m) = p_1^*m - c(p_0^*m)$ in Lemma \ref{lemma-automatic-connection}
+but the sign works out because in the Stacks project we consistently use
+$\text{d}f = p_1(f) - p_0(f)$ modulo the ideal of the diagonal squared,
+and hence $\xi_i = x_i \otimes 1 - 1 \otimes x_i$ maps to $-\text{d}x_i$
+modulo the ideal of the diagonal squared.
+
+\medskip\noindent
+Denote $q_i : D \to D(2)$ and $q_{ij} : D(1) \to D(2)$ the coprojections
+corresponding to the indices $i, j$. As in the last paragraph of the proof of
+Lemma \ref{lemma-automatic-connection}
+we see that
+$$
+q_{02}^*c = q_{12}^*c \circ q_{01}^*c.
+$$
+This means that
+$$
+\sum\nolimits_{K''} \theta_{K''}(m) \otimes \prod {\zeta''_i}^{[k''_i]}
+=
+\sum\nolimits_{K', K} \theta_{K'}(\theta_K(m))
+\otimes \prod {\zeta'_i}^{[k'_i]} \prod \zeta_i^{[k_i]}
+$$
+in $M \otimes^\wedge_{D, q_2} D(2)$ where
+\begin{align*}
+\zeta_i & = x_i \otimes 1 \otimes 1 - 1 \otimes x_i \otimes 1,\\
+\zeta'_i & = 1 \otimes x_i \otimes 1 - 1 \otimes 1 \otimes x_i,\\
+\zeta''_i & = x_i \otimes 1 \otimes 1 - 1 \otimes 1 \otimes x_i.
+\end{align*}
+In particular $\zeta''_i = \zeta_i + \zeta'_i$ and we have that
+$D(2)$ is the $p$-adic completion of the divided power polynomial
+ring in $\zeta_i, \zeta'_i$ over $q_2(D)$, see Lemma \ref{lemma-structure-Dn}.
+Comparing coefficients in the expression above it follows immediately that
+$\theta_i \circ \theta_j = \theta_j \circ \theta_i$
+(this provides an alternative proof of the integrability of $\nabla$) and that
+$$
+\theta_K(m) = (\prod \theta_i^{k_i})(m).
+$$
+In particular, as the sum expressing $c(m \otimes 1)$ above has to converge
+$p$-adically we conclude that for each $i$ and each $m \in M$ only a finite
+number of $\theta_i^k(m)$ are allowed to be nonzero modulo $p$.
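+For example, the commutativity of the $\theta_i$ results from
+comparing, for $i \not = j$, the coefficients of
+$\zeta_i \zeta'_j$ and $\zeta_j \zeta'_i$: expanding
+$(\zeta_i + \zeta'_i)(\zeta_j + \zeta'_j)$ on the left hand side
+gives $\theta_{E_{ij}}(m)$ for both, where $E_{ij}$ is the
+multi-index with a $1$ in the spots $i$ and $j$, whereas the right
+hand side gives $\theta_j(\theta_i(m))$ and $\theta_i(\theta_j(m))$
+respectively. Hence
+$$
+\theta_j \circ \theta_i = \theta_{E_{ij}} = \theta_i \circ \theta_j.
+$$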
+\end{proof}
+
+\begin{proposition}
+\label{proposition-crystals-on-affine}
+The functor
+$$
+\begin{matrix}
+\text{crystals in quasi-coherent} \\
+\mathcal{O}_{X/S}\text{-modules on }\text{Cris}(X/S)
+\end{matrix}
+\longrightarrow
+\begin{matrix}
+\text{pairs }(M, \nabla)\text{ satisfying} \\
+\text{(\ref{item-complete}), (\ref{item-connection}),
+(\ref{item-integrable}), and (\ref{item-topologically-quasi-nilpotent})}
+\end{matrix}
+$$
+of Lemma \ref{lemma-crystals-on-affine}
+is an equivalence of categories.
+\end{proposition}
+
+\begin{proof}
+Let $(M, \nabla)$ be given. We are going to construct
+a crystal in quasi-coherent modules $\mathcal{F}$.
+Write $\nabla(m) = \sum \theta_i(m)\text{d}x_i$.
+Then $\theta_i \circ \theta_j = \theta_j \circ \theta_i$ and we
+can set $\theta_K(m) = (\prod \theta_i^{k_i})(m)$ for any multi-index
+$K = (k_i)$ with $k_i \geq 0$ and $\sum k_i < \infty$.
+
+\medskip\noindent
+Let $(U, T, \delta)$ be any object of $\text{Cris}(X/S)$ with $T$ affine.
+Say $T = \Spec(B)$ and the ideal of $U \to T$ is $J_B \subset B$.
+By Lemma \ref{lemma-set-generators} there exists an integer $e$ and a morphism
+$$
+f : (U, T, \delta) \longrightarrow (X, T_e, \bar\gamma)
+$$
+where $T_e = \Spec(D_e)$ as in the proof of
+Lemma \ref{lemma-crystals-on-affine}.
+Choose such an $e$ and $f$; denote $f : D \to B$ also the corresponding
+divided power $A$-algebra map. We will set $\mathcal{F}_T$ equal to the
+quasi-coherent sheaf of $\mathcal{O}_T$-modules associated to the $B$-module
+$$
+M \otimes_{D, f} B.
+$$
+However, we have to show that this is independent of the choice of $f$.
+Suppose that $g : D \to B$ is a second such morphism. Since $f$ and $g$
+are morphisms in $\text{Cris}(X/S)$ we see that the image of
+$f - g : D \to B$ is contained in the divided power ideal $J_B$.
+Write $\xi_i = f(x_i) - g(x_i) \in J_B$. By analogy with the proof
+of Lemma \ref{lemma-crystals-on-affine} we define an isomorphism
+$$
+c_{f, g} : M \otimes_{D, f} B \longrightarrow M \otimes_{D, g} B
+$$
+by the formula
+$$
+m \otimes 1 \longmapsto
+\sum\nolimits_K \theta_K(m) \otimes \prod \xi_i^{[k_i]}
+$$
+which makes sense by our remarks above and the fact that $\nabla$
+is topologically quasi-nilpotent (so the sum is finite!).
+A computation shows that
+$$
+c_{g, h} \circ c_{f, g} = c_{f, h}
+$$
+if given a third morphism
+$h : (U, T, \delta) \longrightarrow (X, T_e, \bar\gamma)$.
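+Indeed, writing $\eta_i = g(x_i) - h(x_i) \in J_B$, both
+$c_{f, h}$ and $c_{g, h} \circ c_{f, g}$ send $m \otimes 1$ to
+$$
+\sum\nolimits_{K''} \theta_{K''}(m) \otimes \prod (\xi_i + \eta_i)^{[k''_i]}
+=
+\sum\nolimits_{K, K'} \theta_{K'}(\theta_K(m)) \otimes
+\prod \eta_i^{[k'_i]} \prod \xi_i^{[k_i]}
+$$
+where the equality follows from the divided power identity
+$(\xi_i + \eta_i)^{[n]} = \sum_{a + b = n} \xi_i^{[a]} \eta_i^{[b]}$
+and the relation $\theta_{K'} \circ \theta_K = \theta_{K + K'}$.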
+It is also true that $c_{f, f} = 1$.
+Hence these maps are all isomorphisms and we see that
+the module $\mathcal{F}_T$ is independent of the choice of $f$.
+
+\medskip\noindent
+If $a : (U', T', \delta') \to (U, T, \delta)$ is a morphism of affine objects
+of $\text{Cris}(X/S)$, then choosing $f' = f \circ a$ it is clear
+that there exists a canonical isomorphism
+$a^*\mathcal{F}_T \to \mathcal{F}_{T'}$. We omit the verification that this
+map is independent of the choice of $f$. Using these maps as the restriction
+maps it is clear that we obtain a crystal in quasi-coherent modules
+on the full subcategory of $\text{Cris}(X/S)$ consisting of affine objects.
+We omit the proof that this extends to a crystal on all of
+$\text{Cris}(X/S)$. We also omit the proof that this procedure is a functor
+and that it is quasi-inverse to the functor constructed in
+Lemma \ref{lemma-crystals-on-affine}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-crystals-on-affine-smooth}
+In Situation \ref{situation-affine}.
+Let $A \to P' \to C$ be ring maps with $A \to P'$ smooth and $P' \to C$
+surjective with kernel $J'$. Let $D'$ be the $p$-adic completion of
+$D_{P', \gamma}(J')$. There are homomorphisms of divided power $A$-algebras
+$$
+a : D \longrightarrow D',\quad b : D' \longrightarrow D
+$$
+compatible with the maps $D \to C$ and $D' \to C$ such that
+$a \circ b = \text{id}_{D'}$. These maps induce
+an equivalence of categories of pairs $(M, \nabla)$ satisfying
+(\ref{item-complete}), (\ref{item-connection}),
+(\ref{item-integrable}), and (\ref{item-topologically-quasi-nilpotent})
+over $D$ and pairs $(M', \nabla')$ satisfying
+(\ref{item-complete}), (\ref{item-connection}),
+(\ref{item-integrable}), and
+(\ref{item-topologically-quasi-nilpotent})\footnote{This condition
+is tricky to formulate for $(M', \nabla')$ over $D'$. See proof.} over $D'$.
+In particular, the equivalence of categories of
+Proposition \ref{proposition-crystals-on-affine}
+also holds for the corresponding functor towards pairs over $D'$.
+\end{lemma}
+
+\begin{proof}
+First, suppose that $P' = A[y_1, \ldots, y_m]$ is a polynomial algebra
+over $A$. In this case, we can find ring maps $P \to P'$ and $P' \to P$
+compatible with the maps to $C$ which induce maps $a : D \to D'$ and
+$b : D' \to D$ as in the lemma. Using completed base change along $a$
+and $b$ we obtain functors between the categories of modules with connection
+satisfying properties (\ref{item-complete}), (\ref{item-connection}),
+(\ref{item-integrable}), and (\ref{item-topologically-quasi-nilpotent})
+simply because these categories are equivalent to the category
+of quasi-coherent crystals by Proposition \ref{proposition-crystals-on-affine}
+(and this equivalence is compatible with the base change operation as shown
+in the proof of the proposition).
+
+\medskip\noindent
+Proof for general smooth $P'$.
+By the first paragraph of the proof we may assume $P = A[y_1, \ldots, y_m]$
+which gives us a surjection $P \to P'$ compatible with the map to $C$.
+Hence we obtain a surjective map $a : D \to D'$ by functoriality of
+divided power envelopes and completion. Pick $e$ large enough so that
+$D_e$ is a divided power
+thickening of $C$ over $A$. Then $D_e \to C$ is a surjection whose kernel
+is locally nilpotent, see Divided Power Algebra, Lemma \ref{dpa-lemma-nil}.
+Setting $D'_e = D'/p^eD'$
+we see that the kernel of $D_e \to D'_e$ is locally nilpotent.
+Hence by Algebra, Lemma \ref{algebra-lemma-smooth-strong-lift}
+we can find a lift $\beta_e : P' \to D_e$ of the map $P' \to D'_e$.
+Note that $D_{e + i + 1} \to D_{e + i} \times_{D'_{e + i}} D'_{e + i + 1}$
+is surjective with square zero kernel for any $i \geq 0$ because
+$p^{e + i}D \to p^{e + i}D'$ is surjective. Applying the usual lifting
+property (Algebra, Proposition \ref{algebra-proposition-smooth-formally-smooth})
+successively to the diagrams
+$$
+\xymatrix{
+P' \ar[r] & D_{e + i} \times_{D'_{e + i}} D'_{e + i + 1} \\
+A \ar[u] \ar[r] & D_{e + i + 1} \ar[u]
+}
+$$
+we see that we can find an $A$-algebra map $\beta : P' \to D$ whose
+composition with $a$ is the given map $P' \to D'$.
+By the universal property of the divided power envelope we obtain a
+map $D_{P', \gamma}(J') \to D$. As $D$ is $p$-adically complete we
+obtain $b : D' \to D$ such that $a \circ b = \text{id}_{D'}$.
+
+\medskip\noindent
+Consider the base change functors
+$$
+F : (M, \nabla) \longmapsto
+(M \otimes^\wedge_{D, a} D', \nabla')
+\quad\text{and}\quad
+G : (M', \nabla') \longmapsto
+(M' \otimes^\wedge_{D', b} D, \nabla)
+$$
+on modules with connections satisfying (\ref{item-complete}),
+(\ref{item-connection}), and (\ref{item-integrable}).
+See Remark \ref{remark-base-change-connection}.
+Since $a \circ b = \text{id}_{D'}$ we see that
+$F \circ G$ is the identity functor. Let us say that $(M', \nabla')$
+has property (\ref{item-topologically-quasi-nilpotent}) if this
+is true for $G(M', \nabla')$. A formal argument now shows that to finish
+the proof it suffices to show that $G(F(M, \nabla))$ is isomorphic
+to $(M, \nabla)$ in the case that $(M, \nabla)$ satisfies all four
+conditions (\ref{item-complete}), (\ref{item-connection}),
+(\ref{item-integrable}), and (\ref{item-topologically-quasi-nilpotent}).
+For this we use the functorial isomorphism
+$$
+c_{\text{id}_D, b \circ a} :
+M \otimes_{D, \text{id}_D} D
+\longrightarrow
+M \otimes_{D, b \circ a} D
+$$
+of the proof of Proposition \ref{proposition-crystals-on-affine}
+(which requires the topological quasi-nilpotency of $\nabla$
+which we have assumed).
+It remains to prove that this map is horizontal, i.e.,
+compatible with connections, which we omit.
+
+\medskip\noindent
The last statement of the lemma now follows.
+\end{proof}
+
+\begin{remark}
+\label{remark-equivalence-more-general}
+The equivalence of Proposition \ref{proposition-crystals-on-affine}
+holds if we start with a surjection $P \to C$ where $P/A$ satisfies the
+strong lifting property of
+Algebra, Lemma \ref{algebra-lemma-smooth-strong-lift}.
+To prove this we can argue as in the proof of
+Lemma \ref{lemma-crystals-on-affine-smooth}.
+(Details will be added here if we ever need this.)
+Presumably there is also a direct proof of this result, but the advantage
+of using polynomial rings is that the rings $D(n)$ are $p$-adic completions
+of divided power polynomial rings and the algebra is simplified.
+\end{remark}
+
+
+
+\section{General remarks on cohomology}
+\label{section-cohomology-lqc}
+
+\noindent
In this section we do a bit of work to translate the cohomology
of modules on the crystalline site of an affine scheme into
an algebraic question.
+
+\begin{lemma}
+\label{lemma-vanishing-lqc}
+In Situation \ref{situation-global}.
+Let $\mathcal{F}$ be a locally quasi-coherent $\mathcal{O}_{X/S}$-module
+on $\text{Cris}(X/S)$. Then we have
+$$
+H^p((U, T, \delta), \mathcal{F}) = 0
+$$
+for all $p > 0$ and all $(U, T, \delta)$ with $T$ or $U$ affine.
+\end{lemma}
+
+\begin{proof}
+As $U \to T$ is a thickening we see that $U$ is affine if and only if $T$
+is affine, see Limits, Lemma \ref{limits-lemma-affine}.
+Having said this, let us apply
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cech-vanish-collection}
+to the collection $\mathcal{B}$ of affine objects $(U, T, \delta)$ and the
+collection $\text{Cov}$ of affine open coverings
+$\mathcal{U} = \{(U_i, T_i, \delta_i) \to (U, T, \delta)\}$. The
+{\v C}ech complex
+${\check C}^*(\mathcal{U}, \mathcal{F})$ for such a covering is simply
+the {\v C}ech complex of the quasi-coherent $\mathcal{O}_T$-module
+$\mathcal{F}_T$
+(here we are using the assumption that $\mathcal{F}$ is locally quasi-coherent)
+with respect to the affine open covering $\{T_i \to T\}$ of the
+affine scheme $T$. Hence the {\v C}ech cohomology is zero by
Cohomology of Schemes, Lemmas
\ref{coherent-lemma-cech-cohomology-quasi-coherent} and
\ref{coherent-lemma-quasi-coherent-affine-cohomology-zero}.
Thus the hypotheses of
Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cech-vanish-collection}
are satisfied and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare}
+In Situation \ref{situation-global}.
+Assume moreover $X$ and $S$ are affine schemes.
+Consider the full subcategory $\mathcal{C} \subset \text{Cris}(X/S)$
+consisting of divided power thickenings $(X, T, \delta)$
+endowed with the chaotic topology (see
+Sites, Example \ref{sites-example-indiscrete}).
+For any locally quasi-coherent $\mathcal{O}_{X/S}$-module $\mathcal{F}$
+we have
+$$
+R\Gamma(\mathcal{C}, \mathcal{F}|_\mathcal{C}) =
+R\Gamma(\text{Cris}(X/S), \mathcal{F})
+$$
+\end{lemma}
+
+\begin{proof}
Denote $\text{AffineCris}(X/S)$ the full subcategory of $\text{Cris}(X/S)$
+consisting of those objects $(U, T, \delta)$ with $U$ and $T$ affine.
+We turn this into a site by saying a family of morphisms
+$\{(U_i, T_i, \delta_i) \to (U, T, \delta)\}_{i \in I}$ of
+$\text{AffineCris}(X/S)$ is a covering if and only if it is a covering
+of $\text{Cris}(X/S)$. With this definition the inclusion functor
+$$
+\text{AffineCris}(X/S) \longrightarrow \text{Cris}(X/S)
+$$
+is a special cocontinuous functor as defined in
+Sites, Definition \ref{sites-definition-special-cocontinuous-functor}.
+The proof of this is exactly the same as the proof of
+Topologies, Lemma \ref{topologies-lemma-affine-big-site-Zariski}.
+Thus we see that the topos of sheaves on $\text{Cris}(X/S)$
+is the same as the topos of sheaves on $\text{AffineCris}(X/S)$
+via restriction by the displayed inclusion functor.
+Therefore we have to prove the corresponding statement for the
+inclusion $\mathcal{C} \subset \text{AffineCris}(X/S)$.
+
+\medskip\noindent
+We will use without further mention that $\mathcal{C}$ and
+$\text{AffineCris}(X/S)$ have products and fibre products
+(details omitted, see
+Lemma \ref{lemma-divided-power-thickening-fibre-products}).
+The inclusion functor $u : \mathcal{C} \to \text{AffineCris}(X/S)$
+is fully faithful, continuous, and commutes with products and fibre products.
+We claim it defines a morphism of ringed sites
+$$
+f :
+(\text{AffineCris}(X/S), \mathcal{O}_{X/S})
+\longrightarrow
+(\Sh(\mathcal{C}), \mathcal{O}_{X/S}|_\mathcal{C})
+$$
+To see this we will use Sites, Lemma \ref{sites-lemma-directed-morphism}.
+Note that $\mathcal{C}$ has fibre products and $u$ commutes with them
+so the categories $\mathcal{I}^u_{(U, T, \delta)}$ are disjoint unions
+of directed categories (by Sites, Lemma \ref{sites-lemma-almost-directed} and
+Categories, Lemma \ref{categories-lemma-split-into-directed}). Hence it
+suffices to show that $\mathcal{I}^u_{(U, T, \delta)}$ is connected.
Nonemptiness follows from Lemma \ref{lemma-set-generators}: since $U$ and $T$
+are affine that lemma says there is at least one object
+$(X, T', \delta')$ of $\mathcal{C}$ and a morphism
+$(U, T, \delta) \to (X, T', \delta')$ of divided power thickenings.
+Connectedness follows from the fact that $\mathcal{C}$ has products
+and that $u$ commutes with them (compare with the proof of
+Sites, Lemma \ref{sites-lemma-directed}).
+
+\medskip\noindent
+Note that $f_*\mathcal{F} = \mathcal{F}|_\mathcal{C}$. Hence the lemma
+follows if $R^pf_*\mathcal{F} = 0$ for $p > 0$, see
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-apply-Leray}. By
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-higher-direct-images}
+it suffices to show that
+$H^p(\text{AffineCris}(X/S)/(X, T, \delta), \mathcal{F}) = 0$
+for all $(X, T, \delta)$.
+This follows from Lemma \ref{lemma-vanishing-lqc} because the
+topos of the site $\text{AffineCris}(X/S)/(X, T, \delta)$
+is equivalent to the topos of the site
+$\text{Cris}(X/S)/(X, T, \delta)$ used in the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complete}
+In Situation \ref{situation-affine}.
+Set $\mathcal{C} = (\text{Cris}(C/A))^{opp}$ and
+$\mathcal{C}^\wedge = (\text{Cris}^\wedge(C/A))^{opp}$
+endowed with the chaotic topology, see
+Remark \ref{remark-completed-affine-site} for notation.
+There is a morphism of topoi
+$$
+g : \Sh(\mathcal{C}) \longrightarrow \Sh(\mathcal{C}^\wedge)
+$$
+such that if $\mathcal{F}$ is a sheaf of abelian groups on
+$\mathcal{C}$, then
+$$
+R^pg_*\mathcal{F}(B \to C, \delta) =
+\left\{
+\begin{matrix}
+\lim_e \mathcal{F}(B_e \to C, \delta) & \text{if }p = 0 \\
+R^1\lim_e \mathcal{F}(B_e \to C, \delta) & \text{if }p = 1 \\
+0 & \text{else}
+\end{matrix}
+\right.
+$$
+where $B_e = B/p^eB$ for $e \gg 0$.
+\end{lemma}
+
+\begin{proof}
+Any functor between categories defines a morphism between chaotic
+topoi in the same direction, for example because such a functor
+can be considered as a cocontinuous functor between sites, see
+Sites, Section \ref{sites-section-cocontinuous-morphism-topoi}.
The proof of the description of $g_*\mathcal{F}$ is omitted.
Note that in the statement $(B_e \to C, \delta)$
is an object of $\text{Cris}(C/A)$ only for $e$ large enough.
+Let $\mathcal{I}$ be an injective abelian sheaf on $\mathcal{C}$.
+Then the transition maps
+$$
+\mathcal{I}(B_e \to C, \delta) \leftarrow
+\mathcal{I}(B_{e + 1} \to C, \delta)
+$$
+are surjective as the morphisms
+$$
+(B_e \to C, \delta)
+\longrightarrow
+(B_{e + 1} \to C, \delta)
+$$
+are monomorphisms in the category $\mathcal{C}$. Hence for an injective
+abelian sheaf both sides of the displayed formula of the lemma agree.
+Taking an injective resolution of $\mathcal{F}$ one easily obtains
+the result (sheaves are presheaves, so exactness is measured on the
+level of groups of sections over objects).
+\end{proof}
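\medskip\noindent
To make the role of the surjective transition maps explicit, recall the
standard description of $\lim_e$ and $R^1\lim_e$ for an inverse system
$(G_e, t_e)$ of abelian groups by the exact sequence

```latex
$$
0 \to \lim_e G_e \to
\prod\nolimits_e G_e
\xrightarrow{(g_e) \mapsto (g_e - t_e(g_{e + 1}))}
\prod\nolimits_e G_e \to
R^1\lim_e G_e \to 0
$$
```

If all the transition maps $t_e$ are surjective, then the middle map is
surjective (solve for the $g_e$ inductively) and $R^1\lim_e G_e = 0$;
this is exactly what happens for the system of sections of an injective
abelian sheaf in the proof above.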
+
+\begin{lemma}
+\label{lemma-category-with-covering}
+Let $\mathcal{C}$ be a category endowed with the chaotic topology.
+Let $X$ be an object of $\mathcal{C}$ such that every object of
+$\mathcal{C}$ has a morphism towards $X$. Assume that $\mathcal{C}$
+has products of pairs.
+Then for every abelian sheaf $\mathcal{F}$ on $\mathcal{C}$
+the total cohomology $R\Gamma(\mathcal{C}, \mathcal{F})$ is represented
+by the complex
+$$
+\mathcal{F}(X) \to \mathcal{F}(X \times X) \to
+\mathcal{F}(X \times X \times X) \to \ldots
+$$
associated to the cosimplicial abelian group $[n] \mapsto \mathcal{F}(X^{n + 1})$.
+\end{lemma}
+
+\begin{proof}
+Note that $H^q(X^p, \mathcal{F}) = 0$ for all $q > 0$ as any presheaf is a
+sheaf on $\mathcal{C}$. The assumption on $X$ is that $h_X \to *$
+is surjective. Using that $H^q(X, \mathcal{F}) = H^q(h_X, \mathcal{F})$ and
+$H^q(\mathcal{C}, \mathcal{F}) = H^q(*, \mathcal{F})$ we see that our
+statement is a special case of
+Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-cech-to-cohomology-sheaf-sets}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Cosimplicial preparations}
+\label{section-cohomology}
+
+\noindent
+In this section we compare crystalline cohomology with de Rham
+cohomology. We follow \cite{Bhatt}.
+
+\begin{example}
+\label{example-cosimplicial-module}
+Suppose that $A_*$ is any cosimplicial ring.
+Consider the cosimplicial module $M_*$ defined by the rule
+$$
M_n = \bigoplus\nolimits_{i = 0, \ldots, n} A_n e_i
+$$
+For a map $f : [n] \to [m]$ define $M_*(f) : M_n \to M_m$
+to be the unique $A_*(f)$-linear map which maps $e_i$ to $e_{f(i)}$.
+We claim the identity on $M_*$ is homotopic to $0$.
+Namely, a homotopy is given by a map of cosimplicial modules
+$$
+h : M_* \longrightarrow \Hom(\Delta[1], M_*)
+$$
+see Section \ref{section-cosimplicial}.
+For $j \in \{0, \ldots, n + 1\}$ we let $\alpha^n_j : [n] \to [1]$ be the map
+defined by $\alpha^n_j(i) = 0 \Leftrightarrow i < j$. Then
+$\Delta[1]_n = \{\alpha^n_0, \ldots, \alpha^n_{n + 1}\}$ and correspondingly
+$\Hom(\Delta[1], M_*)_n = \prod_{j = 0, \ldots, n + 1} M_n$, see
+Simplicial, Sections \ref{simplicial-section-homotopy} and
+\ref{simplicial-section-homotopy-cosimplicial}. Instead of using
+this product representation, we think of an element
+in $\Hom(\Delta[1], M_*)_n$ as a function $\Delta[1]_n \to M_n$.
+Using this notation, we define $h$ in degree $n$ by the rule
+$$
+h_n(e_i)(\alpha^n_j) =
+\left\{
+\begin{matrix}
+e_{i} & \text{if} & i < j \\
+0 & \text{else}
+\end{matrix}
+\right.
+$$
+We first check $h$ is a morphism of cosimplicial modules. Namely, for
+$f : [n] \to [m]$ we will show that
+\begin{equation}
+\label{equation-cosimplicial-morphism}
+h_m \circ M_*(f) = \Hom(\Delta[1], M_*)(f) \circ h_n
+\end{equation}
+The left hand side of (\ref{equation-cosimplicial-morphism}) evaluated at
+$e_i$ and then in turn evaluated at $\alpha^m_j$ is
+$$
+h_m(e_{f(i)})(\alpha^m_j) =
+\left\{
+\begin{matrix}
+e_{f(i)} & \text{if} & f(i) < j \\
+0 & \text{else}
+\end{matrix}
+\right.
+$$
+Note that $\alpha^m_j \circ f = \alpha^n_{j'}$ where
+$0 \leq j' \leq n + 1$ is the unique index such that $f(i) < j$
+if and only if $i < j'$. Thus the right hand side of
+(\ref{equation-cosimplicial-morphism}) evaluated at $e_i$
+and then in turn evaluated at $\alpha^m_j$ is
+$$
M_*(f)(h_n(e_i)(\alpha^m_j \circ f)) =
+M_*(f)(h_n(e_i)(\alpha^n_{j'})) =
+\left\{
+\begin{matrix}
+e_{f(i)} & \text{if} & i < j' \\
+0 & \text{else}
+\end{matrix}
+\right.
+$$
+It follows from our description of $j'$ that the two answers are equal.
+Hence $h$ is a map of cosimplicial modules.
+Let $0 : \Delta[0] \to \Delta[1]$ and
+$1 : \Delta[0] \to \Delta[1]$ be the obvious maps, and denote
+$ev_0, ev_1 : \Hom(\Delta[1], M_*) \to M_*$ the corresponding
+evaluation maps. The reader verifies readily that
+the compositions
+$$
+ev_0 \circ h, ev_1 \circ h : M_* \longrightarrow M_*
+$$
+are $0$ and $1$ respectively, whence $h$ is the desired homotopy between
+$0$ and $1$.
+\end{example}
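\medskip\noindent
As a sanity check on the formula for $h$, observe that the two constant
maps in $\Delta[1]_n$ are $\alpha^n_{n + 1}$ (constant value $0$) and
$\alpha^n_0$ (constant value $1$), and evaluating the homotopy at these
recovers the two endpoints:

```latex
$$
h_n(e_i)(\alpha^n_0) = 0
\quad\text{(as } i < 0 \text{ never holds)}
\qquad\text{and}\qquad
h_n(e_i)(\alpha^n_{n + 1}) = e_i
\quad\text{(as } i < n + 1 \text{ always holds)}
$$
```

in agreement with the assertion that the two evaluations of $h$ are
$0$ and the identity.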
+
+\begin{lemma}
+\label{lemma-vanishing-omega-1}
+With notation as in (\ref{equation-omega-Dn}) the complex
+$$
+\Omega_{D(0)} \to \Omega_{D(1)} \to \Omega_{D(2)} \to \ldots
+$$
+is homotopic to zero as a $D(*)$-cosimplicial module.
+\end{lemma}
+
+\begin{proof}
+We are going to use the principle of
+Simplicial, Lemma \ref{simplicial-lemma-functorial-homotopy}
+and more specifically
+Lemma \ref{lemma-homotopy-tensor}
+which tells us that homotopic maps between (co)simplicial objects
+are transformed by any functor into homotopic maps.
+The complex of the lemma is equal to the $p$-adic completion of the
+base change of the cosimplicial module
+$$
+M_* = \left(
+\Omega_{P/A} \to
+\Omega_{P \otimes_A P/A} \to
+\Omega_{P \otimes_A P \otimes_A P/A} \to \ldots
+\right)
+$$
+via the cosimplicial ring map $P\otimes_A \ldots \otimes_A P \to D(n)$. This
+follows from Lemma \ref{lemma-module-differentials-divided-power-envelope},
+see comments following (\ref{equation-omega-D}). Hence it
+suffices to show that the cosimplicial module $M_*$ is homotopic to zero
+(uses base change and $p$-adic completion).
+We can even assume $A = \mathbf{Z}$ and $P = \mathbf{Z}[\{x_i\}_{i \in I}]$
+as we can use base change with $\mathbf{Z} \to A$.
+In this case $P^{\otimes n + 1}$ is the polynomial algebra on the elements
+$$
+x_i(e) = 1 \otimes \ldots \otimes x_i \otimes \ldots \otimes 1
+$$
+with $x_i$ in the $e$th slot. The modules of the complex are free on the
+generators $\text{d}x_i(e)$. Note that if $f : [n] \to [m]$ is a
+map then we see that
+$$
+M_*(f)(\text{d}x_i(e)) = \text{d}x_i(f(e))
+$$
+Hence we see that $M_*$ is a direct sum over $I$ of copies of the module
+studied in Example \ref{example-cosimplicial-module} and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing}
+With notation as in (\ref{equation-Dn}) and (\ref{equation-omega-Dn}),
+given any cosimplicial module $M_*$ over $D(*)$ and
+$i > 0$ the cosimplicial module
+$$
+M_0 \otimes^\wedge_{D(0)} \Omega^i_{D(0)} \to
+M_1 \otimes^\wedge_{D(1)} \Omega^i_{D(1)} \to
+M_2 \otimes^\wedge_{D(2)} \Omega^i_{D(2)} \to \ldots
+$$
+is homotopic to zero, where $\Omega^i_{D(n)}$ is the $p$-adic completion
+of the $i$th exterior power of $\Omega_{D(n)}$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-vanishing-omega-1} the endomorphisms $0$ and $1$
+of $\Omega_{D(*)}$ are homotopic.
+If we apply the functor $\wedge^i$ we see that
+the same is true for the cosimplicial module $\wedge^i\Omega_{D(*)}$, see
+Lemma \ref{lemma-homotopy-tensor}.
+Another application of the same lemma shows the $p$-adic completion
+$\Omega^i_{D(*)}$ is homotopy equivalent to zero.
+Tensoring with $M_*$ we see that $M_* \otimes_{D(*)} \Omega^i_{D(*)}$
+is homotopic to zero, see Lemma \ref{lemma-homotopy-tensor} again.
+A final application of the $p$-adic completion functor finishes the proof.
+\end{proof}
+
+
+
+\section{Divided power Poincar\'e lemma}
+\label{section-poincare}
+
+\noindent
+Just the simplest possible version.
+
+\begin{lemma}
+\label{lemma-poincare}
+Let $A$ be a ring. Let $P = A\langle x_i \rangle$ be a divided
+power polynomial ring over $A$. For any $A$-module $M$ the complex
+$$
+0 \to M \to
+M \otimes_A P \to
+M \otimes_A \Omega^1_{P/A, \delta} \to
+M \otimes_A \Omega^2_{P/A, \delta} \to \ldots
+$$
+is exact. Let $D$ be the $p$-adic completion of $P$.
+Let $\Omega^i_D$ be the $p$-adic completion of the $i$th exterior
+power of $\Omega_{D/A, \delta}$. For any $p$-adically complete
+$A$-module $M$ the complex
+$$
+0 \to M \to
+M \otimes^\wedge_A D \to
+M \otimes^\wedge_A \Omega^1_D \to
+M \otimes^\wedge_A \Omega^2_D \to \ldots
+$$
+is exact.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that the complex
+$$
+E :
+(0 \to A \to P \to \Omega^1_{P/A, \delta} \to
+\Omega^2_{P/A, \delta} \to \ldots)
+$$
+is homotopy equivalent to zero as a complex of $A$-modules.
+For every multi-index $K = (k_i)$ we can consider the subcomplex $E(K)$
+which in degree $j$ consists of
+$$
+\bigoplus\nolimits_{I = \{i_1, \ldots, i_j\} \subset \text{Supp}(K)}
+A
+\prod\nolimits_{i \not \in I} x_i^{[k_i]}
+\prod\nolimits_{i \in I} x_i^{[k_i - 1]}
+\text{d}x_{i_1} \wedge \ldots \wedge \text{d}x_{i_j}
+$$
+Since $E = \bigoplus E(K)$ we see that it suffices to prove each of the
+complexes $E(K)$ is homotopic to zero. If $K = 0$, then
+$E(K) : (A \to A)$ is homotopic to zero. If $K$ has nonempty (finite)
+support $S$, then the complex $E(K)$ is isomorphic to the complex
+$$
+0 \to A \to
+\bigoplus\nolimits_{s \in S} A \to
+\wedge^2(\bigoplus\nolimits_{s \in S} A) \to
+\ldots \to \wedge^{\# S}(\bigoplus\nolimits_{s \in S} A) \to 0
+$$
+which is homotopic to zero, for example by
+More on Algebra, Lemma \ref{more-algebra-lemma-homotopy-koszul-abstract}.
+\end{proof}
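\medskip\noindent
To illustrate the decomposition used in the proof, consider two variables
and a multi-index $K = (k_1, k_2)$ with $k_1, k_2 \geq 1$, so that
$S = \{1, 2\}$. Writing out the summands, the complex $E(K)$ becomes

```latex
$$
0 \to
A\, x_1^{[k_1]} x_2^{[k_2]} \to
A\, x_1^{[k_1 - 1]} x_2^{[k_2]} \text{d}x_1 \oplus
A\, x_1^{[k_1]} x_2^{[k_2 - 1]} \text{d}x_2 \to
A\, x_1^{[k_1 - 1]} x_2^{[k_2 - 1]} \text{d}x_1 \wedge \text{d}x_2 \to 0
$$
```

Since $\text{d}(x^{[k]}) = x^{[k - 1]}\text{d}x$, the differentials are
$1 \mapsto (1, 1)$ and $(a, b) \mapsto b - a$, so $E(K)$ is the complex
$0 \to A \to A^{\oplus 2} \to A \to 0$, which is exact and in fact
homotopic to zero.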
+
+\noindent
+An alternative (more direct) approach to the following lemma is
+explained in Example \ref{example-integrate}.
+
+\begin{lemma}
+\label{lemma-relative-poincare}
+Let $A$ be a ring. Let $(B, I, \delta)$ be a divided power ring.
+Let $P = B\langle x_i \rangle$ be a divided power polynomial
+ring over $B$ with divided power ideal $J = IP + B\langle x_i \rangle_{+}$
+as usual. Let $M$ be a $B$-module endowed with an integrable connection
+$\nabla : M \to M \otimes_B \Omega^1_{B/A, \delta}$. Then the map of
+de Rham complexes
+$$
+M \otimes_B \Omega^*_{B/A, \delta}
+\longrightarrow
+M \otimes_P \Omega^*_{P/A, \delta}
+$$
+is a quasi-isomorphism. Let $D$, resp.\ $D'$ be the $p$-adic completion of
+$B$, resp.\ $P$ and let $\Omega^i_D$, resp.\ $\Omega^i_{D'}$ be the $p$-adic
+completion of $\Omega^i_{B/A, \delta}$,
+resp.\ $\Omega^i_{P/A, \delta}$. Let $M$ be a $p$-adically complete
$D$-module endowed with an integrable connection
+$\nabla : M \to M \otimes^\wedge_D \Omega^1_D$.
+Then the map of de Rham complexes
+$$
+M \otimes^\wedge_D \Omega^*_D
+\longrightarrow
+M \otimes^\wedge_D \Omega^*_{D'}
+$$
+is a quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+Consider the decreasing filtration $F^*$ on $\Omega^*_{B/A, \delta}$
+given by the subcomplexes
+$F^i(\Omega^*_{B/A, \delta}) = \sigma_{\geq i}\Omega^*_{B/A, \delta}$.
+See Homology, Section \ref{homology-section-truncations}.
+This induces a decreasing filtration $F^*$ on $\Omega^*_{P/A, \delta}$
+by setting
+$$
+F^i(\Omega^*_{P/A, \delta}) =
+F^i(\Omega^*_{B/A, \delta}) \wedge \Omega^*_{P/A, \delta}.
+$$
+We have a split short exact sequence
+$$
+0 \to \Omega^1_{B/A, \delta} \otimes_B P \to
+\Omega^1_{P/A, \delta} \to
+\Omega^1_{P/B, \delta} \to 0
+$$
+and the last module is free on $\text{d}x_i$. It follows from this that
+$F^i(\Omega^*_{P/A, \delta}) \to \Omega^*_{P/A, \delta}$ is a termwise
+split injection and that
+$$
+\text{gr}^i_F(\Omega^*_{P/A, \delta}) =
+\Omega^i_{B/A, \delta} \otimes_B \Omega^*_{P/B, \delta}
+$$
+as complexes. Thus we can define a filtration $F^*$ on
$M \otimes_B \Omega^*_{P/A, \delta}$ by setting
+$$
+F^i(M \otimes_B \Omega^*_{P/A, \delta}) =
+M \otimes_B F^i(\Omega^*_{P/A, \delta})
+$$
+and we have
+$$
+\text{gr}^i_F(M \otimes_B \Omega^*_{P/A, \delta}) =
+M \otimes_B \Omega^i_{B/A, \delta} \otimes_B \Omega^*_{P/B, \delta}
+$$
+as complexes.
By Lemma \ref{lemma-poincare} each of these complexes is
quasi-isomorphic to $M \otimes_B \Omega^i_{B/A, \delta}$ placed in degree $i$.
+Hence we see that the first displayed map of the lemma is a morphism of
+filtered complexes which induces a quasi-isomorphism on graded pieces. This
+implies that it is a quasi-isomorphism, for example by the spectral sequence
+associated to a filtered complex, see
+Homology, Section \ref{homology-section-filtered-complex}.
+
+\medskip\noindent
+The proof of the second quasi-isomorphism is exactly the same.
+\end{proof}
+
+
+
+\section{Cohomology in the affine case}
+\label{section-cohomology-affine}
+
+\noindent
+Let's go back to the situation studied in
+Section \ref{section-quasi-coherent-crystals}. We
+start with $(A, I, \gamma)$ and $A/I \to C$ and set
+$X = \Spec(C)$ and $S = \Spec(A)$. Then we choose
+a polynomial ring $P$ over $A$ and a surjection $P \to C$ with
+kernel $J$. We obtain $D$ and $D(n)$ see
+(\ref{equation-D}) and (\ref{equation-Dn}).
+Set $T(n)_e = \Spec(D(n)/p^eD(n))$ so that
+$(X, T(n)_e, \delta(n))$ is an object of $\text{Cris}(X/S)$.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_{X/S}$-modules and set
+$$
+M(n) = \lim_e \Gamma((X, T(n)_e, \delta(n)), \mathcal{F})
+$$
+for $n = 0, 1, 2, 3, \ldots$. This forms a cosimplicial module
+over the cosimplicial ring $D(0), D(1), D(2), \ldots$.
+
+\begin{proposition}
+\label{proposition-compute-cohomology}
+With notations as above assume that
+\begin{enumerate}
+\item $\mathcal{F}$ is locally quasi-coherent, and
+\item for any morphism $(U, T, \delta) \to (U', T', \delta')$
+of $\text{Cris}(X/S)$ where $f : T \to T'$ is a closed immersion
+the map $c_f : f^*\mathcal{F}_{T'} \to \mathcal{F}_T$ is surjective.
+\end{enumerate}
+Then the complex
+$$
+M(0) \to M(1) \to M(2) \to \ldots
+$$
+computes $R\Gamma(\text{Cris}(X/S), \mathcal{F})$.
+\end{proposition}
+
+\begin{proof}
+Using assumption (1) and Lemma \ref{lemma-compare} we see that
+$R\Gamma(\text{Cris}(X/S), \mathcal{F})$ is isomorphic to
+$R\Gamma(\mathcal{C}, \mathcal{F})$. Note that the categories
+$\mathcal{C}$ used in Lemmas \ref{lemma-compare} and \ref{lemma-complete}
+agree. Let $f : T \to T'$ be a closed immersion as in (2). Surjectivity
+of $c_f : f^*\mathcal{F}_{T'} \to \mathcal{F}_T$ is equivalent to
+surjectivity of $\mathcal{F}_{T'} \to f_*\mathcal{F}_T$. Hence, if
+$\mathcal{F}$ satisfies (1) and (2), then we obtain a short exact sequence
+$$
+0 \to \mathcal{K} \to \mathcal{F}_{T'} \to f_*\mathcal{F}_T \to 0
+$$
+of quasi-coherent $\mathcal{O}_{T'}$-modules on $T'$, see
+Schemes, Section \ref{schemes-section-quasi-coherent} and in particular
+Lemma \ref{schemes-lemma-push-forward-quasi-coherent}.
+Thus, if $T'$ is affine, then we conclude that the restriction map
+$\mathcal{F}(U', T', \delta') \to \mathcal{F}(U, T, \delta)$
+is surjective by the vanishing of $H^1(T', \mathcal{K})$, see
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherent-affine-cohomology-zero}.
+Hence the transition maps of the inverse systems in Lemma \ref{lemma-complete}
+are surjective. We conclude that
+$R^pg_*(\mathcal{F}|_\mathcal{C}) = 0$ for all $p \geq 1$
+where $g$ is as in Lemma \ref{lemma-complete}.
+The object $D$ of the category $\mathcal{C}^\wedge$
+satisfies the assumption of Lemma \ref{lemma-category-with-covering} by
+Lemma \ref{lemma-generator-completion}
+with
+$$
+D \times \ldots \times D = D(n)
+$$
in $\mathcal{C}^\wedge$ because $D(n)$ is the $(n + 1)$-fold coproduct of
+$D$ in $\text{Cris}^\wedge(C/A)$, see Lemma \ref{lemma-property-Dn}.
+Thus we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-is-zero}
+Assumptions and notation as in
+Proposition \ref{proposition-compute-cohomology}.
+Then
+$$
+H^j(\text{Cris}(X/S), \mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^i_{X/S})
+= 0
+$$
+for all $i > 0$ and all $j \geq 0$.
+\end{lemma}
+
+\begin{proof}
+Using Lemma \ref{lemma-omega-locally-quasi-coherent} it follows that
+$\mathcal{H} = \mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^i_{X/S}$
+also satisfies assumptions (1) and (2) of
+Proposition \ref{proposition-compute-cohomology}.
+Write $M(n)_e = \Gamma((X, T(n)_e, \delta(n)), \mathcal{F})$
+so that $M(n) = \lim_e M(n)_e$. Then
+\begin{align*}
+\lim_e \Gamma((X, T(n)_e, \delta(n)), \mathcal{H}) & =
\lim_e M(n)_e \otimes_{D(n)_e} \Omega^i_{D(n)}/p^e\Omega^i_{D(n)} \\
& = \lim_e M(n)_e \otimes_{D(n)} \Omega^i_{D(n)}
+\end{align*}
+By
+Lemma \ref{lemma-vanishing}
+the cosimplicial modules
+$$
+M(0)_e \otimes_{D(0)} \Omega^i_{D(0)} \to
+M(1)_e \otimes_{D(1)} \Omega^i_{D(1)} \to
+M(2)_e \otimes_{D(2)} \Omega^i_{D(2)} \to \ldots
+$$
+are homotopic to zero. Because the transition maps
+$M(n)_{e + 1} \to M(n)_e$ are surjective, we see that
the inverse limit of the associated complexes is acyclic\footnote{Actually,
+they are even homotopic to zero as the homotopies fit together, but we don't
+need this. The reason for this roundabout argument is that
+the limit $\lim_e M(n)_e \otimes_{D(n)} \Omega^i_{D(n)}$ isn't the
+$p$-adic completion of $M(n) \otimes_{D(n)} \Omega^i_{D(n)}$ as with
+the assumptions of the lemma we don't know that
+$M(n)_e = M(n)_{e + 1}/p^eM(n)_{e + 1}$. If $\mathcal{F}$ is a crystal
+then this does hold.}.
Hence the vanishing of the cohomology of $\mathcal{H}$ follows from
+Proposition \ref{proposition-compute-cohomology}.
+\end{proof}
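\medskip\noindent
The passage from the levelwise statement to the statement about the
inverse limit in the proof above rests on the standard short exact
sequence for the cohomology of the limit of a tower of complexes
$K_e^\bullet$ with termwise surjective transition maps:

```latex
$$
0 \to R^1\lim_e H^{j - 1}(K_e^\bullet) \to
H^j(\lim_e K_e^\bullet) \to
\lim_e H^j(K_e^\bullet) \to 0
$$
```

In the situation of the proof each $K_e^\bullet$ is homotopic to zero,
hence $H^j(K_e^\bullet) = 0$ for all $j$ and $e$, so both outer terms
vanish and the limit complex is acyclic.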
+
+\begin{proposition}
+\label{proposition-compute-cohomology-crystal}
+Assumptions as in Proposition \ref{proposition-compute-cohomology}
+but now assume that $\mathcal{F}$ is a crystal in quasi-coherent modules.
+Let $(M, \nabla)$ be the corresponding module with connection over $D$, see
+Proposition \ref{proposition-crystals-on-affine}. Then the complex
+$$
+M \otimes^\wedge_D \Omega^*_D
+$$
+computes $R\Gamma(\text{Cris}(X/S), \mathcal{F})$.
+\end{proposition}
+
+\begin{proof}
+We will prove this using the two spectral sequences associated to the
+double complex $K^{*, *}$ with terms
+$$
+K^{a, b} = M \otimes_D^\wedge \Omega^a_{D(b)}
+$$
+What do we know so far? Well, Lemma \ref{lemma-vanishing}
tells us that each column $K^{a, *}$, $a > 0$, is acyclic.
+Proposition \ref{proposition-compute-cohomology} tells us that
+the first column $K^{0, *}$ is quasi-isomorphic to
+$R\Gamma(\text{Cris}(X/S), \mathcal{F})$.
+Hence the first spectral sequence associated to the double complex
+shows that there is a canonical quasi-isomorphism of
+$R\Gamma(\text{Cris}(X/S), \mathcal{F})$ with
+$\text{Tot}(K^{*, *})$.
+
+\medskip\noindent
+Next, let's consider the rows $K^{*, b}$. By
+Lemma \ref{lemma-structure-Dn}
+each of the $b + 1$ maps $D \to D(b)$ presents $D(b)$ as the $p$-adic
+completion of a divided power polynomial algebra over $D$.
+Hence Lemma \ref{lemma-relative-poincare} shows that the map
+$$
+M \otimes^\wedge_D\Omega^*_D
+\longrightarrow
+M \otimes^\wedge_{D(b)} \Omega^*_{D(b)} = K^{*, b}
+$$
+is a quasi-isomorphism. Note that each of these maps defines the {\it same}
+map on cohomology (and even the same map in the derived category) as
+the inverse is given by the co-diagonal map $D(b) \to D$ (corresponding
+to the multiplication map $P \otimes_A \ldots \otimes_A P \to P$).
+Hence if we look at the $E_1$ page of the second spectral sequence
+we obtain
+$$
+E_1^{a, b} = H^a(M \otimes^\wedge_D\Omega^*_D)
+$$
+with differentials
+$$
+E_1^{a, 0} \xrightarrow{0}
+E_1^{a, 1} \xrightarrow{1}
+E_1^{a, 2} \xrightarrow{0}
+E_1^{a, 3} \xrightarrow{1} \ldots
+$$
as each of these is the alternating sum of the given identifications
+$H^a(M \otimes^\wedge_D\Omega^*_D) = E_1^{a, 0} = E_1^{a, 1} = \ldots$.
Thus we see that the $E_2$ page is equal to $H^a(M \otimes^\wedge_D\Omega^*_D)$
+on the first row and zero elsewhere. It follows that the identification
+of $M \otimes^\wedge_D\Omega^*_D$ with the first row induces a
+quasi-isomorphism of $M \otimes^\wedge_D\Omega^*_D$ with
+$\text{Tot}(K^{*, *})$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-cohomology-crystal-smooth}
+Assumptions as in Proposition \ref{proposition-compute-cohomology-crystal}.
+Let $A \to P' \to C$ be ring maps with $A \to P'$ smooth and $P' \to C$
+surjective with kernel $J'$. Let $D'$ be the $p$-adic completion of
+$D_{P', \gamma}(J')$. Let $(M', \nabla')$ be the pair over $D'$
+corresponding to $\mathcal{F}$, see
+Lemma \ref{lemma-crystals-on-affine-smooth}. Then the complex
+$$
+M' \otimes^\wedge_{D'} \Omega^*_{D'}
+$$
+computes $R\Gamma(\text{Cris}(X/S), \mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+Choose $a : D \to D'$ and $b : D' \to D$ as in
+Lemma \ref{lemma-crystals-on-affine-smooth}.
Note that the completed base change $M = M' \otimes^\wedge_{D', b} D$ with its
+connection $\nabla$ corresponds to $\mathcal{F}$. Hence we know
+that $M \otimes^\wedge_D \Omega_D^*$ computes the crystalline
+cohomology of $\mathcal{F}$, see
+Proposition \ref{proposition-compute-cohomology-crystal}.
+Hence it suffices to show that the base change maps (induced
+by $a$ and $b$)
+$$
+M' \otimes^\wedge_{D'} \Omega^*_{D'}
+\longrightarrow
+M \otimes^\wedge_D \Omega^*_D
+\quad\text{and}\quad
+M \otimes^\wedge_D \Omega^*_D
+\longrightarrow
+M' \otimes^\wedge_{D'} \Omega^*_{D'}
+$$
+are quasi-isomorphisms. Since $a \circ b = \text{id}_{D'}$ we see
+that the composition one way around is the identity on the complex
+$M' \otimes^\wedge_{D'} \Omega^*_{D'}$. Hence it suffices to show that
+the map
+$$
+M \otimes^\wedge_D \Omega^*_D
+\longrightarrow
+M \otimes^\wedge_D \Omega^*_D
+$$
+induced by $b \circ a : D \to D$ is a quasi-isomorphism. (Note that we
+have the same complex on both sides as $M = M' \otimes^\wedge_{D', b} D$,
+hence $M \otimes^\wedge_{D, b \circ a} D =
+M' \otimes^\wedge_{D', b \circ a \circ b} D =
+M' \otimes^\wedge_{D', b} D = M$.) In fact, we claim that for any
+divided power $A$-algebra homomorphism $\rho : D \to D$ compatible
+with the augmentation to $C$ the induced map
+$M \otimes^\wedge_D \Omega^*_D \to M \otimes^\wedge_{D, \rho} \Omega^*_D$
+is a quasi-isomorphism.
+
+\medskip\noindent
+Write $\rho(x_i) = x_i + z_i$. The elements $z_i$ are in the
+divided power ideal of $D$ because $\rho$ is compatible with the
+augmentation to $C$. Hence we can factor the map $\rho$
+as a composition
+$$
+D \xrightarrow{\sigma} D\langle \xi_i \rangle^\wedge \xrightarrow{\tau} D
+$$
+where the first map is given by $x_i \mapsto x_i + \xi_i$ and the
+second map is the divided power $D$-algebra map which maps $\xi_i$ to $z_i$.
+(This uses the universal properties of polynomial algebra, divided
+power polynomial algebras, divided power envelopes, and $p$-adic completion.)
+Note that there exists an {\it automorphism} $\alpha$ of
+$D\langle \xi_i \rangle^\wedge$ with $\alpha(x_i) = x_i - \xi_i$
+and $\alpha(\xi_i) = \xi_i$. Applying Lemma \ref{lemma-relative-poincare}
+to $\alpha \circ \sigma$ (which maps $x_i$ to $x_i$) and using that
+$\alpha$ is an isomorphism we conclude that $\sigma$ induces a
+quasi-isomorphism of $M \otimes^\wedge_D \Omega^*_D$ with
$M \otimes^\wedge_{D, \sigma} \Omega^*_{D\langle \xi_i \rangle^\wedge}$.
On the other hand the map $\tau$ is a left inverse of the
map $D \to D\langle \xi_i \rangle^\wedge$, $x_i \mapsto x_i$,
+and we conclude (using Lemma \ref{lemma-relative-poincare} once more)
+that $\tau$ induces a quasi-isomorphism of
$M \otimes^\wedge_{D, \sigma} \Omega^*_{D\langle \xi_i \rangle^\wedge}$
+with $M \otimes^\wedge_{D, \tau \circ \sigma} \Omega^*_D$. Composing these
+two quasi-isomorphisms we obtain that $\rho$ induces a quasi-isomorphism
+$M \otimes^\wedge_D \Omega^*_D \to M \otimes^\wedge_{D, \rho} \Omega^*_D$
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Two counterexamples}
+\label{section-examples}
+
+\noindent
+Before we turn to some of the successes of crystalline cohomology,
+let us give two examples which explain why crystalline cohomology
+does not work very well if the schemes in question are either not
+proper over the base, or singular. The first example can be found
+in \cite{BO}.
+
+\begin{example}
+\label{example-torsion}
+Let $A = \mathbf{Z}_p$ with divided power ideal $(p)$ endowed with
+its unique divided powers $\gamma$. Let
+$C = \mathbf{F}_p[x, y]/(x^2, xy, y^2)$. We choose the presentation
+$$
+C = P/J = \mathbf{Z}_p[x, y]/(x^2, xy, y^2, p)
+$$
+Let $D = D_{P, \gamma}(J)^\wedge$ with divided power ideal
+$(\bar J, \bar \gamma)$ as in Section \ref{section-quasi-coherent-crystals}.
+We will also denote by $x$ and $y$ the images of $x$ and $y$ in $D$.
+Consider the element
+$$
+\tau = \bar\gamma_p(x^2)\bar\gamma_p(y^2) - \bar\gamma_p(xy)^2 \in D
+$$
+We note that $p\tau = 0$: as $(p - 1)!$ is invertible in $D$, this follows from
+$$
+p! \bar\gamma_p(x^2) \bar\gamma_p(y^2) =
+x^{2p} \bar\gamma_p(y^2) = \bar\gamma_p(x^2y^2) =
+x^py^p \bar\gamma_p(xy) = p! \bar\gamma_p(xy)^2
+$$
+in $D$. We also note that $\text{d}\tau = 0$ in $\Omega_D$ as
+\begin{align*}
+\text{d}(\bar\gamma_p(x^2) \bar\gamma_p(y^2))
+& =
+\bar\gamma_{p - 1}(x^2)\bar\gamma_p(y^2)\text{d}x^2 +
+\bar\gamma_p(x^2)\bar\gamma_{p - 1}(y^2)\text{d}y^2 \\
+& =
+2 x \bar\gamma_{p - 1}(x^2)\bar\gamma_p(y^2)\text{d}x +
+2 y \bar\gamma_p(x^2)\bar\gamma_{p - 1}(y^2)\text{d}y \\
+& =
+2/(p - 1)!( x^{2p - 1} \bar\gamma_p(y^2)\text{d}x +
+y^{2p - 1} \bar\gamma_p(x^2)\text{d}y ) \\
+& =
+2/(p - 1)!
+(x^{p - 1} \bar\gamma_p(xy^2)\text{d}x +
+y^{p - 1} \bar\gamma_p(x^2y)\text{d}y) \\
+& =
+2/(p - 1)!
+(x^{p - 1}y^p \bar\gamma_p(xy)\text{d}x +
+x^py^{p - 1} \bar\gamma_p(xy)\text{d}y) \\
+& =
+2 \bar\gamma_{p - 1}(xy) \bar\gamma_p(xy)(y\text{d}x + x \text{d}y) \\
+& =
+\text{d}(\bar\gamma_p(xy)^2)
+\end{align*}
+Finally, we claim that $\tau \not = 0$ in $D$. To see this it suffices to
+produce an object $(B \to \mathbf{F}_p[x, y]/(x^2, xy, y^2), \delta)$
+of $\text{Cris}(C/S)$ such that $\tau$ does not map to zero in $B$.
+To do this take
+$$
+B = \mathbf{F}_p[x, y, u, v]/(x^3, x^2y, xy^2, y^3, xu, yu, xv, yv, u^2, v^2)
+$$
+with the obvious surjection to $C$. Let $K = \Ker(B \to C)$ and
+consider the map
+$$
+\delta_p : K \longrightarrow K,\quad
+ax^2 + bxy + cy^2 + du + ev + fuv \longmapsto a^pu + c^pv
+$$
+One checks this satisfies the assumptions (1), (2), (3) of
+Divided Power Algebra, Lemma \ref{dpa-lemma-need-only-gamma-p}
+and hence defines a divided power structure. Moreover,
+we see that $\tau$ maps to $uv$ which is not zero in $B$.
+Set $X = \Spec(C)$ and $S = \Spec(A)$.
+We draw the following conclusions
+\begin{enumerate}
+\item $H^0(\text{Cris}(X/S), \mathcal{O}_{X/S})$ has $p$-torsion, and
+\item pulling back by Frobenius $F^* : H^0(\text{Cris}(X/S), \mathcal{O}_{X/S})
+\to H^0(\text{Cris}(X/S), \mathcal{O}_{X/S})$ is not injective.
+\end{enumerate}
+Namely, $\tau$ defines a nonzero torsion element of
+$H^0(\text{Cris}(X/S), \mathcal{O}_{X/S})$ by
+Proposition \ref{proposition-compute-cohomology-crystal}.
+Similarly, $F^*(\tau) = \sigma(\tau)$ where $\sigma : D \to D$ is the
+map induced by any lift of Frobenius on $P$. If we choose $\sigma(x) = x^p$
+and $\sigma(y) = y^p$, then an easy computation shows that $F^*(\tau) = 0$.
+\end{example}
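+
+\noindent
+The chain of equalities above uses only the formal identities
+$n!\,\gamma_n(u) = u^n$ and $\gamma_n(ab) = a^n \gamma_n(b)$.
+As a sanity check, here is a minimal sketch verifying each step over
+$\mathbf{Q}[x, y]$, where $\gamma_n(u) = u^n/n!$; the prime $p = 5$ and
+the monomial encoding are arbitrary choices made for the illustration:

```python
from fractions import Fraction
from math import factorial

p = 5  # any prime works; the identities below are formal

def gamma(n, mono):
    # divided power of a monomial: gamma_n(c x^a y^b) = (c^n / n!) x^{na} y^{nb}
    c, a, b = mono
    return (Fraction(c) ** n / factorial(n), n * a, n * b)

def mul(m1, m2):
    # multiply monomials given as (coefficient, exponent of x, exponent of y)
    (c1, a1, b1), (c2, a2, b2) = m1, m2
    return (c1 * c2, a1 + a2, b1 + b2)

x2, y2, xy = (1, 2, 0), (1, 0, 2), (1, 1, 1)
fact_p = (Fraction(factorial(p)), 0, 0)

# p! g_p(x^2) g_p(y^2) = x^{2p} g_p(y^2) = g_p(x^2 y^2)
#                      = x^p y^p g_p(xy) = p! g_p(xy)^2
terms = [
    mul(fact_p, mul(gamma(p, x2), gamma(p, y2))),
    mul((Fraction(1), 2 * p, 0), gamma(p, y2)),
    gamma(p, (1, 2, 2)),
    mul((Fraction(1), p, p), gamma(p, xy)),
    mul(fact_p, mul(gamma(p, xy), gamma(p, xy))),
]
assert all(t == terms[0] for t in terms)
```

+Since all five expressions agree and $(p - 1)!$ is a unit in
+$\mathbf{Z}_p$, the difference $\tau$ is indeed killed by $p$ in $D$.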
+
+\noindent
+The next example shows that even for affine space crystalline
+cohomology does not give the correct answer.
+
+\begin{example}
+\label{example-affine-n-space}
+Let $A = \mathbf{Z}_p$ with divided power ideal $(p)$ endowed with
+its unique divided powers $\gamma$. Let
+$C = \mathbf{F}_p[x_1, \ldots, x_r]$. We choose the presentation
+$$
+C = P/J = P/pP\quad\text{with}\quad P = \mathbf{Z}_p[x_1, \ldots, x_r]
+$$
+Note that $pP$ has divided powers by
+Divided Power Algebra, Lemma \ref{dpa-lemma-gamma-extends}.
+Hence setting $D = P^\wedge$ with divided power ideal $(p)$ we obtain a
+situation as in Section \ref{section-quasi-coherent-crystals}.
+We conclude that $R\Gamma(\text{Cris}(X/S), \mathcal{O}_{X/S})$
+is represented by the complex
+$$
+D \to \Omega^1_D \to \Omega^2_D \to \ldots \to \Omega^r_D
+$$
+see Proposition \ref{proposition-compute-cohomology-crystal}.
+Assuming $r > 0$ we conclude the following
+\begin{enumerate}
+\item The crystalline cohomology of the crystalline structure sheaf
+of $X = \mathbf{A}^r_{\mathbf{F}_p}$ over $S = \Spec(\mathbf{Z}_p)$
+is zero except in degrees $0, \ldots, r$.
+\item We have $H^0(\text{Cris}(X/S), \mathcal{O}_{X/S}) = \mathbf{Z}_p$.
+\item The cohomology group $H^r(\text{Cris}(X/S), \mathcal{O}_{X/S})$
+is infinite and is not a torsion abelian group.
+\item The cohomology group $H^r(\text{Cris}(X/S), \mathcal{O}_{X/S})$
+is not separated for the $p$-adic topology.
+\end{enumerate}
+While the first two statements are reasonable, parts (3) and (4) are
+disconcerting! The truth of these statements follows immediately from
+working out what the complex displayed above looks like. Let's just do
+this in case $r = 1$. Then we are looking at the two-term complex
+of $p$-adically complete modules
+$$
+\text{d} :
+D = \left(
+\bigoplus\nolimits_{n \geq 0} \mathbf{Z}_p x^n
+\right)^\wedge
+\longrightarrow
+\Omega^1_D = \left(
+\bigoplus\nolimits_{n \geq 1} \mathbf{Z}_p x^{n - 1}\text{d}x
+\right)^\wedge
+$$
+The map is given by $\text{diag}(0, 1, 2, 3, 4, \ldots)$ except that
+the first summand is missing on the right hand side. Now it is clear
+that $\bigoplus_{n > 0} \mathbf{Z}_p/n\mathbf{Z}_p$ is a subgroup
+of the cokernel, hence the cokernel is infinite. In fact, the element
+$$
+\omega = \sum\nolimits_{e > 0} p^e x^{p^{2e} - 1}\text{d}x
+$$
+is clearly not a torsion element of the cokernel. But it gets worse.
+Namely, consider the element
+$$
+\eta = \sum\nolimits_{e > 0} p^e x^{p^e - 1}\text{d}x
+$$
+For every $t > 0$ the element $\eta$ is congruent, modulo the image of
+$\text{d}$, to $\sum_{e > t} p^e x^{p^e - 1}\text{d}x$, which is
+divisible by $p^t$. But $\eta$ is not in the image of
+$\text{d}$, because it would have to be the image of
+$a + \sum_{e > 0} x^{p^e}$ for some $a \in \mathbf{Z}_p$, and this sum
+is not an element of the left hand side. In fact, $p^N\eta$
+is similarly not in the image of $\text{d}$ for any integer $N$.
+This implies that $\eta$ ``generates'' a copy of $\mathbf{Q}_p$ inside
+of $H^1_{\text{cris}}(\mathbf{A}_{\mathbf{F}_p}^1/\Spec(\mathbf{Z}_p))$.
+\end{example}
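+
+\noindent
+The divisibility step in the last example is pure bookkeeping:
+subtracting $\text{d}(\sum_{0 < e \leq t} x^{p^e})$ from $\eta$ leaves
+$\sum_{e > t} p^e x^{p^e - 1}\text{d}x$, all of whose coefficients are
+divisible by $p^{t + 1}$. A small sketch, where the prime $p = 3$, the
+cutoff $t = 3$, and the truncation bound $E$ are arbitrary choices:

```python
p, t, E = 3, 3, 8  # prime, divisibility cutoff, truncation bound (arbitrary)

# eta = sum_{e > 0} p^e x^{p^e - 1} dx, stored as {exponent of x: coefficient}
eta = {p**e - 1: p**e for e in range(1, E + 1)}

# d(x^{p^e}) = p^e x^{p^e - 1} dx, so subtracting the image under d of
# sum_{0 < e <= t} x^{p^e} cancels the first t terms of eta
for e in range(1, t + 1):
    eta[p**e - 1] -= p**e

# every remaining coefficient is divisible by p^(t + 1); hence eta is
# congruent to arbitrarily p-divisible elements modulo the image of d
assert all(c % p**(t + 1) == 0 for c in eta.values())
```

+That $\eta$ is nevertheless not exact uses the $p$-adic completion, as
+explained above: an antiderivative would have to be
+$a + \sum_{e > 0} x^{p^e}$, whose coefficients do not tend to zero
+$p$-adically.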
+
+
+
+
+
+
+
+
+\section{Applications}
+\label{section-applications}
+
+\noindent
+In this section we collect some applications of the material in
+the previous sections.
+
+\begin{proposition}
+\label{proposition-compare-with-de-Rham}
+In Situation \ref{situation-global}.
+Let $\mathcal{F}$ be a crystal in quasi-coherent modules on
+$\text{Cris}(X/S)$. The truncation map of complexes
+$$
+(\mathcal{F} \to
+\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^1_{X/S} \to
+\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^2_{X/S} \to \ldots)
+\longrightarrow \mathcal{F}[0],
+$$
+while not a quasi-isomorphism, becomes a quasi-isomorphism after applying
+$Ru_{X/S, *}$. In fact, for any $i > 0$, we have
+$$
+Ru_{X/S, *}(\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^i_{X/S}) = 0.
+$$
+\end{proposition}
+
+\begin{proof}
+By Lemma \ref{lemma-automatic-connection} we get a de Rham complex
+as indicated in the statement of the proposition. We abbreviate
+$\mathcal{H} = \mathcal{F} \otimes \Omega^i_{X/S}$.
+Let $X' \subset X$ be an affine open
+subscheme which maps into an affine open subscheme $S' \subset S$.
+Then
+$$
+(Ru_{X/S, *}\mathcal{H})|_{X'_{Zar}} =
+Ru_{X'/S', *}(\mathcal{H}|_{\text{Cris}(X'/S')}),
+$$
+see Lemma \ref{lemma-localize}. Thus
+Lemma \ref{lemma-cohomology-is-zero}
+shows that $Ru_{X/S, *}\mathcal{H}$ is a complex of sheaves on
+$X_{Zar}$ whose cohomology on any affine open is trivial.
+As $X$ has a basis for its topology consisting of affine opens
+this implies that $Ru_{X/S, *}\mathcal{H}$ is quasi-isomorphic to zero.
+\end{proof}
+
+\begin{remark}
+\label{remark-vanishing}
+The proof of Proposition \ref{proposition-compare-with-de-Rham}
+shows that the conclusion
+$$
+Ru_{X/S, *}(\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega^i_{X/S}) = 0
+$$
+for $i > 0$ is true for any $\mathcal{O}_{X/S}$-module
+$\mathcal{F}$ which satisfies conditions (1) and (2) of
+Proposition \ref{proposition-compute-cohomology}.
+This applies to the following non-crystals:
+$\Omega^i_{X/S}$ for all $i$, and any sheaf of the form
+$\underline{\mathcal{F}}$, where $\mathcal{F}$ is a quasi-coherent
+$\mathcal{O}_X$-module. In particular, it applies to the
+sheaf $\underline{\mathcal{O}_X} = \underline{\mathbf{G}_a}$.
+But note that we need something like Lemma \ref{lemma-automatic-connection}
+to produce a de Rham complex which requires $\mathcal{F}$ to be a crystal.
+Hence (currently) the collection of sheaves of modules for which the full
+statement of Proposition \ref{proposition-compare-with-de-Rham} holds
+is exactly the category of crystals in quasi-coherent modules.
+\end{remark}
+
+\noindent
+In Situation \ref{situation-global}.
+Let $\mathcal{F}$ be a crystal in quasi-coherent modules on
+$\text{Cris}(X/S)$. Let $(U, T, \delta)$ be an object of
+$\text{Cris}(X/S)$. Proposition \ref{proposition-compare-with-de-Rham}
+allows us to construct a canonical map
+\begin{equation}
+\label{equation-restriction}
+R\Gamma(\text{Cris}(X/S), \mathcal{F})
+\longrightarrow
+R\Gamma(T, \mathcal{F}_T \otimes_{\mathcal{O}_T} \Omega^*_{T/S, \delta})
+\end{equation}
+Namely, we have $R\Gamma(\text{Cris}(X/S), \mathcal{F}) =
+R\Gamma(\text{Cris}(X/S), \mathcal{F} \otimes \Omega^*_{X/S})$,
+we can restrict global cohomology classes to $T$, and $\Omega_{X/S}$
+restricts to $\Omega_{T/S, \delta}$ by
+Lemma \ref{lemma-module-of-differentials}.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Some further results}
+\label{section-missing}
+
+\noindent
+In this section we mention some results whose proofs are missing.
+We will formulate these as a series of remarks and we will convert
+them into actual lemmas and propositions only when we add detailed
+proofs.
+
+\begin{remark}[Higher direct images]
+\label{remark-compute-direct-image}
+Let $p$ be a prime number. Let
+$(S, \mathcal{I}, \gamma) \to (S', \mathcal{I}', \gamma')$ be
+a morphism of divided power schemes over $\mathbf{Z}_{(p)}$. Let
+$$
+\xymatrix{
+X \ar[r]_f \ar[d] & X' \ar[d] \\
+S_0 \ar[r] & S'_0
+}
+$$
+be a commutative diagram of morphisms of schemes and assume $p$ is
+locally nilpotent on $X$ and $X'$. Let $\mathcal{F}$ be an
+$\mathcal{O}_{X/S}$-module on $\text{Cris}(X/S)$. Then
+$Rf_{\text{cris}, *}\mathcal{F}$ can be computed as follows.
+
+\medskip\noindent
+Given an object $(U', T', \delta')$ of $\text{Cris}(X'/S')$ set
+$U = X \times_{X'} U' = f^{-1}(U')$ (an open subscheme of $X$). Denote
+$(T_0, T, \delta)$ the divided power scheme over $S$ such that
+$$
+\xymatrix{
+T \ar[r] \ar[d] & T' \ar[d] \\
+S \ar[r] & S'
+}
+$$
+is cartesian in the category of divided power schemes, see
+Lemma \ref{lemma-fibre-product}. There is an
+induced morphism $U \to T_0$ and we obtain a morphism
+$(U/T)_{\text{cris}} \to (X/S)_{\text{cris}}$, see
+Remark \ref{remark-functoriality-cris}.
+Let $\mathcal{F}_U$ be the pullback of $\mathcal{F}$.
+Let $\tau_{U/T} : (U/T)_{\text{cris}} \to T_{Zar}$ be the structure morphism.
+Then we have
+\begin{equation}
+\label{equation-identify-pushforward}
+\left(Rf_{\text{cris}, *}\mathcal{F}\right)_{T'} =
+R(T \to T')_*\left(R\tau_{U/T, *} \mathcal{F}_U \right)
+\end{equation}
+where the left hand side is the restriction (see
+Section \ref{section-sheaves}).
+
+\medskip\noindent
+Hints: First, show that $\text{Cris}(U/T)$ is the localization (in the sense
+of Sites, Lemma \ref{sites-lemma-localize-topos-site}) of $\text{Cris}(X/S)$
+at the sheaf of sets $f_{\text{cris}}^{-1}h_{(U', T', \delta')}$. Next, reduce
+the statement to the case where $\mathcal{F}$ is an injective module
+and pushforward of modules using that the pullback of an injective
+$\mathcal{O}_{X/S}$-module is an injective $\mathcal{O}_{U/T}$-module on
+$\text{Cris}(U/T)$. Finally, check the result holds for plain pushforward.
+\end{remark}
+
+\begin{remark}[Mayer-Vietoris]
+\label{remark-mayer-vietoris}
+In the situation of Remark \ref{remark-compute-direct-image}
+suppose we have an open covering $X = X' \cup X''$. Denote
+$X''' = X' \cap X''$. Let $f'$, $f''$, and $f'''$ be the restrictions of $f$
+to $X'$, $X''$, and $X'''$. Moreover, let $\mathcal{F}'$, $\mathcal{F}''$,
+and $\mathcal{F}'''$ be the restrictions of $\mathcal{F}$ to the crystalline
+sites of $X'$, $X''$, and $X'''$. Then there exists a distinguished triangle
+$$
+Rf_{\text{cris}, *}\mathcal{F}
+\longrightarrow
+Rf'_{\text{cris}, *}\mathcal{F}' \oplus Rf''_{\text{cris}, *}\mathcal{F}''
+\longrightarrow
+Rf'''_{\text{cris}, *}\mathcal{F}'''
+\longrightarrow
+Rf_{\text{cris}, *}\mathcal{F}[1]
+$$
+in $D(\mathcal{O}_{X'/S'})$.
+
+\medskip\noindent
+Hints: This is a formal consequence of the fact that the subcategories
+$\text{Cris}(X'/S)$, $\text{Cris}(X''/S)$, $\text{Cris}(X'''/S)$ correspond
+to open subobjects of the final sheaf on $\text{Cris}(X/S)$ and that the
+last is the intersection of the first two.
+\end{remark}
+
+\begin{remark}[{\v C}ech complex]
+\label{remark-cech-complex}
+Let $p$ be a prime number. Let $(A, I, \gamma)$ be a divided power
+ring with $A$ a $\mathbf{Z}_{(p)}$-algebra. Set $S = \Spec(A)$ and
+$S_0 = \Spec(A/I)$. Let $X$ be a separated\footnote{This assumption is
+not strictly necessary, as using hypercoverings the construction of the
+remark can be extended to the general case.} scheme over
+$S_0$ such that $p$ is locally nilpotent on $X$. Let $\mathcal{F}$ be a
+crystal in quasi-coherent $\mathcal{O}_{X/S}$-modules.
+
+\medskip\noindent
+Choose an affine open covering
+$X = \bigcup_{\lambda \in \Lambda} U_\lambda$ of $X$.
+Write $U_\lambda = \Spec(C_\lambda)$. Choose a polynomial algebra
+$P_\lambda$ over $A$ and a surjection $P_\lambda \to C_\lambda$.
+Having fixed these choices we can construct a {\v C}ech complex which
+computes $R\Gamma(\text{Cris}(X/S), \mathcal{F})$.
+
+\medskip\noindent
+Given $n \geq 0$ and $\lambda_0, \ldots, \lambda_n \in \Lambda$
+write $U_{\lambda_0 \ldots \lambda_n} = U_{\lambda_0} \cap \ldots
+\cap U_{\lambda_n}$. This is an affine scheme by assumption. Write
+$U_{\lambda_0 \ldots \lambda_n} = \Spec(C_{\lambda_0 \ldots \lambda_n})$.
+Set
+$$
+P_{\lambda_0 \ldots \lambda_n} =
+P_{\lambda_0} \otimes_A \ldots \otimes_A P_{\lambda_n}
+$$
+which comes with a canonical surjection onto $C_{\lambda_0 \ldots \lambda_n}$.
+Denote by $J_{\lambda_0 \ldots \lambda_n}$ the kernel and let
+$D_{\lambda_0 \ldots \lambda_n}$ be
+the $p$-adically completed divided power envelope of
+$J_{\lambda_0 \ldots \lambda_n}$ in $P_{\lambda_0 \ldots \lambda_n}$
+relative to $\gamma$. Let $M_{\lambda_0 \ldots \lambda_n}$ be the
+$D_{\lambda_0 \ldots \lambda_n}$-module corresponding
+to the restriction of $\mathcal{F}$ to
+$\text{Cris}(U_{\lambda_0 \ldots \lambda_n}/S)$ via
+Proposition \ref{proposition-crystals-on-affine}.
+By construction we obtain a cosimplicial divided power ring $D(*)$
+having in degree $n$ the ring
+$$
+D(n) =
+\prod\nolimits_{\lambda_0 \ldots \lambda_n}
+D_{\lambda_0 \ldots \lambda_n}
+$$
+(use that divided power envelopes are functorial and the trivial
+cosimplicial structure on the ring $P(*)$ defined similarly).
+Since $M_{\lambda_0 \ldots \lambda_n}$ is the ``value'' of $\mathcal{F}$
+on the objects $\Spec(D_{\lambda_0 \ldots \lambda_n})$ we see that
+$M(*)$ defined by the rule
+$$
+M(n) = \prod\nolimits_{\lambda_0 \ldots \lambda_n}
+M_{\lambda_0 \ldots \lambda_n}
+$$
+forms a cosimplicial $D(*)$-module. Now we claim that we have
+$$
+R\Gamma(\text{Cris}(X/S), \mathcal{F}) = s(M(*))
+$$
+Here $s(-)$ denotes the cochain complex associated to a cosimplicial
+module (see
+Simplicial, Section \ref{simplicial-section-dold-kan-cosimplicial}).
+
+\medskip\noindent
+Hints: The proof of this is similar to the proof of
+Proposition \ref{proposition-compute-cohomology} (in particular
+the result holds for any module satisfying the assumptions of
+that proposition).
+\end{remark}
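+
+\noindent
+Unwinding the definition from
+Simplicial, Section \ref{simplicial-section-dold-kan-cosimplicial},
+the complex $s(M(*))$ has the modules $M(n)$ as terms and the
+alternating sum of the coface maps as differential:
+$$
+s(M(*)) :\quad M(0) \to M(1) \to M(2) \to \ldots,
+\qquad
+d = \sum\nolimits_{i = 0}^n (-1)^i \delta^n_i : M(n - 1) \to M(n)
+$$
+where $\delta^n_i$, $i = 0, \ldots, n$, are the coface maps of the
+cosimplicial module $M(*)$.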
+
+\begin{remark}[Alternating {\v C}ech complex]
+\label{remark-alternating-cech-complex}
+Let $p$ be a prime number. Let $(A, I, \gamma)$ be a divided power
+ring with $A$ a $\mathbf{Z}_{(p)}$-algebra. Set $S = \Spec(A)$ and
+$S_0 = \Spec(A/I)$. Let $X$ be a separated quasi-compact scheme
+over $S_0$ such that $p$ is locally nilpotent on $X$. Let
+$\mathcal{F}$ be a crystal in quasi-coherent $\mathcal{O}_{X/S}$-modules.
+
+\medskip\noindent
+Choose a finite affine open covering
+$X = \bigcup_{\lambda \in \Lambda} U_\lambda$ of $X$
+and a total ordering on $\Lambda$.
+Write $U_\lambda = \Spec(C_\lambda)$. Choose a polynomial algebra
+$P_\lambda$ over $A$ and a surjection $P_\lambda \to C_\lambda$.
+Having fixed these choices we can construct an alternating
+{\v C}ech complex which computes $R\Gamma(\text{Cris}(X/S), \mathcal{F})$.
+
+\medskip\noindent
+We are going to use the notation introduced in
+Remark \ref{remark-cech-complex}.
+Denote $\Omega_{\lambda_0 \ldots \lambda_n}$
+the $p$-adically completed module of differentials of
+$D_{\lambda_0 \ldots \lambda_n}$ over $A$ compatible with the divided power
+structure. Let $\nabla$ be the integrable connection on
+$M_{\lambda_0 \ldots \lambda_n}$ coming from
+Proposition \ref{proposition-crystals-on-affine}.
+Consider the double complex $M^{\bullet, \bullet}$ with
+terms
+$$
+M^{n, m} =
+\bigoplus\nolimits_{\lambda_0 < \ldots < \lambda_n}
+M_{\lambda_0 \ldots \lambda_n}
+\otimes^\wedge_{D_{\lambda_0 \ldots \lambda_n}}
+\Omega^m_{D_{\lambda_0 \ldots \lambda_n}}.
+$$
+For the differential $d_1$ (increasing $n$) we use the usual
+{\v C}ech differential and for the differential $d_2$ we use
+the connection, i.e., the differential of the de Rham complex.
+We claim that
+$$
+R\Gamma(\text{Cris}(X/S), \mathcal{F}) = \text{Tot}(M^{\bullet, \bullet})
+$$
+Here $\text{Tot}(-)$ denotes the total complex associated to a
+double complex, see
+Homology, Definition \ref{homology-definition-associated-simple-complex}.
+
+\medskip\noindent
+Hints: We have
+$$
+R\Gamma(\text{Cris}(X/S), \mathcal{F}) = R\Gamma(\text{Cris}(X/S),
+\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega_{X/S}^\bullet)
+$$
+by Proposition \ref{proposition-compare-with-de-Rham}.
+The right hand side of the formula is simply the alternating {\v C}ech complex
+for the covering $X = \bigcup_{\lambda \in \Lambda} U_\lambda$
+(which induces an open covering of the final sheaf of $\text{Cris}(X/S)$)
+and the complex $\mathcal{F} \otimes_{\mathcal{O}_{X/S}} \Omega_{X/S}^\bullet$,
+see Proposition \ref{proposition-compute-cohomology-crystal}.
+Now the result follows from a general result in cohomology on sites,
+namely that the alternating {\v C}ech complex computes the cohomology
+provided it gives the correct answer on all the pieces (insert future
+reference here).
+\end{remark}
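+
+\noindent
+Here, following the sign convention of Homology,
+Definition \ref{homology-definition-associated-simple-complex},
+the total complex has terms and differential
+$$
+\text{Tot}(M^{\bullet, \bullet})^k =
+\bigoplus\nolimits_{n + m = k} M^{n, m},
+\qquad
+d = d_1 + (-1)^n d_2 \quad\text{on } M^{n, m}
+$$
+with $d_1$ the {\v C}ech differential and $d_2$ the de Rham differential
+as above.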
+
+\begin{remark}[Quasi-coherence]
+\label{remark-quasi-coherent}
+In the situation of Remark \ref{remark-compute-direct-image}
+assume that $S \to S'$ is quasi-compact and quasi-separated and
+that $X \to S_0$ is quasi-compact and quasi-separated. Then for a crystal
+in quasi-coherent $\mathcal{O}_{X/S}$-modules $\mathcal{F}$
+the sheaves $R^if_{\text{cris}, *}\mathcal{F}$ are locally quasi-coherent.
+
+\medskip\noindent
+Hints: We have to show that the restrictions to $T'$ are quasi-coherent
+$\mathcal{O}_{T'}$-modules, where $(U', T', \delta')$ is any object of
+$\text{Cris}(X'/S')$. It suffices to do this when $T'$ is affine.
+We use the formula (\ref{equation-identify-pushforward}),
+the fact that $T \to T'$ is quasi-compact and quasi-separated (as $T$
+is affine over the base change of $T'$ by $S \to S'$), and
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images}
+to see that it suffices to show that the sheaves
+$R^i\tau_{U/T, *}\mathcal{F}_U$ are quasi-coherent.
+Note that $U \to T_0$ is also quasi-compact and quasi-separated, see
+Schemes, Lemma \ref{schemes-lemma-quasi-compact-permanence}.
+
+\medskip\noindent
+This reduces us to proving that $R^i\tau_{X/S, *}\mathcal{F}$
+is quasi-coherent on $S$ in the case that $p$ is locally nilpotent on $S$. Here
+$\tau_{X/S}$ is the structure morphism, see
+Remark \ref{remark-structure-morphism}.
+We may work locally on $S$, hence we may assume $S$ affine
+(see Lemma \ref{lemma-localize}). Induction on the number
+of affines covering $X$ and Mayer-Vietoris
+(Remark \ref{remark-mayer-vietoris}) reduces the question to
+the case where $X$ is also affine (as in the proof of
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images}).
+Say $X = \Spec(C)$ and $S = \Spec(A)$ so that $(A, I, \gamma)$ and
+$A \to C$ are as
+in Situation \ref{situation-affine}. Choose a polynomial algebra
+$P$ over $A$ and a surjection $P \to C$ as in
+Section \ref{section-quasi-coherent-crystals}.
+Let $(M, \nabla)$ be the module corresponding to $\mathcal{F}$, see
+Proposition \ref{proposition-crystals-on-affine}.
+Applying
+Proposition \ref{proposition-compute-cohomology-crystal}
+we see that $R\Gamma(\text{Cris}(X/S), \mathcal{F})$ is represented by
+$M \otimes_D \Omega_D^*$. Note that completion isn't necessary
+as $p$ is nilpotent in $A$! We have to show that this is compatible
+with taking principal opens in $S = \Spec(A)$. Suppose that $g \in A$.
+Then we conclude that similarly $R\Gamma(\text{Cris}(X_g/S_g), \mathcal{F})$
+is computed by $M_g \otimes_{D_g} \Omega_{D_g}^*$ (again this uses that
+$p$-adic completion isn't necessary). Hence we conclude because localization
+is an exact functor on $A$-modules.
+\end{remark}
+
+\begin{remark}[Boundedness]
+\label{remark-bounded-cohomology}
+In the situation of Remark \ref{remark-compute-direct-image}
+assume that $S \to S'$ is quasi-compact and quasi-separated and
+that $X \to S_0$ is of finite type and quasi-separated. Then there exists
+an integer $i_0$ such that for any crystal
+in quasi-coherent $\mathcal{O}_{X/S}$-modules $\mathcal{F}$
+we have $R^if_{\text{cris}, *}\mathcal{F} = 0$ for all $i > i_0$.
+
+\medskip\noindent
+Hints: Arguing as in Remark \ref{remark-quasi-coherent} (using
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images})
+we reduce to proving that $H^i(\text{Cris}(X/S), \mathcal{F}) = 0$ for $i \gg 0$
+in the situation of Proposition \ref{proposition-compute-cohomology-crystal}
+when $C$ is a finite type algebra over $A$. This is clear as we can
+choose a finite polynomial algebra and we see that $\Omega^i_D = 0$
+for $i \gg 0$.
+\end{remark}
+
+\begin{remark}[Specific boundedness]
+\label{remark-bounded-cohomology-over-point}
+In Situation \ref{situation-global} let $\mathcal{F}$ be a crystal in
+quasi-coherent $\mathcal{O}_{X/S}$-modules. Assume that $S_0$
+has a unique point and that $X \to S_0$ is of finite presentation.
+\begin{enumerate}
+\item If $\dim X = d$ and $X/S_0$ has embedding dimension $e$, then
+$H^i(\text{Cris}(X/S), \mathcal{F}) = 0$ for $i > d + e$.
+\item If $X$ is separated and can be covered by $q$ affines, and
+$X/S_0$ has embedding dimension $e$, then
+$H^i(\text{Cris}(X/S), \mathcal{F}) = 0$ for $i > q + e$.
+\end{enumerate}
+Hints: In case (1) we can use that
+$$
+H^i(\text{Cris}(X/S), \mathcal{F}) = H^i(X_{Zar}, Ru_{X/S, *}\mathcal{F})
+$$
+and that $Ru_{X/S, *}\mathcal{F}$ is locally calculated by a de Rham
+complex constructed using an embedding of $X$ into a smooth scheme
+of dimension $e$ over $S$
+(see Lemma \ref{lemma-compute-cohomology-crystal-smooth}).
+These de Rham complexes are zero in all degrees $> e$. Hence (1)
+follows from Cohomology, Proposition
+\ref{cohomology-proposition-vanishing-Noetherian}.
+In case (2) we use the alternating {\v C}ech complex (see
+Remark \ref{remark-alternating-cech-complex}) to reduce to the case
+$X$ affine. In the affine case we prove the result using the de Rham complex
+associated to an embedding of $X$ into a smooth scheme of dimension $e$
+over $S$ (it takes some work to construct such a thing).
+\end{remark}
+
+\begin{remark}[Base change map]
+\label{remark-base-change}
+In the situation of Remark \ref{remark-compute-direct-image}
+assume $S = \Spec(A)$ and $S' = \Spec(A')$ are affine.
+Let $\mathcal{F}'$ be an $\mathcal{O}_{X'/S'}$-module.
+Let $\mathcal{F}$ be the pullback of $\mathcal{F}'$.
+Then there is a canonical base change map
+$$
+L(S' \to S)^*R\tau_{X'/S', *}\mathcal{F}'
+\longrightarrow
+R\tau_{X/S, *}\mathcal{F}
+$$
+where $\tau_{X/S}$ and $\tau_{X'/S'}$ are the structure morphisms, see
+Remark \ref{remark-structure-morphism}. On global sections this
+gives a base change map
+\begin{equation}
+\label{equation-base-change-map}
+R\Gamma(\text{Cris}(X'/S'), \mathcal{F}') \otimes^\mathbf{L}_{A'} A
+\longrightarrow
+R\Gamma(\text{Cris}(X/S), \mathcal{F})
+\end{equation}
+in $D(A)$.
+
+\medskip\noindent
+Hint: Compose the very general base change map of
+Cohomology on Sites, Remark \ref{sites-cohomology-remark-base-change}
+with the canonical map
+$Lf_{\text{cris}}^*\mathcal{F}' \to
+f_{\text{cris}}^*\mathcal{F}' = \mathcal{F}$.
+\end{remark}
+
+\begin{remark}[Base change isomorphism]
+\label{remark-base-change-isomorphism}
+The map (\ref{equation-base-change-map}) is an isomorphism provided
+all of the following conditions are satisfied:
+\begin{enumerate}
+\item $p$ is nilpotent in $A'$,
+\item $\mathcal{F}'$ is a crystal in quasi-coherent
+$\mathcal{O}_{X'/S'}$-modules,
+\item $X' \to S'_0$ is a quasi-compact, quasi-separated morphism,
+\item $X = X' \times_{S'_0} S_0$,
+\item $\mathcal{F}'$ is a flat $\mathcal{O}_{X'/S'}$-module,
+\item $X' \to S'_0$ is a local complete intersection morphism (see
+More on Morphisms, Definition \ref{more-morphisms-definition-lci}; this
+holds for example if $X' \to S'_0$ is syntomic or smooth),
+\item $X'$ and $S_0$ are Tor independent over $S'_0$ (see
+More on Algebra, Definition \ref{more-algebra-definition-tor-independent};
+this holds for example if either $S_0 \to S'_0$ or $X' \to S'_0$ is flat).
+\end{enumerate}
+Hints: Condition (1) means that in the arguments below $p$-adic completion
+does nothing and can be ignored.
+Using condition (3) and Mayer-Vietoris (see
+Remark \ref{remark-mayer-vietoris}) this reduces to the case
+where $X'$ is affine. In fact by condition (6), after shrinking
+further, we can assume that $X' = \Spec(C')$ and we are given a presentation
+$C' = A'/I'[x_1, \ldots, x_n]/(\bar f'_1, \ldots, \bar f'_c)$
+where $\bar f'_1, \ldots, \bar f'_c$ is a Koszul-regular sequence in $A'/I'$.
+(This means that smooth locally $\bar f'_1, \ldots, \bar f'_c$ form
+a regular sequence, see More on Algebra,
+Lemma \ref{more-algebra-lemma-Koszul-regular-flat-locally-regular}.)
+We choose a lift of
+$\bar f'_i$ to an element $f'_i \in A'[x_1, \ldots, x_n]$. By (4) we see that
+$X = \Spec(C)$ with $C = A/I[x_1, \ldots, x_n]/(\bar f_1, \ldots, \bar f_c)$
+where $f_i \in A[x_1, \ldots, x_n]$ is the image of $f'_i$.
+By property (7) we see that $\bar f_1, \ldots, \bar f_c$ is a Koszul-regular
+sequence in $A/I[x_1, \ldots, x_n]$. The divided power envelope of
+$I'A'[x_1, \ldots, x_n] + (f'_1, \ldots, f'_c)$ in $A'[x_1, \ldots, x_n]$
+relative to $\gamma'$ is
+$$
+D' = A'[x_1, \ldots, x_n]\langle \xi_1, \ldots, \xi_c \rangle/(\xi_i - f'_i)
+$$
+see Lemma \ref{lemma-describe-divided-power-envelope}. Then you check that
+$\xi_1 - f'_1, \ldots, \xi_c - f'_c$ is a Koszul-regular sequence in the
+ring $A'[x_1, \ldots, x_n]\langle \xi_1, \ldots, \xi_c\rangle$.
+Similarly the divided power envelope of
+$IA[x_1, \ldots, x_n] + (f_1, \ldots, f_c)$ in $A[x_1, \ldots, x_n]$
+relative to $\gamma$ is
+$$
+D = A[x_1, \ldots, x_n]\langle \xi_1, \ldots, \xi_c\rangle/(\xi_i - f_i)
+$$
+and $\xi_1 - f_1, \ldots, \xi_c - f_c$ is a Koszul-regular sequence in the
+ring $A[x_1, \ldots, x_n]\langle \xi_1, \ldots, \xi_c\rangle$.
+It follows that $D' \otimes_{A'}^\mathbf{L} A = D$. Condition (2)
+implies $\mathcal{F}'$ corresponds to a pair $(M', \nabla)$
+consisting of a $D'$-module with connection, see
+Proposition \ref{proposition-crystals-on-affine}.
+Then $M = M' \otimes_{D'} D$ corresponds to the pullback $\mathcal{F}$.
+By assumption (5) we see that $M'$ is a flat $D'$-module, hence
+$$
+M = M' \otimes_{D'} D = M' \otimes_{D'} D' \otimes_{A'}^\mathbf{L} A
+= M' \otimes_{A'}^\mathbf{L} A
+$$
+Since the modules of differentials $\Omega_{D'}$ and $\Omega_D$
+(as defined in Section \ref{section-quasi-coherent-crystals})
+are free $D'$-modules on the same generators we see that
+$$
+M \otimes_D \Omega^\bullet_D =
+M' \otimes_{D'} \Omega^\bullet_{D'} \otimes_{D'} D =
+M' \otimes_{D'} \Omega^\bullet_{D'} \otimes_{A'}^\mathbf{L} A
+$$
+which proves what we want by
+Proposition \ref{proposition-compute-cohomology-crystal}.
+\end{remark}
+
+\begin{remark}[Rlim]
+\label{remark-rlim}
+Let $p$ be a prime number. Let $(A, I, \gamma)$ be a divided power
+ring with $A$ an algebra over $\mathbf{Z}_{(p)}$ with $p$ nilpotent
+in $A/I$. Set $S = \Spec(A)$ and $S_0 = \Spec(A/I)$.
+Let $X$ be a scheme over $S_0$ with $p$ locally
+nilpotent on $X$. Let $\mathcal{F}$ be any
+$\mathcal{O}_{X/S}$-module. For $e \gg 0$ the ideal $(p^e) \subset I$
+is preserved by $\gamma$, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-extend-to-completion}.
+Set $S_e = \Spec(A/p^eA)$ for $e \gg 0$.
+Then $\text{Cris}(X/S_e)$ is a full subcategory of $\text{Cris}(X/S)$
+and we denote $\mathcal{F}_e$ the restriction of $\mathcal{F}$ to
+$\text{Cris}(X/S_e)$. Then
+$$
+R\Gamma(\text{Cris}(X/S), \mathcal{F}) =
+R\lim_e R\Gamma(\text{Cris}(X/S_e), \mathcal{F}_e)
+$$
+
+\medskip\noindent
+Hints: Suffices to prove this for $\mathcal{F}$ injective.
+In this case the sheaves $\mathcal{F}_e$ are injective
+modules too, the transition maps
+$\Gamma(\mathcal{F}_{e + 1}) \to \Gamma(\mathcal{F}_e)$ are
+surjective, and we have
+$\Gamma(\mathcal{F}) = \lim_e \Gamma(\mathcal{F}_e)$ because
+any object of $\text{Cris}(X/S)$ is locally an object of one
+of the categories $\text{Cris}(X/S_e)$ by definition of
+$\text{Cris}(X/S)$.
+\end{remark}
+
+\begin{remark}[Comparison]
+\label{remark-comparison}
+Let $p$ be a prime number. Let $(A, I, \gamma)$ be a divided power
+ring with $p$ nilpotent in $A$. Set $S = \Spec(A)$ and
+$S_0 = \Spec(A/I)$. Let $Y$ be a smooth scheme over $S$ and set
+$X = Y \times_S S_0$. Let
+$\mathcal{F}$ be a crystal in quasi-coherent $\mathcal{O}_{X/S}$-modules.
+Then
+\begin{enumerate}
+\item $\gamma$ extends to a divided power structure on the ideal
+of $X$ in $Y$ so that $(X, Y, \gamma)$ is an object of $\text{Cris}(X/S)$,
+\item the restriction $\mathcal{F}_Y$ (see Section \ref{section-sheaves})
+comes endowed with a canonical integrable connection
+$\nabla : \mathcal{F}_Y \to
+\mathcal{F}_Y \otimes_{\mathcal{O}_Y} \Omega_{Y/S}$, and
+\item we have
+$$
+R\Gamma(\text{Cris}(X/S), \mathcal{F}) =
+R\Gamma(Y, \mathcal{F}_Y \otimes_{\mathcal{O}_Y} \Omega^\bullet_{Y/S})
+$$
+in $D(A)$.
+\end{enumerate}
+Hints: See Divided Power Algebra, Lemma \ref{dpa-lemma-gamma-extends} for (1).
+See Lemma \ref{lemma-automatic-connection} for (2).
+For part (3) note that there is a map, see
+(\ref{equation-restriction}). This map is an isomorphism when
+$X$ is affine, see
+Lemma \ref{lemma-compute-cohomology-crystal-smooth}.
+This shows that $Ru_{X/S, *}\mathcal{F}$ and
+$\mathcal{F}_Y \otimes \Omega^\bullet_{Y/S}$ are quasi-isomorphic
+as complexes on $Y_{Zar} = X_{Zar}$.
+Since $R\Gamma(\text{Cris}(X/S), \mathcal{F}) =
+R\Gamma(X_{Zar}, Ru_{X/S, *}\mathcal{F})$ the result follows.
+\end{remark}
+
+\begin{remark}[Perfectness]
+\label{remark-perfect}
+Let $p$ be a prime number. Let $(A, I, \gamma)$ be a divided power
+ring with $p$ nilpotent in $A$. Set $S = \Spec(A)$ and
+$S_0 = \Spec(A/I)$. Let $X$ be a proper smooth scheme over $S_0$.
+Let $\mathcal{F}$ be a crystal in finite locally free
+quasi-coherent $\mathcal{O}_{X/S}$-modules.
+Then $R\Gamma(\text{Cris}(X/S), \mathcal{F})$ is a
+perfect object of $D(A)$.
+
+\medskip\noindent
+Hints: By Remark \ref{remark-base-change-isomorphism} we have
+$$
+R\Gamma(\text{Cris}(X/S), \mathcal{F}) \otimes_A^\mathbf{L} A/I
+\cong
+R\Gamma(\text{Cris}(X/S_0), \mathcal{F}|_{\text{Cris}(X/S_0)})
+$$
+By Remark \ref{remark-comparison} we have
+$$
+R\Gamma(\text{Cris}(X/S_0), \mathcal{F}|_{\text{Cris}(X/S_0)}) =
+R\Gamma(X, \mathcal{F}_X \otimes \Omega^\bullet_{X/S_0})
+$$
+Using the stupid filtration on the de Rham complex we see that
+the last displayed complex is perfect in $D(A/I)$ as soon as the complexes
+$$
+R\Gamma(X, \mathcal{F}_X \otimes \Omega^q_{X/S_0})
+$$
+are perfect complexes in $D(A/I)$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-two-out-of-three-perfect}.
+This is true by standard arguments
+in coherent cohomology using that $\mathcal{F}_X \otimes \Omega^q_{X/S_0}$
+is a finite locally free sheaf and $X \to S_0$ is proper and flat
+(insert future reference here). Applying
+More on Algebra, Lemma \ref{more-algebra-lemma-perfect-modulo-nilpotent-ideal}
+we see that
+$$
+R\Gamma(\text{Cris}(X/S), \mathcal{F}) \otimes_A^\mathbf{L} A/I^n
+$$
+is a perfect object of $D(A/I^n)$ for all $n$. This isn't quite enough
+unless $A$ is Noetherian. Namely, even though $I$ is locally nilpotent
+by our assumption that $p$ is nilpotent, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-nil},
we cannot conclude that $I^n = 0$ for some $n$. A counterexample
+is $\mathbf{F}_p\langle x \rangle$. To prove it in general when
+$\mathcal{F} = \mathcal{O}_{X/S}$ the argument of
+\url{https://math.columbia.edu/~dejong/wordpress/?p=2227}
+works. When the coefficients $\mathcal{F}$ are non-trivial the
+argument of \cite{Faltings-very} seems to be as follows. Reduce to the
+case $pA = 0$ by More on Algebra, Lemma
+\ref{more-algebra-lemma-perfect-modulo-nilpotent-ideal}.
+In this case the Frobenius map $A \to A$, $a \mapsto a^p$ factors
+as $A \to A/I \xrightarrow{\varphi} A$ (as $x^p = 0$ for $x \in I$). Set
+$X^{(1)} = X \otimes_{A/I, \varphi} A$. The absolute Frobenius morphism
+of $X$ factors through a morphism $F_X : X \to X^{(1)}$ (a kind of
+relative Frobenius). Affine locally if $X = \Spec(C)$ then
+$X^{(1)} = \Spec( C \otimes_{A/I, \varphi} A)$
+and $F_X$ corresponds to $C \otimes_{A/I, \varphi} A \to C$,
+$c \otimes a \mapsto c^pa$. This defines morphisms of ringed topoi
+$$
+(X/S)_{\text{cris}}
+\xrightarrow{(F_X)_{\text{cris}}}
+(X^{(1)}/S)_{\text{cris}}
+\xrightarrow{u_{X^{(1)}/S}}
+\Sh(X^{(1)}_{Zar})
+$$
+whose composition is denoted $\text{Frob}_X$. One then shows that
+$R\text{Frob}_{X, *}\mathcal{F}$ is representable by a
+perfect complex of $\mathcal{O}_{X^{(1)}}$-modules(!)
+by a local calculation.
+\end{remark}
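The key point in the factorization of Frobenius used above is that $a \mapsto a^p$ is additive modulo $p$ (the freshman's dream), so that $c \otimes a \mapsto c^p a$ is a well defined ring map. A minimal sympy check of this standard fact; the prime $5$ is an arbitrary choice of ours:

```python
import sympy as sp

# Freshman's dream in characteristic p: the middle binomial
# coefficients C(p, k) for 0 < k < p are all divisible by the prime p.
p = 5  # an arbitrary prime chosen for the check
x, y = sp.symbols('x y')

diff = sp.expand((x + y)**p - x**p - y**p)
coeffs = sp.Poly(diff, x, y).coeffs()
assert all(c % p == 0 for c in coeffs)
# hence a |-> a^p is a ring endomorphism of any F_p-algebra, which is
# what makes F_X : X -> X^(1), c tensor a |-> c^p a, well defined
```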
+
+\begin{remark}[Complete perfectness]
+\label{remark-complete-perfect}
+Let $p$ be a prime number. Let $(A, I, \gamma)$ be a divided power
+ring with $A$ a $p$-adically complete ring and $p$ nilpotent in $A/I$. Set
+$S = \Spec(A)$ and $S_0 = \Spec(A/I)$. Let $X$ be a proper
+smooth scheme over $S_0$. Let $\mathcal{F}$ be a crystal in
+finite locally free quasi-coherent $\mathcal{O}_{X/S}$-modules.
+Then $R\Gamma(\text{Cris}(X/S), \mathcal{F})$ is a
+perfect object of $D(A)$.
+
+\medskip\noindent
+Hints: We know that $K = R\Gamma(\text{Cris}(X/S), \mathcal{F})$
+is the derived limit $K = R\lim K_e$ of the cohomologies over $A/p^eA$,
+see Remark \ref{remark-rlim}.
+Each $K_e$ is a perfect complex of $D(A/p^eA)$ by
+Remark \ref{remark-perfect}.
+Since $A$ is $p$-adically complete the result
+follows from
+More on Algebra, Lemma \ref{more-algebra-lemma-Rlim-perfect-gives-complete}.
+\end{remark}
+
+\begin{remark}[Complete comparison]
+\label{remark-complete-comparison}
+Let $p$ be a prime number. Let $(A, I, \gamma)$ be a divided power
+ring with $A$ a Noetherian $p$-adically complete ring and $p$ nilpotent
+in $A/I$. Set $S = \Spec(A)$ and
+$S_0 = \Spec(A/I)$. Let $Y$ be a proper smooth scheme over $S$ and set
+$X = Y \times_S S_0$. Let $\mathcal{F}$ be a finite type crystal in
+quasi-coherent $\mathcal{O}_{X/S}$-modules. Then
+\begin{enumerate}
+\item there exists a coherent $\mathcal{O}_Y$-module $\mathcal{F}_Y$
+endowed with integrable connection
+$$
+\nabla :
+\mathcal{F}_Y
+\longrightarrow
+\mathcal{F}_Y \otimes_{\mathcal{O}_Y} \Omega_{Y/S}
+$$
+such that $\mathcal{F}_Y/p^e\mathcal{F}_Y$ is the module with connection
+over $A/p^eA$ found in Remark \ref{remark-comparison}, and
+\item we have
+$$
+R\Gamma(\text{Cris}(X/S), \mathcal{F}) =
+R\Gamma(Y, \mathcal{F}_Y \otimes_{\mathcal{O}_Y} \Omega^\bullet_{Y/S})
+$$
+in $D(A)$.
+\end{enumerate}
+Hints: The existence of $\mathcal{F}_Y$ is Grothendieck's existence theorem
+(insert future reference here). The isomorphism of cohomologies follows
+as both sides are computed as $R\lim$ of the versions modulo $p^e$
+(see Remark \ref{remark-rlim} for the left hand side; use the theorem
+on formal functions, see
+Cohomology of Schemes, Theorem \ref{coherent-theorem-formal-functions}
+for the right hand side).
+Each of the versions modulo $p^e$ are isomorphic by
+Remark \ref{remark-comparison}.
+\end{remark}
+
+
+
+
+\section{Pulling back along purely inseparable maps}
+\label{section-pull-back-along-pth-root}
+
+\noindent
+By an $\alpha_p$-cover we mean a morphism of the form
+$$
+X' = \Spec(C[z]/(z^p - c)) \longrightarrow \Spec(C) = X
+$$
+where $C$ is an $\mathbf{F}_p$-algebra and $c \in C$. Equivalently,
+$X'$ is an $\alpha_p$-torsor over $X$. An {\it iterated
+$\alpha_p$-cover}\footnote{This is nonstandard notation.}
+is a morphism of schemes in characteristic
+$p$ which is locally on the target a composition of finitely many
+$\alpha_p$-covers. In this section we prove that pullback along
+such a morphism induces a quasi-isomorphism on crystalline cohomology
+after inverting the prime $p$. In fact, we prove a precise version
+of this result. We begin with a preliminary lemma whose formulation
+needs some notation.
+
+\medskip\noindent
+Assume we have a ring map $B \to B'$ and quotients $\Omega_B \to \Omega$ and
+$\Omega_{B'} \to \Omega'$ satisfying the assumptions of
+Remark \ref{remark-base-change-connection}.
+Thus (\ref{equation-base-change-map-complexes}) provides a
+canonical map of complexes
+$$
+c_M^\bullet :
+M \otimes_B \Omega^\bullet
+\longrightarrow
+M \otimes_B (\Omega')^\bullet
+$$
+for all $B$-modules $M$ endowed with integrable connection
+$\nabla : M \to M \otimes_B \Omega_B$.
+
+\medskip\noindent
+Suppose we have $a \in B$, $z \in B'$, and a map $\theta : B' \to B'$
+satisfying the following assumptions
+\begin{enumerate}
+\item
+\label{item-d-a-zero}
+$\text{d}(a) = 0$,
+\item
+\label{item-direct-sum}
+$\Omega' = B' \otimes_B \Omega \oplus B'\text{d}z$; we write
+$\text{d}(f) = \text{d}_1(f) + \partial_z(f) \text{d}z$
+with $\text{d}_1(f) \in B' \otimes \Omega$ and $\partial_z(f) \in B'$
+for all $f \in B'$,
+\item
+\label{item-theta-linear}
+$\theta : B' \to B'$ is $B$-linear,
+\item
+\label{item-integrate}
+$\partial_z \circ \theta = a$,
+\item
+\label{item-injective}
+$B \to B'$ is universally injective (and hence $\Omega \to \Omega'$
+is injective),
+\item
+\label{item-factor}
+$af - \theta(\partial_z(f)) \in B$ for all $f \in B'$,
+\item
+\label{item-horizontal}
+$(\theta \otimes 1)(\text{d}_1(f)) - \text{d}_1(\theta(f)) \in \Omega$
+for all $f \in B'$ where
$\theta \otimes 1 : B' \otimes \Omega \to B' \otimes \Omega$ is the map
induced by $\theta$ on the first factor.
+\end{enumerate}
+These conditions are not logically independent.
+For example, assumption (\ref{item-integrate}) implies
+that $\partial_z(af - \theta(\partial_z(f))) = 0$.
+Hence if the image of $B \to B'$ is the collection of
+elements annihilated by $\partial_z$, then (\ref{item-factor})
+follows. A similar argument can be made for condition (\ref{item-horizontal}).
+
+\begin{lemma}
+\label{lemma-find-homotopy}
+In the situation above there exists a map of complexes
+$$
+e_M^\bullet :
+M \otimes_B (\Omega')^\bullet
+\longrightarrow
+M \otimes_B \Omega^\bullet
+$$
+such that $c_M^\bullet \circ e_M^\bullet$
+and $e_M^\bullet \circ c_M^\bullet$ are homotopic to
+multiplication by $a$.
+\end{lemma}
+
+\begin{proof}
+In this proof all tensor products are over $B$.
+Assumption (\ref{item-direct-sum}) implies that
+$$
+M \otimes (\Omega')^i =
+(B' \otimes M \otimes \Omega^i)
+\oplus
+(B' \text{d}z \otimes M \otimes \Omega^{i - 1})
+$$
+for all $i \geq 0$. A collection of additive generators for
+$M \otimes (\Omega')^i$ is formed by elements of the form
+$f \omega$ and elements of the form $f \text{d}z \wedge \eta$
+where $f \in B'$, $\omega \in M \otimes \Omega^i$, and
+$\eta \in M \otimes \Omega^{i - 1}$.
+
+\medskip\noindent
+For $f \in B'$ we write
+$$
+\epsilon(f) = af - \theta(\partial_z(f))
+\quad\text{and}\quad
+\epsilon'(f) = (\theta \otimes 1)(\text{d}_1(f)) - \text{d}_1(\theta(f))
+$$
+so that $\epsilon(f) \in B$ and $\epsilon'(f) \in \Omega$ by
+assumptions (\ref{item-factor}) and (\ref{item-horizontal}).
+We define $e_M^\bullet$ by the rules
+$e^i_M(f\omega) = \epsilon(f) \omega$ and
+$e^i_M(f \text{d}z \wedge \eta) = \epsilon'(f) \wedge \eta$.
+We will see below that the collection of maps $e^i_M$ is a map of complexes.
+
+\medskip\noindent
+We define
+$$
+h^i : M \otimes_B (\Omega')^i \longrightarrow M \otimes_B (\Omega')^{i - 1}
+$$
+by the rules $h^i(f \omega) = 0$ and
+$h^i(f \text{d}z \wedge \eta) = \theta(f) \eta$
+for elements as above. We claim that
+$$
+\text{d} \circ h + h \circ \text{d} = a - c_M^\bullet \circ e_M^\bullet
+$$
+Note that multiplication by $a$ is a map of complexes
+by (\ref{item-d-a-zero}). Hence, since $c_M^\bullet$ is an injective map
+of complexes by assumption (\ref{item-injective}), we conclude that
+$e_M^\bullet$ is a map of complexes. To prove the claim we compute
+\begin{align*}
+(\text{d} \circ h + h \circ \text{d})(f \omega)
+& =
+h\left(\text{d}(f) \wedge \omega + f \nabla(\omega)\right)
+\\
+& =
+\theta(\partial_z(f)) \omega
+\\
+& =
+a f\omega - \epsilon(f)\omega
+\\
+& =
+a f \omega - c^i_M(e^i_M(f\omega))
+\end{align*}
+The second equality because $\text{d}z$ does not occur in $\nabla(\omega)$
and the third equality by the definition of $\epsilon(f)$ and
assumption (\ref{item-factor}). Similarly, we have
+\begin{align*}
+(\text{d} \circ h + h \circ \text{d})(f \text{d}z \wedge \eta)
+& =
+\text{d}(\theta(f) \eta) +
+h\left(\text{d}(f) \wedge \text{d}z \wedge \eta -
+f \text{d}z \wedge \nabla(\eta)\right)
+\\
+& =
+\text{d}(\theta(f)) \wedge \eta + \theta(f) \nabla(\eta)
+- (\theta \otimes 1)(\text{d}_1(f)) \wedge \eta
+- \theta(f) \nabla(\eta)
+\\
+& =
+\text{d}_1(\theta(f)) \wedge \eta +
+\partial_z(\theta(f)) \text{d}z \wedge \eta -
+(\theta \otimes 1)(\text{d}_1(f)) \wedge \eta
+\\
+& =
+a f \text{d}z \wedge \eta - \epsilon'(f) \wedge \eta \\
+& = a f \text{d}z \wedge \eta - c^i_M(e^i_M(f \text{d}z \wedge \eta))
+\end{align*}
+The second equality because
+$\text{d}(f) \wedge \text{d}z \wedge \eta =
+- \text{d}z \wedge \text{d}_1(f) \wedge \eta$.
+The fourth equality by assumption (\ref{item-integrate}).
+On the other hand it is immediate from the definitions
+that $e^i_M(c^i_M(\omega)) = \epsilon(1) \omega = a \omega$.
+This proves the lemma.
+\end{proof}
+
+\begin{example}
+\label{example-integrate}
+A standard example of the situation above occurs when
+$B' = B\langle z \rangle$ is the divided power polynomial ring
+over a divided power ring $(B, J, \delta)$ with divided powers
+$\delta'$ on $J' = B'_{+} + JB' \subset B'$. Namely, we take
+$\Omega = \Omega_{B, \delta}$ and $\Omega' = \Omega_{B', \delta'}$.
+In this case we can take $a = 1$ and
+$$
+\theta( \sum b_m z^{[m]} ) = \sum b_m z^{[m + 1]}
+$$
+Note that
+$$
+f - \theta(\partial_z(f)) = f(0)
+$$
+equals the constant term. It follows that in this case
+Lemma \ref{lemma-find-homotopy}
+recovers the crystalline Poincar\'e lemma
+(Lemma \ref{lemma-relative-poincare}).
+\end{example}
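In this divided power polynomial example the assumptions of Lemma \ref{lemma-find-homotopy} can be checked mechanically. The following sympy sketch of ours works over $\mathbf{Q}$, where $z^{[m]} = z^m/m!$ makes literal sense, and verifies assumptions (\ref{item-integrate}) and (\ref{item-factor}) with $a = 1$:

```python
import sympy as sp

z = sp.symbols('z')

def dpow(m):
    # divided power z^[m]; over Q this is literally z^m / m!
    return z**m / sp.factorial(m)

def theta(m):
    # theta(z^[m]) = z^[m + 1], as in the example
    return dpow(m + 1)

for m in range(8):
    # assumption (4) with a = 1: partial_z(theta(z^[m])) = z^[m]
    assert sp.simplify(sp.diff(theta(m), z) - dpow(m)) == 0
    # assumption (6): f - theta(partial_z(f)) equals the constant term f(0)
    f = dpow(m)
    correction = theta(m - 1) if m >= 1 else 0  # theta(partial_z(z^[m]))
    assert sp.simplify(f - correction - f.subs(z, 0)) == 0
```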
+
+\begin{lemma}
+\label{lemma-computation}
+In Situation \ref{situation-affine}. Assume $D$ and $\Omega_D$ are as in
+(\ref{equation-D}) and (\ref{equation-omega-D}).
+Let $\lambda \in D$. Let $D'$ be the $p$-adic completion of
+$$
+D[z]\langle \xi \rangle/(\xi - (z^p - \lambda))
+$$
+and let $\Omega_{D'}$ be the $p$-adic completion of the module of
+divided power differentials of $D'$ over $A$. For any pair $(M, \nabla)$
+over $D$ satisfying (\ref{item-complete}), (\ref{item-connection}),
+(\ref{item-integrable}), and (\ref{item-topologically-quasi-nilpotent})
+the canonical map of complexes (\ref{equation-base-change-map-complexes})
+$$
+c_M^\bullet : M \otimes_D^\wedge \Omega^\bullet_D
+\longrightarrow
+M \otimes_D^\wedge \Omega^\bullet_{D'}
+$$
+has the following property: There exists a map $e_M^\bullet$
+in the opposite direction such that both $c_M^\bullet \circ e_M^\bullet$
+and $e_M^\bullet \circ c_M^\bullet$ are homotopic to multiplication by $p$.
+\end{lemma}
+
+\begin{proof}
+We will prove this using Lemma \ref{lemma-find-homotopy} with $a = p$.
+Thus we have to find $\theta : D' \to D'$ and prove
+(\ref{item-d-a-zero}), (\ref{item-direct-sum}), (\ref{item-theta-linear}),
+(\ref{item-integrate}), (\ref{item-injective}), (\ref{item-factor}),
+(\ref{item-horizontal}). We first collect some information about the rings
+$D$ and $D'$ and the modules $\Omega_D$ and $\Omega_{D'}$.
+
+\medskip\noindent
+Writing
+$$
+D[z]\langle \xi \rangle/(\xi - (z^p - \lambda))
+=
+D\langle \xi \rangle[z]/(z^p - \xi - \lambda)
+$$
+we see that $D'$ is the $p$-adic completion of the free $D$-module
+$$
+\bigoplus\nolimits_{i = 0, \ldots, p - 1}
+\bigoplus\nolimits_{n \geq 0}
+z^i \xi^{[n]} D
+$$
+where $\xi^{[0]} = 1$.
+It follows that $D \to D'$ has a continuous $D$-linear section, in particular
+$D \to D'$ is universally injective, i.e., (\ref{item-injective}) holds.
+We think of $D'$ as a divided power algebra
+over $A$ with divided power ideal $\overline{J}' = \overline{J}D' + (\xi)$.
+Then $D'$ is also the $p$-adic completion of the divided power envelope
+of the ideal generated by $z^p - \lambda$ in $D$, see
+Lemma \ref{lemma-describe-divided-power-envelope}. Hence
+$$
+\Omega_{D'} = \Omega_D \otimes_D^\wedge D' \oplus D'\text{d}z
+$$
+by Lemma \ref{lemma-module-differentials-divided-power-envelope}.
+This proves (\ref{item-direct-sum}). Note that (\ref{item-d-a-zero})
+is obvious.
+
+\medskip\noindent
+At this point we construct $\theta$. (We wrote a PARI/gp script theta.gp
+verifying some of the formulas in this proof which can be found in the
+scripts subdirectory of the Stacks project.) Before we do so we compute
+the derivative of the elements $z^i \xi^{[n]}$. We have
+$\text{d}z^i = i z^{i - 1} \text{d}z$. For $n \geq 1$ we have
+$$
+\text{d}\xi^{[n]} =
+\xi^{[n - 1]} \text{d}\xi =
+- \xi^{[n - 1]}\text{d}\lambda + p z^{p - 1} \xi^{[n - 1]}\text{d}z
+$$
+because $\xi = z^p - \lambda$. For $0 < i < p$ and $n \geq 1$ we have
+\begin{align*}
+\text{d}(z^i\xi^{[n]})
+& =
+iz^{i - 1}\xi^{[n]}\text{d}z + z^i\xi^{[n - 1]}\text{d}\xi \\
+& =
+iz^{i - 1}\xi^{[n]}\text{d}z + z^i\xi^{[n - 1]}\text{d}(z^p - \lambda) \\
+& =
+- z^i\xi^{[n - 1]}\text{d}\lambda +
+(iz^{i - 1}\xi^{[n]} + pz^{i + p - 1}\xi^{[n - 1]})\text{d}z \\
+& =
+- z^i\xi^{[n - 1]}\text{d}\lambda +
+(iz^{i - 1}\xi^{[n]} + pz^{i - 1}(\xi + \lambda)\xi^{[n - 1]})\text{d}z \\
+& =
+- z^i\xi^{[n - 1]}\text{d}\lambda +
+((i + pn)z^{i - 1}\xi^{[n]} + p\lambda z^{i - 1}\xi^{[n - 1]})\text{d}z
+\end{align*}
+the last equality because $\xi \xi^{[n - 1]} = n\xi^{[n]}$.
+Thus we see that
+\begin{align*}
+\partial_z(z^i) & = i z^{i - 1} \\
+\partial_z(\xi^{[n]}) & = p z^{p - 1} \xi^{[n - 1]} \\
+\partial_z(z^i\xi^{[n]}) & =
+(i + pn) z^{i - 1} \xi^{[n]} + p \lambda z^{i - 1}\xi^{[n - 1]}
+\end{align*}
+Motivated by these formulas we define $\theta$ by the rules
+$$
+\begin{matrix}
+\theta(z^j)
+& = & p\frac{z^{j + 1}}{j + 1}
& j = 0, \ldots, p - 1, \\
+\theta(z^{p - 1}\xi^{[m]})
+& = & \xi^{[m + 1]}
+& m \geq 1, \\
+\theta(z^j \xi^{[m]})
+& = &
+\frac{p z^{j + 1} \xi^{[m]} - \theta(p\lambda z^j \xi^{[m - 1]})}{(j + 1 + pm)}
+& 0 \leq j < p - 1, m \geq 1
+\end{matrix}
+$$
+where in the last line we use induction on $m$ to define our choice of
+$\theta$. Working this out we get (for $0 \leq j < p - 1$ and $1 \leq m$)
+$$
+\theta(z^j \xi^{[m]}) =
+\textstyle{\frac{p z^{j + 1} \xi^{[m]}}{(j + 1 + pm)} -
+\frac{p^2 \lambda z^{j + 1} \xi^{[m - 1]}}{(j + 1 + pm)(j + 1 + p(m - 1))} +
+\ldots +
+\frac{(-1)^m p^{m + 1} \lambda^m z^{j + 1}}
+{(j + 1 + pm) \ldots (j + 1)}}
+$$
+although we will not use this expression below. It is clear that $\theta$
+extends uniquely to a $p$-adically continuous $D$-linear map on $D'$.
+By construction we have (\ref{item-theta-linear}) and (\ref{item-integrate}).
+It remains to prove (\ref{item-factor}) and (\ref{item-horizontal}).
+
+\medskip\noindent
+Proof of (\ref{item-factor}) and (\ref{item-horizontal}).
+As $\theta$ is $D$-linear and continuous it suffices to prove that
+$p - \theta \circ \partial_z$,
+resp.\ $(\theta \otimes 1) \circ \text{d}_1 - \text{d}_1 \circ \theta$
+gives an element of $D$, resp.\ $\Omega_D$ when evaluated on the
+elements $z^i\xi^{[n]}$\footnote{This can be done by direct computation:
+It turns out that $p - \theta \circ \partial_z$ evaluated on
+$z^i\xi^{[n]}$ gives zero except for $1$ which is mapped to $p$ and
+$\xi$ which is mapped to $-p\lambda$. It turns out that
+$(\theta \otimes 1) \circ \text{d}_1 - \text{d}_1 \circ \theta$
+evaluated on $z^i\xi^{[n]}$ gives zero except for $z^{p - 1}\xi$
+which is mapped to $-\lambda$.}.
+Set $D_0 = \mathbf{Z}_{(p)}[\lambda]$ and
+$D_0' = \mathbf{Z}_{(p)}[z, \lambda]\langle \xi \rangle/(\xi - z^p + \lambda)$.
+Observe that each of the expressions above is an element of
+$D_0'$ or $\Omega_{D_0'}$. Hence it suffices to prove the result
+in the case of $D_0 \to D_0'$. Note that $D_0$ and $D_0'$
+are torsion free rings and that $D_0 \otimes \mathbf{Q} = \mathbf{Q}[\lambda]$
+and $D'_0 \otimes \mathbf{Q} = \mathbf{Q}[z, \lambda]$.
+Hence $D_0 \subset D'_0$ is the subring of elements annihilated
+by $\partial_z$ and (\ref{item-factor})
+follows from (\ref{item-integrate}), see the discussion directly preceding
+Lemma \ref{lemma-find-homotopy}. Similarly, we have
+$\text{d}_1(f) = \partial_\lambda(f)\text{d}\lambda$ hence
+$$
+\left((\theta \otimes 1) \circ \text{d}_1 - \text{d}_1 \circ \theta\right)(f)
+=
+\left(\theta(\partial_\lambda(f)) - \partial_\lambda(\theta(f))\right)
+\text{d}\lambda
+$$
+Applying $\partial_z$ to the coefficient we obtain
+\begin{align*}
+\partial_z\left(
+\theta(\partial_\lambda(f)) - \partial_\lambda(\theta(f))
+\right)
+& =
+p \partial_\lambda(f) - \partial_z(\partial_\lambda(\theta(f))) \\
+& =
+p \partial_\lambda(f) - \partial_\lambda(\partial_z(\theta(f))) \\
+& =
+p \partial_\lambda(f) - \partial_\lambda(p f) = 0
+\end{align*}
+whence the coefficient does not depend on $z$ as desired.
+This finishes the proof of the lemma.
+\end{proof}
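The recursive formulas for $\theta$ in the proof above lend themselves to machine verification, in the spirit of the PARI/gp script theta.gp mentioned there. The following sympy sketch is our own (not that script): it works over $\mathbf{Q}(\lambda)$, where $\xi^{[n]} = \xi^n/n!$ makes sense, checks $\partial_z \circ \theta = p$ on the basis elements $z^j\xi^{[m]}$, and checks one instance of the claim in the footnote:

```python
import sympy as sp

p = 3  # a small odd prime for the check
z, lam = sp.symbols('z lambda')
xi = z**p - lam  # xi = z^p - lambda

def dp(n):
    # divided power xi^[n]; over Q this is xi^n / n!
    return xi**n / sp.factorial(n)

def theta(j, m):
    # theta(z^j xi^[m]) following the three rules in the proof
    if m == 0:
        return sp.Rational(p, j + 1) * z**(j + 1)
    if j == p - 1:
        return dp(m + 1)
    # 0 <= j < p - 1, m >= 1: the recursive rule
    return (p * z**(j + 1) * dp(m)
            - p * lam * theta(j, m - 1)) / (j + 1 + p * m)

# assumption (4) with a = p: partial_z(theta(z^j xi^[m])) = p z^j xi^[m]
for j in range(p):
    for m in range(4):
        f = z**j * dp(m)
        assert sp.expand(sp.diff(theta(j, m), z) - p * f) == 0

# footnote claim: p*xi - theta(partial_z(xi)) = -p*lambda,
# using partial_z(xi) = p z^(p-1) and theta(z^(p-1)) = z^p
assert sp.expand(p * xi - p * theta(p - 1, 0) - (-p * lam)) == 0
```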
+
+\noindent
+Note that an iterated $\alpha_p$-cover $X' \to X$ (as defined in the
+introduction to this section) is finite locally free. Hence if $X$ is
+connected the degree of $X' \to X$ is constant and is a power of $p$.
+
+\begin{lemma}
+\label{lemma-pullback-along-p-power-cover}
+Let $p$ be a prime number. Let $(S, \mathcal{I}, \gamma)$ be a divided power
+scheme over $\mathbf{Z}_{(p)}$ with $p \in \mathcal{I}$. We set
+$S_0 = V(\mathcal{I}) \subset S$. Let $f : X' \to X$ be an iterated
+$\alpha_p$-cover of schemes over $S_0$ with constant degree $q$. Let
+$\mathcal{F}$ be any crystal in quasi-coherent sheaves on $X$ and set
+$\mathcal{F}' = f_{\text{cris}}^*\mathcal{F}$.
+In the distinguished triangle
+$$
+Ru_{X/S, *}\mathcal{F}
+\longrightarrow
+f_*Ru_{X'/S, *}\mathcal{F}'
+\longrightarrow
+E
+\longrightarrow
+Ru_{X/S, *}\mathcal{F}[1]
+$$
+the object $E$ has cohomology sheaves annihilated by $q$.
+\end{lemma}
+
+\begin{proof}
Note that $X' \to X$ is a homeomorphism, hence we can identify the underlying
+topological spaces of $X$ and $X'$. The question is clearly local on $X$,
+hence we may assume $X$, $X'$, and $S$ affine and $X' \to X$ given as a
+composition
+$$
+X' = X_n \to X_{n - 1} \to X_{n - 2} \to \ldots \to X_0 = X
+$$
+where each morphism $X_{i + 1} \to X_i$ is an $\alpha_p$-cover.
+Denote $\mathcal{F}_i$ the pullback of $\mathcal{F}$ to $X_i$.
+It suffices to prove that each of the maps
+$$
+R\Gamma(\text{Cris}(X_i/S), \mathcal{F}_i)
+\longrightarrow
+R\Gamma(\text{Cris}(X_{i + 1}/S), \mathcal{F}_{i + 1})
+$$
+fits into a triangle whose third member has cohomology groups annihilated
+by $p$. (This uses axiom TR4 for the triangulated category $D(X)$. Details
+omitted.)
+
+\medskip\noindent
+Hence we may assume that $S = \Spec(A)$, $X = \Spec(C)$, $X' = \Spec(C')$
+and $C' = C[z]/(z^p - c)$ for some $c \in C$. Choose a polynomial algebra
+$P$ over $A$ and a surjection $P \to C$. Let $D$ be the $p$-adically completed
divided power envelope of $\Ker(P \to C)$ in $P$ as in (\ref{equation-D}).
+Set $P' = P[z]$ with surjection $P' \to C'$ mapping $z$ to the class of $z$
+in $C'$. Choose a lift $\lambda \in D$ of $c \in C$. Then we see that
+the $p$-adically completed divided power envelope $D'$ of
+$\Ker(P' \to C')$ in $P'$ is isomorphic to the $p$-adic completion of
+$D[z]\langle \xi \rangle/(\xi - (z^p - \lambda))$, see
+Lemma \ref{lemma-computation} and its proof.
+Thus we see that the result follows from this lemma
+by the computation of cohomology of crystals in quasi-coherent modules in
+Proposition \ref{proposition-compute-cohomology-crystal}.
+\end{proof}
+
+\noindent
+The bound in the following lemma is probably not optimal.
+
+\begin{lemma}
+\label{lemma-pullback-along-p-power-cover-cohomology}
+With notations and assumptions as in
+Lemma \ref{lemma-pullback-along-p-power-cover}
+the map
+$$
+f^* :
+H^i(\text{Cris}(X/S), \mathcal{F})
+\longrightarrow
+H^i(\text{Cris}(X'/S), \mathcal{F}')
+$$
+has kernel and cokernel annihilated by $q^{i + 1}$.
+\end{lemma}
+
+\begin{proof}
This follows from the fact that $E$ has nonzero cohomology sheaves
only in degrees $\geq -1$, so that the spectral sequence
$H^a(\mathcal{H}^b(E)) \Rightarrow H^{a + b}(E)$ converges.
+This combined with the long exact cohomology sequence associated
+to a distinguished triangle gives the bound.
+\end{proof}
+
+\noindent
+In Situation \ref{situation-global} assume that $p \in \mathcal{I}$.
+Set
+$$
+X^{(1)} = X \times_{S_0, F_{S_0}} S_0.
+$$
+Denote $F_{X/S_0} : X \to X^{(1)}$ the relative Frobenius morphism.
+
+\begin{lemma}
+\label{lemma-pullback-relative-frobenius}
+In the situation above, assume that $X \to S_0$ is smooth of relative
+dimension $d$. Then $F_{X/S_0}$ is an iterated $\alpha_p$-cover
+of degree $p^d$. Hence Lemmas \ref{lemma-pullback-along-p-power-cover} and
+\ref{lemma-pullback-along-p-power-cover-cohomology} apply to this
+situation. In particular, for any crystal in quasi-coherent modules
+$\mathcal{G}$ on $\text{Cris}(X^{(1)}/S)$ the map
+$$
+F_{X/S_0}^* : H^i(\text{Cris}(X^{(1)}/S), \mathcal{G})
+\longrightarrow
+H^i(\text{Cris}(X/S), F_{X/S_0, \text{cris}}^*\mathcal{G})
+$$
+has kernel and cokernel annihilated by $p^{d(i + 1)}$.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove the first statement. To see this we may assume
+that $X$ is \'etale over $\mathbf{A}^d_{S_0}$, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-etale-over-affine-space}.
+Denote $\varphi : X \to \mathbf{A}^d_{S_0}$ this \'etale morphism.
+In this case the relative Frobenius of $X/S_0$ fits into a diagram
+$$
+\xymatrix{
+X \ar[d] \ar[r] & X^{(1)} \ar[d] \\
+\mathbf{A}^d_{S_0} \ar[r] & \mathbf{A}^d_{S_0}
+}
+$$
where the lower horizontal arrow is the relative Frobenius morphism
+of $\mathbf{A}^d_{S_0}$ over $S_0$. This is the morphism which raises
+all the coordinates to the $p$th power, hence it is an iterated
+$\alpha_p$-cover. The proof is finished by observing that the diagram
+is a fibre square, see
+\'Etale Morphisms, Lemma \ref{etale-lemma-relative-frobenius-etale}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Frobenius action on crystalline cohomology}
+\label{section-frobenius}
+
+\noindent
+In this section we prove that Frobenius pullback induces a quasi-isomorphism
+on crystalline cohomology after inverting the prime $p$. But in order to
+even formulate this we need to work in a special situation.
+
+\begin{situation}
+\label{situation-F-crystal}
+In Situation \ref{situation-global} assume the following
+\begin{enumerate}
+\item $S = \Spec(A)$ for some divided power ring $(A, I, \gamma)$
+with $p \in I$,
+\item there is given a homomorphism of divided power rings $\sigma : A \to A$
+such that $\sigma(x) = x^p \bmod pA$ for all $x \in A$.
+\end{enumerate}
+\end{situation}
+
+\noindent
+In Situation \ref{situation-F-crystal} the morphism
+$\Spec(\sigma) : S \to S$ is a lift of the absolute Frobenius
$F_{S_0} : S_0 \to S_0$. Since the diagram
+$$
+\xymatrix{
+X \ar[d] \ar[r]_{F_X} & X \ar[d] \\
+S_0 \ar[r]^{F_{S_0}} & S_0
+}
+$$
commutes, where $F_X : X \to X$ is the absolute Frobenius morphism
of $X$, we obtain a morphism of crystalline topoi
+$$
+(F_X)_{\text{cris}} :
+(X/S)_{\text{cris}}
+\longrightarrow
+(X/S)_{\text{cris}}
+$$
+see Remark \ref{remark-functoriality-cris}. Here is the terminology concerning
+$F$-crystals following the notation of Saavedra, see
+\cite{Saavedra}.
+
+\begin{definition}
+\label{definition-F-crystal}
+In Situation \ref{situation-F-crystal} an {\it $F$-crystal on $X/S$
+(relative to $\sigma$)} is a pair $(\mathcal{E}, F_\mathcal{E})$
+given by a crystal in finite locally free $\mathcal{O}_{X/S}$-modules
+$\mathcal{E}$ together with a map
+$$
+F_\mathcal{E} : (F_X)_{\text{cris}}^*\mathcal{E} \longrightarrow \mathcal{E}
+$$
+An $F$-crystal is called {\it nondegenerate} if there exists an integer
$i \geq 0$ and a map $V : \mathcal{E} \to (F_X)_{\text{cris}}^*\mathcal{E}$
+such that $V \circ F_{\mathcal{E}} = p^i \text{id}$.
+\end{definition}
+
+\begin{remark}
+\label{remark-F-crystal-variants}
+Let $(\mathcal{E}, F)$ be an $F$-crystal as in
+Definition \ref{definition-F-crystal}.
+In the literature the nondegeneracy condition is often part of the
+definition of an $F$-crystal. Moreover, often it is also assumed that
$F \circ V = p^i\text{id}$. What is needed for the result below is
+that there exists an integer $j \geq 0$ such that $\Ker(F)$ and
+$\Coker(F)$ are killed by $p^j$. If the rank of $\mathcal{E}$
+is bounded (for example if $X$ is quasi-compact), then both of these
+conditions follow from the nondegeneracy condition as formulated in
+the definition. Namely, suppose $R$ is a ring, $r \geq 1$ is an integer and
+$K, L \in \text{Mat}(r \times r, R)$ are matrices with
+$K L = p^i 1_{r \times r}$. Then $\det(K)\det(L) = p^{ri}$.
+Let $L'$ be the adjugate matrix of $L$, i.e.,
+$L' L = L L' = \det(L)$. Set $K' = p^{ri} K$ and $j = ri + i$.
+Then we have $K' L = p^j 1_{r \times r}$ as $K L = p^i$ and
+$$
L K' = L K \det(L) \det(K) = L K L L' \det(K) = L p^i L' \det(K) =
+p^j 1_{r \times r}
+$$
+It follows that if $V$ is as in Definition \ref{definition-F-crystal}
+then setting $V' = p^N V$ where $N > i \cdot \text{rank}(\mathcal{E})$
+we get $V' \circ F = p^{N + i}$ and $F \circ V' = p^{N + i}$.
+\end{remark}
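The matrix bookkeeping in the remark is easy to misread, so here is a small sympy sanity check of ours of the adjugate identities and the exponent $j = ri + i$, with $p = 2$, $i = 1$, $r = 2$. (Over $\mathbf{Z}$ the equality $K L = p^i 1$ already forces $L K = p^i 1$, so this only checks the formulas, not the interesting ring-theoretic content.)

```python
import sympy as sp

p, i, r = 2, 1, 2
K = sp.Matrix([[1, 1], [0, 2]])
L = sp.Matrix([[2, -1], [0, 1]])
I2 = sp.eye(r)

assert K * L == p**i * I2                  # K L = p^i 1
assert K.det() * L.det() == p**(r * i)     # det(K) det(L) = p^{ri}

Ladj = L.adjugate()                        # L' with L' L = L L' = det(L) 1
assert Ladj * L == L.det() * I2
assert L * Ladj == L.det() * I2

Kp = p**(r * i) * K                        # K' = p^{ri} K
j = r * i + i                              # j = ri + i
assert Kp * L == p**j * I2
assert L * Kp == p**j * I2                 # automatic over Z, see above
```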
+
+\begin{theorem}
+\label{theorem-cohomology-F-crystal}
+In Situation \ref{situation-F-crystal} let $(\mathcal{E}, F_\mathcal{E})$
+be a nondegenerate $F$-crystal. Assume $A$ is a $p$-adically complete
+Noetherian ring and that $X \to S_0$ is proper smooth. Then
+the canonical map
+$$
+F_\mathcal{E} \circ (F_X)_{\text{cris}}^* :
+R\Gamma(\text{Cris}(X/S), \mathcal{E}) \otimes^\mathbf{L}_{A, \sigma} A
+\longrightarrow
+R\Gamma(\text{Cris}(X/S), \mathcal{E})
+$$
+becomes an isomorphism after inverting $p$.
+\end{theorem}
+
+\begin{proof}
+We first write the arrow as a composition of three arrows.
+Namely, set
+$$
+X^{(1)} = X \times_{S_0, F_{S_0}} S_0
+$$
+and denote $F_{X/S_0} : X \to X^{(1)}$ the relative Frobenius morphism.
+Denote $\mathcal{E}^{(1)}$ the base change of $\mathcal{E}$
+by $\Spec(\sigma)$, in other words the pullback of $\mathcal{E}$
+to $\text{Cris}(X^{(1)}/S)$ by the morphism of crystalline topoi
+associated to the commutative diagram
+$$
+\xymatrix{
+X^{(1)} \ar[r] \ar[d] & X \ar[d] \\
+S \ar[r]^{\Spec(\sigma)} & S
+}
+$$
+Then we have the base change map
+\begin{equation}
+\label{equation-base-change-sigma}
+R\Gamma(\text{Cris}(X/S), \mathcal{E}) \otimes^\mathbf{L}_{A, \sigma} A
+\longrightarrow
+R\Gamma(\text{Cris}(X^{(1)}/S), \mathcal{E}^{(1)})
+\end{equation}
+see Remark \ref{remark-base-change}. Note that the composition
+of $F_{X/S_0} : X \to X^{(1)}$ with the projection $X^{(1)} \to X$
+is the absolute Frobenius morphism $F_X$. Hence we see that
+$F_{X/S_0}^*\mathcal{E}^{(1)} = (F_X)_{\text{cris}}^*\mathcal{E}$.
+Thus pullback by $F_{X/S_0}$ is a map
+\begin{equation}
+\label{equation-to-prove}
+F_{X/S_0}^* :
+R\Gamma(\text{Cris}(X^{(1)}/S), \mathcal{E}^{(1)})
+\longrightarrow
+R\Gamma(\text{Cris}(X/S), (F_X)^*_{\text{cris}}\mathcal{E})
+\end{equation}
+Finally we can use $F_\mathcal{E}$ to get a map
+\begin{equation}
+\label{equation-F-E}
+R\Gamma(\text{Cris}(X/S), (F_X)^*_{\text{cris}}\mathcal{E})
+\longrightarrow
+R\Gamma(\text{Cris}(X/S), \mathcal{E})
+\end{equation}
+The map of the theorem is the composition of the three maps
+(\ref{equation-base-change-sigma}), (\ref{equation-to-prove}), and
+(\ref{equation-F-E}) above. The first is a
+quasi-isomorphism modulo all powers of $p$ by
+Remark \ref{remark-base-change-isomorphism}.
+Hence it is a quasi-isomorphism since the complexes involved are perfect
+in $D(A)$ see Remark \ref{remark-complete-perfect}.
+The third map is a quasi-isomorphism after inverting $p$ simply
+because $F_\mathcal{E}$ has an inverse up to a power of $p$, see
+Remark \ref{remark-F-crystal-variants}.
+Finally, the second is an isomorphism after inverting $p$
+by Lemma \ref{lemma-pullback-relative-frobenius}.
+\end{proof}
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/curves.tex b/books/stacks/curves.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ecb341bda4f298f1dc99e4ecf86fb87c13c9d91e
--- /dev/null
+++ b/books/stacks/curves.tex
@@ -0,0 +1,6648 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Algebraic Curves}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we develop some of the theory of algebraic curves.
+A reference covering algebraic curves over the complex numbers is
+the book \cite{ACGH}.
+
+\medskip\noindent
+What we already know. Besides general algebraic geometry, we
+have already proved some specific results on algebraic curves.
+Here is a list.
+\begin{enumerate}
+\item We have discussed affine opens of and ample invertible sheaves on
$1$-dimensional Noetherian schemes in
+Varieties, Section \ref{varieties-section-dimension-one}.
+\item We have seen a curve is either affine or projective
+in Varieties, Section \ref{varieties-section-curves}.
+\item We have discussed degrees of locally free modules on
+proper curves in Varieties, Section \ref{varieties-section-divisors-curves}.
+\item We have discussed the Picard scheme of a nonsingular projective
+curve over an algebraically closed field in
+Picard Schemes of Curves, Section \ref{pic-section-introduction}.
+\end{enumerate}
+
+
+
+
+
+\section{Curves and function fields}
+\label{section-curves-function-fields}
+
+\noindent
+In this section we elaborate on the results of
+Varieties, Section \ref{varieties-section-varieties-rational-maps}
+in the case of curves.
+
+\begin{lemma}
+\label{lemma-extend-over-dvr}
+Let $k$ be a field. Let $X$ be a curve and $Y$ a proper variety.
+Let $U \subset X$ be a nonempty open and let $f : U \to Y$ be a morphism.
+If $x \in X$ is a closed point such that $\mathcal{O}_{X, x}$
+is a discrete valuation ring, then there exist an open
+$U \subset U' \subset X$ containing $x$ and a morphism of
+varieties $f' : U' \to Y$ extending $f$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Morphisms, Lemma \ref{morphisms-lemma-extend-across}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extend-over-normal-curve}
+Let $k$ be a field. Let $X$ be a normal curve and $Y$ a proper variety.
+The set of rational maps from $X$ to $Y$ is the same as the set
+of morphisms $X \to Y$.
+\end{lemma}
+
+\begin{proof}
+A rational map from $X$ to $Y$ can be extended to a morphism $X \to Y$
+by Lemma \ref{lemma-extend-over-dvr}
+as every local ring is a discrete valuation ring
+(for example by Varieties, Lemma \ref{varieties-lemma-regular-point-on-curve}).
+Conversely, if two morphisms $f,g: X \to Y$ are equivalent as rational maps,
+then $f = g$ by Morphisms, Lemma \ref{morphisms-lemma-equality-of-morphisms}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat}
+Let $k$ be a field. Let $f : X \to Y$ be a nonconstant morphism
+of curves over $k$. If $Y$ is normal, then $f$ is flat.
+\end{lemma}
+
+\begin{proof}
+Pick $x \in X$ mapping to $y \in Y$. Then $\mathcal{O}_{Y, y}$ is either a
+field or a discrete valuation ring
+(Varieties, Lemma \ref{varieties-lemma-regular-point-on-curve}).
+Since $f$ is nonconstant it is dominant (as it must map the
+generic point of $X$ to the generic point of $Y$). This implies that
+$\mathcal{O}_{Y, y} \to \mathcal{O}_{X, x}$ is injective
+(Morphisms, Lemma \ref{morphisms-lemma-dominant-between-integral}).
Hence $\mathcal{O}_{X, x}$ is torsion free as an $\mathcal{O}_{Y, y}$-module
and therefore $\mathcal{O}_{X, x}$ is flat as an $\mathcal{O}_{Y, y}$-module
+by More on Algebra, Lemma
+\ref{more-algebra-lemma-valuation-ring-torsion-free-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite}
+Let $k$ be a field. Let $f : X \to Y$ be a morphism of
+schemes over $k$. Assume
+\begin{enumerate}
+\item $Y$ is separated over $k$,
+\item $X$ is proper of dimension $\leq 1$ over $k$,
+\item $f(Z)$ has at least two points for every irreducible
+component $Z \subset X$ of dimension $1$.
+\end{enumerate}
+Then $f$ is finite.
+\end{lemma}
+
+\begin{proof}
+The morphism $f$ is proper by
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}.
+Thus $f(X)$ is closed and images of closed points are closed.
+Let $y \in Y$ be the image of a closed point in $X$.
+Then $f^{-1}(\{y\})$ is a closed subset of $X$ not
+containing any of the generic points of irreducible components
+of dimension $1$ by condition (3). It follows that $f^{-1}(\{y\})$
+is finite. Hence $f$ is finite over an open neighbourhood of $y$
+by
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-proper-finite-fibre-finite-in-neighbourhood}
+(if $Y$ is Noetherian, then you can use the easier
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-proper-finite-fibre-finite-in-neighbourhood}).
+Since we've seen above that there are enough of these points
+$y$, the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extend-to-completion}
+Let $k$ be a field. Let $X \to Y$ be a morphism of varieties
+with $Y$ proper and $X$ a curve.
+There exists a factorization $X \to \overline{X} \to Y$
+where $X \to \overline{X}$ is an open immersion
+and $\overline{X}$ is a projective curve.
+\end{lemma}
+
+\begin{proof}
+This is clear from Lemma \ref{lemma-extend-over-dvr}
+and Varieties, Lemma \ref{varieties-lemma-reduced-dim-1-projective-completion}.
+\end{proof}
+
+\noindent
+Here is the main theorem of this section. We will say a morphism
+$f : X \to Y$ of varieties is {\it constant} if the image $f(X)$
+consists of a single point $y$ of $Y$. If this happens then
+$y$ is a closed point of $Y$ (since the image of a closed point
+of $X$ will be a closed point of $Y$).
+
+\begin{theorem}
+\label{theorem-curves-rational-maps}
+Let $k$ be a field. The following categories are canonically equivalent
+\begin{enumerate}
+\item The category of finitely generated field extensions $K/k$ of
+transcendence degree $1$.
+\item The category of curves and dominant rational maps.
+\item The category of normal projective curves and nonconstant morphisms.
+\item The category of nonsingular projective curves and nonconstant morphisms.
+\item The category of regular projective curves and nonconstant morphisms.
+\item The category of normal proper curves and nonconstant morphisms.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+The equivalence between categories (1) and (2) is the restriction of the
+equivalence of
+Varieties, Theorem \ref{varieties-theorem-varieties-rational-maps}.
+Namely, a variety is a curve if and only if its function field has
+transcendence degree $1$, see for example
+Varieties, Lemma \ref{varieties-lemma-dimension-locally-algebraic}.
+
+\medskip\noindent
+The categories in (3), (4), (5), and (6) are the same. First of all, the
+terms ``regular'' and ``nonsingular'' are synonyms, see
+Properties, Definition \ref{properties-definition-regular}.
+Being normal and regular are the same thing for Noetherian
+$1$-dimensional schemes
+(Properties, Lemmas \ref{properties-lemma-regular-normal} and
+\ref{properties-lemma-normal-dimension-1-regular}). See
+Varieties, Lemma \ref{varieties-lemma-regular-point-on-curve}
+for the case of curves. Thus (3) is the same as (5). Finally, (6)
+is the same as (3) by
+Varieties, Lemma \ref{varieties-lemma-dim-1-proper-projective}.
+
+\medskip\noindent
+If $f : X \to Y$ is a nonconstant morphism of nonsingular projective curves,
+then $f$ sends the generic point $\eta$ of $X$ to the generic point $\xi$ of
+$Y$. Hence we obtain a morphism
+$k(Y) = \mathcal{O}_{Y, \xi} \to \mathcal{O}_{X, \eta} = k(X)$
in the category (1). If two morphisms $f,g: X \to Y$ give the same morphism
+$k(Y) \to k(X)$, then by the equivalence between (1) and (2),
+$f$ and $g$ are equivalent as rational maps, so $f=g$ by
+Lemma \ref{lemma-extend-over-normal-curve}.
+Conversely, suppose that we have a map
+$k(Y) \to k(X)$ in the category (1). Then we obtain a morphism $U \to Y$
+for some nonempty open $U \subset X$. By Lemma \ref{lemma-extend-over-dvr}
+this extends to all of $X$ and we obtain a morphism in the category (5).
+Thus we see that there is a fully faithful functor (5)$\to$(1).
+
+\medskip\noindent
+To finish the proof we have to show that every $K/k$ in (1)
+is the function field of a normal projective curve.
+We already know that $K = k(X)$ for some curve $X$.
+After replacing $X$ by its normalization
+(which is a variety birational to $X$)
+we may assume $X$ is normal
+(Varieties, Lemma \ref{varieties-lemma-normalization-locally-algebraic}).
+Then we choose $X \to \overline{X}$ with
+$\overline{X} \setminus X = \{x_1, \ldots, x_n\}$ as in
+Varieties, Lemma \ref{varieties-lemma-reduced-dim-1-projective-completion}.
+Since $X$ is normal and since each
+of the local rings $\mathcal{O}_{\overline{X}, x_i}$ is normal
+we conclude that $\overline{X}$ is a normal projective curve as desired.
+(Remark: We can also first compactify using
+Varieties, Lemma \ref{varieties-lemma-dim-1-projective-completion}
+and then normalize using
+Varieties, Lemma \ref{varieties-lemma-normalization-locally-algebraic}.
+Doing it this way we avoid using the somewhat tricky
+Morphisms, Lemma \ref{morphisms-lemma-relative-normalization-normal-codim-1}.)
+\end{proof}
+
+\begin{definition}
+\label{definition-normal-projective-model}
+Let $k$ be a field. Let $X$ be a curve.
+A {\it nonsingular projective model of $X$}
+is a pair $(Y, \varphi)$ where $Y$ is a nonsingular projective
+curve and $\varphi : k(X) \to k(Y)$ is an isomorphism
+of function fields.
+\end{definition}
+
+\noindent
+A nonsingular projective model is determined up to unique
+isomorphism by Theorem \ref{theorem-curves-rational-maps}.
+Thus we often say ``the nonsingular projective model''.
+We usually drop $\varphi$ from the notation.
+Warning: it needn't be the case that $Y$ is smooth over $k$
+but Lemma \ref{lemma-nonsingular-model-smooth}
+shows this can only happen in positive characteristic.
+
+\begin{lemma}
+\label{lemma-nonsingular-model-smooth}
+Let $k$ be a field. Let $X$ be a curve and let $Y$ be the nonsingular
+projective model of $X$. If $k$ is perfect, then $Y$ is a smooth
+projective curve.
+\end{lemma}
+
+\begin{proof}
+See Varieties, Lemma \ref{varieties-lemma-regular-point-on-curve}
+for example.
+\end{proof}
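
\noindent
To illustrate the warning above, here is a standard example (not
established in this chapter) of a nonsingular projective model which is
not smooth. Let $p$ be an odd prime, let $k = \mathbf{F}_p(t)$, and let
$X$ be the affine plane curve over $k$ given by the equation
$$
y^2 = x^p - t
$$
All local rings of the nonsingular projective model $Y$ of $X$ are
regular, but $Y$ is not smooth over $k$: after base change to
$k' = k(t^{1/p})$ the equation becomes $y^2 = (x - t^{1/p})^p$ and the
resulting curve is singular at the point with $x = t^{1/p}$, $y = 0$.
Thus $Y$ is regular but not geometrically regular over $k$.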
+
+\begin{lemma}
+\label{lemma-smooth-models}
+Let $k$ be a field. Let $X$ be a geometrically irreducible curve over $k$.
+For a field extension $K/k$ denote $Y_K$ a nonsingular projective model
+of $(X_K)_{red}$.
+\begin{enumerate}
+\item If $X$ is proper, then $Y_K$ is the normalization of $X_K$.
+\item There exists $K/k$ finite purely inseparable such that $Y_K$ is smooth.
+\item Whenever $Y_K$ is smooth\footnote{Or even geometrically reduced.}
+we have $H^0(Y_K, \mathcal{O}_{Y_K}) = K$.
+\item Given a commutative diagram
+$$
+\xymatrix{
+\Omega & K' \ar[l] \\
+K \ar[u] & k \ar[l] \ar[u]
+}
+$$
+of fields such that $Y_K$ and $Y_{K'}$ are smooth, then
+$Y_\Omega = (Y_K)_\Omega = (Y_{K'})_\Omega$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $X'$ be a nonsingular projective model of $X$. Then $X'$ and
+$X$ have isomorphic nonempty open subschemes. In particular
+$X'$ is geometrically irreducible as $X$ is (some details omitted).
+Thus we may assume that $X$ is projective.
+
+\medskip\noindent
+Assume $X$ is proper. Then $X_K$ is proper and hence the normalization
+$(X_K)^\nu$ is proper as a scheme finite over a proper scheme
+(Varieties, Lemma \ref{varieties-lemma-normalization-locally-algebraic}
+and Morphisms, Lemmas \ref{morphisms-lemma-finite-proper} and
+\ref{morphisms-lemma-composition-proper}).
+On the other hand, $X_K$ is irreducible as $X$ is geometrically
irreducible. Hence $(X_K)^\nu$ is proper, normal, irreducible, and birational
+to $(X_K)_{red}$. This proves (1) because a proper curve is projective
+(Varieties, Lemma \ref{varieties-lemma-dim-1-proper-projective}).
+
+\medskip\noindent
+Proof of (2). As $X$ is proper and we have (1), we can apply
+Varieties, Lemma \ref{varieties-lemma-finite-extension-geometrically-normal}
+to find $K/k$ finite purely inseparable such that
+$Y_K$ is geometrically normal. Then $Y_K$ is geometrically regular
+as normal and regular are the same for curves
+(Properties, Lemma \ref{properties-lemma-normal-dimension-1-regular}).
Then $Y_K$ is a smooth variety by
+Varieties, Lemma \ref{varieties-lemma-geometrically-regular-smooth}.
+
+\medskip\noindent
+If $Y_K$ is geometrically reduced, then $Y_K$ is geometrically
+integral (Varieties, Lemma \ref{varieties-lemma-geometrically-integral})
+and we see that $H^0(Y_K, \mathcal{O}_{Y_K}) = K$ by
+Varieties, Lemma \ref{varieties-lemma-regular-functions-proper-variety}.
+This proves (3) because a smooth variety is geometrically reduced
+(even geometrically regular, see
+Varieties, Lemma \ref{varieties-lemma-geometrically-regular-smooth}).
+
+\medskip\noindent
+If $Y_K$ is smooth, then for every extension $\Omega/K$
+the base change $(Y_K)_\Omega$ is smooth over $\Omega$
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-smooth}).
+Hence it is clear that $Y_\Omega = (Y_K)_\Omega$. This proves (4).
+\end{proof}
+
+
+
+
+
+
+\section{Linear series}
+\label{section-linear-series}
+
+\noindent
+We deviate from the classical story
+(see Remark \ref{remark-classical-linear-series})
+by defining linear series in the following manner.
+
+\begin{definition}
+\label{definition-linear-series}
+Let $k$ be a field. Let $X$ be a proper scheme of dimension $\leq 1$ over $k$.
+Let $d \geq 0$ and $r \geq 0$.
+A {\it linear series of degree $d$ and dimension $r$}
+is a pair $(\mathcal{L}, V)$ where $\mathcal{L}$ is an
+invertible $\mathcal{O}_X$-module of degree $d$
+(Varieties, Definition \ref{varieties-definition-degree-invertible-sheaf})
+and $V \subset H^0(X, \mathcal{L})$ is a $k$-subvector space
+of dimension $r + 1$. We will abbreviate this by saying
+$(\mathcal{L}, V)$ is a {\it $\mathfrak g^r_d$} on $X$.
+\end{definition}
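
\noindent
For instance, take $X = \mathbf{P}^1_k$ and
$\mathcal{L} = \mathcal{O}_{\mathbf{P}^1_k}(d)$ for some $d \geq 0$.
Then $\deg(\mathcal{L}) = d$ and $H^0(X, \mathcal{L})$ is the space of
homogeneous polynomials of degree $d$ in $T_0, T_1$, which has dimension
$d + 1$. Hence $(\mathcal{L}, H^0(X, \mathcal{L}))$ is a
$\mathfrak g^d_d$ on $\mathbf{P}^1_k$ and any linear subspace
$V \subset H^0(X, \mathcal{L})$ of dimension $r + 1$ gives a
$\mathfrak g^r_d$.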
+
+\noindent
+We will mostly use this when $X$ is a nonsingular proper curve.
+In fact, the definition above is just one way to generalize the
+classical definition of a $\mathfrak g^r_d$. For example, if $X$
+is a proper curve, then one can generalize linear series by allowing
+$\mathcal{L}$ to be a torsion free coherent $\mathcal{O}_X$-module
+of rank $1$. On a nonsingular curve every torsion free
+coherent module is locally free, so this agrees with our
+notion for nonsingular proper curves.
+
+\medskip\noindent
+The following lemma explains the geometric meaning of linear series
+for proper nonsingular curves.
+
+\begin{lemma}
+\label{lemma-linear-series}
+Let $k$ be a field. Let $X$ be a nonsingular proper curve over $k$.
+Let $(\mathcal{L}, V)$ be a $\mathfrak g^r_d$ on $X$. Then
+there exists a morphism
+$$
+\varphi : X \longrightarrow \mathbf{P}^r_k = \text{Proj}(k[T_0, \ldots, T_r])
+$$
+of varieties over $k$ and a map
+$\alpha : \varphi^*\mathcal{O}_{\mathbf{P}^r_k}(1) \to \mathcal{L}$
+such that $\varphi^*T_0, \ldots, \varphi^*T_r$
+are sent to a basis of $V$ by $\alpha$.
+\end{lemma}
+
+\begin{proof}
+Let $s_0, \ldots, s_r \in V$ be a $k$-basis. Since $X$ is nonsingular
+the image $\mathcal{L}' \subset \mathcal{L}$ of the map
$s_0, \ldots, s_r : \mathcal{O}_X^{\oplus r + 1} \to \mathcal{L}$
+is an invertible $\mathcal{O}_X$-module for example by
+Divisors, Lemma \ref{divisors-lemma-torsion-free-over-regular-dim-1}.
+Then we use
+Constructions, Lemma \ref{constructions-lemma-projective-space}
+to get a morphism
+$$
+\varphi = \varphi_{(\mathcal{L}', (s_0, \ldots, s_r))} :
+X \longrightarrow \mathbf{P}^r_k
+$$
+as in the statement of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-linear-series-trivial-existence}
+Let $k$ be a field. Let $X$ be a proper scheme of dimension $\leq 1$ over $k$.
+If $X$ has a $\mathfrak g^r_d$, then $X$ has a $\mathfrak g^s_d$ for
+all $0 \leq s \leq r$.
+\end{lemma}
+
+\begin{proof}
+This is true because a vector space $V$ of dimension $r + 1$
+over $k$ has a linear subspace of dimension $s + 1$ for all
+$0 \leq s \leq r$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-g1d}
+Let $k$ be a field. Let $X$ be a nonsingular proper curve over $k$.
+Let $(\mathcal{L}, V)$ be a $\mathfrak g^1_d$ on $X$. Then the morphism
+$\varphi : X \to \mathbf{P}^1_k$ of Lemma \ref{lemma-linear-series}
+either
+\begin{enumerate}
+\item is nonconstant and has degree $\leq d$, or
+\item factors through a closed point of $\mathbf{P}^1_k$ and in this
+case $H^0(X, \mathcal{O}_X) \not = k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-linear-series} we see that
+$\mathcal{L}' = \varphi^*\mathcal{O}_{\mathbf{P}^1_k}(1)$
+has a nonzero map $\mathcal{L}' \to \mathcal{L}$.
+Hence by Varieties, Lemma \ref{varieties-lemma-check-invertible-sheaf-trivial}
+we see that $0 \leq \deg(\mathcal{L}') \leq d$.
+If $\deg(\mathcal{L}') = 0$, then the same lemma tells us
+$\mathcal{L}' \cong \mathcal{O}_X$ and since we have
+two linearly independent sections we find we are in case (2).
+If $\deg(\mathcal{L}') > 0$ then $\varphi$ is nonconstant (since the
+pullback of an invertible module by a constant morphism is trivial).
+Hence
+$$
+\deg(\mathcal{L}') =
+\deg(X/\mathbf{P}^1_k) \deg(\mathcal{O}_{\mathbf{P}^1_k}(1))
+$$
+by Varieties, Lemma \ref{varieties-lemma-degree-pullback-map-proper-curves}.
+This finishes the proof as the degree of
+$\mathcal{O}_{\mathbf{P}^1_k}(1)$ is $1$.
+\end{proof}
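
\noindent
As an illustration of case (1), suppose $k$ is algebraically closed and
$X$ is a smooth projective curve of genus $1$ with a point
$x_0 \in X(k)$. Anticipating the Riemann-Roch theorem (proved later in
this chapter), the invertible module
$\mathcal{L} = \mathcal{O}_X(2x_0)$ has
$\dim_k H^0(X, \mathcal{L}) = 2$, so
$(\mathcal{L}, H^0(X, \mathcal{L}))$ is a $\mathfrak g^1_2$ on $X$.
Since $H^0(X, \mathcal{O}_X) = k$ the lemma produces a nonconstant
morphism $\varphi : X \to \mathbf{P}^1_k$ of degree $\leq 2$; as $X$ is
not isomorphic to $\mathbf{P}^1_k$ the degree is exactly $2$.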
+
+\begin{lemma}
+\label{lemma-grd-inequalities}
+Let $k$ be a field. Let $X$ be a proper curve over $k$ with
+$H^0(X, \mathcal{O}_X) = k$. If $X$ has a $\mathfrak g^r_d$, then
+$r \leq d$. If equality holds, then $H^1(X, \mathcal{O}_X) = 0$, i.e.,
+the genus of $X$ (Definition \ref{definition-genus}) is $0$.
+\end{lemma}
+
+\begin{proof}
+Let $(\mathcal{L}, V)$ be a $\mathfrak g^r_d$. Since this will only
+increase $r$, we may assume $V = H^0(X, \mathcal{L})$. Choose a
+nonzero element $s \in V$. Then the zero scheme of $s$ is an effective Cartier
+divisor $D \subset X$, we have $\mathcal{L} = \mathcal{O}_X(D)$, and
+we have a short exact sequence
+$$
+0 \to \mathcal{O}_X \to \mathcal{L} \to \mathcal{L}|_D \to 0
+$$
+see Divisors, Lemma \ref{divisors-lemma-characterize-OD} and
+Remark \ref{divisors-remark-ses-regular-section}.
+By Varieties, Lemma \ref{varieties-lemma-degree-effective-Cartier-divisor}
+we have $\deg(D) = \deg(\mathcal{L}) = d$.
+Since $D$ is an Artinian scheme we have
+$\mathcal{L}|_D \cong \mathcal{O}_D$\footnote{In our case this
+follows from Divisors, Lemma
+\ref{divisors-lemma-finite-trivialize-invertible-upstairs}
+as $D \to \Spec(k)$ is finite.}.
+Thus
+$$
+\dim_k H^0(D, \mathcal{L}|_D) = \dim_k H^0(D, \mathcal{O}_D) = \deg(D) = d
+$$
+On the other hand, by assumption
$\dim_k H^0(X, \mathcal{O}_X) = 1$ and $\dim_k H^0(X, \mathcal{L}) = r + 1$.
+We conclude that $r + 1 \leq 1 + d$, i.e., $r \leq d$ as in the lemma.
+
+\medskip\noindent
+Assume equality holds. Then
+$H^0(X, \mathcal{L}) \to H^0(X, \mathcal{L}|_D)$ is surjective.
+If we knew that $H^1(X, \mathcal{L})$ was zero, then we would
+conclude that $H^1(X, \mathcal{O}_X)$ is zero by the long exact
+cohomology sequence and the proof would
+be complete. Our strategy will be to replace $\mathcal{L}$ by a
+large power which has vanishing. As $\mathcal{L}|_D$ is the
+trivial invertible module (see above), we can
find a section $t$ of $\mathcal{L}$ whose restriction
to $D$ generates $\mathcal{L}|_D$.
+Consider the multiplication map
+$$
+\mu :
+H^0(X, \mathcal{L}) \otimes_k H^0(X, \mathcal{L})
+\longrightarrow
+H^0(X, \mathcal{L}^{\otimes 2})
+$$
+and consider the short exact sequence
+$$
+0 \to \mathcal{L} \xrightarrow{s}
+\mathcal{L}^{\otimes 2} \to \mathcal{L}^{\otimes 2}|_D \to 0
+$$
+Since $H^0(\mathcal{L}) \to H^0(\mathcal{L}|_D)$ is surjective and since
+$t$ maps to a trivialization of $\mathcal{L}|_D$ we see that
+$\mu(H^0(X, \mathcal{L}) \otimes t)$ gives a subspace of
+$H^0(X, \mathcal{L}^{\otimes 2})$ surjecting onto the global sections of
+$\mathcal{L}^{\otimes 2}|_D$. Thus we see that
+$$
+\dim H^0(X, \mathcal{L}^{\otimes 2}) = r + 1 + d = 2r + 1 =
+\deg(\mathcal{L}^{\otimes 2}) + 1
+$$
+Ok, so $\mathcal{L}^{\otimes 2}$ has the same property as $\mathcal{L}$, i.e.,
+that the dimension of the space of global sections is equal to the
+degree plus one. Since $\mathcal{L}$ is ample
+(Varieties, Lemma \ref{varieties-lemma-ample-curve})
+there exists some $n_0$ such that $\mathcal{L}^{\otimes n}$
+has vanishing $H^1$ for all $n \geq n_0$
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-proper-ample}).
+Thus applying the argument above to $\mathcal{L}^{\otimes n}$
+with $n = 2^m$ for some sufficiently large $m$ we conclude the
+lemma is true.
+\end{proof}
+
+\begin{remark}[Classical definition]
+\label{remark-classical-linear-series}
+Let $X$ be a smooth projective curve over an algebraically closed field $k$.
+We say two effective Cartier divisors $D, D' \subset X$ are
+{\it linearly equivalent} if and only if
+$\mathcal{O}_X(D) \cong \mathcal{O}_X(D')$ as $\mathcal{O}_X$-modules.
+Since $\Pic(X) = \text{Cl}(X)$
+(Divisors, Lemma \ref{divisors-lemma-local-rings-UFD-c1-bijective})
+we see that $D$ and $D'$ are linearly equivalent
+if and only if the Weil divisors associated to
+$D$ and $D'$ define the same element of $\text{Cl}(X)$.
+Given an effective Cartier divisor $D \subset X$ of degree $d$ the
+{\it complete linear system} or {\it complete linear series} $|D|$ of $D$
+is the set of effective Cartier divisors $E \subset X$
+which are linearly equivalent to $D$.
+Another way to say it is that $|D|$ is the set of closed
+points of the fibre of the morphism
+$$
+\gamma_d :
+\underline{\Hilbfunctor}^d_{X/k}
+\longrightarrow
+\underline{\Picardfunctor}^d_{X/k}
+$$
+(Picard Schemes of Curves, Lemma \ref{pic-lemma-picard-pieces})
+over the closed point corresponding to $\mathcal{O}_X(D)$.
+This gives $|D|$ a natural scheme structure and it
+turns out that $|D| \cong \mathbf{P}^m_k$ with
+$m + 1 = h^0(\mathcal{O}_X(D))$. In fact, more canonically we have
+$$
+|D| = \mathbf{P}(H^0(X, \mathcal{O}_X(D))^\vee)
+$$
+where $(-)^\vee$ indicates $k$-linear dual and $\mathbf{P}$ is as
+in Constructions, Example \ref{constructions-example-projective-space}.
+In this language a {\it linear system} or a {\it linear series} on
+$X$ is a closed subvariety $L \subset |D|$ which can be cut out by
+linear equations. If $L$ has dimension $r$, then $L = \mathbf{P}(V^\vee)$
+where $V \subset H^0(X, \mathcal{O}_X(D))$ is a linear subspace
+of dimension $r + 1$. Thus the classical linear series
+$L \subset |D|$ corresponds to the linear series $(\mathcal{O}_X(D), V)$
+as defined above.
+\end{remark}
+
+
+
+
+
+
+
+\section{Duality}
+\label{section-duality}
+
+\noindent
+In this section we work out the consequences of the very general
+material on dualizing complexes and duality for proper $1$-dimensional
+schemes over fields. If you are interested in the analogous discussion
for higher-dimensional proper schemes over fields, see
+Duality for Schemes, Section \ref{duality-section-duality-proper-over-field}.
+
+\begin{lemma}
+\label{lemma-duality-dim-1}
+Let $X$ be a proper scheme of dimension $\leq 1$ over a field $k$.
+There exists a dualizing complex $\omega_X^\bullet$ with the
+following properties
+\begin{enumerate}
+\item $H^i(\omega_X^\bullet)$ is nonzero only for $i = -1, 0$,
+\item $\omega_X = H^{-1}(\omega_X^\bullet)$
is a coherent Cohen-Macaulay module whose support is the
union of the irreducible components of dimension $1$,
+\item for $x \in X$ closed, the module $H^0(\omega_{X, x}^\bullet)$
+is nonzero if and only if either
+\begin{enumerate}
+\item $\dim(\mathcal{O}_{X, x}) = 0$ or
+\item $\dim(\mathcal{O}_{X, x}) = 1$
+and $\mathcal{O}_{X, x}$ is not Cohen-Macaulay,
+\end{enumerate}
+\item for $K \in D_\QCoh(\mathcal{O}_X)$ there are functorial
+isomorphisms\footnote{This property
+characterizes $\omega_X^\bullet$ in $D_\QCoh(\mathcal{O}_X)$
+up to unique isomorphism by the Yoneda lemma. Since $\omega_X^\bullet$
+is in $D^b_{\textit{Coh}}(\mathcal{O}_X)$ in fact it suffices to consider
+$K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$.}
+$$
+\Ext^i_X(K, \omega_X^\bullet) = \Hom_k(H^{-i}(X, K), k)
+$$
+compatible with shifts and distinguished triangles,
+\item there are functorial isomorphisms
+$\Hom(\mathcal{F}, \omega_X) = \Hom_k(H^1(X, \mathcal{F}), k)$
+for $\mathcal{F}$ quasi-coherent on $X$,
+\item if $X \to \Spec(k)$ is smooth of relative dimension $1$,
+then $\omega_X \cong \Omega_{X/k}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $f : X \to \Spec(k)$ the structure morphism.
+We start with the relative dualizing complex
+$$
+\omega_X^\bullet = \omega_{X/k}^\bullet = a(\mathcal{O}_{\Spec(k)})
+$$
+as described in Duality for Schemes,
+Remark \ref{duality-remark-relative-dualizing-complex}.
+Then property (4) holds by construction as $a$ is the right
+adjoint for $f_* : D_\QCoh(\mathcal{O}_X) \to D(\mathcal{O}_{\Spec(k)})$.
+Since $f$ is proper we have
+$f^!(\mathcal{O}_{\Spec(k)}) = a(\mathcal{O}_{\Spec(k)})$ by
+definition, see
+Duality for Schemes, Section \ref{duality-section-upper-shriek}.
+Hence $\omega_X^\bullet$ and $\omega_X$ are as in
+Duality for Schemes, Example \ref{duality-example-proper-over-local}
+and as in
+Duality for Schemes, Example \ref{duality-example-equidimensional-over-field}.
+Parts (1) and (2) follow from
+Duality for Schemes, Lemma \ref{duality-lemma-vanishing-good-dualizing}.
+For a closed point $x \in X$ we see that $\omega_{X, x}^\bullet$ is a
+normalized dualizing complex over $\mathcal{O}_{X, x}$, see
+Duality for Schemes, Lemma \ref{duality-lemma-good-dualizing-normalized}.
+Assertion (3) then follows from
+Dualizing Complexes, Lemma \ref{dualizing-lemma-apply-CM}.
+Assertion (5) follows from
+Duality for Schemes, Lemma \ref{duality-lemma-dualizing-module-proper-over-A}
+for coherent $\mathcal{F}$ and in general by unwinding
+(4) for $K = \mathcal{F}[0]$ and $i = -1$.
+Assertion (6) follows from Duality for Schemes,
+Lemma \ref{duality-lemma-smooth-proper}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-duality-dim-1-CM}
+Let $X$ be a proper scheme over a field $k$ which is Cohen-Macaulay
+and equidimensional of dimension $1$. The module $\omega_X$
+of Lemma \ref{lemma-duality-dim-1} has the following properties
+\begin{enumerate}
+\item $\omega_X$ is a dualizing module on $X$
+(Duality for Schemes, Section \ref{duality-section-dualizing-module}),
+\item $\omega_X$ is a coherent Cohen-Macaulay module whose support is $X$,
+\item there are functorial isomorphisms
+$\Ext^i_X(K, \omega_X[1]) = \Hom_k(H^{-i}(X, K), k)$
+compatible with shifts for $K \in D_\QCoh(X)$,
+\item there are functorial isomorphisms
+$\Ext^{1 + i}(\mathcal{F}, \omega_X) = \Hom_k(H^{-i}(X, \mathcal{F}), k)$
+for $\mathcal{F}$ quasi-coherent on $X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall from the proof of Lemma \ref{lemma-duality-dim-1}
+that $\omega_X$ is as in Duality for Schemes, Example
+\ref{duality-example-proper-over-local} and hence is
+a dualizing module. The other statements follow from
+Lemma \ref{lemma-duality-dim-1}
+and the fact that $\omega_X^\bullet = \omega_X[1]$
as $X$ is Cohen-Macaulay (Duality for Schemes, Lemma
+\ref{duality-lemma-dualizing-module-CM-scheme}).
+\end{proof}
+
+\begin{remark}
+\label{remark-rework-duality-locally-free}
+Let $X$ be a proper scheme of dimension $\leq 1$ over a field $k$.
+Let $\omega_X^\bullet$ and $\omega_X$ be as in Lemma \ref{lemma-duality-dim-1}.
+If $\mathcal{E}$ is a finite locally free $\mathcal{O}_X$-module
+with dual $\mathcal{E}^\vee$ then we have canonical isomorphisms
+$$
+\Hom_k(H^{-i}(X, \mathcal{E}), k) =
+H^i(X, \mathcal{E}^\vee \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_X^\bullet)
+$$
+This follows from the lemma and
+Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+If $X$ is Cohen-Macaulay and equidimensional of dimension $1$, then
+we have canonical isomorphisms
+$$
+\Hom_k(H^{-i}(X, \mathcal{E}), k) =
+H^{1 + i}(X, \mathcal{E}^\vee \otimes_{\mathcal{O}_X} \omega_X)
+$$
+by Lemma \ref{lemma-duality-dim-1-CM}. In particular
+if $\mathcal{L}$ is an invertible $\mathcal{O}_X$-module, then we have
+$$
+\dim_k H^0(X, \mathcal{L}) =
+\dim_k H^1(X, \mathcal{L}^{\otimes -1} \otimes_{\mathcal{O}_X} \omega_X)
+$$
+and
+$$
+\dim_k H^1(X, \mathcal{L}) =
+\dim_k H^0(X, \mathcal{L}^{\otimes -1} \otimes_{\mathcal{O}_X} \omega_X)
+$$
+\end{remark}
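
\noindent
For example, taking $\mathcal{L} = \mathcal{O}_X$ in the final formula
of Remark \ref{remark-rework-duality-locally-free} gives
$$
\dim_k H^1(X, \mathcal{O}_X) = \dim_k H^0(X, \omega_X)
$$
for $X$ Cohen-Macaulay and equidimensional of dimension $1$. In other
words, for such $X$ the genus can be computed as the dimension of the
space of global sections of the dualizing module.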
+
+\noindent
+Here is a sanity check for the dualizing complex.
+
+\begin{lemma}
+\label{lemma-sanity-check-duality}
+Let $X$ be a proper scheme of dimension $\leq 1$ over a field $k$.
+Let $\omega_X^\bullet$ and $\omega_X$ be as in Lemma \ref{lemma-duality-dim-1}.
+\begin{enumerate}
+\item If $X \to \Spec(k)$ factors as $X \to \Spec(k') \to \Spec(k)$
+for some field $k'$, then $\omega_X^\bullet$ and $\omega_X$
+satisfy properties (4), (5), (6) with $k$ replaced with $k'$.
+\item If $K/k$ is a field extension, then the pullback of
+$\omega_X^\bullet$ and $\omega_X$ to the base change $X_K$
+are as in Lemma \ref{lemma-duality-dim-1} for the morphism
+$X_K \to \Spec(K)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $f : X \to \Spec(k)$ the structure morphism.
+Assertion (1) really means that $\omega_X^\bullet$ and $\omega_X$
+are as in Lemma \ref{lemma-duality-dim-1} for the morphism
+$f' : X \to \Spec(k')$. In the proof of Lemma \ref{lemma-duality-dim-1}
+we took $\omega_X^\bullet = a(\mathcal{O}_{\Spec(k)})$
where $a$ is the right adjoint of
+Duality for Schemes, Lemma
+\ref{duality-lemma-twisted-inverse-image} for $f$.
+Thus we have to show
$a(\mathcal{O}_{\Spec(k)}) \cong a'(\mathcal{O}_{\Spec(k')})$
where $a'$ is the right adjoint of
+Duality for Schemes, Lemma
+\ref{duality-lemma-twisted-inverse-image} for $f'$.
+Since $k' \subset H^0(X, \mathcal{O}_X)$ we see that $k'/k$ is a finite
+extension (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-proper-over-affine-cohomology-finite}).
+By uniqueness of adjoints we have $a = a' \circ b$ where
+$b$ is the right adjoint of Duality for Schemes, Lemma
+\ref{duality-lemma-twisted-inverse-image} for $g : \Spec(k') \to \Spec(k)$.
+Another way to say this: we have $f^! = (f')^! \circ g^!$.
+Thus it suffices to show that $\Hom_k(k', k) \cong k'$ as
+$k'$-modules, see Duality for Schemes, Example
+\ref{duality-example-affine-twisted-inverse-image}.
+This holds because these are $k'$-vector spaces of
+the same dimension (namely dimension $1$).
+
+\medskip\noindent
+Proof of (2). This holds because we have base change for $a$ by
+Duality for Schemes, Lemma \ref{duality-lemma-more-base-change}.
+See discussion in Duality for Schemes, Remark
+\ref{duality-remark-relative-dualizing-complex}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-immersion-dim-1-CM}
+Let $X$ be a proper scheme of dimension $\leq 1$ over a field $k$.
+Let $i : Y \to X$ be a closed immersion.
+Let $\omega_X^\bullet$, $\omega_X$, $\omega_Y^\bullet$, $\omega_Y$
+be as in Lemma \ref{lemma-duality-dim-1}. Then
+\begin{enumerate}
+\item $\omega_Y^\bullet = R\SheafHom(\mathcal{O}_Y, \omega_X^\bullet)$,
+\item $\omega_Y = \SheafHom(\mathcal{O}_Y, \omega_X)$ and
+$i_*\omega_Y = \SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Y, \omega_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $g : Y \to \Spec(k)$ and $f : X \to \Spec(k)$ the structure morphisms.
+Then $g = f \circ i$. Denote $a, b, c$ the right adjoint of
+Duality for Schemes, Lemma
+\ref{duality-lemma-twisted-inverse-image} for $f, g, i$.
+Then $b = c \circ a$ by uniqueness of right adjoints
+and because $Rg_* = Rf_* \circ Ri_*$.
+In the proof of Lemma \ref{lemma-duality-dim-1}
+we set
+$\omega_X^\bullet = a(\mathcal{O}_{\Spec(k)})$ and
+$\omega_Y^\bullet = b(\mathcal{O}_{\Spec(k)})$.
+Hence $\omega_Y^\bullet = c(\omega_X^\bullet)$
+which implies (1) by Duality for Schemes, Lemma
+\ref{duality-lemma-twisted-inverse-image-closed}.
+Since $\omega_X = H^{-1}(\omega_X^\bullet)$ and
+$\omega_Y = H^{-1}(\omega_Y^\bullet)$ we conclude that
+$\omega_Y = \SheafHom(\mathcal{O}_Y, \omega_X)$.
+This implies
+$i_*\omega_Y = \SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Y, \omega_X)$
+by Duality for Schemes, Lemma
+\ref{duality-lemma-sheaf-with-exact-support-ext}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-subscheme-reduced-gorenstein}
+Let $X$ be a proper scheme over a field $k$ which is
+Gorenstein, reduced, and equidimensional of dimension $1$.
+Let $i : Y \to X$ be a reduced closed subscheme equidimensional
+of dimension $1$. Let $j : Z \to X$ be the scheme theoretic
+closure of $X \setminus Y$. Then
+\begin{enumerate}
+\item $Y$ and $Z$ are Cohen-Macaulay,
+\item if $\mathcal{I} \subset \mathcal{O}_X$,
+resp.\ $\mathcal{J} \subset \mathcal{O}_X$ is the ideal sheaf of
+$Y$, resp.\ $Z$ in $X$, then
+$$
+\mathcal{I} = i_*\mathcal{I}'
+\quad\text{and}\quad
+\mathcal{J} = j_*\mathcal{J}'
+$$
+where $\mathcal{I}' \subset \mathcal{O}_Z$,
+resp.\ $\mathcal{J}' \subset \mathcal{O}_Y$ is the ideal sheaf
+of $Y \cap Z$ in $Z$, resp.\ $Y$,
+\item $\omega_Y = \mathcal{J}'(i^*\omega_X)$ and
+$i_*(\omega_Y) = \mathcal{J}\omega_X$,
\item $\omega_Z = \mathcal{I}'(j^*\omega_X)$ and
$j_*(\omega_Z) = \mathcal{I}\omega_X$,
+\item we have the following short exact sequences
+\begin{align*}
+0 \to \omega_X \to i_*i^*\omega_X \oplus j_*j^*\omega_X \to
+\mathcal{O}_{Y \cap Z} \to 0 \\
+0 \to i_*\omega_Y \to \omega_X \to j_*j^*\omega_X \to 0 \\
+0 \to j_*\omega_Z \to \omega_X \to i_*i^*\omega_X \to 0 \\
+0 \to i_*\omega_Y \oplus j_*\omega_Z \to \omega_X \to
+\mathcal{O}_{Y \cap Z} \to 0 \\
+0 \to \omega_Y \to i^*\omega_X \to \mathcal{O}_{Y \cap Z} \to 0 \\
+0 \to \omega_Z \to j^*\omega_X \to \mathcal{O}_{Y \cap Z} \to 0
+\end{align*}
+\end{enumerate}
+Here $\omega_X$, $\omega_Y$, $\omega_Z$ are as in
+Lemma \ref{lemma-duality-dim-1}.
+\end{lemma}
+
+\begin{proof}
+A reduced $1$-dimensional Noetherian scheme is Cohen-Macaulay, so
+(1) is true. Since $X$ is reduced, we see that $X = Y \cup Z$
+scheme theoretically. With notation as in
+Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-union}
+and by the statement of that lemma
+we have a short exact sequence
+$$
+0 \to \mathcal{O}_X \to
+\mathcal{O}_Y \oplus \mathcal{O}_Z \to \mathcal{O}_{Y \cap Z} \to 0
+$$
+Since $\mathcal{J} = \Ker(\mathcal{O}_X \to \mathcal{O}_Z)$,
+$\mathcal{J}' = \Ker(\mathcal{O}_Y \to \mathcal{O}_{Y \cap Z})$,
+$\mathcal{I} = \Ker(\mathcal{O}_X \to \mathcal{O}_Y)$, and
+$\mathcal{I}' = \Ker(\mathcal{O}_Z \to \mathcal{O}_{Y \cap Z})$
+a diagram chase implies (2).
+Observe that $\mathcal{I} + \mathcal{J}$ is the ideal sheaf
+of $Y \cap Z$ and that $\mathcal{I} \cap \mathcal{J} = 0$.
+Hence we have the following exact sequences
+\begin{align*}
+0 \to \mathcal{O}_X \to \mathcal{O}_Y \oplus \mathcal{O}_Z \to
+\mathcal{O}_{Y \cap Z} \to 0 \\
+0 \to \mathcal{J} \to \mathcal{O}_X \to \mathcal{O}_Z \to 0 \\
+0 \to \mathcal{I} \to \mathcal{O}_X \to \mathcal{O}_Y \to 0 \\
+0 \to \mathcal{J} \oplus \mathcal{I} \to \mathcal{O}_X \to
+\mathcal{O}_{Y \cap Z} \to 0 \\
+0 \to \mathcal{J}' \to \mathcal{O}_Y \to \mathcal{O}_{Y \cap Z} \to 0 \\
+0 \to \mathcal{I}' \to \mathcal{O}_Z \to \mathcal{O}_{Y \cap Z} \to 0
+\end{align*}
+Since $X$ is Gorenstein $\omega_X$ is an invertible $\mathcal{O}_X$-module
+(Duality for Schemes, Lemma \ref{duality-lemma-gorenstein}).
+Since $Y \cap Z$ has dimension $0$ we have
+$\omega_X|_{Y \cap Z} \cong \mathcal{O}_{Y \cap Z}$.
+Thus if we prove (3) and (4), then we obtain the short exact
+sequences of the lemma by tensoring the above
+short exact sequence with the invertible module $\omega_X$.
+By symmetry it suffices to prove (3) and by
+(2) it suffices to prove $i_*(\omega_Y) = \mathcal{J}\omega_X$.
+
+\medskip\noindent
+We have
+$i_*\omega_Y = \SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Y, \omega_X)$
+by Lemma \ref{lemma-closed-immersion-dim-1-CM}.
+Again using that $\omega_X$ is invertible
+we finally conclude that it suffices to show
+$\SheafHom_{\mathcal{O}_X}(\mathcal{O}_X/\mathcal{I}, \mathcal{O}_X)$
+maps isomorphically to $\mathcal{J}$ by evaluation at $1$.
+In other words, that $\mathcal{J}$ is the annihilator of
+$\mathcal{I}$. Certainly
+$\mathcal{I}\mathcal{J} \subset \mathcal{I} \cap \mathcal{J} = 0$.
+Conversely, a local section $f$ of $\mathcal{O}_X$ annihilating
+$\mathcal{I}$ vanishes at the generic points of $Z$
+(where $\mathcal{I}$ is the unit ideal), hence vanishes on $Z$
+because $\mathcal{O}_Z$ is reduced, i.e., $f$ is a section of
+$\mathcal{J}$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Riemann-Roch}
+\label{section-Riemann-Roch}
+
+\noindent
+Let $k$ be a field. Let $X$ be a proper scheme of dimension $\leq 1$
+over $k$. In Varieties, Section \ref{varieties-section-divisors-curves}
+we have defined the degree of a locally free $\mathcal{O}_X$-module
+$\mathcal{E}$ of constant rank by the formula
+\begin{equation}
+\label{equation-degree}
+\deg(\mathcal{E}) =
+\chi(X, \mathcal{E}) - \text{rank}(\mathcal{E})\chi(X, \mathcal{O}_X)
+\end{equation}
+see Varieties, Definition \ref{varieties-definition-degree-invertible-sheaf}.
+In the chapter on Chow Homology we defined the first Chern class of
+$\mathcal{E}$ as an operation on cycles
+(Chow Homology, Section
+\ref{chow-section-intersecting-chern-classes}) and we proved that
+\begin{equation}
+\label{equation-degree-c1}
+\deg(\mathcal{E}) = \deg(c_1(\mathcal{E}) \cap [X]_1)
+\end{equation}
+see Chow Homology, Lemma \ref{chow-lemma-degree-vector-bundle}.
+Combining (\ref{equation-degree}) and (\ref{equation-degree-c1})
+we obtain our first version of the Riemann-Roch formula
+\begin{equation}
+\label{equation-rr}
+\chi(X, \mathcal{E}) =
+\deg(c_1(\mathcal{E}) \cap [X]_1) +
+\text{rank}(\mathcal{E})\chi(X, \mathcal{O}_X)
+\end{equation}
+If $\mathcal{L}$ is an invertible $\mathcal{O}_X$-module, then
+we can also consider the numerical intersection
+$(\mathcal{L} \cdot X)$ as defined in
+Varieties, Definition \ref{varieties-definition-intersection-number}.
+However, this does not give anything new as
+\begin{equation}
+\label{equation-numerical-degree}
+(\mathcal{L} \cdot X) = \deg(\mathcal{L})
+\end{equation}
+by Varieties, Lemma
+\ref{varieties-lemma-intersection-numbers-and-degrees-on-curves}. If
+$\mathcal{L}$ is ample, then this integer is positive and is
+called the degree
+\begin{equation}
+\label{equation-degree-X}
+\deg_\mathcal{L}(X) = (\mathcal{L} \cdot X) = \deg(\mathcal{L})
+\end{equation}
+of $X$ with respect to $\mathcal{L}$, see
+Varieties, Definition \ref{varieties-definition-degree}.
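+
+\medskip\noindent
+As a simple illustration of (\ref{equation-degree}), take
+$X = \mathbf{P}^1_k$ and $\mathcal{E} = \mathcal{O}_{\mathbf{P}^1_k}(n)$.
+The standard computation of the cohomology of twists of the structure
+sheaf on projective space gives $\chi(X, \mathcal{O}_X) = 1$ and
+$\chi(X, \mathcal{E}) = n + 1$, whence
+$$
+\deg(\mathcal{O}_{\mathbf{P}^1_k}(n)) = (n + 1) - 1 \cdot 1 = n
+$$
+as one would expect.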
+
+\medskip\noindent
+To obtain a true Riemann-Roch theorem we would like to write
+$\chi(X, \mathcal{O}_X)$ as the degree of a canonical zero cycle on $X$.
+We refer to \cite{F} for a fully general version of this. We will use
+duality to get a formula in the case where $X$ is Gorenstein; however,
+in some sense this is a cheat (for example because this method cannot
+work in higher dimension).
+
+\medskip\noindent
+We first use Lemmas \ref{lemma-duality-dim-1} and \ref{lemma-duality-dim-1-CM}
+to get a relation between the Euler
+characteristic of $\mathcal{O}_X$ and the Euler characteristic
+of the dualizing complex or the dualizing module.
+
+\begin{lemma}
+\label{lemma-euler}
+Let $X$ be a proper scheme of dimension $\leq 1$ over a field $k$.
+With $\omega_X^\bullet$ and $\omega_X$ as in Lemma \ref{lemma-duality-dim-1}
+we have
+$$
+\chi(X, \mathcal{O}_X) = \chi(X, \omega_X^\bullet)
+$$
+If $X$ is Cohen-Macaulay and equidimensional of dimension $1$, then
+$$
+\chi(X, \mathcal{O}_X) = - \chi(X, \omega_X)
+$$
+\end{lemma}
+
+\begin{proof}
+We define the right hand side of the first formula as follows:
+$$
+\chi(X, \omega_X^\bullet) =
+\sum\nolimits_{i \in \mathbf{Z}} (-1)^i\dim_k H^i(X, \omega_X^\bullet)
+$$
+This is well defined because $\omega_X^\bullet$ is in
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$, but also because
+$$
+H^i(X, \omega_X^\bullet) =
+\Ext^i(\mathcal{O}_X, \omega_X^\bullet) =
+H^{-i}(X, \mathcal{O}_X)
+$$
+which is always finite dimensional and nonzero only if $i = 0, -1$.
+This of course also proves the first formula. The second is a consequence
+of the first because $\omega_X^\bullet = \omega_X[1]$ in the CM case, see
+Lemma \ref{lemma-duality-dim-1-CM}.
+\end{proof}
+
+\noindent
+We will use Lemma \ref{lemma-euler} to get the desired formula for
+$\chi(X, \mathcal{O}_X)$ in the case that $\omega_X$ is
+invertible, i.e., that $X$ is Gorenstein.
+The statement is that $-1/2$ of the first Chern class of $\omega_X$
+capped with the cycle $[X]_1$ associated to $X$ is a natural zero
+cycle on $X$ with half-integer coefficients whose degree is
+$\chi(X, \mathcal{O}_X)$.
+The occurrence of fractions in the statement of Riemann-Roch cannot
+be avoided.
+
+\begin{lemma}[Riemann-Roch]
+\label{lemma-rr}
+Let $X$ be a proper scheme over a field $k$ which is Gorenstein and
+equidimensional of dimension $1$. Let $\omega_X$ be as in
+Lemma \ref{lemma-duality-dim-1}. Then
+\begin{enumerate}
+\item $\omega_X$ is an invertible $\mathcal{O}_X$-module,
+\item $\deg(\omega_X) = -2\chi(X, \mathcal{O}_X)$,
+\item for a locally free $\mathcal{O}_X$-module $\mathcal{E}$
+of constant rank we have
+$$
+\chi(X, \mathcal{E}) = \deg(\mathcal{E}) -
+\textstyle{\frac{1}{2}} \text{rank}(\mathcal{E}) \deg(\omega_X)
+$$
+and $\dim_k(H^i(X, \mathcal{E})) =
+\dim_k(H^{1 - i}(X, \mathcal{E}^\vee \otimes_{\mathcal{O}_X} \omega_X))$
+for all $i \in \mathbf{Z}$.
+\end{enumerate}
+\end{lemma}
+
+\noindent
+Nonsingular (normal) curves are Gorenstein, see
+Duality for Schemes, Lemma \ref{duality-lemma-regular-gorenstein}.
+
+\begin{proof}
+Recall that Gorenstein schemes are Cohen-Macaulay
+(Duality for Schemes, Lemma \ref{duality-lemma-gorenstein-CM})
+and hence $\omega_X$ is a dualizing module on $X$, see
+Lemma \ref{lemma-duality-dim-1-CM}.
+It follows more or less from the definition of the Gorenstein property
+that the dualizing sheaf is invertible, see
+Duality for Schemes, Section \ref{duality-section-gorenstein}.
+By (\ref{equation-rr}) applied to $\omega_X$ we have
+$$
+\chi(X, \omega_X) =
+\deg(c_1(\omega_X) \cap [X]_1) + \chi(X, \mathcal{O}_X)
+$$
+Combined with Lemma \ref{lemma-euler} this gives
+$$
+2\chi(X, \mathcal{O}_X) = - \deg(c_1(\omega_X) \cap [X]_1) = - \deg(\omega_X)
+$$
+the second equality by (\ref{equation-degree-c1}). Putting this back into
+(\ref{equation-rr}) for $\mathcal{E}$ gives the displayed formula of the lemma.
+The symmetry in dimensions is a consequence of duality for $X$, see
+Remark \ref{remark-rework-duality-locally-free}.
+\end{proof}
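+
+\medskip\noindent
+For example, if $X = \mathbf{P}^1_k$, then
+$\omega_X = \Omega_{X/k} = \mathcal{O}_{\mathbf{P}^1_k}(-2)$
+and indeed $\deg(\omega_X) = -2 = -2\chi(X, \mathcal{O}_X)$.
+The formula of the lemma for
+$\mathcal{E} = \mathcal{O}_{\mathbf{P}^1_k}(n)$ reads
+$$
+\chi(X, \mathcal{E}) =
+n - \textstyle{\frac{1}{2}} \cdot 1 \cdot (-2) = n + 1
+$$
+in agreement with the direct computation of the cohomology of
+$\mathcal{O}_{\mathbf{P}^1_k}(n)$.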
+
+
+
+
+
+\section{Some vanishing results}
+\label{section-vanishing}
+
+\noindent
+This section contains some very weak vanishing results.
+Please see Section \ref{section-more-vanishing} for
+a few more, and more interesting, results.
+
+\begin{lemma}
+\label{lemma-automatic}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Then $X$ is connected, Cohen-Macaulay,
+and equidimensional of dimension $1$.
+\end{lemma}
+
+\begin{proof}
+Since $\Gamma(X, \mathcal{O}_X) = k$ has no nontrivial idempotents,
+we see that $X$ is connected. This already shows that $X$ is
+equidimensional of dimension $1$ (any irreducible component
+of dimension $0$ would be a connected component).
+Let $\mathcal{I} \subset \mathcal{O}_X$
+be the maximal coherent submodule supported in closed points.
+Then $\mathcal{I}$ exists
+(Divisors, Lemma \ref{divisors-lemma-remove-embedded-points})
+and is globally generated
+(Varieties, Lemma \ref{varieties-lemma-chi-tensor-finite}).
+Since $1 \in \Gamma(X, \mathcal{O}_X)$ is not a section
+of $\mathcal{I}$ we conclude that $\mathcal{I} = 0$.
+Thus $X$ does not have embedded points
+(Divisors, Lemma \ref{divisors-lemma-remove-embedded-points}).
+Thus $X$ has $(S_1)$ by
+Divisors, Lemma \ref{divisors-lemma-S1-no-embedded}.
+Hence $X$ is Cohen-Macaulay.
+\end{proof}
+
+\noindent
+In this section we work in the following situation.
+
+\begin{situation}
+\label{situation-Cohen-Macaulay-curve}
+Here $k$ is a field, $X$ is a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$.
+\end{situation}
+
+\noindent
+By Lemma \ref{lemma-automatic} the scheme $X$ is Cohen-Macaulay and
+equidimensional of dimension $1$. The dualizing module $\omega_X$
+discussed in Lemmas \ref{lemma-duality-dim-1} and
+\ref{lemma-duality-dim-1-CM} has nonvanishing $H^1$ because in fact
+$\dim_k H^1(X, \omega_X) = \dim_k H^0(X, \mathcal{O}_X) = 1$. It turns out
+that anything slightly more ``positive'' than $\omega_X$ has vanishing $H^1$.
+
+\begin{lemma}
+\label{lemma-vanishing}
+In Situation \ref{situation-Cohen-Macaulay-curve}. Given an exact sequence
+$$
+\omega_X \to \mathcal{F} \to \mathcal{Q} \to 0
+$$
+of coherent $\mathcal{O}_X$-modules with $H^1(X, \mathcal{Q}) = 0$
+(for example if $\dim(\text{Supp}(\mathcal{Q})) = 0$), then
+either $H^1(X, \mathcal{F}) = 0$ or
+$\mathcal{F} = \omega_X \oplus \mathcal{Q}$.
+\end{lemma}
+
+\begin{proof}
+(The parenthetical statement follows from
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-support-dimension-0}.)
+Since $H^0(X, \mathcal{O}_X) = k$ is dual to $H^1(X, \omega_X)$
+(see Section \ref{section-Riemann-Roch})
+we see that $\dim H^1(X, \omega_X) = 1$. The sheaf $\omega_X$
+represents the functor
+$\mathcal{F} \mapsto \Hom_k(H^1(X, \mathcal{F}), k)$
+on the category of coherent $\mathcal{O}_X$-modules
+(Duality for Schemes, Lemma
+\ref{duality-lemma-dualizing-module-proper-over-A}).
+Consider an exact sequence as in the statement of the lemma
+and assume that $H^1(X, \mathcal{F}) \not = 0$. Since
+$H^1(X, \mathcal{Q}) = 0$ we see that
+$H^1(X, \omega_X) \to H^1(X, \mathcal{F})$ is an isomorphism.
+By the universal property of $\omega_X$ stated above, we conclude there
+is a map $\mathcal{F} \to \omega_X$ whose action on $H^1$ is the inverse
+of this isomorphism. The composition $\omega_X \to \mathcal{F} \to \omega_X$
+is the identity (by the universal property) and the lemma is proved.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-twist}
+In Situation \ref{situation-Cohen-Macaulay-curve}. Let
+$\mathcal{L}$ be an invertible $\mathcal{O}_X$-module which is
+globally generated and not isomorphic to $\mathcal{O}_X$. Then
+$H^1(X, \omega_X \otimes \mathcal{L}) = 0$.
+\end{lemma}
+
+\begin{proof}
+By duality as discussed in Section \ref{section-Riemann-Roch} we have to
+show that $H^0(X, \mathcal{L}^{\otimes - 1}) = 0$. If not, then we can
+choose a global section $t$ of $\mathcal{L}^{\otimes - 1}$
+and a global section $s$ of $\mathcal{L}$ such that $st \not = 0$.
+However, then $st$ is a constant multiple of $1$, by our assumption
+that $H^0(X, \mathcal{O}_X) = k$. It follows that
+$\mathcal{L} \cong \mathcal{O}_X$, which is a contradiction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-globally-generated}
+In Situation \ref{situation-Cohen-Macaulay-curve}. Given an exact sequence
+$$
+\omega_X \to \mathcal{F} \to \mathcal{Q} \to 0
+$$
+of coherent $\mathcal{O}_X$-modules with $\dim(\text{Supp}(\mathcal{Q})) = 0$
+and $\dim_k H^0(X, \mathcal{Q}) \geq 2$. Assume there is no nonzero
+submodule $\mathcal{Q}' \subset \mathcal{F}$ such that
+$\mathcal{Q}' \to \mathcal{Q}$ is injective.
+Then the submodule of $\mathcal{F}$ generated by global
+sections surjects onto $\mathcal{Q}$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}' \subset \mathcal{F}$ be the submodule generated by
+global sections and the image of $\omega_X \to \mathcal{F}$. Since
+$\dim_k H^0(X, \mathcal{Q}) \geq 2$ and
+$\dim_k H^1(X, \omega_X) = \dim_k H^0(X, \mathcal{O}_X) = 1$,
+we see that $\mathcal{F}' \to \mathcal{Q}$ is not zero and
+$\omega_X \to \mathcal{F}'$ is not an isomorphism.
+Hence $H^1(X, \mathcal{F}') = 0$ by Lemma \ref{lemma-vanishing}
+and our assumption on $\mathcal{F}$.
+Consider the short exact sequence
+$$
+0 \to \mathcal{F}' \to \mathcal{F} \to
+\mathcal{Q}/\Im(\mathcal{F}' \to \mathcal{Q}) \to 0
+$$
+If the quotient on the right is nonzero, then we obtain a contradiction:
+as $H^1(X, \mathcal{F}') = 0$ the map
+$H^0(X, \mathcal{F}) \to
+H^0(X, \mathcal{Q}/\Im(\mathcal{F}' \to \mathcal{Q}))$
+is surjective with nonzero target, so $H^0(X, \mathcal{F})$ is strictly
+bigger than $H^0(X, \mathcal{F}')$, contradicting the fact that
+$\mathcal{F}'$ contains all global sections of $\mathcal{F}$.
+\end{proof}
+
+\noindent
+Here is an example of a global generation statement.
+
+\begin{lemma}
+\label{lemma-globally-generated-curve}
+In Situation \ref{situation-Cohen-Macaulay-curve} assume that
+$X$ is integral. Let $0 \to \omega_X \to \mathcal{F} \to \mathcal{Q} \to 0$
+be a short exact sequence of coherent $\mathcal{O}_X$-modules with
+$\mathcal{F}$ torsion free, $\dim(\text{Supp}(\mathcal{Q})) = 0$,
+and $\dim_k H^0(X, \mathcal{Q}) \geq 2$. Then $\mathcal{F}$
+is globally generated.
+\end{lemma}
+
+\begin{proof}
+Consider the submodule $\mathcal{F}'$ generated by the global sections. By
+Lemma \ref{lemma-globally-generated} we see that $\mathcal{F}' \to \mathcal{Q}$
+is surjective, in particular $\mathcal{F}' \not = 0$. Since $X$ is a curve, we
+see that $\mathcal{F}' \subset \mathcal{F}$ is an inclusion of rank $1$
+sheaves, hence $\mathcal{Q}' = \mathcal{F}/\mathcal{F}'$ is supported in
+finitely many points. To get a contradiction, assume that
+$\mathcal{Q}'$ is nonzero. Then we see that $H^1(X, \mathcal{F}') \not = 0$.
+Then we get a nonzero map $\mathcal{F}' \to \omega_X$ by the universal
+property (Duality for Schemes, Lemma
+\ref{duality-lemma-dualizing-module-proper-over-A}).
+The image of the composition $\mathcal{F}' \to \omega_X \to \mathcal{F}$
+is generated by global sections, hence is inside of $\mathcal{F}'$.
+Thus we get a nonzero self map $\mathcal{F}' \to \mathcal{F}'$.
+Since $\mathcal{F}'$ is torsion free of rank $1$ on a proper curve
+this has to be an automorphism (details omitted). But then this implies that
+$\mathcal{F}'$ is contained in $\omega_X \subset \mathcal{F}$
+contradicting the surjectivity of $\mathcal{F}' \to \mathcal{Q}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-omega-with-globally-generated-invertible}
+In Situation \ref{situation-Cohen-Macaulay-curve}. Let $\mathcal{L}$
+be a very ample invertible $\mathcal{O}_X$-module with
+$\deg(\mathcal{L}) \geq 2$. Then
+$\omega_X \otimes_{\mathcal{O}_X} \mathcal{L}$ is globally generated.
+\end{lemma}
+
+\begin{proof}
+Assume $k$ is algebraically closed. Let $x \in X$ be a closed point.
+Let $C_i \subset X$ be the irreducible components and for each $i$
+let $x_i \in C_i$ be the generic point. By
+Varieties, Lemma \ref{varieties-lemma-very-ample-vanish-at-point}
+we can choose a section $s \in H^0(X, \mathcal{L})$ such that $s$
+vanishes at $x$ but not at $x_i$ for all $i$. The corresponding
+module map $s : \mathcal{O}_X \to \mathcal{L}$ is injective with
+cokernel $\mathcal{Q}$ supported in finitely many points and
+with $\dim_k H^0(X, \mathcal{Q}) \geq 2$. Consider the corresponding
+exact sequence
+$$
+0 \to \omega_X \to \omega_X \otimes \mathcal{L} \to
+\omega_X \otimes \mathcal{Q} \to 0
+$$
+By Lemma \ref{lemma-globally-generated} we see that the module generated
+by global sections surjects onto $\omega_X \otimes \mathcal{Q}$.
+Since $x$ was arbitrary this proves the lemma. Some details omitted.
+
+\medskip\noindent
+We will reduce the case where $k$ is not algebraically closed, to
+the algebraically closed field case. We suggest the reader skip
+the rest of the proof. Choose an algebraic closure $\overline{k}$
+of $k$ and consider the base change $X_{\overline{k}}$. Let us
+check that $X_{\overline{k}} \to \Spec(\overline{k})$ is an example
+of Situation \ref{situation-Cohen-Macaulay-curve}. By flat base change
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology})
+we see that $H^0(X_{\overline{k}}, \mathcal{O}_{X_{\overline{k}}}) =
+\overline{k}$.
+The scheme $X_{\overline{k}}$ is proper over $\overline{k}$ (Morphisms,
+Lemma \ref{morphisms-lemma-base-change-proper}) and
+equidimensional of dimension $1$
+(Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}).
+The pullback of $\omega_X$ to $X_{\overline{k}}$ is the dualizing
+module of $X_{\overline{k}}$ by Lemma \ref{lemma-sanity-check-duality}.
+The pullback of $\mathcal{L}$ to $X_{\overline{k}}$ is very ample
+(Morphisms, Lemma \ref{morphisms-lemma-very-ample-base-change}).
+The degree of the pullback of $\mathcal{L}$ to $X_{\overline{k}}$
+is equal to the degree of $\mathcal{L}$ on $X$ (Varieties, Lemma
+\ref{varieties-lemma-degree-base-change}). Finally, we see that
+$\omega_X \otimes \mathcal{L}$ is globally generated if and only
+if its base change is so
+(Varieties, Lemma \ref{varieties-lemma-globally-generated-base-change}).
+In this way we see that the result follows from the result in the
+case of an algebraically closed ground field.
+\end{proof}
+
+
+
+
+\section{Very ample invertible sheaves}
+\label{section-very-ample}
+
+\noindent
+An often used criterion for very ampleness of an invertible module
+$\mathcal{L}$ on a scheme $X$ of finite type over an algebraically
+closed field is: sections of $\mathcal{L}$ separate points and
+tangent vectors (Varieties, Section
+\ref{varieties-section-separating-points-tangent-vectors}). Here is another
+criterion for curves; please compare with
+Varieties, Subsection \ref{varieties-subsection-regularity}.
+
+\begin{lemma}
+\label{lemma-criterion-very-ample}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Let $\mathcal{L}$ be an invertible
+$\mathcal{O}_X$-module. Assume
+\begin{enumerate}
+\item $\mathcal{L}$ has a regular global section,
+\item $H^1(X, \mathcal{L}) = 0$, and
+\item $\mathcal{L}$ is ample.
+\end{enumerate}
+Then $\mathcal{L}^{\otimes 6}$ is very ample on $X$ over $k$.
+\end{lemma}
+
+\begin{proof}
+Let $s$ be a regular global section of $\mathcal{L}$. Let
+$i : Z = Z(s) \to X$ be the zero scheme of $s$, see
+Divisors, Section \ref{divisors-section-effective-Cartier-invertible}.
+By condition (3) we see that $Z \not = \emptyset$ (small detail omitted).
+Consider the short exact sequence
+$$
+0 \to \mathcal{O}_X \xrightarrow{s} \mathcal{L} \to
+i_*(\mathcal{L}|_Z) \to 0
+$$
+Tensoring with $\mathcal{L}$ we obtain
+$$
+0 \to \mathcal{L} \to \mathcal{L}^{\otimes 2} \to
+i_*(\mathcal{L}^{\otimes 2}|_Z) \to 0
+$$
+Observe that $Z$ has dimension $0$
+(Divisors, Lemma \ref{divisors-lemma-effective-Cartier-makes-dimension-drop})
+and hence is the spectrum of an Artinian ring
+(Varieties, Lemma \ref{varieties-lemma-algebraic-scheme-dim-0})
+hence $\mathcal{L}|_Z \cong \mathcal{O}_Z$
+(Algebra, Lemma \ref{algebra-lemma-locally-free-semi-local-free}).
+The short exact sequence also shows that
+$H^1(X, \mathcal{L}^{\otimes 2}) = 0$ (for example using
+Varieties, Lemma \ref{varieties-lemma-chi-tensor-finite}
+to see vanishing in the spot on the right). Using induction
+on $n \geq 1$ and the sequence
+$$
+0 \to \mathcal{L}^{\otimes n} \xrightarrow{s}
+\mathcal{L}^{\otimes n + 1} \to
+i_*(\mathcal{L}^{\otimes n + 1}|_Z) \to 0
+$$
+we see that $H^1(X, \mathcal{L}^{\otimes n}) = 0$ for $n > 0$
+and that there exists a global section $t_{n + 1}$
+of $\mathcal{L}^{\otimes n + 1}$ which gives a trivialization of
+$\mathcal{L}^{\otimes n + 1}|_Z \cong \mathcal{O}_Z$.
+
+\medskip\noindent
+Consider the multiplication map
+$$
+\mu_n :
+H^0(X, \mathcal{L}) \otimes_k H^0(X, \mathcal{L}^{\otimes n})
+\oplus
+H^0(X, \mathcal{L}^{\otimes 2}) \otimes_k H^0(X, \mathcal{L}^{\otimes n - 1})
+\longrightarrow
+H^0(X, \mathcal{L}^{\otimes n + 1})
+$$
+We claim this is surjective for $n \geq 3$.
+To see this we consider the short exact sequence
+$$
+0 \to \mathcal{L}^{\otimes n} \xrightarrow{s}
+\mathcal{L}^{\otimes n + 1} \to i_*(\mathcal{L}^{\otimes n + 1}|_Z) \to 0
+$$
+The sections of $\mathcal{L}^{\otimes n + 1}$ coming from the left in this
+sequence are in the image of $\mu_n$. On the other hand, since
+$H^0(\mathcal{L}^{\otimes 2}) \to H^0(\mathcal{L}^{\otimes 2}|_Z)$
+is surjective (see above) and since $t_{n - 1}$ maps to a trivialization of
+$\mathcal{L}^{\otimes n - 1}|_Z$
+we see that $\mu_n(H^0(X, \mathcal{L}^{\otimes 2}) \otimes t_{n - 1})$
+gives a subspace
+of $H^0(X, \mathcal{L}^{\otimes n + 1})$ surjecting onto the global sections of
+$\mathcal{L}^{\otimes n + 1}|_Z$. This proves the claim.
+
+\medskip\noindent
+From the claim in the previous paragraph we conclude
+that the graded $k$-algebra
+$$
+S = \bigoplus\nolimits_{n \geq 0} H^0(X, \mathcal{L}^{\otimes n})
+$$
+is generated in degrees $0, 1, 2, 3$ over $k$.
+Recall that $X = \text{Proj}(S)$, see
+Morphisms, Lemma \ref{morphisms-lemma-proper-ample-is-proj}.
+Thus $S^{(6)} = \bigoplus_{n} S_{6n}$ is generated in degree $1$.
+This means that $\mathcal{L}^{\otimes 6}$ is very ample as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-criterion-very-ample-bis}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Let $\mathcal{L}$ be an invertible
+$\mathcal{O}_X$-module. Assume
+\begin{enumerate}
+\item $\mathcal{L}$ is globally generated,
+\item $H^1(X, \mathcal{L}) = 0$, and
+\item $\mathcal{L}$ is ample.
+\end{enumerate}
+Then $\mathcal{L}^{\otimes 2}$ is very ample on $X$ over $k$.
+\end{lemma}
+
+\begin{proof}
+Choose a basis $s_0, \ldots, s_n$ of $H^0(X, \mathcal{L}^{\otimes 2})$
+over $k$. By property (1) we see that $\mathcal{L}^{\otimes 2}$
+is globally generated and we get a morphism
+$$
+\varphi_{\mathcal{L}^{\otimes 2}, (s_0, \ldots, s_n)} :
+X \longrightarrow \mathbf{P}^n_k
+$$
+See Constructions, Section \ref{constructions-section-projective-space}.
+The lemma asserts that this morphism is a closed immersion.
+To check this we may replace $k$ by its algebraic closure, see
+Descent, Lemma \ref{descent-lemma-descending-property-closed-immersion}.
+Thus we may assume $k$ is algebraically closed.
+
+\medskip\noindent
+Assume $k$ is algebraically closed. For each generic point $\eta_i \in X$
+let $V_i \subset H^0(X, \mathcal{L})$ be the $k$-subspace of
+sections vanishing at $\eta_i$. Since $\mathcal{L}$ is globally generated,
+we see that $V_i \not = H^0(X, \mathcal{L})$. Since $X$ has only a
+finite number of irreducible components and $k$ is infinite, we can find
+$s \in H^0(X, \mathcal{L})$ nonvanishing at $\eta_i$ for all $i$.
+Then $s$ is a regular section of $\mathcal{L}$ (because $X$ is
+Cohen-Macaulay by Lemma \ref{lemma-automatic} and hence $\mathcal{L}$
+has no embedded associated points).
+
+\medskip\noindent
+In particular, all of the statements given in the proof of
+Lemma \ref{lemma-criterion-very-ample} hold with this $s$.
+Moreover, as $\mathcal{L}$ is globally generated, we can find
+a global section $t \in H^0(X, \mathcal{L})$ such that
+$t|_Z$ is nonvanishing (argue as above using the finite number
+of points of $Z$). Then in the proof of Lemma \ref{lemma-criterion-very-ample}
+we can use $t$ to see that additionally the multiplication map
+$$
+\mu_n :
+H^0(X, \mathcal{L}) \otimes_k H^0(X, \mathcal{L}^{\otimes 2})
+\longrightarrow
+H^0(X, \mathcal{L}^{\otimes 3})
+$$
+is surjective. Thus
+$$
+S = \bigoplus\nolimits_{n \geq 0} H^0(X, \mathcal{L}^{\otimes n})
+$$
+is generated in degrees $0, 1, 2$ over $k$. Arguing as in the
+proof of Lemma \ref{lemma-criterion-very-ample} we find that
+$S^{(2)} = \bigoplus_{n} S_{2n}$ is generated in degree $1$.
+This means that $\mathcal{L}^{\otimes 2}$ is very ample as desired.
+Some details omitted.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{The genus of a curve}
+\label{section-genus}
+
+\noindent
+If $X$ is a smooth projective geometrically irreducible curve over a field $k$,
+then we've previously defined the genus of $X$ as the dimension of
+$H^1(X, \mathcal{O}_X)$, see
+Picard Schemes of Curves, Definition \ref{pic-definition-genus}.
+Observe that $H^0(X, \mathcal{O}_X) = k$ in this case, see
+Varieties, Lemma \ref{varieties-lemma-regular-functions-proper-variety}.
+Let us generalize this as follows.
+
+\begin{definition}
+\label{definition-genus}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having
+dimension $1$ and $H^0(X, \mathcal{O}_X) = k$.
+Then the {\it genus} of $X$ is $g = \dim_k H^1(X, \mathcal{O}_X)$.
+\end{definition}
+
+\noindent
+This is sometimes called the {\it arithmetic genus} of $X$.
+In the literature the arithmetic genus of a proper curve $X$
+over $k$ is sometimes defined as
+$$
+p_a(X) = 1 - \chi(X, \mathcal{O}_X) =
+1 - \dim_k H^0(X, \mathcal{O}_X) + \dim_k H^1(X, \mathcal{O}_X)
+$$
+This agrees with our definition when it applies because we assume
+$H^0(X, \mathcal{O}_X) = k$. But note that
+\begin{enumerate}
+\item $p_a(X)$ can be negative, and
+\item $p_a(X)$ depends on the base field $k$ and should be written $p_a(X/k)$.
+\end{enumerate}
+For example if $k = \mathbf{Q}$
+and $X = \mathbf{P}^1_{\mathbf{Q}(i)}$ then
+$p_a(X/\mathbf{Q}) = -1$ and $p_a(X/\mathbf{Q}(i)) = 0$.
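+
+\medskip\noindent
+Namely, in this example $H^0(X, \mathcal{O}_X) = \mathbf{Q}(i)$
+has dimension $2$ over $\mathbf{Q}$ and $H^1(X, \mathcal{O}_X) = 0$,
+so that
+$$
+p_a(X/\mathbf{Q}) = 1 - 2 + 0 = -1
+\quad\text{and}\quad
+p_a(X/\mathbf{Q}(i)) = 1 - 1 + 0 = 0
+$$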
+
+\medskip\noindent
+The assumption that $H^0(X, \mathcal{O}_X) = k$ in our definition has
+two consequences. On the one hand, it means there is no confusion about
+the base field. On the other hand, it implies the scheme $X$ is
+Cohen-Macaulay and equidimensional of dimension $1$
+(Lemma \ref{lemma-automatic}). If $\omega_X$ denotes the dualizing
+module as in Lemmas \ref{lemma-duality-dim-1} and
+\ref{lemma-duality-dim-1-CM} we see that
+\begin{equation}
+\label{equation-genus}
+g = \dim_k H^1(X, \mathcal{O}_X) = \dim_k H^0(X, \omega_X)
+\end{equation}
+by duality, see Remark \ref{remark-rework-duality-locally-free}.
+
+\medskip\noindent
+If $X$ is proper over $k$ of dimension $\leq 1$ and $H^0(X, \mathcal{O}_X)$
+is not equal to the ground field $k$, instead of using the arithmetic genus
+$p_a(X)$ given by the displayed formula above we shall use the invariant
+$\chi(X, \mathcal{O}_X)$. In fact, it is advocated in
+\cite[page 276]{FAC} and \cite[Introduction]{Hirzebruch}
+that we should call $\chi(X, \mathcal{O}_X)$ the arithmetic genus.
+
+\begin{lemma}
+\label{lemma-genus-base-change}
+Let $k'/k$ be a field extension. Let $X$ be a proper scheme over $k$ having
+dimension $1$ and $H^0(X, \mathcal{O}_X) = k$. Then $X_{k'}$ is a
+proper scheme over $k'$
+having dimension $1$ and $H^0(X_{k'}, \mathcal{O}_{X_{k'}}) = k'$.
+Moreover the genus of $X_{k'}$ is equal to the genus of $X$.
+\end{lemma}
+
+\begin{proof}
+The dimension of $X_{k'}$ is $1$ for example by
+Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}.
+The morphism $X_{k'} \to \Spec(k')$ is proper by
+Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}.
+The equality $H^0(X_{k'}, \mathcal{O}_{X_{k'}}) = k'$ follows from
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-flat-base-change-cohomology}.
+The equality of the genus follows from the same lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-genus-gorenstein}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having
+dimension $1$ and $H^0(X, \mathcal{O}_X) = k$. If $X$ is Gorenstein,
+then
+$$
+\deg(\omega_X) = 2g - 2
+$$
+where $g$ is the genus of $X$ and $\omega_X$ is as in
+Lemma \ref{lemma-duality-dim-1}.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-rr}: we have
+$\deg(\omega_X) = -2\chi(X, \mathcal{O}_X) = -2(1 - g) = 2g - 2$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-genus-smooth}
+Let $X$ be a smooth proper curve over a field $k$
+with $H^0(X, \mathcal{O}_X) = k$. Then
+$$
+\dim_k H^0(X, \Omega_{X/k}) = g
+\quad\text{and}\quad
+\deg(\Omega_{X/k}) = 2g - 2
+$$
+where $g$ is the genus of $X$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-duality-dim-1} we have $\Omega_{X/k} = \omega_X$.
+Hence the formulas hold by (\ref{equation-genus}) and
+Lemma \ref{lemma-genus-gorenstein}.
+\end{proof}
+
+
+
+
+
+
+\section{Plane curves}
+\label{section-plane-curves}
+
+\noindent
+Let $k$ be a field. A {\it plane curve} will be a curve $X$ which is isomorphic
+to a closed subscheme of $\mathbf{P}^2_k$. Often the embedding
+$X \to \mathbf{P}^2_k$ will be considered given. By
+Divisors, Example \ref{divisors-example-closed-subscheme-of-proj}
+a curve is determined by the corresponding homogeneous ideal
+$$
+I(X) =
+\Ker\left(
+k[T_0, T_1, T_2] \longrightarrow \bigoplus \Gamma(X, \mathcal{O}_X(n))
+\right)
+$$
+Recall that in this situation we have
+$$
+X = \text{Proj}(k[T_0, T_1, T_2]/I(X))
+$$
+as closed subschemes of $\mathbf{P}^2_k$.
+For more general information on these constructions we refer the
+reader to Divisors, Example \ref{divisors-example-closed-subscheme-of-proj}
+and the references therein.
+It turns out that $I(X) = (F)$ for some homogeneous polynomial
+$F \in k[T_0, T_1, T_2]$, see Lemma \ref{lemma-equation-plane-curve}.
+Since $X$ is irreducible, it follows that $F$ is irreducible, see
+Lemma \ref{lemma-plane-curve}. Moreover, looking at the short exact
+sequence
+$$
+0 \to \mathcal{O}_{\mathbf{P}^2_k}(-d) \xrightarrow{F}
+\mathcal{O}_{\mathbf{P}^2_k} \to \mathcal{O}_X \to 0
+$$
+where $d = \deg(F)$ we find that $H^0(X, \mathcal{O}_X) = k$ and that $X$
+has genus $(d - 1)(d - 2)/2$, see proof of Lemma \ref{lemma-genus-plane-curve}.
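+
+\medskip\noindent
+Namely, additivity of Euler characteristics applied to the displayed
+sequence together with
+$\chi(\mathbf{P}^2_k, \mathcal{O}(-d)) = (-d + 1)(-d + 2)/2 =
+(d - 1)(d - 2)/2$ gives
+$$
+\chi(X, \mathcal{O}_X) =
+\chi(\mathbf{P}^2_k, \mathcal{O}) - \chi(\mathbf{P}^2_k, \mathcal{O}(-d)) =
+1 - (d - 1)(d - 2)/2
+$$
+so that $g = \dim_k H^1(X, \mathcal{O}_X) = (d - 1)(d - 2)/2$.
+For instance, a smooth plane cubic ($d = 3$) has genus $1$ and a smooth
+plane quartic ($d = 4$) has genus $3$.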
+
+\medskip\noindent
+To find smooth plane curves it is easiest to write explicit
+equations. Let $p$ denote the characteristic of $k$. If $p$
+does not divide $d$, then we can take
+$$
+F = T_0^d + T_1^d + T_2^d
+$$
+The corresponding curve $X = V_+(F)$ is called the
+{\it Fermat curve} of degree $d$. It is smooth because
+on each standard affine piece $D_+(T_i)$ we obtain
+a curve isomorphic to the affine curve
+$$
+\Spec(k[x, y]/(x^d + y^d + 1))
+$$
+The ring map $k \to k[x, y]/(x^d + y^d + 1)$ is smooth by
+Algebra, Lemma \ref{algebra-lemma-relative-global-complete-intersection-smooth}
+as $d x^{d - 1}$ and $d y^{d - 1}$ generate the unit ideal
+in $k[x, y]/(x^d + y^d + 1)$. If $p | d$ but $p \not = 3$
+then you can use the equation
+$$
+F = T_0^{d - 1}T_1 + T_1^{d - 1}T_2 + T_2^{d - 1}T_0
+$$
+Namely, on the affine pieces you get $x + x^{d - 1}y + y^{d - 1}$
+with derivatives $1 - x^{d - 2}y$ and $x^{d - 1} - y^{d - 2}$
+whose common zero set (of all three) is empty\footnote{Namely,
+as $x^{d - 1} = y^{d - 2}$, then $0 = x + x^{d - 1}y + y^{d - 1} =
+x + 2 x^{d - 1} y$. Since $x \not = 0$ because $1 = x^{d - 2}y$
+we get $0 = 1 + 2x^{d - 2}y = 3$ which is absurd unless $3 = 0$.}.
+We leave it to the reader to make examples in characteristic $3$.
+
+\medskip\noindent
+More generally for any field $k$ and any $n$ and $d$ there exists
+a smooth hypersurface of degree $d$ in $\mathbf{P}^n_k$, see
+for example \cite{Poonen}.
+
+\medskip\noindent
+Of course, in this way we only find smooth curves whose genus
+is a triangular number. To get smooth curves of an arbitrary
+genus one can look for smooth curves lying on
+$\mathbf{P}^1 \times \mathbf{P}^1$ (insert future reference here).
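+
+\medskip\noindent
+As a quick check of the claim about triangular numbers, the first few
+values of the genus formula for smooth plane curves are
+$$
+g = (d - 1)(d - 2)/2 = 0,\ 0,\ 1,\ 3,\ 6,\ 10,\ \ldots
+\quad\text{for}\quad
+d = 1,\ 2,\ 3,\ 4,\ 5,\ 6,\ \ldots
+$$
+so that, for example, no smooth plane curve has genus $2$.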
+
+\begin{lemma}
+\label{lemma-equation-plane-curve}
+Let $Z \subset \mathbf{P}^2_k$ be a closed subscheme which
+is equidimensional of dimension $1$ and has no embedded points
+(equivalently $Z$ is Cohen-Macaulay).
+Then the ideal $I(Z) \subset k[T_0, T_1, T_2]$ corresponding
+to $Z$ is principal.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Divisors, Lemma \ref{divisors-lemma-equation-codim-1-in-projective-space}
+(see also Varieties, Lemma
+\ref{varieties-lemma-equation-codim-1-in-projective-space}).
+The parenthetical statement follows from the fact that a
+$1$ dimensional Noetherian scheme is Cohen-Macaulay
+if and only if it has no embedded points, see
+Divisors, Lemma \ref{divisors-lemma-noetherian-dim-1-CM-no-embedded-points}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-plane-curve}
+Let $Z \subset \mathbf{P}^2_k$ be as in Lemma \ref{lemma-equation-plane-curve}
+and let $I(Z) = (F)$ for some $F \in k[T_0, T_1, T_2]$.
+Then $Z$ is a curve if and only if $F$ is irreducible.
+\end{lemma}
+
+\begin{proof}
+If $F$ is reducible, say $F = F' F''$ then let $Z'$ be the closed subscheme
+of $\mathbf{P}^2_k$ defined by $F'$. It is clear that $Z' \subset Z$
+and that $Z' \not = Z$. Since $Z'$ has dimension $1$ as well, we conclude
+that either $Z$ is not reduced, or that $Z$ is not irreducible.
+Conversely, write $Z = \sum a_i D_i$ where $D_i$ are the irreducible
+components of $Z$, see
+Divisors, Lemmas \ref{divisors-lemma-codim-1-part} and
+\ref{divisors-lemma-codimension-1-is-effective-Cartier}.
+Let $F_i \in k[T_0, T_1, T_2]$ be the homogeneous
+polynomial generating the ideal of $D_i$. Then it is clear that
+$F$ and $\prod F_i^{a_i}$ cut out the same closed subscheme of
+$\mathbf{P}^2_k$. Hence $F = \lambda \prod F_i^{a_i}$ for some
+$\lambda \in k^*$ because both generate the ideal of $Z$.
+Thus we see that if $F$ is irreducible, then $Z$ is
+a prime divisor, i.e., a curve.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-genus-plane-curve}
+Let $Z \subset \mathbf{P}^2_k$ be as in Lemma \ref{lemma-equation-plane-curve}
+and let $I(Z) = (F)$ for some $F \in k[T_0, T_1, T_2]$.
+Then $H^0(Z, \mathcal{O}_Z) = k$ and the genus of $Z$ is
+$(d - 1)(d - 2)/2$ where $d = \deg(F)$.
+\end{lemma}
+
+\begin{proof}
+Let $S = k[T_0, T_1, T_2]$.
+There is an exact sequence of graded modules
+$$
+0 \to S(-d) \xrightarrow{F} S \to S/(F) \to 0
+$$
+Denote $i : Z \to \mathbf{P}^2_k$ the given closed immersion.
+Applying the exact functor $\widetilde{\ }$
+(Constructions, Lemma \ref{constructions-lemma-proj-sheaves})
+we obtain
+$$
+0 \to \mathcal{O}_{\mathbf{P}^2_k}(-d) \to
+\mathcal{O}_{\mathbf{P}^2_k} \to i_*\mathcal{O}_Z \to 0
+$$
+because $F$ generates the ideal of $Z$.
+Note that the cohomology groups of $\mathcal{O}_{\mathbf{P}^2_k}(-d)$ and
+$\mathcal{O}_{\mathbf{P}^2_k}$ are given in
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cohomology-projective-space-over-ring}.
+On the other hand, we have
+$H^q(Z, \mathcal{O}_Z) = H^q(\mathbf{P}^2_k, i_*\mathcal{O}_Z)$ by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-relative-affine-cohomology}.
+Applying the long exact cohomology sequence
+we first obtain that
+$$
+k = H^0(\mathbf{P}^2_k, \mathcal{O}_{\mathbf{P}^2_k}) \longrightarrow
+H^0(Z, \mathcal{O}_Z)
+$$
+is an isomorphism and next that the boundary map
+$$
+H^1(Z, \mathcal{O}_Z) \longrightarrow
+H^2(\mathbf{P}^2_k, \mathcal{O}_{\mathbf{P}^2_k}(-d)) \cong
+k[T_0, T_1, T_2]_{d - 3}
+$$
+is an isomorphism. The dimension of the space on the right is the number
+of monomials of degree $d - 3$ in $T_0, T_1, T_2$, which is
+$\binom{d - 1}{2} = (d - 1)(d - 2)/2$, so the proof is finished.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-plane-curve-point-over-separable}
+Let $Z \subset \mathbf{P}^2_k$ be as in Lemma \ref{lemma-equation-plane-curve}
+and let $I(Z) = (F)$ for some $F \in k[T_0, T_1, T_2]$.
+If $Z \to \Spec(k)$ is smooth in at least one point and $k$ is infinite, then
+there exists a closed point $z \in Z$ contained in the smooth
+locus such that $\kappa(z)/k$ is finite separable of degree
+at most $d = \deg(F)$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $z' \in Z$ is a point where $Z \to \Spec(k)$ is smooth.
+After renumbering the coordinates if necessary we may assume
+$z'$ is contained in $D_+(T_0)$. Set $f = F(1, x, y) \in k[x, y]$.
+Then $Z \cap D_+(T_0)$ is isomorphic to the spectrum of $k[x, y]/(f)$.
+Let $f_x, f_y$ be the partial derivatives of $f$ with respect to
+$x, y$. Since $z'$ is a smooth point of $Z/k$ we see that either
+$f_x$ or $f_y$ is nonzero in $z'$ (see discussion in
+Algebra, Section \ref{algebra-section-smooth}).
+After renumbering the coordinates
+we may assume $f_y$ is not zero at $z'$. Hence there is a nonempty
+open subscheme $V \subset Z \cap D_+(T_0)$ such that the
+projection
+$$
+p : V \longrightarrow \Spec(k[x])
+$$
+is \'etale. Because the degree of $f$ as a polynomial in $y$
+is at most $d$, we see that the degrees of the fibres of the
+projection $p$ are at most $d$ (see discussion in
+Morphisms, Section \ref{morphisms-section-universally-bounded}).
+Moreover, as $p$ is \'etale
+the image of $p$ is an open $U \subset \Spec(k[x])$.
+Finally, since $k$ is infinite, the set of $k$-rational points
+$U(k)$ of $U$ is infinite, in particular not empty. Pick any
+$t \in U(k)$ and let $z \in V$ be a point mapping to $t$.
+Then $z$ works.
+\end{proof}
+
+
+
+
+
+\section{Curves of genus zero}
+\label{section-genus-zero}
+
+\noindent
+Later we will need to know what a proper genus zero curve looks like.
+It turns out that a Gorenstein proper genus zero curve is a plane
+curve of degree $2$, i.e., a conic, see Lemma \ref{lemma-genus-zero}.
+A general proper genus zero curve is obtained from a nonsingular one
+(over a bigger field) by a pushout procedure, see
+Lemma \ref{lemma-genus-zero-singular}.
+Since a nonsingular curve is Gorenstein, these two results
+cover all possible cases.
+
+\begin{lemma}
+\label{lemma-genus-zero-pic}
+Let $X$ be a proper curve over a field $k$ with $H^0(X, \mathcal{O}_X) = k$.
+If $X$ has genus $0$, then every invertible $\mathcal{O}_X$-module
+$\mathcal{L}$ of degree $0$ is trivial.
+\end{lemma}
+
+\begin{proof}
+Namely, we have $\dim_k H^0(X, \mathcal{L}) \geq 0 + 1 - 0 = 1$
+by Riemann-Roch (Lemma \ref{lemma-rr}), hence $\mathcal{L}$ has a
+nonzero section, hence $\mathcal{L} \cong \mathcal{O}_X$ by
+Varieties, Lemma \ref{varieties-lemma-check-invertible-sheaf-trivial}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-genus-zero-positive-degree}
+Let $X$ be a proper curve over a field $k$ with $H^0(X, \mathcal{O}_X) = k$.
+Assume $X$ has genus $0$. Let $\mathcal{L}$ be an invertible
+$\mathcal{O}_X$-module of degree $d > 0$. Then we have
+\begin{enumerate}
+\item $\dim_k H^0(X, \mathcal{L}) = d + 1$ and $\dim_k H^1(X, \mathcal{L}) = 0$,
+\item $\mathcal{L}$ is very ample and defines a closed immersion into
+$\mathbf{P}^d_k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By definition of degree and genus we have
+$$
+\dim_k H^0(X, \mathcal{L}) - \dim_k H^1(X, \mathcal{L}) = d + 1
+$$
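+Indeed, recalling that the degree of $\mathcal{L}$ is by definition the
+difference of Euler characteristics and that the genus is
+$\dim_k H^1(X, \mathcal{O}_X) = 0$, this reads
+$$
+\chi(X, \mathcal{L}) = \deg(\mathcal{L}) + \chi(X, \mathcal{O}_X)
+= d + (1 - 0) = d + 1
+$$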
+Let $s$ be a nonzero section of $\mathcal{L}$.
+Then the zero scheme of $s$ is an effective Cartier
+divisor $D \subset X$, we have $\mathcal{L} = \mathcal{O}_X(D)$ and
+we have a short exact sequence
+$$
+0 \to \mathcal{O}_X \to \mathcal{L} \to \mathcal{L}|_D \to 0
+$$
+see Divisors, Lemma \ref{divisors-lemma-characterize-OD} and
+Remark \ref{divisors-remark-ses-regular-section}.
+Since $H^1(X, \mathcal{O}_X) = 0$ by assumption, we see that
+$H^0(X, \mathcal{L}) \to H^0(X, \mathcal{L}|_D)$ is surjective.
+As $\mathcal{L}|_D$ is generated by global sections
+(because $\dim(D) = 0$, see
+Varieties, Lemma \ref{varieties-lemma-chi-tensor-finite})
+we conclude that the invertible module $\mathcal{L}$
+is generated by global sections.
+In fact, since $D$ is an Artinian scheme we have
+$\mathcal{L}|_D \cong \mathcal{O}_D$\footnote{In our case this
+follows from Divisors, Lemma
+\ref{divisors-lemma-finite-trivialize-invertible-upstairs}
+as $D \to \Spec(k)$ is finite.} and hence we can
+find a section $t$ of $\mathcal{L}$ whose restriction
+to $D$ generates $\mathcal{L}|_D$.
+The short exact sequence also shows that $H^1(X, \mathcal{L}) = 0$.
+
+\medskip\noindent
+For $n \geq 1$ consider the multiplication map
+$$
+\mu_n :
+H^0(X, \mathcal{L}) \otimes_k H^0(X, \mathcal{L}^{\otimes n})
+\longrightarrow
+H^0(X, \mathcal{L}^{\otimes n + 1})
+$$
+We claim this is surjective. To see this we consider the short exact
+sequence
+$$
+0 \to \mathcal{L}^{\otimes n} \xrightarrow{s}
+\mathcal{L}^{\otimes n + 1} \to \mathcal{L}^{\otimes n + 1}|_D \to 0
+$$
+The sections of $\mathcal{L}^{\otimes n + 1}$ coming from the left in this
+sequence are in the image of $\mu_n$. On the other hand, since
+$H^0(\mathcal{L}) \to H^0(\mathcal{L}|_D)$ is surjective and since
+$t^n$ maps to a trivialization of $\mathcal{L}^{\otimes n}|_D$
+we see that $\mu_n(H^0(X, \mathcal{L}) \otimes t^n)$ gives a subspace
+of $H^0(X, \mathcal{L}^{\otimes n + 1})$ surjecting onto the global sections of
+$\mathcal{L}^{\otimes n + 1}|_D$. This proves the claim.
+
+\medskip\noindent
+Observe that $\mathcal{L}$ is ample by
+Varieties, Lemma \ref{varieties-lemma-ample-curve}.
+Hence
+Morphisms, Lemma \ref{morphisms-lemma-proper-ample-is-proj}
+gives an isomorphism
+$$
+X \longrightarrow
+\text{Proj}\left(
+\bigoplus\nolimits_{n \geq 0} H^0(X, \mathcal{L}^{\otimes n})\right)
+$$
+Since the maps $\mu_n$ are surjective for all $n \geq 1$ we see that
+the graded algebra on the right hand side is a quotient of
+the symmetric algebra on $H^0(X, \mathcal{L})$. Choosing a $k$-basis
+$s_0, \ldots, s_d$ of $H^0(X, \mathcal{L})$ we see that
+it is a quotient of a polynomial algebra in $d + 1$ variables.
+Since quotients of graded rings correspond to closed immersions
+of $\text{Proj}$ (Constructions, Lemma
+\ref{constructions-lemma-surjective-graded-rings-generated-degree-1-map-proj})
+we find a closed immersion $X \to \mathbf{P}^d_k$. We omit the
+verification that this morphism is the morphism of
+Constructions, Lemma \ref{constructions-lemma-projective-space}
+associated to the sections $s_0, \ldots, s_d$ of $\mathcal{L}$.
+\end{proof}
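+
+\medskip\noindent
+To illustrate the lemma in the simplest case, take $X = \mathbf{P}^1_k$
+and $\mathcal{L} = \mathcal{O}_{\mathbf{P}^1_k}(d)$ for some $d > 0$.
+Then the monomials of degree $d$ form a basis of $H^0(X, \mathcal{L})$
+and the closed immersion of the lemma is (up to a linear change of
+coordinates) the $d$th Veronese map
+$$
+\mathbf{P}^1_k \longrightarrow \mathbf{P}^d_k, \quad
+(s_0 : s_1) \longmapsto (s_0^d : s_0^{d - 1} s_1 : \ldots : s_1^d)
+$$
+whose image is a rational normal curve of degree $d$.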
+
+\begin{lemma}
+\label{lemma-genus-zero}
+Let $X$ be a proper curve over a field $k$ with $H^0(X, \mathcal{O}_X) = k$.
+If $X$ is Gorenstein and has genus $0$, then $X$
+is isomorphic to a plane curve of degree $2$.
+\end{lemma}
+
+\begin{proof}
+Consider the invertible sheaf $\mathcal{L} = \omega_X^{\otimes -1}$ where
+$\omega_X$ is as in Lemma \ref{lemma-duality-dim-1}. Then
+$\deg(\omega_X) = -2$ by Lemma \ref{lemma-genus-gorenstein}
+and hence $\deg(\mathcal{L}) = 2$. By
+Lemma \ref{lemma-genus-zero-positive-degree}
+we conclude that choosing a basis $s_0, s_1, s_2$ of the $k$-vector
+space of global sections of $\mathcal{L}$ we obtain a closed immersion
+$$
+\varphi_{(\mathcal{L}, (s_0, s_1, s_2))} :
+X \longrightarrow \mathbf{P}^2_k
+$$
+Thus $X$ is a plane curve of some degree $d$. Let $F \in k[T_0, T_1, T_2]_d$
+be its equation (Lemma \ref{lemma-equation-plane-curve}).
+Because the genus of $X$ is $0$ we see that $d$ is $1$ or $2$
+(Lemma \ref{lemma-genus-plane-curve}). Observe that
+$F$ restricts to the zero section on $\varphi(X)$ and hence
+$F(s_0, s_1, s_2)$ is the zero section of $\mathcal{L}^{\otimes 2}$.
+Because $s_0, s_1, s_2$ are linearly independent we see that $F$
+cannot be linear, i.e., $d = \deg(F) \geq 2$. Thus $d = 2$
+and the proof is complete.
+\end{proof}
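+
+\medskip\noindent
+Beware that a plane conic as in Lemma \ref{lemma-genus-zero} need not be
+isomorphic to $\mathbf{P}^1_k$. A classical example is the smooth conic
+$$
+X = V_+(T_0^2 + T_1^2 + T_2^2) \subset \mathbf{P}^2_{\mathbf{R}}
+$$
+over $k = \mathbf{R}$: it has $H^0(X, \mathcal{O}_X) = \mathbf{R}$ and
+genus $0$, but it has no $\mathbf{R}$-rational point, whereas
+$\mathbf{P}^1_{\mathbf{R}}$ has many.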
+
+\begin{proposition}[Characterization of the projective line]
+\label{proposition-projective-line}
+Let $k$ be a field. Let $X$ be a proper curve over $k$.
+The following are equivalent
+\begin{enumerate}
+\item $X \cong \mathbf{P}^1_k$,
+\item $X$ is smooth and geometrically irreducible over $k$,
+$X$ has genus $0$, and $X$ has an invertible module of odd degree,
+\item $X$ is geometrically integral over $k$, $X$ has genus $0$,
+$X$ is Gorenstein, and $X$ has an invertible sheaf of odd degree,
+\item $H^0(X, \mathcal{O}_X) = k$, $X$ has genus $0$, $X$ is Gorenstein,
+and $X$ has an invertible sheaf of odd degree,
+\item $X$ is geometrically integral over $k$, $X$ has genus $0$,
+and $X$ has an invertible $\mathcal{O}_X$-module of degree $1$,
+\item $H^0(X, \mathcal{O}_X) = k$, $X$ has genus $0$,
+and $X$ has an invertible $\mathcal{O}_X$-module of degree $1$,
+\item $H^1(X, \mathcal{O}_X) = 0$ and $X$ has an invertible
+$\mathcal{O}_X$-module of degree $1$,
+\item $H^1(X, \mathcal{O}_X) = 0$ and $X$
+has closed points $x_1, \ldots, x_n$ such that
+$\mathcal{O}_{X, x_i}$ is normal and
+$\gcd([\kappa(x_1) : k], \ldots, [\kappa(x_n) : k]) = 1$, and
+\item add more here.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+We will prove that each condition (2) -- (8) implies (1) and we omit
+the verification that (1) implies (2) -- (8).
+
+\medskip\noindent
+Assume (2). A smooth scheme over $k$ is geometrically reduced
+(Varieties, Lemma \ref{varieties-lemma-smooth-geometrically-normal})
+and regular (Varieties, Lemma \ref{varieties-lemma-smooth-regular}).
+Hence $X$ is Gorenstein (Duality for Schemes, Lemma
+\ref{duality-lemma-regular-gorenstein}).
+Thus we reduce to (3).
+
+\medskip\noindent
+Assume (3). Since $X$ is geometrically integral over $k$ we have
+$H^0(X, \mathcal{O}_X) = k$ by
+Varieties, Lemma \ref{varieties-lemma-regular-functions-proper-variety}
+and we reduce to (4).
+
+\medskip\noindent
+Assume (4). Since $X$ is Gorenstein the dualizing module
+$\omega_X$ as in Lemma \ref{lemma-duality-dim-1} has degree
+$\deg(\omega_X) = -2$ by Lemma \ref{lemma-genus-gorenstein}.
+Combined with the assumed existence of an odd degree invertible
+module, we conclude there exists an invertible module of degree $1$.
+In this way we reduce to (6).
+
+\medskip\noindent
+Assume (5). Since $X$ is geometrically integral over $k$ we have
+$H^0(X, \mathcal{O}_X) = k$ by
+Varieties, Lemma \ref{varieties-lemma-regular-functions-proper-variety}
+and we reduce to (6).
+
+\medskip\noindent
+Assume (6). Then $X \cong \mathbf{P}^1_k$ by
+Lemma \ref{lemma-genus-zero-positive-degree}.
+
+\medskip\noindent
+Assume (7). Observe that $\kappa = H^0(X, \mathcal{O}_X)$ is a field
+finite over $k$ by
+Varieties, Lemma \ref{varieties-lemma-regular-functions-proper-variety}.
+If $d = [\kappa : k] > 1$, then every invertible sheaf has degree
+divisible by $d$ and there cannot be an invertible sheaf of degree $1$.
+Hence $d = 1$ and we reduce to case (6).
+
+\medskip\noindent
+Assume (8). Observe that $\kappa = H^0(X, \mathcal{O}_X)$ is a field
+finite over $k$ by
+Varieties, Lemma \ref{varieties-lemma-regular-functions-proper-variety}.
+Since $\kappa \subset \kappa(x_i)$ we see that $k = \kappa$
+by the assumption on the gcd of the degrees. The same condition
+allows us to find integers $a_i$ such that
+$1 = \sum a_i[\kappa(x_i) : k]$. Because each $x_i$ defines an
+effective Cartier divisor on $X$ by
+Varieties, Lemma \ref{varieties-lemma-regular-point-on-curve}
+we can consider the invertible module
+$\mathcal{L} = \mathcal{O}_X(\sum a_i x_i)$.
+By our choice of $a_i$ the degree of $\mathcal{L}$ is $1$.
+Thus $X \cong \mathbf{P}^1_k$ by Lemma \ref{lemma-genus-zero-positive-degree}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-genus-zero-singular}
+Let $X$ be a proper curve over a field $k$ with $H^0(X, \mathcal{O}_X) = k$.
+Assume $X$ is singular and has genus $0$. Then there exists a diagram
+$$
+\xymatrix{
+x' \ar[d] \ar[r] & X' \ar[d]^\nu \ar[r] & \Spec(k') \ar[d] \\
+x \ar[r] & X \ar[r] & \Spec(k)
+}
+$$
+where
+\begin{enumerate}
+\item $k'/k$ is a nontrivial finite extension,
+\item $X' \cong \mathbf{P}^1_{k'}$,
+\item $x'$ is a $k'$-rational point of $X'$,
+\item $x$ is a $k$-rational point of $X$,
+\item $X' \setminus \{x'\} \to X \setminus \{x\}$ is an isomorphism,
+\item $0 \to \mathcal{O}_X \to \nu_*\mathcal{O}_{X'} \to k'/k \to 0$
+is a short exact sequence
+where $k'/k = \kappa(x')/\kappa(x)$ indicates the skyscraper sheaf
+on the point $x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\nu : X' \to X$ be the normalization of $X$, see
+Varieties, Sections \ref{varieties-section-normalization} and
+\ref{varieties-section-normalization-one-dimensional}.
+Since $X$ is singular $\nu$ is not an isomorphism.
+Then $k' = H^0(X', \mathcal{O}_{X'})$ is a finite extension of $k$
+(Varieties, Lemma \ref{varieties-lemma-regular-functions-proper-variety}).
+The short exact sequence
+$$
+0 \to \mathcal{O}_X \to \nu_*\mathcal{O}_{X'} \to \mathcal{Q} \to 0
+$$
+and the fact that $\mathcal{Q}$ is supported in finitely many
+closed points give us that
+\begin{enumerate}
+\item $H^1(X', \mathcal{O}_{X'}) = 0$, i.e., $X'$ has genus $0$
+as a curve over $k'$,
+\item there is a short exact sequence
+$0 \to k \to k' \to H^0(X, \mathcal{Q}) \to 0$.
+\end{enumerate}
+In particular $k'/k$ is a nontrivial extension.
+
+\medskip\noindent
+Next, we consider what is often called the {\it conductor ideal}
+$$
+\mathcal{I} = \SheafHom_{\mathcal{O}_X}(\nu_*\mathcal{O}_{X'}, \mathcal{O}_X)
+$$
+This is a quasi-coherent $\mathcal{O}_X$-module. We view $\mathcal{I}$
+as an ideal in $\mathcal{O}_X$ via the map $\varphi \mapsto \varphi(1)$.
+Thus $\mathcal{I}(U)$ is the set of $f \in \mathcal{O}_X(U)$ such that
+$f \left(\nu_*\mathcal{O}_{X'}(U)\right) \subset \mathcal{O}_X(U)$. In
+other words, the condition is that $f$ annihilates $\mathcal{Q}$.
+Equivalently, there is a defining exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}_X \to
+\SheafHom_{\mathcal{O}_X}(\mathcal{Q}, \mathcal{Q})
+$$
+Let $U \subset X$ be an affine open containing the support of $\mathcal{Q}$.
+Then $V = \mathcal{Q}(U) = H^0(X, \mathcal{Q})$ is a $k$-vector space
+of dimension $n - 1$, where $n = [k' : k]$. The image of
+$\mathcal{O}_X(U) \to \Hom_k(V, V)$ is a commutative subalgebra,
+hence has dimension $\leq n - 1$ over $k$ (this is a property of
+commutative subalgebras of matrix algebras; details omitted).
+We conclude that we have a short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}_X \to \mathcal{A} \to 0
+$$
+where $\text{Supp}(\mathcal{A}) = \text{Supp}(\mathcal{Q})$
+and $\dim_k H^0(X, \mathcal{A}) \leq n - 1$.
+On the other hand, the description
+$\mathcal{I} = \SheafHom_{\mathcal{O}_X}(\nu_*\mathcal{O}_{X'}, \mathcal{O}_X)$
+provides $\mathcal{I}$ with a $\nu_*\mathcal{O}_{X'}$-module structure
+such that the inclusion map $\mathcal{I} \to \nu_*\mathcal{O}_{X'}$
+is a $\nu_*\mathcal{O}_{X'}$-module map.
+We conclude that $\mathcal{I} = \nu_*\mathcal{I}'$
+for some quasi-coherent sheaf of ideals
+$\mathcal{I}' \subset \mathcal{O}_{X'}$, see
+Morphisms, Lemma \ref{morphisms-lemma-affine-equivalence-modules}.
+Define $\mathcal{A}'$ as the cokernel:
+$$
+0 \to \mathcal{I}' \to \mathcal{O}_{X'} \to \mathcal{A}' \to 0
+$$
+Combining the exact sequences so far we obtain a short exact sequence
+$0 \to \mathcal{A} \to \nu_*\mathcal{A}' \to \mathcal{Q} \to 0$.
+The estimate above, combined with
+$\dim_k H^0(X, \mathcal{Q}) = n - 1$, gives
+$$
+\dim_k H^0(X', \mathcal{A}') =
+\dim_k H^0(X, \mathcal{A}) + \dim_k H^0(X, \mathcal{Q}) \leq 2 n - 2
+$$
+However, since $X'$ is a curve over $k'$ we see that
+the left hand side is divisible by $n$
+(Varieties, Lemma \ref{varieties-lemma-divisible}).
+As $\mathcal{A}$ and $\mathcal{A}'$ cannot be zero, we conclude that
+$\dim_k H^0(X', \mathcal{A}') = n$ which means that $\mathcal{I}'$
+is the ideal sheaf of a $k'$-rational point $x'$.
+By Proposition \ref{proposition-projective-line}
+we find $X' \cong \mathbf{P}^1_{k'}$.
+Going back to the equalities above, we conclude that
+$\dim_k H^0(X, \mathcal{A}) = 1$. This
+means that $\mathcal{I}$ is the ideal sheaf of a
+$k$-rational point $x$. Then $\mathcal{A} = \kappa(x) = k$
+and $\mathcal{A}' = \kappa(x') = k'$ as skyscraper sheaves.
+Comparing the exact sequences given above,
+this immediately implies the result on structure sheaves
+as stated in the lemma.
+\end{proof}
+
+\begin{example}
+\label{example-squish-on-P1}
+In fact, the situation described in Lemma \ref{lemma-genus-zero-singular}
+occurs for any nontrivial finite extension $k'/k$. Namely, we can consider
+$$
+A = \{f \in k'[x] \mid f(0) \in k \}
+$$
+The spectrum of $A$ is an affine curve, which we can glue to
+the spectrum of $B = k'[y]$ using the isomorphism
+$A_x \cong B_y$ sending $x^{-1}$ to $y$.
+The result is a proper curve $X$ with $H^0(X, \mathcal{O}_X) = k$
+and singular point $x$ corresponding to the maximal ideal $A \cap (x)$.
+The normalization of $X$ is $\mathbf{P}^1_{k'}$ exactly as in the lemma.
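+To see directly that $X$ has genus $0$, one can use the sequence
+$0 \to \mathcal{O}_X \to \nu_*\mathcal{O}_{X'} \to k'/k \to 0$
+of the lemma to compute Euler characteristics over $k$:
+$$
+\chi(X, \mathcal{O}_X) = \chi(X', \mathcal{O}_{X'}) - \dim_k(k'/k)
+= [k' : k] - ([k' : k] - 1) = 1
+$$
+whence $H^1(X, \mathcal{O}_X) = 0$ because $H^0(X, \mathcal{O}_X) = k$.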
+\end{example}
+
+
+
+
+
+
+
+
+\section{Geometric genus}
+\label{section-geometric-genus}
+
+\noindent
+If $X$ is a proper and {\bf smooth} curve over $k$ with
+$H^0(X, \mathcal{O}_X) = k$, then
+$$
+p_g(X) = \dim_k H^0(X, \Omega_{X/k})
+$$
+is called the {\it geometric genus} of $X$. By Lemma \ref{lemma-genus-smooth}
+the geometric genus of $X$ agrees with the (arithmetic) genus. However,
+in higher dimensions there is a difference between the geometric genus
+and the arithmetic genus, see Remark \ref{remark-genus-higher-dimension}.
+
+\medskip\noindent
+For singular curves, we will define the geometric genus as follows.
+
+\begin{definition}
+\label{definition-geometric-genus}
+Let $k$ be a field. Let $X$ be a geometrically irreducible
+curve over $k$. The {\it geometric genus} of $X$ is the genus
+of a smooth projective model of $X$ possibly defined over
+an extension field of $k$ as in
+Lemma \ref{lemma-smooth-models}.
+\end{definition}
+
+\noindent
+If $k$ is perfect, then the nonsingular projective model $Y$ of $X$
+is smooth (Lemma \ref{lemma-nonsingular-model-smooth})
+and the geometric genus of $X$ is just the genus of $Y$.
+But if $k$ is not perfect, this may not be true.
+In this case we choose an extension $K/k$ such that
+the nonsingular projective model $Y_K$ of $(X_K)_{red}$ is
+a smooth projective curve and we define the geometric genus
+of $X$ to be the genus of $Y_K$. This is well defined by
+Lemmas \ref{lemma-smooth-models} and \ref{lemma-genus-base-change}.
+
+\begin{remark}
+\label{remark-genus-higher-dimension}
+Suppose that $X$ is a $d$-dimensional proper smooth variety over
+an algebraically closed field $k$.
+Then the {\it arithmetic genus} is often defined as
+$p_a(X) = (-1)^d(\chi(X, \mathcal{O}_X) - 1)$ and the {\it geometric genus}
+as $p_g(X) = \dim_k H^0(X, \Omega^d_{X/k})$. In this situation
+the arithmetic genus and the geometric genus no longer agree
+even though it is still true that $\omega_X \cong \Omega_{X/k}^d$.
+For example, if $d = 2$, then we have
+\begin{align*}
+p_a(X) - p_g(X) & =
+h^0(X, \mathcal{O}_X) - h^1(X, \mathcal{O}_X) + h^2(X, \mathcal{O}_X) - 1
+- h^0(X, \Omega^2_{X/k}) \\
+& =
+- h^1(X, \mathcal{O}_X) + h^2(X, \mathcal{O}_X) - h^0(X, \omega_X) \\
+& =
+- h^1(X, \mathcal{O}_X)
+\end{align*}
+where $h^i(X, \mathcal{F}) = \dim_k H^i(X, \mathcal{F})$ and
+where the last equality follows from duality.
+Hence for a surface the difference $p_g(X) - p_a(X)$ is always
+nonnegative; it is sometimes called the irregularity of the surface.
+If $X = C_1 \times C_2$ is a product of smooth projective curves of
+genus $g_1$ and $g_2$, then the irregularity is $g_1 + g_2$.
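+Namely, the K\"unneth formula gives
+$$
+H^1(C_1 \times C_2, \mathcal{O}_{C_1 \times C_2}) \cong
+H^1(C_1, \mathcal{O}_{C_1}) \otimes_k H^0(C_2, \mathcal{O}_{C_2})
+\oplus
+H^0(C_1, \mathcal{O}_{C_1}) \otimes_k H^1(C_2, \mathcal{O}_{C_2})
+$$
+which has dimension $g_1 + g_2$ over $k$.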
+\end{remark}
+
+
+
+
+
+
+\section{Riemann-Hurwitz}
+\label{section-riemann-hurewitz}
+
+\noindent
+Let $k$ be a field. Let $f : X \to Y$ be a morphism of smooth curves over $k$.
+Then we obtain a canonical exact sequence
+$$
+f^*\Omega_{Y/k} \xrightarrow{\text{d}f} \Omega_{X/k}
+\longrightarrow \Omega_{X/Y} \longrightarrow 0
+$$
+by Morphisms, Lemma \ref{morphisms-lemma-triangle-differentials}.
+Since $X$ and $Y$ are smooth, the sheaves $\Omega_{X/k}$ and
+$\Omega_{Y/k}$ are invertible modules, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-omega-finite-locally-free}.
+Assume the first map is nonzero, i.e., assume $f$ is generically
+\'etale, see Lemma \ref{lemma-generically-etale}. Let $R \subset X$
+be the closed subscheme cut out by the different $\mathfrak{D}_f$ of $f$.
+By Discriminants, Lemma
+\ref{discriminant-lemma-discriminant-quasi-finite-morphism-smooth}
+this is the same as the vanishing locus of $\text{d}f$, it is
+an effective Cartier divisor, and we get
+$$
+f^*\Omega_{Y/k} \otimes_{\mathcal{O}_X} \mathcal{O}_X(R) = \Omega_{X/k}
+$$
+In particular, if $X$, $Y$ are projective with
+$k = H^0(Y, \mathcal{O}_Y) = H^0(X, \mathcal{O}_X)$
+and $X$, $Y$ have genus $g_X$, $g_Y$, then we get the
+Riemann-Hurwitz formula
+\begin{align*}
+2g_X - 2 & =
+\deg(\Omega_{X/k}) \\
+& =
+\deg(f^*\Omega_{Y/k} \otimes_{\mathcal{O}_X} \mathcal{O}_X(R)) \\
+& =
+\deg(f) \deg(\Omega_{Y/k}) + \deg(R) \\
+& =
+\deg(f) (2g_Y - 2) + \deg(R)
+\end{align*}
+The first and last equality by Lemma \ref{lemma-genus-smooth}.
+The second equality by the isomorphism of invertible sheaves given above.
+The third equality by additivity of degrees
+(Varieties, Lemma \ref{varieties-lemma-degree-tensor-product}),
+the formula for the degree of a pullback
+(Varieties, Lemma \ref{varieties-lemma-degree-pullback-map-proper-curves}),
+and finally the formula for the degree of $\mathcal{O}_X(R)$
+(Varieties, Lemma \ref{varieties-lemma-degree-effective-Cartier-divisor}).
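+
+\medskip\noindent
+As a first example, suppose $f : X \to Y = \mathbf{P}^1_k$ has degree
+$2$, so $g_Y = 0$. Then the formula reads
+$$
+2g_X - 2 = 2(0 - 2) + \deg(R),
+\quad\text{i.e.,}\quad
+\deg(R) = 2g_X + 2
+$$
+Thus if the characteristic of $k$ is not $2$ and all points of $R$ are
+$k$-rational, then $f$ is ramified in exactly $2g_X + 2$ points, as is
+familiar for hyperelliptic curves.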
+
+\medskip\noindent
+To use the Riemann-Hurwitz formula we need to compute
+$\deg(R) = \dim_k \Gamma(R, \mathcal{O}_R)$. By the structure
+of zero dimensional schemes over $k$ (see for example
+Varieties, Lemma \ref{varieties-lemma-algebraic-scheme-dim-0}),
+we see that $R$ is a finite disjoint union of spectra of
+Artinian local rings $R = \coprod_{x \in R} \Spec(\mathcal{O}_{R, x})$
+with each $\mathcal{O}_{R, x}$ of finite dimension over $k$. Thus
+$$
+\deg(R) = \sum\nolimits_{x \in R} \dim_k \mathcal{O}_{R, x} =
+\sum\nolimits_{x \in R} d_x [\kappa(x) : k]
+$$
+with
+$$
+d_x = \text{length}_{\mathcal{O}_{R, x}} \mathcal{O}_{R, x} =
+\text{length}_{\mathcal{O}_{X, x}} \mathcal{O}_{R, x}
+$$
+the multiplicity of $x$ in $R$
+(see Algebra, Lemma \ref{algebra-lemma-pushdown-module}).
+Let $x \in X$ be a closed point with image $y \in Y$.
+Looking at stalks we obtain an exact sequence
+$$
+\Omega_{Y/k, y} \to \Omega_{X/k, x} \to \Omega_{X/Y, x} \to 0
+$$
+Choosing local generators $\eta_x$ and $\eta_y$ of the
+(free rank $1$) modules $\Omega_{X/k, x}$ and $\Omega_{Y/k, y}$
+we see that
+$
+\eta_y \mapsto h \eta_x
+$
+for some nonzero $h \in \mathcal{O}_{X, x}$. By the exact sequence we see that
+$\Omega_{X/Y, x} \cong \mathcal{O}_{X, x}/h\mathcal{O}_{X, x}$
+as $\mathcal{O}_{X, x}$-modules. Since the divisor
+$R$ is cut out by $h$ (see above) we have
+$\mathcal{O}_{R, x} = \mathcal{O}_{X, x}/h\mathcal{O}_{X, x}$.
+Thus we find the following equalities
+\begin{align*}
+d_x
+& =
+\text{length}_{\mathcal{O}_{X, x}}(\mathcal{O}_{R, x}) \\
+& =
+\text{length}_{\mathcal{O}_{X, x}}(\mathcal{O}_{X, x}/h\mathcal{O}_{X, x}) \\
+& =
+\text{length}_{\mathcal{O}_{X, x}}(\Omega_{X/Y, x}) \\
+& =
+\text{ord}_{\mathcal{O}_{X, x}}(h) \\
+& =
+\text{ord}_{\mathcal{O}_{X, x}}(``\eta_y/\eta_x")
+\end{align*}
+The first equality by our definition of $d_x$. The second and third
+we saw above. The fourth equality is the definition of $\text{ord}$, see
+Algebra, Definition \ref{algebra-definition-ord}. Note that since
+$\mathcal{O}_{X, x}$ is a discrete valuation ring, the integer
+$\text{ord}_{\mathcal{O}_{X, x}}(h)$ is just the valuation of $h$.
+The fifth equality is a mnemonic.
+
+\medskip\noindent
+Here is a case where one can ``calculate'' the multiplicity $d_x$ in terms
+of other invariants. Namely, if $\kappa(x)$ is separable over $k$, then
+we may choose $\eta_x = \text{d}s$ and $\eta_y = \text{d}t$ where $s$
+and $t$ are uniformizers in $\mathcal{O}_{X, x}$ and $\mathcal{O}_{Y, y}$
+(Lemma \ref{lemma-uniformizer-works}).
+Then $t \mapsto u s^{e_x}$ for some unit $u \in \mathcal{O}_{X, x}$
+where $e_x$ is the ramification index of the extension
+$\mathcal{O}_{Y, y} \subset \mathcal{O}_{X, x}$. Hence we get
+$$
+\eta_y = \text{d}t = \text{d}(u s^{e_x}) =
+e_x s^{e_x - 1} u \text{d}s + s^{e_x} \text{d}u
+$$
+Writing $\text{d}u = w \text{d}s$ for some $w \in \mathcal{O}_{X, x}$
+we see that
+$$
+``\eta_y/\eta_x" = e_x s^{e_x - 1} u + s^{e_x} w = (e_x u + s w)s^{e_x - 1}
+$$
+We conclude that the order of vanishing of this is $e_x - 1$
+unless the characteristic of $\kappa(x)$ is $p > 0$ and $p$ divides $e_x$
+in which case the order of vanishing is $> e_x - 1$.
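+
+\medskip\noindent
+For a concrete instance of wild ramification, suppose the characteristic
+is $p > 0$ and that $t \mapsto s^p + s^{p + 1}$, i.e., $e_x = p$,
+$u = 1 + s$, and $w = 1$ in the notation above. Then
+$$
+``\eta_y/\eta_x" = (e_x u + s w)s^{e_x - 1} = (0 + s)s^{p - 1} = s^p
+$$
+and the order of vanishing is $p > e_x - 1$.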
+
+\medskip\noindent
+Combining all of the above we find that if $k$ has characteristic
+zero, then
+$$
+2g_X - 2 = (2g_Y - 2)\deg(f) +
+\sum\nolimits_{x \in X} (e_x - 1)[\kappa(x) : k]
+$$
+where $e_x$ is the ramification index of $\mathcal{O}_{X, x}$ over
+$\mathcal{O}_{Y, f(x)}$. This precise formula will hold if and only
+if all the ramification is tame, i.e., when the
+residue field extensions $\kappa(x)/\kappa(y)$ are separable and
+$e_x$ is prime to the characteristic of $k$, although the
+arguments above are insufficient to prove this. We refer the reader
+to Lemma \ref{lemma-rhe} and its proof.
+
+\begin{lemma}
+\label{lemma-generically-etale}
+\begin{slogan}
+A morphism of smooth curves is separable if and only if it is \'etale
+almost everywhere
+\end{slogan}
+Let $k$ be a field. Let $f : X \to Y$ be a morphism of smooth curves over $k$.
+The following are equivalent
+\begin{enumerate}
+\item $\text{d}f : f^*\Omega_{Y/k} \to \Omega_{X/k}$ is nonzero,
+\item $\Omega_{X/Y}$ is supported on a proper closed subset of $X$,
+\item there exists a nonempty open $U \subset X$ such that
+$f|_U : U \to Y$ is unramified,
+\item there exists a nonempty open $U \subset X$ such that
+$f|_U : U \to Y$ is \'etale,
+\item the extension $k(X)/k(Y)$ of function fields is
+finite separable.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $X$ and $Y$ are smooth, the sheaves $\Omega_{X/k}$ and
+$\Omega_{Y/k}$ are invertible modules, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-omega-finite-locally-free}.
+Using the exact sequence
+$$
+f^*\Omega_{Y/k} \longrightarrow \Omega_{X/k}
+\longrightarrow \Omega_{X/Y} \longrightarrow 0
+$$
+of Morphisms, Lemma \ref{morphisms-lemma-triangle-differentials}
+we see that (1) and (2) are equivalent and equivalent to the
+condition that $f^*\Omega_{Y/k} \to \Omega_{X/k}$ is nonzero
+in the generic point. The equivalence of (2) and (3) follows
+from Morphisms, Lemma \ref{morphisms-lemma-unramified-omega-zero}.
+The equivalence between (3) and (4) follows from
+Morphisms, Lemma \ref{morphisms-lemma-flat-unramified-etale}
+and the fact that flatness is automatic
+(Lemma \ref{lemma-flat}).
+To see the equivalence of (5) and (4)
+use Algebra, Lemma \ref{algebra-lemma-smooth-at-generic-point}.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-rh}
+Let $f : X \to Y$ be a morphism of smooth proper curves
+over a field $k$ which satisfies the equivalent conditions of
+Lemma \ref{lemma-generically-etale}. If
+$k = H^0(Y, \mathcal{O}_Y) = H^0(X, \mathcal{O}_X)$
+and $X$ and $Y$ have genus $g_X$ and $g_Y$, then
+$$
+2g_X - 2 = (2g_Y - 2) \deg(f) + \deg(R)
+$$
+where $R \subset X$ is the effective Cartier divisor cut out by
+the different of $f$.
+\end{lemma}
+
+\begin{proof}
+See discussion above; we used
+Discriminants, Lemma
+\ref{discriminant-lemma-discriminant-quasi-finite-morphism-smooth},
+Lemma \ref{lemma-genus-smooth}, and
+Varieties, Lemmas \ref{varieties-lemma-degree-tensor-product} and
+\ref{varieties-lemma-degree-pullback-map-proper-curves}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-uniformizer-works}
+Let $X \to \Spec(k)$ be smooth of relative dimension $1$ at a closed
+point $x \in X$. If $\kappa(x)$ is separable over $k$, then for
+any uniformizer $s$ in the discrete valuation ring $\mathcal{O}_{X, x}$
+the element $\text{d}s$ freely generates $\Omega_{X/k, x}$
+over $\mathcal{O}_{X, x}$.
+\end{lemma}
+
+\begin{proof}
+The ring $\mathcal{O}_{X, x}$ is a discrete valuation ring by
+Algebra, Lemma \ref{algebra-lemma-characterize-smooth-over-field}.
+Since $x$ is closed $\kappa(x)$ is finite over $k$. Hence if
+$\kappa(x)/k$ is separable, then any uniformizer $s$
+maps to a nonzero element of
+$\Omega_{X/k, x} \otimes_{\mathcal{O}_{X, x}} \kappa(x)$ by
+Algebra, Lemma \ref{algebra-lemma-computation-differential}.
+Since $\Omega_{X/k, x}$ is free of rank $1$ over $\mathcal{O}_{X, x}$
+the result follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-rhe}
+Notation and assumptions as in Lemma \ref{lemma-rh}. For a closed point
+$x \in X$ let $d_x$ be the multiplicity of $x$ in $R$. Then
+$$
+2g_X - 2 = (2g_Y - 2) \deg(f) + \sum\nolimits_{x \in X \text{ closed}} d_x [\kappa(x) : k]
+$$
+Moreover, we have the following results:
+\begin{enumerate}
+\item $d_x = \text{length}_{\mathcal{O}_{X, x}}(\Omega_{X/Y, x})$,
+\item $d_x \geq e_x - 1$ where $e_x$ is the ramification index
+of $\mathcal{O}_{X, x}$ over $\mathcal{O}_{Y, y}$,
+\item $d_x = e_x - 1$ if and only if $\mathcal{O}_{X, x}$ is tamely
+ramified over $\mathcal{O}_{Y, y}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-rh} and the discussion above
+(which used
+Varieties, Lemma \ref{varieties-lemma-algebraic-scheme-dim-0}
+and
+Algebra, Lemma \ref{algebra-lemma-pushdown-module})
+it suffices to prove the results on the
+multiplicity $d_x$ of $x$ in $R$. Part (1) was proved
+in the discussion above. In the discussion above
+we proved (2) and (3) only in the case where $\kappa(x)$ is
+separable over $k$.
+In the rest of the proof we give a uniform treatment
+of (2) and (3) using material on differents of
+quasi-finite Gorenstein morphisms.
+
+\medskip\noindent
+First, observe that $f$ is a quasi-finite Gorenstein morphism.
+This is true for example because
+$f$ is a flat quasi-finite morphism and $X$ is Gorenstein
+(see Duality for Schemes, Lemma
+\ref{duality-lemma-flat-morphism-from-gorenstein-scheme})
+or because it was shown in the proof of
+Discriminants, Lemma
+\ref{discriminant-lemma-discriminant-quasi-finite-morphism-smooth}
+(which we used above). Thus $\omega_{X/Y}$ is invertible by
+Discriminants, Lemma \ref{discriminant-lemma-gorenstein-quasi-finite}
+and the same remains true after replacing $X$ by opens and after
+performing a base change by some $Y' \to Y$. We will use this
+below without further mention.
+
+\medskip\noindent
+Choose affine opens $U \subset X$ and $V \subset Y$
+such that $x \in U$, $y \in V$, $f(U) \subset V$, and $x$ is the only
+point of $U$ lying over $y$. Write $U = \Spec(A)$ and $V = \Spec(B)$.
+Then $R \cap U$ is the different of $f|_U : U \to V$.
+By Discriminants, Lemma \ref{discriminant-lemma-base-change-different}
+formation of the different commutes with arbitrary base change
+in our case. By our choice of $U$ and $V$ we have
+$$
+A \otimes_B \kappa(y) =
+\mathcal{O}_{X, x} \otimes_{\mathcal{O}_{Y, y}} \kappa(y) =
+\mathcal{O}_{X, x}/(s^{e_x})
+$$
+where $s \in \mathcal{O}_{X, x}$ is a uniformizer and $e_x$ is the
+ramification index as in the statement of the lemma.
+Let $C = \mathcal{O}_{X, x}/(s^{e_x})$ viewed as a finite algebra
+over $\kappa(y)$. Let $\mathfrak{D}_{C/\kappa(y)}$ be the different
+of $C$ over $\kappa(y)$ in the sense of
+Discriminants, Definition \ref{discriminant-definition-different}.
+It suffices to show: $\mathfrak{D}_{C/\kappa(y)}$
+is nonzero if and only if the extension
+$\mathcal{O}_{Y, y} \subset \mathcal{O}_{X, x}$ is tamely ramified
+and in the tamely ramified case $\mathfrak{D}_{C/\kappa(y)}$
+is equal to the ideal generated by $s^{e_x - 1}$ in $C$.
+Recall that tame ramification means exactly that $\kappa(x)/\kappa(y)$
+is separable and that the characteristic of $\kappa(y)$ does not
+divide $e_x$. On the other hand, the different of $C/\kappa(y)$ is nonzero
+if and only if $\tau_{C/\kappa(y)} \in \omega_{C/\kappa(y)}$ is nonzero.
+Namely, since $\omega_{C/\kappa(y)}$ is an invertible $C$-module
+(as the base change of $\omega_{A/B}$)
+it is free of rank $1$, say with generator $\lambda$. Write
+$\tau_{C/\kappa(y)} = h\lambda$ for some $h \in C$. Then
+$\mathfrak{D}_{C/\kappa(y)} = (h) \subset C$ whence the claim.
+By Discriminants, Lemma \ref{discriminant-lemma-tau-nonzero}
+we have $\tau_{C/\kappa(y)} \not = 0$
+if and only if $\kappa(x)/\kappa(y)$
+is separable and $e_x$ is prime to the characteristic.
+Finally, even if $\tau_{C/\kappa(y)}$ is nonzero, then
+it is still the case that $s \tau_{C/\kappa(y)} = 0$
+because $s\tau_{C/\kappa(y)} : C \to \kappa(y)$
+sends $c$ to the trace of the nilpotent operator $sc$ which is zero.
+Hence $sh = 0$, hence $h \in (s^{e_x - 1})$ which proves
+that $\mathfrak{D}_{C/\kappa(y)} \subset (s^{e_x - 1})$ always.
+Since $(s^{e_x - 1}) \subset C$ is the smallest nonzero ideal,
+we have proved the final assertion.
+\end{proof}
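+
+\noindent
+As an illustration of the formula (this example is not taken from the
+references above), let $k$ be algebraically closed with
+$\text{char}(k) \neq 2$ and let $X$ be the smooth projective
+hyperelliptic curve with affine equation $y^2 = h(x)$ for a squarefree
+polynomial $h$ of degree $2g + 2$, mapped to $Y = \mathbf{P}^1_k$ by
+$(x, y) \mapsto x$. Here $\deg(f) = 2$ and $f$ is ramified exactly at
+the $2g + 2$ roots of $h$ (one checks there is no ramification over
+$x = \infty$), each with $e_x = 2$ tame, so $d_x = 1$.
+The formula of Lemma \ref{lemma-rhe} then reads
+$$
+2g_X - 2 = (2 \cdot 0 - 2) \cdot 2 + (2g + 2) = 2g - 2
+$$
+confirming that $X$ has genus $g$.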
+
+
+
+
+
+
+\section{Inseparable maps}
+\label{section-inseparable}
+
+\noindent
+Some remarks on the behaviour of the genus under inseparable maps.
+
+\begin{lemma}
+\label{lemma-dominated-by-smooth}
+Let $k$ be a field. Let $f : X \to Y$ be a surjective morphism
+of curves over $k$. If $X$ is smooth over $k$ and
+$Y$ is normal, then $Y$ is smooth over $k$.
+\end{lemma}
+
+\begin{proof}
+Let $y \in Y$. Pick $x \in X$ mapping to $y$.
+By Varieties, Lemma \ref{varieties-lemma-flat-under-smooth}
+it suffices to show that $f$ is flat at $x$.
+This follows from Lemma \ref{lemma-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-purely-inseparable}
+Let $k$ be a field of characteristic $p > 0$. Let $f : X \to Y$ be a
+nonconstant morphism of proper nonsingular curves over $k$.
+If the extension $k(X)/k(Y)$ of function fields
+is purely inseparable, then there exists a factorization
+$$
+X = X_0 \to X_1 \to \ldots \to X_n = Y
+$$
+such that each $X_i$ is a proper nonsingular curve
+and $X_i \to X_{i + 1}$ is a degree $p$
+morphism with $k(X_{i + 1}) \subset k(X_i)$
+inseparable.
+\end{lemma}
+
+\begin{proof}
+This follows from Theorem \ref{theorem-curves-rational-maps}
+and the fact that a finite purely inseparable extension of fields
+can always be gotten as a sequence of (inseparable) extensions of degree $p$,
+see Fields, Lemma \ref{fields-lemma-finite-purely-inseparable}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inseparable-deg-p-smooth}
+Let $k$ be a field of characteristic $p > 0$. Let $f : X \to Y$ be a
+nonconstant morphism of proper nonsingular curves over $k$.
+If $X$ is smooth and $k(Y) \subset k(X)$ is inseparable of degree $p$,
+then there is a unique isomorphism $Y = X^{(p)}$ such that
+$f$ is $F_{X/k}$.
+\end{lemma}
+
+\begin{proof}
+The relative Frobenius morphism $F_{X/k} : X \to X^{(p)}$
+is constructed in Varieties, Section \ref{varieties-section-frobenius}.
+Observe that $X^{(p)}$ is a smooth proper curve over $k$
+as a base change of $X$. The morphism $F_{X/k}$ has degree $p$ by
+Varieties, Lemma \ref{varieties-lemma-inseparable-deg-p-smooth}.
+Thus $k(X^{(p)})$ and $k(Y)$ are both subfields of $k(X)$
+with $[k(X) : k(Y)] = [k(X) : k(X^{(p)})] = p$. To prove the lemma
+it suffices to show that $k(Y) = k(X^{(p)})$ inside $k(X)$. See
+Theorem \ref{theorem-curves-rational-maps}.
+
+\medskip\noindent
+Write $K = k(X)$. Consider the map $\text{d} : K \to \Omega_{K/k}$.
+It follows from Lemma \ref{lemma-generically-etale}
+that $k(Y)$ is contained in the
+kernel of $\text{d}$. By
+Varieties, Lemma \ref{varieties-lemma-relative-frobenius-omega}
+we see that $k(X^{(p)})$ is in the kernel of $\text{d}$.
+Since $X$ is a smooth curve we know that $\Omega_{K/k}$
+is a vector space of dimension $1$ over $K$.
+Then More on Algebra, Lemma \ref{more-algebra-lemma-p-basis}
+implies that $\Ker(\text{d}) = kK^p$ and
+that $[K : kK^p] = p$.
+Thus $k(Y) = kK^p = k(X^{(p)})$ for reasons of degree.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-purely-inseparable-smooth}
+Let $k$ be a field of characteristic $p > 0$. Let $f : X \to Y$ be a
+nonconstant morphism of proper nonsingular curves over $k$.
+If $X$ is smooth and $k(Y) \subset k(X)$ is purely inseparable,
+then there is a unique $n \geq 0$ and a unique isomorphism $Y = X^{(p^n)}$
+such that $f$ is the $n$-fold relative Frobenius of $X/k$.
+\end{lemma}
+
+\begin{proof}
+The $n$-fold relative Frobenius of $X/k$ is defined in
+Varieties, Remark \ref{varieties-remark-n-fold-relative-frobenius}.
+The lemma follows by combining Lemmas \ref{lemma-inseparable-deg-p-smooth}
+and \ref{lemma-purely-inseparable}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-purely-inseparable-smooth-genus}
+Let $k$ be a field of characteristic $p > 0$. Let $f : X \to Y$ be a
+nonconstant morphism of proper nonsingular curves over $k$.
+Assume
+\begin{enumerate}
+\item $X$ is smooth,
+\item $H^0(X, \mathcal{O}_X) = k$,
+\item $k(X)/k(Y)$ is purely inseparable.
+\end{enumerate}
+Then $Y$ is smooth, $H^0(Y, \mathcal{O}_Y) = k$, and the genus of $Y$
+is equal to the genus of $X$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-purely-inseparable-smooth}
+we see that $Y = X^{(p^n)}$ is the base change of
+$X$ by $F_{\Spec(k)}^n$. Thus $Y$ is smooth and the result on the
+cohomology and genus follows from
+Lemma \ref{lemma-genus-base-change}.
+\end{proof}
+
+\begin{example}
+\label{example-inseparable}
+This example will show that the genus can change under a
+purely inseparable morphism of nonsingular projective curves.
+Let $k$ be a field of characteristic $3$. Assume there exists
+an element $a \in k$ which is not a $3$rd power. For example
+$k = \mathbf{F}_3(a)$ would work. Let $X$ be the plane curve
+with homogeneous equation
+$$
+F = T_1^2T_0 - T_2^3 + aT_0^3
+$$
+as in Section \ref{section-plane-curves}.
+On the affine piece $D_+(T_0)$ using coordinates $x = T_1/T_0$
+and $y = T_2/T_0$ we obtain $x^2 - y^3 + a = 0$ which defines
+a nonsingular affine curve. Moreover, the point at infinity
+$(0 : 1: 0)$ is a smooth point. Hence $X$ is a nonsingular projective
+curve of genus $1$ (Lemma \ref{lemma-genus-plane-curve}).
+On the other hand, consider the morphism
+$f : X \to \mathbf{P}^1_k$ which on $D_+(T_0)$ sends $(x, y)$ to
+$x \in \mathbf{A}^1_k \subset \mathbf{P}^1_k$.
+Since $y^3 = x^2 + a$ in $k(X)$ and $x^2 + a$ is not a cube in $k(x)$,
+the extension $k(X)/k(x)$ is purely inseparable of degree $3$.
+Then $f$ is a morphism of proper nonsingular curves over $k$
+inducing an inseparable function field extension of degree $p = 3$
+but the genus of $X$ is $1$ and the genus of $\mathbf{P}^1_k$ is $0$.
+\end{example}
+
+\begin{proposition}
+\label{proposition-unwind-morphism-smooth}
+Let $k$ be a field of characteristic $p > 0$. Let $f : X \to Y$ be a
+nonconstant morphism of proper smooth curves over $k$.
+Then we can factor $f$ as
+$$
+X \longrightarrow X^{(p^n)} \longrightarrow Y
+$$
+where $X^{(p^n)} \to Y$ is a nonconstant morphism of proper smooth curves
+inducing a separable field extension $k(X^{(p^n)})/k(Y)$, we have
+$$
+X^{(p^n)} = X \times_{\Spec(k), F_{\Spec(k)}^n} \Spec(k),
+$$
+and $X \to X^{(p^n)}$ is the $n$-fold relative Frobenius of $X/k$.
+\end{proposition}
+
+\begin{proof}
+By Fields, Lemma \ref{fields-lemma-separable-first}
+there is a subextension $k(X)/E/k(Y)$ such that
+$k(X)/E$ is purely inseparable and $E/k(Y)$ is separable.
+By Theorem \ref{theorem-curves-rational-maps}
+this corresponds to a factorization
+$X \to Z \to Y$ of $f$ with $Z$ a nonsingular proper curve.
+Apply Lemma \ref{lemma-purely-inseparable-smooth}
+to the morphism $X \to Z$ to conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inseparable-linear-system}
+Let $k$ be a field of characteristic $p > 0$. Let $X$ be a smooth proper
+curve over $k$. Let $(\mathcal{L}, V)$ be a $\mathfrak g^r_d$ with $r \geq 1$.
+Then one of the following two is true
+\begin{enumerate}
+\item there exists a $\mathfrak g^1_d$ whose corresponding morphism
+$X \to \mathbf{P}^1_k$ (Lemma \ref{lemma-linear-series})
+is generically \'etale (i.e., is as in Lemma \ref{lemma-generically-etale}), or
+\item there exists a $\mathfrak g^r_{d'}$ on $X^{(p)}$ where
+$d' \leq d/p$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Pick two $k$-linearly independent elements $s, t \in V$.
+Then $f = s/t$ is the rational function defining the morphism
+$X \to \mathbf{P}^1_k$ corresponding to the linear series
+$(\mathcal{L}, ks + kt)$. If this morphism is not
+generically \'etale, then $f \in k(X^{(p)})$ by
+Proposition \ref{proposition-unwind-morphism-smooth}.
+Now choose a basis $s_0, \ldots, s_r$ of $V$ and let
+$\mathcal{L}' \subset \mathcal{L}$ be the invertible sheaf
+generated by $s_0, \ldots, s_r$. Set $f_i = s_i/s_0$ in $k(X)$.
+If for each pair $(s_0, s_i)$ we have $f_i \in k(X^{(p)})$, then
+the morphism
+$$
+\varphi = \varphi_{(\mathcal{L}', (s_0, \ldots, s_r))} :
+X
+\longrightarrow
+\mathbf{P}^r_k = \text{Proj}(k[T_0, \ldots, T_r])
+$$
+factors through $X^{(p)}$ as this is true over the affine open
+$D_+(T_0)$ and we can extend the morphism over the affine part
+to the whole of the smooth curve $X^{(p)}$ by
+Lemma \ref{lemma-extend-over-normal-curve}.
+Introducing notation, say we have the factorization
+$$
+X \xrightarrow{F_{X/k}} X^{(p)} \xrightarrow{\psi} \mathbf{P}^r_k
+$$
+of $\varphi$. Then $\mathcal{N} = \psi^*\mathcal{O}_{\mathbf{P}^r_k}(1)$
+is an invertible $\mathcal{O}_{X^{(p)}}$-module with
+$\mathcal{L}' = F_{X/k}^*\mathcal{N}$ and with
+$\psi^*T_0, \ldots, \psi^*T_r$ $k$-linearly independent
+(as they pullback to $s_0, \ldots, s_r$ on $X$).
+Finally, we have
+$$
+d = \deg(\mathcal{L}) \geq \deg(\mathcal{L}') =
+\deg(F_{X/k}) \deg(\mathcal{N}) = p \deg(\mathcal{N})
+$$
+as desired. Here we used Varieties, Lemmas
+\ref{varieties-lemma-check-invertible-sheaf-trivial},
+\ref{varieties-lemma-degree-pullback-map-proper-curves}, and
+\ref{varieties-lemma-inseparable-deg-p-smooth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-point-over-separable-extension}
+Let $k$ be a field. Let $X$ be a smooth proper curve over $k$
+with $H^0(X, \mathcal{O}_X) = k$ and genus $g \geq 2$.
+Then there exists a closed point $x \in X$ with
+$\kappa(x)/k$ separable of degree $\leq 2g - 2$.
+\end{lemma}
+
+\begin{proof}
+Set $\omega = \Omega_{X/k}$. By
+Lemma \ref{lemma-genus-smooth} this has degree $2g - 2$
+and has $g$ global sections. Thus we have a $\mathfrak g^{g - 1}_{2g - 2}$.
+By the trivial Lemma \ref{lemma-linear-series-trivial-existence}
+there exists a $\mathfrak g^1_{2g - 2}$
+and by Lemma \ref{lemma-g1d} we obtain a morphism
+$$
+\varphi : X \longrightarrow \mathbf{P}^1_k
+$$
+of some degree $d \leq 2g - 2$. Since $\varphi$ is flat
+(Lemma \ref{lemma-flat}) and finite
+(Lemma \ref{lemma-finite})
+it is finite locally free of degree $d$
+(Morphisms, Lemma \ref{morphisms-lemma-finite-flat}).
+Pick any rational point $t \in \mathbf{P}^1_k$
+and any point $x \in X$ with $\varphi(x) = t$.
+Then
+$$
+d \geq [\kappa(x) : \kappa(t)] = [\kappa(x) : k]
+$$
+for example by
+Morphisms, Lemmas \ref{morphisms-lemma-finite-locally-free-universally-bounded}
+and \ref{morphisms-lemma-characterize-universally-bounded}.
+Thus if $k$ is perfect (for example has characteristic zero
+or is finite) then the lemma is proved. Thus we reduce to the
+case discussed in the next paragraph.
+
+\medskip\noindent
+Assume that $k$ is an infinite field of characteristic $p > 0$.
+As above we will use that $X$ has a $\mathfrak g^{g - 1}_{2g - 2}$.
+The smooth proper curve $X^{(p)}$ has the same genus as $X$.
+Hence its genus is $> 0$. We conclude that $X^{(p)}$ does not have a
+$\mathfrak g^{g - 1}_d$ for any $d \leq g - 1$ by
+Lemma \ref{lemma-grd-inequalities}.
+Applying Lemma \ref{lemma-inseparable-linear-system}
+to our $\mathfrak g^{g - 1}_{2g - 2}$ (and noting that $(2g - 2)/p \leq g - 1$)
+we conclude that possibility (2) does not occur. Hence we obtain a morphism
+$$
+\varphi : X \longrightarrow \mathbf{P}^1_k
+$$
+which is generically \'etale (in the sense of the lemma)
+and has degree $\leq 2g - 2$. Let $U \subset X$ be the nonempty
+open subscheme where $\varphi$ is \'etale. Then
+$\varphi(U) \subset \mathbf{P}^1_k$ is a nonempty Zariski open
+and we can pick a $k$-rational point $t \in \varphi(U)$ as $k$ is infinite.
+Let $u \in U$ be a point with $\varphi(u) = t$.
+Then $\kappa(u)/\kappa(t)$ is separable
+(Morphisms, Lemma \ref{morphisms-lemma-etale-over-field}),
+$\kappa(t) = k$, and $[\kappa(u) : k] \leq 2g - 2$ as before.
+\end{proof}
+
+\noindent
+The following lemma does not really belong in this section
+but we don't know a good place for it elsewhere.
+
+\begin{lemma}
+\label{lemma-ramification-to-algebraic-closure}
+Let $X$ be a smooth curve over a field $k$. Let
+$\overline{x} \in X_{\overline{k}}$ be a closed
+point with image $x \in X$. The ramification index of
+$\mathcal{O}_{X, x} \subset \mathcal{O}_{X_{\overline{k}}, \overline{x}}$
+is the inseparable degree of $\kappa(x)/k$.
+\end{lemma}
+
+\begin{proof}
+After shrinking $X$ we may assume there is an \'etale morphism
+$\pi : X \to \mathbf{A}^1_k$, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-etale-over-affine-space}.
+Then we can consider the diagram of local rings
+$$
+\xymatrix{
+\mathcal{O}_{X_{\overline{k}}, \overline{x}} &
+\mathcal{O}_{\mathbf{A}^1_{\overline{k}}, \pi(\overline{x})} \ar[l] \\
+\mathcal{O}_{X, x} \ar[u] &
+\mathcal{O}_{\mathbf{A}^1_k, \pi(x)} \ar[l] \ar[u]
+}
+$$
+The horizontal arrows have ramification index $1$ as they correspond to
+\'etale morphisms. Moreover, the extension $\kappa(x)/\kappa(\pi(x))$ is
+separable hence $\kappa(x)$ and $\kappa(\pi(x))$ have the same
+inseparable degree over $k$.
+By multiplicativity of ramification indices it suffices to
+prove the result when $x$ is a point of the affine line.
+
+\medskip\noindent
+Assume $X = \mathbf{A}^1_k$. In this case, the local ring of $X$ at $x$
+looks like
+$$
+\mathcal{O}_{X, x} = k[t]_{(P)}
+$$
+where $P$ is an irreducible monic polynomial over $k$.
+Then $P(t) = Q(t^q)$ for some power $q$ of the characteristic
+exponent of $k$ and some separable polynomial $Q \in k[t]$, see
+Fields, Lemma \ref{fields-lemma-irreducible-polynomials}.
+Observe that $\kappa(x) = k[t]/(P)$ has inseparable degree $q$
+over $k$. On the other hand, over $\overline{k}$ we can factor
+$Q(t) = \prod (t - \alpha_i)$ with $\alpha_i$ pairwise distinct.
+Write $\alpha_i = \beta_i^q$ for some unique $\beta_i \in \overline{k}$.
+Then our point $\overline{x}$ corresponds to one of the $\beta_i$
+and we conclude because the ramification index of
+$$
+k[t]_{(P)} \longrightarrow \overline{k}[t]_{(t - \beta_i)}
+$$
+is indeed equal to $q$ as the uniformizer $P$ maps to
+$(t - \beta_i)^q$ times a unit.
+\end{proof}
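+
+\noindent
+A concrete instance of Lemma \ref{lemma-ramification-to-algebraic-closure}:
+take $k = \mathbf{F}_p(a)$ with $a$ transcendental, $X = \mathbf{A}^1_k$,
+and $x$ the closed point cut out by $P = t^p - a$, so that
+$\kappa(x) = k[t]/(t^p - a)$ has inseparable degree $p$ over $k$.
+Over $\overline{k}$ we have $P = (t - a^{1/p})^p$, and indeed the
+uniformizer $P$ of $\mathcal{O}_{X, x}$ maps to the $p$th power of a
+uniformizer of $\mathcal{O}_{X_{\overline{k}}, \overline{x}}$, giving
+ramification index $p$.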
+
+
+
+
+
+\section{Pushouts}
+\label{section-pushouts}
+
+\noindent
+Let $k$ be a field. Consider a solid diagram
+$$
+\xymatrix{
+Z' \ar[d] \ar[r]_{i'} & X' \ar@{..>}[d]^a \\
+Z \ar@{..>}[r]^i & X
+}
+$$
+of schemes over $k$ satisfying
+\begin{enumerate}
+\item[(a)] $X'$ is separated of finite type over $k$ of dimension $\leq 1$,
+\item[(b)] $i' : Z' \to X'$ is a closed immersion,
+\item[(c)] $Z'$ and $Z$ are finite over $\Spec(k)$, and
+\item[(d)] $Z' \to Z$ is surjective.
+\end{enumerate}
+In this situation every finite set of points of $X'$ is contained
+in an affine open, see Varieties, Proposition
+\ref{varieties-proposition-finite-set-of-points-of-codim-1-in-affine}.
+Thus the assumptions of
+More on Morphisms, Proposition
+\ref{more-morphisms-proposition-pushout-along-closed-immersion-and-integral}
+are satisfied and we obtain the following:
+\begin{enumerate}
+\item the pushout $X = Z \amalg_{Z'} X'$ exists in the category of schemes,
+\item $i : Z \to X$ is a closed immersion,
+\item $a : X' \to X$ is integral surjective,
+\item $X \to \Spec(k)$ is separated by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-pushout-separated},
+\item $X \to \Spec(k)$ is of finite type by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-pushout-finite-type},
+\item thus $a : X' \to X$ is finite by
+Morphisms, Lemmas \ref{morphisms-lemma-finite-integral} and
+\ref{morphisms-lemma-permanence-finite-type},
+\item if $X' \to \Spec(k)$ is proper, then $X \to \Spec(k)$ is proper
+by Morphisms, Lemma \ref{morphisms-lemma-image-proper-is-proper}.
+\end{enumerate}
+
+\noindent
+The following lemma can be generalized significantly.
+
+\begin{lemma}
+\label{lemma-complete-local-ring-pushout}
+In the situation above, let $Z = \Spec(k')$ where $k'$ is a field and
+$Z' = \Spec(k'_1 \times \ldots \times k'_n)$ with $k'_i/k'$
+finite extensions of fields. Let $x \in X$ be the image of $Z \to X$
+and $x'_i \in X'$ the image of $\Spec(k'_i) \to X'$.
+Then we have a fibre product diagram
+$$
+\xymatrix{
+\prod\nolimits_{i = 1, \ldots, n} k'_i &
+\prod\nolimits_{i = 1, \ldots, n} \mathcal{O}_{X', x'_i}^\wedge \ar[l] \\
+k' \ar[u] &
+\mathcal{O}_{X, x}^\wedge \ar[u] \ar[l]
+}
+$$
+where the horizontal arrows are given by the maps to the residue fields.
+\end{lemma}
+
+\begin{proof}
+Choose an affine open neighbourhood $\Spec(A)$ of $x$ in $X$.
+Let $\Spec(A') \subset X'$ be the inverse image. By construction
+we have a fibre product diagram
+$$
+\xymatrix{
+\prod\nolimits_{i = 1, \ldots, n} k'_i &
+A' \ar[l] \\
+k' \ar[u] &
+A \ar[u] \ar[l]
+}
+$$
+Since everything is finite over $A$ we see that the diagram remains
+a fibre product diagram after completion with respect to the
+maximal ideal $\mathfrak m \subset A$ corresponding to $x$
+(Algebra, Lemma \ref{algebra-lemma-completion-flat}).
+Finally, apply Algebra, Lemma
+\ref{algebra-lemma-completion-finite-extension}
+to identify the completion of $A'$.
+\end{proof}
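+
+\noindent
+For example, when glueing two $k$-rational smooth points of a curve
+(so $k' = k$, $n = 2$, $k'_1 = k'_2 = k$, and
+$\mathcal{O}_{X', x'_i}^\wedge \cong k[[t]]$), the fibre product
+diagram of Lemma \ref{lemma-complete-local-ring-pushout} gives
+$$
+\mathcal{O}_{X, x}^\wedge \cong
+\{(f_1, f_2) \in k[[t]] \times k[[t]] \mid f_1(0) = f_2(0)\},
+$$
+i.e., the resulting point $x$ is a node in the sense of
+Definition \ref{definition-multicross} below.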
+
+
+
+
+\section{Glueing and squishing}
+\label{section-glueing-squishing}
+
+\noindent
+Below we will denote by $k[\epsilon]$ the algebra of dual numbers
+over $k$ as defined in
+Varieties, Definition \ref{varieties-definition-dual-numbers}.
+
+\begin{lemma}
+\label{lemma-no-in-between-over-k}
+Let $k$ be an algebraically closed field. Let $k \subset A$ be a ring
+extension. If $A$ has exactly two $k$-subalgebras, then
+either $A = k \times k$ or $A = k[\epsilon]$.
+\end{lemma}
+
+\begin{proof}
+The assumption means $k \not = A$ and any subring $k \subset C \subset A$
+is equal to either $k$ or $A$. Let $t \in A$, $t \not \in k$.
+Then $A$ is generated by $t$ over $k$. Hence $A = k[x]/I$ for some
+ideal $I$. If $I = (0)$, then we have the subalgebra $k[x^2]$
+which is not allowed. Otherwise $I$ is generated by a monic polynomial $P$.
+Write $P = \prod_{i = 1}^d (x - a_i)$, which is possible as $k$ is
+algebraically closed. If $d > 2$, then the subalgebra
+generated by $(t - a_1)(t - a_2)$ gives a contradiction.
+Thus $d = 2$. If $a_1 \not = a_2$, then $A = k \times k$;
+if $a_1 = a_2$, then $A = k[\epsilon]$.
+\end{proof}
+
+\begin{example}[Glueing points]
+\label{example-glue-points}
+Let $k$ be an algebraically closed field. Let $f : X' \to X$
+be a morphism of algebraic $k$-schemes. We say $X$ is
+obtained by glueing $a$ and $b$ in $X'$ if the following are true:
+\begin{enumerate}
+\item $a, b \in X'(k)$ are distinct points which map to the same
+point $x \in X(k)$,
+\item $f$ is finite and
+$f^{-1}(X \setminus \{x\}) \to X \setminus \{x\}$ is an isomorphism,
+\item there is a short exact sequence
+$$
+0 \to \mathcal{O}_X \to f_*\mathcal{O}_{X'} \xrightarrow{a - b} x_*k \to 0
+$$
+where the arrow on the right sends a local section $h$ of $f_*\mathcal{O}_{X'}$
+to the difference $h(a) - h(b) \in k$.
+\end{enumerate}
+If this is the case, then there also is a short exact sequence
+$$
+0 \to \mathcal{O}_X^* \to f_*\mathcal{O}_{X'}^*
+\xrightarrow{ab^{-1}} x_*k^* \to 0
+$$
+where the arrow on the right sends a local section $h$ of $f_*\mathcal{O}_{X'}^*$
+to the multiplicative difference $h(a)h(b)^{-1} \in k^*$.
+\end{example}
+
+\begin{example}[Squishing a tangent vector]
+\label{example-squish-tangent-vector}
+Let $k$ be an algebraically closed field. Let $f : X' \to X$
+be a morphism of algebraic $k$-schemes. We say $X$ is
+obtained by squishing the tangent vector $\vartheta$ in $X'$
+if the following are true:
+\begin{enumerate}
+\item $\vartheta : \Spec(k[\epsilon]) \to X'$ is a closed immersion
+over $k$ such that $f \circ \vartheta$ factors through a point $x \in X(k)$,
+\item $f$ is finite and
+$f^{-1}(X \setminus \{x\}) \to X \setminus \{x\}$ is an isomorphism,
+\item there is a short exact sequence
+$$
+0 \to \mathcal{O}_X \to f_*\mathcal{O}_{X'} \xrightarrow{\vartheta} x_*k \to 0
+$$
+where the arrow on the right sends a local section $h$ of $f_*\mathcal{O}_{X'}$
+to the coefficient of $\epsilon$ in $\vartheta^\sharp(h) \in k[\epsilon]$.
+\end{enumerate}
+If this is the case, then there also is a short exact sequence
+$$
+0 \to \mathcal{O}_X^* \to f_*\mathcal{O}_{X'}^*
+\xrightarrow{\vartheta} x_*k \to 0
+$$
+where the arrow on the right sends a local section $h$ of $f_*\mathcal{O}_{X'}^*$
+to $\text{d}\log(\vartheta^\sharp(h))$ where
+$\text{d}\log : k[\epsilon]^* \to k$
+is the homomorphism of abelian groups sending $a + b\epsilon$ to $b/a \in k$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-factor-almost-isomorphism}
+Let $k$ be an algebraically closed field. Let $f : X' \to X$ be a
+finite morphism of algebraic $k$-schemes such that
+$\mathcal{O}_X \subset f_*\mathcal{O}_{X'}$ and such that $f$ is an
+isomorphism away from a finite set of points. Then there is a factorization
+$$
+X' = X_n \to X_{n - 1} \to \ldots \to X_1 \to X_0 = X
+$$
+such that each $X_i \to X_{i - 1}$ is either the glueing of
+two points or the squishing of a tangent vector
+(see Examples \ref{example-glue-points} and
+\ref{example-squish-tangent-vector}).
+\end{lemma}
+
+\begin{proof}
+Let $U \subset X$ be the maximal open set over which $f$ is an isomorphism.
+Then $X \setminus U = \{x_1, \ldots, x_n\}$ with $x_i \in X(k)$.
+We will consider factorizations $X' \to Y \to X$ of $f$ such that
+both morphisms are finite and
+$$
+\mathcal{O}_X \subset g_*\mathcal{O}_Y \subset f_*\mathcal{O}_{X'}
+$$
+where $g : Y \to X$ is the given morphism. By assumption
+$\mathcal{O}_{X, x} \to (f_*\mathcal{O}_{X'})_x$ is an isomorphism
+unless $x = x_i$ for some $i$. Hence the cokernel
+$$
+f_*\mathcal{O}_{X'}/\mathcal{O}_X = \bigoplus \mathcal{Q}_i
+$$
+is a direct sum of skyscraper sheaves $\mathcal{Q}_i$ supported at
+$x_1, \ldots, x_n$.
+Because the displayed quotient is a coherent $\mathcal{O}_X$-module,
+we conclude that $\mathcal{Q}_i$ has finite length over
+$\mathcal{O}_{X, x_i}$. Hence we can argue
+by induction on the sum of these lengths, i.e., the length of
+the whole cokernel.
+
+\medskip\noindent
+If $n > 1$, then we can define an $\mathcal{O}_X$-subalgebra
+$\mathcal{A} \subset f_*\mathcal{O}_{X'}$ by taking the inverse
+image of $\mathcal{Q}_1$. This will give a nontrivial factorization
+and we win by induction.
+
+\medskip\noindent
+Assume $n = 1$. We abbreviate $x = x_1$. Consider the finite
+$k$-algebra extension
+$$
+A = \mathcal{O}_{X, x} \subset (f_*\mathcal{O}_{X'})_x = B
+$$
+Note that $\mathcal{Q} = \mathcal{Q}_1$ is the skyscraper sheaf
+with value $B/A$.
+We have a $k$-subalgebra $A \subset A + \mathfrak m_A B \subset B$.
+If both inclusions are strict, then we obtain a nontrivial
+factorization and we win by induction as above.
+If $A + \mathfrak m_A B = B$, then $A = B$ by Nakayama's lemma, hence
+$f$ is an isomorphism and there is nothing to prove.
+We conclude that we may assume $A + \mathfrak m_A B = A$, i.e.,
+$\mathfrak m_A B \subset A$.
+Set $C = B/\mathfrak m_A B$. If $C$ has more than $2$
+$k$-subalgebras, then we obtain a subalgebra between $A$
+and $B$ by taking the inverse image in $B$. Thus we may
+assume $C$ has exactly $2$ $k$-subalgebras. Thus $C = k \times k$
+or $C = k[\epsilon]$ by Lemma \ref{lemma-no-in-between-over-k}.
+In this case $f$ is correspondingly the glueing of two points or the
+squishing of a tangent vector.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-points}
+Let $k$ be an algebraically closed field. If $f : X' \to X$ is the
+glueing of two points $a, b$ as in Example \ref{example-glue-points}, then
+there is an exact sequence
+$$
+k^* \to \Pic(X) \to \Pic(X') \to 0
+$$
+The first map is zero if $a$ and $b$ are on different
+connected components of $X'$ and injective
+if $X'$ is proper and $a$ and $b$ are on the same connected component of $X'$.
+\end{lemma}
+
+\begin{proof}
+The map $\Pic(X) \to \Pic(X')$ is surjective
+by Varieties, Lemma \ref{varieties-lemma-surjective-pic-birational-finite}.
+Using the short exact sequence
+$$
+0 \to \mathcal{O}_X^* \to f_*\mathcal{O}_{X'}^*
+\xrightarrow{ab^{-1}} x_*k^* \to 0
+$$
+we obtain
+$$
+H^0(X', \mathcal{O}_{X'}^*) \xrightarrow{ab^{-1}} k^* \to
+H^1(X, \mathcal{O}_X^*) \to H^1(X, f_*\mathcal{O}_{X'}^*)
+$$
+We have $H^1(X, f_*\mathcal{O}_{X'}^*) \subset H^1(X', \mathcal{O}_{X'}^*)$
+(for example by the Leray spectral sequence, see
+Cohomology, Lemma \ref{cohomology-lemma-Leray}).
+Hence the kernel of $\Pic(X) \to \Pic(X')$ is the
+cokernel of $ab^{-1} : H^0(X', \mathcal{O}_{X'}^*) \to k^*$.
+If $a$ and $b$ are on different connected components of $X'$,
+then $ab^{-1}$ is surjective.
+Because $k$ is algebraically closed any regular function on a
+reduced connected proper scheme over $k$ comes from an element of $k$, see
+Varieties, Lemma
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections}.
+Thus the map $ab^{-1}$ is trivial if $X'$ is proper and $a$ and $b$ are on
+the same connected component.
+\end{proof}
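+
+\noindent
+For example, if $X$ is the nodal plane cubic obtained by glueing
+the points $0, \infty \in \mathbf{P}^1_k$, then
+Lemma \ref{lemma-glue-points} gives a short exact sequence
+$$
+0 \to k^* \to \Pic(X) \to \Pic(\mathbf{P}^1_k) = \mathbf{Z} \to 0
+$$
+recovering the classical fact that the group of degree zero
+invertible modules on the nodal cubic is $k^* = \mathbf{G}_m(k)$.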
+
+\begin{lemma}
+\label{lemma-squish-tangent-vector}
+Let $k$ be an algebraically closed field. If $f : X' \to X$ is the
+squishing of a tangent vector $\vartheta$ as in
+Example \ref{example-squish-tangent-vector}, then
+there is an exact sequence
+$$
+(k, +) \to \Pic(X) \to \Pic(X') \to 0
+$$
+and the first map is injective if $X'$ is proper and reduced.
+\end{lemma}
+
+\begin{proof}
+The map $\Pic(X) \to \Pic(X')$ is surjective
+by Varieties, Lemma \ref{varieties-lemma-surjective-pic-birational-finite}.
+Using the short exact sequence
+$$
+0 \to \mathcal{O}_X^* \to f_*\mathcal{O}_{X'}^*
+\xrightarrow{\vartheta} x_*k \to 0
+$$
+of Example \ref{example-squish-tangent-vector} we obtain
+$$
+H^0(X', \mathcal{O}_{X'}^*) \xrightarrow{\vartheta} k \to
+H^1(X, \mathcal{O}_X^*) \to H^1(X, f_*\mathcal{O}_{X'}^*)
+$$
+We have $H^1(X, f_*\mathcal{O}_{X'}^*) \subset H^1(X', \mathcal{O}_{X'}^*)$
+(for example by the Leray spectral sequence, see
+Cohomology, Lemma \ref{cohomology-lemma-Leray}).
+Hence the kernel of $\Pic(X) \to \Pic(X')$ is the
+cokernel of the map $\vartheta : H^0(X', \mathcal{O}_{X'}^*) \to k$.
+Because $k$ is algebraically closed any regular function on a
+reduced connected proper scheme over $k$ comes from an element of $k$, see
+Varieties, Lemma
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections}.
+This proves the final statement of the lemma.
+\end{proof}
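+
+\noindent
+For example, if $X$ is the cuspidal plane cubic obtained by squishing
+a tangent vector at $0 \in \mathbf{P}^1_k$, then
+Lemma \ref{lemma-squish-tangent-vector} gives a short exact sequence
+$$
+0 \to (k, +) \to \Pic(X) \to \Pic(\mathbf{P}^1_k) = \mathbf{Z} \to 0
+$$
+so that the degree zero part of $\Pic(X)$ is the additive group
+$\mathbf{G}_a(k)$.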
+
+
+\section{Multicross and nodal singularities}
+\label{section-multicross}
+
+\noindent
+In this section we discuss the simplest possible curve singularities.
+
+\medskip\noindent
+Let $k$ be a field. Consider the complete local $k$-algebra
+\begin{equation}
+\label{equation-multicross}
+A = \{(f_1, \ldots, f_n) \in k[[t]] \times \ldots \times k[[t]] \mid
+f_1(0) = \ldots = f_n(0)\}
+\end{equation}
+In the language introduced in
+Varieties, Definition \ref{varieties-definition-wedge}
+we see that $A$ is a wedge of $n$ copies of the power series
+ring in $1$ variable over $k$. Observe that
+$k[[t]] \times \ldots \times k[[t]]$
+is the integral closure of $A$ in its total ring of fractions.
+Hence the $\delta$-invariant of $A$ is $n - 1$.
+There is an isomorphism
+$$
+k[[x_1, \ldots, x_n]]/(\{x_ix_j\}_{i \not = j}) \longrightarrow A
+$$
+obtained by sending $x_i$ to $(0, \ldots, 0, t, 0, \ldots, 0)$
+in $A$. It follows that $\dim(A) = 1$ and
+$\dim_k \mathfrak m/\mathfrak m^2 = n$.
+In particular, $A$ is regular if and only if $n = 1$.
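+
+\noindent
+For example, when $n = 2$ this isomorphism identifies the node
+$k[[x, y]]/(xy)$ with
+$\{(f_1, f_2) \in k[[t]] \times k[[t]] \mid f_1(0) = f_2(0)\}$
+via $x \mapsto (t, 0)$ and $y \mapsto (0, t)$; indeed
+$(t, 0) \cdot (0, t) = (0, 0)$ and every pair $(f_1, f_2)$ with
+$c = f_1(0) = f_2(0)$ can be written as
+$c + (f_1 - c, 0) + (0, f_2 - c)$, i.e., as a power series in
+the images of $x$ and $y$.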
+
+\begin{lemma}
+\label{lemma-multicross-algebra}
+Let $k$ be a separably closed field. Let $A$ be a $1$-dimensional
+reduced Nagata local $k$-algebra with residue field $k$. Then
+$$
+\delta\text{-invariant }A \geq \text{number of branches of }A - 1
+$$
+If equality holds, then $A^\wedge$ is as in (\ref{equation-multicross}).
+\end{lemma}
+
+\begin{proof}
+Since the residue field of $A$ is separably closed, the number
+of branches of $A$ is equal to the number of geometric branches
+of $A$, see
+More on Algebra, Definition \ref{more-algebra-definition-number-of-branches}.
+The inequality holds by
+Varieties, Lemma \ref{varieties-lemma-delta-number-branches-inequality}.
+Assume equality holds.
+We may replace $A$ by the completion of $A$; this does
+not change the number of branches or the $\delta$-invariant, see
+More on Algebra, Lemma
+\ref{more-algebra-lemma-one-dimensional-number-of-branches}
+and Varieties, Lemma \ref{varieties-lemma-delta-same-after-completion}.
+Then $A$ is strictly henselian, see
+Algebra, Lemma \ref{algebra-lemma-complete-henselian}.
+By Varieties, Lemma \ref{varieties-lemma-delta-number-branches-inequality-sh}
+we see that $A$ is a wedge of complete discrete valuation rings.
+Each of these is isomorphic to $k[[t]]$ by Algebra, Lemma
+\ref{algebra-lemma-regular-complete-containing-coefficient-field}.
+Hence $A$ is as in (\ref{equation-multicross}).
+\end{proof}
+
+\begin{definition}
+\label{definition-multicross}
+Let $k$ be an algebraically closed field. Let $X$ be an algebraic
+$1$-dimensional $k$-scheme. Let $x \in X$ be a closed point.
+We say $x$ defines a {\it multicross singularity} if the completion
+$\mathcal{O}_{X, x}^\wedge$
+is isomorphic to (\ref{equation-multicross}) for some $n \geq 2$.
+We say $x$ is a {\it node}, or an {\it ordinary double point}, or
+{\it defines a nodal singularity} if $n = 2$.
+\end{definition}
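+\medskip\noindent
+A standard example, where we assume the characteristic of $k$ is not $2$:
+the plane curve $X : y^2 = x^2(x + 1)$ has a node at the origin. Namely,
+$1 + x$ has a square root $u \in k[[x]]$ and hence
+$$
+\mathcal{O}_{X, 0}^\wedge = k[[x, y]]/(y^2 - x^2(1 + x)) =
+k[[x, y]]/\big((y - xu)(y + xu)\big) \cong k[[s, t]]/(st)
+$$
+via $s = y - xu$ and $t = y + xu$, which is (\ref{equation-multicross})
+with $n = 2$.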
+
+\noindent
+These singularities are in some sense the simplest kind of singularities
+one can have on a curve over an algebraically closed field.
+
+\begin{lemma}
+\label{lemma-multicross}
+Let $k$ be an algebraically closed field. Let $X$ be a reduced algebraic
+$1$-dimensional $k$-scheme. Let $x \in X$. The following are equivalent
+\begin{enumerate}
+\item $x$ defines a multicross singularity,
+\item the $\delta$-invariant of $X$ at $x$ is the
+number of branches of $X$ at $x$ minus $1$,
+\item there is a sequence of morphisms
+$U_n \to U_{n - 1} \to \ldots \to U_0 = U \subset X$
+where $U$ is an open neighbourhood of $x$, where
+$U_n$ is nonsingular, and where each $U_i \to U_{i - 1}$
+is the glueing of two points as in Example \ref{example-glue-points}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) is Lemma \ref{lemma-multicross-algebra}.
+
+\medskip\noindent
+Assume (3). We will argue by descending induction on $i$ that all singularities
+of $U_i$ are multicross. This is true for $U_n$ as $U_n$ has no singular points.
+If $U_i$ is gotten from $U_{i + 1}$ by glueing $a, b \in U_{i + 1}$
+to a point $c \in U_i$, then we see that
+$$
+\mathcal{O}_{U_i, c}^\wedge \subset
+\mathcal{O}_{U_{i + 1}, a}^\wedge \times \mathcal{O}_{U_{i + 1}, b}^\wedge
+$$
+is the subring of pairs of elements whose residue classes in $k$ agree.
+Thus the number of branches at $c$ is the sum of the number of
+branches at $a$ and $b$, and the $\delta$-invariant at $c$
+is the sum of the $\delta$-invariants at $a$ and $b$ plus $1$
+(because the displayed inclusion has codimension $1$).
+This proves that (2) holds as desired.
+
+\medskip\noindent
+Assume the equivalent conditions (1) and (2). We may choose an open
+$U \subset X$ such that $x$ is the only singular point of $U$.
+Then we apply Lemma \ref{lemma-factor-almost-isomorphism} to
+the normalization morphism
+$$
+U^\nu = U_n \to U_{n - 1} \to \ldots \to U_1 \to U_0 = U
+$$
+All we have to do is show that in none of the steps we are
+squishing a tangent vector. Suppose $i$ is the smallest index such
+that $U_{i + 1} \to U_i$ is the squishing of a tangent
+vector $\theta$ at $u' \in U_{i + 1}$ lying over $u \in U_i$.
+Arguing as above, we see that $u$ is a multicross singularity
+(because the maps $U_i \to \ldots \to U_0$ are glueings of
+pairs of points). But now the number of branches at $u'$ and $u$
+is the same and the $\delta$-invariant of $U_i$ at $u$
+is $1$ bigger than the $\delta$-invariant of $U_{i + 1}$ at $u'$.
+By Lemma \ref{lemma-multicross-algebra}
+this implies that $u$ cannot be a multicross singularity which
+is a contradiction.
+\end{proof}
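+\medskip\noindent
+For example, let $X \subset \mathbf{A}^3_k$ be the union of the three
+coordinate axes, so that
+$$
+\mathcal{O}_{X, 0}^\wedge \cong k[[x, y, z]]/(xy, xz, yz)
+$$
+is (\ref{equation-multicross}) with $n = 3$. In accordance with part (3)
+of the lemma, $X$ is obtained from the disjoint union of three affine lines
+by glueing the origins of two of them to a point and then glueing the
+origin of the third to the resulting point.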
+
+\begin{lemma}
+\label{lemma-multicross-gorenstein-is-nodal}
+Let $k$ be an algebraically closed field. Let $X$ be a reduced algebraic
+$1$-dimensional $k$-scheme. Let $x \in X$ be a multicross singularity
+(Definition \ref{definition-multicross}).
+If $X$ is Gorenstein, then $x$ is a node.
+\end{lemma}
+
+\begin{proof}
+The map $\mathcal{O}_{X, x} \to \mathcal{O}_{X, x}^\wedge$
+is flat and unramified in the sense that
+$\kappa(x) = \mathcal{O}_{X, x}^\wedge/\mathfrak m_x \mathcal{O}_{X, x}^\wedge$.
+(See More on Algebra, Section \ref{more-algebra-section-permanence-completion}.)
+Thus $X$ is Gorenstein implies $\mathcal{O}_{X, x}$ is Gorenstein, implies
+$\mathcal{O}_{X, x}^\wedge$ is Gorenstein by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-flat-under-gorenstein}.
+Thus it suffices to show that the ring $A$ in
+(\ref{equation-multicross}) with $n \geq 2$
+is Gorenstein if and only if $n = 2$.
+
+\medskip\noindent
+If $n = 2$, then $A = k[[x, y]]/(xy)$ is a complete intersection
+and hence Gorenstein. For example this follows from
+Duality for Schemes, Lemma \ref{duality-lemma-gorenstein-lci}
+applied to $k[[x, y]] \to A$ and the fact that the regular
+local ring $k[[x, y]]$ is Gorenstein by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-regular-gorenstein}.
+
+\medskip\noindent
+Assume $n > 2$. If $A$ were Gorenstein, then $A$ would be a
+dualizing complex over $A$
+(Duality for Schemes, Definition \ref{duality-definition-gorenstein}).
+Then $R\Hom(k, A)$ would be equal to $k[m]$ for some $m \in \mathbf{Z}$, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-find-function}.
+It would follow that $\Ext^1_A(k, A) \cong k$
+or $\Ext^1_A(k, A) = 0$ (depending on the value of $m$;
+in fact $m$ has to be $-1$ but it doesn't matter to us here).
+Using the exact sequence
+$$
+0 \to \mathfrak m_A \to A \to k \to 0
+$$
+we find that
+$$
+\Ext^1_A(k, A) = \Hom_A(\mathfrak m_A, A)/A
+$$
+where $A \to \Hom_A(\mathfrak m_A, A)$ is given by
+$a \mapsto (a' \mapsto aa')$. Let $e_i \in \Hom_A(\mathfrak m_A, A)$
+be the element that sends $(f_1, \ldots, f_n) \in \mathfrak m_A$ to
+$(0, \ldots, 0, f_i, 0, \ldots, 0)$. The reader verifies easily
+that $e_1, \ldots, e_{n - 1}$ are $k$-linearly independent in
+$\Hom_A(\mathfrak m_A, A)/A$. Thus
+$\dim_k \Ext^1_A(k, A) \geq n - 1 \geq 2$ which
+finishes the proof.
+(Observe that $e_1 + \ldots + e_n$ is the image of $1$ under the map
+$A \to \Hom_A(\mathfrak m_A, A)$.)
+\end{proof}
+
+
+
+
+\section{Torsion in the Picard group}
+\label{section-torsion-in-pic}
+
+\noindent
+In this section we bound the torsion in the Picard group of a $1$-dimensional
+proper scheme over a field. We will use this in our
+study of semistable reduction for curves.
+
+\medskip\noindent
+There does not seem to be an elementary way to obtain the result of
+Lemma \ref{lemma-torsion-picard-smooth-projective}.
+Analyzing the proof there are two key ingredients:
+(1) there is an abelian variety classifying degree zero invertible sheaves on
+a smooth projective curve and (2) the structure of torsion points on
+an abelian variety can be determined.
+
+\begin{lemma}
+\label{lemma-torsion-picard-smooth-projective}
+Let $k$ be an algebraically closed field.
+Let $X$ be a smooth projective curve of genus $g$ over $k$.
+\begin{enumerate}
+\item If $n \geq 1$ is invertible in $k$, then
+$\Pic(X)[n] \cong (\mathbf{Z}/n\mathbf{Z})^{\oplus 2g}$.
+\item If the characteristic of $k$ is $p > 0$, then there exists
+an integer $0 \leq f \leq g$ such that
+$\Pic(X)[p^m] \cong (\mathbf{Z}/p^m\mathbf{Z})^{\oplus f}$ for
+all $m \geq 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\Pic^0(X) \subset \Pic(X)$
+denote the subgroup of invertible sheaves of degree $0$.
+In other words, there is a short exact sequence
+$$
+0 \to \Pic^0(X) \to \Pic(X) \xrightarrow{\deg} \mathbf{Z} \to 0.
+$$
+The group $\Pic^0(X)$ is the $k$-points of
+the group scheme $\underline{\Picardfunctor}^0_{X/k}$, see
+Picard Schemes of Curves, Lemma \ref{pic-lemma-picard-pieces}.
+The same lemma tells us that $\underline{\Picardfunctor}^0_{X/k}$
+is a $g$-dimensional abelian variety over $k$ as defined in
+Groupoids, Definition \ref{groupoids-definition-abelian-variety}.
+Thus we conclude by the results of
+Groupoids, Proposition \ref{groupoids-proposition-review-abelian-varieties}.
+\end{proof}
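+\medskip\noindent
+For $g = 1$ the lemma recovers the classical description of torsion points
+on an elliptic curve $E$ over $k$: one has $\Pic^0(E) \cong E(k)$ and
+$E(k)[n] \cong (\mathbf{Z}/n\mathbf{Z})^{\oplus 2}$ for $n$ invertible
+in $k$, while in characteristic $p$ the integer $f$ is $1$ or $0$
+according to whether $E$ is ordinary or supersingular.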
+
+\begin{lemma}
+\label{lemma-torsion-picard-becomes-visible}
+Let $k$ be a field. Let $n$ be prime to the characteristic of $k$.
+Let $X$ be a smooth proper curve over $k$ with $H^0(X, \mathcal{O}_X) = k$
+and of genus $g$.
+\begin{enumerate}
+\item If $g = 1$ then there exists a finite separable extension
+$k'/k$ such that $X_{k'}$ has a $k'$-rational point and
+$\Pic(X_{k'})[n] \cong (\mathbf{Z}/n\mathbf{Z})^{\oplus 2}$.
+\item If $g \geq 2$ then there exists a finite separable extension
+$k'/k$ with $[k' : k] \leq (2g - 2)(n^{2g})!$
+such that $X_{k'}$ has a $k'$-rational point and
+$\Pic(X_{k'})[n] \cong (\mathbf{Z}/n\mathbf{Z})^{\oplus 2g}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $g \geq 2$. First we may choose a finite separable extension
+of degree at most $2g - 2$ such that $X$ acquires a rational point, see
+Lemma \ref{lemma-point-over-separable-extension}.
+Thus we may assume $X$ has a $k$-rational point $x \in X(k)$
+but now we have to prove the lemma with
+$[k' : k] \leq (n^{2g})!$.
+Let $k \subset k^{sep} \subset \overline{k}$ be a separable algebraic
+closure inside an algebraic closure.
+By Lemma \ref{lemma-torsion-picard-smooth-projective} we have
+$$
+\Pic(X_{\overline{k}})[n] \cong (\mathbf{Z}/n\mathbf{Z})^{\oplus 2g}
+$$
+By Picard Schemes of Curves, Lemma \ref{pic-lemma-torsion-descends}
+we conclude that
+$$
+\Pic(X_{k^{sep}})[n] \cong (\mathbf{Z}/n\mathbf{Z})^{\oplus 2g}
+$$
+By Picard Schemes of Curves, Lemma \ref{pic-lemma-torsion-descends}
+there is a continuous action
+$$
+\text{Gal}(k^{sep}/k)
+\longrightarrow
+\text{Aut}(\Pic(X_{k^{sep}})[n])
+$$
+and the lemma is true for the fixed field $k'$ of the kernel of this map.
+The kernel is open because the action is continuous which implies
+that $k'/k$ is finite. By Galois theory $\text{Gal}(k'/k)$
+is the image of the displayed arrow. Since the permutation
+group of a set of cardinality $n^{2g}$ has cardinality $(n^{2g})!$
+we conclude by Galois theory that $[k' : k] \leq (n^{2g})!$.
+(Of course this proves the lemma with the bound
+$|\text{GL}_{2g}(\mathbf{Z}/n\mathbf{Z})|$, but all we want
+here is that there is some bound.)
+
+\medskip\noindent
+If the genus is $1$, then there is no upper bound on the degree of a
+finite separable field extension over which $X$ acquires a rational point
+(details omitted). Still, there is such an extension for example by
+Varieties, Lemma \ref{varieties-lemma-smooth-separable-closed-points-dense}.
+The rest of the proof is the same as in the case of $g \geq 2$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-torsion-picard-reduced-proper}
+Let $k$ be an algebraically closed field. Let $X$ be a proper scheme over $k$
+which is reduced, connected, and has dimension $1$. Let $g$ be the genus
+of $X$ and let $g_{geom}$ be the sum of the geometric genera of the
+irreducible components of $X$. For any prime $\ell$ different from
+the characteristic of $k$ we have
+$$
+\dim_{\mathbf{F}_\ell} \Pic(X)[\ell]
+\leq g + g_{geom}
+$$
+and equality holds if and only if all the singularities of $X$
+are multicross.
+\end{proposition}
+
+\begin{proof}
+Let $\nu : X^\nu \to X$ be the normalization
+(Varieties, Lemma \ref{varieties-lemma-prepare-delta-invariant}).
+Choose a factorization
+$$
+X^\nu = X_n \to X_{n - 1} \to \ldots \to X_1 \to X_0 = X
+$$
+as in Lemma \ref{lemma-factor-almost-isomorphism}.
+Let us denote $h^0_i = \dim_k H^0(X_i, \mathcal{O}_{X_i})$
+and $h^1_i = \dim_k H^1(X_i, \mathcal{O}_{X_i})$.
+By Lemmas \ref{lemma-glue-points} and \ref{lemma-squish-tangent-vector}
+for each $n > i \geq 0$ we have
+one of the following three possibilities
+\begin{enumerate}
+\item $X_i$ is obtained by glueing $a, b \in X_{i + 1}$
+which are on different connected components: in this case
+$\Pic(X_i) = \Pic(X_{i + 1})$,
+$h^0_{i + 1} = h^0_i + 1$, $h^1_{i + 1} = h^1_i$,
+\item $X_i$ is obtained by glueing $a, b \in X_{i + 1}$
+which are on the same connected component: in this case
+there is a short exact sequence
+$$
+0 \to k^* \to \Pic(X_i) \to \Pic(X_{i + 1}) \to 0,
+$$
+and $h^0_{i + 1} = h^0_i$, $h^1_{i + 1} = h^1_i - 1$,
+\item $X_i$ is obtained by squishing a tangent vector in $X_{i + 1}$:
+in this case there is a short exact sequence
+$$
+0 \to (k, +) \to \Pic(X_i) \to \Pic(X_{i + 1}) \to 0,
+$$
+and $h^0_{i + 1} = h^0_i$, $h^1_{i + 1} = h^1_i - 1$.
+\end{enumerate}
+To prove the statements on dimensions of cohomology groups of the
+structure sheaf, use the exact sequences in
+Examples \ref{example-glue-points} and \ref{example-squish-tangent-vector}.
+Since $k$ is algebraically closed of characteristic prime to $\ell$
+we see that $(k, +)$ and $k^*$ are $\ell$-divisible and with
+$\ell$-torsion $(k, +)[\ell] = 0$ and $k^*[\ell] \cong \mathbf{F}_\ell$.
+Hence
+$$
+\dim_{\mathbf{F}_\ell} \Pic(X_{i + 1})[\ell] -
+\dim_{\mathbf{F}_\ell}\Pic(X_i)[\ell]
+$$
+is zero, except in case (2) where it is equal to $-1$.
+At the end of this process we get the normalization
+$X^\nu = X_n$ which is a disjoint union of smooth projective
+curves over $k$. Hence we have
+\begin{enumerate}
+\item $h^1_n = g_{geom}$ and
+\item $\dim_{\mathbf{F}_\ell} \Pic(X_n)[\ell] = 2g_{geom}$.
+\end{enumerate}
+The last equality by Lemma \ref{lemma-torsion-picard-smooth-projective}.
+Since $g = h^1_0$ we see that the number of steps of type
+(2) and (3) is at most $h^1_0 - h^1_n = g - g_{geom}$.
+By our computation of the differences in ranks we conclude that
+$$
+\dim_{\mathbf{F}_\ell} \Pic(X)[\ell] \leq
+g - g_{geom} + 2g_{geom} = g + g_{geom}
+$$
+and equality holds if and only if no steps of type (3) occur.
+This indeed means that all singularities of $X$ are multicross
+by Lemma \ref{lemma-multicross}. Conversely, if all the singularities
+are multicross, then Lemma \ref{lemma-multicross} guarantees that
+we can find a sequence $X^\nu = X_n \to \ldots \to X_0 = X$
+as above such that no steps of type (3) occur in the sequence
+and we find equality holds in the proposition (just glue the local sequences
+for each point to find one that works for all singular points of $X$;
+some details omitted).
+\end{proof}
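+\medskip\noindent
+As an illustration, let $X$ be a nodal plane cubic over $k$. Then $g = 1$,
+the normalization is $\mathbf{P}^1_k$ so that $g_{geom} = 0$, and since the
+node is multicross the proposition predicts
+$\dim_{\mathbf{F}_\ell} \Pic(X)[\ell] = 1$. Indeed, here
+$\Pic^0(X) \cong k^*$ and $k^*[\ell] = \mu_\ell(k) \cong \mathbf{F}_\ell$.
+For a cuspidal plane cubic on the other hand $\Pic^0(X) \cong (k, +)$
+has no $\ell$-torsion, and correspondingly the cusp is not a multicross
+singularity.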
+
+
+
+
+
+\section{Genus versus geometric genus}
+\label{section-genus-geometric-genus}
+
+\noindent
+Let $k$ be a field with algebraic closure $\overline{k}$.
+Let $X$ be a proper scheme of dimension $\leq 1$ over $k$.
+We define $g_{geom}(X/k)$ to be the sum of the geometric genera
+of the irreducible components of $X_{\overline{k}}$ which have dimension $1$.
+
+\begin{lemma}
+\label{lemma-bound-geometric-genus}
+Let $k$ be a field. Let $X$ be a proper scheme of dimension $\leq 1$ over $k$.
+Then
+$$
+g_{geom}(X/k) = \sum\nolimits_{C \subset X} g_{geom}(C/k)
+$$
+where the sum is over irreducible components $C \subset X$ of dimension $1$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definition and the fact that an irreducible
+component $\overline{Z}$ of $X_{\overline{k}}$ maps onto an
+irreducible component $Z$ of $X$
+(Varieties, Lemma \ref{varieties-lemma-image-irreducible})
+of the same dimension
+(Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}
+applied to the generic point of $\overline{Z}$).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometric-genus-normalization}
+Let $k$ be a field. Let $X$ be a proper scheme of dimension $\leq 1$ over $k$.
+Then
+\begin{enumerate}
+\item We have $g_{geom}(X/k) = g_{geom}(X_{red}/k)$.
+\item If $X' \to X$ is a birational proper morphism, then
+$g_{geom}(X'/k) = g_{geom}(X/k)$.
+\item If $X^\nu \to X$ is the normalization morphism, then
+$g_{geom}(X^\nu/k) = g_{geom}(X/k)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is immediate from Lemma \ref{lemma-bound-geometric-genus}.
+If $X' \to X$ is proper birational, then it is finite and
+an isomorphism over a dense open (see
+Varieties, Lemmas \ref{varieties-lemma-finite-in-codim-1} and
+\ref{varieties-lemma-modification-normal-iso-over-codimension-1}).
+Hence $X'_{\overline{k}} \to X_{\overline{k}}$ is an isomorphism
+over a dense open. Thus the irreducible components of $X'_{\overline{k}}$
+and $X_{\overline{k}}$ are in bijective correspondence and the
+corresponding components have isomorphic function fields.
+In particular these components have isomorphic nonsingular projective models
+and hence have the same geometric genera.
+This proves (2).
+Part (3) follows from (1) and (2) and the fact that
+$X^\nu \to X_{red}$ is birational
+(Morphisms, Lemma \ref{morphisms-lemma-normalization-birational}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-genus-goes-down}
+Let $k$ be a field. Let $X$ be a proper scheme of dimension
+$\leq 1$ over $k$. Let $f : Y \to X$ be a finite morphism
+such that there exists a dense open $U \subset X$ over
+which $f$ is a closed immersion. Then
+$$
+\dim_k H^1(X, \mathcal{O}_X) \geq \dim_k H^1(Y, \mathcal{O}_Y)
+$$
+\end{lemma}
+
+\begin{proof}
+Consider the exact sequence
+$$
+0 \to \mathcal{G} \to \mathcal{O}_X \to f_*\mathcal{O}_Y \to \mathcal{F} \to 0
+$$
+of coherent sheaves on $X$.
+By assumption $\mathcal{F}$ is supported in finitely many closed points
+and hence has vanishing higher cohomology
+(Varieties, Lemma \ref{varieties-lemma-chi-tensor-finite}).
+On the other hand, we have $H^2(X, \mathcal{G}) = 0$ by
+Cohomology, Proposition \ref{cohomology-proposition-vanishing-Noetherian}.
+It follows formally that the induced map
+$H^1(X, \mathcal{O}_X) \to H^1(X, f_*\mathcal{O}_Y)$
+is surjective. Since $H^1(X, f_*\mathcal{O}_Y) = H^1(Y, \mathcal{O}_Y)$
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-relative-affine-cohomology})
+we conclude the lemma holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-genus-normalization}
+Let $k$ be a field. Let $X$ be a proper scheme of dimension $\leq 1$ over $k$.
+If $X' \to X$ is a birational proper morphism, then
+$$
+\dim_k H^1(X, \mathcal{O}_X) \geq \dim_k H^1(X', \mathcal{O}_{X'})
+$$
+If $X$ is reduced, $H^0(X, \mathcal{O}_X) \to H^0(X', \mathcal{O}_{X'})$
+is surjective, and equality holds, then $X' = X$.
+\end{lemma}
+
+\begin{proof}
+If $f : X' \to X$ is proper birational, then it is finite and
+an isomorphism over a dense open (see
+Varieties, Lemmas \ref{varieties-lemma-finite-in-codim-1} and
+\ref{varieties-lemma-modification-normal-iso-over-codimension-1}).
+Thus the inequality by Lemma \ref{lemma-genus-goes-down}.
+Assume $X$ is reduced. Then $\mathcal{O}_X \to f_*\mathcal{O}_{X'}$
+is injective and we obtain a short exact sequence
+$$
+0 \to \mathcal{O}_X \to f_*\mathcal{O}_{X'} \to \mathcal{F} \to 0
+$$
+Under the assumptions given in the second statement,
+we conclude from the long exact cohomology sequence that
+$H^0(X, \mathcal{F}) = 0$. Then $\mathcal{F} = 0$ because
+$\mathcal{F}$ is generated by global sections
+(Varieties, Lemma \ref{varieties-lemma-chi-tensor-finite})
+and hence $\mathcal{O}_X = f_*\mathcal{O}_{X'}$. Since $f$ is affine
+this implies $X = X'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bound-geometric-genus-curve}
+Let $k$ be a field. Let $C$ be a proper curve over $k$.
+Set $\kappa = H^0(C, \mathcal{O}_C)$. Then
+$$
+[\kappa : k]_s \dim_\kappa H^1(C, \mathcal{O}_C) \geq g_{geom}(C/k)
+$$
+\end{lemma}
+
+\begin{proof}
+Varieties, Lemma \ref{varieties-lemma-regular-functions-proper-variety}
+implies $\kappa$ is a field and a finite extension of $k$.
+By Fields, Lemma \ref{fields-lemma-separable-degree}
+we have $[\kappa : k]_s = |\Mor_k(\kappa, \overline{k})|$
+and hence $\Spec(\kappa \otimes_k \overline{k})$ has
+$[\kappa : k]_s$ points each with residue field $\overline{k}$.
+Thus
+$$
+C_{\overline{k}} =
+\bigcup\nolimits_{t \in \Spec(\kappa \otimes_k \overline{k})} C_t
+$$
+(set theoretic union). Here
+$C_t = C \times_{\Spec(\kappa), t} \Spec(\overline{k})$ where
+we view $t$ as a $k$-algebra map $t : \kappa \to \overline{k}$.
+The conclusion is that $g_{geom}(C/k) = \sum_t g_{geom}(C_t/\overline{k})$
+and the sum is over an index set of size $[\kappa : k]_s$.
+We have
+$$
+H^0(C_t, \mathcal{O}_{C_t}) = \overline{k}
+\quad\text{and}\quad
+\dim_{\overline{k}} H^1(C_t, \mathcal{O}_{C_t}) =
+\dim_\kappa H^1(C, \mathcal{O}_C)
+$$
+by cohomology and base change
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}).
+Observe that the normalization $C_t^\nu$ is the disjoint
+union of the nonsingular projective models of the
+irreducible components of $C_t$
+(Morphisms, Lemma \ref{morphisms-lemma-normalization-in-terms-of-components}).
+Hence $\dim_{\overline{k}} H^1(C_t^\nu, \mathcal{O}_{C_t^\nu})$
+is equal to $g_{geom}(C_t/\overline{k})$.
+By Lemma \ref{lemma-genus-goes-down} we have
+$$
+\dim_{\overline{k}} H^1(C_t, \mathcal{O}_{C_t}) \geq
+\dim_{\overline{k}} H^1(C_t^\nu, \mathcal{O}_{C_t^\nu})
+$$
+and this finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bound-torsion-simple}
+Let $k$ be a field. Let $X$ be a proper scheme of dimension $\leq 1$ over $k$.
+Let $\ell$ be a prime number invertible in $k$. Then
+$$
+\dim_{\mathbf{F}_\ell} \Pic(X)[\ell] \leq
+\dim_k H^1(X, \mathcal{O}_X) + g_{geom}(X/k)
+$$
+where $g_{geom}(X/k)$ is as defined above.
+\end{lemma}
+
+\begin{proof}
+The map $\Pic(X) \to \Pic(X_{\overline{k}})$
+is injective by Varieties, Lemma \ref{varieties-lemma-change-fields-pic}.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}
+$\dim_k H^1(X, \mathcal{O}_X)$ equals
+$\dim_{\overline{k}} H^1(X_{\overline{k}}, \mathcal{O}_{X_{\overline{k}}})$.
+Hence we may assume $k$ is algebraically closed.
+
+\medskip\noindent
+Let $X_{red}$ be the reduction of $X$. Then the surjection
+$\mathcal{O}_X \to \mathcal{O}_{X_{red}}$ induces a surjection
+$H^1(X, \mathcal{O}_X) \to H^1(X, \mathcal{O}_{X_{red}})$
+because cohomology of quasi-coherent sheaves vanishes in degrees
+$\geq 2$ by
+Cohomology, Proposition \ref{cohomology-proposition-vanishing-Noetherian}.
+Since $X_{red} \to X$ induces an isomorphism on irreducible
+components over $\overline{k}$ and an isomorphism on
+$\ell$-torsion in Picard groups
+(Picard Schemes of Curves, Lemma \ref{pic-lemma-torsion-descends})
+we may replace $X$ by $X_{red}$. In this way we reduce to
+Proposition \ref{proposition-torsion-picard-reduced-proper}.
+\end{proof}
+
+
+
+
+
+
+\section{Nodal curves}
+\label{section-nodal}
+
+\noindent
+We have already defined ordinary double points over algebraically
+closed fields, see Definition \ref{definition-multicross}. Namely,
+if $x \in X$ is a closed point of a $1$-dimensional
+algebraic scheme over an algebraically closed field $k$, then
+$x$ is an ordinary double point if and only if
+$$
+\mathcal{O}_{X, x}^\wedge \cong k[[x, y]]/(xy)
+$$
+See discussion following (\ref{equation-multicross}) in
+Section \ref{section-multicross}.
+
+\begin{definition}
+\label{definition-nodal}
+Let $k$ be a field. Let $X$ be a $1$-dimensional locally algebraic $k$-scheme.
+\begin{enumerate}
+\item We say a closed point $x \in X$ is a {\it node}, or an
+{\it ordinary double point}, or {\it defines a nodal singularity}
+if there exists an ordinary double point $\overline{x} \in X_{\overline{k}}$
+mapping to $x$.
+\item We say the {\it singularities of $X$ are at-worst-nodal} if
+all closed points of $X$ are either in the smooth locus of
+the structure morphism $X \to \Spec(k)$ or are ordinary double points.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Often a $1$-dimensional algebraic scheme $X$ is called a {\it nodal curve}
+if the singularities of $X$ are at worst nodal. Sometimes a nodal curve
+is required to be proper. Since a nodal curve
+so defined need not be irreducible, this conflicts with our earlier definition
+of a curve as a variety of dimension $1$.
+
+\begin{lemma}
+\label{lemma-reduced-quotient-regular-ring-dim-2}
+Let $(A, \mathfrak m)$ be a regular local ring of dimension $2$.
+Let $I \subset \mathfrak m$ be an ideal.
+\begin{enumerate}
+\item If $A/I$ is reduced, then $I = (0)$, $I = \mathfrak m$, or
+$I = (f)$ for some nonzero $f \in \mathfrak m$.
+\item If $A/I$ has depth $1$, then $I = (f)$ for some nonzero
+$f \in \mathfrak m$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $I \not = 0$. Write $I = (f_1, \ldots, f_r)$. As $A$ is a UFD
+(More on Algebra, Lemma \ref{more-algebra-lemma-regular-local-UFD})
+we can write $f_i = fg_i$ where $f$ is the gcd of $f_1, \ldots, f_r$.
+Thus the gcd of $g_1, \ldots, g_r$ is $1$ which means that
+no height $1$ prime ideal contains $(g_1, \ldots, g_r)$.
+Then either $(g_1, \ldots, g_r) = A$ which implies $I = (f)$ or
+if not, then $\dim(A) = 2$ implies that
+$V(g_1, \ldots, g_r) = \{\mathfrak m\}$, i.e.,
+$\mathfrak m = \sqrt{(g_1, \ldots, g_r)}$.
+
+\medskip\noindent
+Assume $A/I$ reduced, i.e., $I$ radical. If $f$ is a unit, then since $I$
+is radical we see that $I = \mathfrak m$. If $f \in \mathfrak m$, then
+$f^{n - 1} \in (g_1, \ldots, g_r)$ for some $n$, hence $f^n \in I$, i.e.,
+$f^n$ maps to zero in $A/I$. Hence $f \in I$ by reducedness and
+we conclude $I = (f)$.
+
+\medskip\noindent
+Assume $A/I$ has depth $1$. Then $\mathfrak m$ is not an associated
+prime of $A/I$. We may assume $(g_1, \ldots, g_r) \not = A$ (else $I = (f)$
+and we are done), so that $\mathfrak m = \sqrt{(g_1, \ldots, g_r)}$ by the
+first paragraph. The class of $f$ modulo $I$ is annihilated
+by $g_1, \ldots, g_r$, hence by a power of $\mathfrak m$; if it were
+nonzero, $\mathfrak m$ would be an associated prime of $A/I$. Thus the
+class of $f$ is zero in $A/I$ and $I = (f)$ as desired.
+\end{proof}
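+\medskip\noindent
+Both hypotheses in the lemma are needed. In $A = k[[x, y]]$ the ideal
+$I = (x^2, xy) = x \cdot (x, y)$ is not principal, and correspondingly
+$A/I$ is neither reduced nor of depth $1$: the class of $x$ in $A/I$ is a
+nonzero element with square zero which is annihilated by $\mathfrak m$, so
+$\mathfrak m$ is an associated prime of $A/I$.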
+
+\noindent
+Let $\kappa$ be a field and let $V$ be a vector space over $\kappa$.
+We will say $q \in \text{Sym}^2_\kappa(V)$ is {\it nondegenerate}
+if the induced $\kappa$-linear map $V^\vee \to V$ is an
+isomorphism. If $q = \sum_{i \leq j} a_{ij} x_i x_j$ for some
+$\kappa$-basis $x_1, \ldots, x_n$ of $V$, then this means that
+the determinant of the matrix
+$$
+\left(
+\begin{matrix}
+2a_{11} & a_{12} & \ldots \\
+a_{12} & 2a_{22} & \ldots \\
+\ldots & \ldots & \ldots
+\end{matrix}
+\right)
+$$
+is nonzero. This is equivalent to the condition that the
+partial derivatives of $q$ with respect to the $x_i$ cut out
+$0$ scheme theoretically.
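+\medskip\noindent
+For example, for $n = 2$ the form $q = xy$ has matrix
+$\left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)$
+with determinant $-1$ and is nondegenerate in every characteristic,
+whereas $q = x^2$ is always degenerate. In characteristic $2$ the diagonal
+form $q = x^2 + y^2$ is degenerate as well; this is why the cross term
+$bxy$ is essential in Lemma \ref{lemma-nodal-algebraic} below.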
+
+\begin{lemma}
+\label{lemma-nodal-algebraic}
+Let $k$ be a field. Let $(A, \mathfrak m, \kappa)$ be a Noetherian
+local $k$-algebra. The following are equivalent
+\begin{enumerate}
+\item $\kappa/k$ is separable, $A$ is reduced,
+$\dim_\kappa(\mathfrak m/\mathfrak m^2) = 2$, and there exists a nondegenerate
+$q \in \text{Sym}^2_\kappa(\mathfrak m/\mathfrak m^2)$
+which maps to zero in $\mathfrak m^2/\mathfrak m^3$,
+\item $\kappa/k$ is separable, $\text{depth}(A) = 1$,
+$\dim_\kappa(\mathfrak m/\mathfrak m^2) = 2$, and there exists a nondegenerate
+$q \in \text{Sym}^2_\kappa(\mathfrak m/\mathfrak m^2)$
+which maps to zero in $\mathfrak m^2/\mathfrak m^3$,
+\item $\kappa/k$ is separable,
+$A^\wedge \cong \kappa[[x, y]]/(ax^2 + bxy + cy^2)$
+as a $k$-algebra where $ax^2 + bxy + cy^2$ is a nondegenerate quadratic form
+over $\kappa$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (3). Then $A^\wedge$ is reduced because $ax^2 + bxy + cy^2$
+is either irreducible or a product of two nonassociated prime elements.
+Hence $A \subset A^\wedge$ is reduced. It follows that (1) is true.
+
+\medskip\noindent
+Assume (1). Then $A$ cannot be Artinian: in that case $\mathfrak m$ would
+be nilpotent, and since $\mathfrak m \not = (0)$ the ring $A$ would not be
+reduced.
+Hence $\dim(A) \geq 1$, hence $\text{depth}(A) \geq 1$
+by Algebra, Lemma \ref{algebra-lemma-criterion-reduced}.
+On the other hand $\dim(A) = 2$ implies $A$ is regular
+which contradicts the existence of $q$ by
+Algebra, Lemma \ref{algebra-lemma-regular-graded}.
+Thus $\dim(A) \leq 1$ and we conclude $\text{depth}(A) = 1$
+by Algebra, Lemma \ref{algebra-lemma-bound-depth}.
+It follows that (2) is true.
+
+\medskip\noindent
+Assume (2). Since the depth of $A$ is the same as the depth of $A^\wedge$
+(More on Algebra, Lemma \ref{more-algebra-lemma-completion-depth})
+and since the other conditions are insensitive to completion, we may
+assume that $A$ is complete. Choose $\kappa \to A$ as in
+More on Algebra, Lemma \ref{more-algebra-lemma-lift-residue-field}.
+Since $\dim_\kappa(\mathfrak m/\mathfrak m^2) = 2$ we can choose
+$x_0, y_0 \in \mathfrak m$ which map to a basis.
+We obtain a continuous $\kappa$-algebra map
+$$
+\kappa[[x, y]] \longrightarrow A
+$$
+by the rules $x \mapsto x_0$ and $y \mapsto y_0$. Write the given
+nondegenerate form as the class of $ax_0^2 + bx_0y_0 + cy_0^2$ in
+$\text{Sym}^2_\kappa(\mathfrak m/\mathfrak m^2)$ for some
+$a, b, c \in \kappa$.
+Write $Q(x, y) = ax^2 + bxy + cy^2$ viewed as a polynomial
+in two variables. Then we see that
+$$
+Q(x_0, y_0) = ax_0^2 + bx_0y_0 + cy_0^2 =
+\sum\nolimits_{i + j = 3} a_{ij} x_0^iy_0^j
+$$
+for some $a_{ij}$ in $A$. We want to prove that we can
+increase the order of vanishing by changing our choice
+of $x_0$, $y_0$. Suppose that $x_1, y_1 \in \mathfrak m^2$.
+Then
+$$
+Q(x_0 + x_1, y_0 + y_1) = Q(x_0, y_0) +
+(2ax_0 + by_0)x_1 + (bx_0 + 2cy_0)y_1 \bmod \mathfrak m^4
+$$
+Nondegeneracy of $Q$ means exactly that $2ax_0 + by_0$ and $bx_0 + 2cy_0$
+are a $\kappa$-basis for $\mathfrak m/\mathfrak m^2$, see discussion
+preceding the lemma. Hence we can
+certainly choose $x_1, y_1 \in \mathfrak m^2$ such that
+$Q(x_0 + x_1, y_0 + y_1) \in \mathfrak m^4$.
+Continuing in this fashion by induction
+we can find $x_i, y_i \in \mathfrak m^{i + 1}$
+such that
+$$
+Q(x_0 + x_1 + \ldots + x_n, y_0 + y_1 + \ldots + y_n) \in \mathfrak m^{n + 3}
+$$
+Since $A$ is complete we can set
+$x_\infty = \sum x_i$ and $y_\infty = \sum y_i$
+and we can consider the map $\kappa[[x, y]] \longrightarrow A$
+sending $x$ to $x_\infty$ and $y$ to $y_\infty$. This map
+induces a surjection $\kappa[[x, y]]/(Q) \longrightarrow A$ by
+Algebra, Lemma \ref{algebra-lemma-completion-generalities}.
+By Lemma \ref{lemma-reduced-quotient-regular-ring-dim-2}
+the kernel of $\kappa[[x, y]] \to A$ is principal.
+But the kernel cannot contain a proper divisor of $Q$
+as such a divisor would have degree $1$ in $x, y$
+and this would contradict $\dim(\mathfrak m/\mathfrak m^2) = 2$.
+Hence $Q$ generates the kernel as desired.
+\end{proof}
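+\medskip\noindent
+To illustrate the two possibilities for a nondegenerate form when the
+characteristic is not $2$, take $k = \kappa = \mathbf{R}$. Both
+$\mathbf{R}[[x, y]]/(xy)$ and $\mathbf{R}[[x, y]]/(x^2 + y^2)$ are of the
+type considered in the lemma. The first has split quadratic form and two
+branches defined over $\mathbf{R}$; the second has
+$b^2 - 4ac = -4 \not = 0$, its integral closure in its total ring of
+fractions is $\mathbf{C}[[x]]$, and its two geometric branches are
+conjugate over $\mathbf{R}$. Compare with the proof of
+Lemma \ref{lemma-2-branches-delta-1}.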
+
+\begin{lemma}
+\label{lemma-2-branches-delta-1}
+Let $k$ be a field. Let $(A, \mathfrak m, \kappa)$ be a
+Nagata local $k$-algebra. The following are equivalent
+\begin{enumerate}
+\item $k \to A$ is as in Lemma \ref{lemma-nodal-algebraic},
+\item $\kappa/k$ is separable, $A$ is reduced of dimension $1$,
+the $\delta$-invariant of $A$ is $1$, and $A$ has $2$ geometric branches.
+\end{enumerate}
+If this holds, then the integral closure $A'$ of $A$
+in its total ring of fractions has either $1$ or $2$
+maximal ideals $\mathfrak m'$ and the extensions
+$\kappa(\mathfrak m')/k$ are separable.
+\end{lemma}
+
+\begin{proof}
+In both cases $A$ and $A^\wedge$ are reduced. In case (2)
+because the completion of a reduced local Nagata ring is reduced
+(More on Algebra, Lemma \ref{more-algebra-lemma-completion-reduced}).
+In both cases $A$ and $A^\wedge$ have dimension $1$
+(More on Algebra, Lemma \ref{more-algebra-lemma-completion-dimension}).
+The $\delta$-invariant and the number of geometric branches of
+$A$ and $A^\wedge$ agree by
+Varieties, Lemma \ref{varieties-lemma-delta-same-after-completion}
+and
+More on Algebra, Lemma
+\ref{more-algebra-lemma-one-dimensional-number-of-branches}.
+Let $A'$ be the integral closure of $A$ in its total ring of fractions
+as in Varieties, Lemma \ref{varieties-lemma-pre-delta-invariant}.
+By Varieties, Lemma \ref{varieties-lemma-normalization-same-after-completion}
+we see that $A' \otimes_A A^\wedge$ plays the same role for $A^\wedge$.
+Thus we may replace $A$ by $A^\wedge$ and assume $A$ is complete.
+
+\medskip\noindent
+Assume (1) holds. It suffices to show that $A$ has two
+geometric branches and $\delta$-invariant $1$.
+We may assume $A = \kappa[[x, y]]/(ax^2 + bxy + cy^2)$ with
+$q = ax^2 + bxy + cy^2$ nondegenerate. There are two cases.
+
+\medskip\noindent
+Case I: $q$ splits over $\kappa$. In this case we may after
+changing coordinates assume that $q = xy$. Then we see that
+$$
+A' = \kappa[[x, y]]/(x) \times \kappa[[x, y]]/(y)
+$$
+
+\medskip\noindent
+Case II: $q$ does not split. In this case $c \not = 0$ and
+nondegenerate means $b^2 - 4ac \not = 0$. Hence
+$\kappa' = \kappa[t]/(a + bt + ct^2)$ is a degree $2$
+separable extension of $\kappa$. Then $t = y/x$
+is integral over $A$ and we conclude that
+$$
+A' = \kappa'[[x]]
+$$
+with $y$ mapping to $tx$ on the right hand side.
+
+\medskip\noindent
+In both cases one verifies by hand that the $\delta$-invariant
+is $1$ and the number of geometric branches is $2$. In
+this way we see that (1) implies (2).
+Moreover we conclude that the final statement of the lemma holds.
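
\medskip\noindent
For example, here is a sketch of the computation in Case I.
The ring $A = \kappa[[x, y]]/(xy)$ sits inside
$A' = \kappa[[x]] \times \kappa[[y]]$ as the set of pairs $(f, g)$
with $f(0) = g(0)$. Hence the map
$$
A'/A \longrightarrow \kappa, \quad (f, g) \longmapsto f(0) - g(0)
$$
is an isomorphism, so $A'/A$ has length $1$ and the
$\delta$-invariant is $1$. The two maximal ideals of $A'$ both have
residue field $\kappa$, whence $2$ geometric branches.
Case II is similar, using that $A'/A \cong \kappa'/\kappa$
has length $1$ and that $[\kappa' : \kappa]_s = 2$.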
+
+\medskip\noindent
+Assume (2) holds.
+More on Algebra, Lemma \ref{more-algebra-lemma-number-of-branches-1}
+implies $A'$ either has two maximal ideals or $A'$ has one maximal ideal
+and $[\kappa(\mathfrak m') : \kappa]_s = 2$.
+
+\medskip\noindent
+Case I: $A'$ has two maximal ideals $\mathfrak m'_1$, $\mathfrak m'_2$
+with residue fields $\kappa_1$, $\kappa_2$.
+Since the $\delta$-invariant is the length of $A'/A$ and
+since there is a surjection $A'/A \to (\kappa_1 \times \kappa_2)/\kappa$
+we see that $\kappa = \kappa_1 = \kappa_2$. Since $A$ is complete
+(and henselian by Algebra, Lemma \ref{algebra-lemma-complete-henselian})
+and $A'$ is finite over $A$ we see that $A' = A_1 \times A_2$
+(by Algebra, Lemma \ref{algebra-lemma-finite-over-henselian}).
+Since $A'$ is a normal ring it follows that $A_1$ and $A_2$ are
+discrete valuation rings.
+Hence $A_1$ and $A_2$ are isomorphic to $\kappa[[t]]$
+(as $k$-algebras) by
+More on Algebra, Lemma \ref{more-algebra-lemma-power-series-over-residue-field}.
+Since the $\delta$-invariant is $1$ we conclude that $A$
+is the wedge of $A_1$ and $A_2$
+(Varieties, Definition \ref{varieties-definition-wedge}).
+It follows easily that $A \cong \kappa[[x, y]]/(xy)$.
+
+\medskip\noindent
+Case II: $A'$ has a single maximal ideal $\mathfrak m'$ with residue
+field $\kappa'$ and $[\kappa' : \kappa]_s = 2$. Arguing exactly
+as in Case I we see that $[\kappa' : \kappa] = 2$ and $\kappa'$
+is separable over $\kappa$. Since $A'$ is normal we see that
+$A'$ is isomorphic to $\kappa'[[t]]$ (see reference above).
+Since $A'/A$ has length $1$ we conclude that
+$$
+A = \{f \in \kappa'[[t]] \mid f(0) \in \kappa\}
+$$
Then a simple computation shows that $A$ is as in case (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fitting-ideal-well-defined}
+Let $k$ be a field. Let $A = k[[x_1, \ldots, x_n]]$. Let
+$I = (f_1, \ldots, f_m) \subset A$ be an ideal. For any
+$r \geq 0$ the ideal in $A/I$ generated by the $r \times r$-minors
+of the matrix $(\partial f_j/\partial x_i)$ is independent
+of the choice of the generators of $I$ or the
+regular system of parameters $x_1, \ldots, x_n$ of $A$.
+\end{lemma}
+
+\begin{proof}
+The ``correct'' proof of this lemma is to prove that this ideal
+is the $(n - r)$th Fitting ideal of a module of continuous differentials
+of $A/I$ over $k$. Here is a direct proof.
If $g_1, \ldots, g_l$ is a second set of generators of $I$, then
+we can write $g_s = \sum a_{sj}f_j$ and we have the equality of matrices
+$$
+(\partial g_s/\partial x_i) = (a_{sj}) (\partial f_j/\partial x_i)
++ (\partial a_{sj}/\partial x_i f_j)
+$$
+The final term is zero in $A/I$.
+By the Cauchy-Binet formula we see that the ideal of minors for the
+$g_s$ is contained in the ideal for the $f_j$. By symmetry
+these ideals are the same. If $y_1, \ldots, y_n \in \mathfrak m_A$
+is a second regular system of parameters, then the matrix
+$(\partial y_j/\partial x_i)$
+is invertible and we can use the chain rule for differentiation.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fitting-ideal}
+Let $k$ be a field. Let $A = k[[x_1, \ldots, x_n]]$. Let
+$I = (f_1, \ldots, f_m) \subset \mathfrak m_A$ be an ideal. The following
+are equivalent
+\begin{enumerate}
+\item $k \to A/I$ is as in Lemma \ref{lemma-nodal-algebraic},
+\item $A/I$ is reduced and the
+$(n - 1) \times (n - 1)$ minors of the matrix
$(\partial f_j/\partial x_i)$ together with $I$ generate $\mathfrak m_A$,
+\item $\text{depth}(A/I) = 1$ and the
+$(n - 1) \times (n - 1)$ minors of the matrix
$(\partial f_j/\partial x_i)$ together with $I$ generate $\mathfrak m_A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-fitting-ideal-well-defined}
+we may change our system of coordinates and the
+choice of generators during the proof.
+
+\medskip\noindent
+If (1) holds, then we may change coordinates such that
+$x_1, \ldots, x_{n - 2}$ map to zero in $A/I$ and
+$A/I = k[[x_{n - 1}, x_n]]/(a x_{n - 1}^2 + b x_{n - 1}x_n + c x_n^2)$
+for some nondegenerate quadric $a x_{n - 1}^2 + b x_{n - 1}x_n + c x_n^2$.
+Then we can explicitly compute to show that both (2) and (3) are true.
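
\medskip\noindent
Here is a sketch of that computation. With generators
$x_1, \ldots, x_{n - 2}, f$ of $I$ where
$f = a x_{n - 1}^2 + b x_{n - 1} x_n + c x_n^2 + \text{h.o.t.}$,
the matrix of partial derivatives consists of an identity block
and the row $(\partial f/\partial x_i)$. The nonzero
$(n - 1) \times (n - 1)$-minors are, modulo $\mathfrak m_A^2$,
$$
\partial f/\partial x_{n - 1} \equiv 2a x_{n - 1} + b x_n
\quad\text{and}\quad
\partial f/\partial x_n \equiv b x_{n - 1} + 2c x_n
$$
The determinant of the coefficient matrix of these two linear forms
is $4ac - b^2$, which is a unit exactly because the quadric is
nondegenerate (this works in characteristic $2$ as well, where the
determinant reads $-b^2$). Hence the minors together with $I$ generate
$\mathfrak m_A$ by Nakayama's lemma.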
+
+\medskip\noindent
Assume the $(n - 1) \times (n - 1)$ minors of the matrix
$(\partial f_j/\partial x_i)$ together with $I$ generate $\mathfrak m_A$.
+Suppose that for some $i$ and $j$ the partial derivative
+$\partial f_j/\partial x_i$ is a unit in $A$. Then
+we may use the system of parameters
+$f_j, x_1, \ldots, x_{i - 1}, \hat x_i, x_{i + 1}, \ldots, x_n$
+and the generators
+$f_j, f_1, \ldots, f_{j - 1}, \hat f_j, f_{j + 1}, \ldots, f_m$
of $I$. After renaming, we obtain a regular system of parameters
$x_1, \ldots, x_n$ and generators $x_1, f_2, \ldots, f_m$ of $I$.
+Next, we look for an $i \geq 2$ and $j \geq 2$ such that
+$\partial f_j/\partial x_i$ is a unit in $A$. If such a pair
+exists, then we can make a replacement as above and assume
+that we have a regular system of parameters
+$x_1, \ldots, x_n$ and generators $x_1, x_2, f_3, \ldots, f_m$ of $I$.
+Continuing, in finitely many steps we reach the situation where
+we have a regular system of parameters
+$x_1, \ldots, x_n$ and generators
+$x_1, \ldots, x_t, f_{t + 1}, \ldots, f_m$ of $I$
+such that $\partial f_j/\partial x_i \in \mathfrak m_A$
+for all $i, j \geq t + 1$.
+
+\medskip\noindent
+In this case the matrix of partial derivatives has the following
+block shape
+$$
+\left(
+\begin{matrix}
+I_{t \times t} & * \\
+0 & \mathfrak m_A
+\end{matrix}
+\right)
+$$
+Hence every $(n - 1) \times (n - 1)$-minor is in $\mathfrak m_A^{n - 1 - t}$.
+Note that $I \not = \mathfrak m_A$ otherwise the ideal of minors
+would contain $1$. It follows that $n - 1 - t \leq 1$ because there
is an element of $\mathfrak m_A \setminus (\mathfrak m_A^2 + I)$ (otherwise
+$I = \mathfrak m_A$ by Nakayama). Thus $t \geq n - 2$.
+We have seen that $t \not = n$ above and similarly if
+$t = n - 1$, then there is an invertible $(n - 1) \times (n - 1)$-minor
+which is disallowed as well. Hence $t = n - 2$.
+Then $A/I$ is a quotient of $k[[x_{n - 1}, x_n]]$ and
+Lemma \ref{lemma-reduced-quotient-regular-ring-dim-2}
+implies in both cases (2) and (3) that $I$ is generated by
+$x_1, \ldots, x_{n - 2}, f$ for some $f = f(x_{n - 1}, x_n)$.
+In this case the condition on the minors exactly says that the quadratic
+term in $f$ is nondegenerate, i.e., $A/I$ is as in
+Lemma \ref{lemma-nodal-algebraic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nodal}
+Let $k$ be a field. Let $X$ be a $1$-dimensional algebraic $k$-scheme.
+Let $x \in X$ be a closed point. The following are equivalent
+\begin{enumerate}
+\item $x$ is a node,
+\item $k \to \mathcal{O}_{X, x}$ is as in Lemma \ref{lemma-nodal-algebraic},
+\item any $\overline{x} \in X_{\overline{k}}$ mapping to $x$ defines
+a nodal singularity,
+\item $\kappa(x)/k$ is separable, $\mathcal{O}_{X, x}$ is reduced, and
+the first Fitting ideal of $\Omega_{X/k}$ generates $\mathfrak m_x$
+in $\mathcal{O}_{X, x}$,
+\item $\kappa(x)/k$ is separable, $\text{depth}(\mathcal{O}_{X, x}) = 1$, and
+the first Fitting ideal of $\Omega_{X/k}$ generates $\mathfrak m_x$
+in $\mathcal{O}_{X, x}$,
+\item $\kappa(x)/k$ is separable and $\mathcal{O}_{X, x}$ is reduced, has
+$\delta$-invariant $1$, and has $2$ geometric branches.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First assume that $k$ is algebraically closed.
+In this case the equivalence of (1) and (3) is trivial.
+The equivalence of (1) and (3) with (2) holds because the only
+nondegenerate quadric in two variables is $xy$ up to change in
+coordinates. The equivalence of (1) and (6) is
+Lemma \ref{lemma-multicross-algebra}.
+After replacing $X$ by an affine neighbourhood
+of $x$, we may assume there is a closed immersion $X \to \mathbf{A}^n_k$
+mapping $x$ to $0$. Let $f_1, \ldots, f_m \in k[x_1, \ldots, x_n]$
+be generators for the ideal $I$ of $X$ in $\mathbf{A}^n_k$.
+Then $\Omega_{X/k}$ corresponds to the $R = k[x_1, \ldots, x_n]/I$-module
+$\Omega_{R/k}$ which has a presentation
+$$
+R^{\oplus m} \xrightarrow{(\partial f_j/\partial x_i)}
+R^{\oplus n} \to \Omega_{R/k} \to 0
+$$
+(See Algebra, Sections \ref{algebra-section-differentials} and
+\ref{algebra-section-netherlander}.)
+The first Fitting ideal of $\Omega_{R/k}$ is thus the ideal
+generated by the $(n - 1) \times (n - 1)$-minors of the
+matrix $(\partial f_j/\partial x_i)$. Hence (2), (4), (5)
+are equivalent by Lemma \ref{lemma-fitting-ideal} applied
+to the completion of $k[x_1, \ldots, x_n] \to R$
+at the maximal ideal $(x_1, \ldots, x_n)$.
+
+\medskip\noindent
+Now assume $k$ is an arbitrary field.
+In cases (2), (4), (5), (6) the residue field $\kappa(x)$ is
+separable over $k$. Let us show this holds as well
+in cases (1) and (3). Namely, let $Z \subset X$ be the closed subscheme
+of $X$ defined by the first Fitting ideal of $\Omega_{X/k}$.
+The formation of $Z$ commutes with field extension
+(Divisors, Lemma \ref{divisors-lemma-base-change-and-fitting-ideal-omega}).
+If (1) or (3) is true, then there exists a point
+$\overline{x}$ of $X_{\overline{k}}$ such that $\overline{x}$
+is an isolated point of multiplicity $1$ of $Z_{\overline{k}}$ (as we have the
+equivalence of the conditions of the lemma over $\overline{k}$).
In particular $Z_{\overline{k}}$ is geometrically reduced at $\overline{x}$
+(because $\overline{k}$ is algebraically closed). Hence
+$Z$ is geometrically reduced at $x$
+(Varieties, Lemma \ref{varieties-lemma-geometrically-reduced-upstairs}).
+In particular, $Z$ is reduced at $x$, hence $Z = \Spec(\kappa(x))$
+in a neighbourhood of $x$ and $\kappa(x)$ is geometrically reduced
+over $k$. This means that $\kappa(x)/k$ is separable
+(Algebra, Lemma \ref{algebra-lemma-characterize-separable-field-extensions}).
+
+\medskip\noindent
+The argument of the previous paragraph shows that if (1) or (3) holds, then
+the first Fitting ideal of $\Omega_{X/k}$ generates $\mathfrak m_x$.
+Since $\mathcal{O}_{X, x} \to \mathcal{O}_{X_{\overline{k}}, \overline{x}}$
+is flat and since $\mathcal{O}_{X_{\overline{k}}, \overline{x}}$
+is reduced and has depth $1$, we see that (4) and (5) hold
+(use Algebra, Lemmas \ref{algebra-lemma-descent-reduced} and
+\ref{algebra-lemma-apply-grothendieck}).
+Conversely, (4) implies (5) by
+Algebra, Lemma \ref{algebra-lemma-criterion-reduced}.
+If (5) holds, then $Z$ is geometrically reduced at $x$
(because $\kappa(x)/k$ is separable and $Z = \Spec(\kappa(x))$
in a neighbourhood of $x$).
+Hence $Z_{\overline{k}}$ is reduced at any point $\overline{x}$
of $X_{\overline{k}}$ lying over $x$. In other words, the first
Fitting ideal of $\Omega_{X_{\overline{k}}/\overline{k}}$ generates
$\mathfrak m_{\overline{x}}$ in $\mathcal{O}_{X_{\overline{k}}, \overline{x}}$.
+Moreover, since
+$\mathcal{O}_{X, x} \to \mathcal{O}_{X_{\overline{k}}, \overline{x}}$ is flat
+we see that $\text{depth}(\mathcal{O}_{X_{\overline{k}}, \overline{x}}) = 1$
+(see reference above).
+Hence (5) holds for $\overline{x} \in X_{\overline{k}}$ and we
+conclude that (3) holds (because of the equivalence over algebraically
+closed fields). In this way we see that (1), (3), (4), (5)
+are equivalent.
+
+\medskip\noindent
+The equivalence of (2) and (6) follows from
+Lemma \ref{lemma-2-branches-delta-1}.
+
+\medskip\noindent
+Finally, we prove the equivalence of (2) $=$ (6) with
+(1) $=$ (3) $=$ (4) $=$ (5). First we note that the geometric number
+of branches of $X$ at $x$ and the geometric number of branches
+of $X_{\overline{k}}$ at $\overline{x}$ are equal by
+Varieties, Lemma
+\ref{varieties-lemma-geometric-branches-and-change-of-fields}.
+We conclude from the information available to us at this point
+that in all cases this number is equal to $2$.
+On the other hand, in case (1) it is clear that $X$ is geometrically
+reduced at $x$, and hence
+$$
+\delta\text{-invariant of }X\text{ at }x \leq
+\delta\text{-invariant of }X_{\overline{k}}\text{ at }\overline{x}
+$$
+by Varieties, Lemma \ref{varieties-lemma-delta-invariant-and-change-of-fields}.
+Since in case (1) the right hand side is $1$, this
+forces the $\delta$-invariant of $X$ at $x$ to be $1$
+(because if it were zero, then $\mathcal{O}_{X, x}$ would
+be a discrete valuation ring by
+Varieties, Lemma \ref{varieties-lemma-delta-invariant-is-zero}
which is unibranch, a contradiction). Thus (6) holds.
Conversely, if (2) $=$ (6) is true, then assumptions (a), (b), (c) of
+Varieties, Lemma \ref{varieties-lemma-geometrically-normal-in-codim-1}
+hold for $x \in X$ by
+Lemma \ref{lemma-2-branches-delta-1}. Thus
+Varieties, Lemma
+\ref{varieties-lemma-delta-invariant-and-change-of-fields-better}
+applies and shows that we have equality in the above displayed inequality.
We conclude that (6) holds for $\overline{x} \in X_{\overline{k}}$
+and we are back in case (1) by the equivalence of the conditions
+over an algebraically closed field.
+\end{proof}
+
+\begin{remark}[The quadratic extension associated to a node]
+\label{remark-quadratic-extension}
+Let $k$ be a field. Let $(A, \mathfrak m, \kappa)$ be a
+Noetherian local $k$-algebra. Assume that either
+$(A, \mathfrak m, \kappa)$ is as in Lemma \ref{lemma-nodal-algebraic}, or
+$A$ is Nagata as in Lemma \ref{lemma-2-branches-delta-1}, or
+$A$ is complete and as in Lemma \ref{lemma-fitting-ideal}.
+Then $A$ defines canonically a degree $2$ separable $\kappa$-algebra
+$\kappa'$ as follows
+\begin{enumerate}
+\item let $q = ax^2 + bxy + cy^2$ be a nondegenerate quadric
+as in Lemma \ref{lemma-nodal-algebraic} with coordinates $x, y$ chosen
+such that $a \not = 0$ and set
+$\kappa' = \kappa[x]/(ax^2 + bx + c)$,
+\item let $A' \supset A$ be the integral closure of $A$ in its
+total ring of fractions and set $\kappa' = A'/\mathfrak m A'$, or
+\item let $\kappa'$ be the $\kappa$-algebra such that
+$\text{Proj}(\bigoplus_{n \geq 0} \mathfrak m^n/\mathfrak m^{n + 1}) =
+\Spec(\kappa')$.
+\end{enumerate}
+The equivalence of (1) and (2) was shown in the proof of
+Lemma \ref{lemma-2-branches-delta-1}. We omit the equivalence of
+this with (3). If $X$ is a locally Noetherian $k$-scheme and $x \in X$
+is a point such that $\mathcal{O}_{X, x} = A$, then (3) shows that
+$\Spec(\kappa') = X^\nu \times_X \Spec(\kappa)$ where $\nu : X^\nu \to X$
+is the normalization morphism.
+\end{remark}
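
\noindent
For example, take $k = \kappa = \mathbf{R}$ and
$A = \mathbf{R}[[x, y]]/(x^2 + y^2)$. The quadric $x^2 + y^2$ is
nondegenerate because $b^2 - 4ac = -4 \not = 0$ and recipe (1) gives
$$
\kappa' = \mathbf{R}[x]/(x^2 + 1) \cong \mathbf{C}
$$
Accordingly, the integral closure $A' = \mathbf{C}[[t]]$ of $A$ has a
single maximal ideal with residue field $\mathbf{C}$, matching the
factorization $x^2 + y^2 = (x + iy)(x - iy)$ which is only available
after extending scalars to $\mathbf{C}$.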
+
+\begin{remark}[Trivial quadratic extension]
+\label{remark-trivial-quadratic-extension}
+Let $k$ be a field. Let $(A, \mathfrak m, \kappa)$ be as in
+Remark \ref{remark-quadratic-extension} and let $\kappa'/\kappa$
+be the associated separable algebra of degree $2$.
+Then the following are equivalent
+\begin{enumerate}
+\item $\kappa' \cong \kappa \times \kappa$ as $\kappa$-algebra,
+\item the form $q$ of Lemma \ref{lemma-nodal-algebraic}
+can be chosen to be $xy$,
+\item $A$ has two branches,
+\item the extension $A'/A$ of Lemma \ref{lemma-2-branches-delta-1}
+has two maximal ideals, and
+\item $A^\wedge \cong \kappa[[x, y]]/(xy)$ as a $k$-algebra.
+\end{enumerate}
+The equivalence between these conditions has been shown in the
+proof of Lemma \ref{lemma-2-branches-delta-1}. If $X$ is a
+locally Noetherian $k$-scheme and $x \in X$ is a point such that
+$\mathcal{O}_{X, x} = A$, then this means exactly that
+there are two points $x_1, x_2$ of the normalization $X^\nu$
+lying over $x$ and that $\kappa(x) = \kappa(x_1) = \kappa(x_2)$.
+\end{remark}
+
+\begin{definition}
+\label{definition-split-node}
+Let $k$ be a field. Let $X$ be a $1$-dimensional algebraic $k$-scheme.
+Let $x \in X$ be a closed point. We say $x$ is a {\it split node}
+if $x$ is a node, $\kappa(x) = k$, and the equivalent assertions of
+Remark \ref{remark-trivial-quadratic-extension}
+hold for $A = \mathcal{O}_{X, x}$.
+\end{definition}
+
+\noindent
+We formulate the obligatory lemma stating what we already know
+about this concept.
+
+\begin{lemma}
+\label{lemma-split-node}
+Let $k$ be a field. Let $X$ be a $1$-dimensional algebraic $k$-scheme.
+Let $x \in X$ be a closed point. The following are equivalent
+\begin{enumerate}
+\item $x$ is a split node,
+\item $x$ is a node and there are exactly two points $x_1, x_2$
+of the normalization $X^\nu$ lying over $x$ with
+$k = \kappa(x_1) = \kappa(x_2)$,
+\item $\mathcal{O}_{X, x}^\wedge \cong k[[x, y]]/(xy)$ as a $k$-algebra, and
+\item add more here.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from the discussion in
+Remark \ref{remark-trivial-quadratic-extension}
+and Lemma \ref{lemma-nodal}.
+\end{proof}
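
\noindent
For example, the curve $y^2 = x^2 + x^3$ over $\mathbf{R}$ has a
split node at the origin: since $1 + x$ is a square in
$\mathbf{R}[[x]]$, say $u^2 = 1 + x$, we can factor
$$
y^2 - x^2 - x^3 = (y - xu)(y + xu)
$$
and hence the complete local ring at the origin is isomorphic to
$\mathbf{R}[[s, t]]/(st)$. On the other hand, the origin is a node
of the curve $y^2 = -x^2 - x^3$ over $\mathbf{R}$ which is not split,
as the quadratic part $x^2 + y^2$ of $x^2 + x^3 + y^2$ does not
factor over $\mathbf{R}$.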
+
+\begin{lemma}
+\label{lemma-node-field-extension}
+Let $K/k$ be an extension of fields. Let $X$ be a locally algebraic
$k$-scheme of dimension $1$. Let $y \in Y = X_K$ be a point with image
+$x \in X$. The following are equivalent
+\begin{enumerate}
+\item $x$ is a closed point of $X$ and a node, and
+\item $y$ is a closed point of $Y$ and a node.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $x$ is a closed point of $X$, then $y$ is too (look at residue fields).
+But conversely, this need not be the case, i.e., it can happen that a
+closed point of $Y$ maps to a nonclosed point of $X$. However, in this
+case $y$ cannot be a node. Namely, then $X$ would be geometrically
+unibranch at $x$ (because $x$ would be a generic point of $X$ and
+$\mathcal{O}_{X, x}$ would be Artinian and any Artinian local ring is
+geometrically unibranch), hence $Y$ is geometrically unibranch at $y$
+(Varieties, Lemma
+\ref{varieties-lemma-geometrically-unibranch-and-change-of-fields}),
+which means that $y$ cannot be a node by Lemma \ref{lemma-nodal}.
+Thus we may and do assume that both $x$ and $y$ are closed points.
+
+\medskip\noindent
+Choose algebraic closures $\overline{k}$, $\overline{K}$
+and a map $\overline{k} \to \overline{K}$ extending the
+given map $k \to K$. Using the equivalence of (1) and (3)
+in Lemma \ref{lemma-nodal}
+we reduce to the case where $k$ and $K$ are algebraically closed.
+In this case we can argue as in the proof of
+Lemma \ref{lemma-nodal} that the geometric number of branches
+and $\delta$-invariants of $X$ at $x$ and $Y$ at $y$ are the same.
+Another argument can be given by choosing an isomorphism
+$k[[x_1, \ldots, x_n]]/(g_1, \ldots, g_m) \to \mathcal{O}_{X, x}^\wedge$
+of $k$-algebras as in Varieties, Lemma
+\ref{varieties-lemma-complete-local-ring-structure-as-algebra}.
+By Varieties, Lemma \ref{varieties-lemma-base-change-complete-local-ring}
+this gives an isomorphism
+$K[[x_1, \ldots, x_n]]/(g_1, \ldots, g_m) \to \mathcal{O}_{Y, y}^\wedge$
+of $K$-algebras. By definition we have to show that
+$$
+k[[x_1, \ldots, x_n]]/(g_1, \ldots, g_m) \cong k[[s, t]]/(st)
+$$
+if and only if
+$$
+K[[x_1, \ldots, x_n]]/(g_1, \ldots, g_m) \cong K[[s, t]]/(st)
+$$
+We encourage the reader to prove this for themselves.
+Since $k$ and $K$ are algebraically closed fields, this is the same as
+asking these rings to be as in Lemma \ref{lemma-nodal-algebraic}.
+Via Lemma \ref{lemma-fitting-ideal} this translates into a statement
+about the $(n - 1) \times (n - 1)$-minors of the matrix
+$(\partial g_j/\partial x_i)$ which is clearly independent of the
+field used. We omit the details.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-node-etale-local}
+Let $k$ be a field. Let $X$ be a locally algebraic
+$k$-scheme of dimension $1$. Let $Y \to X$ be an \'etale morphism.
+Let $y \in Y$ be a point with image $x \in X$. The following are equivalent
+\begin{enumerate}
+\item $x$ is a closed point of $X$ and a node, and
+\item $y$ is a closed point of $Y$ and a node.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-node-field-extension}
+we may base change to the algebraic closure of $k$.
+Then the residue fields of $x$ and $y$ are $k$.
+Hence the map $\mathcal{O}_{X, x}^\wedge \to \mathcal{O}_{Y, y}^\wedge$
+is an isomorphism (for example by
+\'Etale Morphisms, Lemma \ref{etale-lemma-characterize-etale-completions} or
+More on Algebra, Lemma \ref{more-algebra-lemma-flat-unramified}).
+Thus the lemma is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-node-over-separable-extension}
+Let $k'/k$ be a finite separable field extension.
+Let $X$ be a locally algebraic $k'$-scheme of dimension $1$.
+Let $x \in X$ be a closed point. The following are equivalent
+\begin{enumerate}
+\item $x$ is a node, and
+\item $x$ is a node when $X$ viewed as a locally algebraic $k$-scheme.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows immediately from the characterization of nodes in
+Lemma \ref{lemma-nodal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nodal-lci}
+Let $k$ be a field. Let $X$ be a locally algebraic $k$-scheme
+equidimensional of dimension $1$.
+The following are equivalent
+\begin{enumerate}
+\item the singularities of $X$ are at-worst-nodal, and
+\item $X$ is a local complete intersection over $k$
+and the closed subscheme $Z \subset X$ cut out by the
first Fitting ideal of $\Omega_{X/k}$ is unramified over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We urge the reader to find their own proof of
+this lemma; what follows is just putting together earlier results
+and may hide what is really going on.
+
+\medskip\noindent
+Assume (2). Since $Z \to \Spec(k)$ is quasi-finite
+(Morphisms, Lemma \ref{morphisms-lemma-unramified-quasi-finite})
+we see that the residue fields of points $x \in Z$ are finite
+over $k$ (as well as separable) by
+Morphisms, Lemma \ref{morphisms-lemma-residue-field-quasi-finite}.
+Hence each $x \in Z$ is a closed point of $X$ by
+Morphisms, Lemma
+\ref{morphisms-lemma-algebraic-residue-field-extension-closed-point-fibre}.
+The local ring $\mathcal{O}_{X, x}$ is Cohen-Macaulay by
+Algebra, Lemma \ref{algebra-lemma-lci-CM}.
+Since $\dim(\mathcal{O}_{X, x}) = 1$ by dimension theory
+(Varieties, Section \ref{varieties-section-algebraic-schemes}), we conclude
that $\text{depth}(\mathcal{O}_{X, x}) = 1$. Thus $x$ is a node
+by Lemma \ref{lemma-nodal}. If $x \in X$, $x \not \in Z$, then
+$X \to \Spec(k)$ is smooth at $x$ by
+Divisors, Lemma \ref{divisors-lemma-d-fitting-ideal-omega-smooth}.
+
+\medskip\noindent
+Assume (1). Under this assumption $X$ is geometrically reduced
+at every closed point (see
+Varieties, Lemma \ref{varieties-lemma-geometrically-reduced-upstairs}).
+Hence $X \to \Spec(k)$ is smooth on a dense open by
+Varieties, Lemma \ref{varieties-lemma-geometrically-reduced-dense-smooth-open}.
+Thus $Z$ is closed and consists of closed points.
+By Divisors, Lemma \ref{divisors-lemma-d-fitting-ideal-omega-smooth}
+the morphism $X \setminus Z \to \Spec(k)$ is smooth.
+Hence $X \setminus Z$ is a local complete intersection by
+Morphisms, Lemma \ref{morphisms-lemma-smooth-syntomic}
+and the definition of a local complete intersection in
+Morphisms, Definition \ref{morphisms-definition-syntomic}.
+By Lemma \ref{lemma-nodal} for every point $x \in Z$
+the local ring $\mathcal{O}_{Z, x}$ is equal to $\kappa(x)$
+and $\kappa(x)$ is separable over $k$. Thus $Z \to \Spec(k)$
+is unramified (Morphisms, Lemma \ref{morphisms-lemma-unramified-over-field}).
+Finally, Lemma \ref{lemma-nodal} via part (3) of
+Lemma \ref{lemma-nodal-algebraic}, shows that $\mathcal{O}_{X, x}$
+is a complete intersection in the sense of
+Divided Power Algebra, Definition \ref{dpa-definition-lci}.
+However, Divided Power Algebra, Lemma \ref{dpa-lemma-check-lci-agrees}
+and Morphisms, Lemma \ref{morphisms-lemma-local-complete-intersection}
+show that this agrees with the notion used to define a local
+complete intersection scheme over a field and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-facts-about-nodal-curves}
+Let $k$ be a field. Let $X$ be a locally algebraic $k$-scheme
+equidimensional of dimension $1$ whose singularities are at-worst-nodal.
+Then $X$ is Gorenstein and geometrically reduced.
+\end{lemma}
+
+\begin{proof}
+The Gorenstein assertion follows from Lemma \ref{lemma-nodal-lci}
+and Duality for Schemes, Lemma \ref{duality-lemma-gorenstein-lci}.
+Or you can use that it suffices to check after passing to the
+algebraic closure (Duality for Schemes, Lemma
+\ref{duality-lemma-gorenstein-base-change}), then use that
+a Noetherian local ring is Gorenstein if and only if its
+completion is so (by Dualizing Complexes, Lemma
+\ref{dualizing-lemma-flat-under-gorenstein}), and
+then prove that the local rings $k[[t]]$ and $k[[x, y]]/(xy)$
+are Gorenstein by hand.
+
+\medskip\noindent
+To see that $X$ is geometrically reduced, it suffices to show that
+$X_{\overline{k}}$ is reduced (Varieties, Lemmas
+\ref{varieties-lemma-perfect-reduced} and
+\ref{varieties-lemma-geometrically-reduced}).
+But $X_{\overline{k}}$ is a nodal curve over an
+algebraically closed field. Thus the complete local rings
+of $X_{\overline{k}}$ are isomorphic to either
+$\overline{k}[[t]]$ or $\overline{k}[[x, y]]/(xy)$
+which are reduced as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-subscheme-nodal-curve}
+Let $k$ be a field. Let $X$ be a locally algebraic $k$-scheme
+equidimensional of dimension $1$ whose singularities are at-worst-nodal.
+If $Y \subset X$ is a reduced closed subscheme
+equidimensional of dimension $1$, then
+\begin{enumerate}
+\item the singularities of $Y$ are at-worst-nodal, and
+\item if $Z \subset X$ is the scheme theoretic closure of
+$X \setminus Y$, then
+\begin{enumerate}
+\item the scheme theoretic intersection $Y \cap Z$ is
+the disjoint union of spectra of finite separable extensions of $k$,
+\item each point of $Y \cap Z$ is a node of $X$, and
+\item $Y \to \Spec(k)$ is smooth at every point of $Y \cap Z$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $X$ and $Y$ are reduced and equidimensional of dimension $1$,
+we see that $Y$ is the scheme theoretic union of a subset of the
+irreducible components of $X$ (in a reduced ring $(0)$
+is the intersection of the minimal primes).
+Let $y \in Y$ be a closed point.
+If $y$ is in the smooth locus of
+$X \to \Spec(k)$, then $y$ is on a unique irreducible component
+of $X$ and we see that $Y$ and $X$ agree in an open neighbourhood
+of $y$. Hence $Y \to \Spec(k)$ is smooth at $y$.
+If $y$ is a node of $X$ but still lies on a unique irreducible
+component of $X$, then $y$ is a node on $Y$ by the same argument.
+Suppose that $y$ lies on more than $1$ irreducible component of $X$.
+Since the number of geometric branches of $X$ at $y$ is $2$
+by Lemma \ref{lemma-nodal},
+there can be at most $2$ irreducible components passing through $y$
+by Properties, Lemma
+\ref{properties-lemma-number-of-branches-irreducible-components}.
+If $Y$ contains both of these, then again $Y = X$ in an open neighbourhood
+of $y$ and $y$ is a node of $Y$. Finally, assume $Y$ contains only one
+of the irreducible components. After replacing $X$ by an open
neighbourhood of $y$ we may assume $Y$ is one of the two
irreducible components and $Z$ is the other. By Properties, Lemma
+\ref{properties-lemma-number-of-branches-irreducible-components}
+again we see that $X$ has two branches at $y$, i.e., the local ring
+$\mathcal{O}_{X, y}$ has two branches and that these branches
+come from $\mathcal{O}_{Y, y}$ and $\mathcal{O}_{Z, y}$. Write
+$\mathcal{O}_{X, y}^\wedge \cong \kappa(y)[[u, v]]/(uv)$
+as in Remark \ref{remark-trivial-quadratic-extension}.
+The field $\kappa(y)$ is finite separable over $k$ by
+Lemma \ref{lemma-nodal} for example.
+Thus, after possibly switching the roles of $u$ and $v$,
the completion of the map
$\mathcal{O}_{X, y} \to \mathcal{O}_{Y, y}$
corresponds to $\kappa(y)[[u, v]]/(uv) \to \kappa(y)[[u]]$
and the completion of the map
$\mathcal{O}_{X, y} \to \mathcal{O}_{Z, y}$
corresponds to $\kappa(y)[[u, v]]/(uv) \to \kappa(y)[[v]]$.
The scheme theoretic intersection $Y \cap Z$ is cut out
by the sum of their ideals, which in the completion is $(u, v)$, i.e.,
+the maximal ideal. Thus (2)(a) and (2)(b) are clear.
+Finally, (2)(c) holds: the completion of $\mathcal{O}_{Y, y}$
+is regular, hence $\mathcal{O}_{Y, y}$ is regular
+(More on Algebra, Lemma \ref{more-algebra-lemma-completion-regular})
and $\kappa(y)/k$ is separable, hence
$Y \to \Spec(k)$ is smooth in an open neighbourhood of $y$
by Algebra, Lemma \ref{algebra-lemma-separable-smooth}.
+\end{proof}
+
+
+
+
+\section{Families of nodal curves}
+\label{section-families-nodal}
+
+\noindent
+In the Stacks project curves are irreducible varieties of dimension $1$,
+but in the literature a ``semi-stable curve'' or a ``nodal curve'' is
+usually not irreducible and often assumed to be proper, especially
+when used in a phrase such as ``family of semistable curves'' or
+``family of nodal curves'', or ``nodal family''. Thus it is a bit
+difficult for us to choose a terminology which is consistent with the
+literature as well as internally consistent. Moreover, we really want
+to first study the notion introduced in the following lemma (which is
+local on the source).
+
+\begin{lemma}
+\label{lemma-nodal-family}
+Let $f : X \to S$ be a morphism of schemes. The following are equivalent
+\begin{enumerate}
+\item $f$ is flat, locally of finite presentation, every nonempty fibre
+$X_s$ is equidimensional of dimension $1$, and $X_s$ has
+at-worst-nodal singularities, and
+\item $f$ is syntomic of relative dimension $1$ and the closed subscheme
+$\text{Sing}(f) \subset X$ defined by the first Fitting ideal of
+$\Omega_{X/S}$ is unramified over $S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall that the formation of $\text{Sing}(f)$ commutes with base
+change, see Divisors, Lemma
+\ref{divisors-lemma-base-change-and-fitting-ideal-omega}.
+Thus the lemma follows from Lemma \ref{lemma-nodal-lci},
+Morphisms, Lemma \ref{morphisms-lemma-syntomic-flat-fibres}, and
+Morphisms, Lemma \ref{morphisms-lemma-unramified-etale-fibres}.
+(We also use the trivial
+Morphisms, Lemmas \ref{morphisms-lemma-syntomic-locally-finite-presentation}
+and \ref{morphisms-lemma-syntomic-flat}.)
+\end{proof}
+
+\begin{definition}
+\label{definition-nodal-family}
+Let $f : X \to S$ be a morphism of schemes. We say $f$ is
+{\it at-worst-nodal of relative dimension $1$} if $f$ satisfies
+the equivalent conditions of Lemma \ref{lemma-nodal-family}.
+\end{definition}
+
+\noindent
+Here are some reasons for the cumbersome terminology\footnote{But please
+email the maintainer of the Stacks project if you have a better suggestion.}.
+First, we want to make sure this notion is not confused with any of the
+other notions in the literature (see introduction to this section).
+Second, we can imagine several generalizations of this notion to morphisms
+of higher relative dimension (for example, one can ask for morphisms
+which are \'etale locally compositions of at-worst-nodal morphisms or
+one can ask for morphisms whose fibres are higher dimensional but have
+at worst ordinary double points).
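
\noindent
A basic example of a morphism which is at-worst-nodal of relative
dimension $1$ is
$$
X = \Spec(k[x, y, t]/(xy - t)) \longrightarrow S = \Spec(k[t])
$$
This morphism is syntomic of relative dimension $1$: it is flat
because $k[x, y]$ is torsion free as a module over $k[t]$,
$t \mapsto xy$, and the fibres are plane curves. The module
$\Omega_{X/S}$ has a presentation with the $1 \times 2$ matrix
$(y\ x)$, so $\text{Sing}(f)$ is cut out by $(x, y)$, i.e.,
$\text{Sing}(f) \cong \Spec(k)$ sitting in the nodal fibre over
$t = 0$. In particular $\text{Sing}(f) \to S$ is a closed immersion,
hence unramified, and the fibres over points with $t \not = 0$
are smooth.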
+
+\begin{lemma}
+\label{lemma-smooth-relative-dimension-1}
+A smooth morphism of relative dimension $1$ is
+at-worst-nodal of relative dimension $1$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-nodal-family}
+Let $f : X \to S$ be at-worst-nodal of relative dimension $1$.
+Then the same is true for any base change of $f$.
+\end{lemma}
+
+\begin{proof}
+This is true because the base change of a syntomic morphism
+is syntomic (Morphisms, Lemma \ref{morphisms-lemma-base-change-syntomic}),
+the base change of a morphism of relative dimension $1$ has
+relative dimension $1$
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-relative-dimension-d}),
+the formation of $\text{Sing}(f)$ commutes with base change
+(Divisors, Lemma
+\ref{divisors-lemma-base-change-and-fitting-ideal-omega}), and
+the base change of an unramified morphism is unramified
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-unramified}).
+\end{proof}
+
+\noindent
+The following lemma tells us that we can check whether a morphism
+is at-worst-nodal of relative dimension $1$ on the fibres.
+
+\begin{lemma}
+\label{lemma-locus-where-nodal}
+Let $f : X \to S$ be a morphism of schemes which is flat and
+locally of finite presentation. Then there is a maximal open
+subscheme $U \subset X$ such that $f|_U : U \to S$ is
+at-worst-nodal of relative dimension $1$. Moreover, formation
+of $U$ commutes with arbitrary base change.
+\end{lemma}
+
+\begin{proof}
+By Morphisms, Lemma \ref{morphisms-lemma-set-points-where-fibres-lci}
+we find that there is such an open where $f$ is syntomic.
+Hence we may assume that $f$ is a syntomic morphism.
+In particular $f$ is a Cohen-Macaulay morphism
+(Duality for Schemes, Lemmas \ref{duality-lemma-lci-gorenstein} and
+\ref{duality-lemma-gorenstein-CM-morphism}).
+Thus $X$ is a disjoint union of open and closed subschemes on which
+$f$ has given relative dimension, see Morphisms, Lemma
+\ref{morphisms-lemma-flat-finite-presentation-CM-fibres-relative-dimension}.
+This decomposition is preserved by arbitrary base change, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-relative-dimension-d}.
+Discarding all but one piece we may assume $f$ is syntomic of
+relative dimension $1$. Let $\text{Sing}(f) \subset X$ be the
closed subscheme defined by the first Fitting ideal of
+$\Omega_{X/S}$. There is a maximal open subscheme
+$W \subset \text{Sing}(f)$ such that $W \to S$ is unramified
+and its formation commutes with base change
+(Morphisms, Lemma \ref{morphisms-lemma-set-points-where-fibres-unramified}).
+Since also formation of $\text{Sing}(f)$ commutes with base change
+(Divisors, Lemma
+\ref{divisors-lemma-base-change-and-fitting-ideal-omega}),
+we see that
+$$
+U = (X \setminus \text{Sing}(f)) \cup W
+$$
+is the maximal open subscheme of $X$ such that
+$f|_U : U \to S$ is at-worst-nodal of relative dimension $1$
+and that formation of $U$ commutes with base change.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nodal-family-precompose-etale}
+Let $f : X \to S$ be at-worst-nodal of relative dimension $1$.
+If $Y \to X$ is an \'etale morphism, then the composition $g : Y \to S$
+is at-worst-nodal of relative dimension $1$.
+\end{lemma}
+
+\begin{proof}
+Observe that $g$ is flat and locally of finite presentation as
+a composition of morphisms which are flat and locally of finite
+presentation (use
+Morphisms, Lemmas \ref{morphisms-lemma-etale-locally-finite-presentation},
+\ref{morphisms-lemma-etale-flat},
+\ref{morphisms-lemma-composition-finite-presentation}, and
+\ref{morphisms-lemma-composition-flat}).
+Thus it suffices to prove the fibres have at-worst-nodal singularities.
+This follows from Lemma \ref{lemma-node-etale-local}
+(and the fact that the composition of an \'etale morphism and
+a smooth morphism is smooth by
+Morphisms, Lemmas \ref{morphisms-lemma-etale-smooth-unramified} and
+\ref{morphisms-lemma-composition-smooth}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nodal-family-postcompose-etale}
+Let $S' \to S$ be an \'etale morphism of schemes.
+Let $f : X \to S'$ be at-worst-nodal of relative dimension $1$.
+Then the composition $g : X \to S$
+is at-worst-nodal of relative dimension $1$.
+\end{lemma}
+
+\begin{proof}
+Observe that $g$ is flat and locally of finite presentation as
+a composition of morphisms which are flat and locally of finite
+presentation (use
+Morphisms, Lemmas \ref{morphisms-lemma-etale-locally-finite-presentation},
+\ref{morphisms-lemma-etale-flat},
+\ref{morphisms-lemma-composition-finite-presentation}, and
+\ref{morphisms-lemma-composition-flat}).
+Thus it suffices to prove the fibres of $g$
+have at-worst-nodal singularities.
+This follows from Lemma \ref{lemma-node-over-separable-extension}
+and the analogous result for smooth points.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nodal-family-etale-local-source}
+Let $f : X \to S$ be a morphism of schemes. Let $\{U_i \to X\}$
+be an \'etale covering. The following are equivalent
+\begin{enumerate}
+\item $f$ is at-worst-nodal of relative dimension $1$,
+\item each $U_i \to S$ is at-worst-nodal of relative dimension $1$.
+\end{enumerate}
+In other words, being at-worst-nodal of relative dimension $1$
+is \'etale local on the source.
+\end{lemma}
+
+\begin{proof}
+One direction we have seen in Lemma \ref{lemma-nodal-family-precompose-etale}.
For the other direction, observe that being locally of finite
presentation, being flat, or having relative dimension $1$
is \'etale local on the source
+(Descent, Lemmas
+\ref{descent-lemma-locally-finite-presentation-fppf-local-source},
+\ref{descent-lemma-flat-fpqc-local-source}, and
+\ref{descent-lemma-dimension-at-point}). Taking fibres we reduce
+to the case where $S$ is the spectrum of a field. In this case the
+result follows from Lemma \ref{lemma-node-etale-local}
+(and the fact that being smooth is \'etale local on the source by
+Descent, Lemma \ref{descent-lemma-smooth-smooth-local-source}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nodal-family-fpqc-local-target}
+Let $f : X \to S$ be a morphism of schemes. Let $\{U_i \to S\}$
+be an fpqc covering. The following are equivalent
+\begin{enumerate}
+\item $f$ is at-worst-nodal of relative dimension $1$,
+\item each $X \times_S U_i \to U_i$ is at-worst-nodal of relative
+dimension $1$.
+\end{enumerate}
+In other words, being at-worst-nodal of relative dimension $1$
+is fpqc local on the target.
+\end{lemma}
+
+\begin{proof}
+One direction we have seen in Lemma \ref{lemma-base-change-nodal-family}.
For the other direction, observe that being locally of finite
presentation, being flat, or having relative dimension $1$
is fpqc local on the target
+(Descent, Lemmas
+\ref{descent-lemma-descending-property-locally-finite-presentation},
+\ref{descent-lemma-descending-property-flat}, and
+Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}).
+Taking fibres we reduce
+to the case where $S$ is the spectrum of a field. In this case the
+result follows from Lemma \ref{lemma-node-field-extension}
+(and the fact that being smooth is fpqc local on the target by
+Descent, Lemma \ref{descent-lemma-descending-property-smooth}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descend-nodal-family}
Let $S = \lim_{i \in I} S_i$ be a limit of a directed system of schemes
with affine transition morphisms.
+Let $0 \in I$ and let $f_0 : X_0 \to Y_0$ be a morphism of schemes over $S_0$.
+Assume $S_0$, $X_0$, $Y_0$ are quasi-compact and quasi-separated.
+Let $f_i : X_i \to Y_i$ be the base change of $f_0$ to $S_i$ and
+let $f : X \to Y$ be the base change of $f_0$ to $S$.
+If
+\begin{enumerate}
+\item $f$ is at-worst-nodal of relative dimension $1$, and
+\item $f_0$ is locally of finite presentation,
+\end{enumerate}
+then there exists an $i \geq 0$ such that $f_i$ is at-worst-nodal
+of relative dimension $1$.
+\end{lemma}
+
+\begin{proof}
+By Limits, Lemma \ref{limits-lemma-descend-syntomic}
+there exists an $i$ such that $f_i$ is syntomic.
+Then $X_i = \coprod_{d \geq 0} X_{i, d}$ is a disjoint union of
+open and closed subschemes such that $X_{i, d} \to Y_i$
+has relative dimension $d$, see
+Morphisms, Lemma \ref{morphisms-lemma-syntomic-relative-dimension}.
+Because of the behaviour of dimensions of fibres under base change given in
+Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}
+we see that $X \to X_i$ maps into $X_{i, 1}$.
+Then there exists an $i' \geq i$ such that $X_{i'} \to X_i$
+maps into $X_{i, 1}$, see
+Limits, Lemma \ref{limits-lemma-limit-contained-in-constructible}.
+Thus $f_{i'} : X_{i'} \to Y_{i'}$ is syntomic of relative dimension $1$
+(by Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}
+again).
+Consider the morphism $\text{Sing}(f_{i'}) \to Y_{i'}$.
+We know that the base change to $Y$ is an unramified morphism.
+Hence by Limits, Lemma \ref{limits-lemma-descend-unramified}
+we see that after increasing $i'$ the morphism
+$\text{Sing}(f_{i'}) \to Y_{i'}$ becomes unramified.
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formal-local-structure-nodal-family}
+Let $f : T \to S$ be a morphism of schemes. Let $t \in T$
+with image $s \in S$. Assume
+\begin{enumerate}
+\item $f$ is flat at $t$,
+\item $\mathcal{O}_{S, s}$ is Noetherian,
+\item $f$ is locally of finite type,
+\item $t$ is a split node of the fibre $T_s$.
+\end{enumerate}
+Then there exists an $h \in \mathfrak m_s^\wedge$ and an isomorphism
+$$
+\mathcal{O}_{T, t}^\wedge \cong
+\mathcal{O}_{S, s}^\wedge[[x, y]]/(xy - h)
+$$
+of $\mathcal{O}_{S, s}^\wedge$-algebras.
+\end{lemma}
+
+\begin{proof}
+We replace $S$ by $\Spec(\mathcal{O}_{S, s})$ and $T$ by the base change
+to $\Spec(\mathcal{O}_{S, s})$. Then $T$ is locally Noetherian and hence
+$\mathcal{O}_{T, t}$ is Noetherian.
+Set $A = \mathcal{O}_{S, s}^\wedge$, $\mathfrak m = \mathfrak m_A$, and
+$B = \mathcal{O}_{T, t}^\wedge$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-flat-completion}
+we see that $A \to B$ is flat. Since
+$\mathcal{O}_{T, t}/\mathfrak m_s \mathcal{O}_{T, t} = \mathcal{O}_{T_s, t}$
+we see that $B/\mathfrak m B = \mathcal{O}_{T_s, t}^\wedge$.
+By assumption (4) and Lemma \ref{lemma-split-node}
+we conclude there exist $\overline{u}, \overline{v} \in B/\mathfrak m B$
+such that the map
+$$
+(A/\mathfrak m)[[x, y]] \longrightarrow B/\mathfrak m B,\quad
x \longmapsto \overline{u},\quad
y \longmapsto \overline{v}
+$$
+is surjective with kernel $(xy)$.
+
+\medskip\noindent
+Assume we have $n \geq 1$ and $u, v \in B$ mapping to
+$\overline{u}, \overline{v}$ such that
+$$
+u v = h + \delta
+$$
+for some $h \in A$ and $\delta \in \mathfrak m^nB$.
+We claim that there exist $u', v' \in B$ with
+$u - u', v - v' \in \mathfrak m^n B$ such that
+$$
+u' v' = h' + \delta'
+$$
+for some $h' \in A$ and $\delta' \in \mathfrak m^{n + 1}B$.
+To see this, write $\delta = \sum f_i b_i$ with
+$f_i \in \mathfrak m^n$ and $b_i \in B$. Then write
+$b_i = a_i + u b_{i, 1} + v b_{i, 2} + \delta_i$ with
+$a_i \in A$, $b_{i, 1}, b_{i, 2} \in B$ and $\delta_i \in \mathfrak m B$.
+This is possible because the residue field of $B$ agrees with the
+residue field of $A$ and the images of $u$ and $v$ in $B/\mathfrak m B$
+generate the maximal ideal. Then we set
+$$
+u' = u - \sum b_{i, 2}f_i,\quad
+v' = v - \sum b_{i, 1}f_i
+$$
+and we obtain
+$$
+u'v' = h + \delta - \sum (b_{i, 1}u + b_{i, 2}v)f_i + \sum
+c_{ij}f_if_j =
+h + \sum a_if_i + \sum f_i \delta_i + \sum c_{ij}f_if_j
+$$
for some $c_{ij} \in B$.
+Thus we get a formula as above with $h' = h + \sum a_if_i$
+and $\delta' = \sum f_i \delta_i + \sum c_{ij}f_if_j$.
+
+\medskip\noindent
+Arguing by induction and starting with any lifts $u_1, v_1 \in B$
+of $\overline{u}, \overline{v}$ the result of the previous paragraph
+shows that we find a sequence of elements
+$u_n, v_n \in B$ and $h_n \in A$ such that
+$u_n - u_{n + 1} \in \mathfrak m^n B$,
+$v_n - v_{n + 1} \in \mathfrak m^n B$,
+$h_n - h_{n + 1} \in \mathfrak m^n$,
+and such that $u_n v_n - h_n \in \mathfrak m^n B$.
+Since $A$ and $B$ are complete we can set
+$u_\infty = \lim u_n$, $v_\infty = \lim v_n$, and
+$h_\infty = \lim h_n$, and then we obtain $u_\infty v_\infty = h_\infty$
+in $B$. Thus we have an $A$-algebra map
+$$
+A[[x, y]]/(xy - h_\infty) \longrightarrow B
+$$
sending $x$ to $u_\infty$ and $y$ to $v_\infty$.
+This is a map of flat $A$-algebras which is an
+isomorphism after dividing by $\mathfrak m$.
+It is surjective modulo $\mathfrak m$ and hence surjective
+by completeness and
+Algebra, Lemma \ref{algebra-lemma-completion-generalities}.
+Then we can apply Algebra, Lemma \ref{algebra-lemma-mod-injective}
+to conclude it is an isomorphism.
+\end{proof}
+
+\noindent
+Consider the morphism of schemes
+$$
+\Spec(\mathbf{Z}[u, v, a]/(uv - a))
+\longrightarrow
+\Spec(\mathbf{Z}[a])
+$$
+The next lemma shows that this morphism is a model for
+the \'etale local structure of a nodal family of curves.
+If you know a proof of this lemma avoiding the use of Artin approximation,
+then please email
+\href{mailto:stacks.project@gmail.com}{stacks.project@gmail.com}.
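\noindent
For orientation, let us record what the fibres of this model look like
(a standard computation, not used in what follows). Over a point of
$\Spec(\mathbf{Z}[a])$ with residue field $\kappa$ where $a$ maps to
$\overline{a} \in \kappa$, the fibre is
$$
\Spec(\kappa[u, v]/(uv - \overline{a}))
$$
If $\overline{a} = 0$ this is a union of two affine lines meeting in a
split node at the origin. If $\overline{a} \not = 0$ the fibre is smooth
over $\kappa$, as the partial derivatives $v$ and $u$ of
$uv - \overline{a}$ have no common zero on the fibre.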
+
+\begin{lemma}
+\label{lemma-etale-local-structure-nodal-family}
+Let $f : X \to S$ be a morphism of schemes. Assume that
+$f$ is at-worst-nodal of relative dimension $1$. Let
+$x \in X$ be a point which is a singular point of the
+fibre $X_s$. Then there exists a commutative diagram of schemes
+$$
+\xymatrix{
+X \ar[d] &
+U \ar[rr] \ar[l] \ar[rd] & &
+W \ar[r] \ar[ld] &
+\Spec(\mathbf{Z}[u, v, a]/(uv - a)) \ar[d] \\
+S & &
+V \ar[ll] \ar[rr] & & \Spec(\mathbf{Z}[a])
+}
+$$
+with $X \leftarrow U$, $S \leftarrow V$, and $U \to W$ \'etale morphisms,
+and with the right hand square cartesian, such that there exists
+a point $u \in U$ mapping to $x$ in $X$.
+\end{lemma}
+
+\begin{proof}
+We first use absolute Noetherian approximation to reduce to the
+case of schemes of finite type over $\mathbf{Z}$.
+The question is local on $X$ and $S$. Hence we may assume that
+$X$ and $S$ are affine. Then we can write $S = \Spec(R)$
+and write $R$ as a filtered colimit $R = \colim R_i$
+of finite type $\mathbf{Z}$-algebras.
+Using Limits, Lemma \ref{limits-lemma-descend-finite-presentation}
+we can find an $i$ and a morphism $f_i : X_i \to \Spec(R_i)$ whose base
+change to $S$ is $f$. After increasing $i$ we may assume that $f_i$
+is at-worst-nodal of relative dimension $1$, see
+Lemma \ref{lemma-descend-nodal-family}.
+The image $x_i \in X_i$ of $x$ will be a singular
+point of its fibre, for example because the formation of
+$\text{Sing}(f)$ commutes with base change (Divisors, Lemma
+\ref{divisors-lemma-base-change-and-fitting-ideal-omega}).
If we can prove the lemma for $f_i : X_i \to \Spec(R_i)$ and
+$x_i$, then the lemma follows for $f : X \to S$ by base
+change. Thus we reduce to the case studied in the next
+paragraph.
+
+\medskip\noindent
+Assume $S$ is of finite type over $\mathbf{Z}$. Let $s \in S$ be the
+image of $x$. Recall that $\kappa(x)$ is a finite separable extension
+of $\kappa(s)$, for example because $\text{Sing}(f) \to S$
+is unramified or because $x$ is a node of the fibre $X_s$
+and we can apply Lemma \ref{lemma-nodal}.
+Furthermore, let $\kappa'/\kappa(x)$ be the
+degree $2$ separable algebra associated to $\mathcal{O}_{X_s, x}$ in
+Remark \ref{remark-quadratic-extension}.
+By More on Morphisms, Lemma
+\ref{more-morphisms-lemma-realize-prescribed-residue-field-extension-etale}
+we can choose an \'etale neighbourhood $(V, v) \to (S, s)$
+such that the extension $\kappa(v)/\kappa(s)$ realizes either
+the extension $\kappa(x)/\kappa(s)$ in case
+$\kappa' \cong \kappa(x) \times \kappa(x)$ or
+the extension $\kappa'/\kappa(s)$ if $\kappa'$ is a field.
+After replacing $X$ by $X \times_S V$ and $S$ by $V$
+we reduce to the situation described in the next paragraph.
+
+\medskip\noindent
+Assume $S$ is of finite type over $\mathbf{Z}$ and
+$x \in X_s$ is a split node, see Definition \ref{definition-split-node}.
+By Lemma \ref{lemma-formal-local-structure-nodal-family} we see that there
+exists an $\mathcal{O}_{S, s}$-algebra isomorphism
+$$
+\mathcal{O}_{X, x}^\wedge \cong
\mathcal{O}_{S, s}^\wedge[[x, y]]/(xy - h)
+$$
+for some $h \in \mathfrak m_s^\wedge \subset \mathcal{O}_{S, s}^\wedge$.
+In other words, if we consider the homomorphism
+$$
+\sigma : \mathbf{Z}[a] \longrightarrow \mathcal{O}_{S, s}^\wedge
+$$
+sending $a$ to $h$, then there exists an $\mathcal{O}_{S, s}$-algebra
+isomorphism
+$$
+\mathcal{O}_{X, x}^\wedge
+\longrightarrow
+\mathcal{O}_{Y_\sigma, y_\sigma}^\wedge
+$$
+where
+$$
Y_\sigma = \Spec(\mathbf{Z}[u, v, a]/(uv - a))
+\times_{\Spec(\mathbf{Z}[a]), \sigma} \Spec(\mathcal{O}_{S, s}^\wedge)
+$$
+and $y_\sigma$ is the point of $Y_\sigma$ lying over
+the closed point of $\Spec(\mathcal{O}_{S, s}^\wedge)$
+and having coordinates $u, v$ equal to zero. Since
+$\mathcal{O}_{S, s}$ is a G-ring by
+More on Algebra, Proposition \ref{more-algebra-proposition-ubiquity-G-ring}
+we may apply More on Morphisms, Lemma
+\ref{more-morphisms-lemma-relative-map-approximation-pre}
+to conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-genus-in-nodal-family-of-curves}
+Let $f : X \to S$ be a morphism of schemes. Assume
+\begin{enumerate}
+\item $f$ is proper,
+\item $f$ is at-worst-nodal of relative dimension $1$, and
+\item the geometric fibres of $f$ are connected.
+\end{enumerate}
+Then (a) $f_*\mathcal{O}_X = \mathcal{O}_S$ and this holds after
+any base change, (b) $R^1f_*\mathcal{O}_X$ is a finite locally free
+$\mathcal{O}_S$-module whose formation commutes with any base change,
+and (c) $R^qf_*\mathcal{O}_X = 0$ for $q \geq 2$.
+\end{lemma}
+
+\begin{proof}
+Part (a) follows from
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-proper-flat-geom-red-connected}.
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-proper-flat-h0} locally on $S$
+we can write $Rf_*\mathcal{O}_X = \mathcal{O}_S \oplus P$
+where $P$ is perfect of tor amplitude in $[1, \infty)$.
+Recall that formation of $Rf_*\mathcal{O}_X$ commutes
+with arbitrary base change
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}).
+Thus for $s \in S$ we have
+$$
+H^i(P \otimes_{\mathcal{O}_S}^\mathbf{L} \kappa(s)) =
+H^i(X_s, \mathcal{O}_{X_s})
+\text{ for }i \geq 1
+$$
+This is zero unless $i = 1$ since $X_s$ is a $1$-dimensional
+Noetherian scheme, see
+Cohomology, Proposition \ref{cohomology-proposition-vanishing-Noetherian}.
+Then $P = H^1(P)[-1]$ and $H^1(P)$ is finite locally free
+for example by More on Algebra, Lemma
+\ref{more-algebra-lemma-lift-perfect-from-residue-field}.
+Since everything is compatible with base change we conclude.
+\end{proof}
+
+
+
+
+
+
+
+\section{More vanishing results}
+\label{section-more-vanishing}
+
+\noindent
+Continuation of Section \ref{section-vanishing}.
+
+\begin{lemma}
+\label{lemma-h1-nonzero-degree-leq-2g-2}
+In Situation \ref{situation-Cohen-Macaulay-curve} assume $X$ is integral and
+has genus $g$. Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $Z \subset X$ be a $0$-dimensional closed subscheme with ideal
+sheaf $\mathcal{I} \subset \mathcal{O}_X$. If $H^1(X, \mathcal{I}\mathcal{L})$
+is nonzero, then
+$$
+\deg(\mathcal{L}) \leq 2g - 2 + \deg(Z)
+$$
+with strict inequality unless $\mathcal{I}\mathcal{L} \cong \omega_X$.
+\end{lemma}
+
+\begin{proof}
Every curve, in particular $X$, is Cohen-Macaulay.
+If $H^1(X, \mathcal{I}\mathcal{L})$ is nonzero, then there is a nonzero
+map $\mathcal{I}\mathcal{L} \to \omega_X$, see
+Lemma \ref{lemma-duality-dim-1-CM}.
+Since $\mathcal{I}\mathcal{L}$ is torsion free, this map is injective.
+Since a field is Gorenstein and $X$ is reduced, we find
+that the Gorenstein locus $U \subset X$ of $X$ is nonempty, see
+Duality for Schemes, Lemma \ref{duality-lemma-gorenstein}.
+This lemma also tells us that $\omega_X|_U$ is invertible.
+In this way we see we have a short exact sequence
+$$
+0 \to \mathcal{I}\mathcal{L} \to \omega_X \to \mathcal{Q} \to 0
+$$
+where the support of $\mathcal{Q}$ is zero dimensional.
+Hence we have
+\begin{align*}
0 & \leq \dim_k \Gamma(X, \mathcal{Q})\\
+& =
+\chi(\mathcal{Q}) \\
+& =
+\chi(\omega_X) - \chi(\mathcal{I}\mathcal{L}) \\
+& =
+\chi(\omega_X) - \deg(\mathcal{L}) - \chi(\mathcal{I}) \\
+& =
+2g - 2 - \deg(\mathcal{L}) + \deg(Z)
+\end{align*}
+by Lemmas \ref{lemma-euler} and \ref{lemma-rr}, by (\ref{equation-genus}),
+and by Varieties, Lemmas \ref{varieties-lemma-chi-tensor-finite}
+and \ref{varieties-lemma-degree-on-proper-curve}. We have also used
+that $\deg(Z) = \dim_k \Gamma(Z, \mathcal{O}_Z) = \chi(\mathcal{O}_Z)$
+and the short exact sequence
+$0 \to \mathcal{I} \to \mathcal{O}_X \to \mathcal{O}_Z \to 0$.
+The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-degree-more-than-2g-2}
+\begin{reference}
+\cite[Lemma 2]{Jongmin}
+\end{reference}
+In Situation \ref{situation-Cohen-Macaulay-curve}
+assume $X$ is integral and has genus $g$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $Z \subset X$ be a $0$-dimensional closed subscheme with ideal
+sheaf $\mathcal{I} \subset \mathcal{O}_X$.
+If $\deg(\mathcal{L}) > 2g - 2 + \deg(Z)$, then
+$H^1(X, \mathcal{I}\mathcal{L}) = 0$ and one of the following possibilities
+occurs
+\begin{enumerate}
+\item $H^0(X, \mathcal{I}\mathcal{L}) \not = 0$, or
+\item $g = 0$ and $\deg(\mathcal{L}) = \deg(Z) - 1$.
+\end{enumerate}
+In case (2) if $Z = \emptyset$, then $X \cong \mathbf{P}^1_k$ and $\mathcal{L}$
+corresponds to $\mathcal{O}_{\mathbf{P}^1}(-1)$.
+\end{lemma}
+
+\begin{proof}
+The vanishing of $H^1(X, \mathcal{I}\mathcal{L})$ follows from
+Lemma \ref{lemma-h1-nonzero-degree-leq-2g-2}.
+If $H^0(X, \mathcal{I}\mathcal{L}) = 0$, then
+$\chi(\mathcal{I}\mathcal{L}) = 0$. From the short exact
+sequence $0 \to \mathcal{I}\mathcal{L} \to \mathcal{L} \to \mathcal{O}_Z \to 0$
+we conclude $\deg(\mathcal{L}) = g - 1 + \deg(Z)$.
+Thus $g - 1 + \deg(Z) > 2g - 2 + \deg(Z)$ which implies $g = 0$
+hence (2) holds. If $Z = \emptyset$ in case (2),
+then $\mathcal{L}^{-1}$ is an invertible sheaf of degree $1$.
+This implies there is an isomorphism $X \to \mathbf{P}^1_k$ and
+$\mathcal{L}^{-1}$ is the pullback of $\mathcal{O}_{\mathbf{P}^1}(1)$ by
+Lemma \ref{lemma-genus-zero-positive-degree}.
+\end{proof}
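\noindent
As a sanity check for case (2) with $Z = \emptyset$ consider
$X = \mathbf{P}^1_k$ and $\mathcal{L} = \mathcal{O}_{\mathbf{P}^1}(-1)$
(a standard computation, not used in what follows).
Then $g = 0$ and $\deg(\mathcal{L}) = -1 = \deg(Z) - 1$, and indeed
$$
H^0(\mathbf{P}^1_k, \mathcal{O}_{\mathbf{P}^1}(-1)) = 0
\quad\text{and}\quad
H^1(\mathbf{P}^1_k, \mathcal{O}_{\mathbf{P}^1}(-1)) = 0
$$
so that $\chi(\mathcal{L}) = 0$ as in the proof above.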
+
+\begin{lemma}
+\label{lemma-degree-more-than-2g}
+\begin{reference}
+\cite[Lemma 3]{Jongmin}
+\end{reference}
+In Situation \ref{situation-Cohen-Macaulay-curve}
+assume $X$ is integral and has genus $g$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+If $\deg(\mathcal{L}) \geq 2g$, then $\mathcal{L}$
+is globally generated.
+\end{lemma}
+
+\begin{proof}
+Let $Z \subset X$ be the closed subscheme cut out by the
+global sections of $\mathcal{L}$. By Lemma \ref{lemma-degree-more-than-2g-2}
+we see that $Z \not = X$. Let $\mathcal{I} \subset \mathcal{O}_X$
+be the ideal sheaf cutting out $Z$. Consider the short exact sequence
+$$
+0 \to \mathcal{I}\mathcal{L}
+\to \mathcal{L} \to \mathcal{O}_Z \to 0
+$$
+If $Z \not = \emptyset$, then
+$H^1(X, \mathcal{I}\mathcal{L})$ is nonzero
+as follows from the long exact sequence of cohomology.
+By Lemma \ref{lemma-duality-dim-1-CM} this gives a
+nonzero and hence injective map
+$$
+\mathcal{I}\mathcal{L}
+\longrightarrow
+\omega_X
+$$
+In particular, we find an injective map
+$H^0(X, \mathcal{L}) = H^0(X, \mathcal{I}\mathcal{L})
+\to H^0(X, \omega_X)$. This is impossible as
+$$
+\dim_k H^0(X, \mathcal{L}) = \dim_k H^1(X, \mathcal{L}) +
+\deg(\mathcal{L}) + 1 - g \geq g + 1
+$$
and $\dim_k H^0(X, \omega_X) = g$ by (\ref{equation-genus}).
+\end{proof}
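\noindent
The bound in Lemma \ref{lemma-degree-more-than-2g} is sharp; here is a
classical example (not used in the rest of this section). Let $X$ be an
elliptic curve over $k$ with a rational point $P \in X(k)$, so that
$g = 1$ and $\omega_X \cong \mathcal{O}_X$, and set
$\mathcal{L} = \mathcal{O}_X(P)$ of degree $1 = 2g - 1$. Then
$H^1(X, \mathcal{L})$ is dual to $H^0(X, \mathcal{O}_X(-P)) = 0$
and Riemann-Roch gives
$$
\dim_k H^0(X, \mathcal{L}) = \deg(\mathcal{L}) + 1 - g = 1
$$
The nonzero global sections of $\mathcal{L}$ have divisor of zeroes
equal to $P$, hence all global sections vanish at $P$ and
$\mathcal{L}$ is not globally generated.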
+
+\begin{lemma}
+\label{lemma-degree-more-than-2g-1-and-Z}
+In Situation \ref{situation-Cohen-Macaulay-curve}
+assume $X$ is integral and has genus $g$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $Z \subset X$ be a nonempty $0$-dimensional closed subscheme.
+If $\deg(\mathcal{L}) \geq 2g - 1 + \deg(Z)$, then $\mathcal{L}$
+is globally generated and $H^0(X, \mathcal{L}) \to H^0(X, \mathcal{L}|_Z)$
+is surjective.
+\end{lemma}
+
+\begin{proof}
+Global generation by Lemma \ref{lemma-degree-more-than-2g}.
+If $\mathcal{I} \subset \mathcal{O}_X$ is the ideal sheaf
+of $Z$, then $H^1(X, \mathcal{I}\mathcal{L}) = 0$ by
+Lemma \ref{lemma-h1-nonzero-degree-leq-2g-2}. Hence surjectivity.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-on-gorenstein}
+\begin{reference}
+Weak version of \cite[Lemma 4]{Jongmin}
+\end{reference}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$
+which is reduced, connected, and of dimension $1$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $Z \subset X$ be a $0$-dimensional closed subscheme with ideal
+sheaf $\mathcal{I} \subset \mathcal{O}_X$.
+If $H^1(X, \mathcal{I}\mathcal{L}) \not = 0$, then there exists
+a reduced connected closed subscheme $Y \subset X$
+of dimension $1$ such that
+$$
+\deg(\mathcal{L}|_Y) \leq -2\chi(Y, \mathcal{O}_Y) + \deg(Z \cap Y)
+$$
+where $Z \cap Y$ is the scheme theoretic intersection.
+\end{lemma}
+
+\begin{proof}
+If $H^1(X, \mathcal{I}\mathcal{L})$ is nonzero, then there is a nonzero map
+$\varphi : \mathcal{I}\mathcal{L} \to \omega_X$, see
+Lemma \ref{lemma-duality-dim-1-CM}. Let $Y \subset X$
+be the union of the irreducible components $C$ of $X$ such that
+$\varphi$ is nonzero in the generic point of $C$.
+Then $Y$ is a reduced closed subscheme.
+Let $\mathcal{J} \subset \mathcal{O}_X$ be the ideal sheaf of $Y$.
+Since $\mathcal{J}\mathcal{I}\mathcal{L}$
+has no embedded associated points
+(as a submodule of $\mathcal{L}$) and as $\varphi$ is zero
+in the generic points of the support of $\mathcal{J}$
+(by choice of $Y$ and as $X$ is reduced), we find that
+$\varphi$ factors as
+$$
+\mathcal{I}\mathcal{L} \to
+\mathcal{I}\mathcal{L}/\mathcal{J}\mathcal{I}\mathcal{L} \to \omega_X
+$$
+We can view $\mathcal{I}\mathcal{L}/\mathcal{J}\mathcal{I}\mathcal{L}$
+as the pushforward of a coherent sheaf on $Y$ which by abuse of
+notation we indicate with the same symbol.
+Since $\omega_Y = \SheafHom(\mathcal{O}_Y, \omega_X)$
+by Lemma \ref{lemma-closed-immersion-dim-1-CM}
+we find a map
+$$
+\mathcal{I}\mathcal{L}/
+\mathcal{J}\mathcal{I}\mathcal{L}
+\to \omega_Y
+$$
+of $\mathcal{O}_Y$-modules which is injective in the generic points
+of $Y$. Let $\mathcal{I}' \subset \mathcal{O}_Y$ be the ideal
+sheaf of $Z \cap Y$. There is a map
+$\mathcal{I}\mathcal{L}/\mathcal{J}\mathcal{I}\mathcal{L} \to
+\mathcal{I}'\mathcal{L}|_Y$ whose kernel is supported in closed points.
+Since $\omega_Y$ is a Cohen-Macaulay module, the map above
+factors through an injective map $\mathcal{I}'\mathcal{L}|_Y \to
+\omega_Y$. We see that we get
+an exact sequence
+$$
+0 \to \mathcal{I}'\mathcal{L}|_Y \to \omega_Y \to \mathcal{Q} \to 0
+$$
+of coherent sheaves on $Y$ where $\mathcal{Q}$ is supported in dimension $0$
+(this uses that $\omega_Y$ is an invertible module in the generic points
+of $Y$). We conclude that
+$$
0 \leq \dim_k \Gamma(Y, \mathcal{Q}) =
\chi(\mathcal{Q}) = \chi(\omega_Y) - \chi(\mathcal{I}'\mathcal{L}|_Y) =
-2\chi(\mathcal{O}_Y) - \deg(\mathcal{L}|_Y) + \deg(Z \cap Y)
+$$
+by Lemma \ref{lemma-euler} and
+Varieties, Lemma \ref{varieties-lemma-chi-tensor-finite}.
+If $Y$ is connected, then this proves the lemma.
+If not, then we repeat the last part of the argument
+for one of the connected components of $Y$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-global-generation}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$
+which is reduced, connected, and of dimension $1$.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Assume that for every reduced connected closed subscheme
+$Y \subset X$ of dimension $1$ we have
+$$
+\deg(\mathcal{L}|_Y) \geq 2\dim_k H^1(Y, \mathcal{O}_Y)
+$$
+Then $\mathcal{L}$ is globally generated.
+\end{lemma}
+
+\begin{proof}
+By induction on the number of irreducible components of $X$.
+If $X$ is irreducible, then the lemma holds by
+Lemma \ref{lemma-degree-more-than-2g}
+applied to $X$ viewed as a scheme over the field
+$k' = H^0(X, \mathcal{O}_X)$. Assume $X$ is not irreducible.
+Before we continue, if $k$ is finite, then we replace $k$
+by a purely transcendental extension $K$. This is allowed by
+Varieties, Lemmas
+\ref{varieties-lemma-globally-generated-base-change},
+\ref{varieties-lemma-degree-base-change},
+\ref{varieties-lemma-geometrically-reduced-any-base-change}, and
+\ref{varieties-lemma-bijection-irreducible-components},
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology},
+Lemma \ref{lemma-sanity-check-duality} and the elementary fact that
+$K$ is geometrically integral over $k$.
+
+\medskip\noindent
+Assume that $\mathcal{L}$ is not globally generated to get a contradiction.
+Then we may choose a coherent ideal sheaf
+$\mathcal{I} \subset \mathcal{O}_X$ such that
+$H^0(X, \mathcal{I}\mathcal{L}) = H^0(X, \mathcal{L})$
+and such that $\mathcal{O}_X/\mathcal{I}$ is nonzero with
+support of dimension $0$. For example, take $\mathcal{I}$
+the ideal sheaf of any closed point in the common
+vanishing locus of the global sections of $\mathcal{L}$.
+We consider the short exact sequence
+$$
+0 \to \mathcal{I}\mathcal{L} \to \mathcal{L} \to
+\mathcal{L}/\mathcal{I}\mathcal{L} \to 0
+$$
+Since the support of $\mathcal{L}/\mathcal{I}\mathcal{L}$
+has dimension $0$ we see that $\mathcal{L}/\mathcal{I}\mathcal{L}$
+is generated by global sections
+(Varieties, Lemma \ref{varieties-lemma-chi-tensor-finite}).
+From the short exact sequence,
+and the fact that $H^0(X, \mathcal{I}\mathcal{L}) = H^0(X, \mathcal{L})$
+we get an injection
+$H^0(X, \mathcal{L}/\mathcal{I}\mathcal{L}) \to H^1(X, \mathcal{I}\mathcal{L})$.
+
+\medskip\noindent
+Recall that the $k$-vector space $H^1(X, \mathcal{I}\mathcal{L})$
+is dual to $\Hom(\mathcal{I}\mathcal{L}, \omega_X)$.
+Choose $\varphi : \mathcal{I}\mathcal{L} \to \omega_X$.
+By Lemma \ref{lemma-vanishing-on-gorenstein} we have
+$H^1(X, \mathcal{L}) = 0$. Hence
+$$
+\dim_k H^0(X, \mathcal{I}\mathcal{L}) = \dim_k H^0(X, \mathcal{L}) =
+\deg(\mathcal{L}) + \chi(\mathcal{O}_X) > \dim_k H^1(X, \mathcal{O}_X) =
+\dim_k H^0(X, \omega_X)
+$$
+We conclude that $\varphi$ is not injective on global sections, in particular
+$\varphi$ is not injective. For every generic point $\eta \in X$
+of an irreducible component of $X$ denote
$V_\eta \subset \Hom(\mathcal{I}\mathcal{L}, \omega_X)$ the $k$-vector
subspace consisting of those $\varphi$ which are zero at $\eta$.
+Since every associated point of $\mathcal{I}\mathcal{L}$
+is a generic point of $X$, the above shows that
+$\Hom(\mathcal{I}\mathcal{L}, \omega_X) = \bigcup V_\eta$.
+As $X$ has finitely many generic points and $k$ is infinite, we conclude
+$\Hom(\mathcal{I}\mathcal{L}, \omega_X) = V_\eta$ for some $\eta$.
+Let $\eta \in C \subset X$ be the corresponding irreducible component.
+Let $Y \subset X$ be the union of the other irreducible components
+of $X$. Then $Y$ is a nonempty reduced closed subscheme not equal to $X$.
+Let $\mathcal{J} \subset \mathcal{O}_X$ be the ideal sheaf of $Y$.
+Please keep in mind that the support of $\mathcal{J}$ is $C$.
+
+\medskip\noindent
+Let $\varphi : \mathcal{I}\mathcal{L} \to \omega_X$ be arbitrary.
+Since $\mathcal{J}\mathcal{I}\mathcal{L}$
+has no embedded associated points
+(as a submodule of $\mathcal{L}$) and as $\varphi$ is zero
+in the generic point $\eta$ of the support of $\mathcal{J}$, we find that
+$\varphi$ factors as
+$$
+\mathcal{I}\mathcal{L} \to
+\mathcal{I}\mathcal{L}/\mathcal{J}\mathcal{I}\mathcal{L} \to \omega_X
+$$
+We can view $\mathcal{I}\mathcal{L}/\mathcal{J}\mathcal{I}\mathcal{L}$
+as the pushforward of a coherent sheaf on $Y$ which by abuse of
+notation we indicate with the same symbol.
+Since $\omega_Y = \SheafHom(\mathcal{O}_Y, \omega_X)$
+by Lemma \ref{lemma-closed-immersion-dim-1-CM}
+we find a factorization
+$$
+\mathcal{I}\mathcal{L} \to
+\mathcal{I}\mathcal{L}/
+\mathcal{J}\mathcal{I}\mathcal{L}
+\xrightarrow{\varphi'} \omega_Y \to \omega_X
+$$
+of $\varphi$. Let $\mathcal{I}' \subset \mathcal{O}_Y$ be the
+image of $\mathcal{I} \subset \mathcal{O}_X$. There is a surjective map
+$\mathcal{I}\mathcal{L}/\mathcal{J}\mathcal{I}\mathcal{L} \to
+\mathcal{I}'\mathcal{L}|_Y$ whose kernel is supported in closed points.
+Since $\omega_Y$ is a Cohen-Macaulay module on $Y$, the map $\varphi'$
+factors through a map
+$\varphi'' : \mathcal{I}'\mathcal{L}|_Y \to \omega_Y$.
+Thus we have commutative diagrams
+$$
+\vcenter{
+\xymatrix{
+0 \ar[r] &
+\mathcal{I}\mathcal{L} \ar[r] \ar[d] &
+\mathcal{L} \ar[r] \ar[d] &
+\mathcal{L}/\mathcal{I}\mathcal{L} \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\mathcal{I}'\mathcal{L}|_Y \ar[r] &
+\mathcal{L}|_Y \ar[r] &
+\mathcal{L}|_Y/\mathcal{I}'\mathcal{L}|_Y \ar[r] &
+0
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+\mathcal{I}\mathcal{L} \ar[r]_\varphi \ar[d] & \omega_X \\
+\mathcal{I}'\mathcal{L}|_Y \ar[r]^{\varphi''} & \omega_Y \ar[u]
+}
+}
+$$
+Now we can finish the proof as follows:
+Since for every $\varphi$ we have a $\varphi''$ and since
+$\omega_X \in \textit{Coh}(\mathcal{O}_X)$
+represents the functor $\mathcal{F} \mapsto \Hom_k(H^1(X, \mathcal{F}), k)$,
+we find that
+$H^1(X, \mathcal{I}\mathcal{L}) \to H^1(Y, \mathcal{I}'\mathcal{L}|_Y)$
+is injective. Since the boundary
+$H^0(X, \mathcal{L}/\mathcal{I}\mathcal{L}) \to H^1(X, \mathcal{I}\mathcal{L})$
+is injective, we conclude the composition
+$$
+H^0(X, \mathcal{L}/\mathcal{I}\mathcal{L}) \to
+H^0(X, \mathcal{L}|_Y/\mathcal{I}'\mathcal{L}|_Y) \to
+H^1(X, \mathcal{I}'\mathcal{L}|_Y)
+$$
+is injective. Since
+$\mathcal{L}/\mathcal{I}\mathcal{L} \to
+\mathcal{L}|_Y/\mathcal{I}'\mathcal{L}|_Y$
+is a surjective map of coherent modules whose supports have
+dimension $0$, we see that the first map
+$H^0(X, \mathcal{L}/\mathcal{I}\mathcal{L}) \to
+H^0(X, \mathcal{L}|_Y/\mathcal{I}'\mathcal{L}|_Y)$
+is surjective (and hence bijective).
+But by induction we have that $\mathcal{L}|_Y$ is globally
generated (this still holds if $Y$ is disconnected)
+and hence the boundary map
+$$
+H^0(X, \mathcal{L}|_Y/\mathcal{I}'\mathcal{L}|_Y) \to
+H^1(X, \mathcal{I}'\mathcal{L}|_Y)
+$$
+cannot be injective.
+This contradiction finishes the proof.
+\end{proof}
+
+
+
+
+
+\section{Contracting rational tails}
+\label{section-contracting-rational-tails}
+
+\noindent
+In this section we discuss the simplest possible case of contracting
+a scheme to improve positivity properties of its canonical sheaf.
+
+\begin{example}[Contracting a rational tail]
+\label{example-rational-tail}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal.
+A {\it rational tail} will be an irreducible component $C \subset X$
+(viewed as an integral closed subscheme) with the following properties
+\begin{enumerate}
+\item $X' \not = \emptyset$ where $X' \subset X$ is the scheme theoretic closure
+of $X \setminus C$,
+\item the scheme theoretic intersection $C \cap X'$ is a single
+reduced point $x$,
+\item $H^0(C, \mathcal{O}_C)$ maps isomorphically to the
+residue field of $x$, and
+\item $C$ has genus zero.
+\end{enumerate}
+Since there are at least two irreducible components of $X$ passing through
+$x$, we conclude that $x$ is a node.
+Set $k' = H^0(C, \mathcal{O}_C) = \kappa(x)$.
+Then $k'/k$ is a finite separable extension of fields
+(Lemma \ref{lemma-nodal}). There is a canonical morphism
+$$
+c : X \longrightarrow X'
+$$
+inducing the identity on $X'$ and mapping $C$ to $x \in X'$
+via the canonical morphism $C \to \Spec(k') = x$. This follows from
+Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-union}
+since $X$ is the scheme theoretic union of $C$ and $X'$ (as $X$ is reduced).
+Moreover, we claim that
+$$
+c_*\mathcal{O}_X = \mathcal{O}_{X'}
+\quad\text{and}\quad
+R^1c_*\mathcal{O}_X = 0
+$$
+To see this, denote $i_C : C \to X$, $i_{X'} : X' \to X$ and $i_x : x \to X$
+the embeddings and use the exact sequence
+$$
+0 \to \mathcal{O}_X \to
+i_{C, *}\mathcal{O}_C \oplus i_{X', *}\mathcal{O}_{X'} \to
+i_{x, *}\kappa(x) \to 0
+$$
+of Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-union}.
+Looking at the long exact sequence of higher direct images,
we see that it suffices to show $H^0(C, \mathcal{O}_C) = k'$
and $H^1(C, \mathcal{O}_C) = 0$, which follows from the assumptions.
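In more detail, writing $i_{x, *}$ abusively also for the skyscraper
at $x \in X'$: pushing the displayed sequence forward along $c$, which
restricts to the identity on $X'$ and contracts $C$ to $x$, the long
exact sequence of higher direct images reads
$$
0 \to c_*\mathcal{O}_X \to
i_{x, *}H^0(C, \mathcal{O}_C) \oplus \mathcal{O}_{X'} \to
i_{x, *}\kappa(x) \to
R^1c_*\mathcal{O}_X \to
i_{x, *}H^1(C, \mathcal{O}_C) \to 0
$$
so both claims follow from $H^0(C, \mathcal{O}_C) = k' = \kappa(x)$
and $H^1(C, \mathcal{O}_C) = 0$.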
Observe that $X'$ is also a proper scheme over $k$ of dimension $1$
whose singularities are at-worst-nodal
(Lemma \ref{lemma-closed-subscheme-nodal-curve}),
that $H^0(X', \mathcal{O}_{X'}) = k$, and
that $X'$ has the same genus as $X$.
+We will say $c : X \to X'$ is the
+{\it contraction of a rational tail}.
+\end{example}
+
+\begin{lemma}
+\label{lemma-rational-tail-negative}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal. Let $C \subset X$ be a rational tail
+(Example \ref{example-rational-tail}). Then $\deg(\omega_X|_C) < 0$.
+\end{lemma}
+
+\begin{proof}
+Let $X' \subset X$ be as in the example. Then we have a short exact
+sequence
+$$
+0 \to \omega_C \to \omega_X|_C \to \mathcal{O}_{C \cap X'} \to 0
+$$
+See Lemmas \ref{lemma-closed-subscheme-reduced-gorenstein},
+\ref{lemma-facts-about-nodal-curves}, and
+\ref{lemma-closed-subscheme-nodal-curve}.
+With $k'$ as in the example we see that $\deg(\omega_C) = -2[k' : k]$
+as $C \cong \mathbf{P}^1_{k'}$ by Proposition \ref{proposition-projective-line}
+and $\deg(C \cap X') = [k' : k]$.
+Hence $\deg(\omega_X|_C) = -[k' : k]$ which is negative.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-rational-tail-field-extension}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal. Let $C \subset X$ be a rational tail
+(Example \ref{example-rational-tail}).
+For any field extension $K/k$ the base change $C_K \subset X_K$
+is a finite disjoint union of rational tails.
+\end{lemma}
+
+\begin{proof}
+Let $x \in C$ and $k' = \kappa(x)$ be as in the example.
+Observe that $C \cong \mathbf{P}^1_{k'}$ by
+Proposition \ref{proposition-projective-line}.
+Since $k'/k$ is finite separable, we see that
+$k' \otimes_k K = K'_1 \times \ldots \times K'_n$
+is a finite product of finite separable extensions $K'_i/K$.
+Set $C_i = \mathbf{P}^1_{K'_i}$ and denote $x_i \in C_i$
+the inverse image of $x$. Then $C_K = \coprod C_i$ and
+$X'_K \cap C_i = x_i$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-no-rational-tail}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal. If $X$ does not have a rational tail
+(Example \ref{example-rational-tail}),
+then for every reduced connected closed subscheme
+$Y \subset X$, $Y \not = X$ of dimension $1$ we have
+$\deg(\omega_X|_Y) \geq \dim_k H^1(Y, \mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+Let $Y \subset X$ be as in the statement. Then $k' = H^0(Y, \mathcal{O}_Y)$
+is a field and a finite extension of $k$ and $[k' : k]$
+divides all numerical invariants below associated to $Y$ and
+coherent sheaves on $Y$, see
+Varieties, Lemma \ref{varieties-lemma-divisible}.
+Let $Z \subset X$ be as in
+Lemma \ref{lemma-closed-subscheme-reduced-gorenstein}.
+We will use the results of this lemma and of
+Lemmas \ref{lemma-facts-about-nodal-curves} and
+\ref{lemma-closed-subscheme-nodal-curve} without further mention.
+Then we get a short exact sequence
+$$
+0 \to \omega_Y \to \omega_X|_Y \to \mathcal{O}_{Y \cap Z} \to 0
+$$
+See Lemma \ref{lemma-closed-subscheme-reduced-gorenstein}.
+We conclude that
+$$
+\deg(\omega_X|_Y) = \deg(Y \cap Z) + \deg(\omega_Y) =
+\deg(Y \cap Z) - 2\chi(Y, \mathcal{O}_Y)
+$$
+Hence, if the lemma is false, then
+$$
+2[k' : k] > \deg(Y \cap Z) + \dim_k H^1(Y, \mathcal{O}_Y)
+$$
Since $Y \cap Z$ is nonempty and by the divisibility mentioned above,
+this can happen only if $Y \cap Z$ is a single $k'$-rational point
+of the smooth locus of $Y$ and $H^1(Y, \mathcal{O}_Y) = 0$.
+If $Y$ is irreducible, then this implies $Y$ is a rational tail.
+If $Y$ is reducible, then since $\deg(\omega_X|_Y) = -[k' : k]$
+we find there is some irreducible component $C$ of $Y$
+such that $\deg(\omega_X|_C) < 0$, see
+Varieties, Lemma \ref{varieties-lemma-degree-in-terms-of-components}.
+Then the analysis above applied
+to $C$ gives that $C$ is a rational tail.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-no-rational-tail-semiample-genus-geq-2}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal. Assume $X$ does not have a rational tail
+(Example \ref{example-rational-tail}). If
+\begin{enumerate}
+\item the genus of $X$ is $0$, then $X$ is isomorphic to an
+irreducible plane conic and $\omega_X^{\otimes -1}$ is very ample,
+\item the genus of $X$ is $1$, then $\omega_X \cong \mathcal{O}_X$,
+\item the genus of $X$ is $\geq 2$, then
+$\omega_X^{\otimes m}$ is globally generated for $m \geq 2$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-facts-about-nodal-curves} we find that $X$ is
+Gorenstein, i.e., $\omega_X$ is an invertible $\mathcal{O}_X$-module.
+
+\medskip\noindent
+If the genus of $X$ is zero, then $\deg(\omega_X) < 0$, hence if
+$X$ has more than one irreducible component, we get a contradiction
+with Lemma \ref{lemma-no-rational-tail}. In the irreducible case
+we see that $X$ is isomorphic to an irreducible plane conic and
+$\omega_X^{\otimes -1}$ is very ample by Lemma \ref{lemma-genus-zero}.
+
+\medskip\noindent
+If the genus of $X$ is $1$, then $\omega_X$ has a global section and
+$\deg(\omega_X|_C) = 0$ for all irreducible components.
+Namely, $\deg(\omega_X|_C) \geq 0$ for all irreducible components $C$
+by Lemma \ref{lemma-no-rational-tail}, the sum of these numbers is
+$0$ by Lemma \ref{lemma-genus-gorenstein}, and we can apply
+Varieties, Lemma \ref{varieties-lemma-degree-in-terms-of-components}.
+Then $\omega_X \cong \mathcal{O}_X$ by
+Varieties, Lemma \ref{varieties-lemma-no-sections-dual-nef}.
+
+\medskip\noindent
+Assume the genus $g$ of $X$ is greater than or equal to $2$.
+If $X$ is irreducible, then we are done by
+Lemma \ref{lemma-degree-more-than-2g}.
+Assume $X$ reducible.
+By Lemma \ref{lemma-no-rational-tail} the
+inequalities of Lemma \ref{lemma-global-generation}
+hold for every $Y \subset X$ as in the statement, except for
+$Y = X$. Analyzing the proof of Lemma \ref{lemma-global-generation}
we see that (in the reducible case) the only inequalities
+used for $Y = X$ are
+$$
+\deg(\omega_X^{\otimes m}) > -2 \chi(\mathcal{O}_X)
+\quad\text{and}\quad
+\deg(\omega_X^{\otimes m}) + \chi(\mathcal{O}_X) > \dim_k H^1(X, \mathcal{O}_X)
+$$
+Since these both hold under the assumption $g \geq 2$ and $m \geq 2$ we win.
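To see this, note that $\deg(\omega_X^{\otimes m}) = m(2g - 2)$,
$\chi(\mathcal{O}_X) = 1 - g$, and
$\dim_k H^1(X, \mathcal{O}_X) = g$, so the two inequalities read
$$
m(2g - 2) > 2g - 2
\quad\text{and}\quad
m(2g - 2) > 2g - 1
$$
and indeed $m(2g - 2) \geq 4g - 4 > 2g - 1$ when $m \geq 2$ and $g \geq 2$.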
+\end{proof}
+
+\begin{lemma}
+\label{lemma-contracting-rational-tails}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ of dimension $1$
+with $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal. Consider a sequence
+$$
+X = X_0 \to X_1 \to \ldots \to X_n = X'
+$$
+of contractions of rational tails (Example \ref{example-rational-tail})
+until none are left. Then
+\begin{enumerate}
+\item if the genus of $X$ is $0$, then $X'$ is an irreducible
+plane conic,
\item if the genus of $X$ is $1$, then $\omega_{X'} \cong \mathcal{O}_{X'}$,
+\item if the genus of $X$ is $> 1$, then
+$\omega_{X'}^{\otimes m}$ is globally generated for $m \geq 2$.
+\end{enumerate}
+If the genus of $X$ is $\geq 1$, then the morphism $X \to X'$
+is independent of choices and formation of this morphism
+commutes with base field extensions.
+\end{lemma}
+
+\begin{proof}
+We proceed by contracting rational tails until there are none
+left. Then we see that (1), (2), (3) hold by
+Lemma \ref{lemma-no-rational-tail-semiample-genus-geq-2}.
+
+\medskip\noindent
+Uniqueness. To see that $f : X \to X'$ is independent of the choices
+made, it suffices to show: any rational tail $C \subset X$ is
+mapped to a point by $X \to X'$; some details omitted.
+If not, then we can find a section
+$s \in \Gamma(X', \omega_{X'}^{\otimes 2})$ which does
+not vanish in the generic point of the irreducible component $f(C)$.
+Since in each of the contractions $X_i \to X_{i + 1}$
+we have a section $X_{i + 1} \to X_i$, there is a section
+$X' \to X$ of $f$. Then we have an exact sequence
+$$
+0 \to \omega_{X'} \to \omega_X \to \omega_X|_{X''} \to 0
+$$
+where $X'' \subset X$ is the union of the irreducible components
+contracted by $f$. See Lemma \ref{lemma-closed-subscheme-reduced-gorenstein}.
+Thus we get a map $\omega_{X'}^{\otimes 2} \to \omega_X^{\otimes 2}$
+and we can take the image of $s$ to get a section of
+$\omega_X^{\otimes 2}$ not vanishing in the generic point of $C$.
+This is a contradiction with the fact that the restriction of
+$\omega_X$ to a rational tail has negative degree
+(Lemma \ref{lemma-rational-tail-negative}).
+
+\medskip\noindent
+The statement on base field extensions follows from
+Lemma \ref{lemma-rational-tail-field-extension}. Some details omitted.
+\end{proof}
+
+
+
+
+
+\section{Contracting rational bridges}
+\label{section-contracting-rational-bridges}
+
+\noindent
+In this section we discuss the next simplest possible case (after the case
+discussed in Section \ref{section-contracting-rational-tails}) of contracting
+a scheme to improve positivity properties of its canonical sheaf.
+
+\begin{example}[Contracting a rational bridge]
+\label{example-rational-bridge}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal.
+A {\it rational bridge} will be an irreducible component $C \subset X$
+(viewed as an integral closed subscheme) with the following properties
+\begin{enumerate}
+\item $X' \not = \emptyset$ where $X' \subset X$ is the scheme theoretic closure
+of $X \setminus C$,
\item the scheme theoretic intersection $C \cap X'$
+has degree $2$ over $H^0(C, \mathcal{O}_C)$, and
+\item $C$ has genus zero.
+\end{enumerate}
+Set $k' = H^0(C, \mathcal{O}_C)$ and
+$k'' = H^0(C \cap X', \mathcal{O}_{C \cap X'})$.
+Then $k'$ is a field (Varieties, Lemma
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections})
+and $\dim_{k'}(k'') = 2$. Since there are at least
+two irreducible components of $X$ passing through
+each point of $C \cap X'$, we conclude these points are nodes
+and smooth points on both $C$ and $X'$
+(Lemma \ref{lemma-closed-subscheme-nodal-curve}).
+Hence $k'/k$ is a finite separable extension of fields
+and $k''/k'$ is either a degree $2$ separable extension of fields
+or $k'' = k' \times k'$ (Lemma \ref{lemma-nodal}). By
+Section \ref{section-pushouts} there exists a pushout
+$$
+\xymatrix{
+C \cap X' \ar[r] \ar[d] &
+X' \ar[d]^a \\
+\Spec(k') \ar[r] &
+Y
+}
+$$
+with many good properties (all of which we will use below without
further mention). Let $y \in Y$ be the image of $\Spec(k') \to Y$.
+Then
+$$
+\mathcal{O}_{Y, y}^\wedge \cong k'[[s, t]]/(st)
+\quad\text{or}\quad
+\mathcal{O}_{Y, y}^\wedge \cong
+\{f \in k''[[s]] : f(0) \in k'\}
+$$
+depending on whether $C \cap X'$ has $2$ or $1$ points.
+This follows from Lemma \ref{lemma-complete-local-ring-pushout}
+and the fact that $\mathcal{O}_{X', p} \cong \kappa(p)[[t]]$
+for $p \in C \cap X'$ by More on Algebra, Lemma
+\ref{more-algebra-lemma-power-series-over-residue-field}.
+Thus we see that $y \in Y$ is a node, see Lemmas \ref{lemma-nodal}
+and \ref{lemma-2-branches-delta-1} and in particular the discussion
+of Case II in the proof of (2) $\Rightarrow$ (1) in
+Lemma \ref{lemma-2-branches-delta-1}.
+Thus the singularities of $Y$ are at-worst-nodal.
+
+\medskip\noindent
+We can extend the commutative diagram above to a diagram
+$$
+\xymatrix{
+C \cap X' \ar[r] \ar[d] &
+X' \ar[d]^a \ar[r] & X \ar[ld]^c &
+C \ar[ld] \ar[l] \\
+\Spec(k') \ar[r] &
+Y &
+\Spec(k') \ar[l]
+}
+$$
+where the two lower horizontal arrows are the same. Namely, $X$ is the
+scheme theoretic union of $X'$ and $C$ (thus a pushout by
+Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-union})
+and the morphisms $C \to Y$ and $X' \to Y$ agree on $C \cap X'$.
+Finally, we claim that
+$$
+c_*\mathcal{O}_X = \mathcal{O}_Y
+\quad\text{and}\quad
+R^1c_*\mathcal{O}_X = 0
+$$
+To see this use the exact sequence
+$$
+0 \to \mathcal{O}_X \to
+\mathcal{O}_C \oplus \mathcal{O}_{X'} \to
+\mathcal{O}_{C \cap X'} \to 0
+$$
+of Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-union}.
+The long exact sequence of higher direct images is
+$$
+0 \to c_*\mathcal{O}_X \to c_*\mathcal{O}_C \oplus c_*\mathcal{O}_{X'} \to
+c_*\mathcal{O}_{C \cap X'} \to
+R^1c_*\mathcal{O}_X \to
+R^1c_*\mathcal{O}_C \oplus R^1c_*\mathcal{O}_{X'}
+$$
+Since $c|_{X'} = a$ is affine we see that
+$R^1c_*\mathcal{O}_{X'} = 0$.
Since $c|_C$ factors as $C \to \Spec(k') \to Y$ and since
+$C$ has genus zero, we find that $R^1c_*\mathcal{O}_C = 0$.
+Since $\mathcal{O}_{X'} \to \mathcal{O}_{C \cap X'}$ is
+surjective and since $c|_{X'}$ is affine, we see that
+$c_*\mathcal{O}_{X'} \to c_*\mathcal{O}_{C \cap X'}$
+is surjective. This proves that $R^1c_*\mathcal{O}_X = 0$.
+Finally, we have
+$\mathcal{O}_Y = c_*\mathcal{O}_X$ by
+the exact sequence and
+the description of the structure sheaf of the pushout
+in More on Morphisms, Proposition
+\ref{more-morphisms-proposition-pushout-along-closed-immersion-and-integral}.
+
+\medskip\noindent
+All of this means that $Y$ is also a proper scheme over $k$
+having dimension $1$ and $H^0(Y, \mathcal{O}_Y) = k$
+whose singularities are at-worst-nodal
+(Lemma \ref{lemma-closed-subscheme-nodal-curve})
+and that $Y$ has the same genus as $X$.
+We will say $c : X \to Y$ is the
+{\it contraction of a rational bridge}.
+\end{example}
+
+\begin{lemma}
+\label{lemma-rational-bridge-zero}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal. Let $C \subset X$ be a rational bridge
+(Example \ref{example-rational-bridge}). Then $\deg(\omega_X|_C) = 0$.
+\end{lemma}
+
+\begin{proof}
+Let $X' \subset X$ be as in the example. Then we have a short exact
+sequence
+$$
+0 \to \omega_C \to \omega_X|_C \to \mathcal{O}_{C \cap X'} \to 0
+$$
+See Lemmas \ref{lemma-closed-subscheme-reduced-gorenstein},
+\ref{lemma-facts-about-nodal-curves}, and
+\ref{lemma-closed-subscheme-nodal-curve}.
+With $k''/k'/k$ as in the example we see that
+$\deg(\omega_C) = -2[k' : k]$ as $C$ has genus $0$
+(Lemma \ref{lemma-rr})
+and $\deg(C \cap X') = [k'' : k] = 2[k' : k]$.
+Hence $\deg(\omega_X|_C) = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-rational-bridge-field-extension}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume the singularities of $X$ are
+at-worst-nodal. Let $C \subset X$ be a rational bridge
+(Example \ref{example-rational-bridge}).
+For any field extension $K/k$ the base change $C_K \subset X_K$
+is a finite disjoint union of rational bridges.
+\end{lemma}
+
+\begin{proof}
+Let $k''/k'/k$ be as in the example.
+Since $k'/k$ is finite separable, we see that
+$k' \otimes_k K = K'_1 \times \ldots \times K'_n$
+is a finite product of finite separable extensions $K'_i/K$.
+The corresponding product decomposition
+$k'' \otimes_k K = \prod K''_i$ gives degree $2$
+separable algebra extensions $K''_i/K'_i$.
+Set $C_i = C_{K'_i}$. Then $C_K = \coprod C_i$
+and therefore each $C_i$ has genus $0$ (viewed as a curve
+over $K'_i$), because $H^1(C_K, \mathcal{O}_{C_K}) = 0$
+by flat base change.
+Finally, we have $X'_K \cap C_i = \Spec(K''_i)$ has degree $2$
+over $K'_i$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-rational-bridge-canonical}
+Let $c : X \to Y$ be the contraction of a rational bridge
+(Example \ref{example-rational-bridge}).
+Then $c^*\omega_Y \cong \omega_X$.
+\end{lemma}
+
+\begin{proof}
+You can prove this by direct computation, but we prefer to use the
+characterization of $\omega_X$ as the coherent $\mathcal{O}_X$-module
+which represents the functor
+$\textit{Coh}(\mathcal{O}_X) \to \textit{Sets}$,
+$\mathcal{F} \mapsto \Hom_k(H^1(X, \mathcal{F}), k) =
+H^1(X, \mathcal{F})^\vee$, see
+Lemma \ref{lemma-duality-dim-1-CM} or
+Duality for Schemes, Lemma
+\ref{duality-lemma-dualizing-module-proper-over-A}.
+
+\medskip\noindent
+To be precise, denote $\mathcal{C}_Y$ the category whose objects are
+invertible $\mathcal{O}_Y$-modules and whose maps are
+$\mathcal{O}_Y$-module homomorphisms. Denote $\mathcal{C}_X$ the category
+whose objects are invertible $\mathcal{O}_X$-modules $\mathcal{L}$ with
+$\mathcal{L}|_C \cong \mathcal{O}_C$ and whose maps are
$\mathcal{O}_X$-module homomorphisms. We claim that the functor
+$$
+c^* : \mathcal{C}_Y \to \mathcal{C}_X
+$$
+is an equivalence of categories. Namely, by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-bijection-on-Pic}
+it is essentially surjective. Then the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula})
+shows $c_*c^*\mathcal{N} = \mathcal{N}$ and hence $c^*$
+is an equivalence with quasi-inverse given by $c_*$.
+
+\medskip\noindent
+We claim $\omega_X$ is an object of $\mathcal{C}_X$. Namely, we have a
+short exact sequence
+$$
+0 \to \omega_C \to \omega_X|_C \to \mathcal{O}_{C \cap X'} \to 0
+$$
+See Lemma \ref{lemma-closed-subscheme-reduced-gorenstein}.
+Taking degrees we find $\deg(\omega_X|_C) = 0$ (small detail omitted).
+Thus $\omega_X|_C$ is trivial by Lemma \ref{lemma-genus-zero-pic}
+and $\omega_X$ is an object of $\mathcal{C}_X$.
+
+\medskip\noindent
+Since $R^1c_*\mathcal{O}_X = 0$ the projection formula shows that
+$R^1c_*c^*\mathcal{N} = 0$ for $\mathcal{N} \in \Ob(\mathcal{C}_Y)$.
Therefore, by the Leray spectral sequence
(Cohomology, Lemma \ref{cohomology-lemma-apply-Leray}),
the diagram
+$$
+\xymatrix{
+\mathcal{C}_Y \ar[rr]_{c^*} \ar[dr]_{H^1(Y, -)^\vee} & &
+\mathcal{C}_X \ar[ld]^{H^1(X, -)^\vee} \\
+& \textit{Sets}
+}
+$$
+of categories and functors is commutative. Since
+$\omega_Y \in \Ob(\mathcal{C}_Y)$ represents the south-east arrow and
$\omega_X \in \Ob(\mathcal{C}_X)$ represents the south-west arrow
+we conclude by the Yoneda lemma
+(Categories, Lemma \ref{categories-lemma-yoneda}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-no-rational-bridge-ample-genus-geq-2}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ having dimension $1$
+and $H^0(X, \mathcal{O}_X) = k$. Assume
+\begin{enumerate}
+\item the singularities of $X$ are at-worst-nodal,
+\item $X$ does not have a rational tail
+(Example \ref{example-rational-tail}),
+\item $X$ does not have a rational bridge
+(Example \ref{example-rational-bridge}),
+\item the genus $g$ of $X$ is $\geq 2$.
+\end{enumerate}
+Then $\omega_X$ is ample.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that $\deg(\omega_X|_C) > 0$ for every irreducible
+component $C$ of $X$, see Varieties, Lemma
+\ref{varieties-lemma-ampleness-in-terms-of-degrees-components}.
+If $X = C$ is irreducible, this follows from $g \geq 2$
+and Lemma \ref{lemma-genus-gorenstein}.
+Otherwise, set $k' = H^0(C, \mathcal{O}_C)$. This
+is a field and a finite extension of $k$ and $[k' : k]$
+divides all numerical invariants below associated to $C$ and
+coherent sheaves on $C$, see
+Varieties, Lemma \ref{varieties-lemma-divisible}.
+Let $X' \subset X$ be the closure of $X \setminus C$ as in
+Lemma \ref{lemma-closed-subscheme-reduced-gorenstein}.
+We will use the results of this lemma and of
+Lemmas \ref{lemma-facts-about-nodal-curves} and
+\ref{lemma-closed-subscheme-nodal-curve} without further mention.
+Then we get a short exact sequence
+$$
+0 \to \omega_C \to \omega_X|_C \to \mathcal{O}_{C \cap X'} \to 0
+$$
+See Lemma \ref{lemma-closed-subscheme-reduced-gorenstein}.
+We conclude that
+$$
+\deg(\omega_X|_C) = \deg(C \cap X') + \deg(\omega_C) =
+\deg(C \cap X') - 2\chi(C, \mathcal{O}_C)
+$$
+Hence, if the lemma is false, then
+$$
+2[k' : k] \geq \deg(C \cap X') + 2\dim_k H^1(C, \mathcal{O}_C)
+$$
Since $C \cap X'$ is nonempty and by the divisibility mentioned above,
+this can happen only if either
+\begin{enumerate}
+\item[(a)] $C \cap X'$ is a single $k'$-rational point of $C$ and
$H^1(C, \mathcal{O}_C) = 0$, or
+\item[(b)] $C \cap X'$ has degree $2$ over $k'$ and
+$H^1(C, \mathcal{O}_C) = 0$.
+\end{enumerate}
+The first possibility means $C$ is a rational tail
+and the second that $C$ is a rational bridge.
+Since both are excluded the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-contracting-rational-bridges}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ of dimension $1$
+with $H^0(X, \mathcal{O}_X) = k$ having genus $g \geq 2$.
+Assume the singularities of $X$ are at-worst-nodal and that
+$X$ has no rational tails. Consider a sequence
+$$
+X = X_0 \to X_1 \to \ldots \to X_n = X'
+$$
+of contractions of rational bridges
+(Example \ref{example-rational-bridge}) until none are left.
Then $\omega_{X'}$ is ample.
+The morphism $X \to X'$ is independent of choices and
+formation of this morphism commutes with base field extensions.
+\end{lemma}
+
+\begin{proof}
+We proceed by contracting rational bridges until there are none
+left. Then $\omega_{X'}$ is ample by
+Lemma \ref{lemma-no-rational-bridge-ample-genus-geq-2}.
+
+\medskip\noindent
+Denote $f : X \to X'$ the composition. By
+Lemma \ref{lemma-rational-bridge-canonical} and induction we see that
+$f^*\omega_{X'} = \omega_X$.
+We have $f_*\mathcal{O}_X = \mathcal{O}_{X'}$
+because this is true for contraction of a rational bridge.
+Thus the projection formula says that
+$f_*f^*\mathcal{L} = \mathcal{L}$ for all invertible
+$\mathcal{O}_{X'}$-modules $\mathcal{L}$.
+Hence
+$$
+\Gamma(X', \omega_{X'}^{\otimes m}) = \Gamma(X, \omega_X^{\otimes m})
+$$
+for all $m$. Since $X'$ is the Proj of the direct sum of these
+by Morphisms, Lemma \ref{morphisms-lemma-proper-ample-is-proj}
+we conclude that the morphism $X \to X'$ is completely canonical.
+
+\medskip\noindent
+Let $K/k$ be an extension of fields, then
+$\omega_{X_K}$ is the pullback of $\omega_X$
+(Lemma \ref{lemma-sanity-check-duality}) and we have
+$\Gamma(X, \omega_X^{\otimes m}) \otimes_k K$
+is equal to
+$\Gamma(X_K, \omega_{X_K}^{\otimes m})$
+by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}.
+Thus formation of $f : X \to X'$ commutes with base change by
+$K/k$ by the arguments given above. Some details omitted.
+\end{proof}
+
+
+
+
+
+
+\section{Contracting to a stable curve}
+\label{section-contracting-to-stable}
+
+\noindent
+In this section we combine the contraction morphisms found in
+Sections \ref{section-contracting-rational-tails} and
+\ref{section-contracting-rational-bridges}.
+Namely, suppose that $k$ is a field and let $X$ be a proper scheme over $k$
+of dimension $1$ with $H^0(X, \mathcal{O}_X) = k$ having genus $g \geq 2$.
+Assume the singularities of $X$ are at-worst-nodal. Composing
+the morphism of Lemma \ref{lemma-contracting-rational-tails}
+with the morphism of Lemma \ref{lemma-contracting-rational-bridges}
+we get a morphism
+$$
+c : X \longrightarrow Y
+$$
such that $Y$ is also a proper scheme over $k$ of dimension $1$
whose singularities are at-worst-nodal, with $k = H^0(Y, \mathcal{O}_Y)$
+and having genus $g$, such that
+$\mathcal{O}_Y = c_*\mathcal{O}_X$ and $R^1c_*\mathcal{O}_X = 0$,
+and such that $\omega_Y$ is ample on $Y$.
+Lemma \ref{lemma-characterize-contraction-to-stable}
+shows these conditions in fact characterize
+this morphism.
+
+\begin{lemma}
+\label{lemma-contract-gorenstein-canonical}
Let $k$ be a field. Let $c : X \to Y$ be a morphism of proper schemes
over $k$. Assume
+\begin{enumerate}
+\item $\mathcal{O}_Y = c_*\mathcal{O}_X$ and $R^1c_*\mathcal{O}_X = 0$,
+\item $X$ and $Y$ are reduced, Gorenstein, and have dimension $1$,
\item there exists an $m \in \mathbf{Z}$ with
+$H^1(X, \omega_X^{\otimes m}) = 0$ and $\omega_X^{\otimes m}$
+generated by global sections.
+\end{enumerate}
+Then $c^*\omega_Y \cong \omega_X$.
+\end{lemma}
+
+\begin{proof}
+The fibres of $c$ are geometrically connected by
+More on Morphisms, Theorem
+\ref{more-morphisms-theorem-stein-factorization-Noetherian}.
+In particular $c$ is surjective.
+There are finitely many closed points $y = y_1, \ldots, y_r$ of $Y$ where
+$X_y$ has dimension $1$ and over $Y \setminus \{y_1, \ldots, y_r\}$
+the morphism $c$ is an isomorphism.
+Some details omitted; hint: outside of $\{y_1, \ldots, y_r\}$
+the morphism $c$ is finite, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-characterize-finite}.
+
+\medskip\noindent
+Let us carefully construct a map $b : c^*\omega_Y \to \omega_X$.
+Denote $f : X \to \Spec(k)$ and $g : Y \to \Spec(k)$ the structure
+morphisms. We have $f^!k = \omega_X[1]$ and $g^!k = \omega_Y[1]$, see
+Lemma \ref{lemma-duality-dim-1} and its proof. Then
+$f^! = c^! \circ g^!$ and hence $c^!\omega_Y = \omega_X$.
+Thus there is a functorial isomorphism
+$$
+\Hom_{D(\mathcal{O}_X)}(\mathcal{F}, \omega_X)
+\longrightarrow
+\Hom_{D(\mathcal{O}_Y)}(Rc_*\mathcal{F}, \omega_Y)
+$$
+for coherent $\mathcal{O}_X$-modules $\mathcal{F}$ by definition
+of $c^!$\footnote{As the restriction of the right adjoint of
+Duality for Schemes, Lemma \ref{duality-lemma-twisted-inverse-image} to
+$D^+_\QCoh(\mathcal{O}_Y)$.}.
+This isomorphism is induced by a trace map $t : Rc_*\omega_X \to \omega_Y$
+(the counit of the adjunction). By the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula})
+the canonical map $a : \omega_Y \to Rc_*c^*\omega_Y$ is an isomorphism.
+Combining the above we see there is a canonical map
+$b : c^*\omega_Y \to \omega_X$ such that
+$$
+t \circ Rc_*(b) = a^{-1}
+$$
+In particular, if we restrict $b$ to $c^{-1}(Y \setminus \{y_1, \ldots, y_r\})$
+then it is an isomorphism (because it is a map between invertible modules
+whose composition with another gives the isomorphism $a^{-1}$).
+
+\medskip\noindent
Choose $m \in \mathbf{Z}$ as in (3) and consider the map
+$$
+b^{\otimes m} :
+\Gamma(Y, \omega_Y^{\otimes m})
+\longrightarrow
+\Gamma(X, \omega_X^{\otimes m})
+$$
+This map is injective because $Y$ is reduced and by the last
+property of $b$ mentioned in its construction.
+By Riemann-Roch (Lemma \ref{lemma-rr}) we have
+$\chi(X, \omega_X^{\otimes m}) =\chi(Y, \omega_Y^{\otimes m})$.
+Thus
+$$
\dim_k \Gamma(Y, \omega_Y^{\otimes m}) \geq
\chi(Y, \omega_Y^{\otimes m}) = \chi(X, \omega_X^{\otimes m}) =
\dim_k \Gamma(X, \omega_X^{\otimes m})
$$
where the last equality holds as $H^1(X, \omega_X^{\otimes m}) = 0$,
and we conclude $b^{\otimes m}$ induces an isomorphism on global
+sections. So
+$b^{\otimes m} : c^*\omega_Y^{\otimes m} \to \omega_X^{\otimes m}$
+is surjective as generators of $\omega_X^{\otimes m}$ are in the image.
+Hence $b^{\otimes m}$ is an isomorphism. Thus $b$ is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-contraction-to-stable}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ of dimension
+$1$ with $H^0(X, \mathcal{O}_X) = k$ having genus $g \geq 2$.
+Assume the singularities of $X$ are at-worst-nodal.
+There is a unique morphism (up to unique isomorphism)
+$$
+c : X \longrightarrow Y
+$$
+of schemes over $k$ having the following properties:
+\begin{enumerate}
+\item $Y$ is proper over $k$, $\dim(Y) = 1$, the singularities of $Y$
+are at-worst-nodal,
+\item $\mathcal{O}_Y = c_*\mathcal{O}_X$ and $R^1c_*\mathcal{O}_X = 0$, and
+\item $\omega_Y$ is ample on $Y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Existence: A morphism with all the properties listed exists by
+combining Lemmas \ref{lemma-contracting-rational-tails} and
+\ref{lemma-contracting-rational-bridges} as discussed in the
+introduction to this section.
+Moreover, we see that it can be written as a composition
+$$
X \to X_1 \to X_2 \to \ldots \to X_n \to X_{n + 1} \to \ldots \to X_{n + n'}
+$$
+where the first $n$ morphisms are contractions of rational tails
+and the last $n'$ morphisms are contractions of rational bridges.
+Note that property (2) holds for each contraction of a rational
+tail (Example \ref{example-rational-tail}) and contraction of a
+rational bridge (Example \ref{example-rational-bridge}).
+It is easy to see that this property is inherited by compositions of morphisms.
+
+\medskip\noindent
+Uniqueness: Let $c : X \to Y$ be a morphism satisfying conditions
+(1), (2), and (3). We will show that there is a unique isomorphism
+$X_{n + n'} \to Y$ compatible with the morphisms $X \to X_{n + n'}$ and $c$.
+
+\medskip\noindent
+Before we start the proof we make some observations about $c$.
+We first observe that the fibres of $c$ are geometrically connected by
+More on Morphisms, Theorem
+\ref{more-morphisms-theorem-stein-factorization-Noetherian}.
+In particular $c$ is surjective.
+For a closed point $y \in Y$ the fibre $X_y$ satisfies
+$$
+H^1(X_y, \mathcal{O}_{X_y}) = 0
+\quad\text{and}\quad
+H^0(X_y, \mathcal{O}_{X_y}) = \kappa(y)
+$$
+The first equality by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-check-h1-fibre-zero}
+and the second by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-h1-fibre-zero-check-h0-kappa}.
Thus either $X_y = x$, where $x$ is the unique point of $X$ mapping to
$y$ and $x$ has the same residue field as $y$, or $X_y$ is a $1$-dimensional
+proper scheme over $\kappa(y)$. Observe that in the second case
+$X_y$ is Cohen-Macaulay (Lemma \ref{lemma-automatic}).
+However, since $X$ is reduced, we see that $X_y$ must be reduced
+at all of its generic points (details omitted), and hence $X_y$
+is reduced by Properties, Lemma \ref{properties-lemma-criterion-reduced}.
+It follows that the singularities of $X_y$ are at-worst-nodal
+(Lemma \ref{lemma-closed-subscheme-nodal-curve}).
+Note that the genus of $X_y$ is zero (see above).
+Finally, there are only a finite number of points $y$ where
+the fibre $X_y$ has dimension $1$, say
+$\{y_1, \ldots, y_r\}$, and $c^{-1}(Y \setminus \{y_1, \ldots, y_r\})$
+maps isomorphically to $Y \setminus \{y_1, \ldots, y_r\}$ by $c$.
+Some details omitted; hint: outside of $\{y_1, \ldots, y_r\}$
+the morphism $c$ is finite, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-characterize-finite}.
+
+\medskip\noindent
+Let $C \subset X$ be a rational tail.
+We claim that $c$ maps $C$ to a point. Assume that this
+is not the case to get a contradiction. Then the image
+of $C$ is an irreducible component $D \subset Y$.
+Recall that $H^0(C, \mathcal{O}_C) = k'$ is a finite separable
+extension of $k$ and that $C$ has a $k'$-rational point $x$
+which is also the unique intersection of $C$ with the ``rest'' of $X$.
+We conclude from the general discussion above that
+$C \setminus \{x\} \subset c^{-1}(Y \setminus \{y_1, \ldots, y_r\})$
+maps isomorphically to an open $V$ of $D$. Let $y = c(x) \in D$.
+Observe that $y$ is the only point of $D$ meeting the
+``rest'' of $Y$. If $y \not \in \{y_1, \ldots, y_r\}$, then $C \cong D$
+and it is clear that $D$ is a rational tail of $Y$ which is
+a contradiction with the ampleness of $\omega_Y$
+(Lemma \ref{lemma-rational-tail-negative}).
+Thus $y \in \{y_1, \ldots, y_r\}$ and $\dim(X_y) = 1$.
+Then $x \in X_y \cap C$ and $x$ is a smooth point of $X_y$ and $C$
+(Lemma \ref{lemma-closed-subscheme-nodal-curve}).
+If $y \in D$ is a singular point of $D$, then $y$ is a node
+and then $Y = D$ (because there cannot be another component of
+$Y$ passing through $y$ by Lemma \ref{lemma-closed-subscheme-nodal-curve}).
+Then $X = X_y \cup C$ which means $g = 0$ because it is
+equal to the genus of $X_y$ by the discussion in
+Example \ref{example-rational-tail}; a contradiction.
+If $y \in D$ is a smooth point of $D$, then
+$C \to D$ is an isomorphism (because the nonsingular projective
+model is unique and $C$ and $D$ are birational, see
+Section \ref{section-curves-function-fields}). Then $D$ is
+a rational tail of $Y$ which is a contradiction with
+ampleness of $\omega_Y$.
+
+\medskip\noindent
+Assume $n \geq 1$. If $C \subset X$ is the rational tail contracted
+by $X \to X_1$, then we see that $C$ is mapped to a point of $Y$ by
+the previous paragraph. Hence $c : X \to Y$ factors through $X \to X_1$
+(because $X$ is the pushout of $C$ and $X_1$, see discussion in
+Example \ref{example-rational-tail}).
+After replacing $X$ by $X_1$ we have decreased
+$n$. By induction we may assume $n = 0$, i.e., $X$ does not have
+a rational tail.
+
+\medskip\noindent
+Assume $n = 0$, i.e., $X$ does not have any rational tails.
+Then $\omega_X^{\otimes 2}$ and $\omega_X^{\otimes 3}$ are
+globally generated by Lemma \ref{lemma-no-rational-tail-semiample-genus-geq-2}.
+It follows that $H^1(X, \omega_X^{\otimes 3}) = 0$ by
+Lemma \ref{lemma-vanishing-twist}.
+By Lemma \ref{lemma-contract-gorenstein-canonical} applied with $m = 3$
+we find that $c^*\omega_Y \cong \omega_X$.
+We also have that $\omega_X = (X \to X_{n'})^*\omega_{X_{n'}}$ by
+Lemma \ref{lemma-rational-bridge-canonical} and induction.
+Applying the projection formula for both $c$ and
+$X \to X_{n'}$ we conclude that
+$$
+\Gamma(X_{n'}, \omega_{X_{n'}}^{\otimes m}) =
+\Gamma(X, \omega_X^{\otimes m}) =
+\Gamma(Y, \omega_Y^{\otimes m})
+$$
+for all $m$.
+Since $X_{n'}$ and $Y$ are the Proj of the direct sum of these
+by Morphisms, Lemma \ref{morphisms-lemma-proper-ample-is-proj}
+we conclude that there is a canonical isomorphism $X_{n'} = Y$
+as desired. We omit the verification that this is the unique
+isomorphism making the diagram commute.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tricanonical}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ of dimension $1$ with
+$H^0(X, \mathcal{O}_X) = k$ having genus $g \geq 2$. Assume the singularities
+of $X$ are at-worst-nodal and $\omega_X$ is ample. Then
+$\omega_X^{\otimes 3}$ is very ample and $H^1(X, \omega_X^{\otimes 3}) = 0$.
+\end{lemma}
+
+\begin{proof}
+Combining Varieties, Lemma
+\ref{varieties-lemma-ampleness-in-terms-of-degrees-components} and
+Lemmas \ref{lemma-rational-tail-negative} and \ref{lemma-rational-bridge-zero}
+we see that $X$ contains no rational tails or bridges.
+Then we see that $\omega_X^{\otimes 3}$ is globally generated
+by Lemma \ref{lemma-contracting-rational-tails}.
+Choose a $k$-basis $s_0, \ldots, s_n$ of
+$H^0(X, \omega_X^{\otimes 3})$. We get a morphism
+$$
+\varphi_{\omega_X^{\otimes 3}, (s_0, \ldots, s_n)} :
+X \longrightarrow \mathbf{P}^n_k
+$$
+See Constructions, Section \ref{constructions-section-projective-space}.
+The lemma asserts that this morphism is a closed immersion.
+To check this we may replace $k$ by its algebraic closure, see
+Descent, Lemma \ref{descent-lemma-descending-property-closed-immersion}.
+Thus we may assume $k$ is algebraically closed.
+
+\medskip\noindent
+Assume $k$ is algebraically closed.
+We will use Varieties, Lemma
+\ref{varieties-lemma-variant-separate-points-tangent-vectors}
+to prove the lemma.
Let $Z \subset X$ be a closed subscheme of degree $2$ over $k$
with ideal sheaf $\mathcal{I} \subset \mathcal{O}_X$.
Set $\mathcal{L} = \omega_X^{\otimes 3}$.
+We have to show that
+$$
+H^0(X, \mathcal{L}) \to H^0(Z, \mathcal{L}|_Z)
+$$
+is surjective. Thus it suffices to show that
+$H^1(X, \mathcal{I}\mathcal{L}) = 0$.
+To do this we will use Lemma \ref{lemma-vanishing-on-gorenstein}.
+Thus it suffices to show that
+$$
+3\deg(\omega_X|_Y) > -2\chi(Y, \mathcal{O}_Y) + \deg(Z \cap Y)
+$$
+for every reduced connected closed subscheme $Y \subset X$.
+Since $k$ is algebraically closed and $Y$ connected and reduced
+we have $H^0(Y, \mathcal{O}_Y) = k$ (Varieties, Lemma
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections}).
+Hence $\chi(Y, \mathcal{O}_Y) = 1 - \dim H^1(Y, \mathcal{O}_Y)$.
+Thus we have to show
+$$
+3\deg(\omega_X|_Y) > -2 + 2\dim H^1(Y, \mathcal{O}_Y) + \deg(Z \cap Y)
+$$
+which is true by Lemma \ref{lemma-no-rational-tail}
+except possibly if $Y = X$ or if $\deg(\omega_X|_Y) = 0$.
+Since $\omega_X$ is ample the second possibility does not
+occur (see first lemma cited in this proof). Finally, if
+$Y = X$ we can use Riemann-Roch (Lemma \ref{lemma-rr})
and the fact that $g \geq 2$ to see that the inequality holds.
+The same argument with $Z = \emptyset$ shows that
+$H^1(X, \omega_X^{\otimes 3}) = 0$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Vector fields}
+\label{section-vector-fields}
+
+\noindent
+In this section we study the space of vector fields on a curve.
+Vector fields correspond to infinitesimal automorphisms, see
+More on Morphisms, Section \ref{more-morphisms-section-action-by-derivations},
+hence play an important role in moduli theory.
+
+\medskip\noindent
+Let $k$ be an algebraically closed field.
+Let $X$ be a finite type scheme over $k$.
+Let $x \in X$ be a closed point.
+We will say an element $D \in \text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)$
+{\it fixes $x$} if $D(\mathcal{I}) \subset \mathcal{I}$ where
+$\mathcal{I} \subset \mathcal{O}_X$ is the ideal sheaf of $x$.
+
+\begin{lemma}
+\label{lemma-smooth-vector-fields}
+Let $k$ be an algebraically closed field.
+Let $X$ be a smooth, proper, connected curve over $k$.
+Let $g$ be the genus of $X$.
+\begin{enumerate}
+\item If $g \geq 2$, then $\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)$
+is zero,
+\item if $g = 1$ and $D \in \text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)$
+is nonzero, then $D$ does not fix any closed point of $X$, and
+\item if $g = 0$ and $D \in \text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)$
+is nonzero, then $D$ fixes at most $2$ closed points of $X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall that we have a universal $k$-derivation
+$d : \mathcal{O}_X \to \Omega_{X/k}$ and hence $D = \theta \circ d$
+for some $\mathcal{O}_X$-linear map $\theta : \Omega_{X/k} \to \mathcal{O}_X$.
+Recall that $\Omega_{X/k} \cong \omega_X$, see
+Lemma \ref{lemma-duality-dim-1}.
+By Riemann-Roch we have $\deg(\omega_X) = 2g - 2$
+(Lemma \ref{lemma-rr}).
+Thus we see that $\theta$ is forced to be zero
+if $g > 1$ by Varieties, Lemma
+\ref{varieties-lemma-check-invertible-sheaf-trivial}.
+This proves part (1).
+If $g = 1$, then a nonzero $\theta$ does not vanish anywhere and if
+$g = 0$, then a nonzero $\theta$ vanishes in a divisor of degree $2$.
+Thus parts (2) and (3) follow if we show that
+vanishing of $\theta$ at a closed point $x \in X$ is
+equivalent to the statement that $D$ fixes $x$ (as defined above).
+Let $z \in \mathcal{O}_{X, x}$ be a uniformizer.
+Then $dz$ is a basis element for $\Omega_{X, x}$, see
+Lemma \ref{lemma-uniformizer-works}.
+Since $D(z) = \theta(dz)$ we conclude.
+\end{proof}
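\medskip\noindent
To illustrate part (3) in the simplest case (a standard computation, included
only as an illustration and not used in what follows): take
$X = \mathbf{P}^1_k$, so that $g = 0$ and $\deg(\omega_X) = -2$.
On the standard affine chart with coordinate $z$ a global section of the
tangent sheaf has the form
$$
\theta = (a + bz + cz^2)\,\frac{d}{dz}, \qquad a, b, c \in k.
$$
A nonzero $\theta$ of this form vanishes along a divisor of degree $2$ on
$\mathbf{P}^1_k$ (checking at infinity: in the coordinate $w = 1/z$ we have
$\theta = -(c + bw + aw^2)\,\frac{d}{dw}$), hence at no more than $2$ distinct
closed points. By the final argument of the proof these are exactly the closed
points fixed by the corresponding derivation $D = \theta \circ d$.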
+
+\begin{lemma}
+\label{lemma-nodal-vector-fields}
+Let $k$ be an algebraically closed field.
+Let $X$ be an at-worst-nodal, proper, connected
+$1$-dimensional scheme over $k$. Let $\nu : X^\nu \to X$ be the normalization.
+Let $S \subset X^\nu$ be the set of points where $\nu$ is not an
+isomorphism. Then
+$$
+\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X) =
+\{D' \in \text{Der}_k(\mathcal{O}_{X^\nu}, \mathcal{O}_{X^\nu}) \mid
+D' \text{ fixes every }x^\nu \in S\}
+$$
+\end{lemma}
+
+\begin{proof}
+Let $x \in X$ be a node. Let $x', x'' \in X^\nu$ be the inverse images
of $x$. (Every node is a split node since $k$ is algebraically closed, see
+Definition \ref{definition-split-node} and
+Lemma \ref{lemma-split-node}.)
+Let $u \in \mathcal{O}_{X^\nu, x'}$ and $v \in \mathcal{O}_{X^\nu, x''}$
+be uniformizers. Observe that we have an exact sequence
+$$
+0 \to \mathcal{O}_{X, x} \to
+\mathcal{O}_{X^\nu, x'} \times \mathcal{O}_{X^\nu, x''} \to k \to 0
+$$
+This follows from Lemma \ref{lemma-multicross}.
+Thus we can view $u$ and $v$ as elements of $\mathcal{O}_{X, x}$
+with $uv = 0$.
+
+\medskip\noindent
+Let $D \in \text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)$. Then
$0 = D(uv) = vD(u) + uD(v)$. Since $(u)$ is the annihilator of
+$v$ in $\mathcal{O}_{X, x}$ and vice versa, we see that $D(u) \in (u)$ and
+$D(v) \in (v)$. As $\mathcal{O}_{X^\nu, x'} = k + (u)$
+we conclude that we can extend $D$ to $\mathcal{O}_{X^\nu, x'}$
+and moreover the extension fixes $x'$. This produces a $D'$
+in the right hand side of the equality. Conversely, given a
+$D'$ fixing $x'$ and $x''$ we find that $D'$ preserves
+the subring $\mathcal{O}_{X, x} \subset
+\mathcal{O}_{X^\nu, x'} \times \mathcal{O}_{X^\nu, x''}$
+and this is how we go from right to left in the equality.
+\end{proof}
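\medskip\noindent
Here is a minimal example of the correspondence in the proof (included only
as an illustration): with notation as above, view $u$ and $v$ as elements of
$\mathcal{O}_{X, x}$ with $uv = 0$ and consider the derivation
$$
D = u\,\frac{\partial}{\partial u} - v\,\frac{\partial}{\partial v},
\qquad\text{i.e.,}\quad D(u) = u, \quad D(v) = -v.
$$
Then $D(uv) = vD(u) + uD(v) = uv - uv = 0$ and $D(u) \in (u)$,
$D(v) \in (v)$, so $D$ is defined on $\mathcal{O}_{X, x}$ and corresponds
to a derivation $D'$ on the normalization which fixes both points
$x', x''$ lying over the node $x$.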
+
+\begin{lemma}
+\label{lemma-stable-vector-fields}
+Let $k$ be an algebraically closed field.
+Let $X$ be an at-worst-nodal, proper, connected
+$1$-dimensional scheme over $k$. Assume the genus of $X$ is at least
+$2$ and that $X$ has no rational tails
+or bridges. Then
+$\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X) = 0$.
+\end{lemma}
+
+\begin{proof}
+Let $D \in \text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)$.
+Let $X^\nu$ be the normalization of $X$.
+Let $D' \in \text{Der}_k(\mathcal{O}_{X^\nu}, \mathcal{O}_{X^\nu})$
+be the element corresponding to $D$ via Lemma \ref{lemma-nodal-vector-fields}.
+Let $C \subset X^\nu$ be an irreducible component.
+If the genus of $C$ is $> 1$, then $D'|_{\mathcal{O}_C} = 0$
+by Lemma \ref{lemma-smooth-vector-fields} part (1).
+If the genus of $C$ is $1$, then there is at least one closed point $c$ of $C$
+which maps to a node on $X$ (since otherwise $X \cong C$ would have genus $1$).
+By the correspondence this means that
+$D'|_{\mathcal{O}_C}$ fixes $c$ hence is zero by
+Lemma \ref{lemma-smooth-vector-fields} part (2).
+Finally, if the genus of $C$ is zero, then there are at least
+$3$ pairwise distinct closed points $c_1, c_2, c_3 \in C$
+mapping to nodes in $X$, since otherwise either
+$X$ is $C$ with two points glued (two points of $C$ mapping
+to the same node), or
+$C$ is a rational bridge (two points mapping to different nodes of $X$), or
+$C$ is a rational tail (one point mapping to a node of $X$).
These three possibilities are not permitted since $X$ has genus
$\geq 2$ and has no rational bridges or rational tails.
+Whence $D'|_{\mathcal{O}_C}$ fixes $c_1, c_2, c_3$ hence is zero by
+Lemma \ref{lemma-smooth-vector-fields} part (3).
+\end{proof}
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/decent-spaces.tex b/books/stacks/decent-spaces.tex
new file mode 100644
index 0000000000000000000000000000000000000000..be77afc9cb7daf651855490c37ce2d6d29a374ea
--- /dev/null
+++ b/books/stacks/decent-spaces.tex
@@ -0,0 +1,5521 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Decent Algebraic Spaces}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we study ``local'' properties of general
+algebraic spaces, i.e., those algebraic spaces which aren't quasi-separated.
+Quasi-separated algebraic spaces are studied in \cite{Kn}.
+It turns out that essentially new phenomena happen, especially
+regarding points and specializations of points, on more
+general algebraic spaces. On the other hand, for most basic results
+on algebraic spaces, one needn't worry about these phenomena, which is why
+we have decided to have this material in a separate chapter following
+the standard development of the theory.
+
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+The standing assumption is that all schemes are contained in
+a big fppf site $\Sch_{fppf}$. And all rings $A$ considered
have the property that $\Spec(A)$ is (isomorphic to) an
+object of this big site.
+
+\medskip\noindent
+Let $S$ be a scheme and let $X$ be an algebraic space over $S$.
+In this chapter and the following we will write $X \times_S X$
+for the product of $X$ with itself (in the category of algebraic
+spaces over $S$), instead of $X \times X$.
+
+
+
+\section{Universally bounded fibres}
+\label{section-universally-bounded}
+
+\noindent
+We briefly discuss what it means for a morphism from a scheme to an
+algebraic space to have universally bounded fibres. Please refer to
+Morphisms, Section \ref{morphisms-section-universally-bounded}
+for similar definitions and results on morphisms of schemes.
+
+\begin{definition}
+\label{definition-universally-bounded}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$, and
+let $U$ be a scheme over $S$. Let $f : U \to X$ be a morphism over $S$.
+We say the {\it fibres of $f$ are universally bounded}\footnote{This is
+probably nonstandard notation.}
+if there exists an integer $n$ such that for all fields
+$k$ and all morphisms $\Spec(k) \to X$ the fibre
+product $\Spec(k) \times_X U$ is a finite scheme over $k$
+whose degree over $k$ is $\leq n$.
+\end{definition}
+
+\noindent
+This definition makes sense because the fibre product
$\Spec(k) \times_X U$ is a scheme. Moreover, if $X$ is a scheme
+we recover the notion of
+Morphisms, Definition \ref{morphisms-definition-universally-bounded}
+by virtue of
+Morphisms, Lemma \ref{morphisms-lemma-characterize-universally-bounded}.
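\medskip\noindent
For example (a sketch using standard facts about finite locally free
morphisms, not results of this section): if $f : U \to X$ is finite locally
free of degree $n$, then for every field $k$ and every morphism
$\Spec(k) \to X$ the base change
$$
\Spec(k) \times_X U \longrightarrow \Spec(k)
$$
is finite locally free of degree $n$, so $\Spec(k) \times_X U$ is a finite
scheme of degree $n$ over $k$. Hence the fibres of $f$ are universally
bounded with bound $n$.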
+
+\begin{lemma}
+\label{lemma-composition-universally-bounded}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Let $V \to U$ be a morphism of schemes over $S$, and let
+$U \to X$ be a morphism from $U$ to $X$. If the fibres of
+$V \to U$ and $U \to X$ are universally bounded, then so
+are the fibres of $V \to X$.
+\end{lemma}
+
+\begin{proof}
+Let $n$ be an integer which works for $V \to U$, and let $m$ be
+an integer which works for $U \to X$ in
+Definition \ref{definition-universally-bounded}.
+Let $\Spec(k) \to X$ be a morphism, where $k$ is a field.
+Consider the morphisms
+$$
+\Spec(k) \times_X V
+\longrightarrow
+\Spec(k) \times_X U
+\longrightarrow
+\Spec(k).
+$$
+By assumption the scheme $\Spec(k) \times_X U$
+is finite of degree at most $m$ over $k$, and $n$ is an integer which
+bounds the degree of the fibres of the first morphism. Hence by
+Morphisms, Lemma \ref{morphisms-lemma-composition-universally-bounded}
+we conclude that $\Spec(k) \times_X V$ is finite over $k$
+of degree at most $nm$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-universally-bounded}
+Let $S$ be a scheme.
+Let $Y \to X$ be a representable morphism of algebraic spaces over $S$.
+Let $U \to X$ be a morphism from a scheme to $X$.
+If the fibres of $U \to X$ are universally bounded, then the fibres
+of $U \times_X Y \to Y$ are universally bounded.
+\end{lemma}
+
+\begin{proof}
+This is clear from the definition, and properties of fibre products.
+(Note that $U \times_X Y$ is a scheme
+as we assumed $Y \to X$ representable, so the definition applies.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-universally-bounded}
+Let $S$ be a scheme. Let $g : Y \to X$ be a representable morphism of
+algebraic spaces over $S$. Let $f : U \to X$ be a morphism from a scheme
+towards $X$. Let $f' : U \times_X Y \to Y$ be the base change of $f$.
+If
+$$
+\Im(|f| : |U| \to |X|) \subset \Im(|g| : |Y| \to |X|)
+$$
+and $f'$ has universally bounded fibres, then $f$ has universally
+bounded fibres.
+\end{lemma}
+
+\begin{proof}
+Let $n \geq 0$ be an integer bounding the degrees of the fibre
+products $\Spec(k) \times_Y (U \times_X Y)$ as in
+Definition \ref{definition-universally-bounded} for the morphism $f'$.
+We claim that $n$ works for $f$ also. Namely, suppose that
+$x : \Spec(k) \to X$ is a morphism from the spectrum of
+a field. Then either $\Spec(k) \times_X U$ is empty (and there
+is nothing to prove), or $x$ is in the image of $|f|$. By
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-cartesian}
+and the assumption of the lemma we see
+that this means there exists a field extension $k'/k$ and a
+commutative diagram
+$$
+\xymatrix{
+\Spec(k') \ar[r] \ar[d] & Y \ar[d] \\
+\Spec(k) \ar[r] & X
+}
+$$
+Hence we see that
+$$
+\Spec(k') \times_Y (U \times_X Y) =
+\Spec(k') \times_{\Spec(k)} (\Spec(k) \times_X U)
+$$
+Since the scheme $\Spec(k') \times_Y (U \times_X Y)$ is assumed finite
+of degree $\leq n$ over $k'$ it follows that also $\Spec(k) \times_X U$
+is finite of degree $\leq n$ over $k$ as desired. (Some details omitted.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-bounded-permanence}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Consider a commutative diagram
+$$
+\xymatrix{
+U \ar[rd]_g \ar[rr]_f & & V \ar[ld]^h \\
+& X &
+}
+$$
+where $U$ and $V$ are schemes. If $g$ has universally bounded fibres,
+and $f$ is surjective and flat, then also $h$ has universally bounded fibres.
+\end{lemma}
+
+\begin{proof}
+Assume $g$ has universally bounded fibres, and $f$ is surjective and flat.
+Say $n \geq 0$ is an integer which bounds the degrees of the schemes
+$\Spec(k) \times_X U$ as in
+Definition \ref{definition-universally-bounded}.
+We claim $n$ also works for $h$.
+Let $\Spec(k) \to X$ be a morphism from the spectrum of a
+field to $X$. Consider the morphism of schemes
+$$
\Spec(k) \times_X U \longrightarrow \Spec(k) \times_X V
+$$
+It is flat and surjective. By assumption the scheme
+on the left is finite of degree $\leq n$ over $\Spec(k)$.
+It follows from
+Morphisms, Lemma \ref{morphisms-lemma-universally-bounded-permanence}
+that the degree of the scheme on the right is also bounded by $n$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-bounded-finite-fibres}
+Let $S$ be a scheme.
+Let $X$ be an algebraic space over $S$, and let $U$ be a scheme over $S$.
+Let $\varphi : U \to X$ be a morphism over $S$.
+If the fibres of $\varphi$ are universally bounded, then there exists an
+integer $n$ such that each fibre of $|U| \to |X|$ has at most
+$n$ elements.
+\end{lemma}
+
+\begin{proof}
+The integer $n$ of Definition \ref{definition-universally-bounded} works.
+Namely, pick $x \in |X|$. Represent $x$ by a morphism
+$x : \Spec(k) \to X$. Then we get a commutative diagram
+$$
+\xymatrix{
+\Spec(k) \times_X U \ar[r] \ar[d] & U \ar[d] \\
+\Spec(k) \ar[r]^x & X
+}
+$$
+which shows (via
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-cartesian})
+that the inverse image of $x$ in $|U|$ is the image of
+the top horizontal arrow. Since $\Spec(k) \times_X U$ is finite
+of degree $\leq n$ over $k$ it has at most $n$ points.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Finiteness conditions and points}
+\label{section-points-monomorphisms}
+
+\noindent
+In this section we elaborate on the question of when points can be represented
+by monomorphisms from spectra of fields into the space.
+
+\begin{remark}
+\label{remark-recall}
+Before we give the proof of the next lemma let us recall some facts
+about \'etale morphisms of schemes:
+\begin{enumerate}
+\item An \'etale morphism is flat and hence generalizations lift along
+an \'etale morphism
+(Morphisms, Lemmas \ref{morphisms-lemma-etale-flat}
+and \ref{morphisms-lemma-generalizations-lift-flat}).
+\item An \'etale morphism is unramified, an unramified morphism is locally
+quasi-finite, hence fibres are discrete
+(Morphisms, Lemmas \ref{morphisms-lemma-flat-unramified-etale},
+\ref{morphisms-lemma-unramified-quasi-finite}, and
+\ref{morphisms-lemma-quasi-finite-at-point-characterize}).
+\item A quasi-compact \'etale morphism is quasi-finite and in particular
+has finite fibres
+(Morphisms, Lemmas \ref{morphisms-lemma-quasi-finite-locally-quasi-compact} and
+\ref{morphisms-lemma-quasi-finite}).
+\item An \'etale scheme over a field $k$ is a disjoint union of spectra
of finite separable field extensions of $k$
+(Morphisms, Lemma \ref{morphisms-lemma-etale-over-field}).
+\end{enumerate}
+For a general discussion of \'etale morphisms, please see
+\'Etale Morphisms, Section \ref{etale-section-etale-morphisms}.
+\end{remark}
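\medskip\noindent
As an illustration of part (4) of the remark above (not needed in what
follows): if $P \in k[x]$ is a separable polynomial with irreducible
factorization $P = P_1 \ldots P_r$, then
$$
\Spec\left(k[x]/(P)\right) =
\coprod\nolimits_{i = 1, \ldots, r} \Spec\left(k[x]/(P_i)\right)
$$
is \'etale over $k$ and each $k[x]/(P_i)$ is a finite separable field
extension of $k$. Conversely, every affine \'etale scheme over $k$ is a
finite disjoint union of spectra of finite separable extensions of $k$.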
+
+\begin{lemma}
+\label{lemma-U-finite-above-x}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Let $x \in |X|$. The following are equivalent:
+\begin{enumerate}
+\item there exists a family of schemes $U_i$ and
+\'etale morphisms $\varphi_i : U_i \to X$ such that
+$\coprod \varphi_i : \coprod U_i \to X$ is surjective,
+and such that for each $i$ the fibre of
+$|U_i| \to |X|$ over $x$ is finite, and
+\item for every affine scheme $U$ and \'etale morphism $\varphi : U \to X$
+the fibre of $|U| \to |X|$ over $x$ is finite.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (2) $\Rightarrow$ (1) is trivial.
+Let $\varphi_i : U_i \to X$ be a family of \'etale morphisms as in (1).
+Let $\varphi : U \to X$ be an \'etale morphism from an affine scheme
+towards $X$. Consider the fibre product diagrams
+$$
+\xymatrix{
+U \times_X U_i \ar[r]_-{p_i} \ar[d]_{q_i} & U_i \ar[d]^{\varphi_i} \\
+U \ar[r]^\varphi & X
+}
+\quad \quad
+\xymatrix{
+\coprod U \times_X U_i \ar[r]_-{\coprod p_i} \ar[d]_{\coprod q_i} &
+\coprod U_i \ar[d]^{\coprod \varphi_i} \\
+U \ar[r]^\varphi & X
+}
+$$
+Since $q_i$ is \'etale it is open (see Remark \ref{remark-recall}).
+Moreover, the morphism $\coprod q_i$ is surjective.
+Hence there exist finitely many indices $i_1, \ldots, i_n$ and
quasi-compact opens $W_{i_j} \subset U \times_X U_{i_j}$
whose images under the maps $q_{i_j}$ cover $U$.
+The morphism $p_i$ is \'etale, hence locally quasi-finite (see remark on
+\'etale morphisms above). Thus we may apply
+Morphisms, Lemma
+\ref{morphisms-lemma-locally-quasi-finite-qc-source-universally-bounded}
to see the fibres of $p_{i_j}|_{W_{i_j}} : W_{i_j} \to U_{i_j}$ are finite.
+Hence by
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-cartesian}
+and the assumption on $\varphi_i$ we conclude that the fibre
+of $\varphi$ over $x$ is finite. In other words (2) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-R-finite-above-x}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Let $x \in |X|$. The following are equivalent:
+\begin{enumerate}
+\item there exists a scheme $U$, an \'etale morphism
+$\varphi : U \to X$, and points $u, u' \in U$ mapping to
+$x$ such that setting $R = U \times_X U$ the fibre of
+$$
+|R| \to |U| \times_{|X|} |U|
+$$
+over $(u, u')$ is finite,
+\item for every scheme $U$, \'etale morphism $\varphi : U \to X$ and
+any points $u, u' \in U$ mapping to
+$x$ setting $R = U \times_X U$ the fibre of
+$$
+|R| \to |U| \times_{|X|} |U|
+$$
+over $(u, u')$ is finite,
+\item there exists a morphism $\Spec(k) \to X$ with $k$ a field
+in the equivalence class of $x$ such that the projections
+$\Spec(k) \times_X \Spec(k) \to \Spec(k)$ are
+\'etale and quasi-compact, and
+\item there exists a monomorphism $\Spec(k) \to X$ with $k$ a field
+in the equivalence class of $x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1), i.e., let $\varphi : U \to X$ be an \'etale morphism from a scheme
+towards $X$, and let $u, u'$ be points of $U$ lying over $x$
+such that the fibre of $|R| \to |U| \times_{|X|} |U|$ over $(u, u')$
+is a finite set. In this proof we think of a point $u = \Spec(\kappa(u))$
+as a scheme. Note that $u \to U$, $u' \to U$ are monomorphisms (see
+Schemes, Lemma \ref{schemes-lemma-injective-points-surjective-stalks}),
+hence $u \times_X u' \to R = U \times_X U$ is a monomorphism.
+In this language the assumption really means that
+$u \times_X u'$ is a scheme whose underlying topological space has
+finitely many points.
+Let $\psi : W \to X$ be an \'etale morphism from a scheme towards $X$.
+Let $w, w' \in W$ be points of $W$ mapping to $x$.
+We have to show that $w \times_X w'$ is a scheme whose underlying topological
+space has finitely many points.
+Consider the fibre product diagram
+$$
+\xymatrix{
+W \times_X U \ar[r]_p \ar[d]_q & U \ar[d]^\varphi \\
+W \ar[r]^\psi & X
+}
+$$
+As $x$ is the image of $u$ and $u'$ we may pick points
+$\tilde w, \tilde w'$ in $W \times_X U$ with $q(\tilde w) = w$,
+$q(\tilde w') = w'$, $u = p(\tilde w)$ and $u' = p(\tilde w')$, see
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-cartesian}.
+As $p$, $q$ are \'etale the field extensions
+$\kappa(w) \subset \kappa(\tilde w) \supset \kappa(u)$ and
+$\kappa(w') \subset \kappa(\tilde w') \supset \kappa(u')$ are
+finite separable, see Remark \ref{remark-recall}.
+Then we get a commutative diagram
+$$
+\xymatrix{
+w \times_X w' \ar[d] &
+\tilde w \times_X \tilde w' \ar[l] \ar[d] \ar[r] &
+u \times_X u' \ar[d] \\
w \times_S w' &
+\tilde w \times_S \tilde w' \ar[l] \ar[r] &
+u \times_S u'
+}
+$$
+where the squares are fibre product squares. The lower horizontal
+morphisms are \'etale and quasi-compact, as any scheme of the form
+$\Spec(k) \times_S \Spec(k')$ is affine, and by our
+observations about the field extensions above.
+Thus we see that the top horizontal arrows are \'etale and quasi-compact
+and hence have finite fibres.
+We have seen above that $|u \times_X u'|$ is finite, so we conclude that
+$|w \times_X w'|$ is finite. In other words, (2) holds.
+
+\medskip\noindent
+Assume (2). Let $U \to X$ be an \'etale morphism from a scheme $U$
+such that $x$ is in the image of $|U| \to |X|$. Let $u \in U$ be
+a point mapping to $x$. Then we have seen in the previous
+paragraph that $u = \Spec(\kappa(u)) \to X$ has the property that
+$u \times_X u$ has a finite underlying topological space. On the other
+hand, the projection maps $u \times_X u \to u$ are the composition
+$$
+u \times_X u \longrightarrow
+u \times_X U \longrightarrow
+u \times_X X = u,
+$$
+i.e., the composition of a monomorphism (the base change of the monomorphism
$u \to U$) followed by an \'etale morphism (the base change of the \'etale morphism
+$U \to X$). Hence $u \times_X U$ is a disjoint union of spectra of fields
+finite separable over $\kappa(u)$ (see
+Remark \ref{remark-recall}). Since $u \times_X u$ is finite the image
+of it in $u \times_X U$ is a finite disjoint union of spectra of fields
+finite separable over $\kappa(u)$. By
+Schemes, Lemma \ref{schemes-lemma-mono-towards-spec-field}
+we conclude that $u \times_X u$ is a finite disjoint union of spectra
+of fields finite separable over $\kappa(u)$. In other words, we see that
+$u \times_X u \to u$ is quasi-compact and \'etale. This means that (3) holds.
+
+\medskip\noindent
+Let us prove that (3) implies (4). Let $\Spec(k) \to X$ be a morphism
+from the spectrum of a field into $X$, in the equivalence class of $x$
+such that the two projections
+$t, s : R = \Spec(k) \times_X \Spec(k) \to \Spec(k)$
+are quasi-compact and \'etale.
+This means in particular
+that $R$ is an \'etale equivalence relation on $\Spec(k)$.
+By Spaces, Theorem \ref{spaces-theorem-presentation}
+we know that the quotient sheaf
+$X' = \Spec(k)/R$ is an algebraic space. By
+Groupoids, Lemma \ref{groupoids-lemma-quotient-groupoid-restrict}
+the map $X' \to X$ is a monomorphism.
+Since $s, t$ are quasi-compact, we see that $R$ is quasi-compact and hence
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-point-like-spaces}
+applies to $X'$, and we see that
+$X' = \Spec(k')$ for some field $k'$. Hence we get a factorization
+$$
+\Spec(k) \longrightarrow
+\Spec(k') \longrightarrow X
+$$
+which shows that $\Spec(k') \to X$ is a monomorphism mapping
+to $x \in |X|$. In other words (4) holds.
+
+\medskip\noindent
+Finally, we prove that (4) implies (1). Let $\Spec(k) \to X$
+be a monomorphism with $k$ a field in the equivalence class of $x$.
+Let $U \to X$ be a surjective \'etale morphism from a scheme $U$ to $X$.
+Let $u \in U$ be a point over $x$. Since $\Spec(k) \times_X u$
+is nonempty, and since $\Spec(k) \times_X u \to u$ is a monomorphism
+we conclude that $\Spec(k) \times_X u = u$ (see
+Schemes, Lemma \ref{schemes-lemma-mono-towards-spec-field}).
+Hence $u \to U \to X$ factors through $\Spec(k) \to X$, here is
+a picture
+$$
+\xymatrix{
+u \ar[r] \ar[d] & U \ar[d] \\
+\Spec(k) \ar[r] & X
+}
+$$
+Since the right vertical arrow is \'etale this implies that
+$\kappa(u)/k$ is a finite separable extension. Hence we conclude that
+$$
+u \times_X u = u \times_{\Spec(k)} u
+$$
+is a finite scheme, and we win by the discussion of the meaning of property
+(1) in the first paragraph of this proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weak-UR-finite-above-x}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Let $x \in |X|$.
+Let $U$ be a scheme and let $\varphi : U \to X$ be an \'etale morphism.
+The following are equivalent:
+\begin{enumerate}
+\item $x$ is in the image of $|U| \to |X|$, and
+setting $R = U \times_X U$ the fibres of both
+$$
+|U| \longrightarrow |X|
+\quad\text{and}\quad
+|R| \longrightarrow |X|
+$$
+over $x$ are finite,
+\item there exists a monomorphism $\Spec(k) \to X$ with $k$ a field
+in the equivalence class of $x$, and
+the fibre product $\Spec(k) \times_X U$ is
+a finite nonempty scheme over $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). This clearly implies the first condition of
+Lemma \ref{lemma-R-finite-above-x} and hence we obtain a monomorphism
+$\Spec(k) \to X$ in the class of $x$. Taking the fibre product
+we see that $\Spec(k) \times_X U \to \Spec(k)$ is a scheme
+\'etale over $\Spec(k)$ with finitely many points, hence a finite
+nonempty scheme over $k$, i.e., (2) holds.
+
+\medskip\noindent
+Assume (2). By assumption $x$ is in the image of
+$|U| \to |X|$. The finiteness of the fibre of
+$|U| \to |X|$ over $x$ is clear since this fibre is equal to
+$|\Spec(k) \times_X U|$ by
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-cartesian}.
+The finiteness of the fibre of $|R| \to |X|$ above $x$ is also clear
+since it is equal to the set underlying the scheme
+$$
+(\Spec(k) \times_X U) \times_{\Spec(k)} (\Spec(k) \times_X U)
+$$
+which is finite over $k$. Thus (1) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-UR-finite-above-x}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Let $x \in |X|$. The following are equivalent:
+\begin{enumerate}
\item for every affine scheme $U$ and every \'etale morphism
$\varphi : U \to X$, setting $R = U \times_X U$, the fibres of both
+$$
+|U| \longrightarrow |X|
+\quad\text{and}\quad
+|R| \longrightarrow |X|
+$$
+over $x$ are finite,
+\item there exist schemes $U_i$ and \'etale morphisms
+$U_i \to X$ such that $\coprod U_i \to X$ is surjective and for each
+$i$, setting $R_i = U_i \times_X U_i$ the fibres of both
+$$
+|U_i| \longrightarrow |X|
+\quad\text{and}\quad
+|R_i| \longrightarrow |X|
+$$
+over $x$ are finite,
+\item there exists a monomorphism $\Spec(k) \to X$ with $k$ a field
+in the equivalence class of $x$, and for any affine scheme $U$ and \'etale
+morphism $U \to X$ the fibre product $\Spec(k) \times_X U$ is
+a finite scheme over $k$,
+\item there exists a quasi-compact monomorphism $\Spec(k) \to X$
+with $k$ a field in the equivalence class of $x$, and
+\item there exists a quasi-compact morphism $\Spec(k) \to X$
+with $k$ a field in the equivalence class of $x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (3) follows on applying
+Lemma \ref{lemma-weak-UR-finite-above-x}
+to every \'etale morphism $U \to X$ with $U$ affine.
+It is clear that (3) implies (2).
+Assume $U_i \to X$ and $R_i$ are as in (2). We conclude from
+Lemma \ref{lemma-U-finite-above-x}
+that for any affine scheme $U$ and \'etale morphism $U \to X$
+the fibre of $|U| \to |X|$ over $x$ is finite.
+Say this fibre is $\{u_1, \ldots, u_n\}$. Then, as
+Lemma \ref{lemma-R-finite-above-x} (1)
+applies to $U_i \to X$ for some $i$ such that $x$ is in the image of
$|U_i| \to |X|$, we see that, setting $R = U \times_X U$, the fibre of
$|R| \to |U| \times_{|X|} |U|$
over each $(u_a, u_b)$, $a, b \in \{1, \ldots, n\}$, is finite.
+Hence the fibre of $|R| \to |X|$ over $x$ is finite.
+In this way we see that (1) holds. At this point we know that
+(1), (2), and (3) are equivalent.
+
+\medskip\noindent
+If (4) holds, then for any affine scheme $U$ and \'etale morphism
+$U \to X$ the scheme $\Spec(k) \times_X U$ is on the one hand
+\'etale over $k$ (hence a disjoint union of spectra of finite separable
+extensions of $k$ by
+Remark \ref{remark-recall})
+and on the other hand quasi-compact over $U$ (hence quasi-compact).
+Thus we see that (3) holds.
+Conversely, if $U_i \to X$ is as in (2) and $\Spec(k) \to X$
+is a monomorphism as in (3), then
+$$
+\coprod \Spec(k) \times_X U_i
+\longrightarrow
+\coprod U_i
+$$
+is quasi-compact (because over each $U_i$ we see that
$\Spec(k) \times_X U_i$ is a finite disjoint union of spectra of fields).
+Thus $\Spec(k) \to X$ is quasi-compact by
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-quasi-compact-local}.
+
+\medskip\noindent
+It is immediate that (4) implies (5). Conversely, let $\Spec(k) \to X$
+be a quasi-compact morphism in the equivalence class of $x$. Let $U \to X$
+be an \'etale morphism with $U$ affine. Consider the fibre product
+$$
+\xymatrix{
+F \ar[r] \ar[d] & U \ar[d] \\
+\Spec(k) \ar[r] & X
+}
+$$
+Then $F \to U$ is quasi-compact, hence $F$ is quasi-compact.
+On the other hand, $F \to \Spec(k)$ is \'etale, hence $F$ is a
+finite disjoint union of spectra of finite separable extensions of $k$
+(Remark \ref{remark-recall}). Since the image of $|F| \to |U|$
+is the fibre of $|U| \to |X|$ over $x$ (Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-points-cartesian}), we conclude that
+the fibre of $|U| \to |X|$ over $x$ is finite. The scheme
+$F \times_{\Spec(k)} F$ is also a finite union of spectra of fields
+because it is also quasi-compact and \'etale over $\Spec(k)$.
+There is a monomorphism
+$F \times_X F \to F \times_{\Spec(k)} F$, hence $F \times_X F$ is
+a finite disjoint union of spectra of fields
+(Schemes, Lemma \ref{schemes-lemma-mono-towards-spec-field}).
+Thus the image of $F \times_X F \to U \times_X U = R$ is finite.
+Since this image is the fibre of $|R| \to |X|$ over $x$ by
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-points-cartesian}
+we conclude that (1) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-U-universally-bounded}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+The following are equivalent:
+\begin{enumerate}
+\item there exist schemes $U_i$ and \'etale morphisms
+$U_i \to X$ such that $\coprod U_i \to X$ is surjective and
+each $U_i \to X$ has universally bounded fibres, and
+\item for every affine scheme $U$ and \'etale morphism $\varphi : U \to X$
+the fibres of $U \to X$ are universally bounded.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (2) $\Rightarrow$ (1) is trivial.
+Assume (1). Let $(\varphi_i : U_i \to X)_{i \in I}$ be a collection of
+\'etale morphisms from schemes towards $X$, covering $X$, such that
+each $\varphi_i$ has universally bounded fibres.
+Let $\psi : U \to X$ be an \'etale morphism from an affine scheme towards $X$.
+For each $i$ consider the fibre product diagram
+$$
+\xymatrix{
+U \times_X U_i \ar[r]_{p_i} \ar[d]_{q_i} & U_i \ar[d]^{\varphi_i} \\
+U \ar[r]^\psi & X
+}
+$$
+Since $q_i$ is \'etale it is open (see Remark \ref{remark-recall}).
+Moreover, we have $U = \bigcup \Im(q_i)$, since the family
$(\varphi_i)_{i \in I}$ is surjective. Since $U$ is affine, hence quasi-compact,
we can find finitely many $i_1, \ldots, i_n \in I$ and quasi-compact
+opens $W_j \subset U \times_X U_{i_j}$ such that
+$U = \bigcup p_{i_j}(W_j)$.
The morphism $p_{i_j}$ is \'etale, hence locally quasi-finite
(Remark \ref{remark-recall}). Thus we may apply
+Morphisms, Lemma
+\ref{morphisms-lemma-locally-quasi-finite-qc-source-universally-bounded}
+to see the fibres of $p_{i_j}|_{W_j} : W_j \to U_{i_j}$ are universally
+bounded. Hence by
+Lemma \ref{lemma-composition-universally-bounded}
+we see that the fibres of $W_j \to X$ are universally bounded.
+Thus also $\coprod_{j = 1, \ldots, n} W_j \to X$ has universally
+bounded fibres. Since $\coprod_{j = 1, \ldots, n} W_j \to X$ factors
+through the surjective \'etale map
+$\coprod q_{i_j}|_{W_j} : \coprod_{j = 1, \ldots, n} W_j \to U$ we
+see that the fibres of $U \to X$ are universally bounded by
+Lemma \ref{lemma-universally-bounded-permanence}.
+In other words (2) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-very-reasonable}
+Let $S$ be a scheme.
+Let $X$ be an algebraic space over $S$.
+The following are equivalent:
+\begin{enumerate}
+\item there exists a Zariski covering $X = \bigcup X_i$ and for
+each $i$ a scheme $U_i$ and a quasi-compact surjective \'etale
+morphism $U_i \to X_i$, and
+\item there exist schemes $U_i$ and \'etale morphisms $U_i \to X$
+such that the projections $U_i \times_X U_i \to U_i$ are quasi-compact
+and $\coprod U_i \to X$ is surjective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If (1) holds then the morphisms $U_i \to X_i \to X$ are \'etale (combine
+Morphisms, Lemma \ref{morphisms-lemma-composition-etale}
+and
+Spaces, Lemmas
+\ref{spaces-lemma-composition-representable-transformations-property} and
+\ref{spaces-lemma-morphism-schemes-gives-representable-transformation-property}
+).
+Moreover, as $U_i \times_X U_i = U_i \times_{X_i} U_i$,
+both projections $U_i \times_X U_i \to U_i$ are quasi-compact.
+
+\medskip\noindent
+If (2) holds then let $X_i \subset X$ be the open subspace corresponding
+to the image of the open map $|U_i| \to |X|$, see
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-etale-image-open}.
+The morphisms $U_i \to X_i$ are surjective.
+Hence $U_i \to X_i$ is surjective \'etale, and the projections
+$U_i \times_{X_i} U_i \to U_i$ are quasi-compact, because
+$U_i \times_{X_i} U_i = U_i \times_X U_i$. Thus by
+Spaces, Lemma \ref{spaces-lemma-representable-morphisms-spaces-property}
+the morphisms $U_i \to X_i$ are quasi-compact.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Conditions on algebraic spaces}
+\label{section-conditions}
+
+\noindent
+In this section we discuss the relationship between various natural
+conditions on algebraic spaces we have seen above. Please read
+Section \ref{section-reasonable-decent}
+to get a feeling for the meaning of these conditions.
+
+\begin{lemma}
+\label{lemma-bounded-fibres}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Consider the following conditions on $X$:
+\begin{itemize}
+\item[] $(\alpha)$ For every $x \in |X|$, the equivalent conditions of
+Lemma \ref{lemma-U-finite-above-x}
+hold.
+\item[] $(\beta)$ For every $x \in |X|$, the equivalent conditions of
+Lemma \ref{lemma-R-finite-above-x}
+hold.
+\item[] $(\gamma)$ For every $x \in |X|$, the equivalent conditions of
+Lemma \ref{lemma-UR-finite-above-x}
+hold.
+\item[] $(\delta)$ The equivalent conditions of
+Lemma \ref{lemma-U-universally-bounded}
+hold.
+\item[] $(\epsilon)$ The equivalent conditions of
+Lemma \ref{lemma-characterize-very-reasonable}
+hold.
+\item[] $(\zeta)$ The space $X$ is Zariski locally quasi-separated.
\item[] $(\eta)$ The space $X$ is quasi-separated.
+\item[] $(\theta)$ The space $X$ is representable, i.e., $X$ is a scheme.
+\item[] $(\iota)$ The space $X$ is a quasi-separated scheme.
+\end{itemize}
+We have
+$$
+\xymatrix{
+& (\theta) \ar@{=>}[rd] & & & & \\
+(\iota) \ar@{=>}[ru] \ar@{=>}[rd] & &
+(\zeta) \ar@{=>}[r] &
+(\epsilon) \ar@{=>}[r] &
+(\delta) \ar@{=>}[r] &
+(\gamma) \ar@{<=>}[r] & (\alpha) + (\beta) \\
+& (\eta) \ar@{=>}[ru] & & & &
+}
+$$
+\end{lemma}
+
+\begin{proof}
+The implication $(\gamma) \Leftrightarrow (\alpha) + (\beta)$ is immediate.
+The implications in the diamond on the left are clear from the
+definitions.
+
+\medskip\noindent
+Assume $(\zeta)$, i.e., that $X$ is Zariski locally quasi-separated.
+Then $(\epsilon)$ holds by
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-quasi-separated-quasi-compact-pieces}.
+
+\medskip\noindent
+Assume $(\epsilon)$. By
+Lemma \ref{lemma-characterize-very-reasonable}
+there exists
+a Zariski open covering $X = \bigcup X_i$ such that for each $i$
+there exists a scheme $U_i$ and a quasi-compact surjective \'etale morphism
+$U_i \to X_i$. Choose an $i$ and an affine open subscheme $W \subset U_i$.
+It suffices to show that $W \to X$ has universally bounded fibres, since then
+the family of all these morphisms $W \to X$ covers $X$.
+To do this we consider the diagram
+$$
+\xymatrix{
+W \times_X U_i \ar[r]_-p \ar[d]_q & U_i \ar[d] \\
+W \ar[r] & X
+}
+$$
+Since $W \to X$ factors through $X_i$ we see that
+$W \times_X U_i = W \times_{X_i} U_i$, and hence $q$ is quasi-compact.
+Since $W$ is affine this implies that the scheme $W \times_X U_i$
+is quasi-compact. Thus we may apply
+Morphisms, Lemma
+\ref{morphisms-lemma-locally-quasi-finite-qc-source-universally-bounded}
+and we conclude that $p$ has universally bounded fibres. From
+Lemma \ref{lemma-descent-universally-bounded}
+we conclude that $W \to X$ has universally bounded fibres as well.
+
+\medskip\noindent
+Assume $(\delta)$. Let $U$ be an affine scheme, and let $U \to X$ be an \'etale
+morphism. By assumption the fibres of the morphism $U \to X$ are universally
+bounded. Thus also the fibres of both projections $R = U \times_X U \to U$
+are universally bounded, see
+Lemma \ref{lemma-base-change-universally-bounded}.
+And by
+Lemma \ref{lemma-composition-universally-bounded}
+also the fibres of $R \to X$ are universally bounded.
Hence for any $x \in |X|$ the fibres of $|U| \to |X|$ and $|R| \to |X|$
+over $x$ are finite, see
+Lemma \ref{lemma-universally-bounded-finite-fibres}.
+In other words, the equivalent conditions of
+Lemma \ref{lemma-UR-finite-above-x}
+hold. This proves that $(\delta) \Rightarrow (\gamma)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-properties-local}
+Let $S$ be a scheme.
+Let $\mathcal{P}$ be one of the properties
+$(\alpha)$, $(\beta)$, $(\gamma)$, $(\delta)$, $(\epsilon)$, $(\zeta)$, or
+$(\theta)$ of algebraic spaces listed in
+Lemma \ref{lemma-bounded-fibres}.
+Then if $X$ is an algebraic space over $S$, and $X = \bigcup X_i$ is a
+Zariski open covering such that each $X_i$ has $\mathcal{P}$,
+then $X$ has $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
Let $X$ be an algebraic space over $S$, and let $X = \bigcup X_i$ be a
Zariski open covering such that each $X_i$ has $\mathcal{P}$.
+
+\medskip\noindent
The case $\mathcal{P} = (\alpha)$. The condition $(\alpha)$ for $X_i$
means that for every $x \in |X_i|$, every affine scheme $U$, and every
\'etale morphism $\varphi : U \to X_i$, the fibre of $\varphi : |U| \to |X_i|$
over $x$ is finite. Consider $x \in |X|$, an affine scheme $U$ and
+an \'etale morphism $U \to X$. Since $X = \bigcup X_i$ is a
Zariski open covering there exists a finite affine open covering
+$U = U_1 \cup \ldots \cup U_n$ such that each $U_j \to X$ factors through
some $X_{i_j}$. By assumption the fibres of $|U_j| \to |X_{i_j}|$
+over $x$ are finite for $j = 1, \ldots, n$. Clearly this means that
+the fibre of $|U| \to |X|$ over $x$ is finite.
+This proves the result for $(\alpha)$.
+
+\medskip\noindent
+The case $\mathcal{P} = (\beta)$. The condition $(\beta)$ for $X_i$ means
+that every $x \in |X_i|$ is represented by a monomorphism from the
+spectrum of a field towards $X_i$. Hence the same follows for $X$
+as $X_i \to X$ is a monomorphism and $X = \bigcup X_i$.
+
+\medskip\noindent
+The case $\mathcal{P} = (\gamma)$.
+Note that $(\gamma) = (\alpha) + (\beta)$ by
+Lemma \ref{lemma-bounded-fibres}
+hence the lemma for $(\gamma)$ follows from the cases treated above.
+
+\medskip\noindent
+The case $\mathcal{P} = (\delta)$. The condition $(\delta)$ for $X_i$ means
+there exist schemes $U_{ij}$ and \'etale morphisms $U_{ij} \to X_i$ with
+universally bounded fibres which cover $X_i$. These schemes also give an
+\'etale surjective morphism $\coprod U_{ij} \to X$ and $U_{ij} \to X$
+still has universally bounded fibres.
+
+\medskip\noindent
+The case $\mathcal{P} = (\epsilon)$. The condition $(\epsilon)$ for $X_i$ means
+we can find a set $J_i$ and morphisms
+$\varphi_{ij} : U_{ij} \to X_i$ such that each $\varphi_{ij}$
+is \'etale, both projections $U_{ij} \times_{X_i} U_{ij} \to U_{ij}$
+are quasi-compact, and $\coprod_{j \in J_i} U_{ij} \to X_i$ is surjective.
+In this case the compositions $U_{ij} \to X_i \to X$ are \'etale
+(combine
+Morphisms, Lemmas
+\ref{morphisms-lemma-composition-etale} and
+\ref{morphisms-lemma-open-immersion-etale}
+and
+Spaces, Lemmas
+\ref{spaces-lemma-composition-representable-transformations-property} and
+\ref{spaces-lemma-morphism-schemes-gives-representable-transformation-property}
+).
+Since $X_i \subset X$ is a subspace we see that
+$U_{ij} \times_{X_i} U_{ij} = U_{ij} \times_X U_{ij}$, and hence the
+condition on fibre products is preserved. And clearly
+$\coprod_{i, j} U_{ij} \to X$ is surjective. Hence $X$
+satisfies $(\epsilon)$.
+
+\medskip\noindent
+The case $\mathcal{P} = (\zeta)$. The condition $(\zeta)$ for $X_i$
+means that $X_i$ is Zariski locally quasi-separated. It is immediately
+clear that this means $X$ is Zariski locally quasi-separated.
+
+\medskip\noindent
+For $(\theta)$, see
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-subscheme}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-properties}
+Let $S$ be a scheme. Let $\mathcal{P}$ be one of the properties
+$(\beta)$, $(\gamma)$, $(\delta)$, $(\epsilon)$, or
+$(\theta)$ of algebraic spaces listed in
+Lemma \ref{lemma-bounded-fibres}.
+Let $X$, $Y$ be algebraic spaces over $S$.
+Let $X \to Y$ be a representable morphism.
+If $Y$ has property $\mathcal{P}$, so does $X$.
+\end{lemma}
+
+\begin{proof}
+Assume $f : X \to Y$ is a representable morphism of algebraic spaces,
+and assume that $Y$ has $\mathcal{P}$. Let $x \in |X|$, and set
+$y = f(x) \in |Y|$.
+
+\medskip\noindent
+The case $\mathcal{P} = (\beta)$. Condition $(\beta)$ for $Y$ means
+there exists a monomorphism $\Spec(k) \to Y$ representing $y$.
+The fibre product $X_y = \Spec(k) \times_Y X$ is a scheme, and
+$x$ corresponds to a point of $X_y$, i.e., to a monomorphism
+$\Spec(k') \to X_y$. As $X_y \to X$ is a monomorphism also we see
+that $x$ is represented by the monomorphism $\Spec(k') \to X_y \to X$.
+In other words $(\beta)$ holds for $X$.
+
+\medskip\noindent
+The case $\mathcal{P} = (\gamma)$. Since $(\gamma) \Rightarrow (\beta)$
+we have seen in the preceding paragraph that $y$ and $x$ can be represented
+by monomorphisms as in the following diagram
+$$
+\xymatrix{
+\Spec(k') \ar[r]_-x \ar[d] & X \ar[d] \\
+\Spec(k) \ar[r]^-y & Y
+}
+$$
+Also, by definition of property $(\gamma)$ via
+Lemma \ref{lemma-UR-finite-above-x} (2)
+there exist schemes
+$V_i$ and \'etale morphisms $V_i \to Y$ such that $\coprod V_i \to Y$
+is surjective and for each $i$, setting $R_i = V_i \times_Y V_i$
+the fibres of both
+$$
+|V_i| \longrightarrow |Y|
+\quad\text{and}\quad
+|R_i| \longrightarrow |Y|
+$$
+over $y$ are finite. This means that the schemes
+$(V_i)_y$ and $(R_i)_y$ are finite schemes over $y = \Spec(k)$.
+As $X \to Y$ is representable, the fibre products $U_i = V_i \times_Y X$
+are schemes. The morphisms $U_i \to X$ are \'etale, and
+$\coprod U_i \to X$ is surjective. Finally, for each $i$ we have
+$$
+(U_i)_x =
+(V_i \times_Y X)_x =
+(V_i)_y \times_{\Spec(k)} \Spec(k')
+$$
+and
+$$
+(U_i \times_X U_i)_x =
+\left((V_i \times_Y X) \times_X (V_i \times_Y X)\right)_x =
+(R_i)_y \times_{\Spec(k)} \Spec(k')
+$$
+hence these are finite over $k'$ as base changes of the finite
+schemes $(V_i)_y$ and $(R_i)_y$. This implies that $(\gamma)$ holds for $X$,
+again via the second condition of
+Lemma \ref{lemma-UR-finite-above-x}.
+
+\medskip\noindent
+The case $\mathcal{P} = (\delta)$. Let $V \to Y$ be an \'etale morphism with
+$V$ an affine scheme. Since $Y$ has property $(\delta)$ this morphism has
+universally bounded fibres. By
+Lemma \ref{lemma-base-change-universally-bounded}
+the base change $V \times_Y X \to X$ also has universally bounded fibres.
Hence the first part of
Lemma \ref{lemma-U-universally-bounded}
applies and we see that $X$ also has property $(\delta)$.
+
+\medskip\noindent
+The case $\mathcal{P} = (\epsilon)$. We will repeatedly use
+Spaces, Lemma
+\ref{spaces-lemma-base-change-representable-transformations-property}.
+Let $V_i \to Y$ be as in
+Lemma \ref{lemma-characterize-very-reasonable} (2).
+Set $U_i = X \times_Y V_i$. The morphisms $U_i \to X$ are \'etale,
+and $\coprod U_i \to X$ is surjective. Because
+$U_i \times_X U_i = X \times_Y (V_i \times_Y V_i)$ we see
that the projections $U_i \times_X U_i \to U_i$ are
+base changes of the projections $V_i \times_Y V_i \to V_i$, and so
+quasi-compact as well. Hence $X$ satisfies
+Lemma \ref{lemma-characterize-very-reasonable} (2).
+
+\medskip\noindent
+The case $\mathcal{P} = (\theta)$. In this case the result is
+Categories, Lemma \ref{categories-lemma-representable-over-representable}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Reasonable and decent algebraic spaces}
+\label{section-reasonable-decent}
+
+\noindent
+In
+Lemma \ref{lemma-bounded-fibres}
+we have seen a number of conditions on algebraic spaces related to
+the behaviour of \'etale morphisms from affine schemes into $X$
+and related to the existence of special \'etale coverings of $X$ by
+schemes. We tabulate the different types of conditions here:
+$$
+\boxed{
+\begin{matrix}
+(\alpha) & \text{fibres of \'etale morphisms from affines are finite} \\
+(\beta) & \text{points come from monomorphisms of spectra of fields} \\
+(\gamma) & \text{points come from quasi-compact monomorphisms of
+spectra of fields} \\
+(\delta) & \text{fibres of \'etale morphisms from affines are universally
+bounded} \\
+(\epsilon) & \text{cover by \'etale morphisms from schemes quasi-compact
+onto their image}
+\end{matrix}
+}
+$$
+
+\medskip\noindent
+The conditions in the following definition
+are not exactly conditions on the diagonal of $X$, but they are in some
+sense separation conditions on $X$.
+
+\begin{definition}
+\label{definition-very-reasonable}
+Let $S$ be a scheme.
+Let $X$ be an algebraic space over $S$.
+\begin{enumerate}
\item We say $X$ is {\it decent} if for every point $x \in |X|$ the equivalent
+conditions of
+Lemma \ref{lemma-UR-finite-above-x}
+hold, in other words property $(\gamma)$ of
+Lemma \ref{lemma-bounded-fibres}
+holds.
+\item We say $X$ is {\it reasonable} if the equivalent conditions of
+Lemma \ref{lemma-U-universally-bounded}
+hold, in other words property $(\delta)$ of
+Lemma \ref{lemma-bounded-fibres}
+holds.
+\item We say $X$ is {\it very reasonable} if the equivalent conditions of
+Lemma \ref{lemma-characterize-very-reasonable}
+hold, i.e., property $(\epsilon)$ of
+Lemma \ref{lemma-bounded-fibres}
+holds.
+\end{enumerate}
+\end{definition}
+
+\noindent
+We have the following implications among these conditions on algebraic spaces:
+$$
+\xymatrix{
+\text{representable} \ar@{=>}[rd] & & & \\
+ & \text{very reasonable} \ar@{=>}[r] &
+\text{reasonable} \ar@{=>}[r] &
+\text{decent} \\
+\text{quasi-separated} \ar@{=>}[ru] & & &
+}
+$$
+The notion of a very reasonable algebraic space is obsolete.
+It was introduced because the assumption was needed to prove some results
+which are now proven for the class of decent spaces.
+The class of decent spaces is the largest class of spaces $X$ where one has
+a good relationship between the topology of $|X|$ and
+properties of $X$ itself.
+
+\begin{example}
+\label{example-not-decent}
+The algebraic space $\mathbf{A}^1_{\mathbf{Q}}/\mathbf{Z}$ constructed in
+Spaces, Example \ref{spaces-example-affine-line-translation}
+is not decent as its ``generic point'' cannot be represented by a monomorphism
+from the spectrum of a field.
+\end{example}
+
+\begin{remark}
+\label{remark-reasonable}
+Reasonable algebraic spaces are technically easier to work with than very
+reasonable algebraic spaces. For example, if $X \to Y$ is a quasi-compact
+\'etale surjective morphism of algebraic spaces and $X$ is reasonable, then
so is $Y$, see
Lemma \ref{lemma-descent-conditions},
but we do not know whether the same holds for the property ``very reasonable''.
+Below we give another technical property enjoyed by reasonable
+algebraic spaces.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-fun-property-reasonable}
+Let $S$ be a scheme.
+Let $X$ be a quasi-compact reasonable algebraic space.
+Then there exists a directed system of quasi-compact and quasi-separated
+algebraic spaces $X_i$ such that $X = \colim_i X_i$
+(colimit in the category of sheaves). Moreover we can arrange it such that
+\begin{enumerate}
+\item for every quasi-compact scheme $T$ over $S$ we have
+$\colim X_i(T) = X(T)$,
+\item the transition morphisms $X_i \to X_{i'}$ of the system
+and the coprojections $X_i \to X$ are surjective and \'etale, and
+\item if $X$ is a scheme, then the algebraic spaces $X_i$ are schemes
+and the transition morphisms $X_i \to X_{i'}$
+and the coprojections $X_i \to X$ are local isomorphisms.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We sketch the proof. By
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-quasi-compact-affine-cover}
+we have $X = U/R$ with $U$ affine.
+In this case, reasonable means $U \to X$ is universally bounded.
+Hence there exists an integer $N$ such that the ``fibres'' of $U \to X$
+have degree at most $N$, see
+Definition \ref{definition-universally-bounded}.
+Denote $s, t : R \to U$ and $c : R \times_{s, U, t} R \to R$ the
+groupoid structural maps.
+
+\medskip\noindent
+Claim: for every quasi-compact open $A \subset R$ there exists
+an open $R' \subset R$ such that
+\begin{enumerate}
+\item $A \subset R'$,
+\item $R'$ is quasi-compact, and
+\item $(U, R', s|_{R'}, t|_{R'}, c|_{R' \times_{s, U, t} R'})$ is
+a groupoid scheme.
+\end{enumerate}
+Note that $e : U \to R$ is open as it is a section of the \'etale morphism
+$s : R \to U$, see
+\'Etale Morphisms, Proposition \ref{etale-proposition-properties-sections}.
+Moreover $U$ is affine hence quasi-compact. Hence we may replace $A$ by
+$A \cup e(U) \subset R$, and assume that $A$ contains $e(U)$. Next, we
+define inductively $A^1 = A$, and
+$$
+A^n = c(A^{n - 1} \times_{s, U, t} A) \subset R
+$$
+for $n \geq 2$. Arguing inductively, we see that $A^n$ is quasi-compact for
+all $n \geq 2$, as the image of the quasi-compact fibre product
+$A^{n - 1} \times_{s, U, t} A$. If $k$ is an algebraically
+closed field over $S$, and we consider $k$-points then
+$$
A^n(k) = \left\{(u, u') \in U(k) \times U(k)
:
\begin{matrix}
\text{there exist } u = u_1, u_2, \ldots, u_n = u' \in U(k)\text{ with} \\
(u_i, u_{i + 1}) \in A(k) \text{ for all } i = 1, \ldots, n - 1
\end{matrix}
\right\}
+$$
+But as the fibres of $U(k) \to X(k)$ have size at most $N$ we see that if
$n > N$ then we get a repeat in the sequence above, and we can shorten it,
proving that $A^n = A^N$ for all $n \geq N$.
+This implies that $R' = A^N$ gives a groupoid scheme
+$(U, R', s|_{R'}, t|_{R'}, c|_{R' \times_{s, U, t} R'})$, proving the claim
+above.
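
\medskip\noindent
To spell out the repeat-and-shorten step of the claim just proved:
since consecutive pairs $(u_i, u_{i + 1})$ lie in $A(k) \subset R(k)$,
all the $u_i$ map to a single point of $X(k)$, and the fibre of
$U(k) \to X(k)$ over that point has at most $N$ elements. Hence if
$n > N$ we find $a < b$ with $u_a = u_b$, and
$$
u = u_1, \ldots, u_{a - 1}, u_a = u_b, u_{b + 1}, \ldots, u_n = u'
$$
is a chain of length $n - (b - a) < n$ whose consecutive pairs still lie
in $A(k)$, because $(u_a, u_{b + 1}) = (u_b, u_{b + 1}) \in A(k)$.
As $A \supset e(U)$ implies $A^{n - 1} \subset A^n$, this gives
$(u, u') \in A^N(k)$.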
+
+\medskip\noindent
+Consider the map of sheaves on $(\Sch/S)_{fppf}$
+$$
+\colim_{R' \subset R} U/R' \longrightarrow U/R
+$$
+where $R' \subset R$ runs over the quasi-compact open subschemes
+of $R$ which give \'etale equivalence relations as above. Each of the
+quotients $U/R'$ is an algebraic space
+(see Spaces, Theorem \ref{spaces-theorem-presentation}).
Since $R'$ is quasi-compact and $U$ is affine, the morphism
$R' \to U \times_{\Spec(\mathbf{Z})} U$ is quasi-compact,
+and hence $U/R'$ is quasi-separated. Finally, if $T$ is a quasi-compact
+scheme, then
+$$
+\colim_{R' \subset R} U(T)/R'(T) \longrightarrow U(T)/R(T)
+$$
+is a bijection, since every morphism from $T$ into $R$ ends up in one
+of the open subrelations $R'$ by the claim above. This clearly implies
+that the colimit of the sheaves $U/R'$ is $U/R$. In other words
+the algebraic space $X = U/R$ is the colimit of the quasi-separated
+algebraic spaces $U/R'$.
+
+\medskip\noindent
+Properties (1) and (2) follow from the discussion above.
+If $X$ is a scheme, then if we choose $U$ to be a finite
+disjoint union of affine opens of $X$ we will obtain (3).
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-named-properties}
+Let $S$ be a scheme. Let $X$, $Y$ be algebraic spaces over $S$.
+Let $X \to Y$ be a representable morphism.
+If $Y$ is decent (resp.\ reasonable), then so is $X$.
+\end{lemma}
+
+\begin{proof}
+Translation of Lemma \ref{lemma-representable-properties}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-named-properties}
+Let $S$ be a scheme. Let $X \to Y$ be an \'etale morphism of
+algebraic spaces over $S$. If $Y$ is decent, resp.\ reasonable,
+then so is $X$.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be an affine scheme and $U \to X$ an \'etale morphism.
+Set $R = U \times_X U$ and $R' = U \times_Y U$. Note that
+$R \to R'$ is a monomorphism.
+
+\medskip\noindent
+Let $x \in |X|$. To show that $X$ is decent, we have to show that
+the fibres of $|U| \to |X|$ and $|R| \to |X|$ over $x$ are finite.
+But if $Y$ is decent, then the fibres of $|U| \to |Y|$ and
+$|R'| \to |Y|$ are finite. Hence the result for ``decent''.
+
+\medskip\noindent
+To show that $X$ is reasonable, we have to show that the fibres of
+$U \to X$ are universally bounded. However, if $Y$ is reasonable,
+then the fibres of $U \to Y$ are universally bounded, which immediately
+implies the same thing for the fibres of $U \to X$.
+Hence the result for ``reasonable''.
+\end{proof}
+
+
+
+
+
+
+
+\section{Points and specializations}
+\label{section-specializations}
+
+\noindent
+There exists an \'etale morphism of algebraic spaces $f : X \to Y$
+and a nontrivial specialization between points in a fibre of
+$|f| : |X| \to |Y|$, see
+Examples, Lemma \ref{examples-lemma-specializations-fibre-etale}.
+If the source of the morphism is a scheme we can avoid this by
+imposing condition ($\alpha$) on $Y$.
+
+\begin{lemma}
+\label{lemma-no-specializations-map-to-same-point}
+Let $S$ be a scheme.
+Let $X$ be an algebraic space over $S$.
+Let $U \to X$ be an \'etale morphism from a scheme to $X$.
+Assume $u, u' \in |U|$ map to the same point $x$ of $|X|$, and
+$u' \leadsto u$. If the pair $(X, x)$ satisfies the
+equivalent conditions of
+Lemma \ref{lemma-U-finite-above-x}
+then $u = u'$.
+\end{lemma}
+
+\begin{proof}
+Assume the pair $(X, x)$ satisfies the
equivalent conditions of Lemma \ref{lemma-U-finite-above-x}.
Let $U$ be a scheme, let $U \to X$ be \'etale, and
let $u, u' \in |U|$ map to the point $x$ of $|X|$, and
+$u' \leadsto u$. We may and do replace $U$ by an affine
+neighbourhood of $u$. Let $t, s : R = U \times_X U \to U$
+be the \'etale projection maps.
+
+\medskip\noindent
+Pick a point $r \in R$ with $t(r) = u$ and $s(r) = u'$.
+This is possible by
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-presentation}.
+Because generalizations lift along the \'etale morphism $t$
+(Remark \ref{remark-recall}) we can find a specialization $r' \leadsto r$ with
+$t(r') = u'$. Set $u'' = s(r')$. Then $u'' \leadsto u'$.
+Thus we may repeat and find $r'' \leadsto r'$ with
+$t(r'') = u''$. Set $u''' = s(r'')$, and so on.
+Here is a picture:
+$$
+\xymatrix{
+& r'' \ar[rd]^s \ar[ld]_t \ar@{~>}[d] & \\
+u'' \ar@{~>}[d] & r' \ar[rd]^s \ar[ld]_t \ar@{~>}[d] & u''' \ar@{~>}[d] \\
+u' \ar@{~>}[d] & r \ar[rd]^s \ar[ld]_t & u'' \ar@{~>}[d] \\
+u & & u'
+}
+$$
+In Remark \ref{remark-recall} we have seen that there are no specializations
+among points in the fibres of the \'etale morphism $s$. Hence if
+$u^{(n + 1)} = u^{(n)}$ for some $n$, then also $r^{(n)} = r^{(n - 1)}$ and
+hence also (by taking $t$) $u^{(n)} = u^{(n - 1)}$. This then forces the
+whole tower to collapse, in particular $u = u'$. Thus we see that if
+$u \not = u'$, then all the specializations are strict and
+$\{u, u', u'', \ldots\}$ is an infinite set of points in $U$ which map to the
+point $x$ in $|X|$. As we chose $U$ affine this contradicts the second part of
+Lemma \ref{lemma-U-finite-above-x}, as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-specialization}
+Let $S$ be a scheme.
+Let $X$ be an algebraic space over $S$.
+Let $x, x' \in |X|$ and assume $x' \leadsto x$, i.e., $x$ is a
+specialization of $x'$.
+Assume the pair $(X, x')$ satisfies the equivalent conditions
+of Lemma \ref{lemma-UR-finite-above-x}.
+Then for every \'etale morphism $\varphi : U \to X$ from a scheme $U$ and any
$u \in U$ with $\varphi(u) = x$, there exists a point $u' \in U$ with
$u' \leadsto u$ and $\varphi(u') = x'$.
+\end{lemma}
+
+\begin{proof}
+We may replace $U$ by an affine open neighbourhood of $u$.
+Hence we may assume that $U$ is affine. As $x$ is in the
+image of the open map $|U| \to |X|$, so is $x'$. Thus we may
+replace $X$ by the Zariski open subspace corresponding to
+the image of $|U| \to |X|$, see
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-etale-image-open}.
+In other words we may assume that
+$U \to X$ is surjective and \'etale.
+Let $s, t : R = U \times_X U \to U$ be the projections.
+By our assumption that $(X, x')$ satisfies the equivalent conditions of
+Lemma \ref{lemma-UR-finite-above-x}
+we see that the fibres of $|U| \to |X|$ and $|R| \to |X|$
+over $x'$ are finite. Say $\{u'_1, \ldots, u'_n\} \subset U$ and
+$\{r'_1, \ldots, r'_m\} \subset R$ form the complete inverse image
+of $\{x'\}$.
+Consider the closed sets
+$$
+T = \overline{\{u'_1\}} \cup \ldots \cup \overline{\{u'_n\}} \subset |U|,
+\quad
+T' = \overline{\{r'_1\}} \cup \ldots \cup \overline{\{r'_m\}} \subset |R|.
+$$
+Trivially we have $s(T') \subset T$. Because $R$ is an equivalence
+relation we also have $t(T') = s(T')$ as the set $\{r_j'\}$
+is invariant under the inverse of $R$ by construction. Let $w \in T$
+be any point. Then $u'_i \leadsto w$ for some $i$. Choose $r \in R$
+with $s(r) = w$. Since generalizations lift along $s : R \to U$, see
+Remark \ref{remark-recall}, we can find $r' \leadsto r$ with
+$s(r') = u_i'$. Then $r' = r'_j$ for some $j$ and we conclude that
+$w \in s(T')$. Hence $T = s(T') = t(T')$ is an $|R|$-invariant closed
+set in $|U|$. This means $T$ is the inverse image of a closed (!)
+subset $T'' = \varphi(T)$ of $|X|$, see
+Properties of Spaces,
+Lemmas \ref{spaces-properties-lemma-points-presentation} and
+\ref{spaces-properties-lemma-topology-points}.
+Hence $T'' = \overline{\{x'\}}$.
+Thus $T$ contains some point $u_1$ mapping to $x$ as $x \in T''$.
+I.e., we see that for some $i$ there exists a specialization
+$u'_i \leadsto u_1$ which maps to the given specialization
+$x' \leadsto x$.
+
+\medskip\noindent
+To finish the proof, choose a point $r \in R$ such that
+$s(r) = u$ and $t(r) = u_1$ (using
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-cartesian}).
+As generalizations lift along $t$, and $u'_i \leadsto u_1$
+we can find a specialization $r' \leadsto r$ such that $t(r') = u'_i$.
+Set $u' = s(r')$. Then $u' \leadsto u$ and $\varphi(u') = x'$ as
+desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-generalizations-lift-flat}
+Let $S$ be a scheme. Let $f : Y \to X$ be a flat morphism of algebraic spaces
+over $S$. Let $x, x' \in |X|$ and assume $x' \leadsto x$, i.e., $x$ is a
+specialization of $x'$. Assume the pair $(X, x')$ satisfies the equivalent
+conditions of Lemma \ref{lemma-UR-finite-above-x} (for example if
+$X$ is decent, $X$ is quasi-separated, or $X$ is representable).
+Then for every $y \in |Y|$ with $f(y) = x$, there exists a point $y' \in |Y|$,
+$y' \leadsto y$ with $f(y') = x'$.
+\end{lemma}
+
+\begin{proof}
+(The parenthetical statement holds by the definition of decent spaces
+and the implications between the different separation conditions
+mentioned in Section \ref{section-reasonable-decent}.)
+Choose a scheme $V$ and a surjective \'etale morphism $V \to Y$.
+Choose $v \in V$ mapping to $y$. Then we see that it suffices to
+prove the lemma for $V \to X$. Thus we may assume $Y$ is a scheme.
+Choose a scheme $U$ and a surjective \'etale morphism $U \to X$.
+Choose $u \in U$ mapping to $x$. By Lemma \ref{lemma-specialization}
+we may choose $u' \leadsto u$ mapping to $x'$. By
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-points-cartesian}
+we may choose $z \in U \times_X Y$ mapping to $y$ and $u$.
+Thus we reduce to the case of the flat morphism of
+schemes $U \times_X Y \to U$ which is
+Morphisms, Lemma \ref{morphisms-lemma-generalizations-lift-flat}.
+\end{proof}
+
+
+
+
+
+
+\section{Stratifying algebraic spaces by schemes}
+\label{section-stratifications}
+
+\noindent
+In this section we prove that a quasi-compact and quasi-separated
+algebraic space has a finite stratification by locally closed subspaces
+each of which is a scheme and such that the glueing of the parts is by
+elementary distinguished squares. We first prove a slightly weaker
+result for reasonable algebraic spaces.
+
+\begin{lemma}
+\label{lemma-quasi-compact-reasonable-stratification}
+Let $S$ be a scheme. Let $W \to X$ be a morphism of a scheme $W$
+to an algebraic space $X$ which is flat, locally of finite presentation,
+separated, locally quasi-finite with universally bounded fibres. There exist
+reduced closed subspaces
+$$
+\emptyset = Z_{-1} \subset Z_0 \subset Z_1 \subset Z_2 \subset
+\ldots \subset Z_n = X
+$$
+such that with $X_r = Z_r \setminus Z_{r - 1}$ the stratification
+$X = \coprod_{r = 0, \ldots, n} X_r$ is characterized by the following
+universal property: Given $g : T \to X$ the projection
+$W \times_X T \to T$ is finite locally free of degree $r$ if and only if
+$g(|T|) \subset |X_r|$.
+\end{lemma}
+
+\begin{proof}
+Let $n$ be an integer bounding the degrees of the fibres of $W \to X$.
+Choose a scheme $U$ and a surjective \'etale morphism $U \to X$.
+Apply More on Morphisms, Lemma
+\ref{more-morphisms-lemma-stratify-flat-fp-lqf-universally-bounded}
+to $W \times_X U \to U$. We obtain closed subsets
+$$
+\emptyset = Y_{-1} \subset Y_0 \subset Y_1 \subset Y_2 \subset
+\ldots \subset Y_n = U
+$$
+characterized by the property stated in the lemma for the morphism
+$W \times_X U \to U$. Clearly, the formation of these closed subsets commutes
+with base change. Setting $R = U \times_X U$ with projection maps
+$s, t : R \to U$ we conclude that
+$$
+s^{-1}(Y_r) = t^{-1}(Y_r)
+$$
+as closed subsets of $R$. In other words the closed subsets $Y_r \subset U$
+are $R$-invariant. This means that $|Y_r|$ is the inverse image of a closed
+subset $Z_r \subset |X|$. Denote $Z_r \subset X$ also the reduced induced
+algebraic space structure, see
+Properties of Spaces, Definition
+\ref{spaces-properties-definition-reduced-induced-space}.
+
+\medskip\noindent
+Let $g : T \to X$ be a morphism of algebraic spaces. Choose a scheme $V$
+and a surjective \'etale morphism $V \to T$. To prove the final
+assertion of the lemma it suffices to prove the assertion for the composition
+$V \to X$ (by our definition of finite locally free morphisms, see
+Morphisms of Spaces, Section
+\ref{spaces-morphisms-section-finite-locally-free}).
+Similarly, the morphism of schemes $W \times_X V \to V$ is finite
+locally free of degree $r$ if and only if the morphism of schemes
+$$
+W \times_X (U \times_X V)
+\longrightarrow
+U \times_X V
+$$
+is finite locally free of degree $r$ (see
+Descent, Lemma \ref{descent-lemma-descending-property-finite-locally-free}).
+By construction this happens if and only if $|U \times_X V| \to |U|$
+maps into $|Y_r|$, which is true if and only if $|V| \to |X|$ maps
+into $|Z_r|$.
+\end{proof}
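\medskip\noindent
A purely illustrative special case (not used in what follows): suppose
$X$ is a scheme and $W = X \amalg V$ for a quasi-compact open subscheme
$V \subset X$, where $W \to X$ is the identity on the first summand and
the inclusion on the second. This morphism is flat, locally of finite
presentation, separated, locally quasi-finite, and has fibres of degree
$\leq 2$. The stratification of the lemma is
$$
\emptyset = Z_{-1} = Z_0 \subset Z_1 = X \setminus V \subset Z_2 = X
$$
(with reduced structure on $Z_1$). Indeed, for $g : T \to X$ the
projection $W \times_X T = T \amalg (V \times_X T) \to T$ is finite
locally free of degree $2$ if and only if $g(|T|) \subset |V| = |X_2|$,
and finite locally free of degree $1$ if and only if $g(|T|)$ avoids
$|V|$, i.e., $g(|T|) \subset |X_1|$.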
+
+\begin{lemma}
+\label{lemma-stratify-flat-fp-lqf}
+Let $S$ be a scheme. Let $W \to X$ be a morphism of a scheme $W$ to an
+algebraic space $X$ which is flat, locally of finite presentation,
+separated, and locally quasi-finite. Then there
+exist open subspaces
+$$
+X = X_0 \supset X_1 \supset X_2 \supset \ldots
+$$
+such that a morphism $\Spec(k) \to X$ factors through $X_d$ if and
+only if $W \times_X \Spec(k)$ has degree $\geq d$ over $k$.
+\end{lemma}
+
+\begin{proof}
+Choose a scheme $U$ and a surjective \'etale morphism $U \to X$. Apply
+More on Morphisms, Lemma \ref{more-morphisms-lemma-stratify-flat-fp-lqf}
+to $W \times_X U \to U$. We obtain open subschemes
+$$
+U = U_0 \supset U_1 \supset U_2 \supset \ldots
+$$
+characterized by the property stated in the lemma for the morphism
$W \times_X U \to U$. Clearly, the formation of these open subschemes commutes
+with base change. Setting $R = U \times_X U$ with projection maps
+$s, t : R \to U$ we conclude that
+$$
+s^{-1}(U_d) = t^{-1}(U_d)
+$$
+as open subschemes of $R$. In other words the open subschemes $U_d \subset U$
+are $R$-invariant. This means that $U_d$ is the inverse image of an
+open subspace $X_d \subset X$
+(Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-subspaces-presentation}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-filter-quasi-compact}
+Let $S$ be a scheme. Let $X$ be a quasi-compact algebraic space
+over $S$. There exist open subspaces
+$$
+\ldots \subset U_4 \subset U_3 \subset U_2 \subset U_1 = X
+$$
+with the following properties:
+\begin{enumerate}
+\item setting $T_p = U_p \setminus U_{p + 1}$ (with reduced induced subspace
+structure) there exists a separated scheme $V_p$ and a surjective \'etale
+morphism $f_p : V_p \to U_p$ such that $f_p^{-1}(T_p) \to T_p$ is an
+isomorphism,
+\item if $x \in |X|$ can be represented by a quasi-compact morphism
+$\Spec(k) \to X$ from a field, then $x \in T_p$ for some $p$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-quasi-compact-affine-cover}
+we can choose an affine scheme $U$ and a surjective \'etale morphism
+$U \to X$. For $p \geq 0$ set
+$$
+W_p = U \times_X \ldots \times_X U \setminus \text{all diagonals}
+$$
+where the fibre product has $p$ factors. Since $U$ is separated,
+the morphism $U \to X$ is separated and all fibre products
+$U \times_X \ldots \times_X U$ are separated schemes. Since $U \to X$ is
+separated the diagonal $U \to U \times_X U$ is a closed immersion. Since
+$U \to X$ is \'etale the diagonal $U \to U \times_X U$ is an open
+immersion, see Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-etale-unramified} and
+\ref{spaces-morphisms-lemma-diagonal-unramified-morphism}.
+Similarly, all the diagonal morphisms are open and closed immersions and
+$W_p$ is an open and closed subscheme of $U \times_X \ldots \times_X U$.
+Moreover, the morphism
+$$
+U \times_X \ldots \times_X U \longrightarrow
+U \times_{\Spec(\mathbf{Z})} \ldots \times_{\Spec(\mathbf{Z})} U
+$$
+is locally quasi-finite and separated (Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-fibre-product-after-map})
+and its target is an affine scheme. Hence every finite set of points of
+$U \times_X \ldots \times_X U$ is contained in an affine open, see
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-separated-locally-quasi-finite-over-affine}.
+Therefore, the same is true for $W_p$.
+There is a free action of the symmetric group $S_p$ on $W_p$ over $X$
(because we threw out the fixed point locus from
+$U \times_X \ldots \times_X U$). By the above and
+Properties of Spaces, Proposition
+\ref{spaces-properties-proposition-finite-flat-equivalence-global}
+the quotient $V_p = W_p/S_p$ is a scheme. Since the action of
+$S_p$ on $W_p$ was over $X$, there is a morphism $V_p \to X$.
+Since $W_p \to X$ is \'etale and since $W_p \to V_p$ is surjective
+\'etale, it follows that also $V_p \to X$ is \'etale, see
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-etale-local}.
+Observe that $V_p$ is a separated scheme by
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-quotient-separated}.
+
+\medskip\noindent
+We let $U_p \subset X$ be the open subspace which is the
+image of $V_p \to X$. By construction a morphism $\Spec(k) \to X$ with
$k$ algebraically closed factors through $U_p$ if and only if
+$U \times_X \Spec(k)$ has $\geq p$ points; as usual observe that
+$U \times_X \Spec(k)$ is scheme theoretically a disjoint union of
+(possibly infinitely many) copies of $\Spec(k)$, see
+Remark \ref{remark-recall}. It follows that
+the $U_p$ give a filtration of $X$ as stated in the lemma.
+Moreover, our morphism $\Spec(k) \to X$ factors through $T_p$
+if and only if $U \times_X \Spec(k)$ has exactly $p$ points.
+In this case we see that $V_p \times_X \Spec(k)$ has exactly one point.
+Set $Z_p = f_p^{-1}(T_p) \subset V_p$. This is a closed subscheme of $V_p$.
+Then $Z_p \to T_p$ is an \'etale morphism between
+algebraic spaces which induces a bijection on $k$-valued
+points for any algebraically closed field $k$. To be sure this
+implies that $Z_p \to T_p$ is universally injective, whence an
+open immersion by
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-etale-universally-injective-open}
+hence an isomorphism and (1) has been proved.
+
+\medskip\noindent
+Let $x : \Spec(k) \to X$ be a quasi-compact morphism where $k$ is a field.
+Then the composition $\Spec(\overline{k}) \to \Spec(k) \to X$ is quasi-compact
+as well (Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-composition-quasi-compact}).
+In this case the scheme $U \times_X \Spec(\overline{k})$ is
+quasi-compact. In view of the fact (seen above) that it is a disjoint union
+of copies of $\Spec(\overline{k})$ we find that it has finitely many points.
+If the number of points is $p$, then we see that indeed $x \in T_p$ and
+the proof is finished.
+\end{proof}
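\medskip\noindent
An illustrative example of the construction (with $X$ already a scheme):
take $X = \Spec(\mathbf{R})$ and $U = \Spec(\mathbf{C})$. Then
$W_1 = U$ and
$$
U \times_X U = \Spec(\mathbf{C} \otimes_{\mathbf{R}} \mathbf{C})
= \Spec(\mathbf{C} \times \mathbf{C})
$$
is a disjoint union of two copies of $\Spec(\mathbf{C})$, one of which
is the diagonal. Hence $W_2$ is a single copy of $\Spec(\mathbf{C})$ on
which $S_2$ acts through complex conjugation, so that
$V_2 = W_2/S_2 = \Spec(\mathbf{R})$, and $W_p = \emptyset$ for
$p \geq 3$. Thus $U_1 = U_2 = X$, $U_3 = \emptyset$, $T_1 = \emptyset$,
$T_2 = X$, and $f_2 : V_2 \to U_2$ is an isomorphism over $T_2$, as
predicted by the lemma.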
+
+\begin{lemma}
+\label{lemma-filter-reasonable}
+Let $S$ be a scheme. Let $X$ be a quasi-compact, reasonable algebraic space
+over $S$. There exist an integer $n$ and open subspaces
+$$
+\emptyset = U_{n + 1} \subset
+U_n \subset U_{n - 1} \subset \ldots \subset U_1 = X
+$$
+with the following property: setting $T_p = U_p \setminus U_{p + 1}$
+(with reduced induced subspace structure) there exists a separated scheme
+$V_p$ and a surjective \'etale morphism $f_p : V_p \to U_p$ such that
+$f_p^{-1}(T_p) \to T_p$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+The proof of this lemma is identical to the proof of
+Lemma \ref{lemma-filter-quasi-compact}.
Let $n$ be an integer bounding the degrees of the fibres of
the \'etale morphism $U \to X$ appearing in that proof; such a
bound exists as $X$ is reasonable, see
+Definition \ref{definition-very-reasonable}.
+Then we see that $U_{n + 1} = \emptyset$ and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-stratify-reasonable}
+Let $S$ be a scheme. Let $X$ be a quasi-compact, reasonable algebraic space
+over $S$. There exist an integer $n$ and open subspaces
+$$
+\emptyset = U_{n + 1} \subset
+U_n \subset U_{n - 1} \subset \ldots \subset U_1 = X
+$$
+such that each $T_p = U_p \setminus U_{p + 1}$ (with reduced induced subspace
+structure) is a scheme.
+\end{lemma}
+
+\begin{proof}
+Immediate consequence of Lemma \ref{lemma-filter-reasonable}.
+\end{proof}
+
+\noindent
+The following result is almost identical to
+\cite[Proposition 5.7.8]{GruRay}.
+
+\begin{lemma}
+\label{lemma-filter-quasi-compact-quasi-separated}
+\begin{reference}
+This result is almost identical to \cite[Proposition 5.7.8]{GruRay}.
+\end{reference}
+Let $X$ be a quasi-compact and quasi-separated algebraic space over
+$\Spec(\mathbf{Z})$. There exist an integer $n$ and open subspaces
+$$
+\emptyset = U_{n + 1} \subset
+U_n \subset U_{n - 1} \subset \ldots \subset U_1 = X
+$$
+with the following property: setting $T_p = U_p \setminus U_{p + 1}$
+(with reduced induced subspace structure) there exists a quasi-compact
+separated scheme $V_p$ and a surjective \'etale morphism $f_p : V_p \to U_p$
+such that $f_p^{-1}(T_p) \to T_p$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+The proof of this lemma is identical to the proof of
+Lemma \ref{lemma-filter-quasi-compact}.
+Observe that a quasi-separated space is reasonable, see
+Lemma \ref{lemma-bounded-fibres} and
+Definition \ref{definition-very-reasonable}.
+Hence we find that $U_{n + 1} = \emptyset$ as in
+Lemma \ref{lemma-filter-reasonable}.
+At the end of the argument we add that since $X$ is quasi-separated
+the schemes $U \times_X \ldots \times_X U$ are all quasi-compact.
+Hence the schemes $W_p$ are quasi-compact. Hence the
+quotients $V_p = W_p/S_p$ by the symmetric group $S_p$ are quasi-compact
+schemes.
+\end{proof}
+
+\noindent
+The following lemma probably belongs somewhere else.
+
+\begin{lemma}
+\label{lemma-locally-constructible}
+Let $S$ be a scheme. Let $X$ be a quasi-separated algebraic space over $S$.
+Let $E \subset |X|$ be a subset. Then $E$ is \'etale locally constructible
+(Properties of Spaces, Definition
+\ref{spaces-properties-definition-locally-constructible})
+if and only if $E$ is a locally constructible subset of
+the topological space $|X|$
+(Topology, Definition \ref{topology-definition-constructible}).
+\end{lemma}
+
+\begin{proof}
+Assume $E \subset |X|$ is a locally constructible subset of
+the topological space $|X|$. Let $f : U \to X$ be an
+\'etale morphism where $U$ is a scheme. We have to show that
+$f^{-1}(E)$ is locally constructible in $U$. The question is
+local on $U$ and $X$, hence we may assume that $X$ is quasi-compact,
+$E \subset |X|$ is constructible, and $U$ is affine.
+In this case $U \to X$ is quasi-compact, hence
+$f : |U| \to |X|$ is quasi-compact. Observe that retrocompact
+opens of $|X|$, resp.\ $U$ are the same thing as quasi-compact opens
+of $|X|$, resp.\ $U$, see
+Topology, Lemma \ref{topology-lemma-topology-quasi-separated-scheme}.
+Thus $f^{-1}(E)$ is constructible by Topology, Lemma
+\ref{topology-lemma-inverse-images-constructibles}.
+
+\medskip\noindent
+Conversely, assume $E$ is \'etale locally constructible.
+We want to show that $E$ is locally constructible in the
+topological space $|X|$.
+The question is local on $X$, hence we may assume that $X$ is
+quasi-compact as well as quasi-separated. We will show that
+in this case $E$ is constructible in $|X|$.
+Choose open subspaces
+$$
+\emptyset = U_{n + 1} \subset
+U_n \subset U_{n - 1} \subset \ldots \subset U_1 = X
+$$
+and surjective \'etale morphisms $f_p : V_p \to U_p$
+inducing isomorphisms $f_p^{-1}(T_p) \to T_p = U_p \setminus U_{p + 1}$
+where $V_p$ is a quasi-compact separated scheme as in
+Lemma \ref{lemma-filter-quasi-compact-quasi-separated}.
+By definition the inverse image $E_p \subset V_p$ of $E$ is
+locally constructible in $V_p$. Then $E_p$ is constructible in $V_p$
+by Properties, Lemma
+\ref{properties-lemma-constructible-quasi-compact-quasi-separated}.
+Thus $E_p \cap |f_p^{-1}(T_p)| = E \cap |T_p|$ is constructible
+in $|T_p|$ by
+Topology, Lemma \ref{topology-lemma-intersect-constructible-with-closed}
+(observe that $V_p \setminus f_p^{-1}(T_p)$ is quasi-compact as it is the
+inverse image of the quasi-compact space $U_{p + 1}$ by the
+quasi-compact morphism $f_p$).
+Thus
+$$
+E = (|T_n| \cap E) \cup (|T_{n - 1}| \cap E) \cup \ldots \cup
+(|T_1| \cap E)
+$$
+is constructible by
+Topology, Lemma \ref{topology-lemma-collate-constructible-from-constructible}.
+Here we use that $|T_p|$ is constructible in $|X|$ which is clear from
+what was said above.
+\end{proof}
+
+
+
+
+
+\section{Integral cover by a scheme}
+\label{section-integral-cover}
+
+\noindent
+Here we prove that given any quasi-compact and quasi-separated
+algebraic space $X$, there is a scheme $Y$ and a surjective, integral
+morphism $Y \to X$. After we develop some theory about limits of
+algebraic spaces, we will prove that one can do this with a finite
+morphism, see
+Limits of Spaces, Section \ref{spaces-limits-section-finite-cover}.
+
+\begin{lemma}
+\label{lemma-extend-integral-morphism}
+Let $S$ be a scheme. Let $j : V \to Y$ be a quasi-compact open immersion
+of algebraic spaces over $S$. Let $\pi : Z \to V$ be an integral morphism.
+Then there exists an integral morphism $\nu : Y' \to Y$ such that
+$Z$ is $V$-isomorphic to the inverse image of $V$ in $Y'$.
+\end{lemma}
+
+\begin{proof}
+Since both $j$ and $\pi$ are quasi-compact and separated, so is
+$j \circ \pi$. Let $\nu : Y' \to Y$ be the normalization of $Y$ in $Z$, see
+Morphisms of Spaces, Section
+\ref{spaces-morphisms-section-normalization-X-in-Y}.
+Of course $\nu$ is integral, see
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-characterize-normalization}.
+The final statement follows formally from
+Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-properties-normalization} and
+\ref{spaces-morphisms-lemma-normalization-in-integral}.
+\end{proof}
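\medskip\noindent
An illustrative example in the case of schemes over a field $k$: take
$Y = \Spec(k[t])$, $V = \Spec(k[t, t^{-1}])$, and let
$\pi : Z = \Spec(k[u, u^{-1}]) \to V$ be the finite (hence integral)
morphism given by $t \mapsto u^2$. The integral closure of $k[t]$ in
$k[u, u^{-1}]$ is $k[u]$, so the normalization of $Y$ in $Z$ is
$\nu : Y' = \Spec(k[u]) \to Y$, $t \mapsto u^2$. Since $t$ is invertible
if and only if $u$ is, we indeed find
$\nu^{-1}(V) = \Spec(k[u, u^{-1}]) = Z$.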
+
+\begin{lemma}
+\label{lemma-there-is-a-scheme-integral-over}
+Let $S$ be a scheme. Let $X$ be a quasi-compact and quasi-separated
+algebraic space over $S$.
+\begin{enumerate}
+\item There exists a surjective integral morphism $Y \to X$ where $Y$
+is a scheme,
+\item given a surjective \'etale morphism $U \to X$ we may choose
+$Y \to X$ such that for every $y \in Y$ there is an open neighbourhood
+$V \subset Y$ such that $V \to X$ factors through $U$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is the special case of part (2) where $U = X$.
+Choose a surjective \'etale morphism $U' \to U$
+where $U'$ is a scheme. It is clear that we may replace $U$ by $U'$
+and hence we may assume $U$ is a scheme. Since $X$ is quasi-compact,
+there exist finitely many affine opens $U_i \subset U$ such that
+$U' = \coprod U_i \to X$ is surjective.
+After replacing $U$ by $U'$ again, we see that we may assume $U$ is affine.
+Since $X$ is quasi-separated, hence reasonable, there exists an integer
+$d$ bounding the degree of the geometric fibres of $U \to X$
+(see Lemma \ref{lemma-bounded-fibres}).
+We will prove the lemma by induction on $d$ for all quasi-compact
+and separated schemes $U$ mapping surjective and \'etale onto $X$.
+If $d = 1$, then $U = X$ and the result holds with $Y = U$.
+Assume $d > 1$.
+
+\medskip\noindent
+We apply Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-quasi-finite-separated-quasi-affine}
+and we obtain a factorization
+$$
+\xymatrix{
+U \ar[rr]_j \ar[rd] & & Y \ar[ld]^\pi \\
+& X
+}
+$$
+with $\pi$ integral and $j$ a quasi-compact open immersion. We may and do
+assume that $j(U)$ is scheme theoretically dense in $Y$. Then $U \times_X Y$
+is a quasi-compact, separated scheme (being finite over $U$) and we have
+$$
+U \times_X Y = U \amalg W
+$$
+Here the first summand is the image of $U \to U \times_X Y$
+(which is closed by
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-semi-diagonal}
+and open because it is \'etale as a morphism between
+algebraic spaces \'etale over $Y$) and
+the second summand is the (open and closed) complement.
+The image $V \subset Y$ of $W$ is an open subspace containing
+$Y \setminus U$.
+
+\medskip\noindent
+The \'etale morphism $W \to Y$ has geometric fibres of cardinality $< d$.
+Namely, this is clear for geometric points of $U \subset Y$ by inspection.
+Since $|U| \subset |Y|$ is dense, it holds for all geometric points of $Y$
+by Lemma \ref{lemma-quasi-compact-reasonable-stratification}
+(the degree of the fibres of a quasi-compact \'etale morphism
+does not go up under specialization). Thus we may apply the induction
+hypothesis to $W \to V$ and find a surjective integral morphism
+$Z \to V$ with $Z$ a scheme, which Zariski locally factors through $W$.
+Choose a factorization $Z \to Z' \to Y$ with $Z' \to Y$ integral and
+$Z \to Z'$ open immersion
+(Lemma \ref{lemma-extend-integral-morphism}).
+After replacing $Z'$ by the scheme theoretic closure of $Z$ in $Z'$
+we may assume that $Z$ is scheme theoretically dense in $Z'$.
+After doing this we have $Z' \times_Y V = Z$. Finally,
+let $T \subset Y$ be the induced closed subspace structure on $Y \setminus V$.
+Consider the morphism
+$$
+Z' \amalg T \longrightarrow X
+$$
+This is a surjective integral morphism by construction.
+Since $T \subset U$ it is clear that the morphism $T \to X$
+factors through $U$. On the other hand, let $z \in Z'$
+be a point. If $z \not \in Z$, then $z$ maps to a point of
+$Y \setminus V \subset U$ and we find a neighbourhood of $z$
+on which the morphism factors through $U$.
+If $z \in Z$, then we have an open neighbourhood of $z$ in $Z$
+(which is also an open neighbourhood of $z$ in $Z'$)
+which factors through $W \subset U \times_X Y$ and hence through $U$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-there-is-a-scheme-integral-over-refined}
+Let $S$ be a scheme. Let $X$ be a quasi-compact and quasi-separated
+algebraic space over $S$ such that $|X|$ has finitely many irreducible
+components.
+\begin{enumerate}
+\item There exists a surjective integral morphism $Y \to X$ where $Y$
is a scheme such that $Y \to X$ is finite \'etale over a quasi-compact
+dense open $U \subset X$,
+\item given a surjective \'etale morphism $V \to X$ we may choose
+$Y \to X$ such that for every $y \in Y$ there is an open neighbourhood
+$W \subset Y$ such that $W \to X$ factors through $V$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
The proof is (roughly) the same as the proof of
+Lemma \ref{lemma-there-is-a-scheme-integral-over}
+with additional technical comments to obtain
+the dense quasi-compact open $U$ (and unfortunately
+changes in notation to keep track of $U$).
+
+\medskip\noindent
+Part (1) is the special case of part (2) where $V = X$.
+
+\medskip\noindent
+Proof of (2). Choose a surjective \'etale morphism $V' \to V$
+where $V'$ is a scheme. It is clear that we may replace $V$ by $V'$
+and hence we may assume $V$ is a scheme. Since $X$ is quasi-compact,
+there exist finitely many affine opens $V_i \subset V$ such that
+$V' = \coprod V_i \to X$ is surjective.
+After replacing $V$ by $V'$ again, we see that we may assume $V$ is affine.
+Since $X$ is quasi-separated, hence reasonable, there exists an integer
+$d$ bounding the degree of the geometric fibres of $V \to X$
+(see Lemma \ref{lemma-bounded-fibres}).
+
+\medskip\noindent
+By induction on $d \geq 1$ we will prove the following induction
+hypothesis $(H_d)$:
+\begin{itemize}
+\item for any quasi-compact and quasi-separated algebraic space
+$X$ with finitely many irreducible components, for any $m \geq 0$,
+for any quasi-compact and separated schemes $V_j$, $j = 1, \ldots, m$,
+for any \'etale morphisms $\varphi_j : V_j \to X$, $j = 1, \ldots, m$
+such that $d$ bounds the degree of the geometric fibres of
+$\varphi_j : V_j\to X$ and
+$\varphi = \coprod \varphi_j : V = \coprod V_j \to X$
+is surjective, the statement of the lemma holds for $\varphi : V \to X$.
+\end{itemize}
+If $d = 1$, then each $\varphi_j$ is an open immersion. Hence $X$
+is a scheme and the result holds with $Y = V$.
+Assume $d > 1$, assume $(H_{d - 1})$ and let
$m$, $\varphi_j : V_j \to X$, $j = 1, \ldots, m$ be as in $(H_d)$.
+
+\medskip\noindent
+Let $\eta_1, \ldots, \eta_n \in |X|$ be the generic points of the
+irreducible components of $|X|$. By
+Properties of Spaces, Proposition
+\ref{spaces-properties-proposition-locally-quasi-separated-open-dense-scheme}
+there is an open subscheme $U \subset X$ with $\eta_1, \ldots, \eta_n \in U$.
+By shrinking $U$ we may assume $U$ affine and by
+Morphisms, Lemma \ref{morphisms-lemma-generically-finite}
+we may assume each $\varphi_j : V_j \to X$ is finite \'etale over $U$.
+Of course, we see that $U$ is quasi-compact and dense in $X$
+and that $\varphi_j^{-1}(U)$ is dense in $V_j$. In particular each $V_j$
+has finitely many irreducible components.
+
+\medskip\noindent
+Fix $j \in \{1, \ldots, m\}$.
+As in Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-quasi-finite-separated-quasi-affine}
+we let $Y_j$ be the normalization of $X$ in $V_j$. We obtain a factorization
+$$
+\xymatrix{
+V_j \ar[rr] \ar[rd]_{\varphi_j} & & Y_j \ar[ld]^{\pi_j} \\
+& X
+}
+$$
+with $\pi_j$ integral and $V_j \to Y_j$ a quasi-compact open immersion. Since
+$Y_j$ is the normalization of $X$ in $V_j$, we see from
+Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-properties-normalization} and
+\ref{spaces-morphisms-lemma-normalization-in-integral}
+that $\varphi_j^{-1}(U) \to \pi_j^{-1}(U)$ is an isomorphism.
+Thus $\pi_j$ is finite \'etale over $U$. Observe that $V_j$
+is scheme theoretically dense in $Y_j$ because $Y_j$ is the normalization
+of $X$ in $V_j$ (follows from the characterization of relative normalization
+in Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-characterize-normalization}). Since $V_j$
+is quasi-compact we see that $|V_j| \subset |Y_j|$ is dense,
+see Morphisms of Spaces, Section
+\ref{spaces-morphisms-section-scheme-theoretic-closure}
+(and especially Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-quasi-compact-immersion}).
+It follows that $|Y_j|$ has finitely many irreducible components.
+Then $V_j \times_X Y_j$ is a quasi-compact, separated scheme
+(being finite over $V_j$) and
+$$
+V_j \times_X Y_j = V_j \amalg W_j
+$$
+Here the first summand is the image of $V_j \to V_j \times_X Y_j$
+(which is closed by
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-semi-diagonal}
+and open because it is \'etale as a morphism between
algebraic spaces \'etale over $Y_j$) and
+the second summand is the (open and closed) complement.
+
+\medskip\noindent
+The \'etale morphism $W_j \to Y_j$ has geometric fibres of cardinality $< d$.
+Namely, this is clear for geometric points of $V_j \subset Y_j$ by inspection.
+Since $|V_j| \subset |Y_j|$ is dense, it holds for all geometric points
+of $Y_j$ by Lemma \ref{lemma-quasi-compact-reasonable-stratification}
+(the degree of the fibres of a quasi-compact \'etale morphism
+does not go up under specialization). By $(H_{d - 1})$ applied
+to $V_j \amalg W_j \to Y_j$ we find a surjective integral morphism
+$Y_j' \to Y_j$ with $Y_j'$ a scheme, which Zariski locally factors
+through $V_j \amalg W_j$, and which is finite \'etale over a
+quasi-compact dense open $U_j \subset Y_j$. After shrinking $U$
+we may and do assume that $\pi_j^{-1}(U) \subset U_j$
+(we may and do choose the same $U$ for all $j$; some details omitted).
+
+\medskip\noindent
+We claim that
+$$
+Y = \coprod\nolimits_{j = 1, \ldots, m} Y'_j \longrightarrow X
+$$
+is the solution to our problem. First, this morphism is integral
as on each summand we have the composition $Y'_j \to Y_j \to X$
+of integral morphisms (Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-composition-integral}). Second, this
+morphism Zariski locally factors through $V = \coprod V_j$ because
+we saw above that each $Y'_j \to Y_j$ factors Zariski locally through
+$V_j \amalg W_j = V_j \times_X Y_j$. Finally, since both
+$Y'_j \to Y_j$ and $Y_j \to X$ are finite \'etale over
+$U$, so is the composition. This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Schematic locus}
+\label{section-schematic}
+
+\noindent
+In this section we prove that a decent algebraic space has a dense open
+subspace which is a scheme. We first prove this for reasonable algebraic
+spaces.
+
+
+\begin{proposition}
+\label{proposition-reasonable-open-dense-scheme}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+If $X$ is reasonable, then there exists a dense open subspace
+of $X$ which is a scheme.
+\end{proposition}
+
+\begin{proof}
+By Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-subscheme}
+the question is local on $X$. Hence we may assume there exists an affine
+scheme $U$ and a surjective \'etale morphism $U \to X$
+(Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-cover-by-union-affines}).
+Let $n$ be an integer bounding the degrees of the fibres of $U \to X$
+which exists as $X$ is reasonable, see
+Definition \ref{definition-very-reasonable}.
+We will argue by induction on $n$ that whenever
+\begin{enumerate}
+\item $U \to X$ is a surjective \'etale morphism whose fibres have
+degree $\leq n$, and
+\item $U$ is isomorphic to a locally closed subscheme of an affine scheme
+\end{enumerate}
+then the schematic locus is dense in $X$.
+
+\medskip\noindent
+Let $X_n \subset X$ be the open subspace which is the complement of the
+closed subspace $Z_{n - 1} \subset X$ constructed in
+Lemma \ref{lemma-quasi-compact-reasonable-stratification}
+using the morphism $U \to X$.
+Let $U_n \subset U$ be the inverse image of $X_n$. Then
+$U_n \to X_n$ is finite locally free of degree $n$.
+Hence $X_n$ is a scheme by
+Properties of Spaces, Proposition
+\ref{spaces-properties-proposition-finite-flat-equivalence-global}
+(and the fact that any finite set of points of $U_n$ is contained in
+an affine open of $U_n$, see
+Properties, Lemma \ref{properties-lemma-ample-finite-set-in-affine}).
+
+\medskip\noindent
+Let $X' \subset X$ be the open subspace such that $|X'|$ is the
+interior of $|Z_{n - 1}|$ in $|X|$ (see
+Topology, Definition \ref{topology-definition-nowhere-dense}).
+Let $U' \subset U$ be the inverse image. Then $U' \to X'$ is surjective
+\'etale and has degrees of fibres bounded by $n - 1$. By induction
we see that the schematic locus of $X'$ is a dense open subspace $X'' \subset X'$.
+By elementary topology we see that $X'' \cup X_n \subset X$ is
+open and dense and we win.
+\end{proof}
+
+\begin{theorem}[David Rydh]
+\label{theorem-decent-open-dense-scheme}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+If $X$ is decent, then there exists a dense open subspace
+of $X$ which is a scheme.
+\end{theorem}
+
+\begin{proof}
+Assume $X$ is a decent algebraic space for which the theorem is false. By
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-subscheme}
+there exists a largest open subspace $X' \subset X$ which is a scheme.
+Since $X'$ is not dense in $X$, there exists an open subspace
+$X'' \subset X$ such that $|X''| \cap |X'| = \emptyset$. Replacing $X$
+by $X''$ we get a nonempty decent algebraic space $X$ which does not
+contain {\it any} open subspace which is a scheme.
+
+\medskip\noindent
+Choose a nonempty affine scheme $U$ and an \'etale morphism $U \to X$.
We may and do replace $X$ by the open subspace corresponding to the
image of $|U| \to |X|$. Consider the sequence of open subspaces
+$$
X = X_0 \supset X_1 \supset X_2 \supset \ldots
+$$
+constructed in Lemma \ref{lemma-stratify-flat-fp-lqf}
+for the morphism $U \to X$. Note that $X_0 = X_1$ as $U \to X$
is surjective. Let $U = U_0 = U_1 \supset U_2 \supset \ldots$ be the induced
sequence of open subschemes of $U$.
+
+\medskip\noindent
+Choose a nonempty open affine $V_1 \subset U_1$ (for example $V_1 = U_1$).
+By induction we will construct a sequence of nonempty affine opens
+$V_1 \supset V_2 \supset \ldots$ with $V_n \subset U_n$. Namely, having
+constructed $V_1, \ldots, V_{n - 1}$ we can always choose $V_n$ unless
+$V_{n - 1} \cap U_n = \emptyset$. But if $V_{n - 1} \cap U_n = \emptyset$,
+then the open subspace $X' \subset X$ with
+$|X'| = \Im(|V_{n - 1}| \to |X|)$ is contained in $|X| \setminus |X_n|$.
+Hence $V_{n - 1} \to X'$ is an \'etale morphism whose fibres have degree
+bounded by $n - 1$. In other words, $X'$ is reasonable (by definition),
+hence $X'$ contains a nonempty open subscheme by
+Proposition \ref{proposition-reasonable-open-dense-scheme}.
+This is a contradiction which shows that we can pick $V_n$.
+
+\medskip\noindent
+By Limits, Lemma \ref{limits-lemma-limit-nonempty}
+the limit $V_\infty = \lim V_n$ is a nonempty scheme. Pick a morphism
+$\Spec(k) \to V_\infty$. The composition $\Spec(k) \to V_\infty \to U \to X$
+has image contained in all $X_d$ by construction. In other words, the
fibre product $U \times_X \Spec(k)$ has infinite degree, which contradicts
+the definition of a decent space. This contradiction finishes the proof
+of the theorem.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-quotient-scheme-at-point}
+Let $S$ be a scheme. Let $X \to Y$ be a surjective finite locally free
+morphism of algebraic spaces over $S$. For $y \in |Y|$ the following are
+equivalent
+\begin{enumerate}
+\item $y$ is in the schematic locus of $Y$, and
+\item there exists an affine open $U \subset X$
+containing the preimage of $y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
If $y \in |Y|$ is in the schematic locus, then it has an affine open
neighbourhood $V \subset Y$ and the inverse image $U$ of $V$ in $X$
is an open subspace finite over $V$, hence affine. Thus (1) implies (2).
+
+\medskip\noindent
+Conversely, assume that $U \subset X$ as in (2) is given.
+Set $R = X \times_Y X$ and denote the projections $s, t : R \to X$.
Consider $Z = R \setminus (s^{-1}(U) \cap t^{-1}(U))$.
+This is a closed subset of $R$. The image $t(Z)$ is a closed
+subset of $X$ which can loosely be described as the set of
+points of $X$ which are $R$-equivalent to a point of
+$X \setminus U$. Hence $U' = X \setminus t(Z)$ is an $R$-invariant,
+open subspace of $X$ contained in $U$ which contains
+the fibre of $X \to Y$ over $y$. Since $X \to Y$ is open
+(Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-fppf-open})
+the image of $U'$ is an open subspace $V' \subset Y$.
+Since $U'$ is $R$-invariant and $R = X \times_Y X$, we see that $U'$ is the
+inverse image of $V'$ (use
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-points-cartesian}).
+After replacing $Y$ by $V'$ and $X$ by $U'$ we see that we may assume
+$X$ is a scheme isomorphic to an open subscheme of an affine scheme.
+
+\medskip\noindent
+Assume $X$ is a scheme isomorphic to an open subscheme of an affine scheme.
+In this case the fppf quotient sheaf $X/R$ is a scheme, see
+Properties of Spaces, Proposition
+\ref{spaces-properties-proposition-finite-flat-equivalence-global}.
Since $Y$ is a sheaf in the fppf topology, we obtain a canonical
map $X/R \to Y$ factoring $X \to Y$. Since $X \to Y$ is surjective
+finite locally free, it is surjective as a map of sheaves
+(Spaces, Lemma \ref{spaces-lemma-surjective-flat-locally-finite-presentation}).
+We conclude that $X/R \to Y$ is surjective as a map of sheaves.
+On the other hand, since $R = X \times_Y X$ as sheaves we conclude that
+$X/R \to Y$ is injective as a map of sheaves. Hence $X/R \to Y$
+is an isomorphism and we see that $Y$ is representable.
+\end{proof}
+
+\noindent
+At this point we have several different ways for proving the following
+lemma.
+
+\begin{lemma}
+\label{lemma-finite-etale-cover-dense-open-scheme}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+If there exists a finite, \'etale, surjective morphism
+$U \to X$ where $U$ is a scheme, then there exists a dense open subspace
+of $X$ which is a scheme.
+\end{lemma}
+
+\begin{proof}[First proof]
+The morphism $U \to X$ is finite locally free. Hence there is a decomposition
+of $X$ into open and closed subspaces $X_d \subset X$ such that
+$U \times_X X_d \to X_d$ is finite locally free of degree $d$.
+Thus we may assume $U \to X$ is finite locally free of degree $d$.
+In this case, let $U_i \subset U$, $i \in I$ be the set of affine opens.
+For each $i$ the morphism $U_i \to X$ is \'etale and has
+universally bounded fibres (namely, bounded by $d$).
+In other words, $X$ is reasonable and
+the result follows from
+Proposition \ref{proposition-reasonable-open-dense-scheme}.
+\end{proof}
+
+\begin{proof}[Second proof]
+The question is local on $X$
+(Properties of Spaces, Lemma \ref{spaces-properties-lemma-subscheme}),
hence we may assume $X$ is quasi-compact. Then $U$ is quasi-compact.
+Then there exists a dense open subscheme $W \subset U$ which is
+separated (Properties, Lemma
+\ref{properties-lemma-quasi-compact-dense-open-separated}).
+Set $Z = U \setminus W$.
+Let $R = U \times_X U$ and $s, t : R \to U$ the projections.
+Then $t^{-1}(Z)$ is nowhere dense in $R$
+(Topology, Lemma \ref{topology-lemma-open-inverse-image-closed-nowhere-dense})
+and hence $\Delta = s(t^{-1}(Z))$ is an $R$-invariant
+closed nowhere dense subset of $U$
+(Morphisms, Lemma \ref{morphisms-lemma-image-nowhere-dense-finite}).
+Let $u \in U \setminus \Delta$ be a generic point of an
+irreducible component. Since these points are dense in $U \setminus \Delta$
+and since $\Delta$ is nowhere dense, it suffices to show that the image
+$x \in X$ of $u$ is in the schematic locus of $X$.
+Observe that $t(s^{-1}(\{u\})) \subset W$ is a
+finite set of generic points of irreducible components of $W$
+(compare with
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-codimension-0-points}).
+By Properties, Lemma \ref{properties-lemma-maximal-points-affine}
+we can find an affine open $V \subset W$ such that
+$t(s^{-1}(\{u\})) \subset V$. Since $t(s^{-1}(\{u\}))$ is the fibre
+of $|U| \to |X|$ over $x$, we conclude by
+Lemma \ref{lemma-when-quotient-scheme-at-point}.
+\end{proof}
+
+\begin{proof}[Third proof]
+(This proof is essentially the same as the second proof, but uses
+fewer references.)
+Assume $X$ is an algebraic space, $U$ a scheme, and $U \to X$ is a finite
+\'etale surjective morphism. Write $R = U \times_X U$ and denote
+$s, t : R \to U$ the projections as usual. Note that $s, t$ are surjective,
+finite and \'etale. Claim: The union of the $R$-invariant affine opens of
+$U$ is topologically dense in $U$.
+
+\medskip\noindent
+Proof of the claim. Let $W \subset U$ be an affine open.
+Set $W' = t(s^{-1}(W)) \subset U$. Since $s^{-1}(W)$ is affine
+(hence quasi-compact) we see that $W' \subset U$ is a quasi-compact open. By
+Properties, Lemma \ref{properties-lemma-quasi-compact-dense-open-separated}
+there exists a dense open $W'' \subset W'$ which is a separated scheme.
Set $\Delta' = W' \setminus W''$. This is a nowhere dense closed subset of
$W'$. Since $t|_{s^{-1}(W)} : s^{-1}(W) \to W'$ is open (because it is \'etale)
+we see that the inverse image
+$(t|_{s^{-1}(W)})^{-1}(\Delta') \subset s^{-1}(W)$
+is a nowhere dense closed subset (see
+Topology, Lemma \ref{topology-lemma-open-inverse-image-closed-nowhere-dense}).
+Hence, by
+Morphisms, Lemma \ref{morphisms-lemma-image-nowhere-dense-finite}
+we see that
+$$
+\Delta = s\left((t|_{s^{-1}(W)})^{-1}(\Delta')\right)
+$$
+is a nowhere dense closed subset of $W$. Pick any point $\eta \in W$,
+$\eta \not \in \Delta$ which is a generic point of an irreducible
+component of $W$ (and hence of $U$). By our choices above the finite set
+$t(s^{-1}(\{\eta\})) = \{\eta_1, \ldots, \eta_n\}$
+is contained in the separated scheme $W''$.
Note that the fibres of $s$ are finite discrete spaces, and that
+generalizations lift along the \'etale morphism $t$, see
+Morphisms, Lemmas \ref{morphisms-lemma-etale-flat}
+and \ref{morphisms-lemma-generalizations-lift-flat}.
+In this way we see that each $\eta_i$ is a generic point of an
+irreducible component of $W''$. Thus, by
+Properties, Lemma \ref{properties-lemma-maximal-points-affine}
+we can find an affine open $V \subset W''$ such that
+$\{\eta_1, \ldots, \eta_n\} \subset V$.
+By
+Groupoids, Lemma \ref{groupoids-lemma-find-invariant-affine}
+this implies that $\eta$ is contained in an $R$-invariant affine
+open subscheme of $U$. The claim follows as $W$ was chosen as an
+arbitrary affine open of $U$ and because the set of generic points
+of irreducible components of $W \setminus \Delta$ is dense in $W$.
+
+\medskip\noindent
+Using the claim we can finish the proof. Namely, if $W \subset U$ is
+an $R$-invariant affine open, then the restriction $R_W$ of $R$ to $W$
+equals $R_W = s^{-1}(W) = t^{-1}(W)$ (see
+Groupoids, Definition \ref{groupoids-definition-invariant-open}
+and discussion following it). In particular the maps $R_W \to W$ are
+finite \'etale also. It follows in particular that $R_W$ is affine.
+Thus we see that $W/R_W$ is a scheme, by
+Groupoids, Proposition \ref{groupoids-proposition-finite-flat-equivalence}.
+On the other hand, $W/R_W$ is an open subspace of $X$ by
+Spaces, Lemma \ref{spaces-lemma-finding-opens}.
Hence having a dense collection of points contained in $R$-invariant
affine opens of $U$ certainly implies that the schematic locus of $X$
+(see Properties of Spaces, Lemma \ref{spaces-properties-lemma-subscheme})
+is open dense in $X$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Residue fields and henselian local rings}
+\label{section-residue-fields-henselian-local-rings}
+
+\noindent
+For a decent algebraic space we can define the residue field and the
+henselian local ring at a point. For example, the following lemma
+tells us the residue field of a point on a decent space is defined.
+
+\begin{lemma}
+\label{lemma-decent-points-monomorphism}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Consider the map
+$$
+\{\Spec(k) \to X \text{ monomorphism where }k\text{ is a field}\}
+\longrightarrow
+|X|
+$$
+This map is always injective. If $X$ is decent then this map
+is a bijection.
+\end{lemma}
+
+\begin{proof}
+We have seen in
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-monomorphism}
+that the map is an injection in general.
+By Lemma \ref{lemma-bounded-fibres} it is surjective when $X$ is
+decent (actually one can say this is part of the definition
+of being decent).
+\end{proof}
+
+\noindent
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+If a point $x \in |X|$ can be represented by a monomorphism
+$\Spec(k) \to X$, then the field $k$ is unique up to unique
+isomorphism. For a decent
+algebraic space such a monomorphism exists for every point
+by Lemma \ref{lemma-decent-points-monomorphism}
+and hence the following definition makes sense.
+
+\begin{definition}
+\label{definition-residue-field}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$. The {\it residue field of $X$ at $x$}
+is the unique field $\kappa(x)$ which comes equipped with a
+monomorphism $\Spec(\kappa(x)) \to X$ representing $x$.
+\end{definition}
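For orientation, here is a minimal consistency check with the case of
schemes (a routine verification, sketched here rather than proved):
if $X$ happens to be a scheme and $x \in X$, then the canonical morphism

```latex
% For a scheme X and a point x in X the composition
$$
\Spec(\kappa(x)) = \Spec(\mathcal{O}_{X, x}/\mathfrak m_x)
\longrightarrow \Spec(\mathcal{O}_{X, x})
\longrightarrow X
$$
% is a monomorphism of schemes representing x.
```

is a monomorphism representing $x$, so the residue field just defined
agrees with the usual residue field of the scheme $X$ at $x$.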
+
+\noindent
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of decent
+algebraic spaces over $S$. Let $x \in |X|$ be a point.
+Set $y = f(x) \in |Y|$. Then the composition $\Spec(\kappa(x)) \to Y$
+is in the equivalence class defining $y$ and hence factors through
+$\Spec(\kappa(y)) \to Y$. In other words we get a commutative diagram
+$$
+\xymatrix{
+\Spec(\kappa(x)) \ar[r]_-x \ar[d] & X \ar[d]^f \\
+\Spec(\kappa(y)) \ar[r]^-y & Y
+}
+$$
+The left vertical morphism corresponds to a homomorphism
+$\kappa(y) \to \kappa(x)$ of fields. We will often simply
+call this the homomorphism induced by $f$.
+
+\begin{lemma}
+\label{lemma-identifies-residue-fields}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of decent
+algebraic spaces over $S$. Let $x \in |X|$ be a point
+with image $y = f(x) \in |Y|$.
+The following are equivalent
+\begin{enumerate}
+\item $f$ induces an isomorphism $\kappa(y) \to \kappa(x)$, and
+\item the induced morphism $\Spec(\kappa(x)) \to Y$ is a monomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Immediate from the discussion above.
+\end{proof}
+
+\noindent
+The following lemma tells us that the henselian local ring of a point
+on a decent algebraic space is defined.
+
+\begin{lemma}
+\label{lemma-decent-space-elementary-etale-neighbourhood}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+For every point $x \in |X|$ there exists an \'etale morphism
+$$
+(U, u) \longrightarrow (X, x)
+$$
+where $U$ is an affine scheme, $u$ is the only point of $U$ lying
+over $x$, and the induced homomorphism $\kappa(x) \to \kappa(u)$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+We may assume that $X$ is quasi-compact by replacing $X$ with a
+quasi-compact open containing $x$. Recall that $x$ can be
+represented by a quasi-compact (mono)morphism
from the spectrum of a field (by definition of decent spaces). Thus the
+lemma follows from Lemma \ref{lemma-filter-quasi-compact}.
+\end{proof}
+
+\begin{definition}
+\label{definition-elemenary-etale-neighbourhood}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
Let $x \in |X|$ be a point. An {\it elementary \'etale neighbourhood}
+is an \'etale morphism $(U, u) \to (X, x)$ where $U$ is a scheme,
+$u \in U$ is a point mapping to $x$, and $\kappa(x) \to \kappa(u)$
+is an isomorphism. A {\it morphism of elementary \'etale neighbourhoods}
+$(U, u) \to (U', u')$ is defined as a morphism $U \to U'$
+over $X$ mapping $u$ to $u'$.
+\end{definition}
+
+\noindent
+If $X$ is not decent then the category of elementary \'etale neighbourhoods
+may be empty.
+
+\begin{lemma}
+\label{lemma-elementary-etale-neighbourhoods}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x$ be a point of $X$.
The category of elementary \'etale neighbourhoods of $(X, x)$
+is cofiltered (see
+Categories, Definition \ref{categories-definition-codirected}).
+\end{lemma}
+
+\begin{proof}
+The category is nonempty by
+Lemma \ref{lemma-decent-space-elementary-etale-neighbourhood}.
+Suppose that we have two elementary \'etale neighbourhoods
+$(U_i, u_i) \to (X, x)$.
+Then consider $U = U_1 \times_X U_2$. Since
$\Spec(\kappa(u_i)) \to X$, $i = 1, 2$ are both monomorphisms
in the class of $x$ (Lemma \ref{lemma-identifies-residue-fields}),
we see that
+$$
+u = \Spec(\kappa(u_1)) \times_X \Spec(\kappa(u_2))
+$$
+is the spectrum of a field $\kappa(u)$ such that the induced maps
+$\kappa(u_i) \to \kappa(u)$ are isomorphisms. Then $u \to U$ is a point
+of $U$ and we see that $(U, u) \to (X, x)$ is an elementary
+\'etale neighbourhood dominating $(U_i, u_i)$.
+If $a, b : (U_1, u_1) \to (U_2, u_2)$ are two morphisms between
+our elementary \'etale neighbourhoods, then we consider the scheme
+$$
+U = U_1 \times_{(a, b), (U_2 \times_X U_2), \Delta} U_2
+$$
+Using Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-etale-permanence}
+we see that $U \to X$ is \'etale. Moreover, in exactly the same manner
+as before we see that $U$ has a point $u$
+such that $(U, u) \to (X, x)$ is an elementary
+\'etale neighbourhood. Finally, $U \to U_1$ equalizes $a$ and $b$
+and the proof is finished.
+\end{proof}
+
+\begin{definition}
+\label{definition-henselian-local-ring}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$. The {\it henselian local ring of $X$ at $x$}, is
+$$
+\mathcal{O}_{X, x}^h = \colim \Gamma(U, \mathcal{O}_U)
+$$
+where the colimit is over the elementary \'etale neighbourhoods
+$(U, u) \to (X, x)$.
+\end{definition}
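For orientation, a minimal affine sketch (assuming $X = \Spec(A)$ is an
affine scheme; this special case also follows from the lemma below):
if $x$ corresponds to a prime $\mathfrak p \subset A$, then the identity
$(X, x) \to (X, x)$ is itself an elementary \'etale neighbourhood, and the
colimit recovers the usual henselization of the local ring at $\mathfrak p$:

```latex
% Affine special case of the definition of the henselian local ring
$$
\mathcal{O}_{X, x}^h
= \colim_{(U, u) \to (X, x)} \Gamma(U, \mathcal{O}_U)
= (A_{\mathfrak p})^h
$$
```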
+
+\noindent
+Here is the analogue of
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-describe-etale-local-ring}.
+
+\begin{lemma}
+\label{lemma-describe-henselian-local-ring}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$. Let $(U, u) \to (X, x)$ be an elementary
+\'etale neighbourhood. Then
+$$
+\mathcal{O}_{X, x}^h = \mathcal{O}_{U, u}^h
+$$
+In words: the henselian local ring of $X$ at $x$
+is equal to the henselization $\mathcal{O}_{U, u}^h$
+of the local ring $\mathcal{O}_{U, u}$ of $U$ at $u$.
+\end{lemma}
+
\begin{proof}
Since the category of elementary \'etale neighbourhoods of $(X, x)$
is cofiltered (Lemma \ref{lemma-elementary-etale-neighbourhoods})
we see that
the category of elementary \'etale neighbourhoods of $(U, u)$
is initial in
the category of elementary \'etale neighbourhoods of $(X, x)$.
Then the equality follows from
More on Morphisms, Lemma \ref{more-morphisms-lemma-describe-henselization}
and
Categories, Lemma \ref{categories-lemma-cofinal}
(initial is turned into cofinal because the colimit
defining henselian local rings is over the
opposite of the category of elementary
\'etale neighbourhoods).
\end{proof}
+
+\begin{lemma}
+\label{lemma-henselian-local-ring-strict}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $\overline{x}$ be a geometric point of $X$ lying over $x \in |X|$.
+The \'etale local ring $\mathcal{O}_{X, \overline{x}}$ of $X$ at $\overline{x}$
+(Properties of Spaces, Definition
+\ref{spaces-properties-definition-etale-local-rings})
+is the strict henselization
+of the henselian local ring $\mathcal{O}_{X, x}^h$ of $X$ at $x$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-describe-henselian-local-ring},
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-describe-etale-local-ring}
+and the fact that $(R^h)^{sh} = R^{sh}$
+for a local ring $(R, \mathfrak m, \kappa)$ and a given
+separable algebraic closure $\kappa^{sep}$ of $\kappa$.
+This equality follows from
+Algebra, Lemma \ref{algebra-lemma-uniqueness-henselian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-residue-field-henselian-local-ring}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$. The residue field of the
+henselian local ring of $X$ at $x$
+(Definition \ref{definition-henselian-local-ring})
+is the residue field of $X$ at $x$
+(Definition \ref{definition-residue-field}).
+\end{lemma}
+
+\begin{proof}
+Choose an elementary \'etale neighbourhood $(U, u) \to (X, x)$.
+Then $\kappa(u) = \kappa(x)$ and
+$\mathcal{O}_{X, x}^h = \mathcal{O}_{U, u}^h$
+(Lemma \ref{lemma-describe-henselian-local-ring}).
+The residue field of $\mathcal{O}_{U, u}^h$
+is $\kappa(u)$ by Algebra, Lemma \ref{algebra-lemma-henselization}
+(the output of this lemma is the construction/definition
+of the henselization of a local ring, see
+Algebra, Definition \ref{algebra-definition-henselization}).
+\end{proof}
+
+\begin{remark}
+\label{remark-functoriality-henselian-local-ring}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of decent algebraic spaces
+over $S$. Let $x \in |X|$ with image $y \in |Y|$. Choose an elementary
+\'etale neighbourhood $(V, v) \to (Y, y)$ (possible by
+Lemma \ref{lemma-decent-space-elementary-etale-neighbourhood}).
+Then $V \times_Y X$ is an algebraic space \'etale over $X$
+which has a unique point $x'$ mapping to $x$ in $X$ and to $v$ in $V$.
+(Details omitted; use that all points can be represented by
+monomorphisms from spectra of fields.)
+Choose an elementary \'etale neighbourhood $(U, u) \to (V \times_Y X, x')$.
+Then we obtain the following commutative diagram
+$$
+\xymatrix{
+\Spec(\mathcal{O}_{X, \overline{x}}) \ar[r] \ar[d] &
+\Spec(\mathcal{O}_{X, x}^h) \ar[r] \ar[d] &
+\Spec(\mathcal{O}_{U, u}) \ar[r] \ar[d] &
+U \ar[r] \ar[d] &
+X \ar[d] \\
+\Spec(\mathcal{O}_{Y, \overline{y}}) \ar[r] &
+\Spec(\mathcal{O}_{Y, y}^h) \ar[r] &
+\Spec(\mathcal{O}_{V, v}) \ar[r] &
+V \ar[r] &
+Y
+}
+$$
+This comes from the identifications
+$\mathcal{O}_{X, \overline{x}} = \mathcal{O}_{U, u}^{sh}$,
+$\mathcal{O}_{X, x}^h = \mathcal{O}_{U, u}^h$,
+$\mathcal{O}_{Y, \overline{y}} = \mathcal{O}_{V, v}^{sh}$,
+$\mathcal{O}_{Y, y}^h = \mathcal{O}_{V, v}^h$
see
+Lemma \ref{lemma-describe-henselian-local-ring}
+and
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-describe-etale-local-ring}
+and the functoriality of the (strict) henselization
+discussed in Algebra, Sections \ref{algebra-section-ind-etale} and
+\ref{algebra-section-henselization}.
+\end{remark}
+
+
+
+
+
+
+
+
+
+\section{Points on decent spaces}
+\label{section-points}
+
+\noindent
+In this section we prove some properties of points on decent algebraic spaces.
+The following lemma shows that specialization of points behaves well
+on decent algebraic spaces.
+Spaces, Example \ref{spaces-example-infinite-product}
+shows that this is {\bf not} true in general.
+
+\begin{lemma}
+\label{lemma-decent-no-specializations-map-to-same-point}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $U \to X$ be an \'etale morphism from a scheme to $X$.
+If $u, u' \in |U|$ map to the same point of $|X|$, and
+$u' \leadsto u$, then $u = u'$.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-bounded-fibres} and
+\ref{lemma-no-specializations-map-to-same-point}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-decent-specialization}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x, x' \in |X|$ and assume $x' \leadsto x$, i.e., $x$ is a
+specialization of $x'$. Then for every \'etale morphism
+$\varphi : U \to X$ from a scheme $U$ and any $u \in U$ with
$\varphi(u) = x$, there exists a point $u' \in U$, $u' \leadsto u$ with
+$\varphi(u') = x'$.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-bounded-fibres} and
+\ref{lemma-specialization}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kolmogorov}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Then $|X|$ is Kolmogorov (see
+Topology, Definition \ref{topology-definition-generic-point}).
+\end{lemma}
+
+\begin{proof}
+Let $x_1, x_2 \in |X|$ with $x_1 \leadsto x_2$ and $x_2 \leadsto x_1$.
+We have to show that $x_1 = x_2$. Pick a scheme $U$ and an \'etale morphism
+$U \to X$ such that $x_1, x_2$ are both in the image of $|U| \to |X|$.
+By Lemma \ref{lemma-decent-specialization} we can find a specialization
+$u_1 \leadsto u_2$ in $U$ mapping to $x_1 \leadsto x_2$.
+By Lemma \ref{lemma-decent-specialization} we can find
+$u_2' \leadsto u_1$ mapping to $x_2 \leadsto x_1$. This means that
+$u_2' \leadsto u_2$ is a specialization between points of $U$ mapping to
+the same point of $X$, namely $x_2$. This is not possible, unless
+$u_2' = u_2$, see
+Lemma \ref{lemma-decent-no-specializations-map-to-same-point}. Hence
+also $u_1 = u_2$ as desired.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-reasonable-sober}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Then the topological space $|X|$ is sober (see
+Topology, Definition \ref{topology-definition-generic-point}).
+\end{proposition}
+
+\begin{proof}
+We have seen in Lemma \ref{lemma-kolmogorov} that $|X|$ is Kolmogorov.
+Hence it remains to show that every irreducible closed subset
+$T \subset |X|$ has a generic point. By
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-reduced-closed-subspace}
+there exists a closed subspace $Z \subset X$ with $|Z| = |T|$.
+By definition this means that $Z \to X$ is a representable morphism
+of algebraic spaces. Hence $Z$ is a decent algebraic space
+by Lemma \ref{lemma-representable-properties}. By
+Theorem \ref{theorem-decent-open-dense-scheme}
+we see that there exists an open dense subspace $Z' \subset Z$
+which is a scheme. This means that $|Z'| \subset T$ is open dense.
+Hence the topological space $|Z'|$ is irreducible, which means that
+$Z'$ is an irreducible scheme. By
+Schemes, Lemma \ref{schemes-lemma-scheme-sober}
+we conclude that $|Z'|$ is the closure of a single point $\eta \in T$
+and hence also $T = \overline{\{\eta\}}$, and we win.
+\end{proof}
+
+\noindent
+For decent algebraic spaces dimension works as expected.
+
+\begin{lemma}
+\label{lemma-dimension-decent-space}
+Let $S$ be a scheme. Dimension as defined in
+Properties of Spaces, Section \ref{spaces-properties-section-dimension}
+behaves well on decent algebraic spaces $X$ over $S$.
+\begin{enumerate}
+\item If $x \in |X|$, then $\dim_x(|X|) = \dim_x(X)$, and
+\item $\dim(|X|) = \dim(X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Choose a scheme $U$ with a point $u \in U$
+and an \'etale morphism $h : U \to X$ mapping $u$ to $x$.
+By definition the dimension of $X$ at $x$ is $\dim_u(|U|)$.
+Thus we may pick $U$ such that $\dim_x(X) = \dim(|U|)$.
+Let $d$ be an integer. If $\dim(U) \geq d$, then
+there exists a sequence of nontrivial specializations
+$u_d \leadsto \ldots \leadsto u_0$ in $U$. Taking the image
+we find a corresponding sequence
+$h(u_d) \leadsto \ldots \leadsto h(u_0)$
+each of which is nontrivial by
+Lemma \ref{lemma-decent-no-specializations-map-to-same-point}.
+Hence we see that the image of $|U|$ in $|X|$ has dimension at least $d$.
+Conversely, suppose that $x_d \leadsto \ldots \leadsto x_0$ is a
+sequence of specializations in $|X|$ with $x_0$ in the image of
+$|U| \to |X|$. Then we can lift this to a sequence of specializations
in $U$ by Lemma \ref{lemma-decent-specialization} and conclude that
$\dim(|U|) \geq d$ as well.
+
+\medskip\noindent
+Part (2) is an immediate consequence of part (1),
+Topology, Lemma \ref{topology-lemma-dimension-supremum-local-dimensions},
+and Properties of Spaces, Section \ref{spaces-properties-section-dimension}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-local-ring-quasi-finite}
+Let $S$ be a scheme. Let $X \to Y$ be a locally quasi-finite morphism
+of algebraic spaces over $S$. Let $x \in |X|$ with image $y \in |Y|$.
Then the dimension of the local ring of $Y$ at $y$ is greater than or
equal to the dimension of the local ring of $X$ at $x$.
+\end{lemma}
+
+\begin{proof}
+The definition of the dimension of the local ring of a point on an
+algebraic space is given in Properties of Spaces, Definition
+\ref{spaces-properties-definition-dimension-local-ring}.
+Choose an \'etale morphism $(V, v) \to (Y, y)$ where $V$ is a scheme.
+Choose an \'etale morphism $U \to V \times_Y X$ and a point $u \in U$
+mapping to $x \in |X|$ and $v \in V$. Then $U \to V$ is locally
+quasi-finite and we have to prove that
+$$
+\dim(\mathcal{O}_{V, v}) \geq \dim(\mathcal{O}_{U, u})
+$$
+This is Algebra, Lemma \ref{algebra-lemma-dimension-inequality-quasi-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-quasi-finite}
+Let $S$ be a scheme. Let $X \to Y$ be a locally quasi-finite morphism
+of algebraic spaces over $S$. Then $\dim(X) \leq \dim(Y)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-dimension-local-ring-quasi-finite}
+and Properties of Spaces, Lemma \ref{spaces-properties-lemma-dimension}.
+\end{proof}
+
+\noindent
+The following lemma is a tiny bit stronger than
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-point-like-spaces}.
+We will improve this lemma in Lemma \ref{lemma-when-field}.
+
+\begin{lemma}
+\label{lemma-decent-point-like-spaces}
+Let $S$ be a scheme. Let $k$ be a field. Let $X$ be an algebraic space
+over $S$ and assume that there exists a surjective \'etale morphism
+$\Spec(k) \to X$. If $X$ is decent, then $X \cong \Spec(k')$
+where $k/k'$ is a finite separable extension.
+\end{lemma}
+
+\begin{proof}
+The assumption implies that $|X| = \{x\}$ is a singleton. Since
+$X$ is decent we can find a quasi-compact monomorphism $\Spec(k') \to X$
+whose image is $x$. Then the projection
+$U = \Spec(k') \times_X \Spec(k) \to \Spec(k)$
+is a monomorphism, whence $U = \Spec(k)$, see
+Schemes, Lemma \ref{schemes-lemma-mono-towards-spec-field}.
+Hence the projection $\Spec(k) = U \to \Spec(k')$ is \'etale and
+we win.
+\end{proof}
+
+
+
+
+
+
+\section{Reduced singleton spaces}
+\label{section-singleton}
+
+\noindent
+A {\it singleton} space is an algebraic space $X$ such that $|X|$ is
+a singleton. It turns out that these can be more interesting than
+just being the spectrum of a field, see
+Spaces, Example \ref{spaces-example-Qbar}.
+We develop a tiny bit of machinery to be able to talk about these.
+
+\begin{lemma}
+\label{lemma-flat-cover-by-field}
+Let $S$ be a scheme. Let $Z$ be an algebraic space over $S$.
+Let $k$ be a field and let $\Spec(k) \to Z$ be surjective and flat.
+Then any morphism $\Spec(k') \to Z$ where $k'$ is a field is
+surjective and flat.
+\end{lemma}
+
+\begin{proof}
+Consider the fibre square
+$$
+\xymatrix{
+T \ar[d] \ar[r] & \Spec(k) \ar[d] \\
+\Spec(k') \ar[r] & Z
+}
+$$
+Note that $T \to \Spec(k')$ is flat and surjective hence $T$
+is not empty. On the other hand $T \to \Spec(k)$ is flat as
+$k$ is a field. Hence $T \to Z$ is flat and surjective.
+It follows from
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-flat-permanence}
+that $\Spec(k') \to Z$ is flat. It is surjective as by assumption
+$|Z|$ is a singleton.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unique-point}
+Let $S$ be a scheme.
+Let $Z$ be an algebraic space over $S$. The following are equivalent
+\begin{enumerate}
+\item $Z$ is reduced and $|Z|$ is a singleton,
+\item there exists a surjective flat morphism $\Spec(k) \to Z$
+where $k$ is a field, and
+\item there exists a locally of finite type, surjective, flat morphism
+$\Spec(k) \to Z$ where $k$ is a field.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Let $W$ be a scheme and
+let $W \to Z$ be a surjective \'etale morphism. Then $W$ is
+a reduced scheme. Let $\eta \in W$ be a generic point of an irreducible
+component of $W$. Since $W$ is reduced we have
+$\mathcal{O}_{W, \eta} = \kappa(\eta)$. It follows that the canonical
+morphism $\eta = \Spec(\kappa(\eta)) \to W$ is flat. We see that the
+composition $\eta \to Z$ is flat (see
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-composition-flat}).
+It is also surjective as $|Z|$ is a singleton. In other words
+(2) holds.
+
+\medskip\noindent
+Assume (2). Let $W$ be a scheme and
+let $W \to Z$ be a surjective \'etale morphism. Choose a field
+$k$ and a surjective flat morphism $\Spec(k) \to Z$.
+Then $W \times_Z \Spec(k)$ is a scheme \'etale over $k$.
+Hence $W \times_Z \Spec(k)$ is a disjoint union of spectra of fields
+(see Remark \ref{remark-recall}), in particular reduced. Since
+$W \times_Z \Spec(k) \to W$
+is surjective and flat we conclude that $W$ is reduced
+(Descent, Lemma \ref{descent-lemma-descend-reduced}).
+In other words (1) holds.
+
+\medskip\noindent
+It is clear that (3) implies (2). Finally, assume (2). Pick a nonempty
+affine scheme $W$ and an \'etale morphism $W \to Z$. Pick a closed
+point $w \in W$ and set $k = \kappa(w)$. The composition
+$$
+\Spec(k) \xrightarrow{w} W \longrightarrow Z
+$$
+is locally of finite type by
+Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-composition-finite-type} and
+\ref{spaces-morphisms-lemma-etale-locally-finite-type}.
+It is also flat and surjective by
+Lemma \ref{lemma-flat-cover-by-field}.
+Hence (3) holds.
+\end{proof}
+
+\noindent
The following lemma singles out a slightly better class of singleton
algebraic spaces than that considered in the preceding lemma.
+
+\begin{lemma}
+\label{lemma-unique-point-better}
+Let $S$ be a scheme. Let $Z$ be an algebraic space over $S$.
+The following are equivalent
+\begin{enumerate}
+\item $Z$ is reduced, locally Noetherian, and $|Z|$
+is a singleton, and
+\item there exists a locally finitely presented, surjective, flat morphism
+$\Spec(k) \to Z$ where $k$ is a field.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2) holds. By
+Lemma \ref{lemma-unique-point}
+we see that $Z$ is reduced and $|Z|$ is a singleton.
+Let $W$ be a scheme and let $W \to Z$ be a surjective \'etale
+morphism. Choose a field $k$ and a locally finitely presented, surjective,
+flat morphism $\Spec(k) \to Z$.
+Then $W \times_Z \Spec(k)$ is a scheme
+\'etale over $k$, hence a disjoint union of spectra of fields
+(see Remark \ref{remark-recall}),
+hence locally Noetherian. Since $W \times_Z \Spec(k) \to W$
+is flat, surjective, and locally of finite presentation, we see
+that $\{W \times_Z \Spec(k) \to W\}$ is an fppf covering
+and we conclude that $W$ is locally Noetherian
+(Descent, Lemma
+\ref{descent-lemma-Noetherian-local-fppf}).
+In other words (1) holds.
+
+\medskip\noindent
+Assume (1). Pick a nonempty affine scheme $W$ and an \'etale morphism
+$W \to Z$. Pick a closed point $w \in W$ and set
+$k = \kappa(w)$. Because $W$ is locally Noetherian the morphism
+$w : \Spec(k) \to W$ is of finite presentation, see
+Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-finite-presentation}.
+Hence the composition
+$$
+\Spec(k) \xrightarrow{w} W \longrightarrow Z
+$$
+is locally of finite presentation by
+Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-composition-finite-presentation} and
+\ref{spaces-morphisms-lemma-etale-locally-finite-presentation}.
+It is also flat and surjective by
+Lemma \ref{lemma-flat-cover-by-field}.
+Hence (2) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-monomorphism-into-point}
+Let $S$ be a scheme.
+Let $Z' \to Z$ be a monomorphism of algebraic spaces over $S$.
+Assume there exists a field $k$ and a locally finitely presented, surjective,
+flat morphism $\Spec(k) \to Z$. Then either $Z'$
+is empty or $Z' = Z$.
+\end{lemma}
+
+\begin{proof}
+We may assume that $Z'$ is nonempty. In this case the
+fibre product $T = Z' \times_Z \Spec(k)$
+is nonempty, see
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-points-cartesian}.
+Now $T$ is an algebraic space and the projection $T \to \Spec(k)$
+is a monomorphism. Hence $T = \Spec(k)$, see
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-monomorphism-toward-field}.
+We conclude that $\Spec(k) \to Z$ factors through $Z'$.
+But as $\Spec(k) \to Z$ is surjective, flat and locally of finite
+presentation, we see that $\Spec(k) \to Z$ is surjective as a
+map of sheaves on $(\Sch/S)_{fppf}$ (see
+Spaces, Remark \ref{spaces-remark-warning})
+and we conclude that $Z' = Z$.
+\end{proof}
+
+\noindent
+The following lemma says that to each point of an algebraic space we
+can associate a canonical reduced, locally Noetherian singleton
+algebraic space.
+
+\begin{lemma}
+\label{lemma-find-singleton-from-point}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+Let $x \in |X|$. Then there exists a unique monomorphism
+$Z \to X$ of algebraic spaces
+over $S$ such that $Z$ is an algebraic space which satisfies the equivalent
+conditions of
+Lemma \ref{lemma-unique-point-better}
+and such that the image of $|Z| \to |X|$ is $\{x\}$.
+\end{lemma}
+
+\begin{proof}
+Choose a scheme $U$ and a surjective \'etale morphism $U \to X$.
+Set $R = U \times_X U$ so that $X = U/R$ is a presentation (see
+Spaces, Section \ref{spaces-section-presentations}).
+Set
+$$
+U' = \coprod\nolimits_{u \in U\text{ lying over }x} \Spec(\kappa(u)).
+$$
+The canonical morphism $U' \to U$ is a monomorphism. Let
+$$
+R' = U' \times_X U' = R \times_{(U \times_S U)} (U' \times_S U').
+$$
+Because $U' \to U$ is a monomorphism we see that the projections
+$s', t' : R' \to U'$ factor as a monomorphism followed by an
+\'etale morphism. Hence, as $U'$ is a disjoint union of spectra
+of fields, using
+Remark \ref{remark-recall},
+and using
+Schemes, Lemma \ref{schemes-lemma-mono-towards-spec-field}
+we conclude that $R'$ is a disjoint union of spectra of fields and
+that the morphisms $s', t' : R' \to U'$ are \'etale. Hence
+$Z = U'/R'$ is an algebraic space by
+Spaces, Theorem \ref{spaces-theorem-presentation}.
+As $R'$ is the restriction of $R$ by $U' \to U$ we see
+$Z \to X$ is a monomorphism by
+Groupoids, Lemma \ref{groupoids-lemma-quotient-groupoid-restrict}.
+Since $Z \to X$ is a monomorphism we see that $|Z| \to |X|$ is injective, see
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-monomorphism-injective-points}.
+By
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-points-cartesian}
+we see that
+$$
+|U'| = |Z \times_X U'| \to |Z| \times_{|X|} |U'|
+$$
is surjective, which implies (by our choice of $U'$) that
$|Z| \to |X|$ has image $\{x\}$. We conclude that $|Z|$ is a singleton.
+Finally, by construction $U'$ is locally Noetherian and reduced, i.e.,
+we see that $Z$ satisfies the equivalent conditions of
+Lemma \ref{lemma-unique-point-better}.
+
+\medskip\noindent
+Let us prove uniqueness of $Z \to X$. Suppose that
+$Z' \to X$ is a second such monomorphism of algebraic spaces.
+Then the projections
+$$
+Z' \longleftarrow Z' \times_X Z \longrightarrow Z
+$$
+are monomorphisms. The algebraic space in the middle is nonempty by
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-points-cartesian}.
+Hence the two projections are isomorphisms by
+Lemma \ref{lemma-monomorphism-into-point}
+and we win.
+\end{proof}
+
+\noindent
+We introduce the following terminology which foreshadows
+the residual gerbes we will introduce later, see
+Properties of Stacks, Definition
+\ref{stacks-properties-definition-residual-gerbe}.
+
+\begin{definition}
+\label{definition-residual-space}
+Let $S$ be a scheme.
+Let $X$ be an algebraic space over $S$. Let $x \in |X|$.
+The
+{\it residual space of $X$ at $x$}\footnote{This is nonstandard notation.}
+is the monomorphism $Z_x \to X$ constructed in
+Lemma \ref{lemma-find-singleton-from-point}.
+\end{definition}
+
+\noindent
+In particular we know that $Z_x$ is a
+locally Noetherian, reduced, singleton algebraic space
+and that there exists a field and a surjective, flat, locally
+finitely presented morphism
+$$
+\Spec(k) \longrightarrow Z_x.
+$$
It turns out that $Z_x$ is a regular algebraic space, as the
following lemma shows.
+
+\begin{lemma}
+\label{lemma-residual-space-regular}
+A reduced, locally Noetherian singleton algebraic space $Z$ is regular.
+\end{lemma}
+
+\begin{proof}
+Let $Z$ be a reduced, locally Noetherian singleton algebraic space
+over a scheme $S$. Let $W \to Z$ be a surjective \'etale morphism where $W$
+is a scheme. Let $k$ be a field and let $\Spec(k) \to Z$
+be surjective, flat, and locally of finite presentation (see
+Lemma \ref{lemma-unique-point-better}).
+The scheme $T = W \times_Z \Spec(k)$ is
\'etale over $k$, in particular regular, see
+Remark \ref{remark-recall}.
+Since $T \to W$ is locally of finite presentation, flat, and surjective it
+follows that $W$ is regular, see
+Descent, Lemma \ref{descent-lemma-descend-regular}.
+By definition this means that $Z$ is regular.
+\end{proof}
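\noindent
The construction of Lemma \ref{lemma-find-singleton-from-point}
can be made explicit when $X$ is a scheme.

\begin{example}
\label{example-residual-space-scheme}
Let $X$ be a scheme and let $x \in X$ be a point. Applying the
construction of Lemma \ref{lemma-find-singleton-from-point} with
$U = X$ (so $R = U \times_X U = U$) we obtain
$U' = \Spec(\kappa(x))$. Since $U' \to X$ is a monomorphism we get
$R' = U' \times_X U' = U'$ and hence $Z_x = \Spec(\kappa(x))$.
In other words, the residual space of a scheme $X$ at $x$ is the
canonical morphism $\Spec(\kappa(x)) \to X$.
\end{example}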
+
+
+
+
+
+
+
+
+
+
+
+\section{Decent spaces}
+\label{section-decent}
+
+\noindent
+In this section we collect some useful facts on decent spaces.
+
+\begin{lemma}
+\label{lemma-locally-Noetherian-decent-quasi-separated}
+Any locally Noetherian decent algebraic space is quasi-separated.
+\end{lemma}
+
+\begin{proof}
+Namely, let $X$ be an algebraic space (over some base scheme, for
+example over $\mathbf{Z}$) which is decent and locally Noetherian.
+Let $U \to X$ and $V \to X$ be \'etale morphisms with $U$ and $V$
+affine schemes. We have to show that $W = U \times_X V$ is quasi-compact
+(Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-characterize-quasi-separated}).
+Since $X$ is locally Noetherian, the schemes $U$, $V$ are Noetherian
+and $W$ is locally Noetherian. Since $X$ is decent, the fibres
+of the morphism $W \to U$ are finite. Namely, we can represent
+any $x \in |X|$ by a quasi-compact monomorphism $\Spec(k) \to X$.
+Then $U_k$ and $V_k$ are finite disjoint unions of spectra of
+finite separable extensions of $k$ (Remark \ref{remark-recall})
+and we see that $W_k = U_k \times_{\Spec(k)} V_k$ is finite.
+Let $n$ be the maximum degree of a fibre of $W \to U$ at a generic
+point of an irreducible component of $U$. Consider the stratification
+$$
+U = U_0 \supset U_1 \supset U_2 \supset \ldots
+$$
+associated to $W \to U$ in
+More on Morphisms, Lemma \ref{more-morphisms-lemma-stratify-flat-fp-lqf}.
+By our choice of $n$ above we conclude that $U_{n + 1}$ is empty.
+Hence we see that the fibres of $W \to U$ are universally bounded.
+Then we can apply More on Morphisms, Lemma
+\ref{more-morphisms-lemma-stratify-flat-fp-lqf-universally-bounded}
+to find a stratification
+$$
+\emptyset = Z_{-1} \subset Z_0 \subset Z_1 \subset Z_2 \subset
+\ldots \subset Z_n = U
+$$
+by closed subsets such that with $S_r = Z_r \setminus Z_{r - 1}$
+the morphism $W \times_U S_r \to S_r$ is finite locally free.
+Since $U$ is Noetherian, the schemes $S_r$ are Noetherian,
+whence the schemes $W \times_U S_r$ are Noetherian, whence
+$W = \coprod W \times_U S_r$ is quasi-compact as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-field}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+\begin{enumerate}
+\item If $|X|$ is a singleton then $X$ is a scheme.
+\item If $|X|$ is a singleton and $X$ is reduced, then
+$X \cong \Spec(k)$ for some field $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $|X|$ is a singleton. It follows immediately from
+Theorem \ref{theorem-decent-open-dense-scheme} that $X$ is a scheme,
+but we can also argue directly as follows.
+Choose an affine scheme $U$ and a surjective \'etale morphism $U \to X$.
+Set $R = U \times_X U$. Then $U$ and $R$ have finitely many points by
+Lemma \ref{lemma-UR-finite-above-x} (and the definition of a decent space).
+All of these points are closed in $U$ and $R$ by
+Lemma \ref{lemma-decent-no-specializations-map-to-same-point}.
+It follows that $U$ and $R$ are affine schemes.
Shrinking $U$, we may assume that $U$ has a single point. Then $U$ is
the spectrum of a henselian local ring, see
+Algebra, Lemma \ref{algebra-lemma-local-dimension-zero-henselian}.
+The projections $R \to U$ are \'etale, hence finite \'etale because
+$U$ is the spectrum of a $0$-dimensional henselian local ring, see
+Algebra, Lemma \ref{algebra-lemma-characterize-henselian}.
+It follows that $X$ is a scheme by
+Groupoids, Proposition \ref{groupoids-proposition-finite-flat-equivalence}.
+
+\medskip\noindent
+Part (2) follows from (1) and the fact that a reduced singleton
+scheme is the spectrum of a field.
+\end{proof}
+
+\begin{remark}
+\label{remark-one-point-decent-scheme}
+We will see in
+Limits of Spaces, Lemma \ref{spaces-limits-lemma-reduction-scheme}
+that an algebraic space
+whose reduction is a scheme is a scheme.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-algebraic-residue-field-extension-closed-point}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Consider a commutative diagram
+$$
+\xymatrix{
+\Spec(k) \ar[rr] \ar[rd] & & X \ar[ld] \\
+& S
+}
+$$
+Assume that the image point $s \in S$ of $\Spec(k) \to S$ is
+a closed point and that $\kappa(s) \subset k$ is algebraic.
+Then the image $x$ of $\Spec(k) \to X$ is a closed point of $|X|$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $x \leadsto x'$ for some $x' \in |X|$. Choose an
\'etale morphism $U \to X$ where $U$ is a scheme and a point $u' \in U$
+mapping to $x'$. Choose a specialization $u \leadsto u'$ in $U$ with $u$
+mapping to $x$ in $X$, see Lemma \ref{lemma-decent-specialization}.
+Then $u$ is the image of a point $w$ of the scheme
+$W = \Spec(k) \times_X U$. Since the projection $W \to \Spec(k)$ is \'etale
+we see that $\kappa(w) \supset k$ is finite. Hence
+$\kappa(w) \supset \kappa(s)$ is algebraic. Hence $\kappa(u) \supset \kappa(s)$
+is algebraic. Thus $u$ is a closed point of $U$ by
+Morphisms, Lemma
+\ref{morphisms-lemma-algebraic-residue-field-extension-closed-point-fibre}.
+Thus $u = u'$, whence $x = x'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-residue-field-extension-finite}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Consider a commutative diagram
+$$
+\xymatrix{
+\Spec(k) \ar[rr] \ar[rd] & & X \ar[ld] \\
+& S
+}
+$$
+Assume that the image point $s \in S$ of $\Spec(k) \to S$ is
+a closed point and that the field extension $k/\kappa(s)$ is finite.
Then $\Spec(k) \to X$ is a finite morphism. If $\kappa(s) = k$
then $\Spec(k) \to X$ is a closed immersion.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-algebraic-residue-field-extension-closed-point}
+the image point $x \in |X|$ is closed. Let $Z \subset X$ be the
+reduced closed subspace with $|Z| = \{x\}$ (Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-reduced-closed-subspace}).
+Note that $Z$ is a decent algebraic space by
+Lemma \ref{lemma-representable-named-properties}.
+By Lemma \ref{lemma-when-field} we see that $Z = \Spec(k')$
+for some field $k'$. Of course $k \supset k' \supset \kappa(s)$.
+Then $\Spec(k) \to Z$ is a finite morphism of schemes
+and $Z \to X$ is a finite morphism as it is a closed immersion.
+Hence $\Spec(k) \to X$ is finite (Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-composition-integral}).
+If $k = \kappa(s)$, then $\Spec(k) = Z$ and $\Spec(k) \to X$
+is a closed immersion.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-decent-space-closed-point}
+Let $S$ be a scheme. Suppose $X$ is a decent algebraic space over $S$.
+Let $x \in |X|$ be a closed point. Then $x$ can be represented by a
+closed immersion $i : \Spec(k) \to X$ from the spectrum of a field.
+\end{lemma}
+
+\begin{proof}
+We know that $x$ can be represented by a quasi-compact monomorphism
+$i : \Spec(k) \to X$ where $k$ is a field
+(Definition \ref{definition-very-reasonable}).
+Let $U \to X$ be an \'etale morphism where $U$ is an affine scheme.
+As $x$ is closed and $X$ decent, the fibre $F$ of $|U| \to |X|$ over $x$
+consists of closed points
+(Lemma \ref{lemma-decent-no-specializations-map-to-same-point}).
+As $i$ is a monomorphism, so is $U_k = U \times_X \Spec(k) \to U$.
+In particular, the map $|U_k| \to F$ is injective. Since $U_k$
+is quasi-compact and \'etale over a field, we see that $U_k$ is a
+finite disjoint union of spectra of fields (Remark \ref{remark-recall}).
+Say $U_k = \Spec(k_1) \amalg \ldots \amalg \Spec(k_r)$.
+Since $\Spec(k_i) \to U$ is a monomorphism, we see that
+its image $u_i$ has residue field $\kappa(u_i) = k_i$.
+Since $u_i \in F$ is a closed point we conclude the morphism
+$\Spec(k_i) \to U$ is a closed immersion. As the $u_i$ are pairwise distinct,
+$U_k \to U$ is a closed immersion. Hence $i$ is a closed immersion
+(Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-closed-immersion-local}). This finishes the proof.
+\end{proof}
+
+
+
+
+
+\section{Locally separated spaces}
+\label{section-locally-separated}
+
+\noindent
+It turns out that a locally separated algebraic space is decent.
+
+\begin{lemma}
+\label{lemma-infinite-number}
+Let $A$ be a ring. Let $k$ be a field. Let $\mathfrak p_n$, $n \geq 1$
+be a sequence of pairwise distinct primes of $A$. Moreover, for each
+$n$ let $k \to \kappa(\mathfrak p_n)$ be an embedding. Then the closure
+of the image of
+$$
+\coprod\nolimits_{n \not = m}
+\Spec(\kappa(\mathfrak p_n) \otimes_k \kappa(\mathfrak p_m))
+\longrightarrow
+\Spec(A \otimes A)
+$$
+meets the diagonal.
+\end{lemma}
+
+\begin{proof}
+Set $k_n = \kappa(\mathfrak p_n)$. We may assume that $A = \prod k_n$.
+Denote $x_n = \Spec(k_n)$ the open and closed point corresponding to
+$A \to k_n$. Then $\Spec(A) = Z \amalg \{x_n\}$ where $Z$ is a nonempty
+closed subset. Namely, $Z = V(e_n; n \geq 1)$ where $e_n$
+is the idempotent of $A$ corresponding to the factor $k_n$
+and $Z$ is nonempty as the ideal generated by the $e_n$ is not
+equal to $A$. We will show that the closure of the image
+contains $\Delta(Z)$. The kernel of the map
+$$
+(\prod k_n) \otimes_k (\prod k_m)
+\longrightarrow
+\prod\nolimits_{n \not = m} k_n \otimes_k k_m
+$$
+is the ideal generated by $e_n \otimes e_n$, $n \geq 1$.
+Hence the closure of the image of the map on spectra is
+$V(e_n \otimes e_n; n \geq 1)$ whose intersection with $\Delta(\Spec(A))$
+is $\Delta(Z)$. Thus it suffices to show that
+$$
+\coprod\nolimits_{n \not = m} \Spec(k_n \otimes_k k_m)
+\longrightarrow
+\Spec(\prod\nolimits_{n \not = m} k_n \otimes_k k_m)
+$$
+has dense image. This follows as the family of ring maps
+$\prod_{n \not = m} k_n \otimes_k k_m \to k_n \otimes_k k_m$
+is jointly injective.
+\end{proof}
+
+\begin{lemma}[David Rydh]
+\label{lemma-locally-separated-decent}
+A locally separated algebraic space is decent.
+\end{lemma}
+
+\begin{proof}
+Let $S$ be a scheme and let $X$ be a locally separated algebraic space
+over $S$. We may assume $S = \Spec(\mathbf{Z})$, see
+Properties of Spaces, Definition \ref{spaces-properties-definition-separated}.
+Unadorned fibre products will be over $\mathbf{Z}$.
+Let $x \in |X|$. Choose a scheme $U$, an \'etale
+morphism $U \to X$, and a point $u \in U$ mapping to $x$ in $|X|$.
+As usual we identify $u = \Spec(\kappa(u))$.
+As $X$ is locally separated the morphism
+$$
+u \times_X u \to u \times u
+$$
+is an immersion (Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-fibre-product-after-map}).
+Hence More on Groupoids, Lemma
+\ref{more-groupoids-lemma-locally-closed-image-is-closed}
+tells us that it is a closed immersion (use
+Schemes, Lemma \ref{schemes-lemma-immersion-when-closed}).
+As $u \times_X u \to u \times_X U$ is a monomorphism (base change
+of $u \to U$) and as $u \times_X U \to u$ is \'etale we conclude that
+$u \times_X u$ is a disjoint union of spectra of fields
+(see Remark \ref{remark-recall} and
+Schemes, Lemma \ref{schemes-lemma-mono-towards-spec-field}).
+Since it is also closed in the affine scheme $u \times u$ we
+conclude $u \times_X u$ is a finite disjoint union of spectra of fields.
+Thus $x$ can be represented by a monomorphism $\Spec(k) \to X$ where $k$
+is a field, see
+Lemma \ref{lemma-R-finite-above-x}.
+
+\medskip\noindent
+Next, let $U = \Spec(A)$ be an affine scheme and let $U \to X$ be an
+\'etale morphism. To finish the proof it suffices to show that
$F = U \times_X \Spec(k)$ is finite. Write $F = \coprod_{i \in I} \Spec(k_i)$
as a disjoint union of spectra of finite separable extensions of $k$.
+We have to show that $I$ is finite.
+Set $R = U \times_X U$. As $X$ is locally separated, the morphism
+$j : R \to U \times U$ is an immersion. Let $U' \subset U \times U$
+be an open such that $j$ factors through a closed immersion $j' : R \to U'$.
+Let $e : U \to R$ be the diagonal map. Using that $e$ is a morphism between
+schemes \'etale over $U$ such that $\Delta = j \circ e$ is a
+closed immersion, we conclude that $R = e(U) \amalg W$ for some
+open and closed subscheme $W \subset R$. Since $j'$ is a closed immersion
+we conclude that $j'(W) \subset U'$ is closed and disjoint from
+$j'(e(U))$. Therefore
+$\overline{j(W)} \cap \Delta(U) = \emptyset$ in $U \times U$.
+Note that $W$ contains $\Spec(k_i \otimes_k k_{i'})$ for all
+$i \not = i'$, $i, i' \in I$. By Lemma \ref{lemma-infinite-number}
+we conclude that $I$ is finite as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Valuative criterion}
+\label{section-valuative-criterion-universally-closed}
+
+\noindent
For a quasi-compact morphism from a decent algebraic space, the existence
part of the valuative criterion is necessary for the morphism to be
universally closed.
+
+
+\begin{proposition}
+\label{proposition-characterize-universally-closed}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$. Assume $f$ is quasi-compact, and $X$ is decent. Then $f$ is
+universally closed if and only if the existence part of the valuative
+criterion holds.
+\end{proposition}
+
+\begin{proof}
+In
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-quasi-compact-existence-universally-closed}
+we have seen one of the implications.
+To prove the other, assume that $f$ is universally closed. Let
+$$
+\xymatrix{
+\Spec(K) \ar[r] \ar[d] & X \ar[d] \\
+\Spec(A) \ar[r] & Y
+}
+$$
+be a diagram as in
+Morphisms of Spaces,
+Definition \ref{spaces-morphisms-definition-valuative-criterion}.
+Let $X_A = \Spec(A) \times_Y X$, so that we have
+$$
+\xymatrix{
+\Spec(K) \ar[r] \ar[rd] & X_A \ar[d] \\
+ & \Spec(A)
+}
+$$
+By
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-base-change-quasi-compact}
+we see that $X_A \to \Spec(A)$ is quasi-compact. Since $X_A \to X$
+is representable, we see that $X_A$ is decent also, see
+Lemma \ref{lemma-representable-properties}.
+Moreover, as $f$ is universally closed, we see that $X_A \to \Spec(A)$
+is universally closed.
+Hence we may and do replace $X$ by $X_A$ and $Y$ by $\Spec(A)$.
+
+\medskip\noindent
+Let $x' \in |X|$ be the equivalence class of
+$\Spec(K) \to X$. Let $y \in |Y| = |\Spec(A)|$ be
+the closed point. Set $y' = f(x')$; it is the generic point of
+$\Spec(A)$. Since $f$ is universally closed we see that
+$f(\overline{\{x'\}})$ contains $\overline{\{y'\}}$, and hence
+contains $y$. Let $x \in \overline{\{x'\}}$ be a point such that
+$f(x) = y$. Let $U$ be a scheme, and $\varphi : U \to X$
+an \'etale morphism such that there exists a $u \in U$ with
+$\varphi(u) = x$. By
+Lemma \ref{lemma-specialization}
+and our assumption that $X$ is decent
+there exists a specialization $u' \leadsto u$ on $U$ with $\varphi(u') = x'$.
+This means that there exists a common field extension
+$K \subset K' \supset \kappa(u')$ such that
+$$
+\xymatrix{
+\Spec(K') \ar[r] \ar[d] & U \ar[d] \\
+\Spec(K) \ar[r] \ar[rd] & X \ar[d] \\
+ & \Spec(A)
+}
+$$
+is commutative. This gives the following commutative diagram of rings
+$$
+\xymatrix{
+K' & \mathcal{O}_{U, u} \ar[l] \\
+K \ar[u] & \\
+ & A \ar[lu] \ar[uu]
+}
+$$
+By
+Algebra, Lemma \ref{algebra-lemma-dominate}
+we can find a valuation ring $A' \subset K'$ dominating the image of
+$\mathcal{O}_{U, u}$ in $K'$. Since by construction $\mathcal{O}_{U, u}$
+dominates $A$ we see that $A'$ dominates $A$ also. Hence we obtain a diagram
+resembling the second diagram of
+Morphisms of Spaces,
+Definition \ref{spaces-morphisms-definition-valuative-criterion}
+and the proposition is proved.
+\end{proof}
+
+
+
+
+
+
+
+\section{Relative conditions}
+\label{section-relative-conditions}
+
+\noindent
This is yet another technical section dealing with conditions on
+algebraic spaces having to do with points. It is probably a good idea
+to skip this section.
+
+\begin{definition}
+\label{definition-relative-conditions}
+Let $S$ be a scheme. We say an algebraic space $X$ over $S$
+{\it has property $(\beta)$} if $X$ has the corresponding property of
+Lemma \ref{lemma-bounded-fibres}.
+Let $f : X \to Y$ be a morphism of algebraic spaces over $S$.
+\begin{enumerate}
+\item We say $f$ {\it has property $(\beta)$} if for any scheme $T$ and
+morphism $T \to Y$ the fibre product $T \times_Y X$ has property $(\beta)$.
+\item We say $f$ is {\it decent} if for any scheme $T$ and
+morphism $T \to Y$ the fibre product $T \times_Y X$ is a decent
+algebraic space.
+\item We say $f$ is {\it reasonable} if for any scheme $T$ and
+morphism $T \to Y$ the fibre product $T \times_Y X$ is a reasonable
+algebraic space.
+\item We say $f$ is {\it very reasonable} if for any scheme $T$ and
+morphism $T \to Y$ the fibre product $T \times_Y X$ is a very reasonable
+algebraic space.
+\end{enumerate}
+\end{definition}
+
+\noindent
+We refer to Remark \ref{remark-very-reasonable} for an informal discussion.
+It will turn out that the class of very reasonable morphisms is not so
+useful, but that the classes of decent and reasonable morphisms are useful.
+
+\begin{lemma}
+\label{lemma-properties-trivial-implications}
+Let $S$ be a scheme.
+Let $f : X \to Y$ be a morphism of algebraic spaces over $S$.
+We have the following implications among the conditions on $f$:
+$$
+\xymatrix{
+\text{representable} \ar@{=>}[rd] & & & & \\
+& \text{very reasonable} \ar@{=>}[r] & \text{reasonable} \ar@{=>}[r] &
+\text{decent} \ar@{=>}[r] & (\beta) \\
+\text{quasi-separated} \ar@{=>}[ru] & & & &
+}
+$$
+\end{lemma}
+
+\begin{proof}
+This is clear from the definitions,
+Lemma \ref{lemma-bounded-fibres}
+and
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-separated-local}.
+\end{proof}
+
+\noindent
+Here is another sanity check.
+
+\begin{lemma}
+\label{lemma-property-for-morphism-out-of-property}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic
+spaces over $S$. If $X$ is decent (resp.\ is reasonable, resp.\ has property
+$(\beta)$ of Lemma \ref{lemma-bounded-fibres}), then $f$ is
+decent (resp.\ reasonable, resp.\ has property $(\beta)$).
+\end{lemma}
+
+\begin{proof}
+Let $T$ be a scheme and let $T \to Y$ be a morphism. Then $T \to Y$
+is representable, hence the base change $T \times_Y X \to X$ is representable.
+Hence if $X$ is decent (or reasonable), then so is $T \times_Y X$, see
+Lemma \ref{lemma-representable-named-properties}.
+Similarly, for property $(\beta)$, see
+Lemma \ref{lemma-representable-properties}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-relative-conditions}
+Having property $(\beta)$, being decent, or being reasonable
+is preserved under arbitrary base change.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-property-over-property}
+Let $S$ be a scheme.
+Let $f : X \to Y$ be a morphism of algebraic spaces over $S$.
+Let $\omega \in \{\beta, decent, reasonable\}$.
+Suppose that $Y$ has property $(\omega)$ and $f : X \to Y$ has $(\omega)$.
+Then $X$ has $(\omega)$.
+\end{lemma}
+
+\begin{proof}
+Let us prove the lemma in case $\omega = \beta$. In this case we have to show
+that any $x \in |X|$ is represented by a monomorphism from the spectrum
+of a field into $X$. Let $y = f(x) \in |Y|$. By assumption there exists
+a field $k$ and a monomorphism $\Spec(k) \to Y$ representing $y$.
+Then $x$ corresponds to a point $x'$ of $\Spec(k) \times_Y X$.
+By assumption $x'$ is represented by a monomorphism
+$\Spec(k') \to \Spec(k) \times_Y X$. Clearly the composition
+$\Spec(k') \to X$ is a monomorphism representing $x$.
+
+\medskip\noindent
+Let us prove the lemma in case $\omega = decent$.
+Let $x \in |X|$ and $y = f(x) \in |Y|$. By the result of the preceding
+paragraph we can choose a diagram
+$$
+\xymatrix{
+\Spec(k') \ar[r]_x \ar[d] & X \ar[d]^f \\
+\Spec(k) \ar[r]^y & Y
+}
+$$
whose horizontal arrows are monomorphisms. As $Y$ is decent the morphism
+$y$ is quasi-compact. As $f$ is decent the algebraic space
+$\Spec(k) \times_Y X$ is decent. Hence the monomorphism
+$\Spec(k') \to \Spec(k) \times_Y X$ is quasi-compact.
+Then the monomorphism $x : \Spec(k') \to X$ is quasi-compact
+as a composition of quasi-compact morphisms (use
+Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-base-change-quasi-compact} and
+\ref{spaces-morphisms-lemma-composition-quasi-compact}).
+As the point $x$ was arbitrary this implies $X$ is decent.
+
+\medskip\noindent
+Let us prove the lemma in case $\omega = reasonable$.
+Choose $V \to Y$ \'etale with $V$ an affine scheme.
+Choose $U \to V \times_Y X$ \'etale with $U$ an affine scheme.
+By assumption $V \to Y$ has universally bounded fibres. By
+Lemma \ref{lemma-base-change-universally-bounded}
+the morphism $V \times_Y X \to X$ has universally bounded fibres.
+By assumption on $f$ we see that $U \to V \times_Y X$ has
+universally bounded fibres. By
+Lemma \ref{lemma-composition-universally-bounded}
+the composition $U \to X$ has universally bounded fibres.
Hence there exist sufficiently many \'etale morphisms $U \to X$
+from schemes with universally bounded fibres, and we conclude
+that $X$ is reasonable.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-relative-conditions}
+Having property $(\beta)$, being decent, or being reasonable
+is preserved under compositions.
+\end{lemma}
+
+\begin{proof}
+Let $\omega \in \{\beta, decent, reasonable\}$.
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms of algebraic spaces
+over the scheme $S$. Assume $f$ and $g$ both have property
+$(\omega)$. Then we have to show
+that for any scheme $T$ and morphism $T \to Z$ the space $T \times_Z X$
+has $(\omega)$. By
+Lemma \ref{lemma-base-change-relative-conditions}
+this reduces us to the following claim: Suppose that $Y$ is an algebraic
+space having property $(\omega)$, and that $f : X \to Y$ is a morphism
+with $(\omega)$. Then $X$ has $(\omega)$.
+This is the content of Lemma \ref{lemma-property-over-property}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fibre-product-conditions}
+Let $S$ be a scheme. Let $f : X \to Y$, $g : Z \to Y$ be morphisms
+of algebraic spaces over $S$. If $X$ and $Z$ are decent
+(resp.\ reasonable, resp.\ have property
+$(\beta)$ of Lemma \ref{lemma-bounded-fibres}), then so does $X \times_Y Z$.
+\end{lemma}
+
+\begin{proof}
+Namely, by Lemma \ref{lemma-property-for-morphism-out-of-property}
+the morphism $X \to Y$ has the property. Then the base change
+$X \times_Y Z \to Z$ has the property by
+Lemma \ref{lemma-base-change-relative-conditions}.
+And finally this implies $X \times_Y Z$ has the
+property by Lemma \ref{lemma-property-over-property}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-conditions}
+Let $S$ be a scheme.
+Let $f : X \to Y$ be a morphism of algebraic spaces over $S$.
+Let $\mathcal{P} \in \{(\beta), decent, reasonable\}$.
+Assume
+\begin{enumerate}
+\item $f$ is quasi-compact,
+\item $f$ is \'etale,
+\item $|f| : |X| \to |Y|$ is surjective, and
+\item the algebraic space $X$ has property $\mathcal{P}$.
+\end{enumerate}
+Then $Y$ has property $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+Let us prove this in case $\mathcal{P} = (\beta)$. Let $y \in |Y|$ be
+a point. We have to show that $y$ can be represented by a monomorphism
+from a field. Choose a point $x \in |X|$ with $f(x) = y$.
+By assumption we may represent $x$ by a monomorphism
+$\Spec(k) \to X$, with $k$ a field. By
+Lemma \ref{lemma-R-finite-above-x}
+it suffices to show that the projections
+$\Spec(k) \times_Y \Spec(k) \to \Spec(k)$
+are \'etale and quasi-compact. We can factor the first projection as
+$$
+\Spec(k) \times_Y \Spec(k)
+\longrightarrow
+\Spec(k) \times_Y X
+\longrightarrow
+\Spec(k)
+$$
+The first morphism is a monomorphism, and the second is \'etale and
+quasi-compact. By
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-etale-over-field-scheme}
+we see that $\Spec(k) \times_Y X$ is a scheme. Hence it is a
+finite disjoint union of spectra of finite separable field extensions
+of $k$. By
+Schemes, Lemma \ref{schemes-lemma-mono-towards-spec-field}
+we see that the first arrow identifies
+$\Spec(k) \times_Y \Spec(k)$ with a finite disjoint
+union of spectra of finite separable field extensions of $k$.
+Hence the projection morphism is \'etale and quasi-compact.
+
+\medskip\noindent
Let us prove this in case $\mathcal{P} = decent$.
Since decent implies $(\beta)$, the first paragraph of the proof shows
that every $y \in |Y|$ can be represented by a monomorphism
$y : \Spec(k) \to Y$. Pick such a $y$. Pick an affine
+scheme $U$ and an \'etale morphism $U \to X$ such that the image
+of $|U| \to |Y|$ contains $y$. By
+Lemma \ref{lemma-UR-finite-above-x}
+it suffices to show that $U_y$ is a finite scheme over $k$. The fibre
+product $X_y = \Spec(k) \times_Y X$ is a quasi-compact \'etale
+algebraic space over $k$. Hence by
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-etale-over-field-scheme}
+it is a scheme. So it is a finite disjoint union of spectra of
+finite separable extensions of $k$. Say $X_y = \{x_1, \ldots, x_n\}$
+so $x_i$ is given by $x_i : \Spec(k_i) \to X$ with
+$[k_i : k] < \infty$. By assumption $X$ is decent, so the schemes
+$U_{x_i} = \Spec(k_i) \times_X U$ are finite over $k_i$.
+Finally, we note that $U_y = \coprod U_{x_i}$ as a scheme and we conclude
+that $U_y$ is finite over $k$ as desired.
+
+\medskip\noindent
+Let us prove this in case $\mathcal{P} = reasonable$.
+Pick an affine scheme $V$ and an \'etale morphism $V \to Y$.
We have to show that the fibres of $V \to Y$ are universally bounded.
+The algebraic space $V \times_Y X$ is quasi-compact.
+Thus we can find an affine scheme $W$ and a surjective \'etale morphism
+$W \to V \times_Y X$, see
+Properties of Spaces,
+Lemma \ref{spaces-properties-lemma-quasi-compact-affine-cover}.
+Here is a picture (solid diagram)
+$$
+\xymatrix{
+W \ar[r] \ar[rd] &
+V \times_Y X \ar[r] \ar[d] &
+X \ar[d]_f & \Spec(k) \ar@{..>}[l]^x \ar@{..>}[ld]^y \\
+ & V \ar[r] & Y
+}
+$$
+The morphism $W \to X$ is universally bounded by our assumption that
+the space $X$ is reasonable. Let $n$ be an integer bounding
+the degrees of the fibres of $W \to X$. We claim that the same integer
works for bounding the fibres of $V \to Y$. Namely, suppose $y \in |Y|$
is a point. Then there exists an $x \in |X|$ with $f(x) = y$ (see above).
+This means we can find a field $k$ and morphisms $x, y$ given as dotted
+arrows in the diagram above. In particular we get a surjective \'etale
+morphism
+$$
+\Spec(k) \times_{x, X} W
+\to
+\Spec(k) \times_{x, X} (V \times_Y X) = \Spec(k) \times_{y, Y} V
+$$
+which shows that the degree of $\Spec(k) \times_{y, Y} V$ over $k$
+is less than or equal to the degree of $\Spec(k) \times_{x, X} W$
+over $k$, i.e., $\leq n$, and we win. (This last part of the argument
+is the same as the argument in the proof of
+Lemma \ref{lemma-descent-universally-bounded}.
+Unfortunately that lemma is not general enough because it only applies
+to representable morphisms.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-conditions-local}
+Let $S$ be a scheme.
+Let $f : X \to Y$ be a morphism of algebraic spaces over $S$.
+Let $\mathcal{P} \in \{(\beta), decent, reasonable, very\ reasonable\}$.
+The following are equivalent
+\begin{enumerate}
+\item $f$ is $\mathcal{P}$,
+\item for every affine scheme $Z$ and every morphism $Z \to Y$ the
+base change $Z \times_Y X \to Z$ of $f$ is $\mathcal{P}$,
+\item for every affine scheme $Z$ and every morphism $Z \to Y$ the
+algebraic space $Z \times_Y X$ is $\mathcal{P}$, and
+\item there exists a Zariski covering $Y = \bigcup Y_i$ such
+that each morphism $f^{-1}(Y_i) \to Y_i$ has $\mathcal{P}$.
+\end{enumerate}
+If $\mathcal{P} \in \{(\beta), decent, reasonable\}$, then this is also
+equivalent to
+\begin{enumerate}
+\item[(5)] there exists a scheme $V$ and a surjective \'etale morphism
+$V \to Y$ such that the base change $V \times_Y X \to V$ has
+$\mathcal{P}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implications (1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (4)
+are trivial.
+The implication (3) $\Rightarrow$ (1) can be seen as follows.
+Let $Z \to Y$ be a morphism whose source is a scheme over $S$.
+Consider the algebraic space $Z \times_Y X$. If we assume (3), then
+for any affine open $W \subset Z$, the open subspace
+$W \times_Y X$ of $Z \times_Y X$ has property $\mathcal{P}$. Hence by
+Lemma \ref{lemma-properties-local}
+the space $Z \times_Y X$ has property $\mathcal{P}$, i.e., (1) holds.
+A similar argument (omitted) shows that (4) implies (1).
+
+\medskip\noindent
+The implication (1) $\Rightarrow$ (5) is trivial. Let $V \to Y$ be
+an \'etale morphism from a scheme as in (5). Let $Z$ be an affine scheme,
+and let $Z \to Y$ be a morphism. Consider the diagram
+$$
+\xymatrix{
+Z \times_Y V \ar[r]_q \ar[d]_p & V \ar[d] \\
+Z \ar[r] & Y
+}
+$$
Since $p$ is surjective and \'etale, hence open, and since $Z$ is
affine, hence quasi-compact, we can choose finitely many affine open
subschemes $W_i \subset Z \times_Y V$ such that $Z = \bigcup p(W_i)$.
+Consider the commutative diagram
+$$
+\xymatrix{
+V \times_Y X \ar[d] &
+(\coprod W_i) \times_Y X \ar[l] \ar[d] \ar[r] &
+Z \times_Y X \ar[d] \\
+V &
+\coprod W_i \ar[l] \ar[r] &
+Z
+}
+$$
+We know $V \times_Y X$ has property $\mathcal{P}$. By
+Lemma \ref{lemma-representable-properties}
+we see that $(\coprod W_i) \times_Y X$ has property $\mathcal{P}$.
+Note that the morphism $(\coprod W_i) \times_Y X \to Z \times_Y X$
+is \'etale and quasi-compact as the base change of $\coprod W_i \to Z$.
+Hence by Lemma \ref{lemma-descent-conditions}
+we conclude that $Z \times_Y X$ has property $\mathcal{P}$.
+\end{proof}
+
+\begin{remark}
+\label{remark-very-reasonable}
+An informal description of the properties $(\beta)$, decent, reasonable,
+very reasonable was given in Section \ref{section-reasonable-decent}.
+A morphism has one of these properties if (very) loosely speaking the
+fibres of the morphism have the corresponding properties.
+Being decent is useful to prove things about specializations of
+points on $|X|$. Being reasonable is a bit stronger and technically
+quite easy to work with.
+\end{remark}
+
+\noindent
+Here is a lemma we promised earlier which uses decent morphisms.
+
+\begin{lemma}
+\label{lemma-re-characterize-universally-closed}
+Let $S$ be a scheme.
+Let $f : X \to Y$ be a morphism of algebraic spaces over $S$.
+Assume $f$ is quasi-compact and decent.
+(For example if $f$ is representable, or quasi-separated, see
+Lemma \ref{lemma-properties-trivial-implications}.)
+Then $f$ is universally closed if and only if the
+existence part of the valuative criterion holds.
+\end{lemma}
+
+\begin{proof}
+In
+Morphisms of Spaces,
+Lemma \ref{spaces-morphisms-lemma-quasi-compact-existence-universally-closed}
+we proved that any quasi-compact morphism which satisfies the existence
+part of the valuative criterion is universally closed.
For the converse, assume that $f$ is universally closed.
+In the proof of
+Proposition \ref{proposition-characterize-universally-closed}
+we have seen that it suffices to show, for any valuation ring $A$,
+and any morphism $\Spec(A) \to Y$, that the base change
+$f_A : X_A \to \Spec(A)$ satisfies the existence part of the valuative
+criterion. By definition the algebraic space $X_A$ has property $(\gamma)$
+and hence
+Proposition \ref{proposition-characterize-universally-closed}
+applies to the morphism $f_A$ and we win.
+\end{proof}
+
+
+
+
+
+\section{Points of fibres}
+\label{section-points-fibres}
+
+\noindent
+Let $S$ be a scheme. Consider a cartesian diagram
+\begin{equation}
+\label{equation-points-fibres}
+\xymatrix{
+W \ar[r]_q \ar[d]_p & Z \ar[d]^g \\
+X \ar[r]^f & Y
+}
+\end{equation}
+of algebraic spaces over $S$. Let $x \in |X|$ and $z \in |Z|$
+be points mapping to the same point $y \in |Y|$. We may ask:
+When is the set
+\begin{equation}
+\label{equation-fibre}
+F_{x, z} = \{ w \in |W| \text{ such that }p(w) = x\text{ and }q(w) = z\}
+\end{equation}
+finite?
+
+\begin{example}
+\label{example-schemes}
If $X, Y, Z$ are schemes, then the set $F_{x, z}$
is the underlying set of $\Spec(\kappa(x) \otimes_{\kappa(y)} \kappa(z))$
(Schemes, Lemma \ref{schemes-lemma-points-fibre-product}). Thus we
obtain a finite set if either the field extension
$\kappa(y) \subset \kappa(x)$ is finite or the field extension
$\kappa(y) \subset \kappa(z)$ is finite. In particular, this is always
the case if $g$ is quasi-finite at $z$ (Morphisms, Lemma
+\ref{morphisms-lemma-residue-field-quasi-finite}).
+\end{example}
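\noindent
For instance, in the setting of Example \ref{example-schemes}, take
$Y = \Spec(\mathbf{Q})$, $X = \Spec(\mathbf{Q}(\sqrt{2}))$, and
$Z = \Spec(\mathbf{R})$. Then
$$
\kappa(x) \otimes_{\kappa(y)} \kappa(z)
=
\mathbf{Q}(\sqrt{2}) \otimes_{\mathbf{Q}} \mathbf{R}
\cong
\mathbf{R}[t]/(t^2 - 2)
\cong
\mathbf{R} \times \mathbf{R}
$$
and hence $F_{x, z}$ consists of exactly two points.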
+
+\begin{example}
+\label{example-not-finite}
+Let $K$ be a characteristic $0$ field endowed with an automorphism
+$\sigma$ of infinite order. Set $Y = \Spec(K)/\mathbf{Z}$ and
+$X = \mathbf{A}^1_K/\mathbf{Z}$ where $\mathbf{Z}$ acts on $K$ via $\sigma$
+and on $\mathbf{A}^1_K = \Spec(K[t])$ via $t \mapsto t + 1$.
+Let $Z = \Spec(K)$. Then $W = \mathbf{A}^1_K$. Picture
+$$
+\xymatrix{
+\mathbf{A}^1_K \ar[r]_q \ar[d]_p & \Spec(K) \ar[d]^g \\
+\mathbf{A}^1_K/\mathbf{Z} \ar[r]^f & \Spec(K)/\mathbf{Z}
+}
+$$
+Take $x$ corresponding to $t = 0$ and $z$ the unique point of $\Spec(K)$.
+Then we see that $F_{x, z} = \mathbf{Z}$ as a set.
+\end{example}
+
+\begin{lemma}
+\label{lemma-surjective-on-fibres}
+In the situation of (\ref{equation-points-fibres}) if $Z' \to Z$
+is a morphism and $z' \in |Z'|$ maps to $z$, then the induced map
+$F_{x, z'} \to F_{x, z}$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Set $W' = X \times_Y Z' = W \times_Z Z'$. Then
+$|W'| \to |W| \times_{|Z|} |Z'|$ is surjective by
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-points-cartesian}.
+Hence the surjectivity of $F_{x, z'} \to F_{x, z}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-qf-and-qc-finite-fibre}
+In diagram (\ref{equation-points-fibres}) the set (\ref{equation-fibre})
+is finite if $f$ is of finite type and $f$ is quasi-finite at $x$.
+\end{lemma}
+
+\begin{proof}
+The morphism $q$ is quasi-finite at every $w \in F_{x, z}$, see
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-base-change-quasi-finite-locus}.
+Hence the lemma follows from
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-quasi-finite-at-a-finite-number-of-points}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-decent-finite-fibre}
+In diagram (\ref{equation-points-fibres}) the set (\ref{equation-fibre})
+is finite if $y$ can be represented by a monomorphism $\Spec(k) \to Y$
+where $k$ is a field and $g$ is quasi-finite at $z$.
+(Special case: $Y$ is decent and $g$ is \'etale.)
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-surjective-on-fibres} applied twice
+we may replace $Z$ by $Z_k = \Spec(k) \times_Y Z$ and
+$X$ by $X_k = \Spec(k) \times_Y X$. We may and do
+replace $Y$ by $\Spec(k)$ as well. Note that $Z_k \to \Spec(k)$
+is quasi-finite at $z$ by Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-base-change-quasi-finite-locus}.
+Choose a scheme $V$, a point $v \in V$, and an \'etale morphism
+$V \to Z_k$ mapping $v$ to $z$. Choose a scheme $U$, a point $u \in U$,
+and an \'etale morphism $U \to X_k$ mapping $u$ to $x$.
+Again by Lemma \ref{lemma-surjective-on-fibres}
+it suffices to show $F_{u, v}$ is finite for the diagram
+$$
+\xymatrix{
+U \times_{\Spec(k)} V \ar[r] \ar[d] & V \ar[d] \\
+U \ar[r] & \Spec(k)
+}
+$$
+The morphism $V \to \Spec(k)$ is quasi-finite at $v$
+(follows from the general discussion in
+Morphisms of Spaces, Section \ref{spaces-morphisms-section-local-source-target}
+and the definition of being quasi-finite at a point).
+At this point the finiteness follows from Example \ref{example-schemes}.
+The parenthetical remark of the statement of the lemma follows
+from the fact that on decent spaces points are represented by
+monomorphisms from fields and from the fact that an \'etale
+morphism of algebraic spaces is locally quasi-finite.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-topology-fibre}
+\begin{slogan}
+Fibers of field points of algebraic spaces have the
+expected Zariski topologies.
+\end{slogan}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$.
+Let $y \in |Y|$ and assume that $y$ is represented by a quasi-compact
monomorphism $\Spec(k) \to Y$. Then $|X_k| \to |X|$ is a
homeomorphism onto $f^{-1}(\{y\}) \subset |X|$ with the induced topology.
+\end{lemma}
+
+\begin{proof}
+We will use
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-etale-open}
+and
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-monomorphism-injective-points}
+without further mention.
+Let $V \to Y$ be an \'etale morphism with $V$ affine such that there
+exists a $v \in V$ mapping to $y$. Since $\Spec(k) \to Y$ is quasi-compact
+there are a finite number of points of $V$ mapping to $y$
+(Lemma \ref{lemma-UR-finite-above-x}). After shrinking
+$V$ we may assume $v$ is the only one. Choose a scheme $U$ and
+a surjective \'etale morphism $U \to X$.
+Consider the commutative diagram
+$$
+\xymatrix{
+U \ar[d] & U_V \ar[l] \ar[d] & U_v \ar[l] \ar[d] \\
+X \ar[d] & X_V \ar[l] \ar[d] & X_v \ar[l] \ar[d] \\
+Y & V \ar[l] & v \ar[l]
+}
+$$
+Since $U_v \to U_V$ identifies $U_v$ with a subset of $U_V$ with
+the induced topology (Schemes, Lemma \ref{schemes-lemma-fibre-topological}),
+and since $|U_V| \to |X_V|$ and $|U_v| \to |X_v|$ are surjective and open,
+we see that $|X_v| \to |X_V|$ is a homeomorphism onto its image (with
+induced topology).
+On the other hand, the inverse image of $f^{-1}(\{y\})$
+under the open map $|X_V| \to |X|$ is equal to $|X_v|$.
+We conclude that $|X_v| \to f^{-1}(\{y\})$ is open.
+The morphism $X_v \to X$ factors through $X_k$
+and $|X_k| \to |X|$ is injective with image $f^{-1}(\{y\})$
+by Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-points-cartesian}. Using
+$|X_v| \to |X_k| \to f^{-1}(\{y\})$ the lemma follows because
+$X_v \to X_k$ is surjective.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-conditions-on-point-on-space-over-field}
+Let $X$ be an algebraic space locally of finite type over a field $k$.
+Let $x \in |X|$. Consider the conditions
+\begin{enumerate}
+\item $\dim_x(|X|) = 0$,
+\item $x$ is closed in $|X|$ and if $x' \leadsto x$ in $|X|$ then $x' = x$,
+\item $x$ is an isolated point of $|X|$,
+\item $\dim_x(X) = 0$,
+\item $X \to \Spec(k)$ is quasi-finite at $x$.
+\end{enumerate}
+Then (2), (3), (4), and (5) are equivalent.
+If $X$ is decent, then (1) is equivalent to the others.
+\end{lemma}
+
+\begin{proof}
+Parts (4) and (5) are equivalent for example by
+Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-locally-finite-type-quasi-finite-part} and
+\ref{spaces-morphisms-lemma-quasi-finite-at-point}.
+
+\medskip\noindent
+Let $U \to X$ be an \'etale morphism where $U$ is an affine scheme and let
+$u \in U$ be a point mapping to $x$. Moreover, if $x$ is a closed
+point, e.g., in case (2) or (3), then we may and do assume that $u$
+is a closed point. Observe that $\dim_u(U) = \dim_x(X)$ by definition
+and that this is equal to $\dim(\mathcal{O}_{U, u})$ if $u$ is a closed
+point, see Algebra, Lemma
+\ref{algebra-lemma-dimension-closed-point-finite-type-field}.
+
+\medskip\noindent
+If $\dim_x(X) > 0$ and $u$ is closed, by the arguments above
+we can choose a nontrivial
+specialization $u' \leadsto u$ in $U$. Then the transcendence degree
+of $\kappa(u')$ over $k$ exceeds the transcendence degree of
$\kappa(u)$ over $k$. It follows that the image $x'$ of $u'$ in $|X|$
is distinct from $x$, because the transcendence degrees of $x/k$ and $x'/k$
+are well defined, see Morphisms of Spaces, Definition
+\ref{spaces-morphisms-definition-dimension-fibre}.
+This applies in particular in cases (2) and (3) and we
+conclude that (2) and (3) imply (4).
+
+\medskip\noindent
+Conversely, if $X \to \Spec(k)$ is locally quasi-finite at $x$, then
+$U \to \Spec(k)$ is locally quasi-finite at $u$, hence $u$ is an
+isolated point of $U$
+(Morphisms, Lemma \ref{morphisms-lemma-quasi-finite-at-point-characterize}).
+It follows that (5) implies (2) and (3) as
+$|U| \to |X|$ is continuous and open.
+
+\medskip\noindent
+Assume $X$ is decent and (1) holds. Then $\dim_x(X) = \dim_x(|X|)$
+by Lemma \ref{lemma-dimension-decent-space} and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-conditions-on-space-over-field}
+Let $X$ be an algebraic space locally of finite type over a field $k$.
+Consider the conditions
+\begin{enumerate}
+\item $|X|$ is a finite set,
+\item $|X|$ is a discrete space,
+\item $\dim(|X|) = 0$,
+\item $\dim(X) = 0$,
+\item $X \to \Spec(k)$ is locally quasi-finite,
+\end{enumerate}
+Then (2), (3), (4), and (5) are equivalent.
+If $X$ is decent, then (1) implies the others.
+\end{lemma}
+
+\begin{proof}
+Parts (4) and (5) are equivalent for example by
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-locally-finite-type-quasi-finite-part}.
+
+\medskip\noindent
+Let $U \to X$ be a surjective \'etale morphism where $U$ is a scheme.
+
+\medskip\noindent
If $\dim(U) > 0$, then we can choose a nontrivial specialization
$u \leadsto u'$ in $U$, and the transcendence degree of $\kappa(u)$
over $k$ exceeds the transcendence degree of $\kappa(u')$ over $k$.
It follows that the images $x$ of $u$ and $x'$ of $u'$ in $|X|$ are
distinct, because the transcendence degrees of $x/k$ and $x'/k$
are well defined, see
Morphisms of Spaces, Definition
\ref{spaces-morphisms-definition-dimension-fibre}.
Thus $x \leadsto x'$ is a nontrivial specialization in $|X|$, so
$|X|$ is neither discrete nor of dimension $0$.
We conclude that (2) and (3) imply (4).
+
+\medskip\noindent
+Conversely, if $X \to \Spec(k)$ is locally quasi-finite, then $U$ is
+locally Noetherian
+(Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian})
+of dimension $0$
+(Morphisms, Lemma \ref{morphisms-lemma-locally-quasi-finite-rel-dimension-0})
+and hence is a disjoint union of spectra of Artinian local rings
+(Properties, Lemma \ref{properties-lemma-locally-Noetherian-dimension-0}).
+Hence $U$ is a discrete topological space, and since $|U| \to |X|$
+is continuous and open, the same is true for $|X|$.
+In other words, (4) implies (2) and (3).
+
+\medskip\noindent
+Assume $X$ is decent and (1) holds. Then we may choose $U$ above to
+be affine. The fibres of $|U| \to |X|$ are finite (this is a part of the
+defining property of decent spaces). Hence $U$ is a finite type scheme
+over $k$ with finitely many points. Hence $U$ is quasi-finite over $k$
+(Morphisms, Lemma \ref{morphisms-lemma-finite-fibre})
+which by definition means that $X \to \Spec(k)$ is locally quasi-finite.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-conditions-on-point-in-fibre-and-qf}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$ which is locally of finite type. Let $x \in |X|$ with image
+$y \in |Y|$. Let $F = f^{-1}(\{y\})$ with induced topology from $|X|$.
+Let $k$ be a field and let $\Spec(k) \to Y$ be in the
+equivalence class defining $y$. Set $X_k = \Spec(k) \times_Y X$.
+Let $\tilde x \in |X_k|$ map to $x \in |X|$.
+Consider the following conditions
+\begin{enumerate}
+\item
+\label{item-fibre-at-x-dim-0}
+$\dim_x(F) = 0$,
+\item
+\label{item-isolated-in-fibre}
+$x$ is isolated in $F$,
+\item
+\label{item-no-specializations-in-fibre}
+$x$ is closed in $F$ and if $x' \leadsto x$ in $F$, then $x = x'$,
+\item
+\label{item-dimension-top-k-fibre}
+$\dim_{\tilde x}(|X_k|) = 0$,
+\item
+\label{item-isolated-in-k-fibre}
+$\tilde x$ is isolated in $|X_k|$,
+\item
+\label{item-no-specializations-in-k-fibre}
+$\tilde x$ is closed in $|X_k|$ and if $\tilde x' \leadsto \tilde x$
+in $|X_k|$, then $\tilde x = \tilde x'$,
+\item
+\label{item-k-fibre-at-x-dim-0}
+$\dim_{\tilde x}(X_k) = 0$,
+\item
+\label{item-quasi-finite-at-x}
+$f$ is quasi-finite at $x$.
+\end{enumerate}
+Then we have
+$$
+\xymatrix{
+(\ref{item-dimension-top-k-fibre}) \ar@{=>}[r]_{f\text{ decent}} &
+(\ref{item-isolated-in-k-fibre}) \ar@{<=>}[r] &
+(\ref{item-no-specializations-in-k-fibre}) \ar@{<=>}[r] &
+(\ref{item-k-fibre-at-x-dim-0}) \ar@{<=>}[r] &
+(\ref{item-quasi-finite-at-x})
+}
+$$
+If $Y$ is decent, then conditions (\ref{item-isolated-in-fibre}) and
+(\ref{item-no-specializations-in-fibre}) are equivalent to each other
+and to conditions
+(\ref{item-isolated-in-k-fibre}),
+(\ref{item-no-specializations-in-k-fibre}),
+(\ref{item-k-fibre-at-x-dim-0}), and
+(\ref{item-quasi-finite-at-x}).
+If $Y$ and $X$ are decent, then all conditions are equivalent.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-conditions-on-point-on-space-over-field} conditions
+(\ref{item-isolated-in-k-fibre}),
+(\ref{item-no-specializations-in-k-fibre}), and (\ref{item-k-fibre-at-x-dim-0})
+are equivalent to each other and to the condition that
+$X_k \to \Spec(k)$ is quasi-finite at $\tilde x$.
+Thus by Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-base-change-quasi-finite-locus}
+they are also equivalent to (\ref{item-quasi-finite-at-x}).
+If $f$ is decent, then $X_k$ is a decent algebraic space and
+Lemma \ref{lemma-conditions-on-point-on-space-over-field}
+shows that (\ref{item-dimension-top-k-fibre}) implies
+(\ref{item-isolated-in-k-fibre}).
+
+\medskip\noindent
+If $Y$ is decent, then we can pick a quasi-compact monomorphism
+$\Spec(k') \to Y$ in the equivalence class of $y$. In this case
+Lemma \ref{lemma-topology-fibre}
+tells us that $|X_{k'}| \to F$ is a homeomorphism.
+Combined with the arguments given above this implies
+the remaining statements of the lemma; details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-conditions-on-fibre-and-qf}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$ which is locally of finite type. Let $y \in |Y|$. Let $k$ be a field
+and let $\Spec(k) \to Y$ be in the equivalence class defining $y$.
+Set $X_k = \Spec(k) \times_Y X$ and let $F = f^{-1}(\{y\})$ with the
+induced topology from $|X|$. Consider the following conditions
+\begin{enumerate}
+\item
+\label{item-fibre-finite}
+$F$ is finite,
+\item
+\label{item-fibre-discrete}
+$F$ is a discrete topological space,
+\item
+\label{item-fibre-no-specializations}
+$\dim(F) = 0$,
+\item
+\label{item-k-fibre-finite}
+$|X_k|$ is a finite set,
+\item
+\label{item-k-fibre-discrete}
+$|X_k|$ is a discrete space,
+\item
+\label{item-k-fibre-no-specializations}
+$\dim(|X_k|) = 0$,
+\item
+\label{item-k-fibre-dim-0}
+$\dim(X_k) = 0$,
+\item
+\label{item-quasi-finite-at-points-fibre}
+$f$ is quasi-finite at all points of $|X|$ lying over $y$.
+\end{enumerate}
+Then we have
+$$
+\xymatrix{
+(\ref{item-fibre-finite}) &
+(\ref{item-k-fibre-finite}) \ar@{=>}[l] \ar@{=>}[r]_{f\text{ decent}} &
+(\ref{item-k-fibre-discrete}) \ar@{<=>}[r] &
+(\ref{item-k-fibre-no-specializations}) \ar@{<=>}[r] &
+(\ref{item-k-fibre-dim-0}) \ar@{<=>}[r] &
+(\ref{item-quasi-finite-at-points-fibre})
+}
+$$
+If $Y$ is decent, then conditions (\ref{item-fibre-discrete}) and
+(\ref{item-fibre-no-specializations})
+are equivalent to each other and to conditions (\ref{item-k-fibre-discrete}),
+(\ref{item-k-fibre-no-specializations}), (\ref{item-k-fibre-dim-0}), and
+(\ref{item-quasi-finite-at-points-fibre}).
+If $Y$ and $X$ are decent, then (\ref{item-fibre-finite}) implies
+all the other conditions.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-conditions-on-space-over-field}
+conditions (\ref{item-k-fibre-discrete}),
+(\ref{item-k-fibre-no-specializations}), and (\ref{item-k-fibre-dim-0})
+are equivalent to each other and to the condition that
+$X_k \to \Spec(k)$ is locally quasi-finite.
+Thus by Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-base-change-quasi-finite-locus}
+they are also equivalent to (\ref{item-quasi-finite-at-points-fibre}).
+If $f$ is decent, then $X_k$ is a decent algebraic space and
+Lemma \ref{lemma-conditions-on-space-over-field}
+shows that (\ref{item-k-fibre-finite}) implies (\ref{item-k-fibre-discrete}).
+
+\medskip\noindent
+The map $|X_k| \to F$ is surjective by
+Properties of Spaces, Lemma \ref{spaces-properties-lemma-points-cartesian}
+and we see
+(\ref{item-k-fibre-finite}) $\Rightarrow$ (\ref{item-fibre-finite}).
+
+\medskip\noindent
+If $Y$ is decent, then we can pick a quasi-compact monomorphism
+$\Spec(k') \to Y$ in the equivalence class of $y$. In this case
+Lemma \ref{lemma-topology-fibre}
+tells us that $|X_{k'}| \to F$ is a homeomorphism.
+Combined with the arguments given above this implies
+the remaining statements of the lemma; details omitted.
+\end{proof}
+
+
+
+
+
+
+
+\section{Monomorphisms}
+\label{section-monomorphisms}
+
+\noindent
+Here is another case where monomorphisms are representable.
+Please see More on Morphisms of Spaces, Section
+\ref{spaces-more-morphisms-section-monomorphisms}
+for more information.
+
+\begin{lemma}
+\label{lemma-monomorphism-toward-disjoint-union-dim-0-rings}
+Let $S$ be a scheme. Let $Y$ be a disjoint union of spectra of
+zero dimensional local rings over $S$.
+Let $f : X \to Y$ be a monomorphism of algebraic spaces over $S$.
+Then $f$ is representable, i.e., $X$ is a scheme.
+\end{lemma}
+
+\begin{proof}
+This immediately reduces to the case $Y = \Spec(A)$ where
+$A$ is a zero dimensional local ring, i.e.,
+$\Spec(A) = \{\mathfrak m_A\}$
+is a singleton. If $X = \emptyset$, then there is nothing to prove.
+If not, choose a nonempty affine scheme $U = \Spec(B)$
+and an \'etale morphism $U \to X$. As $|X|$ is a singleton (as a
+subset of $|Y|$, see
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-monomorphism-injective-points})
+we see that $U \to X$ is surjective. Note that
+$U \times_X U = U \times_Y U = \Spec(B \otimes_A B)$.
+Thus we see that the ring maps $B \to B \otimes_A B$ are \'etale.
+Since
+$$
+(B \otimes_A B)/\mathfrak m_A(B \otimes_A B)
+=
+(B/\mathfrak m_AB) \otimes_{A/\mathfrak m_A} (B/\mathfrak m_AB)
+$$
+we see that
+$B/\mathfrak m_AB \to (B \otimes_A B)/\mathfrak m_A(B \otimes_A B)$
+is flat and in fact free of rank equal to the dimension of
$B/\mathfrak m_AB$ as an $A/\mathfrak m_A$-vector space. Since
+$B \to B \otimes_A B$ is \'etale, this can only happen if this
+dimension is finite (see for example
+Morphisms, Lemmas \ref{morphisms-lemma-etale-universally-bounded} and
+\ref{morphisms-lemma-locally-quasi-finite-qc-source-universally-bounded}).
+Every prime of $B$ lies over $\mathfrak m_A$ (the unique prime of $A$).
Hence $\Spec(B) = \Spec(B/\mathfrak m_A B)$ as a topological
+space, and this space is a finite discrete set as $B/\mathfrak m_A B$
+is an Artinian ring, see
+Algebra, Lemmas \ref{algebra-lemma-finite-dimensional-algebra} and
+\ref{algebra-lemma-artinian-finite-length}.
+Hence all prime ideals of $B$ are maximal and
+$B = B_1 \times \ldots \times B_n$ is a product of finitely many
+local rings of dimension zero, see
+Algebra, Lemma \ref{algebra-lemma-product-local}.
+Thus $B \to B \otimes_A B$ is finite \'etale as all the local rings
+$B_i$ are henselian by
+Algebra, Lemma \ref{algebra-lemma-local-dimension-zero-henselian}.
+Thus $X$ is an affine scheme by
+Groupoids, Proposition \ref{groupoids-proposition-finite-flat-equivalence}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Generic points}
+\label{section-generic-points}
+
+\noindent
+This section is a continuation of
+Properties of Spaces, Section \ref{spaces-properties-section-generic-points}.
+
+\begin{lemma}
+\label{lemma-decent-generic-points}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$. The following are equivalent
+\begin{enumerate}
+\item $x$ is a generic point of an irreducible component of $|X|$,
+\item for any \'etale morphism $(Y, y) \to (X, x)$ of pointed algebraic
+spaces, $y$ is a generic point of an irreducible component of $|Y|$,
+\item for some \'etale morphism $(Y, y) \to (X, x)$ of pointed algebraic
+spaces, $y$ is a generic point of an irreducible component of $|Y|$,
+\item the dimension of the local ring of $X$ at $x$ is zero, and
\item $x$ is a point of codimension $0$ on $X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Conditions (4) and (5) are equivalent for any algebraic space
+by definition, see Properties of Spaces, Definition
+\ref{spaces-properties-definition-dimension-local-ring}.
+Observe that any $Y$ as in (2) and (3) is decent by
+Lemma \ref{lemma-etale-named-properties}.
+Thus it suffices to prove the equivalence of (1) and (4)
+as then the equivalence with (2) and (3) follows since the dimension
+of the local ring of $Y$ at $y$ is equal to the dimension
+of the local ring of $X$ at $x$.
+Let $f : U \to X$ be an \'etale morphism from an affine scheme and let
+$u \in U$ be a point mapping to $x$.
+
+\medskip\noindent
+Assume (1). Let $u' \leadsto u$ be a specialization in $U$.
+Then $f(u') = f(u) = x$. By
+Lemma \ref{lemma-decent-no-specializations-map-to-same-point}
+we see that $u' = u$. Hence $u$ is a generic point of an irreducible component
+of $U$. Thus $\dim(\mathcal{O}_{U, u}) = 0$ and we see that (4) holds.
+
+\medskip\noindent
+Assume (4). The point $x$ is contained in an irreducible component
+$T \subset |X|$. Since $|X|$ is sober
+(Proposition \ref{proposition-reasonable-sober})
we see that $T$ has a generic point $x'$. Of course $x' \leadsto x$.
We can lift this specialization to $u' \leadsto u$ in $U$
(Lemma \ref{lemma-decent-specialization}). Since
$\dim(\mathcal{O}_{U, u}) = 0$ this forces $u' = u$, hence $x' = x$,
i.e., (1) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-codimension-local-ring}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $T \subset |X|$ be an irreducible closed subset. Let $\xi \in T$
+be the generic point (Proposition \ref{proposition-reasonable-sober}).
+Then $\text{codim}(T, |X|)$
+(Topology, Definition \ref{topology-definition-codimension})
+is the dimension of the local ring of $X$ at $\xi$
+(Properties of Spaces, Definition
+\ref{spaces-properties-definition-dimension-local-ring}).
+\end{lemma}
+
+\begin{proof}
+Choose a scheme $U$, a point $u \in U$, and an \'etale morphism
+$U \to X$ sending $u$ to $\xi$. Then any sequence of nontrivial
+specializations $\xi_e \leadsto \ldots \leadsto \xi_0 = \xi$
+can be lifted to a sequence $u_e \leadsto \ldots \leadsto u_0 = u$ in $U$
+by Lemma \ref{lemma-decent-specialization}.
+Conversely, any sequence of nontrivial specializations
+$u_e \leadsto \ldots \leadsto u_0 = u$ in $U$
+maps to a sequence of nontrivial specializations
+$\xi_e \leadsto \ldots \leadsto \xi_0 = \xi$ by
+Lemma \ref{lemma-decent-no-specializations-map-to-same-point}.
+Because $|X|$ and $U$ are sober topological spaces
+we conclude that the codimension of $T$ in $|X|$
+and of $\overline{\{u\}}$ in $U$ are the same.
+In this way the lemma reduces to the schemes case which
+is Properties, Lemma \ref{properties-lemma-codimension-local-ring}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-get-reasonable}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$. Assume
+\begin{enumerate}
+\item every quasi-compact scheme \'etale over $X$ has finitely many
+irreducible components, and
+\item every $x \in |X|$ of codimension $0$ on $X$ can be represented
+by a monomorphism $\Spec(k) \to X$.
+\end{enumerate}
+Then $X$ is a reasonable algebraic space.
+\end{lemma}
+
+\begin{proof}
+Let $U$ be an affine scheme and let $a : U \to X$ be an \'etale morphism.
+We have to show that the fibres of $a$ are universally bounded. By
+assumption (1) the scheme $U$ has finitely many irreducible components.
+Let $u_1, \ldots, u_n \in U$ be the generic points of these irreducible
+components. Let $\{x_1, \ldots, x_m\} \subset |X|$ be the image
+of $\{u_1, \ldots, u_n\}$. Each $x_j$ is a point of codimension $0$.
+By assumption (2) we may choose a monomorphism $\Spec(k_j) \to X$
+representing $x_j$. By Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-codimension-0-points} we have
+$$
+U \times_X \Spec(k_j) = \coprod\nolimits_{a(u_i) = x_j} \Spec(\kappa(u_i))
+$$
+This is a scheme finite over $\Spec(k_j)$ of degree
$d_j = \sum_{a(u_i) = x_j} [\kappa(u_i) : k_j]$. Set $N = \max d_j$.

\medskip\noindent
Observe that $a$ is separated
(Properties of Spaces, Lemma \ref{spaces-properties-lemma-separated-cover}).
Consider the stratification
$$
X = X_0 \supset X_1 \supset X_2 \supset \ldots
$$
associated to $U \to X$ in Lemma \ref{lemma-stratify-flat-fp-lqf}.
By our choice of $N$ above we conclude that $X_{N + 1}$ is empty.
Namely, if not, then $a^{-1}(X_{N + 1})$ is a nonempty open
of $U$ and hence would contain one of the $u_i$. This would mean
that $X_{N + 1}$ contains $x_j = a(u_i)$, which is impossible as the
fibre of $a$ over $x_j$ has degree $d_j \leq N$.
Hence we see that the fibres of $U \to X$ are universally bounded
(in fact by the integer $N$).
+\end{proof}
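
\medskip\noindent
To illustrate the bound in the proof: if $X = \Spec(\mathbf{R})$ and
$a : U = \Spec(\mathbf{C}) \to X$ is the obvious \'etale morphism, then
$U$ has a single generic point $u_1$ lying over the unique point
$x_1 \in |X|$, which is represented by the identity
$\Spec(\mathbf{R}) \to X$, and
$$
d_1 = [\kappa(u_1) : \mathbf{R}] = [\mathbf{C} : \mathbf{R}] = 2
$$
so the fibres of $a$ are universally bounded by $2$.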
+
+\begin{lemma}
+\label{lemma-finitely-many-irreducible-components}
+Let $S$ be a scheme. Let $X$ be an algebraic space over $S$.
+The following are equivalent
+\begin{enumerate}
+\item $X$ is decent and $|X|$ has finitely many irreducible components,
+\item every quasi-compact scheme \'etale over $X$ has finitely many
+irreducible components, there are finitely many $x \in |X|$ of
+codimension $0$ on $X$, and each of these can be represented
+by a monomorphism $\Spec(k) \to X$,
+\item there exists a dense open $X' \subset X$ which is
+a scheme, $X'$ has finitely many irreducible components
+with generic points $\{x'_1, \ldots, x'_m\}$, and
+the morphism $x'_j \to X$ is quasi-compact for $j = 1, \ldots, m$.
+\end{enumerate}
+Moreover, if these conditions hold, then $X$ is reasonable and the
+points $x'_j \in |X|$ are the generic points of the irreducible
+components of $|X|$.
+\end{lemma}
+
+\begin{proof}
+In the proof we use Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-codimension-0-points}
+without further mention.
Assume (1). Then $X$ has a dense open subscheme $X'$ by
Theorem \ref{theorem-decent-open-dense-scheme}.
Since the closure of an irreducible component of $|X'|$
is an irreducible component of $|X|$, we see that $|X'|$
has finitely many irreducible components. Moreover, the morphisms
$x'_j \to X$ are quasi-compact monomorphisms because $X$ is decent
(Definition \ref{definition-very-reasonable}). Thus (3) holds.
+
+\medskip\noindent
+Assume $X' \subset X$ is as in (3). Let $\{x'_1, \ldots, x'_m\}$
+be the generic points of the irreducible components of $X'$.
+Let $a : U \to X$ be an \'etale morphism with $U$ a quasi-compact scheme.
+To prove (2) it suffices to show that $U$ has
+finitely many irreducible components
+whose generic points lie over $\{x'_1, \ldots, x'_m\}$. It suffices
+to prove this for the members of a finite affine open cover of $U$,
+hence we may and do assume $U$ is affine.
+Note that $U' = a^{-1}(X') \subset U$ is a dense open.
+Since $U' \to X'$ is an \'etale morphism of schemes, we see
+the generic points of irreducible components of $U'$ are the points
+lying over $\{x'_1, \ldots, x'_m\}$. Since $x'_j \to X$ is
+quasi-compact there are finitely many points of $U$ lying over $x'_j$
+(Lemma \ref{lemma-UR-finite-above-x}). Hence $U'$ has finitely
+many irreducible components, which implies that the closures
+of these irreducible components are the irreducible components of
+$U$. Thus (2) holds.
+
+\medskip\noindent
+Assume (2). This implies (1) and the final
+statement by Lemma \ref{lemma-get-reasonable}.
+(We also use that a reasonable algebraic space is decent, see
+discussion following Definition \ref{definition-very-reasonable}.)
+\end{proof}
+
+
+
+\section{Generically finite morphisms}
+\label{section-generically-finite}
+
+\noindent
+This section discusses for morphisms of algebraic spaces the material
+discussed in Morphisms, Section \ref{morphisms-section-generically-finite}
+and
+Varieties, Section \ref{varieties-section-generically-finite}
+for morphisms of schemes.
+
+\begin{lemma}
+\label{lemma-generically-finite}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$. Assume that $f$ is quasi-separated of finite type.
+Let $y \in |Y|$ be a point of codimension $0$ on $Y$.
+The following are equivalent:
+\begin{enumerate}
+\item the space $|X_k|$ is finite where $\Spec(k) \to Y$ represents $y$,
+\item $X \to Y$ is quasi-finite at all points of $|X|$ over $y$,
+\item there exists an open subspace $Y' \subset Y$ with $y \in |Y'|$
+such that $Y' \times_Y X \to Y'$ is finite.
+\end{enumerate}
+If $Y$ is decent these are also equivalent to
+\begin{enumerate}
+\item[(4)] the set $f^{-1}(\{y\})$ is finite.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) follows from
+Lemma \ref{lemma-conditions-on-fibre-and-qf}
+(and the fact that a quasi-separated morphism is decent by
+Lemma \ref{lemma-properties-trivial-implications}).
+
+\medskip\noindent
+Assume the equivalent conditions of (1) and (2). Choose an affine scheme $V$
+and an \'etale morphism $V \to Y$ mapping a point $v \in V$ to $y$. Then $v$
+is a generic point of an irreducible component of $V$ by
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-codimension-0-points}.
+Choose an affine scheme $U$
+and a surjective \'etale morphism $U \to V \times_Y X$. Then $U \to V$ is of
+finite type. The morphism $U \to V$ is quasi-finite at every point lying over
+$v$ by (2). It follows that the fibre of $U \to V$ over $v$ is finite
+(Morphisms, Lemma
+\ref{morphisms-lemma-quasi-finite-at-a-finite-number-of-points}). By
+Morphisms, Lemma \ref{morphisms-lemma-generically-finite}
+after shrinking $V$ we may assume that $U \to V$ is finite.
+Let
+$$
+R = U \times_{V \times_Y X} U
+$$
+Since $f$ is quasi-separated, we see that $V \times_Y X$ is quasi-separated
and hence $R$ is a quasi-compact scheme. Moreover the morphism
+$R \to V$ is quasi-finite as the composition of an \'etale morphism
+$R \to U$ and a finite morphism $U \to V$. Hence we may apply
+Morphisms, Lemma \ref{morphisms-lemma-generically-finite}
+once more and after shrinking $V$ we may assume that $R \to V$ is
+finite as well. This of course implies that the two projections
+$R \to V$ are finite \'etale. It follows that
$U/R = V \times_Y X$ is an affine scheme, see
+Groupoids, Proposition \ref{groupoids-proposition-finite-flat-equivalence}.
+By Morphisms, Lemma \ref{morphisms-lemma-image-proper-is-proper}
+we conclude that $V \times_Y X \to V$ is proper and by
+Morphisms, Lemma \ref{morphisms-lemma-finite-proper}
+we conclude that $V \times_Y X \to V$ is finite.
+Finally, we let $Y' \subset Y$ be the open subspace of $Y$
+corresponding to the image of $|V| \to |Y|$.
+By Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-integral-local}
+we conclude that $Y' \times_Y X \to Y'$ is finite as the base
+change to $V$ is finite and as $V \to Y'$ is a surjective \'etale
+morphism.
+
+\medskip\noindent
+If $Y$ is decent and $f$ is quasi-separated, then we see that
+$X$ is decent too; use Lemmas
+\ref{lemma-properties-trivial-implications} and
+\ref{lemma-property-over-property}.
+Hence Lemma \ref{lemma-conditions-on-fibre-and-qf}
+applies to show that (4) implies (1) and (2). On the other hand,
+we see that (2) implies (4) by Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-quasi-finite-at-a-finite-number-of-points}.
+\end{proof}
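
\medskip\noindent
A familiar example: let $Y = \mathbf{A}^2_k$ and let $f : X \to Y$ be
the blowing up of the origin, so
$$
X = \{((x, y), [u : v]) \in \mathbf{A}^2_k \times \mathbf{P}^1_k
\mid xv = yu\}.
$$
The fibre of $f$ over the generic point $y \in |Y|$ is a single point
and $f$ is quasi-finite at every point of $|X|$ lying over $y$.
Correspondingly, $f$ is an isomorphism, in particular finite, over the
open subscheme $Y' = \mathbf{A}^2_k \setminus \{0\}$.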
+
+\begin{lemma}
+\label{lemma-generically-finite-reprise}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$. Assume that $f$ is quasi-separated and locally of finite type
+and $Y$ quasi-separated. Let $y \in |Y|$ be a point of codimension $0$ on $Y$.
+The following are equivalent:
+\begin{enumerate}
+\item the set $f^{-1}(\{y\})$ is finite,
+\item the space $|X_k|$ is finite where $\Spec(k) \to Y$ represents $y$,
+\item there exist open subspaces $X' \subset X$ and $Y' \subset Y$
+with $f(X') \subset Y'$, $y \in |Y'|$, and $f^{-1}(\{y\}) \subset |X'|$
+such that $f|_{X'} : X' \to Y'$ is finite.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since quasi-separated algebraic spaces are decent, the equivalence
+of (1) and (2) follows from
+Lemma \ref{lemma-conditions-on-fibre-and-qf}.
+To prove that (1) and (2) imply (3)
+we may and do replace $Y$ by a quasi-compact open containing $y$.
+Since $f^{-1}(\{y\})$ is finite, we can find a quasi-compact
open subspace $X' \subset X$ containing the fibre.
+The restriction $f|_{X'} : X' \to Y$ is quasi-compact and quasi-separated
+by Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-quasi-compact-quasi-separated-permanence}
+(this is where we use that $Y$ is quasi-separated).
+Applying Lemma \ref{lemma-generically-finite}
+to $f|_{X'} : X' \to Y$ we see that (3) holds.
+We omit the proof that (3) implies (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-finiteness-over-generic-point}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$. Assume $f$ is locally of finite type.
+Let $X^0 \subset |X|$, resp.\ $Y^0 \subset |Y|$ denote the set of
+codimension $0$ points of $X$, resp.\ $Y$. Let $y \in Y^0$. The following are
+equivalent
+\begin{enumerate}
+\item $f^{-1}(\{y\}) \subset X^0$,
+\item $f$ is quasi-finite at all points lying over $y$,
+\item $f$ is quasi-finite at all $x \in X^0$ lying over $y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $V$ be a scheme and let $V \to Y$ be a surjective \'etale morphism.
+Let $U$ be a scheme and let $U \to V \times_Y X$ be a surjective \'etale
+morphism. Then $f$ is quasi-finite at the image $x$ of a point $u \in U$
+if and only if $U \to V$ is quasi-finite at $u$. Moreover, $x \in X^0$
+if and only if $u$ is the generic point of an irreducible component
+of $U$ (Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-codimension-0-points}).
+Thus the lemma reduces to the case of the morphism $U \to V$, i.e., to
+Morphisms, Lemma \ref{morphisms-lemma-quasi-finiteness-over-generic-point}.
+\end{proof}
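
\medskip\noindent
Two examples over $Y = \mathbf{A}^1_k$ with $y$ the generic point:
for the squaring map
$$
f : \mathbf{A}^1_k \longrightarrow \mathbf{A}^1_k, \quad
t \longmapsto t^2
$$
the only point of the fibre over $y$ is the generic point of the
source, so $f^{-1}(\{y\}) \subset X^0$ and indeed $f$ is quasi-finite.
For the projection $\mathbf{A}^2_k \to \mathbf{A}^1_k$ the fibre over
$y$ is $\mathbf{A}^1_{\kappa(y)}$, which contains points of
codimension $1$ on $\mathbf{A}^2_k$, and the projection is not
quasi-finite at any point lying over $y$.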
+
+\begin{lemma}
+\label{lemma-finite-over-dense-open}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces
+over $S$. Assume $f$ is locally of finite type.
+Let $X^0 \subset |X|$, resp.\ $Y^0 \subset |Y|$ denote the set of
+codimension $0$ points of $X$, resp.\ $Y$. Assume
+\begin{enumerate}
+\item $Y$ is decent,
+\item $X^0$ and $Y^0$ are finite and $f^{-1}(Y^0) = X^0$,
+\item either $f$ is quasi-compact or $f$ is separated.
+\end{enumerate}
+Then there exists a dense open $V \subset Y$
+such that $f^{-1}(V) \to V$ is finite.
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-finitely-many-irreducible-components} and
+\ref{lemma-decent-generic-points} we may assume $Y$ is a scheme
+with finitely many irreducible components. Shrinking further we
+may assume $Y$ is an irreducible affine scheme with generic point $y$.
+Then the fibre of $f$ over $y$ is finite.
+
+\medskip\noindent
+Assume $f$ is quasi-compact and $Y$ affine irreducible. Then $X$ is
+quasi-compact and we may choose an affine scheme $U$ and a
+surjective \'etale morphism $U \to X$. Then $U \to Y$ is of finite type
+and the fibre of $U \to Y$ over $y$ is the set $U^0$ of generic points of
+irreducible components of $U$ (Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-codimension-0-points}).
+Hence $U^0$ is finite
+(Morphisms, Lemma
+\ref{morphisms-lemma-quasi-finite-at-a-finite-number-of-points})
+and after shrinking $Y$ we may assume that $U \to Y$ is finite
+(Morphisms, Lemma \ref{morphisms-lemma-generically-finite}).
+Next, consider $R = U \times_X U$. Since the projection
+$s : R \to U$ is \'etale we see that $R^0 = s^{-1}(U^0)$
+lies over $y$. Since $R \to U \times_Y U$ is a monomorphism,
+we conclude that $R^0$ is finite as $U \times_Y U \to Y$ is finite.
+And $R$ is separated
+(Properties of Spaces, Lemma \ref{spaces-properties-lemma-separated-cover}).
+Thus we may shrink $Y$ once more to reach the situation
+where $R$ is finite over $Y$
+(Morphisms, Lemma \ref{morphisms-lemma-finite-over-dense-open}).
+In this case it follows that $X = U/R$ is finite over $Y$
+by exactly the same arguments as given in the proof of
+Lemma \ref{lemma-generically-finite}
+(or we can simply apply that lemma because
+it follows immediately that $X$ is quasi-separated as well).
+
+\medskip\noindent
+Assume $f$ is separated and $Y$ affine irreducible. Choose $V \subset Y$
+and $U \subset X$ as in Lemma \ref{lemma-generically-finite-reprise}.
+Since $f|_U : U \to V$ is finite, we see that $U \subset f^{-1}(V)$
+is closed as well as open
+(Morphisms of Spaces, Lemmas
+\ref{spaces-morphisms-lemma-universally-closed-permanence} and
+\ref{spaces-morphisms-lemma-finite-proper}).
+Thus $f^{-1}(V) = U \amalg W$ for some
+open subspace $W$ of $X$. However, since $U$ contains all the codimension
+$0$ points of $X$ we conclude that $W = \emptyset$
+(Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-codimension-0-points-dense})
+as desired.
+\end{proof}
+
+
+
+
+
+
+\section{Birational morphisms}
+\label{section-birational}
+
+\noindent
+The following definition of a birational morphism of algebraic spaces
+seems to be the closest to our definition
+(Morphisms, Definition \ref{morphisms-definition-birational})
+of a birational morphism of schemes.
+
+\begin{definition}
+\label{definition-birational}
Let $S$ be a scheme. Let $X$ and $Y$ be algebraic spaces over $S$.
+Assume $X$ and $Y$ are decent and that $|X|$ and $|Y|$ have finitely many
+irreducible components. We say a morphism $f : X \to Y$ is
+{\it birational} if
+\begin{enumerate}
+\item $|f|$ induces a bijection between the set of generic points
+of irreducible components of $|X|$ and the set of generic points
+of the irreducible components of $|Y|$, and
+\item for every generic point $x \in |X|$ of an irreducible component
+the local ring map $\mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$
+is an isomorphism (see clarification below).
+\end{enumerate}
+\end{definition}
+
+\noindent
+Clarification: Since $X$ and $Y$ are decent the topological spaces
+$|X|$ and $|Y|$ are sober (Proposition \ref{proposition-reasonable-sober}).
+Hence condition (1) makes sense. Moreover, because we have assumed
+that $|X|$ and $|Y|$ have finitely many irreducible components, we
+see that the generic points $x_1, \ldots, x_n \in |X|$,
resp.\ $y_1, \ldots, y_m \in |Y|$ are contained in any dense open
+of $|X|$, resp.\ $|Y|$. In particular, they are contained in
+the schematic locus of $X$, resp.\ $Y$ by
+Theorem \ref{theorem-decent-open-dense-scheme}.
+Thus we can define $\mathcal{O}_{X, x_i}$, resp.\ $\mathcal{O}_{Y, y_i}$
+to be the local ring of this scheme at $x_i$, resp.\ $y_i$.
+
+\medskip\noindent
+We conclude that if the morphism $f : X \to Y$ is birational, then
+there exist dense open subspaces $X' \subset X$ and $Y' \subset Y$ such that
+\begin{enumerate}
+\item $f(X') \subset Y'$,
+\item $X'$ and $Y'$ are representable, and
+\item $f|_{X'} : X' \to Y'$ is birational in
+the sense of Morphisms, Definition \ref{morphisms-definition-birational}.
+\end{enumerate}
+However, we do insist that $X$ and $Y$ are decent with finitely many
+irreducible components. Other ways to characterize decent algebraic spaces
+with finitely many irreducible components
+are given in Lemma \ref{lemma-finitely-many-irreducible-components}.
+In most cases birational morphisms are isomorphisms over dense opens.
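
\medskip\noindent
A basic example, with $k$ a field of characteristic $\not= 2$: the
normalization of the nodal cubic
$$
\nu : \mathbf{A}^1_k \longrightarrow
Y = \Spec(k[x, y]/(y^2 - x^2(x + 1))), \quad
t \longmapsto (t^2 - 1,\ t(t^2 - 1))
$$
is birational: both schemes are integral with function field $k(t)$
and $\nu$ identifies the local rings at the generic points. It is an
isomorphism over the complement of the node, but has a two-point fibre
over the node, so it is not an isomorphism.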
+
+\begin{lemma}
+\label{lemma-birational-dominant}
+Let $S$ be a scheme.
+Let $f : X \to Y$ be a morphism of algebraic spaces over $S$ which
+are decent and have finitely many irreducible components. If $f$ is
+birational then $f$ is dominant.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from the definitions. See
+Morphisms of Spaces, Definition \ref{spaces-morphisms-definition-dominant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-birational-generic-fibres}
+Let $S$ be a scheme. Let $f : X \to Y$ be a birational morphism of
+algebraic spaces over $S$ which are decent and have finitely
many irreducible components. If $y \in |Y|$ is the generic point of
+an irreducible component, then the base change
+$X \times_Y \Spec(\mathcal{O}_{Y, y}) \to \Spec(\mathcal{O}_{Y, y})$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Let $X' \subset X$ and $Y' \subset Y$ be the maximal open subspaces
+which are representable, see
+Lemma \ref{lemma-finitely-many-irreducible-components}. By
+Lemma \ref{lemma-quasi-finiteness-over-generic-point}
the fibre of $f$ over $y$ consists
of points of codimension $0$ on $X$ and is therefore contained
+in $X'$. Hence $X \times_Y \Spec(\mathcal{O}_{Y, y}) =
+X' \times_{Y'} \Spec(\mathcal{O}_{Y', y})$ and the result follows
+from Morphisms, Lemma \ref{morphisms-lemma-birational-generic-fibres}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-birational-birational}
+Let $S$ be a scheme. Let $f : X \to Y$ be a birational morphism of
+algebraic spaces over $S$ which are decent and have finitely many
+irreducible components. Assume one of the following conditions is satisfied
+\begin{enumerate}
\item $f$ is locally of finite type and $Y$ reduced,
+\item $f$ is locally of finite presentation.
+\end{enumerate}
+Then there exist dense opens $U \subset X$ and $V \subset Y$
+such that $f(U) \subset V$ and $f|_U : U \to V$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finitely-many-irreducible-components} we may assume
+that $X$ and $Y$ are schemes. In this case the result is
+Morphisms, Lemma \ref{morphisms-lemma-birational-birational}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-birational-isomorphism-over-dense-open}
+Let $S$ be a scheme. Let $f : X \to Y$ be a birational morphism of
+algebraic spaces over $S$ which are decent and have finitely
+many irreducible components. Assume
+\begin{enumerate}
+\item either $f$ is quasi-compact or $f$ is separated, and
+\item either $f$ is locally of finite type and $Y$ is reduced or
+$f$ is locally of finite presentation.
+\end{enumerate}
+Then there exists a dense open $V \subset Y$
+such that $f^{-1}(V) \to V$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finitely-many-irreducible-components} we may assume
+$Y$ is a scheme. By Lemma \ref{lemma-finite-over-dense-open} we may assume
+that $f$ is finite. Then $X$ is a scheme too and the result follows from
+Morphisms, Lemma \ref{morphisms-lemma-birational-isomorphism-over-dense-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-birational-etale-localization}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic
+spaces over $S$ which are decent and have finitely many irreducible
+components. If $f$ is birational and $V \to Y$ is an \'etale morphism
+with $V$ affine, then $X \times_Y V$ is decent with finitely
+many irreducible components and $X \times_Y V \to V$ is birational.
+\end{lemma}
+
+\begin{proof}
+The algebraic space $U = X \times_Y V$ is decent
+(Lemma \ref{lemma-etale-named-properties}).
+The generic points of $V$ and $U$ are the elements of $|V|$ and $|U|$
+which lie over generic points of $|Y|$ and $|X|$
+(Lemma \ref{lemma-decent-generic-points}).
Since $Y$ is decent, there are finitely many points of $|V|$ lying over
each generic point of $|Y|$, hence there are finitely many generic points
on $V$. Let $\xi \in |X|$ be a generic point of an irreducible component.
+By the discussion following Definition \ref{definition-birational}
+we have a cartesian square
+$$
+\xymatrix{
+\Spec(\mathcal{O}_{X, \xi}) \ar[d] \ar[r] & X \ar[d] \\
+\Spec(\mathcal{O}_{Y, f(\xi)}) \ar[r] & Y
+}
+$$
+whose horizontal morphisms are monomorphisms identifying local rings
+and where the left vertical arrow is an isomorphism. It follows that
+in the diagram
+$$
+\xymatrix{
+\Spec(\mathcal{O}_{X, \xi}) \times_X U \ar[d] \ar[r] & U \ar[d] \\
+\Spec(\mathcal{O}_{Y, f(\xi)}) \times_Y V \ar[r] & V
+}
+$$
the vertical arrow on the left is an isomorphism. The horizontal arrows
+have image contained in the schematic locus of $U$ and $V$ and
+identify local rings (some details omitted). Since the image of
+the horizontal arrows are the points of $|U|$, resp.\ $|V|$
+lying over $\xi$, resp.\ $f(\xi)$ we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-birational-induced-morphism-normalizations}
+Let $S$ be a scheme. Let $f : X \to Y$ be a birational morphism between
+algebraic spaces over $S$ which are decent and have finitely many irreducible
+components. Then the normalizations $X^\nu \to X$ and $Y^\nu \to Y$ exist
+and there is a commutative diagram
+$$
+\xymatrix{
+X^\nu \ar[r] \ar[d] & Y^\nu \ar[d] \\
+X \ar[r] & Y
+}
+$$
+of algebraic spaces over $S$. The morphism $X^\nu \to Y^\nu$ is birational.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finitely-many-irreducible-components} we see that
+$X$ and $Y$ satisfy the equivalent conditions of
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-prepare-normalization}
+and the normalizations are defined. By
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-normalization-normal}
+the algebraic space $X^\nu$ is normal and maps codimension $0$ points
+to codimension $0$ points. Since $f$ maps codimension $0$ points to
+codimension $0$ points (this is the same as generic points on decent
+spaces by Lemma \ref{lemma-decent-generic-points})
+we obtain from
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-normalization-normal}
+a factorization of the composition $X^\nu \to X \to Y$ through $Y^\nu$.
+
+\medskip\noindent
+Observe that $X^\nu$ and $Y^\nu$ are decent for example by
+Lemma \ref{lemma-representable-named-properties}.
+Moreover the maps $X^\nu \to X$ and $Y^\nu \to Y$
+induce bijections on irreducible components (see references above)
+hence $X^\nu$ and $Y^\nu$ both have a finite number of irreducible
+components and the map $X^\nu \to Y^\nu$ induces a bijection
+between their generic points.
+To prove that $X^\nu \to Y^\nu$ is birational, it therefore
+suffices to show it induces an isomorphism on local rings at
+these points. To do this we may replace $X$ and $Y$ by open neighbourhoods
+of their generic points, hence we may assume $X$ and $Y$ are affine
+irreducible schemes with generic points $x$ and $y$. Since
$f$ is birational the map $\mathcal{O}_{Y, y} \to \mathcal{O}_{X, x}$
is an isomorphism. Let $x^\nu \in X^\nu$ and $y^\nu \in Y^\nu$ be
+the points lying over $x$ and $y$.
+By construction of the normalization
+we see that $\mathcal{O}_{X^\nu, x^\nu} = \mathcal{O}_{X, x}/\mathfrak m_x$
+and similarly on $Y$. Thus the map
+$\mathcal{O}_{X^\nu, x^\nu} \to \mathcal{O}_{Y^\nu, y^\nu}$
+is an isomorphism as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-birational-over-normal}
+Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic
+spaces over $S$. Assume
+\begin{enumerate}
+\item $X$ and $Y$ are decent and have finitely many irreducible components,
+\item $f$ is integral and birational,
+\item $Y$ is normal, and
+\item $X$ is reduced.
+\end{enumerate}
+Then $f$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Let $V \to Y$ be an \'etale morphism with $V$ affine. It suffices to show that
+$U = X \times_Y V \to V$ is an isomorphism. By
+Lemma \ref{lemma-birational-etale-localization} and its proof
+we see that $U$ and $V$ are decent and have finitely many
+irreducible components and that $U \to V$ is birational.
+By Properties, Lemma
+\ref{properties-lemma-normal-locally-finite-nr-irreducibles}
+$V$ is a finite disjoint union of integral schemes.
+Thus we may assume $V$ is integral. As $f$ is birational, we
+see that $U$ is irreducible and reduced, i.e., integral
+(note that $U$ is a scheme as $f$ is integral, hence representable).
+Thus we may assume that $X$ and $Y$ are integral schemes
+and the result follows from the case of schemes, see
+Morphisms, Lemma \ref{morphisms-lemma-finite-birational-over-normal}.
+\end{proof}
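
\medskip\noindent
The normality of $Y$ cannot be omitted in the lemma above: with $k$ a
field of characteristic $\not= 2$, the normalization
$\nu : \mathbf{A}^1_k \to Y$ of the nodal cubic
$$
Y = \Spec(k[x, y]/(y^2 - x^2(x + 1)))
$$
is finite, hence integral, and birational, with source normal and
reduced, but $\nu$ is not an isomorphism because it is not injective
over the node. Here it is the source, not the target, which is normal.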
+
+\begin{lemma}
+\label{lemma-normalization-normal}
+Let $S$ be a scheme. Let $f : X \to Y$ be an integral birational morphism of
+decent algebraic spaces over $S$ which have finitely many irreducible
+components. Then there exists a factorization $Y^\nu \to X \to Y$ and
+$Y^\nu \to X$ is the normalization of $X$.
+\end{lemma}
+
+\begin{proof}
+Consider the map $X^\nu \to Y^\nu$ of
+Lemma \ref{lemma-birational-induced-morphism-normalizations}.
+This map is integral by
+Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-finite-permanence}.
+Hence it is an isomorphism by
+Lemma \ref{lemma-finite-birational-over-normal}.
+\end{proof}
+
+
+
+
+\section{Jacobson spaces}
+\label{section-jacobson}
+
+\noindent
+We have defined the Jacobson property for algebraic spaces in
+Properties of Spaces, Remark
+\ref{spaces-properties-remark-list-properties-local-etale-topology}.
+For representable algebraic spaces it agrees with the property discussed in
+Properties, Section \ref{properties-section-jacobson}.
+The relationship between the Jacobson property and the behaviour of
the topological space $|X|$ is not evident for general algebraic spaces $X$.
+However, a decent (for example quasi-separated or locally separated)
+algebraic space $X$ is Jacobson if and only if $|X|$ is Jacobson
+(see Lemma \ref{lemma-decent-Jacobson}).
+
+\begin{lemma}
+\label{lemma-Jacobson-universally-Jacobson}
+Let $S$ be a scheme. Let $X$ be a Jacobson algebraic space over $S$.
+Any algebraic space locally of finite type over $X$ is Jacobson.
+\end{lemma}
+
+\begin{proof}
+Let $U \to X$ be a surjective \'etale morphism where $U$ is a scheme.
+Then $U$ is Jacobson (by definition) and for a morphism of schemes $V \to U$
+which is locally of finite type we see that $V$ is Jacobson by the
+corresponding result for schemes (Morphisms, Lemma
+\ref{morphisms-lemma-Jacobson-universally-Jacobson}).
+Thus if $Y \to X$ is a morphism of algebraic spaces which is locally
+of finite type, then setting $V = U \times_X Y$ we see that
+$Y$ is Jacobson by definition.
+\end{proof}
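
\medskip\noindent
For example, every algebraic space locally of finite type over a field
or over $\Spec(\mathbf{Z})$ is Jacobson, since fields and $\mathbf{Z}$
are Jacobson rings. By contrast the scheme
$$
\Spec(k[[t]]) = \{(0), (t)\}
$$
is not Jacobson, as its unique closed point $(t)$ is not dense.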
+
+\begin{lemma}
+\label{lemma-Jacobson-ft-points-lift-to-closed}
+Let $S$ be a scheme. Let $X$ be a Jacobson algebraic space over $S$.
+For $x \in X_{\text{ft-pts}}$ and $g : W \to X$ locally of finite type
+with $W$ a scheme, if $x \in \Im(|g|)$, then there exists a closed
+point of $W$ mapping to $x$.
+\end{lemma}
+
+\begin{proof}
+Let $U \to X$ be an \'etale morphism with $U$ a scheme and with $u \in U$
+closed mapping to $x$, see
+Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-identify-finite-type-points}.
+Observe that $W$, $W \times_X U$, and $U$ are Jacobson schemes
+by Lemma \ref{lemma-Jacobson-universally-Jacobson}.
+Hence finite type points on these schemes
+are the same thing as closed points by
+Morphisms, Lemma \ref{morphisms-lemma-jacobson-finite-type-points}.
+The inverse image $T \subset W \times_X U$ of $u$ is a nonempty
(as $x$ is in the image of $W \to X$) closed subset.
+By Morphisms, Lemma \ref{morphisms-lemma-enough-finite-type-points}
+there is a closed point $t$ of $W \times_X U$ which maps to $u$.
+As $W \times_X U \to W$ is locally of finite type
+the image of $t$ in $W$ is closed by
+Morphisms, Lemma \ref{morphisms-lemma-jacobson-finite-type-points}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-decent-Jacobson-ft-pts}
+Let $S$ be a scheme. Let $X$ be a decent Jacobson algebraic space over $S$.
+Then $X_{\text{ft-pts}} \subset |X|$ is the set of closed points.
+\end{lemma}
+
+\begin{proof}
+If $x \in |X|$ is closed, then we can represent $x$ by a closed
+immersion $\Spec(k) \to X$, see Lemma \ref{lemma-decent-space-closed-point}.
+Hence $x$ is certainly a finite type point.
+
+\medskip\noindent
+Conversely, let $x \in |X|$ be a finite type point. We know that
+$x$ can be represented by a quasi-compact monomorphism
+$\Spec(k) \to X$ where $k$ is a field
+(Definition \ref{definition-very-reasonable}). On the other hand,
+by definition, there exists a morphism $\Spec(k') \to X$
+which is locally of finite type and represents $x$
+(Morphisms, Definition \ref{morphisms-definition-finite-type-point}).
+We obtain a factorization $\Spec(k') \to \Spec(k) \to X$.
+Let $U \to X$ be any \'etale morphism with $U$ affine and consider
+the morphisms
+$$
+\Spec(k') \times_X U \to \Spec(k) \times_X U \to U
+$$
+The quasi-compact scheme $\Spec(k) \times_X U$ is \'etale over
+$\Spec(k)$ hence is a finite disjoint union
+of spectra of fields (Remark \ref{remark-recall}).
+Moreover, the first morphism is surjective and locally of finite type
+(Morphisms, Lemma \ref{morphisms-lemma-permanence-finite-type})
+hence surjective on finite type points
+(Morphisms, Lemma \ref{morphisms-lemma-finite-type-points-surjective-morphism})
+and the composition (which is locally of finite type) sends
+finite type points to closed points as $U$ is Jacobson
+(Morphisms, Lemma \ref{morphisms-lemma-jacobson-finite-type-points}).
+Thus the image of
+$\Spec(k) \times_X U \to U$ is a finite set of closed points hence
+closed. Since this is true for every affine $U$ and \'etale morphism
+$U \to X$, we conclude that $x \in |X|$ is closed.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-decent-Jacobson}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Then $X$ is Jacobson if and only if $|X|$ is Jacobson.
+\end{lemma}
+
+\begin{proof}
+Assume $X$ is Jacobson and that $T \subset |X|$ is a closed subset.
+By Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-enough-finite-type-points}
+we see that $T \cap X_{\text{ft-pts}}$ is dense in $T$.
+By Lemma \ref{lemma-decent-Jacobson-ft-pts} we see that
+$X_{\text{ft-pts}}$ are the
+closed points of $|X|$. Thus $|X|$ is indeed Jacobson.
+
+\medskip\noindent
+Assume $|X|$ is Jacobson. Let $f : U \to X$ be an \'etale
+morphism with $U$ an affine scheme. We have to show that $U$
+is Jacobson. If $x \in |X|$ is closed,
+then the fibre $F = f^{-1}(\{x\})$ is a finite (by definition of
+decent) closed (by construction of the topology on $|X|$) subset of $U$.
+Since there are no specializations between points of $F$
+(Lemma \ref{lemma-decent-no-specializations-map-to-same-point})
+we conclude that every point of $F$ is closed in $U$.
+If $U$ is not Jacobson, then there exists a non-closed point
+$u \in U$ such that $\{u\}$ is locally closed (Topology, Lemma
+\ref{topology-lemma-non-jacobson-Noetherian-characterize}).
+We will show that $f(u) \in |X|$ is closed; by the above $u$
+is closed in $U$ which is a contradiction and finishes
+the proof. To prove this we may replace $U$ by an affine open
+neighbourhood of $u$.
+Thus we may assume that $\{u\}$ is closed in $U$.
+Let $R = U \times_X U$ with projections $s, t : R \to U$.
+Then $s^{-1}(\{u\}) = \{r_1, \ldots, r_m\}$ is finite (by
+definition of decent spaces). After replacing $U$ by a smaller affine
+open neighbourhood of $u$ we may assume that $t(r_j) = u$ for
+$j = 1, \ldots, m$. It follows that $\{u\}$ is an $R$-invariant
+closed subset of $U$. Hence $\{f(u)\}$ is a locally closed subset
of $|X|$ as it is closed in the open $|f|(|U|)$ of $|X|$. Since $|X|$
+is Jacobson we conclude that $f(u)$ is closed in $|X|$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-punctured-spec}
+Let $S$ be a scheme. Let $X$ be a decent locally Noetherian algebraic
+space over $S$. Let $x \in |X|$. Then
+$$
+W = \{x' \in |X| : x' \leadsto x,\ x' \not = x\}
+$$
+is a Noetherian, spectral, sober, Jacobson topological space.
+\end{lemma}
+
+\begin{proof}
We may replace $X$ by any open subspace containing $x$.
+Thus we may assume that $X$ is quasi-compact.
+Then $|X|$ is a Noetherian topological space
+(Properties of Spaces, Lemma \ref{spaces-properties-lemma-Noetherian-topology}).
+Thus $W$ is a Noetherian topological space
+(Topology, Lemma \ref{topology-lemma-Noetherian}).
+
+\medskip\noindent
+Combining Lemma \ref{lemma-locally-Noetherian-decent-quasi-separated} with
+Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-quasi-compact-quasi-separated-spectral}
we see that $|X|$ is a spectral topological space.
+By Topology, Lemma \ref{topology-lemma-make-spectral-space}
+we see that $W \cup \{x\}$ is a spectral topological space.
+Now $W$ is a quasi-compact open of $W \cup \{x\}$ and hence $W$ is
+spectral by Topology, Lemma \ref{topology-lemma-spectral-sub}.
+
+\medskip\noindent
+Let $E \subset W$ be an irreducible closed subset. Then if $Z \subset |X|$
+is the closure of $E$ we see that $x \in Z$. There is a unique generic
+point $\eta \in Z$ by Proposition \ref{proposition-reasonable-sober}.
+Of course $\eta \in W$ and hence $\eta \in E$. We conclude that $E$
+has a unique generic point, i.e., $W$ is sober.
+
+\medskip\noindent
+Let $x' \in W$ be a point such that $\{x'\}$ is locally closed in $W$.
+To finish the proof we have to show that $x'$ is a closed point of $W$.
+If not, then there exists a nontrivial specialization $x' \leadsto x'_1$
+in $W$. Let $U$ be an affine scheme, $u \in U$ a point, and let $U \to X$
+be an \'etale morphism mapping $u$ to $x$. By
+Lemma \ref{lemma-decent-specialization}
+we can choose specializations $u' \leadsto u'_1 \leadsto u$
+mapping to $x' \leadsto x'_1 \leadsto x$.
+Let $\mathfrak p' \subset \mathcal{O}_{U, u}$ be the prime ideal
+corresponding to $u'$. The existence of the specializations
+implies that $\dim(\mathcal{O}_{U, u}/\mathfrak p') \geq 2$.
+Hence every nonempty open of $\Spec(\mathcal{O}_{U, u}/\mathfrak p')$
+is infinite by Algebra, Lemma
+\ref{algebra-lemma-Noetherian-local-domain-dim-2-infinite-opens}.
+By Lemma \ref{lemma-decent-no-specializations-map-to-same-point}
+we obtain a continuous map
+$$
+\Spec(\mathcal{O}_{U, u}/\mathfrak p')
+\setminus \{\mathfrak m_u/\mathfrak p'\}
+\longrightarrow
+W
+$$
+Since the generic point of the LHS maps to $x'$ the image is
+contained in $\overline{\{x'\}}$. We conclude that the inverse image of
+$\{x'\}$ under the displayed arrow is a nonempty open, hence infinite.
+However, this inverse image is contained in the fibre of $U \to X$
+over $x'$, which is finite as $X$ is decent.
+This contradiction finishes the proof.
+\end{proof}
+
+
+
+
+
+
+\section{Local irreducibility}
+\label{section-irreducible-local-ring}
+
+\noindent
+We have already defined the geometric number of branches of an
+algebraic space at a point in Properties of Spaces, Section
+\ref{spaces-properties-section-irreducible-local-ring}.
+The number of branches of an algebraic space at a point can only
+be defined for decent algebraic spaces.
+
+\begin{lemma}
+\label{lemma-irreducible-local-ring}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$ be a point. The following are equivalent
+\begin{enumerate}
+\item for any elementary \'etale neighbourhood $(U, u) \to (X, x)$
+the local ring $\mathcal{O}_{U, u}$ has a unique minimal prime,
+\item for any elementary \'etale neighbourhood $(U, u) \to (X, x)$
+there is a unique irreducible component of $U$ through $u$,
+\item for any elementary \'etale neighbourhood $(U, u) \to (X, x)$
+the local ring $\mathcal{O}_{U, u}$ is unibranch,
+\item the henselian local ring
+$\mathcal{O}_{X, x}^h$ has a unique minimal prime.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) follows from the fact that irreducible
+components of $U$ passing through $u$ are in $1$-$1$ correspondence with
+minimal primes of the local ring of $U$ at $u$. The ring
+$\mathcal{O}_{X, x}^h$ is the henselization of $\mathcal{O}_{U, u}$, see
+discussion following Definition \ref{definition-henselian-local-ring}.
+In particular (3) and (4) are equivalent by
+More on Algebra, Lemma \ref{more-algebra-lemma-unibranch}.
+The equivalence of (2) and (3) follows from
+More on Morphisms, Lemma \ref{more-morphisms-lemma-nr-branches}.
+\end{proof}
+
+\begin{definition}
+\label{definition-unibranch}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$. We say that $X$ is {\it unibranch at $x$}
+if the equivalent conditions of
+Lemma \ref{lemma-irreducible-local-ring} hold.
+We say that $X$ is {\it unibranch} if $X$ is
+unibranch at every $x \in |X|$.
+\end{definition}
+
+\noindent
+This is consistent with the definition for schemes
+(Properties, Definition \ref{properties-definition-unibranch}).
+
+\begin{lemma}
+\label{lemma-nr-branches-local-ring}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$ be a point. Let $n \in \{1, 2, \ldots\}$ be an integer.
+The following are equivalent
+\begin{enumerate}
+\item for any elementary \'etale neighbourhood $(U, u) \to (X, x)$
+the number of minimal primes of the local ring $\mathcal{O}_{U, u}$
+is $\leq n$ and for at least one choice of $(U, u)$ it is $n$,
+\item for any elementary \'etale neighbourhood $(U, u) \to (X, x)$
+the number of irreducible components of $U$ passing through $u$ is $\leq n$
+and for at least one choice of $(U, u)$ it is $n$,
+\item for any elementary \'etale neighbourhood $(U, u) \to (X, x)$
+the number of branches of $U$ at $u$ is $\leq n$
+and for at least one choice of $(U, u)$ it is $n$,
+\item the number of minimal prime ideals of
+$\mathcal{O}_{X, x}^h$ is $n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) follows from the fact that irreducible
+components of $U$ passing through $u$ are in $1$-$1$ correspondence with
+minimal primes of the local ring of $U$ at $u$.
+The ring $\mathcal{O}_{X, x}^h$ is the henselization of $\mathcal{O}_{U, u}$, see
+discussion following Definition \ref{definition-henselian-local-ring}.
+In particular (3) and (4) are equivalent by
+More on Algebra, Lemma \ref{more-algebra-lemma-unibranch}.
+The equivalence of (2) and (3) follows from
+More on Morphisms, Lemma \ref{more-morphisms-lemma-nr-branches}.
+\end{proof}
+
+\begin{definition}
+\label{definition-number-of-branches}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+Let $x \in |X|$. The {\it number of branches of $X$ at $x$} is
+either $n \in \mathbf{N}$ if the equivalent conditions
+of Lemma \ref{lemma-nr-branches-local-ring}
+hold, or else $\infty$.
+\end{definition}
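+
+\noindent
+For example (a standard example, not part of the definitions above; we
+assume the characteristic of the field $k$ is not $2$), let
+$X = \Spec(k[x, y]/(y^2 - x^2 - x^3))$ be the plane nodal curve and let
+$x_0 \in |X|$ be the origin. In the henselian local ring
+$\mathcal{O}_{X, x_0}^h$ the unit $1 + x$ has a square root $u$ and
+$$
+y^2 - x^2 - x^3 = (y - xu)(y + xu)
+$$
+Thus $\mathcal{O}_{X, x_0}^h$ has exactly two minimal primes and the
+number of branches of $X$ at $x_0$ is $2$.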
+
+
+
+
+
+
+\section{Catenary algebraic spaces}
+\label{section-catenary}
+
+\noindent
+This section extends the material in
+Properties, Section \ref{properties-section-catenary}
+and Morphisms, Section \ref{morphisms-section-universally-catenary}
+to algebraic spaces.
+
+\begin{definition}
+\label{definition-catenary}
+Let $S$ be a scheme. Let $X$ be a decent algebraic space over $S$.
+We say $X$ is {\it catenary} if $|X|$ is catenary
+(Topology, Definition \ref{topology-definition-catenary}).
+\end{definition}
+
+\noindent
+If $X$ is representable, then this is equivalent to the corresponding notion
+for the scheme representing $X$.
+
+\begin{lemma}
+\label{lemma-scheme-with-dimension-function}
+Let $S$ be a locally Noetherian and universally catenary scheme.
+Let $\delta : S \to \mathbf{Z}$ be a dimension function.
+Let $X$ be a decent algebraic space over $S$ such that
+the structure morphism $f : X \to S$ is locally of
+finite type. Let $\delta_X : |X| \to \mathbf{Z}$ be the map
+sending $x$ to $\delta(f(x))$ plus the transcendence degree
+of $x/f(x)$. Then $\delta_X$ is a dimension function on $|X|$.
+\end{lemma}
+
+\begin{proof}
+Let $\varphi : U \to X$ be a surjective \'etale morphism where $U$ is a scheme.
+Then the similarly defined function $\delta_U$ is a
+dimension function on $U$ by
+Morphisms, Lemma \ref{morphisms-lemma-dimension-function-propagates}.
+On the other hand, by the definition of relative transcendence degree in
+(Morphisms of Spaces, Definition
+\ref{spaces-morphisms-definition-dimension-fibre}) we see
+that $\delta_U(u) = \delta_X(\varphi(u))$.
+
+\medskip\noindent
+Let $x \leadsto x'$ be a specialization of points in $|X|$.
+By Lemma \ref{lemma-decent-specialization} we can find
+a specialization $u \leadsto u'$ of points of $U$ with
+$\varphi(u) = x$ and $\varphi(u') = x'$. Moreover, we see
+that $x = x'$ if and only if $u = u'$, see
+Lemma \ref{lemma-decent-no-specializations-map-to-same-point}.
+Thus the fact that $\delta_U$ is a dimension function implies that
+$\delta_X$ is a dimension function, see
+Topology, Definition \ref{topology-definition-dimension-function}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-catenary-scheme}
+Let $S$ be a locally Noetherian and universally catenary scheme.
+Let $X$ be an algebraic space over $S$ such that $X$ is decent
+and such that the structure morphism $X \to S$ is locally of
+finite type. Then $X$ is catenary.
+\end{lemma}
+
+\begin{proof}
+The question is local on $S$ (use
+Topology, Lemma \ref{topology-lemma-catenary}).
+Thus we may assume that $S$ has a
+dimension function, see Topology, Lemma
+\ref{topology-lemma-locally-dimension-function}.
+Then we conclude that $|X|$ has a dimension function by
+Lemma \ref{lemma-scheme-with-dimension-function}.
+Since $|X|$ is sober (Proposition \ref{proposition-reasonable-sober})
+we conclude that $|X|$ is catenary by
+Topology, Lemma \ref{topology-lemma-dimension-function-catenary}.
+\end{proof}
+
+\noindent
+By Lemma \ref{lemma-universally-catenary-scheme}
+the following definition is compatible with the
+already existing notion for representable algebraic spaces.
+
+\begin{definition}
+\label{definition-universally-catenary}
+Let $S$ be a scheme. Let $X$ be a decent and locally Noetherian
+algebraic space over $S$. We say $X$ is {\it universally catenary}
+if for every morphism $Y \to X$ of algebraic spaces which is
+locally of finite type and with $Y$ decent, the algebraic space
+$Y$ is catenary.
+\end{definition}
+
+\noindent
+If $X$ is an algebraic space, then the condition
+``$X$ is decent and locally Noetherian'' is equivalent to
+``$X$ is quasi-separated and locally Noetherian''. This is
+Lemma \ref{lemma-locally-Noetherian-decent-quasi-separated}.
+Thus another way to understand the definition above is that $X$
+is universally catenary if and only if $Y$ is catenary for
+all morphisms $Y \to X$ which are quasi-separated and locally of finite type.
+
+\begin{lemma}
+\label{lemma-universally-catenary}
+Let $S$ be a scheme. Let $X$ be a decent, locally Noetherian, and
+universally catenary algebraic space over $S$. Then any decent algebraic
+space locally of finite type over $X$ is universally catenary.
+\end{lemma}
+
+\begin{proof}
+This is formal from the definitions and the fact that
+compositions of morphisms locally of finite type are
+locally of finite type (Morphisms of Spaces, Lemma
+\ref{spaces-morphisms-lemma-composition-finite-type}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-dimension-function-finite-cover}
+Let $S$ be a scheme. Let $f : Y \to X$ be a surjective finite morphism of
+decent and locally Noetherian algebraic spaces. Let
+$\delta : |X| \to \mathbf{Z}$ be a function. If $\delta \circ |f|$ is a
+dimension function, then $\delta$ is a dimension function.
+\end{lemma}
+
+\begin{proof}
+Let $x \leadsto x'$, $x \not = x'$ be a specialization in $|X|$.
+Choose $y \in |Y|$ with $|f|(y) = x$. Since $|f|$ is closed
+(Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-finite-proper})
+we find a specialization $y \leadsto y'$ with $|f|(y') = x'$.
+Thus we conclude that
+$\delta(x) = \delta(|f|(y)) > \delta(|f|(y')) = \delta(x')$
+(see Topology, Definition \ref{topology-definition-dimension-function}).
+If $x \leadsto x'$ is an immediate specialization, then
+$y \leadsto y'$ is an immediate specialization too:
+namely if $y \leadsto y'' \leadsto y'$, then $|f|(y'')$
+must be either $x$ or $x'$ and there are no nontrivial
+specializations between points of fibres of $|f|$ by
+Lemma \ref{lemma-conditions-on-fibre-and-qf}.
+Hence $\delta(x) = \delta(|f|(y)) = \delta(|f|(y')) + 1 = \delta(x') + 1$
+in this case, as desired.
+\end{proof}
+
+\noindent
+The discussion will be continued in
+More on Morphisms of Spaces, Section
+\ref{spaces-more-morphisms-section-catenary}.
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/defos.tex b/books/stacks/defos.tex
new file mode 100644
index 0000000000000000000000000000000000000000..97d69916b0c9172312d0e024b3ceba8a348bc564
--- /dev/null
+++ b/books/stacks/defos.tex
@@ -0,0 +1,5718 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Deformation Theory}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+The goal of this chapter is to give a (relatively) gentle introduction to
+deformation theory of modules, morphisms, etc. In this chapter we deal with
+those results that can be proven using the naive cotangent complex. In
+the chapter on the cotangent complex we will extend these results a little
+bit. The advanced reader may wish to consult the treatise by Illusie on this
+subject, see \cite{cotangent}.
+
+
+
+
+
+\section{Deformations of rings and the naive cotangent complex}
+\label{section-deformations}
+
+\noindent
+In this section we use the naive cotangent complex to do a little bit
+of deformation theory. We start with a surjective ring map $A' \to A$
+whose kernel is an ideal $I$ of square zero. Moreover we assume
+given a ring map $A \to B$, a $B$-module $N$, and an $A$-module map
+$c : I \to N$. In this section we ask ourselves whether we can find
+the question mark fitting into the following diagram
+\begin{equation}
+\label{equation-to-solve}
+\vcenter{
+\xymatrix{
+0 \ar[r] & N \ar[r] & {?} \ar[r] & B \ar[r] & 0 \\
+0 \ar[r] & I \ar[u]^c \ar[r] & A' \ar[u] \ar[r] & A \ar[u] \ar[r] & 0
+}
+}
+\end{equation}
+and moreover how unique the solution is (if it exists). More precisely,
+we look for a surjection of $A'$-algebras $B' \to B$ whose kernel is
+an ideal of square zero and is
+identified with $N$ such that $A' \to B'$ induces the given map $c$.
+We will say $B'$ is a {\it solution} to (\ref{equation-to-solve}).
+
+\begin{lemma}
+\label{lemma-huge-diagram}
+Given a commutative diagram
+$$
+\xymatrix{
+& 0 \ar[r] & N_2 \ar[r] & B'_2 \ar[r] & B_2 \ar[r] & 0 \\
+& 0 \ar[r]|\hole & I_2 \ar[u]_{c_2} \ar[r] &
+A'_2 \ar[u] \ar[r]|\hole & A_2 \ar[u] \ar[r] & 0 \\
+0 \ar[r] & N_1 \ar[ruu] \ar[r] & B'_1 \ar[r] & B_1 \ar[ruu] \ar[r] & 0 \\
+0 \ar[r] & I_1 \ar[ruu]|\hole \ar[u]^{c_1} \ar[r] &
+A'_1 \ar[ruu]|\hole \ar[u] \ar[r] & A_1 \ar[ruu]|\hole \ar[u] \ar[r] & 0
+}
+$$
+with front and back solutions to (\ref{equation-to-solve}) we have
+\begin{enumerate}
+\item There exists a canonical element in
+$\Ext^1_{B_1}(\NL_{B_1/A_1}, N_2)$
+whose vanishing is a necessary and sufficient condition for the existence
+of a ring map $B'_1 \to B'_2$ fitting into the diagram.
+\item If there exists a map $B'_1 \to B'_2$ fitting into the diagram
+the set of all such maps is a principal homogeneous space under
+$\Hom_{B_1}(\Omega_{B_1/A_1}, N_2)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $E = B_1$ viewed as a set.
+Consider the surjection $A_1[E] \to B_1$ with kernel $J$ used
+to define the naive cotangent complex by the formula
+$$
+\NL_{B_1/A_1} = (J/J^2 \to \Omega_{A_1[E]/A_1} \otimes_{A_1[E]} B_1)
+$$
+in
+Algebra, Section \ref{algebra-section-netherlander}.
+Since $\Omega_{A_1[E]/A_1} \otimes B_1$ is a free
+$B_1$-module we have
+$$
+\Ext^1_{B_1}(\NL_{B_1/A_1}, N_2) =
+\frac{\Hom_{B_1}(J/J^2, N_2)}
+{\Hom_{B_1}(\Omega_{A_1[E]/A_1} \otimes B_1, N_2)}
+$$
+We will construct an obstruction in the module on the right.
+Let $J' = \Ker(A'_1[E] \to B_1)$. Note that there is a surjection
+$J' \to J$ whose kernel is $I_1A_1[E]$.
+For every $e \in E$ denote $x_e \in A_1[E]$ the corresponding variable.
+Choose a lift $y_e \in B'_1$ of the image of $x_e$ in $B_1$ and
+a lift $z_e \in B'_2$ of the image of $x_e$ in $B_2$.
+These choices determine $A'_1$-algebra maps
+$$
+A'_1[E] \to B'_1 \quad\text{and}\quad A'_1[E] \to B'_2
+$$
+The first of these gives a map $J' \to N_1$, $f' \mapsto f'(y_e)$
+and the second gives a map $J' \to N_2$, $f' \mapsto f'(z_e)$.
+A calculation shows that these maps annihilate $(J')^2$.
+Because the left square of the diagram (involving $c_1$ and $c_2$)
+commutes we see that these maps agree on $I_1A_1[E]$ as maps into $N_2$.
+Observe that $B'_1$ is the pushout of $J' \to A'_1[E]$ and $J' \to N_1$.
+Thus, if the maps $J' \to N_1 \to N_2$ and $J' \to N_2$ agree, then we
+obtain a map $B'_1 \to B'_2$ fitting into the diagram.
+Thus we let the obstruction be the class of the map
+$$
+J/J^2 \to N_2,\quad f \mapsto f'(z_e) - \nu(f'(y_e))
+$$
+where $\nu : N_1 \to N_2$ is the given map and where $f' \in J'$
+is a lift of $f$. This is well defined by our remarks above.
+Note that we have the freedom
+to modify our choices of $z_e$ into $z_e + \delta_{2, e}$
+and $y_e$ into $y_e + \delta_{1, e}$ for some $\delta_{i, e} \in N_i$.
+This will modify the map above into
+$$
+f \mapsto f'(z_e + \delta_{2, e}) - \nu(f'(y_e + \delta_{1, e})) =
+f'(z_e) - \nu(f'(y_e)) +
+\sum (\delta_{2, e} - \nu(\delta_{1, e}))\frac{\partial f}{\partial x_e}
+$$
+This means exactly that we are modifying the map $J/J^2 \to N_2$
+by the composition $J/J^2 \to \Omega_{A_1[E]/A_1} \otimes B_1 \to N_2$
+where the second map sends $\text{d}x_e$ to
+$\delta_{2, e} - \nu(\delta_{1, e})$. Thus our obstruction is well defined
+and is zero if and only if a lift exists.
+
+\medskip\noindent
+Part (2) comes from the observation that given two maps
+$\varphi, \psi : B'_1 \to B'_2$ fitting into the diagram, then
+$\varphi - \psi$ factors through a map $D : B_1 \to N_2$ which
+is an $A_1$-derivation:
+\begin{align*}
+D(fg) & = \varphi(f'g') - \psi(f'g') \\
+& =
+\varphi(f')\varphi(g') - \psi(f')\psi(g') \\
+& =
+(\varphi(f') - \psi(f'))\varphi(g') + \psi(f')(\varphi(g') - \psi(g')) \\
+& =
+gD(f) + fD(g)
+\end{align*}
+Thus $D$ corresponds to a unique $B_1$-linear map
+$\Omega_{B_1/A_1} \to N_2$. Conversely, given such a linear map
+we get a derivation $D$ and given a ring map $\psi : B'_1 \to B'_2$
+fitting into the diagram
+the map $\psi + D$ is another ring map fitting into the diagram.
+\end{proof}
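+
+\noindent
+Here is a sketch, left implicit in the proof above, of why the difference
+$\varphi - \psi$ gives a well defined map $D : B_1 \to N_2$. For $f \in B_1$
+choose a lift $f' \in B'_1$ and set
+$$
+D(f) = \varphi(f') - \psi(f')
+$$
+Since $\varphi$ and $\psi$ both fit into the diagram, the element
+$\varphi(f') - \psi(f')$ maps to zero in $B_2$ and hence lies in $N_2$.
+Two lifts of $f$ differ by an element $n \in N_1$, and
+$\varphi(n) = \psi(n) = \nu(n)$ where $\nu : N_1 \to N_2$ is the given map,
+so $D(f)$ does not depend on the choice of $f'$.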
+
+\begin{lemma}
+\label{lemma-choices}
+If there exists a solution to (\ref{equation-to-solve}), then the set of
+isomorphism classes of solutions is principal homogeneous under
+$\Ext^1_B(\NL_{B/A}, N)$.
+\end{lemma}
+
+\begin{proof}
+We observe right away that given two solutions $B'_1$ and $B'_2$
+to (\ref{equation-to-solve}) we obtain by Lemma \ref{lemma-huge-diagram} an
+obstruction element $o(B'_1, B'_2) \in \Ext^1_B(\NL_{B/A}, N)$
+to the existence of a map $B'_1 \to B'_2$. Clearly, this element
+is the obstruction to the existence of an isomorphism, hence separates
+the isomorphism classes. To finish the proof it therefore suffices to
+show that given a solution $B'$ and an element
+$\xi \in \Ext^1_B(\NL_{B/A}, N)$
+we can find a second solution $B'_\xi$ such that
+$o(B', B'_\xi) = \xi$.
+
+\medskip\noindent
+Let $E = B$ viewed as a set. Consider the surjection $A[E] \to B$ with kernel
+$J$ used to define the naive cotangent complex by the formula
+$$
+\NL_{B/A} = (J/J^2 \to \Omega_{A[E]/A} \otimes_{A[E]} B)
+$$
+in Algebra, Section \ref{algebra-section-netherlander}.
+Since $\Omega_{A[E]/A} \otimes B$ is a free $B$-module we have
+$$
+\Ext^1_B(\NL_{B/A}, N) =
+\frac{\Hom_B(J/J^2, N)}
+{\Hom_B(\Omega_{A[E]/A} \otimes B, N)}
+$$
+Thus we may represent $\xi$ as the class of a morphism $\delta : J/J^2 \to N$.
+
+\medskip\noindent
+For every $e \in E$ denote $x_e \in A[E]$ the corresponding variable.
+Choose a lift $y_e \in B'$ of the image of $x_e$ in $B$.
+These choices determine an $A'$-algebra map $\varphi : A'[E] \to B'$.
+Let $J' = \Ker(A'[E] \to B)$. Observe that $\varphi$ induces a map
+$\varphi|_{J'} : J' \to N$ and that $B'$ is the pushout, as in the following
+diagram
+$$
+\xymatrix{
+0 \ar[r] & N \ar[r] & B' \ar[r] & B \ar[r] & 0 \\
+0 \ar[r] & J' \ar[u]^{\varphi|_{J'}} \ar[r] & A'[E] \ar[u] \ar[r] &
+B \ar[u]_{=} \ar[r] & 0
+}
+$$
+Let $\psi : J' \to N$ be the sum of the map $\varphi|_{J'}$ and the
+composition
+$$
+J' \to J'/(J')^2 \to J/J^2 \xrightarrow{\delta} N.
+$$
+Then the pushout along $\psi$ is another ring extension $B'_\xi$
+fitting into a diagram as above. A calculation shows that
+$o(B', B'_\xi) = \xi$ as desired.
+\end{proof}
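+
+\noindent
+The calculation at the end of the proof can be sketched as follows, with
+the same notation and with $\nu = \text{id}_N$ in
+Lemma \ref{lemma-huge-diagram}. The image $z_e \in B'_\xi$ of $x_e$ is a
+lift of the image of $x_e$ in $B$, and for $f' \in J'$ the image of $f'$
+in $B'_\xi$ is $\psi(f')$ because $B'_\xi$ is the pushout along $\psi$.
+Hence the obstruction of Lemma \ref{lemma-huge-diagram} is the class of
+$$
+f \mapsto f'(z_e) - f'(y_e) = \psi(f') - \varphi|_{J'}(f') =
+\delta(f \bmod J^2)
+$$
+which is exactly $\xi$.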
+
+\begin{lemma}
+\label{lemma-extensions-of-algebras}
+Let $A$ be a ring. Let $B$ be an $A$-algebra. Let $N$ be a $B$-module.
+The set of isomorphism classes of extensions of $A$-algebras
+$$
+0 \to N \to B' \to B \to 0
+$$
+where $N$ is an ideal of square zero is canonically bijective to
+$\Ext^1_B(\NL_{B/A}, N)$.
+\end{lemma}
+
+\begin{proof}
+To prove this we apply the previous results to the case where
+(\ref{equation-to-solve}) is given by the diagram
+$$
+\xymatrix{
+0 \ar[r] & N \ar[r] & {?} \ar[r] & B \ar[r] & 0 \\
+0 \ar[r] & 0 \ar[u] \ar[r] & A \ar[u] \ar[r]^{\text{id}} & A \ar[u] \ar[r] & 0
+}
+$$
+Thus our lemma follows from Lemma \ref{lemma-choices}
+and the fact that there exists a solution, namely $N \oplus B$.
+(See remark below for a direct construction of the bijection.)
+\end{proof}
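+
+\noindent
+Concretely (this is not spelled out above), the solution $N \oplus B$
+mentioned at the end of the proof is the $A$-algebra with underlying
+module $N \oplus B$ and multiplication
+$$
+(n, b) \cdot (m, c) = (cn + bm, bc)
+$$
+so that $N$ is an ideal of square zero and $N \oplus B \to B$ is the
+projection; its class under the bijection is
+$0 \in \Ext^1_B(\NL_{B/A}, N)$.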
+
+\begin{remark}
+\label{remark-extensions-of-algebras}
+Let $A \to B$ and $N$ be as in Lemma \ref{lemma-extensions-of-algebras}.
+Let $\alpha : P \to B$ be a presentation of $B$ over $A$, see
+Algebra, Section \ref{algebra-section-netherlander}.
+With $J = \Ker(\alpha)$ the naive cotangent complex $\NL(\alpha)$
+associated to $\alpha$ is the complex $J/J^2 \to \Omega_{P/A} \otimes_P B$.
+We have
+$$
+\Ext^1_B(\NL(\alpha), N) =
+\Coker\left(\Hom_B(\Omega_{P/A} \otimes_P B, N) \to \Hom_B(J/J^2, N)\right)
+$$
+because $\Omega_{P/A}$ is a free module.
+Consider an extension $0 \to N \to B' \to B \to 0$ as in the lemma.
+Since $P$ is a polynomial algebra over $A$ we can
+lift $\alpha$ to an $A$-algebra map $\alpha' : P \to B'$.
+Then $\alpha'|_J : J \to N$ factors as $J \to J/J^2 \to N$
+as $N$ has square zero in $B'$. The lemma sends our extension
+to the class of this map $J/J^2 \to N$ in the displayed cokernel.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-extensions-of-algebras-functorial}
+Given ring maps $A \to B \to C$, a $B$-module $M$, a $C$-module $N$,
+a $B$-linear map $c : M \to N$, and extensions of
+$A$-algebras with square zero kernels
+\begin{enumerate}
+\item[(a)] $0 \to M \to B' \to B \to 0$ corresponding to
+$\xi \in \Ext^1_B(\NL_{B/A}, M)$, and
+\item[(b)] $0 \to N \to C' \to C \to 0$ corresponding to
+$\zeta \in \Ext^1_C(\NL_{C/A}, N)$.
+\end{enumerate}
+See Lemma \ref{lemma-extensions-of-algebras}.
+Then there is an $A$-algebra map $B' \to C'$ compatible with
+$B \to C$ and $c$ if and only if $\xi$ and $\zeta$
+map to the same element of
+$\Ext^1_B(\NL_{B/A}, N)$.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense as we have the maps
+$$
+\Ext^1_B(\NL_{B/A}, M) \to \Ext^1_B(\NL_{B/A}, N)
+$$
+using the map $M \to N$ and
+$$
+\Ext^1_C(\NL_{C/A}, N) \to \Ext^1_B(\NL_{C/A}, N) \to \Ext^1_B(\NL_{B/A}, N)
+$$
+where the first arrow uses the restriction map $D(C) \to D(B)$
+and the second arrow uses the canonical map of complexes
+$\NL_{B/A} \to \NL_{C/A}$. The statement of the lemma can be deduced from
+Lemma \ref{lemma-huge-diagram} applied to the diagram
+$$
+\xymatrix{
+& 0 \ar[r] & N \ar[r] & C' \ar[r] & C \ar[r] & 0 \\
+& 0 \ar[r]|\hole & 0 \ar[u] \ar[r] &
+A \ar[u] \ar[r]|\hole & A \ar[u] \ar[r] & 0 \\
+0 \ar[r] & M \ar[ruu] \ar[r] & B' \ar[r] & B \ar[ruu] \ar[r] & 0 \\
+0 \ar[r] & 0 \ar[ruu]|\hole \ar[u] \ar[r] &
+A \ar[ruu]|\hole \ar[u] \ar[r] & A \ar[ruu]|\hole \ar[u] \ar[r] & 0
+}
+$$
+and a compatibility between the constructions in the proofs
+of Lemmas \ref{lemma-extensions-of-algebras} and \ref{lemma-huge-diagram}
+whose statement and proof we omit. (See remark below for a direct argument.)
+\end{proof}
+
+\begin{remark}
+\label{remark-extensions-of-algebras-functorial}
+Let $A \to B \to C$, $M$, $N$, $c : M \to N$,
+$0 \to M \to B' \to B \to 0$, $\xi \in \Ext^1_B(\NL_{B/A}, M)$,
+$0 \to N \to C' \to C \to 0$, and $\zeta \in \Ext^1_C(\NL_{C/A}, N)$ be as in
+Lemma \ref{lemma-extensions-of-algebras-functorial}.
+Using pushout along $c : M \to N$ we can construct an extension
+$$
+\xymatrix{
+0 \ar[r] & N \ar[r] & B'_1 \ar[r] & B \ar[r] & 0 \\
+0 \ar[r] & M \ar[u]^c \ar[r] & B' \ar[u] \ar[r] & B \ar[u] \ar[r] & 0
+}
+$$
+by setting $B'_1 = (N \times B')/M$ where $M$ is antidiagonally
+embedded. Using pullback along $B \to C$ we can construct an extension
+$$
+\xymatrix{
+0 \ar[r] & N \ar[r] & C' \ar[r] & C \ar[r] & 0 \\
+0 \ar[r] & N \ar[u] \ar[r] & B'_2 \ar[u] \ar[r] & B \ar[u] \ar[r] & 0
+}
+$$
+by setting $B'_2 = C' \times_C B$ (fibre product of rings). A simple diagram
+chase tells us that there exists an $A$-algebra map $B' \to C'$
+compatible with $B \to C$ and $c$ if and only if $B'_1$ is isomorphic
+to $B'_2$ as $A$-algebra extensions of $B$ by $N$. Thus to see
+Lemma \ref{lemma-extensions-of-algebras-functorial}
+is true, it suffices to show that $B'_1$ corresponds via the bijection of
+Lemma \ref{lemma-extensions-of-algebras}
+to the image of $\xi$ by the map
+$\Ext^1_B(\NL_{B/A}, M) \to \Ext^1_B(\NL_{B/A}, N)$
+and that $B'_2$ corresponds to the image of $\zeta$ by the map
+$\Ext^1_C(\NL_{C/A}, N) \to \Ext^1_B(\NL_{B/A}, N)$.
+The first of these two statements is immediate from the construction
+of the class in Remark \ref{remark-extensions-of-algebras}.
+For the second, choose a commutative diagram
+$$
+\xymatrix{
+Q \ar[r]_\beta & C \\
+P \ar[u]^\varphi \ar[r]^\alpha & B \ar[u]
+}
+$$
+of $A$-algebras, such that $\alpha$ is a presentation of $B$ over $A$
+and $\beta$ is a presentation of $C$ over $A$. See
+Remark \ref{remark-extensions-of-algebras} and references therein.
+Set $J = \Ker(\alpha)$ and $K = \Ker(\beta)$. The map $\varphi$
+induces a map of complexes $\NL(\alpha) \to \NL(\beta)$
+and in particular $\bar\varphi : J/J^2 \to K/K^2$.
+Choose an $A$-algebra homomorphism $\beta' : Q \to C'$
+which is a lift of $\beta$. Then
+$\alpha' = (\beta' \circ \varphi, \alpha) : P \to B'_2 = C' \times_C B$
+is a lift of $\alpha$. With these choices the composition of the map
+$K/K^2 \to N$ induced by $\beta'$ and the map $\bar\varphi : J/J^2 \to K/K^2$
+is the restriction of $\alpha'$ to $J/J^2$.
+Unwinding the constructions of our classes in
+Remark \ref{remark-extensions-of-algebras}
+this indeed shows that
+$B'_2$ corresponds to the image of $\zeta$ by the map
+$\Ext^1_C(\NL_{C/A}, N) \to \Ext^1_B(\NL_{B/A}, N)$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-parametrize-solutions}
+Let $0 \to I \to A' \to A \to 0$, $A \to B$, and $c : I \to N$ be as in
+(\ref{equation-to-solve}). Denote $\xi \in \Ext^1_A(\NL_{A/A'}, I)$
+the element corresponding to the extension $A'$ of $A$ by $I$ via
+Lemma \ref{lemma-extensions-of-algebras}. The set of isomorphism
+classes of solutions is canonically bijective to the fibre of
+$$
+\Ext^1_B(\NL_{B/A'}, N) \to \Ext^1_A(\NL_{A/A'}, N)
+$$
+over the image of $\xi$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-extensions-of-algebras} applied to $A' \to B$ and
+the $B$-module $N$ we see that elements $\zeta$ of $\Ext^1_B(\NL_{B/A'}, N)$
+parametrize extensions $0 \to N \to B' \to B \to 0$ of $A'$-algebras.
+By Lemma \ref{lemma-extensions-of-algebras-functorial} applied
+to $A' \to A \to B$ and $c : I \to N$ we see that there is an $A'$-algebra
+map $A' \to B'$ compatible with $c$ and $A \to B$ if and only if
+$\zeta$ maps to $\xi$. Of course this is the same thing as saying $B'$ is a
+solution of (\ref{equation-to-solve}).
+\end{proof}
+
+\begin{remark}
+\label{remark-parametrize-solutions}
+Observe that in the situation of Lemma \ref{lemma-parametrize-solutions}
+we have
+$$
+\Ext^1_A(\NL_{A/A'}, N) =
+\Ext^1_B(\NL_{A/A'} \otimes_A^\mathbf{L} B, N) =
+\Ext^1_B(\NL_{A/A'} \otimes_A B, N)
+$$
+The first equality by
+More on Algebra, Lemma \ref{more-algebra-lemma-tensor-hom-adjoint} and
+the second by
+More on Algebra, Lemma \ref{more-algebra-lemma-tensor-NL}.
+We have maps of complexes
+$$
+\NL_{A/A'} \otimes_A B \to \NL_{B/A'} \to \NL_{B/A}
+$$
+which is close to being a distinguished triangle, see
+Algebra, Lemma \ref{algebra-lemma-exact-sequence-NL}.
+If it were a distinguished triangle we would conclude
+that the image of $\xi$ in $\Ext^2_B(\NL_{B/A}, N)$
+would be the obstruction to the existence of a solution to
+(\ref{equation-to-solve}).
+\end{remark}
+
+\noindent
+If our ring map $A \to B$ is a local complete intersection, then there
+is a solution. This is a kind of lifting result; observe that
+for syntomic ring maps we have proved a rather strong lifting result in
+Smoothing Ring Maps, Proposition \ref{smoothing-proposition-lift-smooth}.
+
+\begin{lemma}
+\label{lemma-existence-lci}
+If $A \to B$ is a local complete intersection ring map, then
+there exists a solution to (\ref{equation-to-solve}).
+\end{lemma}
+
+\begin{proof}[First proof]
+Write $B = A[x_1, \ldots, x_n]/J$. By More on Algebra, Definition
+\ref{more-algebra-definition-local-complete-intersection}
+the ideal $J$ is Koszul-regular. This implies $J$ is $H_1$-regular and
+quasi-regular, see More on Algebra, Section \ref{more-algebra-section-ideals}.
+Let $J' \subset A'[x_1, \ldots, x_n]$
+be the inverse image of $J$. Denote $I[x_1, \ldots, x_n]$ the
+kernel of $A'[x_1, \ldots, x_n] \to A[x_1, \ldots, x_n]$.
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-conormal-sequence-H1-regular-ideal} we have
+$I[x_1, \ldots, x_n] \cap (J')^2 = J'I[x_1, \ldots, x_n] =
+JI[x_1, \ldots, x_n]$. Hence we obtain a short exact sequence
+$$
+0 \to I \otimes_A B \to J'/(J')^2 \to J/J^2 \to 0
+$$
+Since $J/J^2$ is projective (More on Algebra, Lemma
+\ref{more-algebra-lemma-quasi-regular-ideal-finite-projective})
+we can choose a splitting of this sequence
+$$
+J'/(J')^2 = I \otimes_A B \oplus J/J^2
+$$
+Let $(J')^2 \subset J'' \subset J'$ be the elements which map to the
+second summand in the decomposition above. Then
+$$
+0 \to I \otimes_A B \to A'[x_1, \ldots, x_n]/J'' \to B \to 0
+$$
+is a solution to (\ref{equation-to-solve}) with $N = I \otimes_A B$.
+The general case is obtained by doing a pushout along the given
+map $I \otimes_A B \to N$.
+\end{proof}
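+
+\noindent
+A detail implicit in the first proof: the ring
+$B' = A'[x_1, \ldots, x_n]/J''$ really is an extension of $B$ by an ideal
+of square zero. Indeed,
+$$
+\Ker(B' \to B) = J'/J'' \cong I \otimes_A B
+\quad\text{and}\quad
+(J'/J'')^2 = ((J')^2 + J'')/J'' = 0
+$$
+since $(J')^2 \subset J''$ and $J''/(J')^2$ is the summand $J/J^2$ of the
+chosen splitting.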
+
+\begin{proof}[Second proof]
+Please read Remark \ref{remark-parametrize-solutions}
+before reading this proof. By
+More on Algebra, Lemma \ref{more-algebra-lemma-transitive-lci-at-end}
+the maps $\NL_{A/A'} \otimes_A B \to \NL_{B/A'} \to \NL_{B/A}$
+do form a distinguished triangle in $D(B)$.
+Hence it suffices to show that $\Ext^2_B(\NL_{B/A}, N)$ vanishes.
+By More on Algebra, Lemma \ref{more-algebra-lemma-lci-NL}
+the complex $\NL_{B/A}$ is perfect of tor-amplitude in
+$[-1, 0]$. This implies our $\Ext^2$ vanishes for example
+by More on Algebra, Lemma \ref{more-algebra-lemma-splitting-unique} part (1).
+\end{proof}
+
+
+
+
+
+
+
+\section{Thickenings of ringed spaces}
+\label{section-thickenings-spaces}
+
+\noindent
+In the following few sections we will use the following notions:
+\begin{enumerate}
+\item A sheaf of ideals $\mathcal{I} \subset \mathcal{O}_{X'}$ on
+a ringed space $(X', \mathcal{O}_{X'})$ is {\it locally nilpotent}
+if any local section of $\mathcal{I}$ is locally nilpotent.
+Compare with Algebra, Item \ref{algebra-item-ideal-locally-nilpotent}.
+\item A {\it thickening} of ringed spaces is a morphism
+$i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$ of ringed spaces
+such that
+\begin{enumerate}
+\item $i$ induces a homeomorphism $X \to X'$,
+\item the map $i^\sharp : \mathcal{O}_{X'} \to i_*\mathcal{O}_X$
+is surjective, and
+\item the kernel of $i^\sharp$ is a locally nilpotent sheaf of ideals.
+\end{enumerate}
+\item A {\it first order thickening} of ringed spaces is a thickening
+$i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$ of ringed spaces
+such that $\Ker(i^\sharp)$ has square zero.
+\item It is clear how to define {\it morphisms of thickenings},
+{\it morphisms of thickenings over a base ringed space}, etc.
+\end{enumerate}
+If $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$ is a thickening
+of ringed spaces then we identify the underlying topological spaces
+and think of $\mathcal{O}_X$, $\mathcal{O}_{X'}$, and
+$\mathcal{I} = \Ker(i^\sharp)$ as sheaves on $X = X'$. We obtain
+a short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0
+$$
+of $\mathcal{O}_{X'}$-modules. By
+Modules, Lemma \ref{modules-lemma-i-star-equivalence}
+the category of $\mathcal{O}_X$-modules is equivalent to the category
+of $\mathcal{O}_{X'}$-modules annihilated by $\mathcal{I}$. In particular,
+if $i$ is a first order thickening, then
+$\mathcal{I}$ is an $\mathcal{O}_X$-module.
+
+\begin{situation}
+\label{situation-morphism-thickenings}
+A morphism of thickenings $(f, f')$ is given by a commutative diagram
+\begin{equation}
+\label{equation-morphism-thickenings}
+\vcenter{
+\xymatrix{
+(X, \mathcal{O}_X) \ar[r]_i \ar[d]_f & (X', \mathcal{O}_{X'}) \ar[d]^{f'} \\
+(S, \mathcal{O}_S) \ar[r]^t & (S', \mathcal{O}_{S'})
+}
+}
+\end{equation}
+of ringed spaces whose horizontal arrows are thickenings. In this
+situation we set
+$\mathcal{I} = \Ker(i^\sharp) \subset \mathcal{O}_{X'}$ and
+$\mathcal{J} = \Ker(t^\sharp) \subset \mathcal{O}_{S'}$.
+As $f = f'$ on underlying topological spaces we will identify
+the (topological) pullback functors $f^{-1}$ and $(f')^{-1}$.
+Observe that $(f')^\sharp : f^{-1}\mathcal{O}_{S'} \to \mathcal{O}_{X'}$
+induces in particular a map $f^{-1}\mathcal{J} \to \mathcal{I}$
+and therefore a map of $\mathcal{O}_{X'}$-modules
+$$
+(f')^*\mathcal{J} \longrightarrow \mathcal{I}
+$$
+If $i$ and $t$ are first order thickenings, then
+$(f')^*\mathcal{J} = f^*\mathcal{J}$ and the map above becomes a
+map $f^*\mathcal{J} \to \mathcal{I}$.
+\end{situation}
+
+\begin{definition}
+\label{definition-strict-morphism-thickenings}
+In Situation \ref{situation-morphism-thickenings} we say that $(f, f')$ is a
+{\it strict morphism of thickenings}
+if the map $(f')^*\mathcal{J} \longrightarrow \mathcal{I}$ is surjective.
+\end{definition}
+
+\noindent
+The following lemma in particular shows that a morphism
+$(f, f') : (X \subset X') \to (S \subset S')$ of
+thickenings of schemes is strict if and only if $X = S \times_{S'} X'$.
+
+\begin{lemma}
+\label{lemma-strict-morphism-thickenings}
+In Situation \ref{situation-morphism-thickenings} the morphism $(f, f')$
+is a strict morphism of thickenings if and only if
+(\ref{equation-morphism-thickenings}) is cartesian in the category
+of ringed spaces.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+
+
+
+\section{Modules on first order thickenings of ringed spaces}
+\label{section-modules-thickenings}
+
+\noindent
+In this section we discuss some preliminaries to the deformation theory
+of modules. Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+be a first order thickening of ringed spaces. We will freely use the notation
+introduced in Section \ref{section-thickenings-spaces}, in particular we will
+identify the underlying topological spaces.
+In this section we consider short exact sequences
+\begin{equation}
+\label{equation-extension}
+0 \to \mathcal{K} \to \mathcal{F}' \to \mathcal{F} \to 0
+\end{equation}
+of $\mathcal{O}_{X'}$-modules, where $\mathcal{F}$, $\mathcal{K}$ are
+$\mathcal{O}_X$-modules and $\mathcal{F}'$ is an $\mathcal{O}_{X'}$-module.
+In this situation we have a canonical $\mathcal{O}_X$-module map
+$$
+c_{\mathcal{F}'} :
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}
+\longrightarrow
+\mathcal{K}
+$$
+where $\mathcal{I} = \Ker(i^\sharp)$.
+Namely, given local sections $f$ of $\mathcal{I}$ and $s$
+of $\mathcal{F}$ we set $c_{\mathcal{F}'}(f \otimes s) = fs'$
+where $s'$ is a local section of $\mathcal{F}'$ lifting $s$.
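+
+\medskip\noindent
+The map $c_{\mathcal{F}'}$ is well defined. Namely, if $s''$ is a second
+local section of $\mathcal{F}'$ lifting $s$, then $s'' - s'$ is a local
+section of $\mathcal{K}$, whence
+$$
+fs'' - fs' = f(s'' - s') = 0
+$$
+because $\mathcal{K}$ is an $\mathcal{O}_X$-module, i.e., is annihilated
+by $\mathcal{I}$. For the same reason $fs'$ is a local section of
+$\mathcal{K}$: its image in $\mathcal{F}$ is $fs = 0$ as $\mathcal{F}$
+is annihilated by $\mathcal{I}$.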
+
+\begin{lemma}
+\label{lemma-inf-map}
+Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+be a first order thickening of ringed spaces. Assume given
+extensions
+$$
+0 \to \mathcal{K} \to \mathcal{F}' \to \mathcal{F} \to 0
+\quad\text{and}\quad
+0 \to \mathcal{L} \to \mathcal{G}' \to \mathcal{G} \to 0
+$$
+as in (\ref{equation-extension})
+and maps $\varphi : \mathcal{F} \to \mathcal{G}$ and
+$\psi : \mathcal{K} \to \mathcal{L}$.
+\begin{enumerate}
+\item If there exists an $\mathcal{O}_{X'}$-module
+map $\varphi' : \mathcal{F}' \to \mathcal{G}'$ compatible with $\varphi$
+and $\psi$, then the diagram
+$$
+\xymatrix{
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]_-{c_{\mathcal{F}'}} \ar[d]_{1 \otimes \varphi} &
+\mathcal{K} \ar[d]^\psi \\
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{G}
+\ar[r]^-{c_{\mathcal{G}'}} &
+\mathcal{L}
+}
+$$
+is commutative.
+\item The set of $\mathcal{O}_{X'}$-module
+maps $\varphi' : \mathcal{F}' \to \mathcal{G}'$ compatible with $\varphi$
+and $\psi$ is, if nonempty, a principal homogeneous space under
+$\Hom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{L})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is immediate from the description of the maps.
+For (2), if $\varphi'$ and $\varphi''$ are two maps
+$\mathcal{F}' \to \mathcal{G}'$ compatible with $\varphi$
+and $\psi$, then $\varphi' - \varphi''$ factors as
+$$
+\mathcal{F}' \to \mathcal{F} \to \mathcal{L} \to \mathcal{G}'
+$$
+The map in the middle comes from a unique element of
+$\Hom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{L})$ by
+Modules, Lemma \ref{modules-lemma-i-star-equivalence}.
+Conversely, given an element $\alpha$ of this group we can add the
+composition (as displayed above with $\alpha$ in the middle)
+to $\varphi'$. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-map}
+Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+be a first order thickening of ringed spaces. Assume given
+extensions
+$$
+0 \to \mathcal{K} \to \mathcal{F}' \to \mathcal{F} \to 0
+\quad\text{and}\quad
+0 \to \mathcal{L} \to \mathcal{G}' \to \mathcal{G} \to 0
+$$
+as in (\ref{equation-extension})
+and maps $\varphi : \mathcal{F} \to \mathcal{G}$ and
+$\psi : \mathcal{K} \to \mathcal{L}$. Assume the diagram
+$$
+\xymatrix{
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]_-{c_{\mathcal{F}'}} \ar[d]_{1 \otimes \varphi} &
+\mathcal{K} \ar[d]^\psi \\
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{G}
+\ar[r]^-{c_{\mathcal{G}'}} &
+\mathcal{L}
+}
+$$
+is commutative. Then there exists an element
+$$
+o(\varphi, \psi) \in
+\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{L})
+$$
+whose vanishing is a necessary and sufficient condition for the existence
+of a map $\varphi' : \mathcal{F}' \to \mathcal{G}'$ compatible with
+$\varphi$ and $\psi$.
+\end{lemma}
+
+\begin{proof}
+We can construct explicitly an extension
+$$
+0 \to \mathcal{L} \to \mathcal{H} \to \mathcal{F} \to 0
+$$
+by taking $\mathcal{H}$ to be the cohomology of the complex
+$$
+\mathcal{K}
+\xrightarrow{1, - \psi}
+\mathcal{F}' \oplus \mathcal{G}' \xrightarrow{\varphi, 1}
+\mathcal{G}
+$$
+in the middle (with obvious notation). A calculation with local sections
+using the assumption that the diagram of the lemma commutes
+shows that $\mathcal{H}$ is annihilated by $\mathcal{I}$. Hence
+$\mathcal{H}$ defines a class in
+$$
+\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{L})
+\subset
+\Ext^1_{\mathcal{O}_{X'}}(\mathcal{F}, \mathcal{L})
+$$
+Finally, the class of $\mathcal{H}$ is the difference of the pushout
+of the extension $\mathcal{F}'$ via $\psi$ and the pullback
+of the extension $\mathcal{G}'$ via $\varphi$ (calculations omitted).
+Thus the vanishing of the class of $\mathcal{H}$ is equivalent to the
+existence of a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K} \ar[r] \ar[d]_{\psi} &
+\mathcal{F}' \ar[r] \ar[d]_{\varphi'} &
+\mathcal{F} \ar[r] \ar[d]_\varphi & 0\\
+0 \ar[r] &
+\mathcal{L} \ar[r] &
+\mathcal{G}' \ar[r] &
+\mathcal{G} \ar[r] & 0
+}
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-ext}
+Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$ be a first order
+thickening of ringed spaces.
+Assume given $\mathcal{O}_X$-modules $\mathcal{F}$, $\mathcal{K}$
+and an $\mathcal{O}_X$-linear map
+$c : \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \to \mathcal{K}$.
+If there exists a sequence (\ref{equation-extension}) with
+$c_{\mathcal{F}'} = c$ then the set of isomorphism classes of these
+extensions is principal homogeneous under
+$\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K})$.
+\end{lemma}
+
+\begin{proof}
+Assume given extensions
+$$
+0 \to \mathcal{K} \to \mathcal{F}'_1 \to \mathcal{F} \to 0
+\quad\text{and}\quad
+0 \to \mathcal{K} \to \mathcal{F}'_2 \to \mathcal{F} \to 0
+$$
+with $c_{\mathcal{F}'_1} = c_{\mathcal{F}'_2} = c$. Then the difference
+(in the extension group, see
+Homology, Section \ref{homology-section-extensions})
+is an extension
+$$
+0 \to \mathcal{K} \to \mathcal{E} \to \mathcal{F} \to 0
+$$
+where $\mathcal{E}$ is annihilated by $\mathcal{I}$ (local computation
+omitted). Hence the sequence is an extension of $\mathcal{O}_X$-modules,
+see Modules, Lemma \ref{modules-lemma-i-star-equivalence}.
+Conversely, given such an extension $\mathcal{E}$ we can add the extension
+$\mathcal{E}$ to the $\mathcal{O}_{X'}$-extension $\mathcal{F}'$ without
+affecting the map $c_{\mathcal{F}'}$. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-ext}
+Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+be a first order thickening of ringed spaces. Assume given
+$\mathcal{O}_X$-modules $\mathcal{F}$, $\mathcal{K}$
+and an $\mathcal{O}_X$-linear map
+$c : \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \to \mathcal{K}$.
+Then there exists an element
+$$
+o(\mathcal{F}, \mathcal{K}, c) \in
+\Ext^2_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K})
+$$
+whose vanishing is a necessary and sufficient condition for the existence
+of a sequence (\ref{equation-extension}) with $c_{\mathcal{F}'} = c$.
+\end{lemma}
+
+\begin{proof}
+We first show that if $\mathcal{K}$ is an injective $\mathcal{O}_X$-module,
+then there does exist a sequence (\ref{equation-extension}) with
+$c_{\mathcal{F}'} = c$. To do this, choose a flat
+$\mathcal{O}_{X'}$-module $\mathcal{H}'$ and a surjection
+$\mathcal{H}' \to \mathcal{F}$
+(Modules, Lemma \ref{modules-lemma-module-quotient-flat}).
+Let $\mathcal{J} \subset \mathcal{H}'$ be the kernel. Since $\mathcal{H}'$
+is flat we have
+$$
+\mathcal{I} \otimes_{\mathcal{O}_{X'}} \mathcal{H}' =
+\mathcal{I}\mathcal{H}'
+\subset \mathcal{J} \subset \mathcal{H}'
+$$
+Observe that the map
+$$
+\mathcal{I}\mathcal{H}' =
+\mathcal{I} \otimes_{\mathcal{O}_{X'}} \mathcal{H}'
+\longrightarrow
+\mathcal{I} \otimes_{\mathcal{O}_{X'}} \mathcal{F} =
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}
+$$
+annihilates $\mathcal{I}\mathcal{J}$. Namely, if $f$ is a local section
+of $\mathcal{I}$ and $s$ is a local section of $\mathcal{H}'$, then
+$fs$ is mapped to $f \otimes \overline{s}$ where $\overline{s}$ is
+the image of $s$ in $\mathcal{F}$. Thus we obtain
+$$
+\xymatrix{
+\mathcal{I}\mathcal{H}'/\mathcal{I}\mathcal{J}
+\ar@{^{(}->}[r] \ar[d] &
+\mathcal{J}/\mathcal{I}\mathcal{J} \ar@{..>}[d]_\gamma \\
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \ar[r]^-c &
+\mathcal{K}
+}
+$$
+a diagram of $\mathcal{O}_X$-modules. If $\mathcal{K}$ is injective
+as an $\mathcal{O}_X$-module, then we obtain the dotted arrow.
+Denote $\gamma' : \mathcal{J} \to \mathcal{K}$ the composition
+of $\gamma$ with $\mathcal{J} \to \mathcal{J}/\mathcal{I}\mathcal{J}$.
+A local calculation shows the pushout
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{J} \ar[r] \ar[d]_{\gamma'} &
+\mathcal{H}' \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar@{=}[d] &
+0 \\
+0 \ar[r] &
+\mathcal{K} \ar[r] &
+\mathcal{F}' \ar[r] &
+\mathcal{F} \ar[r] &
+0
+}
+$$
+is a solution to the problem posed by the lemma.
+
+\medskip\noindent
+General case. Choose an embedding $\mathcal{K} \subset \mathcal{K}'$
+with $\mathcal{K}'$ an injective $\mathcal{O}_X$-module. Let $\mathcal{Q}$
+be the quotient, so that we have an exact sequence
+$$
+0 \to \mathcal{K} \to \mathcal{K}' \to \mathcal{Q} \to 0
+$$
+Denote by
+$c' : \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \to \mathcal{K}'$
+the composition of $c$ with the inclusion $\mathcal{K} \to \mathcal{K}'$.
+By the paragraph above there exists a sequence
+$$
+0 \to \mathcal{K}' \to \mathcal{E}' \to \mathcal{F} \to 0
+$$
+as in (\ref{equation-extension}) with $c_{\mathcal{E}'} = c'$.
+Note that $c'$ composed with the map $\mathcal{K}' \to \mathcal{Q}$
+is zero, hence the pushout of $\mathcal{E}'$ by
+$\mathcal{K}' \to \mathcal{Q}$ is an extension
+$$
+0 \to \mathcal{Q} \to \mathcal{D}' \to \mathcal{F} \to 0
+$$
+as in (\ref{equation-extension}) with $c_{\mathcal{D}'} = 0$.
+This means exactly that $\mathcal{D}'$ is annihilated by
+$\mathcal{I}$, in other words, $\mathcal{D}'$ is an extension
+of $\mathcal{O}_X$-modules, i.e., defines an element
+$$
+o(\mathcal{F}, \mathcal{K}, c) \in
+\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{Q}) =
+\Ext^2_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K})
+$$
+(the equality holds by the long exact cohomology sequence associated
+to the exact sequence above and the vanishing of higher ext groups
+into the injective module $\mathcal{K}'$). If
+$o(\mathcal{F}, \mathcal{K}, c) = 0$, then we can choose a splitting
+$s : \mathcal{F} \to \mathcal{D}'$ and we can set
+$$
+\mathcal{F}' = \Ker(\mathcal{E}' \to \mathcal{D}'/s(\mathcal{F}))
+$$
+so that we obtain the following diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K} \ar[r] \ar[d] &
+\mathcal{F}' \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar@{=}[d] &
+0 \\
+0 \ar[r] &
+\mathcal{K}' \ar[r] &
+\mathcal{E}' \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+with exact rows which shows that $c_{\mathcal{F}'} = c$. Conversely, if
+$\mathcal{F}'$ exists, then the pushout of $\mathcal{F}'$ by the map
+$\mathcal{K} \to \mathcal{K}'$ is isomorphic to $\mathcal{E}'$ by
+Lemma \ref{lemma-inf-ext} and the vanishing of higher ext groups
+into the injective module $\mathcal{K}'$. This gives a diagram
+as above, which implies that $\mathcal{D}'$ is split as an extension, i.e.,
+the class $o(\mathcal{F}, \mathcal{K}, c)$ is zero.
+\end{proof}
+
+\begin{remark}
+\label{remark-trivial-thickening}
+Let $(X, \mathcal{O}_X)$ be a ringed space. A first order thickening
+$i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$ is said
+to be {\it trivial} if there exists a morphism of ringed spaces
+$\pi : (X', \mathcal{O}_{X'}) \to (X, \mathcal{O}_X)$ which is a
+left inverse to $i$. The choice of such a morphism
+$\pi$ is called a {\it trivialization} of the first order thickening.
+Given $\pi$ we obtain a splitting
+\begin{equation}
+\label{equation-splitting}
+\mathcal{O}_{X'} = \mathcal{O}_X \oplus \mathcal{I}
+\end{equation}
+as sheaves of algebras on $X$ by using $\pi^\sharp$ to split the surjection
+$\mathcal{O}_{X'} \to \mathcal{O}_X$. Conversely, such a splitting determines
+a morphism $\pi$. The category of trivialized first order thickenings of
+$(X, \mathcal{O}_X)$ is equivalent to the category of
+$\mathcal{O}_X$-modules.
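+
+\medskip\noindent
+Explicitly, in terms of the splitting (\ref{equation-splitting}) the
+algebra structure on $\mathcal{O}_X \oplus \mathcal{I}$ is given by the rule
+$$
+(a, u) \cdot (b, v) = (ab, av + bu)
+$$
+for local sections $a, b$ of $\mathcal{O}_X$ and $u, v$ of $\mathcal{I}$;
+there is no term $uv$ because $\mathcal{I}^2 = 0$.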
+\end{remark}
+
+\begin{remark}
+\label{remark-trivial-extension}
+Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+be a trivial first order thickening of ringed spaces
+and let $\pi : (X', \mathcal{O}_{X'}) \to (X, \mathcal{O}_X)$
+be a trivialization. Then given any triple
+$(\mathcal{F}, \mathcal{K}, c)$ consisting of a pair of
+$\mathcal{O}_X$-modules and a map
+$c : \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \to \mathcal{K}$
+we may set
+$$
+\mathcal{F}'_{c, triv} = \mathcal{F} \oplus \mathcal{K}
+$$
+and use the splitting (\ref{equation-splitting}) associated to $\pi$
+and the map $c$ to define the $\mathcal{O}_{X'}$-module structure
+and obtain an extension (\ref{equation-extension}). We will call
+$\mathcal{F}'_{c, triv}$ the {\it trivial extension} of $\mathcal{F}$
+by $\mathcal{K}$ corresponding
+to $c$ and the trivialization $\pi$. Given any extension
+$\mathcal{F}'$ as in (\ref{equation-extension}) we can use
+$\pi^\sharp : \mathcal{O}_X \to \mathcal{O}_{X'}$ to think of $\mathcal{F}'$
+as an $\mathcal{O}_X$-module extension, hence a class $\xi_{\mathcal{F}'}$
+in $\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K})$.
+Lemma \ref{lemma-inf-ext} assures that
+$\mathcal{F}' \mapsto \xi_{\mathcal{F}'}$
+induces a bijection
+$$
+\left\{
+\begin{matrix}
+\text{isomorphism classes of extensions}\\
+\mathcal{F}'\text{ as in (\ref{equation-extension}) with }c = c_{\mathcal{F}'}
+\end{matrix}
+\right\}
+\longrightarrow
+\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K})
+$$
+Moreover, the trivial extension $\mathcal{F}'_{c, triv}$ maps to the zero class.
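+
+\medskip\noindent
+Explicitly, using the splitting (\ref{equation-splitting}) the
+$\mathcal{O}_{X'}$-module structure on
+$\mathcal{F}'_{c, triv} = \mathcal{F} \oplus \mathcal{K}$ is given by the rule
+$$
+(a, u) \cdot (s, k) = (as, ak + c(u \otimes s))
+$$
+for local sections $(a, u)$ of $\mathcal{O}_X \oplus \mathcal{I}$ and
+$(s, k)$ of $\mathcal{F} \oplus \mathcal{K}$. A direct computation, using
+$\mathcal{I}^2 = 0$, the fact that $\mathcal{I}$ annihilates $\mathcal{F}$
+and $\mathcal{K}$, and the $\mathcal{O}_X$-linearity of $c$, shows that
+this is an $\mathcal{O}_{X'}$-module structure with
+$c_{\mathcal{F}'_{c, triv}} = c$.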
+\end{remark}
+
+\begin{remark}
+\label{remark-extension-functorial}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let
+$(X, \mathcal{O}_X) \to (X'_i, \mathcal{O}_{X'_i})$, $i = 1, 2$
+be first order thickenings with ideal sheaves $\mathcal{I}_i$.
+Let $h : (X'_1, \mathcal{O}_{X'_1}) \to (X'_2, \mathcal{O}_{X'_2})$
+be a morphism of first order thickenings of $(X, \mathcal{O}_X)$.
+Picture
+$$
+\xymatrix{
+& (X, \mathcal{O}_X) \ar[ld] \ar[rd] & \\
+(X'_1, \mathcal{O}_{X'_1}) \ar[rr]^h & &
+(X'_2, \mathcal{O}_{X'_2})
+}
+$$
+Observe that $h^\sharp : \mathcal{O}_{X'_2} \to \mathcal{O}_{X'_1}$
+in particular induces an $\mathcal{O}_X$-module map
+$\mathcal{I}_2 \to \mathcal{I}_1$.
+Let $\mathcal{F}$ be an
+$\mathcal{O}_X$-module. Let $(\mathcal{K}_i, c_i)$, $i = 1, 2$ be a pair
+consisting of an $\mathcal{O}_X$-module $\mathcal{K}_i$ and a map
+$c_i : \mathcal{I}_i \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{K}_i$. Assume furthermore given a map
+of $\mathcal{O}_X$-modules $\mathcal{K}_2 \to \mathcal{K}_1$
+such that
+$$
+\xymatrix{
+\mathcal{I}_2 \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]_-{c_2} \ar[d] &
+\mathcal{K}_2 \ar[d] \\
+\mathcal{I}_1 \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]^-{c_1} &
+\mathcal{K}_1
+}
+$$
+is commutative. Then there is a canonical functoriality
+$$
+\left\{
+\begin{matrix}
+\mathcal{F}'_2\text{ as in (\ref{equation-extension}) with }\\
+c_2 = c_{\mathcal{F}'_2}\text{ and }\mathcal{K} = \mathcal{K}_2
+\end{matrix}
+\right\}
+\longrightarrow
+\left\{
+\begin{matrix}
+\mathcal{F}'_1\text{ as in (\ref{equation-extension}) with }\\
+c_1 = c_{\mathcal{F}'_1}\text{ and }\mathcal{K} = \mathcal{K}_1
+\end{matrix}
+\right\}
+$$
+Namely, thinking of all sheaves $\mathcal{O}_X$, $\mathcal{O}_{X'_i}$,
+$\mathcal{F}$, $\mathcal{K}_i$, etc.\ as sheaves on $X$, given
+$\mathcal{F}'_2$ we set the sheaf $\mathcal{F}'_1$ equal to the
+pushout, i.e., fitting into the following diagram of extensions
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K}_2 \ar[r] \ar[d] &
+\mathcal{F}'_2 \ar[r] \ar[d] &
+\mathcal{F} \ar@{=}[d] \ar[r] & 0 \\
+0 \ar[r] &
+\mathcal{K}_1 \ar[r] &
+\mathcal{F}'_1 \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+We omit the construction of the $\mathcal{O}_{X'_1}$-module structure
+on the pushout (this uses the commutativity of the diagram
+involving $c_1$ and $c_2$).
+\end{remark}
+
+\begin{remark}
+\label{remark-trivial-extension-functorial}
+Let $(X, \mathcal{O}_X)$, $(X, \mathcal{O}_X) \to (X'_i, \mathcal{O}_{X'_i})$,
+$\mathcal{I}_i$, and
+$h : (X'_1, \mathcal{O}_{X'_1}) \to (X'_2, \mathcal{O}_{X'_2})$
+be as in Remark \ref{remark-extension-functorial}. Assume that we are
+given trivializations $\pi_i : X'_i \to X$ such that
+$\pi_1 = \pi_2 \circ h$. In other words, assume $h$ is a morphism
+of trivialized first order thickenings of $(X, \mathcal{O}_X)$. Let
+$(\mathcal{K}_i, c_i)$, $i = 1, 2$ be a pair consisting of an
+$\mathcal{O}_X$-module $\mathcal{K}_i$ and a map
+$c_i : \mathcal{I}_i \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{K}_i$. Assume furthermore given a map
+of $\mathcal{O}_X$-modules $\mathcal{K}_2 \to \mathcal{K}_1$
+such that
+$$
+\xymatrix{
+\mathcal{I}_2 \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]_-{c_2} \ar[d] &
+\mathcal{K}_2 \ar[d] \\
+\mathcal{I}_1 \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]^-{c_1} &
+\mathcal{K}_1
+}
+$$
+is commutative. In this situation the construction of
+Remark \ref{remark-trivial-extension} induces
+a commutative diagram
+$$
+\xymatrix{
+\{\mathcal{F}'_2\text{ as in (\ref{equation-extension}) with }
+c_2 = c_{\mathcal{F}'_2}\text{ and }\mathcal{K} = \mathcal{K}_2\}
+\ar[d] \ar[rr] & &
+\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K}_2) \ar[d] \\
+\{\mathcal{F}'_1\text{ as in (\ref{equation-extension}) with }
+c_1 = c_{\mathcal{F}'_1}\text{ and }\mathcal{K} = \mathcal{K}_1\}
+\ar[rr] & &
+\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K}_1)
+}
+$$
+where the vertical map on the right is given by functoriality of $\Ext$
+and the map $\mathcal{K}_2 \to \mathcal{K}_1$ and the vertical map on the left
+is the one from Remark \ref{remark-extension-functorial}.
+\end{remark}
+
+\begin{remark}
+\label{remark-short-exact-sequence-thickenings}
+Let $(X, \mathcal{O}_X)$ be a ringed space. We define a sequence of morphisms
+of first order thickenings
+$$
+(X'_1, \mathcal{O}_{X'_1}) \to
+(X'_2, \mathcal{O}_{X'_2}) \to
+(X'_3, \mathcal{O}_{X'_3})
+$$
+of $(X, \mathcal{O}_X)$ to be a {\it complex}
+if the corresponding maps between
+the ideal sheaves $\mathcal{I}_i$
+give a complex of $\mathcal{O}_X$-modules
+$\mathcal{I}_3 \to \mathcal{I}_2 \to \mathcal{I}_1$
+(i.e., the composition is zero). In this case the composition
+$(X'_1, \mathcal{O}_{X'_1}) \to (X'_3, \mathcal{O}_{X'_3})$ factors through
+$(X, \mathcal{O}_X) \to (X'_3, \mathcal{O}_{X'_3})$, i.e.,
+the first order thickening $(X'_1, \mathcal{O}_{X'_1})$ of
+$(X, \mathcal{O}_X)$ is trivial and comes with
+a canonical trivialization
+$\pi : (X'_1, \mathcal{O}_{X'_1}) \to (X, \mathcal{O}_X)$.
+
+\medskip\noindent
+We say a sequence of morphisms of first order thickenings
+$$
+(X'_1, \mathcal{O}_{X'_1}) \to
+(X'_2, \mathcal{O}_{X'_2}) \to
+(X'_3, \mathcal{O}_{X'_3})
+$$
+of $(X, \mathcal{O}_X)$ is {\it a short exact sequence} if the
+corresponding maps between the ideal sheaves form a short exact sequence
+$$
+0 \to \mathcal{I}_3 \to \mathcal{I}_2 \to \mathcal{I}_1 \to 0
+$$
+of $\mathcal{O}_X$-modules.
+\end{remark}
+
+\begin{remark}
+\label{remark-complex-thickenings-and-ses-modules}
+Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{F}$ be an
+$\mathcal{O}_X$-module. Let
+$$
+(X'_1, \mathcal{O}_{X'_1}) \to
+(X'_2, \mathcal{O}_{X'_2}) \to
+(X'_3, \mathcal{O}_{X'_3})
+$$
+be a complex of first order thickenings of $(X, \mathcal{O}_X)$, see
+Remark \ref{remark-short-exact-sequence-thickenings}.
+Let $(\mathcal{K}_i, c_i)$, $i = 1, 2, 3$ be pairs consisting of
+an $\mathcal{O}_X$-module $\mathcal{K}_i$ and a map
+$c_i : \mathcal{I}_i \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{K}_i$. Assume given a short exact sequence
+of $\mathcal{O}_X$-modules
+$$
+0 \to \mathcal{K}_3 \to \mathcal{K}_2 \to \mathcal{K}_1 \to 0
+$$
+such that
+$$
+\vcenter{
+\xymatrix{
+\mathcal{I}_2 \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]_-{c_2} \ar[d] &
+\mathcal{K}_2 \ar[d] \\
+\mathcal{I}_1 \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]^-{c_1} &
+\mathcal{K}_1
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+\mathcal{I}_3 \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]_-{c_3} \ar[d] &
+\mathcal{K}_3 \ar[d] \\
+\mathcal{I}_2 \otimes_{\mathcal{O}_X} \mathcal{F}
+\ar[r]^-{c_2} &
+\mathcal{K}_2
+}
+}
+$$
+are commutative. Finally, assume given an extension
+$$
+0 \to \mathcal{K}_2 \to \mathcal{F}'_2 \to \mathcal{F} \to 0
+$$
+as in (\ref{equation-extension}) of $\mathcal{O}_{X'_2}$-modules
+with $\mathcal{K} = \mathcal{K}_2$ and $c_{\mathcal{F}'_2} = c_2$.
+In this situation we can apply the functoriality of
+Remark \ref{remark-extension-functorial} to obtain an extension
+$\mathcal{F}'_1$ on $X'_1$ (we'll describe $\mathcal{F}'_1$
+in this special case below). By
+Remark \ref{remark-trivial-extension}
+using the canonical splitting
+$\pi : (X'_1, \mathcal{O}_{X'_1}) \to (X, \mathcal{O}_X)$ of
+Remark \ref{remark-short-exact-sequence-thickenings}
+we obtain
+$\xi_{\mathcal{F}'_1} \in
+\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K}_1)$.
+Finally, we have the obstruction
+$$
+o(\mathcal{F}, \mathcal{K}_3, c_3) \in
+\Ext^2_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K}_3)
+$$
+see Lemma \ref{lemma-inf-obs-ext}.
+In this situation we {\bf claim} that the canonical map
+$$
+\partial :
+\Ext^1_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K}_1)
+\longrightarrow
+\Ext^2_{\mathcal{O}_X}(\mathcal{F}, \mathcal{K}_3)
+$$
+coming from the short exact sequence
+$0 \to \mathcal{K}_3 \to \mathcal{K}_2 \to \mathcal{K}_1 \to 0$
+sends $\xi_{\mathcal{F}'_1}$
+to the obstruction class $o(\mathcal{F}, \mathcal{K}_3, c_3)$.
+
+\medskip\noindent
+To prove this claim choose an embedding $j : \mathcal{K}_3 \to \mathcal{K}$
+where $\mathcal{K}$ is an injective $\mathcal{O}_X$-module.
+We can lift $j$ to a map $j' : \mathcal{K}_2 \to \mathcal{K}$.
+Set $\mathcal{E}'_2 = j'_*\mathcal{F}'_2$ equal to the pushout
+of $\mathcal{F}'_2$ by $j'$ so that $c_{\mathcal{E}'_2} = j' \circ c_2$.
+Picture:
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K}_2 \ar[r] \ar[d]_{j'} &
+\mathcal{F}'_2 \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+\mathcal{K} \ar[r] &
+\mathcal{E}'_2 \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+Set $\mathcal{E}'_3 = \mathcal{E}'_2$ but viewed as an
+$\mathcal{O}_{X'_3}$-module via $\mathcal{O}_{X'_3} \to \mathcal{O}_{X'_2}$.
+Then $c_{\mathcal{E}'_3} = j \circ c_3$.
+The proof of Lemma \ref{lemma-inf-obs-ext} constructs
+$o(\mathcal{F}, \mathcal{K}_3, c_3)$
+as the boundary of the class of the extension of $\mathcal{O}_X$-modules
+$$
+0 \to
+\mathcal{K}/\mathcal{K}_3 \to
+\mathcal{E}'_3/\mathcal{K}_3 \to
+\mathcal{F} \to 0
+$$
+On the other hand, note that $\mathcal{F}'_1 = \mathcal{F}'_2/\mathcal{K}_3$
+hence the class $\xi_{\mathcal{F}'_1}$ is the class
+of the extension
+$$
+0 \to \mathcal{K}_2/\mathcal{K}_3 \to \mathcal{F}'_2/\mathcal{K}_3
+\to \mathcal{F} \to 0
+$$
+seen as a sequence of $\mathcal{O}_X$-modules using $\pi^\sharp$
+where $\pi : (X'_1, \mathcal{O}_{X'_1}) \to (X, \mathcal{O}_X)$
+is the canonical splitting.
+Thus finally, the claim follows from the fact that we have
+a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K}_2/\mathcal{K}_3 \ar[r] \ar[d] &
+\mathcal{F}'_2/\mathcal{K}_3 \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+\mathcal{K}/\mathcal{K}_3 \ar[r] &
+\mathcal{E}'_3/\mathcal{K}_3 \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+which is $\mathcal{O}_X$-linear (with the $\mathcal{O}_X$-module
+structures given above).
+\end{remark}
+
+
+
+
+
+
+
+
+\section{Infinitesimal deformations of modules on ringed spaces}
+\label{section-deformation-modules}
+
+\noindent
+Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$ be a first
+order thickening of ringed spaces. We freely use the notation introduced in
+Section \ref{section-thickenings-spaces}.
+Let $\mathcal{F}'$ be an $\mathcal{O}_{X'}$-module
+and set $\mathcal{F} = i^*\mathcal{F}'$.
+In this situation we have a short exact sequence
+$$
+0 \to \mathcal{I}\mathcal{F}' \to \mathcal{F}' \to \mathcal{F} \to 0
+$$
+of $\mathcal{O}_{X'}$-modules. Since $\mathcal{I}^2 = 0$ the
+$\mathcal{O}_{X'}$-module structure on $\mathcal{I}\mathcal{F}'$
+comes from a unique $\mathcal{O}_X$-module structure.
+Thus the sequence above is an extension as in (\ref{equation-extension}).
+As a special case, if $\mathcal{F}' = \mathcal{O}_{X'}$ we have
+$i^*\mathcal{O}_{X'} = \mathcal{O}_X$ and
+$\mathcal{I}\mathcal{O}_{X'} = \mathcal{I}$ and we recover the
+sequence of structure sheaves
+$$
+0 \to \mathcal{I} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0
+$$
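+
+\medskip\noindent
+In the simplest example, the one point space with
+$\mathcal{O}_{X'} = k[\epsilon]$, $\epsilon^2 = 0$, and
+$\mathcal{O}_X = k$, this reads as follows: a $k[\epsilon]$-module $M'$
+with $M = M'/\epsilon M'$ gives rise to the exact sequence
+$$
+0 \to \epsilon M' \to M' \to M \to 0
+$$
+where $\epsilon M'$ is a $k$-vector space because $\epsilon^2 = 0$.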
+
+\begin{lemma}
+\label{lemma-inf-map-special}
+Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+be a first order thickening of ringed spaces.
+Let $\mathcal{F}'$, $\mathcal{G}'$ be $\mathcal{O}_{X'}$-modules.
+Set $\mathcal{F} = i^*\mathcal{F}'$ and $\mathcal{G} = i^*\mathcal{G}'$.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}_X$-linear map.
+The set of lifts of $\varphi$ to an $\mathcal{O}_{X'}$-linear map
+$\varphi' : \mathcal{F}' \to \mathcal{G}'$ is, if nonempty, a principal
+homogeneous space under
+$\Hom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{I}\mathcal{G}')$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-inf-map} but we also
+give a direct proof. We have short exact sequences of modules
+$$
+0 \to \mathcal{I} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0
+\quad\text{and}\quad
+0 \to \mathcal{I}\mathcal{G}' \to \mathcal{G}' \to \mathcal{G} \to 0
+$$
+and similarly for $\mathcal{F}'$.
+Since $\mathcal{I}$ has square zero the $\mathcal{O}_{X'}$-module
+structure on $\mathcal{I}$ and $\mathcal{I}\mathcal{G}'$ comes from
+a unique $\mathcal{O}_X$-module structure. It follows that
+$$
+\Hom_{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{I}\mathcal{G}') =
+\Hom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{I}\mathcal{G}')
+\quad\text{and}\quad
+\Hom_{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{G}) =
+\Hom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})
+$$
+The lemma now follows from the exact sequence
+$$
+0 \to \Hom_{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{I}\mathcal{G}') \to
+\Hom_{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{G}') \to
+\Hom_{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{G})
+$$
+see Homology, Lemma \ref{homology-lemma-check-exactness}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-deform-module}
+Let $(f, f')$ be a morphism of first order thickenings of ringed spaces
+as in Situation \ref{situation-morphism-thickenings}.
+Let $\mathcal{F}'$ be an $\mathcal{O}_{X'}$-module
+and set $\mathcal{F} = i^*\mathcal{F}'$.
+Assume that $\mathcal{F}$ is flat over $S$
+and that $(f, f')$ is a strict morphism of thickenings
+(Definition \ref{definition-strict-morphism-thickenings}).
+Then the following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}'$ is flat over $S'$, and
+\item the canonical map
+$f^*\mathcal{J} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{I}\mathcal{F}'$
+is an isomorphism.
+\end{enumerate}
+Moreover, in this case the maps
+$$
+f^*\mathcal{J} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{I}\mathcal{F}'
+$$
+are isomorphisms.
+\end{lemma}
+
+\begin{proof}
+The map $f^*\mathcal{J} \to \mathcal{I}$ is surjective
+as $(f, f')$ is a strict morphism of thickenings.
+Hence the final statement is a consequence of (2).
+
+\medskip\noindent
+Proof of the equivalence of (1) and (2). We may check these conditions
+at stalks. Let $x \in X \subset X'$
+be a point with image $s = f(x) \in S \subset S'$.
+Set $A' = \mathcal{O}_{S', s}$, $B' = \mathcal{O}_{X', x}$,
+$A = \mathcal{O}_{S, s}$, and $B = \mathcal{O}_{X, x}$.
+Then $A = A'/J$ and $B = B'/I$ for some square zero ideals.
+Since $(f, f')$ is a strict morphism of thickenings we have $I = JB'$.
+Let $M' = \mathcal{F}'_x$ and $M = \mathcal{F}_x$.
+Then $M'$ is a $B'$-module and $M$ is a $B$-module.
+Since $\mathcal{F} = i^*\mathcal{F}'$ we see that the kernel of the
+surjection $M' \to M$ is $IM' = JM'$. Thus we have a short exact
+sequence
+$$
+0 \to JM' \to M' \to M \to 0
+$$
+Using
+Sheaves, Lemma \ref{sheaves-lemma-stalk-pullback-modules}
+and
+Modules, Lemma \ref{modules-lemma-stalk-tensor-product}
+to identify stalks of pullbacks and tensor products we see
+that the stalk at $x$ of the canonical map of the lemma is the map
+$$
+(J \otimes_A B) \otimes_B M = J \otimes_A M = J \otimes_{A'} M'
+\longrightarrow JM'
+$$
+The assumption that $\mathcal{F}$ is flat over $S$ signifies that
+$M$ is a flat $A$-module.
+
+\medskip\noindent
+Assume (1). Flatness implies $\text{Tor}_1^{A'}(M', A) = 0$ by
+Algebra, Lemma \ref{algebra-lemma-characterize-flat}.
+This means $J \otimes_{A'} M' \to M'$ is injective by
+Algebra, Remark \ref{algebra-remark-Tor-ring-mod-ideal}.
+Hence $J \otimes_A M \to JM'$ is an isomorphism.
+
+\medskip\noindent
+Assume (2). Then $J \otimes_{A'} M' \to M'$ is injective. Hence
+$\text{Tor}_1^{A'}(M', A) = 0$ by
+Algebra, Remark \ref{algebra-remark-Tor-ring-mod-ideal}.
+Hence $M'$ is flat over $A'$ by
+Algebra, Lemma \ref{algebra-lemma-what-does-it-mean}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-map-rel}
+Let $(f, f')$ be a morphism of first order thickenings as in
+Situation \ref{situation-morphism-thickenings}.
+Let $\mathcal{F}'$, $\mathcal{G}'$ be $\mathcal{O}_{X'}$-modules and set
+$\mathcal{F} = i^*\mathcal{F}'$ and $\mathcal{G} = i^*\mathcal{G}'$.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}_X$-linear map.
+Assume that $\mathcal{G}'$ is flat over $S'$ and that
+$(f, f')$ is a strict morphism of thickenings.
+The set of lifts of $\varphi$ to an $\mathcal{O}_{X'}$-linear map
+$\varphi' : \mathcal{F}' \to \mathcal{G}'$ is, if nonempty, a principal
+homogeneous space under
+$$
+\Hom_{\mathcal{O}_X}(\mathcal{F},
+\mathcal{G} \otimes_{\mathcal{O}_X} f^*\mathcal{J})
+$$
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-inf-map-special} and \ref{lemma-deform-module}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-map-special}
+Let $i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+be a first order thickening of ringed spaces.
+Let $\mathcal{F}'$, $\mathcal{G}'$ be $\mathcal{O}_{X'}$-modules and set
+$\mathcal{F} = i^*\mathcal{F}'$ and $\mathcal{G} = i^*\mathcal{G}'$.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}_X$-linear map.
+There exists an element
+$$
+o(\varphi) \in
+\Ext^1_{\mathcal{O}_X}(Li^*\mathcal{F}',
+\mathcal{I}\mathcal{G}')
+$$
+whose vanishing is a necessary and sufficient condition for the
+existence of a lift of $\varphi$ to an $\mathcal{O}_{X'}$-linear map
+$\varphi' : \mathcal{F}' \to \mathcal{G}'$.
+\end{lemma}
+
+\begin{proof}
+It is clear from the proof of Lemma \ref{lemma-inf-map-special} that the
+vanishing of the boundary of $\varphi$ via the map
+$$
+\Hom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G}) =
+\Hom_{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{G}) \longrightarrow
+\Ext^1_{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{I}\mathcal{G}')
+$$
+is a necessary and sufficient condition for the existence of a lift. We
+conclude as
+$$
+\Ext^1_{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{I}\mathcal{G}') =
+\Ext^1_{\mathcal{O}_X}(Li^*\mathcal{F}', \mathcal{I}\mathcal{G}')
+$$
+by the adjointness of $i_* = Ri_*$ and $Li^*$ on the derived category
+(Cohomology, Lemma \ref{cohomology-lemma-adjoint}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-map-rel}
+Let $(f, f')$ be a morphism of first
+order thickenings as in Situation \ref{situation-morphism-thickenings}.
+Let $\mathcal{F}'$, $\mathcal{G}'$ be $\mathcal{O}_{X'}$-modules and set
+$\mathcal{F} = i^*\mathcal{F}'$ and $\mathcal{G} = i^*\mathcal{G}'$.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}_X$-linear map.
+Assume that $\mathcal{F}'$ and $\mathcal{G}'$ are flat over $S'$ and
+that $(f, f')$ is a strict morphism of thickenings. There exists an element
+$$
+o(\varphi) \in \Ext^1_{\mathcal{O}_X}(\mathcal{F},
+\mathcal{G} \otimes_{\mathcal{O}_X} f^*\mathcal{J})
+$$
+whose vanishing is a necessary and sufficient condition for the
+existence of a lift of $\varphi$ to an $\mathcal{O}_{X'}$-linear map
+$\varphi' : \mathcal{F}' \to \mathcal{G}'$.
+\end{lemma}
+
+\begin{proof}[First proof]
+This follows from Lemma \ref{lemma-inf-obs-map-special}
+as we claim that under the assumptions of the lemma we have
+$$
+\Ext^1_{\mathcal{O}_X}(Li^*\mathcal{F}',
+\mathcal{I}\mathcal{G}') =
+\Ext^1_{\mathcal{O}_X}(\mathcal{F},
+\mathcal{G} \otimes_{\mathcal{O}_X} f^*\mathcal{J})
+$$
+Namely, we have
+$\mathcal{I}\mathcal{G}' =
+\mathcal{G} \otimes_{\mathcal{O}_X} f^*\mathcal{J}$
+by Lemma \ref{lemma-deform-module}.
+On the other hand, observe that
+$$
+H^{-1}(Li^*\mathcal{F}') =
+\text{Tor}_1^{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{O}_X)
+$$
+(local computation omitted). Using the short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0
+$$
+we see that this $\text{Tor}_1$ is computed by the kernel of the map
+$\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \to \mathcal{I}\mathcal{F}'$
+which is zero by the final assertion of Lemma \ref{lemma-deform-module}.
+Thus $\tau_{\geq -1}Li^*\mathcal{F}' = \mathcal{F}$.
+On the other hand, we have
+$$
+\Ext^1_{\mathcal{O}_X}(Li^*\mathcal{F}',
+\mathcal{I}\mathcal{G}') =
+\Ext^1_{\mathcal{O}_X}(\tau_{\geq -1}Li^*\mathcal{F}',
+\mathcal{I}\mathcal{G}')
+$$
+by the dual of
+Derived Categories, Lemma \ref{derived-lemma-negative-vanishing}.
+\end{proof}
+
+\begin{proof}[Second proof]
+We can apply Lemma \ref{lemma-inf-obs-map} as follows. Note that
+$\mathcal{K} = \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}$ and
+$\mathcal{L} = \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{G}$
+by Lemma \ref{lemma-deform-module}, that
+$c_{\mathcal{F}'} = 1 \otimes 1$ and $c_{\mathcal{G}'} = 1 \otimes 1$
+and taking $\psi = 1 \otimes \varphi$ the diagram of the lemma
+commutes. Thus $o(\varphi) = o(\varphi, 1 \otimes \varphi)$
+works.
+\end{proof}
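+
+\medskip\noindent
+For the reader's convenience, here is a sketch of the local computation
+omitted in the first proof. Tensoring the short exact sequence
+$0 \to \mathcal{I} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0$
+with $\mathcal{F}'$ over $\mathcal{O}_{X'}$ gives an exact sequence
+$$
+0 \to \text{Tor}_1^{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{O}_X) \to
+\mathcal{I} \otimes_{\mathcal{O}_{X'}} \mathcal{F}' \to
+\mathcal{F}' \to \mathcal{F} \to 0
+$$
+because $\text{Tor}_1^{\mathcal{O}_{X'}}(\mathcal{F}', \mathcal{O}_{X'})$
+vanishes. Since $\mathcal{I}^2 = 0$ we have
+$\mathcal{I} \otimes_{\mathcal{O}_{X'}} \mathcal{F}' =
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}$, and the image of the
+middle map is $\mathcal{I}\mathcal{F}'$. This identifies the
+$\text{Tor}_1$ with the kernel of
+$\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{I}\mathcal{F}'$ as claimed.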
+
+\begin{lemma}
+\label{lemma-inf-ext-rel}
+Let $(f, f')$ be a morphism of first order thickenings as in
+Situation \ref{situation-morphism-thickenings}.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Assume $(f, f')$ is a strict morphism of thickenings and
+$\mathcal{F}$ flat over $S$. If there exists a pair
+$(\mathcal{F}', \alpha)$ consisting of an
+$\mathcal{O}_{X'}$-module $\mathcal{F}'$ flat over $S'$ and an isomorphism
+$\alpha : i^*\mathcal{F}' \to \mathcal{F}$, then the set of
+isomorphism classes of such pairs is principal homogeneous
+under
+$\Ext^1_{\mathcal{O}_X}(
+\mathcal{F}, \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+If we assume there exists one such module, then the canonical map
+$$
+f^*\mathcal{J} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}
+$$
+is an isomorphism by Lemma \ref{lemma-deform-module}. Apply
+Lemma \ref{lemma-inf-ext} with $\mathcal{K} =
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}$
+and $c = 1$. By Lemma \ref{lemma-deform-module} the corresponding extensions
+$\mathcal{F}'$ are all flat over $S'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-ext-rel}
+Let $(f, f')$ be a morphism of first order thickenings as in
+Situation \ref{situation-morphism-thickenings}.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module. Assume
+$(f, f')$ is a strict morphism of thickenings
+and $\mathcal{F}$ flat over $S$. There exists an
+$\mathcal{O}_{X'}$-module $\mathcal{F}'$ flat over $S'$ with
+$i^*\mathcal{F}' \cong \mathcal{F}$, if and only if
+\begin{enumerate}
+\item the canonical map $
+f^*\mathcal{J} \otimes_{\mathcal{O}_X} \mathcal{F} \to
+\mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}$
+is an isomorphism, and
+\item the class
+$o(\mathcal{F}, \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F}, 1)
+\in \Ext^2_{\mathcal{O}_X}(
+\mathcal{F}, \mathcal{I} \otimes_{\mathcal{O}_X} \mathcal{F})$
+of Lemma \ref{lemma-inf-obs-ext} is zero.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows immediately from the characterization of
+$\mathcal{O}_{X'}$-modules flat over $S'$ of
+Lemma \ref{lemma-deform-module} and
+Lemma \ref{lemma-inf-obs-ext}.
+\end{proof}
+
+
+
+
+
+
+\section{Application to flat modules on flat thickenings of ringed spaces}
+\label{section-flat}
+
+\noindent
+Consider a commutative diagram
+$$
+\xymatrix{
+(X, \mathcal{O}_X) \ar[r]_i \ar[d]_f & (X', \mathcal{O}_{X'}) \ar[d]^{f'} \\
+(S, \mathcal{O}_S) \ar[r]^t & (S', \mathcal{O}_{S'})
+}
+$$
+of ringed spaces whose horizontal arrows are first order thickenings as in
+Situation \ref{situation-morphism-thickenings}. Set
+$\mathcal{I} = \Ker(i^\sharp) \subset \mathcal{O}_{X'}$ and
+$\mathcal{J} = \Ker(t^\sharp) \subset \mathcal{O}_{S'}$.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module. Assume that
+\begin{enumerate}
+\item $(f, f')$ is a strict morphism of thickenings,
+\item $f'$ is flat, and
+\item $\mathcal{F}$ is flat over $S$.
+\end{enumerate}
+Note that (1) $+$ (2) imply that $\mathcal{I} = f^*\mathcal{J}$
+(apply Lemma \ref{lemma-deform-module} to $\mathcal{O}_{X'}$).
+The theory of the preceding section is especially nice
+under these assumptions. We summarize the results already obtained
+in the following lemma.
+
+\begin{lemma}
+\label{lemma-flat}
+In the situation above, the following are true.
+\begin{enumerate}
+\item There exists an $\mathcal{O}_{X'}$-module $\mathcal{F}'$ flat over
+$S'$ with $i^*\mathcal{F}' \cong \mathcal{F}$, if and only if
+the class
+$o(\mathcal{F}, f^*\mathcal{J} \otimes_{\mathcal{O}_X} \mathcal{F}, 1)
+\in \Ext^2_{\mathcal{O}_X}(
+\mathcal{F}, f^*\mathcal{J} \otimes_{\mathcal{O}_X} \mathcal{F})$
+of Lemma \ref{lemma-inf-obs-ext} is zero.
+\item If such a module exists, then the set of isomorphism classes
+of lifts is principal homogeneous under
+$\Ext^1_{\mathcal{O}_X}(
+\mathcal{F}, f^*\mathcal{J} \otimes_{\mathcal{O}_X} \mathcal{F})$.
+\item Given a lift $\mathcal{F}'$, the set of automorphisms of
+$\mathcal{F}'$ which pull back to $\text{id}_\mathcal{F}$ is canonically
+isomorphic to $\Ext^0_{\mathcal{O}_X}(
+\mathcal{F}, f^*\mathcal{J} \otimes_{\mathcal{O}_X} \mathcal{F})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from Lemma \ref{lemma-inf-obs-ext-rel}
+as we have seen above that $\mathcal{I} = f^*\mathcal{J}$.
+Part (2) follows from Lemma \ref{lemma-inf-ext-rel}.
+Part (3) follows from Lemma \ref{lemma-inf-map-rel}.
+\end{proof}
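+
+\medskip\noindent
+Concerning part (3), it is a standard observation that any endomorphism
+$\varphi'$ of $\mathcal{F}'$ pulling back to $\text{id}_\mathcal{F}$ is
+automatically an automorphism. Namely, write
+$\varphi' = \text{id} + u$; then $u$ has image contained in
+$\mathcal{I}\mathcal{F}'$ and kills $\mathcal{I}\mathcal{F}'$
+(as $u(af) = au(f) \in \mathcal{I}^2\mathcal{F}' = 0$ for $a$ a local
+section of $\mathcal{I}$), whence $u^2 = 0$ and
+$$
+(\text{id} + u)(\text{id} - u) = \text{id} - u^2 = \text{id}.
+$$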
+
+\begin{situation}
+\label{situation-ses-flat-thickenings}
+Let $f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$ be a morphism of
+ringed spaces. Consider a commutative diagram
+$$
+\xymatrix{
+(X'_1, \mathcal{O}'_1) \ar[r]_h \ar[d]_{f'_1} &
+(X'_2, \mathcal{O}'_2) \ar[r] \ar[d]_{f'_2} &
+(X'_3, \mathcal{O}'_3) \ar[d]_{f'_3} \\
+(S'_1, \mathcal{O}_{S'_1}) \ar[r] &
+(S'_2, \mathcal{O}_{S'_2}) \ar[r] &
+(S'_3, \mathcal{O}_{S'_3})
+}
+$$
+where (a) the top row is a short exact sequence of first order thickenings
+of $X$, (b) the lower row is a short exact sequence of first order
+thickenings of $S$, (c) each $f'_i$ restricts to $f$, (d) each pair
+$(f, f_i')$ is a strict morphism of thickenings, and (e) each $f'_i$ is flat.
+Finally, let $\mathcal{F}'_2$ be an $\mathcal{O}'_2$-module flat over
+$S'_2$ and set $\mathcal{F} = \mathcal{F}'_2|_X$. Let $\pi : X'_1 \to X$
+be the canonical splitting
+(Remark \ref{remark-short-exact-sequence-thickenings}).
+\end{situation}
+
+\begin{lemma}
+\label{lemma-verify-iv}
+In Situation \ref{situation-ses-flat-thickenings} the modules
+$\pi^*\mathcal{F}$ and $h^*\mathcal{F}'_2$ are $\mathcal{O}'_1$-modules
+flat over $S'_1$ restricting to $\mathcal{F}$ on $X$.
+Their difference (Lemma \ref{lemma-flat}) is an element
+$\theta$ of $\Ext^1_{\mathcal{O}_X}(
+\mathcal{F}, f^*\mathcal{J}_1 \otimes_{\mathcal{O}_X} \mathcal{F})$
+whose boundary in
+$\Ext^2_{\mathcal{O}_X}(
+\mathcal{F}, f^*\mathcal{J}_3 \otimes_{\mathcal{O}_X} \mathcal{F})$
+equals the obstruction (Lemma \ref{lemma-flat})
+to lifting $\mathcal{F}$ to an $\mathcal{O}'_3$-module flat over $S'_3$.
+\end{lemma}
+
+\begin{proof}
+Note that both $\pi^*\mathcal{F}$ and $h^*\mathcal{F}'_2$
+restrict to $\mathcal{F}$ on $X$ and that the kernels of
+$\pi^*\mathcal{F} \to \mathcal{F}$ and $h^*\mathcal{F}'_2 \to \mathcal{F}$
+are given by $f^*\mathcal{J}_1 \otimes_{\mathcal{O}_X} \mathcal{F}$.
+Hence flatness by Lemma \ref{lemma-deform-module}.
+Taking the boundary makes sense as the sequence of modules
+$$
+0 \to f^*\mathcal{J}_3 \otimes_{\mathcal{O}_X} \mathcal{F} \to
+f^*\mathcal{J}_2 \otimes_{\mathcal{O}_X} \mathcal{F} \to
+f^*\mathcal{J}_1 \otimes_{\mathcal{O}_X} \mathcal{F} \to 0
+$$
+is short exact due to the assumptions in
+Situation \ref{situation-ses-flat-thickenings}
+and the fact that $\mathcal{F}$ is flat over $S$.
+The statement on the obstruction class is a direct translation
+of the result of
+Remark \ref{remark-complex-thickenings-and-ses-modules}
+to this particular situation.
+\end{proof}
+
+
+
+
+
+
+\section{Deformations of ringed spaces and the naive cotangent complex}
+\label{section-deformations-ringed-spaces}
+
+\noindent
+In this section we use the naive cotangent complex to do a little bit
+of deformation theory. We start with a first order thickening
+$t : (S, \mathcal{O}_S) \to (S', \mathcal{O}_{S'})$ of ringed spaces.
+We denote $\mathcal{J} = \Ker(t^\sharp)$ and we
+identify the underlying topological spaces of $S$ and $S'$.
+Moreover we assume given a morphism of ringed spaces
+$f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$, an $\mathcal{O}_X$-module
+$\mathcal{G}$, and an $f$-map $c : \mathcal{J} \to \mathcal{G}$
+of sheaves of modules (Sheaves, Definition \ref{sheaves-definition-f-map}
+and Section \ref{sheaves-section-ringed-spaces-functoriality-modules}).
+In this section we ask ourselves whether we can find
+the question mark fitting into the following diagram
+\begin{equation}
+\label{equation-to-solve-ringed-spaces}
+\vcenter{
+\xymatrix{
+0 \ar[r] & \mathcal{G} \ar[r] & {?} \ar[r] & \mathcal{O}_X \ar[r] & 0 \\
+0 \ar[r] & \mathcal{J} \ar[u]^c \ar[r] & \mathcal{O}_{S'} \ar[u] \ar[r] &
+\mathcal{O}_S \ar[u] \ar[r] & 0
+}
+}
+\end{equation}
+(where the vertical arrows are $f$-maps)
+and moreover how unique the solution is (if it exists). More precisely,
+we look for a first order thickening
+$i : (X, \mathcal{O}_X) \to (X', \mathcal{O}_{X'})$
+and a morphism of thickenings $(f, f')$ as in
+(\ref{equation-morphism-thickenings})
+where $\Ker(i^\sharp)$ is identified with $\mathcal{G}$
+such that $(f')^\sharp$ induces the given map $c$.
+We will say $X'$ is a {\it solution} to
+(\ref{equation-to-solve-ringed-spaces}).
+
+\begin{lemma}
+\label{lemma-huge-diagram-ringed-spaces}
+Assume given a commutative diagram of morphisms of ringed spaces
+\begin{equation}
+\label{equation-huge-1}
+\vcenter{
+\xymatrix{
+& (X_2, \mathcal{O}_{X_2}) \ar[r]_{i_2} \ar[d]_{f_2} \ar[ddl]_g &
+(X'_2, \mathcal{O}_{X'_2}) \ar[d]^{f'_2} \\
+& (S_2, \mathcal{O}_{S_2}) \ar[r]^{t_2} \ar[ddl]|\hole &
+(S'_2, \mathcal{O}_{S'_2}) \ar[ddl] \\
+(X_1, \mathcal{O}_{X_1}) \ar[r]_{i_1} \ar[d]_{f_1} &
+(X'_1, \mathcal{O}_{X'_1}) \ar[d]^{f'_1} \\
+(S_1, \mathcal{O}_{S_1}) \ar[r]^{t_1} &
+(S'_1, \mathcal{O}_{S'_1})
+}
+}
+\end{equation}
+whose horizontal arrows are first order thickenings. Set
+$\mathcal{G}_j = \Ker(i_j^\sharp)$ and assume given
+a $g$-map $\nu : \mathcal{G}_1 \to \mathcal{G}_2$ of modules
+giving rise to the commutative diagram
+\begin{equation}
+\label{equation-huge-2}
+\vcenter{
+\xymatrix{
+& 0 \ar[r] & \mathcal{G}_2 \ar[r] &
+\mathcal{O}_{X'_2} \ar[r] &
+\mathcal{O}_{X_2} \ar[r] & 0 \\
+& 0 \ar[r]|\hole &
+\mathcal{J}_2 \ar[u]_{c_2} \ar[r] &
+\mathcal{O}_{S'_2} \ar[u] \ar[r]|\hole &
+\mathcal{O}_{S_2} \ar[u] \ar[r] & 0 \\
+0 \ar[r] & \mathcal{G}_1 \ar[ruu] \ar[r] &
+\mathcal{O}_{X'_1} \ar[r] &
+\mathcal{O}_{X_1} \ar[ruu] \ar[r] & 0 \\
+0 \ar[r] & \mathcal{J}_1 \ar[ruu]|\hole \ar[u]^{c_1} \ar[r] &
+\mathcal{O}_{S'_1} \ar[ruu]|\hole \ar[u] \ar[r] &
+\mathcal{O}_{S_1} \ar[ruu]|\hole \ar[u] \ar[r] & 0
+}
+}
+\end{equation}
+with front and back solutions to (\ref{equation-to-solve-ringed-spaces}).
+\begin{enumerate}
+\item There exist a canonical element in
+$\Ext^1_{\mathcal{O}_{X_2}}(Lg^*\NL_{X_1/S_1}, \mathcal{G}_2)$
+whose vanishing is a necessary and sufficient condition for the existence
+of a morphism of ringed spaces $X'_2 \to X'_1$ fitting into
+(\ref{equation-huge-1}) compatibly with $\nu$.
+\item If there exists a morphism $X'_2 \to X'_1$ fitting into
+(\ref{equation-huge-1}) compatibly with $\nu$ the set of all such morphisms
+is a principal homogeneous space under
+$$
+\Hom_{\mathcal{O}_{X_1}}(\Omega_{X_1/S_1}, g_*\mathcal{G}_2) =
+\Hom_{\mathcal{O}_{X_2}}(g^*\Omega_{X_1/S_1}, \mathcal{G}_2) =
+\Ext^0_{\mathcal{O}_{X_2}}(Lg^*\NL_{X_1/S_1}, \mathcal{G}_2).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The naive cotangent complex $\NL_{X_1/S_1}$ is defined in Modules, Definition
+\ref{modules-definition-cotangent-complex-morphism-ringed-topoi}.
+The equalities in the last statement of the lemma follow from
+the fact that $g^*$ is adjoint to $g_*$, the fact that
+$H^0(\NL_{X_1/S_1}) = \Omega_{X_1/S_1}$ (by construction of the
+naive cotangent complex) and the fact that $Lg^*$ is the left
+derived functor of $g^*$. Thus we will work with the groups
+$\Ext^k_{\mathcal{O}_{X_2}}(Lg^*\NL_{X_1/S_1}, \mathcal{G}_2)$,
+$k = 0, 1$ in the rest of the proof. We first argue that we can reduce
+to the case where the underlying topological spaces of all ringed
+spaces in the lemma are the same.
+
+\medskip\noindent
+To do this, observe that $g^{-1}\NL_{X_1/S_1}$ is equal to the naive
+cotangent complex of the homomorphism of sheaves of rings
+$g^{-1}f_1^{-1}\mathcal{O}_{S_1} \to g^{-1}\mathcal{O}_{X_1}$, see
+Modules, Lemma \ref{modules-lemma-pullback-NL}.
+Moreover, the degree $0$ term of $\NL_{X_1/S_1}$ is a flat
+$\mathcal{O}_{X_1}$-module, hence the canonical map
+$$
+Lg^*\NL_{X_1/S_1}
+\longrightarrow
+g^{-1}\NL_{X_1/S_1} \otimes_{g^{-1}\mathcal{O}_{X_1}} \mathcal{O}_{X_2}
+$$
+induces an isomorphism on cohomology sheaves in degrees $0$ and $-1$.
+Thus we may replace the Ext groups of the lemma with
+$$
+\Ext^k_{g^{-1}\mathcal{O}_{X_1}}(g^{-1}\NL_{X_1/S_1}, \mathcal{G}_2) =
+\Ext^k_{g^{-1}\mathcal{O}_{X_1}}(
+\NL_{g^{-1}\mathcal{O}_{X_1}/g^{-1}f_1^{-1}\mathcal{O}_{S_1}}, \mathcal{G}_2)
+$$
+The set of morphisms of ringed spaces $X'_2 \to X'_1$ fitting into
+(\ref{equation-huge-1}) compatibly with $\nu$
+is in bijection with
+the set of homomorphisms of $g^{-1}f_1^{-1}\mathcal{O}_{S'_1}$-algebras
+$g^{-1}\mathcal{O}_{X'_1} \to \mathcal{O}_{X'_2}$ which are compatible with
+$f^\sharp$ and $\nu$. In this way we see that we may assume we have a
+diagram (\ref{equation-huge-2}) of sheaves on $X$ and we are looking to
+find a homomorphism of sheaves of rings
+$\mathcal{O}_{X'_1} \to \mathcal{O}_{X'_2}$ fitting into it.
+
+\medskip\noindent
+In the rest of the proof of the lemma we assume
+all underlying topological spaces are the
+same, i.e., we have a diagram (\ref{equation-huge-2}) of sheaves on
+a space $X$ and we are looking for homomorphisms of sheaves of rings
+$\mathcal{O}_{X'_1} \to \mathcal{O}_{X'_2}$ fitting into it.
+As Ext groups we will use
+$\Ext^k_{\mathcal{O}_{X_1}}(
+\NL_{\mathcal{O}_{X_1}/\mathcal{O}_{S_1}}, \mathcal{G}_2)$, $k = 0, 1$.
+
+\medskip\noindent
+Step 1. Construction of the obstruction class. Consider the sheaf
+of sets
+$$
+\mathcal{E} = \mathcal{O}_{X'_1} \times_{\mathcal{O}_{X_2}} \mathcal{O}_{X'_2}
+$$
+This comes with a surjective map $\alpha : \mathcal{E} \to \mathcal{O}_{X_1}$
+and hence we can use $\NL(\alpha)$ instead of
+$\NL_{\mathcal{O}_{X_1}/\mathcal{O}_{S_1}}$, see
+Modules, Lemma \ref{modules-lemma-NL-up-to-qis}.
+Set
+$$
+\mathcal{I}' =
+\Ker(\mathcal{O}_{S'_1}[\mathcal{E}] \to \mathcal{O}_{X_1})
+\quad\text{and}\quad
+\mathcal{I} =
+\Ker(\mathcal{O}_{S_1}[\mathcal{E}] \to \mathcal{O}_{X_1})
+$$
+There is a surjection $\mathcal{I}' \to \mathcal{I}$ whose kernel
+is $\mathcal{J}_1\mathcal{O}_{S'_1}[\mathcal{E}]$.
+We obtain two homomorphisms of $\mathcal{O}_{S'_2}$-algebras
+$$
+a : \mathcal{O}_{S'_1}[\mathcal{E}] \to \mathcal{O}_{X'_1}
+\quad\text{and}\quad
+b : \mathcal{O}_{S'_1}[\mathcal{E}] \to \mathcal{O}_{X'_2}
+$$
+which induce maps $a|_{\mathcal{I}'} : \mathcal{I}' \to \mathcal{G}_1$ and
+$b|_{\mathcal{I}'} : \mathcal{I}' \to \mathcal{G}_2$. Both $a$ and $b$
+annihilate $(\mathcal{I}')^2$. Moreover $a$ and $b$ agree on
+$\mathcal{J}_1\mathcal{O}_{S'_1}[\mathcal{E}]$ as maps into $\mathcal{G}_2$
+because the left hand square of (\ref{equation-huge-2}) is commutative.
+Thus the difference
+$b|_{\mathcal{I}'} - \nu \circ a|_{\mathcal{I}'}$
+induces a well defined $\mathcal{O}_{X_1}$-linear map
+$$
+\xi : \mathcal{I}/\mathcal{I}^2 \longrightarrow \mathcal{G}_2
+$$
+which sends the class of a local section $f$ of $\mathcal{I}$ to
+$b(f') - \nu(a(f'))$ where $f'$ is a lift of $f$ to a local
+section of $\mathcal{I}'$. We let
+$[\xi] \in \Ext^1_{\mathcal{O}_{X_1}}(\NL(\alpha), \mathcal{G}_2)$
+be the image (see below).
+
+\medskip\noindent
+Step 2. Vanishing of $[\xi]$ is necessary. Let us write
+$\Omega = \Omega_{\mathcal{O}_{S_1}[\mathcal{E}]/\mathcal{O}_{S_1}}
+\otimes_{\mathcal{O}_{S_1}[\mathcal{E}]} \mathcal{O}_{X_1}$.
+Observe that $\NL(\alpha) = (\mathcal{I}/\mathcal{I}^2 \to \Omega)$
+fits into a distinguished triangle
+$$
+\Omega[0] \to
+\NL(\alpha) \to
+\mathcal{I}/\mathcal{I}^2[1] \to
+\Omega[1]
+$$
+Thus we see that $[\xi]$ is zero if and only if $\xi$
+is a composition $\mathcal{I}/\mathcal{I}^2 \to \Omega \to \mathcal{G}_2$
+for some map $\Omega \to \mathcal{G}_2$. Suppose there exists a
+homomorphism of sheaves of rings
+$\varphi : \mathcal{O}_{X'_1} \to \mathcal{O}_{X'_2}$ fitting into
+(\ref{equation-huge-2}). In this case consider the map
+$\mathcal{O}_{S'_1}[\mathcal{E}] \to \mathcal{G}_2$,
+$f' \mapsto b(f') - \varphi(a(f'))$. A calculation
+shows this annihilates $\mathcal{J}_1\mathcal{O}_{S'_1}[\mathcal{E}]$
+and induces a derivation $\mathcal{O}_{S_1}[\mathcal{E}] \to \mathcal{G}_2$.
+The resulting linear map $\Omega \to \mathcal{G}_2$ witnesses the
+fact that $[\xi] = 0$ in this case.
+
+\medskip\noindent
+Step 3. Vanishing of $[\xi]$ is sufficient. Let
+$\theta : \Omega \to \mathcal{G}_2$ be an $\mathcal{O}_{X_1}$-linear map
+such that $\xi$ is equal to
+$\theta \circ (\mathcal{I}/\mathcal{I}^2 \to \Omega)$.
+Then a calculation shows that
+$$
+b + \theta \circ d : \mathcal{O}_{S'_1}[\mathcal{E}] \to \mathcal{O}_{X'_2}
+$$
+annihilates $\mathcal{I}'$ and hence defines a map
+$\mathcal{O}_{X'_1} \to \mathcal{O}_{X'_2}$ fitting into
+(\ref{equation-huge-2}).
+
+\medskip\noindent
+Proof of (2) in the special case above. Omitted. Hint:
+This is exactly the same as the proof of (2) of Lemma \ref{lemma-huge-diagram}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-NL-represent-ext-class}
+Let $X$ be a topological space. Let $\mathcal{A} \to \mathcal{B}$ be a
+homomorphism of sheaves of rings. Let $\mathcal{G}$ be a $\mathcal{B}$-module.
+Let
+$\xi \in \Ext^1_\mathcal{B}(\NL_{\mathcal{B}/\mathcal{A}}, \mathcal{G})$.
+There exists a map of sheaves of sets $\alpha : \mathcal{E} \to \mathcal{B}$
+such that $\xi \in \Ext^1_\mathcal{B}(\NL(\alpha), \mathcal{G})$
+is the class of a map $\mathcal{I}/\mathcal{I}^2 \to \mathcal{G}$
+(see proof for notation).
+\end{lemma}
+
+\begin{proof}
+Recall that given $\alpha : \mathcal{E} \to \mathcal{B}$
+such that $\mathcal{A}[\mathcal{E}] \to \mathcal{B}$ is surjective
+with kernel $\mathcal{I}$ the complex
+$\NL(\alpha) = (\mathcal{I}/\mathcal{I}^2 \to
+\Omega_{\mathcal{A}[\mathcal{E}]/\mathcal{A}}
+\otimes_{\mathcal{A}[\mathcal{E}]} \mathcal{B})$ is canonically
+isomorphic to $\NL_{\mathcal{B}/\mathcal{A}}$, see
+Modules, Lemma \ref{modules-lemma-NL-up-to-qis}.
+Observe moreover, that
+$\Omega = \Omega_{\mathcal{A}[\mathcal{E}]/\mathcal{A}}
+\otimes_{\mathcal{A}[\mathcal{E}]} \mathcal{B}$ is the sheaf
+associated to the presheaf
+$U \mapsto \bigoplus_{e \in \mathcal{E}(U)} \mathcal{B}(U)$.
+In other words, $\Omega$ is the free $\mathcal{B}$-module on the
+sheaf of sets $\mathcal{E}$ and in particular there is a canonical
+map $\mathcal{E} \to \Omega$.
+
+\medskip\noindent
+Having said this, pick some $\mathcal{E}$ (for example
+$\mathcal{E} = \mathcal{B}$ as in the definition of the naive cotangent
+complex). The obstruction to writing $\xi$ as the class of a map
+$\mathcal{I}/\mathcal{I}^2 \to \mathcal{G}$ is an element in
+$\Ext^1_\mathcal{B}(\Omega, \mathcal{G})$. Say this is represented
+by the extension $0 \to \mathcal{G} \to \mathcal{H} \to \Omega \to 0$
+of $\mathcal{B}$-modules. Consider the sheaf of sets
+$\mathcal{E}' = \mathcal{E} \times_\Omega \mathcal{H}$
+which comes with an induced map $\alpha' : \mathcal{E}' \to \mathcal{B}$.
+Let $\mathcal{I}' = \Ker(\mathcal{A}[\mathcal{E}'] \to \mathcal{B})$
+and $\Omega' = \Omega_{\mathcal{A}[\mathcal{E}']/\mathcal{A}}
+\otimes_{\mathcal{A}[\mathcal{E}']} \mathcal{B}$.
+The pullback of $\xi$ under the quasi-isomorphism
+$\NL(\alpha') \to \NL(\alpha)$ maps to zero in
+$\Ext^1_\mathcal{B}(\Omega', \mathcal{G})$
+because the pullback of the extension $\mathcal{H}$
+by the map $\Omega' \to \Omega$ is split as $\Omega'$ is the free
+$\mathcal{B}$-module on the sheaf of sets $\mathcal{E}'$ and since
+by construction there is a commutative diagram
+$$
+\xymatrix{
+\mathcal{E}' \ar[r] \ar[d] & \mathcal{E} \ar[d] \\
+\mathcal{H} \ar[r] & \Omega
+}
+$$
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-choices-ringed-spaces}
+If there exists a solution to (\ref{equation-to-solve-ringed-spaces}),
+then the set of isomorphism classes of solutions is principal homogeneous
+under $\Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{G})$.
+\end{lemma}
+
+\begin{proof}
+We observe right away that given two solutions $X'_1$ and $X'_2$
+to (\ref{equation-to-solve-ringed-spaces}) we obtain by
+Lemma \ref{lemma-huge-diagram-ringed-spaces} an obstruction element
+$o(X'_1, X'_2) \in \Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{G})$
+to the existence of a map $X'_1 \to X'_2$. Clearly, this element
+is the obstruction to the existence of an isomorphism, hence separates
+the isomorphism classes. To finish the proof it therefore suffices to
+show that given a solution $X'$ and an element
+$\xi \in \Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{G})$
+we can find a second solution $X'_\xi$ such that
+$o(X', X'_\xi) = \xi$.
+
+\medskip\noindent
+Pick $\alpha : \mathcal{E} \to \mathcal{O}_X$ as in
+Lemma \ref{lemma-NL-represent-ext-class}
+for the class $\xi$. Consider the surjection
+$f^{-1}\mathcal{O}_S[\mathcal{E}] \to \mathcal{O}_X$
+with kernel $\mathcal{I}$ and corresponding naive cotangent complex
+$\NL(\alpha) = (\mathcal{I}/\mathcal{I}^2 \to
+\Omega_{f^{-1}\mathcal{O}_S[\mathcal{E}]/f^{-1}\mathcal{O}_S}
+\otimes_{f^{-1}\mathcal{O}_S[\mathcal{E}]} \mathcal{O}_X)$.
+By the lemma $\xi$ is the class of a morphism
+$\delta : \mathcal{I}/\mathcal{I}^2 \to \mathcal{G}$.
+After replacing $\mathcal{E}$ by
+$\mathcal{E} \times_{\mathcal{O}_X} \mathcal{O}_{X'}$ we may also assume
+that $\alpha$ factors through a map
+$\alpha' : \mathcal{E} \to \mathcal{O}_{X'}$.
+
+\medskip\noindent
+These choices determine an $f^{-1}\mathcal{O}_{S'}$-algebra map
+$\varphi : \mathcal{O}_{S'}[\mathcal{E}] \to \mathcal{O}_{X'}$.
+Let $\mathcal{I}' = \Ker(\varphi)$.
+Observe that $\varphi$ induces a map
+$\varphi|_{\mathcal{I}'} : \mathcal{I}' \to \mathcal{G}$
+and that $\mathcal{O}_{X'}$ is the pushout, as in the following
+diagram
+$$
+\xymatrix{
+0 \ar[r] & \mathcal{G} \ar[r] & \mathcal{O}_{X'} \ar[r] &
+\mathcal{O}_X \ar[r] & 0 \\
+0 \ar[r] & \mathcal{I}' \ar[u]^{\varphi|_{\mathcal{I}'}} \ar[r] &
+f^{-1}\mathcal{O}_{S'}[\mathcal{E}] \ar[u] \ar[r] &
+\mathcal{O}_X \ar[u]_{=} \ar[r] & 0
+}
+$$
+Let $\psi : \mathcal{I}' \to \mathcal{G}$ be the sum of the map
+$\varphi|_{\mathcal{I}'}$ and the composition
+$$
+\mathcal{I}' \to \mathcal{I}'/(\mathcal{I}')^2 \to
+\mathcal{I}/\mathcal{I}^2 \xrightarrow{\delta} \mathcal{G}.
+$$
+Then the pushout along $\psi$ is another ring extension
+$\mathcal{O}_{X'_\xi}$ fitting into a diagram as above.
+A calculation (omitted) shows that $o(X', X'_\xi) = \xi$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extensions-of-relative-ringed-spaces}
+Let $f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$ be a morphism of
+ringed spaces. Let $\mathcal{G}$ be an $\mathcal{O}_X$-module.
+The set of isomorphism classes of extensions of
+$f^{-1}\mathcal{O}_S$-algebras
+$$
+0 \to \mathcal{G} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0
+$$
+where $\mathcal{G}$ is an ideal of square zero\footnote{In other words,
+the set of isomorphism classes of first order thickenings
+$i : X \to X'$ over $S$ endowed with an isomorphism
+$\mathcal{G} \to \Ker(i^\sharp)$ of $\mathcal{O}_X$-modules.}
+is in canonical bijection with
+$\Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{G})$.
+\end{lemma}
+
+\begin{proof}
+To prove this we apply the previous results to the case where
+(\ref{equation-to-solve-ringed-spaces}) is given by the diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{G} \ar[r] &
+{?} \ar[r] &
+\mathcal{O}_X \ar[r] & 0 \\
+0 \ar[r] &
+0 \ar[u] \ar[r] &
+\mathcal{O}_S \ar[u] \ar[r]^{\text{id}} &
+\mathcal{O}_S \ar[u] \ar[r] & 0
+}
+$$
+Thus our lemma follows from Lemma \ref{lemma-choices-ringed-spaces}
+and the fact that there exists a solution, namely
+$\mathcal{G} \oplus \mathcal{O}_X$.
+(See remark below for a direct construction of the bijection.)
+\end{proof}
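+
+\medskip\noindent
+Explicitly, the solution $\mathcal{G} \oplus \mathcal{O}_X$ in the proof
+above carries the ring structure
+$$
+(g_1, a_1) \cdot (g_2, a_2) = (a_1 g_2 + a_2 g_1, a_1 a_2),
+$$
+i.e., it is the trivial square zero extension of $\mathcal{O}_X$ by
+$\mathcal{G}$; one checks that under the bijection of the lemma it
+corresponds to the zero element of
+$\Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{G})$.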
+
+\begin{remark}
+\label{remark-extensions-of-relative-ringed-spaces}
+Let $f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$ and $\mathcal{G}$
+be as in Lemma \ref{lemma-extensions-of-relative-ringed-spaces}.
+Consider an extension
+$0 \to \mathcal{G} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0$
+as in the lemma. We can choose a sheaf of sets $\mathcal{E}$
+and a commutative diagram
+$$
+\xymatrix{
+\mathcal{E} \ar[d]_{\alpha'} \ar[rd]^\alpha \\
+\mathcal{O}_{X'} \ar[r] & \mathcal{O}_X
+}
+$$
+such that $f^{-1}\mathcal{O}_S[\mathcal{E}] \to \mathcal{O}_X$
+is surjective with kernel $\mathcal{J}$.
+(For example you can take any sheaf of sets surjecting
+onto $\mathcal{O}_{X'}$.) Then
+$$
+\NL_{X/S} \cong \NL(\alpha) =
+\left(
+\mathcal{J}/\mathcal{J}^2
+\longrightarrow
+\Omega_{f^{-1}\mathcal{O}_S[\mathcal{E}]/f^{-1}\mathcal{O}_S}
+\otimes_{f^{-1}\mathcal{O}_S[\mathcal{E}]} \mathcal{O}_X\right)
+$$
+See Modules, Section \ref{modules-section-netherlander} and in particular
+Lemma \ref{modules-lemma-NL-up-to-qis}. Of course $\alpha'$ determines a map
+$f^{-1}\mathcal{O}_S[\mathcal{E}] \to \mathcal{O}_{X'}$
+which in turn determines a map
+$$
+\mathcal{J}/\mathcal{J}^2 \longrightarrow \mathcal{G}
+$$
+which in turn determines the element of
+$\Ext^1_{\mathcal{O}_X}(\NL(\alpha), \mathcal{G}) =
+\Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{G})$
+corresponding to $\mathcal{O}_{X'}$ by the bijection of the lemma.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-extensions-of-relative-ringed-spaces-functorial}
+Let $f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$ and
+$g : (Y, \mathcal{O}_Y) \to (X, \mathcal{O}_X)$ be morphisms
+of ringed spaces. Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
+Let $\mathcal{G}$ be an $\mathcal{O}_Y$-module. Let
+$c : \mathcal{F} \to \mathcal{G}$ be a $g$-map. Finally, consider
+\begin{enumerate}
+\item[(a)] $0 \to \mathcal{F} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0$
+an extension of $f^{-1}\mathcal{O}_S$-algebras
+corresponding to $\xi \in \Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{F})$, and
+\item[(b)] $0 \to \mathcal{G} \to \mathcal{O}_{Y'} \to \mathcal{O}_Y \to 0$
+an extension of $g^{-1}f^{-1}\mathcal{O}_S$-algebras
+corresponding to $\zeta \in \Ext^1_{\mathcal{O}_Y}(\NL_{Y/S}, \mathcal{G})$.
+\end{enumerate}
+See Lemma \ref{lemma-extensions-of-relative-ringed-spaces}.
+Then there is an $S$-morphism $g' : Y' \to X'$
+compatible with $g$ and $c$ if and only if $\xi$ and $\zeta$
+map to the same element of
+$\Ext^1_{\mathcal{O}_Y}(Lg^*\NL_{X/S}, \mathcal{G})$.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense as we have the maps
+$$
+\Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{F}) \to
+\Ext^1_{\mathcal{O}_Y}(Lg^*\NL_{X/S}, Lg^*\mathcal{F}) \to
+\Ext^1_{\mathcal{O}_Y}(Lg^*\NL_{X/S}, \mathcal{G})
+$$
+using the map $Lg^*\mathcal{F} \to g^*\mathcal{F} \xrightarrow{c} \mathcal{G}$
+and
+$$
+\Ext^1_{\mathcal{O}_Y}(\NL_{Y/S}, \mathcal{G}) \to
+\Ext^1_{\mathcal{O}_Y}(Lg^*\NL_{X/S}, \mathcal{G})
+$$
+using the map $Lg^*\NL_{X/S} \to \NL_{Y/S}$.
+The statement of the lemma can be deduced from
+Lemma \ref{lemma-huge-diagram-ringed-spaces} applied to the diagram
+$$
+\xymatrix{
+& 0 \ar[r] &
+\mathcal{G} \ar[r] &
+\mathcal{O}_{Y'} \ar[r] &
+\mathcal{O}_Y \ar[r] & 0 \\
+& 0 \ar[r]|\hole & 0 \ar[u] \ar[r] &
+\mathcal{O}_S \ar[u] \ar[r]|\hole &
+\mathcal{O}_S \ar[u] \ar[r] & 0 \\
+0 \ar[r] &
+\mathcal{F} \ar[ruu] \ar[r] &
+\mathcal{O}_{X'} \ar[r] &
+\mathcal{O}_X \ar[ruu] \ar[r] & 0 \\
+0 \ar[r] & 0 \ar[ruu]|\hole \ar[u] \ar[r] &
+\mathcal{O}_S \ar[ruu]|\hole \ar[u] \ar[r] &
+\mathcal{O}_S \ar[ruu]|\hole \ar[u] \ar[r] & 0
+}
+$$
+and a compatibility between the constructions in the proofs
+of Lemmas \ref{lemma-extensions-of-relative-ringed-spaces} and
+\ref{lemma-huge-diagram-ringed-spaces}
+whose statement and proof we omit. (See remark below for a direct argument.)
+\end{proof}
+
+\begin{remark}
+\label{remark-extensions-of-relative-ringed-spaces-functorial}
+Let $f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$,
+$g : (Y, \mathcal{O}_Y) \to (X, \mathcal{O}_X)$,
+$\mathcal{F}$,
+$\mathcal{G}$,
+$c : \mathcal{F} \to \mathcal{G}$,
+$0 \to \mathcal{F} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0$,
+$\xi \in \Ext^1_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{F})$,
+$0 \to \mathcal{G} \to \mathcal{O}_{Y'} \to \mathcal{O}_Y \to 0$,
+and $\zeta \in \Ext^1_{\mathcal{O}_Y}(\NL_{Y/S}, \mathcal{G})$
+be as in Lemma \ref{lemma-extensions-of-relative-ringed-spaces-functorial}.
+Using pushout along $c : g^{-1}\mathcal{F} \to \mathcal{G}$
+we can construct an extension
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{G} \ar[r] &
+\mathcal{O}'_1 \ar[r] &
+g^{-1}\mathcal{O}_X \ar[r] & 0 \\
+0 \ar[r] &
+g^{-1}\mathcal{F} \ar[u]^c \ar[r] &
+g^{-1}\mathcal{O}_{X'} \ar[u] \ar[r] &
+g^{-1}\mathcal{O}_X \ar@{=}[u] \ar[r] & 0
+}
+$$
+Using pullback along
+$g^\sharp : g^{-1}\mathcal{O}_X \to \mathcal{O}_Y$
+we can construct an extension
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{G} \ar[r] &
+\mathcal{O}_{Y'} \ar[r] &
+\mathcal{O}_Y \ar[r] & 0 \\
+0 \ar[r] &
+\mathcal{G} \ar@{=}[u] \ar[r] &
+\mathcal{O}'_2 \ar[u] \ar[r] &
+g^{-1}\mathcal{O}_X \ar[u] \ar[r] & 0
+}
+$$
+A diagram chase tells us that there exists an $S$-morphism $Y' \to X'$
+compatible with $g$ and $c$ if and only if $\mathcal{O}'_1$ is isomorphic
+to $\mathcal{O}'_2$ as $g^{-1}f^{-1}\mathcal{O}_S$-algebra extensions
+of $g^{-1}\mathcal{O}_X$ by $\mathcal{G}$. By
+Lemma \ref{lemma-extensions-of-relative-ringed-spaces}
+these extensions are classified by the LHS of
+$$
+\Ext^1_{g^{-1}\mathcal{O}_X}(
+\NL_{g^{-1}\mathcal{O}_X/g^{-1}f^{-1}\mathcal{O}_S}, \mathcal{G}) =
+\Ext^1_{\mathcal{O}_Y}(Lg^*\NL_{X/S}, \mathcal{G})
+$$
+Here the equality comes from tensor-hom adjunction and
+the equalities
+$$
+\NL_{g^{-1}\mathcal{O}_X/g^{-1}f^{-1}\mathcal{O}_S} = g^{-1}\NL_{X/S}
+\quad\text{and}\quad
+Lg^*\NL_{X/S} =
+g^{-1}\NL_{X/S} \otimes_{g^{-1}\mathcal{O}_X}^\mathbf{L} \mathcal{O}_Y
+$$
+For the first of these see
+Modules, Lemma \ref{modules-lemma-pullback-NL}; the second
+follows from the definition of derived pullback.
+Thus, in order to see that
+Lemma \ref{lemma-extensions-of-relative-ringed-spaces-functorial}
+is true, it suffices to show that $\mathcal{O}'_1$ corresponds
to the image of $\xi$ and that $\mathcal{O}'_2$ corresponds to
+the image of $\zeta$.
+The correspondence between $\xi$ and $\mathcal{O}'_1$
+is immediate from the construction of the class $\xi$ in
+Remark \ref{remark-extensions-of-relative-ringed-spaces}.
+For the correspondence between $\zeta$ and $\mathcal{O}'_2$,
+we first choose a commutative diagram
+$$
+\xymatrix{
+\mathcal{E} \ar[d]_{\beta'} \ar[rd]^\beta \\
+\mathcal{O}_{Y'} \ar[r] & \mathcal{O}_Y
+}
+$$
+such that $g^{-1}f^{-1}\mathcal{O}_S[\mathcal{E}] \to \mathcal{O}_Y$
+is surjective with kernel $\mathcal{K}$. Next choose a
+commutative diagram
+$$
+\xymatrix{
+\mathcal{E} \ar[d]_{\beta'} &
+\mathcal{E}' \ar[l]^\varphi \ar[d]_{\alpha'} \ar[rd]^\alpha \\
+\mathcal{O}_{Y'} &
+\mathcal{O}'_2 \ar[l] \ar[r] &
+g^{-1}\mathcal{O}_X
+}
+$$
+such that $g^{-1}f^{-1}\mathcal{O}_S[\mathcal{E}'] \to g^{-1}\mathcal{O}_X$
+is surjective with kernel $\mathcal{J}$. (For example just take
+$\mathcal{E}' = \mathcal{E} \amalg \mathcal{O}'_2$ as a sheaf of sets.)
+The map $\varphi$ induces a map of complexes $\NL(\alpha) \to \NL(\beta)$
+(notation as in Modules, Section \ref{modules-section-netherlander})
+and in particular
+$\bar\varphi : \mathcal{J}/\mathcal{J}^2 \to \mathcal{K}/\mathcal{K}^2$.
Then $\NL(\alpha) \cong \NL_{g^{-1}\mathcal{O}_X/g^{-1}f^{-1}\mathcal{O}_S}$
and $\NL(\beta) \cong \NL_{Y/S}$
+and the map of complexes $\NL(\alpha) \to \NL(\beta)$
+represents the map $Lg^*\NL_{X/S} \to \NL_{Y/S}$ used in the
+statement of Lemma \ref{lemma-extensions-of-relative-ringed-spaces-functorial}
+(see first part of its proof). Now $\zeta$ corresponds to the
+class of the map $\mathcal{K}/\mathcal{K}^2 \to \mathcal{G}$
+induced by $\beta'$, see
+Remark \ref{remark-extensions-of-relative-ringed-spaces}.
+Similarly, the extension $\mathcal{O}'_2$ corresponds to the map
+$\mathcal{J}/\mathcal{J}^2 \to \mathcal{G}$ induced by $\alpha'$.
+The commutative diagram above shows that this map is
+the composition of the map $\mathcal{K}/\mathcal{K}^2 \to \mathcal{G}$
+induced by $\beta'$ with the map
+$\bar\varphi : \mathcal{J}/\mathcal{J}^2 \to \mathcal{K}/\mathcal{K}^2$.
+This proves the compatibility we were looking for.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-parametrize-solutions-ringed-spaces}
+Let $t : (S, \mathcal{O}_S) \to (S', \mathcal{O}_{S'})$,
+$\mathcal{J} = \Ker(t^\sharp)$,
+$f : (X, \mathcal{O}_X) \to (S, \mathcal{O}_S)$, $\mathcal{G}$, and
+$c : \mathcal{J} \to \mathcal{G}$ be as in
+(\ref{equation-to-solve-ringed-spaces}).
+Denote $\xi \in \Ext^1_{\mathcal{O}_S}(\NL_{S/S'}, \mathcal{J})$
+the element corresponding to the extension $\mathcal{O}_{S'}$
+of $\mathcal{O}_S$ by $\mathcal{J}$ via
+Lemma \ref{lemma-extensions-of-relative-ringed-spaces}.
+The set of isomorphism classes of solutions is canonically bijective
+to the fibre of
+$$
+\Ext^1_{\mathcal{O}_X}(\NL_{X/S'}, \mathcal{G}) \to
+\Ext^1_{\mathcal{O}_X}(Lf^*\NL_{S/S'}, \mathcal{G})
+$$
+over the image of $\xi$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-extensions-of-relative-ringed-spaces}
+applied to $X \to S'$ and the $\mathcal{O}_X$-module $\mathcal{G}$
+we see that elements $\zeta$ of
+$\Ext^1_{\mathcal{O}_X}(\NL_{X/S'}, \mathcal{G})$
+parametrize extensions
+$0 \to \mathcal{G} \to \mathcal{O}_{X'} \to \mathcal{O}_X \to 0$
+of $f^{-1}\mathcal{O}_{S'}$-algebras. By
+Lemma \ref{lemma-extensions-of-relative-ringed-spaces-functorial} applied
+to $X \to S \to S'$ and $c : \mathcal{J} \to \mathcal{G}$
+we see that there is an $S'$-morphism
+$X' \to S'$ compatible with $c$ and $f : X \to S$ if and only if
+$\zeta$ maps to $\xi$. Of course this is the same thing as saying
+$\mathcal{O}_{X'}$ is a
+solution of (\ref{equation-to-solve-ringed-spaces}).
+\end{proof}
+
+\begin{remark}
+\label{remark-parametrize-solutions-ringed-spaces}
+In the situation of
+Lemma \ref{lemma-parametrize-solutions-ringed-spaces}
+we have maps of complexes
+$$
Lf^*\NL_{S/S'} \to \NL_{X/S'} \to \NL_{X/S}
+$$
These maps are close to forming a distinguished triangle, see
+Modules, Lemma \ref{modules-lemma-exact-sequence-NL-ringed-topoi}.
+If it were a distinguished triangle we would conclude
+that the image of $\xi$ in $\Ext^2_{\mathcal{O}_X}(\NL_{X/S}, \mathcal{G})$
+would be the obstruction to the existence of a solution to
+(\ref{equation-to-solve-ringed-spaces}).
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Deformations of schemes}
+\label{section-deformations-schemes}
+
+\noindent
+In this section we spell out what the results in
+Section \ref{section-deformations-ringed-spaces}
+mean for deformations of schemes.
+
+\begin{lemma}
+\label{lemma-deform}
+Let $S \subset S'$ be a first order thickening of schemes.
+Let $f : X \to S$ be a flat morphism of schemes.
+If there exists a flat morphism $f' : X' \to S'$ of schemes
+and an isomorphism $a : X \to X' \times_{S'} S$ over $S$, then
+\begin{enumerate}
+\item the set of isomorphism classes of pairs $(f' : X' \to S', a)$ is
+principal homogeneous under
+$\Ext^1_{\mathcal{O}_X}(\NL_{X/S}, f^*\mathcal{C}_{S/S'})$, and
\item the set of automorphisms $\varphi : X' \to X'$
+over $S'$ which reduce to the identity on $X' \times_{S'} S$
+is $\Ext^0_{\mathcal{O}_X}(\NL_{X/S}, f^*\mathcal{C}_{S/S'})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First we observe that thickenings of schemes as defined in
+More on Morphisms, Section \ref{more-morphisms-section-thickenings}
+are the same things as morphisms of schemes which
+are thickenings in the sense of
+Section \ref{section-thickenings-spaces}.
+We may think of $X$ as a closed subscheme of $X'$
+so that $(f, f') : (X \subset X') \to (S \subset S')$
+is a morphism of first order thickenings. Then we see
+from More on Morphisms, Lemma \ref{more-morphisms-lemma-deform}
+(or from the more general Lemma \ref{lemma-deform-module})
+that the ideal sheaf of $X$ in $X'$ is equal to $f^*\mathcal{C}_{S/S'}$.
+Hence we have a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] & f^*\mathcal{C}_{S/S'} \ar[r] &
+\mathcal{O}_{X'} \ar[r] &
+\mathcal{O}_X \ar[r] & 0 \\
+0 \ar[r] & \mathcal{C}_{S/S'} \ar[u] \ar[r] &
+\mathcal{O}_{S'} \ar[u] \ar[r] &
+\mathcal{O}_S \ar[u] \ar[r] & 0
+}
+$$
+where the vertical arrows are $f$-maps; please compare with
+(\ref{equation-to-solve-ringed-spaces}).
+Thus part (1) follows from
+Lemma \ref{lemma-choices-ringed-spaces}
+and part (2) from part (2) of
+Lemma \ref{lemma-huge-diagram-ringed-spaces}.
+(Note that $\NL_{X/S}$ as defined for a morphism of schemes in
+More on Morphisms, Section \ref{more-morphisms-section-netherlander}
+agrees with $\NL_{X/S}$ as used in
+Section \ref{section-deformations-ringed-spaces}.)
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Thickenings of ringed topoi}
+\label{section-thickenings-ringed-topoi}
+
+\noindent
+This section is the analogue of Section \ref{section-thickenings-spaces}
+for ringed topoi.
+In the following few sections we will use the following notions:
+\begin{enumerate}
+\item A sheaf of ideals $\mathcal{I} \subset \mathcal{O}'$ on
+a ringed topos $(\Sh(\mathcal{D}), \mathcal{O}')$ is {\it locally nilpotent}
+if any local section of $\mathcal{I}$ is locally nilpotent.
+\item A {\it thickening} of ringed topoi is a morphism
+$i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+of ringed topoi such that
+\begin{enumerate}
+\item $i_*$ is an equivalence $\Sh(\mathcal{C}) \to \Sh(\mathcal{D})$,
+\item the map $i^\sharp : \mathcal{O}' \to i_*\mathcal{O}$
+is surjective, and
+\item the kernel of $i^\sharp$ is a locally nilpotent sheaf of ideals.
+\end{enumerate}
+\item A {\it first order thickening} of ringed topoi is a thickening
+$i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+of ringed topoi such that $\Ker(i^\sharp)$ has square zero.
+\item It is clear how to define
+{\it morphisms of thickenings of ringed topoi},
+{\it morphisms of thickenings of ringed topoi over a base ringed topos}, etc.
+\end{enumerate}
+If
+$i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+is a thickening of ringed topoi then we identify the underlying topoi
+and think of $\mathcal{O}$, $\mathcal{O}'$, and
+$\mathcal{I} = \Ker(i^\sharp)$ as sheaves on $\mathcal{C}$.
+We obtain a short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}' \to \mathcal{O} \to 0
+$$
+of $\mathcal{O}'$-modules. By
+Modules on Sites, Lemma \ref{sites-modules-lemma-i-star-equivalence}
+the category of $\mathcal{O}$-modules is equivalent to the category
+of $\mathcal{O}'$-modules annihilated by $\mathcal{I}$. In particular,
+if $i$ is a first order thickening, then
$\mathcal{I}$ is an $\mathcal{O}$-module.
+
+\begin{situation}
+\label{situation-morphism-thickenings-ringed-topoi}
+A morphism of thickenings of ringed topoi $(f, f')$
+is given by a commutative diagram
+\begin{equation}
+\label{equation-morphism-thickenings-ringed-topoi}
+\vcenter{
+\xymatrix{
+(\Sh(\mathcal{C}), \mathcal{O}) \ar[r]_i \ar[d]_f &
+(\Sh(\mathcal{D}), \mathcal{O}') \ar[d]^{f'} \\
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B}) \ar[r]^t &
+(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})
+}
+}
+\end{equation}
+of ringed topoi whose horizontal arrows are thickenings. In this
+situation we set
+$\mathcal{I} = \Ker(i^\sharp) \subset \mathcal{O}'$ and
+$\mathcal{J} = \Ker(t^\sharp) \subset \mathcal{O}_{\mathcal{B}'}$.
+As $f = f'$ on underlying topoi we will identify
+the pullback functors $f^{-1}$ and $(f')^{-1}$.
+Observe that
+$(f')^\sharp : f^{-1}\mathcal{O}_{\mathcal{B}'} \to \mathcal{O}'$
+induces in particular a map $f^{-1}\mathcal{J} \to \mathcal{I}$
+and therefore a map of $\mathcal{O}'$-modules
+$$
+(f')^*\mathcal{J} \longrightarrow \mathcal{I}
+$$
+If $i$ and $t$ are first order thickenings, then
+$(f')^*\mathcal{J} = f^*\mathcal{J}$ and the map above becomes a
+map $f^*\mathcal{J} \to \mathcal{I}$.
+\end{situation}
+
+\begin{definition}
+\label{definition-strict-morphism-thickenings-ringed-topoi}
+In Situation \ref{situation-morphism-thickenings-ringed-topoi}
+we say that $(f, f')$ is a {\it strict morphism of thickenings}
+if the map $(f')^*\mathcal{J} \longrightarrow \mathcal{I}$ is surjective.
+\end{definition}
+
+
+
+
+
+
+
+
+\section{Modules on first order thickenings of ringed topoi}
+\label{section-modules-thickenings-ringed-topoi}
+
+\noindent
+In this section we discuss some preliminaries to the deformation theory
+of modules. Let
$i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a first order thickening of ringed topoi. We will freely use the notation
+introduced in Section \ref{section-thickenings-ringed-topoi},
in particular we will identify the underlying topoi.
+In this section we consider short exact sequences
+\begin{equation}
+\label{equation-extension-ringed-topoi}
+0 \to \mathcal{K} \to \mathcal{F}' \to \mathcal{F} \to 0
+\end{equation}
+of $\mathcal{O}'$-modules, where $\mathcal{F}$, $\mathcal{K}$ are
+$\mathcal{O}$-modules and $\mathcal{F}'$ is an $\mathcal{O}'$-module.
+In this situation we have a canonical $\mathcal{O}$-module map
+$$
+c_{\mathcal{F}'} :
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F}
+\longrightarrow
+\mathcal{K}
+$$
+where $\mathcal{I} = \Ker(i^\sharp)$.
+Namely, given local sections $f$ of $\mathcal{I}$ and $s$
+of $\mathcal{F}$ we set $c_{\mathcal{F}'}(f \otimes s) = fs'$
+where $s'$ is a local section of $\mathcal{F}'$ lifting $s$.
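This is well defined; we spell out the standard verification for convenience.
First, $fs'$ is a local section of $\mathcal{K}$, since its image in
$\mathcal{F}$ equals $\bar{f}s = 0$ because $f$ maps to zero in $\mathcal{O}$.
Second, if $s''$ is another local section of $\mathcal{F}'$ lifting $s$, then
$s' - s''$ is a local section of $\mathcal{K}$ and hence
$$
fs' - fs'' = f(s' - s'') \in \mathcal{I}\mathcal{K} = 0
$$
because $\mathcal{K}$ is an $\mathcal{O}$-module.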
+
+\begin{lemma}
+\label{lemma-inf-map-ringed-topoi}
+Let $i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a first order thickening of ringed topoi. Assume given
+extensions
+$$
+0 \to \mathcal{K} \to \mathcal{F}' \to \mathcal{F} \to 0
+\quad\text{and}\quad
+0 \to \mathcal{L} \to \mathcal{G}' \to \mathcal{G} \to 0
+$$
+as in (\ref{equation-extension-ringed-topoi})
+and maps $\varphi : \mathcal{F} \to \mathcal{G}$ and
+$\psi : \mathcal{K} \to \mathcal{L}$.
+\begin{enumerate}
+\item If there exists an $\mathcal{O}'$-module
+map $\varphi' : \mathcal{F}' \to \mathcal{G}'$ compatible with $\varphi$
+and $\psi$, then the diagram
+$$
+\xymatrix{
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F}
+\ar[r]_-{c_{\mathcal{F}'}} \ar[d]_{1 \otimes \varphi} &
+\mathcal{K} \ar[d]^\psi \\
+\mathcal{I} \otimes_\mathcal{O} \mathcal{G}
+\ar[r]^-{c_{\mathcal{G}'}} &
+\mathcal{L}
+}
+$$
+is commutative.
+\item The set of $\mathcal{O}'$-module
+maps $\varphi' : \mathcal{F}' \to \mathcal{G}'$ compatible with $\varphi$
+and $\psi$ is, if nonempty, a principal homogeneous space under
+$\Hom_\mathcal{O}(\mathcal{F}, \mathcal{L})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is immediate from the description of the maps.
+For (2), if $\varphi'$ and $\varphi''$ are two maps
+$\mathcal{F}' \to \mathcal{G}'$ compatible with $\varphi$
+and $\psi$, then $\varphi' - \varphi''$ factors as
+$$
+\mathcal{F}' \to \mathcal{F} \to \mathcal{L} \to \mathcal{G}'
+$$
+The map in the middle comes from a unique element of
+$\Hom_\mathcal{O}(\mathcal{F}, \mathcal{L})$ by
+Modules on Sites, Lemma \ref{sites-modules-lemma-i-star-equivalence}.
+Conversely, given an element $\alpha$ of this group we can add the
+composition (as displayed above with $\alpha$ in the middle)
+to $\varphi'$. Some details omitted.
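For instance, the factorization displayed above may be verified as follows:
$\varphi' - \varphi''$ kills $\mathcal{K}$ because both maps restrict to
$\psi$ on $\mathcal{K}$, hence it factors through
$\mathcal{F}' \to \mathcal{F}$; and its image lies in
$\mathcal{L} \subset \mathcal{G}'$ because both maps lift $\varphi$, so that
$\varphi' - \varphi''$ composed with $\mathcal{G}' \to \mathcal{G}$ is zero.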
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-map-ringed-topoi}
+Let $i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a first order thickening of ringed topoi. Assume given extensions
+$$
+0 \to \mathcal{K} \to \mathcal{F}' \to \mathcal{F} \to 0
+\quad\text{and}\quad
+0 \to \mathcal{L} \to \mathcal{G}' \to \mathcal{G} \to 0
+$$
+as in (\ref{equation-extension-ringed-topoi})
+and maps $\varphi : \mathcal{F} \to \mathcal{G}$ and
+$\psi : \mathcal{K} \to \mathcal{L}$. Assume the diagram
+$$
+\xymatrix{
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F}
+\ar[r]_-{c_{\mathcal{F}'}} \ar[d]_{1 \otimes \varphi} &
+\mathcal{K} \ar[d]^\psi \\
+\mathcal{I} \otimes_\mathcal{O} \mathcal{G}
+\ar[r]^-{c_{\mathcal{G}'}} &
+\mathcal{L}
+}
+$$
+is commutative. Then there exists an element
+$$
+o(\varphi, \psi) \in
+\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{L})
+$$
+whose vanishing is a necessary and sufficient condition for the existence
+of a map $\varphi' : \mathcal{F}' \to \mathcal{G}'$ compatible with
+$\varphi$ and $\psi$.
+\end{lemma}
+
+\begin{proof}
+We can construct explicitly an extension
+$$
+0 \to \mathcal{L} \to \mathcal{H} \to \mathcal{F} \to 0
+$$
+by taking $\mathcal{H}$ to be the cohomology of the complex
+$$
+\mathcal{K}
\xrightarrow{(1, - \psi)}
\mathcal{F}' \oplus \mathcal{G}' \xrightarrow{(\varphi, 1)}
+\mathcal{G}
+$$
+in the middle (with obvious notation). A calculation with local sections
+using the assumption that the diagram of the lemma commutes
+shows that $\mathcal{H}$ is annihilated by $\mathcal{I}$. Hence
+$\mathcal{H}$ defines a class in
+$$
+\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{L})
+\subset
+\Ext^1_{\mathcal{O}'}(\mathcal{F}, \mathcal{L})
+$$
+Finally, the class of $\mathcal{H}$ is the difference of the pushout
+of the extension $\mathcal{F}'$ via $\psi$ and the pullback
+of the extension $\mathcal{G}'$ via $\varphi$ (calculations omitted).
+Thus the vanishing of the class of $\mathcal{H}$ is equivalent to the
+existence of a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K} \ar[r] \ar[d]_{\psi} &
+\mathcal{F}' \ar[r] \ar[d]_{\varphi'} &
+\mathcal{F} \ar[r] \ar[d]_\varphi & 0\\
+0 \ar[r] &
+\mathcal{L} \ar[r] &
+\mathcal{G}' \ar[r] &
+\mathcal{G} \ar[r] & 0
+}
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-ext-ringed-topoi}
+Let $i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a first order thickening of ringed topoi. Assume given
+$\mathcal{O}$-modules $\mathcal{F}$, $\mathcal{K}$
+and an $\mathcal{O}$-linear map
+$c : \mathcal{I} \otimes_\mathcal{O} \mathcal{F} \to \mathcal{K}$.
+If there exists a sequence (\ref{equation-extension-ringed-topoi}) with
+$c_{\mathcal{F}'} = c$ then the set of isomorphism classes of these
+extensions is principal homogeneous under
+$\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{K})$.
+\end{lemma}
+
+\begin{proof}
+Assume given extensions
+$$
+0 \to \mathcal{K} \to \mathcal{F}'_1 \to \mathcal{F} \to 0
+\quad\text{and}\quad
+0 \to \mathcal{K} \to \mathcal{F}'_2 \to \mathcal{F} \to 0
+$$
+with $c_{\mathcal{F}'_1} = c_{\mathcal{F}'_2} = c$. Then the difference
+(in the extension group, see
+Homology, Section \ref{homology-section-extensions})
+is an extension
+$$
+0 \to \mathcal{K} \to \mathcal{E} \to \mathcal{F} \to 0
+$$
+where $\mathcal{E}$ is annihilated by $\mathcal{I}$ (local computation
+omitted). Hence the sequence is an extension of $\mathcal{O}$-modules,
+see Modules on Sites, Lemma \ref{sites-modules-lemma-i-star-equivalence}.
+Conversely, given such an extension $\mathcal{E}$ we can add the extension
+$\mathcal{E}$ to the $\mathcal{O}'$-extension $\mathcal{F}'$ without
+affecting the map $c_{\mathcal{F}'}$. Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-ext-ringed-topoi}
+Let $i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a first order thickening of ringed topoi. Assume given
+$\mathcal{O}$-modules $\mathcal{F}$, $\mathcal{K}$
+and an $\mathcal{O}$-linear map
+$c : \mathcal{I} \otimes_\mathcal{O} \mathcal{F} \to \mathcal{K}$.
+Then there exists an element
+$$
+o(\mathcal{F}, \mathcal{K}, c) \in
+\Ext^2_\mathcal{O}(\mathcal{F}, \mathcal{K})
+$$
+whose vanishing is a necessary and sufficient condition for the existence
+of a sequence (\ref{equation-extension-ringed-topoi})
+with $c_{\mathcal{F}'} = c$.
+\end{lemma}
+
+\begin{proof}
+We first show that if $\mathcal{K}$ is an injective $\mathcal{O}$-module,
+then there does exist a sequence (\ref{equation-extension-ringed-topoi}) with
+$c_{\mathcal{F}'} = c$. To do this, choose a flat
+$\mathcal{O}'$-module $\mathcal{H}'$ and a surjection
+$\mathcal{H}' \to \mathcal{F}$
+(Modules on Sites, Lemma \ref{sites-modules-lemma-module-quotient-flat}).
+Let $\mathcal{J} \subset \mathcal{H}'$ be the kernel. Since $\mathcal{H}'$
+is flat we have
+$$
+\mathcal{I} \otimes_{\mathcal{O}'} \mathcal{H}' =
+\mathcal{I}\mathcal{H}'
+\subset \mathcal{J} \subset \mathcal{H}'
+$$
+Observe that the map
+$$
+\mathcal{I}\mathcal{H}' =
+\mathcal{I} \otimes_{\mathcal{O}'} \mathcal{H}'
+\longrightarrow
+\mathcal{I} \otimes_{\mathcal{O}'} \mathcal{F} =
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F}
+$$
+annihilates $\mathcal{I}\mathcal{J}$. Namely, if $f$ is a local section
of $\mathcal{I}$ and $s$ is a local section of $\mathcal{H}'$, then
+$fs$ is mapped to $f \otimes \overline{s}$ where $\overline{s}$ is
+the image of $s$ in $\mathcal{F}$. Thus we obtain
+$$
+\xymatrix{
+\mathcal{I}\mathcal{H}'/\mathcal{I}\mathcal{J}
+\ar@{^{(}->}[r] \ar[d] &
+\mathcal{J}/\mathcal{I}\mathcal{J} \ar@{..>}[d]_\gamma \\
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F} \ar[r]^-c &
+\mathcal{K}
+}
+$$
+a diagram of $\mathcal{O}$-modules. If $\mathcal{K}$ is injective
+as an $\mathcal{O}$-module, then we obtain the dotted arrow.
+Denote $\gamma' : \mathcal{J} \to \mathcal{K}$ the composition
+of $\gamma$ with $\mathcal{J} \to \mathcal{J}/\mathcal{I}\mathcal{J}$.
+A local calculation shows the pushout
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{J} \ar[r] \ar[d]_{\gamma'} &
+\mathcal{H}' \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar@{=}[d] &
+0 \\
+0 \ar[r] &
+\mathcal{K} \ar[r] &
+\mathcal{F}' \ar[r] &
+\mathcal{F} \ar[r] &
+0
+}
+$$
+is a solution to the problem posed by the lemma.
+
+\medskip\noindent
+General case. Choose an embedding $\mathcal{K} \subset \mathcal{K}'$
+with $\mathcal{K}'$ an injective $\mathcal{O}$-module. Let $\mathcal{Q}$
+be the quotient, so that we have an exact sequence
+$$
+0 \to \mathcal{K} \to \mathcal{K}' \to \mathcal{Q} \to 0
+$$
Denote by
$c' : \mathcal{I} \otimes_\mathcal{O} \mathcal{F} \to \mathcal{K}'$
the composition of $c$ with the inclusion $\mathcal{K} \to \mathcal{K}'$.
By the paragraph above there exists a sequence
+$$
+0 \to \mathcal{K}' \to \mathcal{E}' \to \mathcal{F} \to 0
+$$
+as in (\ref{equation-extension-ringed-topoi}) with $c_{\mathcal{E}'} = c'$.
+Note that $c'$ composed with the map $\mathcal{K}' \to \mathcal{Q}$
+is zero, hence the pushout of $\mathcal{E}'$ by
+$\mathcal{K}' \to \mathcal{Q}$ is an extension
+$$
+0 \to \mathcal{Q} \to \mathcal{D}' \to \mathcal{F} \to 0
+$$
+as in (\ref{equation-extension-ringed-topoi}) with $c_{\mathcal{D}'} = 0$.
+This means exactly that $\mathcal{D}'$ is annihilated by
$\mathcal{I}$, in other words, $\mathcal{D}'$ is an extension
+of $\mathcal{O}$-modules, i.e., defines an element
+$$
+o(\mathcal{F}, \mathcal{K}, c) \in
+\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{Q}) =
+\Ext^2_\mathcal{O}(\mathcal{F}, \mathcal{K})
+$$
+(the equality holds by the long exact cohomology sequence associated
+to the exact sequence above and the vanishing of higher ext groups
+into the injective module $\mathcal{K}'$). If
+$o(\mathcal{F}, \mathcal{K}, c) = 0$, then we can choose a splitting
+$s : \mathcal{F} \to \mathcal{D}'$ and we can set
+$$
+\mathcal{F}' = \Ker(\mathcal{E}' \to \mathcal{D}'/s(\mathcal{F}))
+$$
+so that we obtain the following diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K} \ar[r] \ar[d] &
+\mathcal{F}' \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar@{=}[d] &
+0 \\
+0 \ar[r] &
+\mathcal{K}' \ar[r] &
+\mathcal{E}' \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+with exact rows which shows that $c_{\mathcal{F}'} = c$. Conversely, if
+$\mathcal{F}'$ exists, then the pushout of $\mathcal{F}'$ by the map
+$\mathcal{K} \to \mathcal{K}'$ is isomorphic to $\mathcal{E}'$ by
+Lemma \ref{lemma-inf-ext-ringed-topoi} and the vanishing of higher ext groups
+into the injective module $\mathcal{K}'$. This gives a diagram
+as above, which implies that $\mathcal{D}'$ is split as an extension, i.e.,
+the class $o(\mathcal{F}, \mathcal{K}, c)$ is zero.
+\end{proof}
+
+\begin{remark}
+\label{remark-trivial-thickening-ringed-topoi}
+Let $(\Sh(\mathcal{C}), \mathcal{O})$ be a ringed topos. A first order
+thickening $i : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{D}), \mathcal{O}')$ is said
+to be {\it trivial} if there exists a morphism of ringed topoi
+$\pi : (\Sh(\mathcal{D}), \mathcal{O}') \to (\Sh(\mathcal{C}), \mathcal{O})$
+which is a left inverse to $i$. The choice of such a morphism
+$\pi$ is called a {\it trivialization} of the first order thickening.
+Given $\pi$ we obtain a splitting
+\begin{equation}
+\label{equation-splitting-ringed-topoi}
+\mathcal{O}' = \mathcal{O} \oplus \mathcal{I}
+\end{equation}
+as sheaves of algebras on $\mathcal{C}$ by using $\pi^\sharp$
+to split the surjection $\mathcal{O}' \to \mathcal{O}$.
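Explicitly, under this splitting the algebra structure on
$\mathcal{O} \oplus \mathcal{I}$ is given on local sections by
$$
(a, x) \cdot (b, y) = (ab, ay + bx)
$$
where we use that $\mathcal{I}^2 = 0$ as $i$ is a first order thickening.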
+Conversely, such a splitting determines
+a morphism $\pi$. The category of trivialized first order thickenings of
+$(\Sh(\mathcal{C}), \mathcal{O})$ is equivalent to the category of
+$\mathcal{O}$-modules.
+\end{remark}
+
+\begin{remark}
+\label{remark-trivial-extension-ringed-topoi}
+Let $i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a trivial first order thickening of ringed topoi
+and let $\pi : (\Sh(\mathcal{D}), \mathcal{O}') \to
+(\Sh(\mathcal{C}), \mathcal{O})$ be a trivialization. Then given any triple
+$(\mathcal{F}, \mathcal{K}, c)$ consisting of a pair of
+$\mathcal{O}$-modules and a map
+$c : \mathcal{I} \otimes_\mathcal{O} \mathcal{F} \to \mathcal{K}$
+we may set
+$$
+\mathcal{F}'_{c, triv} = \mathcal{F} \oplus \mathcal{K}
+$$
+and use the splitting (\ref{equation-splitting-ringed-topoi})
+associated to $\pi$ and the map $c$ to define the $\mathcal{O}'$-module
+structure and obtain an extension (\ref{equation-extension-ringed-topoi}).
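Explicitly, in terms of the splitting
$\mathcal{O}' = \mathcal{O} \oplus \mathcal{I}$ the
$\mathcal{O}'$-module structure on
$\mathcal{F}'_{c, triv} = \mathcal{F} \oplus \mathcal{K}$
is given on local sections by
$$
(a, x) \cdot (s, k) = (as, ak + c(x \otimes s))
$$
One checks, using the $\mathcal{O}$-linearity of $c$ and
$\mathcal{I}^2 = 0$, that this indeed defines an
$\mathcal{O}'$-module structure.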
+We will call $\mathcal{F}'_{c, triv}$ the {\it trivial extension} of
+$\mathcal{F}$ by $\mathcal{K}$ corresponding
+to $c$ and the trivialization $\pi$. Given any extension
+$\mathcal{F}'$ as in (\ref{equation-extension-ringed-topoi}) we can use
+$\pi^\sharp : \mathcal{O} \to \mathcal{O}'$ to think of $\mathcal{F}'$
+as an $\mathcal{O}$-module extension, hence a class $\xi_{\mathcal{F}'}$
+in $\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{K})$.
+Lemma \ref{lemma-inf-ext-ringed-topoi} assures that
+$\mathcal{F}' \mapsto \xi_{\mathcal{F}'}$
+induces a bijection
+$$
+\left\{
+\begin{matrix}
+\text{isomorphism classes of extensions}\\
+\mathcal{F}'\text{ as in (\ref{equation-extension-ringed-topoi}) with }
+c = c_{\mathcal{F}'}
+\end{matrix}
+\right\}
+\longrightarrow
+\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{K})
+$$
+Moreover, the trivial extension $\mathcal{F}'_{c, triv}$ maps to the zero class.
+\end{remark}
+
+\begin{remark}
+\label{remark-extension-functorial-ringed-topoi}
+Let $(\Sh(\mathcal{C}), \mathcal{O})$ be a ringed topos. Let
+$(\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}_i), \mathcal{O}'_i)$,
+$i = 1, 2$ be first order thickenings with ideal sheaves $\mathcal{I}_i$.
+Let $h : (\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{D}_2), \mathcal{O}'_2)$
+be a morphism of first order thickenings of $(\Sh(\mathcal{C}), \mathcal{O})$.
+Picture
+$$
+\xymatrix{
+& (\Sh(\mathcal{C}), \mathcal{O}) \ar[ld] \ar[rd] & \\
+(\Sh(\mathcal{D}_1), \mathcal{O}'_1) \ar[rr]^h & &
+(\Sh(\mathcal{D}_2), \mathcal{O}'_2)
+}
+$$
+Observe that $h^\sharp : \mathcal{O}'_2 \to \mathcal{O}'_1$
+in particular induces an $\mathcal{O}$-module map
+$\mathcal{I}_2 \to \mathcal{I}_1$.
+Let $\mathcal{F}$ be an $\mathcal{O}$-module.
+Let $(\mathcal{K}_i, c_i)$, $i = 1, 2$ be a pair
+consisting of an $\mathcal{O}$-module $\mathcal{K}_i$ and a map
+$c_i : \mathcal{I}_i \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{K}_i$. Assume furthermore given a map
+of $\mathcal{O}$-modules $\mathcal{K}_2 \to \mathcal{K}_1$
+such that
+$$
+\xymatrix{
+\mathcal{I}_2 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]_-{c_2} \ar[d] &
+\mathcal{K}_2 \ar[d] \\
+\mathcal{I}_1 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]^-{c_1} &
+\mathcal{K}_1
+}
+$$
+is commutative. Then there is a canonical functoriality
+$$
+\left\{
+\begin{matrix}
+\mathcal{F}'_2\text{ as in (\ref{equation-extension-ringed-topoi}) with }\\
+c_2 = c_{\mathcal{F}'_2}\text{ and }\mathcal{K} = \mathcal{K}_2
+\end{matrix}
+\right\}
+\longrightarrow
+\left\{
+\begin{matrix}
+\mathcal{F}'_1\text{ as in (\ref{equation-extension-ringed-topoi}) with }\\
+c_1 = c_{\mathcal{F}'_1}\text{ and }\mathcal{K} = \mathcal{K}_1
+\end{matrix}
+\right\}
+$$
+Namely, thinking of all sheaves $\mathcal{O}$, $\mathcal{O}'_i$,
$\mathcal{F}$, $\mathcal{K}_i$, etc.\ as sheaves on $\mathcal{C}$,
given $\mathcal{F}'_2$ we set $\mathcal{F}'_1$ equal to the
pushout along $\mathcal{K}_2 \to \mathcal{K}_1$, i.e., fitting into
the following diagram of extensions
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K}_2 \ar[r] \ar[d] &
+\mathcal{F}'_2 \ar[r] \ar[d] &
+\mathcal{F} \ar@{=}[d] \ar[r] & 0 \\
+0 \ar[r] &
+\mathcal{K}_1 \ar[r] &
+\mathcal{F}'_1 \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+We omit the construction of the $\mathcal{O}'_1$-module structure
+on the pushout (this uses the commutativity of the diagram
+involving $c_1$ and $c_2$).
+\end{remark}
+
+\begin{remark}
+\label{remark-trivial-extension-functorial-ringed-topoi}
+Let $(\Sh(\mathcal{C}), \mathcal{O})$,
+$(\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}_i), \mathcal{O}'_i)$,
+$\mathcal{I}_i$, and $h : (\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{D}_2), \mathcal{O}'_2)$ be as in
+Remark \ref{remark-extension-functorial-ringed-topoi}.
+Assume that we are given trivializations
+$\pi_i : (\Sh(\mathcal{D}_i), \mathcal{O}'_i) \to
+(\Sh(\mathcal{C}), \mathcal{O})$ such that
$\pi_1 = \pi_2 \circ h$. In other words, assume $h$ is a morphism
+of trivialized first order thickenings of $(\Sh(\mathcal{C}), \mathcal{O})$.
+Let $(\mathcal{K}_i, c_i)$, $i = 1, 2$ be a pair consisting of an
+$\mathcal{O}$-module $\mathcal{K}_i$ and a map
+$c_i : \mathcal{I}_i \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{K}_i$. Assume furthermore given a map
+of $\mathcal{O}$-modules $\mathcal{K}_2 \to \mathcal{K}_1$
+such that
+$$
+\xymatrix{
+\mathcal{I}_2 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]_-{c_2} \ar[d] &
+\mathcal{K}_2 \ar[d] \\
+\mathcal{I}_1 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]^-{c_1} &
+\mathcal{K}_1
+}
+$$
+is commutative. In this situation the construction of
+Remark \ref{remark-trivial-extension-ringed-topoi} induces
+a commutative diagram
+$$
+\xymatrix{
+\{\mathcal{F}'_2\text{ as in (\ref{equation-extension-ringed-topoi}) with }
+c_2 = c_{\mathcal{F}'_2}\text{ and }\mathcal{K} = \mathcal{K}_2\}
+\ar[d] \ar[rr] & &
+\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{K}_2) \ar[d] \\
+\{\mathcal{F}'_1\text{ as in (\ref{equation-extension-ringed-topoi}) with }
+c_1 = c_{\mathcal{F}'_1}\text{ and }\mathcal{K} = \mathcal{K}_1\}
+\ar[rr] & &
+\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{K}_1)
+}
+$$
+where the vertical map on the right is given by functoriality of $\Ext$
+and the map $\mathcal{K}_2 \to \mathcal{K}_1$ and the vertical map on the left
+is the one from Remark \ref{remark-extension-functorial-ringed-topoi}.
+\end{remark}
+
+\begin{remark}
+\label{remark-obstruction-extension-functorial-ringed-topoi}
+Let $(\Sh(\mathcal{C}), \mathcal{O})$,
+$(\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}_i), \mathcal{O}'_i)$,
+$\mathcal{I}_i$, and $h : (\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{D}_2), \mathcal{O}'_2)$ be as in
+Remark \ref{remark-extension-functorial-ringed-topoi}.
+Observe that $h^\sharp : \mathcal{O}'_2 \to \mathcal{O}'_1$
+in particular induces an $\mathcal{O}$-module map
+$\mathcal{I}_2 \to \mathcal{I}_1$.
+Let $\mathcal{F}$ be an $\mathcal{O}$-module.
+Let $(\mathcal{K}_i, c_i)$, $i = 1, 2$ be a pair
+consisting of an $\mathcal{O}$-module $\mathcal{K}_i$ and a map
+$c_i : \mathcal{I}_i \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{K}_i$. Assume furthermore given a map
+of $\mathcal{O}$-modules $\mathcal{K}_2 \to \mathcal{K}_1$
+such that
+$$
+\xymatrix{
+\mathcal{I}_2 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]_-{c_2} \ar[d] &
+\mathcal{K}_2 \ar[d] \\
+\mathcal{I}_1 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]^-{c_1} &
+\mathcal{K}_1
+}
+$$
+is commutative. Then we {\bf claim} the map
+$$
+\Ext^2_\mathcal{O}(\mathcal{F}, \mathcal{K}_2)
+\longrightarrow
+\Ext^2_\mathcal{O}(\mathcal{F}, \mathcal{K}_1)
+$$
+sends $o(\mathcal{F}, \mathcal{K}_2, c_2)$ to
+$o(\mathcal{F}, \mathcal{K}_1, c_1)$.
+
+\medskip\noindent
+To prove this claim choose an embedding
+$j_2 : \mathcal{K}_2 \to \mathcal{K}_2'$
+where $\mathcal{K}_2'$ is an injective $\mathcal{O}$-module.
+As in the proof of Lemma \ref{lemma-inf-obs-ext-ringed-topoi}
we can choose an extension of $\mathcal{O}'_2$-modules
+$$
+0 \to \mathcal{K}_2' \to \mathcal{E}_2 \to \mathcal{F} \to 0
+$$
+such that $c_{\mathcal{E}_2} = j_2 \circ c_2$.
+The proof of Lemma \ref{lemma-inf-obs-ext-ringed-topoi} constructs
+$o(\mathcal{F}, \mathcal{K}_2, c_2)$
+as the Yoneda extension class (in the sense of
+Derived Categories, Section \ref{derived-section-ext})
+of the exact sequence of $\mathcal{O}$-modules
+$$
+0 \to
+\mathcal{K}_2 \to \mathcal{K}_2' \to
+\mathcal{E}_2/\mathcal{K}_2 \to
+\mathcal{F} \to 0
+$$
+Let $\mathcal{K}_1'$ be the cokernel of
+$\mathcal{K}_2 \to \mathcal{K}_1 \oplus \mathcal{K}_2'$.
+There is an injection $j_1 : \mathcal{K}_1 \to \mathcal{K}_1'$
+and a map $\mathcal{K}_2' \to \mathcal{K}_1'$ forming
+a commutative square. We form the pushout:
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K}_2' \ar[r] \ar[d] &
+\mathcal{E}_2 \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+\mathcal{K}_1' \ar[r] &
+\mathcal{E}_1 \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+There is a canonical $\mathcal{O}'_1$-module structure on
+$\mathcal{E}_1$ and for this structure we have
+$c_{\mathcal{E}_1} = j_1 \circ c_1$ (this uses the commutativity
+of the diagram involving $c_1$ and $c_2$ above).
+The procedure of Lemma \ref{lemma-inf-obs-ext-ringed-topoi}
+tells us that $o(\mathcal{F}, \mathcal{K}_1, c_1)$
+is the Yoneda extension class of the exact sequence
+of $\mathcal{O}$-modules
+$$
+0 \to
+\mathcal{K}_1 \to
+\mathcal{K}_1' \to
+\mathcal{E}_1/\mathcal{K}_1 \to
+\mathcal{F} \to 0
+$$
+Since we have maps of exact sequences
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K}_2 \ar[d] \ar[r] &
+\mathcal{K}_2' \ar[d] \ar[r] &
+\mathcal{E}_2/\mathcal{K}_2 \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar@{=}[d] &
+0 \\
+0 \ar[r] &
+\mathcal{K}_1 \ar[r] &
+\mathcal{K}_1' \ar[r] &
+\mathcal{E}_1/\mathcal{K}_1 \ar[r] &
+\mathcal{F} \ar[r] &
+0
+}
+$$
+we conclude that the claim is true.
+\end{remark}
+
+\begin{remark}
+\label{remark-short-exact-sequence-thickenings-ringed-topoi}
+Let $(\Sh(\mathcal{C}), \mathcal{O})$ be a ringed topos.
+We define a sequence of morphisms of first order thickenings
+$$
+(\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{D}_2), \mathcal{O}'_2) \to
+(\Sh(\mathcal{D}_3), \mathcal{O}'_3)
+$$
+of $(\Sh(\mathcal{C}), \mathcal{O})$ to be a {\it complex}
+if the corresponding maps between
+the ideal sheaves $\mathcal{I}_i$
+give a complex of $\mathcal{O}$-modules
+$\mathcal{I}_3 \to \mathcal{I}_2 \to \mathcal{I}_1$
+(i.e., the composition is zero). In this case the composition
+$(\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{D}_3), \mathcal{O}'_3)$ factors through
+$(\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{D}_3), \mathcal{O}'_3)$, i.e.,
+the first order thickening
+$(\Sh(\mathcal{D}_1), \mathcal{O}'_1)$ of
+$(\Sh(\mathcal{C}), \mathcal{O})$ is trivial and comes with
+a canonical trivialization
+$\pi : (\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{C}), \mathcal{O})$.
+
+\medskip\noindent
+We say a sequence of morphisms of first order thickenings
+$$
+(\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{D}_2), \mathcal{O}'_2) \to
+(\Sh(\mathcal{D}_3), \mathcal{O}'_3)
+$$
+of $(\Sh(\mathcal{C}), \mathcal{O})$ is {\it a short exact sequence} if the
+corresponding maps between ideal sheaves form a short exact sequence
+$$
+0 \to \mathcal{I}_3 \to \mathcal{I}_2 \to \mathcal{I}_1 \to 0
+$$
+of $\mathcal{O}$-modules.
+\end{remark}
+
+\begin{remark}
+\label{remark-complex-thickenings-and-ses-modules-ringed-topoi}
+Let $(\Sh(\mathcal{C}), \mathcal{O})$ be a ringed topos.
+Let $\mathcal{F}$ be an $\mathcal{O}$-module. Let
+$$
+(\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{D}_2), \mathcal{O}'_2) \to
+(\Sh(\mathcal{D}_3), \mathcal{O}'_3)
+$$
+be a complex of first order thickenings of $(\Sh(\mathcal{C}), \mathcal{O})$, see
+Remark \ref{remark-short-exact-sequence-thickenings-ringed-topoi}.
+Let $(\mathcal{K}_i, c_i)$, $i = 1, 2, 3$ be pairs consisting of
+an $\mathcal{O}$-module $\mathcal{K}_i$ and a map
+$c_i : \mathcal{I}_i \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{K}_i$. Assume given a short exact sequence
+of $\mathcal{O}$-modules
+$$
+0 \to \mathcal{K}_3 \to \mathcal{K}_2 \to \mathcal{K}_1 \to 0
+$$
+such that
+$$
+\vcenter{
+\xymatrix{
+\mathcal{I}_2 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]_-{c_2} \ar[d] &
+\mathcal{K}_2 \ar[d] \\
+\mathcal{I}_1 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]^-{c_1} &
+\mathcal{K}_1
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+\mathcal{I}_3 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]_-{c_3} \ar[d] &
+\mathcal{K}_3 \ar[d] \\
+\mathcal{I}_2 \otimes_\mathcal{O} \mathcal{F}
+\ar[r]^-{c_2} &
+\mathcal{K}_2
+}
+}
+$$
+are commutative. Finally, assume given an extension
+$$
+0 \to \mathcal{K}_2 \to \mathcal{F}'_2 \to \mathcal{F} \to 0
+$$
+as in (\ref{equation-extension-ringed-topoi})
+with $\mathcal{K} = \mathcal{K}_2$
+of $\mathcal{O}'_2$-modules with $c_{\mathcal{F}'_2} = c_2$.
+In this situation we can apply the functoriality of
+Remark \ref{remark-extension-functorial-ringed-topoi}
+to obtain an extension $\mathcal{F}'_1$ of $\mathcal{O}'_1$-modules
+(we'll describe $\mathcal{F}'_1$ in this special case below). By
+Remark \ref{remark-trivial-extension-ringed-topoi}
+using the canonical splitting
+$\pi : (\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{C}), \mathcal{O})$ of
+Remark \ref{remark-short-exact-sequence-thickenings-ringed-topoi}
+we obtain
+$\xi_{\mathcal{F}'_1} \in
+\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{K}_1)$.
+Finally, we have the obstruction
+$$
+o(\mathcal{F}, \mathcal{K}_3, c_3) \in
+\Ext^2_\mathcal{O}(\mathcal{F}, \mathcal{K}_3)
+$$
+see Lemma \ref{lemma-inf-obs-ext-ringed-topoi}.
+In this situation we {\bf claim} that the canonical map
+$$
+\partial :
+\Ext^1_\mathcal{O}(\mathcal{F}, \mathcal{K}_1)
+\longrightarrow
+\Ext^2_\mathcal{O}(\mathcal{F}, \mathcal{K}_3)
+$$
+coming from the short exact sequence
+$0 \to \mathcal{K}_3 \to \mathcal{K}_2 \to \mathcal{K}_1 \to 0$
+sends $\xi_{\mathcal{F}'_1}$
+to the obstruction class $o(\mathcal{F}, \mathcal{K}_3, c_3)$.
+
+\medskip\noindent
+To prove this claim choose an embedding $j : \mathcal{K}_3 \to \mathcal{K}$
+where $\mathcal{K}$ is an injective $\mathcal{O}$-module.
+We can lift $j$ to a map $j' : \mathcal{K}_2 \to \mathcal{K}$.
+Set $\mathcal{E}'_2 = j'_*\mathcal{F}'_2$ equal to the pushout
+of $\mathcal{F}'_2$ by $j'$ so that $c_{\mathcal{E}'_2} = j' \circ c_2$.
+Picture:
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K}_2 \ar[r] \ar[d]_{j'} &
+\mathcal{F}'_2 \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+\mathcal{K} \ar[r] &
+\mathcal{E}'_2 \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+Set $\mathcal{E}'_3 = \mathcal{E}'_2$ but viewed as an
+$\mathcal{O}'_3$-module via $\mathcal{O}'_3 \to \mathcal{O}'_2$.
+Then $c_{\mathcal{E}'_3} = j \circ c_3$.
+The proof of Lemma \ref{lemma-inf-obs-ext-ringed-topoi} constructs
+$o(\mathcal{F}, \mathcal{K}_3, c_3)$
+as the boundary of the class of the extension of $\mathcal{O}$-modules
+$$
+0 \to
+\mathcal{K}/\mathcal{K}_3 \to
+\mathcal{E}'_3/\mathcal{K}_3 \to
+\mathcal{F} \to 0
+$$
+On the other hand, note that $\mathcal{F}'_1 = \mathcal{F}'_2/\mathcal{K}_3$
+hence the class $\xi_{\mathcal{F}'_1}$ is the class
+of the extension
+$$
+0 \to \mathcal{K}_2/\mathcal{K}_3 \to \mathcal{F}'_2/\mathcal{K}_3
+\to \mathcal{F} \to 0
+$$
+seen as a sequence of $\mathcal{O}$-modules using $\pi^\sharp$
+where $\pi : (\Sh(\mathcal{D}_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{C}), \mathcal{O})$ is the canonical splitting.
+Thus finally, the claim follows from the fact that we have
+a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{K}_2/\mathcal{K}_3 \ar[r] \ar[d] &
+\mathcal{F}'_2/\mathcal{K}_3 \ar[r] \ar[d] &
+\mathcal{F} \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+\mathcal{K}/\mathcal{K}_3 \ar[r] &
+\mathcal{E}'_3/\mathcal{K}_3 \ar[r] &
+\mathcal{F} \ar[r] & 0
+}
+$$
+which is $\mathcal{O}$-linear (with the $\mathcal{O}$-module
+structures given above).
+\end{remark}
+
+
+
+
+
+
+
+\section{Infinitesimal deformations of modules on ringed topoi}
+\label{section-deformation-modules-ringed-topoi}
+
+\noindent
+Let $i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a first order thickening of ringed topoi. We freely use the notation
+introduced in Section \ref{section-thickenings-ringed-topoi}.
+Let $\mathcal{F}'$ be an $\mathcal{O}'$-module
+and set $\mathcal{F} = i^*\mathcal{F}'$.
+In this situation we have a short exact sequence
+$$
+0 \to \mathcal{I}\mathcal{F}' \to \mathcal{F}' \to \mathcal{F} \to 0
+$$
+of $\mathcal{O}'$-modules. Since $\mathcal{I}^2 = 0$ the
+$\mathcal{O}'$-module structure on $\mathcal{I}\mathcal{F}'$
+comes from a unique $\mathcal{O}$-module structure.
+Thus the sequence above is an extension as in
+(\ref{equation-extension-ringed-topoi}).
+As a special case, if $\mathcal{F}' = \mathcal{O}'$ we have
+$i^*\mathcal{O}' = \mathcal{O}$ and
+$\mathcal{I}\mathcal{O}' = \mathcal{I}$ and we recover the
+sequence of structure sheaves
+$$
+0 \to \mathcal{I} \to \mathcal{O}' \to \mathcal{O} \to 0
+$$
+
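+\medskip\noindent
+For intuition the reader may keep in mind the affine analogue of the
+setup above (a sketch only, not used in the arguments): if
+$A' \to A = A'/I$ is a surjection of rings with $I^2 = 0$
+and $M'$ is an $A'$-module with $M = M'/IM'$, then
+$$
+0 \to IM' \to M' \to M \to 0
+$$
+is a short exact sequence of $A'$-modules and the $A'$-module
+structure on $IM'$ factors through $A$ because
+$I \cdot IM' \subset I^2M' = 0$. Taking $M' = A'$ recovers
+the sequence $0 \to I \to A' \to A \to 0$.
+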
+\begin{lemma}
+\label{lemma-inf-map-special-ringed-topoi}
+Let $i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a first order thickening of ringed topoi.
+Let $\mathcal{F}'$, $\mathcal{G}'$ be $\mathcal{O}'$-modules.
+Set $\mathcal{F} = i^*\mathcal{F}'$ and $\mathcal{G} = i^*\mathcal{G}'$.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}$-linear map.
+The set of lifts of $\varphi$ to an $\mathcal{O}'$-linear map
+$\varphi' : \mathcal{F}' \to \mathcal{G}'$ is, if nonempty, a principal
+homogeneous space under
+$\Hom_\mathcal{O}(\mathcal{F}, \mathcal{I}\mathcal{G}')$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-inf-map-ringed-topoi} but we also
+give a direct proof. We have short exact sequences of modules
+$$
+0 \to \mathcal{I} \to \mathcal{O}' \to \mathcal{O} \to 0
+\quad\text{and}\quad
+0 \to \mathcal{I}\mathcal{G}' \to \mathcal{G}' \to \mathcal{G} \to 0
+$$
+and similarly for $\mathcal{F}'$.
+Since $\mathcal{I}$ has square zero the $\mathcal{O}'$-module
+structure on $\mathcal{I}$ and $\mathcal{I}\mathcal{G}'$ comes from
+a unique $\mathcal{O}$-module structure. It follows that
+$$
+\Hom_{\mathcal{O}'}(\mathcal{F}', \mathcal{I}\mathcal{G}') =
+\Hom_\mathcal{O}(\mathcal{F}, \mathcal{I}\mathcal{G}')
+\quad\text{and}\quad
+\Hom_{\mathcal{O}'}(\mathcal{F}', \mathcal{G}) =
+\Hom_\mathcal{O}(\mathcal{F}, \mathcal{G})
+$$
+The lemma now follows from the exact sequence
+$$
+0 \to \Hom_{\mathcal{O}'}(\mathcal{F}', \mathcal{I}\mathcal{G}') \to
+\Hom_{\mathcal{O}'}(\mathcal{F}', \mathcal{G}') \to
+\Hom_{\mathcal{O}'}(\mathcal{F}', \mathcal{G})
+$$
+see Homology, Lemma \ref{homology-lemma-check-exactness}.
+\end{proof}
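+
+\medskip\noindent
+In the affine model of this lemma -- a surjection $A' \to A = A'/I$
+of rings with $I^2 = 0$, modules $M', N'$ over $A'$ with reductions
+$M, N$ over $A$ -- the principal homogeneous space structure is
+transparent (a sketch, not used later). If
+$\varphi'_1, \varphi'_2 : M' \to N'$ are two $A'$-linear lifts of
+$\varphi : M \to N$, then $\psi = \varphi'_1 - \varphi'_2$
+takes values in $IN'$ (as $\varphi'_1$ and $\varphi'_2$ agree
+modulo $IN'$) and kills $IM'$ (as
+$\psi(xm') = x\psi(m') \in I \cdot IN' = 0$ for $x \in I$),
+hence factors through an $A$-linear map $M \to IN'$. Conversely,
+adding such a map to a lift produces another lift.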
+
+\begin{lemma}
+\label{lemma-deform-module-ringed-topoi}
+Let $(f, f')$ be a morphism of first order thickenings of ringed topoi
+as in Situation \ref{situation-morphism-thickenings-ringed-topoi}.
+Let $\mathcal{F}'$ be an $\mathcal{O}'$-module
+and set $\mathcal{F} = i^*\mathcal{F}'$.
+Assume that $\mathcal{F}$ is flat over $\mathcal{O}_\mathcal{B}$
+and that $(f, f')$ is a strict morphism of thickenings
+(Definition \ref{definition-strict-morphism-thickenings-ringed-topoi}).
+Then the following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}'$ is flat over $\mathcal{O}_{\mathcal{B}'}$, and
+\item the canonical map
+$f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{I}\mathcal{F}'$
+is an isomorphism.
+\end{enumerate}
+Moreover, in this case the maps
+$$
+f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{I}\mathcal{F}'
+$$
+are isomorphisms.
+\end{lemma}
+
+\begin{proof}
+The map $f^*\mathcal{J} \to \mathcal{I}$ is surjective
+as $(f, f')$ is a strict morphism of thickenings.
+Hence the final statement is a consequence of (2).
+
+\medskip\noindent
+Proof of the equivalence of (1) and (2). By definition flatness over
+$\mathcal{O}_\mathcal{B}$ means flatness over $f^{-1}\mathcal{O}_\mathcal{B}$.
+Similarly, flatness over $\mathcal{O}_{\mathcal{B}'}$ means flatness
+over $(f')^{-1}\mathcal{O}_{\mathcal{B}'}$.
+Note that the strictness of $(f, f')$ and the assumption that
+$\mathcal{F} = i^*\mathcal{F}'$ imply that
+$$
+\mathcal{F} = \mathcal{F}'/(f^{-1}\mathcal{J})\mathcal{F}'
+$$
+as sheaves on $\mathcal{C}$. Moreover, observe that
+$f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F} =
+f^{-1}\mathcal{J} \otimes_{f^{-1}\mathcal{O}_\mathcal{B}} \mathcal{F}$.
+Hence the equivalence of (1) and (2) follows from
+Modules on Sites, Lemma \ref{sites-modules-lemma-flat-over-thickening}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-deform-fp-module-ringed-topoi}
+Let $(f, f')$ be a morphism of first order thickenings of ringed topoi
+as in Situation \ref{situation-morphism-thickenings-ringed-topoi}.
+Let $\mathcal{F}'$ be an $\mathcal{O}'$-module
+and set $\mathcal{F} = i^*\mathcal{F}'$.
+Assume that $\mathcal{F}'$ is flat over $\mathcal{O}_{\mathcal{B}'}$
+and that $(f, f')$ is a strict morphism of thickenings.
+Then the following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}'$ is an $\mathcal{O}'$-module of finite presentation, and
+\item $\mathcal{F}$ is an $\mathcal{O}$-module of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) follows from
+Modules on Sites, Lemma \ref{sites-modules-lemma-local-pullback}.
+For the converse, assume $\mathcal{F}$ of finite presentation.
+We may and do assume that $\mathcal{C} = \mathcal{C}'$.
+By Lemma \ref{lemma-deform-module-ringed-topoi} we have a short exact sequence
+$$
+0 \to \mathcal{I} \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{F}' \to \mathcal{F} \to 0
+$$
+Let $U$ be an object of $\mathcal{C}$ such that $\mathcal{F}|_U$ has a
+presentation
+$$
+\mathcal{O}_U^{\oplus m} \to \mathcal{O}_U^{\oplus n} \to \mathcal{F}|_U \to 0
+$$
+After replacing $U$ by the members of a covering we may assume the
+map $\mathcal{O}_U^{\oplus n} \to \mathcal{F}|_U$ lifts to a map
+$(\mathcal{O}'_U)^{\oplus n} \to \mathcal{F}'|_U$. The induced map
+$\mathcal{I}^{\oplus n} \to \mathcal{I} \otimes_\mathcal{O} \mathcal{F}$ is
+surjective by right exactness of $\otimes$. Thus after replacing $U$
+by the members of a covering we can find a lift
+$(\mathcal{O}'_U)^{\oplus m} \to (\mathcal{O}'_U)^{\oplus n}$
+of the given map $\mathcal{O}_U^{\oplus m} \to \mathcal{O}_U^{\oplus n}$
+such that
+$$
+(\mathcal{O}'_U)^{\oplus m} \to (\mathcal{O}'_U)^{\oplus n} \to
+\mathcal{F}'|_U \to 0
+$$
+is a complex. Using right exactness of $\otimes$ once more it is seen
+that this complex is exact.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-map-rel-ringed-topoi}
+Let $(f, f')$ be a morphism of first order thickenings as in
+Situation \ref{situation-morphism-thickenings-ringed-topoi}.
+Let $\mathcal{F}'$, $\mathcal{G}'$ be $\mathcal{O}'$-modules and set
+$\mathcal{F} = i^*\mathcal{F}'$ and $\mathcal{G} = i^*\mathcal{G}'$.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}$-linear map.
+Assume that $\mathcal{G}'$ is flat over $\mathcal{O}_{\mathcal{B}'}$ and that
+$(f, f')$ is a strict morphism of thickenings.
+The set of lifts of $\varphi$ to an $\mathcal{O}'$-linear map
+$\varphi' : \mathcal{F}' \to \mathcal{G}'$ is, if nonempty, a principal
+homogeneous space under
+$$
+\Hom_\mathcal{O}(\mathcal{F},
+\mathcal{G} \otimes_\mathcal{O} f^*\mathcal{J})
+$$
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-inf-map-special-ringed-topoi} and
+\ref{lemma-deform-module-ringed-topoi}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-map-special-ringed-topoi}
+Let $i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{D}), \mathcal{O}')$
+be a first order thickening of ringed topoi.
+Let $\mathcal{F}'$, $\mathcal{G}'$ be $\mathcal{O}'$-modules and set
+$\mathcal{F} = i^*\mathcal{F}'$ and $\mathcal{G} = i^*\mathcal{G}'$.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}$-linear map.
+There exists an element
+$$
+o(\varphi) \in
+\Ext^1_\mathcal{O}(Li^*\mathcal{F}', \mathcal{I}\mathcal{G}')
+$$
+whose vanishing is a necessary and sufficient condition for the
+existence of a lift of $\varphi$ to an $\mathcal{O}'$-linear map
+$\varphi' : \mathcal{F}' \to \mathcal{G}'$.
+\end{lemma}
+
+\begin{proof}
+It is clear from the proof of Lemma \ref{lemma-inf-map-special-ringed-topoi}
+that the vanishing of the boundary of $\varphi$ via the map
+$$
+\Hom_\mathcal{O}(\mathcal{F}, \mathcal{G}) =
+\Hom_{\mathcal{O}'}(\mathcal{F}', \mathcal{G}) \longrightarrow
+\Ext^1_{\mathcal{O}'}(\mathcal{F}', \mathcal{I}\mathcal{G}')
+$$
+is a necessary and sufficient condition for the existence of a lift. We
+conclude since
+$$
+\Ext^1_{\mathcal{O}'}(\mathcal{F}', \mathcal{I}\mathcal{G}') =
+\Ext^1_\mathcal{O}(Li^*\mathcal{F}', \mathcal{I}\mathcal{G}')
+$$
+by the adjointness of $i_* = Ri_*$ and $Li^*$ on the derived category
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-adjoint}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-map-rel-ringed-topoi}
+Let $(f, f')$ be a morphism of first order thickenings as in
+Situation \ref{situation-morphism-thickenings-ringed-topoi}.
+Let $\mathcal{F}'$, $\mathcal{G}'$ be $\mathcal{O}'$-modules and set
+$\mathcal{F} = i^*\mathcal{F}'$ and $\mathcal{G} = i^*\mathcal{G}'$.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}$-linear map.
+Assume that $\mathcal{F}'$ and $\mathcal{G}'$ are flat over
+$\mathcal{O}_{\mathcal{B}'}$ and
+that $(f, f')$ is a strict morphism of thickenings. There exists an element
+$$
+o(\varphi) \in
+\Ext^1_\mathcal{O}(\mathcal{F},
+\mathcal{G} \otimes_\mathcal{O} f^*\mathcal{J})
+$$
+whose vanishing is a necessary and sufficient condition for the
+existence of a lift of $\varphi$ to an $\mathcal{O}'$-linear map
+$\varphi' : \mathcal{F}' \to \mathcal{G}'$.
+\end{lemma}
+
+\begin{proof}[First proof]
+This follows from Lemma \ref{lemma-inf-obs-map-special-ringed-topoi}
+as we claim that under the assumptions of the lemma we have
+$$
+\Ext^1_\mathcal{O}(Li^*\mathcal{F}', \mathcal{I}\mathcal{G}') =
+\Ext^1_\mathcal{O}(\mathcal{F},
+\mathcal{G} \otimes_\mathcal{O} f^*\mathcal{J})
+$$
+Namely, we have
+$\mathcal{I}\mathcal{G}' =
+\mathcal{G} \otimes_\mathcal{O} f^*\mathcal{J}$
+by Lemma \ref{lemma-deform-module-ringed-topoi}.
+On the other hand, observe that
+$$
+H^{-1}(Li^*\mathcal{F}') =
+\text{Tor}_1^{\mathcal{O}'}(\mathcal{F}', \mathcal{O})
+$$
+(local computation omitted). Using the short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}' \to \mathcal{O} \to 0
+$$
+we see that this $\text{Tor}_1$ is computed by the kernel of the map
+$\mathcal{I} \otimes_\mathcal{O} \mathcal{F} \to \mathcal{I}\mathcal{F}'$
+which is zero by the final assertion of
+Lemma \ref{lemma-deform-module-ringed-topoi}.
+Thus $\tau_{\geq -1}Li^*\mathcal{F}' = \mathcal{F}$.
+On the other hand, we have
+$$
+\Ext^1_\mathcal{O}(Li^*\mathcal{F}',
+\mathcal{I}\mathcal{G}') =
+\Ext^1_\mathcal{O}(\tau_{\geq -1}Li^*\mathcal{F}',
+\mathcal{I}\mathcal{G}')
+$$
+by the dual of
+Derived Categories, Lemma \ref{derived-lemma-negative-vanishing}.
+\end{proof}
+
+\begin{proof}[Second proof]
+We can apply Lemma \ref{lemma-inf-obs-map-ringed-topoi} as follows. Note that
+$\mathcal{K} = \mathcal{I} \otimes_\mathcal{O} \mathcal{F}$ and
+$\mathcal{L} = \mathcal{I} \otimes_\mathcal{O} \mathcal{G}$
+by Lemma \ref{lemma-deform-module-ringed-topoi}, that
+$c_{\mathcal{F}'} = 1 \otimes 1$ and $c_{\mathcal{G}'} = 1 \otimes 1$
+and taking $\psi = 1 \otimes \varphi$ the diagram of the lemma
+commutes. Thus $o(\varphi) = o(\varphi, 1 \otimes \varphi)$
+works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-ext-rel-ringed-topoi}
+Let $(f, f')$ be a morphism of first order thickenings as in
+Situation \ref{situation-morphism-thickenings-ringed-topoi}.
+Let $\mathcal{F}$ be an $\mathcal{O}$-module.
+Assume $(f, f')$ is a strict morphism of thickenings and
+$\mathcal{F}$ flat over $\mathcal{O}_\mathcal{B}$. If there exists a pair
+$(\mathcal{F}', \alpha)$ consisting of an
+$\mathcal{O}'$-module $\mathcal{F}'$ flat over $\mathcal{O}_{\mathcal{B}'}$
+and an isomorphism
+$\alpha : i^*\mathcal{F}' \to \mathcal{F}$, then the set of
+isomorphism classes of such pairs is principal homogeneous
+under
+$\Ext^1_\mathcal{O}(
+\mathcal{F}, \mathcal{I} \otimes_\mathcal{O} \mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+If we assume there exists one such module, then the canonical map
+$$
+f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F}
+$$
+is an isomorphism by Lemma \ref{lemma-deform-module-ringed-topoi}. Apply
+Lemma \ref{lemma-inf-ext-ringed-topoi} with $\mathcal{K} =
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F}$
+and $c = 1$. By Lemma \ref{lemma-deform-module-ringed-topoi}
+the corresponding extensions
+$\mathcal{F}'$ are all flat over $\mathcal{O}_{\mathcal{B}'}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-ext-rel-ringed-topoi}
+Let $(f, f')$ be a morphism of first order thickenings as in
+Situation \ref{situation-morphism-thickenings-ringed-topoi}.
+Let $\mathcal{F}$ be an $\mathcal{O}$-module. Assume
+$(f, f')$ is a strict morphism of thickenings
+and $\mathcal{F}$ flat over $\mathcal{O}_\mathcal{B}$. There exists an
+$\mathcal{O}'$-module $\mathcal{F}'$ flat over $\mathcal{O}_{\mathcal{B}'}$
+with $i^*\mathcal{F}' \cong \mathcal{F}$, if and only if
+\begin{enumerate}
+\item the canonical map
+$f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F} \to
+\mathcal{I} \otimes_\mathcal{O} \mathcal{F}$
+is an isomorphism, and
+\item the class
+$o(\mathcal{F}, \mathcal{I} \otimes_\mathcal{O} \mathcal{F}, 1)
+\in \Ext^2_\mathcal{O}(
+\mathcal{F}, \mathcal{I} \otimes_\mathcal{O} \mathcal{F})$
+of Lemma \ref{lemma-inf-obs-ext-ringed-topoi} is zero.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows immediately from the characterization of
+$\mathcal{O}'$-modules flat over $\mathcal{O}_{\mathcal{B}'}$ of
+Lemma \ref{lemma-deform-module-ringed-topoi} and
+Lemma \ref{lemma-inf-obs-ext-ringed-topoi}.
+\end{proof}
+
+
+
+
+
+
+\section{Application to flat modules on flat thickenings of ringed topoi}
+\label{section-flat-ringed-topoi}
+
+\noindent
+Consider a commutative diagram
+$$
+\xymatrix{
+(\Sh(\mathcal{C}), \mathcal{O}) \ar[r]_i \ar[d]_f &
+(\Sh(\mathcal{D}), \mathcal{O}') \ar[d]^{f'} \\
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B}) \ar[r]^t &
+(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})
+}
+$$
+of ringed topoi whose horizontal arrows are first order thickenings
+as in Situation \ref{situation-morphism-thickenings-ringed-topoi}. Set
+$\mathcal{I} = \Ker(i^\sharp) \subset \mathcal{O}'$ and
+$\mathcal{J} = \Ker(t^\sharp) \subset \mathcal{O}_{\mathcal{B}'}$.
+Let $\mathcal{F}$ be an $\mathcal{O}$-module. Assume that
+\begin{enumerate}
+\item $(f, f')$ is a strict morphism of thickenings,
+\item $f'$ is flat, and
+\item $\mathcal{F}$ is flat over $\mathcal{O}_\mathcal{B}$.
+\end{enumerate}
+Note that (1) $+$ (2) imply that $\mathcal{I} = f^*\mathcal{J}$
+(apply Lemma \ref{lemma-deform-module-ringed-topoi} to $\mathcal{O}'$).
+The theory of the preceding section is especially nice
+under these assumptions. We summarize the results already obtained
+in the following lemma.
+
+\begin{lemma}
+\label{lemma-flat-ringed-topoi}
+In the situation above.
+\begin{enumerate}
+\item There exists an $\mathcal{O}'$-module $\mathcal{F}'$ flat over
+$\mathcal{O}_{\mathcal{B}'}$ with $i^*\mathcal{F}' \cong \mathcal{F}$,
+if and only if
+the class $o(\mathcal{F}, f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F}, 1)
+\in \Ext^2_\mathcal{O}(
+\mathcal{F}, f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F})$
+of Lemma \ref{lemma-inf-obs-ext-ringed-topoi} is zero.
+\item If such a module exists, then the set of isomorphism classes
+of lifts is principal homogeneous under
+$\Ext^1_\mathcal{O}(
+\mathcal{F}, f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F})$.
+\item Given a lift $\mathcal{F}'$, the set of automorphisms of
+$\mathcal{F}'$ which pull back to $\text{id}_\mathcal{F}$ is canonically
+isomorphic to $\Ext^0_\mathcal{O}(
+\mathcal{F}, f^*\mathcal{J} \otimes_\mathcal{O} \mathcal{F})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from Lemma \ref{lemma-inf-obs-ext-rel-ringed-topoi}
+as we have seen above that $\mathcal{I} = f^*\mathcal{J}$.
+Part (2) follows from Lemma \ref{lemma-inf-ext-rel-ringed-topoi}.
+Part (3) follows from Lemma \ref{lemma-inf-map-rel-ringed-topoi}.
+\end{proof}
+
+\begin{situation}
+\label{situation-morphism-flat-thickenings-ringed-topoi}
+Let $f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$ be a morphism of
+ringed topoi. Consider a commutative diagram
+$$
+\xymatrix{
+(\Sh(\mathcal{C}'_1), \mathcal{O}'_1) \ar[r]_h \ar[d]_{f'_1} &
+(\Sh(\mathcal{C}'_2), \mathcal{O}'_2) \ar[d]_{f'_2} \\
+(\Sh(\mathcal{B}'_1), \mathcal{O}_{\mathcal{B}'_1}) \ar[r] &
+(\Sh(\mathcal{B}'_2), \mathcal{O}_{\mathcal{B}'_2})
+}
+$$
+where $h$ is a morphism of first order thickenings
+of $(\Sh(\mathcal{C}), \mathcal{O})$, the lower horizontal arrow
+is a morphism of first order thickenings of
+$(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$, each $f'_i$ restricts
+to $f$, both pairs $(f, f_i')$ are strict morphisms of thickenings, and
+both $f'_i$ are flat. Finally, let $\mathcal{F}$ be an
+$\mathcal{O}$-module flat over $\mathcal{O}_\mathcal{B}$.
+\end{situation}
+
+\begin{lemma}
+\label{lemma-functorial-ringed-topoi}
+In Situation \ref{situation-morphism-flat-thickenings-ringed-topoi}
+the obstruction class
+$o(\mathcal{F}, f^*\mathcal{J}_2 \otimes_\mathcal{O} \mathcal{F}, 1)$
+maps to the obstruction class
+$o(\mathcal{F}, f^*\mathcal{J}_1 \otimes_\mathcal{O} \mathcal{F}, 1)$
+under the canonical map
+$$
+\Ext^2_\mathcal{O}(
+\mathcal{F}, f^*\mathcal{J}_2 \otimes_\mathcal{O} \mathcal{F})
+\to \Ext^2_\mathcal{O}(
+\mathcal{F}, f^*\mathcal{J}_1 \otimes_\mathcal{O} \mathcal{F})
+$$
+\end{lemma}
+
+\begin{proof}
+Follows from Remark \ref{remark-obstruction-extension-functorial-ringed-topoi}.
+\end{proof}
+
+\begin{situation}
+\label{situation-ses-flat-thickenings-ringed-topoi}
+Let $f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$ be a morphism of
+ringed topoi. Consider a commutative diagram
+$$
+\xymatrix{
+(\Sh(\mathcal{C}'_1), \mathcal{O}'_1) \ar[r]_h \ar[d]_{f'_1} &
+(\Sh(\mathcal{C}'_2), \mathcal{O}'_2) \ar[r] \ar[d]_{f'_2} &
+(\Sh(\mathcal{C}'_3), \mathcal{O}'_3) \ar[d]_{f'_3} \\
+(\Sh(\mathcal{B}'_1), \mathcal{O}_{\mathcal{B}'_1}) \ar[r] &
+(\Sh(\mathcal{B}'_2), \mathcal{O}_{\mathcal{B}'_2}) \ar[r] &
+(\Sh(\mathcal{B}'_3), \mathcal{O}_{\mathcal{B}'_3})
+}
+$$
+where (a) the top row is a short exact sequence of first order thickenings
+of $(\Sh(\mathcal{C}), \mathcal{O})$, (b) the lower row is a short exact
+sequence of first order thickenings of
+$(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$, (c) each $f'_i$ restricts
+to $f$, (d) each pair $(f, f_i')$ is a strict morphism of thickenings, and
+(e) each $f'_i$ is flat. Finally, let $\mathcal{F}'_2$ be an
+$\mathcal{O}'_2$-module flat over $\mathcal{O}_{\mathcal{B}'_2}$ and
+set $\mathcal{F} = \mathcal{F}'_2 \otimes_{\mathcal{O}'_2} \mathcal{O}$. Let
+$\pi : (\Sh(\mathcal{C}'_1), \mathcal{O}'_1) \to
+(\Sh(\mathcal{C}), \mathcal{O})$ be the canonical splitting
+(Remark \ref{remark-short-exact-sequence-thickenings-ringed-topoi}).
+\end{situation}
+
+\begin{lemma}
+\label{lemma-verify-iv-ringed-topoi}
+In Situation \ref{situation-ses-flat-thickenings-ringed-topoi} the modules
+$\pi^*\mathcal{F}$ and $h^*\mathcal{F}'_2$ are $\mathcal{O}'_1$-modules
+flat over $\mathcal{O}_{\mathcal{B}'_1}$ restricting to $\mathcal{F}$ on
+$(\Sh(\mathcal{C}), \mathcal{O})$.
+Their difference (Lemma \ref{lemma-flat-ringed-topoi}) is an element
+$\theta$ of
+$\Ext^1_\mathcal{O}(\mathcal{F},
+f^*\mathcal{J}_1 \otimes_\mathcal{O} \mathcal{F})$
+whose boundary in
+$\Ext^2_\mathcal{O}(\mathcal{F},
+f^*\mathcal{J}_3 \otimes_\mathcal{O} \mathcal{F})$
+equals the obstruction (Lemma \ref{lemma-flat-ringed-topoi})
+to lifting $\mathcal{F}$ to an $\mathcal{O}'_3$-module flat over
+$\mathcal{O}_{\mathcal{B}'_3}$.
+\end{lemma}
+
+\begin{proof}
+Note that both $\pi^*\mathcal{F}$ and $h^*\mathcal{F}'_2$
+restrict to $\mathcal{F}$ on $(\Sh(\mathcal{C}), \mathcal{O})$
+and that the kernels of
+$\pi^*\mathcal{F} \to \mathcal{F}$ and $h^*\mathcal{F}'_2 \to \mathcal{F}$
+are given by $f^*\mathcal{J}_1 \otimes_\mathcal{O} \mathcal{F}$.
+Hence flatness by Lemma \ref{lemma-deform-module-ringed-topoi}.
+Taking the boundary makes sense as the sequence of modules
+$$
+0 \to f^*\mathcal{J}_3 \otimes_\mathcal{O} \mathcal{F} \to
+f^*\mathcal{J}_2 \otimes_\mathcal{O} \mathcal{F} \to
+f^*\mathcal{J}_1 \otimes_\mathcal{O} \mathcal{F} \to 0
+$$
+is short exact due to the assumptions in
+Situation \ref{situation-ses-flat-thickenings-ringed-topoi}
+and the fact that $\mathcal{F}$ is flat over $\mathcal{O}_\mathcal{B}$.
+The statement on the obstruction class is a direct translation
+of the result of
+Remark \ref{remark-complex-thickenings-and-ses-modules-ringed-topoi}
+to this particular situation.
+\end{proof}
+
+
+
+
+
+
+
+\section{Deformations of ringed topoi and the naive cotangent complex}
+\label{section-deformations-ringed-topoi}
+
+\noindent
+In this section we use the naive cotangent complex to do a little bit
+of deformation theory. We start with a first order thickening
+$t : (\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B}) \to
+(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})$ of ringed topoi.
+We denote $\mathcal{J} = \Ker(t^\sharp)$ and we
+identify the underlying topoi of $\mathcal{B}$ and $\mathcal{B}'$.
+Moreover we assume given a morphism of ringed topoi
+$f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$, an $\mathcal{O}$-module
+$\mathcal{G}$, and a map $f^{-1}\mathcal{J} \to \mathcal{G}$
+of sheaves of $f^{-1}\mathcal{O}_\mathcal{B}$-modules.
+In this section we ask ourselves whether we can find
+the question mark fitting into the following diagram
+\begin{equation}
+\label{equation-to-solve-ringed-topoi}
+\vcenter{
+\xymatrix{
+0 \ar[r] & \mathcal{G} \ar[r] & {?} \ar[r] & \mathcal{O} \ar[r] & 0 \\
+0 \ar[r] & f^{-1}\mathcal{J} \ar[u]^c \ar[r] &
+f^{-1}\mathcal{O}_{\mathcal{B}'} \ar[u] \ar[r] &
+f^{-1}\mathcal{O}_\mathcal{B} \ar[u] \ar[r] & 0
+}
+}
+\end{equation}
+and moreover how unique the solution is (if it exists). More precisely,
+we look for a first order thickening
+$i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{C}'), \mathcal{O}')$
+and a morphism of thickenings $(f, f')$ as in
+(\ref{equation-morphism-thickenings-ringed-topoi})
+where $\Ker(i^\sharp)$ is identified with $\mathcal{G}$
+such that $(f')^\sharp$ induces the given map $c$.
+We will say $(\Sh(\mathcal{C}'), \mathcal{O}')$ is a {\it solution} to
+(\ref{equation-to-solve-ringed-topoi}).
+
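+\medskip\noindent
+In the affine model (a sketch for orientation only) the problem reads
+as follows: given a square zero extension $B' \to B$ of rings with
+kernel $J$, a $B$-algebra $A$, an $A$-module $G$, and a $B$-module
+map $J \to G$, find a $B'$-algebra $A'$ together with a surjection
+$A' \to A$ whose kernel is the square zero ideal $G$, compatibly with
+the given map $J \to G$. The results of this section measure the
+obstruction to, and the ambiguity in, solving this problem in terms
+of the naive cotangent complex $\NL_{A/B}$.
+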
+
+\begin{lemma}
+\label{lemma-huge-diagram-ringed-topoi}
+Assume given a commutative diagram of morphisms of ringed topoi
+\begin{equation}
+\label{equation-huge-1-ringed-topoi}
+\vcenter{
+\xymatrix{
+& (\Sh(\mathcal{C}_2), \mathcal{O}_2) \ar[r]_{i_2} \ar[d]_{f_2} \ar[ddl]_g &
+(\Sh(\mathcal{C}'_2), \mathcal{O}'_2) \ar[d]^{f'_2} \\
+&
+(\Sh(\mathcal{B}_2), \mathcal{O}_{\mathcal{B}_2}) \ar[r]^{t_2} \ar[ddl]|\hole &
+(\Sh(\mathcal{B}'_2), \mathcal{O}_{\mathcal{B}'_2}) \ar[ddl] \\
+(\Sh(\mathcal{C}_1), \mathcal{O}_1) \ar[r]_{i_1} \ar[d]_{f_1} &
+(\Sh(\mathcal{C}'_1), \mathcal{O}'_1) \ar[d]^{f'_1} \\
+(\Sh(\mathcal{B}_1), \mathcal{O}_{\mathcal{B}_1}) \ar[r]^{t_1} &
+(\Sh(\mathcal{B}'_1), \mathcal{O}_{\mathcal{B}'_1})
+}
+}
+\end{equation}
+whose horizontal arrows are first order thickenings. Set
+$\mathcal{G}_j = \Ker(i_j^\sharp)$ and assume given a
+map of $g^{-1}\mathcal{O}_1$-modules
+$\nu : g^{-1}\mathcal{G}_1 \to \mathcal{G}_2$
+giving rise to the commutative diagram
+\begin{equation}
+\label{equation-huge-2-ringed-topoi}
+\vcenter{
+\xymatrix{
+& 0 \ar[r] & \mathcal{G}_2 \ar[r] &
+\mathcal{O}'_2 \ar[r] &
+\mathcal{O}_2 \ar[r] & 0 \\
+& 0 \ar[r]|\hole &
+f_2^{-1}\mathcal{J}_2 \ar[u]_{c_2} \ar[r] &
+f_2^{-1}\mathcal{O}_{\mathcal{B}'_2} \ar[u] \ar[r]|\hole &
+f_2^{-1}\mathcal{O}_{\mathcal{B}_2} \ar[u] \ar[r] & 0 \\
+0 \ar[r] &
+\mathcal{G}_1 \ar[ruu] \ar[r] &
+\mathcal{O}'_1 \ar[r] &
+\mathcal{O}_1 \ar[ruu] \ar[r] & 0 \\
+0 \ar[r] &
+f_1^{-1}\mathcal{J}_1 \ar[ruu]|\hole \ar[u]^{c_1} \ar[r] &
+f_1^{-1}\mathcal{O}_{\mathcal{B}'_1} \ar[ruu]|\hole \ar[u] \ar[r] &
+f_1^{-1}\mathcal{O}_{\mathcal{B}_1} \ar[ruu]|\hole \ar[u] \ar[r] & 0
+}
+}
+\end{equation}
+with front and back solutions to (\ref{equation-to-solve-ringed-topoi}).
+(The north-north-west arrows are maps on $\mathcal{C}_2$ after applying
+$g^{-1}$ to the source.)
+\begin{enumerate}
\item There exists a canonical element in
+$\Ext^1_{\mathcal{O}_2}(
+Lg^*\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}, \mathcal{G}_2)$
+whose vanishing is a necessary and sufficient condition for the existence
+of a morphism of ringed topoi
+$(\Sh(\mathcal{C}'_2), \mathcal{O}'_2) \to
+(\Sh(\mathcal{C}'_1), \mathcal{O}'_1)$ fitting into
+(\ref{equation-huge-1-ringed-topoi}) compatibly with $\nu$.
+\item If there exists a morphism
+$(\Sh(\mathcal{C}'_2), \mathcal{O}'_2) \to
+(\Sh(\mathcal{C}'_1), \mathcal{O}'_1)$
+fitting into
+(\ref{equation-huge-1-ringed-topoi}) compatibly with $\nu$ the set
+of all such morphisms is a principal homogeneous space under
+$$
+\Hom_{\mathcal{O}_1}(
+\Omega_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}, g_*\mathcal{G}_2) =
+\Hom_{\mathcal{O}_2}(
+g^*\Omega_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}, \mathcal{G}_2) =
+\Ext^0_{\mathcal{O}_2}(
+Lg^*\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}, \mathcal{G}_2).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The proof of this lemma is identical to the proof of
+Lemma \ref{lemma-huge-diagram-ringed-spaces}.
+We urge the reader to read that proof instead of this one.
+We will identify the underlying topoi for every
+thickening in sight (we have already used this convention
+in the statement). The equalities in the last statement of the
+lemma are immediate from the definitions. Thus we will work with the groups
+$\Ext^k_{\mathcal{O}_2}(
+Lg^*\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}, \mathcal{G}_2)$,
+$k = 0, 1$ in the rest of the proof. We first argue that we can reduce
+to the case where the underlying topos of all ringed topoi in the lemma
+is the same.
+
+\medskip\noindent
+To do this, observe that
+$g^{-1}\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}$ is equal to the naive
+cotangent complex of the homomorphism of sheaves of rings
+$g^{-1}f_1^{-1}\mathcal{O}_{\mathcal{B}_1} \to g^{-1}\mathcal{O}_1$, see
+Modules on Sites, Lemma \ref{sites-modules-lemma-pullback-differentials}.
+Moreover, the degree $0$ term of
+$\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}$ is a flat
+$\mathcal{O}_1$-module, hence the canonical map
+$$
+Lg^*\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}
+\longrightarrow
+g^{-1}\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}
+\otimes_{g^{-1}\mathcal{O}_1} \mathcal{O}_2
+$$
+induces an isomorphism on cohomology sheaves in degrees $0$ and $-1$.
+Thus we may replace the Ext groups of the lemma with
+$$
+\Ext^k_{g^{-1}\mathcal{O}_1}(
+g^{-1}\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}, \mathcal{G}_2) =
+\Ext^k_{g^{-1}\mathcal{O}_1}(
+\NL_{g^{-1}\mathcal{O}_1/g^{-1}f_1^{-1}\mathcal{O}_{\mathcal{B}_1}},
+\mathcal{G}_2)
+$$
The set of morphisms of ringed topoi
+$(\Sh(\mathcal{C}'_2), \mathcal{O}'_2) \to
+(\Sh(\mathcal{C}'_1), \mathcal{O}'_1)$ fitting into
+(\ref{equation-huge-1-ringed-topoi}) compatibly with $\nu$ is in
+one-to-one bijection with the set of homomorphisms of
+$g^{-1}f_1^{-1}\mathcal{O}_{\mathcal{B}'_1}$-algebras
+$g^{-1}\mathcal{O}'_1 \to \mathcal{O}'_2$ which are compatible with
+$f^\sharp$ and $\nu$. In this way we see that we may assume we have a
+diagram (\ref{equation-huge-2-ringed-topoi}) of sheaves on a site
+$\mathcal{C}$ (with $f_1 = f_2 = \text{id}$ on underlying topoi)
+and we are looking to find a homomorphism of sheaves of rings
+$\mathcal{O}'_1 \to \mathcal{O}'_2$ fitting into it.
+
+\medskip\noindent
In the rest of the proof of the lemma we assume
all underlying topoi are the
+same, i.e., we have a diagram (\ref{equation-huge-2-ringed-topoi})
+of sheaves on a site $\mathcal{C}$ (with $f_1 = f_2 = \text{id}$
+on underlying topoi) and we are looking for
+homomorphisms of sheaves of rings
+$\mathcal{O}'_1 \to \mathcal{O}'_2$ fitting into it.
+As ext groups we will use
+$\Ext^k_{\mathcal{O}_1}(
+\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}, \mathcal{G}_2)$, $k = 0, 1$.
+
+\medskip\noindent
+Step 1. Construction of the obstruction class. Consider the sheaf
+of sets
+$$
+\mathcal{E} = \mathcal{O}'_1 \times_{\mathcal{O}_2} \mathcal{O}'_2
+$$
+This comes with a surjective map $\alpha : \mathcal{E} \to \mathcal{O}_1$
+and hence we can use $\NL(\alpha)$ instead of
+$\NL_{\mathcal{O}_1/\mathcal{O}_{\mathcal{B}_1}}$, see
+Modules on Sites, Lemma \ref{sites-modules-lemma-NL-up-to-qis}.
+Set
+$$
+\mathcal{I}' =
+\Ker(\mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}] \to \mathcal{O}_1)
+\quad\text{and}\quad
+\mathcal{I} =
+\Ker(\mathcal{O}_{\mathcal{B}_1}[\mathcal{E}] \to \mathcal{O}_1)
+$$
+There is a surjection $\mathcal{I}' \to \mathcal{I}$ whose kernel
+is $\mathcal{J}_1\mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}]$.
+We obtain two homomorphisms of $\mathcal{O}_{\mathcal{B}'_2}$-algebras
+$$
+a : \mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}] \to \mathcal{O}'_1
+\quad\text{and}\quad
+b : \mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}] \to \mathcal{O}'_2
+$$
+which induce maps $a|_{\mathcal{I}'} : \mathcal{I}' \to \mathcal{G}_1$ and
+$b|_{\mathcal{I}'} : \mathcal{I}' \to \mathcal{G}_2$. Both $a$ and $b$
+annihilate $(\mathcal{I}')^2$. Moreover $a$ and $b$ agree on
+$\mathcal{J}_1\mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}]$
+as maps into $\mathcal{G}_2$
+because the left hand square of (\ref{equation-huge-2-ringed-topoi})
+is commutative. Thus the difference
+$b|_{\mathcal{I}'} - \nu \circ a|_{\mathcal{I}'}$
+induces a well defined $\mathcal{O}_1$-linear map
+$$
+\xi : \mathcal{I}/\mathcal{I}^2 \longrightarrow \mathcal{G}_2
+$$
+which sends the class of a local section $f$ of $\mathcal{I}$ to
$b(f') - \nu(a(f'))$ where $f'$ is a lift of $f$ to a local
+section of $\mathcal{I}'$. We let
+$[\xi] \in \Ext^1_{\mathcal{O}_1}(\NL(\alpha), \mathcal{G}_2)$
+be the image (see below).
+
+\medskip\noindent
+Step 2. Vanishing of $[\xi]$ is necessary. Let us write $\Omega =
+\Omega_{\mathcal{O}_{\mathcal{B}_1}[\mathcal{E}]/\mathcal{O}_{\mathcal{B}_1}}
+\otimes_{\mathcal{O}_{\mathcal{B}_1}[\mathcal{E}]} \mathcal{O}_1$.
+Observe that $\NL(\alpha) = (\mathcal{I}/\mathcal{I}^2 \to \Omega)$
+fits into a distinguished triangle
+$$
+\Omega[0] \to
+\NL(\alpha) \to
+\mathcal{I}/\mathcal{I}^2[1] \to
+\Omega[1]
+$$
+Thus we see that $[\xi]$ is zero if and only if $\xi$
+is a composition $\mathcal{I}/\mathcal{I}^2 \to \Omega \to \mathcal{G}_2$
+for some map $\Omega \to \mathcal{G}_2$. Suppose there exists a
homomorphism of sheaves of rings
+$\varphi : \mathcal{O}'_1 \to \mathcal{O}'_2$ fitting into
+(\ref{equation-huge-2-ringed-topoi}). In this case consider the map
$\mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}] \to \mathcal{G}_2$,
+$f' \mapsto b(f') - \varphi(a(f'))$. A calculation
+shows this annihilates $\mathcal{J}_1\mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}]$
+and induces a derivation
+$\mathcal{O}_{\mathcal{B}_1}[\mathcal{E}] \to \mathcal{G}_2$.
+The resulting linear map $\Omega \to \mathcal{G}_2$ witnesses the
+fact that $[\xi] = 0$ in this case.
+
+\medskip\noindent
+Step 3. Vanishing of $[\xi]$ is sufficient. Let
$\theta : \Omega \to \mathcal{G}_2$ be an $\mathcal{O}_1$-linear map
+such that $\xi$ is equal to
+$\theta \circ (\mathcal{I}/\mathcal{I}^2 \to \Omega)$.
+Then a calculation shows that
+$$
+b + \theta \circ d :
+\mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}]
+\longrightarrow
+\mathcal{O}'_2
+$$
+annihilates $\mathcal{I}'$ and hence defines a map
+$\mathcal{O}'_1 \to \mathcal{O}'_2$ fitting into
+(\ref{equation-huge-2-ringed-topoi}).
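
\medskip\noindent
The key point of the calculation is that $b + \theta \circ \text{d}$
is multiplicative: for local sections $f, g$ of
$\mathcal{O}_{\mathcal{B}'_1}[\mathcal{E}]$ with images
$\bar f, \bar g$ in $\mathcal{O}_1$ we have
$$
(b + \theta \circ \text{d})(fg) =
b(f)b(g) + \bar f\, \theta(\text{d}g) + \bar g\, \theta(\text{d}f) =
(b + \theta \circ \text{d})(f) \, (b + \theta \circ \text{d})(g)
$$
because $\mathcal{G}_2$ is an ideal of square zero in $\mathcal{O}'_2$
and multiplication by $b(f)$ on $\mathcal{G}_2$ is multiplication
by the image of $\bar f$ in $\mathcal{O}_2$.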
+
+\medskip\noindent
+Proof of (2) in the special case above. Omitted. Hint:
+This is exactly the same as the proof of (2) of
+Lemma \ref{lemma-huge-diagram}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-NL-represent-ext-class-ringed-topoi}
+Let $\mathcal{C}$ be a site. Let $\mathcal{A} \to \mathcal{B}$ be a
+homomorphism of sheaves of rings on $\mathcal{C}$.
+Let $\mathcal{G}$ be a $\mathcal{B}$-module.
+Let
+$\xi \in \Ext^1_\mathcal{B}(\NL_{\mathcal{B}/\mathcal{A}}, \mathcal{G})$.
+There exists a map of sheaves of sets $\alpha : \mathcal{E} \to \mathcal{B}$
+such that $\xi \in \Ext^1_\mathcal{B}(\NL(\alpha), \mathcal{G})$
+is the class of a map $\mathcal{I}/\mathcal{I}^2 \to \mathcal{G}$
+(see proof for notation).
+\end{lemma}
+
+\begin{proof}
+Recall that given $\alpha : \mathcal{E} \to \mathcal{B}$
+such that $\mathcal{A}[\mathcal{E}] \to \mathcal{B}$ is surjective
+with kernel $\mathcal{I}$ the complex
+$\NL(\alpha) = (\mathcal{I}/\mathcal{I}^2 \to
+\Omega_{\mathcal{A}[\mathcal{E}]/\mathcal{A}}
+\otimes_{\mathcal{A}[\mathcal{E}]} \mathcal{B})$ is canonically
+isomorphic to $\NL_{\mathcal{B}/\mathcal{A}}$, see
+Modules on Sites, Lemma \ref{sites-modules-lemma-NL-up-to-qis}.
+Observe moreover, that
+$\Omega = \Omega_{\mathcal{A}[\mathcal{E}]/\mathcal{A}}
+\otimes_{\mathcal{A}[\mathcal{E}]} \mathcal{B}$ is the sheaf
+associated to the presheaf
+$U \mapsto \bigoplus_{e \in \mathcal{E}(U)} \mathcal{B}(U)$.
+In other words, $\Omega$ is the free $\mathcal{B}$-module on the
+sheaf of sets $\mathcal{E}$ and in particular there is a canonical
+map $\mathcal{E} \to \Omega$.
+
+\medskip\noindent
+Having said this, pick some $\mathcal{E}$ (for example
+$\mathcal{E} = \mathcal{B}$ as in the definition of the naive cotangent
+complex). The obstruction to writing $\xi$ as the class of a map
+$\mathcal{I}/\mathcal{I}^2 \to \mathcal{G}$ is an element in
+$\Ext^1_\mathcal{B}(\Omega, \mathcal{G})$. Say this is represented
+by the extension $0 \to \mathcal{G} \to \mathcal{H} \to \Omega \to 0$
+of $\mathcal{B}$-modules. Consider the sheaf of sets
+$\mathcal{E}' = \mathcal{E} \times_\Omega \mathcal{H}$
+which comes with an induced map $\alpha' : \mathcal{E}' \to \mathcal{B}$.
+Let $\mathcal{I}' = \Ker(\mathcal{A}[\mathcal{E}'] \to \mathcal{B})$
+and $\Omega' = \Omega_{\mathcal{A}[\mathcal{E}']/\mathcal{A}}
+\otimes_{\mathcal{A}[\mathcal{E}']} \mathcal{B}$.
+The pullback of $\xi$ under the quasi-isomorphism
+$\NL(\alpha') \to \NL(\alpha)$ maps to zero in
+$\Ext^1_\mathcal{B}(\Omega', \mathcal{G})$
+because the pullback of the extension $\mathcal{H}$
+by the map $\Omega' \to \Omega$ is split as $\Omega'$ is the free
+$\mathcal{B}$-module on the sheaf of sets $\mathcal{E}'$ and since
+by construction there is a commutative diagram
+$$
+\xymatrix{
+\mathcal{E}' \ar[r] \ar[d] & \mathcal{E} \ar[d] \\
+\mathcal{H} \ar[r] & \Omega
+}
+$$
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-choices-ringed-topoi}
+If there exists a solution to (\ref{equation-to-solve-ringed-topoi}),
+then the set of isomorphism classes of solutions is principal homogeneous
+under $\Ext^1_\mathcal{O}(
+\NL_{\mathcal{O}/\mathcal{O}_\mathcal{B}}, \mathcal{G})$.
+\end{lemma}
+
+\begin{proof}
+We observe right away that given two solutions $\mathcal{O}'_1$ and
+$\mathcal{O}'_2$ to (\ref{equation-to-solve-ringed-topoi}) we obtain by
+Lemma \ref{lemma-huge-diagram-ringed-topoi} an obstruction element
+$o(\mathcal{O}'_1, \mathcal{O}'_2) \in \Ext^1_\mathcal{O}(
+\NL_{\mathcal{O}/\mathcal{O}_\mathcal{B}}, \mathcal{G})$
+to the existence of a map $\mathcal{O}'_1 \to \mathcal{O}'_2$.
+Clearly, this element
+is the obstruction to the existence of an isomorphism, hence separates
+the isomorphism classes. To finish the proof it therefore suffices to
+show that given a solution $\mathcal{O}'$ and an element
+$\xi \in \Ext^1_\mathcal{O}(
+\NL_{\mathcal{O}/\mathcal{O}_\mathcal{B}}, \mathcal{G})$
+we can find a second solution $\mathcal{O}'_\xi$ such that
+$o(\mathcal{O}', \mathcal{O}'_\xi) = \xi$.
+
+\medskip\noindent
+Pick $\alpha : \mathcal{E} \to \mathcal{O}$ as in
+Lemma \ref{lemma-NL-represent-ext-class-ringed-topoi}
+for the class $\xi$. Consider the surjection
+$f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}] \to \mathcal{O}$
+with kernel $\mathcal{I}$ and corresponding naive cotangent complex
+$\NL(\alpha) = (\mathcal{I}/\mathcal{I}^2 \to
+\Omega_{f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}]/
+f^{-1}\mathcal{O}_\mathcal{B}}
+\otimes_{f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}]} \mathcal{O})$.
+By the lemma $\xi$ is the class of a morphism
+$\delta : \mathcal{I}/\mathcal{I}^2 \to \mathcal{G}$.
+After replacing $\mathcal{E}$ by
+$\mathcal{E} \times_\mathcal{O} \mathcal{O}'$ we may also assume
+that $\alpha$ factors through a map
+$\alpha' : \mathcal{E} \to \mathcal{O}'$.
+
+\medskip\noindent
+These choices determine an $f^{-1}\mathcal{O}_{\mathcal{B}'}$-algebra map
+$\varphi : \mathcal{O}_{\mathcal{B}'}[\mathcal{E}] \to \mathcal{O}'$.
+Let $\mathcal{I}' = \Ker(\varphi)$.
+Observe that $\varphi$ induces a map
+$\varphi|_{\mathcal{I}'} : \mathcal{I}' \to \mathcal{G}$
+and that $\mathcal{O}'$ is the pushout, as in the following
+diagram
+$$
+\xymatrix{
+0 \ar[r] & \mathcal{G} \ar[r] & \mathcal{O}' \ar[r] &
+\mathcal{O} \ar[r] & 0 \\
+0 \ar[r] & \mathcal{I}' \ar[u]^{\varphi|_{\mathcal{I}'}} \ar[r] &
+f^{-1}\mathcal{O}_{\mathcal{B}'}[\mathcal{E}] \ar[u] \ar[r] &
+\mathcal{O} \ar[u]_{=} \ar[r] & 0
+}
+$$
+Let $\psi : \mathcal{I}' \to \mathcal{G}$ be the sum of the map
+$\varphi|_{\mathcal{I}'}$ and the composition
+$$
+\mathcal{I}' \to \mathcal{I}'/(\mathcal{I}')^2 \to
+\mathcal{I}/\mathcal{I}^2 \xrightarrow{\delta} \mathcal{G}.
+$$
Then the pushout along $\psi$ is another ring extension
+$\mathcal{O}'_\xi$ fitting into a diagram as above.
+A calculation (omitted) shows that $o(\mathcal{O}', \mathcal{O}'_\xi) = \xi$
+as desired.
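
\medskip\noindent
Here the pushout along $\psi$ may be realized explicitly as the quotient
of $\mathcal{G} \oplus f^{-1}\mathcal{O}_{\mathcal{B}'}[\mathcal{E}]$
by the image of the map
$$
\mathcal{I}' \longrightarrow
\mathcal{G} \oplus f^{-1}\mathcal{O}_{\mathcal{B}'}[\mathcal{E}],
\quad
i \longmapsto (\psi(i), -i)
$$
with multiplication
$(g_1, a_1)(g_2, a_2) = (\bar a_1 g_2 + \bar a_2 g_1, a_1 a_2)$
where $\bar a$ denotes the image of $a$ in $\mathcal{O}$;
this is well defined because $\psi$ annihilates $(\mathcal{I}')^2$.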
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extensions-of-relative-ringed-topoi}
+Let $f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$ be a morphism of
+ringed topoi. Let $\mathcal{G}$ be an $\mathcal{O}$-module.
+The set of isomorphism classes of extensions of
+$f^{-1}\mathcal{O}_\mathcal{B}$-algebras
+$$
+0 \to \mathcal{G} \to \mathcal{O}' \to \mathcal{O} \to 0
+$$
+where $\mathcal{G}$ is an ideal of square zero\footnote{In other words,
+the set of isomorphism classes of first order thickenings
+$i : (\Sh(\mathcal{C}), \mathcal{O}) \to (\Sh(\mathcal{C}), \mathcal{O}')$
+over $(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$ endowed with an isomorphism
+$\mathcal{G} \to \Ker(i^\sharp)$ of $\mathcal{O}$-modules.}
+is canonically bijective to
+$\Ext^1_\mathcal{O}(\NL_{\mathcal{O}/\mathcal{O}_\mathcal{B}}, \mathcal{G})$.
+\end{lemma}
+
+\begin{proof}
+To prove this we apply the previous results to the case where
+(\ref{equation-to-solve-ringed-topoi}) is given by the diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{G} \ar[r] &
+{?} \ar[r] &
+\mathcal{O} \ar[r] & 0 \\
+0 \ar[r] &
+0 \ar[u] \ar[r] &
+f^{-1}\mathcal{O}_\mathcal{B} \ar[u] \ar[r]^{\text{id}} &
+f^{-1}\mathcal{O}_\mathcal{B} \ar[u] \ar[r] & 0
+}
+$$
+Thus our lemma follows from Lemma \ref{lemma-choices-ringed-topoi}
+and the fact that there exists a solution, namely
+$\mathcal{G} \oplus \mathcal{O}$.
+(See remark below for a direct construction of the bijection.)
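Here $\mathcal{G} \oplus \mathcal{O}$ denotes the trivial square zero
extension, i.e., the sheaf of $f^{-1}\mathcal{O}_\mathcal{B}$-algebras
with multiplication given on local sections by
$$
(g_1, a_1)(g_2, a_2) = (a_1 g_2 + a_2 g_1, a_1 a_2)
$$
and with the obvious maps $\mathcal{G} \to \mathcal{G} \oplus \mathcal{O}$
and $\mathcal{G} \oplus \mathcal{O} \to \mathcal{O}$.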
+\end{proof}
+
+\begin{remark}
+\label{remark-extensions-of-relative-ringed-topoi}
+Let $f : (\Sh(\mathcal{C}), \mathcal{O}) \to
(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$ and $\mathcal{G}$
+be as in Lemma \ref{lemma-extensions-of-relative-ringed-topoi}.
+Consider an extension
+$0 \to \mathcal{G} \to \mathcal{O}' \to \mathcal{O} \to 0$
+as in the lemma. We can choose a sheaf of sets $\mathcal{E}$
+and a commutative diagram
+$$
+\xymatrix{
+\mathcal{E} \ar[d]_{\alpha'} \ar[rd]^\alpha \\
+\mathcal{O}' \ar[r] & \mathcal{O}
+}
+$$
+such that $f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}] \to \mathcal{O}$
+is surjective with kernel $\mathcal{J}$.
+(For example you can take any sheaf of sets surjecting
+onto $\mathcal{O}'$.) Then
+$$
+\NL_{\mathcal{O}/\mathcal{O}_\mathcal{B}} \cong \NL(\alpha) =
+\left(
+\mathcal{J}/\mathcal{J}^2
+\longrightarrow
+\Omega_{f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}]/
+f^{-1}\mathcal{O}_\mathcal{B}}
+\otimes_{f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}]} \mathcal{O}\right)
+$$
+See Modules on Sites, Section \ref{sites-modules-section-netherlander}
+and in particular Lemma \ref{sites-modules-lemma-NL-up-to-qis}.
+Of course $\alpha'$ determines a map
+$f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}] \to \mathcal{O}'$
+which in turn determines a map
+$$
+\mathcal{J}/\mathcal{J}^2 \longrightarrow \mathcal{G}
+$$
+which in turn determines the element of
+$\Ext^1_\mathcal{O}(\NL(\alpha), \mathcal{G}) =
+\Ext^1_\mathcal{O}(\NL_{\mathcal{O}/\mathcal{O}_\mathcal{B}}, \mathcal{G})$
+corresponding to $\mathcal{O}'$ by the bijection of the lemma.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-extensions-of-relative-ringed-topoi-functorial}
+Let $f : (\Sh(\mathcal{C}), \mathcal{O}_\mathcal{C}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$ and
+$g : (\Sh(\mathcal{D}), \mathcal{O}_\mathcal{D}) \to
+(\Sh(\mathcal{C}), \mathcal{O}_\mathcal{C})$ be morphisms
of ringed topoi. Let $\mathcal{F}$ be an $\mathcal{O}_\mathcal{C}$-module.
Let $\mathcal{G}$ be an $\mathcal{O}_\mathcal{D}$-module. Let
$c : g^*\mathcal{F} \to \mathcal{G}$ be an $\mathcal{O}_\mathcal{D}$-linear
+map. Finally, consider
+\begin{enumerate}
+\item[(a)]
+$0 \to \mathcal{F} \to \mathcal{O}_{\mathcal{C}'} \to
+\mathcal{O}_\mathcal{C} \to 0$
+an extension of $f^{-1}\mathcal{O}_\mathcal{B}$-algebras
+corresponding to
+$\xi \in \Ext^1_{\mathcal{O}_\mathcal{C}}(
+\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}, \mathcal{F})$, and
+\item[(b)]
+$0 \to \mathcal{G} \to \mathcal{O}_{\mathcal{D}'} \to
+\mathcal{O}_\mathcal{D} \to 0$
+an extension of $g^{-1}f^{-1}\mathcal{O}_\mathcal{B}$-algebras
+corresponding to
+$\zeta \in \Ext^1_{\mathcal{O}_\mathcal{D}}(
+\NL_{\mathcal{O}_\mathcal{D}/\mathcal{O}_\mathcal{B}}, \mathcal{G})$.
+\end{enumerate}
+See Lemma \ref{lemma-extensions-of-relative-ringed-topoi}.
+Then there is a morphism
+$$
+g' :
+(\Sh(\mathcal{D}), \mathcal{O}_{\mathcal{D}'})
+\longrightarrow
+(\Sh(\mathcal{C}), \mathcal{O}_{\mathcal{C}'})
+$$
+of ringed topoi over $(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$
+compatible with $g$ and $c$ if and only if $\xi$ and $\zeta$
+map to the same element of
+$\Ext^1_{\mathcal{O}_\mathcal{D}}(
+Lg^*\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}, \mathcal{G})$.
+\end{lemma}
+
+\begin{proof}
The statement makes sense as we have the maps
+$$
+\Ext^1_{\mathcal{O}_\mathcal{C}}(
+\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}, \mathcal{F}) \to
+\Ext^1_{\mathcal{O}_\mathcal{D}}(
+Lg^*\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}, Lg^*\mathcal{F}) \to
+\Ext^1_{\mathcal{O}_\mathcal{D}}
+(Lg^*\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}, \mathcal{G})
+$$
+using the map $Lg^*\mathcal{F} \to g^*\mathcal{F} \xrightarrow{c} \mathcal{G}$
+and
+$$
\Ext^1_{\mathcal{O}_\mathcal{D}}(
\NL_{\mathcal{O}_\mathcal{D}/\mathcal{O}_\mathcal{B}}, \mathcal{G}) \to
\Ext^1_{\mathcal{O}_\mathcal{D}}(
Lg^*\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}, \mathcal{G})
+$$
+using the map $Lg^*\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}} \to
+\NL_{\mathcal{O}_\mathcal{D}/\mathcal{O}_\mathcal{B}}$.
+The statement of the lemma can be deduced from
+Lemma \ref{lemma-huge-diagram-ringed-topoi} applied to the diagram
+$$
+\xymatrix{
+& 0 \ar[r] &
+\mathcal{G} \ar[r] &
+\mathcal{O}_{\mathcal{D}'} \ar[r] &
+\mathcal{O}_\mathcal{D} \ar[r] & 0 \\
+& 0 \ar[r]|\hole & 0 \ar[u] \ar[r] &
+g^{-1}f^{-1}\mathcal{O}_\mathcal{B} \ar[u] \ar[r]|\hole &
+g^{-1}f^{-1}\mathcal{O}_\mathcal{B} \ar[u] \ar[r] & 0 \\
+0 \ar[r] &
+\mathcal{F} \ar[ruu] \ar[r] &
+\mathcal{O}_{\mathcal{C}'} \ar[r] &
+\mathcal{O}_\mathcal{C} \ar[ruu] \ar[r] & 0 \\
+0 \ar[r] & 0 \ar[ruu]|\hole \ar[u] \ar[r] &
+f^{-1}\mathcal{O}_\mathcal{B} \ar[ruu]|\hole \ar[u] \ar[r] &
+f^{-1}\mathcal{O}_\mathcal{B} \ar[ruu]|\hole \ar[u] \ar[r] & 0
+}
+$$
+and a compatibility between the constructions in the proofs
+of Lemmas \ref{lemma-extensions-of-relative-ringed-topoi} and
+\ref{lemma-huge-diagram-ringed-topoi}
+whose statement and proof we omit. (See remark below for a direct argument.)
+\end{proof}
+
+\begin{remark}
+\label{remark-extensions-of-relative-ringed-topoi-functorial}
+Let $f : (\Sh(\mathcal{C}), \mathcal{O}_\mathcal{C}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$,
+$g : (\Sh(\mathcal{D}), \mathcal{O}_\mathcal{D}) \to
+(\Sh(\mathcal{C}), \mathcal{O}_\mathcal{C})$,
+$\mathcal{F}$,
+$\mathcal{G}$,
+$c : g^*\mathcal{F} \to \mathcal{G}$,
+$0 \to \mathcal{F} \to \mathcal{O}_{\mathcal{C}'} \to
+\mathcal{O}_\mathcal{C} \to 0$,
+$\xi \in \Ext^1_{\mathcal{O}_\mathcal{C}}(
+\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}, \mathcal{F})$,
+$0 \to \mathcal{G} \to \mathcal{O}_{\mathcal{D}'} \to
+\mathcal{O}_\mathcal{D} \to 0$, and
+$\zeta \in \Ext^1_{\mathcal{O}_\mathcal{D}}(
+\NL_{\mathcal{O}_\mathcal{D}/\mathcal{O}_\mathcal{B}}, \mathcal{G})$
+be as in Lemma \ref{lemma-extensions-of-relative-ringed-topoi-functorial}.
+Using pushout along $c : g^{-1}\mathcal{F} \to \mathcal{G}$
+we can construct an extension
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{G} \ar[r] &
+\mathcal{O}'_1 \ar[r] &
+g^{-1}\mathcal{O}_\mathcal{C} \ar[r] & 0 \\
+0 \ar[r] &
+g^{-1}\mathcal{F} \ar[u]^c \ar[r] &
+g^{-1}\mathcal{O}_{\mathcal{C}'} \ar[u] \ar[r] &
+g^{-1}\mathcal{O}_\mathcal{C} \ar@{=}[u] \ar[r] & 0
+}
+$$
+Using pullback along
+$g^\sharp : g^{-1}\mathcal{O}_\mathcal{C} \to \mathcal{O}_\mathcal{D}$
+we can construct an extension
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{G} \ar[r] &
+\mathcal{O}_{\mathcal{D}'} \ar[r] &
+\mathcal{O}_\mathcal{D} \ar[r] & 0 \\
+0 \ar[r] &
+\mathcal{G} \ar@{=}[u] \ar[r] &
+\mathcal{O}'_2 \ar[u] \ar[r] &
+g^{-1}\mathcal{O}_\mathcal{C} \ar[u] \ar[r] & 0
+}
+$$
+A diagram chase tells us that there exists a morphism
+$g' : (\Sh(\mathcal{D}), \mathcal{O}_{\mathcal{D}'}) \to
+(\Sh(\mathcal{C}), \mathcal{O}_{\mathcal{C}'})$
+over $(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$
+compatible with $g$ and $c$ if and only if $\mathcal{O}'_1$ is isomorphic
+to $\mathcal{O}'_2$ as $g^{-1}f^{-1}\mathcal{O}_\mathcal{B}$-algebra extensions
+of $g^{-1}\mathcal{O}_\mathcal{C}$ by $\mathcal{G}$. By
+Lemma \ref{lemma-extensions-of-relative-ringed-topoi}
+these extensions are classified by the LHS of
+$$
+\Ext^1_{g^{-1}\mathcal{O}_\mathcal{C}}(
+\NL_{g^{-1}\mathcal{O}_\mathcal{C}/g^{-1}f^{-1}\mathcal{O}_\mathcal{B}},
+\mathcal{G}) =
+\Ext^1_{\mathcal{O}_\mathcal{D}}(
+Lg^*\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}, \mathcal{G})
+$$
+Here the equality comes from tensor-hom adjunction and
+the equalities
+$$
+\NL_{g^{-1}\mathcal{O}_\mathcal{C}/g^{-1}f^{-1}\mathcal{O}_\mathcal{B}} =
+g^{-1}\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}
+\quad\text{and}\quad
+Lg^*\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}} =
+g^{-1}\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}}
\otimes_{g^{-1}\mathcal{O}_\mathcal{C}}^\mathbf{L} \mathcal{O}_\mathcal{D}
+$$
+For the first of these see
+Modules on Sites, Lemma \ref{sites-modules-lemma-pullback-NL}; the second
+follows from the definition of derived pullback.
+Thus, in order to see that
+Lemma \ref{lemma-extensions-of-relative-ringed-topoi-functorial}
+is true, it suffices to show that $\mathcal{O}'_1$ corresponds
to the image of $\xi$ and that $\mathcal{O}'_2$ corresponds to
+the image of $\zeta$.
+The correspondence between $\xi$ and $\mathcal{O}'_1$
+is immediate from the construction of the class $\xi$ in
+Remark \ref{remark-extensions-of-relative-ringed-topoi}.
+For the correspondence between $\zeta$ and $\mathcal{O}'_2$,
+we first choose a commutative diagram
+$$
+\xymatrix{
+\mathcal{E} \ar[d]_{\beta'} \ar[rd]^\beta \\
+\mathcal{O}_{\mathcal{D}'} \ar[r] & \mathcal{O}_\mathcal{D}
+}
+$$
+such that $g^{-1}f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}] \to
+\mathcal{O}_\mathcal{D}$
+is surjective with kernel $\mathcal{K}$. Next choose a
+commutative diagram
+$$
+\xymatrix{
+\mathcal{E} \ar[d]_{\beta'} &
+\mathcal{E}' \ar[l]^\varphi \ar[d]_{\alpha'} \ar[rd]^\alpha \\
+\mathcal{O}_{\mathcal{D}'} &
+\mathcal{O}'_2 \ar[l] \ar[r] &
+g^{-1}\mathcal{O}_\mathcal{C}
+}
+$$
+such that $g^{-1}f^{-1}\mathcal{O}_\mathcal{B}[\mathcal{E}'] \to
+g^{-1}\mathcal{O}_\mathcal{C}$
+is surjective with kernel $\mathcal{J}$. (For example just take
+$\mathcal{E}' = \mathcal{E} \amalg \mathcal{O}'_2$ as a sheaf of sets.)
+The map $\varphi$ induces a map of complexes $\NL(\alpha) \to \NL(\beta)$
(notation as in Modules on Sites,
Section \ref{sites-modules-section-netherlander})
+and in particular
+$\bar\varphi : \mathcal{J}/\mathcal{J}^2 \to \mathcal{K}/\mathcal{K}^2$.
Then $\NL(\alpha) \cong
\NL_{g^{-1}\mathcal{O}_\mathcal{C}/g^{-1}f^{-1}\mathcal{O}_\mathcal{B}}$
and
$\NL(\beta) \cong \NL_{\mathcal{O}_\mathcal{D}/\mathcal{O}_\mathcal{B}}$
+and the map of complexes $\NL(\alpha) \to \NL(\beta)$
+represents the map
+$Lg^*\NL_{\mathcal{O}_\mathcal{C}/\mathcal{O}_\mathcal{B}} \to
+\NL_{\mathcal{O}_\mathcal{D}/\mathcal{O}_\mathcal{B}}$
+used in the
+statement of Lemma \ref{lemma-extensions-of-relative-ringed-topoi-functorial}
+(see first part of its proof). Now $\zeta$ corresponds to the
+class of the map $\mathcal{K}/\mathcal{K}^2 \to \mathcal{G}$
+induced by $\beta'$, see
+Remark \ref{remark-extensions-of-relative-ringed-topoi}.
+Similarly, the extension $\mathcal{O}'_2$ corresponds to the map
+$\mathcal{J}/\mathcal{J}^2 \to \mathcal{G}$ induced by $\alpha'$.
+The commutative diagram above shows that this map is
+the composition of the map $\mathcal{K}/\mathcal{K}^2 \to \mathcal{G}$
+induced by $\beta'$ with the map
+$\bar\varphi : \mathcal{J}/\mathcal{J}^2 \to \mathcal{K}/\mathcal{K}^2$.
+This proves the compatibility we were looking for.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-parametrize-solutions-ringed-topoi}
+Let $t : (\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B}) \to
+(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})$,
+$\mathcal{J} = \Ker(t^\sharp)$,
+$f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B})$, $\mathcal{G}$, and
+$c : \mathcal{J} \to \mathcal{G}$ be as in
+(\ref{equation-to-solve-ringed-topoi}).
+Denote $\xi \in \Ext^1_{\mathcal{O}_\mathcal{B}}(
+\NL_{\mathcal{O}_\mathcal{B}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{J})$
+the element corresponding to the extension $\mathcal{O}_{\mathcal{B}'}$
+of $\mathcal{O}_\mathcal{B}$ by $\mathcal{J}$ via
+Lemma \ref{lemma-extensions-of-relative-ringed-topoi}.
+The set of isomorphism classes of solutions is canonically bijective
+to the fibre of
+$$
+\Ext^1_\mathcal{O}(\NL_{\mathcal{O}/\mathcal{O}_{\mathcal{B}'}},
+\mathcal{G})\to
+\Ext^1_\mathcal{O}(
+Lf^*\NL_{\mathcal{O}_\mathcal{B}/\mathcal{O}_{\mathcal{B}'}}, \mathcal{G})
+$$
+over the image of $\xi$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-extensions-of-relative-ringed-topoi}
+applied to $t \circ f : (\Sh(\mathcal{C}), \mathcal{O}) \to
+(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})$
+and the $\mathcal{O}$-module $\mathcal{G}$
+we see that elements $\zeta$ of
+$\Ext^1_\mathcal{O}(\NL_{\mathcal{O}/\mathcal{O}_{\mathcal{B}'}},
+\mathcal{G})$
+parametrize extensions
+$0 \to \mathcal{G} \to \mathcal{O}' \to \mathcal{O} \to 0$
+of $f^{-1}\mathcal{O}_{\mathcal{B}'}$-algebras. By
+Lemma \ref{lemma-extensions-of-relative-ringed-topoi-functorial} applied
+to
+$$
+(\Sh(\mathcal{C}), \mathcal{O}) \xrightarrow{f}
+(\Sh(\mathcal{B}), \mathcal{O}_\mathcal{B}) \xrightarrow{t}
+(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})
+$$
+and $c : \mathcal{J} \to \mathcal{G}$
we see that there is a morphism
+$$
+f' :
+(\Sh(\mathcal{C}), \mathcal{O}')
+\longrightarrow
+(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})
+$$
+over $(\Sh(\mathcal{B}'), \mathcal{O}_{\mathcal{B}'})$
+compatible with $c$ and $f$ if and only if
+$\zeta$ maps to $\xi$. Of course this is the same thing as saying
+$\mathcal{O}'$ is a solution of (\ref{equation-to-solve-ringed-topoi}).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Deformations of algebraic spaces}
+\label{section-deformations-spaces}
+
+\noindent
+In this section we spell out what the results in
+Section \ref{section-deformations-ringed-topoi}
+mean for deformations of algebraic spaces.
+
+\begin{lemma}
+\label{lemma-match-thickenings}
+Let $S$ be a scheme. Let $i : Z \to Z'$ be a morphism of algebraic spaces
+over $S$. The following are equivalent
+\begin{enumerate}
+\item $i$ is a thickening of algebraic spaces as defined
+in More on Morphisms of Spaces, Section
+\ref{spaces-more-morphisms-section-thickenings}, and
+\item the associated morphism
+$i_{small} : (\Sh(Z_\etale), \mathcal{O}_Z) \to
+(\Sh(Z'_\etale), \mathcal{O}_{Z'})$
+of ringed topoi (Properties of Spaces, Lemma
+\ref{spaces-properties-lemma-morphism-ringed-topoi})
+is a thickening in the sense of
+Section \ref{section-thickenings-ringed-topoi}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We stress that this is not a triviality.
+
+\medskip\noindent
+Assume (1). By More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-thickening-equivalence}
+the morphism $i$ induces an equivalence of small \'etale
+sites and in particular of topoi. Of course $i^\sharp$
+is surjective with locally nilpotent kernel by definition
+of thickenings.
+
+\medskip\noindent
+Assume (2). (This direction is less important and more of
+a curiosity.) For any \'etale morphism $Y' \to Z'$ we see
+that $Y = Z \times_{Z'} Y'$ has the same \'etale topos
+as $Y'$. In particular, $Y'$ is quasi-compact if and only if
+$Y$ is quasi-compact because being quasi-compact
+is a topos theoretic notion (Sites, Lemma \ref{sites-lemma-quasi-compact}).
+Having said this we see that $Y'$ is quasi-compact and quasi-separated
+if and only if $Y$ is quasi-compact and quasi-separated
+(because you can characterize $Y'$ being quasi-separated by saying
+that for all $Y'_1, Y'_2$ quasi-compact algebraic spaces \'etale over $Y'$
+we have that $Y'_1 \times_{Y'} Y'_2$ is quasi-compact).
+Take $Y'$ affine. Then the algebraic space $Y$ is
+quasi-compact and quasi-separated. For any
+quasi-coherent $\mathcal{O}_Y$-module $\mathcal{F}$ we have
+$H^q(Y, \mathcal{F}) = H^q(Y', (Y \to Y')_*\mathcal{F})$
+because the \'etale topoi are the same.
+Then $H^q(Y', (Y \to Y')_*\mathcal{F}) = 0$
+because the pushforward is quasi-coherent
+(Morphisms of Spaces, Lemma \ref{spaces-morphisms-lemma-pushforward})
+and $Y$ is affine. It follows that $Y'$ is affine by
+Cohomology of Spaces, Proposition
+\ref{spaces-cohomology-proposition-vanishing-affine}
+(there surely is a proof of this direction of the lemma
+avoiding this reference).
+Hence $i$ is an affine morphism. In the affine case it
+follows easily from the conditions in
+Section \ref{section-thickenings-ringed-topoi}
+that $i$ is a thickening of algebraic spaces.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-deform-spaces}
+Let $S$ be a scheme.
+Let $Y \subset Y'$ be a first order thickening of algebraic spaces
+over $S$.
+Let $f : X \to Y$ be a flat morphism of algebraic spaces over $S$.
+If there exists a flat morphism $f' : X' \to Y'$ of algebraic spaces over $S$
and an isomorphism $a : X \to X' \times_{Y'} Y$ over $Y$, then
+\begin{enumerate}
+\item the set of isomorphism classes of pairs $(f' : X' \to Y', a)$ is
+principal homogeneous under
+$\Ext^1_{\mathcal{O}_X}(\NL_{X/Y}, f^*\mathcal{C}_{Y/Y'})$, and
\item the set of automorphisms $\varphi : X' \to X'$
+over $Y'$ which reduce to the identity on $X' \times_{Y'} Y$
+is $\Ext^0_{\mathcal{O}_X}(\NL_{X/Y}, f^*\mathcal{C}_{Y/Y'})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will apply the material on deformations of ringed topoi
+to the small \'etale topoi of the algebraic spaces in the lemma.
+We may think of $X$ as a closed subspace of $X'$
+so that $(f, f') : (X \subset X') \to (Y \subset Y')$
+is a morphism of first order thickenings.
+By Lemma \ref{lemma-match-thickenings}
+this translates into a morphism of thickenings of ringed topoi.
+Then we see from More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-deform}
+(or from the more general Lemma \ref{lemma-deform-module-ringed-topoi})
that the ideal sheaf of $X$ in $X'$ is equal to $f^*\mathcal{C}_{Y/Y'}$
+and this is in fact equivalent to flatness of $X'$ over $Y'$.
+Hence we have a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] & f^*\mathcal{C}_{Y/Y'} \ar[r] &
+\mathcal{O}_{X'} \ar[r] &
+\mathcal{O}_X \ar[r] & 0 \\
+0 \ar[r] &
+f_{small}^{-1}\mathcal{C}_{Y/Y'} \ar[u] \ar[r] &
+f_{small}^{-1}\mathcal{O}_{Y'} \ar[u] \ar[r] &
+f_{small}^{-1}\mathcal{O}_Y \ar[u] \ar[r] & 0
+}
+$$
+Please compare with (\ref{equation-to-solve-ringed-topoi}).
+Observe that automorphisms $\varphi$ as in (2)
+give automorphisms $\varphi^\sharp : \mathcal{O}_{X'} \to \mathcal{O}_{X'}$
+fitting in the diagram above. Conversely, an automorphism
+$\alpha : \mathcal{O}_{X'} \to \mathcal{O}_{X'}$
+fitting into the diagram of sheaves above is equal to $\varphi^\sharp$
+for some automorphism $\varphi$ as in (2)
+by More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-first-order-thickening-maps}.
+Finally, by More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-first-order-thickening}
+if we find another sheaf of rings $\mathcal{A}$ on $X_\etale$
+fitting into the diagram
+$$
+\xymatrix{
+0 \ar[r] & f^*\mathcal{C}_{Y/Y'} \ar[r] &
+\mathcal{A} \ar[r] &
+\mathcal{O}_X \ar[r] & 0 \\
+0 \ar[r] &
+f_{small}^{-1}\mathcal{C}_{Y/Y'} \ar[u] \ar[r] &
+f_{small}^{-1}\mathcal{O}_{Y'} \ar[u] \ar[r] &
+f_{small}^{-1}\mathcal{O}_Y \ar[u] \ar[r] & 0
+}
+$$
+then there exists a first order thickening $X \subset X''$
+with $\mathcal{O}_{X''} = \mathcal{A}$ and applying
+More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-first-order-thickening-maps}
+once more, we obtain a morphism
+$(f, f'') : (X \subset X'') \to (Y \subset Y')$ with all the
+desired properties.
+Thus part (1) follows from
+Lemma \ref{lemma-choices-ringed-topoi}
+and part (2) from part (2) of
+Lemma \ref{lemma-huge-diagram-ringed-topoi}.
+(Note that $\NL_{X/Y}$ as defined for a morphism of algebraic spaces in
+More on Morphisms of Spaces, Section
+\ref{spaces-more-morphisms-section-netherlander}
+agrees with $\NL_{X/Y}$ as used in
+Section \ref{section-deformations-ringed-topoi}.)
+\end{proof}
+
+\noindent
+Let $S$ be a scheme. Let $f : X \to B$
+be a morphism of algebraic spaces over $S$.
+Let $\mathcal{F} \to \mathcal{G}$ be a homomorphism of $\mathcal{O}_X$-modules
+(not necessarily quasi-coherent).
+Consider the functor
+$$
+F :
+\left\{
+\begin{matrix}
\text{extensions of }f^{-1}\mathcal{O}_B\text{-algebras}\\
+0 \to \mathcal{F} \to \mathcal{O}' \to \mathcal{O}_X \to 0\\
+\text{where }\mathcal{F}\text{ is an ideal of square zero}
+\end{matrix}
+\right\}
+\longrightarrow
+\left\{
+\begin{matrix}
\text{extensions of }f^{-1}\mathcal{O}_B\text{-algebras}\\
+0 \to \mathcal{G} \to \mathcal{O}' \to \mathcal{O}_X \to 0\\
+\text{where }\mathcal{G}\text{ is an ideal of square zero}
+\end{matrix}
+\right\}
+$$
+given by pushout.
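\medskip\noindent
Here ``pushout'' can be made explicit; the following is the standard
construction, recalled for convenience (it is not spelled out in the
text). Given an object
$0 \to \mathcal{F} \to \mathcal{O}' \to \mathcal{O}_X \to 0$
of the left hand side, set
$$
\mathcal{O}'' =
\Coker\left(
\mathcal{F} \longrightarrow \mathcal{G} \oplus \mathcal{O}', \quad
s \longmapsto (\varphi(s), -s)
\right)
$$
where $\varphi : \mathcal{F} \to \mathcal{G}$ is the given homomorphism.
The multiplication
$(t_1, s_1)(t_2, s_2) = (\overline{s}_1 t_2 + \overline{s}_2 t_1, s_1 s_2)$,
where $\overline{s}_i$ denotes the image of $s_i$ in $\mathcal{O}_X$,
descends to $\mathcal{O}''$ because $\varphi$ is a map of
$\mathcal{O}_X$-modules and $\mathcal{F}$, $\mathcal{G}$ are ideals of
square zero. Then $F(\mathcal{O}')$ is the resulting extension
$0 \to \mathcal{G} \to \mathcal{O}'' \to \mathcal{O}_X \to 0$.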
+
+\begin{lemma}
+\label{lemma-thickening-space-quasi-coherent}
+In the situation above assume that $X$ is quasi-compact and quasi-separated
+and that $DQ_X(\mathcal{F}) \to DQ_X(\mathcal{G})$
+(Derived Categories of Spaces, Section
+\ref{spaces-perfect-section-better-coherator})
+is an isomorphism. Then the functor $F$ is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+Recall that $\NL_{X/B}$ is an object of $D_\QCoh(\mathcal{O}_X)$, see
+More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-netherlander-quasi-coherent}.
+Hence our assumption implies the maps
+$$
+\Ext^i_X(\NL_{X/B}, \mathcal{F}) \longrightarrow
+\Ext^i_X(\NL_{X/B}, \mathcal{G})
+$$
+are isomorphisms for all $i$. This implies our functor is fully
+faithful by Lemma \ref{lemma-huge-diagram-ringed-topoi}.
+On the other hand, the functor is essentially surjective by
+Lemma \ref{lemma-choices-ringed-topoi} because
+we have the solutions $\mathcal{O}_X \oplus \mathcal{F}$
+and $\mathcal{O}_X \oplus \mathcal{G}$ in both categories.
+\end{proof}
+
+\noindent
+Let $S$ be a scheme. Let $B \subset B'$ be a first order thickening of
+algebraic spaces over $S$ with ideal sheaf $\mathcal{J}$
+which we view either as a quasi-coherent $\mathcal{O}_B$-module
+or as a quasi-coherent sheaf of ideals on $B'$, see
+More on Morphisms of Spaces, Section
+\ref{spaces-more-morphisms-section-thickenings}.
+Let $f : X \to B$ be a morphism of algebraic spaces over $S$.
+Let $\mathcal{F} \to \mathcal{G}$ be a homomorphism of
+$\mathcal{O}_X$-modules (not necessarily quasi-coherent).
+Let $c : f^{-1}\mathcal{J} \to \mathcal{F}$ be a map
+of $f^{-1}\mathcal{O}_B$-modules and denote
+$c' : f^{-1}\mathcal{J} \to \mathcal{G}$ the composition.
+Consider the functor
+$$
+FT :
+\{\text{solutions to }(\ref{equation-to-solve-ringed-topoi})
+\text{ for }\mathcal{F}\text{ and }c\}
+\longrightarrow
+\{\text{solutions to }(\ref{equation-to-solve-ringed-topoi})
+\text{ for }\mathcal{G}\text{ and }c'\}
+$$
+given by pushout.
+
+\begin{lemma}
+\label{lemma-thickening-over-thickening-space-quasi-coherent}
+In the situation above assume that $X$ is quasi-compact and quasi-separated
+and that $DQ_X(\mathcal{F}) \to DQ_X(\mathcal{G})$
+(Derived Categories of Spaces, Section
+\ref{spaces-perfect-section-better-coherator})
+is an isomorphism. Then the functor $FT$ is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+A solution of (\ref{equation-to-solve-ringed-topoi}) for $\mathcal{F}$
+in particular gives an extension of $f^{-1}\mathcal{O}_{B'}$-algebras
+$$
+0 \to \mathcal{F} \to \mathcal{O}' \to \mathcal{O}_X \to 0
+$$
+where $\mathcal{F}$ is an ideal of square zero. Similarly for $\mathcal{G}$.
+Moreover, given such an extension, we obtain a map
+$c_{\mathcal{O}'} : f^{-1}\mathcal{J} \to \mathcal{F}$.
+Thus we are looking at the full subcategory of such extensions
+of $f^{-1}\mathcal{O}_{B'}$-algebras with $c = c_{\mathcal{O}'}$.
+Clearly, if $\mathcal{O}'' = F(\mathcal{O}')$ where
+$F$ is the equivalence of Lemma \ref{lemma-thickening-space-quasi-coherent}
+(applied to $X \to B'$ this time),
+then $c_{\mathcal{O}''}$ is the composition of
+$c_{\mathcal{O}'}$ and the map $\mathcal{F} \to \mathcal{G}$.
+This proves the lemma.
+\end{proof}
+
+
+
+
+
+\section{Deformations of complexes}
+\label{section-deformations-complexes}
+
+\noindent
+This section is a warmup for the next one.
+We will use as much as possible the material
+in the chapters on commutative algebra.
+
+\begin{lemma}
+\label{lemma-canonical-class-algebra}
+Let $R' \to R$ be a surjection of rings whose kernel is an ideal
+$I$ of square zero. For every $K \in D^-(R)$ there is a canonical
+map
+$$
+\omega(K) : K \longrightarrow K \otimes_R^\mathbf{L} I[2]
+$$
+in $D(R)$ with the following properties
+\begin{enumerate}
+\item $\omega(K) = 0$ if and only if there exists
+$K' \in D(R')$ with $K' \otimes_{R'}^\mathbf{L} R = K$,
+\item given $K \to L$ in $D^-(R)$ the diagram
+$$
+\xymatrix{
+K \ar[d] \ar[rr]_-{\omega(K)} & &
+K \otimes^\mathbf{L}_R I[2] \ar[d] \\
+L \ar[rr]^-{\omega(L)} & &
+L \otimes^\mathbf{L}_R I[2]
+}
+$$
+commutes, and
+\item formation of $\omega(K)$ is compatible with ring maps $R' \to S'$
+(see proof for a precise statement).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose a bounded above complex $K^\bullet$ of free $R$-modules representing
+$K$. Then we can choose free $R'$-modules $(K')^n$ lifting $K^n$.
+We can choose $R'$-module maps $(d')^n_K : (K')^n \to (K')^{n + 1}$
+lifting the differentials $d^n_K : K^n \to K^{n + 1}$ of $K^\bullet$.
+Although the compositions
+$$
+(d')^{n + 1}_K \circ (d')^n_K : (K')^n \to (K')^{n + 2}
+$$
+may not be zero, they do factor as
+$$
+(K')^n \to K^n \xrightarrow{\omega^n_K}
+K^{n + 2} \otimes_R I = I(K')^{n + 2} \to (K')^{n + 2}
+$$
because $d^{n + 1} \circ d^n = 0$ (and the composition factors through
the quotient $K^n$ of $(K')^n$ because it kills $I(K')^n$, as $I^2 = 0$).
+A calculation shows that $\omega^n_K$ defines a map of complexes.
+This map of complexes defines $\omega(K)$.
+
+\medskip\noindent
+Let us prove this construction is compatible with a map of complexes
+$\alpha^\bullet : K^\bullet \to L^\bullet$ of bounded above free $R$-modules
+and given choices of lifts $(K')^n, (L')^n, (d')^n_K, (d')^n_L$.
+Namely, choose $(\alpha')^n : (K')^n \to (L')^n$ lifting the
+components $\alpha^n : K^n \to L^n$. As before we get a
+factorization
+$$
+(K')^n \to K^n \xrightarrow{h^n}
L^{n + 1} \otimes_R I = I(L')^{n + 1} \to (L')^{n + 1}
+$$
+of $(d')^n_L \circ (\alpha')^n - (\alpha')^{n + 1} \circ (d')_K^n$.
Then it is a pleasant calculation to show that
+$$
+\omega^n_L \circ \alpha^n =
+(d_L^{n + 1} \otimes \text{id}_I) \circ h^n + h^{n + 1} \circ d_K^n +
+(\alpha^{n + 2} \otimes \text{id}_I) \circ \omega^n_K
+$$
+This proves the commutativity of the diagram in (2) of the lemma
+in this particular case. Using this for two different choices
+of bounded above free complexes representing $K$, we find that
+$\omega(K)$ is well defined! And of course (2) holds in general as well.
+
+\medskip\noindent
+If $K$ lifts to $K'$ in $D^-(R')$, then we can represent
+$K'$ by a bounded above complex of free $R'$-modules
+and we see immediately that $\omega(K) = 0$.
+Conversely, going back to our choices $K^\bullet$,
+$(K')^n$, $(d')^n_K$, if $\omega(K) = 0$, then we can find
+$g^n : K^n \to K^{n + 1} \otimes_R I$ with
+$$
\omega^n_K = (d_K^{n + 1} \otimes \text{id}_I) \circ g^n +
+g^{n + 1} \circ d_K^n
+$$
+This means that with differentials
+$(d')^n_K + g^n : (K')^n \to (K')^{n + 1}$
+we obtain a complex of free $R'$-modules lifting $K^\bullet$.
+This proves (1).
+
+\medskip\noindent
+Finally, part (3) means the following: Let $R' \to S'$ be a map of
+rings. Set $S = S' \otimes_{R'} R$ and denote $J = IS' \subset S'$
+the square zero kernel of $S' \to S$. Then given $K \in D^-(R)$
+the statement is that we get a commutative diagram
+$$
+\xymatrix{
+K \otimes_R^\mathbf{L} S \ar[d] \ar[rr]_-{\omega(K) \otimes \text{id}} & &
+(K \otimes^\mathbf{L}_R I[2]) \otimes_R^\mathbf{L} S \ar[d] \\
+K \otimes_R^\mathbf{L} S \ar[rr]^-{\omega(K \otimes_R^\mathbf{L} S)} & &
+(K \otimes_R^\mathbf{L} S) \otimes^\mathbf{L}_S J[2]
+}
+$$
+Here the right vertical arrow comes from
+$$
+(K \otimes^\mathbf{L}_R I[2]) \otimes_R^\mathbf{L} S =
+(K \otimes_R^\mathbf{L} S) \otimes_S^\mathbf{L}
+(I \otimes_R^\mathbf{L} S)[2] \longrightarrow
+(K \otimes_R^\mathbf{L} S) \otimes_S^\mathbf{L} J[2]
+$$
+Choose $K^\bullet$, $(K')^n$, and $(d')^n_K$ as above.
+Then we can use $K^\bullet \otimes_R S$, $(K')^n \otimes_{R'} S'$, and
+$(d')^n_K \otimes \text{id}_{S'}$ for the construction of
+$\omega(K \otimes_R^\mathbf{L} S)$.
+With these choices commutativity
+is immediately verified on the level of maps of complexes.
+\end{proof}
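\medskip\noindent
The construction in the first paragraph of the proof can be illustrated
numerically in the simplest case $R' = \mathbf{Z}/4\mathbf{Z} \to
R = \mathbf{Z}/2\mathbf{Z}$ with $I = 2\mathbf{Z}/4\mathbf{Z}$. The sketch
below (the complex and the lifts are ad hoc choices, not data from the
text) lifts the differentials of a complex of finite free
$\mathbf{Z}/2\mathbf{Z}$-modules, divides $(d')^{n + 1} \circ (d')^n$
by $2$, and checks that the resulting maps $\omega^n_K$ define a map of
complexes $K^\bullet \to (K^\bullet \otimes_R I)[2]$.

```python
# Toy check of the construction of omega(K) above, for the square-zero
# extension Z/4 -> Z/2 (so I = 2Z/4, isomorphic to Z/2 as a Z/2-module).
# All matrices are ad hoc choices, not data from the text.

def matmul(A, B, m):
    """Multiply matrices with entries in Z/m."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % m
             for j in range(len(B[0]))] for i in range(len(A))]

# A complex K^0 -> K^1 -> K^2 -> K^3 of free Z/2-modules of rank 2:
d0 = [[1, 0], [1, 0]]
d1 = [[1, 1], [1, 1]]
d2 = [[1, 1], [0, 0]]
assert matmul(d1, d0, 2) == [[0, 0], [0, 0]]  # d o d = 0 over Z/2
assert matmul(d2, d1, 2) == [[0, 0], [0, 0]]

# Chosen lifts (d')^n over Z/4 (entries congruent to d^n mod 2):
D0 = [[1, 0], [1, 0]]
D1 = [[1, 1], [1, 1]]
D2 = [[3, 1], [2, 0]]

def omega(Dnext, Dn):
    """(d')^{n+1} o (d')^n is divisible by 2; return the quotient mod 2."""
    P = matmul(Dnext, Dn, 4)
    assert all(x % 2 == 0 for row in P for x in row)
    return [[(x // 2) % 2 for x in row] for row in P]

w0 = omega(D1, D0)  # omega^0_K : K^0 -> K^2 (x) I
w1 = omega(D2, D1)  # omega^1_K : K^1 -> K^3 (x) I

# omega is a map of complexes K -> (K (x) I)[2]:
assert matmul(d2, w0, 2) == matmul(w1, d0, 2)
print("omega is a cochain map:", w0, w1)
```

Changing the lifts $(d')^n$ changes the maps $\omega^n_K$ only up to
homotopy, in line with the well-definedness argument in the proof.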
+
+
+
+
+
+
+
+\section{Deformations of complexes on ringed topoi}
+\label{section-thickenings-complexes}
+
+\noindent
+This material is taken from \cite{lieblich-complexes}.
+
+\medskip\noindent
+The material in this section works in the setting of a first order thickening
+of ringed topoi as defined in Section \ref{section-thickenings-ringed-topoi}.
+However, in order to simplify the notation we will assume the underlying
+sites $\mathcal{C}$ and $\mathcal{D}$ are the same.
+Moreover, the surjective homomorphism $\mathcal{O}' \to \mathcal{O}$
+of sheaves of rings will be denoted $\mathcal{O} \to \mathcal{O}_0$
+as is perhaps more customary in the literature.
+
+\begin{lemma}
+\label{lemma-lift-complex}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O} \to \mathcal{O}_0$
+be a surjection of sheaves of rings. Assume given the following data
+\begin{enumerate}
+\item flat $\mathcal{O}$-modules $\mathcal{G}^n$,
+\item maps of $\mathcal{O}$-modules $\mathcal{G}^n \to \mathcal{G}^{n + 1}$,
+\item a complex $\mathcal{K}_0^\bullet$ of $\mathcal{O}_0$-modules,
+\item maps of $\mathcal{O}$-modules $\mathcal{G}^n \to \mathcal{K}_0^n$
+\end{enumerate}
+such that
+\begin{enumerate}
+\item[(a)] $H^n(\mathcal{K}_0^\bullet) = 0$ for $n \gg 0$,
+\item[(b)] $\mathcal{G}^n = 0$ for $n \gg 0$,
+\item[(c)] with
+$\mathcal{G}^n_0 = \mathcal{G}^n \otimes_\mathcal{O} \mathcal{O}_0$
+the induced maps determine a complex $\mathcal{G}_0^\bullet$ and a map
+of complexes $\mathcal{G}_0^\bullet \to \mathcal{K}_0^\bullet$.
+\end{enumerate}
+Then there exist
+\begin{enumerate}
+\item[(\romannumeral1)]
+flat $\mathcal{O}$-modules $\mathcal{F}^n$,
+\item[(\romannumeral2)]
+maps of $\mathcal{O}$-modules $\mathcal{F}^n \to \mathcal{F}^{n + 1}$,
+\item[(\romannumeral3)]
+maps of $\mathcal{O}$-modules $\mathcal{F}^n \to \mathcal{K}_0^n$,
+\item[(\romannumeral4)]
+maps of $\mathcal{O}$-modules $\mathcal{G}^n \to \mathcal{F}^n$,
+\end{enumerate}
+such that $\mathcal{F}^n = 0$ for $n \gg 0$, such that the diagrams
+$$
+\xymatrix{
+\mathcal{G}^n \ar[r] \ar[d] & \mathcal{G}^{n + 1} \ar[d] \\
+\mathcal{F}^n \ar[r] & \mathcal{F}^{n + 1}
+}
+$$
+commute for all $n$, such that the composition
+$\mathcal{G}^n \to \mathcal{F}^n \to \mathcal{K}_0^n$
+is the given map $\mathcal{G}^n \to \mathcal{K}_0^n$, and such that with
+$\mathcal{F}^n_0 = \mathcal{F}^n \otimes_\mathcal{O} \mathcal{O}_0$
+we obtain a complex $\mathcal{F}_0^\bullet$ and map of complexes
+$\mathcal{F}_0^\bullet \to \mathcal{K}_0^\bullet$ which is a
+quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+We will prove by descending induction on $e$ that we can find $\mathcal{F}^n$,
+$\mathcal{G}^n \to \mathcal{F}^n$, and
+$\mathcal{F}^n \to \mathcal{F}^{n + 1}$ for $n \geq e$
+fitting into a commutative diagram
+$$
+\xymatrix{
+\ldots \ar[r] &
+\mathcal{G}^{e - 1} \ar[r] \ar@/_2pc/[dd] &
+\mathcal{G}^e \ar[d] \ar[r] \ar@/_2pc/[dd] &
+\mathcal{G}^{e + 1} \ar[d] \ar[r] \ar@/_2pc/[dd]|\hole &
+\ldots \\
+& &
+\mathcal{F}^e \ar[d] \ar[r] &
+\mathcal{F}^{e + 1} \ar[d] \ar[r] & \ldots \\
+\ldots \ar[r] &
+\mathcal{K}_0^{e - 1} \ar[r] &
+\mathcal{K}_0^e \ar[r] &
+\mathcal{K}_0^{e + 1} \ar[r] & \ldots
+}
+$$
+such that $\mathcal{F}_0^\bullet$ is a complex,
+the induced map $\mathcal{F}_0^\bullet \to \mathcal{K}_0^\bullet$
+induces an isomorphism on $H^n$ for $n > e$ and a surjection
+for $n = e$. For $e \gg 0$ this is true because we can take
+$\mathcal{F}^n = 0$ for $n \geq e$ in that case by assumptions
+(a) and (b).
+
+\medskip\noindent
+Induction step. We have to construct $\mathcal{F}^{e - 1}$
+and the maps $\mathcal{G}^{e - 1} \to \mathcal{F}^{e - 1}$,
+$\mathcal{F}^{e - 1} \to \mathcal{F}^e$, and
+$\mathcal{F}^{e - 1} \to \mathcal{K}_0^{e - 1}$.
+We will choose $\mathcal{F}^{e - 1} = A \oplus B \oplus C$
+as a direct sum of three pieces.
+
+\medskip\noindent
+For the first we take $A = \mathcal{G}^{e - 1}$ and we choose our map
+$\mathcal{G}^{e - 1} \to \mathcal{F}^{e - 1}$ to be the inclusion of
+the first summand. The maps $A \to \mathcal{K}^{e - 1}_0$
+and $A \to \mathcal{F}^e$ will be the obvious ones.
+
+\medskip\noindent
+To choose $B$ we consider the surjection (by induction hypothesis)
+$$
+\gamma :
+\Ker(\mathcal{F}^e_0 \to \mathcal{F}^{e + 1}_0)
+\longrightarrow
+\Ker(\mathcal{K}^e_0 \to \mathcal{K}^{e + 1}_0)/
+\Im(\mathcal{K}^{e - 1}_0 \to \mathcal{K}^e_0)
+$$
+We can choose a set $I$, for each $i \in I$
+an object $U_i$ of $\mathcal{C}$, and sections
+$s_i \in \mathcal{F}^e(U_i)$, $t_i \in \mathcal{K}^{e - 1}_0(U_i)$
+such that
+\begin{enumerate}
+\item $s_i$ maps to a section of $\Ker(\gamma) \subset
+\Ker(\mathcal{F}^e_0 \to \mathcal{F}^{e + 1}_0)$,
+\item $s_i$ and $t_i$ map to the same section of
+$\mathcal{K}^e_0$,
+\item the sections $s_i$ generate $\Ker(\gamma)$ as an $\mathcal{O}_0$-module.
+\end{enumerate}
We omit giving the full justification for this;
one uses that $\mathcal{F}^e \to \mathcal{F}^e_0$
is a surjective map of sheaves of sets. Then we put
+$$
+B = \bigoplus\nolimits_{i \in I} j_{U_i!}\mathcal{O}_{U_i}
+$$
+and define the maps $B \to \mathcal{F}^e$ and $B \to \mathcal{K}_0^{e - 1}$
+by using $s_i$ and $t_i$ to determine where to send the summand
+$j_{U_i!}\mathcal{O}_{U_i}$.
+
+\medskip\noindent
+With $\mathcal{F}^{e - 1} = A \oplus B$ and maps as above,
+this produces a diagram as above for $e - 1$ such that
+$\mathcal{F}_0^\bullet \to \mathcal{K}_0^\bullet$
+induces an isomorphism on $H^n$ for $n \geq e$.
+To get the map to be surjective on $H^{e - 1}$ we choose
+the summand $C$ as follows.
+Choose a set $J$, for each $j \in J$ an object $U_j$ of $\mathcal{C}$
+and a section $t_j$ of $\Ker(\mathcal{K}^{e - 1}_0 \to \mathcal{K}^e_0)$
+over $U_j$ such that these sections generate this kernel over
+$\mathcal{O}_0$. Then we put
+$$
+C = \bigoplus\nolimits_{j \in J} j_{U_j!}\mathcal{O}_{U_j}
+$$
and the zero map $C \to \mathcal{F}^e$ and the map
$C \to \mathcal{K}_0^{e - 1}$ by using $t_j$ to determine where to send
the summand $j_{U_j!}\mathcal{O}_{U_j}$. This finishes the induction step
+by taking $\mathcal{F}^{e - 1} = A \oplus B \oplus C$ and
+maps as indicated.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-canonical-class}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O} \to \mathcal{O}_0$
+be a surjection of sheaves of rings whose kernel is an ideal sheaf
+$\mathcal{I}$ of square zero. For every object
+$K_0$ in $D^-(\mathcal{O}_0)$ there is a canonical map
+$$
+\omega(K_0) :
+K_0 \longrightarrow
+K_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I}[2]
+$$
+in $D(\mathcal{O}_0)$ such that for any map
+$K_0 \to L_0$ in $D^-(\mathcal{O}_0)$ the diagram
+$$
+\xymatrix{
+K_0 \ar[d] \ar[rr]_-{\omega(K_0)} & &
+(K_0 \otimes^\mathbf{L}_{\mathcal{O}_0} \mathcal{I})[2] \ar[d] \\
+L_0 \ar[rr]^-{\omega(L_0)} & &
+(L_0 \otimes^\mathbf{L}_{\mathcal{O}_0} \mathcal{I})[2]
+}
+$$
+commutes.
+\end{lemma}
+
+\begin{proof}
+Represent $K_0$ by any complex
+$\mathcal{K}_0^\bullet$ of $\mathcal{O}_0$-modules.
+Apply Lemma \ref{lemma-lift-complex}
+with $\mathcal{G}^n = 0$ for all $n$.
+Denote $d : \mathcal{F}^n \to \mathcal{F}^{n + 1}$
+the maps produced by the lemma. Then we see that
+$d \circ d : \mathcal{F}^n \to \mathcal{F}^{n + 2}$
+is zero modulo $\mathcal{I}$. Since $\mathcal{F}^n$ is flat,
+we see that
+$\mathcal{I}\mathcal{F}^n =
+\mathcal{F}^n \otimes_{\mathcal{O}} \mathcal{I} =
+\mathcal{F}^n_0 \otimes_{\mathcal{O}_0} \mathcal{I}$.
+Hence we obtain a canonical map of complexes
+$$
+d \circ d : \mathcal{F}_0^\bullet
+\longrightarrow
+(\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I})[2]
+$$
+Since $\mathcal{F}_0^\bullet$ is a bounded above complex
+of flat $\mathcal{O}_0$-modules, it is K-flat and may be used
+to compute derived tensor product. Moreover, the map of
+complexes $\mathcal{F}_0^\bullet \to \mathcal{K}_0^\bullet$
+is a quasi-isomorphism by construction. Therefore the source and
+target of the map just constructed represent $K_0$ and
+$K_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I}[2]$
+and we obtain our map $\omega(K_0)$.
+
+\medskip\noindent
+Let us show that this procedure is compatible with maps of complexes.
+Namely, let $\mathcal{L}_0^\bullet$ represent another object of
+$D^-(\mathcal{O}_0)$ and suppose that
+$$
+\mathcal{K}_0^\bullet \longrightarrow \mathcal{L}_0^\bullet
+$$
+is a map of complexes. Apply Lemma \ref{lemma-lift-complex}
+for the complex $\mathcal{L}_0^\bullet$, the flat modules
+$\mathcal{F}^n$, the maps $\mathcal{F}^n \to \mathcal{F}^{n + 1}$,
+and the compositions
+$\mathcal{F}^n \to \mathcal{K}_0^n \to \mathcal{L}_0^n$
+(we apologize for the reversal of letters used).
+We obtain flat modules $\mathcal{G}^n$, maps
+$\mathcal{F}^n \to \mathcal{G}^n$, maps
+$\mathcal{G}^n \to \mathcal{G}^{n + 1}$, and maps
+$\mathcal{G}^n \to \mathcal{L}_0^n$ with all properties
+as in the lemma. Then it is clear that
+$$
+\xymatrix{
+\mathcal{F}_0^\bullet \ar[d] \ar[r] &
+(\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I})[2] \ar[d] \\
+\mathcal{G}_0^\bullet \ar[r] &
+(\mathcal{G}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I})[2]
+}
+$$
+is a commutative diagram of complexes.
+
+\medskip\noindent
+To see that $\omega(K_0)$ is well defined, suppose that we have two complexes
+$\mathcal{K}_0^\bullet$ and $(\mathcal{K}'_0)^\bullet$
+of $\mathcal{O}_0$-modules representing $K_0$ and two systems
+$(\mathcal{F}^n, d : \mathcal{F}^n \to \mathcal{F}^{n + 1},
+\mathcal{F}^n \to \mathcal{K}_0^n)$
+and
+$((\mathcal{F}')^n, d : (\mathcal{F}')^n \to (\mathcal{F}')^{n + 1},
+(\mathcal{F}')^n \to \mathcal{K}_0^n)$
+as above. Then we can choose a complex $(\mathcal{K}''_0)^\bullet$
+and quasi-isomorphisms
+$\mathcal{K}_0^\bullet \to (\mathcal{K}''_0)^\bullet$
+and
+$(\mathcal{K}'_0)^\bullet \to (\mathcal{K}''_0)^\bullet$
+realizing the fact that both complexes represent $K_0$ in the
+derived category. Next, we apply the result of the previous paragraph
+to
+$$
+(\mathcal{K}_0)^\bullet \oplus (\mathcal{K}'_0)^\bullet
+\longrightarrow
+(\mathcal{K}''_0)^\bullet
+$$
+This produces a commutative diagram
+$$
+\xymatrix{
+\mathcal{F}_0^\bullet \oplus (\mathcal{F}'_0)^\bullet
+\ar[d] \ar[r] &
+(\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I})[2] \oplus
+((\mathcal{F}'_0)^\bullet \otimes_{\mathcal{O}_0} \mathcal{I})[2] \ar[d] \\
+\mathcal{G}_0^\bullet \ar[r] &
+(\mathcal{G}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I})[2]
+}
+$$
+Since the vertical arrows give quasi-isomorphisms on the summands
+we conclude the desired commutativity in $D(\mathcal{O}_0)$.
+
+\medskip\noindent
+Having established well-definedness, the statement on compatibility
+with maps is a consequence of the result in the second
+paragraph.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-induced-map}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $\alpha : K \to L$ be a map of $D^-(\mathcal{O})$.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}$-modules.
+Let $n \in \mathbf{Z}$.
+\begin{enumerate}
+\item If $H^i(\alpha)$ is an isomorphism for $i \geq n$,
+then $H^i(\alpha \otimes_\mathcal{O}^\mathbf{L} \text{id}_\mathcal{F})$
+is an isomorphism for $i \geq n$.
+\item If $H^i(\alpha)$ is an isomorphism for $i > n$
+and surjective for $i = n$,
+then $H^i(\alpha \otimes_\mathcal{O}^\mathbf{L} \text{id}_\mathcal{F})$
+is an isomorphism for $i > n$ and surjective for $i = n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose a distinguished triangle
+$$
+K \to L \to C \to K[1]
+$$
+In case (2) we see that $H^i(C) = 0$ for $i \geq n$.
+Hence $H^i(C \otimes_\mathcal{O}^\mathbf{L} \mathcal{F}) = 0$
+for $i \geq n$ by (the dual of)
+Derived Categories, Lemma \ref{derived-lemma-negative-vanishing}.
+This in turn shows that
+$H^i(\alpha \otimes_\mathcal{O}^\mathbf{L} \text{id}_\mathcal{F})$
+is an isomorphism for $i > n$ and surjective for $i = n$.
+In case (1) we moreover see that $H^{n - 1}(L) \to H^{n - 1}(C)$
+is surjective. Considering the diagram
+$$
+\xymatrix{
+H^{n - 1}(L) \otimes_\mathcal{O} \mathcal{F} \ar[r] \ar[d] &
+H^{n - 1}(C) \otimes_\mathcal{O} \mathcal{F} \ar@{=}[d] \\
+H^{n - 1}(L \otimes_\mathcal{O}^\mathbf{L} \mathcal{F}) \ar[r] &
+H^{n - 1}(C \otimes_\mathcal{O}^\mathbf{L} \mathcal{F})
+}
+$$
+we conclude the lower horizontal arrow is surjective. Combined with what
+was said before this implies that
+$H^n(\alpha \otimes_\mathcal{O}^\mathbf{L} \text{id}_\mathcal{F})$
+is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-canonical-class-obstruction}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O} \to \mathcal{O}_0$
+be a surjection of sheaves of rings whose kernel is an ideal sheaf
+$\mathcal{I}$ of square zero. For every object
+$K_0$ in $D^-(\mathcal{O}_0)$ the following are equivalent
+\begin{enumerate}
+\item the class
+$\omega(K_0) \in
\Ext^2_{\mathcal{O}_0}(K_0, K_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I})$
+constructed in Lemma \ref{lemma-canonical-class} is zero,
+\item there exists $K \in D^-(\mathcal{O})$ with
+$K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0 = K_0$
+in $D(\mathcal{O}_0)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $K$ be as in (2). Then we can represent $K$ by a bounded above
+complex $\mathcal{F}^\bullet$ of flat $\mathcal{O}$-modules.
+Then $\mathcal{F}_0^\bullet =
+\mathcal{F}^\bullet \otimes_{\mathcal{O}} \mathcal{O}_0$
+represents $K_0$ in $D(\mathcal{O}_0)$.
+Since $d_{\mathcal{F}^\bullet} \circ d_{\mathcal{F}^\bullet} = 0$
+as $\mathcal{F}^\bullet$ is a complex, we see from the very construction
+of $\omega(K_0)$ that it is zero.
+
+\medskip\noindent
+Assume (1). Let $\mathcal{F}^n$, $d : \mathcal{F}^n \to \mathcal{F}^{n + 1}$
+be as in the construction of $\omega(K_0)$. The nullity of
+$\omega(K_0)$ implies that the map
+$$
+\omega = d \circ d : \mathcal{F}_0^\bullet
+\longrightarrow
+(\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I})[2]
+$$
+is zero in $D(\mathcal{O}_0)$. By definition of the derived category
+as the localization of the homotopy category of complexes
+of $\mathcal{O}_0$-modules, there exists a quasi-isomorphism
+$\alpha : \mathcal{G}_0^\bullet \to \mathcal{F}_0^\bullet$
such that there exist $\mathcal{O}_0$-module maps
$h^n : \mathcal{G}_0^n \to
\mathcal{F}_0^{n + 1} \otimes_{\mathcal{O}_0} \mathcal{I}$
+with
+$$
+\omega \circ \alpha =
+d_{\mathcal{F}_0^\bullet \otimes \mathcal{I}} \circ h +
+h \circ d_{\mathcal{G}_0^\bullet}
+$$
+We set
+$$
+\mathcal{H}^n = \mathcal{F}^n \times_{\mathcal{F}^n_0} \mathcal{G}_0^n
+$$
+and we define
+$$
+d' : \mathcal{H}^n \longrightarrow \mathcal{H}^{n + 1},\quad
+(f^n, g_0^n) \longmapsto (d(f^n) - h^n(g_0^n), d(g_0^n))
+$$
+with obvious notation using that
+$\mathcal{F}_0^{n + 1} \otimes_{\mathcal{O}_0} \mathcal{I} =
+\mathcal{F}^{n + 1} \otimes_\mathcal{O} \mathcal{I} =
+\mathcal{I}\mathcal{F}^{n + 1} \subset \mathcal{F}^{n + 1}$.
+Then one checks $d' \circ d' = 0$ by our choice of $h^n$
+and definition of $\omega$.
+Hence $\mathcal{H}^\bullet$ defines an object in $D(\mathcal{O})$.
+On the other hand, there is a short exact sequence of complexes
+of $\mathcal{O}$-modules
+$$
+0 \to \mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I} \to
+\mathcal{H}^\bullet \to \mathcal{G}_0^\bullet \to 0
+$$
+We still have to show that
+$\mathcal{H}^\bullet \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0$
+is isomorphic to $K_0$.
+Choose a quasi-isomorphism
+$\mathcal{E}^\bullet \to \mathcal{H}^\bullet$
+where $\mathcal{E}^\bullet$ is a bounded above complex of flat
+$\mathcal{O}$-modules. We obtain a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{E}^\bullet \otimes_\mathcal{O} \mathcal{I} \ar[d]^\beta \ar[r] &
+\mathcal{E}^\bullet \ar[d]^\gamma \ar[r] &
+\mathcal{E}_0^\bullet \ar[d]^\delta \ar[r] &
+0 \\
+0 \ar[r] &
+\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I} \ar[r] &
+\mathcal{H}^\bullet \ar[r] &
+\mathcal{G}_0^\bullet \ar[r] &
+0
+}
+$$
We claim that $\delta$ is a quasi-isomorphism. Since $H^i(\delta)$
is an isomorphism for $i \gg 0$, we may prove the claim by descending
induction, the induction hypothesis being that $H^i(\delta)$ is an
isomorphism for $i \geq n$.
+Observe that
+$\mathcal{E}^\bullet \otimes_\mathcal{O} \mathcal{I}$
+represents
+$\mathcal{E}_0^\bullet \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I}$,
+that
+$\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I}$
+represents
+$\mathcal{G}_0^\bullet \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I}$,
+and that
+$\beta = \delta \otimes_{\mathcal{O}_0}^\mathbf{L} \text{id}_\mathcal{I}$
+as maps in $D(\mathcal{O}_0)$. This is true because
+$\beta =
+(\alpha \otimes \text{id}_\mathcal{I})
+\circ
+(\delta \otimes \text{id}_\mathcal{I})$.
+Suppose that $H^i(\delta)$ is an isomorphism in degrees $\geq n$.
+Then the same is true for $\beta$ by what we just said
+and Lemma \ref{lemma-induced-map}.
+Then we can look at the diagram
+$$
+\xymatrix{
+H^{n - 1}(\mathcal{E}^\bullet \otimes_\mathcal{O} \mathcal{I})
+\ar[r] \ar[d]^{H^{n - 1}(\beta)} &
+H^{n - 1}(\mathcal{E}^\bullet) \ar[r] \ar[d] &
+H^{n - 1}(\mathcal{E}_0^\bullet) \ar[r] \ar[d]^{H^{n - 1}(\delta)} &
+H^n(\mathcal{E}^\bullet \otimes_\mathcal{O} \mathcal{I})
+\ar[r] \ar[d]^{H^n(\beta)} &
+H^n(\mathcal{E}^\bullet) \ar[d] \\
H^{n - 1}(\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I}) \ar[r] &
+H^{n - 1}(\mathcal{H}^\bullet) \ar[r] &
+H^{n - 1}(\mathcal{G}_0^\bullet) \ar[r] &
H^n(\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I}) \ar[r] &
+H^n(\mathcal{H}^\bullet)
+}
+$$
+Using Homology, Lemma \ref{homology-lemma-four-lemma}
+we see that $H^{n - 1}(\delta)$ is surjective.
+This in turn implies that $H^{n - 1}(\beta)$ is surjective
+by Lemma \ref{lemma-induced-map}.
+Using Homology, Lemma \ref{homology-lemma-four-lemma}
+again we see that $H^{n - 1}(\delta)$ is an isomorphism.
+The claim holds by induction, so $\delta$ is a quasi-isomorphism
+which is what we wanted to show.
+\end{proof}
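\medskip\noindent
The passage from a vanishing obstruction to an actual lift, i.e.\ the
construction of $\mathcal{H}^\bullet$ with its twisted differential
$d'(f, g) = (d(f) - h(g), d(g))$ above, has the following ring-level
shadow over $\mathbf{Z}/4\mathbf{Z} \to \mathbf{Z}/2\mathbf{Z}$ (an ad
hoc toy example, not data from the text): if the obstruction cochain
$\omega$ is the coboundary of a homotopy $g$, then perturbing the chosen
lifts of the differentials by $g$ yields lifted differentials which
square to zero on the nose.

```python
# If omega^n = d^{n+1} g^n + g^{n+1} d^n for some homotopy g, then the
# corrected lifts (d')^n + 2 g^n form an honest complex over Z/4 lifting
# the given complex over Z/2.  All matrices are ad hoc choices.

def matmul(A, B, m):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % m
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B, m):
    return [[(x + y) % m for x, y in zip(rA, rB)] for rA, rB in zip(A, B)]

# Complex over Z/2 and lifts over Z/4 with d' o d' = 2 * omega:
d0, d1, d2 = [[1, 0], [1, 0]], [[1, 1], [1, 1]], [[1, 1], [0, 0]]
D0, D1, D2 = [[1, 0], [1, 0]], [[1, 1], [1, 1]], [[3, 1], [2, 0]]
w0 = [[(x // 2) % 2 for x in r] for r in matmul(D1, D0, 4)]
w1 = [[(x // 2) % 2 for x in r] for r in matmul(D2, D1, 4)]

# A homotopy g with omega^0 = d1 g0 + g1 d0 and omega^1 = d2 g1 + g2 d1:
g0, g1, g2 = [[1, 0], [0, 0]], [[0, 0], [0, 0]], [[0, 0], [1, 0]]
assert w0 == matadd(matmul(d1, g0, 2), matmul(g1, d0, 2), 2)
assert w1 == matadd(matmul(d2, g1, 2), matmul(g2, d1, 2), 2)

# Corrected lifts (d')^n + 2 g^n square to zero over Z/4:
E0 = matadd(D0, [[2 * x for x in r] for r in g0], 4)
E1 = matadd(D1, [[2 * x for x in r] for r in g1], 4)
E2 = matadd(D2, [[2 * x for x in r] for r in g2], 4)
assert matmul(E1, E0, 4) == [[0, 0], [0, 0]]
assert matmul(E2, E1, 4) == [[0, 0], [0, 0]]
print("corrected lifts form a complex over Z/4")
```

This is part (1) of Lemma \ref{lemma-canonical-class-algebra} made
explicit in coordinates.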
+
+\begin{lemma}
+\label{lemma-lift-map-complexes}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O} \to \mathcal{O}_0$
+be a surjection of sheaves of rings. Assume given the following data
+\begin{enumerate}
+\item a complex of $\mathcal{O}$-modules $\mathcal{F}^\bullet$,
+\item a complex $\mathcal{K}_0^\bullet$ of $\mathcal{O}_0$-modules,
+\item a quasi-isomorphism $\mathcal{K}_0^\bullet \to
\mathcal{F}^\bullet \otimes_\mathcal{O} \mathcal{O}_0$.
\end{enumerate}
Then there exists a quasi-isomorphism
+$\mathcal{G}^\bullet \to \mathcal{F}^\bullet$ such that the map
+of complexes
+$\mathcal{G}^\bullet \otimes_\mathcal{O} \mathcal{O}_0 \to
+\mathcal{F}^\bullet \otimes_\mathcal{O} \mathcal{O}_0$ factors
+through $\mathcal{K}_0^\bullet$ in the homotopy category
+of complexes of $\mathcal{O}_0$-modules.
+\end{lemma}
+
+\begin{proof}
+Set $\mathcal{F}_0^\bullet =
+\mathcal{F}^\bullet \otimes_\mathcal{O} \mathcal{O}_0$.
+By Derived Categories, Lemma \ref{derived-lemma-make-surjective}
+there exists a factorization
+$$
+\mathcal{K}_0^\bullet \to \mathcal{L}_0^\bullet \to \mathcal{F}_0^\bullet
+$$
+of the given map such that the first arrow has an inverse up
+to homotopy and the second arrow is termwise split surjective.
+Hence we may assume that $\mathcal{K}_0^\bullet \to \mathcal{F}_0^\bullet$
+is termwise surjective.
+In that case we take
+$$
+\mathcal{G}^n = \mathcal{F}^n \times_{\mathcal{F}^n_0} \mathcal{K}_0^n
+$$
+and everything is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-inf-obs-map-defo-complex}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O} \to \mathcal{O}_0$
+be a surjection of sheaves of rings whose kernel is an ideal sheaf
+$\mathcal{I}$ of square zero. Let $K, L \in D^-(\mathcal{O})$.
+Set $K_0 = K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0$
+and $L_0 = L \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0$
+in $D^-(\mathcal{O}_0)$. Given $\alpha_0 : K_0 \to L_0$ in $D(\mathcal{O}_0)$
+there is a canonical element
+$$
+o(\alpha_0) \in \Ext^1_{\mathcal{O}_0}(K_0,
+L_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I})
+$$
+whose vanishing is necessary and sufficient for the
+existence of a map $\alpha : K \to L$ in $D(\mathcal{O})$
+with $\alpha_0 = \alpha \otimes_\mathcal{O}^\mathbf{L} \text{id}$.
+\end{lemma}
+
+\begin{proof}
Finding $\alpha : K \to L$ lifting $\alpha_0$ is the same as finding
+$\alpha : K \to L$ such that the composition $K \xrightarrow{\alpha} L \to L_0$
+is equal to the composition $K \to K_0 \xrightarrow{\alpha_0} L_0$.
+The short exact sequence
+$0 \to \mathcal{I} \to \mathcal{O} \to \mathcal{O}_0 \to 0$
+gives rise to a canonical distinguished triangle
+$$
+L \otimes_\mathcal{O}^\mathbf{L} \mathcal{I} \to
+L \to
+L_0 \to
+(L \otimes_\mathcal{O}^\mathbf{L} \mathcal{I})[1]
+$$
+in $D(\mathcal{O})$.
+By Derived Categories, Lemma \ref{derived-lemma-representable-homological}
+the composition
+$$
+K \to K_0 \xrightarrow{\alpha_0} L_0 \to
+(L \otimes_\mathcal{O}^\mathbf{L} \mathcal{I})[1]
+$$
+is zero if and only if we can find $\alpha : K \to L$
+lifting $\alpha_0$. The composition is an element in
+$$
+\Hom_{D(\mathcal{O})}(K, (L \otimes_\mathcal{O}^\mathbf{L} \mathcal{I})[1]) =
+\Hom_{D(\mathcal{O}_0)}(K_0,
+(L \otimes_\mathcal{O}^\mathbf{L} \mathcal{I})[1]) =
+\Ext^1_{\mathcal{O}_0}(K_0,
+L_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I})
+$$
+by adjunction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-first-order-defos-complex}
+Let $\mathcal{C}$ be a site. Let $\mathcal{O} \to \mathcal{O}_0$
+be a surjection of sheaves of rings whose kernel is an ideal sheaf
+$\mathcal{I}$ of square zero. Let $K_0 \in D^-(\mathcal{O}_0)$.
+A lift of $K_0$ is a pair $(K, \alpha_0)$ consisting of an object
+$K$ in $D^-(\mathcal{O})$ and an isomorphism
+$\alpha_0 : K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0 \to K_0$
+in $D(\mathcal{O}_0)$.
+\begin{enumerate}
+\item Given a lift $(K, \alpha_0)$ the group of automorphisms of the pair
+is canonically the cokernel of a map
+$$
+\Ext^{-1}_{\mathcal{O}_0}(K_0, K_0)
+\longrightarrow
+\Hom_{\mathcal{O}_0}(K_0, K_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I})
+$$
+\item If there is a lift, then the set of isomorphism classes of lifts
+is principal homogeneous under
+$\Ext^1_{\mathcal{O}_0}(K_0,
+K_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+An automorphism of $(K, \alpha_0)$ is a map $\varphi : K \to K$
+in $D(\mathcal{O})$ with
+$\varphi \otimes_\mathcal{O} \text{id}_{\mathcal{O}_0} = \text{id}$.
+This is the same thing as saying that
+$$
+K \xrightarrow{\varphi - \text{id}} K \to
+K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0
+$$
+is zero. We conclude the group of automorphisms is
+the cokernel of a map
+$$
+\Hom_\mathcal{O}(K, K_0[-1])
+\longrightarrow
+\Hom_\mathcal{O}(K, K_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I})
+$$
+by the distinguished triangle
+$$
+K \otimes_\mathcal{O}^\mathbf{L} \mathcal{I} \to
+K \to
+K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0 \to
+(K \otimes_\mathcal{O}^\mathbf{L} \mathcal{I})[1]
+$$
+in $D(\mathcal{O})$ and
+Derived Categories, Lemma \ref{derived-lemma-representable-homological}.
+To translate into the groups in the lemma use adjunction
+of the restriction functor $D(\mathcal{O}_0) \to D(\mathcal{O})$ and
+$- \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0 : D(\mathcal{O}) \to D(\mathcal{O}_0)$.
+This proves (1).
+
+\medskip\noindent
+Proof of (2).
+Assume that $K_0 = K \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0$
+in $D(\mathcal{O}_0)$. By Lemma \ref{lemma-inf-obs-map-defo-complex}
+the map sending a lift $(K', \alpha_0)$ to the obstruction $o(\alpha_0)$
+to lifting $\alpha_0$ defines a canonical injective map
+from the set of isomorphism classes of pairs
+to $\Ext^1_{\mathcal{O}_0}(K_0,
+K_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I})$.
+To finish the proof we show that it is surjective.
+Pick $\xi : K_0 \to (K_0 \otimes_{\mathcal{O}_0}^\mathbf{L} \mathcal{I})[1]$
+in the $\Ext^1$ of the lemma.
+Choose a bounded above complex $\mathcal{F}^\bullet$
+of flat $\mathcal{O}$-modules representing $K$.
+The map $\xi$ can be represented as $t \circ s^{-1}$
+where $s : \mathcal{K}_0^\bullet \to \mathcal{F}_0^\bullet$
+is a quasi-isomorphism and
+$t : \mathcal{K}_0^\bullet \to
+\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I}[1]$
+is a map of complexes.
+By Lemma \ref{lemma-lift-map-complexes}
+we can assume there exists a quasi-isomorphism
+$\mathcal{G}^\bullet \to \mathcal{F}^\bullet$
+of complexes of $\mathcal{O}$-modules
+such that $\mathcal{G}_0^\bullet \to \mathcal{F}_0^\bullet$
+factors through $s$ up to homotopy.
+We may and do replace $\mathcal{G}^\bullet$ by a bounded
+above complex of flat $\mathcal{O}$-modules (by picking a qis
+from such to $\mathcal{G}^\bullet$ and replacing).
+Then we see that $\xi$ is represented by
+a map of complexes
+$t : \mathcal{G}_0^\bullet \to
+\mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I}[1]$
+and the quasi-isomorphism $\mathcal{G}_0^\bullet \to \mathcal{F}_0^\bullet$.
+Set
+$$
+\mathcal{H}^n = \mathcal{F}^n \times_{\mathcal{F}_0^n} \mathcal{G}_0^n
+$$
+with differentials
+$$
+\mathcal{H}^n \to \mathcal{H}^{n + 1},\quad
+(f^n, g_0^n) \mapsto (d(f^n) + t(g_0^n), d(g_0^n))
+$$
+This makes sense as
+$\mathcal{F}_0^{n + 1} \otimes_{\mathcal{O}_0} \mathcal{I} =
+\mathcal{F}^{n + 1} \otimes_\mathcal{O} \mathcal{I} =
+\mathcal{I}\mathcal{F}^{n + 1} \subset \mathcal{F}^{n + 1}$.
+We omit the computation that shows that $\mathcal{H}^\bullet$
+is a complex of $\mathcal{O}$-modules. By construction there is
+a short exact sequence
+$$
+0 \to \mathcal{F}_0^\bullet \otimes_{\mathcal{O}_0} \mathcal{I} \to
+\mathcal{H}^\bullet \to \mathcal{G}_0^\bullet \to 0
+$$
+of complexes of $\mathcal{O}$-modules.
+Exactly as in the proof of Lemma \ref{lemma-canonical-class-obstruction}
+one shows that this sequence induces an isomorphism
+$\alpha_0 :
+\mathcal{H}^\bullet \otimes_\mathcal{O}^\mathbf{L} \mathcal{O}_0 \to
+\mathcal{G}_0^\bullet$ in $D(\mathcal{O}_0)$.
+In other words, we have produced a pair $(\mathcal{H}^\bullet, \alpha_0)$.
+We omit the verification that $o(\alpha_0) = \xi$; hint: $o(\alpha_0)$
+can be computed explicitly in this case as we have maps
+$\mathcal{H}^n \to \mathcal{F}^n$ (not compatible with differentials)
+lifting the components of $\alpha_0$. This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/derham.tex b/books/stacks/derham.tex
new file mode 100644
index 0000000000000000000000000000000000000000..88fe016f506984e1a8c15068578c3375aedf222c
--- /dev/null
+++ b/books/stacks/derham.tex
@@ -0,0 +1,6186 @@
+\input{preamble}
+
+% OK, start here
+%
+\begin{document}
+
+\title{de Rham Cohomology}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we start with a discussion of the de Rham complex
+of a morphism of schemes and we end with a proof that de Rham cohomology
+defines a Weil cohomology theory when the base field has characteristic zero.
+
+
+
+
+\section{The de Rham complex}
+\label{section-de-rham-complex}
+
+\noindent
+Let $p : X \to S$ be a morphism of schemes. There is a complex
+$$
+\Omega^\bullet_{X/S} =
+\mathcal{O}_X \to \Omega^1_{X/S} \to \Omega^2_{X/S} \to \ldots
+$$
+of $p^{-1}\mathcal{O}_S$-modules with
+$\Omega^i_{X/S} = \wedge^i(\Omega_{X/S})$
+placed in degree $i$ and differential determined by the rule
+$\text{d}(g_0 \text{d}g_1 \wedge \ldots \wedge \text{d}g_p) =
+\text{d}g_0 \wedge \text{d}g_1 \wedge \ldots \wedge \text{d}g_p$
+on local sections.
+See Modules, Section \ref{modules-section-de-rham-complex}.
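+
+\medskip\noindent
+For example, if $S = \Spec(R)$ and
+$X = \mathbf{A}^n_S = \Spec(R[x_1, \ldots, x_n])$, then
+$\Omega^1_{X/S}$ is free on $\text{d}x_1, \ldots, \text{d}x_n$,
+the module $\Omega^p_{X/S}$ is free on the forms
+$\text{d}x_{i_1} \wedge \ldots \wedge \text{d}x_{i_p}$ with
+$i_1 < \ldots < i_p$, and the differential is the usual exterior
+derivative
+$$
+\text{d}\left(f\, \text{d}x_{i_1} \wedge \ldots \wedge
+\text{d}x_{i_p}\right) =
+\sum\nolimits_j \frac{\partial f}{\partial x_j}\,
+\text{d}x_j \wedge \text{d}x_{i_1} \wedge \ldots \wedge \text{d}x_{i_p}
+$$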
+
+\medskip\noindent
+Given a commutative diagram
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+of schemes, there are canonical maps of complexes
+$f^{-1}\Omega_{X/S}^\bullet \to \Omega^\bullet_{X'/S'}$ and
+$\Omega_{X/S}^\bullet \to f_*\Omega^\bullet_{X'/S'}$.
+See Modules, Section \ref{modules-section-de-rham-complex}.
+Linearizing, for every $p$ we obtain a linear map
+$f^*\Omega^p_{X/S} \to \Omega^p_{X'/S'}$.
+
+\medskip\noindent
+In particular, if $f : Y \to X$ is a morphism of schemes over
+a base scheme $S$, then there is a map of complexes
+$$
+\Omega^\bullet_{X/S} \longrightarrow f_*\Omega^\bullet_{Y/S}
+$$
+Linearizing, we see that for every $p \geq 0$ we obtain a canonical map
+$$
+\Omega^p_{X/S} \otimes_{\mathcal{O}_X} f_*\mathcal{O}_Y
+\longrightarrow
+f_*\Omega^p_{Y/S}
+$$
+
+\begin{lemma}
+\label{lemma-base-change-de-rham}
+Let
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+be a cartesian diagram of schemes. Then the maps discussed
+above induce isomorphisms
+$f^*\Omega^p_{X/S} \to \Omega^p_{X'/S'}$.
+\end{lemma}
+
+\begin{proof}
+Combine Morphisms, Lemma \ref{morphisms-lemma-base-change-differentials}
+with the fact that formation of exterior power commutes with base change.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale}
+Consider a commutative diagram of schemes
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+If $X' \to X$ and $S' \to S$ are \'etale, then the maps discussed
+above induce isomorphisms
+$f^*\Omega^p_{X/S} \to \Omega^p_{X'/S'}$.
+\end{lemma}
+
+\begin{proof}
+We have $\Omega_{S'/S} = 0$ and $\Omega_{X'/X} = 0$, see for example
+Morphisms, Lemma \ref{morphisms-lemma-etale-at-point}. Then by
+the short exact sequences of Morphisms, Lemmas
+\ref{morphisms-lemma-triangle-differentials} and
+\ref{morphisms-lemma-triangle-differentials-smooth}
+we see that $\Omega_{X'/S'} = \Omega_{X'/S} = f^*\Omega_{X/S}$.
+Taking exterior powers we conclude.
+\end{proof}
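+
+\noindent
+A concrete illustration of Lemma \ref{lemma-etale}: let
+$S = S' = \Spec(k)$ with $n$ invertible in the field $k$ and let
+$f : \mathbf{G}_{m, k} \to \mathbf{G}_{m, k}$ be the $n$th power map,
+i.e., the morphism given by the ring map
+$k[t, t^{-1}] \to k[s, s^{-1}]$, $t \mapsto s^n$. This morphism is
+\'etale and indeed $f^*\Omega^1_{X/S} \to \Omega^1_{X'/S'}$ is an
+isomorphism: it sends the basis element $\text{d}t$ to
+$$
+\text{d}(s^n) = n s^{n - 1}\,\text{d}s
+$$
+and $n s^{n - 1}$ is a unit in $k[s, s^{-1}]$.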
+
+
+
+
+
+
+\section{de Rham cohomology}
+\label{section-de-rham-cohomology}
+
+\noindent
+Let $p : X \to S$ be a morphism of schemes. We define the
+{\it de Rham cohomology of $X$ over $S$} to be the cohomology
+groups
+$$
+H^i_{dR}(X/S) = H^i(R\Gamma(X, \Omega^\bullet_{X/S}))
+$$
+Since $\Omega^\bullet_{X/S}$ is a complex of $p^{-1}\mathcal{O}_S$-modules,
+these cohomology groups are naturally modules over $H^0(S, \mathcal{O}_S)$.
+
+\medskip\noindent
+Given a commutative diagram
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+of schemes, using the canonical maps of Section \ref{section-de-rham-complex}
+we obtain pullback maps
+$$
+f^* :
+R\Gamma(X, \Omega^\bullet_{X/S})
+\longrightarrow
+R\Gamma(X', \Omega^\bullet_{X'/S'})
+$$
+and
+$$
+f^* : H^i_{dR}(X/S) \longrightarrow H^i_{dR}(X'/S')
+$$
+These pullbacks satisfy an obvious composition law.
+In particular, if we work over a fixed base scheme $S$, then de Rham
+cohomology is a contravariant functor on the category of schemes over $S$.
+
+\begin{lemma}
+\label{lemma-de-rham-affine}
+Let $X \to S$ be a morphism of affine schemes given by the ring map
+$R \to A$. Then $R\Gamma(X, \Omega^\bullet_{X/S}) = \Omega^\bullet_{A/R}$
+in $D(R)$ and $H^i_{dR}(X/S) = H^i(\Omega^\bullet_{A/R})$.
+\end{lemma}
+
+\begin{proof}
+This follows from Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherent-affine-cohomology-zero}
+and Leray's acyclicity lemma
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity}).
+\end{proof}
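+
+\noindent
+For example, take $S = \Spec(\mathbf{Q})$ and
+$X = \mathbf{G}_{m, \mathbf{Q}} = \Spec(A)$ with
+$A = \mathbf{Q}[t, t^{-1}]$. By Lemma \ref{lemma-de-rham-affine} the
+de Rham cohomology of $X$ over $S$ is the cohomology of the two term
+complex
+$$
+A \longrightarrow A\,\text{d}t,\quad
+t^m \longmapsto m t^{m - 1}\,\text{d}t
+$$
+Since $t^m\,\text{d}t = \text{d}(t^{m + 1}/(m + 1))$ for $m \neq -1$
+we find $H^0_{dR}(X/S) = \mathbf{Q}$ and
+$H^1_{dR}(X/S) = \mathbf{Q} \cdot [\text{d}t/t]$.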
+
+\begin{lemma}
+\label{lemma-quasi-coherence-relative}
+Let $p : X \to S$ be a morphism of schemes. If $p$ is quasi-compact
+and quasi-separated, then $Rp_*\Omega^\bullet_{X/S}$ is an object
+of $D_\QCoh(\mathcal{O}_S)$.
+\end{lemma}
+
+\begin{proof}
+There is a spectral sequence with first page
+$E_1^{a, b} = R^bp_*\Omega^a_{X/S}$ converging to
+the cohomology of $Rp_*\Omega^\bullet_{X/S}$
+(see Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}).
+Hence by Homology, Lemma \ref{homology-lemma-first-quadrant-ss}
+it suffices to show that $R^bp_*\Omega^a_{X/S}$ is quasi-coherent.
+This follows from Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-coherence-relative}
+Let $p : X \to S$ be a proper morphism of schemes with $S$ locally
+Noetherian. Then $Rp_*\Omega^\bullet_{X/S}$ is an object
+of $D_{\textit{Coh}}(\mathcal{O}_S)$.
+\end{lemma}
+
+\begin{proof}
+In this case by Morphisms, Lemma \ref{morphisms-lemma-finite-type-differentials}
+the modules $\Omega^i_{X/S}$ are coherent. Hence we can use exactly the
+same argument as in the proof of Lemma \ref{lemma-quasi-coherence-relative}
+using Cohomology of Schemes, Proposition
+\ref{coherent-proposition-proper-pushforward-coherent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-de-Rham}
+Let $A$ be a Noetherian ring. Let $X$ be a proper scheme over $S = \Spec(A)$.
+Then $H^i_{dR}(X/S)$ is a finite $A$-module for all $i$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-coherence-relative}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-smooth-de-Rham}
+Let $f : X \to S$ be a proper smooth morphism of schemes. Then
+$Rf_*\Omega^p_{X/S}$ for $p \geq 0$ and $Rf_*\Omega^\bullet_{X/S}$ are
+perfect objects of $D(\mathcal{O}_S)$ whose formation commutes
+with arbitrary change of base.
+\end{lemma}
+
+\begin{proof}
+Since $f$ is smooth the modules $\Omega^p_{X/S}$ are finite locally
+free $\mathcal{O}_X$-modules, see Morphisms, Lemma
+\ref{morphisms-lemma-smooth-omega-finite-locally-free}. Their
+formation commutes with arbitrary change of base by
+Lemma \ref{lemma-base-change-de-rham}. Hence
+$Rf_*\Omega^p_{X/S}$ is a perfect object of $D(\mathcal{O}_S)$
+whose formation commutes with arbitrary base change, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}.
+This proves the first assertion of the lemma.
+
+\medskip\noindent
+To prove that $Rf_*\Omega^\bullet_{X/S}$ is perfect on $S$ we may work
+locally on $S$. Thus we may assume $S$ is quasi-compact. This means
+we may assume that $\Omega^n_{X/S}$ is zero for $n$ large enough.
+For every $p \geq 0$ we claim that
+$Rf_*\sigma_{\geq p}\Omega^\bullet_{X/S}$ is a
+perfect object of $D(\mathcal{O}_S)$ whose formation commutes
+with arbitrary change of base. By the above we see that
+this is true for $p \gg 0$. Suppose the claim holds for $p$
+and consider the distinguished triangle
+$$
+\sigma_{\geq p}\Omega^\bullet_{X/S} \to
+\sigma_{\geq p - 1}\Omega^\bullet_{X/S} \to
+\Omega^{p - 1}_{X/S}[-(p - 1)] \to
+(\sigma_{\geq p}\Omega^\bullet_{X/S})[1]
+$$
+in $D(f^{-1}\mathcal{O}_S)$.
+Applying the exact functor $Rf_*$ we obtain a distinguished triangle
+in $D(\mathcal{O}_S)$.
+Since we have the 2-out-of-3 property for being perfect
+(Cohomology, Lemma \ref{cohomology-lemma-two-out-of-three-perfect})
+we conclude $Rf_*\sigma_{\geq p - 1}\Omega^\bullet_{X/S}$ is a
+perfect object of $D(\mathcal{O}_S)$. Similarly for the
+commutation with arbitrary base change.
+\end{proof}
+
+
+
+
+
+\section{Cup product}
+\label{section-cup-product}
+
+\noindent
+Consider the maps
+$\Omega^p_{X/S} \times \Omega^q_{X/S} \to \Omega^{p + q}_{X/S}$
+given by $(\omega , \eta) \longmapsto \omega \wedge \eta$.
+Using the formula for $\text{d}$ given in Section \ref{section-de-rham-complex}
+and the Leibniz rule for $\text{d} : \mathcal{O}_X \to \Omega_{X/S}$
+we see that $\text{d}(\omega \wedge \eta) = \text{d}(\omega) \wedge \eta +
+(-1)^{\deg(\omega)} \omega \wedge \text{d}(\eta)$. This means that
+$\wedge$ defines a morphism
+\begin{equation}
+\label{equation-wedge}
+\wedge :
+\text{Tot}(
+\Omega^\bullet_{X/S} \otimes_{p^{-1}\mathcal{O}_S} \Omega^\bullet_{X/S})
+\longrightarrow
+\Omega^\bullet_{X/S}
+\end{equation}
+of complexes of $p^{-1}\mathcal{O}_S$-modules.
+
+\medskip\noindent
+Combining the cup product of
+Cohomology, Section \ref{cohomology-section-cup-product}
+with (\ref{equation-wedge}) we find a
+$H^0(S, \mathcal{O}_S)$-bilinear cup product map
+$$
+\cup : H^i_{dR}(X/S) \times H^j_{dR}(X/S) \longrightarrow H^{i + j}_{dR}(X/S)
+$$
+For example, if $\omega \in \Gamma(X, \Omega^i_{X/S})$ and
+$\eta \in \Gamma(X, \Omega^j_{X/S})$ are closed, then
+the cup product of the de Rham cohomology classes of
+$\omega$ and $\eta$ is the de Rham cohomology class of $\omega \wedge \eta$,
+see discussion in Cohomology, Section \ref{cohomology-section-cup-product}.
+
+\medskip\noindent
+Given a commutative diagram
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+of schemes, the pullback maps
+$f^* : R\Gamma(X, \Omega^\bullet_{X/S}) \to R\Gamma(X', \Omega^\bullet_{X'/S'})$
+and
+$f^* : H^i_{dR}(X/S) \longrightarrow H^i_{dR}(X'/S')$
+are compatible with the cup product defined above.
+
+\begin{lemma}
+\label{lemma-cup-product-graded-commutative}
+Let $p : X \to S$ be a morphism of schemes.
+The cup product on $H^*_{dR}(X/S)$ is associative and
+graded commutative.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Cohomology, Lemmas \ref{cohomology-lemma-cup-product-associative} and
+\ref{cohomology-lemma-cup-product-commutative}
+and the fact that $\wedge$ is associative and graded commutative.
+\end{proof}
+
+\begin{remark}
+\label{remark-relative-cup-product}
+Let $p : X \to S$ be a morphism of schemes. Then we can think of
+$\Omega^\bullet_{X/S}$ as a sheaf of differential graded
+$p^{-1}\mathcal{O}_S$-algebras, see
+Differential Graded Sheaves, Definition \ref{sdga-definition-dga}.
+In particular, the discussion in
+Differential Graded Sheaves, Section \ref{sdga-section-misc}
+applies. For example, this means that for any commutative diagram
+$$
+\xymatrix{
+X \ar[d]_p \ar[r]_f & Y \ar[d]^q \\
+S \ar[r]^h & T
+}
+$$
+of schemes there is a canonical relative cup product
+$$
+\mu :
+Rf_*\Omega^\bullet_{X/S}
+\otimes_{q^{-1}\mathcal{O}_T}^\mathbf{L}
+Rf_*\Omega^\bullet_{X/S}
+\longrightarrow
+Rf_*\Omega^\bullet_{X/S}
+$$
+in $D(Y, q^{-1}\mathcal{O}_T)$ which is associative and
+which on cohomology reproduces the cup product discussed above.
+\end{remark}
+
+\begin{remark}
+\label{remark-cup-product-as-a-map}
+Let $f : X \to S$ be a morphism of schemes. Let $\xi \in H_{dR}^n(X/S)$.
+According to the discussion in
+Differential Graded Sheaves, Section \ref{sdga-section-misc}
+there exists a canonical morphism
+$$
+\xi' : \Omega^\bullet_{X/S} \to \Omega^\bullet_{X/S}[n]
+$$
+in $D(f^{-1}\mathcal{O}_S)$ uniquely characterized by
+(1) and (2) of the following list of properties:
+\begin{enumerate}
+\item $\xi'$ can be lifted to a map in the derived category of right
+differential graded $\Omega^\bullet_{X/S}$-modules, and
+\item $\xi'(1) = \xi$ in
+$H^0(X, \Omega^\bullet_{X/S}[n]) = H^n_{dR}(X/S)$,
+\item the map $\xi'$ sends $\eta \in H^m_{dR}(X/S)$
+to $\xi \cup \eta$ in $H^{n + m}_{dR}(X/S)$,
+\item the construction of $\xi'$ commutes with restrictions to
+opens: for $U \subset X$ open the restriction $\xi'|_U$ is
+the map corresponding to the image $\xi|_U \in H^n_{dR}(U/S)$,
+\item for any diagram as in Remark \ref{remark-relative-cup-product}
+we obtain a commutative diagram
+$$
+\xymatrix{
+Rf_*\Omega^\bullet_{X/S}
+\otimes_{q^{-1}\mathcal{O}_T}^\mathbf{L}
+Rf_*\Omega^\bullet_{X/S} \ar[d]_{\xi' \otimes \text{id}}
+\ar[r]_-\mu &
+Rf_*\Omega^\bullet_{X/S} \ar[d]^{\xi'} \\
+Rf_*\Omega^\bullet_{X/S}[n]
+\otimes_{q^{-1}\mathcal{O}_T}^\mathbf{L}
+Rf_*\Omega^\bullet_{X/S}
+\ar[r]^-\mu &
+Rf_*\Omega^\bullet_{X/S}[n]
+}
+$$
+in $D(Y, q^{-1}\mathcal{O}_T)$.
+\end{enumerate}
+\end{remark}
+
+
+
+
+\section{Hodge cohomology}
+\label{section-hodge-cohomology}
+
+\noindent
+Let $p : X \to S$ be a morphism of schemes. We define the
+{\it Hodge cohomology of $X$ over $S$} to be the cohomology groups
+$$
+H^n_{Hodge}(X/S) = \bigoplus\nolimits_{n = p + q} H^q(X, \Omega^p_{X/S})
+$$
+viewed as a graded $H^0(X, \mathcal{O}_X)$-module. The wedge product
+of forms combined with the cup product of
+Cohomology, Section \ref{cohomology-section-cup-product}
+defines a $H^0(X, \mathcal{O}_X)$-bilinear cup product
+$$
+\cup :
+H^i_{Hodge}(X/S) \times H^j_{Hodge}(X/S)
+\longrightarrow
+H^{i + j}_{Hodge}(X/S)
+$$
+Of course if $\xi \in H^q(X, \Omega^p_{X/S})$ and
+$\xi' \in H^{q'}(X, \Omega^{p'}_{X/S})$ then $\xi \cup \xi' \in
+H^{q + q'}(X, \Omega^{p + p'}_{X/S})$.
+
+\begin{lemma}
+\label{lemma-cup-product-hodge-graded-commutative}
+Let $p : X \to S$ be a morphism of schemes.
+The cup product on $H^*_{Hodge}(X/S)$ is associative and graded commutative.
+\end{lemma}
+
+\begin{proof}
+The proof is identical to the proof of
+Lemma \ref{lemma-cup-product-graded-commutative}.
+\end{proof}
+
+\noindent
+Given a commutative diagram
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+of schemes, there are pullback maps
+$f^* : H^i_{Hodge}(X/S) \longrightarrow H^i_{Hodge}(X'/S')$
+compatible with gradings and with the cup product defined above.
+
+
+
+
+
+
+
+\section{Two spectral sequences}
+\label{section-hodge-to-de-rham}
+
+\noindent
+Let $p : X \to S$ be a morphism of schemes. Since the category
+of $p^{-1}\mathcal{O}_S$-modules on $X$ has enough injectives
+there exists a Cartan-Eilenberg resolution for $\Omega^\bullet_{X/S}$.
+See Derived Categories, Lemma \ref{derived-lemma-cartan-eilenberg}.
+Hence we can apply Derived Categories, Lemma
+\ref{derived-lemma-two-ss-complex-functor} to get two spectral sequences
+both converging to the de Rham cohomology of $X$ over $S$.
+
+\medskip\noindent
+The first is customarily called {\it the Hodge-to-de Rham spectral sequence}.
+The first page of this spectral sequence has
+$$
+E_1^{p, q} = H^q(X, \Omega^p_{X/S})
+$$
+which are the Hodge cohomology groups of $X/S$ (whence the name). The
+differential $d_1$ on this page is given by the maps
+$d_1^{p, q} : H^q(X, \Omega^p_{X/S}) \to H^q(X, \Omega^{p + 1}_{X/S})$
+induced by the differential
+$\text{d} : \Omega^p_{X/S} \to \Omega^{p + 1}_{X/S}$.
+Here is a picture
+$$
+\xymatrix{
+H^2(X, \mathcal{O}_X) \ar[r] \ar@{-->}[rrd] \ar@{..>}[rrrdd] &
+H^2(X, \Omega^1_{X/S}) \ar[r] \ar@{-->}[rrd] &
+H^2(X, \Omega^2_{X/S}) \ar[r] &
+H^2(X, \Omega^3_{X/S}) \\
+H^1(X, \mathcal{O}_X) \ar[r] \ar@{-->}[rrd] &
+H^1(X, \Omega^1_{X/S}) \ar[r] \ar@{-->}[rrd] &
+H^1(X, \Omega^2_{X/S}) \ar[r] &
+H^1(X, \Omega^3_{X/S}) \\
+H^0(X, \mathcal{O}_X) \ar[r] &
+H^0(X, \Omega^1_{X/S}) \ar[r] &
+H^0(X, \Omega^2_{X/S}) \ar[r] &
+H^0(X, \Omega^3_{X/S})
+}
+$$
+where we have drawn dashed arrows to indicate the source and target of
+the differentials on the $E_2$ page and a dotted arrow for a differential
+on the $E_3$ page. Looking in degree $0$ we conclude that
+$$
+H^0_{dR}(X/S) =
+\Ker(\text{d} : H^0(X, \mathcal{O}_X) \to H^0(X, \Omega^1_{X/S}))
+$$
+Of course, this is also immediately clear from the fact that the
+de Rham complex starts in degree $0$ with $\mathcal{O}_X \to \Omega^1_{X/S}$.
+
+\medskip\noindent
+The second spectral sequence is usually called
+{\it the conjugate spectral sequence}. The second page of this
+spectral sequence has
+$$
+E_2^{p, q} = H^p(X, H^q(\Omega^\bullet_{X/S})) = H^p(X, \mathcal{H}^q)
+$$
+where $\mathcal{H}^q = H^q(\Omega^\bullet_{X/S})$ is the $q$th
+cohomology sheaf of the de Rham complex of $X/S$. The differentials
+on this page are given by $E_2^{p, q} \to E_2^{p + 2, q - 1}$.
+Here is a picture
+$$
+\xymatrix{
+H^0(X, \mathcal{H}^2) \ar[rrd] \ar@{..>}[rrrdd] &
+H^1(X, \mathcal{H}^2) \ar[rrd] &
+H^2(X, \mathcal{H}^2) &
+H^3(X, \mathcal{H}^2) \\
+H^0(X, \mathcal{H}^1) \ar[rrd] &
+H^1(X, \mathcal{H}^1) \ar[rrd] &
+H^2(X, \mathcal{H}^1) &
+H^3(X, \mathcal{H}^1) \\
+H^0(X, \mathcal{H}^0) &
+H^1(X, \mathcal{H}^0) &
+H^2(X, \mathcal{H}^0) &
+H^3(X, \mathcal{H}^0)
+}
+$$
+Looking in degree $0$ we conclude that
+$$
+H^0_{dR}(X/S) = H^0(X, \mathcal{H}^0)
+$$
+which is obvious if you think about it. In degree $1$ we get an exact sequence
+$$
+0 \to H^1(X, \mathcal{H}^0) \to H^1_{dR}(X/S) \to
+H^0(X, \mathcal{H}^1) \to H^2(X, \mathcal{H}^0) \to H^2_{dR}(X/S)
+$$
+It turns out that if $X \to S$ is smooth and $S$ lives in characteristic $p$,
+then the sheaves $\mathcal{H}^q$ are computable (in terms of certain
+sheaves of differentials) and the conjugate spectral sequence is a valuable
+tool (insert future reference here).
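+
+\medskip\noindent
+For instance, let $X = \mathbf{A}^1_{\mathbf{F}_p} = \Spec(A)$ with
+$A = \mathbf{F}_p[t]$ over $S = \Spec(\mathbf{F}_p)$. Since
+$\text{d}(t^m) = m t^{m - 1}\,\text{d}t$ vanishes exactly when $p | m$,
+a direct computation with the complex $A \to A\,\text{d}t$ gives
+$$
+H^0(\Omega^\bullet_{X/S}) = \mathbf{F}_p[t^p]
+\quad\text{and}\quad
+H^1(\Omega^\bullet_{X/S}) = \mathbf{F}_p[t^p] \cdot t^{p - 1}\,\text{d}t
+$$
+both free of rank $1$ over the subring of $p$th powers. This is the
+simplest instance of the phenomenon alluded to above.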
+
+
+
+\section{The Hodge filtration}
+\label{section-hodge-filtration}
+
+\noindent
+Let $X \to S$ be a morphism of schemes. The Hodge filtration on $H^n_{dR}(X/S)$
+is the filtration induced by the Hodge-to-de Rham spectral sequence
+(Homology, Definition
+\ref{homology-definition-filtration-cohomology-filtered-complex}).
+To avoid misunderstanding, we explicitly define it as follows.
+
+\begin{definition}
+\label{definition-hodge-filtration}
+Let $X \to S$ be a morphism of schemes. The {\it Hodge filtration}
+on $H^n_{dR}(X/S)$ is the filtration with terms
+$$
+F^pH^n_{dR}(X/S) = \Im\left(H^n(X, \sigma_{\geq p}\Omega^\bullet_{X/S})
+\longrightarrow H^n_{dR}(X/S)\right)
+$$
+where $\sigma_{\geq p}\Omega^\bullet_{X/S}$ is as in
+Homology, Section \ref{homology-section-truncations}.
+\end{definition}
+
+\noindent
+Of course $\sigma_{\geq p}\Omega^\bullet_{X/S}$ is a subcomplex of
+the relative de Rham complex and we obtain a filtration
+$$
+\Omega^\bullet_{X/S} = \sigma_{\geq 0}\Omega^\bullet_{X/S} \supset
+\sigma_{\geq 1}\Omega^\bullet_{X/S} \supset
+\sigma_{\geq 2}\Omega^\bullet_{X/S} \supset
+\sigma_{\geq 3}\Omega^\bullet_{X/S} \supset \ldots
+$$
+of the relative de Rham complex with
+$\text{gr}^p(\Omega^\bullet_{X/S}) = \Omega^p_{X/S}[-p]$.
+The spectral sequence constructed in
+Cohomology, Lemma \ref{cohomology-lemma-spectral-sequence-filtered-object}
+for $\Omega^\bullet_{X/S}$ viewed as a filtered complex of sheaves
+is the same as the Hodge-to-de Rham spectral sequence constructed in
+Section \ref{section-hodge-to-de-rham} by
+Cohomology, Example \ref{cohomology-example-spectral-sequence-bis}. Further the
+wedge product (\ref{equation-wedge}) sends
+$\text{Tot}(\sigma_{\geq i}\Omega^\bullet_{X/S} \otimes_{p^{-1}\mathcal{O}_S}
+\sigma_{\geq j}\Omega^\bullet_{X/S})$ into
+$\sigma_{\geq i + j}\Omega^\bullet_{X/S}$. Hence we get
+commutative diagrams
+$$
+\xymatrix{
+H^n(X, \sigma_{\geq i}\Omega^\bullet_{X/S})
+\times
+H^m(X, \sigma_{\geq j}\Omega^\bullet_{X/S})
+\ar[r] \ar[d] &
+H^{n + m}(X, \sigma_{\geq i + j}\Omega^\bullet_{X/S}) \ar[d] \\
+H^n_{dR}(X/S) \times
+H^m_{dR}(X/S)
+\ar[r]^\cup &
+H^{n + m}_{dR}(X/S)
+}
+$$
+In particular we find that
+$$
+F^iH^n_{dR}(X/S) \cup F^jH^m_{dR}(X/S) \subset F^{i + j}H^{n + m}_{dR}(X/S)
+$$
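+
+\medskip\noindent
+Note that the Hodge filtration is exhaustive and bounded:
+$$
+F^0H^n_{dR}(X/S) = H^n_{dR}(X/S)
+\quad\text{and}\quad
+F^pH^n_{dR}(X/S) = 0 \text{ for } p > n
+$$
+The first equality holds as
+$\sigma_{\geq 0}\Omega^\bullet_{X/S} = \Omega^\bullet_{X/S}$. The second
+holds because the spectral sequence of the filtered complex
+$\sigma_{\geq p}\Omega^\bullet_{X/S}$ shows that
+$H^n(X, \sigma_{\geq p}\Omega^\bullet_{X/S})$ is built out of the groups
+$H^{n - a}(X, \Omega^a_{X/S})$ for $a \geq p$, which all vanish when
+$p > n$.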
+
+
+
+
+
+
+\section{K\"unneth formula}
+\label{section-kunneth}
+
+\noindent
+An important feature of de Rham cohomology is that there is a
+K\"unneth formula.
+
+\medskip\noindent
+Let $a : X \to S$ and $b : Y \to S$ be morphisms of schemes with the same
+target. Let $p : X \times_S Y \to X$ and $q : X \times_S Y \to Y$ be the
+projection morphisms and $f = a \circ p = b \circ q$. Here is a picture
+$$
+\xymatrix{
+& X \times_S Y \ar[ld]^p \ar[rd]_q \ar[dd]^f \\
+X \ar[rd]_a & & Y \ar[ld]^b \\
+& S
+}
+$$
+In this section, given an $\mathcal{O}_X$-module $\mathcal{F}$
+and an $\mathcal{O}_Y$-module $\mathcal{G}$ let us set
+$$
+\mathcal{F} \boxtimes \mathcal{G} =
+p^*\mathcal{F} \otimes_{\mathcal{O}_{X \times_S Y}} q^*\mathcal{G}
+$$
+The bifunctor
+$(\mathcal{F}, \mathcal{G}) \mapsto \mathcal{F} \boxtimes \mathcal{G}$
+on quasi-coherent modules extends to a bifunctor on quasi-coherent modules
+and differential operators of finite order over $S$, see
+Morphisms, Remark \ref{morphisms-remark-base-change-differential-operators}.
+The differentials of the de Rham complexes $\Omega^\bullet_{X/S}$ and
+$\Omega^\bullet_{Y/S}$ are differential operators of order $1$
+over $S$ by Modules, Lemma
+\ref{modules-lemma-differentials-relative-de-rham-complex-order-1}.
+Thus it makes sense to consider the complex
+$$
+\text{Tot}(\Omega^\bullet_{X/S} \boxtimes \Omega^\bullet_{Y/S})
+$$
+Please see the discussion in Derived Categories of Schemes, Section
+\ref{perfect-section-kunneth-complexes}.
+
+\begin{lemma}
+\label{lemma-de-rham-complex-product}
+In the situation above there is a canonical isomorphism
+$$
+\text{Tot}(\Omega^\bullet_{X/S} \boxtimes \Omega^\bullet_{Y/S})
+\longrightarrow
+\Omega^\bullet_{X \times_S Y/S}
+$$
+of complexes of $f^{-1}\mathcal{O}_S$-modules.
+\end{lemma}
+
+\begin{proof}
+We know that
+$
+\Omega_{X \times_S Y/S} = p^*\Omega_{X/S} \oplus q^*\Omega_{Y/S}
+$
+by Morphisms, Lemma \ref{morphisms-lemma-differential-product}.
+Taking exterior powers we obtain
+$$
+\Omega^n_{X \times_S Y/S} =
+\bigoplus\nolimits_{i + j = n}
+p^*\Omega^i_{X/S} \otimes_{\mathcal{O}_{X \times_S Y}} q^*\Omega^j_{Y/S} =
+\bigoplus\nolimits_{i + j = n}
+\Omega^i_{X/S} \boxtimes \Omega^j_{Y/S}
+$$
+by elementary properties of exterior powers. These identifications determine
+isomorphisms between the terms of the complexes on the left and the right
+of the arrow in the lemma. We omit the verification that these maps
+are compatible with differentials.
+\end{proof}
+
+\noindent
+Set $A = \Gamma(S, \mathcal{O}_S)$. Combining the result of
+Lemma \ref{lemma-de-rham-complex-product} with the map
+Derived Categories of Schemes, Equation
+(\ref{perfect-equation-de-rham-kunneth})
+we obtain a cup product
+$$
+R\Gamma(X, \Omega^\bullet_{X/S})
+\otimes_A^\mathbf{L}
+R\Gamma(Y, \Omega^\bullet_{Y/S})
+\longrightarrow
+R\Gamma(X \times_S Y, \Omega^\bullet_{X \times_S Y/S})
+$$
+On the level of cohomology, using the discussion in
+More on Algebra, Section \ref{more-algebra-section-products-tor},
+we obtain a canonical map
+$$
+H^i_{dR}(X/S) \otimes_A H^j_{dR}(Y/S)
+\longrightarrow
+H^{i + j}_{dR}(X \times_S Y/S),\quad
+(\xi, \zeta) \longmapsto p^*\xi \cup q^*\zeta
+$$
+We note that the construction above indeed proceeds by
+first pulling back and then taking the cup product.
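+
+\medskip\noindent
+For example, take $A = \mathbf{Q}$, $S = \Spec(\mathbf{Q})$, and
+$X = Y = \mathbf{G}_{m, \mathbf{Q}}$ with coordinates $x$ and $y$.
+A direct computation with Lemma \ref{lemma-de-rham-affine} gives
+$H^0_{dR}(X/S) = \mathbf{Q}$ and
+$H^1_{dR}(X/S) = \mathbf{Q} \cdot [\text{d}x/x]$, and similarly for $Y$.
+Since $X$ and $Y$ are smooth and affine over $S$,
+Lemma \ref{lemma-kunneth-de-rham} applies and we obtain
+$$
+H^1_{dR}(X \times_S Y/S) =
+\mathbf{Q} \cdot p^*[\text{d}x/x] \oplus \mathbf{Q} \cdot q^*[\text{d}y/y]
+\quad\text{and}\quad
+H^2_{dR}(X \times_S Y/S) =
+\mathbf{Q} \cdot \left(p^*[\text{d}x/x] \cup q^*[\text{d}y/y]\right)
+$$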
+
+\begin{lemma}
+\label{lemma-kunneth-de-rham}
+Assume $X$ and $Y$ are smooth, quasi-compact, with affine diagonal over
+$S = \Spec(A)$. Then the map
+$$
+R\Gamma(X, \Omega^\bullet_{X/S})
+\otimes_A^\mathbf{L}
+R\Gamma(Y, \Omega^\bullet_{Y/S})
+\longrightarrow
+R\Gamma(X \times_S Y, \Omega^\bullet_{X \times_S Y/S})
+$$
+is an isomorphism in $D(A)$.
+\end{lemma}
+
+\begin{proof}
+By Morphisms, Lemma \ref{morphisms-lemma-smooth-omega-finite-locally-free}
+the sheaves $\Omega^n_{X/S}$ and $\Omega^m_{Y/S}$ are finite locally free
+$\mathcal{O}_X$ and $\mathcal{O}_Y$-modules. On the other hand, $X$ and $Y$
+are flat over $S$ (Morphisms, Lemma \ref{morphisms-lemma-smooth-flat})
+and hence we find that $\Omega^n_{X/S}$ and $\Omega^m_{Y/S}$ are flat over $S$.
+Also, observe that $\Omega^\bullet_{X/S}$ is locally bounded. Thus
+the result follows from Lemma \ref{lemma-de-rham-complex-product} and
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-kunneth-special}.
+\end{proof}
+
+\noindent
+There is a relative version of the cup product, namely a map
+$$
+Ra_*\Omega^\bullet_{X/S}
+\otimes_{\mathcal{O}_S}^\mathbf{L}
+Rb_*\Omega^\bullet_{Y/S}
+\longrightarrow
+Rf_*\Omega^\bullet_{X \times_S Y/S}
+$$
+in $D(\mathcal{O}_S)$. The construction combines
+Lemma \ref{lemma-de-rham-complex-product} with the map
+Derived Categories of Schemes, Equation
+(\ref{perfect-equation-relative-de-rham-kunneth}).
+The construction shows that this map is given by the diagram
+$$
+\xymatrix{
+Ra_*\Omega^\bullet_{X/S}
+\otimes_{\mathcal{O}_S}^\mathbf{L}
+Rb_*\Omega^\bullet_{Y/S}
+\ar[d]^{\text{units of adjunction}} \\
+Rf_*(p^{-1}\Omega^\bullet_{X/S})
+\otimes_{\mathcal{O}_S}^\mathbf{L}
+Rf_*(q^{-1}\Omega^\bullet_{Y/S}) \ar[r] \ar[d]^{\text{relative cup product}} &
+Rf_*(\Omega^\bullet_{X \times_S Y/S})
+\otimes_{\mathcal{O}_S}^\mathbf{L}
+Rf_*(\Omega^\bullet_{X \times_S Y/S}) \ar[d]^{\text{relative cup product}} \\
+Rf_*(p^{-1}\Omega^\bullet_{X/S}
+\otimes_{f^{-1}\mathcal{O}_S}^\mathbf{L}
+q^{-1}\Omega^\bullet_{Y/S})
+\ar[d]^{\text{from derived to usual}} \ar[r] &
+Rf_*(\Omega^\bullet_{X \times_S Y/S}
+\otimes_{f^{-1}\mathcal{O}_S}^\mathbf{L}
+\Omega^\bullet_{X \times_S Y/S})
+\ar[d]^{\text{from derived to usual}} \\
+Rf_*\text{Tot}(p^{-1}\Omega^\bullet_{X/S}
+\otimes_{f^{-1}\mathcal{O}_S}
+q^{-1}\Omega^\bullet_{Y/S}) \ar[r] \ar[d]^{\text{canonical map}} &
+Rf_*\text{Tot}(\Omega^\bullet_{X \times_S Y/S}
+\otimes_{f^{-1}\mathcal{O}_S}
+\Omega^\bullet_{X \times_S Y/S})
+\ar[d]^{\eta \otimes \omega \mapsto \eta \wedge \omega} \\
+Rf_*\text{Tot}(\Omega^\bullet_{X/S} \boxtimes \Omega^\bullet_{Y/S})
+\ar@{=}[r]
+&
+Rf_*\Omega^\bullet_{X \times_S Y/S}
+}
+$$
+Here the first arrow uses the units $\text{id} \to Rp_* p^{-1}$
+and $\text{id} \to Rq_* q^{-1}$ of adjunction as well as the
+identifications $Rf_* p^{-1} = Ra_* Rp_* p^{-1}$ and
+$Rf_* q^{-1} = Rb_* Rq_* q^{-1}$.
+The second arrow is the relative cup product of
+Cohomology, Remark \ref{cohomology-remark-cup-product}.
+The third arrow is the map sending a derived tensor product
+of complexes to the totalization of the tensor product of complexes.
+The final equality is Lemma \ref{lemma-de-rham-complex-product}.
+This construction recovers on global sections the construction given earlier.
+
+\begin{lemma}
+\label{lemma-kunneth-de-rham-relative}
+Assume $X \to S$ and $Y \to S$ are smooth and quasi-compact
+and the morphisms $X \to X \times_S X$ and $Y \to Y \times_S Y$ are affine.
+Then the relative cup product
+$$
+Ra_*\Omega^\bullet_{X/S}
+\otimes_{\mathcal{O}_S}^\mathbf{L}
+Rb_*\Omega^\bullet_{Y/S}
+\longrightarrow
+Rf_*\Omega^\bullet_{X \times_S Y/S}
+$$
+is an isomorphism in $D(\mathcal{O}_S)$.
+\end{lemma}
+
+\begin{proof}
+Immediate consequence of Lemma \ref{lemma-kunneth-de-rham}.
+\end{proof}
+
+
+
+
+
+
+
+\section{First Chern class in de Rham cohomology}
+\label{section-first-chern-class}
+
+\noindent
+Let $X \to S$ be a morphism of schemes. There is a map of complexes
+$$
+\text{d}\log : \mathcal{O}_X^*[-1] \longrightarrow \Omega^\bullet_{X/S}
+$$
+which sends the section $g \in \mathcal{O}_X^*(U)$ to the section
+$\text{d}\log(g) = g^{-1}\text{d}g$ of $\Omega^1_{X/S}(U)$.
+Thus we can consider the map
+$$
+\Pic(X) = H^1(X, \mathcal{O}_X^*) =
+H^2(X, \mathcal{O}_X^*[-1]) \longrightarrow H^2_{dR}(X/S)
+$$
+where the first equality is
+Cohomology, Lemma \ref{cohomology-lemma-h1-invertible}.
+The image of the isomorphism class of the invertible module
+$\mathcal{L}$ is denoted $c^{dR}_1(\mathcal{L}) \in H^2_{dR}(X/S)$.
+
+\medskip\noindent
+We can also use the map $\text{d}\log : \mathcal{O}_X^* \to \Omega^1_{X/S}$
+to define a Chern class in Hodge cohomology
+$$
+c_1^{Hodge} : \Pic(X) \longrightarrow H^1(X, \Omega^1_{X/S})
+\subset H^2_{Hodge}(X/S)
+$$
+These constructions are compatible with pullbacks.
+
+\begin{lemma}
+\label{lemma-pullback-c1}
+Given a commutative diagram
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+of schemes the diagrams
+$$
+\xymatrix{
+\Pic(X') \ar[d]_{c_1^{dR}} &
+\Pic(X) \ar[d]^{c_1^{dR}} \ar[l]^{f^*} \\
+H^2_{dR}(X'/S') &
+H^2_{dR}(X/S) \ar[l]_{f^*}
+}
+\quad
+\xymatrix{
+\Pic(X') \ar[d]_{c_1^{Hodge}} &
+\Pic(X) \ar[d]^{c_1^{Hodge}} \ar[l]^{f^*} \\
+H^1(X', \Omega^1_{X'/S'}) &
+H^1(X, \Omega^1_{X/S}) \ar[l]_{f^*}
+}
+$$
+commute.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
+Let us ``compute'' the element $c^{dR}_1(\mathcal{L})$ in {\v C}ech
+cohomology (with sign rules for {\v C}ech differentials
+as in Cohomology, Section
+\ref{cohomology-section-cech-cohomology-of-complexes}).
+Namely, choose an open covering
+$\mathcal{U} : X = \bigcup_{i \in I} U_i$ such that
+we have a trivializing section $s_i$ of $\mathcal{L}|_{U_i}$ for all $i$.
+On the overlaps $U_{i_0i_1} = U_{i_0} \cap U_{i_1}$
+we have an invertible function $f_{i_0i_1}$ such that
+$f_{i_0i_1} = s_{i_1}|_{U_{i_0i_1}} s_{i_0}|_{U_{i_0i_1}}^{-1}$\footnote{The
{\v C}ech differential of a $0$-cochain $\{a_{i_0}\}$ has value
+$a_{i_1} - a_{i_0}$ over $U_{i_0i_1}$.}.
+Of course we have
+$$
+f_{i_1i_2}|_{U_{i_0i_1i_2}}
+f_{i_0i_2}^{-1}|_{U_{i_0i_1i_2}}
+f_{i_0i_1}|_{U_{i_0i_1i_2}} = 1
+$$
+The cohomology class of $\mathcal{L}$ in $H^1(X, \mathcal{O}_X^*)$ is
+the image of the {\v C}ech cohomology class of the cocycle $\{f_{i_0i_1}\}$ in
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{O}_X^*)$.
+Therefore we see that $c_1^{dR}(\mathcal{L})$ is the image
+of the cohomology class associated to the {\v C}ech cocycle
+$\{\alpha_{i_0 \ldots i_p}\}$ in
+$\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \Omega_{X/S}^\bullet))$
+of degree $2$ given by
+\begin{enumerate}
+\item $\alpha_{i_0} = 0$ in $\Omega^2_{X/S}(U_{i_0})$,
+\item $\alpha_{i_0i_1} = f_{i_0i_1}^{-1}\text{d}f_{i_0i_1}$ in
+$\Omega^1_{X/S}(U_{i_0i_1})$, and
\item $\alpha_{i_0i_1i_2} = 0$ in $\mathcal{O}_X(U_{i_0i_1i_2})$.
+\end{enumerate}
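As a consistency check, note that $\alpha$ is indeed a cocycle: the
component of the total differential in {\v C}ech degree $2$ is
$$
\alpha_{i_1i_2}|_{U_{i_0i_1i_2}} -
\alpha_{i_0i_2}|_{U_{i_0i_1i_2}} +
\alpha_{i_0i_1}|_{U_{i_0i_1i_2}} =
\text{d}\log\left(f_{i_1i_2} f_{i_0i_2}^{-1} f_{i_0i_1}\right) =
\text{d}\log(1) = 0
$$
because $\text{d}\log$ turns products into sums, and the component in
{\v C}ech degree $1$ vanishes as
$\text{d}(f^{-1}\text{d}f) = -f^{-2}\text{d}f \wedge \text{d}f = 0$.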
+Suppose we have invertible modules $\mathcal{L}_k$, $k = 1, \ldots, a$
+each trivialized over $U_i$ for all $i \in I$ giving rise to cocycles
+$f_{k, i_0i_1}$ and $\alpha_k = \{\alpha_{k, i_0 \ldots i_p}\}$ as above.
+Using the rule in
+Cohomology, Section \ref{cohomology-section-cech-cohomology-of-complexes}
+we can compute
+$$
+\beta = \alpha_1 \cup \alpha_2 \cup \ldots \cup \alpha_a
+$$
+to be given by the cocycle $\beta = \{\beta_{i_0 \ldots i_p}\}$
+described as follows
+\begin{enumerate}
+\item $\beta_{i_0 \ldots i_p} = 0$ in
+$\Omega^{2a - p}_{X/S}(U_{i_0 \ldots i_p})$ unless $p = a$, and
+\item $\beta_{i_0 \ldots i_a} = (-1)^{a(a - 1)/2}
+\alpha_{1, i_0i_1} \wedge \alpha_{2, i_1 i_2} \wedge \ldots \wedge
+\alpha_{a, i_{a - 1}i_a}$ in
+$\Omega^a_{X/S}(U_{i_0 \ldots i_a})$.
+\end{enumerate}
Thus this is a cocycle representing
$c_1^{dR}(\mathcal{L}_1) \cup \ldots \cup c_1^{dR}(\mathcal{L}_a)$.
Of course, the same computation shows that the cocycle
+$\{\beta_{i_0 \ldots i_a}\}$ in
$\check{\mathcal{C}}^a(\mathcal{U}, \Omega_{X/S}^a)$
+represents the cohomology class
$c_1^{Hodge}(\mathcal{L}_1) \cup \ldots \cup c_1^{Hodge}(\mathcal{L}_a)$.
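For instance, for $a = 2$ the sign is $(-1)^{2 \cdot 1/2} = -1$ and the
only possibly nonzero components of the cocycle are
$$
\beta_{i_0i_1i_2} =
- \text{d}\log(f_{1, i_0i_1}) \wedge \text{d}\log(f_{2, i_1i_2})
\in \Omega^2_{X/S}(U_{i_0i_1i_2})
$$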
+
+\begin{remark}
+\label{remark-truncations}
+Here is a reformulation of the calculations above in more abstract terms.
+Let $p : X \to S$ be a morphism of schemes. Let $\mathcal{L}$ be an
+invertible $\mathcal{O}_X$-module. If we view $\text{d}\log$ as a map
+$$
+\mathcal{O}_X^*[-1] \to \sigma_{\geq 1}\Omega^\bullet_{X/S}
+$$
+then using $\Pic(X) = H^1(X, \mathcal{O}_X^*)$ as above we find a
+cohomology class
+$$
+\gamma_1(\mathcal{L}) \in H^2(X, \sigma_{\geq 1}\Omega^\bullet_{X/S})
+$$
+The image of $\gamma_1(\mathcal{L})$ under the map
+$\sigma_{\geq 1}\Omega^\bullet_{X/S} \to \Omega^\bullet_{X/S}$
+recovers $c_1^{dR}(\mathcal{L})$. In particular we see that
+$c_1^{dR}(\mathcal{L}) \in F^1H^2_{dR}(X/S)$, see
+Section \ref{section-hodge-filtration}. The image of $\gamma_1(\mathcal{L})$
+under the map $\sigma_{\geq 1}\Omega^\bullet_{X/S} \to \Omega^1_{X/S}[-1]$
+recovers $c_1^{Hodge}(\mathcal{L})$. Taking the cup product
+(see Section \ref{section-hodge-filtration}) we obtain
+$$
+\xi = \gamma_1(\mathcal{L}_1) \cup \ldots \cup \gamma_1(\mathcal{L}_a) \in
+H^{2a}(X, \sigma_{\geq a}\Omega^\bullet_{X/S})
+$$
+The commutative diagrams in Section \ref{section-hodge-filtration}
+show that $\xi$ is mapped to
+$c_1^{dR}(\mathcal{L}_1) \cup \ldots \cup c_1^{dR}(\mathcal{L}_a)$
+in $H^{2a}_{dR}(X/S)$ by the map
+$\sigma_{\geq a}\Omega^\bullet_{X/S} \to \Omega^\bullet_{X/S}$.
Also, it follows that
$c_1^{dR}(\mathcal{L}_1) \cup \ldots \cup c_1^{dR}(\mathcal{L}_a)$
is contained in $F^a H^{2a}_{dR}(X/S)$. Similarly, the map
+$\sigma_{\geq a}\Omega^\bullet_{X/S} \to \Omega^a_{X/S}[-a]$
+sends $\xi$ to
+$c_1^{Hodge}(\mathcal{L}_1) \cup \ldots \cup c_1^{Hodge}(\mathcal{L}_a)$
+in $H^a(X, \Omega^a_{X/S})$.
+\end{remark}
+
+\begin{remark}
+\label{remark-log-forms}
+Let $p : X \to S$ be a morphism of schemes. For $i > 0$
+denote $\Omega^i_{X/S, log} \subset \Omega^i_{X/S}$ the abelian subsheaf
+generated by local sections of the form
+$$
+\text{d}\log(u_1) \wedge \ldots \wedge \text{d}\log(u_i)
+$$
where $u_1, \ldots, u_i$ are invertible local sections of $\mathcal{O}_X$.
+For $i = 0$ the subsheaf $\Omega^0_{X/S, log} \subset \mathcal{O}_X$
+is the image of $\mathbf{Z} \to \mathcal{O}_X$. For every $i \geq 0$ we
+have a map of complexes
+$$
+\Omega^i_{X/S, log}[-i] \longrightarrow \Omega^\bullet_{X/S}
+$$
because the derivative of a logarithmic form is zero. Moreover, the wedge
product of two logarithmic forms is again logarithmic, hence we find bilinear maps
+$$
+\wedge : \Omega^i_{X/S, log} \times
+\Omega^j_{X/S, log} \longrightarrow \Omega^{i + j}_{X/S, log}
+$$
+compatible with (\ref{equation-wedge}) and the maps above.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Using the map of abelian sheaves
+$\text{d}\log : \mathcal{O}_X^* \to \Omega^1_{X/S, log}$
+and the identification $\Pic(X) = H^1(X, \mathcal{O}_X^*)$
+we find a canonical cohomology class
+$$
+\tilde \gamma_1(\mathcal{L}) \in H^1(X, \Omega^1_{X/S, log})
+$$
+These classes have the following properties
+\begin{enumerate}
\item the canonical
map $\Omega^1_{X/S, log}[-1] \to \sigma_{\geq 1}\Omega^\bullet_{X/S}$
sends $\tilde \gamma_1(\mathcal{L})$ to the class
$\gamma_1(\mathcal{L}) \in
H^2(X, \sigma_{\geq 1}\Omega^\bullet_{X/S})$
of Remark \ref{remark-truncations},
\item the canonical
map $\Omega^1_{X/S, log}[-1] \to \Omega^\bullet_{X/S}$
sends $\tilde \gamma_1(\mathcal{L})$ to $c_1^{dR}(\mathcal{L})$ in
$H^2_{dR}(X/S)$,
\item the canonical
map $\Omega^1_{X/S, log} \to \Omega^1_{X/S}$
sends $\tilde \gamma_1(\mathcal{L})$ to $c_1^{Hodge}(\mathcal{L})$ in
$H^1(X, \Omega^1_{X/S})$,
+\item the construction of these classes is compatible with pullbacks,
+\item add more here.
+\end{enumerate}
+\end{remark}
+
+
+
+
+
+\section{de Rham cohomology of a line bundle}
+\label{section-line-bundle}
+
+\noindent
+A line bundle is a special case of a vector bundle, which in turn is a
+cone endowed with some extra structure. To intelligently talk about
+the de Rham complex of these, it makes sense to discuss the de Rham
+complex of a graded ring.
+
+\begin{remark}[de Rham complex of a graded ring]
+\label{remark-de-rham-complex-graded}
+Let $G$ be an abelian monoid written additively with neutral element $0$.
+Let $R \to A$ be a ring map and assume $A$ comes with a grading
+$A = \bigoplus_{g \in G} A_g$ by $R$-modules such that $R$ maps into $A_0$
+and $A_g \cdot A_{g'} \subset A_{g + g'}$. Then the module of differentials
+comes with a grading
+$$
+\Omega_{A/R} = \bigoplus\nolimits_{g \in G} \Omega_{A/R, g}
+$$
+where $\Omega_{A/R, g}$ is the $R$-submodule of $\Omega_{A/R}$
+generated by $a_0 \text{d}a_1$ with $a_i \in A_{g_i}$ such that
+$g = g_0 + g_1$. Similarly, we obtain
+$$
+\Omega^p_{A/R} = \bigoplus\nolimits_{g \in G} \Omega^p_{A/R, g}
+$$
+where $\Omega^p_{A/R, g}$ is the $R$-submodule of $\Omega^p_{A/R}$
+generated by $a_0 \text{d}a_1 \wedge \ldots \wedge \text{d}a_p$
+with $a_i \in A_{g_i}$ such that $g = g_0 + g_1 + \ldots + g_p$.
+Of course the differentials preserve the grading and the wedge
+product is compatible with the gradings in the obvious manner.
+\end{remark}
+
+\noindent
+Let $f : X \to S$ be a morphism of schemes. Let $\pi : C \to X$ be a cone, see
+Constructions, Definition \ref{constructions-definition-abstract-cone}.
+Recall that this means $\pi$ is affine and we have a grading
+$\pi_*\mathcal{O}_C = \bigoplus_{n \geq 0} \mathcal{A}_n$ with
+$\mathcal{A}_0 = \mathcal{O}_X$.
+Using the discussion in Remark \ref{remark-de-rham-complex-graded}
+over affine opens we find that\footnote{With excuses for the notation!}
+$$
+\pi_*(\Omega^\bullet_{C/S}) =
+\bigoplus\nolimits_{n \geq 0} \Omega^\bullet_{C/S, n}
+$$
+is canonically a direct sum of subcomplexes. Moreover, we have a factorization
+$$
+\Omega^\bullet_{X/S} \to \Omega^\bullet_{C/S, 0} \to
+\pi_*(\Omega^\bullet_{C/S})
+$$
+and we know that $\omega \wedge \eta \in \Omega^{p + q}_{C/S, n + m}$
+if $\omega \in \Omega^p_{C/S, n}$ and $\eta \in \Omega^q_{C/S, m}$.
+
+\medskip\noindent
+Let $f : X \to S$ be a morphism of schemes. Let $\pi : L \to X$ be the
+line bundle associated to the invertible $\mathcal{O}_X$-module $\mathcal{L}$.
+This means that $\pi$ is the unique affine morphism such that
+$$
+\pi_*\mathcal{O}_L =
+\bigoplus\nolimits_{n \geq 0} \mathcal{L}^{\otimes n}
+$$
+as $\mathcal{O}_X$-algebras. Thus $L$ is a cone over $X$.
+By the discussion above we find a
+canonical direct sum decomposition
+$$
+\pi_*(\Omega^\bullet_{L/S}) =
+\bigoplus\nolimits_{n \geq 0} \Omega^\bullet_{L/S, n}
+$$
+compatible with wedge product, compatible with the decomposition
+of $\pi_*\mathcal{O}_L$ above, and such that $\Omega_{X/S}$
+maps into the part $\Omega_{L/S, 0}$ of degree $0$.
+
+\medskip\noindent
+There is another case which will be useful to us. Namely, consider the
+complement\footnote{The scheme $L^\star$ is the $\mathbf{G}_m$-torsor
+over $X$ associated to $L$. This is why the grading we get below is
+a $\mathbf{Z}$-grading, compare with Groupoids,
+Example \ref{groupoids-example-Gm-on-affine} and
+Lemmas \ref{groupoids-lemma-complete-reducibility-Gm} and
+\ref{groupoids-lemma-Gm-equivariant-module}.}
+$L^\star \subset L$ of the zero section $o : X \to L$ in our line
+bundle $L$. A local computation shows we have a canonical isomorphism
+$$
+(L^\star \to X)_*\mathcal{O}_{L^\star} =
+\bigoplus\nolimits_{n \in \mathbf{Z}} \mathcal{L}^{\otimes n}
+$$
+of $\mathcal{O}_X$-algebras. The right hand side is a $\mathbf{Z}$-graded
+quasi-coherent $\mathcal{O}_X$-algebra. Using the discussion in
+Remark \ref{remark-de-rham-complex-graded} over affine opens we find that
+$$
+(L^\star \to X)_*(\Omega^\bullet_{L^\star/S}) =
+\bigoplus\nolimits_{n \in \mathbf{Z}} \Omega^\bullet_{L^\star/S, n}
+$$
+compatible with wedge product, compatible with the decomposition
+of $(L^\star \to X)_*\mathcal{O}_{L^\star}$ above, and such that
+$\Omega_{X/S}$ maps into the part $\Omega_{L^\star/S, 0}$ of degree $0$.
+The complex $\Omega^\bullet_{L^\star/S, 0}$ will be
+of particular interest to us.
+
+\begin{lemma}
+\label{lemma-the-complex-for-L-star}
+With notation as above, there is a short exact sequence of complexes
+$$
+0 \to \Omega^\bullet_{X/S} \to
+\Omega^\bullet_{L^\star/S, 0} \to
+\Omega^\bullet_{X/S}[-1] \to 0
+$$
+\end{lemma}
+
+\begin{proof}
+We have constructed the map
+$\Omega^\bullet_{X/S} \to \Omega^\bullet_{L^\star/S, 0}$ above.
+
+\medskip\noindent
+Construction of
+$\text{Res} : \Omega^\bullet_{L^\star/S, 0} \to \Omega^\bullet_{X/S}[-1]$.
+Let $U \subset X$ be an open and let $s \in \mathcal{L}(U)$
+and $s' \in \mathcal{L}^{\otimes -1}(U)$ be sections such that
+$s' s = 1$. Then $s$ gives an invertible section of the sheaf of
+algebras $(L^\star \to X)_*\mathcal{O}_{L^\star}$ over $U$
+with inverse $s' = s^{-1}$. Then we can consider the $1$-form
+$\text{d}\log(s) = s' \text{d}(s)$ which is an element of
+$\Omega^1_{L^\star/S, 0}(U)$ by our construction of the grading on
+$\Omega^1_{L^\star/S}$. Our computations on affines given below
+will show that $1$ and $\text{d}\log(s)$ freely generate
+$\Omega^\bullet_{L^\star/S, 0}|_U$ as a right module over
+$\Omega^\bullet_{X/S}|_U$.
+Thus we can define $\text{Res}$ over $U$ by the rule
+$$
+\text{Res}(\omega' + \text{d}\log(s) \wedge \omega) = \omega
+$$
+for all $\omega', \omega \in \Omega^\bullet_{X/S}(U)$. This
+map is independent of the choice of local generator $s$ and hence
+glues to give a global map. Namely, another choice of $s$
+would be of the form $gs$ for some invertible $g \in \mathcal{O}_X(U)$
+and we would get $\text{d}\log(gs) = g^{-1}\text{d}(g) + \text{d}\log(s)$
+from which the independence easily follows.
+Finally, observe that our rule for $\text{Res}$
+is compatible with differentials
+as $\text{d}(\omega' + \text{d}\log(s) \wedge \omega) =
+\text{d}(\omega') - \text{d}\log(s) \wedge \text{d}(\omega)$
+and because the differential on $\Omega^\bullet_{X/S}[-1]$
+sends $\omega'$ to $-\text{d}(\omega')$ by our sign convention in
+Homology, Definition \ref{homology-definition-shift-cochain}.
+
+\medskip\noindent
+Local computation. We can cover $X$ by affine opens $U \subset X$
+such that $\mathcal{L}|_U \cong \mathcal{O}_U$ which moreover map
+into an affine open $V \subset S$. Write $U = \Spec(A)$, $V = \Spec(R)$
+and choose a generator $s$ of $\mathcal{L}$. We find that we have
+$$
+L^\star \times_X U = \Spec(A[s, s^{-1}])
+$$
+Computing differentials we see that
+$$
+\Omega^1_{A[s, s^{-1}]/R} =
+A[s, s^{-1}] \otimes_A \Omega^1_{A/R} \oplus A[s, s^{-1}] \text{d}\log(s)
+$$
+and therefore taking exterior powers we obtain
+$$
+\Omega^p_{A[s, s^{-1}]/R} =
+A[s, s^{-1}] \otimes_A \Omega^p_{A/R}
+\oplus
+A[s, s^{-1}] \text{d}\log(s) \otimes_A \Omega^{p - 1}_{A/R}
+$$
+Taking degree $0$ parts we find
+$$
+\Omega^p_{A[s, s^{-1}]/R, 0} =
+\Omega^p_{A/R} \oplus \text{d}\log(s) \otimes_A \Omega^{p - 1}_{A/R}
+$$
+and the proof of the lemma is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-the-complex-for-L-star-gives-chern-class}
+The ``boundary'' map
+$\delta : \Omega^\bullet_{X/S} \to \Omega^\bullet_{X/S}[2]$
+in $D(X, f^{-1}\mathcal{O}_S)$ coming from
+the short exact sequence in Lemma \ref{lemma-the-complex-for-L-star}
+is the map of Remark \ref{remark-cup-product-as-a-map}
+for $\xi = c_1^{dR}(\mathcal{L})$.
+\end{lemma}
+
+\begin{proof}
+To be precise we consider the shift
+$$
+0 \to \Omega^\bullet_{X/S}[1] \to
+\Omega^\bullet_{L^\star/S, 0}[1] \to
+\Omega^\bullet_{X/S} \to 0
+$$
+of the short exact sequence of Lemma \ref{lemma-the-complex-for-L-star}.
+As the degree zero part of a grading on
+$(L^\star \to X)_*\Omega^\bullet_{L^\star/S}$
+we see that $\Omega^\bullet_{L^\star/S, 0}$ is a differential
+graded $\mathcal{O}_X$-algebra and that the map
+$\Omega^\bullet_{X/S} \to \Omega^\bullet_{L^\star/S, 0}$
+is a homomorphism of differential graded $\mathcal{O}_X$-algebras.
+Hence we may view $\Omega^\bullet_{X/S}[1] \to
+\Omega^\bullet_{L^\star/S, 0}[1]$ as a map of right differential graded
+$\Omega^\bullet_{X/S}$-modules on $X$. The map
+$\text{Res} : \Omega^\bullet_{L^\star/S, 0}[1] \to \Omega^\bullet_{X/S}$
+is a map of right differential graded $\Omega^\bullet_{X/S}$-modules
+since it is locally defined by the rule
+$\text{Res}(\omega' + \text{d}\log(s) \wedge \omega) = \omega$, see
+proof of Lemma \ref{lemma-the-complex-for-L-star}.
+Thus by the discussion in
+Differential Graded Sheaves, Section \ref{sdga-section-misc}
+we see that $\delta$ comes from a map
+$\delta' : \Omega^\bullet_{X/S} \to \Omega^\bullet_{X/S}[2]$
+in the derived category $D(\Omega^\bullet_{X/S}, \text{d})$
+of right differential graded modules over the de Rham complex.
The uniqueness asserted in Remark \ref{remark-cup-product-as-a-map}
+shows it suffices to prove that $\delta(1) = c_1^{dR}(\mathcal{L})$.
+
+\medskip\noindent
+We claim that there is a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+\mathcal{O}_X^* \ar[r] \ar[d]_{\text{d}\log} &
+E \ar[r] \ar[d] &
+\underline{\mathbf{Z}} \ar[d] \ar[r] &
+0 \\
+0 \ar[r] &
+\Omega^\bullet_{X/S}[1] \ar[r] &
+\Omega^\bullet_{L^\star/S, 0}[1] \ar[r] &
+\Omega^\bullet_{X/S} \ar[r] &
+0
+}
+$$
+where the top row is a short exact sequence of abelian sheaves whose
+boundary map sends $1$ to the class of $\mathcal{L}$ in
$H^1(X, \mathcal{O}_X^*)$. The claim suffices by the compatibility
of boundary maps with maps between short
exact sequences. We define $E$ as the sheafification of the rule
+$$
+U \longmapsto \{(s, n) \mid
+n \in \mathbf{Z},\ s \in \mathcal{L}^{\otimes n}(U)\text{ generator}\}
+$$
+with group structure given by $(s, n) \cdot (t, m) = (s \otimes t, n + m)$.
+The middle vertical map sends $(s, n)$ to $\text{d}\log(s)$. This produces
+a map of short exact sequences
because the map $\text{Res} : \Omega^1_{L^\star/S, 0} \to \mathcal{O}_X$
+constructed in the proof of Lemma \ref{lemma-the-complex-for-L-star} sends
+$\text{d}\log(s)$ to $1$ if $s$ is a local generator of $\mathcal{L}$.
+To calculate the boundary of $1$ in the top row, choose local trivializations
+$s_i$ of $\mathcal{L}$ over opens $U_i$ as in
+Section \ref{section-first-chern-class}. On the overlaps
+$U_{i_0i_1} = U_{i_0} \cap U_{i_1}$
+we have an invertible function $f_{i_0i_1}$ such that
+$f_{i_0i_1} = s_{i_1}|_{U_{i_0i_1}} s_{i_0}|_{U_{i_0i_1}}^{-1}$
+and the cohomology class of $\mathcal{L}$ is given by the {\v C}ech cocycle
+$\{f_{i_0i_1}\}$. Then of course we have
+$$
+(f_{i_0i_1}, 0) = (s_{i_1}, 1)|_{U_{i_0i_1}} \cdot
+(s_{i_0}, 1)|_{U_{i_0i_1}}^{-1}
+$$
+as sections of $E$ which finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-push-omega-a}
+With notation as above we have
+\begin{enumerate}
+\item $\Omega^p_{L^\star/S, n} =
+\Omega^p_{L^\star/S, 0} \otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes n}$
+for all $n \in \mathbf{Z}$ as quasi-coherent $\mathcal{O}_X$-modules,
+\item $\Omega^\bullet_{X/S} = \Omega^\bullet_{L/X, 0}$
+as complexes, and
+\item for $n > 0$ and $p \geq 0$ we have
+$\Omega^p_{L/X, n} = \Omega^p_{L^\star/S, n}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In each case there is a globally defined canonical map which
+is an isomorphism by local calculations which we omit.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-line-bundle-characteristic-zero}
+In the situation above, assume there is a morphism $S \to \Spec(\mathbf{Q})$.
+Then $\Omega^\bullet_{X/S} \to \pi_*\Omega^\bullet_{L/S}$ is a
+quasi-isomorphism and $H_{dR}^*(X/S) = H_{dR}^*(L/S)$.
+\end{lemma}
+
+\begin{proof}
+Let $R$ be a $\mathbf{Q}$-algebra. Let $A$ be an $R$-algebra.
+The affine local statement is that the map
+$$
+\Omega^\bullet_{A/R} \longrightarrow \Omega^\bullet_{A[t]/R}
+$$
+is a quasi-isomorphism of complexes of $R$-modules. In fact it is a
+homotopy equivalence with homotopy inverse given by the map sending
+$g \omega + g' \text{d}t \wedge \omega'$ to $g(0)\omega$ for
+$g, g' \in A[t]$ and $\omega, \omega' \in \Omega^\bullet_{A/R}$.
+The homotopy sends $g \omega + g' \text{d}t \wedge \omega'$
to $(\int g') \omega'$ where $\int g' \in A[t]$ is the polynomial
+with vanishing constant term whose derivative with respect to $t$
+is $g'$. Of course, here we use that $R$ contains $\mathbf{Q}$
+as $\int t^n = (1/n)t^{n + 1}$.
+\end{proof}
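As a sketch of the verification of the homotopy identity, write $h$ for
the map sending $g \omega + g' \text{d}t \wedge \omega'$ to
$(\int g')\omega'$. On the generators one computes
$$
(\text{d} \circ h + h \circ \text{d})(t^n \omega) =
t^n \omega - \delta_{n, 0}\, \omega,
\qquad
(\text{d} \circ h + h \circ \text{d})(t^n \text{d}t \wedge \omega') =
t^n \text{d}t \wedge \omega'
$$
for $\omega, \omega' \in \Omega^\bullet_{A/R}$, which exhibits
$\text{d} \circ h + h \circ \text{d}$ as the identity minus the
projection $g \omega + g' \text{d}t \wedge \omega' \mapsto g(0)\omega$.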
+
+\begin{example}
+\label{example-affine-line}
+Lemma \ref{lemma-line-bundle-characteristic-zero} is
+false in positive characteristic. The de Rham complex of
$\mathbf{A}^1_k = \Spec(k[t])$ over a field $k$ looks like a direct sum
+$$
+k \oplus
+\bigoplus\nolimits_{n \geq 1}
+(k \cdot t^n \xrightarrow{n}
+k \cdot t^{n - 1} \text{d}t)
+$$
+Hence if the characteristic of $k$ is $p > 0$, then
+we see that both $H^0_{dR}(\mathbf{A}^1_k/k)$ and
+$H^1_{dR}(\mathbf{A}^1_k/k)$
+are infinite dimensional over $k$.
+\end{example}
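Concretely, since $\text{d}(t^n) = n t^{n - 1}\text{d}t$, in
characteristic $p > 0$ the closed forms in degree $0$ are spanned by
the powers $t^{pm}$ and the forms $t^{pm - 1}\text{d}t$ are not exact,
so that
$$
H^0_{dR}(\mathbf{A}^1_k/k) = k[t^p]
\quad\text{and}\quad
H^1_{dR}(\mathbf{A}^1_k/k) =
\bigoplus\nolimits_{m \geq 1} k \cdot t^{pm - 1}\text{d}t
$$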
+
+
+
+
+
+
+
+
+\section{de Rham cohomology of projective space}
+\label{section-projective-space}
+
+\noindent
+Let $A$ be a ring. Let $n \geq 1$. The structure morphism
$\mathbf{P}^n_A \to \Spec(A)$ is a proper smooth morphism of relative
dimension $n$. It is smooth of relative dimension $n$ and of finite type
+as $\mathbf{P}^n_A$ has a finite affine open covering by schemes each
+isomorphic to $\mathbf{A}^n_A$, see Constructions, Lemma
+\ref{constructions-lemma-standard-covering-projective-space}.
+It is proper because it is also separated and universally closed
+by Constructions, Lemma \ref{constructions-lemma-projective-space-separated}.
+Let us denote $\mathcal{O}$ and $\mathcal{O}(d)$ the structure sheaf
+$\mathcal{O}_{\mathbf{P}^n_A}$ and the Serre twists
+$\mathcal{O}_{\mathbf{P}^n_A}(d)$.
+Let us denote $\Omega = \Omega_{\mathbf{P}^n_A/A}$ the sheaf
+of relative differentials and $\Omega^p$ its exterior powers.
+
+\begin{lemma}
+\label{lemma-euler-sequence}
+There exists a short exact sequence
+$$
+0 \to \Omega \to \mathcal{O}(-1)^{\oplus n + 1} \to \mathcal{O} \to 0
+$$
+\end{lemma}
+
+\begin{proof}
+To explain this, we recall that
+$\mathbf{P}^n_A = \text{Proj}(A[T_0, \ldots, T_n])$,
+and we write symbolically
+$$
+\mathcal{O}(-1)^{\oplus n + 1} =
+\bigoplus\nolimits_{j = 0, \ldots, n} \mathcal{O}(-1) \text{d}T_j
+$$
+The first arrow
+$$
+\Omega \to
+\bigoplus\nolimits_{j = 0, \ldots, n} \mathcal{O}(-1) \text{d}T_j
+$$
+in the short exact sequence above
+is given on each of the standard opens
+$D_+(T_i) = \Spec(A[T_0/T_i, \ldots, T_n/T_i])$
+mentioned above by the rule
+$$
+\sum\nolimits_{j \not = i} g_j \text{d}(T_j/T_i)
+\longmapsto
+\sum\nolimits_{j \not = i} g_j/T_i \text{d}T_j
+- (\sum\nolimits_{j \not = i} g_jT_j/T_i^2) \text{d}T_i
+$$
+This makes sense because $1/T_i$ is a section of $\mathcal{O}(-1)$
+over $D_+(T_i)$. The map
+$$
+\bigoplus\nolimits_{j = 0, \ldots, n} \mathcal{O}(-1) \text{d}T_j
+\to
+\mathcal{O}
+$$
+is given by sending $\text{d}T_j$ to $T_j$, more precisely, on
+$D_+(T_i)$ we send the section $\sum g_j \text{d}T_j$ to
+$\sum T_jg_j$. We omit the verification that this produces
+a short exact sequence.
+\end{proof}
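For example, for $n = 1$ the sequence reads
$$
0 \to \Omega \to
\mathcal{O}(-1)\text{d}T_0 \oplus \mathcal{O}(-1)\text{d}T_1
\to \mathcal{O} \to 0
$$
and taking top exterior powers gives the familiar isomorphism
$\Omega_{\mathbf{P}^1_A/A} \cong \mathcal{O}(-2)$.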
+
+\noindent
+Given an integer $k \in \mathbf{Z}$ and a quasi-coherent
+$\mathcal{O}_{\mathbf{P}^n_A}$-module $\mathcal{F}$
+denote as usual $\mathcal{F}(k)$ the $k$th Serre twist of $\mathcal{F}$.
+See Constructions, Definition \ref{constructions-definition-twist}.
+
+\begin{lemma}
+\label{lemma-twisted-hodge-cohomology-projective-space}
+In the situation above we have the following cohomology groups
+\begin{enumerate}
+\item $H^q(\mathbf{P}^n_A, \Omega^p) = 0$
+unless $0 \leq p = q \leq n$,
\item for $0 \leq p \leq n$ the $A$-module
$H^p(\mathbf{P}^n_A, \Omega^p)$ is free of rank $1$,
+\item for $q > 0$, $k > 0$, and $p$ arbitrary we have
+$H^q(\mathbf{P}^n_A, \Omega^p(k)) = 0$, and
+\item add more here.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We are going to use the results of Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cohomology-projective-space-over-ring}
+without further mention. In particular, the statements are true
+for $H^q(\mathbf{P}^n_A, \mathcal{O}(k))$.
+
+\medskip\noindent
+Proof for $p = 1$. Consider the short exact sequence
+$$
+0 \to \Omega \to \mathcal{O}(-1)^{\oplus n + 1} \to \mathcal{O} \to 0
+$$
+of Lemma \ref{lemma-euler-sequence}. Since $\mathcal{O}(-1)$ has
+vanishing cohomology in all degrees, this gives that
+$H^q(\mathbf{P}^n_A, \Omega)$ is zero except in degree $1$
+where it is freely generated by the boundary of $1$ in
+$H^0(\mathbf{P}^n_A, \mathcal{O})$.
+
+\medskip\noindent
+Assume $p > 1$. Let us think of the short exact sequence
+above as defining a $2$ step filtration on $\mathcal{O}(-1)^{\oplus n + 1}$.
+The induced filtration on $\wedge^p\mathcal{O}(-1)^{\oplus n + 1}$ looks
+like this
+$$
+0 \to \Omega^p \to \wedge^p\left(\mathcal{O}(-1)^{\oplus n + 1}\right)
+\to \Omega^{p - 1} \to 0
+$$
Observe that $\wedge^p\mathcal{O}(-1)^{\oplus n + 1}$ is isomorphic
to a direct sum of $\binom{n + 1}{p}$ copies of $\mathcal{O}(-p)$
and hence has vanishing cohomology in all degrees.
+By induction hypothesis, this shows that $H^q(\mathbf{P}^n_A, \Omega^p)$
+is zero unless $q = p$ and $H^p(\mathbf{P}^n_A, \Omega^p)$ is free
+of rank $1$ with generator the boundary of the generator in
+$H^{p - 1}(\mathbf{P}^n_A, \Omega^{p - 1})$.
+
+\medskip\noindent
+Let $k > 0$. Observe that $\Omega^n = \mathcal{O}(-n - 1)$ for example
+by the short exact sequence above for $p = n + 1$.
+Hence $\Omega^n(k)$ has vanishing cohomology in positive degrees.
+Using the short exact sequences
+$$
+0 \to \Omega^p(k) \to \wedge^p\left(\mathcal{O}(-1)^{\oplus n + 1}\right)(k)
+\to \Omega^{p - 1}(k) \to 0
+$$
+and {\it descending} induction on $p$ we get the vanishing of
+cohomology of $\Omega^p(k)$ in positive degrees for all $p$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hodge-cohomology-projective-space}
+We have $H^q(\mathbf{P}^n_A, \Omega^p) = 0$
unless $0 \leq p = q \leq n$. For $0 \leq p \leq n$ the $A$-module
$H^p(\mathbf{P}^n_A, \Omega^p)$ is free of rank $1$ with basis element
+$c_1^{Hodge}(\mathcal{O}(1))^p$.
+\end{lemma}
+
+\begin{proof}
We have the vanishing and freeness by
+Lemma \ref{lemma-twisted-hodge-cohomology-projective-space}.
+For $p = 0$ it is certainly true that
+$1 \in H^0(\mathbf{P}^n_A, \mathcal{O})$ is a generator.
+
+\medskip\noindent
+Proof for $p = 1$. Consider the short exact sequence
+$$
+0 \to \Omega \to \mathcal{O}(-1)^{\oplus n + 1} \to \mathcal{O} \to 0
+$$
+of Lemma \ref{lemma-euler-sequence}. In the proof of
+Lemma \ref{lemma-twisted-hodge-cohomology-projective-space}
+we have seen that the generator of $H^1(\mathbf{P}^n_A, \Omega)$
+is the boundary $\xi$ of $1 \in H^0(\mathbf{P}^n_A, \mathcal{O})$.
+As in the proof of Lemma \ref{lemma-euler-sequence} we will identify
+$\mathcal{O}(-1)^{\oplus n + 1}$ with
+$\bigoplus_{j = 0, \ldots, n} \mathcal{O}(-1)\text{d}T_j$.
+Consider the open covering
+$$
+\mathcal{U} :
+\mathbf{P}^n_A =
+\bigcup\nolimits_{i = 0, \ldots, n} D_{+}(T_i)
+$$
+We can lift the restriction of the global section $1$ of $\mathcal{O}$
+to $U_i = D_+(T_i)$ by the section $T_i^{-1} \text{d}T_i$ of
$\bigoplus \mathcal{O}(-1)\text{d}T_j$ over $U_i$. Thus the cocycle
+representing $\xi$ is given by
+$$
+T_{i_1}^{-1} \text{d}T_{i_1} - T_{i_0}^{-1} \text{d}T_{i_0} =
+\text{d}\log(T_{i_1}/T_{i_0}) \in \Omega(U_{i_0i_1})
+$$
+On the other hand, for each $i$ the section $T_i$ is a trivializing
+section of $\mathcal{O}(1)$ over $U_i$. Hence we see that
+$f_{i_0i_1} = T_{i_1}/T_{i_0} \in \mathcal{O}^*(U_{i_0i_1})$
+is the cocycle representing $\mathcal{O}(1)$ in $\Pic(\mathbf{P}^n_A)$,
+see Section \ref{section-first-chern-class}.
+Hence $c_1^{Hodge}(\mathcal{O}(1))$
+is given by the cocycle $\text{d}\log(T_{i_1}/T_{i_0})$
+which agrees with what we got for $\xi$ above.
+
+\medskip\noindent
+Proof for general $p$ by induction. The base cases $p = 0, 1$ were handled
+above. Assume $p > 1$. In the proof of
+Lemma \ref{lemma-twisted-hodge-cohomology-projective-space}
+we have seen that the generator of $H^p(\mathbf{P}^n_A, \Omega^p)$
+is the boundary of $c_1^{Hodge}(\mathcal{O}(1))^{p - 1}$
+in the long exact cohomology sequence associated to
+$$
+0 \to \Omega^p \to \wedge^p\left(\mathcal{O}(-1)^{\oplus n + 1}\right)
+\to \Omega^{p - 1} \to 0
+$$
+By the calculation in Section \ref{section-first-chern-class}
+the cohomology class $c_1^{Hodge}(\mathcal{O}(1))^{p - 1}$
+is, up to a sign, represented by the cocycle with terms
+$$
+\beta_{i_0 \ldots i_{p - 1}} =
+\text{d}\log(T_{i_1}/T_{i_0}) \wedge
+\text{d}\log(T_{i_2}/T_{i_1}) \wedge \ldots \wedge
+\text{d}\log(T_{i_{p - 1}}/T_{i_{p - 2}})
+$$
+in $\Omega^{p - 1}(U_{i_0 \ldots i_{p - 1}})$. These
+$\beta_{i_0 \ldots i_{p - 1}}$ can be lifted to the sections
+$\tilde \beta_{i_0 \ldots i_{p -1}} =
+T_{i_0}^{-1}\text{d}T_{i_0} \wedge \beta_{i_0 \ldots i_{p - 1}}$
+of $\wedge^p(\bigoplus \mathcal{O}(-1) \text{d}T_j)$ over
+$U_{i_0 \ldots i_{p - 1}}$. We conclude that the generator of
+$H^p(\mathbf{P}^n_A, \Omega^p)$ is given by the cocycle whose
+components are
+\begin{align*}
+\sum\nolimits_{a = 0}^p (-1)^a
+\tilde \beta_{i_0 \ldots \hat{i_a} \ldots i_p}
+& =
+T_{i_1}^{-1}\text{d}T_{i_1} \wedge \beta_{i_1 \ldots i_p}
++ \sum\nolimits_{a = 1}^p (-1)^a
+T_{i_0}^{-1}\text{d}T_{i_0} \wedge
+\beta_{i_0 \ldots \hat{i_a} \ldots i_p} \\
+& =
+(T_{i_1}^{-1}\text{d}T_{i_1} - T_{i_0}^{-1}\text{d}T_{i_0}) \wedge
+\beta_{i_1 \ldots i_p} +
+T_{i_0}^{-1}\text{d}T_{i_0} \wedge \text{d}(\beta)_{i_0 \ldots i_p} \\
+& =
+\text{d}\log(T_{i_1}/T_{i_0}) \wedge \beta_{i_1 \ldots i_p}
+\end{align*}
+viewed as a section of $\Omega^p$ over $U_{i_0 \ldots i_p}$.
+This is up to sign the same as the cocycle representing
+$c_1^{Hodge}(\mathcal{O}(1))^p$ and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-de-rham-cohomology-projective-space}
+For $0 \leq i \leq n$ the de Rham cohomology
+$H^{2i}_{dR}(\mathbf{P}^n_A/A)$ is a free $A$-module of rank $1$
+with basis element $c_1^{dR}(\mathcal{O}(1))^i$.
+In all other degrees the de Rham cohomology of $\mathbf{P}^n_A$
+over $A$ is zero.
+\end{lemma}
+
+\begin{proof}
+Consider the Hodge-to-de Rham spectral sequence of
+Section \ref{section-hodge-to-de-rham}.
+By the computation of the Hodge cohomology of $\mathbf{P}^n_A$ over $A$
+done in Lemma \ref{lemma-hodge-cohomology-projective-space}
+we see that the spectral sequence degenerates on the $E_1$ page.
+In this way we see that $H^{2i}_{dR}(\mathbf{P}^n_A/A)$ is a free
+$A$-module of rank $1$ for $0 \leq i \leq n$ and zero else.
Observe that $c_1^{dR}(\mathcal{O}(1))^i \in H^{2i}_{dR}(\mathbf{P}^n_A/A)$
for $i = 0, \ldots, n$ and that for $i = n$ this element is the
image of $c_1^{Hodge}(\mathcal{O}(1))^n$ by the map of complexes
+$$
+\Omega^n_{\mathbf{P}^n_A/A}[-n]
+\longrightarrow
+\Omega^\bullet_{\mathbf{P}^n_A/A}
+$$
+This follows for example from the discussion in Remark \ref{remark-truncations}
+or from the explicit description of cocycles representing these classes in
+Section \ref{section-first-chern-class}.
+The spectral sequence shows that the induced map
+$$
+H^n(\mathbf{P}^n_A, \Omega^n_{\mathbf{P}^n_A/A}) \longrightarrow
+H^{2n}_{dR}(\mathbf{P}^n_A/A)
+$$
is an isomorphism and since $c_1^{Hodge}(\mathcal{O}(1))^n$ is a generator
of the source (Lemma \ref{lemma-hodge-cohomology-projective-space}),
we conclude that $c_1^{dR}(\mathcal{O}(1))^n$ is a generator
of the target. By the $A$-bilinearity of the cup products,
it follows that $c_1^{dR}(\mathcal{O}(1))^i$
is also a generator of $H^{2i}_{dR}(\mathbf{P}^n_A/A)$ for
$0 \leq i \leq n$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{The spectral sequence for a smooth morphism}
+\label{section-relative-spectral-sequence}
+
+\noindent
+Consider a commutative diagram of schemes
+$$
+\xymatrix{
+X \ar[rr]_f \ar[rd]_p & & Y \ar[ld]^q \\
+& S
+}
+$$
+where $f$ is a smooth morphism. Then we obtain a locally split short
+exact sequence
+$$
+0 \to f^*\Omega_{Y/S} \to \Omega_{X/S} \to \Omega_{X/Y} \to 0
+$$
+by Morphisms, Lemma \ref{morphisms-lemma-triangle-differentials-smooth}.
+Let us think of this as a descending filtration $F$ on $\Omega_{X/S}$
+with $F^0\Omega_{X/S} = \Omega_{X/S}$, $F^1\Omega_{X/S} = f^*\Omega_{Y/S}$, and
+$F^2\Omega_{X/S} = 0$. Applying the functor $\wedge^p$ we obtain
+for every $p$ an induced filtration
+$$
+\Omega^p_{X/S} = F^0\Omega^p_{X/S} \supset
+F^1\Omega^p_{X/S} \supset
+F^2\Omega^p_{X/S} \supset \ldots \supset F^{p + 1}\Omega^p_{X/S} = 0
+$$
+whose successive quotients are
+$$
+\text{gr}^k\Omega^p_{X/S} =
+F^k\Omega^p_{X/S}/F^{k + 1}\Omega^p_{X/S} =
+f^*\Omega^k_{Y/S} \otimes_{\mathcal{O}_X} \Omega^{p - k}_{X/Y} =
+f^{-1}\Omega^k_{Y/S} \otimes_{f^{-1}\mathcal{O}_Y} \Omega^{p - k}_{X/Y}
+$$
+for $k = 0, \ldots, p$. In fact, the reader can check using the
+Leibniz rule that $F^k\Omega^\bullet_{X/S}$ is a subcomplex of
+$\Omega^\bullet_{X/S}$. In this way $\Omega^\bullet_{X/S}$ has
+the structure of a filtered complex. We can also see this by observing
+that
+$$
+F^k\Omega^\bullet_{X/S} =
+\Im\left(\wedge :
+\text{Tot}(
+f^{-1}\sigma_{\geq k}\Omega^\bullet_{Y/S} \otimes_{p^{-1}\mathcal{O}_S}
+\Omega^\bullet_{X/S})
+\longrightarrow
+\Omega^\bullet_{X/S}\right)
+$$
+is the image of a map of complexes on $X$. The filtered complex
+$$
+\Omega^\bullet_{X/S} = F^0\Omega^\bullet_{X/S} \supset
+F^1\Omega^\bullet_{X/S} \supset F^2\Omega^\bullet_{X/S} \supset \ldots
+$$
+has the following associated graded parts
+$$
+\text{gr}^k\Omega^\bullet_{X/S} =
+f^{-1}\Omega^k_{Y/S}[-k] \otimes_{f^{-1}\mathcal{O}_Y} \Omega^\bullet_{X/Y}
+$$
+by what was said above.
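+
+\medskip\noindent
+The following local description of the filtration may be helpful; it is a
+reformulation of the above and not used in what follows. Suppose
+$t_1, \ldots, t_d$ are local sections of $\mathcal{O}_X$ whose differentials
+$\text{d}t_1, \ldots, \text{d}t_d$ generate $\Omega_{X/Y}$
+(such sections exist locally on $X$ as $f$ is smooth). Then
+$F^k\Omega^p_{X/S}$ is locally generated by the forms
+$$
+f^*\eta \wedge \text{d}t_{i_1} \wedge \ldots \wedge \text{d}t_{i_{p - j}},
+\quad \eta \text{ a local section of } \Omega^j_{Y/S},\ j \geq k,
+$$
+in other words by the $p$-forms having at least $k$ factors pulled back
+from $Y$. This makes it evident that $F^k\Omega^\bullet_{X/S}$ is stable
+under the de Rham differential, as
+$\text{d}(f^*\eta) = f^*(\text{d}\eta)$ again has at least $k$ such factors.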
+
+\begin{lemma}
+\label{lemma-spectral-sequence-smooth}
+Let $f : X \to Y$ be a quasi-compact, quasi-separated, and smooth
+morphism of schemes over a base scheme $S$. There is a bounded spectral
+sequence with first page
+$$
+E_1^{p, q} =
+H^q(\Omega^p_{Y/S} \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*\Omega^\bullet_{X/Y})
+$$
+converging to $R^{p + q}f_*\Omega^\bullet_{X/S}$.
+\end{lemma}
+
+\begin{proof}
+Consider $\Omega^\bullet_{X/S}$ as a filtered complex with the
+filtration introduced above. The spectral sequence is the
+spectral sequence of Cohomology, Lemma
+\ref{cohomology-lemma-relative-spectral-sequence-filtered-object}.
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-de-rham-base-change} we have
+$$
+Rf_*\text{gr}^k\Omega^\bullet_{X/S} =
+\Omega^k_{Y/S}[-k] \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*\Omega^\bullet_{X/Y}
+$$
+and thus we conclude.
+\end{proof}
+
+\begin{remark}
+\label{remark-gauss-manin}
+In Lemma \ref{lemma-spectral-sequence-smooth} consider the cohomology sheaves
+$$
+\mathcal{H}^q_{dR}(X/Y) = H^q(Rf_*\Omega^\bullet_{X/Y})
+$$
+If $f$ is proper in addition to being smooth and $S$ is a scheme over
+$\mathbf{Q}$ then $\mathcal{H}^q_{dR}(X/Y)$ is finite locally free (insert
+future reference here). If we only assume $\mathcal{H}^q_{dR}(X/Y)$
+are flat $\mathcal{O}_Y$-modules, then we obtain (tiny argument omitted)
+$$
+E_1^{p, q} =
+\Omega^p_{Y/S} \otimes_{\mathcal{O}_Y} \mathcal{H}^q_{dR}(X/Y)
+$$
+and the differentials in the spectral sequence are maps
+$$
+d_1^{p, q} :
+\Omega^p_{Y/S} \otimes_{\mathcal{O}_Y} \mathcal{H}^q_{dR}(X/Y)
+\longrightarrow
+\Omega^{p + 1}_{Y/S} \otimes_{\mathcal{O}_Y} \mathcal{H}^q_{dR}(X/Y)
+$$
+In particular, for $p = 0$ we obtain a map
+$d_1^{0, q} : \mathcal{H}^q_{dR}(X/Y) \to
+\Omega^1_{Y/S} \otimes_{\mathcal{O}_Y} \mathcal{H}^q_{dR}(X/Y)$
+which turns out to be an integrable connection
+$\nabla$ (insert future reference here)
+and the complex
+$$
+\mathcal{H}^q_{dR}(X/Y) \to
+\Omega^1_{Y/S} \otimes_{\mathcal{O}_Y} \mathcal{H}^q_{dR}(X/Y) \to
+\Omega^2_{Y/S} \otimes_{\mathcal{O}_Y} \mathcal{H}^q_{dR}(X/Y) \to \ldots
+$$
+with differentials given by $d_1^{\bullet, q}$
+is the de Rham complex of $\nabla$.
+The connection $\nabla$ is known as the {\it Gauss-Manin connection}.
+\end{remark}
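+
+\medskip\noindent
+A classical example, which we state without proof and do not use in the
+sequel, is the Legendre family of elliptic curves
+$y^2 = x(x - 1)(x - t)$ over
+$Y = \Spec(\mathbf{Q}[t, 1/t(t - 1)])$ with $S = \Spec(\mathbf{Q})$.
+In this case $\mathcal{H}^1_{dR}(X/Y)$ is finite locally free of rank $2$
+and the Gauss-Manin connection is classically encoded by the
+Picard-Fuchs equation: the periods $\omega(t)$ of the relative $1$-form
+$\text{d}x/y$ satisfy
+$$
+t(1 - t)\omega'' + (1 - 2t)\omega' - \frac{1}{4}\omega = 0
+$$
+whose holomorphic solution at $t = 0$ is
+${}_2F_1(\tfrac{1}{2}, \tfrac{1}{2}; 1; t)$ up to a scalar.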
+
+
+
+
+
+
+\section{Leray-Hirsch type theorems}
+\label{section-leray-hirsch}
+
+\noindent
+In this section we prove that for a smooth proper morphism one
+can sometimes express the de Rham cohomology upstairs in terms
+of the de Rham cohomology downstairs.
+
+\begin{lemma}
+\label{lemma-relative-global-generation-on-fibres}
+Let $f : X \to Y$ be a smooth proper morphism of schemes.
+Let $N$ and $n_1, \ldots, n_N \geq 0$ be integers and let
+$\xi_i \in H^{n_i}_{dR}(X/Y)$, $1 \leq i \leq N$.
+Assume for all points $y \in Y$ the images of $\xi_1, \ldots, \xi_N$
+in $H^*_{dR}(X_y/y)$ form a basis over $\kappa(y)$. Then the map
+$$
+\bigoplus\nolimits_{i = 1}^N \mathcal{O}_Y[-n_i]
+\longrightarrow
+Rf_*\Omega^\bullet_{X/Y}
+$$
+associated to $\xi_1, \ldots, \xi_N$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-proper-smooth-de-Rham}
+$Rf_*\Omega^\bullet_{X/Y}$ is a perfect object of $D(\mathcal{O}_Y)$
+whose formation commutes with arbitrary base change.
+Thus the map of the lemma is a map $a : K \to L$
+between perfect objects of $D(\mathcal{O}_Y)$
+whose derived restriction to any point is an isomorphism
+by our assumption on fibres. Then the cone $C$ on $a$ is a perfect
+object of $D(\mathcal{O}_Y)$ (Cohomology, Lemma
+\ref{cohomology-lemma-two-out-of-three-perfect}) whose
+derived restriction to any point is zero. It follows that $C$
+is zero by More on Algebra, Lemma
+\ref{more-algebra-lemma-lift-perfect-from-residue-field}
+and $a$ is an isomorphism. (This also uses Derived Categories of Schemes,
+Lemmas \ref{perfect-lemma-affine-compare-bounded} and
+\ref{perfect-lemma-perfect-affine} to translate into algebra.)
+\end{proof}
+
+\noindent
+We first prove the main result of this section in the
+following special case.
+
+\begin{lemma}
+\label{lemma-global-generation-on-fibres}
+Let $f : X \to Y$ be a smooth proper morphism of schemes over a base $S$.
+Assume
+\begin{enumerate}
+\item $Y$ and $S$ are affine, and
+\item there exist integers $N$ and $n_1, \ldots, n_N \geq 0$ and
+$\xi_i \in H^{n_i}_{dR}(X/S)$, $1 \leq i \leq N$ such that
+for all points $y \in Y$ the images of $\xi_1, \ldots, \xi_N$
+in $H^*_{dR}(X_y/y)$ form a basis over $\kappa(y)$.
+\end{enumerate}
+Then the map
+$$
+\bigoplus\nolimits_{i = 1}^N H^*_{dR}(Y/S) \longrightarrow
+H^*_{dR}(X/S), \quad
+(a_1, \ldots, a_N) \longmapsto \sum \xi_i \cup f^*a_i
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Say $Y = \Spec(A)$ and $S = \Spec(R)$.
+In this case $\Omega^\bullet_{A/R}$ computes
+$R\Gamma(Y, \Omega^\bullet_{Y/S})$ by Lemma \ref{lemma-de-rham-affine}.
+Choose a finite affine open covering $\mathcal{U} : X = \bigcup_{i \in I} U_i$.
+Consider the complex
+$$
+K^\bullet =
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \Omega_{X/S}^\bullet))
+$$
+as in
+Cohomology, Section \ref{cohomology-section-cech-cohomology-of-complexes}.
+Let us collect some facts about this complex most of which
+can be found in the reference just given:
+\begin{enumerate}
+\item $K^\bullet$ is a complex of $R$-modules whose terms are
+$A$-modules,
+\item $K^\bullet$ represents $R\Gamma(X, \Omega^\bullet_{X/S})$ in $D(R)$
+(Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherent-affine-cohomology-zero} and
+Cohomology, Lemma \ref{cohomology-lemma-cech-complex-complex-computes}),
+\item there is a natural map $\Omega^\bullet_{A/R} \to K^\bullet$
+of complexes of $R$-modules which is $A$-linear on terms and
+induces the pullback map $H^*_{dR}(Y/S) \to H^*_{dR}(X/S)$
+on cohomology,
+\item $K^\bullet$ has a multiplication denoted $\wedge$
+which turns it into a differential graded $R$-algebra,
+\item the multiplication on $K^\bullet$
+induces the cup product on $H^*_{dR}(X/S)$
+(Cohomology, Section \ref{cohomology-section-cup-product}),
+\item the filtration $F$ on $\Omega^*_{X/S}$ induces a filtration
+$$
+K^\bullet =
+F^0K^\bullet \supset F^1K^\bullet \supset F^2K^\bullet \supset \ldots
+$$
+by subcomplexes on $K^\bullet$ such that
+\begin{enumerate}
+\item $F^kK^n \subset K^n$ is an $A$-submodule,
+\item $F^kK^\bullet \wedge F^lK^\bullet \subset F^{k + l}K^\bullet$,
+\item $\text{gr}^kK^\bullet$ is a complex of $A$-modules,
+\item $\text{gr}^0K^\bullet =
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \Omega_{X/Y}^\bullet))$
+and represents $R\Gamma(X, \Omega^\bullet_{X/Y})$ in $D(A)$,
+\item multiplication induces an isomorphism
+$\Omega^k_{A/R}[-k] \otimes_A \text{gr}^0K^\bullet \to \text{gr}^kK^\bullet$
+\end{enumerate}
+\end{enumerate}
+We omit the detailed proofs of these statements; please see discussion
+leading up to the construction of the spectral sequence in
+Lemma \ref{lemma-spectral-sequence-smooth}.
+
+\medskip\noindent
+For every $i = 1, \ldots, N$ we choose a cocycle $x_i \in K^{n_i}$
+representing $\xi_i$. Next, we look at the map of complexes
+$$
+\tilde x :
+M^\bullet = \bigoplus\nolimits_{i = 1, \ldots, N}
+\Omega^\bullet_{A/R}[-n_i]
+\longrightarrow
+K^\bullet
+$$
+which sends $\omega$ in the $i$th summand to $x_i \wedge \omega$.
+All that remains is to show that this map is a quasi-isomorphism.
+We endow $M^\bullet$ with the structure of a filtered complex
+by the rule
+$$
+F^kM^\bullet =
+\bigoplus\nolimits_{i = 1, \ldots, N}
+(\sigma_{\geq k}\Omega^\bullet_{A/R})[-n_i]
+$$
+With this choice the map $\tilde x$ is a morphism of filtered complexes.
+Observe that $\text{gr}^0M^\bullet = \bigoplus A[-n_i]$
+and multiplication induces an isomorphism
+$\Omega^k_{A/R}[-k] \otimes_A \text{gr}^0M^\bullet \to \text{gr}^kM^\bullet$.
+By construction and Lemma \ref{lemma-relative-global-generation-on-fibres}
+we see that
+$$
+\text{gr}^0\tilde x :
+\text{gr}^0M^\bullet \longrightarrow
+\text{gr}^0K^\bullet
+$$
+is an isomorphism in $D(A)$. It follows that for all $k \geq 0$
+we obtain isomorphisms
+$$
+\text{gr}^k \tilde x :
+\text{gr}^kM^\bullet = \Omega^k_{A/R}[-k] \otimes_A \text{gr}^0M^\bullet
+\longrightarrow
+\Omega^k_{A/R}[-k] \otimes_A \text{gr}^0K^\bullet =
+\text{gr}^kK^\bullet
+$$
+in $D(A)$. Namely, the complex
+$\text{gr}^0K^\bullet =
+\text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \Omega_{X/Y}^\bullet))$
+is K-flat as a complex of $A$-modules by Derived Categories of Schemes,
+Lemma \ref{perfect-lemma-K-flat}.
+Hence the tensor product on the right hand side is the
+derived tensor product as is true by inspection on the left hand side.
+Finally, taking the derived tensor product
+$\Omega^k_{A/R}[-k] \otimes_A^\mathbf{L} -$ is a functor on $D(A)$
+and therefore sends isomorphisms to isomorphisms.
+Arguing by induction on $k$ we deduce that
+$$
+\tilde x : M^\bullet/F^kM^\bullet \to K^\bullet/F^kK^\bullet
+$$
+is an isomorphism in $D(R)$ since we have the short exact sequences
+$$
+0 \to F^kM^\bullet/F^{k + 1}M^\bullet \to
+M^\bullet/F^{k + 1}M^\bullet \to
+\text{gr}^kM^\bullet \to 0
+$$
+and similarly for $K^\bullet$. This proves that $\tilde x$ is a
+quasi-isomorphism as the filtrations are finite in any given degree.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-global-generation-on-fibres}
+Let $f : X \to Y$ be a smooth proper morphism of schemes over a base $S$.
+Let $N$ and $n_1, \ldots, n_N \geq 0$ be integers and let
+$\xi_i \in H^{n_i}_{dR}(X/S)$, $1 \leq i \leq N$.
+Assume for all points $y \in Y$ the images of $\xi_1, \ldots, \xi_N$
+in $H^*_{dR}(X_y/y)$ form a basis over $\kappa(y)$. The map
+$$
+\tilde \xi = \bigoplus \tilde \xi_i[-n_i] :
+\bigoplus \Omega^\bullet_{Y/S}[-n_i]
+\longrightarrow
+Rf_*\Omega^\bullet_{X/S}
+$$
+(see proof) is an isomorphism in $D(Y, (Y \to S)^{-1}\mathcal{O}_S)$ and
+correspondingly the map
+$$
+\bigoplus\nolimits_{i = 1}^N H^*_{dR}(Y/S) \longrightarrow
+H^*_{dR}(X/S), \quad
+(a_1, \ldots, a_N) \longmapsto \sum \xi_i \cup f^*a_i
+$$
+is an isomorphism.
+\end{proposition}
+
+\begin{proof}
+Denote $p : X \to S$ and $q : Y \to S$ the structure morphisms.
+Let $\xi'_i : \Omega^\bullet_{X/S} \to \Omega^\bullet_{X/S}[n_i]$
+be the map of Remark \ref{remark-cup-product-as-a-map} corresponding
+to $\xi_i$. Denote
+$$
+\tilde \xi_i :
+\Omega^\bullet_{Y/S} \to Rf_*\Omega^\bullet_{X/S}[n_i]
+$$
+the composition of $\xi'_i$ with the canonical map
+$\Omega^\bullet_{Y/S} \to Rf_*\Omega^\bullet_{X/S}$.
+Using
+$$
+R\Gamma(Y, Rf_*\Omega^\bullet_{X/S}) = R\Gamma(X, \Omega^\bullet_{X/S})
+$$
+we see that on cohomology $\tilde \xi_i$ is the map
+$\eta \mapsto \xi_i \cup f^*\eta$
+from $H^m_{dR}(Y/S)$ to $H^{m + n_i}_{dR}(X/S)$.
+Further, since the formation of $\xi'_i$ commutes with
+restriction to opens, the formation of $\tilde \xi_i$
+commutes with restriction to opens as well.
+
+\medskip\noindent
+Thus we can consider the map
+$$
+\tilde \xi = \bigoplus \tilde \xi_i[-n_i] :
+\bigoplus \Omega^\bullet_{Y/S}[-n_i]
+\longrightarrow
+Rf_*\Omega^\bullet_{X/S}
+$$
+To prove the proposition it suffices to show that this is an isomorphism in
+$D(Y, q^{-1}\mathcal{O}_S)$. If we could show $\tilde \xi$
+comes from a map of filtered complexes (with suitable filtrations),
+then we could appeal to the spectral sequence of
+Lemma \ref{lemma-spectral-sequence-smooth} to finish the proof.
+This takes more work than is necessary and instead our approach
+will be to reduce to the affine case (whose proof does in some sense
+use the spectral sequence).
+
+\medskip\noindent
+Indeed, if $Y' \subset Y$ is any open with inverse image
+$X' \subset X$, then $\tilde \xi|_{X'}$ induces the map
+$$
+\bigoplus\nolimits_{i = 1}^N H^*_{dR}(Y'/S) \longrightarrow
+H^*_{dR}(X'/S), \quad
+(a_1, \ldots, a_N) \longmapsto \sum \xi_i|_{X'} \cup f^*a_i
+$$
+on cohomology over $Y'$, see discussion above.
+Thus it suffices to find a basis for the topology
+on $Y$ such that the proposition holds for the members of the basis
+(in particular we can forget about the map $\tilde \xi$ when
+we do this). This reduces us to the case where $Y$ and $S$
+are affine which is handled by Lemma \ref{lemma-global-generation-on-fibres}
+and the proof is complete.
+\end{proof}
+
+
+
+
+
+\section{Projective space bundle formula}
+\label{section-projective-space-bundle-formula}
+
+\noindent
+The title says it all.
+
+\begin{proposition}
+\label{proposition-projective-space-bundle-formula}
+Let $X \to S$ be a morphism of schemes. Let $\mathcal{E}$ be a locally
+free $\mathcal{O}_X$-module of constant rank $r$. Consider the morphism
+$p : P = \mathbf{P}(\mathcal{E}) \to X$.
+Then the map
+$$
+\bigoplus\nolimits_{i = 0, \ldots, r - 1} H^*_{dR}(X/S)
+\longrightarrow
+H^*_{dR}(P/S)
+$$
+given by the rule
+$$
+(a_0, \ldots, a_{r - 1}) \longmapsto
+\sum\nolimits_{i = 0, \ldots, r - 1} c_1^{dR}(\mathcal{O}_P(1))^i \cup p^*(a_i)
+$$
+is an isomorphism.
+\end{proposition}
+
+\begin{proof}
+Choose an affine open $\Spec(A) \subset X$ such that $\mathcal{E}$ restricts
+to the trivial locally free module $\mathcal{O}_{\Spec(A)}^{\oplus r}$.
+Then $P \times_X \Spec(A) = \mathbf{P}^{r - 1}_A$. Thus we see that
+$p$ is proper and smooth, see Section \ref{section-projective-space}.
+Moreover, the classes $c_1^{dR}(\mathcal{O}_P(1))^i$, $i = 0, 1, \ldots, r - 1$
+restricted to a fibre $P_x = \mathbf{P}^{r - 1}_x$ freely generate the
+de Rham cohomology $H^*_{dR}(P_x/x)$ over $\kappa(x)$, see
+Lemma \ref{lemma-de-rham-cohomology-projective-space}. Thus we've verified the
+conditions of Proposition \ref{proposition-global-generation-on-fibres}
+and we win.
+\end{proof}
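+
+\noindent
+For example, in the special case of the trivial bundle
+$\mathcal{E} = \mathcal{O}_X^{\oplus r}$, so that
+$P = \mathbf{P}^{r - 1}_X$, the proposition gives an isomorphism
+$$
+\bigoplus\nolimits_{i = 0, \ldots, r - 1} H^{m - 2i}_{dR}(X/S)
+\longrightarrow
+H^m_{dR}(\mathbf{P}^{r - 1}_X/S)
+$$
+where the summand for $i$ maps in via
+$a \mapsto c_1^{dR}(\mathcal{O}_P(1))^i \cup p^*a$.
+For $X = S = \Spec(A)$ this recovers
+Lemma \ref{lemma-de-rham-cohomology-projective-space}.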
+
+\begin{remark}
+\label{remark-projective-space-bundle-formula}
+In the situation of
+Proposition \ref{proposition-projective-space-bundle-formula}
+we get moreover that the map
+$$
+\tilde \xi :
+\bigoplus\nolimits_{t = 0, \ldots, r - 1}
+\Omega^\bullet_{X/S}[-2t]
+\longrightarrow
+Rp_*\Omega^\bullet_{P/S}
+$$
+is an isomorphism in $D(X, (X \to S)^{-1}\mathcal{O}_S)$ as follows
+immediately from the application of
+Proposition \ref{proposition-global-generation-on-fibres}.
+Note that the arrow for $t = 0$ is simply the canonical map
+$c_{P/X} : \Omega^\bullet_{X/S} \to Rp_*\Omega^\bullet_{P/S}$
+of Section \ref{section-de-rham-complex}.
+In fact, we can pin down this map further in this particular case.
+Namely, consider the canonical map
+$$
+\xi' : \Omega^\bullet_{P/S} \to \Omega^\bullet_{P/S}[2]
+$$
+of Remark \ref{remark-cup-product-as-a-map} corresponding to
+$c_1^{dR}(\mathcal{O}_P(1))$. Then
+$$
+\xi'[2(t - 1)] \circ \ldots \circ \xi'[2] \circ \xi' :
+\Omega^\bullet_{P/S} \to \Omega^\bullet_{P/S}[2t]
+$$
+is the map of Remark \ref{remark-cup-product-as-a-map} corresponding to
+$c_1^{dR}(\mathcal{O}_P(1))^t$. Tracing through the choices made in the
+proof of Proposition \ref{proposition-global-generation-on-fibres}
+we find the value
+$$
+\tilde \xi|_{\Omega^\bullet_{X/S}[-2t]} =
+Rp_*\xi'[-2] \circ \ldots \circ Rp_*\xi'[-2(t - 1)] \circ
+Rp_*\xi'[-2t] \circ c_{P/X}[-2t]
+$$
+for the restriction of our isomorphism to the summand
+$\Omega^\bullet_{X/S}[-2t]$. This has the following simple
+consequence we will use below: let
+$$
+M = \bigoplus\nolimits_{t = 1, \ldots, r - 1} \Omega^\bullet_{X/S}[-2t]
+\quad\text{and}\quad
+K = \bigoplus\nolimits_{t = 0, \ldots, r - 2} \Omega^\bullet_{X/S}[-2t]
+$$
+viewed as subcomplexes of the source of the arrow $\tilde \xi$.
+It follows formally from the discussion above that
+$$
+c_{P/X} \oplus
+\tilde \xi|_M :
+\Omega^\bullet_{X/S} \oplus M \longrightarrow
+Rp_*\Omega^\bullet_{P/S}
+$$
+is an isomorphism and that the diagram
+$$
+\xymatrix{
+K \ar[d]_{\tilde \xi|_K} \ar[r]_{\text{id}} &
+M[2] \ar[d]^{(\tilde \xi|_M)[2]} \\
+Rp_*\Omega^\bullet_{P/S} \ar[r]^{Rp_*\xi'} &
+Rp_*\Omega^\bullet_{P/S}[2]
+}
+$$
+commutes where $\text{id} : K \to M[2]$ identifies the summand
+corresponding to $t$ in the decomposition of $K$ with the summand
+corresponding to $t + 1$ in the decomposition of $M$.
+\end{remark}
+
+
+
+
+
+
+
+
+
+\section{Log poles along a divisor}
+\label{section-divisor}
+
+\noindent
+Let $X \to S$ be a morphism of schemes. Let $Y \subset X$ be an
+effective Cartier divisor. If $X$ \'etale locally along $Y$ looks
+like $Y \times \mathbf{A}^1$, then there is a canonical short exact sequence
+of complexes
+$$
+0 \to \Omega^\bullet_{X/S} \to
+\Omega^\bullet_{X/S}(\log Y) \to
+\Omega^\bullet_{Y/S}[-1] \to 0
+$$
+having many good properties we will discuss in this section. There is a
+variant of this construction where one starts with a normal crossings
+divisor
+(\'Etale Morphisms, Definition \ref{etale-definition-strict-normal-crossings})
+which we will discuss elsewhere (insert future reference here).
+
+\begin{definition}
+\label{definition-local-product}
+Let $X \to S$ be a morphism of schemes. Let $Y \subset X$ be an
+effective Cartier divisor. We say the
+{\it de Rham complex of log poles is defined for $Y \subset X$ over $S$}
+if for all $y \in Y$ and local equation $f \in \mathcal{O}_{X, y}$
+of $Y$ we have
+\begin{enumerate}
+\item $\mathcal{O}_{X, y} \to \Omega_{X/S, y}$, $g \mapsto g \text{d}f$
+is a split injection, and
+\item $\Omega^p_{X/S, y}$ is $f$-torsion free for all $p$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+An easy local calculation shows that it suffices for every $y \in Y$
+to find one local equation $f$ for which conditions (1) and (2) hold.
+
+\begin{lemma}
+\label{lemma-log-complex}
+Let $X \to S$ be a morphism of schemes. Let $Y \subset X$ be an
+effective Cartier divisor.
+Assume the de Rham complex of log poles is defined for $Y \subset X$ over $S$.
+There is a canonical short exact sequence
+of complexes
+$$
+0 \to \Omega^\bullet_{X/S} \to
+\Omega^\bullet_{X/S}(\log Y) \to
+\Omega^\bullet_{Y/S}[-1] \to 0
+$$
+\end{lemma}
+
+\begin{proof}
+Our assumption is that for every $y \in Y$ and local equation
+$f \in \mathcal{O}_{X, y}$ of $Y$ we have
+$$
+\Omega_{X/S, y} = \mathcal{O}_{X, y}\text{d}f \oplus M
+\quad\text{and}\quad
+\Omega^p_{X/S, y} = \wedge^{p - 1}(M)\text{d}f \oplus \wedge^p(M)
+$$
+for some module $M$ with $f$-torsion free exterior powers $\wedge^p(M)$.
+It follows that
+$$
+\Omega^p_{Y/S, y} = \wedge^p(M/fM) = \wedge^p(M)/f\wedge^p(M)
+$$
+Below we will tacitly use these facts.
+In particular the sheaves $\Omega^p_{X/S}$ have no nonzero local
+sections supported on $Y$ and we have a canonical inclusion
+$$
+\Omega^p_{X/S} \subset \Omega^p_{X/S}(Y)
+$$
+see More on Flatness, Section \ref{flat-section-eta}. Let $U = \Spec(A)$
+be an affine open subscheme such that $Y \cap U = V(f)$ for some
+nonzerodivisor $f \in A$. Let us consider the $\mathcal{O}_U$-submodule
+of $\Omega^p_{X/S}(Y)|_U$ generated by
+$\Omega^p_{X/S}|_U$ and $\text{d}\log(f) \wedge \Omega^{p - 1}_{X/S}$
+where $\text{d}\log(f) = f^{-1}\text{d}(f)$.
+This is independent of the choice of $f$ as another generator of the
+ideal of $Y$ on $U$ is equal to $uf$ for a unit $u \in A$ and we get
+$$
+\text{d}\log(uf) - \text{d}\log(f) = \text{d}\log(u) = u^{-1}\text{d}u
+$$
+which is a section of $\Omega_{X/S}$ over $U$. These local
+sheaves glue to give a quasi-coherent submodule
+$$
+\Omega^p_{X/S} \subset \Omega^p_{X/S}(\log Y) \subset \Omega^p_{X/S}(Y)
+$$
+Let us agree to think of $\Omega^p_{Y/S}$ as a quasi-coherent
+$\mathcal{O}_X$-module. There is a unique surjective
+$\mathcal{O}_X$-linear map
+$$
+\text{Res} : \Omega^p_{X/S}(\log Y) \to \Omega^{p - 1}_{Y/S}
+$$
+defined by the rule
+$$
+\text{Res}(\eta' + \text{d}\log(f) \wedge \eta) = \eta|_{Y \cap U}
+$$
+for all opens $U$ as above and all
+$\eta' \in \Omega^p_{X/S}(U)$ and $\eta \in \Omega^{p - 1}_{X/S}(U)$.
+If a form $\eta$ over $U$ restricts to zero on $Y \cap U$, then
+$\eta = \text{d}f \wedge \eta' + f\eta''$ for some forms $\eta'$ and $\eta''$
+over $U$. We conclude that
+we have a short exact sequence
+$$
+0 \to \Omega^p_{X/S} \to \Omega^p_{X/S}(\log Y) \to \Omega^{p - 1}_{Y/S} \to 0
+$$
+for all $p$. We still have to define the differentials
+$\Omega^p_{X/S}(\log Y) \to \Omega^{p + 1}_{X/S}(\log Y)$.
+On the subsheaf $\Omega^p_{X/S}$ we use the differential of
+the de Rham complex of $X$ over $S$. Finally, we define
+$\text{d}(\text{d}\log(f) \wedge \eta) = -\text{d}\log(f) \wedge \text{d}\eta$.
+The sign is forced on us by the Leibniz rule (on $\Omega^\bullet_{X/S}$)
+and it is compatible with the differential on $\Omega^\bullet_{Y/S}[-1]$
+which is after all $-\text{d}_{Y/S}$ by our sign convention in
+Homology, Definition \ref{homology-definition-shift-cochain}.
+In this way we obtain a short exact
+sequence of complexes as stated in the lemma.
+\end{proof}
+
+\begin{definition}
+\label{definition-log-complex}
+Let $X \to S$ be a morphism of schemes. Let $Y \subset X$ be an
+effective Cartier divisor. Assume the de Rham complex of log poles
+is defined for $Y \subset X$ over $S$. Then the complex
+$$
+\Omega^\bullet_{X/S}(\log Y)
+$$
+constructed in Lemma \ref{lemma-log-complex} is the
+{\it de Rham complex of log poles for $Y \subset X$ over $S$}.
+\end{definition}
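+
+\noindent
+The simplest example to keep in mind, say with $S = \Spec(R)$ affine,
+is $X = \mathbf{A}^1_S = \Spec(R[t])$ with $Y = V(t)$ the zero section.
+Then $\Omega^1_{X/S} = \mathcal{O}_X \text{d}t$ and, since
+$\text{d}t = t \, \text{d}\log(t)$, we find
+$$
+\Omega^1_{X/S}(\log Y) = \mathcal{O}_X \, \text{d}\log(t)
+$$
+so that the degree $1$ part of the short exact sequence of
+Lemma \ref{lemma-log-complex} becomes
+$$
+0 \to t\mathcal{O}_X \, \text{d}\log(t) \to
+\mathcal{O}_X \, \text{d}\log(t) \to \mathcal{O}_Y \to 0
+$$
+with $\text{Res}(g \, \text{d}\log(t)) = g|_Y$.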
+
+\noindent
+This complex has many good properties.
+
+\begin{lemma}
+\label{lemma-multiplication-log}
+Let $p : X \to S$ be a morphism of schemes. Let $Y \subset X$ be an
+effective Cartier divisor. Assume the de Rham complex of log poles
+is defined for $Y \subset X$ over $S$.
+\begin{enumerate}
+\item The maps
+$\wedge : \Omega^p_{X/S} \times \Omega^q_{X/S} \to \Omega^{p + q}_{X/S}$
+extend uniquely to $\mathcal{O}_X$-bilinear maps
+$$
+\wedge : \Omega^p_{X/S}(\log Y) \times \Omega^q_{X/S}(\log Y)
+\to \Omega^{p + q}_{X/S}(\log Y)
+$$
+satisfying the Leibniz rule
+$
+\text{d}(\omega \wedge \eta) = \text{d}(\omega) \wedge \eta +
+(-1)^{\deg(\omega)} \omega \wedge \text{d}(\eta)$,
+\item with multiplication as in (1) the map
+$\Omega^\bullet_{X/S} \to \Omega^\bullet_{X/S}(\log Y)$
+is a homomorphism of differential graded $\mathcal{O}_S$-algebras,
+\item via the maps in (1) we have $\Omega^p_{X/S}(\log Y) =
+\wedge^p(\Omega^1_{X/S}(\log Y))$, and
+\item the map
+$\text{Res} : \Omega^\bullet_{X/S}(\log Y) \to \Omega^\bullet_{Y/S}[-1]$
+satisfies
+$$
+\text{Res}(\omega \wedge \eta) = \text{Res}(\omega) \wedge \eta|_Y
+$$
+for $\omega$ a local section of $\Omega^p_{X/S}(\log Y)$ and $\eta$
+a local section of $\Omega^q_{X/S}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows by direct calculation from the local construction
+of the complex in the proof of Lemma \ref{lemma-log-complex}.
+Details omitted.
+\end{proof}
+
+\noindent
+Consider a commutative diagram
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+of schemes. Let $Y \subset X$ be an effective Cartier divisor
+whose pullback $Y' = f^*Y$ is defined
+(Divisors, Definition
+\ref{divisors-definition-pullback-effective-Cartier-divisor}).
+Assume
+the de Rham complex of log poles is defined for $Y \subset X$ over $S$
+and
+the de Rham complex of log poles is defined for $Y' \subset X'$ over $S'$.
+In this case we obtain a map of short exact sequences of complexes
+$$
+\xymatrix{
+0 \ar[r] &
+f^{-1}\Omega^\bullet_{X/S} \ar[r] \ar[d] &
+f^{-1}\Omega^\bullet_{X/S}(\log Y) \ar[r] \ar[d] &
+f^{-1}\Omega^\bullet_{Y/S}[-1] \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\Omega^\bullet_{X'/S'} \ar[r] &
+\Omega^\bullet_{X'/S'}(\log Y') \ar[r] &
+\Omega^\bullet_{Y'/S'}[-1] \ar[r] &
+0
+}
+$$
+Linearizing, for every $p$ we obtain a linear map
+$f^*\Omega^p_{X/S}(\log Y) \to \Omega^p_{X'/S'}(\log Y')$.
+
+\begin{lemma}
+\label{lemma-gysin-via-log-complex}
+Let $f : X \to S$ be a morphism of schemes. Let $Y \subset X$ be an effective
+Cartier divisor. Assume the de Rham complex of log poles is defined for
+$Y \subset X$ over $S$. Denote
+$$
+\delta : \Omega^\bullet_{Y/S} \to \Omega^\bullet_{X/S}[2]
+$$
+in $D(X, f^{-1}\mathcal{O}_S)$ the ``boundary'' map coming from the
+short exact sequence in Lemma \ref{lemma-log-complex}. Denote
+$$
+\xi' : \Omega^\bullet_{X/S} \to \Omega^\bullet_{X/S}[2]
+$$
+in $D(X, f^{-1}\mathcal{O}_S)$ the map of
+Remark \ref{remark-cup-product-as-a-map}
+corresponding to $\xi = c_1^{dR}(\mathcal{O}_X(-Y))$. Denote
+$$
+\zeta' : \Omega^\bullet_{Y/S} \to \Omega^\bullet_{Y/S}[2]
+$$
+in $D(Y, f|_Y^{-1}\mathcal{O}_S)$ the map of
+Remark \ref{remark-cup-product-as-a-map} corresponding to
+$\zeta = c_1^{dR}(\mathcal{O}_X(-Y)|_Y)$. Then the diagram
+$$
+\xymatrix{
+\Omega^\bullet_{X/S} \ar[d]_{\xi'} \ar[r] &
+\Omega^\bullet_{Y/S} \ar[d]^{\zeta'} \ar[ld]_\delta \\
+\Omega^\bullet_{X/S}[2] \ar[r] &
+\Omega^\bullet_{Y/S}[2]
+}
+$$
+is commutative in $D(X, f^{-1}\mathcal{O}_S)$.
+\end{lemma}
+
+\begin{proof}
+More precisely, we define $\delta$ as the boundary map corresponding to the
+shifted short exact sequence
+$$
+0 \to \Omega^\bullet_{X/S}[1] \to
+\Omega^\bullet_{X/S}(\log Y)[1] \to
+\Omega^\bullet_{Y/S} \to 0
+$$
+It suffices to prove each triangle commutes. Set
+$\mathcal{L} = \mathcal{O}_X(-Y)$. Denote $\pi : L \to X$ the line bundle
+with $\pi_*\mathcal{O}_L = \bigoplus_{n \geq 0} \mathcal{L}^{\otimes n}$.
+
+\medskip\noindent
+Commutativity of the upper left triangle.
+By Lemma \ref{lemma-the-complex-for-L-star-gives-chern-class}
+the map $\xi'$ is the boundary map of the triangle given in
+Lemma \ref{lemma-the-complex-for-L-star}.
+By functoriality it suffices to prove there exists a morphism of
+short exact sequences
+$$
+\xymatrix{
+0 \ar[r] &
+\Omega^\bullet_{X/S}[1] \ar[r] \ar[d] &
+\Omega^\bullet_{L^\star/S, 0}[1] \ar[r] \ar[d] &
+\Omega^\bullet_{X/S} \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\Omega^\bullet_{X/S}[1] \ar[r] &
+\Omega^\bullet_{X/S}(\log Y)[1] \ar[r] &
+\Omega^\bullet_{Y/S} \ar[r] &
+0
+}
+$$
+where the left and right vertical arrows are the obvious ones.
+We can define the middle vertical arrow by the rule
+$$
+\omega' + \text{d}\log(s) \wedge \omega \longmapsto
+\omega' + \text{d}\log(f) \wedge \omega
+$$
+where $\omega', \omega$ are local sections of $\Omega^\bullet_{X/S}$
+and where $s$ is a local generator of $\mathcal{L}$ and
+$f \in \mathcal{O}_X(-Y)$ is the corresponding section of the ideal
+sheaf of $Y$ in $X$. Since the constructions of the maps in
+Lemmas \ref{lemma-the-complex-for-L-star} and \ref{lemma-log-complex}
+match exactly, this works.
+
+\medskip\noindent
+Commutativity of the lower right triangle. Denote
+$\overline{L}$ the restriction of $L$ to $Y$.
+By Lemma \ref{lemma-the-complex-for-L-star-gives-chern-class}
+the map $\zeta'$ is the boundary map of the triangle given in
+Lemma \ref{lemma-the-complex-for-L-star} using the line bundle
+$\overline{L}$ on $Y$.
+By functoriality it suffices to prove there exists a morphism of
+short exact sequences
+$$
+\xymatrix{
+0 \ar[r] &
+\Omega^\bullet_{X/S}[1] \ar[r] \ar[d] &
+\Omega^\bullet_{X/S}(\log Y)[1] \ar[r] \ar[d] &
+\Omega^\bullet_{Y/S} \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+\Omega^\bullet_{Y/S}[1] \ar[r] &
+\Omega^\bullet_{\overline{L}^\star/S, 0}[1] \ar[r] &
+\Omega^\bullet_{Y/S} \ar[r] &
+0 \\
+}
+$$
+where the left and right vertical arrows are the obvious ones.
+We can define the middle vertical arrow by the rule
+$$
+\omega' + \text{d}\log(f) \wedge \omega \longmapsto
+\omega'|_Y + \text{d}\log(\overline{s}) \wedge \omega|_Y
+$$
+where $\omega', \omega$ are local sections of $\Omega^\bullet_{X/S}$
+and where $f$ is a local generator of $\mathcal{O}_X(-Y)$ viewed as
+a function on $X$ and where $\overline{s}$ is $f|_Y$ viewed as a
+section of $\mathcal{L}|_Y = \mathcal{O}_X(-Y)|_Y$.
+Since the constructions of the maps in
+Lemmas \ref{lemma-the-complex-for-L-star} and \ref{lemma-log-complex}
+match exactly, this works.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-log-complex-consequence}
+Let $X \to S$ be a morphism of schemes. Let $Y \subset X$ be an effective
+Cartier divisor. Assume the de Rham complex of log poles is defined for
+$Y \subset X$ over $S$. Let $b \in H^m_{dR}(X/S)$ be a de Rham cohomology
+class whose restriction to $Y$ is zero. Then
+$c_1^{dR}(\mathcal{O}_X(Y)) \cup b = 0$ in $H^{m + 2}_{dR}(X/S)$.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from Lemma \ref{lemma-gysin-via-log-complex}.
+Namely, we have
+$$
+c_1^{dR}(\mathcal{O}_X(Y)) \cup b =
+-c_1^{dR}(\mathcal{O}_X(-Y)) \cup b = -\xi'(b) = -\delta(b|_Y) = 0
+$$
+as desired. For the second equality, see
+Remark \ref{remark-cup-product-as-a-map}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-log-smooth}
+Let $X \to T \to S$ be morphisms of schemes. Let $Y \subset X$ be an effective
+Cartier divisor. If both $X \to T$ and $Y \to T$ are smooth, then
+the de Rham complex of log poles is defined for $Y \subset X$ over $S$.
+\end{lemma}
+
+\begin{proof}
+Let $y \in Y$ be a point.
+By More on Morphisms, Lemma \ref{more-morphisms-lemma-etale-local-structure}
+there exists an integer $m \geq 0$ and a commutative diagram
+$$
+\xymatrix{
+Y \ar[d] &
+V \ar[l] \ar[d] \ar[r] &
+\mathbf{A}^m_T
+\ar[d]^{(a_1, \ldots, a_m) \mapsto (a_1, \ldots, a_m, 0)} \\
+X &
+U \ar[l] \ar[r]^-\pi &
+\mathbf{A}^{m + 1}_T
+}
+$$
+where $U \subset X$ is open, $V = Y \cap U$,
+$\pi$ is \'etale, $V = \pi^{-1}(\mathbf{A}^m_T)$, and $y \in V$.
+Denote $z \in \mathbf{A}^m_T$ the image of $y$. Then we have
+$$
+\Omega^p_{X/S, y} = \Omega^p_{\mathbf{A}^{m + 1}_T/S, z}
+\otimes_{\mathcal{O}_{\mathbf{A}^{m + 1}_T, z}} \mathcal{O}_{X, y}
+$$
+by Lemma \ref{lemma-etale}. Denote $x_1, \ldots, x_{m + 1}$
+the coordinate functions on $\mathbf{A}^{m + 1}_T$.
+Since the conditions (1) and (2) in Definition \ref{definition-local-product}
+do not depend on the choice of the local equation $f$,
+it suffices to check the conditions (1) and (2) when $f$ is the
+image of $x_{m + 1}$ by the flat local ring homomorphism
+$\mathcal{O}_{\mathbf{A}^{m + 1}_T, z} \to \mathcal{O}_{X, y}$.
+In this way we see that it suffices to check conditions (1) and (2)
+for $\mathbf{A}^m_T \subset \mathbf{A}^{m + 1}_T$ and the point $z$.
+To prove this case we may assume $S = \Spec(A)$ and $T = \Spec(B)$
+are affine. Let $A \to B$ be the ring map corresponding to the morphism
+$T \to S$ and set $P = B[x_1, \ldots, x_{m + 1}]$ so that
+$\mathbf{A}^{m + 1}_T = \Spec(P)$. We have
+$$
+\Omega_{P/A} = \Omega_{B/A} \otimes_B P \oplus
+\bigoplus\nolimits_{j = 1, \ldots, m} P \text{d}x_j \oplus
+P \text{d}x_{m + 1}
+$$
+Hence the map $P \to \Omega_{P/A}$, $g \mapsto g \text{d}x_{m + 1}$
+is a split injection and $x_{m + 1}$ is a nonzerodivisor on
+$\Omega^p_{P/A}$ for all $p \geq 0$. Localizing at the prime ideal
+corresponding to $z$ finishes the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-check-log-completion-1}
+Let $S$ be a locally Noetherian scheme. Let $X$ be locally of finite
+type over $S$. Let $Y \subset X$ be an effective Cartier divisor.
+If the map
+$$
+\mathcal{O}_{X, y}^\wedge \longrightarrow \mathcal{O}_{Y, y}^\wedge
+$$
+has a section for all $y \in Y$, then
+the de Rham complex of log poles is defined for $Y \subset X$ over $S$.
+If we ever need this result we will formulate a precise statement and
+add a proof here.
+\end{remark}
+
+\begin{remark}
+\label{remark-check-log-completion-2}
+Let $S$ be a locally Noetherian scheme. Let $X$ be locally of finite
+type over $S$. Let $Y \subset X$ be an effective Cartier divisor.
+If for every $y \in Y$ we can find a diagram of schemes over $S$
+$$
+X \xleftarrow{\varphi} U \xrightarrow{\psi} V
+$$
+with $\varphi$ \'etale and $\psi|_{\varphi^{-1}(Y)} : \varphi^{-1}(Y) \to V$
+\'etale, then the de Rham complex of log poles is defined for
+$Y \subset X$ over $S$. A special case is when the pair $(X, Y)$
+\'etale locally looks like $(V \times \mathbf{A}^1, V \times \{0\})$.
+If we ever need this result we will formulate
+a precise statement and add a proof here.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Calculations}
+\label{section-calculations}
+
+\noindent
+In this section we calculate some Hodge and de Rham cohomology
+groups for a standard blowing up.
+
+\medskip\noindent
+We fix a ring $R$ and we set $S = \Spec(R)$. Fix integers $0 \leq m$ and
+$1 \leq n$. Consider the closed immersion
+$$
+Z = \mathbf{A}^m_S \longrightarrow \mathbf{A}^{m + n}_S = X,\quad
+(a_1, \ldots, a_m) \mapsto (a_1, \ldots, a_m, 0, \ldots, 0).
+$$
+We are going to consider the blowing up $L$ of $X$
+along the closed subscheme $Z$. Write
+$$
+X =
+\mathbf{A}^{m + n}_S =
+\Spec(A)
+\quad\text{with}\quad
+A = R[x_1, \ldots, x_m, y_1, \ldots, y_n]
+$$
+We will consider $A = R[x_1, \ldots, x_m, y_1, \ldots, y_n]$ as a
+graded $R$-algebra by setting $\deg(x_i) = 0$ and $\deg(y_j) = 1$.
+With this grading we have
+$$
+P =
+\text{Proj}(A) =
+\mathbf{A}^m_S \times_S \mathbf{P}^{n - 1}_S =
+Z \times_S \mathbf{P}^{n - 1}_S =
+\mathbf{P}^{n - 1}_Z
+$$
+Observe that the ideal cutting out $Z$ in $X$ is the ideal $A_+$.
+Hence $L$ is the Proj of the Rees algebra
+$$
+A \oplus A_+ \oplus (A_+)^2 \oplus \ldots =
+\bigoplus\nolimits_{d \geq 0} A_{\geq d}
+$$
+Hence $L$ is an example of the phenomenon studied in
+more generality in More on Morphisms, Section
+\ref{more-morphisms-section-proj-spec};
+we will use the observations we made there without further mention.
+In particular, we have a commutative diagram
+$$
+\xymatrix{
+P \ar[r]_0 \ar[d]_p &
+L \ar[r]_-\pi \ar[d]^b &
+P \ar[d]^p \\
+Z \ar[r]^i &
+X \ar[r] &
+Z
+}
+$$
+such that $\pi : L \to P$ is a line bundle over
+$P = Z \times_S \mathbf{P}^{n - 1}_S$
+with zero section $0$ whose image $E = 0(P) \subset L$
+is the exceptional divisor of the blowup $b$.
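+
+\medskip\noindent
+For orientation we spell out the smallest nontrivial special case
+(it is not used in what follows): if $m = 0$ and $n = 2$, then
+$X = \mathbf{A}^2_S = \Spec(R[y_1, y_2])$, the centre $Z$ is the
+zero section of $X \to S$, and $P = \mathbf{P}^1_S$. In this case
+$L$ is the classical blowup of the affine plane at the origin, i.e.,
+the closed subscheme of $\mathbf{A}^2_S \times_S \mathbf{P}^1_S$
+of pairs $(y, \ell)$ with $y \in \ell$, the morphism $\pi$ sends
+$(y, \ell)$ to $\ell$, and $\pi : L \to P$ is the tautological line
+bundle $\mathcal{O}_{\mathbf{P}^1_S}(-1)$ with zero section $E$.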
+
+\begin{lemma}
+\label{lemma-comparison}
+For $a \geq 0$ we have
+\begin{enumerate}
+\item the map
+$\Omega^a_{X/S} \to b_*\Omega^a_{L/S}$ is an isomorphism,
+\item the map $\Omega^a_{Z/S} \to p_*\Omega^a_{P/S}$ is an isomorphism,
+and
+\item the map $Rb_*\Omega^a_{L/S} \to i_*Rp_*\Omega^a_{P/S}$ is an isomorphism
+on cohomology sheaves in degree $\geq 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let us first prove part (2). Since
+$P = Z \times_S \mathbf{P}^{n - 1}_S$
+we see that
+$$
+\Omega^a_{P/S} = \bigoplus\nolimits_{a = r + s}
+\text{pr}_1^*\Omega^r_{Z/S} \otimes
+\text{pr}_2^*\Omega^s_{\mathbf{P}^{n - 1}_S/S}
+$$
+Recalling that $p = \text{pr}_1$, by the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula})
+we obtain
+$$
+p_*\Omega^a_{P/S} = \bigoplus\nolimits_{a = r + s}
+\Omega^r_{Z/S} \otimes
+\text{pr}_{1, *}\text{pr}_2^*\Omega^s_{\mathbf{P}^{n - 1}_S/S}
+$$
+By the calculations in Section \ref{section-projective-space}
+and in particular in
+the proof of Lemma \ref{lemma-hodge-cohomology-projective-space}
+we have $\text{pr}_{1, *}\text{pr}_2^*\Omega^s_{\mathbf{P}^{n - 1}_S/S} = 0$
+except if $s = 0$ in which case we get
+$\text{pr}_{1, *}\mathcal{O}_P = \mathcal{O}_Z$.
+This proves (2).
+
+\medskip\noindent
+By the material in Section \ref{section-line-bundle} and in particular
+Lemma \ref{lemma-push-omega-a} we have
+$\pi_*\Omega^a_{L/S} = \Omega^a_{P/S} \oplus
+\bigoplus_{k \geq 1} \Omega^a_{L/S, k}$.
+Since the composition $\pi \circ 0$ in the diagram above
+is the identity morphism on $P$, to prove part (3) it suffices to show that
+$\Omega^a_{L/S, k}$ has vanishing higher cohomology for $k > 0$.
+By Lemmas \ref{lemma-the-complex-for-L-star} and \ref{lemma-push-omega-a}
+there are short exact sequences
+$$
+0 \to \Omega^a_{P/S} \otimes \mathcal{O}_P(k)
+\to \Omega^a_{L/S, k} \to
+\Omega^{a - 1}_{P/S} \otimes \mathcal{O}_P(k) \to 0
+$$
+where $\Omega^{a - 1}_{P/S} = 0$ if $a = 0$. Since
+$P = Z \times_S \mathbf{P}^{n - 1}_S$ we have
+$$
+\Omega^a_{P/S} = \bigoplus\nolimits_{i + j = a}
+\Omega^i_{Z/S} \boxtimes \Omega^j_{\mathbf{P}^{n - 1}_S/S}
+$$
+by Lemma \ref{lemma-de-rham-complex-product}.
+Since $\Omega^i_{Z/S}$ is free of finite rank
+we see that it suffices to show that the higher cohomology of
+$\mathcal{O}_Z \boxtimes \Omega^j_{\mathbf{P}^{n - 1}_S/S}(k)$
+is zero for $k > 0$. This follows from
+Lemma \ref{lemma-twisted-hodge-cohomology-projective-space}
+applied to $P = Z \times_S \mathbf{P}^{n - 1}_S = \mathbf{P}^{n - 1}_Z$
+and the proof of (3) is complete.
+
+\medskip\noindent
+We still have to prove (1). If $n = 1$, then we are blowing
+up an effective Cartier divisor and $b$ is an isomorphism
+and we have (1). If $n > 1$, then the composition
+$$
+\Gamma(X, \Omega^a_{X/S})
+\to
+\Gamma(L, \Omega^a_{L/S})
+\to
+\Gamma(L \setminus E, \Omega^a_{L/S})
+=
+\Gamma(X \setminus Z, \Omega^a_{X/S})
+$$
+is an isomorphism as $\Omega^a_{X/S}$ is finite free
+(small detail omitted). Thus the only way (1) can fail is if
+there are nonzero elements of $\Gamma(L, \Omega^a_{L/S})$ which vanish
+outside of $E = 0(P)$. Since $L$ is a line bundle over $P$
+with zero section $0 : P \to L$, it suffices to show that
+on a line bundle there are no nonzero sections of a sheaf
+of differentials which vanish identically outside the zero section.
+The reader sees this is true either (preferably) by a local calculation
+or by using that $\Omega_{L/S, k} \subset \Omega_{L^\star/S, k}$
+(see references above).
+\end{proof}
+
+\noindent
+We suggest the reader skip to the next section at this point.
+
+\begin{lemma}
+\label{lemma-comparison-bis}
+For $a \geq 0$ there are canonical maps
+$$
+b^*\Omega^a_{X/S} \longrightarrow
+\Omega^a_{L/S} \longrightarrow
+b^*\Omega^a_{X/S} \otimes_{\mathcal{O}_L} \mathcal{O}_L((n - 1)E)
+$$
+whose composition is induced by the inclusion
+$\mathcal{O}_L \subset \mathcal{O}_L((n - 1)E)$.
+\end{lemma}
+
+\begin{proof}
+The first arrow in the displayed formula is
+discussed in Section \ref{section-de-rham-complex}.
+To get the second arrow we have to show that if we view
+a local section of $\Omega^a_{L/S}$ as a ``meromorphic section''
+of $b^*\Omega^a_{X/S}$, then it has a pole of order at most
+$n - 1$ along $E$. To see this we work on affine local charts
+on $L$. Namely, recall that $L$ is covered by the spectra of the
+affine blowup algebras $A[\frac{I}{y_i}]$ where $I = A_{+}$
+is the ideal generated by $y_1, \ldots, y_n$. See
+Algebra, Section \ref{algebra-section-blow-up} and
+Divisors, Lemma \ref{divisors-lemma-blowing-up-affine}.
+By symmetry it is enough to work on the
+chart corresponding to $i = 1$. Then
+$$
+A[\frac{I}{y_1}] = R[x_1, \ldots, x_m, y_1, t_2, \ldots, t_n]
+$$
+where $t_i = y_i/y_1$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-blowup-regular-sequence}.
+Thus the module $\Omega^1_{L/S}$ is over the corresponding
+affine open freely generated by
+$\text{d}x_1, \ldots, \text{d}x_m$, $\text{d}y_1$, and
+$\text{d}t_2, \ldots, \text{d}t_n$.
+Of course, the first $m + 1$ of these generators come from
+$b^*\Omega^1_{X/S}$ and for the remaining $n - 1$ we have
+$$
+\text{d}t_j =
+\text{d}\frac{y_j}{y_1} =
+\frac{1}{y_1}\text{d}y_j - \frac{y_j}{y_1^2}\text{d}y_1 =
+\frac{\text{d}y_j - t_j \text{d}y_1}{y_1}
+$$
+which has a pole of order $1$ along $E$ since $E$ is cut out by $y_1$
+on this chart. Since the wedges of $a$ of these elements give a basis
+of $\Omega^a_{L/S}$ over this chart, and since there are at most
+$n - 1$ of the $\text{d}t_j$ involved this finishes the proof.
+\end{proof}
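+
+\noindent
+To illustrate the counting argument at the end of the proof, take
+$n = 3$ and $a = 2$. On the chart above we have for instance
+$$
+\text{d}t_2 \wedge \text{d}t_3 =
+\frac{(\text{d}y_2 - t_2 \text{d}y_1) \wedge
+(\text{d}y_3 - t_3 \text{d}y_1)}{y_1^2}
+$$
+which involves two of the $\text{d}t_j$ and accordingly has a pole
+of order $2 = n - 1$ along $E$; no basis element of $\Omega^2_{L/S}$
+on this chart involves more of the $\text{d}t_j$.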
+
+\begin{lemma}
+\label{lemma-blowup-twist-same-cohomology}
+Let $E = 0(P)$ be the exceptional divisor of the blowing up $b$.
+For any locally free $\mathcal{O}_X$-module $\mathcal{E}$ and
+$0 \leq i \leq n - 1$ the map
+$$
+\mathcal{E}
+\longrightarrow
+Rb_*(b^*\mathcal{E} \otimes_{\mathcal{O}_L} \mathcal{O}_L(iE))
+$$
+is an isomorphism in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+By the projection formula it is enough to show this for
+$\mathcal{E} = \mathcal{O}_X$, see Cohomology, Lemma
+\ref{cohomology-lemma-projection-formula}.
+Since $X$ is affine it suffices to show that the maps
+$$
+H^0(X, \mathcal{O}_X) \to
+H^0(L, \mathcal{O}_L) \to
+H^0(L, \mathcal{O}_L(iE))
+$$
+are isomorphisms and that $H^j(L, \mathcal{O}_L(iE)) = 0$
+for $j > 0$ and $0 \leq i \leq n - 1$, see Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images-application}.
+Since $\pi$ is affine, we can compute global sections and
+cohomology after taking $\pi_*$, see Cohomology of Schemes, Lemma
+\ref{coherent-lemma-relative-affine-cohomology}. If $n = 1$, then
+$L \to X$ is an isomorphism and $i = 0$ hence the first statement holds.
+If $n > 1$, then we consider the composition
+$$
+H^0(X, \mathcal{O}_X) \to H^0(L, \mathcal{O}_L) \to
+H^0(L, \mathcal{O}_L(iE)) \to H^0(L \setminus E, \mathcal{O}_L) =
+H^0(X \setminus Z, \mathcal{O}_X)
+$$
+Since
+$H^0(X \setminus Z, \mathcal{O}_X) = H^0(X, \mathcal{O}_X)$ in this
+case as $Z$ has codimension $n \geq 2$ in $X$ (details omitted) we conclude
+the first statement holds. For the second, recall that
+$\mathcal{O}_L(E) = \mathcal{O}_L(-1)$, see Divisors, Lemma
+\ref{divisors-lemma-blowing-up-gives-effective-Cartier-divisor}.
+Hence we have
+$$
+\pi_*\mathcal{O}_L(iE) =
+\pi_*\mathcal{O}_L(-i) =
+\bigoplus\nolimits_{k \geq -i} \mathcal{O}_P(k)
+$$
+as discussed in
+More on Morphisms, Section \ref{more-morphisms-section-proj-spec}.
+Thus we conclude by the vanishing of the cohomology of twists
+of the structure sheaf on $P = \mathbf{P}^{n - 1}_Z$
+shown in Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cohomology-projective-space-over-ring}.
+\end{proof}
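+
+\noindent
+The bound $i \leq n - 1$ in the lemma is sharp. Indeed, take $n = 2$,
+so that $P = \mathbf{P}^1_Z$. For $0 \leq i \leq 1$ every summand of
+$\pi_*\mathcal{O}_L(iE) = \bigoplus_{k \geq -i} \mathcal{O}_P(k)$
+is a twist $\mathcal{O}_{\mathbf{P}^1_Z}(k)$ with $k \geq -1$ and
+these have no higher cohomology. For $i = 2$ the summand
+$\mathcal{O}_{\mathbf{P}^1_Z}(-2)$ occurs, and
+$H^1(\mathbf{P}^1_Z, \mathcal{O}(-2))$ is nonzero (for $R$ nonzero),
+so the conclusion of the lemma fails for this value of $i$.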
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Blowing up and de Rham cohomology}
+\label{section-blowing-up}
+
+\noindent
+Fix a base scheme $S$, a smooth morphism $X \to S$, and a closed subscheme
+$Z \subset X$ which is also smooth over $S$. Denote $b : X' \to X$
+the blowing up of $X$ along $Z$. Denote $E \subset X'$ the exceptional
+divisor. Picture
+\begin{equation}
+\label{equation-blowup}
+\vcenter{
+\xymatrix{
+E \ar[r]_j \ar[d]_p & X' \ar[d]^b \\
+Z \ar[r]^i & X
+}
+}
+\end{equation}
+Our goal in this section is to prove that the map
+$b^* : H_{dR}^*(X/S) \longrightarrow H_{dR}^*(X'/S)$
+is injective (although a lot more can be said).
+
+\begin{lemma}
+\label{lemma-blowup}
+Let $S$ be a scheme. Let $Z \to X$ be a closed immersion of schemes
+smooth over $S$. Let $b : X' \to X$ be the blowing up of $Z$ with
+exceptional divisor $E \subset X'$. Then $X'$ and $E$ are smooth
+over $S$. The morphism $p : E \to Z$ is canonically isomorphic
+to the projective space bundle
+$$
+\mathbf{P}(\mathcal{I}/\mathcal{I}^2) \longrightarrow Z
+$$
+where $\mathcal{I} \subset \mathcal{O}_X$ is the ideal sheaf
+of $Z$. The relative $\mathcal{O}_E(1)$ coming from the projective
+space bundle structure is isomorphic to the restriction of
+$\mathcal{O}_{X'}(-E)$ to $E$.
+\end{lemma}
+
+\begin{proof}
+By Divisors, Lemma
+\ref{divisors-lemma-immersion-smooth-into-smooth-regular-immersion}
+the immersion $Z \to X$ is a regular immersion, hence
+the ideal sheaf $\mathcal{I}$ is of finite type, hence $b$ is a projective
+morphism with relatively ample invertible sheaf
+$\mathcal{O}_{X'}(1) = \mathcal{O}_{X'}(-E)$, see
+Divisors, Lemmas
+\ref{divisors-lemma-blowing-up-gives-effective-Cartier-divisor} and
+\ref{divisors-lemma-blowing-up-projective}.
+The canonical map $\mathcal{I} \to b_*\mathcal{O}_{X'}(1)$
+gives a closed immersion
+$$
+X' \longrightarrow
+\mathbf{P}\left(\bigoplus\nolimits_{n \geq 0}
+\text{Sym}^n_{\mathcal{O}_X}(\mathcal{I})\right)
+$$
+by the very construction of the blowup. The restriction of this morphism
+to $E$ gives a canonical map
+$$
+E \longrightarrow
+\mathbf{P}\left(\bigoplus\nolimits_{n \geq 0}
+\text{Sym}^n_{\mathcal{O}_Z}(\mathcal{I}/\mathcal{I}^2)\right)
+$$
+over $Z$. Since $\mathcal{I}/\mathcal{I}^2$ is finite locally free,
+if this canonical map is an isomorphism, then the final part of the
+lemma holds. Having said all of this, now the question is \'etale
+local on $X$. Namely, blowing up commutes with flat base change by
+Divisors, Lemma \ref{divisors-lemma-flat-base-change-blowing-up}
+and we can check smoothness after precomposing with a surjective
+\'etale morphism. Thus by the \'etale local structure of a
+closed immersion of schemes over $S$ given in More on Morphisms, Lemma
+\ref{more-morphisms-lemma-etale-local-structure}
+this reduces to the situation discussed in
+Section \ref{section-calculations}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-comparison-general}
+With notation as in Lemma \ref{lemma-blowup} for $a \geq 0$ we have
+\begin{enumerate}
+\item the map
+$\Omega^a_{X/S} \to b_*\Omega^a_{X'/S}$ is an isomorphism,
+\item the map $\Omega^a_{Z/S} \to p_*\Omega^a_{E/S}$ is an isomorphism,
+\item the map $Rb_*\Omega^a_{X'/S} \to i_*Rp_*\Omega^a_{E/S}$ is an isomorphism
+on cohomology sheaves in degree $\geq 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\epsilon : X_1 \to X$ be a surjective \'etale morphism. Denote
+$i_1 : Z_1 \to X_1$, $b_1 : X'_1 \to X_1$, $E_1 \subset X'_1$, and
+$p_1 : E_1 \to Z_1$ the base changes of the objects considered in
+Lemma \ref{lemma-blowup}. Observe that $i_1$ is a closed immersion
+of schemes smooth over $S$ and that $b_1$ is the blowing up with center
+$Z_1$ by Divisors, Lemma \ref{divisors-lemma-flat-base-change-blowing-up}.
+Suppose that we can prove (1), (2), and (3)
+for the morphisms $b_1$, $p_1$, and $i_1$. Then by
+Lemma \ref{lemma-etale} we obtain that the pullback by $\epsilon$
+of the maps in (1), (2), and (3) are isomorphisms. As $\epsilon$
+is a surjective flat morphism we conclude.
+Thus working \'etale locally, by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-etale-local-structure},
+we may assume we are in the situation discussed in
+Section \ref{section-calculations}. In this case the lemma
+is the same as Lemma \ref{lemma-comparison}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-distinguished-triangle-blowup}
+With notation as in Lemma \ref{lemma-blowup} and denoting $f : X \to S$
+the structure morphism there is a canonical
+distinguished triangle
+$$
+\Omega^\bullet_{X/S} \to
+Rb_*(\Omega^\bullet_{X'/S}) \oplus i_*\Omega^\bullet_{Z/S} \to
+i_*Rp_*(\Omega^\bullet_{E/S}) \to
+\Omega^\bullet_{X/S}[1]
+$$
+in $D(X, f^{-1}\mathcal{O}_S)$ where the four maps
+$$
+\begin{matrix}
+\Omega^\bullet_{X/S} & \to & Rb_*(\Omega^\bullet_{X'/S}), \\
+\Omega^\bullet_{X/S} & \to & i_*\Omega^\bullet_{Z/S}, \\
+Rb_*(\Omega^\bullet_{X'/S}) & \to & i_*Rp_*(\Omega^\bullet_{E/S}), \\
+i_*\Omega^\bullet_{Z/S} & \to & i_*Rp_*(\Omega^\bullet_{E/S})
+\end{matrix}
+$$
+are the canonical ones (Section \ref{section-de-rham-complex}),
+except with sign reversed for one of them.
+\end{lemma}
+
+\begin{proof}
+Choose a distinguished triangle
+$$
+C \to Rb_*\Omega^\bullet_{X'/S} \oplus i_*\Omega^\bullet_{Z/S}
+\to i_*Rp_*\Omega^\bullet_{E/S} \to C[1]
+$$
+in $D(X, f^{-1}\mathcal{O}_S)$. It suffices to show that
+$\Omega^\bullet_{X/S}$ is isomorphic to $C$ in a manner compatible
+with the canonical maps. By the axioms of triangulated categories
+there exists a map of distinguished triangles
+$$
+\xymatrix{
+C' \ar[r] \ar[d] &
+b_*\Omega^\bullet_{X'/S} \oplus i_*\Omega^\bullet_{Z/S} \ar[r] \ar[d] &
+i_*p_*\Omega^\bullet_{E/S} \ar[r] \ar[d] &
+C'[1] \ar[d] \\
+C \ar[r] &
+Rb_*\Omega^\bullet_{X'/S} \oplus i_*\Omega^\bullet_{Z/S} \ar[r] &
+i_*Rp_*\Omega^\bullet_{E/S} \ar[r] &
+C[1]
+}
+$$
+By Lemma \ref{lemma-comparison-general} part (3) and
+Derived Categories, Proposition \ref{derived-proposition-9} we conclude that
+$C' \to C$ is an isomorphism. By Lemma \ref{lemma-comparison-general} part (2)
+the map $i_*\Omega^\bullet_{Z/S} \to i_*p_*\Omega^\bullet_{E/S}$
+is an isomorphism. Thus $C' = b_*\Omega^\bullet_{X'/S}$
+in the derived category. Finally, Lemma \ref{lemma-comparison-general}
+part (1) tells us this is equal to $\Omega^\bullet_{X/S}$.
+We omit the verification that this is compatible with the canonical maps.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-blowup-split}
+With notation as in Lemma \ref{lemma-blowup} the map
+$\Omega^\bullet_{X/S} \to Rb_*\Omega^\bullet_{X'/S}$
+has a splitting in $D(X, (X \to S)^{-1}\mathcal{O}_S)$.
+\end{proposition}
+
+\begin{proof}
+Consider the triangle constructed in
+Lemma \ref{lemma-distinguished-triangle-blowup}.
+We claim that the map
+$$
+Rb_*(\Omega^\bullet_{X'/S}) \oplus i_*\Omega^\bullet_{Z/S} \to
+i_*Rp_*(\Omega^\bullet_{E/S})
+$$
+has a splitting whose image contains the summand $i_*\Omega^\bullet_{Z/S}$.
+By Derived Categories, Lemma \ref{derived-lemma-split} this will show that
+the first arrow of the triangle has a splitting which vanishes on
+the summand $i_*\Omega^\bullet_{Z/S}$ which proves the proposition.
+We will prove the claim by decomposing $Rp_*\Omega^\bullet_{E/S}$
+into a direct sum where the first piece corresponds to
+$\Omega^\bullet_{Z/S}$ and the second piece can be lifted
+through $Rb_*\Omega^\bullet_{X'/S}$.
+
+\medskip\noindent
+Proof of the claim. We may decompose $X$ into open and closed subschemes
+having fixed relative dimension to $S$, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-omega-finite-locally-free}.
+Since the derived category $D(X, f^{-1}\mathcal{O}_S)$ correspondingly
+decomposes as a product of categories, we may assume $X$ has
+fixed relative dimension $N$ over $S$. We may decompose
+$Z = \coprod Z_m$ into open and closed subschemes of relative
+dimension $m \geq 0$ over $S$. The restriction $i_m : Z_m \to X$ of
+$i$ to $Z_m$ is a regular immersion of codimension $N - m$, see Divisors, Lemma
+\ref{divisors-lemma-immersion-smooth-into-smooth-regular-immersion}.
+Let $E = \coprod E_m$ be the corresponding decomposition, i.e.,
+we set $E_m = p^{-1}(Z_m)$. If $p_m : E_m \to Z_m$ denotes the
+restriction of $p$ to $E_m$, then we have a canonical isomorphism
+$$
+\tilde \xi_m :
+\bigoplus\nolimits_{t = 0, \ldots, N - m - 1}
+\Omega^\bullet_{Z_m/S}[-2t]
+\longrightarrow
+Rp_{m, *}\Omega^\bullet_{E_m/S}
+$$
+in $D(Z_m, (Z_m \to S)^{-1}\mathcal{O}_S)$
+where in degree $0$ we have the canonical map
+$\Omega^\bullet_{Z_m/S} \to Rp_{m, *}\Omega^\bullet_{E_m/S}$.
+See Remark \ref{remark-projective-space-bundle-formula}.
+Thus we have an isomorphism
+$$
+\tilde \xi :
+\bigoplus\nolimits_m
+\bigoplus\nolimits_{t = 0, \ldots, N - m - 1}
+\Omega^\bullet_{Z_m/S}[-2t]
+\longrightarrow
+Rp_*(\Omega^\bullet_{E/S})
+$$
+in $D(Z, (Z \to S)^{-1}\mathcal{O}_S)$
+whose restriction to the summand
+$\Omega^\bullet_{Z/S} = \bigoplus \Omega^\bullet_{Z_m/S}$ of the source
+is the canonical map $\Omega^\bullet_{Z/S} \to Rp_*(\Omega^\bullet_{E/S})$.
+Consider the subcomplexes $M_m$ and $K_m$ of the complex
+$\bigoplus\nolimits_{t = 0, \ldots, N - m - 1} \Omega^\bullet_{Z_m/S}[-2t]$
+introduced in Remark \ref{remark-projective-space-bundle-formula}.
+We set
+$$
+M = \bigoplus M_m
+\quad\text{and}\quad
+K = \bigoplus K_m
+$$
+We have $M = K[-2]$ and by construction the map
+$$
+c_{E/Z} \oplus \tilde \xi|_M :
+\Omega^\bullet_{Z/S} \oplus M
+\longrightarrow
+Rp_*(\Omega^\bullet_{E/S})
+$$
+is an isomorphism (see remark referenced above).
+
+\medskip\noindent
+Consider the map
+$$
+\delta : \Omega^\bullet_{E/S}[-2] \longrightarrow \Omega^\bullet_{X'/S}
+$$
+in $D(X', (X' \to S)^{-1}\mathcal{O}_S)$ of
+Lemma \ref{lemma-gysin-via-log-complex}
+with the property that the composition
+$$
+\Omega^\bullet_{E/S}[-2] \longrightarrow \Omega^\bullet_{X'/S}
+\longrightarrow
+\Omega^\bullet_{E/S}
+$$
+is the map $\theta'$ of Remark \ref{remark-cup-product-as-a-map} for
+$c_1^{dR}(\mathcal{O}_{X'}(-E)|_E) = c_1^{dR}(\mathcal{O}_E(1))$.
+The final assertion of Remark \ref{remark-projective-space-bundle-formula}
+tells us that the diagram
+$$
+\xymatrix{
+K[-2] \ar[d]_{(\tilde \xi|_K)[-2]} \ar[r]_{\text{id}} &
+M \ar[d]^{\tilde \xi|_M} \\
+Rp_*\Omega^\bullet_{E/S}[-2] \ar[r]^-{Rp_*\theta'} &
+Rp_*\Omega^\bullet_{E/S}
+}
+$$
+commutes. Thus we see that we can obtain the desired splitting of
+the claim as the map
+\begin{align*}
+Rp_*(\Omega^\bullet_{E/S})
+& \xrightarrow{(c_{E/Z} \oplus \tilde \xi|_M)^{-1}}
+\Omega^\bullet_{Z/S} \oplus M \\
+& \xrightarrow{\text{id} \oplus \text{id}^{-1}}
+\Omega^\bullet_{Z/S} \oplus K[-2] \\
+& \xrightarrow{\text{id} \oplus (\tilde \xi|_K)[-2]}
+\Omega^\bullet_{Z/S} \oplus Rp_*\Omega^\bullet_{E/S}[-2] \\
+& \xrightarrow{\text{id} \oplus Rb_*\delta}
+\Omega^\bullet_{Z/S} \oplus Rb_*\Omega^\bullet_{X'/S}
+\end{align*}
+The relationship between $\theta'$ and $\delta$ stated above
+together with the commutative diagram involving $\theta'$, $\tilde \xi|_K$,
+and $\tilde \xi|_M$ above are exactly what's needed to
+show that this is a section to the canonical map
+$\Omega^\bullet_{Z/S} \oplus Rb_*(\Omega^\bullet_{X'/S}) \to
+Rp_*(\Omega^\bullet_{E/S})$ and the proof of the claim is complete.
+\end{proof}
+
+\noindent
+Lemma \ref{lemma-splitting-on-omega-a}
+shows that producing the splitting on Hodge
+cohomology is a good deal easier than the result of
+Proposition \ref{proposition-blowup-split}.
+We urge the reader to skip ahead to the next section.
+
+\begin{lemma}
+\label{lemma-ext-zero}
+Let $i : Z \to X$ be a closed immersion of schemes which is regular of
+codimension $c$. Then $\Ext^q_{\mathcal{O}_X}(i_*\mathcal{F}, \mathcal{E}) = 0$
+for $q < c$ for $\mathcal{E}$ locally free on $X$ and $\mathcal{F}$
+any $\mathcal{O}_Z$-module.
+\end{lemma}
+
+\begin{proof}
+By the local to global spectral sequence of $\Ext$ it suffices
+to prove this affine locally on $X$. See
+Cohomology, Section \ref{cohomology-section-ext}.
+Thus we may assume $X = \Spec(A)$
+and there exists a regular sequence $f_1, \ldots, f_c$ in $A$
+such that $Z = \Spec(A/(f_1, \ldots, f_c))$. We may assume $c \geq 1$.
+Then we see that $f_1 : \mathcal{E} \to \mathcal{E}$
+is injective. Since $i_*\mathcal{F}$ is annihilated by $f_1$
+this shows that the lemma holds for $q = 0$ and that we have
+a surjection
+$$
+\Ext^{q - 1}_{\mathcal{O}_X}(i_*\mathcal{F}, \mathcal{E}/f_1\mathcal{E})
+\longrightarrow
+\Ext^q_{\mathcal{O}_X}(i_*\mathcal{F}, \mathcal{E})
+$$
+Thus it suffices to show that the source of this arrow is zero.
+Next we repeat this argument: if $c \geq 2$ the map
+$f_2 : \mathcal{E}/f_1\mathcal{E} \to \mathcal{E}/f_1\mathcal{E}$
+is injective. Since $i_*\mathcal{F}$ is annihilated by $f_2$
+this shows that the lemma holds for $q = 1$ and that we have a
+surjection
+$$
+\Ext^{q - 2}_{\mathcal{O}_X}(i_*\mathcal{F},
+\mathcal{E}/(f_1\mathcal{E} + f_2\mathcal{E}))
+\longrightarrow
+\Ext^{q - 1}_{\mathcal{O}_X}(i_*\mathcal{F}, \mathcal{E}/f_1\mathcal{E})
+$$
+Continuing in this fashion the lemma is proved.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-splitting-on-omega-a}
+With notation as in Lemma \ref{lemma-blowup} for $a \geq 0$
+there is a unique arrow
+$Rb_*\Omega^a_{X'/S} \to \Omega^a_{X/S}$ in $D(\mathcal{O}_X)$
+whose composition with $\Omega^a_{X/S} \to Rb_*\Omega^a_{X'/S}$
+is the identity on $\Omega^a_{X/S}$.
+\end{lemma}
+
+\begin{proof}
+We may decompose $X$ into open and closed subschemes
+having fixed relative dimension to $S$, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-omega-finite-locally-free}.
+Since the derived category $D(X, f^{-1}\mathcal{O}_S)$ correspondingly
+decomposes as a product of categories, we may assume $X$ has
+fixed relative dimension $N$ over $S$. We may decompose
+$Z = \coprod Z_m$ into open and closed subschemes of relative
+dimension $m \geq 0$ over $S$. The restriction $i_m : Z_m \to X$ of
+$i$ to $Z_m$ is a regular immersion of codimension $N - m$, see Divisors, Lemma
+\ref{divisors-lemma-immersion-smooth-into-smooth-regular-immersion}.
+Let $E = \coprod E_m$ be the corresponding decomposition, i.e.,
+we set $E_m = p^{-1}(Z_m)$. We claim that there are natural maps
+$$
+b^*\Omega^a_{X/S} \to \Omega^a_{X'/S} \to
+b^*\Omega^a_{X/S} \otimes_{\mathcal{O}_{X'}}
+\mathcal{O}_{X'}(\sum (N - m - 1)E_m)
+$$
+whose composition is induced by the inclusion
+$\mathcal{O}_{X'} \to \mathcal{O}_{X'}(\sum (N - m - 1)E_m)$.
+Namely, in order to prove this, it suffices to show that the
+cokernel of the first arrow is locally on $X'$ annihilated by
+a local equation of the effective Cartier divisor $\sum (N - m - 1)E_m$.
+To see this in turn we can work \'etale locally on $X$ as in the
+proof of Lemma \ref{lemma-comparison-general} and apply
+Lemma \ref{lemma-comparison-bis}.
+Computing \'etale locally using Lemma \ref{lemma-blowup-twist-same-cohomology}
+we see that the induced composition
+$$
+\Omega^a_{X/S} \to Rb_*\Omega^a_{X'/S} \to
+Rb_*\left(b^*\Omega^a_{X/S} \otimes_{\mathcal{O}_{X'}}
+\mathcal{O}_{X'}(\sum (N - m - 1)E_m)\right)
+$$
+is an isomorphism in $D(\mathcal{O}_X)$
+which is how we obtain the existence of the map in the lemma.
+
+\medskip\noindent
+For uniqueness, it suffices to show that there are no nonzero maps from
+$\tau_{\geq 1}Rb_*\Omega^a_{X'/S}$ to $\Omega^a_{X/S}$ in $D(\mathcal{O}_X)$.
+For this it suffices in turn to show that there are no nonzero maps
+from $R^qb_*\Omega^a_{X'/S}[-q]$ to $\Omega^a_{X/S}$ in $D(\mathcal{O}_X)$
+for $q \geq 1$ (details omitted). By
+Lemma \ref{lemma-comparison-general}
+we see that $R^qb_*\Omega^a_{X'/S} \cong i_*R^qp_*\Omega^a_{E/S}$
+is the pushforward of a module on $Z = \coprod Z_m$.
+Moreover, observe that the restriction of $R^qp_*\Omega^a_{E/S}$
+to $Z_m$ is nonzero only for $q < N - m$. Namely, the fibres of
+$E_m \to Z_m$ have dimension $N - m - 1$ and we can apply Limits, Lemma
+\ref{limits-lemma-higher-direct-images-zero-above-dimension-fibre}.
+Thus the desired vanishing follows from Lemma \ref{lemma-ext-zero}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Comparing sheaves of differential forms}
+\label{section-quasi-finite-syntomic}
+
+\noindent
+The goal of this section is to compare the sheaves
+$\Omega^p_{X/\mathbf{Z}}$ and $\Omega^p_{Y/\mathbf{Z}}$
+when given a locally quasi-finite syntomic morphism of schemes $f : Y \to X$.
+The result will be applied in Section \ref{section-trace}
+to the construction of the trace map on de Rham complexes if $f$ is finite.
+
+\begin{lemma}
+\label{lemma-funny-map}
+Let $R$ be a ring and consider a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+K^0 \ar[r] &
+L^0 \ar[r] &
+M^0 \ar[r] & 0 \\
+& & L^{-1} \ar[u]_\partial \ar@{=}[r] &
+M^{-1} \ar[u]
+}
+$$
+of $R$-modules with exact top row and $M^0$ and $M^{-1}$
+finite free of the same rank. Then there are canonical maps
+$$
+\wedge^i(H^0(L^\bullet)) \longrightarrow \wedge^i(K^0) \otimes_R \det(M^\bullet)
+$$
+whose composition with $\wedge^i(K^0) \to \wedge^i(H^0(L^\bullet))$
+is equal to multiplication with $\delta(M^\bullet)$.
+\end{lemma}
+
+\begin{proof}
+Say $M^0$ and $M^{-1}$ are free of rank $n$. For every $i \geq 0$
+there is a canonical surjection
+$$
+\pi_i :
+\wedge^{n + i}(L^0)
+\longrightarrow
+\wedge^i(K^0) \otimes \wedge^n(M^0)
+$$
+whose kernel is the submodule generated by wedges
+$l_1 \wedge \ldots \wedge l_{n + i}$ such that more than $i$ of the
+$l_j$ are in $K^0$. On the other hand, the exact sequence
+$$
+L^{-1} \to L^0 \to H^0(L^\bullet) \to 0
+$$
+similarly produces canonical maps
+$$
+\wedge^i(H^0(L^\bullet)) \otimes \wedge^n(L^{-1})
+\longrightarrow
+\wedge^{n + i}(L^0)
+$$
+by sending $\eta \otimes \theta$ to $\tilde \eta \wedge \partial(\theta)$
+where $\tilde \eta \in \wedge^i(L^0)$ is a lift of $\eta$.
+The composition of these two maps, combined with the identification
+$\wedge^n(L^{-1}) = \wedge^n(M^{-1})$ gives a map
+$$
+\wedge^i(H^0(L^\bullet)) \otimes \wedge^n(M^{-1})
+\longrightarrow
+\wedge^i(K^0) \otimes \wedge^n(M^0)
+$$
+Since $\det(M^\bullet) = \wedge^n(M^0) \otimes
+(\wedge^n(M^{-1}))^{\otimes -1}$ this produces a map as
+in the statement of the lemma.
+If $\eta$ is the image of $\omega \in \wedge^i(K^0)$, then we see
+that $\eta \otimes \theta$ is mapped to
+$\pi_i(\omega \wedge \partial(\theta)) = \omega \otimes \overline{\theta}$ in
+$\wedge^i(K^0) \otimes \wedge^n(M^0)$ where $\overline{\theta}$
+is the image of $\theta$ in $\wedge^n(M^0)$. Since
+$\delta(M^\bullet)$ is simply the determinant of the map
+$M^{-1} \to M^0$ this proves the last statement.
+\end{proof}
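+
+\noindent
+Here is the smallest instance of the construction in the proof. Take
+$i = n = 1$, $K^0 = Re$, $L^0 = Re \oplus Rf$, so that $M^0$ is free
+on the image $\overline{f}$ of $f$, and suppose $M^{-1} = Rg$ with
+$\partial(g) = ae + uf$ for some $a, u \in R$; then
+$\delta(M^\bullet) = u$. The class $\eta \in H^0(L^\bullet)$ of $e$
+satisfies
+$$
+\eta \otimes g \longmapsto e \wedge \partial(g) =
+e \wedge (ae + uf) = u\, e \wedge f
+\xrightarrow{\ \pi_1\ } u\, e \otimes \overline{f}
+$$
+in agreement with the final statement of the lemma.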
+
+\begin{remark}
+\label{remark-local-description}
+Let $A$ be a ring. Let $P = A[x_1, \ldots, x_n]$. Let
+$f_1, \ldots, f_n \in P$ and set $B = P/(f_1, \ldots, f_n)$.
+Assume $A \to B$ is quasi-finite. Then
+$B$ is a relative global complete intersection over $A$ (Algebra, Definition
+\ref{algebra-definition-relative-global-complete-intersection}) and
+$(f_1, \ldots, f_n)/(f_1, \ldots, f_n)^2$ is free with generators
+the classes $\overline{f}_i$ by Algebra, Lemma
+\ref{algebra-lemma-relative-global-complete-intersection-conormal}.
+Consider the following diagram
+$$
+\xymatrix{
+\Omega_{A/\mathbf{Z}} \otimes_A B \ar[r] &
+\Omega_{P/\mathbf{Z}} \otimes_P B \ar[r] &
+\Omega_{P/A} \otimes_P B \\
+&
+(f_1, \ldots, f_n)/(f_1, \ldots, f_n)^2 \ar[u] \ar@{=}[r] &
+(f_1, \ldots, f_n)/(f_1, \ldots, f_n)^2 \ar[u]
+}
+$$
+The right column represents $\NL_{B/A}$ in $D(B)$ hence has cohomology
+$\Omega_{B/A}$ in degree $0$. The top row is the split short exact sequence
+$0 \to \Omega_{A/\mathbf{Z}} \otimes_A B \to
+\Omega_{P/\mathbf{Z}} \otimes_P B \to \Omega_{P/A} \otimes_P B \to 0$.
+The middle column has cohomology $\Omega_{B/\mathbf{Z}}$ in degree $0$
+by Algebra, Lemma \ref{algebra-lemma-differential-seq}.
+Thus by Lemma \ref{lemma-funny-map} we obtain canonical $B$-module maps
+$$
+\Omega^p_{B/\mathbf{Z}} \longrightarrow
+\Omega^p_{A/\mathbf{Z}} \otimes_A \det(\NL_{B/A})
+$$
+whose composition with
+$\Omega^p_{A/\mathbf{Z}} \to \Omega^p_{B/\mathbf{Z}}$
+is multiplication by $\delta(\NL_{B/A})$.
+\end{remark}
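+
+\noindent
+For example, suppose $n = 1$, i.e., $B = A[x]/(f)$ with $A \to B$
+quasi-finite. Then $\NL_{B/A}$ is the complex
+$(f)/(f^2) \to \Omega_{P/A} \otimes_P B = B\text{d}x$ sending
+$\overline{f}$ to $f'(x)\text{d}x$. Using the bases $\overline{f}$
+and $\text{d}x$ to trivialize $\det(\NL_{B/A})$ we find
+$\delta(\NL_{B/A}) = f'(x)$, so the composition
+$$
+\Omega^p_{A/\mathbf{Z}} \otimes_A B \to \Omega^p_{B/\mathbf{Z}} \to
+\Omega^p_{A/\mathbf{Z}} \otimes_A \det(\NL_{B/A})
+$$
+is multiplication by $f'(x)$.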
+
+\begin{lemma}
+\label{lemma-Garel-upstairs}
+There exists a unique rule that to every locally quasi-finite syntomic
+morphism of schemes $f : Y \to X$ assigns $\mathcal{O}_Y$-module maps
+$$
+c^p_{Y/X} :
+\Omega^p_{Y/\mathbf{Z}}
+\longrightarrow
+f^*\Omega^p_{X/\mathbf{Z}} \otimes_{\mathcal{O}_Y} \det(\NL_{Y/X})
+$$
+satisfying the following two properties
+\begin{enumerate}
+\item the composition with
+$f^*\Omega^p_{X/\mathbf{Z}} \to \Omega^p_{Y/\mathbf{Z}}$
+is multiplication by $\delta(\NL_{Y/X})$, and
+\item the rule is compatible with restriction to opens and with
+base change.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This proof is very similar to the proof of
+Discriminants, Proposition \ref{discriminant-proposition-tate-map}
+and we suggest the reader look at that proof first.
+We fix $p \geq 0$ throughout the proof.
+
+\medskip\noindent
+Let us reformulate the statement. Consider the category
+$\mathcal{C}$ whose objects, denoted $Y/X$, are locally quasi-finite syntomic
morphisms $f : Y \to X$ of schemes and whose morphisms
+$b/a : Y'/X' \to Y/X$ are commutative diagrams
+$$
+\xymatrix{
+Y' \ar[d]_{f'} \ar[r]_b & Y \ar[d]^f \\
+X' \ar[r]^a & X
+}
+$$
+which induce an isomorphism of $Y'$ with an open subscheme of
+$X' \times_X Y$. The lemma means that for every object
+$Y/X$ of $\mathcal{C}$ we have maps $c^p_{Y/X}$ with property (1)
+and for every morphism $b/a : Y'/X' \to Y/X$ of $\mathcal{C}$ we have
+$b^*c^p_{Y/X} = c^p_{Y'/X'}$ via the identifications
+$b^*\det(\NL_{Y/X}) = \det(\NL_{Y'/X'})$
+(Discriminants, Section \ref{discriminant-section-tate-map})
+and $b^*\Omega^p_{Y/X} = \Omega^p_{Y'/X'}$
+(Lemma \ref{lemma-base-change-de-rham}).
+
+\medskip\noindent
+Given $Y/X$ in $\mathcal{C}$ and $y \in Y$ we can find
+an affine open $V \subset Y$ and $U \subset X$ with $f(V) \subset U$
such that there exists a map
+$$
+\Omega^p_{Y/\mathbf{Z}}|_V
+\longrightarrow
+\left(
+f^*\Omega^p_{X/\mathbf{Z}} \otimes_{\mathcal{O}_Y} \det(\NL_{Y/X})
+\right)|_V
+$$
+with property (1). This follows
+from picking affine opens as in
+Discriminants, Lemma \ref{discriminant-lemma-syntomic-quasi-finite} part (5)
+and Remark \ref{remark-local-description}.
If $\Omega^p_{X/\mathbf{Z}}$ is finite locally free and the
annihilator of the section $\delta(\NL_{Y/X})$ is zero, then
these local maps are unique and automatically glue.
+
+\medskip\noindent
+Let $\mathcal{C}_{nice} \subset \mathcal{C}$ denote the full subcategory
+of $Y/X$ such that
+\begin{enumerate}
+\item $X$ is of finite type over $\mathbf{Z}$,
+\item $\Omega_{X/\mathbf{Z}}$ is locally free, and
+\item the annihilator of $\delta(\NL_{Y/X})$ is zero.
+\end{enumerate}
+By the remarks in the previous paragraph, we see that for any
+object $Y/X$ of $\mathcal{C}_{nice}$ we have a unique map
+$c^p_{Y/X}$ satisfying condition (1). If $b/a : Y'/X' \to Y/X$
+is a morphism of $\mathcal{C}_{nice}$, then
+$b^*c^p_{Y/X}$ is equal to $c^p_{Y'/X'}$ because
+$b^*\delta(\NL_{Y/X}) = \delta(\NL_{Y'/X'})$ (see
+Discriminants, Section \ref{discriminant-section-tate-map}).
+In other words, we have solved the problem
+on the full subcategory $\mathcal{C}_{nice}$. For $Y/X$ in $\mathcal{C}_{nice}$
+we continue to denote $c^p_{Y/X}$ the solution we've just found.
+
+\medskip\noindent
+Consider morphisms
+$$
+Y_1/X_1 \xleftarrow{b_1/a_1} Y/X \xrightarrow{b_2/a_2} Y_2/X_2
+$$
+in $\mathcal{C}$ such that $Y_1/X_1$ and $Y_2/X_2$ are objects
+of $\mathcal{C}_{nice}$.
+{\bf Claim.} $b_1^*c^p_{Y_1/X_1} = b_2^*c^p_{Y_2/X_2}$.
+We will first show that the claim implies the lemma
+and then we will prove the claim.
+
+\medskip\noindent
+Let $d, n \geq 1$ and consider the locally
+quasi-finite syntomic morphism $Y_{n, d} \to X_{n, d}$
+constructed in Discriminants, Example
+\ref{discriminant-example-universal-quasi-finite-syntomic}.
Then $X_{n, d}$ and $Y_{n, d}$ are irreducible schemes of finite type and
+smooth over $\mathbf{Z}$. Namely, $X_{n, d}$ is a spectrum of a
+polynomial ring over $\mathbf{Z}$ and $Y_{n, d}$ is an open subscheme
+of such. The morphism $Y_{n, d} \to X_{n, d}$ is locally quasi-finite syntomic
+and \'etale over a dense open, see Discriminants, Lemma
+\ref{discriminant-lemma-universal-quasi-finite-syntomic-etale}.
+Thus $\delta(\NL_{Y_{n, d}/X_{n, d}})$ is nonzero: for example we have
+the local description of $\delta(\NL_{Y/X})$ in
+Discriminants, Remark \ref{discriminant-remark-local-description-delta}
+and we have the local description of \'etale morphisms in
+Morphisms, Lemma \ref{morphisms-lemma-etale-at-point} part (8).
+Now a nonzero section of an invertible module over an irreducible
+regular scheme has vanishing annihilator. Thus
+$Y_{n, d}/X_{n, d}$ is an object of $\mathcal{C}_{nice}$.
+
+\medskip\noindent
+Let $Y/X$ be an arbitrary object of $\mathcal{C}$. Let $y \in Y$.
+By Discriminants, Lemma \ref{discriminant-lemma-locally-comes-from-universal}
+we can find $n, d \geq 1$ and morphisms
+$$
+Y/X \leftarrow V/U \xrightarrow{b/a} Y_{n, d}/X_{n, d}
+$$
+of $\mathcal{C}$ such that $V \subset Y$ and $U \subset X$ are open.
Thus we can pull back the canonical map $c^p_{Y_{n, d}/X_{n, d}}$
constructed above by $b$ to $V$. The claim guarantees these local
maps glue. Thus we get a well defined global map
$c^p_{Y/X}$ with property (1).
+If $b/a : Y'/X' \to Y/X$ is a morphism of $\mathcal{C}$, then
+the claim also implies that the similarly constructed map
+$c^p_{Y'/X'}$ is the pullback by $b$ of the locally constructed
+map $c^p_{Y/X}$. Thus it remains to prove the claim.
+
+\medskip\noindent
+In the rest of the proof we prove the claim. We may pick a point
+$y \in Y$ and prove the maps agree in an open neighbourhood of $y$.
+Thus we may replace $Y_1$, $Y_2$ by open neighbourhoods of the
+image of $y$ in $Y_1$ and $Y_2$. Thus we may assume
+$Y, X, Y_1, X_1, Y_2, X_2$ are affine.
+We may write $X = \lim X_\lambda$ as a cofiltered limit of affine
+schemes of finite type over $X_1 \times X_2$. For each $\lambda$
+we get
+$$
+Y_1 \times_{X_1} X_\lambda
+\quad\text{and}\quad
+X_\lambda \times_{X_2} Y_2
+$$
+If we take limits we obtain
+$$
+\lim Y_1 \times_{X_1} X_\lambda =
+Y_1 \times_{X_1} X \supset Y \subset
+X \times_{X_2} Y_2 = \lim X_\lambda \times_{X_2} Y_2
+$$
+By Limits, Lemma \ref{limits-lemma-descend-opens}
+we can find a $\lambda$ and opens
+$V_{1, \lambda} \subset Y_1 \times_{X_1} X_\lambda$ and
+$V_{2, \lambda} \subset X_\lambda \times_{X_2} Y_2$
+whose base change to $X$ recovers $Y$ (on both sides).
+After increasing $\lambda$ we may assume
+there is an isomorphism
+$V_{1, \lambda} \to V_{2, \lambda}$ whose base change to $X$ is the
+identity on $Y$, see
+Limits, Lemma \ref{limits-lemma-descend-finite-presentation}.
+Then we have the commutative diagram
+$$
+\xymatrix{
+& Y/X \ar[d] \ar[ld]_{b_1/a_1} \ar[rd]^{b_2/a_2} \\
+Y_1/X_1 & V_{1, \lambda}/X_\lambda \ar[l] \ar[r] & Y_2/X_2
+}
+$$
+Thus it suffices to prove the claim for the lower row
+of the diagram and we reduce to the case discussed in the
+next paragraph.
+
+\medskip\noindent
+Assume $Y, X, Y_1, X_1, Y_2, X_2$ are affine of finite type over $\mathbf{Z}$.
+Write $X = \Spec(A)$, $X_i = \Spec(A_i)$. The ring map $A_1 \to A$ corresponding
+to $X \to X_1$ is of finite type and hence we may choose a surjection
+$A_1[x_1, \ldots, x_n] \to A$. Similarly, we may choose a surjection
+$A_2[y_1, \ldots, y_m] \to A$. Set $X'_1 = \Spec(A_1[x_1, \ldots, x_n])$
+and $X'_2 = \Spec(A_2[y_1, \ldots, y_m])$. Observe that
+$\Omega_{X'_1/\mathbf{Z}}$ is the direct sum of the pullback of
+$\Omega_{X_1/\mathbf{Z}}$ and a finite free module.
+Similarly for $X'_2$. Set $Y'_1 = Y_1 \times_{X_1} X'_1$ and
+$Y'_2 = Y_2 \times_{X_2} X'_2$. We get the following diagram
+$$
+Y_1/X_1 \leftarrow
+Y'_1/X'_1 \leftarrow
+Y/X
+\rightarrow Y'_2/X'_2
+\rightarrow Y_2/X_2
+$$
+Since $X'_1 \to X_1$ and $X'_2 \to X_2$ are flat, the same is true
+for $Y'_1 \to Y_1$ and $Y'_2 \to Y_2$. It follows easily that the
+annihilators of $\delta(\NL_{Y'_1/X'_1})$ and $\delta(\NL_{Y'_2/X'_2})$
+are zero.
+Hence $Y'_1/X'_1$ and $Y'_2/X'_2$ are in $\mathcal{C}_{nice}$.
+Thus the outer morphisms in the displayed diagram are morphisms
+of $\mathcal{C}_{nice}$ for which we know the desired compatibilities.
+Thus it suffices to prove the claim for
+$Y'_1/X'_1 \leftarrow Y/X \rightarrow Y'_2/X'_2$. This reduces us
+to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $Y, X, Y_1, X_1, Y_2, X_2$ are affine of finite type over
+$\mathbf{Z}$ and $X \to X_1$ and $X \to X_2$ are closed immersions.
+Consider the open embeddings
+$Y_1 \times_{X_1} X \supset Y \subset X \times_{X_2} Y_2$.
+There is an open neighbourhood $V \subset Y$ of $y$ which is a
+standard open of both $Y_1 \times_{X_1} X$ and $X \times_{X_2} Y_2$.
+This follows from Schemes, Lemma \ref{schemes-lemma-standard-open-two-affines}
+applied to the scheme obtained by glueing $Y_1 \times_{X_1} X$ and
+$X \times_{X_2} Y_2$ along $Y$; details omitted.
+Since $X \times_{X_2} Y_2$ is a closed subscheme of $Y_2$
+we can find a standard open $V_2 \subset Y_2$ such that
+$V_2 \times_{X_2} X = V$. Similarly, we can find a standard open
+$V_1 \subset Y_1$ such that $V_1 \times_{X_1} X = V$.
+After replacing $Y, Y_1, Y_2$ by $V, V_1, V_2$ we reduce to the
+case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $Y, X, Y_1, X_1, Y_2, X_2$ are affine of finite type over
+$\mathbf{Z}$ and $X \to X_1$ and $X \to X_2$ are closed immersions
+and $Y_1 \times_{X_1} X = Y = X \times_{X_2} Y_2$.
+Write $X = \Spec(A)$, $X_i = \Spec(A_i)$, $Y = \Spec(B)$,
+$Y_i = \Spec(B_i)$. Then we can consider the affine schemes
+$$
+X' = \Spec(A_1 \times_A A_2) = \Spec(A')
+\quad\text{and}\quad
+Y' = \Spec(B_1 \times_B B_2) = \Spec(B')
+$$
+Observe that $X' = X_1 \amalg_X X_2$ and $Y' = Y_1 \amalg_Y Y_2$, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-basic-example-pushout}.
+By More on Algebra, Lemma \ref{more-algebra-lemma-fibre-product-finite-type}
+the rings $A'$ and $B'$ are of finite type over $\mathbf{Z}$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-module-over-fibre-product}
we have $B' \otimes_{A'} A_1 = B_1$ and $B' \otimes_{A'} A_2 = B_2$.
+In particular a fibre of $Y' \to X'$ over a point of
+$X' = X_1 \amalg_X X_2$ is always equal to either a fibre of $Y_1 \to X_1$
+or a fibre of $Y_2 \to X_2$. By More on Algebra, Lemma
+\ref{more-algebra-lemma-flat-module-over-fibre-product}
+the ring map $A' \to B'$ is flat. Thus by Discriminants, Lemma
+\ref{discriminant-lemma-syntomic-quasi-finite} part (3)
+we conclude that $Y'/X'$ is an object of $\mathcal{C}$.
+Consider now the commutative diagram
+$$
+\xymatrix{
+& Y/X \ar[ld]_{b_1/a_1} \ar[rd]^{b_2/a_2} \\
+Y_1/X_1 \ar[rd] & & Y_2/X_2 \ar[ld] \\
+& Y'/X'
+}
+$$
Now we would be done if $Y'/X'$ were an object of $\mathcal{C}_{nice}$,
but this is almost never the case: if it were, then pulling back $c^p_{Y'/X'}$
along the two sides of the square would give the desired conclusion.
+To get around the problem that $Y'/X'$ is not in $\mathcal{C}_{nice}$
+we note the arguments above show that, after possibly shrinking all
+of the schemes $X, Y, X_1, Y_1, X_2, Y_2, X', Y'$ we can find some
+$n, d \geq 1$, and extend the diagram like so:
+$$
+\xymatrix{
+& Y/X \ar[ld]_{b_1/a_1} \ar[rd]^{b_2/a_2} \\
+Y_1/X_1 \ar[rd] & & Y_2/X_2 \ar[ld] \\
+& Y'/X' \ar[d] \\
+& Y_{n, d}/X_{n, d}
+}
+$$
+and then we can use the already given argument by pulling
+back from $c^p_{Y_{n, d}/X_{n, d}}$. This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Trace maps on de Rham complexes}
+\label{section-trace}
+
+\noindent
+A reference for some of the material in this section is \cite{Garel}.
+Let $S$ be a scheme. Let $f : Y \to X$ be a finite locally free morphism
+of schemes over $S$. Then there is a trace map
+$\text{Trace}_f : f_*\mathcal{O}_Y \to \mathcal{O}_X$, see
+Discriminants, Section \ref{discriminant-section-discriminant}.
+In this situation a trace map on de Rham complexes is a map
+of complexes
+$$
+\Theta_{Y/X} : f_*\Omega^\bullet_{Y/S} \longrightarrow \Omega^\bullet_{X/S}
+$$
+such that $\Theta_{Y/X}$ is equal to $\text{Trace}_f$ in degree $0$
+and satisfies
+$$
+\Theta_{Y/X}(\omega \wedge \eta) = \omega \wedge \Theta_{Y/X}(\eta)
+$$
+for local sections $\omega$ of $\Omega^\bullet_{X/S}$ and $\eta$
+of $f_*\Omega^\bullet_{Y/S}$. It is not clear to us whether such a trace map
+$\Theta_{Y/X}$ exists for every finite locally free morphism $Y \to X$;
+please email
+\href{mailto:stacks.project@gmail.com}{stacks.project@gmail.com}
+if you have a counterexample or a proof.
+
+\begin{example}
+\label{example-no-trace}
Here is an example where we do not have a trace map on de Rham complexes.
Consider the $\mathbf{C}$-algebra $B = \mathbf{C}[x, y]$ with
+action of $G = \{\pm 1\}$ given by $x \mapsto -x$ and $y \mapsto -y$.
+The invariants $A = B^G$ form a normal domain of finite type over $\mathbf{C}$
+generated by $x^2, xy, y^2$. We claim that for the inclusion $A \subset B$
+there is no reasonable trace map
+$\Omega_{B/\mathbf{C}} \to \Omega_{A/\mathbf{C}}$
+on $1$-forms. Namely, consider the element
+$\omega = x \text{d} y \in \Omega_{B/\mathbf{C}}$.
+Since $\omega$ is invariant under the action of $G$ if a ``reasonable''
+trace map exists, then $2\omega$ should be in the image of
+$\Omega_{A/\mathbf{C}} \to \Omega_{B/\mathbf{C}}$. This is
not the case: there is no way to write $2\omega$ as a $B$-linear
combination of $\text{d}(x^2)$, $\text{d}(xy)$, and $\text{d}(y^2)$.
Indeed, by homogeneity (with $x$ and $y$ of degree $1$) the coefficients
would have to be constants, and then comparing the coefficients of
$\text{d}x$ and $\text{d}y$ yields a contradiction.
+This example contradicts the main theorem in
+\cite{Zannier}.
+\end{example}
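The impossibility asserted in the example is a finite linear algebra check:
by the homogeneity reduction the coefficients must be constants, so one only
has to see that a small linear system over $\mathbf{Q}$ is inconsistent. The
following Python sketch (an informal verification, not part of the formal
text) carries this out.

```python
from fractions import Fraction

# Informal check for the example: there are no constants b1, b2, b3 with
#   b1*d(x^2) + b2*d(x*y) + b3*d(y^2) = 2*x*dy
# (by homogeneity, polynomial coefficients would have to be constants).
# Comparing coefficients of dx and dy as polynomials in x, y gives the
# augmented linear system below in the unknowns (b1, b2, b3).
rows = [
    (2, 0, 0, 0),  # coefficient of x in the dx-component: 2*b1 = 0
    (0, 1, 0, 0),  # coefficient of y in the dx-component: b2 = 0
    (0, 1, 0, 2),  # coefficient of x in the dy-component: b2 = 2
    (0, 0, 2, 0),  # coefficient of y in the dy-component: 2*b3 = 0
]

def solvable(rows, ncols=3):
    """Gaussian elimination over Q; True iff the augmented system is consistent."""
    m = [[Fraction(c) for c in r] for r in rows]
    pivot_row = 0
    for col in range(ncols):
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col]), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        piv = m[pivot_row][col]
        m[pivot_row] = [c / piv for c in m[pivot_row]]
        for r in range(len(m)):
            if r != pivot_row and m[r][col]:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    # inconsistent iff some reduced row reads 0 = nonzero
    return not any(all(c == 0 for c in row[:ncols]) and row[ncols] != 0
                   for row in m)

print(solvable(rows))  # False: 2*x*dy is not such a combination
```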
+
+\begin{lemma}
+\label{lemma-Garel}
+There exists a unique rule that to every finite syntomic
+morphism of schemes $f : Y \to X$ assigns $\mathcal{O}_X$-module maps
+$$
+\Theta^p_{Y/X} :
+f_*\Omega^p_{Y/\mathbf{Z}}
+\longrightarrow
+\Omega^p_{X/\mathbf{Z}}
+$$
+satisfying the following properties
+\begin{enumerate}
+\item the composition with
+$\Omega^p_{X/\mathbf{Z}} \otimes_{\mathcal{O}_X} f_*\mathcal{O}_Y
+\to f_*\Omega^p_{Y/\mathbf{Z}}$ is equal to
+$\text{id} \otimes \text{Trace}_f$
+where $\text{Trace}_f : f_*\mathcal{O}_Y \to \mathcal{O}_X$
+is the map from
+Discriminants, Section \ref{discriminant-section-discriminant},
+\item the rule is compatible with base change.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First, assume that $X$ is locally Noetherian. By
+Lemma \ref{lemma-Garel-upstairs} we have a canonical map
+$$
c^p_{Y/X} : \Omega^p_{Y/\mathbf{Z}}
\longrightarrow
f^*\Omega^p_{X/\mathbf{Z}} \otimes_{\mathcal{O}_Y} \det(\NL_{Y/X})
+$$
+By Discriminants, Proposition \ref{discriminant-proposition-tate-map}
+we have a canonical isomorphism
+$$
+c_{Y/X} : \det(\NL_{Y/X}) \to \omega_{Y/X}
+$$
+mapping $\delta(\NL_{Y/X})$ to $\tau_{Y/X}$. Combined these maps give
+$$
c^p_{Y/X} \otimes c_{Y/X} :
\Omega^p_{Y/\mathbf{Z}}
\longrightarrow
f^*\Omega^p_{X/\mathbf{Z}} \otimes_{\mathcal{O}_Y} \omega_{Y/X}
+$$
+By Discriminants, Section \ref{discriminant-section-finite-morphisms}
+this is the same thing as a map
+$$
\Theta_{Y/X}^p :
f_*\Omega^p_{Y/\mathbf{Z}}
\longrightarrow
\Omega^p_{X/\mathbf{Z}}
+$$
+Recall that the relationship between $c^p_{Y/X} \otimes c_{Y/X}$
+and $\Theta_{Y/X}^p$ uses the evaluation map
+$f_*\omega_{Y/X} \to \mathcal{O}_X$
+which sends $\tau_{Y/X}$ to $\text{Trace}_f(1)$, see
+Discriminants, Section \ref{discriminant-section-finite-morphisms}.
+Hence property (1) holds. Property (2) holds for base changes by
+$X' \to X$ with $X'$ locally Noetherian because both $c^p_{Y/X}$ and
+$c_{Y/X}$ are compatible with such base changes. For $f : Y \to X$
+finite syntomic and $X$ locally Noetherian,
+we will continue to denote $\Theta^p_{Y/X}$ the solution we've just found.
+
+\medskip\noindent
+Uniqueness. Suppose that we have a finite syntomic morphism
+$f: Y \to X$ such that $X$ is smooth over $\Spec(\mathbf{Z})$
+and $f$ is \'etale over a dense open of $X$. We claim that
+in this case $\Theta^p_{Y/X}$ is uniquely determined by property (1).
+Namely, consider the maps
+$$
+\Omega^p_{X/\mathbf{Z}} \otimes_{\mathcal{O}_X} f_*\mathcal{O}_Y \to
+f_*\Omega^p_{Y/\mathbf{Z}} \to
+\Omega^p_{X/\mathbf{Z}}
+$$
+The sheaf $\Omega^p_{X/\mathbf{Z}}$ is torsion free (by the assumed
+smoothness), hence it suffices to check that the restriction of
+$\Theta^p_{Y/X}$ is uniquely determined over the dense open over
+which $f$ is \'etale, i.e., we may assume $f$ is \'etale.
+However, if $f$ is \'etale, then
+$f^*\Omega_{X/\mathbf{Z}} = \Omega_{Y/\mathbf{Z}}$
+hence the first arrow in the displayed equation is an isomorphism.
+Since we've pinned down the composition, this guarantees uniqueness.
+
+\medskip\noindent
+Let $f : Y \to X$ be a finite syntomic morphism of locally Noetherian schemes.
+Let $x \in X$. By Discriminants, Lemma
+\ref{discriminant-lemma-locally-comes-from-universal-finite}
+we can find $d \geq 1$ and a commutative diagram
+$$
+\xymatrix{
+Y \ar[d] &
+V \ar[d] \ar[l] \ar[r] &
+V_d \ar[d] \\
+X &
+U \ar[l] \ar[r] &
+U_d
+}
+$$
+such that $x \in U \subset X$ is open, $V = f^{-1}(U)$
+and $V = U \times_{U_d} V_d$. Thus $\Theta^p_{Y/X}|_V$
+is the pullback of the map $\Theta^p_{V_d/U_d}$.
+However, by the discussion on uniqueness above and
+Discriminants, Lemmas
+\ref{discriminant-lemma-universal-finite-syntomic-smooth} and
+\ref{discriminant-lemma-universal-finite-syntomic-etale}
+the map $\Theta^p_{V_d/U_d}$ is uniquely determined
+by the requirement (1). Hence uniqueness holds.
+
+\medskip\noindent
+At this point we know that we have existence and uniqueness
+for all finite syntomic morphisms $Y \to X$ with $X$ locally Noetherian.
We could now give an argument similar to the proof of
Lemma \ref{lemma-Garel-upstairs} to extend to general $X$.
However, instead it is possible to use absolute Noetherian approximation
directly to finish the proof. Namely, to construct $\Theta^p_{Y/X}$
+it suffices to do so Zariski locally on $X$ (provided we also
+show the uniqueness). Hence we may assume $X$ is affine (small
+detail omitted). Then we can write $X = \lim_{i \in I} X_i$
+as the limit over a directed set $I$ of Noetherian affine schemes.
+By Algebra, Lemma \ref{algebra-lemma-colimit-category-fp-algebras}
+we can find $0 \in I$ and a finitely
+presented morphism of affines $f_0 : Y_0 \to X_0$ whose base change to
+$X$ is $Y \to X$. After increasing $0$ we may assume $Y_0 \to X_0$
+is finite and syntomic, see
Algebra, Lemmas \ref{algebra-lemma-colimit-lci} and
+\ref{algebra-lemma-colimit-finite}. For $i \geq 0$ also the
+base change $f_i : Y_i = Y_0 \times_{X_0} X_i \to X_i$ is finite syntomic.
+Then
+$$
+\Gamma(X, f_*\Omega^p_{Y/\mathbf{Z}}) =
+\Gamma(Y, \Omega^p_{Y/\mathbf{Z}}) =
+\colim_{i \geq 0} \Gamma(Y_i, \Omega^p_{Y_i/\mathbf{Z}}) =
+\colim_{i \geq 0} \Gamma(X_i, f_{i, *}\Omega^p_{Y_i/\mathbf{Z}})
+$$
+Hence we can (and are forced to) define $\Theta^p_{Y/X}$ as the colimit
+of the maps $\Theta^p_{Y_i/X_i}$. This map is compatible with any
+cartesian diagram
+$$
+\xymatrix{
+Y' \ar[r] \ar[d] & Y \ar[d] \\
+X' \ar[r] & X
+}
+$$
+with $X'$ affine as we know this for the case of Noetherian affine schemes
+by the arguments given above (small detail omitted; hint: if we also
+write $X' = \lim_{j \in J} X'_j$ then for every $i \in I$ there is a $j \in J$
+and a morphism $X'_j \to X_i$ compatible with the morphism $X' \to X$).
+This finishes the proof.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-Garel}
+\begin{reference}
+\cite{Garel}
+\end{reference}
+Let $f : Y \to X$ be a finite syntomic morphism of schemes.
+The maps $\Theta^p_{Y/X}$ of Lemma \ref{lemma-Garel} define a map of complexes
+$$
+\Theta_{Y/X} :
+f_*\Omega^\bullet_{Y/\mathbf{Z}}
+\longrightarrow
+\Omega^\bullet_{X/\mathbf{Z}}
+$$
+with the following properties
+\begin{enumerate}
+\item in degree $0$ we get
+$\text{Trace}_f : f_*\mathcal{O}_Y \to \mathcal{O}_X$, see
+Discriminants, Section \ref{discriminant-section-discriminant},
+\item we have
+$\Theta_{Y/X}(\omega \wedge \eta) = \omega \wedge \Theta_{Y/X}(\eta)$
+for $\omega$ in $\Omega^\bullet_{X/\mathbf{Z}}$ and $\eta$
+in $f_*\Omega^\bullet_{Y/\mathbf{Z}}$,
+\item if $f$ is a morphism over a base scheme $S$, then
+$\Theta_{Y/X}$ induces a map of complexes
+$f_*\Omega^\bullet_{Y/S} \to \Omega^\bullet_{X/S}$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+By Discriminants, Lemma
+\ref{discriminant-lemma-locally-comes-from-universal-finite}
+for every $x \in X$ we can find $d \geq 1$ and a commutative diagram
+$$
+\xymatrix{
+Y \ar[d] &
+V \ar[d] \ar[l] \ar[r] &
+V_d \ar[d] \ar[r] &
+Y_d = \Spec(B_d) \ar[d] \\
+X &
+U \ar[l] \ar[r] &
+U_d \ar[r] &
+X_d = \Spec(A_d)
+}
+$$
+such that $x \in U \subset X$ is affine open, $V = f^{-1}(U)$
+and $V = U \times_{U_d} V_d$. Write $U = \Spec(A)$ and $V = \Spec(B)$
+and observe that $B = A \otimes_{A_d} B_d$ and recall that
+$B_d = A_d e_1 \oplus \ldots \oplus A_d e_d$. Suppose we have
+$a_1, \ldots, a_r \in A$ and $b_1, \ldots, b_s \in B$.
We may write $b_j = \sum a_{j, l} e_l$ with $a_{j, l} \in A$.
+Set $N = r + sd$ and consider the factorizations
+$$
+\xymatrix{
+V \ar[r] \ar[d] &
+V' = \mathbf{A}^N \times V_d \ar[r] \ar[d] &
+V_d \ar[d] \\
+U \ar[r]&
+U' = \mathbf{A}^N \times U_d \ar[r] &
+U_d
+}
+$$
Here the lower left horizontal arrow $U \to U'$ is determined by the morphism
+$U \to U_d$ (from the earlier diagram) and the morphism
+$U \to \mathbf{A}^N$ given by $a_1, \ldots, a_r, a_{1, 1}, \ldots, a_{s, d}$.
+Then we see that the functions $a_1, \ldots, a_r$ are in the image of
+$\Gamma(U', \mathcal{O}_{U'}) \to \Gamma(U, \mathcal{O}_U)$
+and the functions $b_1, \ldots, b_s$ are in the image of
+$\Gamma(V', \mathcal{O}_{V'}) \to \Gamma(V, \mathcal{O}_V)$.
+In this way we see that for any finite collection of elements\footnote{After
all, these elements will be finite sums of elements of the form
+$a_0 \text{d}a_1 \wedge \ldots \wedge \text{d}a_i$ with
+$a_0, \ldots, a_i \in A$ or finite sums of elements of the form
+$b_0 \text{d}b_1 \wedge \ldots \wedge \text{d}b_j$ with
+$b_0, \ldots, b_j \in B$.} of the groups
+$$
+\Gamma(V, \Omega^i_{Y/\mathbf{Z}}),\quad i = 0, 1, 2, \ldots
+\quad\text{and}\quad
+\Gamma(U, \Omega^j_{X/\mathbf{Z}}),\quad j = 0, 1, 2, \ldots
+$$
we can find factorizations $V \to V' \to V_d$ and
+$U \to U' \to U_d$ with $V' = \mathbf{A}^N \times V_d$ and
+$U' = \mathbf{A}^N \times U_d$ as above
+such that these sections are the pullbacks of sections from
+$$
+\Gamma(V', \Omega^i_{V'/\mathbf{Z}}),\quad i = 0, 1, 2, \ldots
+\quad\text{and}\quad
+\Gamma(U', \Omega^j_{U'/\mathbf{Z}}),\quad j = 0, 1, 2, \ldots
+$$
+The upshot of this is that to check
+$\text{d} \circ \Theta_{Y/X} = \Theta_{Y/X} \circ \text{d}$
+it suffices to check this is true for $\Theta_{V'/U'}$.
Similarly for property (2) of the proposition.
+
+\medskip\noindent
+By Discriminants, Lemmas
+\ref{discriminant-lemma-universal-finite-syntomic-smooth} and
+\ref{discriminant-lemma-universal-finite-syntomic-etale}
+the scheme $U_d$ is smooth and the morphism $V_d \to U_d$
+is \'etale over a dense open of $U_d$.
+Hence the same is true for the morphism
+$V' \to U'$. Since $\Omega_{U'/\mathbf{Z}}$ is locally free and hence
+$\Omega^p_{U'/\mathbf{Z}}$ is torsion
+free, it suffices to check the desired relations
+after restricting to the open over which $V'$ is finite \'etale.
+Then we may check the relations after a surjective \'etale
+base change. Hence we may split the finite \'etale cover
+and assume we are looking at a morphism of the form
+$$
+\coprod\nolimits_{i = 1, \ldots, d} W \longrightarrow W
+$$
+with $W$ smooth over $\mathbf{Z}$.
+In this case any local properties of our construction are trivial to check
+(provided they are true). This finishes the proof of (1) and (2).
+
+\medskip\noindent
+Finally, we observe that (3) follows from (2) because $\Omega_{Y/S}$
+is the quotient of $\Omega_{Y/\mathbf{Z}}$ by the submodule
+generated by pullbacks of local sections of $\Omega_{S/\mathbf{Z}}$.
+\end{proof}
+
+\begin{example}
+\label{example-Garel}
+Let $A$ be a ring. Let $f = x^d + \sum_{0 \leq i < d} a_{d - i} x^i \in A[x]$.
+Let $B = A[x]/(f)$. By Proposition \ref{proposition-Garel}
+we have a morphism of complexes
+$$
+\Theta_{B/A} : \Omega^\bullet_B \longrightarrow \Omega^\bullet_A
+$$
+In particular, if $t \in B$ denotes the image of $x \in A[x]$
+we can consider the elements
+$$
+\Theta_{B/A}(t^i\text{d}t) \in \Omega^1_A,\quad i = 0, \ldots, d - 1
+$$
+What are these elements? By the same principle as used in the proof of
+Proposition \ref{proposition-Garel} it suffices to compute this
+in the universal case, i.e., when $A = \mathbf{Z}[a_1, \ldots, a_d]$
+or even when $A$ is replaced by the fraction field
+$\mathbf{Q}(a_1, \ldots, a_d)$. Writing symbolically
+$$
+f = \prod\nolimits_{i = 1, \ldots, d} (x - \alpha_i)
+$$
+we see that over $\mathbf{Q}(\alpha_1, \ldots, \alpha_d)$
+the algebra $B$ becomes split:
+$$
\mathbf{Q}(a_1, \ldots, a_d)[x]/(f)
+\longrightarrow
+\prod\nolimits_{i = 1, \ldots, d} \mathbf{Q}(\alpha_1, \ldots, \alpha_d),
+\quad
+t \longmapsto (\alpha_1, \ldots, \alpha_d)
+$$
+Thus for example
+$$
+\Theta(\text{d}t) = \sum \text{d} \alpha_i = - \text{d}a_1
+$$
+Next, we have
+$$
+\Theta(t\text{d}t) = \sum \alpha_i \text{d}\alpha_i =
+a_1 \text{d} a_1 - \text{d}a_2
+$$
+Next, we have
+$$
+\Theta(t^2\text{d}t) = \sum \alpha_i^2 \text{d}\alpha_i =
+- a_1^2 \text{d} a_1 + a_1 \text{d}a_2 + a_2 \text{d}a_1 - \text{d}a_3
+$$
+(modulo calculation error), and so on. This suggests that
+if $f(x) = x^d - a$ then
+$$
+\Theta_{B/A}(t^i\text{d}t) =
+\left\{
+\begin{matrix}
+0 & \text{if} & i = 0, \ldots, d - 2 \\
+\text{d}a & \text{if} & i = d - 1
+\end{matrix}
+\right.
+$$
in $\Omega_A$. This is true: in this particular case one can do
the calculation for the extension $\mathbf{Q}(a)[x]/(x^d - a)$
of $\mathbf{Q}(a)$ to verify this directly.
+\end{example}
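In the split case the displayed formulas amount to identities between the
differentials $\text{d}\alpha_j$ and $\text{d}a_k$, i.e., between gradients
with respect to the roots $\alpha_j$. The following Python sketch (an
informal check at a sample point for $d = 3$, not part of the formal text)
verifies the three formulas above with exact rational arithmetic.

```python
from fractions import Fraction
from math import prod

# Check the formulas Theta(t^i dt) in the split case d = 3, i.e. the
# identities sum_j alpha_j^i d(alpha_j) = (combination of d a_k), by
# comparing gradients with respect to the roots alpha_j at a sample point.
alpha = (Fraction(2), Fraction(3), Fraction(5))
e1 = sum(alpha)
e2 = sum(alpha[i] * alpha[j] for i in range(3) for j in range(i + 1, 3))
e3 = prod(alpha)
a1, a2, a3 = -e1, e2, -e3  # f = x^3 + a1 x^2 + a2 x + a3 = prod (x - alpha_j)

# gradients of a1, a2, a3 with respect to alpha_0, alpha_1, alpha_2
grad_a1 = [Fraction(-1)] * 3
grad_a2 = [e1 - alpha[j] for j in range(3)]
grad_a3 = [-prod(alpha[k] for k in range(3) if k != j) for j in range(3)]

for j in range(3):
    # Theta(dt) = -d a1
    assert 1 == -grad_a1[j]
    # Theta(t dt) = a1 d a1 - d a2
    assert alpha[j] == a1 * grad_a1[j] - grad_a2[j]
    # Theta(t^2 dt) = -a1^2 d a1 + a1 d a2 + a2 d a1 - d a3
    assert alpha[j] ** 2 == (-a1 ** 2 * grad_a1[j] + a1 * grad_a2[j]
                             + a2 * grad_a1[j] - grad_a3[j])
print("identities verified")
```

The assertions compare, coefficient by coefficient in the $\text{d}\alpha_j$,
both sides of each displayed formula viewed in
$\Omega_{\mathbf{Q}(\alpha_1, \alpha_2, \alpha_3)/\mathbf{Q}}$.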
+
+\begin{lemma}
+\label{lemma-Garel-map-frobenius-smooth-char-p}
+Let $p$ be a prime number. Let $X \to S$ be a smooth morphism
+of relative dimension $d$ of schemes in characteristic $p$.
+The relative Frobenius $F_{X/S} : X \to X^{(p)}$ of $X/S$
+(Varieties, Definition \ref{varieties-definition-relative-frobenius})
+is finite syntomic and the corresponding map
+$$
+\Theta_{X/X^{(p)}} :
+F_{X/S, *}\Omega^\bullet_{X/S} \to \Omega^\bullet_{X^{(p)}/S}
+$$
+is zero in all degrees except in degree $d$ where it defines a
+surjection.
+\end{lemma}
+
+\begin{proof}
+Observe that $F_{X/S}$ is a finite morphism by
+Varieties, Lemma \ref{varieties-lemma-relative-frobenius-finite}.
+To prove that $F_{X/S}$ is flat, it suffices to show that
+the morphism $F_{X/S, s} : X_s \to X^{(p)}_s$ between fibres
+is flat for all $s \in S$, see More on Morphisms, Theorem
+\ref{more-morphisms-theorem-criterion-flatness-fibre}.
+Flatness of $X_s \to X^{(p)}_s$ follows from
+Algebra, Lemma \ref{algebra-lemma-CM-over-regular-flat}
+(and the finiteness already shown).
+By More on Morphisms, Lemma
+\ref{more-morphisms-lemma-lci-permanence}
+the morphism $F_{X/S}$ is a local complete intersection morphism.
+Hence $F_{X/S}$ is finite syntomic (see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-flat-lci}).
+
+\medskip\noindent
+For every point $x \in X$ we may choose a commutative diagram
+$$
+\xymatrix{
+X \ar[d] & U \ar[l] \ar[d]_\pi \\
+S & \mathbf{A}^d_S \ar[l]
+}
+$$
+where $\pi$ is \'etale and $x \in U$ is open in $X$, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-etale-over-affine-space}.
+Observe that
+$\mathbf{A}^d_S \to \mathbf{A}^d_S$, $(x_1, \ldots, x_d) \mapsto
(x_1^p, \ldots, x_d^p)$ is the relative Frobenius for $\mathbf{A}^d_S$
+over $S$. The commutative diagram
+$$
+\xymatrix{
+U \ar[d]_\pi \ar[r]_{F_{X/S}} & U^{(p)} \ar[d]^{\pi^{(p)}} \\
+\mathbf{A}^d_S \ar[r]^{x_i \mapsto x_i^p} & \mathbf{A}^d_S
+}
+$$
+of
+Varieties, Lemma \ref{varieties-lemma-relative-frobenius-endomorphism-identity}
+for $\pi : U \to \mathbf{A}^d_S$ is cartesian by
+\'Etale Morphisms, Lemma
+\ref{etale-lemma-relative-frobenius-etale}.
+Since the construction of $\Theta$ is compatible with base change
+and since $\Omega_{U/S} = \pi^*\Omega_{\mathbf{A}^d_S/S}$
+(Lemma \ref{lemma-etale})
+we conclude that it suffices to show the lemma for
+$\mathbf{A}^d_S$.
+
+\medskip\noindent
+Let $A$ be a ring of characteristic $p$. Consider the unique $A$-algebra
+homomorphism $A[y_1, \ldots, y_d] \to A[x_1, \ldots, x_d]$
+sending $y_i$ to $x_i^p$. The arguments above
+reduce us to computing the map
+$$
+\Theta^i : \Omega^i_{A[x_1, \ldots, x_d]/A} \to
+\Omega^i_{A[y_1, \ldots, y_d]/A}
+$$
+We urge the reader to do the computation in this case for themselves.
+As in Example \ref{example-Garel} we may reduce this to computing
+a formula for $\Theta^i$ in the universal case
+$$
+\mathbf{Z}[y_1, \ldots, y_d] \to \mathbf{Z}[x_1, \ldots, x_d],\quad
+y_i \mapsto x_i^p
+$$
+In turn, we can find the formula for $\Theta^i$ by computing in the complex
+case, i.e., for the $\mathbf{C}$-algebra map
+$$
+\mathbf{C}[y_1, \ldots, y_d] \to \mathbf{C}[x_1, \ldots, x_d],\quad
+y_i \mapsto x_i^p
+$$
+We may even invert $x_1, \ldots, x_d$ and $y_1, \ldots, y_d$.
+In this case, we have $\text{d}x_i = p^{-1} x_i^{- p + 1}\text{d}y_i$.
+Hence we see that
+\begin{align*}
+\Theta^i(
+x_1^{e_1} \ldots x_d^{e_d} \text{d}x_1 \wedge \ldots \wedge \text{d}x_i)
+& =
+p^{-i} \Theta^i(
+x_1^{e_1 - p + 1} \ldots x_i^{e_i - p + 1} x_{i + 1}^{e_{i + 1}} \ldots
+x_d^{e_d} \text{d}y_1 \wedge \ldots \wedge \text{d}y_i ) \\
+& =
+p^{-i} \text{Trace}(x_1^{e_1 - p + 1} \ldots x_i^{e_i - p + 1}
+x_{i + 1}^{e_{i + 1}} \ldots x_d^{e_d})
+\text{d}y_1 \wedge \ldots \wedge \text{d}y_i
+\end{align*}
+by the properties of $\Theta^i$. An elementary computation shows
+that the trace in the expression above is zero unless
+$e_1, \ldots, e_i$ are congruent to $-1$ modulo $p$
+and $e_{i + 1}, \ldots, e_d$ are divisible by $p$.
+Moreover, in this case we obtain
+$$
+p^{d - i} y_1^{(e_1 - p + 1)/p} \ldots y_i^{(e_i - p + 1)/p}
+y_{i + 1}^{e_{i + 1}/p} \ldots y_d^{e_d/p}
+\text{d}y_1 \wedge \ldots \wedge \text{d}y_i
+$$
+We conclude that we get zero in characteristic $p$ unless $d = i$
+and in this case we get every possible $d$-form.
+\end{proof}
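The key computation in the proof above is the trace of a monomial for the
extension given by $y_i = x_i^p$. In one variable this can be checked
mechanically: the following Python sketch (an informal verification, not
part of the formal text) computes $\text{Trace}(x^m)$ on the basis
$1, x, \ldots, x^{p - 1}$ with $x^p = y$, allowing negative powers of $y$
since the variables were inverted in the proof, and confirms that
$\Theta^1(x^e\text{d}x) = p^{-1}\text{Trace}(x^{e - p + 1})\text{d}y$
equals $y^{(e - p + 1)/p}\text{d}y$ when $e \equiv -1 \bmod p$
and $0$ otherwise.

```python
# Trace of multiplication by x^m on the basis 1, x, ..., x^{p-1} of the
# extension given by x^p = y; returned as {exponent of y: coefficient}.
# Negative m is allowed (Laurent polynomials, variables inverted).
def trace_power(p, m):
    tr = {}
    for j in range(p):
        q, r = divmod(j + m, p)  # x^(j+m) = y^q * x^r with 0 <= r < p
        if r == j:               # diagonal contribution to the trace
            tr[q] = tr.get(q, 0) + 1
    return tr

p = 5
for e in range(0, 3 * p):
    tr = trace_power(p, e - p + 1)
    if e % p == p - 1:
        # Trace(x^{e-p+1}) = p * y^{(e-p+1)/p}, so after dividing by p
        # we get Theta^1(x^e dx) = y^{(e-p+1)/p} dy
        assert tr == {(e - p + 1) // p: p}
    else:
        # otherwise the trace, hence Theta^1(x^e dx), vanishes
        assert tr == {}
print("char-p trace formula checked for p =", p)
```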
+
+
+
+
+
+
+
+
+\section{Poincar\'e duality}
+\label{section-poincare-duality}
+
+\noindent
In this section we prove Poincar\'e duality for the de Rham cohomology
+of a proper smooth scheme over a field. Let us first explain how this
+works for Hodge cohomology.
+
+\begin{lemma}
+\label{lemma-duality-hodge}
+Let $k$ be a field. Let $X$ be a nonempty smooth proper scheme over $k$
+equidimensional of dimension $d$. There exists a $k$-linear map
+$$
+t : H^d(X, \Omega^d_{X/k}) \longrightarrow k
+$$
+unique up to precomposing by multiplication by a unit of
+$H^0(X, \mathcal{O}_X)$ with the following property: for all $p, q$ the pairing
+$$
+H^q(X, \Omega^p_{X/k}) \times H^{d - q}(X, \Omega^{d - p}_{X/k})
+\longrightarrow
+k, \quad
+(\xi, \xi') \longmapsto t(\xi \cup \xi')
+$$
+is perfect.
+\end{lemma}
+
+\begin{proof}
+By Duality for Schemes, Lemma \ref{duality-lemma-duality-proper-over-field}
+we have $\omega_X^\bullet = \Omega^d_{X/k}[d]$.
+Since $\Omega_{X/k}$ is locally free of rank $d$
+(Morphisms, Lemma \ref{morphisms-lemma-smooth-omega-finite-locally-free})
+we have
+$$
+\Omega^d_{X/k} \otimes_{\mathcal{O}_X} (\Omega^p_{X/k})^\vee
+\cong
+\Omega^{d - p}_{X/k}
+$$
+Thus we obtain a $k$-linear map $t : H^d(X, \Omega^d_{X/k}) \to k$
+such that the statement is true by Duality for Schemes, Lemma
+\ref{duality-lemma-duality-proper-over-field-perfect}.
+In particular the pairing
+$H^0(X, \mathcal{O}_X) \times H^d(X, \Omega^d_{X/k}) \to k$
+is perfect, which implies that any $k$-linear map
+$t' : H^d(X, \Omega^d_{X/k}) \to k$ is of the form
+$\xi \mapsto t(g\xi)$ for some $g \in H^0(X, \mathcal{O}_X)$.
+Of course, in order for $t'$ to still produce a duality
+between $H^0(X, \mathcal{O}_X)$ and $H^d(X, \Omega^d_{X/k})$
+we need $g$ to be a unit. Denote $\langle -, - \rangle_{p, q}$
+the pairing constructed using $t$ and denote $\langle -, - \rangle'_{p, q}$
+the pairing constructed using $t'$. Clearly we have
+$$
+\langle \xi, \xi' \rangle'_{p, q} =
+\langle g\xi, \xi' \rangle_{p, q}
+$$
+for $\xi \in H^q(X, \Omega^p_{X/k})$ and
+$\xi' \in H^{d - q}(X, \Omega^{d - p}_{X/k})$. Since $g$ is a unit, i.e.,
+invertible, we see that using $t'$ instead of $t$ we still get perfect
+pairings for all $p, q$.
+\end{proof}
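As an illustration (a standard special case, recorded here only for
orientation and not used in what follows), take $X = \mathbf{P}^d_k$.
The Hodge cohomology of projective space is

```latex
H^q(\mathbf{P}^d_k, \Omega^p_{\mathbf{P}^d_k/k}) =
\left\{
\begin{matrix}
k & \text{if} & 0 \leq p = q \leq d \\
0 & \text{otherwise}
\end{matrix}
\right.
```

and the group for $p = q$ is generated by the $p$-th cup power $c^p$ of the
class $c \in H^1(\mathbf{P}^d_k, \Omega^1_{\mathbf{P}^d_k/k})$ of
$\mathcal{O}_{\mathbf{P}^d_k}(1)$. Since
$t(c^p \cup c^{d - p}) = t(c^d) \not= 0$ for any $t$ as in the lemma,
the pairings are visibly perfect in this case.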
+
+\begin{lemma}
+\label{lemma-bottom-part-degenerates}
+Let $k$ be a field. Let $X$ be a smooth proper scheme over $k$. The map
+$$
+\text{d} : H^0(X, \mathcal{O}_X) \to H^0(X, \Omega^1_{X/k})
+$$
+is zero.
+\end{lemma}
+
+\begin{proof}
+Since $X$ is smooth over $k$ it is geometrically reduced over $k$, see
+Varieties, Lemma \ref{varieties-lemma-smooth-geometrically-normal}.
+Hence $H^0(X, \mathcal{O}_X) = \prod k_i$
+is a finite product of finite separable
+field extensions $k_i/k$, see Varieties, Lemma
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections}.
+It follows that $\Omega_{H^0(X, \mathcal{O}_X)/k} = \prod \Omega_{k_i/k} = 0$
+(see for example Algebra, Lemma
+\ref{algebra-lemma-characterize-separable-algebraic-field-extensions}).
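+For instance, if $a \in k_i$ has (separable) minimal polynomial $P$
+over $k$, then applying $\text{d}$ to the relation $P(a) = 0$ gives
+$$
+0 = \text{d}(P(a)) = P'(a) \, \text{d}a \quad\text{in } \Omega_{k_i/k}
+$$
+and $P'(a)$ is nonzero, hence invertible in the field $k_i$, by
+separability; thus $\text{d}a = 0$.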
+Since the map of the lemma factors as
+$$
+H^0(X, \mathcal{O}_X) \to
+\Omega_{H^0(X, \mathcal{O}_X)/k} \to
+H^0(X, \Omega_{X/k})
+$$
+by functoriality of the de Rham complex
+(see Section \ref{section-de-rham-complex}), we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-top-part-degenerates}
+Let $k$ be a field. Let $X$ be a smooth proper scheme over $k$
+equidimensional of dimension $d$. The map
+$$
+\text{d} : H^d(X, \Omega^{d - 1}_{X/k}) \to H^d(X, \Omega^d_{X/k})
+$$
+is zero.
+\end{lemma}
+
+\begin{proof}
+It is tempting to think this follows from a combination of
+Lemmas \ref{lemma-bottom-part-degenerates} and \ref{lemma-duality-hodge}.
+However this doesn't work because the maps $\mathcal{O}_X \to \Omega^1_{X/k}$
+and $\Omega^{d - 1}_{X/k} \to \Omega^d_{X/k}$ are not $\mathcal{O}_X$-linear
+and hence we cannot use the functoriality discussed in
+Duality for Schemes, Remark
+\ref{duality-remark-coherent-duality-proper-over-field}
+to conclude the map in Lemma \ref{lemma-bottom-part-degenerates}
+is dual to the one in this lemma.
+
+\medskip\noindent
+We may replace $X$ by a connected component of $X$. Hence we may assume
+$X$ is irreducible. By
+Varieties, Lemmas \ref{varieties-lemma-smooth-geometrically-normal} and
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections}
+we see that $k' = H^0(X, \mathcal{O}_X)$ is a finite separable
+extension $k'/k$. Since $\Omega_{k'/k} = 0$
+(see for example Algebra, Lemma
+\ref{algebra-lemma-characterize-separable-algebraic-field-extensions})
+we see that $\Omega_{X/k} = \Omega_{X/k'}$
+(see Morphisms, Lemma \ref{morphisms-lemma-triangle-differentials}).
+Thus we may replace $k$ by $k'$ and assume that $H^0(X, \mathcal{O}_X) = k$.
+
+\medskip\noindent
+Assume $H^0(X, \mathcal{O}_X) = k$. We conclude that
+$\dim H^d(X, \Omega^d_{X/k}) = 1$ by Lemma \ref{lemma-duality-hodge}.
+Assume first that the characteristic of $k$ is a prime number $p$.
+Denote $F_{X/k} : X \to X^{(p)}$ the relative Frobenius of $X$ over $k$;
+please keep in mind the facts proved about this morphism in
+Lemma \ref{lemma-Garel-map-frobenius-smooth-char-p}.
+Consider the commutative diagram
+$$
+\xymatrix{
+H^d(X, \Omega^{d - 1}_{X/k}) \ar[d] \ar[r] &
+H^d(X^{(p)}, F_{X/k, *}\Omega^{d - 1}_{X/k}) \ar[d] \ar[r]_{\Theta^{d - 1}} &
+H^d(X^{(p)}, \Omega^{d - 1}_{X^{(p)}/k}) \ar[d] \\
+H^d(X, \Omega^d_{X/k}) \ar[r] &
+H^d(X^{(p)}, F_{X/k, *}\Omega^d_{X/k}) \ar[r]^{\Theta^d} &
+H^d(X^{(p)}, \Omega^d_{X^{(p)}/k})
+}
+$$
+The left two horizontal arrows are isomorphisms as $F_{X/k}$ is finite, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-relative-affine-cohomology}.
+The right square commutes as $\Theta_{X^{(p)}/X}$ is a morphism of
+complexes and $\Theta^{d - 1}$ is zero. Thus it suffices to show that
+$\Theta^d$ is nonzero (because the dimension of the source of the map
+$\Theta^d$ is $1$ by the discussion above). However, we know that
+$$
+\Theta^d : F_{X/k, *}\Omega^d_{X/k} \to \Omega^d_{X^{(p)}/k}
+$$
+is surjective and hence surjective after applying the right exact
+functor $H^d(X^{(p)}, -)$ (right exactness by the vanishing of cohomology
+beyond $d$ as follows from
+Cohomology, Proposition \ref{cohomology-proposition-vanishing-Noetherian}).
+Finally, $H^d(X^{(p)}, \Omega^d_{X^{(p)}/k})$ is nonzero for example because
+it is dual to $H^0(X^{(p)}, \mathcal{O}_{X^{(p)}})$ by
+Lemma \ref{lemma-duality-hodge} applied to $X^{(p)}$ over $k$.
+This finishes the proof in this case.
+
+\medskip\noindent
+Finally, assume the characteristic of $k$ is $0$.
+We can write $k$ as the filtered colimit of its finite type
+$\mathbf{Z}$-subalgebras $R$. For one of these we can find a
+cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d] \ar[r] & Y \ar[d] \\
+\Spec(k) \ar[r] & \Spec(R)
+}
+$$
+such that $Y \to \Spec(R)$ is smooth of relative dimension $d$ and proper.
+See Limits, Lemmas \ref{limits-lemma-descend-finite-presentation},
+\ref{limits-lemma-descend-smooth}, \ref{limits-lemma-descend-dimension-d}, and
+\ref{limits-lemma-eventually-proper}.
+The modules $M^{i, j} = H^j(Y, \Omega^i_{Y/R})$ are finite $R$-modules, see
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-proper-over-affine-cohomology-finite}.
+Thus after replacing $R$ by a localization we may assume all of these
+modules are finite free. We have
+$M^{i, j} \otimes_R k = H^j(X, \Omega^i_{X/k})$
+by flat base change (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-flat-base-change-cohomology}).
+Thus it suffices to show that $M^{d - 1, d} \to M^{d, d}$
+is zero. This is a map of finite free modules over a domain,
+hence it suffices to find a dense set of primes $\mathfrak p \subset R$
+such that after tensoring with $\kappa(\mathfrak p)$ we get zero.
+Since $R$ is of finite type over $\mathbf{Z}$, we can take
+the collection of primes $\mathfrak p$ whose residue field
+has positive characteristic (details omitted). Observe that
+$$
+M^{d - 1, d} \otimes_R \kappa(\mathfrak p) =
+H^d(Y_{\kappa(\mathfrak p)},
+\Omega^{d - 1}_{Y_{\kappa(\mathfrak p)}/\kappa(\mathfrak p)})
+$$
+for example by Limits, Lemma
+\ref{limits-lemma-higher-direct-images-zero-above-dimension-fibre}.
+Similarly for $M^{d, d}$. Thus we see that
+$M^{d - 1, d} \otimes_R \kappa(\mathfrak p) \to
+M^{d, d} \otimes_R \kappa(\mathfrak p)$
+is zero by the case of positive characteristic handled above.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-poincare-duality}
+Let $k$ be a field. Let $X$ be a nonempty smooth proper scheme over $k$
+equidimensional of dimension $d$. There exists a $k$-linear map
+$$
+t : H^{2d}_{dR}(X/k) \longrightarrow k
+$$
+unique up to precomposing by multiplication by a unit of
+$H^0(X, \mathcal{O}_X)$ with the following property: for all $i$ the pairing
+$$
+H^i_{dR}(X/k) \times H_{dR}^{2d - i}(X/k)
+\longrightarrow
+k, \quad
+(\xi, \xi') \longmapsto t(\xi \cup \xi')
+$$
+is perfect.
+\end{proposition}
+
+\begin{proof}
+By the Hodge-to-de Rham spectral sequence
+(Section \ref{section-hodge-to-de-rham}), the vanishing
+of $\Omega^i_{X/k}$ for $i > d$, the vanishing in
+Cohomology, Proposition \ref{cohomology-proposition-vanishing-Noetherian}
+and the results of Lemmas \ref{lemma-bottom-part-degenerates} and
+\ref{lemma-top-part-degenerates}
+we see that $H^0_{dR}(X/k) = H^0(X, \mathcal{O}_X)$
+and $H^d(X, \Omega^d_{X/k}) = H_{dR}^{2d}(X/k)$.
+More precisely, these identifications come from the maps
+of complexes
+$$
+\Omega^\bullet_{X/k} \to \mathcal{O}_X[0]
+\quad\text{and}\quad
+\Omega^d_{X/k}[-d] \to \Omega^\bullet_{X/k}
+$$
+Let us choose $t : H_{dR}^{2d}(X/k) \to k$ which via this identification
+corresponds to a $t$ as in Lemma \ref{lemma-duality-hodge}.
+Then in any case we see that the pairing displayed in the proposition
+is perfect for $i = 0$.
+
+\medskip\noindent
+Denote $\underline{k}$ the constant sheaf with value $k$ on $X$.
+Let us abbreviate $\Omega^\bullet = \Omega^\bullet_{X/k}$.
+Consider the map (\ref{equation-wedge}) which in our situation reads
+$$
+\wedge :
+\text{Tot}(\Omega^\bullet \otimes_{\underline{k}} \Omega^\bullet)
+\longrightarrow
+\Omega^\bullet
+$$
+For every integer $p = 0, 1, \ldots, d$ this map
+annihilates the subcomplex
+$\text{Tot}(\sigma_{> p} \Omega^\bullet \otimes_{\underline{k}}
+\sigma_{\geq d - p} \Omega^\bullet)$ for degree reasons.
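+To spell out the degree reasons: in bidegree $(a, b)$ this subcomplex
+is given by $\Omega^a \otimes_{\underline{k}} \Omega^b$ with $a > p$ and
+$b \geq d - p$, and the wedge map sends this into
+$$
+\Omega^{a + b} = 0, \quad a + b > p + (d - p) = d
+$$
+because $\Omega^i_{X/k}$ vanishes for $i > d$.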
+Hence we find that the restriction of $\wedge$ to the subcomplex
+$\text{Tot}(\Omega^\bullet \otimes_{\underline{k}}
+\sigma_{\geq d - p}\Omega^\bullet)$ factors through a map of complexes
+$$
+\gamma_p :
+\text{Tot}(\sigma_{\leq p} \Omega^\bullet \otimes_{\underline{k}}
+\sigma_{\geq d - p} \Omega^\bullet)
+\longrightarrow
+\Omega^\bullet
+$$
+Using the same procedure as in Section \ref{section-cup-product} we obtain
+cup products
+$$
+H^i(X, \sigma_{\leq p} \Omega^\bullet) \times
+H^{2d - i}(X, \sigma_{\geq d - p}\Omega^\bullet)
+\longrightarrow
+H^{2d}(X, \Omega^\bullet) = H_{dR}^{2d}(X/k)
+$$
+We will prove by induction on $p$ that these cup products via $t$
+induce perfect pairings between $H^i(X, \sigma_{\leq p} \Omega^\bullet)$
+and $H^{2d - i}(X, \sigma_{\geq d - p}\Omega^\bullet)$. For $p = d$
+this is the assertion of the proposition.
+
+\medskip\noindent
+The base case is $p = 0$. In this case we simply obtain the pairing
+between $H^i(X, \mathcal{O}_X)$ and $H^{d - i}(X, \Omega^d)$ of
+Lemma \ref{lemma-duality-hodge} and the result is true.
+
+\medskip\noindent
+Induction step. Say we know the result is true for $p$. Then
+we consider the distinguished triangle
+$$
+\Omega^{p + 1}[-p - 1] \to
+\sigma_{\leq p + 1}\Omega^\bullet \to
+\sigma_{\leq p}\Omega^\bullet \to
+\Omega^{p + 1}[-p]
+$$
+and the distinguished triangle
+$$
+\sigma_{\geq d - p}\Omega^\bullet \to
+\sigma_{\geq d - p - 1}\Omega^\bullet \to
+\Omega^{d - p - 1}[-d + p + 1] \to
+(\sigma_{\geq d - p}\Omega^\bullet)[1]
+$$
+Observe that both are distinguished triangles in the homotopy category
+of complexes of sheaves of $\underline{k}$-modules; in particular the
+maps $\sigma_{\leq p}\Omega^\bullet \to \Omega^{p + 1}[-p]$ and
+$\Omega^{d - p - 1}[-d + p + 1] \to (\sigma_{\geq d - p}\Omega^\bullet)[1]$
+are given by actual maps of complexes, namely using the differential
+$\Omega^p \to \Omega^{p + 1}$ and the differential
+$\Omega^{d - p - 1} \to \Omega^{d - p}$.
+Consider the long exact cohomology sequences associated to these
+distinguished triangles
+$$
+\xymatrix{
+H^{i - 1}(X, \sigma_{\leq p}\Omega^\bullet) \ar[d]_a \\
+H^i(X, \Omega^{p + 1}[-p - 1]) \ar[d]_b \\
+H^i(X, \sigma_{\leq p + 1}\Omega^\bullet) \ar[d]_c \\
+H^i(X, \sigma_{\leq p}\Omega^\bullet) \ar[d]_d \\
+H^{i + 1}(X, \Omega^{p + 1}[-p - 1])
+}
+\quad\quad
+\xymatrix{
+H^{2d - i + 1}(X, \sigma_{\geq d - p}\Omega^\bullet) \\
+H^{2d - i}(X, \Omega^{d - p - 1}[-d + p + 1]) \ar[u]_{a'} \\
+H^{2d - i}(X, \sigma_{\geq d - p - 1}\Omega^\bullet) \ar[u]_{b'} \\
+H^{2d - i}(X, \sigma_{\geq d - p}\Omega^\bullet) \ar[u]_{c'} \\
+H^{2d - i - 1}(X, \Omega^{d - p - 1}[-d + p + 1]) \ar[u]_{d'}
+}
+$$
+By induction and Lemma \ref{lemma-duality-hodge}
+we know that the pairings constructed above between the
+$k$-vector spaces on the first, second, fourth, and fifth
+rows are perfect. By the $5$-lemma, in order to show that
+the pairing between the cohomology groups in the middle row
+is perfect, it suffices to show that the pairs
+$(a, a')$, $(b, b')$, $(c, c')$, and $(d, d')$
+are compatible with the given pairings (see below).
+
+\medskip\noindent
+Let us prove this for the pair $(c, c')$. Here we observe simply
+that we have a commutative diagram
+$$
+\xymatrix{
+\text{Tot}(\sigma_{\leq p} \Omega^\bullet \otimes_{\underline{k}}
+\sigma_{\geq d - p} \Omega^\bullet) \ar[d]_{\gamma_p} &
+\text{Tot}(\sigma_{\leq p + 1} \Omega^\bullet \otimes_{\underline{k}}
+\sigma_{\geq d - p} \Omega^\bullet) \ar[l] \ar[d] \\
+\Omega^\bullet &
+\text{Tot}(\sigma_{\leq p + 1} \Omega^\bullet \otimes_{\underline{k}}
+\sigma_{\geq d - p - 1} \Omega^\bullet) \ar[l]_-{\gamma_{p + 1}}
+}
+$$
+Hence if we have $\alpha \in H^i(X, \sigma_{\leq p + 1}\Omega^\bullet)$
+and $\beta \in H^{2d - i}(X, \sigma_{\geq d - p}\Omega^\bullet)$
+then we get
+$\gamma_p(\alpha \cup c'(\beta)) = \gamma_{p + 1}(c(\alpha) \cup \beta)$
+by functoriality of the cup product.
+
+\medskip\noindent
+Similarly for the pair $(b, b')$ we use the commutative diagram
+$$
+\xymatrix{
+\text{Tot}(\sigma_{\leq p + 1} \Omega^\bullet \otimes_{\underline{k}}
+\sigma_{\geq d - p - 1} \Omega^\bullet) \ar[d]_{\gamma_{p + 1}} &
+\text{Tot}(\Omega^{p + 1}[-p - 1] \otimes_{\underline{k}}
+\sigma_{\geq d - p - 1} \Omega^\bullet) \ar[l] \ar[d] \\
+\Omega^\bullet &
+\Omega^{p + 1}[-p - 1]
+\otimes_{\underline{k}}
+\Omega^{d - p - 1}[-d + p + 1] \ar[l]_-\wedge
+}
+$$
+and argue in the same manner.
+
+\medskip\noindent
+For the pair $(d, d')$ we use the commutative diagram
+$$
+\xymatrix{
+\Omega^{p + 1}[-p] \otimes_{\underline{k}}
+\Omega^{d - p - 1}[-d + p] \ar[d] &
+\text{Tot}(\sigma_{\leq p}\Omega^\bullet \otimes_{\underline{k}}
+\Omega^{d - p - 1}[-d + p]) \ar[l] \ar[d] \\
+\Omega^\bullet &
+\text{Tot}(\sigma_{\leq p}\Omega^\bullet \otimes_{\underline{k}}
+\sigma_{\geq d - p}\Omega^\bullet) \ar[l]
+}
+$$
+and we look at cohomology classes in
+$H^i(X, \sigma_{\leq p}\Omega^\bullet)$ and
+$H^{2d - i}(X, \Omega^{d - p - 1}[-d + p])$.
+Changing $i$ to $i - 1$ we get the result for the pair $(a, a')$
+thereby finishing the proof that our pairings are perfect.
+
+\medskip\noindent
+We omit the argument showing the uniqueness of $t$ up to
+precomposing by multiplication by a unit in $H^0(X, \mathcal{O}_X)$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Chern classes}
+\label{section-chern-classes}
+
+\noindent
+The results proved so far suffice to use the discussion in
+Weil Cohomology Theories, Section \ref{weil-section-chern}
+to produce Chern classes in de Rham cohomology.
+
+\begin{lemma}
+\label{lemma-chern-classes}
+There is a unique rule which assigns to every quasi-compact and
+quasi-separated scheme $X$ a total Chern class
+$$
+c^{dR} :
+K_0(\textit{Vect}(X))
+\longrightarrow
+\prod\nolimits_{i \geq 0} H^{2i}_{dR}(X/\mathbf{Z})
+$$
+with the following properties
+\begin{enumerate}
+\item we have $c^{dR}(\alpha + \beta) = c^{dR}(\alpha) c^{dR}(\beta)$
+for $\alpha, \beta \in K_0(\textit{Vect}(X))$,
+\item if $f : X \to X'$ is a morphism of quasi-compact and
+quasi-separated schemes, then $c^{dR}(f^*\alpha) = f^*c^{dR}(\alpha)$,
+\item given $\mathcal{L} \in \Pic(X)$ we have
+$c^{dR}([\mathcal{L}]) = 1 + c_1^{dR}(\mathcal{L})$
+\end{enumerate}
+\end{lemma}
+
+\noindent
+The construction can easily be extended to all schemes, but to do so one needs
+to slightly upgrade the discussion in Weil Cohomology Theories,
+Section \ref{weil-section-chern}.
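+
+\noindent
+As an immediate consequence of property (1), if
+$0 \to \mathcal{E}' \to \mathcal{E} \to \mathcal{E}'' \to 0$
+is a short exact sequence of finite locally free $\mathcal{O}_X$-modules,
+then $[\mathcal{E}] = [\mathcal{E}'] + [\mathcal{E}'']$ in
+$K_0(\textit{Vect}(X))$ and hence we obtain the Whitney sum formula
+$$
+c^{dR}([\mathcal{E}]) = c^{dR}([\mathcal{E}'])\, c^{dR}([\mathcal{E}''])
+$$
+in $\prod_{i \geq 0} H^{2i}_{dR}(X/\mathbf{Z})$.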
+
+\begin{proof}
+We will apply Weil Cohomology Theories, Proposition
+\ref{weil-proposition-chern-class} to get this.
+
+\medskip\noindent
+Let $\mathcal{C}$ be the category of all quasi-compact and quasi-separated
+schemes. This certainly satisfies conditions
+(1), (2), and (3) (a), (b), and (c) of Weil Cohomology Theories,
+Section \ref{weil-section-chern}.
+
+\medskip\noindent
+As our contravariant functor $A$ from $\mathcal{C}$ to the
+category of graded algebras we take the functor sending $X$ to
+$A(X) = \bigoplus_{i \geq 0} H_{dR}^{2i}(X/\mathbf{Z})$
+endowed with its cup product.
+Functoriality is discussed in Section \ref{section-de-rham-cohomology}
+and the cup product in Section \ref{section-cup-product}.
+For the additive maps $c_1^A$ we take $c_1^{dR}$ constructed
+in Section \ref{section-first-chern-class}.
+
+\medskip\noindent
+In fact, we obtain commutative algebras by
+Lemma \ref{lemma-cup-product-graded-commutative}
+which shows we have axiom (1) for $A$.
+
+\medskip\noindent
+To check axiom (2) for $A$ it suffices to check that
+$H^*_{dR}(X \coprod Y/\mathbf{Z}) = H^*_{dR}(X/\mathbf{Z}) \times
+H^*_{dR}(Y/\mathbf{Z})$.
+This is a consequence of the fact that de Rham cohomology
+is constructed by taking the cohomology of a sheaf of differential
+graded algebras (in the Zariski topology).
+
+\medskip\noindent
+Axiom (3) for $A$ is just the statement that taking first Chern
+classes of invertible modules is compatible with pullbacks.
+This follows from the more general Lemma \ref{lemma-pullback-c1}.
+
+\medskip\noindent
+Axiom (4) for $A$ is the projective space bundle formula which
+we proved in Proposition \ref{proposition-projective-space-bundle-formula}.
+
+\medskip\noindent
+Axiom (5). Let $X$ be a quasi-compact and quasi-separated scheme and
+let $\mathcal{E} \to \mathcal{F}$ be a surjection of finite locally free
+$\mathcal{O}_X$-modules of ranks $r + 1$ and $r$. Denote
+$i : P' = \mathbf{P}(\mathcal{F}) \to \mathbf{P}(\mathcal{E}) = P$ the
+corresponding inclusion morphism. This is a morphism of smooth projective
+schemes over $X$ which exhibits $P'$ as an effective Cartier divisor on $P$.
+Thus by Lemma \ref{lemma-check-log-smooth} the complex of log poles
+for $P' \subset P$ over $\mathbf{Z}$ is defined.
+Hence for $a \in A(P)$ with $i^*a = 0$ we have
+$a \cup c_1^A(\mathcal{O}_P(P')) = 0$ by
+Lemma \ref{lemma-log-complex-consequence}.
+This finishes the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-splitting-principle}
+The analogues of Weil Cohomology Theories, Lemmas
+\ref{weil-lemma-splitting-principle} (splitting principle) and
+\ref{weil-lemma-chern-classes-E-tensor-L} (Chern classes of tensor products)
+hold for de Rham Chern classes on quasi-compact and quasi-separated schemes.
+This is clear as we've shown in the proof of
+Lemma \ref{lemma-chern-classes}
+that all the axioms of Weil Cohomology Theories, Section
+\ref{weil-section-chern} are satisfied.
+\end{remark}
+
+\noindent
+Working with schemes over $\mathbf{Q}$ we can construct a Chern character.
+
+\begin{lemma}
+\label{lemma-chern-character}
+There is a unique rule which assigns to every quasi-compact and quasi-separated
+scheme $X$ over $\mathbf{Q}$ a ``Chern character''
+$$
+ch^{dR} : K_0(\textit{Vect}(X)) \longrightarrow
+\prod\nolimits_{i \geq 0} H_{dR}^{2i}(X/\mathbf{Q})
+$$
+with the following properties
+\begin{enumerate}
+\item $ch^{dR}$ is a ring map for all $X$,
+\item if $f : X' \to X$ is a morphism of quasi-compact and quasi-separated
+schemes over $\mathbf{Q}$, then $f^* \circ ch^{dR} = ch^{dR} \circ f^*$, and
+\item given $\mathcal{L} \in \Pic(X)$
+we have $ch^{dR}([\mathcal{L}]) = \exp(c_1^{dR}(\mathcal{L}))$.
+\end{enumerate}
+\end{lemma}
+
+\noindent
+The construction can easily be extended to all schemes over $\mathbf{Q}$,
+but to do so one needs to slightly upgrade the discussion in
+Weil Cohomology Theories, Section \ref{weil-section-chern}.
+
+\begin{proof}
+Exactly as in the proof of Lemma \ref{lemma-chern-classes}
+one shows that the category of quasi-compact and quasi-separated
+schemes over $\mathbf{Q}$ together with the functor
+$A^*(X) = \bigoplus_{i \geq 0} H_{dR}^{2i}(X/\mathbf{Q})$
+satisfy the axioms of
+Weil Cohomology Theories, Section \ref{weil-section-chern}.
+Moreover, in this case $A(X)$ is a $\mathbf{Q}$-algebra for
+all $X$. Hence the lemma follows from
+Weil Cohomology Theories, Proposition
+\ref{weil-proposition-chern-character}.
+\end{proof}
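+
+\noindent
+To give an idea of what $ch^{dR}$ looks like: writing
+$c_i^{dR}(\mathcal{E})$ for the degree $2i$ component of the total
+Chern class of a finite locally free module $\mathcal{E}$ of rank $r$
+(as in Lemma \ref{lemma-chern-classes}), the construction produces the
+usual universal expression
+$$
+ch^{dR}([\mathcal{E}]) =
+r + c_1^{dR}(\mathcal{E}) +
+\frac{1}{2}\left(c_1^{dR}(\mathcal{E})^2 - 2 c_2^{dR}(\mathcal{E})\right)
++ \ldots
+$$
+where the denominators make sense as we work with schemes over $\mathbf{Q}$.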
+
+
+
+
+
+
+
+\section{A Weil cohomology theory}
+\label{section-weil}
+
+\noindent
+Let $k$ be a field of characteristic $0$. In this section we prove that
+the functor
+$$
+X \longmapsto H^*_{dR}(X/k)
+$$
+defines a Weil cohomology theory over $k$ with coefficients in $k$ as defined
+in Weil Cohomology Theories, Definition
+\ref{weil-definition-weil-cohomology-theory}.
+We will proceed by checking the constructions earlier in this
+chapter provide us with data (D0), (D1), and (D2') satisfying
+axioms (A1) -- (A9) of
+Weil Cohomology Theories, Section \ref{weil-section-c1}.
+
+\medskip\noindent
+Throughout the rest of this section we fix the field $k$ of characteristic
+$0$ and we set $F = k$. Next, we take the following data
+\begin{enumerate}
+\item[(D0)] For our $1$-dimensional $F$ vector space $F(1)$ we take
+$F(1) = F = k$.
+\item[(D1)] For our functor $H^*$ we take the functor sending
+a smooth projective scheme $X$ over $k$ to $H^*_{dR}(X/k)$.
+Functoriality is discussed in Section \ref{section-de-rham-cohomology}
+and the cup product in Section \ref{section-cup-product}.
+We obtain graded commutative $F$-algebras by
+Lemma \ref{lemma-cup-product-graded-commutative}.
+\item[(D2')] For the maps $c_1^H : \Pic(X) \to H^2(X)(1)$ we
+use the de Rham first Chern class introduced in
+Section \ref{section-first-chern-class}.
+\end{enumerate}
+We are going to show axioms (A1) -- (A9) hold.
+
+\medskip\noindent
+In this paragraph, we are going to reduce the checking of the
+axioms to the case where $k$ is algebraically closed by
+using Weil Cohomology Theories, Lemma \ref{weil-lemma-check-over-extension}.
+Denote $k'$ the algebraic closure of $k$.
+Set $F' = k'$. We obtain data (D0), (D1), (D2') over $k'$ with
+coefficient field $F'$ in exactly the same way as above.
+By Lemma \ref{lemma-proper-smooth-de-Rham} there are
+functorial isomorphisms
+$$
+H_{dR}^*(X/k) \otimes_k k'
+\longrightarrow
+H_{dR}^*(X_{k'}/k')
+$$
+for $X$ smooth and projective over $k$. Moreover, the diagrams
+$$
+\xymatrix{
+\Pic(X) \ar[r]_{c^{dR}_1} \ar[d] & H_{dR}^2(X/k) \ar[d] \\
+\Pic(X_{k'}) \ar[r]^{c^{dR}_1} & H_{dR}^2(X_{k'}/k')
+}
+$$
+commute by Lemma \ref{lemma-pullback-c1}.
+This finishes the proof of the reduction.
+
+\medskip\noindent
+Assume $k$ is an algebraically closed field of characteristic zero.
+We will show axioms (A1) -- (A9) for the data (D0), (D1), and (D2')
+given above.
+
+\medskip\noindent
+Axiom (A1). Here we have to check that
+$H^*_{dR}(X \coprod Y/k) = H^*_{dR}(X/k) \times H^*_{dR}(Y/k)$.
+This is a consequence of the fact that de Rham cohomology
+is constructed by taking the cohomology of a sheaf of differential
+graded algebras (in the Zariski topology).
+
+\medskip\noindent
+Axiom (A2). This is just the statement that taking first Chern
+classes of invertible modules is compatible with pullbacks.
+This follows from the more general Lemma \ref{lemma-pullback-c1}.
+
+\medskip\noindent
+Axiom (A3). This follows from the more general
+Proposition \ref{proposition-projective-space-bundle-formula}.
+
+\medskip\noindent
+Axiom (A4). This follows from the more general
+Lemma \ref{lemma-log-complex-consequence}.
+
+\medskip\noindent
+Already at this point, using
+Weil Cohomology Theories, Lemmas \ref{weil-lemma-chern-classes} and
+\ref{weil-lemma-cycle-classes}, we obtain a Chern character and
+cycle class maps
+$$
+\gamma :
+\CH^*(X)
+\longrightarrow
+\bigoplus\nolimits_{i \geq 0} H^{2i}_{dR}(X/k)
+$$
+for $X$ smooth projective over $k$ which are graded ring homomorphisms
+compatible with pullbacks between morphisms $f : X \to Y$
+of smooth projective schemes over $k$.
+
+\medskip\noindent
+Axiom (A5). We have $H_{dR}^*(\Spec(k)/k) = k = F$ in degree $0$.
+We have the K\"unneth formula for the product of two smooth projective
+$k$-schemes by Lemma \ref{lemma-kunneth-de-rham} (observe that the
+derived tensor products in the statement are harmless as we are
+tensoring over the field $k$).
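+
+\medskip\noindent
+Concretely, since we are tensoring over the field $k$, the K\"unneth
+formula takes the familiar form
+$$
+H^n_{dR}(X \times_{\Spec(k)} Y/k) =
+\bigoplus\nolimits_{i + j = n} H^i_{dR}(X/k) \otimes_k H^j_{dR}(Y/k)
+$$
+for $X$ and $Y$ smooth projective over $k$.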
+
+\medskip\noindent
+Axiom (A7). This follows from Proposition \ref{proposition-blowup-split}.
+
+\medskip\noindent
+Axiom (A8). Let $X$ be a smooth projective scheme over $k$.
+By the explanatory text to this axiom in
+Weil Cohomology Theories, Section \ref{weil-section-c1}
+we see that $k' = H^0(X, \mathcal{O}_X)$ is a finite
+separable $k$-algebra. It follows that $H_{dR}^*(\Spec(k')/k) = k'$
+sitting in degree $0$ because $\Omega_{k'/k} = 0$. By
+Lemma \ref{lemma-bottom-part-degenerates}
+we also have $H_{dR}^0(X/k) = k'$ and we get
+the axiom.
+
+\medskip\noindent
+Axiom (A6). Let $X$ be a nonempty smooth projective scheme over $k$
+which is equidimensional of dimension $d$. Denote
+$\Delta : X \to X \times_{\Spec(k)} X$
+the diagonal morphism of $X$ over $k$. We have to show that there
+exists a $k$-linear map
+$$
+\lambda : H_{dR}^{2d}(X/k) \longrightarrow k
+$$
+such that $(1 \otimes \lambda)\gamma([\Delta]) = 1$ in $H^0_{dR}(X/k)$.
+Let us write
+$$
+\gamma = \gamma([\Delta]) = \gamma_0 + \ldots + \gamma_{2d}
+$$
+with $\gamma_i \in H_{dR}^i(X/k) \otimes_k H_{dR}^{2d - i}(X/k)$
+the K\"unneth components. Our problem is to show that there is a
+linear map $\lambda : H_{dR}^{2d}(X/k) \to k$ such that
+$(1 \otimes \lambda)\gamma_0 = 1$ in $H^0_{dR}(X/k)$.
+
+\medskip\noindent
+Let $X = \coprod X_i$ be the decomposition of $X$ into connected
+and hence irreducible components. Then we have correspondingly
+$\Delta = \coprod \Delta_i$ with $\Delta_i \subset X_i \times X_i$.
+It follows that
+$$
+\gamma([\Delta]) = \sum \gamma([\Delta_i])
+$$
+and moreover $\gamma([\Delta_i])$ corresponds to the class of
+$\Delta_i \subset X_i \times X_i$ via the decomposition
+$$
+H^*_{dR}(X \times X) = \prod\nolimits_{i, j} H^*_{dR}(X_i \times X_j)
+$$
+We omit the details; one way to show this is to use that in
+$\CH^0(X \times X)$ we have idempotents $e_{i, j}$ corresponding to
+the open and closed subschemes $X_i \times X_j$ and to use that
+$\gamma$ is a ring map which sends $e_{i, j}$ to the corresponding
+idempotent in the displayed product decomposition of cohomology.
+If we can find $\lambda_i : H_{dR}^{2d}(X_i/k) \to k$ with
+$(1 \otimes \lambda_i)\gamma([\Delta_i]) = 1$ in $H^0_{dR}(X_i/k)$
+then taking $\lambda = \sum \lambda_i$ will solve the problem for $X$.
+Thus we may and do assume $X$ is irreducible.
+
+\medskip\noindent
+Proof of Axiom (A6) for $X$ irreducible. Since $k$ is algebraically
+closed we have $H^0_{dR}(X/k) = k$ because $H^0(X, \mathcal{O}_X) = k$
+as $X$ is a projective variety over an algebraically closed field
+(see Varieties, Lemma
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections}
+for example). Let $x \in X$ be any closed point.
+Consider the cartesian diagram
+$$
+\xymatrix{
+x \ar[d] \ar[r] & X \ar[d]^\Delta \\
+X \ar[r]^-{x \times \text{id}} & X \times_{\Spec(k)} X
+}
+$$
+Compatibility of $\gamma$ with pullbacks implies that
+$\gamma([\Delta])$ maps to $\gamma([x])$ in $H_{dR}^{2d}(X/k)$,
+in other words, we have $\gamma_0 = 1 \otimes \gamma([x])$.
+We conclude three things from this: (a) the class
+$\gamma([x])$ is independent of $x$, (b) it suffices
+to show the class $\gamma([x])$ is nonzero, and hence (c)
+it suffices to find any zero cycle $\alpha$ on $X$ such that
+$\gamma(\alpha) \not = 0$. To do this we choose a finite
+morphism
+$$
+f : X \longrightarrow \mathbf{P}^d_k
+$$
+To see that such a morphism exists, see
+Intersection Theory, Section \ref{intersection-section-projection}
+and in particular Lemma \ref{intersection-lemma-projection-generically-finite}.
+Observe that $f$ is finite syntomic (local complete intersection morphism
+by More on Morphisms, Lemma \ref{more-morphisms-lemma-lci-permanence}
+and flat by Algebra, Lemma \ref{algebra-lemma-CM-over-regular-flat}).
+By Proposition \ref{proposition-Garel} we have a trace map
+$$
+\Theta_f :
+f_*\Omega^\bullet_{X/k}
+\longrightarrow
+\Omega^\bullet_{\mathbf{P}^d_k/k}
+$$
+whose composition with the canonical map
+$$
+\Omega^\bullet_{\mathbf{P}^d_k/k}
+\longrightarrow
+f_*\Omega^\bullet_{X/k}
+$$
+is multiplication by the degree of $f$. Hence we see that we get a map
+$$
+\Theta : H_{dR}^{2d}(X/k) \to H_{dR}^{2d}(\mathbf{P}^d_k/k)
+$$
+such that $\Theta \circ f^*$ is multiplication by a positive integer.
+Hence if we can find a zero cycle on $\mathbf{P}^d_k$ whose class
+is nonzero, then we conclude by the compatibility of $\gamma$
+with pullbacks. This is true by
+Lemma \ref{lemma-de-rham-cohomology-projective-space} and this
+finishes the proof of axiom (A6).
+
+\medskip\noindent
+Below we will use the following without further mention.
+First, by Weil Cohomology Theories, Remark \ref{weil-remark-trace}
+the map $\lambda_X : H^{2d}_{dR}(X/k) \to k$ is unique.
+Second, in the proof of axiom (A6) we have
+seen that $\lambda_X(\gamma([x])) = 1$ when $X$ is irreducible, i.e.,
+the composition of the cycle class map
+$\gamma : \CH^d(X) \to H_{dR}^{2d}(X/k)$ with $\lambda_X$
+is the degree map.
+
+\medskip\noindent
+Axiom (A9). Let $Y \subset X$ be a nonempty smooth divisor on a
+nonempty smooth equidimensional projective scheme $X$ over $k$
+of dimension $d$. We have to show that the diagram
+$$
+\xymatrix{
+H_{dR}^{2d - 2}(X/k)
+\ar[rrr]_{c^{dR}_1(\mathcal{O}_X(Y)) \cap -} \ar[d]_{\text{restriction}} & & &
+H_{dR}^{2d}(X/k) \ar[d]^{\lambda_X} \\
+H_{dR}^{2d - 2}(Y/k) \ar[rrr]^-{\lambda_Y} & & & k
+}
+$$
+commutes where $\lambda_X$ and $\lambda_Y$ are as in axiom (A6).
+Above we have seen that if we decompose $X = \coprod X_i$ into connected
+(equivalently irreducible) components, then
+we have correspondingly $\lambda_X = \sum \lambda_{X_i}$.
+Similarly, if we decompose $Y = \coprod Y_j$ into connected (equivalently
+irreducible) components, then we have $\lambda_Y = \sum \lambda_{Y_j}$.
+Moreover, in this case we have
+$\mathcal{O}_X(Y) = \otimes_j \mathcal{O}_X(Y_j)$ and hence
+$$
+c_1^{dR}(\mathcal{O}_X(Y)) = \sum\nolimits_j
+c^{dR}_1(\mathcal{O}_X(Y_j))
+$$
+in $H_{dR}^2(X/k)$. A straightforward diagram chase shows that it suffices
+to prove the commutativity of the diagram in case $X$ and $Y$ are both
+irreducible. Then $H_{dR}^{2d - 2}(Y/k)$ is $1$-dimensional as
+we have Poincar\'e duality for $Y$ by
+Weil Cohomology Theories, Lemma \ref{weil-lemma-poincare-duality}.
+By axiom (A4) the kernel of restriction (left vertical arrow)
+is contained in the kernel of cupping with $c^{dR}_1(\mathcal{O}_X(Y))$.
+This means it suffices to find one cohomology class
+$a \in H_{dR}^{2d - 2}(X/k)$ whose restriction to $Y$ is nonzero
+such that we have commutativity in the diagram for $a$.
+Take any ample invertible module $\mathcal{L}$ and set
+$$
+a = c^{dR}_1(\mathcal{L})^{d - 1}
+$$
+Then we know that $a|_Y = c^{dR}_1(\mathcal{L}|_Y)^{d - 1}$
+and hence
+$$
+\lambda_Y(a|_Y) = \deg(c_1(\mathcal{L}|_Y)^{d - 1} \cap [Y])
+$$
+by our description of $\lambda_Y$ above. This is a positive integer
+by Chow Homology, Lemma
+\ref{chow-lemma-degrees-and-numerical-intersections} combined with
+Varieties, Lemma \ref{varieties-lemma-ample-positive}.
+Similarly, we find
+$$
+\lambda_X(c^{dR}_1(\mathcal{O}_X(Y)) \cap a) =
+\deg(c_1(\mathcal{O}_X(Y)) \cap c_1(\mathcal{L})^{d - 1} \cap [X])
+$$
+Since we know that $c_1(\mathcal{O}_X(Y)) \cap [X] = [Y]$ more or
+less by definition we have an equality of zero cycles
+$$
+(Y \to X)_*\left(c_1(\mathcal{L}|_Y)^{d - 1} \cap [Y]\right) =
+c_1(\mathcal{O}_X(Y)) \cap c_1(\mathcal{L})^{d - 1} \cap [X]
+$$
+on $X$. Thus these cycles have the same degree and the proof is complete.
+
+\begin{proposition}
+\label{proposition-de-rham-is-weil}
+Let $k$ be a field of characteristic zero. The functor that
+sends a smooth projective scheme $X$ over $k$ to $H_{dR}^*(X/k)$
+is a Weil cohomology theory in the sense of
+Weil Cohomology Theories, Definition
+\ref{weil-definition-weil-cohomology-theory}.
+\end{proposition}
+
+\begin{proof}
+In the discussion above we showed that our data (D0), (D1), (D2')
+satisfies axioms (A1) -- (A9) of Weil Cohomology Theories, Section
+\ref{weil-section-c1}. Hence we conclude by
+Weil Cohomology Theories, Proposition \ref{weil-proposition-get-weil}.
+
+\medskip\noindent
+Please don't read what follows. In the proof of the assertions we also used
+Lemmas \ref{lemma-proper-smooth-de-Rham},
+\ref{lemma-pullback-c1},
+\ref{lemma-log-complex-consequence},
+\ref{lemma-kunneth-de-rham},
+\ref{lemma-bottom-part-degenerates}, and
+\ref{lemma-de-rham-cohomology-projective-space},
+Propositions
+\ref{proposition-projective-space-bundle-formula},
+\ref{proposition-blowup-split}, and
+\ref{proposition-Garel},
+Weil Cohomology Theories, Lemmas
+\ref{weil-lemma-check-over-extension},
+\ref{weil-lemma-chern-classes},
+\ref{weil-lemma-cycle-classes}, and
+\ref{weil-lemma-poincare-duality},
+Weil Cohomology Theories, Remark \ref{weil-remark-trace},
+Varieties, Lemmas
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections} and
+\ref{varieties-lemma-ample-positive},
+Intersection Theory, Section \ref{intersection-section-projection} and
+Lemma \ref{intersection-lemma-projection-generically-finite},
+More on Morphisms, Lemma \ref{more-morphisms-lemma-lci-permanence},
+Algebra, Lemma \ref{algebra-lemma-CM-over-regular-flat}, and
+Chow Homology, Lemma
+\ref{chow-lemma-degrees-and-numerical-intersections}.
+\end{proof}
+
+\begin{remark}
+\label{remark-hodge-cohomology-is-weil}
+In exactly the same manner as above one can show that
+Hodge cohomology $X \mapsto H_{Hodge}^*(X/k)$ equipped
+with $c_1^{Hodge}$ determines a Weil
+cohomology theory. If we ever need this, we will precisely
+formulate and prove this here. This leads to the following
+amusing consequence: If the Betti numbers of a Weil cohomology
+theory are independent of the chosen Weil cohomology theory
+(over our field $k$ of characteristic $0$), then
+the Hodge-to-de Rham spectral sequence
+degenerates at $E_1$! Of course, the degeneration of
+the Hodge-to-de Rham spectral sequence is known
+(see for example \cite{Deligne-Illusie} for a marvelous algebraic proof),
+but it is by no means an easy result! This suggests that proving
+the independence of Betti numbers is a hard problem as well
+and as far as we know is still an open problem. See
+Weil Cohomology Theories, Remark
+\ref{weil-remark-betti-numbers-in-some-sense} for a related question.
+\end{remark}
+
+
+
+
+
+
+
+\section{Gysin maps for closed immersions}
+\label{section-gysin}
+
+\noindent
+In this section we define the gysin map for closed immersions.
+
+\begin{remark}
+\label{remark-gysin-equations}
+Let $X \to S$ be a morphism of schemes. Let
+$f_1, \ldots, f_c \in \Gamma(X, \mathcal{O}_X)$. Let $Z \subset X$
+be the closed subscheme cut out by $f_1, \ldots, f_c$. Below we will
+study the {\it gysin map}
+\begin{equation}
+\label{equation-gysin}
+\gamma^p_{f_1, \ldots, f_c} :
+\Omega^p_{Z/S}
+\longrightarrow
+\mathcal{H}_Z^c(\Omega^{p + c}_{X/S})
+\end{equation}
+defined as follows. Given a local section $\omega$ of $\Omega^p_{Z/S}$
+which is the restriction of a section $\tilde \omega$ of $\Omega^p_{X/S}$
+we set
+$$
+\gamma^p_{f_1, \ldots, f_c}(\omega) =
+c_{f_1, \ldots, f_c}(\tilde \omega|_Z) \wedge
+\text{d}f_1 \wedge \ldots \wedge \text{d}f_c
+$$
+where $c_{f_1, \ldots, f_c} : \Omega^p_{X/S} \otimes \mathcal{O}_Z \to
+\mathcal{H}_Z^c(\Omega^p_{X/S})$ is the map constructed in
+Derived Categories of Schemes, Remark
+\ref{perfect-remark-supported-map-c-equations}.
+This is well defined: given $\omega$ we can change our choice of
+$\tilde \omega$ by elements of the form
+$\sum f_i \omega'_i + \sum \text{d}(f_i) \wedge \omega''_i$
+which are mapped to zero by the construction.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-gysin-differential}
+The gysin map (\ref{equation-gysin}) is compatible with the de Rham
+differentials on $\Omega^\bullet_{X/S}$ and $\Omega^\bullet_{Z/S}$.
+\end{lemma}
+
+\begin{proof}
+This follows from an almost trivial calculation once
+we correctly interpret this. First, we recall that the functor
+$\mathcal{H}^c_Z$ computed on the category of $\mathcal{O}_X$-modules
+agrees with the similarly defined functor on the category of abelian
+sheaves on $X$, see
+Cohomology, Lemma \ref{cohomology-lemma-sections-support-abelian-unbounded}.
+Hence, the differential $\text{d} : \Omega^p_{X/S} \to \Omega^{p + 1}_{X/S}$
+induces a map
+$\mathcal{H}^c_Z(\Omega^p_{X/S}) \to \mathcal{H}^c_Z(\Omega^{p + 1}_{X/S})$.
+Moreover, the formation of the extended alternating {\v C}ech complex in
+Derived Categories of Schemes, Remark \ref{perfect-remark-support-c-equations}
+works on the category of abelian sheaves. The map
+$$
+\Coker\left(\bigoplus \mathcal{F}_{1 \ldots \hat i \ldots c} \to
+\mathcal{F}_{1 \ldots c}\right)
+\longrightarrow
+i_*\mathcal{H}^c_Z(\mathcal{F})
+$$
+used in the construction of $c_{f_1, \ldots, f_c}$ in
+Derived Categories of Schemes, Remark
+\ref{perfect-remark-supported-map-c-equations}
+is well defined and
+functorial on the category of all abelian sheaves on $X$.
+Hence we see that the lemma follows from the equality
+$$
+\text{d}\left(
+\frac{\tilde \omega \wedge \text{d}f_1 \wedge \ldots \wedge
+\text{d}f_c}{f_1 \ldots f_c}\right) =
+\frac{\text{d}(\tilde \omega) \wedge
+\text{d}f_1 \wedge \ldots \wedge \text{d}f_c}{f_1 \ldots f_c}
+$$
+which is clear: by the Leibniz rule the only other terms involve
+$\text{d}(1/f_1 \ldots f_c)$, which is a sum of terms each containing
+some $\text{d}f_i$ and hence wedging to zero against
+$\text{d}f_1 \wedge \ldots \wedge \text{d}f_c$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-global}
+Let $X \to S$ be a morphism of schemes. Let $Z \to X$ be a closed immersion
+of finite presentation whose conormal sheaf $\mathcal{C}_{Z/X}$ is
+locally free of rank $c$. Then there is a canonical map
+$$
+\gamma^p : \Omega^p_{Z/S} \to \mathcal{H}^c_Z(\Omega^{p + c}_{X/S})
+$$
+which is locally given by the maps $\gamma^p_{f_1, \ldots, f_c}$
+of Remark \ref{remark-gysin-equations}.
+\end{lemma}
+
+\begin{proof}
+The assumptions imply that given $x \in Z \subset X$ there exists an
+open neighbourhood $U$ of $x$ such that $Z \cap U$ is cut out by $c$
+elements $f_1, \ldots, f_c \in \mathcal{O}_X(U)$. Thus
+it suffices to show that given $f_1, \ldots, f_c$ and
+$g_1, \ldots, g_c$ in $\mathcal{O}_X(U)$ cutting out $Z \cap U$,
+the maps $\gamma^p_{f_1, \ldots, f_c}$
+and $\gamma^p_{g_1, \ldots, g_c}$ are the same. To do this, after shrinking
+$U$ we may assume $g_j = \sum a_{ji} f_i$ for some
+$a_{ji} \in \mathcal{O}_X(U)$. Then we have
+$c_{f_1, \ldots, f_c} = \det(a_{ji}) c_{g_1, \ldots, g_c}$ by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-supported-map-determinant}.
+On the other hand we have
+$$
+\text{d}(g_1) \wedge \ldots \wedge \text{d}(g_c) \equiv
+\det(a_{ji}) \text{d}(f_1) \wedge \ldots \wedge \text{d}(f_c)
+\bmod (f_1, \ldots, f_c)\Omega^c_{X/S}
+$$
+Combining these relations, a straightforward calculation gives the
+desired equality.
+\end{proof}
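+
+\noindent
+For the convenience of the reader we spell out the calculation at the
+end of the proof above (a sketch, using the notation of that proof):
+\begin{align*}
+\gamma^p_{g_1, \ldots, g_c}(\omega)
+& =
+c_{g_1, \ldots, g_c}(\tilde \omega|_Z) \wedge
+\text{d}g_1 \wedge \ldots \wedge \text{d}g_c \\
+& =
+\det(a_{ji})\, c_{g_1, \ldots, g_c}(\tilde \omega|_Z) \wedge
+\text{d}f_1 \wedge \ldots \wedge \text{d}f_c \\
+& =
+c_{f_1, \ldots, f_c}(\tilde \omega|_Z) \wedge
+\text{d}f_1 \wedge \ldots \wedge \text{d}f_c
+=
+\gamma^p_{f_1, \ldots, f_c}(\omega)
+\end{align*}
+Here the second equality holds by the displayed congruence because the
+image of $c_{g_1, \ldots, g_c}$ is annihilated by $g_1, \ldots, g_c$,
+so the terms coming from $(f_1, \ldots, f_c)\Omega^c_{X/S}$ wedge to
+zero, and the third equality is the relation
+$c_{f_1, \ldots, f_c} = \det(a_{ji}) c_{g_1, \ldots, g_c}$.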
+
+\begin{lemma}
+\label{lemma-gysin-differential-global}
+Let $X \to S$ and $i : Z \to X$ be as in Lemma \ref{lemma-gysin-global}.
+The gysin map $\gamma^p$ is compatible with the de Rham
+differentials on $\Omega^\bullet_{X/S}$ and $\Omega^\bullet_{Z/S}$.
+\end{lemma}
+
+\begin{proof}
+We may check this locally and then it follows from
+Lemma \ref{lemma-gysin-differential}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-projection}
+Let $X \to S$ and $i : Z \to X$ be as in Lemma \ref{lemma-gysin-global}.
+Given $\alpha \in H^q(X, \Omega^p_{X/S})$ we have
+$\gamma^p(\alpha|_Z) = i^{-1}\alpha \wedge \gamma^0(1)$ in
+$H^q(Z, \mathcal{H}^c_Z(\Omega^{p + c}_{X/S}))$.
+Please see proof for notation.
+\end{lemma}
+
+\begin{proof}
+The restriction $\alpha|_Z$ is the element of $H^q(Z, \Omega^p_{Z/S})$
+given by functoriality for Hodge cohomology. Applying functoriality
+for cohomology using
+$\gamma^p : \Omega^p_{Z/S} \to \mathcal{H}^c_Z(\Omega^{p + c}_{X/S})$
+we get $\gamma^p(\alpha|_Z)$ in
+$H^q(Z, \mathcal{H}^c_Z(\Omega^{p + c}_{X/S}))$.
+This explains the left hand side of the formula.
+
+\medskip\noindent
+To explain the right hand side, we first pullback by the map
+of ringed spaces $i : (Z, i^{-1}\mathcal{O}_X) \to (X, \mathcal{O}_X)$
+to get the element $i^{-1}\alpha \in H^q(Z, i^{-1}\Omega^p_{X/S})$.
+Let $\gamma^0(1) \in H^0(Z, \mathcal{H}_Z^c(\Omega^c_{X/S}))$
+be the image of $1 \in H^0(Z, \mathcal{O}_Z) = H^0(Z, \Omega^0_{Z/S})$
+by $\gamma^0$. Using cup product we obtain an element
+$$
+i^{-1}\alpha \cup \gamma^0(1)
+\in
+H^{q + c}(Z,
+i^{-1}\Omega^p_{X/S} \otimes_{i^{-1}\mathcal{O}_X}
+\mathcal{H}^c_Z(\Omega^c_{X/S}))
+$$
+Using Cohomology, Remark \ref{cohomology-remark-support-cup-product}
+and wedge product there are canonical maps
+$$
+i^{-1}\Omega^p_{X/S} \otimes_{i^{-1}\mathcal{O}_X}^\mathbf{L}
+R\mathcal{H}_Z(\Omega^c_{X/S}) \to
+R\mathcal{H}_Z(\Omega^p_{X/S} \otimes_{\mathcal{O}_X}^\mathbf{L}
+\Omega^c_{X/S}) \to
+R\mathcal{H}_Z(\Omega^{p + c}_{X/S})
+$$
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-supported-trivial-vanishing}
+the objects $R\mathcal{H}_Z(\Omega^j_{X/S})$ have vanishing
+cohomology sheaves in degrees $> c$. Hence on cohomology
+sheaves in degree $c$ we obtain a map
+$$
+i^{-1}\Omega^p_{X/S} \otimes_{i^{-1}\mathcal{O}_X}
+\mathcal{H}^c_Z(\Omega^c_{X/S}) \longrightarrow
+\mathcal{H}^c_Z(\Omega^{p + c}_{X/S})
+$$
+The expression $i^{-1}\alpha \wedge \gamma^0(1)$ is the image
+of the cup product $i^{-1}\alpha \cup \gamma^0(1)$ by the
+functoriality of cohomology.
+
+\medskip\noindent
+Having explained the content of the formula in this manner, by
+general properties of cup products
+(Cohomology, Section \ref{cohomology-section-cup-product}),
+it now suffices to prove that the diagram
+$$
+\xymatrix{
+i^{-1}\Omega^p_X \otimes \Omega^0_Z \ar[rr]_{\text{id} \otimes \gamma^0}
+\ar[d] & &
+i^{-1}\Omega^p_X \otimes \mathcal{H}^c_Z(\Omega^c_X) \ar[d]^\wedge \\
+\Omega^p_Z \otimes \Omega^0_Z \ar[r]^\wedge &
+\Omega^p_Z \ar[r]^{\gamma^p} &
+\mathcal{H}^c_Z(\Omega^{p + c}_X)
+}
+$$
+is commutative in the category of sheaves on $Z$ (with obvious abuse of
+notation). This boils down to a simple computation for the maps
+$\gamma^j_{f_1, \ldots, f_c}$ which we omit; in fact these maps
+are chosen exactly such that this works and such that $1$ maps to
+$\frac{\text{d}f_1 \wedge \ldots \wedge \text{d}f_c}{f_1 \ldots f_c}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-transverse}
+Let $c \geq 0$ be an integer. Let
+$$
+\xymatrix{
+Z' \ar[d]_h \ar[r] & X' \ar[d]_g \ar[r] & S' \ar[d] \\
+Z \ar[r] & X \ar[r] & S
+}
+$$
+be a commutative diagram of schemes.
+Assume
+\begin{enumerate}
+\item $Z \to X$ and $Z' \to X'$
+satisfy the assumptions of Lemma \ref{lemma-gysin-global},
+\item the left square in the diagram is cartesian, and
+\item $h^*\mathcal{C}_{Z/X} \to \mathcal{C}_{Z'/X'}$
+(Morphisms, Lemma \ref{morphisms-lemma-conormal-functorial})
+is an isomorphism.
+\end{enumerate}
+Then the diagram
+$$
+\xymatrix{
+h^*\Omega^p_{Z/S} \ar[rr]_-{h^{-1}\gamma^p} \ar[d] & &
+\mathcal{O}_{X'}|_{Z'} \otimes_{h^{-1}\mathcal{O}_X|_Z}
+h^{-1}\mathcal{H}^c_Z(\Omega^{p + c}_{X/S}) \ar[d] \\
+\Omega^p_{Z'/S'} \ar[rr]^{\gamma^p} & &
+\mathcal{H}^c_{Z'}(\Omega^{p + c}_{X'/S'})
+}
+$$
+is commutative. The left vertical arrow is functoriality of modules of
+differentials and the right vertical arrow uses
+Cohomology, Remark \ref{cohomology-remark-support-functorial}.
+\end{lemma}
+
+\begin{proof}
+More precisely, consider the composition
+\begin{align*}
+\mathcal{O}_{X'}|_{Z'} \otimes_{h^{-1}\mathcal{O}_X|_Z}^\mathbf{L}
+h^{-1}R\mathcal{H}_Z(\Omega^{p + c}_{X/S})
+& \to
+R\mathcal{H}_{Z'}(Lg^*\Omega^{p + c}_{X/S}) \\
+& \to
+R\mathcal{H}_{Z'}(g^*\Omega^{p + c}_{X/S}) \\
+& \to
+R\mathcal{H}_{Z'}(\Omega^{p + c}_{X'/S'})
+\end{align*}
+where the first arrow is given by
+Cohomology, Remark \ref{cohomology-remark-support-functorial}
+and the last one by functoriality of differentials.
+Since we have the vanishing of cohomology sheaves in degrees $> c$
+by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-supported-trivial-vanishing}
+this induces the right vertical arrow.
+We can check the commutativity locally.
+Thus we may assume $Z$ is cut out by
+$f_1, \ldots, f_c \in \Gamma(X, \mathcal{O}_X)$.
+Then $Z'$ is cut out by $f'_i = g^\sharp(f_i)$.
+The maps $c_{f_1, \ldots, f_c}$ and $c_{f'_1, \ldots, f'_c}$
+fit into the commutative diagram
+$$
+\xymatrix{
+h^*i^*\Omega^p_{X/S} \ar[rr]_-{h^{-1}c_{f_1, \ldots, f_c}} \ar[d] & &
+\mathcal{O}_{X'}|_{Z'} \otimes_{h^{-1}\mathcal{O}_X|_Z}
+h^{-1}\mathcal{H}^c_Z(\Omega^p_{X/S}) \ar[d] \\
+(i')^*\Omega^p_{X'/S'} \ar[rr]^{c_{f'_1, \ldots, f'_c}} & &
+\mathcal{H}^c_{Z'}(\Omega^p_{X'/S'})
+}
+$$
+See Derived Categories of Schemes, Remark
+\ref{perfect-remark-supported-functorial}.
+Recall given a $p$-form $\omega$ on $Z$ we define
+$\gamma^p(\omega)$ by choosing (locally on $X$ and $Z$)
+a $p$-form $\tilde \omega$ on $X$ lifting $\omega$ and taking
+$\gamma^p(\omega) =
+c_{f_1, \ldots, f_c}(\tilde \omega) \wedge
+\text{d}f_1 \wedge \ldots \wedge \text{d}f_c$.
+Since the form $\text{d}f_1 \wedge \ldots \wedge \text{d}f_c$
+pulls back to
+$\text{d}f'_1 \wedge \ldots \wedge \text{d}f'_c$ we conclude.
+\end{proof}
+
+\begin{remark}
+\label{remark-how-to-use}
+Let $X \to S$, $i : Z \to X$, and $c \geq 0$ be as in
+Lemma \ref{lemma-gysin-global}.
+Let $p \geq 0$ and assume that $\mathcal{H}^i_Z(\Omega^{p + c}_{X/S}) = 0$
+for $i = 0, \ldots, c - 1$. This vanishing holds if $X \to S$ is smooth
+and $Z \to X$ is a Koszul regular immersion, see
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-supported-vanishing}.
+Then we obtain a map
+$$
+\gamma^{p, q} :
+H^q(Z, \Omega^p_{Z/S})
+\longrightarrow
+H^{q + c}(X, \Omega^{p + c}_{X/S})
+$$
+by first using
+$\gamma^p : \Omega^p_{Z/S} \to \mathcal{H}^c_Z(\Omega^{p + c}_{X/S})$
+to map into
+$$
+H^q(Z, \mathcal{H}^c_Z(\Omega^{p + c}_{X/S})) =
+H^q(Z, R\mathcal{H}_Z(\Omega^{p + c}_{X/S})[c]) =
+H^q(X, i_*R\mathcal{H}_Z(\Omega^{p + c}_{X/S})[c])
+$$
+and then using the adjunction map
+$i_*R\mathcal{H}_Z(\Omega^{p + c}_{X/S}) \to \Omega^{p + c}_{X/S}$
+to continue on to the desired Hodge cohomology module.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-gysin-differential-hodge}
+Let $X \to S$ and $i : Z \to X$ be as in Lemma \ref{lemma-gysin-global}.
+Assume $X \to S$ is smooth and $Z \to X$ Koszul regular.
+The gysin maps $\gamma^{p, q}$ are compatible with the de Rham
+differentials on $\Omega^\bullet_{X/S}$ and $\Omega^\bullet_{Z/S}$.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from
+Lemma \ref{lemma-gysin-differential-global}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-projection-global}
+Let $X \to S$, $i : Z \to X$, and $c \geq 0$ be as in
+Lemma \ref{lemma-gysin-global}. Assume $X \to S$ smooth and
+$Z \to X$ Koszul regular. Given $\alpha \in H^q(X, \Omega^p_{X/S})$ we have
+$\gamma^{p, q}(\alpha|_Z) = \alpha \cup \gamma^{0, 0}(1)$ in
+$H^{q + c}(X, \Omega^{p + c}_{X/S})$ with $\gamma^{a, b}$ as in
+Remark \ref{remark-how-to-use}.
+\end{lemma}
+
+\begin{proof}
+This lemma follows from Lemma \ref{lemma-gysin-projection}
+and Cohomology, Lemma \ref{cohomology-lemma-support-cup-product}.
+We suggest the reader skip over the more detailed discussion below.
+
+\medskip\noindent
+We will use without further mention that
+$R\mathcal{H}_Z(\Omega^j_{X/S}) = \mathcal{H}^c_Z(\Omega^j_{X/S})[-c]$
+for all $j$ as pointed out in Remark \ref{remark-how-to-use}.
+We will also silently use the identifications
+$H^{q + c}_Z(X, \Omega^j_{X/S}) = H^{q + c}(Z, R\mathcal{H}_Z(\Omega^j_{X/S})) =
+H^q(Z, \mathcal{H}^c_Z(\Omega^j_{X/S}))$, see
+Cohomology, Lemma \ref{cohomology-lemma-local-to-global-sections-with-support}
+for the first one. With these identifications
+\begin{enumerate}
+\item $\gamma^0(1) \in H^c_Z(X, \Omega^c_{X/S})$ maps to $\gamma^{0, 0}(1)$
+in $H^c(X, \Omega^c_{X/S})$,
+\item the right hand side $i^{-1}\alpha \wedge \gamma^0(1)$
+of the equality in Lemma \ref{lemma-gysin-projection}
+is the (image by wedge product of the) cup product of
+Cohomology, Remark \ref{cohomology-remark-support-cup-product-global}
+of the elements $\alpha$ and $\gamma^0(1)$, in other words, the constructions
+in the proof of Lemma \ref{lemma-gysin-projection} and in
+Cohomology, Remark \ref{cohomology-remark-support-cup-product-global} match,
+\item by Cohomology, Lemma \ref{cohomology-lemma-support-cup-product}
+this maps to $\alpha \cup \gamma^{0, 0}(1)$ in
+$H^{q + c}(X, \Omega^p_{X/S} \otimes \Omega^c_{X/S})$, and
+\item the left hand side $\gamma^p(\alpha|_Z)$ of the equality in
+Lemma \ref{lemma-gysin-projection} maps to
+$\gamma^{p, q}(\alpha|_Z)$.
+\end{enumerate}
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gysin-transverse-global}
+Let $c \geq 0$ and
+$$
+\xymatrix{
+Z' \ar[d]_h \ar[r] & X' \ar[d]_g \ar[r] & S' \ar[d] \\
+Z \ar[r] & X \ar[r] & S
+}
+$$
+satisfy the assumptions of Lemma \ref{lemma-gysin-transverse} and assume
+in addition that $X \to S$ and $X' \to S'$ are smooth and that
+$Z \to X$ and $Z' \to X'$ are Koszul regular immersions.
+Then the diagram
+$$
+\xymatrix{
+H^q(Z, \Omega^p_{Z/S}) \ar[rr]_-{\gamma^{p, q}} \ar[d] & &
+H^{q + c}(X, \Omega^{p + c}_{X/S}) \ar[d] \\
+H^q(Z', \Omega^p_{Z'/S'}) \ar[rr]^{\gamma^{p, q}} & &
+H^{q + c}(X', \Omega^{p + c}_{X'/S'})
+}
+$$
+is commutative where $\gamma^{p, q}$ is as in Remark \ref{remark-how-to-use}.
+\end{lemma}
+
+\begin{proof}
+This follows on combining Lemma \ref{lemma-gysin-transverse}
+and Cohomology, Lemma \ref{cohomology-lemma-support-functorial}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-class-of-a-point}
+Let $k$ be a field. Let $X$ be an irreducible smooth proper scheme over $k$
+of dimension $d$. Let $Z \subset X$ be the reduced closed subscheme consisting
+of a single $k$-rational point $x$. Then the image of
+$1 \in k = H^0(Z, \mathcal{O}_Z) = H^0(Z, \Omega^0_{Z/k})$
+by the map $H^0(Z, \Omega^0_{Z/k}) \to H^d(X, \Omega^d_{X/k})$
+of Remark \ref{remark-how-to-use} is nonzero.
+\end{lemma}
+
+\begin{proof}
+The map $\gamma^0 : \mathcal{O}_Z \to
+\mathcal{H}^d_Z(\Omega^d_{X/k}) = R\mathcal{H}_Z(\Omega^d_{X/k})[d]$
+is adjoint to a map
+$$
+g^0 : i_*\mathcal{O}_Z \longrightarrow \Omega^d_{X/k}[d]
+$$
+in $D(\mathcal{O}_X)$. Recall that $\Omega^d_{X/k} = \omega_X$ is a
+dualizing sheaf for $X/k$, see
+Duality for Schemes, Lemma \ref{duality-lemma-duality-proper-over-field}.
+Hence the $k$-linear dual of the map in the statement
+of the lemma is the map
+$$
+H^0(X, \mathcal{O}_X) \to \Ext^d_X(i_*\mathcal{O}_Z, \omega_X)
+$$
+which sends $1$ to $g^0$. Thus it suffices to show that $g^0$ is nonzero.
+This we may do in any neighbourhood $U$ of the point $x$. Choose $U$
+such that there exist $f_1, \ldots, f_d \in \mathcal{O}_X(U)$
+vanishing only at $x$ and generating the maximal ideal
+$\mathfrak m_x \subset \mathcal{O}_{X, x}$. We may assume
+$U = \Spec(R)$ is affine. Looking over the
+construction of $\gamma^0$ we find that our extension is given by
+$$
+k \to
+(R \to \bigoplus\nolimits_{i_0} R_{f_{i_0}} \to
+\bigoplus\nolimits_{i_0 < i_1} R_{f_{i_0}f_{i_1}} \to
+\ldots \to R_{f_1\ldots f_d})[d] \to R[d]
+$$
+where $1$ maps to $1/f_1 \ldots f_d$ under the first map.
+This is nonzero because $1/f_1 \ldots f_d$ is a nonzero element
+of the local cohomology group $H^d_{(f_1, \ldots, f_d)}(R)$ in this case.
+\end{proof}
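+
+\noindent
+Here is a minimal illustration of the final step of the proof above:
+take $d = 1$, $R = k[t]$, and $f_1 = t$. Then $H^1_{(t)}(k[t])$ is the
+cokernel of the localization map $k[t] \to k[t]_t$ and the class of
+$1/t$ is visibly nonzero there.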
+
+
+
+
+
+
+\section{Relative Poincar\'e duality}
+\label{section-relative-poincare-duality}
+
+\noindent
+In this section we prove Poincar\'e duality for the relative de Rham cohomology
+of a proper smooth scheme over a base. We strongly urge the reader to
+look at Section \ref{section-poincare-duality} first.
+
+\begin{situation}
+\label{situation-relative-duality}
+Here $S$ is a quasi-compact and quasi-separated scheme and
+$f : X \to S$ is a proper smooth morphism of schemes all of whose
+fibres are nonempty and equidimensional of dimension $n$.
+\end{situation}
+
+\begin{lemma}
+\label{lemma-relative-bottom-part-degenerates}
+In Situation \ref{situation-relative-duality} the pushforward
+$f_*\mathcal{O}_X$ is a finite \'etale $\mathcal{O}_S$-algebra
+and locally on $S$ we have $Rf_*\mathcal{O}_X = f_*\mathcal{O}_X \oplus P$
+in $D(\mathcal{O}_S)$ with $P$ perfect of tor amplitude in $[1, \infty)$.
+The map $\text{d} : f_*\mathcal{O}_X \to f_*\Omega_{X/S}$ is zero.
+\end{lemma}
+
+\begin{proof}
+The first part of the statement follows from
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-proper-flat-geom-red}.
+Setting $S' = \underline{\Spec}_S(f_*\mathcal{O}_X)$ we get a factorization
+$X \to S' \to S$ (this is the Stein factorization, see
+More on Morphisms, Section \ref{more-morphisms-section-stein-factorization},
+although we don't need this)
+and we see that $\Omega_{X/S} = \Omega_{X/S'}$ for example by
+Morphisms, Lemmas \ref{morphisms-lemma-triangle-differentials} and
+\ref{morphisms-lemma-etale-at-point}. This of course implies that
+$\text{d} : f_*\mathcal{O}_X \to f_*\Omega_{X/S}$ is zero.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-duality-hodge}
+In Situation \ref{situation-relative-duality} there exists an
+$\mathcal{O}_S$-module map
+$$
+t : Rf_*\Omega^n_{X/S}[n] \longrightarrow \mathcal{O}_S
+$$
+unique up to precomposing by multiplication by a unit of
+$H^0(X, \mathcal{O}_X)$ with the following property: for all $p$ the pairing
+$$
+Rf_*\Omega^p_{X/S}
+\otimes_{\mathcal{O}_S}^\mathbf{L}
+Rf_*\Omega^{n - p}_{X/S}[n]
+\longrightarrow
+\mathcal{O}_S
+$$
+given by the relative cup product composed with $t$
+is a perfect pairing of perfect complexes on $S$.
+\end{lemma}
+
+\begin{proof}
+Let $\omega^\bullet_{X/S}$ be the relative dualizing complex of $X$ over $S$ as
+in Duality for Schemes, Remark \ref{duality-remark-relative-dualizing-complex}
+and let $Rf_*\omega_{X/S}^\bullet \to \mathcal{O}_S$ be its trace map. By
+Duality for Schemes, Lemma \ref{duality-lemma-smooth-proper}
+there exists an isomorphism $\omega^\bullet_{X/S} \cong \Omega^n_{X/S}[n]$
+and using this isomorphism we obtain $t$. The complexes $Rf_*\Omega^p_{X/S}$
+are perfect by Lemma \ref{lemma-proper-smooth-de-Rham}.
+Since $\Omega^p_{X/S}$ is locally free and since
+$\Omega^p_{X/S} \otimes_{\mathcal{O}_X} \Omega^{n - p}_{X/S} \to
+\Omega^n_{X/S}$ exhibits an isomorphism $\Omega^p_{X/S} \cong
+\SheafHom_{\mathcal{O}_X}(\Omega^{n - p}_{X/S}, \Omega^n_{X/S})$
+we see that the pairing induced by the relative cup product is perfect by
+Duality for Schemes, Remark
+\ref{duality-remark-relative-dualizing-complex-relative-cup-product}.
+
+\medskip\noindent
+Uniqueness of $t$. Choose a distinguished triangle
+$f_*\mathcal{O}_X \to Rf_*\mathcal{O}_X \to P \to f_*\mathcal{O}_X[1]$.
+By Lemma \ref{lemma-relative-bottom-part-degenerates}
+the object $P$ is perfect of tor amplitude in $[1, \infty)$
+and the triangle is locally on $S$ split.
+Thus $R\SheafHom_{\mathcal{O}_X}(P, \mathcal{O}_X)$ is perfect
+of tor amplitude in $(-\infty, -1]$. Hence duality (above) shows that
+locally on $S$ we have
+$$
+Rf_*\Omega^n_{X/S}[n] \cong
+R\SheafHom_{\mathcal{O}_S}(f_*\mathcal{O}_X, \mathcal{O}_S)
+\oplus R\SheafHom_{\mathcal{O}_X}(P, \mathcal{O}_X)
+$$
+This shows that $R^nf_*\Omega^n_{X/S}$ is finite locally free and
+that we obtain a perfect $\mathcal{O}_S$-bilinear pairing
+$$
+f_*\mathcal{O}_X \times R^nf_*\Omega^n_{X/S} \longrightarrow \mathcal{O}_S
+$$
+using $t$.
+This implies that any $\mathcal{O}_S$-linear map
+$t' : R^nf_*\Omega^n_{X/S} \to \mathcal{O}_S$ is of the form
+$t' = t \circ g$ for some
+$g \in \Gamma(S, f_*\mathcal{O}_X) = \Gamma(X, \mathcal{O}_X)$.
+In order for $t'$ to still determine a perfect pairing, $g$ will have
+to be a unit. This finishes the proof.
+\end{proof}
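+
+\noindent
+For example, if $S$ is the spectrum of a field $k$ and the fibres of $f$
+are in addition geometrically connected, then
+$\Gamma(X, \mathcal{O}_X) = k$ by Varieties, Lemma
+\ref{varieties-lemma-proper-geometrically-reduced-global-sections},
+and hence the trace map $t$ of Lemma \ref{lemma-relative-duality-hodge}
+is unique up to multiplication by an element of $k^*$.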
+
+\begin{lemma}
+\label{lemma-relative-top-part-degenerates}
+In Situation \ref{situation-relative-duality} the map
+$\text{d} : R^nf_*\Omega^{n - 1}_{X/S} \to R^nf_*\Omega^n_{X/S}$
+is zero.
+\end{lemma}
+
+\noindent
+As we mentioned in the proof of Lemma \ref{lemma-top-part-degenerates}
+this lemma is not an easy consequence of Lemmas
+\ref{lemma-relative-duality-hodge} and
+\ref{lemma-relative-bottom-part-degenerates}.
+
+\begin{proof}[Proof in case $S$ is reduced]
+Assume $S$ is reduced. Observe that
+$\text{d} : R^nf_*\Omega^{n - 1}_{X/S} \to R^nf_*\Omega^n_{X/S}$
+is an $\mathcal{O}_S$-linear map of (quasi-coherent) $\mathcal{O}_S$-modules.
+The $\mathcal{O}_S$-module $R^nf_*\Omega^n_{X/S}$ is finite locally free
+(as the dual of the finite locally free $\mathcal{O}_S$-module
+$f_*\mathcal{O}_X$ by Lemmas
+\ref{lemma-relative-duality-hodge} and
+\ref{lemma-relative-bottom-part-degenerates}).
+Since $S$ is reduced it suffices to show that
+the stalk of $\text{d}$ in every generic point $\eta \in S$
+is zero; this follows by looking at sections over affine opens,
+using that the target of $\text{d}$ is locally free, and
+Algebra, Lemma \ref{algebra-lemma-reduced-ring-sub-product-fields} part (2).
+Since $S$ is reduced we have $\mathcal{O}_{S, \eta} = \kappa(\eta)$, see
+Algebra, Lemma \ref{algebra-lemma-minimal-prime-reduced-ring}.
+Thus $\text{d}_\eta$ is identified with the map
+$$
+\text{d} :
+H^n(X_\eta, \Omega^{n - 1}_{X_\eta/\kappa(\eta)})
+\longrightarrow
+H^n(X_\eta, \Omega^n_{X_\eta/\kappa(\eta)})
+$$
+which is zero by Lemma \ref{lemma-top-part-degenerates}.
+\end{proof}
+
+\begin{proof}[Proof in the general case]
+Observe that the question is flat local on $S$: if $S' \to S$ is a surjective
+flat morphism of schemes and the map is zero after pullback to $S'$,
+then the map is zero. Also, formation of the map commutes with base change
+by flat morphisms by flat base change (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-flat-base-change-cohomology}).
+
+\medskip\noindent
+Consider the Stein factorization $X \to S' \to S$ as in
+More on Morphisms, Theorem
+\ref{more-morphisms-theorem-stein-factorization-general}.
+By Lemma \ref{lemma-relative-bottom-part-degenerates} the morphism
+$\pi : S' \to S$ is finite \'etale.
+The morphism $f : X \to S'$ is proper (by the theorem),
+smooth (by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-smooth-etale-permanence}) with geometrically
+connected fibres by the theorem on Stein factorization.
+In the proof of Lemma \ref{lemma-relative-bottom-part-degenerates}
+we saw that $\Omega_{X/S} = \Omega_{X/S'}$ because $S' \to S$ is \'etale.
+Hence $\Omega^\bullet_{X/S} = \Omega^\bullet_{X/S'}$.
+We have
+$$
+R^qf_*\Omega^p_{X/S} = \pi_*R^qf'_*\Omega^p_{X/S'}
+$$
+for all $p, q$ by the Leray spectral sequence
+(Cohomology, Lemma \ref{cohomology-lemma-relative-Leray}),
+the fact that $\pi$ is finite hence affine, and
+Cohomology of Schemes, Lemma \ref{coherent-lemma-relative-affine-vanishing}
+(of course we also use that $R^qf'_*\Omega^p_{X/S'}$ is
+quasi-coherent).
+Thus the map of the lemma is $\pi_*$ applied to
+$\text{d} : R^nf'_*\Omega^{n - 1}_{X/S'} \to R^nf'_*\Omega^n_{X/S'}$.
+In other words, in order to prove the lemma we may replace
+$f : X \to S$ by $f' : X \to S'$ to reduce to the case discussed
+in the next paragraph.
+
+\medskip\noindent
+Assume $f$ has geometrically connected fibres and
+$f_*\mathcal{O}_X = \mathcal{O}_S$.
+For every $s \in S$ we can choose an \'etale neighbourhood
+$(S', s') \to (S, s)$ such that the base change $X' \to S'$ of $X \to S$
+has a section. See More on Morphisms, Lemma
+\ref{more-morphisms-lemma-etale-nbhd-dominates-smooth}.
+By the initial remarks of the proof this reduces us to the case
+discussed in the next paragraph.
+
+\medskip\noindent
+Assume $f$ has geometrically connected fibres,
+$f_*\mathcal{O}_X = \mathcal{O}_S$, and we have
+a section $s : S \to X$ of $f$. We may and do assume $S = \Spec(A)$
+is affine. The map
+$s^* : R\Gamma(X, \mathcal{O}_X) \to R\Gamma(S, \mathcal{O}_S) = A$
+is a splitting of the map $A \to R\Gamma(X, \mathcal{O}_X)$. Thus we can write
+$$
+R\Gamma(X, \mathcal{O}_X) = A \oplus P
+$$
+where $P$ is the ``kernel'' of $s^*$. By
+Lemma \ref{lemma-relative-bottom-part-degenerates} the object $P$
+of $D(A)$ is perfect of tor amplitude in $[1, n]$. As in the proof
+of Lemma \ref{lemma-relative-duality-hodge} we see that
+$H^n(X, \Omega^n_{X/S})$ is a locally free $A$-module of rank $1$
+(and in fact dual to $A$ so free of rank $1$ -- we will soon choose
+a generator but we don't want to check it is the same generator
+nor will it be necessary to do so).
+
+\medskip\noindent
+Denote $Z \subset X$ the image of $s$ which is a closed subscheme of $X$ by
+Schemes, Lemma \ref{schemes-lemma-section-immersion}.
+Observe that $Z \to X$ is a regular (and a fortiori Koszul regular by
+Divisors, Lemma \ref{divisors-lemma-regular-quasi-regular-immersion})
+closed immersion by
+Divisors, Lemma \ref{divisors-lemma-section-smooth-regular-immersion}.
+Of course $Z \to X$ has codimension $n$. Thus by
+Remark \ref{remark-how-to-use}
+we can consider the map
+$$
+\gamma^{0, 0} : H^0(Z, \Omega^0_{Z/S}) \longrightarrow H^n(X, \Omega^n_{X/S})
+$$
+and we set $\xi = \gamma^{0, 0}(1) \in H^n(X, \Omega^n_{X/S})$.
+
+\medskip\noindent
+We claim $\xi$ is a basis element. Namely, since we have base change in
+top degree (see for example Limits, Lemma
+\ref{limits-lemma-higher-direct-images-zero-above-dimension-fibre})
+we see that
+$H^n(X, \Omega^n_{X/S}) \otimes_A k = H^n(X_k, \Omega^n_{X_k/k})$
+for any ring map $A \to k$. By the compatibility of
+the construction of $\xi$ with base change,
+see Lemma \ref{lemma-gysin-transverse-global},
+we see that the image of $\xi$ in $H^n(X_k, \Omega^n_{X_k/k})$
+is nonzero by Lemma \ref{lemma-class-of-a-point} if $k$ is a field.
+Thus $\xi$ is a nowhere vanishing section of an invertible module
+and hence a generator.
+
+\medskip\noindent
+Let $\theta \in H^n(X, \Omega^{n - 1}_{X/S})$. We have to show that
+$\text{d}(\theta)$ is zero in $H^n(X, \Omega^n_{X/S})$.
+We may write $\text{d}(\theta) = a \xi$ for some $a \in A$
+as $\xi$ is a basis element. Then we have to show $a = 0$.
+
+\medskip\noindent
+Consider the closed immersion
+$$
+\Delta : X \to X \times_S X
+$$
+This is also a section of a smooth morphism (namely either projection)
+and hence a regular and Koszul immersion of codimension $n$ as well.
+Thus we can consider the maps
+$$
+\gamma^{p, q} :
+H^q(X, \Omega^p_{X/S})
+\longrightarrow
+H^{q + n}(X \times_S X, \Omega^{p + n}_{X \times_S X/S})
+$$
+of Remark \ref{remark-how-to-use}. Consider the image
+$$
+\gamma^{n - 1, n}(\theta) \in
+H^{2n}(X \times_S X, \Omega^{2n - 1}_{X \times_S X/S})
+$$
+By Lemma \ref{lemma-de-rham-complex-product} we have
+$$
+\Omega^{2n - 1}_{X \times_S X/S} =
+\Omega^{n - 1}_{X/S} \boxtimes \Omega^n_{X/S} \oplus
+\Omega^n_{X/S} \boxtimes \Omega^{n - 1}_{X/S}
+$$
+By the K\"unneth formula (either
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-kunneth} or
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-kunneth-single-sheaf})
+we see that
+$$
+H^{2n}(X \times_S X, \Omega^{n - 1}_{X/S} \boxtimes \Omega^n_{X/S}) =
+H^n(X, \Omega^{n - 1}_{X/S}) \otimes_A H^n(X, \Omega^n_{X/S})
+$$
+and
+$$
+H^{2n}(X \times_S X, \Omega^n_{X/S} \boxtimes \Omega^{n - 1}_{X/S}) =
+H^n(X, \Omega^n_{X/S}) \otimes_A H^n(X, \Omega^{n - 1}_{X/S})
+$$
+Namely, since we are looking in top degree there are no higher tor groups
+that intervene. Combined with the fact that $\xi$ is a generator this means
+we can write
+$$
+\gamma^{n - 1, n}(\theta) = \theta_1 \otimes \xi + \xi \otimes \theta_2
+$$
+with $\theta_1, \theta_2 \in H^n(X, \Omega^{n - 1}_{X/S})$.
+Arguing in exactly the same manner we can write
+$$
+\gamma^{n, n}(\xi) = b \xi \otimes \xi
+$$
+in
+$H^{2n}(X \times_S X, \Omega^{2n}_{X \times_S X/S}) =
+H^n(X, \Omega^n_{X/S}) \otimes_A H^n(X, \Omega^n_{X/S})$
+for some $b \in H^0(S, \mathcal{O}_S)$.
+
+\medskip\noindent
+{\bf Claim:} $\theta_1 = \theta$, $\theta_2 = \theta$, and $b = 1$.
+Let us show that the claim implies the desired result $a = 0$.
+Namely, by Lemma \ref{lemma-gysin-differential-hodge}
+we have
+$$
+\gamma^{n, n}(\text{d}(\theta)) = \text{d}(\gamma^{n - 1, n}(\theta))
+$$
+By our choices above this gives
+$$
+a \xi \otimes \xi =
+\gamma^{n, n}(a\xi) =
+\text{d}(\theta \otimes \xi + \xi \otimes \theta) =
+a \xi \otimes \xi + (-1)^n a \xi \otimes \xi
+$$
+The right most equality comes from the fact that the map
+$\text{d} : \Omega^{2n - 1}_{X \times_S X/S} \to \Omega^{2n}_{X \times_S X/S}$
+by Lemma \ref{lemma-de-rham-complex-product}
+is the sum of the differential
+$\text{d} \boxtimes 1 : \Omega^{n - 1}_{X/S} \boxtimes \Omega^n_{X/S}
+\to \Omega^n_{X/S} \boxtimes \Omega^n_{X/S}$
+and the differential
+$(-1)^n 1 \boxtimes \text{d} : \Omega^n_{X/S} \boxtimes \Omega^{n - 1}_{X/S}
+\to \Omega^n_{X/S} \boxtimes \Omega^n_{X/S}$. Please see discussion in
+Section \ref{section-kunneth} and
+Derived Categories of Schemes, Section
+\ref{perfect-section-kunneth-complexes} for more information.
+Since $\xi \otimes \xi$ is a basis for the rank $1$ free $A$-module
+$H^n(X, \Omega^n_{X/S}) \otimes_A H^n(X, \Omega^n_{X/S})$
+we conclude
+$$
+a = a + (-1)^n a \Rightarrow a = 0
+$$
+as desired.
+
+\medskip\noindent
+In the rest of the proof we prove the claim above. Let us denote
+$\eta = \gamma^{0, 0}(1) \in H^n(X \times_S X, \Omega^n_{X \times_S X/S})$.
+Since $\Omega^n_{X \times_S X/S} =
+\bigoplus_{p + p' = n} \Omega^p_{X/S} \boxtimes \Omega^{p'}_{X/S}$
+we may write
+$$
+\eta = \eta_0 + \eta_1 + \ldots + \eta_n
+$$
+where $\eta_p$ is in
+$H^n(X \times_S X, \Omega^p_{X/S} \boxtimes \Omega^{n - p}_{X/S})$.
+For $p = 0$ we can write
+\begin{align*}
+H^n(X \times_S X, \mathcal{O}_X \boxtimes \Omega^n_{X/S})
+& =
+H^n(R\Gamma(X, \mathcal{O}_X) \otimes_A^\mathbf{L}
+R\Gamma(X, \Omega^n_{X/S})) \\
+& =
+A \otimes_A H^n(X, \Omega^n_{X/S}) \oplus
+H^n(P \otimes_A^\mathbf{L} R\Gamma(X, \Omega^n_{X/S}))
+\end{align*}
+by our previously given decomposition $R\Gamma(X, \mathcal{O}_X) = A \oplus P$.
+Consider the morphism $(s, \text{id}) : X \to X \times_S X$.
+Then $(s, \text{id})^{-1}(\Delta) = Z$ scheme theoretically.
+Hence we see that $(s, \text{id})^*\eta = \xi$ by
+Lemma \ref{lemma-gysin-transverse-global}. This means that
+$$
+\xi = (s, \text{id})^*\eta = (s^* \otimes \text{id})(\eta_0)
+$$
+This means exactly that the first component of $\eta_0$
+in the direct sum decomposition above is $\xi$. In other words, we can write
+$$
+\eta_0 = 1 \otimes \xi + \eta'_0
+$$
+with $\eta'_0 \in H^n(P \otimes_A^\mathbf{L} R\Gamma(X, \Omega^n_{X/S}))$.
+In exactly the same manner for $p = n$ we can write
+\begin{align*}
+H^n(X \times_S X, \Omega^n_{X/S} \boxtimes \mathcal{O}_X)
+& =
+H^n(R\Gamma(X, \Omega^n_{X/S}) \otimes_A^\mathbf{L}
+R\Gamma(X, \mathcal{O}_X)) \\
+& =
+H^n(X, \Omega^n_{X/S}) \otimes_A A \oplus
+H^n(R\Gamma(X, \Omega^n_{X/S}) \otimes_A^\mathbf{L} P)
+\end{align*}
+and we can write
+$$
+\eta_n = \xi \otimes 1 + \eta'_n
+$$
+with $\eta'_n \in H^n(R\Gamma(X, \Omega^n_{X/S}) \otimes_A^\mathbf{L} P)$.
+
+\medskip\noindent
+Observe that $\text{pr}_1^*\theta = \theta \otimes 1$
+and $\text{pr}_2^*\theta = 1 \otimes \theta$ are
+Hodge cohomology classes on
+$X \times_S X$ which pull back to $\theta$ by $\Delta$.
+Hence by Lemma \ref{lemma-gysin-projection-global} we have
+$$
+\theta_1 \otimes \xi + \xi \otimes \theta_2 =
+\gamma^{n - 1, n}(\theta) =
+(\theta \otimes 1) \cup \eta =
+(1 \otimes \theta) \cup \eta
+$$
+in the Hodge cohomology ring of $X \times_S X$ over $S$.
+In terms of the direct sum decomposition on the modules
+of differentials of $X \times_S X/S$ we obtain
+$$
+\theta_1 \otimes \xi =
+(\theta \otimes 1) \cup \eta_0
+\quad\text{and}\quad
+\xi \otimes \theta_2 =
+(1 \otimes \theta) \cup \eta_n
+$$
+Looking at the formula $\eta_0 = 1 \otimes \xi + \eta'_0$ we found above,
+we see that to show that $\theta_1 = \theta$ it suffices to prove that
+$$
+(\theta \otimes 1) \cup \eta'_0 = 0
+$$
+To do this, observe that cupping with $\theta \otimes 1$ is given
+by the action on cohomology of the map
+$$
+(P \otimes_A^\mathbf{L} R\Gamma(X, \Omega^n_{X/S}))[-n]
+\xrightarrow{\theta \otimes 1}
+R\Gamma(X, \Omega^{n - 1}_{X/S}) \otimes_A^\mathbf{L}
+R\Gamma(X, \Omega^n_{X/S})
+$$
+in the derived category, see Cohomology, Remark
+\ref{cohomology-remark-cup-with-element-map-total-cohomology}.
+This map is the derived tensor product of the two maps
+$$
+\theta : P[-n] \to R\Gamma(X, \Omega^{n - 1}_{X/S})
+\quad\text{and}\quad
+1 : R\Gamma(X, \Omega^n_{X/S}) \to R\Gamma(X, \Omega^n_{X/S})
+$$
+by Derived Categories of Schemes, Remark
+\ref{perfect-remark-annoying-compatibility}.
+However, the first of these is zero in $D(A)$ because it is a map from
+a perfect complex of tor amplitude in $[n + 1, 2n]$ to a complex
+with cohomology only in degrees $0, 1, \ldots, n$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-splitting-unique}.
+A similar argument works to show the vanishing of
+$(1 \otimes \theta) \cup \eta'_n$. Finally,
+in exactly the same manner we obtain
+$$
+b \xi \otimes \xi = \gamma^{n, n}(\xi) = (\xi \otimes 1) \cup \eta_0
+$$
+and we conclude as before by showing that
+$(\xi \otimes 1) \cup \eta'_0 = 0$ in the same manner as above.
+This finishes the proof.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-relative-poincare-duality}
+Let $S$ be a quasi-compact and quasi-separated scheme. Let $f : X \to S$
+be a proper smooth morphism of schemes all of whose fibres are nonempty
+and equidimensional of dimension $n$. There exists an
+$\mathcal{O}_S$-module map
+$$
+t : R^{2n}f_*\Omega^\bullet_{X/S} \longrightarrow \mathcal{O}_S
+$$
+unique up to precomposing by multiplication by a unit of
+$H^0(X, \mathcal{O}_X)$ with the following property: the pairing
+$$
+Rf_*\Omega^\bullet_{X/S}
+\otimes_{\mathcal{O}_S}^\mathbf{L}
+Rf_*\Omega^\bullet_{X/S}[2n]
+\longrightarrow
+\mathcal{O}_S, \quad
+(\xi, \xi') \longmapsto t(\xi \cup \xi')
+$$
+is a perfect pairing of perfect complexes on $S$.
+\end{proposition}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Proposition \ref{proposition-poincare-duality}.
+
+\medskip\noindent
+By the relative Hodge-to-de Rham spectral sequence
+$$
+E_1^{p, q} = R^qf_*\Omega^p_{X/S} \Rightarrow R^{p + q}f_*\Omega^\bullet_{X/S}
+$$
+(Section \ref{section-hodge-to-de-rham}), the vanishing
of $\Omega^i_{X/S}$ for $i > n$, the vanishing given, for example, in Limits, Lemma
+\ref{limits-lemma-higher-direct-images-zero-above-dimension-fibre}
+and the results of Lemmas \ref{lemma-relative-bottom-part-degenerates} and
+\ref{lemma-relative-top-part-degenerates}
we see that $R^0f_*\Omega^\bullet_{X/S} = R^0f_*\mathcal{O}_X$
+and $R^nf_*\Omega^n_{X/S} = R^{2n}f_*\Omega^\bullet_{X/S}$.
More precisely, these identifications come from the maps
+of complexes
+$$
+\Omega^\bullet_{X/S} \to \mathcal{O}_X[0]
+\quad\text{and}\quad
+\Omega^n_{X/S}[-n] \to \Omega^\bullet_{X/S}
+$$
Let us choose $t : R^{2n}f_*\Omega^\bullet_{X/S} \to \mathcal{O}_S$
+which via this identification corresponds to a $t$ as in
+Lemma \ref{lemma-relative-duality-hodge}.
+
+\medskip\noindent
+Let us abbreviate $\Omega^\bullet = \Omega^\bullet_{X/S}$.
+Consider the map (\ref{equation-wedge}) which in our situation reads
+$$
+\wedge :
+\text{Tot}(\Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S} \Omega^\bullet)
+\longrightarrow
+\Omega^\bullet
+$$
+For every integer $p = 0, 1, \ldots, n$ this map annihilates the subcomplex
+$\text{Tot}(\sigma_{> p} \Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
+\sigma_{\geq n - p} \Omega^\bullet)$ for degree reasons.
+Hence we find that the restriction of $\wedge$ to the subcomplex
+$\text{Tot}(\Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
\sigma_{\geq n - p}\Omega^\bullet)$ factors through a map of complexes
+$$
+\gamma_p :
+\text{Tot}(\sigma_{\leq p} \Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
+\sigma_{\geq n - p} \Omega^\bullet)
+\longrightarrow
+\Omega^\bullet
+$$
+Using the same procedure as in Section \ref{section-cup-product} we obtain
+relative cup products
+$$
+Rf_*\sigma_{\leq p} \Omega^\bullet
+\otimes_{\mathcal{O}_S}^\mathbf{L}
+Rf_*\sigma_{\geq n - p}\Omega^\bullet
+\longrightarrow
+Rf_*\Omega^\bullet
+$$
+We will prove by induction on $p$ that these cup products via $t$
+induce perfect pairings between $Rf_*\sigma_{\leq p} \Omega^\bullet$
+and $Rf_*\sigma_{\geq n - p}\Omega^\bullet[2n]$. For $p = n$
+this is the assertion of the proposition.
+
+\medskip\noindent
+The base case is $p = 0$. In this case we have
+$$
+Rf_*\sigma_{\leq p}\Omega^\bullet = Rf_*\mathcal{O}_X
+\quad\text{and}\quad
+Rf_*\sigma_{\geq n - p}\Omega^\bullet[2n] = Rf_*(\Omega^n[-n])[2n] =
+Rf_*\Omega^n[n]
+$$
+In this case we simply obtain the pairing
+between $Rf_*\mathcal{O}_X$ and $Rf_*\Omega^n[n]$ of
+Lemma \ref{lemma-relative-duality-hodge} and the result is true.
+
+\medskip\noindent
+Induction step. Say we know the result is true for $p$. Then
+we consider the distinguished triangle
+$$
+\Omega^{p + 1}[-p - 1] \to
+\sigma_{\leq p + 1}\Omega^\bullet \to
+\sigma_{\leq p}\Omega^\bullet \to
+\Omega^{p + 1}[-p]
+$$
+and the distinguished triangle
+$$
+\sigma_{\geq n - p}\Omega^\bullet \to
+\sigma_{\geq n - p - 1}\Omega^\bullet \to
+\Omega^{n - p - 1}[-n + p + 1] \to
+(\sigma_{\geq n - p}\Omega^\bullet)[1]
+$$
+Observe that both are distinguished triangles in the homotopy category
+of complexes of sheaves of $f^{-1}\mathcal{O}_S$-modules; in particular the
+maps $\sigma_{\leq p}\Omega^\bullet \to \Omega^{p + 1}[-p]$ and
$\Omega^{n - p - 1}[-n + p + 1] \to (\sigma_{\geq n - p}\Omega^\bullet)[1]$
+are given by actual maps of complexes, namely using the differential
+$\Omega^p \to \Omega^{p + 1}$ and the differential
+$\Omega^{n - p - 1} \to \Omega^{n - p}$.
Consider the distinguished triangles obtained from these
distinguished triangles by applying $Rf_*$
+$$
+\xymatrix{
+Rf_*\sigma_{\leq p}\Omega^\bullet \ar[d]_a \\
+Rf_*\Omega^{p + 1}[-p - 1] \ar[d]_b \\
+Rf_*\sigma_{\leq p + 1}\Omega^\bullet \ar[d]_c \\
+Rf_*\sigma_{\leq p}\Omega^\bullet \ar[d]_d \\
+Rf_*\Omega^{p + 1}[-p - 1]
+}
+\quad\quad
+\xymatrix{
+Rf_*\sigma_{\geq n - p}\Omega^\bullet \\
+Rf_*\Omega^{n - p - 1}[-n + p + 1] \ar[u]_{a'} \\
+Rf_*\sigma_{\geq n - p - 1}\Omega^\bullet \ar[u]_{b'} \\
+Rf_*\sigma_{\geq n - p}\Omega^\bullet \ar[u]_{c'} \\
+Rf_*\Omega^{n - p - 1}[-n + p + 1] \ar[u]_{d'}
+}
+$$
+We will show below that the pairs $(a, a')$, $(b, b')$, $(c, c')$, and
+$(d, d')$ are compatible with the given pairings. This means we obtain a
map from the distinguished triangle on the left to the distinguished triangle
+obtained by applying $R\SheafHom(-, \mathcal{O}_S)$ to the distinguished
triangle on the right. By induction and Lemma \ref{lemma-relative-duality-hodge}
+we know that the pairings constructed above between the
+complexes on the first, second, fourth, and fifth
+rows are perfect, i.e., determine isomorphisms after taking duals.
+By Derived Categories, Lemma \ref{derived-lemma-third-isomorphism-triangle}
+we conclude the pairing between the complexes in the middle row
+is perfect as desired.
+
+\medskip\noindent
+Let $e : K \to K'$ and $e' : M' \to M$ be maps of objects
+of $D(\mathcal{O}_S)$ and let
+$K \otimes_{\mathcal{O}_S}^\mathbf{L} M \to \mathcal{O}_S$ and
+$K' \otimes_{\mathcal{O}_S}^\mathbf{L} M' \to \mathcal{O}_S$
+be pairings. Then we say these pairings are compatible if the
+diagram
+$$
+\xymatrix{
+K' \otimes_{\mathcal{O}_S}^\mathbf{L} M' \ar[d] &
+K \otimes_{\mathcal{O}_S}^\mathbf{L} M'
+\ar[l]^{e \otimes 1} \ar[d]^{1 \otimes e'} \\
+\mathcal{O}_S &
+K \otimes_{\mathcal{O}_S}^\mathbf{L} M \ar[l]
+}
+$$
+commutes. This indeed means that the diagram
+$$
+\xymatrix{
K \ar[r] \ar[d]_e & R\SheafHom(M, \mathcal{O}_S)
\ar[d]^{R\SheafHom(e', \mathcal{O}_S)} \\
+K' \ar[r] & R\SheafHom(M', \mathcal{O}_S)
+}
+$$
commutes, and hence this compatibility is sufficient for our purposes.
+
+\medskip\noindent
+Let us prove this for the pair $(c, c')$. Here we observe simply
+that we have a commutative diagram
+$$
+\xymatrix{
+\text{Tot}(\sigma_{\leq p} \Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
+\sigma_{\geq n - p} \Omega^\bullet) \ar[d]_{\gamma_p} &
+\text{Tot}(\sigma_{\leq p + 1} \Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
+\sigma_{\geq n - p} \Omega^\bullet) \ar[l] \ar[d] \\
+\Omega^\bullet &
+\text{Tot}(\sigma_{\leq p + 1} \Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
+\sigma_{\geq n - p - 1} \Omega^\bullet) \ar[l]_-{\gamma_{p + 1}}
+}
+$$
+By functoriality of the cup product we obtain commutativity of the
+desired diagram.
+
+\medskip\noindent
+Similarly for the pair $(b, b')$ we use the commutative diagram
+$$
+\xymatrix{
+\text{Tot}(\sigma_{\leq p + 1} \Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
+\sigma_{\geq n - p - 1} \Omega^\bullet) \ar[d]_{\gamma_{p + 1}} &
+\text{Tot}(\Omega^{p + 1}[-p - 1] \otimes_{f^{-1}\mathcal{O}_S}
+\sigma_{\geq n - p - 1} \Omega^\bullet) \ar[l] \ar[d] \\
+\Omega^\bullet &
+\Omega^{p + 1}[-p - 1]
+\otimes_{f^{-1}\mathcal{O}_S}
+\Omega^{n - p - 1}[-n + p + 1] \ar[l]_-\wedge
+}
+$$
+
+\medskip\noindent
+For the pairs $(d, d')$ and $(a, a')$ we use the commutative diagram
+$$
+\xymatrix{
+\Omega^{p + 1}[-p] \otimes_{f^{-1}\mathcal{O}_S}
+\Omega^{n - p - 1}[-n + p] \ar[d] &
+\text{Tot}(\sigma_{\leq p}\Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
+\Omega^{n - p - 1}[-n + p]) \ar[l] \ar[d] \\
+\Omega^\bullet &
+\text{Tot}(\sigma_{\leq p}\Omega^\bullet \otimes_{f^{-1}\mathcal{O}_S}
+\sigma_{\geq n - p}\Omega^\bullet) \ar[l]
+}
+$$
+
+\medskip\noindent
+We omit the argument showing the uniqueness of $t$ up to
+precomposing by multiplication by a unit in $H^0(X, \mathcal{O}_X)$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/derived.tex b/books/stacks/derived.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4fbf137733834af0b6ceb060dbe53bc7e4972caf
--- /dev/null
+++ b/books/stacks/derived.tex
@@ -0,0 +1,12381 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Derived Categories}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+We first discuss triangulated categories and localization in triangulated
+categories. Next, we prove that the homotopy category of complexes in an
+additive category is a triangulated category. Once this is done we define
+the derived category of an abelian category as the localization of the
+homotopy category with respect to quasi-isomorphisms.
+A good reference is Verdier's thesis \cite{Verdier}.
+
+
+
+\section{Triangulated categories}
+\label{section-triangulated-categories}
+
+\noindent
+Triangulated categories are a convenient tool to describe the type
+of structure inherent in the derived category of an abelian category.
+Some references are \cite{Verdier}, \cite{KS}, and \cite{Neeman}.
+
+
+
+
+\section{The definition of a triangulated category}
+\label{section-triangulated-definitions}
+
+\noindent
+In this section we collect most of the definitions concerning triangulated
+and pre-triangulated categories.
+
+\begin{definition}
+\label{definition-triangle}
+Let $\mathcal{D}$ be an additive category.
+Let $[n] : \mathcal{D} \to \mathcal{D}$, $E \mapsto E[n]$
+be a collection of additive functors indexed by $n \in \mathbf{Z}$ such that
+$[n] \circ [m] = [n + m]$ and $[0] = \text{id}$ (equality as functors).
+In this situation we define a {\it triangle} to be a sextuple
+$(X, Y, Z, f, g, h)$ where $X, Y, Z \in \Ob(\mathcal{D})$ and
+$f : X \to Y$, $g : Y \to Z$ and $h : Z \to X[1]$ are morphisms
+of $\mathcal{D}$.
+A {\it morphism of triangles}
+$(X, Y, Z, f, g, h) \to (X', Y', Z', f', g', h')$
+is given by morphisms $a : X \to X'$, $b : Y \to Y'$ and $c : Z \to Z'$
+of $\mathcal{D}$ such that
+$b \circ f = f' \circ a$, $c \circ g = g' \circ b$ and
+$a[1] \circ h = h' \circ c$.
+\end{definition}
+
+\noindent
+A morphism of triangles is visualized by the following
+commutative diagram
+$$
+\xymatrix{
+X \ar[r] \ar[d]^a &
+Y \ar[r] \ar[d]^b &
+Z \ar[r] \ar[d]^c &
+X[1] \ar[d]^{a[1]} \\
+X' \ar[r] &
+Y' \ar[r] &
+Z' \ar[r] &
+X'[1]
+}
+$$
+Here is the definition of a triangulated category as given in
+Verdier's thesis.
+
+\begin{definition}
+\label{definition-triangulated-category}
+A {\it triangulated category} consists of a triple
+$(\mathcal{D}, \{[n]\}_{n\in \mathbf{Z}}, \mathcal{T})$
+where
+\begin{enumerate}
+\item $\mathcal{D}$ is an additive category,
+\item $[n] : \mathcal{D} \to \mathcal{D}$, $E \mapsto E[n]$
+is a collection of additive functors indexed by $n \in \mathbf{Z}$ such that
+$[n] \circ [m] = [n + m]$ and $[0] = \text{id}$ (equality as functors), and
+\item $\mathcal{T}$ is a set of triangles called the
+{\it distinguished triangles}
+\end{enumerate}
+subject to the following conditions
+\begin{enumerate}
+\item[TR1] Any triangle isomorphic to a distinguished triangle is
+a distinguished triangle. Any triangle of the form
+$(X, X, 0, \text{id}, 0, 0)$ is distinguished.
+For any morphism $f : X \to Y$ of $\mathcal{D}$ there exists a
+distinguished triangle of the form $(X, Y, Z, f, g, h)$.
+\item[TR2] The triangle $(X, Y, Z, f, g, h)$ is distinguished
+if and only if the triangle $(Y, Z, X[1], g, h, -f[1])$ is.
+\item[TR3] Given a solid diagram
+$$
+\xymatrix{
+X \ar[r]^f \ar[d]^a &
+Y \ar[r]^g \ar[d]^b &
+Z \ar[r]^h \ar@{-->}[d] &
+X[1] \ar[d]^{a[1]} \\
+X' \ar[r]^{f'} &
+Y' \ar[r]^{g'} &
+Z' \ar[r]^{h'} &
+X'[1]
+}
+$$
+whose rows are distinguished triangles and which satisfies
+$b \circ f = f' \circ a$, there exists a morphism
+$c : Z \to Z'$ such that $(a, b, c)$ is a morphism of triangles.
+\item[TR4] Given objects $X$, $Y$, $Z$ of $\mathcal{D}$, and morphisms
+$f : X \to Y$, $g : Y \to Z$, and distinguished triangles
+$(X, Y, Q_1, f, p_1, d_1)$,
+$(X, Z, Q_2, g \circ f, p_2, d_2)$,
+and
+$(Y, Z, Q_3, g, p_3, d_3)$,
+there exist
+morphisms $a : Q_1 \to Q_2$ and $b : Q_2 \to Q_3$ such
+that
+\begin{enumerate}
+\item $(Q_1, Q_2, Q_3, a, b, p_1[1] \circ d_3)$ is a
+distinguished triangle,
+\item the triple $(\text{id}_X, g, a)$ is
+a morphism of triangles
+$(X, Y, Q_1, f, p_1, d_1) \to (X, Z, Q_2, g \circ f, p_2, d_2)$, and
+\item the triple $(f, \text{id}_Z, b)$ is a morphism of triangles
+$(X, Z, Q_2, g \circ f, p_2, d_2) \to (Y, Z, Q_3, g, p_3, d_3)$.
+\end{enumerate}
+\end{enumerate}
+We will call $(\mathcal{D}, [\ ], \mathcal{T})$ a
+{\it pre-triangulated category} if TR1, TR2 and TR3
+hold.\footnote{We use $[\ ]$ as an abbreviation for the
+family $\{[n]\}_{n\in \mathbf{Z}}$.}
+\end{definition}
+
+\noindent
+The explanation of TR4 is that if you think of $Q_1$ as
+$Y/X$, $Q_2$ as $Z/X$ and $Q_3$ as $Z/Y$, then TR4(a) expresses
+the isomorphism $(Z/X)/(Y/X) \cong Z/Y$ and TR4(b) and TR4(c)
+express that we can compare the triangles $X \to Y \to Q_1 \to X[1]$
+etc with morphisms of triangles. For a more precise reformulation
+of this idea see the proof of Lemma \ref{lemma-two-split-injections}.
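\medskip\noindent
As a purely heuristic illustration of this slogan (not part of the formal
axiom), consider abelian groups
$X = 4\mathbf{Z} \subset Y = 2\mathbf{Z} \subset Z = \mathbf{Z}$.
The third isomorphism theorem gives
$$
(Z/X)/(Y/X) = (\mathbf{Z}/4\mathbf{Z})/(2\mathbf{Z}/4\mathbf{Z})
\cong \mathbf{Z}/2\mathbf{Z} = Z/Y
$$
which is the relation TR4(a) encodes on the level of cones.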
+
+\medskip\noindent
+The sign in TR2 means that if $(X, Y, Z, f, g, h)$ is a distinguished triangle
+then in the long sequence
+\begin{equation}
+\label{equation-rotate}
+\ldots \to
+Z[-1] \xrightarrow{-h[-1]}
+X \xrightarrow{f}
+Y \xrightarrow{g}
+Z \xrightarrow{h}
+X[1] \xrightarrow{-f[1]}
+Y[1] \xrightarrow{-g[1]}
+Z[1] \to \ldots
+\end{equation}
any four consecutive terms form a distinguished triangle.
+
+\medskip\noindent
+As usual we abuse notation and we simply speak of a (pre-)triangulated
+category $\mathcal{D}$ without explicitly introducing notation for the
+additional data. The notion of a pre-triangulated category is
+useful in finding statements equivalent to TR4.
+
+\medskip\noindent
+We have the following definition of a triangulated functor.
+
+\begin{definition}
+\label{definition-exact-functor-triangulated-categories}
+Let $\mathcal{D}$, $\mathcal{D}'$ be pre-triangulated
+categories. An {\it exact functor}, or a {\it triangulated functor}
+from $\mathcal{D}$ to $\mathcal{D}'$ is a functor
+$F : \mathcal{D} \to \mathcal{D}'$ together
+with given functorial isomorphisms $\xi_X : F(X[1]) \to F(X)[1]$
+such that for every distinguished triangle
+$(X, Y, Z, f, g, h)$ of $\mathcal{D}$ the triangle
+$(F(X), F(Y), F(Z), F(f), F(g), \xi_X \circ F(h))$
+is a distinguished triangle of $\mathcal{D}'$.
+\end{definition}
+
+\noindent
+An exact functor is additive, see
+Lemma \ref{lemma-exact-functor-additive}.
+When we say two triangulated categories are equivalent we mean that
+they are equivalent in the $2$-category of triangulated categories.
+A $2$-morphism $a : (F, \xi) \to (F', \xi')$ in this $2$-category is
+simply a transformation of functors $a : F \to F'$ which is compatible
+with $\xi$ and $\xi'$, i.e.,
+$$
+\xymatrix{
+F \circ [1] \ar[r]_\xi \ar[d]_{a \star 1} & [1] \circ F \ar[d]^{1 \star a} \\
+F' \circ [1] \ar[r]^{\xi'} & [1] \circ F'
+}
+$$
+commutes.
+
+\begin{definition}
+\label{definition-triangulated-subcategory}
+Let $(\mathcal{D}, [\ ], \mathcal{T})$ be a pre-triangulated category.
+A {\it pre-triangulated subcategory}\footnote{This definition may be
+nonstandard. If $\mathcal{D}'$ is a full subcategory then $\mathcal{T}'$
+is the intersection of the set of triangles in $\mathcal{D}'$ with
+$\mathcal{T}$, see
+Lemma \ref{lemma-triangulated-subcategory}.
+In this case we drop $\mathcal{T}'$ from the notation.}
+is a pair $(\mathcal{D}', \mathcal{T}')$ such that
+\begin{enumerate}
+\item $\mathcal{D}'$ is an additive subcategory of $\mathcal{D}$
+which is preserved under $[1]$ and $[-1]$,
+\item $\mathcal{T}' \subset \mathcal{T}$ is a subset such that for every
+$(X, Y, Z, f, g, h) \in \mathcal{T}'$ we have
+$X, Y, Z \in \Ob(\mathcal{D}')$ and
+$f, g, h \in \text{Arrows}(\mathcal{D}')$, and
+\item $(\mathcal{D}', [\ ], \mathcal{T}')$ is a pre-triangulated
+category.
+\end{enumerate}
+If $\mathcal{D}$ is a triangulated category, then we say
+$(\mathcal{D}', \mathcal{T}')$ is a {\it triangulated subcategory} if
+it is a pre-triangulated subcategory and
+$(\mathcal{D}', [\ ], \mathcal{T}')$ is a triangulated category.
+\end{definition}
+
+\noindent
+In this situation the inclusion functor
+$\mathcal{D}' \to \mathcal{D}$ is an exact functor
+with $\xi_X : X[1] \to X[1]$ given by the identity on $X[1]$.
+
+\medskip\noindent
+We will see in
+Lemma \ref{lemma-composition-zero}
+that for a distinguished triangle $(X, Y, Z, f, g, h)$
+in a pre-triangulated category the composition $g \circ f : X \to Z$ is zero.
+Thus the sequence (\ref{equation-rotate}) is a complex.
+A homological functor is one that turns this complex into a long
+exact sequence.
+
+\begin{definition}
+\label{definition-homological}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $\mathcal{A}$ be an abelian category.
+An additive functor $H : \mathcal{D} \to \mathcal{A}$ is called
+{\it homological} if for every distinguished triangle
+$(X, Y, Z, f, g, h)$ the sequence
+$$
+H(X) \to H(Y) \to H(Z)
+$$
+is exact in the abelian category $\mathcal{A}$. An additive functor
+$H : \mathcal{D}^{opp} \to \mathcal{A}$ is called {\it cohomological}
+if the corresponding functor $\mathcal{D} \to \mathcal{A}^{opp}$ is
+homological.
+\end{definition}
+
+\noindent
+If $H : \mathcal{D} \to \mathcal{A}$ is a homological functor
+we often write $H^n(X) = H(X[n])$ so that $H(X) = H^0(X)$.
+Our discussion of TR2 above implies that a distinguished triangle
+$(X, Y, Z, f, g, h)$ determines a long exact sequence
+\begin{equation}
+\label{equation-long-exact-cohomology-sequence}
+\xymatrix@C=3pc{
+H^{-1}(Z) \ar[r]^{H(h[-1])} &
+H^0(X) \ar[r]^{H(f)} &
+H^0(Y) \ar[r]^{H(g)} &
+H^0(Z) \ar[r]^{H(h)} &
+H^1(X)
+}
+\end{equation}
+This will be called the {\it long exact sequence} associated to the
+distinguished triangle and the homological functor. As indicated
+we will not use any signs for the morphisms in the long exact
+sequence. This has the side effect that maps in the long exact sequence
+associated to the rotation (TR2) of a distinguished triangle differ
+from the maps in the sequence above by some signs.
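For example, the long exact sequence associated to the rotated triangle
$(Y, Z, X[1], g, h, -f[1])$ by the same recipe contains the map
$$
H(-f[1]) = -H(f[1]) : H^1(X) \to H^1(Y)
$$
which is minus the map $H(f[1])$ appearing in the continuation of
(\ref{equation-long-exact-cohomology-sequence}).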
+
+\begin{definition}
+\label{definition-delta-functor}
+Let $\mathcal{A}$ be an abelian category.
+Let $\mathcal{D}$ be a triangulated category.
+A {\it $\delta$-functor from $\mathcal{A}$ to $\mathcal{D}$} is
+given by a functor $G : \mathcal{A} \to \mathcal{D}$ and
+a rule which assigns to every short exact sequence
+$$
+0 \to A \xrightarrow{a} B \xrightarrow{b} C \to 0
+$$
+a morphism $\delta = \delta_{A \to B \to C} : G(C) \to G(A)[1]$
+such that
+\begin{enumerate}
+\item the triangle
+$(G(A), G(B), G(C), G(a), G(b), \delta_{A \to B \to C})$
+is a distinguished triangle of $\mathcal{D}$
+for any short exact sequence as above, and
+\item for every morphism $(A \to B \to C) \to (A' \to B' \to C')$
+of short exact sequences the diagram
+$$
+\xymatrix{
+G(C) \ar[d] \ar[rr]_{\delta_{A \to B \to C}} & &
+G(A)[1] \ar[d] \\
+G(C') \ar[rr]^{\delta_{A' \to B' \to C'}} & &
+G(A')[1]
+}
+$$
+is commutative.
+\end{enumerate}
+In this situation we call
+$(G(A), G(B), G(C), G(a), G(b), \delta_{A \to B \to C})$
+the {\it image of the short exact sequence under the
+given $\delta$-functor}.
+\end{definition}
+
+\noindent
+Note how a $\delta$-functor comes equipped with additional structure.
+Strictly speaking it does not make sense to say that a given
+functor $\mathcal{A} \to \mathcal{D}$ is a $\delta$-functor, but we
+will often do so anyway.
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Elementary results on triangulated categories}
+\label{section-elementary-results}
+
+\noindent
+Most of the results in this section are proved for pre-triangulated categories
+and a fortiori hold in any triangulated category.
+
+\begin{lemma}
+\label{lemma-composition-zero}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $(X, Y, Z, f, g, h)$ be a distinguished triangle.
+Then $g \circ f = 0$,
+$h \circ g = 0$ and $f[1] \circ h = 0$.
+\end{lemma}
+
+\begin{proof}
+By TR1 we know $(X, X, 0, 1, 0, 0)$ is a distinguished triangle.
+Apply TR3 to
+$$
+\xymatrix{
+X \ar[r] \ar[d]^1 &
+X \ar[r] \ar[d]^f &
+0 \ar[r] \ar@{-->}[d] &
+X[1] \ar[d]^{1[1]} \\
+X \ar[r]^f &
+Y \ar[r]^g &
+Z \ar[r]^h &
+X[1]
+}
+$$
+Of course the dotted arrow is the zero map. Hence the commutativity of
+the diagram implies that $g \circ f = 0$. For the other cases
+rotate the triangle, i.e., apply TR2.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representable-homological}
+Let $\mathcal{D}$ be a pre-triangulated category.
+For any object $W$ of $\mathcal{D}$ the functor
+$\Hom_\mathcal{D}(W, -)$ is homological, and the functor
+$\Hom_\mathcal{D}(-, W)$ is cohomological.
+\end{lemma}
+
+\begin{proof}
+Consider a distinguished triangle $(X, Y, Z, f, g, h)$.
+We have already seen that $g \circ f = 0$, see
+Lemma \ref{lemma-composition-zero}.
+Suppose $a : W \to Y$ is a morphism such that $g \circ a = 0$.
+Then we get a commutative diagram
+$$
+\xymatrix{
+W \ar[r]_1 \ar@{..>}[d]^b &
+W \ar[r] \ar[d]^a &
+0 \ar[r] \ar[d]^0 &
+W[1] \ar@{..>}[d]^{b[1]} \\
+X \ar[r] & Y \ar[r] & Z \ar[r] & X[1]
+}
+$$
+Both rows are distinguished triangles (use TR1 for the top row).
+Hence we can fill the dotted arrow $b$ (first rotate using TR2,
+then apply TR3, and then rotate back). This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-third-isomorphism-triangle}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let
+$$
+(a, b, c) : (X, Y, Z, f, g, h) \to (X', Y', Z', f', g', h')
+$$
+be a morphism of distinguished triangles. If two among $a, b, c$
+are isomorphisms so is the third.
+\end{lemma}
+
+\begin{proof}
+Assume that $a$ and $c$ are isomorphisms.
+For any object $W$ of $\mathcal{D}$ write
+$H_W( - ) = \Hom_\mathcal{D}(W, -)$.
+Then we get a commutative diagram of abelian groups
+$$
+\xymatrix{
+H_W(Z[-1]) \ar[r] \ar[d] &
+H_W(X) \ar[r] \ar[d] &
+H_W(Y) \ar[r] \ar[d] &
+H_W(Z) \ar[r] \ar[d] &
+H_W(X[1]) \ar[d] \\
+H_W(Z'[-1]) \ar[r] &
+H_W(X') \ar[r] &
+H_W(Y') \ar[r] &
+H_W(Z') \ar[r] &
+H_W(X'[1])
+}
+$$
+By assumption the right two and left two vertical arrows are bijective.
+As $H_W$ is homological by
+Lemma \ref{lemma-representable-homological}
+and the five lemma
+(Homology, Lemma \ref{homology-lemma-five-lemma})
+it follows that the middle vertical arrow is an isomorphism.
+Hence by Yoneda's lemma, see
+Categories, Lemma \ref{categories-lemma-yoneda}
+we see that $b$ is an isomorphism.
+This implies the other cases by rotating (using TR2).
+\end{proof}
+
+\begin{remark}
+\label{remark-special-triangles}
+Let $\mathcal{D}$ be an additive category with translation functors $[n]$
+as in Definition \ref{definition-triangle}. Let us call a triangle
+$(X, Y, Z, f, g, h)$ {\it special}\footnote{This is nonstandard notation.}
+if for every object $W$ of $\mathcal{D}$
+the long sequence of abelian groups
+$$
+\ldots \to
+\Hom_\mathcal{D}(W, X) \to
+\Hom_\mathcal{D}(W, Y) \to
+\Hom_\mathcal{D}(W, Z) \to
+\Hom_\mathcal{D}(W, X[1]) \to \ldots
+$$
+is exact. The proof of Lemma \ref{lemma-third-isomorphism-triangle}
+shows that if
+$$
+(a, b, c) : (X, Y, Z, f, g, h) \to (X', Y', Z', f', g', h')
+$$
+is a morphism of special triangles and if two among $a, b, c$
+are isomorphisms so is the third. There is a dual statement for
+{\it co-special} triangles, i.e., triangles which turn into long
+exact sequences on applying the functor $\Hom_\mathcal{D}(-, W)$.
+Thus distinguished triangles are special and co-special, but in
+general there are many more (co-)special triangles, than there are
+distinguished triangles.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-third-map-square-zero}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let
+$$
+(0, b, 0), (0, b', 0) : (X, Y, Z, f, g, h) \to (X, Y, Z, f, g, h)
+$$
+be endomorphisms of a distinguished triangle. Then $bb' = 0$.
+\end{lemma}
+
+\begin{proof}
+Picture
+$$
+\xymatrix{
+X \ar[r] \ar[d]^0 &
+Y \ar[r] \ar[d]^{b, b'} \ar@{..>}[ld]^\alpha &
+Z \ar[r] \ar[d]^0 \ar@{..>}[ld]^\beta &
+X[1] \ar[d]^0 \\
+X \ar[r] & Y \ar[r] & Z \ar[r] & X[1]
+}
+$$
+Applying
+Lemma \ref{lemma-representable-homological}
+we find dotted arrows $\alpha$ and $\beta$ such that
+$b' = f \circ \alpha$ and $b = \beta \circ g$. Then
+$bb' = \beta \circ g \circ f \circ \alpha = 0$
+as $g \circ f = 0$ by
+Lemma \ref{lemma-composition-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-third-map-idempotent}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $(X, Y, Z, f, g, h)$ be a distinguished triangle.
+If
+$$
+\xymatrix{
+Z \ar[r]_h \ar[d]_c & X[1] \ar[d]^{a[1]} \\
+Z \ar[r]^h & X[1]
+}
+$$
+is commutative and $a^2 = a$, $c^2 = c$, then there exists a
+morphism $b : Y \to Y$ with $b^2 = b$ such that
+$(a, b, c)$ is an endomorphism of the triangle $(X, Y, Z, f, g, h)$.
+\end{lemma}
+
+\begin{proof}
+By TR3 there exists a morphism $b'$ such that
+$(a, b', c)$ is an endomorphism of $(X, Y, Z, f, g, h)$.
+Then $(0, (b')^2 - b', 0)$ is also an endomorphism. By
+Lemma \ref{lemma-third-map-square-zero}
+we see that $(b')^2 - b'$ has square zero.
+Set $b = b' - (2b' - 1)((b')^2 - b') = 3(b')^2 - 2(b')^3$.
+A computation shows that $(a, b, c)$ is an endomorphism and
+that $b^2 - b = (4(b')^2 - 4b' - 3)((b')^2 - b')^2 = 0$.
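In detail, set $e = (b')^2 - b'$, so that $e^2 = 0$ and $e$ commutes
with $b'$. Then $b = b' - (2b' - 1)e$ and
$$
b^2 - b = \left(1 - (2b' - 1)^2\right)e + (2b' - 1)^2 e^2
= -4e^2 + (2b' - 1)^2 e^2 = (4(b')^2 - 4b' - 3)e^2
$$
where we used
$\left(1 - (2b' - 1)^2\right)e = (4b' - 4(b')^2)e = -4e^2$.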
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cone-triangle-unique-isomorphism}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $f : X \to Y$ be a morphism of $\mathcal{D}$.
+There exists a distinguished triangle $(X, Y, Z, f, g, h)$ which
+is unique up to (nonunique) isomorphism of triangles.
+More precisely, given a second such distinguished triangle
+$(X, Y, Z', f, g', h')$ there exists an isomorphism
+$$
+(1, 1, c) : (X, Y, Z, f, g, h) \longrightarrow (X, Y, Z', f, g', h')
+$$
+\end{lemma}
+
+\begin{proof}
+Existence by TR1. Uniqueness up to isomorphism by TR3 and
+Lemma \ref{lemma-third-isomorphism-triangle}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-uniqueness-third-arrow}
+Let $\mathcal{D}$ be a pre-triangulated category. Let
+$$
+(a, b, c) : (X, Y, Z, f, g, h) \to (X', Y', Z', f', g', h')
+$$
+be a morphism of distinguished triangles. If one of the following
+conditions holds
+\begin{enumerate}
+\item $\Hom(Y, X') = 0$,
+\item $\Hom(Z, Y') = 0$,
+\item $\Hom(X, X') = \Hom(Z, X') = 0$,
+\item $\Hom(Z, X') = \Hom(Z, Z') = 0$, or
+\item $\Hom(X[1], Z') = \Hom(Z, X') = 0$
+\end{enumerate}
+then $b$ is the unique morphism from $Y \to Y'$ such that
+$(a, b, c)$ is a morphism of triangles.
+\end{lemma}
+
+\begin{proof}
+If we have a second morphism of triangles $(a, b', c)$
+then $(0, b - b', 0)$ is a morphism of triangles. Hence we
have to show: the only morphism $b : Y \to Y'$ such that the compositions
$X \to Y \to Y'$ and $Y \to Y' \to Z'$ are zero is $b = 0$.
+We will use Lemma \ref{lemma-representable-homological}
+without further mention. In particular, condition (3) implies (1).
+Given condition (1) if the composition $g' \circ b : Y \to Y' \to Z'$
+is zero, then $b$ lifts to a morphism $Y \to X'$ which has to be zero.
+This proves (1).
+
+\medskip\noindent
The proofs of (2) and (4) are dual to this argument.
+
+\medskip\noindent
+Assume (5). Consider the diagram
+$$
+\xymatrix{
+X \ar[r]_f \ar[d]^0 &
+Y \ar[r]_g \ar[d]^b &
+Z \ar[r]_h \ar[d]^0 \ar@{..>}[ld]^\epsilon &
+X[1] \ar[d]^0 \\
+X' \ar[r]^{f'} &
+Y' \ar[r]^{g'} &
+Z' \ar[r]^{h'} &
+X'[1]
+}
+$$
+We may choose $\epsilon$ such that $b = \epsilon \circ g$.
+Then $g' \circ \epsilon \circ g = 0$ which implies that
+$g' \circ \epsilon = \delta \circ h$ for some
+$\delta \in \Hom(X[1], Z')$. Since $\Hom(X[1], Z') = 0$
+we conclude that $g' \circ \epsilon = 0$. Hence
+$\epsilon = f' \circ \gamma$ for some $\gamma \in \Hom(Z, X')$.
+Since $\Hom(Z, X') = 0$ we conclude that $\epsilon = 0$
+and hence $b = 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-third-object-zero}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $f : X \to Y$ be a morphism of $\mathcal{D}$.
+The following are equivalent
+\begin{enumerate}
+\item $f$ is an isomorphism,
+\item $(X, Y, 0, f, 0, 0)$ is a distinguished triangle, and
+\item for any distinguished triangle $(X, Y, Z, f, g, h)$ we have $Z = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By TR1 the triangle $(X, X, 0, 1, 0, 0)$ is distinguished.
+Let $(X, Y, Z, f, g, h)$ be a distinguished triangle.
+By TR3 there is a map of distinguished triangles
+$(1, f, 0) : (X, X, 0) \to (X, Y, Z)$.
+If $f$ is an isomorphism, then $(1, f, 0)$ is an isomorphism
+of triangles by Lemma \ref{lemma-third-isomorphism-triangle}
+and $Z = 0$. Conversely, if $Z = 0$, then $(1, f, 0)$ is an
+isomorphism of triangles as well, hence $f$ is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-sum-triangles}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $(X, Y, Z, f, g, h)$ and $(X', Y', Z', f', g', h')$ be triangles.
+The following are equivalent
+\begin{enumerate}
+\item $(X \oplus X', Y \oplus Y', Z \oplus Z',
+f \oplus f', g \oplus g', h \oplus h')$
+is a distinguished triangle,
+\item both $(X, Y, Z, f, g, h)$ and $(X', Y', Z', f', g', h')$ are
+distinguished triangles.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2). By TR1 we may choose a distinguished triangle
+$(X \oplus X', Y \oplus Y', Q, f \oplus f', g'', h'')$.
+By TR3 we can find morphisms of distinguished triangles
+$(X, Y, Z, f, g, h) \to
+(X \oplus X', Y \oplus Y', Q, f \oplus f', g'', h'')$
+and
+$(X', Y', Z', f', g', h') \to
+(X \oplus X', Y \oplus Y', Q, f \oplus f', g'', h'')$.
+Taking the direct sum of these morphisms
+we obtain a morphism of triangles
+$$
+\xymatrix{
+(X \oplus X', Y \oplus Y', Z \oplus Z',
+f \oplus f', g \oplus g', h \oplus h')
+\ar[d]^{(1, 1, c)} \\
+(X \oplus X', Y \oplus Y', Q, f \oplus f', g'', h'').
+}
+$$
+In the terminology of Remark \ref{remark-special-triangles}
+this is a map of special triangles (because a direct sum of special
+triangles is special) and we conclude
+that $c$ is an isomorphism. Thus (1) holds.
+
+\medskip\noindent
+Assume (1). We will show that $(X, Y, Z, f, g, h)$ is a distinguished
+triangle. First observe that $(X, Y, Z, f, g, h)$ is a special triangle
+(terminology from Remark \ref{remark-special-triangles})
+as a direct summand of the distinguished hence special
+triangle $(X \oplus X', Y \oplus Y', Z \oplus Z',
+f \oplus f', g \oplus g', h \oplus h')$. Using TR1 let
+$(X, Y, Q, f, g'', h'')$ be a distinguished triangle. By TR3 there exists
+a morphism of distinguished triangles
+ $(X \oplus X', Y \oplus Y', Z \oplus Z',
+f \oplus f', g \oplus g', h \oplus h') \to (X, Y, Q, f, g'', h'')$.
+Composing this with the inclusion map we get a morphism of triangles
+$$
+(1, 1, c) :
+(X, Y, Z, f, g, h)
+\longrightarrow
+(X, Y, Q, f, g'', h'')
+$$
+By Remark \ref{remark-special-triangles}
+we find that $c$ is an isomorphism and we conclude
+that (2) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-split}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $(X, Y, Z, f, g, h)$ be a distinguished triangle.
+\begin{enumerate}
+\item If $h = 0$, then there exists a right inverse $s : Z \to Y$ to $g$.
+\item For any right inverse $s : Z \to Y$ of $g$ the map
+$f \oplus s : X \oplus Z \to Y$ is an isomorphism.
+\item For any objects $X', Z'$ of $\mathcal{D}$ the triangle
+$(X', X' \oplus Z', Z', (1, 0), (0, 1), 0)$ is distinguished.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To see (1) use that
+$\Hom_\mathcal{D}(Z, Y) \to \Hom_\mathcal{D}(Z, Z) \to
+\Hom_\mathcal{D}(Z, X[1])$
+is exact by
+Lemma \ref{lemma-representable-homological}.
By the same token, if $s$ is as in (2), then
$h = h \circ g \circ s = 0$ (as $h \circ g = 0$) and the sequence
+$$
+0 \to \Hom_\mathcal{D}(W, X) \to \Hom_\mathcal{D}(W, Y)
+\to \Hom_\mathcal{D}(W, Z) \to 0
+$$
+is split exact (split by $s : Z \to Y$). Hence by Yoneda's lemma we
+see that $X \oplus Z \to Y$ is an isomorphism. The last assertion follows
+from TR1 and
+Lemma \ref{lemma-direct-sum-triangles}.
+\end{proof}
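
\noindent
For the reader's convenience we spell out why the triangle in part (3) of
Lemma \ref{lemma-split} is distinguished: it is the direct sum
$$
(X', X' \oplus Z', Z', (1, 0), (0, 1), 0) =
(X', X', 0, 1, 0, 0) \oplus (0, Z', Z', 0, 1, 0)
$$
of a triangle which is distinguished by TR1 and a rotation (TR2) of the
distinguished triangle $(Z', Z', 0, 1, 0, 0)$. Hence it is distinguished
by Lemma \ref{lemma-direct-sum-triangles}.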
+
+\begin{lemma}
+\label{lemma-when-split}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $f : X \to Y$ be a morphism of $\mathcal{D}$.
+The following are equivalent
+\begin{enumerate}
+\item $f$ has a kernel,
+\item $f$ has a cokernel,
\item $f$ is isomorphic to a composition
+$K \oplus Z \to Z \to Z \oplus Q$ of a projection and coprojection
+for some objects $K, Z, Q$ of $\mathcal{D}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Any morphism isomorphic to a map of the form
+$X' \oplus Z \to Z \oplus Y'$ has both a kernel and a cokernel.
+Hence (3) $\Rightarrow$ (1), (2).
+Next we prove (1) $\Rightarrow$ (3).
+Suppose first that $f : X \to Y$ is a monomorphism, i.e., its kernel is zero.
+By TR1 there exists a distinguished triangle $(X, Y, Z, f, g, h)$.
+By Lemma \ref{lemma-composition-zero} the composition
+$f \circ h[-1] = 0$. As $f$ is a monomorphism we see that $h[-1] = 0$
+and hence $h = 0$. Then
+Lemma \ref{lemma-split}
+implies that $Y = X \oplus Z$, i.e., we see that (3) holds.
+Next, assume $f$ has a kernel $K$. As $K \to X$ is a monomorphism we
+conclude $X = K \oplus X'$ and $f|_{X'} : X' \to Y$ is a monomorphism.
+Hence $Y = X' \oplus Y'$ and we win.
+The implication (2) $\Rightarrow$ (3) is dual to this.
+\end{proof}
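
\noindent
Concretely, the composition in part (3) of Lemma \ref{lemma-when-split}
is the morphism
$$
K \oplus Z
\xrightarrow{\left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)}
Z \oplus Q
$$
whose kernel is the inclusion $K \to K \oplus Z$ of the first summand and
whose cokernel is the projection $Z \oplus Q \to Q$ onto the second summand.
This makes the implications (3) $\Rightarrow$ (1), (2) explicit.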
+
+\begin{lemma}
+\label{lemma-products-sums-shifts-triangles}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $I$ be a set.
+\begin{enumerate}
+\item Let $X_i$, $i \in I$ be a family of objects of $\mathcal{D}$.
+\begin{enumerate}
+\item If $\prod X_i$ exists, then $(\prod X_i)[1] = \prod X_i[1]$.
+\item If $\bigoplus X_i$ exists, then $(\bigoplus X_i)[1] = \bigoplus X_i[1]$.
+\end{enumerate}
+\item Let $X_i \to Y_i \to Z_i \to X_i[1]$ be a family of distinguished
+triangles of $\mathcal{D}$.
+\begin{enumerate}
+\item If $\prod X_i$, $\prod Y_i$, $\prod Z_i$ exist, then
+$\prod X_i \to \prod Y_i \to \prod Z_i \to \prod X_i[1]$
+is a distinguished triangle.
+\item If $\bigoplus X_i$, $\bigoplus Y_i$,
+$\bigoplus Z_i$ exist, then
+$\bigoplus X_i \to \bigoplus Y_i \to \bigoplus Z_i \to \bigoplus X_i[1]$
+is a distinguished triangle.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is true because $[1]$ is an autoequivalence of $\mathcal{D}$
+and because direct sums and products are defined in terms of the
+category structure. Let us prove (2)(a). Choose a distinguished triangle
+$\prod X_i \to \prod Y_i \to Z \to \prod X_i[1]$. For each $j$ we can
+use TR3 to choose a morphism $p_j : Z \to Z_j$
+fitting into a morphism of distinguished
+triangles with the projection maps $\prod X_i \to X_j$ and $\prod Y_i \to Y_j$.
+Using the definition of products we obtain a map
+$\prod p_i : Z \to \prod Z_i$ fitting into a morphism
+of triangles from the distinguished triangle to the triangle
+made out of the products. Observe that the ``product'' triangle
+$\prod X_i \to \prod Y_i \to \prod Z_i \to \prod X_i[1]$
+is special in the terminology of Remark \ref{remark-special-triangles}
+because products of exact sequences of abelian groups are exact.
+Hence Remark \ref{remark-special-triangles} shows that
+the morphism of triangles is an isomorphism and we conclude by TR1.
+The proof of (2)(b) is dual.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projectors-have-images-triangulated}
+Let $\mathcal{D}$ be a pre-triangulated category.
+If $\mathcal{D}$ has countable products, then $\mathcal{D}$
+is Karoubian.
+If $\mathcal{D}$ has countable coproducts, then $\mathcal{D}$
+is Karoubian.
+\end{lemma}
+
+\begin{proof}
+Assume $\mathcal{D}$ has countable products. By
+Homology, Lemma \ref{homology-lemma-projectors-have-images}
+it suffices to check that morphisms which have a right inverse have kernels.
+Any morphism which has a right inverse is an epimorphism, hence
+has a kernel by
+Lemma \ref{lemma-when-split}.
+The second statement is dual to the first.
+\end{proof}
+
+\noindent
+The following lemma makes it slightly easier to prove that a
+pre-triangulated category is triangulated.
+
+\begin{lemma}
+\label{lemma-easier-axiom-four}
+Let $\mathcal{D}$ be a pre-triangulated category.
+In order to prove TR4 it suffices to show that given
+any pair of composable morphisms
+$f : X \to Y$ and $g : Y \to Z$ there exist
+\begin{enumerate}
+\item isomorphisms $i : X' \to X$, $j : Y' \to Y$ and
+$k : Z' \to Z$, and then setting $f' = j^{-1}fi : X' \to Y'$ and
+$g' = k^{-1}gj : Y' \to Z'$ there exist
+\item distinguished triangles
+$(X', Y', Q_1, f', p_1, d_1)$,
+$(X', Z', Q_2, g' \circ f', p_2, d_2)$
+and
+$(Y', Z', Q_3, g', p_3, d_3)$,
+such that the assertion of TR4 holds.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The replacement of $X, Y, Z$ by $X', Y', Z'$ is harmless by our
+definition of distinguished triangles and their isomorphisms.
+The lemma follows from the fact that the distinguished triangles
+$(X', Y', Q_1, f', p_1, d_1)$,
+$(X', Z', Q_2, g' \circ f', p_2, d_2)$
+and
+$(Y', Z', Q_3, g', p_3, d_3)$
+are unique up to isomorphism by
+Lemma \ref{lemma-cone-triangle-unique-isomorphism}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-triangulated-subcategory}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Assume that $\mathcal{D}'$ is an additive full subcategory of $\mathcal{D}$.
+The following are equivalent
+\begin{enumerate}
+\item there exists a set of triangles $\mathcal{T}'$ such that
+$(\mathcal{D}', \mathcal{T}')$ is a pre-triangulated subcategory
+of $\mathcal{D}$,
+\item $\mathcal{D}'$ is preserved under $[1], [-1]$ and
+given any morphism $f : X \to Y$ in $\mathcal{D}'$ there exists
+a distinguished triangle $(X, Y, Z, f, g, h)$ in $\mathcal{D}$
+such that $Z$ is isomorphic to an object of $\mathcal{D}'$.
+\end{enumerate}
+In this case $\mathcal{T}'$ as in (1) is the set of distinguished triangles
+$(X, Y, Z, f, g, h)$ of $\mathcal{D}$ such that
+$X, Y, Z \in \Ob(\mathcal{D}')$. Finally, if $\mathcal{D}$
+is a triangulated category, then (1) and (2) are also equivalent to
+\begin{enumerate}
+\item[(3)] $\mathcal{D}'$ is a triangulated subcategory.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-functor-additive}
+An exact functor of pre-triangulated categories is additive.
+\end{lemma}
+
+\begin{proof}
+Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor of
+pre-triangulated categories. Since
+$(0, 0, 0, 1_0, 1_0, 0)$ is a distinguished triangle of $\mathcal{D}$
+the triangle
+$$
+(F(0), F(0), F(0), 1_{F(0)}, 1_{F(0)}, F(0))
+$$
+is distinguished in $\mathcal{D}'$.
+This implies that $1_{F(0)} \circ 1_{F(0)}$ is zero, see
+Lemma \ref{lemma-composition-zero}.
+Hence $F(0)$ is the zero object of $\mathcal{D}'$. This also implies
+that $F$ applied to any zero morphism is zero (since a morphism in
+an additive category is zero if and only if it factors through the
+zero object). Next, using that
+$(X, X \oplus Y, Y, (1, 0), (0, 1), 0)$ is a distinguished triangle,
+we see that $(F(X), F(X \oplus Y), F(Y), F(1, 0), F(0, 1), 0)$ is
+one too. This implies that the map
+$F(1, 0) \oplus F(0, 1) : F(X) \oplus F(Y) \to F(X \oplus Y)$
+is an isomorphism, see
+Lemma \ref{lemma-split}.
+We omit the rest of the argument.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-equivalence}
+Let $F : \mathcal{D} \to \mathcal{D}'$ be a fully faithful exact functor
+of pre-triangulated categories. Then a triangle $(X, Y, Z, f, g, h)$
+of $\mathcal{D}$ is distinguished if and only if
+$(F(X), F(Y), F(Z), F(f), F(g), F(h))$ is distinguished in $\mathcal{D}'$.
+\end{lemma}
+
+\begin{proof}
+The ``only if'' part is clear. Assume $(F(X), F(Y), F(Z))$ is
+distinguished in $\mathcal{D}'$. Pick a distinguished triangle
+$(X, Y, Z', f, g', h')$ in $\mathcal{D}$. By
+Lemma \ref{lemma-cone-triangle-unique-isomorphism}
+there exists an isomorphism of triangles
+$$
+(1, 1, c') : (F(X), F(Y), F(Z)) \longrightarrow (F(X), F(Y), F(Z')).
+$$
+Since $F$ is fully faithful, there exists a morphism $c : Z \to Z'$
+such that $F(c) = c'$. Then $(1, 1, c)$ is an isomorphism between
+$(X, Y, Z)$ and $(X, Y, Z')$. Hence $(X, Y, Z)$ is distinguished
+by TR1.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-exact}
+Let $\mathcal{D}, \mathcal{D}', \mathcal{D}''$ be pre-triangulated categories.
+Let $F : \mathcal{D} \to \mathcal{D}'$ and
+$F' : \mathcal{D}' \to \mathcal{D}''$ be exact functors.
+Then $F' \circ F$ is an exact functor.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-compose-homological-functor}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $\mathcal{A}$ be an abelian category.
+Let $H : \mathcal{D} \to \mathcal{A}$ be a homological functor.
+\begin{enumerate}
+\item Let $\mathcal{D}'$ be a pre-triangulated category.
+Let $F : \mathcal{D}' \to \mathcal{D}$ be an exact functor.
+Then the composition $H \circ F$ is a homological functor as well.
+\item Let $\mathcal{A}'$ be an abelian category. Let
+$G : \mathcal{A} \to \mathcal{A}'$ be an exact functor.
+Then $G \circ H$ is a homological functor as well.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-exact-compose-delta-functor}
+Let $\mathcal{D}$ be a triangulated category.
+Let $\mathcal{A}$ be an abelian category.
+Let $G : \mathcal{A} \to \mathcal{D}$ be a $\delta$-functor.
+\begin{enumerate}
+\item Let $\mathcal{D}'$ be a triangulated category.
+Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor.
+Then the composition $F \circ G$ is a $\delta$-functor as well.
+\item Let $\mathcal{A}'$ be an abelian category. Let
+$H : \mathcal{A}' \to \mathcal{A}$ be an exact functor.
+Then $G \circ H$ is a $\delta$-functor as well.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-delta-functor-homological}
+Let $\mathcal{D}$ be a triangulated category.
+Let $\mathcal{A}$ and $\mathcal{B}$ be abelian categories.
+Let $G : \mathcal{A} \to \mathcal{D}$ be a $\delta$-functor.
+Let $H : \mathcal{D} \to \mathcal{B}$ be a homological functor.
+Assume that $H^{-1}(G(A)) = 0$ for all $A$ in $\mathcal{A}$.
+Then the collection
+$$
+\{H^n \circ G, H^n(\delta_{A \to B \to C})\}_{n \geq 0}
+$$
is a $\delta$-functor from $\mathcal{A}$ to $\mathcal{B}$, see
+Homology, Definition \ref{homology-definition-cohomological-delta-functor}.
+\end{lemma}
+
+\begin{proof}
+The notation signifies the following. If
+$0 \to A \xrightarrow{a} B \xrightarrow{b} C \to 0$ is
+a short exact sequence in $\mathcal{A}$, then
+$$
+\delta = \delta_{A \to B \to C} : G(C) \to G(A)[1]
+$$
+is a morphism in $\mathcal{D}$ such that
+$(G(A), G(B), G(C), a, b, \delta)$ is
+a distinguished triangle, see
+Definition \ref{definition-delta-functor}.
+Then $H^n(\delta) : H^n(G(C)) \to H^n(G(A)[1]) = H^{n + 1}(G(A))$
+is clearly functorial in the short exact sequence.
+Finally, the long exact cohomology sequence
+(\ref{equation-long-exact-cohomology-sequence})
+combined with the vanishing of $H^{-1}(G(C))$
+gives a long exact sequence
+$$
+0 \to H^0(G(A)) \to H^0(G(B)) \to H^0(G(C))
+\xrightarrow{H^0(\delta)} H^1(G(A)) \to \ldots
+$$
+in $\mathcal{B}$ as desired.
+\end{proof}
+
+\noindent
+The proof of the following result uses TR4.
+
+\begin{proposition}
+\label{proposition-9}
+Let $\mathcal{D}$ be a triangulated category. Any commutative diagram
+$$
+\xymatrix{
+X \ar[r] \ar[d] & Y \ar[d] \\
+X' \ar[r] & Y'
+}
+$$
+can be extended to a diagram
+$$
+\xymatrix{
+X \ar[r] \ar[d] & Y \ar[r] \ar[d] & Z \ar[r] \ar[d] & X[1] \ar[d] \\
+X' \ar[r] \ar[d] & Y' \ar[r] \ar[d] & Z' \ar[r] \ar[d] & X'[1] \ar[d] \\
+X'' \ar[r] \ar[d] & Y'' \ar[r] \ar[d] & Z'' \ar[r] \ar[d] & X''[1] \ar[d] \\
+X[1] \ar[r] & Y[1] \ar[r] & Z[1] \ar[r] & X[2]
+}
+$$
+where all the squares are commutative, except for the lower right square
+which is anticommutative. Moreover, each of the rows and columns are
+distinguished triangles. Finally, the morphisms on the bottom row
+(resp.\ right column) are obtained from the morphisms of the top row
+(resp.\ left column) by applying $[1]$.
+\end{proposition}
+
+\begin{proof}
+During this proof we avoid writing the arrows in order to make the proof
+legible. Choose distinguished triangles
+$(X, Y, Z)$, $(X', Y', Z')$, $(X, X', X'')$, $(Y, Y', Y'')$, and
+$(X, Y', A)$. Note that the morphism $X \to Y'$ is both equal
+to the composition $X \to Y \to Y'$ and equal to the composition
+$X \to X' \to Y'$. Hence, we can find morphisms
+\begin{enumerate}
+\item $a : Z \to A$ and $b : A \to Y''$, and
+\item $a' : X'' \to A$ and $b' : A \to Z'$
+\end{enumerate}
+as in TR4. Denote $c : Y'' \to Z[1]$ the composition
+$Y'' \to Y[1] \to Z[1]$ and denote $c' : Z' \to X''[1]$ the composition
$Z' \to X'[1] \to X''[1]$. The conclusions of our application of TR4
are that
+\begin{enumerate}
+\item $(Z, A, Y'', a, b, c)$, $(X'', A, Z', a', b', c')$
+are distinguished triangles,
+\item $(X, Y, Z) \to (X, Y', A)$,
+$(X, Y', A) \to (Y, Y', Y'')$,
+$(X, X', X'') \to (X, Y', A)$,
+$(X, Y', A) \to (X', Y', Z')$
+are morphisms of triangles.
+\end{enumerate}
First, using that
$(X, X', X'') \to (X, Y', A)$ and $(X, Y', A) \to (Y, Y', Y'')$
are morphisms of triangles, we see that the first of the diagrams
+$$
+\vcenter{
+\xymatrix{
+X' \ar[r] \ar[d] & Y' \ar[d] \\
+X'' \ar[r]^{b \circ a'} \ar[d] & Y'' \ar[d] \\
+X[1] \ar[r] & Y[1]
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+Y \ar[r] \ar[d] & Z \ar[d]^{b' \circ a} \ar[r] & X[1] \ar[d] \\
+Y' \ar[r] & Z' \ar[r] & X'[1]
+}
+}
+$$
+is commutative. The second is commutative too using that
+$(X, Y, Z) \to (X, Y', A)$ and $(X, Y', A) \to (X', Y', Z')$ are morphisms
+of triangles. At this point we choose a distinguished triangle
+$(X'', Y'' , Z'')$ starting with the map $b \circ a' : X'' \to Y''$.
+
+\medskip\noindent
+Next we apply TR4 one more time to the morphisms
+$X'' \to A \to Y''$ and the triangles
+$(X'', A, Z', a', b', c')$,
+$(X'', Y'', Z'')$, and
+$(A, Y'', Z[1], b, c , -a[1])$ to get morphisms
+$a'' : Z' \to Z''$ and $b'' : Z'' \to Z[1]$.
+Then $(Z', Z'', Z[1], a'', b'', - b'[1] \circ a[1])$ is a distinguished
+triangle, hence also $(Z, Z', Z'', -b' \circ a, a'', -b'')$
+and hence also $(Z, Z', Z'', b' \circ a, a'', b'')$.
+Moreover, $(X'', A, Z') \to (X'', Y'', Z'')$ and
+$(X'', Y'', Z'') \to (A, Y'', Z[1], b, c , -a[1])$
+are morphisms of triangles.
+At this point we have defined all the distinguished triangles
+and all the morphisms, and all that's left is to verify some
+commutativity relations.
+
+\medskip\noindent
+To see that the middle square in the diagram commutes, note
+that the arrow $Y' \to Z'$ factors as $Y' \to A \to Z'$
+because $(X, Y', A) \to (X', Y', Z')$ is a morphism of triangles.
+Similarly, the morphism $Y' \to Y''$ factors as
+$Y' \to A \to Y''$ because $(X, Y', A) \to (Y, Y', Y'')$ is a
+morphism of triangles. Hence the middle square commutes because
+the square with sides $(A, Z', Z'', Y'')$ commutes as
+$(X'', A, Z') \to (X'', Y'', Z'')$ is a morphism of triangles (by TR4).
+The square with sides $(Y'', Z'', Y[1], Z[1])$ commutes
+because $(X'', Y'', Z'') \to (A, Y'', Z[1], b, c , -a[1])$
+is a morphism of triangles and $c : Y'' \to Z[1]$ is the composition
+$Y'' \to Y[1] \to Z[1]$.
+The square with sides $(Z', X'[1], X''[1], Z'')$ is commutative
+because $(X'', A, Z') \to (X'', Y'', Z'')$ is a morphism of triangles
+and $c' : Z' \to X''[1]$ is the composition $Z' \to X'[1] \to X''[1]$.
+Finally, we have to show that the square with sides
+$(Z'', X''[1], Z[1], X[2])$ anticommutes. This holds because
+$(X'', Y'', Z'') \to (A, Y'', Z[1], b, c , -a[1])$
+is a morphism of triangles and we're done.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Localization of triangulated categories}
+\label{section-localization}
+
+\noindent
+In order to construct the derived category starting from the homotopy
+category of complexes, we will use a localization process.
+
+\begin{definition}
+\label{definition-localization}
+Let $\mathcal{D}$ be a pre-triangulated category. We say a multiplicative
+system $S$ is {\it compatible with the triangulated structure} if
+the following two conditions hold:
+\begin{enumerate}
+\item[MS5] For $s \in S$ we have $s[n] \in S$ for all $n \in \mathbf{Z}$.
+\item[MS6] Given a solid commutative square
+$$
+\xymatrix{
+X \ar[r] \ar[d]^s &
+Y \ar[r] \ar[d]^{s'} &
+Z \ar[r] \ar@{-->}[d] &
+X[1] \ar[d]^{s[1]} \\
+X' \ar[r] &
+Y' \ar[r] &
+Z' \ar[r] &
+X'[1]
+}
+$$
+whose rows are distinguished triangles with $s, s' \in S$
+there exists a morphism $s'' : Z \to Z'$ in $S$ such that
+$(s, s', s'')$ is a morphism of triangles.
+\end{enumerate}
+\end{definition}
+
+\noindent
+It turns out that these axioms are not independent of the
+axioms defining multiplicative systems.
+
+\begin{lemma}
+\label{lemma-localization-conditions}
+Let $\mathcal{D}$ be a pre-triangulated category.
+Let $S$ be a set of morphisms of $\mathcal{D}$ and assume that axioms
+MS1, MS5, MS6 hold (see
+Categories, Definition \ref{categories-definition-multiplicative-system}
+and
+Definition \ref{definition-localization}).
+Then MS2 holds.
+\end{lemma}
+
+\begin{proof}
+Suppose that $f : X \to Y$ is a morphism of $\mathcal{D}$ and
+$t : X \to X'$ an element of $S$. Choose a distinguished triangle
+$(X, Y, Z, f, g, h)$. Next, choose a distinguished triangle
+$(X', Y', Z, f', g', t[1] \circ h)$ (here we use TR1 and TR2).
+By MS5, MS6 (and TR2 to rotate) we can find the dotted arrow
+in the commutative diagram
+$$
+\xymatrix{
+X \ar[r] \ar[d]^t &
+Y \ar[r] \ar@{..>}[d]^{s'} &
+Z \ar[r] \ar[d]^1 &
+X[1] \ar[d]^{t[1]} \\
+X' \ar[r] &
+Y' \ar[r] &
+Z \ar[r] &
+X'[1]
+}
+$$
+with moreover $s' \in S$. This proves LMS2. The proof of RMS2 is dual.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-triangle-functor-localize}
+Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor of
+pre-triangulated categories. Let
+$$
+S = \{f \in \text{Arrows}(\mathcal{D})
+\mid F(f)\text{ is an isomorphism}\}
+$$
+Then $S$ is a saturated (see
+Categories,
+Definition \ref{categories-definition-saturated-multiplicative-system})
+multiplicative system compatible with the
+triangulated structure on $\mathcal{D}$.
+\end{lemma}
+
+\begin{proof}
+We have to prove axioms MS1 -- MS6, see
+Categories, Definitions \ref{categories-definition-multiplicative-system} and
+\ref{categories-definition-saturated-multiplicative-system}
+and
+Definition \ref{definition-localization}.
+MS1, MS4, and MS5 are direct from the definitions. MS6 follows from TR3 and
+Lemma \ref{lemma-third-isomorphism-triangle}.
+By
+Lemma \ref{lemma-localization-conditions}
+we conclude that MS2 holds. To finish the proof we have to show that
+MS3 holds. To do this let $f, g : X \to Y$ be morphisms of $\mathcal{D}$,
+and let $t : Z \to X$ be an element of $S$ such that $f \circ t = g \circ t$.
+As $\mathcal{D}$ is additive this simply means that $a \circ t = 0$ with
+$a = f - g$. Choose a distinguished triangle $(Z, X, Q, t, d, h)$ using TR1.
+Since $a \circ t = 0$ we see by
+Lemma \ref{lemma-representable-homological}
+there exists a morphism $i : Q \to Y$ such that $i \circ d = a$.
+Finally, using TR1 again we can choose a triangle
+$(Q, Y, W, i, j, k)$. Here is a picture
+$$
+\xymatrix{
+Z \ar[r]_t & X \ar[r]_d \ar[d]^1 & Q \ar[r] \ar[d]^i & Z[1] \\
+& X \ar[r]_a & Y \ar[d]^j \\
+& & W
+}
+$$
+OK, and now we apply the functor $F$ to this diagram.
+Since $t \in S$ we see that $F(Q) = 0$, see
+Lemma \ref{lemma-third-object-zero}.
+Hence $F(j)$ is an isomorphism by the same lemma, i.e., $j \in S$.
+Finally, $j \circ a = j \circ i \circ d = 0$ as $j \circ i = 0$.
+Thus $j \circ f = j \circ g$ and we see that LMS3 holds.
+The proof of RMS3 is dual.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homological-functor-localize}
+Let $H : \mathcal{D} \to \mathcal{A}$ be a homological functor between a
+pre-triangulated category and an abelian category. Let
+$$
+S = \{f \in \text{Arrows}(\mathcal{D})
+\mid H^i(f)\text{ is an isomorphism for all }i \in \mathbf{Z}\}
+$$
+Then $S$ is a saturated (see
+Categories,
+Definition \ref{categories-definition-saturated-multiplicative-system})
+multiplicative system compatible with the
+triangulated structure on $\mathcal{D}$.
+\end{lemma}
+
+\begin{proof}
+We have to prove axioms MS1 -- MS6, see
+Categories, Definitions \ref{categories-definition-multiplicative-system} and
+\ref{categories-definition-saturated-multiplicative-system}
+and
+Definition \ref{definition-localization}.
+MS1, MS4, and MS5 are direct from the definitions.
+MS6 follows from TR3 and the long exact cohomology sequence
+(\ref{equation-long-exact-cohomology-sequence}).
+By
+Lemma \ref{lemma-localization-conditions}
+we conclude that MS2 holds. To finish the proof we have to show that
+MS3 holds. To do this let $f, g : X \to Y$ be morphisms of $\mathcal{D}$,
+and let $t : Z \to X$ be an element of $S$ such that $f \circ t = g \circ t$.
+As $\mathcal{D}$ is additive this simply means that $a \circ t = 0$ with
$a = f - g$. Choose a distinguished triangle $(Z, X, Q, t, d, h)$ using
TR1 and TR2. Since $a \circ t = 0$ we see by
Lemma \ref{lemma-representable-homological}
there exists a morphism $i : Q \to Y$ such that $i \circ d = a$.
Finally, using TR1 again we can choose a triangle
$(Q, Y, W, i, j, k)$. Here is a picture
$$
\xymatrix{
Z \ar[r]_t & X \ar[r]_d \ar[d]^1 & Q \ar[r] \ar[d]^i & Z[1] \\
& X \ar[r]_a & Y \ar[d]^j \\
& & W
}
$$
OK, and now we apply the functors $H^i$ to this diagram.
Since $t \in S$ we see that $H^i(Q) = 0$ by the long exact cohomology
sequence (\ref{equation-long-exact-cohomology-sequence}).
Hence $H^i(j)$ is an isomorphism for all $i$ by the same argument,
i.e., $j \in S$. Finally, $j \circ a = j \circ i \circ d = 0$ as
+$j \circ i = 0$. Thus $j \circ f = j \circ g$ and we see that LMS3 holds.
+The proof of RMS3 is dual.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-construct-localization}
+Let $\mathcal{D}$ be a pre-triangulated category. Let $S$ be a multiplicative
+system compatible with the triangulated structure.
+Then there exists a unique structure of a pre-triangulated category on
+$S^{-1}\mathcal{D}$ such that the localization functor
+$Q : \mathcal{D} \to S^{-1}\mathcal{D}$ is exact.
+Moreover, if $\mathcal{D}$ is a triangulated category, so is
+$S^{-1}\mathcal{D}$.
+\end{proposition}
+
+\begin{proof}
+We have seen that $S^{-1}\mathcal{D}$ is an additive category
+and that the localization functor $Q$ is additive in
+Homology, Lemma \ref{homology-lemma-localization-additive}.
+It is clear that we may define $Q(X)[n] = Q(X[n])$ since
$S$ is preserved under the shift functors $[n]$ by
+MS5. Finally, we say a triangle of $S^{-1}\mathcal{D}$ is distinguished
+if it is isomorphic to the image of a distinguished triangle under
+the localization functor $Q$.
+
+\medskip\noindent
+Proof of TR1. The only thing to prove here is that if
+$a : Q(X) \to Q(Y)$ is a morphism of $S^{-1}\mathcal{D}$, then
+$a$ fits into a distinguished triangle. Write $a = Q(s)^{-1} \circ Q(f)$ for
+some $s : Y \to Y'$ in $S$ and $f : X \to Y'$. Choose a distinguished
+triangle $(X, Y', Z, f, g, h)$ in $\mathcal{D}$. Then we see that
+$(Q(X), Q(Y), Q(Z), a, Q(g) \circ Q(s), Q(h))$ is a distinguished triangle
+of $S^{-1}\mathcal{D}$.
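
\medskip\noindent
Indeed, since $a = Q(s)^{-1} \circ Q(f)$ the triple $(1, Q(s)^{-1}, 1)$
is an isomorphism of triangles
$$
(Q(X), Q(Y'), Q(Z), Q(f), Q(g), Q(h))
\longrightarrow
(Q(X), Q(Y), Q(Z), a, Q(g) \circ Q(s), Q(h))
$$
and the source is the image under $Q$ of the distinguished triangle
$(X, Y', Z, f, g, h)$ of $\mathcal{D}$.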
+
+\medskip\noindent
+Proof of TR2. This is immediate from the definitions.
+
+\medskip\noindent
+Proof of TR3. Note that the existence of the dotted arrow which is
+required to exist may be proven after replacing the two triangles
+by isomorphic triangles. Hence we may assume given distinguished
+triangles $(X, Y, Z, f, g, h)$ and $(X', Y', Z', f', g', h')$ of
+$\mathcal{D}$ and a commutative diagram
+$$
+\xymatrix{
+Q(X) \ar[r]_{Q(f)} \ar[d]_a & Q(Y) \ar[d]^b \\
+Q(X') \ar[r]^{Q(f')} & Q(Y')
+}
+$$
+in $S^{-1}\mathcal{D}$. Now we apply
+Categories, Lemma \ref{categories-lemma-left-localization-diagram}
+to find a morphism $f'' : X'' \to Y''$ in $\mathcal{D}$ and a commutative
+diagram
+$$
+\xymatrix{
+X \ar[d]_f \ar[r]_k & X'' \ar[d]^{f''} & X' \ar[d]^{f'} \ar[l]^s \\
+Y \ar[r]^l & Y'' & Y' \ar[l]_t
+}
+$$
+in $\mathcal{D}$ with $s, t \in S$ and $a = s^{-1}k$, $b = t^{-1}l$.
+At this point we can use TR3 for $\mathcal{D}$ and MS6 to find
+a commutative diagram
+$$
+\xymatrix{
+X \ar[r] \ar[d]^k &
+Y \ar[r] \ar[d]^l &
+Z \ar[r] \ar[d]^m &
X[1] \ar[d]^{k[1]} \\
+X'' \ar[r] &
+Y'' \ar[r] &
+Z'' \ar[r] &
+X''[1] \\
+X' \ar[r] \ar[u]_s &
+Y' \ar[r] \ar[u]_t &
+Z' \ar[r] \ar[u]_r &
+X'[1] \ar[u]_{s[1]}
+}
+$$
+with $r \in S$. It follows that setting $c = Q(r)^{-1}Q(m)$ we obtain
+the desired morphism of triangles
+$$
+\xymatrix{
+(Q(X), Q(Y), Q(Z), Q(f), Q(g), Q(h))
+\ar[d]^{(a, b, c)} \\
+(Q(X'), Q(Y'), Q(Z'), Q(f'), Q(g'), Q(h'))
+}
+$$
+
+\medskip\noindent
This proves the first statement of the proposition. If $\mathcal{D}$ is also
+a triangulated category, then we still have to prove TR4 in order to show
+that $S^{-1}\mathcal{D}$ is triangulated as well. To do this we reduce by
+Lemma \ref{lemma-easier-axiom-four}
+to the following statement: Given composable morphisms
+$a : Q(X) \to Q(Y)$ and $b : Q(Y) \to Q(Z)$ we have to produce
+an octahedron after possibly replacing $Q(X), Q(Y), Q(Z)$ by isomorphic
+objects. To do this we may first replace $Y$ by an object such that
+$a = Q(f)$ for some morphism $f : X \to Y$ in $\mathcal{D}$. (More precisely,
+write $a = s^{-1}f$ with $s : Y \to Y'$ in $S$ and $f : X \to Y'$. Then
+replace $Y$ by $Y'$.) After this we similarly replace $Z$ by an object such
+that $b = Q(g)$ for some morphism $g : Y \to Z$. Now we can find
+distinguished triangles $(X, Y, Q_1, f, p_1, d_1)$,
+$(X, Z, Q_2, g \circ f, p_2, d_2)$, and
+$(Y, Z, Q_3, g, p_3, d_3)$ in $\mathcal{D}$ (by TR1), and
morphisms $\alpha : Q_1 \to Q_2$ and $\beta : Q_2 \to Q_3$ as in TR4.
+Then it is immediately verified that applying the functor $Q$ to
all of these data gives a corresponding structure in $S^{-1}\mathcal{D}$.
+\end{proof}
+
+\noindent
+The universal property of the localization of a triangulated category
+is as follows (we formulate this for pre-triangulated categories, hence
+it holds a fortiori for triangulated categories).
+
+\begin{lemma}
+\label{lemma-universal-property-localization}
+Let $\mathcal{D}$ be a pre-triangulated category. Let $S$ be a multiplicative
+system compatible with the triangulated structure. Let
+$Q : \mathcal{D} \to S^{-1}\mathcal{D}$ be the localization functor, see
+Proposition \ref{proposition-construct-localization}.
+\begin{enumerate}
+\item If $H : \mathcal{D} \to \mathcal{A}$ is a homological functor into
+an abelian category $\mathcal{A}$ such that $H(s)$ is an isomorphism for
+all $s \in S$, then the unique factorization
+$H' : S^{-1}\mathcal{D} \to \mathcal{A}$ such that $H = H' \circ Q$ (see
+Categories, Lemma \ref{categories-lemma-properties-left-localization})
+is a homological functor too.
+\item If $F : \mathcal{D} \to \mathcal{D}'$ is an exact functor into
+a pre-triangulated category $\mathcal{D}'$ such that $F(s)$ is an isomorphism
+for all $s \in S$, then the unique factorization
+$F' : S^{-1}\mathcal{D} \to \mathcal{D}'$ such that $F = F' \circ Q$ (see
+Categories, Lemma \ref{categories-lemma-properties-left-localization})
+is an exact functor too.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma proves itself. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization-subcategory}
+Let $\mathcal{D}$ be a pre-triangulated category and let
+$\mathcal{D}' \subset \mathcal{D}$ be a full, pre-triangulated subcategory.
+Let $S$ be a saturated multiplicative system of $\mathcal{D}$
+compatible with the triangulated structure. Assume that for each $X$
+in $\mathcal{D}$ there exists an $s : X' \to X$ in $S$ such that $X'$ is
+an object of $\mathcal{D}'$. Then
+$S' = S \cap \text{Arrows}(\mathcal{D}')$ is a saturated
+multiplicative system compatible with the triangulated structure and
+the functor
+$$
+(S')^{-1}\mathcal{D}' \longrightarrow S^{-1}\mathcal{D}
+$$
+is an equivalence of pre-triangulated categories.
+\end{lemma}
+
+\begin{proof}
+Consider the quotient functor $Q : \mathcal{D} \to S^{-1}\mathcal{D}$
+of Proposition \ref{proposition-construct-localization}.
+Since $S$ is saturated we have that a morphism $f : X \to Y$ is in
+$S$ if and only if $Q(f)$ is invertible, see
+Categories, Lemma \ref{categories-lemma-what-gets-inverted}.
+Thus $S'$ is the collection of arrows which are turned into isomorphisms
+by the composition $\mathcal{D}' \to \mathcal{D} \to S^{-1}\mathcal{D}$.
+Hence $S'$ is a saturated multiplicative system compatible with the
+triangulated structure by Lemma \ref{lemma-triangle-functor-localize}.
+By Lemma \ref{lemma-universal-property-localization} we obtain the
+exact functor $(S')^{-1}\mathcal{D}' \to S^{-1}\mathcal{D}$ of pre-triangulated
+categories. By assumption this functor is essentially surjective.
+Let $X', Y'$ be objects of $\mathcal{D}'$. By Categories, Remark
+\ref{categories-remark-right-localization-morphisms-colimit} we have
+$$
+\Mor_{S^{-1}\mathcal{D}}(X', Y') =
+\colim_{s : X \to X'\text{ in }S} \Mor_\mathcal{D}(X, Y')
+$$
+Our assumption implies that for any $s : X \to X'$ in $S$ we can
+find a morphism $s' : X'' \to X$ in $S$ with $X''$ in $\mathcal{D}'$.
+Then $s \circ s' : X'' \to X'$ is in $S'$. Hence the colimit above is
+equal to
+$$
+\colim_{s'' : X'' \to X'\text{ in }S'} \Mor_{\mathcal{D}'}(X'', Y') =
+\Mor_{(S')^{-1}\mathcal{D}'}(X', Y')
+$$
+This proves our functor is also fully faithful and the proof is complete.
+\end{proof}
+
+\noindent
+The following lemma describes the kernel (see
+Definition \ref{definition-kernel-category})
+of the localization functor.
+
+\begin{lemma}
+\label{lemma-kernel-localization}
+Let $\mathcal{D}$ be a pre-triangulated category. Let $S$ be a multiplicative
+system compatible with the triangulated structure. Let $Z$ be an object
+of $\mathcal{D}$. The following are equivalent
+\begin{enumerate}
+\item $Q(Z) = 0$ in $S^{-1}\mathcal{D}$,
+\item there exists $Z' \in \Ob(\mathcal{D})$ such that
+$0 : Z \to Z'$ is an element of $S$,
+\item there exists $Z' \in \Ob(\mathcal{D})$ such that
+$0 : Z' \to Z$ is an element of $S$, and
+\item there exists an object $Z'$ and a distinguished triangle
+$(X, Y, Z \oplus Z', f, g, h)$ such that $f \in S$.
+\end{enumerate}
+If $S$ is saturated, then these are also equivalent to
+\begin{enumerate}
+\item[(5)] the morphism $0 \to Z$ is an element of $S$,
+\item[(6)] the morphism $Z \to 0$ is an element of $S$,
+\item[(7)] there exists a distinguished triangle $(X, Y, Z, f, g, h)$
+such that $f \in S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1), (2), and (3) is
+Homology, Lemma \ref{homology-lemma-kernel-localization}.
+If (2) holds, then $(Z'[-1], Z'[-1] \oplus Z, Z, (1, 0), (0, 1), 0)$
+is a distinguished triangle (see
+Lemma \ref{lemma-split})
+with ``$0 \in S$''. By rotating we conclude that (4) holds.
+If $(X, Y, Z \oplus Z', f, g, h)$ is a distinguished triangle with $f \in S$
+then $Q(f)$ is an isomorphism hence $Q(Z \oplus Z') = 0$ hence $Q(Z) = 0$.
+Thus (1) -- (4) are all equivalent.
+
+\medskip\noindent
+Next, assume that $S$ is saturated. Note that each of (5), (6), (7)
+implies one of the equivalent conditions (1) -- (4). Suppose that
+$Q(Z) = 0$. Then $0 \to Z$ is a morphism of $\mathcal{D}$ which becomes
+an isomorphism in $S^{-1}\mathcal{D}$. According to
+Categories, Lemma \ref{categories-lemma-what-gets-inverted}
+the fact that $S$ is saturated implies that $0 \to Z$ is in $S$.
+Hence (1) $\Rightarrow$ (5). Dually (1) $\Rightarrow$ (6).
+Finally, if $0 \to Z$ is in $S$, then the triangle
+$(0, Z, Z, 0, \text{id}_Z, 0)$ is distinguished by TR1 and TR2 and
+is a triangle as in (7).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-limit-triangles}
+Let $\mathcal{D}$ be a triangulated category.
+Let $S$ be a saturated multiplicative system in $\mathcal{D}$
+that is compatible with the triangulated structure.
+Let $(X, Y, Z, f, g, h)$ be a distinguished triangle in $\mathcal{D}$.
+Consider the category of morphisms of triangles
+$$
+\mathcal{I} =
+\{(s, s', s'') : (X, Y, Z, f, g, h) \to (X', Y', Z', f', g', h')
+\mid s, s', s'' \in S\}
+$$
+Then $\mathcal{I}$ is a filtered category and the functors
+$\mathcal{I} \to X/S$, $\mathcal{I} \to Y/S$, and $\mathcal{I} \to Z/S$
+are cofinal.
+\end{lemma}
+
+\begin{proof}
+We strongly suggest the reader skip the proof of this lemma and instead
+work it out on a napkin.
+
+\medskip\noindent
+The first remark is that using rotation of distinguished triangles (TR2)
+gives an equivalence of categories between $\mathcal{I}$ and the
+corresponding category for the distinguished triangle
+$(Y, Z, X[1], g, h, -f[1])$. Using this we see for example that if
+we prove the functor $\mathcal{I} \to X/S$ is cofinal, then
+the same thing is true for the functors $\mathcal{I} \to Y/S$ and
+$\mathcal{I} \to Z/S$.
+
+\medskip\noindent
+Note that if $s : X \to X'$ is a morphism of $S$, then using
+MS2 we can find $s' : Y \to Y'$ and $f' : X' \to Y'$ such that
+$f' \circ s = s' \circ f$, whereupon we can use MS6 to complete
+this into an object of $\mathcal{I}$. Hence the functor
+$\mathcal{I} \to X/S$ is surjective on objects. Using rotation as above
+this implies the same thing is true for the functors
+$\mathcal{I} \to Y/S$ and $\mathcal{I} \to Z/S$.
+
+\medskip\noindent
+Suppose given objects $s_1 : X \to X_1$ and $s_2 : X \to X_2$ in
+$X/S$ and a morphism $a : X_1 \to X_2$ in $X/S$. Since $S$ is saturated,
+we see that $a \in S$, see
+Categories, Lemma \ref{categories-lemma-what-gets-inverted}.
+By the argument of the previous paragraph we can complete
+$s_1 : X \to X_1$ to an object
+$(s_1, s'_1, s''_1) : (X, Y, Z, f, g, h) \to (X_1, Y_1, Z_1, f_1, g_1, h_1)$
+in $\mathcal{I}$. Then we can repeat and find
+$(a, b, c) : (X_1, Y_1, Z_1, f_1, g_1, h_1) \to (X_2, Y_2, Z_2, f_2, g_2, h_2)$
+with $a, b, c \in S$ completing the given $a : X_1 \to X_2$.
+But then $(a, b, c)$ is a morphism in $\mathcal{I}$.
+In this way we conclude that the functor $\mathcal{I} \to X/S$ is
+also surjective on arrows. Using rotation as above,
+this implies the same thing is true for the functors
+$\mathcal{I} \to Y/S$ and $\mathcal{I} \to Z/S$.
+
+\medskip\noindent
+The category $\mathcal{I}$ is nonempty as the identity provides an object.
+This proves the condition (1) of the definition of a filtered category, see
+Categories, Definition \ref{categories-definition-directed}.
+
+\medskip\noindent
+We check condition (2) of
+Categories, Definition \ref{categories-definition-directed}
+for the category $\mathcal{I}$. Suppose given objects
+$(s_1, s'_1, s''_1) : (X, Y, Z, f, g, h) \to (X_1, Y_1, Z_1, f_1, g_1, h_1)$
+and
+$(s_2, s'_2, s''_2) : (X, Y, Z, f, g, h) \to (X_2, Y_2, Z_2, f_2, g_2, h_2)$
+in $\mathcal{I}$. We want to find an object of $\mathcal{I}$
+which is the target of an arrow from both
+$(X_1, Y_1, Z_1, f_1, g_1, h_1)$ and $(X_2, Y_2, Z_2, f_2, g_2, h_2)$.
+By Categories, Remark
+\ref{categories-remark-left-localization-morphisms-colimit}
+the categories $X/S$, $Y/S$, $Z/S$ are filtered.
+Thus we can find $X \to X_3$ in $X/S$ and morphisms
+$s : X_2 \to X_3$ and $a : X_1 \to X_3$. By the above we can find a morphism
+$(s, s', s'') : (X_2, Y_2, Z_2, f_2, g_2, h_2) \to
+(X_3, Y_3, Z_3, f_3, g_3, h_3)$ with $s', s'' \in S$.
+After replacing $(X_2, Y_2, Z_2)$ by $(X_3, Y_3, Z_3)$ we may
+assume that there exists a morphism $a : X_1 \to X_2$ in $X/S$.
+Repeating the argument for $Y$ and $Z$ (by rotating as above)
+we may assume there is a morphism
+$a : X_1 \to X_2$ in $X/S$,
+$b : Y_1 \to Y_2$ in $Y/S$, and
+$c : Z_1 \to Z_2$ in $Z/S$.
+However, these morphisms do not necessarily give rise to a morphism of
+distinguished triangles. On the other hand, the necessary diagrams
+do commute in $S^{-1}\mathcal{D}$. Hence we see (for example) that
+there exists a morphism $s'_2 : Y_2 \to Y_3$ in $S$ such that
+$s'_2 \circ f_2 \circ a = s'_2 \circ b \circ f_1$. Another replacement
+of $(X_2, Y_2, Z_2)$ as above then gets us to the situation where
+$f_2 \circ a = b \circ f_1$. Rotating and applying the same argument
+two more times we see that we may assume $(a, b, c)$ is a morphism
+of triangles. This proves condition (2).
+
+\medskip\noindent
+Next we check condition (3) of
+Categories, Definition \ref{categories-definition-directed}.
+Suppose $(s_1, s_1', s_1'') : (X, Y, Z) \to (X_1, Y_1, Z_1)$ and
+$(s_2, s_2', s_2'') : (X, Y, Z) \to (X_2, Y_2, Z_2)$
+are objects of $\mathcal{I}$, and suppose $(a, b, c), (a', b', c')$
+are two morphisms between them. Since $a \circ s_1 = a' \circ s_1$
+there exists a morphism $s_3 : X_2 \to X_3$ such that
+$s_3 \circ a = s_3 \circ a'$. Using the surjectivity statement
+we can complete this to a morphism of triangles
+$(s_3, s_3', s_3'') : (X_2, Y_2, Z_2) \to (X_3, Y_3, Z_3)$
+with $s_3, s_3', s_3'' \in S$. Thus
+$(s_3 \circ s_2, s_3' \circ s_2', s_3'' \circ s_2'') :
+(X, Y, Z) \to (X_3, Y_3, Z_3)$ is also an object of $\mathcal{I}$
+and after composing the maps $(a, b, c), (a', b', c')$ with
+$(s_3, s_3', s_3'')$ we obtain $a = a'$. By rotating we may do the
+same to get $b = b'$ and $c = c'$.
+
+\medskip\noindent
+Finally, we check that $\mathcal{I} \to X/S$ is cofinal, see
+Categories, Definition \ref{categories-definition-cofinal}.
+The first condition is true as the functor is surjective.
+Suppose that we have an object $s : X \to X'$ in $X/S$ and
+two objects
+$(s_1, s'_1, s''_1) : (X, Y, Z, f, g, h) \to (X_1, Y_1, Z_1, f_1, g_1, h_1)$
+and
+$(s_2, s'_2, s''_2) : (X, Y, Z, f, g, h) \to (X_2, Y_2, Z_2, f_2, g_2, h_2)$
+in $\mathcal{I}$ as well as morphisms $t_1 : X' \to X_1$ and
+$t_2 : X' \to X_2$ in $X/S$. By property (2) of $\mathcal{I}$
+proved above we can find morphisms
+$(s_3, s'_3, s''_3) : (X_1, Y_1, Z_1, f_1, g_1, h_1) \to
+(X_3, Y_3, Z_3, f_3, g_3, h_3)$
+and
+$(s_4, s'_4, s''_4) : (X_2, Y_2, Z_2, f_2, g_2, h_2) \to
+(X_3, Y_3, Z_3, f_3, g_3, h_3)$ in $\mathcal{I}$.
+We would be done if the compositions
+$X' \to X_1 \to X_3$ and $X' \to X_2 \to X_3$ were equal
+(see displayed equation in
+Categories, Definition \ref{categories-definition-cofinal}).
+If not, then, because $X/S$ is filtered, we can choose
+a morphism $X_3 \to X_4$ in $S$ such that the compositions
+$X' \to X_1 \to X_3 \to X_4$ and $X' \to X_2 \to X_3 \to X_4$ are equal.
+Then we finally complete $X_3 \to X_4$ to a morphism
+$(X_3, Y_3, Z_3) \to (X_4, Y_4, Z_4)$ in $\mathcal{I}$
+and compose with that morphism to see that the result is true.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Quotients of triangulated categories}
+\label{section-quotients}
+
+\noindent
+Given a triangulated category and a triangulated subcategory we can
+construct another triangulated category by taking the ``quotient''.
+The construction uses a localization. This is similar to the quotient
+of an abelian category by a Serre subcategory, see
+Homology, Section \ref{homology-section-serre-subcategories}.
+Before we do the actual construction we briefly discuss kernels
+of exact functors.
+
+\begin{definition}
+\label{definition-saturated}
+Let $\mathcal{D}$ be a pre-triangulated category. We say a full
+pre-triangulated subcategory $\mathcal{D}'$ of $\mathcal{D}$ is
+{\it saturated} if whenever $X \oplus Y$ is isomorphic to an object
+of $\mathcal{D}'$ then both $X$ and $Y$ are isomorphic to objects
+of $\mathcal{D}'$.
+\end{definition}
+
+\noindent
+A saturated triangulated subcategory is sometimes called a
+{\it thick triangulated subcategory}. In some references, this is
+only used for strictly full triangulated subcategories (and sometimes
+the definition is written such that it implies strictness).
+There is another notion, that of an {\it \'epaisse triangulated
+subcategory}. The definition is that given a commutative diagram
+$$
+\xymatrix{
+& S \ar[rd] \\
+X \ar[ru] \ar[rr] & & Y \ar[r] & T \ar[r] & X[1]
+}
+$$
+where the second line is a distinguished triangle and $S$ and $T$ are
+isomorphic to objects of $\mathcal{D}'$, then also $X$ and $Y$ are isomorphic
+to objects of $\mathcal{D}'$. It turns out that this is equivalent to being
+saturated (this is elementary and can be found in \cite{Rickard-derived})
+and the notion of a saturated subcategory is easier to work with.
+
+\begin{lemma}
+\label{lemma-triangle-functor-kernel}
+Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor of
+pre-triangulated categories. Let $\mathcal{D}''$ be the full subcategory
+of $\mathcal{D}$ with objects
+$$
+\Ob(\mathcal{D}'') =
+\{X \in \Ob(\mathcal{D}) \mid F(X) = 0\}
+$$
+Then $\mathcal{D}''$ is a strictly full saturated pre-triangulated
+subcategory of $\mathcal{D}$. If $\mathcal{D}$ is a triangulated category,
+then $\mathcal{D}''$ is a triangulated subcategory.
+\end{lemma}
+
+\begin{proof}
+It is clear that $\mathcal{D}''$ is preserved under $[1]$ and $[-1]$.
+If $(X, Y, Z, f, g, h)$ is a distinguished triangle of $\mathcal{D}$
+and $F(X) = F(Y) = 0$, then also $F(Z) = 0$ as
+$(F(X), F(Y), F(Z), F(f), F(g), F(h))$ is distinguished.
+Hence we may apply
+Lemma \ref{lemma-triangulated-subcategory}
+to see that $\mathcal{D}''$ is a pre-triangulated subcategory (respectively
+a triangulated subcategory if $\mathcal{D}$ is a triangulated category).
+The final assertion of being saturated follows from
+$F(X) \oplus F(Y) = 0 \Rightarrow F(X) = F(Y) = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homological-functor-kernel}
+Let $H : \mathcal{D} \to \mathcal{A}$ be a homological functor of
+a pre-triangulated category into an abelian category.
+Let $\mathcal{D}'$ be the full subcategory of $\mathcal{D}$ with objects
+$$
+\Ob(\mathcal{D}') =
+\{X \in \Ob(\mathcal{D}) \mid
+H(X[n]) = 0\text{ for all }n \in \mathbf{Z}\}
+$$
+Then $\mathcal{D}'$ is a strictly full saturated pre-triangulated subcategory
+of $\mathcal{D}$. If $\mathcal{D}$ is a triangulated category, then
+$\mathcal{D}'$ is a triangulated subcategory.
+\end{lemma}
+
+\begin{proof}
+It is clear that $\mathcal{D}'$ is preserved under $[1]$ and $[-1]$.
+If $(X, Y, Z, f, g, h)$ is a distinguished triangle of $\mathcal{D}$
+and $H(X[n]) = H(Y[n]) = 0$ for all $n$, then also $H(Z[n]) = 0$ for all $n$
+by the long exact sequence (\ref{equation-long-exact-cohomology-sequence}).
+Hence we may apply
+Lemma \ref{lemma-triangulated-subcategory}
+to see that $\mathcal{D}'$ is a pre-triangulated subcategory (respectively
+a triangulated subcategory if $\mathcal{D}$ is a triangulated category).
+The assertion of being saturated follows from
+\begin{align*}
+H((X \oplus Y)[n]) = 0 & \Rightarrow H(X[n] \oplus Y[n]) = 0 \\
+& \Rightarrow H(X[n]) \oplus H(Y[n]) = 0 \\
+& \Rightarrow H(X[n]) = H(Y[n]) = 0
+\end{align*}
+for all $n \in \mathbf{Z}$.
+\end{proof}
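+
+\noindent
+For example (a standard special case): if $\mathcal{A}$ is an abelian
+category and $H = H^0 : K(\mathcal{A}) \to \mathcal{A}$ sends a complex to
+its zeroth cohomology object, where $K(\mathcal{A})$ is the homotopy
+category of Section \ref{section-homotopy}, then $H(X[n]) = H^n(X)$ and
+the subcategory $\mathcal{D}'$ of the lemma consists of the acyclic
+complexes.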
+
+\begin{lemma}
+\label{lemma-homological-functor-bounded}
+Let $H : \mathcal{D} \to \mathcal{A}$ be a homological functor of
+a pre-triangulated category into an abelian category.
+Let $\mathcal{D}_H^{+}, \mathcal{D}_H^{-}, \mathcal{D}_H^b$
+be the full subcategories of $\mathcal{D}$ with objects
+$$
+\begin{matrix}
+\Ob(\mathcal{D}_H^{+}) =
+\{X \in \Ob(\mathcal{D}) \mid
+H(X[n]) = 0\text{ for all }n \ll 0\} \\
+\Ob(\mathcal{D}_H^{-}) =
+\{X \in \Ob(\mathcal{D}) \mid
+H(X[n]) = 0\text{ for all }n \gg 0\} \\
+\Ob(\mathcal{D}_H^b) =
+\{X \in \Ob(\mathcal{D}) \mid
+H(X[n]) = 0\text{ for all }|n| \gg 0\}
+\end{matrix}
+$$
+Each of these is a strictly full saturated pre-triangulated subcategory
+of $\mathcal{D}$. If $\mathcal{D}$ is a triangulated category, then
+each is a triangulated subcategory.
+\end{lemma}
+
+\begin{proof}
+Let us prove this for $\mathcal{D}_H^{+}$.
+It is clear that it is preserved under $[1]$ and $[-1]$.
+If $(X, Y, Z, f, g, h)$ is a distinguished triangle of $\mathcal{D}$
+and $H(X[n]) = H(Y[n]) = 0$ for all $n \ll 0$, then also $H(Z[n]) = 0$
+for all $n \ll 0$ by the long exact sequence
+(\ref{equation-long-exact-cohomology-sequence}).
+Hence we may apply
+Lemma \ref{lemma-triangulated-subcategory}
+to see that $\mathcal{D}_H^{+}$ is a pre-triangulated subcategory
+(respectively a triangulated subcategory if $\mathcal{D}$ is a
+triangulated category). The assertion of being saturated follows from
+\begin{align*}
+H((X \oplus Y)[n]) = 0 & \Rightarrow H(X[n] \oplus Y[n]) = 0 \\
+& \Rightarrow H(X[n]) \oplus H(Y[n]) = 0 \\
+& \Rightarrow H(X[n]) = H(Y[n]) = 0
+\end{align*}
+for all $n \in \mathbf{Z}$.
+\end{proof}
+
+\begin{definition}
+\label{definition-kernel-category}
+Let $\mathcal{D}$ be a (pre-)triangulated category.
+\begin{enumerate}
+\item Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor.
+The {\it kernel of $F$} is the strictly full saturated
+(pre-)triangulated subcategory described in
+Lemma \ref{lemma-triangle-functor-kernel}.
+\item Let $H : \mathcal{D} \to \mathcal{A}$ be a homological functor.
+The {\it kernel of $H$} is the strictly full saturated
+(pre-)triangulated subcategory described in
+Lemma \ref{lemma-homological-functor-kernel}.
+\end{enumerate}
+These are sometimes denoted $\Ker(F)$ or $\Ker(H)$.
+\end{definition}
+
+\noindent
+The proof of the following lemma uses TR4.
+
+\begin{lemma}
+\label{lemma-construct-multiplicative-system}
+Let $\mathcal{D}$ be a triangulated category.
+Let $\mathcal{D}' \subset \mathcal{D}$ be a full triangulated
+subcategory. Set
+\begin{equation}
+\label{equation-multiplicative-system}
+S =
+\left\{
+\begin{matrix}
+f \in \text{Arrows}(\mathcal{D})
+\text{ such that there exists a distinguished triangle }\\
+(X, Y, Z, f, g, h) \text{ of }\mathcal{D}\text{ with }
+Z\text{ isomorphic to an object of }\mathcal{D}'
+\end{matrix}
+\right\}
+\end{equation}
+Then $S$ is a multiplicative system compatible with the triangulated
+structure on $\mathcal{D}$. In this situation the following are equivalent
+\begin{enumerate}
+\item $S$ is a saturated multiplicative system,
+\item $\mathcal{D}'$ is a saturated triangulated subcategory.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove the first assertion we have to prove that
+MS1, MS2, MS3 and MS5, MS6 hold.
+
+\medskip\noindent
+Proof of MS1. It is clear that identities are in $S$ because
+$(X, X, 0, 1, 0, 0)$ is distinguished for every object $X$ of $\mathcal{D}$
+and because $0$ is an object of $\mathcal{D}'$. Let $f : X \to Y$
+and $g : Y \to Z$ be composable morphisms contained in $S$.
+Choose distinguished triangles $(X, Y, Q_1, f, p_1, d_1)$,
+$(X, Z, Q_2, g \circ f, p_2, d_2)$, and $(Y, Z, Q_3, g, p_3, d_3)$.
+By assumption we know that $Q_1$ and $Q_3$ are isomorphic to objects
+of $\mathcal{D}'$. By TR4 we know there exists a distinguished
+triangle $(Q_1, Q_2, Q_3, a, b, c)$. Since $\mathcal{D}'$ is a
+triangulated subcategory we conclude that $Q_2$ is isomorphic to
+an object of $\mathcal{D}'$. Hence $g \circ f \in S$.
+
+\medskip\noindent
+Proof of MS3. Let $a : X \to Y$ be a morphism and let $t : Z \to X$ be
+an element of $S$ such that $a \circ t = 0$. To prove LMS3 it suffices to
+find an $s \in S$ such that $s \circ a = 0$, compare with the proof of
+Lemma \ref{lemma-triangle-functor-localize}. Choose a distinguished
+triangle $(Z, X, Q, t, g, h)$ using TR1 and TR2. Since $a \circ t = 0$
+we see by
+Lemma \ref{lemma-representable-homological}
+there exists a morphism $i : Q \to Y$ such that $i \circ g = a$.
+Finally, using TR1 again we can choose a triangle
+$(Q, Y, W, i, s, k)$. Here is a picture
+$$
+\xymatrix{
+Z \ar[r]_t & X \ar[r]_g \ar[d]^1 & Q \ar[r] \ar[d]^i & Z[1] \\
+& X \ar[r]_a & Y \ar[d]^s \\
+& & W
+}
+$$
+Since $t \in S$ we see that $Q$ is isomorphic to an object of $\mathcal{D}'$.
+Hence $s \in S$. Finally, $s \circ a = s \circ i \circ g = 0$ as
+$s \circ i = 0$ by Lemma \ref{lemma-composition-zero}.
+We conclude that LMS3 holds.
+The proof of RMS3 is dual.
+
+\medskip\noindent
+Proof of MS5. This follows as distinguished triangles and $\mathcal{D}'$
+are stable under translations.
+
+\medskip\noindent
+Proof of MS6. Suppose given a commutative diagram
+$$
+\xymatrix{
+X \ar[r] \ar[d]^s &
+Y \ar[d]^{s'} \\
+X' \ar[r] &
+Y'
+}
+$$
+with $s, s' \in S$. By
+Proposition \ref{proposition-9}
+we can extend this to a nine square diagram with rows
+$(X, Y, Z)$, $(X', Y', Z')$, and $(X'', Y'', Z'')$, where $X''$ and $Y''$
+are cones of $s$ and $s'$. As $s, s'$ are elements of $S$
+we see that $X'', Y''$ are isomorphic to objects of $\mathcal{D}'$.
+Since $\mathcal{D}'$ is a full triangulated subcategory we see that
+$Z''$ is also isomorphic to an object of $\mathcal{D}'$.
+Whence the morphism $Z \to Z'$
+is an element of $S$. This proves MS6.
+
+\medskip\noindent
+MS2 is a formal consequence of MS1, MS5, and MS6, see
+Lemma \ref{lemma-localization-conditions}.
+This finishes the proof of the first assertion of the lemma.
+
+\medskip\noindent
+Let's assume that $S$ is saturated. (In the following we will use
+rotation of distinguished triangles without further mention.)
+Let $X \oplus Y$ be an object isomorphic to an object of $\mathcal{D}'$.
+Consider the morphism $f : 0 \to X$. The composition
+$0 \to X \to X \oplus Y$ is an element
+of $S$ as $(0, X \oplus Y, X \oplus Y, 0, 1, 0)$ is a distinguished
+triangle. The composition $Y[-1] \to 0 \to X$ is an element of $S$
+as $(X, X \oplus Y, Y, (1, 0), (0, 1), 0)$ is a distinguished triangle, see
+Lemma \ref{lemma-split}.
+Hence $0 \to X$ is an element of $S$ (as $S$ is saturated).
+Thus $X$ is isomorphic to an object of $\mathcal{D}'$ as desired.
+
+\medskip\noindent
+Finally, assume $\mathcal{D}'$ is a saturated triangulated subcategory.
+Let
+$$
+W \xrightarrow{h}
+X \xrightarrow{g}
+Y \xrightarrow{f} Z
+$$
+be composable morphisms of $\mathcal{D}$ such that $fg, gh \in S$.
+We will build up a picture of objects as in the diagram below.
+$$
+\xymatrix{
+ & &
+Q_{12} \ar[rd] & &
+Q_{23} \ar[rd] \\
+ &
+Q_1 \ar[ld]_{\! + \! 1} \ar[ru] & &
+Q_2 \ar[ld]_{\! + \! 1} \ar[ll]_{\! + \! 1} \ar[ru] & &
+Q_3 \ar[ld]_{\! + \! 1} \ar[ll]_{\! + \! 1} \\
+W \ar[rr] & &
+X \ar[lu] \ar[rr] & &
+Y \ar[lu] \ar[rr] & &
+Z \ar[lu]
+}
+$$
+First choose distinguished triangles
+$(W, X, Q_1)$, $(X, Y, Q_2)$, $(Y, Z, Q_3)$, $(W, Y, Q_{12})$, and
+$(X, Z, Q_{23})$. Denote $s : Q_2 \to Q_1[1]$ the composition
+$Q_2 \to X[1] \to Q_1[1]$. Denote $t : Q_3 \to Q_2[1]$ the
+composition $Q_3 \to Y[1] \to Q_2[1]$.
+By TR4 applied to the composition $W \to X \to Y$
+and the composition $X \to Y \to Z$ there exist
+distinguished triangles $(Q_1, Q_{12}, Q_2)$ and $(Q_2, Q_{23}, Q_3)$
+which use the morphisms $s$ and $t$.
+The objects $Q_{12}$ and $Q_{23}$ are isomorphic to objects of
+$\mathcal{D}'$ as $W \to Y$ and $X \to Z$ are assumed in $S$.
+Hence also $s[1]t$ is an element of $S$ as $S$ is closed under compositions
+and shifts.
+Note that $s[1]t = 0$ as $Y[1] \to Q_2[1] \to X[2]$ is zero, see
+Lemma \ref{lemma-composition-zero}.
+Hence $Q_3[1] \oplus Q_1[2]$ is isomorphic to an object of
+$\mathcal{D}'$, see Lemma \ref{lemma-split}.
+By assumption on $\mathcal{D}'$ we conclude that $Q_3$ and $Q_1$ are isomorphic
+to objects of $\mathcal{D}'$. Looking at the distinguished triangle
+$(Q_1, Q_{12}, Q_2)$ we conclude that $Q_2$ is also isomorphic to
+an object of $\mathcal{D}'$. Looking at the distinguished triangle
+$(X, Y, Q_2)$ we finally conclude that $g \in S$. (It also
+follows that $h, f \in S$, but we don't need this.)
+\end{proof}
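+
+\noindent
+As a sanity check, if $\mathcal{D}'$ is the full subcategory of objects
+isomorphic to $0$, then $S$ is exactly the set of isomorphisms of
+$\mathcal{D}$: in a distinguished triangle $(X, Y, Z, f, g, h)$ the object
+$Z$ is zero if and only if $f$ is an isomorphism, see
+Lemma \ref{lemma-third-object-zero}. In this case $S^{-1}\mathcal{D}$ is
+equivalent to $\mathcal{D}$.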
+
+\begin{definition}
+\label{definition-quotient-category}
+Let $\mathcal{D}$ be a triangulated category.
+Let $\mathcal{B}$ be a full triangulated subcategory.
+We define the {\it quotient category $\mathcal{D}/\mathcal{B}$}
+by the formula $\mathcal{D}/\mathcal{B} = S^{-1}\mathcal{D}$, where
+$S$ is the multiplicative system of $\mathcal{D}$ associated to
+$\mathcal{B}$ via
+Lemma \ref{lemma-construct-multiplicative-system}.
+The localization functor $Q : \mathcal{D} \to \mathcal{D}/\mathcal{B}$
+is called the {\it quotient functor} in this case.
+\end{definition}
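+
+\noindent
+The motivating example is the following: if $\mathcal{D} = K(\mathcal{A})$
+is the homotopy category of complexes in an abelian category $\mathcal{A}$
+(see Section \ref{section-homotopy}) and $\mathcal{B}$ is the full
+triangulated subcategory of acyclic complexes, then
+$\mathcal{D}/\mathcal{B}$ is the derived category $D(\mathcal{A})$ and the
+multiplicative system $S$ of
+Lemma \ref{lemma-construct-multiplicative-system} is the set of
+quasi-isomorphisms.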
+
+\noindent
+Note that the quotient functor
+$Q : \mathcal{D} \to \mathcal{D}/\mathcal{B}$
+is an exact functor of triangulated categories, see
+Proposition \ref{proposition-construct-localization}.
+The universal property of this construction is the following.
+
+\begin{lemma}
+\label{lemma-universal-property-quotient}
+\begin{slogan}
+The universal property of the Verdier quotient.
+\end{slogan}
+Let $\mathcal{D}$ be a triangulated category. Let $\mathcal{B}$
+be a full triangulated subcategory of $\mathcal{D}$. Let
+$Q : \mathcal{D} \to \mathcal{D}/\mathcal{B}$ be the quotient functor.
+\begin{enumerate}
+\item If $H : \mathcal{D} \to \mathcal{A}$ is a homological functor into
+an abelian category $\mathcal{A}$ such that
+$\mathcal{B} \subset \Ker(H)$ then there exists a unique factorization
+$H' : \mathcal{D}/\mathcal{B} \to \mathcal{A}$ such that $H = H' \circ Q$
+and $H'$ is a homological functor too.
+\item If $F : \mathcal{D} \to \mathcal{D}'$ is an exact functor into
+a pre-triangulated category $\mathcal{D}'$ such that
+$\mathcal{B} \subset \Ker(F)$ then there exists a unique factorization
+$F' : \mathcal{D}/\mathcal{B} \to \mathcal{D}'$ such that $F = F' \circ Q$
+and $F'$ is an exact functor too.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma follows from
+Lemma \ref{lemma-universal-property-localization}.
+Namely, if $f : X \to Y$ is a morphism of $\mathcal{D}$
+such that for some distinguished triangle $(X, Y, Z, f, g, h)$
+the object $Z$ is isomorphic to an object of $\mathcal{B}$, then
+$H(f)$, resp.\ $F(f)$ is an isomorphism under the assumptions of
+(1), resp.\ (2). Details omitted.
+\end{proof}
+
+\noindent
+The kernel of the quotient functor can be described as follows.
+
+\begin{lemma}
+\label{lemma-kernel-quotient}
+Let $\mathcal{D}$ be a triangulated category.
+Let $\mathcal{B}$ be a full triangulated subcategory.
+The kernel of the quotient functor
+$Q : \mathcal{D} \to \mathcal{D}/\mathcal{B}$
+is the strictly full subcategory of $\mathcal{D}$ whose objects are
+$$
+\Ob(\Ker(Q)) =
+\left\{
+\begin{matrix}
+Z \in \Ob(\mathcal{D})
+\text{ such that there exists a }Z' \in \Ob(\mathcal{D}) \\
+\text{ such that }Z \oplus Z'\text{ is isomorphic to an object of }\mathcal{B}
+\end{matrix}
+\right\}
+$$
+In other words it is the smallest strictly full saturated triangulated
+subcategory of $\mathcal{D}$ containing $\mathcal{B}$.
+\end{lemma}
+
+\begin{proof}
+First note that the kernel is automatically a strictly full
+triangulated subcategory containing summands of any of its objects, see
+Lemma \ref{lemma-triangle-functor-kernel}.
+The description of its objects follows from the definitions and
+Lemma \ref{lemma-kernel-localization} part (4).
+\end{proof}
+
+\noindent
+Let $\mathcal{D}$ be a triangulated category.
+At this point we have constructions which induce order
+preserving maps between
+\begin{enumerate}
+\item the partially ordered set of multiplicative systems $S$ in $\mathcal{D}$
+compatible with the triangulated structure, and
+\item the partially ordered set of full triangulated subcategories
+$\mathcal{B} \subset \mathcal{D}$.
+\end{enumerate}
+Namely, the constructions are given by
+$S \mapsto \mathcal{B}(S) = \Ker(Q : \mathcal{D} \to S^{-1}\mathcal{D})$
+and $\mathcal{B} \mapsto S(\mathcal{B})$
+where $S(\mathcal{B})$ is the multiplicative set of
+(\ref{equation-multiplicative-system}), i.e.,
+$$
+S(\mathcal{B}) =
+\left\{
+\begin{matrix}
+f \in \text{Arrows}(\mathcal{D})
+\text{ such that there exists a distinguished triangle }\\
+(X, Y, Z, f, g, h) \text{ of }\mathcal{D}\text{ with }
+Z\text{ isomorphic to an object of }\mathcal{B}
+\end{matrix}
+\right\}
+$$
+Note that it is not the case that these operations are mutually inverse.
+
+\begin{lemma}
+\label{lemma-operations}
+Let $\mathcal{D}$ be a triangulated category. The operations described above
+have the following properties
+\begin{enumerate}
+\item $S(\mathcal{B}(S))$ is the ``saturation'' of $S$, i.e., it is the
+smallest saturated multiplicative system in $\mathcal{D}$ containing $S$, and
+\item $\mathcal{B}(S(\mathcal{B}))$ is the ``saturation'' of $\mathcal{B}$,
+i.e., it is the smallest strictly full saturated triangulated subcategory of
+$\mathcal{D}$ containing $\mathcal{B}$.
+\end{enumerate}
+In particular, the constructions define mutually inverse maps between
+the (partially ordered) set of saturated multiplicative systems in
+$\mathcal{D}$ compatible with the triangulated structure on $\mathcal{D}$
+and
+the (partially ordered) set of strictly full saturated triangulated
+subcategories of $\mathcal{D}$.
+\end{lemma}
+
+\begin{proof}
+First, let's start with a full triangulated subcategory $\mathcal{B}$. Then
+$\mathcal{B}(S(\mathcal{B})) =
+\Ker(Q : \mathcal{D} \to \mathcal{D}/\mathcal{B})$
+and hence (2) is the content of
+Lemma \ref{lemma-kernel-quotient}.
+
+\medskip\noindent
+Next, suppose that $S$ is a multiplicative system in $\mathcal{D}$ compatible
+with the triangulation on $\mathcal{D}$. Then
+$\mathcal{B}(S) = \Ker(Q : \mathcal{D} \to S^{-1}\mathcal{D})$.
+Hence (using
+Lemma \ref{lemma-third-object-zero}
+in the localized category)
+\begin{align*}
+S(\mathcal{B}(S))
+& =
+\left\{
+\begin{matrix}
+f \in \text{Arrows}(\mathcal{D})
+\text{ such that there exists a distinguished}\\
+\text{triangle }(X, Y, Z, f, g, h) \text{ of }\mathcal{D}\text{ with }Q(Z) = 0
+\end{matrix}
+\right\}
+\\
+& =
+\{f \in \text{Arrows}(\mathcal{D}) \mid Q(f)\text{ is an isomorphism}\} \\
+& = \hat S = S'
+\end{align*}
+in the notation of
+Categories, Lemma \ref{categories-lemma-what-gets-inverted}.
+The final statement of that lemma finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-acyclic-general}
+Let $H : \mathcal{D} \to \mathcal{A}$ be a homological functor from a
+triangulated category $\mathcal{D}$ to an abelian category $\mathcal{A}$, see
+Definition \ref{definition-homological}.
+The subcategory $\Ker(H)$ of $\mathcal{D}$ is a strictly full
+saturated triangulated subcategory of $\mathcal{D}$ whose corresponding
+saturated multiplicative system (see
+Lemma \ref{lemma-operations})
+is the set
+$$
+S = \{f \in \text{Arrows}(\mathcal{D}) \mid
+H^i(f)\text{ is an isomorphism for all }i \in \mathbf{Z}\}.
+$$
+The functor $H$ factors through the quotient functor
+$Q : \mathcal{D} \to \mathcal{D}/\Ker(H)$.
+\end{lemma}
+
+\begin{proof}
+The category $\Ker(H)$ is a strictly full saturated triangulated
+subcategory of $\mathcal{D}$ by
+Lemma \ref{lemma-homological-functor-kernel}.
+The set $S$ is a saturated multiplicative system compatible with the
+triangulated structure by
+Lemma \ref{lemma-homological-functor-localize}.
+Recall that the multiplicative system corresponding to
+$\Ker(H)$ is the set
+$$
+\left\{
+\begin{matrix}
+f \in \text{Arrows}(\mathcal{D})
+\text{ such that there exists a distinguished triangle }\\
+(X, Y, Z, f, g, h)\text{ with } H^i(Z) = 0 \text{ for all }i
+\end{matrix}
+\right\}
+$$
+By the long exact cohomology sequence, see
+(\ref{equation-long-exact-cohomology-sequence}),
+it is clear that $f$ is an element of this set if and only if $f$ is
+an element of $S$. Finally, the factorization of $H$ through $Q$ is a
+consequence of
+Lemma \ref{lemma-universal-property-quotient}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Adjoints for exact functors}
+\label{section-adjoints}
+
+\noindent
+Results on adjoint functors between triangulated categories.
+
+\begin{lemma}
+\label{lemma-adjoint-is-exact}
+Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor between
+triangulated categories. If $F$ admits a right adjoint
+$G : \mathcal{D}' \to \mathcal{D}$, then $G$ is also an exact functor.
+\end{lemma}
+
+\begin{proof}
+Let $X$ be an object of $\mathcal{D}$ and
+$A$ an object of $\mathcal{D}'$. Since $F$ is an exact functor we see that
+\begin{align*}
+\Mor_\mathcal{D}(X, G(A[1]))
+& =
+\Mor_{\mathcal{D}'}(F(X), A[1]) \\
+& =
+\Mor_{\mathcal{D}'}(F(X)[-1], A) \\
+& =
+\Mor_{\mathcal{D}'}(F(X[-1]), A) \\
+& =
+\Mor_\mathcal{D}(X[-1], G(A)) \\
+& =
+\Mor_\mathcal{D}(X, G(A)[1])
+\end{align*}
+By Yoneda's lemma (Categories, Lemma \ref{categories-lemma-yoneda})
+we obtain a canonical isomorphism $G(A)[1] = G(A[1])$.
+Let $A \to B \to C \to A[1]$ be a distinguished triangle in $\mathcal{D}'$.
+Choose a distinguished triangle
+$$
+G(A) \to G(B) \to X \to G(A)[1]
+$$
+in $\mathcal{D}$. Then $F(G(A)) \to F(G(B)) \to F(X) \to F(G(A))[1]$
+is a distinguished triangle in $\mathcal{D}'$. By TR3 we can choose
+a morphism of distinguished triangles
+$$
+\xymatrix{
+F(G(A)) \ar[r] \ar[d] & F(G(B)) \ar[r] \ar[d] & F(X) \ar[r] \ar[d] &
+F(G(A))[1] \ar[d] \\
+A \ar[r] & B \ar[r] & C \ar[r] & A[1]
+}
+$$
+Since $G$ is the adjoint the new morphism determines a morphism $X \to G(C)$
+such that the diagram
+$$
+\xymatrix{
+G(A) \ar[r] \ar[d] & G(B) \ar[r] \ar[d] & X \ar[r] \ar[d] & G(A)[1] \ar[d] \\
+G(A) \ar[r] & G(B) \ar[r] & G(C) \ar[r] & G(A)[1]
+}
+$$
commutes. Let $W$ be an object of $\mathcal{D}$. Applying the homological
functor $\Hom_\mathcal{D}(W, -)$ to the top row, and using the adjunction
$\Hom_\mathcal{D}(W, G(-)) = \Hom_{\mathcal{D}'}(F(W), -)$ to obtain a long
exact sequence for the bottom row, we deduce from the $5$ lemma that
$$
\Hom_\mathcal{D}(W, X) \to \Hom_\mathcal{D}(W, G(C))
+$$
+is a bijection and using the Yoneda lemma once more we conclude that
+$X \to G(C)$ is an isomorphism. Hence we conclude that
+$G(A) \to G(B) \to G(C) \to G(A)[1]$ is a distinguished triangle
+which is what we wanted to show.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-adjoint-kernel-zero}
+Let $\mathcal{D}$, $\mathcal{D}'$ be triangulated categories.
+Let $F : \mathcal{D} \to \mathcal{D}'$ and
+$G : \mathcal{D}' \to \mathcal{D}$ be functors. Assume that
+\begin{enumerate}
+\item $F$ and $G$ are exact functors,
+\item $F$ is fully faithful,
+\item $G$ is a right adjoint to $F$, and
+\item the kernel of $G$ is zero.
+\end{enumerate}
+Then $F$ is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+Since $F$ is fully faithful the adjunction map $\text{id} \to G \circ F$
+is an isomorphism (Categories, Lemma
+\ref{categories-lemma-adjoint-fully-faithful}).
+Let $X$ be an object of $\mathcal{D}'$.
+Choose a distinguished triangle
+$$
+F(G(X)) \to X \to Y \to F(G(X))[1]
+$$
+in $\mathcal{D}'$. Applying $G$ and using that $G(F(G(X))) = G(X)$
+we find a distinguished triangle
+$$
+G(X) \to G(X) \to G(Y) \to G(X)[1]
+$$
Hence $G(Y) = 0$ as the first map of this triangle is an isomorphism.
Since the kernel of $G$ is zero we get $Y = 0$. Thus $F(G(X)) \to X$ is an
isomorphism, i.e., $F$ is essentially surjective. Being also fully faithful,
$F$ is an equivalence.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{The homotopy category}
+\label{section-homotopy}
+
+\noindent
+Let $\mathcal{A}$ be an additive category. The homotopy category
+$K(\mathcal{A})$ of $\mathcal{A}$ is the category of complexes of
+$\mathcal{A}$ with morphisms given by morphisms of complexes up to homotopy.
+Here is the formal definition.
+
+\begin{definition}
+\label{definition-complexes-notation}
+Let $\mathcal{A}$ be an additive category.
+\begin{enumerate}
\item We set $\text{Comp}(\mathcal{A}) = \text{CoCh}(\mathcal{A})$ to be
the {\it category of (cochain) complexes}.
+\item A complex $K^\bullet$ is said to be
+{\it bounded below} if $K^n = 0$ for all $n \ll 0$.
+\item A complex $K^\bullet$ is said to be
+{\it bounded above} if $K^n = 0$ for all $n \gg 0$.
+\item A complex $K^\bullet$ is said to be
+{\it bounded} if $K^n = 0$ for all $|n| \gg 0$.
+\item We let
+$\text{Comp}^{+}(\mathcal{A})$, $\text{Comp}^{-}(\mathcal{A})$,
+resp.\ $\text{Comp}^b(\mathcal{A})$ be the full subcategory
+of $\text{Comp}(\mathcal{A})$ whose objects are the complexes
+which are bounded below, bounded above, resp.\ bounded.
+\item We let $K(\mathcal{A})$ be the category with the same objects
+as $\text{Comp}(\mathcal{A})$ but as morphisms homotopy classes of
+maps of complexes (see
+Homology, Lemma \ref{homology-lemma-compose-homotopy-cochain}).
+\item We let $K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$,
+resp.\ $K^b(\mathcal{A})$ be the full subcategory of $K(\mathcal{A})$
+whose objects are bounded below, bounded above, resp.\ bounded
+complexes of $\mathcal{A}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+It will turn out that the categories $K(\mathcal{A})$,
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, and $K^b(\mathcal{A})$
+are triangulated categories. To prove this we first develop
+some machinery related to cones and split exact sequences.
+
+
+
+
+\section{Cones and termwise split sequences}
+\label{section-cones}
+
+\noindent
+Let $\mathcal{A}$ be an additive category, and let
+$K(\mathcal{A})$ denote the category of complexes of
+$\mathcal{A}$ with morphisms given by morphisms of
+complexes up to homotopy. Note that the shift functors
+$[n]$ on complexes, see
+Homology, Definition \ref{homology-definition-shift-cochain},
+give rise to functors $[n] : K(\mathcal{A}) \to K(\mathcal{A})$
+such that $[n] \circ [m] = [n + m]$ and $[0] = \text{id}$.
+
+\begin{definition}
+\label{definition-cone}
+Let $\mathcal{A}$ be an additive category.
+Let $f : K^\bullet \to L^\bullet$ be a morphism of
+complexes of $\mathcal{A}$. The {\it cone} of $f$
+is the complex $C(f)^\bullet$ given by
+$C(f)^n = L^n \oplus K^{n + 1}$ and
+differential
+$$
+d_{C(f)}^n =
+\left(
+\begin{matrix}
+d^n_L & f^{n + 1} \\
+0 & -d_K^{n + 1}
+\end{matrix}
+\right)
+$$
+It comes equipped with canonical morphisms of complexes
+$i : L^\bullet \to C(f)^\bullet$ and $p : C(f)^\bullet \to K^\bullet[1]$
+induced by the obvious maps $L^n \to C(f)^n \to K^{n + 1}$.
+\end{definition}
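As a sanity check on the signs in this definition, one can verify numerically that the cone differential squares to zero. The following Python sketch (not part of the formal text; all matrices are hypothetical toy data) encodes two-step complexes of free modules as matrices, takes $f = \text{id}$, and checks $d_{C(f)}^{n + 1} \circ d_{C(f)}^n = 0$.

```python
import numpy as np

# A two-step "complex": d1 o d0 = 0.  (Hypothetical concrete data.)
dK0 = np.array([[1, 0], [0, 0]])
dK1 = np.array([[0, 0], [0, 1]])
assert not (dK1 @ dK0).any()  # d_K^2 = 0

# Use L = K and f = id, which is trivially a morphism of complexes.
dL0, dL1 = dK0, dK1
f1 = f2 = np.eye(2, dtype=int)

def cone_d(dL_n, dK_n1, f_n1):
    """Differential of C(f)^n = L^n (+) K^{n+1}: [[d_L, f], [0, -d_K]]."""
    return np.block([[dL_n, f_n1], [np.zeros_like(dK_n1), -dK_n1]])

dC0 = cone_d(dL0, dK1, f1)
# d_C^1 uses d_L^1, d_K^2 (zero here) and f^2:
dC1 = cone_d(dL1, np.zeros((2, 2), dtype=int), f2)
print((dC1 @ dC0 == 0).all())  # cone differential squares to zero
```

The top-right block of the product is $d_L^{n + 1} f^{n + 1} - f^{n + 2} d_K^{n + 1}$, which vanishes precisely because $f$ commutes with the differentials; the sign on $-d_K^{n + 1}$ is what makes this cancellation work.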
+
+\noindent
+In other words $(K, L, C(f), f, i, p)$ forms a triangle:
+$$
+K^\bullet \to L^\bullet \to C(f)^\bullet \to K^\bullet[1]
+$$
+The formation of this triangle is
+functorial in the following sense.
+
+\begin{lemma}
+\label{lemma-functorial-cone}
+Suppose that
+$$
+\xymatrix{
+K_1^\bullet \ar[r]_{f_1} \ar[d]_a & L_1^\bullet \ar[d]^b \\
+K_2^\bullet \ar[r]^{f_2} & L_2^\bullet
+}
+$$
+is a diagram of morphisms of complexes which is commutative
+up to homotopy. Then there exists a morphism
+$c : C(f_1)^\bullet \to C(f_2)^\bullet$ which gives rise to
+a morphism of triangles
+$(a, b, c) : (K_1^\bullet, L_1^\bullet, C(f_1)^\bullet, f_1, i_1, p_1)
+\to
+(K_2^\bullet, L_2^\bullet, C(f_2)^\bullet, f_2, i_2, p_2)$
+of $K(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+Let $h^n : K_1^n \to L_2^{n - 1}$ be a family of morphisms such that
+$b \circ f_1 - f_2 \circ a= d \circ h + h \circ d$.
+Define $c^n$ by the matrix
+$$
+c^n =
+\left(
+\begin{matrix}
+b^n & h^{n + 1} \\
+0 & a^{n + 1}
+\end{matrix}
+\right) :
+L_1^n \oplus K_1^{n + 1} \to L_2^n \oplus K_2^{n + 1}
+$$
A matrix computation shows that $c$ is a morphism of complexes.
It is immediate that $c \circ i_1 = i_2 \circ b$ and
that $p_2 \circ c = a \circ p_1$.
+\end{proof}
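The matrix computation in the proof can be checked on a toy example. Below is a Python sketch with one-dimensional complexes (all data hypothetical and chosen for illustration): we take $f_1 = f_2 = \text{id}$, $a = \text{id}$, and $b = \text{id} + dh + hd$, so that the square commutes only up to the homotopy $h$, and verify that $c^n = \left(\begin{smallmatrix} b^n & h^{n + 1} \\ 0 & a^{n + 1} \end{smallmatrix}\right)$ commutes with the cone differentials.

```python
import numpy as np

# Hypothetical 1-dimensional complexes: K = L = (R --1--> R --0--> R),
# f1 = f2 = id, a = id, and b = id + dh + hd for the homotopy with h^1 = 1.
d = {0: 1, 1: 0}                  # d_K = d_L in degrees 0, 1
f = {1: 1, 2: 1}                  # f^n : K^n -> L^n (identity)
a = {1: 1, 2: 1}
b = {0: 2, 1: 2}                  # b = id + dh + hd
h = {1: 1, 2: 0}                  # h^n : K_1^n -> L_2^{n-1}

# In degree 1: b f1 - f2 a = 2 - 1 = 1 = d^0 h^1 + h^2 d^1.
assert b[1] * 1 - 1 * a[1] == d[0] * h[1] + h[2] * d[1]

def cone_d(n):   # differential of C(f)^n = L^n (+) K^{n+1}
    return np.array([[d[n], f[n + 1]], [0, -d.get(n + 1, 0)]])

def c(n):        # c^n = [[b^n, h^{n+1}], [0, a^{n+1}]]
    return np.array([[b[n], h[n + 1]], [0, a[n + 1]]])

# c is a morphism of complexes in degree 0: c^1 d_C^0 = d_C^0 c^0
print((c(1) @ cone_d(0) == cone_d(0) @ c(0)).all())
```

Here the two cones coincide as complexes since $f_1 = f_2$, yet $c$ is not the identity: the homotopy $h$ enters through the off-diagonal entry $h^{n + 1}$.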
+
+\noindent
+Note that the morphism $c : C(f_1)^\bullet \to C(f_2)^\bullet$
+constructed in the
+proof of Lemma \ref{lemma-functorial-cone} in general depends on the
+chosen homotopy $h$ between $f_2 \circ a$ and $b \circ f_1$.
+
+\begin{lemma}
+\label{lemma-map-from-cone}
+Suppose that $f: K^\bullet \to L^\bullet$ and $g : L^\bullet \to M^\bullet$
+are morphisms of complexes such that $g \circ f$ is homotopic to zero.
+Then
+\begin{enumerate}
+\item $g$ factors through a morphism $C(f)^\bullet \to M^\bullet$, and
+\item $f$ factors through a morphism $K^\bullet \to C(g)^\bullet[-1]$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The assumptions say that the diagram
+$$
+\xymatrix{
+K^\bullet \ar[r]_f \ar[d] & L^\bullet \ar[d]^g \\
+0 \ar[r] & M^\bullet
+}
+$$
+commutes up to homotopy.
+Since the cone on $0 \to M^\bullet$ is $M^\bullet$ the
+map $C(f)^\bullet \to C(0 \to M^\bullet) = M^\bullet$
+of Lemma \ref{lemma-functorial-cone}
+is the map in (1). The cone on $K^\bullet \to 0$ is
+$K^\bullet[1]$ and applying Lemma \ref{lemma-functorial-cone}
+gives a map $K^\bullet[1] \to C(g)^\bullet$. Applying
+$[-1]$ we obtain the map in (2).
+\end{proof}
+
+\noindent
+Note that the morphisms $C(f)^\bullet \to M^\bullet$ and
+$K^\bullet \to C(g)^\bullet[-1]$ constructed in the proof
+of Lemma \ref{lemma-map-from-cone} in general depend on the
+chosen homotopy.
+
+\begin{definition}
+\label{definition-termwise-split-map}
+Let $\mathcal{A}$ be an additive category.
+A {\it termwise split injection $\alpha : A^\bullet \to B^\bullet$}
+is a morphism of complexes such that each $A^n \to B^n$
+is isomorphic to the inclusion of a direct summand.
+A {\it termwise split surjection $\beta : B^\bullet \to C^\bullet$}
+is a morphism of complexes such that each $B^n \to C^n$
+is isomorphic to the projection onto a direct summand.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-make-commute-map}
+Let $\mathcal{A}$ be an additive category.
+Let
+$$
+\xymatrix{
+A^\bullet \ar[r]_f \ar[d]_a & B^\bullet \ar[d]^b \\
+C^\bullet \ar[r]^g & D^\bullet
+}
+$$
+be a diagram of morphisms of complexes commuting up to homotopy.
+If $f$ is a termwise split injection, then $b$ is homotopic to a
+morphism which makes the diagram commute.
If $g$ is a termwise split surjection, then $a$ is homotopic to a
+morphism which makes the diagram commute.
+\end{lemma}
+
+\begin{proof}
+Let $h^n : A^n \to D^{n - 1}$ be a collection of morphisms
+such that $bf - ga = dh + hd$. Suppose that $\pi^n : B^n \to A^n$
+are morphisms splitting the morphisms $f^n$.
+Take $b' = b - dh\pi - h\pi d$.
+Suppose $s^n : D^n \to C^n$ are morphisms splitting the morphisms
+$g^n : C^n \to D^n$. Take $a' = a + dsh + shd$.
Then, since $\pi f = \text{id}$ and $f$ is a morphism of complexes, we get
$b'f = bf - dh\pi f - h\pi d f = bf - dh - hd = ga$, and similarly, since
$gs = \text{id}$ and $g$ is a morphism of complexes, we get
$ga' = ga + gdsh + gshd = ga + dh + hd = bf$.
+\end{proof}
+
+\noindent
+The following lemma can be used to replace a morphism of complexes
+by a morphism where in each degree the map is the injection of a
+direct summand.
+
+\begin{lemma}
+\label{lemma-make-injective}
+Let $\mathcal{A}$ be an additive category.
+Let $\alpha : K^\bullet \to L^\bullet$ be a morphism
+of complexes of $\mathcal{A}$.
+There exists a factorization
+$$
+\xymatrix{
+K^\bullet \ar[r]^{\tilde \alpha} \ar@/_1pc/[rr]_\alpha &
+\tilde L^\bullet \ar[r]^\pi &
+L^\bullet
+}
+$$
+such that
+\begin{enumerate}
+\item $\tilde \alpha$ is a termwise split injection (see
+Definition \ref{definition-termwise-split-map}),
+\item there is a map of complexes $s : L^\bullet \to \tilde L^\bullet$
+such that $\pi \circ s = \text{id}_{L^\bullet}$ and such that
+$s \circ \pi$ is homotopic to $\text{id}_{\tilde L^\bullet}$.
+\end{enumerate}
+Moreover, if both $K^\bullet$ and $L^\bullet$ are in
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, or $K^b(\mathcal{A})$,
+then so is $\tilde L^\bullet$.
+\end{lemma}
+
+\begin{proof}
+We set
+$$
+\tilde L^n = L^n \oplus K^n \oplus K^{n + 1}
+$$
+and we define
+$$
+d^n_{\tilde L} =
+\left(
+\begin{matrix}
+d^n_L & 0 & 0 \\
+0 & d^n_K & \text{id}_{K^{n + 1}} \\
+0 & 0 & -d^{n + 1}_K
+\end{matrix}
+\right)
+$$
+In other words, $\tilde L^\bullet = L^\bullet \oplus C(1_{K^\bullet})$.
+Moreover, we set
+$$
+\tilde \alpha =
+\left(
+\begin{matrix}
+\alpha \\
+\text{id}_{K^n} \\
+0
+\end{matrix}
+\right)
+$$
which is clearly a termwise split injection. It is also clear that it
defines a morphism
+of complexes. We define
+$$
+\pi =
+\left(
+\begin{matrix}
+\text{id}_{L^n} &
+0 &
+0
+\end{matrix}
+\right)
+$$
+so that clearly $\pi \circ \tilde \alpha = \alpha$. We set
+$$
+s =
+\left(
+\begin{matrix}
+\text{id}_{L^n} \\
+0 \\
+0
+\end{matrix}
+\right)
+$$
+so that $\pi \circ s = \text{id}_{L^\bullet}$. Finally,
+let $h^n : \tilde L^n \to \tilde L^{n - 1}$ be the map
+which maps the summand $K^n$ of $\tilde L^n$ via the identity morphism
+to the summand $K^n$ of $\tilde L^{n - 1}$. Then it is a trivial matter
+(see computations in remark below) to prove that
+$$
+\text{id}_{\tilde L^\bullet} - s \circ \pi
+=
+d \circ h + h \circ d
+$$
+which finishes the proof of the lemma.
+\end{proof}
+
+\begin{remark}
+\label{remark-compute-modules}
+To see the last displayed equality in the proof above we can argue
+with elements as follows. We have
+$s\pi(l, k, k^{+}) = (l, 0, 0)$.
+Hence the morphism of the left hand side maps
+$(l, k, k^{+})$ to $(0, k, k^{+})$.
+On the other hand $h(l, k, k^{+}) = (0, 0, k)$ and
+$d(l, k, k^{+}) = (dl, dk + k^{+}, -dk^{+})$.
+Hence $(dh + hd)(l, k, k^{+}) =
+d(0, 0, k) + h(dl, dk + k^{+}, -dk^{+}) =
+(0, k, -dk) + (0, 0, dk + k^{+}) = (0, k, k^{+})$
+as desired.
+\end{remark}
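The element computation in the remark can also be carried out with explicit matrices. The following Python sketch (hypothetical toy data; $d_L = 0$ for simplicity) builds $\tilde L^n = L^n \oplus K^n \oplus K^{n + 1}$ with the differential from the proof of Lemma \ref{lemma-make-injective} and checks the homotopy identity $\text{id} - s \circ \pi = d \circ h + h \circ d$ in one degree.

```python
import numpy as np

I2, Z2 = np.eye(2, dtype=int), np.zeros((2, 2), dtype=int)

# Hypothetical complex K in degrees n-1, n, n+1 (composites vanish):
dK = {-1: np.array([[1, 0], [0, 0]]), 0: np.array([[0, 0], [0, 1]]),
      1: np.array([[1, 0], [0, 0]])}
assert not (dK[0] @ dK[-1]).any() and not (dK[1] @ dK[0]).any()
dL = {n: Z2 for n in (-1, 0, 1)}  # zero differential on L for simplicity

def d_tilde(n):   # differential of L~^n = L^n (+) K^n (+) K^{n+1}
    return np.block([[dL[n], Z2, Z2],
                     [Z2, dK[n], I2],
                     [Z2, Z2, -dK[n + 1]]])

s_pi = np.block([[I2, Z2, Z2], [Z2, Z2, Z2], [Z2, Z2, Z2]])

def h(n):         # K^n summand of L~^n -> K^n summand of L~^{n-1}
    return np.block([[Z2, Z2, Z2], [Z2, Z2, Z2], [Z2, I2, Z2]])

lhs = np.eye(6, dtype=int) - s_pi
rhs = d_tilde(-1) @ h(0) + h(1) @ d_tilde(0)
print((lhs == rhs).all())  # id - s o pi = d o h + h o d
```

The block products reproduce exactly the element computation above: $d h$ contributes $(0, k, -dk)$ and $h d$ contributes $(0, 0, dk + k^{+})$.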
+
+\begin{lemma}
+\label{lemma-make-surjective}
+Let $\mathcal{A}$ be an additive category.
+Let $\alpha : K^\bullet \to L^\bullet$ be a morphism
+of complexes of $\mathcal{A}$.
+There exists a factorization
+$$
+\xymatrix{
+K^\bullet \ar[r]^i \ar@/_1pc/[rr]_\alpha &
+\tilde K^\bullet \ar[r]^{\tilde \alpha} &
+L^\bullet
+}
+$$
+such that
+\begin{enumerate}
+\item $\tilde \alpha$ is a termwise split surjection (see
+Definition \ref{definition-termwise-split-map}),
+\item there is a map of complexes $s : \tilde K^\bullet \to K^\bullet$
+such that $s \circ i = \text{id}_{K^\bullet}$ and such that
+$i \circ s$ is homotopic to $\text{id}_{\tilde K^\bullet}$.
+\end{enumerate}
+Moreover, if both $K^\bullet$ and $L^\bullet$ are in
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, or $K^b(\mathcal{A})$,
+then so is $\tilde K^\bullet$.
+\end{lemma}
+
+\begin{proof}
+Dual to Lemma \ref{lemma-make-injective}.
+Take
+$$
+\tilde K^n = K^n \oplus L^{n - 1} \oplus L^n
+$$
+and we define
+$$
+d^n_{\tilde K} =
+\left(
+\begin{matrix}
+d^n_K & 0 & 0 \\
+0 & - d^{n - 1}_L & \text{id}_{L^n} \\
+0 & 0 & d^n_L
+\end{matrix}
+\right)
+$$
+in other words $\tilde K^\bullet = K^\bullet \oplus C(1_{L^\bullet[-1]})$.
+Moreover, we set
+$$
+\tilde \alpha =
+\left(
+\begin{matrix}
+\alpha &
+0 &
+\text{id}_{L^n}
+\end{matrix}
+\right)
+$$
which is clearly a termwise split surjection. It is also clear that it
defines a morphism of complexes. We define
+$$
+i =
+\left(
+\begin{matrix}
+\text{id}_{K^n} \\
+0 \\
+0
+\end{matrix}
+\right)
+$$
+so that clearly $\tilde \alpha \circ i = \alpha$. We set
+$$
+s =
+\left(
+\begin{matrix}
+\text{id}_{K^n} &
+0 &
+0
+\end{matrix}
+\right)
+$$
+so that $s \circ i = \text{id}_{K^\bullet}$. Finally,
+let $h^n : \tilde K^n \to \tilde K^{n - 1}$ be the map
+which maps the summand $L^{n - 1}$ of $\tilde K^n$ via the identity morphism
+to the summand $L^{n - 1}$ of $\tilde K^{n - 1}$. Then it is a trivial matter
+to prove that
+$$
+\text{id}_{\tilde K^\bullet} - i \circ s
+=
+d \circ h + h \circ d
+$$
+which finishes the proof of the lemma.
+\end{proof}
+
+\begin{definition}
+\label{definition-split-ses}
+Let $\mathcal{A}$ be an additive category.
+A {\it termwise split exact sequence of complexes of $\mathcal{A}$}
+is a complex of complexes
+$$
+0 \to
+A^\bullet \xrightarrow{\alpha}
+B^\bullet \xrightarrow{\beta}
+C^\bullet \to 0
+$$
+together with given direct sum decompositions
+$B^n = A^n \oplus C^n$
+compatible with $\alpha^n$ and $\beta^n$.
+We often write $s^n : C^n \to B^n$ and $\pi^n : B^n \to A^n$
+for the maps induced by the direct sum decompositions.
+According to
+Homology, Lemma \ref{homology-lemma-ses-termwise-split-cochain}
+we get an associated morphism of complexes
+$$
+\delta : C^\bullet \longrightarrow A^\bullet[1]
+$$
+which in degree $n$ is the map $\pi^{n + 1} \circ d_B^n \circ s^n$.
+In other words
+$(A^\bullet, B^\bullet, C^\bullet, \alpha, \beta, \delta)$
+forms a triangle
+$$
+A^\bullet \to B^\bullet \to C^\bullet \to A^\bullet[1]
+$$
+This will be the {\it triangle associated to the termwise
+split sequence of complexes}.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-triangle-independent-splittings}
+Let $\mathcal{A}$ be an additive category. Let
+$0 \to A^\bullet \to B^\bullet \to C^\bullet \to 0$
+be termwise split exact sequences as in
+Definition \ref{definition-split-ses}.
+Let $(\pi')^n$, $(s')^n$ be a second collection of splittings.
+Denote $\delta' : C^\bullet \longrightarrow A^\bullet[1]$ the
+morphism associated to this second set of splittings.
+Then
+$$
+(1, 1, 1) :
+(A^\bullet, B^\bullet, C^\bullet, \alpha, \beta, \delta)
+\longrightarrow
+(A^\bullet, B^\bullet, C^\bullet, \alpha, \beta, \delta')
+$$
+is an isomorphism of triangles in $K(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+The statement simply means that $\delta$ and $\delta'$ are
+homotopic maps of complexes. This is
+Homology, Lemma \ref{homology-lemma-ses-termwise-split-homotopy-cochain}.
+\end{proof}
+
+\begin{remark}
+\label{remark-make-commute}
+Let $\mathcal{A}$ be an additive category.
+Let $0 \to A_i^\bullet \to B_i^\bullet \to C_i^\bullet \to 0$, $i = 1, 2$
+be termwise split exact sequences. Suppose that
+$a : A_1^\bullet \to A_2^\bullet$,
+$b : B_1^\bullet \to B_2^\bullet$, and
+$c : C_1^\bullet \to C_2^\bullet$ are morphisms of complexes
+such that
+$$
+\xymatrix{
+A_1^\bullet \ar[d]_a \ar[r] &
+B_1^\bullet \ar[r] \ar[d]_b &
+C_1^\bullet \ar[d]_c \\
+A_2^\bullet \ar[r] & B_2^\bullet \ar[r] & C_2^\bullet
+}
+$$
+commutes in $K(\mathcal{A})$. In general, there does {\bf not} exist
+a morphism $b' : B_1^\bullet \to B_2^\bullet$ which is homotopic to $b$
+such that the diagram above commutes in the category of complexes.
+Namely, consider
+Examples, Equation (\ref{examples-equation-commutes-up-to-homotopy}).
+If we could replace the middle map there by a homotopic one such that
+the diagram commutes, then we would have additivity of traces which we do not.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-nilpotent}
+Let $\mathcal{A}$ be an additive category.
+Let $0 \to A_i^\bullet \to B_i^\bullet \to C_i^\bullet \to 0$, $i = 1, 2, 3$
+be termwise split exact sequences of complexes. Let
+$b : B_1^\bullet \to B_2^\bullet$ and $b' : B_2^\bullet \to B_3^\bullet$
+be morphisms of complexes such that
+$$
+\vcenter{
+\xymatrix{
+A_1^\bullet \ar[d]_0 \ar[r] &
+B_1^\bullet \ar[r] \ar[d]_b &
+C_1^\bullet \ar[d]_0 \\
+A_2^\bullet \ar[r] & B_2^\bullet \ar[r] & C_2^\bullet
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+A_2^\bullet \ar[d]^0 \ar[r] &
+B_2^\bullet \ar[r] \ar[d]^{b'} &
+C_2^\bullet \ar[d]^0 \\
+A_3^\bullet \ar[r] & B_3^\bullet \ar[r] & C_3^\bullet
+}
+}
+$$
+commute in $K(\mathcal{A})$. Then $b' \circ b = 0$ in $K(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-make-commute-map} we can replace $b$ and $b'$ by homotopic
+maps such that the right square of the left diagram commutes and the
+left square of the right diagram commutes. In other words, we have
+$\Im(b^n) \subset \Im(A_2^n \to B_2^n)$ and
+$\Ker((b')^n) \supset \Im(A_2^n \to B_2^n)$.
+Then $b' \circ b = 0$ as a map of complexes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-third-isomorphism}
+Let $\mathcal{A}$ be an additive category.
+Let $f_1 : K_1^\bullet \to L_1^\bullet$ and
+$f_2 : K_2^\bullet \to L_2^\bullet$ be morphisms of complexes.
+Let
+$$
+(a, b, c) :
+(K_1^\bullet, L_1^\bullet, C(f_1)^\bullet, f_1, i_1, p_1)
+\longrightarrow
+(K_2^\bullet, L_2^\bullet, C(f_2)^\bullet, f_2, i_2, p_2)
+$$
+be any morphism of triangles of $K(\mathcal{A})$.
+If $a$ and $b$ are homotopy equivalences then so is $c$.
+\end{lemma}
+
+\begin{proof}
+Let $a^{-1} : K_2^\bullet \to K_1^\bullet$ be a morphism of complexes which
+is inverse to $a$ in $K(\mathcal{A})$.
+Let $b^{-1} : L_2^\bullet \to L_1^\bullet$ be a morphism of complexes which
+is inverse to $b$ in $K(\mathcal{A})$.
+Let $c' : C(f_2)^\bullet \to C(f_1)^\bullet$
+be the morphism from Lemma \ref{lemma-functorial-cone} applied
+to $f_1 \circ a^{-1} = b^{-1} \circ f_2$. If we can show that
+$c \circ c'$ and $c' \circ c$ are isomorphisms in $K(\mathcal{A})$
+then we win. Hence it suffices to prove the following: Given
+a morphism of triangles
$(1, 1, c) : (K^\bullet, L^\bullet, C(f)^\bullet, f, i, p)
\to (K^\bullet, L^\bullet, C(f)^\bullet, f, i, p)$
in $K(\mathcal{A})$ the morphism $c$ is an isomorphism in $K(\mathcal{A})$.
+By assumption the two squares in the diagram
+$$
+\xymatrix{
+L^\bullet \ar[r] \ar[d]_1 &
+C(f)^\bullet \ar[r] \ar[d]_c &
+K^\bullet[1] \ar[d]_1 \\
+L^\bullet \ar[r] &
+C(f)^\bullet \ar[r] &
+K^\bullet[1]
+}
+$$
+commute up to homotopy. By construction of $C(f)^\bullet$ the rows
+form termwise split sequences of complexes. Thus we see that
+$(c - 1)^2 = 0$ in $K(\mathcal{A})$ by Lemma \ref{lemma-nilpotent}.
+Hence $c$ is an isomorphism in $K(\mathcal{A})$ with inverse $2 - c$.
+\end{proof}
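The final step of the proof uses the elementary fact that an endomorphism $c$ with $(c - 1)^2 = 0$ is invertible with inverse $2 - c$: writing $c = 1 + n$ with $n^2 = 0$ gives $c(2 - c) = (1 + n)(1 - n) = 1 - n^2 = 1$. A minimal numerical illustration (with a hypothetical matrix):

```python
import numpy as np

# c = 1 + n with n nilpotent of square zero, so (c - 1)^2 = 0.
n = np.array([[0, 5, 0], [0, 0, 0], [0, 0, 0]])
c = np.eye(3) + n
assert not ((c - np.eye(3)) @ (c - np.eye(3))).any()

# Then 2 - c is a two-sided inverse of c.
print((c @ (2 * np.eye(3) - c) == np.eye(3)).all())
```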
+
+\noindent
+Hence if $a$ and $b$ are homotopy equivalences then
+the resulting morphism of triangles is an isomorphism of triangles
+in $K(\mathcal{A})$.
+It turns out that the collection of triangles of $K(\mathcal{A})$
+given by cones and the collection of triangles of $K(\mathcal{A})$
+given by termwise split sequences of complexes are the same
+up to isomorphisms, at least up to sign!
+
+\begin{lemma}
+\label{lemma-the-same-up-to-isomorphisms}
+Let $\mathcal{A}$ be an additive category.
+\begin{enumerate}
+\item Given a termwise split sequence of complexes
+$(\alpha : A^\bullet \to B^\bullet,
+\beta : B^\bullet \to C^\bullet, s^n, \pi^n)$
+there exists a homotopy equivalence $C(\alpha)^\bullet \to C^\bullet$
+such that the diagram
+$$
+\xymatrix{
+A^\bullet \ar[r] \ar[d] & B^\bullet \ar[d] \ar[r] &
+C(\alpha)^\bullet \ar[r]_{-p} \ar[d] & A^\bullet[1] \ar[d] \\
+A^\bullet \ar[r] & B^\bullet \ar[r] &
+C^\bullet \ar[r]^\delta & A^\bullet[1]
+}
+$$
+defines an isomorphism of triangles in $K(\mathcal{A})$.
+\item Given a morphism of complexes $f : K^\bullet \to L^\bullet$
+there exists an isomorphism of triangles
+$$
+\xymatrix{
+K^\bullet \ar[r] \ar[d] & \tilde L^\bullet \ar[d] \ar[r] &
+M^\bullet \ar[r]_{\delta} \ar[d] & K^\bullet[1] \ar[d] \\
+K^\bullet \ar[r] & L^\bullet \ar[r] &
+C(f)^\bullet \ar[r]^{-p} & K^\bullet[1]
+}
+$$
+where the upper triangle is the triangle associated to a
+termwise split exact sequence $K^\bullet \to \tilde L^\bullet \to M^\bullet$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We have $C(\alpha)^n = B^n \oplus A^{n + 1}$
+and we simply define $C(\alpha)^n \to C^n$ via the projection
+onto $B^n$ followed by $\beta^n$. This defines
+a morphism of complexes because the compositions
+$A^{n + 1} \to B^{n + 1} \to C^{n + 1}$ are zero.
+To get a homotopy inverse we take
+$C^\bullet \to C(\alpha)^\bullet$ given by
+$(s^n , -\delta^n)$ in degree $n$. This is a morphism of complexes
+because the morphism $\delta^n$ can be characterized as the
+unique morphism $C^n \to A^{n + 1}$ such that
+$d \circ s^n - s^{n + 1} \circ d = \alpha \circ \delta^n$,
+see proof of
+Homology, Lemma \ref{homology-lemma-ses-termwise-split-cochain}.
+The composition
+$C^\bullet \to C(\alpha)^\bullet \to C^\bullet$ is the identity.
+The composition $C(\alpha)^\bullet \to C^\bullet \to C(\alpha)^\bullet$
+is equal to the morphism
+$$
+\left(
+\begin{matrix}
+s^n \circ \beta^n & 0 \\
+-\delta^n \circ \beta^n & 0
+\end{matrix}
+\right)
+$$
+To see that this is homotopic to the identity map
+use the homotopy $h^n : C(\alpha)^n \to C(\alpha)^{n - 1}$
+given by the matrix
+$$
+\left(
+\begin{matrix}
+0 & 0 \\
+\pi^n & 0
+\end{matrix}
+\right) : C(\alpha)^n = B^n \oplus A^{n + 1} \to
+B^{n - 1} \oplus A^n = C(\alpha)^{n - 1}
+$$
+It is trivial to verify that
+$$
+\left(
+\begin{matrix}
+1 & 0 \\
+0 & 1
+\end{matrix}
+\right)
+-
+\left(
+\begin{matrix}
+s^n \\
+-\delta^n
+\end{matrix}
+\right)
+\left(
+\begin{matrix}
+\beta^n & 0
+\end{matrix}
+\right)
+=
+\left(
+\begin{matrix}
+d & \alpha^n \\
+0 & -d
+\end{matrix}
+\right)
+\left(
+\begin{matrix}
+0 & 0 \\
+\pi^n & 0
+\end{matrix}
+\right)
++
+\left(
+\begin{matrix}
+0 & 0 \\
+\pi^{n + 1} & 0
+\end{matrix}
+\right)
+\left(
+\begin{matrix}
+d & \alpha^{n + 1} \\
+0 & -d
+\end{matrix}
+\right)
+$$
+To finish the proof of (1) we have to show that the morphisms
+$-p : C(\alpha)^\bullet \to A^\bullet[1]$ (see
+Definition \ref{definition-cone})
+and $C(\alpha)^\bullet \to C^\bullet \to A^\bullet[1]$ agree up
+to homotopy. This is clear from the above. Namely, we can use the homotopy
+inverse $(s, -\delta) : C^\bullet \to C(\alpha)^\bullet$
+and check instead that the two maps
+$C^\bullet \to A^\bullet[1]$ agree. And note that
+$p \circ (s, -\delta) = -\delta$ as desired.
+
+\medskip\noindent
+Proof of (2). We let $\tilde f : K^\bullet \to \tilde L^\bullet$,
+$s : L^\bullet \to \tilde L^\bullet$
+and $\pi : \tilde L^\bullet \to L^\bullet$ be as in
+Lemma \ref{lemma-make-injective}. By
+Lemmas \ref{lemma-functorial-cone} and \ref{lemma-third-isomorphism}
the triangles $(K^\bullet, L^\bullet, C(f), f, i, p)$ and
$(K^\bullet, \tilde L^\bullet, C(\tilde f), \tilde f, \tilde i, \tilde p)$
+are isomorphic. Note that we can compose isomorphisms of
+triangles. Thus we may replace $L^\bullet$ by
+$\tilde L^\bullet$ and $f$ by $\tilde f$. In other words
+we may assume that $f$ is a termwise split injection.
+In this case the result follows from part (1).
+\end{proof}
+
+
+\begin{lemma}
+\label{lemma-sequence-maps-split}
+Let $\mathcal{A}$ be an additive category.
+Let $A_1^\bullet \to A_2^\bullet \to \ldots \to A_n^\bullet$
+be a sequence of composable morphisms of complexes.
+There exists a commutative diagram
+$$
+\xymatrix{
+A_1^\bullet \ar[r] &
+A_2^\bullet \ar[r] &
+\ldots \ar[r] &
+A_n^\bullet \\
+B_1^\bullet \ar[r] \ar[u] &
+B_2^\bullet \ar[r] \ar[u] &
+\ldots \ar[r] &
+B_n^\bullet \ar[u]
+}
+$$
+such that each morphism $B_i^\bullet \to B_{i + 1}^\bullet$
+is a split injection and each $B_i^\bullet \to A_i^\bullet$
+is a homotopy equivalence. Moreover, if all $A_i^\bullet$ are in
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, or $K^b(\mathcal{A})$,
+then so are the $B_i^\bullet$.
+\end{lemma}
+
+\begin{proof}
+The case $n = 1$ is without content.
+Lemma \ref{lemma-make-injective} is the case $n = 2$.
+Suppose we have constructed the diagram
+except for $B_n^\bullet$. Apply Lemma \ref{lemma-make-injective} to
+the composition $B_{n - 1}^\bullet \to A_{n - 1}^\bullet \to A_n^\bullet$.
+The result is a factorization
+$B_{n - 1}^\bullet \to B_n^\bullet \to A_n^\bullet$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-rotate-triangle}
+Let $\mathcal{A}$ be an additive category. Let
+$(\alpha : A^\bullet \to B^\bullet, \beta : B^\bullet \to C^\bullet, s^n,
+\pi^n)$ be a termwise split sequence of complexes.
+Let $(A^\bullet, B^\bullet, C^\bullet, \alpha, \beta, \delta)$
+be the associated triangle.
+Then the triangle
+$(C^\bullet[-1], A^\bullet, B^\bullet, \delta[-1], \alpha, \beta)$
+is isomorphic to the triangle
+$(C^\bullet[-1], A^\bullet, C(\delta[-1])^\bullet, \delta[-1], i, p)$.
+\end{lemma}
+
+\begin{proof}
+We write $B^n = A^n \oplus C^n$ and we identify $\alpha^n$ and $\beta^n$
+with the natural inclusion and projection maps. By construction of $\delta$ we
+have
+$$
+d_B^n =
+\left(
+\begin{matrix}
+d_A^n & \delta^n \\
+0 & d_C^n
+\end{matrix}
+\right)
+$$
+On the other hand the cone of $\delta[-1] : C^\bullet[-1] \to A^\bullet$
+is given as $C(\delta[-1])^n = A^n \oplus C^n$ with differential identical
+with the matrix above! Whence the lemma.
+\end{proof}
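The block-triangular shape of $d_B$ in the proof encodes exactly the condition that $\delta$ is a morphism of complexes $C^\bullet \to A^\bullet[1]$: expanding $d_B^{n + 1} \circ d_B^n = 0$, the only nontrivial entry is $d_A^{n + 1} \delta^n + \delta^{n + 1} d_C^n = 0$. A numerical check with hypothetical $2$-dimensional data:

```python
import numpy as np

# Hypothetical split complex B^n = A^n (+) C^n with d_B = [[d_A, delta], [0, d_C]].
dA = {0: np.array([[1, 0], [0, 0]]), 1: np.array([[0, 0], [0, 1]])}
dC = {0: np.array([[0, 0], [0, 1]]), 1: np.array([[1, 0], [0, 0]])}
delta = {0: np.array([[0, 0], [0, 1]]), 1: np.array([[0, 0], [0, -1]])}

Z = np.zeros((2, 2), dtype=int)
dB = {n: np.block([[dA[n], delta[n]], [Z, dC[n]]]) for n in (0, 1)}

print((dB[1] @ dB[0] == 0).all())                         # d_B^2 = 0
print((dA[1] @ delta[0] + delta[1] @ dC[0] == 0).all())   # delta : C -> A[1]
```

The second check is the statement that $\delta$ commutes with the differentials once one remembers that the differential of $A^\bullet[1]$ is $-d_A$.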
+
+\begin{lemma}
+\label{lemma-rotate-cone}
+Let $\mathcal{A}$ be an additive category.
+Let $f : K^\bullet \to L^\bullet$ be a morphism of complexes.
+The triangle $(L^\bullet, C(f)^\bullet, K^\bullet[1], i, p, f[1])$ is
+the triangle associated to the termwise split sequence
+$$
+0 \to L^\bullet \to C(f)^\bullet \to K^\bullet[1] \to 0
+$$
+coming from the definition of the cone of $f$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the definitions.
+\end{proof}
+
+
+
+
+
+
+\section{Distinguished triangles in the homotopy category}
+\label{section-homotopy-triangulated}
+
+\noindent
+Since we want our boundary maps in long exact sequences of cohomology
+to be given by the maps in the snake lemma without signs we define
+distinguished triangles in the homotopy category as follows.
+
+\begin{definition}
+\label{definition-distinguished-triangle}
+Let $\mathcal{A}$ be an additive category.
+A triangle $(X, Y, Z, f, g, h)$ of $K(\mathcal{A})$ is
+called a {\it distinguished triangle of $K(\mathcal{A})$}
+if it is isomorphic to the triangle associated to
+a termwise split exact sequence of complexes, see Definition
+\ref{definition-split-ses}.
+Same definition for $K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, and
+$K^b(\mathcal{A})$.
+\end{definition}
+
+\noindent
+Note that according to Lemma \ref{lemma-the-same-up-to-isomorphisms}
+a triangle of the form $(K^\bullet, L^\bullet, C(f)^\bullet, f, i, -p)$
+is a distinguished triangle.
+This does indeed lead to a triangulated category, see
+Proposition \ref{proposition-homotopy-category-triangulated}.
+Before we can prove the proposition we need one more lemma
+in order to be able to prove TR4.
+
+\begin{lemma}
+\label{lemma-two-split-injections}
+Let $\mathcal{A}$ be an additive category. Suppose that
+$\alpha : A^\bullet \to B^\bullet$ and $\beta : B^\bullet \to C^\bullet$
+are split injections of complexes. Then there exist distinguished triangles
+$(A^\bullet, B^\bullet, Q_1^\bullet, \alpha, p_1, d_1)$,
+$(A^\bullet, C^\bullet, Q_2^\bullet, \beta \circ \alpha, p_2, d_2)$
+and
+$(B^\bullet, C^\bullet, Q_3^\bullet, \beta, p_3, d_3)$
+for which TR4 holds.
+\end{lemma}
+
+\begin{proof}
Say $\pi_1^n : B^n \to A^n$ and $\pi_3^n : C^n \to B^n$ are the splittings.
+Then also $A^\bullet \to C^\bullet$ is a split injection with splittings
+$\pi_2^n = \pi_1^n \circ \pi_3^n$. Let us write $Q_1^\bullet$, $Q_2^\bullet$
+and $Q_3^\bullet$ for the ``quotient'' complexes. In other words,
+$Q_1^n = \Ker(\pi_1^n)$, $Q_3^n = \Ker(\pi_3^n)$ and
+$Q_2^n = \Ker(\pi_2^n)$. Note that the kernels exist. Then
$B^n = A^n \oplus Q_1^n$ and $C^n = B^n \oplus Q_3^n$, where we think of $A^n$
+as a subobject of $B^n$ and so on. This implies
+$C^n = A^n \oplus Q_1^n \oplus Q_3^n$. Note that
+$\pi_2^n = \pi_1^n \circ \pi_3^n$ is zero on both $Q_1^n$ and $Q_3^n$. Hence
+$Q_2^n = Q_1^n \oplus Q_3^n$. Consider the commutative diagram
+$$
+\begin{matrix}
+0 & \to & A^\bullet & \to & B^\bullet & \to & Q_1^\bullet & \to & 0 \\
+ & & \downarrow & & \downarrow & & \downarrow & \\
+0 & \to & A^\bullet & \to & C^\bullet & \to & Q_2^\bullet & \to & 0 \\
+ & & \downarrow & & \downarrow & & \downarrow & \\
+0 & \to & B^\bullet & \to & C^\bullet & \to & Q_3^\bullet & \to & 0
+\end{matrix}
+$$
+The rows of this diagram are termwise split exact sequences, and
+hence determine distinguished triangles by
definition. Moreover, the downward arrows in the diagram above
+are compatible with the chosen splittings and hence
+define morphisms of triangles
+$$
+(A^\bullet \to B^\bullet \to Q_1^\bullet \to A^\bullet[1])
+\longrightarrow
+(A^\bullet \to C^\bullet \to Q_2^\bullet \to A^\bullet[1])
+$$
+and
+$$
+(A^\bullet \to C^\bullet \to Q_2^\bullet \to A^\bullet[1])
+\longrightarrow
+(B^\bullet \to C^\bullet \to Q_3^\bullet \to B^\bullet[1]).
+$$
+Note that the splittings $Q_3^n \to C^n$
of the bottom split sequence in the diagram provide a splitting
+for the split sequence
+$0 \to Q_1^\bullet \to Q_2^\bullet \to Q_3^\bullet \to 0$
+upon composing with $C^n \to Q_2^n$. It follows easily from this
+that the morphism $\delta : Q_3^\bullet \to Q_1^\bullet[1]$
+in the corresponding distinguished triangle
+$$
+(Q_1^\bullet \to Q_2^\bullet \to Q_3^\bullet \to Q_1^\bullet[1])
+$$
+is equal to the composition $Q_3^\bullet \to B^\bullet[1] \to Q_1^\bullet[1]$.
+Hence we get a structure as in the conclusion of axiom TR4.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-homotopy-category-triangulated}
+Let $\mathcal{A}$ be an additive category.
+The category $K(\mathcal{A})$ of complexes up to
+homotopy with its natural translation functors
+and distinguished triangles as defined above
+is a triangulated category.
+\end{proposition}
+
+\begin{proof}
+Proof of TR1. By definition every triangle isomorphic to a distinguished
+one is distinguished. Also, any triangle $(A^\bullet, A^\bullet, 0, 1, 0, 0)$
+is distinguished since $0 \to A^\bullet \to A^\bullet \to 0 \to 0$ is
+a termwise split sequence of complexes. Finally, given any morphism of
+complexes $f : K^\bullet \to L^\bullet$ the triangle
+$(K, L, C(f), f, i, -p)$ is distinguished by
+Lemma \ref{lemma-the-same-up-to-isomorphisms}.
+
+\medskip\noindent
+Proof of TR2. Let $(X, Y, Z, f, g, h)$ be a triangle.
+Assume $(Y, Z, X[1], g, h, -f[1])$ is distinguished.
+Then there exists a termwise split sequence of complexes
+$A^\bullet \to B^\bullet \to C^\bullet$ such that the associated
+triangle $(A^\bullet, B^\bullet, C^\bullet, \alpha, \beta, \delta)$
+is isomorphic to $(Y, Z, X[1], g, h, -f[1])$. Rotating back we see
+that $(X, Y, Z, f, g, h)$ is isomorphic to
+$(C^\bullet[-1], A^\bullet, B^\bullet, -\delta[-1], \alpha, \beta)$.
+It follows from Lemma \ref{lemma-rotate-triangle} that the triangle
+$(C^\bullet[-1], A^\bullet, B^\bullet, \delta[-1], \alpha, \beta)$
+is isomorphic to
+$(C^\bullet[-1], A^\bullet, C(\delta[-1])^\bullet, \delta[-1], i, p)$.
+Precomposing the previous isomorphism of triangles with $-1$ on $Y$
+it follows that $(X, Y, Z, f, g, h)$ is isomorphic to
+$(C^\bullet[-1], A^\bullet, C(\delta[-1])^\bullet, \delta[-1], i, -p)$.
+Hence it is distinguished by
+Lemma \ref{lemma-the-same-up-to-isomorphisms}.
+On the other hand, suppose that $(X, Y, Z, f, g, h)$ is distinguished.
+By Lemma \ref{lemma-the-same-up-to-isomorphisms} this means that it is
+isomorphic to a triangle of the form
+$(K^\bullet, L^\bullet, C(f), f, i, -p)$ for some morphism of
+complexes $f$. Then the rotated triangle $(Y, Z, X[1], g, h, -f[1])$ is
+isomorphic to $(L^\bullet, C(f), K^\bullet[1], i, -p, -f[1])$ which is
+isomorphic to the triangle $(L^\bullet, C(f), K^\bullet[1], i, p, f[1])$.
+By Lemma \ref{lemma-rotate-cone} this triangle is distinguished.
+Hence $(Y, Z, X[1], g, h, -f[1])$ is distinguished as desired.
+
+\medskip\noindent
+Proof of TR3. Let
+$(X, Y, Z, f, g, h)$ and $(X', Y', Z', f', g', h')$
+be distinguished triangles of $K(\mathcal{A})$
+and let $a : X \to X'$ and $b : Y \to Y'$ be morphisms
+such that $f' \circ a = b \circ f$. By
+Lemma \ref{lemma-the-same-up-to-isomorphisms} we may assume that
+$(X, Y, Z, f, g, h) = (X, Y, C(f), f, i, -p)$ and
+$(X', Y', Z', f', g', h') = (X', Y', C(f'), f', i', -p')$.
+At this point we simply apply Lemma \ref{lemma-functorial-cone}
+to the commutative diagram given by $f, f', a, b$.
+
+\medskip\noindent
+Proof of TR4. At this point we know that $K(\mathcal{A})$
+is a pre-triangulated category. Hence we can use
+Lemma \ref{lemma-easier-axiom-four}. Let $A^\bullet \to B^\bullet$
+and $B^\bullet \to C^\bullet$ be composable morphisms of
+$K(\mathcal{A})$. By Lemma \ref{lemma-sequence-maps-split} we may assume that
+$A^\bullet \to B^\bullet$ and $B^\bullet \to C^\bullet$
+are split injective morphisms. In this case the result follows
+from Lemma \ref{lemma-two-split-injections}.
+\end{proof}
+
+\begin{remark}
+\label{remark-boundedness-conditions-triangulated}
+Let $\mathcal{A}$ be an additive category.
+Exactly the same proof as the proof of
+Proposition \ref{proposition-homotopy-category-triangulated}
+shows that the categories
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, and $K^b(\mathcal{A})$
+are triangulated categories. Namely, the cone of a morphism between
+complexes which are bounded (above, below) is bounded (above, below).
+But we prove below that these are triangulated subcategories
+of $K(\mathcal{A})$ which gives another proof.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-bounded-triangulated-subcategories}
+Let $\mathcal{A}$ be an additive category. The categories
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, and $K^b(\mathcal{A})$
+are full triangulated subcategories of $K(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+Each of the categories mentioned is a full additive subcategory.
+We use the criterion of
+Lemma \ref{lemma-triangulated-subcategory}
+to show that they are triangulated subcategories.
+It is clear that each of the categories
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, and $K^b(\mathcal{A})$
+is preserved under the shift functors $[1], [-1]$.
+Finally, suppose that $f : A^\bullet \to B^\bullet$ is a morphism
+in $K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, or $K^b(\mathcal{A})$.
+Then $(A^\bullet, B^\bullet, C(f)^\bullet, f, i, -p)$ is a distinguished
+triangle of $K(\mathcal{A})$ with $C(f)^\bullet \in K^{+}(\mathcal{A})$,
+$K^{-}(\mathcal{A})$, or $K^b(\mathcal{A})$ as is clear from the construction
+of the cone. Thus the lemma is proved. (Alternatively,
+$A^\bullet \to B^\bullet$ is isomorphic to a termwise split injection
+of complexes in $K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, or
+$K^b(\mathcal{A})$, see
+Lemma \ref{lemma-make-injective}
+and then one can directly take the associated
+distinguished triangle.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-additive-exact-homotopy-category}
+Let $\mathcal{A}$, $\mathcal{B}$ be additive categories.
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor.
+The induced functors
+$$
+\begin{matrix}
+F : K(\mathcal{A}) \longrightarrow K(\mathcal{B}) \\
+F : K^{+}(\mathcal{A}) \longrightarrow K^{+}(\mathcal{B}) \\
+F : K^{-}(\mathcal{A}) \longrightarrow K^{-}(\mathcal{B}) \\
+F : K^b(\mathcal{A}) \longrightarrow K^b(\mathcal{B})
+\end{matrix}
+$$
+are exact functors of triangulated categories.
+\end{lemma}
+
+\begin{proof}
+Suppose $A^\bullet \to B^\bullet \to C^\bullet$
+is a termwise split sequence of complexes of $\mathcal{A}$ with splittings
+$(s^n, \pi^n)$ and associated morphism $\delta : C^\bullet \to A^\bullet[1]$,
+see Definition \ref{definition-split-ses}. Then
+$F(A^\bullet) \to F(B^\bullet) \to F(C^\bullet)$
+is a termwise split sequence of complexes with splittings
+$(F(s^n), F(\pi^n))$ and associated morphism
+$F(\delta) : F(C^\bullet) \to F(A^\bullet)[1]$.
+Thus $F$ transforms distinguished triangles into distinguished triangles.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-improve-distinguished-triangle-homotopy}
+Let $\mathcal{A}$ be an additive category. Let
+$(A^\bullet, B^\bullet, C^\bullet, a, b, c)$ be a distinguished triangle in
+$K(\mathcal{A})$. Then there exists an isomorphic distinguished triangle
+$(A^\bullet, (B')^\bullet, C^\bullet, a', b', c)$ such that
+$0 \to A^n \to (B')^n \to C^n \to 0$ is a split short exact sequence
+for all $n$.
+\end{lemma}
+
+\begin{proof}
+We will use that $K(\mathcal{A})$ is a triangulated category by
+Proposition \ref{proposition-homotopy-category-triangulated}.
+Let $W^\bullet$ be the cone on $c : C^\bullet \to A^\bullet[1]$ with its maps
+$i : A^\bullet[1] \to W^\bullet$ and $p : W^\bullet \to C^\bullet[1]$. Then
+$(C^\bullet, A^\bullet[1], W^\bullet, c, i, -p)$ is a distinguished triangle
+by Lemma \ref{lemma-the-same-up-to-isomorphisms}. Rotating backwards twice
+we see that $(A^\bullet, W^\bullet[-1], C^\bullet, -i[-1], p[-1], c)$
+is a distinguished triangle. By TR3 there is a morphism of distinguished
+triangles
+$(\text{id}, \beta, \text{id}) : (A^\bullet, B^\bullet, C^\bullet, a, b, c) \to
+(A^\bullet, W^\bullet[-1], C^\bullet, -i[-1], p[-1], c)$
+which must be an isomorphism by Lemma \ref{lemma-third-isomorphism-triangle}.
+This finishes the proof because
+$0 \to A^\bullet \to W^\bullet[-1] \to C^\bullet \to 0$
+is a termwise split short exact sequence of complexes
+by the very construction of cones in Section \ref{section-cones}.
+\end{proof}
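+
+\medskip\noindent
+To see the termwise splitting concretely: by the construction of the cone
+the complex $W^\bullet$ has terms $W^n = A^{n + 1} \oplus C^{n + 1}$, so
+$W^\bullet[-1]$ has terms $A^n \oplus C^n$ and in each degree the sequence
+of the lemma becomes the visibly split sequence
+$$
+0 \to A^n \to A^n \oplus C^n \to C^n \to 0.
+$$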
+
+\begin{remark}
+\label{remark-homotopy-double}
+Let $\mathcal{A}$ be an additive category with countable direct sums.
+Let $\text{DoubleComp}(\mathcal{A})$ denote the category of double complexes
+in $\mathcal{A}$, see
+Homology, Section \ref{homology-section-double-complexes}.
+We can use this category to construct two triangulated categories.
+\begin{enumerate}
+\item We can consider an object $A^{\bullet, \bullet}$ of
+$\text{DoubleComp}(\mathcal{A})$ as a complex of complexes
+as follows
+$$
+\ldots \to A^{\bullet, -1} \to A^{\bullet, 0} \to A^{\bullet, 1} \to \ldots
+$$
+and take the homotopy category $K_{first}(\text{DoubleComp}(\mathcal{A}))$
+with the corresponding triangulated structure given by
+Proposition \ref{proposition-homotopy-category-triangulated}.
+By Homology, Remark
+\ref{homology-remark-double-complex-complex-of-complexes-first} the functor
+$$
+\text{Tot} :
+K_{first}(\text{DoubleComp}(\mathcal{A}))
+\longrightarrow
+K(\mathcal{A})
+$$
+is an exact functor of triangulated categories.
+\item We can consider an object $A^{\bullet, \bullet}$ of
+$\text{DoubleComp}(\mathcal{A})$ as a complex of complexes
+as follows
+$$
+\ldots \to A^{-1, \bullet} \to A^{0, \bullet} \to A^{1, \bullet} \to \ldots
+$$
+and take the homotopy category $K_{second}(\text{DoubleComp}(\mathcal{A}))$
+with the corresponding triangulated structure given by
+Proposition \ref{proposition-homotopy-category-triangulated}.
+By Homology, Remark
+\ref{homology-remark-double-complex-complex-of-complexes-second} the functor
+$$
+\text{Tot} :
+K_{second}(\text{DoubleComp}(\mathcal{A}))
+\longrightarrow
+K(\mathcal{A})
+$$
+is an exact functor of triangulated categories.
+\end{enumerate}
+\end{remark}
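+
+\medskip\noindent
+For concreteness, recall that the total complex of a double complex
+$A^{\bullet, \bullet}$ has terms
+$$
+\text{Tot}(A^{\bullet, \bullet})^n =
+\bigoplus\nolimits_{p + q = n} A^{p, q}
+$$
+with differential obtained by combining the two differentials of
+$A^{\bullet, \bullet}$ up to sign, for instance
+$d(x) = d_1(x) + (-1)^p d_2(x)$ for $x \in A^{p, q}$; the precise signs
+are the ones fixed in
+Homology, Section \ref{homology-section-double-complexes}.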
+
+\begin{remark}
+\label{remark-double-complex-as-tensor-product-of}
+Let $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$ be additive categories
+and assume $\mathcal{C}$ has countable direct sums. Suppose that
+$$
+\otimes : \mathcal{A} \times \mathcal{B} \longrightarrow \mathcal{C},
+\quad
+(X, Y) \longmapsto X \otimes Y
+$$
+is a functor which is bilinear on morphisms. This determines a functor
+$$
+\text{Comp}(\mathcal{A}) \times \text{Comp}(\mathcal{B})
+\longrightarrow
+\text{DoubleComp}(\mathcal{C}), \quad
+(X^\bullet, Y^\bullet)
+\longmapsto
+X^\bullet \otimes Y^\bullet
+$$
+See
+Homology, Example \ref{homology-example-double-complex-as-tensor-product-of}.
+\begin{enumerate}
+\item For a fixed object $X^\bullet$ of $\text{Comp}(\mathcal{A})$
+the functor
+$$
+K(\mathcal{B}) \longrightarrow K(\mathcal{C}), \quad
+Y^\bullet \longmapsto \text{Tot}(X^\bullet \otimes Y^\bullet)
+$$
+is an exact functor of triangulated categories.
+\item For a fixed object $Y^\bullet$ of $\text{Comp}(\mathcal{B})$
+the functor
+$$
+K(\mathcal{A}) \longrightarrow K(\mathcal{C}), \quad
+X^\bullet \longmapsto \text{Tot}(X^\bullet \otimes Y^\bullet)
+$$
+is an exact functor of triangulated categories.
+\end{enumerate}
+This follows from Remark \ref{remark-homotopy-double} since
+the functors
+$\text{Comp}(\mathcal{A}) \to \text{DoubleComp}(\mathcal{C})$,
+$Y^\bullet \mapsto X^\bullet \otimes Y^\bullet$ and
+$\text{Comp}(\mathcal{B}) \to \text{DoubleComp}(\mathcal{C})$,
+$X^\bullet \mapsto X^\bullet \otimes Y^\bullet$
+are immediately seen to be compatible with homotopies
+and termwise split short exact sequences and hence induce
+exact functors of triangulated categories
+$$
+K(\mathcal{B}) \to K_{first}(\text{DoubleComp}(\mathcal{C}))
+\quad\text{and}\quad
+K(\mathcal{A}) \to K_{second}(\text{DoubleComp}(\mathcal{C}))
+$$
+Observe that for the first of the two the isomorphism
+$$
+\text{Tot}(X^\bullet \otimes Y^\bullet[1]) \cong
+\text{Tot}(X^\bullet \otimes Y^\bullet)[1]
+$$
+involves signs (this goes back to the signs chosen in
+Homology, Remark \ref{homology-remark-shift-double-complex}).
+\end{remark}
+
+
+
+
+
+
+\section{Derived categories}
+\label{section-derived-categories}
+
+\noindent
+In this section we construct the derived category of an abelian category
+$\mathcal{A}$ by inverting the quasi-isomorphisms in $K(\mathcal{A})$.
+Before we do this recall that the functors
+$H^i : \text{Comp}(\mathcal{A}) \to \mathcal{A}$
+factor through $K(\mathcal{A})$, see
+Homology, Lemma \ref{homology-lemma-map-cohomology-homotopy-cochain}.
+Moreover, in
+Homology, Definition \ref{homology-definition-cohomology-shift}
+we have defined identifications $H^i(K^\bullet[n]) = H^{i + n}(K^\bullet)$.
+At this point it makes sense to redefine
+$$
+H^i(K^\bullet) = H^0(K^\bullet[i])
+$$
+in order to avoid confusion and possible sign errors.
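+
+\medskip\noindent
+Unwinding the definitions, this convention recovers the usual description:
+since shifting changes the differentials only up to sign, we have
+$$
+H^i(K^\bullet) = H^0(K^\bullet[i]) =
+\Ker(K^i \to K^{i + 1})/\Im(K^{i - 1} \to K^i).
+$$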
+
+\begin{lemma}
+\label{lemma-cohomology-homological}
+Let $\mathcal{A}$ be an abelian category. The functor
+$$
+H^0 : K(\mathcal{A}) \longrightarrow \mathcal{A}
+$$
+is homological.
+\end{lemma}
+
+\begin{proof}
+Because $H^0$ is a functor, and by our definition of distinguished triangles,
+it suffices to prove that given a termwise split short exact sequence
+of complexes $0 \to A^\bullet \to B^\bullet \to C^\bullet \to 0$
+the sequence $H^0(A^\bullet) \to H^0(B^\bullet) \to H^0(C^\bullet)$
+is exact. This follows from
+Homology, Lemma \ref{homology-lemma-long-exact-sequence-cochain}.
+\end{proof}
+
+\noindent
+In particular, this lemma implies that a distinguished triangle
+$(X, Y, Z, f, g, h)$ in $K(\mathcal{A})$ gives rise to a long exact
+cohomology sequence
+\begin{equation}
+\label{equation-long-exact-cohomology-sequence-D}
+\xymatrix{
+\ldots \ar[r] &
+H^i(X) \ar[r]^{H^i(f)} &
+H^i(Y) \ar[r]^{H^i(g)} &
+H^i(Z) \ar[r]^{H^i(h)} &
+H^{i + 1}(X) \ar[r] & \ldots
+}
+\end{equation}
+see (\ref{equation-long-exact-cohomology-sequence}). Moreover, there is
+a compatibility with the long exact sequence of cohomology associated to
+a short exact sequence of complexes (insert future reference here). For
+example, if $(A^\bullet, B^\bullet, C^\bullet, \alpha, \beta, \delta)$
+is the distinguished triangle associated to a termwise split exact
+sequence of complexes (see
+Definition \ref{definition-split-ses}),
+then the cohomology sequence above agrees with the one defined using the
+snake lemma, see
+Homology, Lemma \ref{homology-lemma-long-exact-sequence-cochain}
+and for agreement of sequences, see
+Homology, Lemma \ref{homology-lemma-ses-termwise-split-long-cochain}.
+
+\medskip\noindent
+Recall that a complex $K^\bullet$ is {\it acyclic} if $H^i(K^\bullet) = 0$
+for all $i \in \mathbf{Z}$. Moreover, recall that a morphism of complexes
+$f : K^\bullet \to L^\bullet$ is a {\it quasi-isomorphism} if and only if
+$H^i(f)$ is an isomorphism for all $i$. See
+Homology, Definition \ref{homology-definition-quasi-isomorphism-cochain}.
+
+\begin{lemma}
+\label{lemma-acyclic}
+Let $\mathcal{A}$ be an abelian category. The full subcategory
+$\text{Ac}(\mathcal{A})$ of $K(\mathcal{A})$ consisting of acyclic complexes
+is a strictly full saturated triangulated subcategory of $K(\mathcal{A})$.
+The corresponding saturated multiplicative system (see
+Lemma \ref{lemma-operations})
+of $K(\mathcal{A})$ is the set $\text{Qis}(\mathcal{A})$
+of quasi-isomorphisms. In particular, the kernel of the localization
+functor $Q : K(\mathcal{A}) \to \text{Qis}(\mathcal{A})^{-1}K(\mathcal{A})$
+is $\text{Ac}(\mathcal{A})$ and the functor $H^0$ factors through $Q$.
+\end{lemma}
+
+\begin{proof}
+We know that $H^0$ is a homological functor by
+Lemma \ref{lemma-cohomology-homological}.
+Thus this lemma is a special case of
+Lemma \ref{lemma-acyclic-general}.
+\end{proof}
+
+\begin{definition}
+\label{definition-unbounded-derived-category}
+Let $\mathcal{A}$ be an abelian category.
+Let $\text{Ac}(\mathcal{A})$ and $\text{Qis}(\mathcal{A})$
+be as in
+Lemma \ref{lemma-acyclic}.
+The {\it derived category of $\mathcal{A}$} is the triangulated
+category
+$$
+D(\mathcal{A}) =
+K(\mathcal{A})/\text{Ac}(\mathcal{A}) =
+\text{Qis}(\mathcal{A})^{-1} K(\mathcal{A}).
+$$
+We denote $H^0 : D(\mathcal{A}) \to \mathcal{A}$ the unique functor
+whose composition with the quotient functor gives back the functor
+$H^0$ defined above. Using
+Lemma \ref{lemma-homological-functor-bounded}
+we introduce the strictly full saturated triangulated subcategories
+$D^{+}(\mathcal{A}), D^{-}(\mathcal{A}), D^b(\mathcal{A})$
+whose sets of objects are
+$$
+\begin{matrix}
+\Ob(D^{+}(\mathcal{A})) =
+\{X \in \Ob(D(\mathcal{A})) \mid
+H^n(X) = 0\text{ for all }n \ll 0\} \\
+\Ob(D^{-}(\mathcal{A})) =
+\{X \in \Ob(D(\mathcal{A})) \mid
+H^n(X) = 0\text{ for all }n \gg 0\} \\
+\Ob(D^b(\mathcal{A})) =
+\{X \in \Ob(D(\mathcal{A})) \mid
+H^n(X) = 0\text{ for all }|n| \gg 0\}
+\end{matrix}
+$$
+The category $D^b(\mathcal{A})$ is called the {\it bounded derived
+category} of $\mathcal{A}$.
+\end{definition}
+
+\noindent
+If $K^\bullet$ and $L^\bullet$ are complexes of $\mathcal{A}$
+then we sometimes say ``$K^\bullet$ is {\it quasi-isomorphic} to
+$L^\bullet$'' to indicate that $K^\bullet$ and $L^\bullet$ are
+isomorphic objects of $D(\mathcal{A})$.
+
+\begin{remark}
+\label{remark-existence-derived}
+In this chapter, we consistently work with ``small'' abelian categories
+(as is the convention in the Stacks project). For a ``big'' abelian
+category $\mathcal{A}$, it isn't clear that the derived category
+$D(\mathcal{A})$ exists, because it isn't clear that morphisms in the
+derived category are sets. In fact, in general they aren't, see
+Examples, Lemma \ref{examples-lemma-big-abelian-category}.
+However, if $\mathcal{A}$ is a Grothendieck abelian category, and given
+$K^\bullet, L^\bullet$ in $K(\mathcal{A})$, then by
+Injectives, Theorem \ref{injectives-theorem-K-injective-embedding-grothendieck}
+there exists a quasi-isomorphism $L^\bullet \to I^\bullet$ to a
+K-injective complex $I^\bullet$ and Lemma \ref{lemma-K-injective} shows that
+$$
+\Hom_{D(\mathcal{A})}(K^\bullet, L^\bullet) =
+\Hom_{K(\mathcal{A})}(K^\bullet, I^\bullet)
+$$
+which is a set. Some examples of Grothendieck abelian categories
+are the category of modules over a ring, or more generally
+the category of sheaves of modules on a ringed site.
+\end{remark}
+
+\noindent
+Each of the variants $D^{+}(\mathcal{A}), D^{-}(\mathcal{A}), D^b(\mathcal{A})$
+can be constructed as a localization of the corresponding homotopy category.
+This relies on the following simple lemma.
+
+\begin{lemma}
+\label{lemma-complex-cohomology-bounded}
+Let $\mathcal{A}$ be an abelian category.
+Let $K^\bullet$ be a complex.
+\begin{enumerate}
+\item If $H^n(K^\bullet) = 0$ for all $n \ll 0$, then there exists
+a quasi-isomorphism $K^\bullet \to L^\bullet$ with $L^\bullet$
+bounded below.
+\item If $H^n(K^\bullet) = 0$ for all $n \gg 0$, then there exists
+a quasi-isomorphism $M^\bullet \to K^\bullet$ with $M^\bullet$
+bounded above.
+\item If $H^n(K^\bullet) = 0$ for all $|n| \gg 0$, then there exists
+a commutative diagram of morphisms of complexes
+$$
+\xymatrix{
+K^\bullet \ar[r] & L^\bullet \\
+M^\bullet \ar[u] \ar[r] & N^\bullet \ar[u]
+}
+$$
+where all the arrows are quasi-isomorphisms, $L^\bullet$
+bounded below, $M^\bullet$ bounded above, and $N^\bullet$ a bounded
+complex.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Pick $a \ll 0 \ll b$ and set $M^\bullet = \tau_{\leq b}K^\bullet$,
+$L^\bullet = \tau_{\geq a}K^\bullet$, and
+$N^\bullet = \tau_{\leq b}L^\bullet = \tau_{\geq a}M^\bullet$.
+See
+Homology, Section \ref{homology-section-truncations}
+for the truncation functors.
+\end{proof}
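+
+\medskip\noindent
+For the convenience of the reader we recall the shape of these truncations:
+$\tau_{\leq b}K^\bullet$ is the subcomplex
+$$
+\ldots \to K^{b - 2} \to K^{b - 1} \to \Ker(d^b) \to 0 \to \ldots
+$$
+and $\tau_{\geq a}K^\bullet$ is the quotient complex
+$$
+\ldots \to 0 \to \Coker(d^{a - 1}) \to K^{a + 1} \to K^{a + 2} \to \ldots
+$$
+These have the same cohomology as $K^\bullet$ in degrees $\leq b$,
+resp.\ $\geq a$, and vanishing cohomology elsewhere, whence the maps in
+the proof are quasi-isomorphisms for $a \ll 0 \ll b$.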
+
+\noindent
+To state the following lemma denote
+$\text{Ac}^{+}(\mathcal{A})$, $\text{Ac}^{-}(\mathcal{A})$,
+resp.\ $\text{Ac}^b(\mathcal{A})$ the intersection of
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, resp.\ $K^b(\mathcal{A})$
+with $\text{Ac}(\mathcal{A})$.
+Denote $\text{Qis}^{+}(\mathcal{A})$, $\text{Qis}^{-}(\mathcal{A})$,
+resp.\ $\text{Qis}^b(\mathcal{A})$ the intersection of
+$K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, resp.\ $K^b(\mathcal{A})$
+with $\text{Qis}(\mathcal{A})$.
+
+\begin{lemma}
+\label{lemma-bounded-derived}
+Let $\mathcal{A}$ be an abelian category. The subcategories
+$\text{Ac}^{+}(\mathcal{A})$, $\text{Ac}^{-}(\mathcal{A})$,
+resp.\ $\text{Ac}^b(\mathcal{A})$
+are strictly full saturated triangulated subcategories
+of $K^{+}(\mathcal{A})$, $K^{-}(\mathcal{A})$, resp.\ $K^b(\mathcal{A})$.
+The corresponding saturated multiplicative systems (see
+Lemma \ref{lemma-operations})
+are the sets $\text{Qis}^{+}(\mathcal{A})$, $\text{Qis}^{-}(\mathcal{A})$,
+resp.\ $\text{Qis}^b(\mathcal{A})$.
+\begin{enumerate}
+\item The kernel of the functor $K^{+}(\mathcal{A}) \to D^{+}(\mathcal{A})$
+is $\text{Ac}^{+}(\mathcal{A})$ and this induces an equivalence
+of triangulated categories
+$$
+K^{+}(\mathcal{A})/\text{Ac}^{+}(\mathcal{A}) =
+\text{Qis}^{+}(\mathcal{A})^{-1}K^{+}(\mathcal{A})
+\longrightarrow
+D^{+}(\mathcal{A})
+$$
+\item The kernel of the functor $K^{-}(\mathcal{A}) \to D^{-}(\mathcal{A})$
+is $\text{Ac}^{-}(\mathcal{A})$ and this induces an equivalence
+of triangulated categories
+$$
+K^{-}(\mathcal{A})/\text{Ac}^{-}(\mathcal{A}) =
+\text{Qis}^{-}(\mathcal{A})^{-1}K^{-}(\mathcal{A})
+\longrightarrow
+D^{-}(\mathcal{A})
+$$
+\item The kernel of the functor $K^b(\mathcal{A}) \to D^b(\mathcal{A})$
+is $\text{Ac}^b(\mathcal{A})$ and this induces an equivalence
+of triangulated categories
+$$
+K^b(\mathcal{A})/\text{Ac}^b(\mathcal{A}) =
+\text{Qis}^b(\mathcal{A})^{-1}K^b(\mathcal{A})
+\longrightarrow
+D^b(\mathcal{A})
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The initial statements follow from
+Lemma \ref{lemma-acyclic-general}
+by considering the restriction of the homological functor $H^0$.
+The statement on kernels in (1), (2), (3) is a consequence of the
+definitions in each case.
+Each of the functors is essentially surjective by
+Lemma \ref{lemma-complex-cohomology-bounded}.
+To finish the proof we have to show the functors are fully faithful.
+We first do this for the bounded below version.
+
+\medskip\noindent
+Suppose that $K^\bullet, L^\bullet$ are bounded below complexes.
+A morphism between these in $D(\mathcal{A})$ is of the form
+$s^{-1}f$ for a pair
+$f : K^\bullet \to (L')^\bullet$, $s : L^\bullet \to (L')^\bullet$
+where $s$ is a quasi-isomorphism. This implies that $(L')^\bullet$
+has cohomology bounded below. Hence by
+Lemma \ref{lemma-complex-cohomology-bounded}
+we can choose a quasi-isomorphism
+$s' : (L')^\bullet \to (L'')^\bullet$
+with $(L'')^\bullet$ bounded below. Then the pair $(s' \circ f, s' \circ s)$
+defines a morphism in $\text{Qis}^{+}(\mathcal{A})^{-1}K^{+}(\mathcal{A})$.
+Hence the functor is ``full''. Finally, suppose that the pair
+$f : K^\bullet \to (L')^\bullet$, $s : L^\bullet \to (L')^\bullet$
+defines a morphism in $\text{Qis}^{+}(\mathcal{A})^{-1}K^{+}(\mathcal{A})$
+which is zero in $D(\mathcal{A})$. This means that there exists a
+quasi-isomorphism $s' : (L')^\bullet \to (L'')^\bullet$
+such that $s' \circ f = 0$. Using
+Lemma \ref{lemma-complex-cohomology-bounded}
+once more we obtain a quasi-isomorphism
+$s'' : (L'')^\bullet \to (L''')^\bullet$
+with $(L''')^\bullet$ bounded below.
+Thus we see that $s'' \circ s' \circ f = 0$ which implies that
+$s^{-1}f$ is zero in $\text{Qis}^{+}(\mathcal{A})^{-1}K^{+}(\mathcal{A})$.
+This finishes the proof that the functor in (1) is an equivalence.
+
+\medskip\noindent
+The proof of (2) is dual to the proof of (1).
+To prove (3) we may use the result of (2). Hence it suffices to
+prove that the functor
+$\text{Qis}^b(\mathcal{A})^{-1}K^b(\mathcal{A})
+\to \text{Qis}^{-}(\mathcal{A})^{-1}K^{-}(\mathcal{A})$
+is fully faithful. The argument given in the previous paragraph
+applies directly to show this, working throughout with complexes
+which are already bounded above.
+\end{proof}
+
+
+
+
+
+
+
+\section{The canonical delta-functor}
+\label{section-canonical-delta-functor}
+
+\noindent
+The derived category should be the receptacle for the universal
+cohomology functor. In order to state the result we use
+the notion of a $\delta$-functor from an abelian category
+into a triangulated category, see
+Definition \ref{definition-delta-functor}.
+
+\medskip\noindent
+Consider the functor
+$\text{Comp}(\mathcal{A}) \to K(\mathcal{A})$.
+This functor is {\bf not} a $\delta$-functor in general.
+The easiest way to see this is to consider a nonsplit
+short exact sequence $0 \to A \to B \to C \to 0$
+of objects of $\mathcal{A}$. Since
+$\Hom_{K(\mathcal{A})}(C[0], A[1]) = 0$
+we see that any distinguished triangle arising from
+this short exact sequence would look like
+$(A[0], B[0], C[0], a, b, 0)$. But the existence of such a
+distinguished triangle in $K(\mathcal{A})$ implies
+that the extension is split. A contradiction.
+
+\medskip\noindent
+It turns out that the functor
+$\text{Comp}(\mathcal{A}) \to D(\mathcal{A})$ is a
+$\delta$-functor. In order to see this we have to define
+the morphisms $\delta$ associated to a short exact sequence
+$$
+0 \to A^\bullet \xrightarrow{a} B^\bullet \xrightarrow{b} C^\bullet \to 0
+$$
+of complexes in the abelian category $\mathcal{A}$.
+Consider the cone $C(a)^\bullet$ of the morphism $a$.
+We have $C(a)^n = B^n \oplus A^{n + 1}$ and we define
+$q^n : C(a)^n \to C^n$ via the projection to $B^n$ followed
+by $b^n$. This yields a morphism of complexes
+$$
+q : C(a)^\bullet \longrightarrow C^\bullet.
+$$
+It is clear that $q \circ i = b$ where $i$ is as in
+Definition \ref{definition-cone}.
+Note that, as $a^\bullet$ is injective in each degree,
+the kernel of $q$ is identified with the cone of
+$\text{id}_{A^\bullet}$ which is acyclic. Hence we see that
+$q$ is a quasi-isomorphism. According to
+Lemma \ref{lemma-the-same-up-to-isomorphisms}
+the triangle
+$$
+(A, B, C(a), a, i, -p)
+$$
+is a distinguished triangle in $K(\mathcal{A})$.
+As the localization functor
+$K(\mathcal{A}) \to D(\mathcal{A})$ is
+exact we see that $(A, B, C(a), a, i, -p)$ is a distinguished
+triangle in $D(\mathcal{A})$. Since $q$ is a quasi-isomorphism
+we see that $q$ is an isomorphism in $D(\mathcal{A})$.
+Hence we deduce that
+$$
+(A, B, C, a, b, -p \circ q^{-1})
+$$
+is a distinguished triangle of $D(\mathcal{A})$.
+This suggests the following lemma.
+
+\begin{lemma}
+\label{lemma-derived-canonical-delta-functor}
+Let $\mathcal{A}$ be an abelian category. The functor
+$\text{Comp}(\mathcal{A}) \to D(\mathcal{A})$
+defined above has the natural structure of a $\delta$-functor,
+with
+$$
+\delta_{A^\bullet \to B^\bullet \to C^\bullet} = - p \circ q^{-1}
+$$
+with $p$ and $q$ as explained above. The same construction turns the
+functors
+$\text{Comp}^{+}(\mathcal{A}) \to D^{+}(\mathcal{A})$,
+$\text{Comp}^{-}(\mathcal{A}) \to D^{-}(\mathcal{A})$, and
+$\text{Comp}^b(\mathcal{A}) \to D^b(\mathcal{A})$
+into $\delta$-functors.
+\end{lemma}
+
+\begin{proof}
+We have already seen that this choice leads to a distinguished
+triangle whenever given a short exact sequence of complexes.
+We have to show that given a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+A^\bullet \ar[r]_a \ar[d]_f &
+B^\bullet \ar[r]_b \ar[d]_g &
+C^\bullet \ar[r] \ar[d]_h &
+0 \\
+0 \ar[r] &
+(A')^\bullet \ar[r]^{a'} &
+(B')^\bullet \ar[r]^{b'} &
+(C')^\bullet \ar[r] &
+0
+}
+$$
+we get the desired commutative diagram of
+Definition \ref{definition-delta-functor} (2).
+By Lemma \ref{lemma-functorial-cone}
+the pair $(f, g)$ induces a canonical morphism
+$c : C(a)^\bullet \to C(a')^\bullet$. It is a simple computation
+to show that $q' \circ c = h \circ q$ and
+$f[1] \circ p = p' \circ c$. From this the result follows directly.
+\end{proof}
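+
+\medskip\noindent
+In each degree the claim made above about the kernel of $q$ can be
+checked directly: using $a^n$ to identify $A^n$ with a subobject of $B^n$
+we obtain a termwise exact sequence
+$$
+0 \to A^n \oplus A^{n + 1} \to C(a)^n = B^n \oplus A^{n + 1}
+\xrightarrow{q^n} C^n \to 0
+$$
+and the subcomplex with terms $A^n \oplus A^{n + 1}$ is exactly the cone
+on $\text{id}_{A^\bullet}$, which is acyclic.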
+
+\begin{lemma}
+\label{lemma-derived-compare-triangles-ses}
+Let $\mathcal{A}$ be an abelian category.
+Let
+$$
+\xymatrix{
+0 \ar[r] &
+A^\bullet \ar[r] \ar[d] &
+B^\bullet \ar[r] \ar[d] &
+C^\bullet \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+D^\bullet \ar[r] &
+E^\bullet \ar[r] &
+F^\bullet \ar[r] &
+0
+}
+$$
+be a commutative diagram of morphisms of complexes
+such that the rows are short exact sequences of complexes, and
+the vertical arrows are quasi-isomorphisms.
+The $\delta$-functor of
+Lemma \ref{lemma-derived-canonical-delta-functor}
+above maps the short exact sequences
+$0 \to A^\bullet \to B^\bullet \to C^\bullet \to 0$
+and
+$0 \to D^\bullet \to E^\bullet \to F^\bullet \to 0$
+to isomorphic distinguished triangles.
+\end{lemma}
+
+\begin{proof}
+Trivial from the fact that $K(\mathcal{A}) \to D(\mathcal{A})$
+transforms quasi-isomorphisms into isomorphisms and that the
+associated distinguished triangles are functorial.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-compare-triangles-split-case}
+Let $\mathcal{A}$ be an abelian category. Let
+$$
+\xymatrix{
+0 \ar[r] &
+A^\bullet \ar[r] &
+B^\bullet \ar[r] &
+C^\bullet \ar[r] &
+0
+}
+$$
+be a short exact sequence of complexes.
+Assume this short exact sequence is termwise split. Let
+$(A^\bullet, B^\bullet, C^\bullet, \alpha, \beta, \delta)$
+be the distinguished triangle of $K(\mathcal{A})$
+associated to the sequence. The $\delta$-functor of
+Lemma \ref{lemma-derived-canonical-delta-functor}
+above maps the short exact sequences
+$0 \to A^\bullet \to B^\bullet \to C^\bullet \to 0$
+to a triangle isomorphic to the distinguished triangle
+$$
+(A^\bullet, B^\bullet, C^\bullet, \alpha, \beta, \delta).
+$$
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemma \ref{lemma-the-same-up-to-isomorphisms}.
+\end{proof}
+
+\begin{remark}
+\label{remark-truncation-distinguished-triangle}
+Let $\mathcal{A}$ be an abelian category. Let $K^\bullet$ be a complex
+of $\mathcal{A}$. Let $a \in \mathbf{Z}$. We claim there is a canonical
+distinguished triangle
+$$
+\tau_{\leq a}K^\bullet \to K^\bullet \to \tau_{\geq a + 1}K^\bullet \to
+(\tau_{\leq a}K^\bullet)[1]
+$$
+in $D(\mathcal{A})$. Here we have used the canonical truncation functors $\tau$
+from Homology, Section \ref{homology-section-truncations}.
+Namely, we first take the distinguished
+triangle associated by our $\delta$-functor
+(Lemma \ref{lemma-derived-canonical-delta-functor})
+to the short exact sequence of complexes
+$$
+0 \to \tau_{\leq a}K^\bullet \to K^\bullet \to
+K^\bullet/\tau_{\leq a}K^\bullet \to 0
+$$
+Next, we use that the map $K^\bullet \to \tau_{\geq a + 1}K^\bullet$
+factors through a quasi-isomorphism
+$K^\bullet/\tau_{\leq a}K^\bullet \to \tau_{\geq a + 1}K^\bullet$
+by the description of cohomology groups in
+Homology, Section \ref{homology-section-truncations}.
+In a similar way we obtain canonical distinguished triangles
+$$
+\tau_{\leq a}K^\bullet \to \tau_{\leq a + 1}K^\bullet \to
+H^{a + 1}(K^\bullet)[-a-1] \to (\tau_{\leq a}K^\bullet)[1]
+$$
+and
+$$
+H^a(K^\bullet)[-a] \to \tau_{\geq a}K^\bullet \to \tau_{\geq a + 1}K^\bullet
+\to H^a(K^\bullet)[-a + 1]
+$$
+\end{remark}
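+
+\medskip\noindent
+As an example, suppose that $K^\bullet$ is a two-term complex
+$(K^0 \to K^1)$ sitting in degrees $0$ and $1$. Taking $a = 0$ the first
+triangle of the remark above becomes
+$$
+H^0(K^\bullet)[0] \to K^\bullet \to H^1(K^\bullet)[-1] \to H^0(K^\bullet)[1]
+$$
+exhibiting $K^\bullet$, in the derived category, as an extension of a shift
+of its top cohomology by its bottom cohomology.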
+
+\begin{lemma}
+\label{lemma-trick-vanishing-composition}
+Let $\mathcal{A}$ be an abelian category. Let
+$$
+K_0^\bullet \to K_1^\bullet \to \ldots \to K_n^\bullet
+$$
+be maps of complexes such that
+\begin{enumerate}
+\item $H^i(K_0^\bullet) = 0$ for $i > 0$,
+\item $H^{-j}(K_j^\bullet) \to H^{-j}(K_{j + 1}^\bullet)$ is zero
+for $j = 0, 1, \ldots, n - 1$.
+\end{enumerate}
+Then the composition $K_0^\bullet \to K_n^\bullet$ factors through
+$\tau_{\leq -n}K_n^\bullet \to K_n^\bullet$ in $D(\mathcal{A})$.
+Dually, given maps of complexes
+$$
+K_n^\bullet \to K_{n - 1}^\bullet \to \ldots \to K_0^\bullet
+$$
+such that
+\begin{enumerate}
+\item $H^i(K_0^\bullet) = 0$ for $i < 0$,
+\item $H^j(K_{j + 1}^\bullet) \to H^j(K_j^\bullet)$ is zero
+for $j = 0, 1, \ldots, n - 1$,
+\end{enumerate}
+then the composition $K_n^\bullet \to K_0^\bullet$ factors through
+$K_n^\bullet \to \tau_{\geq n}K_n^\bullet$ in $D(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+The case $n = 1$. Since $\tau_{\leq 0}K_0^\bullet = K_0^\bullet$
+in $D(\mathcal{A})$ we can replace
+$K_0^\bullet$ by $\tau_{\leq 0}K_0^\bullet$ and
+$K_1^\bullet$ by $\tau_{\leq 0}K_1^\bullet$.
+Consider the distinguished triangle
+$$
+\tau_{\leq -1}K_1^\bullet \to K_1^\bullet \to
+H^0(K_1^\bullet)[0] \to (\tau_{\leq -1}K_1^\bullet)[1]
+$$
+(Remark \ref{remark-truncation-distinguished-triangle}).
+The composition $K_0^\bullet \to K_1^\bullet \to H^0(K_1^\bullet)[0]$
+is zero as it is equal to $K_0^\bullet \to H^0(K_0^\bullet)[0] \to
+H^0(K_1^\bullet)[0]$ which is zero by assumption.
+The fact that $\Hom_{D(\mathcal{A})}(K_0^\bullet, -)$
+is a homological functor (Lemma \ref{lemma-representable-homological}),
+allows us to find the desired factorization.
+For $n = 2$ we get a factorization
+$K_0^\bullet \to \tau_{\leq -1}K_1^\bullet$ by the case $n = 1$
+and we can apply the case $n = 1$ to the map of complexes
+$\tau_{\leq -1}K_1^\bullet \to \tau_{\leq -1}K_2^\bullet$
+to get a factorization
+$\tau_{\leq -1}K_1^\bullet \to \tau_{\leq -2}K_2^\bullet$.
+The general case is proved in exactly the same manner.
+\end{proof}
+
+
+
+
+\section{Filtered derived categories}
+\label{section-filtered-derived-category}
+
+\noindent
+A reference for this section is \cite[I, Chapter V]{cotangent}. Let
+$\mathcal{A}$ be an abelian category. In this section we will define the
+filtered derived category $DF(\mathcal{A})$ of $\mathcal{A}$.
+In short, we will define it as the derived category
+of the exact category of objects of $\mathcal{A}$ endowed with a finite
+filtration. (Thus our construction is a special case of a
+more general construction of the derived category of an
+exact category, see for example \cite{Buhler}, \cite{Keller}.)
+Illusie's filtered derived category is the full subcategory
+of ours consisting of those objects whose filtration is finite.
+(In our category the filtration is still finite in each degree, but may
+not be uniformly bounded.) The rationale for our choice is that it is not
+harder and it allows us to apply the discussion to the spectral sequences of
+Lemma \ref{lemma-two-ss-complex-functor}, see also
+Remark \ref{remark-functorial-ss}.
+
+\medskip\noindent
+We will use the notation regarding filtered objects introduced in
+Homology, Section \ref{homology-section-filtrations}.
+The category of filtered objects of $\mathcal{A}$ is
+denoted $\text{Fil}(\mathcal{A})$.
+All filtrations will be decreasing by fiat.
+
+\begin{definition}
+\label{definition-finite-filtered}
+Let $\mathcal{A}$ be an abelian category. The
+{\it category of finite filtered objects of $\mathcal{A}$}
+is the category of filtered objects
+$(A, F)$ of $\mathcal{A}$ whose filtration $F$ is finite.
+We denote it $\text{Fil}^f(\mathcal{A})$.
+\end{definition}
+
+\noindent
+Thus $\text{Fil}^f(\mathcal{A})$ is a full subcategory of
+$\text{Fil}(\mathcal{A})$. For each $p \in \mathbf{Z}$ there is
+a functor
+$\text{gr}^p : \text{Fil}^f(\mathcal{A}) \to \mathcal{A}$.
+There is a functor
+$$
+\text{gr} = \bigoplus\nolimits_{p \in \mathbf{Z}} \text{gr}^p :
+\text{Fil}^f(\mathcal{A}) \to \text{Gr}(\mathcal{A})
+$$
+where $\text{Gr}(\mathcal{A})$ is the category of graded objects of
+$\mathcal{A}$, see Homology, Definition \ref{homology-definition-graded}.
+Finally, there is a functor
+$$
+(\text{forget }F) : \text{Fil}^f(\mathcal{A}) \longrightarrow \mathcal{A}
+$$
+which associates to the filtered object $(A, F)$ the underlying object
+of $\mathcal{A}$.
+The category $\text{Fil}^f(\mathcal{A})$ is an additive category, but not
+abelian in general, see
+Homology, Example \ref{homology-example-not-abelian}.
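+
+\medskip\noindent
+Concretely, with the conventions of
+Homology, Section \ref{homology-section-filtrations}
+(filtrations are decreasing), the functor $\text{gr}^p$ sends a finite
+filtered object $(A, F)$ to
+$$
+\text{gr}^p(A, F) = F^pA/F^{p + 1}A
+$$
+and $\text{gr}(A, F)$ is the associated graded object
+$\bigoplus\nolimits_{p \in \mathbf{Z}} F^pA/F^{p + 1}A$.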
+
+\medskip\noindent
+Because the functors $\text{gr}^p$, $\text{gr}$, $(\text{forget }F)$
+are additive they induce exact functors of triangulated categories
+$$
+\text{gr}^p, (\text{forget }F) :
+K(\text{Fil}^f(\mathcal{A}))
+\to
+K(\mathcal{A})
+\quad\text{and}\quad
+\text{gr} :
+K(\text{Fil}^f(\mathcal{A}))
+\to
+K(\text{Gr}(\mathcal{A}))
+$$
+by
+Lemma \ref{lemma-additive-exact-homotopy-category}.
+By analogy with the case of the homotopy category of an abelian category
+we make the following definitions.
+
+\begin{definition}
+\label{definition-filtered-acyclic}
+Let $\mathcal{A}$ be an abelian category.
+\begin{enumerate}
+\item Let $\alpha : K^\bullet \to L^\bullet$ be a morphism of
+$K(\text{Fil}^f(\mathcal{A}))$. We say that
+$\alpha$ is a {\it filtered quasi-isomorphism} if
+the morphism $\text{gr}(\alpha)$ is a quasi-isomorphism.
+\item Let $K^\bullet$ be an object of $K(\text{Fil}^f(\mathcal{A}))$.
+We say that $K^\bullet$ is {\it filtered acyclic} if
+the complex $\text{gr}(K^\bullet)$ is acyclic.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that $\alpha : K^\bullet \to L^\bullet$ is a filtered quasi-isomorphism
+if and only if each $\text{gr}^p(\alpha)$ is a quasi-isomorphism. Similarly
+a complex $K^\bullet$ is filtered acyclic if and only if each
+$\text{gr}^p(K^\bullet)$ is acyclic.
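+
+\medskip\noindent
+Spelled out, $\alpha$ is a filtered quasi-isomorphism if and only if
+$$
+H^n(\text{gr}^p(\alpha)) :
+H^n(\text{gr}^p(K^\bullet))
+\longrightarrow
+H^n(\text{gr}^p(L^\bullet))
+$$
+is an isomorphism for all $n, p \in \mathbf{Z}$.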
+
+\begin{lemma}
+\label{lemma-filtered-cohomology-homological}
+Let $\mathcal{A}$ be an abelian category.
+\begin{enumerate}
+\item The functor
+$K(\text{Fil}^f(\mathcal{A})) \longrightarrow \text{Gr}(\mathcal{A})$,
+$K^\bullet \longmapsto H^0(\text{gr}(K^\bullet))$
+is homological.
+\item For every $p \in \mathbf{Z}$ the functor
+$K(\text{Fil}^f(\mathcal{A})) \longrightarrow \mathcal{A}$,
+$K^\bullet \longmapsto H^0(\text{gr}^p(K^\bullet))$
+is homological.
+\item The functor
+$K(\text{Fil}^f(\mathcal{A})) \longrightarrow \mathcal{A}$,
+$K^\bullet \longmapsto H^0((\text{forget }F)K^\bullet)$
+is homological.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from the fact that
+$H^0 : K(\mathcal{A}) \to \mathcal{A}$ is homological, see
+Lemma \ref{lemma-cohomology-homological}
+and the fact that the functors $\text{gr}, \text{gr}^p, (\text{forget }F)$
+are exact functors of triangulated categories. See
+Lemma \ref{lemma-exact-compose-homological-functor}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-filtered-acyclic}
+Let $\mathcal{A}$ be an abelian category. The full subcategory
+$\text{FAc}(\mathcal{A})$ of $K(\text{Fil}^f(\mathcal{A}))$
+consisting of filtered acyclic complexes is a strictly full saturated
+triangulated subcategory of $K(\text{Fil}^f(\mathcal{A}))$.
+The corresponding saturated multiplicative system (see
+Lemma \ref{lemma-operations})
+of $K(\text{Fil}^f(\mathcal{A}))$ is the set
+$\text{FQis}(\mathcal{A})$ of filtered quasi-isomorphisms.
+In particular, the kernel of the localization
+functor
+$$
+Q :
+K(\text{Fil}^f(\mathcal{A}))
+\longrightarrow
+\text{FQis}(\mathcal{A})^{-1}K(\text{Fil}^f(\mathcal{A}))
+$$
+is $\text{FAc}(\mathcal{A})$ and the functor $H^0 \circ \text{gr}$
+factors through $Q$.
+\end{lemma}
+
+\begin{proof}
+We know that $H^0 \circ \text{gr}$ is a homological functor by
+Lemma \ref{lemma-filtered-cohomology-homological}.
+Thus this lemma is a special case of
+Lemma \ref{lemma-acyclic-general}.
+\end{proof}
+
+\begin{definition}
+\label{definition-filtered-derived}
+Let $\mathcal{A}$ be an abelian category.
+Let $\text{FAc}(\mathcal{A})$ and $\text{FQis}(\mathcal{A})$
+be as in
+Lemma \ref{lemma-filtered-acyclic}.
+The {\it filtered derived category of $\mathcal{A}$}
+is the triangulated category
+$$
+DF(\mathcal{A}) =
+K(\text{Fil}^f(\mathcal{A}))/\text{FAc}(\mathcal{A}) =
+\text{FQis}(\mathcal{A})^{-1} K(\text{Fil}^f(\mathcal{A})).
+$$
+\end{definition}
+
+\begin{lemma}
+\label{lemma-filtered-derived-functors}
+The functors $\text{gr}^p, \text{gr}, (\text{forget }F)$ induce
+canonical exact functors
+$$
+\text{gr}^p, \text{gr}, (\text{forget }F):
+DF(\mathcal{A})
+\longrightarrow
+D(\mathcal{A})
+$$
+which commute with the localization functors.
+\end{lemma}
+
+\begin{proof}
+This follows from the universal property of localization, see
+Lemma \ref{lemma-universal-property-localization},
+provided we can show that a filtered quasi-isomorphism is turned
+into a quasi-isomorphism by each of the functors
+$\text{gr}^p, \text{gr}, (\text{forget }F)$. This is true by definition
+for the first two. For the last one we have to do a little
+bit of work. Let $f : K^\bullet \to L^\bullet$ be a filtered
+quasi-isomorphism in $K(\text{Fil}^f(\mathcal{A}))$.
+Choose a distinguished triangle $(K^\bullet, L^\bullet, M^\bullet, f, g, h)$
+which contains $f$. Then $M^\bullet$ is filtered acyclic, see
+Lemma \ref{lemma-filtered-acyclic}.
+Hence by the corresponding lemma for $K(\mathcal{A})$ it suffices
+to show that a filtered acyclic complex is an acyclic complex if
+we forget the filtration.
+This follows from
+Homology, Lemma \ref{homology-lemma-filtered-acyclic}.
+\end{proof}
+
+\begin{definition}
+\label{definition-filtered-derived-bounded}
+Let $\mathcal{A}$ be an abelian category.
+The {\it bounded filtered derived category} $DF^b(\mathcal{A})$ is
+the full subcategory of $DF(\mathcal{A})$ with objects those $X$
+such that $\text{gr}(X) \in D^b(\mathcal{A})$.
+Similarly for the bounded below filtered derived category
+$DF^{+}(\mathcal{A})$ and the bounded above filtered derived category
+$DF^{-}(\mathcal{A})$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-filtered-complex-cohomology-bounded}
+Let $\mathcal{A}$ be an abelian category.
+Let $K^\bullet \in K(\text{Fil}^f(\mathcal{A}))$.
+\begin{enumerate}
+\item If $H^n(\text{gr}(K^\bullet)) = 0$ for all $n < a$, then there exists
+a filtered quasi-isomorphism $K^\bullet \to L^\bullet$ with
+$L^n = 0$ for all $n < a$.
+\item If $H^n(\text{gr}(K^\bullet)) = 0$ for all $n > b$, then there exists
+a filtered quasi-isomorphism $M^\bullet \to K^\bullet$ with
+$M^n = 0$ for all $n > b$.
+\item If $H^n(\text{gr}(K^\bullet)) = 0$ for all $|n| \gg 0$, then there
+exists a commutative diagram of morphisms of complexes
+$$
+\xymatrix{
+K^\bullet \ar[r] & L^\bullet \\
+M^\bullet \ar[u] \ar[r] & N^\bullet \ar[u]
+}
+$$
+where all the arrows are filtered quasi-isomorphisms, $L^\bullet$
+bounded below, $M^\bullet$ bounded above, and $N^\bullet$ a bounded
+complex.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Suppose that $H^n(\text{gr}(K^\bullet)) = 0$ for all $n < a$. By
+Homology, Lemma \ref{homology-lemma-filtered-acyclic}
+the sequence
+$$
+K^{a - 2} \xrightarrow{d^{a - 2}} K^{a - 1} \xrightarrow{d^{a - 1}} K^a
+$$
+is an exact sequence of objects of $\mathcal{A}$ and the morphisms
+$d^{a - 2}$ and $d^{a - 1}$ are strict. Hence
+$\Coim(d^{a - 1}) = \Im(d^{a - 1})$ in $\text{Fil}^f(\mathcal{A})$
+and the map $\text{gr}(\Im(d^{a - 1})) \to \text{gr}(K^a)$
+is injective with image equal to the image of
+$\text{gr}(K^{a - 1}) \to \text{gr}(K^a)$, see
+Homology, Lemma \ref{homology-lemma-characterize-strict}.
+This means that the map $K^\bullet \to \tau_{\geq a}K^\bullet$
+into the truncation
+$$
+\tau_{\geq a}K^\bullet =
+(\ldots \to 0 \to K^a/\Im(d^{a - 1}) \to K^{a + 1} \to \ldots)
+$$
+is a filtered quasi-isomorphism. This proves (1). The proof of (2)
+is dual to the proof of (1). Part (3) follows formally from (1) and (2).
+\end{proof}
+
+\noindent
+To state the following lemma denote
+$\text{FAc}^{+}(\mathcal{A})$, $\text{FAc}^{-}(\mathcal{A})$,
+resp.\ $\text{FAc}^b(\mathcal{A})$ the intersection of
+$K^{+}(\text{Fil}^f\mathcal{A})$, $K^{-}(\text{Fil}^f\mathcal{A})$,
+resp.\ $K^b(\text{Fil}^f\mathcal{A})$ with $\text{FAc}(\mathcal{A})$.
+Denote $\text{FQis}^{+}(\mathcal{A})$, $\text{FQis}^{-}(\mathcal{A})$,
+resp.\ $\text{FQis}^b(\mathcal{A})$ the intersection of
+$K^{+}(\text{Fil}^f\mathcal{A})$, $K^{-}(\text{Fil}^f\mathcal{A})$,
+resp.\ $K^b(\text{Fil}^f\mathcal{A})$ with $\text{FQis}(\mathcal{A})$.
+
+\begin{lemma}
+\label{lemma-filtered-bounded-derived}
+Let $\mathcal{A}$ be an abelian category. The subcategories
+$\text{FAc}^{+}(\mathcal{A})$, $\text{FAc}^{-}(\mathcal{A})$,
+resp.\ $\text{FAc}^b(\mathcal{A})$
+are strictly full saturated triangulated subcategories
+of $K^{+}(\text{Fil}^f\mathcal{A})$, $K^{-}(\text{Fil}^f\mathcal{A})$,
+resp.\ $K^b(\text{Fil}^f\mathcal{A})$.
+The corresponding saturated multiplicative systems (see
+Lemma \ref{lemma-operations})
+are the sets $\text{FQis}^{+}(\mathcal{A})$, $\text{FQis}^{-}(\mathcal{A})$,
+resp.\ $\text{FQis}^b(\mathcal{A})$.
+\begin{enumerate}
+\item The kernel of the functor
+$K^{+}(\text{Fil}^f\mathcal{A}) \to DF^{+}(\mathcal{A})$
+is $\text{FAc}^{+}(\mathcal{A})$ and this induces an equivalence
+of triangulated categories
+$$
+K^{+}(\text{Fil}^f\mathcal{A})/\text{FAc}^{+}(\mathcal{A}) =
+\text{FQis}^{+}(\mathcal{A})^{-1}K^{+}(\text{Fil}^f\mathcal{A})
+\longrightarrow
+DF^{+}(\mathcal{A})
+$$
+\item The kernel of the functor
+$K^{-}(\text{Fil}^f\mathcal{A}) \to DF^{-}(\mathcal{A})$
+is $\text{FAc}^{-}(\mathcal{A})$ and this induces an equivalence
+of triangulated categories
+$$
+K^{-}(\text{Fil}^f\mathcal{A})/\text{FAc}^{-}(\mathcal{A}) =
+\text{FQis}^{-}(\mathcal{A})^{-1}K^{-}(\text{Fil}^f\mathcal{A})
+\longrightarrow
+DF^{-}(\mathcal{A})
+$$
+\item The kernel of the functor
+$K^b(\text{Fil}^f\mathcal{A}) \to DF^b(\mathcal{A})$
+is $\text{FAc}^b(\mathcal{A})$ and this induces an equivalence
+of triangulated categories
+$$
+K^b(\text{Fil}^f\mathcal{A})/\text{FAc}^b(\mathcal{A}) =
+\text{FQis}^b(\mathcal{A})^{-1}K^b(\text{Fil}^f\mathcal{A})
+\longrightarrow
+DF^b(\mathcal{A})
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from the results above, in particular
+Lemma \ref{lemma-filtered-complex-cohomology-bounded},
+by exactly the same arguments as used in the proof of
+Lemma \ref{lemma-bounded-derived}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Derived functors in general}
+\label{section-derived-functors}
+
+\noindent
+A reference for this section is Deligne's expos\'e XVII in \cite{SGA4}.
+A very general notion of right and left derived functors exists where
+we have an exact functor between triangulated categories, a multiplicative
+system in the source category and we want to find the ``correct'' extension
+of the exact functor to the localized category.
+
+\begin{situation}
+\label{situation-derived-functor}
+Here $F : \mathcal{D} \to \mathcal{D}'$ is an exact functor of triangulated
+categories and $S$ is a saturated multiplicative
+system in $\mathcal{D}$ compatible with the structure
+of triangulated category on $\mathcal{D}$.
+\end{situation}
+
+\noindent
+Let $X \in \Ob(\mathcal{D})$. Recall from
+Categories, Remark \ref{categories-remark-left-localization-morphisms-colimit}
+the filtered category $X/S$ of arrows $s : X \to X'$ in $S$ with source $X$.
+Dually, in
+Categories, Remark \ref{categories-remark-right-localization-morphisms-colimit}
+we defined the cofiltered category $S/X$ of arrows $s : X' \to X$ in $S$
+with target $X$.
+
+\begin{definition}
+\label{definition-right-derived-functor-defined}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+Let $X \in \Ob(\mathcal{D})$.
+\begin{enumerate}
+\item we say the {\it right derived functor $RF$ is defined at}
+$X$ if the ind-object
+$$
+(X/S) \longrightarrow \mathcal{D}', \quad
+(s : X \to X') \longmapsto F(X')
+$$
+is essentially constant\footnote{For a discussion of when an ind-object
+or pro-object of a category is essentially constant we refer to
+Categories, Section \ref{categories-section-essentially-constant}.};
+in this case the value
+$Y$ in $\mathcal{D}'$ is called the {\it value of $RF$ at $X$}.
+\item we say the {\it left derived functor $LF$ is defined at} $X$
+if the pro-object
+$$
+(S/X) \longrightarrow \mathcal{D}', \quad
+(s: X' \to X) \longmapsto F(X')
+$$
+is essentially constant; in this case the value $Y$ in $\mathcal{D}'$
+is called the {\it value of $LF$ at $X$}.
+\end{enumerate}
+By abuse of notation we often denote the values simply
+$RF(X)$ or $LF(X)$.
+\end{definition}
+
+\noindent
+It will turn out that the full subcategory of $\mathcal{D}$ consisting
+of objects where $RF$ is defined is a triangulated subcategory, and
+$RF$ will define a functor on this subcategory which transforms morphisms
+of $S$ into isomorphisms.
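+
+\medskip\noindent
+If $RF$ is defined at $X$, then in particular the colimit
+$$
+RF(X) = \colim_{s : X \to X'} F(X')
+$$
+over the filtered category $X/S$ exists and computes the value; dually,
+$LF(X) = \lim_{s : X' \to X} F(X')$ when $LF$ is defined at $X$.
+This description is used in the proof of
+Lemma \ref{lemma-derived-functor} below.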
+
+\begin{lemma}
+\label{lemma-derived-functor}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+Let $f : X \to Y$ be a morphism of $\mathcal{D}$.
+\begin{enumerate}
+\item If $RF$ is defined at $X$ and $Y$ then there exists a unique
+morphism $RF(f) : RF(X) \to RF(Y)$ between the values such that
+for any commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f \ar[r]_s & X' \ar[d]^{f'} \\
+Y \ar[r]^{s'} & Y'
+}
+$$
+with $s, s' \in S$ the diagram
+$$
+\xymatrix{
+F(X) \ar[d] \ar[r] & F(X') \ar[d] \ar[r] & RF(X) \ar[d] \\
+F(Y) \ar[r] & F(Y') \ar[r] & RF(Y)
+}
+$$
+commutes.
+\item If $LF$ is defined at $X$ and $Y$ then there exists a unique
+morphism $LF(f) : LF(X) \to LF(Y)$ between the values such that
+for any commutative diagram
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_s & X \ar[d]^f \\
+Y' \ar[r]^{s'} & Y
+}
+$$
+with $s, s'$ in $S$ the diagram
+$$
+\xymatrix{
+LF(X) \ar[d] \ar[r] & F(X') \ar[d] \ar[r] & F(X) \ar[d] \\
+LF(Y) \ar[r] & F(Y') \ar[r] & F(Y)
+}
+$$
+commutes.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) holds if we only assume that the colimits
+$$
+RF(X) = \colim_{s : X \to X'} F(X')
+\quad\text{and}\quad
+RF(Y) = \colim_{s' : Y \to Y'} F(Y')
+$$
+exist. Namely, to give a morphism $RF(X) \to RF(Y)$ between the colimits
+is the same thing as giving for each $s : X \to X'$ in $\Ob(X/S)$
+a morphism $F(X') \to RF(Y)$ compatible with morphisms in the category
+$X/S$. To get the morphism we choose a commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f \ar[r]_s & X' \ar[d]^{f'} \\
+Y \ar[r]^{s'} & Y'
+}
+$$
+with $s, s'$ in $S$ as is possible by MS2 and we set
+$F(X') \to RF(Y)$ equal to the composition $F(X') \to F(Y') \to RF(Y)$.
+To see that this is independent of the choice of the diagram above use
+MS3. Details omitted. The proof of (2) is dual.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-inverts}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+Let $s : X \to Y$ be an element of $S$.
+\begin{enumerate}
+\item $RF$ is defined at $X$ if and only if it is defined at $Y$.
+In this case the map $RF(s) : RF(X) \to RF(Y)$ between values
+is an isomorphism.
+\item $LF$ is defined at $X$ if and only if it is defined at $Y$.
+In this case the map $LF(s) : LF(X) \to LF(Y)$ between values
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-shift}
+\begin{slogan}
+Derived functors are compatible with shifts
+\end{slogan}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+Let $X$ be an object of $\mathcal{D}$ and $n \in \mathbf{Z}$.
+\begin{enumerate}
+\item $RF$ is defined at $X$ if and only if it is defined at $X[n]$.
+In this case there is a canonical isomorphism
+$RF(X)[n] \to RF(X[n])$ between values.
+\item $LF$ is defined at $X$ if and only if it is defined at $X[n]$.
+In this case there is a canonical isomorphism
+$LF(X)[n] \to LF(X[n])$ between values.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-2-out-of-3-defined}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+Let $(X, Y, Z, f, g, h)$ be a distinguished triangle of $\mathcal{D}$.
+If $RF$ is defined at two out of three of $X, Y, Z$, then it is defined
+at the third. Moreover, in this case
+$$
+(RF(X), RF(Y), RF(Z), RF(f), RF(g), RF(h))
+$$
+is a distinguished triangle in $\mathcal{D}'$. Similarly for $LF$.
+\end{lemma}
+
+\begin{proof}
+Say $RF$ is defined at $X, Y$ with values $A, B$.
+Let $RF(f) : A \to B$ be the induced morphism, see
+Lemma \ref{lemma-derived-functor}.
+We may choose a distinguished triangle
+$(A, B, C, RF(f), b, c)$
+in $\mathcal{D}'$. We claim that $C$ is a value of $RF$ at $Z$.
+
+\medskip\noindent
+To see this pick $s : X \to X'$ in $S$ such that there exists a morphism
+$\alpha : A \to F(X')$ as in
+Categories,
+Definition \ref{categories-definition-essentially-constant-diagram}.
+We may choose a commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f \ar[r]_s & X' \ar[d]^{f'} \\
+Y \ar[r]^{s'} & Y'
+}
+$$
+with $s' \in S$ by MS2. Using that $Y/S$ is filtered we can (after replacing
+$s'$ by some $s'' : Y \to Y''$ in $S$) assume that there exists
+a morphism $\beta : B \to F(Y')$ as in
+Categories,
+Definition \ref{categories-definition-essentially-constant-diagram}.
+Picture
+$$
+\xymatrix{
+A \ar[d]_{RF(f)} \ar[r]_-\alpha &
+F(X') \ar[r] \ar[d]^{F(f')} &
+A \ar[d]^{RF(f)} \\
+B \ar[r]^-\beta & F(Y') \ar[r] & B
+}
+$$
+It may not be true that the left square commutes, but the outer and
+right squares commute.
+The assumption that the ind-object $\{F(Y')\}_{s' : Y \to Y'}$
+is essentially constant means that there exists an $s'' : Y \to Y''$
+in $S$ and a morphism $h : Y' \to Y''$ such that $s'' = h \circ s'$ and
+such that $F(h)$ is equal to the composition
+$F(Y') \to B \to F(Y') \to F(Y'')$. Hence
+after replacing $Y'$ by $Y''$ and $\beta$ by $F(h) \circ \beta$ the
+diagram will commute (by direct computation with arrows).
+
+\medskip\noindent
+Using MS6 choose a morphism of triangles
+$$
+(s, s', s'') : (X, Y, Z, f, g, h) \longrightarrow (X', Y', Z', f', g', h')
+$$
+with $s'' \in S$. By TR3 choose a morphism of triangles
+$$
+(\alpha, \beta, \gamma) :
+(A, B, C, RF(f), b, c)
+\longrightarrow
+(F(X'), F(Y'), F(Z'), F(f'), F(g'), F(h'))
+$$
+
+\medskip\noindent
+By
+Lemma \ref{lemma-derived-inverts}
+it suffices to prove that $RF(Z')$ is defined and has value $C$.
+Consider the category $\mathcal{I}$ of
+Lemma \ref{lemma-limit-triangles}
+of triangles
+$$
+\mathcal{I} =
+\{(t, t', t'') : (X', Y', Z', f', g', h') \to (X'', Y'', Z'', f'', g'', h'')
+\mid (t, t', t'') \in S\}
+$$
+Showing that the system $F(Z'')$ is essentially constant over the category
+$Z'/S$ is equivalent to showing that the system $F(Z'')$ is essentially
+constant over $\mathcal{I}$ because $\mathcal{I} \to Z'/S$ is cofinal, see
+Categories, Lemma \ref{categories-lemma-cofinal-essentially-constant}
+(cofinality is proven in Lemma \ref{lemma-limit-triangles}).
+For any object $W$ in $\mathcal{D}'$ we
+consider the diagram
+$$
+\xymatrix{
+\colim_\mathcal{I} \Mor_{\mathcal{D}'}(W, F(X'')) &
+\Mor_{\mathcal{D}'}(W, A) \ar[l] \\
+\colim_\mathcal{I} \Mor_{\mathcal{D}'}(W, F(Y'')) \ar[u] &
+\Mor_{\mathcal{D}'}(W, B) \ar[u] \ar[l] \\
+\colim_\mathcal{I} \Mor_{\mathcal{D}'}(W, F(Z'')) \ar[u] &
+\Mor_{\mathcal{D}'}(W, C) \ar[u] \ar[l] \\
+\colim_\mathcal{I} \Mor_{\mathcal{D}'}(W, F(X''[1])) \ar[u] &
+\Mor_{\mathcal{D}'}(W, A[1]) \ar[u] \ar[l] \\
+\colim_\mathcal{I} \Mor_{\mathcal{D}'}(W, F(Y''[1])) \ar[u] &
+\Mor_{\mathcal{D}'}(W, B[1]) \ar[u] \ar[l]
+}
+$$
+where the horizontal arrows are given by composing with
+$(\alpha, \beta, \gamma)$. Since filtered colimits are exact
+(Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}) the left column
+is an exact sequence. Thus the $5$ lemma
+(Homology, Lemma \ref{homology-lemma-five-lemma}) tells us that the map
+$$
+\colim_\mathcal{I} \Mor_{\mathcal{D}'}(W, F(Z''))
+\longrightarrow
+\Mor_{\mathcal{D}'}(W, C)
+$$
+is bijective. Choose an object
+$(t, t', t'') : (X', Y', Z') \to (X'', Y'', Z'')$ of $\mathcal{I}$.
+Applying what we just showed to $W = F(Z'')$ and the element
+$\text{id}_{F(Z'')}$ of the colimit we find a unique morphism
+$c_{(X'', Y'', Z'')} : F(Z'') \to C$ such that for some
+$(X'', Y'', Z'') \to (X''', Y''', Z''')$ in $\mathcal{I}$
+$$
+F(Z'') \xrightarrow{c_{(X'', Y'', Z'')}} C \xrightarrow{\gamma}
+F(Z') \to F(Z'') \to F(Z''')
+\quad\text{equals}\quad
+F(Z'') \to F(Z''')
+$$
+The family of morphisms $c_{(X'', Y'', Z'')}$ form an element $c$ of
+$\lim_\mathcal{I} \Mor_{\mathcal{D}'}(F(Z''), C)$ by uniqueness
+(computation omitted). Finally, we show that
+$\colim_\mathcal{I} F(Z'') = C$ via the morphisms $c_{(X'', Y'', Z'')}$
+which will finish the proof by
+Categories, Lemma \ref{categories-lemma-characterize-essentially-constant-ind}.
+Namely, let $W$ be an object of $\mathcal{D}'$ and let
+$d_{(X'', Y'', Z'')} : F(Z'') \to W$ be a family of maps corresponding
+to an element of $\lim_\mathcal{I} \Mor_{\mathcal{D}'}(F(Z''), W)$.
+If $d_{(X', Y', Z')} \circ \gamma = 0$, then for every object
+$(X'', Y'', Z'')$ of $\mathcal{I}$ the morphism $d_{(X'', Y'', Z'')}$
+is zero by the existence of $c_{(X'', Y'', Z'')}$ and the
+morphism $(X'', Y'', Z'') \to (X''', Y''', Z''')$ in $\mathcal{I}$
+satisfying the displayed equality above. Hence the map
+$$
+\lim_\mathcal{I} \Mor_{\mathcal{D}'}(F(Z''), W)
+\longrightarrow
+\Mor_{\mathcal{D}'}(C, W)
+$$
+(coming from precomposing by $\gamma$) is injective. However, it is
+also surjective because the element $c$ gives a left inverse. We conclude
+that $C$ is the colimit by
+Categories, Remark \ref{categories-remark-limit-colim}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-sum-defined}
+Assumptions and notation as in Situation \ref{situation-derived-functor}.
+Let $X, Y$ be objects of $\mathcal{D}$.
+\begin{enumerate}
+\item If $RF$ is defined at $X$ and $Y$, then $RF$ is defined at $X \oplus Y$.
+\item If $\mathcal{D}'$ is Karoubian and $RF$ is defined at $X \oplus Y$,
+then $RF$ is defined at both $X$ and $Y$.
+\end{enumerate}
+In either case we have $RF(X \oplus Y) = RF(X) \oplus RF(Y)$.
+Similarly for $LF$.
+\end{lemma}
+
+\begin{proof}
+If $RF$ is defined at $X$ and $Y$, then the distinguished triangle
+$X \to X \oplus Y \to Y \to X[1]$ (Lemma \ref{lemma-split}) and
+Lemma \ref{lemma-2-out-of-3-defined}
+shows that $RF$ is defined at $X \oplus Y$ and that we
+have a distinguished triangle
+$RF(X) \to RF(X \oplus Y) \to RF(Y) \to RF(X)[1]$.
+Applying Lemma \ref{lemma-split} to this once more we find
+that $RF(X \oplus Y) = RF(X) \oplus RF(Y)$.
+This proves (1) and the final assertion.
+
+\medskip\noindent
+Conversely, assume that $RF$ is defined at $X \oplus Y$ and that $\mathcal{D}'$
+is Karoubian. Since $S$ is a saturated system, $S$ is the set of arrows which
+become invertible under the additive localization functor
+$Q : \mathcal{D} \to S^{-1}\mathcal{D}$, see
+Categories, Lemma \ref{categories-lemma-what-gets-inverted}.
+Thus for any $s : X \to X'$ and $s' : Y \to Y'$ in $S$ the morphism
+$s \oplus s' : X \oplus Y \to X' \oplus Y'$ is an element of $S$.
+In this way we obtain a functor
+$$
+X/S \times Y/S \longrightarrow (X \oplus Y)/S
+$$
+Recall that the categories $X/S, Y/S, (X \oplus Y)/S$ are filtered
+(Categories, Remark
+\ref{categories-remark-left-localization-morphisms-colimit}).
+By Categories, Lemma \ref{categories-lemma-essentially-constant-over-product}
+$X/S \times Y/S$ is filtered and
+$F|_{X/S} : X/S \to \mathcal{D}'$ (resp.\ $F|_{Y/S} : Y/S \to \mathcal{D}'$)
+is essentially constant if and only if
+$F|_{X/S} \circ \text{pr}_1 : X/S \times Y/S \to \mathcal{D}'$
+(resp.\ $F|_{Y/S} \circ \text{pr}_2 : X/S \times Y/S \to \mathcal{D}'$)
+is essentially constant. Below we will show that the displayed functor
+is cofinal, hence by
+Categories, Lemma \ref{categories-lemma-cofinal-essentially-constant}
+we see that $F|_{(X \oplus Y)/S}$ is essentially constant implies that
+$F|_{X/S} \circ \text{pr}_1 \oplus F|_{Y/S} \circ \text{pr}_2 :
+X/S \times Y/S \to \mathcal{D}'$
+is essentially constant. By Homology, Lemma
+\ref{homology-lemma-direct-sum-from-product-essentially-constant}
+(and this is where we use that $\mathcal{D}'$ is Karoubian)
+we see that
+$F|_{X/S} \circ \text{pr}_1 \oplus F|_{Y/S} \circ \text{pr}_2$
+being essentially constant implies
+$F|_{X/S} \circ \text{pr}_1$ and
+$F|_{Y/S} \circ \text{pr}_2$ are essentially constant proving that $RF$ is
+defined at $X$ and $Y$.
+
+\medskip\noindent
+Proof that the displayed functor is cofinal.
+To do this pick any $t : X \oplus Y \to Z$ in $S$.
+Using MS2 we can find morphisms $Z \to X'$, $Z \to Y'$
+and $s : X \to X'$, $s' : Y \to Y'$ in $S$ such that
+$$
+\xymatrix{
+X \ar[d]^s & X \oplus Y \ar[d] \ar[l] \ar[r] & Y \ar[d]_{s'} \\
+X' & Z \ar[l] \ar[r] & Y'
+}
+$$
+commutes. This proves there is a map $Z \to X' \oplus Y'$ in
+$(X \oplus Y)/S$, i.e., we get part (1) of Categories, Definition
+\ref{categories-definition-cofinal}. To prove part (2) it suffices
+to prove that given $t : X \oplus Y \to Z$ and morphisms
+$s_i \oplus s'_i : Z \to X'_i \oplus Y'_i$, $i = 1, 2$ in $(X \oplus Y)/S$
+we can find morphisms $a : X'_1 \to X'$, $b : X'_2 \to X'$,
+$c : Y'_1 \to Y'$, $d : Y'_2 \to Y'$ in $S$ such that
+$a \circ s_1 = b \circ s_2$ and $c \circ s'_1 = d \circ s'_2$.
+To do this we first choose any $X'$ and $Y'$ and maps $a, b, c, d$
+in $S$; this is possible as $X/S$ and $Y/S$ are filtered. Then the
+two maps $a \circ s_1, b \circ s_2 : Z \to X'$ become equal in
+$S^{-1}\mathcal{D}$. Hence we can find a morphism
+$X' \to X''$ in $S$ equalizing them. Similarly we find $Y' \to Y''$ in $S$
+equalizing $c \circ s'_1$ and $d \circ s'_2$. Replacing $X'$ by $X''$ and
+$Y'$ by $Y''$ we get $a \circ s_1 = b \circ s_2$ and
+$c \circ s'_1 = d \circ s'_2$.
+
+\medskip\noindent
+The proofs of the corresponding statements for $LF$ are dual.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-derived-functor}
+Assumptions and notation as in Situation \ref{situation-derived-functor}.
+\begin{enumerate}
+\item The full subcategory $\mathcal{E}$ of $\mathcal{D}$ consisting of
+objects at which $RF$ is defined is a strictly full triangulated
+subcategory of $\mathcal{D}$.
+\item We obtain an exact functor
+$RF : \mathcal{E} \longrightarrow \mathcal{D}'$
+of triangulated categories.
+\item Elements of $S$ with either source or target
+in $\mathcal{E}$ are morphisms of $\mathcal{E}$.
+\item The functor $S_\mathcal{E}^{-1}\mathcal{E} \to S^{-1}\mathcal{D}$
+is a fully faithful exact functor of triangulated categories.
+\item Any element of $S_\mathcal{E} = \text{Arrows}(\mathcal{E}) \cap S$
+is mapped to an isomorphism by $RF$.
+\item We obtain an exact functor
+$$
+RF : S_\mathcal{E}^{-1}\mathcal{E} \longrightarrow \mathcal{D}'.
+$$
+\item If $\mathcal{D}'$ is Karoubian, then $\mathcal{E}$ is a saturated
+triangulated subcategory of $\mathcal{D}$.
+\end{enumerate}
+A similar result holds for $LF$.
+\end{proposition}
+
+\begin{proof}
+Since $S$ is saturated it contains all isomorphisms (see remark
+following Categories, Definition
+\ref{categories-definition-saturated-multiplicative-system}). Hence
+(1) follows from Lemmas \ref{lemma-derived-inverts},
+\ref{lemma-2-out-of-3-defined}, and
+\ref{lemma-derived-shift}. We get (2) from
+Lemmas \ref{lemma-derived-functor}, \ref{lemma-derived-shift}, and
+\ref{lemma-2-out-of-3-defined}. We get (3) from
+Lemma \ref{lemma-derived-inverts}. The fully faithfulness in (4) follows
+from (3) and the definitions. The fact that
+$S_\mathcal{E}^{-1}\mathcal{E} \to S^{-1}\mathcal{D}$ is exact
+follows from the fact that a triangle in $S_\mathcal{E}^{-1}\mathcal{E}$
+is distinguished if and only if it is isomorphic to the image of a
+distinguished triangle in $\mathcal{E}$, see proof of
+Proposition \ref{proposition-construct-localization}.
+Part (5) follows from Lemma \ref{lemma-derived-inverts}.
+The factorization of $RF : \mathcal{E} \to \mathcal{D}'$
+through an exact functor $S_\mathcal{E}^{-1}\mathcal{E} \to \mathcal{D}'$
+follows from Lemma \ref{lemma-universal-property-localization}.
+Part (7) follows from Lemma \ref{lemma-direct-sum-defined}.
+\end{proof}
+
+\noindent
+Proposition \ref{proposition-derived-functor}
+tells us that $RF$ lives on a maximal strictly full
+triangulated subcategory of $S^{-1}\mathcal{D}$ and is an exact functor
+on this triangulated category. Picture:
+$$
+\xymatrix{
+\mathcal{D} \ar[d]_Q \ar[rrr]_F & & & \mathcal{D}' \\
+S^{-1}\mathcal{D} & &
+S_\mathcal{E}^{-1}\mathcal{E}
+\ar[ll]_{\text{fully faithful}}^{\text{exact}} \ar[ur]_{RF}
+}
+$$
+
+\begin{definition}
+\label{definition-everywhere-defined}
+In
+Situation \ref{situation-derived-functor}.
+We say $F$ is {\it right derivable}, or that $RF$ is {\it everywhere defined},
+if $RF$ is defined at every object of $\mathcal{D}$.
+We say $F$ is {\it left derivable}, or that $LF$ is {\it everywhere defined},
+if $LF$ is defined at every object of $\mathcal{D}$.
+\end{definition}
+
+\noindent
+In this case we obtain a right (resp.\ left) derived functor
+\begin{equation}
+\label{equation-everywhere}
+RF : S^{-1}\mathcal{D} \longrightarrow \mathcal{D}',
+\quad\text{(resp. }
+LF : S^{-1}\mathcal{D} \longrightarrow \mathcal{D}'),
+\end{equation}
+see
+Proposition \ref{proposition-derived-functor}.
+In most interesting situations it is not the case that $RF \circ Q$ is
+equal to $F$. In fact, it might happen that the canonical map
+$F(X) \to RF(X)$ is never an isomorphism. In practice this does not happen,
+because we typically only know how to prove $F$ is right derivable by
+showing that $RF$ can be computed by evaluating $F$ at judiciously chosen
+objects of the triangulated category $\mathcal{D}$. This warrants
+a definition.
+
+\begin{definition}
+\label{definition-computes}
+In
+Situation \ref{situation-derived-functor}.
+\begin{enumerate}
+\item An object $X$ of $\mathcal{D}$ {\it computes} $RF$ if $RF$ is defined
+at $X$ and the canonical map $F(X) \to RF(X)$ is an isomorphism.
+\item An object $X$ of $\mathcal{D}$ {\it computes} $LF$ if $LF$ is defined
+at $X$ and the canonical map $LF(X) \to F(X)$ is an isomorphism.
+\end{enumerate}
+\end{definition}
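+
+\medskip\noindent
+In other words, $X$ computes $RF$ if and only if the ind-object
+$$
+(X/S) \longrightarrow \mathcal{D}', \quad
+(s : X \to X') \longmapsto F(X')
+$$
+is essentially constant with value $F(X)$, via the canonical map
+$F(X) \to RF(X)$ coming from $\text{id}_X \in \Ob(X/S)$.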
+
+\begin{lemma}
+\label{lemma-computes-shift}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+Let $X$ be an object of $\mathcal{D}$ and $n \in \mathbf{Z}$.
+\begin{enumerate}
+\item $X$ computes $RF$ if and only if $X[n]$ computes $RF$.
+\item $X$ computes $LF$ if and only if $X[n]$ computes $LF$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-2-out-of-3-computes}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+Let $(X, Y, Z, f, g, h)$ be a distinguished triangle of $\mathcal{D}$.
+If $X, Y$ compute $RF$ then so does $Z$. Similarly for $LF$.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-2-out-of-3-defined}
+we know that $RF$ is defined at $Z$ and that $RF$ applied to the
+triangle produces a distinguished triangle.
+Consider the morphism of distinguished triangles
+$$
+\xymatrix{
+(F(X), F(Y), F(Z), F(f), F(g), F(h)) \ar[d] \\
+(RF(X), RF(Y), RF(Z), RF(f), RF(g), RF(h))
+}
+$$
+Two out of three maps are isomorphisms, hence so is the third.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-sum-computes}
+Assumptions and notation as in Situation \ref{situation-derived-functor}.
+Let $X, Y$ be objects of $\mathcal{D}$. If $X \oplus Y$ computes $RF$, then
+$X$ and $Y$ compute $RF$. Similarly for $LF$.
+\end{lemma}
+
+\begin{proof}
+If $X \oplus Y$ computes $RF$, then $RF(X \oplus Y) = F(X) \oplus F(Y)$.
+In the proof of Lemma \ref{lemma-direct-sum-defined} we have seen that
+the functor $X/S \times Y/S \to (X \oplus Y)/S$, $(s, s') \mapsto s \oplus s'$
+is cofinal. We will use this without further mention. Let $s : X \to X'$ be
+an element of $S$. Then $F(X) \to F(X')$ has a section, namely,
+$$
+F(X') \to F(X' \oplus Y) \to RF(X' \oplus Y) =
RF(X \oplus Y) = F(X) \oplus F(Y) \to F(X)
+$$
+where we have used Lemma \ref{lemma-derived-inverts}.
+Hence $F(X') = F(X) \oplus E$ for some object $E$ of $\mathcal{D}'$
+such that $E \to F(X' \oplus Y) \to RF(X'\oplus Y) = RF(X \oplus Y)$
+is zero (Lemma \ref{lemma-when-split}).
+Because $RF$ is defined at $X' \oplus Y$ with value
+$F(X) \oplus F(Y)$ we can find a morphism $t : X' \oplus Y \to Z$
+of $S$ such that $F(t)$ annihilates $E$. We may assume
+$Z = X'' \oplus Y''$ and $t = t' \oplus t''$ with $t', t'' \in S$.
+Then $F(t')$ annihilates $E$. It follows that $F$ is essentially constant
+on $X/S$ with value $F(X)$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-existence-computes}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+\begin{enumerate}
+\item If for every object $X \in \Ob(\mathcal{D})$
+there exists an arrow $s : X \to X'$ in $S$ such that $X'$ computes
+$RF$, then $RF$ is everywhere defined.
+\item If for every object $X \in \Ob(\mathcal{D})$
+there exists an arrow $s : X' \to X$ in $S$ such that $X'$ computes
+$LF$, then $LF$ is everywhere defined.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is clear from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-find-existence-computes}
+Assumptions and notation as in
+Situation \ref{situation-derived-functor}.
+If there exists a subset $\mathcal{I} \subset \Ob(\mathcal{D})$
+such that
+\begin{enumerate}
+\item for all $X \in \Ob(\mathcal{D})$
+there exists $s : X \to X'$ in $S$ with $X' \in \mathcal{I}$,
+and
+\item for every arrow $s : X \to X'$ in $S$ with $X, X' \in \mathcal{I}$
+the map $F(s) : F(X) \to F(X')$ is an isomorphism,
+\end{enumerate}
+then $RF$ is everywhere defined and every $X \in \mathcal{I}$
+computes $RF$. Dually, if there exists a subset
+$\mathcal{P} \subset \Ob(\mathcal{D})$
+such that
+\begin{enumerate}
+\item for all $X \in \Ob(\mathcal{D})$
+there exists $s : X' \to X$ in $S$ with $X' \in \mathcal{P}$,
+and
+\item for every arrow $s : X \to X'$ in $S$ with $X, X' \in \mathcal{P}$
+the map $F(s) : F(X) \to F(X')$ is an isomorphism,
+\end{enumerate}
+then $LF$ is everywhere defined and every $X \in \mathcal{P}$
+computes $LF$.
+\end{lemma}
+
+\begin{proof}
+Let $X$ be an object of $\mathcal{D}$.
+Assumption (1) implies that the arrows $s : X \to X'$ in $S$ with
+$X' \in \mathcal{I}$ are cofinal in the category $X/S$. Assumption
+(2) implies that $F$ is constant on this cofinal subcategory.
+Clearly this implies that $F : (X/S) \to \mathcal{D}'$ is essentially
+constant with value $F(X')$ for any $s : X \to X'$ in $S$
with $X' \in \mathcal{I}$. The statement for $LF$ follows by the dual argument.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-derived-functors-general}
+Let $\mathcal{A}, \mathcal{B}, \mathcal{C}$ be triangulated categories.
+Let $S$, resp.\ $S'$ be a saturated multiplicative system in
+$\mathcal{A}$, resp.\ $\mathcal{B}$ compatible with the triangulated structure.
+Let $F : \mathcal{A} \to \mathcal{B}$ and $G : \mathcal{B} \to \mathcal{C}$
+be exact functors. Denote $F' : \mathcal{A} \to (S')^{-1}\mathcal{B}$ the
+composition of $F$ with the localization functor.
+\begin{enumerate}
+\item If $RF'$, $RG$, $R(G \circ F)$ are everywhere defined, then there
+is a canonical transformation of functors
+$t : R(G \circ F) \longrightarrow RG \circ RF'$.
+\item If $LF'$, $LG$, $L(G \circ F)$ are everywhere defined, then there
+is a canonical transformation of functors
+$t : LG \circ LF' \to L(G \circ F)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In this proof we try to be careful. Hence let us think of
+the derived functors as the functors
+$$
+RF' : S^{-1}\mathcal{A} \to (S')^{-1}\mathcal{B}, \quad
+R(G \circ F) : S^{-1}\mathcal{A} \to \mathcal{C}, \quad
+RG : (S')^{-1}\mathcal{B} \to \mathcal{C}.
+$$
+Let us denote
+$Q_A : \mathcal{A} \to S^{-1}\mathcal{A}$ and
+$Q_B : \mathcal{B} \to (S')^{-1}\mathcal{B}$
+the localization functors. Then $F' = Q_B \circ F$. Note that for
+every object $Y$ of $\mathcal{B}$ there is a canonical map
+$$
+G(Y) \longrightarrow RG(Q_B(Y))
+$$
+in other words, there is a transformation of functors
+$t' : G \to RG \circ Q_B$. Let $X$ be an object of $\mathcal{A}$.
+We have
+\begin{align*}
+R(G \circ F)(Q_A(X))
+& = \colim_{s : X \to X' \in S} G(F(X')) \\
+& \xrightarrow{t'} \colim_{s : X \to X' \in S} RG(Q_B(F(X'))) \\
+& = \colim_{s : X \to X' \in S} RG(F'(X')) \\
+& = RG(\colim_{s : X \to X' \in S} F'(X')) \\
+& = RG(RF'(X)).
+\end{align*}
+The system $F'(X')$ is essentially constant in the category
+$(S')^{-1}\mathcal{B}$. Hence we may pull the colimit inside the
+functor $RG$ in the third equality of the diagram above, see
+Categories, Lemma \ref{categories-lemma-image-essentially-constant}
and its proof. We omit the proof that this defines a transformation
+of functors. The case of left derived functors is similar.
+\end{proof}
+
+
+
+
+\section{Derived functors on derived categories}
+\label{section-derived-functors-classical}
+
+\noindent
+In practice derived functors come about most often when given an
+additive functor between abelian categories.
+
+\begin{situation}
+\label{situation-classical}
+Here $F : \mathcal{A} \to \mathcal{B}$ is an additive functor between
+abelian categories. This induces exact functors
+$$
+F : K(\mathcal{A}) \to K(\mathcal{B}), \quad
+K^{+}(\mathcal{A}) \to K^{+}(\mathcal{B}), \quad
+K^{-}(\mathcal{A}) \to K^{-}(\mathcal{B}).
+$$
+See Lemma \ref{lemma-additive-exact-homotopy-category}.
+We also denote $F$ the composition $K(\mathcal{A}) \to D(\mathcal{B})$,
+$K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$, and
+$K^{-}(\mathcal{A}) \to D^-(\mathcal{B})$ of $F$ with the localization
+functor $K(\mathcal{B}) \to D(\mathcal{B})$, etc. This situation leads
+to four derived functors we will consider in the following.
+\begin{enumerate}
+\item The right derived functor of
+$F : K(\mathcal{A}) \to D(\mathcal{B})$
+relative to the multiplicative system $\text{Qis}(\mathcal{A})$.
+\item The right derived functor of
+$F : K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$
+relative to the multiplicative system $\text{Qis}^{+}(\mathcal{A})$.
+\item The left derived functor of
+$F : K(\mathcal{A}) \to D(\mathcal{B})$
+relative to the multiplicative system $\text{Qis}(\mathcal{A})$.
+\item The left derived functor of
+$F : K^{-}(\mathcal{A}) \to D^{-}(\mathcal{B})$
+relative to the multiplicative system $\text{Qis}^-(\mathcal{A})$.
+\end{enumerate}
+Each of these cases is an example of
+Situation \ref{situation-derived-functor}.
+\end{situation}
+
+\noindent
+Some of the ambiguity that may arise is alleviated by the following.
+
+\begin{lemma}
+\label{lemma-irrelevant}
+In
+Situation \ref{situation-classical}.
+\begin{enumerate}
+\item Let $X$ be an object of $K^{+}(\mathcal{A})$.
+The right derived functor of $K(\mathcal{A}) \to D(\mathcal{B})$
+is defined at $X$ if and only if the right derived functor of
+$K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is defined at $X$.
+Moreover, the values are canonically isomorphic.
+\item Let $X$ be an object of $K^{+}(\mathcal{A})$.
+Then $X$ computes the right derived functor of
+$K(\mathcal{A}) \to D(\mathcal{B})$
+if and only if $X$ computes the right derived functor of
+$K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$.
+\item Let $X$ be an object of $K^{-}(\mathcal{A})$.
+The left derived functor of $K(\mathcal{A}) \to D(\mathcal{B})$
+is defined at $X$ if and only if the left derived functor of
+$K^{-}(\mathcal{A}) \to D^{-}(\mathcal{B})$ is defined at $X$.
+Moreover, the values are canonically isomorphic.
+\item Let $X$ be an object of $K^{-}(\mathcal{A})$.
+Then $X$ computes the left derived functor of
+$K(\mathcal{A}) \to D(\mathcal{B})$ if and only if $X$ computes
+the left derived functor of $K^{-}(\mathcal{A}) \to D^{-}(\mathcal{B})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $X$ be an object of $K^{+}(\mathcal{A})$.
+Consider a quasi-isomorphism $s : X \to X'$ in $K(\mathcal{A})$.
+By
+Lemma \ref{lemma-complex-cohomology-bounded}
there exists a quasi-isomorphism $X' \to X''$ with $X''$ bounded below.
+Hence we see that $X/\text{Qis}^+(\mathcal{A})$ is cofinal
+in $X/\text{Qis}(\mathcal{A})$. Thus it is clear that (1) holds.
+Part (2) follows directly from part (1).
+Parts (3) and (4) are dual to parts (1) and (2).
+\end{proof}
+
+\noindent
+Given an object $A$ of an abelian category $\mathcal{A}$ we get a complex
+$$
+A[0] = ( \ldots \to 0 \to A \to 0 \to \ldots )
+$$
where $A$ is placed in degree zero. Hence we obtain a functor
$\mathcal{A} \to K(\mathcal{A})$, $A \mapsto A[0]$.
+Let us temporarily say that a partial functor is one that is
+defined on a subcategory.
+
+\begin{definition}
+\label{definition-derived-functor}
+In
+Situation \ref{situation-classical}.
+\begin{enumerate}
+\item The {\it right derived functors of $F$} are the partial functors
+$RF$ associated to cases (1) and (2) of
+Situation \ref{situation-classical}.
+\item The {\it left derived functors of $F$} are the partial functors
+$LF$ associated to cases (3) and (4) of
+Situation \ref{situation-classical}.
\item An object $A$ of $\mathcal{A}$ is said to be
{\it right acyclic for $F$}, or {\it acyclic for $RF$},
if $A[0]$ computes $RF$.
\item An object $A$ of $\mathcal{A}$ is said to be
{\it left acyclic for $F$}, or {\it acyclic for $LF$},
if $A[0]$ computes $LF$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+The following few lemmas give some criteria for the existence of
+enough acyclics.
+
+\begin{lemma}
+\label{lemma-subcategory-left-resolution}
+Let $\mathcal{A}$ be an abelian category. Let
+$\mathcal{P} \subset \Ob(\mathcal{A})$ be a subset containing $0$
+such that every object of $\mathcal{A}$ is a quotient of an element of
+$\mathcal{P}$. Let $a \in \mathbf{Z}$.
+\begin{enumerate}
+\item Given $K^\bullet$ with $K^n = 0$ for $n > a$
+there exists a quasi-isomorphism $P^\bullet \to K^\bullet$
+with $P^n \in \mathcal{P}$ and $P^n \to K^n$ surjective
+for all $n$ and $P^n = 0$ for $n > a$.
+\item Given $K^\bullet$ with $H^n(K^\bullet) = 0$ for $n > a$
+there exists a quasi-isomorphism $P^\bullet \to K^\bullet$
+with $P^n \in \mathcal{P}$ for all $n$ and $P^n = 0$ for $n > a$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of part (1). Consider the following induction hypothesis $IH_n$:
+There are $P^j \in \mathcal{P}$, $j \geq n$, with $P^j = 0$ for $j > a$,
+maps $d^j : P^j \to P^{j + 1}$ for $j \geq n$, and surjective maps
+$\alpha^j : P^j \to K^j$ for $j \geq n$ such that the diagram
+$$
+\xymatrix{
+& &
+P^n \ar[d]^\alpha \ar[r] &
+P^{n + 1} \ar[d]^\alpha \ar[r] &
+P^{n + 2} \ar[d]^\alpha \ar[r] &
+\ldots \\
+\ldots \ar[r] &
+K^{n - 1} \ar[r] &
+K^n \ar[r] &
+K^{n + 1} \ar[r] &
+K^{n + 2} \ar[r] &
+\ldots
+}
+$$
+is commutative, such that $d^{j + 1} \circ d^j = 0$ for $j \geq n$,
+such that $\alpha$ induces isomorphisms
$\Ker(d^j)/\Im(d^{j - 1}) \to H^j(K^\bullet)$
+for $j > n$, and such that $\alpha : \Ker(d^n) \to \Ker(d_K^n)$
+is surjective. Then we choose a surjection
+$$
+P^{n - 1}
+\longrightarrow
+K^{n - 1} \times_{K^n} \Ker(d^n) =
+K^{n - 1} \times_{\Ker(d_K^n)} \Ker(d^n)
+$$
+with $P^{n - 1}$ in $\mathcal{P}$. This allows us to extend the diagram
+above to
+$$
+\xymatrix{
+& P^{n - 1} \ar[d]^\alpha \ar[r] &
+P^n \ar[d]^\alpha \ar[r] &
+P^{n + 1} \ar[d]^\alpha \ar[r] &
+P^{n + 2} \ar[d]^\alpha \ar[r] &
+\ldots \\
+\ldots \ar[r] &
+K^{n - 1} \ar[r] &
+K^n \ar[r] &
+K^{n + 1} \ar[r] &
+K^{n + 2} \ar[r] &
+\ldots
+}
+$$
+The reader easily checks that $IH_{n - 1}$ holds with this choice.
+
+\medskip\noindent
+We finish the proof of (1) as follows.
+First we note that $IH_n$ is true for $n = a + 1$ since
+we can just take $P^j = 0$ for $j > a$. Hence we see that
+proceeding by descending induction we produce a complex $P^\bullet$
+with $P^n = 0$ for $n > a$
+consisting of objects from $\mathcal{P}$, and a termwise
+surjective quasi-isomorphism $\alpha : P^\bullet \to K^\bullet$ as desired.
+
+\medskip\noindent
+Proof of part (2). The assumption implies that the morphism
+$\tau_{\leq a}K^\bullet \to K^\bullet$
+(Homology, Section \ref{homology-section-truncations})
+is a quasi-isomorphism.
+Apply part (1) to find $P^\bullet \to \tau_{\leq a}K^\bullet$.
+The composition $P^\bullet \to K^\bullet$ is the desired quasi-isomorphism.
+\end{proof}
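
\noindent
For example, let $\mathcal{A}$ be the category of modules over a ring $R$ and
let $\mathcal{P}$ be the set of free $R$-modules. Then $0 \in \mathcal{P}$ and
every module is a quotient of a free module, so the lemma applies and part (2)
produces resolutions by free modules. Concretely, for $R = \mathbf{Z}$,
$a = 0$, and $K^\bullet = (\mathbf{Z}/n\mathbf{Z})[0]$ we may take $P^\bullet$
to be the complex
$$
\ldots \to 0 \to \mathbf{Z} \xrightarrow{n} \mathbf{Z} \to 0 \to \ldots
$$
with the target copy of $\mathbf{Z}$ placed in degree $0$ and with
$P^0 = \mathbf{Z} \to \mathbf{Z}/n\mathbf{Z}$ the quotient map.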
+
+\begin{lemma}
+\label{lemma-subcategory-right-resolution}
+Let $\mathcal{A}$ be an abelian category. Let
+$\mathcal{I} \subset \Ob(\mathcal{A})$ be a subset containing $0$
+such that every object of $\mathcal{A}$ is a subobject of an element of
+$\mathcal{I}$. Let $a \in \mathbf{Z}$.
+\begin{enumerate}
+\item Given $K^\bullet$ with $K^n = 0$ for $n < a$
+there exists a quasi-isomorphism $K^\bullet \to I^\bullet$
+with $K^n \to I^n$ injective and $I^n \in \mathcal{I}$ for all $n$
+and $I^n = 0$ for $n < a$,
+\item Given $K^\bullet$ with $H^n(K^\bullet) = 0$
+for $n < a$ there exists a quasi-isomorphism $K^\bullet \to I^\bullet$
+with $I^n \in \mathcal{I}$ and $I^n = 0$ for $n < a$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma is dual to Lemma \ref{lemma-subcategory-left-resolution}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-subcategory-right-acyclics}
+In
+Situation \ref{situation-classical}.
+Let $\mathcal{I} \subset \Ob(\mathcal{A})$ be a subset with the
+following properties:
+\begin{enumerate}
+\item every object of $\mathcal{A}$ is a subobject of an element of
+$\mathcal{I}$,
\item for any short exact sequence $0 \to P \to Q \to R \to 0$ of
$\mathcal{A}$ with $P, Q \in \mathcal{I}$ we have $R \in \mathcal{I}$,
and $0 \to F(P) \to F(Q) \to F(R) \to 0$ is exact.
+\end{enumerate}
+Then every object of $\mathcal{I}$ is acyclic for $RF$.
+\end{lemma}
+
+\begin{proof}
+We may add $0$ to $\mathcal{I}$ if necessary. Pick $A \in \mathcal{I}$.
+Let $A[0] \to K^\bullet$ be a quasi-isomorphism with $K^\bullet$
+bounded below. Then we can find a quasi-isomorphism
+$K^\bullet \to I^\bullet$ with $I^\bullet$ bounded below and
+each $I^n \in \mathcal{I}$, see
+Lemma \ref{lemma-subcategory-right-resolution}.
+Hence we see that these resolutions are cofinal in the category
+$A[0]/\text{Qis}^{+}(\mathcal{A})$. To finish the proof it therefore
+suffices to show that for any quasi-isomorphism
+$A[0] \to I^\bullet$ with $I^\bullet$ bounded below and $I^n \in \mathcal{I}$
+we have $F(A)[0] \to F(I^\bullet)$ is a quasi-isomorphism.
+To see this suppose that $I^n = 0$ for $n < n_0$. Of course we may assume
+that $n_0 < 0$. Starting with $n = n_0$ we prove inductively that
+$\Im(d^{n - 1}) = \Ker(d^n)$ and $\Im(d^{-1})$
+are elements of $\mathcal{I}$ using property (2) and the exact sequences
+$$
+0 \to \Ker(d^n) \to I^n \to \Im(d^n) \to 0.
+$$
+Moreover, property (2) also guarantees that the complex
+$$
+0 \to F(I^{n_0}) \to F(I^{n_0 + 1}) \to \ldots \to F(I^{-1}) \to
+F(\Im(d^{-1})) \to 0
+$$
+is exact. The exact sequence
+$0 \to \Im(d^{-1}) \to I^0 \to I^0/\Im(d^{-1}) \to 0$
+implies that $I^0/\Im(d^{-1})$ is an element of $\mathcal{I}$.
+The exact sequence $0 \to A \to I^0/\Im(d^{-1}) \to \Im(d^0) \to 0$
then implies that $\Im(d^0) = \Ker(d^1)$ is an element of
+$\mathcal{I}$ and from then on one continues as before to show that
+$\Im(d^{n - 1}) = \Ker(d^n)$ is an element of $\mathcal{I}$
+for all $n > 0$. Applying $F$ to each of the short exact sequences
mentioned above and using (2) we observe that $F(A)[0] \to F(I^\bullet)$
is a quasi-isomorphism as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-subcategory-left-acyclics}
+In
+Situation \ref{situation-classical}.
+Let $\mathcal{P} \subset \Ob(\mathcal{A})$ be a subset with the
+following properties:
+\begin{enumerate}
+\item every object of $\mathcal{A}$ is a quotient of an element of
+$\mathcal{P}$,
\item for any short exact sequence $0 \to P \to Q \to R \to 0$ of
$\mathcal{A}$ with $Q, R \in \mathcal{P}$ we have $P \in \mathcal{P}$,
and $0 \to F(P) \to F(Q) \to F(R) \to 0$ is exact.
+\end{enumerate}
+Then every object of $\mathcal{P}$ is acyclic for $LF$.
+\end{lemma}
+
+\begin{proof}
+Dual to the proof of
+Lemma \ref{lemma-subcategory-right-acyclics}.
+\end{proof}
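
\noindent
For example, suppose $\mathcal{A}$ has enough injectives and let $\mathcal{I}$
be the set of injective objects of $\mathcal{A}$. Then $\mathcal{I}$ satisfies
the hypotheses of Lemma \ref{lemma-subcategory-right-acyclics} for any additive
functor $F$: given a short exact sequence
$$
0 \to P \to Q \to R \to 0
$$
with $P$ injective, the sequence is split, hence $R$ is a direct summand of $Q$
and is injective whenever $Q$ is, and applying the additive functor $F$ to a
split short exact sequence yields a split short exact sequence. Thus every
injective object of $\mathcal{A}$ is right acyclic for $F$. Dually, if
$\mathcal{A}$ has enough projectives, then every projective object is left
acyclic for $F$.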
+
+
+
+
+
+
+
+\section{Higher derived functors}
+\label{section-higher-derived}
+
+\noindent
+The following simple lemma shows that right derived functors
+``move to the right''.
+
+\begin{lemma}
+\label{lemma-negative-vanishing}
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor
+between abelian categories. Let $K^\bullet$ be a complex of $\mathcal{A}$
+and $a \in \mathbf{Z}$.
+\begin{enumerate}
+\item If $H^i(K^\bullet) = 0$ for all $i < a$ and $RF$ is defined at
+$K^\bullet$, then $H^i(RF(K^\bullet)) = 0$ for all $i < a$.
+\item If $RF$ is defined at $K^\bullet$ and $\tau_{\leq a}K^\bullet$,
+then $H^i(RF(\tau_{\leq a}K^\bullet)) = H^i(RF(K^\bullet))$
+for all $i \leq a$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $K^\bullet$ satisfies the assumptions of (1).
+Let $K^\bullet \to L^\bullet$ be any quasi-isomorphism.
+Then it is also true that $K^\bullet \to \tau_{\geq a}L^\bullet$
+is a quasi-isomorphism by our assumption on $K^\bullet$.
+Hence in the category $K^\bullet/\text{Qis}^{+}(\mathcal{A})$ the
+quasi-isomorphisms $s : K^\bullet \to L^\bullet$ with $L^n = 0$ for $n < a$
are cofinal. Thus $RF(K^\bullet)$ is the value of the essentially constant
ind-object $F(L^\bullet)$ for these $s$, and it follows that
$H^i(RF(K^\bullet)) = 0$ for $i < a$.
+
+\medskip\noindent
+To prove (2) we use the distinguished triangle
+$$
+\tau_{\leq a}K^\bullet \to K^\bullet \to \tau_{\geq a + 1}K^\bullet
+\to (\tau_{\leq a}K^\bullet)[1]
+$$
+of Remark \ref{remark-truncation-distinguished-triangle} to conclude
+via Lemma \ref{lemma-2-out-of-3-defined} that
+$RF$ is defined at $\tau_{\geq a + 1}K^\bullet$ as well and that we have
+a distinguished triangle
+$$
+RF(\tau_{\leq a}K^\bullet) \to RF(K^\bullet) \to RF(\tau_{\geq a + 1}K^\bullet)
+\to RF(\tau_{\leq a}K^\bullet)[1]
+$$
+in $D(\mathcal{B})$. By part (1) we see that $RF(\tau_{\geq a + 1}K^\bullet)$
+has vanishing cohomology in degrees $< a + 1$. The long exact cohomology
+sequence of this distinguished triangle then shows what we want.
+\end{proof}
+
+\begin{definition}
+\label{definition-higher-derived-functors}
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor
+between abelian categories. Assume
+$RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is everywhere
+defined. Let $i \in \mathbf{Z}$.
+The {\it $i$th right derived functor $R^iF$ of $F$} is the functor
+$$
+R^iF = H^i \circ RF :
+\mathcal{A}
+\longrightarrow
+\mathcal{B}
+$$
+\end{definition}
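
\noindent
Here, as before, we consider an object $A$ of $\mathcal{A}$ via the associated
complex $A[0]$, so that $R^iF(A) = H^i(RF(A[0]))$. In particular, if
$A[0] \to I^\bullet$ is a quasi-isomorphism with $I^\bullet$ bounded below and
each term $I^n$ right acyclic for $F$, then
$$
R^iF(A) = H^i(F(I^\bullet))
$$
by Lemma \ref{lemma-leray-acyclicity} below.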
+
+\noindent
+The following lemma shows that it really does not make a lot
+of sense to take the right derived functor unless the functor
+is left exact.
+
+\begin{lemma}
+\label{lemma-left-exact-higher-derived}
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor
+between abelian categories and assume
+$RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is everywhere
+defined.
+\begin{enumerate}
+\item We have $R^iF = 0$ for $i < 0$,
+\item $R^0F$ is left exact,
+\item the map $F \to R^0F$ is an isomorphism if and
+only if $F$ is left exact.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $A$ be an object of $\mathcal{A}$. Let $A[0] \to K^\bullet$
+be any quasi-isomorphism. Then it is also true that
+$A[0] \to \tau_{\geq 0}K^\bullet$ is a quasi-isomorphism.
+Hence in the category $A[0]/\text{Qis}^{+}(\mathcal{A})$ the
+quasi-isomorphisms $s : A[0] \to K^\bullet$ with $K^n = 0$ for $n < 0$
+are cofinal. Thus it is clear that $H^i(RF(A[0])) = 0$ for $i < 0$.
+Moreover, for such an $s$ the sequence
+$$
+0 \to A \to K^0 \to K^1
+$$
+is exact. Hence if $F$ is left exact, then $0 \to F(A) \to F(K^0) \to F(K^1)$
+is exact as well, and we see that $F(A) \to H^0(F(K^\bullet))$ is an
+isomorphism for every $s : A[0] \to K^\bullet$ as above which implies
+that $H^0(RF(A[0])) = F(A)$.
+
+\medskip\noindent
+Let $0 \to A \to B \to C \to 0$ be a short exact sequence of $\mathcal{A}$.
+By
+Lemma \ref{lemma-derived-canonical-delta-functor}
+we obtain a distinguished triangle
+$(A[0], B[0], C[0], a, b, c)$ in $K^{+}(\mathcal{A})$.
+From the long exact cohomology sequence (and the vanishing for $i < 0$
+proved above) we deduce that $0 \to R^0F(A) \to R^0F(B) \to R^0F(C)$
+is exact. Hence $R^0F$ is left exact. Of course this also proves that if
+$F \to R^0F$ is an isomorphism, then $F$ is left exact.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-F-acyclic}
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor
+between abelian categories and assume
+$RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is everywhere
+defined. Let $A$ be an object of $\mathcal{A}$.
+\begin{enumerate}
+\item $A$ is right acyclic for $F$ if and only if
+$F(A) \to R^0F(A)$ is an isomorphism and $R^iF(A) = 0$ for all $i > 0$,
+\item if $F$ is left exact, then $A$ is right acyclic for $F$
+if and only if $R^iF(A) = 0$ for all $i > 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $A$ is right acyclic for $F$, then $RF(A[0]) = F(A)[0]$ and in
+particular $F(A) \to R^0F(A)$ is an isomorphism and
+$R^iF(A) = 0$ for $i \not = 0$. Conversely, if $F(A) \to R^0F(A)$
+is an isomorphism and $R^iF(A) = 0$ for all $i > 0$ then
+$F(A[0]) \to RF(A[0])$ is a quasi-isomorphism by
+Lemma \ref{lemma-left-exact-higher-derived} part (1)
and hence $A$ is right acyclic for $F$. If $F$ is left exact then $F = R^0F$, see
+Lemma \ref{lemma-left-exact-higher-derived}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-F-acyclic-ses}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a left exact functor
+between abelian categories and assume
+$RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is everywhere
+defined. Let $0 \to A \to B \to C \to 0$ be a short exact sequence
+of $\mathcal{A}$.
+\begin{enumerate}
+\item If $A$ and $C$ are right acyclic for $F$ then so is $B$.
+\item If $A$ and $B$ are right acyclic for $F$ then so is $C$.
+\item If $B$ and $C$ are right acyclic for $F$ and $F(B) \to F(C)$ is
+surjective then $A$ is right acyclic for $F$.
+\end{enumerate}
+In each of the three cases
+$$
+0 \to F(A) \to F(B) \to F(C) \to 0
+$$
+is a short exact sequence of $\mathcal{B}$.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-derived-canonical-delta-functor}
+we obtain a distinguished triangle
+$(A[0], B[0], C[0], a, b, c)$ in $K^{+}(\mathcal{A})$.
+As $RF$ is an exact functor and since
+$R^iF = 0$ for $i < 0$ and $R^0F = F$
+(Lemma \ref{lemma-left-exact-higher-derived})
+we obtain an exact cohomology sequence
+$$
+0 \to F(A) \to F(B) \to F(C) \to R^1F(A) \to \ldots
+$$
+in the abelian category $\mathcal{B}$. Thus the lemma follows from
+the characterization of acyclic objects in
+Lemma \ref{lemma-F-acyclic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-derived-delta-functor}
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor
+between abelian categories and assume
+$RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is everywhere defined.
+\begin{enumerate}
+\item The functors $R^iF$, $i \geq 0$ come equipped with a canonical
+structure of a $\delta$-functor from $\mathcal{A} \to \mathcal{B}$, see
+Homology, Definition \ref{homology-definition-cohomological-delta-functor}.
+\item If every object of $\mathcal{A}$ is a subobject of a right
+acyclic object for $F$, then $\{R^iF, \delta\}_{i \geq 0}$ is a
+universal $\delta$-functor, see
+Homology, Definition \ref{homology-definition-universal-delta-functor}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The functor $\mathcal{A} \to \text{Comp}^{+}(\mathcal{A})$,
+$A \mapsto A[0]$ is exact. The functor
+$\text{Comp}^{+}(\mathcal{A}) \to D^{+}(\mathcal{A})$
+is a $\delta$-functor, see
+Lemma \ref{lemma-derived-canonical-delta-functor}.
+The functor $RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is exact.
+Finally, the functor $H^0 : D^{+}(\mathcal{B}) \to \mathcal{B}$
+is a homological functor, see
+Definition \ref{definition-unbounded-derived-category}.
+Hence we get the structure of a $\delta$-functor from
+Lemma \ref{lemma-compose-delta-functor-homological}
+and
+Lemma \ref{lemma-exact-compose-delta-functor}.
+Part (2) follows from
+Homology, Lemma \ref{homology-lemma-efface-implies-universal}
+and the description of acyclics in
+Lemma \ref{lemma-F-acyclic}.
+\end{proof}
+
+\begin{lemma}[Leray's acyclicity lemma]
+\label{lemma-leray-acyclicity}
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor
+between abelian categories. Let $A^\bullet$ be a bounded below complex
+of right $F$-acyclic objects such that $RF$ is defined at
+$A^\bullet$\footnote{For example this holds if
+$RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is everywhere defined.}.
+The canonical map
+$$
+F(A^\bullet) \longrightarrow RF(A^\bullet)
+$$
+is an isomorphism in $D^{+}(\mathcal{B})$, i.e., $A^\bullet$ computes
+$RF$.
+\end{lemma}
+
+\begin{proof}
+Let $A^\bullet$ be a bounded complex of right $F$-acyclic objects.
+We claim that $RF$ is defined at $A^\bullet$ and that
+$F(A^\bullet) \to RF(A^\bullet)$ is an isomorphism in $D^+(\mathcal{B})$.
Namely, it holds for complexes with at most one nonzero term, which is then a
right $F$-acyclic object, for example by Lemma \ref{lemma-F-acyclic}.
Next, suppose that
+$A^n = 0$ for $n \not \in [a, b]$. Using the ``stupid'' truncations we obtain
+a termwise split short exact sequence of complexes
+$$
+0 \to \sigma_{\geq a + 1} A^\bullet \to A^\bullet \to
+\sigma_{\leq a} A^\bullet \to 0
+$$
+see
+Homology, Section \ref{homology-section-truncations}.
Thus we obtain a distinguished triangle
+$(\sigma_{\geq a + 1} A^\bullet, A^\bullet, \sigma_{\leq a} A^\bullet)$.
+By induction hypothesis $RF$ is defined for the two outer complexes
+and these complexes compute $RF$. Then the same is true for the middle one
+by Lemma \ref{lemma-2-out-of-3-computes}.
+
+\medskip\noindent
+Suppose that $A^\bullet$ is a bounded below complex of acyclic objects
+such that $RF$ is defined at $A^\bullet$.
+To show that $F(A^\bullet) \to RF(A^\bullet)$
+is an isomorphism in $D^{+}(\mathcal{B})$
+it suffices to show that $H^i(F(A^\bullet)) \to H^i(RF(A^\bullet))$
+is an isomorphism for
+all $i$. Pick $i$. Consider the termwise split short exact sequence of
+complexes
+$$
+0 \to \sigma_{\geq i + 2} A^\bullet \to A^\bullet \to
+\sigma_{\leq i + 1} A^\bullet \to 0.
+$$
+Note that this induces a termwise split short exact sequence
+$$
+0 \to \sigma_{\geq i + 2} F(A^\bullet) \to F(A^\bullet) \to
+\sigma_{\leq i + 1} F(A^\bullet) \to 0.
+$$
+Hence we get distinguished triangles
+$$
+(\sigma_{\geq i + 2} A^\bullet, A^\bullet,
+\sigma_{\leq i + 1} A^\bullet)
+\quad\text{and}\quad
+(\sigma_{\geq i + 2} F(A^\bullet), F(A^\bullet),
+\sigma_{\leq i + 1} F(A^\bullet))
+$$
+Since $RF$ is defined at $A^\bullet$ (by assumption)
+and at $\sigma_{\leq i + 1}A^\bullet$ (by the first paragraph)
we see that $RF$ is defined at $\sigma_{\geq i + 2}A^\bullet$
and we get a distinguished triangle
+$$
+(RF(\sigma_{\geq i + 2} A^\bullet), RF(A^\bullet),
+RF(\sigma_{\leq i + 1} A^\bullet))
+$$
+See Lemma \ref{lemma-2-out-of-3-defined}.
+Using these distinguished triangles we obtain a map of exact sequences
+$$
+\xymatrix{
+H^i(\sigma_{\geq i + 2} F(A^\bullet)) \ar[r] \ar[d] &
+H^i(F(A^\bullet)) \ar[r] \ar[d]^\alpha &
+H^i(\sigma_{\leq i + 1} F(A^\bullet)) \ar[r] \ar[d]^\beta &
+H^{i + 1}(\sigma_{\geq i + 2} F(A^\bullet)) \ar[d] \\
+H^i(RF(\sigma_{\geq i + 2} A^\bullet)) \ar[r] &
+H^i(RF(A^\bullet)) \ar[r] &
+H^i(RF(\sigma_{\leq i + 1} A^\bullet)) \ar[r] &
+H^{i + 1}(RF(\sigma_{\geq i + 2} A^\bullet))
+}
+$$
+By the results of the first paragraph the map $\beta$ is an isomorphism.
+By inspection the objects on the upper left and the upper right
+are zero. Hence to finish the proof it suffices to show that
+$H^i(RF(\sigma_{\geq i + 2} A^\bullet)) = 0$ and
+$H^{i + 1}(RF(\sigma_{\geq i + 2} A^\bullet)) = 0$.
+This follows immediately from
+Lemma \ref{lemma-negative-vanishing}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-enough-acyclics}
+\begin{slogan}
An additive functor on an abelian category is extended to the (bounded below
or above) derived category by resolving with a complex that is acyclic for
that functor.
+\end{slogan}
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor of
+abelian categories.
+\begin{enumerate}
+\item If every object of $\mathcal{A}$ injects into an object acyclic
+for $RF$, then $RF$ is defined on all of $K^{+}(\mathcal{A})$
+and we obtain an exact functor
+$$
+RF : D^{+}(\mathcal{A}) \longrightarrow D^{+}(\mathcal{B})
+$$
+see (\ref{equation-everywhere}). Moreover, any bounded below complex
+$A^\bullet$ whose terms are acyclic for $RF$ computes $RF$.
\item If every object of $\mathcal{A}$ is a quotient of
+an object acyclic for $LF$, then $LF$ is defined on all of
+$K^{-}(\mathcal{A})$ and we obtain an exact functor
+$$
+LF : D^{-}(\mathcal{A}) \longrightarrow D^{-}(\mathcal{B})
+$$
+see (\ref{equation-everywhere}). Moreover, any bounded above complex
+$A^\bullet$ whose terms are acyclic for $LF$ computes $LF$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Assume every object of $\mathcal{A}$ injects into an object acyclic
+for $RF$. Let $\mathcal{I}$ be the set of objects acyclic for $RF$.
+Let $K^\bullet$ be a bounded below complex in $\mathcal{A}$. By
+Lemma \ref{lemma-subcategory-right-resolution}
+there exists a quasi-isomorphism $\alpha : K^\bullet \to I^\bullet$ with
+$I^\bullet$ bounded below and $I^n \in \mathcal{I}$. Hence in order to
+prove (1) it suffices to show that
+$F(I^\bullet) \to F((I')^\bullet)$ is a quasi-isomorphism when
+$s : I^\bullet \to (I')^\bullet$ is a quasi-isomorphism of bounded
+below complexes of objects from $\mathcal{I}$, see
+Lemma \ref{lemma-find-existence-computes}.
+Note that the cone $C(s)^\bullet$ is an acyclic bounded below complex
+all of whose terms are in $\mathcal{I}$.
+Hence it suffices to show: given an acyclic bounded below complex
+$I^\bullet$ all of whose terms are in $\mathcal{I}$ the complex
+$F(I^\bullet)$ is acyclic.
+
+\medskip\noindent
+Say $I^n = 0$ for $n < n_0$. Setting $J^n = \Im(d^n)$ we break
+$I^\bullet$ into short exact sequences
+$0 \to J^n \to I^{n + 1} \to J^{n + 1} \to 0$
+for $n \geq n_0$. These sequences induce distinguished triangles
+$(J^n, I^{n + 1}, J^{n + 1})$ in $D^+(\mathcal{A})$ by
+Lemma \ref{lemma-derived-canonical-delta-functor}.
+For each $k \in \mathbf{Z}$ denote $H_k$ the assertion:
+For all $n \leq k$ the right derived functor
+$RF$ is defined at $J^n$ and $R^iF(J^n) = 0$ for $i \not = 0$.
+Then $H_k$ holds trivially for $k \leq n_0$. If $H_n$ holds,
+then, using Proposition \ref{proposition-derived-functor},
+we see that $RF$ is defined at $J^{n + 1}$ and
+$(RF(J^n), RF(I^{n + 1}), RF(J^{n + 1}))$ is a distinguished
+triangle of $D^+(\mathcal{B})$. Thus the long exact cohomology sequence
+(\ref{equation-long-exact-cohomology-sequence-D})
+associated to this triangle gives an exact sequence
+$$
+0 \to R^{-1}F(J^{n + 1}) \to R^0F(J^n) \to
+F(I^{n + 1}) \to R^0F(J^{n + 1}) \to 0
+$$
+and gives that $R^iF(J^{n + 1}) = 0$ for $i \not \in \{-1, 0\}$.
+By Lemma \ref{lemma-negative-vanishing} we see that $R^{-1}F(J^{n + 1}) = 0$.
+This proves that $H_{n + 1}$ is true hence $H_k$ holds for all $k$.
+We also conclude that
+$$
+0 \to R^0F(J^n) \to F(I^{n + 1}) \to R^0F(J^{n + 1}) \to 0
+$$
+is short exact for all $n$. This in turn proves that $F(I^\bullet)$ is exact.
+
+\medskip\noindent
+The proof in the case of $LF$ is dual.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-derived-exact-functor}
+Let $F : \mathcal{A} \to \mathcal{B}$ be an exact functor of
+abelian categories. Then
+\begin{enumerate}
+\item every object of $\mathcal{A}$ is right acyclic for $F$,
+\item $RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is everywhere defined,
+\item $RF : D(\mathcal{A}) \to D(\mathcal{B})$ is everywhere defined,
+\item every complex computes $RF$, in other words, the canonical
+map $F(K^\bullet) \to RF(K^\bullet)$ is an isomorphism for all complexes, and
+\item $R^iF = 0$ for $i \not = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is true because $F$ transforms acyclic complexes into acyclic complexes
+and quasi-isomorphisms into quasi-isomorphisms. Details omitted.
+\end{proof}
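
\medskip\noindent
The omitted details come down to the fact, recalled here as a sketch, that
an exact functor commutes with cohomology: for any complex $K^\bullet$
in $\mathcal{A}$ we have
$$
H^i(F(K^\bullet)) = \Ker(F(d^i))/\Im(F(d^{i - 1}))
\cong F\left(\Ker(d^i)/\Im(d^{i - 1})\right) = F(H^i(K^\bullet))
$$
since $F$ preserves kernels, cokernels, and images.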
+
+
+
+
+
+
+
+
+
+\section{Triangulated subcategories of the derived category}
+\label{section-triangulated-sub}
+
+\noindent
+Let $\mathcal{A}$ be an abelian category. In this section we look at
+certain strictly full saturated triangulated subcategories
+$\mathcal{D}' \subset D(\mathcal{A})$.
+
+\medskip\noindent
+Let $\mathcal{B} \subset \mathcal{A}$ be a weak Serre subcategory, see
+Homology, Definition \ref{homology-definition-serre-subcategory} and
+Lemma \ref{homology-lemma-characterize-weak-serre-subcategory}.
We let $D_\mathcal{B}(\mathcal{A})$ be the full subcategory of
+$D(\mathcal{A})$ whose objects are
+$$
+\Ob(D_\mathcal{B}(\mathcal{A}))
+=
+\{X \in \Ob(D(\mathcal{A})) \mid
+H^n(X) \text{ is an object of }\mathcal{B}\text{ for all }n\}
+$$
+We also define
+$D^{+}_\mathcal{B}(\mathcal{A}) =
+D^{+}(\mathcal{A}) \cap D_\mathcal{B}(\mathcal{A})$
+and similarly for the other bounded versions.
+
+\begin{lemma}
+\label{lemma-cohomology-in-serre-subcategory}
+Let $\mathcal{A}$ be an abelian category.
+Let $\mathcal{B} \subset \mathcal{A}$ be a weak Serre subcategory.
+The category $D_\mathcal{B}(\mathcal{A})$ is a strictly full
+saturated triangulated subcategory of $D(\mathcal{A})$.
+Similarly for the bounded versions.
+\end{lemma}
+
+\begin{proof}
+It is clear that $D_\mathcal{B}(\mathcal{A})$ is an additive subcategory
+preserved under the translation functors.
+If $X \oplus Y$ is in $D_\mathcal{B}(\mathcal{A})$, then
both $H^n(X)$ and $H^n(Y)$ are kernels of maps between objects
of $\mathcal{B}$ as $H^n(X \oplus Y) = H^n(X) \oplus H^n(Y)$.
+Hence both $X$ and $Y$ are in $D_\mathcal{B}(\mathcal{A})$. By
+Lemma \ref{lemma-triangulated-subcategory}
+it therefore suffices to show that given a distinguished triangle
+$(X, Y, Z, f, g, h)$ such that $X$ and $Y$ are in $D_\mathcal{B}(\mathcal{A})$
+then $Z$ is an object of $D_\mathcal{B}(\mathcal{A})$. The long exact
+cohomology sequence (\ref{equation-long-exact-cohomology-sequence-D})
+and the definition of a weak Serre subcategory (see
+Homology, Definition \ref{homology-definition-serre-subcategory})
+show that $H^n(Z)$ is an object of $\mathcal{B}$ for all $n$.
+Thus $Z$ is an object of $D_\mathcal{B}(\mathcal{A})$.
+\end{proof}
+
+\noindent
+We continue to assume that $\mathcal{B}$ is a weak Serre subcategory
+of the abelian category $\mathcal{A}$. Then $\mathcal{B}$ is an abelian
+category and the inclusion functor $\mathcal{B} \to \mathcal{A}$ is exact.
+Hence we obtain a derived functor $D(\mathcal{B}) \to D(\mathcal{A})$, see
+Lemma \ref{lemma-right-derived-exact-functor}. Clearly the
+functor $D(\mathcal{B}) \to D(\mathcal{A})$ factors through a canonical
+exact functor
+\begin{equation}
+\label{equation-compare}
+D(\mathcal{B}) \longrightarrow D_\mathcal{B}(\mathcal{A})
+\end{equation}
After all, a complex made from objects of $\mathcal{B}$ certainly
+gives rise to an object of $D_\mathcal{B}(\mathcal{A})$ and as
+distinguished triangles in $D_\mathcal{B}(\mathcal{A})$ are exactly the
+distinguished triangles of $D(\mathcal{A})$ whose vertices are in
+$D_\mathcal{B}(\mathcal{A})$ we see that the functor is exact since
+$D(\mathcal{B}) \to D(\mathcal{A})$ is exact. Similarly we obtain functors
+$D^+(\mathcal{B}) \to D^+_\mathcal{B}(\mathcal{A})$,
+$D^-(\mathcal{B}) \to D^-_\mathcal{B}(\mathcal{A})$, and
+$D^b(\mathcal{B}) \to D^b_\mathcal{B}(\mathcal{A})$
+for the bounded versions. A key question in many cases is whether the
+displayed functor is an equivalence.
+
+\medskip\noindent
+Now, suppose that $\mathcal{B}$ is a Serre subcategory of $\mathcal{A}$.
+In this case we have the quotient functor
+$\mathcal{A} \to \mathcal{A}/\mathcal{B}$, see
+Homology, Lemma \ref{homology-lemma-serre-subcategory-is-kernel}.
Then $D_\mathcal{B}(\mathcal{A})$ is the kernel of the functor
+$D(\mathcal{A}) \to D(\mathcal{A}/\mathcal{B})$.
+Thus we obtain a canonical functor
+$$
+D(\mathcal{A})/D_\mathcal{B}(\mathcal{A})
+\longrightarrow
+D(\mathcal{A}/\mathcal{B})
+$$
+by
+Lemma \ref{lemma-universal-property-quotient}.
+Similarly for the bounded versions.
+
+\begin{lemma}
+\label{lemma-derived-of-quotient}
+Let $\mathcal{A}$ be an abelian category.
+Let $\mathcal{B} \subset \mathcal{A}$ be a Serre subcategory.
+Then $D(\mathcal{A}) \to D(\mathcal{A}/\mathcal{B})$
+is essentially surjective.
+\end{lemma}
+
+\begin{proof}
+We will use the description of the category $\mathcal{A}/\mathcal{B}$
+in the proof of
+Homology, Lemma \ref{homology-lemma-serre-subcategory-is-kernel}.
+Let $(X^\bullet, d^\bullet)$ be a complex of $\mathcal{A}/\mathcal{B}$.
+This means that $X^i$ is an object of $\mathcal{A}$ and
+$d^i : X^i \to X^{i + 1}$ is a morphism in $\mathcal{A}/\mathcal{B}$
+such that $d^i \circ d^{i - 1} = 0$ in $\mathcal{A}/\mathcal{B}$.
+
+\medskip\noindent
+For $i \geq 0$ we may write $d^i = (s^i, f^i)$ where $s^i : Y^i \to X^i$
+is a morphism of $\mathcal{A}$ whose kernel and cokernel are in $\mathcal{B}$
+(equivalently $s^i$ becomes an isomorphism in the quotient category)
+and $f^i : Y^i \to X^{i + 1}$ is a morphism of $\mathcal{A}$.
+By induction we will construct a commutative diagram
+$$
+\xymatrix{
+& (X')^1 \ar@{..>}[r] & (X')^2 \ar@{..>}[r] & \ldots \\
+X^0 \ar@{..>}[ru] &
+X^1 \ar@{..>}[u] &
+X^2 \ar@{..>}[u] &
+\ldots \\
+Y^0 \ar[u]_{s^0} \ar[ru]_{f^0} &
+Y^1 \ar[u]_{s^1} \ar[ru]_{f^1} &
+Y^2 \ar[u]_{s^2} \ar[ru]_{f^2} &
+\ldots
+}
+$$
+where the vertical arrows $X^i \to (X')^i$ become isomorphisms
+in the quotient category. Namely, we first let
+$(X')^1 = \Coker(Y^0 \to X^0 \oplus X^1)$ (or rather the
+pushout of the diagram with arrows $s^0$ and $f^0$) which gives the
+first commutative diagram. Next, we take
+$(X')^2 = \Coker(Y^1 \to (X')^1 \oplus X^2)$. And so on.
+Setting additionally $(X')^n = X^n$ for $n \leq 0$ we see that the map
+$(X^\bullet, d^\bullet) \to ((X')^\bullet, (d')^\bullet)$
+is an isomorphism of complexes in $\mathcal{A}/\mathcal{B}$.
+Hence we may assume $d^n : X^n \to X^{n + 1}$ is given
+by a map $X^n \to X^{n + 1}$ in $\mathcal{A}$ for $n \geq 0$.
+
+\medskip\noindent
+Dually, for $i < 0$ we may write $d^i = (g^i, t^{i + 1})$ where
+$t^{i + 1} : X^{i + 1} \to Z^{i + 1}$ is an isomorphism in the
+quotient category and $g^i : X^i \to Z^{i + 1}$ is a morphism.
+By induction we will construct a commutative diagram
+$$
+\xymatrix{
+\ldots &
+Z^{-2} &
+Z^{-1} &
+Z^0 \\
+\ldots &
X^{-2} \ar[u]_{t^{-2}} \ar[ru]_{g^{-2}} &
X^{-1} \ar[u]_{t^{-1}} \ar[ru]_{g^{-1}} &
+X^0 \ar[u]_{t^0} \\
+\ldots &
+(X')^{-2} \ar@{..>}[u] \ar@{..>}[r] &
+(X')^{-1} \ar@{..>}[u] \ar@{..>}[ru]
+}
+$$
+where the vertical arrows $(X')^i \to X^i$ become isomorphisms
+in the quotient category. Namely, we take
+$(X')^{-1} = X^{-1} \times_{Z^0} X^0$. Then we take
+$(X')^{-2} = X^{-2} \times_{Z^{-1}} (X')^{-1}$. And so on.
+Setting additionally $(X')^n = X^n$ for $n \geq 0$ we see that the map
+$((X')^\bullet, (d')^\bullet) \to (X^\bullet, d^\bullet)$
+is an isomorphism of complexes in $\mathcal{A}/\mathcal{B}$.
+Hence we may assume $d^n : X^n \to X^{n + 1}$ is given
+by a map $d^n : X^n \to X^{n + 1}$ in $\mathcal{A}$
+for all $n \in \mathbf{Z}$.
+
+\medskip\noindent
+In this case we know the compositions $d^n \circ d^{n - 1}$
+are zero in $\mathcal{A}/\mathcal{B}$. If for $n > 0$ we replace
+$X^n$ by
+$$
+(X')^n = X^n/\sum\nolimits_{0 < k \leq n} \Im(\Im(X^{k - 2} \to X^k) \to X^n)
+$$
+then the compositions $d^n \circ d^{n - 1}$ are zero for $n \geq 0$.
+(Similarly to the second paragraph above we obtain an isomorphism of
+complexes
+$(X^\bullet, d^\bullet) \to ((X')^\bullet, (d')^\bullet)$.)
+Finally, for $n < 0$ we replace $X^n$ by
+$$
+(X')^n = \bigcap\nolimits_{n \leq k < 0}
+(X^n \to X^k)^{-1}\Ker(X^k \to X^{k + 2})
+$$
+and we argue in the same manner to get a complex in $\mathcal{A}$
+whose image in $\mathcal{A}/\mathcal{B}$ is isomorphic to the given one.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quotient-by-serre-easy}
+Let $\mathcal{A}$ be an abelian category.
+Let $\mathcal{B} \subset \mathcal{A}$ be a Serre subcategory.
+Suppose that the functor $v : \mathcal{A} \to \mathcal{A}/\mathcal{B}$
+has a left adjoint $u : \mathcal{A}/\mathcal{B} \to \mathcal{A}$
+such that $vu \cong \text{id}$. Then
+$$
+D(\mathcal{A})/D_\mathcal{B}(\mathcal{A}) = D(\mathcal{A}/\mathcal{B})
+$$
+and similarly for the bounded versions.
+\end{lemma}
+
+\begin{proof}
+The functor $D(v) : D(\mathcal{A}) \to D(\mathcal{A}/\mathcal{B})$
+is essentially surjective by
+Lemma \ref{lemma-derived-of-quotient}.
+For an object $X$ of $D(\mathcal{A})$ the adjunction mapping
+$c_X : uvX \to X$ maps to an isomorphism in $D(\mathcal{A}/\mathcal{B})$
+because $vuv \cong v$ by the assumption that $vu \cong \text{id}$.
+Thus in a distinguished triangle $(uvX, X, Z, c_X, g, h)$ the object
+$Z$ is an object of $D_\mathcal{B}(\mathcal{A})$ as we see by looking
+at the long exact cohomology sequence.
+Hence $c_X$ is an element of the multiplicative system used to define
+the quotient category $D(\mathcal{A})/D_\mathcal{B}(\mathcal{A})$.
+Thus $uvX \cong X$ in $D(\mathcal{A})/D_\mathcal{B}(\mathcal{A})$.
For $X, Y \in \Ob(D(\mathcal{A}))$ the map
+$$
+\Hom_{D(\mathcal{A})/D_\mathcal{B}(\mathcal{A})}(X, Y)
+\longrightarrow
+\Hom_{D(\mathcal{A}/\mathcal{B})}(vX, vY)
+$$
+is bijective because $u$ gives an inverse (by the remarks above).
+\end{proof}
+
+\noindent
+For certain Serre subcategories $\mathcal{B} \subset \mathcal{A}$
+we can prove that the functor
+$D(\mathcal{B}) \to D_\mathcal{B}(\mathcal{A})$
+is fully faithful.
+
+\begin{lemma}
+\label{lemma-fully-faithful-embedding}
+Let $\mathcal{A}$ be an abelian category. Let $\mathcal{B} \subset \mathcal{A}$
+be a Serre subcategory. Assume that for every surjection $X \to Y$
+with $X \in \Ob(\mathcal{A})$ and $Y \in \Ob(\mathcal{B})$ there exists
+$X' \subset X$, $X' \in \Ob(\mathcal{B})$ which surjects onto $Y$.
+Then the functor $D^-(\mathcal{B}) \to D^-_\mathcal{B}(\mathcal{A})$ of
+(\ref{equation-compare}) is an equivalence.
+\end{lemma}
+
+\begin{proof}
+Let $X^\bullet$ be a bounded above complex of $\mathcal{A}$ such that
+$H^i(X^\bullet) \in \Ob(\mathcal{B})$ for all $i \in \mathbf{Z}$.
+Moreover, suppose we are given $B^i \subset X^i$, $B^i \in \Ob(\mathcal{B})$
+for all $i \in \mathbf{Z}$. Claim: there exists a subcomplex
+$Y^\bullet \subset X^\bullet$ such that
+\begin{enumerate}
+\item $Y^\bullet \to X^\bullet$ is a quasi-isomorphism,
+\item $Y^i \in \Ob(\mathcal{B})$ for all $i \in \mathbf{Z}$, and
+\item $B^i \subset Y^i$ for all $i \in \mathbf{Z}$.
+\end{enumerate}
+To prove the claim, using the assumption of the lemma we first choose
+$C^i \subset \Ker(d^i : X^i \to X^{i + 1})$, $C^i \in \Ob(\mathcal{B})$
+surjecting onto $H^i(X^\bullet)$. Setting
+$D^i = C^i + d^{i - 1}(B^{i - 1}) + B^i$ we find a subcomplex
+$D^\bullet$ satisfying (2) and (3) such that
+$H^i(D^\bullet) \to H^i(X^\bullet)$ is surjective for all $i \in \mathbf{Z}$.
+For any choice of $E^i \subset X^i$ with $E^i \in \Ob(\mathcal{B})$ and
+$d^i(E^i) \subset D^{i + 1} + E^{i + 1}$ we see that setting
+$Y^i = D^i + E^i$ gives a subcomplex whose terms are in $\mathcal{B}$ and
+whose cohomology surjects onto the cohomology of $X^\bullet$. Clearly, if
+$d^i(E^i) = (D^{i + 1} + E^{i + 1}) \cap \Im(d^i)$ then we see that
+the map on cohomology is also injective. For $n \gg 0$ we can
+take $E^n$ equal to $0$. By descending induction
+we can choose $E^i$ for all $i$ with the desired property.
+Namely, given $E^{i + 1}, E^{i + 2}, \ldots$ we choose $E^i \subset X^i$
+such that $d^i(E^i) = (D^{i + 1} + E^{i + 1}) \cap \Im(d^i)$.
+This is possible by our assumption in the lemma combined with
+the fact that $(D^{i + 1} + E^{i + 1}) \cap \Im(d^i)$ is
+in $\mathcal{B}$ as $\mathcal{B}$ is a Serre subcategory of $\mathcal{A}$.
+
+\medskip\noindent
+The claim above implies the lemma. Essential surjectivity is immediate
+from the claim. Let us prove faithfulness. Namely, suppose we have
+a morphism $f : U^\bullet \to V^\bullet$ of bounded above complexes
+of $\mathcal{B}$ whose image in $D(\mathcal{A})$ is zero. Then
+there exists a quasi-isomorphism $s : V^\bullet \to X^\bullet$
+into a bounded above complex of $\mathcal{A}$ such that
+$s \circ f$ is homotopic to zero. Choose a homotopy
+$h^i : U^i \to X^{i - 1}$ between $0$ and $s \circ f$.
+Apply the claim with $B^i = h^{i + 1}(U^{i + 1}) + s^i(V^i)$.
+The resulting map $s' : V^\bullet \to Y^\bullet$
+is a quasi-isomorphism as well and $s' \circ f$ is homotopic
+to zero as is clear from the fact that $h^i$ factors through $Y^{i - 1}$.
This proves faithfulness. Full faithfulness is proved in
exactly the same manner.
+\end{proof}
+
+
+
+
+
+\section{Injective resolutions}
+\label{section-injective-resolutions}
+
+\noindent
+In this section we prove some lemmas regarding the existence
+of injective resolutions in abelian categories having enough injectives.
+
+\begin{definition}
+\label{definition-injective-resolution}
+Let $\mathcal{A}$ be an abelian category.
+Let $A \in \Ob(\mathcal{A})$.
+An {\it injective resolution of $A$} is a complex
+$I^\bullet$ together with a map $A \to I^0$ such
+that:
+\begin{enumerate}
+\item We have $I^n = 0$ for $n < 0$.
+\item Each $I^n$ is an injective object of $\mathcal{A}$.
+\item The map $A \to I^0$ is an isomorphism onto $\Ker(d^0)$.
+\item We have $H^i(I^\bullet) = 0$ for $i > 0$.
+\end{enumerate}
+Hence $A[0] \to I^\bullet$ is a quasi-isomorphism.
+In other words the complex
+$$
+\ldots \to 0 \to A \to I^0 \to I^1 \to \ldots
+$$
+is acyclic.
+Let $K^\bullet$ be a complex in $\mathcal{A}$.
+An {\it injective resolution of $K^\bullet$} is a complex
+$I^\bullet$ together with a map $\alpha : K^\bullet \to I^\bullet$
+of complexes such that
+\begin{enumerate}
+\item We have $I^n = 0$ for $n \ll 0$, i.e., $I^\bullet$ is bounded below.
+\item Each $I^n$ is an injective object of $\mathcal{A}$.
+\item The map $\alpha : K^\bullet \to I^\bullet$ is a
+quasi-isomorphism.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In other words an injective resolution $K^\bullet \to I^\bullet$
+gives rise to a diagram
+$$
+\xymatrix{
+\ldots \ar[r] & K^{n - 1} \ar[d] \ar[r] & K^n \ar[d] \ar[r] &
+K^{n + 1} \ar[d] \ar[r] & \ldots \\
+\ldots \ar[r] & I^{n - 1} \ar[r] & I^n \ar[r] & I^{n + 1} \ar[r] & \ldots
+}
+$$
+which induces an isomorphism on cohomology objects in each degree.
+An injective resolution of an object $A$ of $\mathcal{A}$
+is almost the same thing as an injective resolution of
+the complex $A[0]$.
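
\medskip\noindent
For example, in the abelian category of abelian groups the complex
$$
0 \to \mathbf{Q}/\mathbf{Z} \xrightarrow{n} \mathbf{Q}/\mathbf{Z} \to 0
$$
sitting in degrees $0$ and $1$, together with the map
$\mathbf{Z}/n\mathbf{Z} \to \mathbf{Q}/\mathbf{Z}$ sending the class of $1$
to the class of $1/n$, is an injective resolution of $\mathbf{Z}/n\mathbf{Z}$:
the group $\mathbf{Q}/\mathbf{Z}$ is divisible, hence injective, the kernel
of multiplication by $n$ is exactly the image of $\mathbf{Z}/n\mathbf{Z}$,
and multiplication by $n$ is surjective on $\mathbf{Q}/\mathbf{Z}$.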
+
+\begin{lemma}
+\label{lemma-cohomology-bounded-below}
+Let $\mathcal{A}$ be an abelian category.
+Let $K^\bullet$ be a complex of $\mathcal{A}$.
+\begin{enumerate}
+\item If $K^\bullet$ has an injective resolution then
+$H^n(K^\bullet) = 0$ for $n \ll 0$.
+\item If $H^n(K^\bullet) = 0$ for all $n \ll 0$ then there
+exists a quasi-isomorphism $K^\bullet \to L^\bullet$
+with $L^\bullet$ bounded below.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. For the second statement use
+$L^\bullet = \tau_{\geq n}K^\bullet$ for
+some $n \ll 0$. See
+Homology, Section \ref{homology-section-truncations}
+for the definition of the truncation $\tau_{\geq n}$.
+\end{proof}
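
\medskip\noindent
Concretely, the truncation used in the proof of (2) is the complex
$$
\tau_{\geq n}K^\bullet :
\ldots \to 0 \to \Coker(d^{n - 1}) \to K^{n + 1} \to K^{n + 2} \to \ldots
$$
with $\Coker(d^{n - 1})$ placed in degree $n$. The canonical map
$K^\bullet \to \tau_{\geq n}K^\bullet$ induces an isomorphism on $H^i$
for $i \geq n$, hence is a quasi-isomorphism as soon as
$H^i(K^\bullet) = 0$ for all $i < n$.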
+
+\begin{lemma}
+\label{lemma-injective-resolutions-exist}
+Let $\mathcal{A}$ be an abelian category.
+Assume $\mathcal{A}$ has enough injectives.
+\begin{enumerate}
+\item Any object of $\mathcal{A}$ has an injective resolution.
+\item If $H^n(K^\bullet) = 0$ for all $n \ll 0$ then
+$K^\bullet$ has an injective resolution.
+\item If $K^\bullet$ is a complex with $K^n = 0$ for $n < a$, then
+there exists an injective resolution $\alpha : K^\bullet \to I^\bullet$
+with $I^n = 0$ for $n < a$ such that each $\alpha^n : K^n \to I^n$ is
+injective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). First choose an injection $A \to I^0$ of $A$ into an
+injective object of $\mathcal{A}$. Next, choose an injection
$I^0/A \to I^1$ into an injective object of $\mathcal{A}$.
+Denote $d^0$ the induced map $I^0 \to I^1$.
+Next, choose an injection $I^1/\Im(d^0) \to I^2$ into
+an injective object of $\mathcal{A}$. Denote $d^1$ the induced
+map $I^1 \to I^2$. And so on.
+By Lemma \ref{lemma-cohomology-bounded-below} part (2) follows from part (3).
+Part (3) is a special case of
+Lemma \ref{lemma-subcategory-right-resolution}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-acyclic-is-zero}
+Let $\mathcal{A}$ be an abelian category.
+Let $K^\bullet$ be an acyclic complex.
Let $I^\bullet$ be a bounded below complex consisting of injective objects.
+Any morphism $K^\bullet \to I^\bullet$ is homotopic to zero.
+\end{lemma}
+
+\begin{proof}
+Let $\alpha : K^\bullet \to I^\bullet$ be a morphism of
+complexes. Assume that $\alpha^j = 0$ for $j < n$.
+We will show that there exists a morphism $h : K^{n + 1} \to I^n$
+such that $\alpha^n = h \circ d$. Thus $\alpha$ will be homotopic
+to the morphism of complexes $\beta$ defined by
+$$
+\beta^j =
+\left\{
+\begin{matrix}
+0 & \text{if} & j \leq n \\
+\alpha^{n + 1} - d \circ h & \text{if} & j = n + 1 \\
+\alpha^j & \text{if} & j > n + 1
+\end{matrix}
+\right.
+$$
+This will clearly prove the lemma (by induction).
+To prove the existence of $h$ note that
+$\alpha^n|_{d^{n - 1}(K^{n - 1})} = 0$ since
+$\alpha^{n - 1} = 0$. Since $K^\bullet$ is acyclic we
+have $d^{n - 1}(K^{n - 1}) = \Ker(K^n \to K^{n + 1})$.
+Hence we can think of $\alpha^n$ as a map into $I^n$ defined
+on the subobject $\Im(K^n \to K^{n + 1})$ of $K^{n + 1}$.
+By injectivity of the object $I^n$ we can extend this to
+a map $h : K^{n + 1} \to I^n$ as desired.
+\end{proof}
+
+\begin{remark}
+\label{remark-easier-proofs}
+Let $\mathcal{A}$ be an abelian category.
+Using the fact that $K(\mathcal{A})$ is a triangulated category we
+may use
+Lemma \ref{lemma-acyclic-is-zero}
+to obtain proofs of some of the lemmas below which are usually proved by
+chasing through diagrams.
+Namely, suppose that $\alpha : K^\bullet \to L^\bullet$ is a quasi-isomorphism
+of complexes. Then
+$$
+(K^\bullet, L^\bullet, C(\alpha)^\bullet, \alpha, i, -p)
+$$
+is a distinguished triangle in $K(\mathcal{A})$
+(Lemma \ref{lemma-the-same-up-to-isomorphisms})
+and $C(\alpha)^\bullet$ is an acyclic complex
+(Lemma \ref{lemma-acyclic}).
+Next, let $I^\bullet$ be a bounded below complex of injective objects. Then
+$$
+\xymatrix{
+\Hom_{K(\mathcal{A})}(C(\alpha)^\bullet, I^\bullet) \ar[r] &
+\Hom_{K(\mathcal{A})}(L^\bullet, I^\bullet) \ar[r] &
+\Hom_{K(\mathcal{A})}(K^\bullet, I^\bullet) \ar[lld] \\
+\Hom_{K(\mathcal{A})}(C(\alpha)^\bullet[-1], I^\bullet)
+}
+$$
+is an exact sequence of abelian groups, see
+Lemma \ref{lemma-representable-homological}.
+At this point
+Lemma \ref{lemma-acyclic-is-zero}
+guarantees that the outer two groups are zero and hence
+$\Hom_{K(\mathcal{A})}(L^\bullet, I^\bullet) =
+\Hom_{K(\mathcal{A})}(K^\bullet, I^\bullet)$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-morphisms-lift}
+Let $\mathcal{A}$ be an abelian category.
+Consider a solid diagram
+$$
+\xymatrix{
+K^\bullet \ar[r]_\alpha \ar[d]_\gamma & L^\bullet \ar@{-->}[dl]^\beta \\
+I^\bullet
+}
+$$
+where $I^\bullet$ is bounded below and consists of injective
+objects, and $\alpha$ is a quasi-isomorphism.
+\begin{enumerate}
+\item There exists a map of complexes $\beta$ making the diagram
+commute up to homotopy.
+\item If $\alpha$ is injective in every degree
+then we can find a $\beta$ which makes the diagram commute.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The ``correct'' proof of part (1) is explained in
+Remark \ref{remark-easier-proofs}.
+We also give a direct proof here.
+
+\medskip\noindent
+We first show that (2) implies (1). Namely, let
$\tilde \alpha : K^\bullet \to \tilde L^\bullet$, $\pi$, $s$ be as in
+Lemma \ref{lemma-make-injective}. Since $\tilde \alpha$ is injective
+by (2) there exists a morphism $\tilde \beta : \tilde L^\bullet \to I^\bullet$
+such that $\gamma = \tilde \beta \circ \tilde \alpha$. Set
+$\beta = \tilde \beta \circ s$. Then we have
+$$
+\beta \circ \alpha
+=
+\tilde \beta \circ s \circ \pi \circ \tilde \alpha
+\sim
+\tilde \beta \circ \tilde \alpha
+=
+\gamma
+$$
+as desired.
+
+\medskip\noindent
+Assume that $\alpha : K^\bullet \to L^\bullet$ is injective.
+Suppose we have already defined $\beta$ in all degrees
+$\leq n - 1$ compatible with differentials and such that
+$\gamma^j = \beta^j \circ \alpha^j$ for all $j \leq n - 1$.
+Consider the commutative solid diagram
+$$
+\xymatrix{
+K^{n - 1} \ar[r] \ar@/_2pc/[dd]_\gamma \ar[d]^\alpha &
+K^n \ar@/^2pc/[dd]^\gamma \ar[d]^\alpha \\
+L^{n - 1} \ar[r] \ar[d]^\beta &
+L^n \ar@{-->}[d] \\
+I^{n - 1} \ar[r] &
+I^n
+}
+$$
+Thus we see that the dotted arrow is prescribed on the subobjects
+$\alpha(K^n)$ and $d^{n - 1}(L^{n - 1})$. Moreover, these two arrows
+agree on $\alpha(d^{n - 1}(K^{n - 1}))$. Hence if
+\begin{equation}
+\label{equation-qis}
+\alpha(d^{n - 1}(K^{n - 1}))
+=
+\alpha(K^n) \cap d^{n - 1}(L^{n - 1})
+\end{equation}
+then these morphisms glue to a morphism
+$\alpha(K^n) + d^{n - 1}(L^{n - 1}) \to I^n$ and, using the injectivity
+of $I^n$, we can extend this to a morphism from all of $L^n$ into $I^n$.
After this, by induction we get the morphism $\beta$ for all $n$
simultaneously (note that we can set $\beta^n = 0$ for all $n \ll 0$
since $I^\bullet$ is bounded below, which is how the induction starts).
+
+\medskip\noindent
+It remains to prove the equality (\ref{equation-qis}).
+The reader is encouraged to argue this for themselves with a suitable
+diagram chase. Nonetheless here is our argument.
+Note that the inclusion
+$\alpha(d^{n - 1}(K^{n - 1})) \subset \alpha(K^n) \cap d^{n - 1}(L^{n - 1})$
+is obvious. Take an object $T$ of $\mathcal{A}$ and a morphism
+$x : T \to L^n$ whose image is contained in the subobject
+$\alpha(K^n) \cap d^{n - 1}(L^{n - 1})$.
+Since $\alpha$ is injective we see that $x = \alpha \circ x'$ for
+some $x' : T \to K^n$. Moreover, since $x$ lies in $d^{n - 1}(L^{n - 1})$
+we see that $d^n \circ x = 0$. Hence using injectivity of $\alpha$ again
+we see that $d^n \circ x' = 0$. Thus $x'$ gives a morphism
+$[x'] : T \to H^n(K^\bullet)$. On the other hand the corresponding
+map $[x] : T \to H^n(L^\bullet)$ induced by $x$ is zero by assumption.
+Since $\alpha$ is a quasi-isomorphism we conclude that $[x'] = 0$.
+This of course means exactly that the image of $x'$ is
+contained in $d^{n - 1}(K^{n - 1})$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphisms-equal-up-to-homotopy}
+Let $\mathcal{A}$ be an abelian category.
+Consider a solid diagram
+$$
+\xymatrix{
+K^\bullet \ar[r]_\alpha \ar[d]_\gamma & L^\bullet \ar@{-->}[dl]^{\beta_i} \\
+I^\bullet
+}
+$$
+where $I^\bullet$ is bounded below and consists of injective
+objects, and $\alpha$ is a quasi-isomorphism.
+Any two morphisms $\beta_1, \beta_2$ making the diagram commute
+up to homotopy are homotopic.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Remark \ref{remark-easier-proofs}.
+We also give a direct argument here.
+
+\medskip\noindent
Let $\tilde \alpha : K^\bullet \to \tilde L^\bullet$, $\pi$, $s$ be as in
Lemma \ref{lemma-make-injective}. If we can show that $\beta_1 \circ \pi$
+is homotopic to $\beta_2 \circ \pi$, then we deduce that
+$\beta_1 \sim \beta_2$ because $\pi \circ s$ is the identity.
+Hence we may assume $\alpha^n : K^n \to L^n$ is the
+inclusion of a direct summand for all $n$. Thus we get a
+short exact sequence of complexes
+$$
+0 \to K^\bullet \to L^\bullet \to M^\bullet \to 0
+$$
+which is termwise split and such that $M^\bullet$ is acyclic.
+We choose splittings $L^n = K^n \oplus M^n$, so we have
+$\beta_i^n : K^n \oplus M^n \to I^n$ and $\gamma^n : K^n \to I^n$.
+In this case the condition on $\beta_i$ is that there are morphisms
+$h_i^n : K^n \to I^{n - 1}$ such that
+$$
+\gamma^n - \beta_i^n|_{K^n} = d \circ h_i^n + h_i^{n + 1} \circ d
+$$
+Thus we see that
+$$
+\beta_1^n|_{K^n} - \beta_2^n|_{K^n}
+=
+d \circ (h_1^n - h_2^n) + (h_1^{n + 1} - h_2^{n + 1}) \circ d
+$$
+Consider the map $h^n : K^n \oplus M^n \to I^{n - 1}$ which
+equals $h_1^n - h_2^n$ on the first summand and zero on the second.
+Then we see that
+$$
+\beta_1^n - \beta_2^n
+-
(d \circ h^n + h^{n + 1} \circ d)
+$$
+is a morphism of complexes $L^\bullet \to I^\bullet$
+which is identically zero on the subcomplex $K^\bullet$.
+Hence it factors as $L^\bullet \to M^\bullet \to I^\bullet$.
+Thus the result of the lemma follows from Lemma \ref{lemma-acyclic-is-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphisms-into-injective-complex}
+Let $\mathcal{A}$ be an abelian category.
Let $I^\bullet$ be a bounded below complex consisting of injective
+objects. Let $L^\bullet \in K(\mathcal{A})$. Then
+$$
+\Mor_{K(\mathcal{A})}(L^\bullet, I^\bullet)
+=
+\Mor_{D(\mathcal{A})}(L^\bullet, I^\bullet).
+$$
+\end{lemma}
+
+\begin{proof}
+Let $a$ be an element of the right hand side.
+We may represent $a = \gamma\alpha^{-1}$ where
+$\alpha : K^\bullet \to L^\bullet$
+is a quasi-isomorphism and $\gamma : K^\bullet \to I^\bullet$ is a map
+of complexes. By
+Lemma \ref{lemma-morphisms-lift}
+we can find a morphism $\beta : L^\bullet \to I^\bullet$ such that
+$\beta \circ \alpha$ is homotopic to $\gamma$. This proves that the
+map is surjective. Let $b$ be an element of the left hand side
+which maps to zero in the right hand side. Then $b$ is the homotopy class
+of a morphism $\beta : L^\bullet \to I^\bullet$ such that there exists a
+quasi-isomorphism $\alpha : K^\bullet \to L^\bullet$ with
+$\beta \circ \alpha$ homotopic to zero. Then
+Lemma \ref{lemma-morphisms-equal-up-to-homotopy}
+shows that $\beta$ is homotopic to zero also, i.e., $b = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-resolution-ses}
+Let $\mathcal{A}$ be an abelian category.
+Assume $\mathcal{A}$ has enough injectives.
+For any short exact sequence
+$0 \to A^\bullet \to B^\bullet \to C^\bullet \to 0$
+of $\text{Comp}^{+}(\mathcal{A})$ there exists a
+commutative diagram in $\text{Comp}^{+}(\mathcal{A})$
+$$
+\xymatrix{
+0 \ar[r] &
+A^\bullet \ar[r] \ar[d] &
+B^\bullet \ar[r] \ar[d] &
+C^\bullet \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+I_1^\bullet \ar[r] &
+I_2^\bullet \ar[r] &
+I_3^\bullet \ar[r] &
+0
+}
+$$
+where the vertical arrows are injective resolutions and
+the rows are short exact sequences of complexes.
+In fact, given any injective resolution $A^\bullet \to I^\bullet$
+we may assume $I_1^\bullet = I^\bullet$.
+\end{lemma}
+
+\begin{proof}
+Step 1. Choose an injective resolution $A^\bullet \to I^\bullet$ (see
+Lemma \ref{lemma-injective-resolutions-exist}) or use the given one.
+Recall that $\text{Comp}^{+}(\mathcal{A})$ is an
+abelian category, see
+Homology, Lemma \ref{homology-lemma-cat-cochain-abelian}.
+Hence we may form the pushout along
+the injective map $A^\bullet \to I^\bullet$ to get
+$$
+\xymatrix{
+0 \ar[r] &
+A^\bullet \ar[r] \ar[d] &
+B^\bullet \ar[r] \ar[d] &
+C^\bullet \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+I^\bullet \ar[r] &
+E^\bullet \ar[r] &
+C^\bullet \ar[r] &
+0
+}
+$$
+Note that the lower short exact sequence is termwise split, see
+Homology, Lemma \ref{homology-lemma-characterize-injectives}.
+Hence it suffices to prove the lemma when
+$0 \to A^\bullet \to B^\bullet \to C^\bullet \to 0$ is
+termwise split.
+
+\medskip\noindent
+Step 2. Choose splittings. In other words, write
+$B^n = A^n \oplus C^n$. Denote $\delta : C^\bullet \to A^\bullet[1]$
+the morphism as in
+Homology, Lemma \ref{homology-lemma-ses-termwise-split-cochain}.
+Choose injective resolutions $f_1 : A^\bullet \to I_1^\bullet$
+and $f_3 : C^\bullet \to I_3^\bullet$. (If $A^\bullet$ is a complex of
+injectives, then use $I_1^\bullet = A^\bullet$.)
+We may assume $f_3$ is injective in
+every degree. By Lemma \ref{lemma-morphisms-lift} we may find
+a morphism $\delta' : I_3^\bullet \to I_1^\bullet[1]$ such
+that $\delta' \circ f_3 = f_1[1] \circ \delta$ (equality of
+morphisms of complexes). Set $I_2^n = I_1^n \oplus I_3^n$.
+Define
+$$
+d_{I_2}^n =
+\left(
+\begin{matrix}
+d_{I_1}^n & (\delta')^n \\
+0 & d_{I_3}^n
+\end{matrix}
+\right)
+$$
+and define the maps $B^n \to I_2^n$ to be given as the
+sum of the maps $A^n \to I_1^n$ and $C^n \to I_3^n$.
+Everything is clear.
+\end{proof}
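
\medskip\noindent
For instance, to see that $d_{I_2}$ squares to zero, compute
$$
d_{I_2}^{n + 1} \circ d_{I_2}^n
=
\left(
\begin{matrix}
d_{I_1}^{n + 1} \circ d_{I_1}^n &
d_{I_1}^{n + 1} \circ (\delta')^n + (\delta')^{n + 1} \circ d_{I_3}^n \\
0 &
d_{I_3}^{n + 1} \circ d_{I_3}^n
\end{matrix}
\right)
$$
The diagonal entries vanish as $I_1^\bullet$ and $I_3^\bullet$ are
complexes, and the off diagonal entry vanishes because
$\delta' : I_3^\bullet \to I_1^\bullet[1]$ is a morphism of complexes, so
that $(\delta')^{n + 1} \circ d_{I_3}^n = -d_{I_1}^{n + 1} \circ (\delta')^n$
(using the sign convention for the differential of the shifted complex).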
+
+
+
+
+
+
+
+
+\section{Projective resolutions}
+\label{section-projective-resolutions}
+
+\noindent
+This section is dual to
+Section \ref{section-injective-resolutions}.
+We give definitions and state results, but we do not reprove the lemmas.
+
+\begin{definition}
+\label{definition-projective-resolution}
+Let $\mathcal{A}$ be an abelian category.
+Let $A \in \Ob(\mathcal{A})$.
A {\it projective resolution of $A$} is a complex
+$P^\bullet$ together with a map $P^0 \to A$ such
+that:
+\begin{enumerate}
+\item We have $P^n = 0$ for $n > 0$.
\item Each $P^n$ is a projective object of $\mathcal{A}$.
+\item The map $P^0 \to A$ induces an isomorphism $\Coker(d^{-1}) \to A$.
+\item We have $H^i(P^\bullet) = 0$ for $i < 0$.
+\end{enumerate}
+Hence $P^\bullet \to A[0]$ is a quasi-isomorphism.
+In other words the complex
+$$
+\ldots \to P^{-1} \to P^0 \to A \to 0 \to \ldots
+$$
+is acyclic. Let $K^\bullet$ be a complex in $\mathcal{A}$.
A {\it projective resolution of $K^\bullet$} is a complex
+$P^\bullet$ together with a map $\alpha : P^\bullet \to K^\bullet$
+of complexes such that
+\begin{enumerate}
+\item We have $P^n = 0$ for $n \gg 0$, i.e., $P^\bullet$ is bounded above.
\item Each $P^n$ is a projective object of $\mathcal{A}$.
+\item The map $\alpha : P^\bullet \to K^\bullet$ is a
+quasi-isomorphism.
+\end{enumerate}
+\end{definition}
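
\medskip\noindent
For example, in the abelian category of abelian groups the complex
$$
0 \to \mathbf{Z} \xrightarrow{n} \mathbf{Z} \to 0
$$
sitting in degrees $-1$ and $0$, together with the quotient map
$\mathbf{Z} \to \mathbf{Z}/n\mathbf{Z}$, is a projective resolution of
$\mathbf{Z}/n\mathbf{Z}$: free abelian groups are projective,
multiplication by $n$ is injective, and its cokernel is
$\mathbf{Z}/n\mathbf{Z}$.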
+
+\begin{lemma}
+\label{lemma-cohomology-bounded-above}
+Let $\mathcal{A}$ be an abelian category.
+Let $K^\bullet$ be a complex of $\mathcal{A}$.
+\begin{enumerate}
+\item If $K^\bullet$ has a projective resolution then
+$H^n(K^\bullet) = 0$ for $n \gg 0$.
+\item If $H^n(K^\bullet) = 0$ for $n \gg 0$ then there
+exists a quasi-isomorphism $L^\bullet \to K^\bullet$
+with $L^\bullet$ bounded above.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Dual to
+Lemma \ref{lemma-cohomology-bounded-below}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projective-resolutions-exist}
+Let $\mathcal{A}$ be an abelian category.
+Assume $\mathcal{A}$ has enough projectives.
+\begin{enumerate}
+\item Any object of $\mathcal{A}$ has a projective resolution.
+\item If $H^n(K^\bullet) = 0$ for all $n \gg 0$ then
+$K^\bullet$ has a projective resolution.
+\item If $K^\bullet$ is a complex with $K^n = 0$ for $n > a$, then
+there exists a projective resolution $\alpha : P^\bullet \to K^\bullet$
+with $P^n = 0$ for $n > a$ such that each $\alpha^n : P^n \to K^n$ is
+surjective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Dual to
+Lemma \ref{lemma-injective-resolutions-exist}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projective-into-acyclic-is-zero}
+Let $\mathcal{A}$ be an abelian category.
+Let $K^\bullet$ be an acyclic complex.
Let $P^\bullet$ be a bounded above complex consisting of projective objects.
+Any morphism $P^\bullet \to K^\bullet$ is homotopic to zero.
+\end{lemma}
+
+\begin{proof}
+Dual to
+Lemma \ref{lemma-acyclic-is-zero}.
+\end{proof}
+
+\begin{remark}
+\label{remark-easier-projective}
+Let $\mathcal{A}$ be an abelian category.
+Suppose that $\alpha : K^\bullet \to L^\bullet$ is a quasi-isomorphism
+of complexes. Let $P^\bullet$ be a bounded above complex of projectives.
+Then
+$$
+\Hom_{K(\mathcal{A})}(P^\bullet, K^\bullet)
+\longrightarrow
+\Hom_{K(\mathcal{A})}(P^\bullet, L^\bullet)
+$$
+is an isomorphism. This is dual to
+Remark \ref{remark-easier-proofs}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-morphisms-lift-projective}
+Let $\mathcal{A}$ be an abelian category.
+Consider a solid diagram
+$$
+\xymatrix{
+K^\bullet & L^\bullet \ar[l]^\alpha \\
+P^\bullet \ar[u] \ar@{-->}[ru]_\beta
+}
+$$
+where $P^\bullet$ is bounded above and consists of projective
+objects, and $\alpha$ is a quasi-isomorphism.
+\begin{enumerate}
+\item There exists a map of complexes $\beta$ making the diagram
+commute up to homotopy.
+\item If $\alpha$ is surjective in every degree
+then we can find a $\beta$ which makes the diagram commute.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Dual to
+Lemma \ref{lemma-morphisms-lift}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphisms-equal-up-to-homotopy-projective}
+Let $\mathcal{A}$ be an abelian category. Consider a solid diagram
+$$
+\xymatrix{
+K^\bullet & L^\bullet \ar[l]^\alpha \\
+P^\bullet \ar[u] \ar@{-->}[ru]_{\beta_i}
+}
+$$
+where $P^\bullet$ is bounded above and consists of projective
+objects, and $\alpha$ is a quasi-isomorphism.
+Any two morphisms $\beta_1, \beta_2$ making the diagram commute
+up to homotopy are homotopic.
+\end{lemma}
+
+\begin{proof}
+Dual to
+Lemma \ref{lemma-morphisms-equal-up-to-homotopy}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphisms-from-projective-complex}
+Let $\mathcal{A}$ be an abelian category.
Let $P^\bullet$ be a bounded above complex consisting of projective
+objects. Let $L^\bullet \in K(\mathcal{A})$. Then
+$$
+\Mor_{K(\mathcal{A})}(P^\bullet, L^\bullet)
+=
+\Mor_{D(\mathcal{A})}(P^\bullet, L^\bullet).
+$$
+\end{lemma}
+
+\begin{proof}
+Dual to
+Lemma \ref{lemma-morphisms-into-injective-complex}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projective-resolution-ses}
+Let $\mathcal{A}$ be an abelian category.
+Assume $\mathcal{A}$ has enough projectives.
+For any short exact sequence
+$0 \to A^\bullet \to B^\bullet \to C^\bullet \to 0$
+of $\text{Comp}^{+}(\mathcal{A})$ there exists a
+commutative diagram in $\text{Comp}^{+}(\mathcal{A})$
+$$
+\xymatrix{
+0 \ar[r] &
+P_1^\bullet \ar[r] \ar[d] &
+P_2^\bullet \ar[r] \ar[d] &
+P_3^\bullet \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+A^\bullet \ar[r] &
+B^\bullet \ar[r] &
+C^\bullet \ar[r] &
+0
+}
+$$
+where the vertical arrows are projective resolutions and
+the rows are short exact sequences of complexes.
+In fact, given any projective resolution $P^\bullet \to C^\bullet$
+we may assume $P_3^\bullet = P^\bullet$.
+\end{lemma}
+
+\begin{proof}
+Dual to
+Lemma \ref{lemma-injective-resolution-ses}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-precise-vanishing}
+Let $\mathcal{A}$ be an abelian category.
+Let $P^\bullet$, $K^\bullet$ be complexes.
+Let $n \in \mathbf{Z}$. Assume that
+\begin{enumerate}
+\item $P^\bullet$ is a bounded complex consisting of projective
+objects,
+\item $P^i = 0$ for $i < n$, and
+\item $H^i(K^\bullet) = 0$ for $i \geq n$.
+\end{enumerate}
+Then
+$\Hom_{K(\mathcal{A})}(P^\bullet, K^\bullet) =
+\Hom_{D(\mathcal{A})}(P^\bullet, K^\bullet) = 0$.
+\end{lemma}
+
+\begin{proof}
+The first equality follows from
+Lemma \ref{lemma-morphisms-from-projective-complex}.
+Note that there is a distinguished triangle
+$$
+(\tau_{\leq n - 1}K^\bullet, K^\bullet, \tau_{\geq n}K^\bullet, f, g, h)
+$$
+by Remark \ref{remark-truncation-distinguished-triangle}. Hence, by
+Lemma \ref{lemma-representable-homological}
+it suffices to prove
+$\Hom_{K(\mathcal{A})}(P^\bullet, \tau_{\leq n - 1}K^\bullet) = 0$ and
+$\Hom_{K(\mathcal{A})}(P^\bullet, \tau_{\geq n} K^\bullet) = 0$.
+The first vanishing is trivial and the second is
+Lemma \ref{lemma-projective-into-acyclic-is-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-map}
+Let $\mathcal{A}$ be an abelian category.
+Let $\beta : P^\bullet \to L^\bullet$ and
+$\alpha : E^\bullet \to L^\bullet$ be
+maps of complexes. Let $n \in \mathbf{Z}$. Assume
+\begin{enumerate}
+\item $P^\bullet$ is a bounded complex of projectives and
+$P^i = 0$ for $i < n$,
+\item $H^i(\alpha)$ is an isomorphism for $i > n$ and surjective
+for $i = n$.
+\end{enumerate}
+Then there exists a map of complexes $\gamma : P^\bullet \to E^\bullet$
+such that $\alpha \circ \gamma$ and $\beta$ are homotopic.
+\end{lemma}
+
+\begin{proof}
+Consider the cone $C^\bullet = C(\alpha)^\bullet$ with map
+$i : L^\bullet \to C^\bullet$.
+Note that $i \circ \beta$ is zero by
+Lemma \ref{lemma-precise-vanishing}.
+Hence we can lift $\beta$ to $E^\bullet$ by
+Lemma \ref{lemma-representable-homological}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Right derived functors and injective resolutions}
+\label{section-right-derived-functor}
+
+\noindent
+At this point we can use the material above to define the right derived
+functors of an additive functor between an abelian category having
+enough injectives and a general abelian category.
+
+\begin{lemma}
+\label{lemma-injective-acyclic}
+Let $\mathcal{A}$ be an abelian category.
+Let $I \in \Ob(\mathcal{A})$ be an injective object.
+Let $I^\bullet$ be a bounded below complex of injectives in $\mathcal{A}$.
+\begin{enumerate}
+\item $I^\bullet$ computes $RF$ relative to $\text{Qis}^{+}(\mathcal{A})$
+for any exact functor $F : K^{+}(\mathcal{A}) \to \mathcal{D}$
+into any triangulated category $\mathcal{D}$.
+\item $I$ is right acyclic for any additive functor
+$F : \mathcal{A} \to \mathcal{B}$ into any abelian category $\mathcal{B}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Part (2) is a direct consequence of part (1) and
+Definition \ref{definition-derived-functor}.
+To prove (1) let $\alpha : I^\bullet \to K^\bullet$ be a quasi-isomorphism
+into a complex. By
+Lemma \ref{lemma-morphisms-lift}
+we see that $\alpha$ has a left inverse. Hence the category
+$I^\bullet/\text{Qis}^{+}(\mathcal{A})$ is essentially constant with value
+$\text{id} : I^\bullet \to I^\bullet$. Thus also the ind-object
+$$
+I^\bullet/\text{Qis}^{+}(\mathcal{A}) \longrightarrow \mathcal{D}, \quad
+(I^\bullet \to K^\bullet) \longmapsto F(K^\bullet)
+$$
+is essentially constant with value $F(I^\bullet)$. This proves (1), see
+Definitions \ref{definition-right-derived-functor-defined} and
+\ref{definition-computes}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-enough-injectives-right-derived}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+\begin{enumerate}
+\item For any exact functor $F : K^{+}(\mathcal{A}) \to \mathcal{D}$
+into a triangulated category $\mathcal{D}$ the right derived
+functor
+$$
+RF : D^{+}(\mathcal{A}) \longrightarrow \mathcal{D}
+$$
+is everywhere defined.
+\item For any additive functor $F : \mathcal{A} \to \mathcal{B}$ into an
+abelian category $\mathcal{B}$ the right derived functor
+$$
+RF : D^{+}(\mathcal{A}) \longrightarrow D^{+}(\mathcal{B})
+$$
+is everywhere defined.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Combine
+Lemma \ref{lemma-injective-acyclic}
+and
+Proposition \ref{proposition-enough-acyclics}
+for the second assertion. To see the first assertion combine
+Lemma \ref{lemma-injective-resolutions-exist},
+Lemma \ref{lemma-injective-acyclic},
+Lemma \ref{lemma-existence-computes},
+and Equation (\ref{equation-everywhere}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-derived-properties}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor.
+\begin{enumerate}
+\item The functor $RF$ is an exact functor
+$D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$.
+\item The functor $RF$ induces an exact functor
+$K^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$.
+\item The functor $RF$ induces a $\delta$-functor
+$\text{Comp}^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$.
+\item The functor $RF$ induces a $\delta$-functor
+$\mathcal{A} \to D^{+}(\mathcal{B})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma simply reviews some of the results obtained so far.
+Note that by
+Lemma \ref{lemma-enough-injectives-right-derived}
+$RF$ is everywhere defined. Here are some references:
+\begin{enumerate}
+\item The derived functor is exact: This boils down to
+Lemma \ref{lemma-2-out-of-3-defined}.
+\item This is true because $K^{+}(\mathcal{A}) \to D^{+}(\mathcal{A})$
+is exact and compositions of exact functors are exact.
+\item This is true because
+$\text{Comp}^{+}(\mathcal{A}) \to D^{+}(\mathcal{A})$ is
+a $\delta$-functor, see
+Lemma \ref{lemma-derived-canonical-delta-functor}.
+\item This is true because $\mathcal{A} \to \text{Comp}^{+}(\mathcal{A})$
+is exact and precomposing a $\delta$-functor by an exact functor gives
+a $\delta$-functor.
+\end{enumerate}
+\end{proof}
+
+\begin{lemma}
+\label{lemma-higher-derived-functors}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Let $F : \mathcal{A} \to \mathcal{B}$ be a left exact functor.
+\begin{enumerate}
+\item For any short exact sequence
+$0 \to A^\bullet \to B^\bullet \to C^\bullet \to 0$
+of complexes in $\text{Comp}^{+}(\mathcal{A})$ there
+is an associated long exact sequence
+$$
+\ldots \to
+H^i(RF(A^\bullet)) \to
+H^i(RF(B^\bullet)) \to
+H^i(RF(C^\bullet)) \to
+H^{i + 1}(RF(A^\bullet)) \to \ldots
+$$
+\item The functors $R^iF : \mathcal{A} \to \mathcal{B}$
+are zero for $i < 0$. Also $R^0F = F : \mathcal{A} \to \mathcal{B}$.
+\item We have $R^iF(I) = 0$ for $i > 0$ and $I$ injective.
+\item The sequence $(R^iF, \delta)$ forms a universal $\delta$-functor (see
+Homology, Definition \ref{homology-definition-universal-delta-functor})
+from $\mathcal{A}$ to $\mathcal{B}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma simply reviews some of the results obtained so far.
+Note that by
+Lemma \ref{lemma-enough-injectives-right-derived}
+$RF$ is everywhere defined. Here are some references:
+\begin{enumerate}
+\item This follows from
+Lemma \ref{lemma-right-derived-properties} part (3)
+combined with the long exact cohomology sequence
+(\ref{equation-long-exact-cohomology-sequence-D}) for
+$D^{+}(\mathcal{B})$.
+\item This is
+Lemma \ref{lemma-left-exact-higher-derived}.
+\item This is the fact that injective objects are acyclic.
+\item This is
+Lemma \ref{lemma-right-derived-delta-functor}.
+\end{enumerate}
+\end{proof}
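\medskip\noindent
As an illustration of the lemma, take $\mathcal{A} = \mathcal{B}$ the
category of abelian groups and $F = \Hom(\mathbf{Z}/n\mathbf{Z}, -)$,
a left exact functor. The complex
$0 \to \mathbf{Q} \to \mathbf{Q}/\mathbf{Z} \to 0$ is an injective
resolution of $\mathbf{Z}$, as $\mathbf{Q}$ and $\mathbf{Q}/\mathbf{Z}$
are divisible, hence injective. Applying $F$ gives the complex
$0 \to 0 \to \mathbf{Z}/n\mathbf{Z} \to 0$, since
$\mathbf{Z}/n\mathbf{Z}$ admits no nonzero map to a torsion free group
and the $n$-torsion of $\mathbf{Q}/\mathbf{Z}$ is
$\frac{1}{n}\mathbf{Z}/\mathbf{Z} \cong \mathbf{Z}/n\mathbf{Z}$. Hence
$$
R^0F(\mathbf{Z}) = 0, \quad
R^1F(\mathbf{Z}) = \mathbf{Z}/n\mathbf{Z}, \quad
R^iF(\mathbf{Z}) = 0 \text{ for } i \geq 2.
$$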
+
+
+
+
+
+
+\section{Cartan-Eilenberg resolutions}
+\label{section-cartan-eilenberg}
+
+\noindent
+This section can be expanded. The material can be generalized and applied in
+more cases. Resolutions need not use injectives and the method also
+works in the unbounded case in some situations.
+
+\begin{definition}
+\label{definition-cartan-eilenberg}
+Let $\mathcal{A}$ be an abelian category.
+Let $K^\bullet$ be a bounded below complex.
+A {\it Cartan-Eilenberg resolution} of $K^\bullet$
+is given by a double complex $I^{\bullet, \bullet}$
+and a morphism of complexes $\epsilon : K^\bullet \to I^{\bullet, 0}$
+with the following properties:
+\begin{enumerate}
\item There exists an $i \ll 0$ such that $I^{p, q} = 0$ for all $p < i$
+and all $q$.
+\item We have $I^{p, q} = 0$ if $q < 0$.
+\item The complex $I^{p, \bullet}$ is an injective resolution of $K^p$.
+\item The complex $\Ker(d_1^{p, \bullet})$ is an injective resolution
+of $\Ker(d_K^p)$.
+\item The complex $\Im(d_1^{p, \bullet})$ is an injective resolution
+of $\Im(d_K^p)$.
+\item The complex $H^p_I(I^{\bullet, \bullet})$ is an injective resolution
+of $H^p(K^\bullet)$.
+\end{enumerate}
+\end{definition}
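\medskip\noindent
For example, if $K^\bullet = A[0]$ consists of a single object $A$ placed
in degree $0$, then a Cartan-Eilenberg resolution of $K^\bullet$ is simply
an injective resolution $A \to I^{0, \bullet}$ placed in the column
$p = 0$, with $I^{p, q} = 0$ for $p \not = 0$. Conditions (4), (5), and
(6) impose no further constraints in this case: all kernels, images, and
cohomology objects of $d_1$ vanish except
$\Ker(d_1^{0, \bullet}) = H^0_I(I^{\bullet, \bullet}) = I^{0, \bullet}$,
which is an injective resolution of $A$ by assumption.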
+
+\begin{lemma}
+\label{lemma-cartan-eilenberg}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Let $K^\bullet$ be a bounded below complex.
+There exists a Cartan-Eilenberg resolution of $K^\bullet$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $K^p = 0$ for $p < n$. Decompose $K^\bullet$ into
+short exact sequences as follows: Set $Z^p = \Ker(d^p)$,
+$B^p = \Im(d^{p - 1})$, $H^p = Z^p/B^p$, and consider
+$$
+\begin{matrix}
+0 \to Z^n \to K^n \to B^{n + 1} \to 0 \\
+0 \to B^{n + 1} \to Z^{n + 1} \to H^{n + 1} \to 0 \\
+0 \to Z^{n + 1} \to K^{n + 1} \to B^{n + 2} \to 0 \\
+0 \to B^{n + 2} \to Z^{n + 2} \to H^{n + 2} \to 0 \\
+\ldots
+\end{matrix}
+$$
+Set $I^{p, q} = 0$ for $p < n$. Inductively we choose
+injective resolutions as follows:
+\begin{enumerate}
+\item Choose an injective resolution $Z^n \to J_Z^{n, \bullet}$.
+\item Using Lemma \ref{lemma-injective-resolution-ses} choose injective
+resolutions $K^n \to I^{n, \bullet}$, $B^{n + 1} \to J_B^{n + 1, \bullet}$,
+and an exact sequence of complexes
+$0 \to J_Z^{n, \bullet} \to I^{n, \bullet} \to J_B^{n + 1, \bullet} \to 0$
+compatible with the short exact sequence
+$0 \to Z^n \to K^n \to B^{n + 1} \to 0$.
+\item Using Lemma \ref{lemma-injective-resolution-ses} choose injective
+resolutions $Z^{n + 1} \to J_Z^{n + 1, \bullet}$,
+$H^{n + 1} \to J_H^{n + 1, \bullet}$,
+and an exact sequence of complexes
+$0 \to J_B^{n + 1, \bullet} \to J_Z^{n + 1, \bullet}
+\to J_H^{n + 1, \bullet} \to 0$
+compatible with the short exact sequence
+$0 \to B^{n + 1} \to Z^{n + 1} \to H^{n + 1} \to 0$.
+\item Etc.
+\end{enumerate}
+Taking as maps $d_1^\bullet : I^{p, \bullet} \to I^{p + 1, \bullet}$
+the compositions
+$I^{p, \bullet} \to J_B^{p + 1, \bullet} \to
+J_Z^{p + 1, \bullet} \to I^{p + 1, \bullet}$ everything is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-two-ss-complex-functor}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a left exact functor of
+abelian categories.
+Let $K^\bullet$ be a bounded below complex of $\mathcal{A}$.
+Let $I^{\bullet, \bullet}$ be a Cartan-Eilenberg resolution
+for $K^\bullet$. The spectral sequences
+$({}'E_r, {}'d_r)_{r \geq 0}$ and $({}''E_r, {}''d_r)_{r \geq 0}$
+associated to the double complex $F(I^{\bullet, \bullet})$
+satisfy the relations
+$$
+{}'E_1^{p, q} = R^qF(K^p)
+\quad
+\text{and}
+\quad
+{}''E_2^{p, q} = R^pF(H^q(K^\bullet))
+$$
+Moreover, these spectral sequences are bounded, converge to
+$H^*(RF(K^\bullet))$, and the associated induced filtrations on
+$H^n(RF(K^\bullet))$ are finite.
+\end{lemma}
+
+\begin{proof}
+We will use the following remarks without further mention:
+\begin{enumerate}
+\item As $I^{p, \bullet}$ is an injective resolution of
+$K^p$ we see that $RF$ is defined at $K^p[0]$
+with value $F(I^{p, \bullet})$.
+\item As $H^p_I(I^{\bullet, \bullet})$ is an injective resolution of
+$H^p(K^\bullet)$ the derived functor $RF$ is defined at $H^p(K^\bullet)[0]$
+with value $F(H^p_I(I^{\bullet, \bullet}))$.
+\item By
+Homology, Lemma \ref{homology-lemma-double-complex-gives-resolution}
+the total complex $\text{Tot}(I^{\bullet, \bullet})$
+is an injective resolution of
+$K^\bullet$. Hence $RF$ is defined at $K^\bullet$ with value
+$F(\text{Tot}(I^{\bullet, \bullet}))$.
+\end{enumerate}
+Consider the two spectral sequences associated to the double complex
+$L^{\bullet, \bullet} = F(I^{\bullet, \bullet})$, see
+Homology, Lemma \ref{homology-lemma-ss-double-complex}.
+These are both bounded, converge to $H^*(\text{Tot}(L^{\bullet, \bullet}))$,
+and induce finite filtrations on $H^n(\text{Tot}(L^{\bullet, \bullet}))$, see
+Homology, Lemma \ref{homology-lemma-first-quadrant-ss}.
+Since
+$\text{Tot}(L^{\bullet, \bullet}) =
+\text{Tot}(F(I^{\bullet, \bullet})) =
+F(\text{Tot}(I^{\bullet, \bullet}))$ computes
+$H^n(RF(K^\bullet))$ we find the final assertion of the lemma holds true.
+
+\medskip\noindent
+Computation of the first spectral sequence. We have
+${}'E_1^{p, q} = H^q(L^{p, \bullet})$ in other words
+$$
+{}'E_1^{p, q} = H^q(F(I^{p, \bullet})) = R^qF(K^p)
+$$
+as desired. Observe for later use that the maps
+${}'d_1^{p, q} : {}'E_1^{p, q} \to {}'E_1^{p + 1, q}$ are the maps
+$R^qF(K^p) \to R^qF(K^{p + 1})$ induced by $K^p \to K^{p + 1}$
+and the fact that $R^qF$ is a functor.
+
+\medskip\noindent
+Computation of the second spectral sequence. We have
+${}''E_1^{p, q} = H^q(L^{\bullet, p}) = H^q(F(I^{\bullet, p}))$.
+Note that the complex $I^{\bullet, p}$ is bounded below,
+consists of injectives, and moreover each kernel, image, and
+cohomology group of the differentials is an injective object
+of $\mathcal{A}$. Hence we can split the differentials, i.e.,
+each differential is a split surjection onto a direct summand.
+It follows that the same is true after applying $F$. Hence
+${}''E_1^{p, q} = F(H^q(I^{\bullet, p})) = F(H^q_I(I^{\bullet, p}))$.
The differentials on this are $(-1)^q$ times $F$ applied to
the differential of the complex $H^q_I(I^{\bullet, \bullet})$
which is an injective resolution of $H^q(K^\bullet)$. Hence the
description of the $E_2$ terms.
+\end{proof}
+
+\begin{remark}
+\label{remark-functorial-ss}
+The spectral sequences of Lemma \ref{lemma-two-ss-complex-functor}
+are functorial in the complex $K^\bullet$. This follows from functoriality
+properties of Cartan-Eilenberg resolutions. On the other hand, they are
+both examples of a more general spectral sequence which may be associated
+to a filtered complex of $\mathcal{A}$. The functoriality will follow from
+its construction. We will return to this in the section on the filtered
+derived category, see Remark \ref{remark-final-functorial}.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+\section{Composition of right derived functors}
+\label{section-composition-right-derived-functors}
+
+\noindent
+Sometimes we can compute the right derived functor of a composition.
Suppose that $\mathcal{A}, \mathcal{B}, \mathcal{C}$ are abelian categories.
+Let $F : \mathcal{A} \to \mathcal{B}$ and $G : \mathcal{B} \to \mathcal{C}$
+be left exact functors. Assume that the right derived functors
+$RF : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$,
+$RG : D^{+}(\mathcal{B}) \to D^{+}(\mathcal{C})$, and
+$R(G \circ F) : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{C})$
+are everywhere defined. Then there exists a canonical transformation
+$$
+t : R(G \circ F) \longrightarrow RG \circ RF
+$$
+of functors from $D^{+}(\mathcal{A})$ to $D^{+}(\mathcal{C})$, see
+Lemma \ref{lemma-compose-derived-functors-general}.
+This transformation need not always be an isomorphism.
+
+\begin{lemma}
+\label{lemma-compose-derived-functors}
+Let $\mathcal{A}, \mathcal{B}, \mathcal{C}$ be abelian categories.
+Let $F : \mathcal{A} \to \mathcal{B}$ and $G : \mathcal{B} \to \mathcal{C}$
+be left exact functors. Assume $\mathcal{A}$, $\mathcal{B}$ have
+enough injectives. The following are equivalent
+\begin{enumerate}
+\item $F(I)$ is right acyclic for $G$ for each injective object $I$
+of $\mathcal{A}$, and
+\item the canonical map
+$$
t : R(G \circ F) \longrightarrow RG \circ RF
$$
is an isomorphism of functors from $D^{+}(\mathcal{A})$ to $D^{+}(\mathcal{C})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If (2) holds, then (1) follows by evaluating the isomorphism
+$t$ on $RF(I) = F(I)$. Conversely, assume (1) holds.
+Let $A^\bullet$ be a bounded below complex of $\mathcal{A}$.
+Choose an injective resolution $A^\bullet \to I^\bullet$.
+The map $t$ is given (see proof of
+Lemma \ref{lemma-compose-derived-functors-general})
+by the maps
+$$
+R(G \circ F)(A^\bullet) =
+(G \circ F)(I^\bullet) =
G(F(I^\bullet)) \to
+RG(F(I^\bullet)) =
+RG(RF(A^\bullet))
+$$
+where the arrow is an isomorphism by
+Lemma \ref{lemma-leray-acyclicity}.
+\end{proof}
+
+\begin{lemma}[Grothendieck spectral sequence]
+\label{lemma-grothendieck-spectral-sequence}
+With assumptions as in Lemma \ref{lemma-compose-derived-functors}
+and assuming the equivalent conditions (1) and (2) hold.
+Let $X$ be an object of $D^{+}(\mathcal{A})$.
+There exists a spectral sequence $(E_r, d_r)_{r \geq 0}$
+consisting of bigraded objects $E_r$ of $\mathcal{C}$ with
+$d_r$ of bidegree $(r, - r + 1)$ and with
+$$
+E_2^{p, q} = R^pG(H^q(RF(X)))
+$$
+Moreover, this spectral sequence is bounded, converges to
+$H^*(R(G \circ F)(X))$, and induces a finite filtration
+on each $H^n(R(G \circ F)(X))$.
+\end{lemma}
+
+\noindent
+For an object $A$ of $\mathcal{A}$ we get
+$E_2^{p, q} = R^pG(R^qF(A))$ converging to $R^{p + q}(G \circ F)(A)$.
+
+\begin{proof}
+We may represent $X$ by a bounded below complex $A^\bullet$.
+Choose an injective resolution $A^\bullet \to I^\bullet$.
+Choose a Cartan-Eilenberg resolution
+$F(I^\bullet) \to I^{\bullet, \bullet}$ using
+Lemma \ref{lemma-cartan-eilenberg}.
+Apply the second spectral sequence of
+Lemma \ref{lemma-two-ss-complex-functor}.
+\end{proof}
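\medskip\noindent
For example, for an object $A$ of $\mathcal{A}$ the spectral sequence
$E_2^{p, q} = R^pG(R^qF(A))$ converging to $R^{p + q}(G \circ F)(A)$
yields the usual five term exact sequence in low degrees
$$
0 \to R^1G(F(A)) \to R^1(G \circ F)(A) \to G(R^1F(A))
\to R^2G(F(A)) \to R^2(G \circ F)(A)
$$
where we use $R^0F = F$ and $R^0G = G$ because $F$ and $G$ are left exact.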
+
+
+
+
+
+
+\section{Resolution functors}
+\label{section-derived-category}
+
+\noindent
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Denote $\mathcal{I}$ the full additive subcategory of $\mathcal{A}$ whose
+objects are the injective objects of $\mathcal{A}$.
+It turns out that $K^{+}(\mathcal{I})$ and $D^{+}(\mathcal{A})$
+are equivalent in this case (see
+Proposition \ref{proposition-derived-category}).
+For many purposes it therefore makes sense to think of
+$D^{+}(\mathcal{A})$ as the (easier to grok) category $K^{+}(\mathcal{I})$
+in this case.
+
+\begin{proposition}
+\label{proposition-derived-category}
+Let $\mathcal{A}$ be an abelian category.
+Assume $\mathcal{A}$ has enough injectives.
+Denote $\mathcal{I} \subset \mathcal{A}$ the strictly full
+additive subcategory whose objects are the injective objects of
+$\mathcal{A}$.
+The functor
+$$
+K^{+}(\mathcal{I}) \longrightarrow D^{+}(\mathcal{A})
+$$
+is exact, fully faithful and essentially surjective, i.e.,
+an equivalence of triangulated categories.
+\end{proposition}
+
+\begin{proof}
+It is clear that the functor is exact.
+It is essentially surjective by
+Lemma \ref{lemma-injective-resolutions-exist}.
+Fully faithfulness is a consequence of
+Lemma \ref{lemma-morphisms-into-injective-complex}.
+\end{proof}
+
+\noindent
+Proposition \ref{proposition-derived-category}
+implies that we can find resolution functors.
+It turns out that we can prove resolution functors exist
+even in some cases where the abelian category $\mathcal{A}$ is
+a ``big'' category, i.e., has a class of objects.
+
+\begin{definition}
+\label{definition-localization-functor}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+A {\it resolution functor}\footnote{This is likely nonstandard terminology.}
+for $\mathcal{A}$ is given by the following data:
+\begin{enumerate}
+\item for all $K^\bullet \in \Ob(K^{+}(\mathcal{A}))$ a
+bounded below complex of injectives $j(K^\bullet)$, and
+\item for all $K^\bullet \in \Ob(K^{+}(\mathcal{A}))$ a
+quasi-isomorphism $i_{K^\bullet} : K^\bullet \to j(K^\bullet)$.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-resolution-functor}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Given a resolution functor $(j, i)$ there is a unique way to
+turn $j$ into a functor and $i$ into a $2$-isomorphism
+producing a $2$-commutative diagram
+$$
+\xymatrix{
+K^{+}(\mathcal{A}) \ar[rd] \ar[rr]_j & & K^{+}(\mathcal{I}) \ar[ld] \\
+& D^{+}(\mathcal{A})
+}
+$$
+where $\mathcal{I}$ is the full additive subcategory of $\mathcal{A}$
+consisting of injective objects.
+\end{lemma}
+
+\begin{proof}
+For every morphism $\alpha : K^\bullet \to L^\bullet$ of $K^{+}(\mathcal{A})$
+there is a unique morphism
+$j(\alpha) : j(K^\bullet) \to j(L^\bullet)$ in $K^{+}(\mathcal{I})$
+such that
+$$
+\xymatrix{
+K^\bullet \ar[r]_\alpha \ar[d]_{i_{K^\bullet}} &
+L^\bullet \ar[d]^{i_{L^\bullet}} \\
+j(K^\bullet) \ar[r]^{j(\alpha)} & j(L^\bullet)
+}
+$$
+is commutative in $K^{+}(\mathcal{A})$. To see this either use
+Lemmas \ref{lemma-morphisms-lift} and
+\ref{lemma-morphisms-equal-up-to-homotopy}
+or the equivalent
+Lemma \ref{lemma-morphisms-into-injective-complex}.
+The uniqueness implies that $j$ is a functor, and the commutativity of
+the diagram implies that $i$ gives a $2$-morphism which witnesses the
+$2$-commutativity of the diagram of categories in the statement of
+the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-into-derived-category}
+Let $\mathcal{A}$ be an abelian category.
+Assume $\mathcal{A}$ has enough injectives.
+Then a resolution functor $j$ exists and is
+unique up to unique isomorphism of functors.
+\end{lemma}
+
+\begin{proof}
+Consider the set of all objects $K^\bullet$ of $K^{+}(\mathcal{A})$.
+(Recall that by our conventions any category has a set of
+objects unless mentioned otherwise.)
+By Lemma \ref{lemma-injective-resolutions-exist} every object
+has an injective resolution.
By the axiom of choice we can choose for each $K^\bullet$
an injective resolution $i_{K^\bullet} : K^\bullet \to j(K^\bullet)$.
This proves existence. Given two resolution functors $(j, i)$ and
$(j', i')$, Lemmas \ref{lemma-morphisms-lift} and
\ref{lemma-morphisms-equal-up-to-homotopy} produce for each $K^\bullet$
a unique isomorphism $j(K^\bullet) \to j'(K^\bullet)$ in
$K^{+}(\mathcal{I})$ compatible with $i_{K^\bullet}$ and
$i'_{K^\bullet}$, whence the uniqueness up to unique isomorphism
of functors.
\end{proof}
+
+\begin{lemma}
+\label{lemma-j-is-exact}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Any resolution functor
+$j : K^{+}(\mathcal{A}) \to K^{+}(\mathcal{I})$
+is exact.
+\end{lemma}
+
+\begin{proof}
+Denote $i_{K^\bullet} : K^\bullet \to j(K^\bullet)$ the
+canonical maps of Definition \ref{definition-localization-functor}.
+First we discuss the existence of the functorial isomorphism
+$j(K^\bullet[1]) \to j(K^\bullet)[1]$.
+Consider the diagram
+$$
+\xymatrix{
+K^\bullet[1] \ar[d]^{i_{K^\bullet[1]}} \ar@{=}[rr] & &
+K^\bullet[1] \ar[d]^{i_{K^\bullet}[1]} \\
+j(K^\bullet[1]) \ar@{..>}[rr]^{\xi_{K^\bullet}} & & j(K^\bullet)[1]
+}
+$$
+By Lemmas \ref{lemma-morphisms-lift}
+and \ref{lemma-morphisms-equal-up-to-homotopy}
+there exists a unique dotted arrow $\xi_{K^\bullet}$ in $K^{+}(\mathcal{I})$
+making the diagram commute in $K^{+}(\mathcal{A})$.
+We omit the verification that this gives a functorial isomorphism.
+(Hint: use Lemma \ref{lemma-morphisms-equal-up-to-homotopy} again.)
+
+\medskip\noindent
+Let $(K^\bullet, L^\bullet, M^\bullet, f, g, h)$
+be a distinguished triangle of $K^{+}(\mathcal{A})$.
+We have to show that
+$(j(K^\bullet), j(L^\bullet), j(M^\bullet), j(f), j(g),
+\xi_{K^\bullet} \circ j(h))$ is
+a distinguished triangle of $K^{+}(\mathcal{I})$.
+Note that we have a commutative diagram
+$$
+\xymatrix{
+K^\bullet \ar[r]_f \ar[d] &
+L^\bullet \ar[r]_g \ar[d] &
+M^\bullet \ar[rr]_h \ar[d] & &
+K^\bullet[1] \ar[d] \\
+j(K^\bullet) \ar[r]^{j(f)} &
+j(L^\bullet) \ar[r]^{j(g)} &
+j(M^\bullet) \ar[rr]^{\xi_{K^\bullet} \circ j(h)} & &
+j(K^\bullet)[1]
+}
+$$
+in $K^{+}(\mathcal{A})$ whose vertical arrows are the quasi-isomorphisms
$i_{K^\bullet}, i_{L^\bullet}, i_{M^\bullet}$. Hence we see that the image of
+$(j(K^\bullet), j(L^\bullet), j(M^\bullet), j(f), j(g),
+\xi_{K^\bullet} \circ j(h))$
+in $D^{+}(\mathcal{A})$ is isomorphic to a distinguished triangle
+and hence a distinguished triangle by TR1. Thus we see from
+Lemma \ref{lemma-exact-equivalence}
+that $(j(K^\bullet), j(L^\bullet), j(M^\bullet), j(f), j(g),
+\xi_{K^\bullet} \circ j(h))$ is a distinguished triangle in
+$K^{+}(\mathcal{I})$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-resolution-functor-quasi-inverse}
+Let $\mathcal{A}$ be an abelian category which has enough injectives.
+Let $j$ be a resolution functor. Write
+$Q : K^{+}(\mathcal{A}) \to D^{+}(\mathcal{A})$ for the natural functor.
+Then $j = j' \circ Q$ for a unique
+functor $j' : D^{+}(\mathcal{A}) \to K^{+}(\mathcal{I})$ which
+is quasi-inverse to the canonical functor
+$K^{+}(\mathcal{I}) \to D^{+}(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-bounded-derived}
+$Q$ is a localization functor.
+To prove the existence of $j'$ it suffices to show that any element of
+$\text{Qis}^{+}(\mathcal{A})$ is mapped to an isomorphism under
+the functor $j$, see
+Lemma \ref{lemma-universal-property-localization}.
+This is true by the remarks following
+Definition \ref{definition-localization-functor}.
+\end{proof}
+
+\begin{remark}
+\label{remark-big-localization}
+Suppose that $\mathcal{A}$ is a ``big'' abelian category with enough injectives
+such as the category of abelian groups. In this case we have to be slightly
+more careful in constructing our resolution functor since we cannot use
the axiom of choice with a quantifier ranging over a class. But note that
the proof of the lemma does show that any two resolution functors are
canonically isomorphic. Namely, given quasi-isomorphisms
+$i : K^\bullet \to I^\bullet$ and $i' : K^\bullet \to J^\bullet$ of
+a bounded below complex $K^\bullet$ into bounded below complexes of injectives
there exists a unique(!) morphism $a : I^\bullet \to J^\bullet$
in $K^{+}(\mathcal{I})$ such that $i' = a \circ i$ as morphisms in
$K^{+}(\mathcal{A})$. Hence the only issue is existence, and we will see how
+to deal with this in the next section.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+\section{Functorial injective embeddings and resolution functors}
+\label{section-functorial-injective-resolutions}
+
+\noindent
+In this section we redo the construction of a resolution functor
+$K^{+}(\mathcal{A}) \to K^{+}(\mathcal{I})$ in case the
+category $\mathcal{A}$ has functorial injective embeddings.
+There are two reasons for this: (1) the proof is easier and (2)
+the construction also works if $\mathcal{A}$ is a ``big'' abelian
+category. See
+Remark \ref{remark-big-abelian-category}
+below.
+
+\medskip\noindent
+Let $\mathcal{A}$ be an abelian category. As before denote $\mathcal{I}$
+the additive full subcategory of $\mathcal{A}$ consisting of injective
+objects. Consider the category $\text{InjRes}(\mathcal{A})$
+of arrows $\alpha : K^\bullet \to I^\bullet$
+where $K^\bullet$ is a bounded below complex of $\mathcal{A}$,
+$I^\bullet$ is a bounded below complex of injectives of $\mathcal{A}$
+and $\alpha$ is a quasi-isomorphism. In other words, $\alpha$ is
+an injective resolution and $K^\bullet$ is bounded below.
+There is an obvious functor
+$$
+s : \text{InjRes}(\mathcal{A}) \longrightarrow \text{Comp}^{+}(\mathcal{A})
+$$
+defined by $(\alpha : K^\bullet \to I^\bullet) \mapsto K^\bullet$.
+There is also a functor
+$$
+t : \text{InjRes}(\mathcal{A}) \longrightarrow K^{+}(\mathcal{I})
+$$
+defined by $(\alpha : K^\bullet \to I^\bullet) \mapsto I^\bullet$.
+
+\begin{lemma}
+\label{lemma-functorial-injective-resolutions}
+Let $\mathcal{A}$ be an abelian category.
+Assume $\mathcal{A}$ has functorial injective embeddings, see
+Homology, Definition \ref{homology-definition-functorial-injective-embedding}.
+\begin{enumerate}
+\item There exists a functor
+$inj : \text{Comp}^{+}(\mathcal{A}) \to \text{InjRes}(\mathcal{A})$
+such that $s \circ inj = \text{id}$.
+\item For any functor
+$inj : \text{Comp}^{+}(\mathcal{A}) \to \text{InjRes}(\mathcal{A})$
+such that $s \circ inj = \text{id}$ we obtain a resolution functor, see
+Definition \ref{definition-localization-functor}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $A \mapsto (A \to J(A))$ be a functorial injective embedding, see
+Homology, Definition \ref{homology-definition-functorial-injective-embedding}.
We first note that we may assume $J(0) = 0$. Namely, if not, then
for any object $A$ the maps $0 \to A \to 0$ induce morphisms
$J(0) \to J(A) \to J(0)$ whose composition is the identity, which gives
a direct sum decomposition $J(A) = J(0) \oplus \Ker(J(A) \to J(0))$.
+Note that the functorial morphism $A \to J(A)$ has to map
+into the second summand. Hence we can replace our functor
+by $J'(A) = \Ker(J(A) \to J(0))$ if needed.
+
+\medskip\noindent
+Let $K^\bullet$ be a bounded below complex of $\mathcal{A}$.
+Say $K^p = 0$ if $p < B$.
+We are going to construct a double complex $I^{\bullet, \bullet}$
+of injectives, together with a map $\alpha : K^\bullet \to I^{\bullet, 0}$
+such that $\alpha$ induces a quasi-isomorphism of $K^\bullet$
+with the associated total complex of $I^{\bullet, \bullet}$.
+First we set $I^{p, q} = 0$ whenever $q < 0$.
+Next, we set $I^{p, 0} = J(K^p)$ and $\alpha^p : K^p \to I^{p, 0}$
+the functorial embedding. Since $J$ is a functor we see that
+$I^{\bullet, 0}$ is a complex and that $\alpha$ is a
+morphism of complexes. Each $\alpha^p$ is injective. And
+$I^{p, 0} = 0$ for $p < B$ because $J(0) = 0$. Next, we set
+$I^{p, 1} = J(\Coker(K^p \to I^{p, 0}))$. Again by functoriality
+we see that $I^{\bullet, 1}$ is a complex. And again we get
+that $I^{p, 1} = 0$ for $p < B$. It is also clear that
+$K^p$ maps isomorphically onto $\Ker(I^{p, 0} \to I^{p, 1})$.
+As our third step we take $I^{p, 2} = J(\Coker(I^{p, 0} \to I^{p, 1}))$.
+And so on and so forth.
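
\medskip\noindent
Explicitly, the general step of this construction is
$$
I^{p, q + 1} = J\left(\Coker(I^{p, q - 1} \to I^{p, q})\right)
\quad\text{for } q \geq 1,
$$
with differential $I^{p, q} \to I^{p, q + 1}$ given by the projection
onto the cokernel followed by the functorial embedding. By construction
each column $I^{p, \bullet}$ is an injective resolution of $K^p$.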
+
+\medskip\noindent
+At this point we can apply
+Homology, Lemma \ref{homology-lemma-double-complex-gives-resolution}
+to get that the map
+$$
+\alpha : K^\bullet \longrightarrow \text{Tot}(I^{\bullet, \bullet})
+$$
is a quasi-isomorphism. To prove that we obtain a functor $inj$ it
remains to show that the construction above
is functorial. This verification is omitted.
+
+\medskip\noindent
+Suppose we have a functor $inj$ such that $s \circ inj = \text{id}$.
+For every object $K^\bullet$ of $\text{Comp}^{+}(\mathcal{A})$
+we can write
+$$
+inj(K^\bullet) = (i_{K^\bullet} : K^\bullet \to j(K^\bullet))
+$$
+This provides us with a resolution functor as in
+Definition \ref{definition-localization-functor}.
+\end{proof}
+
+\begin{remark}
+\label{remark-match}
+Suppose $inj$ is a functor such that $s \circ inj = \text{id}$
+as in part (2) of
+Lemma \ref{lemma-functorial-injective-resolutions}.
+Write $inj(K^\bullet) = (i_{K^\bullet} : K^\bullet \to j(K^\bullet))$
+as in the proof of that lemma.
+Suppose $\alpha : K^\bullet \to L^\bullet$ is a map
+of bounded below complexes. Consider the map
+$inj(\alpha)$ in the category $\text{InjRes}(\mathcal{A})$.
+It induces a commutative diagram
+$$
+\xymatrix{
+K^\bullet
+\ar[rr]^-{\alpha}
+\ar[d]_{i_K} & &
+L^\bullet \ar[d]^{i_L} \\
+j(K)^\bullet
+\ar[rr]^-{inj(\alpha)}
+& &
+j(L)^\bullet
+}
+$$
+of morphisms of complexes.
+Hence, looking at the proof of
+Lemma \ref{lemma-resolution-functor}
+we see that the functor $j : K^{+}(\mathcal{A}) \to K^{+}(\mathcal{I})$
+is given by the rule
+$$
+j(\alpha\text{ up to homotopy}) = inj(\alpha)\text{ up to homotopy}\in
+\Hom_{K^{+}(\mathcal{I})}(j(K^\bullet), j(L^\bullet))
+$$
+Hence we see that $j$ matches $t \circ inj$ in this case, i.e., the
+diagram
+$$
+\xymatrix{
+\text{Comp}^{+}(\mathcal{A}) \ar[rr]_{t \circ inj} \ar[rd] & &
+K^{+}(\mathcal{I}) \\
+& K^{+}(\mathcal{A}) \ar[ru]_j
+}
+$$
+is commutative.
+\end{remark}
+
+\begin{remark}
+\label{remark-big-abelian-category}
+Let $\textit{Mod}(\mathcal{O}_X)$ be the category of $\mathcal{O}_X$-modules
+on a ringed space $(X, \mathcal{O}_X)$ (or more generally on a
+ringed site). We will see later that $\textit{Mod}(\mathcal{O}_X)$ has enough
+injectives and in fact functorial injective embeddings, see
+Injectives, Theorem \ref{injectives-theorem-sheaves-modules-injectives}.
+Note that the proof of Lemma \ref{lemma-into-derived-category} does
+not apply to $\textit{Mod}(\mathcal{O}_X)$. But the proof of
+Lemma \ref{lemma-functorial-injective-resolutions} does apply
+to $\textit{Mod}(\mathcal{O}_X)$. Thus we obtain
+$$
+j : K^{+}(\textit{Mod}(\mathcal{O}_X))
+\longrightarrow
+K^{+}(\mathcal{I})
+$$
+which is a resolution functor where $\mathcal{I}$ is the additive
+category of injective $\mathcal{O}_X$-modules. This argument also
+works in the following cases:
+\begin{enumerate}
+\item The category $\text{Mod}_R$ of $R$-modules over a ring $R$.
+\item The category $\textit{PMod}(\mathcal{O})$ of presheaves of
+$\mathcal{O}$-modules on a site endowed with a presheaf of rings.
+\item The category $\textit{Mod}(\mathcal{O})$ of sheaves of
+$\mathcal{O}$-modules on a ringed site.
+\item Add more here as needed.
+\end{enumerate}
+\end{remark}
+
+
+
+
+
+
+
+
+
+\section{Right derived functors via resolution functors}
+\label{section-right-derived-functor-via-resolutions}
+
+\noindent
+The content of the following lemma is that we can simply define
+$RF(K^\bullet) = F(j(K^\bullet))$ if we are given a resolution functor $j$.
+
+\begin{lemma}
+\label{lemma-right-derived-functor}
Let $\mathcal{A}$ be an abelian category with enough injectives.
+Let $F : \mathcal{A} \to \mathcal{B}$ be an additive functor into
+an abelian category. Let $(i, j)$ be a resolution functor, see
+Definition \ref{definition-localization-functor}.
+The right derived functor $RF$ of $F$ fits into the following
+$2$-commutative diagram
+$$
+\xymatrix{
+D^{+}(\mathcal{A}) \ar[rd]_{RF} \ar[rr]^{j'} & &
+K^{+}(\mathcal{I}) \ar[ld]^F \\
+& D^{+}(\mathcal{B})
+}
+$$
+where $j'$ is the functor from
+Lemma \ref{lemma-resolution-functor-quasi-inverse}.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-injective-acyclic}
+we have $RF(K^\bullet) = F(j(K^\bullet))$.
+\end{proof}
+
+\begin{remark}
+\label{remark-right-derived-functor}
+In the situation of
+Lemma \ref{lemma-right-derived-functor}
+we see that we have actually lifted the right derived
+functor to an exact functor
+$F \circ j' : D^{+}(\mathcal{A}) \to K^{+}(\mathcal{B})$.
+It is occasionally useful to use such a factorization.
+\end{remark}
+
+
+
+
+
+
+
+
+\section{Filtered derived category and injective resolutions}
+\label{section-filtered-derived}
+
+\noindent
+Let $\mathcal{A}$ be an abelian category. In this section we will show
+that if $\mathcal{A}$ has enough injectives, then so does
+the category $\text{Fil}^f(\mathcal{A})$ in some sense. One
+can use this observation to compute in the filtered derived category
+of $\mathcal{A}$.
+
+\medskip\noindent
+The category $\text{Fil}^f(\mathcal{A})$ is an example of an
+exact category, see
+Injectives, Remark \ref{injectives-remark-embed-exact-category}.
+A special role is played by the strict morphisms, see
+Homology, Definition \ref{homology-definition-strict},
+i.e., the morphisms $f$ such that $\Coim(f) = \Im(f)$.
+We will say that a complex $A \to B \to C$ in $\text{Fil}^f(\mathcal{A})$ is
+{\it exact} if the sequence $\text{gr}(A) \to \text{gr}(B) \to \text{gr}(C)$
+is exact in $\mathcal{A}$. This implies that $A \to B$ and $B \to C$
+are strict morphisms, see
+Homology, Lemma \ref{homology-lemma-filtered-acyclic}.
+
+\begin{definition}
+\label{definition-filtered-complexes-notation}
+Let $\mathcal{A}$ be an abelian category.
+We say an object $I$ of $\text{Fil}^f(\mathcal{A})$
+is {\it filtered injective} if each $\text{gr}^p(I)$ is
+an injective object of $\mathcal{A}$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-filtered-injective}
+Let $\mathcal{A}$ be an abelian category.
+An object $I$ of $\text{Fil}^f(\mathcal{A})$ is filtered injective
+if and only if
+there exist $a \leq b$, injective objects $I_n$, $a \leq n \leq b$
+of $\mathcal{A}$ and an isomorphism $I \cong \bigoplus_{a \leq n \leq b} I_n$
+such that $F^pI = \bigoplus_{n \geq p} I_n$.
+\end{lemma}
+
+\begin{proof}
+Follows from the fact that any injection $J \to M$ of $\mathcal{A}$
+is split if $J$ is an injective object. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-split-strict-monomorphism}
+Let $\mathcal{A}$ be an abelian category.
+Any strict monomorphism $u : I \to A$ of $\text{Fil}^f(\mathcal{A})$
+where $I$ is a filtered injective object is a split injection.
+\end{lemma}
+
+\begin{proof}
+Let $p$ be the largest integer such that $F^pI \not = 0$.
+In particular $\text{gr}^p(I) = F^pI$.
+Let $I'$ be the object of $\text{Fil}^f(\mathcal{A})$ whose
+underlying object of $\mathcal{A}$ is $F^pI$ and with filtration
+given by $F^nI' = 0$ for $n > p$ and $F^nI' = I' = F^pI$ for
+$n \leq p$. Note that $I' \to I$ is a strict monomorphism too.
+The fact that $u$ is a strict monomorphism implies that
+$F^pI \to A/F^{p + 1}(A)$ is injective, see
+Homology, Lemma \ref{homology-lemma-characterize-strict}.
+Choose a splitting $s : A/F^{p + 1}A \to F^pI$ in $\mathcal{A}$.
+The induced morphism $s' : A \to I'$ is a strict morphism of
+filtered objects splitting the composition $I' \to I \to A$.
+Hence we can write $A = I' \oplus \Ker(s')$ and
+$I = I' \oplus \Ker(s'|_I)$. Note that
+$\Ker(s'|_I) \to \Ker(s')$ is a strict monomorphism
+and that $\Ker(s'|_I)$ is a filtered injective object.
+By induction on the length of the filtration on $I$ the map
+$\Ker(s'|_I) \to \Ker(s')$ is a split injection.
+Thus we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-property-filtered-injective}
+Let $\mathcal{A}$ be an abelian category.
+Let $u : A \to B$ be a strict monomorphism
+of $\text{Fil}^f(\mathcal{A})$
+and $f : A \to I$ a morphism from $A$ into a filtered injective object
+in $\text{Fil}^f(\mathcal{A})$.
+Then there exists a morphism $g : B \to I$ such that $f = g \circ u$.
+\end{lemma}
+
+\begin{proof}
+The pushout $f' : I \to I \amalg_A B$ of $f$ by $u$ is a strict
+monomorphism, see
+Homology, Lemma \ref{homology-lemma-pushout-filtered}.
+Hence the result follows formally from
+Lemma \ref{lemma-split-strict-monomorphism}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-monomorphism-into-filtered-injective}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+For any object $A$ of $\text{Fil}^f(\mathcal{A})$ there exists
+a strict monomorphism $A \to I$
+where $I$ is a filtered injective object.
+\end{lemma}
+
+\begin{proof}
+Pick $a \leq b$ such that $\text{gr}^p(A) = 0$ unless
+$p \in \{a, a + 1, \ldots, b\}$. For each
+$n \in \{a, a + 1, \ldots, b\}$ choose an injection
+$u_n : A/F^{n + 1}A \to I_n$ with $I_n$ an injective object.
+Set $I = \bigoplus_{a \leq n \leq b} I_n$ with filtration
+$F^pI = \bigoplus_{n \geq p} I_n$ and set $u : A \to I$ equal to
+the direct sum of the maps $u_n$.
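To check that $u$ is a strict monomorphism amounts to verifying that
$F^pA = u^{-1}(F^pI)$ for all $p$. Arguing with elements (as we may):
for $n < p$ the component $F^pA \to A/F^{n + 1}A \to I_n$ of $u$
vanishes because $F^pA \subset F^{n + 1}A$, whence
$u(F^pA) \subset \bigoplus_{n \geq p} I_n = F^pI$. Conversely, if
$u(x) \in F^pI$, then $u_n(x \bmod F^{n + 1}A) = 0$ for $n < p$, and
injectivity of $u_n$ gives $x \in F^{n + 1}A$ for all $n < p$, i.e.,
$x \in F^pA$. As the filtration on $A$ is finite, this also shows that
$u$ is injective.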
+\end{proof}
+
+\begin{lemma}
+\label{lemma-filtered-injective-right-resolution-single-object}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+For any object $A$ of $\text{Fil}^f(\mathcal{A})$ there exists
+a filtered quasi-isomorphism $A[0] \to I^\bullet$
+where $I^\bullet$ is a complex of filtered injective objects
+with $I^n = 0$ for $n < 0$.
+\end{lemma}
+
+\begin{proof}
+First choose a strict monomorphism $u_0 : A \to I^0$ of $A$ into a filtered
+injective object, see
+Lemma \ref{lemma-strict-monomorphism-into-filtered-injective}.
+Next, choose a strict monomorphism
+$u_1 : \Coker(u_0) \to I^1$ into a filtered injective object of
+$\mathcal{A}$. Denote $d^0$ the induced map $I^0 \to I^1$.
+Next, choose a strict monomorphism $u_2 : \Coker(u_1) \to I^2$ into
+a filtered injective object of $\mathcal{A}$. Denote $d^1$ the induced
+map $I^1 \to I^2$. And so on. This works because each
+of the sequences
+$$
+0 \to \Coker(u_n) \to I^{n + 1} \to \Coker(u_{n + 1}) \to 0
+$$
+is short exact, i.e., induces a short exact sequence on applying
+$\text{gr}$. To see this use
+Homology, Lemma \ref{homology-lemma-characterize-strict}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-filtered-injective-right-resolution-map}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Let $f : A \to B$ be a morphism of $\text{Fil}^f(\mathcal{A})$.
+Given filtered quasi-isomorphisms $A[0] \to I^\bullet$ and
+$B[0] \to J^\bullet$ where $I^\bullet, J^\bullet$ are complexes of
+filtered injective objects with $I^n = J^n = 0$ for $n < 0$, then
+there exists a commutative diagram
+$$
+\xymatrix{
+A[0] \ar[r] \ar[d] &
+B[0] \ar[d] \\
+I^\bullet \ar[r] &
+J^\bullet
+}
+$$
+\end{lemma}
+
+\begin{proof}
As $A[0] \to I^\bullet$ and $B[0] \to J^\bullet$ are filtered
quasi-isomorphisms we conclude that $a : A \to I^0$, $b : B \to J^0$
+and all the morphisms $d_I^n$, $d_J^n$ are strict, see
+Homology, Lemma \ref{homology-lemma-filtered-acyclic}.
+We will inductively construct the maps $f^n$ in the following
+commutative diagram
+$$
+\xymatrix{
+A \ar[r]_a \ar[d]_f &
+I^0 \ar[r] \ar[d]^{f^0} &
+I^1 \ar[r] \ar[d]^{f^1} &
+I^2 \ar[r] \ar[d]^{f^2} &
+\ldots \\
+B \ar[r]^b &
+J^0 \ar[r] &
+J^1 \ar[r] &
+J^2 \ar[r] &
+\ldots
+}
+$$
+Because $A \to I^0$ is a strict monomorphism and because
+$J^0$ is filtered injective, we can find a morphism $f^0 : I^0 \to J^0$
+such that $f^0 \circ a = b \circ f$, see
+Lemma \ref{lemma-injective-property-filtered-injective}.
+The composition $d_J^0 \circ b \circ f$ is zero, hence
+$d_J^0 \circ f^0 \circ a = 0$, hence $d_J^0 \circ f^0$ factors
+through a unique morphism
+$$
+\Coker(a) = \Coim(d_I^0) = \Im(d_I^0) \longrightarrow J^1.
+$$
+As $\Im(d_I^0) \to I^1$ is a strict monomorphism we can extend the
+displayed arrow to a morphism $f^1 : I^1 \to J^1$ by
+Lemma \ref{lemma-injective-property-filtered-injective}
+again. And so on.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-filtered-injective-right-resolution-ses}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Let $0 \to A \to B \to C \to 0$ be a short exact sequence in
+$\text{Fil}^f(\mathcal{A})$.
+Given filtered quasi-isomorphisms $A[0] \to I^\bullet$ and
+$C[0] \to J^\bullet$ where $I^\bullet, J^\bullet$ are complexes of
+filtered injective objects with $I^n = J^n = 0$ for $n < 0$, then
+there exists a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+A[0] \ar[r] \ar[d] &
+B[0] \ar[r] \ar[d] &
+C[0] \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+I^\bullet \ar[r] &
+M^\bullet \ar[r] &
+J^\bullet \ar[r] &
+0
+}
+$$
+where the lower row is a termwise split sequence of complexes.
+\end{lemma}
+
+\begin{proof}
+As $A[0] \to I^\bullet$ and $C[0] \to J^\bullet$ are filtered
+quasi-isomorphisms we conclude that $a : A \to I^0$, $c : C \to J^0$
+and all the morphisms $d_I^n$, $d_J^n$ are strict, see
Homology, Lemma \ref{homology-lemma-filtered-acyclic}.
+We are going to step by step construct the south-east and the south
+arrows in the following commutative diagram
+$$
+\xymatrix{
+B \ar[r]_\beta \ar[rd]^b &
+C \ar[r]_c \ar[rd]^{\overline{b}} &
+J^0 \ar[d]^{\delta^0} \ar[r] &
+J^1 \ar[d]^{\delta^1} \ar[r] & \ldots \\
+A \ar[u]^\alpha \ar[r]^a &
+I^0 \ar[r] &
+I^1 \ar[r] &
+I^2 \ar[r] & \ldots
+}
+$$
+As $A \to B$ is a strict monomorphism, we can find a morphism
+$b : B \to I^0$ such that $b \circ \alpha = a$, see
+Lemma \ref{lemma-injective-property-filtered-injective}.
+As $A$ is the kernel of the strict morphism $I^0 \to I^1$
+and $\beta = \Coker(\alpha)$ we obtain a unique morphism
+$\overline{b} : C \to I^1$ fitting into the diagram.
+As $c$ is a strict monomorphism and $I^1$ is filtered injective
+we can find $\delta^0 : J^0 \to I^1$, see
+Lemma \ref{lemma-injective-property-filtered-injective}.
+Because $B \to C$ is a strict epimorphism and because
+$B \to I^0 \to I^1 \to I^2$ is zero, we see that
+$C \to I^1 \to I^2$ is zero. Hence $d_I^1 \circ \delta^0$
+is zero on $C \cong \Im(c)$.
+Hence $d_I^1 \circ \delta^0$ factors through a unique morphism
+$$
+\Coker(c) = \Coim(d_J^0) = \Im(d_J^0) \longrightarrow I^2.
+$$
+As $I^2$ is filtered injective and $\Im(d_J^0) \to J^1$ is a
+strict monomorphism we can extend the displayed morphism to a morphism
+$\delta^1 : J^1 \to I^2$, see
+Lemma \ref{lemma-injective-property-filtered-injective}.
+And so on. We set $M^\bullet = I^\bullet \oplus J^\bullet$
+with differential
+$$
+d_M^n =
+\left(
+\begin{matrix}
+d_I^n & (-1)^{n + 1}\delta^n \\
+0 & d_J^n
+\end{matrix}
+\right)
+$$
Finally, the map $B[0] \to M^\bullet$ is given by
$b \oplus (c \circ \beta) : B \to I^0 \oplus J^0$.
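
\medskip\noindent
Note that the sign $(-1)^{n + 1}$ ensures that $M^\bullet$ is a complex:
the upper right entry of the matrix for $d_M^{n + 1} \circ d_M^n$ equals
$$
(-1)^{n + 1}
\left(d_I^{n + 1} \circ \delta^n - \delta^{n + 1} \circ d_J^n\right)
$$
which vanishes because $\delta^{n + 1}$ was constructed exactly so that
$\delta^{n + 1} \circ d_J^n = d_I^{n + 1} \circ \delta^n$, and the
diagonal entries vanish because $I^\bullet$ and $J^\bullet$ are complexes.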
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-resolution-by-filtered-injectives}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+For every $K^\bullet \in K^{+}(\text{Fil}^f(\mathcal{A}))$
+there exists a filtered quasi-isomorphism $K^\bullet \to I^\bullet$
+with $I^\bullet$ bounded below,
+each $I^n$ a filtered injective object, and
+each $K^n \to I^n$ a strict monomorphism.
+\end{lemma}
+
+\begin{proof}
+After replacing $K^\bullet$ by a shift (which is harmless for the proof)
+we may assume that $K^n = 0$ for $n < 0$. Consider the
+short exact sequences
+$$
+\begin{matrix}
+0 \to \Ker(d_K^0) \to K^0 \to \Coim(d_K^0) \to 0 \\
+0 \to \Ker(d_K^1) \to K^1 \to \Coim(d_K^1) \to 0 \\
+0 \to \Ker(d_K^2) \to K^2 \to \Coim(d_K^2) \to 0 \\
+\ldots
+\end{matrix}
+$$
+of the exact category $\text{Fil}^f(\mathcal{A})$
+and the maps $u_i : \Coim(d_K^i) \to \Ker(d_K^{i + 1})$.
+For each $i \geq 0$ we may choose filtered quasi-isomorphisms
+$$
+\begin{matrix}
+\Ker(d_K^i)[0] \to I_{ker, i}^\bullet \\
+\Coim(d_K^i)[0] \to I_{coim, i}^\bullet
+\end{matrix}
+$$
+with $I_{ker, i}^n, I_{coim, i}^n$ filtered injective and zero for $n < 0$, see
+Lemma \ref{lemma-filtered-injective-right-resolution-single-object}.
+By
+Lemma \ref{lemma-filtered-injective-right-resolution-map}
+we may lift $u_i$ to a morphism of complexes
+$u_i^\bullet : I_{coim, i}^\bullet \to I_{ker, i + 1}^\bullet$.
+Finally, for each $i \geq 0$ we may complete the diagrams
+$$
+\xymatrix{
+0 \ar[r] &
+\Ker(d_K^i)[0] \ar[r] \ar[d] &
+K^i[0] \ar[r] \ar[d] &
+\Coim(d_K^i)[0] \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+I_{ker, i}^\bullet \ar[r]^{\alpha_i} &
+I_i^\bullet \ar[r]^{\beta_i} &
+I_{coim, i}^\bullet \ar[r] &
+0
+}
+$$
+with the lower sequence a termwise split exact sequence, see
+Lemma \ref{lemma-filtered-injective-right-resolution-ses}.
+For $i \geq 0$ set $d_i : I_i^\bullet \to I_{i + 1}^\bullet$
+equal to $d_i = \alpha_{i + 1} \circ u_i^\bullet \circ \beta_i$.
+Note that $d_i \circ d_{i - 1} = 0$ because
+$\beta_i \circ \alpha_i = 0$. Hence we have constructed
+a commutative diagram
+$$
+\xymatrix{
+I_0^\bullet \ar[r] &
+I_1^\bullet \ar[r] &
+I_2^\bullet \ar[r] & \ldots \\
+K^0[0] \ar[r] \ar[u] &
+K^1[0] \ar[r] \ar[u] &
+K^2[0] \ar[r] \ar[u] &
+\ldots
+}
+$$
+Here the vertical arrows are filtered quasi-isomorphisms.
+The upper row is a complex of complexes and each complex consists of
+filtered injective objects with no nonzero objects in degree $< 0$.
+Thus we obtain a double complex by setting $I^{a, b} = I_a^b$ and using
+$$
+d_1^{a, b} : I^{a, b} = I_a^b \to I_{a + 1}^b = I^{a + 1, b}
+$$
+the map $d_a^b$ and using for
+$$
+d_2^{a, b} : I^{a, b} = I_a^b \to I_a^{b + 1} = I^{a, b + 1}
+$$
+the map $d_{I_a}^b$. Denote $\text{Tot}(I^{\bullet, \bullet})$
+the total complex associated to this double complex, see
+Homology, Definition \ref{homology-definition-associated-simple-complex}.
+Observe that the maps $K^n[0] \to I_n^\bullet$ come from maps
+$K^n \to I^{n, 0}$ which give rise to a map of complexes
+$$
+K^\bullet \longrightarrow \text{Tot}(I^{\bullet, \bullet})
+$$
+We claim this is a filtered quasi-isomorphism.
+As $\text{gr}(-)$ is an additive functor, we see that
+$\text{gr}(\text{Tot}(I^{\bullet, \bullet})) =
+\text{Tot}(\text{gr}(I^{\bullet, \bullet}))$.
+Thus we can use
+Homology,
+Lemma \ref{homology-lemma-double-complex-gives-resolution}
+to conclude that
+$\text{gr}(K^\bullet) \to \text{gr}(\text{Tot}(I^{\bullet, \bullet}))$
+is a quasi-isomorphism as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-filtered-acyclic-is-zero}
+Let $\mathcal{A}$ be an abelian category.
+Let $K^\bullet, I^\bullet \in K(\text{Fil}^f(\mathcal{A}))$.
+Assume $K^\bullet$ is filtered acyclic and
+$I^\bullet$ bounded below and consisting of filtered injective objects.
+Any morphism $K^\bullet \to I^\bullet$ is homotopic to zero:
+$\Hom_{K(\text{Fil}^f(\mathcal{A}))}(K^\bullet, I^\bullet) = 0$.
+\end{lemma}
+
+\begin{proof}
+Let $\alpha : K^\bullet \to I^\bullet$ be a morphism of
+complexes. Assume that $\alpha^j = 0$ for $j < n$.
+We will show that there exists a morphism $h : K^{n + 1} \to I^n$
+such that $\alpha^n = h \circ d$. Thus $\alpha$ will be homotopic
+to the morphism of complexes $\beta$ defined by
+$$
+\beta^j =
+\left\{
+\begin{matrix}
+0 & \text{if} & j \leq n \\
+\alpha^{n + 1} - d \circ h & \text{if} & j = n + 1 \\
+\alpha^j & \text{if} & j > n + 1
+\end{matrix}
+\right.
+$$
+This will clearly prove the lemma (by induction).
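
\medskip\noindent
Indeed, taking the homotopy $\{h^j\}$ with $h^{n + 1} = h$ and
$h^j = 0$ for $j \not = n + 1$ we find
$$
\alpha^j - \beta^j = d_I^{j - 1} \circ h^j + h^{j + 1} \circ d_K^j
$$
for all $j$: in degree $n$ this is the identity
$\alpha^n = h \circ d_K^n$, in degree $n + 1$ it reads
$\alpha^{n + 1} - \beta^{n + 1} = d_I^n \circ h$, and in all other
degrees both sides vanish.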
+To prove the existence of $h$ note that
+$\alpha^n \circ d_K^{n - 1} = 0$ since
+$\alpha^{n - 1} = 0$. Since $K^\bullet$ is filtered acyclic
+we see that $d_K^{n - 1}$ and $d_K^n$ are strict and that
+$$
+0 \to \Im(d_K^{n - 1}) \to K^n \to \Im(d_K^n) \to 0
+$$
+is an exact sequence of the exact category $\text{Fil}^f(\mathcal{A})$, see
+Homology, Lemma \ref{homology-lemma-filtered-acyclic}.
+Hence we can think of $\alpha^n$ as a map into $I^n$ defined
+on $\Im(d_K^n)$.
+Using that $\Im(d_K^n) \to K^{n + 1}$ is a strict monomorphism
+and that $I^n$ is filtered injective we may lift this map to a map
+$h : K^{n + 1} \to I^n$ as desired, see
+Lemma \ref{lemma-injective-property-filtered-injective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphisms-into-filtered-injective-complex}
+Let $\mathcal{A}$ be an abelian category.
+Let $I^\bullet \in K(\text{Fil}^f(\mathcal{A}))$
+be a bounded below complex consisting of
+filtered injective objects.
+\begin{enumerate}
+\item Let $\alpha : K^\bullet \to L^\bullet$ in $K(\text{Fil}^f(\mathcal{A}))$
+be a filtered quasi-isomorphism.
+Then the map
+$$
+\Hom_{K(\text{Fil}^f(\mathcal{A}))}(L^\bullet, I^\bullet)
+\to
+\Hom_{K(\text{Fil}^f(\mathcal{A}))}(K^\bullet, I^\bullet)
+$$
+is bijective.
+\item Let $L^\bullet \in K(\text{Fil}^f(\mathcal{A}))$. Then
+$$
+\Hom_{K(\text{Fil}^f(\mathcal{A}))}(L^\bullet, I^\bullet)
+=
+\Hom_{DF(\mathcal{A})}(L^\bullet, I^\bullet).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Note that
+$$
+(K^\bullet, L^\bullet, C(\alpha)^\bullet, \alpha, i, -p)
+$$
+is a distinguished triangle in $K(\text{Fil}^f(\mathcal{A}))$
+(Lemma \ref{lemma-the-same-up-to-isomorphisms})
+and $C(\alpha)^\bullet$ is a filtered acyclic complex
+(Lemma \ref{lemma-filtered-acyclic}).
+Then
+$$
+\xymatrix{
+\Hom_{K(\text{Fil}^f(\mathcal{A}))}(C(\alpha)^\bullet, I^\bullet) \ar[r] &
+\Hom_{K(\text{Fil}^f(\mathcal{A}))}(L^\bullet, I^\bullet) \ar[r] &
+\Hom_{K(\text{Fil}^f(\mathcal{A}))}(K^\bullet, I^\bullet) \ar[lld] \\
+\Hom_{K(\text{Fil}^f(\mathcal{A}))}(C(\alpha)^\bullet[-1], I^\bullet)
+}
+$$
+is an exact sequence of abelian groups, see
+Lemma \ref{lemma-representable-homological}.
+At this point
+Lemma \ref{lemma-filtered-acyclic-is-zero}
+guarantees that the outer two groups are zero and hence
$\Hom_{K(\text{Fil}^f(\mathcal{A}))}(L^\bullet, I^\bullet) =
\Hom_{K(\text{Fil}^f(\mathcal{A}))}(K^\bullet, I^\bullet)$.
+
+\medskip\noindent
+Proof of (2).
+Let $a$ be an element of the right hand side.
+We may represent $a = \gamma\alpha^{-1}$ where
+$\alpha : K^\bullet \to L^\bullet$
+is a filtered quasi-isomorphism and $\gamma : K^\bullet \to I^\bullet$
+is a map of complexes. By part (1)
+we can find a morphism $\beta : L^\bullet \to I^\bullet$ such that
+$\beta \circ \alpha$ is homotopic to $\gamma$. This proves that the
+map is surjective. Let $b$ be an element of the left hand side
+which maps to zero in the right hand side. Then $b$ is the homotopy class
+of a morphism $\beta : L^\bullet \to I^\bullet$ such that there exists a
+filtered quasi-isomorphism $\alpha : K^\bullet \to L^\bullet$ with
+$\beta \circ \alpha$ homotopic to zero. Then part (1)
+shows that $\beta$ is homotopic to zero also, i.e., $b = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-filtered-localization-functor}
+Let $\mathcal{A}$ be an abelian category with enough injectives.
+Let $\mathcal{I}^f \subset \text{Fil}^f(\mathcal{A})$
+denote the strictly full additive subcategory whose objects are
+the filtered injective objects. The canonical functor
+$$
+K^{+}(\mathcal{I}^f)
+\longrightarrow
+DF^{+}(\mathcal{A})
+$$
+is exact, fully faithful and essentially surjective, i.e., an
+equivalence of triangulated categories. Furthermore the diagrams
+$$
+\xymatrix{
+K^{+}(\mathcal{I}^f) \ar[d]_{\text{gr}^p} \ar[r] &
+DF^{+}(\mathcal{A}) \ar[d]_{\text{gr}^p} \\
+K^{+}(\mathcal{I}) \ar[r] &
+D^{+}(\mathcal{A})
+}
+\quad
+\xymatrix{
+K^{+}(\mathcal{I}^f) \ar[d]^{\text{forget }F} \ar[r] &
+DF^{+}(\mathcal{A}) \ar[d]^{\text{forget }F} \\
+K^{+}(\mathcal{I}) \ar[r] &
+D^{+}(\mathcal{A})
+}
+$$
+are commutative, where $\mathcal{I} \subset \mathcal{A}$ is the
+strictly full additive subcategory whose objects are
+the injective objects.
+\end{lemma}
+
+\begin{proof}
+The functor $K^{+}(\mathcal{I}^f) \to DF^{+}(\mathcal{A})$
+is essentially surjective by
+Lemma \ref{lemma-right-resolution-by-filtered-injectives}.
+It is fully faithful by
+Lemma \ref{lemma-morphisms-into-filtered-injective-complex}.
+It is an exact functor by our definitions regarding distinguished
+triangles.
+The commutativity of the squares is immediate.
+\end{proof}
+
+\begin{remark}
+\label{remark-filtered-localization-big}
+We can invert the arrow of the lemma
+only if $\mathcal{A}$ is a category in our sense,
+namely if it has a set of objects. However, suppose given a big abelian
+category $\mathcal{A}$ with enough injectives, such as
+$\textit{Mod}(\mathcal{O}_X)$ for example. Then for any given set of objects
+$\{A_i\}_{i\in I}$ there is an abelian subcategory
+$\mathcal{A}' \subset \mathcal{A}$ containing all of them
+and having enough injectives, see
+Sets, Lemma \ref{sets-lemma-abelian-injectives}.
+Thus we may use the lemma above for $\mathcal{A}'$.
This essentially means that as long as we only use a set's worth of
diagrams at a time, we will never run into trouble using the lemma.
+\end{remark}
+
+\noindent
+Let $\mathcal{A}, \mathcal{B}$ be abelian categories.
+Let $T : \mathcal{A} \to \mathcal{B}$ be a left exact functor.
+(We cannot use the letter $F$ for the functor since this would
+conflict too much with our use of the letter $F$ to indicate
+filtrations.) Note that $T$ induces an additive functor
+$$
+T : \text{Fil}^f(\mathcal{A}) \to \text{Fil}^f(\mathcal{B})
+$$
+by the rule $T(A, F) = (T(A), F)$ where $F^pT(A) = T(F^pA)$ which makes
+sense as $T$ is left exact. (Warning: It may not be the case that
+$\text{gr}(T(A)) = T(\text{gr}(A))$.)
+This induces functors of triangulated categories
+\begin{equation}
+\label{equation-induced-T-filtered}
+T :
+K^{+}(\text{Fil}^f(\mathcal{A}))
+\longrightarrow
+K^{+}(\text{Fil}^f(\mathcal{B}))
+\end{equation}
+The filtered right derived functor of $T$ is the right derived functor of
+Definition \ref{definition-right-derived-functor-defined}
+for this exact functor composed with the exact functor
+$K^{+}(\text{Fil}^f(\mathcal{B})) \to DF^{+}(\mathcal{B})$ and the
+multiplicative set $\text{FQis}^{+}(\mathcal{A})$.
+Assume $\mathcal{A}$ has enough injectives. At this point we can redo the
+discussion of
+Section \ref{section-right-derived-functor}
+to define the
+{\it filtered right derived functors}
+\begin{equation}
+\label{equation-filtered-derived-functor}
+RT : DF^{+}(\mathcal{A}) \longrightarrow DF^{+}(\mathcal{B})
+\end{equation}
+of our functor $T$.
+
+\medskip\noindent
+However, instead we will proceed as in
+Section \ref{section-right-derived-functor-via-resolutions},
+and it will turn out that we can define $RT$ even if $T$ is just additive.
+Namely, we first choose a quasi-inverse
+$j' : DF^{+}(\mathcal{A}) \to K^{+}(\mathcal{I}^f)$ of the
+equivalence of
+Lemma \ref{lemma-filtered-localization-functor}.
+By
+Lemma \ref{lemma-exact-equivalence}
+we see that $j'$ is an exact functor of triangulated categories.
+Next, we note that for a filtered injective object $I$ we have
+a (noncanonical) decomposition
+\begin{equation}
+\label{equation-decompose}
+I \cong \bigoplus\nolimits_{p \in \mathbf{Z}} I_p,
+\quad\text{with}\quad
+F^pI = \bigoplus\nolimits_{q \geq p} I_q
+\end{equation}
+by
+Lemma \ref{lemma-filtered-injective}.
+Hence if $T$ is any additive functor $T : \mathcal{A} \to \mathcal{B}$
+then we get an additive functor
+\begin{equation}
+\label{equation-extend-T}
+T_{ext} : \mathcal{I}^f \to \text{Fil}^f(\mathcal{B})
+\end{equation}
+by setting $T_{ext}(I) = \bigoplus T(I_p)$ with
+$F^pT_{ext}(I) = \bigoplus_{q \geq p} T(I_q)$. Note that we have the
+property $\text{gr}(T_{ext}(I)) = T(\text{gr}(I))$ by construction.
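Spelled out in degree $p$, using (\ref{equation-decompose}), this reads
$$
\text{gr}^p(T_{ext}(I)) =
\bigoplus\nolimits_{q \geq p} T(I_q) \Big/
\bigoplus\nolimits_{q > p} T(I_q)
= T(I_p) = T(\text{gr}^p(I)).
$$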
+Hence we obtain a functor
+\begin{equation}
+\label{equation-extend-T-complexes}
+T_{ext} : K^{+}(\mathcal{I}^f) \to K^{+}(\text{Fil}^f(\mathcal{B}))
+\end{equation}
+which commutes with $\text{gr}$. Then we define
+(\ref{equation-filtered-derived-functor}) by the composition
+\begin{equation}
+\label{equation-definition-filtered-derived-functor}
+RT = T_{ext} \circ j'.
+\end{equation}
+Since $RT : D^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$ is computed
+by injective resolutions as well, see
Lemma \ref{lemma-injective-acyclic},
+the commutation of $T$ with $\text{gr}$, and the commutative diagrams of
+Lemma \ref{lemma-filtered-localization-functor}
+imply that
+\begin{equation}
+\label{equation-commute-gr}
+\text{gr}^p \circ RT \cong RT \circ \text{gr}^p
+\end{equation}
+and
+\begin{equation}
+\label{equation-commute-forget}
+(\text{forget }F) \circ RT \cong RT \circ (\text{forget }F)
+\end{equation}
+as functors $DF^{+}(\mathcal{A}) \to D^{+}(\mathcal{B})$.
+
+\medskip\noindent
+The filtered derived functor $RT$ (\ref{equation-filtered-derived-functor})
+induces functors
+$$
+\begin{matrix}
+RT : \text{Fil}^f(\mathcal{A}) \to DF^{+}(\mathcal{B}), \\
+RT : \text{Comp}^{+}(\text{Fil}^f(\mathcal{A})) \to DF^{+}(\mathcal{B}), \\
+RT : KF^{+}(\mathcal{A}) \to DF^{+}(\mathcal{B}).
+\end{matrix}
+$$
+Note that since $\text{Fil}^f(\mathcal{A})$, and
+$\text{Comp}^{+}(\text{Fil}^f(\mathcal{A}))$ are no longer
+abelian it does not make sense to say that $RT$ restricts to
+a $\delta$-functor on them. (This can be
+repaired by thinking of these categories as exact categories and
+formulating the notion of a $\delta$-functor from an exact category
+into a triangulated category.)
+But it does make sense, and it is true
+by construction, that $RT$ is an exact functor on the triangulated
+category $KF^{+}(\mathcal{A})$.
+
+\begin{lemma}
+\label{lemma-ss-filtered-derived}
+Let $\mathcal{A}, \mathcal{B}$ be abelian categories. Let
+$T : \mathcal{A} \to \mathcal{B}$ be a left exact functor.
+Assume $\mathcal{A}$ has enough injectives.
+Let $(K^\bullet, F)$ be an object of
+$\text{Comp}^{+}(\text{Fil}^f(\mathcal{A}))$.
+There exists a spectral sequence $(E_r, d_r)_{r\geq 0}$
+consisting of bigraded objects $E_r$ of $\mathcal{B}$
+and $d_r$ of bidegree $(r, - r + 1)$ and with
+$$
+E_1^{p, q} = R^{p + q}T(\text{gr}^p(K^\bullet))
+$$
+Moreover, this spectral sequence is bounded, converges
+to $R^*T(K^\bullet)$, and induces a finite
+filtration on each $R^nT(K^\bullet)$. The construction
+of this spectral sequence is functorial in the object
+$K^\bullet$ of $\text{Comp}^{+}(\text{Fil}^f(\mathcal{A}))$
+and the terms $(E_r, d_r)$ for $r \geq 1$ do not depend
+on any choices.
+\end{lemma}
+
+\begin{proof}
+Choose a filtered quasi-isomorphism $K^\bullet \to I^\bullet$
+with $I^\bullet$ a bounded below complex of filtered injective objects, see
+Lemma \ref{lemma-right-resolution-by-filtered-injectives}.
+Consider the complex $RT(K^\bullet) = T_{ext}(I^\bullet)$, see
+(\ref{equation-definition-filtered-derived-functor}).
+Thus we can consider the spectral sequence
+$(E_r, d_r)_{r \geq 0}$ associated to
+this as a filtered complex in $\mathcal{B}$, see
+Homology, Section \ref{homology-section-filtered-complex}.
+By
+Homology, Lemma \ref{homology-lemma-spectral-sequence-filtered-complex}
+we have $E_1^{p, q} = H^{p + q}(\text{gr}^p(T(I^\bullet)))$.
+By Equation (\ref{equation-decompose}) we have
+$E_1^{p, q} = H^{p + q}(T(\text{gr}^p(I^\bullet)))$, and
+by definition of a filtered injective resolution the
+map $\text{gr}^p(K^\bullet) \to \text{gr}^p(I^\bullet)$
+is an injective resolution. Hence
+$E_1^{p, q} = R^{p + q}T(\text{gr}^p(K^\bullet))$.
+
+\medskip\noindent
+On the other hand, each $I^n$ has a finite filtration and hence
+each $T(I^n)$ has a finite filtration. Thus we may apply
+Homology, Lemma \ref{homology-lemma-biregular-ss-converges}
to conclude that the spectral sequence is bounded, converges to
$H^n(T(I^\bullet)) = R^nT(K^\bullet)$, and moreover induces a finite
filtration on each of these terms.
+
+\medskip\noindent
+Suppose that $K^\bullet \to L^\bullet$ is a morphism of
+$\text{Comp}^{+}(\text{Fil}^f(\mathcal{A}))$.
+Choose a filtered quasi-isomorphism $L^\bullet \to J^\bullet$
+with $J^\bullet$ a bounded below complex of filtered injective
+objects, see
+Lemma \ref{lemma-right-resolution-by-filtered-injectives}.
+By our results above,
+for example
+Lemma \ref{lemma-morphisms-into-filtered-injective-complex},
+there exists a diagram
+$$
+\xymatrix{
+K^\bullet \ar[r] \ar[d] & L^\bullet \ar[d] \\
+I^\bullet \ar[r] & J^\bullet
+}
+$$
+which commutes up to homotopy. Hence we get a morphism of filtered
+complexes $T(I^\bullet) \to T(J^\bullet)$ which gives rise to the
+morphism of spectral sequences, see
+Homology,
+Lemma \ref{homology-lemma-spectral-sequence-filtered-complex-functorial}.
+The last statement follows from this.
+\end{proof}
+
+\begin{remark}
+\label{remark-final-functorial}
+As promised in Remark \ref{remark-functorial-ss} we discuss the connection
+of the lemma above with the constructions using Cartan-Eilenberg resolutions.
+Namely, let $T : \mathcal{A} \to \mathcal{B}$ be a left exact functor
+of abelian categories, assume $\mathcal{A}$
+has enough injectives, and let $K^\bullet$ be a bounded below complex
+of $\mathcal{A}$. We give an alternative construction of the
+spectral sequences ${}'E$ and ${}''E$ of
+Lemma \ref{lemma-two-ss-complex-functor}.
+
+\medskip\noindent
+First spectral sequence. Consider the ``stupid'' filtration on $K^\bullet$
+obtained by setting $F^p(K^\bullet) = \sigma_{\geq p}(K^\bullet)$, see
+Homology, Section \ref{homology-section-truncations}.
+Note that this filtration is ``stupid'' in the sense that
+$d(F^p(K^\bullet)) \subset F^{p + 1}(K^\bullet)$, compare
+Homology, Lemma \ref{homology-lemma-spectral-sequence-filtered-complex-d1}.
+Note that $\text{gr}^p(K^\bullet) = K^p[-p]$ with this filtration.
+According to Lemma \ref{lemma-ss-filtered-derived} there is a spectral sequence
+with $E_1$ term
+$$
+E_1^{p, q} = R^{p + q}T(K^p[-p]) = R^qT(K^p)
+$$
+as in the spectral sequence ${}'E_r$. Observe moreover that the differentials
+$E_1^{p, q} \to E_1^{p + 1, q}$ agree with the differentials in ${}'E_1$, see
+Homology, Lemma
+\ref{homology-lemma-spectral-sequence-filtered-complex-d1} part (2)
+and the description of ${}'d_1$ in the proof of
+Lemma \ref{lemma-two-ss-complex-functor}.
+
+\medskip\noindent
+Second spectral sequence. Consider the filtration on the complex $K^\bullet$
+obtained by setting $F^p(K^\bullet) = \tau_{\leq -p}(K^\bullet)$, see
+Homology, Section \ref{homology-section-truncations}.
+The minus sign is necessary
+to get a decreasing filtration. Note that
+$\text{gr}^p(K^\bullet)$ is quasi-isomorphic to $H^{-p}(K^\bullet)[p]$
+with this filtration. According to Lemma \ref{lemma-ss-filtered-derived}
+there is a spectral sequence with $E_1$ term
+$$
+E_1^{p, q} = R^{p + q}T(H^{-p}(K^\bullet)[p])
+= R^{2p + q}T(H^{-p}(K^\bullet)) = {}''E_2^{i, j}
+$$
+with $i = 2p + q$ and $j = -p$. (This looks unnatural, but note that we
+could just as well have developed the whole theory of filtered complexes
+using increasing filtrations, with the end result that this then looks
+natural, but the other one doesn't.) We leave it to the reader to see
+that the differentials match up.
+
+\medskip\noindent
+Actually, given a Cartan-Eilenberg resolution
+$K^\bullet \to I^{\bullet, \bullet}$ the induced morphism
+$K^\bullet \to \text{Tot}(I^{\bullet, \bullet})$
+into the associated total complex
+will be a filtered injective resolution for either filtration
+using suitable filtrations on $\text{Tot}(I^{\bullet, \bullet})$.
+This can be used
+to match up the spectral sequences exactly.
+\end{remark}
+
+
+
+
+
+
+\section{Ext groups}
+\label{section-ext}
+
+\noindent
+In this section we start describing the Ext groups of objects
+of an abelian category. First we have the following very general
+definition.
+
+\begin{definition}
+\label{definition-ext}
+Let $\mathcal{A}$ be an abelian category. Let $i \in \mathbf{Z}$. Let
+$X, Y$ be objects of $D(\mathcal{A})$. The {\it $i$th extension group}
+of $X$ by $Y$ is the group
+$$
+\Ext^i_\mathcal{A}(X, Y) =
+\Hom_{D(\mathcal{A})}(X, Y[i]) =
+\Hom_{D(\mathcal{A})}(X[-i], Y).
+$$
+If $A, B \in \Ob(\mathcal{A})$ we set
+$\Ext^i_\mathcal{A}(A, B) = \text{Ext}^i_\mathcal{A}(A[0], B[0])$.
+\end{definition}
+
+\noindent
+Since $\Hom_{D(\mathcal{A})}(X, -)$,
+resp.\ $\Hom_{D(\mathcal{A})}(-, Y)$ is a homological,
+resp.\ cohomological functor, see
+Lemma \ref{lemma-representable-homological},
+we see that a distinguished triangle $(Y, Y', Y'')$,
+resp.\ $(X, X', X'')$ leads to a long exact sequence
+$$
+\ldots \to
+\Ext^i_\mathcal{A}(X, Y) \to
+\Ext^i_\mathcal{A}(X, Y') \to
+\Ext^i_\mathcal{A}(X, Y'') \to
+\Ext^{i + 1}_\mathcal{A}(X, Y) \to \ldots
+$$
+respectively
+$$
+\ldots \to
+\Ext^i_\mathcal{A}(X'', Y) \to
+\Ext^i_\mathcal{A}(X', Y) \to
+\Ext^i_\mathcal{A}(X, Y) \to
+\Ext^{i + 1}_\mathcal{A}(X'', Y) \to \ldots
+$$
+Note that since $D^+(\mathcal{A})$, $D^-(\mathcal{A})$, $D^b(\mathcal{A})$
+are full subcategories of $D(\mathcal{A})$ we may compute the Ext groups
+as Hom groups in these categories provided $X$, $Y$ are contained in them.
+
+\medskip\noindent
+In case the category $\mathcal{A}$ has enough injectives or enough
+projectives we can compute the Ext groups using injective or
+projective resolutions. To avoid confusion, recall that having an
+injective (resp.\ projective) resolution implies vanishing of cohomology
+in all low (resp.\ high) degrees, see
+Lemmas \ref{lemma-cohomology-bounded-below} and
+\ref{lemma-cohomology-bounded-above}.
+
+\begin{lemma}
+\label{lemma-compute-ext-resolutions}
+Let $\mathcal{A}$ be an abelian category.
+Let $X^\bullet, Y^\bullet \in \Ob(K(\mathcal{A}))$.
+\begin{enumerate}
+\item Let $Y^\bullet \to I^\bullet$ be an injective resolution
+(Definition \ref{definition-injective-resolution}). Then
+$$
+\Ext^i_\mathcal{A}(X^\bullet, Y^\bullet) =
+\Hom_{K(\mathcal{A})}(X^\bullet, I^\bullet[i]).
+$$
+\item Let $P^\bullet \to X^\bullet$ be a projective resolution
+(Definition \ref{definition-projective-resolution}). Then
+$$
+\Ext^i_\mathcal{A}(X^\bullet, Y^\bullet) =
+\Hom_{K(\mathcal{A})}(P^\bullet[-i], Y^\bullet).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+Lemma \ref{lemma-morphisms-into-injective-complex}
+and
+Lemma \ref{lemma-morphisms-from-projective-complex}.
+\end{proof}
+
+\noindent
+In the rest of this section we discuss extensions of objects of the
+abelian category itself. First we observe the following.
+
+\begin{lemma}
+\label{lemma-negative-exts}
+Let $\mathcal{A}$ be an abelian category.
+\begin{enumerate}
+\item Let $X$, $Y$ be objects of $D(\mathcal{A})$. Given $a, b \in \mathbf{Z}$
+such that $H^i(X) = 0$ for $i > a$ and $H^j(Y) = 0$
+for $j < b$, we have $\Ext^n_\mathcal{A}(X, Y) = 0$ for
+$n < b - a$ and
+$$
+\Ext^{b - a}_\mathcal{A}(X, Y) = \Hom_\mathcal{A}(H^a(X), H^b(Y))
+$$
+\item Let $A, B \in \Ob(\mathcal{A})$.
+For $i < 0$ we have $\Ext^i_\mathcal{A}(B, A) = 0$.
+We have $\Ext^0_\mathcal{A}(B, A) = \Hom_\mathcal{A}(B, A)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose complexes $X^\bullet$ and $Y^\bullet$ representing $X$ and $Y$.
+Since $Y^\bullet \to \tau_{\geq b}Y^\bullet$ is a quasi-isomorphism,
+we may assume that $Y^j = 0$ for $j < b$.
+Let $L^\bullet \to X^\bullet$ be any quasi-isomorphism.
+Then $\tau_{\leq a}L^\bullet \to X^\bullet$
+is a quasi-isomorphism. Hence a morphism $X \to Y[n]$
+in $D(\mathcal{A})$ can be represented as $fs^{-1}$ where
+$s : L^\bullet \to X^\bullet$ is a quasi-isomorphism,
+$f : L^\bullet \to Y^\bullet[n]$ a morphism, and
+$L^i = 0$ for $i > a$. Note that $f$ maps $L^i$ to $Y^{i + n}$.
+Thus $f = 0$ if $n < b - a$ because always either $L^i$ or $Y^{i + n}$ is
+zero. If $n = b - a$, then $f$ corresponds exactly to a morphism
+$H^a(X) \to H^b(Y)$. Part (2) is a special case of (1).
+\end{proof}
+
+\noindent
+Let $\mathcal{A}$ be an abelian category.
+Suppose that $0 \to A \to A' \to A'' \to 0$ is a short exact
+sequence of objects of $\mathcal{A}$. Then
+$0 \to A[0] \to A'[0] \to A''[0] \to 0$ leads to a distinguished
+triangle in $D(\mathcal{A})$ (see
+Lemma \ref{lemma-derived-canonical-delta-functor})
+hence a long exact sequence of Ext groups
+$$
+0 \to \Ext^0_\mathcal{A}(B, A) \to
+\Ext^0_\mathcal{A}(B, A') \to
+\Ext^0_\mathcal{A}(B, A'') \to
+\Ext^1_\mathcal{A}(B, A) \to \ldots
+$$
+Similarly, given a short exact sequence $0 \to B \to B' \to B'' \to 0$
+we obtain a long exact sequence of Ext groups
+$$
+0 \to \Ext^0_\mathcal{A}(B'', A) \to
+\Ext^0_\mathcal{A}(B', A) \to
+\Ext^0_\mathcal{A}(B, A) \to
+\Ext^1_\mathcal{A}(B'', A) \to \ldots
+$$
+We may view these Ext groups as an application of the construction of the
+derived category. It shows one can define Ext groups and construct
+the long exact sequence of Ext groups without needing the existence of enough
+injectives or projectives. There is an alternative construction of the Ext
+groups due to Yoneda which avoids the use of the derived category, see
+\cite{Yoneda}.
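+
+\medskip\noindent
+To illustrate these long exact sequences, consider the category of abelian
+groups and the short exact sequence
+$0 \to \mathbf{Z} \xrightarrow{n} \mathbf{Z} \to \mathbf{Z}/n\mathbf{Z} \to 0$
+for an integer $n \geq 1$. Since $\mathbf{Z}$ is projective we have
+$\Ext^i(\mathbf{Z}, \mathbf{Z}) = 0$ for $i > 0$, and the long exact
+sequence for $\Ext^i(-, \mathbf{Z})$ gives
+$$
+\Ext^0(\mathbf{Z}/n\mathbf{Z}, \mathbf{Z}) = 0,\quad
+\Ext^1(\mathbf{Z}/n\mathbf{Z}, \mathbf{Z}) \cong \mathbf{Z}/n\mathbf{Z},\quad
+\Ext^i(\mathbf{Z}/n\mathbf{Z}, \mathbf{Z}) = 0 \text{ for } i \geq 2
+$$
+because the map
+$\Hom(\mathbf{Z}, \mathbf{Z}) \to \Hom(\mathbf{Z}, \mathbf{Z})$
+induced by multiplication by $n$ on $\mathbf{Z}$ is again multiplication
+by $n$.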
+
+\begin{definition}
+\label{definition-yoneda-extension}
+Let $\mathcal{A}$ be an abelian category.
+Let $A, B \in \Ob(\mathcal{A})$.
+A degree $i$ {\it Yoneda extension} of $B$ by $A$ is an exact sequence
+$$
+E : 0 \to A \to Z_{i - 1} \to Z_{i - 2} \to \ldots \to Z_0 \to B \to 0
+$$
+in $\mathcal{A}$. We say two Yoneda extensions $E$ and $E'$ of the same degree
+are {\it equivalent} if there exists a commutative diagram
+$$
+\xymatrix{
+0 \ar[r] & A \ar[r] & Z_{i - 1} \ar[r] & \ldots \ar[r] &
+Z_0 \ar[r] & B \ar[r] & 0 \\
+0 \ar[r] &
+A \ar[r] \ar[u]^{\text{id}} \ar[d]_{\text{id}} &
+Z''_{i - 1} \ar[r] \ar[u] \ar[d] &
+\ldots \ar[r] &
+Z''_0 \ar[r] \ar[u] \ar[d] &
+B \ar[r] \ar[u]_{\text{id}} \ar[d]^{\text{id}} & 0 \\
+0 \ar[r] & A \ar[r] & Z'_{i - 1} \ar[r] & \ldots \ar[r] &
+Z'_0 \ar[r] & B \ar[r] & 0
+}
+$$
+where the middle row is a Yoneda extension as well.
+\end{definition}
+
+\noindent
+It is not immediately clear that equivalence of Yoneda extensions as just
+defined is indeed an equivalence relation. Although it is instructive to
+prove this directly, it also follows from
+Lemma \ref{lemma-yoneda-extension}
+below.
+
+\medskip\noindent
+Let $\mathcal{A}$ be an abelian category with objects $A$, $B$.
+Given a Yoneda extension
+$E : 0 \to A \to Z_{i - 1} \to Z_{i - 2} \to \ldots \to Z_0 \to B \to 0$
+we define an associated element $\delta(E) \in \Ext^i(B, A)$
+as the morphism $\delta(E) = fs^{-1} : B[0] \to A[i]$ where
+$s$ is the quasi-isomorphism
+$$
+(\ldots \to 0 \to A \to Z_{i - 1} \to \ldots \to Z_0 \to 0 \to \ldots)
+\longrightarrow
+B[0]
+$$
+and $f$ is the morphism of complexes
+$$
+(\ldots \to 0 \to A \to Z_{i - 1} \to \ldots \to Z_0 \to 0 \to \ldots)
+\longrightarrow
+A[i]
+$$
+We call $\delta(E) = fs^{-1}$ the {\it class} of the Yoneda extension.
+It turns out that this class characterizes the equivalence class
+of the Yoneda extension.
+
+\begin{lemma}
+\label{lemma-yoneda-extension}
+Let $\mathcal{A}$ be an abelian category with objects $A$, $B$.
+Any element in $\Ext^i_\mathcal{A}(B, A)$ is $\delta(E)$
+for some degree $i$ Yoneda extension of $B$ by $A$.
+Given two Yoneda extensions $E$, $E'$ of the same degree
+then $E$ is equivalent to $E'$ if and only if $\delta(E) = \delta(E')$.
+\end{lemma}
+
+\begin{proof}
+Let $\xi : B[0] \to A[i]$ be an element of $\Ext^i_\mathcal{A}(B, A)$.
+We may write $\xi = f s^{-1}$ for some quasi-isomorphism
+$s : L^\bullet \to B[0]$ and map $f : L^\bullet \to A[i]$.
+After replacing $L^\bullet$ by $\tau_{\leq 0}L^\bullet$ we may assume
+that $L^j = 0$ for $j > 0$. Picture
+$$
+\xymatrix{
+L^{- i - 1} \ar[r] & L^{-i} \ar[r] \ar[d] & \ldots \ar[r] &
+L^0 \ar[r] & B \ar[r] & 0 \\
+& A
+}
+$$
+Then setting $Z_{i - 1} = (L^{- i + 1} \oplus A)/L^{-i}$ and
+$Z_j = L^{-j}$ for $j = i - 2, \ldots, 0$ we see that we obtain a
+degree $i$ extension $E$ of $B$ by $A$ whose class $\delta(E)$ equals
+$\xi$.
+
+\medskip\noindent
+It is immediate from the definitions that equivalent Yoneda extensions
+have the same class. Suppose that
+$E : 0 \to A \to Z_{i - 1} \to Z_{i - 2} \to \ldots \to Z_0 \to B \to 0$ and
+$E' : 0 \to A \to Z'_{i - 1} \to Z'_{i - 2} \to \ldots \to Z'_0 \to B \to 0$
+are Yoneda extensions with the same class.
+By construction of $D(\mathcal{A})$ as the localization
+of $K(\mathcal{A})$ at the set of quasi-isomorphisms, this means there
+exists a complex $L^\bullet$ and quasi-isomorphisms
+$$
+t : L^\bullet \to
+(\ldots \to 0 \to A \to Z_{i - 1} \to \ldots \to Z_0 \to 0 \to \ldots)
+$$
+and
+$$
+t' : L^\bullet \to
+(\ldots \to 0 \to A \to Z'_{i - 1} \to \ldots \to Z'_0 \to 0 \to \ldots)
+$$
+such that $s \circ t = s' \circ t'$ and $f \circ t = f' \circ t'$, see
+Categories, Section \ref{categories-section-localization}.
+Let $E''$ be the degree $i$ extension of $B$ by $A$ constructed from
+the pair $L^\bullet \to B[0]$ and $L^\bullet \to A[i]$ in the first
+paragraph of the proof. Then the reader readily sees that there exist
+``morphisms'' of degree $i$ Yoneda extensions $E'' \to E$ and $E'' \to E'$
+as in the definition of equivalent Yoneda extensions (details omitted).
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ext-1}
+Let $\mathcal{A}$ be an abelian category. Let $A$, $B$ be objects
+of $\mathcal{A}$. Then $\Ext^1_\mathcal{A}(B, A)$ is
+the group $\Ext_\mathcal{A}(B, A)$ constructed in
+Homology, Definition \ref{homology-definition-ext-group}.
+\end{lemma}
+
+\begin{proof}
+This is the case $i = 1$ of
+Lemma \ref{lemma-yoneda-extension}.
+\end{proof}
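+
+\noindent
+For example, in the category of abelian groups the degree $1$ Yoneda
+extensions
+$$
+0 \to \mathbf{Z}/2\mathbf{Z} \to
+\mathbf{Z}/2\mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z} \to
+\mathbf{Z}/2\mathbf{Z} \to 0
+\quad\text{and}\quad
+0 \to \mathbf{Z}/2\mathbf{Z} \xrightarrow{2} \mathbf{Z}/4\mathbf{Z} \to
+\mathbf{Z}/2\mathbf{Z} \to 0
+$$
+are not equivalent: the first is split, hence has class zero, whereas the
+class of the second generates
+$\Ext^1(\mathbf{Z}/2\mathbf{Z}, \mathbf{Z}/2\mathbf{Z})
+\cong \mathbf{Z}/2\mathbf{Z}$.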
+
+\begin{lemma}
+\label{lemma-cup-ext-1-zero}
+Let $\mathcal{A}$ be an abelian category. Let
+$0 \to A \to Z \to B \to 0$ and
+$0 \to B \to Z' \to C \to 0$ be short exact sequences in $\mathcal{A}$.
+Denote $[Z] \in \Ext^1(B, A)$ and $[Z'] \in \Ext^1(C, B)$ their classes.
+Then $[Z] \circ [Z'] \in \Ext^2_\mathcal{A}(C, A)$ is $0$ if and
+only if there exists a commutative diagram
+$$
+\xymatrix{
+&
+&
+0 \ar[d] &
+0 \ar[d]
+\\
+0 \ar[r] &
+A \ar[r] \ar[d]^1 &
+Z \ar[r] \ar[d] &
+B \ar[r] \ar[d] &
+0 \\
+0 \ar[r] &
+A \ar[r] &
+W \ar[r] \ar[d] &
+Z' \ar[r] \ar[d] &
+0 \\
+&
+&
+C \ar[r]^1 \ar[d]&
+C \ar[d]\\
+&
+&
+0 &
+0
+}
+$$
+with exact rows and columns in $\mathcal{A}$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hints: You can argue this using the result of
+Lemma \ref{lemma-yoneda-extension} and working
+out what it means for a $2$-extension class to be zero.
+Or you can use that if $[Z] \circ [Z'] \in \Ext^2_\mathcal{A}(C, A)$
+is zero, then by the long exact cohomology sequence
+of $\Ext$ the element $[Z] \in \Ext^1(B, A)$ is the
+image of some element in $\Ext^1(Z', A)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-higher-ext-zero}
+Let $\mathcal{A}$ be an abelian category and let $p \geq 0$.
+If $\Ext^p_\mathcal{A}(B, A) = 0$ for any pair of objects $A$, $B$
+of $\mathcal{A}$, then $\Ext^i_\mathcal{A}(B, A) = 0$ for
+$i \geq p$ and any pair of objects $A$, $B$ of $\mathcal{A}$.
+\end{lemma}
+
+\begin{proof}
+For $i > p$ write any class $\xi$ as $\delta(E)$
+where $E$ is a Yoneda extension
+$$
+E : 0 \to A \to Z_{i - 1} \to Z_{i - 2} \to \ldots \to Z_0 \to B \to 0
+$$
+This is possible by Lemma \ref{lemma-yoneda-extension}.
+Set $C = \Ker(Z_{p - 1} \to Z_{p - 2}) = \Im(Z_p \to Z_{p - 1})$.
+Then $\delta(E)$ is the composition of $\delta(E')$ and $\delta(E'')$
+where
+$$
+E' : 0 \to C \to Z_{p - 1} \to \ldots \to Z_0 \to B \to 0
+$$
+and
+$$
+E'' : 0 \to A \to Z_{i - 1} \to Z_{i - 2} \to \ldots \to Z_p \to C \to 0
+$$
+Since $\delta(E') \in \Ext^p_\mathcal{A}(B, C) = 0$
+we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ext-2-zero-pre}
+Let $\mathcal{A}$ be an abelian category. Let $K$ be an object of
+$D^b(\mathcal{A})$ such that $\Ext^p_\mathcal{A}(H^i(K), H^j(K)) = 0$
+for all $p \geq 2$ and $i > j$. Then $K$ is isomorphic to the direct
+sum of its cohomologies: $K \cong \bigoplus H^i(K)[-i]$.
+\end{lemma}
+
+\begin{proof}
+Choose $a, b$ such that $H^i(K) = 0$ for $i \not \in [a, b]$.
+We will prove the lemma by induction on $b - a$. If $b - a \leq 0$,
+then the result is clear. If $b - a > 0$, then we look at the
+distinguished triangle of truncations
+$$
+\tau_{\leq b - 1}K \to K \to H^b(K)[-b] \to (\tau_{\leq b - 1}K)[1]
+$$
+see Remark \ref{remark-truncation-distinguished-triangle}.
+By Lemma \ref{lemma-split} if the last arrow is zero, then
+$K \cong \tau_{\leq b - 1}K \oplus H^b(K)[-b]$ and we win
+by induction. Again using induction we see that
+$$
+\Hom_{D(\mathcal{A})}(H^b(K)[-b], (\tau_{\leq b - 1}K)[1]) =
+\bigoplus\nolimits_{i < b} \Ext_\mathcal{A}^{b - i + 1}(H^b(K), H^i(K))
+$$
+By assumption the direct sum is zero and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ext-2-zero}
+Let $\mathcal{A}$ be an abelian category. Assume $\Ext^2_\mathcal{A}(B, A) = 0$
+for any pair of objects $A$, $B$ of $\mathcal{A}$.
+Then any object $K$ of $D^b(\mathcal{A})$ is isomorphic to the direct
+sum of its cohomologies: $K \cong \bigoplus H^i(K)[-i]$.
+\end{lemma}
+
+\begin{proof}
+The assumption implies that $\Ext^i_\mathcal{A}(B, A) = 0$ for $i \geq 2$
+and any pair of objects $A, B$ of $\mathcal{A}$ by
+Lemma \ref{lemma-higher-ext-zero}. Hence this lemma is a special
+case of Lemma \ref{lemma-ext-2-zero-pre}.
+\end{proof}
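+
+\noindent
+The assumption of the lemma holds for example if $\mathcal{A}$ is the
+category of modules over a hereditary ring, such as the category of
+abelian groups: every submodule of a projective module is projective,
+hence every module has a projective resolution of length $1$ and the
+groups $\Ext^2$ vanish. Thus every object of the bounded derived
+category of abelian groups is isomorphic to the direct sum of its
+shifted cohomology groups.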
+
+
+
+
+
+
+\section{K-groups}
+\label{section-K-groups}
+
+\noindent
+A tiny bit about $K_0$ of a triangulated category.
+
+\begin{definition}
+\label{definition-K-zero}
+Let $\mathcal{D}$ be a triangulated category. We denote $K_0(\mathcal{D})$ the
+{\it zeroth $K$-group of $\mathcal{D}$}. It is the abelian group constructed
+as follows. Take the free abelian group on the objects of $\mathcal{D}$
+and for every distinguished triangle $X \to Y \to Z$
+impose the relation $[Y] - [X] - [Z] = 0$.
+\end{definition}
+
+\noindent
+Observe that this implies that $[X[n]] = (-1)^n[X]$ because we have
+the distinguished triangle $(X, 0, X[1], 0, 0, -\text{id}[1])$.
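+
+\medskip\noindent
+For example, if $\mathcal{D}$ has countable direct sums, then
+$K_0(\mathcal{D}) = 0$ by the Eilenberg swindle: given an object $X$ set
+$E = \bigoplus_{n \geq 0} X$. The split distinguished triangle
+$X \to X \oplus E \to E$ together with the isomorphism
+$X \oplus E \cong E$ gives $[X] + [E] = [E]$, hence $[X] = 0$.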
+
+\begin{lemma}
+\label{lemma-K-bounded-derived}
+Let $\mathcal{A}$ be an abelian category. Then there is a canonical
+identification $K_0(D^b(\mathcal{A})) = K_0(\mathcal{A})$
+of zeroth $K$-groups.
+\end{lemma}
+
+\begin{proof}
+Given an object $A$ of $\mathcal{A}$ denote $A[0]$ the object $A$
+viewed as a complex sitting in degree $0$.
+If $0 \to A \to A' \to A'' \to 0$ is a short
+exact sequence, then we get a distinguished triangle
+$A[0] \to A'[0] \to A''[0] \to A[1]$, see
+Section \ref{section-canonical-delta-functor}.
+This shows that we obtain a map $K_0(\mathcal{A}) \to K_0(D^b(\mathcal{A}))$
+by sending $[A]$ to $[A[0]]$ with apologies for the horrendous notation.
+
+\medskip\noindent
+On the other hand, given an object $X$ of $D^b(\mathcal{A})$ we can
+consider the element
+$$
+c(X) = \sum (-1)^i[H^i(X)] \in K_0(\mathcal{A})
+$$
+Given a distinguished triangle $X \to Y \to Z$ the long exact sequence
+of cohomology (\ref{equation-long-exact-cohomology-sequence-D})
+and the relations in $K_0(\mathcal{A})$ show that
+$c(Y) = c(X) + c(Z)$. Thus $c$ factors through a map
+$c : K_0(D^b(\mathcal{A})) \to K_0(\mathcal{A})$.
+
+\medskip\noindent
+We want to show that the two maps above are mutually inverse.
+It is clear that the composition $K_0(\mathcal{A}) \to
+K_0(D^b(\mathcal{A})) \to K_0(\mathcal{A})$ is the identity.
+Suppose that $X^\bullet$ is a bounded complex of $\mathcal{A}$.
+The existence of the distinguished triangles of ``stupid truncations'' (see
+Homology, Section \ref{homology-section-truncations})
+$$
+\sigma_{\geq n}X^\bullet \to \sigma_{\geq n - 1}X^\bullet \to
+X^{n - 1}[-n + 1] \to (\sigma_{\geq n}X^\bullet)[1]
+$$
+and induction show that
+$$
+[X^\bullet] = \sum (-1)^i[X^i[0]]
+$$
+in $K_0(D^b(\mathcal{A}))$ (with again apologies for the notation).
+It follows that the map
+$K_0(\mathcal{A}) \to K_0(D^b(\mathcal{A}))$ is surjective,
+which finishes the proof.
+\end{proof}
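+
+\noindent
+For example, if $\mathcal{A}$ is the category of finite dimensional
+vector spaces over a field $k$, then $\dim_k$ induces an isomorphism
+$K_0(\mathcal{A}) \cong \mathbf{Z}$, and under the identification of the
+lemma the class of a bounded complex $X^\bullet$ corresponds to its
+Euler characteristic
+$$
+\chi(X^\bullet) =
+\sum (-1)^i \dim_k H^i(X^\bullet) =
+\sum (-1)^i \dim_k X^i.
+$$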
+
+\begin{lemma}
+\label{lemma-map-K}
+Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor of triangulated
+categories. Then $F$ induces a group homomorphism
+$K_0(\mathcal{D}) \to K_0(\mathcal{D}')$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homological-map-K}
+Let $H : \mathcal{D} \to \mathcal{A}$ be a homological functor
+from a triangulated category to an abelian category. Assume that
+for any $X$ in $\mathcal{D}$ only a finite number of the objects
+$H(X[i])$ are nonzero in $\mathcal{A}$. Then $H$ induces a group homomorphism
+$K_0(\mathcal{D}) \to K_0(\mathcal{A})$ sending $[X]$ to
+$\sum (-1)^i[H(X[i])]$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-DBA-map-K}
+Let $\mathcal{B}$ be a weak Serre subcategory of the abelian category
+$\mathcal{A}$. Then there are canonical maps
+$$
+K_0(\mathcal{B}) \longrightarrow
+K_0(D^b_\mathcal{B}(\mathcal{A})) \longrightarrow
+K_0(\mathcal{B})
+$$
+whose composition is zero. The second arrow
+sends the class $[X]$ of the object $X$ to the element
+$\sum (-1)^i[H^i(X)]$ of $K_0(\mathcal{B})$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bilinear-map-K}
+Let $\mathcal{D}$, $\mathcal{D}'$, $\mathcal{D}''$ be triangulated categories.
+Let
+$$
+\otimes : \mathcal{D} \times \mathcal{D}' \longrightarrow \mathcal{D}''
+$$
+be a functor such that for fixed $X$ in $\mathcal{D}$ the functor
+$X \otimes - : \mathcal{D}' \to \mathcal{D}''$ is an exact functor and
+for fixed $X'$ in $\mathcal{D}'$ the functor
+$- \otimes X' : \mathcal{D} \to \mathcal{D}''$ is an exact functor. Then
+$\otimes$ induces a bilinear map
+$K_0(\mathcal{D}) \times K_0(\mathcal{D}') \to K_0(\mathcal{D}'')$
+which sends $([X], [X'])$ to $[X \otimes X']$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Unbounded complexes}
+\label{section-unbounded}
+
+\noindent
+A reference for the material in this section is \cite{Spaltenstein}.
+The following lemma is useful to find ``good'' left resolutions of
+unbounded complexes.
+
+\begin{lemma}
+\label{lemma-special-direct-system}
+Let $\mathcal{A}$ be an abelian category. Let
+$\mathcal{P} \subset \Ob(\mathcal{A})$ be a subset.
+Assume $\mathcal{P}$ contains $0$, is closed under (finite) direct sums,
+and every object of $\mathcal{A}$ is a quotient of an
+element of $\mathcal{P}$. Let $K^\bullet$ be a complex.
+There exists a commutative diagram
+$$
+\xymatrix{
+P_1^\bullet \ar[d] \ar[r] & P_2^\bullet \ar[d] \ar[r] & \ldots \\
+\tau_{\leq 1}K^\bullet \ar[r] & \tau_{\leq 2}K^\bullet \ar[r] & \ldots
+}
+$$
+in the category of complexes such that
+\begin{enumerate}
+\item the vertical arrows are quasi-isomorphisms and termwise surjective,
+\item $P_n^\bullet$ is a bounded above complex with terms in
+$\mathcal{P}$,
+\item the arrows $P_n^\bullet \to P_{n + 1}^\bullet$
+are termwise split injections and each cokernel
+$P^i_{n + 1}/P^i_n$ is an element of $\mathcal{P}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We are going to use that the homotopy category $K(\mathcal{A})$ is a
+triangulated category, see Proposition
+\ref{proposition-homotopy-category-triangulated}.
+By Lemma \ref{lemma-subcategory-left-resolution} we can find a
+termwise surjective map of complexes $P_1^\bullet \to \tau_{\leq 1}K^\bullet$
+which is a quasi-isomorphism
+such that the terms of $P_1^\bullet$ are in $\mathcal{P}$.
+By induction it suffices, given
+$P_1^\bullet, \ldots, P_n^\bullet$ to construct
+$P_{n + 1}^\bullet$ and the maps
+$P_n^\bullet \to P_{n + 1}^\bullet$ and
+$P_{n + 1}^\bullet \to \tau_{\leq n + 1}K^\bullet$.
+
+\medskip\noindent
+Choose a distinguished triangle
+$P_n^\bullet \to \tau_{\leq n + 1}K^\bullet \to C^\bullet \to P_n^\bullet[1]$
+in $K(\mathcal{A})$. Applying
+Lemma \ref{lemma-subcategory-left-resolution} we choose a
+map of complexes $Q^\bullet \to C^\bullet$
+which is a quasi-isomorphism such that the terms of $Q^\bullet$
+are in $\mathcal{P}$. By the axioms of triangulated categories
+we may fit the composition $Q^\bullet \to C^\bullet \to P_n^\bullet[1]$ into
+a distinguished triangle
+$P_n^\bullet \to P_{n + 1}^\bullet \to Q^\bullet \to P_n^\bullet[1]$
+in $K(\mathcal{A})$.
+By Lemma \ref{lemma-improve-distinguished-triangle-homotopy}
+we may and do assume
+$0 \to P_n^\bullet \to P_{n + 1}^\bullet \to Q^\bullet \to 0$
+is a termwise split short exact sequence. This implies that
+the terms of $P_{n + 1}^\bullet$ are in $\mathcal{P}$ and that
+$P_n^\bullet \to P_{n + 1}^\bullet$ is a termwise split injection
+whose cokernels are in $\mathcal{P}$.
+By the axioms of triangulated categories we obtain a map
+of distinguished triangles
+$$
+\xymatrix{
+P_n^\bullet \ar[r] \ar[d] &
+P_{n + 1}^\bullet \ar[r] \ar[d] &
+Q^\bullet \ar[r] \ar[d] &
+P_n^\bullet[1] \ar[d] \\
+P_n^\bullet \ar[r] &
+\tau_{\leq n + 1}K^\bullet \ar[r] &
+C^\bullet \ar[r] &
+P_n^\bullet[1]
+}
+$$
+in the triangulated category $K(\mathcal{A})$. Choose an actual morphism of
+complexes $f : P_{n + 1}^\bullet \to \tau_{\leq n + 1}K^\bullet$.
+The left square of the diagram above commutes up to homotopy, but as
+$P_n^\bullet \to P_{n + 1}^\bullet$ is a termwise split injection
+we can lift the homotopy and modify our choice of $f$ to make it commute.
+Finally, $f$ is a quasi-isomorphism, because both $P_n^\bullet \to P_n^\bullet$
+and $Q^\bullet \to C^\bullet$ are.
+
+\medskip\noindent
+At this point we have all the properties we want, except we don't know
+that the map $f : P_{n + 1}^\bullet \to \tau_{\leq n + 1}K^\bullet$
+is termwise surjective. Since we have the commutative diagram
+$$
+\xymatrix{
+P_n^\bullet \ar[d] \ar[r] & P_{n + 1}^\bullet \ar[d] \\
+\tau_{\leq n}K^\bullet \ar[r] & \tau_{\leq n + 1}K^\bullet
+}
+$$
+of complexes, by induction hypothesis we see that $f$ is surjective
+on terms in all degrees except possibly $n$ and $n + 1$. Choose
+an object $P \in \mathcal{P}$ and a surjection $q : P \to K^n$.
+Consider the map
+$$
+g :
+P^\bullet = (\ldots \to 0 \to P \xrightarrow{1} P \to 0 \to \ldots)
+\longrightarrow
+\tau_{\leq n + 1}K^\bullet
+$$
+with first copy of $P$ in degree $n$ and maps given by
+$q$ in degree $n$ and $d_K \circ q$ in degree $n + 1$.
+This is a surjection in degree $n$ and the cokernel in
+degree $n + 1$ is $H^{n + 1}(\tau_{\leq n + 1}K^\bullet)$;
+to see this recall that $\tau_{\leq n + 1}K^\bullet$ has
+$\Ker(d_K^{n + 1})$ in degree $n + 1$.
+However, since $f$ is a quasi-isomorphism we know that
+$H^{n + 1}(f)$ is surjective. Hence after replacing
+$f : P_{n + 1}^\bullet \to \tau_{\leq n + 1}K^\bullet$
+by
+$f \oplus g : P_{n + 1}^\bullet \oplus P^\bullet \to \tau_{\leq n + 1}K^\bullet$
+we win.
+\end{proof}
+
+\noindent
+In some cases we can use the lemma above to show that a left derived
+functor is everywhere defined.
+
+\begin{proposition}
+\label{proposition-left-derived-exists}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a right exact functor
+of abelian categories. Let $\mathcal{P} \subset \Ob(\mathcal{A})$ be a
+subset. Assume
+\begin{enumerate}
+\item $\mathcal{P}$ contains $0$, is closed under (finite) direct sums,
+and every object of $\mathcal{A}$ is a quotient of an
+element of $\mathcal{P}$,
+\item for any bounded above acyclic complex $P^\bullet$ of
+$\mathcal{A}$ with $P^n \in \mathcal{P}$ for all $n$ the
+complex $F(P^\bullet)$ is exact,
+\item $\mathcal{A}$ and $\mathcal{B}$ have colimits
+of systems over $\mathbf{N}$,
+\item colimits over $\mathbf{N}$ are exact in both
+$\mathcal{A}$ and $\mathcal{B}$, and
+\item $F$ commutes with colimits over $\mathbf{N}$.
+\end{enumerate}
+Then $LF$ is defined on all of $D(\mathcal{A})$.
+\end{proposition}
+
+\begin{proof}
+By (1) and Lemma \ref{lemma-subcategory-left-resolution} for any bounded
+above complex $K^\bullet$ there exists a quasi-isomorphism
+$P^\bullet \to K^\bullet$ with $P^\bullet$ bounded above and
+$P^n \in \mathcal{P}$ for all $n$. Suppose that
+$s : P^\bullet \to (P')^\bullet$ is a quasi-isomorphism of bounded
+above complexes consisting of objects of $\mathcal{P}$. Then
+$F(P^\bullet) \to F((P')^\bullet)$ is a quasi-isomorphism because
+$F(C(s)^\bullet)$ is acyclic by assumption (2). This already shows that
+$LF$ is defined on $D^{-}(\mathcal{A})$ and that a bounded above
+complex consisting of objects of $\mathcal{P}$ computes $LF$, see
+Lemma \ref{lemma-find-existence-computes}.
+
+\medskip\noindent
+Next, let $K^\bullet$ be an arbitrary complex of $\mathcal{A}$.
+Choose a diagram
+$$
+\xymatrix{
+P_1^\bullet \ar[d] \ar[r] & P_2^\bullet \ar[d] \ar[r] & \ldots \\
+\tau_{\leq 1}K^\bullet \ar[r] & \tau_{\leq 2}K^\bullet \ar[r] & \ldots
+}
+$$
+as in Lemma \ref{lemma-special-direct-system}. Note that
+the map $\colim P_n^\bullet \to K^\bullet$ is a quasi-isomorphism
+because colimits over $\mathbf{N}$ in $\mathcal{A}$ are exact
+and $H^i(P_n^\bullet) = H^i(K^\bullet)$ for $n > i$. We claim that
+$$
+F(\colim P_n^\bullet) = \colim F(P_n^\bullet)
+$$
+(termwise colimits) is $LF(K^\bullet)$, i.e., that $\colim P_n^\bullet$
+computes $LF$. To see this, by Lemma \ref{lemma-find-existence-computes},
+it suffices to prove the following claim. Suppose that
+$$
+\colim Q_n^\bullet = Q^\bullet
+\xrightarrow{\ \alpha\ }
+P^\bullet = \colim P_n^\bullet
+$$
+is a quasi-isomorphism of complexes, such that each
+$P_n^\bullet$, $Q_n^\bullet$ is a bounded above complex whose terms are
+in $\mathcal{P}$ and the maps $P_n^\bullet \to \tau_{\leq n}P^\bullet$ and
+$Q_n^\bullet \to \tau_{\leq n}Q^\bullet$ are quasi-isomorphisms.
+Claim: $F(\alpha)$ is a quasi-isomorphism.
+
+\medskip\noindent
+The problem is that we do not assume that $\alpha$ is given as a colimit
+of maps between the complexes $P_n^\bullet$ and $Q_n^\bullet$. However,
+for each $n$ we know that the solid arrows in the diagram
+$$
+\xymatrix{
+& R^\bullet \ar@{..>}[d] \\
+P_n^\bullet \ar[d] &
+L^\bullet \ar@{..>}[l] \ar@{..>}[r] &
+Q_n^\bullet \ar[d] \\
+\tau_{\leq n}P^\bullet \ar[rr]^{\tau_{\leq n}\alpha} & &
+\tau_{\leq n}Q^\bullet
+}
+$$
+are quasi-isomorphisms. Because quasi-isomorphisms form a multiplicative
+system in $K(\mathcal{A})$ (see Lemma \ref{lemma-acyclic})
+we can find a quasi-isomorphism
+$L^\bullet \to P_n^\bullet$ and map of complexes $L^\bullet \to Q_n^\bullet$
+such that the diagram above commutes up to homotopy. Then
+$\tau_{\leq n}L^\bullet \to L^\bullet$ is a quasi-isomorphism.
+Hence (by the first part of the proof) we can find a bounded above
+complex $R^\bullet$ whose terms are in $\mathcal{P}$ and a quasi-isomorphism
+$R^\bullet \to L^\bullet$ (as indicated in the diagram). Using the result
+of the first paragraph of the proof we see that
+$F(R^\bullet) \to F(P_n^\bullet)$ and $F(R^\bullet) \to F(Q_n^\bullet)$
+are quasi-isomorphisms. Thus we obtain isomorphisms
+$H^i(F(P_n^\bullet)) \to H^i(F(Q_n^\bullet))$ fitting into the commutative
+diagram
+$$
+\xymatrix{
+H^i(F(P_n^\bullet)) \ar[r] \ar[d] &
+H^i(F(Q_n^\bullet)) \ar[d] \\
+H^i(F(P^\bullet)) \ar[r] &
+H^i(F(Q^\bullet))
+}
+$$
+The exact same argument shows that these maps are also compatible
+as $n$ varies. Since by (4) and (5) we have
+$$
+H^i(F(P^\bullet)) =
+H^i(F(\colim P_n^\bullet)) =
+H^i(\colim F(P_n^\bullet)) = \colim H^i(F(P_n^\bullet))
+$$
+and similarly for $Q^\bullet$ we conclude that the induced map
+$H^i(F(P^\bullet)) \to H^i(F(Q^\bullet))$ is an isomorphism for all $i$,
+and the claim follows.
+\end{proof}
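+
+\noindent
+A typical application of the proposition: let $\mathcal{A}$ be the
+category of modules over a ring $R$, let $\mathcal{P}$ be the collection
+of free $R$-modules, and let $F = N \otimes_R -$ for some $R$-module
+$N$. Conditions (1) -- (5) are satisfied (for (2) note that a bounded
+above acyclic complex of free modules is homotopy equivalent to zero),
+and we conclude that the derived tensor product
+$N \otimes_R^{\mathbf{L}} -$ is defined on all of $D(\text{Mod}_R)$.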
+
+\begin{lemma}
+\label{lemma-special-inverse-system}
+Let $\mathcal{A}$ be an abelian category. Let
+$\mathcal{I} \subset \Ob(\mathcal{A})$ be a subset.
+Assume $\mathcal{I}$ contains $0$, is closed under (finite) products,
+and every object of $\mathcal{A}$ is a subobject of an
+element of $\mathcal{I}$. Let $K^\bullet$ be a complex.
+There exists a commutative diagram
+$$
+\xymatrix{
+\ldots \ar[r] &
+\tau_{\geq -2}K^\bullet \ar[r] \ar[d] &
+\tau_{\geq -1}K^\bullet \ar[d] \\
+\ldots \ar[r] & I_2^\bullet \ar[r] & I_1^\bullet
+}
+$$
+in the category of complexes such that
+\begin{enumerate}
+\item the vertical arrows are quasi-isomorphisms and termwise injective,
+\item $I_n^\bullet$ is a bounded below complex with terms in $\mathcal{I}$,
+\item the arrows $I_{n + 1}^\bullet \to I_n^\bullet$ are termwise split
+surjections and $\Ker(I^i_{n + 1} \to I^i_n)$ is an element of $\mathcal{I}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma is dual to
+Lemma \ref{lemma-special-direct-system}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Deriving adjoints}
+\label{section-deriving-adjoints}
+
+\noindent
+Let $F : \mathcal{D} \to \mathcal{D}'$ and $G : \mathcal{D}' \to \mathcal{D}$
+be exact functors of triangulated categories.
+Let $S$, resp.\ $S'$ be a multiplicative system for
+$\mathcal{D}$, resp.\ $\mathcal{D}'$ compatible with the
+triangulated structure.
+Denote $Q : \mathcal{D} \to S^{-1}\mathcal{D}$
+and $Q' : \mathcal{D}' \to (S')^{-1}\mathcal{D}'$
+the localization functors. In this situation, by abuse of notation,
+one often denotes $RF$ the partially defined right derived functor
+corresponding to $Q' \circ F : \mathcal{D} \to (S')^{-1}\mathcal{D}'$
+and the multiplicative system $S$. Similarly one denotes
+$LG$ the partially defined left derived functor
+corresponding to $Q \circ G : \mathcal{D}' \to S^{-1}\mathcal{D}$
+and the multiplicative system $S'$. Picture
+$$
+\vcenter{
+\xymatrix{
+\mathcal{D} \ar[r]_F \ar[d]_Q & \mathcal{D}' \ar[d]^{Q'} \\
+S^{-1}\mathcal{D} \ar@{..>}[r]^{RF} &
+(S')^{-1}\mathcal{D}'
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+\mathcal{D}' \ar[r]_G \ar[d]_{Q'} & \mathcal{D} \ar[d]^Q \\
+(S')^{-1}\mathcal{D}' \ar@{..>}[r]^{LG} &
+S^{-1}\mathcal{D}
+}
+}
+$$
+
+\begin{lemma}
+\label{lemma-pre-derived-adjoint-functors-general}
+In the situation above assume $F$ is right adjoint
+to $G$. Let $K \in \Ob(\mathcal{D})$ and
+$M \in \Ob(\mathcal{D}')$. If $RF$ is defined at $K$
+and $LG$ is defined at $M$, then there is a canonical isomorphism
+$$
+\Hom_{(S')^{-1}\mathcal{D}'}(M, RF(K)) =
+\Hom_{S^{-1}\mathcal{D}}(LG(M), K)
+$$
+This isomorphism is functorial in both variables on the triangulated
+subcategories of $S^{-1}\mathcal{D}$ and $(S')^{-1}\mathcal{D}'$
+where $RF$ and $LG$ are defined.
+\end{lemma}
+
+\begin{proof}
Since $RF$ is defined at $K$, we see that the rule which assigns to each
+$s : K \to I$ in $S$ the object $F(I)$ is essentially
+constant as an ind-object of $(S')^{-1}\mathcal{D}'$ with value $RF(K)$.
Similarly, the rule which assigns to each $t : P \to M$ in $S'$
+the object $G(P)$ is essentially constant as a pro-object of
+$S^{-1}\mathcal{D}$ with value $LG(M)$. Thus we have
+\begin{align*}
+\Hom_{(S')^{-1}\mathcal{D}'}(M, RF(K))
+& =
+\colim_{s : K \to I} \Hom_{(S')^{-1}\mathcal{D}'}(M, F(I)) \\
+& =
+\colim_{s : K \to I} \colim_{t : P \to M} \Hom_{\mathcal{D}'}(P, F(I)) \\
+& =
+\colim_{t : P \to M} \colim_{s : K \to I} \Hom_{\mathcal{D}'}(P, F(I)) \\
+& =
+\colim_{t : P \to M} \colim_{s : K \to I} \Hom_{\mathcal{D}}(G(P), I) \\
+& =
+\colim_{t : P \to M} \Hom_{S^{-1}\mathcal{D}}(G(P), K) \\
+& =
+\Hom_{S^{-1}\mathcal{D}}(LG(M), K)
+\end{align*}
+The first equality holds by
+Categories, Lemma \ref{categories-lemma-characterize-essentially-constant-ind}.
The second equality holds by the definition of morphisms in
$(S')^{-1}\mathcal{D}'$, see Categories, Remark
+\ref{categories-remark-right-localization-morphisms-colimit}.
+The third equality holds by
+Categories, Lemma \ref{categories-lemma-colimits-commute}.
+The fourth equality holds because $F$ and $G$ are adjoint.
The fifth equality holds by the definition of morphisms
in $S^{-1}\mathcal{D}$, see Categories, Remark
+\ref{categories-remark-left-localization-morphisms-colimit}.
+The sixth equality holds by
+Categories, Lemma \ref{categories-lemma-characterize-essentially-constant-pro}.
+We omit the proof of functoriality.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pre-derived-adjoint-functors}
+Let $F : \mathcal{A} \to \mathcal{B}$ and $G : \mathcal{B} \to \mathcal{A}$
+be functors of abelian categories such that $F$ is a right adjoint to $G$.
+Let $K^\bullet$ be a complex of $\mathcal{A}$ and let $M^\bullet$ be
+a complex of $\mathcal{B}$. If $RF$ is defined at $K^\bullet$
+and $LG$ is defined at $M^\bullet$, then there is a canonical isomorphism
+$$
+\Hom_{D(\mathcal{B})}(M^\bullet, RF(K^\bullet)) =
+\Hom_{D(\mathcal{A})}(LG(M^\bullet), K^\bullet)
+$$
+This isomorphism is functorial in both variables on the triangulated
+subcategories of $D(\mathcal{A})$ and $D(\mathcal{B})$
+where $RF$ and $LG$ are defined.
+\end{lemma}
+
+\begin{proof}
+This is a special case of the very general
+Lemma \ref{lemma-pre-derived-adjoint-functors-general}.
+\end{proof}
+
+\noindent
+The following lemma is an example of why it is easier to work
+with unbounded derived categories. Namely, without having the
+unbounded derived functors, the lemma could not even be stated.
+
+\begin{lemma}
+\label{lemma-derived-adjoint-functors}
+Let $F : \mathcal{A} \to \mathcal{B}$ and $G : \mathcal{B} \to \mathcal{A}$
+be functors of abelian categories such that $F$ is a right adjoint to $G$.
+If the derived functors $RF : D(\mathcal{A}) \to D(\mathcal{B})$ and
+$LG : D(\mathcal{B}) \to D(\mathcal{A})$ exist, then
+$RF$ is a right adjoint to $LG$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-pre-derived-adjoint-functors}.
+\end{proof}
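
\noindent
To illustrate how Lemma \ref{lemma-derived-adjoint-functors} is used in
practice, here is a sketch in the setting of modules over a ring; the
existence of the unbounded derived functors in this setting is granted as
an assumption and not proven here.

\begin{example}
\label{example-derived-tensor-hom-adjoint}
Let $A$ be a ring and let $M$ be an $A$-module. The functor
$F = \Hom_A(M, -)$ on $\text{Mod}_A$ is right adjoint to
$G = M \otimes_A -$. Granting that the derived functors
$RF = R\Hom_A(M, -)$ and $LG = M \otimes_A^\mathbf{L} -$ exist on
$D(\text{Mod}_A)$, Lemma \ref{lemma-derived-adjoint-functors} produces a
canonical isomorphism
$$
\Hom_{D(\text{Mod}_A)}(N^\bullet, R\Hom_A(M, K^\bullet)) =
\Hom_{D(\text{Mod}_A)}(M \otimes_A^\mathbf{L} N^\bullet, K^\bullet)
$$
for all complexes $K^\bullet$ and $N^\bullet$ of $A$-modules.
\end{example}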
+
+
+
+
+
+\section{K-injective complexes}
+\label{section-K-injective}
+
+\noindent
+The following types of complexes can be used to compute right derived
+functors on the unbounded derived category.
+
+\begin{definition}
+\label{definition-K-injective}
+Let $\mathcal{A}$ be an abelian category. A complex $I^\bullet$
+is {\it K-injective} if for every acyclic complex $M^\bullet$ we
+have $\Hom_{K(\mathcal{A})}(M^\bullet, I^\bullet) = 0$.
+\end{definition}
+
+\noindent
+In the situation of the definition we have in fact
+$\Hom_{K(\mathcal{A})}(M^\bullet[i], I^\bullet) = 0$ for all $i$
+as the translate of an acyclic complex is acyclic.
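
\noindent
The simplest case of Definition \ref{definition-K-injective} is the
following.

\begin{example}
\label{example-K-injective-split}
If every short exact sequence of $\mathcal{A}$ splits, for example if
$\mathcal{A}$ is the category of vector spaces over a field, then every
complex is K-injective. Namely, every acyclic complex $M^\bullet$ is then
homotopy equivalent to the zero complex: choosing splittings
$M^n \cong B^n \oplus B^{n + 1}$ with
$B^n = \Im(d^{n - 1}) = \Ker(d^n)$, the differential becomes
$(x, y) \mapsto (y, 0)$ and the maps $h^n : M^n \to M^{n - 1}$,
$(x, y) \mapsto (0, x)$ satisfy
$d \circ h + h \circ d = \text{id}$. Hence
$\Hom_{K(\mathcal{A})}(M^\bullet, I^\bullet) = 0$ for every complex
$I^\bullet$.
\end{example}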
+
+\begin{lemma}
+\label{lemma-K-injective}
+Let $\mathcal{A}$ be an abelian category.
+Let $I^\bullet$ be a complex. The following are equivalent
+\begin{enumerate}
+\item $I^\bullet$ is K-injective,
+\item for every quasi-isomorphism $M^\bullet \to N^\bullet$ the map
+$$
+\Hom_{K(\mathcal{A})}(N^\bullet, I^\bullet)
+\to \Hom_{K(\mathcal{A})}(M^\bullet, I^\bullet)
+$$
+is bijective, and
+\item for every complex $N^\bullet$ the map
+$$
+\Hom_{K(\mathcal{A})}(N^\bullet, I^\bullet)
+\to \Hom_{D(\mathcal{A})}(N^\bullet, I^\bullet)
+$$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Then (2) holds because the functor
+$\Hom_{K(\mathcal{A})}( - , I^\bullet)$ is cohomological
+and the cone on a quasi-isomorphism is acyclic.
+
+\medskip\noindent
+Assume (2). A morphism $N^\bullet \to I^\bullet$ in $D(\mathcal{A})$
+is of the form $fs^{-1} : N^\bullet \to I^\bullet$ where
+$s : M^\bullet \to N^\bullet$ is a quasi-isomorphism and
+$f : M^\bullet \to I^\bullet$ is a map. By (2) this corresponds to
+a unique morphism $N^\bullet \to I^\bullet$ in $K(\mathcal{A})$, i.e.,
+(3) holds.
+
+\medskip\noindent
+Assume (3). If $M^\bullet$ is acyclic then $M^\bullet$ is isomorphic
+to the zero complex in $D(\mathcal{A})$ hence
+$\Hom_{D(\mathcal{A})}(M^\bullet, I^\bullet) = 0$, whence
+$\Hom_{K(\mathcal{A})}(M^\bullet, I^\bullet) = 0$ by (3),
+i.e., (1) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-triangle-K-injective}
+Let $\mathcal{A}$ be an abelian category. Let $(K, L, M, f, g, h)$
+be a distinguished triangle of $K(\mathcal{A})$. If two out of
+$K$, $L$, $M$ are K-injective complexes, then the third is too.
+\end{lemma}
+
+\begin{proof}
+Follows from the definition,
+Lemma \ref{lemma-representable-homological}, and
+the fact that $K(\mathcal{A})$ is a triangulated category
+(Proposition \ref{proposition-homotopy-category-triangulated}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bounded-below-injectives-K-injective}
+Let $\mathcal{A}$ be an abelian category. A bounded below complex of
+injectives is K-injective.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemmas \ref{lemma-K-injective} and
+\ref{lemma-morphisms-into-injective-complex}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-product-K-injective}
+Let $\mathcal{A}$ be an abelian category. Let $T$ be a set and for
+each $t \in T$ let $I_t^\bullet$ be a K-injective complex. If
+$I^n = \prod_t I_t^n$ exists for all $n$, then $I^\bullet$ is
+a K-injective complex. Moreover, $I^\bullet$ represents the
+product of the objects $I_t^\bullet$ in $D(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
Let $K^\bullet$ be a complex. Observe that the complex
+$$
+C :
+\prod\nolimits_b \Hom(K^{-b}, I^{b - 1}) \to
+\prod\nolimits_b \Hom(K^{-b}, I^b) \to
+\prod\nolimits_b \Hom(K^{-b}, I^{b + 1})
+$$
+has cohomology $\Hom_{K(\mathcal{A})}(K^\bullet, I^\bullet)$
+in the middle. Similarly, the complex
+$$
+C_t :
+\prod\nolimits_b \Hom(K^{-b}, I_t^{b - 1}) \to
+\prod\nolimits_b \Hom(K^{-b}, I_t^b) \to
+\prod\nolimits_b \Hom(K^{-b}, I_t^{b + 1})
+$$
+computes $\Hom_{K(\mathcal{A})}(K^\bullet, I_t^\bullet)$.
+Next, observe that we have
+$$
+C = \prod\nolimits_{t \in T} C_t
+$$
+as complexes of abelian groups by our choice of $I$.
+Taking products is an exact functor on the
category of abelian groups. Hence if $K^\bullet$ is acyclic, then
$\Hom_{K(\mathcal{A})}(K^\bullet, I_t^\bullet) = 0$, hence the cohomology
of $C_t$ in the middle spot vanishes, hence the cohomology of $C$
in the middle spot vanishes, and we get
$\Hom_{K(\mathcal{A})}(K^\bullet, I^\bullet) = 0$.
+Thus we find that $I^\bullet$ is K-injective.
+Having said this, we can use Lemma \ref{lemma-K-injective}
+to conclude that
+$$
+\Hom_{D(\mathcal{A})}(K^\bullet, I^\bullet)
+=
+\prod\nolimits_{t \in T} \Hom_{D(\mathcal{A})}(K^\bullet, I_t^\bullet)
+$$
+and indeed $I^\bullet$ represents the product in the derived category.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-K-injective-defined}
+Let $\mathcal{A}$ be an abelian category.
+Let $F : K(\mathcal{A}) \to \mathcal{D}'$ be an exact functor
+of triangulated categories. Then $RF$ is defined at every complex
+in $K(\mathcal{A})$ which is quasi-isomorphic to a
+K-injective complex. In fact, every K-injective complex computes $RF$.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-derived-inverts}
+it suffices to show that $RF$ is defined at a K-injective complex,
+i.e., it suffices to show a K-injective complex $I^\bullet$ computes $RF$.
+Any quasi-isomorphism $I^\bullet \to N^\bullet$ is a homotopy equivalence
+as it has an inverse by
+Lemma \ref{lemma-K-injective}.
+Thus $I^\bullet \to I^\bullet$ is a final object of
+$I^\bullet/\text{Qis}(\mathcal{A})$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-enough-K-injectives-implies}
+Let $\mathcal{A}$ be an abelian category.
+Assume every complex has a quasi-isomorphism towards a K-injective complex.
+Then any exact functor $F : K(\mathcal{A}) \to \mathcal{D}'$ of triangulated
+categories has a right derived functor
+$$
+RF : D(\mathcal{A}) \longrightarrow \mathcal{D}'
+$$
+and $RF(I^\bullet) = F(I^\bullet)$ for K-injective complexes $I^\bullet$.
+\end{lemma}
+
+\begin{proof}
+To see this we apply
+Lemma \ref{lemma-find-existence-computes}
+with $\mathcal{I}$ the collection of K-injective complexes. Since (1)
+holds by assumption, it suffices to prove that if $I^\bullet \to J^\bullet$
+is a quasi-isomorphism of K-injective complexes, then
+$F(I^\bullet) \to F(J^\bullet)$ is an isomorphism. This is clear because
+$I^\bullet \to J^\bullet$ is a homotopy equivalence, i.e., an
+isomorphism in $K(\mathcal{A})$, by
+Lemma \ref{lemma-K-injective}.
+\end{proof}
+
+\noindent
+The following lemma can be generalized to limits over bigger ordinals.
+
+\begin{lemma}
+\label{lemma-limit-K-injectives}
+\begin{slogan}
+The limit of a ``split'' tower of K-injective complexes is K-injective.
+\end{slogan}
+Let $\mathcal{A}$ be an abelian category. Let
+$$
+\ldots \to I_3^\bullet \to I_2^\bullet \to I_1^\bullet
+$$
+be an inverse system of complexes. Assume
+\begin{enumerate}
+\item each $I_n^\bullet$ is $K$-injective,
+\item each map $I_{n + 1}^m \to I_n^m$ is a split surjection,
+\item the limits $I^m = \lim I_n^m$ exist.
+\end{enumerate}
+Then the complex $I^\bullet$ is K-injective.
+\end{lemma}
+
+\begin{proof}
+We urge the reader to skip the proof of this lemma.
+Let $M^\bullet$ be an acyclic complex. Let us abbreviate
+$H_n(a, b) = \Hom_\mathcal{A}(M^a, I_n^b)$. With this notation
+$\Hom_{K(\mathcal{A})}(M^\bullet, I^\bullet)$ is the cohomology
+of the complex
+$$
+\prod_m \lim\limits_n H_n(m, m - 2)
+\to
+\prod_m \lim\limits_n H_n(m, m - 1)
+\to
+\prod_m \lim\limits_n H_n(m, m)
+\to
+\prod_m \lim\limits_n H_n(m, m + 1)
+$$
+in the third spot from the left.
+We may exchange the order of $\prod$ and $\lim$ and each of the complexes
+$$
+\prod_m H_n(m, m - 2)
+\to
+\prod_m H_n(m, m - 1)
+\to
+\prod_m H_n(m, m)
+\to
+\prod_m H_n(m, m + 1)
+$$
+is exact by assumption (1). By assumption (2) the maps in the systems
+$$
+\ldots \to
+\prod_m H_3(m, m - 2) \to
+\prod_m H_2(m, m - 2) \to
+\prod_m H_1(m, m - 2)
+$$
+are surjective. Thus the lemma follows from
+Homology, Lemma \ref{homology-lemma-apply-Mittag-Leffler}.
+\end{proof}
+
+\noindent
+It appears that a combination of Lemmas \ref{lemma-special-inverse-system},
+\ref{lemma-bounded-below-injectives-K-injective}, and
+\ref{lemma-limit-K-injectives} produces ``enough K-injectives'' for any
+abelian category with enough injectives and countable products.
+Actually, this may not work! See Lemma \ref{lemma-difficulty-K-injectives}
+for an explanation.
+
+\begin{lemma}
+\label{lemma-adjoint-preserve-K-injectives}
+Let $\mathcal{A}$ and $\mathcal{B}$ be abelian categories.
+Let $u : \mathcal{A} \to \mathcal{B}$ and
+$v : \mathcal{B} \to \mathcal{A}$ be additive functors. Assume
+\begin{enumerate}
+\item $u$ is right adjoint to $v$, and
+\item $v$ is exact.
+\end{enumerate}
+Then $u$ transforms K-injective complexes into K-injective complexes.
+\end{lemma}
+
+\begin{proof}
+Let $I^\bullet$ be a K-injective complex of $\mathcal{A}$.
Let $M^\bullet$ be an acyclic complex of $\mathcal{B}$.
+As $v$ is exact we see that $v(M^\bullet)$ is an acyclic complex.
+By adjointness we get
+$$
+0 = \Hom_{K(\mathcal{A})}(v(M^\bullet), I^\bullet) =
+\Hom_{K(\mathcal{B})}(M^\bullet, u(I^\bullet))
+$$
+hence the lemma follows.
+\end{proof}
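
\noindent
Here is a concrete instance of
Lemma \ref{lemma-adjoint-preserve-K-injectives} for modules over a pair
of rings.

\begin{example}
\label{example-coinduction-K-injective}
Let $R \to S$ be a ring map. The restriction functor
$v : \text{Mod}_S \to \text{Mod}_R$ is exact, and the coinduction functor
$u = \Hom_R(S, -) : \text{Mod}_R \to \text{Mod}_S$ is right adjoint
to it: for an $S$-module $N$ and an $R$-module $M$ we have
$\Hom_S(N, \Hom_R(S, M)) = \Hom_R(N, M)$. Hence
Lemma \ref{lemma-adjoint-preserve-K-injectives} shows that
$\Hom_R(S, -)$ transforms K-injective complexes of $R$-modules into
K-injective complexes of $S$-modules.
\end{example}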
+
+
+
+
+
+\section{Bounded cohomological dimension}
+\label{section-bounded}
+
+\noindent
+There is another case where the unbounded derived functor exists.
+Namely, when the functor has bounded cohomological dimension.
+
+\begin{lemma}
+\label{lemma-replace-resolution}
+Let $\mathcal{A}$ be an abelian category. Let
+$d : \Ob(\mathcal{A}) \to \{0, 1, 2, \ldots, \infty\}$ be a function.
+Assume that
+\begin{enumerate}
+\item every object of $\mathcal{A}$ is a subobject of an
+object $A$ with $d(A) = 0$,
+\item $d(A \oplus B) \leq \max \{d(A), d(B)\}$ for $A, B \in \mathcal{A}$, and
+\item if $0 \to A \to B \to C \to 0$ is short exact, then
+$d(C) \leq \max\{d(A) - 1, d(B)\}$.
+\end{enumerate}
+Let $K^\bullet$ be a complex such that $n + d(K^n)$ tends to $-\infty$
+as $n \to -\infty$. Then there exists a quasi-isomorphism
+$K^\bullet \to L^\bullet$ with $d(L^n) = 0$ for all $n \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-subcategory-right-resolution} we can find a
+quasi-isomorphism $\sigma_{\geq 0}K^\bullet \to M^\bullet$ with
+$M^n = 0$ for $n < 0$ and $d(M^n) = 0$ for $n \geq 0$. Then $K^\bullet$
+is quasi-isomorphic to the complex
+$$
+\ldots \to K^{-2} \to K^{-1} \to M^0 \to M^1 \to \ldots
+$$
+Hence we may assume that $d(K^n) = 0$ for $n \gg 0$. Note that
+the condition $n + d(K^n) \to -\infty$ as $n \to -\infty$ is not
+violated by this replacement.
+
+\medskip\noindent
+We are going to improve $K^\bullet$ by an (infinite) sequence of
+elementary replacements. An {\it elementary replacement} is the following.
+Choose an index $n$ such that $d(K^n) > 0$. Choose an injection
+$K^n \to M$ where $d(M) = 0$. Set
+$M' = \Coker(K^n \to M \oplus K^{n + 1})$. Consider the map of complexes
+$$
+\xymatrix{
+K^\bullet : \ar[d] &
+K^{n - 1} \ar[d] \ar[r] &
+K^n \ar[d] \ar[r] &
+K^{n + 1} \ar[d] \ar[r] &
+K^{n + 2} \ar[d] \\
+(K')^\bullet : &
+K^{n - 1} \ar[r] &
+M \ar[r] &
+M' \ar[r] &
+K^{n + 2}
+}
+$$
+It is clear that $K^\bullet \to (K')^\bullet$ is a quasi-isomorphism.
+Moreover, it is clear that $d((K')^n) = 0$ and
+$$
+d((K')^{n + 1}) \leq \max\{d(K^n) - 1, d(M \oplus K^{n + 1})\} \leq
+\max\{d(K^n) - 1, d(K^{n + 1})\}
+$$
+and the other values are unchanged.
+
+\medskip\noindent
To finish the proof we carefully choose the order in which to do
+the elementary replacements so that for every integer $m$ the complex
+$\sigma_{\geq m}K^\bullet$ is changed only a finite number of times.
+To do this set
+$$
+\xi(K^\bullet) = \max \{n + d(K^n) \mid d(K^n) > 0\}
+$$
+and
+$$
+I = \{n \in \mathbf{Z} \mid \xi(K^\bullet) = n + d(K^n)
+\text{ and }
+ d(K^n) > 0\}
+$$
+Our assumption that $n + d(K^n)$ tends to $-\infty$ as $n \to -\infty$
and the fact that $d(K^n) = 0$ for $n \gg 0$
+implies $\xi(K^\bullet) < +\infty$ and that $I$ is a finite set.
+It is clear that $\xi((K')^\bullet) \leq \xi(K^\bullet)$ for an
+elementary transformation as above. An elementary transformation
+changes the complex in degrees $\leq \xi(K^\bullet) + 1$. Hence if we can
find a finite sequence of elementary transformations which
decreases $\xi(K^\bullet)$, then we win. However, note that if we
+do an elementary transformation starting with the smallest element
+$n \in I$, then we either decrease the size of $I$, or we increase
+$\min I$. Since every element of $I$ is $\leq \xi(K^\bullet)$ we see
+that we win after a finite number of steps.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unbounded-right-derived}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a left exact functor of
+abelian categories. Assume
+\begin{enumerate}
+\item every object of $\mathcal{A}$ is a subobject of an object
+which is right acyclic for $F$,
\item there exists an integer $n \geq 0$ such that $R^nF = 0$.
+\end{enumerate}
+Then
+\begin{enumerate}
+\item $RF : D(\mathcal{A}) \to D(\mathcal{B})$ exists,
+\item any complex consisting of right acyclic objects for $F$ computes $RF$,
+\item any complex is the source of a quasi-isomorphism into a complex
+consisting of right acyclic objects for $F$,
+\item for $E \in D(\mathcal{A})$
+\begin{enumerate}
\item $H^i(RF(\tau_{\leq a}E)) \to H^i(RF(E))$ is an isomorphism
+for $i \leq a$,
+\item $H^i(RF(E)) \to H^i(RF(\tau_{\geq b - n + 1}E))$ is an isomorphism
+for $i \geq b$,
+\item if $H^i(E) = 0$ for $i \not \in [a, b]$ for some
+$-\infty \leq a \leq b \leq \infty$, then $H^i(RF(E)) = 0$
+for $i \not \in [a, b + n - 1]$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that the first assumption implies that
+$RF : D^+(\mathcal{A}) \to D^+(\mathcal{B})$ exists, see
+Proposition \ref{proposition-enough-acyclics}.
+Let $A$ be an object of $\mathcal{A}$. Choose an injection $A \to A'$
with $A'$ right acyclic for $F$. Then we see that
$R^{n + 1}F(A) = R^nF(A'/A) = 0$ by
+the long exact cohomology sequence. Hence we conclude that $R^{n + 1}F = 0$.
+Continuing like this using induction we find that $R^mF = 0$ for all
+$m \geq n$.
+
+\medskip\noindent
+We are going to use Lemma \ref{lemma-replace-resolution} with the function
+$d : \Ob(\mathcal{A}) \to \{0, 1, 2, \ldots \}$ given by
$d(A) = \max\left(\{0\} \cup \{i \mid R^iF(A) \not = 0\}\right)$.
+The first assumption of Lemma \ref{lemma-replace-resolution}
+is our assumption (1). The second assumption of
+Lemma \ref{lemma-replace-resolution} follows from the fact
+that $RF(A \oplus B) = RF(A) \oplus RF(B)$. The third assumption of
+Lemma \ref{lemma-replace-resolution} follows from the long exact
+cohomology sequence. Hence for every complex $K^\bullet$ there exists a
+quasi-isomorphism $K^\bullet \to L^\bullet$ into a complex of
+objects right acyclic for $F$. This proves statement (3).
+
+\medskip\noindent
+We claim that if $L^\bullet \to M^\bullet$ is a quasi-isomorphism of
+complexes of right acyclic objects for $F$, then
+$F(L^\bullet) \to F(M^\bullet)$
+is a quasi-isomorphism. If we prove this claim then we get statements
+(1) and (2) of the lemma by
+Lemma \ref{lemma-find-existence-computes}.
+To prove the claim pick an integer $i \in \mathbf{Z}$.
+Consider the distinguished triangle
+$$
+\sigma_{\geq i - n - 1}L^\bullet \to
+\sigma_{\geq i - n - 1}M^\bullet \to Q^\bullet,
+$$
+i.e., let $Q^\bullet$ be the cone of the first map.
+Note that $Q^\bullet$ is bounded below and that
+$H^j(Q^\bullet)$ is zero except possibly for $j = i - n - 1$
+or $j = i - n - 2$. We may apply $RF$ to $Q^\bullet$.
+Using the second spectral sequence of
+Lemma \ref{lemma-two-ss-complex-functor}
+and the assumed vanishing of cohomology (2) we conclude
+that $H^j(RF(Q^\bullet))$ is zero except possibly for
+$j \in \{i - n - 2, \ldots, i - 1\}$. Hence we see that
+$RF(\sigma_{\geq i - n - 1}L^\bullet) \to RF(\sigma_{\geq i - n - 1}M^\bullet)$
+induces an isomorphism of cohomology objects in degrees $\geq i$.
+By Proposition \ref{proposition-enough-acyclics} we know that
+$RF(\sigma_{\geq i - n - 1}L^\bullet) = \sigma_{\geq i - n - 1}F(L^\bullet)$
+and
+$RF(\sigma_{\geq i - n - 1}M^\bullet) = \sigma_{\geq i - n - 1}F(M^\bullet)$.
+We conclude that $F(L^\bullet) \to F(M^\bullet)$
+is an isomorphism in degree $i$ as desired.
+
+\medskip\noindent
+Part (4)(a) follows from Lemma \ref{lemma-negative-vanishing}.
+
+\medskip\noindent
+For part (4)(b) let $E$ be represented by the complex $L^\bullet$
+of objects right acyclic for $F$. By part (2) $RF(E)$ is represented
+by the complex $F(L^\bullet)$ and $RF(\sigma_{\geq c}L^\bullet)$
+is represented by $\sigma_{\geq c}F(L^\bullet)$. Consider the
+distinguished triangle
+$$
+H^{b - n}(L^\bullet)[n - b] \to
+\tau_{\geq b - n}L^\bullet \to
+\tau_{\geq b - n + 1}L^\bullet
+$$
+of Remark \ref{remark-truncation-distinguished-triangle}.
+The vanishing established above gives that
+$H^i(RF(\tau_{\geq b - n}L^\bullet))$ agrees with
+$H^i(RF(\tau_{\geq b - n + 1}L^\bullet))$ for $i \geq b$.
+Consider the short exact sequence of complexes
+$$
+0 \to
+\Im(L^{b - n - 1} \to L^{b - n})[n - b] \to
+\sigma_{\geq b - n}L^\bullet \to
+\tau_{\geq b - n}L^\bullet \to 0
+$$
+Using the distinguished triangle associated to this
+(see Section \ref{section-canonical-delta-functor})
+and the vanishing as before we conclude that
+$H^i(RF(\tau_{\geq b - n}L^\bullet))$ agrees with
+$H^i(RF(\sigma_{\geq b - n}L^\bullet))$ for $i \geq b$.
+Since the map $RF(\sigma_{\geq b - n}L^\bullet) \to RF(L^\bullet)$
+is represented by $\sigma_{\geq b - n}F(L^\bullet) \to F(L^\bullet)$
+we conclude that this in turn agrees with $H^i(RF(L^\bullet))$
+for $i \geq b$ as desired.
+
+\medskip\noindent
+Proof of (4)(c). Under the assumption on $E$ we have
+$\tau_{\leq a - 1}E = 0$ and we get the vanishing
+of $H^i(RF(E))$ for $i \leq a - 1$ from part (4)(a).
+Similarly, we have $\tau_{\geq b + 1}E = 0$ and hence
+we get the vanishing of $H^i(RF(E))$ for $i \geq b + n$ from
+part (4)(b).
+\end{proof}
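
\noindent
The hypotheses of Lemma \ref{lemma-unbounded-right-derived} are satisfied
in the following concrete situation.

\begin{example}
\label{example-finite-projective-dimension}
Let $A$ be a ring and let $M$ be an $A$-module having a projective
resolution of length $d$. Consider the left exact functor
$F = \Hom_A(M, -) : \text{Mod}_A \to \textit{Ab}$. Then
$R^iF = \Ext^i_A(M, -)$ vanishes for $i > d$, every $A$-module is a
submodule of an injective module, and injective modules are right acyclic
for $F$. Thus Lemma \ref{lemma-unbounded-right-derived} applies with
$n = d + 1$: the functor $RF$ exists on all of $D(\text{Mod}_A)$ and any
complex whose terms $L^m$ satisfy $\Ext^i_A(M, L^m) = 0$ for $i > 0$
computes it.
\end{example}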
+
+\begin{lemma}
+\label{lemma-unbounded-left-derived}
+Let $F : \mathcal{A} \to \mathcal{B}$ be a right exact functor of
abelian categories. Assume
+\begin{enumerate}
+\item every object of $\mathcal{A}$ is a quotient of an object
+which is left acyclic for $F$,
\item there exists an integer $n \geq 0$ such that $L^nF = 0$.
+\end{enumerate}
+Then
+\begin{enumerate}
+\item $LF : D(\mathcal{A}) \to D(\mathcal{B})$ exists,
+\item any complex consisting of left acyclic objects for $F$ computes $LF$,
+\item any complex is the target of a quasi-isomorphism from a complex
+consisting of left acyclic objects for $F$,
+\item for $E \in D(\mathcal{A})$
+\begin{enumerate}
\item $H^i(LF(\tau_{\leq a + n - 1}E)) \to H^i(LF(E))$ is an isomorphism
+for $i \leq a$,
+\item $H^i(LF(E)) \to H^i(LF(\tau_{\geq b}E))$ is an isomorphism
+for $i \geq b$,
+\item if $H^i(E) = 0$ for $i \not \in [a, b]$ for some
+$-\infty \leq a \leq b \leq \infty$, then $H^i(LF(E)) = 0$
+for $i \not \in [a - n + 1, b]$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is dual to Lemma \ref{lemma-unbounded-right-derived}.
+\end{proof}
+
+
+
+
+
+
+\section{Derived colimits}
+\label{section-derived-colimit}
+
+\noindent
+In a triangulated category there is a notion of derived colimit.
+
+\begin{definition}
+\label{definition-derived-colimit}
+Let $\mathcal{D}$ be a triangulated category.
+Let $(K_n, f_n)$ be a system of objects of $\mathcal{D}$.
+We say an object $K$ is a {\it derived colimit}, or a
+{\it homotopy colimit} of the system $(K_n)$ if
+the direct sum $\bigoplus K_n$ exists and there is a distinguished triangle
+$$
+\bigoplus K_n \to \bigoplus K_n \to K \to \bigoplus K_n[1]
+$$
+where the map $\bigoplus K_n \to \bigoplus K_n$ is given
+by $1 - f_n$ in degree $n$. If this is the
+case, then we sometimes indicate this by the notation
+$K = \text{hocolim} K_n$.
+\end{definition}
+
+\noindent
+By TR3 a derived colimit, if it exists, is unique up to (non-unique)
+isomorphism. Moreover, by TR1 a derived colimit of $K_n$ exists
+as soon as $\bigoplus K_n$ exists. The derived category $D(\textit{Ab})$
+of the category of abelian groups is an example of a triangulated
+category where all homotopy colimits exist.
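
\noindent
Here is a sample computation in $D(\textit{Ab})$.

\begin{example}
\label{example-hocolim-telescope}
Consider the system $K_n = \mathbf{Z}$ in $D(\textit{Ab})$ with
transition maps $f_n : \mathbf{Z} \to \mathbf{Z}$ given by multiplication
by $2$. The map $\bigoplus K_n \to \bigoplus K_n$ given by $1 - f_n$ in
degree $n$ is injective with cokernel
$\colim(\mathbf{Z} \xrightarrow{2} \mathbf{Z} \xrightarrow{2} \ldots)
= \mathbf{Z}[1/2]$. The resulting short exact sequence of complexes
concentrated in degree $0$ gives a distinguished triangle as in
Definition \ref{definition-derived-colimit}, whence
$\text{hocolim} K_n = \mathbf{Z}[1/2]$ placed in degree $0$.
\end{example}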
+
+\medskip\noindent
+The nonuniqueness makes it hard to pin down the derived colimit. In
+More on Algebra, Lemma \ref{more-algebra-lemma-map-from-hocolim}
+the reader finds an exact sequence
+$$
+0 \to R^1\lim \Hom(K_n, L[-1]) \to \Hom(\text{hocolim} K_n, L)
+\to \lim \Hom(K_n, L) \to 0
+$$
+describing the $\Hom$s out of a homotopy colimit in terms of the
+usual $\Hom$s.
+
+\begin{remark}
+\label{remark-uniqueness-derived-colimit}
+Let $\mathcal{D}$ be a triangulated category.
+Let $(K_n, f_n)$ be a system of objects of $\mathcal{D}$.
+We may think of a derived colimit as an object $K$
+of $\mathcal{D}$ endowed with morphisms $i_n : K_n \to K$
+such that $i_{n + 1} \circ f_n = i_n$ and such that there
+exists a morphism $c : K \to \bigoplus K_n$ with the property that
+$$
+\bigoplus K_n \xrightarrow{1 - f_n} \bigoplus K_n \xrightarrow{i_n}
+K \xrightarrow{c} \bigoplus K_n[1]
+$$
+is a distinguished triangle. If $(K', i'_n, c')$ is a second
+derived colimit, then there exists an isomorphism
+$\varphi : K \to K'$ such that $\varphi \circ i_n = i'_n$ and
+$c' \circ \varphi = c$. The existence of $\varphi$ is
+TR3 and the fact that $\varphi$ is an isomorphism is
+Lemma \ref{lemma-third-isomorphism-triangle}.
+\end{remark}
+
+\begin{remark}
+\label{remark-functoriality-derived-colimit}
+Let $\mathcal{D}$ be a triangulated category.
+Let $(a_n) : (K_n, f_n) \to (L_n, g_n)$ be a morphism of systems
+of objects of $\mathcal{D}$. Let $(K, i_n, c)$ be a derived
+colimit of the first system and let $(L, j_n, d)$ be a derived
+colimit of the second system with notation as in
+Remark \ref{remark-uniqueness-derived-colimit}.
+Then there exists a morphism $a : K \to L$
+such that $a \circ i_n = j_n$ and $d \circ a = (a_n[1]) \circ c$.
+This follows from TR3 applied to the defining distinguished
+triangles.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-hocolim-subsequence}
+Let $\mathcal{D}$ be a triangulated category.
+Let $(K_n, f_n)$ be a system of objects of $\mathcal{D}$.
+Let $n_1 < n_2 < n_3 < \ldots$ be a sequence of integers.
+Assume $\bigoplus K_n$ and $\bigoplus K_{n_i}$ exist.
+Then there exists an isomorphism
+$\text{hocolim} K_{n_i} \to \text{hocolim} K_n$
+such that
+$$
+\xymatrix{
+K_{n_i} \ar[r] \ar[d]_{\text{id}} & \text{hocolim} K_{n_i} \ar[d] \\
+K_{n_i} \ar[r] & \text{hocolim} K_n
+}
+$$
+commutes for all $i$.
+\end{lemma}
+
+\begin{proof}
+Let $g_i : K_{n_i} \to K_{n_{i + 1}}$ be the composition
+$f_{n_{i + 1} - 1} \circ \ldots \circ f_{n_i}$.
+We construct commutative diagrams
+$$
+\vcenter{
+\xymatrix{
+\bigoplus\nolimits_i K_{n_i} \ar[r]_{1 - g_i} \ar[d]_b &
+\bigoplus\nolimits_i K_{n_i} \ar[d]^a \\
+\bigoplus\nolimits_n K_n \ar[r]^{1 - f_n} &
+\bigoplus\nolimits_n K_n
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+\bigoplus\nolimits_n K_n \ar[r]_{1 - f_n} \ar[d]_d &
+\bigoplus\nolimits_n K_n \ar[d]^c \\
+\bigoplus\nolimits_i K_{n_i} \ar[r]^{1 - g_i} &
+\bigoplus\nolimits_i K_{n_i}
+}
+}
+$$
+as follows. Let $a_i = a|_{K_{n_i}}$ be the inclusion of $K_{n_i}$
+into the direct sum. In other words, $a$ is the natural inclusion.
+Let $b_i = b|_{K_{n_i}}$ be the map
+$$
+K_{n_i}
+\xrightarrow{1,\ f_{n_i},\ f_{n_i + 1} \circ f_{n_i},
+\ \ldots,\ f_{n_{i + 1} - 2} \circ \ldots \circ f_{n_i}}
+K_{n_i} \oplus K_{n_i + 1} \oplus \ldots \oplus K_{n_{i + 1} - 1}
+$$
+If $n_{i - 1} < j \leq n_i$, then we let $c_j = c|_{K_j}$
+be the map
+$$
+K_j \xrightarrow{f_{n_i - 1} \circ \ldots \circ f_j} K_{n_i}
+$$
+We let $d_j = d|_{K_j}$ be zero if $j \not = n_i$ for any $i$
+and we let $d_{n_i}$ be the natural inclusion of $K_{n_i}$
+into the direct sum. In other words, $d$ is the natural projection.
+By TR3 these diagrams define morphisms
+$$
+\varphi : \text{hocolim} K_{n_i} \to \text{hocolim} K_n
+\quad\text{and}\quad
+\psi : \text{hocolim} K_n \to \text{hocolim} K_{n_i}
+$$
Since $c \circ a$ and $d \circ b$ are the identity maps we see that
$\psi \circ \varphi$ is an isomorphism by
Lemma \ref{lemma-third-isomorphism-triangle}.
+The other way around we get the morphisms $a \circ c$ and $b \circ d$.
+Consider the morphism
+$h = (h_j) : \bigoplus K_n \to \bigoplus K_n$ given by
+the rule: for $n_{i - 1} < j < n_i$ we set
+$$
+h_j : K_j
+\xrightarrow{1,\ f_j,\ f_{j + 1} \circ f_j,
+\ \ldots,\ f_{n_i - 1} \circ \ldots \circ f_j}
+K_j \oplus \ldots \oplus K_{n_i}
+$$
+Then the reader verifies that $(1 - f) \circ h = \text{id} - a \circ c$
+and $h \circ (1 - f) = \text{id} - b \circ d$. This means that
$\text{id} - \varphi \circ \psi$ has square zero by
Lemma \ref{lemma-third-map-square-zero} (small argument omitted).
In other words, $\varphi \circ \psi$ differs from the identity
+by a nilpotent endomorphism, hence is an isomorphism. Thus
+$\varphi$ and $\psi$ are isomorphisms as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-sums}
+Let $\mathcal{A}$ be an abelian category.
+If $\mathcal{A}$ has exact countable direct sums, then
+$D(\mathcal{A})$ has countable direct sums. In fact given
+a collection of complexes $K_i^\bullet$ indexed by a countable
+index set $I$ the termwise direct sum $\bigoplus K_i^\bullet$
+is the direct sum of $K_i^\bullet$ in $D(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+Let $L^\bullet$ be a complex. Suppose given maps
+$\alpha_i : K_i^\bullet \to L^\bullet$ in $D(\mathcal{A})$.
+This means there exist quasi-isomorphisms
+$s_i : M_i^\bullet \to K_i^\bullet$
+of complexes and maps of complexes $f_i : M_i^\bullet \to L^\bullet$
+such that $\alpha_i = f_is_i^{-1}$. By assumption the map of complexes
+$$
+s : \bigoplus M_i^\bullet \longrightarrow \bigoplus K_i^\bullet
+$$
+is a quasi-isomorphism. Hence setting $f = \bigoplus f_i$ we see that
+$\alpha = fs^{-1}$ is a map in $D(\mathcal{A})$ whose composition
+with the coprojection $K_i^\bullet \to \bigoplus K_i^\bullet$ is $\alpha_i$.
+We omit the verification that $\alpha$ is unique.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-colimit}
+Let $\mathcal{A}$ be an abelian category. Assume colimits over $\mathbf{N}$
exist and are exact. Then countable direct sums exist and are exact.
+Moreover, if $(A_n, f_n)$ is a system over $\mathbf{N}$, then there is
+a short exact sequence
+$$
+0 \to \bigoplus A_n \to \bigoplus A_n \to \colim A_n \to 0
+$$
+where the first map in degree $n$ is given by $1 - f_n$.
+\end{lemma}
+
+\begin{proof}
+The first statement follows from
+$\bigoplus A_n = \colim (A_1 \oplus \ldots \oplus A_n)$.
+For the second, note that for each $n$ we have the short exact sequence
+$$
+0 \to
+A_1 \oplus \ldots \oplus A_{n - 1} \to
+A_1 \oplus \ldots \oplus A_n \to A_n \to 0
+$$
+where the first map is given by the maps $1 - f_i$ and the
+second map is the sum of the transition maps.
+Take the colimit to get the sequence of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colim-hocolim}
+Let $\mathcal{A}$ be an abelian category. Let $L_n^\bullet$
+be a system of complexes of $\mathcal{A}$. Assume
+colimits over $\mathbf{N}$ exist and are exact in $\mathcal{A}$.
+Then the termwise
+colimit $L^\bullet = \colim L_n^\bullet$ is a homotopy colimit of the
+system in $D(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+We have an exact sequence of complexes
+$$
+0 \to \bigoplus L_n^\bullet \to \bigoplus L_n^\bullet \to L^\bullet \to 0
+$$
+by Lemma \ref{lemma-compute-colimit}.
+The direct sums are direct sums in $D(\mathcal{A})$ by
+Lemma \ref{lemma-direct-sums}.
+Thus the result follows from the definition
+of derived colimits in
+Definition \ref{definition-derived-colimit}
+and the fact that a short exact sequence of complexes
+gives a distinguished triangle
+(Lemma \ref{lemma-derived-canonical-delta-functor}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-of-hocolim}
+Let $\mathcal{D}$ be a triangulated category having countable
+direct sums. Let $\mathcal{A}$ be an abelian category with exact
+colimits over $\mathbf{N}$.
+Let $H : \mathcal{D} \to \mathcal{A}$ be a homological functor
+commuting with countable direct sums.
+Then $H(\text{hocolim} K_n) = \colim H(K_n)$
+for any system of objects of $\mathcal{D}$.
+\end{lemma}
+
+\begin{proof}
+Write $K = \text{hocolim} K_n$. Apply $H$ to the defining
+distinguished triangle to get
+$$
+\bigoplus H(K_n) \to \bigoplus H(K_n)
+\to H(K) \to
+\bigoplus H(K_n[1]) \to \bigoplus H(K_n[1])
+$$
+where the first map is given by $1 - H(f_n)$ and the last map
+is given by $1 - H(f_n[1])$.
+Apply Lemma \ref{lemma-compute-colimit} to see that this proves the lemma.
+\end{proof}
+
+\noindent
+The following lemma tells us that taking maps out of a compact
+object (to be defined later) commutes with derived colimits.
+
+\begin{lemma}
+\label{lemma-commutes-with-countable-sums}
+Let $\mathcal{D}$ be a triangulated category with countable direct sums.
+Let $K \in \mathcal{D}$ be an object such that for every
+countable set of objects $E_n \in \mathcal{D}$ the canonical map
+$$
+\bigoplus \Hom_\mathcal{D}(K, E_n)
+\longrightarrow
+\Hom_\mathcal{D}(K, \bigoplus E_n)
+$$
+is a bijection. Then, given any system $L_n$ of $\mathcal{D}$ over
+$\mathbf{N}$ whose derived colimit $L = \text{hocolim} L_n$
+exists, the map
+$$
+\colim \Hom_\mathcal{D}(K, L_n) \longrightarrow \Hom_\mathcal{D}(K, L)
+$$
+is a bijection.
+\end{lemma}
+
+\begin{proof}
+Consider the defining distinguished triangle
+$$
+\bigoplus L_n \to \bigoplus L_n \to L \to \bigoplus L_n[1]
+$$
+Apply the cohomological functor $\Hom_\mathcal{D}(K, -)$
+(see Lemma \ref{lemma-representable-homological}).
+By elementary considerations concerning colimits of abelian groups
+we get the result.
+\end{proof}
+
+
+
+
+
+
+\section{Derived limits}
+\label{section-derived-limit}
+
+\noindent
+In a triangulated category there is a notion of derived limit.
+
+\begin{definition}
+\label{definition-derived-limit}
+Let $\mathcal{D}$ be a triangulated category.
+Let $(K_n, f_n)$ be an inverse system of objects of $\mathcal{D}$.
+We say an object $K$ is a {\it derived limit}, or a
+{\it homotopy limit} of the system $(K_n)$ if
+the product $\prod K_n$ exists and there is a distinguished triangle
+$$
+K \to \prod K_n \to \prod K_n \to K[1]
+$$
+where the map $\prod K_n \to \prod K_n$ is given
+by $(k_n) \mapsto (k_n - f_{n+1}(k_{n + 1}))$. If this is the
+case, then we sometimes indicate this by the notation $K = R\lim K_n$.
+\end{definition}
+
+\noindent
+By TR3 a derived limit, if it exists, is unique up to (non-unique)
+isomorphism. Moreover, by TR1 a derived limit $R\lim K_n$ exists
+as soon as $\prod K_n$ exists. The derived category $D(\textit{Ab})$
+of the category of abelian groups is an example of a triangulated category
+where all derived limits exist.
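+
+\medskip\noindent
+For example (an illustration added to the text): consider in
+$D(\textit{Ab})$ the inverse system $K_n = \mathbf{Z}/p^n\mathbf{Z}$,
+placed in degree $0$, with the canonical surjective transition maps.
+Writing $1 - f$ for the map
+$(x_n) \mapsto (x_n - f_{n + 1}(x_{n + 1}))$ on
+$\prod \mathbf{Z}/p^n\mathbf{Z}$, the long exact cohomology sequence
+of the defining distinguished triangle gives
+$$
+H^0(R\lim K_n) = \text{Ker}(1 - f) = \lim \mathbf{Z}/p^n\mathbf{Z}
+= \mathbf{Z}_p
+$$
+and $H^1(R\lim K_n) = \text{Coker}(1 - f) =
+R^1\lim \mathbf{Z}/p^n\mathbf{Z} = 0$ because the transition maps are
+surjective. All other cohomologies vanish, so
+$R\lim \mathbf{Z}/p^n\mathbf{Z} = \mathbf{Z}_p$ placed in degree $0$.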
+
+\medskip\noindent
+The nonuniqueness makes it hard to pin down the derived limit. In
+More on Algebra, Lemma \ref{more-algebra-lemma-map-into-Rlim}
+the reader finds an exact sequence
+$$
+0 \to R^1\lim \Hom(L, K_n[-1]) \to \Hom(L, R\lim K_n)
+\to \lim \Hom(L, K_n) \to 0
+$$
+describing the $\Hom$s into a derived limit in terms of the
+usual $\Hom$s.
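+
+\medskip\noindent
+Dually to Lemma \ref{lemma-cohomology-of-hocolim} we mention (this
+remark is an addition to the text) that if countable products exist
+and are exact in $\mathcal{A}$ and $(K_n)$ is an inverse system of
+$D(\mathcal{A})$, then the long exact cohomology sequence of the
+defining distinguished triangle combined with
+Lemma \ref{lemma-products} below produces for every $p$ a short
+exact sequence
+$$
+0 \to R^1\lim H^{p - 1}(K_n) \to H^p(R\lim K_n) \to \lim H^p(K_n) \to 0
+$$
+computing the cohomology of the derived limit.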
+
+\begin{lemma}
+\label{lemma-products}
+Let $\mathcal{A}$ be an abelian category with exact
+countable products. Then
+\begin{enumerate}
+\item $D(\mathcal{A})$ has countable products,
+\item countable products $\prod K_i$ in $D(\mathcal{A})$ are obtained by
+taking termwise products of any complexes representing the $K_i$, and
+\item $H^p(\prod K_i) = \prod H^p(K_i)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $K_i^\bullet$ be a complex representing $K_i$ in $D(\mathcal{A})$.
+Let $L^\bullet$ be a complex. Suppose given maps
+$\alpha_i : L^\bullet \to K_i^\bullet$ in $D(\mathcal{A})$.
+This means there exist quasi-isomorphisms $s_i : K_i^\bullet \to M_i^\bullet$
+of complexes and maps of complexes $f_i : L^\bullet \to M_i^\bullet$
+such that $\alpha_i = s_i^{-1}f_i$. By assumption the map of complexes
+$$
+s : \prod K_i^\bullet \longrightarrow \prod M_i^\bullet
+$$
+is a quasi-isomorphism. Hence setting $f = \prod f_i$ we see that
+$\alpha = s^{-1}f$ is a map in $D(\mathcal{A})$ whose composition
+with the projection $\prod K_i^\bullet \to K_i^\bullet$ is $\alpha_i$.
+We omit the verification that $\alpha$ is unique.
+\end{proof}
+
+\noindent
+The duals of Lemmas \ref{lemma-compute-colimit},
+\ref{lemma-colim-hocolim}, and
+\ref{lemma-commutes-with-countable-sums}
+should be stated here and proved. However, we do not know any applications
+of these lemmas for now.
+
+\begin{lemma}
+\label{lemma-inverse-limit-bounded-below}
+Let $\mathcal{A}$ be an abelian category with countable products and
+enough injectives. Let $(K_n)$ be an inverse system of $D^+(\mathcal{A})$.
+Then $R\lim K_n$ exists.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that $\prod K_n$ exists in $D(\mathcal{A})$.
+For every $n$ we can represent $K_n$ by a bounded below complex
+$I_n^\bullet$ of injectives (Lemma \ref{lemma-injective-resolutions-exist}).
+Then $\prod K_n$ is represented by $\prod I_n^\bullet$, see
+Lemma \ref{lemma-product-K-injective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-difficulty-K-injectives}
+Let $\mathcal{A}$ be an abelian category with countable products and
+enough injectives. Let $K^\bullet$ be a complex. Let $I_n^\bullet$ be
+the inverse system of bounded below complexes of injectives produced by
+Lemma \ref{lemma-special-inverse-system}. Then
+$I^\bullet = \lim I_n^\bullet$ exists, is K-injective, and
+the following are equivalent
+\begin{enumerate}
+\item the map $K^\bullet \to I^\bullet$ is a quasi-isomorphism,
+\item the canonical map $K^\bullet \to R\lim \tau_{\geq -n}K^\bullet$
+is an isomorphism in $D(\mathcal{A})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The statement of the lemma makes sense as $R\lim \tau_{\geq -n}K^\bullet$
+exists by Lemma \ref{lemma-inverse-limit-bounded-below}.
+Each complex $I_n^\bullet$ is K-injective by
+Lemma \ref{lemma-bounded-below-injectives-K-injective}.
+Choose direct sum decompositions $I_{n + 1}^p = C_{n + 1}^p \oplus I_n^p$
+for all $n \geq 1$. Set $C_1^p = I_1^p$. The complex
+$I^\bullet = \lim I_n^\bullet$ exists because we can take
+$I^p = \prod_{n \geq 1} C_n^p$. Fix $p \in \mathbf{Z}$.
+We claim there is a split short exact sequence
+$$
+0 \to I^p \to \prod I_n^p \to \prod I_n^p \to 0
+$$
+of objects of $\mathcal{A}$. Here the first map is given by
+the projection maps $I^p \to I_n^p$ and the second map
+by $(x_n) \mapsto (x_n - f^p_{n + 1}(x_{n + 1}))$ where
+$f^p_n : I_n^p \to I_{n - 1}^p$ are the transition maps.
+The splitting comes from the map $\prod I_n^p \to \prod C_n^p = I^p$.
+We obtain a termwise split short exact sequence of complexes
+$$
+0 \to I^\bullet \to \prod I_n^\bullet \to \prod I_n^\bullet \to 0
+$$
+Hence a corresponding distinguished triangle in $K(\mathcal{A})$
+and $D(\mathcal{A})$. By Lemma \ref{lemma-product-K-injective}
+the products are K-injective and represent the corresponding
+products in $D(\mathcal{A})$.
+It follows that $I^\bullet$ represents $R\lim I_n^\bullet$
+(Definition \ref{definition-derived-limit}).
+Moreover, it follows that $I^\bullet$ is K-injective by
+Lemma \ref{lemma-triangle-K-injective}.
+By the commutative diagram of Lemma \ref{lemma-special-inverse-system}
+we obtain a corresponding commutative diagram
+$$
+\xymatrix{
+K^\bullet \ar[r] \ar[d] & R\lim \tau_{\geq -n} K^\bullet \ar[d] \\
+I^\bullet \ar[r] & R\lim I_n^\bullet
+}
+$$
+in $D(\mathcal{A})$. Since the right vertical arrow is an isomorphism
+(as derived limits are defined on the level of the derived category
+and since $\tau_{\geq -n}K^\bullet \to I_n^\bullet$ is a quasi-isomorphism),
+the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-enough-K-injectives-Ab4-star}
+Let $\mathcal{A}$ be an abelian category having enough injectives
+and exact countable products. Then for every complex
+there is a quasi-isomorphism to a K-injective complex.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-difficulty-K-injectives} it suffices to show that
+$K \to R\lim\tau_{\geq -n}K$ is an isomorphism for all $K$ in $D(\mathcal{A})$.
+Consider the defining distinguished triangle
+$$
+R\lim\tau_{\geq -n}K \to
+\prod \tau_{\geq -n}K \to
+\prod \tau_{\geq -n}K \to
+(R\lim\tau_{\geq -n}K)[1]
+$$
+By Lemma \ref{lemma-products} we have
+$$
+H^p(\prod \tau_{\geq -n}K) = \prod\nolimits_{n \geq -p} H^p(K)
+$$
+It follows in a straightforward manner from the long exact cohomology
+sequence of the displayed distinguished triangle
+that $H^p(R\lim \tau_{\geq -n}K) = H^p(K)$.
+\end{proof}
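+
+\noindent
+To spell out the last step of the proof (this elaboration is an
+addition to the text): fix $p \in \mathbf{Z}$. For $n \geq -p$ the
+transition maps of the system induce the identity on
+$H^p(\tau_{\geq -n}K) = H^p(K)$, so on $H^p$ the second map of the
+distinguished triangle becomes
+$$
+\prod\nolimits_{n \geq -p} H^p(K) \longrightarrow
+\prod\nolimits_{n \geq -p} H^p(K), \quad
+(x_n) \longmapsto (x_n - x_{n + 1})
+$$
+Its kernel is the diagonal copy of $H^p(K)$ and it is surjective: a
+section sends $(y_n)$ to $(x_n)$ with
+$x_n = -\sum_{k = -p}^{n - 1} y_k$, a finite sum of projections for
+each $n$. The long exact cohomology sequence then gives
+$H^p(R\lim \tau_{\geq -n}K) = H^p(K)$, the cokernel contribution from
+degree $p - 1$ vanishing by the same surjectivity.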
+
+
+
+
+\section{Operations on full subcategories}
+\label{section-operate-on-full}
+
+\noindent
+Let $\mathcal{T}$ be a triangulated category. We will identify full
+subcategories of $\mathcal{T}$ with subsets of $\Ob(\mathcal{T})$.
+Given full subcategories $\mathcal{A}, \mathcal{B}, \ldots$ we let
+\begin{enumerate}
+\item $\mathcal{A}[a, b]$ for $-\infty \leq a \leq b \leq \infty$
+be the full subcategory of $\mathcal{T}$ consisting of all objects
+$A[-i]$ with $i \in [a, b] \cap \mathbf{Z}$
+with $A \in \Ob(\mathcal{A})$ (note the minus sign!),
+\item $smd(\mathcal{A})$ be the full subcategory of $\mathcal{T}$
+consisting of all objects which are isomorphic to direct summands of objects
+of $\mathcal{A}$,
+\item $add(\mathcal{A})$ be the full subcategory of $\mathcal{T}$
+consisting of all objects which are isomorphic to finite direct sums of objects
+of $\mathcal{A}$,
+\item $\mathcal{A} \star \mathcal{B}$ be the full subcategory of $\mathcal{T}$
+consisting of all objects $X$ of $\mathcal{T}$ which fit into a distinguished
+triangle $A \to X \to B$ with $A \in \Ob(\mathcal{A})$ and
+$B \in \Ob(\mathcal{B})$,
+\item $\mathcal{A}^{\star n} = \mathcal{A} \star \ldots \star \mathcal{A}$
+with $n \geq 1$ factors (we will see $\star$ is associative below),
+\item $smd(add(\mathcal{A})^{\star n}) =
+smd(add(\mathcal{A}) \star \ldots \star add(\mathcal{A}))$
+with $n \geq 1$ factors.
+\end{enumerate}
+If $E$ is an object of $\mathcal{T}$, then we think of $E$ sometimes
+also as the full subcategory of $\mathcal{T}$ whose single object is $E$.
+Then we can consider things like $add(E[-1, 2])$ and so on and so forth.
+We warn the reader that this notation is not universally accepted.
+
+\begin{lemma}
+\label{lemma-associativity-star}
+Let $\mathcal{T}$ be a triangulated category.
+Given full subcategories $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$
+we have $(\mathcal{A} \star \mathcal{B}) \star \mathcal{C} =
+\mathcal{A} \star (\mathcal{B} \star \mathcal{C})$.
+\end{lemma}
+
+\begin{proof}
+If we have distinguished triangles $A \to X \to B$ and $X \to Y \to C$
+then by Axiom TR4 we have distinguished triangles
+$A \to Y \to Z$ and $B \to Z \to C$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smd-star}
+Let $\mathcal{T}$ be a triangulated category.
+Given full subcategories $\mathcal{A}$, $\mathcal{B}$
+we have
+$smd(\mathcal{A}) \star smd(\mathcal{B}) \subset
+smd(\mathcal{A} \star \mathcal{B})$ and
+$smd(smd(\mathcal{A}) \star smd(\mathcal{B})) =
+smd(\mathcal{A} \star \mathcal{B})$.
+\end{lemma}
+
+\begin{proof}
+Suppose we have a distinguished triangle $A_1 \to X \to B_1$ where
+$A_1 \oplus A_2 \in \Ob(\mathcal{A})$ and $B_1 \oplus B_2 \in \Ob(\mathcal{B})$.
+Then we obtain a distinguished triangle
+$A_1 \oplus A_2 \to A_2 \oplus X \oplus B_2 \to B_1 \oplus B_2$
+which proves that $X$ is in $smd(\mathcal{A} \star \mathcal{B})$.
+This proves the inclusion. The equality follows trivially from this.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-add-star}
+Let $\mathcal{T}$ be a triangulated category. Given full subcategories
+$\mathcal{A}$, $\mathcal{B}$ the full subcategories
+$add(\mathcal{A}) \star add(\mathcal{B})$ and
+$smd(add(\mathcal{A}))$ are closed under direct sums.
+\end{lemma}
+
+\begin{proof}
+Namely, if $A \to X \to B$ and $A' \to X' \to B'$ are distinguished triangles
+and $A, A' \in add(\mathcal{A})$ and $B, B' \in add(\mathcal{B})$ then
+$A \oplus A' \to X \oplus X' \to B \oplus B'$ is a distinguished triangle
+with $A \oplus A' \in add(\mathcal{A})$ and $B \oplus B' \in add(\mathcal{B})$.
+The result for $smd(add(\mathcal{A}))$ is trivial.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cone-n}
+Let $\mathcal{T}$ be a triangulated category. Given a full subcategory
+$\mathcal{A}$ for $n \geq 1$ the subcategory
+$$
+\mathcal{C}_n = smd(add(\mathcal{A})^{\star n}) =
+smd(add(\mathcal{A}) \star \ldots \star add(\mathcal{A}))
+$$
+defined above is a strictly full subcategory of $\mathcal{T}$
+closed under direct sums and direct summands and
+$\mathcal{C}_{n + m} = smd(\mathcal{C}_n \star \mathcal{C}_m)$
+for all $n, m \geq 1$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemmas \ref{lemma-associativity-star}, \ref{lemma-smd-star}, and
+\ref{lemma-add-star}.
+\end{proof}
+
+\begin{remark}
+\label{remark-operations-functor}
+Let $F : \mathcal{T} \to \mathcal{T}'$ be an exact functor of triangulated
+categories. Given a full subcategory $\mathcal{A}$ of $\mathcal{T}$ we denote
+$F(\mathcal{A})$ the full subcategory of $\mathcal{T}'$ whose objects
+consist of all objects $F(A)$ with $A \in \Ob(\mathcal{A})$. We have
+$$
+F(\mathcal{A}[a, b]) = F(\mathcal{A})[a, b]
+$$
+$$
+F(smd(\mathcal{A})) \subset smd(F(\mathcal{A})),
+$$
+$$
+F(add(\mathcal{A})) \subset add(F(\mathcal{A})),
+$$
+$$
+F(\mathcal{A} \star \mathcal{B}) \subset F(\mathcal{A}) \star F(\mathcal{B}),
+$$
+$$
+F(\mathcal{A}^{\star n}) \subset F(\mathcal{A})^{\star n}.
+$$
+We omit the trivial verifications.
+\end{remark}
+
+\begin{remark}
+\label{remark-operations-unions}
+Let $\mathcal{T}$ be a triangulated category. Given full subcategories
+$\mathcal{A}_1 \subset \mathcal{A}_2 \subset \mathcal{A}_3 \subset \ldots$
+and $\mathcal{B}$ of $\mathcal{T}$ we have
+$$
+\left(\bigcup \mathcal{A}_i\right)[a, b] = \bigcup \mathcal{A}_i[a, b]
+$$
+$$
+smd\left(\bigcup \mathcal{A}_i\right) = \bigcup smd(\mathcal{A}_i),
+$$
+$$
+add\left(\bigcup \mathcal{A}_i\right) = \bigcup add(\mathcal{A}_i),
+$$
+$$
+\left(\bigcup \mathcal{A}_i\right) \star \mathcal{B} =
+\bigcup \mathcal{A}_i \star \mathcal{B},
+$$
+$$
+\mathcal{B} \star \left(\bigcup \mathcal{A}_i\right) =
+\bigcup \mathcal{B} \star \mathcal{A}_i,
+$$
+$$
+\left(\bigcup \mathcal{A}_i\right)^{\star n} =
+\bigcup \mathcal{A}_i^{\star n}.
+$$
+We omit the trivial verifications.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-in-cone-n}
+Let $\mathcal{A}$ be an abelian category. Let $\mathcal{D} = D(\mathcal{A})$.
+Let $\mathcal{E} \subset \Ob(\mathcal{A})$ be a subset which we view as
+a subset of $\Ob(\mathcal{D})$ also. Let $K$ be an object of $\mathcal{D}$.
+\begin{enumerate}
+\item Let $b \geq a$ and assume $H^i(K)$ is zero for $i \not \in [a, b]$
+and $H^i(K) \in \mathcal{E}$ if $i \in [a, b]$. Then $K$ is in
+$smd(add(\mathcal{E}[a, b])^{\star (b - a + 1)})$.
+\item Let $b \geq a$ and assume $H^i(K)$ is zero for $i \not \in [a, b]$
+and $H^i(K) \in smd(add(\mathcal{E}))$ if $i \in [a, b]$. Then $K$ is in
+$smd(add(\mathcal{E}[a, b])^{\star (b - a + 1)})$.
+\item Let $b \geq a$ and assume $K$ can be represented by a complex $K^\bullet$
+with $K^i = 0$ for $i \not \in [a, b]$ and $K^i \in \mathcal{E}$ for
+$i \in [a, b]$. Then $K$ is in
+$smd(add(\mathcal{E}[a, b])^{\star (b - a + 1)})$.
+\item Let $b \geq a$ and assume $K$ can be represented by a complex $K^\bullet$
+with $K^i = 0$ for $i \not \in [a, b]$ and $K^i \in smd(add(\mathcal{E}))$ for
+$i \in [a, b]$. Then $K$ is in
+$smd(add(\mathcal{E}[a, b])^{\star (b - a + 1)})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-cone-n} without further mention.
+We will prove (2) which trivially implies (1). We use induction
+on $b - a$. If $b - a = 0$, then $K$ is isomorphic to $H^a(K)[-a]$
+in $\mathcal{D}$ and the result is immediate. If $b - a > 0$, then
+we consider the distinguished triangle
+$$
+\tau_{\leq b - 1}K^\bullet \to K^\bullet \to H^b(K^\bullet)[-b]
+$$
+and we conclude by induction on $b - a$. We omit the proof of (3) and (4).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-forward-cone-n}
+Let $\mathcal{T}$ be a triangulated category. Let
+$H : \mathcal{T} \to \mathcal{A}$ be a homological functor
+to an abelian category $\mathcal{A}$.
+Let $a \leq b$ and $\mathcal{E} \subset \Ob(\mathcal{T})$
+be a subset such that $H^i(E) = 0$ for $E \in \mathcal{E}$
+and $i \not \in [a, b]$.
+Then for $X \in smd(add(\mathcal{E}[-m, m])^{\star n})$
+we have $H^i(X) = 0$ for $i \not \in [-m + a, m + b]$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Pleasant exercise in the definitions.
+\end{proof}
+
+
+
+
+
+\section{Generators of triangulated categories}
+\label{section-generators}
+
+\noindent
+In this section we briefly introduce a few of the different notions
+of a generator for a triangulated category. Our terminology is
+taken from \cite{BvdB} (except that we use ``saturated'' for what
+they call ``\'epaisse'', see Definition \ref{definition-saturated}, and
+our definition of $add(\mathcal{A})$ is different).
+
+\medskip\noindent
+Let $\mathcal{D}$ be a triangulated category. Let $E$ be an object
+of $\mathcal{D}$. Denote $\langle E \rangle_1$ the strictly full subcategory
+of $\mathcal{D}$ consisting of objects in $\mathcal{D}$ isomorphic to
+direct summands of finite direct sums
+$$
+\bigoplus\nolimits_{i = 1, \ldots, r} E[n_i]
+$$
+of shifts of $E$. It is clear that in the notation of
+Section \ref{section-operate-on-full} we have
+$$
+\langle E \rangle_1 = smd(add(E[-\infty, \infty]))
+$$
+For $n > 1$ let $\langle E \rangle_n$ denote the full
+subcategory of $\mathcal{D}$ consisting of objects of $\mathcal{D}$
+isomorphic to direct summands of objects $X$ which fit into a distinguished
+triangle
+$$
+A \to X \to B \to A[1]
+$$
+where $A$ is an object of $\langle E \rangle_1$ and $B$ an object of
+$\langle E \rangle_{n - 1}$. In the notation of
+Section \ref{section-operate-on-full} we have
+$$
+\langle E \rangle_n = smd(\langle E \rangle_1 \star \langle E \rangle_{n - 1})
+$$
+Each of the categories $\langle E \rangle_n$ is a strictly full additive
+(by Lemma \ref{lemma-add-star}) subcategory of $\mathcal{D}$ preserved
+under shifts and under taking summands. But, $\langle E \rangle_n$ is not
+necessarily closed under ``taking cones'' or ``extensions'', hence not
+necessarily a triangulated subcategory. This will be true for the
+subcategory
+$$
+\langle E \rangle = \bigcup\nolimits_n \langle E \rangle_n
+$$
+as will be shown in the lemmas below.
+
+\begin{lemma}
+\label{lemma-generated-by-E-explicit}
+Let $\mathcal{T}$ be a triangulated category. Let $E$ be an object
+of $\mathcal{T}$. For $n \geq 1$ we have
+$$
+\langle E \rangle_n =
+smd(\langle E \rangle_1 \star \ldots \star \langle E \rangle_1) =
+smd({\langle E \rangle_1}^{\star n}) =
+\bigcup\nolimits_{m \geq 1} smd(add(E[-m, m])^{\star n})
+$$
+For $n, n' \geq 1$ we have $\langle E \rangle_{n + n'} =
+smd(\langle E \rangle_n \star \langle E \rangle_{n'})$.
+\end{lemma}
+
+\begin{proof}
+The left equality in the displayed formula follows from
+Lemmas \ref{lemma-associativity-star} and \ref{lemma-smd-star}
+and induction. The middle equality is a matter of notation.
+Since $\langle E \rangle_1 = smd(add(E[-\infty, \infty]))$
+and since $E[-\infty, \infty] = \bigcup_{m \geq 1} E[-m, m]$
+we see from Remark \ref{remark-operations-unions} and
+Lemma \ref{lemma-smd-star} that we get the equality on the right.
+Then the final statement follows from the remark and the
+corresponding statement of Lemma \ref{lemma-cone-n}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-find-smallest-containing-E}
+Let $\mathcal{D}$ be a triangulated category. Let $E$ be an object
+of $\mathcal{D}$. The subcategory
+$$
+\langle E \rangle = \bigcup\nolimits_n \langle E \rangle_n
+= \bigcup\nolimits_{n, m \geq 1} smd(add(E[-m, m])^{\star n})
+$$
+is a strictly full, saturated, triangulated subcategory of $\mathcal{D}$
+and it is the smallest such subcategory of $\mathcal{D}$ containing
+the object $E$.
+\end{lemma}
+
+\begin{proof}
+The equality on the right follows from
+Lemma \ref{lemma-generated-by-E-explicit}.
+It is clear that $\langle E \rangle = \bigcup \langle E \rangle_n$
+contains $E$, is preserved under shifts, direct sums, direct summands.
+If $A \in \langle E \rangle_a$ and $B \in \langle E \rangle_b$
+and if $A \to X \to B \to A[1]$ is a distinguished triangle, then
+$X \in \langle E \rangle_{a + b}$ by Lemma \ref{lemma-generated-by-E-explicit}.
+Hence $\bigcup \langle E \rangle_n$ is also preserved under extensions
+and it follows that it is a triangulated subcategory.
+
+\medskip\noindent
+Finally, let $\mathcal{D}' \subset \mathcal{D}$ be a
+strictly full, saturated, triangulated subcategory of $\mathcal{D}$
+containing $E$. Then
+$\mathcal{D}'[-\infty, \infty] \subset \mathcal{D}'$,
+$add(\mathcal{D}') \subset \mathcal{D}'$,
+$smd(\mathcal{D}') \subset \mathcal{D}'$, and
+$\mathcal{D}' \star \mathcal{D}' \subset \mathcal{D}'$.
+In other words, all the operations we used to construct
+$\langle E \rangle$ out of $E$ preserve $\mathcal{D}'$.
+Hence $\langle E \rangle \subset \mathcal{D}'$ and this
+finishes the proof.
+\end{proof}
+
+\begin{definition}
+\label{definition-generators}
+Let $\mathcal{D}$ be a triangulated category. Let $E$ be an object
+of $\mathcal{D}$.
+\begin{enumerate}
+\item We say $E$ is a {\it classical generator} of $\mathcal{D}$
+if the smallest strictly full, saturated, triangulated subcategory
+of $\mathcal{D}$ containing $E$ is equal to $\mathcal{D}$, in
+other words, if $\langle E \rangle = \mathcal{D}$.
+\item We say $E$ is a {\it strong generator} of $\mathcal{D}$
+if $\langle E \rangle_n = \mathcal{D}$ for some $n \geq 1$.
+\item We say $E$ is a {\it weak generator} or a {\it generator}
+of $\mathcal{D}$
+if for any nonzero object $K$ of $\mathcal{D}$ there exists
+an integer $n$ and a nonzero map $E \to K[n]$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+This definition can be generalized to the case of a family of objects.
+
+\begin{lemma}
+\label{lemma-right-orthogonal}
+Let $\mathcal{D}$ be a triangulated category. Let $E, K$ be objects
+of $\mathcal{D}$. The following are equivalent
+\begin{enumerate}
+\item $\Hom(E, K[i]) = 0$ for all $i \in \mathbf{Z}$,
+\item $\Hom(E', K) = 0$ for all $E' \in \langle E \rangle$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (2) $\Rightarrow$ (1) is immediate. Conversely,
+assume (1). Then $\Hom(X, K) = 0$ for all $X$ in $\langle E \rangle_1$.
+Arguing by induction on $n$ and using
+Lemma \ref{lemma-representable-homological}
+we see that $\Hom(X, K) = 0$ for all $X$
+in $\langle E \rangle_n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-classical-generator-generator}
+Let $\mathcal{D}$ be a triangulated category. Let $E$ be an object
+of $\mathcal{D}$. If $E$ is a classical generator of $\mathcal{D}$,
+then $E$ is a generator.
+\end{lemma}
+
+\begin{proof}
+Assume $E$ is a classical generator. Let $K$ be an object of $\mathcal{D}$
+such that $\Hom(E, K[i]) = 0$ for all $i \in \mathbf{Z}$. By
+Lemma \ref{lemma-right-orthogonal}
+$\Hom(E', K) = 0$ for all $E'$ in $\langle E \rangle$. However, since
+$\mathcal{D} = \langle E \rangle$ we conclude that $\text{id}_K = 0$,
+i.e., $K = 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-classical-generator-strong-generator}
+Let $\mathcal{D}$ be a triangulated category which has a strong generator.
+Let $E$ be an object of $\mathcal{D}$. If $E$ is a classical generator of
+$\mathcal{D}$, then $E$ is a strong generator.
+\end{lemma}
+
+\begin{proof}
+Let $E'$ be an object of $\mathcal{D}$ such that
+$\mathcal{D} = \langle E' \rangle_n$. Since
+$\mathcal{D} = \langle E \rangle$ we see that $E' \in \langle E \rangle_m$
+for some $m \geq 1$ by Lemma \ref{lemma-find-smallest-containing-E}.
+Then $\langle E' \rangle_1 \subset \langle E \rangle_m$ hence
+$$
+\mathcal{D} =
+\langle E' \rangle_n = smd(
+\langle E' \rangle_1 \star \ldots \star \langle E' \rangle_1)
+\subset
+smd(
+\langle E \rangle_m \star \ldots \star \langle E \rangle_m)
+=
+\langle E \rangle_{nm}
+$$
+as desired. Here we used Lemma \ref{lemma-generated-by-E-explicit}.
+\end{proof}
+
+\begin{remark}
+\label{remark-check-on-generator}
+Let $\mathcal{D}$ be a triangulated category. Let $E$ be an object
+of $\mathcal{D}$. Let $T$ be a property of objects of $\mathcal{D}$.
+Suppose that
+\begin{enumerate}
+\item if $K_i \in \Ob(\mathcal{D})$, $i = 1, \ldots, r$ with
+$T(K_i)$ for $i = 1, \ldots, r$, then $T(\bigoplus K_i)$,
+\item if $K \to L \to M \to K[1]$ is a distinguished triangle and
+$T$ holds for two, then $T$ holds for the third object,
+\item if $T(K \oplus L)$ then $T(K)$ and $T(L)$, and
+\item $T(E[n])$ holds for all $n$.
+\end{enumerate}
+Then $T$ holds for all objects of $\langle E \rangle$.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Compact objects}
+\label{section-compact}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-compact-object}
+Let $\mathcal{D}$ be an additive category with arbitrary direct
+sums. A {\it compact object} of $\mathcal{D}$ is an object $K$
+such that the map
+$$
+\bigoplus\nolimits_{i \in I} \Hom_{\mathcal{D}}(K, E_i)
+\longrightarrow
+\Hom_{\mathcal{D}}(K, \bigoplus\nolimits_{i \in I} E_i)
+$$
+is bijective for any set $I$ and objects
+$E_i \in \Ob(\mathcal{D})$ parametrized by $i \in I$.
+\end{definition}
+
+\noindent
+This notion turns out to be very useful in algebraic geometry.
+It is an intrinsic condition on objects that forces the objects
+to be, well, compact.
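+
+\medskip\noindent
+A standard example (added for illustration): let $R$ be a ring and let
+$D(R)$ be the derived category of $R$-modules. The object $R$, viewed
+as a complex sitting in degree $0$, is compact because
+$\Hom_{D(R)}(R, K) = H^0(K)$ and cohomology commutes with the termwise
+direct sums of complexes:
+$$
+\Hom_{D(R)}(R, \bigoplus E_i) = H^0(\bigoplus E_i) =
+\bigoplus H^0(E_i) = \bigoplus \Hom_{D(R)}(R, E_i)
+$$
+More generally, it is a theorem that the compact objects of $D(R)$ are
+precisely the perfect complexes.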
+
+\begin{lemma}
+\label{lemma-compact-objects-subcategory}
+Let $\mathcal{D}$ be a (pre-)triangulated category with direct sums.
+Then the compact objects of $\mathcal{D}$ form the objects of a
+Karoubian, saturated, strictly full, (pre-)triangulated subcategory
+$\mathcal{D}_c$ of $\mathcal{D}$.
+\end{lemma}
+
+\begin{proof}
+Let $(X, Y, Z, f, g, h)$ be a distinguished triangle of $\mathcal{D}$
+with $X$ and $Y$ compact. Then it follows from
+Lemma \ref{lemma-representable-homological}
+and the five lemma
+(Homology, Lemma \ref{homology-lemma-five-lemma})
+that $Z$ is a compact object too. It is clear that if $X \oplus Y$
+is compact, then $X$, $Y$ are compact objects too. Hence
+$\mathcal{D}_c$ is a saturated triangulated subcategory.
+Since $\mathcal{D}$ is Karoubian by
+Lemma \ref{lemma-projectors-have-images-triangulated}
+we conclude that the same is true for $\mathcal{D}_c$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-write-as-colimit}
+Let $\mathcal{D}$ be a triangulated category with direct sums.
+Let $E_i$, $i \in I$ be a family of compact objects of $\mathcal{D}$
+such that $\bigoplus E_i$ generates $\mathcal{D}$.
+Then every object $X$ of $\mathcal{D}$ can be written as
+$$
+X = \text{hocolim} X_n
+$$
+where $X_1$ is a direct sum of shifts of the $E_i$ and each transition
+morphism fits into a distinguished triangle
+$Y_n \to X_n \to X_{n + 1} \to Y_n[1]$
+where $Y_n$ is a direct sum of shifts of the $E_i$.
+\end{lemma}
+
+\begin{proof}
+Set $X_1 = \bigoplus_{(i, m, \varphi)} E_i[m]$ where the direct sum is over
+all triples $(i, m, \varphi)$ such that $i \in I$, $m \in \mathbf{Z}$
+and $\varphi : E_i[m] \to X$. Then $X_1$ comes equipped with a canonical
+morphism $X_1 \to X$. Given $X_n \to X$ we set
+$Y_n = \bigoplus_{(i, m, \varphi)} E_i[m]$ where the direct sum is over
+all triples $(i, m, \varphi)$ such that $i \in I$, $m \in \mathbf{Z}$, and
+$\varphi : E_i[m] \to X_n$ is a morphism such that $E_i[m] \to X_n \to X$
+is zero. Choose a distinguished triangle
+$Y_n \to X_n \to X_{n + 1} \to Y_n[1]$
+and let $X_{n + 1} \to X$ be any morphism such that $X_n \to X_{n + 1} \to X$
+is the given one; such a morphism exists by our choice of $Y_n$.
+We obtain a morphism $\text{hocolim} X_n \to X$ by the construction
+of our maps $X_n \to X$. Choose a distinguished triangle
+$$
+C \to \text{hocolim} X_n \to X \to C[1]
+$$
+Let $E_i[m] \to C$ be a morphism. Since $E_i$ is compact, the
+composition $E_i[m] \to C \to \text{hocolim} X_n$ factors through
+$X_n$ for some $n$, say by $E_i[m] \to X_n$. Then the
+construction of $Y_n$ shows that the composition
+$E_i[m] \to X_n \to X_{n + 1}$ is zero. In other words, the composition
+$E_i[m] \to C \to \text{hocolim} X_n$ is zero. This means that our
+morphism $E_i[m] \to C$ comes from a morphism $E_i[m] \to X[-1]$.
+The construction of $X_1$ then shows that such a morphism lifts to
+$\text{hocolim} X_n$ and we conclude that our morphism $E_i[m] \to C$
+is zero. The assumption that $\bigoplus E_i$ generates $\mathcal{D}$
+implies that $C$ is zero and the proof is done.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-factor-through}
+With assumptions and notation as in Lemma \ref{lemma-write-as-colimit}.
+If $C$ is a compact object and $C \to X_n$ is a morphism, then
+there is a factorization $C \to E \to X_n$ where
+$E$ is an object of $\langle E_{i_1} \oplus \ldots \oplus E_{i_t} \rangle$
+for some $i_1, \ldots, i_t \in I$.
+\end{lemma}
+
+\begin{proof}
+We prove this by induction on $n$. The base case $n = 1$ is clear.
+If $n > 1$ consider the composition $C \to X_n \to Y_{n - 1}[1]$.
+This can be factored through some $E'[1] \to Y_{n - 1}[1]$ where
+$E'$ is a finite direct sum of shifts of the $E_i$. Let $I' \subset I$
+be the finite set of indices that occur in this direct sum. Thus we obtain
+$$
+\xymatrix{
+E' \ar[r] \ar[d] &
+C' \ar[r] \ar[d] &
+C \ar[r] \ar[d] &
+E'[1] \ar[d] \\
+Y_{n - 1} \ar[r] &
+X_{n - 1} \ar[r] &
+X_n \ar[r] &
+Y_{n - 1}[1]
+}
+$$
+By induction the morphism $C' \to X_{n - 1}$ factors through
+$E'' \to X_{n - 1}$ with $E''$ an object of
+$\langle \bigoplus_{i \in I''} E_i \rangle$
+for some finite subset $I'' \subset I$. Choose a distinguished
+triangle
+$$
+E' \to E'' \to E \to E'[1]
+$$
+then $E$ is an object of $\langle \bigoplus_{i \in I' \cup I''} E_i \rangle$.
+By construction and the axioms of a triangulated category we can choose
+morphisms $C \to E$ and a morphism $E \to X_n$ fitting into morphisms
+of triangles $(E', C', C) \to (E', E'', E)$ and
+$(E', E'', E) \to (Y_{n - 1}, X_{n - 1}, X_n)$. The composition
+$C \to E \to X_n$ may not equal the given morphism $C \to X_n$, but
+the compositions into $Y_{n - 1}$ are equal. Let $C \to X_{n - 1}$
+be a morphism that lifts the difference. By induction assumption we
+can factor this through a morphism $E''' \to X_{n - 1}$ with
+$E'''$ an object of $\langle \bigoplus_{i \in I'''} E_i \rangle$
+for some finite subset $I''' \subset I$. Thus we see that we get
+a solution by considering $E \oplus E''' \to X_n$ because
+$E \oplus E'''$ is an object of
+$\langle \bigoplus_{i \in I' \cup I'' \cup I'''} E_i \rangle$.
+\end{proof}
+
+\begin{definition}
+\label{definition-compactly-generated}
+Let $\mathcal{D}$ be a triangulated category with arbitrary direct
+sums. We say $\mathcal{D}$ is {\it compactly generated} if
+there exists a set $E_i$, $i \in I$ of compact objects such that
+$\bigoplus E_i$ generates $\mathcal{D}$.
+\end{definition}
+
+\noindent
+The following proposition clarifies the relationship between
+classical generators and weak generators.
+
+\begin{proposition}
+\label{proposition-generator-versus-classical-generator}
+Let $\mathcal{D}$ be a triangulated category with direct sums.
+Let $E$ be a compact object of $\mathcal{D}$.
+The following are equivalent
+\begin{enumerate}
+\item $E$ is a classical generator for $\mathcal{D}_c$ and
+$\mathcal{D}$ is compactly generated, and
+\item $E$ is a generator for $\mathcal{D}$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+If $E$ is a classical generator for $\mathcal{D}_c$, then
+$\mathcal{D}_c = \langle E \rangle$. It follows formally
+from the assumption that $\mathcal{D}$ is compactly generated
+and Lemma \ref{lemma-right-orthogonal} that $E$ is a generator
+for $\mathcal{D}$.
+
+\medskip\noindent
+The converse is more interesting. Assume that $E$ is a generator
+for $\mathcal{D}$. Let $X$ be a compact object of $\mathcal{D}$.
+Apply Lemma \ref{lemma-write-as-colimit} with $I = \{1\}$ and
+$E_1 = E$ to write
+$$
+X = \text{hocolim} X_n
+$$
+as in the lemma. Since $X$ is compact we
+find that $X \to \text{hocolim} X_n$ factors through $X_n$ for
+some $n$ (Lemma \ref{lemma-commutes-with-countable-sums}).
+Thus $X$ is a direct summand of $X_n$.
+By Lemma \ref{lemma-factor-through} we see that $X$ is an
object of $\langle E \rangle$ and the proposition is proven.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Brown representability}
+\label{section-brown}
+
+\noindent
+A reference for the material in this section is \cite{Neeman-Grothendieck}.
+
+\begin{lemma}
+\label{lemma-brown}
+\begin{reference}
+\cite[Theorem 3.1]{Neeman-Grothendieck}.
+\end{reference}
+Let $\mathcal{D}$ be a triangulated category with direct sums which is
+compactly generated. Let $H : \mathcal{D} \to \textit{Ab}$ be a contravariant
+cohomological functor which transforms direct sums into products.
+Then $H$ is representable.
+\end{lemma}
+
+\begin{proof}
+Let $E_i$, $i \in I$ be a set of compact objects such that
+$\bigoplus_{i \in I} E_i$ generates $\mathcal{D}$. We may and do assume
+that the set of objects $\{E_i\}$ is preserved under shifts. Consider pairs
+$(i, a)$ where $i \in I$ and $a \in H(E_i)$ and set
+$$
+X_1 = \bigoplus\nolimits_{(i, a)} E_i
+$$
+Since $H(X_1) = \prod_{(i, a)} H(E_i)$ we see that $(a)_{(i, a)}$
+defines an element $a_1 \in H(X_1)$. Set $H_1 = \Hom_\mathcal{D}(- , X_1)$.
+By Yoneda's lemma (Categories, Lemma \ref{categories-lemma-yoneda})
+the element $a_1$ defines a natural transformation $H_1 \to H$.
+
+\medskip\noindent
+We are going to inductively construct $X_n$ and transformations
+$a_n : H_n \to H$ where $H_n = \Hom_\mathcal{D}(-, X_n)$.
+Namely, we apply the procedure
+above to the functor $\Ker(H_n \to H)$ to get an object
+$$
+K_{n + 1} = \bigoplus\nolimits_{(i, k),\ k \in \Ker(H_n(E_i) \to H(E_i))} E_i
+$$
+and a transformation $\Hom_\mathcal{D}(-, K_{n + 1}) \to \Ker(H_n \to H)$.
+By Yoneda's lemma the composition $\Hom_\mathcal{D}(-, K_{n + 1}) \to H_n$
+gives a morphism $K_{n + 1} \to X_n$. We choose
+a distinguished triangle
+$$
+K_{n + 1} \to X_n \to X_{n + 1} \to K_{n + 1}[1]
+$$
+in $\mathcal{D}$. The element $a_n \in H(X_n)$ maps to zero
+in $H(K_{n + 1})$ by construction. Since $H$ is cohomological
+we can lift it to an element $a_{n + 1} \in H(X_{n + 1})$.
+
+\medskip\noindent
+We claim that $X = \text{hocolim} X_n$ represents $H$. Applying $H$
+to the defining distinguished triangle
+$$
+\bigoplus X_n \to
+\bigoplus X_n \to X \to
+\bigoplus X_n[1]
+$$
+we obtain an exact sequence
+$$
+\prod H(X_n) \leftarrow
+\prod H(X_n) \leftarrow
+H(X)
+$$
+Thus there exists an element $a \in H(X)$ mapping to $(a_n)$
in $\prod H(X_n)$. Hence we obtain a natural transformation
+$\Hom_\mathcal{D}(- , X) \to H$ such that
+$$
+\Hom_\mathcal{D}(-, X_1) \to
+\Hom_\mathcal{D}(-, X_2) \to
+\Hom_\mathcal{D}(-, X_3) \to \ldots \to
+\Hom_\mathcal{D}(-, X) \to H
+$$
+commutes. For each $i$ the map $\Hom_\mathcal{D}(E_i, X) \to H(E_i)$
+is surjective, by construction of $X_1$. On the other hand, by construction
+of $X_n \to X_{n + 1}$ the kernel of $\Hom_\mathcal{D}(E_i, X_n) \to H(E_i)$
+is killed by the map
+$\Hom_\mathcal{D}(E_i, X_n) \to \Hom_\mathcal{D}(E_i, X_{n + 1})$.
+Since
+$$
+\Hom_\mathcal{D}(E_i, X) = \colim \Hom_\mathcal{D}(E_i, X_n)
+$$
+by Lemma \ref{lemma-commutes-with-countable-sums}
+we see that $\Hom_\mathcal{D}(E_i, X) \to H(E_i)$ is injective.
+
+\medskip\noindent
+To finish the proof, consider the subcategory
+$$
+\mathcal{D}' =
+\{Y \in \Ob(\mathcal{D}) \mid \Hom_\mathcal{D}(Y[n], X) \to H(Y[n])
+\text{ is an isomorphism for all }n\}
+$$
+As $\Hom_\mathcal{D}(-, X) \to H$ is a transformation between
+cohomological functors,
+the subcategory $\mathcal{D}'$ is a strictly full, saturated, triangulated
+subcategory of $\mathcal{D}$ (details omitted; see proof of
+Lemma \ref{lemma-homological-functor-kernel}). Moreover, as both
+$H$ and $\Hom_\mathcal{D}(-, X)$ transform direct sums into products,
+we see that direct sums of objects of $\mathcal{D}'$ are in $\mathcal{D}'$.
+Thus derived colimits of objects of $\mathcal{D}'$ are in $\mathcal{D}'$.
+Since $\{E_i\}$ is preserved under shifts, we see that $E_i$
+is an object of $\mathcal{D}'$ for all $i$. It follows from
+Lemma \ref{lemma-write-as-colimit} that $\mathcal{D}' = \mathcal{D}$
+and the proof is complete.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-brown}
+\begin{reference}
+\cite[Theorem 4.1]{Neeman-Grothendieck}.
+\end{reference}
+Let $\mathcal{D}$ be a triangulated category with direct sums which is
+compactly generated. Let $F : \mathcal{D} \to \mathcal{D}'$ be an
+exact functor of triangulated categories which transforms direct sums
+into direct sums. Then $F$ has an exact right adjoint.
+\end{proposition}
+
+\begin{proof}
+For an object $Y$ of $\mathcal{D}'$ consider the contravariant functor
+$$
+\mathcal{D} \to \textit{Ab},\quad W \mapsto \Hom_{\mathcal{D}'}(F(W), Y)
+$$
This functor is cohomological because $F$ is exact, and it transforms
direct sums into products because $F$ transforms direct sums into
direct sums. Thus by
+Lemma \ref{lemma-brown} we find an object $X$ of $\mathcal{D}$ such that
+$\Hom_\mathcal{D}(W, X) = \Hom_{\mathcal{D}'}(F(W), Y)$.
+The existence of the adjoint follows from
+Categories, Lemma \ref{categories-lemma-adjoint-exists}.
+Exactness follows from Lemma \ref{lemma-adjoint-is-exact}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Brown representability, bis}
+\label{section-brown-bis}
+
+\noindent
+In this section we explain a version of Brown representability
+for triangulated categories which have a suitable set of generators;
+for other versions, please see \cite{Franke}, \cite{Neeman}, and \cite{Krause}.
+
+\begin{lemma}
+\label{lemma-brown-bis}
+\begin{reference}
+Weak version of \cite[Theorem A]{Krause}
+\end{reference}
+Let $\mathcal{D}$ be a triangulated category with direct sums.
+Suppose given a set $\mathcal{E}$ of objects of $\mathcal{D}$ such that
+\begin{enumerate}
+\item if $X$ is a nonzero object of $\mathcal{D}$, then there exists
+an $E \in \mathcal{E}$ and a nonzero map $E \to X$, and
+\item given objects $X_n$, $n \in \mathbf{N}$ of $\mathcal{D}$,
+$E \in \mathcal{E}$, and $\alpha : E \to \bigoplus X_n$,
+there exist $E_n \in \mathcal{E}$ and $\beta_n : E_n \to X_n$ and a morphism
+$\gamma : E \to \bigoplus E_n$ such that
+$\alpha = (\bigoplus \beta_n) \circ \gamma$.
+\end{enumerate}
+Let $H : \mathcal{D} \to \textit{Ab}$ be a contravariant cohomological functor
+which transforms direct sums into products. Then $H$ is representable.
+\end{lemma}
+
+\begin{proof}
+This proof is very similar to the proof of Lemma \ref{lemma-brown}.
+We may replace $\mathcal{E}$ by
+$\bigcup_{i \in \mathbf{Z}} \mathcal{E}[i]$ and assume
+that $\mathcal{E}$ is preserved by shifts.
+Consider pairs $(E, a)$ where $E \in \mathcal{E}$ and $a \in H(E)$ and set
+$$
+X_1 = \bigoplus\nolimits_{(E, a)} E
+$$
+Since $H(X_1) = \prod_{(E, a)} H(E)$ we see that $(a)_{(E, a)}$
+defines an element $a_1 \in H(X_1)$. Set $H_1 = \Hom_\mathcal{D}(- , X_1)$.
+By Yoneda's lemma (Categories, Lemma \ref{categories-lemma-yoneda})
+the element $a_1$ defines a natural transformation $H_1 \to H$.
+
+\medskip\noindent
+We are going to inductively construct $X_n$ and transformations
+$a_n : H_n \to H$ where $H_n = \Hom_\mathcal{D}(-, X_n)$.
+Namely, we apply the procedure
+above to the functor $\Ker(H_n \to H)$ to get an object
+$$
+K_{n + 1} = \bigoplus\nolimits_{(E, k),\ k \in \Ker(H_n(E) \to H(E))} E
+$$
+and a transformation $\Hom_\mathcal{D}(-, K_{n + 1}) \to \Ker(H_n \to H)$.
+By Yoneda's lemma the composition $\Hom_\mathcal{D}(-, K_{n + 1}) \to H_n$
+gives a morphism $K_{n + 1} \to X_n$. We choose
+a distinguished triangle
+$$
+K_{n + 1} \to X_n \to X_{n + 1} \to K_{n + 1}[1]
+$$
+in $\mathcal{D}$. The element $a_n \in H(X_n)$ maps to zero
+in $H(K_{n + 1})$ by construction. Since $H$ is cohomological
+we can lift it to an element $a_{n + 1} \in H(X_{n + 1})$.
+
+\medskip\noindent
+Set $X = \text{hocolim} X_n$. Applying $H$
+to the defining distinguished triangle
+$$
+\bigoplus X_n \to
+\bigoplus X_n \to X \to
+\bigoplus X_n[1]
+$$
+we obtain an exact sequence
+$$
+\prod H(X_n) \leftarrow
+\prod H(X_n) \leftarrow
+H(X)
+$$
+Thus there exists an element $a \in H(X)$ mapping to $(a_n)$
in $\prod H(X_n)$. Hence we obtain a natural transformation
+$\Hom_\mathcal{D}(- , X) \to H$ such that
+$$
+\Hom_\mathcal{D}(-, X_1) \to
+\Hom_\mathcal{D}(-, X_2) \to
+\Hom_\mathcal{D}(-, X_3) \to \ldots \to
+\Hom_\mathcal{D}(-, X) \to H
+$$
+commutes. We claim that $\Hom_\mathcal{D}(-, X) \to H(-)$ is an isomorphism.
+
+\medskip\noindent
+Let $E \in \mathcal{E}$. Let us show that
+$$
+\Hom_\mathcal{D}(E, \bigoplus X_n) \to \Hom_\mathcal{D}(E, \bigoplus X_n)
+$$
+is injective. Namely, let $\alpha : E \to \bigoplus X_n$. Then
+by assumption (2) we obtain a factorization
+$\alpha = (\bigoplus \beta_n) \circ \gamma$.
+Since $E_n \to X_n \to X_{n + 1}$ is zero by construction, we see that
+the composition $\bigoplus E_n \to \bigoplus X_n \to \bigoplus X_n$
+is equal to $\bigoplus \beta_n$. Hence also the composition
+$E \to \bigoplus X_n \to \bigoplus X_n$ is equal to $\alpha$.
+This proves the stated injectivity and hence also
+$$
+\Hom_\mathcal{D}(E, \bigoplus X_n[1]) \to \Hom_\mathcal{D}(E, \bigoplus X_n[1])
+$$
+is injective. It follows that we have an exact sequence
+$$
+\Hom_\mathcal{D}(E, \bigoplus X_n) \to
+\Hom_\mathcal{D}(E, \bigoplus X_n) \to
+\Hom_\mathcal{D}(E, X) \to 0
+$$
+for all $E \in \mathcal{E}$.
+
+\medskip\noindent
+Let $E \in \mathcal{E}$ and let $f : E \to X$ be a morphism.
+By the previous paragraph, we may choose $\alpha : E \to \bigoplus X_n$
+lifting $f$. Then by assumption (2) we obtain a factorization
+$\alpha = (\bigoplus \beta_n) \circ \gamma$.
+For each $n$ there is a morphism $\delta_n : E_n \to X_1$ such that
+$\delta_n$ and $\beta_n$ map to the same element of $H(E_n)$. Then
+the compositions
+$$
+E_n \to X_n \to X_{n + 1}
+\quad\text{and}\quad
+E_n \to X_1 \to X_{n + 1}
+$$
+are equal by construction of $X_n \to X_{n + 1}$. It follows that
+$$
+\bigoplus E_n \to \bigoplus X_n \to X
+\quad\text{and}\quad
+\bigoplus E_n \to \bigoplus X_1 \to X
+$$
+are the same too. Observing that $\bigoplus X_1 \to X$ factors
+as $\bigoplus X_1 \to X_1 \to X$, we conclude that
+$$
+\Hom_\mathcal{D}(E, X_1) \to \Hom_\mathcal{D}(E, X)
+$$
+is surjective. Since by construction the map
+$\Hom_\mathcal{D}(E, X_1) \to H(E)$ is surjective and by
+construction the kernel of this map is annihilated by
+$\Hom_\mathcal{D}(E, X_1) \to \Hom_\mathcal{D}(E, X)$
+we conclude that $\Hom_\mathcal{D}(E, X) \to H(E)$ is
+a bijection for all $E \in \mathcal{E}$.
+
+\medskip\noindent
+To finish the proof, consider the subcategory
+$$
+\mathcal{D}' =
+\{Y \in \Ob(\mathcal{D}) \mid \Hom_\mathcal{D}(Y[n], X) \to H(Y[n])
+\text{ is an isomorphism for all }n\}
+$$
+As $\Hom_\mathcal{D}(-, X) \to H$ is a transformation between
+cohomological functors,
+the subcategory $\mathcal{D}'$ is a strictly full, saturated, triangulated
+subcategory of $\mathcal{D}$ (details omitted; see proof of
+Lemma \ref{lemma-homological-functor-kernel}). Moreover, as both
+$H$ and $\Hom_\mathcal{D}(-, X)$ transform direct sums into products,
+we see that direct sums of objects of $\mathcal{D}'$ are in $\mathcal{D}'$.
+Thus derived colimits of objects of $\mathcal{D}'$ are in $\mathcal{D}'$.
+Since $\mathcal{E}$ is preserved by shifts, we conclude that
+$\mathcal{E} \subset \Ob(\mathcal{D}')$ by the result of the
+previous paragraph. To finish the proof we have to show that
+$\mathcal{D}' = \mathcal{D}$.
+
+\medskip\noindent
+Let $Y$ be an object of $\mathcal{D}$ and set $H(-) = \Hom_\mathcal{D}(-, Y)$.
+Then $H$ is a cohomological functor which transforms direct sums into products.
+By the construction in the first part of the proof we obtain a morphism
$X = \text{hocolim} X_n \to Y$ such that
+$\Hom_\mathcal{D}(E, X) \to \Hom_\mathcal{D}(E, Y)$ is bijective for all
+$E \in \mathcal{E}$. Then assumption (1) tells us that $X \to Y$ is an
+isomorphism! On the other hand, by construction $X_1, X_2, \ldots$
+are in $\mathcal{D}'$ and so is $X$. Thus $Y \in \mathcal{D}'$ and
+the proof is complete.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-brown-bis}
+Let $\mathcal{D}$ be a triangulated category with direct sums.
+Assume there exists a set $\mathcal{E}$ of objects of $\mathcal{D}$
+satisfying conditions (1) and (2) of Lemma \ref{lemma-brown-bis}.
+Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor of triangulated
+categories which transforms direct sums into direct sums.
+Then $F$ has an exact right adjoint.
+\end{proposition}
+
+\begin{proof}
+For an object $Y$ of $\mathcal{D}'$ consider the contravariant functor
+$$
+\mathcal{D} \to \textit{Ab},\quad W \mapsto \Hom_{\mathcal{D}'}(F(W), Y)
+$$
This functor is cohomological because $F$ is exact, and it transforms
direct sums into products because $F$ transforms direct sums into
direct sums. Thus by
+Lemma \ref{lemma-brown-bis} we find an object $X$ of $\mathcal{D}$ such that
+$\Hom_\mathcal{D}(W, X) = \Hom_{\mathcal{D}'}(F(W), Y)$.
+The existence of the adjoint follows from
+Categories, Lemma \ref{categories-lemma-adjoint-exists}.
+Exactness follows from Lemma \ref{lemma-adjoint-is-exact}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Admissible subcategories}
+\label{section-admissible}
+
+\noindent
+A reference for this section is \cite[Section 1]{Bondal-Kapranov}.
+
+\begin{definition}
+\label{definition-orthogonal}
+Let $\mathcal{D}$ be an additive category. Let $\mathcal{A} \subset \mathcal{D}$
+be a full subcategory. The {\it right orthogonal} $\mathcal{A}^\perp$ of
+$\mathcal{A}$ is the full subcategory consisting of the objects $X$ of
+$\mathcal{D}$ such that $\Hom(A, X) = 0$ for all $A \in \Ob(\mathcal{A})$.
+The {\it left orthogonal} ${}^\perp\mathcal{A}$ of
+$\mathcal{A}$ is the full subcategory consisting of the objects $X$ of
+$\mathcal{D}$ such that $\Hom(X, A) = 0$ for all $A \in \Ob(\mathcal{A})$.
+\end{definition}
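
\noindent
Directly from the definition one obtains, for any full subcategory
$\mathcal{A} \subset \mathcal{D}$, the tautological inclusions
$$
\mathcal{A} \subset {}^\perp(\mathcal{A}^\perp)
\quad\text{and}\quad
\mathcal{A} \subset ({}^\perp\mathcal{A})^\perp
$$
In general these inclusions are strict. Lemmas \ref{lemma-right-adjoint}
and \ref{lemma-left-adjoint} below describe situations in which
equality holds.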
+
+\begin{lemma}
+\label{lemma-pre-prepare-adjoint}
+Let $\mathcal{D}$ be a triangulated category.
+Let $\mathcal{A} \subset \mathcal{D}$
+be a full subcategory invariant under all shifts.
+Consider a distinguished triangle
+$$
+X \to Y \to Z \to X[1]
+$$
+of $\mathcal{D}$. The following are equivalent
+\begin{enumerate}
+\item $Z$ is in $\mathcal{A}^\perp$, and
+\item $\Hom(A, X) = \Hom(A, Y)$ for all $A \in \Ob(\mathcal{A})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-representable-homological} the functor
+$\Hom(A, -)$ is homological and hence we get a long exact sequence
+as in (\ref{equation-long-exact-cohomology-sequence}).
+Assume (1) and let $A \in \Ob(\mathcal{A})$.
+Then we consider the exact sequence
+$$
+\Hom(A[1], Z) \to \Hom(A, X) \to \Hom(A, Y) \to \Hom(A, Z)
+$$
+Since $A[1] \in \Ob(\mathcal{A})$
+we see that the first and last groups are zero.
+Thus we get (2). Assume (2) and let $A \in \Ob(\mathcal{A})$.
+Then we consider the exact sequence
+$$
+\Hom(A, X) \to \Hom(A, Y) \to \Hom(A, Z) \to \Hom(A[-1], X) \to \Hom(A[-1], Y)
+$$
+and we conclude that $\Hom(A, Z) = 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pre-prepare-adjoint-dual}
+Let $\mathcal{D}$ be a triangulated category.
+Let $\mathcal{B} \subset \mathcal{D}$
+be a full subcategory invariant under all shifts.
+Consider a distinguished triangle
+$$
+X \to Y \to Z \to X[1]
+$$
+of $\mathcal{D}$. The following are equivalent
+\begin{enumerate}
+\item $X$ is in ${}^\perp\mathcal{B}$, and
+\item $\Hom(Y, B) = \Hom(Z, B)$ for all $B \in \Ob(\mathcal{B})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Dual to Lemma \ref{lemma-pre-prepare-adjoint}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-orthogonal-triangulated}
+Let $\mathcal{D}$ be a triangulated category. Let
+$\mathcal{A} \subset \mathcal{D}$ be a full subcategory invariant
+under all shifts. Then both the right orthogonal $\mathcal{A}^\perp$ and
+the left orthogonal ${}^\perp\mathcal{A}$ of $\mathcal{A}$
+are strictly full, saturated\footnote{Definition \ref{definition-saturated}.},
triangulated subcategories of $\mathcal{D}$.
+\end{lemma}
+
+\begin{proof}
+It is immediate from the definitions that the orthogonals are preserved
+under taking shifts, direct sums, and direct summands.
+Consider a distinguished triangle
+$$
+X \to Y \to Z \to X[1]
+$$
+of $\mathcal{D}$. By Lemma \ref{lemma-triangulated-subcategory} it
+suffices to show that if $X$ and $Y$ are in $\mathcal{A}^\perp$, then
+$Z$ is in $\mathcal{A}^\perp$. This is immediate from
+Lemma \ref{lemma-pre-prepare-adjoint}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-prepare-adjoint}
+Let $\mathcal{D}$ be a triangulated category. Let $\mathcal{A}$
+be a full triangulated subcategory of $\mathcal{D}$. For an object $X$
+of $\mathcal{D}$ consider the property $P(X)$: there exists a
+distinguished triangle $A \to X \to B \to A[1]$
+in $\mathcal{D}$ with $A$ in $\mathcal{A}$ and $B$ in $\mathcal{A}^\perp$.
+\begin{enumerate}
+\item If $X_1 \to X_2 \to X_3 \to X_1[1]$ is a distinguished triangle
+and $P$ holds for two out of three, then it holds for the third.
+\item If $P$ holds for $X_1$ and $X_2$, then it holds for $X_1 \oplus X_2$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $X_1 \to X_2 \to X_3 \to X_1[1]$ be a distinguished triangle
+and assume $P$ holds for $X_1$ and $X_2$. Choose distinguished triangles
+$$
+A_1 \to X_1 \to B_1 \to A_1[1]
+\quad\text{and}\quad
+A_2 \to X_2 \to B_2 \to A_2[1]
+$$
+as in condition $P$. Since
+$\Hom(A_1, A_2) = \Hom(A_1, X_2)$ by Lemma \ref{lemma-pre-prepare-adjoint}
+there is a unique morphism $A_1 \to A_2$ such that the diagram
+$$
+\xymatrix{
+A_1 \ar[d] \ar[r] & X_1 \ar[d] \\
+A_2 \ar[r] & X_2
+}
+$$
+commutes. Choose an extension of this to a diagram
+$$
+\xymatrix{
+A_1 \ar[r] \ar[d] & X_1 \ar[r] \ar[d] & Q_1 \ar[r] \ar[d] & A_1[1] \ar[d] \\
+A_2 \ar[r] \ar[d] & X_2 \ar[r] \ar[d] & Q_2 \ar[r] \ar[d] & A_2[1] \ar[d] \\
+A_3 \ar[r] \ar[d] & X_3 \ar[r] \ar[d] & Q_3 \ar[r] \ar[d] & A_3[1] \ar[d] \\
+A_1[1] \ar[r] & X_1[1] \ar[r] & Q_1[1] \ar[r] & A_1[2]
+}
+$$
+as in Proposition \ref{proposition-9}. By TR3 we see that
+$Q_1 \cong B_1$ and $Q_2 \cong B_2$ and hence
+$Q_1, Q_2 \in \Ob(\mathcal{A}^\perp)$.
+As $Q_1 \to Q_2 \to Q_3 \to Q_1[1]$
+is a distinguished triangle we see that $Q_3 \in \Ob(\mathcal{A}^\perp)$
+by Lemma \ref{lemma-orthogonal-triangulated}.
+Since $\mathcal{A}$ is a full triangulated subcategory, we see that
+$A_3$ is isomorphic to an object of $\mathcal{A}$.
+Thus $X_3$ satisfies $P$. The other cases of (1) follow from this
+case by translation. Part (2) is a special case of (1)
+via Lemma \ref{lemma-split}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-prepare-adjoint-dual}
+Let $\mathcal{D}$ be a triangulated category. Let $\mathcal{B}$
+be a full triangulated subcategory of $\mathcal{D}$. For an object $X$
+of $\mathcal{D}$ consider the property $P(X)$: there exists a
+distinguished triangle $A \to X \to B \to A[1]$
+in $\mathcal{D}$ with $B$ in $\mathcal{B}$ and $A$ in ${}^\perp\mathcal{B}$.
+\begin{enumerate}
+\item If $X_1 \to X_2 \to X_3 \to X_1[1]$ is a distinguished triangle
+and $P$ holds for two out of three, then it holds for the third.
+\item If $P$ holds for $X_1$ and $X_2$, then it holds for $X_1 \oplus X_2$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Dual to Lemma \ref{lemma-prepare-adjoint}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-adjoint}
+Let $\mathcal{D}$ be a triangulated category. Let
+$\mathcal{A} \subset \mathcal{D}$ be a full triangulated subcategory.
+The following are equivalent
+\begin{enumerate}
+\item the inclusion functor $\mathcal{A} \to \mathcal{D}$
+has a right adjoint, and
+\item for every $X$ in $\mathcal{D}$ there exists a distinguished
+triangle
+$$
+A \to X \to B \to A[1]
+$$
+in $\mathcal{D}$ with $A \in \Ob(\mathcal{A})$ and
+$B \in \Ob(\mathcal{A}^\perp)$.
+\end{enumerate}
+If this holds, then $\mathcal{A}$ is saturated
+(Definition \ref{definition-saturated}) and if $\mathcal{A}$
+is strictly full in $\mathcal{D}$, then
+$\mathcal{A} = {}^\perp(\mathcal{A}^\perp)$.
+\end{lemma}
+
+\begin{proof}
+Assume (1) and denote $v : \mathcal{D} \to \mathcal{A}$ the right adjoint.
+Let $X \in \Ob(\mathcal{D})$. Set $A = v(X)$. We may extend the
+adjunction mapping $A \to X$ to a distinguished triangle
+$A \to X \to B \to A[1]$. Since
+$$
+\Hom_\mathcal{A}(A', A) =
+\Hom_\mathcal{A}(A', v(X)) =
+\Hom_\mathcal{D}(A', X)
+$$
+for $A' \in \Ob(\mathcal{A})$, we conclude that $B \in \Ob(\mathcal{A}^\perp)$
+by Lemma \ref{lemma-pre-prepare-adjoint}.
+
+\medskip\noindent
Assume (2). We will construct the adjoint $v$ explicitly.
+Let $X \in \Ob(\mathcal{D})$. Choose $A \to X \to B \to A[1]$ as in (2).
+Set $v(X) = A$. Let $f : X \to Y$ be a morphism in $\mathcal{D}$.
+Choose $A' \to Y \to B' \to A'[1]$ as in (2). Since
+$\Hom(A, A') = \Hom(A, Y)$ by Lemma \ref{lemma-pre-prepare-adjoint}
+there is a unique morphism $f' : A \to A'$ such that the diagram
+$$
+\xymatrix{
+A \ar[d]_{f'} \ar[r] & X \ar[d]^f \\
+A' \ar[r] & Y
+}
+$$
+commutes. Hence we can set $v(f) = f'$ to get a functor.
+To see that $v$ is adjoint to the inclusion morphism use
+Lemma \ref{lemma-pre-prepare-adjoint} again.
+
+\medskip\noindent
+Proof of the final statement. In order to prove that $\mathcal{A}$
+is saturated we may replace $\mathcal{A}$ by the strictly full
+subcategory having the same isomorphism classes as $\mathcal{A}$;
+details omitted. Assume $\mathcal{A}$ is strictly full. If we show that
+$\mathcal{A} = {}^\perp(\mathcal{A}^\perp)$, then
+$\mathcal{A}$ will be saturated by Lemma \ref{lemma-orthogonal-triangulated}.
Since the inclusion $\mathcal{A} \subset {}^\perp(\mathcal{A}^\perp)$
+is clear it suffices to prove the other inclusion.
+Let $X$ be an object of ${}^\perp(\mathcal{A}^\perp)$.
+Choose a distinguished triangle $A \to X \to B \to A[1]$
+as in (2). As $\Hom(X, B) = 0$ by assumption we see that
+$A \cong X \oplus B[-1]$ by Lemma \ref{lemma-split}.
+Since $\Hom(A, B[-1]) = 0$ as $B \in \mathcal{A}^\perp$
+this implies $B[-1] = 0$ and $A \cong X$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-left-adjoint}
+Let $\mathcal{D}$ be a triangulated category. Let
+$\mathcal{B} \subset \mathcal{D}$ be a full triangulated subcategory.
+The following are equivalent
+\begin{enumerate}
+\item the inclusion functor $\mathcal{B} \to \mathcal{D}$
+has a left adjoint, and
+\item for every $X$ in $\mathcal{D}$ there exists a distinguished
+triangle
+$$
+A \to X \to B \to A[1]
+$$
+in $\mathcal{D}$ with $B \in \Ob(\mathcal{B})$ and
+$A \in \Ob({}^\perp\mathcal{B})$.
+\end{enumerate}
+If this holds, then $\mathcal{B}$ is saturated
+(Definition \ref{definition-saturated}) and if $\mathcal{B}$
+is strictly full in $\mathcal{D}$, then
+$\mathcal{B} = ({}^\perp\mathcal{B})^\perp$.
+\end{lemma}
+
+\begin{proof}
+Dual to Lemma \ref{lemma-right-adjoint}.
+\end{proof}
+
+\begin{definition}
+\label{definition-admissible}
+Let $\mathcal{D}$ be a triangulated category. A {\it right admissible}
+subcategory of $\mathcal{D}$ is a strictly full triangulated subcategory
+satisfying the equivalent conditions of Lemma \ref{lemma-right-adjoint}.
+A {\it left admissible}
+subcategory of $\mathcal{D}$ is a strictly full triangulated subcategory
+satisfying the equivalent conditions of Lemma \ref{lemma-left-adjoint}.
+A {\it two-sided admissible} subcategory is one which is both
+right and left admissible.
+\end{definition}
+
+\noindent
+Let $\mathcal{A}$ be a right admissible subcategory of the triangulated
+category $\mathcal{D}$. Then we observe that for $X \in \mathcal{D}$
+the distinguished triangle
+$$
+A \to X \to B \to A[1]
+$$
+with $A \in \mathcal{A}$ and $B \in \mathcal{A}^\perp$
+is canonical in the following sense: for any other distinguished triangle
+$A' \to X \to B' \to A'[1]$ with $A' \in \mathcal{A}$ and
+$B' \in \mathcal{A}^\perp$ there is an isomorphism
+$(\alpha, \text{id}_X, \beta) : (A, X, B) \to (A', X, B')$
+of triangles. The following proposition summarizes what was said above.
+
+\begin{proposition}
+\label{proposition-summarize-admissible}
+Let $\mathcal{D}$ be a triangulated category. Let
+$\mathcal{A} \subset \mathcal{D}$ and $\mathcal{B} \subset \mathcal{D}$
+be subcategories. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{A}$ is right admissible and $\mathcal{B} = \mathcal{A}^\perp$,
+\item $\mathcal{B}$ is left admissible and $\mathcal{A} = {}^\perp\mathcal{B}$,
+\item $\Hom(A, B) = 0$ for all $A \in \mathcal{A}$ and $B \in \mathcal{B}$
+and for every $X$ in $\mathcal{D}$ there exists a distinguished triangle
+$A \to X \to B \to A[1]$ in $\mathcal{D}$ with $A \in \mathcal{A}$ and
+$B \in \mathcal{B}$.
+\end{enumerate}
+If this is true, then
+$\mathcal{A} \to \mathcal{D}/\mathcal{B}$ and
+$\mathcal{B} \to \mathcal{D}/\mathcal{A}$ are equivalences
+of triangulated categories,
+the right adjoint to the inclusion functor $\mathcal{A} \to \mathcal{D}$
+is $\mathcal{D} \to \mathcal{D}/\mathcal{B} \to \mathcal{A}$, and
+the left adjoint to the inclusion functor $\mathcal{B} \to \mathcal{D}$
+is $\mathcal{D} \to \mathcal{D}/\mathcal{A} \to \mathcal{B}$.
+\end{proposition}
+
+\begin{proof}
The equivalence between (1), (2), and (3) follows in a straightforward manner
+from Lemmas \ref{lemma-right-adjoint} and \ref{lemma-left-adjoint} (small
+detail omitted). Denote $v : \mathcal{D} \to \mathcal{A}$ the right
+adjoint of the inclusion functor $i : \mathcal{A} \to \mathcal{D}$.
+It is immediate that $\Ker(v) = \mathcal{A}^\perp = \mathcal{B}$.
+Thus $v$ factors over a functor
+$\overline{v} : \mathcal{D}/\mathcal{B} \to \mathcal{A}$
+by the universal property of the quotient. Since
+$v \circ i = \text{id}_\mathcal{A}$ by
+Categories, Lemma \ref{categories-lemma-adjoint-fully-faithful}
+we see that $\overline{v}$ is a left quasi-inverse to
+$\overline{i} : \mathcal{A} \to \mathcal{D}/\mathcal{B}$.
We claim that the composition $\overline{i} \circ \overline{v}$
is also isomorphic to $\text{id}_{\mathcal{D}/\mathcal{B}}$.
+Namely, suppose we have $X$ fitting into a distinguished triangle
+$A \to X \to B \to A[1]$ as in (3). Then $v(X) = A$ as was seen
+in the proof of Lemma \ref{lemma-right-adjoint}.
+Viewing $X$ as an object of $\mathcal{D}/\mathcal{B}$
+we have $\overline{i}(\overline{v}(X)) = A$ and there
+is a functorial isomorphism $\overline{i}(\overline{v}(X)) = A \to X$
+in $\mathcal{D}/\mathcal{B}$. Thus we find that indeed
+$\overline{v} : \mathcal{D}/\mathcal{B} \to \mathcal{A}$
is an equivalence. The proof that
$\mathcal{B} \to \mathcal{D}/\mathcal{A}$ is an equivalence and that
the left adjoint to the inclusion functor $\mathcal{B} \to \mathcal{D}$
is $\mathcal{D} \to \mathcal{D}/\mathcal{A} \to \mathcal{B}$
is dual to what we just said.
+\end{proof}
+
+
+
+
+\section{Postnikov systems}
+\label{section-postnikov}
+
+\noindent
+A reference for this section is \cite{Orlov-K3}. Let $\mathcal{D}$
+be a triangulated category. Let
+$$
+X_n \to X_{n - 1} \to \ldots \to X_0
+$$
+be a complex in $\mathcal{D}$. In this section we consider the problem
+of constructing a ``totalization'' of this complex.
+
+\begin{definition}
+\label{definition-postnikov-system}
+Let $\mathcal{D}$ be a triangulated category. Let
+$$
+X_n \to X_{n - 1} \to \ldots \to X_0
+$$
+be a complex in $\mathcal{D}$. A {\it Postnikov system} is defined
+inductively as follows.
+\begin{enumerate}
+\item If $n = 0$, then it is an isomorphism $Y_0 \to X_0$.
+\item If $n = 1$, then it is a choice of a distinguished triangle
+$$
+Y_1 \to X_1 \to Y_0 \to Y_1[1]
+$$
+where $X_1 \to Y_0$ composed with $Y_0 \to X_0$ is the given morphism
+$X_1 \to X_0$.
+\item If $n > 1$, then it is a choice of a Postnikov system
+for $X_{n - 1} \to \ldots \to X_0$ and a choice of a distinguished
+triangle
+$$
+Y_n \to X_n \to Y_{n - 1} \to Y_n[1]
+$$
+where the morphism $X_n \to Y_{n - 1}$ composed with
+$Y_{n - 1} \to X_{n - 1}$ is the given morphism $X_n \to X_{n - 1}$.
+\end{enumerate}
+Given a morphism
+\begin{equation}
+\label{equation-map-complexes}
+\vcenter{
+\xymatrix{
+X_n \ar[r] \ar[d] &
+X_{n - 1} \ar[r] \ar[d] &
+\ldots \ar[r] &
+X_0 \ar[d] \\
+X'_n \ar[r] &
+X'_{n - 1} \ar[r] &
+\ldots \ar[r] &
+X'_0
+}
+}
+\end{equation}
+between complexes of the same length in $\mathcal{D}$
+there is an obvious notion of a {\it morphism of Postnikov systems}.
+\end{definition}
+
+\noindent
+Here is a key example.
+
+\begin{example}
+\label{example-key-postnikov}
+Let $\mathcal{A}$ be an abelian category. Let $\ldots \to A_2 \to A_1 \to A_0$
+be a chain complex in $\mathcal{A}$.
+Then we can consider the objects
+$$
+X_n = A_n
+\quad\text{and}\quad
+Y_n = (A_n \to A_{n - 1} \to \ldots \to A_0)[-n]
+$$
+of $D(\mathcal{A})$. With the evident canonical maps $Y_n \to X_n$ and
+$Y_0 \to Y_1[1] \to Y_2[2] \to \ldots$ the distinguished triangles
+$Y_n \to X_n \to Y_{n - 1} \to Y_n[1]$ define a Postnikov system as in
+Definition \ref{definition-postnikov-system} for
$\ldots \to X_2 \to X_1 \to X_0$. Here we are using the evident
extension of the notion of a Postnikov system to an infinite
complex in $D(\mathcal{A})$.
+Finally, if colimits over $\mathbf{N}$ exist and are exact in $\mathcal{A}$
+then
+$$
+\text{hocolim} Y_n[n] = (\ldots \to A_2 \to A_1 \to A_0 \to 0 \to \ldots)
+$$
+in $D(\mathcal{A})$. This follows immediately from
+Lemma \ref{lemma-colim-hocolim}.
+\end{example}
+
+\noindent
+Given a complex $X_n \to X_{n - 1} \to \ldots \to X_0$ and a Postnikov
+system as in Definition \ref{definition-postnikov-system}
+we can consider the maps
+$$
+Y_0 \to Y_1[1] \to \ldots \to Y_n[n]
+$$
+These maps fit together in certain distinguished triangles
+and fit with the given maps between the $X_i$. Here is a
+picture for $n = 3$:
+$$
+\xymatrix{
+Y_0 \ar[rr] & &
+Y_1[1] \ar[dl] \ar[rr] & &
+Y_2[2] \ar[dl] \ar[rr] & &
+Y_3[3] \ar[dl] \\
+& X_1[1] \ar[lu]_{+1} & &
+X_2[2] \ar[ll]_{+1} \ar[lu]_{+1} & &
+X_3[3] \ar[ll]_{+1} \ar[lu]_{+1}
+}
+$$
We encourage the reader to think of $Y_n[n]$ as an object built up
by successive extensions from
$X_0, X_1[1], \ldots, X_n[n]$; for example if the maps
+$X_i \to X_{i - 1}$ are zero, then we can take
+$Y_n[n] = \bigoplus_{i = 0, \ldots, n} X_i[i]$.
+Postnikov systems do not always exist.
+Here is a simple lemma for low $n$.
+
+\begin{lemma}
+\label{lemma-postnikov-system-small-cases}
+Let $\mathcal{D}$ be a triangulated category. Consider
+Postnikov systems for complexes of length $n$.
+\begin{enumerate}
+\item For $n = 0$ Postnikov systems always exist and
+any morphism (\ref{equation-map-complexes}) of complexes
+extends to a unique morphism of Postnikov systems.
+\item For $n = 1$ Postnikov systems always exist and
+any morphism (\ref{equation-map-complexes}) of complexes
+extends to a (nonunique) morphism of Postnikov systems.
+\item For $n = 2$ Postnikov systems always exist but
+morphisms (\ref{equation-map-complexes}) of complexes
+in general do not extend to morphisms of Postnikov systems.
+\item For $n > 2$ Postnikov systems do not always exist.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The case $n = 0$ is immediate as isomorphisms are invertible.
+The case $n = 1$ follows immediately from TR1 (existence of triangles)
+and TR3 (extending morphisms to triangles).
+For the case $n = 2$ we argue as follows.
+Set $Y_0 = X_0$. By the case $n = 1$ we can choose
+a Postnikov system
+$$
+Y_1 \to X_1 \to Y_0 \to Y_1[1]
+$$
+Since the composition $X_2 \to X_1 \to X_0$ is zero, we can factor
+$X_2 \to X_1$ (nonuniquely) as $X_2 \to Y_1 \to X_1$ by
+Lemma \ref{lemma-representable-homological}.
+Then we simply fit the morphism $X_2 \to Y_1$ into a distinguished
+triangle
+$$
+Y_2 \to X_2 \to Y_1 \to Y_2[1]
+$$
+to get the Postnikov system for $n = 2$.
+For $n > 2$ we cannot argue similarly, as we do not
+know whether the composition $X_n \to X_{n - 1} \to Y_{n - 2}$
+is zero in $\mathcal{D}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-maps-postnikov-systems-vanishing}
+Let $\mathcal{D}$ be a triangulated category. Given a map
+(\ref{equation-map-complexes}) consider the condition
+\begin{equation}
+\label{equation-P}
+\Hom(X_i[i - j - 1], X'_j) = 0 \text{ for }i > j + 1
+\end{equation}
+Then
+\begin{enumerate}
+\item If we have a Postnikov system for
+$X'_n \to X'_{n - 1} \to \ldots \to X'_0$ then
+property (\ref{equation-P}) implies that
+$$
+\Hom(X_i[i - j - 1], Y'_j) = 0 \text{ for }i > j + 1
+$$
+\item If we are given Postnikov systems for both complexes and
+we have (\ref{equation-P}), then the map extends to a (nonunique) map
+of Postnikov systems.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We first prove (1) by induction on $j$. For the base case $j = 0$
+there is nothing to prove as $Y'_0 \to X'_0$ is an isomorphism.
+Say the result holds for $j - 1$. We consider the distinguished triangle
+$$
+Y'_j \to X'_j \to Y'_{j - 1} \to Y'_j[1]
+$$
+The long exact sequence of Lemma \ref{lemma-representable-homological}
+gives an exact sequence
+$$
+\Hom(X_i[i - j - 1], Y'_{j - 1}[-1]) \to
+\Hom(X_i[i - j - 1], Y'_j) \to
+\Hom(X_i[i - j - 1], X'_j)
+$$
+From the induction hypothesis and (\ref{equation-P}) we conclude the outer
+groups are zero and we win.
+
+\medskip\noindent
+Proof of (2). For $n = 1$ the existence of morphisms has been
+established in Lemma \ref{lemma-postnikov-system-small-cases}.
+For $n > 1$ by induction, we may assume given the map of
+Postnikov systems of length $n - 1$. The problem is that we do
+not know whether the diagram
+$$
+\xymatrix{
+X_n \ar[r] \ar[d] & Y_{n - 1} \ar[d] \\
+X'_n \ar[r] & Y'_{n - 1}
+}
+$$
+is commutative. Denote by $\alpha : X_n \to Y'_{n - 1}$ the difference.
+Then we do know that the composition of $\alpha$ with
+$Y'_{n - 1} \to X'_{n - 1}$ is zero (because of what it means
+to be a map of Postnikov systems of length $n - 1$).
+By the distinguished triangle
+$Y'_{n - 1} \to X'_{n - 1} \to Y'_{n - 2} \to Y'_{n - 1}[1]$,
+this means that $\alpha$ is the composition of
+$Y'_{n - 2}[-1] \to Y'_{n - 1}$ with
+a map $\alpha' : X_n \to Y'_{n - 2}[-1]$. Then (\ref{equation-P}) guarantees
+$\alpha'$ is zero by part (1) of the lemma. Thus $\alpha$ is zero.
+To finish the proof of existence, note that since $\alpha$ is zero the
+square above commutes, so we can choose the dotted arrow fitting into the diagram
+$$
+\xymatrix{
+Y_{n - 1}[-1] \ar[d] \ar[r] &
+Y_n \ar[r] \ar@{..>}[d] &
+X_n \ar[r] \ar[d] &
+Y_{n - 1} \ar[d] \\
+Y'_{n - 1}[-1] \ar[r] &
+Y'_n \ar[r] &
+X'_n \ar[r] &
+Y'_{n - 1}
+}
+$$
+by TR3.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-uniqueness-maps-postnikov-systems}
+Let $\mathcal{D}$ be a triangulated category. Given a map
+(\ref{equation-map-complexes}) assume we are given
+Postnikov systems for both complexes. If
+\begin{enumerate}
+\item $\Hom(X_i[i], Y'_n[n]) = 0$ for $i = 1, \ldots, n$, or
+\item $\Hom(Y_n[n], X'_{n - i}[n - i]) = 0$ for $i = 1, \ldots, n$, or
+\item $\Hom(X_{j - i}[-i + 1], X'_j) = 0$ and
+$\Hom(X_j, X'_{j - i}[-i]) = 0$ for $j \geq i > 0$,
+\end{enumerate}
+then there exists at most one morphism between these Postnikov systems.
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Look at the following diagram
+$$
+\xymatrix{
+Y_0 \ar[r] \ar[d] &
+Y_1[1] \ar[r] \ar[ld] &
+Y_2[2] \ar[r] \ar[lld] &
+\ldots \ar[r] &
+Y_n[n] \ar[lllld] \\
+Y'_n[n]
+}
+$$
+The slanted arrows are the compositions of the morphisms
+$Y_i[i] \to Y_n[n]$ with the morphism $Y_n[n] \to Y'_n[n]$.
+The arrow $Y_0 \to Y'_n[n]$
+is determined as it is the composition $Y_0 = X_0 \to X'_0 = Y'_0 \to Y'_n[n]$.
+Since we have the distinguished triangle $Y_0 \to Y_1[1] \to X_1[1]$
+we see that $\Hom(X_1[1], Y'_n[n]) = 0$ guarantees that the second vertical
+arrow is unique. Since we have the distinguished triangle
+$Y_1[1] \to Y_2[2] \to X_2[2]$ we see that $\Hom(X_2[2], Y'_n[n]) = 0$
+guarantees that the third vertical arrow is unique. And so on.
+
+\medskip\noindent
+Proof of (2). The composition $Y_n[n] \to Y'_n[n] \to X_n[n]$ is
+the same as the composition $Y_n[n] \to X_n[n] \to X'_n[n]$ and hence
+is unique. Then using the distinguished triangle
+$Y'_{n - 1}[n - 1] \to Y'_n[n] \to X'_n[n]$ we see that it suffices
+to show $\Hom(Y_n[n], Y'_{n - 1}[n - 1]) = 0$. Using the distinguished
+triangles
+$$
+Y'_{n - i - 1}[n - i - 1] \to Y'_{n - i}[n - i] \to X'_{n - i}[n - i]
+$$
+we get this vanishing from our assumption. Small details omitted.
+
+\medskip\noindent
+Proof of (3). Looking at the proof of
+Lemma \ref{lemma-maps-postnikov-systems-vanishing}
+and arguing by induction on $n$ it suffices to show that the dotted arrow
+in the morphism of triangles
+$$
+\xymatrix{
+Y_{n - 1}[-1] \ar[d] \ar[r] &
+Y_n \ar[r] \ar@{..>}[d] &
+X_n \ar[r] \ar[d] &
+Y_{n - 1} \ar[d] \\
+Y'_{n - 1}[-1] \ar[r] &
+Y'_n \ar[r] &
+X'_n \ar[r] &
+Y'_{n - 1}
+}
+$$
+is unique. By Lemma \ref{lemma-uniqueness-third-arrow} part (5)
+it suffices to show that $\Hom(Y_{n - 1}, X'_n) = 0$ and
+$\Hom(X_n, Y'_{n - 1}[-1]) = 0$.
+To prove the first vanishing we use the distinguished triangles
+$Y_{n - i - 1}[-i] \to Y_{n - i}[-(i - 1)] \to X_{n - i}[-(i - 1)]$
+for $i > 0$ and induction on $i$ to see that the assumed
+vanishing of $\Hom(X_{n - i}[-i + 1], X'_n)$ is enough.
+For the second we similarly use the distinguished triangles
+$Y'_{n - i - 1}[-i - 1] \to Y'_{n - i}[-i] \to X'_{n - i}[-i]$
+to see that the assumed vanishing of
+$\Hom(X_n, X'_{n - i}[-i])$ is enough as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-existence-postnikov-system}
+Let $\mathcal{D}$ be a triangulated category.
+Let $X_n \to X_{n - 1} \to \ldots \to X_0$ be
+a complex in $\mathcal{D}$. If
+$$
+\Hom(X_i[i - j - 2], X_j) = 0 \text{ for }i > j + 2
+$$
+then there exists a Postnikov system. If we have
+$$
+\Hom(X_i[i - j - 1], X_j) = 0 \text{ for }i > j + 1
+$$
+then any two Postnikov systems are isomorphic.
+\end{lemma}
+
+\begin{proof}
+We argue by induction on $n$. The cases $n = 0, 1, 2$
+follow from Lemma \ref{lemma-postnikov-system-small-cases}.
+Assume $n > 2$.
+Suppose given a Postnikov system for the complex
+$X_{n - 1} \to X_{n - 2} \to \ldots \to X_0$.
+The only obstruction to extending this to a Postnikov system
+of length $n$ is that we have to find a morphism
+$X_n \to Y_{n - 1}$ such that the composition
+$X_n \to Y_{n - 1} \to X_{n - 1}$ is equal to the given map
+$X_n \to X_{n - 1}$. Considering the distinguished triangle
+$$
+Y_{n - 1} \to X_{n - 1} \to Y_{n - 2} \to Y_{n - 1}[1]
+$$
+and the associated long exact sequence coming from this
+and the functor $\Hom(X_n, -)$
+(see Lemma \ref{lemma-representable-homological})
+we find that it suffices to show that the composition
+$X_n \to X_{n - 1} \to Y_{n - 2}$ is zero.
+Since we know that $X_n \to X_{n - 1} \to X_{n - 2}$ is zero
+we can apply the distinguished triangle
+$$
+Y_{n - 2} \to X_{n - 2} \to Y_{n - 3} \to Y_{n - 2}[1]
+$$
+to see that it suffices if $\Hom(X_n, Y_{n - 3}[-1]) = 0$.
+Arguing exactly as in the proof of
+Lemma \ref{lemma-maps-postnikov-systems-vanishing} part (1)
+the reader easily sees this follows from the condition
+stated in the lemma.
+
+\medskip\noindent
+The statement on isomorphisms follows from the existence of a map
+between the Postnikov systems extending the identity on the complex
+proven in Lemma \ref{lemma-maps-postnikov-systems-vanishing} part (2)
+and Lemma \ref{lemma-third-isomorphism-triangle} to show all the maps are
+isomorphisms.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Essentially constant systems}
+\label{section-essentially-constant}
+
+\noindent
+Some preliminary lemmas on essentially constant systems in triangulated
+categories.
+
+\begin{lemma}
+\label{lemma-essentially-constant}
+Let $\mathcal{D}$ be a triangulated category. Let $(A_i)$ be an inverse system
+in $\mathcal{D}$. Then $(A_i)$ is essentially constant (see
+Categories, Definition
+\ref{categories-definition-essentially-constant-diagram})
+if and only if there exists an $i$ and for all $j \geq i$ a direct sum
+decomposition $A_j = A \oplus Z_j$ such that
+(a) the maps $A_{j'} \to A_j$ are compatible with the direct sum
+decompositions and identity on $A$, (b) for all $j \geq i$ there exists some
+$j' \geq j$ such that $Z_{j'} \to Z_j$ is zero.
+\end{lemma}
+
+\begin{proof}
+Assume $(A_i)$ is essentially constant with value $A$. Then $A = \lim A_i$
+and there exists an $i$ and a morphism $A_i \to A$ such that (1)
+the composition $A \to A_i \to A$ is the identity on $A$ and (2) for all
+$j \geq i$ there exists a $j' \geq j$ such that $A_{j'} \to A_j$ factors as
+$A_{j'} \to A_i \to A \to A_j$. From (1) we conclude that for $j \geq i$
+the maps $A \to A_j$ and $A_j \to A_i \to A$ compose to the identity on $A$.
+It follows that $A_j \to A$ has a kernel $Z_j$ and that
+the map $A \oplus Z_j \to A_j$ is an isomorphism, see
+Lemmas \ref{lemma-when-split} and \ref{lemma-split}.
+These direct sum decompositions clearly satisfy (a).
+From (2) we conclude that for all $j$ there is a $j' \geq j$ such that
+$Z_{j'} \to Z_j$ is zero, so (b) holds. Proof of the converse is omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-essentially-constant-2-out-of-3}
+Let $\mathcal{D}$ be a triangulated category. Let
+$$
+A_n \to B_n \to C_n \to A_n[1]
+$$
+be an inverse system of distinguished triangles in $\mathcal{D}$.
+If $(A_n)$ and $(C_n)$ are essentially constant, then
+$(B_n)$ is essentially constant and their values fit into
+a distinguished triangle $A \to B \to C \to A[1]$ such that for
+some $n \geq 1$ there is a map
+$$
+\xymatrix{
+A_n \ar[d] \ar[r] &
+B_n \ar[d] \ar[r] &
+C_n \ar[d] \ar[r] &
+A_n[1] \ar[d] \\
+A \ar[r] &
+B \ar[r] &
+C \ar[r] &
+A[1]
+}
+$$
+of distinguished triangles which induces an isomorphism
+$\lim_{n' \geq n} A_{n'} \to A$ and similarly for $B$ and $C$.
+\end{lemma}
+
+\begin{proof}
+After renumbering we may assume that $A_n = A \oplus A'_n$ and
+$C_n = C \oplus C'_n$ for inverse systems $(A'_n)$ and $(C'_n)$
+which are essentially zero, see Lemma \ref{lemma-essentially-constant}.
+In particular, the morphism
+$$
+C \oplus C'_n \to (A \oplus A'_n)[1]
+$$
+maps the summand $C$ into the summand $A[1]$ for all $n$ by a map
+$\delta : C \to A[1]$ which is independent of $n$. Choose a distinguished
+triangle
+$$
+A \to B \to C \xrightarrow{\delta} A[1]
+$$
+Next, choose a morphism of distinguished triangles
+$$
+(A_1 \to B_1 \to C_1 \to A_1[1]) \to
+(A \to B \to C \to A[1])
+$$
+which is possible by TR3. For any object $D$ of $\mathcal{D}$ this induces
+a commutative diagram
+$$
+\xymatrix{
+\ldots \ar[r] &
+\Hom_\mathcal{D}(C, D) \ar[r] \ar[d] &
+\Hom_\mathcal{D}(B, D) \ar[r] \ar[d] &
+\Hom_\mathcal{D}(A, D) \ar[r] \ar[d] &
+\ldots \\
+\ldots \ar[r] &
+\colim \Hom_\mathcal{D}(C_n, D) \ar[r] &
+\colim \Hom_\mathcal{D}(B_n, D) \ar[r] &
+\colim \Hom_\mathcal{D}(A_n, D) \ar[r] &
+\ldots
+}
+$$
+The left and right vertical arrows are isomorphisms and so are the ones
+to the left and right of those. Thus by the 5-lemma we conclude that
+the middle arrow is an isomorphism. It follows that
+$(B_n)$ is isomorphic to the constant inverse system with value $B$
+by the discussion in
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}.
+Since this is equivalent to $(B_n)$ being essentially constant
+with value $B$ by
+Categories, Remark \ref{categories-remark-pro-category}
+the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-essentially-constant-cohomology}
+Let $\mathcal{A}$ be an abelian category. Let $A_n$ be an inverse
+system of objects of $D(\mathcal{A})$. Assume
+\begin{enumerate}
+\item there exist integers $a \leq b$ such that $H^i(A_n) = 0$
+for $i \not \in [a, b]$, and
+\item the inverse systems $H^i(A_n)$ of $\mathcal{A}$ are essentially constant
+for all $i \in \mathbf{Z}$.
+\end{enumerate}
+Then $A_n$ is an essentially constant system of objects of $D(\mathcal{A})$
+whose value $A$ has the property that $H^i(A)$ is the value of the constant
+system $H^i(A_n)$ for each $i \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+By Remark \ref{remark-truncation-distinguished-triangle} we obtain
+an inverse system of distinguished triangles
+$$
+\tau_{\leq a}A_n \to A_n \to \tau_{\geq a + 1}A_n \to (\tau_{\leq a}A_n)[1]
+$$
+Of course we have $\tau_{\leq a}A_n = H^a(A_n)[-a]$ in $D(\mathcal{A})$.
+Thus by assumption these form an essentially constant system.
+By induction on $b - a$ we find that the inverse system
+$\tau_{\geq a + 1}A_n$ is essentially constant, say with value $A'$.
+By Lemma \ref{lemma-essentially-constant-2-out-of-3} we find that
+$A_n$ is an essentially constant system. We omit the proof of
+the statement on cohomologies (hint: use the final part of
+Lemma \ref{lemma-essentially-constant-2-out-of-3}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pro-isomorphism}
+Let $\mathcal{D}$ be a triangulated category. Let
+$$
+A_n \to B_n \to C_n \to A_n[1]
+$$
+be an inverse system of distinguished triangles. If the system $C_n$
+is pro-zero (essentially constant with value $0$), then the maps
+$A_n \to B_n$ determine a pro-isomorphism between the pro-object $(A_n)$
+and the pro-object $(B_n)$.
+\end{lemma}
+
+\begin{proof}
+For any object $X$ of $\mathcal{D}$ consider the exact sequence
+$$
+\colim \Hom(C_n, X) \to
+\colim \Hom(B_n, X) \to
+\colim \Hom(A_n, X) \to
+\colim \Hom(C_n[-1], X) \to
+$$
+Exactness follows from Lemma \ref{lemma-representable-homological}
+combined with
+Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}.
+By assumption the first and last term are zero. Hence the map
+$\colim \Hom(B_n, X) \to \colim \Hom(A_n, X)$ is an isomorphism
+for all $X$. The lemma follows from this and
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pro-isomorphism-bis}
+Let $\mathcal{A}$ be an abelian category. Let
+$$
+A_n \to B_n
+$$
+be an inverse system of maps of $D(\mathcal{A})$. Assume
+\begin{enumerate}
+\item there exist integers $a \leq b$ such that $H^i(A_n) = 0$
+and $H^i(B_n) = 0$ for $i \not \in [a, b]$, and
+\item the inverse system of maps $H^i(A_n) \to H^i(B_n)$ of $\mathcal{A}$
+defines an isomorphism of pro-objects of $\mathcal{A}$
+for all $i \in \mathbf{Z}$.
+\end{enumerate}
+Then the maps $A_n \to B_n$
+determine a pro-isomorphism between the pro-object $(A_n)$
+and the pro-object $(B_n)$.
+\end{lemma}
+
+\begin{proof}
+We can inductively extend the maps $A_n \to B_n$ to an inverse system of
+distinguished triangles $A_n \to B_n \to C_n \to A_n[1]$ by
+axiom TR3. By Lemma \ref{lemma-pro-isomorphism} it suffices to prove
+that $C_n$ is pro-zero. By Lemma \ref{lemma-essentially-constant-cohomology}
+it suffices to show that $H^p(C_n)$ is pro-zero for each $p$.
+This follows from assumption (2) and the long exact sequences
+$$
+H^p(A_n) \xrightarrow{\alpha_n} H^p(B_n)
+\xrightarrow{\beta_n}
+H^p(C_n) \xrightarrow{\delta_n} H^{p + 1}(A_n)
+\xrightarrow{\epsilon_n}
+H^{p + 1}(B_n)
+$$
+Namely, for every $n$ we can find an $m > n$ such that
+$\Im(\beta_m)$ maps to zero in $H^p(C_n)$ because we may choose
+$m$ such that $H^p(B_m) \to H^p(B_n)$ factors through
+$\alpha_n : H^p(A_n) \to H^p(B_n)$. For a similar reason we may
+then choose $k > m$ such that $\Im(\delta_k)$ maps to zero
+in $H^{p + 1}(A_m)$. Then $H^p(C_k) \to H^p(C_n)$ is zero because
+$H^p(C_k) \to H^p(C_m)$ maps into $\Ker(\delta_m)$ and $H^p(C_m) \to H^p(C_n)$
+annihilates $\Ker(\delta_m) = \Im(\beta_m)$.
+\end{proof}
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/descent.tex b/books/stacks/descent.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6b481a43547e438feea597d9b4c809d4d95a57d3
--- /dev/null
+++ b/books/stacks/descent.tex
@@ -0,0 +1,9371 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Descent}
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In the chapter on topologies on schemes
+(see Topologies, Section \ref{topologies-section-introduction}) we introduced
+Zariski, \'etale, fppf, smooth, syntomic and fpqc coverings of schemes.
+In this chapter we discuss what kind of structures over schemes
+can be descended through such coverings.
+See for example \cite{Gr-I}, \cite{Gr-II}, \cite{Gr-III},
+\cite{Gr-IV}, \cite{Gr-V}, and \cite{Gr-VI}.
+This is also meant to introduce the notions of
+descent, descent data, effective descent data, in the less formal
+setting of descent questions for quasi-coherent sheaves, schemes, etc.
+The formal notion, that of a stack over a site, is discussed in
+the chapter on stacks (see Stacks, Section \ref{stacks-section-introduction}).
+
+\section{Descent data for quasi-coherent sheaves}
+\label{section-equivalence}
+
+\noindent
+In this chapter we will use the convention where
+the projection maps $\text{pr}_i : X \times \ldots \times X \to X$
+are labeled starting with $i = 0$. Hence we have
+$\text{pr}_0, \text{pr}_1 : X \times X \to X$,
+$\text{pr}_0, \text{pr}_1, \text{pr}_2 : X \times X \times X \to X$,
+etc.
+
+
+\begin{definition}
+\label{definition-descent-datum-quasi-coherent}
+Let $S$ be a scheme. Let $\{f_i : S_i \to S\}_{i \in I}$ be a family
+of morphisms with target $S$.
+\begin{enumerate}
+\item A {\it descent datum $(\mathcal{F}_i, \varphi_{ij})$
+for quasi-coherent sheaves} with respect to the given family
+is given by a quasi-coherent sheaf $\mathcal{F}_i$ on $S_i$ for
+each $i \in I$, an isomorphism of quasi-coherent
+$\mathcal{O}_{S_i \times_S S_j}$-modules
+$\varphi_{ij} : \text{pr}_0^*\mathcal{F}_i \to \text{pr}_1^*\mathcal{F}_j$
+for each pair $(i, j) \in I^2$
+such that for every triple of indices $(i, j, k) \in I^3$ the
+diagram
+$$
+\xymatrix{
+\text{pr}_0^*\mathcal{F}_i \ar[rd]_{\text{pr}_{01}^*\varphi_{ij}}
+\ar[rr]_{\text{pr}_{02}^*\varphi_{ik}} & &
+\text{pr}_2^*\mathcal{F}_k \\
+& \text{pr}_1^*\mathcal{F}_j \ar[ru]_{\text{pr}_{12}^*\varphi_{jk}} &
+}
+$$
+of $\mathcal{O}_{S_i \times_S S_j \times_S S_k}$-modules
+commutes. This is called the {\it cocycle condition}.
+\item A {\it morphism $\psi : (\mathcal{F}_i, \varphi_{ij}) \to
+(\mathcal{F}'_i, \varphi'_{ij})$ of descent data} is given
+by a family $\psi = (\psi_i)_{i\in I}$ of morphisms of
+$\mathcal{O}_{S_i}$-modules $\psi_i : \mathcal{F}_i \to \mathcal{F}'_i$
+such that all the diagrams
+$$
+\xymatrix{
+\text{pr}_0^*\mathcal{F}_i \ar[r]_{\varphi_{ij}} \ar[d]_{\text{pr}_0^*\psi_i}
+& \text{pr}_1^*\mathcal{F}_j \ar[d]^{\text{pr}_1^*\psi_j} \\
+\text{pr}_0^*\mathcal{F}'_i \ar[r]^{\varphi'_{ij}} &
+\text{pr}_1^*\mathcal{F}'_j \\
+}
+$$
+commute.
+\end{enumerate}
+\end{definition}
+
+\noindent
+A good example to keep in mind is the following.
+Suppose that $S = \bigcup S_i$ is an open covering.
+In that case we have seen descent data for sheaves of sets in
+Sheaves, Section \ref{sheaves-section-glueing-sheaves}
+where we called them ``glueing data for sheaves of sets
+with respect to the given covering''. Moreover, we proved
+that the category of glueing data is equivalent to the category
+of sheaves on $S$. We will show the analogue in the setting above when
+$\{S_i \to S\}_{i\in I}$ is an fpqc covering.
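+
+\medskip\noindent
+For instance (a sketch of the simplest case, not needed later): if
+$S = U_1 \cup U_2$ is a covering by two opens, then
+$U_i \times_S U_j = U_i \cap U_j$ and a descent datum consists of
+quasi-coherent sheaves $\mathcal{F}_i$ on $U_i$ together with isomorphisms
+$\varphi_{ij}$. The cocycle condition for the triples $(i, i, i)$ and
+$(i, j, i)$ forces
+$$
+\varphi_{ii} = \text{id}
+\quad\text{and}\quad
+\varphi_{21} = \varphi_{12}^{-1}
+$$
+so the only real datum is the glueing isomorphism
+$\varphi_{12} :
+\mathcal{F}_1|_{U_1 \cap U_2} \to \mathcal{F}_2|_{U_1 \cap U_2}$.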
+
+\medskip\noindent
+In the extreme case where the covering $\{S \to S\}$
+is given by $\text{id}_S$ a descent datum is necessarily
+of the form $(\mathcal{F}, \text{id}_\mathcal{F})$. The cocycle
+condition guarantees that the identity on $\mathcal{F}$ is the
+only permitted map in this case. The following lemma shows
+in particular that to every quasi-coherent sheaf of
+$\mathcal{O}_S$-modules there is associated a unique
+descent datum with respect to any given family.
+
+\begin{lemma}
+\label{lemma-refine-descent-datum}
+Let $\mathcal{U} = \{U_i \to U\}_{i \in I}$ and
+$\mathcal{V} = \{V_j \to V\}_{j \in J}$
+be families of morphisms of schemes with fixed target.
+Let $(g, \alpha : I \to J, (g_i)) : \mathcal{U} \to \mathcal{V}$
+be a morphism of families of maps with fixed target, see
+Sites, Definition \ref{sites-definition-morphism-coverings}.
+Let $(\mathcal{F}_j, \varphi_{jj'})$ be a descent
+datum for quasi-coherent sheaves with respect to the
+family $\{V_j \to V\}_{j \in J}$. Then
+\begin{enumerate}
+\item The system
+$$
+\left(g_i^*\mathcal{F}_{\alpha(i)},
+(g_i \times g_{i'})^*\varphi_{\alpha(i)\alpha(i')}\right)
+$$
+is a descent datum with respect to the family $\{U_i \to U\}_{i \in I}$.
+\item This construction is functorial in the descent datum
+$(\mathcal{F}_j, \varphi_{jj'})$.
+\item Given a second morphism $(g', \alpha' : I \to J, (g'_i))$
+of families of maps with fixed target with $g = g'$
+there exists a functorial isomorphism of descent data
+$$
+(g_i^*\mathcal{F}_{\alpha(i)},
+(g_i \times g_{i'})^*\varphi_{\alpha(i)\alpha(i')})
+\cong
+((g'_i)^*\mathcal{F}_{\alpha'(i)},
+(g'_i \times g'_{i'})^*\varphi_{\alpha'(i)\alpha'(i')}).
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: The maps
+$g_i^*\mathcal{F}_{\alpha(i)} \to (g'_i)^*\mathcal{F}_{\alpha'(i)}$
+which give the isomorphism of descent data in part (3)
+are the pullbacks of the maps $\varphi_{\alpha(i)\alpha'(i)}$ by the
+morphisms $(g_i, g'_i) : U_i \to V_{\alpha(i)} \times_V V_{\alpha'(i)}$.
+\end{proof}
+
+\noindent
+Any family $\mathcal{U} = \{S_i \to S\}_{i \in I}$ is a refinement of
+the trivial covering $\{S \to S\}$ in a unique way. For
+a quasi-coherent sheaf $\mathcal{F}$ on $S$ we simply denote by
+$(\mathcal{F}|_{S_i}, can)$ the descent datum with respect to
+$\mathcal{U}$ obtained by the procedure above.
+
+\begin{definition}
+\label{definition-descent-datum-effective-quasi-coherent}
+Let $S$ be a scheme.
+Let $\{S_i \to S\}_{i \in I}$ be a family of morphisms
+with target $S$.
+\begin{enumerate}
+\item Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_S$-module.
+We call the unique descent datum on $\mathcal{F}$ with respect to the covering
+$\{S \to S\}$ the {\it trivial descent datum}.
+\item The pullback of the trivial descent datum to
+$\{S_i \to S\}$ is called the {\it canonical descent datum}.
+Notation: $(\mathcal{F}|_{S_i}, can)$.
+\item A descent datum $(\mathcal{F}_i, \varphi_{ij})$
+for quasi-coherent sheaves with respect to the given covering
+is said to be {\it effective} if there exists a quasi-coherent
+sheaf $\mathcal{F}$ on $S$ such that $(\mathcal{F}_i, \varphi_{ij})$
+is isomorphic to $(\mathcal{F}|_{S_i}, can)$.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-zariski-descent-effective}
+Let $S$ be a scheme.
+Let $S = \bigcup U_i$ be an open covering.
+Any descent datum on quasi-coherent sheaves
+for the family $\mathcal{U} = \{U_i \to S\}$ is
+effective. Moreover, the functor from the category of
+quasi-coherent $\mathcal{O}_S$-modules to the category
+of descent data with respect to $\mathcal{U}$ is fully faithful.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from
+Sheaves, Section \ref{sheaves-section-glueing-sheaves}
+and the fact that being quasi-coherent is a local property, see
+Modules, Definition \ref{modules-definition-quasi-coherent}.
+\end{proof}
+
+\noindent
+To prove more we first need to study the case of modules over rings.
+
+
+
+
+
+
+
+
+
+
+\section{Descent for modules}
+\label{section-descent-modules}
+
+\noindent
+Let $R \to A$ be a ring map.
+By Simplicial, Example \ref{simplicial-example-push-outs-simplicial-object}
+this gives rise to a cosimplicial $R$-algebra
+$$
+\xymatrix{
+A
+\ar@<1ex>[r]
+\ar@<-1ex>[r]
+&
+A \otimes_R A
+\ar@<0ex>[l]
+\ar@<2ex>[r]
+\ar@<0ex>[r]
+\ar@<-2ex>[r]
+&
+A \otimes_R A \otimes_R A
+\ar@<1ex>[l]
+\ar@<-1ex>[l]
+}
+$$
+Let us denote this $(A/R)_\bullet$ so that $(A/R)_n$ is the $(n + 1)$-fold
+tensor product of $A$ over $R$. Given a map
+$\varphi : [n] \to [m]$ the $R$-algebra map $(A/R)_\bullet(\varphi)$
+is the map
+$$
+a_0 \otimes \ldots \otimes a_n
+\longmapsto
+\prod\nolimits_{\varphi(i) = 0} a_i
+\otimes
+\prod\nolimits_{\varphi(i) = 1} a_i
+\otimes \ldots \otimes
+\prod\nolimits_{\varphi(i) = m} a_i
+$$
+where we use the convention that the empty product is $1$. Thus the first
+few maps, notation as in
+Simplicial, Section \ref{simplicial-section-cosimplicial-object}, are
+$$
+\begin{matrix}
+\delta^1_0 & : & a_0 & \mapsto & 1 \otimes a_0 \\
+\delta^1_1 & : & a_0 & \mapsto & a_0 \otimes 1 \\
+\sigma^0_0 & : & a_0 \otimes a_1 & \mapsto & a_0a_1 \\
+\delta^2_0 & : & a_0 \otimes a_1 & \mapsto & 1 \otimes a_0 \otimes a_1 \\
+\delta^2_1 & : & a_0 \otimes a_1 & \mapsto & a_0 \otimes 1 \otimes a_1 \\
+\delta^2_2 & : & a_0 \otimes a_1 & \mapsto & a_0 \otimes a_1 \otimes 1 \\
+\sigma^1_0 & : & a_0 \otimes a_1 \otimes a_2 & \mapsto & a_0a_1 \otimes a_2 \\
+\sigma^1_1 & : & a_0 \otimes a_1 \otimes a_2 & \mapsto & a_0 \otimes a_1a_2
+\end{matrix}
+$$
+and so on.
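+
+\medskip\noindent
+As a quick sanity check of the cosimplicial identities (not needed in what
+follows), note that
+$$
+\delta^2_1(\delta^1_0(a_0)) = \delta^2_1(1 \otimes a_0)
+= 1 \otimes 1 \otimes a_0
+= \delta^2_0(1 \otimes a_0) = \delta^2_0(\delta^1_0(a_0))
+$$
+which is an instance of the relation
+$\delta_j \circ \delta_i = \delta_i \circ \delta_{j - 1}$ for $i < j$,
+and that $\sigma^0_0(\delta^1_1(a_0)) = \sigma^0_0(a_0 \otimes 1) = a_0$.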
+
+\medskip\noindent
+An $R$-module $M$ gives rise to a cosimplicial $(A/R)_\bullet$-module
+$(A/R)_\bullet \otimes_R M$. In other words, we set
+$M_n = (A/R)_n \otimes_R M$ and use the $R$-algebra maps
+$(A/R)_n \to (A/R)_m$ to define the corresponding maps on
+$(A/R)_\bullet \otimes_R M$.
+
+\medskip\noindent
+The analogue to a descent datum
+for quasi-coherent sheaves in the setting of modules is the following.
+
+\begin{definition}
+\label{definition-descent-datum-modules}
+Let $R \to A$ be a ring map.
+\begin{enumerate}
+\item A {\it descent datum $(N, \varphi)$ for modules
+with respect to $R \to A$}
+is given by an $A$-module $N$ and an isomorphism of
+$A \otimes_R A$-modules
+$$
+\varphi : N \otimes_R A \to A \otimes_R N
+$$
+such that the {\it cocycle condition} holds: the diagram
+of $A \otimes_R A \otimes_R A$-module maps
+$$
+\xymatrix{
+N \otimes_R A \otimes_R A \ar[rr]_{\varphi_{02}}
+\ar[rd]_{\varphi_{01}}
+& &
+A \otimes_R A \otimes_R N \\
+& A \otimes_R N \otimes_R A \ar[ru]_{\varphi_{12}} &
+}
+$$
+commutes (see below for notation).
+\item A {\it morphism $(N, \varphi) \to (N', \varphi')$ of descent data}
+is a morphism of $A$-modules $\psi : N \to N'$ such that
+the diagram
+$$
+\xymatrix{
+N \otimes_R A \ar[r]_\varphi \ar[d]_{\psi \otimes \text{id}_A} &
+A \otimes_R N \ar[d]^{\text{id}_A \otimes \psi} \\
+N' \otimes_R A \ar[r]^{\varphi'} &
+A \otimes_R N'
+}
+$$
+is commutative.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In the definition we use the notation that
+$\varphi_{01} = \varphi \otimes \text{id}_A$,
+$\varphi_{12} = \text{id}_A \otimes \varphi$, and
+$\varphi_{02}(n \otimes 1 \otimes 1) = \sum a_i \otimes 1 \otimes n_i$
+if $\varphi(n \otimes 1) = \sum a_i \otimes n_i$. All three are
+$A \otimes_R A \otimes_R A$-module homomorphisms. Equivalently we have
+$$
+\varphi_{ij}
+=
+\varphi \otimes_{(A/R)_1, \ (A/R)_\bullet(\tau^2_{ij})} (A/R)_2
+$$
+where $\tau^2_{ij} : [1] \to [2]$ is the map
+$0 \mapsto i$, $1 \mapsto j$. Namely,
+$(A/R)_{\bullet}(\tau^2_{02})(a_0 \otimes a_1) =
+a_0 \otimes 1 \otimes a_1$,
+and similarly for the others\footnote{Note that
+$\tau^2_{ij} = \delta^2_k$, if $\{i, j, k\} = [2] = \{0, 1, 2\}$,
+see Simplicial, Definition \ref{simplicial-definition-face-degeneracy}.}.
+
+\medskip\noindent
+We need some more notation to be able to state the next lemma.
+Let $(N, \varphi)$ be a descent datum with respect to a ring map $R \to A$.
+For $n \geq 0$ and $i \in [n]$ we set
+$$
+N_{n, i} =
+A \otimes_R
+\ldots
+\otimes_R A \otimes_R N \otimes_R A \otimes_R
+\ldots
+\otimes_R A
+$$
+with the factor $N$ in the $i$th spot. It is an $(A/R)_n$-module.
+If we introduce the maps $\tau^n_i : [0] \to [n]$, $0 \mapsto i$
+then we see that
+$$
+N_{n, i} = N \otimes_{(A/R)_0, \ (A/R)_\bullet(\tau^n_i)} (A/R)_n
+$$
+For $0 \leq i \leq j \leq n$ we let $\tau^n_{ij} : [1] \to [n]$
+be the map such that $0$ maps to $i$ and $1$ to $j$. Similarly
+to the above the homomorphism $\varphi$ induces isomorphisms
+$$
+\varphi^n_{ij}
+=
+\varphi \otimes_{(A/R)_1, \ (A/R)_\bullet(\tau^n_{ij})} (A/R)_n :
+N_{n, i} \longrightarrow N_{n, j}
+$$
+of $(A/R)_n$-modules when $i < j$. If $i = j$ we set
+$\varphi^n_{ij} = \text{id}$. Since these are all isomorphisms they allow us
+to move the factor $N$ to any spot we like. And the cocycle condition
+exactly means that it does not matter how we do this (e.g., as a composition
+of two of these or at once). Finally, for any $\beta : [n] \to [m]$
+we define the morphism
+$$
+N_{\beta, i} : N_{n, i} \to N_{m, \beta(i)}
+$$
+as the unique $(A/R)_\bullet(\beta)$-semi linear map such that
+$$
+N_{\beta, i}(1 \otimes \ldots \otimes n \otimes \ldots \otimes 1)
+=
+1 \otimes \ldots \otimes n \otimes \ldots \otimes 1
+$$
+for all $n \in N$.
+This hints at the following lemma.
+
+\begin{lemma}
+\label{lemma-descent-datum-cosimplicial}
+Let $R \to A$ be a ring map.
+Given a descent datum $(N, \varphi)$ we can associate to it a
+cosimplicial $(A/R)_\bullet$-module $N_\bullet$\footnote{We should really
+write $(N, \varphi)_\bullet$.} by the
+rules $N_n = N_{n, n}$ and, for $\beta : [n] \to [m]$, by setting
+$$
+N_\bullet(\beta) = (\varphi^m_{\beta(n)m}) \circ N_{\beta, n} :
+N_{n, n} \longrightarrow N_{m, m}.
+$$
+This procedure is functorial in the descent datum.
+\end{lemma}
+
+\begin{proof}
+Here are the first few maps, where we write
+$\varphi(n \otimes 1) = \sum \alpha_i \otimes x_i$:
+$$
+\begin{matrix}
+\delta^1_0 & : & N & \to & A \otimes N & n & \mapsto & 1 \otimes n \\
+\delta^1_1 & : & N & \to & A \otimes N & n & \mapsto &
+\sum \alpha_i \otimes x_i\\
+\sigma^0_0 & : & A \otimes N & \to & N & a_0 \otimes n & \mapsto & a_0n \\
+\delta^2_0 & : & A \otimes N & \to & A \otimes A \otimes N &
+a_0 \otimes n & \mapsto & 1 \otimes a_0 \otimes n \\
+\delta^2_1 & : & A \otimes N & \to & A \otimes A \otimes N &
+a_0 \otimes n & \mapsto & a_0 \otimes 1 \otimes n \\
+\delta^2_2 & : & A \otimes N & \to & A \otimes A \otimes N &
+a_0 \otimes n & \mapsto & \sum a_0 \otimes \alpha_i \otimes x_i \\
+\sigma^1_0 & : & A \otimes A \otimes N & \to & A \otimes N &
+a_0 \otimes a_1 \otimes n & \mapsto & a_0a_1 \otimes n \\
+\sigma^1_1 & : & A \otimes A \otimes N & \to & A \otimes N &
+a_0 \otimes a_1 \otimes n & \mapsto & a_0 \otimes a_1n
+\end{matrix}
+$$
+with notation as in
+Simplicial, Section \ref{simplicial-section-cosimplicial-object}.
+We first verify the two properties $\sigma^0_0 \circ \delta^1_0 = \text{id}$
+and $\sigma^0_0 \circ \delta^1_1 = \text{id}$.
+The first one, $\sigma^0_0 \circ \delta^1_0 = \text{id}$, is clear from
+the explicit description of the morphisms above.
+To prove the second relation we have to use the cocycle condition
+(because it does not hold for an arbitrary isomorphism
+$\varphi : N \otimes_R A \to A \otimes_R N$). Write
+$p = \sigma^0_0 \circ \delta^1_1 : N \to N$. By the description of the
+maps above we deduce that $p$ is also equal to
+$$
+p = \varphi \otimes \text{id} :
+N = (N \otimes_R A) \otimes_{(A \otimes_R A)} A
+\longrightarrow
+(A \otimes_R N) \otimes_{(A \otimes_R A)} A = N
+$$
+Since $\varphi$ is an isomorphism we see that $p$ is an isomorphism.
+Write $\varphi(n \otimes 1) = \sum \alpha_i \otimes x_i$ for certain
+$\alpha_i \in A$ and $x_i \in N$. Then $p(n) = \sum \alpha_ix_i$.
+Next, write
+$\varphi(x_i \otimes 1) = \sum \alpha_{ij} \otimes y_j$ for
+certain $\alpha_{ij} \in A$ and $y_j \in N$. Then the cocycle condition
+says that
+$$
+\sum \alpha_i \otimes \alpha_{ij} \otimes y_j
+=
+\sum \alpha_i \otimes 1 \otimes x_i.
+$$
+This means that $p(n) = \sum \alpha_ix_i = \sum \alpha_i\alpha_{ij}y_j =
+\sum \alpha_i p(x_i) = p(p(n))$. Thus $p$ is a projector, and since it is
+an isomorphism it is the identity.
+
+\medskip\noindent
+To prove fully that $N_\bullet$ is a cosimplicial module we have to check
+all 5 types of relations of
+Simplicial, Remark \ref{simplicial-remark-relations-cosimplicial}.
+The relations on composing $\sigma$'s are obvious.
+The relations on composing $\delta$'s come down to the
+cocycle condition for $\varphi$.
+In exactly the same way as above one checks the relations
+$\sigma_j \circ \delta_j = \sigma_j \circ \delta_{j + 1} = \text{id}$.
+Finally, the other relations on compositions of $\delta$'s and $\sigma$'s
+hold for any $\varphi$ whatsoever.
+\end{proof}
+
+\noindent
+Note that to an $R$-module $M$ we can associate a canonical
+descent datum, namely $(M \otimes_R A, can)$ where
+$can : (M \otimes_R A) \otimes_R A \to A \otimes_R (M \otimes_R A)$
+is the obvious map:
+$(m \otimes a) \otimes a' \mapsto a \otimes (m \otimes a')$.
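+
+\medskip\noindent
+As a quick check that $(M \otimes_R A, can)$ really is a descent datum,
+take a pure tensor $n = m \otimes a$ in $N = M \otimes_R A$. Then
+$can(n \otimes 1) = a \otimes (m \otimes 1)$ and
+$can((m \otimes 1) \otimes 1) = 1 \otimes (m \otimes 1)$, so in the
+concrete form of the cocycle condition used in the proof above both sides
+equal
+$$
+a \otimes 1 \otimes (m \otimes 1)
+$$
+in $A \otimes_R A \otimes_R N$; the general case follows by linearity.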
+
+\begin{lemma}
+\label{lemma-canonical-descent-datum-cosimplicial}
+Let $R \to A$ be a ring map.
+Let $M$ be an $R$-module. The cosimplicial
+$(A/R)_\bullet$-module associated to the canonical descent
+datum is isomorphic to the cosimplicial module $(A/R)_\bullet \otimes_R M$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{definition}
+\label{definition-descent-datum-effective-module}
+Let $R \to A$ be a ring map.
+We say a descent datum $(N, \varphi)$ is {\it effective}
+if there exists an $R$-module $M$ and an isomorphism
+of descent data from $(M \otimes_R A, can)$ to
+$(N, \varphi)$.
+\end{definition}
+
+\noindent
+Let $R \to A$ be a ring map.
+Let $(N, \varphi)$ be a descent datum.
+We may take the cochain complex $s(N_\bullet)$ associated
+with $N_\bullet$ (see
+Simplicial, Section \ref{simplicial-section-dold-kan-cosimplicial}).
+It has the following shape:
+$$
+N \to A \otimes_R N \to A \otimes_R A \otimes_R N \to \ldots
+$$
+We can describe the maps.
+The first map is the map
+$$
+n \longmapsto 1 \otimes n - \varphi(n \otimes 1).
+$$
+The second map on pure tensors has the values
+$$
+a \otimes n \longmapsto 1 \otimes a \otimes n
+- a \otimes 1 \otimes n + a \otimes \varphi(n \otimes 1).
+$$
+It is clear how the pattern continues.
+
+\medskip\noindent
+In the special case
+where $N = A \otimes_R M$ we see that for any $m \in M$
+the element $1 \otimes m$ is in the kernel of the first map
+of the cochain complex associated to the cosimplicial
+module $(A/R)_\bullet \otimes_R M$. Hence we get an extended cochain complex
+\begin{equation}
+\label{equation-extended-complex}
+0 \to M \to A \otimes_R M \to A \otimes_R A \otimes_R M \to \ldots
+\end{equation}
+Here we think of the $0$ as being in degree $-2$,
+the module $M$ in degree $-1$, the module $A \otimes_R M$ in
+degree $0$, etc. Note that this complex has the shape
+$$
+0 \to R \to A \to A \otimes_R A \to A \otimes_R A \otimes_R A \to \ldots
+$$
+when $M = R$.
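+
+\medskip\noindent
+A concrete example may help. Take $M = R$ and $A = R \times R$ with
+$R \to A$ the diagonal map; this is faithfully flat and even has a section
+(either projection). Identifying $A \otimes_R A$ with $R^{\oplus 4}$,
+with coordinates indexed by pairs $(i, j)$, the beginning of the extended
+complex (\ref{equation-extended-complex}) reads
+$$
+0 \to R \to R \times R \to R^{\oplus 4},
+\quad r \mapsto (r, r),
+\quad (a_1, a_2) \mapsto (a_j - a_i)_{i, j}
+$$
+and the kernel of the last displayed map is the diagonal copy of $R$,
+i.e., the image of the previous one, illustrating
+Lemma \ref{lemma-with-section-exact} below.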
+
+\begin{lemma}
+\label{lemma-with-section-exact}
+Suppose that $R \to A$ has a section.
+Then for any $R$-module $M$ the extended cochain complex
+(\ref{equation-extended-complex}) is exact.
+\end{lemma}
+
+\begin{proof}
+By
+Simplicial, Lemma \ref{simplicial-lemma-push-outs-simplicial-object-w-section}
+the map $R \to (A/R)_\bullet$ is a homotopy equivalence
+of cosimplicial $R$-algebras
+(here $R$ denotes the constant cosimplicial $R$-algebra).
+Hence $M \to (A/R)_\bullet \otimes_R M$ is
+a homotopy equivalence in the category of cosimplicial
+$R$-modules, because $\otimes_R M$ is a
+functor from the category of $R$-algebras to the category
+of $R$-modules, see
+Simplicial, Lemma \ref{simplicial-lemma-functorial-homotopy}.
+This implies that the induced map of associated
+complexes is a homotopy equivalence, see
+Simplicial, Lemma \ref{simplicial-lemma-homotopy-s-Q}.
+Since the complex associated to the constant cosimplicial
+$R$-module $M$ is the complex
+$$
+\xymatrix{
+M \ar[r]^0 & M \ar[r]^1 & M \ar[r]^0 & M \ar[r]^1 & M \ldots
+}
+$$
+we win (since the extended version simply puts an extra $M$ at
+the beginning).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ff-exact}
+Suppose that $R \to A$ is faithfully flat, see
+Algebra, Definition \ref{algebra-definition-flat}.
+Then for any $R$-module $M$ the extended cochain complex
+(\ref{equation-extended-complex}) is exact.
+\end{lemma}
+
+\begin{proof}
+Suppose we can show there exists a faithfully flat ring map
+$R \to R'$ such that the result holds for the ring map
+$R' \to A' = R' \otimes_R A$. Then the result follows for
+$R \to A$. Namely, for any $R$-module $M$ the cosimplicial
+module $(M \otimes_R R') \otimes_{R'} (A'/R')_\bullet$ is
+just the cosimplicial module $R' \otimes_R (M \otimes_R (A/R)_\bullet)$.
+Hence the vanishing of cohomology of the complex associated to
+$(M \otimes_R R') \otimes_{R'} (A'/R')_\bullet$ implies the
+vanishing of the cohomology of the complex associated to
+$M \otimes_R (A/R)_\bullet$ by faithful flatness of $R \to R'$.
+Similarly for the vanishing of cohomology groups in degrees
+$-1$ and $0$ of the extended complex (proof omitted).
+
+\medskip\noindent
+But we have such a faithful flat extension. Namely $R' = A$ works
+because the ring map $R' = A \to A' = A \otimes_R A$ has a section
+$a \otimes a' \mapsto aa'$ and
+Lemma \ref{lemma-with-section-exact}
+applies.
+\end{proof}
+
+\noindent
+Here is how the complex relates to the question of effectivity.
+
+\begin{lemma}
+\label{lemma-recognize-effective}
+Let $R \to A$ be a faithfully flat ring map.
+Let $(N, \varphi)$ be a descent datum.
+Then $(N, \varphi)$ is effective if and only if the canonical
+map
+$$
+A \otimes_R H^0(s(N_\bullet)) \longrightarrow N
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+If $(N, \varphi)$ is effective, then we may write $N = A \otimes_R M$
+with $\varphi = can$. It follows that $H^0(s(N_\bullet)) = M$ by
+Lemmas \ref{lemma-canonical-descent-datum-cosimplicial}
+and \ref{lemma-ff-exact}. Conversely, suppose the map of the lemma
+is an isomorphism. In this case set $M = H^0(s(N_\bullet))$.
+This is an $R$-submodule of $N$,
+namely $M = \{n \in N \mid 1 \otimes n = \varphi(n \otimes 1)\}$.
+The only thing to check is that via the isomorphism
+$A \otimes_R M \to N$
+the canonical descent datum agrees with $\varphi$.
+We omit the verification.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-descends}
+Let $R \to A$ be a faithfully flat ring map, and let $R \to R'$
+be faithfully flat. Set $A' = R' \otimes_R A$. If all descent data
+for $R' \to A'$ are effective, then so are all descent data for $R \to A$.
+\end{lemma}
+
+\begin{proof}
+Let $(N, \varphi)$ be a descent datum for $R \to A$.
+Set $N' = R' \otimes_R N = A' \otimes_A N$, and denote
+$\varphi' = \text{id}_{R'} \otimes \varphi$ the base change
+of the descent datum $\varphi$. Then $(N', \varphi')$ is
+a descent datum for $R' \to A'$ and
+$H^0(s(N'_\bullet)) = R' \otimes_R H^0(s(N_\bullet))$.
+Moreover, the map
+$A' \otimes_{R'} H^0(s(N'_\bullet)) \to N'$ is identified
+with the base change of the $A$-module map
+$A \otimes_R H^0(s(N_\bullet)) \to N$ via the faithfully flat map
+$A \to A'$. Hence we conclude by Lemma \ref{lemma-recognize-effective}.
+\end{proof}
+
+\noindent
+Here is the main result of this section.
+Its proof may seem a little clumsy; for a more highbrow approach see
+Remark \ref{remark-homotopy-equivalent-cosimplicial-algebras} below.
+
+\begin{proposition}
+\label{proposition-descent-module}
+\begin{slogan}
+Effective descent for modules along faithfully flat ring maps.
+\end{slogan}
+Let $R \to A$ be a faithfully flat ring map.
+Then
+\begin{enumerate}
+\item any descent datum on modules with respect to $R \to A$
+is effective,
+\item the functor $M \mapsto (A \otimes_R M, can)$ from $R$-modules
+to the category of descent data is an equivalence, and
+\item the inverse functor is given by $(N, \varphi) \mapsto H^0(s(N_\bullet))$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+We only prove (1) and omit the proofs of (2) and (3).
+As $R \to A$ is faithfully flat, there exists a faithfully flat
+base change $R \to R'$ such that $R' \to A' = R' \otimes_R A$ has
+a section (namely take $R' = A$ as in the proof of
+Lemma \ref{lemma-ff-exact}). Hence, using
+Lemma \ref{lemma-descent-descends}
+we may assume that $R \to A$ has a section, say $\sigma : A \to R$.
+Let $(N, \varphi)$ be a descent datum relative to $R \to A$.
+Set
+$$
+M = H^0(s(N_\bullet)) = \{n \in N \mid 1 \otimes n = \varphi(n \otimes 1)\}
+\subset
+N
+$$
+By Lemma \ref{lemma-recognize-effective} it suffices to show that
+$A \otimes_R M \to N$ is an isomorphism.
+
+\medskip\noindent
+Take an element $n \in N$. Write
+$\varphi(n \otimes 1) = \sum a_i \otimes x_i$ for certain
+$a_i \in A$ and $x_i \in N$. By Lemma \ref{lemma-descent-datum-cosimplicial}
+we have $n = \sum a_i x_i$ in $N$ (because
+$\sigma^0_0 \circ \delta^1_1 = \text{id}$ in any cosimplicial object).
+Next, write $\varphi(x_i \otimes 1) = \sum a_{ij} \otimes y_j$ for
+certain $a_{ij} \in A$ and $y_j \in N$.
+The cocycle condition means that
+$$
+\sum a_i \otimes a_{ij} \otimes y_j = \sum a_i \otimes 1 \otimes x_i
+$$
+in $A \otimes_R A \otimes_R N$. We conclude two things from this.
+First, by applying $\sigma$ to the first $A$ we conclude that
+$\sum \sigma(a_i) \varphi(x_i \otimes 1) = \sum \sigma(a_i) \otimes x_i$
+which means that $\sum \sigma(a_i) x_i \in M$. Next, by applying
+$\sigma$ to the middle $A$ and multiplying out we conclude that
+$\sum_i a_i (\sum_j \sigma(a_{ij}) y_j) = \sum a_i x_i = n$. Hence
+by the first conclusion we see that $A \otimes_R M \to N$ is
+surjective. Finally, suppose that $m_i \in M$ and
+$\sum a_i m_i = 0$. Then we see by applying $\varphi$ to
+$\sum a_im_i \otimes 1$ that $\sum a_i \otimes m_i = 0$.
+In other words $A \otimes_R M \to N$ is injective and we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-standard-covering}
+Let $R$ be a ring. Let $f_1, \ldots, f_n\in R$ generate the
+unit ideal. The ring $A = \prod_i R_{f_i}$ is a faithfully flat
+$R$-algebra. We remark that the cosimplicial ring $(A/R)_\bullet$
+has the following ring in degree $n$:
+$$
+\prod\nolimits_{i_0, \ldots, i_n} R_{f_{i_0}\ldots f_{i_n}}
+$$
+Hence the results above recover
+Algebra, Lemmas \ref{algebra-lemma-standard-covering},
+\ref{algebra-lemma-cover-module} and \ref{algebra-lemma-glue-modules}.
+But the results above actually say more because of exactness
+in higher degrees. Namely, this exactness implies that {\v C}ech cohomology of
+quasi-coherent sheaves on affines is trivial. Thus we get a second
+proof of Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cech-cohomology-quasi-coherent-trivial}.
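+
+\medskip\noindent
+For instance, with two elements $f_1, f_2$ generating the unit ideal, the
+extended complex (\ref{equation-extended-complex}) takes the form
+$$
+0 \to M \to M_{f_1} \times M_{f_2} \to
+M_{f_1} \times M_{f_1f_2} \times M_{f_2f_1} \times M_{f_2} \to \ldots
+$$
+where $M_f = M \otimes_R R_f$ and we use $R_{f_if_i} = R_{f_i}$.
+Exactness in degrees $-1$ and $0$ says exactly that an element of $M$ is
+determined by, and can be glued from, compatible elements of
+$M_{f_1}$ and $M_{f_2}$.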
+\end{remark}
+
+\begin{remark}
+\label{remark-homotopy-equivalent-cosimplicial-algebras}
+Let $R$ be a ring. Let $A_\bullet$ be a cosimplicial $R$-algebra.
+In this setting a descent datum corresponds to a cosimplicial
+$A_\bullet$-module $M_\bullet$ with the property that for
+every $n, m \geq 0$ and every $\varphi : [n] \to [m]$ the
+map $M(\varphi) : M_n \to M_m$ induces an isomorphism
+$$
+M_n \otimes_{A_n, A(\varphi)} A_m \longrightarrow M_m.
+$$
+Let us call such a cosimplicial module a {\it cartesian module}.
+In this setting, the proof of Proposition \ref{proposition-descent-module}
+can be split into the following steps:
+\begin{enumerate}
+\item If $R \to R'$ and $R \to A$ are faithfully flat,
+then descent data for $A/R$ are effective if
+descent data for $(R' \otimes_R A)/R'$ are effective.
+\item Let $A$ be an $R$-algebra. Descent data for $A/R$ correspond
+to cartesian $(A/R)_\bullet$-modules.
+\item If $R \to A$ has a section then $(A/R)_\bullet$ is homotopy
+equivalent to $R$, the constant cosimplicial
+$R$-algebra with value $R$.
+\item If $A_\bullet \to B_\bullet$ is a homotopy equivalence of
+cosimplicial $R$-algebras then the functor
+$M_\bullet \mapsto M_\bullet \otimes_{A_\bullet} B_\bullet$
+induces an equivalence of categories between cartesian
+$A_\bullet$-modules and cartesian $B_\bullet$-modules.
+\end{enumerate}
+For (1) see Lemma \ref{lemma-descent-descends}.
+Part (2) uses Lemma \ref{lemma-descent-datum-cosimplicial}.
+Part (3) we have seen in the proof of Lemma \ref{lemma-with-section-exact}
+(it relies on Simplicial,
+Lemma \ref{simplicial-lemma-push-outs-simplicial-object-w-section}).
+Moreover, part (4) is a triviality if you think about it right!
+\end{remark}
+
+
+
+
+
+
+
+
+\section{Descent for universally injective morphisms}
+\label{section-descent-universally-injective}
+
+\noindent
+Numerous constructions in algebraic geometry are made using techniques of
+{\it descent}, such as constructing objects over a given space by first
+working over a somewhat larger space which projects down to the given space,
+or verifying a property of a space or a morphism by pulling back along a
+covering map. The utility of such techniques is of course dependent on
+identification of a wide class of {\it effective descent morphisms}.
+Early in the Grothendieckian development of modern algebraic geometry,
+the class of morphisms which are {\it quasi-compact} and {\it faithfully flat}
+was shown to be effective for descending objects, morphisms, and many
+properties thereof.
+
+\medskip\noindent
+As usual, this statement comes down to a property of rings and modules.
+For a homomorphism $f: R \to S$ to be an effective descent morphism for
+modules, Grothendieck showed that it is sufficient for $f$ to be
+faithfully flat. However, this excludes many natural examples: for instance,
+any split ring homomorphism is an effective descent morphism. One natural
+example of this even arises in the proof of faithfully flat descent: for
+$f: R \to S$ any ring homomorphism, $1_S \otimes f: S \to S \otimes_R S$
+is split by the multiplication map whether or not it is flat.
+
+\medskip\noindent
+One may then ask whether there is a natural ring-theoretic condition
+implying effective descent for modules which includes both the case of a
+faithfully flat morphism and that of a split ring homomorphism. It may
+surprise the reader (at least it surprised this author) to learn that a
+complete answer to this question has been known since around 1970! Namely,
+it is not hard to check that a necessary condition for $f: R \to S$ to be
+an effective descent morphism for modules is that $f$ must be
+{\it universally injective} in the category of $R$-modules, that is, for
+any $R$-module $M$, the map $1_M \otimes f: M \to M \otimes_R S$
+must be injective. This then turns out to be a sufficient condition as well.
+For example, if $f$ is split in the category of $R$-modules (but not
+necessarily in the category of rings), then $f$ is an effective descent
+morphism for modules.
+
+\medskip\noindent
+The history of this result is a bit involved: it was originally asserted
+by Olivier \cite{olivier}, who called universally injective morphisms
+{\it pure}, but without a clear indication of proof. One can extract the
+result from the work of Joyal and Tierney \cite{joyal-tierney}, but to the
+best of our knowledge, the first free-standing proof to appear in the
+literature is that of Mesablishvili \cite{mesablishvili1}. The first purpose
+of this section is to expose Mesablishvili's proof; this requires little
+modification of his original presentation aside from correcting typos, with
+the one exception that we make explicit the relationship between the
+customary definition of a descent datum in algebraic geometry and the one
+used in \cite{mesablishvili1}. The proof turns out to be entirely
+category-theoretic, and consequently can be put in the language of monads
+(and thus applied in other contexts); see \cite{janelidze-tholen}.
+
+\medskip\noindent
+The second purpose of this section is to collect some information about which
+properties of modules, algebras, and morphisms can be descended along
+universally injective ring homomorphisms. The cases of finite modules
+and flat modules were treated by Mesablishvili \cite{mesablishvili2}.
+
+
+\subsection{Category-theoretic preliminaries}
+\label{subsection-category-prelims}
+
+\noindent
+We start by recalling a few basic notions from category theory which will
+simplify the exposition. In this subsection, fix an ambient category.
+
+\medskip\noindent
+For two morphisms $g_1, g_2: B \to C$, recall that an {\it equalizer}
+of $g_1$ and $g_2$ is a morphism $f: A \to B$ which satisfies
+$g_1 \circ f = g_2 \circ f$ and is universal for this property.
+This second statement means that any commutative diagram
+$$
+\xymatrix{A' \ar[rd]^e \ar@/^1.5pc/[rrd] \ar@{-->}[d] & & \\
+A \ar[r]^f & B \ar@<1ex>[r]^{g_1} \ar@<-1ex>[r]_{g_2} &
+C
+}
+$$
+without the dashed arrow can be uniquely completed. We also say in this
+situation that the diagram
+\begin{equation}
+\label{equation-equalizer}
+\xymatrix{
+A \ar[r]^f & B \ar@<1ex>[r]^{g_1} \ar@<-1ex>[r]_{g_2} & C
+}
+\end{equation}
+is an equalizer. Reversing arrows gives the definition of a {\it coequalizer}.
+See Categories, Sections \ref{categories-section-equalizers} and
+\ref{categories-section-coequalizers}.
+
+\medskip\noindent
+Since it involves a universal property, the property of being an equalizer is
+typically not stable under applying a covariant functor. Just as for
+monomorphisms and epimorphisms, one can get around this in some
+cases by exhibiting splittings.
+
+\begin{definition}
+\label{definition-split-equalizer}
+A {\it split equalizer} is a diagram (\ref{equation-equalizer}) with
+$g_1 \circ f = g_2 \circ f$ for which there exist auxiliary morphisms
+$h : B \to A$ and $i : C \to B$ such that
+\begin{equation}
+\label{equation-split-equalizer-conditions}
+h \circ f = 1_A, \quad f \circ h = i \circ g_1, \quad i \circ g_2 = 1_B.
+\end{equation}
+\end{definition}
+
+\noindent
+The point is that the equalities among arrows force (\ref{equation-equalizer})
+to be an equalizer: the map $e$ factors through $f$ via $h \circ e$, since
+$f \circ (h \circ e) = i \circ g_1 \circ e = i \circ g_2 \circ e = e$,
+and the factorization is unique because any $e'$ with $f \circ e' = e$
+satisfies $e' = h \circ f \circ e' = h \circ e$.
+Consequently, applying a covariant functor
+to a split equalizer gives a split equalizer; applying a contravariant functor
+gives a {\it split coequalizer}, whose definition is apparent.
+
+\subsection{Universally injective morphisms}
+\label{subsection-universally-injective}
+
+\noindent
+Recall that $\textit{Rings}$ denotes the category of commutative rings
+with $1$. For an object $R$ of $\textit{Rings}$ we denote $\text{Mod}_R$
+the category of $R$-modules.
+
+\begin{remark}
+\label{remark-reflects}
+Any functor $F : \mathcal{A} \to \mathcal{B}$ of abelian categories
+which is exact and takes nonzero objects to nonzero objects reflects
+injections and surjections. Namely, exactness implies that
+$F$ preserves kernels and cokernels (compare with
+Homology, Section \ref{homology-section-functors}).
+For example, if $f : R \to S$ is a
+faithfully flat ring homomorphism, then
+$\bullet \otimes_R S: \text{Mod}_R \to \text{Mod}_S$ has these properties.
+\end{remark}
+
+\noindent
+Let $R$ be a ring. Recall that a morphism $f : M \to N$ in $\text{Mod}_R$
+is {\it universally injective} if for all $P \in \text{Mod}_R$,
+the morphism $f \otimes 1_P: M \otimes_R P \to N \otimes_R P$ is injective.
+See Algebra, Definition \ref{algebra-definition-universally-injective}.
+
+\begin{definition}
+\label{definition-universally-injective}
+A ring map $f: R \to S$ is {\it universally injective}
+if it is universally injective as a morphism in $\text{Mod}_R$.
+\end{definition}
+
+\begin{example}
+\label{example-split-injection-universally-injective}
+Any split injection in $\text{Mod}_R$ is universally injective. In particular,
+any split injection in $\textit{Rings}$ is universally injective.
+\end{example}
+
+\begin{example}
+\label{example-cover-universally-injective}
+For a ring $R$ and $f_1, \ldots, f_n \in R$ generating the unit
+ideal, the morphism $R \to R_{f_1} \oplus \ldots \oplus R_{f_n}$ is
+universally injective. Although this is immediate from
+Lemma \ref{lemma-faithfully-flat-universally-injective},
+it is instructive to check it directly: we immediately reduce to the case
+where $R$ is local, in which case some $f_i$ must be a unit and so the map
+$R \to R_{f_i}$ is an isomorphism.
+\end{example}
+
+\begin{lemma}
+\label{lemma-faithfully-flat-universally-injective}
+Any faithfully flat ring map is universally injective.
+\end{lemma}
+
+\begin{proof}
+This is a reformulation of Algebra, Lemma
+\ref{algebra-lemma-faithfully-flat-universally-injective}.
+\end{proof}
+
+\noindent
+The key observation from \cite{mesablishvili1} is that universal injectivity
+can be usefully reformulated in terms of a splitting, using the usual
+construction of an injective cogenerator in $\text{Mod}_R$.
+
+\begin{definition}
+\label{definition-C}
+Let $R$ be a ring. Define the contravariant functor
+{\it $C$} $ : \text{Mod}_R \to \text{Mod}_R$ by setting
+$$
+C(M) = \Hom_{\textit{Ab}}(M, \mathbf{Q}/\mathbf{Z}),
+$$
+with the $R$-action on $C(M)$ given by $(rf)(s) = f(rs)$.
+\end{definition}
+
+\noindent
+This functor was denoted $M \mapsto M^\vee$ in
+More on Algebra, Section \ref{more-algebra-section-injectives-modules}.
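+
+\medskip\noindent
+For instance, over $R = \mathbf{Z}$ we have
+$C(\mathbf{Z}) = \mathbf{Q}/\mathbf{Z}$ and
+$C(\mathbf{Z}/n\mathbf{Z}) \cong \mathbf{Z}/n\mathbf{Z}$, generated by the
+homomorphism $1 \mapsto 1/n$. More generally $C(M) \neq 0$ whenever
+$M \neq 0$: a nonzero cyclic subgroup of $M$ admits a nonzero homomorphism
+to $\mathbf{Q}/\mathbf{Z}$, which extends to all of $M$ because
+$\mathbf{Q}/\mathbf{Z}$ is an injective abelian group. This is the source
+of the faithfulness properties in the next lemma.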
+
+\begin{lemma}
+\label{lemma-C-is-faithful}
+For a ring $R$, the functor $C : \text{Mod}_R \to \text{Mod}_R$ is
+exact and reflects injections and surjections.
+\end{lemma}
+
+\begin{proof}
+Exactness is More on Algebra, Lemma \ref{more-algebra-lemma-vee-exact}
+and the other properties follow from this, see
+Remark \ref{remark-reflects}.
+\end{proof}
+
+\begin{remark}
+\label{remark-adjunction}
+We will use frequently the standard adjunction between $\Hom$ and tensor
+product, in the form of the natural isomorphism of contravariant functors
+\begin{equation}
+\label{equation-adjunction}
+C(\bullet_1 \otimes_R \bullet_2) \cong \Hom_R(\bullet_1, C(\bullet_2)):
+\text{Mod}_R \times \text{Mod}_R \to \text{Mod}_R
+\end{equation}
+taking $f: M_1 \otimes_R M_2 \to \mathbf{Q}/\mathbf{Z}$ to the map $m_1 \mapsto
+(m_2 \mapsto f(m_1 \otimes m_2))$. See
+Algebra, Lemma \ref{algebra-lemma-hom-from-tensor-product-variant}.
+A corollary of this observation is that if
+$$
+\xymatrix@C=9pc{
+C(M) \ar@<1ex>[r] \ar@<-1ex>[r] & C(N) \ar[r] & C(P)
+}
+$$
+is a split coequalizer diagram in $\text{Mod}_R$, then so is
+$$
+\xymatrix@C=9pc{
+C(M \otimes_R Q) \ar@<1ex>[r] \ar@<-1ex>[r] & C(N \otimes_R Q) \ar[r] & C(P
+\otimes_R Q)
+}
+$$
+for any $Q \in \text{Mod}_R$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-split-surjection}
+Let $R$ be a ring. A morphism $f: M \to N$ in $\text{Mod}_R$ is universally
+injective if and only if $C(f): C(N) \to C(M)$ is a split surjection.
+\end{lemma}
+
+\begin{proof}
+By (\ref{equation-adjunction}), for any $P \in \text{Mod}_R$ we have a
+commutative diagram
+$$
+\xymatrix@C=9pc{
+\Hom_R( P, C(N)) \ar[r]_{\Hom_R(P,C(f))} \ar[d]^{\cong} &
+\Hom_R(P,C(M)) \ar[d]^{\cong} \\
+C(P \otimes_R N ) \ar[r]^{C(1_{P} \otimes f)} & C(P \otimes_R M ).
+}
+$$
+If $f$ is universally injective, then $1_{C(M)} \otimes f: C(M) \otimes_R M \to
+C(M) \otimes_R N$ is injective,
+so both rows in the above diagram are surjective for $P = C(M)$. We may thus
+lift
+$1_{C(M)} \in \Hom_R(C(M), C(M))$ to some $g \in \Hom_R(C(N), C(M))$ splitting
+$C(f)$.
+Conversely, if $C(f)$ is a split surjection, then
+both rows in the above diagram are surjective,
+so by Lemma \ref{lemma-C-is-faithful}, $1_{P} \otimes f$ is injective.
+\end{proof}
+
+\begin{remark}
+\label{remark-functorial-splitting}
+Let $f: M \to N$ be a universally injective morphism in $\text{Mod}_R$. By
+choosing a splitting
+$g$ of $C(f)$, we may construct a functorial splitting of $C(1_P \otimes f)$
+for each $P \in \text{Mod}_R$.
+Namely, by (\ref{equation-adjunction}) this amounts to splitting $\Hom_R(P,
+C(f))$ functorially in $P$,
+and this is achieved by the map $g \circ \bullet$.
+\end{remark}
+
+
+\subsection{Descent for modules and their morphisms}
+\label{subsection-descent-modules-morphisms}
+
+\noindent
+Throughout this subsection, fix a ring map $f: R \to S$. As seen in
+Section \ref{section-descent-modules} we can use the language of cosimplicial
+algebras to talk about descent data for modules, but in this
+subsection we prefer a more down to earth terminology.
+
+\medskip\noindent
+For $i = 1, 2, 3$, let $S_i$ be the $i$-fold tensor product of $S$ over $R$.
+Define the ring homomorphisms $\delta_0^1, \delta_1^1: S_1 \to S_2$,
+$\delta_{01}^1, \delta_{02}^1, \delta_{12}^1: S_1 \to S_3$, and
+$\delta_0^2, \delta_1^2, \delta_2^2: S_2 \to S_3$ by the formulas
+\begin{align*}
+\delta^1_0 (a_0) & = 1 \otimes a_0 \\
+\delta^1_1 (a_0) & = a_0 \otimes 1 \\
+\delta^2_0 (a_0 \otimes a_1) & = 1 \otimes a_0 \otimes a_1 \\
+\delta^2_1 (a_0 \otimes a_1) & = a_0 \otimes 1 \otimes a_1 \\
+\delta^2_2 (a_0 \otimes a_1) & = a_0 \otimes a_1 \otimes 1 \\
+\delta_{01}^1(a_0) & = 1 \otimes 1 \otimes a_0 \\
+\delta_{02}^1(a_0) & = 1 \otimes a_0 \otimes 1 \\
+\delta_{12}^1(a_0) & = a_0 \otimes 1 \otimes 1.
+\end{align*}
+In other words, the upper index indicates the source ring, while the lower
+index indicates where to insert factors of 1. (This notation is compatible
+with the notation introduced in Section \ref{section-descent-modules}.)
+
+\medskip\noindent
+Recall\footnote{To be precise, our $\theta$ here is the inverse of
+$\varphi$ from Definition \ref{definition-descent-datum-modules}.}
+from Definition \ref{definition-descent-datum-modules} that for
+$M \in \text{Mod}_S$, a {\it descent datum} on $M$ relative to $f$ is
+an isomorphism
+$$
+\theta :
+M \otimes_{S,\delta^1_0} S_2
+\longrightarrow
+M \otimes_{S,\delta^1_1} S_2
+$$
+of $S_2$-modules satisfying the {\it cocycle condition}
+\begin{equation}
+\label{equation-cocycle-condition}
+(\theta \otimes \delta^2_2) \circ (\theta \otimes \delta^2_0) = (\theta \otimes
+\delta^2_1):
+M \otimes_{S, \delta^1_{01}} S_3 \to M \otimes_{S, \delta^1_{12}} S_3.
+\end{equation}
+Let $DD_{S/R}$ be the category of $S$-modules equipped with descent data
+relative to $f$.
+
+\medskip\noindent
+For example, a module $M_0 \in \text{Mod}_R$ together with a choice of
+isomorphism $M \cong M_0 \otimes_R S$ gives rise to a descent datum by
+identifying
+$M \otimes_{S,\delta^1_0} S_2$ and $M \otimes_{S,\delta^1_1} S_2$
+naturally with $M_0 \otimes_R S_2$. This construction in particular
+defines a functor $f^*: \text{Mod}_R \to DD_{S/R}$.
+
+\begin{definition}
+\label{definition-effective-descent}
+The functor $f^*: \text{Mod}_R \to DD_{S/R}$
+is called {\it base extension along $f$}. We say that $f$ is a
+{\it descent morphism for modules} if $f^*$ is fully
+faithful. We say that $f$ is an {\it effective descent morphism for modules}
+if $f^*$ is an equivalence of categories.
+\end{definition}
+
+\noindent
+Our goal is to show that for $f$ universally injective, we can use $\theta$ to
+locate $M_0$ within $M$. This process makes crucial use of some equalizer
+diagrams.
+
+\begin{lemma}
+\label{lemma-equalizer-M}
+For $(M,\theta) \in DD_{S/R}$, the diagram
+\begin{equation}
+\label{equation-equalizer-M}
+\xymatrix@C=8pc{
+M \ar[r]^{\theta \circ (1_M \otimes \delta_0^1)} &
+M \otimes_{S, \delta_1^1} S_2
+\ar@<1ex>[r]^{(\theta \otimes \delta_2^2) \circ (1_M \otimes \delta^2_0)}
+\ar@<-1ex>[r]_{1_{M \otimes S_2} \otimes \delta^2_1} &
+M \otimes_{S, \delta_{12}^1} S_3
+}
+\end{equation}
+is a split equalizer.
+\end{lemma}
+
+\begin{proof}
+Define the ring homomorphisms $\sigma^0_0: S_2 \to S_1$ and $\sigma_0^1,
+\sigma_1^1: S_3 \to S_2$ by the formulas
+\begin{align*}
+\sigma^0_0 (a_0 \otimes a_1) & = a_0a_1 \\
+\sigma^1_0 (a_0 \otimes a_1 \otimes a_2) & = a_0a_1 \otimes a_2 \\
+\sigma^1_1 (a_0 \otimes a_1 \otimes a_2) & = a_0 \otimes a_1a_2.
+\end{align*}
+We then take the auxiliary morphisms to be
+$1_M \otimes \sigma_0^0: M \otimes_{S, \delta_1^1} S_2 \to M$
+and $1_M \otimes \sigma_0^1: M \otimes_{S,\delta_{12}^1} S_3 \to M \otimes_{S,
+\delta_1^1} S_2$.
+Of the compatibilities required in (\ref{equation-split-equalizer-conditions}),
+the first follows from tensoring the cocycle condition
+(\ref{equation-cocycle-condition}) with $\sigma_1^1$
+and the others are immediate.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equalizer-CM}
+For $(M, \theta) \in DD_{S/R}$, the diagram
+\begin{equation}
+\label{equation-coequalizer-CM}
+\xymatrix@C=8pc{
+C(M \otimes_{S, \delta_{12}^1} S_3)
+\ar@<1ex>[r]^{C((\theta \otimes \delta_2^2) \circ (1_M \otimes \delta^2_0))}
+\ar@<-1ex>[r]_{C(1_{M \otimes S_2} \otimes \delta^2_1)} &
+C(M \otimes_{S, \delta_1^1} S_2 )
+\ar[r]^{C(\theta \circ (1_M \otimes \delta_0^1))} & C(M)
+}
+\end{equation}
+obtained by applying $C$ to (\ref{equation-equalizer-M}) is a split
+coequalizer.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equalizer-S}
+The diagram
+\begin{equation}
+\label{equation-equalizer-S}
+\xymatrix@C=8pc{
+S_1 \ar[r]^{\delta^1_1} &
+S_2 \ar@<1ex>[r]^{\delta^2_2} \ar@<-1ex>[r]_{\delta^2_1} &
+S_3
+}
+\end{equation}
+is a split equalizer.
+\end{lemma}
+
+\begin{proof}
+In Lemma \ref{lemma-equalizer-M}, take $(M, \theta) = f^*(S)$.
+\end{proof}
+
+\noindent
+This suggests a definition of a potential quasi-inverse functor for $f^*$.
+
+\begin{definition}
+\label{definition-pushforward}
+Define the functor {\it $f_*$} $: DD_{S/R} \to \text{Mod}_R$ by taking
+$f_*(M, \theta)$ to be the $R$-submodule of $M$ for which the diagram
+\begin{equation}
+\label{equation-equalizer-f}
+\xymatrix@C=8pc{f_*(M,\theta) \ar[r] & M \ar@<1ex>^{\theta \circ (1_M \otimes
+\delta_0^1)}[r] \ar@<-1ex>_{1_M \otimes \delta_1^1}[r] &
+M \otimes_{S, \delta_1^1} S_2
+}
+\end{equation}
+is an equalizer.
+\end{definition}
+
+\noindent
+Using Lemma \ref{lemma-equalizer-M} and the fact that the restriction functor
+$\text{Mod}_S \to \text{Mod}_R$ is right adjoint to the base extension
+functor $\bullet \otimes_R S: \text{Mod}_R \to \text{Mod}_S$,
+we deduce that $f_*$ is right adjoint to $f^*$.
+
+\medskip\noindent
+We are ready for the key lemma. In the faithfully flat case this is a
+triviality (see Remark \ref{remark-descent-lemma}),
+but in the general case some argument is needed.
+
+\begin{lemma}
+\label{lemma-descent-lemma}
+If $f$ is universally injective, then the diagram
+\begin{equation}
+\label{equation-equalizer-f2}
+\xymatrix@C=8pc{
+f_*(M, \theta) \otimes_R S
+\ar[r]^{\theta \circ (1_M \otimes \delta_0^1)} &
+M \otimes_{S, \delta_1^1} S_2
+\ar@<1ex>[r]^{(\theta \otimes \delta_2^2) \circ (1_M \otimes \delta^2_0)}
+\ar@<-1ex>[r]_{1_{M \otimes S_2} \otimes \delta^2_1} &
+M \otimes_{S, \delta_{12}^1} S_3
+}
+\end{equation}
+obtained by tensoring (\ref{equation-equalizer-f}) over $R$ with $S$ is an
+equalizer.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-split-surjection} and
+Remark \ref{remark-functorial-splitting},
+the map $C(1_N \otimes f): C(N \otimes_R S) \to C(N)$ can be split functorially
+in $N$. This gives the upper vertical arrows in the commutative diagram
+$$
+\xymatrix@C=8pc{
+C(M \otimes_{S, \delta_1^1} S_2)
+\ar@<1ex>^{C(\theta \circ (1_M \otimes \delta_0^1))}[r]
+\ar@<-1ex>_{C(1_M \otimes \delta_1^1)}[r] \ar[d] &
+C(M) \ar[r]\ar[d] & C(f_*(M,\theta)) \ar@{-->}[d] \\
+C(M \otimes_{S,\delta_{12}^1} S_3)
+\ar@<1ex>^{C((\theta \otimes \delta_2^2) \circ (1_M \otimes \delta^2_0))}[r]
+\ar@<-1ex>_{C(1_{M \otimes S_2} \otimes \delta^2_1)}[r] \ar[d] &
+C(M \otimes_{S, \delta_1^1} S_2 )
+\ar[r]^{C(\theta \circ (1_M \otimes \delta_0^1))}
+\ar[d]^{C(1_M \otimes \delta_1^1)} &
+C(M) \ar[d] \ar@{=}[dl] \\
+C(M \otimes_{S, \delta_1^1} S_2)
+\ar@<1ex>[r]^{C(\theta \circ (1_M \otimes \delta_0^1))}
+\ar@<-1ex>[r]_{C(1_M \otimes \delta_1^1)} &
+C(M) \ar[r] &
+C(f_*(M,\theta))
+}
+$$
+in which the compositions along the columns are identity morphisms.
+The second row is the coequalizer diagram
+(\ref{equation-coequalizer-CM}); this produces the dashed arrow.
+From the top right square, we obtain auxiliary morphisms $C(f_*(M,\theta)) \to
+C(M)$
+and $C(M) \to C(M\otimes_{S,\delta_1^1} S_2)$ which imply that the first row is
+a split coequalizer diagram.
+By Remark \ref{remark-adjunction}, we may tensor with $S$ inside $C$ to obtain
+the split coequalizer diagram
+$$
+\xymatrix@C=8pc{
+C(M \otimes_{S,\delta_2^2 \circ \delta_1^1} S_3)
+\ar@<1ex>^{C((\theta \otimes \delta_2^2) \circ (1_M \otimes \delta^2_0))}[r]
+\ar@<-1ex>_{C(1_{M \otimes S_2} \otimes \delta^2_1)}[r] &
+C(M \otimes_{S, \delta_1^1} S_2 )
+\ar[r]^{C(\theta \circ (1_M \otimes \delta_0^1))} &
+C(f_*(M,\theta) \otimes_R S).
+}
+$$
+By Lemma \ref{lemma-C-is-faithful}, we conclude
+(\ref{equation-equalizer-f2}) must also be an equalizer.
+\end{proof}
+
+\begin{remark}
+\label{remark-descent-lemma}
+If $f$ is a split injection in $\text{Mod}_R$, one can simplify the argument by
+splitting $f$ directly,
+without using $C$. Things are even simpler if $f$ is faithfully flat; in this
+case,
+the conclusion of Lemma \ref{lemma-descent-lemma}
+is immediate because tensoring over $R$ with $S$ preserves all equalizers.
+\end{remark}
+
+\begin{theorem}
+\label{theorem-descent}
+The following conditions are equivalent.
+\begin{enumerate}
+\item[(a)] The morphism $f$ is a descent morphism for modules.
+\item[(b)] The morphism $f$ is an effective descent morphism for modules.
+\item[(c)] The morphism $f$ is universally injective.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+It is clear that (b) implies (a). We now check that (a) implies (c). If $f$ is
+not universally injective, we can find $M \in \text{Mod}_R$ such that the map
+$1_M \otimes f: M \to M \otimes_R S$ has nontrivial kernel $N$.
+The natural projection $M \to M/N$ is not an isomorphism, but its image in
+$DD_{S/R}$ is an isomorphism.
+Hence $f^*$ is not fully faithful.
+
+\medskip\noindent
+We finally check that (c) implies (b). By Lemma \ref{lemma-descent-lemma}, for
+$(M, \theta) \in DD_{S/R}$,
+the natural map $f^* f_*(M,\theta) \to M$ is an isomorphism of $S$-modules. On
+the other hand, for $M_0 \in \text{Mod}_R$,
+we may tensor (\ref{equation-equalizer-S}) with $M_0$ over $R$ to obtain an
+equalizer sequence,
so $M_0 \to f_* f^* M_0$ is an isomorphism. Consequently, $f_*$ and $f^*$ are
+quasi-inverse functors, proving the claim.
+\end{proof}
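\medskip\noindent
As an illustration (not needed in what follows), note that universally
injective ring maps need not be flat. For any ideal $I \subset R$ the map
$$
R \longrightarrow R \times R/I, \quad r \longmapsto (r, r \bmod I)
$$
is split injective in $\text{Mod}_R$ (the first projection is a splitting),
hence universally injective. Thus Theorem \ref{theorem-descent} applies to
this map even though $R \times R/I$ is in general not flat over $R$.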
+
+\subsection{Descent for properties of modules}
+\label{subsection-descent-properties-modules}
+
+\noindent
+Throughout this subsection, fix a universally injective ring map $f : R \to S$,
+an object $M \in \text{Mod}_R$, and a ring map $R \to A$. We now investigate
+the question of which properties of $M$ or $A$ can be checked after base
+extension along $f$. We start with some results from
+\cite{mesablishvili2}.
+
+\begin{lemma}
+\label{lemma-flat-to-injective}
+If $M \in \text{Mod}_R$ is flat, then $C(M)$ is an injective $R$-module.
+\end{lemma}
+
+\begin{proof}
+Let $0 \to N \to P \to Q \to 0$ be an exact sequence in $\text{Mod}_R$. Since
+$M$ is flat,
+$$
+0 \to N \otimes_R M \to P \otimes_R M \to Q \otimes_R M \to 0
+$$
+is exact.
+By Lemma \ref{lemma-C-is-faithful},
+$$
+0 \to C(Q \otimes_R M) \to C(P \otimes_R M) \to C(N \otimes_R M) \to 0
+$$
+is exact. By (\ref{equation-adjunction}), this last sequence can be rewritten
+as
+$$
+0 \to \Hom_R(Q, C(M)) \to \Hom_R(P, C(M)) \to \Hom_R(N, C(M)) \to 0.
+$$
+Hence $C(M)$ is an injective object of $\text{Mod}_R$.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-descend-module-properties}
+If $M \otimes_R S$ has one of the following properties as an $S$-module
+\begin{enumerate}
+\item[(a)]
+finitely generated;
+\item[(b)]
+finitely presented;
+\item[(c)]
+flat;
+\item[(d)]
+faithfully flat;
+\item[(e)]
+finite projective;
+\end{enumerate}
+then so does $M$ as an $R$-module (and conversely).
+\end{theorem}
+
+\begin{proof}
+To prove (a), choose a finite set $\{n_i\}$ of generators of $M \otimes_R S$
+in $\text{Mod}_S$. Write each $n_i$ as $\sum_j m_{ij} \otimes s_{ij}$ with
+$m_{ij} \in M$ and $s_{ij} \in S$. Let $F$ be the finite free $R$-module with
+basis $e_{ij}$ and let $F \to M$ be the $R$-module map sending $e_{ij}$ to
+$m_{ij}$. Then $F \otimes_R S\to M \otimes_R S$ is surjective, so
+$\Coker(F \to M) \otimes_R S$ is zero and hence $\Coker(F \to M)$
+is zero. This proves (a).
+
+\medskip\noindent
+To see (b) assume $M \otimes_R S$ is finitely presented. Then $M$ is finitely
generated by (a). Choose a surjection $R^{\oplus r} \to M$ with kernel $K$.
+Then $K \otimes_R S \to S^{\oplus r} \to M \otimes_R S \to 0$ is exact.
+By Algebra, Lemma \ref{algebra-lemma-extension}
+the kernel of $S^{\oplus r} \to M \otimes_R S$
+is a finite $S$-module. Thus we can find finitely many elements
+$k_1, \ldots, k_t \in K$ such that the images of $k_i \otimes 1$ in
+$S^{\oplus r}$ generate the kernel of $S^{\oplus r} \to M \otimes_R S$.
+Let $K' \subset K$ be the submodule generated by $k_1, \ldots, k_t$.
+Then $M' = R^{\oplus r}/K'$ is a finitely presented $R$-module
+with a morphism $M' \to M$ such that $M' \otimes_R S \to M \otimes_R S$
+is an isomorphism. Thus $M' \cong M$ as desired.
+
+\medskip\noindent
+To prove (c), let $0 \to M' \to M'' \to M \to 0$ be a short exact sequence in
+$\text{Mod}_R$. Since $\bullet \otimes_R S$ is a right exact functor,
+$M'' \otimes_R S \to M \otimes_R S$ is surjective. So by
+Lemma \ref{lemma-C-is-faithful} the map
+$C(M \otimes_R S) \to C(M'' \otimes_R S)$ is injective.
+If $M \otimes_R S$ is flat, then
+Lemma \ref{lemma-flat-to-injective} shows
+$C(M \otimes_R S)$ is an injective object of $\text{Mod}_S$, so the injection
+$C(M \otimes_R S) \to C(M'' \otimes_R S)$
+is split in $\text{Mod}_S$ and hence also in $\text{Mod}_R$.
+Since $C(M \otimes_R S) \to C(M)$ is a split surjection by
+Lemma \ref{lemma-split-surjection}, it follows that
+$C(M) \to C(M'')$ is a split injection in $\text{Mod}_R$. That is, the sequence
+$$
+0 \to C(M) \to C(M'') \to C(M') \to 0
+$$
+is split exact.
+For $N \in \text{Mod}_R$, by (\ref{equation-adjunction}) we see that
+$$
+0 \to C(M \otimes_R N) \to C(M'' \otimes_R N) \to C(M' \otimes_R N) \to 0
+$$
+is split exact. By Lemma \ref{lemma-C-is-faithful},
+$$
+0 \to M' \otimes_R N \to M'' \otimes_R N \to M \otimes_R N \to 0
+$$
+is exact. This implies $M$ is flat over $R$. Namely, taking
$M''$ a free module surjecting onto $M$ we conclude that
+$\text{Tor}_1^R(M, N) = 0$ for all modules $N$ and we can use
+Algebra, Lemma \ref{algebra-lemma-characterize-flat}.
+This proves (c).
+
+\medskip\noindent
+To deduce (d) from (c), note that if $N \in \text{Mod}_R$ and $M \otimes_R N$
+is zero,
+then $M \otimes_R S \otimes_S (N \otimes_R S) \cong (M \otimes_R N) \otimes_R
+S$ is zero,
+so $N \otimes_R S$ is zero and hence $N$ is zero.
+
+\medskip\noindent
+To deduce (e) at this point, it suffices to recall that $M$ is finitely
+generated and projective if and only if it is finitely presented and flat.
+See Algebra, Lemma \ref{algebra-lemma-finite-projective}.
+\end{proof}
+
+\noindent
+There is a variant for $R$-algebras.
+
+\begin{theorem}
+\label{theorem-descend-algebra-properties}
+If $A \otimes_R S$ has one of the following properties as an $S$-algebra
+\begin{enumerate}
+\item[(a)]
+of finite type;
+\item[(b)]
+of finite presentation;
+\item[(c)]
+formally unramified;
+\item[(d)]
+unramified;
+\item[(e)]
+\'etale;
+\end{enumerate}
+then so does $A$ as an $R$-algebra (and of course conversely).
+\end{theorem}
+
+\begin{proof}
+To prove (a), choose a finite set $\{x_i\}$ of generators of $A \otimes_R S$
+over $S$. Write each $x_i$ as $\sum_j y_{ij} \otimes s_{ij}$ with
+$y_{ij} \in A$ and $s_{ij} \in S$. Let $F$ be the polynomial $R$-algebra
on variables $e_{ij}$ and let $F \to A$ be the $R$-algebra map sending
+$e_{ij}$ to $y_{ij}$. Then $F \otimes_R S\to A \otimes_R S$ is surjective, so
+$\Coker(F \to A) \otimes_R S$ is zero and hence $\Coker(F \to A)$
+is zero. This proves (a).
+
+\medskip\noindent
+To see (b) assume $A \otimes_R S$ is a finitely presented $S$-algebra.
+Then $A$ is finite type over $R$ by (a). Choose a surjection
+$R[x_1, \ldots, x_n] \to A$ with kernel $I$.
+Then $I \otimes_R S \to S[x_1, \ldots, x_n] \to A \otimes_R S \to 0$ is exact.
+By Algebra, Lemma \ref{algebra-lemma-finite-presentation-independent}
+the kernel of $S[x_1, \ldots, x_n] \to A \otimes_R S$
+is a finitely generated ideal. Thus we can find finitely many elements
+$y_1, \ldots, y_t \in I$ such that the images of $y_i \otimes 1$ in
+$S[x_1, \ldots, x_n]$ generate the kernel of
+$S[x_1, \ldots, x_n] \to A \otimes_R S$.
+Let $I' \subset I$ be the ideal generated by $y_1, \ldots, y_t$.
+Then $A' = R[x_1, \ldots, x_n]/I'$ is a finitely presented $R$-algebra
+with a morphism $A' \to A$ such that $A' \otimes_R S \to A \otimes_R S$
+is an isomorphism. Thus $A' \cong A$ as desired.
+
+\medskip\noindent
+To prove (c), recall that $A$ is formally unramified over $R$ if and only
+if the module of relative differentials $\Omega_{A/R}$ vanishes, see
+Algebra, Lemma \ref{algebra-lemma-characterize-formally-unramified} or
+\cite[Proposition~17.2.1]{EGA4}.
+Since $\Omega_{(A \otimes_R S)/S} = \Omega_{A/R} \otimes_R S$,
+the vanishing descends by Theorem \ref{theorem-descent}.
+
+\medskip\noindent
+To deduce (d) from the previous cases, recall that $A$ is unramified
+over $R$ if and only if $A$ is formally unramified and of finite type
+over $R$, see
+Algebra, Lemma \ref{algebra-lemma-formally-unramified-unramified}.
+
+\medskip\noindent
+To prove (e), recall that by
+Algebra, Lemma \ref{algebra-lemma-etale-flat-unramified-finite-presentation}
+or \cite[Th\'eor\`eme~17.6.1]{EGA4} the algebra
+$A$ is \'etale over $R$ if and only if
+$A$ is flat, unramified, and of finite presentation over $R$.
+\end{proof}
+
+\begin{remark}
+\label{remark-when-locally-split}
+It would make things easier to have a faithfully
+flat ring homomorphism $g: R \to T$ for which $T \to S \otimes_R T$ has some
+extra structure.
+For instance, if one could ensure that $T \to S \otimes_R T$ is split in
+$\textit{Rings}$,
+then it would follow that every property of a module or algebra which is stable
+under base extension
+and which descends along faithfully flat morphisms also descends along
+universally injective morphisms.
An obvious guess would be to find $g$ for which $T$ is not only faithfully flat
but also injective in $\text{Mod}_R$,
but even for $R = \mathbf{Z}$ no such homomorphism can exist: an injective
$\mathbf{Z}$-module is divisible, whereas a faithfully flat
$\mathbf{Z}$-algebra $T$ must satisfy $T/pT \neq 0$ for every prime $p$.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Fpqc descent of quasi-coherent sheaves}
+\label{section-fpqc-descent-quasi-coherent}
+
+\noindent
+The main application of flat descent for modules is
+the corresponding descent statement for quasi-coherent
+sheaves with respect to fpqc-coverings.
+
+\begin{lemma}
+\label{lemma-standard-fpqc-covering}
+Let $S$ be an affine scheme.
+Let $\mathcal{U} = \{f_i : U_i \to S\}_{i = 1, \ldots, n}$
+be a standard fpqc covering of $S$, see
+Topologies, Definition \ref{topologies-definition-standard-fpqc}.
+Any descent datum on quasi-coherent sheaves
+for $\mathcal{U} = \{U_i \to S\}$ is effective.
+Moreover, the functor from the category of
+quasi-coherent $\mathcal{O}_S$-modules to the category
+of descent data with respect to $\mathcal{U}$ is fully faithful.
+\end{lemma}
+
+\begin{proof}
+This is a restatement of Proposition \ref{proposition-descent-module}
+in terms of schemes. First, note that a descent datum $\xi$
+for quasi-coherent sheaves with respect to $\mathcal{U}$
+is exactly the same as a descent datum $\xi'$ for quasi-coherent sheaves
+with respect to the covering
+$\mathcal{U}' = \{\coprod_{i = 1, \ldots, n} U_i \to S\}$.
+Moreover, effectivity for $\xi$ is the same as effectivity for $\xi'$.
+Hence we may assume $n = 1$, i.e., $\mathcal{U} = \{U \to S\}$
+where $U$ and $S$ are affine. In this case descent data
+correspond to descent data on modules with respect to the ring map
+$$
+\Gamma(S, \mathcal{O})
+\longrightarrow
+\Gamma(U, \mathcal{O}).
+$$
+Since $U \to S$ is surjective and flat, we see that this ring map
+is faithfully flat. In other words,
+Proposition \ref{proposition-descent-module} applies and we win.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-fpqc-descent-quasi-coherent}
+Let $S$ be a scheme.
+Let $\mathcal{U} = \{\varphi_i : U_i \to S\}$ be an fpqc covering, see
+Topologies, Definition \ref{topologies-definition-fpqc-covering}.
+Any descent datum on quasi-coherent sheaves
+for $\mathcal{U} = \{U_i \to S\}$ is effective.
+Moreover, the functor from the category of
+quasi-coherent $\mathcal{O}_S$-modules to the category
+of descent data with respect to $\mathcal{U}$ is fully faithful.
+\end{proposition}
+
+\begin{proof}
+Let $S = \bigcup_{j \in J} V_j$ be an affine open covering.
+For $j, j' \in J$ we denote $V_{jj'} = V_j \cap V_{j'}$ the intersection
+(which need not be affine). For $V \subset S$ open we denote
$\mathcal{U}_V = \{V \times_S U_i \to V\}_{i \in I}$, which is an
fpqc covering (Topologies, Lemma \ref{topologies-lemma-fpqc}).
+By definition of an fpqc covering, we can find for each $j \in J$ a
+finite set $K_j$, a map $\underline{i} : K_j \to I$,
+affine opens $U_{\underline{i}(k), k} \subset U_{\underline{i}(k)}$,
+$k \in K_j$ such that
+$\mathcal{V}_j = \{U_{\underline{i}(k), k} \to V_j\}_{k \in K_j}$ is
+a standard fpqc covering of $V_j$. And of course, $\mathcal{V}_j$
+is a refinement of $\mathcal{U}_{V_j}$. Picture
+$$
+\xymatrix{
+\mathcal{V}_j \ar[r] \ar@{~>}[d] &
+\mathcal{U}_{V_j} \ar[r] \ar@{~>}[d] &
+\mathcal{U} \ar@{~>}[d] \\
+V_j \ar@{=}[r] & V_j \ar[r] & S
+}
+$$
+where the top horizontal arrows are morphisms of families of
+morphisms with fixed target (see
+Sites, Definition \ref{sites-definition-morphism-coverings}).
+
+\medskip\noindent
To prove the proposition we show successively the
+faithfulness, fullness, and essential surjectivity of the
+functor from quasi-coherent sheaves to descent data.
+
+\medskip\noindent
+Faithfulness. Let $\mathcal{F}$, $\mathcal{G}$ be quasi-coherent
+sheaves on $S$ and let $a, b : \mathcal{F} \to \mathcal{G}$ be
+homomorphisms of $\mathcal{O}_S$-modules.
+Suppose $\varphi_i^*(a) = \varphi_i^*(b)$ for all $i$.
+Pick $s \in S$. Then $s = \varphi_i(u)$ for some $i \in I$ and
+$u \in U_i$. Since $\mathcal{O}_{S, s} \to \mathcal{O}_{U_i, u}$
+is flat, hence faithfully flat
+(Algebra, Lemma \ref{algebra-lemma-local-flat-ff}) we see
+that $a_s = b_s : \mathcal{F}_s \to \mathcal{G}_s$. Hence $a = b$.
+
+\medskip\noindent
Fullness. Let $\mathcal{F}$, $\mathcal{G}$ be quasi-coherent
+sheaves on $S$ and let
+$a_i : \varphi_i^*\mathcal{F} \to \varphi_i^*\mathcal{G}$ be
+homomorphisms of $\mathcal{O}_{U_i}$-modules such that
$\text{pr}_0^*a_i = \text{pr}_1^*a_j$ on $U_i \times_S U_j$.
+We can pull back these morphisms to get morphisms
+$$
+a_k :
+\mathcal{F}|_{U_{\underline{i}(k), k}}
+\longrightarrow
+\mathcal{G}|_{U_{\underline{i}(k), k}}
+$$
for $k \in K_j$, with notation as above. Moreover,
+Lemma \ref{lemma-refine-descent-datum} assures us
+that these define a morphism between (canonical) descent data on
+$\mathcal{V}_j$. Hence, by
+Lemma \ref{lemma-standard-fpqc-covering}, we get correspondingly
+unique morphisms $a_j : \mathcal{F}|_{V_j} \to \mathcal{G}|_{V_j}$.
+To see that $a_j|_{V_{jj'}} = a_{j'}|_{V_{jj'}}$ we use that
+both $a_j$ and $a_{j'}$ agree with the pullback of the morphism
+$(a_i)_{i \in I}$ of (canonical) descent data to any covering
+refining both $\mathcal{V}_{j, V_{jj'}}$ and
+$\mathcal{V}_{j', V_{jj'}}$, and using the faithfulness already
+shown. For example the covering
$\mathcal{V}_{jj'} =
\{U_{\underline{i}(k), k} \times_S U_{\underline{i}(k'), k'}
\to V_{jj'}\}_{k \in K_j, k' \in K_{j'}}$
+will do.
+
+\medskip\noindent
+Essential surjectivity. Let $\xi = (\mathcal{F}_i, \varphi_{ii'})$
+be a descent datum for quasi-coherent sheaves relative to the covering
+$\mathcal{U}$. Pull back this descent datum to get descent data
+$\xi_j$ for quasi-coherent sheaves relative to the coverings
+$\mathcal{V}_j$ of $V_j$. By Lemma \ref{lemma-standard-fpqc-covering}
+once again there exist
+quasi-coherent sheaves $\mathcal{F}_j$ on $V_j$ whose associated
+canonical descent datum is isomorphic to $\xi_j$. By fully faithfulness
+(proved above) we see there are isomorphisms
+$$
+\phi_{jj'} :
+\mathcal{F}_j|_{V_{jj'}}
+\longrightarrow
+\mathcal{F}_{j'}|_{V_{jj'}}
+$$
+corresponding to the isomorphism of descent data between the pullback
+of $\xi_j$ and $\xi_{j'}$ to $\mathcal{V}_{jj'}$. To see that these
+maps $\phi_{jj'}$ satisfy the cocycle condition we use faithfulness
+(proved above) over the triple intersections $V_{jj'j''}$. Hence, by
+Lemma \ref{lemma-zariski-descent-effective}
+we see that the sheaves $\mathcal{F}_j$
+glue to a quasi-coherent sheaf $\mathcal{F}$ as desired.
+We still have to verify that the canonical descent datum relative to
+$\mathcal{U}$ associated to $\mathcal{F}$ is isomorphic to the descent
+datum we started out with. This verification is omitted.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Galois descent for quasi-coherent sheaves}
+\label{section-galois-descent}
+
+\noindent
+Galois descent for quasi-coherent sheaves is just a special
+case of fpqc descent for quasi-coherent sheaves. In this section
+we will explain how to translate from a Galois descent to
+an fpqc descent and then apply earlier results to conclude.
+
+\medskip\noindent
+Let $k'/k$ be a field extension. Then $\{\Spec(k') \to \Spec(k)\}$
+is an fpqc covering. Let $X$ be a scheme over $k$. For a $k$-algebra
+$A$ we set $X_A = X \times_{\Spec(k)} \Spec(A)$. By
+Topologies, Lemma \ref{topologies-lemma-fpqc}
+we see that $\{X_{k'} \to X\}$ is an fpqc covering. Observe that
+$$
+X_{k'} \times_X X_{k'} = X_{k' \otimes_k k'}
+\quad\text{and}\quad
+X_{k'} \times_X X_{k'} \times_X X_{k'} = X_{k' \otimes_k k' \otimes_k k'}
+$$
+Thus a descent datum for quasi-coherent sheaves with respect to
+$\{X_{k'} \to X\}$ is given by a quasi-coherent sheaf $\mathcal{F}$
+on $X_{k'}$, an isomorphism
+$\varphi : \text{pr}_0^*\mathcal{F} \to \text{pr}_1^*\mathcal{F}$
+on $X_{k' \otimes_k k'}$
+which satisfies an obvious cocycle condition on
+$X_{k' \otimes_k k' \otimes_k k'}$.
+We will work out what this means in the case of a Galois extension below.
+
+\medskip\noindent
+Let $k'/k$ be a finite Galois extension with Galois group
+$G = \text{Gal}(k'/k)$. Then there are $k$-algebra isomorphisms
+$$
+k' \otimes_k k' \longrightarrow \prod\nolimits_{\sigma \in G} k',\quad
a \otimes b \longmapsto (a\sigma(b))_{\sigma \in G}
+$$
+and
+$$
+k' \otimes_k k' \otimes_k k' \longrightarrow
+\prod\nolimits_{(\sigma, \tau) \in G \times G} k',\quad
a \otimes b \otimes c \longmapsto
(a\sigma(b)\sigma(\tau(c)))_{(\sigma, \tau) \in G \times G}
+$$
+The reason for choosing here $a\sigma(b)\sigma(\tau(c))$
and not $a\sigma(b)\tau(c)$ is that the formulas below simplify, but
+it isn't strictly necessary. Given $\sigma \in G$ we denote
+$$
+f_\sigma = \text{id}_X \times \Spec(\sigma) :
+X_{k'} \longrightarrow X_{k'}
+$$
+Please keep in mind that because $\Spec(-)$ is a contravariant functor we have
+$f_{\sigma \tau} = f_\tau \circ f_\sigma$ and not the other way around.
+Using the first isomorphism above we obtain an identification
+$$
+X_{k' \otimes_k k'} = \coprod\nolimits_{\sigma \in G} X_{k'}
+$$
+such that $\text{pr}_0$ corresponds to the map
+$$
+\coprod\nolimits_{\sigma \in G} X_{k'}
+\xrightarrow{\coprod \text{id}}
+X_{k'}
+$$
+and such that $\text{pr}_1$ corresponds to the map
+$$
+\coprod\nolimits_{\sigma \in G} X_{k'}
+\xrightarrow{\coprod f_\sigma}
+X_{k'}
+$$
+Thus we see that a descent datum $\varphi$ on $\mathcal{F}$ over $X_{k'}$
+corresponds to a family of isomorphisms
+$\varphi_\sigma : \mathcal{F} \to f_\sigma^*\mathcal{F}$.
+To work out the cocycle condition we use the identification
+$$
+X_{k' \otimes_k k' \otimes_k k'} =
\coprod\nolimits_{(\sigma, \tau) \in G \times G} X_{k'}
$$
which we get from our isomorphism of algebras above.
+Via this identification the map $\text{pr}_{01}$ corresponds to
+the map
+$$
+\coprod\nolimits_{(\sigma, \tau) \in G \times G} X_{k'}
+\longrightarrow
+\coprod\nolimits_{\sigma \in G} X_{k'}
+$$
+which maps the summand with index $(\sigma, \tau)$ to the summand
+with index $\sigma$ via the identity morphism. The map $\text{pr}_{12}$
+corresponds to the map
+$$
+\coprod\nolimits_{(\sigma, \tau) \in G \times G} X_{k'}
+\longrightarrow
+\coprod\nolimits_{\sigma \in G} X_{k'}
+$$
+which maps the summand with index $(\sigma, \tau)$ to the summand
+with index $\tau$ via the morphism $f_\sigma$. Finally, the map
+$\text{pr}_{02}$ corresponds to the map
+$$
+\coprod\nolimits_{(\sigma, \tau) \in G \times G} X_{k'}
+\longrightarrow
+\coprod\nolimits_{\sigma \in G} X_{k'}
+$$
+which maps the summand with index $(\sigma, \tau)$ to the summand
+with index $\sigma\tau$ via the identity morphism.
+Thus the cocycle condition
+$$
+\text{pr}_{02}^*\varphi = \text{pr}_{12}^*\varphi \circ \text{pr}_{01}^*\varphi
+$$
+translates into one condition for each pair $(\sigma, \tau)$, namely
+$$
+\varphi_{\sigma\tau} = f_\sigma^*\varphi_\tau \circ \varphi_\sigma
+$$
+as maps $\mathcal{F} \to f_{\sigma\tau}^*\mathcal{F}$.
+(Everything works out beautifully; for example the target of
+$\varphi_\sigma$ is $f_\sigma^*\mathcal{F}$ and the
+source of $f_\sigma^*\varphi_\tau$ is $f_\sigma^*\mathcal{F}$ as well.)
+
+\begin{lemma}
+\label{lemma-galois-descent}
+Let $k'/k$ be a (finite) Galois extension with Galois group $G$.
+Let $X$ be a scheme over $k$. The category of quasi-coherent
+$\mathcal{O}_X$-modules is equivalent to the category of systems
+$(\mathcal{F}, (\varphi_\sigma)_{\sigma \in G})$ where
+\begin{enumerate}
+\item $\mathcal{F}$ is a quasi-coherent module on $X_{k'}$,
+\item $\varphi_\sigma : \mathcal{F} \to f_\sigma^*\mathcal{F}$
+is an isomorphism of modules,
+\item $\varphi_{\sigma\tau} = f_\sigma^*\varphi_\tau \circ \varphi_\sigma$
+for all $\sigma, \tau \in G$.
+\end{enumerate}
+Here $f_\sigma = \text{id}_X \times \Spec(\sigma) : X_{k'} \to X_{k'}$.
+\end{lemma}
+
+\begin{proof}
+As seen above a datum $(\mathcal{F}, (\varphi_\sigma)_{\sigma \in G})$
+as in the lemma is the same thing as a descent datum for the
+fpqc covering $\{X_{k'} \to X\}$. Thus the lemma follows from
+Proposition \ref{proposition-fpqc-descent-quasi-coherent}.
+\end{proof}
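\medskip\noindent
For example, take $k = \mathbf{R}$ and $k' = \mathbf{C}$, so that
$G = \{1, \sigma\}$ with $\sigma$ complex conjugation. Applying condition (3)
of Lemma \ref{lemma-galois-descent} with $\sigma = \tau = 1$ shows
$\varphi_1 = \text{id}$, and the only remaining condition is
$$
f_\sigma^*\varphi_\sigma \circ \varphi_\sigma = \text{id}_\mathcal{F}
$$
where we use $f_\sigma \circ f_\sigma = \text{id}$ to identify
$f_\sigma^* f_\sigma^* \mathcal{F}$ with $\mathcal{F}$. In other words,
quasi-coherent modules on $X$ correspond to quasi-coherent modules on
$X_{\mathbf{C}}$ equipped with a conjugation-semilinear involution, i.e.,
a real structure.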
+
+\noindent
+A slightly more general case of the above is the following.
+Suppose we have a surjective finite \'etale morphism $X \to Y$
+and a finite group $G$ together with a group homomorphism
+$G^{opp} \to \text{Aut}_Y(X), \sigma \mapsto f_\sigma$
+such that the map
+$$
+G \times X \longrightarrow X \times_Y X,\quad
+(\sigma, x) \longmapsto (x, f_\sigma(x))
+$$
+is an isomorphism. Then the same result as above holds.
+
+\begin{lemma}
+\label{lemma-galois-descent-more-general}
+Let $X \to Y$, $G$, and $f_\sigma : X \to X$ be as above.
+The category of quasi-coherent
+$\mathcal{O}_Y$-modules is equivalent to the category of systems
+$(\mathcal{F}, (\varphi_\sigma)_{\sigma \in G})$ where
+\begin{enumerate}
+\item $\mathcal{F}$ is a quasi-coherent $\mathcal{O}_X$-module,
+\item $\varphi_\sigma : \mathcal{F} \to f_\sigma^*\mathcal{F}$
+is an isomorphism of modules,
+\item $\varphi_{\sigma\tau} = f_\sigma^*\varphi_\tau \circ \varphi_\sigma$
+for all $\sigma, \tau \in G$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Since $X \to Y$ is surjective finite \'etale, $\{X \to Y\}$ is
+an fpqc covering. Since
+$G \times X \to X \times_Y X$, $(\sigma, x) \mapsto (x, f_\sigma(x))$
+is an isomorphism, we see that
+$G \times G \times X \to X \times_Y X \times_Y X$,
+$(\sigma, \tau, x) \mapsto (x, f_\sigma(x), f_{\sigma\tau}(x))$
+is an isomorphism too. Using these identifications, the category of
+data as in the lemma is the same as the category of descent data
for quasi-coherent sheaves for the covering $\{X \to Y\}$.
+Thus the lemma follows from
+Proposition \ref{proposition-fpqc-descent-quasi-coherent}.
+\end{proof}
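\medskip\noindent
For example, any surjective finite \'etale morphism $X \to Y$ of degree $2$
is of this form with $G = \mathbf{Z}/2\mathbf{Z}$: the scheme
$X \times_Y X$ is the disjoint union of the diagonal and its open and closed
complement, the complement maps isomorphically to $X$ via the first
projection, and the second projection then defines the nontrivial
automorphism $f_\sigma$.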
+
+
+
+
+
+
+
+
+
+\section{Descent of finiteness properties of modules}
+\label{section-descent-finiteness}
+
+\noindent
In this section we prove that one can check whether a quasi-coherent
module has a certain finiteness condition by checking on the members of
a covering.
+
+\begin{lemma}
+\label{lemma-finite-type-descends}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\{f_i : X_i \to X\}_{i \in I}$ be an fpqc covering such that
+each $f_i^*\mathcal{F}$ is a finite type $\mathcal{O}_{X_i}$-module.
+Then $\mathcal{F}$ is a finite type $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+Omitted. For the affine case, see
+Algebra, Lemma \ref{algebra-lemma-descend-properties-modules}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-type-descends-fppf}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism of
+locally ringed spaces. Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_Y$-modules.
+If
+\begin{enumerate}
+\item $f$ is open as a map of topological spaces,
+\item $f$ is surjective and flat, and
+\item $f^*\mathcal{F}$ is of finite type,
+\end{enumerate}
+then $\mathcal{F}$ is of finite type.
+\end{lemma}
+
+\begin{proof}
+Let $y \in Y$ be a point. Choose a point $x \in X$ mapping to $y$.
+Choose an open $x \in U \subset X$ and elements $s_1, \ldots, s_n$
+of $f^*\mathcal{F}(U)$ which generate $f^*\mathcal{F}$ over $U$.
+Since $f^*\mathcal{F} =
+f^{-1}\mathcal{F} \otimes_{f^{-1}\mathcal{O}_Y} \mathcal{O}_X$
+we can after shrinking $U$ assume $s_i = \sum t_{ij} \otimes a_{ij}$
+with $t_{ij} \in f^{-1}\mathcal{F}(U)$ and $a_{ij} \in \mathcal{O}_X(U)$.
+After shrinking $U$ further we may assume that $t_{ij}$ comes from
+a section $s_{ij} \in \mathcal{F}(V)$ for some $V \subset Y$ open
+with $f(U) \subset V$. Let $N$ be the number of sections $s_{ij}$ and
+consider the map
+$$
+\sigma = (s_{ij}) : \mathcal{O}_V^{\oplus N} \to \mathcal{F}|_V
+$$
+By our choice of the sections we see that $f^*\sigma|_U$ is surjective.
+Hence for every $u \in U$ the map
+$$
+\sigma_{f(u)} \otimes_{\mathcal{O}_{Y, f(u)}} \mathcal{O}_{X, u} :
+\mathcal{O}_{X, u}^{\oplus N}
+\longrightarrow
+\mathcal{F}_{f(u)} \otimes_{\mathcal{O}_{Y, f(u)}} \mathcal{O}_{X, u}
+$$
+is surjective. As $f$ is flat, the local ring map
+$\mathcal{O}_{Y, f(u)} \to \mathcal{O}_{X, u}$ is flat, hence
+faithfully flat (Algebra, Lemma \ref{algebra-lemma-local-flat-ff}).
+Hence $\sigma_{f(u)}$ is surjective. Since $f$ is open, $f(U)$ is
+an open neighbourhood of $y$ and the proof is done.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-presentation-descends}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\{f_i : X_i \to X\}_{i \in I}$ be an fpqc covering such that
+each $f_i^*\mathcal{F}$ is an $\mathcal{O}_{X_i}$-module of finite
+presentation. Then $\mathcal{F}$ is an $\mathcal{O}_X$-module
+of finite presentation.
+\end{lemma}
+
+\begin{proof}
+Omitted. For the affine case, see
+Algebra, Lemma \ref{algebra-lemma-descend-properties-modules}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-generated-by-r-sections-descends}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\{f_i : X_i \to X\}_{i \in I}$ be an fpqc covering such that
+each $f_i^*\mathcal{F}$ is locally generated by $r$ sections as an
+$\mathcal{O}_{X_i}$-module. Then $\mathcal{F}$ is locally generated by
+$r$ sections as an $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-finite-type-descends} we see that $\mathcal{F}$
+is of finite type. Hence Nakayama's lemma
+(Algebra, Lemma \ref{algebra-lemma-NAK}) implies that $\mathcal{F}$
+is generated by $r$ sections in the neighbourhood of a point $x \in X$
+if and only if $\dim_{\kappa(x)} \mathcal{F}_x \otimes \kappa(x) \leq r$.
+Choose an $i$ and a point $x_i \in X_i$ mapping to $x$. Then
+$\dim_{\kappa(x)} \mathcal{F}_x \otimes \kappa(x) =
+\dim_{\kappa(x_i)} (f_i^*\mathcal{F})_{x_i} \otimes \kappa(x_i)$
+which is $\leq r$ as $f_i^*\mathcal{F}$ is locally generated by $r$
+sections.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-descends}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\{f_i : X_i \to X\}_{i \in I}$ be an fpqc covering such that
+each $f_i^*\mathcal{F}$ is a flat $\mathcal{O}_{X_i}$-module.
+Then $\mathcal{F}$ is a flat $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+Omitted. For the affine case, see
+Algebra, Lemma \ref{algebra-lemma-descend-properties-modules}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-locally-free-descends}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\{f_i : X_i \to X\}_{i \in I}$ be an fpqc covering such that
+each $f_i^*\mathcal{F}$ is a finite locally free $\mathcal{O}_{X_i}$-module.
+Then $\mathcal{F}$ is a finite locally free $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+This follows from the fact that a quasi-coherent sheaf is finite locally
+free if and only if it is of finite presentation and flat, see
+Algebra, Lemma \ref{algebra-lemma-finite-projective}.
+Namely, if each $f_i^*\mathcal{F}$ is flat and of finite presentation,
+then so is $\mathcal{F}$ by
+Lemmas \ref{lemma-flat-descends} and
+\ref{lemma-finite-presentation-descends}.
+\end{proof}
+
+\noindent
+The definition of a locally projective quasi-coherent sheaf can be found in
+Properties, Section \ref{properties-section-locally-projective}.
+
+\begin{lemma}
+\label{lemma-locally-projective-descends}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $\{f_i : X_i \to X\}_{i \in I}$ be an fpqc covering such that
+each $f_i^*\mathcal{F}$ is a locally projective $\mathcal{O}_{X_i}$-module.
+Then $\mathcal{F}$ is a locally projective $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+Omitted. For Zariski coverings this is
+Properties, Lemma \ref{properties-lemma-locally-projective}.
+For the affine case this is
+Algebra, Theorem \ref{algebra-theorem-ffdescent-projectivity}.
+\end{proof}
+
+\begin{remark}
+\label{remark-locally-free-descends}
+Being locally free is a property of quasi-coherent modules which
+does not descend in the fpqc topology. Namely, suppose that
+$R$ is a ring and that $M$ is a projective $R$-module which is
+a countable direct sum $M = \bigoplus L_n$ of rank 1 locally
+free modules, but not locally free, see
+Examples, Lemma \ref{examples-lemma-projective-not-locally-free}.
+Then $M$ becomes free on making the faithfully flat base change
+$$
+R \longrightarrow
+\bigoplus\nolimits_{m \geq 1}
+\bigoplus\nolimits_{(i_1, \ldots, i_m) \in \mathbf{Z}^{\oplus m}}
+L_1^{\otimes i_1} \otimes_R \ldots \otimes_R L_m^{\otimes i_m}
+$$
+But we don't know what happens for fppf coverings. In other words,
+we don't know the answer to the following question:
+Suppose $A \to B$ is a faithfully
+flat ring map of finite presentation. Let $M$ be an $A$-module
+such that $M \otimes_A B$ is free. Is $M$ a locally free
+$A$-module? It turns out that if $A$ is Noetherian, then the answer
+is yes. This follows from the results of \cite{Bass}. But in general
+we don't know the answer. If you know the answer, or have a reference,
+please email
+\href{mailto:stacks.project@gmail.com}{stacks.project@gmail.com}.
+\end{remark}
+
+\noindent
+We also add here two results which are related to the results above, but
+are of a slightly different nature.
+
+\begin{lemma}
+\label{lemma-finite-over-finite-module}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Assume $f$ is a finite morphism.
+Then $\mathcal{F}$ is an $\mathcal{O}_X$-module of finite type
+if and only if $f_*\mathcal{F}$ is an $\mathcal{O}_Y$-module of finite
+type.
+\end{lemma}
+
+\begin{proof}
+As $f$ is finite it is affine. This reduces us to the case where
+$f$ is the morphism $\Spec(B) \to \Spec(A)$ given
+by a finite ring map $A \to B$.
+Moreover, then $\mathcal{F} = \widetilde{M}$ is the sheaf of modules
+associated to the $B$-module $M$.
+Note that $M$ is finite as a $B$-module if and only if
+$M$ is finite as an $A$-module, see
+Algebra, Lemma \ref{algebra-lemma-finite-module-over-finite-extension}.
+Combined with
+Properties, Lemma \ref{properties-lemma-finite-type-module}
+this proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-finitely-presented-module}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Assume $f$ is finite and of finite presentation.
+Then $\mathcal{F}$ is an $\mathcal{O}_X$-module of finite presentation
+if and only if $f_*\mathcal{F}$ is an $\mathcal{O}_Y$-module of finite
+presentation.
+\end{lemma}
+
+\begin{proof}
+As $f$ is finite it is affine. This reduces us to the case where
+$f$ is the morphism $\Spec(B) \to \Spec(A)$ given
+by a finite and finitely presented ring map $A \to B$.
+Moreover, then $\mathcal{F} = \widetilde{M}$ is the sheaf of modules
+associated to the $B$-module $M$.
+Note that $M$ is finitely presented as a $B$-module if and only if
+$M$ is finitely presented as an $A$-module, see
+Algebra, Lemma \ref{algebra-lemma-finite-finitely-presented-extension}.
+Combined with
+Properties, Lemma \ref{properties-lemma-finite-presentation-module}
+this proves the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Quasi-coherent sheaves and topologies, I}
+\label{section-quasi-coherent-sheaves}
+
+\noindent
+The results in this section say there is a natural equivalence between
the category of quasi-coherent modules on a scheme $S$ and the category
+of quasi-coherent modules on many of the sites associated to $S$
+in the chapter on topologies.
+
+\medskip\noindent
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_S$-module.
+Consider the functor
+\begin{equation}
+\label{equation-quasi-coherent-presheaf}
+(\Sch/S)^{opp} \longrightarrow \textit{Ab},
+\quad
+(f : T \to S) \longmapsto \Gamma(T, f^*\mathcal{F}).
+\end{equation}
+
+\begin{lemma}
+\label{lemma-sheaf-condition-holds}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_S$-module.
+Let $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0] smooth,
+\linebreak[0] syntomic, \linebreak[0] fppf, \linebreak[0] fpqc\}$.
+The functor defined in (\ref{equation-quasi-coherent-presheaf})
+satisfies the sheaf condition with respect to any $\tau$-covering
+$\{T_i \to T\}_{i \in I}$ of any scheme $T$ over $S$.
+\end{lemma}
+
+\begin{proof}
+For $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0] smooth,
\linebreak[0] syntomic, \linebreak[0] fppf\}$ a $\tau$-covering
is also an fpqc covering, see the results in
+Topologies, Lemmas
+\ref{topologies-lemma-zariski-etale},
+\ref{topologies-lemma-zariski-etale-smooth},
+\ref{topologies-lemma-zariski-etale-smooth-syntomic},
+\ref{topologies-lemma-zariski-etale-smooth-syntomic-fppf}, and
+\ref{topologies-lemma-zariski-etale-smooth-syntomic-fppf-fpqc}.
Hence it suffices to prove the lemma
for an fpqc covering. Assume that $\{f_i : T_i \to T\}_{i \in I}$
+is an fpqc covering where $f : T \to S$ is given. Suppose that
+we have a family of sections $s_i \in \Gamma(T_i , f_i^*f^*\mathcal{F})$
+such that $s_i|_{T_i \times_T T_j} = s_j|_{T_i \times_T T_j}$.
We have to find the corresponding section $s \in \Gamma(T, f^*\mathcal{F})$.
+We can reinterpret the $s_i$ as a family of maps
+$\varphi_i : f_i^*\mathcal{O}_T = \mathcal{O}_{T_i} \to f_i^*f^*\mathcal{F}$
+compatible with the canonical descent data associated to the
+quasi-coherent sheaves $\mathcal{O}_T$ and $f^*\mathcal{F}$ on $T$.
+Hence by Proposition \ref{proposition-fpqc-descent-quasi-coherent}
+we see that we may (uniquely) descend
+these to a map $\mathcal{O}_T \to f^*\mathcal{F}$ which gives
+us our section $s$.
+\end{proof}
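\noindent
For example (a familiar special case, included only for orientation),
if $T = \Spec(R)$ is affine, $f^*\mathcal{F} = \widetilde{M}$ for an
$R$-module $M$, and the covering is the Zariski covering
$\{\Spec(R_{f_i}) \to \Spec(R)\}_{i = 1, \ldots, n}$ with
$f_1, \ldots, f_n$ generating the unit ideal, then the sheaf condition
of the lemma amounts to the exactness of the usual sequence
$$
0 \longrightarrow M \longrightarrow
\prod\nolimits_i M_{f_i} \longrightarrow
\prod\nolimits_{i, j} M_{f_i f_j}
$$
where the last map sends a family $(s_i)_i$ to the family of differences
$s_i - s_j$ in $M_{f_i f_j}$.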
+
+\noindent
+We may in particular make the following definition.
+
+\begin{definition}
+\label{definition-structure-sheaf}
+Let $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0]
+smooth, \linebreak[0] syntomic, \linebreak[0] fppf\}$.
+Let $S$ be a scheme.
+Let $\Sch_\tau$ be a big site containing $S$.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_S$-module.
+\begin{enumerate}
+\item The {\it structure sheaf of the big site $(\Sch/S)_\tau$}
+is the sheaf of rings $T/S \mapsto \Gamma(T, \mathcal{O}_T)$ which is
+denoted $\mathcal{O}$ or $\mathcal{O}_S$.
+\item If $\tau = Zariski$ or $\tau = \etale$ the
+{\it structure sheaf of the small site} $S_{Zar}$ or $S_\etale$
+is the sheaf of rings $T/S \mapsto \Gamma(T, \mathcal{O}_T)$
+which is denoted $\mathcal{O}$ or $\mathcal{O}_S$.
+\item The {\it sheaf of $\mathcal{O}$-modules associated to
+$\mathcal{F}$} on the big site $(\Sch/S)_\tau$
+is the sheaf of $\mathcal{O}$-modules
+$(f : T \to S) \mapsto \Gamma(T, f^*\mathcal{F})$
+which is denoted $\mathcal{F}^a$ (and often simply $\mathcal{F}$).
+\item If $\tau = Zariski$ or $\tau = \etale$ the
+{\it sheaf of $\mathcal{O}$-modules associated to $\mathcal{F}$}
+on the small site $S_{Zar}$ or $S_\etale$ is the sheaf of
+$\mathcal{O}$-modules $(f : T \to S) \mapsto \Gamma(T, f^*\mathcal{F})$
+which is denoted $\mathcal{F}^a$ (and often simply $\mathcal{F}$).
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note how we use the same notation $\mathcal{F}^a$ in each case.
+No confusion can really arise from this as by definition the rule
+that defines the sheaf $\mathcal{F}^a$ is independent of the site
+we choose to look at.
+
+\begin{remark}
+\label{remark-Zariski-site-space}
+In Topologies, Lemma \ref{topologies-lemma-Zariski-usual}
+we have seen that the small Zariski site of a scheme $S$ is
+equivalent to $S$ as a topological space in the sense that the
+categories of sheaves are naturally equivalent. Now that $S_{Zar}$
+is also endowed with a structure sheaf $\mathcal{O}$ we see
+that sheaves of modules on the ringed site $(S_{Zar}, \mathcal{O})$
+agree with sheaves of modules on the ringed space $(S, \mathcal{O}_S)$.
+\end{remark}
+
+\begin{remark}
+\label{remark-change-topologies-ringed}
+Let $f : T \to S$ be a morphism of schemes.
+Each of the morphisms of sites $f_{sites}$ listed in
+Topologies, Section \ref{topologies-section-change-topologies}
+becomes a morphism of ringed sites. Namely, each of these morphisms of sites
+$f_{sites} : (\Sch/T)_\tau \to (\Sch/S)_{\tau'}$, or
+$f_{sites} : (\Sch/S)_\tau \to S_{\tau'}$ is given by the continuous
functor $S'/S \mapsto T \times_S S'/T$. Hence, given $S'/S$ we let
+$$
+f_{sites}^\sharp :
+\mathcal{O}(S'/S)
+\longrightarrow
+f_{sites, *}\mathcal{O}(S'/S) =
\mathcal{O}(T \times_S S'/T)
+$$
+be the usual map
+$\text{pr}_{S'}^\sharp : \mathcal{O}(S') \to \mathcal{O}(T \times_S S')$.
+Similarly, the morphism
+$i_f : \Sh(T_\tau) \to \Sh((\Sch/S)_\tau)$
+for $\tau \in \{Zar, \etale\}$, see
+Topologies, Lemmas \ref{topologies-lemma-put-in-T} and
+\ref{topologies-lemma-put-in-T-etale},
+becomes a morphism of ringed topoi because $i_f^{-1}\mathcal{O} = \mathcal{O}$.
+Here are some special cases:
+\begin{enumerate}
+\item The morphism of big sites
+$f_{big} : (\Sch/X)_{fppf} \to (\Sch/Y)_{fppf}$,
+becomes a morphism of ringed sites
+$$
+(f_{big}, f_{big}^\sharp) :
+((\Sch/X)_{fppf}, \mathcal{O}_X)
+\longrightarrow
+((\Sch/Y)_{fppf}, \mathcal{O}_Y)
+$$
+as in Modules on Sites, Definition \ref{sites-modules-definition-ringed-site}.
+Similarly for the big syntomic, smooth, \'etale and Zariski sites.
+\item The morphism of small sites
+$f_{small} : X_\etale \to Y_\etale$
+becomes a morphism of ringed sites
+$$
+(f_{small}, f_{small}^\sharp) :
+(X_\etale, \mathcal{O}_X)
+\longrightarrow
+(Y_\etale, \mathcal{O}_Y)
+$$
+as in Modules on Sites, Definition \ref{sites-modules-definition-ringed-site}.
+Similarly for the small Zariski site.
+\end{enumerate}
+\end{remark}
+
+\noindent
+Let $S$ be a scheme. It is clear that given an $\mathcal{O}$-module on (say)
+$(\Sch/S)_{Zar}$ the pullback to (say) $(\Sch/S)_{fppf}$
+is just the fppf-sheafification. To see what happens when comparing
+big and small sites we have the following.
+
+\begin{lemma}
+\label{lemma-compare-sites}
+Let $S$ be a scheme. Denote
+$$
+\begin{matrix}
+\text{id}_{\tau, Zar} & : & (\Sch/S)_\tau \to S_{Zar}, &
+\tau \in \{Zar, \etale, smooth, syntomic, fppf\} \\
+\text{id}_{\tau, \etale} & : &
+(\Sch/S)_\tau \to S_\etale, &
+\tau \in \{\etale, smooth, syntomic, fppf\} \\
+\text{id}_{small, \etale, Zar} & : & S_\etale \to S_{Zar},
+\end{matrix}
+$$
+the morphisms of ringed sites of
+Remark \ref{remark-change-topologies-ringed}.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_S$-modules
which we view as a sheaf of $\mathcal{O}$-modules on $S_{Zar}$. Then
+\begin{enumerate}
+\item $(\text{id}_{\tau, Zar})^*\mathcal{F}$ is the $\tau$-sheafification
+of the Zariski sheaf
+$$
+(f : T \to S) \longmapsto \Gamma(T, f^*\mathcal{F})
+$$
+on $(\Sch/S)_\tau$, and
+\item $(\text{id}_{small, \etale, Zar})^*\mathcal{F}$ is the
+\'etale sheafification of the Zariski sheaf
+$$
+(f : T \to S) \longmapsto \Gamma(T, f^*\mathcal{F})
+$$
+on $S_\etale$.
+\end{enumerate}
+Let $\mathcal{G}$ be a sheaf of $\mathcal{O}$-modules
+on $S_\etale$. Then
+\begin{enumerate}
+\item[(3)] $(\text{id}_{\tau, \etale})^*\mathcal{G}$ is the
+$\tau$-sheafification of the \'etale sheaf
+$$
+(f : T \to S) \longmapsto \Gamma(T, f_{small}^*\mathcal{G})
+$$
+where $f_{small} : T_\etale \to S_\etale$
+is the morphism of ringed small \'etale sites of
+Remark \ref{remark-change-topologies-ringed}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We first note that the result is true when $\tau = Zar$
+because in that case we have the morphism of topoi
+$i_f : \Sh(T_{Zar}) \to \Sh((\Sch/S)_{Zar})$
+such that $\text{id}_{\tau, Zar} \circ i_f = f_{small}$ as morphisms
+$T_{Zar} \to S_{Zar}$, see
+Topologies, Lemmas \ref{topologies-lemma-put-in-T} and
+\ref{topologies-lemma-morphism-big-small}.
+Since pullback is transitive (see
+Modules on Sites,
+Lemma \ref{sites-modules-lemma-push-pull-composition-modules})
+we see that
+$i_f^*(\text{id}_{\tau, Zar})^*\mathcal{F} = f_{small}^*\mathcal{F}$
+as desired. Hence, by the remark preceding this lemma we see that
+$(\text{id}_{\tau, Zar})^*\mathcal{F}$ is the $\tau$-sheafification of
+the presheaf $T \mapsto \Gamma(T, f^*\mathcal{F})$.
+
+\medskip\noindent
+The proof of (3) is exactly the same as the proof of (1), except that it
+uses
+Topologies, Lemmas \ref{topologies-lemma-put-in-T-etale} and
+\ref{topologies-lemma-morphism-big-small-etale}.
+We omit the proof of (2).
+\end{proof}
+
+\begin{remark}
+\label{remark-change-topologies-ringed-sites}
+Remark \ref{remark-change-topologies-ringed}
+and
+Lemma \ref{lemma-compare-sites}
+have the following applications:
+\begin{enumerate}
+\item Let $S$ be a scheme.
+The construction $\mathcal{F} \mapsto \mathcal{F}^a$ is
+the pullback under the morphism of ringed sites
+$\text{id}_{\tau, Zar} : ((\Sch/S)_\tau, \mathcal{O})
+\to (S_{Zar}, \mathcal{O})$
+or the morphism
+$\text{id}_{small, \etale, Zar} :
+(S_\etale, \mathcal{O}) \to (S_{Zar}, \mathcal{O})$.
+\item Let $f : X \to Y$ be a morphism of schemes.
+For any of the morphisms $f_{sites}$ of ringed sites of
+Remark \ref{remark-change-topologies-ringed}
+we have
+$$
+(f^*\mathcal{F})^a = f_{sites}^*\mathcal{F}^a.
+$$
+This follows from (1) and the fact that pullbacks are compatible with
+compositions of morphisms of ringed sites, see
+Modules on Sites,
+Lemma \ref{sites-modules-lemma-push-pull-composition-modules}.
+\end{enumerate}
+\end{remark}
+
+\begin{lemma}
+\label{lemma-quasi-coherent-gives-quasi-coherent}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_S$-module.
+Let $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0]
+smooth, \linebreak[0] syntomic, \linebreak[0] fppf\}$.
+\begin{enumerate}
+\item The sheaf $\mathcal{F}^a$ is a quasi-coherent
+$\mathcal{O}$-module on $(\Sch/S)_\tau$, as defined in
+Modules on Sites, Definition \ref{sites-modules-definition-site-local}.
+\item If $\tau = Zariski$ or $\tau = \etale$, then the sheaf
+$\mathcal{F}^a$ is a quasi-coherent $\mathcal{O}$-module on
+$S_{Zar}$ or $S_\etale$ as defined in
+Modules on Sites, Definition \ref{sites-modules-definition-site-local}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\{S_i \to S\}$ be a Zariski covering such that we have exact sequences
+$$
+\bigoplus\nolimits_{k \in K_i} \mathcal{O}_{S_i} \longrightarrow
+\bigoplus\nolimits_{j \in J_i} \mathcal{O}_{S_i} \longrightarrow
+\mathcal{F} \longrightarrow 0
+$$
+for some index sets $K_i$ and $J_i$. This is possible by the definition
+of a quasi-coherent sheaf on a ringed space
(see Modules, Definition \ref{modules-definition-quasi-coherent}).
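Concretely, when $S_i = \Spec(R_i)$ is affine and
$\mathcal{F}|_{S_i} = \widetilde{M_i}$, such an exact sequence is the
one associated to a free presentation
$$
\bigoplus\nolimits_{k \in K_i} R_i \longrightarrow
\bigoplus\nolimits_{j \in J_i} R_i \longrightarrow
M_i \longrightarrow 0
$$
obtained by choosing generators $\{m_j\}_{j \in J_i}$ of $M_i$ and
generators of the module of relations among them.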
+
+\medskip\noindent
+Proof of (1). Let $\tau \in \{Zariski, \linebreak[0] fppf, \linebreak[0]
+\etale, \linebreak[0] smooth, \linebreak[0] syntomic\}$.
+It is clear that $\mathcal{F}^a|_{(\Sch/S_i)_\tau}$ also
+sits in an exact sequence
+$$
+\bigoplus\nolimits_{k \in K_i} \mathcal{O}|_{(\Sch/S_i)_\tau}
+\longrightarrow
+\bigoplus\nolimits_{j \in J_i} \mathcal{O}|_{(\Sch/S_i)_\tau}
+\longrightarrow
+\mathcal{F}^a|_{(\Sch/S_i)_\tau} \longrightarrow 0
+$$
+Hence $\mathcal{F}^a$ is quasi-coherent by Modules on Sites,
+Lemma \ref{sites-modules-lemma-local-final-object}.
+
+\medskip\noindent
+Proof of (2). Let $\tau = \etale$.
+It is clear that $\mathcal{F}^a|_{(S_i)_\etale}$ also sits
+in an exact sequence
+$$
+\bigoplus\nolimits_{k \in K_i} \mathcal{O}|_{(S_i)_\etale}
+\longrightarrow
+\bigoplus\nolimits_{j \in J_i} \mathcal{O}|_{(S_i)_\etale}
+\longrightarrow
+\mathcal{F}^a|_{(S_i)_\etale} \longrightarrow 0
+$$
+Hence $\mathcal{F}^a$ is quasi-coherent by Modules on Sites,
+Lemma \ref{sites-modules-lemma-local-final-object}.
+The case $\tau = Zariski$ is similar (actually, it is really
+tautological since the corresponding ringed topoi agree).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-associated}
+Let $S$ be a scheme.
+Let $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0]
+smooth, \linebreak[0] syntomic, \linebreak[0] fppf\}$.
+Each of the functors $\mathcal{F} \mapsto \mathcal{F}^a$
+of Definition \ref{definition-structure-sheaf}
+$$
+\QCoh(\mathcal{O}_S) \to \QCoh((\Sch/S)_\tau, \mathcal{O})
+\quad\text{or}\quad
+\QCoh(\mathcal{O}_S) \to \QCoh(S_\tau, \mathcal{O})
+$$
+is fully faithful.
+\end{lemma}
+
+\begin{proof}
+(By Lemma \ref{lemma-quasi-coherent-gives-quasi-coherent} we do
+indeed get functors as indicated.)
+We may and do identify $\mathcal{O}_S$-modules on $S$ with
+modules on $(S_{Zar}, \mathcal{O}_S)$.
+The functor $\mathcal{F} \mapsto \mathcal{F}^a$ on quasi-coherent modules
+$\mathcal{F}$ is given by pullback by a morphism $f$
+of ringed sites, see Remark \ref{remark-change-topologies-ringed-sites}.
+In each case the functor $f_*$ is given by restriction
+along the inclusion functor $S_{Zar} \to S_\tau$ or
+$S_{Zar} \to (\Sch/S)_\tau$ (see discussion of how
+these morphisms of sites are defined in Topologies, Section
+\ref{topologies-section-change-topologies}).
+Combining this with the description of $f^*\mathcal{F} = \mathcal{F}^a$
+we see that $f_*f^*\mathcal{F} = \mathcal{F}$ provided that
+$\mathcal{F}$ is quasi-coherent. Then we see that
+$$
+\Hom_\mathcal{O}(\mathcal{F}^a, \mathcal{G}^a) =
+\Hom_\mathcal{O}(f^*\mathcal{F}, f^*\mathcal{G}) =
+\Hom_{\mathcal{O}_S}(\mathcal{F}, f_*f^*\mathcal{G}) =
+\Hom_{\mathcal{O}_S}(\mathcal{F}, \mathcal{G})
+$$
+as desired.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-equivalence-quasi-coherent}
+Let $S$ be a scheme.
+Let $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0]
+smooth, \linebreak[0] syntomic, \linebreak[0] fppf\}$.
+\begin{enumerate}
+\item The functor $\mathcal{F} \mapsto \mathcal{F}^a$
+defines an equivalence of categories
+$$
+\QCoh(\mathcal{O}_S)
+\longrightarrow
+\QCoh((\Sch/S)_\tau, \mathcal{O})
+$$
+between the category of quasi-coherent sheaves on $S$ and the category
+of quasi-coherent $\mathcal{O}$-modules on the big $\tau$ site of $S$.
+\item Let $\tau = Zariski$ or $\tau = \etale$.
+The functor $\mathcal{F} \mapsto \mathcal{F}^a$
+defines an equivalence of categories
+$$
+\QCoh(\mathcal{O}_S)
+\longrightarrow
+\QCoh(S_\tau, \mathcal{O})
+$$
+between the category of quasi-coherent sheaves on $S$ and the category
+of quasi-coherent $\mathcal{O}$-modules on the small $\tau$ site of $S$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+We have seen in Lemma \ref{lemma-quasi-coherent-gives-quasi-coherent}
+that the functor is well defined. By Lemma \ref{lemma-fully-faithful-associated}
+the functor is fully faithful. To finish the proof we will show that a
+quasi-coherent $\mathcal{O}$-module on $(\Sch/S)_\tau$ gives
+rise to a descent datum for quasi-coherent sheaves relative to a
+$\tau$-covering of $S$. Having produced this descent datum we will appeal
+to Proposition \ref{proposition-fpqc-descent-quasi-coherent} to get the
+corresponding quasi-coherent sheaf on $S$.
+
+\medskip\noindent
Let $\mathcal{G}$ be a quasi-coherent $\mathcal{O}$-module on
+the big $\tau$ site of $S$. By
+Modules on Sites, Definition \ref{sites-modules-definition-site-local}
+there exists a $\tau$-covering $\{S_i \to S\}_{i \in I}$ of $S$
+such that each of the restrictions
+$\mathcal{G}|_{(\Sch/S_i)_\tau}$ has a global presentation
+$$
+\bigoplus\nolimits_{k \in K_i} \mathcal{O}|_{(\Sch/S_i)_\tau}
+\longrightarrow
+\bigoplus\nolimits_{j \in J_i} \mathcal{O}|_{(\Sch/S_i)_\tau}
+\longrightarrow
+\mathcal{G}|_{(\Sch/S_i)_\tau} \longrightarrow 0
+$$
+for some index sets $J_i$ and $K_i$. We claim that this implies
+that $\mathcal{G}|_{(\Sch/S_i)_\tau}$ is $\mathcal{F}_i^a$
+for some quasi-coherent sheaf $\mathcal{F}_i$ on $S_i$. Namely,
+this is clear for the direct sums
+$\bigoplus\nolimits_{k \in K_i} \mathcal{O}|_{(\Sch/S_i)_\tau}$
+and
+$\bigoplus\nolimits_{j \in J_i} \mathcal{O}|_{(\Sch/S_i)_\tau}$.
+Hence we see that $\mathcal{G}|_{(\Sch/S_i)_\tau}$ is a
+cokernel of a map $\varphi : \mathcal{K}_i^a \to \mathcal{L}_i^a$
+for some quasi-coherent sheaves $\mathcal{K}_i$, $\mathcal{L}_i$
on $S_i$. By the full faithfulness of $(\ )^a$ we see that
+$\varphi = \phi^a$ for some map of quasi-coherent sheaves
+$\phi : \mathcal{K}_i \to \mathcal{L}_i$ on $S_i$. Then it is
+clear that
+$\mathcal{G}|_{(\Sch/S_i)_\tau} \cong \Coker(\phi)^a$
+as claimed.
+
+\medskip\noindent
+Since $\mathcal{G}$ lives on all of the category
+$(\Sch/S)_\tau$ we see that
+$$
+(\text{pr}_0^*\mathcal{F}_i)^a
+\cong
+\mathcal{G}|_{(\Sch/(S_i \times_S S_j))_\tau}
+\cong
(\text{pr}_1^*\mathcal{F}_j)^a
+$$
+as $\mathcal{O}$-modules on $(\Sch/(S_i \times_S S_j))_\tau$.
Hence, using full faithfulness again, we get canonical isomorphisms
+$$
+\phi_{ij} :
+\text{pr}_0^*\mathcal{F}_i
+\longrightarrow
+\text{pr}_1^*\mathcal{F}_j
+$$
+of quasi-coherent modules over $S_i \times_S S_j$. We omit the verification
+that these satisfy the cocycle condition. Since they do we see by
+effectivity of descent for quasi-coherent sheaves and the covering
+$\{S_i \to S\}$ (Proposition \ref{proposition-fpqc-descent-quasi-coherent})
+that there exists a quasi-coherent sheaf $\mathcal{F}$ on $S$
+with $\mathcal{F}|_{S_i} \cong \mathcal{F}_i$ compatible
+with the given descent data. In other words we are given
+$\mathcal{O}$-module isomorphisms
+$$
+\phi_i :
+\mathcal{F}^a|_{(\Sch/S_i)_\tau}
+\longrightarrow
+\mathcal{G}|_{(\Sch/S_i)_\tau}
+$$
+which agree over $S_i \times_S S_j$. Hence, since
+$\SheafHom_\mathcal{O}(\mathcal{F}^a, \mathcal{G})$ is
+a sheaf (Modules on Sites, Lemma \ref{sites-modules-lemma-internal-hom}),
+we conclude that
+there is a morphism of $\mathcal{O}$-modules $\mathcal{F}^a \to \mathcal{G}$
+recovering the isomorphisms $\phi_i$ above. Hence this is an isomorphism
+and we win.
+
+\medskip\noindent
+The case of the sites $S_\etale$ and $S_{Zar}$ is proved in the
+exact same manner.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equivalence-quasi-coherent-properties}
+Let $S$ be a scheme.
+Let $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0]
+smooth, \linebreak[0] syntomic, \linebreak[0] fppf\}$.
+Let $\mathcal{P}$ be one of the properties of modules\footnote{The list is:
+free, finite free, generated by global sections,
+generated by $r$ global sections, generated by finitely many global sections,
+having a global presentation, having a global finite presentation,
+locally free, finite locally free, locally generated by sections,
+locally generated by $r$ sections, finite type, of finite presentation,
+coherent, or flat.} defined in
+Modules on Sites, Definitions \ref{sites-modules-definition-global},
+\ref{sites-modules-definition-site-local}, and
+\ref{sites-modules-definition-flat}.
+The equivalences of categories
+$$
+\QCoh(\mathcal{O}_S)
+\longrightarrow
+\QCoh((\Sch/S)_\tau, \mathcal{O})
+\quad\text{and}\quad
+\QCoh(\mathcal{O}_S)
+\longrightarrow
+\QCoh(S_\tau, \mathcal{O})
+$$
+defined by the rule $\mathcal{F} \mapsto \mathcal{F}^a$ seen in
+Proposition \ref{proposition-equivalence-quasi-coherent}
+have the property
+$$
+\mathcal{F}\text{ has }\mathcal{P}
+\Leftrightarrow
+\mathcal{F}^a\text{ has }\mathcal{P}\text{ as an }\mathcal{O}\text{-module}
+$$
+except (possibly) when $\mathcal{P}$ is ``locally free'' or ``coherent''.
+If $\mathcal{P}=$``coherent'' the equivalence
+holds for $\QCoh(\mathcal{O}_S) \to \QCoh(S_\tau, \mathcal{O})$
+when $S$ is locally Noetherian and $\tau$ is Zariski or \'etale.
+\end{lemma}
+
+\begin{proof}
+This is immediate for the global properties, i.e., those defined in
+Modules on Sites, Definition \ref{sites-modules-definition-global}.
+For the local properties we can use
+Modules on Sites, Lemma \ref{sites-modules-lemma-local-final-object}
+to translate ``$\mathcal{F}^a$ has $\mathcal{P}$'' into a property
on the members of a covering of $S$. Hence the result follows from
+Lemmas \ref{lemma-finite-type-descends},
+\ref{lemma-finite-presentation-descends},
+\ref{lemma-locally-generated-by-r-sections-descends},
+\ref{lemma-flat-descends}, and
+\ref{lemma-finite-locally-free-descends}.
+Being coherent for a quasi-coherent module is the same as being
+of finite type over a locally Noetherian scheme (see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-Noetherian})
+hence this reduces
+to the case of finite type modules (details omitted).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Cohomology of quasi-coherent modules and topologies}
+\label{section-quasi-coherent-cohomology}
+
+\noindent
+In this section we prove that cohomology of quasi-coherent
+modules is independent of the choice of topology.
+
+\begin{lemma}
+\label{lemma-standard-covering-Cech}
+Let $S$ be a scheme. Let
+\begin{enumerate}
+\item[(a)] $\tau \in \{Zariski, \linebreak[0] fppf, \linebreak[0]
+\etale, \linebreak[0] smooth, \linebreak[0] syntomic\}$
+and $\mathcal{C} = (\Sch/S)_\tau$, or
+\item[(b)] let $\tau = \etale$ and $\mathcal{C} = S_\etale$, or
+\item[(c)] let $\tau = Zariski$ and $\mathcal{C} = S_{Zar}$.
+\end{enumerate}
+Let $\mathcal{F}$ be an abelian sheaf on $\mathcal{C}$.
+Let $U \in \Ob(\mathcal{C})$ be affine.
+Let $\mathcal{U} = \{U_i \to U\}_{i = 1, \ldots, n}$ be a standard affine
+$\tau$-covering in $\mathcal{C}$. Then
+\begin{enumerate}
+\item $\mathcal{V} = \{\coprod_{i = 1, \ldots, n} U_i \to U\}$ is a
+$\tau$-covering of $U$,
+\item $\mathcal{U}$ is a refinement of $\mathcal{V}$, and
+\item the induced map on {\v C}ech complexes
+(Cohomology on Sites,
+Equation (\ref{sites-cohomology-equation-map-cech-complexes}))
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{F})
+\longrightarrow
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+$$
+is an isomorphism of complexes.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows because
+$$
+(\coprod\nolimits_{i_0 = 1, \ldots, n} U_{i_0}) \times_U
+\ldots \times_U
+(\coprod\nolimits_{i_p = 1, \ldots, n} U_{i_p})
+=
+\coprod\nolimits_{i_0, \ldots, i_p \in \{1, \ldots, n\}}
+U_{i_0} \times_U \ldots \times_U U_{i_p}
+$$
+and the fact that $\mathcal{F}(\coprod_a V_a) = \prod_a \mathcal{F}(V_a)$
+since disjoint unions are $\tau$-coverings.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-standard-covering-Cech-quasi-coherent}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a quasi-coherent sheaf on $S$.
+Let $\tau$, $\mathcal{C}$, $U$, $\mathcal{U}$ be as in
+Lemma \ref{lemma-standard-covering-Cech}. Then there is an isomorphism
+of complexes
+$$
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^a)
+\cong
+s((A/R)_\bullet \otimes_R M)
+$$
+(see Section \ref{section-descent-modules})
+where $R = \Gamma(U, \mathcal{O}_U)$, $M = \Gamma(U, \mathcal{F}^a)$
+and $R \to A$ is a faithfully flat ring map. In particular
+$$
+\check{H}^p(\mathcal{U}, \mathcal{F}^a) = 0
+$$
+for all $p \geq 1$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-standard-covering-Cech} we see that
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}^a)$
+is isomorphic to $\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{F}^a)$
where $\mathcal{V} = \{V \to U\}$ with $V = \coprod_{i = 1, \ldots, n} U_i$
+affine also. Set $A = \Gamma(V, \mathcal{O}_V)$. Since $\{V \to U\}$
+is a $\tau$-covering we see that $R \to A$ is faithfully flat.
+On the other hand, by definition of $\mathcal{F}^a$ we have
+that the degree $p$ term $\check{\mathcal{C}}^p(\mathcal{V}, \mathcal{F}^a)$
+is
+$$
+\Gamma(V \times_U \ldots \times_U V, \mathcal{F}^a)
+=
+\Gamma(\Spec(A \otimes_R \ldots \otimes_R A), \mathcal{F}^a)
+=
+A \otimes_R \ldots \otimes_R A \otimes_R M
+$$
+We omit the verification that the maps of the {\v C}ech complex agree with
+the maps in the complex $s((A/R)_\bullet \otimes_R M)$. The vanishing
+of cohomology is Lemma \ref{lemma-ff-exact}.
+\end{proof}
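\noindent
Unwinding the definitions of Section \ref{section-descent-modules}
(not needed in the sequel), the complex $s((A/R)_\bullet \otimes_R M)$
has degree $p$ term $A^{\otimes (p + 1)} \otimes_R M$ and differentials
the alternating sums of the maps inserting a $1$ in the various spots.
For instance, in degree $0$ the differential is
$$
A \otimes_R M \longrightarrow A \otimes_R A \otimes_R M,
\quad
a \otimes m \longmapsto 1 \otimes a \otimes m - a \otimes 1 \otimes m
$$
and Lemma \ref{lemma-ff-exact} identifies its kernel with the image of
$M \to A \otimes_R M$, $m \mapsto 1 \otimes m$.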
+
+\begin{proposition}
+\label{proposition-same-cohomology-quasi-coherent}
+\begin{slogan}
+Cohomology of quasi-coherent sheaves is the same no matter which
+topology you use.
+\end{slogan}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a quasi-coherent sheaf on $S$.
+Let $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0]
+smooth, \linebreak[0] syntomic, \linebreak[0] fppf\}$.
+\begin{enumerate}
+\item There is a canonical isomorphism
+$$
+H^q(S, \mathcal{F}) = H^q((\Sch/S)_\tau, \mathcal{F}^a).
+$$
+\item There are canonical isomorphisms
+$$
+H^q(S, \mathcal{F}) =
+H^q(S_{Zar}, \mathcal{F}^a) =
+H^q(S_\etale, \mathcal{F}^a).
+$$
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+The result for $q = 0$ is clear from the definition of $\mathcal{F}^a$.
+Let $\mathcal{C} = (\Sch/S)_\tau$, or $\mathcal{C} = S_\etale$,
+or $\mathcal{C} = S_{Zar}$.
+
+\medskip\noindent
+We are going to apply
+Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-cech-vanish-collection}
+with $\mathcal{F} = \mathcal{F}^a$,
+$\mathcal{B} \subset \Ob(\mathcal{C})$ the set of affine schemes
+in $\mathcal{C}$, and $\text{Cov} \subset \text{Cov}_\mathcal{C}$ the
+set of standard affine $\tau$-coverings. Assumption (3) of
+the lemma is satisfied by
+Lemma \ref{lemma-standard-covering-Cech-quasi-coherent}.
+Hence we conclude that $H^p(U, \mathcal{F}^a) = 0$ for every
+affine object $U$ of $\mathcal{C}$.
+
+\medskip\noindent
+Next, let $U \in \Ob(\mathcal{C})$ be any separated object.
+Denote $f : U \to S$ the structure morphism.
+Let $U = \bigcup U_i$ be an affine open covering.
+We may also think of this as a $\tau$-covering
+$\mathcal{U} = \{U_i \to U\}$ of $U$ in $\mathcal{C}$.
+Note that
+$U_{i_0} \times_U \ldots \times_U U_{i_p} =
+U_{i_0} \cap \ldots \cap U_{i_p}$ is affine as we assumed $U$ separated.
+By
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-cech-spectral-sequence-application}
+and the result above we see that
+$$
+H^p(U, \mathcal{F}^a) = \check{H}^p(\mathcal{U}, \mathcal{F}^a)
+= H^p(U, f^*\mathcal{F})
+$$
+the last equality by
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cech-cohomology-quasi-coherent}.
+In particular, if $S$ is separated we can take $U = S$ and
+$f = \text{id}_S$ and the proposition is proved.
+We suggest the reader skip the rest of the proof (or rewrite it
+to give a clearer exposition).
+
+\medskip\noindent
+Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$ on $S$.
+Choose an injective resolution $\mathcal{F}^a \to \mathcal{J}^\bullet$
+on $\mathcal{C}$. Denote $\mathcal{J}^n|_S$ the restriction of $\mathcal{J}^n$
+to opens of $S$; this is a sheaf on the topological space $S$ as open
+coverings are $\tau$-coverings. We get a complex
+$$
+0 \to \mathcal{F} \to \mathcal{J}^0|_S \to \mathcal{J}^1|_S \to \ldots
+$$
which is exact since its sections over any affine open $U \subset S$
are exact (by the vanishing of $H^p(U, \mathcal{F}^a)$ for $p > 0$ seen
+above). Hence by
+Derived Categories, Lemma \ref{derived-lemma-morphisms-lift}
there exists a map of complexes
+$\mathcal{J}^\bullet|_S \to \mathcal{I}^\bullet$ which in particular
+induces a map
+$$
+R\Gamma(\mathcal{C}, \mathcal{F}^a)
+=
+\Gamma(S, \mathcal{J}^\bullet)
+\longrightarrow
+\Gamma(S, \mathcal{I}^\bullet)
+=
+R\Gamma(S, \mathcal{F}).
+$$
+Taking cohomology gives the map
+$H^n(\mathcal{C}, \mathcal{F}^a) \to H^n(S, \mathcal{F})$ which
+we have to prove is an isomorphism.
+Let $\mathcal{U} : S = \bigcup U_i$ be an affine open covering
+which we may think of as a $\tau$-covering also.
+By the above we get a map of double complexes
+$$
\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{J}^\bullet)
=
\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{J}^\bullet|_S)
\longrightarrow
\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet).
+$$
+This map induces a map of spectral sequences
+$$
+{}^\tau\! E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}^a))
+\longrightarrow
+E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))
+$$
+The first spectral sequence converges to
$H^{p + q}(\mathcal{C}, \mathcal{F}^a)$ and the second to
+$H^{p + q}(S, \mathcal{F})$. On the other hand, we have seen
+that the induced maps ${}^\tau\! E_2^{p, q} \to E_2^{p, q}$ are
bijections (as all the intersections are separated, being open subschemes
of affine schemes).
+Whence also the maps $H^n(\mathcal{C}, \mathcal{F}^a) \to H^n(S, \mathcal{F})$
+are isomorphisms, and we win.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-equivalence-quasi-coherent-functorial}
+Let $f : T \to S$ be a morphism of schemes.
+\begin{enumerate}
+\item The equivalences of categories of
+Proposition \ref{proposition-equivalence-quasi-coherent}
+are compatible with pullback.
+More precisely, we have $f^*(\mathcal{G}^a) = (f^*\mathcal{G})^a$
+for any quasi-coherent sheaf $\mathcal{G}$ on $S$.
+\item The equivalences of categories of
+Proposition \ref{proposition-equivalence-quasi-coherent} part (1)
+are {\bf not} compatible with pushforward in general.
+\item If $f$ is quasi-compact and quasi-separated, and
+$\tau \in \{Zariski, \etale\}$ then $f_*$ and $f_{small, *}$
+preserve quasi-coherent sheaves and the diagram
+$$
+\xymatrix{
+\QCoh(\mathcal{O}_T)
+\ar[rr]_{f_*} \ar[d]_{\mathcal{F} \mapsto \mathcal{F}^a} & &
+\QCoh(\mathcal{O}_S)
+\ar[d]^{\mathcal{G} \mapsto \mathcal{G}^a} \\
+\QCoh(T_\tau, \mathcal{O}) \ar[rr]^{f_{small, *}} & &
+\QCoh(S_\tau, \mathcal{O})
+}
+$$
+is commutative, i.e., $f_{small, *}(\mathcal{F}^a) = (f_*\mathcal{F})^a$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Part (1) follows from the discussion in
+Remark \ref{remark-change-topologies-ringed-sites}.
+Part (2) is just a warning, and can be explained in the following way:
First, the statement cannot be made precise, since $f_*$ does not
transform quasi-coherent sheaves into quasi-coherent sheaves in general.
Even if it does for $f$ and all base changes of $f$, the
compatibility over the big sites would mean that formation of $f_*\mathcal{F}$
commutes with arbitrary base change, which does not hold in general.
+An explicit example is the quasi-compact open immersion
+$j : X = \mathbf{A}^2_k \setminus \{0\} \to \mathbf{A}^2_k = Y$
+where $k$ is a field. We have $j_*\mathcal{O}_X = \mathcal{O}_Y$
+but after base change to $\Spec(k)$ by the $0$ map
+we see that the pushforward is zero.
+
+\medskip\noindent
+Let us prove (3) in case $\tau = \etale$. Note that $f$, and any
+base change of $f$, transforms quasi-coherent sheaves
+into quasi-coherent sheaves, see
+Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}.
+The equality $f_{small, *}(\mathcal{F}^a) = (f_*\mathcal{F})^a$
+means that for any \'etale morphism $g : U \to S$ we have
+$\Gamma(U, g^*f_*\mathcal{F}) = \Gamma(U \times_S T, (g')^*\mathcal{F})$
+where $g' : U \times_S T \to T$ is the projection. This is true by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-higher-direct-images-small-etale}
+Let $f : T \to S$ be a quasi-compact and quasi-separated morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $T$. For either the \'etale
+or Zariski topology, there are canonical isomorphisms
+$R^if_{small, *}(\mathcal{F}^a) = (R^if_*\mathcal{F})^a$.
+\end{lemma}
+
+\begin{proof}
+We prove this for the \'etale topology; we omit the proof in the case
+of the Zariski topology. By Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images}
+the sheaves $R^if_*\mathcal{F}$ are quasi-coherent so that the assertion
+makes sense. The sheaf $R^if_{small, *}(\mathcal{F}^a)$ is the sheaf associated
+to the presheaf
+$$
+U \longmapsto H^i(U \times_S T, \mathcal{F}^a)
+$$
+where $g : U \to S$ is an object of $S_\etale$, see
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-higher-direct-images}.
+By our conventions the right hand side is the \'etale
+cohomology of the restriction of $\mathcal{F}^a$ to the localization
+$T_\etale/U \times_S T$ which equals
+$(U \times_S T)_\etale$. By
+Proposition \ref{proposition-same-cohomology-quasi-coherent}
+this presheaf is the same as the presheaf
+$$
+U \longmapsto
+H^i(U \times_S T, (g')^*\mathcal{F}),
+$$
+where $g' : U \times_S T \to T$ is the projection. If $U$ is affine
+then this is the same as $H^0(U, R^if'_*(g')^*\mathcal{F})$, see
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images-application}.
+By
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}
+this is equal to $H^0(U, g^*R^if_*\mathcal{F})$ which is the value
+of $(R^if_*\mathcal{F})^a$ on $U$.
+Thus the values of the sheaves of modules
+$R^if_{small, *}(\mathcal{F}^a)$ and $(R^if_*\mathcal{F})^a$
+agree canonically on every affine object of $S_\etale$,
+which implies that the sheaves themselves are canonically isomorphic.
+\end{proof}
+
+
+
+
+\section{Quasi-coherent sheaves and topologies, II}
+\label{section-quasi-coherent-sheaves-bis}
+
+\noindent
+We continue the discussion comparing quasi-coherent modules on a scheme $S$
+with quasi-coherent modules on any of the sites associated to $S$
+in the chapter on topologies.
+
+
+\begin{lemma}
+\label{lemma-compare-etale-zariski-flat}
+In Lemma \ref{lemma-compare-sites} the morphism of ringed
+sites $\text{id}_{small, \etale, Zar} : S_\etale \to S_{Zar}$ is flat.
+\end{lemma}
+
+\begin{proof}
+Let us denote $\epsilon = \text{id}_{small, \etale, Zar}$ and
+$\mathcal{O}_\etale$ and $\mathcal{O}_{Zar}$ the structure
+sheaves on $S_\etale$ and $S_{Zar}$. We have to show that
+$\mathcal{O}_\etale$ is a flat $\epsilon^{-1}\mathcal{O}_{Zar}$-module.
+Recall that \'etale morphisms are open, see
+Morphisms, Lemma \ref{morphisms-lemma-etale-open}.
+It follows (from the construction of pullback on sheaves)
+that $\epsilon^{-1}\mathcal{O}_{Zar}$ is the sheafification
+of the presheaf $\mathcal{O}'$ on $S_\etale$
+which sends an \'etale morphism $f : V \to S$ to $\mathcal{O}_S(f(V))$.
+If both $V$ and $U = f(V) \subset S$ are affine,
+then $V \to U$ is an \'etale morphism of affines,
+hence corresponds to an \'etale ring map.
+Since \'etale ring maps are flat, we see that
+$\mathcal{O}_S(U) = \mathcal{O}'(V) \to
+\mathcal{O}_\etale(V) = \mathcal{O}_V(V)$ is flat.
+Finally, for every \'etale morphism $f : V \to S$, i.e., object of
+$S_\etale$, there is an affine open covering $V = \bigcup V_i$
+such that $f(V_i)$ is an affine open in
+$S$ for all $i$\footnote{Namely, for $y \in V$, we pick an affine open
+$y \in V' \subset V$ with $f(V')$ contained in an affine
+open $U \subset S$. Then we pick an affine open $f(y) \in U' \subset f(V')$.
+Then $V'' = f^{-1}(U') \subset V'$ is affine as it is equal
+to $U' \times_U V'$ and $f(V'') = U'$ is affine too.}.
+Thus the result follows from Modules on Sites, Lemma
+\ref{sites-modules-lemma-flatness-sheafification-refined}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equivalence-quasi-coherent-limits}
+Let $S$ be a scheme.
+Let $\tau \in \{Zariski, \linebreak[0] \etale,
+\linebreak[0] smooth, \linebreak[0] syntomic, \linebreak[0] fppf\}$.
+The functors
+$$
+\QCoh(\mathcal{O}_S)
+\longrightarrow
+\textit{Mod}((\Sch/S)_\tau, \mathcal{O})
+\quad\text{and}\quad
+\QCoh(\mathcal{O}_S)
+\longrightarrow
+\textit{Mod}(S_\tau, \mathcal{O})
+$$
+defined by the rule $\mathcal{F} \mapsto \mathcal{F}^a$ seen in
+Proposition \ref{proposition-equivalence-quasi-coherent}
+are
+\begin{enumerate}
+\item fully faithful,
+\item compatible with direct sums,
+\item compatible with colimits,
+\item right exact,
+\item exact as a functor
+$\QCoh(\mathcal{O}_S) \to \textit{Mod}(S_\etale, \mathcal{O})$,
+\item {\bf not} exact as a functor
+$\QCoh(\mathcal{O}_S) \to
+\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$
+in general,
+\item given two quasi-coherent $\mathcal{O}_S$-modules
+$\mathcal{F}$, $\mathcal{G}$ we have
+$(\mathcal{F} \otimes_{\mathcal{O}_S} \mathcal{G})^a =
+\mathcal{F}^a \otimes_\mathcal{O} \mathcal{G}^a$,
+\item if $\tau = \etale$ or $\tau = Zariski$,
+given two quasi-coherent $\mathcal{O}_S$-modules
+$\mathcal{F}$, $\mathcal{G}$ such that $\mathcal{F}$
+is of finite presentation we have
+$(\SheafHom_{\mathcal{O}_S}(\mathcal{F}, \mathcal{G}))^a =
+\SheafHom_\mathcal{O}(\mathcal{F}^a, \mathcal{G}^a)$ in
+$\textit{Mod}(S_\tau, \mathcal{O})$,
+\item given two quasi-coherent $\mathcal{O}_S$-modules
+$\mathcal{F}$, $\mathcal{G}$ we do {\bf not} have
+$(\SheafHom_{\mathcal{O}_S}(\mathcal{F}, \mathcal{G}))^a =
+\SheafHom_\mathcal{O}(\mathcal{F}^a, \mathcal{G}^a)$
+in $\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$ in general
+even if $\mathcal{F}$ is of finite presentation, and
+\item given a short exact sequence
+$0 \to \mathcal{F}_1^a \to \mathcal{E} \to \mathcal{F}_2^a \to 0$
+of $\mathcal{O}$-modules then $\mathcal{E}$ is
+quasi-coherent\footnote{Warning: This is misleading. See part (6).}, i.e.,
+$\mathcal{E}$ is in the essential image of the functor.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) we saw in
+Proposition \ref{proposition-equivalence-quasi-coherent}.
+
+\medskip\noindent
+We have seen in
+Schemes, Section \ref{schemes-section-quasi-coherent}
+that a colimit of quasi-coherent sheaves on a scheme is a quasi-coherent
+sheaf. Moreover, in
+Remark \ref{remark-change-topologies-ringed-sites}
+we saw that $\mathcal{F} \mapsto \mathcal{F}^a$ is the pullback functor
+for a morphism of ringed sites, hence commutes with all colimits, see
+Modules on Sites, Lemma
+\ref{sites-modules-lemma-exactness-pushforward-pullback}.
+Thus (3) and its special case (2) hold.
+
+\medskip\noindent
+This also shows that the functor is right exact (i.e., commutes with
+finite colimits), hence (4).
+
+\medskip\noindent
+The functor $\QCoh(\mathcal{O}_S) \to
+\QCoh(S_\etale, \mathcal{O})$,
+$\mathcal{F} \mapsto \mathcal{F}^a$
+is left exact because an \'etale morphism is flat, see
+Morphisms, Lemma \ref{morphisms-lemma-etale-flat}.
+This proves (5).
+
+\medskip\noindent
+To see (6), suppose that $S = \Spec(\mathbf{Z})$.
+Then $2 : \mathcal{O}_S \to \mathcal{O}_S$ is injective but the associated
+map of $\mathcal{O}$-modules on $(\Sch/S)_\tau$ isn't
+injective because $2 : \mathbf{F}_2 \to \mathbf{F}_2$ isn't injective
+and $\Spec(\mathbf{F}_2)$ is an object of $(\Sch/S)_\tau$.
+
+\medskip\noindent
+Part (7) holds because, as mentioned above, the functor
+$\mathcal{F} \mapsto \mathcal{F}^a$ is the pullback functor
+for a morphism of ringed sites, and such functors commute with tensor
+products by Modules on Sites, Lemma
+\ref{sites-modules-lemma-tensor-product-pullback}.
+
+\medskip\noindent
+Part (8) is obvious if $\tau = Zariski$ because the category of
+$\mathcal{O}$-modules on $S_{Zar}$ is the same as the category
+of $\mathcal{O}_S$-modules on the topological space $S$.
+If $\tau = \etale$ then (8) holds because, as mentioned above,
+the functor $\mathcal{F} \mapsto \mathcal{F}^a$ is the pullback functor
+for the flat morphism of ringed sites
+$(S_\etale, \mathcal{O}) \to (S_{Zar}, \mathcal{O}_S)$, see
+Lemma \ref{lemma-compare-etale-zariski-flat}.
+Pullback by flat morphisms of ringed sites commutes with
+taking internal hom out of a finitely presented module by
+Modules on Sites, Lemma \ref{sites-modules-lemma-pullback-internal-hom}.
+
+\medskip\noindent
+To see (9), suppose that $S = \Spec(\mathbf{Z})$. Let
+$\mathcal{F} = \Coker(2 : \mathcal{O}_S \to \mathcal{O}_S)$
+and $\mathcal{G} = \mathcal{O}_S$.
+Then $\mathcal{F}^a = \Coker(2 : \mathcal{O} \to \mathcal{O})$
+and $\mathcal{G}^a = \mathcal{O}$.
+Hence
+$\SheafHom_\mathcal{O}(\mathcal{F}^a, \mathcal{G}^a) = \mathcal{O}[2]$
+is equal to the $2$-torsion in $\mathcal{O}$, which is not zero,
+see proof of (6). On the other hand, the module
+$\SheafHom_{\mathcal{O}_S}(\mathcal{F}, \mathcal{G})$
+is zero.
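+Concretely, taking sections over $T = \Spec(\mathbf{F}_2)$, an object of
+$(\Sch/S)_\tau$, we have $2 = 0$ in $\mathcal{O}(T) = \mathbf{F}_2$ and
+hence
+$$
+\mathcal{O}[2](T) = \{a \in \mathbf{F}_2 \mid 2a = 0\} = \mathbf{F}_2,
+$$
+whereas on the scheme $S$ we get
+$\Hom_{\mathbf{Z}}(\mathbf{Z}/2\mathbf{Z}, \mathbf{Z}) = 0$.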
+
+\medskip\noindent
+Proof of (10).
+Let $0 \to \mathcal{F}_1^a \to \mathcal{E} \to \mathcal{F}_2^a \to 0$
+be a short exact sequence of $\mathcal{O}$-modules with $\mathcal{F}_1$
+and $\mathcal{F}_2$ quasi-coherent on $S$. Consider the restriction
+$$
+0 \to \mathcal{F}_1 \to \mathcal{E}|_{S_{Zar}} \to \mathcal{F}_2
+$$
+to $S_{Zar}$. By
+Proposition \ref{proposition-same-cohomology-quasi-coherent}
+we see that on any affine $U \subset S$ we have
+$H^1(U, \mathcal{F}_1^a) = H^1(U, \mathcal{F}_1) = 0$.
+Hence the sequence above is also exact on the right. By
+Schemes, Section \ref{schemes-section-quasi-coherent}
+we conclude that $\mathcal{F} = \mathcal{E}|_{S_{Zar}}$ is
+quasi-coherent. Thus we obtain a commutative diagram
+$$
+\xymatrix{
+& \mathcal{F}_1^a \ar[r] \ar[d] &
+\mathcal{F}^a \ar[r] \ar[d] &
+\mathcal{F}_2^a \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+\mathcal{F}_1^a \ar[r] &
+\mathcal{E} \ar[r] &
+\mathcal{F}_2^a \ar[r] & 0
+}
+$$
+To finish the proof it suffices to show that the top row is also
+right exact. To do this, denote once more $U = \Spec(A) \subset S$
+an affine open of $S$. We have seen above that
+$0 \to \mathcal{F}_1(U) \to \mathcal{E}(U) \to \mathcal{F}_2(U) \to 0$
+is exact. For any affine scheme $V/U$,
+$V = \Spec(B)$ the map $\mathcal{F}_1^a(V) \to \mathcal{E}(V)$
+is injective. We have $\mathcal{F}_1^a(V) = \mathcal{F}_1(U) \otimes_A B$
+by definition. The injection
+$\mathcal{F}_1^a(V) \to \mathcal{E}(V)$ factors as
+$$
+\mathcal{F}_1(U) \otimes_A B \to
+\mathcal{E}(U) \otimes_A B \to \mathcal{E}(V)
+$$
+Considering $A$-algebras $B$ of the form $B = A \oplus M$
+we see that $\mathcal{F}_1(U) \to \mathcal{E}(U)$ is
+universally injective (see
+Algebra, Definition \ref{algebra-definition-universally-injective}).
+Since $\mathcal{E}(U) = \mathcal{F}(U)$ we conclude that
+$\mathcal{F}_1 \to \mathcal{F}$ remains injective after any base change,
+or equivalently that $\mathcal{F}_1^a \to \mathcal{F}^a$ is injective.
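+
+\medskip\noindent
+To spell out the key step (a sketch of the standard square zero
+argument): given an $A$-module $M$, take $B = A \oplus M$ with $M$ an
+ideal of square zero. Then
+$$
+\mathcal{F}_1(U) \otimes_A B =
+\mathcal{F}_1(U) \oplus (\mathcal{F}_1(U) \otimes_A M)
+$$
+and the injectivity of $\mathcal{F}_1^a(V) \to \mathcal{E}(V)$ for
+$V = \Spec(B)$ implies that
+$\mathcal{F}_1(U) \otimes_A M \to \mathcal{E}(U) \otimes_A M$
+is injective. As $M$ is arbitrary, this is precisely universal
+injectivity of $\mathcal{F}_1(U) \to \mathcal{E}(U)$.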
+\end{proof}
+
+\begin{lemma}
+\label{lemma-properties-quasi-coherent}
+Let $S$ be a scheme. The category $\QCoh(S_\etale, \mathcal{O})$
+of quasi-coherent modules on $S_\etale$
+has the following properties:
+\begin{enumerate}
+\item Any direct sum of quasi-coherent sheaves is quasi-coherent.
+\item Any colimit of quasi-coherent sheaves is quasi-coherent.
+\item The kernel and cokernel of a morphism of quasi-coherent sheaves
+is quasi-coherent.
+\item Given a short exact sequence of $\mathcal{O}$-modules
+$0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
+if two out of three are quasi-coherent so is the third.
+\item Given two quasi-coherent $\mathcal{O}$-modules
+the tensor product is quasi-coherent.
+\item Given two quasi-coherent $\mathcal{O}$-modules
+$\mathcal{F}$, $\mathcal{G}$ such that $\mathcal{F}$
+is of finite presentation,
+then the internal hom
+$\SheafHom_\mathcal{O}(\mathcal{F}, \mathcal{G})$
+is quasi-coherent.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The corresponding facts hold for quasi-coherent modules on the scheme $S$,
+see Schemes, Section \ref{schemes-section-quasi-coherent}. The proof will
+be to use Lemma \ref{lemma-equivalence-quasi-coherent-limits} to transfer
+these truths to $S_\etale$.
+
+\medskip\noindent
+Proof of (1). Let $\mathcal{F}_i$, $i \in I$ be a family of objects of
+$\QCoh(S_\etale, \mathcal{O})$. Write $\mathcal{F}_i = \mathcal{G}_i^a$
+for some quasi-coherent modules $\mathcal{G}_i$ on $S$.
+Then $\bigoplus \mathcal{F}_i = (\bigoplus \mathcal{G}_i)^a$ by
+the lemma cited and we conclude.
+
+\medskip\noindent
+Proof of (2). Let $\mathcal{I} \to \QCoh(S_\etale, \mathcal{O})$,
+$i \mapsto \mathcal{F}_i$ be a diagram. Write
+$\mathcal{F}_i = \mathcal{G}_i^a$ so we get a diagram
+$\mathcal{I} \to \QCoh(\mathcal{O}_S)$.
+Then $\colim \mathcal{F}_i = (\colim \mathcal{G}_i)^a$ by
+the lemma cited and we conclude.
+
+\medskip\noindent
+Proof of (3). Let $a : \mathcal{F} \to \mathcal{F}'$
+be an arrow of $\QCoh(S_\etale, \mathcal{O})$.
+Write $a = b^a$ for some map $b : \mathcal{G} \to \mathcal{G}'$
+of quasi-coherent modules on $S$. By the lemma cited
+we have $\Ker(a) = \Ker(b)^a$ and $\Coker(a) = \Coker(b)^a$
+and we conclude.
+
+\medskip\noindent
+Proof of (4). This follows from (3) except in the case when
+we know $\mathcal{F}_1$ and $\mathcal{F}_3$ are quasi-coherent.
+In this case write $\mathcal{F}_1 = \mathcal{G}_1^a$
+and $\mathcal{F}_3 = \mathcal{G}_3^a$ with
+$\mathcal{G}_i$ quasi-coherent on $S$.
+By Lemma \ref{lemma-equivalence-quasi-coherent-limits} part (10)
+we conclude.
+
+\medskip\noindent
+Proof of (5). Let $\mathcal{F}$ and $\mathcal{F}'$
+be in $\QCoh(S_\etale, \mathcal{O})$.
+Write $\mathcal{F} = \mathcal{G}^a$
+and $\mathcal{F}' = (\mathcal{G}')^a$
+with $\mathcal{G}$ and $\mathcal{G}'$ quasi-coherent on $S$.
+By the lemma cited we have
+$\mathcal{F} \otimes_\mathcal{O} \mathcal{F}' =
+(\mathcal{G} \otimes_{\mathcal{O}_S} \mathcal{G}')^a$
+and we conclude.
+
+\medskip\noindent
+Proof of (6). Let $\mathcal{F}$ and $\mathcal{G}$
+be in $\QCoh(S_\etale, \mathcal{O})$ with $\mathcal{F}$
+of finite presentation. Write $\mathcal{F} = \mathcal{H}^a$
+and $\mathcal{G} = (\mathcal{I})^a$
+with $\mathcal{H}$ and $\mathcal{I}$ quasi-coherent on $S$.
+By Lemma \ref{lemma-equivalence-quasi-coherent-properties}
+we see that $\mathcal{H}$ is of finite presentation.
+By Lemma \ref{lemma-equivalence-quasi-coherent-limits} part (8)
+we have
+$\SheafHom_\mathcal{O}(\mathcal{F}, \mathcal{G}) =
+(\SheafHom_{\mathcal{O}_S}(\mathcal{H}, \mathcal{I}))^a$
+and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-properties-quasi-coherent-on-big}
+Let $S$ be a scheme.
+Let $\tau \in \{Zariski, \linebreak[0] \etale, \linebreak[0]
+smooth, \linebreak[0] syntomic, \linebreak[0] fppf\}$.
+The category $\QCoh((\Sch/S)_\tau, \mathcal{O})$
+of quasi-coherent modules on $(\Sch/S)_\tau$
+has the following properties:
+\begin{enumerate}
+\item Any direct sum of quasi-coherent sheaves is quasi-coherent.
+\item Any colimit of quasi-coherent sheaves is quasi-coherent.
+\item The cokernel of a morphism of quasi-coherent sheaves
+is quasi-coherent.
+\item Given a short exact sequence of $\mathcal{O}$-modules
+$0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
+if $\mathcal{F}_1$ and $\mathcal{F}_3$ are quasi-coherent so is
+$\mathcal{F}_2$.
+\item Given two quasi-coherent $\mathcal{O}$-modules
+the tensor product is quasi-coherent.
+\item Given two quasi-coherent $\mathcal{O}$-modules
+$\mathcal{F}$, $\mathcal{G}$ such that $\mathcal{F}$
+is finite locally free, the internal hom
+$\SheafHom_\mathcal{O}(\mathcal{F}, \mathcal{G})$
+is quasi-coherent.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The corresponding facts hold for quasi-coherent modules on the scheme $S$,
+see Schemes, Section \ref{schemes-section-quasi-coherent}. The proof will
+be to use Lemma \ref{lemma-equivalence-quasi-coherent-limits} to transfer
+these truths to $(\Sch/S)_\tau$.
+
+\medskip\noindent
+Proof of (1). Let $\mathcal{F}_i$, $i \in I$ be a family of objects of
+$\QCoh((\Sch/S)_\tau, \mathcal{O})$. Write $\mathcal{F}_i = \mathcal{G}_i^a$
+for some quasi-coherent modules $\mathcal{G}_i$ on $S$.
+Then $\bigoplus \mathcal{F}_i = (\bigoplus \mathcal{G}_i)^a$ by
+the lemma cited and we conclude.
+
+\medskip\noindent
+Proof of (2). Let $\mathcal{I} \to \QCoh((\Sch/S)_\tau, \mathcal{O})$,
+$i \mapsto \mathcal{F}_i$ be a diagram. Write
+$\mathcal{F}_i = \mathcal{G}_i^a$ so we get a diagram
+$\mathcal{I} \to \QCoh(\mathcal{O}_S)$.
+Then $\colim \mathcal{F}_i = (\colim \mathcal{G}_i)^a$ by
+the lemma cited and we conclude.
+
+\medskip\noindent
+Proof of (3). Let $a : \mathcal{F} \to \mathcal{F}'$
+be an arrow of $\QCoh((\Sch/S)_\tau, \mathcal{O})$.
+Write $a = b^a$ for some map $b : \mathcal{G} \to \mathcal{G}'$
+of quasi-coherent modules on $S$. By the lemma cited
+we have $\Coker(a) = \Coker(b)^a$ (because a cokernel is a colimit)
+and we conclude.
+
+\medskip\noindent
+Proof of (4). Write $\mathcal{F}_1 = \mathcal{G}_1^a$
+and $\mathcal{F}_3 = \mathcal{G}_3^a$ with
+$\mathcal{G}_i$ quasi-coherent on $S$.
+By Lemma \ref{lemma-equivalence-quasi-coherent-limits} part (10)
+we conclude.
+
+\medskip\noindent
+Proof of (5). Let $\mathcal{F}$ and $\mathcal{F}'$ be in
+$\QCoh((\Sch/S)_\tau, \mathcal{O})$. Write $\mathcal{F} = \mathcal{G}^a$
+and $\mathcal{F}' = (\mathcal{G}')^a$
+with $\mathcal{G}$ and $\mathcal{G}'$ quasi-coherent on $S$.
+By the lemma cited we have
+$\mathcal{F} \otimes_\mathcal{O} \mathcal{F}' =
+(\mathcal{G} \otimes_{\mathcal{O}_S} \mathcal{G}')^a$
+and we conclude.
+
+\medskip\noindent
+Proof of (6). Write $\mathcal{F} = \mathcal{H}^a$ for some
+quasi-coherent $\mathcal{O}_S$-module. By
+Lemma \ref{lemma-equivalence-quasi-coherent-properties}
+we see that $\mathcal{H}$ is finite locally free.
+The problem is Zariski local on $S$ (details omitted), hence
+we may assume $\mathcal{H} = \mathcal{O}_S^{\oplus n}$ is
+finite free. Then $\mathcal{F} = \mathcal{O}^{\oplus n}$
+and $\SheafHom_\mathcal{O}(\mathcal{F}, \mathcal{G}) = \mathcal{G}^{\oplus n}$
+is quasi-coherent.
+\end{proof}
+
+\begin{example}
+\label{example-internal-hom-not-qcoh}
+Let $S$ be a scheme. Let $\mathcal{F}$ and $\mathcal{G}$ be quasi-coherent
+modules on $(\Sch/S)_\tau$ for one of the topologies $\tau$ considered in
+Lemma \ref{lemma-properties-quasi-coherent-on-big}.
+In general it is not the case that
+$\SheafHom_\mathcal{O}(\mathcal{F}, \mathcal{G})$
+is quasi-coherent even if $\mathcal{F}$ is of finite presentation.
+Namely, say $S = \Spec(\mathbf{Z})$,
+$\mathcal{F} = \Coker(2 : \mathcal{O} \to \mathcal{O})$,
+and $\mathcal{G} = \mathcal{O}$. Then
+$\SheafHom_\mathcal{O}(\mathcal{F}, \mathcal{G}) = \mathcal{O}[2]$
+is equal to the $2$-torsion in $\mathcal{O}$, which is not quasi-coherent.
+\end{example}
+
+\begin{lemma}
+\label{lemma-qc-colimits}
+Let $S$ be a scheme.
+\begin{enumerate}
+\item The category $\QCoh((\Sch/S)_{fppf}, \mathcal{O})$
+has colimits and they agree with colimits in the categories
+$\textit{Mod}((\Sch/S)_{Zar}, \mathcal{O})$,
+$\textit{Mod}((\Sch/S)_\etale, \mathcal{O})$, and
+$\textit{Mod}((\Sch/S)_{fppf}, \mathcal{O})$.
+\item Given $\mathcal{F}, \mathcal{G}$ in $\QCoh((\Sch/S)_{fppf}, \mathcal{O})$
+the tensor products $\mathcal{F} \otimes_\mathcal{O} \mathcal{G}$
+computed in $\textit{Mod}((\Sch/S)_{Zar}, \mathcal{O})$,
+$\textit{Mod}((\Sch/S)_\etale, \mathcal{O})$, or
+$\textit{Mod}((\Sch/S)_{fppf}, \mathcal{O})$ agree and the common value
+is an object of $\QCoh((\Sch/S)_{fppf}, \mathcal{O})$.
+\item Given $\mathcal{F}, \mathcal{G}$ in $\QCoh((\Sch/S)_{fppf}, \mathcal{O})$
+with $\mathcal{F}$ finite locally free (in fppf, or equivalently \'etale, or
+equivalently Zariski topology) the internal homs
+$\SheafHom_\mathcal{O}(\mathcal{F}, \mathcal{G})$
+computed in $\textit{Mod}((\Sch/S)_{Zar}, \mathcal{O})$,
+$\textit{Mod}((\Sch/S)_\etale, \mathcal{O})$, or
+$\textit{Mod}((\Sch/S)_{fppf}, \mathcal{O})$ agree and the common value
+is an object of $\QCoh((\Sch/S)_{fppf}, \mathcal{O})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma collects the results shown above in a slightly different manner.
+First of all, by Lemma \ref{lemma-properties-quasi-coherent-on-big}
+we already know the output of the construction in (1), (2), or (3)
+ends up in $\QCoh((\Sch/S)_\tau, \mathcal{O})$.
+It remains to show in each case that the result is
+independent of the topology used. The key to this is that the equivalence
+$\QCoh(\mathcal{O}_S) \to \QCoh((\Sch/S)_\tau, \mathcal{O})$,
+$\mathcal{F} \mapsto \mathcal{F}^a$
+of Proposition \ref{proposition-equivalence-quasi-coherent}
+is given by the same formula independent of the choice
+of the topology $\tau \in \{Zariski, \etale, fppf\}$.
+
+\medskip\noindent
+Proof of (1). Let $\mathcal{I} \to \QCoh((\Sch/S)_{fppf}, \mathcal{O})$,
+$i \mapsto \mathcal{F}_i$ be a diagram. Write
+$\mathcal{F}_i = \mathcal{G}_i^a$ so we get a diagram
+$\mathcal{I} \to \QCoh(\mathcal{O}_S)$.
+Then $\colim \mathcal{F}_i = (\colim \mathcal{G}_i)^a$ in
+$\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$ for
+$\tau \in \{Zariski, \etale, fppf\}$
+by Lemma \ref{lemma-equivalence-quasi-coherent-limits}.
+This proves (1).
+
+\medskip\noindent
+Proof of (2). Write $\mathcal{F} = \mathcal{H}^a$ and
+$\mathcal{G} = (\mathcal{I})^a$ with $\mathcal{H}$ and $\mathcal{I}$
+quasi-coherent on $S$. Then
+$\mathcal{F} \otimes_\mathcal{O} \mathcal{G} =
+(\mathcal{H} \otimes_{\mathcal{O}_S} \mathcal{I})^a$ in
+$\textit{Mod}((\Sch/S)_\tau, \mathcal{O})$ for
+$\tau \in \{Zariski, \etale, fppf\}$
+by Lemma \ref{lemma-equivalence-quasi-coherent-limits}.
+This proves (2).
+
+\medskip\noindent
+Proof of (3). Let $\mathcal{F}$ and $\mathcal{G}$ be in
+$\QCoh((\Sch/S)_{fppf}, \mathcal{O})$. Write
+$\mathcal{F} = \mathcal{H}^a$ with $\mathcal{H}$
+quasi-coherent on $S$. By
+Lemma \ref{lemma-equivalence-quasi-coherent-properties} we have
+\begin{align*}
+\mathcal{F}\text{ finite locally free in fppf topology}
+& \Leftrightarrow
+\mathcal{H}\text{ finite locally free on }S \\
+& \Leftrightarrow
+\mathcal{F}\text{ finite locally free in \'etale topology} \\
+& \Leftrightarrow
+\mathcal{H}\text{ finite locally free on }S \\
+& \Leftrightarrow
+\mathcal{F}\text{ finite locally free in Zariski topology}
+\end{align*}
+This explains the parenthetical statement of part (3).
+Now, if these equivalent conditions hold, then $\mathcal{H}$
+is finite locally free. The construction of
+$\SheafHom_\mathcal{O}(\mathcal{F}, \mathcal{G})$ in
+Modules on Sites, Section \ref{sites-modules-section-internal-hom}
+depends only on $\mathcal{F}$ and $\mathcal{G}$ as presheaves
+of modules (only whether the output $\SheafHom$ is
+a sheaf depends on whether $\mathcal{F}$ and $\mathcal{G}$ are
+sheaves).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Quasi-coherent modules and affines}
+\label{section-alternative-quasi-coherent}
+
+\noindent
+Let $S$ be a scheme\footnote{In this section, as in
+Topologies, Section \ref{topologies-section-change-topologies},
+we choose our sites $(\Sch/S)_\tau$ to have the same underlying category
+for $\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$. Then also
+the sites $(\textit{Aff}/S)_\tau$ have the same underlying category.}.
+Let $\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$.
+Recall that $(\textit{Aff}/S)_\tau$ is the full subcategory
+of $(\Sch/S)_\tau$ whose objects are affine turned into
+a site by declaring the coverings to be the standard $\tau$-coverings.
+By Topologies, Lemmas
+\ref{topologies-lemma-affine-big-site-Zariski},
+\ref{topologies-lemma-affine-big-site-etale},
+\ref{topologies-lemma-affine-big-site-smooth},
+\ref{topologies-lemma-affine-big-site-syntomic}, and
+\ref{topologies-lemma-affine-big-site-fppf}
+we have an equivalence of topoi
+$g : \Sh((\textit{Aff}/S)_\tau) \to \Sh((\Sch/S)_\tau)$
+whose pullback functor is given by restriction.
+Recalling that $\mathcal{O}$ denotes the structure sheaf on
+$(\Sch/S)_\tau$, let us temporarily and pedantically
+denote $\mathcal{O}_{\textit{Aff}}$
+the restriction of $\mathcal{O}$ to $(\textit{Aff}/S)_\tau$.
+Then we obtain an equivalence
+\begin{equation}
+\label{equation-alternative-ringed}
+(\Sh((\textit{Aff}/S)_\tau), \mathcal{O}_{\textit{Aff}})
+\longrightarrow
+(\Sh((\Sch/S)_\tau), \mathcal{O})
+\end{equation}
+of ringed topoi. Having said this we can compare quasi-coherent modules
+as well.
+
+\begin{lemma}
+\label{lemma-quasi-coherent-alternative}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a presheaf
+of $\mathcal{O}_{\textit{Aff}}$-modules on $(\textit{Aff}/S)_{fppf}$.
+The following are equivalent
+\begin{enumerate}
+\item for every morphism $U \to U'$ of $(\textit{Aff}/S)_{fppf}$ the map
+$\mathcal{F}(U') \otimes_{\mathcal{O}(U')} \mathcal{O}(U) \to \mathcal{F}(U)$
+is an isomorphism,
+\item $\mathcal{F}$ is a sheaf on $(\textit{Aff}/S)_{Zar}$ and
+a quasi-coherent module on the ringed site
+$((\textit{Aff}/S)_{Zar}, \mathcal{O}_{\textit{Aff}})$ in the sense of
+Modules on Sites, Definition \ref{sites-modules-definition-site-local},
+\item same as in (2) for the \'etale topology,
+\item same as in (2) for the smooth topology,
+\item same as in (2) for the syntomic topology,
+\item same as in (2) for the fppf topology,
+\item $\mathcal{F}$ corresponds to a quasi-coherent module on
+$(\Sch/S)_{Zar}$,
+$(\Sch/S)_\etale$,
+$(\Sch/S)_{smooth}$,
+$(\Sch/S)_{syntomic}$, or
+$(\Sch/S)_{fppf}$
+via the equivalence (\ref{equation-alternative-ringed}),
+\item $\mathcal{F}$ comes from a unique quasi-coherent
+$\mathcal{O}_S$-module $\mathcal{G}$ by the procedure
+described in Section \ref{section-quasi-coherent-sheaves}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since the notion of a quasi-coherent module is intrinsic
+(Modules on Sites, Lemma \ref{sites-modules-lemma-special-locally-free})
+we see that the equivalence (\ref{equation-alternative-ringed})
+induces an equivalence between categories of quasi-coherent modules.
+Proposition \ref{proposition-equivalence-quasi-coherent}
+says the topology we use to study quasi-coherent modules on
+$\Sch/S$ does not matter and it also tells us that (8)
+is the same as (7). Hence we see that (2) -- (8) are all equivalent.
+
+\medskip\noindent
+Assume the equivalent conditions (2) -- (8) hold and let
+$\mathcal{G}$ be as in (8). Let $h : U \to U' \to S$ be a morphism
+of $\textit{Aff}/S$. Denote $f : U \to S$ and $f' : U' \to S$ the
+structure morphisms, so that $f = f' \circ h$.
+We have $\mathcal{F}(U') = \Gamma(U', (f')^*\mathcal{G})$ and
+$\mathcal{F}(U) = \Gamma(U, f^*\mathcal{G}) = \Gamma(U, h^*(f')^*\mathcal{G})$.
+Hence (1) holds by Schemes, Lemma \ref{schemes-lemma-widetilde-pullback}.
+
+\medskip\noindent
+Assume (1) holds. To finish the proof it suffices to prove (2).
+Let $U$ be an object of $(\textit{Aff}/S)_{Zar}$.
+Say $U = \Spec(R)$. A standard open covering $U = U_1 \cup \ldots \cup U_n$
+is given by $U_i = D(f_i)$ for some elements $f_1, \ldots, f_n \in R$
+generating the unit ideal of $R$. By property (1) we see that
+$$
+\mathcal{F}(U_i) =
+\mathcal{F}(U) \otimes_R R_{f_i} =
+\mathcal{F}(U)_{f_i}
+$$
+and
+$$
+\mathcal{F}(U_i \cap U_j) =
+\mathcal{F}(U) \otimes_R R_{f_if_j} =
+\mathcal{F}(U)_{f_if_j}
+$$
+Thus we conclude from Algebra, Lemma \ref{algebra-lemma-cover-module}
+that $\mathcal{F}$ is a sheaf on $(\textit{Aff}/S)_{Zar}$. Choose a
+presentation
+$$
+\bigoplus\nolimits_{k \in K} R
+\longrightarrow
+\bigoplus\nolimits_{l \in L} R
+\longrightarrow
+\mathcal{F}(U)
+\longrightarrow 0
+$$
+by free $R$-modules. By property (1) and the right exactness of tensor product
+we see that for every morphism $U' \to U$ in $(\textit{Aff}/S)_{Zar}$
+we obtain a presentation
+$$
+\bigoplus\nolimits_{k \in K} \mathcal{O}_{Aff}(U')
+\longrightarrow
+\bigoplus\nolimits_{l \in L} \mathcal{O}_{Aff}(U')
+\longrightarrow
+\mathcal{F}(U')
+\longrightarrow 0
+$$
+In other words, we see that the restriction of $\mathcal{F}$
+to the localized category $(\textit{Aff}/S)_{Zar}/U$ has a presentation
+$$
+\bigoplus\nolimits_{k \in K} \mathcal{O}_{Aff}|_{(\textit{Aff}/S)_{Zar}/U}
+\longrightarrow
+\bigoplus\nolimits_{l \in L} \mathcal{O}_{Aff}|_{(\textit{Aff}/S)_{Zar}/U}
+\longrightarrow
+\mathcal{F}|_{(\textit{Aff}/S)_{Zar}/U}
+\longrightarrow 0
+$$
+With apologies for the horrible notation, this finishes the proof.
+\end{proof}
+
+\noindent
+We continue the discussion started in the introduction to this section.
+Let $\tau \in \{Zariski, \etale\}$. Recall that $S_{affine, \tau}$
+is the full subcategory of $S_\tau$ whose objects are affine turned
+into a site by declaring the coverings to be the standard $\tau$
+coverings. See Topologies, Definitions
+\ref{topologies-definition-big-small-Zariski} and
+\ref{topologies-definition-big-small-etale}.
+By Topologies, Lemmas \ref{topologies-lemma-alternative-zariski},
+resp.\ \ref{topologies-lemma-alternative}
+we have an equivalence of topoi $g : \Sh(S_{affine, \tau}) \to \Sh(S_\tau)$,
+whose pullback functor is given by restriction.
+Recalling that $\mathcal{O}$ denotes the structure sheaf on
+$S_\tau$ let us temporarily and pedantically denote
+$\mathcal{O}_{affine}$ the restriction of $\mathcal{O}$ to
+$S_{affine, \tau}$. Then we obtain an equivalence
+\begin{equation}
+\label{equation-alternative-small-ringed}
+(\Sh(S_{affine, \tau}), \mathcal{O}_{affine})
+\longrightarrow
+(\Sh(S_\tau), \mathcal{O})
+\end{equation}
+of ringed topoi. Having said this we can compare quasi-coherent modules
+as well.
+
+\begin{lemma}
+\label{lemma-quasi-coherent-alternative-small}
+Let $S$ be a scheme. Let $\tau \in \{Zariski, \etale\}$.
+Let $\mathcal{F}$ be a presheaf of $\mathcal{O}_{affine}$-modules
+on $S_{affine, \tau}$. The following are equivalent
+\begin{enumerate}
+\item for every morphism $U \to U'$ of $S_{affine, \tau}$ the map
+$\mathcal{F}(U') \otimes_{\mathcal{O}(U')} \mathcal{O}(U) \to \mathcal{F}(U)$
+is an isomorphism,
+\item $\mathcal{F}$ is a sheaf on $S_{affine, \tau}$ and
+a quasi-coherent module on the ringed site
+$(S_{affine, \tau}, \mathcal{O}_{affine})$ in the sense of
+Modules on Sites, Definition \ref{sites-modules-definition-site-local},
+\item $\mathcal{F}$ corresponds to a quasi-coherent module on
+$S_\tau$ via the equivalence (\ref{equation-alternative-small-ringed}),
+\item $\mathcal{F}$ comes from a unique quasi-coherent
+$\mathcal{O}_S$-module $\mathcal{G}$ by the procedure
+described in Section \ref{section-quasi-coherent-sheaves}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let us prove this in the case of the \'etale topology.
+
+\medskip\noindent
+Assume (1) holds. To show that $\mathcal{F}$ is a sheaf, let
+$\mathcal{U} = \{U_i \to U\}_{i = 1, \ldots, n}$ be a covering
+of $S_{affine, \etale}$. The sheaf condition for $\mathcal{F}$
+and $\mathcal{U}$, by our assumption on $\mathcal{F}$,
+reduces to showing that
+$$
+0 \to \mathcal{F}(U) \to
+\prod \mathcal{F}(U) \otimes_{\mathcal{O}(U)} \mathcal{O}(U_i) \to
+\prod \mathcal{F}(U) \otimes_{\mathcal{O}(U)} \mathcal{O}(U_i \times_U U_j)
+$$
+is exact. This is true because $\mathcal{O}(U) \to \prod \mathcal{O}(U_i)$
+is faithfully flat (by Lemma \ref{lemma-standard-covering-Cech} and
+the fact that coverings in $S_{affine, \etale}$ are standard \'etale
+coverings) and we may apply Lemma \ref{lemma-ff-exact}.
+Next, we show that $\mathcal{F}$ is quasi-coherent on $S_{affine, \etale}$.
+Namely, for $U$ in $S_{affine, \etale}$, set $R = \mathcal{O}(U)$
+and choose a presentation
+$$
+\bigoplus\nolimits_{k \in K} R
+\longrightarrow
+\bigoplus\nolimits_{l \in L} R
+\longrightarrow
+\mathcal{F}(U)
+\longrightarrow 0
+$$
+by free $R$-modules. By property (1) and the right exactness of tensor product
+we see that for every morphism $U' \to U$ in $S_{affine, \etale}$
+we obtain a presentation
+$$
+\bigoplus\nolimits_{k \in K} \mathcal{O}(U')
+\longrightarrow
+\bigoplus\nolimits_{l \in L} \mathcal{O}(U')
+\longrightarrow
+\mathcal{F}(U')
+\longrightarrow 0
+$$
+In other words, we see that the restriction of $\mathcal{F}$
to the localized category $S_{affine, \etale}/U$ has a presentation
+$$
+\bigoplus\nolimits_{k \in K} \mathcal{O}_{affine}|_{S_{affine, \etale}/U}
+\longrightarrow
+\bigoplus\nolimits_{l \in L} \mathcal{O}_{affine}|_{S_{affine, \etale}/U}
+\longrightarrow
+\mathcal{F}|_{S_{affine, \etale}/U}
+\longrightarrow 0
+$$
+as required to show that $\mathcal{F}$ is quasi-coherent.
+With apologies for the horrible notation, this finishes the proof
+that (1) implies (2).
+
+\medskip\noindent
+Since the notion of a quasi-coherent module is intrinsic
+(Modules on Sites, Lemma \ref{sites-modules-lemma-special-locally-free})
+we see that the equivalence (\ref{equation-alternative-small-ringed})
+induces an equivalence between categories of quasi-coherent modules.
+Thus we have the equivalence of (2) and (3).
+
+\medskip\noindent
+The equivalence of (3) and (4) follows from
+Proposition \ref{proposition-equivalence-quasi-coherent}.
+
+\medskip\noindent
+Let us assume (4) and prove (1). Namely, let
$\mathcal{G}$ be as in (4). Let $h : U \to U'$ be a morphism
+of $S_{affine, \etale}$. Denote $f : U \to S$ and $f' : U' \to S$ the
+structure morphisms, so that $f = f' \circ h$.
+We have $\mathcal{F}(U') = \Gamma(U', (f')^*\mathcal{G})$ and
+$\mathcal{F}(U) = \Gamma(U, f^*\mathcal{G}) = \Gamma(U, h^*(f')^*\mathcal{G})$.
+Hence (1) holds by Schemes, Lemma \ref{schemes-lemma-widetilde-pullback}.
+
+\medskip\noindent
+We omit the proof in the case of the Zariski topology.
+\end{proof}
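\medskip\noindent
The exactness used at the start of the proof above can be isolated in
purely algebraic terms; the following is the form of
Lemma \ref{lemma-ff-exact} used here, recalled as a sketch for the
reader's convenience. If $R \to A$ is a faithfully flat ring map and $M$
is an $R$-module, then the sequence
$$
0 \to M \to M \otimes_R A \to M \otimes_R A \otimes_R A
$$
is exact, where the first map is $m \mapsto m \otimes 1$ and the second
map sends $m \otimes a$ to $m \otimes 1 \otimes a - m \otimes a \otimes 1$.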
+
+
+
+
+
+\section{Parasitic modules}
+\label{section-parasitic}
+
+\noindent
+Parasitic modules are those which are zero when restricted
+to schemes flat over the base scheme. Here is the formal definition.
+
+\begin{definition}
+\label{definition-parasitic}
+Let $S$ be a scheme. Let $\tau \in \{Zar, \etale,
+smooth, syntomic, fppf\}$. Let $\mathcal{F}$ be a presheaf
+of $\mathcal{O}$-modules on $(\Sch/S)_\tau$.
+\begin{enumerate}
+\item $\mathcal{F}$ is called
+{\it parasitic}\footnote{This may be nonstandard notation.}
+if for every flat morphism $U \to S$ we have $\mathcal{F}(U) = 0$.
+\item $\mathcal{F}$ is called {\it parasitic for the $\tau$-topology}
+if for every $\tau$-covering $\{U_i \to S\}_{i \in I}$ we have
+$\mathcal{F}(U_i) = 0$ for all $i$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+If $\tau = fppf$ this means that $\mathcal{F}|_{U_{Zar}} = 0$ whenever
$U \to S$ is flat and locally of finite presentation; similarly for
+the other cases.
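\medskip\noindent
For example, if $S$ is the spectrum of a field, then every morphism
$U \to S$ is flat (as every module over a field is flat), so the only
parasitic module on $(\Sch/S)_\tau$ is the zero module. Nonzero parasitic
modules typically arise as kernels and cokernels of comparison maps
between sheaves which agree on schemes flat over $S$, as in
Lemma \ref{lemma-quasi-coherent-and-flat-base-change} below.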
+
+\begin{lemma}
+\label{lemma-cohomology-parasitic}
+Let $S$ be a scheme. Let $\tau \in \{Zar, \etale, smooth,
+syntomic, fppf\}$. Let $\mathcal{G}$ be a presheaf of
+$\mathcal{O}$-modules on $(\Sch/S)_\tau$.
+\begin{enumerate}
+\item If $\mathcal{G}$ is parasitic for the $\tau$-topology, then
+$H^p_\tau(U, \mathcal{G}) = 0$ for every $U$ open in $S$,
+resp.\ \'etale over $S$,
+resp.\ smooth over $S$,
+resp.\ syntomic over $S$,
+resp.\ flat and locally of finite presentation over $S$.
+\item If $\mathcal{G}$ is parasitic then $H^p_\tau(U, \mathcal{G}) = 0$
+for every $U$ flat over $S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
We give the proof in the case $\tau = fppf$; the other cases are proved in
exactly the same way. The assumption means that $\mathcal{G}(U) = 0$ for any
+$U \to S$ flat and locally of finite presentation. Apply
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cech-vanish-collection}
+to the subset $\mathcal{B} \subset \Ob((\Sch/S)_{fppf})$ consisting
+of $U \to S$ flat and locally of finite presentation and the collection
+$\text{Cov}$ of all fppf coverings of elements of $\mathcal{B}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-image-parasitic}
+Let $f : T \to S$ be a morphism of schemes. For any parasitic
$\mathcal{O}$-module $\mathcal{F}$ on $(\Sch/T)_\tau$ the pushforward
+$f_*\mathcal{F}$ and the higher direct images $R^if_*\mathcal{F}$
+are parasitic $\mathcal{O}$-modules on $(\Sch/S)_\tau$.
+\end{lemma}
+
+\begin{proof}
+Recall that $R^if_*\mathcal{F}$ is the sheaf associated to the
+presheaf
+$$
+U \mapsto H^i((\Sch/U \times_S T)_\tau, \mathcal{F})
+$$
+see
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-higher-direct-images}.
+If $U \to S$ is flat, then $U \times_S T \to T$ is flat as a base change.
+Hence the displayed group is zero by
+Lemma \ref{lemma-cohomology-parasitic}.
If $U \to S$ is flat and $\{U_i \to U\}$ is a $\tau$-covering, then
each $U_i \times_S T \to T$ is flat as well.
+Hence it is clear that the sheafification of the displayed
+presheaf is zero on schemes $U$ flat over $S$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-coherent-and-flat-base-change}
+Let $S$ be a scheme. Let $\tau \in \{Zar, \etale\}$.
+Let $\mathcal{G}$ be a sheaf of $\mathcal{O}$-modules on
+$(\Sch/S)_{fppf}$ such that
+\begin{enumerate}
+\item $\mathcal{G}|_{S_\tau}$ is quasi-coherent, and
+\item for every flat, locally finitely presented morphism
+$g : U \to S$ the canonical map
+$g_{\tau, small}^*(\mathcal{G}|_{S_\tau}) \to \mathcal{G}|_{U_\tau}$
+is an isomorphism.
+\end{enumerate}
+Then $H^p(U, \mathcal{G}) = H^p(U, \mathcal{G}|_{U_\tau})$
+for every $U$ flat and locally of finite presentation over $S$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be the pullback of $\mathcal{G}|_{S_\tau}$
+to the big fppf site $(\Sch/S)_{fppf}$. Note that $\mathcal{F}$
+is quasi-coherent. There is a canonical
+comparison map $\varphi : \mathcal{F} \to \mathcal{G}$ which by
+assumptions (1) and (2) induces an isomorphism
+$\mathcal{F}|_{U_\tau} \to \mathcal{G}|_{U_\tau}$
+for all $g : U \to S$ flat and locally of finite presentation.
+Hence in the short exact sequences
+$$
+0 \to \Ker(\varphi) \to \mathcal{F} \to \Im(\varphi) \to 0
+$$
+and
+$$
+0 \to \Im(\varphi) \to \mathcal{G} \to \Coker(\varphi) \to 0
+$$
+the sheaves $\Ker(\varphi)$ and $\Coker(\varphi)$ are
parasitic for the fppf topology. By
Lemma \ref{lemma-cohomology-parasitic}
their cohomology groups over $U$ vanish, so the long exact
cohomology sequences associated to the two short exact sequences
show that $H^p(U, \mathcal{F}) \to H^p(U, \mathcal{G})$
is an isomorphism for $g : U \to S$ flat and locally of finite presentation.
+Since the result holds for $\mathcal{F}$ by
+Proposition \ref{proposition-same-cohomology-quasi-coherent}
+we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Fpqc coverings are universal effective epimorphisms}
+\label{section-fpqc-universal-effective-epimorphisms}
+
+\noindent
+We apply the material above to prove an interesting result, namely
+Lemma \ref{lemma-fpqc-universal-effective-epimorphisms}.
+By Sites, Section \ref{sites-section-representable-sheaves}
+this lemma implies that the representable
+presheaves on any of the sites $(\Sch/S)_\tau$ are sheaves for
+$\tau \in \{Zariski, fppf, \etale, smooth, syntomic\}$. First
+we prove a helper lemma.
+
+\begin{lemma}
+\label{lemma-equiv-fibre-product}
+For a scheme $X$ denote $|X|$ the underlying set.
+Let $f : X \to S$ be a morphism of schemes.
+Then
+$$
+|X \times_S X| \to |X| \times_{|S|} |X|
+$$
+is surjective.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from the description of points on the
+fibre product in Schemes, Lemma \ref{schemes-lemma-points-fibre-product}.
+\end{proof}
+
+
+\begin{lemma}
+\label{lemma-universal-effective-epimorphism-affine}
+Let $\{f_i : X_i \to X\}_{i \in I}$ be a family of morphisms of affine schemes.
+The following are equivalent
+\begin{enumerate}
+\item for any quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ we have
+$$
+\Gamma(X, \mathcal{F}) =
+\text{Equalizer}\left(
+\xymatrix{
+\prod\nolimits_{i \in I} \Gamma(X_i, f_i^*\mathcal{F})
+\ar@<1ex>[r] \ar@<-1ex>[r] &
+\prod\nolimits_{i, j \in I}
+\Gamma(X_i \times_X X_j, (f_i \times f_j)^*\mathcal{F})
+}
+\right)
+$$
+\item $\{f_i : X_i \to X\}_{i \in I}$ is a universal effective epimorphism
+(Sites, Definition \ref{sites-definition-universal-effective-epimorphisms})
+in the category of affine schemes.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2) holds and let $\mathcal{F}$ be a quasi-coherent
+$\mathcal{O}_X$-module. Consider the scheme
+(Constructions, Section \ref{constructions-section-spec})
+$$
+X' = \underline{\Spec}_X(\mathcal{O}_X \oplus \mathcal{F})
+$$
+where $\mathcal{O}_X \oplus \mathcal{F}$ is an
+$\mathcal{O}_X$-algebra with multiplication
+$(f, s)(f', s') = (ff', fs' + f's)$.
+If $s_i \in \Gamma(X_i, f_i^*\mathcal{F})$ is a section,
+then $s_i$ determines a unique element of
+$$
+\Gamma(X' \times_X X_i, \mathcal{O}_{X' \times_X X_i}) =
+\Gamma(X_i, \mathcal{O}_{X_i}) \oplus \Gamma(X_i, f_i^*\mathcal{F})
+$$
+Proof of equality omitted.
+If $(s_i)_{i \in I}$ is in the equalizer of (1), then, using the equality
+$$
+\Mor(T, \mathbf{A}^1_\mathbf{Z}) = \Gamma(T, \mathcal{O}_T)
+$$
+which holds for any scheme $T$, we see that these sections define
+a family of morphisms $h_i : X' \times_X X_i \to \mathbf{A}^1_\mathbf{Z}$ with
+$h_i \circ \text{pr}_1 = h_j \circ \text{pr}_2$ as morphisms
+$(X' \times_X X_i) \times_{X'} (X' \times_X X_j) \to \mathbf{A}^1_\mathbf{Z}$.
Since we have assumed (2) we obtain a morphism
+$h : X' \to \mathbf{A}^1_\mathbf{Z}$ compatible with the morphisms $h_i$
+which in turn determines
+an element $s \in \Gamma(X, \mathcal{F})$.
+We omit the verification that $s$ maps to $s_i$ in
+$\Gamma(X_i, f_i^*\mathcal{F})$.
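\medskip\noindent
The equality of global sections whose proof was omitted above can be
sketched in the affine language as follows: write $X = \Spec(R)$,
$X_i = \Spec(R_i)$, and $\mathcal{F} = \widetilde{M}$ for an $R$-module
$M$. Then $X' = \Spec(R \oplus M)$ with the multiplication above, and
since tensor products commute with direct sums we get
$$
\Gamma(X' \times_X X_i, \mathcal{O}_{X' \times_X X_i})
= (R \oplus M) \otimes_R R_i
= R_i \oplus (M \otimes_R R_i)
$$
which is the displayed equality because
$\Gamma(X_i, f_i^*\mathcal{F}) = M \otimes_R R_i$.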
+
+\medskip\noindent
+Assume (1). Let $T$ be an affine scheme and let $h_i : X_i \to T$
+be a family of morphisms such that
+$h_i \circ \text{pr}_1 = h_j \circ \text{pr}_2$ on
+$X_i \times_X X_j$ for all $i, j \in I$. Then
+$$
+\prod h_i^\sharp :
+\Gamma(T, \mathcal{O}_T)
+\to
+\prod \Gamma(X_i, \mathcal{O}_{X_i})
+$$
+maps into the equalizer and we find that we get a ring map
+$\Gamma(T, \mathcal{O}_T) \to \Gamma(X, \mathcal{O}_X)$
by assumption (1) applied to $\mathcal{F} = \mathcal{O}_X$.
+This ring map corresponds to a morphism $h : X \to T$ such
+that $h_i = h \circ f_i$. Hence our family is an effective
+epimorphism.
+
+\medskip\noindent
+Let $p : Y \to X$ be a morphism of affines. We will show
+the base changes $g_i : Y_i \to Y$ of $f_i$ form an effective epimorphism
+by applying the result of the previous paragraph.
+Namely, if $\mathcal{G}$ is a quasi-coherent $\mathcal{O}_Y$-module, then
+$$
+\Gamma(Y, \mathcal{G}) = \Gamma(X, p_*\mathcal{G}),\quad
+\Gamma(Y_i, g_i^*\mathcal{G}) = \Gamma(X, f_i^*p_*\mathcal{G}),
+$$
+and
+$$
+\Gamma(Y_i \times_Y Y_j, (g_i \times g_j)^*\mathcal{G}) =
+\Gamma(X, (f_i \times f_j)^*p_*\mathcal{G})
+$$
+by the trivial base change formula
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-affine-base-change}).
Thus we see that property (1) of the lemma holds for the family $g_i$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universal-effective-epimorphism-surjective}
+Let $\{f_i : X_i \to X\}_{i \in I}$ be a family of morphisms of schemes.
+\begin{enumerate}
\item If the family is a universal effective
epimorphism in the category of schemes, then $\coprod f_i$ is surjective.
+\item If $X$ and $X_i$ are affine and the family is a universal effective
+epimorphism in the category of affine schemes, then
+$\coprod f_i$ is surjective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: perform base change by $\Spec(\kappa(x)) \to X$
+to see that any $x \in X$ has to be in the image.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-universal-effective-epimorphism-affine}
+Let $\{f_i : X_i \to X\}_{i \in I}$ be a family of morphisms of schemes.
+If for every morphism $Y \to X$ with $Y$ affine the family of base changes
+$g_i : Y_i \to Y$ forms an effective epimorphism, then
+the family of $f_i$ forms a universal effective epimorphism
+in the category of schemes.
+\end{lemma}
+
+\begin{proof}
+Let $Y \to X$ be a morphism of schemes. We have to show that
+the base changes $g_i : Y_i \to Y$ form an effective epimorphism.
+To do this, assume given a scheme $T$ and morphisms $h_i : Y_i \to T$
+with $h_i \circ \text{pr}_1 = h_j \circ \text{pr}_2$ on
+$Y_i \times_Y Y_j$.
+Choose an affine open covering $Y = \bigcup V_\alpha$.
+Set $V_{\alpha, i}$ equal to the inverse image of
+$V_\alpha$ in $Y_i$. Then we see that
+$V_{\alpha, i} \to V_\alpha$ is the base change of
+$f_i$ by $V_\alpha \to X$. Thus by assumption
+the family of restrictions $h_i|_{V_{\alpha, i}}$
comes from a morphism of schemes $h_\alpha : V_\alpha \to T$.
+We leave it to the reader to show that these agree
+on overlaps and define the desired morphism $Y \to T$.
+See discussion in Schemes, Section \ref{schemes-section-glueing-schemes}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universal-effective-epimorphism}
+Let $\{f_i : X_i \to X\}_{i \in I}$ be a family of morphisms of affine
schemes. Assume the equivalent conditions of
Lemma \ref{lemma-universal-effective-epimorphism-affine} hold
+and that moreover for any morphism of affines $Y \to X$ the map
+$$
+\coprod X_i \times_X Y \longrightarrow Y
+$$
+is a submersive map of topological spaces
+(Topology, Definition \ref{topology-definition-submersive}).
+Then our family of morphisms is a universal effective epimorphism
+in the category of schemes.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-check-universal-effective-epimorphism-affine}
+it suffices to base change our family of morphisms
+by $Y \to X$ with $Y$ affine. Set $Y_i = X_i \times_X Y$.
Let $T$ be a scheme and let $h_i : Y_i \to T$ be a family of morphisms
+such that $h_i \circ \text{pr}_1 = h_j \circ \text{pr}_2$
+on $Y_i \times_Y Y_j$. Note that $Y$ as a set is the coequalizer
+of the two maps from $\coprod Y_i \times_Y Y_j$ to $\coprod Y_i$.
Namely, the map $\coprod Y_i \to Y$ is surjective by the affine case of
Lemma \ref{lemma-universal-effective-epimorphism-surjective},
and any two points of $\coprod Y_i$ with the same image in $Y$
are identified by Lemma \ref{lemma-equiv-fibre-product}.
+Hence there is a set map of underlying sets $h : Y \to T$
+compatible with the maps $h_i$. By the second condition of
+the lemma we see that $h$ is continuous!
+Thus if $y \in Y$ and $U \subset T$ is an affine open
+neighbourhood of $h(y)$, then we can find an affine open
+$V \subset Y$ such that $h(V) \subset U$.
+Setting $V_i = Y_i \times_Y V = X_i \times_X V$
+we can use the result proved in
+Lemma \ref{lemma-universal-effective-epimorphism-affine}
+to see that $h|_V : V \to U \subset T$ comes from a unique
+morphism of affine schemes $h_V : V \to U$ agreeing with $h_i|_{V_i}$
+as morphisms of schemes for all $i$. Glueing these $h_V$
+(see Schemes, Section \ref{schemes-section-glueing-schemes})
+gives a morphism $Y \to T$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open-fpqc-covering}
Let $\{f_i : T_i \to T\}_{i \in I}$ be an fpqc covering.
+Suppose that for each $i$ we have an open subset $W_i \subset T_i$
+such that for all $i, j \in I$ we have
+$\text{pr}_0^{-1}(W_i) = \text{pr}_1^{-1}(W_j)$ as open
+subsets of $T_i \times_T T_j$. Then there exists a unique open subset
+$W \subset T$ such that $W_i = f_i^{-1}(W)$ for each $i$.
+\end{lemma}
+
+\begin{proof}
+Apply
+Lemma \ref{lemma-equiv-fibre-product}
+to the map $\coprod_{i \in I} T_i \to T$.
+It implies there exists a subset $W \subset T$ such that
+$W_i = f_i^{-1}(W)$ for each $i$, namely $W = \bigcup f_i(W_i)$.
+To see that $W$ is open we may work Zariski locally on $T$.
+Hence we may assume that $T$ is affine. Using the definition
+of a fpqc covering, this reduces us to the case where
+$\{f_i : T_i \to T\}$ is a standard fpqc covering. In this case we
+may apply
+Morphisms, Lemma \ref{morphisms-lemma-fpqc-quotient-topology}
+to the morphism
+$\coprod T_i \to T$ to conclude that $W$ is open.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fpqc-universal-effective-epimorphisms}
+Let $\{T_i \to T\}$ be an fpqc covering, see
+Topologies, Definition \ref{topologies-definition-fpqc-covering}.
+Then $\{T_i \to T\}$ is a universal effective epimorphism
+in the category of schemes, see
+Sites, Definition \ref{sites-definition-universal-effective-epimorphisms}.
+In other words, every representable functor on the category of schemes
+satisfies the sheaf condition for the fpqc topology, see
+Topologies, Definition \ref{topologies-definition-sheaf-property-fpqc}.
+\end{lemma}
+
+\begin{proof}
+Let $S$ be a scheme. We have to show the following:
+Given morphisms $\varphi_i : T_i \to S$
+such that $\varphi_i|_{T_i \times_T T_j} = \varphi_j|_{T_i \times_T T_j}$
+there exists a unique morphism $T \to S$ which restricts
+to $\varphi_i$ on each $T_i$.
+In other words, we have to show that the functor
+$h_S = \Mor_{\Sch}( - , S)$ satisfies
+the sheaf property for the fpqc topology.
+
+\medskip\noindent
+If $\{T_i \to T\}$ is a Zariski covering, then this follows from
+Schemes, Lemma \ref{schemes-lemma-glue}.
+Thus Topologies, Lemma \ref{topologies-lemma-sheaf-property-fpqc}
+reduces us to the case of a covering $\{X \to Y\}$
+given by a single surjective flat morphism of affines.
+
+\medskip\noindent
+First proof. By Lemma \ref{lemma-sheaf-condition-holds}
+we have the sheaf condition for quasi-coherent modules
+for $\{X \to Y\}$. By Lemma \ref{lemma-open-fpqc-covering}
+the morphism $X \to Y$ is universally submersive.
+Hence we may apply Lemma \ref{lemma-universal-effective-epimorphism}
+to see that $\{X \to Y\}$ is a universal effective epimorphism.
+
+\medskip\noindent
+Second proof. Let $R \to A$ be the faithfully flat ring map
+corresponding to our surjective flat morphism $\pi : X \to Y$.
+Let $f : X \to S$ be a morphism
+such that $f \circ \text{pr}_1 = f \circ \text{pr}_2$
+as morphisms $X \times_Y X = \Spec(A \otimes_R A) \to S$.
+By Lemma \ref{lemma-equiv-fibre-product} we see that
+as a map on the underlying
+sets $f$ is of the form $f = g \circ \pi$ for some
+(set theoretic) map $g : \Spec(R) \to S$.
+By Morphisms, Lemma \ref{morphisms-lemma-fpqc-quotient-topology}
+and the fact that $f$ is continuous we see that $g$
+is continuous.
+
+\medskip\noindent
+Pick $y \in Y = \Spec(R)$.
+Choose $U \subset S$ affine open containing $g(y)$.
+Say $U = \Spec(B)$.
+By the above we may choose an $r \in R$ such that
+$y \in D(r) \subset g^{-1}(U)$.
The restriction of $f$ to $\pi^{-1}(D(r))$ maps into $U$ and hence
corresponds to a ring map $B \to A_r$. The two induced
+ring maps $B \to A_r \otimes_{R_r} A_r = (A \otimes_R A)_r$ are equal
+by assumption on $f$.
+Note that $R_r \to A_r$ is faithfully flat.
+By Lemma \ref{lemma-ff-exact} the equalizer of
+the two arrows $A_r \to A_r \otimes_{R_r} A_r$ is $R_r$.
+We conclude that $B \to A_r$ factors uniquely through a map $B \to R_r$.
+This map in turn gives a morphism of schemes $D(r) \to U \to S$,
+see Schemes, Lemma \ref{schemes-lemma-morphism-into-affine}.
+
+\medskip\noindent
+What have we proved so far? We have shown that for any prime
+$\mathfrak p \subset R$, there exists a standard affine open
+$D(r) \subset \Spec(R)$ such that the morphism
+$f|_{\pi^{-1}(D(r))} : \pi^{-1}(D(r)) \to S$ factors uniquely
+through some morphism of schemes $D(r) \to S$. We omit the
+verification that these morphisms glue to the desired
+morphism $\Spec(R) \to S$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-coequalizer-fpqc-local}
+Consider schemes $X, Y, Z$ and morphisms $a, b : X \to Y$ and
+a morphism $c : Y \to Z$ with $c \circ a = c \circ b$. Set
+$d = c \circ a = c \circ b$. If there exists an
+fpqc covering $\{Z_i \to Z\}$ such that
+\begin{enumerate}
+\item for all $i$ the morphism $Y \times_{c, Z} Z_i \to Z_i$
+is the coequalizer of $(a, 1) : X \times_{d, Z} Z_i \to Y \times_{c, Z} Z_i$
+and $(b, 1) : X \times_{d, Z} Z_i \to Y \times_{c, Z} Z_i$, and
+\item for all $i$ and $i'$ the morphism
+$Y \times_{c, Z} (Z_i \times_Z Z_{i'}) \to (Z_i \times_Z Z_{i'})$
+is the coequalizer of
+$(a, 1) : X \times_{d, Z} (Z_i \times_Z Z_{i'}) \to
+Y \times_{c, Z} (Z_i \times_Z Z_{i'})$ and
+$(b, 1) : X \times_{d, Z} (Z_i \times_Z Z_{i'}) \to
+Y \times_{c, Z} (Z_i \times_Z Z_{i'})$
+\end{enumerate}
+then $c$ is the coequalizer of $a$ and $b$.
+\end{lemma}
+
+\begin{proof}
+Namely, for a scheme $T$ a morphism $Z \to T$ is the same thing as
a collection of morphisms $Z_i \to T$ which agree on overlaps by
+Lemma \ref{lemma-fpqc-universal-effective-epimorphisms}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Descent of finiteness and smoothness properties of morphisms}
+\label{section-descent-finiteness-morphisms}
+
+\noindent
+In this section we show that several properties
+of morphisms (being smooth, locally of finite presentation,
+and so on) descend under faithfully flat morphisms. We start
+with an algebraic version. (The ``Noetherian'' reader should
+consult Lemma \ref{lemma-finite-type-local-source-fppf-algebra}
+instead of the next lemma.)
+
+\begin{lemma}
+\label{lemma-flat-finitely-presented-permanence-algebra}
+Let $R \to A \to B$ be ring maps.
+Assume $R \to B$ is of finite presentation and
+$A \to B$ faithfully flat and of finite presentation.
+Then $R \to A$ is of finite presentation.
+\end{lemma}
+
+\begin{proof}
+Consider the algebra $C = B \otimes_A B$ together with the
+pair of maps $p, q : B \to C$ given by $p(b) = b \otimes 1$
+and $q(b) = 1 \otimes b$. Of course the two compositions
+$A \to B \to C$ are the same. Note that as
+$p : B \to C$ is flat and of finite presentation (base change of
+$A \to B$), the ring map $R \to C$ is of finite presentation
+(as the composite of $R \to B \to C$).
+
+\medskip\noindent
+We are going to use the criterion
+Algebra, Lemma \ref{algebra-lemma-characterize-finite-presentation}
+to show that $R \to A$ is of finite presentation.
+Let $S$ be any $R$-algebra, and suppose that
+$S = \colim_{\lambda \in \Lambda} S_\lambda$ is written
+as a directed colimit of $R$-algebras.
+Let $A \to S$ be an $R$-algebra homomorphism. We have to
+show that $A \to S$ factors through one of the $S_\lambda$.
+Consider the rings $B' = S \otimes_A B$ and
+$C' = S \otimes_A C = B' \otimes_S B'$.
+As $B$ is faithfully flat of finite presentation over $A$, also $B'$
+is faithfully flat of finite presentation over $S$.
+By Algebra, Lemma \ref{algebra-lemma-flat-finite-presentation-limit-flat}
+part (2) applied to the pair $(S \to B', B')$ and the system $(S_\lambda)$
+there exists a $\lambda_0 \in \Lambda$
+and a flat, finitely presented $S_{\lambda_0}$-algebra
+$B_{\lambda_0}$ such that $B' = S \otimes_{S_{\lambda_0}} B_{\lambda_0}$.
+For $\lambda \geq \lambda_0$ set
+$B_\lambda = S_\lambda \otimes_{S_{\lambda_0}} B_{\lambda_0}$ and
+$C_\lambda = B_\lambda \otimes_{S_\lambda} B_\lambda$.
+
+\medskip\noindent
+We interrupt the flow of the argument to show that $S_\lambda \to B_\lambda$
+is faithfully flat for $\lambda$ large enough. (This should really
+be a separate lemma somewhere else, maybe in the chapter on limits.)
+Since $\Spec(B_{\lambda_0}) \to \Spec(S_{\lambda_0})$ is
+flat and of finite presentation it is open (see Morphisms,
+Lemma \ref{morphisms-lemma-fppf-open}).
+Let $I \subset S_{\lambda_0}$ be an ideal such that
+$V(I) \subset \Spec(S_{\lambda_0})$ is the complement
+of the image. Note that formation of the image commutes
+with base change. Hence, since $\Spec(B') \to \Spec(S)$
+is surjective, and $B' = B_{\lambda_0} \otimes_{S_{\lambda_0}} S$
+we see that $IS = S$. Thus for some $\lambda \geq \lambda_0$ we
+have $IS_{\lambda} = S_\lambda$. For this and all greater
+$\lambda$ the morphism
+$\Spec(B_\lambda) \to \Spec(S_\lambda)$ is surjective.
+
+\medskip\noindent
+By analogy with the notation in the first paragraph of the proof denote
+$p_\lambda, q_\lambda : B_\lambda \to C_\lambda$ the two canonical maps.
+Then $B' = \colim_{\lambda \geq \lambda_0} B_\lambda$
+and $C' = \colim_{\lambda \geq \lambda_0} C_\lambda$.
+Since $B$ and $C$ are finitely presented over $R$ there exist
+(by Algebra, Lemma \ref{algebra-lemma-characterize-finite-presentation}
+applied several times)
a $\lambda \geq \lambda_0$ and $R$-algebra maps
+$B \to B_\lambda$, $C \to C_\lambda$ such that
+the diagram
+$$
+\xymatrix{
+C \ar[rr] & &
+C_\lambda \\
+B \ar[rr]
+\ar@<1ex>[u]^-p
+\ar@<-1ex>[u]_-q
+& &
+B_\lambda
+\ar@<1ex>[u]^-{p_\lambda}
+\ar@<-1ex>[u]_-{q_\lambda}
+}
+$$
+is commutative. OK, and this means that $A \to B \to B_\lambda$
+maps into the equalizer of $p_\lambda$ and $q_\lambda$.
+By Lemma \ref{lemma-ff-exact} we
+see that $S_\lambda$ is the equalizer of $p_\lambda$ and $q_\lambda$.
+Thus we get the desired ring map $A \to S_\lambda$ and we win.
+\end{proof}
+
+\noindent
+Here is an easier version of this dealing with the property
+of being of finite type.
+
+\begin{lemma}
+\label{lemma-finite-type-local-source-fppf-algebra}
+Let $R \to A \to B$ be ring maps.
+Assume $R \to B$ is of finite type and
+$A \to B$ faithfully flat and of finite presentation.
+Then $R \to A$ is of finite type.
+\end{lemma}
+
+\begin{proof}
+By
+Algebra, Lemma \ref{algebra-lemma-descend-faithfully-flat-finite-presentation}
+there exists a commutative diagram
+$$
+\xymatrix{
+R \ar[r] \ar@{=}[d] &
+A_0 \ar[d] \ar[r] &
+B_0 \ar[d] \\
+R \ar[r] & A \ar[r] & B
+}
+$$
+with $R \to A_0$ of finite presentation,
+$A_0 \to B_0$ faithfully flat of finite presentation
+and $B = A \otimes_{A_0} B_0$. Since $R \to B$ is of finite
+type by assumption, we may add some elements to $A_0$ and assume
+that the map $B_0 \to B$ is surjective!
+In this case, since $A_0 \to B_0$ is faithfully flat, we see
+that as
+$$
+(A_0 \to A) \otimes_{A_0} B_0 \cong (B_0 \to B)
+$$
+is surjective, also $A_0 \to A$ is surjective. Hence we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-finitely-presented-permanence}
+\begin{reference}
+\cite[IV, 17.7.5 (i) and (ii)]{EGA}.
+\end{reference}
+Let
+$$
+\xymatrix{
+X \ar[rr]_f \ar[rd]_p & &
+Y \ar[dl]^q \\
+& S
+}
+$$
+be a commutative diagram of morphisms of schemes. Assume that $f$ is
+surjective, flat and locally of finite presentation and assume
+that $p$ is locally of finite presentation (resp.\ locally of finite type).
+Then $q$ is locally of finite presentation (resp.\ locally of finite type).
+\end{lemma}
+
+\begin{proof}
+The problem is local on $S$ and $Y$. Hence we may assume that
+$S$ and $Y$ are affine. Since $f$ is flat and locally of finite
+presentation, we see that $f$ is open
+(Morphisms, Lemma \ref{morphisms-lemma-fppf-open}).
+Hence, since $Y$ is quasi-compact, there exist finitely many affine opens
+$X_i \subset X$ such that $Y = \bigcup f(X_i)$.
+Clearly we may replace $X$ by $\coprod X_i$, and hence we
+may assume $X$ is affine as well.
+In this case the lemma is equivalent to
+Lemma \ref{lemma-flat-finitely-presented-permanence-algebra}
+(resp. Lemma \ref{lemma-finite-type-local-source-fppf-algebra})
+above.
+\end{proof}
+
+\noindent
+We use this to improve some of the results on morphisms
+obtained earlier.
+
+\begin{lemma}
+\label{lemma-syntomic-smooth-etale-permanence}
+Let
+$$
+\xymatrix{
+X \ar[rr]_f \ar[rd]_p & &
+Y \ar[dl]^q \\
+& S
+}
+$$
+be a commutative diagram of morphisms of schemes. Assume that
+\begin{enumerate}
+\item $f$ is surjective, and syntomic (resp.\ smooth, resp.\ \'etale),
+\item $p$ is syntomic (resp.\ smooth, resp.\ \'etale).
+\end{enumerate}
+Then $q$ is syntomic (resp.\ smooth, resp.\ \'etale).
+\end{lemma}
+
+\begin{proof}
+Combine Morphisms, Lemmas
+\ref{morphisms-lemma-syntomic-permanence},
+\ref{morphisms-lemma-smooth-permanence}, and
+\ref{morphisms-lemma-etale-permanence-two}
+with Lemma \ref{lemma-flat-finitely-presented-permanence} above.
+\end{proof}
+
+\noindent
+Actually we can strengthen this result as follows.
+
+\begin{lemma}
+\label{lemma-smooth-permanence}
+Let
+$$
+\xymatrix{
+X \ar[rr]_f \ar[rd]_p & &
+Y \ar[dl]^q \\
+& S
+}
+$$
+be a commutative diagram of morphisms of schemes. Assume that
+\begin{enumerate}
+\item $f$ is surjective, flat, and locally of finite presentation,
+\item $p$ is smooth (resp.\ \'etale).
+\end{enumerate}
+Then $q$ is smooth (resp.\ \'etale).
+\end{lemma}
+
+\begin{proof}
+Assume (1) and that $p$ is smooth. By
+Lemma \ref{lemma-flat-finitely-presented-permanence}
+we see that $q$ is locally of finite presentation.
+By
+Morphisms, Lemma \ref{morphisms-lemma-flat-permanence}
+we see that $q$ is flat.
+Hence now it suffices to show that the fibres of $q$ are smooth, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-flat-smooth-fibres}.
+Apply
+Varieties, Lemma \ref{varieties-lemma-flat-under-smooth}
+to the flat surjective morphisms $X_s \to Y_s$ for $s \in S$ to
+conclude. We omit the proof of the \'etale case.
+\end{proof}
+
+\begin{remark}
+\label{remark-smooth-permanence}
+With the assumptions (1) and $p$ smooth in
+Lemma \ref{lemma-smooth-permanence}
+it is not automatically the case that $X \to Y$ is smooth.
+A counter example is $S = \Spec(k)$, $X = \Spec(k[s])$,
+$Y = \Spec(k[t])$ and $f$ given by $t \mapsto s^2$.
+But see also Lemma \ref{lemma-syntomic-permanence}
+for some information on the structure of $f$.
+\end{remark}
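\noindent
In the counter example of Remark \ref{remark-smooth-permanence} one can
see directly that $f$ is not smooth: the module of relative
differentials is
$$
\Omega_{k[s]/k[t]}
= k[s]\,\mathrm{d}s\big/\left(\mathrm{d}(s^2)\right)
= \left(k[s]/(2s)\right)\mathrm{d}s
$$
which is nonzero at $s = 0$ (and everywhere if $k$ has characteristic
$2$). Since $f$ is finite and flat, smoothness would force $f$ to be
\'etale, hence would force $\Omega_{k[s]/k[t]} = 0$.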
+
+\begin{lemma}
+\label{lemma-syntomic-permanence}
+Let
+$$
+\xymatrix{
+X \ar[rr]_f \ar[rd]_p & &
+Y \ar[dl]^q \\
+& S
+}
+$$
+be a commutative diagram of morphisms of schemes. Assume that
+\begin{enumerate}
+\item $f$ is surjective, flat, and locally of finite presentation,
+\item $p$ is syntomic.
+\end{enumerate}
+Then both $q$ and $f$ are syntomic.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-flat-finitely-presented-permanence} we see that $q$
+is of finite presentation. By
+Morphisms, Lemma \ref{morphisms-lemma-flat-permanence}
+we see that $q$ is flat.
+By Morphisms, Lemma \ref{morphisms-lemma-syntomic-locally-standard-syntomic}
+it now suffices to show that the local rings of the fibres of
+$Y \to S$ and the fibres of $X \to Y$ are local complete intersection
+rings. To do this we may take the fibre of $X \to Y \to S$ at
+a point $s \in S$, i.e., we may assume $S$ is the spectrum of a
+field. Pick a point $x \in X$ with image $y \in Y$ and
+consider the ring map
+$$
+\mathcal{O}_{Y, y} \longrightarrow \mathcal{O}_{X, x}
+$$
+This is a flat local homomorphism of local Noetherian rings.
+The local ring $\mathcal{O}_{X, x}$ is a complete intersection.
Thus we may use Avramov's result, see
+Divided Power Algebra, Lemma \ref{dpa-lemma-avramov},
+to conclude that both $\mathcal{O}_{Y, y}$ and
+$\mathcal{O}_{X, x}/\mathfrak m_y\mathcal{O}_{X, x}$ are
+complete intersection rings.
+\end{proof}
+
+\noindent
+The following type of lemma is occasionally useful.
+
+\begin{lemma}
+\label{lemma-curiosity}
Let $X \to Y \to Z$ be morphisms of schemes.
+Let $P$ be one of the following properties of morphisms of schemes:
+flat, locally finite type, locally finite presentation.
+Assume that $X \to Z$ has $P$ and that $\{X \to Y\}$
+can be refined by an fppf covering of $Y$. Then $Y \to Z$ is $P$.
+\end{lemma}
+
+\begin{proof}
+Let $\Spec(C) \subset Z$ be an affine open and let
+$\Spec(B) \subset Y$ be an affine open which maps into
+$\Spec(C)$. The assumption on $X \to Y$ implies we can
+find a standard affine fppf covering $\{\Spec(B_j) \to \Spec(B)\}$
+and lifts $x_j : \Spec(B_j) \to X$. Since $\Spec(B_j)$
+is quasi-compact we can find finitely many affine opens
+$\Spec(A_i) \subset X$ lying over $\Spec(B)$
+such that the image of each $x_j$
+is contained in the union $\bigcup \Spec(A_i)$. Hence after
replacing each $\Spec(B_j)$ by the members of a standard affine Zariski covering
+of itself we may assume we have a
+standard affine fppf covering $\{\Spec(B_i) \to \Spec(B)\}$
+such that each $\Spec(B_i) \to Y$ factors through an affine
+open $\Spec(A_i) \subset X$ lying over $\Spec(B)$.
+In other words, we have ring maps $C \to B \to A_i \to B_i$ for each $i$.
+Note that we can also consider
+$$
+C \to B \to A = \prod A_i \to B' = \prod B_i
+$$
+and that the ring map $B \to \prod B_i$ is faithfully flat and
+of finite presentation.
+
+\medskip\noindent
+The case $P = flat$. In this case we know that $C \to A$ is flat
+and we have to prove that $C \to B$ is flat. Suppose that
+$N \to N' \to N''$ is an exact sequence of $C$-modules. We want to
+show that $N \otimes_C B \to N' \otimes_C B \to N'' \otimes_C B$
+is exact. Let $H$ be its cohomology and let $H'$ be the cohomology
+of $N \otimes_C B' \to N' \otimes_C B' \to N'' \otimes_C B'$. As
+$B \to B'$ is flat we know that $H' = H \otimes_B B'$. On the other hand
+$N \otimes_C A \to N' \otimes_C A \to N'' \otimes_C A$
+is exact hence has zero cohomology. Hence the map
+$H \to H'$ is zero (as it factors through the zero module).
+Thus $H' = 0$. As $B \to B'$ is faithfully flat we conclude that
+$H = 0$ as desired.
+
+\medskip\noindent
+The case $P = locally\ finite\ type$.
+In this case we know that $C \to A$ is of finite type and
+we have to prove that $C \to B$ is of finite type.
+Because $B \to B'$ is of finite presentation (hence of finite type)
+we see that $A \to B'$ is of finite type, see
+Algebra, Lemma \ref{algebra-lemma-compose-finite-type}.
+Therefore $C \to B'$ is of finite type and we conclude by
+Lemma \ref{lemma-finite-type-local-source-fppf-algebra}.
+
+\medskip\noindent
+The case $P = locally\ finite\ presentation$.
+In this case we know that $C \to A$ is of finite presentation and
+we have to prove that $C \to B$ is of finite presentation.
+Because $B \to B'$ is of finite presentation and $B \to A$
+of finite type we see that $A \to B'$ is of finite presentation, see
+Algebra, Lemma \ref{algebra-lemma-compose-finite-type}.
+Therefore $C \to B'$ is of finite presentation and we conclude by
+Lemma \ref{lemma-flat-finitely-presented-permanence-algebra}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Local properties of schemes}
+\label{section-descending-properties}
+
+\noindent
It often happens that one can prove the members of a covering of a scheme
+have a certain property. In many cases this implies the scheme has the
+property too. For example, if $S$ is a scheme, and $f : S' \to S$
+is a surjective flat morphism such that $S'$ is a reduced scheme, then $S$ is
+reduced. You can prove this by looking at local rings and using
+Algebra, Lemma \ref{algebra-lemma-descent-reduced}.
+We say that the property of being reduced
+{\it descends through flat surjective morphisms}.
+Some results of this type are collected in
+Algebra, Section \ref{algebra-section-descending-properties} and
+for schemes in Section \ref{section-variants}.
+Some analogous results on descending properties
+of morphisms are in Section \ref{section-descent-finiteness-morphisms}.
+
+\medskip\noindent
+On the other hand, there are examples of surjective flat morphisms
+$f : S' \to S$ with $S$ reduced and $S'$ not, for example the morphism
+$\Spec(k[x]/(x^2)) \to \Spec(k)$. Hence the property of
+being reduced does not {\it ascend along flat morphisms}. Having infinite
+residue fields is a property which does ascend along flat morphisms (but
+does not descend along surjective flat morphisms of course). Some results
+of this type are collected in
+Algebra, Section \ref{algebra-section-ascending-properties}.
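
\medskip\noindent
To spell out the example above: the ring map $k \to k[x]/(x^2)$ is
faithfully flat because $k[x]/(x^2)$ is free of rank $2$ as a
$k$-module, so $\Spec(k[x]/(x^2)) \to \Spec(k)$ is indeed surjective
and flat. On the other hand
$$
x \not = 0
\quad\text{and}\quad
x^2 = 0
\quad\text{in } k[x]/(x^2)
$$
so that the source is nonreduced although the target is reduced.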
+
+\medskip\noindent
+Finally, we say that a property is {\it local for the flat topology}
+if it ascends along flat morphisms and descends along flat surjective
+morphisms. A somewhat silly example is the property of having residue
+fields of a given characteristic. To be more precise, and to tie this in
+with the various topologies on schemes, we make the following
+formal definition.
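
\medskip\noindent
To see why this example works, note that if $f : S' \to S$ is any
morphism and $s' \in S'$ maps to $s \in S$, then there is an induced
extension of residue fields
$$
\kappa(s) \subset \kappa(s')
$$
and field extensions preserve the characteristic. Hence the property
``all residue fields have characteristic $p$'' ascends along arbitrary
morphisms, and it descends along surjective ones because every point
of $S$ is the image of a point of $S'$.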
+
+\begin{definition}
+\label{definition-property-local}
+Let $\mathcal{P}$ be a property of schemes. Let
+$\tau \in \{fpqc, \linebreak[0] fppf, \linebreak[0] syntomic, \linebreak[0]
+smooth, \linebreak[0] \etale, \linebreak[0] Zariski\}$.
+We say $\mathcal{P}$ is {\it local in the $\tau$-topology} if for any
+$\tau$-covering $\{S_i \to S\}_{i \in I}$ (see
+Topologies, Section \ref{topologies-section-procedure})
+we have
+$$
+S \text{ has }\mathcal{P}
+\Leftrightarrow
+\text{each }S_i \text{ has }\mathcal{P}.
+$$
+\end{definition}
+
+\noindent
+To be sure, since isomorphisms are always coverings
+we see (or require) that property $\mathcal{P}$ holds for $S$
+if and only if it holds for any scheme $S'$ isomorphic to $S$.
+In fact, if $\tau = fpqc, \linebreak[0] fppf, \linebreak[0] syntomic,
+\linebreak[0] smooth, \linebreak[0] \etale$, or $Zariski$, then
+if $S$ has $\mathcal{P}$ and $S' \to S$ is
+flat, flat and locally of finite presentation, syntomic, smooth, \'etale, or
+an open immersion, then $S'$ has $\mathcal{P}$. This is true because
+we can always extend $\{S' \to S\}$ to a $\tau$-covering.
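
\medskip\noindent
For example, one way to extend $\{S' \to S\}$ to a $\tau$-covering is
to add the identity: the family
$$
\{S' \to S,\ \text{id} : S \to S\}
$$
is a $\tau$-covering, as the identity is flat, locally of finite
presentation, syntomic, smooth, \'etale, and an open immersion, and
the family is jointly surjective. Since $S$ has $\mathcal{P}$ and
$\mathcal{P}$ is local in the $\tau$-topology, each member of the
covering, in particular $S'$, has $\mathcal{P}$.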
+
+\medskip\noindent
+We have the following implications:
+$\mathcal{P}$ is local in the fpqc topology
+$\Rightarrow$
+$\mathcal{P}$ is local in the fppf topology
+$\Rightarrow$
+$\mathcal{P}$ is local in the syntomic topology
+$\Rightarrow$
+$\mathcal{P}$ is local in the smooth topology
+$\Rightarrow$
+$\mathcal{P}$ is local in the \'etale topology
+$\Rightarrow$
+$\mathcal{P}$ is local in the Zariski topology.
+This follows from
+Topologies, Lemmas
+\ref{topologies-lemma-zariski-etale},
+\ref{topologies-lemma-zariski-etale-smooth},
+\ref{topologies-lemma-zariski-etale-smooth-syntomic},
+\ref{topologies-lemma-zariski-etale-smooth-syntomic-fppf}, and
+\ref{topologies-lemma-zariski-etale-smooth-syntomic-fppf-fpqc}.
+
+\begin{lemma}
+\label{lemma-descending-properties}
+Let $\mathcal{P}$ be a property of schemes.
+Let $\tau \in \{fpqc, \linebreak[0] fppf, \linebreak[0]
+\etale, \linebreak[0] smooth, \linebreak[0] syntomic\}$.
+Assume that
+\begin{enumerate}
+\item the property is local in the Zariski topology,
+\item for any morphism of affine schemes $S' \to S$
+which is flat, flat of finite presentation,
+\'etale, smooth or syntomic depending on whether $\tau$ is
+fpqc, fppf, \'etale, smooth, or syntomic,
+property $\mathcal{P}$ holds for $S'$ if property $\mathcal{P}$
+holds for $S$, and
+\item for any surjective morphism of affine schemes $S' \to S$
+which is flat, flat of finite presentation,
+\'etale, smooth or syntomic depending on whether $\tau$ is
+fpqc, fppf, \'etale, smooth, or syntomic,
+property $\mathcal{P}$ holds for $S$ if property $\mathcal{P}$
+holds for $S'$.
+\end{enumerate}
Then $\mathcal{P}$ is local in the $\tau$-topology.
+\end{lemma}
+
+\begin{proof}
+This follows almost immediately from the definition of
+a $\tau$-covering, see
Topologies, Definitions
\ref{topologies-definition-fpqc-covering},
\ref{topologies-definition-fppf-covering},
\ref{topologies-definition-etale-covering},
\ref{topologies-definition-smooth-covering}, or
\ref{topologies-definition-syntomic-covering},
and Topologies, Lemmas
\ref{topologies-lemma-fpqc-affine},
\ref{topologies-lemma-fppf-affine},
\ref{topologies-lemma-etale-affine},
\ref{topologies-lemma-smooth-affine}, or
\ref{topologies-lemma-syntomic-affine}.
+Details omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-descending-properties-standard}
+In Lemma \ref{lemma-descending-properties} above if
+$\tau = smooth$ then in condition (3) we may assume that
+the morphism is a (surjective) standard smooth morphism.
+Similarly, when $\tau = syntomic$ or $\tau = \etale$.
+\end{remark}
+
+
+
+
+
+\section{Properties of schemes local in the fppf topology}
+\label{section-descending-properties-fppf}
+
+\noindent
+In this section we find some properties of schemes which are local on the base
+in the fppf topology.
+
+\begin{lemma}
+\label{lemma-Noetherian-local-fppf}
+The property $\mathcal{P}(S) =$``$S$ is locally Noetherian'' is local
+in the fppf topology.
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-descending-properties}.
+First we note that ``being locally Noetherian'' is local
+in the Zariski topology. This is clear from the definition,
+see Properties, Definition \ref{properties-definition-noetherian}.
+Next, we show that if $S' \to S$ is a flat, finitely presented
+morphism of affines and $S$ is locally Noetherian, then $S'$ is
+locally Noetherian. This is
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}.
+Finally, we have to show that if $S' \to S$ is a surjective
+flat, finitely presented morphism of affines and $S'$ is
+locally Noetherian, then $S$ is locally Noetherian. This follows from
+Algebra, Lemma \ref{algebra-lemma-descent-Noetherian}.
+Thus (1), (2) and (3) of Lemma \ref{lemma-descending-properties} hold
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Jacobson-local-fppf}
+The property $\mathcal{P}(S) =$``$S$ is Jacobson'' is local
+in the fppf topology.
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-descending-properties}.
+First we note that ``being Jacobson'' is local
+in the Zariski topology. This is
+Properties, Lemma \ref{properties-lemma-locally-jacobson}.
+Next, we show that if $S' \to S$ is a flat, finitely presented
+morphism of affines and $S$ is Jacobson, then $S'$ is
+Jacobson. This is
+Morphisms, Lemma \ref{morphisms-lemma-Jacobson-universally-Jacobson}.
+Finally, we have to show that if $f : S' \to S$ is a surjective
+flat, finitely presented morphism of affines and $S'$ is
+Jacobson, then $S$ is Jacobson. Say $S = \Spec(A)$ and
+$S' = \Spec(B)$ and $S' \to S$ given by $A \to B$.
+Then $A \to B$ is finitely presented and faithfully flat.
+Moreover, the ring $B$ is Jacobson, see
+Properties, Lemma \ref{properties-lemma-locally-jacobson}.
+
+\medskip\noindent
+By Algebra, Lemma \ref{algebra-lemma-fppf-fpqf} there exists a diagram
+$$
+\xymatrix{
+B \ar[rr] & & B' \\
+& A \ar[ru] \ar[lu] &
+}
+$$
+with $A \to B'$ finitely presented, faithfully flat and quasi-finite.
In particular, $B \to B'$ is of finite type, and we see from
+Algebra, Proposition \ref{algebra-proposition-Jacobson-permanence}
+that $B'$ is Jacobson. Hence we may assume that $A \to B$ is quasi-finite
+as well as faithfully flat and of finite presentation.
+
+\medskip\noindent
+Assume $A$ is not Jacobson to get a contradiction.
+According to Algebra, Lemma \ref{algebra-lemma-characterize-jacobson}
+there exists a nonmaximal prime $\mathfrak p \subset A$ and
+an element $f \in A$, $f \not \in \mathfrak p$ such that
+$V(\mathfrak p) \cap D(f) = \{\mathfrak p\}$.
+
+\medskip\noindent
This leads to a contradiction as follows. First choose a maximal
ideal $\mathfrak m$ of $A$ containing $\mathfrak p$.
+Pick a prime $\mathfrak m' \subset B$ lying over $\mathfrak m$
+(exists because $A \to B$ is faithfully flat, see
+Algebra, Lemma \ref{algebra-lemma-ff-rings}).
As $A \to B$ is flat, by going down (see
Algebra, Lemma \ref{algebra-lemma-flat-going-down})
+we can find a prime $\mathfrak q \subset \mathfrak m'$ lying over
+$\mathfrak p$. In particular we see that $\mathfrak q$ is not
+maximal. Hence according to
+Algebra, Lemma \ref{algebra-lemma-characterize-jacobson} again
+the set $V(\mathfrak q) \cap D(f)$ is infinite
+(here we finally use that $B$ is Jacobson).
+All points of $V(\mathfrak q) \cap D(f)$ map to
+$V(\mathfrak p) \cap D(f) = \{\mathfrak p\}$. Hence the
+fibre over $\mathfrak p$ is infinite. This contradicts the
+fact that $A \to B$ is quasi-finite (see
+Algebra, Lemma \ref{algebra-lemma-quasi-finite}
+or more explicitly
+Morphisms, Lemma \ref{morphisms-lemma-quasi-finite}).
+Thus the lemma is proved.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-finite-nr-irred-local-fppf}
+The property $\mathcal{P}(S) =$``every quasi-compact open of $S$
+has a finite number of irreducible components'' is local
+in the fppf topology.
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-descending-properties}. First we note that
+$\mathcal{P}$ is local in the Zariski topology.
+Next, we show that if $T \to S$ is a flat, finitely presented
+morphism of affines and $S$ has a finite number of irreducible
+components, then so does $T$. Namely, since $T \to S$ is flat,
+the generic points of $T$ map to the generic points of $S$, see
+Morphisms, Lemma \ref{morphisms-lemma-generalizations-lift-flat}.
+Hence it suffices to show that for $s \in S$ the fibre $T_s$
+has a finite number of generic points. Note that $T_s$ is an
+affine scheme of finite type over $\kappa(s)$, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-finite-type}.
+Hence $T_s$ is Noetherian and has a finite number of irreducible components
+(Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian} and
+Properties, Lemma \ref{properties-lemma-Noetherian-irreducible-components}).
+Finally, we have to show that if $T \to S$ is a surjective flat,
+finitely presented morphism of affines and $T$ has a finite number of
+irreducible components, then so does $S$. This follows from Topology, Lemma
+\ref{topology-lemma-surjective-continuous-irreducible-components}.
+Thus (1), (2) and (3) of Lemma \ref{lemma-descending-properties} hold
+and we win.
+\end{proof}
+
+
+
+
+\section{Properties of schemes local in the syntomic topology}
+\label{section-descending-properties-syntomic}
+
+\noindent
+In this section we find some properties of schemes which are local on the base
+in the syntomic topology.
+
+\begin{lemma}
+\label{lemma-Sk-local-syntomic}
+The property $\mathcal{P}(S) =$``$S$ is locally Noetherian and $(S_k)$''
+is local in the syntomic topology.
+\end{lemma}
+
+\begin{proof}
+We will check (1), (2) and (3) of Lemma \ref{lemma-descending-properties}.
+As a syntomic morphism is flat of finite presentation
+(Morphisms, Lemmas \ref{morphisms-lemma-syntomic-flat}
+and \ref{morphisms-lemma-syntomic-locally-finite-presentation})
+we have already checked this for ``being locally Noetherian'' in the proof
+of Lemma \ref{lemma-Noetherian-local-fppf}.
+We will use this without further mention in the proof.
+First we note that $\mathcal{P}$ is local in the Zariski topology.
+This is clear from the definition,
+see Cohomology of Schemes, Definition \ref{coherent-definition-depth}.
+Next, we show that if $S' \to S$ is a syntomic morphism of affines
+and $S$ has $\mathcal{P}$, then $S'$ has $\mathcal{P}$. This
+is Algebra, Lemma \ref{algebra-lemma-Sk-goes-up}
+(use
+Morphisms, Lemma \ref{morphisms-lemma-syntomic-characterize}
+and
+Algebra, Definition \ref{algebra-definition-lci} and
+Lemma \ref{algebra-lemma-lci-CM}).
+Finally, we show that if $S' \to S$ is a surjective
+syntomic morphism of affines and $S'$ has $\mathcal{P}$,
+then $S$ has $\mathcal{P}$. This is
+Algebra, Lemma \ref{algebra-lemma-descent-Sk}.
+Thus (1), (2) and (3) of Lemma \ref{lemma-descending-properties} hold
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-local-syntomic}
+The property $\mathcal{P}(S) =$``$S$ is Cohen-Macaulay''
+is local in the syntomic topology.
+\end{lemma}
+
+\begin{proof}
+This is clear from Lemma \ref{lemma-Sk-local-syntomic}
+above since a scheme is Cohen-Macaulay if and only if
+it is locally Noetherian and $(S_k)$ for all $k \geq 0$, see
+Properties, Lemma \ref{properties-lemma-scheme-CM-iff-all-Sk}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Properties of schemes local in the smooth topology}
+\label{section-descending-properties-smooth}
+
+\noindent
+In this section we find some properties of schemes which are local on the base
+in the smooth topology.
+
+\begin{lemma}
+\label{lemma-reduced-local-smooth}
+The property $\mathcal{P}(S) =$``$S$ is reduced'' is local in the smooth
+topology.
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-descending-properties}.
+First we note that ``being reduced'' is local
+in the Zariski topology. This is clear from the definition,
+see Schemes, Definition \ref{schemes-definition-reduced}.
+Next, we show that if $S' \to S$ is a smooth morphism of affines
+and $S$ is reduced, then $S'$ is reduced. This is
+Algebra, Lemma \ref{algebra-lemma-reduced-goes-up}.
+Finally, we show that if $S' \to S$ is a surjective
+smooth morphism of affines
+and $S'$ is reduced, then $S$ is reduced. This is
+Algebra, Lemma \ref{algebra-lemma-descent-reduced}.
+Thus (1), (2) and (3) of Lemma \ref{lemma-descending-properties} hold
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-normal-local-smooth}
+\begin{slogan}
+Normality is local in the smooth topology.
+\end{slogan}
+The property $\mathcal{P}(S) =$``$S$ is normal'' is local in the smooth
+topology.
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-descending-properties}.
+First we show ``being normal'' is local
+in the Zariski topology. This is clear from the definition,
+see Properties, Definition \ref{properties-definition-normal}.
+Next, we show that if $S' \to S$ is a smooth morphism of affines
+and $S$ is normal, then $S'$ is normal. This is
+Algebra, Lemma \ref{algebra-lemma-normal-goes-up}.
+Finally, we show that if $S' \to S$ is a surjective
+smooth morphism of affines
+and $S'$ is normal, then $S$ is normal. This is
+Algebra, Lemma \ref{algebra-lemma-descent-normal}.
+Thus (1), (2) and (3) of Lemma \ref{lemma-descending-properties} hold
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Rk-local-smooth}
+The property $\mathcal{P}(S) =$``$S$ is locally Noetherian and $(R_k)$''
+is local in the smooth topology.
+\end{lemma}
+
+\begin{proof}
+We will check (1), (2) and (3) of Lemma \ref{lemma-descending-properties}.
+As a smooth morphism is flat of finite presentation
+(Morphisms, Lemmas \ref{morphisms-lemma-smooth-flat}
+and \ref{morphisms-lemma-smooth-locally-finite-presentation})
+we have already checked this for ``being locally Noetherian'' in the proof
+of Lemma \ref{lemma-Noetherian-local-fppf}.
+We will use this without further mention in the proof.
+First we note that $\mathcal{P}$ is local in the Zariski topology.
+This is clear from the definition,
+see Properties, Definition \ref{properties-definition-Rk}.
+Next, we show that if $S' \to S$ is a smooth morphism of affines
+and $S$ has $\mathcal{P}$, then $S'$ has $\mathcal{P}$. This
is Algebra, Lemma \ref{algebra-lemma-Rk-goes-up}
+(use Morphisms, Lemma \ref{morphisms-lemma-smooth-characterize},
+Algebra, Lemmas \ref{algebra-lemma-base-change-smooth}
+and \ref{algebra-lemma-characterize-smooth-over-field}).
+Finally, we show that if $S' \to S$ is a surjective
+smooth morphism of affines and $S'$ has $\mathcal{P}$,
+then $S$ has $\mathcal{P}$. This is
+Algebra, Lemma \ref{algebra-lemma-descent-Rk}.
+Thus (1), (2) and (3) of Lemma \ref{lemma-descending-properties} hold
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-local-smooth}
+The property $\mathcal{P}(S) =$``$S$ is regular''
+is local in the smooth topology.
+\end{lemma}
+
+\begin{proof}
+This is clear from Lemma \ref{lemma-Rk-local-smooth}
+above since a locally Noetherian scheme is regular if and only if
+it is locally Noetherian and $(R_k)$ for all $k \geq 0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Nagata-local-smooth}
+The property $\mathcal{P}(S) =$``$S$ is Nagata''
+is local in the smooth topology.
+\end{lemma}
+
+\begin{proof}
+We will check (1), (2) and (3) of Lemma \ref{lemma-descending-properties}.
+First we note that being Nagata is local in the Zariski topology.
+This is Properties, Lemma \ref{properties-lemma-locally-nagata}.
+Next, we show that if $S' \to S$ is a smooth morphism of affines
+and $S$ is Nagata, then $S'$ is Nagata. This
+is Morphisms, Lemma \ref{morphisms-lemma-finite-type-nagata}.
+Finally, we show that if $S' \to S$ is a surjective
+smooth morphism of affines and $S'$ is Nagata,
+then $S$ is Nagata. This is
+Algebra, Lemma \ref{algebra-lemma-descent-nagata}.
+Thus (1), (2) and (3) of Lemma \ref{lemma-descending-properties} hold
+and we win.
+\end{proof}
+
+
+
+
+
+\section{Variants on descending properties}
+\label{section-variants}
+
+\noindent
Sometimes one can descend properties which are not local.
+We put results of this kind in this section. See also
+Section \ref{section-descent-finiteness-morphisms}
+on descending properties of morphisms,
+such as smoothness.
+
+\begin{lemma}
+\label{lemma-descend-reduced}
+If $f : X \to Y$ is a flat and surjective morphism of schemes
+and $X$ is reduced, then $Y$ is reduced.
+\end{lemma}
+
+\begin{proof}
+The result follows by looking at local rings
+(Schemes, Definition \ref{schemes-definition-reduced})
+and
+Algebra, Lemma \ref{algebra-lemma-descent-reduced}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descend-regular}
+Let $f : X \to Y$ be a morphism of algebraic spaces.
+If $f$ is locally of finite presentation, flat, and surjective and
+$X$ is regular, then $Y$ is regular.
+\end{lemma}
+
+\begin{proof}
+This lemma reduces to the following algebra statement: If $A \to B$ is
+a faithfully flat, finitely presented ring homomorphism with $B$ Noetherian
+and regular, then $A$ is Noetherian and regular. We see that
+$A$ is Noetherian by
+Algebra, Lemma \ref{algebra-lemma-descent-Noetherian}
+and regular by
+Algebra, Lemma \ref{algebra-lemma-flat-under-regular}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Germs of schemes}
+\label{section-germs}
+
+\begin{definition}
+\label{definition-germs}
+Germs of schemes.
+\begin{enumerate}
+\item A pair $(X, x)$ consisting of a scheme $X$ and a point $x \in X$ is
+called the {\it germ of $X$ at $x$}.
+\item A {\it morphism of germs} $f : (X, x) \to (S, s)$
+is an equivalence class of morphisms of schemes $f : U \to S$ with $f(x) = s$
+where $U \subset X$ is an open neighbourhood of $x$. Two such
+$f$, $f'$ are said to be equivalent if and only if $f$ and $f'$
+agree in some open neighbourhood of $x$.
+\item We define the {\it composition of morphisms of germs}
+by composing representatives (this is well defined).
+\end{enumerate}
+\end{definition}
+
+\noindent
+Before we continue we need one more definition.
+
+\begin{definition}
+\label{definition-etale-morphism-germs}
+Let $f : (X, x) \to (S, s)$ be a morphism of germs.
+We say $f$ is {\it \'etale} (resp.\ {\it smooth}) if there exists a
+representative $f : U \to S$ of $f$ which is an \'etale morphism
+(resp.\ a smooth morphism) of schemes.
+\end{definition}
+
+
+
+
+
+
+
+
+\section{Local properties of germs}
+\label{section-properties-germs-local}
+
+\begin{definition}
+\label{definition-local-at-point}
+Let $\mathcal{P}$ be a property of germs of schemes.
+We say that $\mathcal{P}$ is {\it \'etale local}
+(resp.\ {\it smooth local}) if for any
+\'etale (resp.\ smooth) morphism of germs $(U', u') \to (U, u)$
+we have $\mathcal{P}(U, u) \Leftrightarrow \mathcal{P}(U', u')$.
+\end{definition}
+
+\noindent
+Let $(X, x)$ be a germ of a scheme.
+The dimension of $X$ at $x$ is the minimum of the dimensions of
+open neighbourhoods of $x$ in $X$, and any small enough open neighbourhood
+has this dimension. Hence this is an invariant of the isomorphism class
of the germ. We denote it simply by $\dim_x(X)$.
+The following lemma tells us that the assertion
+$\dim_x(X) = d$ is an \'etale local property of germs.
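
\medskip\noindent
In a formula:
$$
\dim_x(X) = \min\nolimits_{x \in U \subset X\text{ open}} \dim(U)
$$
and the minimum is attained on every sufficiently small open
neighbourhood $U$ of $x$.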
+
+\begin{lemma}
+\label{lemma-dimension-at-point-local}
+Let $f : U \to V$ be an \'etale morphism of schemes.
+Let $u \in U$ and $v = f(u)$. Then $\dim_u(U) = \dim_v(V)$.
+\end{lemma}
+
+\begin{proof}
+In the statement $\dim_u(U)$ is the dimension of $U$ at $u$ as defined in
+Topology, Definition \ref{topology-definition-Krull}
+as the minimum of the Krull dimensions of open neighbourhoods of $u$ in $U$.
+Similarly for $\dim_v(V)$.
+
+\medskip\noindent
+Let us show that $\dim_v(V) \geq \dim_u(U)$.
+Let $V'$ be an open neighbourhood of $v$ in $V$.
+Then there exists an open neighbourhood $U'$ of $u$ in $U$
+contained in $f^{-1}(V')$ such that $\dim_u(U) = \dim(U')$. Suppose that
+$Z_0 \subset Z_1 \subset \ldots \subset Z_n$ is a chain of irreducible
+closed subschemes of $U'$. If $\xi_i \in Z_i$ is the generic point
+then we have specializations
+$\xi_n \leadsto \xi_{n - 1} \leadsto \ldots \leadsto \xi_0$.
+This gives specializations
+$f(\xi_n) \leadsto f(\xi_{n - 1}) \leadsto \ldots \leadsto f(\xi_0)$
+in $V'$. Note that $f(\xi_j) \not = f(\xi_i)$ if $i \not = j$ as
+the fibres of $f$ are discrete (see
+Morphisms, Lemma \ref{morphisms-lemma-etale-over-field}).
+Hence we see that $\dim(V') \geq n$. The inequality
+$\dim_v(V) \geq \dim_u(U)$ follows formally.
+
+\medskip\noindent
+Let us show that $\dim_u(U) \geq \dim_v(V)$.
+Let $U'$ be an open neighbourhood of $u$ in $U$.
+Note that $V' = f(U')$ is an open neighbourhood of $v$ by
+Morphisms, Lemma \ref{morphisms-lemma-fppf-open}.
+Hence $\dim(V') \geq \dim_v(V)$. Pick a chain
+$Z_0 \subset Z_1 \subset \ldots \subset Z_n$ of irreducible
+closed subschemes of $V'$. Let $\xi_i \in Z_i$ be the generic point,
+so we have specializations
+$\xi_n \leadsto \xi_{n - 1} \leadsto \ldots \leadsto \xi_0$.
+Since $\xi_0 \in f(U')$ we can find a point $\eta_0 \in U'$
+with $f(\eta_0) = \xi_0$. Consider the map of local rings
+$$
+\mathcal{O}_{V', \xi_0} \longrightarrow \mathcal{O}_{U', \eta_0}
+$$
+which is a flat local ring map by
+Morphisms, Lemma \ref{morphisms-lemma-etale-flat}.
+Note that the points $\xi_i$ correspond to primes of the ring on the left by
+Schemes, Lemma \ref{schemes-lemma-specialize-points}.
+Hence by going down (see
+Algebra, Section \ref{algebra-section-going-up})
+for the displayed ring map we can find a sequence of specializations
+$\eta_n \leadsto \eta_{n - 1} \leadsto \ldots \leadsto \eta_0$
+in $U'$ mapping to the sequence
+$\xi_n \leadsto \xi_{n - 1} \leadsto \ldots \leadsto \xi_0$
+under $f$. This implies that $\dim_u(U) \geq \dim_v(V)$.
+\end{proof}
+
+\noindent
+Let $(X, x)$ be a germ of a scheme.
+The isomorphism class of the local ring $\mathcal{O}_{X, x}$
+is an invariant of the germ. The following lemma says that the
+property $\dim(\mathcal{O}_{X, x}) = d$ is an \'etale local property
+of germs.
+
+\begin{lemma}
+\label{lemma-dimension-local-ring-local}
+Let $f : U \to V$ be an \'etale morphism of schemes.
+Let $u \in U$ and $v = f(u)$. Then
+$\dim(\mathcal{O}_{U, u}) = \dim(\mathcal{O}_{V, v})$.
+\end{lemma}
+
+\begin{proof}
+The algebraic statement we are asked to prove is the following:
+If $A \to B$ is an \'etale ring map and $\mathfrak q$ is a prime of
+$B$ lying over $\mathfrak p \subset A$, then
+$\dim(A_{\mathfrak p}) = \dim(B_{\mathfrak q})$.
+This is
+More on Algebra, Lemma \ref{more-algebra-lemma-dimension-etale-extension}.
+\end{proof}
+
+\noindent
+Let $(X, x)$ be a germ of a scheme.
+The isomorphism class of the local ring $\mathcal{O}_{X, x}$
+is an invariant of the germ. The following lemma says that the
+property ``$\mathcal{O}_{X, x}$ is regular'' is an \'etale local property
+of germs.
+
+\begin{lemma}
+\label{lemma-regular-local-ring-local}
+Let $f : U \to V$ be an \'etale morphism of schemes.
+Let $u \in U$ and $v = f(u)$. Then
+$\mathcal{O}_{U, u}$ is a regular local ring if and only if
+$\mathcal{O}_{V, v}$ is a regular local ring.
+\end{lemma}
+
+\begin{proof}
+The algebraic statement we are asked to prove is the following:
+If $A \to B$ is an \'etale ring map and $\mathfrak q$ is a prime of
+$B$ lying over $\mathfrak p \subset A$, then
+$A_{\mathfrak p}$ is regular if and only if $B_{\mathfrak q}$ is regular.
+This is More on Algebra, Lemma
+\ref{more-algebra-lemma-regular-etale-extension}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Properties of morphisms local on the target}
+\label{section-descending-properties-morphisms}
+
+\noindent
+Suppose that $f : X \to Y$ is a morphism of schemes.
+Let $g : Y' \to Y$ be a morphism of schemes.
+Let $f' : X' \to Y'$ be the base change of $f$ by $g$:
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_{g'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+Let $\mathcal{P}$ be a property of morphisms of schemes.
+Then we can wonder if (a) $\mathcal{P}(f) \Rightarrow \mathcal{P}(f')$,
+and also whether the converse (b) $\mathcal{P}(f') \Rightarrow \mathcal{P}(f)$
+is true. If (a) holds whenever $g$ is flat, then we say $\mathcal{P}$
+is preserved under flat base change. If (b) holds whenever $g$ is
+surjective and flat, then we say $\mathcal{P}$ descends through
+flat surjective base changes. If $\mathcal{P}$ is preserved under
+flat base changes and descends through flat surjective base changes,
+then we say $\mathcal{P}$ is flat local on the target.
+Compare with the discussion in
+Section \ref{section-descending-properties}.
+This turns out to be a very important notion which
+we formalize in the following definition.
+
+\begin{definition}
+\label{definition-property-morphisms-local}
+Let $\mathcal{P}$ be a property of morphisms of schemes over a base.
+Let $\tau \in \{fpqc, fppf, syntomic, smooth, \etale, Zariski\}$.
+We say $\mathcal{P}$ is {\it $\tau$ local on the base}, or
+{\it $\tau$ local on the target}, or
+{\it local on the base for the $\tau$-topology} if for any
+$\tau$-covering $\{Y_i \to Y\}_{i \in I}$ (see
+Topologies, Section \ref{topologies-section-procedure})
and any morphism of schemes $f : X \to Y$ we
+have
+$$
+f \text{ has }\mathcal{P}
+\Leftrightarrow
+\text{each }Y_i \times_Y X \to Y_i\text{ has }\mathcal{P}.
+$$
+\end{definition}
+
+\noindent
+To be sure, since isomorphisms are always coverings
+we see (or require) that property $\mathcal{P}$ holds for $X \to Y$
+if and only if it holds for any arrow $X' \to Y'$ isomorphic to $X \to Y$.
+If a property is $\tau$-local on the target then it is preserved
by base change along morphisms which occur in $\tau$-coverings. Here
+is a formal statement.
+
+\begin{lemma}
+\label{lemma-pullback-property-local-target}
+Let $\tau \in \{fpqc, fppf, syntomic, smooth, \etale, Zariski\}$.
+Let $\mathcal{P}$ be a property of morphisms which is $\tau$ local
+on the target. Let $f : X \to Y$ have property $\mathcal{P}$.
+For any morphism $Y' \to Y$ which is
+flat, resp.\ flat and locally of finite presentation, resp.\ syntomic,
+resp.\ \'etale, resp.\ an open immersion, the base change
+$f' : Y' \times_Y X \to Y'$ of $f$ has property $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+This is true because we can fit $Y' \to Y$ into a family of
+morphisms which forms a $\tau$-covering.
+\end{proof}
+
+\noindent
A simple, often used consequence of the above is that if
+$f : X \to Y$ has property $\mathcal{P}$ which is $\tau$-local
+on the target and $f(X) \subset V$
+for some open subscheme $V \subset Y$, then also the induced
morphism $X \to V$ has $\mathcal{P}$. Proof: the base change of
$f$ by $V \to Y$ gives $X \to V$.
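
\medskip\noindent
In a formula: since $f(X) \subset V$ we have
$$
V \times_Y X = f^{-1}(V) = X
$$
as schemes over $V$, and the projection to $V$ is the induced
morphism $X \to V$.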
+
+\begin{lemma}
+\label{lemma-largest-open-of-the-base}
+Let $\tau \in \{fppf, syntomic, smooth, \etale\}$.
+Let $\mathcal{P}$ be a property of morphisms which is $\tau$ local
+on the target. For any morphism of schemes $f : X \to Y$ there exists
+a largest open $W(f) \subset Y$ such that the restriction
+$X_{W(f)} \to W(f)$ has $\mathcal{P}$. Moreover,
+\begin{enumerate}
+\item if $g : Y' \to Y$ is flat and locally of finite presentation,
+syntomic, smooth, or \'etale and the base change $f' : X_{Y'} \to Y'$
+has $\mathcal{P}$, then $g(Y') \subset W(f)$,
+\item if $g : Y' \to Y$ is flat and locally of finite presentation,
+syntomic, smooth, or \'etale, then $W(f') = g^{-1}(W(f))$, and
+\item if $\{g_i : Y_i \to Y\}$ is a $\tau$-covering, then
+$g_i^{-1}(W(f)) = W(f_i)$, where $f_i$ is the base change of $f$
+by $Y_i \to Y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Consider the union $W$ of the images $g(Y') \subset Y$ of
+morphisms $g : Y' \to Y$ with the properties:
+\begin{enumerate}
+\item $g$ is flat and locally of finite presentation, syntomic,
+smooth, or \'etale, and
+\item the base change $Y' \times_{g, Y} X \to Y'$ has property
+$\mathcal{P}$.
+\end{enumerate}
+Since such a morphism $g$ is open (see
+Morphisms, Lemma \ref{morphisms-lemma-fppf-open})
+we see that $W \subset Y$ is an open subset of $Y$. Since $\mathcal{P}$
+is local in the $\tau$ topology the restriction $X_W \to W$ has property
+$\mathcal{P}$ because we are given a covering $\{Y' \to W\}$ of $W$ such that
+the pullbacks have $\mathcal{P}$. This proves the existence and proves
+that $W(f)$ has property (1). To see property (2) note that
+$W(f') \supset g^{-1}(W(f))$ because $\mathcal{P}$ is stable under
+base change by flat and locally of finite presentation,
+syntomic, smooth, or \'etale morphisms, see
+Lemma \ref{lemma-pullback-property-local-target}.
+On the other hand, if $Y'' \subset Y'$ is an open such that
+$X_{Y''} \to Y''$ has property $\mathcal{P}$, then $Y'' \to Y$ factors
+through $W$ by construction, i.e., $Y'' \subset g^{-1}(W(f))$. This
+proves (2). Assertion (3) follows from (2) because each morphism
+$Y_i \to Y$ is flat and locally of finite presentation, syntomic,
+smooth, or \'etale by our definition of a $\tau$-covering.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-properties-morphisms}
+Let $\mathcal{P}$ be a property of morphisms of schemes over a base.
+Let $\tau \in \{fpqc, fppf, \etale, smooth, syntomic\}$.
+Assume that
+\begin{enumerate}
+\item the property is preserved under
+flat, flat and locally of finite presentation, \'etale, smooth, or syntomic
+base change depending on whether $\tau$ is fpqc, fppf, \'etale, smooth, or
+syntomic (compare with
+Schemes, Definition \ref{schemes-definition-preserved-by-base-change}),
+\item the property is Zariski local on the base, and
+\item for any surjective morphism of affine schemes $S' \to S$
+which is flat, flat and of finite presentation,
+\'etale, smooth or syntomic depending on whether $\tau$ is
+fpqc, fppf, \'etale, smooth, or syntomic,
+and any morphism of schemes $f : X \to S$ property
+$\mathcal{P}$ holds for $f$ if property $\mathcal{P}$
+holds for the base change $f' : X' = S' \times_S X \to S'$.
+\end{enumerate}
+Then $\mathcal{P}$ is $\tau$ local on the base.
+\end{lemma}
+
+\begin{proof}
+This follows almost immediately from the definition of
+a $\tau$-covering, see
+Topologies, Definitions
+\ref{topologies-definition-fpqc-covering},
+\ref{topologies-definition-fppf-covering},
+\ref{topologies-definition-etale-covering},
+\ref{topologies-definition-smooth-covering}, or
+\ref{topologies-definition-syntomic-covering}
+and Topologies, Lemmas
+\ref{topologies-lemma-fpqc-affine},
+\ref{topologies-lemma-fppf-affine},
+\ref{topologies-lemma-etale-affine},
+\ref{topologies-lemma-smooth-affine}, or
+\ref{topologies-lemma-syntomic-affine}.
+Details omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-descending-properties-morphisms-standard}
+(This is a repeat of Remark \ref{remark-descending-properties-standard} above.)
+In Lemma \ref{lemma-descending-properties-morphisms} above if
+$\tau = smooth$ then in condition (3) we may assume that
+the morphism is a (surjective) standard smooth morphism.
+Similarly, when $\tau = syntomic$ or $\tau = \etale$.
+\end{remark}
+
+
+
+
+\section{Properties of morphisms local in the fpqc topology on the target}
+\label{section-descending-properties-morphisms-fpqc}
+
+\noindent
+In this section we find a large number of properties
+of morphisms of schemes which are local on the base
+in the fpqc topology. By contrast, in
+Examples, Section \ref{examples-section-non-descending-property-projective}
+we will show that the properties ``projective'' and ``quasi-projective''
+are not local on the base even in the Zariski topology.
+
+\begin{lemma}
+\label{lemma-descending-property-quasi-compact}
+The property $\mathcal{P}(f) =$``$f$ is quasi-compact''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A base change of a quasi-compact morphism is quasi-compact, see
+Schemes, Lemma \ref{schemes-lemma-quasi-compact-preserved-base-change}.
+Being quasi-compact is Zariski local on the base, see
+Schemes, Lemma \ref{schemes-lemma-quasi-compact-affine}.
+Finally, let
+$S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is quasi-compact. Then $X'$ is quasi-compact,
+and $X' \to X$ is surjective. Hence $X$ is quasi-compact.
+This implies that $f$ is quasi-compact.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-quasi-separated}
+The property $\mathcal{P}(f) =$``$f$ is quasi-separated''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Any base change of a quasi-separated morphism is quasi-separated, see
+Schemes, Lemma \ref{schemes-lemma-separated-permanence}.
+Being quasi-separated is Zariski local on the base (from the
+definition or by
+Schemes, Lemma \ref{schemes-lemma-characterize-quasi-separated}).
+Finally, let
+$S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is quasi-separated. This means that
+$\Delta' : X' \to X'\times_{S'} X'$ is quasi-compact.
+Note that $\Delta'$ is the base change of $\Delta : X \to X \times_S X$
+via $S' \to S$. By Lemma \ref{lemma-descending-property-quasi-compact}
+this implies $\Delta$ is quasi-compact, and hence $f$ is
+quasi-separated.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-universally-closed}
+The property $\mathcal{P}(f) =$``$f$ is universally closed''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A base change of a universally closed morphism is universally closed
+by definition.
+Being universally closed is Zariski local on the base (from the
+definition or by
+Morphisms, Lemma
+\ref{morphisms-lemma-universally-closed-local-on-the-base}).
+Finally, let
+$S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is universally closed. Let $T \to S$ be any morphism.
+Consider the diagram
+$$
+\xymatrix{
+X' \ar[d] &
+S' \times_S T \times_S X \ar[d] \ar[r] \ar[l] &
+T \times_S X \ar[d] \\
+S' &
+S' \times_S T \ar[r] \ar[l] &
+T
+}
+$$
+in which both squares are cartesian.
+Thus the assumption implies that the middle vertical
+arrow is closed. The right horizontal arrows are flat, quasi-compact
+and surjective (as base changes of $S' \to S$).
+Hence a subset of $T$ is closed if and only if its inverse
+image in $S' \times_S T$ is closed, see Morphisms,
+Lemma \ref{morphisms-lemma-fpqc-quotient-topology}.
+An easy diagram chase shows that the right vertical
+arrow is closed too, and we conclude $X \to S$ is
+universally closed.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-universally-open}
+The property $\mathcal{P}(f) =$``$f$ is universally open''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+The proof is the same as the proof of
+Lemma \ref{lemma-descending-property-universally-closed}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-universally-submersive}
+The property $\mathcal{P}(f) =$``$f$ is universally submersive''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+The proof is the same as the proof of
+Lemma \ref{lemma-descending-property-universally-closed}
+using that a quasi-compact flat surjective morphism is
+universally submersive by
+Morphisms, Lemma \ref{morphisms-lemma-fpqc-quotient-topology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-separated}
+The property $\mathcal{P}(f) =$``$f$ is separated''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A base change of a separated morphism is separated, see
+Schemes, Lemma \ref{schemes-lemma-separated-permanence}.
+Being separated is Zariski local on the base (from the
+definition or by
+Schemes, Lemma \ref{schemes-lemma-characterize-separated}).
+Finally, let
+$S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is separated. This means that
+$\Delta' : X' \to X'\times_{S'} X'$ is a closed immersion,
+hence universally closed.
+Note that $\Delta'$ is the base change of $\Delta : X \to X \times_S X$
+via $S' \to S$. By Lemma \ref{lemma-descending-property-universally-closed}
+this implies $\Delta$ is universally closed. Since it is
+an immersion
+(Schemes, Lemma \ref{schemes-lemma-diagonal-immersion})
+we conclude $\Delta$ is a closed immersion.
+Hence $f$ is separated.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-surjective}
+The property $\mathcal{P}(f) =$``$f$ is surjective''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+This is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-universally-injective}
+The property $\mathcal{P}(f) =$``$f$ is universally injective''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A base change of a universally injective morphism is universally
+injective (this is formal). Being universally injective is Zariski
+local on the base; this is clear from the definition.
+Finally, let
+$S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is universally injective. Let $K$ be a field, and let
+$a, b : \Spec(K) \to X$ be two morphisms such that
+$f \circ a = f \circ b$. As $S' \to S$ is surjective and
+by the discussion in Schemes,
+Section \ref{schemes-section-points} there exists a field
+extension $K'/K$ and a morphism $\Spec(K')
+\to S'$ such that the following solid diagram commutes
+$$
+\xymatrix{
+\Spec(K') \ar[rrd] \ar@{-->}[rd]_{a', b'} \ar[dd] \\
+ &
+X' \ar[r] \ar[d] &
+S' \ar[d] \\
+\Spec(K) \ar[r]^{a, b} &
+X \ar[r] &
+S
+}
+$$
+As the square is cartesian we get the two dotted arrows $a'$, $b'$ making the
+diagram commute. Since $X' \to S'$ is universally injective we get $a' = b'$,
+by
+Morphisms, Lemma \ref{morphisms-lemma-universally-injective}.
+Clearly this forces $a = b$ (by the discussion in Schemes,
+Section \ref{schemes-section-points}).
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+
+\medskip\noindent
+An alternative proof would be to use the characterization of a universally
+injective morphism as one whose diagonal is surjective, see
+Morphisms, Lemma \ref{morphisms-lemma-universally-injective}.
+The lemma then follows from the fact that
+the property of being surjective is fpqc local on the base, see
+Lemma \ref{lemma-descending-property-surjective}.
+(Hint: use that the base change of the diagonal is the diagonal
+of the base change.)
+\end{proof}
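+
+\medskip\noindent
+The parenthetical hint in the alternative proof can be spelled out as
+follows (a sketch only). With $X' = S' \times_S X$ the diagonal of the
+base change is the base change of the diagonal, i.e.,
+$$
+\Delta_{X'/S'} = \Delta_{X/S} \times_S S' :
+X' \longrightarrow X' \times_{S'} X' = (X \times_S X) \times_S S'.
+$$
+Hence if the diagonals of the base changes of $f$ by the members of an
+fpqc covering are surjective, then $\Delta_{X/S}$ is surjective by
+Lemma \ref{lemma-descending-property-surjective}.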
+
+\begin{lemma}
+\label{lemma-descending-property-universal-homeomorphism}
+The property $\mathcal{P}(f) =$``$f$ is a universal homeomorphism''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+This can be proved in exactly the same manner as
+Lemma \ref{lemma-descending-property-universally-closed}.
+Alternatively, one can use that
+a map of topological spaces is a homeomorphism if and only if
+it is injective, surjective, and open. Thus
+a universal homeomorphism is the same thing as a
+surjective, universally injective, and universally open morphism.
+Thus the lemma follows from
+Lemmas \ref{lemma-descending-property-surjective},
+\ref{lemma-descending-property-universally-injective}, and
+\ref{lemma-descending-property-universally-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-locally-finite-type}
+The property $\mathcal{P}(f) =$``$f$ is locally of finite type''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Being locally of finite type is preserved under base change, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-finite-type}.
+Being locally of finite type is Zariski local on the base, see
+Morphisms, Lemma \ref{morphisms-lemma-locally-finite-type-characterize}.
+Finally, let
+$S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is locally of finite type.
+Let $U \subset X$ be an affine open. Then $U' = S' \times_S U$
+is affine and of finite type over $S'$. Write
+$S = \Spec(R)$,
+$S' = \Spec(R')$,
+$U = \Spec(A)$, and
+$U' = \Spec(A')$.
+We know that $R \to R'$ is faithfully flat,
+$A' = R' \otimes_R A$ and $R' \to A'$ is of finite type.
+We have to show that $R \to A$ is of finite type.
+This is the result of
+Algebra, Lemma \ref{algebra-lemma-finite-type-descends}.
+It follows that $f$ is locally of finite type.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
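+
+\medskip\noindent
+The key algebra step may be sketched as follows (an outline only; the
+complete argument is in the algebra lemma cited above). Suppose
+$R \to R'$ is faithfully flat and $A' = R' \otimes_R A$ is generated
+over $R'$ by finitely many elements
+$$
+x_k = \sum\nolimits_j r'_{kj} \otimes a_{kj},
+\quad r'_{kj} \in R',\ a_{kj} \in A.
+$$
+The finitely many elements $a_{kj}$ generate an $R$-subalgebra
+$B \subset A$ of finite type such that
+$R' \otimes_R B \to R' \otimes_R A$ is surjective. By flatness of
+$R'$ over $R$ the cokernel of this map is $R' \otimes_R (A/B)$, and
+by faithful flatness its vanishing forces $A/B = 0$. Thus $B = A$
+and $A$ is of finite type over $R$.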
+
+\begin{lemma}
+\label{lemma-descending-property-locally-finite-presentation}
+The property $\mathcal{P}(f) =$``$f$ is locally of finite presentation''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Being locally of finite presentation is preserved under base change, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-finite-presentation}.
+Being locally of finite presentation is Zariski local on the base, see Morphisms,
+Lemma \ref{morphisms-lemma-locally-finite-presentation-characterize}.
+Finally, let
+$S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is locally of finite presentation.
+Let $U \subset X$ be an affine open. Then $U' = S' \times_S U$
+is affine and of finite presentation over $S'$. Write
+$S = \Spec(R)$,
+$S' = \Spec(R')$,
+$U = \Spec(A)$, and
+$U' = \Spec(A')$.
+We know that $R \to R'$ is faithfully flat,
+$A' = R' \otimes_R A$ and $R' \to A'$ is of finite presentation.
+We have to show that $R \to A$ is of finite presentation.
+This is the result of
+Algebra, Lemma \ref{algebra-lemma-finite-presentation-descends}.
+It follows that $f$ is locally of finite presentation.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-finite-type}
+The property $\mathcal{P}(f) =$``$f$ is of finite type''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-descending-property-quasi-compact}
+and \ref{lemma-descending-property-locally-finite-type}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-finite-presentation}
+The property $\mathcal{P}(f) =$``$f$ is of finite presentation''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-descending-property-quasi-compact},
+\ref{lemma-descending-property-quasi-separated} and
+\ref{lemma-descending-property-locally-finite-presentation}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-proper}
+The property $\mathcal{P}(f) =$``$f$ is proper''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+The lemma follows by combining
+Lemmas \ref{lemma-descending-property-universally-closed},
+\ref{lemma-descending-property-separated}
+and \ref{lemma-descending-property-finite-type}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-flat}
+The property $\mathcal{P}(f) =$``$f$ is flat''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Being flat is preserved under arbitrary base change, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-flat}.
+Being flat is Zariski local on the base by definition.
+Finally, let
+$S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is flat.
+Let $U \subset X$ be an affine open. Then $U' = S' \times_S U$
+is affine. Write
+$S = \Spec(R)$,
+$S' = \Spec(R')$,
+$U = \Spec(A)$, and
+$U' = \Spec(A')$.
+We know that $R \to R'$ is faithfully flat,
+$A' = R' \otimes_R A$ and $R' \to A'$ is flat.
+We have to show that $R \to A$ is flat.
+This follows immediately from
+Algebra, Lemma \ref{algebra-lemma-flatness-descends}.
+Hence $f$ is flat.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
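+
+\medskip\noindent
+The algebraic heart of the proof is the following standard descent
+argument (a sketch; see the algebra lemma cited above). Let
+$N_1 \to N_2$ be an injection of $R$-modules. Using
+$$
+(N_i \otimes_R A) \otimes_R R' \cong (N_i \otimes_R R') \otimes_{R'} A'
+$$
+and the facts that $R'$ is flat over $R$ and $A'$ is flat over $R'$,
+we see that
+$(N_1 \otimes_R A) \otimes_R R' \to (N_2 \otimes_R A) \otimes_R R'$
+is injective. Since $R \to R'$ is faithfully flat, injectivity may be
+checked after base change to $R'$, hence
+$N_1 \otimes_R A \to N_2 \otimes_R A$ is injective and $A$ is flat
+over $R$.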
+
+\begin{lemma}
+\label{lemma-descending-property-open-immersion}
+The property $\mathcal{P}(f) =$``$f$ is an open immersion''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+The property of being an open immersion is stable under base change,
+see Schemes, Lemma \ref{schemes-lemma-base-change-immersion}.
+The property of being an open immersion is Zariski local on the base
+(this is obvious).
+
+\medskip\noindent
+Let $S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is an open immersion. We claim that $f$ is an
+open immersion.
+Since $f'$ is an open immersion, it is universally open and
+universally injective.
+Hence we conclude that $f$ is universally open by
+Lemma \ref{lemma-descending-property-universally-open}, and
+universally injective by
+Lemma \ref{lemma-descending-property-universally-injective}.
+In particular $f(X) \subset S$ is open. If for every affine
+open $U \subset f(X)$ we can prove that $f^{-1}(U) \to U$
+is an isomorphism, then $f$ is an open immersion and we're done.
+If $U' \subset S'$ denotes the inverse image of $U$,
+then $U' \to U$ is a faithfully flat morphism of affines and
+$(f')^{-1}(U') \to U'$ is an isomorphism (as $f'(X')$ contains $U'$
+by our choice of $U$). Thus we reduce to the case discussed
+in the next paragraph.
+
+\medskip\noindent
+Let $S' \to S$ be a flat surjective morphism of affine schemes,
+let $f : X \to S$ be a morphism, and assume that the base change
+$f' : X' \to S'$ is an isomorphism. We have to show that $f$ is an
+isomorphism also. It is clear that $f$ is surjective, universally injective,
+and universally open (see arguments above for the last two).
+Hence $f$ is bijective, i.e., $f$ is a homeomorphism.
+Thus $f$ is affine by
+Morphisms, Lemma \ref{morphisms-lemma-homeomorphism-affine}.
+Since
+$$
+\mathcal{O}(S') \to
+\mathcal{O}(X') =
+\mathcal{O}(S') \otimes_{\mathcal{O}(S)} \mathcal{O}(X)
+$$
+is an isomorphism and since $\mathcal{O}(S) \to \mathcal{O}(S')$
+is faithfully flat this implies that $\mathcal{O}(S) \to \mathcal{O}(X)$
+is an isomorphism. Thus $f$ is an isomorphism. This finishes the proof of
+the claim above.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-isomorphism}
+The property $\mathcal{P}(f) =$``$f$ is an isomorphism''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-descending-property-surjective}
+and \ref{lemma-descending-property-open-immersion}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-affine}
+The property $\mathcal{P}(f) =$``$f$ is affine''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A base change of an affine morphism is affine, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-affine}.
+Being affine is Zariski local on the base, see
+Morphisms, Lemma \ref{morphisms-lemma-characterize-affine}.
+Finally, let
+$g : S' \to S$ be a flat surjective morphism of affine schemes,
+and let $f : X \to S$ be a morphism. Assume that the base change
+$f' : X' \to S'$ is affine. In other words, $X'$ is affine, say
+$X' = \Spec(A')$. Also write $S = \Spec(R)$
+and $S' = \Spec(R')$. We have to show that $X$ is affine.
+
+\medskip\noindent
+By Lemmas \ref{lemma-descending-property-quasi-compact}
+and \ref{lemma-descending-property-separated} we see that
+$X \to S$ is separated and quasi-compact. Thus
+$f_*\mathcal{O}_X$ is a quasi-coherent sheaf of $\mathcal{O}_S$-algebras,
+see Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}.
+Hence $f_*\mathcal{O}_X = \widetilde{A}$ for some $R$-algebra $A$.
+In fact $A = \Gamma(X, \mathcal{O}_X)$ of course.
+Also, by flat base change
+(see for example
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology})
+we have $g^*f_*\mathcal{O}_X = f'_*\mathcal{O}_{X'}$.
+In other words, we have $A' = R' \otimes_R A$.
+Consider the canonical morphism
+$$
+X \longrightarrow \Spec(A)
+$$
+over $S$ from Schemes, Lemma \ref{schemes-lemma-morphism-into-affine}.
+By the above the base change of this morphism to $S'$ is an isomorphism.
+Hence it is an isomorphism by
+Lemma \ref{lemma-descending-property-isomorphism}.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-closed-immersion}
+The property $\mathcal{P}(f) =$``$f$ is a closed immersion''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\{Y_i \to Y\}$ be an fpqc covering.
+Assume that each $f_i : Y_i \times_Y X \to Y_i$
+is a closed immersion.
+This implies that each $f_i$ is affine, see
+Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-affine}.
+By Lemma \ref{lemma-descending-property-affine}
+we conclude that $f$ is affine. It remains to show that
+$\mathcal{O}_Y \to f_*\mathcal{O}_X$ is surjective.
+For every $y \in Y$ there exists an $i$ and a point
+$y_i \in Y_i$ mapping to $y$.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}
+the sheaf $f_{i, *}(\mathcal{O}_{Y_i \times_Y X})$
+is the pullback of $f_*\mathcal{O}_X$.
+By assumption it is a quotient of $\mathcal{O}_{Y_i}$.
+Hence we see that
+$$
+\Big(
+\mathcal{O}_{Y, y} \longrightarrow (f_*\mathcal{O}_X)_y
+\Big)
+\otimes_{\mathcal{O}_{Y, y}} \mathcal{O}_{Y_i, y_i}
+$$
+is surjective. Since $\mathcal{O}_{Y_i, y_i}$ is faithfully
+flat over $\mathcal{O}_{Y, y}$ this implies the surjectivity
+of $\mathcal{O}_{Y, y} \longrightarrow (f_*\mathcal{O}_X)_y$ as
+desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-quasi-affine}
+The property $\mathcal{P}(f) =$``$f$ is quasi-affine''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\{g_i : Y_i \to Y\}$ be an fpqc covering.
+Assume that each $f_i : Y_i \times_Y X \to Y_i$
+is quasi-affine.
+This implies that each $f_i$ is quasi-compact and separated.
+By Lemmas \ref{lemma-descending-property-quasi-compact}
+and \ref{lemma-descending-property-separated}
+this implies that $f$ is quasi-compact and separated.
+Consider the sheaf of $\mathcal{O}_Y$-algebras
+$\mathcal{A} = f_*\mathcal{O}_X$.
+By Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}
+it is a quasi-coherent $\mathcal{O}_Y$-algebra.
+Consider the canonical morphism
+$$
+j : X \longrightarrow \underline{\Spec}_Y(\mathcal{A})
+$$
+see Constructions, Lemma \ref{constructions-lemma-canonical-morphism}.
+By flat base change
+(see for example
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology})
+we have $g_i^*f_*\mathcal{O}_X = f_{i, *}\mathcal{O}_{X_i}$,
+where $X_i = Y_i \times_Y X$.
+Hence the base change $j_i$ of $j$ by $g_i$ is the canonical
+morphism of Constructions, Lemma \ref{constructions-lemma-canonical-morphism}
+for the morphism $f_i$. By assumption and
+Morphisms, Lemma \ref{morphisms-lemma-characterize-quasi-affine}
+all of these
+morphisms $j_i$ are quasi-compact open immersions. Hence, by
+Lemmas \ref{lemma-descending-property-quasi-compact} and
+\ref{lemma-descending-property-open-immersion} we
+see that $j$ is a quasi-compact open immersion.
+Hence by
+Morphisms, Lemma \ref{morphisms-lemma-characterize-quasi-affine}
+again we conclude that $f$ is quasi-affine.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-quasi-compact-immersion}
+The property $\mathcal{P}(f) =$``$f$ is a quasi-compact immersion''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\{Y_i \to Y\}$ be an fpqc covering.
+Write $X_i = Y_i \times_Y X$ and $f_i : X_i \to Y_i$
+the base change of $f$. Also denote
+$q_i : Y_i \to Y$ the given flat morphisms.
+Assume each $f_i$ is a quasi-compact immersion.
+By Schemes, Lemma \ref{schemes-lemma-immersions-monomorphisms}
+each $f_i$ is separated.
+By Lemmas \ref{lemma-descending-property-quasi-compact} and
+\ref{lemma-descending-property-separated}
+this implies that $f$ is quasi-compact and separated.
+Let $X \to Z \to Y$ be the factorization of $f$ through its
+scheme theoretic image. By
+Morphisms, Lemma \ref{morphisms-lemma-quasi-compact-scheme-theoretic-image}
+the closed subscheme $Z \subset Y$ is cut out by the
+quasi-coherent sheaf of ideals
+$\mathcal{I} = \Ker(\mathcal{O}_Y \to f_*\mathcal{O}_X)$
+as $f$ is quasi-compact. By flat base change
+(see for example
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology};
+here we use $f$ is separated)
+we see $f_{i, *}\mathcal{O}_{X_i}$ is the pullback $q_i^*f_*\mathcal{O}_X$.
+Hence $Y_i \times_Y Z$ is cut out by the
+quasi-coherent sheaf of ideals $q_i^*\mathcal{I} =
+\Ker(\mathcal{O}_{Y_i} \to f_{i, *}\mathcal{O}_{X_i})$.
+By Morphisms, Lemma \ref{morphisms-lemma-quasi-compact-immersion}
+the morphisms $X_i \to Y_i \times_Y Z$
+are open immersions. Hence by
+Lemma \ref{lemma-descending-property-open-immersion}
+we see that $X \to Z$ is an open immersion and
+hence $f$ is an immersion as desired
+(we already saw it was quasi-compact).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-integral}
+The property $\mathcal{P}(f) =$``$f$ is integral''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+An integral morphism is the same thing as an affine,
+universally closed morphism. See
+Morphisms, Lemma \ref{morphisms-lemma-integral-universally-closed}.
+Hence the lemma follows on combining
+Lemmas \ref{lemma-descending-property-universally-closed}
+and \ref{lemma-descending-property-affine}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-finite}
+The property $\mathcal{P}(f) =$``$f$ is finite''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A finite morphism is the same thing as an integral
+morphism which is locally of finite type. See
+Morphisms, Lemma \ref{morphisms-lemma-finite-integral}.
+Hence the lemma follows on combining
+Lemmas \ref{lemma-descending-property-locally-finite-type}
+and \ref{lemma-descending-property-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-quasi-finite}
+The properties
+$\mathcal{P}(f) =$``$f$ is locally quasi-finite''
+and
+$\mathcal{P}(f) =$``$f$ is quasi-finite''
+are fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Let $f : X \to S$ be a morphism of schemes, and let $\{S_i \to S\}$
+be an fpqc covering such that each base change
+$f_i : X_i \to S_i$ is locally quasi-finite.
+We have already seen
+(Lemma \ref{lemma-descending-property-locally-finite-type})
+that ``locally of finite type'' is fpqc local
+on the base, and hence we see that $f$ is locally of finite type.
+Then it follows from
+Morphisms, Lemma \ref{morphisms-lemma-base-change-quasi-finite}
+that $f$ is locally quasi-finite. The quasi-finite case follows
+as we have already seen that ``quasi-compact'' is fpqc local on the base
+(Lemma \ref{lemma-descending-property-quasi-compact}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-relative-dimension-d}
+The property $\mathcal{P}(f) =$``$f$ is locally of finite type
+of relative dimension $d$'' is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from the fact that being locally of finite
+type is fpqc local on the base and
+Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-syntomic}
+The property $\mathcal{P}(f) =$``$f$ is syntomic''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A morphism is syntomic if and only if it is locally of finite presentation,
+flat, and has locally complete intersections as fibres. We have seen
+already that being flat and locally of finite presentation are
+fpqc local on the base (Lemmas
+\ref{lemma-descending-property-flat}, and
+\ref{lemma-descending-property-locally-finite-presentation}).
+Hence the result follows for syntomic from
+Morphisms, Lemma \ref{morphisms-lemma-set-points-where-fibres-lci}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-smooth}
+The property $\mathcal{P}(f) =$``$f$ is smooth''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A morphism is smooth if and only if it is locally of finite presentation,
+flat, and has smooth fibres. We have seen
+already that being flat and locally of finite presentation are
+fpqc local on the base (Lemmas
+\ref{lemma-descending-property-flat}, and
+\ref{lemma-descending-property-locally-finite-presentation}).
+Hence the result follows for smooth from
+Morphisms, Lemma \ref{morphisms-lemma-set-points-where-fibres-smooth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-unramified}
+The property $\mathcal{P}(f) =$``$f$ is unramified''
+is fpqc local on the base.
+The property $\mathcal{P}(f) =$``$f$ is G-unramified''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A morphism is unramified (resp.\ G-unramified) if and only if it is
+locally of finite type (resp.\ finite presentation)
+and its diagonal morphism is an open immersion (see
+Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism}).
+We have seen already that being locally of finite type
+(resp.\ locally of finite presentation) and an open immersion is
+fpqc local on the base (Lemmas
+\ref{lemma-descending-property-locally-finite-presentation},
+\ref{lemma-descending-property-locally-finite-type}, and
+\ref{lemma-descending-property-open-immersion}).
+Hence the result follows formally.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-etale}
+The property $\mathcal{P}(f) =$``$f$ is \'etale''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+A morphism is \'etale if and only if it is flat and G-unramified.
+See Morphisms, Lemma \ref{morphisms-lemma-flat-unramified-etale}.
+We have seen already that being flat and G-unramified
+are fpqc local on the base (Lemmas
+\ref{lemma-descending-property-flat}, and
+\ref{lemma-descending-property-unramified}).
+Hence the result follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-finite-locally-free}
+The property $\mathcal{P}(f) =$``$f$ is finite locally free''
+is fpqc local on the base.
+Let $d \geq 0$.
+The property $\mathcal{P}(f) =$``$f$ is finite locally free of degree $d$''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Being finite locally free is equivalent to being
+finite, flat and locally of finite presentation
+(Morphisms, Lemma \ref{morphisms-lemma-finite-flat}).
+Hence this follows from Lemmas
+\ref{lemma-descending-property-finite},
+\ref{lemma-descending-property-flat}, and
+\ref{lemma-descending-property-locally-finite-presentation}.
+If $f : Z \to U$ is finite locally free, and $\{U_i \to U\}$ is a surjective
+family of morphisms such that each pullback $Z \times_U U_i \to U_i$ has
+degree $d$, then $Z \to U$ has degree $d$, for example because we
+can read off the degree at a point $u \in U$ from the fibre
+$(f_*\mathcal{O}_Z)_u \otimes_{\mathcal{O}_{U, u}} \kappa(u)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-monomorphism}
+The property $\mathcal{P}(f) =$``$f$ is a monomorphism''
+is fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\{S_i \to S\}$ be an fpqc covering, and assume
+each of the base changes $f_i : X_i \to S_i$ of $f$ is
+a monomorphism. Let $a, b : T \to X$ be two morphisms
+such that $f \circ a = f \circ b$. We have to show that $a = b$.
+Since $f_i$ is a monomorphism we see that $a_i = b_i$, where
+$a_i, b_i : S_i \times_S T \to X_i$ are
+the base changes. In particular the compositions
+$S_i \times_S T \to T \to X$ are equal.
+Since $\coprod S_i \times_S T \to T$
+is an epimorphism (see
+e.g.\ Lemma \ref{lemma-fpqc-universal-effective-epimorphisms})
+we conclude $a = b$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-regular-immersion}
+The properties
+\begin{enumerate}
+\item[] $\mathcal{P}(f) =$``$f$ is a Koszul-regular immersion'',
+\item[] $\mathcal{P}(f) =$``$f$ is an $H_1$-regular immersion'', and
+\item[] $\mathcal{P}(f) =$``$f$ is a quasi-regular immersion''
+\end{enumerate}
+are fpqc local on the base.
+\end{lemma}
+
+\begin{proof}
+We will use the criterion of
+Lemma \ref{lemma-descending-properties-morphisms}
+to prove this. By
+Divisors, Definition \ref{divisors-definition-regular-immersion}
+being a Koszul-regular (resp.\ $H_1$-regular, quasi-regular)
+immersion is Zariski local on the base. By
+Divisors, Lemma \ref{divisors-lemma-flat-base-change-regular-immersion}
+being a Koszul-regular (resp.\ $H_1$-regular, quasi-regular)
+immersion is preserved under flat base change.
+The final hypothesis (3) of
+Lemma \ref{lemma-descending-properties-morphisms}
+translates into the following algebra statement:
+Let $A \to B$ be a faithfully flat ring map. Let $I \subset A$ be an ideal.
+If $IB$ is locally on $\Spec(B)$ generated by a Koszul-regular
+(resp.\ $H_1$-regular, quasi-regular) sequence in $B$, then $I \subset A$
+is locally on $\Spec(A)$ generated by a Koszul-regular
+(resp.\ $H_1$-regular, quasi-regular) sequence in $A$. This is
+More on Algebra, Lemma \ref{more-algebra-lemma-flat-descent-regular-ideal}.
+\end{proof}
+
+
+
+
+
+\section{Properties of morphisms local in the fppf topology on the target}
+\label{section-descending-properties-morphisms-fppf}
+
+\noindent
+In this section we find some properties of morphisms of schemes
+for which we could not (yet) show they are local on the base in
+the fpqc topology which, however, are local on the base
+in the fppf topology.
+
+\begin{lemma}
+\label{lemma-descending-fppf-property-immersion}
+The property $\mathcal{P}(f) =$``$f$ is an immersion''
+is fppf local on the base.
+\end{lemma}
+
+\begin{proof}
+The property of being an immersion is stable under base change,
+see Schemes, Lemma \ref{schemes-lemma-base-change-immersion}.
+The property of being an immersion is Zariski local on the base.
+Finally, let
+$\pi : S' \to S$ be a surjective morphism of affine schemes,
+which is flat and locally of finite presentation.
+Note that $\pi : S' \to S$ is open by
+Morphisms, Lemma \ref{morphisms-lemma-fppf-open}.
+Let $f : X \to S$ be a morphism.
+Assume that the base change $f' : X' \to S'$ is an immersion.
+In particular we see that $f'(X') = \pi^{-1}(f(X))$ is locally closed.
+Hence by Topology, Lemma \ref{topology-lemma-open-morphism-quotient-topology}
+we see that $f(X) \subset S$
+is locally closed. Let $Z \subset S$ be
+the closed subset $Z = \overline{f(X)} \setminus f(X)$ and set
+$Z' = \pi^{-1}(Z)$.
+By Topology, Lemma \ref{topology-lemma-open-morphism-quotient-topology}
+again we see that $f'(X')$ is closed in $S' \setminus Z'$.
+Hence we may apply Lemma \ref{lemma-descending-property-closed-immersion}
+to the fpqc covering $\{S' \setminus Z' \to S \setminus Z\}$
+and conclude that $f : X \to S \setminus Z$ is a closed
+immersion. In other words, $f$ is an immersion.
+Therefore Lemma \ref{lemma-descending-properties-morphisms} applies and we win.
+\end{proof}
+
+
+
+
+
+
+\section{Application of fpqc descent of properties of morphisms}
+\label{section-application-descending-properties-morphisms}
+
+\noindent
+The following lemma may seem a bit frivolous but turns out to be a useful
+tool in studying \'etale and unramified morphisms.
+
+\begin{lemma}
+\label{lemma-flat-surjective-quasi-compact-monomorphism-isomorphism}
+Let $f : X \to Y$ be a flat, quasi-compact, surjective monomorphism.
+Then $f$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+As $f$ is a flat, quasi-compact, surjective morphism
+we see $\{X \to Y\}$ is an fpqc covering of $Y$.
+The diagonal $\Delta : X \to X \times_Y X$ is an isomorphism.
+This implies that the base change of $f$ by $f$ is an
+isomorphism. Hence we see $f$ is an isomorphism by
+Lemma \ref{lemma-descending-property-isomorphism}.
+\end{proof}
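+
+\noindent
+In the affine case the lemma reduces to an algebra fact: a faithfully
+flat ring map $A \to B$ which is an epimorphism of rings (equivalently,
+the multiplication map $B \otimes_A B \to B$ is an isomorphism) is an
+isomorphism. Namely, $A \to B$ is injective by faithful flatness, and
+tensoring the exact sequence $A \to B \to B/A \to 0$ with $B$ gives
+$$
+B \longrightarrow B \otimes_A B \longrightarrow (B/A) \otimes_A B
+\longrightarrow 0
+$$
+in which the first map $b \mapsto 1 \otimes b$ is an isomorphism, being
+a section of the multiplication map. Hence $(B/A) \otimes_A B = 0$ and
+therefore $B/A = 0$ by faithful flatness.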
+
+\noindent
+We can use this lemma to show the following important result; we also
+give a proof avoiding fpqc descent.
+We will discuss this and related results in more detail in
+\'Etale Morphisms, Section \ref{etale-section-topological-etale}.
+
+\begin{lemma}
+\label{lemma-universally-injective-etale-open-immersion}
+A universally injective \'etale morphism is an open immersion.
+\end{lemma}
+
+\begin{proof}[First proof]
+Let $f : X \to Y$ be an \'etale morphism which is universally injective.
+Then $f$ is open
+(Morphisms, Lemma \ref{morphisms-lemma-etale-open})
+hence we can replace $Y$ by $f(X)$ and we may assume that $f$ is surjective.
+Then $f$ is bijective and open hence a homeomorphism. Hence $f$ is
+quasi-compact. Thus by
+Lemma \ref{lemma-flat-surjective-quasi-compact-monomorphism-isomorphism}
+it suffices to show that $f$ is a monomorphism. As $X \to Y$ is \'etale
+the morphism $\Delta_{X/Y} : X \to X \times_Y X$ is an open immersion by
+Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism}
+(and
+Morphisms, Lemma \ref{morphisms-lemma-flat-unramified-etale}).
+As $f$ is universally injective $\Delta_{X/Y}$ is also surjective, see
+Morphisms, Lemma \ref{morphisms-lemma-universally-injective}.
+Hence $\Delta_{X/Y}$ is an isomorphism, i.e., $X \to Y$ is a monomorphism.
+\end{proof}
+
+\begin{proof}[Second proof]
+Let $f : X \to Y$ be an \'etale morphism which is universally injective.
+Then $f$ is open (Morphisms, Lemma \ref{morphisms-lemma-etale-open})
+hence we can replace $Y$ by $f(X)$ and we may assume that $f$ is surjective.
+Since the hypotheses remain satisfied after any base change, we conclude
+that $f$ is a universal homeomorphism. Therefore $f$ is integral, see
+Morphisms, Lemma \ref{morphisms-lemma-universal-homeomorphism}.
+It follows that $f$ is finite by
+Morphisms, Lemma \ref{morphisms-lemma-finite-integral}.
+It follows that $f$ is finite locally free by
+Morphisms, Lemma \ref{morphisms-lemma-finite-flat}.
+To finish the proof, it suffices to show that $f$ is finite locally
+free of degree $1$ (a finite locally free morphism of degree $1$
+is an isomorphism).
+There is a decomposition of $Y$ into open and closed subschemes
+$V_d$ such that $f^{-1}(V_d) \to V_d$ is finite locally free of
+degree $d$, see Morphisms, Lemma \ref{morphisms-lemma-finite-locally-free}.
+If $V_d$ is not empty, we can pick a morphism $\Spec(k) \to V_d \subset Y$
+where $k$ is an algebraically closed field (just take the algebraic
+closure of the residue field of some point of $V_d$).
+Then $\Spec(k) \times_Y X \to \Spec(k)$ is a disjoint union of
+copies of $\Spec(k)$, by
+Morphisms, Lemma \ref{morphisms-lemma-etale-over-field}
+and the fact that $k$ is algebraically closed.
+However, since $f$ is universally injective, there can only be
+one copy and hence $d = 1$ as desired.
+\end{proof}
+
+\noindent
+We can reformulate the hypotheses in the lemma above a bit by using the
+following characterization of flat universally injective morphisms.
+
+\begin{lemma}
+\label{lemma-flat-universally-injective}
+Let $f : X \to Y$ be a morphism of schemes. Let $X^0$ denote the set
+of generic points of irreducible components of $X$. If
+\begin{enumerate}
+\item $f$ is flat and separated,
+\item for $\xi \in X^0$ we have $\kappa(f(\xi)) = \kappa(\xi)$, and
+\item if $\xi, \xi' \in X^0$, $\xi \not = \xi'$, then $f(\xi) \not = f(\xi')$,
+\end{enumerate}
+then $f$ is universally injective.
+\end{lemma}
+
+\begin{proof}
+We have to show that $\Delta : X \to X \times_Y X$ is surjective, see
+Morphisms, Lemma \ref{morphisms-lemma-universally-injective}.
+As $X \to Y$ is separated, the image of $\Delta$ is closed.
+Thus if $\Delta$ is not surjective, we can find a generic point
+$\eta \in X \times_Y X$ of an irreducible component of $X \times_Y X$
+which is not in the image of $\Delta$. The projection
+$\text{pr}_1 : X \times_Y X \to X$
+is flat as a base change of the flat morphism $X \to Y$, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-flat}.
+Hence generalizations lift along $\text{pr}_1$, see
+Morphisms, Lemma \ref{morphisms-lemma-generalizations-lift-flat}.
+We conclude that $\xi = \text{pr}_1(\eta) \in X^0$.
+However, assumptions (2) and (3) guarantee that the scheme
+$(X \times_Y X)_{f(\xi)}$ has at most one point for every $\xi \in X^0$.
+In other words, we have $\Delta(\xi) = \eta$, a contradiction.
+\end{proof}
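+
+\noindent
+To spell out the last step of the proof: since $\text{pr}_2$ is flat as
+well, the same argument shows $\xi' = \text{pr}_2(\eta)$ lies in $X^0$,
+and $f(\xi') = f(\xi)$. Assumption (3) forces $\xi' = \xi$, and then
+assumption (2) gives
+$$
+\kappa(\xi) \otimes_{\kappa(f(\xi))} \kappa(\xi) =
+\kappa(f(\xi)) \otimes_{\kappa(f(\xi))} \kappa(f(\xi)) = \kappa(f(\xi)).
+$$
+Thus there is exactly one point of $X \times_Y X$ lying over the pair
+$(\xi, \xi)$, namely $\Delta(\xi)$, contradicting the choice of $\eta$.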
+
+\noindent
+Thus we can reformulate
+Lemma \ref{lemma-universally-injective-etale-open-immersion} as follows.
+
+\begin{lemma}
+\label{lemma-characterize-open-immersion}
+Let $f : X \to Y$ be a morphism of schemes. Let $X^0$ denote the set
+of generic points of irreducible components of $X$. If
+\begin{enumerate}
+\item $f$ is \'etale and separated,
+\item for $\xi \in X^0$ we have $\kappa(f(\xi)) = \kappa(\xi)$, and
+\item if $\xi, \xi' \in X^0$, $\xi \not = \xi'$, then $f(\xi) \not = f(\xi')$,
+\end{enumerate}
+then $f$ is an open immersion.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemmas \ref{lemma-flat-universally-injective} and
+\ref{lemma-universally-injective-etale-open-immersion}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-proper-over-base}
+Let $f : X \to Y$ be a morphism of schemes which is locally of finite type.
+Let $Z$ be a closed subset of $X$. If there exists an fpqc covering
+$\{Y_i \to Y\}$ such that the inverse image $Z_i \subset Y_i \times_Y X$
+is proper over $Y_i$
+(Cohomology of Schemes, Definition \ref{coherent-definition-proper-over-base})
+then $Z$ is proper over $Y$.
+\end{lemma}
+
+\begin{proof}
+Endow $Z$ with the reduced induced closed subscheme structure, see
+Schemes, Definition \ref{schemes-definition-reduced-induced-scheme}.
+For every $i$ the base change $Y_i \times_Y Z$ is a closed subscheme
+of $Y_i \times_Y X$ whose underlying closed subset is $Z_i$.
+By definition (via
+Cohomology of Schemes, Lemma \ref{coherent-lemma-closed-proper-over-base})
+we conclude that the projections $Y_i \times_Y Z \to Y_i$ are proper
+morphisms. Hence $Z \to Y$ is a proper morphism by
+Lemma \ref{lemma-descending-property-proper}.
+Thus $Z$ is proper over $Y$ by definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descending-property-ample}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $\{g_i : S_i \to S\}_{i \in I}$ be an fpqc covering.
+Let $f_i : X_i \to S_i$ be the base change of $f$ and let $\mathcal{L}_i$
+be the pullback of $\mathcal{L}$ to $X_i$.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{L}$ is ample on $X/S$, and
+\item $\mathcal{L}_i$ is ample on $X_i/S_i$
+for every $i \in I$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) follows from
+Morphisms, Lemma \ref{morphisms-lemma-ample-base-change}.
+Assume $\mathcal{L}_i$ is ample on $X_i/S_i$ for every $i \in I$.
+By Morphisms, Definition \ref{morphisms-definition-relatively-ample}
+this implies that $X_i \to S_i$ is quasi-compact and by
+Morphisms, Lemma \ref{morphisms-lemma-relatively-ample-separated}
+this implies $X_i \to S_i$ is separated.
+Hence $f$ is quasi-compact and separated by
+Lemmas \ref{lemma-descending-property-quasi-compact} and
+\ref{lemma-descending-property-separated}.
+
+\medskip\noindent
+This means that
+$\mathcal{A} = \bigoplus_{d \geq 0} f_*\mathcal{L}^{\otimes d}$
+is a quasi-coherent graded $\mathcal{O}_S$-algebra
+(Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}).
+Moreover, the formation of $\mathcal{A}$ commutes with flat
+base change by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}.
+In particular, if we set
+$\mathcal{A}_i = \bigoplus_{d \geq 0} f_{i, *}\mathcal{L}_i^{\otimes d}$
+then we have $\mathcal{A}_i = g_i^*\mathcal{A}$.
+It follows that the natural maps
+$\psi_d : f^*\mathcal{A}_d \to \mathcal{L}^{\otimes d}$
+of $\mathcal{O}_X$-modules
+pull back to give the natural maps
+$\psi_{i, d} : f_i^*(\mathcal{A}_i)_d \to \mathcal{L}_i^{\otimes d}$
+of $\mathcal{O}_{X_i}$-modules. Since $\mathcal{L}_i$ is ample on $X_i/S_i$
+we see that for any point $x_i \in X_i$, there exists a $d \geq 1$
+such that $f_i^*(\mathcal{A}_i)_d \to \mathcal{L}_i^{\otimes d}$
+is surjective on stalks at $x_i$. This follows either directly
+from the definition of a relatively ample module or from
+Morphisms, Lemma \ref{morphisms-lemma-characterize-relatively-ample}.
+If $x \in X$, then we can choose an $i$ and an $x_i \in X_i$
+mapping to $x$. Since $\mathcal{O}_{X, x} \to \mathcal{O}_{X_i, x_i}$
+is flat hence faithfully flat, we conclude that for every $x \in X$
+there exists a $d \geq 1$ such that
+$f^*\mathcal{A}_d \to \mathcal{L}^{\otimes d}$
+is surjective on stalks at $x$.
+This implies that the open subset $U(\psi) \subset X$ of
+Constructions, Lemma
+\ref{constructions-lemma-invertible-map-into-relative-proj}
+corresponding to the map
+$\psi : f^*\mathcal{A} \to \bigoplus_{d \geq 0} \mathcal{L}^{\otimes d}$
+of graded $\mathcal{O}_X$-algebras
+is equal to $X$. Consider the corresponding morphism
+$$
+r_{\mathcal{L}, \psi} : X \longrightarrow \underline{\text{Proj}}_S(\mathcal{A})
+$$
+It is clear from the above that the base change of
+$r_{\mathcal{L}, \psi}$ to $S_i$ is the morphism
+$r_{\mathcal{L}_i, \psi_i}$ which is an open immersion by
+Morphisms, Lemma \ref{morphisms-lemma-characterize-relatively-ample}.
+Hence $r_{\mathcal{L}, \psi}$ is an open immersion
+by Lemma \ref{lemma-descending-property-open-immersion}
+and we conclude $\mathcal{L}$ is ample on $X/S$ by
+Morphisms, Lemma \ref{morphisms-lemma-characterize-relatively-ample}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Properties of morphisms local on the source}
+\label{section-properties-morphisms-local-source}
+
+\noindent
+It often happens that one can prove a morphism has a certain property
+after precomposing with some other morphism. In many cases this
+implies the morphism has the property too. We formalize
+this in the following definition.
+
+\begin{definition}
+\label{definition-property-morphisms-local-source}
+Let $\mathcal{P}$ be a property of morphisms of schemes.
+Let $\tau \in \{Zariski, \linebreak[0] fpqc, \linebreak[0] fppf, \linebreak[0]
+\etale, \linebreak[0] smooth, \linebreak[0] syntomic\}$.
+We say $\mathcal{P}$ is
+{\it $\tau$ local on the source}, or
+{\it local on the source for the $\tau$-topology} if for
+any morphism of schemes $f : X \to Y$, and any
+$\tau$-covering $\{X_i \to X\}_{i \in I}$ we
+have
+$$
+f \text{ has }\mathcal{P}
+\Leftrightarrow
+\text{each }X_i \to Y\text{ has }\mathcal{P}.
+$$
+\end{definition}
+
+\noindent
+To be sure, since isomorphisms are always coverings
+we see (or require) that property $\mathcal{P}$ holds for $X \to Y$
+if and only if it holds for any arrow $X' \to Y'$ isomorphic to $X \to Y$.
+If a property is $\tau$-local on the source then it is preserved by
+precomposing with morphisms which occur in $\tau$-coverings. Here
+is a formal statement.
+
+\begin{lemma}
+\label{lemma-precompose-property-local-source}
+Let $\tau \in \{fpqc, fppf, syntomic, smooth, \etale, Zariski\}$.
+Let $\mathcal{P}$ be a property of morphisms which is $\tau$ local
+on the source. Let $f : X \to Y$ have property $\mathcal{P}$.
+For any morphism $a : X' \to X$ which is
+flat, resp.\ flat and locally of finite presentation, resp.\ syntomic,
+resp.\ \'etale, resp.\ an open immersion, the composition
+$f \circ a : X' \to Y$ has property $\mathcal{P}$.
+\end{lemma}
+
+\begin{proof}
+This is true because we can fit $X' \to X$ into a family of
+morphisms which forms a $\tau$-covering.
+\end{proof}
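+
+\noindent
+For instance, if $a : X' \to X$ is \'etale and $\tau = \etale$, then
+$\{a : X' \to X, \ \text{id} : X \to X\}$ is an \'etale covering of $X$
+(the members are \'etale and jointly surjective), and the implication
+``$f$ has $\mathcal{P}$ $\Rightarrow$ each member composed with $f$ has
+$\mathcal{P}$'' applied to this covering gives exactly that
+$f \circ a$ has $\mathcal{P}$.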
+
+\begin{lemma}
+\label{lemma-largest-open-of-the-source}
+Let $\tau \in \{fppf, syntomic, smooth, \etale\}$.
+Let $\mathcal{P}$ be a property of morphisms which is $\tau$ local
+on the source. For any morphism of schemes $f : X \to Y$ there exists
+a largest open $W(f) \subset X$ such that the restriction
+$f|_{W(f)} : W(f) \to Y$ has $\mathcal{P}$. Moreover,
+if $g : X' \to X$ is flat and locally of finite presentation,
+syntomic, smooth, or \'etale and $f' = f \circ g : X' \to Y$, then
+$g^{-1}(W(f)) = W(f')$.
+\end{lemma}
+
+\begin{proof}
+Consider the union $W$ of the images $g(X') \subset X$ of
+morphisms $g : X' \to X$ with the properties:
+\begin{enumerate}
+\item $g$ is flat and locally of finite presentation, syntomic,
+smooth, or \'etale, and
+\item the composition $X' \to X \to Y$ has property $\mathcal{P}$.
+\end{enumerate}
+Since such a morphism $g$ is open (see
+Morphisms, Lemma \ref{morphisms-lemma-fppf-open})
+we see that $W \subset X$ is an open subset of $X$. Since $\mathcal{P}$
+is local in the $\tau$-topology the restriction $f|_W : W \to Y$ has property
+$\mathcal{P}$ because we are given a $\tau$-covering $\{X' \to W\}$ of $W$
+such that the pullbacks have $\mathcal{P}$. This proves the existence of $W(f)$.
+The compatibility stated in the last sentence follows immediately
+from the construction of $W(f)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-properties-morphisms-local-source}
+Let $\mathcal{P}$ be a property of morphisms of schemes.
+Let $\tau \in \{fpqc, \linebreak[0] fppf, \linebreak[0]
+\etale, \linebreak[0] smooth, \linebreak[0] syntomic\}$.
+Assume that
+\begin{enumerate}
+\item the property is preserved under precomposing with
+flat, flat locally of finite presentation, \'etale, smooth or syntomic morphisms
+depending on whether $\tau$ is fpqc, fppf, \'etale, smooth, or syntomic,
+\item the property is Zariski local on the source,
+\item the property is Zariski local on the target,
+\item for any morphism of affine schemes $f : X \to Y$, and
+any surjective morphism of affine schemes $X' \to X$
+which is flat, flat of finite presentation,
+\'etale, smooth or syntomic depending on whether $\tau$ is
+fpqc, fppf, \'etale, smooth, or syntomic, property
+$\mathcal{P}$ holds for $f$ if property $\mathcal{P}$
+holds for the composition $f' : X' \to Y$.
+\end{enumerate}
+Then $\mathcal{P}$ is $\tau$ local on the source.
+\end{lemma}
+
+\begin{proof}
+This follows almost immediately from the definition of
+a $\tau$-covering, see
+Topologies, Definitions
+\ref{topologies-definition-fpqc-covering},
+\ref{topologies-definition-fppf-covering},
+\ref{topologies-definition-etale-covering},
+\ref{topologies-definition-smooth-covering}, or
+\ref{topologies-definition-syntomic-covering},
+and Topologies, Lemmas
+\ref{topologies-lemma-fpqc-affine},
+\ref{topologies-lemma-fppf-affine},
+\ref{topologies-lemma-etale-affine},
+\ref{topologies-lemma-smooth-affine}, or
+\ref{topologies-lemma-syntomic-affine}.
+Details omitted. (Hint: Use locality on the source and target to
+reduce the verification of property $\mathcal{P}$ to the case of
+a morphism between affines. Then apply (1) and (4).)
+\end{proof}
+
+\begin{remark}
+\label{remark-properties-morphisms-local-source-standard}
+(This is a repeat of
+Remarks \ref{remark-descending-properties-standard}
+and \ref{remark-descending-properties-morphisms-standard} above.)
+In Lemma \ref{lemma-properties-morphisms-local-source} above if
+$\tau = smooth$ then in condition (4) we may assume that
+the morphism is a (surjective) standard smooth morphism.
+Similarly, when $\tau = syntomic$ or $\tau = \etale$.
+\end{remark}
+
+
+
+\section{Properties of morphisms local in the fpqc topology on the source}
+\label{section-fpqc-local-source}
+
+\noindent
+Here are some properties of morphisms that are fpqc local on the source.
+
+\begin{lemma}
+\label{lemma-flat-fpqc-local-source}
+The property $\mathcal{P}(f)=$``$f$ is flat'' is fpqc local on the source.
+\end{lemma}
+
+\begin{proof}
+Since flatness is defined in terms of the maps of local rings
+(Morphisms, Definition \ref{morphisms-definition-flat})
+what has to be shown is the following
+algebraic fact: Suppose $A \to B \to C$ are local homomorphisms of local
+rings, and assume $B \to C$ is flat. Then $A \to B$ is
+flat if and only if $A \to C$ is flat.
+If $A \to B$ is flat, then $A \to C$ is flat by
+Algebra, Lemma \ref{algebra-lemma-composition-flat}.
+Conversely, assume $A \to C$ is flat.
+Note that $B \to C$ is faithfully
+flat, see
+Algebra, Lemma \ref{algebra-lemma-local-flat-ff}.
+Hence $A \to B$ is flat by
+Algebra, Lemma \ref{algebra-lemma-flat-permanence}.
+(Also see Morphisms, Lemma \ref{morphisms-lemma-flat-permanence}
+for a direct proof.)
+\end{proof}
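+
+\noindent
+Concretely, the converse direction goes as follows. Let $M \to N$ be an
+injective map of $A$-modules. Since $A \to C$ is flat and
+$- \otimes_A C = (- \otimes_A B) \otimes_B C$, the map
+$$
+(M \otimes_A B) \otimes_B C \longrightarrow (N \otimes_A B) \otimes_B C
+$$
+is injective. As $B \to C$ is faithfully flat, the functor
+$- \otimes_B C$ reflects injectivity, so
+$M \otimes_A B \to N \otimes_A B$ is injective. Hence $A \to B$ is flat.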
+
+\begin{lemma}
+\label{lemma-injective-local-rings-fpqc-local-source}
+The property
+$\mathcal{P}(f : X \to Y)=$``for every $x \in X$ the map of local
+rings $\mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$ is injective''
+is fpqc local on the source.
+\end{lemma}
+
+\begin{proof}
+Omitted. This is just a (probably misguided) attempt to be playful.
+\end{proof}
+
+
+
+
+
+\section{Properties of morphisms local in the fppf topology on the source}
+\label{section-fppf-local-source}
+
+\noindent
+Here are some properties of morphisms that are fppf local on the source.
+
+\begin{lemma}
+\label{lemma-locally-finite-presentation-fppf-local-source}
+The property $\mathcal{P}(f)=$``$f$ is locally of finite presentation''
+is fppf local on the source.
+\end{lemma}
+
+\begin{proof}
+Being locally of finite presentation is Zariski local on the source
+and the target, see Morphisms,
+Lemma \ref{morphisms-lemma-locally-finite-presentation-characterize}.
+It is a property which is preserved under composition, see
+Morphisms, Lemma \ref{morphisms-lemma-composition-finite-presentation}.
+This proves
+(1), (2) and (3) of Lemma \ref{lemma-properties-morphisms-local-source}.
+The final condition (4) is
+Lemma \ref{lemma-flat-finitely-presented-permanence-algebra}. Hence we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-finite-type-fppf-local-source}
+The property $\mathcal{P}(f)=$``$f$ is locally of finite type''
+is fppf local on the source.
+\end{lemma}
+
+\begin{proof}
+Being locally of finite type is Zariski local on the source
+and the target, see Morphisms,
+Lemma \ref{morphisms-lemma-locally-finite-type-characterize}.
+It is a property which is preserved under composition, see
+Morphisms, Lemma \ref{morphisms-lemma-composition-finite-type}, and
+a flat morphism locally of finite presentation is locally of finite type, see
+Morphisms, Lemma \ref{morphisms-lemma-finite-presentation-finite-type}.
+This proves
+(1), (2) and (3) of Lemma \ref{lemma-properties-morphisms-local-source}.
+The final condition (4) is
+Lemma \ref{lemma-finite-type-local-source-fppf-algebra}. Hence we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open-fppf-local-source}
+The property $\mathcal{P}(f)=$``$f$ is open''
+is fppf local on the source.
+\end{lemma}
+
+\begin{proof}
+Being an open morphism is clearly Zariski local on the source and the target.
+It is a property which is preserved under composition, see
+Morphisms, Lemma \ref{morphisms-lemma-composition-open}, and
+a flat morphism which is locally of finite presentation is open, see
+Morphisms, Lemma \ref{morphisms-lemma-fppf-open}.
+This proves
+(1), (2) and (3) of Lemma \ref{lemma-properties-morphisms-local-source}.
+The final condition (4) follows from
+Morphisms, Lemma \ref{morphisms-lemma-fpqc-quotient-topology}.
+Hence we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-open-fppf-local-source}
+The property $\mathcal{P}(f)=$``$f$ is universally open''
+is fppf local on the source.
+\end{lemma}
+
+\begin{proof}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\{X_i \to X\}_{i \in I}$ be an fppf covering.
+Denote $f_i : X_i \to X$ the compositions.
+We have to show that $f$ is universally open if and only if
+each $f_i$ is universally open. If $f$ is universally open,
+then also each $f_i$ is universally open since the maps
+$X_i \to X$ are universally open and compositions
+of universally open morphisms are universally open
+(Morphisms, Lemmas \ref{morphisms-lemma-fppf-open}
+and \ref{morphisms-lemma-composition-open}).
+Conversely, assume each $f_i$ is universally open.
+Let $Y' \to Y$ be a morphism of schemes.
+Denote $X' = Y' \times_Y X$ and $X'_i = Y' \times_Y X_i$.
+Note that $\{X_i' \to X'\}_{i \in I}$ is an fppf covering also.
+The morphisms $f'_i : X_i' \to Y'$ are open, being base changes
+of the universally open morphisms $f_i$.
+Hence by Lemma \ref{lemma-open-fppf-local-source}
+above we conclude that $f' : X' \to Y'$ is open as desired.
+\end{proof}
+
+
+
+\section{Properties of morphisms local in the syntomic topology on the source}
+\label{section-syntomic-local-source}
+
+\noindent
+Here are some properties of morphisms that are syntomic local on the source.
+
+\begin{lemma}
+\label{lemma-syntomic-syntomic-local-source}
+The property $\mathcal{P}(f)=$``$f$ is syntomic''
+is syntomic local on the source.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-properties-morphisms-local-source} with
+Morphisms, Lemma \ref{morphisms-lemma-syntomic-characterize}
+(local for Zariski on source and target),
+Morphisms, Lemma \ref{morphisms-lemma-composition-syntomic} (pre-composing),
+and Lemma \ref{lemma-syntomic-smooth-etale-permanence} (part (4)).
+\end{proof}
+
+
+
+
+\section{Properties of morphisms local in the smooth topology on the source}
+\label{section-smooth-local-source}
+
+\noindent
+Here are some properties of morphisms that are smooth local on the source.
+Note also the (in some respects stronger) result
+on descending smoothness via flat morphisms,
+Lemma \ref{lemma-smooth-permanence}.
+
+\begin{lemma}
+\label{lemma-smooth-smooth-local-source}
+The property $\mathcal{P}(f)=$``$f$ is smooth''
+is smooth local on the source.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-properties-morphisms-local-source} with
+Morphisms, Lemma \ref{morphisms-lemma-smooth-characterize}
+(local for Zariski on source and target),
+Morphisms, Lemma \ref{morphisms-lemma-composition-smooth} (pre-composing), and
+Lemma \ref{lemma-syntomic-smooth-etale-permanence} (part (4)).
+\end{proof}
+
+
+
+\section{Properties of morphisms local in the \'etale topology on the source}
+\label{section-etale-local-source}
+
+\noindent
+Here are some properties of morphisms that are \'etale local on the source.
+
+\begin{lemma}
+\label{lemma-etale-etale-local-source}
+The property $\mathcal{P}(f)=$``$f$ is \'etale''
+is \'etale local on the source.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-properties-morphisms-local-source} with
+Morphisms, Lemma \ref{morphisms-lemma-etale-characterize}
+(local for Zariski on source and target),
+Morphisms, Lemma \ref{morphisms-lemma-composition-etale} (pre-composing), and
+Lemma \ref{lemma-syntomic-smooth-etale-permanence} (part (4)).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-quasi-finite-etale-local-source}
+The property $\mathcal{P}(f)=$``$f$ is locally quasi-finite''
+is \'etale local on the source.
+\end{lemma}
+
+\begin{proof}
+We are going to use
+Lemma \ref{lemma-properties-morphisms-local-source}.
+By
+Morphisms, Lemma
+\ref{morphisms-lemma-locally-quasi-finite-characterize}
+the property of being locally quasi-finite is local for Zariski on source
+and target. By
+Morphisms, Lemmas
+\ref{morphisms-lemma-composition-quasi-finite} and
+\ref{morphisms-lemma-etale-locally-quasi-finite}
+we see the precomposition
+of a locally quasi-finite morphism by an \'etale morphism is locally
+quasi-finite. Finally, suppose that $X \to Y$ is a morphism of affine schemes
+and that $X' \to X$ is a surjective \'etale morphism of affine schemes
+such that $X' \to Y$ is locally quasi-finite. Then $X' \to Y$ is of finite
+type, and by
+Lemma \ref{lemma-finite-type-local-source-fppf-algebra}
+we see that $X \to Y$ is of finite type also.
+Moreover, by assumption $X' \to Y$ has finite fibres, and hence $X \to Y$
+has finite fibres also. We conclude that $X \to Y$ is quasi-finite by
+Morphisms, Lemma \ref{morphisms-lemma-quasi-finite}.
+This proves the last assumption of
+Lemma \ref{lemma-properties-morphisms-local-source}
+and finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unramified-etale-local-source}
+The property $\mathcal{P}(f)=$``$f$ is unramified''
+is \'etale local on the source.
+The property $\mathcal{P}(f)=$``$f$ is G-unramified''
+is \'etale local on the source.
+\end{lemma}
+
+\begin{proof}
+We are going to use
+Lemma \ref{lemma-properties-morphisms-local-source}.
+By
+Morphisms, Lemma \ref{morphisms-lemma-unramified-characterize}
+the property of being unramified (resp.\ G-unramified)
+is local for Zariski on source and target. By
+Morphisms, Lemmas \ref{morphisms-lemma-composition-unramified} and
+\ref{morphisms-lemma-etale-smooth-unramified}
+we see the precomposition
+of an unramified (resp.\ G-unramified) morphism by an \'etale morphism is
+unramified (resp.\ G-unramified).
+Finally, suppose that $X \to Y$ is a morphism of affine schemes
+and that $f : X' \to X$ is a surjective \'etale morphism of affine schemes
+such that $X' \to Y$ is unramified (resp.\ G-unramified).
+Then $X' \to Y$ is of finite type (resp.\ finite presentation), and by
+Lemma \ref{lemma-finite-type-local-source-fppf-algebra}
+(resp.\ Lemma \ref{lemma-flat-finitely-presented-permanence-algebra})
+we see that $X \to Y$ is of finite type (resp.\ finite presentation) also. By
+Morphisms, Lemma \ref{morphisms-lemma-triangle-differentials-smooth}
+we have a short exact sequence
+$$
+0 \to f^*\Omega_{X/Y} \to \Omega_{X'/Y} \to \Omega_{X'/X} \to 0.
+$$
+As $X' \to Y$ is unramified we see that the middle term is zero.
+Hence, as $f$ is faithfully flat we see that $\Omega_{X/Y} = 0$.
+Hence $X \to Y$ is unramified (resp.\ G-unramified), see
+Morphisms, Lemma \ref{morphisms-lemma-unramified-omega-zero}.
+This proves the last assumption of
+Lemma \ref{lemma-properties-morphisms-local-source}
+and finishes the proof.
+\end{proof}
+
+
+
+
+\section{Properties of morphisms \'etale local on source-and-target}
+\label{section-properties-etale-local-source-target}
+
+\noindent
+Let $\mathcal{P}$ be a property of morphisms of schemes. There is an
+intuitive meaning to the phrase ``$\mathcal{P}$ is \'etale local on the
+source and target''. However, it turns out that this notion is not
+the same as asking $\mathcal{P}$ to be both \'etale
+local on the source and \'etale local on the target.
+Before we discuss this further we give two silly examples.
+
+\begin{example}
+\label{example-silly-one}
+Consider the property $\mathcal{P}$ of morphisms of schemes defined
+by the rule $\mathcal{P}(X \to Y) = $``$Y$ is locally Noetherian''.
+The reader can verify that this is \'etale local on the source and
+\'etale local on the target (omitted, see
+Lemma \ref{lemma-Noetherian-local-fppf}).
+But it is {\bf not} true that if $f : X \to Y$ has $\mathcal{P}$
+and $g : Y \to Z$ is \'etale, then $g \circ f$ has $\mathcal{P}$.
+Namely, $f$ could be the identity on $Y$ and $g$ could be an open
+immersion of a locally Noetherian scheme $Y$ into a scheme $Z$
+which is not locally Noetherian.
+\end{example}
+
+\noindent
+The following example is in some sense worse.
+
+\begin{example}
+\label{example-silly-two}
+Consider the property $\mathcal{P}$ of morphisms of schemes defined
+by the rule $\mathcal{P}(f : X \to Y) = $``for every $y \in Y$ which is
+a specialization of some $f(x)$, $x \in X$ the local ring
+$\mathcal{O}_{Y, y}$ is Noetherian''. Let us verify that this is
+\'etale local on the source and \'etale local on the target. We will freely use
+Schemes, Lemma \ref{schemes-lemma-specialize-points}.
+
+\medskip\noindent
+Local on the target:
+Let $\{g_i : Y_i \to Y\}$ be an \'etale covering. Let $f_i : X_i \to Y_i$
+be the base change of $f$, and denote $h_i : X_i \to X$ the projection.
+Assume $\mathcal{P}(f)$. Let $f_i(x_i) \leadsto y_i$
+be a specialization. Then $f(h_i(x_i)) \leadsto g_i(y_i)$ so
+$\mathcal{P}(f)$ implies $\mathcal{O}_{Y, g_i(y_i)}$ is Noetherian.
+Also $\mathcal{O}_{Y, g_i(y_i)} \to \mathcal{O}_{Y_i, y_i}$ is a
+localization of an \'etale ring map.
+Hence $\mathcal{O}_{Y_i, y_i}$ is Noetherian by
+Algebra, Lemma \ref{algebra-lemma-Noetherian-permanence}.
+Conversely, assume $\mathcal{P}(f_i)$ for all $i$. Let $f(x) \leadsto y$
+be a specialization. Choose an $i$ and $y_i \in Y_i$ mapping to $y$.
+Since $x$ can be viewed as a point of
+$\Spec(\mathcal{O}_{Y, y}) \times_Y X$ and
+$\mathcal{O}_{Y, y} \to \mathcal{O}_{Y_i, y_i}$ is faithfully flat,
+there exists a point
+$x_i \in \Spec(\mathcal{O}_{Y_i, y_i}) \times_Y X$
+mapping to $x$. Then $x_i \in X_i$, and $f_i(x_i)$ specializes to $y_i$.
+Thus we see that $\mathcal{O}_{Y_i, y_i}$ is Noetherian by
+$\mathcal{P}(f_i)$ which implies that $\mathcal{O}_{Y, y}$ is
+Noetherian by
+Algebra, Lemma \ref{algebra-lemma-descent-Noetherian}.
+
+\medskip\noindent
+Local on the source:
+Let $\{h_i : X_i \to X\}$ be an \'etale covering. Let $f_i : X_i \to Y$
+be the composition $f \circ h_i$. Assume $\mathcal{P}(f)$. Let
+$f_i(x_i) \leadsto y$ be a specialization. Then $f(h_i(x_i)) \leadsto y$ so
+$\mathcal{P}(f)$ implies $\mathcal{O}_{Y, y}$ is Noetherian. Thus
+$\mathcal{P}(f_i)$ holds.
+Conversely, assume $\mathcal{P}(f_i)$ for all $i$. Let $f(x) \leadsto y$
+be a specialization. Choose an $i$ and $x_i \in X_i$ mapping to $x$.
+Then $y$ is a specialization of $f_i(x_i) = f(x)$. Hence
+$\mathcal{P}(f_i)$ implies $\mathcal{O}_{Y, y}$ is Noetherian
+as desired.
+
+\medskip\noindent
+We claim that there exists a commutative diagram
+$$
+\xymatrix{
+U \ar[d]_a \ar[r]_h & V \ar[d]^b \\
+X \ar[r]^f & Y
+}
+$$
+with surjective \'etale vertical arrows, such that $h$ has $\mathcal{P}$
+and $f$ does not have $\mathcal{P}$. Namely, let
+$$
+Y =
+\Spec\Big(
+\mathbf{C}[x_n; n \in \mathbf{Z}]/(x_n x_m; n \not = m)
+\Big)
+$$
+and let $X \subset Y$ be the open subscheme which is the complement of
the point all of whose coordinates $x_n$ are zero. Let $U = X$, let
$V = X \amalg Y$, let $a$ and $b$ be the obvious maps, and let $h : U \to V$
be the inclusion of $U = X$ into the first summand of $V$. The claim above
holds because $U$ is locally Noetherian, but $Y$ is not.
+\end{example}
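
\noindent
The failure of the Noetherian property in this example can be made
explicit (a side computation, not needed later): write
$A = \mathbf{C}[x_n; n \in \mathbf{Z}]/(x_n x_m; n \not = m)$
and let $\mathfrak m = (x_n; n \in \mathbf{Z})$ be the maximal ideal
of the omitted point $y$. The chain of ideals
$$
(x_0) \subset (x_0, x_1) \subset (x_0, x_1, x_2) \subset \ldots
$$
does not stabilize, even after localizing at $\mathfrak m$, because
$x_{k + 1} s \not \in (x_0, \ldots, x_k)$ for any $s \not \in \mathfrak m$.
Hence $\mathcal{O}_{Y, y} = A_{\mathfrak m}$ is not Noetherian.
On the other hand, $X$ is covered by the affine opens
$D(x_n) = \Spec(\mathbf{C}[x_n, x_n^{-1}])$, so $U = X$ is
locally Noetherian.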
+
+\noindent
+What should be the correct notion of a property which is \'etale local
+on the source-and-target? We think that, by analogy with
+Morphisms, Definition \ref{morphisms-definition-property-local}
+it should be the following.
+
+\begin{definition}
+\label{definition-local-source-target}
+Let $\mathcal{P}$ be a property of morphisms of schemes.
+We say $\mathcal{P}$ is {\it \'etale local on source-and-target} if
+\begin{enumerate}
+\item (stable under precomposing with \'etale maps)
+if $f : X \to Y$ is \'etale and $g : Y \to Z$ has $\mathcal{P}$,
+then $g \circ f$ has $\mathcal{P}$,
+\item (stable under \'etale base change)
+if $f : X \to Y$ has $\mathcal{P}$ and $Y' \to Y$ is \'etale, then
+the base change $f' : Y' \times_Y X \to Y'$ has $\mathcal{P}$, and
+\item (locality) given a morphism $f : X \to Y$ the following are equivalent
+\begin{enumerate}
+\item $f$ has $\mathcal{P}$,
+\item for every $x \in X$ there exists a commutative diagram
+$$
+\xymatrix{
+U \ar[d]_a \ar[r]_h & V \ar[d]^b \\
+X \ar[r]^f & Y
+}
+$$
+with \'etale vertical arrows and $u \in U$ with $a(u) = x$ such that
+$h$ has $\mathcal{P}$.
+\end{enumerate}
+\end{enumerate}
+\end{definition}
+
+\noindent
+It turns out this definition excludes the behavior seen in
+Examples \ref{example-silly-one} and \ref{example-silly-two}.
+We will compare this to the definition in the paper
+\cite{DM} by Deligne and Mumford in
+Remark \ref{remark-compare-definitions}.
+Moreover, a property which is \'etale local on the source-and-target is
+\'etale local on the source and \'etale local on the target.
+Finally, the converse is almost true as we will see in
+Lemma \ref{lemma-etale-local-source-target}.
+
+\begin{lemma}
+\label{lemma-local-source-target-implies}
+Let $\mathcal{P}$ be a property of morphisms of schemes which is
+\'etale local on source-and-target. Then
+\begin{enumerate}
+\item $\mathcal{P}$ is \'etale local on the source,
+\item $\mathcal{P}$ is \'etale local on the target,
+\item $\mathcal{P}$ is stable under postcomposing with \'etale morphisms:
+if $f : X \to Y$ has $\mathcal{P}$ and $g : Y \to Z$ is \'etale, then
+$g \circ f$ has $\mathcal{P}$, and
+\item $\mathcal{P}$ has a permanence property: given $f : X \to Y$ and
+$g : Y \to Z$ \'etale such that $g \circ f$ has $\mathcal{P}$, then
+$f$ has $\mathcal{P}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We write everything out completely.
+
+\medskip\noindent
+Proof of (1). Let $f : X \to Y$ be a morphism of schemes.
+Let $\{X_i \to X\}_{i \in I}$ be an \'etale covering of $X$. If each composition
+$h_i : X_i \to Y$ has $\mathcal{P}$, then for each $x \in X$ we can find
+an $i \in I$ and a point $x_i \in X_i$ mapping to $x$. Then
+$(X_i, x_i) \to (X, x)$ is an \'etale morphism of germs, and
+$\text{id}_Y : Y \to Y$ is an \'etale morphism, and $h_i$ is as in part (3) of
+Definition \ref{definition-local-source-target}.
+Thus we see that $f$ has $\mathcal{P}$.
+Conversely, if $f$ has $\mathcal{P}$ then each $X_i \to Y$ has
+$\mathcal{P}$ by
+Definition \ref{definition-local-source-target} part (1).
+
+\medskip\noindent
+Proof of (2). Let $f : X \to Y$ be a morphism of schemes.
+Let $\{Y_i \to Y\}_{i \in I}$ be an \'etale covering of $Y$.
+Write $X_i = Y_i \times_Y X$ and $h_i : X_i \to Y_i$ for the base change
+of $f$. If each $h_i : X_i \to Y_i$ has $\mathcal{P}$, then for each
+$x \in X$ we pick an $i \in I$ and a point $x_i \in X_i$ mapping to $x$.
+Then $(X_i, x_i) \to (X, x)$ is an \'etale morphism of germs, $Y_i \to Y$ is
+\'etale, and $h_i$ is as in part (3) of
+Definition \ref{definition-local-source-target}.
+Thus we see that $f$ has $\mathcal{P}$.
+Conversely, if $f$ has $\mathcal{P}$, then each $X_i \to Y_i$ has
+$\mathcal{P}$ by
+Definition \ref{definition-local-source-target} part (2).
+
+\medskip\noindent
+Proof of (3). Assume $f : X \to Y$ has $\mathcal{P}$ and $g : Y \to Z$ is
+\'etale. For every $x \in X$ we can think of $(X, x) \to (X, x)$ as an
+\'etale morphism of germs, $Y \to Z$ is an \'etale morphism, and $h = f$ is as
+in part (3) of
+Definition \ref{definition-local-source-target}.
+Thus we see that $g \circ f$ has $\mathcal{P}$.
+
+\medskip\noindent
+Proof of (4). Let $f : X \to Y$ be a morphism and $g : Y \to Z$ \'etale
+such that $g \circ f$ has $\mathcal{P}$. Then by
+Definition \ref{definition-local-source-target} part (2)
+we see that $\text{pr}_Y : Y \times_Z X \to Y$ has $\mathcal{P}$. But
+the morphism $(f, 1) : X \to Y \times_Z X$ is \'etale as a section to the
+\'etale projection $\text{pr}_X : Y \times_Z X \to X$, see
+Morphisms, Lemma \ref{morphisms-lemma-etale-permanence}.
+Hence $f = \text{pr}_Y \circ (f, 1)$ has $\mathcal{P}$ by
+Definition \ref{definition-local-source-target} part (1).
+\end{proof}
+
+\noindent
+The following lemma is the analogue of
+Morphisms, Lemma \ref{morphisms-lemma-locally-P-characterize}.
+
+\begin{lemma}
+\label{lemma-local-source-target-characterize}
+Let $\mathcal{P}$ be a property of morphisms of schemes which is
+\'etale local on source-and-target. Let $f : X \to Y$ be a morphism
+of schemes. The following are equivalent:
+\begin{enumerate}
+\item[(a)] $f$ has property $\mathcal{P}$,
+\item[(b)] for every $x \in X$ there exists an \'etale morphism of germs
+$a : (U, u) \to (X, x)$, an \'etale morphism $b : V \to Y$, and
+a morphism $h : U \to V$ such that $f \circ a = b \circ h$ and
+$h$ has $\mathcal{P}$,
+\item[(c)]
+for any commutative diagram
+$$
+\xymatrix{
+U \ar[d]_a \ar[r]_h & V \ar[d]^b \\
+X \ar[r]^f & Y
+}
+$$
+with $a$, $b$ \'etale the morphism $h$ has $\mathcal{P}$,
\item[(d)] for some diagram as in (c)
with $a : U \to X$ surjective the morphism $h$ has $\mathcal{P}$,
+\item[(e)] there exists an \'etale covering $\{Y_i \to Y\}_{i \in I}$ such
+that each base change $Y_i \times_Y X \to Y_i$ has $\mathcal{P}$,
+\item[(f)] there exists an \'etale covering $\{X_i \to X\}_{i \in I}$ such
+that each composition $X_i \to Y$ has $\mathcal{P}$,
+\item[(g)] there exists an \'etale covering $\{Y_i \to Y\}_{i \in I}$ and
+for each $i \in I$ an \'etale covering
+$\{X_{ij} \to Y_i \times_Y X\}_{j \in J_i}$ such that each morphism
+$X_{ij} \to Y_i$ has $\mathcal{P}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (a) and (b) is part of
+Definition \ref{definition-local-source-target}.
+The equivalence of (a) and (e) is
+Lemma \ref{lemma-local-source-target-implies} part (2).
+The equivalence of (a) and (f) is
+Lemma \ref{lemma-local-source-target-implies} part (1).
+As (a) is now equivalent to (e) and (f) it follows that
(a) is equivalent to (g).
+
+\medskip\noindent
+It is clear that (c) implies (a). If (a) holds, then for any
+diagram as in (c) the morphism $f \circ a$ has $\mathcal{P}$ by
+Definition \ref{definition-local-source-target} part (1),
+whereupon $h$ has $\mathcal{P}$ by
+Lemma \ref{lemma-local-source-target-implies} part (4).
+Thus (a) and (c) are equivalent. It is clear that (c) implies (d).
+To see that (d) implies (a) assume we have a diagram as in (c)
+with $a : U \to X$ surjective and $h$ having $\mathcal{P}$.
+Then $b \circ h$ has $\mathcal{P}$ by
+Lemma \ref{lemma-local-source-target-implies} part (3).
+Since $\{a : U \to X\}$ is an \'etale covering we conclude that
+$f$ has $\mathcal{P}$ by
+Lemma \ref{lemma-local-source-target-implies} part (1).
+\end{proof}
+
+\noindent
+It seems that the result of the following lemma is not a formality, i.e.,
+it actually uses something about the geometry of \'etale morphisms.
+
+\begin{lemma}
+\label{lemma-etale-local-source-target}
+Let $\mathcal{P}$ be a property of morphisms of schemes.
+Assume
+\begin{enumerate}
+\item $\mathcal{P}$ is \'etale local on the source,
+\item $\mathcal{P}$ is \'etale local on the target, and
+\item $\mathcal{P}$ is stable under postcomposing with open immersions:
+if $f : X \to Y$ has $\mathcal{P}$ and $Y \subset Z$ is an open
+subscheme then $X \to Z$ has $\mathcal{P}$.
+\end{enumerate}
+Then $\mathcal{P}$ is \'etale local on the source-and-target.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{P}$ be a property of morphisms of schemes which
+satisfies conditions (1), (2) and (3) of the lemma. By
+Lemma \ref{lemma-precompose-property-local-source}
+we see that $\mathcal{P}$ is stable under precomposing with
+\'etale morphisms. By
+Lemma \ref{lemma-pullback-property-local-target}
+we see that $\mathcal{P}$ is stable under \'etale base change.
+Hence it suffices to prove part (3) of
+Definition \ref{definition-local-source-target}
+holds.
+
+\medskip\noindent
+More precisely, suppose that $f : X \to Y$ is a morphism
+of schemes which satisfies
+Definition \ref{definition-local-source-target} part (3)(b).
+In other words, for every $x \in X$ there exists an \'etale
+morphism $a_x : U_x \to X$, a point $u_x \in U_x$ mapping to $x$,
+an \'etale morphism $b_x : V_x \to Y$, and a morphism $h_x : U_x \to V_x$
+such that $f \circ a_x = b_x \circ h_x$ and $h_x$ has $\mathcal{P}$.
+The proof of the lemma is complete once we show that $f$ has $\mathcal{P}$.
+Set $U = \coprod U_x$, $a = \coprod a_x$, $V = \coprod V_x$,
+$b = \coprod b_x$, and $h = \coprod h_x$. We obtain a
+commutative diagram
+$$
+\xymatrix{
+U \ar[d]_a \ar[r]_h & V \ar[d]^b \\
+X \ar[r]^f & Y
+}
+$$
+with $a$, $b$ \'etale, $a$ surjective. Note that $h$ has $\mathcal{P}$
+as each $h_x$ does and $\mathcal{P}$ is \'etale local on the target.
+Because $a$ is surjective and $\mathcal{P}$ is \'etale local on the source,
+it suffices to prove that $b \circ h$ has $\mathcal{P}$.
+This reduces the lemma to proving that $\mathcal{P}$ is stable under
+postcomposing with an \'etale morphism.
+
+\medskip\noindent
During the rest of the proof we let $f : X \to Y$ be a
morphism with property $\mathcal{P}$ and $g : Y \to Z$ an \'etale
morphism. Consider the following statements:
+\begin{enumerate}
+\item[(-)] With no additional assumptions $g \circ f$
+has property $\mathcal{P}$.
+\item[(A)] Whenever $Z$ is affine
+$g \circ f$ has property $\mathcal{P}$.
+\item[(AA)] Whenever $X$ and $Z$ are affine
+$g \circ f$ has property $\mathcal{P}$.
+\item[(AAA)] Whenever $X$, $Y$, and $Z$ are affine
+$g \circ f$ has property $\mathcal{P}$.
+\end{enumerate}
+Once we have proved (-) the proof of the lemma will be complete.
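
\medskip\noindent
The three claims below may be summarized by the chain of reductions
$$
\text{(AAA)} \Rightarrow \text{(AA)} \Rightarrow \text{(A)} \Rightarrow \text{(-)}
$$
so that in the end only the case where $X$, $Y$, and $Z$ are all affine
remains to be treated.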
+
+\medskip\noindent
+Claim 1: (AAA) $\Rightarrow$ (AA).
+Namely, let $f : X \to Y$, $g : Y \to Z$ be as above with $X$, $Z$ affine.
+As $X$ is affine hence quasi-compact we can find finitely many
+affine open $Y_i \subset Y$, $i = 1, \ldots, n$ such that
+$X = \bigcup_{i = 1, \ldots, n} f^{-1}(Y_i)$. Set $X_i = f^{-1}(Y_i)$. By
+Lemma \ref{lemma-pullback-property-local-target}
+each of the morphisms $X_i \to Y_i$ has $\mathcal{P}$.
+Hence $\coprod_{i = 1, \ldots, n} X_i \to \coprod_{i = 1, \ldots, n} Y_i$
+has $\mathcal{P}$ as $\mathcal{P}$ is \'etale local on the target.
+By (AAA) applied to
+$\coprod_{i = 1, \ldots, n} X_i \to \coprod_{i = 1, \ldots, n} Y_i$
+and the \'etale morphism $\coprod_{i = 1, \ldots, n} Y_i \to Z$
+we see that $\coprod_{i = 1, \ldots, n} X_i \to Z$ has $\mathcal{P}$.
+Now $\{\coprod_{i = 1, \ldots, n} X_i \to X\}$ is an \'etale
+covering, hence as $\mathcal{P}$ is \'etale local on the source
+we conclude that $X \to Z$ has $\mathcal{P}$ as desired.
+
+\medskip\noindent
+Claim 2: (AAA) $\Rightarrow$ (A).
+Namely, let $f : X \to Y$, $g : Y \to Z$ be as above with $Z$ affine.
+Choose an affine open covering $X = \bigcup X_i$.
+As $\mathcal{P}$ is \'etale local on the source we see that
+each $f|_{X_i} : X_i \to Y$ has $\mathcal{P}$.
+By (AA), which follows from (AAA) according to Claim 1, we see that
+$X_i \to Z$ has $\mathcal{P}$ for each $i$.
+Since $\{X_i \to X\}$ is an \'etale covering and $\mathcal{P}$ is \'etale
+local on the source we conclude that
+$X \to Z$ has $\mathcal{P}$.
+
+\medskip\noindent
+Claim 3: (AAA) $\Rightarrow$ (-).
+Namely, let $f : X \to Y$, $g : Y \to Z$ be as above.
+Choose an affine open covering $Z = \bigcup Z_i$.
+Set $Y_i = g^{-1}(Z_i)$ and $X_i = f^{-1}(Y_i)$. By
+Lemma \ref{lemma-pullback-property-local-target}
+each of the morphisms $X_i \to Y_i$ has $\mathcal{P}$.
+By (A), which follows from (AAA) according to Claim 2, we see that
+$X_i \to Z_i$ has $\mathcal{P}$ for each $i$.
+Since $\mathcal{P}$ is local on the target and $X_i = (g \circ f)^{-1}(Z_i)$
+we conclude that $X \to Z$ has $\mathcal{P}$.
+
+\medskip\noindent
+Thus to prove the lemma it suffices to prove (AAA).
Let $f : X \to Y$ and $g : Y \to Z$ be as above with $X$, $Y$, $Z$ affine.
+Note that an \'etale morphism of affines has universally bounded fibres, see
+Morphisms,
+Lemma \ref{morphisms-lemma-etale-locally-quasi-finite} and
+Lemma \ref{morphisms-lemma-locally-quasi-finite-qc-source-universally-bounded}.
+Hence we can do induction on the integer $n$ bounding the degree of the fibres
+of $Y \to Z$. See
+Morphisms, Lemma \ref{morphisms-lemma-etale-universally-bounded}
+for a description of this integer in the case of an \'etale morphism.
+If $n = 1$, then $Y \to Z$ is an open immersion, see
+Lemma \ref{lemma-universally-injective-etale-open-immersion},
+and the result follows from assumption (3) of the lemma. Assume $n > 1$.
+
+\medskip\noindent
+Consider the following commutative diagram
+$$
+\xymatrix{
+X \times_Z Y \ar[d] \ar[r]_{f_Y} &
+Y \times_Z Y \ar[d] \ar[r]_-{\text{pr}} &
+Y \ar[d] \\
+X \ar[r]^f &
+Y \ar[r]^g &
+Z
+}
+$$
+Note that we have a decomposition into open and closed
+subschemes $Y \times_Z Y = \Delta_{Y/Z}(Y) \amalg Y'$, see
+Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism}.
+As a base change the degrees of the fibres of the second projection
+$\text{pr} : Y \times_Z Y \to Y$ are bounded by $n$, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-universally-bounded}.
+On the other hand, $\text{pr}|_{\Delta(Y)} : \Delta(Y) \to Y$ is
+an isomorphism and every fibre has exactly one point.
+Thus, on applying
+Morphisms, Lemma \ref{morphisms-lemma-etale-universally-bounded}
+we conclude the degrees of the fibres of the restriction
+$\text{pr}|_{Y'} : Y' \to Y$ are bounded by $n - 1$.
+Set $X' = f_Y^{-1}(Y')$. Picture
+$$
+\xymatrix{
+X \amalg X' \ar@{=}[d] \ar[r]_-{f \amalg f'} &
+\Delta(Y) \amalg Y' \ar@{=}[d] \ar[r] &
+Y \ar@{=}[d] \\
+X \times_Z Y \ar[r]^{f_Y} &
+Y \times_Z Y \ar[r]^-{\text{pr}} &
+Y
+}
+$$
+As $\mathcal{P}$ is \'etale local on the target and hence stable under
+\'etale base change (see
+Lemma \ref{lemma-pullback-property-local-target})
+we see that $f_Y$ has $\mathcal{P}$.
+Hence, as $\mathcal{P}$ is \'etale local on the source,
+$f' = f_Y|_{X'}$ has $\mathcal{P}$. By induction hypothesis
+we see that $X' \to Y$ has $\mathcal{P}$.
+As $\mathcal{P}$ is local on the source, and
$\{X \to X \times_Z Y, X' \to X \times_Z Y\}$ is an \'etale covering,
+we conclude that $\text{pr} \circ f_Y$ has $\mathcal{P}$.
+Note that $g \circ f$ can be viewed as a morphism
+$g \circ f : X \to g(Y)$. As $\text{pr} \circ f_Y$ is the pullback of
+$g \circ f : X \to g(Y)$ via the \'etale covering $\{Y \to g(Y)\}$,
+and as $\mathcal{P}$ is \'etale local on the target, we conclude that
+$g \circ f : X \to g(Y)$ has property $\mathcal{P}$. Finally, applying
+assumption (3) of the lemma once more we conclude that
+$g \circ f : X \to Z$ has property $\mathcal{P}$.
+\end{proof}
+
+\begin{remark}
+\label{remark-list-local-source-target}
+Using
+Lemma \ref{lemma-etale-local-source-target}
+and the work done in the earlier sections of this chapter it is easy
+to make a list of types of morphisms which are \'etale local on the
+source-and-target. In each case we list the lemma which implies
+the property is \'etale local on the source and the lemma which implies
+the property is \'etale local on the target. In each case the third assumption
+of
+Lemma \ref{lemma-etale-local-source-target}
+is trivial to check, and we omit it. Here is the list:
+\begin{enumerate}
+\item flat, see
+Lemmas \ref{lemma-flat-fpqc-local-source} and
+\ref{lemma-descending-property-flat},
+\item locally of finite presentation, see
+Lemmas \ref{lemma-locally-finite-presentation-fppf-local-source} and
+\ref{lemma-descending-property-locally-finite-presentation},
+\item locally finite type, see
+Lemmas \ref{lemma-locally-finite-type-fppf-local-source} and
+\ref{lemma-descending-property-locally-finite-type},
+\item universally open, see
+Lemmas \ref{lemma-universally-open-fppf-local-source} and
+\ref{lemma-descending-property-universally-open},
+\item syntomic, see
+Lemmas \ref{lemma-syntomic-syntomic-local-source} and
+\ref{lemma-descending-property-syntomic},
+\item smooth, see
+Lemmas \ref{lemma-smooth-smooth-local-source} and
+\ref{lemma-descending-property-smooth},
+\item \'etale, see
+Lemmas \ref{lemma-etale-etale-local-source} and
+\ref{lemma-descending-property-etale},
+\item locally quasi-finite, see
+Lemmas \ref{lemma-locally-quasi-finite-etale-local-source} and
+\ref{lemma-descending-property-quasi-finite},
+\item unramified, see
+Lemmas \ref{lemma-unramified-etale-local-source} and
+\ref{lemma-descending-property-unramified},
+\item G-unramified, see
+Lemmas \ref{lemma-unramified-etale-local-source} and
+\ref{lemma-descending-property-unramified}, and
+\item add more here as needed.
+\end{enumerate}
+\end{remark}
+
+\begin{remark}
+\label{remark-compare-definitions}
+At this point we have three possible definitions of what it means for a
+property $\mathcal{P}$ of morphisms to be ``\'etale local on the source and
+target'':
+\begin{enumerate}
+\item[(ST)] $\mathcal{P}$ is \'etale local on the source and $\mathcal{P}$ is
+\'etale local on the target,
+\item[(DM)] (the definition in the paper \cite[Page 100]{DM} by
+Deligne and Mumford) for every diagram
+$$
+\xymatrix{
+U \ar[d]_a \ar[r]_h & V \ar[d]^b \\
+X \ar[r]^f & Y
+}
+$$
+with surjective \'etale vertical arrows we have
+$\mathcal{P}(h) \Leftrightarrow \mathcal{P}(f)$, and
+\item[(SP)] $\mathcal{P}$ is \'etale local on the source-and-target.
+\end{enumerate}
+In this section we have seen that (SP) $\Rightarrow$ (DM) $\Rightarrow$ (ST).
+The
+Examples \ref{example-silly-one} and \ref{example-silly-two}
+show that neither implication can be reversed. Finally,
+Lemma \ref{lemma-etale-local-source-target}
+shows that the difference disappears when looking at properties of
+morphisms which are stable under postcomposing with open immersions, which
+in practice will always be the case.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-etale-etale-local-source-target}
+Let $\mathcal{P}$ be a property of morphisms of schemes which
+is \'etale local on the source-and-target.
+Given a commutative diagram of schemes
+$$
+\vcenter{
+\xymatrix{
+X' \ar[d]_{g'} \ar[r]_{f'} & Y' \ar[d]^g \\
+X \ar[r]^f & Y
+}
+}
+\quad\text{with points}\quad
+\vcenter{
+\xymatrix{
+x' \ar[d] \ar[r] & y' \ar[d] \\
+x \ar[r] & y
+}
+}
+$$
such that $g'$ is \'etale at $x'$ and $g$ is \'etale at $y'$, we have
$x \in W(f) \Leftrightarrow x' \in W(f')$
+where $W(-)$ is as in Lemma \ref{lemma-largest-open-of-the-source}.
+\end{lemma}
+
+\begin{proof}
+Lemma \ref{lemma-largest-open-of-the-source} applies since
+$\mathcal{P}$ is \'etale local on the source by
+Lemma \ref{lemma-local-source-target-implies}.
+
+\medskip\noindent
+Assume $x \in W(f)$. Let $U' \subset X'$ and $V' \subset Y'$
+be open neighbourhoods of $x'$ and $y'$ such that $f'(U') \subset V'$,
+$g'(U') \subset W(f)$ and $g'|_{U'}$ and $g|_{V'}$ are \'etale.
+Then $f \circ g'|_{U'} = g \circ f'|_{U'}$
+has $\mathcal{P}$ by property (1) of
+Definition \ref{definition-local-source-target}.
+Then $f'|_{U'} : U' \to V'$ has property $\mathcal{P}$
+by (4) of Lemma \ref{lemma-local-source-target-implies}.
+Then by (3) of Lemma \ref{lemma-local-source-target-implies}
we conclude that $f'|_{U'} : U' \to Y'$ has $\mathcal{P}$.
+Hence $U' \subset W(f')$ by definition. Hence $x' \in W(f')$.
+
+\medskip\noindent
+Assume $x' \in W(f')$. Let $U' \subset X'$ and $V' \subset Y'$
+be open neighbourhoods of $x'$ and $y'$ such that $f'(U') \subset V'$,
+$U' \subset W(f')$ and $g'|_{U'}$ and $g|_{V'}$ are \'etale.
+Then $U' \to Y'$ has $\mathcal{P}$ by definition of $W(f')$.
+Then $U' \to V'$ has $\mathcal{P}$ by (4) of
+Lemma \ref{lemma-local-source-target-implies}.
+Then $U' \to Y$ has $\mathcal{P}$ by (3) of
+Lemma \ref{lemma-local-source-target-implies}.
Let $U \subset X$ be the image of the \'etale (hence open)
morphism $g'|_{U'} : U' \to X$. Then $\{U' \to U\}$
+is an \'etale covering and we conclude that
+$U \to Y$ has $\mathcal{P}$ by (1) of
+Lemma \ref{lemma-local-source-target-implies}.
+Thus $U \subset W(f)$ by definition. Hence $x \in W(f)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-orbits}
+Let $k$ be a field. Let $n \geq 2$. For $1 \leq i, j \leq n$ with
+$i \not = j$ and $d \geq 0$ denote $T_{i, j, d}$ the automorphism
+of $\mathbf{A}^n_k$ given in coordinates by
+$$
+(x_1, \ldots, x_n) \longmapsto
+(x_1, \ldots, x_{i - 1}, x_i + x_j^d, x_{i + 1}, \ldots, x_n)
+$$
+Let $W \subset \mathbf{A}^n_k$ be a nonempty open subscheme
+such that $T_{i, j, d}(W) = W$ for all $i, j, d$ as above.
+Then either $W = \mathbf{A}^n_k$ or the characteristic of $k$
+is $p > 0$ and $\mathbf{A}^n_k \setminus W$ is a finite set
+of closed points whose coordinates are algebraic over $\mathbf{F}_p$.
+\end{lemma}
+
+\begin{proof}
+We may replace $k$ by any extension field in order to prove this.
+Let $Z$ be an irreducible component of $\mathbf{A}^n_k \setminus W$.
+Assume $\dim(Z) \geq 1$, to get a contradiction.
+Then there exists an extension field $k'/k$ and a $k'$-valued
+point $\xi = (\xi_1, \ldots, \xi_n) \in (k')^n$ of
+$Z_{k'} \subset \mathbf{A}^n_{k'}$
such that at least one of $\xi_1, \ldots, \xi_n$ is transcendental over the
+prime field. Claim: the orbit of $\xi$ under the group generated by
+the transformations $T_{i, j, d}$ is Zariski
+dense in $\mathbf{A}^n_{k'}$. The claim will give the desired contradiction.
+
+\medskip\noindent
+If the characteristic of $k'$ is zero, then already the operators
+$T_{i, j, 0}$ will be enough since these transform $\xi$ into
+the points
+$$
+(\xi_1 + a_1, \ldots, \xi_n + a_n)
+$$
+for arbitrary $(a_1, \ldots, a_n) \in \mathbf{Z}_{\geq 0}^n$.
+If the characteristic is $p > 0$, we may assume after renumbering
+that $\xi_n$ is transcendental over $\mathbf{F}_p$. By
+successively applying the operators $T_{i, n, d}$ for
+$i < n$ we see the orbit of $\xi$ contains the elements
+$$
+(\xi_1 + P_1(\xi_n), \ldots, \xi_{n - 1} + P_{n - 1}(\xi_n), \xi_n)
+$$
for arbitrary $(P_1, \ldots, P_{n - 1}) \in \mathbf{F}_p[t]^{n - 1}$.
+Thus the Zariski closure of the orbit contains the coordinate
+hyperplane $x_n = \xi_n$. Repeating the argument with a different
coordinate, we conclude that the Zariski closure contains the
hyperplane $x_i = \xi_i + P(\xi_n)$ for any $P \in \mathbf{F}_p[t]$
such that $\xi_i + P(\xi_n)$ is transcendental over $\mathbf{F}_p$.
+Since there are infinitely many such $P$ the claim follows.
+
+\medskip\noindent
+Of course the argument in the preceding paragraph also applies
+if $Z = \{z\}$ has dimension $0$ and the coordinates of $z$
+in $\kappa(z)$ are not algebraic over $\mathbf{F}_p$. The lemma follows.
+\end{proof}
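
\noindent
To illustrate the orbit computation in the proof in the smallest
case $n = 2$ and characteristic $p > 0$: applying $T_{1, 2, d}$
to a point $a$ times gives
$$
(\xi_1, \xi_2) \longmapsto (\xi_1 + a\,\xi_2^d, \xi_2),
\quad 0 \leq a < p,
$$
and composing these maps for $d = 0, 1, \ldots, m$ produces
$(\xi_1 + P(\xi_2), \xi_2)$ for an arbitrary polynomial
$P \in \mathbf{F}_p[t]$ of degree $\leq m$. When $\xi_2$ is
transcendental over $\mathbf{F}_p$ the values $P(\xi_2)$ are pairwise
distinct, which is why the closure of the orbit contains the
line $x_2 = \xi_2$.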
+
+\begin{lemma}
+\label{lemma-etale-tau-local-source-target}
+Let $\mathcal{P}$ be a property of morphisms of schemes. Assume
+\begin{enumerate}
+\item $\mathcal{P}$ is \'etale local on the source,
+\item $\mathcal{P}$ is smooth local on the target,
+\item $\mathcal{P}$ is stable under postcomposing with open immersions:
+if $f : X \to Y$ has $\mathcal{P}$ and $Y \subset Z$ is an open
+subscheme then $X \to Z$ has $\mathcal{P}$.
+\end{enumerate}
+Given a commutative diagram of schemes
+$$
+\vcenter{
+\xymatrix{
+X' \ar[d]_{g'} \ar[r]_{f'} & Y' \ar[d]^g \\
+X \ar[r]^f & Y
+}
+}
+\quad\text{with points}\quad
+\vcenter{
+\xymatrix{
+x' \ar[d] \ar[r] & y' \ar[d] \\
+x \ar[r] & y
+}
+}
+$$
such that $g$ is smooth at $y'$ and $X' \to X \times_Y Y'$ is \'etale
at $x'$, we have $x \in W(f) \Leftrightarrow x' \in W(f')$
+where $W(-)$ is as in Lemma \ref{lemma-largest-open-of-the-source}.
+\end{lemma}
+
+\begin{proof}
+Since $\mathcal{P}$ is \'etale local on the source we see
+that $x \in W(f)$ if and only if the image of $x$ in
+$X \times_Y Y'$ is in $W(X \times_Y Y' \to Y')$. Hence we
+may assume the diagram in the lemma is cartesian.
+
+\medskip\noindent
+Assume $x \in W(f)$. Since $\mathcal{P}$ is smooth local on the target
+we see that $(g')^{-1}W(f) = W(f) \times_Y Y' \to Y'$ has $\mathcal{P}$.
+Hence $(g')^{-1}W(f) \subset W(f')$. We conclude $x' \in W(f')$.
+
+\medskip\noindent
+Assume $x' \in W(f')$.
+For any open neighbourhood $V' \subset Y'$ of $y'$ we may replace
+$Y'$ by $V'$ and $X'$ by $U' = (f')^{-1}V'$ because $V' \to Y'$ is smooth
+and hence the base change $W(f') \cap U' \to V'$ of $W(f') \to Y'$
+has property $\mathcal{P}$. Thus we may assume there exists
+an \'etale morphism $Y' \to \mathbf{A}^n_Y$ over $Y$, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-etale-over-affine-space}.
+Picture
+$$
+\xymatrix{
+X' \ar[r] \ar[d] & Y' \ar[d] \\
+\mathbf{A}^n_X \ar[r]_{f_n} \ar[d] & \mathbf{A}^n_Y \ar[d] \\
+X \ar[r]^f & Y
+}
+$$
+By Lemma \ref{lemma-etale-local-source-target}
+(and because \'etale coverings are smooth coverings)
+we see that $\mathcal{P}$ is \'etale local on the source-and-target.
+By Lemma \ref{lemma-etale-etale-local-source-target}
+we see that $W(f')$ is the inverse image of
+the open $W(f_n) \subset \mathbf{A}^n_X$. In particular
+$W(f_n)$ contains a point lying over $x$.
+After replacing $X$ by the image of $W(f_n)$ (which is open)
+we may assume $W(f_n) \to X$ is surjective.
+Claim: $W(f_n) = \mathbf{A}^n_X$.
+The claim implies $f$ has $\mathcal{P}$ as
+$\mathcal{P}$ is local in the smooth topology
+and $\{\mathbf{A}^n_Y \to Y\}$ is a smooth covering.
+
+\medskip\noindent
+Essentially, the claim follows as $W(f_n) \subset \mathbf{A}^n_X$ is a
+``translation invariant'' open which meets every fibre of
+$\mathbf{A}^n_X \to X$. However, to produce an argument along these lines
+one has to do \'etale localization on $Y$ to produce enough translations
+and it becomes a bit annoying. Instead we use the automorphisms
+of Lemma \ref{lemma-orbits} and \'etale morphisms of affine spaces.
+We may assume $n \geq 2$. Namely, if $n = 0$, then we are done.
+If $n = 1$, then we consider the diagram
+$$
+\xymatrix{
+\mathbf{A}^2_X \ar[r]_{f_2} \ar[d]_p & \mathbf{A}^2_Y \ar[d] \\
+\mathbf{A}^1_X \ar[r]^{f_1} & \mathbf{A}^1_Y
+}
+$$
+We have $p^{-1}(W(f_1)) \subset W(f_2)$ (see first paragraph
+of the proof). Thus $W(f_2) \to X$ is still surjective
+and we may work with $f_2$. Assume $n \geq 2$.
+
+\medskip\noindent
+For any $1 \leq i, j \leq n$ with $i \not = j$ and $d \geq 0$
+denote $T_{i, j, d}$ the automorphism of $\mathbf{A}^n$ defined
+in Lemma \ref{lemma-orbits}. Then we get a commutative diagram
+$$
+\xymatrix{
+\mathbf{A}^n_X \ar[r]_{f_n} \ar[d]_{T_{i, j, d}} &
+\mathbf{A}^n_Y \ar[d]^{T_{i, j, d}} \\
+\mathbf{A}^n_X \ar[r]^{f_n} & \mathbf{A}^n_Y
+}
+$$
+whose vertical arrows are isomorphisms. We conclude that
+$T_{i, j, d}(W(f_n)) = W(f_n)$. Applying Lemma \ref{lemma-orbits}
+we conclude for any $x \in X$ the fibre $W(f_n)_x \subset \mathbf{A}^n_x$ is
+either $\mathbf{A}^n_x$ (this is what we want) or $\kappa(x)$
+has characteristic $p > 0$ and $W(f_n)_x$
+is the complement of a finite set $Z_x \subset \mathbf{A}^n_x$
+of closed points. The second possibility cannot occur. Namely,
+consider the morphism $T_p : \mathbf{A}^n \to \mathbf{A}^n$ given by
+$$
+(x_1, \ldots, x_n) \mapsto (x_1 - x_1^p, \ldots, x_n - x_n^p)
+$$
+As above we get a commutative diagram
+$$
+\xymatrix{
+\mathbf{A}^n_X \ar[r]_{f_n} \ar[d]_{T_p} &
+\mathbf{A}^n_Y \ar[d]^{T_p} \\
+\mathbf{A}^n_X \ar[r]^{f_n} & \mathbf{A}^n_Y
+}
+$$
+The morphism $T_p : \mathbf{A}^n_X \to \mathbf{A}^n_X$
+is \'etale at every point lying over $x$
+and the morphism $T_p : \mathbf{A}^n_Y \to \mathbf{A}^n_Y$
+is \'etale at every point lying over the image of $x$ in $Y$.
(Indeed, at such points the residue characteristic is $p$, so the
Jacobian matrix of $T_p$ is the identity:
$\partial(x_i - x_i^p)/\partial x_j = \delta_{ij}(1 - p\,x_i^{p - 1})
= \delta_{ij}$.)
+We conclude that
+$$
+T_p^{-1}(W) \cap \mathbf{A}^n_x = W \cap \mathbf{A}^n_x
+$$
+by Lemma \ref{lemma-etale-etale-local-source-target}
+(we've already seen $\mathcal{P}$ is
+\'etale local on the source-and-target).
+Since $T_p : \mathbf{A}^n_x \to \mathbf{A}^n_x$ is finite \'etale
+of degree $p^n > 1$ we see that if $Z_x$ is not empty then it contains
+$T_p^{-1}(Z_x)$ which is bigger. This contradiction finishes
+the proof.
+\end{proof}
+
+
+
+
+
+\section{Properties of morphisms of germs local on source-and-target}
+\label{section-local-source-target-at-point}
+
+\noindent
+In this section we discuss the analogue of the material in
+Section \ref{section-properties-etale-local-source-target}
+for morphisms of germs of schemes.
+
+\begin{definition}
+\label{definition-local-source-target-at-point}
+Let $\mathcal{Q}$ be a property of morphisms of germs of schemes.
+We say $\mathcal{Q}$ is {\it \'etale local on the source-and-target}
+if for any commutative diagram
+$$
+\xymatrix{
+(U', u') \ar[d]_a \ar[r]_{h'} & (V', v') \ar[d]^b \\
+(U, u) \ar[r]^h & (V, v)
+}
+$$
+of germs with \'etale vertical arrows we have
+$\mathcal{Q}(h) \Leftrightarrow \mathcal{Q}(h')$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-local-source-target-global-implies-local}
+Let $\mathcal{P}$ be a property of morphisms of schemes
+which is \'etale local on the source-and-target.
+Consider the property $\mathcal{Q}$ of
+morphisms of germs defined by the rule
+$$
+\mathcal{Q}((X, x) \to (S, s))
+\Leftrightarrow
+\text{there exists a representative }U \to S
+\text{ which has }\mathcal{P}
+$$
+Then $\mathcal{Q}$ is \'etale local on the source-and-target as in
+Definition \ref{definition-local-source-target-at-point}.
+\end{lemma}
+
+\begin{proof}
+If a morphism of germs $(X, x) \to (S, s)$ has $\mathcal{Q}$,
+then there are arbitrarily small neighbourhoods
+$U \subset X$ of $x$ and $V \subset S$ of $s$
+such that a representative $U \to V$ of $(X, x) \to (S, s)$ has $\mathcal{P}$.
+This follows from Lemma \ref{lemma-local-source-target-implies}. Let
+$$
+\xymatrix{
+(U', u') \ar[r]_{h'} \ar[d]_a & (V', v') \ar[d]^b \\
+(U, u) \ar[r]^h & (V, v)
+}
+$$
+be as in Definition \ref{definition-local-source-target-at-point}.
+Choose $U_1 \subset U$ and a representative $h_1 : U_1 \to V$ of $h$.
+Choose $V'_1 \subset V'$ and an \'etale representative $b_1 : V'_1 \to V$
+of $b$ (Definition \ref{definition-etale-morphism-germs}).
+Choose $U'_1 \subset U'$ and representatives $a_1 : U'_1 \to U_1$
+and $h'_1 : U'_1 \to V'_1$ of $a$ and $h'$ with $a_1$ \'etale.
+After shrinking $U'_1$ we may assume $h_1 \circ a_1 = b_1 \circ h'_1$.
+By the initial remark of the proof, we are trying to show
+$u' \in W(h'_1) \Leftrightarrow u \in W(h_1)$ where $W(-)$ is as
+in Lemma \ref{lemma-largest-open-of-the-source}.
+Thus the lemma follows from Lemma \ref{lemma-etale-etale-local-source-target}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-source-target-local-implies-global}
+Let $\mathcal{P}$ be a property of morphisms of schemes which is
+\'etale local on the source-and-target. Let $\mathcal{Q}$ be the associated property
+of morphisms of germs, see
+Lemma \ref{lemma-local-source-target-global-implies-local}.
+Let $f : X \to Y$ be a morphism
+of schemes. The following are equivalent:
+\begin{enumerate}
+\item $f$ has property $\mathcal{P}$, and
+\item for every $x \in X$ the morphism of germs $(X, x) \to (Y, f(x))$
+has property $\mathcal{Q}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) is direct from the definitions.
+The implication (2) $\Rightarrow$ (1) also follows from part (3) of
+Definition \ref{definition-local-source-target}.
+\end{proof}
+
+\noindent
+A morphism of germs $(X, x) \to (S, s)$ determines a well defined
+map of local rings. Hence the following lemma makes sense.
+
+\begin{lemma}
+\label{lemma-flat-at-point}
+The property of morphisms of germs
+$$
+\mathcal{P}((X, x) \to (S, s)) =
+\mathcal{O}_{S, s} \to \mathcal{O}_{X, x}\text{ is flat}
+$$
+is \'etale local on the source-and-target.
+\end{lemma}
+
+\begin{proof}
+Given a diagram as in
+Definition \ref{definition-local-source-target-at-point}
+we obtain the following diagram of local homomorphisms of local rings
+$$
+\xymatrix{
+\mathcal{O}_{U', u'} & \mathcal{O}_{V', v'} \ar[l] \\
+\mathcal{O}_{U, u} \ar[u] & \mathcal{O}_{V, v} \ar[l] \ar[u]
+}
+$$
+Note that the vertical arrows are localizations of \'etale ring maps,
+in particular they are essentially of finite presentation, flat,
+and unramified (see
+Algebra, Section \ref{algebra-section-etale}).
+In particular the vertical maps are faithfully flat, see
+Algebra, Lemma \ref{algebra-lemma-local-flat-ff}.
+Now, if the upper horizontal arrow is flat, then the lower horizontal
+arrow is flat by an application of
+Algebra, Lemma \ref{algebra-lemma-flat-permanence}
+with $R = \mathcal{O}_{V, v}$, $S = \mathcal{O}_{U, u}$ and
+$M = \mathcal{O}_{U', u'}$. If the lower horizontal arrow is
+flat, then the ring map
+$$
+\mathcal{O}_{V', v'} \otimes_{\mathcal{O}_{V, v}} \mathcal{O}_{U, u}
+\longleftarrow
+\mathcal{O}_{V', v'}
+$$
+is flat by
+Algebra, Lemma \ref{algebra-lemma-flat-base-change}.
+And the ring map
+$$
+\mathcal{O}_{U', u'}
+\longleftarrow
+\mathcal{O}_{V', v'} \otimes_{\mathcal{O}_{V, v}} \mathcal{O}_{U, u}
+$$
+is a localization of a map between \'etale ring extensions of
+$\mathcal{O}_{U, u}$, hence flat by
+Algebra, Lemma \ref{algebra-lemma-map-between-etale}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-on-fiber}
+Consider a commutative diagram of morphisms of schemes
+$$
+\xymatrix{
+U' \ar[r] \ar[d] & V' \ar[d] \\
+U \ar[r] & V
+}
+$$
+with \'etale vertical arrows and a point $v' \in V'$ mapping to $v \in V$.
+Then the morphism of fibres $U'_{v'} \to U_v$ is \'etale.
+\end{lemma}
+
+\begin{proof}
+Note that $U'_v \to U_v$ is \'etale as a base change of the \'etale
+morphism $U' \to U$. The scheme $U'_v$ is a scheme over $V'_v$. By
+Morphisms, Lemma \ref{morphisms-lemma-etale-over-field}
+the scheme $V'_v$ is a disjoint union of spectra
+of finite separable field extensions of $\kappa(v)$.
+One of these is $v' = \Spec(\kappa(v'))$. Hence
+$U'_{v'}$ is an open and closed subscheme of $U'_v$ and it follows
+that $U'_{v'} \to U'_v \to U_v$ is \'etale (as a composition of an
+open immersion and an \'etale morphism, see
+Morphisms, Section \ref{morphisms-section-etale}).
+\end{proof}
+
+\noindent
+Given a morphism of germs of schemes $(X, x) \to (S, s)$
+we can define the {\it fibre} as the isomorphism class of germs
+$(U_s, x)$ where $U \to S$ is any representative. We will often abuse notation
+and just write $(X_s, x)$.
+
+\begin{lemma}
+\label{lemma-dimension-local-ring-fibre}
+Let $d \in \{0, 1, 2, \ldots, \infty\}$.
+The property of morphisms of germs
+$$
+\mathcal{P}_d((X, x) \to (S, s)) =
+\text{the local ring }
+\mathcal{O}_{X_s, x}
+\text{ of the fibre has dimension }d
+$$
+is \'etale local on the source-and-target.
+\end{lemma}
+
+\begin{proof}
+Given a diagram as in
+Definition \ref{definition-local-source-target-at-point}
+we obtain an \'etale morphism of fibres
+$U'_{v'} \to U_v$ mapping $u'$ to $u$, see
+Lemma \ref{lemma-etale-on-fiber}.
+Hence the result follows from
+Lemma \ref{lemma-dimension-local-ring-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-transcendence-degree-at-point}
+Let $r \in \{0, 1, 2, \ldots, \infty\}$.
+The property of morphisms of germs
+$$
+\mathcal{P}_r((X, x) \to (S, s))
+\Leftrightarrow
+\text{trdeg}_{\kappa(s)} \kappa(x) = r
+$$
+is \'etale local on the source-and-target.
+\end{lemma}
+
+\begin{proof}
+Given a diagram as in
+Definition \ref{definition-local-source-target-at-point}
+we obtain the following diagram of local homomorphisms of local rings
+$$
+\xymatrix{
+\mathcal{O}_{U', u'} & \mathcal{O}_{V', v'} \ar[l] \\
+\mathcal{O}_{U, u} \ar[u] & \mathcal{O}_{V, v} \ar[l] \ar[u]
+}
+$$
+Note that the vertical arrows are localizations of \'etale ring maps,
+in particular they are unramified (see
+Algebra, Section \ref{algebra-section-etale}).
+Hence $\kappa(u')/\kappa(u)$ and $\kappa(v')/\kappa(v)$
+are finite separable field extensions.
+Since finite extensions do not change transcendence degrees we get
+$\text{trdeg}_{\kappa(v)} \kappa(u) = \text{trdeg}_{\kappa(v')} \kappa(u')$
+which proves the lemma.
+\end{proof}
+
+\noindent
+Let $(X, x)$ be a germ of a scheme.
+The dimension of $X$ at $x$ is the minimum of the dimensions of
+open neighbourhoods of $x$ in $X$, and any small enough open neighbourhood
+has this dimension. Hence this is an invariant of the isomorphism class
+of the germ. We denote this simply $\dim_x(X)$.
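+
+\noindent
+For example (an illustration not used later): let
+$X = \Spec(k[x, y, z]/(xz, yz))$ over a field $k$, the union of the
+plane $z = 0$ and the line $x = y = 0$. At points of the plane
+(including the origin) the dimension is $2$, since every open
+neighbourhood of such a point meets the plane; at points of the line
+distinct from the origin it is $1$, since small enough neighbourhoods
+of such a point lie in the line.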
+
+\begin{lemma}
+\label{lemma-dimension-at-point}
+Let $d \in \{0, 1, 2, \ldots, \infty\}$.
+The property of morphisms of germs
+$$
+\mathcal{P}_d((X, x) \to (S, s))
+\Leftrightarrow
+\dim_x (X_s) = d
+$$
+is \'etale local on the source-and-target.
+\end{lemma}
+
+\begin{proof}
+Given a diagram as in
+Definition \ref{definition-local-source-target-at-point}
+we obtain an \'etale morphism of fibres
+$U'_{v'} \to U_v$ mapping $u'$ to $u$, see
+Lemma \ref{lemma-etale-on-fiber}.
+Hence now the equality $\dim_u(U_v) = \dim_{u'}(U'_{v'})$ follows from
+Lemma \ref{lemma-dimension-at-point-local}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Descent data for schemes over schemes}
+\label{section-descent-datum}
+
+\noindent
+Most of the arguments in this section are formal, relying only
+on the definition of a descent datum. In
+Simplicial Spaces, Section \ref{spaces-simplicial-section-simplicial-descent}
+we will examine the relationship with simplicial schemes which will
+somewhat clarify the situation.
+
+\begin{definition}
+\label{definition-descent-datum}
+Let $f : X \to S$ be a morphism of schemes.
+\begin{enumerate}
+\item Let $V \to X$ be a scheme over $X$.
+A {\it descent datum for $V/X/S$} is an isomorphism
+$\varphi : V \times_S X \to X \times_S V$ of schemes over
+$X \times_S X$ satisfying the {\it cocycle condition}
+that the diagram
+$$
+\xymatrix{
+V \times_S X \times_S X \ar[rd]^{\varphi_{01}} \ar[rr]_{\varphi_{02}} &
+&
+X \times_S X \times_S V\\
+&
+X \times_S V \times_S X \ar[ru]^{\varphi_{12}}
+}
+$$
+commutes (with obvious notation).
+\item We also say that the pair $(V/X, \varphi)$ is
+a {\it descent datum relative to $X \to S$}.
+\item A {\it morphism $f : (V/X, \varphi) \to (V'/X, \varphi')$ of
+descent data relative to $X \to S$} is a morphism
+$f : V \to V'$ of schemes over $X$ such that
+the diagram
+$$
+\xymatrix{
+V \times_S X \ar[r]_{\varphi} \ar[d]_{f \times \text{id}_X} &
+X \times_S V \ar[d]^{\text{id}_X \times f} \\
+V' \times_S X \ar[r]^{\varphi'} & X \times_S V'
+}
+$$
+commutes.
+\end{enumerate}
+\end{definition}
+
+\noindent
+There are all kinds of ``miraculous'' identities which arise out of the
+definition above. For example the pullback of $\varphi$ via the diagonal
+morphism $\Delta : X \to X \times_S X$ can be seen as a morphism
+$\Delta^*\varphi : V \to V$.
+This is because $X \times_{\Delta, X \times_S X} (V \times_S X) = V$
+and also $X \times_{\Delta, X \times_S X} (X \times_S V) = V$.
+In fact, $\Delta^*\varphi$ is equal to the identity.
+This is a good exercise if you are unfamiliar with this material.
+
+\begin{remark}
+\label{remark-easier}
+Let $X \to S$ be a morphism of schemes. Let $(V/X, \varphi)$ be
+a descent datum relative to $X \to S$. We may think of the
+isomorphism $\varphi$ as an isomorphism
+$$
+(X \times_S X) \times_{\text{pr}_0, X} V
+\longrightarrow
+(X \times_S X) \times_{\text{pr}_1, X} V
+$$
+of schemes over $X \times_S X$. So loosely speaking one may
+think of $\varphi$ as a map
+$\varphi : \text{pr}_0^*V \to \text{pr}_1^*V$\footnote{Unfortunately,
+we have chosen the ``wrong'' direction for our arrow here. In
+Definitions \ref{definition-descent-datum} and
+\ref{definition-descent-datum-for-family-of-morphisms}
+we should have the opposite direction to what was done in
+Definition \ref{definition-descent-datum-quasi-coherent}
+by the general principle that ``functions'' and ``spaces'' are dual.}.
+The cocycle condition then says that
+$\text{pr}_{02}^*\varphi =
+\text{pr}_{12}^*\varphi \circ \text{pr}_{01}^*\varphi$.
+In this way it is very similar to the case of a descent datum on
+quasi-coherent sheaves.
+\end{remark}
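+
+\noindent
+To make the analogy with quasi-coherent descent concrete, here is the
+affine case (a standard translation, stated as an aside): if
+$S = \Spec(A)$, $X = \Spec(B)$ and $V = \Spec(C)$, then a descent datum
+for $V/X/S$ is the same thing as an isomorphism
+$$
+\varphi^\sharp : B \otimes_A C \longrightarrow C \otimes_A B
+$$
+of $B \otimes_A B$-algebras satisfying the cocycle condition after
+pulling back to $B \otimes_A B \otimes_A B$, exactly as for descent
+data on modules.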
+
+\noindent
+Here is the definition in case you have a family of morphisms
+with fixed target.
+
+\begin{definition}
+\label{definition-descent-datum-for-family-of-morphisms}
+Let $S$ be a scheme.
+Let $\{X_i \to S\}_{i \in I}$ be a family of morphisms with target $S$.
+\begin{enumerate}
+\item A {\it descent datum $(V_i, \varphi_{ij})$ relative to the
+family $\{X_i \to S\}$} is given by a scheme $V_i$ over $X_i$
+for each $i \in I$, an isomorphism
+$\varphi_{ij} : V_i \times_S X_j \to X_i \times_S V_j$
+of schemes over $X_i \times_S X_j$ for each pair $(i, j) \in I^2$
+such that for every triple of indices $(i, j, k) \in I^3$
+the diagram
+$$
+\xymatrix{
+V_i \times_S X_j \times_S X_k
+\ar[rd]^{\text{pr}_{01}^*\varphi_{ij}}
+\ar[rr]_{\text{pr}_{02}^*\varphi_{ik}} &
+&
+X_i \times_S X_j \times_S V_k\\
+&
+X_i \times_S V_j \times_S X_k
+\ar[ru]^{\text{pr}_{12}^*\varphi_{jk}}
+}
+$$
+of schemes over $X_i \times_S X_j \times_S X_k$ commutes
+(with obvious notation).
+\item A {\it morphism
+$\psi : (V_i, \varphi_{ij}) \to (V'_i, \varphi'_{ij})$
+of descent data} is given by a family
+$\psi = (\psi_i)_{i \in I}$ of morphisms of
+$X_i$-schemes $\psi_i : V_i \to V'_i$ such that all the diagrams
+$$
+\xymatrix{
+V_i \times_S X_j \ar[r]_{\varphi_{ij}} \ar[d]_{\psi_i \times \text{id}} &
+X_i \times_S V_j \ar[d]^{\text{id} \times \psi_j} \\
+V'_i \times_S X_j \ar[r]^{\varphi'_{ij}} & X_i \times_S V'_j
+}
+$$
+commute.
+\end{enumerate}
+\end{definition}
+
+\noindent
+This is the notion that comes up naturally for example when the question arises
+whether the fibred category of relative curves is a stack in the
+fpqc topology (it isn't -- at least not if you stick to schemes).
+
+\begin{remark}
+\label{remark-easier-family}
+Let $S$ be a scheme.
+Let $\{X_i \to S\}_{i \in I}$ be a family of morphisms with target $S$.
+Let $(V_i, \varphi_{ij})$ be a descent datum relative to
+$\{X_i \to S\}$. We may think of the isomorphisms $\varphi_{ij}$
+as isomorphisms
+$$
+(X_i \times_S X_j) \times_{\text{pr}_0, X_i} V_i
+\longrightarrow
+(X_i \times_S X_j) \times_{\text{pr}_1, X_j} V_j
+$$
+of schemes over $X_i \times_S X_j$. So loosely speaking one may
+think of $\varphi_{ij}$ as an isomorphism
+$\text{pr}_0^*V_i \to \text{pr}_1^*V_j$ over $X_i \times_S X_j$.
+The cocycle condition then says that
+$\text{pr}_{02}^*\varphi_{ik} =
+\text{pr}_{12}^*\varphi_{jk} \circ \text{pr}_{01}^*\varphi_{ij}$.
+In this way it is very similar to the case of a descent datum on
+quasi-coherent sheaves.
+\end{remark}
+
+\noindent
+The reason we will usually work with the version of a family consisting
+of a single morphism is the following lemma.
+
+\begin{lemma}
+\label{lemma-family-is-one}
+Let $S$ be a scheme.
+Let $\{X_i \to S\}_{i \in I}$ be a family of morphisms with target $S$.
+Set $X = \coprod_{i \in I} X_i$, and consider it as an $S$-scheme.
+There is a canonical equivalence of categories
+$$
+\begin{matrix}
+\text{category of descent data } \\
+\text{relative to the family } \{X_i \to S\}_{i \in I}
+\end{matrix}
+\longrightarrow
+\begin{matrix}
+\text{ category of descent data} \\
+\text{ relative to } X/S
+\end{matrix}
+$$
+which maps $(V_i, \varphi_{ij})$ to $(V, \varphi)$ with
+$V = \coprod_{i\in I} V_i$ and $\varphi = \coprod \varphi_{ij}$.
+\end{lemma}
+
+\begin{proof}
+Observe that $X \times_S X = \coprod_{ij} X_i \times_S X_j$
+and similarly for higher fibre products.
+Giving a morphism $V \to X$ is exactly the same as
+giving a family $V_i \to X_i$. And giving a descent datum
+$\varphi$ is exactly the same as giving a family $\varphi_{ij}$.
+\end{proof}
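+
+\noindent
+A familiar special case (included as an illustration): if
+$S = U_1 \cup U_2$ is covered by two opens and we take the family
+$\{U_i \to S\}_{i = 1, 2}$, then $U_1 \times_S U_2 = U_1 \cap U_2$ and a
+descent datum amounts to schemes $V_i \to U_i$ together with an
+isomorphism $\varphi_{12}$ between their restrictions to $U_1 \cap U_2$
+(the maps $\varphi_{11}$, $\varphi_{22}$, $\varphi_{21}$ carry no extra
+information by the cocycle condition). Thus descent data relative to a
+family generalize classical glueing data.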
+
+\begin{lemma}
+\label{lemma-pullback}
+Pullback of descent data for schemes over schemes.
+\begin{enumerate}
+\item Let
+$$
+\xymatrix{
+X' \ar[r]_f \ar[d]_{a'} & X \ar[d]^a \\
+S' \ar[r]^h & S
+}
+$$
+be a commutative diagram of morphisms of schemes.
+The construction
+$$
+(V \to X, \varphi) \longmapsto f^*(V \to X, \varphi) = (V' \to X', \varphi')
+$$
+where $V' = X' \times_X V$ and where
+$\varphi'$ is defined as the composition
+$$
+\xymatrix{
+V' \times_{S'} X' \ar@{=}[r] &
+(X' \times_X V) \times_{S'} X' \ar@{=}[r] &
+(X' \times_{S'} X') \times_{X \times_S X} (V \times_S X)
+\ar[d]^{\text{id} \times \varphi} \\
+X' \times_{S'} V' \ar@{=}[r] &
+X' \times_{S'} (X' \times_X V) &
+(X' \times_{S'} X') \times_{X \times_S X} (X \times_S V) \ar@{=}[l]
+}
+$$
+defines a functor from the category of descent data
+relative to $X \to S$ to the category of descent data
+relative to $X' \to S'$.
+\item Given two morphisms $f_i : X' \to X$, $i = 0, 1$ making the
+diagram commute the functors $f_0^*$ and $f_1^*$ are
+canonically isomorphic.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We omit the proof of (1), but we remark that the morphism
+$\varphi'$ is the morphism $(f \times f)^*\varphi$ in the
+notation introduced in Remark \ref{remark-easier}.
+For (2) we indicate which morphism
+$f_0^*V \to f_1^*V$ gives the functorial isomorphism. Namely,
+since $f_0$ and $f_1$ both fit into the commutative diagram
+we see there is a unique morphism $r : X' \to X \times_S X$
+with $f_i = \text{pr}_i \circ r$. Then we take
+\begin{eqnarray*}
+f_0^*V & = &
+X' \times_{f_0, X} V \\
+& = &
+X' \times_{\text{pr}_0 \circ r, X} V \\
+& = &
+X' \times_{r, X \times_S X} (X \times_S X) \times_{\text{pr}_0, X} V \\
+& \xrightarrow{\varphi} &
+X' \times_{r, X \times_S X} (X \times_S X) \times_{\text{pr}_1, X} V \\
+& = &
+X' \times_{\text{pr}_1 \circ r, X} V \\
+& = &
+X' \times_{f_1, X} V \\
+& = & f_1^*V
+\end{eqnarray*}
+We omit the verification that this works.
+\end{proof}
+
+\begin{definition}
+\label{definition-pullback-functor}
+With $S, S', X, X', f, a, a', h$ as in Lemma \ref{lemma-pullback} the functor
+$$
+(V, \varphi) \longmapsto f^*(V, \varphi)
+$$
+constructed in that lemma is called the {\it pullback functor} on descent data.
+\end{definition}
+
+\begin{lemma}[Pullback of descent data for schemes over families]
+\label{lemma-pullback-family}
+Let $\mathcal{U} = \{U_i \to S'\}_{i \in I}$ and
+$\mathcal{V} = \{V_j \to S\}_{j \in J}$ be families of morphisms with
+fixed target. Let $\alpha : I \to J$, $h : S' \to S$ and
+$g_i : U_i \to V_{\alpha(i)}$ be a morphism of families
+of maps with fixed target, see
+Sites, Definition \ref{sites-definition-morphism-coverings}.
+\begin{enumerate}
+\item Let $(Y_j, \varphi_{jj'})$ be a descent datum relative to the
+family $\{V_j \to S\}$. The system
+$$
+\left(
+g_i^*Y_{\alpha(i)},
+(g_i \times g_{i'})^*\varphi_{\alpha(i)\alpha(i')}
+\right)
+$$
+(with notation as in Remark \ref{remark-easier-family})
+is a descent datum relative to $\mathcal{U}$.
+\item This construction defines a functor from descent data relative
+to $\mathcal{V}$ to descent data relative to $\mathcal{U}$.
+\item Given a second morphism $\alpha' : I \to J$, $h' : S' \to S$,
+$g'_i : U_i \to V_{\alpha'(i)}$ of families
+of maps with fixed target, if $h = h'$ then the two resulting functors
+between descent data are canonically isomorphic.
+\item These functors agree, via Lemma \ref{lemma-family-is-one},
+with the pullback functors constructed in Lemma \ref{lemma-pullback}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-pullback} via the
+correspondence of Lemma \ref{lemma-family-is-one}.
+\end{proof}
+
+\begin{definition}
+\label{definition-pullback-functor-family}
+With $\mathcal{U} = \{U_i \to S'\}_{i \in I}$,
+$\mathcal{V} = \{V_j \to S\}_{j \in J}$, $\alpha : I \to J$, $h : S' \to S$,
+and $g_i : U_i \to V_{\alpha(i)}$ as in Lemma \ref{lemma-pullback-family}
+the functor
+$$
+(Y_j, \varphi_{jj'}) \longmapsto
+(g_i^*Y_{\alpha(i)}, (g_i \times g_{i'})^*\varphi_{\alpha(i)\alpha(i')})
+$$
+constructed in that lemma
+is called the {\it pullback functor} on descent data.
+\end{definition}
+
+\noindent
+If $\mathcal{U}$ and $\mathcal{V}$ have the same target $S$,
+and if $\mathcal{U}$ refines $\mathcal{V}$ (see
+Sites, Definition \ref{sites-definition-morphism-coverings})
+but no explicit pair $(\alpha, g_i)$ is given, then we can still
+talk about the pullback functor since we have seen in
+Lemma \ref{lemma-pullback-family} that the choice of the pair does not matter
+(up to a canonical isomorphism).
+
+
+\begin{definition}
+\label{definition-effective}
+Let $S$ be a scheme.
+Let $f : X \to S$ be a morphism of schemes.
+\begin{enumerate}
+\item Given a scheme $U$ over $S$ we have the
+{\it trivial descent datum} of $U$ relative to
+$\text{id} : S \to S$, namely the identity morphism on $U$.
+\item By Lemma \ref{lemma-pullback} we get a
+{\it canonical descent datum} on $X \times_S U$
+relative to $X \to S$ by pulling back the trivial
+descent datum via $f$. We often
+denote this descent datum by $(X \times_S U, can)$.
+\item A descent datum $(V, \varphi)$ relative to $X/S$ is
+called {\it effective} if $(V, \varphi)$
+is isomorphic to the canonical descent datum
+$(X \times_S U, can)$ for some scheme $U$ over $S$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Thus being effective means there exists a scheme $U$
+over $S$ and an isomorphism $\psi : V \to X \times_S U$
+of $X$-schemes such that $\varphi$ is equal to the composition
+$$
+V \times_S X \xrightarrow{\psi \times \text{id}_X}
+X \times_S U \times_S X =
+X \times_S X \times_S U
+\xrightarrow{\text{id}_X \times \psi^{-1}}
+X \times_S V
+$$
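+
+\noindent
+A classical illustration (a sketch, not used in the sequel): take
+$X = \Spec(\mathbf{C}) \to S = \Spec(\mathbf{R})$. Since
+$\mathbf{C} \otimes_{\mathbf{R}} \mathbf{C} \cong
+\mathbf{C} \times \mathbf{C}$, a descent datum on a scheme $V$ over
+$\Spec(\mathbf{C})$ amounts to an isomorphism of $V$ with its complex
+conjugate satisfying a cocycle (involution) condition, and an effective
+descent datum is the same thing as a real form of $V$. This is Galois
+descent.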
+
+\begin{definition}
+\label{definition-effective-family}
+Let $S$ be a scheme.
+Let $\{X_i \to S\}$ be a family of morphisms
+with target $S$.
+\begin{enumerate}
+\item Given a scheme $U$ over $S$
+we have a {\it canonical descent datum} on the family of
+schemes $X_i \times_S U$ by pulling back the trivial
+descent datum for $U$ relative to $\{\text{id} : S \to S\}$.
+We denote this descent datum $(X_i \times_S U, can)$.
+\item A descent datum $(V_i, \varphi_{ij})$
+relative to $\{X_i \to S\}$ is called {\it effective}
+if there exists a scheme $U$ over $S$ such that
+$(V_i, \varphi_{ij})$ is isomorphic to $(X_i \times_S U, can)$.
+\end{enumerate}
+\end{definition}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Fully faithfulness of the pullback functors}
+\label{section-fully-faithful}
+
+\noindent
+It turns out that the pullback functor between descent data
+for fpqc-coverings is fully faithful. In other words, morphisms of schemes
+satisfy fpqc descent. The goal of this section
+is to prove this. The reader is encouraged to prove this themselves first.
+The key is to use Lemma \ref{lemma-fpqc-universal-effective-epimorphisms}.
+
+\begin{lemma}
+\label{lemma-surjective-flat-epi}
+A surjective and flat morphism is an epimorphism in the
+category of schemes.
+\end{lemma}
+
+\begin{proof}
+Suppose we have $h : X' \to X$ surjective and flat and
+$a, b : X \to Y$ morphisms such that $a \circ h = b \circ h$.
+As $h$ is surjective we see that $a$ and $b$ agree on underlying
+topological spaces. Pick $x' \in X'$ and set $x = h(x')$ and
+$y = a(x) = b(x)$. Consider the local ring maps
+$$
+a^\sharp_x, b^\sharp_x : \mathcal{O}_{Y, y} \to \mathcal{O}_{X, x}
+$$
+These become equal when composed with
+the flat local homomorphism
+$h^\sharp_{x'} : \mathcal{O}_{X, x} \to \mathcal{O}_{X', x'}$.
+Since a flat local homomorphism is faithfully flat
+(Algebra, Lemma \ref{algebra-lemma-local-flat-ff})
+we conclude that $h^\sharp_{x'}$ is injective.
+Hence $a^\sharp_x = b^\sharp_x$ which implies $a = b$ as desired.
+\end{proof}
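+
+\noindent
+Both hypotheses are used; here is a standard example showing that
+surjectivity alone does not suffice. Let $k$ be a field and let
+$h : \Spec(k) \to \Spec(k[\epsilon]/(\epsilon^2))$ be the (surjective,
+non-flat) closed immersion. The two $k$-algebra maps
+$k[x] \to k[\epsilon]/(\epsilon^2)$, $x \mapsto 0$ and
+$x \mapsto \epsilon$, define distinct morphisms
+$a, b : \Spec(k[\epsilon]/(\epsilon^2)) \to \Spec(k[x])$
+with $a \circ h = b \circ h$, so $h$ is not an epimorphism.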
+
+\begin{lemma}
+\label{lemma-ff-base-change-faithful}
+Let $h : S' \to S$ be a surjective, flat morphism of
+schemes. The base change functor
+$$
+\Sch/S \longrightarrow \Sch/S', \quad
+X \longmapsto S' \times_S X
+$$
+is faithful.
+\end{lemma}
+
+\begin{proof}
+Let $X_1$, $X_2$ be schemes over $S$.
+Let $\alpha, \beta : X_2 \to X_1$ be morphisms over $S$.
+If $\alpha$, $\beta$ base change to the same morphism then
+we get a commutative diagram as follows
+$$
+\xymatrix{
+X_2 \ar[d]^\alpha &
+S' \times_S X_2 \ar[l] \ar[d] \ar[r] &
+X_2 \ar[d]^\beta \\
+X_1 &
+S' \times_S X_1 \ar[l] \ar[r] &
+X_1
+}
+$$
+Hence it suffices to show that $S' \times_S X_2 \to X_2$
+is an epimorphism. As the base change of a surjective and
+flat morphism it is surjective and flat (see
+Morphisms, Lemmas \ref{morphisms-lemma-base-change-surjective}
+and \ref{morphisms-lemma-base-change-flat}). Hence the lemma follows
+from Lemma \ref{lemma-surjective-flat-epi}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-faithful}
+In the situation of Lemma \ref{lemma-pullback}
+assume that $f : X' \to X$ is surjective
+and flat. Then the pullback functor is faithful.
+\end{lemma}
+
+\begin{proof}
+Let $(V_i, \varphi_i)$, $i = 1, 2$ be descent data for $X \to S$.
+Let $\alpha, \beta : V_1 \to V_2$ be morphisms of descent data.
+Suppose that $f^*\alpha = f^*\beta$. Our task is to show that
+$\alpha = \beta$. Note that $\alpha$, $\beta$ are morphisms
+of schemes over $X$, and that $f^*\alpha$, $f^*\beta$ are
+simply the base changes of $\alpha$, $\beta$ to morphisms over
+$X'$. Hence the lemma follows from Lemma \ref{lemma-ff-base-change-faithful}.
+\end{proof}
+
+\noindent
+Here is the key lemma of this section.
+
+\begin{lemma}
+\label{lemma-fully-faithful}
+In the situation of Lemma \ref{lemma-pullback}
+assume
+\begin{enumerate}
+\item $\{f : X' \to X\}$ is an fpqc covering (for example if $f$ is
+surjective, flat, and quasi-compact), and
+\item $S = S'$.
+\end{enumerate}
+Then the pullback functor is fully faithful.
+\end{lemma}
+
+\begin{proof}
+Assumption (1) implies that $f$ is surjective and flat.
+Hence the pullback functor is faithful by
+Lemma \ref{lemma-faithful}.
+Let $(V, \varphi)$ and $(W, \psi)$ be two descent data relative
+to $X \to S$. Set $(V', \varphi') = f^*(V, \varphi)$ and
+$(W', \psi') = f^*(W, \psi)$.
+Let $\alpha' : V' \to W'$ be a morphism of descent data for $X'$ over $S$.
+We have to show there exists a morphism $\alpha : V \to W$ of
+descent data for $X$ over $S$ whose pullback is $\alpha'$.
+
+\medskip\noindent
+Recall that $V'$ is the base change of $V$ by $f$ and that
+$\varphi'$ is the base change of $\varphi$ by $f \times f$
+(see Remark \ref{remark-easier}).
+By assumption the diagram
+$$
+\xymatrix{
+V' \times_S X' \ar[r]_{\varphi'} \ar[d]_{\alpha' \times \text{id}} &
+X' \times_S V' \ar[d]^{\text{id} \times \alpha'} \\
+W' \times_S X' \ar[r]^{\psi'} &
+X' \times_S W'
+}
+$$
+commutes. We claim the two compositions
+$$
+\xymatrix{
+V' \times_V V' \ar[r]^-{\text{pr}_i} &
+V' \ar[r]^{\alpha'} &
+W' \ar[r] &
+W
+}
+, \quad i = 0, 1
+$$
+are the same. The reader is advised to prove this themselves rather
+than read the rest of this paragraph. (Please email if you find a
+nice clean argument.)
+Let $v_0, v_1$ be points of $V'$ which map to the same point $v \in V$.
+Let $x_i \in X'$ be the image of $v_i$, and let
+$x$ be the point of $X$ which is the image of $v$ in $X$. In other words,
+$v_i = (x_i, v)$ in $V' = X' \times_X V$. Write
+$\varphi(v, x) = (x, v')$ for some point $v'$ of $V$.
+This is possible because $\varphi$ is
+a morphism over $X \times_S X$. Denote
+$v_i' = (x_i, v')$ which is a point of $V'$.
+Then a calculation (using the definition of $\varphi'$)
+shows that $\varphi'(v_i, x_j) = (x_i, v'_j)$. Denote
+$w_i = \alpha'(v_i)$ and $w'_i = \alpha'(v_i')$.
+Now we may write $w_i = (x_i, u_i)$ for some point $u_i$ of $W$,
+and $w_i' = (x_i, u'_i)$ for some point $u_i'$ of $W$.
+The claim is equivalent to the assertion: $u_0 = u_1$.
+A formal calculation using the definition of $\psi'$
+(see Lemma \ref{lemma-pullback}) shows
+that the commutativity of the diagram displayed above says that
+$$
+((x_i, x_j), \psi(u_i, x)) = ((x_i, x_j), (x, u'_j))
+$$
+as points of
+$(X' \times_S X') \times_{X \times_S X} (X \times_S W)$
+for all $i, j \in \{0, 1\}$. This shows that $\psi(u_0, x) = \psi(u_1, x)$
+and hence $u_0 = u_1$ by taking $\psi^{-1}$.
+This proves the claim because the argument above was formal
+and we can take scheme points (in other words, we may
+take $(v_0, v_1) = \text{id}_{V' \times_V V'}$).
+
+\medskip\noindent
+At this point we can use
+Lemma \ref{lemma-fpqc-universal-effective-epimorphisms}.
+Namely, $\{V' \to V\}$ is an fpqc covering as
+the base change of the morphism $f : X' \to X$.
+Hence, by
+Lemma \ref{lemma-fpqc-universal-effective-epimorphisms}
+the morphism $\alpha' : V' \to W' \to W$ factors through
+a unique morphism $\alpha : V \to W$ whose base change is
+necessarily $\alpha'$. Finally, we see the diagram
+$$
+\xymatrix{
+V \times_S X \ar[r]_{\varphi} \ar[d]_{\alpha \times \text{id}} &
+X \times_S V \ar[d]^{\text{id} \times \alpha} \\
+W \times_S X \ar[r]^{\psi} & X \times_S W
+}
+$$
+commutes because its base change to $X' \times_S X'$
+commutes and the morphism $X' \times_S X' \to X \times_S X$
+is surjective and flat (use Lemma \ref{lemma-ff-base-change-faithful}).
+Hence $\alpha$ is a morphism of descent data
+$(V, \varphi) \to (W, \psi)$ as desired.
+\end{proof}
+
+\noindent
+The following two lemmas have been obsoleted by the improved
+exposition of the previous material. But they are still true!
+
+\begin{lemma}
+\label{lemma-pullback-selfmap}
+Let $X \to S$ be a morphism of schemes.
+Let $f : X \to X$ be a selfmap of $X$ over $S$.
+In this case pullback by $f$ is isomorphic to the
+identity functor on the category of descent data
+relative to $X \to S$.
+\end{lemma}
+
+\begin{proof}
+This is clear from part (2) of Lemma \ref{lemma-pullback} since it tells
+us that $f^* \cong \text{id}^*$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphism-with-section-equivalence}
+Let $f : X' \to X$ be a morphism of schemes over a base scheme $S$.
+Assume there exists a morphism $g : X \to X'$ over $S$, for example
+if $f$ has a section. Then the pullback functor
+of Lemma \ref{lemma-pullback} defines an equivalence of
+categories between the category of descent data relative to
+$X/S$ and $X'/S$.
+\end{lemma}
+
+\begin{proof}
+Let $g : X \to X'$ be a morphism over $S$.
+Lemma \ref{lemma-pullback-selfmap} above shows that the functors
+$f^* \circ g^* = (g \circ f)^*$ and $g^* \circ f^* = (f \circ g)^*$
+are isomorphic
+to the respective identity functors as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphism-source-faithfully-flat}
+Let $f : X \to X'$ be a morphism of schemes over a base scheme $S$.
+Assume $X \to S$ is surjective and flat. Then the pullback functor
+of Lemma \ref{lemma-pullback} is a faithful functor
+from the category of descent data relative to $X'/S$ to the
+category of descent data relative to $X/S$.
+\end{lemma}
+
+\begin{proof}
+We may factor $X \to X'$ as $X \to X \times_S X' \to X'$.
+The first morphism has a section, hence induces an equivalence of
+categories of descent data by
+Lemma \ref{lemma-morphism-with-section-equivalence}.
+The second morphism is surjective and flat, hence induces a
+faithful functor by Lemma \ref{lemma-faithful}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphism-source-fpqc-covering}
+Let $f : X \to X'$ be a morphism of schemes over a base scheme $S$.
+Assume $\{X \to S\}$ is an fpqc covering (for example if $f$ is
+surjective, flat and quasi-compact).
+Then the pullback functor of Lemma \ref{lemma-pullback} is a
+fully faithful functor from the category of descent data relative
+to $X'/S$ to the category of descent data relative to $X/S$.
+\end{lemma}
+
+\begin{proof}
+We may factor $X \to X'$ as $X \to X \times_S X' \to X'$.
+The first morphism has a section, hence induces an equivalence of
+categories of descent data by
+Lemma \ref{lemma-morphism-with-section-equivalence}.
+The second morphism is an fpqc covering
+hence induces a fully faithful functor by Lemma \ref{lemma-fully-faithful}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fpqc-refinement-coverings-fully-faithful}
+Let $S$ be a scheme.
+Let $\mathcal{U} = \{U_i \to S\}_{i \in I}$, and
+$\mathcal{V} = \{V_j \to S\}_{j \in J}$,
+be families of morphisms with target $S$.
+Let $\alpha : I \to J$, $\text{id} : S \to S$ and
+$g_i : U_i \to V_{\alpha(i)}$ be a morphism of families
+of maps with fixed target, see
+Sites, Definition \ref{sites-definition-morphism-coverings}.
+Assume that for each $j \in J$ the family
+$\{g_i : U_i \to V_j\}_{\alpha(i) = j}$ is an fpqc
+covering of $V_j$. Then the pullback functor
+$$
+\text{descent data relative to }
+\mathcal{V}
+\longrightarrow
+\text{descent data relative to }
+\mathcal{U}
+$$
+of Lemma \ref{lemma-pullback-family} is fully faithful.
+\end{lemma}
+
+\begin{proof}
+Consider the morphism of schemes
+$$
+g :
+X = \coprod\nolimits_{i \in I} U_i
+\longrightarrow
+Y = \coprod\nolimits_{j \in J} V_j
+$$
+over $S$ which on the $i$th component maps into the $\alpha(i)$th component
+via the morphism $g_i$. We claim that $\{g : X \to Y\}$
+is an fpqc covering of schemes. Namely, by
+Topologies, Lemma \ref{topologies-lemma-disjoint-union-is-fpqc-covering}
+for each $j$ the family $\{\coprod_{\alpha(i) = j} U_i \to V_j\}$ is an
+fpqc covering. Thus for every affine open $V \subset V_j$
+(which we may think of as an affine open of $Y$)
+we can find finitely many affine opens
+$W_1, \ldots, W_n \subset \coprod_{\alpha(i) = j} U_i$
+(which we may think of as affine opens of $X$)
+such that $V = \bigcup_{i = 1, \ldots, n} g(W_i)$.
+This provides enough affine opens of $Y$ which can be covered by finitely
+many affine opens of $X$ so that
+Topologies, Lemma \ref{topologies-lemma-recognize-fpqc-covering} part (3)
+applies, and the claim follows. Let us write $DD(X/S)$,
+resp.\ $DD(\mathcal{U})$ for the category of descent data with respect
+to $X/S$, resp.\ $\mathcal{U}$, and similarly for $Y/S$ and $\mathcal{V}$.
+Consider the diagram
+$$
+\xymatrix{
+DD(Y/S) \ar[r] & DD(X/S) \\
+DD(\mathcal{V}) \ar[u]^{\text{Lemma }\ref{lemma-family-is-one}} \ar[r] &
+DD(\mathcal{U}) \ar[u]_{\text{Lemma }\ref{lemma-family-is-one}}
+}
+$$
+This diagram is commutative, see the proof of
+Lemma \ref{lemma-pullback-family}.
+The vertical arrows are equivalences. Hence the lemma follows from
+Lemma \ref{lemma-fully-faithful} which shows the top horizontal arrow
+of the diagram is fully faithful.
+\end{proof}
+
+\noindent
+The next lemma shows that, in order to check effectiveness,
+we may always Zariski refine the given family of morphisms
+with target $S$.
+
+\begin{lemma}
+\label{lemma-Zariski-refinement-coverings-equivalence}
+Let $S$ be a scheme.
+Let $\mathcal{U} = \{U_i \to S\}_{i \in I}$, and
+$\mathcal{V} = \{V_j \to S\}_{j \in J}$,
+be families of morphisms with target $S$.
+Let $\alpha : I \to J$, $\text{id} : S \to S$ and
+$g_i : U_i \to V_{\alpha(i)}$ be a morphism of families
+of maps with fixed target, see
+Sites, Definition \ref{sites-definition-morphism-coverings}.
+Assume that for each $j \in J$ the family
+$\{g_i : U_i \to V_j\}_{\alpha(i) = j}$ is a Zariski covering (see
+Topologies, Definition \ref{topologies-definition-zariski-covering})
+of $V_j$. Then the pullback functor
+$$
+\text{descent data relative to }
+\mathcal{V}
+\longrightarrow
+\text{descent data relative to }
+\mathcal{U}
+$$
+of Lemma \ref{lemma-pullback-family} is an equivalence of categories.
+In particular, the category of schemes over $S$
+is equivalent to the category
+of descent data relative to any Zariski covering of $S$.
+\end{lemma}
+
+\begin{proof}
+The functor is faithful and fully faithful by
+Lemma \ref{lemma-fpqc-refinement-coverings-fully-faithful}.
+Let us indicate how to prove that it is essentially surjective.
+Let $(X_i, \varphi_{ii'})$ be a descent datum relative to $\mathcal{U}$.
+Fix $j \in J$ and set $I_j = \{i \in I \mid \alpha(i) = j\}$.
+For $i, i' \in I_j$ note that there is a canonical morphism
+$$
+c_{ii'} : U_i \times_{g_i, V_j, g_{i'}} U_{i'} \to U_i \times_S U_{i'}.
+$$
+Hence we can pullback $\varphi_{ii'}$ by this morphism
+and set $\psi_{ii'} = c_{ii'}^*\varphi_{ii'}$ for $i, i' \in I_j$.
+In this way we obtain a descent datum $(X_i, \psi_{ii'})$
+relative to the Zariski covering
+$\{g_i : U_i \to V_j\}_{i \in I_j}$.
+Note that $\psi_{ii'}$ is an isomorphism from the open
+$X_{i, U_i \times_{V_j} U_{i'}}$ of $X_i$ to the corresponding
+open of $X_{i'}$. It follows from
+Schemes, Section \ref{schemes-section-glueing-schemes}
+that we may glue $(X_i, \psi_{ii'})$ into a scheme
+$Y_j$ over $V_j$. Moreover, the morphisms $\varphi_{ii'}$
+for $i \in I_j$ and $i' \in I_{j'}$ glue to a morphism
+$\varphi_{jj'} : Y_j \times_S V_{j'} \to V_j \times_S Y_{j'}$
+satisfying the cocycle condition (details omitted).
+Hence we obtain the desired descent datum
+$(Y_j, \varphi_{jj'})$ relative to $\mathcal{V}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-refine-coverings-fully-faithful}
+Let $S$ be a scheme.
+Let $\mathcal{U} = \{U_i \to S\}_{i \in I}$, and
+$\mathcal{V} = \{V_j \to S\}_{j \in J}$,
+be fpqc-coverings of $S$.
+If $\mathcal{U}$ is a refinement of $\mathcal{V}$,
+then the pullback functor
+$$
+\text{descent data relative to }
+\mathcal{V}
+\longrightarrow
+\text{descent data relative to }
+\mathcal{U}
+$$
+is fully faithful.
+In particular, the category of schemes over $S$
+is identified with a full subcategory of the category
+of descent data relative to any fpqc-covering of $S$.
+\end{lemma}
+
+\begin{proof}
+Consider the fpqc-covering
+$\mathcal{W} = \{U_i \times_S V_j \to S\}_{(i, j) \in I \times J}$ of $S$.
+It is a refinement of both $\mathcal{U}$ and $\mathcal{V}$.
+Hence we have a $2$-commutative diagram of functors and categories
+$$
+\xymatrix{
+DD(\mathcal{V}) \ar[rd] \ar[rr] & & DD(\mathcal{U}) \ar[ld] \\
+& DD(\mathcal{W}) &
+}
+$$
+Here the notation is as in the proof of
+Lemma \ref{lemma-fpqc-refinement-coverings-fully-faithful} and the
+commutativity follows from Lemma \ref{lemma-pullback-family} part (3).
+Hence clearly it suffices to prove the functors
+$DD(\mathcal{V}) \to DD(\mathcal{W})$ and
+$DD(\mathcal{U}) \to DD(\mathcal{W})$ are fully faithful.
+This follows from
+Lemma \ref{lemma-fpqc-refinement-coverings-fully-faithful}
+as desired.
+\end{proof}
+
+\begin{remark}
+\label{remark-morphisms-of-schemes-satisfy-fpqc-descent}
+Lemma \ref{lemma-refine-coverings-fully-faithful}
+says that morphisms of schemes satisfy fpqc descent.
+In other words, given a scheme $S$ and schemes $X$, $Y$ over $S$
+the functor
+$$
+(\Sch/S)^{opp} \longrightarrow \textit{Sets},
+\quad
+T \longmapsto \Mor_T(X_T, Y_T)
+$$
+satisfies the sheaf condition for the fpqc topology.
+The simplest case of this is the following. Suppose that $T \to S$
+is a surjective flat morphism of affines. Let $\psi_0 : X_T \to Y_T$
+be a morphism of schemes over $T$ which is compatible with the
+canonical descent data. Then there exists a unique morphism
+$\psi : X \to Y$ whose base change to $T$ is $\psi_0$. In fact this
+special case follows in a straightforward manner from
+Lemma \ref{lemma-fully-faithful}.
+And, in turn, that lemma is a formal consequence of the following
+two facts:
+(a) the base change functor by a faithfully flat morphism is faithful, see
+Lemma \ref{lemma-ff-base-change-faithful}
+and (b) a scheme satisfies the sheaf condition for the fpqc topology, see
+Lemma \ref{lemma-fpqc-universal-effective-epimorphisms}.
+\end{remark}
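+
+\noindent
+To spell out the sheaf condition in the special case discussed in the
+remark (this reformulation is standard and is not needed later): for
+$\{T \to S\}$ as above it says that the sequence of sets
+$$
+\Mor_S(X, Y)
+\longrightarrow
+\Mor_T(X_T, Y_T)
+\rightrightarrows
+\Mor_{T \times_S T}(X_{T \times_S T}, Y_{T \times_S T})
+$$
+is an equalizer diagram, where the two right hand arrows are given by
+pulling back along the two projections $T \times_S T \to T$.
+A morphism $\psi_0$ compatible with the canonical descent data is
+exactly an element of $\Mor_T(X_T, Y_T)$ equalized by these two arrows.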
+
+\begin{lemma}
+\label{lemma-effective-for-fpqc-is-local-upstairs}
+Let $X \to S$ be a surjective, quasi-compact, flat morphism of
+schemes. Let $(V, \varphi)$ be a descent datum relative to $X/S$.
+Suppose that for all $v \in V$ there exists an open subscheme
+$v \in W \subset V$ such that $\varphi(W \times_S X) \subset X \times_S W$
+and such that the descent datum $(W, \varphi|_{W \times_S X})$
+is effective. Then $(V, \varphi)$ is effective.
+\end{lemma}
+
+\begin{proof}
+Let $V = \bigcup W_i$ be an open covering with
+$\varphi(W_i \times_S X) \subset X \times_S W_i$
+and such that the descent datum $(W_i, \varphi|_{W_i \times_S X})$
+is effective. Let $U_i \to S$ be a scheme and let
+$\alpha_i : (X \times_S U_i, can) \to (W_i, \varphi|_{W_i \times_S X})$
+be an isomorphism of descent data. For each pair of indices
+$(i, j)$ consider the open
+$\alpha_i^{-1}(W_i \cap W_j) \subset X \times_S U_i$.
+Because everything is compatible with descent data
+and since $\{X \to S\}$ is an fpqc covering, we
+may apply Lemma \ref{lemma-open-fpqc-covering}
+to find an open $U_{ij} \subset U_i$ such that
+$\alpha_i^{-1}(W_i \cap W_j) = X \times_S U_{ij}$.
+Now the identity morphism on $W_i \cap W_j$ is
+compatible with descent data, hence comes from a
+unique morphism $\varphi_{ij} : U_{ij} \to U_{ji}$ over $S$
+(see Remark \ref{remark-morphisms-of-schemes-satisfy-fpqc-descent}).
+Then $(U_i, U_{ij}, \varphi_{ij})$ is a glueing
+data as in Schemes, Section \ref{schemes-section-glueing-schemes}
+(proof omitted). Thus we may assume there is a scheme $U$ over $S$
+such that $U_i \subset U$ is open, $U_{ij} = U_i \cap U_j$ and
+$\varphi_{ij} = \text{id}_{U_i \cap U_j}$, see
+Schemes, Lemma \ref{schemes-lemma-glue}.
+Pulling back to $X$ we can use the $\alpha_i$ to
+get the desired isomorphism $\alpha : X \times_S U \to V$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Descending types of morphisms}
+\label{section-descending-types-morphisms}
+
+\noindent
+In the following we study the question as to whether
+descent data for schemes relative to an fpqc-covering
+are effective. The first remark to make is that this
+is not always the case. We will see this in Algebraic
+Spaces, Example \ref{spaces-example-non-representable-descent}.
+Even projective morphisms do not always satisfy descent
+for fpqc-coverings, by Examples,
+Lemma \ref{examples-lemma-non-effective-descent-projective}.
+
+\medskip\noindent
+On the other hand, if the schemes we are trying to
+descend are particularly simple, then it is sometimes the
+case that for whole classes of schemes descent data
+are effective. We will introduce terminology here that
+describes this phenomenon abstractly, even though it
+may lead to confusion if not used correctly later on.
+
+\begin{definition}
+\label{definition-descending-types-morphisms}
+Let $\mathcal{P}$ be a property of morphisms of schemes over a base.
+Let $\tau \in \{Zariski, fpqc, fppf, \etale, smooth, syntomic\}$.
+We say
+{\it morphisms of type $\mathcal{P}$ satisfy descent for $\tau$-coverings}
+if for
+any $\tau$-covering $\mathcal{U} : \{U_i \to S\}_{i \in I}$
+(see Topologies, Section \ref{topologies-section-procedure}),
+any descent datum $(X_i, \varphi_{ij})$ relative to $\mathcal{U}$
+such that each morphism $X_i \to U_i$ has property $\mathcal{P}$
+is effective.
+\end{definition}
+
+\noindent
+Note that in each of the cases we have already seen that
+the functor from schemes over $S$ to descent data over
+$\mathcal{U}$ is fully faithful
+(Lemma \ref{lemma-refine-coverings-fully-faithful} combined
+with the results in Topologies that any $\tau$-covering
+is also an fpqc-covering).
+We have also seen that descent data are always effective with
+respect to Zariski coverings
+(Lemma \ref{lemma-Zariski-refinement-coverings-equivalence}).
+It may be prudent to only study the notion just introduced
+when $\mathcal{P}$ is either stable under any base change or at least
+local on the base in the $\tau$-topology
+(see Definition \ref{definition-property-morphisms-local})
+in order to avoid erroneous arguments (relying on $\mathcal{P}$
+when descending halfway).
+
+\medskip\noindent
+Here is the obligatory lemma reducing this question
+to the case of a covering given by a single morphism of affines.
+
+\begin{lemma}
+\label{lemma-descending-types-morphisms}
+Let $\mathcal{P}$ be a property of morphisms of schemes over a base.
+Let $\tau \in \{fpqc, fppf, \etale, smooth, syntomic\}$.
+Suppose that
+\begin{enumerate}
+\item $\mathcal{P}$ is stable under any base change
+(see Schemes, Definition \ref{schemes-definition-preserved-by-base-change}),
+\item if $Y_j \to V_j$, $j = 1, \ldots, m$ have $\mathcal{P}$,
+then so does $\coprod Y_j \to \coprod V_j$, and
+\item for any surjective morphism of affines
+$X \to S$ which is flat, flat of finite presentation,
+\'etale, smooth or syntomic depending on whether $\tau$ is
+fpqc, fppf, \'etale, smooth, or syntomic,
+any descent datum $(V, \varphi)$ relative
+to $X$ over $S$ such that $\mathcal{P}$ holds for
+$V \to X$ is effective.
+\end{enumerate}
+Then morphisms of type $\mathcal{P}$ satisfy descent for $\tau$-coverings.
+\end{lemma}
+
+\begin{proof}
+Let $S$ be a scheme.
+Let $\mathcal{U} = \{\varphi_i : U_i \to S\}_{i \in I}$
+be a $\tau$-covering of $S$.
+Let $(X_i, \varphi_{ii'})$ be a descent datum relative to
+$\mathcal{U}$ and assume that each morphism $X_i \to U_i$ has property
+$\mathcal{P}$. We have to show there exists a scheme $X \to S$ such that
+$(X_i, \varphi_{ii'}) \cong (U_i \times_S X, can)$.
+
+\medskip\noindent
+Before we start the proof proper we remark that for any
+family of morphisms $\mathcal{V} : \{V_j \to S\}$ and any
+morphism of families $\mathcal{V} \to \mathcal{U}$, if we pullback
+the descent datum $(X_i, \varphi_{ii'})$ to a descent datum
+$(Y_j, \varphi_{jj'})$ over $\mathcal{V}$, then each of the
+morphisms $Y_j \to V_j$ has property $\mathcal{P}$ also.
+This is true because of assumption (1) that $\mathcal{P}$ is stable
+under any base change and the definition of pullback
+(see Definition \ref{definition-pullback-functor-family}).
+We will use this without further mention.
+
+\medskip\noindent
+First, let us prove the lemma when $S$ is affine.
+By Topologies, Lemma
+\ref{topologies-lemma-fpqc-affine},
+\ref{topologies-lemma-fppf-affine},
+\ref{topologies-lemma-etale-affine},
+\ref{topologies-lemma-smooth-affine}, or
+\ref{topologies-lemma-syntomic-affine}
+there exists a standard $\tau$-covering
+$\mathcal{V} : \{V_j \to S\}_{j = 1, \ldots, m}$
+which refines $\mathcal{U}$. The pullback functor
+$DD(\mathcal{U}) \to DD(\mathcal{V})$
+between categories of descent data is fully faithful
+by Lemma \ref{lemma-refine-coverings-fully-faithful}.
+Hence it suffices to prove that the descent datum over
+the standard $\tau$-covering $\mathcal{V}$ is effective.
+By assumption (2) we see that $\coprod Y_j \to \coprod V_j$
+has property $\mathcal{P}$.
+By Lemma \ref{lemma-family-is-one} this reduces us to the covering
+$\{\coprod_{j = 1, \ldots, m} V_j \to S\}$ for which we have
+assumed the result in assumption (3) of the lemma.
+Hence the lemma holds when $S$ is affine.
+
+\medskip\noindent
+Assume $S$ is general. Let $V \subset S$ be an affine open.
+By the axioms of a site the family
+$\mathcal{U}_V = \{V \times_S U_i \to V\}_{i \in I}$ is a
+$\tau$-covering of $V$. Denote
+$(X_i, \varphi_{ii'})_V$ the restriction (or pullback) of
+the given descent datum to $\mathcal{U}_V$.
+Hence by what we just saw we obtain a scheme $X_V$ over $V$
+whose canonical descent datum with respect to
+$\mathcal{U}_V$ is isomorphic to $(X_i, \varphi_{ii'})_V$.
+Suppose that $V' \subset V$ is an affine open of $V$.
+Then both $X_{V'}$ and $V' \times_V X_V$ have canonical
+descent data isomorphic to $(X_i, \varphi_{ii'})_{V'}$.
+Hence, by Lemma \ref{lemma-refine-coverings-fully-faithful}
+again we obtain a canonical morphism
+$\rho^V_{V'} : X_{V'} \to X_V$ over $S$ which identifies
+$X_{V'}$ with the inverse image of $V'$ in $X_V$.
+We omit the verification that given affine opens
+$V'' \subset V' \subset V$ of $S$ we have
+$\rho^V_{V''} = \rho^V_{V'} \circ \rho^{V'}_{V''}$.
+
+\medskip\noindent
+By Constructions, Lemma \ref{constructions-lemma-relative-glueing} the data
+$(X_V, \rho^V_{V'})$ glue to a scheme $X \to S$.
+Moreover, we are given isomorphisms $V \times_S X \to X_V$
+which recover the maps $\rho^V_{V'}$. Unwinding the construction
+of the schemes $X_V$ we obtain isomorphisms
+$$
+V \times_S U_i \times_S X
+\longrightarrow
+V \times_S X_i
+$$
+compatible with the maps $\varphi_{ii'}$ and compatible with
+restricting to smaller affine opens of $S$. This implies that
+the canonical descent datum on $U_i \times_S X$ is isomorphic
+to the given descent datum and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Descending affine morphisms}
+\label{section-affine}
+
+\noindent
+In this section we show that
+``affine morphisms satisfy descent for fpqc-coverings''.
+Here is the formal statement.
+
+\begin{lemma}
+\label{lemma-affine}
+Let $S$ be a scheme.
+Let $\{X_i \to S\}_{i\in I}$ be an fpqc covering, see
+Topologies, Definition \ref{topologies-definition-fpqc-covering}.
+Let $(V_i/X_i, \varphi_{ij})$ be a descent datum
+relative to $\{X_i \to S\}$. If each morphism
+$V_i \to X_i$ is affine, then the descent datum is
+effective.
+\end{lemma}
+
+\begin{proof}
+Being affine is a property of morphisms of schemes
+which is local on the base and preserved under any base change, see
+Morphisms, Lemmas \ref{morphisms-lemma-characterize-affine} and
+\ref{morphisms-lemma-base-change-affine}.
+Hence Lemma \ref{lemma-descending-types-morphisms} applies
+and it suffices to prove the statement of the lemma
+in case the fpqc-covering is given by a single
+$\{X \to S\}$ flat surjective morphism of affines.
+Say $X = \Spec(A)$ and $S = \Spec(R)$ so
+that $R \to A$ is a faithfully flat ring map.
+Let $(V, \varphi)$ be a descent datum relative to $X$ over $S$
+and assume that $V \to X$ is affine.
+Then $V \to X$ being affine implies that $V = \Spec(B)$
+for some $A$-algebra $B$ (see
+Morphisms, Definition \ref{morphisms-definition-affine}).
+The isomorphism $\varphi$ corresponds to an isomorphism
+of rings
+$$
+\varphi^\sharp :
+B \otimes_R A \longleftarrow A \otimes_R B
+$$
+as $A \otimes_R A$-algebras. The cocycle condition on $\varphi$
+says that
+$$
+\xymatrix{
+B \otimes_R A \otimes_R A & &
+A \otimes_R A \otimes_R B \ar[ll] \ar[ld]\\
+& A \otimes_R B \otimes_R A \ar[lu] &
+}
+$$
+is commutative. Inverting these arrows we see that we have a
+descent datum for modules with respect to $R \to A$ as in
+Definition \ref{definition-descent-datum-modules}.
+Hence we may apply Proposition \ref{proposition-descent-module}
+to obtain an $R$-module
+$C = \Ker(B \to A \otimes_R B)$,
+the map being
+$b \mapsto 1 \otimes b - (\varphi^\sharp)^{-1}(b \otimes 1)$,
+and an isomorphism $A \otimes_R C \cong B$
+respecting descent data. Given any pair $c, c' \in C$
+the product $cc'$ in $B$ lies in $C$ since the
+map $\varphi$ is an algebra homomorphism. Hence
+$C$ is an $R$-algebra whose base change to $A$ is
+isomorphic to $B$ compatibly with descent data.
+Applying $\Spec$ we obtain a scheme
+$U$ over $S$ such that $(V, \varphi) \cong (X \times_S U, can)$
+as desired.
+\end{proof}
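+
+\noindent
+As an illustration of the proof of Lemma \ref{lemma-affine}
+(this example is not used elsewhere), consider the faithfully flat
+ring map $\mathbf{R} \to \mathbf{C}$. Via the isomorphism
+$\mathbf{C} \otimes_{\mathbf{R}} \mathbf{C} \cong
+\mathbf{C} \times \mathbf{C}$, $z \otimes w \mapsto (zw, z\bar{w})$,
+a descent datum on a $\mathbf{C}$-algebra $B$ is the same thing as a
+semilinear involution $\sigma : B \to B$, i.e., a ring map with
+$\sigma^2 = \text{id}_B$ and $\sigma(zb) = \bar{z}\sigma(b)$.
+The $\mathbf{R}$-algebra $C$ constructed in the proof becomes the
+algebra of invariants
+$$
+C = B^\sigma = \{b \in B \mid \sigma(b) = b\}
+$$
+and the isomorphism $\mathbf{C} \otimes_{\mathbf{R}} C \to B$ is
+given by $z \otimes c \mapsto zc$. This is classical Galois descent
+for the extension $\mathbf{C}/\mathbf{R}$.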
+
+\begin{lemma}
+\label{lemma-closed-immersion}
+Let $S$ be a scheme.
+Let $\{X_i \to S\}_{i\in I}$ be an fpqc covering, see
+Topologies, Definition \ref{topologies-definition-fpqc-covering}.
+Let $(V_i/X_i, \varphi_{ij})$ be a descent datum
+relative to $\{X_i \to S\}$. If each morphism
+$V_i \to X_i$ is a closed immersion, then the descent datum is
+effective.
+\end{lemma}
+
+\begin{proof}
+This is true because a closed immersion is an affine morphism
+(Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-affine}),
+and hence Lemma \ref{lemma-affine} applies.
+\end{proof}
+
+
+\section{Descending quasi-affine morphisms}
+\label{section-quasi-affine}
+
+\noindent
+In this section we show that
+``quasi-affine morphisms satisfy descent for fpqc-coverings''.
+Here is the formal statement.
+
+\begin{lemma}
+\label{lemma-quasi-affine}
+Let $S$ be a scheme.
+Let $\{X_i \to S\}_{i\in I}$ be an fpqc covering, see
+Topologies, Definition \ref{topologies-definition-fpqc-covering}.
+Let $(V_i/X_i, \varphi_{ij})$ be a descent datum
+relative to $\{X_i \to S\}$. If each morphism
+$V_i \to X_i$ is quasi-affine, then the descent datum is
+effective.
+\end{lemma}
+
+\begin{proof}
+Being quasi-affine is a property of morphisms of schemes
+which is preserved under any base change, see
+Morphisms, Lemmas \ref{morphisms-lemma-characterize-quasi-affine} and
+\ref{morphisms-lemma-base-change-quasi-affine}.
+Hence Lemma \ref{lemma-descending-types-morphisms} applies
+and it suffices to prove the statement of the lemma
+in case the fpqc-covering is given by a single
+$\{X \to S\}$ flat surjective morphism of affines.
+Say $X = \Spec(A)$ and $S = \Spec(R)$ so
+that $R \to A$ is a faithfully flat ring map.
+Let $(V, \varphi)$ be a descent datum relative to $X$ over $S$
+and assume that $\pi : V \to X$ is quasi-affine.
+
+\medskip\noindent
+According to Morphisms, Lemma \ref{morphisms-lemma-characterize-quasi-affine}
+this means that
+$$
+V \longrightarrow \underline{\Spec}_X(\pi_*\mathcal{O}_V) = W
+$$
+is a quasi-compact open immersion of schemes over $X$.
+The projections $\text{pr}_i : X \times_S X \to X$ are flat
+and hence we have
+$$
+\text{pr}_0^*\pi_*\mathcal{O}_V =
+(\pi \times \text{id}_X)_*\mathcal{O}_{V \times_S X}, \quad
+\text{pr}_1^*\pi_*\mathcal{O}_V =
+(\text{id}_X \times \pi)_*\mathcal{O}_{X \times_S V}
+$$
+by flat base change
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-flat-base-change-cohomology}).
+Thus the isomorphism $\varphi : V \times_S X \to X \times_S V$ (which
+is an isomorphism over $X \times_S X$) induces an isomorphism
+of quasi-coherent sheaves of algebras
+$$
+\varphi^\sharp :
+\text{pr}_0^*\pi_*\mathcal{O}_V
+\longrightarrow
+\text{pr}_1^*\pi_*\mathcal{O}_V
+$$
+on $X \times_S X$.
+The cocycle condition for $\varphi$ implies the cocycle condition
+for $\varphi^\sharp$. Another way to say this is that it produces
+a descent datum $\varphi'$ on the affine scheme $W$ relative to
+$X$ over $S$, which moreover has the property that the morphism
+$V \to W$ is a morphism of descent data.
+Hence by Lemma \ref{lemma-affine}
+(or by effectivity of descent for quasi-coherent
+algebras) we obtain a scheme $U' \to S$ with an isomorphism
+$(W, \varphi') \cong (X \times_S U', can)$ of descent data.
+We note in passing that $U'$ is affine by
+Lemma \ref{lemma-descending-property-affine}.
+
+\medskip\noindent
+And now we can think of $V$ as a (quasi-compact)
+open $V \subset X \times_S U'$ with the property that
+it is stable under the descent datum
+$$
+can : X \times_S U' \times_S X \to X \times_S X \times_S U',
+(x_0, u', x_1) \mapsto (x_0, x_1, u').
+$$
+In other words $(x_0, u') \in V \Rightarrow (x_1, u') \in V$
+for any $x_0, x_1, u'$ mapping to the same point of $S$.
+Because $X \to S$ is surjective we immediately find that
+$V$ is the inverse image of a subset $U \subset U'$ under
+the morphism $X \times_S U' \to U'$.
+Because $X \to S$ is quasi-compact, flat and surjective
+also $X \times_S U' \to U'$ is quasi-compact flat and surjective.
+Hence by Morphisms, Lemma \ref{morphisms-lemma-fpqc-quotient-topology}
+this subset $U \subset U'$ is open and we win.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Descent data in terms of sheaves}
+\label{section-descent-data-sheaves}
+
+
+\noindent
+Here is another way to think about descent data
+in case of a covering on a site.
+
+\begin{lemma}
+\label{lemma-descent-data-sheaves}
+Let $\tau \in \{Zariski, fppf, \etale, smooth, syntomic\}$\footnote{The
+fact that fpqc is missing is not a typo. See discussion
+in Topologies, Section \ref{topologies-section-fpqc}.}.
+Let $\Sch_\tau$ be a big $\tau$-site.
+Let $S \in \Ob(\Sch_\tau)$.
+Let $\{S_i \to S\}_{i \in I}$ be a covering in the
+site $(\Sch/S)_\tau$. There is an equivalence of
+categories
+$$
+\left\{
+\begin{matrix}
+\text{descent data }(X_i, \varphi_{ii'})\text{ such that}\\
+\text{each }X_i \in \Ob((\Sch/S)_\tau)
+\end{matrix}
+\right\}
+\leftrightarrow
+\left\{
+\begin{matrix}
+\text{sheaves }F\text{ on }(\Sch/S)_\tau\text{ such that}\\
+\text{each }h_{S_i} \times F\text{ is representable}
+\end{matrix}
+\right\}.
+$$
+Moreover,
+\begin{enumerate}
+\item the objects representing $h_{S_i} \times F$ on the right hand side
+correspond to the schemes $X_i$ on the left hand side, and
+\item the sheaf $F$ is representable if and only if the
+corresponding descent datum $(X_i, \varphi_{ii'})$ is effective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have seen in Section \ref{section-fpqc-universal-effective-epimorphisms}
+that representable presheaves are sheaves on the site $(\Sch/S)_\tau$.
+Moreover, the Yoneda lemma (Categories, Lemma \ref{categories-lemma-yoneda})
+guarantees that maps between representable
+sheaves correspond one to one with maps between the representing objects.
+We will use these remarks without further mention during the proof.
+
+\medskip\noindent
+Let us construct the functor from right to left.
+Let $F$ be a sheaf on $(\Sch/S)_\tau$ such that each
+$h_{S_i} \times F$ is representable. In this case let $X_i$
+be a representing object in $(\Sch/S)_\tau$.
+It comes equipped with a morphism $X_i \to S_i$.
+Then both $X_i \times_S S_{i'}$ and $S_i \times_S X_{i'}$
+represent the sheaf $h_{S_i} \times F \times h_{S_{i'}}$
+and hence we obtain an isomorphism
+$$
+\varphi_{ii'} : X_i \times_S S_{i'} \to S_i \times_S X_{i'}
+$$
+It is straightforward to see that the maps $\varphi_{ii'}$
+are morphisms over $S_i \times_S S_{i'}$ and satisfy the
+cocycle condition. The functor from right to left is given
+by this construction $F \mapsto (X_i, \varphi_{ii'})$.
+
+\medskip\noindent
+Let us construct a functor from left to right.
+For each $i$ denote $F_i$ the sheaf $h_{X_i}$.
+The isomorphisms $\varphi_{ii'}$ give isomorphisms
+$$
+\varphi_{ii'} :
+F_i \times h_{S_{i'}}
+\longrightarrow
+h_{S_i} \times F_{i'}
+$$
+over $h_{S_i} \times h_{S_{i'}}$.
+Set $F$ equal to the coequalizer in the following diagram
+$$
+\xymatrix{
+\coprod_{i, i'} F_i \times h_{S_{i'}}
+\ar@<1ex>[rr]^-{\text{pr}_0}
+\ar@<-1ex>[rr]_-{\text{pr}_1 \circ \varphi_{ii'}}
+& &
+\coprod_i F_i \ar[r]
+&
+F
+}
+$$
+The cocycle condition guarantees that $h_{S_i} \times F$ is
+isomorphic to $F_i$ and hence representable.
+The functor from left to right is given
+by this construction $(X_i, \varphi_{ii'}) \mapsto F$.
+
+\medskip\noindent
+We omit the verification that these constructions
+are mutually quasi-inverse functors. The final statements
+(1) and (2) follow from the constructions.
+\end{proof}
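+
+\noindent
+As a sanity check (illustration only), suppose $\tau = Zariski$ and the
+covering is given by two opens $S_1, S_2 \subset S$ with
+$S = S_1 \cup S_2$. Then a descent datum amounts to a pair of schemes
+$X_i \to S_i$ together with an isomorphism
+$\varphi_{12} : X_1 \times_S S_2 \to S_1 \times_S X_2$
+over $S_1 \cap S_2$, i.e., a glueing datum, and the corresponding sheaf
+$F$ is obtained by glueing $h_{X_1}$ and $h_{X_2}$ along the induced
+isomorphism of restrictions to $(\Sch/(S_1 \cap S_2))_\tau$.
+Since glueing data for schemes are always effective
+(Schemes, Section \ref{schemes-section-glueing-schemes}),
+the sheaf $F$ is representable in this case, in agreement with part (2)
+and Lemma \ref{lemma-Zariski-refinement-coverings-equivalence}.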
+
+\begin{remark}
+\label{remark-what-product-means}
+In the statement of Lemma \ref{lemma-descent-data-sheaves} the condition that
+$h_{S_i} \times F$ is representable is equivalent to
+the condition that the restriction of $F$ to
+$(\Sch/S_i)_\tau$ is representable.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+
+\end{document}
diff --git a/books/stacks/desirables.tex b/books/stacks/desirables.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1b3a4b81db8c528dc308a8bc0a901255aed46c18
--- /dev/null
+++ b/books/stacks/desirables.tex
@@ -0,0 +1,313 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Desirables}
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+This is basically just a list of things that we want to put in the Stacks
+project. As we add material to the Stacks project continuously, this list
+is always somewhat behind the current state of the Stacks project. In fact,
+it may have been a mistake to try to list things we should add, because it
+seems impossible to keep such a list up to date.
+
+\medskip\noindent
+Last updated: Thursday, August 31, 2017.
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+We should have a chapter with a short list of conventions used in the document.
+This chapter already exists, see
+Conventions, Section \ref{conventions-section-comments},
+but a lot more could be added there. Especially useful would be to find
+``hidden'' conventions and tacit assumptions and put those there.
+
+
+\section{Sites and Topoi}
+\label{section-sites}
+
+\noindent
+We have a chapter on sites and sheaves, see
+Sites, Section \ref{sites-section-introduction}.
+We have a chapter on ringed sites (and topoi) and modules on them, see
+Modules on Sites, Section \ref{sites-modules-section-introduction}.
+We have a chapter on cohomology in this setting, see
+Cohomology on Sites, Section \ref{sites-cohomology-section-introduction}.
+But a lot more could be added, especially in the chapter on cohomology.
+
+
+\section{Stacks}
+\label{section-stacks}
+
+\noindent
+We have a chapter on (abstract) stacks, see
+Stacks, Section \ref{stacks-section-introduction}.
+It would be nice to
+\begin{enumerate}
+\item improve the discussion on ``stackyfication'',
+\item give examples of stackyfication,
+\item give more examples in general, and
+\item improve the discussion of gerbes.
+\end{enumerate}
+An example of a result which has not been added yet: given a sheaf of
+abelian groups $\mathcal{F}$ on $\mathcal{C}$, the set of equivalence
+classes of gerbes banded by $\mathcal{F}$ is in bijection with
+$H^2(\mathcal{C}, \mathcal{F})$.
+
+
+\section{Simplicial methods}
+\label{section-simplicial}
+
+\noindent
+We have a chapter on simplicial methods, see
+Simplicial, Section \ref{simplicial-section-introduction}.
+This has to be reviewed and improved. The discussion of
+the relationship between simplicial homotopy (also known as
+combinatorial homotopy) and Kan complexes should be improved upon.
+There is a chapter on simplicial spaces, see
+Simplicial Spaces, Section \ref{spaces-simplicial-section-introduction}.
+This chapter briefly discusses
+simplicial topological spaces, simplicial sites, and simplicial topoi.
+We can further develop ``simplicial algebraic geometry'' to discuss
+simplicial schemes (or simplicial algebraic spaces, or
+simplicial algebraic stacks) and treat geometric questions, their cohomology,
+etc.
+
+
+\section{Cohomology of schemes}
+\label{section-schemes-cohomology}
+
+\noindent
+There is already a chapter on cohomology of quasi-coherent sheaves, see
+Cohomology of Schemes, Section \ref{coherent-section-introduction}.
+We have a chapter discussing the derived category of
+quasi-coherent sheaves on a scheme, see
+Derived Categories of Schemes, Section \ref{perfect-section-introduction}.
+We have a chapter discussing duality for Noetherian schemes
+and relative duality for morphisms of schemes, see
+Duality for Schemes, Section \ref{duality-section-introduction}.
+We also have chapters on \'etale cohomology of schemes and on
+crystalline cohomology of schemes. But most of the material in these
+chapters is very basic and a lot more could/should be added there.
+
+
+\section{Deformation theory \`a la Schlessinger}
+\label{section-deformation-schlessinger}
+
+\noindent
+We have a chapter on this material, see
+Formal Deformation Theory, Section \ref{formal-defos-section-introduction}.
+We have a chapter discussing examples of the general theory, see
+Deformation Problems, Section \ref{examples-defos-section-introduction}.
+We have a chapter, see
+Deformation Theory, Section \ref{defos-section-introduction}
+which discusses deformations of rings (and modules),
+deformations of ringed spaces (and sheaves of modules),
+deformations of ringed topoi (and sheaves of modules).
+In this chapter we use the naive cotangent complex
+to describe obstructions, first order deformations, and
+infinitesimal automorphisms. This material has found some
+applications to algebraicity of moduli stacks in later chapters.
+There is also a chapter discussing the full cotangent complex, see
+Cotangent, Section \ref{cotangent-section-introduction}.
+
+
+\section{Definition of algebraic stacks}
+\label{section-definition-algebraic-stacks}
+
+\noindent
+An algebraic stack is a stack in groupoids over the category of schemes
+with the fppf topology that has a diagonal representable by algebraic
+spaces and is the target of a surjective smooth morphism from a scheme.
+See Algebraic Stacks, Section \ref{algebraic-section-algebraic-stacks}.
+A ``Deligne-Mumford stack'' is an algebraic stack for which there exists
+a scheme and a surjective \'etale morphism from that scheme to it
+as in the paper \cite{DM} of Deligne and Mumford, see
+Algebraic Stacks, Definition \ref{algebraic-definition-deligne-mumford}.
+We will reserve the term ``Artin stack'' for a stack such as in the papers by
+Artin, see \cite{ArtinI}, \cite{ArtinII}, and \cite{ArtinVersal}.
+A possible definition is that an Artin stack is an algebraic stack
+$\mathcal{X}$ over a locally Noetherian scheme $S$ such that
+$\mathcal{X} \to S$ is
+locally of finite type\footnote{Namely, these are exactly the algebraic
+stacks over $S$ satisfying Artin's axioms [-1], [0], [1], [2], [3], [4], [5]
+of Artin's Axioms, Section \ref{artin-section-axioms}.}.
+
+
+\section{Examples of schemes, algebraic spaces, algebraic stacks}
+\label{section-examples-stacks}
+
+\noindent
+The Stacks project currently contains two chapters discussing
+moduli stacks and their properties, see
+Moduli Stacks, Section \ref{moduli-section-introduction} and
+Moduli of Curves, Section \ref{moduli-curves-section-introduction}.
+Over time we intend to add more, for example:
+\begin{enumerate}
+\item $\mathcal{A}_g$,
+i.e., principally polarized abelian schemes of dimension $g$,
+\item $\mathcal{A}_1 = \mathcal{M}_{1, 1}$, i.e.,
+$1$-pointed smooth projective genus $1$ curves,
+\item $\mathcal{M}_{g, n}$, i.e., smooth projective genus $g$ curves
+with $n$ pairwise distinct labeled points,
+\item $\overline{\mathcal{M}}_{g, n}$, i.e.,
+stable $n$-pointed nodal projective genus $g$ curves,
+\item $\SheafHom_S(\mathcal{X}, \mathcal{Y})$, moduli of morphisms
+(with suitable conditions on the stacks $\mathcal{X}$, $\mathcal{Y}$
+and the base scheme $S$),
+\item $\textit{Bun}_G(X) = \SheafHom_S(X, BG)$, the stack of $G$-bundles
+of the geometric Langlands programme (with suitable conditions on the scheme
+$X$, the group scheme $G$, and the base scheme $S$),
+\item $\Picardstack_{\mathcal{X}/S}$, i.e., the Picard stack associated
+to an algebraic stack over a base scheme (or space).
+\end{enumerate}
+More generally, the Stacks project is somewhat
+lacking in geometrically meaningful examples.
+
+
+\section{Properties of algebraic stacks}
+\label{section-stacks-properties}
+
+\noindent
+This is perhaps one of the easier projects to work on, as most of the
+basic theory is there now. Of course these things are really properties
+of morphisms of stacks. We can define singularities (up to smooth factors),
+prove that a connected normal stack is irreducible, etc.
+
+
+\section{Lisse \'etale site of an algebraic stack}
+\label{section-lisse-etale}
+
+\noindent
+This has been introduced in
+Cohomology of Stacks, Section \ref{stacks-cohomology-section-lisse-etale}.
+An example to show that it is not functorial with respect to $1$-morphisms
+of algebraic stacks is discussed in
+Examples, Section \ref{examples-section-lisse-etale-not-functorial}.
+Of course a lot more could be said about this, but it turns out
+to be very useful to prove things using the ``big'' \'etale site
+as much as possible.
+
+
+
+\section{Things you always wanted to know but were afraid to ask}
+\label{section-stacks-fun-lemmas}
+
+\noindent
+There are going to be lots of lemmas that you use over and over again,
+that are useful, but that aren't really mentioned specifically in the
+literature, or for which it isn't easy to find references. A bag of tricks.
+
+\medskip\noindent
+Example: Given two groupoids in schemes $R\Rightarrow U$ and
+$R' \Rightarrow U'$, what does it mean to have a $1$-morphism
+$[U/R] \to [U'/R']$ purely in terms of groupoids in schemes?
+
+
+
+\section{Quasi-coherent sheaves on stacks}
+\label{section-quasi-coherent}
+
+\noindent
+These are defined and discussed in the chapter
+Cohomology of Stacks, Section \ref{stacks-cohomology-section-introduction}.
+Derived categories of modules are discussed in the chapter
+Derived Categories of Stacks, Section \ref{stacks-perfect-section-introduction}.
+A lot more could be added to these chapters.
+
+
+
+\section{Flat and smooth}
+\label{section-flat-smooth}
+
+\noindent
+Artin's theorem that having a flat surjection from a scheme is a replacement
+for the smooth surjective condition. This is now available as
+Criteria for Representability, Theorem \ref{criteria-theorem-bootstrap}.
+
+
+\section{Artin's representability theorem}
+\label{section-representability}
+
+\noindent
+This is discussed in the chapter
+Artin's Axioms, Section \ref{artin-section-introduction}.
+We also have an application, see
+Quot, Theorem \ref{quot-theorem-coherent-algebraic}.
+There should be a lot more applications and the chapter
+itself has to be cleaned up as well.
+
+
+\section{DM stacks are finitely covered by schemes}
+\label{section-dm-finite-cover}
+
+\noindent
+We already have the corresponding result for algebraic spaces, see
+Limits of Spaces, Section \ref{spaces-limits-section-finite-cover}.
+What is missing is the result for DM and quasi-DM stacks.
+
+
+\section{Martin Olsson's paper on properness}
+\label{section-proper-parametrization}
+
+\noindent
+This proves two notions of proper are the same. The first part of this
+is now available in the form of Chow's lemma for algebraic stacks, see
+More on Morphisms of Stacks, Theorem
+\ref{stacks-more-morphisms-theorem-chow-finite-type}.
+As a consequence we show that it suffices to use DVR's
+in checking the valuative criterion for properness for
+algebraic stacks in certain cases, see
+More on Morphisms of Stacks, Section
+\ref{stacks-more-morphisms-section-Noetherian-valuative-criterion}.
+
+
+\section{Proper pushforward of coherent sheaves}
+\label{section-proper-pushforward}
+
+\noindent
+We can start working on this now that we have Chow's lemma for
+algebraic stacks, see previous section.
+
+
+\section{Keel and Mori}
+\label{section-keel-mori}
+
+\noindent
+See \cite{K-M}. Their result has been added in
+More on Morphisms of Stacks, Section
+\ref{stacks-more-morphisms-section-Keel-Mori}.
+
+
+\section{Add more here}
+\label{section-add-more}
+
+\noindent
+Actually, no, we should never have started this list as part of
+the Stacks project itself! There is a todo list somewhere else
+which is much easier to update.
+
+
+\input{chapters}
+
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/dga.tex b/books/stacks/dga.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4d06fc6926cb776785260b314daf888a3e5bc316
--- /dev/null
+++ b/books/stacks/dga.tex
@@ -0,0 +1,7209 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Differential Graded Algebra}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we talk about differential graded algebras, modules,
+categories, etc. A basic reference is \cite{Keller-Deriving}.
+A survey paper is \cite{Keller-survey}.
+
+\medskip\noindent
+Since we do not worry about length of exposition in the Stacks project
+we first develop the material in the setting of categories of differential
+graded modules. After that we redo the constructions in the setting of
+differential graded modules over differential graded categories.
+
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+In this chapter we hold on to the convention that {\it ring} means
+commutative ring with $1$. If $R$ is a ring, then an {\it $R$-algebra $A$}
+will be an $R$-module $A$ endowed with an $R$-bilinear map $A \times A \to A$
+(multiplication) such that multiplication is associative and has a unit.
+In other words, these are unital associative $R$-algebras
+such that the structure map $R \to A$ maps into the center of $A$.
+
+\medskip\noindent
+{\bf Sign rules.} In this chapter we will work with graded algebras
+and graded modules often equipped with differentials. The sign rules on
+underlying complexes will always be (compatible with) those introduced in
+More on Algebra, Section \ref{more-algebra-section-sign-rules}.
+This will occasionally cause the multiplicative structure to be
+twisted in unexpected ways especially when considering left modules
+or the relationship between left and right modules.
+
+
+
+
+
+\section{Differential graded algebras}
+\label{section-dga}
+
+
+\noindent
+Just the definitions.
+
+\begin{definition}
+\label{definition-dga}
+Let $R$ be a commutative ring. A {\it differential graded algebra over $R$}
+is either
+\begin{enumerate}
+\item a chain complex $A_\bullet$ of $R$-modules endowed with
+$R$-bilinear maps $A_n \times A_m \to A_{n + m}$,
+$(a, b) \mapsto ab$ such that
+$$
+\text{d}_{n + m}(ab) = \text{d}_n(a)b + (-1)^n a\text{d}_m(b)
+$$
+and such that $\bigoplus A_n$ becomes an associative and unital
+$R$-algebra, or
+\item a cochain complex $A^\bullet$ of $R$-modules endowed with
+$R$-bilinear maps $A^n \times A^m \to A^{n + m}$, $(a, b) \mapsto ab$
+such that
+$$
+\text{d}^{n + m}(ab) = \text{d}^n(a)b + (-1)^n a\text{d}^m(b)
+$$
+and such that $\bigoplus A^n$ becomes an associative and unital $R$-algebra.
+\end{enumerate}
+\end{definition}
+
+\noindent
+We often just write $A = \bigoplus A_n$ or $A = \bigoplus A^n$ and
+think of this as an associative unital $R$-algebra endowed with a
+$\mathbf{Z}$-grading and an $R$-linear operator $\text{d}$ whose square
+is zero and which satisfies the Leibniz rule as explained above. In this case
+we often say ``Let $(A, \text{d})$ be a differential graded algebra''.
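+
+\medskip\noindent
+For instance, any associative unital $R$-algebra $A$, placed in degree
+$0$ and endowed with $\text{d} = 0$, is a differential graded algebra;
+more generally, so is any $\mathbf{Z}$-graded associative unital
+$R$-algebra equipped with the zero differential. In these examples
+the Leibniz rule holds trivially.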
+
+\medskip\noindent
+The Leibniz rule relating differentials and multiplication on a differential
+graded $R$-algebra $A$ exactly means that the multiplication map defines
+a map of cochain complexes
+$$
+\text{Tot}(A^\bullet \otimes_R A^\bullet) \to A^\bullet
+$$
+Here $A^\bullet$ denotes the underlying cochain complex of $A$.
+
+\begin{definition}
+\label{definition-homomorphism-dga}
+A {\it homomorphism of differential graded algebras}
+$f : (A, \text{d}) \to (B, \text{d})$ is an algebra map $f : A \to B$
+compatible with the gradings and $\text{d}$.
+\end{definition}
+
+\begin{definition}
+\label{definition-cdga}
+A differential graded algebra $(A, \text{d})$ is {\it commutative} if
+$ab = (-1)^{nm}ba$ for $a$ in degree $n$ and $b$ in degree $m$.
+We say $A$ is {\it strictly commutative} if in addition $a^2 = 0$
+for $\deg(a)$ odd.
+\end{definition}
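+
+\medskip\noindent
+Note that if $a$ has odd degree $n$ in a commutative differential
+graded algebra, then $a^2 = (-1)^{n^2} a^2 = -a^2$, so $2a^2 = 0$.
+Hence commutative implies strictly commutative as soon as $2$ is
+invertible in $R$.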
+
+\noindent
+The following definition makes sense in general but is perhaps
+``correct'' only when tensoring commutative differential graded
+algebras.
+
+\begin{definition}
+\label{definition-tensor-product}
+Let $R$ be a ring.
+Let $(A, \text{d})$, $(B, \text{d})$ be differential graded algebras over $R$.
+The {\it tensor product differential graded algebra} of $A$ and $B$
+is the algebra $A \otimes_R B$ with multiplication defined by
+$$
+(a \otimes b)(a' \otimes b') = (-1)^{\deg(a')\deg(b)} aa' \otimes bb'
+$$
+endowed with differential $\text{d}$ defined by the rule
+$\text{d}(a \otimes b) = \text{d}(a) \otimes b + (-1)^m a \otimes \text{d}(b)$
+where $m = \deg(a)$.
+\end{definition}
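+
+\medskip\noindent
+To illustrate how the signs interact, let $a \in A^m$, $b \in B^n$,
+$a' \in A^{m'}$, $b' \in B^{n'}$ and consider the coefficient of
+$aa' \otimes \text{d}(b)b'$ on either side of the Leibniz rule for the
+product $(a \otimes b)(a' \otimes b') = (-1)^{m'n} aa' \otimes bb'$.
+Expanding $\text{d}\left((a \otimes b)(a' \otimes b')\right)$ this term
+appears with sign
+$$
+(-1)^{m'n} \cdot (-1)^{m + m'}
+$$
+coming from the Leibniz rule applied to $aa' \otimes bb'$, whereas in
+$\text{d}(a \otimes b)(a' \otimes b') +
+(-1)^{m + n}(a \otimes b)\text{d}(a' \otimes b')$
+the same term arises from $(-1)^m a \otimes \text{d}(b)$ multiplied by
+$a' \otimes b'$, with sign
+$$
+(-1)^m \cdot (-1)^{m'(n + 1)} = (-1)^{m + m'n + m'}
+$$
+since $\text{d}(b)$ has degree $n + 1$. The signs agree, and the other
+three terms work out similarly.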
+
+\begin{lemma}
+\label{lemma-total-complex-tensor-product}
+Let $R$ be a ring.
+Let $(A, \text{d})$, $(B, \text{d})$ be differential graded algebras over $R$.
+Denote $A^\bullet$, $B^\bullet$ the underlying cochain complexes.
+As cochain complexes of $R$-modules we have
+$$
+(A \otimes_R B)^\bullet = \text{Tot}(A^\bullet \otimes_R B^\bullet).
+$$
+\end{lemma}
+
+\begin{proof}
+Recall that the differential of the total complex is given by
+$\text{d}_1^{p, q} + (-1)^p \text{d}_2^{p, q}$ on $A^p \otimes_R B^q$.
+And this is exactly the same as the rule for the differential
+on $A \otimes_R B$ in
+Definition \ref{definition-tensor-product}.
+\end{proof}
+
+
+
+
+
+
+\section{Differential graded modules}
+\label{section-modules}
+
+\noindent
+Our default in this chapter is right modules;
+we discuss left modules in Section \ref{section-left-modules}.
+
+\begin{definition}
+\label{definition-dgm}
+Let $R$ be a ring.
+Let $(A, \text{d})$ be a differential graded algebra over $R$.
+A (right) {\it differential graded module} $M$ over $A$ is a right $A$-module
+$M$ which has a grading $M = \bigoplus M^n$ and a differential $\text{d}$
+such that $M^n A^m \subset M^{n + m}$, such that
+$\text{d}(M^n) \subset M^{n + 1}$, and such that
+$$
+\text{d}(ma) = \text{d}(m)a + (-1)^n m\text{d}(a)
+$$
+for $a \in A$ and $m \in M^n$. A
+{\it homomorphism of differential graded modules} $f : M \to N$
+is an $A$-module map compatible with gradings and differentials.
+The category of (right) differential graded $A$-modules is denoted
+$\text{Mod}_{(A, \text{d})}$.
+\end{definition}
+
+\noindent
+Note that we can think of $M$ as a cochain complex $M^\bullet$
+of (right) $R$-modules. Namely, for $r \in R$ we have $\text{d}(r) = 0$
+and $r$ maps to a degree $0$ element of $A$, hence
+$\text{d}(mr) = \text{d}(m)r$.
+
+\medskip\noindent
+The Leibniz rule relating differentials and multiplication on a differential
+graded $R$-module $M$ over a differential graded $R$-algebra $A$
+exactly means that the multiplication map defines a map of cochain complexes
+$$
+\text{Tot}(M^\bullet \otimes_R A^\bullet) \to M^\bullet
+$$
+Here $A^\bullet$ and $M^\bullet$ denote the underlying cochain complexes
+of $A$ and $M$.
+
+\begin{lemma}
+\label{lemma-dgm-abelian}
+Let $(A, d)$ be a differential graded algebra. The category
+$\text{Mod}_{(A, \text{d})}$ is abelian and has arbitrary limits and colimits.
+\end{lemma}
+
+\begin{proof}
+Kernels and cokernels commute with taking underlying $A$-modules.
+Similarly for direct sums and colimits. In other words, these operations
+in $\text{Mod}_{(A, \text{d})}$ commute with the forgetful functor to the
+category of $A$-modules. This is not the case for products and limits.
+Namely, if $N_i$, $i \in I$ is a family of
+differential graded $A$-modules, then the product $\prod N_i$ in
+$\text{Mod}_{(A, \text{d})}$ is given by setting $(\prod N_i)^n = \prod N_i^n$
+and $\prod N_i = \bigoplus_n (\prod N_i)^n$. Thus we see that the product
+does commute with the forgetful functor to the category of graded $A$-modules.
+A category with products and equalizers has limits, see
+Categories, Lemma \ref{categories-lemma-limits-products-equalizers}.
+\end{proof}
+
+\noindent
+Thus, if $(A, \text{d})$ is a differential graded
+algebra over $R$, then there is an exact functor
+$$
+\text{Mod}_{(A, \text{d})} \longrightarrow \text{Comp}(R)
+$$
+of abelian categories. For a differential graded module $M$ the
+cohomology groups $H^n(M)$ are defined as the cohomology of the
+corresponding complex of $R$-modules. Therefore, a short exact
+sequence $0 \to K \to L \to M \to 0$ of differential graded modules
+gives rise to a long exact sequence
+\begin{equation}
+\label{equation-les}
+H^n(K) \to H^n(L) \to H^n(M) \to H^{n + 1}(K)
+\end{equation}
+of cohomology modules, see
+Homology, Lemma \ref{homology-lemma-long-exact-sequence-cochain}.
+
+\medskip\noindent
+Moreover, from now on we borrow all the terminology used for
+complexes of modules. For example, we say that a differential
+graded $A$-module $M$ is {\it acyclic} if $H^k(M) = 0$ for
+all $k \in \mathbf{Z}$. We say that a homomorphism $M \to N$
+of differential graded $A$-modules is a {\it quasi-isomorphism}
+if it induces isomorphisms $H^k(M) \to H^k(N)$ for all $k \in \mathbf{Z}$.
+And so on and so forth.
+
+\begin{definition}
+\label{definition-shift}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $M$ be a differential graded module whose underlying complex
+of $R$-modules is $M^\bullet$. For any $k \in \mathbf{Z}$
+we define the {\it $k$-shifted module} $M[k]$ as follows
+\begin{enumerate}
+\item the underlying complex of $R$-modules of $M[k]$ is $M^\bullet[k]$,
+i.e., we have $M[k]^n = M^{n + k}$ and
+$\text{d}_{M[k]} = (-1)^k\text{d}_M$ and
+\item as $A$-module the multiplication
+$$
+(M[k])^n \times A^m \longrightarrow (M[k])^{n + m}
+$$
+is equal to the given multiplication $M^{n + k} \times A^m \to M^{n + k + m}$.
+\end{enumerate}
+For a morphism $f : M \to N$ of differential graded $A$-modules
+we let $f[k] : M[k] \to N[k]$ be the map equal to $f$ on underlying
+$A$-modules. This defines a functor
+$[k] : \text{Mod}_{(A, \text{d})} \to \text{Mod}_{(A, \text{d})}$.
+\end{definition}
+
+\noindent
+Let us check that with this choice the Leibniz rule is satisfied.
+Let $x \in M[k]^n = M^{n + k}$ and $a \in A^m$ and denoting
+$\cdot_{M[k]}$ the product in $M[k]$ then we see
+\begin{align*}
+\text{d}_{M[k]}(x \cdot_{M[k]} a)
+& =
+(-1)^k \text{d}_M(xa) \\
+& =
+(-1)^k \text{d}_M(x) a + (-1)^{k + n + k} x \text{d}(a) \\
+& =
+\text{d}_{M[k]}(x) a + (-1)^n x \text{d}(a) \\
+& =
+\text{d}_{M[k]}(x) \cdot_{M[k]} a + (-1)^n x \cdot_{M[k]} \text{d}(a)
+\end{align*}
+This is what we want as $x$ has degree $n$ as a homogeneous element of $M[k]$.
+We also observe that with these choices we may think of
+the multiplication map as the map of complexes
+$$
+\text{Tot}(M^\bullet[k] \otimes _R A^\bullet) \to
+\text{Tot}(M^\bullet \otimes _R A^\bullet)[k] \to
+M^\bullet[k]
+$$
+where the first arrow is
+More on Algebra, Section \ref{more-algebra-section-sign-rules}
+(\ref{more-algebra-item-shift-tensor}) which in this
+case does not involve a sign. (In fact, we could have deduced
+that the Leibniz rule holds from this observation.)
+
+\medskip\noindent
+The remarks in Homology, Section \ref{homology-section-homotopy-shift} apply.
+In particular, we will identify the cohomology groups of all shifts
+$M[k]$ without the intervention of signs.
+
+\medskip\noindent
+At this point we have enough structure to talk about {\it triangles},
+see Derived Categories, Definition \ref{derived-definition-triangle}.
+In fact, our next goal is to develop enough theory to be able to
+state and prove that the homotopy category of differential graded
+modules is a triangulated category. First we define the homotopy category.
+
+
+
+
+
+
+\section{The homotopy category}
+\label{section-homotopy}
+
+\noindent
+Our homotopies take into account the $A$-module structure and the
+grading, but not the differential (of course).
+
+\begin{definition}
+\label{definition-homotopy}
+Let $(A, \text{d})$ be a differential graded algebra. Let
+$f, g : M \to N$ be homomorphisms of differential graded $A$-modules.
+A {\it homotopy between $f$ and $g$} is an $A$-module map $h : M \to N$
+such that
+\begin{enumerate}
+\item $h(M^n) \subset N^{n - 1}$ for all $n$, and
+\item $f(x) - g(x) = \text{d}_N(h(x)) + h(\text{d}_M(x))$ for
+all $x \in M$.
+\end{enumerate}
+If a homotopy exists, then we say $f$ and $g$ are {\it homotopic}.
+\end{definition}
+
+\noindent
+Thus $h$ is compatible with the $A$-module structure and the grading
+but not with the differential. If $f = g$ and $h$ is a homotopy
+as in the definition, then $h$ defines a morphism $h : M \to N[-1]$
+in $\text{Mod}_{(A, \text{d})}$.
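+
+\medskip\noindent
+Indeed, if $f = g$ the homotopy condition reads
+$\text{d}_N \circ h = -h \circ \text{d}_M$. Since
+$\text{d}_{N[-1]} = -\text{d}_N$ by
+Definition \ref{definition-shift} this gives
+$\text{d}_{N[-1]} \circ h = h \circ \text{d}_M$, and
+$h(M^n) \subset N^{n - 1} = N[-1]^n$, so $h$ is a homomorphism
+of differential graded $A$-modules $M \to N[-1]$.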
+
+\begin{lemma}
+\label{lemma-compose-homotopy}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $f, g : L \to M$ be homomorphisms of differential graded $A$-modules.
+Suppose given further homomorphisms $a : K \to L$, and $c : M \to N$.
+If $h : L \to M$ is an $A$-module map which defines a homotopy between
+$f$ and $g$, then $c \circ h \circ a$ defines a homotopy between
+$c \circ f \circ a$ and $c \circ g \circ a$.
+\end{lemma}
+
+\begin{proof}
+Immediate from Homology, Lemma \ref{homology-lemma-compose-homotopy-cochain}.
+\end{proof}
+
+\noindent
+This lemma allows us to define the homotopy category as follows.
+
+\begin{definition}
+\label{definition-complexes-notation}
+Let $(A, \text{d})$ be a differential graded algebra.
+The {\it homotopy category}, denoted $K(\text{Mod}_{(A, \text{d})})$, is
+the category whose objects are the objects of
+$\text{Mod}_{(A, \text{d})}$ and whose morphisms are homotopy classes
+of homomorphisms of differential graded $A$-modules.
+\end{definition}
+
+\noindent
+The notation $K(\text{Mod}_{(A, \text{d})})$ is not standard but at least is
+consistent with the use of $K(-)$ in other places of the Stacks project.
+
+\begin{lemma}
+\label{lemma-homotopy-direct-sums}
+Let $(A, \text{d})$ be a differential graded algebra.
+The homotopy category $K(\text{Mod}_{(A, \text{d})})$
+has direct sums and products.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Just use the direct sums and products as in
+Lemma \ref{lemma-dgm-abelian}. This works because we saw that
+these functors commute with the forgetful functor to the category
+of graded $A$-modules and because $\prod$ is an exact functor
+on the category of families of abelian groups.
+\end{proof}
+
+
+
+
+
+
+
+\section{Cones}
+\label{section-cones}
+
+\noindent
+We introduce cones for the category of differential graded modules.
+
+\begin{definition}
+\label{definition-cone}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $f : K \to L$ be a homomorphism of differential graded $A$-modules.
+The {\it cone} of $f$ is the differential graded $A$-module
+$C(f)$ given by $C(f) = L \oplus K$ with grading
+$C(f)^n = L^n \oplus K^{n + 1}$ and
+differential
+$$
+d_{C(f)} =
+\left(
+\begin{matrix}
+\text{d}_L & f \\
+0 & -\text{d}_K
+\end{matrix}
+\right)
+$$
+It comes equipped with canonical morphisms of complexes $i : L \to C(f)$
+and $p : C(f) \to K[1]$ induced by the obvious maps $L \to C(f)$
+and $C(f) \to K$.
+\end{definition}
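+
+\medskip\noindent
+Let us check that $\text{d}_{C(f)}$ squares to zero. In matrix notation
+$$
+\text{d}_{C(f)}^2 =
+\left(
+\begin{matrix}
+\text{d}_L & f \\
+0 & -\text{d}_K
+\end{matrix}
+\right)
+\left(
+\begin{matrix}
+\text{d}_L & f \\
+0 & -\text{d}_K
+\end{matrix}
+\right)
+=
+\left(
+\begin{matrix}
+\text{d}_L^2 & \text{d}_L \circ f - f \circ \text{d}_K \\
+0 & \text{d}_K^2
+\end{matrix}
+\right)
+= 0
+$$
+because $f$ is compatible with differentials. A similar direct check
+shows that $i$ and $p$ are homomorphisms of differential graded
+$A$-modules, where for $p$ one uses that the differential on $K[1]$
+is $-\text{d}_K$.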
+
+\noindent
+The formation of the cone triangle is functorial in the following sense.
+
+\begin{lemma}
+\label{lemma-functorial-cone}
+Let $(A, \text{d})$ be a differential graded algebra.
+Suppose that
+$$
+\xymatrix{
+K_1 \ar[r]_{f_1} \ar[d]_a & L_1 \ar[d]^b \\
+K_2 \ar[r]^{f_2} & L_2
+}
+$$
+is a diagram of homomorphisms of differential graded $A$-modules which is
+commutative up to homotopy.
+Then there exists a morphism $c : C(f_1) \to C(f_2)$ which gives rise to
+a morphism of triangles
+$$
+(a, b, c) : (K_1, L_1, C(f_1), f_1, i_1, p_1) \to
+(K_2, L_2, C(f_2), f_2, i_2, p_2)
+$$
+in $K(\text{Mod}_{(A, \text{d})})$.
+\end{lemma}
+
+\begin{proof}
+Let $h : K_1 \to L_2$ be a homotopy between $f_2 \circ a$ and $b \circ f_1$.
+Define $c$ by the matrix
+$$
+c =
+\left(
+\begin{matrix}
+b & h \\
+0 & a
+\end{matrix}
+\right) :
+L_1 \oplus K_1 \to L_2 \oplus K_2
+$$
+A matrix computation shows that $c$ is a morphism of differential
+graded modules. It is trivial that $c \circ i_1 = i_2 \circ b$, and it is
+trivial also to check that $p_2 \circ c = a \circ p_1$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Admissible short exact sequences}
+\label{section-admissible}
+
+\noindent
+An admissible short exact sequence is the analogue of termwise split exact
+sequences in the setting of differential graded modules.
+
+\begin{definition}
+\label{definition-admissible-ses}
+Let $(A, \text{d})$ be a differential graded algebra.
+\begin{enumerate}
+\item A homomorphism $K \to L$ of differential graded $A$-modules
+is an {\it admissible monomorphism} if there exists a graded $A$-module
+map $L \to K$ which is left inverse to $K \to L$.
+\item A homomorphism $L \to M$ of differential graded $A$-modules
+is an {\it admissible epimorphism} if there exists a graded $A$-module
+map $M \to L$ which is right inverse to $L \to M$.
+\item A short exact sequence $0 \to K \to L \to M \to 0$ of differential
+graded $A$-modules is an {\it admissible short exact sequence}
+if it is split as a sequence of graded $A$-modules.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Thus the splittings are compatible with all the data except for
+the differentials. Given an admissible short exact sequence we
+obtain a triangle; this is the reason that we require our splittings
+to be compatible with the $A$-module structure.
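+
+\medskip\noindent
+Not every short exact sequence of differential graded modules is
+admissible. For example, take $A = \mathbf{Z}$ placed in degree $0$
+with zero differential. Then
+$0 \to \mathbf{Z} \xrightarrow{2} \mathbf{Z} \to
+\mathbf{Z}/2\mathbf{Z} \to 0$, with all modules in degree $0$,
+is a short exact sequence of differential graded $A$-modules which
+is not split as a sequence of graded $A$-modules, hence not admissible.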
+
+\begin{lemma}
+\label{lemma-admissible-ses}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $0 \to K \to L \to M \to 0$ be an admissible short exact sequence
+of differential graded $A$-modules. Let $s : M \to L$ and $\pi : L \to K$
+be splittings such that $\Ker(\pi) = \Im(s)$.
+Then we obtain a morphism
+$$
+\delta = \pi \circ \text{d}_L \circ s : M \to K[1]
+$$
+of $\text{Mod}_{(A, \text{d})}$ which induces the boundary maps
+in the long exact sequence of cohomology (\ref{equation-les}).
+\end{lemma}
+
+\begin{proof}
+The map $\pi \circ \text{d}_L \circ s$ is compatible with the $A$-module
+structure and the gradings by construction. It is compatible with
+differentials by Homology, Lemma
+\ref{homology-lemma-ses-termwise-split-cochain}.
+Let $R$ be the ring over which $A$ is a differential graded algebra.
+The equality of maps is a statement about $R$-modules. Hence this
+follows from Homology, Lemmas
+\ref{homology-lemma-ses-termwise-split-cochain} and
+\ref{homology-lemma-ses-termwise-split-long-cochain}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-make-commute-map}
+Let $(A, \text{d})$ be a differential graded algebra. Let
+$$
+\xymatrix{
+K \ar[r]_f \ar[d]_a & L \ar[d]^b \\
+M \ar[r]^g & N
+}
+$$
+be a diagram of homomorphisms of differential graded $A$-modules
+commuting up to homotopy.
+\begin{enumerate}
+\item If $f$ is an admissible monomorphism, then $b$ is homotopic to a
+homomorphism which makes the diagram commute.
+\item If $g$ is an admissible epimorphism, then $a$ is homotopic to a
+morphism which makes the diagram commute.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $h : K \to N$ be a homotopy between $bf$ and $ga$, i.e.,
+$bf - ga = \text{d}h + h\text{d}$. Suppose that $\pi : L \to K$
+is a graded $A$-module map left inverse to $f$. Take
+$b' = b - \text{d}h\pi - h\pi \text{d}$.
+Suppose $s : N \to M$ is a graded $A$-module map right inverse to $g$.
+Take $a' = a + \text{d}sh + sh\text{d}$.
+Computations omitted.
+\end{proof}
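+
+\medskip\noindent
+For the reader's convenience, here is the computation in case (1);
+case (2) is similar. Using $\pi \circ f = \text{id}$ and
+$\text{d} \circ f = f \circ \text{d}$ we get
+$$
+b' \circ f
+= b \circ f - \text{d}h\pi f - h\pi \text{d}f
+= b \circ f - \text{d}h - h\pi f\text{d}
+= b \circ f - (\text{d}h + h\text{d})
+= g \circ a
+$$
+and $b - b' = \text{d}(h\pi) + (h\pi)\text{d}$ shows that $b'$ is
+homotopic to $b$ via the homotopy $h\pi$.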
+
+\begin{lemma}
+\label{lemma-make-injective}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $\alpha : K \to L$ be a homomorphism of differential graded
+$A$-modules. There exists a factorization
+$$
+\xymatrix{
+K \ar[r]^{\tilde \alpha} \ar@/_1pc/[rr]_\alpha &
+\tilde L \ar[r]^\pi & L
+}
+$$
+in $\text{Mod}_{(A, \text{d})}$ such that
+\begin{enumerate}
+\item $\tilde \alpha$ is an admissible monomorphism (see
+Definition \ref{definition-admissible-ses}),
+\item there is a morphism $s : L \to \tilde L$
+such that $\pi \circ s = \text{id}_L$ and such that
+$s \circ \pi$ is homotopic to $\text{id}_{\tilde L}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The proof is identical to the proof of
+Derived Categories, Lemma \ref{derived-lemma-make-injective}.
+Namely, we set $\tilde L = L \oplus C(1_K)$ and we use elementary
+properties of the cone construction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sequence-maps-split}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $L_1 \to L_2 \to \ldots \to L_n$
+be a sequence of composable homomorphisms of
+differential graded $A$-modules.
+There exists a commutative diagram
+$$
+\xymatrix{
+L_1 \ar[r] &
+L_2 \ar[r] &
+\ldots \ar[r] &
+L_n \\
+M_1 \ar[r] \ar[u] &
+M_2 \ar[r] \ar[u] &
+\ldots \ar[r] &
+M_n \ar[u]
+}
+$$
+in $\text{Mod}_{(A, \text{d})}$ such that each $M_i \to M_{i + 1}$
+is an admissible monomorphism and each $M_i \to L_i$
+is a homotopy equivalence.
+\end{lemma}
+
+\begin{proof}
+The case $n = 1$ is without content.
+Lemma \ref{lemma-make-injective} is the case $n = 2$.
+Suppose we have constructed the diagram
+except for $M_n$. Apply Lemma \ref{lemma-make-injective} to
+the composition $M_{n - 1} \to L_{n - 1} \to L_n$.
+The result is a factorization $M_{n - 1} \to M_n \to L_n$
+as desired.
+\end{proof}
+
+
+
+\begin{lemma}
+\label{lemma-nilpotent}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $0 \to K_i \to L_i \to M_i \to 0$, $i = 1, 2, 3$
+be admissible short exact sequences of differential graded $A$-modules.
+Let $b : L_1 \to L_2$ and $b' : L_2 \to L_3$
+be homomorphisms of differential graded modules such that
+$$
+\vcenter{
+\xymatrix{
+K_1 \ar[d]_0 \ar[r] &
+L_1 \ar[r] \ar[d]_b &
+M_1 \ar[d]_0 \\
+K_2 \ar[r] & L_2 \ar[r] & M_2
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+K_2 \ar[d]^0 \ar[r] &
+L_2 \ar[r] \ar[d]^{b'} &
+M_2 \ar[d]^0 \\
+K_3 \ar[r] & L_3 \ar[r] & M_3
+}
+}
+$$
+commute up to homotopy. Then $b' \circ b$ is homotopic to $0$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-make-commute-map} we can replace $b$ and $b'$ by
+homotopic maps such that the right square of the left diagram commutes
+and the left square of the right diagram commutes. In other words, we have
+$\Im(b) \subset \Im(K_2 \to L_2)$ and
+$\Ker(b') \supset \Im(K_2 \to L_2)$.
+Then $b' \circ b = 0$ as a map of modules.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Distinguished triangles}
+\label{section-distinguished}
+
+\noindent
+The following lemma produces our distinguished triangles.
+
+\begin{lemma}
+\label{lemma-triangle-independent-splittings}
+Let $(A, \text{d})$ be a differential graded algebra. Let
+$0 \to K \to L \to M \to 0$ be an admissible short exact sequence
+of differential graded $A$-modules. The triangle
+\begin{equation}
+\label{equation-triangle-associated-to-admissible-ses}
+K \to L \to M \xrightarrow{\delta} K[1]
+\end{equation}
+with $\delta$ as in Lemma \ref{lemma-admissible-ses} is, up to canonical
+isomorphism in $K(\text{Mod}_{(A, \text{d})})$, independent of the choices
+made in Lemma \ref{lemma-admissible-ses}.
+\end{lemma}
+
+\begin{proof}
+Namely, let $(s', \pi')$ be a second choice of splittings as in
+Lemma \ref{lemma-admissible-ses}. Then we claim that $\delta$ and $\delta'$
+are homotopic. Namely, write $s' = s + \alpha \circ h$ and
+$\pi' = \pi + g \circ \beta$ for some unique homomorphisms
+of graded $A$-modules $h : M \to K$ and $g : M \to K$ of degree $0$.
+Then $g = -h$ and $g$ is a homotopy between $\delta$ and $\delta'$.
+The computations are done in the proof of
+Homology, Lemma \ref{homology-lemma-ses-termwise-split-homotopy-cochain}.
+\end{proof}
+
+\begin{definition}
+\label{definition-distinguished-triangle}
+Let $(A, \text{d})$ be a differential graded algebra.
+\begin{enumerate}
+\item If $0 \to K \to L \to M \to 0$ is an admissible short exact sequence
+of differential graded $A$-modules, then the {\it triangle associated
+to $0 \to K \to L \to M \to 0$} is the triangle
+(\ref{equation-triangle-associated-to-admissible-ses})
+of $K(\text{Mod}_{(A, \text{d})})$.
+\item A triangle of $K(\text{Mod}_{(A, \text{d})})$ is called a
+{\it distinguished triangle} if it is isomorphic to a triangle
+associated to an admissible short exact sequence
+of differential graded $A$-modules.
+\end{enumerate}
+\end{definition}
+
+
+
+
+
+
+
+
+
+\section{Cones and distinguished triangles}
+\label{section-cones-and-triangles}
+
+\noindent
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $f : K \to L$ be a homomorphism of differential graded $A$-modules.
+Then $(K, L, C(f), f, i, p)$ forms a triangle:
+$$
+K \to L \to C(f) \to K[1]
+$$
+in $\text{Mod}_{(A, \text{d})}$ and hence in $K(\text{Mod}_{(A, \text{d})})$.
+Cones are {\bf not} distinguished triangles in general, but the difference
+is a sign or a rotation (your choice). Here are two precise statements.
+
+\begin{lemma}
+\label{lemma-rotate-cone}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $f : K \to L$ be a homomorphism of differential graded modules.
+The triangle $(L, C(f), K[1], i, p, f[1])$ is
+the triangle associated to the admissible short exact sequence
+$$
+0 \to L \to C(f) \to K[1] \to 0
+$$
+coming from the definition of the cone of $f$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the definitions.
+\end{proof}
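+
+\medskip\noindent
+In more detail: the obvious splittings are $s : K[1] \to C(f)$,
+$x \mapsto (0, x)$ and the projection $\pi : C(f) \to L$, which
+satisfy $\Ker(\pi) = \Im(s)$. Hence the map $\delta$ of
+Lemma \ref{lemma-admissible-ses} is given by
+$$
+\delta(x) = \pi\left(\text{d}_{C(f)}(0, x)\right)
+= \pi\left(f(x), -\text{d}_K(x)\right) = f(x)
+$$
+that is, $\delta = f[1]$ as claimed.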
+
+\begin{lemma}
+\label{lemma-rotate-triangle}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $\alpha : K \to L$ and $\beta : L \to M$
+define an admissible short exact sequence
+$$
+0 \to K \to L \to M \to 0
+$$
+of differential graded $A$-modules.
+Let $(K, L, M, \alpha, \beta, \delta)$
+be the associated triangle. Then the triangles
+$$
+(M[-1], K, L, \delta[-1], \alpha, \beta)
+\quad\text{and}\quad
+(M[-1], K, C(\delta[-1]), \delta[-1], i, p)
+$$
+are isomorphic.
+\end{lemma}
+
+\begin{proof}
+Using a choice of splittings we write $L = K \oplus M$ and we identify
+$\alpha$ and $\beta$ with the natural inclusion and projection maps.
+By construction of $\delta$ we have
+$$
+d_L =
+\left(
+\begin{matrix}
+d_K & \delta \\
+0 & d_M
+\end{matrix}
+\right)
+$$
+On the other hand the cone of $\delta[-1] : M[-1] \to K$
+is given as $C(\delta[-1]) = K \oplus M$ with differential identical
+with the matrix above! Whence the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-third-isomorphism}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $f_1 : K_1 \to L_1$ and $f_2 : K_2 \to L_2$ be homomorphisms of
+differential graded $A$-modules. Let
+$$
+(a, b, c) :
+(K_1, L_1, C(f_1), f_1, i_1, p_1)
+\longrightarrow
+(K_2, L_2, C(f_2), f_2, i_2, p_2)
+$$
+be any morphism of triangles of $K(\text{Mod}_{(A, \text{d})})$.
+If $a$ and $b$ are homotopy equivalences then so is $c$.
+\end{lemma}
+
+\begin{proof}
+Let $a^{-1} : K_2 \to K_1$ be a homomorphism of differential graded $A$-modules
+which is inverse to $a$ in $K(\text{Mod}_{(A, \text{d})})$.
+Let $b^{-1} : L_2 \to L_1$ be a homomorphism of differential graded $A$-modules
+which is inverse to $b$ in $K(\text{Mod}_{(A, \text{d})})$.
+Let $c' : C(f_2) \to C(f_1)$ be the morphism from
+Lemma \ref{lemma-functorial-cone} applied to
+$f_1 \circ a^{-1} = b^{-1} \circ f_2$.
+If we can show that $c \circ c'$ and $c' \circ c$ are isomorphisms in
+$K(\text{Mod}_{(A, \text{d})})$
+then we win. Hence it suffices to prove the following: Given
+a morphism of triangles
+$(1, 1, c) : (K, L, C(f), f, i, p) \to (K, L, C(f), f, i, p)$
+in $K(\text{Mod}_{(A, \text{d})})$ the morphism $c$ is an isomorphism
+in $K(\text{Mod}_{(A, \text{d})})$.
+By assumption the two squares in the diagram
+$$
+\xymatrix{
+L \ar[r] \ar[d]_1 &
+C(f) \ar[r] \ar[d]_c &
+K[1] \ar[d]_1 \\
+L \ar[r] &
+C(f) \ar[r] &
+K[1]
+}
+$$
+commute up to homotopy. By construction of $C(f)$ the rows
+form admissible short exact sequences. Thus we see that
+$(c - 1)^2 = 0$ in $K(\text{Mod}_{(A, \text{d})})$ by
+Lemma \ref{lemma-nilpotent}.
+Hence $c^2 = 2c - 1$, so $c(2 - c) = (2 - c)c = 1$ and $c$ is an
+isomorphism in $K(\text{Mod}_{(A, \text{d})})$ with inverse $2 - c$.
+\end{proof}
+
+\noindent
+The following lemma shows that the collection of triangles of the homotopy
+category given by cones and the distinguished triangles are the same
+up to isomorphisms, at least up to sign!
+
+\begin{lemma}
+\label{lemma-the-same-up-to-isomorphisms}
+Let $(A, \text{d})$ be a differential graded algebra.
+\begin{enumerate}
+\item Given an admissible short exact sequence
+$0 \to K \xrightarrow{\alpha} L \to M \to 0$
+of differential graded $A$-modules there exists a homotopy equivalence
+$C(\alpha) \to M$ such that the diagram
+$$
+\xymatrix{
+K \ar[r] \ar[d] & L \ar[d] \ar[r] &
+C(\alpha) \ar[r]_{-p} \ar[d] & K[1] \ar[d] \\
+K \ar[r]^\alpha & L \ar[r]^\beta &
+M \ar[r]^\delta & K[1]
+}
+$$
+defines an isomorphism of triangles in $K(\text{Mod}_{(A, \text{d})})$.
+\item Given a morphism of complexes $f : K \to L$
+there exists an isomorphism of triangles
+$$
+\xymatrix{
+K \ar[r] \ar[d] & \tilde L \ar[d] \ar[r] &
+M \ar[r]_{\delta} \ar[d] & K[1] \ar[d] \\
+K \ar[r] & L \ar[r] &
+C(f) \ar[r]^{-p} & K[1]
+}
+$$
+where the upper triangle is the triangle associated to an
+admissible short exact sequence $K \to \tilde L \to M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We have $C(\alpha) = L \oplus K$ and we simply define
+$C(\alpha) \to M$ via the projection onto $L$ followed by $\beta$.
+This defines a morphism of differential graded modules because the
+compositions $K^{n + 1} \to L^{n + 1} \to M^{n + 1}$ are zero.
+Choose splittings $s : M \to L$ and $\pi : L \to K$ with
+$\Ker(\pi) = \Im(s)$ and set
+$\delta = \pi \circ \text{d}_L \circ s$ as usual.
+To get a homotopy inverse we take
+$M \to C(\alpha)$ given by $(s , -\delta)$. This is compatible with
+differentials because $\delta^n$ can be characterized as the
+unique map $M^n \to K^{n + 1}$ such that
+$\text{d} \circ s^n - s^{n + 1} \circ \text{d} = \alpha \circ \delta^n$,
+see proof of
+Homology, Lemma \ref{homology-lemma-ses-termwise-split-cochain}.
The composition $M \to C(\alpha) \to M$ is the identity.
The composition $C(\alpha) \to M \to C(\alpha)$ is equal to the morphism
+$$
+\left(
+\begin{matrix}
+s \circ \beta & 0 \\
+-\delta \circ \beta & 0
+\end{matrix}
+\right)
+$$
+To see that this is homotopic to the identity map
+use the homotopy $h : C(\alpha) \to C(\alpha)$
+given by the matrix
+$$
+\left(
+\begin{matrix}
+0 & 0 \\
+\pi & 0
+\end{matrix}
+\right) :
+C(\alpha) = L \oplus K
+\to
+L \oplus K = C(\alpha)
+$$
+It is trivial to verify that
+$$
+\left(
+\begin{matrix}
+1 & 0 \\
+0 & 1
+\end{matrix}
+\right)
+-
+\left(
+\begin{matrix}
+s \\
+-\delta
+\end{matrix}
+\right)
+\left(
+\begin{matrix}
+\beta & 0
+\end{matrix}
+\right)
+=
+\left(
+\begin{matrix}
+\text{d} & \alpha \\
+0 & -\text{d}
+\end{matrix}
+\right)
+\left(
+\begin{matrix}
+0 & 0 \\
+\pi & 0
+\end{matrix}
+\right)
++
+\left(
+\begin{matrix}
+0 & 0 \\
+\pi & 0
+\end{matrix}
+\right)
+\left(
+\begin{matrix}
+\text{d} & \alpha \\
+0 & -\text{d}
+\end{matrix}
+\right)
+$$
+To finish the proof of (1) we have to show that the morphisms
+$-p : C(\alpha) \to K[1]$ (see
+Definition \ref{definition-cone})
+and $C(\alpha) \to M \to K[1]$ agree up
+to homotopy. This is clear from the above. Namely, we can use the homotopy
+inverse $(s, -\delta) : M \to C(\alpha)$
+and check instead that the two maps
+$M \to K[1]$ agree. And note that
+$p \circ (s, -\delta) = -\delta$ as desired.
+
+\medskip\noindent
+Proof of (2). We let $\tilde f : K \to \tilde L$,
+$s : L \to \tilde L$
and $\pi : \tilde L \to L$ be as in
+Lemma \ref{lemma-make-injective}. By
+Lemmas \ref{lemma-functorial-cone} and \ref{lemma-third-isomorphism}
the triangles $(K, L, C(f), f, i, p)$ and
$(K, \tilde L, C(\tilde f), \tilde f, \tilde i, \tilde p)$
+are isomorphic. Note that we can compose isomorphisms of
+triangles. Thus we may replace $L$ by
+$\tilde L$ and $f$ by $\tilde f$. In other words
+we may assume that $f$ is an admissible monomorphism.
+In this case the result follows from part (1).
+\end{proof}
+
+
+
+
+
+
+
+\section{The homotopy category is triangulated}
+\label{section-homotopy-triangulated}
+
+\noindent
+We first prove that it is pre-triangulated.
+
+\begin{lemma}
+\label{lemma-homotopy-category-pre-triangulated}
+Let $(A, \text{d})$ be a differential graded algebra.
+The homotopy category $K(\text{Mod}_{(A, \text{d})})$
+with its natural translation functors and distinguished triangles
+is a pre-triangulated category.
+\end{lemma}
+
+\begin{proof}
+Proof of TR1. By definition every triangle isomorphic to a distinguished
+one is distinguished. Also, any triangle $(K, K, 0, 1, 0, 0)$
+is distinguished since $0 \to K \to K \to 0 \to 0$ is
+an admissible short exact sequence. Finally, given any homomorphism
+$f : K \to L$ of differential graded $A$-modules the triangle
+$(K, L, C(f), f, i, -p)$ is distinguished by
+Lemma \ref{lemma-the-same-up-to-isomorphisms}.
+
+\medskip\noindent
+Proof of TR2. Let $(X, Y, Z, f, g, h)$ be a triangle.
+Assume $(Y, Z, X[1], g, h, -f[1])$ is distinguished.
+Then there exists an admissible short exact sequence
+$0 \to K \to L \to M \to 0$ such that the associated
+triangle $(K, L, M, \alpha, \beta, \delta)$
+is isomorphic to $(Y, Z, X[1], g, h, -f[1])$. Rotating back we see
+that $(X, Y, Z, f, g, h)$ is isomorphic to
+$(M[-1], K, L, -\delta[-1], \alpha, \beta)$.
+It follows from Lemma \ref{lemma-rotate-triangle} that the triangle
+$(M[-1], K, L, \delta[-1], \alpha, \beta)$
+is isomorphic to
+$(M[-1], K, C(\delta[-1]), \delta[-1], i, p)$.
+Precomposing the previous isomorphism of triangles with $-1$ on $Y$
+it follows that $(X, Y, Z, f, g, h)$ is isomorphic to
+$(M[-1], K, C(\delta[-1]), \delta[-1], i, -p)$.
+Hence it is distinguished by
+Lemma \ref{lemma-the-same-up-to-isomorphisms}.
+On the other hand, suppose that $(X, Y, Z, f, g, h)$ is distinguished.
+By Lemma \ref{lemma-the-same-up-to-isomorphisms} this means that it is
+isomorphic to a triangle of the form
+$(K, L, C(f), f, i, -p)$ for some morphism $f$ of
+$\text{Mod}_{(A, \text{d})}$. Then the rotated triangle
+$(Y, Z, X[1], g, h, -f[1])$ is
+isomorphic to $(L, C(f), K[1], i, -p, -f[1])$ which is
+isomorphic to the triangle
+$(L, C(f), K[1], i, p, f[1])$.
+By Lemma \ref{lemma-rotate-cone} this triangle is distinguished.
+Hence $(Y, Z, X[1], g, h, -f[1])$ is distinguished as desired.
+
+\medskip\noindent
+Proof of TR3. Let $(X, Y, Z, f, g, h)$ and $(X', Y', Z', f', g', h')$
be distinguished triangles of $K(\text{Mod}_{(A, \text{d})})$ and let $a : X \to X'$
+and $b : Y \to Y'$ be morphisms such that $f' \circ a = b \circ f$. By
+Lemma \ref{lemma-the-same-up-to-isomorphisms} we may assume that
+$(X, Y, Z, f, g, h) = (X, Y, C(f), f, i, -p)$ and
+$(X', Y', Z', f', g', h') = (X', Y', C(f'), f', i', -p')$.
+At this point we simply apply Lemma \ref{lemma-functorial-cone}
+to the commutative diagram given by $f, f', a, b$.
+\end{proof}
+
+\noindent
+Before we prove TR4 in general we prove it in a special case.
+
+\begin{lemma}
+\label{lemma-two-split-injections}
+Let $(A, \text{d})$ be a differential graded algebra. Suppose that
+$\alpha : K \to L$ and $\beta : L \to M$ are admissible monomorphisms
+of differential graded $A$-modules. Then there exist distinguished triangles
+$(K, L, Q_1, \alpha, p_1, d_1)$, $(K, M, Q_2, \beta \circ \alpha, p_2, d_2)$
+and $(L, M, Q_3, \beta, p_3, d_3)$ for which TR4 holds.
+\end{lemma}
+
+\begin{proof}
+Say $\pi_1 : L \to K$ and $\pi_3 : M \to L$ are homomorphisms
+of graded $A$-modules which are left inverse to $\alpha$ and $\beta$.
+Then also $K \to M$ is an admissible monomorphism with left
+inverse $\pi_2 = \pi_1 \circ \pi_3$. Let us write $Q_1$, $Q_2$
+and $Q_3$ for the cokernels of $K \to L$, $K \to M$, and $L \to M$.
+Then we obtain identifications (as graded $A$-modules)
+$Q_1 = \Ker(\pi_1)$, $Q_3 = \Ker(\pi_3)$ and
+$Q_2 = \Ker(\pi_2)$. Then $L = K \oplus Q_1$ and
+$M = L \oplus Q_3$ as graded $A$-modules. This implies
+$M = K \oplus Q_1 \oplus Q_3$. Note that $\pi_2 = \pi_1 \circ \pi_3$
+is zero on both $Q_1$ and $Q_3$. Hence $Q_2 = Q_1 \oplus Q_3$.
+Consider the commutative diagram
+$$
+\begin{matrix}
+0 & \to & K & \to & L & \to & Q_1 & \to & 0 \\
+ & & \downarrow & & \downarrow & & \downarrow & \\
+0 & \to & K & \to & M & \to & Q_2 & \to & 0 \\
+ & & \downarrow & & \downarrow & & \downarrow & \\
+0 & \to & L & \to & M & \to & Q_3 & \to & 0
+\end{matrix}
+$$
+The rows of this diagram are admissible short exact sequences, and
+hence determine distinguished triangles by definition. Moreover
+downward arrows in the diagram above are compatible with the chosen
+splittings and hence define morphisms of triangles
+$$
+(K \to L \to Q_1 \to K[1])
+\longrightarrow
+(K \to M \to Q_2 \to K[1])
+$$
+and
+$$
+(K \to M \to Q_2 \to K[1])
+\longrightarrow
+(L \to M \to Q_3 \to L[1]).
+$$
Note that the splitting $Q_3 \to M$ of the bottom sequence in the
diagram provides a splitting for the split sequence
+$0 \to Q_1 \to Q_2 \to Q_3 \to 0$ upon composing with $M \to Q_2$.
+It follows easily from this that the morphism $\delta : Q_3 \to Q_1[1]$
+in the corresponding distinguished triangle
+$$
+(Q_1 \to Q_2 \to Q_3 \to Q_1[1])
+$$
+is equal to the composition $Q_3 \to L[1] \to Q_1[1]$.
+Hence we get a structure as in the conclusion of axiom TR4.
+\end{proof}
+
+\noindent
+Here is the final result.
+
+\begin{proposition}
+\label{proposition-homotopy-category-triangulated}
+Let $(A, \text{d})$ be a differential graded algebra. The homotopy category
+$K(\text{Mod}_{(A, \text{d})})$ of differential graded $A$-modules with its
+natural translation functors and distinguished triangles is a triangulated
+category.
+\end{proposition}
+
+\begin{proof}
+We know that $K(\text{Mod}_{(A, \text{d})})$ is a pre-triangulated category.
+Hence it suffices to prove TR4 and to prove it we can use
+Derived Categories, Lemma \ref{derived-lemma-easier-axiom-four}.
+Let $K \to L$ and $L \to M$ be composable morphisms of
+$K(\text{Mod}_{(A, \text{d})})$. By
+Lemma \ref{lemma-sequence-maps-split} we may assume that
+$K \to L$ and $L \to M$ are admissible monomorphisms.
+In this case the result follows from
+Lemma \ref{lemma-two-split-injections}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Left modules}
+\label{section-left-modules}
+
+\noindent
Everything we have said so far has an analogue in the setting
+of left differential graded modules, except that one has to
+take care with some sign rules.
+
+\medskip\noindent
+Let $(A, \text{d})$ be a differential graded $R$-algebra.
+Exactly analogous to right modules, we define a
+{\it left differential graded $A$-module} $M$ as a left
+$A$-module $M$ which has a grading $M = \bigoplus M^n$
+and a differential $\text{d}$, such that $A^n M^m \subset M^{n + m}$,
+such that $\text{d}(M^n) \subset M^{n + 1}$, and such that
+$$
+\text{d}(am) = \text{d}(a) m + (-1)^{\deg(a)}a \text{d}(m)
+$$
+for homogeneous elements $a \in A$ and $m \in M$. As before this
+Leibniz rule exactly signifies that the multiplication defines
+a map of complexes
+$$
+\text{Tot}(A^\bullet \otimes_R M^\bullet) \to M^\bullet
+$$
+Here $A^\bullet$ and $M^\bullet$ denote the complexes of $R$-modules
+underlying $A$ and $M$.
+
+\begin{definition}
+\label{definition-opposite-dga}
+Let $R$ be a ring. Let $(A, \text{d})$ be a differential graded algebra
+over $R$. The {\it opposite differential graded algebra} is the differential
+graded algebra $(A^{opp}, \text{d})$ over $R$ where $A^{opp} = A$
+as a graded $R$-module, $\text{d} = \text{d}$, and multiplication is
+given by
+$$
+a \cdot_{opp} b = (-1)^{\deg(a)\deg(b)} b a
+$$
+for homogeneous elements $a, b \in A$.
+\end{definition}
+
+\noindent
+This makes sense because
+\begin{align*}
+\text{d}(a \cdot_{opp} b)
+& =
+(-1)^{\deg(a)\deg(b)} \text{d}(b a) \\
+& =
+(-1)^{\deg(a)\deg(b)} \text{d}(b) a +
+(-1)^{\deg(a)\deg(b) + \deg(b)}b\text{d}(a) \\
+& =
+(-1)^{\deg(a)}a \cdot_{opp} \text{d}(b) + \text{d}(a) \cdot_{opp} b
+\end{align*}
+as desired. In terms of underlying complexes of $R$-modules
+this means that the diagram
+$$
+\xymatrix{
+\text{Tot}(A^\bullet \otimes_R A^\bullet)
+\ar[rrr]_-{\text{multiplication of }A^{opp}}
+\ar[d]_{\text{commutativity constraint}} & & &
+A^\bullet \ar[d]^{\text{id}} \\
+\text{Tot}(A^\bullet \otimes_R A^\bullet)
+\ar[rrr]^-{\text{multiplication of }A} & & &
+A^\bullet
+}
+$$
+commutes. Here the commutativity constraint on the symmetric monoidal
+category of complexes of $R$-modules is given in
+More on Algebra, Section \ref{more-algebra-section-sign-rules}.
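\medskip\noindent
Associativity of the multiplication $\cdot_{opp}$ can also be verified by a
direct sign computation: for homogeneous elements $a, b, c \in A$ we have
\begin{align*}
(a \cdot_{opp} b) \cdot_{opp} c
& =
(-1)^{\deg(a)\deg(b) + (\deg(a) + \deg(b))\deg(c)} c(ba) \\
a \cdot_{opp} (b \cdot_{opp} c)
& =
(-1)^{\deg(b)\deg(c) + \deg(a)(\deg(b) + \deg(c))} (cb)a
\end{align*}
and both signs are equal to
$(-1)^{\deg(a)\deg(b) + \deg(a)\deg(c) + \deg(b)\deg(c)}$
while $c(ba) = (cb)a$ by associativity of $A$.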
+
+\medskip\noindent
+Let $(A, \text{d})$ be a differential graded algebra over $R$.
+Let $M$ be a left differential graded $A$-module. We will denote
+$M^{opp}$ the module $M$ viewed as a right $A^{opp}$-module with
+multiplication $\cdot_{opp}$ defined by the rule
+$$
+m \cdot_{opp} a = (-1)^{\deg(a)\deg(m)} a m
+$$
+for $a$ and $m$ homogeneous. This is compatible with differentials
+because we could have used the diagram
+$$
+\xymatrix{
+\text{Tot}(M^\bullet \otimes_R A^\bullet)
+\ar[rrr]_-{\text{multiplication on }M^{opp}}
+\ar[d]_{\text{commutativity constraint}} & & &
+M^\bullet \ar[d]^{\text{id}} \\
+\text{Tot}(A^\bullet \otimes_R M^\bullet)
+\ar[rrr]^-{\text{multiplication on }M} & & &
+M^\bullet
+}
+$$
+to define the multiplication $\cdot_{opp}$ on $M^{opp}$.
+To see that it is an associative multiplication we compute for
+homogeneous elements $a, b \in A$ and $m \in M$ that
+\begin{align*}
+m \cdot_{opp} (a \cdot_{opp} b)
+& =
+(-1)^{\deg(a)\deg(b)} m \cdot_{opp} (ba) \\
+& =
+(-1)^{\deg(a)\deg(b) + \deg(ab)\deg(m)} bam \\
+& =
+(-1)^{\deg(a)\deg(b) + \deg(ab)\deg(m) + \deg(b)\deg(am)}
+(am) \cdot_{opp} b \\
+& =
+(-1)^{\deg(a)\deg(b) + \deg(ab)\deg(m) + \deg(b)\deg(am) + \deg(a)\deg(m)}
+(m \cdot_{opp} a) \cdot_{opp} b \\
+& =
+(m \cdot_{opp} a) \cdot_{opp} b
+\end{align*}
Of course, we could have shown this using the compatibility between
the associativity and commutativity constraints on the symmetric monoidal
+category of complexes of $R$-modules as well.
+
+\begin{lemma}
+\label{lemma-left-right}
+Let $(A, \text{d})$ be a differential graded $R$-algebra.
+The functor $M \mapsto M^{opp}$ from the category of
+left differential graded $A$-modules to the category of right
+differential graded $A^{opp}$-modules is an equivalence.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
Next, we come to shifts. Let $(A, \text{d})$ be a differential graded algebra.
+Let $M$ be a left differential graded $A$-module whose underlying complex
+of $R$-modules is denoted $M^\bullet$.
+For any $k \in \mathbf{Z}$ we define the {\it $k$-shifted module}
+$M[k]$ as follows
+\begin{enumerate}
+\item the underlying complex of $R$-modules of $M[k]$ is $M^\bullet[k]$
+\item as $A$-module the multiplication
+$$
+A^n \times (M[k])^m \longrightarrow (M[k])^{n + m}
+$$
+is equal to $(-1)^{nk}$ times the given multiplication
+$A^n \times M^{m + k} \to M^{n + m + k}$.
+\end{enumerate}
+Let us check that with this choice the Leibniz rule is satisfied.
+Let $a \in A^n$ and $x \in M[k]^m = M^{m + k}$ and denoting
+$\cdot_{M[k]}$ the product in $M[k]$ then we see
+\begin{align*}
+\text{d}_{M[k]}(a \cdot_{M[k]} x)
+& =
+(-1)^{k + nk} \text{d}_M(ax) \\
+& =
+(-1)^{k + nk} \text{d}(a) x + (-1)^{k + nk + n} a \text{d}_M(x) \\
+& =
+\text{d}(a) \cdot_{M[k]} x + (-1)^{nk + n} a \text{d}_{M[k]}(x) \\
+& =
+\text{d}(a) \cdot_{M[k]} x + (-1)^n a \cdot_{M[k]} \text{d}_{M[k]}(x)
+\end{align*}
+This is what we want as $a$ has degree $n$ as a homogeneous element of $A$.
+We also observe that with these choices we may think of
+the multiplication map as the map of complexes
+$$
+\text{Tot}(A^\bullet \otimes_R M^\bullet[k]) \to
+\text{Tot}(A^\bullet \otimes_R M^\bullet)[k] \to
+M^\bullet[k]
+$$
+where the first arrow is
+More on Algebra, Section \ref{more-algebra-section-sign-rules}
+(\ref{more-algebra-item-shift-tensor}) which in this
+case involves exactly the sign we chose above. (In fact, we could have deduced
that the Leibniz rule holds from this observation.)
+
+\medskip\noindent
+With the rule above we have canonical identifications
+$$
+(M[k])^{opp} = M^{opp}[k]
+$$
+of right differential graded $A^{opp}$-modules
+defined without the intervention of signs, in other words, the equivalence
+of Lemma \ref{lemma-left-right} is compatible with shift functors.
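\medskip\noindent
Indeed, let $a \in A^n$ be homogeneous and let $x \in (M[k])^m = M^{m + k}$.
In $(M[k])^{opp}$ the product is
$$
x \cdot_{opp} a = (-1)^{nm} a \cdot_{M[k]} x = (-1)^{nm + nk} ax
$$
while in $M^{opp}[k]$, where the shift of a right module involves no sign,
the product is $(-1)^{n(m + k)} ax = (-1)^{nm + nk} ax$ because $x$ has
degree $m + k$ as an element of $M$. The two module structures agree.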
+
+\medskip\noindent
+Our choice above necessitates the following definition.
+
+\begin{definition}
+\label{definition-shift-graded-module}
+Let $R$ be a ring. Let $A$ be a $\mathbf{Z}$-graded $R$-algebra.
+\begin{enumerate}
+\item Given a right graded $A$-module $M$ we define the
+{\it $k$th shifted $A$-module} $M[k]$ as the same as
+a right $A$-module but with grading $(M[k])^n = M^{n + k}$.
+\item Given a left graded $A$-module $M$ we define the
+{\it $k$th shifted $A$-module} $M[k]$ as the module
+with grading $(M[k])^n = M^{n + k}$ and multiplication
+$A^n \times (M[k])^m \to (M[k])^{n + m}$
+equal to $(-1)^{nk}$ times the given multiplication
+$A^n \times M^{m + k} \to M^{n + m + k}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Let $(A, \text{d})$ be a differential graded algebra. Let
+$f, g : M \to N$ be homomorphisms of left differential graded $A$-modules.
+A {\it homotopy between $f$ and $g$} is a graded $A$-module map
+$h : M \to N[-1]$ (observe the shift!) such that
+$$
+f(x) - g(x) = \text{d}_N(h(x)) + h(\text{d}_M(x))
+$$
+for all $x \in M$. If a homotopy exists, then we say $f$ and $g$ are
+{\it homotopic}. Thus $h$ is compatible with the $A$-module structure
+(with the shifted one on $N$) and the grading (with shifted grading on $N$)
+but not with the differential. If $f = g$ and $h$ is a homotopy, then
+$h$ defines a morphism $h : M \to N[-1]$ of left differential
+graded $A$-modules.
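\medskip\noindent
Indeed, if $f = g$ then the displayed formula reads
$\text{d}_N(h(x)) + h(\text{d}_M(x)) = 0$ for all $x$, in other words
$$
h \circ \text{d}_M = -\text{d}_N \circ h = \text{d}_{N[-1]} \circ h
$$
as the differential on the shifted complex $N[-1]$ is $-\text{d}_N$
with our sign conventions for shifts. Hence $h$ is also compatible
with differentials in this case.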
+
+\medskip\noindent
+With the rule above we find that $f, g : M \to N$ are homotopic if
+and only if the induced morphisms
+$f^{opp}, g^{opp} : M^{opp} \to N^{opp}$
+are homotopic as right differential graded $A^{opp}$-module homomorphisms
+(with the same homotopy).
+
+\medskip\noindent
+The homotopy category, cones, admissible short exact sequences,
+distinguished triangles are all defined in exactly the same manner
+as for right differential graded modules (and everything agrees
+on underlying complexes of $R$-modules with the constructions for
+complexes of $R$-modules). In this manner we obtain the analogue of
+Proposition \ref{proposition-homotopy-category-triangulated}
+for left modules as well, or we can deduce it by working
+with right modules over the opposite algebra.
+
+
+
+\section{Tensor product}
+\label{section-tensor-product}
+
+\noindent
+Let $R$ be a ring. Let $A$ be an $R$-algebra (see
+Section \ref{section-conventions}). Given a right $A$-module $M$
+and a left $A$-module $N$ there is a {\it tensor product}
+$$
+M \otimes_A N
+$$
+This tensor product is a module over $R$. As an $R$-module $M \otimes_A N$
+is generated by symbols $x \otimes y$ with $x \in M$ and $y \in N$ subject
+to the relations
+$$
+\begin{matrix}
+(x_1 + x_2) \otimes y - x_1 \otimes y - x_2 \otimes y, \\
+x \otimes (y_1 + y_2) - x \otimes y_1 - x \otimes y_2, \\
+xa \otimes y - x \otimes ay
+\end{matrix}
+$$
+for $a \in A$, $x, x_1, x_2 \in M$ and $y, y_1, y_2 \in N$.
We list some properties of the tensor product.
+
+\medskip\noindent
+In each variable the tensor product is right exact, in fact commutes
+with direct sums and arbitrary colimits.
+
+\medskip\noindent
+The tensor product $M \otimes_A N$ is the receptacle of the universal
+$A$-bilinear map $M \times N \to M \otimes_A N$, $(x, y) \mapsto x \otimes y$.
+In a formula
+$$
+\text{Bilinear}_A(M \times N, Q) = \Hom_R(M \otimes_A N, Q)
+$$
+for any $R$-module $Q$.
+
+\medskip\noindent
+If $A$ is a $\mathbf{Z}$-graded algebra and $M$, $N$ are graded
+$A$-modules then $M \otimes_A N$ is a graded $R$-module.
The $n$th graded piece $(M \otimes_A N)^n$ of $M \otimes_A N$
+is equal to
+$$
+\Coker\left(
+\bigoplus\nolimits_{r + t + s = n}
+M^r \otimes_R A^t \otimes_R N^s \to
+\bigoplus\nolimits_{p + q = n} M^p \otimes_R N^q
+\right)
+$$
+where the map sends $x \otimes a \otimes y$ to
+$x \otimes ay - xa \otimes y$ for
+$x \in M^r$, $y \in N^s$, and $a \in A^t$ with $r + s + t = n$.
+In this case the map $M \times N \to M \otimes_A N$ is $A$-bilinear
+and compatible with gradings and universal in the sense that
+$$
+\text{GradedBilinear}_A(M \times N, Q) =
+\Hom_{\text{graded }R\text{-modules}}(M \otimes_A N, Q)
+$$
+for any graded $R$-module $Q$ with an obvious notion of graded
bilinear map.
+
+\medskip\noindent
+If $(A, \text{d})$ is a differential graded algebra and
+$M$ and $N$ are left and right differential graded $A$-modules, then
+$M \otimes_A N$ is a differential graded $R$-module with differential
+$$
+\text{d}(x \otimes y) =
+\text{d}(x) \otimes y + (-1)^{\deg(x)}x \otimes \text{d}(y)
+$$
+for $x \in M$ and $y \in N$ homogeneous. In this case the map
+$M \times N \to M \otimes_A N$ is $A$-bilinear, compatible with gradings,
+and compatible with differentials and universal in the sense that
+$$
+\text{DifferentialGradedBilinear}_A(M \times N, Q) =
+\Hom_{\text{Comp}(R)}(M \otimes_A N, Q)
+$$
+for any differential graded $R$-module $Q$ with an obvious notion of
differential graded bilinear map.
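\medskip\noindent
To see that the differential above is well defined we have to check that it
is compatible with the relation $xa \otimes y = x \otimes ay$. For homogeneous
elements this follows from the Leibniz rules in $M$, $A$, and $N$:
\begin{align*}
\text{d}(xa \otimes y)
& =
\text{d}(x)a \otimes y + (-1)^{\deg(x)} x\text{d}(a) \otimes y +
(-1)^{\deg(x) + \deg(a)} xa \otimes \text{d}(y) \\
& =
\text{d}(x) \otimes ay + (-1)^{\deg(x)} x \otimes \text{d}(a)y +
(-1)^{\deg(x) + \deg(a)} x \otimes a\text{d}(y) \\
& =
\text{d}(x) \otimes ay + (-1)^{\deg(x)} x \otimes \text{d}(ay)
= \text{d}(x \otimes ay)
\end{align*}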
+
+
+
+
+
+
+\section{Hom complexes and differential graded modules}
+\label{section-hom-complexes}
+
+\noindent
+We urge the reader to skip this section.
+
+\medskip\noindent
+Let $R$ be a ring and let $M^\bullet$ be a complex of $R$-modules.
+Consider the complex of $R$-modules
+$$
+E^\bullet = \Hom^\bullet(M^\bullet, M^\bullet)
+$$
+introduced in
+More on Algebra, Section \ref{more-algebra-section-hom-complexes}.
+By More on Algebra, Lemma \ref{more-algebra-lemma-composition}
+there is a canonical composition law
+$$
+\text{Tot}(E^\bullet \otimes_R E^\bullet) \to E^\bullet
+$$
+which is a map of complexes. Thus we see that $E^\bullet$ with this
+multiplication is a differential graded $R$-algebra which we will
+denote $(E, \text{d})$. Moreover, viewing $M^\bullet$ as
+$\Hom^\bullet(R, M^\bullet)$ we see that composition defines a multiplication
+$$
+\text{Tot}(E^\bullet \otimes_R M^\bullet) \to M^\bullet
+$$
+which turns $M^\bullet$ into a {\bf left} differential graded $E$-module
+which we will denote $M$.
+
+\begin{lemma}
+\label{lemma-left-module-structure}
+In the situation above, let $A$ be a differential graded $R$-algebra.
+To give a left $A$-module structure on $M$ is the same thing as
+giving a homomorphism $A \to E$ of differential graded $R$-algebras.
+\end{lemma}
+
+\begin{proof}
+Proof omitted. Observe that no signs intervene in this correspondence.
+\end{proof}
+
+\noindent
+We continue with the discussion above and we assume given another
+complex $N^\bullet$ of $R$-modules. Consider the complex
+of $R$-modules $\Hom^\bullet(M^\bullet, N^\bullet)$ introduced in
+More on Algebra, Section \ref{more-algebra-section-hom-complexes}.
+As above we see that composition
+$$
+\text{Tot}(\Hom^\bullet(M^\bullet, N^\bullet) \otimes_R E^\bullet)
+\to \Hom^\bullet(M^\bullet, N^\bullet)
+$$
+defines a multiplication which turns $\Hom^\bullet(M^\bullet, N^\bullet)$
+into a {\bf right} differential graded $E$-module. Using
+Lemma \ref{lemma-left-module-structure} we
+conclude that given a left differential graded $A$-module $M$ and
+a complex of $R$-modules $N^\bullet$ there is a canonical
+right differential graded $A$-module whose underlying complex
+of $R$-modules is $\Hom^\bullet(M^\bullet, N^\bullet)$ and
+where multiplication
+$$
+\Hom^n(M^\bullet, N^\bullet) \times A^m \longrightarrow
+\Hom^{n + m}(M^\bullet, N^\bullet)
+$$
+sends $f = (f_{p, q})_{p + q = n}$ with $f_{p, q} \in \Hom(M^{-q}, N^p)$
+and $a \in A^m$ to the element $f \cdot a = (f_{p, q} \circ a)$ where
+$f_{p, q} \circ a$ is the map
+$$
+M^{-q - m} \xrightarrow{a} M^{-q} \xrightarrow{f_{p, q}} N^p, \quad
+x \longmapsto f_{p, q}(ax)
+$$
+without the intervention of signs. Let us use the notation
+$\Hom(M, N^\bullet)$ to denote this right differential graded $A$-module.
+
+\begin{lemma}
+\label{lemma-characterize-hom}
+Let $R$ be a ring. Let $(A, \text{d})$ be a differential graded $R$-algebra.
+Let $M'$ be a right differential graded $A$-module and let
+$M$ be a left differential graded $A$-module.
+Let $N^\bullet$ be a complex of $R$-modules. Then we have
+$$
\Hom_{\text{Mod}_{(A, \text{d})}}(M', \Hom(M, N^\bullet)) =
+\Hom_{\text{Comp}(R)}(M' \otimes_A M, N^\bullet)
+$$
where $M' \otimes_A M$ is viewed as a complex of $R$-modules
+as in Section \ref{section-tensor-product}.
+\end{lemma}
+
+\begin{proof}
+Let us show that both sides correspond to graded $A$-bilinear maps
+$$
+M' \times M \longrightarrow N^\bullet
+$$
+compatible with differentials. We have seen this is true for the right
+hand side in Section \ref{section-tensor-product}. Given an element
+$g$ of the left hand side, the equality of
+More on Algebra, Lemma \ref{more-algebra-lemma-compose}
+determines a map of complexes of $R$-modules
+$g' : \text{Tot}(M' \otimes_R M) \to N^\bullet$.
+In other words, we obtain a graded $R$-bilinear
+map $g'' : M' \times M \to N^\bullet$ compatible with differentials.
+The $A$-linearity of $g$ translates immediately
into $A$-bilinearity of $g''$.
+\end{proof}
+
+\noindent
+Let $R$, $M^\bullet$, $E^\bullet$, $E$, and $M$ be as above.
+However, now suppose given a differential graded $R$-algebra $A$
+and a {\bf right} differential graded $A$-module structure on $M$.
+Then we can consider the map
+$$
+\text{Tot}(A^\bullet \otimes_R M^\bullet)
+\xrightarrow{\psi}
\text{Tot}(M^\bullet \otimes_R A^\bullet)
+\to
+M^\bullet
+$$
+where the first arrow is the commutativity constraint on the
+differential graded category of complexes of $R$-modules.
+This corresponds to a map
+$$
+\tau : A^\bullet \longrightarrow E^\bullet
+$$
+of complexes of $R$-modules. Recall that
+$E^n = \prod_{p + q = n} \Hom_R(M^{-q}, M^p)$
+and write $\tau(a) = (\tau_{p, q}(a))_{p + q = n}$ for $a \in A^n$.
+Then we see
+$$
+\tau_{p, q}(a) : M^{-q} \longrightarrow M^p,\quad
+x \longmapsto (-1)^{\deg(a)\deg(x)}x a = (-1)^{-nq}xa
+$$
This is not compatible with the product on $A$ as the reader should
+expect from the discussion in Section \ref{section-left-modules}.
+Namely, we have
+$$
+\tau(a a') = (-1)^{\deg(a)\deg(a')}\tau(a') \tau(a)
+$$
We conclude that the following lemma is true.
+
+\begin{lemma}
+\label{lemma-right-module-structure}
+In the situation above, let $A$ be a differential graded $R$-algebra.
+To give a right $A$-module structure on $M$ is the same thing as
+giving a homomorphism $\tau : A \to E^{opp}$
+of differential graded $R$-algebras.
+\end{lemma}
+
+\begin{proof}
+See discussion above and note that the construction of $\tau$
+from the multiplication map $M^n \times A^m \to M^{n + m}$
+uses signs.
+\end{proof}
+
+\noindent
+Let $R$, $M^\bullet$, $E^\bullet$, $E$, $A$ and $M$ be as above
+and let a right differential graded $A$-module structure on $M$
+be given as in the lemma. In this case there is a canonical left
+differential graded $A$-module whose underlying complex of $R$-modules is
+$\Hom^\bullet(M^\bullet, N^\bullet)$. Namely, for multiplication
+we can use
+\begin{align*}
+\text{Tot}(A^\bullet \otimes_R \Hom^\bullet(M^\bullet, N^\bullet))
+& \xrightarrow{\psi}
+\text{Tot}(\Hom^\bullet(M^\bullet, N^\bullet) \otimes_R A^\bullet) \\
& \xrightarrow{1 \otimes \tau}
\text{Tot}(\Hom^\bullet(M^\bullet, N^\bullet) \otimes_R
\Hom^\bullet(M^\bullet, M^\bullet)) \\
& \to
\Hom^\bullet(M^\bullet, N^\bullet)
\end{align*}
+The first arrow uses the commutativity constraint on the category
+of complexes of $R$-modules, the second arrow is described above, and
+the third arrow is the composition law for the Hom complex.
+Each map is a map of complexes, hence the result is a map of complexes.
+In fact, this construction turns $\Hom^\bullet(M^\bullet, N^\bullet)$
+into a left differential graded $A$-module (associativity of the multiplication
+can be shown using the symmetric monoidal structure or by a direct calculation
+using the formulae below). Let us explicate the multiplication
+$$
+A^n \times \Hom^m(M^\bullet, N^\bullet) \longrightarrow
+\Hom^{n + m}(M^\bullet, N^\bullet)
+$$
+It sends $a \in A^n$ and
+$f = (f_{p, q})_{p + q = m}$ with $f_{p, q} \in \Hom(M^{-q}, N^p)$
+to the element $a \cdot f$ with constituents
+$$
+(-1)^{nm}f_{p, q} \circ \tau_{-q, q + n}(a) =
+(-1)^{nm - n(q + n)}f_{p, q} \circ a =
+(-1)^{np + n} f_{p, q} \circ a
+$$
+in $\Hom_R(M^{-q - n}, N^p)$ where $f_{p, q} \circ a$ is the map
+$$
+M^{-q - n} \xrightarrow{a} M^{-q} \xrightarrow{f_{p, q}} N^p,\quad
+x \longmapsto f_{p, q}(xa)
+$$
+Here a sign of $(-1)^{np + n}$ does intervene. Let us use the notation
+$\Hom(M, N^\bullet)$ to denote this left differential graded $A$-module.
+
+\begin{lemma}
+\label{lemma-characterize-hom-other-side}
+Let $R$ be a ring. Let $(A, \text{d})$ be a differential graded $R$-algebra.
+Let $M$ be a right differential graded $A$-module and let
+$M'$ be a left differential graded $A$-module.
+Let $N^\bullet$ be a complex of $R$-modules. Then we have
+$$
+\Hom_{\text{left diff graded }A\text{-modules}}(M', \Hom(M, N^\bullet)) =
+\Hom_{\text{Comp}(R)}(M \otimes_A M', N^\bullet)
+$$
+where $M \otimes_A M'$ is viewed as a complex of $R$-modules
+as in Section \ref{section-tensor-product}.
+\end{lemma}
+
+\begin{proof}
+Let us show that both sides correspond to graded $A$-bilinear maps
+$$
+M \times M' \longrightarrow N^\bullet
+$$
+compatible with differentials. We have seen this is true for the right
+hand side in Section \ref{section-tensor-product}. Given an element
+$g$ of the left hand side, the equality of
+More on Algebra, Lemma \ref{more-algebra-lemma-compose}
+determines a map of complexes
+$g' : \text{Tot}(M' \otimes_R M) \to N^\bullet$.
+We precompose with the commutativity constraint to get
+$$
+\text{Tot}(M \otimes_R M') \xrightarrow{\psi}
+\text{Tot}(M' \otimes_R M) \xrightarrow{g'}
+N^\bullet
+$$
+which corresponds to a graded $R$-bilinear
+map $g'' : M \times M' \to N^\bullet$ compatible with differentials.
The $A$-linearity of $g$ translates immediately into $A$-bilinearity of $g''$.
+Namely, say $x \in M^e$ and $x' \in (M')^{e'}$ and $a \in A^n$. Then
+on the one hand we have
+\begin{align*}
+g''(x, ax')
+& =
+(-1)^{e(n + e')} g'(ax' \otimes x) \\
+& =
+(-1)^{e(n + e')} g(ax')(x) \\
+& =
+(-1)^{e(n + e')} (a \cdot g(x'))(x) \\
+& =
+(-1)^{e(n + e') + n(n + e + e') + n} g(x')(xa)
+\end{align*}
+and on the other hand we have
+$$
+g''(xa, x') = (-1)^{(e + n)e'} g'(x' \otimes xa) =
+(-1)^{(e + n)e'} g(x')(xa)
+$$
+which is the same thing by a trivial mod $2$ calculation of the exponents.
+\end{proof}
+
+\begin{remark}
+\label{remark-evaluation-map-left}
+Let $R$ be a ring. Let $A$ be a differential graded $R$-algebra.
+Let $M$ be a left differential graded $A$-module. Let
+$N^\bullet$ be a complex of $R$-modules. The constructions above
+produce a right differential graded $A$-module $\Hom(M, N^\bullet)$
and then a left differential graded $A$-module
+$\Hom(\Hom(M, N^\bullet), N^\bullet)$. We claim there is an
+evaluation map
+$$
+ev : M \longrightarrow \Hom(\Hom(M, N^\bullet), N^\bullet)
+$$
+in the category of left differential graded $A$-modules. To define it, by
Lemma \ref{lemma-characterize-hom-other-side} it suffices to construct an
+$A$-bilinear pairing
+$$
+\Hom(M, N^\bullet) \times M \longrightarrow N^\bullet
+$$
+compatible with grading and differentials. For this we take
+$$
+(f, x) \longmapsto f(x)
+$$
We leave it to the reader to verify that this pairing is compatible with
the grading and the differentials and that it is $A$-bilinear.
The map $ev$ on underlying complexes
+of $R$-modules is More on Algebra, Item (\ref{more-algebra-item-evaluation}).
+\end{remark}
+
+\begin{remark}
+\label{remark-evaluation-map-right}
+Let $R$ be a ring. Let $A$ be a differential graded $R$-algebra.
+Let $M$ be a right differential graded $A$-module. Let
+$N^\bullet$ be a complex of $R$-modules. The constructions above
+produce a left differential graded $A$-module $\Hom(M, N^\bullet)$
+and then a right differential graded $A$-module
+$\Hom(\Hom(M, N^\bullet), N^\bullet)$. We claim there is an evaluation map
+$$
+ev : M \longrightarrow \Hom(\Hom(M, N^\bullet), N^\bullet)
+$$
+in the category of right differential graded $A$-modules. To define it, by
+Lemma \ref{lemma-characterize-hom} it suffices to construct an
+$A$-bilinear pairing
+$$
+M \times \Hom(M, N^\bullet) \longrightarrow N^\bullet
+$$
+compatible with grading and differentials. For this we take
+$$
+(x, f) \longmapsto (-1)^{\deg(x)\deg(f)}f(x)
+$$
We leave it to the reader to verify that this pairing is compatible with
the grading and the differentials and that it is $A$-bilinear.
The map $ev$ on underlying complexes
+of $R$-modules is More on Algebra, Item (\ref{more-algebra-item-evaluation}).
+\end{remark}
+
+\begin{remark}
+\label{remark-shift-dual}
+Let $R$ be a ring. Let $A$ be a differential graded $R$-algebra.
+Let $M^\bullet$ and $N^\bullet$ be complexes of $R$-modules.
+Let $k \in \mathbf{Z}$ and consider the isomorphism
+$$
+\Hom^\bullet(M^\bullet, N^\bullet)[-k]
+\longrightarrow
+\Hom^\bullet(M^\bullet[k], N^\bullet)
+$$
+of complexes of $R$-modules defined in
+More on Algebra, Item (\ref{more-algebra-item-shift-hom}).
+If $M^\bullet$ has the structure of a left, resp.\ right
+differential graded $A$-module, then this is a map of
+right, resp.\ left differential graded $A$-modules (with the
+module structures as defined in this section).
+We omit the verification; we warn the reader that the
+$A$-module structure on the shift of a left graded $A$-module
+is defined using a sign, see
+Definition \ref{definition-shift-graded-module}.
+\end{remark}
+
+
+
+
+
+
+\section{Projective modules over algebras}
+\label{section-projectives-over-algebras}
+
+\noindent
+In this section we discuss projective modules over algebras analogous to
+Algebra, Section \ref{algebra-section-projective}.
+This section should probably be moved somewhere else.
+
+\medskip\noindent
+Let $R$ be a ring and let $A$ be an $R$-algebra, see
+Section \ref{section-conventions} for our conventions.
+It is clear that $A$ is a projective right $A$-module since
+$\Hom_A(A, M) = M$ for any right $A$-module $M$ (and thus $\Hom_A(A, -)$
+is exact). Conversely, let $P$ be a projective right $A$-module. Then
+we can choose a surjection
+$\bigoplus_{i \in I} A \to P$ by choosing a set $\{p_i\}_{i \in I}$
+of generators of $P$ over $A$. Since $P$ is projective, this
+surjection admits a right inverse, and we find that $P$ is isomorphic
+to a direct summand of a free module, exactly as in the commutative case
+(Algebra, Lemma \ref{algebra-lemma-characterize-projective}).
+
+\medskip\noindent
+We conclude
+\begin{enumerate}
+\item the category of $A$-modules has enough projectives,
+\item $A$ is a projective $A$-module,
+\item every $A$-module is a quotient of a direct sum of copies of $A$,
+\item every projective $A$-module is a direct summand of a direct
+sum of copies of $A$.
+\end{enumerate}
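+
+\medskip\noindent
+In formulas, the splitting argument just given can be sketched as
+follows (the names $\varphi$ and $s$ are ours):
+
+```latex
+% Sketch: a projective right A-module P is a summand of a free module.
+\varphi : \bigoplus\nolimits_{i \in I} A \longrightarrow P, \quad
+(a_i)_{i \in I} \longmapsto \sum\nolimits_{i \in I} p_i a_i
+% Projectivity of P lifts id_P along the surjection \varphi to a section
+s : P \longrightarrow \bigoplus\nolimits_{i \in I} A, \quad
+\varphi \circ s = \text{id}_P
+% whence the free module decomposes as
+\bigoplus\nolimits_{i \in I} A \cong \Ker(\varphi) \oplus s(P), \quad
+s(P) \cong P
+```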
+
+
+
+\section{Projective modules over graded algebras}
+\label{section-projectives-over-graded-algebras}
+
+\noindent
+In this section we discuss projective graded modules over graded algebras
+analogous to Algebra, Section \ref{algebra-section-projective}.
+
+\medskip\noindent
+Let $R$ be a ring. Let $A$ be a $\mathbf{Z}$-graded algebra over $R$.
+See Section \ref{section-conventions} for our conventions.
+Let $\text{Mod}_A$ denote the category of graded right $A$-modules.
+For an integer $k$ let $A[k]$ denote the shift of $A$.
+For a graded right $A$-module $M$ we have
+$$
+\Hom_{\text{Mod}_A}(A[k], M) = M^{-k}
+$$
+As the functor $M \mapsto M^{-k}$ is exact on $\text{Mod}_A$ we
+conclude that $A[k]$ is a projective object of $\text{Mod}_A$.
+Conversely, suppose that $P$ is a projective object of $\text{Mod}_A$.
+By choosing a set of homogeneous generators of $P$ as an $A$-module,
+we can find a surjection
+$$
+\bigoplus\nolimits_{i \in I} A[k_i] \longrightarrow P
+$$
+Thus we conclude that a projective object of $\text{Mod}_A$ is
+a direct summand of a direct sum of the shifts $A[k]$.
+
+\medskip\noindent
+We conclude
+\begin{enumerate}
+\item the category of graded $A$-modules has enough projectives,
+\item $A[k]$ is a projective graded $A$-module for every $k \in \mathbf{Z}$,
+\item every graded $A$-module is a quotient of a direct sum of
+copies of the modules $A[k]$ for varying $k$,
+\item every projective graded $A$-module is a direct summand of a direct
+sum of copies of the modules $A[k]$ for varying $k$.
+\end{enumerate}
+
+
+
+
+\section{Projective modules and differential graded algebras}
+\label{section-projective-over-differential-graded}
+
+\noindent
+If $(A, \text{d})$ is a differential graded algebra and $P$ is
+an object of $\text{Mod}_{(A, \text{d})}$ then we say
+{\it $P$ is projective as a graded $A$-module} or sometimes
+{\it $P$ is graded projective} to mean that $P$
+is a projective object of the abelian category $\text{Mod}_A$
+of graded $A$-modules as in
+Section \ref{section-projectives-over-graded-algebras}.
+
+\begin{lemma}
+\label{lemma-target-graded-projective}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $M \to P$ be a surjective homomorphism of differential graded
+$A$-modules. If $P$ is projective as a graded $A$-module, then
+$M \to P$ is an admissible epimorphism.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definitions.
+\end{proof}
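+
+\noindent
+Unwinding the definitions in the proof above (a sketch; the section $s$
+is our notation):
+
+```latex
+% Sketch: M -> P surjective with P graded projective. Forgetting the
+% differentials, M -> P is a surjection of graded A-modules, so graded
+% projectivity of P yields a graded A-module homomorphism
+s : P \longrightarrow M, \quad (M \to P) \circ s = \text{id}_P
+% A surjection of differential graded modules which is split as a map
+% of graded modules is precisely an admissible epimorphism.
+```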
+
+\begin{lemma}
+\label{lemma-hom-from-shift-free}
+Let $(A, d)$ be a differential graded algebra. Then we have
+$$
+\Hom_{\text{Mod}_{(A, \text{d})}}(A[k], M) =
+\Ker(\text{d} : M^{-k} \to M^{-k + 1})
+$$
+and
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(A[k], M) = H^{-k}(M)
+$$
+for any differential graded $A$-module $M$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the definitions.
+\end{proof}
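+
+\noindent
+As a sketch of the omitted argument: a homomorphism
+$f : A[k] \to M$ of differential graded $A$-modules is determined by
+the element $m = f(1) \in M^{-k}$, and
+
+```latex
+% Compatibility with the differentials forces
+\text{d}(m) = \text{d}(f(1)) = f(\text{d}(1)) = 0
+% so Hom(A[k], M) = Ker(d : M^{-k} -> M^{-k + 1}). A homotopy h of
+% degree -1 is likewise determined by h(1) in M^{-k - 1}, and f is
+% homotopic to zero if and only if m lies in the image of
+% d : M^{-k - 1} -> M^{-k}, which gives H^{-k}(M) for the second formula.
+```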
+
+
+
+
+
+
+
+\section{Injective modules over algebras}
+\label{section-modules-noncommutative}
+
+\noindent
+In this section we discuss injective modules over algebras
+analogous to
+More on Algebra, Section \ref{more-algebra-section-injectives-modules}.
+This section should probably be moved somewhere else.
+
+\medskip\noindent
+Let $R$ be a ring and let $A$ be an
+$R$-algebra, see Section \ref{section-conventions} for our conventions.
+For a right $A$-module $M$ we set
+$$
+M^\vee = \Hom_\mathbf{Z}(M, \mathbf{Q}/\mathbf{Z})
+$$
+which we think of as a left $A$-module by the multiplication
+$(a f)(x) = f(xa)$. Namely, $((ab)f)(x) = f(xab) = (bf)(xa) = (a(bf))(x)$.
+Conversely, if $M$ is a left $A$-module, then $M^\vee$ is a right
+$A$-module. Since $\mathbf{Q}/\mathbf{Z}$ is an injective abelian
+group (More on Algebra, Lemma \ref{more-algebra-lemma-injective-abelian}), the
+functor $M \mapsto M^\vee$ is exact
+(More on Algebra, Lemma \ref{more-algebra-lemma-vee-exact}).
+Moreover, the evaluation map $M \to (M^\vee)^\vee$ is
+injective for all modules $M$
+(More on Algebra, Lemma \ref{more-algebra-lemma-ev-injective}).
+
+\medskip\noindent
+We claim that $A^\vee$ is an injective right $A$-module. Namely, given
+a right $A$-module $N$ we have
+$$
+\Hom_A(N, A^\vee) =
+\Hom_A(N, \Hom_\mathbf{Z}(A, \mathbf{Q}/\mathbf{Z})) = N^\vee
+$$
+and we conclude because the functor $N \mapsto N^\vee$ is exact.
+The second equality holds because
+$$
+\Hom_\mathbf{Z}(N, \Hom_\mathbf{Z}(A, \mathbf{Q}/\mathbf{Z})) =
+\Hom_\mathbf{Z}(N \otimes_\mathbf{Z} A, \mathbf{Q}/\mathbf{Z})
+$$
+by Algebra, Lemma \ref{algebra-lemma-hom-from-tensor-product}.
+Inside this module $A$-linearity exactly picks out the bilinear maps
+$\varphi : N \times A \to \mathbf{Q}/\mathbf{Z}$ which
+have the same value on $x \otimes a$ and $xa \otimes 1$, i.e.,
+come from elements of $N^\vee$.
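+
+\medskip\noindent
+Concretely (a sketch only), the identification
+$\Hom_A(N, A^\vee) = N^\vee$ sends a homomorphism to its value at
+$1 \in A$:
+
+```latex
+% Sketch of the identification Hom_A(N, A^vee) = N^vee.
+\varphi \longmapsto \lambda, \quad \lambda(x) = \varphi(x)(1)
+% with inverse
+\lambda \longmapsto \varphi, \quad \varphi(x)(a) = \lambda(xa)
+% Right A-linearity of \varphi, i.e. \varphi(xa)(b) = \varphi(x)(ab),
+% is exactly the condition singled out above.
+```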
+
+\medskip\noindent
+Finally, for every right $A$-module $M$ we can choose a surjection
+$\bigoplus_{i \in I} A \to M^\vee$ to get an injection
+$M \to (M^\vee)^\vee \to \prod_{i \in I} A^\vee$.
+
+\medskip\noindent
+We conclude
+\begin{enumerate}
+\item the category of $A$-modules has enough injectives,
+\item $A^\vee$ is an injective $A$-module, and
+\item every $A$-module injects into a product of copies of $A^\vee$.
+\end{enumerate}
+
+
+
+
+
+\section{Injective modules over graded algebras}
+\label{section-modules-noncommutative-graded}
+
+\noindent
+In this section we discuss injective graded modules over graded algebras
+analogous to
+More on Algebra, Section \ref{more-algebra-section-injectives-modules}.
+
+\medskip\noindent
+Let $R$ be a ring. Let $A$ be a $\mathbf{Z}$-graded algebra over $R$.
+See Section \ref{section-conventions} for our conventions.
+If $M$ is a graded $R$-module we set
+$$
+M^\vee =
+\bigoplus\nolimits_{n \in \mathbf{Z}}
+\Hom_\mathbf{Z}(M^{-n}, \mathbf{Q}/\mathbf{Z}) =
+\bigoplus\nolimits_{n \in \mathbf{Z}} (M^{-n})^\vee
+$$
+as a graded $R$-module (no signs in the actions of $R$ on the
+homogeneous parts). If $M$ has the structure of a left graded
+$A$-module, then we define a right graded $A$-module structure
+on $M^\vee$ by letting $a \in A^m$ act by
+$$
+(M^{-n})^\vee \to (M^{-n - m})^\vee, \quad
+f \mapsto f \circ a
+$$
+as in Section \ref{section-hom-complexes}.
+If $M$ has the structure of a right graded
+$A$-module, then we define a left graded $A$-module structure
+on $M^\vee$ by letting $a \in A^n$ act by
+$$
+(M^{-m})^\vee \to (M^{-m - n})^\vee, \quad
+f \mapsto (-1)^{nm}f \circ a
+$$
+as in Section \ref{section-hom-complexes} (the sign is forced on
+us because we want to use the same formula for the case
+when working with differential graded modules --- if you only
+care about graded modules, then you can omit the sign here).
+On the category of (left or right) graded $A$-modules the
+functor $M \mapsto M^\vee$ is exact (check on graded pieces).
+Moreover, there is an injective evaluation map
+$$
+ev : M \longrightarrow (M^\vee)^\vee, \quad
+ev^n = (-1)^n \text{ the evaluation map }M^n \to ((M^n)^\vee)^\vee
+$$
+of graded $R$-modules, see
+More on Algebra, Item (\ref{more-algebra-item-evaluation}).
+This evaluation map is a left, resp.\ right $A$-module homomorphism
+if $M$ is a left, resp.\ right $A$-module, see
+Remarks \ref{remark-evaluation-map-left} and \ref{remark-evaluation-map-right}.
+Finally, given $k \in \mathbf{Z}$ there is a canonical isomorphism
+$$
+M^\vee[-k] \longrightarrow (M[k])^\vee
+$$
+of graded $R$-modules which uses a sign and which, if
+$M$ is a left, resp.\ right $A$-module, is an isomorphism
+of right, resp.\ left $A$-modules. See Remark \ref{remark-shift-dual}.
+
+\medskip\noindent
+We claim that $A^\vee$ is an injective object of the category
+$\text{Mod}_A$ of graded right $A$-modules. Namely, given a graded
+right $A$-module $N$ we have
+$$
+\Hom_{\text{Mod}_A}(N, A^\vee) =
+\Hom_{\text{Comp}(\mathbf{Z})}(N \otimes_A A, \mathbf{Q}/\mathbf{Z}) =
+(N^0)^\vee
+$$
+by Lemma \ref{lemma-characterize-hom}
+(applied to the case where all the differentials are zero).
+We conclude because the functor $N \mapsto (N^0)^\vee = (N^\vee)^0$
+is exact.
+
+\medskip\noindent
+Finally, for every graded right $A$-module $M$ we can choose a surjection
+of graded left $A$-modules
+$$
+\bigoplus\nolimits_{i \in I} A[k_i] \to M^\vee
+$$
+where $A[k_i]$ denotes the shift of $A$ by $k_i \in \mathbf{Z}$.
+We do this by choosing homogeneous generators for $M^\vee$.
+In this way we get an injection
+$$
+M \to (M^\vee)^\vee \to \prod A[k_i]^\vee = \prod A^\vee[-k_i]
+$$
+Observe that the products in the formula above are products in the
+category of graded modules (in other words, take products in each degree
+and then take the direct sum of the pieces).
+
+\medskip\noindent
+We conclude that
+\begin{enumerate}
+\item the category of graded $A$-modules has enough injectives,
+\item for every $k \in \mathbf{Z}$ the module $A^\vee[k]$ is injective, and
+\item every graded $A$-module injects into a product in the category of graded
+modules of copies of shifts $A^\vee[k]$.
+\end{enumerate}
+
+
+
+
+
+\section{Injective modules and differential graded algebras}
+\label{section-modules-noncommutative-differential-graded}
+
+\noindent
+If $(A, \text{d})$ is a differential graded algebra and $I$ is
+an object of $\text{Mod}_{(A, \text{d})}$ then we say
+{\it $I$ is injective as a graded $A$-module} or sometimes
+{\it $I$ is graded injective} to mean
+that $I$ is an injective object of the abelian category $\text{Mod}_A$
+of graded $A$-modules.
+
+\begin{lemma}
+\label{lemma-source-graded-injective}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $I \to M$ be an injective homomorphism of differential graded
+$A$-modules. If $I$ is graded injective, then
+$I \to M$ is an admissible monomorphism.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definitions.
+\end{proof}
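+
+\noindent
+Dually to the graded projective case, the proof may be sketched as
+follows (the retraction $\pi$ is our notation):
+
+```latex
+% Sketch: I -> M injective with I graded injective. Forgetting the
+% differentials, graded injectivity of I extends id_I along I -> M
+% to a graded A-module homomorphism
+\pi : M \longrightarrow I, \quad \pi \circ (I \to M) = \text{id}_I
+% An injection of differential graded modules which is split as a map
+% of graded modules is precisely an admissible monomorphism.
+```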
+
+\noindent
+Let $(A, \text{d})$ be a differential graded algebra. If $M$ is a
+left, resp.\ right differential graded $A$-module, then
+$$
+M^\vee = \Hom^\bullet(M^\bullet, \mathbf{Q}/\mathbf{Z})
+$$
+with $A$-module structure constructed in
+Section \ref{section-modules-noncommutative-graded} is a
+right, resp.\ left differential graded $A$-module by the
+discussion in Section \ref{section-hom-complexes}.
+By Remarks
+\ref{remark-evaluation-map-left} and \ref{remark-evaluation-map-right}
+the evaluation map of Section \ref{section-modules-noncommutative-graded}
+$$
+M \longrightarrow (M^\vee)^\vee
+$$
+is a homomorphism of left, resp.\ right differential graded $A$-modules.
+
+\begin{lemma}
+\label{lemma-map-into-dual}
+Let $(A, \text{d})$ be a differential graded algebra. If
+$M$ is a left differential graded $A$-module and $N$ is a
+right differential graded $A$-module, then
+\begin{align*}
+\Hom_{\text{Mod}_{(A, \text{d})}}(N, M^\vee)
+& =
+\Hom_{\text{Comp}(\mathbf{Z})}(N \otimes_A M, \mathbf{Q}/\mathbf{Z}) \\
+& =
+\text{DifferentialGradedBilinear}_A(N \times M, \mathbf{Q}/\mathbf{Z})
+\end{align*}
+\end{lemma}
+
+\begin{proof}
+The first equality is Lemma \ref{lemma-characterize-hom}
+and the second equality was shown in Section \ref{section-tensor-product}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-into-shift-dual-free}
+Let $(A, \text{d})$ be a differential graded algebra. Then we have
+$$
+\Hom_{\text{Mod}_{(A, \text{d})}}(M, A^\vee[k]) =
+\Ker(\text{d} : (M^\vee)^k \to (M^\vee)^{k + 1})
+$$
+and
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(M, A^\vee[k]) = H^k(M^\vee)
+$$
+as functors in the differential graded $A$-module $M$.
+\end{lemma}
+
+\begin{proof}
+This is clear from the discussion above.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{P-resolutions}
+\label{section-P-resolutions}
+
+\noindent
+This section is the analogue of
+Derived Categories, Section \ref{derived-section-unbounded}.
+
+\medskip\noindent
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $P$ be a differential graded $A$-module. We say $P$
+{\it has property (P)} if there exists a filtration
+$$
+0 = F_{-1}P \subset F_0P \subset F_1P \subset \ldots \subset P
+$$
+by differential graded submodules such that
+\begin{enumerate}
+\item $P = \bigcup F_pP$,
+\item the inclusions $F_iP \to F_{i + 1}P$ are admissible
+monomorphisms,
+\item the quotients $F_{i + 1}P/F_iP$ are isomorphic as differential
+graded $A$-modules to a direct sum of $A[k]$.
+\end{enumerate}
+In fact, condition (2) is a consequence of condition (3), see
+Lemma \ref{lemma-target-graded-projective}. Moreover, the reader
+can verify that as a graded $A$-module $P$ will be isomorphic to a
+direct sum of shifts of $A$.
+
+\begin{lemma}
+\label{lemma-property-P-sequence}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $P$ be a differential graded $A$-module. If $F_\bullet$
+is a filtration as in property (P), then we obtain an
+admissible short exact sequence
+$$
+0 \to
+\bigoplus\nolimits F_iP \to
+\bigoplus\nolimits F_iP \to P \to 0
+$$
+of differential graded $A$-modules.
+\end{lemma}
+
+\begin{proof}
+The second map is the direct sum of the inclusion maps.
+The first map on the summand $F_iP$ of the source is the sum
+of the identity $F_iP \to F_iP$ and the negative of the inclusion
+map $F_iP \to F_{i + 1}P$. Choose homomorphisms $s_i : F_{i + 1}P \to F_iP$
+of graded $A$-modules which are left inverse to the inclusion maps.
+Composing gives maps $s_{j, i} : F_jP \to F_iP$ for all $j > i$.
+Then a left inverse of the first arrow maps $x \in F_jP$ to
+$(s_{j, 0}(x), s_{j, 1}(x), \ldots, s_{j, j - 1}(x), 0, \ldots)$
+in $\bigoplus F_iP$.
+\end{proof}
+
+\noindent
+The following lemma shows that differential graded modules with
+property (P) are the dual notion to K-injective modules
+(i.e., they are K-projective in some sense). See
+Derived Categories, Definition \ref{derived-definition-K-injective}.
+
+\begin{lemma}
+\label{lemma-property-P-K-projective}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $P$ be a differential graded $A$-module with property (P).
+Then
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(P, N) = 0
+$$
+for all acyclic differential graded $A$-modules $N$.
+\end{lemma}
+
+\begin{proof}
+We will use that $K(\text{Mod}_{(A, \text{d})})$ is a triangulated
+category (Proposition \ref{proposition-homotopy-category-triangulated}).
+Let $F_\bullet$ be a filtration on $P$ as in property (P).
+The short exact sequence of Lemma \ref{lemma-property-P-sequence}
+produces a distinguished triangle. Hence by
+Derived Categories, Lemma \ref{derived-lemma-representable-homological}
+it suffices to show that
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(F_iP, N) = 0
+$$
+for all acyclic differential graded $A$-modules $N$ and all $i$.
+Each of the differential graded modules $F_iP$ has a finite filtration
+by admissible monomorphisms, whose graded pieces are direct sums
+of shifts $A[k]$. Thus it suffices to prove that
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(A[k], N) = 0
+$$
+for all acyclic differential graded $A$-modules $N$ and all $k$.
+This follows from Lemma \ref{lemma-hom-from-shift-free}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-good-quotient}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $M$ be a differential graded $A$-module. There exists a homomorphism
+$P \to M$ of differential graded $A$-modules with the following
+properties
+\begin{enumerate}
+\item $P \to M$ is surjective,
+\item $\Ker(\text{d}_P) \to \Ker(\text{d}_M)$ is surjective, and
+\item $P$ sits in an admissible short exact sequence
+$0 \to P' \to P \to P'' \to 0$ where $P'$, $P''$ are direct sums
+of shifts of $A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $P_k$ be the free $A$-module with generators $x, y$ in degrees
+$k$ and $k + 1$. Define the structure of a differential graded
+$A$-module on $P_k$ by setting $\text{d}(x) = y$ and $\text{d}(y) = 0$.
+For every element $m \in M^k$ there is a homomorphism
+$P_k \to M$ sending $x$ to $m$ and $y$ to $\text{d}(m)$.
+Thus we see that there is a surjection from a direct sum
+of copies of $P_k$ to $M$. This clearly produces $P \to M$
+having properties (1) and (3). To obtain property (2) note
+that if $m \in \Ker(\text{d}_M)$ has degree $k$, then there is a map
+$A[k] \to M$ mapping $1$ to $m$. Hence we can achieve (2) by adding
+a direct sum of copies of shifts of $A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-resolve}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $M$ be a differential graded $A$-module. There exists a homomorphism
+$P \to M$ of differential graded $A$-modules such that
+\begin{enumerate}
+\item $P \to M$ is a quasi-isomorphism, and
+\item $P$ has property (P).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Set $M_0 = M$. We inductively choose short exact sequences
+$$
+0 \to M_{i + 1} \to P_i \to M_i \to 0
+$$
+where the maps $P_i \to M_i$ are chosen as in Lemma \ref{lemma-good-quotient}.
+This gives a ``resolution''
+$$
+\ldots \to P_2 \xrightarrow{f_2} P_1 \xrightarrow{f_1} P_0 \to M \to 0
+$$
+Then we set
+$$
+P = \bigoplus\nolimits_{i \geq 0} P_i
+$$
+as an $A$-module with grading given by
+$P^n = \bigoplus_{a + b = n} P_{-a}^b$ and
+differential (as in the construction of the total complex associated
+to a double complex) by
+$$
+\text{d}_P(x) = f_{-a}(x) + (-1)^a \text{d}_{P_{-a}}(x)
+$$
+for $x \in P_{-a}^b$. With these conventions $P$ is indeed a differential
+graded $A$-module. Recalling that each $P_i$ has a two step filtration
+$0 \to P_i' \to P_i \to P_i'' \to 0$ we set
+$$
+F_{2i}P = \bigoplus\nolimits_{i \geq j \geq 0} P_j
+\subset
+\bigoplus\nolimits_{i \geq 0} P_i = P
+$$
+and we add $P'_{i + 1}$ to $F_{2i}P$ to get $F_{2i + 1}P$.
+These are differential graded submodules and the successive quotients
+are direct sums of shifts of $A$. By
+Lemma \ref{lemma-target-graded-projective} we see that
+the inclusions $F_iP \to F_{i + 1}P$ are admissible monomorphisms.
+Finally, we have to show that the map $P \to M$ (given by the
+augmentation $P_0 \to M$) is a quasi-isomorphism. This follows from
+Homology, Lemma \ref{homology-lemma-good-resolution-gives-qis}.
+\end{proof}
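+
+\noindent
+As a sanity check on the sign convention in the proof above (a sketch,
+not part of the original): for $x \in P_{-a}^b$ one computes, using
+$f_{-a - 1} \circ f_{-a} = 0$ and the fact that the $f_i$ commute with
+the differentials,
+
+```latex
+\begin{align*}
+\text{d}_P(\text{d}_P(x))
+& = f_{-a - 1}(f_{-a}(x)) + (-1)^{a + 1} \text{d}_{P_{-a - 1}}(f_{-a}(x)) \\
+& \quad + (-1)^a f_{-a}(\text{d}_{P_{-a}}(x)) + \text{d}_{P_{-a}}^2(x) \\
+& = (-1)^{a + 1}\left(
+\text{d}_{P_{-a - 1}}(f_{-a}(x)) - f_{-a}(\text{d}_{P_{-a}}(x))\right)
+= 0
+\end{align*}
+```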
+
+
+
+
+
+\section{I-resolutions}
+\label{section-I-resolutions}
+
+\noindent
+This section is the dual of the section on P-resolutions.
+
+\medskip\noindent
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $I$ be a differential graded $A$-module. We say $I$
+{\it has property (I)} if there exists a filtration
+$$
+I = F_0I \supset F_1I \supset F_2I \supset \ldots \supset 0
+$$
+by differential graded submodules such that
+\begin{enumerate}
+\item $I = \lim I/F_pI$,
+\item the maps $I/F_{i + 1}I \to I/F_iI$ are admissible epimorphisms,
+\item the quotients $F_iI/F_{i + 1}I$ are isomorphic as differential
+graded $A$-modules to products of the modules $A^\vee[k]$ constructed
+in Section \ref{section-modules-noncommutative-differential-graded}.
+\end{enumerate}
+In fact, condition (2) is a consequence of condition (3), see
+Lemma \ref{lemma-source-graded-injective}. The reader can verify that as
+a graded module $I$ will be isomorphic to a product of $A^\vee[k]$.
+
+\begin{lemma}
+\label{lemma-property-I-sequence}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $I$ be a differential graded $A$-module. If $F_\bullet$
+is a filtration as in property (I), then we obtain an
+admissible short exact sequence
+$$
+0 \to I \to
+\prod\nolimits I/F_iI \to
+\prod\nolimits I/F_iI \to 0
+$$
+of differential graded $A$-modules.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: This is dual to Lemma \ref{lemma-property-P-sequence}.
+\end{proof}
+
+\noindent
+The following lemma shows that differential graded modules with
+property (I) are the analogue of K-injective modules. See
+Derived Categories, Definition \ref{derived-definition-K-injective}.
+
+\begin{lemma}
+\label{lemma-property-I-K-injective}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $I$ be a differential graded $A$-module with property (I).
+Then
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(N, I) = 0
+$$
+for all acyclic differential graded $A$-modules $N$.
+\end{lemma}
+
+\begin{proof}
+We will use that $K(\text{Mod}_{(A, \text{d})})$ is a triangulated
+category (Proposition \ref{proposition-homotopy-category-triangulated}).
+Let $F_\bullet$ be a filtration on $I$ as in property (I).
+The short exact sequence of Lemma \ref{lemma-property-I-sequence}
+produces a distinguished triangle. Hence by
+Derived Categories, Lemma \ref{derived-lemma-representable-homological}
+it suffices to show that
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(N, I/F_iI) = 0
+$$
+for all acyclic differential graded $A$-modules $N$ and all $i$.
+Each of the differential graded modules $I/F_iI$ has a finite filtration
+by admissible monomorphisms, whose graded pieces are
+products of $A^\vee[k]$. Thus it suffices to prove that
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(N, A^\vee[k]) = 0
+$$
+for all acyclic differential graded $A$-modules $N$ and all $k$.
+This follows from Lemma \ref{lemma-hom-into-shift-dual-free}
+and the fact that $(-)^\vee$ is an exact functor.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-good-sub}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $M$ be a differential graded $A$-module. There exists a homomorphism
+$M \to I$ of differential graded $A$-modules with the following
+properties
+\begin{enumerate}
+\item $M \to I$ is injective,
+\item $\Coker(\text{d}_M) \to \Coker(\text{d}_I)$ is injective,
+and
+\item $I$ sits in an admissible short exact sequence
+$0 \to I' \to I \to I'' \to 0$ where $I'$, $I''$ are products
+of shifts of $A^\vee$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use the functors $N \mapsto N^\vee$ (from left to right
+differential graded modules and from right to left differential
+graded modules) constructed in
+Section \ref{section-modules-noncommutative-differential-graded}
+and all of their properties.
+For every $k \in \mathbf{Z}$ let $Q_k$ be the free left $A$-module with
+generators $x, y$ in degrees $k$ and $k + 1$. Define the structure of a
+left differential graded $A$-module on $Q_k$ by setting $\text{d}(x) = y$
+and $\text{d}(y) = 0$. Arguing exactly as in the proof of
+Lemma \ref{lemma-good-quotient} we find a surjection
+$$
+\bigoplus\nolimits_{i \in I} Q_{k_i} \longrightarrow M^\vee
+$$
+of left differential graded $A$-modules. Then we can consider the injection
+$$
+M \to (M^\vee)^\vee \to (\bigoplus\nolimits_{i \in I} Q_{k_i})^\vee =
+\prod\nolimits_{i \in I} I_{k_i}
+$$
+where $I_k = Q_k^\vee$ is the ``dual'' right differential graded $A$-module.
+Further, the short exact sequence $0 \to A[-k - 1] \to Q_k \to A[-k] \to 0$
+produces a short exact sequence
+$0 \to A^\vee[k] \to I_k \to A^\vee[k + 1] \to 0$.
+
+\medskip\noindent
+The result of the previous paragraph produces $M \to I$
+having properties (1) and (3). To obtain property (2), suppose
+$\overline{m} \in \Coker(\text{d}_M)$ is a nonzero element of
+degree $k$, say the image of $m \in M^k$. Pick a map
+$\lambda : M^k \to \mathbf{Q}/\mathbf{Z}$
+which vanishes on $\Im(M^{k - 1} \to M^k)$ but not on $m$.
+Lemma \ref{lemma-hom-into-shift-dual-free}
+this corresponds to a homomorphism $M \to A^\vee[k]$ of
+differential graded $A$-modules which does not vanish on $m$.
+Hence we can achieve (2) by adding
+a product of copies of shifts of $A^\vee$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-right-resolution}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $M$ be a differential graded $A$-module. There exists a homomorphism
+$M \to I$ of differential graded $A$-modules such that
+\begin{enumerate}
+\item $M \to I$ is a quasi-isomorphism, and
+\item $I$ has property (I).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Set $M_0 = M$. We inductively choose short exact sequences
+$$
+0 \to M_i \to I_i \to M_{i + 1} \to 0
+$$
+where the maps $M_i \to I_i$ are chosen as in Lemma \ref{lemma-good-sub}.
+This gives a ``resolution''
+$$
+0 \to M \to I_0 \xrightarrow{f_0} I_1 \xrightarrow{f_1} I_2 \to \ldots
+$$
+Denote $I$ the differential graded $A$-module with graded parts
+$$
+I^n = \prod\nolimits_{i \geq 0} I^{n - i}_i
+$$
+and differential defined by
+$$
+\text{d}_I(x) = f_i(x) + (-1)^i \text{d}_{I_i}(x)
+$$
+for $x \in I_i^{n - i}$. With these conventions $I$ is indeed a differential
+graded $A$-module. Recalling that each $I_i$ has a two step filtration
+$0 \to I_i' \to I_i \to I_i'' \to 0$ we set
+$$
+F_{2i}I^n = \prod\nolimits_{j \geq i} I^{n - j}_j
+\subset
+\prod\nolimits_{i \geq 0} I^{n - i}_i = I^n
+$$
+and we add a factor $I'_{i + 1}$ to $F_{2i}I$ to get $F_{2i + 1}I$.
+These are differential graded submodules and the successive quotients
+are products of shifts of $A^\vee$. By
+Lemma \ref{lemma-source-graded-injective} we see that
+the inclusions $F_{i + 1}I \to F_iI$ are admissible monomorphisms.
+Finally, we have to show that the map $M \to I$ (given by the
+augmentation $M \to I_0$) is a quasi-isomorphism. This follows from
+Homology, Lemma \ref{homology-lemma-good-right-resolution-gives-qis}.
+\end{proof}
+
+
+
+
+
+
+\section{The derived category}
+\label{section-derived}
+
+\noindent
+Recall that the notions of acyclic differential graded modules
+and quasi-isomorphism of differential graded modules make sense
+(see Section \ref{section-modules}).
+
+\begin{lemma}
+\label{lemma-acyclic}
+Let $(A, \text{d})$ be a differential graded algebra.
+The full subcategory $\text{Ac}$ of $K(\text{Mod}_{(A, \text{d})})$
+consisting of acyclic modules is a strictly full saturated triangulated
+subcategory of $K(\text{Mod}_{(A, \text{d})})$.
+The corresponding saturated multiplicative system
+(see Derived Categories, Lemma \ref{derived-lemma-operations})
+of $K(\text{Mod}_{(A, \text{d})})$ is the class $\text{Qis}$
+of quasi-isomorphisms. In particular, the kernel of the localization
+functor
+$$
+Q : K(\text{Mod}_{(A, \text{d})}) \to
+\text{Qis}^{-1}K(\text{Mod}_{(A, \text{d})})
+$$
+is $\text{Ac}$. Moreover, the functor $H^0$ factors through $Q$.
+\end{lemma}
+
+\begin{proof}
+We know that $H^0$ is a homological functor by the long exact
+sequence of homology (\ref{equation-les}).
+The kernel of $H^0$ is the subcategory of acyclic objects and
+the arrows which induce isomorphisms on all $H^i$ are the
+quasi-isomorphisms. Thus this lemma is a special case of
+Derived Categories, Lemma \ref{derived-lemma-acyclic-general}.
+
+\medskip\noindent
+Set theoretical remark. The construction of the localization in
+Derived Categories, Proposition
+\ref{derived-proposition-construct-localization}
+assumes the given triangulated category is ``small'', i.e., that the
+underlying collection of objects forms a set. Let $V_\alpha$ be a
+partial universe (as in Sets, Section \ref{sets-section-sets-hierarchy})
+containing $(A, \text{d})$ and where the cofinality of $\alpha$
+is bigger than $\aleph_0$
+(see Sets, Proposition \ref{sets-proposition-exist-ordinals-large-cofinality}).
+Then we can consider the category $\text{Mod}_{(A, \text{d}), \alpha}$
+of differential graded $A$-modules contained in $V_\alpha$.
+A straightforward check shows that all the constructions used in
+the proof of Proposition \ref{proposition-homotopy-category-triangulated}
+work inside of $\text{Mod}_{(A, \text{d}), \alpha}$
+(because at worst we take finite direct sums of differential graded modules).
+Thus we obtain a triangulated category
+$\text{Qis}_\alpha^{-1}K(\text{Mod}_{(A, \text{d}), \alpha})$.
+We will see below that if $\beta > \alpha$, then the transition functors
+$$
+\text{Qis}_\alpha^{-1}K(\text{Mod}_{(A, \text{d}), \alpha})
+\longrightarrow
+\text{Qis}_\beta^{-1}K(\text{Mod}_{(A, \text{d}), \beta})
+$$
+are fully faithful as the morphism sets in the quotient categories
+are computed by maps in the homotopy categories from P-resolutions
+(the construction of a P-resolution in the proof of Lemma \ref{lemma-resolve}
+takes countable direct sums as well as direct sums indexed over subsets
+of the given module). The reader should therefore think of the category
+of the lemma as the union of these subcategories.
+\end{proof}
+
+\noindent
+Taking into account the set theoretical remark at the end of the
+proof of the preceding lemma we define the derived category as follows.
+
+\begin{definition}
+\label{definition-unbounded-derived-category}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $\text{Ac}$ and $\text{Qis}$ be as in Lemma \ref{lemma-acyclic}.
+The {\it derived category of $(A, \text{d})$} is the triangulated
+category
+$$
+D(A, \text{d}) =
+K(\text{Mod}_{(A, \text{d})})/\text{Ac} =
+\text{Qis}^{-1}K(\text{Mod}_{(A, \text{d})}).
+$$
+We denote $H^0 : D(A, \text{d}) \to \text{Mod}_R$ the unique functor
+whose composition with the quotient functor gives back the functor
+$H^0$ defined above.
+\end{definition}
+
+\noindent
+Here is the promised lemma computing morphism sets in the
+derived category.
+
+\begin{lemma}
+\label{lemma-hom-derived}
+Let $(A, \text{d})$ be a differential graded algebra.
+Let $M$ and $N$ be differential graded $A$-modules.
+\begin{enumerate}
+\item Let $P \to M$ be a P-resolution as in
+Lemma \ref{lemma-resolve}. Then
+$$
+\Hom_{D(A, \text{d})}(M, N) =
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(P, N)
+$$
+\item Let $N \to I$ be an I-resolution as in
+Lemma \ref{lemma-right-resolution}. Then
+$$
+\Hom_{D(A, \text{d})}(M, N) =
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(M, I)
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $P \to M$ be as in (1). Since $P \to M$ is a quasi-isomorphism we see that
+$$
+\Hom_{D(A, \text{d})}(P, N) = \Hom_{D(A, \text{d})}(M, N)
+$$
+by definition of the derived category. A morphism
+$f : P \to N$ in $D(A, \text{d})$ is equal to
+$s^{-1}f'$ where $f' : P \to N'$ is a morphism and
+$s : N \to N'$ is a quasi-isomorphism. Choose a distinguished triangle
+$$
+N \to N' \to Q \to N[1]
+$$
+As $s$ is a quasi-isomorphism, we see that $Q$ is acyclic. Thus
+$\Hom_{K(\text{Mod}_{(A, \text{d})})}(P, Q[k]) = 0$ for all $k$ by
+Lemma \ref{lemma-property-P-K-projective}. Since
+$\Hom_{K(\text{Mod}_{(A, \text{d})})}(P, -)$
+is cohomological, we conclude that we can lift $f' : P \to N'$
+uniquely to a morphism $f : P \to N$. This finishes the proof.
+
+\medskip\noindent
+The proof of (2) is dual to that of (1) using
+Lemma \ref{lemma-property-I-K-injective} instead of
+Lemma \ref{lemma-property-P-K-projective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-products}
+Let $(A, \text{d})$ be a differential graded algebra. Then
+\begin{enumerate}
+\item $D(A, \text{d})$ has both direct sums and products,
+\item direct sums are obtained by taking direct sums of differential graded
+modules,
+\item products are obtained by taking products of differential
+graded modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use that $\text{Mod}_{(A, \text{d})}$ is an abelian category
+with arbitrary direct sums and products, and that these give rise
+to direct sums and products in $K(\text{Mod}_{(A, \text{d})})$.
+See Lemmas \ref{lemma-dgm-abelian} and \ref{lemma-homotopy-direct-sums}.
+
+\medskip\noindent
+Let $M_j$ be a family of differential graded $A$-modules.
+Consider the graded direct sum $M = \bigoplus M_j$ which is a
differential graded $A$-module with the obvious grading and differential.
+For a differential graded $A$-module $N$ choose a quasi-isomorphism
+$N \to I$ where $I$ is a differential graded $A$-module with property (I).
+See Lemma \ref{lemma-right-resolution}.
+Using Lemma \ref{lemma-hom-derived} we have
+\begin{align*}
+\Hom_{D(A, \text{d})}(M, N)
+& =
\Hom_{K(\text{Mod}_{(A, \text{d})})}(M, I) \\
+& =
\prod \Hom_{K(\text{Mod}_{(A, \text{d})})}(M_j, I) \\
+& =
+\prod \Hom_{D(A, \text{d})}(M_j, N)
+\end{align*}
+whence the existence of direct sums in $D(A, \text{d})$ as given in
+part (2) of the lemma.
+
+\medskip\noindent
+Let $M_j$ be a family of differential graded $A$-modules.
+Consider the product $M = \prod M_j$ of differential graded $A$-modules.
+For a differential graded $A$-module $N$ choose a quasi-isomorphism
+$P \to N$ where $P$ is a differential graded $A$-module with property (P).
+See Lemma \ref{lemma-resolve}.
+Using Lemma \ref{lemma-hom-derived} we have
+\begin{align*}
+\Hom_{D(A, \text{d})}(N, M)
+& =
\Hom_{K(\text{Mod}_{(A, \text{d})})}(P, M) \\
+& =
\prod \Hom_{K(\text{Mod}_{(A, \text{d})})}(P, M_j) \\
+& =
+\prod \Hom_{D(A, \text{d})}(N, M_j)
+\end{align*}
whence the existence of products in $D(A, \text{d})$ as given in
part (3) of the lemma.
+\end{proof}
+
+\begin{remark}
+\label{remark-P-resolution}
+Let $R$ be a ring. Let $(A, \text{d})$ be a differential graded $R$-algebra.
+Using P-resolutions we can sometimes reduce statements about general
+objects of $D(A, \text{d})$ to statements about $A[k]$. Namely, let
+$T$ be a property of objects of $D(A, \text{d})$ and assume that
+\begin{enumerate}
+\item if $K_i$, $i \in I$ is a family of objects of $D(A, \text{d})$
+and $T(K_i)$ holds for all $i \in I$, then $T(\bigoplus K_i)$,
+\item if $K \to L \to M \to K[1]$ is a distinguished triangle of
+$D(A, \text{d})$ and $T$ holds for two, then $T$
+holds for the third object, and
+\item $T(A[k])$ holds for all $k \in \mathbf{Z}$.
+\end{enumerate}
+Then $T$ holds for all objects of $D(A, \text{d})$. This is clear from
+Lemmas \ref{lemma-property-P-sequence} and \ref{lemma-resolve}.
+\end{remark}
+
+
+
+
+
+
+
+\section{The canonical delta-functor}
+\label{section-canonical-delta-functor}
+
+\noindent
+Let $(A, \text{d})$ be a differential graded algebra.
+Consider the functor
+$\text{Mod}_{(A, \text{d})} \to K(\text{Mod}_{(A, \text{d})})$.
+This functor is {\bf not} a $\delta$-functor in general.
+However, it turns out that the functor
+$\text{Mod}_{(A, \text{d})} \to D(A, \text{d})$ is a
+$\delta$-functor. In order to see this we have to define
+the morphisms $\delta$ associated to a short exact sequence
+$$
+0 \to K \xrightarrow{a} L \xrightarrow{b} M \to 0
+$$
+in the abelian category $\text{Mod}_{(A, \text{d})}$.
+Consider the cone $C(a)$ of the morphism $a$. We have $C(a) = L \oplus K$
+and we define $q : C(a) \to M$ via the projection to $L$ followed
by $b$. Hence we obtain a homomorphism of differential graded $A$-modules
+$$
+q : C(a) \longrightarrow M.
+$$
It is clear that $q \circ i = b$ where $i$ and $p$ are as in
Definition \ref{definition-cone}.
+Note that, as $a$ is injective, the kernel of $q$ is identified with the
+cone of $\text{id}_K$ which is acyclic. Hence we see that
+$q$ is a quasi-isomorphism. According to
+Lemma \ref{lemma-the-same-up-to-isomorphisms}
+the triangle
+$$
+(K, L, C(a), a, i, -p)
+$$
+is a distinguished triangle in $K(\text{Mod}_{(A, \text{d})})$.
+As the localization functor
+$K(\text{Mod}_{(A, \text{d})}) \to D(A, \text{d})$ is
+exact we see that $(K, L, C(a), a, i, -p)$ is a distinguished
+triangle in $D(A, \text{d})$. Since $q$ is a quasi-isomorphism
+we see that $q$ is an isomorphism in $D(A, \text{d})$.
+Hence we deduce that
+$$
+(K, L, M, a, b, -p \circ q^{-1})
+$$
+is a distinguished triangle of $D(A, \text{d})$.
+This suggests the following lemma.
+
+\begin{lemma}
+\label{lemma-derived-canonical-delta-functor}
+Let $(A, \text{d})$ be a differential graded algebra. The functor
+$\text{Mod}_{(A, \text{d})} \to D(A, \text{d})$
defined above has the natural structure of a $\delta$-functor, with
+$$
+\delta_{K \to L \to M} = - p \circ q^{-1}
+$$
+with $p$ and $q$ as explained above.
+\end{lemma}
+
+\begin{proof}
+We have already seen that this choice leads to a distinguished
+triangle whenever given a short exact sequence of complexes.
+We have to show functoriality of this construction, see
+Derived Categories, Definition \ref{derived-definition-delta-functor}.
+This follows from Lemma \ref{lemma-functorial-cone} with a bit of
+work. Compare with
+Derived Categories, Lemma \ref{derived-lemma-derived-canonical-delta-functor}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-homotopy-colimit}
+Let $(A, \text{d})$ be a differential graded algebra. Let
+$M_n$ be a system of differential graded modules. Then the derived
+colimit $\text{hocolim} M_n$ in $D(A, \text{d})$ is represented
+by the differential graded module $\colim M_n$.
+\end{lemma}
+
+\begin{proof}
+Set $M = \colim M_n$. We have an exact sequence of differential graded modules
+$$
+0 \to \bigoplus M_n \to \bigoplus M_n \to M \to 0
+$$
+by Derived Categories, Lemma \ref{derived-lemma-compute-colimit}
(applied to the underlying complexes of abelian groups).
The direct sums are direct sums in $D(A, \text{d})$ by
+Lemma \ref{lemma-derived-products}.
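\medskip\noindent
To spell out the displayed sequence (as in the cited lemma), if
$t_n : M_n \to M_{n + 1}$ denote the transition maps of the system, then
the first arrow restricted to the summand $M_n$ is
$$
m \longmapsto m - t_n(m) \in M_n \oplus M_{n + 1}
$$
and the second arrow is the canonical map to the colimit.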
+Thus the result follows from the definition
+of derived colimits in Derived Categories,
+Definition \ref{derived-definition-derived-colimit}
+and the fact that a short exact sequence of complexes
+gives a distinguished triangle
+(Lemma \ref{lemma-derived-canonical-delta-functor}).
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Linear categories}
+\label{section-linear}
+
+\noindent
+Just the definitions.
+
+\begin{definition}
+\label{definition-linear-category}
+Let $R$ be a ring. An {\it $R$-linear category $\mathcal{A}$} is a category
+where every morphism set is given the structure of an $R$-module
and where for $x, y, z \in \Ob(\mathcal{A})$ the composition law
+$$
+\Hom_\mathcal{A}(y, z) \times \Hom_\mathcal{A}(x, y)
+\longrightarrow
+\Hom_\mathcal{A}(x, z)
+$$
+is $R$-bilinear.
+\end{definition}
+
+\noindent
+Thus composition determines an $R$-linear map
+$$
+\Hom_\mathcal{A}(y, z) \otimes_R \Hom_\mathcal{A}(x, y)
+\longrightarrow
+\Hom_\mathcal{A}(x, z)
+$$
+of $R$-modules. Note that we do not assume $R$-linear categories to be
+additive.
+
+\begin{definition}
+\label{definition-functor-linear-categories}
+Let $R$ be a ring. A {\it functor of $R$-linear categories}, or an
+{\it $R$-linear functor} is a functor $F : \mathcal{A} \to \mathcal{B}$
+where for all objects $x, y$ of $\mathcal{A}$ the map
+$F : \Hom_\mathcal{A}(x, y) \to \Hom_\mathcal{B}(F(x), F(y))$
+is a homomorphism of $R$-modules.
+\end{definition}
+
+
+
+
+
+
+
+\section{Graded categories}
+\label{section-graded}
+
+\noindent
+Just some definitions.
+
+\begin{definition}
+\label{definition-graded-category}
+Let $R$ be a ring. A {\it graded category $\mathcal{A}$
+over $R$} is a category where every morphism set is given the structure
+of a graded $R$-module and where for
+$x, y, z \in \Ob(\mathcal{A})$ composition is $R$-bilinear and induces
+a homomorphism
+$$
+\Hom_\mathcal{A}(y, z) \otimes_R \Hom_\mathcal{A}(x, y)
+\longrightarrow
+\Hom_\mathcal{A}(x, z)
+$$
+of graded $R$-modules (i.e., preserving degrees).
+\end{definition}
+
+\noindent
+In this situation we denote $\Hom_\mathcal{A}^i(x, y)$ the degree $i$
+part of the graded object $\Hom_\mathcal{A}(x, y)$, so that
+$$
+\Hom_\mathcal{A}(x, y) =
+\bigoplus\nolimits_{i \in \mathbf{Z}} \Hom_\mathcal{A}^i(x, y)
+$$
+is the direct sum decomposition into graded parts.
+
+\begin{definition}
+\label{definition-functor-graded-categories}
+Let $R$ be a ring. A {\it functor of graded categories over $R$}, or a
+{\it graded functor}
+is a functor $F : \mathcal{A} \to \mathcal{B}$ where for all objects
+$x, y$ of $\mathcal{A}$ the map
$F : \Hom_\mathcal{A}(x, y) \to \Hom_\mathcal{B}(F(x), F(y))$
+is a homomorphism of graded $R$-modules.
+\end{definition}
+
+\noindent
+Given a graded category we are often interested in the
+corresponding ``usual'' category of maps of degree $0$.
+Here is a formal definition.
+
+\begin{definition}
+\label{definition-H0-of-graded-category}
+Let $R$ be a ring. Let $\mathcal{A}$ be a graded category
+over $R$. We let {\it $\mathcal{A}^0$} be the category with the
+same objects as $\mathcal{A}$ and with
+$$
+\Hom_{\mathcal{A}^0}(x, y) = \Hom^0_\mathcal{A}(x, y)
+$$
+the degree $0$ graded piece of the graded module of morphisms of
+$\mathcal{A}$.
+\end{definition}
+
+\begin{definition}
+\label{definition-graded-direct-sum}
+Let $R$ be a ring. Let $\mathcal{A}$ be a graded category over $R$.
+A direct sum $(x, y, z, i, j, p, q)$ in $\mathcal{A}$ (notation as in
+Homology, Remark \ref{homology-remark-direct-sum})
+is a {\it graded direct sum} if $i, j, p, q$ are homogeneous
+of degree $0$.
+\end{definition}
+
+\begin{example}[Graded category of graded objects]
+\label{example-graded-category-graded-objects}
+Let $\mathcal{B}$ be an additive category. Recall that we have defined
+the category $\text{Gr}(\mathcal{B})$ of graded objects of $\mathcal{B}$ in
+Homology, Definition \ref{homology-definition-graded}.
+In this example, we will construct a graded category
+$\text{Gr}^{gr}(\mathcal{B})$ over $R = \mathbf{Z}$
+whose associated category $\text{Gr}^{gr}(\mathcal{B})^0$
+recovers $\text{Gr}(\mathcal{B})$.
+As objects of $\text{Gr}^{gr}(\mathcal{B})$
+we take graded objects of $\mathcal{B}$. Then, given graded objects
+$A = (A^i)$ and $B = (B^i)$ of $\mathcal{B}$ we set
+$$
+\Hom_{\text{Gr}^{gr}(\mathcal{B})}(A, B) =
+\bigoplus\nolimits_{n \in \mathbf{Z}} \Hom^n(A, B)
+$$
+where the graded piece of degree $n$ is the abelian group of homogeneous
+maps of degree $n$ from $A$ to $B$. Explicitly we have
+$$
+\Hom^n(A, B) = \prod\nolimits_{p + q = n} \Hom_\mathcal{B}(A^{-q}, B^p)
+$$
+(observe reversal of indices and observe that we have a product here and
+not a direct sum). In other words, a degree $n$ morphism $f$
+from $A$ to $B$ can be seen as a system $f = (f_{p, q})$ where
+$p, q \in \mathbf{Z}$, $p + q = n$ with
+$f_{p, q} : A^{-q} \to B^p$ a morphism of $\mathcal{B}$.
+Given graded objects $A$, $B$, $C$ of $\mathcal{B}$
+composition of morphisms in $\text{Gr}^{gr}(\mathcal{B})$ is defined
+via the maps
+$$
+\Hom^m(B, C) \times \Hom^n(A, B) \longrightarrow \Hom^{n + m}(A, C)
+$$
+by simple composition $(g, f) \mapsto g \circ f$ of homogeneous
+maps of graded objects. In terms of components we have
+$$
+(g \circ f)_{p, r} = g_{p, q} \circ f_{-q, r}
+$$
+where $q$ is such that $p + q = m$ and $-q + r = n$.
+\end{example}
+
+\begin{example}[Graded category of graded modules]
+\label{example-gm-gr-cat}
+Let $A$ be a $\mathbf{Z}$-graded algebra over a ring $R$. We will construct
+a graded category $\text{Mod}^{gr}_A$ over $R$ whose associated category
+$(\text{Mod}^{gr}_A)^0$ is the category of graded $A$-modules. As objects
+of $\text{Mod}^{gr}_A$ we take right graded $A$-modules (see
+Section \ref{section-projectives-over-algebras}). Given graded
+$A$-modules $L$ and $M$ we set
+$$
+\Hom_{\text{Mod}^{gr}_A}(L, M) =
+\bigoplus\nolimits_{n \in \mathbf{Z}} \Hom^n(L, M)
+$$
+where $\Hom^n(L, M)$ is the set of right $A$-module maps $L \to M$ which
+are homogeneous of degree $n$, i.e., $f(L^i) \subset M^{i + n}$ for
+all $i \in \mathbf{Z}$. In terms of components, we have that
+$$
+\Hom^n(L, M) \subset \prod\nolimits_{p + q = n} \Hom_R(L^{-q}, M^p)
+$$
+(observe reversal of indices) is the subset consisting of those
+$f = (f_{p, q})$ such that
+$$
+f_{p, q}(m a) = f_{p - i, q + i}(m)a
+$$
+for $a \in A^i$ and $m \in L^{-q - i}$. For graded $A$-modules
+$K$, $L$, $M$ we define composition in $\text{Mod}^{gr}_A$ via
+the maps
+$$
+\Hom^m(L, M) \times \Hom^n(K, L) \longrightarrow \Hom^{n + m}(K, M)
+$$
+by simple composition of right $A$-module maps: $(g, f) \mapsto g \circ f$.
+\end{example}
+
+\begin{remark}
+\label{remark-graded-shift-functors}
+Let $R$ be a ring. Let $\mathcal{D}$ be an $R$-linear category endowed with a
+collection of $R$-linear functors $[n] : \mathcal{D} \to \mathcal{D}$,
+$x \mapsto x[n]$ indexed by $n \in \mathbf{Z}$ such that
+$[n] \circ [m] = [n + m]$ and $[0] = \text{id}_\mathcal{D}$ (equality as
+functors). This allows us to construct a graded category $\mathcal{D}^{gr}$
over $R$ with the same objects as $\mathcal{D}$ by setting
+$$
+\Hom_{\mathcal{D}^{gr}}(x, y) =
+\bigoplus\nolimits_{n \in \mathbf{Z}} \Hom_\mathcal{D}(x, y[n])
+$$
+for $x, y$ in $\mathcal{D}$. Observe that $(\mathcal{D}^{gr})^0 = \mathcal{D}$
+(see Definition \ref{definition-H0-of-graded-category}). Moreover, the graded
+category $\mathcal{D}^{gr}$ inherits $R$-linear graded functors $[n]$
+satisfying $[n] \circ [m] = [n + m]$ and $[0] = \text{id}_{\mathcal{D}^{gr}}$
+with the property that
+$$
+\Hom_{\mathcal{D}^{gr}}(x, y[n]) = \Hom_{\mathcal{D}^{gr}}(x, y)[n]
+$$
+as graded $R$-modules compatible with composition of morphisms.
+
+\medskip\noindent
+Conversely, suppose given a graded category $\mathcal{A}$ over $R$ endowed
+with a collection of $R$-linear graded functors $[n]$
+satisfying $[n] \circ [m] = [n + m]$ and $[0] = \text{id}_\mathcal{A}$
+which are moreover equipped with isomorphisms
+$$
+\Hom_\mathcal{A}(x, y[n]) = \Hom_\mathcal{A}(x, y)[n]
+$$
+as graded $R$-modules compatible with composition of morphisms. Then
+the reader easily shows that $\mathcal{A} = (\mathcal{A}^0)^{gr}$.
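Here is a sketch of this claim, assuming the shift convention
$(M[n])^i = M^{n + i}$ for graded modules: the given isomorphisms identify
$$
\Hom^n_\mathcal{A}(x, y) =
\left(\Hom_\mathcal{A}(x, y)[n]\right)^0 =
\Hom^0_\mathcal{A}(x, y[n]) =
\Hom_{\mathcal{A}^0}(x, y[n]) =
\Hom^n_{(\mathcal{A}^0)^{gr}}(x, y)
$$
compatibly with composition, which identifies $\mathcal{A}$ with
$(\mathcal{A}^0)^{gr}$.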
+
+\medskip\noindent
+Here are two examples of the relationship
+$\mathcal{D} \leftrightarrow \mathcal{A}$ we established above:
+\begin{enumerate}
+\item Let $\mathcal{B}$ be an additive category. If
+$\mathcal{D} = \text{Gr}(\mathcal{B})$, then
+$\mathcal{A} = \text{Gr}^{gr}(\mathcal{B})$ as in
+Example \ref{example-graded-category-graded-objects}.
+\item If $A$ is a graded ring and $\mathcal{D} = \text{Mod}_A$
+is the category of graded right $A$-modules, then
+$\mathcal{A} = \text{Mod}^{gr}_A$, see Example \ref{example-gm-gr-cat}.
+\end{enumerate}
+\end{remark}
+
+
+
+
+
+
+\section{Differential graded categories}
+\label{section-dga-categories}
+
+\noindent
+Note that if $R$ is a ring, then $R$ is a differential graded algebra
+over itself (with $R = R^0$ of course). In this case a differential
+graded $R$-module is the same thing as a complex of $R$-modules.
+In particular, given two differential graded $R$-modules $M$ and $N$
+we denote $M \otimes_R N$ the differential graded $R$-module
+corresponding to the total complex associated to the double
+complex obtained by the tensor product of the complexes of $R$-modules
+associated to $M$ and $N$.
+
+\begin{definition}
+\label{definition-dga-category}
+Let $R$ be a ring. A {\it differential graded category $\mathcal{A}$
+over $R$} is a category where every morphism set is given the structure
+of a differential graded $R$-module and where for
+$x, y, z \in \Ob(\mathcal{A})$ composition is $R$-bilinear and induces
+a homomorphism
+$$
+\Hom_\mathcal{A}(y, z) \otimes_R \Hom_\mathcal{A}(x, y)
+\longrightarrow
+\Hom_\mathcal{A}(x, z)
+$$
+of differential graded $R$-modules.
+\end{definition}
+
+\noindent
+The final condition of the definition signifies the following:
+if $f \in \Hom_\mathcal{A}^n(x, y)$ and
+$g \in \Hom_\mathcal{A}^m(y, z)$ are homogeneous
+of degrees $n$ and $m$, then
+$$
+\text{d}(g \circ f) = \text{d}(g) \circ f + (-1)^mg \circ \text{d}(f)
+$$
+in $\Hom_\mathcal{A}^{n + m + 1}(x, z)$. This follows from the sign
+rule for the differential on the total complex of a double complex, see
+Homology, Definition \ref{homology-definition-associated-simple-complex}.
+
+\begin{definition}
+\label{definition-functor-dga-categories}
+Let $R$ be a ring. A {\it functor of differential graded categories over $R$}
+is a functor $F : \mathcal{A} \to \mathcal{B}$ where for all objects
+$x, y$ of $\mathcal{A}$ the map
$F : \Hom_\mathcal{A}(x, y) \to \Hom_\mathcal{B}(F(x), F(y))$
+is a homomorphism of differential graded $R$-modules.
+\end{definition}
+
+\noindent
+Given a differential graded category we are often interested in the
+corresponding categories of complexes and homotopy category.
+Here is a formal definition.
+
+\begin{definition}
+\label{definition-homotopy-category-of-dga-category}
+Let $R$ be a ring. Let $\mathcal{A}$ be a differential graded category
+over $R$. Then we let
+\begin{enumerate}
+\item the {\it category of complexes of $\mathcal{A}$}\footnote{This may
+be nonstandard terminology.} be the category
+$\text{Comp}(\mathcal{A})$ whose objects are the same as the objects
+of $\mathcal{A}$ and with
+$$
+\Hom_{\text{Comp}(\mathcal{A})}(x, y) =
\Ker(\text{d} : \Hom^0_\mathcal{A}(x, y) \to \Hom^1_\mathcal{A}(x, y))
+$$
+\item the {\it homotopy category of $\mathcal{A}$} be the category
+$K(\mathcal{A})$ whose objects are the same as the objects
+of $\mathcal{A}$ and with
+$$
+\Hom_{K(\mathcal{A})}(x, y) = H^0(\Hom_\mathcal{A}(x, y))
+$$
+\end{enumerate}
+\end{definition}
+
+\noindent
+Our use of the symbol $K(\mathcal{A})$ is nonstandard, but at least
+is compatible with the use of $K(-)$ in other chapters of the Stacks project.
+
+\begin{definition}
+\label{definition-dg-direct-sum}
+Let $R$ be a ring. Let $\mathcal{A}$ be a differential graded category over
+$R$. A direct sum $(x, y, z, i, j, p, q)$ in $\mathcal{A}$ (notation as in
+Homology, Remark \ref{homology-remark-direct-sum})
+is a {\it differential graded direct sum} if $i, j, p, q$ are homogeneous
+of degree $0$ and closed, i.e., $\text{d}(i) = 0$, etc.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-functorial}
+Let $R$ be a ring. A functor $F : \mathcal{A} \to \mathcal{B}$
+of differential graded categories over $R$ induces functors
+$\text{Comp}(\mathcal{A}) \to \text{Comp}(\mathcal{B})$
+and $K(\mathcal{A}) \to K(\mathcal{B})$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{example}[Differential graded category of complexes]
+\label{example-category-complexes}
+Let $\mathcal{B}$ be an additive category. We will construct
+a differential graded category $\text{Comp}^{dg}(\mathcal{B})$
+over $R = \mathbf{Z}$ whose associated category of complexes
+is $\text{Comp}(\mathcal{B})$ and whose associated homotopy
+category is $K(\mathcal{B})$. As objects of $\text{Comp}^{dg}(\mathcal{B})$
+we take complexes of $\mathcal{B}$. Given complexes
+$A^\bullet$ and $B^\bullet$ of $\mathcal{B}$, we sometimes also
+denote $A^\bullet$ and $B^\bullet$ the corresponding graded objects
+of $\mathcal{B}$ (i.e., forget about the differential).
+Using this abuse of notation, we set
+$$
+\Hom_{\text{Comp}^{dg}(\mathcal{B})}(A^\bullet, B^\bullet) =
+\Hom_{\text{Gr}^{gr}(\mathcal{B})}(A^\bullet, B^\bullet) =
+\bigoplus\nolimits_{n \in \mathbf{Z}} \Hom^n(A, B)
+$$
+as a graded $\mathbf{Z}$-module with notation and definitions as
+in Example \ref{example-graded-category-graded-objects}.
+In other words, the $n$th graded piece is
the abelian group of homogeneous morphisms of degree $n$ of graded objects
+$$
+\Hom^n(A^\bullet, B^\bullet) =
+\prod\nolimits_{p + q = n} \Hom_\mathcal{B}(A^{-q}, B^p)
+$$
+Observe reversal of indices and observe we have a direct product
+and not a direct sum. For an element
+$f \in \Hom^n(A^\bullet, B^\bullet)$ of degree $n$ we set
+$$
+\text{d}(f) = \text{d}_B \circ f - (-1)^n f \circ \text{d}_A
+$$
+The sign is exactly as in
+More on Algebra, Section \ref{more-algebra-section-sign-rules}.
+To make sense of this we think of $\text{d}_B$ and $\text{d}_A$
+as maps of graded objects of $\mathcal{B}$ homogeneous of degree $1$
+and we use composition in the category $\text{Gr}^{gr}(\mathcal{B})$
+on the right hand side. In terms of components, if $f = (f_{p, q})$ with
+$f_{p, q} : A^{-q} \to B^p$ we have
+\begin{equation}
+\label{equation-differential-hom-complex}
+\text{d}(f_{p, q}) =
+\text{d}_B \circ f_{p, q} - (-1)^{p + q} f_{p, q} \circ \text{d}_A
+\end{equation}
+Note that the first term of this expression is in
+$\Hom_\mathcal{B}(A^{-q}, B^{p + 1})$ and the second term is in
+$\Hom_\mathcal{B}(A^{-q - 1}, B^p)$. The reader checks that
+\begin{enumerate}
+\item $\text{d}$ has square zero,
+\item an element $f$ in $\Hom^n(A^\bullet, B^\bullet)$
+has $\text{d}(f) = 0$ if and only if the morphism
+$f : A^\bullet \to B^\bullet[n]$ of graded objects of $\mathcal{B}$
+is actually a map of complexes,
+\item in particular, the category of complexes of
+$\text{Comp}^{dg}(\mathcal{B})$ is equal to $\text{Comp}(\mathcal{B})$,
+\item the morphism of complexes defined by $f$ as in (2)
+is homotopy equivalent to zero if and only if $f = \text{d}(g)$
+for some $g \in \Hom^{n - 1}(A^\bullet, B^\bullet)$.
+\item in particular, we obtain a canonical isomorphism
+$$
+\Hom_{K(\mathcal{B})}(A^\bullet, B^\bullet)
+\longrightarrow
+H^0(\Hom_{\text{Comp}^{dg}(\mathcal{B})}(A^\bullet, B^\bullet))
+$$
+and the homotopy category of $\text{Comp}^{dg}(\mathcal{B})$ is equal to
+$K(\mathcal{B})$.
+\end{enumerate}
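For instance, part (1) can be checked directly: for
$f \in \Hom^n(A^\bullet, B^\bullet)$, expanding
$\text{d}(f) = \text{d}_B \circ f - (-1)^n f \circ \text{d}_A$ gives
$$
\text{d}(\text{d}(f))
= \text{d}_B \circ \text{d}(f) - (-1)^{n + 1} \text{d}(f) \circ \text{d}_A
= - (-1)^n \text{d}_B \circ f \circ \text{d}_A
+ (-1)^n \text{d}_B \circ f \circ \text{d}_A
= 0
$$
using $\text{d}_B \circ \text{d}_B = 0$ and
$\text{d}_A \circ \text{d}_A = 0$.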
+Given complexes $A^\bullet$, $B^\bullet$, $C^\bullet$ we define
+composition
+$$
+\Hom^m(B^\bullet, C^\bullet) \times \Hom^n(A^\bullet, B^\bullet)
+\longrightarrow
+\Hom^{n + m}(A^\bullet, C^\bullet)
+$$
+by composition $(g, f) \mapsto g \circ f$ in the graded category
+$\text{Gr}^{gr}(\mathcal{B})$, see
+Example \ref{example-graded-category-graded-objects}.
+This defines a map of differential graded modules
+$$
+\Hom_{\text{Comp}^{dg}(\mathcal{B})}(B^\bullet, C^\bullet)
+\otimes_R
+\Hom_{\text{Comp}^{dg}(\mathcal{B})}(A^\bullet, B^\bullet)
+\longrightarrow
+\Hom_{\text{Comp}^{dg}(\mathcal{B})}(A^\bullet, C^\bullet)
+$$
+as required in Definition \ref{definition-dga-category}
+because
+\begin{align*}
+\text{d}(g \circ f) & =
+\text{d}_C \circ g \circ f - (-1)^{n + m} g \circ f \circ \text{d}_A \\
+& =
+\left(\text{d}_C \circ g - (-1)^m g \circ \text{d}_B\right) \circ f +
+(-1)^m g \circ \left(\text{d}_B \circ f - (-1)^n f \circ \text{d}_A\right) \\
+& =
+\text{d}(g) \circ f + (-1)^m g \circ \text{d}(f)
+\end{align*}
+as desired.
+\end{example}
+
+\begin{lemma}
+\label{lemma-additive-functor-induces-dga-functor}
+Let $F : \mathcal{B} \to \mathcal{B}'$ be an additive functor between
+additive categories. Then $F$ induces a functor of differential
+graded categories
+$$
+F : \text{Comp}^{dg}(\mathcal{B}) \to \text{Comp}^{dg}(\mathcal{B}')
+$$
+of Example \ref{example-category-complexes}
+inducing the usual functors on the category of complexes and the
+homotopy categories.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{example}[Differential graded category of differential graded modules]
+\label{example-dgm-dg-cat}
+Let $(A, \text{d})$ be a differential graded algebra over a ring $R$. We will
+construct a differential graded category $\text{Mod}^{dg}_{(A, \text{d})}$
+over $R$ whose category of complexes is $\text{Mod}_{(A, \text{d})}$ and
+whose homotopy category is $K(\text{Mod}_{(A, \text{d})})$.
+As objects of $\text{Mod}^{dg}_{(A, \text{d})}$
+we take the differential graded $A$-modules. Given differential
+graded $A$-modules $L$ and $M$ we set
+$$
+\Hom_{\text{Mod}^{dg}_{(A, \text{d})}}(L, M) =
+\Hom_{\text{Mod}^{gr}_A}(L, M) = \bigoplus \Hom^n(L, M)
+$$
+as a graded $R$-module where the right hand side is defined as in
+Example \ref{example-gm-gr-cat}. In other words, the $n$th graded piece
+$\Hom^n(L, M)$ is the $R$-module of right $A$-module maps homogeneous
+of degree $n$. For an element $f \in \Hom^n(L, M)$ we set
+$$
+\text{d}(f) = \text{d}_M \circ f - (-1)^n f \circ \text{d}_L
+$$
+To make sense of this we think of $\text{d}_M$ and $\text{d}_L$
+as graded $R$-module maps and we use composition of graded
+$R$-module maps. It is clear that $\text{d}(f)$ is homogeneous of
+degree $n + 1$ as a graded $R$-module map, and it is $A$-linear
+because
+\begin{align*}
+\text{d}(f)(xa)
+& =
+\text{d}_M(f(x) a) - (-1)^n f (\text{d}_L(xa)) \\
+& =
+\text{d}_M(f(x)) a + (-1)^{\deg(x) + n} f(x) \text{d}(a)
+- (-1)^n f(\text{d}_L(x)) a - (-1)^{n + \deg(x)} f(x) \text{d}(a) \\
+& = \text{d}(f)(x) a
+\end{align*}
+as desired (observe that this calculation would not work without the
+sign in the definition of our differential on $\Hom$). Similar formulae
+to those of Example \ref{example-category-complexes} hold for the
+differential of $f$ in terms of components.
+The reader checks (in the same way as in
+Example \ref{example-category-complexes}) that
+\begin{enumerate}
+\item $\text{d}$ has square zero,
+\item an element $f$ in $\Hom^n(L, M)$ has $\text{d}(f) = 0$ if and only if
+$f : L \to M[n]$ is a homomorphism of differential graded $A$-modules,
+\item in particular, the category of complexes of
+$\text{Mod}^{dg}_{(A, \text{d})}$ is $\text{Mod}_{(A, \text{d})}$,
+\item the homomorphism defined by $f$ as in (2) is homotopy equivalent
+to zero if and only if $f = \text{d}(g)$ for some
+$g \in \Hom^{n - 1}(L, M)$.
+\item in particular, we obtain a canonical isomorphism
+$$
+\Hom_{K(\text{Mod}_{(A, \text{d})})}(L, M)
+\longrightarrow
+H^0(\Hom_{\text{Mod}^{dg}_{(A, \text{d})}}(L, M))
+$$
+and the homotopy category of $\text{Mod}^{dg}_{(A, \text{d})}$ is
+$K(\text{Mod}_{(A, \text{d})})$.
+\end{enumerate}
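For instance, here is a sketch of part (2), assuming the convention
(as for complexes) that the differential on $M[n]$ is $(-1)^n \text{d}_M$:
for $f \in \Hom^n(L, M)$ the condition $\text{d}(f) = 0$ reads
$\text{d}_M \circ f = (-1)^n f \circ \text{d}_L$, i.e.,
$$
f \circ \text{d}_L = (-1)^n\, \text{d}_M \circ f = \text{d}_{M[n]} \circ f
$$
which says exactly that $f : L \to M[n]$ is a homomorphism of
differential graded $A$-modules.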
+Given differential graded $A$-modules $K$, $L$, $M$ we define
+composition
+$$
+\Hom^m(L, M) \times \Hom^n(K, L) \longrightarrow \Hom^{n + m}(K, M)
+$$
+by composition of homogeneous right $A$-module maps $(g, f) \mapsto g \circ f$.
+This defines a map of differential graded modules
+$$
+\Hom_{\text{Mod}^{dg}_{(A, \text{d})}}(L, M) \otimes_R
+\Hom_{\text{Mod}^{dg}_{(A, \text{d})}}(K, L) \longrightarrow
+\Hom_{\text{Mod}^{dg}_{(A, \text{d})}}(K, M)
+$$
+as required in
+Definition \ref{definition-dga-category}
+because
+\begin{align*}
+\text{d}(g \circ f) & =
+\text{d}_M \circ g \circ f - (-1)^{n + m} g \circ f \circ \text{d}_K \\
+& =
+\left(\text{d}_M \circ g - (-1)^m g \circ \text{d}_L\right) \circ f +
+(-1)^m g \circ \left(\text{d}_L \circ f - (-1)^n f \circ \text{d}_K\right) \\
+& =
+\text{d}(g) \circ f + (-1)^m g \circ \text{d}(f)
+\end{align*}
+as desired.
+\end{example}
+
+\begin{lemma}
+\label{lemma-homomorphism-induces-dga-functor}
+Let $\varphi : (A, \text{d}) \to (E, \text{d})$ be a homomorphism of
+differential graded algebras. Then $\varphi$ induces a functor of differential
+graded categories
+$$
+F :
+\text{Mod}^{dg}_{(E, \text{d})}
+\longrightarrow
+\text{Mod}^{dg}_{(A, \text{d})}
+$$
+of Example \ref{example-dgm-dg-cat} inducing obvious restriction functors
+on the categories of differential graded modules and homotopy categories.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-construction}
+Let $R$ be a ring. Let $\mathcal{A}$ be a differential graded category
+over $R$. Let $x$ be an object of $\mathcal{A}$. Let
+$$
+(E, \text{d}) = \Hom_\mathcal{A}(x, x)
+$$
+be the differential graded $R$-algebra of endomorphisms of $x$.
+We obtain a functor
+$$
+\mathcal{A} \longrightarrow \text{Mod}^{dg}_{(E, \text{d})},\quad
+y \longmapsto \Hom_\mathcal{A}(x, y)
+$$
+of differential graded categories by letting $E$ act on
+$\Hom_\mathcal{A}(x, y)$ via composition in $\mathcal{A}$.
+This functor induces functors
+$$
\text{Comp}(\mathcal{A}) \to \text{Mod}_{(E, \text{d})}
\quad\text{and}\quad
K(\mathcal{A}) \to K(\text{Mod}_{(E, \text{d})})
+$$
+by an application of Lemma \ref{lemma-functorial}.
+\end{lemma}
+
+\begin{proof}
+This lemma proves itself.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Obtaining triangulated categories}
+\label{section-review}
+
+\noindent
+In this section we discuss the most general setup to which the arguments
+proving Derived Categories, Proposition
+\ref{derived-proposition-homotopy-category-triangulated} and
+Proposition \ref{proposition-homotopy-category-triangulated} apply.
+
+\medskip\noindent
+Let $R$ be a ring. Let $\mathcal{A}$ be a differential graded category
+over $R$. To make our argument work, we impose some axioms on $\mathcal{A}$:
+\begin{enumerate}
+\item[(A)] $\mathcal{A}$ has a zero object and differential
+graded direct sums of two objects
+(as in Definition \ref{definition-dg-direct-sum}).
+\item[(B)] there are functors $[n] : \mathcal{A} \longrightarrow \mathcal{A}$
+of differential graded categories such that
+$[0] = \text{id}_\mathcal{A}$ and $[n + m] = [n] \circ [m]$
and we are given isomorphisms
+$$
+\Hom_\mathcal{A}(x, y[n]) = \Hom_\mathcal{A}(x, y)[n]
+$$
+of differential graded $R$-modules compatible with composition.
+\end{enumerate}
+
+\noindent
+Given our differential graded category $\mathcal{A}$ we say
+\begin{enumerate}
+\item a sequence $x \to y \to z$ of morphisms of $\text{Comp}(\mathcal{A})$
+is an {\it admissible short exact sequence} if there exists
+an isomorphism $y \cong x \oplus z$ in the underlying graded category
such that $x \to y$ and $y \to z$ are the (co)projections.
+\item a morphism $x\to y$ of $\text{Comp}(\mathcal{A})$ is an
+{\it admissible monomorphism} if it extends to an
+admissible short exact sequence $x\to y\to z$.
+\item a morphism $y\to z$ of $\text{Comp}(\mathcal{A})$ is an
+{\it admissible epimorphism} if it extends to an
+admissible short exact sequence $x\to y\to z$.
+\end{enumerate}
+The next lemma tells us an admissible short exact sequence gives a
+triangle, provided we have axioms (A) and (B).
+
+\begin{lemma}
+\label{lemma-get-triangle}
+Let $\mathcal{A}$ be a differential graded category satisfying
+axioms (A) and (B). Given an admissible short exact sequence
+$x \to y \to z$ we obtain (see proof) a triangle
+$$
+x \to y \to z \to x[1]
+$$
+in $\text{Comp}(\mathcal{A})$ with the property that any two compositions
+in $z[-1] \to x \to y \to z \to x[1]$ are zero in $K(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+Choose a diagram
+$$
+\xymatrix{
+x \ar[rr]_1 \ar[rd]_a & & x \\
+& y \ar[ru]_\pi \ar[rd]^b & \\
+z \ar[rr]^1 \ar[ru]^s & & z
+}
+$$
+giving the isomorphism of graded objects $y \cong x \oplus z$ as in the
+definition of an admissible short exact sequence. Here are some equations
+that hold in this situation
+\begin{enumerate}
+\item $1 = \pi a$ and hence $\text{d}(\pi) a = 0$,
+\item $1 = b s$ and hence $b \text{d}(s) = 0$,
+\item $1 = a \pi + s b$ and hence $a \text{d}(\pi) + \text{d}(s) b = 0$,
+\item $\pi s = 0$ and hence $\text{d}(\pi)s + \pi \text{d}(s) = 0$,
+\item $\text{d}(s) = a \pi \text{d}(s)$ because
+$\text{d}(s) = (a \pi + s b)\text{d}(s)$ and $b\text{d}(s) = 0$,
+\item $\text{d}(\pi) = \text{d}(\pi) s b$ because
+$\text{d}(\pi) = \text{d}(\pi)(a \pi + s b)$ and $\text{d}(\pi)a = 0$,
+\item $\text{d}(\pi \text{d}(s)) = 0$ because if we postcompose it
+with the monomorphism $a$ we get
+$\text{d}(a\pi \text{d}(s)) = \text{d}(\text{d}(s)) = 0$, and
+\item $\text{d}(\text{d}(\pi)s) = 0$ as by (4) it is the negative
+of $\text{d}(\pi\text{d}(s))$ which is $0$ by (7).
+\end{enumerate}
+We've used repeatedly that $\text{d}(a) = 0$, $\text{d}(b) = 0$,
+and that $\text{d}(1) = 0$. By (7) we see that
+$$
+\delta = \pi \text{d}(s) = - \text{d}(\pi) s : z \to x[1]
+$$
+is a morphism in $\text{Comp}(\mathcal{A})$. By (5) we see that
+the composition $a \delta = a \pi \text{d}(s) = \text{d}(s)$
+is homotopic to zero. By (6) we see that the composition
+$\delta b = - \text{d}(\pi)sb = \text{d}(-\pi)$ is homotopic to zero.
+\end{proof}
+
+\noindent
+Besides axioms (A) and (B) we need an axiom concerning the existence of
+cones. We formalize everything as follows.
+
+\begin{situation}
+\label{situation-ABC}
+Here $R$ is a ring and $\mathcal{A}$ is a differential graded category
+over $R$ having axioms (A), (B), and
+\begin{enumerate}
+\item[(C)] given an arrow $f : x \to y$ of degree $0$ with
+$\text{d}(f) = 0$ there exists an admissible short exact sequence
+$y \to c(f) \to x[1]$ in $\text{Comp}(\mathcal{A})$ such that the map
+$x[1] \to y[1]$ of Lemma \ref{lemma-get-triangle} is equal to $f[1]$.
+\end{enumerate}
+\end{situation}
+
+\noindent
+We will call $c(f)$ a {\it cone} of the morphism $f$.
+If (A), (B), and (C) hold, then
+cones are functorial in a weak sense.
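+
+\medskip\noindent
+For instance, if $\mathcal{A}$ is the differential graded category of
+complexes of $R$-modules, then a cone of a morphism of complexes
+$f : x \to y$ is given by the usual mapping cone: the graded module
+$y \oplus x[1]$ with differential given (up to sign conventions) by
+$$
+\text{d}(y', x') = (\text{d}(y') + f(x'), -\text{d}(x'))
+$$
+and with $y \to c(f) \to x[1]$ the evident termwise split sequence.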
+
+\begin{lemma}
+\label{lemma-cone}
+\begin{slogan}
+The homotopy category is a triangulated category.
+This lemma proves a part of the axioms of a triangulated category.
+\end{slogan}
+In Situation \ref{situation-ABC} suppose that
+$$
+\xymatrix{
+x_1 \ar[r]_{f_1} \ar[d]_a & y_1 \ar[d]^b \\
+x_2 \ar[r]^{f_2} & y_2
+}
+$$
+is a diagram of $\text{Comp}(\mathcal{A})$ commutative up to homotopy.
+Then there exists a morphism $c : c(f_1) \to c(f_2)$ which gives rise to
+a morphism of triangles
+$$
+(a, b, c) : (x_1, y_1, c(f_1)) \to (x_2, y_2, c(f_2))
+$$
+in $K(\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+The assumption means there exists a morphism $h : x_1 \to y_2$ of degree
+$-1$ such that $\text{d}(h) = b f_1 - f_2 a$. Choose isomorphisms
+$c(f_i) = y_i \oplus x_i[1]$ of graded objects compatible with the
+morphisms $y_i \to c(f_i) \to x_i[1]$. Let's denote
+$a_i : y_i \to c(f_i)$, $b_i : c(f_i) \to x_i[1]$, $s_i : x_i[1] \to c(f_i)$,
+and $\pi_i : c(f_i) \to y_i$ the given morphisms. Recall that
+$x_i[1] \to y_i[1]$ is given by $\pi_i \text{d}(s_i)$. By axiom (C)
+this means that
+$$
+f_i = \pi_i \text{d}(s_i) = - \text{d}(\pi_i) s_i
+$$
+(we identify $\Hom(x_i, y_i)$ with $\Hom(x_i[1], y_i[1])$
+using the shift functor $[1]$).
+Set $c = a_2 b \pi_1 + s_2 a b_1 + a_2 h b_1$. Then, using the
+equalities found in the proof of Lemma \ref{lemma-get-triangle}
+we obtain
+\begin{align*}
+\text{d}(c)
+& =
+a_2 b \text{d}(\pi_1) + \text{d}(s_2) a b_1 + a_2 \text{d}(h) b_1 \\
+& =
+- a_2 b f_1 b_1 + a_2 f_2 a b_1 + a_2 (b f_1 - f_2 a) b_1 \\
+& = 0
+\end{align*}
+(where we have used in particular that
+$\text{d}(\pi_1) = \text{d}(\pi_1) s_1 b_1 = -f_1 b_1$ and
+$\text{d}(s_2) = a_2 \pi_2 \text{d}(s_2) = a_2 f_2$).
+Thus $c$ is a degree $0$ morphism $c : c(f_1) \to c(f_2)$ of $\mathcal{A}$
+compatible with the given morphisms $y_i \to c(f_i) \to x_i[1]$.
+\end{proof}
+
+\noindent
+In Situation \ref{situation-ABC} we say that a triangle
+$(x, y, z, f, g, h)$ in $K(\mathcal{A})$ is a
+{\it distinguished triangle} if there exists an admissible
+short exact sequence $x' \to y' \to z'$ such that
+$(x, y, z, f, g, h)$ is isomorphic as a triangle in $K(\mathcal{A})$
+to the triangle $(x', y', z', x' \to y', y' \to z', \delta)$
+constructed in Lemma \ref{lemma-get-triangle}. We will show below that
+$$
+\boxed{
+K(\mathcal{A})\text{ is a triangulated category}
+}
+$$
+This result, although not as general as one might think, applies to a
+number of natural generalizations of the cases covered so far in the
+Stacks project. Here are some examples:
+\begin{enumerate}
+\item Let $(X, \mathcal{O}_X)$ be a ringed space. Let $(A, d)$ be a
+sheaf of differential graded $\mathcal{O}_X$-algebras. Let
+$\mathcal{A}$ be the differential graded category of differential
+graded $A$-modules. Then $K(\mathcal{A})$ is a triangulated category.
+\item Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Let $(A, d)$ be a
+sheaf of differential graded $\mathcal{O}$-algebras. Let
+$\mathcal{A}$ be the differential graded category of differential
+graded $A$-modules. Then $K(\mathcal{A})$ is a triangulated category.
+See Differential Graded Sheaves, Proposition
+\ref{sdga-proposition-homotopy-category-triangulated}.
+\item Two examples with a different flavor may be found in Examples, Section
+\ref{examples-section-nongraded-differential-graded}.
+\end{enumerate}
+
+\noindent
+The following simple lemma is a key to the construction.
+
+\begin{lemma}
+\label{lemma-id-cone-null}
+In Situation \ref{situation-ABC}
+given any object $x$ of $\mathcal{A}$, and the cone $C(1_x)$ of the
+identity morphism $1_x : x \to x$, the identity morphism on
+$C(1_x)$ is homotopic to zero.
+\end{lemma}
+
+\begin{proof}
+Consider the admissible short exact sequence given by axiom (C).
+$$
+\xymatrix{
+x \ar@<0.5ex>[r]^a &
+C(1_x) \ar@<0.5ex>[l]^{\pi} \ar@<0.5ex>[r]^b &
+x[1]\ar@<0.5ex>[l]^s
+}
+$$
+Then by Lemma \ref{lemma-get-triangle}, identifying hom-sets under
+shifting, we have $1_x=\pi d(s)=-d(\pi)s$ where $s$ is regarded as
+a morphism in $\Hom_{\mathcal{A}}^{-1}(x,C(1_x))$. Therefore
+$a=a\pi d(s)=d(s)$ using formula (5) of Lemma \ref{lemma-get-triangle},
+and $b=-d(\pi)sb=-d(\pi)$ by formula (6) of Lemma \ref{lemma-get-triangle}.
+Hence
+$$
+1_{C(1_x)} = a\pi + sb = d(s)\pi - sd(\pi) = d(s\pi)
+$$
+since $s$ is of degree $-1$.
+\end{proof}
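+
+\medskip\noindent
+For instance, if $\mathcal{A}$ is the differential graded category of
+complexes of $R$-modules and $C(1_x)$ is the usual mapping cone
+$x \oplus x[1]$ with differential
+$\text{d}(a, b) = (\text{d}(a) + b, -\text{d}(b))$
+(up to sign conventions), then the contracting homotopy $s\pi$ of the
+proof is the degree $-1$ map $(a, b) \mapsto (0, a)$, and indeed
+$$
+(\text{d}\circ s\pi + s\pi\circ\text{d})(a, b)
+= (a, -\text{d}(a)) + (0, \text{d}(a) + b) = (a, b).
+$$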
+
+\noindent
+A more general version of the above lemma will appear in
+Lemma \ref{lemma-cone-homotopy}. The following lemma is the
+analogue of Lemma \ref{lemma-make-commute-map}.
+
+\begin{lemma}
+\label{lemma-homo-change}
+In Situation \ref{situation-ABC} let
+$$
+\xymatrix{x\ar[r]^f\ar[d]_a & y\ar[d]^b\\
+z\ar[r]^g & w}
+$$
+be a diagram in $\text{Comp}(\mathcal{A})$ commuting up to homotopy. Then
+\begin{enumerate}
+\item If $f$ is an admissible monomorphism, then $b$ is homotopic
+to a morphism $b'$ which makes the diagram commute.
+\item If $g$ is an admissible epimorphism, then $a$ is homotopic
+to a morphism $a'$ which makes the diagram commute.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove (1), observe that the hypothesis implies that there is some
+$h\in\Hom_{\mathcal{A}}(x,w)$ of degree $-1$ such that $bf-ga=d(h)$.
+Since $f$ is an admissible monomorphism, there is a morphism
+$\pi : y \to x$ in the category $\mathcal{A}$ of degree $0$ such
+that $\pi f = 1_x$.
+Let $b' = b - d(h\pi)$. Then
+\begin{align*}
+b'f = bf - d(h\pi)f
+= &
+bf - d(h\pi f) \quad (\text{since }d(f) = 0) \\
+= &
+bf-d(h) \\
+= &
+ga
+\end{align*}
+as desired. The proof for (2) is omitted.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of Lemma \ref{lemma-make-injective}.
+
+\begin{lemma}
+\label{lemma-factor}
+In Situation \ref{situation-ABC} let $\alpha : x \to y$
+be a morphism in $\text{Comp}(\mathcal{A})$. Then there exists
+a factorization in $\text{Comp}(\mathcal{A})$:
+$$
+\xymatrix{
+x \ar[r]^{\tilde{\alpha}} &
+\tilde{y} \ar@<0.5ex>[r]^{\pi} &
+y\ar@<0.5ex>[l]^s
+}
+$$
+such that
+\begin{enumerate}
+\item $\tilde{\alpha}$ is an admissible monomorphism, and
+$\pi\tilde{\alpha}=\alpha$.
+\item There exists a morphism
+$s:y\to\tilde{y}$ in $\text{Comp}(\mathcal{A})$
+such that $\pi s=1_y$ and $s\pi$ is homotopic to $1_{\tilde{y}}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By axiom (A), we may let $\tilde{y}$ be the differential graded direct
+sum of $y$ and $C(1_x)$, i.e., there exists a diagram
+$$
+\xymatrix@C=3pc{
+y \ar@<0.5ex>[r]^s &
+y\oplus C(1_x) \ar@<0.5ex>[l]^{\pi} \ar@<0.5ex>[r]^{p} &
+C(1_x)\ar@<0.5ex>[l]^t
+}
+$$
+where all morphisms are of degree zero, and in
+$\text{Comp}(\mathcal{A})$. Let $\tilde{y} = y \oplus C(1_x)$.
+Then $1_{\tilde{y}} = s\pi + tp$. Consider now the diagram
+$$
+\xymatrix{
+x \ar[r]^{\tilde{\alpha}} &
+\tilde{y} \ar@<0.5ex>[r]^{\pi} &
+y\ar@<0.5ex>[l]^s
+}
+$$
+where $\tilde{\alpha}$ is induced by the morphism $x\xrightarrow{\alpha}y$
+and the natural morphism $x\to C(1_x)$ fitting in the admissible
+short exact sequence
+$$
+\xymatrix{
+x \ar@<0.5ex>[r] &
+C(1_x) \ar@<0.5ex>[l] \ar@<0.5ex>[r] &
+x[1]\ar@<0.5ex>[l]
+}
+$$
+So the morphism $C(1_x)\to x$ of degree 0 in this diagram,
+together with the zero morphism $y\to x$, induces a degree-0
+morphism $\beta : \tilde{y} \to x$. Then $\tilde{\alpha}$ is an
+admissible monomorphism since it fits into the admissible short
+exact sequence
+$$
+\xymatrix{
+x\ar[r]^{\tilde{\alpha}} &
+\tilde{y} \ar[r] &
+x[1]
+}
+$$
+Furthermore, $\pi\tilde{\alpha} = \alpha$ by the construction of
+$\tilde{\alpha}$, and $\pi s = 1_y$ by the first diagram. It
+remains to show that $s\pi$ is homotopic to $1_{\tilde{y}}$.
+Write $1_{C(1_x)}$ as $d(h)$ for some degree $-1$ map $h$. Then, our
+last statement follows from
+\begin{align*}
+1_{\tilde{y}} - s\pi
+= &
+tp \\
+= &
+t(dh)p\quad\text{(by Lemma \ref{lemma-id-cone-null})} \\
+= &
+d(thp)
+\end{align*}
+since $dt = dp = 0$, and $t$ is of degree zero.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of Lemma \ref{lemma-sequence-maps-split}.
+
+\begin{lemma}
+\label{lemma-analogue-sequence-maps-split}
+In Situation \ref{situation-ABC}
+let $x_1 \to x_2 \to \ldots \to x_n$
+be a sequence of composable morphisms in $\text{Comp}(\mathcal{A})$.
+Then there exists a commutative diagram in $\text{Comp}(\mathcal{A})$:
+$$
+\xymatrix{x_1\ar[r] & x_2\ar[r] & \ldots\ar[r] & x_n\\
+y_1\ar[r]\ar[u] & y_2\ar[r]\ar[u] & \ldots\ar[r] & y_n\ar[u]}
+$$
+such that each $y_i\to y_{i+1}$ is an admissible monomorphism
+and each $y_i\to x_i$ is a homotopy equivalence.
+\end{lemma}
+
+\begin{proof}
+The case $n=1$ is trivial: one simply takes $y_1 = x_1$, and the
+identity morphism on $x_1$ is in particular a homotopy equivalence.
+The case $n = 2$ is given by Lemma \ref{lemma-factor}. Suppose we have
+constructed the diagram up to $x_{n - 1}$. We apply
+Lemma \ref{lemma-factor} to the composition
+$y_{n - 1} \to x_{n-1} \to x_n$ to obtain $y_n$. Then
+$y_{n - 1} \to y_n$ will be an admissible monomorphism, and
+$y_n \to x_n$ a homotopy equivalence.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of Lemma \ref{lemma-nilpotent}.
+
+\begin{lemma}
+\label{lemma-triseq}
+In Situation \ref{situation-ABC} let $x_i \to y_i \to z_i$
+be morphisms in $\mathcal{A}$ ($i=1,2,3$) such that
+$x_2 \to y_2\to z_2$ is an admissible short exact sequence.
+Let $b : y_1 \to y_2$ and $b' : y_2\to y_3$ be morphisms
+in $\text{Comp}(\mathcal{A})$ such that
+$$
+\vcenter{
+\xymatrix{
+x_1 \ar[d]_0 \ar[r] &
+y_1 \ar[r] \ar[d]_b &
+z_1 \ar[d]_0 \\
+x_2 \ar[r] & y_2 \ar[r] & z_2
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+x_2 \ar[d]^0 \ar[r] &
+y_2 \ar[r] \ar[d]^{b'} &
+z_2 \ar[d]^0 \\
+x_3 \ar[r] & y_3 \ar[r] & z_3
+}
+}
+$$
+commute up to homotopy. Then $b'\circ b$ is homotopic to $0$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-homo-change}, we can replace $b$ and $b'$
+by homotopic maps $\tilde{b}$ and $\tilde{b}'$, such that the right
+square of the left diagram commutes and the left square of the right
+diagram commutes. Say $b = \tilde{b} + d(h)$ and $b'=\tilde{b}'+d(h')$
+for degree $-1$ morphisms $h$ and $h'$ in $\mathcal{A}$. Hence
+$$
+b'b = \tilde{b}'\tilde{b} + d(\tilde{b}'h + h'\tilde{b} + h'd(h))
+$$
+since $d(\tilde{b})=d(\tilde{b}')=0$, i.e. $b'b$ is homotopic to
+$\tilde{b}'\tilde{b}$. We now want to show that $\tilde{b}'\tilde{b}=0$.
+Because $x_2\xrightarrow{f} y_2\xrightarrow{g} z_2$ is an admissible
+short exact sequence, there exist degree $0$ morphisms
+$\pi : y_2 \to x_2$ and $s : z_2 \to y_2$ such that
+$\text{id}_{y_2} = f\pi + sg$. Therefore
+$$
+\tilde{b}'\tilde{b} = \tilde{b}'(f\pi + sg)\tilde{b} = 0
+$$
+since $g\tilde{b} = 0$ and $\tilde{b}'f = 0$ as consequences
+of the two commuting squares.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of
+Lemma \ref{lemma-triangle-independent-splittings}.
+
+\begin{lemma}
+\label{lemma-analogue-triangle-independent-splittings}
+In Situation \ref{situation-ABC}
+let $0 \to x \to y \to z \to 0$ be an admissible short
+exact sequence in $\text{Comp}(\mathcal{A})$. The triangle
+$$
+\xymatrix{x\ar[r] & y\ar[r] & z\ar[r]^{\delta} & x[1]}
+$$
+with $\delta : z \to x[1]$ as defined in Lemma \ref{lemma-get-triangle}
+is up to canonical isomorphism in $K(\mathcal{A})$, independent of the
+choices made in Lemma \ref{lemma-get-triangle}.
+\end{lemma}
+
+\begin{proof}
+Suppose $\delta$ is defined by the splitting
+$$
+\xymatrix{
+x \ar@<0.5ex>[r]^{a} &
+y \ar@<0.5ex>[r]^b\ar@<0.5ex>[l]^{\pi} &
+z \ar@<0.5ex>[l]^s
+}
+$$
+and $\delta'$ is defined by the splitting with $\pi',s'$
+in place of $\pi,s$. Then
+$$
+s'-s = (a\pi + sb)(s'-s) = a\pi s'
+$$
+since $bs' = bs = 1_z$ and $\pi s = 0$. Similarly,
+$$
+\pi' - \pi = (\pi' - \pi)(a\pi + sb) = \pi'sb
+$$
+Since $\delta = \pi d(s)$ and $\delta' = \pi'd(s')$
+as constructed in Lemma \ref{lemma-get-triangle}, we may compute
+$$
+\delta' = \pi'd(s') = (\pi + \pi'sb)d(s + a\pi s') = \delta + d(\pi s')
+$$
+using $\pi a = 1_x$, $ba = 0$, and $\pi'sbd(s') = \pi'sba\pi'd(s') = 0$
+by formula (5) in Lemma \ref{lemma-get-triangle}. In particular $\delta$
+and $\delta'$ differ by an exact term, so the identity morphisms define
+the desired isomorphism of triangles in $K(\mathcal{A})$.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of Lemma \ref{lemma-rotate-cone}.
+
+\begin{lemma}
+\label{lemma-restate-axiom-c}
+In Situation \ref{situation-ABC}
+let $f: x \to y$ be a morphism in $\text{Comp}(\mathcal{A})$.
+The triangle $(y, c(f), x[1], i, p, f[1])$ is the triangle associated
+to the admissible short exact sequence
+$$
+\xymatrix{y\ar[r] & c(f) \ar[r] & x[1]}
+$$
+where the cone $c(f)$ is defined as in Lemma \ref{lemma-get-triangle}.
+\end{lemma}
+
+\begin{proof}
+This follows from axiom (C).
+\end{proof}
+
+\noindent
+The following lemma is the analogue of Lemma \ref{lemma-rotate-triangle}.
+
+\begin{lemma}
+\label{lemma-cone-rotate-isom}
+In Situation \ref{situation-ABC} let $\alpha : x \to y$ and $\beta : y \to z$
+define an admissible short exact sequence
+$$
+\xymatrix{
+x \ar[r] &
+y\ar[r] &
+z
+}
+$$
+in $\text{Comp}(\mathcal{A})$. Let $(x, y, z, \alpha, \beta, \delta)$
+be the associated triangle in $K(\mathcal{A})$. Then, the triangles
+$$
+(z[-1], x, y, \delta[-1], \alpha, \beta)
+\quad\text{and}\quad
+(z[-1], x, c(\delta[-1]), \delta[-1], i, p)
+$$
+are isomorphic.
+\end{lemma}
+
+\begin{proof}
+We have a diagram of the form
+$$
+\xymatrix{
+z[-1]\ar[r]^{\delta[-1]}\ar[d]^1 &
+x\ar@<0.5ex>[r]^{\alpha}\ar[d]^1 &
+y\ar@<0.5ex>[r]^{\beta}\ar@{.>}[d]\ar@<0.5ex>[l]^{\tilde{\alpha}} &
+z\ar[d]^1\ar@<0.5ex>[l]^{\tilde\beta} \\
+z[-1] \ar[r]^{\delta[-1]} &
+x\ar@<0.5ex>[r]^i &
+c(\delta[-1]) \ar@<0.5ex>[r]^p\ar@<0.5ex>[l]^{\tilde i} &
+z\ar@<0.5ex>[l]^{\tilde p}
+}
+$$
+with splittings to $\alpha, \beta, i$, and $p$ given by
+$\tilde{\alpha}, \tilde{\beta}, \tilde{i},$ and $\tilde{p}$ respectively.
+Define a morphism $y \to c(\delta[-1])$ by
+$i\tilde{\alpha} + \tilde{p}\beta$ and a morphism
+$c(\delta[-1]) \to y$ by $\alpha \tilde{i} + \tilde{\beta} p$.
+Let us first check that these define morphisms in $\text{Comp}(\mathcal{A})$.
+We remark that by identities from Lemma \ref{lemma-get-triangle},
+we have the relation
+$\delta[-1] = \tilde{\alpha}d(\tilde{\beta}) = -d(\tilde{\alpha})\tilde{\beta}$
+and the relation $\delta[-1] = \tilde{i}d(\tilde{p})$. Then
+\begin{align*}
+d(\tilde{\alpha})
+& =
+d(\tilde{\alpha})\tilde{\beta}\beta \\
+& =
+-\delta[-1]\beta
+\end{align*}
+where we have used equation (6) of
+Lemma \ref{lemma-get-triangle} for the first equality and
+the preceding remark for the second. Similarly, we obtain
+$d(\tilde{p}) = i\delta[-1]$. Hence
+\begin{align*}
+d(i\tilde{\alpha} + \tilde{p}\beta)
+& =
+d(i)\tilde{\alpha} + id(\tilde{\alpha}) +
+d(\tilde{p})\beta + \tilde{p}d(\beta) \\
+& =
+id(\tilde{\alpha}) + d(\tilde{p})\beta \\
+& =
+-i\delta[-1]\beta + i\delta[-1]\beta \\
+& =
+0
+\end{align*}
+so $i\tilde{\alpha} + \tilde{p}\beta$ is indeed a morphism of
+$\text{Comp}(\mathcal{A})$. By a similar calculation,
+$\alpha \tilde{i} + \tilde{\beta} p$ is also a morphism of
+$\text{Comp}(\mathcal{A})$. It is immediate that these morphisms
+fit in the commutative diagram. We compute:
+\begin{align*}
+(i\tilde{\alpha} + \tilde{p}\beta)(\alpha \tilde{i} + \tilde{\beta} p)
+& =
+i\tilde{\alpha}\alpha\tilde{i} + i\tilde{\alpha}\tilde{\beta}p
++ \tilde{p}\beta\alpha\tilde{i} + \tilde{p}\beta\tilde{\beta}p \\
+& =
+i\tilde{i} + \tilde{p}p \\
+& =
+1_{c(\delta[-1])}
+\end{align*}
+where we have freely used the identities of
+Lemma \ref{lemma-get-triangle}. Similarly, we compute
+$(\alpha \tilde{i} + \tilde{\beta} p)(i\tilde{\alpha} + \tilde{p}\beta) = 1_y$,
+so we conclude $y \cong c(\delta[-1])$. Hence, the two triangles in question
+are isomorphic.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of
+Lemma \ref{lemma-third-isomorphism}.
+
+\begin{lemma}
+\label{lemma-analogue-third-isomorphism}
+In Situation \ref{situation-ABC} let $f_1 : x_1 \to y_1$ and
+$f_2 : x_2 \to y_2$ be morphisms in $\text{Comp}(\mathcal{A})$. Let
+$$
+(a,b,c): (x_1,y_1,c(f_1), f_1, i_1, p_1) \to (x_2,y_2, c(f_2), f_2, i_2, p_2)
+$$
+be any morphism of triangles in $K(\mathcal{A})$.
+If $a$ and $b$ are homotopy equivalences, then so is $c$.
+\end{lemma}
+
+\begin{proof}
+Since $a$ and $b$ are homotopy equivalences, they are invertible in
+$K(\mathcal{A})$ so let $a^{-1}$ and $b^{-1}$ denote their inverses
+in $K(\mathcal{A})$, giving us a commutative diagram
+$$
+\xymatrix{
+x_2\ar[d]^{a^{-1}}\ar[r]^{f_2} &
+y_2\ar[d]^{b^{-1}}\ar[r]^{i_2} &
+c(f_2)\ar[d]^{c'} \\
+x_1\ar[r]^{f_1} &
+y_1 \ar[r]^{i_1} &
+c(f_1)
+}
+$$
+where the map $c'$ is defined via Lemma \ref{lemma-cone} applied to the left
+commutative box of the above diagram. Since the diagram commutes
+in $K(\mathcal{A})$, it suffices by Lemma \ref{lemma-triseq} to
+prove the following: given a morphism of triangles
+$(1,1,c): (x,y,c(f),f,i,p)\to (x,y,c(f),f,i,p)$
+in $K(\mathcal{A})$, the map $c$ is an isomorphism in
+$K(\mathcal{A})$. We have the commutative diagrams in $K(\mathcal{A})$:
+$$
+\vcenter{
+\xymatrix{
+y\ar[d]^{1}\ar[r] &
+c(f)\ar[d]^{c}\ar[r] &
+x[1]\ar[d]^{1} \\
+y\ar[r] &
+c(f) \ar[r] &
+x[1]
+}
+}
+\quad\Rightarrow\quad
+\vcenter{
+\xymatrix{
+y\ar[d]^{0}\ar[r] &
+c(f)\ar[d]^{c-1}\ar[r] &
+x[1]\ar[d]^{0} \\
+y\ar[r] &
+c(f) \ar[r] &
+x[1]
+}
+}
+$$
+Since the rows are admissible short exact sequences, we obtain
+the identity $(c-1)^2 = 0$ by Lemma \ref{lemma-triseq}. Hence
+$2-c$ is inverse to $c$ in $K(\mathcal{A})$, as
+$c(2 - c) = (2 - c)c = 1 - (c - 1)^2 = 1$, so $c$ is an isomorphism.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of
+Lemma \ref{lemma-the-same-up-to-isomorphisms}.
+
+\begin{lemma}
+\label{lemma-cone-homotopy}
+In Situation \ref{situation-ABC}.
+\begin{enumerate}
+\item Let $x\xrightarrow{\alpha} y\xrightarrow{\beta} z$ be an
+admissible short exact sequence.
+Then there exists a homotopy equivalence
+$e:C(\alpha)\to z$ such that the diagram
+\begin{equation}
+\label{equation-cone-isom-triangle}
+\vcenter{
+\xymatrix{
+x\ar[r]^{\alpha}\ar[d] &
+y\ar[r]^{b}\ar[d] &
+C(\alpha)\ar[r]^{-c}\ar@{.>}[d]^{e} &
+x[1]\ar[d] \\
+x\ar[r]^{\alpha} &
+y\ar[r]^{\beta} &
+z\ar[r]^{\delta} & x[1]
+}
+}
+\end{equation}
+defines an isomorphism of triangles in $K(\mathcal{A})$. Here
+$y\xrightarrow{b}C(\alpha)\xrightarrow{c}x[1]$
+is the admissible short exact sequence given as in axiom (C).
+\item Given a morphism
+$\alpha : x \to y$ in $\text{Comp}(\mathcal{A})$, let
+$x \xrightarrow{\tilde{\alpha}} \tilde{y} \to y$ be the
+factorization given as in Lemma \ref{lemma-factor}, where the admissible
+monomorphism $x \xrightarrow{\tilde{\alpha}} \tilde{y}$ extends to the
+admissible short exact sequence
+$$
+\xymatrix{
+x \ar[r]^{\tilde{\alpha}} &
+\tilde{y} \ar[r] & z
+}
+$$
+Then there exists an isomorphism of triangles
+$$
+\xymatrix{
+x \ar[r]^{\tilde{\alpha}} \ar[d] &
+\tilde{y} \ar[r] \ar[d] &
+z \ar[r]^{\delta} \ar@{.>}[d]^{e} &
+x[1] \ar[d] \\
+x \ar[r]^{\alpha} &
+y \ar[r] &
+C(\alpha) \ar[r]^{-c} &
+x[1]
+}
+$$
+where the upper triangle is the triangle
+associated to the sequence
+$x \xrightarrow{\tilde{\alpha}} \tilde{y} \to z$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For (1), we consider the more complete diagram, \emph{without} the
+sign change on $c$:
+$$
+\xymatrix{
+x\ar@<0.5ex>[r]^{\alpha} \ar[d] &
+y\ar@<0.5ex>[l]^{\pi} \ar@<0.5ex>[r]^{b}\ar[d] &
+C(\alpha)\ar@<0.5ex>[l]^{p} \ar@<0.5ex>[r]^{c}\ar@{.>}@<0.5ex>[d]^{e} &
+x[1]\ar@<0.5ex>[l]^{\sigma} \ar[d]\ar@<0.5ex>[r]^{\alpha} &
+y[1]\ar@<0.5ex>[l]^{\pi} \\
+x\ar@<0.5ex>[r]^{\alpha} &
+y\ar@<0.5ex>[r]^{\beta} \ar@<0.5ex>[l]^{\pi} &
+z\ar[r]^{\delta}\ar@<0.5ex>[l]^{s} \ar@{.>}@<0.5ex>[u]^{f} &
+x[1]
+}
+$$
+where the admissible short exact sequence
+$x\xrightarrow{\alpha} y\xrightarrow{\beta} z$
+is given the splitting $\pi$, $s$, and the admissible short exact sequence
+$y\xrightarrow{b}C(\alpha)\xrightarrow{c}x[1]$ is given the splitting
+$p$, $\sigma$. Note that (identifying hom-sets under shifting)
+$$
+\alpha = pd(\sigma) = -d(p)\sigma,\quad
+\delta = \pi d(s) = -d(\pi)s
+$$
+by the construction in Lemma \ref{lemma-get-triangle}.
+
+\medskip\noindent
+We define $e=\beta p$ and $f=bs-\sigma\delta$. We first check that they are
+morphisms in $\text{Comp}(\mathcal{A})$. To show that $d(e)=\beta d(p)$
+vanishes, it suffices to show that $\beta d(p)b$ and $\beta d(p)\sigma$
+both vanish, whereas
+$$
+\beta d(p)b = \beta d(pb) = \beta d(1_y) = 0,\quad
+\beta d(p)\sigma = -\beta\alpha = 0
+$$
+Similarly, to check that $d(f)=bd(s)-d(\sigma)\delta$ vanishes,
+it suffices to check the post-compositions by $p$ and $c$ both vanish,
+whereas
+\begin{align*}
+pbd(s) - pd(\sigma)\delta
+= &
+d(s)-\alpha\delta = d(s)-\alpha\pi d(s) = 0 \\
+cbd(s)-cd(\sigma)\delta
+= &
+-cd(\sigma)\delta = -d(c\sigma)\delta = 0
+\end{align*}
+The commutativity of left two squares of the
+diagram \ref{equation-cone-isom-triangle} follows directly from definition.
+Before we prove the commutativity of the right square (up to homotopy),
+we first check that $e$ is a homotopy equivalence. Clearly,
+$$
+ef=\beta p (bs-\sigma\delta)=\beta s=1_z
+$$
+To check that $fe$ is homotopic to $1_{C(\alpha)}$, we first observe
+$$
+b\alpha = bpd(\sigma) = d(\sigma),\quad
+\alpha c = -d(p)\sigma c = -d(p),\quad
+d(\pi)p = d(\pi)s\beta p = -\delta\beta p
+$$
+Using these identities, we compute
+\begin{align*}
+1_{C(\alpha)} = &
+bp + \sigma c
+\quad (\text{from }y \xrightarrow{b} C(\alpha) \xrightarrow{c} x[1]) \\
+= &
+b(\alpha\pi + s\beta)p + \sigma(\pi\alpha)c
+\quad (\text{from }x \xrightarrow{\alpha} y \xrightarrow{\beta} z) \\
+= &
+d(\sigma)\pi p + bs\beta p - \sigma\pi d(p)
+\quad (\text{by the first two identities above}) \\
+= &
+d(\sigma)\pi p + bs\beta p - \sigma\delta\beta p
++ \sigma\delta\beta p - \sigma\pi d(p) \\
+= &
+(bs - \sigma\delta)\beta p + d(\sigma)\pi p
+- \sigma d(\pi)p - \sigma\pi d(p)\quad
+(\text{by the third identity above}) \\
+= &
+fe + d(\sigma \pi p)
+\end{align*}
+since $\sigma \in \Hom^{-1}(x, C(\alpha))$
+(cf. proof of Lemma \ref{lemma-id-cone-null}).
+Hence $e$ and $f$ are homotopy inverses.
+Finally, to check that the right square of
+diagram \ref{equation-cone-isom-triangle} commutes up to homotopy,
+it suffices to check that $-cf=\delta$. This follows from
+$$
+-cf = -c(bs-\sigma\delta) = c\sigma\delta = \delta
+$$
+since $cb=0$.
+
+\medskip\noindent
+For (2), consider the factorization
+$x\xrightarrow{\tilde{\alpha}}\tilde{y}\to y$
+given as in Lemma \ref{lemma-factor}, so the second morphism
+is a homotopy equivalence. By Lemmas \ref{lemma-cone} and
+\ref{lemma-analogue-third-isomorphism}, there
+exists an isomorphism of triangles between
+$$
+x \xrightarrow{\alpha} y \to C(\alpha) \to x[1]
+\quad\text{and}\quad
+x \xrightarrow{\tilde{\alpha}} \tilde{y} \to C(\tilde{\alpha}) \to x[1]
+$$
+Since we can compose isomorphisms of triangles, by replacing
+$\alpha$ by $\tilde{\alpha}$, $y$ by $\tilde{y}$, and $C(\alpha)$ by
+$C(\tilde{\alpha})$, we may assume $\alpha$ is an admissible monomorphism.
+In this case, the result follows from (1).
+\end{proof}
+
+\noindent
+The following lemma is the analogue of
+Lemma \ref{lemma-homotopy-category-pre-triangulated}.
+
+\begin{lemma}
+\label{lemma-analogue-homotopy-category-pre-triangulated}
+In Situation \ref{situation-ABC} the homotopy category $K(\mathcal{A})$
+with its natural translation functors and distinguished triangles
+is a pre-triangulated category.
+\end{lemma}
+
+\begin{proof}
+We will verify each of TR1, TR2, and TR3.
+
+\medskip\noindent
+Proof of TR1. By definition every triangle isomorphic to a distinguished
+one is distinguished. Since
+$$
+\xymatrix{x\ar[r]^{1_x} & x\ar[r] & 0}
+$$
+is an admissible short exact sequence, $(x, x, 0, 1_x, 0, 0)$
+is a distinguished triangle. Moreover, given a morphism
+$\alpha : x \to y$ in $\text{Comp}(\mathcal{A})$, the triangle
+given by $(x, y, c(\alpha), \alpha, i, -p)$ is distinguished by
+Lemma \ref{lemma-cone-homotopy}.
+
+\medskip\noindent
+Proof of TR2. Let $(x,y,z,\alpha,\beta,\gamma)$ be a triangle and
+suppose $(y,z,x[1],\beta,\gamma,-\alpha[1])$ is distinguished.
+Then there exists an admissible short exact sequence
+$0 \to x' \to y' \to z' \to 0$ such that the associated triangle
+$(x',y',z',\alpha',\beta',\gamma')$ is isomorphic to
+$(y,z,x[1],\beta,\gamma,-\alpha[1])$. After rotating, we conclude
+that $(x,y,z,\alpha,\beta,\gamma)$ is isomorphic to
+$(z'[-1],x',y', \gamma'[-1], \alpha',\beta')$. By
+Lemma \ref{lemma-cone-rotate-isom},
+we deduce that $(z'[-1],x',y', \gamma'[-1], \alpha',\beta')$ is
+isomorphic to $(z'[-1],x',c(\gamma'[-1]), \gamma'[-1], i, p)$.
+Composing the two isomorphisms with sign changes as indicated in
+the following diagram:
+$$
+\xymatrix@C=3pc{
+x\ar[r]^{\alpha}\ar[d] &
+y\ar[r]^{\beta}\ar[d] &
+z\ar[r]^{\gamma}\ar[d] &
+x[1]\ar[d] \\
+z'[-1]\ar[r]^{-\gamma'[-1]}\ar[d]_{-1_{z'[-1]}} &
+x \ar[r]^{\alpha'}\ar@{=}[d] &
+y' \ar[r]^{\beta'} \ar[d] &
+z'\ar[d]^{-1_{z'}} \\
+z'[-1]\ar[r]^{\gamma'[-1]} &
+x \ar[r]^{\alpha'} &
+c(\gamma'[-1]) \ar[r]^{-p} &
+z'
+}
+$$
+we conclude that $(x,y,z,\alpha,\beta,\gamma)$ is distinguished by
+Lemma \ref{lemma-cone-homotopy} (2). Conversely, suppose that
+$(x,y,z,\alpha,\beta,\gamma)$ is distinguished, so that by
+Lemma \ref{lemma-cone-homotopy} (1), it is isomorphic to a
+triangle of the form $(x',y', c(\alpha'), \alpha', i, -p)$
+for some morphism $\alpha': x' \to y'$ in $\text{Comp}(\mathcal{A})$.
+The rotated triangle $(y,z,x[1],\beta,\gamma, -\alpha[1])$ is
+isomorphic to the triangle $(y',c(\alpha'), x'[1], i, -p, -\alpha'[1])$
+which is isomorphic to $(y',c(\alpha'), x'[1], i, p, \alpha'[1])$.
+By Lemma \ref{lemma-restate-axiom-c}, this triangle is distinguished,
+from which it follows that $(y,z,x[1], \beta,\gamma, -\alpha[1])$
+is distinguished.
+
+\medskip\noindent
+Proof of TR3: Suppose $(x,y,z, \alpha,\beta,\gamma)$ and
+$(x',y',z',\alpha',\beta',\gamma')$ are distinguished triangles
+of $\text{Comp}(\mathcal{A})$ and let $f: x \to x'$ and
+$g: y \to y'$ be morphisms such that
+$\alpha' \circ f = g \circ \alpha$. By
+Lemma \ref{lemma-cone-homotopy}, we may assume that
+$(x,y,z,\alpha,\beta,\gamma)= (x,y,c(\alpha),\alpha, i, -p)$
+and $(x',y',z', \alpha',\beta',\gamma')= (x',y',c(\alpha'), \alpha',i',-p')$.
+Now apply Lemma \ref{lemma-cone}
+and we are done.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of Lemma \ref{lemma-two-split-injections}.
+
+\begin{lemma}
+\label{lemma-dgc-analogue-tr4}
+In Situation \ref{situation-ABC} given admissible monomorphisms
+$x \xrightarrow{\alpha} y$, $y \xrightarrow{\beta} z$ in $\mathcal{A}$,
+there exist distinguished triangles
+$(x,y,q_1,\alpha,p_1,\delta_1)$, $(x,z,q_2,\beta\alpha,p_2,\delta_2)$
+and $(y,z,q_3,\beta,p_3,\delta_3)$ for which TR4 holds.
+\end{lemma}
+
+\begin{proof}
+Given admissible monomorphisms $x\xrightarrow{\alpha} y$ and
+$y\xrightarrow{\beta}z$, we can find distinguished triangles,
+via their extensions to admissible short exact sequences,
+$$
+\xymatrix{
+x\ar@<0.5ex>[r]^{\alpha} &
+y\ar@<0.5ex>[l]^{\pi_1} \ar@<0.5ex>[r]^{p_1} &
+q_1 \ar[r]^{\delta_1} \ar@<0.5ex>[l]^{s_1} &
+x[1]
+}
+$$
+$$
+\xymatrix{
+x\ar@<0.5ex>[r]^{\beta\alpha} &
+z\ar@<0.5ex>[l]^{\pi_1\pi_3} \ar@<0.5ex>[r]^{p_2} &
+q_2 \ar[r]^{\delta_2} \ar@<0.5ex>[l]^{s_2} &
+x[1]
+}
+$$
+$$
+\xymatrix{
+y\ar@<0.5ex>[r]^{\beta} &
+z\ar@<0.5ex>[l]^{\pi_3} \ar@<0.5ex>[r]^{p_3} &
+q_3 \ar[r]^{\delta_3} \ar@<0.5ex>[l]^{s_3} &
+y[1]
+}
+$$
+In these diagrams, the maps $\delta_i$ are defined as
+$\delta_1 = \pi_1 d(s_1)$, $\delta_2 = \pi_1\pi_3 d(s_2)$, and
+$\delta_3 = \pi_3 d(s_3)$, analogous to the maps defined in
+Lemma \ref{lemma-get-triangle}.
+They fit in the following solid commutative diagram
+$$
+\xymatrix@C=5pc@R=3pc{
+x\ar@<0.5ex>[r]^{\alpha} \ar@<0.5ex>[dr]^{\beta\alpha} &
+y\ar@<0.5ex>[d]^{\beta} \ar@<0.5ex>[l]^{\pi_1} \ar@<0.5ex>[r]^{p_1} &
+q_1 \ar[r]^{\delta_1} \ar@<0.5ex>[l]^{s_1} \ar@{.>}[dd]^{p_2\beta s_1} &
+x[1] \\
+ &
+z \ar@<0.5ex>[u]^{\pi_3}\ar@<0.5ex>[d]^{p_3}
+\ar@<0.5ex>[dr]^{p_2} \ar@<0.5ex>[ul]^{\pi_1\pi_3} & & \\
+ &
+q_3\ar@<0.5ex>[u]^{s_3} \ar[d]^{\delta_3} &
+q_2 \ar@{.>}[l]^{p_3s_2} \ar@<0.5ex>[ul]^{s_2} \ar[dr]^{\delta_2} \\
+ &
+y[1] & & x[1]}
+$$
+where we have defined the dashed arrows as indicated.
+Clearly, their composition $p_3s_2p_2\beta s_1 = 0$
+since $s_2p_2 = 0$. We claim that they both are morphisms of
+$\text{Comp}(\mathcal{A})$. We can check this using equations in
+Lemma \ref{lemma-get-triangle}:
+$$
+d(p_2\beta s_1) = p_2\beta d(s_1) = p_2\beta\alpha\pi_1 d(s_1) = 0
+$$
+since $p_2\beta\alpha = 0$, and
+$$
+d(p_3s_2) = p_3d(s_2) = p_3\beta\alpha\pi_1\pi_3 d(s_2) = 0
+$$
+since $p_3\beta = 0$. To check that $q_1\to q_2\to q_3$
+is an admissible short exact sequence, it remains to show
+that in the underlying graded category, $q_2 = q_1\oplus q_3$
+with the above two morphisms as coprojection and projection.
+To do this, observe that in the underlying graded category
+$\mathcal{C}$, there hold
+$$
+y = x\oplus q_1,\quad
+z = y\oplus q_3 = x\oplus q_1\oplus q_3
+$$
+where $\pi_1\pi_3$ gives the projection morphism onto the first
+factor $z = x\oplus q_1\oplus q_3\to x$. By axiom (A) on
+$\mathcal{A}$, $\mathcal{C}$ is an additive category, hence
+we may apply
+Homology, Lemma \ref{homology-lemma-additive-cat-biproduct-kernel}
+and conclude that
+$$
+\Ker(\pi_1\pi_3) = q_1\oplus q_3
+$$
+in $\mathcal{C}$. Another application of
+Homology, Lemma \ref{homology-lemma-additive-cat-biproduct-kernel}
+to $z = x\oplus q_2$ gives $\Ker(\pi_1\pi_3) = q_2$.
+Hence $q_2\cong q_1\oplus q_3$ in $\mathcal{C}$.
+It is clear that the dashed morphisms defined above give
+coprojection and projection.
+
+\medskip\noindent
+Finally, we have to check that the morphism
+$\delta : q_3 \to q_1[1]$ induced by the admissible
+short exact sequence $q_1\to q_2\to q_3$ agrees with
+$p_1\delta_3$. By the construction in
+Lemma \ref{lemma-get-triangle}, the morphism $\delta$ is given by
+\begin{align*}
+p_1\pi_3s_2d(p_2s_3)
+= &
+p_1\pi_3s_2p_2d(s_3) \\
+= &
+p_1\pi_3(1-\beta\alpha\pi_1\pi_3)d(s_3) \\
+= &
+p_1\pi_3d(s_3)\quad (\text{since }\pi_3\beta = 1_y\text{ and }p_1\alpha = 0) \\
+= &
+p_1\delta_3
+\end{align*}
+as desired. The proof is complete.
+\end{proof}
+
+\noindent
+Putting everything together we finally obtain the analogue
+of Proposition \ref{proposition-homotopy-category-triangulated}.
+
+\begin{proposition}
+\label{proposition-ABC-homotopy-category-triangulated}
+In Situation \ref{situation-ABC} the homotopy category $K(\mathcal{A})$
+with its natural translation functors and distinguished triangles is a
+triangulated category.
+\end{proposition}
+
+\begin{proof}
+By Lemma \ref{lemma-analogue-homotopy-category-pre-triangulated} we know that
+$K(\mathcal{A})$ is pre-triangulated. Combining
+Lemmas \ref{lemma-analogue-sequence-maps-split} and
+\ref{lemma-dgc-analogue-tr4} with
+Derived Categories, Lemma \ref{derived-lemma-easier-axiom-four},
+we conclude that $K(\mathcal{A})$ is a triangulated category.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-functor-between-ABC}
+Let $R$ be a ring. Let $F : \mathcal{A} \to \mathcal{B}$ be a functor
+between differential graded categories over $R$ satisfying axioms
+(A), (B), and (C) such that $F(x[1]) = F(x)[1]$.
+Then $F$ induces an exact functor
+$K(\mathcal{A}) \to K(\mathcal{B})$ of triangulated categories.
+\end{lemma}
+
+\begin{proof}
+Namely, if $x \to y \to z$ is an admissible short exact sequence
+in $\text{Comp}(\mathcal{A})$, then $F(x) \to F(y) \to F(z)$
+is an admissible short exact sequence in $\text{Comp}(\mathcal{B})$.
+Moreover, the ``boundary'' morphism $\delta = \pi\text{d}(s) : z \to x[1]$
+constructed in Lemma \ref{lemma-get-triangle} produces the morphism
+$F(\delta) : F(z) \to F(x[1]) = F(x)[1]$ which is equal to the boundary
+map $F(\pi) \text{d}(F(s))$ for the admissible short exact sequence
+$F(x) \to F(y) \to F(z)$.
+\end{proof}
+
+
+
+
+
+\section{Bimodules}
+\label{section-bimodules}
+
+\noindent
+We continue the discussion started in Section \ref{section-tensor-product}.
+
+\begin{definition}
+\label{definition-bimodule}
+Bimodules. Let $R$ be a ring.
+\begin{enumerate}
+\item Let $A$ and $B$ be $R$-algebras. An {\it $(A, B)$-bimodule}
is an $R$-module $M$ equipped with $R$-bilinear maps
+$$
+A \times M \to M, (a, x) \mapsto ax
+\quad\text{and}\quad
+M \times B \to M, (x, b) \mapsto xb
+$$
+such that the following hold
+\begin{enumerate}
+\item $a'(ax) = (a'a)x$ and $(xb)b' = x(bb')$,
+\item $a(xb) = (ax)b$, and
+\item $1 x = x = x 1$.
+\end{enumerate}
+\item Let $A$ and $B$ be $\mathbf{Z}$-graded $R$-algebras. A
+{\it graded $(A, B)$-bimodule} is an $(A, B)$-bimodule $M$ which
+has a grading $M = \bigoplus M^n$ such that
+$A^n M^m \subset M^{n + m}$ and $M^n B^m \subset M^{n + m}$.
+\item Let $A$ and $B$ be differential graded $R$-algebras. A
+{\it differential graded $(A, B)$-bimodule} is a graded $(A, B)$-bimodule
+which comes equipped with a differential
+$\text{d} : M \to M$ homogeneous of degree $1$
+such that $\text{d}(ax) = \text{d}(a)x + (-1)^{\deg(a)}a\text{d}(x)$ and
+$\text{d}(xb) = \text{d}(x)b + (-1)^{\deg(x)}x\text{d}(b)$
+for homogeneous elements $a \in A$, $x \in M$, $b \in B$.
+\end{enumerate}
+\end{definition}
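
\medskip\noindent
For example, $A \otimes_R B$ is a differential graded $(A, B)$-bimodule
with actions $a'(a \otimes b)b' = a'a \otimes bb'$ and differential
$\text{d}(a \otimes b) = \text{d}(a) \otimes b +
(-1)^{\deg(a)} a \otimes \text{d}(b)$. As a sanity check we verify
the left Leibniz rule, which reduces to the Leibniz rule in $A$:
\begin{align*}
\text{d}(a'(a \otimes b))
& =
\text{d}(a'a) \otimes b
+ (-1)^{\deg(a') + \deg(a)} a'a \otimes \text{d}(b) \\
& =
\left(\text{d}(a')a + (-1)^{\deg(a')}a'\text{d}(a)\right) \otimes b
+ (-1)^{\deg(a') + \deg(a)} a'a \otimes \text{d}(b) \\
& =
\text{d}(a')(a \otimes b) + (-1)^{\deg(a')} a'\,\text{d}(a \otimes b)
\end{align*}
The right Leibniz rule is checked in the same way.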
+
+\noindent
+Observe that a differential graded $(A, B)$-bimodule
+$M$ is the same thing as a right differential graded
+$B$-module which is also a left differential graded
+$A$-module such that the grading and differentials agree
+and such that the $A$-module structure commutes with
+the $B$-module structure. Here is a precise statement.
+
+\begin{lemma}
+\label{lemma-what-makes-a-bimodule-dg}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded algebras over $R$. Let $M$ be a right differential
+graded $B$-module. There is a $1$-to-$1$ correspondence
+between $(A, B)$-bimodule structures on $M$ compatible with the given
+differential graded $B$-module structure and homomorphisms
+$$
+A
+\longrightarrow
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(M, M)
+$$
+of differential graded $R$-algebras.
+\end{lemma}
+
+\begin{proof}
+Let $\mu : A \times M \to M$ define a left differential graded $A$-module
+structure on the underlying complex of $R$-modules $M^\bullet$ of $M$.
+By Lemma \ref{lemma-left-module-structure} the structure $\mu$ corresponds
+to a map $\gamma : A \to \Hom^\bullet(M^\bullet, M^\bullet)$
of differential graded $R$-algebras. The assertion of the lemma is simply
that $\mu$ commutes with the $B$-action if and only if $\gamma$ ends
+up inside
+$$
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(M, M) \subset
+\Hom^\bullet(M^\bullet, M^\bullet)
+$$
+We omit the detailed calculation.
+\end{proof}
+
+\noindent
+Let $M$ be a differential graded $(A, B)$-bimodule. Recall from
+Section \ref{section-left-modules} that the left differential graded
+$A$-module structure corresponds to a right differential graded
+$A^{opp}$-module structure. Since the $A$ and $B$ module structures
+commute this gives $M$ the structure of a differential graded
+$A^{opp} \otimes_R B$-module:
+$$
+x \cdot (a \otimes b) = (-1)^{\deg(a)\deg(x)} axb
+$$
+Conversely, if we have a differential graded $A^{opp} \otimes_R B$-module
+$M$, then we can use the formula above to get a differential graded
+$(A, B)$-bimodule.
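
\medskip\noindent
As a check on the sign in the formula above, recall that the
multiplication on $A^{opp} \otimes_R B$ is
$(a \otimes b)(a' \otimes b') =
(-1)^{\deg(a')\deg(b) + \deg(a)\deg(a')} a'a \otimes bb'$,
combining the Koszul sign with the multiplication of $A^{opp}$.
Computing both sides of the associativity constraint gives
\begin{align*}
(x \cdot (a \otimes b)) \cdot (a' \otimes b')
& =
(-1)^{\deg(a)\deg(x)}\, (axb) \cdot (a' \otimes b') \\
& =
(-1)^{\deg(a)\deg(x) + \deg(a')(\deg(a) + \deg(x) + \deg(b))}\, a'axbb' \\
x \cdot ((a \otimes b)(a' \otimes b'))
& =
(-1)^{\deg(a')\deg(b) + \deg(a)\deg(a')}\, x \cdot (a'a \otimes bb') \\
& =
(-1)^{\deg(a')\deg(b) + \deg(a)\deg(a') + (\deg(a) + \deg(a'))\deg(x)}\, a'axbb'
\end{align*}
and the two exponents agree modulo $2$.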
+
+\begin{lemma}
+\label{lemma-bimodule-over-tensor}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$
+be differential graded algebras over $R$. The construction above
+defines an equivalence of categories
+$$
+\begin{matrix}
+\text{differential graded}\\
+(A, B)\text{-bimodules}
+\end{matrix}
+\longleftrightarrow
+\begin{matrix}
+\text{right differential graded }\\
+A^{opp} \otimes_R B\text{-modules}
+\end{matrix}
+$$
+\end{lemma}
+
+\begin{proof}
Immediate from the discussion above.
+\end{proof}
+
+\noindent
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras.
Let $P$ be a differential graded $(A, B)$-bimodule. We say $P$
{\it has property (P)} if there exists a filtration
+$$
+0 = F_{-1}P \subset F_0P \subset F_1P \subset \ldots \subset P
+$$
+by differential graded $(A, B)$-bimodules such that
+\begin{enumerate}
+\item $P = \bigcup F_pP$,
+\item the inclusions $F_iP \to F_{i + 1}P$ are split as graded
+$(A, B)$-bimodule maps,
\item the quotients $F_{i + 1}P/F_iP$ are isomorphic as differential
graded $(A, B)$-bimodules to a direct sum of shifts $(A \otimes_R B)[k]$.
+\end{enumerate}
+
+\begin{lemma}
+\label{lemma-bimodule-resolve}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras. Let $M$ be a differential graded
+$(A, B)$-bimodule. There exists a homomorphism $P \to M$
+of differential graded $(A, B)$-bimodules which is a quasi-isomorphism
+such that $P$ has property (P) as defined above.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemmas \ref{lemma-bimodule-over-tensor} and
+\ref{lemma-resolve}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bimodule-property-P-sequence}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras. Let $P$ be a
differential graded $(A, B)$-bimodule having property (P)
with corresponding filtration $F_\bullet$. Then we obtain a
short exact sequence
+$$
+0 \to
+\bigoplus\nolimits F_iP \to
+\bigoplus\nolimits F_iP \to P \to 0
+$$
+of differential graded $(A, B)$-bimodules which is split as a sequence
+of graded $(A, B)$-bimodules.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemmas \ref{lemma-bimodule-over-tensor} and
+\ref{lemma-property-P-sequence}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Bimodules and tensor product}
+\label{section-bimodules-tensor}
+
+\noindent
+Let $R$ be a ring. Let $A$ and $B$ be $R$-algebras. Let $M$ be a right
$A$-module. Let $N$ be an $(A, B)$-bimodule. Then
+$M \otimes_A N$ is a right $B$-module.
+
+\medskip\noindent
+If in the situation of the previous paragraph
+$A$ and $B$ are $\mathbf{Z}$-graded algebras,
+$M$ is a graded $A$-module, and $N$ is a graded $(A, B)$-bimodule,
+then $M \otimes_A N$ is a right graded $B$-module. The construction
+is functorial in $M$ and defines a functor
+$$
+- \otimes_A N :
+\text{Mod}^{gr}_A
+\longrightarrow
+\text{Mod}^{gr}_B
+$$
+of graded categories as in Example \ref{example-gm-gr-cat}. Namely, if
+$M$ and $M'$ are graded $A$-modules and $f : M \to M'$ is an $A$-module
+homomorphism homogeneous of degree $n$, then
+$f \otimes \text{id}_N : M \otimes_A N \to M' \otimes_A N$
+is a $B$-module homomorphism homogeneous of degree $n$.
+
+\medskip\noindent
+If in the situation of the previous paragraph
+$(A, \text{d})$ and $(B, \text{d})$ are differential graded algebras,
+$M$ is a differential graded $A$-module, and $N$ is a differential
+graded $(A, B)$-bimodule, then $M \otimes_A N$ is a right
+differential graded $B$-module.
+
+\begin{lemma}
+\label{lemma-tensor}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$
+be differential graded algebras over $R$. Let $N$ be a
+differential graded $(A, B)$-bimodule. Then
+$M \mapsto M \otimes_A N$ defines a functor
+$$
+- \otimes_A N :
+\text{Mod}^{dg}_{(A, \text{d})}
+\longrightarrow
+\text{Mod}^{dg}_{(B, \text{d})}
+$$
+of differential graded categories. This functor induces functors
+$$
+\text{Mod}_{(A, \text{d})} \to \text{Mod}_{(B, \text{d})}
+\quad\text{and}\quad
+K(\text{Mod}_{(A, \text{d})}) \to K(\text{Mod}_{(B, \text{d})})
+$$
+by an application of Lemma \ref{lemma-functorial}.
+\end{lemma}
+
+\begin{proof}
+Above we have seen how the construction defines a functor of underlying
+graded categories. Thus it suffices to show that the construction is
+compatible with differentials. Let $M$ and $M'$ be differential
+graded $A$-modules and let $f : M \to M'$ be an $A$-module homomorphism
+which is homogeneous of degree $n$. Then we have
+$$
+\text{d}(f) = \text{d}_{M'} \circ f - (-1)^n f \circ \text{d}_M
+$$
+On the other hand, we have
+$$
+\text{d}(f \otimes \text{id}_N) =
+\text{d}_{M' \otimes_A N} \circ
+(f \otimes \text{id}_N)
+- (-1)^n
+(f \otimes \text{id}_N) \circ \text{d}_{M \otimes_A N}
+$$
+Applying this to an element $x \otimes y$ with $x \in M$ and
+$y \in N$ homogeneous we get
+\begin{align*}
+\text{d}(f \otimes \text{id}_N)(x \otimes y)
+= &
+\text{d}_{M'}(f(x)) \otimes y + (-1)^{n + \deg(x)}f(x) \otimes \text{d}_N(y) \\
+& - (-1)^n f(\text{d}_M(x)) \otimes y
+- (-1)^{n + \deg(x)}f(x) \otimes \text{d}_N(y) \\
+= &
+\text{d}(f) (x \otimes y)
+\end{align*}
+Thus we see that $\text{d}(f) \otimes \text{id}_N =
+\text{d}(f \otimes \text{id}_N)$ and the proof is complete.
+\end{proof}
+
+\begin{remark}
+\label{remark-shift-tensor-no-sign}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$
+be differential graded algebras over $R$. Let $N$ be a
+differential graded $(A, B)$-bimodule. Let $M$ be a right differential
+graded $A$-module. Then for every $k \in \mathbf{Z}$ there
+is an isomorphism
+$$
+(M \otimes_A N)[k] \longrightarrow
+M[k] \otimes_A N
+$$
+of right differential graded $B$-modules defined without the intervention
+of signs, see More on Algebra, Section \ref{more-algebra-section-sign-rules}.
+\end{remark}
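
\medskip\noindent
To see why no signs intervene in the remark above, note that with the
convention $\text{d}_{M[k]} = (-1)^k\text{d}_M$ an element
$x \in M^{\deg(x)}$ has degree $\deg(x) - k$ in $M[k]$, so that
\begin{align*}
\text{d}_{(M \otimes_A N)[k]}(x \otimes y)
& =
(-1)^k\text{d}_M(x) \otimes y
+ (-1)^{k + \deg(x)} x \otimes \text{d}_N(y) \\
\text{d}_{M[k] \otimes_A N}(x \otimes y)
& =
(-1)^k\text{d}_M(x) \otimes y
+ (-1)^{\deg(x) - k} x \otimes \text{d}_N(y)
\end{align*}
which agree because $(-1)^{\deg(x) - k} = (-1)^{\deg(x) + k}$.
Hence $x \otimes y \mapsto x \otimes y$ is an isomorphism of
complexes.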
+
+\noindent
+If we have a ring $R$ and $R$-algebras $A$, $B$, and $C$,
+a right $A$-module $M$, an $(A, B)$-bimodule $N$, and a
+$(B, C)$-bimodule $N'$, then
$N \otimes_B N'$ is an $(A, C)$-bimodule and we have
+$$
+(M \otimes_A N) \otimes_B N' = M \otimes_A (N \otimes_B N')
+$$
This equality continues to hold in the graded and in the differential
+graded case. See More on Algebra, Section \ref{more-algebra-section-sign-rules}
+for sign rules.
+
+
+
+
+
+
+\section{Bimodules and internal hom}
+\label{section-bimodules-hom}
+
+\noindent
+Let $R$ be a ring. If $A$ is an $R$-algebra (see our conventions
+in Section \ref{section-conventions}) and $M$, $M'$ are right
+$A$-modules, then we define
+$$
+\Hom_A(M, M') = \{f : M \to M' \mid f \text{ is }A\text{-linear}\}
+$$
+as usual.
+
+\medskip\noindent
Let $R$ be a ring. Let $A$ and $B$ be $R$-algebras. Let
+$N$ be an $(A, B)$-bimodule. Let $N'$ be a right $B$-module.
+In this situation we will think of
+$$
+\Hom_B(N, N')
+$$
+as a right $A$-module using precomposition.
+
+\medskip\noindent
Let $R$ be a ring. Let $A$ and $B$ be $\mathbf{Z}$-graded $R$-algebras. Let
+$N$ be a graded $(A, B)$-bimodule. Let $N'$ be a right graded $B$-module.
+In this situation we will think of the graded $R$-module
+$$
+\Hom_{\text{Mod}^{gr}_B}(N, N')
+$$
+defined in Example \ref{example-gm-gr-cat} as a right graded $A$-module
+using precomposition. The construction is functorial in $N'$ and defines
+a functor
+$$
+\Hom_{\text{Mod}^{gr}_B}(N, -) :
+\text{Mod}^{gr}_B
+\longrightarrow
+\text{Mod}^{gr}_A
+$$
+of graded categories as in Example \ref{example-gm-gr-cat}. Namely, if
+$N_1$ and $N_2$ are graded $B$-modules and $f : N_1 \to N_2$ is a $B$-module
+homomorphism homogeneous of degree $n$, then the induced map
+$\Hom_{\text{Mod}^{gr}_B}(N, N_1) \to \Hom_{\text{Mod}^{gr}_B}(N, N_2)$
+is an $A$-module homomorphism homogeneous of degree $n$.
+
+\medskip\noindent
Let $R$ be a ring. Let $A$ and $B$ be differential graded
+$R$-algebras. Let $N$ be a differential graded $(A, B)$-bimodule.
+Let $N'$ be a right differential graded $B$-module. In this situation
+we will think of the differential graded $R$-module
+$$
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, N')
+$$
+defined in Example \ref{example-dgm-dg-cat} as a right differential
+graded $A$-module using precomposition as in the graded case. This
+is compatible with differentials because multiplication is the
+composition
+$$
+\Hom_{\text{Mod}^{dg}_B}(N, N') \otimes_R A \to
+\Hom_{\text{Mod}^{dg}_B}(N, N') \otimes_R
+\Hom_{\text{Mod}^{dg}_B}(N, N) \to
+\Hom_{\text{Mod}^{dg}_B}(N, N')
+$$
+The first arrow uses the map
+of Lemma \ref{lemma-what-makes-a-bimodule-dg} and the second
+arrow is the composition in the differential graded category
+$\text{Mod}^{dg}_{(B, \text{d})}$.
+
+\begin{lemma}
+\label{lemma-hom}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$
+be differential graded algebras over $R$. Let $N$ be a
+differential graded $(A, B)$-bimodule. The construction above
+defines a functor
+$$
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, -) :
+\text{Mod}^{dg}_{(B, \text{d})}
+\longrightarrow
+\text{Mod}^{dg}_{(A, \text{d})}
+$$
+of differential graded categories. This functor induces functors
+$$
+\text{Mod}_{(B, \text{d})} \to \text{Mod}_{(A, \text{d})}
+\quad\text{and}\quad
+K(\text{Mod}_{(B, \text{d})}) \to K(\text{Mod}_{(A, \text{d})})
+$$
+by an application of Lemma \ref{lemma-functorial}.
+\end{lemma}
+
+\begin{proof}
+Above we have seen how the construction defines a functor of underlying
+graded categories. Thus it suffices to show that the construction is
+compatible with differentials. Let $N_1$ and $N_2$ be differential
+graded $B$-modules. Write
+$$
+H_{12} = \Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N_1, N_2),\quad
+H_1 = \Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, N_1),\quad
+H_2 = \Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, N_2)
+$$
+Consider the composition
+$$
+c : H_{12} \otimes_R H_1 \longrightarrow H_2
+$$
+in the differential graded category $\text{Mod}^{dg}_{(B, \text{d})}$.
+Let $f : N_1 \to N_2$ be a $B$-module homomorphism which is homogeneous
+of degree $n$, in other words, $f \in H_{12}^n$.
+The functor in the lemma sends $f$ to $c_f : H_1 \to H_2$, $g \mapsto c(f, g)$.
Similarly for $\text{d}(f)$. On the other hand, the differential on
+$$
+\Hom_{\text{Mod}^{dg}_{(A, \text{d})}}(H_1, H_2)
+$$
+sends $c_f$ to $\text{d}_{H_2} \circ c_f - (-1)^n c_f \circ \text{d}_{H_1}$.
+As $c$ is a morphism of complexes of $R$-modules we have
+$\text{d} c(f, g) = c(\text{d}f, g) + (-1)^n c(f, \text{d}g)$.
+Hence we see that
+\begin{align*}
+(\text{d}c_f)(g)
+& =
+\text{d}c(f,g) - (-1)^n c(f, \text{d}g) \\
+& =
+c(\text{d}f, g) + (-1)^n c(f, \text{d}g) - (-1)^n c(f, \text{d}g) \\
+& =
+c(\text{d}f, g) = c_{\text{d}f}(g)
+\end{align*}
+and the proof is complete.
+\end{proof}
+
+\begin{remark}
+\label{remark-shift-hom-no-sign}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$
+be differential graded algebras over $R$. Let $N$ be a
+differential graded $(A, B)$-bimodule. Let $N'$ be a right differential
+graded $B$-module. Then for every $k \in \mathbf{Z}$ there
+is an isomorphism
+$$
\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, N')[k]
\longrightarrow
\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, N'[k])
+$$
+of right differential graded $A$-modules defined without the intervention
+of signs, see More on Algebra, Section \ref{more-algebra-section-sign-rules}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-tensor-hom-adjunction}
+Let $R$ be a ring. Let $A$ and $B$ be $R$-algebras.
+Let $M$ be a right $A$-module, $N$ an $(A, B)$-bimodule, and
+$N'$ a right $B$-module. Then we have a canonical isomorphism
+$$
+\Hom_B(M \otimes_A N, N') = \Hom_A(M, \Hom_B(N, N'))
+$$
+of $R$-modules.
+If $A$, $B$, $M$, $N$, $N'$ are compatibly graded, then we have a
+canonical isomorphism
+$$
+\Hom_{\text{Mod}_B^{gr}}(M \otimes_A N, N') =
+\Hom_{\text{Mod}_A^{gr}}(M, \Hom_{\text{Mod}_B^{gr}}(N, N'))
+$$
of graded $R$-modules.
+If $A$, $B$, $M$, $N$, $N'$ are compatibly differential graded, then
+we have a canonical isomorphism
+$$
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(M \otimes_A N, N') =
+\Hom_{\text{Mod}^{dg}_{(A, \text{d})}}(M,
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, N'))
+$$
+of complexes of $R$-modules.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: in the ungraded case interpret both sides as $A$-bilinear maps
+$\psi : M \times N \to N'$ which are $B$-linear on the right.
+In the (differential) graded case, use the isomorphism of
+More on Algebra, Lemma \ref{more-algebra-lemma-compose}
+and check it is compatible with the module structures.
+Alternatively, use the isomorphism of Lemma \ref{lemma-characterize-hom}
+and show that it is compatible with the $B$-module structures.
+\end{proof}
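
\medskip\noindent
In the ungraded case the isomorphism of
Lemma \ref{lemma-tensor-hom-adjunction} can be made explicit:
it sends $\varphi : M \otimes_A N \to N'$ to
$$
\Phi(\varphi) : M \longrightarrow \Hom_B(N, N'),\quad
\Phi(\varphi)(x) = \left(y \longmapsto \varphi(x \otimes y)\right)
$$
with inverse sending $\psi$ to $x \otimes y \mapsto \psi(x)(y)$.
Note that $\Phi(\varphi)$ is $A$-linear for the right $A$-module
structure on $\Hom_B(N, N')$ given by precomposition, since
$\Phi(\varphi)(xa)(y) = \varphi(xa \otimes y) =
\varphi(x \otimes ay) = (\Phi(\varphi)(x) \cdot a)(y)$.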
+
+
+
+
+
+
+
+\section{Derived Hom}
+\label{section-restriction}
+
+\noindent
+This section is analogous to
+More on Algebra, Section \ref{more-algebra-section-RHom}.
+
+\medskip\noindent
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$
+be differential graded algebras over $R$. Let $N$ be a
+differential graded $(A, B)$-bimodule. Consider the functor
+\begin{equation}
+\label{equation-restriction}
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, -) :
+\text{Mod}_{(B, \text{d})}
+\longrightarrow
+\text{Mod}_{(A, \text{d})}
+\end{equation}
+of Section \ref{section-bimodules-hom}.
+
+\begin{lemma}
+\label{lemma-restriction-homotopy}
+The functor (\ref{equation-restriction}) defines an exact functor
+$K(\text{Mod}_{(B, \text{d})}) \to K(\text{Mod}_{(A, \text{d})})$
+of triangulated categories.
+\end{lemma}
+
+\begin{proof}
+Via Lemma \ref{lemma-hom} and
+Remark \ref{remark-shift-hom-no-sign}
+this follows from the general principle of
+Lemma \ref{lemma-functor-between-ABC}.
+\end{proof}
+
+\noindent
+Recall that we have an exact functor
+of triangulated categories
+$$
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, -) :
+K(\text{Mod}_{(B, \text{d})}) \to K(\text{Mod}_{(A, \text{d})})
+$$
+see Lemma \ref{lemma-restriction-homotopy}. Consider the diagram
+$$
+\xymatrix{
+K(\text{Mod}_{(B, \text{d})}) \ar[d] \ar[rr]_{\text{see above}} \ar[rrd]_F & &
+K(\text{Mod}_{(A, \text{d})}) \ar[d] \\
+D(B, \text{d}) \ar@{..>}[rr] & &
+D(A, \text{d})
+}
+$$
+We would like to construct a dotted arrow as the
+{\it right derived functor} of the composition $F$.
+({\it Warning}: in most interesting cases the diagram will not commute.)
+Namely, in the general setting of
+Derived Categories, Section \ref{derived-section-derived-functors}
+we want to compute the
+right derived functor of $F$ with respect to the multiplicative system of
quasi-isomorphisms in $K(\text{Mod}_{(B, \text{d})})$.
+
+\begin{lemma}
+\label{lemma-derived-restriction}
+In the situation above, the right derived functor of $F$ exists.
+We denote it $R\Hom(N, -) : D(B, \text{d}) \to D(A, \text{d})$.
+\end{lemma}
+
+\begin{proof}
+We will use
+Derived Categories, Lemma \ref{derived-lemma-find-existence-computes}
+to prove this. As our collection $\mathcal{I}$
+of objects we will use the objects with property (I).
+Property (1) was shown in Lemma \ref{lemma-right-resolution}.
+Property (2) holds because if $s : I \to I'$ is a quasi-isomorphism
+of modules with property (I), then $s$ is a homotopy equivalence
+by Lemma \ref{lemma-hom-derived}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-functoriality-derived-restriction}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras. Let $f : N \to N'$ be a
+homomorphism of differential graded $(A, B)$-bimodules.
+Then $f$ induces a morphism of functors
+$$
+- \circ f : R\Hom(N', -) \longrightarrow R\Hom(N, -)
+$$
If $f$ is a quasi-isomorphism, then $- \circ f$ is an isomorphism of
functors.
+\end{lemma}
+
+\begin{proof}
+Write $\mathcal{B} = \text{Mod}^{dg}_{(B, \text{d})}$ the
+differential graded category of differential graded $B$-modules, see
+Example \ref{example-dgm-dg-cat}.
+Let $I$ be a differential graded $B$-module with property (I).
Then $- \circ f : \Hom_\mathcal{B}(N', I) \to \Hom_\mathcal{B}(N, I)$
+is a map of differential graded $A$-modules. Moreover, this is functorial
+with respect to $I$. Since the functors
+$ R\Hom(N', -)$ and $R\Hom(N, -)$ are
+computed by applying $\Hom_\mathcal{B}$ into objects with property (I)
+(Lemma \ref{lemma-derived-restriction}) we obtain a transformation of functors
+as indicated.
+
+\medskip\noindent
+Assume that $f$ is a quasi-isomorphism. Let $F_\bullet$ be the
+given filtration on $I$. Since $I = \lim I/F_pI$ we see that
+$\Hom_\mathcal{B}(N', I) = \lim \Hom_\mathcal{B}(N', I/F_pI)$ and
+$\Hom_\mathcal{B}(N, I) = \lim \Hom_\mathcal{B}(N, I/F_pI)$.
+Since the transition maps in the system $I/F_pI$ are split
+as graded modules, we see that the transition maps in the
+systems $\Hom_\mathcal{B}(N', I/F_pI)$ and $\Hom_\mathcal{B}(N, I/F_pI)$
+are surjective. Hence $\Hom_\mathcal{B}(N', I)$, resp. $\Hom_\mathcal{B}(N, I)$
+viewed as a complex of abelian groups computes $R\lim$ of the system
+of complexes
+$\Hom_\mathcal{B}(N', I/F_pI)$, resp. $\Hom_\mathcal{B}(N, I/F_pI)$.
+See More on Algebra, Lemma \ref{more-algebra-lemma-compute-Rlim}.
+Thus it suffices to prove each
+$$
+\Hom_\mathcal{B}(N', I/F_pI) \to \Hom_\mathcal{B}(N, I/F_pI)
+$$
+is a quasi-isomorphism. Since the surjections $I/F_{p + 1}I \to I/F_pI$
+are split as maps of graded $B$-modules we see that
+$$
+0 \to \Hom_\mathcal{B}(N', F_pI/F_{p + 1}I) \to
+\Hom_\mathcal{B}(N', I/F_{p + 1}I) \to
+\Hom_\mathcal{B}(N', I/F_pI) \to 0
+$$
+is a short exact sequence of differential graded $A$-modules.
+There is a similar sequence for $N$ and $f$ induces a map
+of short exact sequences. Hence by induction on $p$ (starting with $p = 0$
+when $I/F_0I = 0$) we conclude that it suffices to show that
+the map
+$\Hom_\mathcal{B}(N', F_pI/F_{p + 1}I) \to \Hom_\mathcal{B}(N, F_pI/F_{p + 1}I)$
+is a quasi-isomorphism. Since $F_pI/F_{p + 1}I$ is a product of shifts of
$B^\vee$ it suffices to prove
+$\Hom_\mathcal{B}(N', B^\vee[k]) \to \Hom_\mathcal{B}(N, B^\vee[k])$
+is a quasi-isomorphism. By Lemma \ref{lemma-hom-into-shift-dual-free}
+it suffices to show $(N')^\vee \to N^\vee$ is a quasi-isomorphism.
+This is true because $f$ is a quasi-isomorphism and $(\ )^\vee$
+is an exact functor.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-derived-restriction-exts}
+Let $(A, \text{d})$ and $(B, \text{d})$ be differential graded algebras
+over a ring $R$. Let $N$ be a differential graded $(A, B)$-bimodule.
+Then for every $n \in \mathbf{Z}$ there are isomorphisms
+$$
+H^n(R\Hom(N, M)) = \Ext^n_{D(B, \text{d})}(N, M)
+$$
+of $R$-modules functorial in $M$. It is also functorial in $N$
+with respect to the operation described in
+Lemma \ref{lemma-functoriality-derived-restriction}.
+\end{lemma}
+
+\begin{proof}
+In the proof of Lemma \ref{lemma-derived-restriction}
+we have seen
+$$
+R\Hom(N, M) =
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, I)
+$$
+as a differential graded $A$-module
+where $M \to I$ is a quasi-isomorphism of $M$ into a differential
+graded $B$-module with property (I). Hence this complex has the
+correct cohomology modules by Lemma \ref{lemma-hom-derived}.
+We omit a discussion of the functorial nature of these
+identifications.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-derived-restriction}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras. Let $N$ be a differential
+graded $(A, B)$-bimodule. If
+$\Hom_{D(B, \text{d})}(N, N') = \Hom_{K(\text{Mod}_{(B, \text{d})})}(N, N')$
for all $N'$ in $K(\text{Mod}_{(B, \text{d})})$, for example if $N$
+has property (P) as a differential graded $B$-module, then
+$$
+R\Hom(N, M) = \Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, M)
+$$
+functorially in $M$ in $D(B, \text{d})$.
+\end{lemma}
+
+\begin{proof}
+By construction (Lemma \ref{lemma-derived-restriction})
+to find $R\Hom(N, M)$ we choose a quasi-isomorphism
+$M \to I$ where $I$ is a differential graded $B$-module
+with property (I) and we set
+$R\Hom(N, M) = \Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, I)$.
+By assumption the map
+$$
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, M) \longrightarrow
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, I)
+$$
+induced by $M \to I$ is a quasi-isomorphism, see discussion in
+Example \ref{example-dgm-dg-cat}. This proves the lemma.
+If $N$ has property (P) as a $B$-module, then we see that the
+assumption is satisfied by Lemma \ref{lemma-hom-derived}.
+\end{proof}
+
+
+
+
+\section{Variant of derived Hom}
+\label{section-variant}
+
+\noindent
+Let $\mathcal{A}$ be an abelian category. Consider the differential graded
+category $\text{Comp}^{dg}(\mathcal{A})$ of complexes of $\mathcal{A}$, see
+Example \ref{example-category-complexes}.
+Let $K^\bullet$ be a complex of $\mathcal{A}$. Set
+$$
+(E, \text{d}) = \Hom_{\text{Comp}^{dg}(\mathcal{A})}(K^\bullet, K^\bullet)
+$$
+and consider the functor of differential graded categories
+$$
+\text{Comp}^{dg}(\mathcal{A}) \longrightarrow \text{Mod}^{dg}_{(E, \text{d})},
+\quad
+X^\bullet
+\longmapsto
+\Hom_{\text{Comp}^{dg}(\mathcal{A})}(K^\bullet, X^\bullet)
+$$
+of Lemma \ref{lemma-construction}.
+
+\begin{lemma}
+\label{lemma-existence-of-derived}
In the situation above, if the right derived functor $R\Hom(K^\bullet, -)$
+of $\Hom(K^\bullet, -) : K(\mathcal{A}) \to D(\textit{Ab})$
+is everywhere defined on $D(\mathcal{A})$, then we obtain a canonical exact
+functor
+$$
+R\Hom(K^\bullet, -) : D(\mathcal{A}) \longrightarrow D(E, \text{d})
+$$
+of triangulated categories which reduces to the usual one on taking
+associated complexes of abelian groups.
+\end{lemma}
+
+\begin{proof}
+Note that we have an associated functor
+$K(\mathcal{A}) \to K(\text{Mod}_{(E, \text{d})})$ by
+Lemma \ref{lemma-construction}.
+We claim this functor is an exact functor of triangulated categories.
+Namely, let $f : A^\bullet \to B^\bullet$ be a map of complexes
+of $\mathcal{A}$. Then a computation shows that
+$$
+\Hom_{\text{Comp}^{dg}(\mathcal{A})}(K^\bullet, C(f)^\bullet)
+=
+C\left(
+\Hom_{\text{Comp}^{dg}(\mathcal{A})}(K^\bullet, A^\bullet) \to
+\Hom_{\text{Comp}^{dg}(\mathcal{A})}(K^\bullet, B^\bullet)
+\right)
+$$
+where the right hand side is the cone in $\text{Mod}_{(E, \text{d})}$
+defined earlier in this chapter.
+This shows that our functor is compatible with cones, hence with
+distinguished triangles. Let $X^\bullet$ be an object of $K(\mathcal{A})$.
+Consider the category of quasi-isomorphisms $s : X^\bullet \to Y^\bullet$.
+We are given that the functor
+$(s : X^\bullet \to Y^\bullet) \mapsto \Hom_\mathcal{A}(K^\bullet, Y^\bullet)$
+is essentially constant when viewed in $D(\textit{Ab})$.
+But since the forgetful functor $D(E, \text{d}) \to D(\textit{Ab})$
+is compatible with taking cohomology, the same thing is true in
+$D(E, \text{d})$. This proves the lemma.
+\end{proof}
+
+\noindent
+{\bf Warning:} Although the lemma holds as stated and may be useful
as stated, the differential graded algebra $E$ isn't the ``correct'' one unless
+$H^n(E) = \Ext^n_{D(\mathcal{A})}(K^\bullet, K^\bullet)$
+for all $n \in \mathbf{Z}$.
+
+
+
+
+
+\section{Derived tensor product}
+\label{section-base-change}
+
+\noindent
+This section is analogous to More on Algebra, Section
+\ref{more-algebra-section-derived-base-change}.
+
+\medskip\noindent
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded algebras over $R$. Let $N$ be a
+differential graded $(A, B)$-bimodule. Consider the functor
+\begin{equation}
+\label{equation-bc}
+\text{Mod}_{(A, \text{d})}
+\longrightarrow
+\text{Mod}_{(B, \text{d})},\quad
+M \longmapsto M \otimes_A N
+\end{equation}
+defined in Section \ref{section-bimodules-tensor}.
+
+\begin{lemma}
+\label{lemma-bc-homotopy}
+The functor (\ref{equation-bc}) defines an exact functor
+of triangulated categories
+$K(\text{Mod}_{(A, \text{d})}) \to K(\text{Mod}_{(B, \text{d})})$.
+\end{lemma}
+
+\begin{proof}
+Via Lemma \ref{lemma-tensor} and
+Remark \ref{remark-shift-tensor-no-sign}
+this follows from the general principle of
+Lemma \ref{lemma-functor-between-ABC}.
+\end{proof}
+
+\noindent
+At this point we can consider the diagram
+$$
+\xymatrix{
+K(\text{Mod}_{(A, \text{d})}) \ar[d] \ar[rr]_{- \otimes_A N} \ar[rrd]_F & &
+K(\text{Mod}_{(B, \text{d})}) \ar[d] \\
+D(A, \text{d}) \ar@{..>}[rr] & &
+D(B, \text{d})
+}
+$$
+The dotted arrow that we will construct below will be the
+{\it left derived functor} of the composition $F$.
+({\it Warning}: the diagram will not commute.)
+Namely, in the general setting of
+Derived Categories, Section \ref{derived-section-derived-functors}
+we want to compute the
+left derived functor of $F$ with respect to the multiplicative system of
+quasi-isomorphisms in $K(\text{Mod}_{(A, \text{d})})$.
+
+\begin{lemma}
+\label{lemma-derived-bc}
+In the situation above, the left derived functor of $F$ exists.
+We denote it
+$- \otimes_A^\mathbf{L} N : D(A, \text{d}) \to D(B, \text{d})$.
+\end{lemma}
+
+\begin{proof}
+We will use
+Derived Categories, Lemma \ref{derived-lemma-find-existence-computes}
+to prove this. As our collection $\mathcal{P}$
+of objects we will use the objects with property (P).
+Property (1) was shown in Lemma \ref{lemma-resolve}.
+Property (2) holds because if $s : P \to P'$ is a quasi-isomorphism
+of modules with property (P), then $s$ is a homotopy equivalence
+by Lemma \ref{lemma-hom-derived}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-functoriality-bc}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras. Let $f : N \to N'$ be a
+homomorphism of differential graded $(A, B)$-bimodules.
+Then $f$ induces a morphism of functors
+$$
+1\otimes f :
+- \otimes_A^\mathbf{L} N
+\longrightarrow
+- \otimes_A^\mathbf{L} N'
+$$
+If $f$ is a quasi-isomorphism, then $1 \otimes f$ is an isomorphism of
+functors.
+\end{lemma}
+
+\begin{proof}
+Let $M$ be a differential graded $A$-module with property (P).
+Then $1 \otimes f : M \otimes_A N \to M \otimes_A N'$ is a
+map of differential graded $B$-modules. Moreover, this is functorial
+with respect to $M$. Since the functors
+$- \otimes_A^\mathbf{L} N$ and $- \otimes_A^\mathbf{L} N'$ are
+computed by tensoring on objects with property (P)
+(Lemma \ref{lemma-derived-bc}) we obtain a transformation of functors
+as indicated.
+
+\medskip\noindent
+Assume that $f$ is a quasi-isomorphism. Let $F_\bullet$ be the
+given filtration on $M$. Observe that
+$M \otimes_A N = \colim F_i(M) \otimes_A N$ and
+$M \otimes_A N' = \colim F_i(M) \otimes_A N'$.
+Hence it suffices to show that
+$F_n(M) \otimes_A N \to F_n(M) \otimes_A N'$
+is a quasi-isomorphism (filtered colimits are exact, see
+Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}).
+Since the inclusions $F_n(M) \to F_{n + 1}(M)$
+are split as maps of graded $A$-modules we see that
+$$
+0 \to F_n(M) \otimes_A N \to F_{n + 1}(M) \otimes_A N \to
+F_{n + 1}(M)/F_n(M) \otimes_A N \to 0
+$$
+is a short exact sequence of differential graded $B$-modules.
+There is a similar sequence for $N'$ and $f$ induces a map
+of short exact sequences. Hence by induction on $n$ (starting with $n = -1$
+when $F_{-1}(M) = 0$) we conclude that it suffices to show that
+the map $F_{n + 1}(M)/F_n(M) \otimes_A N \to F_{n + 1}(M)/F_n(M) \otimes_A N'$
+is a quasi-isomorphism. This is true because $F_{n + 1}(M)/F_n(M)$
+is a direct sum of shifts of $A$ and the result is true for $A[k]$
+as $f : N \to N'$ is a quasi-isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-bc}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras. Let $N$ be a differential graded
+$(A, B)$-bimodule which has property (P) as a left differential graded
+$A$-module. Then $M \otimes_A^\mathbf{L} N$ is computed by
+$M \otimes_A N$ for all differential graded $A$-modules $M$.
+\end{lemma}
+
+\begin{proof}
+Let $f : M \to M'$ be a homomorphism of differential graded $A$-modules
+which is a quasi-isomorphism. We claim that $f \otimes \text{id} :
+M \otimes_A N \to M' \otimes_A N$ is a quasi-isomorphism. If this
+is true, then by the construction of the derived tensor product
+in the proof of Lemma \ref{lemma-derived-bc} we obtain the desired result.
+The construction of the map $f \otimes \text{id}$ only depends
+on the left differential graded $A$-module structure on $N$.
+Moreover, we have $M \otimes_A N = N \otimes_{A^{opp}} M =
+N \otimes_{A^{opp}}^\mathbf{L} M$ because $N$ has property (P) as
+a differential graded $A^{opp}$-module. Hence the claim follows
+from Lemma \ref{lemma-functoriality-bc}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-hom-adjoint}
+Let $R$ be a ring.
+Let $(A, \text{d})$ and $(B, \text{d})$ be differential graded $R$-algebras.
+Let $N$ be a differential graded $(A, B)$-bimodule.
+Then the functor
+$$
+- \otimes_A^\mathbf{L} N : D(A, \text{d}) \longrightarrow D(B, \text{d})
+$$
+of Lemma \ref{lemma-derived-bc} is a left adjoint to the functor
+$$
+R\Hom(N, -) : D(B, \text{d}) \longrightarrow D(A, \text{d})
+$$
+of Lemma \ref{lemma-derived-restriction}.
+\end{lemma}
+
+\begin{proof}
+This follows from Derived Categories, Lemma
+\ref{derived-lemma-pre-derived-adjoint-functors-general}
+and the fact that $- \otimes_A N$ and
+$\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, -)$ are adjoint by
+Lemma \ref{lemma-tensor-hom-adjunction}.
+\end{proof}
+
+\begin{example}
+\label{example-map-hom-tensor}
+Let $R$ be a ring. Let $(A, \text{d}) \to (B, \text{d})$ be a
+homomorphism of differential graded $R$-algebras. Then we can
+view $B$ as a differential graded $(A, B)$-bimodule and we get a functor
+$$
+- \otimes_A B : D(A, \text{d}) \longrightarrow D(B, \text{d})
+$$
+By Lemma \ref{lemma-tensor-hom-adjoint} the right adjoint of this
+is the functor $R\Hom(B, -)$. For a differential graded $B$-module
+let us denote $N_A$ the differential graded $A$-module obtained
+from $N$ by restriction via $A \to B$. Then we clearly have
+a canonical isomorphism
+$$
+\Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(B, N) \longrightarrow N_A,\quad
+f \longmapsto f(1)
+$$
+functorial in the $B$-module $N$. Thus we see that
+$R\Hom(B, -)$ is the restriction functor and we obtain
+$$
+\Hom_{D(A, \text{d})}(M, N_A) =
+\Hom_{D(B, \text{d})}(M \otimes^\mathbf{L}_A B, N)
+$$
+bifunctorially in $M$ and $N$ exactly as in the case of commutative rings.
+Finally, observe that restriction is a tensor functor as well,
+since $N_A = N \otimes_B {}_BB_A = N \otimes_B^\mathbf{L} {}_BB_A$
+where ${}_BB_A$ is $B$ viewed as a differential graded $(B, A)$-bimodule.
+\end{example}
+
+\begin{lemma}
+\label{lemma-tensor-with-compact-fully-faithful}
+With notation and assumptions as in Lemma \ref{lemma-tensor-hom-adjoint}.
+Assume
+\begin{enumerate}
+\item $N$ defines a compact object of $D(B, \text{d})$, and
+\item the map $H^k(A) \to \Hom_{D(B, \text{d})}(N, N[k])$ is an
+isomorphism for all $k \in \mathbf{Z}$.
+\end{enumerate}
+Then the functor $-\otimes_A^\mathbf{L} N$ is fully faithful.
+\end{lemma}
+
+\begin{proof}
+Our functor has a right adjoint given by
+$R\Hom(N, -)$ by Lemma \ref{lemma-tensor-hom-adjoint}.
+By Categories, Lemma \ref{categories-lemma-adjoint-fully-faithful}
+it suffices to show that for a differential graded $A$-module $M$
+the map
+$$
+M \longrightarrow R\Hom(N, M \otimes_A^\mathbf{L} N)
+$$
+is an isomorphism in $D(A, \text{d})$. For this it suffices to show that
+$$
+H^n(M) \longrightarrow
+\text{Ext}^n_{D(B, \text{d})}(N, M \otimes_A^\mathbf{L} N)
+$$
+is an isomorphism, see Lemma \ref{lemma-derived-restriction-exts}.
+Since $N$ is a compact object the right hand side commutes
+with direct sums. Thus by Remark \ref{remark-P-resolution}
+it suffices to prove this map is an isomorphism for $M = A[k]$.
+Since $(A[k] \otimes_A^\mathbf{L} N) = N[k]$ by
+Remark \ref{remark-shift-tensor-no-sign},
+assumption (2) on $N$ is that the result holds for these.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-K-flat}
+Let $R \to R'$ be a ring map. Let $(A, \text{d})$ be a differential
+graded $R$-algebra. Let $(A', \text{d})$ be the base change, i.e.,
+$A' = A \otimes_R R'$. If $A$ is K-flat as a complex of $R$-modules,
+then
+\begin{enumerate}
+\item $- \otimes_A^\mathbf{L} A' : D(A, \text{d}) \to D(A', \text{d})$
+is equal to the left derived functor of
+$$
+K(A, \text{d}) \longrightarrow K(A', \text{d}),\quad
+M \longmapsto M \otimes_R R'
+$$
+\item the diagram
+$$
+\xymatrix{
+D(A, \text{d}) \ar[r]_{- \otimes_A^\mathbf{L} A'} \ar[d]_{\text{restriction}} &
+D(A', \text{d}) \ar[d]^{\text{restriction}} \\
+D(R) \ar[r]^{- \otimes_R^\mathbf{L} R'} & D(R')
+}
+$$
+commutes, and
+\item if $M$ is K-flat as a complex of $R$-modules, then the
+differential graded $A'$-module $M \otimes_R R'$ represents
+$M \otimes_A^\mathbf{L} A'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For any differential graded $A$-module $M$ there is a canonical map
+$$
+c_M : M \otimes_R R' \longrightarrow M \otimes_A A'
+$$
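+Explicitly, writing $A' = A \otimes_R R'$, the map $c_M$ can be
+described on elements (a routine check which we only sketch here) by
+$$
+m \otimes r' \longmapsto m \otimes (1 \otimes r')
+$$
+This is compatible with the differentials and is $A'$-linear for the
+evident $A'$-module structures on source and target.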
+Let $P$ be a differential graded $A$-module with property (P).
+We claim that $c_P$ is an isomorphism and that $P$ is K-flat
+as a complex of $R$-modules. This will prove all the results
+stated in the lemma by formal arguments using the definition
+of derived tensor product in Lemma \ref{lemma-derived-bc} and
+More on Algebra, Section \ref{more-algebra-section-derived-tensor-product}.
+
+\medskip\noindent
+Let $F_\bullet$ be the filtration on $P$ showing that $P$ has property (P).
+Note that $c_A$ is an isomorphism and $A$ is K-flat as a complex
+of $R$-modules by assumption. Hence the same is true for
+direct sums of shifts of $A$ (you can use
+More on Algebra, Lemma \ref{more-algebra-lemma-colimit-K-flat}
+to deal with direct sums if you like).
+Hence this holds for the complexes $F_{p + 1}P/F_pP$.
+Since the short exact sequences
+$$
+0 \to F_pP \to F_{p + 1}P \to F_{p + 1}P/F_pP \to 0
+$$
+are split exact as sequences of graded modules, we can argue
+by induction that $c_{F_pP}$ is an isomorphism for all $p$
+and that $F_pP$ is K-flat as a complex of $R$-modules (use
+More on Algebra, Lemma \ref{more-algebra-lemma-K-flat-two-out-of-three}).
+Finally, using that $P = \colim F_pP$ we conclude that
+$c_P$ is an isomorphism and that $P$ is K-flat
+as a complex of $R$-modules (use
+More on Algebra, Lemma \ref{more-algebra-lemma-colimit-K-flat}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RHom-is-tensor}
+Let $R$ be a ring.
+Let $(A, \text{d})$ and $(B, \text{d})$ be differential graded $R$-algebras.
+Let $T$ be a differential graded $(A, B)$-bimodule.
+Assume
+\begin{enumerate}
+\item $T$ defines a compact object of $D(B, \text{d})$, and
+\item $S = \Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(T, B)$
+represents $R\Hom(T, B)$ in $D(A, \text{d})$.
+\end{enumerate}
+Then $S$ has a structure of a differential graded $(B, A)$-bimodule
+and there is an isomorphism
+$$
+N \otimes_B^\mathbf{L} S \longrightarrow R\Hom(T, N)
+$$
+functorial in $N$ in $D(B, \text{d})$.
+\end{lemma}
+
+\begin{proof}
+Write $\mathcal{B} = \text{Mod}^{dg}_{(B, \text{d})}$.
+The right $A$-module structure on $S$ comes from the map
+$A \to \Hom_\mathcal{B}(T, T)$ and the composition
+$\Hom_\mathcal{B}(T, B) \otimes \Hom_\mathcal{B}(T, T)
+\to \Hom_\mathcal{B}(T, B)$ defined in Example \ref{example-dgm-dg-cat}.
+Using this multiplication a second time there is a map
+$$
+c_N :
+N \otimes_B S = \Hom_\mathcal{B}(B, N) \otimes_B \Hom_\mathcal{B}(T, B)
+\longrightarrow
+\Hom_\mathcal{B}(T, N)
+$$
+functorial in $N$. Given $N$ we can choose quasi-isomorphisms
+$P \to N \to I$ where $P$, resp.\ $I$ is a differential graded $B$-module
+with property (P), resp.\ (I). Then using $c_N$ we obtain a map
+$P \otimes_B S \to \Hom_\mathcal{B}(T, I)$ between the objects
+representing $N \otimes_B^\mathbf{L} S$ and $R\Hom(T, N)$.
+Clearly this defines a transformation of functors $c$ as in the lemma.
+
+\medskip\noindent
+To prove that $c$ is an isomorphism of functors, we may
+assume $N$ is a differential graded $B$-module which
+has property (P). Since $T$ defines a compact object in
+$D(B, \text{d})$ and since both sides of the arrow define
+exact functors of triangulated categories, we reduce using
+Lemma \ref{lemma-property-P-sequence}
+to the case where $N$ has a finite filtration whose
+graded pieces are direct sums of $B[k]$. Using again that
+both sides of the arrow are exact functors of triangulated
+categories and compactness of $T$ we reduce to
+the case $N = B[k]$. Assumption (2) is exactly the
+assumption that $c$ is an isomorphism in this case.
+\end{proof}
+
+
+
+
+
+
+\section{Composition of derived tensor products}
+\label{section-compose-tensor-functors}
+
+\noindent
+We encourage the reader to skip this section.
+
+\medskip\noindent
+Let $R$ be a ring. Let $(A, \text{d})$, $(B, \text{d})$, and
+$(C, \text{d})$ be differential graded $R$-algebras.
+Let $N$ be a differential graded $(A, B)$-bimodule.
+Let $N'$ be a differential graded $(B, C)$-bimodule.
+We denote $N_B$ the bimodule $N$ viewed as a differential graded
+$B$-module (forgetting about the $A$-structure). There is a canonical map
+\begin{equation}
+\label{equation-plain-versus-derived}
+N_B \otimes_B^\mathbf{L} N'
+\longrightarrow
+(N \otimes_B N')_C
+\end{equation}
+in $D(C, \text{d})$. Here $(N \otimes_B N')_C$ denotes the
+$(A, C)$-bimodule $N \otimes_B N'$ viewed as a
+differential graded $C$-module. Namely, this map
+comes from the fact that the derived tensor product always maps to the
+plain tensor product (as it is a left derived functor).
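+Unwinding the construction in the proof of Lemma \ref{lemma-derived-bc},
+the map (\ref{equation-plain-versus-derived}) can be made explicit as
+follows: choose a quasi-isomorphism $Q \to N_B$ of differential graded
+$B$-modules where $Q$ has property (P); then
+(\ref{equation-plain-versus-derived}) is represented by the induced map
+$$
+Q \otimes_B N'
+\longrightarrow
+N_B \otimes_B N' = (N \otimes_B N')_C
+$$
+of differential graded $C$-modules.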
+
+\begin{lemma}
+\label{lemma-compose-tensor-functors-general}
+Let $R$ be a ring. Let $(A, \text{d})$, $(B, \text{d})$, and
+$(C, \text{d})$ be differential graded $R$-algebras.
+Let $N$ be a differential graded $(A, B)$-bimodule.
+Let $N'$ be a differential graded $(B, C)$-bimodule.
+Assume (\ref{equation-plain-versus-derived}) is an isomorphism.
+Then the composition
+$$
+\xymatrix{
+D(A, \text{d}) \ar[rr]^{- \otimes_A^\mathbf{L} N} & &
+D(B, \text{d}) \ar[rr]^{- \otimes_B^\mathbf{L} N'} & &
+D(C, \text{d})
+}
+$$
+is isomorphic to $- \otimes_A^\mathbf{L} N''$ with
+$N'' = N \otimes_B N'$ viewed as $(A, C)$-bimodule.
+\end{lemma}
+
+\begin{proof}
+Let us define a transformation of functors
+$$
+(- \otimes_A^\mathbf{L} N) \otimes_B^\mathbf{L} N'
+\longrightarrow
+- \otimes_A^\mathbf{L} N''
+$$
+To do this, let
+$M$ be a differential graded $A$-module with property (P).
+According to the construction of the functor $- \otimes_A^\mathbf{L} N''$
+of the proof of Lemma \ref{lemma-derived-bc} the plain tensor
+product $M \otimes_A N''$ represents $M \otimes_A^\mathbf{L} N''$
+in $D(C, \text{d})$. Then we write
+$$
+M \otimes_A N'' =
+M \otimes_A (N \otimes_B N') =
+(M \otimes_A N) \otimes_B N'
+$$
+The module $M \otimes_A N$ represents $M \otimes_A^\mathbf{L} N$
+in $D(B, \text{d})$. Choose a quasi-isomorphism $Q \to M \otimes_A N$
+where $Q$ is a differential graded $B$-module with property (P). Then
+$Q \otimes_B N'$ represents
+$(M \otimes_A^\mathbf{L} N) \otimes_B^\mathbf{L} N'$ in
+$D(C, \text{d})$.
+Thus we can define our map via
+$$
+(M \otimes_A^\mathbf{L} N) \otimes_B^\mathbf{L} N' =
+Q \otimes_B N' \to
+M \otimes_A N \otimes_B N' =
+M \otimes_A^\mathbf{L} N''
+$$
+The construction of this map is functorial in $M$ and compatible
+with distinguished triangles and direct sums; we omit the details.
+Consider the property $T$ of objects $M$ of $D(A, \text{d})$
+expressing that this map is an isomorphism. Then
+\begin{enumerate}
+\item if $T$ holds for $M_i$ then $T$ holds for $\bigoplus M_i$,
+\item if $T$ holds for $2$-out-of-$3$ in a distinguished
+triangle, then it holds for the third, and
+\item $T$ holds for $A[k]$ because here we obtain a shift
+of the map (\ref{equation-plain-versus-derived}) which we
+have assumed is an isomorphism.
+\end{enumerate}
+Thus by Remark \ref{remark-P-resolution} property $T$
+always holds and the proof is complete.
+\end{proof}
+
+\noindent
+Let $R$ be a ring. Let $(A, \text{d})$, $(B, \text{d})$, and
+$(C, \text{d})$ be differential graded $R$-algebras.
+We temporarily denote $(A \otimes_R B)_B$ the differential
+graded algebra $A \otimes_R B$ viewed as a (right) differential
+graded $B$-module, and ${}_B(B \otimes_R C)_C$ the differential
+graded algebra $B \otimes_R C$ viewed as a differential graded
+$(B, C)$-bimodule. Then there is a canonical map
+\begin{equation}
+\label{equation-plain-versus-derived-algebras}
+(A \otimes_R B)_B \otimes_B^\mathbf{L} {}_B(B \otimes_R C)_C
+\longrightarrow
+(A \otimes_R B \otimes_R C)_C
+\end{equation}
+in $D(C, \text{d})$ where $(A \otimes_R B \otimes_R C)_C$
+denotes the differential
+graded $R$-algebra $A \otimes_R B \otimes_R C$ viewed as a
+(right) differential graded $C$-module. Namely, this map
+comes from the identification
+$$
+(A \otimes_R B)_B \otimes_B {}_B(B \otimes_R C)_C =
+(A \otimes_R B \otimes_R C)_C
+$$
+and the fact that the derived tensor product always maps to the
+plain tensor product (as it is a left derived functor).
+
+\begin{lemma}
+\label{lemma-compose-tensor-functors-general-algebra}
+Let $R$ be a ring. Let $(A, \text{d})$, $(B, \text{d})$, and
+$(C, \text{d})$ be differential graded $R$-algebras. Assume
+that (\ref{equation-plain-versus-derived-algebras}) is an isomorphism.
+Let $N$ be a differential graded $(A, B)$-bimodule.
+Let $N'$ be a differential graded $(B, C)$-bimodule.
+Then the composition
+$$
+\xymatrix{
+D(A, \text{d}) \ar[rr]^{- \otimes_A^\mathbf{L} N} & &
+D(B, \text{d}) \ar[rr]^{- \otimes_B^\mathbf{L} N'} & &
+D(C, \text{d})
+}
+$$
+is isomorphic to $- \otimes_A^\mathbf{L} N''$ for a differential graded
+$(A, C)$-bimodule $N''$ described in the proof.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-functoriality-bc} we may replace $N$ and $N'$ by
+quasi-isomorphic bimodules. Thus we may assume $N$, resp.\ $N'$
+has property (P) as differential graded
+$(A, B)$-bimodule, resp.\ $(B, C)$-bimodule, see
+Lemma \ref{lemma-bimodule-resolve}. We claim the lemma holds
+with the $(A, C)$-bimodule $N'' = N \otimes_B N'$.
+To prove this, it suffices to show that
+$$
+N_B \otimes_B^\mathbf{L} N' \longrightarrow (N \otimes_B N')_C
+$$
+is an isomorphism in $D(C, \text{d})$, see
+Lemma \ref{lemma-compose-tensor-functors-general}.
+
+\medskip\noindent
+Let $F_\bullet$ be the filtration on $N$ as in property (P) for bimodules.
+By Lemma \ref{lemma-bimodule-property-P-sequence}
+there is a short exact sequence
+$$
+0 \to
+\bigoplus\nolimits F_iN \to
+\bigoplus\nolimits F_iN \to N \to 0
+$$
+of differential graded $(A, B)$-bimodules which is split as a sequence
+of graded $(A, B)$-bimodules. A fortiori this is an admissible short exact
+sequence of differential graded $B$-modules and this produces a distinguished
+triangle
+$$
+\bigoplus\nolimits F_iN_B \to
+\bigoplus\nolimits F_iN_B \to N_B \to
+\bigoplus\nolimits F_iN_B[1]
+$$
+in $D(B, \text{d})$. Using that $- \otimes_B^\mathbf{L} N'$
+is an exact functor of triangulated categories and commutes
+with direct sums and using that $- \otimes_B N'$ transforms
+admissible exact sequences into admissible exact sequences
+and commutes with direct sums we reduce to proving
+that
+$$
+(F_pN)_B \otimes_B^\mathbf{L} N' \longrightarrow (F_pN)_B \otimes_B N'
+$$
+is a quasi-isomorphism for all $p$. Repeating the argument
+with the short exact sequences of $(A, B)$-bimodules
+$$
+0 \to F_pN \to F_{p + 1}N \to F_{p + 1}N/F_pN \to 0
+$$
+which are split as graded $(A, B)$-bimodules
+we reduce to showing the same statement for $F_{p + 1}N/F_pN$.
+Since these modules are direct sums of shifts of $(A \otimes_R B)_B$
+we reduce to showing that
+$$
+(A \otimes_R B)_B \otimes_B^\mathbf{L} N'
+\longrightarrow
+(A \otimes_R B)_B \otimes_B N'
+$$
+is a quasi-isomorphism.
+
+\medskip\noindent
+Choose a filtration $F_\bullet$ on $N'$ as in property (P) for bimodules.
+Choose a quasi-isomorphism $P \to (A \otimes_R B)_B$
+of differential graded $B$-modules where $P$ has property (P).
+We have to show that
+$P \otimes_B N' \to (A \otimes_R B)_B \otimes_B N'$ is
+a quasi-isomorphism because $P \otimes_B N'$ represents
+$(A \otimes_R B)_B \otimes_B^\mathbf{L} N'$ in $D(C, \text{d})$
+by the construction in Lemma \ref{lemma-derived-bc}.
+As $N' = \colim F_pN'$ we find
+that it suffices to show that
+$P \otimes_B F_pN' \to (A \otimes_R B)_B \otimes_B F_pN'$
+is a quasi-isomorphism. Using the short exact sequences
+$0 \to F_pN' \to F_{p + 1}N' \to F_{p + 1}N'/F_pN' \to 0$
+which are split as graded $(B, C)$-bimodules we reduce to showing
+$P \otimes_B F_{p + 1}N'/F_pN' \to
+(A \otimes_R B)_B \otimes_B F_{p + 1}N'/F_pN'$
+is a quasi-isomorphism for all $p$.
+Then finally using that $F_{p + 1}N'/F_pN'$
+is a direct sum of shifts of ${}_B(B \otimes_R C)_C$
+we conclude that it suffices to show that
+$$
+P \otimes_B {}_B(B \otimes_R C)_C \to
+(A \otimes_R B)_B \otimes_B {}_B(B \otimes_R C)_C
+$$
+is a quasi-isomorphism. Since $P \to (A \otimes_R B)_B$
+is a resolution by a module satisfying property (P)
+this map of differential graded $C$-modules
+represents the morphism (\ref{equation-plain-versus-derived-algebras})
+in $D(C, \text{d})$ and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-tensor-functors}
+Let $R$ be a ring. Let $(A, \text{d})$, $(B, \text{d})$, and
+$(C, \text{d})$ be differential graded $R$-algebras.
+If $C$ is K-flat as a complex of $R$-modules, then
+(\ref{equation-plain-versus-derived-algebras})
+is an isomorphism and the conclusion of
+Lemma \ref{lemma-compose-tensor-functors-general-algebra} is valid.
+\end{lemma}
+
+\begin{proof}
+Choose a quasi-isomorphism $P \to (A \otimes_R B)_B$ of differential
+graded $B$-modules, where $P$ has property (P). Then we have to show
+that
+$$
+P \otimes_B (B \otimes_R C) \longrightarrow
+(A \otimes_R B) \otimes_B (B \otimes_R C)
+$$
+is a quasi-isomorphism. Using the identification
+$P \otimes_B (B \otimes_R C) = P \otimes_R C$,
+$p \otimes (b \otimes c) \mapsto pb \otimes c$,
+we are equivalently looking at
+$$
+P \otimes_R C \longrightarrow
+A \otimes_R B \otimes_R C
+$$
+This is a quasi-isomorphism if $C$ is K-flat as a complex of $R$-modules by
+More on Algebra, Lemma \ref{more-algebra-lemma-K-flat-quasi-isomorphism}.
+\end{proof}
+
+
+
+
+
+\section{Variant of derived tensor product}
+\label{section-variant-base-change}
+
+\noindent
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. Then we have the functors
+$$
+\text{Comp}(\mathcal{O}) \to K(\mathcal{O}) \to D(\mathcal{O})
+$$
+and as we've seen above we have a differential graded enhancement
+$\text{Comp}^{dg}(\mathcal{O})$. Namely, this is the differential
+graded category of Example \ref{example-category-complexes} associated
+to the abelian category $\textit{Mod}(\mathcal{O})$.
+Let $K^\bullet$ be a complex of $\mathcal{O}$-modules, in other
+words an object of $\text{Comp}^{dg}(\mathcal{O})$. Set
+$$
+(E, \text{d}) =
+\Hom_{\text{Comp}^{dg}(\mathcal{O})}(K^\bullet, K^\bullet)
+$$
+This is a differential graded $\mathbf{Z}$-algebra. We claim there is
+an analogue of the derived base change in this situation.
+
+\begin{lemma}
+\label{lemma-tensor-with-complex}
+In the situation above there is a functor
+$$
+- \otimes_E K^\bullet :
+\text{Mod}^{dg}_{(E, \text{d})}
+\longrightarrow
+\text{Comp}^{dg}(\mathcal{O})
+$$
+of differential graded categories. This functor sends $E$ to $K^\bullet$
+and commutes with direct sums.
+\end{lemma}
+
+\begin{proof}
+Let $M$ be a differential graded $E$-module. For every object $U$ of
+$\mathcal{C}$ the complex $K^\bullet(U)$ is a left differential
+graded $E$-module as well as a right $\mathcal{O}(U)$-module.
+The actions commute, so we have a bimodule.
+Thus, by the constructions in
+Sections \ref{section-tensor-product} and \ref{section-bimodules}
+we can form the tensor product
+$$
+M \otimes_E K^\bullet(U)
+$$
+which is a differential graded $\mathcal{O}(U)$-module, i.e., a complex
+of $\mathcal{O}(U)$-modules. This construction is functorial with respect
+to $U$, hence we can sheafify to get a complex of $\mathcal{O}$-modules
+which we denote
+$$
+M \otimes_E K^\bullet
+$$
+Moreover, for each $U$ the construction determines a functor
+$\text{Mod}^{dg}_{(E, \text{d})} \to \text{Comp}^{dg}(\mathcal{O}(U))$
+of differential graded categories by Lemma \ref{lemma-tensor}.
+It is therefore clear that we obtain a functor as stated in the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-with-complex-homotopy}
+The functor of Lemma \ref{lemma-tensor-with-complex} defines an exact functor
+of triangulated categories
+$K(\text{Mod}_{(E, \text{d})}) \to K(\mathcal{O})$.
+\end{lemma}
+
+\begin{proof}
+The functor induces a functor between homotopy categories by
+Lemma \ref{lemma-functorial}.
+We have to show that $- \otimes_E K^\bullet$ transforms distinguished
+triangles into distinguished triangles.
+Suppose that $0 \to K \to L \to M \to 0$ is an admissible short
+exact sequence of differential graded $E$-modules. Let $s : M \to L$ be
+a graded $E$-module homomorphism which is left inverse to $L \to M$.
+Then $s$ defines a map $M \otimes_E K^\bullet \to L \otimes_E K^\bullet$
+of graded $\mathcal{O}$-modules (i.e., respecting $\mathcal{O}$-module
+structure and grading, but not differentials)
+which is left inverse to $L \otimes_E K^\bullet \to M \otimes_E K^\bullet$.
+Thus we see that
+$$
+0 \to K \otimes_E K^\bullet \to L \otimes_E K^\bullet \to
+M \otimes_E K^\bullet \to 0
+$$
+is a termwise split short exact sequence of complexes, i.e., it
+defines a distinguished triangle in $K(\mathcal{O})$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-with-complex-derived}
+The functor $K(\text{Mod}_{(E, \text{d})}) \to K(\mathcal{O})$
+of Lemma \ref{lemma-tensor-with-complex-homotopy} has a left derived
+version defined on all of $D(E, \text{d})$. We denote it
+$- \otimes_E^\mathbf{L} K^\bullet : D(E, \text{d}) \to D(\mathcal{O})$.
+\end{lemma}
+
+\begin{proof}
+We will use
+Derived Categories, Lemma \ref{derived-lemma-find-existence-computes}
+to prove this. As our collection $\mathcal{P}$
+of objects we will use the objects with property (P).
+Property (1) was shown in Lemma \ref{lemma-resolve}.
+Property (2) holds because if $s : P \to P'$ is a quasi-isomorphism
+of modules with property (P), then $s$ is a homotopy equivalence
+by Lemma \ref{lemma-hom-derived}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-upgrade-tensor-with-complex-derived}
+Let $R$ be a ring. Let $\mathcal{C}$ be a site. Let $\mathcal{O}$
+be a sheaf of commutative $R$-algebras. Let $K^\bullet$
+be a complex of $\mathcal{O}$-modules.
+The functor
+of Lemma \ref{lemma-tensor-with-complex-derived} has the following
+property: For every $M$, $N$ in $D(E, \text{d})$ there is a
+canonical map
+$$
+R\Hom(M, N)
+\longrightarrow
+R\Hom_\mathcal{O}(M \otimes_E^\mathbf{L} K^\bullet,
+N \otimes_E^\mathbf{L} K^\bullet)
+$$
+in $D(R)$ which on cohomology modules gives the maps
+$$
+\Ext^n_{D(E, \text{d})}(M, N) \to
+\Ext^n_{D(\mathcal{O})}
+(M \otimes_E^\mathbf{L} K^\bullet, N \otimes_E^\mathbf{L} K^\bullet)
+$$
+induced by the functor $- \otimes_E^\mathbf{L} K^\bullet$.
+\end{lemma}
+
+\begin{proof}
+The right hand side of the arrow is the global derived hom introduced
+in Cohomology on Sites, Section \ref{sites-cohomology-section-global-RHom}
+which has the correct cohomology modules.
+For the left hand side we think of $M$ as a $(R, E)$-bimodule and
+we have the derived $\Hom$ introduced in Section \ref{section-restriction}
+which also has the correct cohomology modules.
+To prove the lemma we may assume $M$ and $N$ are differential graded
+$E$-modules with property (P); this does not change the left hand
+side of the arrow by
+Lemma \ref{lemma-functoriality-derived-restriction}.
+By Lemma \ref{lemma-compute-derived-restriction}
+this means that the left hand side of the arrow becomes
+$\Hom_{\text{Mod}^{dg}_{(E, \text{d})}}(M, N)$.
+In Lemmas \ref{lemma-tensor-with-complex},
+\ref{lemma-tensor-with-complex-homotopy}, and
+\ref{lemma-tensor-with-complex-derived}
+we have constructed a functor
+$$
+- \otimes_E K^\bullet :
+\text{Mod}^{dg}_{(E, \text{d})}
+\longrightarrow
+\text{Comp}^{dg}(\mathcal{O})
+$$
+of differential graded categories
+and we have shown that $- \otimes_E^\mathbf{L} K^\bullet$ is computed
+by evaluating this functor
+on differential graded $E$-modules with property (P).
+Hence we obtain a map of complexes of $R$-modules
+$$
+\Hom_{\text{Mod}^{dg}_{(E, \text{d})}}(M, N)
+\longrightarrow
+\Hom_{\text{Comp}^{dg}(\mathcal{O})}
+(M \otimes_E K^\bullet, N \otimes_E K^\bullet)
+$$
+For any complexes of $\mathcal{O}$-modules
+$\mathcal{F}^\bullet$, $\mathcal{G}^\bullet$ there
+is a canonical map
+$$
+\Hom_{\text{Comp}^{dg}(\mathcal{O})}
+(\mathcal{F}^\bullet, \mathcal{G}^\bullet) =
+\Gamma(\mathcal{C},
+\SheafHom^\bullet(\mathcal{F}^\bullet, \mathcal{G}^\bullet))
+\longrightarrow
+R\Hom_\mathcal{O}(\mathcal{F}^\bullet, \mathcal{G}^\bullet).
+$$
+Combining these maps
+we obtain the desired map of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-with-complex-hom-adjoint}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $K^\bullet$ be a complex of $\mathcal{O}$-modules.
+Then the functor
+$$
+- \otimes_E^\mathbf{L} K^\bullet :
+D(E, \text{d})
+\longrightarrow
+D(\mathcal{O})
+$$
+of Lemma \ref{lemma-tensor-with-complex-derived} is a left adjoint
+of the functor
+$$
+R\Hom(K^\bullet, -) : D(\mathcal{O}) \longrightarrow D(E, \text{d})
+$$
+of Lemma \ref{lemma-existence-of-derived}.
+\end{lemma}
+
+\begin{proof}
+The statement means that we have
+$$
+\Hom_{D(E, \text{d})}(M, R\Hom(K^\bullet, L^\bullet)) =
+\Hom_{D(\mathcal{O})}(M \otimes^\mathbf{L}_E K^\bullet, L^\bullet)
+$$
+bifunctorially in $M$ and $L^\bullet$. To see this we may replace $M$
+by a differential graded $E$-module $P$ with property (P).
+We also may replace $L^\bullet$ by a K-injective complex of
+$\mathcal{O}$-modules $I^\bullet$. The computation
+of the derived functors given in the lemmas referenced in the statement
+combined with Lemma \ref{lemma-hom-derived} translates the above into
+$$
+\Hom_{K(\text{Mod}_{(E, \text{d})})}
+(P, \Hom_\mathcal{B}(K^\bullet, I^\bullet)) =
+\Hom_{K(\mathcal{O})}(P \otimes_E K^\bullet, I^\bullet)
+$$
+where $\mathcal{B} = \text{Comp}^{dg}(\mathcal{O})$.
+There is an evaluation map from left to right functorial
+in $P$ and $I^\bullet$ (details omitted).
+Choose a filtration $F_\bullet$ on $P$ as in the definition of property (P).
+By Lemma \ref{lemma-property-P-sequence} and the fact that
+both sides of the equation are homological functors in $P$
+on $K(\text{Mod}_{(E, \text{d})})$
+we reduce to the case where $P$ is replaced by
+the differential graded $E$-module $\bigoplus F_iP$.
+Since both sides turn direct sums in the variable $P$
+into direct products we reduce to the case where $P$ is one of the
+differential graded $E$-modules $F_iP$.
+Since each $F_iP$ has a finite filtration (given by admissible
+monomorphisms) whose graded pieces are graded projective $E$-modules
+we reduce to the case where $P$ is a graded projective $E$-module.
+In this case we clearly have
+$$
+\Hom_{\text{Mod}^{dg}_{(E, \text{d})}}
+(P, \Hom_\mathcal{B}(K^\bullet, I^\bullet)) =
+\Hom_{\text{Comp}^{dg}(\mathcal{O})}(P \otimes_E K^\bullet, I^\bullet)
+$$
+as graded $\mathbf{Z}$-modules (because this statement reduces to the case
+$P = E[k]$ where it is obvious). As the isomorphism is compatible with
+differentials we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-in-compact-case}
+Let $(\mathcal{C}, \mathcal{O})$ be a ringed site.
+Let $K^\bullet$ be a complex of $\mathcal{O}$-modules.
+Assume
+\begin{enumerate}
+\item $K^\bullet$ represents a compact object of $D(\mathcal{O})$, and
+\item $E = \Hom_{\text{Comp}^{dg}(\mathcal{O})}(K^\bullet, K^\bullet)$
+computes the ext groups of $K^\bullet$ in $D(\mathcal{O})$.
+\end{enumerate}
+Then the functor
+$$
+- \otimes_E^\mathbf{L} K^\bullet :
+D(E, \text{d})
+\longrightarrow
+D(\mathcal{O})
+$$
+of Lemma \ref{lemma-tensor-with-complex-derived} is fully faithful.
+\end{lemma}
+
+\begin{proof}
+Because our functor has a right adjoint given by
+$R\Hom(K^\bullet, -)$ by Lemma \ref{lemma-tensor-with-complex-hom-adjoint}
+it suffices to show for a differential graded $E$-module $M$ that the map
+$$
+H^0(M) \longrightarrow
+\Hom_{D(\mathcal{O})}(K^\bullet, M \otimes_E^\mathbf{L} K^\bullet)
+$$
+is an isomorphism. We may assume that $M = P$ is a differential graded
+$E$-module which has property (P). Since $K^\bullet$ defines a
+compact object, we reduce using
+Lemma \ref{lemma-property-P-sequence}
+to the case where $P$ has a finite filtration whose graded pieces
+are direct sums of $E[k]$. Again using compactness we reduce
+to the case $P = E[k]$. The assumption on $K^\bullet$ is that
+the result holds for these.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Characterizing compact objects}
+\label{section-compact}
+
+\noindent
+Compact objects of additive categories are defined in
+Derived Categories, Definition \ref{derived-definition-compact-object}.
+In this section we characterize compact objects of the derived
+category of a differential graded algebra.
+
+\begin{remark}
+\label{remark-source-graded-projective}
+Let $(A, \text{d})$ be a differential graded algebra. Is there a
+characterization of those differential graded $A$-modules $P$
+for which we have
+$$
+\Hom_{K(A, \text{d})}(P, M) = \Hom_{D(A, \text{d})}(P, M)
+$$
+for all differential graded $A$-modules $M$? Let
+$\mathcal{D} \subset K(A, \text{d})$ be the full subcategory
+whose objects are the objects $P$ satisfying the above. Then $\mathcal{D}$
+is a strictly full saturated triangulated subcategory of $K(A, \text{d})$.
+If $P$ is projective as a graded $A$-module, then to see whether $P$
+is an object of $\mathcal{D}$ it is enough to check that
+$\Hom_{K(A, \text{d})}(P, M) = 0$ whenever $M$ is acyclic.
+However, in general it is not enough to assume that $P$ is projective as
+a graded $A$-module. Example: take $A = R = k[\epsilon]$ where $k$ is
+a field and $k[\epsilon] = k[x]/(x^2)$ is the ring of dual numbers.
+Let $P$ be the object with $P^n = R$ for all $n \in \mathbf{Z}$
+and differential given by multiplication by $\epsilon$. Then
+$\text{id}_P \in \Hom_{K(A, \text{d})}(P, P)$ is a nonzero element,
+but $P$ is acyclic: in each degree the kernel of multiplication
+by $\epsilon$ is the ideal $(\epsilon)$, which is also its image.
+\end{remark}
+
+\begin{remark}
+\label{remark-graded-projective-is-compact}
+Let $(A, \text{d})$ be a differential graded algebra. Let us say a
+differential graded $A$-module $M$ is {\it finite} if $M$ is generated,
+as a right $A$-module, by finitely many elements. If $P$ is a
+differential graded $A$-module which is finite graded projective,
+then we can ask: Does $P$ give a compact object of $D(A, \text{d})$?
+Presumably, this is not true in general, but we do not know a
+counterexample. However, if $P$ is also an object of the category
+$\mathcal{D}$ of Remark \ref{remark-source-graded-projective},
+then this is the case (this follows from the fact that direct sums
+in $D(A, \text{d})$ are given by direct sums of modules; details omitted).
+\end{remark}
+
+\begin{lemma}
+\label{lemma-factor-through-nicer}
+Let $(A, \text{d})$ be a differential graded algebra. Let $E$ be a compact
+object of $D(A, \text{d})$. Let $P$ be a differential graded $A$-module
+which has a finite filtration
+$$
+0 = F_{-1}P \subset F_0P \subset F_1P \subset \ldots \subset F_nP = P
+$$
+by differential graded submodules such that
+$$
+F_{i + 1}P/F_iP \cong \bigoplus\nolimits_{j \in J_i} A[k_{i, j}]
+$$
+as differential graded $A$-modules for some sets $J_i$ and integers $k_{i, j}$.
+Let $E \to P$ be a morphism of $D(A, \text{d})$.
+Then there exists a differential graded submodule $P' \subset P$ such that
+$F_{i + 1}P \cap P'/(F_iP \cap P')$ is equal to
+$\bigoplus_{j \in J'_i} A[k_{i, j}]$ for some finite subsets
+$J'_i \subset J_i$ and such that $E \to P$ factors through $P'$.
+\end{lemma}
+
+\begin{proof}
+We will prove by descending induction on $-1 \leq m \leq n$ that there exists
+a differential graded submodule $P' \subset P$ such that
+\begin{enumerate}
+\item $F_mP \subset P'$,
+\item for $i \geq m$ the quotient $F_{i + 1}P \cap P'/(F_iP \cap P')$ is
+isomorphic to $\bigoplus_{j \in J'_i} A[k_{i, j}]$ for some finite subsets
+$J'_i \subset J_i$, and
+\item $E \to P$ factors through $P'$.
+\end{enumerate}
+The base case is $m = n$ where we can take $P' = P$.
+
+\medskip\noindent
+Induction step. Assume $P'$ works for $m$.
+For $i \geq m$ and $j \in J'_i$ let $x_{i, j} \in F_{i + 1}P \cap P'$
+be a homogeneous element of degree $k_{i, j}$ whose image in
+$F_{i + 1}P \cap P'/(F_iP \cap P')$ is the generator in
+the summand corresponding to $j \in J'_i$. The
+$x_{i, j}$ generate $P'/F_mP$ as an $A$-module. Write
+$$
+\text{d}(x_{i, j}) = \sum x_{i', j'} a_{i, j}^{i', j'} + y_{i, j}
+$$
+with $y_{i, j} \in F_mP$ and $a_{i, j}^{i', j'} \in A$.
+There exists a finite subset
+$J'_{m - 1} \subset J_{m - 1}$ such that each $y_{i, j}$ maps to
+an element of the submodule $\bigoplus_{j \in J'_{m - 1}} A[k_{m - 1, j}]$
+of $F_mP/F_{m - 1}P$. Let $P'' \subset F_mP$ be the inverse
+image of $\bigoplus_{j \in J'_{m - 1}} A[k_{m - 1, j}]$ under
+the map $F_mP \to F_mP/F_{m - 1}P$. Then we see that the $A$-submodule
+$$
+P'' + \sum x_{i, j}A
+$$
+is a differential graded submodule of the type we are looking for. Moreover
+$$
+P'/(P'' + \sum x_{i, j}A) =
+\bigoplus\nolimits_{j \in J_{m - 1} \setminus J'_{m - 1}} A[k_{m - 1, j}]
+$$
+Since $E$ is compact, the composition of the given map $E \to P'$
+with the quotient map factors through a finite direct subsum of
+the module displayed above. Hence after enlarging $J'_{m - 1}$
+we may assume $E \to P'$ factors through
+$P'' + \sum x_{i, j}A$ as desired.
+\end{proof}
+
+\noindent
+It is not true that every compact object of $D(A, \text{d})$ comes
+from a finite graded projective differential graded $A$-module,
+see Examples, Section \ref{examples-section-interesting-compact}.
+
+\begin{proposition}
+\label{proposition-compact}
+Let $(A, \text{d})$ be a differential graded algebra. Let $E$ be an
+object of $D(A, \text{d})$. Then the following are equivalent
+\begin{enumerate}
+\item $E$ is a compact object,
+\item $E$ is a direct summand of an object of $D(A, \text{d})$
+which is represented by a differential graded module $P$ which
+has a finite filtration $F_\bullet$ by differential graded submodules
+such that $F_iP/F_{i - 1}P$ are finite direct sums of shifts of $A$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Assume $E$ is compact. By Lemma \ref{lemma-resolve} we may assume that $E$
+is represented by a differential graded $A$-module $P$ with property (P).
+Consider the distinguished triangle
+$$
+\bigoplus F_iP \to \bigoplus F_iP \to P
+\xrightarrow{\delta} \bigoplus F_iP[1]
+$$
+coming from the admissible short exact sequence of
+Lemma \ref{lemma-property-P-sequence}. Since $E$ is compact we have
+$\delta = \sum_{i = 1, \ldots, n} \delta_i$ for some
+$\delta_i : P \to F_iP[1]$. Since the composition of $\delta$
+with the map $\bigoplus F_iP[1] \to \bigoplus F_iP[1]$ is zero
+(Derived Categories, Lemma \ref{derived-lemma-composition-zero})
+it follows that $\delta = 0$ (this follows as
+$\bigoplus F_iP \to \bigoplus F_iP$
+maps the summand $F_iP$ via the difference of $\text{id}$ and the inclusion
+map into $F_{i + 1}P$).
+Thus we see that the identity on $E$ factors through
+$\bigoplus F_iP$ in $D(A, \text{d})$ (by
+Derived Categories, Lemma \ref{derived-lemma-split}).
+Next, we use that $E$ is compact again to see that the map
+$E \to \bigoplus F_iP$ factors through $\bigoplus_{i = 1, \ldots, n} F_iP$
+for some $n$. In other words, the identity on $E$ factors through
+$\bigoplus_{i = 1, \ldots, n} F_iP$. By
+Lemma \ref{lemma-factor-through-nicer}
+we see that the identity of $E$ factors as $E \to P \to E$
+where $P$ is as in part (2) of the statement of the proposition.
+In other words, we have proven that (1) implies (2).
+
+\medskip\noindent
+Assume (2). By
+Derived Categories, Lemma \ref{derived-lemma-compact-objects-subcategory}
+it suffices to show that $P$ gives a compact object. Observe that
+$P$ has property (P), hence we have
+$$
+\Hom_{D(A, \text{d})}(P, M) = \Hom_{K(A, \text{d})}(P, M)
+$$
+for any differential graded module $M$ by Lemma \ref{lemma-hom-derived}.
+As direct sums in $D(A, \text{d})$ are given by direct sums of
+graded modules (Lemma \ref{lemma-derived-products}) we reduce
+to showing that $\Hom_{K(A, \text{d})}(P, M)$ commutes with direct
+sums. Using that $K(A, \text{d})$ is a triangulated category,
+that $\Hom$ is a cohomological functor in the first
+variable, and the filtration on $P$, we reduce to the case that
+$P$ is a finite direct sum of shifts of $A$. Thus we reduce to
+the case $P = A[k]$ which is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compact-implies-bounded}
+Let $(A, \text{d})$ be a differential graded algebra.
+For every compact object $E$ of $D(A, \text{d})$ there
+exist integers $a \leq b$ such that $\Hom_{D(A, \text{d})}(E, M) = 0$
+if $H^i(M) = 0$ for $i \in [a, b]$.
+\end{lemma}
+
+\begin{proof}
+Observe that the collection of objects of $D(A, \text{d})$ for which
+such a pair of integers exists is a saturated, strictly full triangulated
+subcategory of $D(A, \text{d})$.
+Thus by Proposition \ref{proposition-compact} it suffices to prove
+this when $E$ is represented by a differential graded module $P$ which
+has a finite filtration $F_\bullet$ by differential graded submodules
+such that $F_iP/F_{i - 1}P$ are finite direct sums of shifts of $A$.
+Using the compatibility with triangles, we see that it suffices
+to prove it for $P = A$. In this case $\Hom_{D(A, \text{d})}(A, M) = H^0(M)$
+and the result holds with $a = b = 0$.
+\end{proof}
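The last step of the proof uses $\Hom_{D(A, \text{d})}(A, M) = H^0(M)$. As a purely illustrative sketch (not part of the text), for a bounded complex of finite dimensional $\mathbf{Q}$-vector spaces the dimension of a cohomology group can be computed from matrix ranks as $\dim H^i = \dim \Ker(d^i) - \operatorname{rank}(d^{i-1})$; the example complex below is an arbitrary toy choice.

```python
# Toy computation of dim H^0 of a complex of Q-vector spaces from
# matrix ranks; all data below are illustrative assumptions.
from fractions import Fraction

def rank(mat):
    """Rank of a matrix (list of rows) over Q by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def h_dim(d_prev, d_next, dim_mid):
    """dim of cohomology at the middle term of  . --d_prev--> V --d_next--> ."""
    return (dim_mid - rank(d_next)) - rank(d_prev)

# Complex 0 -> Q -> Q^2 -> Q -> 0 in degrees -1, 0, 1:
d_minus1 = [[1], [0]]   # matrix of d^{-1} : Q -> Q^2
d_zero = [[0, 1]]       # matrix of d^0 : Q^2 -> Q
print(h_dim(d_minus1, d_zero, 2))  # → 0, the complex is exact in degree 0
```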
+
+\noindent
+If $(A, \text{d})$ is just an algebra placed in degree $0$
+with zero differential, or more generally lives in
+only finitely many degrees, then we obtain a
+more precise description of compact objects.
+
+\begin{lemma}
+\label{lemma-compact}
+Let $(A, \text{d})$ be a differential graded algebra. Assume that $A^n = 0$
+for $|n| \gg 0$. Let $E$ be an object of $D(A, \text{d})$.
+The following are equivalent
+\begin{enumerate}
+\item $E$ is a compact object, and
+\item $E$ can be represented by a differential graded $A$-module $P$
+which is finite projective as a graded $A$-module and satisfies
+$\Hom_{K(A, \text{d})}(P, M) = \Hom_{D(A, \text{d})}(P, M)$
+for every differential graded $A$-module $M$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{D} \subset K(A, \text{d})$ be the triangulated subcategory
+discussed in Remark \ref{remark-source-graded-projective}.
+Let $P$ be an object of $\mathcal{D}$ which is finite projective
+as a graded $A$-module. Then $P$ represents a compact object of
+$D(A, \text{d})$ by Remark \ref{remark-graded-projective-is-compact}.
+
+\medskip\noindent
+To prove the converse, let $E$ be a compact object of $D(A, \text{d})$.
+Fix $a \leq b$ as in Lemma \ref{lemma-compact-implies-bounded}.
+After decreasing $a$ and increasing $b$ if necessary, we may also
+assume that $H^i(E) = 0$ for $i \not \in [a, b]$ (this follows
+from Proposition \ref{proposition-compact} and our assumption on $A$).
+Moreover, fix an integer $c > 0$ such that $A^n = 0$ if $|n| \geq c$.
+
+\medskip\noindent
+By Proposition \ref{proposition-compact} we see that $E$ is a direct
+summand, in $D(A, \text{d})$, of a differential graded $A$-module $P$
+which has a finite filtration $F_\bullet$ by differential
+graded submodules such that $F_iP/F_{i - 1}P$ are finite direct sums
+of shifts of $A$. In particular, $P$ has property (P) and we have
+$\Hom_{D(A, \text{d})}(P, M) = \Hom_{K(A, \text{d})}(P, M)$ for any
+differential graded module $M$ by Lemma \ref{lemma-hom-derived}.
+In other words, $P$ is an object of the triangulated
+subcategory $\mathcal{D} \subset K(A, \text{d})$ discussed in
+Remark \ref{remark-source-graded-projective}.
+Note that $P$ is finite free as a graded $A$-module.
+
+\medskip\noindent
+Choose $n > 0$ such that $b + 4c - n < a$.
+Represent the projector onto $E$ by an endomorphism $\varphi : P \to P$ of
+differential graded $A$-modules. Consider the distinguished triangle
+$$
+P \xrightarrow{1 - \varphi} P \to C \to P[1]
+$$
+in $K(A, \text{d})$ where $C$ is the cone of the first arrow. Then
+$C$ is an object of $\mathcal{D}$,
+we have $C \cong E \oplus E[1]$ in $D(A, \text{d})$, and
+$C$ is a finite graded free $A$-module.
+Next, consider a distinguished triangle
+$$
+C[1] \to C \to C' \to C[2]
+$$
+in $K(A, \text{d})$ where $C'$ is the cone on a morphism $C[1] \to C$
+representing the composition
+$$
+C[1] \cong E[1] \oplus E[2] \to E[1] \to E \oplus E[1] \cong C
+$$
+in $D(A, \text{d})$. Then we see that $C'$ represents $E \oplus E[2]$.
+Continuing in this manner we see that we can find a differential
+graded $A$-module $P$ which is an object of $\mathcal{D}$,
+is finite free as a graded $A$-module, and represents $E \oplus E[n]$.
+
+\medskip\noindent
+Choose a basis $x_i$, $i \in I$ of homogeneous elements for $P$ as an
+$A$-module. Let $d_i = \deg(x_i)$.
+Let $P_1$ be the $A$-submodule of $P$ generated by $x_i$ and
+$\text{d}(x_i)$ for $d_i \leq a - c - 1$.
+Let $P_2$ be the $A$-submodule of $P$ generated by $x_i$ and
+$\text{d}(x_i)$ for $d_i \geq b - n + c$.
+We observe
+\begin{enumerate}
+\item $P_1$ and $P_2$ are differential graded submodules of $P$,
+\item $P_1^t = 0$ for $t \geq a$,
+\item $P_1^t = P^t$ for $t \leq a - 2c$,
+\item $P_2^t = 0$ for $t \leq b - n$,
+\item $P_2^t = P^t$ for $t \geq b - n + 2c$.
+\end{enumerate}
+As $b - n + 2c \geq a - 2c$ by our choice of $n$
+we obtain a short exact sequence of differential graded $A$-modules
+$$
+0 \to P_1 \cap P_2 \to P_1 \oplus P_2 \xrightarrow{\pi} P \to 0
+$$
+Since $P$ is projective as a graded $A$-module this is an admissible
+short exact sequence (Lemma \ref{lemma-target-graded-projective}).
+Hence we obtain a boundary map
+$\delta : P \to (P_1 \cap P_2)[1]$ in $K(A, \text{d})$, see
+Lemma \ref{lemma-admissible-ses}.
+Since $P = E \oplus E[n]$ and since $P_1 \cap P_2$ lives in
+degrees $(b - n, a)$ we find that
+$\Hom_{D(A, \text{d})}(E \oplus E[n], (P_1 \cap P_2)[1])$ is
+zero. Therefore $\delta = 0$ as a morphism in $K(A, \text{d})$
+as $P$ is an object of $\mathcal{D}$.
+By Derived Categories, Lemma \ref{derived-lemma-split}
+we can find a map $s : P \to P_1 \oplus P_2$ such that
+$\pi \circ s = \text{id}_P + \text{d}h + h\text{d}$ for some $h : P \to P$
+of degree $-1$. Since $P_1 \oplus P_2 \to P$ is surjective and since $P$
+is projective as a graded $A$-module we can choose a homogeneous
+lift $\tilde h : P \to P_1 \oplus P_2$ of $h$. Then we change
+$s$ into $s + \text{d} \tilde h + \tilde h \text{d}$ to get
+$\pi \circ s = \text{id}_P$. This means we obtain a direct
+sum decomposition $P = s^{-1}(P_1) \oplus s^{-1}(P_2)$.
+Since $s^{-1}(P_2)$ is equal to $P$ in degrees $\geq b - n + 2c$
+we see that $s^{-1}(P_2) \to P \to E$ is a quasi-isomorphism,
+i.e., an isomorphism in $D(A, \text{d})$. This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Equivalences of derived categories}
+\label{section-equivalence}
+
+\noindent
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be differential
+graded $R$-algebras. A natural question is what
+it means for $D(A, \text{d})$ to be equivalent to $D(B, \text{d})$
+as an $R$-linear triangulated category. This is a rather subtle question
+and it will turn out that it isn't always the correct question to ask.
+Nonetheless, in this section we collect some conditions
+that guarantee this is the case.
+
+\medskip\noindent
+We strongly urge the reader to take a look at the groundbreaking
+paper \cite{Rickard} on this topic.
+
+\begin{lemma}
+\label{lemma-qis-equivalence}
+Let $R$ be a ring. Let $(A, \text{d}) \to (B, \text{d})$ be a
+homomorphism of differential graded algebras over $R$, which induces
+an isomorphism on cohomology algebras. Then
+$$
+- \otimes_A^\mathbf{L} B : D(A, \text{d}) \to D(B, \text{d})
+$$
+gives an $R$-linear equivalence of triangulated categories with
+quasi-inverse the restriction functor $N \mapsto N_A$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-tensor-with-compact-fully-faithful}
+the functor $M \longmapsto M \otimes_A^\mathbf{L} B$ is
+fully faithful. By Lemma \ref{lemma-tensor-hom-adjoint}
+the functor $N \longmapsto R\Hom(B, N) = N_A$ is a right adjoint, see
+Example \ref{example-map-hom-tensor}.
+It is clear that the kernel of $R\Hom(B, -)$ is zero.
+Hence the result follows from
+Derived Categories, Lemma
+\ref{derived-lemma-fully-faithful-adjoint-kernel-zero}.
+\end{proof}
+
+\noindent
+When we analyze the proof above we see that we obtain the
+following generalization for free.
+
+\begin{lemma}
+\label{lemma-tilting-equivalence}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded algebras over $R$. Let $N$ be a
+differential graded $(A, B)$-bimodule. Assume that
+\begin{enumerate}
+\item $N$ defines a compact object of $D(B, \text{d})$,
+\item if $N' \in D(B, \text{d})$ and
+$\Hom_{D(B, \text{d})}(N, N'[n]) = 0$ for $n \in \mathbf{Z}$,
+then $N' = 0$, and
+\item the map $H^k(A) \to \Hom_{D(B, \text{d})}(N, N[k])$ is an
+isomorphism for all $k \in \mathbf{Z}$.
+\end{enumerate}
+Then
+$$
+- \otimes_A^\mathbf{L} N : D(A, \text{d}) \to D(B, \text{d})
+$$
+gives an $R$-linear equivalence of triangulated categories.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-tensor-with-compact-fully-faithful}
+the functor $M \longmapsto M \otimes_A^\mathbf{L} N$ is
+fully faithful. By Lemma \ref{lemma-tensor-hom-adjoint}
+the functor $N' \longmapsto R\Hom(N, N')$ is a right adjoint.
+By assumption (3) the kernel of $R\Hom(N, -)$ is zero.
+Hence the result follows from
+Derived Categories, Lemma
+\ref{derived-lemma-fully-faithful-adjoint-kernel-zero}.
+\end{proof}
+
+\begin{remark}
+\label{remark-tilting-equivalence}
+In Lemma \ref{lemma-tilting-equivalence} we can replace
+condition (2) by the condition that $N$ is a classical
+generator for $D_{compact}(B, \text{d})$, see
+Derived Categories, Proposition
+\ref{derived-proposition-generator-versus-classical-generator}.
+Moreover, if we knew that $R\Hom(N, B)$ is a compact object
+of $D(A, \text{d})$, then it suffices to check that $N$
+is a weak generator for $D_{compact}(B, \text{d})$.
+We omit the proof; we will add it here if we ever
+need it in the Stacks project.
+\end{remark}
+
+\noindent
+Sometimes the $B$-module $P$ in the lemma below is called an
+``$(A, B)$-tilting complex''.
+
+\begin{lemma}
+\label{lemma-rickard}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras. Assume that $A = H^0(A)$.
+The following are equivalent
+\begin{enumerate}
+\item $D(A, \text{d})$ and $D(B, \text{d})$ are equivalent as $R$-linear
+triangulated categories, and
+\item there exists an object $P$ of $D(B, \text{d})$ such that
+\begin{enumerate}
+\item $P$ is a compact object of $D(B, \text{d})$,
+\item if $N \in D(B, \text{d})$ with $\Hom_{D(B, \text{d})}(P, N[i]) = 0$
+for $i \in \mathbf{Z}$, then $N = 0$,
+\item $\Hom_{D(B, \text{d})}(P, P[i]) = 0$ for $i \not = 0$ and
+equal to $A$ for $i = 0$.
+\end{enumerate}
+\end{enumerate}
+The equivalence $D(A, \text{d}) \to D(B, \text{d})$
+constructed in (2) sends $A$ to $P$.
+\end{lemma}
+
+\begin{proof}
+Let $F : D(A, \text{d}) \to D(B, \text{d})$ be an equivalence.
+Then $F$ maps compact objects to compact objects. Hence $P = F(A)$ is
+compact, i.e., (2)(a) holds. Conditions (2)(b) and (2)(c) are immediate
+from the fact that $F$ is an equivalence.
+
+\medskip\noindent
+Let $P$ be an object as in (2). Represent $P$ by a
+differential graded module with property (P). Set
+$$
+(E, \text{d}) = \Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(P, P)
+$$
+Then $H^0(E) = A$ and $H^k(E) = 0$ for $k \not = 0$ by
+Lemma \ref{lemma-hom-derived} and assumption (2)(c).
+Viewing $P$ as an $(E, B)$-bimodule and using
+Lemma \ref{lemma-tilting-equivalence} and assumption (2)(b)
+we obtain an equivalence
+$$
+D(E, \text{d}) \to D(B, \text{d})
+$$
+sending $E$ to $P$.
+Let $E' \subset E$ be the differential graded $R$-subalgebra
+with
+$$
+(E')^i = \left\{
+\begin{matrix}
+E^i & \text{if }i < 0 \\
+\Ker(E^0 \to E^1) & \text{if }i = 0 \\
+0 & \text{if }i > 0
+\end{matrix}
+\right.
+$$
+Then there are quasi-isomorphisms of differential graded
+algebras $(A, \text{d}) \leftarrow (E', \text{d}) \rightarrow (E, \text{d})$.
+Thus we obtain equivalences
+$$
+D(A, \text{d}) \leftarrow D(E', \text{d}) \rightarrow D(E, \text{d})
+\rightarrow D(B, \text{d})
+$$
+by Lemma \ref{lemma-qis-equivalence}.
+\end{proof}
+
+\begin{remark}
+\label{remark-lift-equivalence-to-dga}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be differential
+graded $R$-algebras. Suppose given an $R$-linear equivalence
+$$
+F : D(A, \text{d}) \longrightarrow D(B, \text{d})
+$$
+of triangulated categories. Set $N = F(A)$. Then $N$ is a differential
+graded $B$-module. Since $F$ is an equivalence and $A$ is a compact
+object of $D(A, \text{d})$, we conclude that $N$ is a compact object
+of $D(B, \text{d})$. Since $A$ generates $D(A, \text{d})$ and
+$F$ is an equivalence, we see that $N$ generates $D(B, \text{d})$.
+Finally, $H^k(A) = \Hom_{D(A, \text{d})}(A, A[k])$ and as $F$ is an equivalence
+we see that $F$ induces an isomorphism
+$H^k(A) = \Hom_{D(B, \text{d})}(N, N[k])$ for all $k$.
+In order to conclude that there is an equivalence
+$D(A, \text{d}) \longrightarrow D(B, \text{d})$ which
+arises from the construction in
+Lemma \ref{lemma-tilting-equivalence}
+all we need is a left $A$-module structure on $N$
+compatible with the differential and commuting
+with the given right $B$-module structure. In fact, it
+suffices to do this after replacing $N$ by a quasi-isomorphic
+differential graded $B$-module.
+The module structure can be constructed in certain cases.
+For example, if we assume that $F$ can be lifted to a
+differential graded functor
+$$
+F^{dg} :
+\text{Mod}^{dg}_{(A, \text{d})}
+\longrightarrow
+\text{Mod}^{dg}_{(B, \text{d})}
+$$
+(for notation see Example \ref{example-dgm-dg-cat})
+between the associated differential graded categories,
+then this holds. Another case is discussed in the proposition below.
+\end{remark}
+
+\begin{proposition}
+\label{proposition-rickard}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be
+differential graded $R$-algebras. Let $F : D(A, \text{d}) \to D(B, \text{d})$
+be an $R$-linear equivalence of triangulated categories. Assume that
+\begin{enumerate}
+\item $A = H^0(A)$, and
+\item $B$ is K-flat as a complex of $R$-modules.
+\end{enumerate}
+Then there exists an $(A, B)$-bimodule $N$ as in
+Lemma \ref{lemma-tilting-equivalence}.
+\end{proposition}
+
+\begin{proof}
+As in Remark \ref{remark-lift-equivalence-to-dga} above, we set $N = F(A)$
+in $D(B, \text{d})$. We may assume that $N$ is a differential graded
+$B$-module with property (P). Set
+$$
+(E, \text{d}) = \Hom_{\text{Mod}^{dg}_{(B, \text{d})}}(N, N)
+$$
+Then $H^0(E) = A$ and $H^k(E) = 0$ for $k \not = 0$ by
+Lemma \ref{lemma-hom-derived}.
+Moreover, by the discussion in Remark \ref{remark-lift-equivalence-to-dga}
+and by Lemma \ref{lemma-tilting-equivalence}
+we see that $N$ as an $(E, B)$-bimodule induces an
+equivalence $- \otimes_E^\mathbf{L} N : D(E, \text{d}) \to D(B, \text{d})$.
+Let $E' \subset E$ be the differential graded $R$-subalgebra
+with
+$$
+(E')^i = \left\{
+\begin{matrix}
+E^i & \text{if }i < 0 \\
+\Ker(E^0 \to E^1) & \text{if }i = 0 \\
+0 & \text{if }i > 0
+\end{matrix}
+\right.
+$$
+Then there are quasi-isomorphisms of differential graded
+algebras $(A, \text{d}) \leftarrow (E', \text{d}) \rightarrow (E, \text{d})$.
+Thus we obtain equivalences
+$$
+D(A, \text{d}) \leftarrow D(E', \text{d}) \rightarrow D(E, \text{d})
+\rightarrow D(B, \text{d})
+$$
+by Lemma \ref{lemma-qis-equivalence}.
+Note that the quasi-inverse $D(A, \text{d}) \to D(E', \text{d})$
+of the left vertical arrow is given
+by $M \mapsto M \otimes_A^\mathbf{L} A$ where $A$ is viewed as an
+$(A, E')$-bimodule, see Example \ref{example-map-hom-tensor}.
+On the other hand the functor $D(E', \text{d}) \to D(B, \text{d})$ is given by
+$M \mapsto M \otimes_{E'}^\mathbf{L} N$ where $N$ is as above.
+We conclude by Lemma \ref{lemma-compose-tensor-functors}.
+\end{proof}
+
+\begin{remark}
+\label{remark-rickard}
+Let $A, B, F, N$ be as in Proposition \ref{proposition-rickard}.
+It is not clear that $F$ and the functor
+$G(-) = - \otimes_A^\mathbf{L} N$ are isomorphic.
+By construction there is an isomorphism
+$N = G(A) \to F(A)$ in $D(B, \text{d})$.
+It is straightforward to extend this to a functorial isomorphism
+$G(M) \to F(M)$ for $M$ a differential graded $A$-module which
+is graded projective (e.g., a sum of shifts of $A$).
+Then one can conclude that $G(M) \cong F(M)$ when $M$ is a cone
+of a map between such modules. We don't know whether more is true
+in general.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-rickard-rings}
+Let $R$ be a ring.
+Let $A$ and $B$ be $R$-algebras. The following are equivalent
+\begin{enumerate}
+\item there is an $R$-linear equivalence $D(A) \to D(B)$
+of triangulated categories,
+\item there exists an object $P$ of $D(B)$ such that
+\begin{enumerate}
+\item $P$ can be represented by a finite complex
+of finite projective $B$-modules,
+\item if $K \in D(B)$ with $\Ext^i_B(P, K) = 0$ for
+$i \in \mathbf{Z}$, then $K = 0$, and
+\item $\Ext^i_B(P, P) = 0$ for $i \not = 0$ and
+equal to $A$ for $i= 0$.
+\end{enumerate}
+\end{enumerate}
+Moreover, if $B$ is flat as an $R$-module, then this is also
+equivalent to
+\begin{enumerate}
+\item[(3)] there exists an $(A, B)$-bimodule $N$ such that
+$- \otimes_A^\mathbf{L} N : D(A) \to D(B)$ is an equivalence.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) is a special case of
+Lemma \ref{lemma-rickard} combined with the result of
+Lemma \ref{lemma-compact} characterizing compact objects of $D(B)$
+(small detail omitted).
+The equivalence with (3) if $B$ is $R$-flat follows from
+Proposition \ref{proposition-rickard}.
+\end{proof}
+
+\begin{remark}
+\label{remark-centers}
+Let $R$ be a ring. Let $A$ and $B$ be $R$-algebras.
+If $D(A)$ and $D(B)$ are equivalent as $R$-linear triangulated
+categories, then the centers of $A$ and $B$ are isomorphic
+as $R$-algebras. In particular, if $A$ and $B$ are commutative,
+then $A \cong B$. The rather tricky proof can be found in
+\cite[Proposition 9.2]{Rickard} or \cite[Proposition 6.3.2]{KZ}.
+Another approach might be to use Hochschild cohomology (see
+remark below).
+\end{remark}
+
+\begin{remark}
+\label{remark-hochschild-cohomology}
+Let $R$ be a ring. Let $(A, \text{d})$ and $(B, \text{d})$ be differential
+graded $R$-algebras which are derived equivalent, i.e., such that there
+exists an $R$-linear equivalence $D(A, \text{d}) \to D(B, \text{d})$
+of triangulated categories. We would like to show that certain invariants
+of $(A, \text{d})$ and $(B, \text{d})$ coincide. In many situations
+one has more control of the situation. For example, it may happen
+that there is an equivalence of the form
+$$
+- \otimes_A \Omega : D(A, \text{d}) \longrightarrow D(B, \text{d})
+$$
+for some differential graded $(A, B)$-bimodule
+$\Omega$ (this happens in the situation of
+Proposition \ref{proposition-rickard} and is often true
+if the equivalence comes from a geometric construction).
+Suppose also that the quasi-inverse of our functor is given as
+$$
+- \otimes_B^\mathbf{L} \Omega' : D(B, \text{d}) \longrightarrow D(A, \text{d})
+$$
+for a differential graded $(B, A)$-bimodule $\Omega'$
+(as before, such a module $\Omega'$ often exists in practice).
+In this case we can consider the functor
+$$
+D(A^{opp} \otimes_R A, \text{d})
+\longrightarrow
+D(B^{opp} \otimes_R B, \text{d}),\quad
+M \longmapsto \Omega' \otimes^\mathbf{L}_A M \otimes_A^\mathbf{L} \Omega
+$$
+on derived categories of bimodules (use
+Lemma \ref{lemma-bimodule-over-tensor} to turn bimodules into
+right modules).
+Observe that this functor sends the $(A, A)$-bimodule $A$ to
+the $(B, B)$-bimodule $B$. Under suitable conditions
+(e.g., flatness of $A$, $B$, $\Omega$ over $R$, etc)
+this functor will be an equivalence as well.
+If this is the case, then it follows that we have isomorphisms
+of Hochschild cohomology groups
+$$
+HH^i(A, \text{d}) =
+\Hom_{D(A^{opp} \otimes_R A, \text{d})}(A, A[i])
+\longrightarrow
+\Hom_{D(B^{opp} \otimes_R B, \text{d})}(B, B[i]) =
+HH^i(B, \text{d}).
+$$
+For example, if $A = H^0(A)$, then $HH^0(A, \text{d})$
+is equal to the center of $A$, and this gives a conceptual proof
+of the result mentioned in Remark \ref{remark-centers}.
+If we ever need this remark we will provide a precise statement
+with a detailed proof here.
+\end{remark}
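As a toy sanity check on the final observation, when $A = H^0(A)$ the group $HH^0$ recovers the center of an honest algebra, and for a small algebra the center can be found by brute force. The choice $A = M_2(\mathbf{F}_2)$ below is an illustrative assumption, not from the text; its center consists of the scalar matrices, here the zero matrix and the identity.

```python
# Brute-force the center of the 2x2 matrix algebra over F_2: only the
# scalar matrices commute with everything.  Purely illustrative.
from itertools import product

def mat_mul(a, b):
    """Multiply 2x2 matrices (tuples of row tuples) over F_2."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

mats = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]
center = [m for m in mats
          if all(mat_mul(m, x) == mat_mul(x, m) for x in mats)]
print(len(center))  # → 2: the zero matrix and the identity
```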
+
+
+
+\section{Resolutions of differential graded algebras}
+\label{section-resolution-dgas}
+
+\noindent
+Let $R$ be a ring. Recall that the free $R$-algebra
+$R\langle S \rangle$ on a set $S$ is the algebra with $R$-basis
+the expressions
+$$
+s_1 s_2 \ldots s_n
+$$
+where $n \geq 0$ and $s_1, \ldots, s_n \in S$ is a sequence of
+elements of $S$. Multiplication is given by concatenation
+$$
+(s_1 s_2 \ldots s_n) \cdot (s'_1 s'_2 \ldots s'_m) =
+s_1 \ldots s_n s'_1 \ldots s'_m
+$$
+This algebra is characterized by the property that the map
+$$
+\Mor_{R\text{-alg}}(R\langle S \rangle, A) \to
+\text{Map}(S, A),\quad
+\varphi \longmapsto (s \mapsto \varphi(s))
+$$
+is a bijection for every $R$-algebra $A$.
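Concretely, elements of $R\langle S \rangle$ are $R$-linear combinations of words in $S$ and multiplication is concatenation extended bilinearly. A minimal computational sketch (the dict-of-words representation and the names below are illustrative assumptions, not from the text):

```python
# Sketch of the free R-algebra R<S> over R = Z: an element is a dict
# {word (tuple of letters): coefficient}; multiplication concatenates
# words and multiplies coefficients, extended bilinearly.
from collections import defaultdict

def mul(x, y):
    """Product in R<S> of two elements given as {word: coefficient} dicts."""
    z = defaultdict(int)
    for w1, c1 in x.items():
        for w2, c2 in y.items():
            z[w1 + w2] += c1 * c2  # (s_1...s_n)(s'_1...s'_m) = s_1...s_n s'_1...s'_m
    return dict(z)

# (s + 2t) * s = ss + 2ts  (the order of letters matters):
x = {('s',): 1, ('t',): 2}
y = {('s',): 1}
print(mul(x, y))  # → {('s', 's'): 1, ('t', 's'): 2}
```

Note that the empty word acts as the unit, matching the case $n = 0$ in the description of the basis.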
+
+\medskip\noindent
+In the category of graded $R$-algebras our set $S$ should come
+with a grading, which we think of as a map $\deg : S \to \mathbf{Z}$.
+Then $R\langle S\rangle$ has a grading such that the monomials
+have degree
+$$
+\deg(s_1 s_2 \ldots s_n) = \deg(s_1) + \ldots + \deg(s_n)
+$$
+In this setting the map
+$$
+\Mor_{\text{graded }R\text{-alg}}(R\langle S \rangle, A) \to
+\text{Map}_{\text{graded sets}}(S, A),\quad
+\varphi \longmapsto (s \mapsto \varphi(s))
+$$
+is a bijection for every graded $R$-algebra $A$.
+
+\medskip\noindent
+If $A$ is a graded $R$-algebra and $S$ is a graded set,
+then we can similarly form $A\langle S \rangle$.
+Elements of $A\langle S \rangle$ are
+sums of elements of the form
+$$
+a_0 s_1 a_1 s_2 \ldots a_{n - 1} s_n a_n
+$$
+with $a_i \in A$ modulo the relations that these expressions
+are $R$-multilinear in $(a_0, \ldots, a_n)$.
+Thus for every sequence $s_1, \ldots, s_n$ of elements of $S$
+there is an inclusion
+$$
+A \otimes_R \ldots \otimes_R A \subset A\langle S \rangle
+$$
+and the algebra is the direct sum of these. With this definition the
+reader shows that the map
+$$
+\Mor_{\text{graded }R\text{-alg}}(A\langle S \rangle, B) \to
+\Mor_{\text{graded }R\text{-alg}}(A, B) \times
+\text{Map}_{\text{graded sets}}(S, B),
+$$
+sending $\varphi$ to $(\varphi|_A, (s \mapsto \varphi(s)))$
+is a bijection for every graded $R$-algebra $B$.
+We observe that if $A$ is a free graded $R$-algebra,
+then so is $A\langle S \rangle$.
+
+\medskip\noindent
+Suppose that $A$ is a differential graded $R$-algebra and
+that $S$ is a graded set. Suppose moreover for every $s \in S$
+we are given a homogeneous element $f_s \in A$ with $\deg(f_s) = \deg(s) + 1$
+and $\text{d}f_s = 0$. Then there exists a unique structure of
+differential graded algebra on $A\langle S \rangle$ with
+$\text{d}(s) = f_s$. For example, given $a, b, c \in A$ and
+$s, t \in S$ we would define
+\begin{align*}
+\text{d}(asbtc)
+& =
+\text{d}(a)sbtc + (-1)^{\deg(a)}a f_s b t c +
+(-1)^{\deg(a) + \deg(s)} as\text{d}(b)tc \\
+& + (-1)^{\deg(a) + \deg(s) + \deg(b)} asb f_t c +
+(-1)^{\deg(a) + \deg(s) + \deg(b) + \deg(t)} asbt\text{d}(c)
+\end{align*}
+We omit the details.
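The displayed formula is an instance of the graded Leibniz rule: applying $\text{d}$ at a given letter of a word picks up the Koszul sign $(-1)^e$ where $e$ is the sum of the degrees of the letters to its left. A small sketch of this sign bookkeeping, under the assumption that letters and their differentials are given formally (all names are illustrative):

```python
# Graded Leibniz rule on words: d of a word is the signed sum over
# positions, replacing one letter by its differential with the Koszul
# sign (-1)^(total degree to the left).  Purely illustrative.

def d_word(word, deg, d_letter):
    """word: list of letter names; deg: degree of each letter;
    d_letter: maps a letter to a list of (coeff, replacement letters).
    Returns the differential as a list of (signed coeff, new word)."""
    terms = []
    sign_exp = 0  # sum of degrees of letters to the left
    for i, s in enumerate(word):
        for coeff, repl in d_letter.get(s, []):
            sign = (-1) ** sign_exp
            terms.append((sign * coeff, word[:i] + repl + word[i + 1:]))
        sign_exp += deg[s]
    return terms

# d(st) = d(s) t + (-1)^{deg(s)} s d(t), with d(s) = f_s and d(t) = f_t:
deg = {'s': 3, 't': 2}
d_letter = {'s': [(1, ['f_s'])], 't': [(1, ['f_t'])]}
print(d_word(['s', 't'], deg, d_letter))
# → [(1, ['f_s', 't']), (-1, ['s', 'f_t'])]
```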
+
+\begin{lemma}
+\label{lemma-K-flat-resolution}
+Let $R$ be a ring. Let $(B, \text{d})$ be a differential graded $R$-algebra.
+There exists a quasi-isomorphism $(A, \text{d}) \to (B, \text{d})$ of
+differential graded $R$-algebras with the following properties
+\begin{enumerate}
+\item $A$ is K-flat as a complex of $R$-modules,
+\item $A$ is a free graded $R$-algebra.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First we claim we can find $(A_0, \text{d}) \to (B, \text{d})$
+satisfying (1) and (2) and inducing a surjection on cohomology.
+Namely, take a graded set $S$ and for each $s \in S$
+a homogeneous element $b_s \in \Ker(\text{d} : B \to B)$ of degree $\deg(s)$
+such that the classes $\overline{b}_s$ in $H^*(B)$
+generate $H^*(B)$ as an $R$-module.
+Then we can set $A_0 = R\langle S \rangle$ with zero differential
+and $A_0 \to B$ given by mapping $s$ to $b_s$.
+
+\medskip\noindent
+Given $A_0 \to B$ inducing a surjection on cohomology we construct
+a sequence
+$$
+A_0 \to A_1 \to A_2 \to \ldots \to B
+$$
+by induction. Given $A_n \to B$ we let $S_n$ be a graded set
+and for each $s \in S_n$ we let $a_s \in \Ker(\text{d} : A_n \to A_n)$
+be a homogeneous element of degree $\deg(s) + 1$
+mapping to a class $\overline{a}_s$ in $H^*(A_n)$
+which maps to zero in $H^*(B)$. We choose $S_n$ large enough
+so that the elements $\overline{a}_s$ generate $\Ker(H^*(A_n) \to H^*(B))$
+as an $R$-module. Then we set
+$$
+A_{n + 1} = A_n\langle S_n \rangle
+$$
+with differential given by $\text{d}(s) = a_s$, see the discussion above.
+Then each $(A_n, \text{d})$ satisfies (1) and (2); we omit the details.
+The map $H^*(A_n) \to H^*(B)$ is surjective as this was true for $n = 0$.
+
+\medskip\noindent
+It is clear that $A = \colim A_n$ is a free graded $R$-algebra.
+It is K-flat by More on Algebra, Lemma \ref{more-algebra-lemma-colimit-K-flat}.
+The map $H^*(A) \to H^*(B)$ is an isomorphism as it is surjective
+and injective: every element of $H^*(A)$ comes from an element of
+$H^*(A_n)$ for some $n$ and if it dies in $H^*(B)$, then it dies
+in $H^*(A_{n + 1})$ hence in $H^*(A)$.
+\end{proof}
+
+\noindent
+As an application we prove the ``correct'' version of
+Lemma \ref{lemma-compose-tensor-functors-general-algebra}.
+
+\begin{lemma}
+\label{lemma-compose-tensor-functors-tor}
+Let $R$ be a ring. Let $(A, \text{d})$, $(B, \text{d})$, and
+$(C, \text{d})$ be differential graded $R$-algebras. Assume
+$A \otimes_R C$ represents $A \otimes^\mathbf{L}_R C$ in $D(R)$.
+Let $N$ be a differential graded $(A, B)$-bimodule.
+Let $N'$ be a differential graded $(B, C)$-bimodule.
+Then the composition
+$$
+\xymatrix{
+D(A, \text{d}) \ar[rr]^{- \otimes_A^\mathbf{L} N} & &
+D(B, \text{d}) \ar[rr]^{- \otimes_B^\mathbf{L} N'} & &
+D(C, \text{d})
+}
+$$
+is isomorphic to $- \otimes_A^\mathbf{L} N''$ for some differential graded
+$(A, C)$-bimodule $N''$.
+\end{lemma}
+
+\begin{proof}
+Using Lemma \ref{lemma-K-flat-resolution}
+we choose a quasi-isomorphism $(B', \text{d}) \to (B, \text{d})$
+with $B'$ K-flat as a complex of $R$-modules.
+By Lemma \ref{lemma-qis-equivalence}
+the functor $-\otimes^\mathbf{L}_{B'} B : D(B', \text{d}) \to D(B, \text{d})$
+is an equivalence with quasi-inverse given by restriction.
+Note that restriction is canonically isomorphic to the functor
+$- \otimes^\mathbf{L}_B B : D(B, \text{d}) \to D(B', \text{d})$
+where $B$ is viewed as a $(B, B')$-bimodule.
+Thus it suffices to prove the lemma for the compositions
+$$
+D(A) \to D(B) \to D(B'),\quad
+D(B') \to D(B) \to D(C),\quad
+D(A) \to D(B') \to D(C).
+$$
+The first one is Lemma \ref{lemma-compose-tensor-functors}
+because $B'$ is K-flat as a complex of $R$-modules.
+The second one is true because
+$B \otimes_B^\mathbf{L} N' = N' = B \otimes_B N'$
+and hence Lemma \ref{lemma-compose-tensor-functors-general} applies.
+Thus we reduce to the case where $B$ is K-flat as a complex
+of $R$-modules.
+
+\medskip\noindent
+Assume $B$ is K-flat as a complex of $R$-modules. It suffices to
+show that (\ref{equation-plain-versus-derived-algebras}) is an
+isomorphism, see
+Lemma \ref{lemma-compose-tensor-functors-general-algebra}.
+Choose a quasi-isomorphism $L \to A$ where $L$ is a differential
+graded $R$-module which has property (P). Then it is clear that
+$P = L \otimes_R B$ has property (P) as a differential graded $B$-module.
+Hence we have to show that $P \to A \otimes_R B$
+induces a quasi-isomorphism
+$$
+P \otimes_B (B \otimes_R C)
+\longrightarrow
+(A \otimes_R B) \otimes_B (B \otimes_R C)
+$$
+We can rewrite this as
+$$
+P \otimes_R B \otimes_R C \longrightarrow A \otimes_R B \otimes_R C
+$$
+Since $B$ is K-flat as a complex of $R$-modules, it
+follows from
+More on Algebra, Lemma \ref{more-algebra-lemma-K-flat-quasi-isomorphism}
+that it is enough
+to show that
+$$
+P \otimes_R C \to A \otimes_R C
+$$
+is a quasi-isomorphism, which is exactly our assumption.
+\end{proof}
+
+\noindent
+The following lemma does not really belong in this section, but there
+does not seem to be a good natural spot for it.
+
+\begin{lemma}
+\label{lemma-countable}
+Let $(A, \text{d})$ be a differential graded algebra with
+$H^i(A)$ countable for each $i$. Let $M$ be an object of $D(A, \text{d})$.
+Then the following are equivalent
+\begin{enumerate}
+\item $M = \text{hocolim} E_n$ with $E_n$ compact in $D(A, \text{d})$, and
+\item $H^i(M)$ is countable for each $i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1) holds. Then we have $H^i(M) = \colim H^i(E_n)$ by
+Derived Categories, Lemma \ref{derived-lemma-cohomology-of-hocolim}.
+Thus it suffices to prove that $H^i(E_n)$ is countable for each $n$.
+By Proposition \ref{proposition-compact} we see that $E_n$
+is isomorphic in $D(A, \text{d})$ to a direct summand of a
+differential graded module $P$ which has a finite filtration
+$F_\bullet$ by differential graded submodules
+such that $F_jP/F_{j - 1}P$ are finite direct sums of shifts of $A$.
+By assumption the groups $H^i(F_jP/F_{j - 1}P)$ are countable.
+Arguing by induction on the length of the filtration and using
+the long exact cohomology sequence we conclude that (2) is true.
+The interesting implication is the other one.
+
+\medskip\noindent
+We claim there is a countable differential graded
+subalgebra $A' \subset A$ such that the inclusion map
+$A' \to A$ defines an isomorphism on cohomology.
+To construct $A'$ we choose countable differential graded
+subalgebras
+$$
+A_1 \subset A_2 \subset A_3 \subset \ldots
+$$
+such that (a) $H^i(A_1) \to H^i(A)$ is surjective, and (b)
+for $n > 1$ the kernel of the map $H^i(A_{n - 1}) \to H^i(A_n)$
+is the same as the kernel of the map $H^i(A_{n - 1}) \to H^i(A)$.
+To construct $A_1$ take any countable collection of cocycles
+$S \subset A$ generating the cohomology of $A$ (as a ring or as
+a graded abelian group) and let
+$A_1$ be the differential graded subalgebra of $A$ generated by $S$.
+To construct $A_n$ given $A_{n - 1}$, for each cocycle $a \in A_{n - 1}^i$
+which maps to zero in $H^i(A)$ choose $s_a \in A^{i - 1}$
+with $\text{d}(s_a) = a$ and let $A_n$ be the differential graded
+subalgebra of $A$ generated by $A_{n - 1}$ and the elements $s_a$.
+Finally, take $A' = \bigcup A_n$.
+
+\medskip\noindent
+By Lemma \ref{lemma-qis-equivalence}
+the restriction map $D(A, \text{d}) \to D(A', \text{d})$,
+$M \mapsto M_{A'}$ is an equivalence. Since the cohomology
+groups of $M$ and $M_{A'}$ are the same, we see that it
+suffices to prove the implication (2) $\Rightarrow$ (1)
+for $(A', \text{d})$.
+
+\medskip\noindent
+Assume $A$ is countable. By the exact same type of argument as
+given above we see that for $M$ in $D(A, \text{d})$
+the following are equivalent: $H^i(M)$ is countable for each $i$
+and $M$ can be represented by a countable differential graded module.
+Hence in order to prove the implication (2) $\Rightarrow$ (1)
+we reduce to the situation described in the next paragraph.
+
+\medskip\noindent
+Assume $A$ is countable and that $M$ is a countable differential graded
+module over $A$. We claim there exists a homomorphism
+$P \to M$ of differential graded $A$-modules such that
+\begin{enumerate}
+\item $P \to M$ is a quasi-isomorphism,
+\item $P$ has property (P), and
+\item $P$ is countable.
+\end{enumerate}
+Looking at the proof of the construction of P-resolutions in
+Lemma \ref{lemma-resolve} we see that it suffices to show that
+we can prove Lemma \ref{lemma-good-quotient}
+in the setting of countable differential graded modules.
+This is immediate from the proof.
+
+\medskip\noindent
+Assume that $A$ is countable and that $P$ is a countable
+differential graded module with property (P). Choose a filtration
+$$
+0 = F_{-1}P \subset F_0P \subset F_1P \subset \ldots \subset P
+$$
+by differential graded submodules such that we have
+\begin{enumerate}
+\item $P = \bigcup F_iP$,
+\item $F_iP \to F_{i + 1}P$ is an admissible monomorphism,
+\item isomorphisms of differential graded modules
+$F_iP/F_{i - 1}P \to \bigoplus_{j \in J_i} A[k_j]$
+for some sets $J_i$ and integers $k_j$.
+\end{enumerate}
+Of course $J_i$ is countable for each $i$. For each $i$ and
+$j \in J_i$ choose $x_{i, j} \in F_iP$ of degree $k_j$ whose
+image in $F_iP/F_{i - 1}P$ generates the summand corresponding
+to $j$.
+
+\medskip\noindent
+Claim: Given $n$ and finite subsets $S_i \subset J_i$, $i = 0, \ldots, n$
+there exist finite subsets $S_i \subset T_i \subset J_i$, $i = 0, \ldots, n$
+such that $P' = \bigoplus_{i \leq n} \bigoplus_{j \in T_i} Ax_{i, j}$
+is a differential graded submodule of $P$. This was shown in the
+proof of Lemma \ref{lemma-factor-through-nicer} but it is also
+easily shown directly: the elements $x_{i, j}$ freely generate
+$P$ as a right $A$-module. The structure of $P$ shows that
+$$
+\text{d}(x_{i, j}) = \sum\nolimits_{i' < i,\ j' \in J_{i'}} x_{i', j'}a_{i', j'}
+$$
+where of course the sum is finite.
+Thus given $S_0, \ldots, S_n$ we can first choose
+$S_0 \subset S'_0, \ldots, S_{n - 1} \subset S'_{n - 1}$ with
+$\text{d}(x_{n, j}) \in \bigoplus_{i' < n, j' \in S'_{i'}} x_{i', j'}A$
+for all $j \in S_n$. Then by induction on $n$ we can choose
+$S'_0 \subset T_0, \ldots, S'_{n - 1} \subset T_{n - 1}$
+to make sure that $\bigoplus_{i' < n, j' \in T_{i'}} x_{i', j'}A$
+is a differential graded $A$-submodule. Setting $T_n = S_n$ we find that
+$P' = \bigoplus_{i \leq n, j \in T_i} x_{i, j}A$ is as desired.
+
+\medskip\noindent
+From the claim it is clear that $P = \bigcup P'_n$
+is a countable rising union of $P'_n$ as above.
+By construction each $P'_n$ is a differential graded module with
+property (P) such that the filtration is finite and the successive
+quotients are finite direct sums of shifts of $A$. Hence $P'_n$
+defines a compact object of $D(A, \text{d})$, see for example
+Proposition \ref{proposition-compact}. Since
+$P = \text{hocolim} P'_n$ in $D(A, \text{d})$
+by Lemma \ref{lemma-homotopy-colimit}
+the proof of the implication (2) $\Rightarrow$ (1) is complete.
+\end{proof}
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/discriminant.tex b/books/stacks/discriminant.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7796a96d1e2b31c86103db166da90571aae5c330
--- /dev/null
+++ b/books/stacks/discriminant.tex
@@ -0,0 +1,3424 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Discriminants and Differents}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we study the different and discriminant
+of locally quasi-finite morphisms of schemes.
+A good reference for some of this material is \cite{Kunz}.
+
+\medskip\noindent
+Given a quasi-finite morphism $f : Y \to X$ of Noetherian schemes
+there is a relative dualizing module $\omega_{Y/X}$.
+In Section \ref{section-quasi-finite-dualizing}
+we construct this module from scratch, using
+Zariski's main theorem and \'etale localization methods.
+The key property is that given a diagram
+$$
+\xymatrix{
+Y' \ar[d]_{f'} \ar[r]_{g'} & Y \ar[d]^f \\
+X' \ar[r]^g & X
+}
+$$
+with $g : X' \to X$ flat, $Y' \subset X' \times_X Y$ open, and
+$f' : Y' \to X'$ finite, then there is a canonical isomorphism
+$$
+f'_*(g')^*\omega_{Y/X} =
+\SheafHom_{\mathcal{O}_{X'}}(f'_*\mathcal{O}_{Y'}, \mathcal{O}_{X'})
+$$
+as sheaves of $f'_*\mathcal{O}_{Y'}$-modules. In
+Section \ref{section-quasi-finite-traces} we prove that
+if $f$ is flat, then there is a canonical global section
+$\tau_{Y/X} \in H^0(Y, \omega_{Y/X})$ which for every commutative
+diagram as above maps $(g')^*\tau_{Y/X}$ to the trace map
+of Section \ref{section-discriminant}
+for the finite locally free morphism $f'$.
+In Section \ref{section-different}
+we define the different for a flat quasi-finite
+morphism of Noetherian schemes as the annihilator of the
+cokernel of $\tau_{Y/X} : \mathcal{O}_X \to \omega_{Y/X}$.
+
+\medskip\noindent
+The main goal of this chapter is to prove that for
+quasi-finite syntomic\footnote{That is, flat and local complete
+intersection.} $f$ the
+different agrees with the K\"ahler different.
+The K\"ahler different is the zeroth Fitting ideal of $\Omega_{Y/X}$, see
+Section \ref{section-kahler-different}.
+This agreement is not obvious; we use a slick argument
+due to Tate, see Section \ref{section-formula-different}.
+On the way we also discuss the Noether different
+and the Dedekind different.
+
+\medskip\noindent
+Only at the end of this chapter, see
+Sections \ref{section-comparison} and \ref{section-gorenstein-lci},
+do we make the link with the more advanced material
+on duality for schemes.
+
+
+
+
+
+
+
+
+\section{Dualizing modules for quasi-finite ring maps}
+\label{section-quasi-finite-dualizing}
+
+\noindent
+Let $A \to B$ be a quasi-finite homomorphism of Noetherian rings. By
+Zariski's main theorem
+(Algebra, Lemma \ref{algebra-lemma-quasi-finite-open-integral-closure})
+there exists a factorization $A \to B' \to B$ with
+$A \to B'$ finite and $B' \to B$ inducing an open immersion of spectra.
+We set
+\begin{equation}
+\label{equation-dualizing}
+\omega_{B/A} = \Hom_A(B', A) \otimes_{B'} B
+\end{equation}
+in this situation. The reader can think of this as a kind of relative
+dualizing module, see Lemmas \ref{lemma-compare-dualizing} and
+\ref{lemma-compare-dualizing-algebraic}.
+In this section we will show by elementary commutative algebra methods
+that $\omega_{B/A}$ is independent of the choice of the factorization
+and that formation of $\omega_{B/A}$ commutes with flat base change.
+To help prove the independence of factorizations we compare two
+given factorizations.
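+
+\medskip\noindent
+For example, if $A \to B$ is finite we may take $B' = B$ in
+(\ref{equation-dualizing}) and we find $\omega_{B/A} = \Hom_A(B, A)$.
+As a concrete illustration, suppose $B = A[x]/(f)$ with $f$ monic
+of degree $n$. Then $B$ is free as an $A$-module with basis
+$1, x, \ldots, x^{n - 1}$ and $\omega_{B/A} = \Hom_A(B, A)$ is free
+of rank $1$ as a $B$-module, generated by the $A$-linear map
+$\lambda$ determined by
+$$
+\lambda(x^i) = 0 \text{ for } 0 \leq i \leq n - 2
+\quad\text{and}\quad
+\lambda(x^{n - 1}) = 1
+$$
+(a standard computation with monogenic algebras).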
+
+\begin{lemma}
+\label{lemma-dominate-factorizations}
+Let $A \to B$ be a quasi-finite ring map. Given two factorizations
+$A \to B' \to B$ and $A \to B'' \to B$ with
+$A \to B'$ and $A \to B''$ finite and $\Spec(B) \to \Spec(B')$
+and $\Spec(B) \to \Spec(B'')$ open immersions, there exists
+an $A$-subalgebra $B''' \subset B$ finite over $A$ such that
+$\Spec(B) \to \Spec(B''')$ is an open immersion and $B' \to B$ and
+$B'' \to B$ factor through $B'''$.
+\end{lemma}
+
+\begin{proof}
+Let $B''' \subset B$ be the $A$-subalgebra generated by the images
+of $B' \to B$ and $B'' \to B$. As $B'$ and $B''$ are each generated
+by finitely many elements integral over $A$, we see that $B'''$ is
+generated by finitely many elements integral over $A$ and we conclude
+that $B'''$ is finite over $A$
+(Algebra, Lemma \ref{algebra-lemma-characterize-finite-in-terms-of-integral}).
+Consider the maps
+$$
+B = B' \otimes_{B'} B \to B''' \otimes_{B'} B \to B \otimes_{B'} B = B
+$$
+The final equality holds because $\Spec(B) \to \Spec(B')$ is an
+open immersion (and hence a monomorphism). The second arrow is injective
+as $B' \to B$ is flat. Hence both arrows are isomorphisms.
+This means that
+$$
+\xymatrix{
+\Spec(B''') \ar[d] & \Spec(B) \ar[d] \ar[l] \\
+\Spec(B') & \Spec(B) \ar[l]
+}
+$$
+is cartesian. Since the base change of an open immersion is an
+open immersion we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-well-defined}
+The module (\ref{equation-dualizing}) is well defined, i.e.,
+independent of the choice of the factorization.
+\end{lemma}
+
+\begin{proof}
+Let $B', B'', B'''$ be as in Lemma \ref{lemma-dominate-factorizations}.
+We obtain a canonical map
+$$
+\omega''' = \Hom_A(B''', A) \otimes_{B'''} B \longrightarrow
+\Hom_A(B', A) \otimes_{B'} B = \omega'
+$$
+and a similar one involving $B''$. If we show these maps are isomorphisms
+then the lemma is proved. Let $g \in B'$ be an element such that
+$B'_g \to B_g$ is an isomorphism and hence $B'_g \to (B''')_g \to B_g$
+are isomorphisms. It suffices to show that $(\omega''')_g \to \omega'_g$
+is an isomorphism. The kernel and cokernel of the ring map $B' \to B'''$
+are finite $A$-modules and $g$-power torsion.
+Hence they are annihilated by a power of $g$.
+This easily implies the result.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localize-dualizing}
+Let $A \to B$ be a quasi-finite map of Noetherian rings.
+\begin{enumerate}
+\item If $A \to B$ factors as $A \to A_f \to B$ for some $f \in A$,
+then $\omega_{B/A} = \omega_{B/A_f}$.
+\item If $g \in B$, then $(\omega_{B/A})_g = \omega_{B_g/A}$.
+\item If $f \in A$, then $\omega_{B_f/A_f} = (\omega_{B/A})_f$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Say $A \to B' \to B$ is a factorization with $A \to B'$ finite and
+$\Spec(B) \to \Spec(B')$ an open immersion. In case (1) we may use
+the factorization $A_f \to B'_f \to B$ to compute $\omega_{B/A_f}$
+and use Algebra, Lemma \ref{algebra-lemma-hom-from-finitely-presented}.
+In case (2) use the factorization $A \to B' \to B_g$ to see the result.
+Part (3) follows from a combination of (1) and (2).
+\end{proof}
+
+\noindent
+Let $A \to B$ be a quasi-finite ring map of Noetherian rings, let
+$A \to A_1$ be an arbitrary ring map of Noetherian rings, and set
+$B_1 = B \otimes_A A_1$. We obtain a cocartesian diagram
+$$
+\xymatrix{
+B \ar[r] & B_1 \\
+A \ar[u] \ar[r] & A_1 \ar[u]
+}
+$$
+Observe that $A_1 \to B_1$ is quasi-finite as well (Algebra, Lemma
+\ref{algebra-lemma-quasi-finite-base-change}).
+In this situation we will define a canonical
+$B$-linear base change map
+\begin{equation}
+\label{equation-bc-dualizing}
+\omega_{B/A} \longrightarrow \omega_{B_1/A_1}
+\end{equation}
+Namely, we choose a factorization $A \to B' \to B$ as in the construction
+of $\omega_{B/A}$. Then $B'_1 = B' \otimes_A A_1$ is finite over $A_1$
+and we can use the factorization $A_1 \to B'_1 \to B_1$ in the construction
+of $\omega_{B_1/A_1}$. Thus we have to construct a map
+$$
+\Hom_A(B', A) \otimes_{B'} B
+\longrightarrow
+\Hom_{A_1}(B' \otimes_A A_1, A_1) \otimes_{B'_1} B_1
+$$
+Thus it suffices to construct a $B'$-linear map
+$\Hom_A(B', A) \to \Hom_{A_1}(B' \otimes_A A_1, A_1)$
+which we will denote $\varphi \mapsto \varphi_1$.
+Namely, given an $A$-linear map $\varphi : B' \to A$ we
+let $\varphi_1$ be the map such that
+$\varphi_1(b' \otimes a_1) = \varphi(b')a_1$.
+This is clearly $A_1$-linear and the construction is complete.
+
+\begin{lemma}
+\label{lemma-bc-map-dualizing}
+The base change map (\ref{equation-bc-dualizing})
+is independent of the choice of the
+factorization $A \to B' \to B$. Given ring maps $A \to A_1 \to A_2$
+the composition of the base change maps for $A \to A_1$ and $A_1 \to A_2$
+is the base change map for $A \to A_2$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: argue in exactly the same way as in
+Lemma \ref{lemma-dualizing-well-defined}
+using Lemma \ref{lemma-dominate-factorizations}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-flat-base-change}
+If $A \to A_1$ is flat, then
+the base change map (\ref{equation-bc-dualizing}) induces an isomorphism
+$\omega_{B/A} \otimes_B B_1 \to \omega_{B_1/A_1}$.
+\end{lemma}
+
+\begin{proof}
+Assume that $A \to A_1$ is flat. By construction of $\omega_{B/A}$ we may
+assume that $A \to B$ is finite. Then $\omega_{B/A} = \Hom_A(B, A)$ and
+$\omega_{B_1/A_1} = \Hom_{A_1}(B_1, A_1)$. Since $B_1 = B \otimes_A A_1$
+the result follows from More on Algebra, Lemma
+\ref{more-algebra-lemma-pseudo-coherence-and-base-change-ext}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-composition}
+Let $A \to B \to C$ be quasi-finite homomorphisms of Noetherian rings.
+There is a canonical map
+$\omega_{B/A} \otimes_B \omega_{C/B} \to \omega_{C/A}$.
+\end{lemma}
+
+\begin{proof}
+Choose $A \to B' \to B$ with $A \to B'$ finite such that
+$\Spec(B) \to \Spec(B')$ is an open immersion. Then
+$B' \to C$ is quasi-finite too. Choose $B' \to C' \to C$
+with $B' \to C'$ finite and $\Spec(C) \to \Spec(C')$ an
+open immersion. Then the source of the arrow is
+$$
+\Hom_A(B', A) \otimes_{B'} B \otimes_B
+\Hom_B(B \otimes_{B'} C', B) \otimes_{B \otimes_{B'} C'} C
+$$
+which is equal to
+$$
+\Hom_A(B', A) \otimes_{B'}
+\Hom_{B'}(C', B) \otimes_{C'} C
+$$
+This indeed comes with a canonical map to
+$\Hom_A(C', A) \otimes_{C'} C = \omega_{C/A}$
+coming from composition
+$\Hom_A(B', A) \times \Hom_{B'}(C', B) \to \Hom_A(C', A)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-product}
+Let $A \to B$ and $A \to C$ be quasi-finite maps of Noetherian rings.
+Then $\omega_{B \times C/A} = \omega_{B/A} \times \omega_{C/A}$
+as modules over $B \times C$.
+\end{lemma}
+
+\begin{proof}
+Choose factorizations $A \to B' \to B$ and $A \to C' \to C$ such that
+$A \to B'$ and $A \to C'$ are finite and such that $\Spec(B) \to \Spec(B')$
+and $\Spec(C) \to \Spec(C')$ are open immersions. Then
+$A \to B' \times C' \to B \times C$ is a similar factorization.
+Using this factorization to compute $\omega_{B \times C/A}$
+gives the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-associated-primes}
+Let $A \to B$ be a quasi-finite homomorphism of Noetherian rings.
+Then $\text{Ass}_B(\omega_{B/A})$ is the set of primes of $B$
+lying over associated primes of $A$.
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $A \to B' \to B$ with $A \to B'$ finite and
+$B' \to B$ inducing an open immersion on spectra. As
+$\omega_{B/A} = \omega_{B'/A} \otimes_{B'} B$ it suffices
+to prove the statement for $\omega_{B'/A}$. Thus we may assume $A \to B$
+is finite.
+
+\medskip\noindent
+Assume $\mathfrak p \in \text{Ass}(A)$ and $\mathfrak q$ is a prime
+of $B$ lying over $\mathfrak p$. Let $x \in A$ be an element whose
+annihilator is $\mathfrak p$. Choose a nonzero $\kappa(\mathfrak p)$-linear
+map $\lambda : \kappa(\mathfrak q) \to \kappa(\mathfrak p)$.
+Since $A/\mathfrak p \subset B/\mathfrak q$ is a finite extension
+of rings, there is an $f \in A$, $f \not \in \mathfrak p$
+such that $f\lambda$ maps $B/\mathfrak q$ into $A/\mathfrak p$.
+Hence we obtain a nonzero $A$-linear map
+$$
+B \to B/\mathfrak q \to A/\mathfrak p \to A,\quad
+b \mapsto f\lambda(b)x
+$$
+An easy computation shows that this element of $\omega_{B/A}$
+has annihilator $\mathfrak q$, whence
+$\mathfrak q \in \text{Ass}(\omega_{B/A})$.
+
+\medskip\noindent
+Conversely, suppose that $\mathfrak q \subset B$ is a prime ideal
+lying over a prime $\mathfrak p \subset A$ which is not an associated
+prime of $A$. We have to show that
+$\mathfrak q \not \in \text{Ass}_B(\omega_{B/A})$.
+After replacing $A$ by $A_\mathfrak p$ and $B$ by
+$B_\mathfrak p$ we may assume that $\mathfrak p$ is a maximal ideal
+of $A$. This is allowed by Lemma \ref{lemma-dualizing-flat-base-change} and
+Algebra, Lemma \ref{algebra-lemma-localize-ass}.
+Since $\mathfrak p$ is not an associated prime of $A$,
+there exists an $f \in \mathfrak p$
+which is a nonzerodivisor on $A$.
+Then $f$ is a nonzerodivisor on $\omega_{B/A}$
+and hence $\mathfrak q$ is not an associated prime of this module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-base-flat-flat}
+Let $A \to B$ be a flat quasi-finite homomorphism of Noetherian rings.
+Then $\omega_{B/A}$ is a flat $A$-module.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q \subset B$ be a prime lying over $\mathfrak p \subset A$.
+We will show that the localization $\omega_{B/A, \mathfrak q}$ is flat
+over $A_\mathfrak p$.
+This suffices by Algebra, Lemma \ref{algebra-lemma-flat-localization}.
+By
+Algebra, Lemma \ref{algebra-lemma-etale-makes-quasi-finite-finite-one-prime}
+we can find an \'etale ring map $A \to A'$ and a prime
+ideal $\mathfrak p' \subset A'$ lying over $\mathfrak p$
+such that $\kappa(\mathfrak p') = \kappa(\mathfrak p)$ and
+such that
+$$
+B' = B \otimes_A A' = C \times D
+$$
+with $A' \to C$ finite and such that the unique prime $\mathfrak q'$
+of $B \otimes_A A'$ lying over $\mathfrak q$ and $\mathfrak p'$
+corresponds to a prime of $C$. By
+Lemma \ref{lemma-dualizing-flat-base-change}
+and Algebra, Lemma \ref{algebra-lemma-base-change-flat-up-down}
+it suffices to show $\omega_{B'/A', \mathfrak q'}$
+is flat over $A'_{\mathfrak p'}$.
+Since $\omega_{B'/A'} = \omega_{C/A'} \times \omega_{D/A'}$
+by Lemma \ref{lemma-dualizing-product}
+this reduces us to the case where $B$ is finite flat over $A$.
+In this case $B$ is finite locally free as an $A$-module
+and $\omega_{B/A} = \Hom_A(B, A)$ is the dual finite
+locally free $A$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-base-change-of-flat}
+If $A \to B$ is flat, then the base change map (\ref{equation-bc-dualizing})
+induces an isomorphism $\omega_{B/A} \otimes_B B_1 \to \omega_{B_1/A_1}$.
+\end{lemma}
+
+\begin{proof}
+If $A \to B$ is finite flat, then $B$ is finite locally free as an $A$-module.
+In this case $\omega_{B/A} = \Hom_A(B, A)$ is the dual finite
+locally free $A$-module and formation of this module commutes
+with arbitrary base change which proves the lemma in this case.
+In the next paragraph we reduce the general (quasi-finite flat)
+case to the finite flat case just discussed.
+
+\medskip\noindent
+Let $\mathfrak q_1 \subset B_1$ be a prime. We will show that the
+localization of the map at the prime $\mathfrak q_1$ is an isomorphism, which
+suffices by Algebra, Lemma \ref{algebra-lemma-characterize-zero-local}.
+Let $\mathfrak q \subset B$ and $\mathfrak p \subset A$ be the prime
+ideals lying under $\mathfrak q_1$. By
+Algebra, Lemma \ref{algebra-lemma-etale-makes-quasi-finite-finite-one-prime}
+we can find an \'etale ring map $A \to A'$ and a prime
+ideal $\mathfrak p' \subset A'$ lying over $\mathfrak p$
+such that $\kappa(\mathfrak p') = \kappa(\mathfrak p)$ and
+such that
+$$
+B' = B \otimes_A A' = C \times D
+$$
+with $A' \to C$ finite and such that the unique prime $\mathfrak q'$
+of $B \otimes_A A'$ lying over $\mathfrak q$ and $\mathfrak p'$
+corresponds to a prime of $C$. Set $A'_1 = A' \otimes_A A_1$ and
+consider the base change maps
+(\ref{equation-bc-dualizing}) for the ring maps
+$A \to A' \to A'_1$ and $A \to A_1 \to A'_1$ as in the diagram
+$$
+\xymatrix{
+\omega_{B'/A'} \otimes_{B'} B'_1 \ar[r] & \omega_{B'_1/A'_1} \\
+\omega_{B/A} \otimes_B B'_1 \ar[r] \ar[u] &
+\omega_{B_1/A_1} \otimes_{B_1} B'_1 \ar[u]
+}
+$$
+where $B' = B \otimes_A A'$, $B_1 = B \otimes_A A_1$, and
+$B_1' = B \otimes_A (A' \otimes_A A_1)$. By
+Lemma \ref{lemma-bc-map-dualizing} the diagram commutes. By
+Lemma \ref{lemma-dualizing-flat-base-change}
+the vertical arrows are isomorphisms.
+As $B_1 \to B'_1$ is \'etale and hence flat it suffices
+to prove the top horizontal arrow is an isomorphism after localizing
+at a prime $\mathfrak q'_1$ of $B'_1$ lying over $\mathfrak q_1$
+(there is such a prime and use
+Algebra, Lemma \ref{algebra-lemma-local-flat-ff}).
+Thus we may assume that $B = C \times D$ with $A \to C$
+finite and $\mathfrak q$ corresponding to a prime of $C$.
+In this case the dualizing module $\omega_{B/A}$ decomposes
+in a similar fashion (Lemma \ref{lemma-dualizing-product})
+which reduces the question
+to the finite flat case $A \to C$ handled above.
+\end{proof}
+
+\begin{remark}
+\label{remark-relative-dualizing-for-quasi-finite}
+Let $f : Y \to X$ be a locally quasi-finite morphism of locally Noetherian
+schemes. It is clear from Lemma \ref{lemma-localize-dualizing}
+that there is a unique coherent $\mathcal{O}_Y$-module
+$\omega_{Y/X}$ on $Y$ such that for every pair of affine opens
+$\Spec(B) = V \subset Y$, $\Spec(A) = U \subset X$ with $f(V) \subset U$
+there is a canonical isomorphism
+$$
+H^0(V, \omega_{Y/X}) = \omega_{B/A}
+$$
+and these isomorphisms are compatible with restriction maps.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-compare-dualizing-algebraic}
+Let $A \to B$ be a quasi-finite homomorphism of Noetherian rings.
+Let $\omega_{B/A}^\bullet \in D(B)$ be the algebraic relative dualizing
+complex discussed in Dualizing Complexes, Section
+\ref{dualizing-section-relative-dualizing-complexes-Noetherian}.
+Then there is a (nonunique) isomorphism
+$\omega_{B/A} = H^0(\omega_{B/A}^\bullet)$.
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $A \to B' \to B$
+where $A \to B'$ is finite and $\Spec(B) \to \Spec(B')$
+is an open immersion. Then
+$\omega_{B/A}^\bullet = \omega_{B'/A}^\bullet \otimes_{B'}^\mathbf{L} B$
+by Dualizing Complexes, Lemmas
+\ref{dualizing-lemma-composition-shriek-algebraic} and
+\ref{dualizing-lemma-upper-shriek-localize} and
+the definition of $\omega_{B/A}^\bullet$. Hence
+it suffices to show there is an isomorphism when $A \to B$ is finite.
+In this case we can use
+Dualizing Complexes, Lemma \ref{dualizing-lemma-upper-shriek-finite}
+to see that $\omega_{B/A}^\bullet = R\Hom(B, A)$ and hence
+$H^0(\omega^\bullet_{B/A}) = \Hom_A(B, A)$ as desired.
+\end{proof}
+
+
+
+
+
+
+\section{Discriminant of a finite locally free morphism}
+\label{section-discriminant}
+
+\noindent
+Let $X$ be a scheme and let $\mathcal{F}$ be a finite locally
+free $\mathcal{O}_X$-module. Then there is a canonical {\it trace} map
+$$
+\text{Trace} :
+\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{F})
+\longrightarrow
+\mathcal{O}_X
+$$
+See Exercises, Exercise \ref{exercises-exercise-trace-det}. This map has
+the property that $\text{Trace}(\text{id})$ is the locally constant function
+on $X$ with value the rank of $\mathcal{F}$.
+
+\medskip\noindent
+Let $\pi : X \to Y$ be a morphism of schemes which is finite locally
+free. Then there exists a canonical {\it trace for $\pi$}
+which is an $\mathcal{O}_Y$-linear map
+$$
+\text{Trace}_\pi : \pi_*\mathcal{O}_X \longrightarrow \mathcal{O}_Y
+$$
+sending a local section $f$ of $\pi_*\mathcal{O}_X$ to the
+trace of multiplication by $f$ on $\pi_*\mathcal{O}_X$. Over
+affine opens this recovers the construction in
+Exercises, Exercise \ref{exercises-exercise-trace-det-rings}.
+The composition
+$$
+\mathcal{O}_Y \xrightarrow{\pi^\sharp} \pi_*\mathcal{O}_X
+\xrightarrow{\text{Trace}_\pi} \mathcal{O}_Y
+$$
+equals multiplication by the degree of $\pi$ (which is a locally constant
+function on $Y$). In analogy with
+Fields, Section \ref{fields-section-trace-pairing}
+we can define the trace pairing
+$$
+Q_\pi :
+\pi_*\mathcal{O}_X \times \pi_*\mathcal{O}_X
+\longrightarrow
+\mathcal{O}_Y
+$$
+by the rule $(f, g) \mapsto \text{Trace}_\pi(fg)$. We can think of
+$Q_\pi$ as a linear map
+$\pi_*\mathcal{O}_X \to
+\SheafHom_{\mathcal{O}_Y}(\pi_*\mathcal{O}_X, \mathcal{O}_Y)$
+between locally free modules of the same rank, and hence obtain
+a determinant
+$$
+\det(Q_\pi) :
+\wedge^{top}(\pi_*\mathcal{O}_X)
+\longrightarrow
+\wedge^{top}(\pi_*\mathcal{O}_X)^{\otimes -1}
+$$
+or in other words a global section
+$$
+\det(Q_\pi) \in \Gamma(Y, \wedge^{top}(\pi_*\mathcal{O}_X)^{\otimes -2})
+$$
+The {\it discriminant of $\pi$} is by definition the closed
+subscheme $D_\pi \subset Y$ cut out by this global section.
+Clearly, $D_\pi$ is a locally principal closed subscheme of $Y$.
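+
+\medskip\noindent
+Here is a standard example (the notation $R$, $a$ is not used elsewhere
+in this section). Let $R$ be a ring, let $a \in R$, and let
+$\pi : \Spec(R[x]/(x^2 - a)) \to \Spec(R)$ be the associated finite
+locally free morphism of degree $2$. Multiplication by $x$ has trace
+zero and multiplication by $x^2 = a$ has trace $2a$, so with respect
+to the basis $1, x$ the trace pairing $Q_\pi$ has matrix
+$$
+\left(
+\begin{matrix}
+2 & 0 \\
+0 & 2a
+\end{matrix}
+\right)
+$$
+Hence $D_\pi \subset \Spec(R)$ is cut out by $\det(Q_\pi) = 4a$,
+the discriminant of the polynomial $x^2 - a$, and $\pi$ is \'etale
+if and only if $4a$ is invertible in $R$.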
+
+\begin{lemma}
+\label{lemma-discriminant}
+Let $\pi : X \to Y$ be a morphism of schemes which is finite locally
+free. Then $\pi$ is \'etale if and only if its discriminant is empty.
+\end{lemma}
+
+\begin{proof}
+By Morphisms, Lemma \ref{morphisms-lemma-etale-flat-etale-fibres}
+it suffices to check that the fibres of $\pi$ are \'etale.
+Since the construction of the trace pairing commutes with base
+change we reduce to the following question: Let $k$ be a field
+and let $A$ be a finite dimensional $k$-algebra. Show that
+$A$ is \'etale over $k$ if and only if the trace pairing
+$Q_{A/k} : A \times A \to k$, $(a, b) \mapsto \text{Trace}_{A/k}(ab)$
+is nondegenerate.
+
+\medskip\noindent
+Assume $Q_{A/k}$ is nondegenerate. If $a \in A$ is a nilpotent element, then
+$ab$ is nilpotent for all $b \in A$ and we conclude that $Q_{A/k}(a, -)$ is
+identically zero. Hence $A$ is reduced. Then we can write
+$A = K_1 \times \ldots \times K_n$ as a product where each $K_i$
+is a field (see
+Algebra, Lemmas \ref{algebra-lemma-finite-dimensional-algebra},
+\ref{algebra-lemma-artinian-finite-length}, and
+\ref{algebra-lemma-minimal-prime-reduced-ring}).
+In this case the quadratic
+space $(A, Q_{A/k})$ is the orthogonal direct sum of the spaces
+$(K_i, Q_{K_i/k})$. It follows from
+Fields, Lemma \ref{fields-lemma-separable-trace-pairing}
+that each $K_i$ is separable over $k$. This means that $A$ is \'etale
+over $k$ by Algebra, Lemma \ref{algebra-lemma-etale-over-field}.
+The converse is proved by reading the argument backwards.
+\end{proof}
+
+
+
+
+
+\section{Traces for flat quasi-finite ring maps}
+\label{section-quasi-finite-traces}
+
+\noindent
+The trace referred to in the title of this section is of a completely
+different nature from the trace discussed in
+Duality for Schemes, Section \ref{duality-section-trace}.
+Namely, it is the trace
+as discussed in Fields, Section \ref{fields-section-trace-pairing}
+and generalized in Exercises, Exercises \ref{exercises-exercise-trace-det} and
+\ref{exercises-exercise-trace-det-rings}.
+
+\medskip\noindent
+Let $A \to B$ be a finite flat map of Noetherian rings. Then $B$ is finite
+flat as an $A$-module and hence finite locally free
+(Algebra, Lemma \ref{algebra-lemma-finite-projective}).
+Given $b \in B$ we can consider the {\it trace} $\text{Trace}_{B/A}(b)$
+of the $A$-linear map $B \to B$ given by
+multiplication by $b$ on $B$. By the references above this defines
+an $A$-linear map $\text{Trace}_{B/A} : B \to A$.
+Since $\omega_{B/A} = \Hom_A(B, A)$ as $A \to B$ is finite, we see
+that $\text{Trace}_{B/A} \in \omega_{B/A}$.
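+
+\medskip\noindent
+For example, if $B = A \times A$, then multiplication by an element
+$b = (b_1, b_2)$ is given by a diagonal matrix and hence
+$\text{Trace}_{B/A}(b_1, b_2) = b_1 + b_2$. Similarly, if
+$B = A[x]/(x^2 - a)$ for some $a \in A$, then
+$\text{Trace}_{B/A}(\alpha + \beta x) = 2\alpha$
+for $\alpha, \beta \in A$.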
+
+\medskip\noindent
+For a general flat quasi-finite ring map we define the notion
+of a trace as follows.
+
+\begin{definition}
+\label{definition-trace-element}
+Let $A \to B$ be a flat quasi-finite map of Noetherian rings.
+The {\it trace element} is the unique\footnote{Uniqueness
+and existence will be justified in
+Lemmas \ref{lemma-trace-unique} and \ref{lemma-dualizing-tau}.}
+element
+$\tau_{B/A} \in \omega_{B/A}$
+with the following property: for any Noetherian $A$-algebra $A_1$
+such that $B_1 = B \otimes_A A_1$ comes with a
+product decomposition $B_1 = C \times D$ with $A_1 \to C$ finite,
+the image of $\tau_{B/A}$ in $\omega_{C/A_1}$
+is $\text{Trace}_{C/A_1}$.
+Here we use the base change map (\ref{equation-bc-dualizing}) and
+Lemma \ref{lemma-dualizing-product} to get
+$\omega_{B/A} \to \omega_{B_1/A_1} \to \omega_{C/A_1}$.
+\end{definition}
+
+\noindent
+We first prove that trace elements are unique and then
+we prove that they exist.
+
+\begin{lemma}
+\label{lemma-trace-unique}
+Let $A \to B$ be a flat quasi-finite map of Noetherian rings.
+Then there is at most one trace element in $\omega_{B/A}$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak q \subset B$ be a prime ideal lying over the prime
+$\mathfrak p \subset A$. By
+Algebra, Lemma \ref{algebra-lemma-etale-makes-quasi-finite-finite-one-prime}
+we can find an \'etale ring map $A \to A_1$ and a prime
+ideal $\mathfrak p_1 \subset A_1$ lying over $\mathfrak p$
+such that $\kappa(\mathfrak p_1) = \kappa(\mathfrak p)$ and
+such that
+$$
+B_1 = B \otimes_A A_1 = C \times D
+$$
+with $A_1 \to C$ finite and such that the unique prime $\mathfrak q_1$
+of $B \otimes_A A_1$ lying over $\mathfrak q$ and $\mathfrak p_1$
+corresponds to a prime of $C$. Observe that
+$\omega_{C/A_1} = \omega_{B/A} \otimes_B C$
+(combine Lemmas \ref{lemma-dualizing-flat-base-change} and
+\ref{lemma-dualizing-product}). Since the collection
+of ring maps $B \to C$ obtained in this manner is a jointly
+injective family of flat maps and since the image of $\tau_{B/A}$
+in $\omega_{C/A_1}$ is prescribed the uniqueness follows.
+\end{proof}
+
+\noindent
+Here is a sanity check.
+
+\begin{lemma}
+\label{lemma-finite-flat-trace}
+Let $A \to B$ be a finite flat map of Noetherian rings.
+Then $\text{Trace}_{B/A} \in \omega_{B/A}$ is the trace element.
+\end{lemma}
+
+\begin{proof}
+Suppose we have $A \to A_1$ with $A_1$ Noetherian and
+a product decomposition $B \otimes_A A_1 = C \times D$ with $A_1 \to C$
+finite. Of course in this case $A_1 \to D$ is also finite.
+Set $B_1 = B \otimes_A A_1$.
+Since the construction of traces commutes with base change
+we see that $\text{Trace}_{B/A}$ maps to $\text{Trace}_{B_1/A_1}$.
+Thus the proof is finished by noticing that
+$\text{Trace}_{B_1/A_1} = (\text{Trace}_{C/A_1}, \text{Trace}_{D/A_1})$
+under the isomorphism
+$\omega_{B_1/A_1} = \omega_{C/A_1} \times \omega_{D/A_1}$
+of Lemma \ref{lemma-dualizing-product}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-trace-base-change}
+Let $A \to B$ be a flat quasi-finite map of Noetherian rings.
+Let $\tau \in \omega_{B/A}$ be a trace element.
+\begin{enumerate}
+\item If $A \to A_1$ is a map with $A_1$ Noetherian, then with
+$B_1 = A_1 \otimes_A B$ the image of $\tau$ in $\omega_{B_1/A_1}$ is a
+trace element.
+\item If $A = R_f$, then $\tau$ is a trace element in $\omega_{B/R}$.
+\item If $g \in B$, then the image of $\tau$ in $\omega_{B_g/A}$
+is a trace element.
+\item If $B = B_1 \times B_2$, then $\tau$ maps to a trace element
+in both $\omega_{B_1/A}$ and $\omega_{B_2/A}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is a formal consequence of the definition.
+
+\medskip\noindent
+Statement (2) makes sense because $\omega_{B/R} = \omega_{B/A}$
+by Lemma \ref{lemma-localize-dualizing}. Denote $\tau'$ the element
+$\tau$ but viewed as an element of $\omega_{B/R}$. To see that (2) is true
+suppose that we have $R \to R_1$ with $R_1$ Noetherian and a product
+decomposition $B \otimes_R R_1 = C \times D$ with $R_1 \to C$ finite.
+Then with $A_1 = (R_1)_f$ we see that $B \otimes_A A_1 = C \times D$.
+Since $R_1 \to C$ is finite, a fortiori $A_1 \to C$ is finite.
+Hence we can use the defining property of $\tau$ to get the corresponding
+property of $\tau'$.
+
+\medskip\noindent
+Statement (3) makes sense because $\omega_{B_g/A} = (\omega_{B/A})_g$
+by Lemma \ref{lemma-localize-dualizing}. The proof is similar to the proof
+of (2). Suppose we have $A \to A_1$ with $A_1$ Noetherian and
+a product decomposition $B_g \otimes_A A_1 = C \times D$ with $A_1 \to C$
+finite. Set $B_1 = B \otimes_A A_1$. Then
+$\Spec(C) \to \Spec(B_1)$ is an open immersion as $B_g \otimes_A A_1 = (B_1)_g$
+and the image is closed because $B_1 \to C$ is finite
+(as $A_1 \to C$ is finite).
+Thus we see that $B_1 = C \times D_1$ and $D = (D_1)_g$. Then we can use
+the defining property of $\tau$ to get the corresponding property
+for the image of $\tau$ in $\omega_{B_g/A}$.
+
+\medskip\noindent
+Statement (4) makes sense because
+$\omega_{B/A} = \omega_{B_1/A} \times \omega_{B_2/A}$ by
+Lemma \ref{lemma-dualizing-product}.
+Suppose we have $A \to A'$ with $A'$ Noetherian and
+a product decomposition $B \otimes_A A' = C \times D$ with $A' \to C$
+finite. Then it is clear that we can refine this product
+decomposition into $B \otimes_A A' = C_1 \times C_2 \times D_1 \times D_2$
+with $A' \to C_i$ finite such that $B_i \otimes_A A' = C_i \times D_i$.
+Then we can use the defining property of $\tau$ to get the corresponding
+property for the image of $\tau$ in $\omega_{B_i/A}$. This uses the obvious
+fact that
+$\text{Trace}_{C/A'} = (\text{Trace}_{C_1/A'}, \text{Trace}_{C_2/A'})$
+under the decomposition
+$\omega_{C/A'} = \omega_{C_1/A'} \times \omega_{C_2/A'}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-trace}
+Let $A \to B$ be a flat quasi-finite map of Noetherian rings.
+Let $g_1, \ldots, g_m \in B$ be elements generating the unit ideal.
+Let $\tau \in \omega_{B/A}$ be an element whose image in
+$\omega_{B_{g_i}/A}$ is a trace element for $A \to B_{g_i}$.
+Then $\tau$ is a trace element.
+\end{lemma}
+
+\begin{proof}
+Suppose we have $A \to A_1$ with $A_1$ Noetherian and a product
+decomposition $B \otimes_A A_1 = C \times D$ with $A_1 \to C$ finite.
+We have to show that the image of $\tau$ in $\omega_{C/A_1}$ is
+$\text{Trace}_{C/A_1}$. Observe that $g_1, \ldots, g_m$
+generate the unit ideal in $B_1 = B \otimes_A A_1$ and that
+$\tau$ maps to a trace element in $\omega_{(B_1)_{g_i}/A_1}$
+by Lemma \ref{lemma-trace-base-change}. Hence we may replace
+$A$ by $A_1$ and $B$ by $B_1$ to get to the situation as described
+in the next paragraph.
+
+\medskip\noindent
Here we assume that $B = C \times D$ with $A \to C$ finite.
+Let $\tau_C$ be the image of $\tau$ in $\omega_{C/A}$.
+We have to prove that $\tau_C = \text{Trace}_{C/A}$ in $\omega_{C/A}$.
+By the compatibility of trace elements with products
+(Lemma \ref{lemma-trace-base-change})
+we see that $\tau_C$ maps to a trace element in $\omega_{C_{g_i}/A}$.
+Hence, after replacing $B$ by $C$ we may assume that $A \to B$
+is finite flat.
+
+\medskip\noindent
+Assume $A \to B$ is finite flat. In this case $\text{Trace}_{B/A}$
+is a trace element by Lemma \ref{lemma-finite-flat-trace}.
+Hence $\text{Trace}_{B/A}$ maps to a trace element in
+$\omega_{B_{g_i}/A}$ by Lemma \ref{lemma-trace-base-change}.
+Since trace elements are unique (Lemma \ref{lemma-trace-unique})
+we find that $\text{Trace}_{B/A}$ and $\tau$ map
+to the same elements in $\omega_{B_{g_i}/A} = (\omega_{B/A})_{g_i}$.
+As $g_1, \ldots, g_m$ generate the unit ideal of $B$ the map
+$\omega_{B/A} \to \prod \omega_{B_{g_i}/A}$ is injective
and we conclude that $\tau = \text{Trace}_{B/A}$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-tau}
+Let $A \to B$ be a flat quasi-finite map of Noetherian rings.
+There exists a trace element $\tau \in \omega_{B/A}$.
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $A \to B' \to B$ with $A \to B'$ finite and
+$\Spec(B) \to \Spec(B')$ an open immersion. Let $g_1, \ldots, g_n \in B'$
+be elements such that $\Spec(B) = \bigcup D(g_i)$ as opens of $\Spec(B')$.
+Suppose that we can prove the existence of trace elements $\tau_i$ for the
+quasi-finite flat ring maps $A \to B_{g_i}$. Then for all $i, j$ the elements
+$\tau_i$ and $\tau_j$ map to trace elements of $\omega_{B_{g_ig_j}/A}$
+by Lemma \ref{lemma-trace-base-change}. By uniqueness of
+trace elements (Lemma \ref{lemma-trace-unique}) they map to the same element.
+Hence the sheaf condition for the quasi-coherent module associated to
+$\omega_{B/A}$ (see Algebra, Lemma \ref{algebra-lemma-cover-module})
+produces an element $\tau \in \omega_{B/A}$.
+Then $\tau$ is a trace element by
+Lemma \ref{lemma-glue-trace}.
+In this way we reduce to the case treated in the next paragraph.
+
+\medskip\noindent
+Assume we have $A \to B'$ finite and $g \in B'$ with $B = B'_g$ flat over $A$.
+It is our task to construct a trace element in
+$\omega_{B/A} = \Hom_A(B', A) \otimes_{B'} B$.
+Choose a resolution $F_1 \to F_0 \to B' \to 0$ of $B'$ by finite free
+$A$-modules $F_0$ and $F_1$. Then we have an exact sequence
+$$
+0 \to \Hom_A(B', A) \to F_0^\vee \to F_1^\vee
+$$
+where $F_i^\vee = \Hom_A(F_i, A)$ is the dual finite free module.
+Similarly we have the exact sequence
+$$
+0 \to \Hom_A(B', B') \to F_0^\vee \otimes_A B' \to F_1^\vee \otimes_A B'
+$$
+The idea of the construction of $\tau$ is to use the diagram
+$$
+B' \xrightarrow{\mu} \Hom_A(B', B')
+\leftarrow \Hom_A(B', A) \otimes_A B'
+\xrightarrow{ev} A
+$$
+where the first arrow sends $b' \in B'$ to the $A$-linear operator
+given by multiplication by $b'$ and the last arrow is the evaluation map.
+The problem is that the middle arrow, which sends $\lambda' \otimes b'$
+to the map $b'' \mapsto \lambda'(b'')b'$, is not an isomorphism.
+If $B'$ is flat over $A$, the exact sequences above show that it
+is an isomorphism and the composition from left to right is the usual trace
+$\text{Trace}_{B'/A}$. In the general case, we consider
+the diagram
+$$
+\xymatrix{
+& \Hom_A(B', A) \otimes_A B' \ar[r] \ar[d] &
+\Hom_A(B', A) \otimes_A B'_g \ar[d] \\
+B' \ar[r]_-\mu \ar@{..>}[rru] \ar@{..>}[ru]^\psi &
+\Hom_A(B', B') \ar[r] &
+\Ker(F_0^\vee \otimes_A B'_g \to F_1^\vee \otimes_A B'_g)
+}
+$$
+By flatness of $A \to B'_g$ we see that the right vertical arrow is an
+isomorphism. Hence we obtain the unadorned dotted arrow.
+Since $B'_g = \colim \frac{1}{g^n}B'$, since
+colimits commute with tensor products,
+and since $B'$ is a finitely presented $A$-module
+we can find an $n \geq 0$ and a $B'$-linear (for right $B'$-module structure)
+map $\psi : B' \to \Hom_A(B', A) \otimes_A B'$
+whose composition with the left vertical arrow is $g^n\mu$.
+Composing with $ev$ we obtain an element
+$ev \circ \psi \in \Hom_A(B', A)$. Then we set
+$$
+\tau = (ev \circ \psi) \otimes g^{-n} \in
+\Hom_A(B', A) \otimes_{B'} B'_g = \omega_{B'_g/A} = \omega_{B/A}
+$$
+We omit the easy verification that this element does not depend
+on the choice of $n$ and $\psi$ above.
+
+\medskip\noindent
+Let us prove that $\tau$ as constructed in the previous paragraph
+has the desired property in a special case. Namely, say
$B' = C' \times D'$ and $g = (f, h)$ where $A \to C'$ is flat,
$D'_h$ is flat over $A$, and $f$ is a unit in $C'$.
+To show: $\tau$ maps to $\text{Trace}_{C'/A}$ in $\omega_{C'/A}$.
+In this case we first choose $n_D$ and
+$\psi_D : D' \to \Hom_A(D', A) \otimes_A D'$ as above for the pair
+$(D', h)$ and we can let
+$\psi_C : C' \to \Hom_A(C', A) \otimes_A C' = \Hom_A(C', C')$
be the map sending $c' \in C'$ to multiplication by $c'$.
+Then we take $n = n_D$ and $\psi = (f^{n_D} \psi_C, \psi_D)$
+and the desired compatibility is clear because
+$\text{Trace}_{C'/A} = ev \circ \psi_C$ as remarked above.
+
+\medskip\noindent
+To prove the desired property in general, suppose given
+$A \to A_1$ with $A_1$ Noetherian and a product decomposition
+$B'_g \otimes_A A_1 = C \times D$ with $A_1 \to C$ finite.
+Set $B'_1 = B' \otimes_A A_1$. Then $\Spec(C) \to \Spec(B'_1)$
+is an open immersion as $B'_g \otimes_A A_1 = (B'_1)_g$ and
+the image is closed as $B'_1 \to C$ is finite (since $A_1 \to C$
is finite). Thus $B'_1 = C \times D'$ with $D'_g = D$. We conclude that
over $A_1$ the decomposition $B'_1 = C \times D'$ and the image of $g$
are as in the previous paragraph.
+Since formation of the displayed diagram above
+commutes with base change, the formation of $\tau$ commutes
+with the base change $A \to A_1$ (details omitted; use the
+resolution $F_1 \otimes_A A_1 \to F_0 \otimes_A A_1 \to B'_1 \to 0$
+to see this). Thus the desired compatibility follows from the result
+of the previous paragraph.
+\end{proof}
+
+\begin{remark}
+\label{remark-relative-dualizing-for-flat-quasi-finite}
+Let $f : Y \to X$ be a flat locally quasi-finite morphism of locally
+Noetherian schemes. Let $\omega_{Y/X}$ be as in
+Remark \ref{remark-relative-dualizing-for-quasi-finite}.
+It is clear from the uniqueness, existence, and compatibility with
+localization of trace elements
+(Lemmas \ref{lemma-trace-unique}, \ref{lemma-dualizing-tau}, and
+\ref{lemma-trace-base-change})
+that there exists a global section
+$$
+\tau_{Y/X} \in \Gamma(Y, \omega_{Y/X})
+$$
+such that for every pair of affine opens
+$\Spec(B) = V \subset Y$, $\Spec(A) = U \subset X$ with $f(V) \subset U$
+that element $\tau_{Y/X}$ maps to $\tau_{B/A}$ under the
+canonical isomorphism
+$H^0(V, \omega_{Y/X}) = \omega_{B/A}$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-tau-nonzero}
+Let $k$ be a field and let $A$ be a finite $k$-algebra. Assume $A$
+is local with residue field $k'$. The following are equivalent
+\begin{enumerate}
+\item $\text{Trace}_{A/k}$ is nonzero,
+\item $\tau_{A/k} \in \omega_{A/k}$ is nonzero, and
+\item $k'/k$ is separable and $\text{length}_A(A)$ is prime
+to the characteristic of $k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Conditions (1) and (2) are equivalent by Lemma \ref{lemma-finite-flat-trace}.
Let $\mathfrak m \subset A$ be the maximal ideal. Since $\dim_k(A) < \infty$ it is clear that
+$A$ has finite length over $A$. Choose a filtration
+$$
A = I_0 \supset \mathfrak m = I_1 \supset I_2 \supset \ldots \supset I_n = 0
+$$
+by ideals such that $I_i/I_{i + 1} \cong k'$ as $A$-modules. See
+Algebra, Lemma \ref{algebra-lemma-simple-pieces} which also shows that
+$n = \text{length}_A(A)$. If $a \in \mathfrak m$ then $aI_i \subset I_{i + 1}$
+and it is immediate that $\text{Trace}_{A/k}(a) = 0$.
+If $a \not \in \mathfrak m$ with image $\lambda \in k'$, then
+we conclude
+$$
\text{Trace}_{A/k}(a) =
\sum\nolimits_{i = 0, \ldots, n - 1}
\text{Trace}_k(a : I_i/I_{i + 1} \to I_i/I_{i + 1}) =
n \text{Trace}_{k'/k}(\lambda)
+$$
+The proof of the lemma is finished by applying
+Fields, Lemma \ref{fields-lemma-separable-trace-pairing}.
+\end{proof}
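
\begin{remark}
\label{remark-example-trace-vanishing}
As a standard illustration of Lemma \ref{lemma-tau-nonzero}, let $k$ be a
field of characteristic $p > 0$ and let $A = k[x]/(x^p)$. Then $A$ is local
with residue field $k' = k$ and $\text{length}_A(A) = p$. For
$a = a_0 + a_1x + \ldots + a_{p - 1}x^{p - 1}$ the matrix of multiplication
by $a$ on the basis $1, x, \ldots, x^{p - 1}$ is lower triangular with all
diagonal entries equal to $a_0$. Hence
$$
\text{Trace}_{A/k}(a) = pa_0 = 0
$$
as predicted by the lemma, and correspondingly $\tau_{A/k} = 0$
in $\omega_{A/k}$.
\end{remark}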
+
+
+
+
+
+
+\section{Finite morphisms}
+\label{section-finite-morphisms}
+
+\noindent
+In this section we collect some observations about the
+constructions in the previous sections for finite morphisms.
+Let $f : Y \to X$ be a finite morphism of locally Noetherian schemes.
+Let $\omega_{Y/X}$ be as in
+Remark \ref{remark-relative-dualizing-for-quasi-finite}.
+
+\medskip\noindent
+The first remark is that
+$$
+f_*\omega_{Y/X} = \SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{O}_X)
+$$
+as sheaves of $f_*\mathcal{O}_Y$-modules. Since $f$ is affine, this
+formula uniquely characterizes $\omega_{Y/X}$, see
+Morphisms, Lemma \ref{morphisms-lemma-affine-equivalence-modules}.
+The formula holds because for $\Spec(A) = U \subset X$ affine open, the
+inverse image $V = f^{-1}(U)$ is the spectrum of a finite $A$-algebra
+$B$ and hence
+$$
+H^0(U, f_*\omega_{Y/X}) =
+H^0(V, \omega_{Y/X}) =
+\omega_{B/A} =
+\Hom_A(B, A) =
+H^0(U, \SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{O}_X))
+$$
+by construction. In particular, we obtain a canonical evaluation map
+$$
+f_*\omega_{Y/X} \longrightarrow \mathcal{O}_X
+$$
+which is given by evaluation at $1$ if we think of $f_*\omega_{Y/X}$
+as the sheaf $\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{O}_X)$.
+
+\medskip\noindent
+The second remark is that using the evaluation map we obtain
+canonical identifications
+$$
+\Hom_Y(\mathcal{F}, f^*\mathcal{G} \otimes_{\mathcal{O}_Y} \omega_{Y/X})
+=
+\Hom_X(f_*\mathcal{F}, \mathcal{G})
+$$
+functorially in the quasi-coherent module $\mathcal{F}$ on $Y$
+and the finite locally free module $\mathcal{G}$ on $X$.
+If $\mathcal{G} = \mathcal{O}_X$ this follows immediately
+from the above and
+Algebra, Lemma \ref{algebra-lemma-adjoint-hom-restrict}.
+For general $\mathcal{G}$ we can use the same lemma and the
+isomorphisms
+$$
+f_*(f^*\mathcal{G} \otimes_{\mathcal{O}_Y} \omega_{Y/X}) =
+\mathcal{G} \otimes_{\mathcal{O}_X}
+\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{O}_X) =
+\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{G})
+$$
+of $f_*\mathcal{O}_Y$-modules where the first equality is the
+projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula}).
+An alternative is to prove the formula affine locally by
+direct computation.
+
+\medskip\noindent
+The third remark is that if $f$ is in addition flat, then the
+composition
+$$
+f_*\mathcal{O}_Y \xrightarrow{f_*\tau_{Y/X}} f_*\omega_{Y/X}
+\longrightarrow \mathcal{O}_X
+$$
+is equal to the trace map $\text{Trace}_f$ discussed in
+Section \ref{section-discriminant}. This follows immediately by
+looking over affine opens.
+
+\medskip\noindent
+The fourth remark is that if $f$ is flat and $X$ Noetherian, then
+we obtain
+$$
+\Hom_Y(K, Lf^*M \otimes_{\mathcal{O}_Y} \omega_{Y/X})
+=
+\Hom_X(Rf_*K, M)
+$$
+for any $K$ in $D_\QCoh(\mathcal{O}_Y)$ and $M$ in $D_\QCoh(\mathcal{O}_X)$.
+This follows from the material in
+Duality for Schemes, Section \ref{duality-section-proper-flat},
+but can be proven directly in this case as follows.
+First, if $X$ is affine, then it holds by
+Dualizing Complexes, Lemmas \ref{dualizing-lemma-right-adjoint} and
+\ref{dualizing-lemma-RHom-is-tensor-special}\footnote{There is a
+simpler proof of this lemma in our case.} and
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-affine-compare-bounded}.
+Then we can use the induction principle
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-induction-principle})
+and Mayer-Vietoris
+(in the form of Cohomology, Lemma \ref{cohomology-lemma-mayer-vietoris-hom})
+to finish the proof.
+
+
+
+
+
+
+
+\section{The Noether different}
+\label{section-noether-different}
+
+\noindent
+There are many different differents available in the literature.
+We list some of them in this and the next sections; for more
+information we suggest the reader consult \cite{Kunz}.
+
+\medskip\noindent
+Let $A \to B$ be a ring map. Denote
+$$
+\mu : B \otimes_A B \longrightarrow B,\quad
+b \otimes b' \longmapsto bb'
+$$
+the multiplication map. Let $I = \Ker(\mu)$. It is clear that $I$ is
+generated by the elements $b \otimes 1 - 1 \otimes b$ for $b \in B$.
+Hence the annihilator $J \subset B \otimes_A B$ of $I$ is a $B$-module
+in a canonical manner. The {\it Noether different} of $B$ over $A$ is
+the image of $J$ under the map $\mu : B \otimes_A B \to B$. Equivalently,
+the Noether different is the image of the map
+$$
+J = \Hom_{B \otimes_A B}(B, B \otimes_A B) \longrightarrow B,\quad
+\varphi \longmapsto \mu(\varphi(1))
+$$
+We begin with some obligatory lemmas.
+
+\begin{lemma}
+\label{lemma-noether-different-product}
+Let $A \to B_i$, $i = 1, 2$ be ring maps. Set $B = B_1 \times B_2$.
+\begin{enumerate}
+\item The annihilator $J$ of $\Ker(B \otimes_A B \to B)$ is $J_1 \times J_2$
+where $J_i$ is the annihilator of $\Ker(B_i \otimes_A B_i \to B_i)$.
+\item The Noether different $\mathfrak{D}$ of $B$ over $A$ is
+$\mathfrak{D}_1 \times \mathfrak{D}_2$, where $\mathfrak{D}_i$ is
+the Noether different of $B_i$ over $A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noether-different-base-change}
+Let $A \to B$ be a finite type ring map. Let $A \to A'$ be a flat ring map.
+Set $B' = B \otimes_A A'$.
+\begin{enumerate}
+\item The annihilator $J'$ of $\Ker(B' \otimes_{A'} B' \to B')$ is
+$J \otimes_A A'$ where $J$ is the annihilator of $\Ker(B \otimes_A B \to B)$.
+\item The Noether different $\mathfrak{D}'$ of $B'$ over $A'$ is
+$\mathfrak{D}B'$, where $\mathfrak{D}$ is
+the Noether different of $B$ over $A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose generators $b_1, \ldots, b_n$ of $B$ as an $A$-algebra.
+Then
+$$
+J = \Ker(B \otimes_A B \xrightarrow{b_i \otimes 1 - 1 \otimes b_i}
+(B \otimes_A B)^{\oplus n})
+$$
+Hence we see that the formation of $J$ commutes with flat base change.
+The result on the Noether different follows immediately from this.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noether-different-localization}
+Let $A \to B' \to B$ be ring maps with $A \to B'$
+of finite type and $B' \to B$ inducing an open immersion of spectra.
+\begin{enumerate}
+\item The annihilator $J$ of $\Ker(B \otimes_A B \to B)$ is
+$J' \otimes_{B'} B$ where $J'$ is the annihilator of
+$\Ker(B' \otimes_A B' \to B')$.
+\item The Noether different $\mathfrak{D}$ of $B$ over $A$ is
+$\mathfrak{D}'B$, where $\mathfrak{D}'$ is
+the Noether different of $B'$ over $A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Write $I = \Ker(B \otimes_A B \to B)$ and $I' = \Ker(B' \otimes_A B' \to B')$.
+As $\Spec(B) \to \Spec(B')$ is an open immersion, it follows that
+$B = (B \otimes_A B) \otimes_{B' \otimes_A B'} B'$. Thus we see that
+$I = I'(B \otimes_A B)$. Since $I'$ is finitely generated and
+$B' \otimes_A B' \to B \otimes_A B$ is flat, we conclude that
+$J = J'(B \otimes_A B)$, see
+Algebra, Lemma \ref{algebra-lemma-annihilator-flat-base-change}.
+Since the $B' \otimes_A B'$-module structure of $J'$
+factors through $B' \otimes_A B' \to B'$ we conclude that (1) is true.
+Part (2) is a consequence of (1).
+\end{proof}
+
+\begin{remark}
+\label{remark-construction-pairing}
+Let $A \to B$ be a quasi-finite homomorphism of Noetherian rings.
+Let $J$ be the annihilator of $\Ker(B \otimes_A B \to B)$.
+There is a canonical $B$-bilinear pairing
+\begin{equation}
+\label{equation-pairing-noether}
+\omega_{B/A} \times J \longrightarrow B
+\end{equation}
+defined as follows. Choose a factorization $A \to B' \to B$
+with $A \to B'$ finite and $B' \to B$ inducing an open immersion
+of spectra. Let $J'$ be the annihilator of $\Ker(B' \otimes_A B' \to B')$.
+We first define
+$$
+\Hom_A(B', A) \times J' \longrightarrow B',\quad
+(\lambda, \sum b_i \otimes c_i) \longmapsto \sum \lambda(b_i)c_i
+$$
+This is $B'$-bilinear exactly because for $\xi \in J'$ and $b \in B'$
+we have $(b \otimes 1)\xi = (1 \otimes b)\xi$. By
+Lemma \ref{lemma-noether-different-localization}
+and the fact that $\omega_{B/A} = \Hom_A(B', A) \otimes_{B'} B$
+we can extend this to a $B$-bilinear pairing as displayed above.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-noether-pairing-compatibilities}
+Let $A \to B$ be a quasi-finite homomorphism of Noetherian rings.
+\begin{enumerate}
+\item If $A \to A'$ is a flat map of Noetherian rings, then
+$$
+\xymatrix{
+\omega_{B/A} \times J \ar[r] \ar[d] & B \ar[d] \\
+\omega_{B'/A'} \times J' \ar[r] & B'
+}
+$$
+is commutative where notation as in
+Lemma \ref{lemma-noether-different-base-change}
+and horizontal arrows are given by
+(\ref{equation-pairing-noether}).
+\item If $B = B_1 \times B_2$, then
+$$
+\xymatrix{
+\omega_{B/A} \times J \ar[r] \ar[d] & B \ar[d] \\
+\omega_{B_i/A} \times J_i \ar[r] & B_i
+}
+$$
+is commutative for $i = 1, 2$ where notation as in
+Lemma \ref{lemma-noether-different-product}
+and horizontal arrows are given by
+(\ref{equation-pairing-noether}).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Because of the construction of the pairing in
+Remark \ref{remark-construction-pairing}
+both (1) and (2) reduce to the case where $A \to B$ is finite.
+Then (1) follows from the fact that the contraction map
+$\Hom_A(M, A) \otimes_A M \otimes_A M \to M$,
+$\lambda \otimes m \otimes m' \mapsto \lambda(m)m'$
commutes with base change. To see (2) use that
+$J = J_1 \times J_2$ is contained in the summands
+$B_1 \otimes_A B_1$ and $B_2 \otimes_A B_2$
+of $B \otimes_A B$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noether-pairing-flat-quasi-finite}
+Let $A \to B$ be a flat quasi-finite homomorphism of Noetherian rings.
+The pairing of Remark \ref{remark-construction-pairing} induces an isomorphism
+$J \to \Hom_B(\omega_{B/A}, B)$.
+\end{lemma}
+
+\begin{proof}
+We first prove this when $A \to B$ is finite and flat. In this case we can
+localize on $A$ and assume $B$ is finite free as an $A$-module. Let
+$b_1, \ldots, b_n$ be a basis of $B$ as an $A$-module and denote
+$b_1^\vee, \ldots, b_n^\vee$ the dual basis of $\omega_{B/A}$. Note that
+$\sum b_i \otimes c_i \in J$ maps to the element of $\Hom_B(\omega_{B/A}, B)$
+which sends $b_i^\vee$ to $c_i$. Suppose $\varphi : \omega_{B/A} \to B$
+is $B$-linear. Then we claim that $\xi = \sum b_i \otimes \varphi(b_i^\vee)$
+is an element of $J$. Namely, the $B$-linearity of $\varphi$
+exactly implies that $(b \otimes 1)\xi = (1 \otimes b)\xi$ for all $b \in B$.
+Thus our map has an inverse and it is an isomorphism.
+
+\medskip\noindent
+Let $\mathfrak q \subset B$ be a prime lying over $\mathfrak p \subset A$.
+We will show that the localization
+$$
+J_\mathfrak q
+\longrightarrow
\Hom_B(\omega_{B/A}, B)_\mathfrak q
+$$
+is an isomorphism.
+This suffices by Algebra, Lemma \ref{algebra-lemma-characterize-zero-local}.
+By
+Algebra, Lemma \ref{algebra-lemma-etale-makes-quasi-finite-finite-one-prime}
+we can find an \'etale ring map $A \to A'$ and a prime
+ideal $\mathfrak p' \subset A'$ lying over $\mathfrak p$
+such that $\kappa(\mathfrak p') = \kappa(\mathfrak p)$ and
+such that
+$$
+B' = B \otimes_A A' = C \times D
+$$
+with $A' \to C$ finite and such that the unique prime $\mathfrak q'$
+of $B \otimes_A A'$ lying over $\mathfrak q$ and $\mathfrak p'$
+corresponds to a prime of $C$. Let $J'$ be the annihilator of
+$\Ker(B' \otimes_{A'} B' \to B')$. By
+Lemmas \ref{lemma-dualizing-flat-base-change},
+\ref{lemma-noether-different-base-change}, and
+\ref{lemma-noether-pairing-compatibilities}
+the map $J' \to \Hom_{B'}(\omega_{B'/A'}, B')$
+is gotten by applying the functor $- \otimes_B B'$
+to the map $J \to \Hom_B(\omega_{B/A}, B)$.
+Since $B_\mathfrak q \to B'_{\mathfrak q'}$ is faithfully flat
+it suffices to prove the result for $(A' \to B', \mathfrak q')$.
+By Lemmas \ref{lemma-dualizing-product},
+\ref{lemma-noether-different-product}, and
+\ref{lemma-noether-pairing-compatibilities}
+this reduces us to the case proved in the first
+paragraph of the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noether-different-flat-quasi-finite}
+Let $A \to B$ be a flat quasi-finite homomorphism of Noetherian rings.
+The diagram
+$$
+\xymatrix{
+J \ar[rr] \ar[rd]_\mu & &
+\Hom_B(\omega_{B/A}, B) \ar[ld]^{\varphi \mapsto \varphi(\tau_{B/A})} \\
+& B
+}
+$$
+commutes where the horizontal arrow is the isomorphism of
+Lemma \ref{lemma-noether-pairing-flat-quasi-finite}.
+Hence the Noether different of $B$ over $A$
+is the image of the map $\Hom_B(\omega_{B/A}, B) \to B$.
+\end{lemma}
+
+\begin{proof}
+Exactly as in the proof of Lemma \ref{lemma-noether-pairing-flat-quasi-finite}
+this reduces to the case of a finite free map $A \to B$.
+In this case $\tau_{B/A} = \text{Trace}_{B/A}$.
+Choose a basis $b_1, \ldots, b_n$ of $B$ as an $A$-module.
+Let $\xi = \sum b_i \otimes c_i \in J$. Then $\mu(\xi) = \sum b_i c_i$.
+On the other hand, the image of $\xi$ in $\Hom_B(\omega_{B/A}, B)$
+sends $\text{Trace}_{B/A}$ to $\sum \text{Trace}_{B/A}(b_i)c_i$.
+Thus we have to show
+$$
+\sum b_ic_i = \sum \text{Trace}_{B/A}(b_i)c_i
+$$
+when $\xi = \sum b_i \otimes c_i \in J$. Write $b_i b_j = \sum_k a_{ij}^k b_k$
+for some $a_{ij}^k \in A$. Then the right hand side is
+$\sum_{i, j} a_{ij}^j c_i$. On the other hand, $\xi \in J$ implies
+$$
+(b_j \otimes 1)(\sum\nolimits_i b_i \otimes c_i) =
+(1 \otimes b_j)(\sum\nolimits_i b_i \otimes c_i)
+$$
+which implies that $b_j c_i = \sum_k a_{jk}^i c_k$. Thus the left hand side
+is $\sum_{i, j} a_{ij}^i c_j$. Since $a_{ij}^k = a_{ji}^k$ the equality holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noether-different}
+Let $A \to B$ be a finite type ring map. Let $\mathfrak{D} \subset B$
+be the Noether different. Then $V(\mathfrak{D})$ is the set of primes
+$\mathfrak q \subset B$ such that $A \to B$ is not unramified at $\mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+Assume $A \to B$ is unramified at $\mathfrak q$. After replacing
+$B$ by $B_g$ for some $g \in B$, $g \not \in \mathfrak q$ we may
+assume $A \to B$ is unramified (Algebra, Definition
+\ref{algebra-definition-unramified} and
+Lemma \ref{lemma-noether-different-localization}).
+In this case $\Omega_{B/A} = 0$. Hence if $I = \Ker(B \otimes_A B \to B)$,
+then $I/I^2 = 0$ by
+Algebra, Lemma \ref{algebra-lemma-differentials-diagonal}.
+Since $A \to B$ is of finite type, we see that $I$ is finitely
+generated. Hence by Nakayama's lemma
+(Algebra, Lemma \ref{algebra-lemma-NAK})
there exists an element of the form $1 + i$ with $i \in I$
annihilating $I$. Since $\mu(1 + i) = 1$ this implies $\mathfrak{D} = B$.
+
+\medskip\noindent
+Conversely, assume that $\mathfrak{D} \not \subset \mathfrak q$.
+Then after replacing $B$ by a principal localization as above
we may assume $\mathfrak{D} = B$. This means there exists an
element of the form $1 + i$ with $i \in I$ in the annihilator of $I$.
Then $I = -iI \subset I^2$, so $I/I^2 = \Omega_{B/A}$ is zero
and we conclude that $A \to B$ is unramified at $\mathfrak q$.
+\end{proof}
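
\begin{remark}
\label{remark-noether-different-monogenic}
Here is the basic example. Let $B = A[x]/(f)$ with $f$ monic of degree $n$.
Then $B \otimes_A B = B[y]/(f(y))$ with $x = x \otimes 1$ and
$y = 1 \otimes x$, and $I = \Ker(\mu)$ is generated by $y - x$.
Write $f(y) = (y - x)g(y)$ with $g \in B[y]$ of degree $n - 1$.
We claim $J = (g)$. Indeed, $g \in J$ because $(y - x)g = 0$ in
$B[y]/(f(y))$. Conversely, if $h(y - x) = 0$ in $B[y]/(f(y))$, then
$h(y - x) = qf(y) = q(y - x)g$ in $B[y]$ for some $q \in B[y]$; as
$y - x$ is monic in $y$, hence a nonzerodivisor in $B[y]$, we get $h = qg$.
Since $\mu(g) = g(x) = f'(x)$ we conclude that the Noether different
of $B$ over $A$ is the ideal generated by $f'(x)$.
\end{remark}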
+
+
+
+
+
+
+
+\section{The K\"ahler different}
+\label{section-kahler-different}
+
+\noindent
Let $A \to B$ be a finite type ring map. The {\it K\"ahler different} is the
zeroth Fitting ideal of $\Omega_{B/A}$ as a $B$-module. We globalize the
+definition as follows.
+
+\begin{definition}
+\label{definition-kahler-different}
+Let $f : Y \to X$ be a morphism of schemes which is locally of finite type.
+The {\it K\"ahler different} is the $0$th fitting ideal of $\Omega_{Y/X}$.
+\end{definition}
+
+\noindent
+The K\"ahler different is a quasi-coherent sheaf of ideals on $Y$.
+
+\begin{lemma}
+\label{lemma-base-change-kahler-different}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+Y' \ar[d]_{f'} \ar[r] & Y \ar[d]^f \\
+X' \ar[r]^g & X
+}
+$$
+with $f$ locally of finite type. Let $R \subset Y$, resp.\ $R' \subset Y'$
+be the closed subscheme cut out by the K\"ahler different of $f$, resp.\ $f'$.
+Then $Y' \to Y$ induces an isomorphism $R' \to R \times_Y Y'$.
+\end{lemma}
+
+\begin{proof}
+This is true because $\Omega_{Y'/X'}$ is the pullback of $\Omega_{Y/X}$
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-differentials})
+and then we can apply
+More on Algebra, Lemma \ref{more-algebra-lemma-fitting-ideal-basics}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kahler-different}
+Let $f : Y \to X$ be a morphism of schemes which is locally of finite type.
+Let $R \subset Y$ be the closed subscheme defined by
+the K\"ahler different. Then $R \subset Y$ is exactly
+the set of points where $f$ is not unramified.
+\end{lemma}
+
+\begin{proof}
+This is a copy of
+Divisors, Lemma \ref{divisors-lemma-zero-fitting-ideal-omega-unramified}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kahler-different-complete-intersection}
+Let $A$ be a ring. Let $n \geq 1$ and
+$f_1, \ldots, f_n \in A[x_1, \ldots, x_n]$.
+Set $B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$.
+The K\"ahler different of $B$ over $A$ is the ideal
+of $B$ generated by $\det(\partial f_i/\partial x_j)$.
+\end{lemma}
+
+\begin{proof}
+This is true because $\Omega_{B/A}$ has a presentation
+$$
+\bigoplus\nolimits_{i = 1, \ldots, n} B f_i
+\xrightarrow{\text{d}}
+\bigoplus\nolimits_{j = 1, \ldots, n} B \text{d}x_j
+\rightarrow \Omega_{B/A} \rightarrow 0
+$$
+by Algebra, Lemma \ref{algebra-lemma-differential-seq}.
+\end{proof}
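
\begin{remark}
\label{remark-kahler-different-monogenic}
The simplest case of
Lemma \ref{lemma-kahler-different-complete-intersection} is $n = 1$:
for $B = A[x]/(f)$ the K\"ahler different of $B$ over $A$ is the ideal
generated by $f'(x)$, as $\Omega_{B/A} = B\text{d}x/f'(x)B\text{d}x$.
For instance, the K\"ahler different of
$\mathbf{Z}[i] = \mathbf{Z}[x]/(x^2 + 1)$ over $\mathbf{Z}$ is the ideal
$(2i)$, whose vanishing locus is the unique prime lying over $2$,
in accordance with Lemma \ref{lemma-kahler-different}.
\end{remark}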
+
+
+
+\section{The Dedekind different}
+\label{section-dedekind-different}
+
+\noindent
+Let $A \to B$ be a ring map. We say {\it the Dedekind different is defined}
+if $A$ is Noetherian, $A \to B$ is finite,
+any nonzerodivisor on $A$ is a nonzerodivisor on $B$, and $K \to L$ is
+\'etale where $K = Q(A)$ and $L = B \otimes_A K$. Then $K \subset L$ is
+finite \'etale and
+$$
+\mathcal{L}_{B/A} = \{x \in L \mid \text{Trace}_{L/K}(bx) \in A
+\text{ for all }b \in B\}
+$$
+is the Dedekind complementary module. In this situation the
+{\it Dedekind different} is
+$$
+\mathfrak{D}_{B/A} = \{x \in L \mid x\mathcal{L}_{B/A} \subset B\}
+$$
+viewed as a $B$-submodule of $L$.
By Lemma \ref{lemma-dedekind-different-ideal} the Dedekind different is an
ideal of $B$ if either $A$ is normal or $B$ is flat over $A$.
+
+\begin{lemma}
+\label{lemma-dedekind-different-ideal}
+Assume the Dedekind different of $A \to B$ is defined. Consider the statements
+\begin{enumerate}
+\item $A \to B$ is flat,
+\item $A$ is a normal ring,
+\item $\text{Trace}_{L/K}(B) \subset A$,
+\item $1 \in \mathcal{L}_{B/A}$, and
+\item the Dedekind different $\mathfrak{D}_{B/A}$ is an ideal of $B$.
+\end{enumerate}
+Then we have (1) $\Rightarrow$ (3), (2) $\Rightarrow$ (3),
+(3) $\Leftrightarrow$ (4), and (4) $\Rightarrow$ (5).
+\end{lemma}
+
+\begin{proof}
+The equivalence of (3) and (4) and the
+implication (4) $\Rightarrow$ (5) are immediate.
+
+\medskip\noindent
If $A \to B$ is flat, then we see that $\text{Trace}_{B/A} : B \to A$ is
defined and that $\text{Trace}_{L/K}$ is the base change of
$\text{Trace}_{B/A}$ to $K$. Hence (3) holds.
+
+\medskip\noindent
+If $A$ is normal, then $A$ is a finite product of normal domains,
+hence we reduce to the case of a normal domain. Then $K$ is
+the fraction field of $A$ and $L = \prod L_i$ is a finite product of
+finite separable field extensions of $K$. Then
+$\text{Trace}_{L/K}(b) = \sum \text{Trace}_{L_i/K}(b_i)$
+where $b_i \in L_i$ is the image of $b$.
+Since $b$ is integral over $A$ as $B$ is finite over $A$,
+these traces are in $A$. This is true because the
+minimal polynomial of $b_i$ over $K$ has coefficients in $A$
+(Algebra, Lemma \ref{algebra-lemma-minimal-polynomial-normal-domain})
+and because $\text{Trace}_{L_i/K}(b_i)$ is an
+integer multiple of one of these coefficients
+(Fields, Lemma \ref{fields-lemma-trace-and-norm-from-minimal-polynomial}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dedekind-complementary-module}
+If the Dedekind different of $A \to B$ is defined, then
+there is a canonical isomorphism
+$\mathcal{L}_{B/A} \to \omega_{B/A}$.
+\end{lemma}
+
+\begin{proof}
+Recall that $\omega_{B/A} = \Hom_A(B, A)$ as $A \to B$ is finite.
+We send $x \in \mathcal{L}_{B/A}$ to the map
+$b \mapsto \text{Trace}_{L/K}(bx)$.
+Conversely, given an $A$-linear map $\varphi : B \to A$
+we obtain a $K$-linear map $\varphi_K : L \to K$. Since $K \to L$ is finite
+\'etale, we see that the trace pairing is nondegenerate
+(Lemma \ref{lemma-discriminant}) and hence there exists an $x \in L$ such that
+$\varphi_K(y) = \text{Trace}_{L/K}(xy)$ for all $y \in L$.
+Then $x \in \mathcal{L}_{B/A}$ maps to $\varphi$ in $\omega_{B/A}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-dedekind-complementary-module-trace}
+If the Dedekind different of $A \to B$ is defined and $A \to B$ is flat, then
+\begin{enumerate}
+\item the canonical isomorphism $\mathcal{L}_{B/A} \to \omega_{B/A}$
+sends $1 \in \mathcal{L}_{B/A}$ to the trace element
+$\tau_{B/A} \in \omega_{B/A}$, and
+\item the Dedekind different is
+$\mathfrak{D}_{B/A} = \{b \in B \mid b\omega_{B/A} \subset B\tau_{B/A}\}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first assertion
+follows from the proof of Lemma \ref{lemma-dedekind-different-ideal}
+and Lemma \ref{lemma-finite-flat-trace}.
+The second assertion is immediate from the first and the
+definitions.
+\end{proof}
+
+
+
+\section{The different}
+\label{section-different}
+
+\noindent
+The motivation for the following definition is that it recovers the
+Dedekind different in the finite flat case as we will see below.
+
+\begin{definition}
+\label{definition-different}
+Let $f : Y \to X$ be a flat quasi-finite morphism of Noetherian schemes.
+Let $\omega_{Y/X}$ be the relative dualizing module and let
+$\tau_{Y/X} \in \Gamma(Y, \omega_{Y/X})$ be the trace element
+(Remarks \ref{remark-relative-dualizing-for-quasi-finite} and
+\ref{remark-relative-dualizing-for-flat-quasi-finite}).
+The annihilator of
+$$
+\Coker(\mathcal{O}_Y \xrightarrow{\tau_{Y/X}} \omega_{Y/X})
+$$
+is the {\it different} of $Y/X$. It is a coherent ideal
+$\mathfrak{D}_f \subset \mathcal{O}_Y$.
+\end{definition}
+
+\noindent
+We will generalize this in Remark \ref{remark-different-generalization} below.
+Observe that $\mathfrak{D}_f$ is locally generated by one element if
+$\omega_{Y/X}$ is an invertible $\mathcal{O}_Y$-module.
+We first state the agreement with the Dedekind different.
+
+\begin{lemma}
+\label{lemma-flat-agree-dedekind}
+Let $f : Y \to X$ be a flat quasi-finite morphism of Noetherian schemes.
+Let $V = \Spec(B) \subset Y$, $U = \Spec(A) \subset X$
+be affine open subschemes with $f(V) \subset U$.
+If the Dedekind different of $A \to B$ is defined, then
+$$
+\mathfrak{D}_f|_V = \widetilde{\mathfrak{D}_{B/A}}
+$$
+as coherent ideal sheaves on $V$.
+\end{lemma}
+
+\begin{proof}
+This is clear from Lemmas \ref{lemma-dedekind-different-ideal} and
+\ref{lemma-flat-dedekind-complementary-module-trace}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-gorenstein-agree-noether}
+Let $f : Y \to X$ be a flat quasi-finite morphism of Noetherian schemes.
+Let $V = \Spec(B) \subset Y$, $U = \Spec(A) \subset X$
+be affine open subschemes with $f(V) \subset U$.
+If $\omega_{Y/X}|_V$ is invertible, i.e., if $\omega_{B/A}$
+is an invertible $B$-module, then
+$$
+\mathfrak{D}_f|_V = \widetilde{\mathfrak{D}}
+$$
+as coherent ideal sheaves on $V$ where
+$\mathfrak{D} \subset B$ is the Noether different of $B$ over $A$.
+\end{lemma}
+
+\begin{proof}
+Consider the map
+$$
+\SheafHom_{\mathcal{O}_Y}(\omega_{Y/X}, \mathcal{O}_Y)
+\longrightarrow
+\mathcal{O}_Y,\quad
+\varphi \longmapsto \varphi(\tau_{Y/X})
+$$
+The image of this map corresponds to the Noether different
+on affine opens, see Lemma \ref{lemma-noether-different-flat-quasi-finite}.
+Hence the result follows from the elementary fact that given
+an invertible module $\omega$ and a global section $\tau$
+the image of
+$\tau : \SheafHom(\omega, \mathcal{O}) = \omega^{\otimes -1} \to \mathcal{O}$
+is the same as the annihilator of $\Coker(\tau : \mathcal{O} \to \omega)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-different}
+Consider a cartesian diagram of Noetherian schemes
+$$
+\xymatrix{
+Y' \ar[d]_{f'} \ar[r] & Y \ar[d]^f \\
+X' \ar[r]^g & X
+}
+$$
+with $f$ flat and quasi-finite. Let $R \subset Y$, resp.\ $R' \subset Y'$
+be the closed subscheme cut out by the different
+$\mathfrak{D}_f$, resp.\ $\mathfrak{D}_{f'}$.
+Then $Y' \to Y$ induces a bijective closed immersion $R' \to R \times_Y Y'$.
+If $g$ is flat or if $\omega_{Y/X}$ is invertible, then
+$R' = R \times_Y Y'$.
+\end{lemma}
+
+\begin{proof}
+There is an immediate reduction to the case where $X$, $X'$, $Y$, $Y'$
+are affine. In other words, we have a cocartesian diagram of Noetherian
+rings
+$$
+\xymatrix{
+B' & B \ar[l] \\
+A' \ar[u] & A \ar[l] \ar[u]
+}
+$$
+with $A \to B$ flat and quasi-finite. The base change map
+$\omega_{B/A} \otimes_B B' \to \omega_{B'/A'}$ is an isomorphism
+(Lemma \ref{lemma-dualizing-base-change-of-flat}) and maps
+the trace element $\tau_{B/A}$ to the trace element $\tau_{B'/A'}$
+(Lemma \ref{lemma-trace-base-change}).
+Hence the finite $B$-module $Q = \Coker(\tau_{B/A} : B \to \omega_{B/A})$
+satisfies $Q \otimes_B B' = \Coker(\tau_{B'/A'} : B' \to \omega_{B'/A'})$.
+Thus $\mathfrak{D}_{B/A}B' \subset \mathfrak{D}_{B'/A'}$ which means
+we obtain the closed immersion $R' \to R \times_Y Y'$.
+Since $R = \text{Supp}(Q)$ and $R' = \text{Supp}(Q \otimes_B B')$
+(Algebra, Lemma \ref{algebra-lemma-support-closed})
+we see that $R' \to R \times_Y Y'$ is bijective by
+Algebra, Lemma \ref{algebra-lemma-support-base-change}.
+The equality $\mathfrak{D}_{B/A}B' = \mathfrak{D}_{B'/A'}$ holds
+if $B \to B'$ is flat, e.g., if $A \to A'$ is flat, see
+Algebra, Lemma \ref{algebra-lemma-annihilator-flat-base-change}.
+Finally, if $\omega_{B/A}$ is invertible, then we can localize
+and assume $\omega_{B/A} = B \lambda$. Writing $\tau_{B/A} = b\lambda$
+we see that $Q = B/bB$ and $\mathfrak{D}_{B/A} = bB$.
+The same reasoning over $B'$
+gives $\mathfrak{D}_{B'/A'} = bB'$ and the lemma is proved.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-norm-different-in-discriminant}
+Let $f : Y \to X$ be a finite flat morphism of Noetherian schemes.
+Then $\text{Norm}_f : f_*\mathcal{O}_Y \to \mathcal{O}_X$ maps
+$f_*\mathfrak{D}_f$ into the ideal sheaf of the discriminant $D_f$.
+\end{lemma}
+
+\begin{proof}
+The norm map is constructed in
+Divisors, Lemma \ref{divisors-lemma-finite-locally-free-has-norm}
+and the discriminant of $f$ in Section \ref{section-discriminant}.
+The question is affine local, hence we may assume $X = \Spec(A)$,
+$Y = \Spec(B)$ and $f$ given by a finite locally free ring map $A \to B$.
+Localizing further we may assume $B$ is finite free as an $A$-module.
+Choose a basis $b_1, \ldots, b_n \in B$ for $B$ as an $A$-module.
+Denote $b_1^\vee, \ldots, b_n^\vee$ the dual basis of
+$\omega_{B/A} = \Hom_A(B, A)$ as an $A$-module.
+Since the norm of $b$ is the determinant of $b : B \to B$ as an
+$A$-linear map, we see that
+$\text{Norm}_{B/A}(b) = \det(b_i^\vee(bb_j))$.
+The discriminant is the principal closed subscheme of $\Spec(A)$
+defined by $\det(\text{Trace}_{B/A}(b_ib_j))$.
+If $b \in \mathfrak{D}_{B/A}$ then
+there exist $c_i \in B$ such that
+$b \cdot b_i^\vee = c_i \cdot \text{Trace}_{B/A}$ where
+we use a dot to indicate the $B$-module structure on $\omega_{B/A}$.
+Write $c_i = \sum a_{il} b_l$.
+We have
+\begin{align*}
+\text{Norm}_{B/A}(b)
+& =
+\det(b_i^\vee(bb_j)) \\
+& =
+\det( (b \cdot b_i^\vee)(b_j)) \\
+& =
+\det((c_i \cdot \text{Trace}_{B/A})(b_j)) \\
+& =
+\det(\text{Trace}_{B/A}(c_ib_j)) \\
+& =
+\det(a_{il}) \det(\text{Trace}_{B/A}(b_l b_j))
+\end{align*}
+which proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-different-ramification}
+Let $f : Y \to X$ be a flat quasi-finite morphism of Noetherian schemes.
+The closed subscheme $R \subset Y$ defined by the different $\mathfrak{D}_f$
+is exactly the set of points where $f$ is not \'etale
+(equivalently not unramified).
+\end{lemma}
+
+\begin{proof}
+Since $f$ is of finite presentation and flat, we see that it is \'etale
+at a point if and only if it is unramified at that point. Moreover, the
+formation of the locus of ramified points commutes with base change.
+See Morphisms, Section \ref{morphisms-section-etale} and especially
+Morphisms, Lemma \ref{morphisms-lemma-set-points-where-fibres-etale}.
+By Lemma \ref{lemma-base-change-different} the formation of $R$ commutes
+set theoretically with base change. Hence it suffices to prove the
+lemma when $X$ is the spectrum of a field. On the other hand, the
+construction of $(\omega_{Y/X}, \tau_{Y/X})$ is local on $Y$.
+Since $Y$ is a finite discrete space (being quasi-finite
+over a field), we may assume $Y$ has a unique point.
+
+\medskip\noindent
+Say $X = \Spec(k)$ and $Y = \Spec(B)$ where $k$ is a field and $B$ is
+a finite local $k$-algebra. If $Y \to X$ is \'etale, then
+$B$ is a finite separable extension of $k$, and the trace
+element $\text{Trace}_{B/k}$ is a basis element of $\omega_{B/k}$
+by Fields, Lemma \ref{fields-lemma-separable-trace-pairing}.
+Thus $\mathfrak{D}_{B/k} = B$ in this case.
+Conversely, if $\mathfrak{D}_{B/k} = B$, then we see from
+Lemma \ref{lemma-norm-different-in-discriminant}
+and the fact that the norm of $1$ equals $1$ that the
+discriminant is empty. Hence
+$Y \to X$ is \'etale by Lemma \ref{lemma-discriminant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-norm-different-is-discriminant}
+Let $f : Y \to X$ be a flat quasi-finite morphism of Noetherian schemes.
+Let $R \subset Y$ be the closed subscheme defined by $\mathfrak{D}_f$.
+\begin{enumerate}
+\item If $\omega_{Y/X}$ is invertible,
+then $R$ is a locally principal closed subscheme of $Y$.
+\item If $\omega_{Y/X}$ is invertible and $f$ is finite, then
+the norm of $R$ is the discriminant $D_f$ of $f$.
+\item If $\omega_{Y/X}$ is invertible and $f$
+is \'etale at the associated points of $Y$, then $R$
+is an effective Cartier divisor and there is an
+isomorphism $\mathcal{O}_Y(R) = \omega_{Y/X}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We may work locally on $Y$, hence we may assume
+$\omega_{Y/X}$ is free of rank $1$. Say $\omega_{Y/X} = \mathcal{O}_Y\lambda$.
+Then we can write $\tau_{Y/X} = h \lambda$ and then we see that
+$R$ is defined by $h$, i.e., $R$ is locally principal.
+
+\medskip\noindent
+Proof of (2). We may assume $Y \to X$ is given by a finite free ring
+map $A \to B$ and that $\omega_{B/A}$ is free of rank $1$ as $B$-module.
+Choose a $B$-basis element $\lambda$ for $\omega_{B/A}$ and write
+$\text{Trace}_{B/A} = b \cdot \lambda$ for some $b \in B$.
+Then $\mathfrak{D}_{B/A} = (b)$ and $D_f$ is cut out by
+$\det(\text{Trace}_{B/A}(b_ib_j))$ where $b_1, \ldots, b_n$ is a
+basis of $B$ as an $A$-module. Let $b_1^\vee, \ldots, b_n^\vee$
+be the dual basis.
+Writing $b_i^\vee = c_i \cdot \lambda$ we see that
+$c_1, \ldots, c_n$ is a basis of $B$ as well.
+Hence with $c_i = \sum a_{il}b_l$ we see that $\det(a_{il})$
+is a unit in $A$. Clearly,
+$b \cdot b_i^\vee = c_i \cdot \text{Trace}_{B/A}$
+hence we conclude from the computation in the proof of
+Lemma \ref{lemma-norm-different-in-discriminant}
+that $\text{Norm}_{B/A}(b)$ is a unit times
+$\det(\text{Trace}_{B/A}(b_ib_j))$.
+
+\medskip\noindent
+Proof of (3). In the notation above we see from
+Lemma \ref{lemma-different-ramification} and the assumption
+that $h$ does not vanish at
+the associated points of $Y$, which implies that $h$ is a nonzerodivisor.
+The canonical isomorphism sends $1$ to $\tau_{Y/X}$, see
+Divisors, Lemma \ref{divisors-lemma-characterize-OD}.
+\end{proof}
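+
+\medskip\noindent
+As a supplementary illustration of part (2), take $B = A[x]/(x^2 - a)$
+for some $a \in A$, with $A$-basis $b_1 = 1$, $b_2 = x$. The trace
+pairing has matrix $\text{diag}(2, 2a)$, so $D_f$ is cut out by $4a$.
+The module $\omega_{B/A}$ is free with basis $\lambda = x^\vee$
+(indeed $x \cdot x^\vee = 1^\vee$), and
+$$
+\text{Trace}_{B/A} = 2 \cdot 1^\vee = 2x \cdot \lambda
+$$
+so that $\mathfrak{D}_{B/A} = (2x)$. Finally
+$\text{Norm}_{B/A}(2x) = -4a$ is a unit times the element
+cutting out the discriminant, as predicted.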
+
+
+
+
+
+
+
+\section{Quasi-finite syntomic morphisms}
+\label{section-quasi-finite-syntomic}
+
+\noindent
+This section discusses the fact that a quasi-finite syntomic morphism
+has an invertible relative dualizing module.
+
+\begin{lemma}
+\label{lemma-syntomic-quasi-finite}
+Let $f : Y \to X$ be a morphism of schemes. The following are equivalent
+\begin{enumerate}
+\item $f$ is locally quasi-finite and syntomic,
+\item $f$ is locally quasi-finite, flat, and a local complete intersection
+morphism,
+\item $f$ is locally quasi-finite, flat, locally of finite presentation,
+and the fibres of $f$ are local complete intersections,
+\item $f$ is locally quasi-finite and for every $y \in Y$ there are
+affine opens $y \in V = \Spec(B) \subset Y$, $U = \Spec(A) \subset X$
+with $f(V) \subset U$, an integer $n$, and
+$h, f_1, \ldots, f_n \in A[x_1, \ldots, x_n]$ such that
+$B = A[x_1, \ldots, x_n, 1/h]/(f_1, \ldots, f_n)$,
+\item for every $y \in Y$ there are affine opens
+$y \in V = \Spec(B) \subset Y$, $U = \Spec(A) \subset X$
+with $f(V) \subset U$ such that $A \to B$ is a relative global complete
+intersection of the form $B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$,
+\item $f$ is locally quasi-finite, flat, locally of finite presentation,
+and $\NL_{Y/X}$ has tor-amplitude in $[-1, 0]$, and
+\item $f$ is flat, locally of finite presentation, and
+$\NL_{Y/X}$ is perfect of rank $0$ with tor-amplitude in $[-1, 0]$.
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) is
+More on Morphisms, Lemma \ref{more-morphisms-lemma-flat-lci}.
+The equivalence of (1) and (3) is
+Morphisms, Lemma \ref{morphisms-lemma-syntomic-flat-fibres}.
+
+\medskip\noindent
+If $A \to B$ is as in (4), then
+$B = A[x, x_1, \ldots, x_n]/(xh - 1, f_1, \ldots, f_n)$
+is a relative global complete intersection, see Algebra, Definition
+\ref{algebra-definition-relative-global-complete-intersection}.
+Thus (4) implies (5).
+It is clear that (5) implies (4).
+
+\medskip\noindent
+Condition (5) implies (1): by
+Algebra, Lemma \ref{algebra-lemma-relative-global-complete-intersection}
+a relative global complete intersection is syntomic and
+the definition of a relative global complete intersection
+guarantees that a relative global complete intersection in
+$n$ variables with $n$ equations is quasi-finite, see
+Algebra, Definition
+\ref{algebra-definition-relative-global-complete-intersection} and
+Lemma \ref{algebra-lemma-isolated-point-fibre}.
+
+\medskip\noindent
+Either Algebra, Lemma \ref{algebra-lemma-syntomic} or
+Morphisms, Lemma \ref{morphisms-lemma-syntomic-locally-standard-syntomic}
+shows that (1) implies (5).
+
+\medskip\noindent
+More on Morphisms, Lemma \ref{more-morphisms-lemma-flat-fp-NL-lci} shows that
+(6) is equivalent to (1). If the equivalent conditions (1) -- (6) hold,
+then we see that affine locally $Y \to X$ is given by a relative global
+complete intersection $B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$
+with the same number of variables as the number of
+equations. Using this presentation we see that
+$$
+\NL_{B/A} =\left(
+(f_1, \ldots, f_n)/(f_1, \ldots, f_n)^2
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} B \text{d} x_i\right)
+$$
+By Algebra, Lemma
+\ref{algebra-lemma-relative-global-complete-intersection-conormal}
+the module $(f_1, \ldots, f_n)/(f_1, \ldots, f_n)^2$
+is free with generators the congruence classes of the elements
+$f_1, \ldots, f_n$. Thus $\NL_{B/A}$ has rank $0$ and so does $\NL_{Y/X}$.
+In this way we see that (1) -- (6) imply (7).
+
+\medskip\noindent
+Finally, assume (7). By
+More on Morphisms, Lemma \ref{more-morphisms-lemma-flat-fp-NL-lci}
+we see that $f$ is syntomic. Thus on suitable affine opens
+$f$ is given by a relative global complete intersection
+$A \to B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$, see
+Morphisms, Lemma \ref{morphisms-lemma-syntomic-locally-standard-syntomic}.
+Exactly as above we see that $\NL_{B/A}$ is a perfect complex
+of rank $n - m$. Thus $n = m$ and we see that (5) holds.
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-invertible}
+Invertibility of the relative dualizing module.
+\begin{enumerate}
+\item If $A \to B$ is a quasi-finite flat homomorphism of Noetherian rings,
+then $\omega_{B/A}$ is an invertible $B$-module if and only if
+$\omega_{B \otimes_A \kappa(\mathfrak p)/\kappa(\mathfrak p)}$
+is an invertible $B \otimes_A \kappa(\mathfrak p)$-module
+for all primes $\mathfrak p \subset A$.
+\item If $Y \to X$ is a quasi-finite flat morphism of
+Noetherian schemes, then $\omega_{Y/X}$ is invertible
+if and only if $\omega_{Y_x/x}$ is invertible for all $x \in X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). As $A \to B$ is flat, the module
+$\omega_{B/A}$ is $A$-flat, see Lemma \ref{lemma-dualizing-base-flat-flat}.
+Thus $\omega_{B/A}$ is an invertible $B$-module if and only if
+$\omega_{B/A} \otimes_A \kappa(\mathfrak p)$
+is an invertible $B \otimes_A \kappa(\mathfrak p)$-module for
+every prime $\mathfrak p \subset A$, see More on Morphisms, Lemma
+\ref{more-morphisms-lemma-flat-and-free-at-point-fibre}.
+Still using that $A \to B$ is flat, we have that
+formation of $\omega_{B/A}$ commutes with base change, see
+Lemma \ref{lemma-dualizing-base-change-of-flat}.
+Thus we see that invertibility of the relative dualizing module,
+in the presence of flatness, is equivalent to invertibility
+of the relative dualizing module for the maps
+$\kappa(\mathfrak p) \to B \otimes_A \kappa(\mathfrak p)$.
+
+\medskip\noindent
+Part (2) follows from (1) and the fact that affine locally
+the dualizing modules are given by their algebraic counterparts, see
+Remark \ref{remark-relative-dualizing-for-quasi-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dim-zero-global-complete-intersection-over-field}
+Let $k$ be a field. Let $B = k[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$
+be a global complete intersection over $k$ of dimension $0$.
+Then $\omega_{B/k}$ is invertible.
+\end{lemma}
+
+\begin{proof}
+By Noether normalization, see
+Algebra, Lemma \ref{algebra-lemma-Noether-normalization},
+we see that there exists a finite injection $k \to B$, i.e.,
+$\dim_k(B) < \infty$. Hence $\omega_{B/k} = \Hom_k(B, k)$
+as a $B$-module.
+By Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing-finite}
+we see that $R\Hom(B, k)$ is a dualizing complex for $B$
+and by Dualizing Complexes, Lemma \ref{dualizing-lemma-RHom-ext}
+we see that $R\Hom(B, k)$ is equal to $\omega_{B/k}$
+placed in degree $0$. Thus it suffices to show that
+$B$ is Gorenstein
+(Dualizing Complexes, Lemma \ref{dualizing-lemma-gorenstein}).
+This is true by Dualizing Complexes, Lemma
+\ref{dualizing-lemma-gorenstein-lci}.
+\end{proof}
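+
+\medskip\noindent
+For a concrete instance of the lemma (supplementary), take
+$B = k[x]/(x^2)$, a global complete intersection over $k$ of
+dimension $0$. Here $\omega_{B/k} = \Hom_k(B, k)$ has $k$-basis
+$1^\vee, x^\vee$ and one computes $x \cdot x^\vee = 1^\vee$, so that
+$$
+\omega_{B/k} = B \cdot x^\vee
+$$
+is free of rank $1$ as a $B$-module.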
+
+\begin{lemma}
+\label{lemma-dualizing-syntomic-quasi-finite}
+Let $f : Y \to X$ be a morphism of locally Noetherian schemes. If $f$
+satisfies the equivalent conditions of Lemma \ref{lemma-syntomic-quasi-finite}
+then $\omega_{Y/X}$ is an invertible $\mathcal{O}_Y$-module.
+\end{lemma}
+
+\begin{proof}
+We may assume $A \to B$ is a relative global complete
+intersection of the form $B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$
+and we have to show $\omega_{B/A}$ is invertible.
+This follows by combining Lemmas \ref{lemma-characterize-invertible} and
+\ref{lemma-dim-zero-global-complete-intersection-over-field}.
+\end{proof}
+
+\begin{example}
+\label{example-universal-quasi-finite-syntomic}
+Let $n \geq 1$ and $d \geq 1$ be integers. Let $T$ be the set of
+multi-indices $E = (e_1, \ldots, e_n)$ with $e_i \geq 0$ and
+$\sum e_i \leq d$. Consider the ring
+$$
+A = \mathbf{Z}[a_{i, E} ; 1 \leq i \leq n, E \in T]
+$$
+In $A[x_1, \ldots, x_n]$ consider the elements
+$f_i = \sum_{E \in T} a_{i, E} x^E$ where $x^E = x_1^{e_1} \ldots x_n^{e_n}$
+as is customary. Consider the $A$-algebra
+$$
+B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)
+$$
+Denote $X_{n, d} = \Spec(A)$ and let $Y_{n, d} \subset \Spec(B)$
+be the maximal open subscheme such that the restriction of the
+morphism $\Spec(B) \to \Spec(A) = X_{n, d}$ is quasi-finite, see
+Algebra, Lemma \ref{algebra-lemma-quasi-finite-open}.
+\end{example}
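+
+\medskip\noindent
+For instance, for $n = 1$ and $d = 2$ we get
+$A = \mathbf{Z}[a_0, a_1, a_2]$ and
+$$
+B = A[x]/(a_0 + a_1 x + a_2 x^2)
+$$
+where we abbreviate $a_E = a_{1, E}$ for the three multi-indices
+$E = (0), (1), (2)$. In this case $Y_{1, 2}$ contains, for example,
+the preimage of the open where $a_2$ is invertible, since there
+$B$ becomes finite free of rank $2$ over $A[1/a_2]$.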
+
+\begin{lemma}
+\label{lemma-universal-quasi-finite-syntomic-etale}
+With notation as in Example \ref{example-universal-quasi-finite-syntomic}
+the schemes $X_{n, d}$ and $Y_{n, d}$ are regular and irreducible,
+the morphism $Y_{n, d} \to X_{n, d}$ is locally quasi-finite and
+syntomic, and there is a dense open subscheme $V \subset Y_{n, d}$
+such that $Y_{n, d} \to X_{n, d}$ restricts to an \'etale morphism
+$V \to X_{n, d}$.
+\end{lemma}
+
+\begin{proof}
+The scheme $X_{n, d}$ is the spectrum of the polynomial ring $A$.
+Hence $X_{n, d}$ is regular and irreducible. Since we can write
+$$
+f_i = a_{i, (0, \ldots, 0)} +
+\sum\nolimits_{E \in T, E \not = (0, \ldots, 0)} a_{i, E} x^E
+$$
+we see that the ring $B$ is isomorphic to the polynomial ring
+on $x_1, \ldots, x_n$ and the elements $a_{i, E}$ with
+$E \not = (0, \ldots, 0)$. Hence $\Spec(B)$ is an irreducible and
+regular scheme and so is the open $Y_{n, d}$. The morphism
+$Y_{n, d} \to X_{n, d}$ is locally quasi-finite and syntomic by
+Lemma \ref{lemma-syntomic-quasi-finite}. To find $V$ it suffices
+to find a single point where $Y_{n, d} \to X_{n, d}$ is \'etale
+(the locus of points where a morphism is \'etale is open by
+definition). Thus it suffices to find a point of $X_{n, d}$
+where the fibre of $Y_{n, d} \to X_{n, d}$ is nonempty and \'etale, see
+Morphisms, Lemma \ref{morphisms-lemma-etale-at-point}. We choose
+the point corresponding to the ring map $\chi : A \to \mathbf{Q}$
+sending $f_i$ to $x_i^d - 1$. Then
+$$
+B \otimes_{A, \chi} \mathbf{Q} =
+\mathbf{Q}[x_1, \ldots, x_n]/(x_1^d - 1, \ldots, x_n^d - 1)
+$$
+which is a nonzero \'etale algebra over $\mathbf{Q}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-comes-from-universal}
+Let $f : Y \to X$ be a morphism of schemes. If $f$ satisfies the equivalent
+conditions of Lemma \ref{lemma-syntomic-quasi-finite} then for every
+$y \in Y$ there exist $n, d$ and a commutative diagram
+$$
+\xymatrix{
+Y \ar[d] &
+V \ar[d] \ar[l] \ar[r] &
+Y_{n, d} \ar[d] \\
+X & U \ar[l] \ar[r] &
+X_{n, d}
+}
+$$
+where $U \subset X$ and $V \subset Y$ are open, where $Y_{n, d} \to X_{n, d}$
+is as in Example \ref{example-universal-quasi-finite-syntomic}, and
+where the square on the right hand side is cartesian.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-syntomic-quasi-finite}
+we can choose $U$ and $V$ affine so that
+$U = \Spec(R)$ and $V = \Spec(S)$ with
+$S = R[y_1, \ldots, y_n]/(g_1, \ldots, g_n)$.
+With notation as in Example \ref{example-universal-quasi-finite-syntomic}
+if we pick $d$ large enough, then we can write each $g_i$ as
+$g_i = \sum_{E \in T} g_{i, E}y^E$ with $g_{i, E} \in R$.
+Then the map $A \to R$ sending $a_{i, E}$ to $g_{i, E}$
+and the map $B \to S$ sending $x_i$ to $y_i$ give a cocartesian
+diagram of rings
+$$
+\xymatrix{
+S & B \ar[l] \\
+R \ar[u] & A \ar[l] \ar[u]
+}
+$$
+which proves the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Finite syntomic morphisms}
+\label{section-finite-syntomic}
+
+\noindent
+This section is the analogue of Section \ref{section-quasi-finite-syntomic}
+for finite syntomic morphisms.
+
+\begin{lemma}
+\label{lemma-syntomic-finite}
+Let $f : Y \to X$ be a morphism of schemes. The following are equivalent
+\begin{enumerate}
+\item $f$ is finite and syntomic,
+\item $f$ is finite, flat, and a local complete intersection morphism,
+\item $f$ is finite, flat, locally of finite presentation,
+and the fibres of $f$ are local complete intersections,
+\item $f$ is finite and for every $x \in X$ there is an
+affine open $x \in U = \Spec(A) \subset X$, an integer $n$,
+and $f_1, \ldots, f_n \in A[x_1, \ldots, x_n]$ such that
+$f^{-1}(U)$ is isomorphic to the spectrum of
+$A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$,
+\item $f$ is finite, flat, locally of finite presentation,
+and $\NL_{Y/X}$ has tor-amplitude in $[-1, 0]$, and
+\item $f$ is finite, flat, locally of finite presentation, and
+$\NL_{Y/X}$ is perfect of rank $0$ with tor-amplitude in $[-1, 0]$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1), (2), (3), (5), and (6)
+and the implication (4) $\Rightarrow$ (1) follow immediately
+from Lemma \ref{lemma-syntomic-quasi-finite}. Assume the equivalent conditions
+(1), (2), (3), (5), (6) hold.
+Choose a point $x \in X$ and an affine open $U = \Spec(A)$
+of $x$ in $X$ and say $x$ corresponds to the prime ideal
+$\mathfrak p \subset A$. Write $f^{-1}(U) = \Spec(B)$.
+Write $B = A[x_1, \ldots, x_n]/I$. Since $\NL_{B/A}$
+is perfect of tor-amplitude in $[-1, 0]$ by (6)
+we see that $I/I^2$ is a finite locally free $B$-module
+of rank $n$. Since $B_\mathfrak p$ is semi-local we see that
+$(I/I^2)_\mathfrak p$ is free of rank $n$, see
+Algebra, Lemma \ref{algebra-lemma-locally-free-semi-local-free}.
+Thus after replacing $A$ by a principal localization at
+an element not in $\mathfrak p$ we may assume $I/I^2$
+is a free $B$-module of rank $n$.
+Thus by Algebra, Lemma \ref{algebra-lemma-huber}
+we can find a presentation of $B$ over $A$
+with the same number of variables as equations. In other words,
+we may assume $B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$.
+This proves (4).
+\end{proof}
+
+\begin{example}
+\label{example-universal-finite-syntomic}
+Let $d \geq 1$ be an integer. Consider variables
+$a_{ij}^l$ for $1 \leq i, j, l \leq d$ and denote
+$$
+A_d = \mathbf{Z}[a_{ij}^l]/J
+$$
+where $J$ is the ideal generated by the elements
+$$
+\left\{
+\begin{matrix}
+\sum_l a_{ij}^la_{lk}^m - \sum_l a_{il}^ma_{jk}^l & \forall i, j, k, m \\
+a_{ij}^k - a_{ji}^k & \forall i, j, k \\
+a_{i1}^j - \delta_{ij} & \forall i, j
+\end{matrix}
+\right.
+$$
+where $\delta_{ij}$ denotes the Kronecker delta.
+We define an $A_d$-algebra $B_d$ as follows: as an $A_d$-module we set
+$$
+B_d = A_d e_1 \oplus \ldots \oplus A_d e_d
+$$
+The algebra structure is given by $A_d \to B_d$ mapping $1$ to $e_1$.
+The multiplication on $B_d$ is the $A_d$-bilinear map
+$$
+m : B_d \times B_d \longrightarrow B_d, \quad
+m(e_i, e_j) = \sum a_{ij}^k e_k
+$$
+It is straightforward to check that the relations given above
+exactly force this to be an $A_d$-algebra structure.
+The morphism
+$$
+\pi_d : Y_d = \Spec(B_d) \longrightarrow \Spec(A_d) = X_d
+$$
+is the ``universal'' finite free morphism of rank $d$.
+\end{example}
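+
+\medskip\noindent
+For $d = 2$ the relations $a_{i1}^j = \delta_{ij}$ together with the
+symmetry $a_{ij}^k = a_{ji}^k$ determine all products except
+$e_2 e_2$, and one checks that the associativity relations are then
+automatically satisfied. Hence $A_2 = \mathbf{Z}[a_{22}^1, a_{22}^2]$ and
+$$
+B_2 \cong A_2[t]/(t^2 - a_{22}^2 t - a_{22}^1)
+$$
+via $t \mapsto e_2$. In other words, $\pi_2$ is the universal finite
+free morphism of rank $2$, given by a monic quadratic polynomial.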
+
+\begin{lemma}
+\label{lemma-universal-finite-syntomic}
+With notation as in Example \ref{example-universal-finite-syntomic}
+there is an open subscheme $U_d \subset X_d$ with the following property:
+a morphism of schemes $X \to X_d$ factors through $U_d$ if and only
+if $Y_d \times_{X_d} X \to X$ is syntomic.
+\end{lemma}
+
+\begin{proof}
+Recall that being syntomic is the same thing as being flat and
+a local complete intersection morphism, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-flat-lci}.
+The set $W_d \subset Y_d$ of points where $\pi_d$ is Koszul
+is open in $Y_d$ and its formation commutes with arbitrary base change, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-base-change-lci-fibres}.
+Since $\pi_d$ is finite and hence closed, we see that
+$Z = \pi_d(Y_d \setminus W_d)$ is closed. Since clearly $U_d = X_d \setminus Z$
+and since its formation commutes with base change we find that the lemma
+is true.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universal-finite-syntomic-smooth}
+With notation as in Example \ref{example-universal-finite-syntomic}
+and $U_d$ as in Lemma \ref{lemma-universal-finite-syntomic},
+the scheme $U_d$ is smooth over $\Spec(\mathbf{Z})$.
+\end{lemma}
+
+\begin{proof}
+Let us use More on Morphisms, Lemma
+\ref{more-morphisms-lemma-lifting-along-artinian-at-point}
+to show that $U_d \to \Spec(\mathbf{Z})$ is smooth.
+Namely, suppose that $\Spec(A) \to U_d$ is a morphism
+and $A' \to A$ is a small extension. Then $B = A \otimes_{A_d} B_d$
+is a finite free $A$-algebra which is syntomic over $A$
+(by construction of $U_d$). By
+Smoothing Ring Maps, Proposition \ref{smoothing-proposition-lift-smooth}
+there exists a syntomic ring map $A' \to B'$ such that
+$B \cong B' \otimes_{A'} A$. Set $e'_1 = 1 \in B'$. For $1 < i \leq d$
+choose lifts $e'_i \in B'$ of the elements
+$1 \otimes e_i \in A \otimes_{A_d} B_d = B$. Then $e'_1, \ldots, e'_d$
+is a basis for $B'$ over $A'$ (for example see Algebra, Lemma
+\ref{algebra-lemma-local-artinian-basis-when-flat}).
+Thus we can write $e'_i e'_j = \sum \alpha_{ij}^l e'_l$ for unique
+elements $\alpha_{ij}^l \in A'$ which satisfy the relations
+$\sum_l \alpha_{ij}^l \alpha_{lk}^m = \sum_l \alpha_{il}^m \alpha_{jk}^l$
+and $\alpha_{ij}^k = \alpha_{ji}^k$ and $\alpha_{i1}^j = \delta_{ij}$
+in $A'$. This determines a morphism $\Spec(A') \to X_d$ by
+sending $a_{ij}^l \in A_d$ to $\alpha_{ij}^l \in A'$. This morphism
+agrees with the given morphism $\Spec(A) \to U_d$. Since $\Spec(A')$
+and $\Spec(A)$ have the same underlying topological space, we see
+that we obtain the desired lift $\Spec(A') \to U_d$ and we
+conclude that $U_d$ is smooth over $\mathbf{Z}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universal-finite-syntomic-etale}
+With notation as in Example \ref{example-universal-finite-syntomic}
+consider the open subscheme $U'_d \subset X_d$ over which
+$\pi_d$ is \'etale. Then $U'_d$ is a dense open subscheme of the
+open $U_d$ of Lemma \ref{lemma-universal-finite-syntomic}.
+
+\begin{proof}
+By exactly the same reasoning as in the proof of
+Lemma \ref{lemma-universal-finite-syntomic}, using
+Morphisms, Lemma \ref{morphisms-lemma-set-points-where-fibres-etale},
+there is a maximal open $U'_d \subset X_d$ over which $\pi_d$ is
+\'etale. Moreover, since an \'etale morphism is syntomic, we see
+that $U'_d \subset U_d$. To finish the proof we have to show
+that $U'_d \subset U_d$ is dense. Let $u : \Spec(k) \to U_d$ be a morphism
+where $k$ is a field. Let $B = k \otimes_{A_d} B_d$ as in the
+proof of Lemma \ref{lemma-universal-finite-syntomic-smooth}.
+We will show there is a local domain $A'$ with residue field $k$
+and a finite syntomic $A'$-algebra $B'$ with $B = k \otimes_{A'} B'$
+whose generic fibre is \'etale. Exactly as in the proof of
+Lemma \ref{lemma-universal-finite-syntomic-smooth}
+this will determine a morphism $\Spec(A') \to U_d$ which will map the
+generic point into $U'_d$ and the closed point to $u$, thereby
+finishing the proof.
+
+\medskip\noindent
+By Lemma \ref{lemma-syntomic-finite} part (4) we can choose a presentation
+$B = k[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$.
+Let $d'$ be the maximum total degree of the polynomials $f_1, \ldots, f_n$.
+Let $Y_{n, d'} \to X_{n, d'}$ be as in
+Example \ref{example-universal-quasi-finite-syntomic}.
+By construction there is a morphism $u' : \Spec(k) \to X_{n, d'}$
+such that
+$$
+\Spec(B) \cong Y_{n, d'} \times_{X_{n, d'}, u'} \Spec(k)
+$$
+Denote $A = \mathcal{O}_{X_{n, d'}, u'}^h$ the henselization of the
+local ring of $X_{n, d'}$ at the image of $u'$. Then we can write
+$$
+Y_{n, d'} \times_{X_{n, d'}} \Spec(A) = Z \amalg W
+$$
+with $Z \to \Spec(A)$ finite and $W \to \Spec(A)$ having empty
+closed fibre, see
+Algebra, Lemma \ref{algebra-lemma-characterize-henselian} part (13)
+or the discussion in More on Morphisms, Section
+\ref{more-morphisms-section-etale-localization}.
+By Lemma \ref{lemma-universal-quasi-finite-syntomic-etale}
+the local ring $A$ is regular (here we also use
+More on Algebra, Lemma \ref{more-algebra-lemma-henselization-regular})
+and the morphism $Z \to \Spec(A)$ is \'etale over the generic point of
+$\Spec(A)$ (because it is mapped to the generic point of $X_{n, d'}$).
+By construction $Z \times_{\Spec(A)} \Spec(k) \cong \Spec(B)$.
+This proves what we want except that the map from
+residue field of $A$ to $k$ may not be an isomorphism.
+By Algebra, Lemma \ref{algebra-lemma-flat-local-given-residue-field}
+there exists a flat local ring map $A \to A'$ such that the residue
+field of $A'$ is $k$. If $A'$ isn't a domain, then we choose a
+minimal prime $\mathfrak p \subset A'$ (which lies over the
+unique minimal prime of $A$ by flatness) and we replace
+$A'$ by $A'/\mathfrak p$. Set $B'$ equal to the unique $A'$-algebra
+such that $Z \times_{\Spec(A)} \Spec(A') = \Spec(B')$.
+This finishes the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-universal-finite-syntomic-smooth-top}
+Let $\pi_d : Y_d \to X_d$ be as in
+Example \ref{example-universal-finite-syntomic}.
+Let $U_d \subset X_d$ be the maximal open over which
+$V_d = \pi_d^{-1}(U_d)$ is finite syntomic as in
+Lemma \ref{lemma-universal-finite-syntomic}.
+Then it is also true that $V_d$ is smooth over $\mathbf{Z}$.
+(Of course the morphism $V_d \to U_d$ is not smooth when $d \geq 2$.)
+Arguing as in the proof of Lemma \ref{lemma-universal-finite-syntomic-smooth}
+this corresponds to the following deformation
+problem: given a small extension $C' \to C$ and
+a finite syntomic $C$-algebra $B$ with a section $B \to C$,
+find a finite syntomic $C'$-algebra $B'$ and a section $B' \to C'$
+whose tensor product with $C$ recovers $B \to C$.
+By Lemma \ref{lemma-syntomic-finite} we may write
+$B = C[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$ as
+a relative global complete intersection.
After a change of coordinates we may assume
+$x_1, \ldots, x_n$ are in the kernel of $B \to C$.
+Then the polynomials $f_i$ have vanishing constant terms.
+Choose any lifts $f'_i \in C'[x_1, \ldots, x_n]$ of $f_i$
+with vanishing constant terms. Then
+$B' = C'[x_1, \ldots, x_n]/(f'_1, \ldots, f'_n)$
+with section $B' \to C'$ sending $x_i$ to zero works.
+\end{remark}
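
\noindent
For instance (a concrete case of the recipe above), take the small
extension $C' = k[\epsilon]/(\epsilon^2) \to C = k$ and the finite
syntomic $C$-algebra $B = k[x]/(x^2)$ with section $B \to C$ sending
$x$ to zero. Any lift of $x^2$ with vanishing constant term works, say
$f'_1 = x^2 + \epsilon x$. Then $B' = C'[x]/(x^2 + \epsilon x)$ is
finite free of rank $2$ over $C'$, the section $x \mapsto 0$ exists
because $f'_1$ has vanishing constant term, and
$B' \otimes_{C'} C = B$ recovers $B \to C$.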
+
+\begin{lemma}
+\label{lemma-locally-comes-from-universal-finite}
+Let $f : Y \to X$ be a morphism of schemes. If $f$ satisfies the equivalent
+conditions of Lemma \ref{lemma-syntomic-finite} then for every
+$x \in X$ there exist a $d$ and a commutative diagram
+$$
+\xymatrix{
+Y \ar[d] &
+V \ar[d] \ar[l] \ar[r] &
+V_d \ar[d] \ar[r] &
+Y_d \ar[d]^{\pi_d}\\
+X &
+U \ar[l] \ar[r] &
+U_d \ar[r] &
+X_d
+}
+$$
+with the following properties
+\begin{enumerate}
+\item $U \subset X$ is open and $V = f^{-1}(U)$,
+\item $\pi_d : Y_d \to X_d$ is as in
+Example \ref{example-universal-finite-syntomic},
+\item $U_d \subset X_d$ is as in Lemma \ref{lemma-universal-finite-syntomic}
+and $V_d = \pi_d^{-1}(U_d) \subset Y_d$,
\item the middle square is cartesian.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose an affine open neighbourhood $U = \Spec(A) \subset X$ of $x$.
+Write $V = f^{-1}(U) = \Spec(B)$. Then $B$ is a finite locally free
+$A$-module and the inclusion $A \subset B$ is a locally direct summand.
+Thus after shrinking $U$ we can choose a basis $1 = e_1, e_2, \ldots, e_d$
+of $B$ as an $A$-module. Write
+$e_i e_j = \sum \alpha_{ij}^l e_l$ for unique
+elements $\alpha_{ij}^l \in A$ which satisfy the relations
$\sum_l \alpha_{ij}^l \alpha_{lk}^m = \sum_l \alpha_{il}^m \alpha_{jk}^l$
and $\alpha_{ij}^k = \alpha_{ji}^k$ and $\alpha_{i1}^j = \delta_{ij}$
+in $A$. This determines a morphism $\Spec(A) \to X_d$ by sending
+$a_{ij}^l \in A_d$ to $\alpha_{ij}^l \in A$. By construction
+$V \cong \Spec(A) \times_{X_d} Y_d$. By the definition of $U_d$
+we see that $\Spec(A) \to X_d$ factors through $U_d$. This
+finishes the proof.
+\end{proof}
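
\noindent
For example, if $B = A[x]/(x^2 - a)$ with basis $e_1 = 1$,
$e_2 = \overline{x}$, then $e_2 e_2 = a e_1$ and the corresponding
morphism $\Spec(A) \to X_2$ sends $a_{22}^1$ to $a$, $a_{22}^2$ to $0$,
and the remaining structure constants to $0$ or $1$ as dictated by
$e_1 = 1$; the pullback of $\pi_2 : Y_2 \to X_2$ along this morphism
recovers $\Spec(B) \to \Spec(A)$.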
+
+
+
+
+
+
+
+
+
+
+\section{A formula for the different}
+\label{section-formula-different}
+
+\noindent
+In this section we discuss the material in \cite[Appendix A]{Mazur-Roberts}
+due to Tate. In our language, this will show that the different is
+equal to the K\"ahler different in the case of a flat, quasi-finite,
+local complete intersection morphism.
+First we compute the Noether different in a special case.
+
+\begin{lemma}
+\label{lemma-tate}
+\begin{reference}
+\cite[Appendix]{Mazur-Roberts}
+\end{reference}
+Let $A \to P$ be a ring map. Let $f_1, \ldots, f_n \in P$ be a
+Koszul regular sequence. Assume $B = P/(f_1, \ldots, f_n)$
+is flat over $A$. Let $g_1, \ldots, g_n \in P \otimes_A B$
+be a Koszul regular sequence generating the kernel of the multiplication
+map $P \otimes_A B \to B$. Write $f_i \otimes 1 = \sum g_{ij} g_j$.
+Then the annihilator of $\Ker(B \otimes_A B \to B)$ is a principal
+ideal generated by the image of $\det(g_{ij})$.
+\end{lemma}
+
+\begin{proof}
+The Koszul complex $K_\bullet = K(P, f_1, \ldots, f_n)$ is a resolution
+of $B$ by finite free $P$-modules. The Koszul complex
+$M_\bullet = K(P \otimes_A B, g_1, \ldots, g_n)$ is a resolution
+of $B$ by finite free $P \otimes_A B$-modules. There is a map of
+complexes
+$$
+K_\bullet \longrightarrow M_\bullet
+$$
+which in degree $1$ is given by the matrix $(g_{ij})$ and
+in degree $n$ by $\det(g_{ij})$. See
+More on Algebra, Lemma \ref{more-algebra-lemma-functorial}.
+As $B$ is a flat $A$-module, we can view $M_\bullet$ as a complex
+of flat $P$-modules (via $P \to P \otimes_A B$, $p \mapsto p \otimes 1$).
+Thus we may use both complexes to compute $\text{Tor}_*^P(B, B)$ and
+it follows that the displayed map defines a quasi-isomorphism after tensoring
+with $B$. It is clear that $H_n(K_\bullet \otimes_P B) = B$.
+On the other hand, $H_n(M_\bullet \otimes_P B)$ is the kernel of
+$$
+B \otimes_A B \xrightarrow{g_1, \ldots, g_n} (B \otimes_A B)^{\oplus n}
+$$
+Since $g_1, \ldots, g_n$ generate the kernel of $B \otimes_A B \to B$
+this proves the lemma.
+\end{proof}
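
\noindent
As a sanity check, take $P = A[x]$, $n = 1$, and $f_1 = x^2 - a$ with
$a \in A$, so that $B = A[x]/(x^2 - a)$. Write $y = x \otimes 1$ and
$z = 1 \otimes x$ in $P \otimes_A B$. The kernel of the multiplication
map $P \otimes_A B \to B$ is generated by the Koszul regular element
$g_1 = y - z$ and
$$
f_1 \otimes 1 = y^2 - a = y^2 - z^2 = (y - z)(y + z)
$$
since $z^2 = a$. Thus $g_{11} = y + z$ and the lemma says the
annihilator of $\Ker(B \otimes_A B \to B)$ is generated by the image of
$x \otimes 1 + 1 \otimes x$. The image of this element in $B$ is
$2x = \partial f_1/\partial x$, consistent with
Lemma \ref{lemma-quasi-finite-complete-intersection} below.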
+
+\begin{lemma}
+\label{lemma-quasi-finite-complete-intersection}
+Let $A$ be a ring. Let $n \geq 1$ and
+$h, f_1, \ldots, f_n \in A[x_1, \ldots, x_n]$.
+Set $B = A[x_1, \ldots, x_n, 1/h]/(f_1, \ldots, f_n)$.
+Assume that $B$ is quasi-finite over $A$.
+Then
+\begin{enumerate}
+\item $B$ is flat over $A$ and $A \to B$ is a relative local complete
+intersection,
+\item the annihilator $J$ of $I = \Ker(B \otimes_A B \to B)$
+is free of rank $1$ over $B$,
+\item the Noether different of $B$ over $A$ is generated
+by $\det(\partial f_i/\partial x_j)$ in $B$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that
+$B = A[x, x_1, \ldots, x_n]/(xh - 1, f_1, \ldots, f_n)$
+is a relative global complete intersection over $A$, see
+Algebra, Definition
+\ref{algebra-definition-relative-global-complete-intersection}.
+By Algebra, Lemma \ref{algebra-lemma-relative-global-complete-intersection}
+we see that $B$ is flat over $A$.
+
+\medskip\noindent
+Write $P' = A[x, x_1, \ldots, x_n]$ and
$P = P'/(xh - 1) = A[x_1, \ldots, x_n, 1/h]$.
+Then we have $P' \to P \to B$.
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-relative-global-complete-intersection-koszul}
+we see that $xh - 1, f_1, \ldots, f_n$ is a Koszul regular sequence
+in $P'$. Since $xh - 1$ is a Koszul regular sequence of length
+one in $P'$ (by the same lemma for example) we conclude that
+$f_1, \ldots, f_n$ is a Koszul regular sequence in $P$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-truncate-koszul-regular}.
+
+\medskip\noindent
+Let $g_i \in P \otimes_A B$ be the image of $x_i \otimes 1 - 1 \otimes x_i$.
+Let us use the short hand $y_i = x_i \otimes 1$ and $z_i = 1 \otimes x_i$
+in $A[x_1, \ldots, x_n] \otimes_A A[x_1, \ldots, x_n]$
+so that $g_i$ is the image of $y_i - z_i$. For a polynomial
+$f \in A[x_1, \ldots, x_n]$ we write $f(y) = f \otimes 1$
+and $f(z) = 1 \otimes f$ in the above tensor product.
+Then we have
+$$
+P \otimes_A B/(g_1, \ldots, g_n) =
+\frac{A[y_1, \ldots, y_n, z_1, \ldots, z_n, \frac{1}{h(y)h(z)}]}
+{(f_1(z), \ldots, f_n(z), y_1 - z_1, \ldots, y_n - z_n)}
+$$
+which is clearly isomorphic to $B$. Hence by the same arguments
+as above we find that $f_1(z), \ldots, f_n(z), y_1 - z_1, \ldots, y_n - z_n$
+is a Koszul regular sequence in
+$A[y_1, \ldots, y_n, z_1, \ldots, z_n, \frac{1}{h(y)h(z)}]$.
The sequence $f_1(z), \ldots, f_n(z)$ is Koszul regular in
+$A[y_1, \ldots, y_n, z_1, \ldots, z_n, \frac{1}{h(y)h(z)}]$
+by flatness of the map
+$$
+P \longrightarrow A[y_1, \ldots, y_n, z_1, \ldots, z_n,
+\textstyle{\frac{1}{h(y)h(z)}}],\quad x_i \longmapsto z_i
+$$
+and More on Algebra, Lemma
+\ref{more-algebra-lemma-koszul-regular-flat-base-change}.
+By More on Algebra, Lemma \ref{more-algebra-lemma-truncate-koszul-regular}
we conclude that $g_1, \ldots, g_n$ is a Koszul regular sequence
+in $P \otimes_A B$.
+
+\medskip\noindent
+At this point we have verified all the assumptions of Lemma \ref{lemma-tate}
+above with $P$, $f_1, \ldots, f_n$, and $g_i \in P \otimes_A B$ as above.
+In particular the annihilator $J$ of $I$ is freely generated by one
+element $\delta$ over $B$.
+Set $f_{ij} = \partial f_i/\partial x_j \in A[x_1, \ldots, x_n]$.
+An elementary computation shows that we can write
+$$
+f_i(y) =
+f_i(z_1 + g_1, \ldots, z_n + g_n) =
+f_i(z) + \sum\nolimits_j f_{ij}(z) g_j +
+\sum\nolimits_{j, j'} F_{ijj'}g_jg_{j'}
+$$
+for some $F_{ijj'} \in A[y_1, \ldots, y_n, z_1, \ldots, z_n]$.
+Taking the image in $P \otimes_A B$ the terms $f_i(z)$ map to
+zero and we obtain
+$$
+f_i \otimes 1 = \sum\nolimits_j
+\left(1 \otimes f_{ij} + \sum\nolimits_{j'} F_{ijj'}g_{j'}\right)g_j
+$$
+Thus we conclude from Lemma \ref{lemma-tate}
+that $\delta = \det(g_{ij})$ with
+$g_{ij} = 1 \otimes f_{ij} + \sum_{j'} F_{ijj'}g_{j'}$.
+Since $g_{j'}$ maps to zero in $B$, we conclude
+that the image of $\det(\partial f_i/\partial x_j)$ in $B$
+generates the Noether different of $B$ over $A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-different-syntomic-quasi-finite}
+Let $f : Y \to X$ be a morphism of Noetherian schemes. If $f$
+satisfies the equivalent conditions of Lemma \ref{lemma-syntomic-quasi-finite}
+then the different $\mathfrak{D}_f$ of $f$ is the K\"ahler different
+of $f$.
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-flat-gorenstein-agree-noether} and
+\ref{lemma-dualizing-syntomic-quasi-finite}
+the different of $f$ affine locally is the same as the
+Noether different. Then the lemma follows from the
+computation of the Noether different and the K\"ahler
+different on standard affine pieces done in
+Lemmas \ref{lemma-kahler-different-complete-intersection} and
+\ref{lemma-quasi-finite-complete-intersection}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-different-quasi-finite-complete-intersection}
+Let $A$ be a ring. Let $n \geq 1$ and
+$h, f_1, \ldots, f_n \in A[x_1, \ldots, x_n]$.
+Set $B = A[x_1, \ldots, x_n, 1/h]/(f_1, \ldots, f_n)$.
+Assume that $B$ is quasi-finite over $A$.
+Then there is an isomorphism $B \to \omega_{B/A}$
+mapping $\det(\partial f_i/\partial x_j)$ to $\tau_{B/A}$.
+\end{lemma}
+
+\begin{proof}
+Let $J$ be the annihilator of $\Ker(B \otimes_A B \to B)$.
+By Lemma \ref{lemma-quasi-finite-complete-intersection}
+the map $A \to B$ is flat and
+$J$ is a free $B$-module with generator $\xi$ mapping to
+$\det(\partial f_i/\partial x_j)$ in $B$.
+Thus the lemma follows from
+Lemma \ref{lemma-noether-different-flat-quasi-finite}
+and the fact (Lemma \ref{lemma-dualizing-syntomic-quasi-finite})
+that $\omega_{B/A}$ is an invertible $B$-module.
+(Warning: it is necessary to prove $\omega_{B/A}$
+is invertible because a finite $B$-module $M$ such
+that $\Hom_B(M, B) \cong B$ need not be free.)
+\end{proof}
+
+\begin{example}
+\label{example-different-for-monogenic}
+Let $A$ be a Noetherian ring. Let $f, h \in A[x]$ such that
+$$
+B = (A[x]/(f))_h = A[x, 1/h]/(f)
+$$
+is quasi-finite over $A$. Let $f' \in A[x]$ be the derivative
+of $f$ with respect to $x$. The ideal $\mathfrak{D} = (f') \subset B$
+is the Noether different of $B$ over $A$,
+is the K\"ahler different of $B$ over $A$, and
+is the ideal whose associated quasi-coherent sheaf of ideals is the
+different of $\Spec(B)$ over $\Spec(A)$.
+\end{example}
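
\noindent
A classical special case: take $A = \mathbf{Z}$, $f = x^2 + 1$, and
$h = 1$, so that $B = \mathbf{Z}[x]/(x^2 + 1) = \mathbf{Z}[i]$. Then
$f' = 2x$ and, as $i$ is a unit, we get
$$
\mathfrak{D} = (2i) = (2) = (1 + i)^2
$$
in $\mathbf{Z}[i]$, which agrees with the fact that
$\Spec(\mathbf{Z}[i]) \to \Spec(\mathbf{Z})$ is ramified exactly at
the prime $(1 + i)$ lying over $(2)$.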
+
+\begin{lemma}
+\label{lemma-discriminant-quasi-finite-morphism-smooth}
+Let $S$ be a Noetherian scheme. Let $X$, $Y$ be smooth schemes
+of relative dimension $n$ over $S$. Let $f : Y \to X$ be a
+quasi-finite morphism over $S$.
+Then $f$ is flat and the closed subscheme $R \subset Y$
+cut out by the different of $f$ is the locally principal
+closed subscheme cut out by
+$$
+\wedge^n(\text{d}f) \in
+\Gamma(Y,
+(f^*\Omega^n_{X/S})^{\otimes -1} \otimes_{\mathcal{O}_Y} \Omega^n_{Y/S})
+$$
+If $f$ is \'etale at the associated points of $Y$, then $R$ is an
+effective Cartier divisor and
+$$
+f^*\Omega^n_{X/S} \otimes_{\mathcal{O}_Y} \mathcal{O}(R) =
+\Omega^n_{Y/S}
+$$
+as invertible sheaves on $Y$.
+\end{lemma}
+
+\begin{proof}
+To prove that $f$ is flat, it suffices to prove $Y_s \to X_s$
+is flat for all $s \in S$ (More on Morphisms, Lemma
+\ref{more-morphisms-lemma-morphism-between-flat-Noetherian}).
+Flatness of $Y_s \to X_s$ follows from
+Algebra, Lemma \ref{algebra-lemma-CM-over-regular-flat}.
+By More on Morphisms, Lemma
+\ref{more-morphisms-lemma-lci-permanence}
+the morphism $f$ is a local complete intersection morphism.
+Thus the statement on the different follows from the
+corresponding statement on the K\"ahler different by
+Lemma \ref{lemma-different-syntomic-quasi-finite}.
+Finally, since we have the exact sequence
+$$
f^*\Omega_{X/S} \xrightarrow{\text{d}f} \Omega_{Y/S} \to \Omega_{Y/X} \to 0
+$$
+by Morphisms, Lemma \ref{morphisms-lemma-triangle-differentials}
+and since $\Omega_{X/S}$ and $\Omega_{Y/S}$ are finite locally free
+of rank $n$ (Morphisms, Lemma
+\ref{morphisms-lemma-smooth-omega-finite-locally-free}),
+the statement for the K\"ahler different is clear from the definition
+of the zeroth fitting ideal. If $f$ is \'etale at the associated
+points of $Y$, then $\wedge^n\text{d}f$ does not vanish in
+the associated points of $Y$, which implies that the local equation
+of $R$ is a nonzerodivisor. Hence $R$ is an effective Cartier divisor.
+The canonical isomorphism sends $1$ to $\wedge^n\text{d}f$, see
+Divisors, Lemma \ref{divisors-lemma-characterize-OD}.
+\end{proof}
+
+
+
+
+
+
+\section{The Tate map}
+\label{section-tate-map}
+
+\noindent
+In this section we produce an isomorphism between
+the determinant of the relative cotangent complex and
+the relative dualizing module for a locally quasi-finite
+syntomic morphism of locally Noetherian schemes. Following
+\cite[1.4.4]{Garel} we dub the isomorphism the Tate map.
+Our approach is to avoid doing local calculations as
much as possible.
+
+\medskip\noindent
+Let $Y \to X$ be a locally quasi-finite syntomic morphism of schemes.
+We will use all the equivalent conditions for this notion given in
+Lemma \ref{lemma-syntomic-quasi-finite} without further mention in
+this section. In particular, we see that $\NL_{Y/X}$ is a perfect
+object of $D(\mathcal{O}_Y)$ with tor-amplitude in $[-1, 0]$. Thus
+we have a canonical invertible module
+$\det(\NL_{Y/X})$ on $Y$ and a global section
+$$
+\delta(\NL_{Y/X}) \in \Gamma(Y, \det(\NL_{Y/X}))
+$$
+See Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-determinant-two-term-complexes}.
+Suppose given a commutative diagram of schemes
+$$
+\xymatrix{
+Y' \ar[r]_b \ar[d] & Y \ar[d] \\
+X' \ar[r] & X
+}
+$$
+whose vertical arrows are locally quasi-finite syntomic and which
+induces an isomorphism of $Y'$ with an open of $X' \times_X Y$.
+Then the canonical map
+$$
+Lb^*\NL_{Y/X} \longrightarrow \NL_{Y'/X'}
+$$
+is a quasi-isomorphism by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-base-change-NL-flat}.
+Thus we get a canonical isomorphism
+$b^*\det(\NL_{Y/X}) \to \det(\NL_{Y'/X'})$ which sends the
canonical section $\delta(\NL_{Y/X})$ to $\delta(\NL_{Y'/X'})$, see
+Derived Categories of Schemes, Remark \ref{perfect-remark-functorial-det}.
+
+\begin{remark}
+\label{remark-local-description-delta}
+Let $Y \to X$ be a locally quasi-finite syntomic morphism of schemes.
+What does the pair $(\det(\NL_{Y/X}), \delta(\NL_{Y/X}))$ look
+like locally? Choose affine opens $V = \Spec(B) \subset Y$,
+$U = \Spec(A) \subset X$ with $f(V) \subset U$ and an integer $n$ and
+$f_1, \ldots, f_n \in A[x_1, \ldots, x_n]$ such that
+$B = A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$. Then
+$$
+\NL_{B/A} = \left(
+(f_1, \ldots, f_n)/(f_1, \ldots, f_n)^2
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} B \text{d} x_i\right)
+$$
+and $(f_1, \ldots, f_n)/(f_1, \ldots, f_n)^2$ is free with generators
+the classes $\overline{f}_i$. See proof of
+Lemma \ref{lemma-syntomic-quasi-finite}.
Thus $\det(\NL_{B/A})$ is free on the generator
+$$
+\text{d}x_1 \wedge \ldots \wedge \text{d}x_n
+\otimes
+(\overline{f}_1 \wedge \ldots \wedge \overline{f}_n)^{\otimes -1}
+$$
+and the section $\delta(\NL_{B/A})$ is the element
+$$
+\delta(\NL_{B/A}) =
+\det(\partial f_j/ \partial x_i) \cdot
+\text{d}x_1 \wedge \ldots \wedge \text{d}x_n
+\otimes
+(\overline{f}_1 \wedge \ldots \wedge \overline{f}_n)^{\otimes -1}
+$$
+by definition.
+\end{remark}
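
\noindent
For instance, with $n = 1$ and $B = A[x]/(x^2 - a)$ the recipe of
Remark \ref{remark-local-description-delta} gives
$$
\delta(\NL_{B/A}) =
2x \cdot \text{d}x \otimes \overline{f}_1^{\otimes -1}
$$
If $2$ and $a$ are invertible in $A$, then $2x$ is a unit in $B$
(as $x^2 = a$ is a unit), the ring map $A \to B$ is \'etale, and
$\delta(\NL_{B/A})$ is a trivializing section of $\det(\NL_{B/A})$.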
+
+\noindent
+Let $Y \to X$ be a locally quasi-finite syntomic morphism of
+locally Noetherian schemes. By
+Remarks \ref{remark-relative-dualizing-for-quasi-finite} and
+\ref{remark-relative-dualizing-for-flat-quasi-finite} we have
+a coherent $\mathcal{O}_Y$-module $\omega_{Y/X}$ and a canonical
+global section
+$$
+\tau_{Y/X} \in \Gamma(Y, \omega_{Y/X})
+$$
+which affine locally recovers the pair $\omega_{B/A}, \tau_{B/A}$.
+By Lemma \ref{lemma-dualizing-syntomic-quasi-finite} the module
+$\omega_{Y/X}$ is invertible. Suppose given a commutative diagram of
+locally Noetherian schemes
+$$
+\xymatrix{
+Y' \ar[r]_b \ar[d] & Y \ar[d] \\
+X' \ar[r] & X
+}
+$$
+whose vertical arrows are locally quasi-finite syntomic and which
+induces an isomorphism of $Y'$ with an open of $X' \times_X Y$.
+Then there is a canonical base change map
+$$
+b^*\omega_{Y/X} \longrightarrow \omega_{Y'/X'}
+$$
+which is an isomorphism
+mapping $\tau_{Y/X}$ to $\tau_{Y'/X'}$. Namely, the base change map
+in the affine setting is (\ref{equation-bc-dualizing}), it is an
+isomorphism by Lemma \ref{lemma-dualizing-base-change-of-flat}, and it
+maps $\tau_{Y/X}$ to $\tau_{Y'/X'}$ by
+Lemma \ref{lemma-trace-base-change} part (1).
+
+\begin{proposition}
+\label{proposition-tate-map}
+There exists a unique rule that to every locally quasi-finite syntomic
+morphism of locally Noetherian schemes $Y \to X$ assigns an isomorphism
+$$
+c_{Y/X} : \det(\NL_{Y/X}) \longrightarrow \omega_{Y/X}
+$$
+satisfying the following two properties
+\begin{enumerate}
+\item the section $\delta(\NL_{Y/X})$ is mapped to $\tau_{Y/X}$, and
+\item the rule is compatible with restriction to opens and with
+base change.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Let us reformulate the statement of the proposition. Consider the category
+$\mathcal{C}$ whose objects, denoted $Y/X$, are locally quasi-finite syntomic
morphisms $Y \to X$ of locally Noetherian schemes and whose morphisms
+$b/a : Y'/X' \to Y/X$ are commutative diagrams
+$$
+\xymatrix{
+Y' \ar[d] \ar[r]_b & Y \ar[d] \\
+X' \ar[r]^a & X
+}
+$$
+which induce an isomorphism of $Y'$ with an open subscheme of
+$X' \times_X Y$. The proposition means that for every object
+$Y/X$ of $\mathcal{C}$ we have an isomorphism
+$c_{Y/X} : \det(\NL_{Y/X}) \to \omega_{Y/X}$
+with $c_{Y/X}(\delta(\NL_{Y/X})) = \tau_{Y/X}$
+and for every morphism $b/a : Y'/X' \to Y/X$ of $\mathcal{C}$ we have
+$b^*c_{Y/X} = c_{Y'/X'}$ via the identifications
+$b^*\det(\NL_{Y/X}) = \det(\NL_{Y'/X'})$ and
+$b^*\omega_{Y/X} = \omega_{Y'/X'}$ described above.
+
+\medskip\noindent
+Given $Y/X$ in $\mathcal{C}$ and $y \in Y$ we can find
+an affine open $V \subset Y$ and $U \subset X$ with $f(V) \subset U$
+such that there exists some isomorphism
+$$
+\det(\NL_{Y/X})|_V \longrightarrow \omega_{Y/X}|_V
+$$
+mapping $\delta(\NL_{Y/X})|_V$ to $\tau_{Y/X}|_V$. This follows
+from picking affine opens as in
+Lemma \ref{lemma-syntomic-quasi-finite} part (5), the affine
+local description of $\delta(\NL_{Y/X})$ in
+Remark \ref{remark-local-description-delta}, and
+Lemma \ref{lemma-different-quasi-finite-complete-intersection}.
+If the annihilator of the section $\tau_{Y/X}$ is zero, then
+these local maps are unique and automatically glue. Hence if the annihilator
+of $\tau_{Y/X}$ is zero, then there is a unique isomorphism
+$c_{Y/X} : \det(\NL_{Y/X}) \to \omega_{Y/X}$ with
+$c_{Y/X}(\delta(\NL_{Y/X})) = \tau_{Y/X}$.
+If $b/a : Y'/X' \to Y/X$ is a morphism of $\mathcal{C}$
+and the annihilator of $\tau_{Y'/X'}$ is zero as well,
+then $b^*c_{Y/X}$ is the unique isomorphism
+$c_{Y'/X'} : \det(\NL_{Y'/X'}) \to \omega_{Y'/X'}$ with
+$c_{Y'/X'}(\delta(\NL_{Y'/X'})) = \tau_{Y'/X'}$.
+This follows formally from the fact that
+$b^*\delta(\NL_{Y/X}) = \delta(\NL_{Y'/X'})$ and
+$b^*\tau_{Y/X} = \tau_{Y'/X'}$.
+
+\medskip\noindent
+We can summarize the results of the previous paragraph as follows.
+Let $\mathcal{C}_{nice} \subset \mathcal{C}$ denote the
+full subcategory of $Y/X$ such that the annihilator of
+$\tau_{Y/X}$ is zero. Then we have solved the problem
+on $\mathcal{C}_{nice}$. For $Y/X$ in $\mathcal{C}_{nice}$
+we continue to denote $c_{Y/X}$ the solution we've just found.
+
+\medskip\noindent
+Consider morphisms
+$$
+Y_1/X_1 \xleftarrow{b_1/a_1} Y/X \xrightarrow{b_2/a_2} Y_2/X_2
+$$
+in $\mathcal{C}$ such that $Y_1/X_1$ and $Y_2/X_2$ are objects
+of $\mathcal{C}_{nice}$. {\bf Claim.} $b_1^*c_{Y_1/X_1} = b_2^*c_{Y_2/X_2}$.
+We will first show that the claim implies the proposition
+and then we will prove the claim.
+
+\medskip\noindent
+Let $d, n \geq 1$ and consider the locally
+quasi-finite syntomic morphism $Y_{n, d} \to X_{n, d}$
+constructed in Example \ref{example-universal-quasi-finite-syntomic}.
+Then $Y_{n, d}$ is an irreducible regular scheme and the
+morphism $Y_{n, d} \to X_{n, d}$ is locally quasi-finite syntomic
+and \'etale over a dense open, see
+Lemma \ref{lemma-universal-quasi-finite-syntomic-etale}.
+Thus $\tau_{Y_{n, d}/X_{n, d}}$ is nonzero for example by
+Lemma \ref{lemma-different-ramification}. Now a nonzero section
+of an invertible module over an irreducible regular scheme
+has vanishing annihilator. Thus
+$Y_{n, d}/X_{n, d}$ is an object of $\mathcal{C}_{nice}$.
+
+\medskip\noindent
+Let $Y/X$ be an arbitrary object of $\mathcal{C}$. Let $y \in Y$.
+By Lemma \ref{lemma-locally-comes-from-universal} we can find
+$n, d \geq 1$ and morphisms
+$$
+Y/X \leftarrow V/U \xrightarrow{b/a} Y_{n, d}/X_{n, d}
+$$
+of $\mathcal{C}$ such that $V \subset Y$ and $U \subset X$ are open.
Thus we can pull back the canonical morphism $c_{Y_{n, d}/X_{n, d}}$
+constructed above by $b$ to $V$. The claim guarantees these local
+isomorphisms glue! Thus we get a well defined global isomorphism
+$c_{Y/X} : \det(\NL_{Y/X}) \to \omega_{Y/X}$ with
+$c_{Y/X}(\delta(\NL_{Y/X})) = \tau_{Y/X}$.
+If $b/a : Y'/X' \to Y/X$ is a morphism of $\mathcal{C}$, then
+the claim also implies that the similarly constructed map
+$c_{Y'/X'}$ is the pullback by $b$ of the locally constructed
+map $c_{Y/X}$. Thus it remains to prove the claim.
+
+\medskip\noindent
+In the rest of the proof we prove the claim. We may pick a point
+$y \in Y$ and prove the maps agree in an open neighbourhood of $y$.
+Thus we may replace $Y_1$, $Y_2$ by open neighbourhoods of the
+image of $y$ in $Y_1$ and $Y_2$. Thus we may assume there are
+morphisms
+$$
+Y_{n_1, d_1}/X_{n_1, d_1} \leftarrow Y_1/X_1
+\quad\text{and}\quad
+Y_2/X_2 \rightarrow Y_{n_2, d_2}/X_{n_2, d_2}
+$$
+These are morphisms of $\mathcal{C}_{nice}$ for which we know the
+desired compatibilities. Thus we may replace
+$Y_1/X_1$ by $Y_{n_1, d_1}/X_{n_1, d_1}$ and
+$Y_2/X_2$ by $Y_{n_2, d_2}/X_{n_2, d_2}$. This reduces us to the
+case that $Y_1, X_1, Y_2, X_2$ are of finite type over $\mathbf{Z}$.
+(The astute reader will realize that this step wouldn't have been
+necessary if we'd defined $\mathcal{C}_{nice}$ to consist only
+of those objects $Y/X$ with $Y$ and $X$ of finite type over $\mathbf{Z}$.)
+
+\medskip\noindent
+Assume $Y_1, X_1, Y_2, X_2$ are of finite type over $\mathbf{Z}$.
+After replacing $Y, X, Y_1, X_1, Y_2, X_2$ by suitable open neighbourhoods
+of the image of $y$ we may assume $Y, X, Y_1, X_1, Y_2, X_2$ are affine.
+We may write $X = \lim X_\lambda$ as a cofiltered limit of affine
+schemes of finite type over $X_1 \times X_2$. For each $\lambda$
+we get
+$$
+Y_1 \times_{X_1} X_\lambda
+\quad\text{and}\quad
+X_\lambda \times_{X_2} Y_2
+$$
+If we take limits we obtain
+$$
+\lim Y_1 \times_{X_1} X_\lambda =
+Y_1 \times_{X_1} X \supset Y \subset
+X \times_{X_2} Y_2 = \lim X_\lambda \times_{X_2} Y_2
+$$
+By Limits, Lemma \ref{limits-lemma-descend-opens}
+we can find a $\lambda$ and opens
+$V_{1, \lambda} \subset Y_1 \times_{X_1} X_\lambda$ and
+$V_{2, \lambda} \subset X_\lambda \times_{X_2} Y_2$
+whose base change to $X$ recovers $Y$ (on both sides).
+After increasing $\lambda$ we may assume
+there is an isomorphism
+$V_{1, \lambda} \to V_{2, \lambda}$ whose base change to $X$ is the
+identity on $Y$, see
+Limits, Lemma \ref{limits-lemma-descend-finite-presentation}.
+Then we have the commutative diagram
+$$
+\xymatrix{
+& Y/X \ar[d] \ar[ld]_{b_1/a_1} \ar[rd]^{b_2/a_2} \\
+Y_1/X_1 & V_{1, \lambda}/X_\lambda \ar[l] \ar[r] & Y_2/X_2
+}
+$$
+Thus it suffices to prove the claim for the lower row
+of the diagram and we reduce to the case discussed in the
+next paragraph.
+
+\medskip\noindent
+Assume $Y, X, Y_1, X_1, Y_2, X_2$ are affine of finite type over $\mathbf{Z}$.
+Write $X = \Spec(A)$, $X_i = \Spec(A_i)$. The ring map $A_1 \to A$ corresponding
+to $X \to X_1$ is of finite type and hence we may choose a surjection
+$A_1[x_1, \ldots, x_n] \to A$. Similarly, we may choose a surjection
+$A_2[y_1, \ldots, y_m] \to A$. Set $X'_1 = \Spec(A_1[x_1, \ldots, x_n])$
+and $X'_2 = \Spec(A_2[y_1, \ldots, y_m])$.
+Set $Y'_1 = Y_1 \times_{X_1} X'_1$ and $Y'_2 = Y_2 \times_{X_2} X'_2$.
+We get the following diagram
+$$
+Y_1/X_1 \leftarrow
+Y'_1/X'_1 \leftarrow
+Y/X
+\rightarrow Y'_2/X'_2
+\rightarrow Y_2/X_2
+$$
+Since $X'_1 \to X_1$ and $X'_2 \to X_2$ are flat, the same is true
+for $Y'_1 \to Y_1$ and $Y'_2 \to Y_2$. It follows easily that the
+annihilators of $\tau_{Y'_1/X'_1}$ and $\tau_{Y'_2/X'_2}$ are zero.
+Hence $Y'_1/X'_1$ and $Y'_2/X'_2$ are in $\mathcal{C}_{nice}$.
+Thus the outer morphisms in the displayed diagram are morphisms
+of $\mathcal{C}_{nice}$ for which we know the desired compatibilities.
+Thus it suffices to prove the claim for
+$Y'_1/X'_1 \leftarrow Y/X \rightarrow Y'_2/X'_2$. This reduces us
+to the case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $Y, X, Y_1, X_1, Y_2, X_2$ are affine of finite type over
+$\mathbf{Z}$ and $X \to X_1$ and $X \to X_2$ are closed immersions.
+Consider the open embeddings
+$Y_1 \times_{X_1} X \supset Y \subset X \times_{X_2} Y_2$.
+There is an open neighbourhood $V \subset Y$ of $y$ which is a
+standard open of both $Y_1 \times_{X_1} X$ and $X \times_{X_2} Y_2$.
+This follows from Schemes, Lemma \ref{schemes-lemma-standard-open-two-affines}
+applied to the scheme obtained by glueing $Y_1 \times_{X_1} X$ and
+$X \times_{X_2} Y_2$ along $Y$; details omitted.
+Since $X \times_{X_2} Y_2$ is a closed subscheme of $Y_2$
+we can find a standard open $V_2 \subset Y_2$ such that
+$V_2 \times_{X_2} X = V$. Similarly, we can find a standard open
+$V_1 \subset Y_1$ such that $V_1 \times_{X_1} X = V$.
+After replacing $Y, Y_1, Y_2$ by $V, V_1, V_2$ we reduce to the
+case discussed in the next paragraph.
+
+\medskip\noindent
+Assume $Y, X, Y_1, X_1, Y_2, X_2$ are affine of finite type over
+$\mathbf{Z}$ and $X \to X_1$ and $X \to X_2$ are closed immersions
+and $Y_1 \times_{X_1} X = Y = X \times_{X_2} Y_2$.
+Write $X = \Spec(A)$, $X_i = \Spec(A_i)$, $Y = \Spec(B)$,
+$Y_i = \Spec(B_i)$. Then we can consider the affine schemes
+$$
+X' = \Spec(A_1 \times_A A_2) = \Spec(A')
+\quad\text{and}\quad
+Y' = \Spec(B_1 \times_B B_2) = \Spec(B')
+$$
+Observe that $X' = X_1 \amalg_X X_2$ and $Y' = Y_1 \amalg_Y Y_2$, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-basic-example-pushout}.
+By More on Algebra, Lemma \ref{more-algebra-lemma-fibre-product-finite-type}
+the rings $A'$ and $B'$ are of finite type over $\mathbf{Z}$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-module-over-fibre-product}
we have $B' \otimes_{A'} A_1 = B_1$ and $B' \otimes_{A'} A_2 = B_2$.
+In particular a fibre of $Y' \to X'$ over a point of
+$X' = X_1 \amalg_X X_2$ is always equal to either a fibre of $Y_1 \to X_1$
+or a fibre of $Y_2 \to X_2$. By More on Algebra, Lemma
+\ref{more-algebra-lemma-flat-module-over-fibre-product}
+the ring map $A' \to B'$ is flat. Thus by
+Lemma \ref{lemma-syntomic-quasi-finite} part (3)
+we conclude that $Y'/X'$ is an object of $\mathcal{C}$.
+Consider now the commutative diagram
+$$
+\xymatrix{
+& Y/X \ar[ld]_{b_1/a_1} \ar[rd]^{b_2/a_2} \\
+Y_1/X_1 \ar[rd] & & Y_2/X_2 \ar[ld] \\
+& Y'/X'
+}
+$$
+Now we would be done if $Y'/X'$ is an object of $\mathcal{C}_{nice}$.
+Namely, then pulling back $c_{Y'/X'}$ around the two sides of the
+square, we would obtain the desired conclusion. Now, in fact, it
+is true that $Y'/X'$ is an object of
+$\mathcal{C}_{nice}$\footnote{Namely, the structure
+sheaf $\mathcal{O}_{Y'}$ is a subsheaf of
+$(Y_1 \to Y')_*\mathcal{O}_{Y_1} \times (Y_2 \to Y')_*\mathcal{O}_{Y_2}$.}.
+But it is amusing to note that we don't even need this.
+Namely, the arguments above show that,
+after possibly shrinking all of the schemes
+$X, Y, X_1, Y_1, X_2, Y_2, X', Y'$ we can find some
+$n, d \geq 1$, and extend the diagram like so:
+$$
+\xymatrix{
+& Y/X \ar[ld]_{b_1/a_1} \ar[rd]^{b_2/a_2} \\
+Y_1/X_1 \ar[rd] & & Y_2/X_2 \ar[ld] \\
+& Y'/X' \ar[d] \\
+& Y_{n, d}/X_{n, d}
+}
+$$
+and then we can use the already given argument by pulling
+back from $c_{Y_{n, d}/X_{n, d}}$. This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{A generalization of the different}
+\label{section-different-generalization}
+
+\noindent
+In this section we generalize Definition \ref{definition-different}
+to take into account all cases of ring maps $A \to B$
+where the Dedekind different is defined and $1 \in \mathcal{L}_{B/A}$.
+First we explain the condition ``$A \to B$ maps nonzerodivisors to
+nonzerodivisors and induces a flat map $Q(A) \to Q(A) \otimes_A B$''.
+
+\begin{lemma}
+\label{lemma-explain-condition}
+Let $A \to B$ be a map of Noetherian rings. Consider the conditions
+\begin{enumerate}
+\item nonzerodivisors of $A$ map to nonzerodivisors of $B$,
+\item (1) holds and $Q(A) \to Q(A) \otimes_A B$ is flat,
+\item $A \to B_\mathfrak q$ is flat for every
+$\mathfrak q \in \text{Ass}(B)$,
+\item (3) holds and $A \to B_\mathfrak q$ is flat for every $\mathfrak q$
+lying over an element in $\text{Ass}(A)$.
+\end{enumerate}
+Then we have the following implications
+$$
+\xymatrix{
+(1) & (2) \ar@{=>}[l] \ar@{=>}[d] \\
+(3) \ar@{=>}[u] & (4) \ar@{=>}[l]
+}
+$$
+If going up holds for $A \to B$ then (2) and (4) are equivalent.
+\end{lemma}
+
+\begin{proof}
+The horizontal implications in the diagram are trivial.
+Let $S \subset A$ be the set of nonzerodivisors so that
+$Q(A) = S^{-1}A$ and $Q(A) \otimes_A B = S^{-1}B$. Recall that
+$S = A \setminus \bigcup_{\mathfrak p \in \text{Ass}(A)} \mathfrak p$
+by Algebra, Lemma \ref{algebra-lemma-ass-zero-divisors}.
+Let $\mathfrak q \subset B$ be a prime lying over $\mathfrak p \subset A$.
+
+\medskip\noindent
+Assume (2). If $\mathfrak q \in \text{Ass}(B)$ then
+$\mathfrak q$ consists of zerodivisors, hence (1) implies
+the same is true for $\mathfrak p$. Hence
+$\mathfrak p$ corresponds to a prime of $S^{-1}A$.
+Hence $A \to B_\mathfrak q$ is flat by our assumption (2).
+If $\mathfrak q$ lies over an associated prime $\mathfrak p$
+of $A$, then certainly $\mathfrak p \in \Spec(S^{-1}A)$ and the
+same argument works.
+
+\medskip\noindent
+Assume (3). Let $f \in A$ be a nonzerodivisor. If $f$ were a zerodivisor
+on $B$, then $f$ is contained in an associated prime $\mathfrak q$
+of $B$. Since $A \to B_\mathfrak q$ is flat by assumption, we conclude that
+$\mathfrak p$ is an associated prime of $A$ by
+Algebra, Lemma \ref{algebra-lemma-bourbaki}. This would imply that
+$f$ is a zerodivisor on $A$, a contradiction.
+
+\medskip\noindent
+Assume (4) and going up for $A \to B$. We already know (1) holds.
+If $\mathfrak q$ corresponds to a prime of $S^{-1}B$ then $\mathfrak p$
+is contained in an associated prime $\mathfrak p'$ of $A$. By going up
+there exists a prime $\mathfrak q'$ containing $\mathfrak q$ and lying
over $\mathfrak p'$. Then $A \to B_{\mathfrak q'}$ is flat by
+(4). Hence $A \to B_{\mathfrak q}$ is flat as a localization.
+Thus $A \to S^{-1}B$ is flat and so is $S^{-1}A \to S^{-1}B$, see
+Algebra, Lemma \ref{algebra-lemma-flat-localization}.
+\end{proof}
+
+\begin{remark}
+\label{remark-different-generalization}
+We can generalize Definition \ref{definition-different}.
+Suppose that $f : Y \to X$ is a quasi-finite morphism of Noetherian schemes
+with the following properties
+\begin{enumerate}
+\item the open $V \subset Y$ where $f$ is flat contains
+$\text{Ass}(\mathcal{O}_Y)$ and $f^{-1}(\text{Ass}(\mathcal{O}_X))$,
+\item the trace element $\tau_{V/X}$ comes from a section
+$\tau \in \Gamma(Y, \omega_{Y/X})$.
+\end{enumerate}
+Condition (1) implies that $V$ contains the associated points of
+$\omega_{Y/X}$ by Lemma \ref{lemma-dualizing-associated-primes}.
+In particular, $\tau$ is unique if it exists
+(Divisors, Lemma \ref{divisors-lemma-restriction-injective-open-contains-ass}).
+Given $\tau$ we can define the different $\mathfrak{D}_f$ as the annihilator of
+$\Coker(\tau : \mathcal{O}_Y \to \omega_{Y/X})$. This agrees with the
+Dedekind different in many cases (Lemma \ref{lemma-agree-dedekind}).
+However, for non-flat maps between non-normal rings, this generalization
+no longer measures ramification of the morphism, see
+Example \ref{example-no-different}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-agree-dedekind}
+Assume the Dedekind different is defined for $A \to B$.
+Set $X = \Spec(A)$ and $Y = \Spec(B)$. The generalization of
+Remark \ref{remark-different-generalization}
+applies to the morphism $f : Y \to X$ if and only if
+$1 \in \mathcal{L}_{B/A}$ (e.g., if $A$ is normal, see
+Lemma \ref{lemma-dedekind-different-ideal}).
+In this case $\mathfrak{D}_{B/A}$ is an ideal of $B$ and we have
+$$
+\mathfrak{D}_f = \widetilde{\mathfrak{D}_{B/A}}
+$$
+as coherent ideal sheaves on $Y$.
+\end{lemma}
+
+\begin{proof}
+As the Dedekind different for $A \to B$ is defined we can apply
+Lemma \ref{lemma-explain-condition} to see that
+$Y \to X$ satisfies condition (1) of
+Remark \ref{remark-different-generalization}.
+Recall that there is a canonical isomorphism
+$c : \mathcal{L}_{B/A} \to \omega_{B/A}$, see
+Lemma \ref{lemma-dedekind-complementary-module}.
+Let $K = Q(A)$ and $L = K \otimes_A B$ as above.
+By construction the map $c$ fits into a commutative diagram
+$$
+\xymatrix{
+\mathcal{L}_{B/A} \ar[r] \ar[d]_c & L \ar[d] \\
+\omega_{B/A} \ar[r] & \Hom_K(L, K)
+}
+$$
+where the right vertical arrow sends $x \in L$ to the map
+$y \mapsto \text{Trace}_{L/K}(xy)$ and the lower horizontal
+arrow is the base change map (\ref{equation-bc-dualizing}) for $\omega_{B/A}$.
+We can factor the lower horizontal map as
+$$
+\omega_{B/A} = \Gamma(Y, \omega_{Y/X})
+\to \Gamma(V, \omega_{V/X}) \to \Hom_K(L, K)
+$$
+Since all associated points of $\omega_{V/X}$
+map to associated primes of $A$
+(Lemma \ref{lemma-dualizing-associated-primes})
+we see that the second map is injective.
+The element $\tau_{V/X}$ maps to $\text{Trace}_{L/K}$ in
+$\Hom_K(L, K)$ by the very definition of trace elements
+(Definition \ref{definition-trace-element}).
+Thus $\tau$ as in condition (2) of
+Remark \ref{remark-different-generalization}
+exists if and only if $1 \in \mathcal{L}_{B/A}$ and then
+$\tau = c(1)$. In this case, by Lemma \ref{lemma-dedekind-different-ideal}
+we see that $\mathfrak{D}_{B/A} \subset B$.
+Finally, the agreement of $\mathfrak{D}_f$ with $\mathfrak{D}_{B/A}$
+is immediate from the definitions and the fact $\tau = c(1)$ seen above.
+\end{proof}
+
+\begin{example}
+\label{example-no-different}
+Let $k$ be a field. Let $A = k[x, y]/(xy)$ and $B = k[u, v]/(uv)$ and let
+$A \to B$ be given by $x \mapsto u^n$ and $y \mapsto v^m$ for some
+$n, m \in \mathbf{N}$ prime to the characteristic of $k$. Then
+$A_{x + y} \to B_{x + y}$ is (finite) \'etale hence we are in the situation
+where the Dedekind different is defined. A computation shows that
+$$
+\text{Trace}_{L/K}(1) = (nx + my)/(x + y),\quad
+\text{Trace}_{L/K}(u^i) = 0,\quad \text{Trace}_{L/K}(v^j) = 0
+$$
+for $1 \leq i < n$ and $1 \leq j < m$. We conclude that
+$1 \in \mathcal{L}_{B/A}$ if and only if $n = m$. Moreover, a
+computation shows that if $n = m$, then $\mathcal{L}_{B/A} = B$
+and the Dedekind different is $B$ as well. In other words, we find that
+the different of Remark \ref{remark-different-generalization}
+is defined for $\Spec(B) \to \Spec(A)$
+if and only if $n = m$, and in this case the different is the
+unit ideal. Thus we see that in non-flat cases the nonvanishing
+of the different does not guarantee the morphism is \'etale or unramified.
+\end{example}
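+
+\noindent
+Here is a sketch of the trace computations in
+Example \ref{example-no-different}, with notation as in the example.
+The minimal primes of $A$ are $(x)$ and $(y)$ and hence
+$$
+K = Q(A) = k(x) \times k(y),
+\quad
+L = K \otimes_A B = k(u) \times k(v)
+$$
+where $k(u)/k(x)$ has degree $n$ via $x = u^n$ and $k(v)/k(y)$
+has degree $m$ via $y = v^m$. Thus
+$\text{Trace}_{L/K}(1) = (n, m)$, which is the element
+$(nx + my)/(x + y)$ of $K$: its image in the first factor is obtained
+by setting $y = 0$ and its image in the second factor by setting
+$x = 0$. For $1 \leq i < n$ multiplication by $u^i$ on the basis
+$1, u, \ldots, u^{n - 1}$ of $k(u)$ over $k(x)$ has vanishing
+diagonal, and $u^i$ maps to zero in $k(v)$, whence
+$\text{Trace}_{L/K}(u^i) = 0$.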
+
+
+
+
+
+
+
+\section{Comparison with duality theory}
+\label{section-comparison}
+
+\noindent
+In this section we compare the elementary algebraic constructions
+above with the constructions in the chapter on duality theory
+for schemes.
+
+\begin{lemma}
+\label{lemma-compare-dualizing}
+Let $f : Y \to X$ be a quasi-finite separated morphism of Noetherian schemes.
+For every pair of affine opens $\Spec(B) = V \subset Y$,
+$\Spec(A) = U \subset X$ with $f(V) \subset U$ there is an isomorphism
+$$
+H^0(V, f^!\mathcal{O}_X) = \omega_{B/A}
+$$
+where $f^!$ is as in
+Duality for Schemes, Section \ref{duality-section-upper-shriek}.
+These isomorphisms are compatible with restriction maps and define a canonical
+isomorphism $H^0(f^!\mathcal{O}_X) = \omega_{Y/X}$ with
+$\omega_{Y/X}$ as in Remark \ref{remark-relative-dualizing-for-quasi-finite}.
+Similarly, if $f : Y \to X$ is a quasi-finite morphism of schemes of
+finite type over a Noetherian base $S$ endowed with a dualizing complex
+$\omega_S^\bullet$, then $H^0(f_{new}^!\mathcal{O}_X) = \omega_{Y/X}$.
+\end{lemma}
+
+\begin{proof}
+By Zariski's main theorem we can choose a factorization $f = f' \circ j$
+where $j : Y \to Y'$ is an open immersion and $f' : Y' \to X$ is a finite
+morphism, see More on Morphisms, Lemma
+\ref{more-morphisms-lemma-quasi-finite-separated-pass-through-finite}.
+By our construction in
+Duality for Schemes, Lemma \ref{duality-lemma-shriek-well-defined} we have
+$f^! = j^* \circ a'$ where
+$a' : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_{Y'})$
+is the right adjoint to $Rf'_*$ of
+Duality for Schemes, Lemma \ref{duality-lemma-twisted-inverse-image}.
+By Duality for Schemes, Lemma \ref{duality-lemma-finite-twisted}
+we see that
+$\Phi(a'(\mathcal{O}_X)) = R\SheafHom(f'_*\mathcal{O}_{Y'}, \mathcal{O}_X)$ in
+$D_\QCoh^+(f'_*\mathcal{O}_{Y'})$. In particular $a'(\mathcal{O}_X)$ has
+vanishing cohomology sheaves in degrees $< 0$. The zeroth cohomology sheaf
+is determined by the isomorphism
+$$
+f'_*H^0(a'(\mathcal{O}_X)) =
+\SheafHom_{\mathcal{O}_X}(f'_*\mathcal{O}_{Y'}, \mathcal{O}_X)
+$$
+as $f'_*\mathcal{O}_{Y'}$-modules via the equivalence of
+Morphisms, Lemma \ref{morphisms-lemma-affine-equivalence-modules}.
+Writing $(f')^{-1}U = V' = \Spec(B')$, we obtain
+$$
+H^0(V', a'(\mathcal{O}_X)) = \Hom_A(B', A).
+$$
+As the zeroth cohomology sheaf of $a'(\mathcal{O}_X)$
+is a quasi-coherent module we find that
+the restriction to $V$ is given by
+$\omega_{B/A} = \Hom_A(B', A) \otimes_{B'} B$ as desired.
+
+\medskip\noindent
+The statement about restriction maps signifies that the restriction mappings
+of the quasi-coherent $\mathcal{O}_{Y'}$-module $H^0(a'(\mathcal{O}_X))$
+for opens in $Y'$ agree with the maps defined in
+Lemma \ref{lemma-localize-dualizing}
+for the modules $\omega_{B/A}$ via the isomorphisms given above.
+This is clear.
+
+\medskip\noindent
+Let $f : Y \to X$ be a quasi-finite morphism of schemes of finite type
+over a Noetherian base $S$ endowed with a dualizing complex $\omega_S^\bullet$.
+Consider opens $V \subset Y$ and $U \subset X$ with $f(V) \subset U$
+and $V$ and $U$ separated over $S$. Denote $f|_V : V \to U$ the restriction
+of $f$. By the discussion above and
+Duality for Schemes, Lemma \ref{duality-lemma-duality-bootstrap}
+there are canonical isomorphisms
+$$
+H^0(f_{new}^!\mathcal{O}_X)|_V = H^0((f|_V)^!\mathcal{O}_U) = \omega_{V/U} =
+\omega_{Y/X}|_V
+$$
+We omit the verification that these isomorphisms glue to a global
+isomorphism $H^0(f_{new}^!\mathcal{O}_X) \to \omega_{Y/X}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-trace}
+Let $f : Y \to X$ be a finite flat morphism of Noetherian schemes.
+The map
+$$
+\text{Trace}_f : f_*\mathcal{O}_Y \longrightarrow \mathcal{O}_X
+$$
+of Section \ref{section-discriminant}
+corresponds to a map $\mathcal{O}_Y \to f^!\mathcal{O}_X$ (see proof).
+Denote $\tau_{Y/X} \in H^0(Y, f^!\mathcal{O}_X)$ the image of $1$.
+Via the isomorphism $H^0(f^!\mathcal{O}_X) = \omega_{Y/X}$ of
+Lemma \ref{lemma-compare-dualizing}
+this agrees with the construction in
+Remark \ref{remark-relative-dualizing-for-flat-quasi-finite}.
+\end{lemma}
+
+\begin{proof}
+The functor $f^!$ is defined in
+Duality for Schemes, Section \ref{duality-section-upper-shriek}.
+Since $f$ is finite (and hence proper), we see that $f^!$ is given by
+the right adjoint to pushforward for $f$. In
+Duality for Schemes, Section \ref{duality-section-duality-finite}
+we have made this adjoint explicit. In particular,
+the object $f^!\mathcal{O}_X$ consists of a single
+cohomology sheaf placed in degree $0$ and for this sheaf we have
+$$
+f_*f^!\mathcal{O}_X =
+\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{O}_X)
+$$
+To see this we use also that $f_*\mathcal{O}_Y$ is finite locally free
+as $f$ is a finite flat morphism of Noetherian schemes
+and hence all higher Ext sheaves are zero. Some details omitted.
+Thus finally
+$$
+\text{Trace}_f \in
+\Hom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{O}_X) =
+\Gamma(X, f_*f^!\mathcal{O}_X) =
+\Gamma(Y, f^!\mathcal{O}_X)
+$$
+On the other hand, we have $f^!\mathcal{O}_X = \omega_{Y/X}$
+by the identification of Lemma \ref{lemma-compare-dualizing}.
+Thus we now have two elements, namely $\text{Trace}_f$
+and $\tau_{Y/X}$ from
+Remark \ref{remark-relative-dualizing-for-flat-quasi-finite} in
+$$
+\Gamma(Y, f^!\mathcal{O}_X) = \Gamma(Y, \omega_{Y/X})
+$$
+and the lemma says these elements are the same.
+
+\medskip\noindent
+Let $U = \Spec(A) \subset X$ be an affine open with inverse image
+$V = \Spec(B) \subset Y$. Since $f$ is finite, we see that
+$A \to B$ is finite and hence $\omega_{Y/X}(V) = \Hom_A(B, A)$
+by construction. This isomorphism agrees with the identification
+of $f_*f^!\mathcal{O}_X$ with
+$\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{O}_X)$ discussed
+above. Hence the agreement of $\text{Trace}_f$ and $\tau_{Y/X}$
+follows from the fact that $\tau_{B/A} = \text{Trace}_{B/A}$
+by Lemma \ref{lemma-finite-flat-trace}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Quasi-finite Gorenstein morphisms}
+\label{section-gorenstein-lci}
+
+\noindent
+This section discusses quasi-finite Gorenstein morphisms.
+
+\begin{lemma}
+\label{lemma-gorenstein-quasi-finite}
+Let $f : Y \to X$ be a quasi-finite morphism of Noetherian schemes.
+The following are equivalent
+\begin{enumerate}
+\item $f$ is Gorenstein,
+\item $f$ is flat and the fibres of $f$ are Gorenstein,
+\item $f$ is flat and $\omega_{Y/X}$ is invertible
+(Remark \ref{remark-relative-dualizing-for-quasi-finite}),
+\item for every $y \in Y$ there are affine opens
+$y \in V = \Spec(B) \subset Y$, $U = \Spec(A) \subset X$
+with $f(V) \subset U$ such that $A \to B$ is flat
+and $\omega_{B/A}$ is an invertible $B$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (1) and (2) are equivalent by definition. Parts (3) and (4)
+are equivalent by the construction of $\omega_{Y/X}$ in
+Remark \ref{remark-relative-dualizing-for-quasi-finite}.
+Thus we have to show that (1)-(2) are equivalent to (3)-(4).
+
+\medskip\noindent
+First proof. Working affine locally we can assume $f$ is a separated
+morphism and apply Lemma \ref{lemma-compare-dualizing} to see that
+$\omega_{Y/X}$ is the zeroth cohomology sheaf of $f^!\mathcal{O}_X$.
+Under both assumptions $f$ is flat and quasi-finite, hence
+$f^!\mathcal{O}_X$ is isomorphic to $\omega_{Y/X}[0]$, see
+Duality for Schemes, Lemma \ref{duality-lemma-flat-quasi-finite-shriek}. Hence
+the equivalence follows from
+Duality for Schemes, Lemma
+\ref{duality-lemma-affine-flat-Noetherian-gorenstein}.
+
+\medskip\noindent
+Second proof. By Lemma \ref{lemma-characterize-invertible},
+we see that it suffices to prove the equivalence of
+(2) and (3) when $X$ is the spectrum of a field $k$.
+Then $Y = \Spec(B)$ where $B$ is a finite $k$-algebra.
+In this case $\omega_{B/A} = \omega_{B/k} = \Hom_k(B, k)$
+placed in degree $0$ is a dualizing complex for $B$, see
+Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing-finite}.
+Thus the equivalence follows from
+Dualizing Complexes, Lemma \ref{dualizing-lemma-gorenstein}.
+\end{proof}
+
+\begin{remark}
+\label{remark-collect-results-qf-gorenstein}
+Let $f : Y \to X$ be a quasi-finite Gorenstein morphism of Noetherian schemes.
+Let $\mathfrak D_f \subset \mathcal{O}_Y$ be the different and let
+$R \subset Y$ be the closed subscheme cut out by $\mathfrak D_f$.
+Then we have
+\begin{enumerate}
+\item $\mathfrak D_f$ is a locally principal ideal,
+\item $R$ is a locally principal closed subscheme,
+\item $\mathfrak D_f$ is affine locally the same as the Noether different,
+\item formation of $R$ commutes with base change,
+\item if $f$ is finite, then the norm of $R$ is the discriminant of $f$, and
+\item if $f$ is \'etale at the associated points of $Y$, then
+$R$ is an effective Cartier divisor and $\omega_{Y/X} = \mathcal{O}_Y(R)$.
+\end{enumerate}
+This follows from Lemmas \ref{lemma-flat-gorenstein-agree-noether},
+\ref{lemma-base-change-different}, and
+\ref{lemma-norm-different-is-discriminant}.
+\end{remark}
+
+\begin{remark}
+\label{remark-collect-results-qf-gorenstein-two}
+Let $S$ be a Noetherian scheme endowed with a dualizing complex
+$\omega_S^\bullet$. Let $f : Y \to X$ be a quasi-finite Gorenstein
+morphism of compactifyable schemes over $S$. Assume moreover
+that $Y$ and $X$ are Cohen-Macaulay and that $f$ is \'etale at the
+generic points of $Y$. Then we can combine
+Duality for Schemes, Remark
+\ref{duality-remark-CM-morphism-compare-dualizing} and
+Remark \ref{remark-collect-results-qf-gorenstein}
+to see that we have a canonical isomorphism
+$$
+\omega_Y = f^*\omega_X \otimes_{\mathcal{O}_Y} \omega_{Y/X} =
+f^*\omega_X \otimes_{\mathcal{O}_Y} \mathcal{O}_Y(R)
+$$
+of $\mathcal{O}_Y$-modules. If further $f$ is finite,
+then the isomorphism $\mathcal{O}_Y(R) = \omega_{Y/X}$ comes
+from the global section $\tau_{Y/X} \in H^0(Y, \omega_{Y/X})$
+which corresponds via duality to the map
+$\text{Trace}_f : f_*\mathcal{O}_Y \to \mathcal{O}_X$, see
+Lemma \ref{lemma-compare-trace}.
+\end{remark}
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/divisors.tex b/books/stacks/divisors.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6045f47913153b2d8428168a8077520e063f5de6
--- /dev/null
+++ b/books/stacks/divisors.tex
@@ -0,0 +1,9267 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Divisors}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we study some very basic questions related
+to defining divisors, etc. A basic reference is \cite{EGA}.
+
+
+
+\section{Associated points}
+\label{section-associated}
+
+\noindent
+Let $R$ be a ring and let $M$ be an $R$-module.
+Recall that a prime $\mathfrak p \subset R$ is {\it associated} to $M$
+if there exists an element of $M$ whose annihilator is $\mathfrak p$.
+See Algebra, Definition \ref{algebra-definition-associated}.
+Here is the definition of associated points
+for quasi-coherent sheaves on schemes
+as given in \cite[IV Definition 3.1.1]{EGA}.
+
+\begin{definition}
+\label{definition-associated}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+\begin{enumerate}
+\item We say $x \in X$ is {\it associated} to $\mathcal{F}$
+if the maximal ideal
+$\mathfrak m_x$ is associated to the $\mathcal{O}_{X, x}$-module
+$\mathcal{F}_x$.
+\item We denote $\text{Ass}(\mathcal{F})$ or $\text{Ass}_X(\mathcal{F})$
+the set of associated points of $\mathcal{F}$.
+\item The {\it associated points of $X$} are the associated
+points of $\mathcal{O}_X$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+These definitions are most useful when $X$ is locally Noetherian
+and $\mathcal{F}$ of finite type.
+For example it may happen that a generic point of an irreducible
+component of $X$ is not associated to $X$, see
+Example \ref{example-no-associated-prime}.
+In the non-Noetherian case it may be more convenient to use weakly
+associated points, see
+Section \ref{section-weakly-associated}.
+Let us link the scheme theoretic notion with the algebraic notion
+on affine opens; note that this correspondence works perfectly only
+for locally Noetherian schemes.
+
+\begin{lemma}
+\label{lemma-associated-affine-open}
+Let $X$ be a scheme. Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+Let $\Spec(A) = U \subset X$ be an affine open, and set
+$M = \Gamma(U, \mathcal{F})$.
+Let $x \in U$, and let $\mathfrak p \subset A$ be the corresponding prime.
+\begin{enumerate}
+\item If $\mathfrak p$ is associated to $M$, then $x$ is associated
+to $\mathcal{F}$.
+\item If $\mathfrak p$ is finitely generated, then the converse holds
+as well.
+\end{enumerate}
+In particular, if $X$ is locally Noetherian, then the equivalence
+$$
+\mathfrak p \in \text{Ass}(M) \Leftrightarrow x \in \text{Ass}(\mathcal{F})
+$$
+holds for all pairs $(\mathfrak p, x)$ as above.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Algebra, Lemma \ref{algebra-lemma-associated-primes-localize}.
+But we can also argue directly as follows.
+Suppose $\mathfrak p$ is associated to $M$.
+Then there exists an $m \in M$ whose annihilator is $\mathfrak p$.
+Since localization is exact we see that
+$\mathfrak pA_{\mathfrak p}$ is the annihilator of
+$m/1 \in M_{\mathfrak p}$. Since $M_{\mathfrak p} = \mathcal{F}_x$
+(Schemes, Lemma \ref{schemes-lemma-spec-sheaves})
+we conclude that $x$ is associated to $\mathcal{F}$.
+
+\medskip\noindent
+Conversely, assume that $x$ is associated to $\mathcal{F}$,
+and $\mathfrak p$ is finitely generated.
+As $x$ is associated to $\mathcal{F}$
+there exists an element $m' \in M_{\mathfrak p}$ whose
+annihilator is $\mathfrak pA_{\mathfrak p}$. Write
+$m' = m/f$ for some $f \in A$, $f \not \in \mathfrak p$.
+The annihilator $I$ of $m$ is an ideal of $A$ such that
+$IA_{\mathfrak p} = \mathfrak pA_{\mathfrak p}$. Hence
+$I \subset \mathfrak p$, and $(\mathfrak p/I)_{\mathfrak p} = 0$.
+Since $\mathfrak p$ is finitely generated,
+there exists a $g \in A$, $g \not \in \mathfrak p$ such that
+$g(\mathfrak p/I) = 0$. Hence the annihilator of $gm$ is
+$\mathfrak p$ and we win.
+
+\medskip\noindent
+If $X$ is locally Noetherian, then $A$ is Noetherian
+(Properties, Lemma \ref{properties-lemma-locally-Noetherian})
+and $\mathfrak p$ is always finitely generated.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-support}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Then $\text{Ass}(\mathcal{F}) \subset \text{Supp}(\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ses-ass}
+Let $X$ be a scheme.
+Let $0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
+be a short exact sequence of quasi-coherent sheaves on $X$.
+Then
+$\text{Ass}(\mathcal{F}_2) \subset
+\text{Ass}(\mathcal{F}_1) \cup \text{Ass}(\mathcal{F}_3)$
+and
+$\text{Ass}(\mathcal{F}_1) \subset \text{Ass}(\mathcal{F}_2)$.
+\end{lemma}
+
+\begin{proof}
+For every point $x \in X$ the sequence of stalks
+$0 \to \mathcal{F}_{1, x} \to \mathcal{F}_{2, x} \to \mathcal{F}_{3, x} \to 0$
+is a short exact sequence of $\mathcal{O}_{X, x}$-modules.
+Hence the lemma follows from
+Algebra, Lemma \ref{algebra-lemma-ass}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-ass}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Then $\text{Ass}(\mathcal{F}) \cap U$ is finite for
+every quasi-compact open $U \subset X$.
+\end{lemma}
+
+\begin{proof}
+This is true because the set of associated primes of a finite module over
+a Noetherian ring is finite, see
+Algebra, Lemma \ref{algebra-lemma-finite-ass}.
+To translate from schemes to algebra use that $U$ is a finite union of
+affine opens, each of these opens is the spectrum of a Noetherian ring
+(Properties, Lemma \ref{properties-lemma-locally-Noetherian}),
+$\mathcal{F}$ corresponds to a finite module over this ring
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-Noetherian}),
+and finally use
+Lemma \ref{lemma-associated-affine-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-zero}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{F}$ be a
+quasi-coherent $\mathcal{O}_X$-module. Then
+$$
+\mathcal{F} = 0 \Leftrightarrow \text{Ass}(\mathcal{F}) = \emptyset.
+$$
+\end{lemma}
+
+\begin{proof}
+If $\mathcal{F} = 0$, then $\text{Ass}(\mathcal{F}) = \emptyset$
+by definition. Conversely, if $\text{Ass}(\mathcal{F}) = \emptyset$,
+then $\mathcal{F} = 0$ by
+Algebra, Lemma \ref{algebra-lemma-ass-zero}.
+To translate from schemes to algebra, restrict to any affine and use
+Lemma \ref{lemma-associated-affine-open}.
+\end{proof}
+
+\begin{example}
+\label{example-no-associated-prime}
+Let $k$ be a field. The ring $R = k[x_1, x_2, x_3, \ldots]/(x_i^2)$
+is local with locally nilpotent maximal ideal $\mathfrak m$.
+There exists no element of $R$ which has annihilator $\mathfrak m$.
+Hence $\text{Ass}(R) = \emptyset$, and $X = \Spec(R)$
+is an example of a scheme which has no associated points.
+\end{example}
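+
+\noindent
+To see why no element of $R$ in Example \ref{example-no-associated-prime}
+has annihilator $\mathfrak m$, note that any $f \in R$ is the class of
+a polynomial in finitely many of the variables, say $x_1, \ldots, x_N$,
+all of whose monomials are squarefree. If $f \not = 0$, then
+$x_{N + 1} f \not = 0$ in $R$, since multiplying these monomials by the
+new variable $x_{N + 1}$ keeps them squarefree and distinct. Thus the
+annihilator of a nonzero element does not contain
+$x_{N + 1} \in \mathfrak m$, while the annihilator of $0$ is all of $R$;
+in neither case is the annihilator equal to $\mathfrak m$.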
+
+\begin{lemma}
+\label{lemma-restriction-injective-open-contains-ass}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{F}$ be a quasi-coherent
+$\mathcal{O}_X$-module. If $U \subset X$ is open and
+$\text{Ass}(\mathcal{F}) \subset U$, then
+$\Gamma(X, \mathcal{F}) \to \Gamma(U, \mathcal{F})$ is injective.
+\end{lemma}
+
+\begin{proof}
+Let $s \in \Gamma(X, \mathcal{F})$ be a section which restricts to zero on $U$.
+Let $\mathcal{F}' \subset \mathcal{F}$ be the image of the map
+$\mathcal{O}_X \to \mathcal{F}$ defined by $s$. Then
+$\text{Supp}(\mathcal{F}') \cap U = \emptyset$. On the other hand,
+$\text{Ass}(\mathcal{F}') \subset \text{Ass}(\mathcal{F})$
+by Lemma \ref{lemma-ses-ass}. Since also
+$\text{Ass}(\mathcal{F}') \subset \text{Supp}(\mathcal{F}')$
+(Lemma \ref{lemma-ass-support}) we conclude
+$\text{Ass}(\mathcal{F}') = \emptyset$.
+Hence $\mathcal{F}' = 0$ by Lemma \ref{lemma-ass-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-minimal-support-in-ass}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $x \in \text{Supp}(\mathcal{F})$ be a point in the support
+of $\mathcal{F}$ which is not a specialization of another point of
+$\text{Supp}(\mathcal{F})$. Then $x \in \text{Ass}(\mathcal{F})$.
+In particular, any generic point of an irreducible component of $X$
+is an associated point of $X$.
+\end{lemma}
+
+\begin{proof}
+Since $x \in \text{Supp}(\mathcal{F})$ the module $\mathcal{F}_x$
+is not zero. Hence
+$\text{Ass}(\mathcal{F}_x) \subset \Spec(\mathcal{O}_{X, x})$
+is nonempty by
+Algebra, Lemma \ref{algebra-lemma-ass-zero}.
+On the other hand, by assumption
+$\text{Supp}(\mathcal{F}_x) = \{\mathfrak m_x\}$.
+Since
+$\text{Ass}(\mathcal{F}_x) \subset \text{Supp}(\mathcal{F}_x)$
+(Algebra, Lemma \ref{algebra-lemma-ass-support})
+we see that $\mathfrak m_x$ is associated to $\mathcal{F}_x$
+and we win.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of
+More on Algebra, Lemma \ref{more-algebra-lemma-check-injective-on-ass}.
+
+\begin{lemma}
+\label{lemma-check-injective-on-ass}
+Let $X$ be a locally Noetherian scheme. Let
+$\varphi : \mathcal{F} \to \mathcal{G}$ be a map of
+quasi-coherent $\mathcal{O}_X$-modules.
+Assume that for every $x \in X$
+at least one of the following happens
+\begin{enumerate}
+\item $\mathcal{F}_x \to \mathcal{G}_x$ is injective, or
+\item $x \not \in \text{Ass}(\mathcal{F})$.
+\end{enumerate}
+Then $\varphi$ is injective.
+\end{lemma}
+
+\begin{proof}
+If $x \in \text{Ass}(\Ker(\varphi))$, then $\Ker(\varphi)_x \not = 0$,
+so (1) fails at $x$, and $x \in \text{Ass}(\mathcal{F})$ by
+Lemma \ref{lemma-ses-ass}, so (2) fails at $x$ as well.
+Thus $\text{Ass}(\Ker(\varphi)) = \emptyset$
+and hence $\Ker(\varphi) = 0$ by Lemma \ref{lemma-ass-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-isomorphism-via-depth-and-ass}
+Let $X$ be a locally Noetherian scheme. Let
+$\varphi : \mathcal{F} \to \mathcal{G}$ be a map of
+quasi-coherent $\mathcal{O}_X$-modules. Assume $\mathcal{F}$ is coherent
+and that for every $x \in X$ one of the following happens
+\begin{enumerate}
+\item $\mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, or
+\item $\text{depth}(\mathcal{F}_x) \geq 2$ and
+$x \not \in \text{Ass}(\mathcal{G})$.
+\end{enumerate}
+Then $\varphi$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This is a translation of More on Algebra, Lemma
+\ref{more-algebra-lemma-check-isomorphism-via-depth-and-ass}
+into the language of schemes.
+\end{proof}
+
+
+
+
+\section{Morphisms and associated points}
+\label{section-morphisms-associated}
+
+\noindent
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_X$-modules.
+If $s \in S$ is a point, then it is often convenient to
+denote $\mathcal{F}_s$ the $\mathcal{O}_{X_s}$-module
+one gets by pulling back $\mathcal{F}$ by the morphism
+$i_s : X_s \to X$. Here $X_s$ is the scheme theoretic fibre of $f$ over $s$.
+In a formula
+$$
+\mathcal{F}_s = i_s^*\mathcal{F}
+$$
+Of course, this notation clashes with the already existing notation
+for the stalk of $\mathcal{F}$ at a point $x \in X$ if $f = \text{id}_X$.
+However, the notation is often convenient, as in the formulation of
+the following lemma.
+
+\begin{lemma}
+\label{lemma-bourbaki}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$ which is flat over $S$.
+Let $\mathcal{G}$ be a quasi-coherent sheaf on $S$.
+Then we have
+$$
+\text{Ass}_X(\mathcal{F} \otimes_{\mathcal{O}_X} f^*\mathcal{G})
+\supset
+\bigcup\nolimits_{s \in \text{Ass}_S(\mathcal{G})}
+\text{Ass}_{X_s}(\mathcal{F}_s)
+$$
+and equality holds if $S$ is locally Noetherian (for the notation
+$\mathcal{F}_s$ see above).
+\end{lemma}
+
+\begin{proof}
+Let $x \in X$ and let $s = f(x) \in S$.
+Set $B = \mathcal{O}_{X, x}$, $A = \mathcal{O}_{S, s}$,
+$N = \mathcal{F}_x$, and $M = \mathcal{G}_s$.
+Note that the stalk of $\mathcal{F} \otimes_{\mathcal{O}_X} f^*\mathcal{G}$
+at $x$ is equal to the $B$-module $M \otimes_A N$. Hence
+$x \in \text{Ass}_X(\mathcal{F} \otimes_{\mathcal{O}_X} f^*\mathcal{G})$
+if and only if $\mathfrak m_B$ is in $\text{Ass}_B(M \otimes_A N)$.
+Similarly $s \in \text{Ass}_S(\mathcal{G})$ and
+$x \in \text{Ass}_{X_s}(\mathcal{F}_s)$ if and only if
+$\mathfrak m_A \in \text{Ass}_A(M)$ and
+$\mathfrak m_B/\mathfrak m_A B \in
+\text{Ass}_{B \otimes \kappa(\mathfrak m_A)}(N \otimes \kappa(\mathfrak m_A))$.
+Thus the lemma follows from
+Algebra, Lemma \ref{algebra-lemma-bourbaki-fibres}.
+\end{proof}
+
+
+
+
+
+\section{Embedded points}
+\label{section-embedded}
+
+\noindent
+Let $R$ be a ring and let $M$ be an $R$-module.
+Recall that a prime $\mathfrak p \subset R$ is an
+{\it embedded associated prime} of $M$ if it is an associated prime of
+$M$ which is not minimal among the associated primes of $M$. See
+Algebra, Definition \ref{algebra-definition-embedded-primes}.
+Here is the definition of embedded associated points
+for quasi-coherent sheaves on schemes
+as given in \cite[IV Definition 3.1.1]{EGA}.
+
+\begin{definition}
+\label{definition-embedded}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+\begin{enumerate}
+\item An {\it embedded associated point} of $\mathcal{F}$
+is an associated point which is not maximal among the
+associated points of $\mathcal{F}$, i.e., it is the specialization
+of another associated point of $\mathcal{F}$.
+\item A point $x$ of $X$ is called an {\it embedded point}
+if $x$ is an embedded associated point of $\mathcal{O}_X$.
+\item An {\it embedded component} of $X$ is an irreducible
+closed subset $Z = \overline{\{x\}}$ where $x$ is an embedded
+point of $X$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In the Noetherian case when $\mathcal{F}$ is coherent we have
+the following.
+
+\begin{lemma}
+\label{lemma-embedded}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Then
+\begin{enumerate}
+\item the generic points of irreducible components of
+$\text{Supp}(\mathcal{F})$ are associated points of $\mathcal{F}$, and
+\item an associated point of $\mathcal{F}$ is embedded if and only
+if it is not a generic point of an irreducible component
+of $\text{Supp}(\mathcal{F})$.
+\end{enumerate}
+In particular an embedded point of $X$ is an associated point of $X$
+which is not a generic point of an irreducible component of $X$.
+\end{lemma}
+
+\begin{proof}
+Recall that in this case $Z = \text{Supp}(\mathcal{F})$ is closed, see
+Morphisms, Lemma \ref{morphisms-lemma-support-finite-type}
+and that the generic points of irreducible components of $Z$ are
+associated points of $\mathcal{F}$, see
+Lemma \ref{lemma-minimal-support-in-ass}.
+Finally, we have $\text{Ass}(\mathcal{F}) \subset Z$, by
+Lemma \ref{lemma-ass-support}.
+These results, combined with the fact that $Z$ is a sober topological
+space and hence every point of $Z$ is a specialization of a generic
+point of $Z$, imply (1) and (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-S1-no-embedded}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+Then the following are equivalent:
+\begin{enumerate}
+\item $\mathcal{F}$ has no embedded associated points, and
+\item $\mathcal{F}$ has property $(S_1)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is Algebra, Lemma \ref{algebra-lemma-criterion-no-embedded-primes},
+combined with Lemma \ref{lemma-associated-affine-open} above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noetherian-dim-1-CM-no-embedded-points}
+Let $X$ be a locally Noetherian scheme of dimension $\leq 1$.
+The following are equivalent
+\begin{enumerate}
+\item $X$ is Cohen-Macaulay, and
+\item $X$ has no embedded points.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-S1-no-embedded} and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-scheme-theoretically-dense-contain-embedded-points}
+Let $X$ be a locally Noetherian scheme. Let $U \subset X$ be an
+open subscheme. The following are equivalent
+\begin{enumerate}
+\item $U$ is scheme theoretically dense in $X$
+(Morphisms, Definition \ref{morphisms-definition-scheme-theoretically-dense}),
+\item $U$ is dense in $X$ and $U$ contains all embedded points of $X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$, hence we may assume that $X = \Spec(A)$
+where $A$ is a Noetherian ring. Then $U$ is quasi-compact
+(Properties, Lemma \ref{properties-lemma-immersion-into-noetherian})
+hence $U = D(f_1) \cup \ldots \cup D(f_n)$
+(Algebra, Lemma \ref{algebra-lemma-qc-open}).
+In this situation $U$ is scheme theoretically dense in $X$ if and only if
+$A \to A_{f_1} \times \ldots \times A_{f_n}$ is injective, see
+Morphisms, Example \ref{morphisms-example-scheme-theoretic-closure}.
+Condition (2) translated into algebra means that for every associated
+prime $\mathfrak p$ of $A$ there exists an $i$ with $f_i \not \in \mathfrak p$.
+
+\medskip\noindent
+Assume (1), i.e., $A \to A_{f_1} \times \ldots \times A_{f_n}$ is injective.
+If $x \in A$ has annihilator a prime $\mathfrak p$, then $x$ maps
+to a nonzero element of $A_{f_i}$ for some $i$ and hence
+$f_i \not \in \mathfrak p$. Thus (2) holds.
+Assume (2), i.e., every associated prime $\mathfrak p$ of $A$
+corresponds to a prime of $A_{f_i}$ for some $i$. Then
+$A \to A_{f_1} \times \ldots \times A_{f_n}$ is injective because
+$A \to \prod_{\mathfrak p \in \text{Ass}(A)} A_\mathfrak p$ is injective
+by Algebra, Lemma \ref{algebra-lemma-zero-at-ass-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-remove-embedded-points}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent sheaf on $X$.
+The set of coherent subsheaves
+$$
+\{
+\mathcal{K} \subset \mathcal{F}
+\mid
+\text{Supp}(\mathcal{K})\text{ is nowhere dense in }\text{Supp}(\mathcal{F})
+\}
+$$
+has a maximal element $\mathcal{K}$.
+Setting $\mathcal{F}' = \mathcal{F}/\mathcal{K}$ we have the
+following
+\begin{enumerate}
+\item $\text{Supp}(\mathcal{F}') = \text{Supp}(\mathcal{F})$,
+\item $\mathcal{F}'$ has no embedded associated points, and
+\item there exists a dense open $U \subset X$ such that
+$U \cap \text{Supp}(\mathcal{F})$ is dense in $\text{Supp}(\mathcal{F})$
+and $\mathcal{F}'|_U \cong \mathcal{F}|_U$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from
+Algebra, Lemmas \ref{algebra-lemma-remove-embedded-primes} and
+\ref{algebra-lemma-remove-embedded-primes-localize}.
+Note that $U$ can be taken as the complement of the closure
+of the set of embedded associated points of $\mathcal{F}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-no-embedded-points-endos}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module
+without embedded associated points. Set
+$$
+\mathcal{I}
+=
+\Ker(\mathcal{O}_X
+\longrightarrow
+\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{F})).
+$$
+This is a coherent sheaf of ideals which defines a closed
+subscheme $Z \subset X$ without embedded points. Moreover
+there exists a coherent sheaf $\mathcal{G}$ on $Z$
+such that (a) $\mathcal{F} = (Z \to X)_*\mathcal{G}$,
+(b) $\mathcal{G}$ has no associated embedded points, and
+(c) $\text{Supp}(\mathcal{G}) = Z$ (as sets).
+\end{lemma}
+
+\begin{proof}
+Some of the statements we have seen in the proof of
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-support-closed}.
+The others follow from
+Algebra, Lemma \ref{algebra-lemma-no-embedded-primes-endos}.
+\end{proof}
+
+
+
+\section{Weakly associated points}
+\label{section-weakly-associated}
+
+\noindent
+Let $R$ be a ring and let $M$ be an $R$-module.
+Recall that a prime $\mathfrak p \subset R$ is {\it weakly associated}
+to $M$ if there exists an element $m$ of $M$ such that $\mathfrak p$ is
+minimal among the primes containing the annihilator of $m$. See
+Algebra, Definition \ref{algebra-definition-weakly-associated}.
+If $R$ is a local ring with maximal ideal $\mathfrak m$, then
+$\mathfrak m$ is weakly associated to $M$ if and only if there exists an
+element $m \in M$ whose annihilator has radical $\mathfrak m$, see
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-local}.
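+
+\medskip\noindent
+The difference between associated and weakly associated primes only
+appears in the non-Noetherian setting. For example (a standard
+example, included here for illustration), let $k$ be a field and let
+$$
+R = k[x_1, x_2, x_3, \ldots]/(x_1, x_2^2, x_3^3, \ldots)
+$$
+with maximal ideal $\mathfrak m$ generated by the images of the $x_i$.
+Every $x_i$ is nilpotent in $R$, so the radical of the annihilator of
+$1 \in R$ is $\mathfrak m$ and hence $\mathfrak m$ is weakly associated
+to $R$. On the other hand, no nonzero element of $R$ is annihilated by
+all of the $x_i$ (a given element involves only finitely many of the
+variables), so $R$ has no associated primes at all.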
+
+\begin{definition}
+\label{definition-weakly-associated}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+\begin{enumerate}
+\item We say $x \in X$ is {\it weakly associated} to $\mathcal{F}$
+if the maximal ideal $\mathfrak m_x$ is weakly associated to the
+$\mathcal{O}_{X, x}$-module $\mathcal{F}_x$.
+\item We denote $\text{WeakAss}(\mathcal{F})$ the set of weakly associated
+points of $\mathcal{F}$.
+\item The {\it weakly associated points of $X$} are the weakly associated
+points of $\mathcal{O}_X$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+On any affine open, this notion corresponds exactly to the
+weakly associated primes defined above. Here is the precise statement.
+
+\begin{lemma}
+\label{lemma-weakly-associated-affine-open}
+Let $X$ be a scheme. Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+Let $\Spec(A) = U \subset X$ be an affine open, and set
+$M = \Gamma(U, \mathcal{F})$.
+Let $x \in U$, and let $\mathfrak p \subset A$ be the corresponding prime.
+The following are equivalent
+\begin{enumerate}
+\item $\mathfrak p$ is weakly associated to $M$, and
+\item $x$ is weakly associated to $\mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-support}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Then
+$$
+\text{Ass}(\mathcal{F}) \subset \text{WeakAss}(\mathcal{F}) \subset
+\text{Supp}(\mathcal{F}).
+$$
+\end{lemma}
+
+\begin{proof}
+This is immediate from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ses-weakly-ass}
+Let $X$ be a scheme.
+Let $0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$
+be a short exact sequence of quasi-coherent sheaves on $X$.
+Then
+$\text{WeakAss}(\mathcal{F}_2) \subset
+\text{WeakAss}(\mathcal{F}_1) \cup \text{WeakAss}(\mathcal{F}_3)$
+and
+$\text{WeakAss}(\mathcal{F}_1) \subset \text{WeakAss}(\mathcal{F}_2)$.
+\end{lemma}
+
+\begin{proof}
+For every point $x \in X$ the sequence of stalks
+$0 \to \mathcal{F}_{1, x} \to \mathcal{F}_{2, x} \to \mathcal{F}_{3, x} \to 0$
+is a short exact sequence of $\mathcal{O}_{X, x}$-modules.
+Hence the lemma follows from
+Algebra, Lemma \ref{algebra-lemma-weakly-ass}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-zero}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Then
+$$
+\mathcal{F} = (0) \Leftrightarrow \text{WeakAss}(\mathcal{F}) = \emptyset
+$$
+\end{lemma}
+
+\begin{proof}
+Follows from
+Lemma \ref{lemma-weakly-associated-affine-open}
+and
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restriction-injective-open-contains-weakly-ass}
+Let $X$ be a scheme. Let $\mathcal{F}$ be a quasi-coherent
+$\mathcal{O}_X$-module. If $U \subset X$ is open and
+$\text{WeakAss}(\mathcal{F}) \subset U$, then
+$\Gamma(X, \mathcal{F}) \to \Gamma(U, \mathcal{F})$
+is injective.
+\end{lemma}
+
+\begin{proof}
+Let $s \in \Gamma(X, \mathcal{F})$ be a section which restricts to zero on $U$.
+Let $\mathcal{F}' \subset \mathcal{F}$ be the image of the map
+$\mathcal{O}_X \to \mathcal{F}$ defined by $s$. Then
+$\text{Supp}(\mathcal{F}') \cap U = \emptyset$. On the other hand,
+$\text{WeakAss}(\mathcal{F}') \subset \text{WeakAss}(\mathcal{F})$
+by Lemma \ref{lemma-ses-weakly-ass}. Since also
+$\text{WeakAss}(\mathcal{F}') \subset \text{Supp}(\mathcal{F}')$
+(Lemma \ref{lemma-weakly-ass-support}) we conclude
+$\text{WeakAss}(\mathcal{F}') = \emptyset$.
+Hence $\mathcal{F}' = 0$ by Lemma \ref{lemma-weakly-ass-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-minimal-support-in-weakly-ass}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $x \in \text{Supp}(\mathcal{F})$ be a point in the support
+of $\mathcal{F}$ which is not a specialization of another point of
+$\text{Supp}(\mathcal{F})$. Then
+$x \in \text{WeakAss}(\mathcal{F})$.
+In particular, any generic point of an irreducible component of $X$
+is weakly associated to $\mathcal{O}_X$.
+\end{lemma}
+
+\begin{proof}
+Since $x \in \text{Supp}(\mathcal{F})$ the module $\mathcal{F}_x$
+is not zero. Hence
+$\text{WeakAss}(\mathcal{F}_x) \subset \Spec(\mathcal{O}_{X, x})$
+is nonempty by
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-zero}.
+On the other hand, by assumption
+$\text{Supp}(\mathcal{F}_x) = \{\mathfrak m_x\}$.
+Since
+$\text{WeakAss}(\mathcal{F}_x) \subset \text{Supp}(\mathcal{F}_x)$
+(Algebra, Lemma \ref{algebra-lemma-weakly-ass-support})
+we see that $\mathfrak m_x$ is weakly associated to $\mathcal{F}_x$
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-weakly-ass}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+If $\mathfrak m_x$ is a finitely generated ideal of $\mathcal{O}_{X, x}$,
+then
+$$
+x \in \text{Ass}(\mathcal{F}) \Leftrightarrow
+x \in \text{WeakAss}(\mathcal{F}).
+$$
+In particular, if $X$ is locally Noetherian, then
+$\text{Ass}(\mathcal{F}) = \text{WeakAss}(\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+See
+Algebra, Lemma \ref{algebra-lemma-ass-weakly-ass}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakass-pushforward}
+Let $f : X \to S$ be a quasi-compact and quasi-separated morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $s \in S$ be a point which is not in the image of $f$. Then
+$s$ is not weakly associated to $f_*\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Consider the base change $f' : X' \to \Spec(\mathcal{O}_{S, s})$
+of $f$ by the morphism $g : \Spec(\mathcal{O}_{S, s}) \to S$
+and denote $g' : X' \to X$ the other projection.
+Then
+$$
+(f_*\mathcal{F})_s = (g^*f_*\mathcal{F})_s = (f'_*(g')^*\mathcal{F})_s
+$$
+The first equality because $g$ induces an isomorphism on local
+rings at $s$ and the second by flat base change (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-flat-base-change-cohomology}). Of course
+$s \in \Spec(\mathcal{O}_{S, s})$ is not in the image of $f'$.
+Thus we may assume $S$ is the spectrum of a local ring
+$(A, \mathfrak m)$ and $s$ corresponds to $\mathfrak m$.
+By Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}
+the sheaf $f_*\mathcal{F}$ is quasi-coherent, say corresponding
+to the $A$-module $M$. As $s$ is not in the image of $f$ we see that
+$X = \bigcup_{a \in \mathfrak m} f^{-1}D(a)$ is an open covering.
+Since $X$ is quasi-compact we can find $a_1, \ldots, a_n \in \mathfrak m$
+such that $X = f^{-1}D(a_1) \cup \ldots \cup f^{-1}D(a_n)$. It follows
+that
+$$
+M \to M_{a_1} \oplus \ldots \oplus M_{a_n}
+$$
+is injective. Hence for any nonzero element $m$ of $M$
+there exists an $i$ such that $a_i^n m$ is nonzero for all $n \geq 0$.
+Since $a_i \in \mathfrak m$, the radical of the annihilator of $m$
+is not equal to $\mathfrak m$.
+Thus $\mathfrak m$ is not weakly associated to $M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-injective-on-weakass}
+Let $X$ be a scheme. Let $\varphi : \mathcal{F} \to \mathcal{G}$ be a map of
+quasi-coherent $\mathcal{O}_X$-modules. Assume that for every $x \in X$
+at least one of the following happens
+\begin{enumerate}
+\item $\mathcal{F}_x \to \mathcal{G}_x$ is injective, or
+\item $x \not \in \text{WeakAss}(\mathcal{F})$.
+\end{enumerate}
+Then $\varphi$ is injective.
+\end{lemma}
+
+\begin{proof}
+The assumptions imply that $\text{WeakAss}(\Ker(\varphi)) = \emptyset$
+and hence $\Ker(\varphi) = 0$ by Lemma \ref{lemma-weakly-ass-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-2-hartog}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{F}$
+be a coherent $\mathcal{O}_X$-module. Let $j : U \to X$
+be an open subscheme such that for $x \in X \setminus U$
+we have $\text{depth}(\mathcal{F}_x) \geq 2$. Then
+$$
+\mathcal{F} \longrightarrow j_*(\mathcal{F}|_U)
+$$
+is an isomorphism and consequently
+$\Gamma(X, \mathcal{F}) \to \Gamma(U, \mathcal{F})$
+is an isomorphism too.
+\end{lemma}
+
+\begin{proof}
+We claim Lemma \ref{lemma-check-isomorphism-via-depth-and-ass}
+applies to the map displayed in the lemma.
+Let $x \in X$. If $x \in U$, then the map is an
+isomorphism on stalks as $j_*(\mathcal{F}|_U)|_U = \mathcal{F}|_U$.
+If $x \in X \setminus U$, then $x \not \in \text{Ass}(j_*(\mathcal{F}|_U))$
+(Lemmas \ref{lemma-weakass-pushforward} and \ref{lemma-weakly-ass-support}).
+Since we've assumed $\text{depth}(\mathcal{F}_x) \geq 2$
+this finishes the proof.
+\end{proof}
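+
+\noindent
+A basic example (included for illustration) is algebraic Hartogs:
+take $X = \mathbf{A}^2_k = \Spec(k[x, y])$ over a field $k$,
+$\mathcal{F} = \mathcal{O}_X$, and $U = X \setminus \{z\}$ where $z$
+is the closed point corresponding to $(x, y)$. Since $x, y$ is a
+regular sequence in $\mathcal{O}_{X, z} = k[x, y]_{(x, y)}$ we have
+$\text{depth}(\mathcal{O}_{X, z}) = 2$ and the lemma applies, showing
+that
+$$
+k[x, y] = \Gamma(X, \mathcal{O}_X)
+\longrightarrow
+\Gamma(U, \mathcal{O}_U)
+$$
+is an isomorphism, i.e., every regular function on the punctured
+plane extends to the whole plane.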
+
+\begin{lemma}
+\label{lemma-weakass-reduced}
+Let $X$ be a reduced scheme. Then the weakly associated points of $X$
+are exactly the generic points of the irreducible components of $X$.
+\end{lemma}
+
+\begin{proof}
+Follows from Algebra, Lemma \ref{algebra-lemma-reduced-weakly-ass-minimal}.
+\end{proof}
+
+
+
+\section{Morphisms and weakly associated points}
+\label{section-morphisms-weakly-associated}
+
+\begin{lemma}
+\label{lemma-weakly-ass-reverse-functorial}
+Let $f : X \to S$ be an affine morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Then we have
+$$
+\text{WeakAss}_S(f_*\mathcal{F}) \subset f(\text{WeakAss}_X(\mathcal{F}))
+$$
+\end{lemma}
+
+\begin{proof}
+We may assume $X$ and $S$ affine, so $X \to S$ comes from a ring map
+$A \to B$. Then $\mathcal{F} = \widetilde M$ for some $B$-module $M$. By
+Lemma \ref{lemma-weakly-associated-affine-open}
+the weakly associated points of $\mathcal{F}$ correspond exactly to the
+weakly associated primes of $M$. Similarly, the weakly associated points
+of $f_*\mathcal{F}$ correspond exactly to the weakly associated primes
+of $M$ as an $A$-module. Hence the lemma follows from
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-reverse-functorial}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ass-functorial-equal}
+Let $f : X \to S$ be an affine morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+If $X$ is locally Noetherian, then we have
+$$
+f(\text{Ass}_X(\mathcal{F})) =
+\text{Ass}_S(f_*\mathcal{F}) =
+\text{WeakAss}_S(f_*\mathcal{F}) =
+f(\text{WeakAss}_X(\mathcal{F}))
+$$
+\end{lemma}
+
+\begin{proof}
+We may assume $X$ and $S$ affine, so $X \to S$ comes from a ring map
+$A \to B$. As $X$ is locally Noetherian the ring $B$ is Noetherian, see
+Properties, Lemma \ref{properties-lemma-locally-Noetherian}.
+Write $\mathcal{F} = \widetilde M$ for some $B$-module $M$. By
+Lemma \ref{lemma-associated-affine-open}
+the associated points of $\mathcal{F}$ correspond exactly to the associated
+primes of $M$, and any associated prime of $M$ as an $A$-module is an
+associated point of $f_*\mathcal{F}$.
+Hence the inclusion
+$$
+f(\text{Ass}_X(\mathcal{F})) \subset \text{Ass}_S(f_*\mathcal{F})
+$$
+follows from
+Algebra, Lemma \ref{algebra-lemma-ass-functorial-Noetherian}.
+We have the inclusion
+$$
+\text{Ass}_S(f_*\mathcal{F}) \subset \text{WeakAss}_S(f_*\mathcal{F})
+$$
+by
+Lemma \ref{lemma-weakly-ass-support}.
+We have the inclusion
+$$
+\text{WeakAss}_S(f_*\mathcal{F}) \subset f(\text{WeakAss}_X(\mathcal{F}))
+$$
+by
+Lemma \ref{lemma-weakly-ass-reverse-functorial}.
+The outer sets are equal by
+Lemma \ref{lemma-ass-weakly-ass}
+hence we have equality everywhere.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-associated-finite}
+Let $f : X \to S$ be a finite morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Then $\text{WeakAss}(f_*\mathcal{F}) = f(\text{WeakAss}(\mathcal{F}))$.
+\end{lemma}
+
+\begin{proof}
+We may assume $X$ and $S$ affine, so $X \to S$ comes from a finite ring map
+$A \to B$. Write $\mathcal{F} = \widetilde M$ for some $B$-module $M$. By
+Lemma \ref{lemma-weakly-associated-affine-open}
+the weakly associated points of $\mathcal{F}$ correspond exactly to the
+weakly associated primes of $M$. Similarly, the weakly associated points
+of $f_*\mathcal{F}$ correspond exactly to the weakly associated primes
+of $M$ as an $A$-module. Hence the lemma follows from
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-finite-ring-map}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-pullback}
+Let $f : X \to S$ be a morphism of schemes. Let $\mathcal{G}$ be a
+quasi-coherent $\mathcal{O}_S$-module. Let $x \in X$ with $s = f(x)$.
+If $f$ is flat at $x$, the point $x$ is a generic point of the fibre $X_s$, and
+$s \in \text{WeakAss}_S(\mathcal{G})$, then
+$x \in \text{WeakAss}(f^*\mathcal{G})$.
+\end{lemma}
+
+\begin{proof}
+Let $A = \mathcal{O}_{S, s}$, $B = \mathcal{O}_{X, x}$, and
+$M = \mathcal{G}_s$. Let $m \in M$ be an element whose annihilator
+$I = \{a \in A \mid am = 0\}$ has radical $\mathfrak m_A$. Then
+$m \otimes 1$ has annihilator $I B$ as $A \to B$ is
+faithfully flat. Thus it suffices to see that $\sqrt{I B} = \mathfrak m_B$.
+This follows from the fact that the maximal ideal of $B/\mathfrak m_AB$
+is locally nilpotent (see
+Algebra, Lemma \ref{algebra-lemma-minimal-prime-reduced-ring})
+and the assumption that $\sqrt{I} = \mathfrak m_A$.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weakly-ass-change-fields}
+Let $K/k$ be a field extension. Let $X$ be a scheme over $k$.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $y \in X_K$ with image $x \in X$. If $y$ is a weakly
+associated point of the pullback $\mathcal{F}_K$, then $x$
+is a weakly associated point of $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+This is the translation of
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-change-fields}
+into the language of schemes.
+\end{proof}
+
+\noindent
+Here is a simple lemma where we find that pushforwards often have
+depth at least 2.
+
+\begin{lemma}
+\label{lemma-depth-pushforward}
+Let $f : X \to S$ be a quasi-compact and quasi-separated morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $s \in S$.
+\begin{enumerate}
+\item If $s \not \in f(X)$, then $s$ is not weakly associated
+to $f_*\mathcal{F}$.
+\item If $s \not \in f(X)$ and $\mathcal{O}_{S, s}$ is Noetherian,
+then $s$ is not associated to $f_*\mathcal{F}$.
+\item If $s \not \in f(X)$, $(f_*\mathcal{F})_s$ is a finite
+$\mathcal{O}_{S, s}$-module, and $\mathcal{O}_{S, s}$
+is Noetherian, then $\text{depth}((f_*\mathcal{F})_s) \geq 2$.
+\item If $\mathcal{F}$ is flat over $S$ and $a \in \mathfrak m_s$
+is a nonzerodivisor, then $a$ is a nonzerodivisor on $(f_*\mathcal{F})_s$.
+\item If $\mathcal{F}$ is flat over $S$ and $a, b \in \mathfrak m_s$
+is a regular sequence, then $a$ is a nonzerodivisor on $(f_*\mathcal{F})_s$
+and $b$ is a nonzerodivisor on $(f_*\mathcal{F})_s/a(f_*\mathcal{F})_s$.
+\item If $\mathcal{F}$ is flat over $S$ and $(f_*\mathcal{F})_s$
+is a finite $\mathcal{O}_{S, s}$-module, then
+$\text{depth}((f_*\mathcal{F})_s) \geq
+\min(2, \text{depth}(\mathcal{O}_{S, s}))$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is Lemma \ref{lemma-weakass-pushforward}.
+Part (2) follows from (1) and Lemma \ref{lemma-ass-weakly-ass}.
+
+\medskip\noindent
+Proof of part (3). To show the depth is $\geq 2$ it suffices to show that
+$\Hom_{\mathcal{O}_{S, s}}(\kappa(s), (f_*\mathcal{F})_s) = 0$ and
+$\Ext^1_{\mathcal{O}_{S, s}}(\kappa(s), (f_*\mathcal{F})_s) = 0$, see
+Algebra, Lemma \ref{algebra-lemma-depth-ext}.
+Using the exact sequence
+$0 \to \mathfrak m_s \to \mathcal{O}_{S, s} \to \kappa(s) \to 0$
+it suffices to prove that the map
+$$
+\Hom_{\mathcal{O}_{S, s}}(\mathcal{O}_{S, s}, (f_*\mathcal{F})_s)
+\to
+\Hom_{\mathcal{O}_{S, s}}(\mathfrak m_s, (f_*\mathcal{F})_s)
+$$
+is an isomorphism. By flat base change (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-flat-base-change-cohomology})
+we may replace $S$ by
+$\Spec(\mathcal{O}_{S, s})$ and $X$ by $\Spec(\mathcal{O}_{S, s}) \times_S X$.
+Denote $\mathfrak m \subset \mathcal{O}_S$ the ideal sheaf of $s$.
+Then we see that
+$$
+\Hom_{\mathcal{O}_{S, s}}(\mathfrak m_s, (f_*\mathcal{F})_s) =
+\Hom_{\mathcal{O}_S}(\mathfrak m, f_*\mathcal{F}) =
+\Hom_{\mathcal{O}_X}(f^*\mathfrak m, \mathcal{F})
+$$
+the first equality because $S$ is local with closed point $s$
+and the second equality
+by adjunction for $f^*, f_*$ on quasi-coherent modules. However, since
+$s \not \in f(X)$ we see that $f^*\mathfrak m = \mathcal{O}_X$.
+Working backwards through the arguments we get the desired equality.
+
+\medskip\noindent
+For the proof of (4), (5), and (6) we use flat base change
+(Cohomology of Schemes, Lemma
+\ref{coherent-lemma-flat-base-change-cohomology})
+to reduce to the case where $S$ is the spectrum of
+$\mathcal{O}_{S, s}$.
+Then a nonzerodivisor $a \in \mathcal{O}_{S, s}$
+determines a short exact sequence
+$$
+0 \to \mathcal{O}_S \xrightarrow{a} \mathcal{O}_S \to
+\mathcal{O}_S/a \mathcal{O}_S \to 0
+$$
+Since $\mathcal{F}$ is flat over $S$, we obtain an exact sequence
+$$
+0 \to \mathcal{F} \xrightarrow{a} \mathcal{F} \to
+\mathcal{F}/a\mathcal{F} \to 0
+$$
+Pushing forward we obtain an exact sequence
+$$
+0 \to f_*\mathcal{F} \xrightarrow{a} f_*\mathcal{F} \to
+f_*(\mathcal{F}/a\mathcal{F})
+$$
+This proves (4) and it shows that
+$f_*\mathcal{F}/ af_*\mathcal{F} \subset f_*(\mathcal{F}/a\mathcal{F})$.
+If $b$ is a nonzerodivisor on
+$\mathcal{O}_{S, s}/a\mathcal{O}_{S, s}$, then the exact same argument shows
+$b : \mathcal{F}/a\mathcal{F} \to \mathcal{F}/a\mathcal{F}$
+is injective. Pushing forward we conclude
+$$
+b : f_*(\mathcal{F}/a\mathcal{F}) \to f_*(\mathcal{F}/a\mathcal{F})
+$$
+is injective and hence also
+$b : f_*\mathcal{F}/ af_*\mathcal{F} \to f_*\mathcal{F}/ af_*\mathcal{F}$
+is injective. This proves (5). Part (6) follows from
+(4) and (5) and the definitions.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Relative assassin}
+\label{section-relative-assassin}
+
+\noindent
+Let $A \to B$ be a ring map. Let $N$ be a $B$-module. Recall that
+a prime $\mathfrak q \subset B$ is said to be in the relative assassin
+of $N$ over $B/A$ if $\mathfrak q$ is an associated prime of
+$N \otimes_A \kappa(\mathfrak p)$. Here $\mathfrak p = A \cap \mathfrak q$.
+See Algebra, Definition \ref{algebra-definition-relative-assassin}.
+Here is the definition of the relative assassin for quasi-coherent
+sheaves over a morphism of schemes.
+
+\begin{definition}
+\label{definition-relative-assassin}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+The {\it relative assassin of $\mathcal{F}$ in $X$ over $S$}
+is the set
+$$
+\text{Ass}_{X/S}(\mathcal{F}) =
+\bigcup\nolimits_{s \in S} \text{Ass}_{X_s}(\mathcal{F}_s)
+$$
+where $\mathcal{F}_s = (X_s \to X)^*\mathcal{F}$ is the restriction
+of $\mathcal{F}$ to the fibre of $f$ at $s$.
+\end{definition}
+
+\noindent
+Again there is a caveat that this is best used when the fibres of $f$
+are locally Noetherian and $\mathcal{F}$ is of finite type. In the general
+case we should probably use the relative weak assassin (defined in the next
+section). Let us link the scheme theoretic notion with the algebraic notion
+on affine opens; note that this correspondence works perfectly only
+for morphisms of schemes whose fibres are locally Noetherian.
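+
+\noindent
+For example (an illustration, not used in what follows), if
+$X = \mathbf{A}^1_S$ and $\mathcal{F} = \mathcal{O}_X$, then each
+fibre $X_s = \mathbf{A}^1_{\kappa(s)}$ is an integral Noetherian
+scheme, so $\text{Ass}_{X_s}(\mathcal{O}_{X_s})$ consists of the
+generic point of $X_s$. Hence $\text{Ass}_{X/S}(\mathcal{O}_X)$ is
+exactly the set of generic points of the fibres of $X \to S$.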
+
+\begin{lemma}
+\label{lemma-relative-assassin-affine-open}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+Let $U \subset X$ and $V \subset S$ be affine opens
+with $f(U) \subset V$. Write $U = \Spec(A)$, $V = \Spec(R)$, and set
+$M = \Gamma(U, \mathcal{F})$.
+Let $x \in U$, and let $\mathfrak p \subset A$ be the corresponding prime.
+Then
+$$
+\mathfrak p \in \text{Ass}_{A/R}(M) \Rightarrow
+x \in \text{Ass}_{X/S}(\mathcal{F})
+$$
+If all fibres $X_s$ of $f$ are locally Noetherian, then
+$\mathfrak p \in \text{Ass}_{A/R}(M) \Leftrightarrow
+x \in \text{Ass}_{X/S}(\mathcal{F})$
+for all pairs $(\mathfrak p, x)$ as above.
+\end{lemma}
+
+\begin{proof}
+The set $\text{Ass}_{A/R}(M)$ is defined in
+Algebra, Definition \ref{algebra-definition-relative-assassin}.
+Choose a pair $(\mathfrak p, x)$. Let $s = f(x)$.
+Let $\mathfrak r \subset R$ be the prime lying under $\mathfrak p$,
+i.e., the prime corresponding to $s$.
+Let $\mathfrak p' \subset A \otimes_R \kappa(\mathfrak r)$
+be the prime whose inverse image is $\mathfrak p$, i.e.,
+the prime corresponding to $x$ viewed as a point of its fibre $X_s$.
+Then $\mathfrak p \in \text{Ass}_{A/R}(M)$ if and only if
+$\mathfrak p'$ is an associated prime of
+$M \otimes_R \kappa(\mathfrak r)$, see
+Algebra, Lemma \ref{algebra-lemma-compare-relative-assassins}.
+Note that the ring $A \otimes_R \kappa(\mathfrak r)$ corresponds to $U_s$
+and the module $M \otimes_R \kappa(\mathfrak r)$ corresponds to the
+quasi-coherent sheaf $\mathcal{F}_s|_{U_s}$.
+Hence, if $\mathfrak p'$ is an associated prime of
+$M \otimes_R \kappa(\mathfrak r)$, then $x$ is an associated point
+of $\mathcal{F}_s$ by Lemma \ref{lemma-associated-affine-open}.
+The reverse implication holds if $\mathfrak p'$ is finitely generated,
+which is how the last statement of the lemma is seen to be true.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-relative-assassin}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Let $g : S' \to S$ be a morphism of schemes.
+Consider the base change diagram
+$$
+\xymatrix{
+X' \ar[d] \ar[r]_{g'} & X \ar[d] \\
+S' \ar[r]^g & S
+}
+$$
+and set $\mathcal{F}' = (g')^*\mathcal{F}$. Let $x' \in X'$ be a point
+with images $x \in X$, $s' \in S'$ and $s \in S$.
+Assume $f$ locally of finite type.
+Then $x' \in \text{Ass}_{X'/S'}(\mathcal{F}')$ if and only if
+$x \in \text{Ass}_{X/S}(\mathcal{F})$ and $x'$ corresponds to
+a generic point of an irreducible component of
+$\Spec(\kappa(s') \otimes_{\kappa(s)} \kappa(x))$.
+\end{lemma}
+
+\begin{proof}
+Consider the morphism $X'_{s'} \to X_s$ of fibres. As
+$X'_{s'} = X_s \times_{\Spec(\kappa(s))} \Spec(\kappa(s'))$
+this is a flat morphism. Moreover $\mathcal{F}'_{s'}$ is the pullback
+of $\mathcal{F}_s$ via this morphism. As $X_s$ is locally of finite
+type over the Noetherian scheme $\Spec(\kappa(s))$ we have that
+$X_s$ is locally Noetherian, see
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}.
+Thus we may apply
+Lemma \ref{lemma-bourbaki}
+and we see that
+$$
+\text{Ass}_{X'_{s'}}(\mathcal{F}'_{s'}) =
+\bigcup\nolimits_{x \in \text{Ass}(\mathcal{F}_s)} \text{Ass}((X'_{s'})_x).
+$$
+Thus to prove the lemma it suffices to show that the associated points
+of the fibre $(X'_{s'})_x$ of the morphism $X'_{s'} \to X_s$ over $x$
+are its generic points. Note that
+$(X'_{s'})_x = \Spec(\kappa(s') \otimes_{\kappa(s)} \kappa(x))$
+as schemes. By
+Algebra, Lemma \ref{algebra-lemma-tensor-fields-CM}
+the ring $\kappa(s') \otimes_{\kappa(s)} \kappa(x)$ is a Noetherian
+Cohen-Macaulay ring. Hence its associated primes are its minimal primes, see
+Algebra, Proposition \ref{algebra-proposition-minimal-primes-associated-primes}
+(minimal primes are associated) and
+Algebra, Lemma \ref{algebra-lemma-criterion-no-embedded-primes}
+(no embedded primes).
+\end{proof}
+
+\begin{remark}
+\label{remark-base-change-relative-assassin}
+With notation and assumptions as in
+Lemma \ref{lemma-base-change-relative-assassin}
+we see that it is always the case that
+$(g')^{-1}(\text{Ass}_{X/S}(\mathcal{F})) \supset
+\text{Ass}_{X'/S'}(\mathcal{F}')$.
+If the morphism $S' \to S$ is locally quasi-finite, then we actually have
+$$
+(g')^{-1}(\text{Ass}_{X/S}(\mathcal{F}))
+=
+\text{Ass}_{X'/S'}(\mathcal{F}')
+$$
+because in this case the field extensions $\kappa(s')/\kappa(s)$
+are always finite. In fact, this holds more generally for any morphism
+$g : S' \to S$ such that all the field extensions
+$\kappa(s')/\kappa(s)$ are algebraic, because in this case all
+prime ideals of $\kappa(s') \otimes_{\kappa(s)} \kappa(x)$ are
+maximal (and minimal) primes, see
+Algebra, Lemma \ref{algebra-lemma-integral-over-field}.
+\end{remark}
+
+
+
+
+\section{Relative weak assassin}
+\label{section-relative-weak-assassin}
+
+\begin{definition}
+\label{definition-relative-weak-assassin}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+The {\it relative weak assassin of $\mathcal{F}$ in $X$ over $S$}
+is the set
+$$
+\text{WeakAss}_{X/S}(\mathcal{F}) =
+\bigcup\nolimits_{s \in S} \text{WeakAss}_{X_s}(\mathcal{F}_s)
+$$
+where $\mathcal{F}_s = (X_s \to X)^*\mathcal{F}$ is the restriction
+of $\mathcal{F}$ to the fibre of $f$ at $s$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-relative-weak-assassin-assassin-finite-type}
+Let $f : X \to S$ be a morphism of schemes which is locally of finite type.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+Then $\text{WeakAss}_{X/S}(\mathcal{F}) = \text{Ass}_{X/S}(\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+This is true because the fibres of $f$ are locally Noetherian schemes,
+and associated and weakly associated points agree on locally Noetherian
+schemes, see
+Lemma \ref{lemma-ass-weakly-ass}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-weak-assassin-finite}
+Let $f : X \to S$ be a morphism of schemes.
+Let $i : Z \to X$ be a finite morphism.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_Z$-module.
+Then $\text{WeakAss}_{X/S}(i_*\mathcal{F}) =
+i(\text{WeakAss}_{Z/S}(\mathcal{F}))$.
+\end{lemma}
+
+\begin{proof}
+Let $i_s : Z_s \to X_s$ be the induced morphism between fibres.
+Then $(i_*\mathcal{F})_s = i_{s, *}(\mathcal{F}_s)$ by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-affine-base-change}
+and the fact that $i$ is affine. Hence
+we may apply Lemma \ref{lemma-weakly-associated-finite} to conclude.
+\end{proof}
+
+
+
+
+
+\section{Fitting ideals}
+\label{section-fitting-ideals}
+
+\noindent
+This section is the continuation of the discussion in
+More on Algebra, Section \ref{more-algebra-section-fitting-ideals}.
+Let $S$ be a scheme. Let $\mathcal{F}$ be a
+finite type quasi-coherent $\mathcal{O}_S$-module.
+In this situation we can construct the Fitting ideals
+$$
+0 = \text{Fit}_{-1}(\mathcal{F}) \subset \text{Fit}_0(\mathcal{F}) \subset
+\text{Fit}_1(\mathcal{F}) \subset \ldots \subset \mathcal{O}_S
+$$
+as the sequence of quasi-coherent ideals characterized by the following
+property: for every affine open $U = \Spec(A)$ of $S$ if $\mathcal{F}|_U$
+corresponds to the $A$-module $M$, then $\text{Fit}_i(\mathcal{F})|_U$
+corresponds to the ideal $\text{Fit}_i(M) \subset A$.
+This is well defined and a quasi-coherent sheaf of ideals because
+if $f \in A$, then the $i$th Fitting ideal of $M_f$ over $A_f$
+is equal to $\text{Fit}_i(M) A_f$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-fitting-ideal-basics}.
+
+\medskip\noindent
+Alternatively, we can construct the Fitting ideals in terms of local
+presentations of $\mathcal{F}$. Namely, if $U \subset S$ is open, and
+$$
+\bigoplus\nolimits_{i \in I} \mathcal{O}_U \to
+\mathcal{O}_U^{\oplus n} \to \mathcal{F}|_U \to 0
+$$
+is a presentation of $\mathcal{F}$ over $U$, then
+$\text{Fit}_r(\mathcal{F})|_U$ is generated by the
+$(n - r) \times (n - r)$-minors
+of the matrix defining the first arrow of the presentation.
+This is compatible with the construction above because this
+is how the Fitting ideal of a module over a ring is actually defined.
+Some details omitted.
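+
+\medskip\noindent
+For example (a sanity check on the conventions above), suppose
+$\mathcal{F}|_U$ corresponds to the cyclic module $M = A/I$ where
+$I = (f_1, \ldots, f_m)$ is a finitely generated ideal. Then we have
+the presentation
+$$
+A^{\oplus m} \xrightarrow{(f_1, \ldots, f_m)} A \to M \to 0
+$$
+and we find $\text{Fit}_0(M) = I$, generated by the $1 \times 1$
+minors of the matrix, and $\text{Fit}_r(M) = A$ for $r \geq 1$. In
+particular $\text{Fit}_0(M)$ equals the annihilator of $M$ in this
+case.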
+
+\begin{lemma}
+\label{lemma-base-change-fitting-ideal}
+Let $f : T \to S$ be a morphism of schemes.
+Let $\mathcal{F}$ be a finite type quasi-coherent $\mathcal{O}_S$-module.
+Then
+$f^{-1}\text{Fit}_i(\mathcal{F}) \cdot \mathcal{O}_T =
+\text{Fit}_i(f^*\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from More on Algebra, Lemma
+\ref{more-algebra-lemma-fitting-ideal-basics} part (3).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fitting-ideal-of-finitely-presented}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be a finitely presented $\mathcal{O}_S$-module.
+Then $\text{Fit}_r(\mathcal{F})$ is a quasi-coherent ideal of finite type.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from More on Algebra, Lemma
+\ref{more-algebra-lemma-fitting-ideal-basics} part (4).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-on-subscheme-cut-out-by-Fit-0}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be a finite type, quasi-coherent $\mathcal{O}_S$-module.
+Let $Z_0 \subset S$ be the closed subscheme cut out by
+$\text{Fit}_0(\mathcal{F})$.
+Let $Z \subset S$ be the scheme theoretic support of $\mathcal{F}$.
+Then
+\begin{enumerate}
+\item $Z \subset Z_0 \subset S$ as closed subschemes,
+\item $Z = Z_0 = \text{Supp}(\mathcal{F})$ as closed subsets,
+\item there exists a finite type, quasi-coherent $\mathcal{O}_{Z_0}$-module
+$\mathcal{G}_0$ with
+$$
+(Z_0 \to S)_*\mathcal{G}_0 = \mathcal{F}.
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall that $Z$ is locally cut out by the annihilator of $\mathcal{F}$, see
+Morphisms, Definition \ref{morphisms-definition-scheme-theoretic-support}
+(which uses Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-support}
+to define $Z$). Hence we see that $Z \subset Z_0$ scheme theoretically
+by More on Algebra, Lemma
+\ref{more-algebra-lemma-fitting-ideal-basics} part (6).
+On the other hand we have $Z = \text{Supp}(\mathcal{F})$
+set theoretically by
+Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-support}
+and we have $Z_0 = Z$ set theoretically by
+More on Algebra, Lemma
+\ref{more-algebra-lemma-fitting-ideal-basics} part (7).
+Finally, to get $\mathcal{G}_0$ as in part (3) we can either use
+that we have $\mathcal{G}$ on $Z$ as in
+Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-support}
+and set $\mathcal{G}_0 = (Z \to Z_0)_*\mathcal{G}$
+or we can use Morphisms, Lemma \ref{morphisms-lemma-i-star-equivalence}
+and the fact that $\text{Fit}_0(\mathcal{F})$ annihilates
+$\mathcal{F}$ by More on Algebra, Lemma
+\ref{more-algebra-lemma-fitting-ideal-basics} part (6).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fitting-ideal-generate-locally}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a finite type, quasi-coherent
+$\mathcal{O}_S$-module. Let $s \in S$. Then $\mathcal{F}$ can be
+generated by $r$ elements in a neighbourhood of $s$ if and only
+if $\text{Fit}_r(\mathcal{F})_s = \mathcal{O}_{S, s}$.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+More on Algebra, Lemma \ref{more-algebra-lemma-fitting-ideal-generate-locally}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fitting-ideal-finite-locally-free}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a finite type, quasi-coherent
+$\mathcal{O}_S$-module. Let $r \geq 0$. The following are equivalent
+\begin{enumerate}
\item $\mathcal{F}$ is finite locally free of rank $r$,
+\item $\text{Fit}_{r - 1}(\mathcal{F}) = 0$ and
+$\text{Fit}_r(\mathcal{F}) = \mathcal{O}_S$, and
+\item $\text{Fit}_k(\mathcal{F}) = 0$ for $k < r$ and
+$\text{Fit}_k(\mathcal{F}) = \mathcal{O}_S$ for $k \geq r$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+More on Algebra, Lemma
+\ref{more-algebra-lemma-fitting-ideal-finite-locally-free}.
+\end{proof}
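
\noindent
For example, if $\mathcal{F}|_U$ is free of rank $r$ over an affine open
$U$, then using the presentation with $r$ generators and no relations
one finds
$$
\text{Fit}_k(\mathcal{F})|_U = 0 \text{ for } k < r
\quad\text{and}\quad
\text{Fit}_k(\mathcal{F})|_U = \mathcal{O}_U \text{ for } k \geq r
$$
in agreement with part (3): minors of size $\leq 0$ generate the unit
ideal by convention, and there are no relations to produce minors of
positive size.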
+
+\begin{lemma}
+\label{lemma-locally-free-rank-r-pullback}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a finite type, quasi-coherent
+$\mathcal{O}_S$-module. The closed subschemes
+$$
S = Z_{-1} \supset Z_0 \supset Z_1 \supset Z_2 \supset \ldots
+$$
+defined by the Fitting ideals of $\mathcal{F}$ have the following
+properties
+\begin{enumerate}
+\item The intersection $\bigcap Z_r$ is empty.
+\item The functor $(\Sch/S)^{opp} \to \textit{Sets}$ defined by the rule
+$$
+T \longmapsto
+\left\{
+\begin{matrix}
+\{*\} & \text{if }\mathcal{F}_T\text{ is locally generated by }
+\leq r\text{ sections} \\
+\emptyset & \text{otherwise}
+\end{matrix}
+\right.
+$$
+is representable by the open subscheme $S \setminus Z_r$.
+\item The functor $F_r : (\Sch/S)^{opp} \to \textit{Sets}$ defined by the rule
+$$
+T \longmapsto
+\left\{
+\begin{matrix}
\{*\} & \text{if }\mathcal{F}_T\text{ is locally free of rank }r\\
+\emptyset & \text{otherwise}
+\end{matrix}
+\right.
+$$
+is representable by the locally closed subscheme $Z_{r - 1} \setminus Z_r$
+of $S$.
\item If $\mathcal{F}$ is of finite presentation, then
$Z_r \to S$, $S \setminus Z_r \to S$, and $Z_{r - 1} \setminus Z_r \to S$
are of finite presentation.
\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is true because over every affine open $U$ there is an integer $n$
+such that $\text{Fit}_n(\mathcal{F})|_U = \mathcal{O}_U$. Namely, we can
+take $n$ to be the number of generators of $\mathcal{F}$ over $U$, see
+More on Algebra, Section \ref{more-algebra-section-fitting-ideals}.
+
+\medskip\noindent
+For any morphism $g : T \to S$ we see from
+Lemmas \ref{lemma-base-change-fitting-ideal} and
+\ref{lemma-fitting-ideal-generate-locally}
+that $\mathcal{F}_T$ is locally generated by $\leq r$ sections if and only if
+$\text{Fit}_r(\mathcal{F}) \cdot \mathcal{O}_T = \mathcal{O}_T$.
+This proves (2).
+
+\medskip\noindent
+For any morphism $g : T \to S$ we see from
+Lemmas \ref{lemma-base-change-fitting-ideal} and
+\ref{lemma-fitting-ideal-finite-locally-free}
that $\mathcal{F}_T$ is finite locally free of rank $r$ if and only if
+$\text{Fit}_r(\mathcal{F}) \cdot \mathcal{O}_T = \mathcal{O}_T$ and
+$\text{Fit}_{r - 1}(\mathcal{F}) \cdot \mathcal{O}_T = 0$.
+This proves (3).
+
+\medskip\noindent
+Part (4) follows from the fact that if
+$\mathcal{F}$ is of finite presentation, then each of the morphisms
+$Z_r \to S$ is of finite presentation as $\text{Fit}_r(\mathcal{F})$
+is of finite type (Lemma \ref{lemma-fitting-ideal-of-finitely-presented} and
+Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-finite-presentation}).
This implies that $Z_{r - 1} \setminus Z_r$ is a retrocompact open in
$Z_{r - 1}$
(Properties, Lemma \ref{properties-lemma-quasi-coherent-finite-type-ideals})
and hence the morphism $Z_{r - 1} \setminus Z_r \to Z_{r - 1}$ is of
finite presentation; composing with $Z_{r - 1} \to S$ we conclude that
$Z_{r - 1} \setminus Z_r \to S$ is of finite presentation as well.
+\end{proof}
+
+\noindent
Lemma \ref{lemma-locally-free-rank-r-pullback} notwithstanding, the
following lemma does not hold if $\mathcal{F}$ is merely a finite type
quasi-coherent module. Namely, the stratification still exists, but in
general the resulting scheme $S'$ does not represent the functor
$F_{flat}$.
+
+\begin{lemma}
+\label{lemma-finite-presentation-module}
+Let $S$ be a scheme. Let $\mathcal{F}$ be an $\mathcal{O}_S$-module
+of finite presentation. Let $S = Z_{-1} \supset Z_0 \supset Z_1 \supset \ldots$
+be as in Lemma \ref{lemma-locally-free-rank-r-pullback}.
+Set $S_r = Z_{r - 1} \setminus Z_r$.
+Then $S' = \coprod_{r \geq 0} S_r$ represents the functor
+$$
F_{flat} : (\Sch/S)^{opp} \longrightarrow \textit{Sets},\quad\quad
T \longmapsto
\left\{
\begin{matrix}
\{*\} & \text{if }\mathcal{F}_T\text{ is flat over }T\\
+\emptyset & \text{otherwise}
+\end{matrix}
+\right.
+$$
+Moreover, $\mathcal{F}|_{S_r}$ is locally free of rank $r$ and the
+morphisms $S_r \to S$ and $S' \to S$ are of finite presentation.
+\end{lemma}
+
+\begin{proof}
+Suppose that $g : T \to S$ is a morphism of schemes such that the pullback
+$\mathcal{F}_T = g^*\mathcal{F}$ is flat. Then $\mathcal{F}_T$ is a flat
+$\mathcal{O}_T$-module of finite presentation. Hence
+$\mathcal{F}_T$ is finite locally free, see
+Properties, Lemma \ref{properties-lemma-finite-locally-free}.
+Thus $T = \coprod_{r \geq 0} T_r$, where $\mathcal{F}_T|_{T_r}$ is locally
+free of rank $r$. This implies that
+$$
+F_{flat} = \coprod\nolimits_{r \geq 0} F_r
+$$
+in the category of Zariski sheaves on $\Sch/S$ where $F_r$ is as in
+Lemma \ref{lemma-locally-free-rank-r-pullback}. It follows
+that $F_{flat}$ is represented by
+$\coprod_{r \geq 0} (Z_{r - 1} \setminus Z_r)$ where
+$Z_r$ is as in
+Lemma \ref{lemma-locally-free-rank-r-pullback}.
+The other statements also follow from the lemma.
+\end{proof}
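
\noindent
Here is a simple example of the flattening stratification. Let $k$
be a field, let $S = \Spec(k[t])$, and let $\mathcal{F} = \widetilde{M}$
with $M = k[t]/(t)$, the skyscraper at the origin. The presentation
$k[t] \xrightarrow{t} k[t] \to M \to 0$ gives
$$
\text{Fit}_0(M) = (t), \qquad \text{Fit}_k(M) = k[t] \text{ for } k \geq 1
$$
so $Z_0 = V(t)$ and $Z_k = \emptyset$ for $k \geq 1$. Hence
$S' = S_0 \amalg S_1 = (S \setminus V(t)) \amalg V(t)$, and indeed
$\mathcal{F}|_{S_0} = 0$ is locally free of rank $0$ while
$\mathcal{F}|_{S_1}$ is free of rank $1$.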
+
+\begin{example}
+\label{example-not-fp-MB}
+Let $R = \prod_{n \in \mathbf{N}} \mathbf{F}_2$. Let $I \subset R$
+be the ideal of elements $a = (a_n)_{n \in \mathbf{N}}$ almost all of whose
+components are zero. Let $\mathfrak m$ be a maximal ideal containing $I$.
+Then $M = R/\mathfrak m$ is a finite flat $R$-module, because $R$ is absolutely
+flat (More on Algebra, Lemma
+\ref{more-algebra-lemma-product-fields-absolutely-flat}).
+Set $S = \Spec(R)$ and $\mathcal{F} = \widetilde{M}$.
+The closed subschemes of Lemma \ref{lemma-locally-free-rank-r-pullback} are
+$S = Z_{-1}$, $Z_0 = \Spec(R/\mathfrak m)$, and $Z_i = \emptyset$ for $i > 0$.
+But $\text{id} : S \to S$ does not factor through
+$(S \setminus Z_0) \amalg Z_0$ because $\mathfrak m$ is a nonisolated
+point of $S$. Thus
+Lemma \ref{lemma-finite-presentation-module}
+does not hold for finite type modules.
+\end{example}
+
+
+
+
+
+
+
+\section{The singular locus of a morphism}
+\label{section-singular-locus-morphism}
+
+\noindent
+Let $f : X \to S$ be a finite type morphism of schemes. The set $U$ of points
+where $f$ is smooth is an open of $X$
+(by Morphisms, Definition \ref{morphisms-definition-smooth}).
+In many situations it is useful to have a canonical closed
+subscheme $\text{Sing}(f) \subset X$ whose complement is $U$
+and whose formation commutes with arbitrary change of base.
+
+\medskip\noindent
+If $f$ is of finite presentation, then one choice would be to consider the
+closed subscheme $Z$ cut out by functions which are affine locally
+``strictly standard'' in the sense of
+Smoothing Ring Maps, Definition \ref{smoothing-definition-strictly-standard}.
+It follows from
+Smoothing Ring Maps, Lemma \ref{smoothing-lemma-strictly-standard-base-change}
+that if $f' : X' \to S'$ is the base change of $f$ by a morphism
+$S' \to S$, then $Z' \subset S' \times_S Z$ where $Z'$ is the
+closed subscheme of $X'$ cut out by functions which are affine
+locally strictly standard. However, equality isn't clear.
+The notion of a strictly standard element was useful in the chapter on
+Popescu's theorem. The closed subscheme defined by these elements is
+(as far as we know) not used in the literature\footnote{If $f$ is a
+local complete intersection morphism
+(More on Morphisms, Definition \ref{more-morphisms-definition-lci})
+then the closed subscheme cut out by the locally strictly standard
+elements is the correct thing to look at.}.
+
+\medskip\noindent
If $f$ is flat, of finite presentation, and all fibres of $f$
are equidimensional of dimension $d$, then the $d$th Fitting ideal
of $\Omega_{X/S}$ is used to get a good closed subscheme. For any
morphism of finite type the closed subschemes of $X$ defined by the
Fitting ideals of $\Omega_{X/S}$ define a stratification of $X$
in terms of the rank of $\Omega_{X/S}$ whose formation commutes with
base change. This can be helpful; it is related to embedding dimensions of
fibres, see Varieties, Section \ref{varieties-section-embedding-dimension}.
+
+\begin{lemma}
+\label{lemma-base-change-and-fitting-ideal-omega}
+Let $f : X \to S$ be a morphism of schemes which is locally of finite type.
+Let $X = Z_{-1} \supset Z_0 \supset Z_1 \supset \ldots$
be the closed subschemes defined by the Fitting ideals
+of $\Omega_{X/S}$. Then the formation of $Z_i$ commutes
+with arbitrary base change.
+\end{lemma}
+
+\begin{proof}
+Observe that $\Omega_{X/S}$ is a finite type quasi-coherent
+$\mathcal{O}_X$-module
+(Morphisms, Lemma \ref{morphisms-lemma-finite-type-differentials})
hence the Fitting ideals are defined. If $f' : X' \to S'$
+is the base change of $f$ by $g : S' \to S$, then
+$\Omega_{X'/S'} = (g')^*\Omega_{X/S}$ where $g' : X' \to X$
+is the projection
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-differentials}).
+Hence $(g')^{-1}\text{Fit}_i(\Omega_{X/S}) \cdot \mathcal{O}_{X'} =
+\text{Fit}_i(\Omega_{X'/S'})$. This means that
+$$
+Z'_i = (g')^{-1}(Z_i) = Z_i \times_X X'
+$$
+scheme theoretically and this is the meaning of the statement of
+the lemma.
+\end{proof}
+
+\noindent
The $0$th Fitting ideal of $\Omega$
+cuts out the ``ramified locus'' of the morphism.
+
+\begin{lemma}
+\label{lemma-zero-fitting-ideal-omega-unramified}
+Let $f : X \to S$ be a morphism of schemes which is locally of finite type.
The closed subscheme $Z \subset X$ cut out by the $0$th Fitting ideal of
+$\Omega_{X/S}$ is exactly the set of points where $f$ is not unramified.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-on-subscheme-cut-out-by-Fit-0} the complement of $Z$
+is exactly the locus where $\Omega_{X/S}$ is zero. This is exactly
+the set of points where $f$ is unramified by
+Morphisms, Lemma \ref{morphisms-lemma-unramified-omega-zero}.
+\end{proof}
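
\noindent
For example (a classical computation), let $S = \Spec(A)$ and
$X = \Spec(A[x]/(g))$ for some $g \in A[x]$. Then $\Omega_{X/S}$ is
generated by $\text{d}x$ subject to the single relation
$g'(x)\text{d}x = 0$, where $g'$ denotes the derivative of $g$ with
respect to $x$. Hence
$$
\text{Fit}_0(\Omega_{X/S}) = (g'(x)) \subset A[x]/(g)
$$
and the lemma recovers the classical description of the ramification
locus of $X \to S$ as the vanishing locus of $g'$.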
+
+\begin{lemma}
+\label{lemma-d-fitting-ideal-omega-smooth}
+Let $f : X \to S$ be a morphism of schemes. Let $d \geq 0$ be an integer.
+Assume
+\begin{enumerate}
+\item $f$ is flat,
+\item $f$ is locally of finite presentation, and
+\item every nonempty fibre of $f$ is equidimensional of dimension $d$.
+\end{enumerate}
Let $Z \subset X$ be the closed subscheme cut out by the $d$th Fitting
+ideal of $\Omega_{X/S}$. Then $Z$ is exactly the set of points
+where $f$ is not smooth.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-locally-free-rank-r-pullback} the complement of $Z$
+is exactly the locus where $\Omega_{X/S}$ can be generated by at most
+$d$ elements. Hence the lemma follows from
+Morphisms, Lemma \ref{morphisms-lemma-smooth-at-point}.
+\end{proof}
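
\noindent
For example, let $S = \Spec(A)$ and let
$X = V(g) \subset \mathbf{A}^{d + 1}_S$ for some
$g \in A[x_1, \ldots, x_{d + 1}]$, where we impose assumptions (1), (2),
and (3) of the lemma (these are not automatic). Then $\Omega_{X/S}$
has the presentation
$$
\mathcal{O}_X
\xrightarrow{\left(\partial g/\partial x_1, \ldots,
\partial g/\partial x_{d + 1}\right)}
\mathcal{O}_X^{\oplus d + 1}
\longrightarrow \Omega_{X/S} \longrightarrow 0
$$
so $\text{Fit}_d(\Omega_{X/S})$ is generated by the $1 \times 1$ minors,
i.e., by the partial derivatives $\partial g/\partial x_i$. Thus the
lemma recovers the Jacobian criterion: $f$ is smooth exactly away from
the common vanishing locus of the $\partial g/\partial x_i$ on $X$.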
+
+
+
+
+
+
+
+\section{Torsion free modules}
+\label{section-torsion-free}
+
+\noindent
+This section is the analogue of
+More on Algebra, Section \ref{more-algebra-section-torsion-flat}
+for quasi-coherent modules.
+
+\begin{lemma}
+\label{lemma-torsion-sections}
+Let $X$ be an integral scheme with generic point $\eta$. Let $\mathcal{F}$
+be a quasi-coherent $\mathcal{O}_X$-module. Let $U \subset X$ be nonempty
+open and $s \in \mathcal{F}(U)$. The following are equivalent
+\begin{enumerate}
+\item for some $x \in U$ the image of $s$ in $\mathcal{F}_x$ is torsion,
+\item for all $x \in U$ the image of $s$ in $\mathcal{F}_x$ is torsion,
+\item the image of $s$ in $\mathcal{F}_\eta$ is zero,
+\item the image of $s$ in $j_*\mathcal{F}_\eta$ is zero, where $j : \eta \to X$
+is the inclusion morphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{definition}
+\label{definition-torsion}
+Let $X$ be an integral scheme. Let $\mathcal{F}$ be a quasi-coherent
+$\mathcal{O}_X$-module.
+\begin{enumerate}
+\item We say a local section of $\mathcal{F}$ is {\it torsion}
+if it satisfies the equivalent conditions of Lemma \ref{lemma-torsion-sections}.
+\item We say $\mathcal{F}$ is {\it torsion free} if every torsion section
+of $\mathcal{F}$ is $0$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Here is the obligatory lemma comparing this to the usual algebraic notion.
+
+\begin{lemma}
+\label{lemma-check-torsion-on-affines}
+Let $X$ be an integral scheme. Let $\mathcal{F}$ be a quasi-coherent
+$\mathcal{O}_X$-module. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is torsion free,
+\item for $U \subset X$ affine open $\mathcal{F}(U)$
+is a torsion free $\mathcal{O}(U)$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion}
+Let $X$ be an integral scheme. Let $\mathcal{F}$ be a quasi-coherent
+$\mathcal{O}_X$-module. The torsion sections of $\mathcal{F}$ form
+a quasi-coherent $\mathcal{O}_X$-submodule
+$\mathcal{F}_{tors} \subset \mathcal{F}$.
+The quotient module $\mathcal{F}/\mathcal{F}_{tors}$ is torsion free.
+\end{lemma}
+
+\begin{proof}
+Omitted. See More on Algebra, Lemma \ref{more-algebra-lemma-torsion}
+for the algebraic analogue.
+\end{proof}
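
\noindent
For example, if $X = \Spec(\mathbf{Z})$ and $\mathcal{F} = \widetilde{M}$
with $M = \mathbf{Z} \oplus \mathbf{Z}/2\mathbf{Z}$, then
$\mathcal{F}_{tors}$ is the submodule associated to
$0 \oplus \mathbf{Z}/2\mathbf{Z}$ and
$\mathcal{F}/\mathcal{F}_{tors}$ is associated to $\mathbf{Z}$,
which is torsion free.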
+
+\begin{lemma}
+\label{lemma-flat-torsion-free}
+Let $X$ be an integral scheme. Any flat quasi-coherent $\mathcal{O}_X$-module
+is torsion free.
+\end{lemma}
+
+\begin{proof}
+Omitted. See More on Algebra, Lemma \ref{more-algebra-lemma-flat-torsion-free}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-pullback-torsion}
+Let $f : X \to Y$ be a flat morphism of integral schemes.
+Let $\mathcal{G}$ be a torsion free quasi-coherent $\mathcal{O}_Y$-module.
+Then $f^*\mathcal{G}$ is a torsion free $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+Omitted. See
More on Algebra, Lemma \ref{more-algebra-lemma-flat-pullback-torsion}
+for the algebraic analogue.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-over-integral-integral-fibre}
+Let $f : X \to Y$ be a flat morphism of schemes. If $Y$ is integral
+and the generic fibre of $f$ is integral, then $X$ is integral.
+\end{lemma}
+
+\begin{proof}
+The algebraic analogue is this: let $A$ be a domain with fraction
+field $K$ and let $B$ be a flat $A$-algebra such that $B \otimes_A K$
is a domain. Then $B$ is a domain. This is true because $B$ is
torsion free by More on Algebra, Lemma
\ref{more-algebra-lemma-flat-torsion-free},
hence $B \subset B \otimes_A K$ is a subring of a domain.
+\end{proof}
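
\noindent
Both hypotheses are needed. For instance, take $A = k[t]$ with $k$ a
field. The flat $A$-algebra $B = A \times A$ has generic fibre
$K \times K$, which is not a domain, and $\Spec(B)$ is not integral.
On the other hand, $B = A \times A/(t)$ has integral generic fibre
$\Spec(K)$ but is not flat over $A$, and again $\Spec(B)$ is not
integral.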
+
+\begin{lemma}
+\label{lemma-check-torsion}
+Let $X$ be an integral scheme. Let $\mathcal{F}$ be a quasi-coherent
+$\mathcal{O}_X$-module. Then $\mathcal{F}$ is torsion free if and only if
+$\mathcal{F}_x$ is a torsion free $\mathcal{O}_{X, x}$-module for all $x \in X$.
+\end{lemma}
+
+\begin{proof}
+Omitted. See More on Algebra, Lemma
+\ref{more-algebra-lemma-check-torsion}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extension-torsion-free}
+Let $X$ be an integral scheme. Let
+$0 \to \mathcal{F} \to \mathcal{F}' \to \mathcal{F}'' \to 0$
+be a short exact sequence of quasi-coherent $\mathcal{O}_X$-modules.
+If $\mathcal{F}$ and $\mathcal{F}''$ are torsion free, then $\mathcal{F}'$
+is torsion free.
+\end{lemma}
+
+\begin{proof}
+Omitted. See
+More on Algebra, Lemma \ref{more-algebra-lemma-extension-torsion-free}
+for the algebraic analogue.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion-free-finite-noetherian-domain}
+Let $X$ be a locally Noetherian integral scheme with generic point $\eta$.
+Let $\mathcal{F}$ be a nonzero coherent $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is torsion free,
+\item $\eta$ is the only associated prime of $\mathcal{F}$,
+\item $\eta$ is in the support of $\mathcal{F}$ and $\mathcal{F}$
+has property $(S_1)$, and
+\item $\eta$ is in the support of $\mathcal{F}$ and $\mathcal{F}$
+has no embedded associated prime.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is a translation of More on Algebra, Lemma
+\ref{more-algebra-lemma-torsion-free-finite-noetherian-domain}
+into the language of schemes. We omit the translation.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion-free-over-regular-dim-1}
+Let $X$ be an integral regular scheme of dimension $\leq 1$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is torsion free,
+\item $\mathcal{F}$ is finite locally free.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that a finite locally free module is torsion free.
+For the converse, we will show that if $\mathcal{F}$ is
+torsion free, then $\mathcal{F}_x$ is a free $\mathcal{O}_{X, x}$-module
+for all $x \in X$. This is enough by
+Algebra, Lemma \ref{algebra-lemma-finite-projective}
+and the fact that $\mathcal{F}$ is coherent.
+If $\dim(\mathcal{O}_{X, x}) = 0$, then
+$\mathcal{O}_{X, x}$ is a field and the statement is clear.
+If $\dim(\mathcal{O}_{X, x}) = 1$, then $\mathcal{O}_{X, x}$
+is a discrete valuation ring
+(Algebra, Lemma \ref{algebra-lemma-characterize-dvr})
+and $\mathcal{F}_x$ is torsion free.
+Hence $\mathcal{F}_x$ is free by More on Algebra, Lemma
+\ref{more-algebra-lemma-dedekind-torsion-free-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-into-torsion-free}
+Let $X$ be an integral scheme. Let $\mathcal{F}$, $\mathcal{G}$ be
+quasi-coherent $\mathcal{O}_X$-modules.
+If $\mathcal{G}$ is torsion free and $\mathcal{F}$ is of finite presentation,
+then $\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ is torsion free.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense because
+$\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$
+is quasi-coherent by Schemes, Section \ref{schemes-section-quasi-coherent}.
+To see the statement is true, see
+More on Algebra, Lemma \ref{more-algebra-lemma-hom-into-torsion-free}.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-isom-depth-2-torsion-free}
+Let $X$ be an integral locally Noetherian scheme. Let
+$\varphi : \mathcal{F} \to \mathcal{G}$ be a map of
+quasi-coherent $\mathcal{O}_X$-modules. Assume $\mathcal{F}$ is coherent,
+$\mathcal{G}$ is torsion free, and that for every $x \in X$ one of the
+following happens
+\begin{enumerate}
+\item $\mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, or
+\item $\text{depth}(\mathcal{F}_x) \geq 2$.
+\end{enumerate}
+Then $\varphi$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This is a translation of More on Algebra, Lemma
+\ref{more-algebra-lemma-isom-depth-2-torsion-free}
+into the language of schemes.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Reflexive modules}
+\label{section-reflexive}
+
+\noindent
+This section is the analogue of
+More on Algebra, Section \ref{more-algebra-section-reflexive}
+for coherent modules on locally Noetherian schemes. The reason for
+working with coherent modules is that
+$\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ is coherent
+for every pair of coherent $\mathcal{O}_X$-modules $\mathcal{F}, \mathcal{G}$,
+see Modules, Lemma \ref{modules-lemma-internal-hom-locally-kernel-direct-sum}.
+
+\begin{definition}
+\label{definition-reflexive}
+Let $X$ be an integral locally Noetherian scheme. Let $\mathcal{F}$
+be a coherent $\mathcal{O}_X$-module. The {\it reflexive hull}
+of $\mathcal{F}$ is the $\mathcal{O}_X$-module
+$$
+\mathcal{F}^{**} = \SheafHom_{\mathcal{O}_X}(
+\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{O}_X), \mathcal{O}_X)
+$$
+We say $\mathcal{F}$ is {\it reflexive} if the natural map
+$j : \mathcal{F} \longrightarrow \mathcal{F}^{**}$
+is an isomorphism.
+\end{definition}
+
+\noindent
+It follows from Lemma \ref{lemma-dual-reflexive} that the reflexive hull
+is a reflexive $\mathcal{O}_X$-module.
+You can use the same definition to define reflexive modules in more
+general situations, but this does not seem to be very useful.
+Here is the obligatory lemma comparing this to the usual algebraic notion.
+
+\begin{lemma}
+\label{lemma-check-reflexive-on-affines}
+Let $X$ be an integral locally Noetherian scheme. Let $\mathcal{F}$ be a
+coherent $\mathcal{O}_X$-module. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is reflexive,
+\item for $U \subset X$ affine open $\mathcal{F}(U)$
+is a reflexive $\mathcal{O}(U)$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-different-reflexive}
+If $X$ is a scheme of finite type over a field, then sometimes a different
+notion of reflexive modules is used (see for example
+\cite[bottom of page 5 and Definition 1.1.9]{HL}).
+This other notion uses $R\SheafHom$ into a dualizing complex
+$\omega_X^\bullet$ instead of into $\mathcal{O}_X$ and
+should probably have a different name because it can be different
+when $X$ is not Gorenstein. For example, if
+$X = \Spec(k[t^3, t^4, t^5])$, then a computation shows the dualizing
+sheaf $\omega_X$ is not reflexive in our sense, but it is reflexive in the
+other sense as
+$\omega_X \to \SheafHom(\SheafHom(\omega_X, \omega_X), \omega_X)$
+is an isomorphism.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-reflexive-torsion-free}
+Let $X$ be an integral locally Noetherian scheme. Let $\mathcal{F}$
+be a coherent $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item If $\mathcal{F}$ is reflexive, then $\mathcal{F}$ is torsion free.
+\item The map $j : \mathcal{F} \longrightarrow \mathcal{F}^{**}$
+is injective if and only if $\mathcal{F}$ is torsion free.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. See More on Algebra, Lemma
+\ref{more-algebra-lemma-reflexive-torsion-free}.
+\end{proof}
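
\noindent
A torsion free module need not be reflexive. For example, let $k$ be a
field, let $X = \Spec(k[x, y])$, and let $\mathcal{F}$ correspond to
the maximal ideal $\mathfrak m = (x, y)$. Then $\mathcal{F}$ is torsion
free, but since $k[x, y]$ is a normal domain and $\mathfrak m$ has
height $2$, one computes
$\Hom_{k[x, y]}(\mathfrak m, k[x, y]) = k[x, y]$. Hence
$\mathfrak m^{**} = k[x, y]$ and
$j : \mathfrak m \to \mathfrak m^{**}$ is the nonsurjective inclusion
$\mathfrak m \subset k[x, y]$.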
+
+\begin{lemma}
+\label{lemma-check-reflexive}
+Let $X$ be an integral locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is reflexive,
+\item $\mathcal{F}_x$ is a reflexive $\mathcal{O}_{X, x}$-module
+for all $x \in X$,
+\item $\mathcal{F}_x$ is a reflexive $\mathcal{O}_{X, x}$-module
+for all closed points $x \in X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Modules, Lemma \ref{modules-lemma-stalk-internal-hom} we see that
+(1) and (2) are equivalent. Since every point of $X$ specializes to
+a closed point
+(Properties, Lemma \ref{properties-lemma-locally-Noetherian-closed-point})
+we see that (2) and (3) are equivalent.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-pullback-reflexive}
+Let $f : X \to Y$ be a flat morphism of integral locally Noetherian schemes.
+Let $\mathcal{G}$ be a coherent reflexive $\mathcal{O}_Y$-module.
+Then $f^*\mathcal{G}$ is a coherent reflexive $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+Omitted. See
More on Algebra, Lemma \ref{more-algebra-lemma-flat-pullback-reflexive}
+for the algebraic analogue.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sequence-reflexive}
+Let $X$ be an integral locally Noetherian scheme.
+Let $0 \to \mathcal{F} \to \mathcal{F}' \to \mathcal{F}''$ be
+an exact sequence of coherent $\mathcal{O}_X$-modules.
+If $\mathcal{F}'$ is reflexive and $\mathcal{F}''$ is torsion free,
+then $\mathcal{F}$ is reflexive.
+\end{lemma}
+
+\begin{proof}
+Omitted. See More on Algebra, Lemma \ref{more-algebra-lemma-sequence-reflexive}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dual-reflexive}
+Let $X$ be an integral locally Noetherian scheme.
+Let $\mathcal{F}$, $\mathcal{G}$ be
+coherent $\mathcal{O}_X$-modules.
+If $\mathcal{G}$ is reflexive,
+then $\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ is reflexive.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense because
+$\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$
+is coherent by Cohomology of Schemes, Lemma
+\ref{coherent-lemma-tensor-hom-coherent}.
+To see the statement is true, see
+More on Algebra, Lemma \ref{more-algebra-lemma-dual-reflexive}.
+Some details omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-tensor}
+Let $X$ be an integral locally Noetherian scheme. Thanks to
+Lemma \ref{lemma-dual-reflexive} we know that the reflexive
+hull $\mathcal{F}^{**}$ of a coherent $\mathcal{O}_X$-module
+is coherent reflexive. Consider the category $\mathcal{C}$
+of coherent reflexive $\mathcal{O}_X$-modules. Taking
+reflexive hulls gives a left adjoint to the inclusion functor
+$\mathcal{C} \to \textit{Coh}(\mathcal{O}_X)$.
+Observe that $\mathcal{C}$ is an additive category
+with kernels and cokernels. Namely, given
+$\varphi : \mathcal{F} \to \mathcal{G}$ in $\mathcal{C}$, the
+usual kernel $\Ker(\varphi)$ is reflexive
+(Lemma \ref{lemma-sequence-reflexive}) and the reflexive hull
+$\Coker(\varphi)^{**}$ of the usual cokernel
+is the cokernel in $\mathcal{C}$. Moreover $\mathcal{C}$ inherits
+a tensor product
+$$
+\mathcal{F} \otimes_\mathcal{C} \mathcal{G} =
+(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G})^{**}
+$$
+which is associative and symmetric. There is an internal Hom
+in the sense that for any three objects
+$\mathcal{F}, \mathcal{G}, \mathcal{H}$ of
+$\mathcal{C}$ we have the identity
+$$
+\Hom_\mathcal{C}(\mathcal{F} \otimes_\mathcal{C} \mathcal{G}, \mathcal{H}) =
+\Hom_\mathcal{C}(\mathcal{F},
+\SheafHom_{\mathcal{O}_X}(\mathcal{G}, \mathcal{H}))
+$$
+see Modules, Lemma \ref{modules-lemma-internal-hom}. In $\mathcal{C}$
+every object $\mathcal{F}$ has a {\it dual object}
+$\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{O}_X)$.
+Without further conditions on $X$ it can happen that
+$$
+\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G}) \not \cong
+\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{O}_X)
+\otimes_\mathcal{C} \mathcal{G}
+\quad\text{and}\quad
+\mathcal{F} \otimes_\mathcal{C}
+\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{O}_X)
+\not \cong \mathcal{O}_X
+$$
+for $\mathcal{F}, \mathcal{G}$ of rank $1$ in $\mathcal{C}$.
+To make an example let $X = \Spec(R)$ where $R$ is as in
+More on Algebra, Example \ref{more-algebra-example-ring-not-S2}
+and let $\mathcal{F}, \mathcal{G}$ be the modules corresponding to $M$.
+Computation omitted.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-reflexive-depth-2}
+Let $X$ be an integral locally Noetherian scheme. Let $\mathcal{F}$
+be a coherent $\mathcal{O}_X$-module. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is reflexive,
+\item for each $x \in X$ one of the following happens
+\begin{enumerate}
+\item $\mathcal{F}_x$ is a reflexive $\mathcal{O}_{X, x}$-module, or
+\item $\text{depth}(\mathcal{F}_x) \geq 2$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. See More on Algebra, Lemma \ref{more-algebra-lemma-reflexive-depth-2}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reflexive-S2}
+Let $X$ be an integral locally Noetherian scheme.
+Let $\mathcal{F}$ be a coherent reflexive $\mathcal{O}_X$-module.
+Let $x \in X$.
+\begin{enumerate}
+\item If $\text{depth}(\mathcal{O}_{X, x}) \geq 2$, then
+$\text{depth}(\mathcal{F}_x) \geq 2$.
+\item If $X$ is $(S_2)$, then $\mathcal{F}$ is $(S_2)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. See More on Algebra, Lemma \ref{more-algebra-lemma-reflexive-S2}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reflexive-S2-extend}
+Let $X$ be an integral locally Noetherian scheme. Let $j : U \to X$
+be an open subscheme with complement $Z$. Assume $\mathcal{O}_{X, z}$
+has depth $\geq 2$ for all $z \in Z$. Then $j^*$ and $j_*$ define
+an equivalence of categories between the category of coherent reflexive
+$\mathcal{O}_X$-modules and the category of coherent reflexive
+$\mathcal{O}_U$-modules.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be a coherent reflexive $\mathcal{O}_X$-module. For $z \in Z$
+the stalk $\mathcal{F}_z$ has depth $\geq 2$ by Lemma \ref{lemma-reflexive-S2}.
+Thus $\mathcal{F} \to j_*j^*\mathcal{F}$ is an isomorphism by
+Lemma \ref{lemma-depth-2-hartog}. Conversely, let $\mathcal{G}$
+be a coherent reflexive $\mathcal{O}_U$-module. It suffices to show that
+$j_*\mathcal{G}$ is a coherent reflexive $\mathcal{O}_X$-module.
+To prove this we may assume $X$ is affine. By Properties, Lemma
+\ref{properties-lemma-lift-finite-presentation}
+there exists a coherent $\mathcal{O}_X$-module $\mathcal{F}$
+with $\mathcal{G} = j^*\mathcal{F}$. After replacing $\mathcal{F}$
+by its reflexive hull, we may assume $\mathcal{F}$ is reflexive
+(see discussion above and in particular Lemma \ref{lemma-dual-reflexive}).
+By the above $j_*\mathcal{G} = j_*j^*\mathcal{F} = \mathcal{F}$
+as desired.
+\end{proof}
+
+\noindent
+If the scheme is normal, then reflexive is the same thing as
+torsion free and $(S_2)$.
+
+\begin{lemma}
+\label{lemma-reflexive-over-normal}
+Let $X$ be an integral locally Noetherian normal scheme.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is reflexive,
+\item $\mathcal{F}$ is torsion free and has property $(S_2)$, and
+\item there exists an open subscheme $j : U \to X$ such that
+\begin{enumerate}
+\item every irreducible component of $X \setminus U$
+has codimension $\geq 2$ in $X$,
+\item $j^*\mathcal{F}$ is finite locally free, and
+\item $\mathcal{F} = j_*j^*\mathcal{F}$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Using Lemma \ref{lemma-check-reflexive-on-affines}
+the equivalence of (1) and (2) follows from
+More on Algebra, Lemma \ref{more-algebra-lemma-reflexive-over-normal}.
+Let $U \subset X$ be as in (3). By
+Properties, Lemma \ref{properties-lemma-criterion-normal}
+we see that $\text{depth}(\mathcal{O}_{X, x}) \geq 2$
+for $x \not \in U$. Since a finite locally free module is reflexive,
+we conclude (3) implies (1) by Lemma \ref{lemma-reflexive-S2-extend}.
+
+\medskip\noindent
+Assume (1). Let $U \subset X$ be the maximal open subscheme such
+that $j^*\mathcal{F} = \mathcal{F}|_U$
+is finite locally free. So (3)(b) holds. Let $x \in X$ be a point.
+If $\mathcal{F}_x$ is a free $\mathcal{O}_{X, x}$-module, then
+$x \in U$, see
+Modules, Lemma \ref{modules-lemma-finite-presentation-stalk-free}.
+If $\dim(\mathcal{O}_{X, x}) \leq 1$, then $\mathcal{O}_{X, x}$
+is either a field or a discrete valuation ring
+(Properties, Lemma \ref{properties-lemma-criterion-normal})
+and hence $\mathcal{F}_x$ is free (More on Algebra, Lemma
+\ref{more-algebra-lemma-dedekind-torsion-free-flat}).
+Thus $x \not \in U \Rightarrow \dim(\mathcal{O}_{X, x}) \geq 2$.
+Then Properties, Lemma \ref{properties-lemma-codimension-local-ring}
+shows (3)(a) holds. By the already used
+Properties, Lemma \ref{properties-lemma-criterion-normal}
+we also see that $\text{depth}(\mathcal{O}_{X, x}) \geq 2$
+for $x \not \in U$ and hence (3)(c) follows from
+Lemma \ref{lemma-reflexive-S2-extend}.
+\end{proof}
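
\noindent
A typical example: let $k$ be a field and let
$X = \Spec(k[x, y, z]/(xy - z^2))$ be the quadric cone, which is an
integral normal Noetherian scheme. The height $1$ prime
$\mathfrak p = (x, z)$ defines a coherent module which is torsion free
and has property $(S_2)$, hence is reflexive by the lemma, but which is
not finite locally free (it is not free at the vertex of the cone).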
+
+\begin{lemma}
+\label{lemma-describe-reflexive-hull}
+Let $X$ be an integral locally Noetherian normal scheme with
+generic point $\eta$. Let $\mathcal{F}$, $\mathcal{G}$ be coherent
+$\mathcal{O}_X$-modules. Let $T : \mathcal{G}_\eta \to \mathcal{F}_\eta$
+be a linear map. Then $T$ extends to a map
+$\mathcal{G} \to \mathcal{F}^{**}$ of $\mathcal{O}_X$-modules
+if and only if
+\begin{itemize}
+\item[(*)] for every $x \in X$ with $\dim(\mathcal{O}_{X, x}) = 1$
+we have
+$$
+T\left(\Im(\mathcal{G}_x \to \mathcal{G}_\eta)\right) \subset
+\Im(\mathcal{F}_x \to \mathcal{F}_\eta).
+$$
+\end{itemize}
+\end{lemma}
+
+\begin{proof}
Because $\mathcal{F}^{**}$ is torsion free and
$\mathcal{F}_\eta = \mathcal{F}^{**}_\eta$, an extension, if it exists,
+is unique. Thus it suffices to prove the lemma over the members of an
+open covering of $X$, i.e., we may assume $X$ is affine. In this case
+we are asking the following algebra question: Let $R$ be a Noetherian
+normal domain with fraction field $K$, let $M$, $N$ be finite $R$-modules,
+let $T : M \otimes_R K \to N \otimes_R K$ be a $K$-linear map. When
does $T$ extend to a map $M \to N^{**}$? By More on Algebra, Lemma
\ref{more-algebra-lemma-describe-reflexive-hull}
this happens if and only if $T$ maps the image of $M_\mathfrak p$
in $M \otimes_R K$ into
$(N/N_{tors})_\mathfrak p$ for every height $1$ prime $\mathfrak p$ of $R$.
+This is exactly condition $(*)$ of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reflexive-over-regular-dim-2}
+Let $X$ be a regular scheme of dimension $\leq 2$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is reflexive,
+\item $\mathcal{F}$ is finite locally free.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that a finite locally free module is reflexive.
+For the converse, we will show that if $\mathcal{F}$ is
+reflexive, then $\mathcal{F}_x$ is a free $\mathcal{O}_{X, x}$-module
+for all $x \in X$. This is enough by
+Algebra, Lemma \ref{algebra-lemma-finite-projective}
+and the fact that $\mathcal{F}$ is coherent.
+If $\dim(\mathcal{O}_{X, x}) = 0$, then
+$\mathcal{O}_{X, x}$ is a field and the statement is clear.
+If $\dim(\mathcal{O}_{X, x}) = 1$, then $\mathcal{O}_{X, x}$
+is a discrete valuation ring
+(Algebra, Lemma \ref{algebra-lemma-characterize-dvr})
+and $\mathcal{F}_x$ is torsion free.
+Hence $\mathcal{F}_x$ is free by More on Algebra, Lemma
+\ref{more-algebra-lemma-dedekind-torsion-free-flat}.
+If $\dim(\mathcal{O}_{X, x}) = 2$, then $\mathcal{O}_{X, x}$
+is a regular local ring of dimension $2$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-reflexive-over-normal}
+we see that $\mathcal{F}_x$ has depth $\geq 2$.
Hence $\mathcal{F}_x$ is free by
+Algebra, Lemma \ref{algebra-lemma-regular-mcm-free}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Effective Cartier divisors}
+\label{section-effective-Cartier-divisors}
+
+\noindent
+We define the notion of an effective Cartier divisor before any other type
+of divisor.
+
+\begin{definition}
+\label{definition-effective-Cartier-divisor}
+Let $S$ be a scheme.
+\begin{enumerate}
+\item A {\it locally principal closed subscheme} of $S$ is a closed subscheme
+whose sheaf of ideals is locally generated by a single element.
+\item An {\it effective Cartier divisor} on $S$ is a closed subscheme
+$D \subset S$ whose ideal sheaf $\mathcal{I}_D \subset \mathcal{O}_S$
+is an invertible $\mathcal{O}_S$-module.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Thus an effective Cartier divisor is a locally principal closed subscheme,
+but the converse is not always true. Effective Cartier divisors are closed
+subschemes of pure codimension $1$ in the strongest possible sense. Namely
+they are locally cut out by a single element which is a nonzerodivisor.
+In particular they are nowhere dense.
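
\noindent
Here is the standard example of a locally principal closed subscheme
which is not an effective Cartier divisor.

\begin{example}
\label{example-locally-principal-not-effective-Cartier}
Let $k$ be a field, let $A = k[x, y]/(xy)$, and let $S = \Spec(A)$,
the union of the two coordinate axes in the plane. The closed subscheme
$D = V(x) \subset S$ is locally principal since its ideal is generated
by the single element $x$. But $\mathcal{I}_D = \widetilde{(x)}$ is not
invertible: the ideal $(x) \subset A$ is annihilated by the nonzero
element $y$, whereas an invertible $A$-module is faithful. Hence $D$
is not an effective Cartier divisor.
\end{example}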
+
+\begin{lemma}
+\label{lemma-characterize-effective-Cartier-divisor}
+Let $S$ be a scheme.
+Let $D \subset S$ be a closed subscheme.
+The following are equivalent:
+\begin{enumerate}
+\item The subscheme $D$ is an effective Cartier divisor on $S$.
+\item For every $x \in D$ there exists an affine open neighbourhood
+$\Spec(A) = U \subset S$ of $x$ such that
+$U \cap D = \Spec(A/(f))$ with $f \in A$ a nonzerodivisor.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). For every $x \in D$ there exists an affine open neighbourhood
+$\Spec(A) = U \subset S$ of $x$ such that
+$\mathcal{I}_D|_U \cong \mathcal{O}_U$. In other words, there exists
+a section $f \in \Gamma(U, \mathcal{I}_D)$ which freely generates the
+restriction $\mathcal{I}_D|_U$. Hence $f \in A$, and the multiplication
+map $f : A \to A$ is injective. Also, since $\mathcal{I}_D$ is
+quasi-coherent we see that $D \cap U = \Spec(A/(f))$.
+
+\medskip\noindent
+Assume (2). Let $x \in D$. By assumption there exists an affine open
+neighbourhood $\Spec(A) = U \subset S$ of $x$ such that
+$U \cap D = \Spec(A/(f))$ with $f \in A$ a nonzerodivisor.
+Then $\mathcal{I}_D|_U \cong \mathcal{O}_U$ since it is equal to
+$\widetilde{(f)} \cong \widetilde{A} \cong \mathcal{O}_U$.
+Of course $\mathcal{I}_D$ restricted to the open subscheme
+$S \setminus D$ is isomorphic to $\mathcal{O}_{S \setminus D}$.
+Hence $\mathcal{I}_D$ is an invertible $\mathcal{O}_S$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complement-locally-principal-closed-subscheme}
+Let $S$ be a scheme. Let $Z \subset S$ be a locally principal closed
+subscheme. Let $U = S \setminus Z$. Then $U \to S$ is an affine morphism.
+\end{lemma}
+
+\begin{proof}
+The question is local on $S$, see
Morphisms, Lemma \ref{morphisms-lemma-characterize-affine}.
Thus we may assume $S = \Spec(A)$ and $Z = V(f)$ for some $f \in A$.
In this case $U = D(f) = \Spec(A_f)$ is affine, hence $U \to S$ is affine.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complement-effective-Cartier-divisor}
+Let $S$ be a scheme. Let $D \subset S$ be an effective Cartier divisor.
+Let $U = S \setminus D$. Then $U \to S$ is an affine morphism and $U$
+is scheme theoretically dense in $S$.
+\end{lemma}
+
+\begin{proof}
+Affineness is Lemma \ref{lemma-complement-locally-principal-closed-subscheme}.
+The density question is local on $S$, see
+Morphisms, Lemma \ref{morphisms-lemma-characterize-scheme-theoretically-dense}.
+Thus we may assume $S = \Spec(A)$ and $D$ corresponding to the
+nonzerodivisor $f \in A$, see
+Lemma \ref{lemma-characterize-effective-Cartier-divisor}.
+Thus $A \subset A_f$ which implies that $U \subset S$ is
+scheme theoretically dense, see
+Morphisms, Example \ref{morphisms-example-scheme-theoretic-closure}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-effective-Cartier-makes-dimension-drop}
+Let $S$ be a scheme.
+Let $D \subset S$ be an effective Cartier divisor.
+Let $s \in D$.
+If $\dim_s(S) < \infty$, then $\dim_s(D) < \dim_s(S)$.
+\end{lemma}
+
+\begin{proof}
+Assume $\dim_s(S) < \infty$.
+Let $U = \Spec(A) \subset S$ be an affine open neighbourhood
+of $s$ such that $\dim(U) = \dim_s(S)$ and such that $D = V(f)$
+for some nonzerodivisor $f \in A$ (see
+Lemma \ref{lemma-characterize-effective-Cartier-divisor}).
+Recall that $\dim(U)$ is the Krull dimension of the ring $A$
+and that $\dim(U \cap D)$ is the Krull dimension of the ring $A/(f)$.
+Then $f$ is not contained in any minimal prime of $A$.
+Hence any maximal chain of primes in $A/(f)$, viewed as a chain
+of primes in $A$, can be extended by adding a minimal prime.
+\end{proof}
+
+\begin{definition}
+\label{definition-sum-effective-Cartier-divisors}
+Let $S$ be a scheme. Given effective Cartier divisors
+$D_1$, $D_2$ on $S$ we set $D = D_1 + D_2$ equal to the
+closed subscheme of $S$ corresponding to the quasi-coherent
+sheaf of ideals
+$\mathcal{I}_{D_1}\mathcal{I}_{D_2} \subset \mathcal{O}_S$.
+We call this the {\it sum of the effective Cartier divisors
+$D_1$ and $D_2$}.
+\end{definition}
+
+\noindent
It is clear that we may define the sum $\sum n_iD_i$ given
finitely many effective Cartier divisors $D_i$ on $S$
and nonnegative integers $n_i$.
+
+\begin{lemma}
+\label{lemma-sum-effective-Cartier-divisors}
+The sum of two effective Cartier divisors is an effective
+Cartier divisor.
+\end{lemma}
+
+\begin{proof}
Omitted. Hint: locally the two divisors correspond to nonzerodivisors
$f_1, f_2 \in A$, and the product $f_1f_2 \in A$ of two nonzerodivisors
is a nonzerodivisor.
+\end{proof}
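
\noindent
For example, let $S = \mathbf{A}^1_k = \Spec(k[x])$ over a field $k$.
If $D_1 = V(x)$ and $D_2 = V(x - 1)$, then
$D_1 + D_2 = V(x(x - 1))$ is the disjoint union of the two reduced
points. On the other hand $D_1 + D_1 = V(x^2)$ is a nonreduced
scheme supported at the origin. Thus the sum of effective Cartier
divisors keeps track of multiplicities.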
+
+\begin{lemma}
+\label{lemma-difference-effective-Cartier-divisors}
+Let $X$ be a scheme.
+Let $D, D'$ be two effective Cartier divisors on $X$.
+If $D \subset D'$ (as closed subschemes of $X$), then
+there exists an effective Cartier divisor $D''$ such
+that $D' = D + D''$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sum-closed-subschemes-effective-Cartier}
+Let $X$ be a scheme. Let $Z, Y$ be two closed subschemes of $X$
+with ideal sheaves $\mathcal{I}$ and $\mathcal{J}$. If $\mathcal{I}\mathcal{J}$
+defines an effective Cartier divisor $D \subset X$, then $Z$ and $Y$
+are effective Cartier divisors and $D = Z + Y$.
+\end{lemma}
+
+\begin{proof}
+Applying Lemma \ref{lemma-characterize-effective-Cartier-divisor} we obtain
+the following algebra situation: $A$ is a ring, $I, J \subset A$
+ideals and $f \in A$ a nonzerodivisor such that $IJ = (f)$.
+Thus the result follows from
+Algebra, Lemma \ref{algebra-lemma-product-ideals-principal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sum-effective-Cartier-divisors-union}
+Let $X$ be a scheme. Let $D, D' \subset X$ be effective Cartier divisors
+such that the scheme theoretic intersection $D \cap D'$ is an effective
+Cartier divisor on $D'$. Then $D + D'$ is the scheme theoretic
+union of $D$ and $D'$.
+\end{lemma}
+
+\begin{proof}
+See Morphisms, Definition
+\ref{morphisms-definition-scheme-theoretic-intersection-union}
+for the definition of scheme theoretic intersection and union.
+To prove the lemma working locally
+(using Lemma \ref{lemma-characterize-effective-Cartier-divisor})
+we obtain the following algebra problem: Given a ring $A$
+and nonzerodivisors $f_1, f_2 \in A$ such that $f_1$ maps
+to a nonzerodivisor in $A/f_2A$, show that $f_1A \cap f_2A = f_1f_2A$.
+We omit the straightforward argument.
+\end{proof}
+
+\noindent
+Recall that we have defined the inverse image of a closed subscheme
+under any morphism of schemes in
+Schemes, Definition \ref{schemes-definition-inverse-image-closed-subscheme}.
+
+\begin{lemma}
+\label{lemma-pullback-locally-principal}
+Let $f : S' \to S$ be a morphism of schemes. Let $Z \subset S$
+be a locally principal closed subscheme. Then the inverse image
+$f^{-1}(Z)$ is a locally principal closed subscheme of $S'$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{definition}
+\label{definition-pullback-effective-Cartier-divisor}
+Let $f : S' \to S$ be a morphism of schemes. Let $D \subset S$
+be an effective Cartier divisor. We say the {\it pullback of
+$D$ by $f$ is defined} if the closed subscheme $f^{-1}(D) \subset S'$
+is an effective Cartier divisor. In this case we denote it either
+$f^*D$ or $f^{-1}(D)$ and we call it the
+{\it pullback of the effective Cartier divisor}.
+\end{definition}
+
+\noindent
+The condition that $f^{-1}(D)$ is an effective Cartier divisor
+is often satisfied in practice. Here is an example lemma.
+
+\begin{lemma}
+\label{lemma-pullback-effective-Cartier-defined}
+Let $f : X \to Y$ be a morphism of schemes.
+Let $D \subset Y$ be an effective Cartier divisor.
+The pullback of $D$ by $f$ is defined in each of the following cases:
+\begin{enumerate}
+\item $f(x) \not \in D$ for any weakly associated point $x$ of $X$,
+\item $X$, $Y$ integral and $f$ dominant,
+\item $X$ reduced and $f(\xi) \not \in D$ for any generic point $\xi$ of any
+irreducible component of $X$,
+\item $X$ is locally Noetherian and $f(x) \not \in D$ for any associated point
+$x$ of $X$,
+\item $X$ is locally Noetherian, has no embedded points, and
+$f(\xi) \not \in D$ for any generic point $\xi$ of an irreducible component of
+$X$,
+\item $f$ is flat, and
+\item add more here as needed.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$, and hence we reduce to the case
+where $X = \Spec(A)$, $Y = \Spec(R)$, $f$ is
+given by $\varphi : R \to A$ and
+$D = \Spec(R/(t))$ where $t \in R$ is a nonzerodivisor.
+The goal in each case is to show that $\varphi(t) \in A$
+is a nonzerodivisor.
+
+\medskip\noindent
+In case (1) this follows from
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-zero-divisors}.
+Case (4) is a special case of (1) by Lemma \ref{lemma-ass-weakly-ass}.
+Case (5) follows from (4) and the definitions.
+Case (3) is a special case of (1) by
+Lemma \ref{lemma-weakass-reduced}.
+Case (2) is a special case of (3).
+If $R \to A$ is flat, then $t : R \to R$ being injective
+shows that $t : A \to A$ is injective. This proves (6).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-effective-Cartier-divisors-additive}
+Let $f : S' \to S$ be a morphism of schemes.
+Let $D_1$, $D_2$ be effective Cartier divisors on $S$.
+If the pullbacks of $D_1$ and $D_2$ are defined then the
+pullback of $D = D_1 + D_2$ is defined and
+$f^*D = f^*D_1 + f^*D_2$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+
+
+
+
+\section{Effective Cartier divisors and invertible sheaves}
+\label{section-effective-Cartier-invertible}
+
+\noindent
+Since an effective Cartier divisor has an invertible ideal sheaf
+(Definition \ref{definition-effective-Cartier-divisor}) the
+following definition makes sense.
+
+\begin{definition}
+\label{definition-invertible-sheaf-effective-Cartier-divisor}
+Let $S$ be a scheme. Let $D \subset S$ be an effective Cartier divisor
+with ideal sheaf $\mathcal{I}_D$.
+\begin{enumerate}
+\item The {\it invertible sheaf $\mathcal{O}_S(D)$ associated to $D$}
+is defined by
+$$
+\mathcal{O}_S(D) =
+\SheafHom_{\mathcal{O}_S}(\mathcal{I}_D, \mathcal{O}_S) =
+\mathcal{I}_D^{\otimes -1}.
+$$
+\item The {\it canonical section}, usually denoted $1$ or $1_D$, is the
+global section of $\mathcal{O}_S(D)$ corresponding to
+the inclusion mapping $\mathcal{I}_D \to \mathcal{O}_S$.
+\item We write
+$\mathcal{O}_S(-D) = \mathcal{O}_S(D)^{\otimes -1} = \mathcal{I}_D$.
+\item Given a second effective Cartier divisor $D' \subset S$ we define
+$\mathcal{O}_S(D - D') =
+\mathcal{O}_S(D) \otimes_{\mathcal{O}_S} \mathcal{O}_S(-D')$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Some comments. We will see below that the assignment
+$D \mapsto \mathcal{O}_S(D)$ turns addition of effective Cartier
+divisors (Definition \ref{definition-sum-effective-Cartier-divisors})
+into addition in the Picard group of $S$
+(Lemma \ref{lemma-invertible-sheaf-sum-effective-Cartier-divisors}).
+However, the expression $D - D'$ in the definition above does not
+have any geometric meaning. More precisely, we can think of the
+set of effective Cartier divisors on $S$ as a commutative monoid
+$\text{EffCart}(S)$ whose zero element is the empty effective Cartier divisor.
+Then the assignment $(D, D') \mapsto \mathcal{O}_S(D - D')$ defines
+a group homomorphism
+$$
+\text{EffCart}(S)^{gp} \longrightarrow \Pic(S)
+$$
+where the left hand side is the group completion of
+$\text{EffCart}(S)$. In other words, when we write $\mathcal{O}_S(D - D')$
+we may think of $D - D'$ as an element of $\text{EffCart}(S)^{gp}$.
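
\noindent
In the affine case the invertible sheaf $\mathcal{O}_S(D)$ can be
described explicitly. Say $S = \Spec(A)$ and $D = V(f)$ with
$f \in A$ a nonzerodivisor
(Lemma \ref{lemma-characterize-effective-Cartier-divisor}). Then
$$
\Gamma(S, \mathcal{O}_S(D)) = \Hom_A((f), A) \cong \frac{1}{f}A
\subset Q(A)
$$
where $Q(A)$ is the total quotient ring of $A$ and where
$\varphi \in \Hom_A((f), A)$ corresponds to the element
$\varphi(f)/f \in Q(A)$. Under this identification the canonical
section $1_D$, i.e., the inclusion map $(f) \to A$, corresponds to
$1 \in \frac{1}{f}A$.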
+
+\begin{lemma}
+\label{lemma-conormal-effective-Cartier-divisor}
+Let $S$ be a scheme and let $D \subset S$ be an effective Cartier divisor.
+Then the conormal sheaf is $\mathcal{C}_{D/S} = \mathcal{I}_D|_D =
+\mathcal{O}_S(-D)|_D$ and the normal sheaf is
+$\mathcal{N}_{D/S} = \mathcal{O}_S(D)|_D$.
+\end{lemma}
+
+\begin{proof}
+This follows from Morphisms, Lemma \ref{morphisms-lemma-affine-conormal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ses-add-divisor}
+Let $X$ be a scheme. Let $D, C \subset X$ be
+effective Cartier divisors with $C \subset D$ and let $D' = D + C$.
+Then there is a short exact sequence
+$$
+0 \to \mathcal{O}_X(-D)|_C \to \mathcal{O}_{D'} \to \mathcal{O}_D \to 0
+$$
+of $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+In the statement of the lemma and in the proof we use the equivalence of
+Morphisms, Lemma \ref{morphisms-lemma-i-star-equivalence} to think of
+quasi-coherent modules on closed subschemes of $X$
+as quasi-coherent modules on $X$. Let $\mathcal{I}$ be the ideal
+sheaf of $D$ in $D'$. Then there is a short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}_{D'} \to \mathcal{O}_D \to 0
+$$
+because $D \to D'$ is a closed immersion. There is a
+canonical surjection
+$\mathcal{I} \to \mathcal{I}/\mathcal{I}^2 = \mathcal{C}_{D/D'}$.
+We have $\mathcal{C}_{D/X} = \mathcal{O}_X(-D)|_D$ by
+Lemma \ref{lemma-conormal-effective-Cartier-divisor}
+and there is a canonical surjective map
+$$
+\mathcal{C}_{D/X} \longrightarrow \mathcal{C}_{D/D'}
+$$
+see Morphisms, Lemmas \ref{morphisms-lemma-conormal-functorial} and
+\ref{morphisms-lemma-conormal-functorial-flat}.
+Thus it suffices to show: (a) $\mathcal{I}^2 = 0$ and (b)
+$\mathcal{I}$ is an invertible $\mathcal{O}_C$-module.
+Both (a) and (b) can be checked locally, hence we may assume
+$X = \Spec(A)$, $D = \Spec(A/fA)$ and $C = \Spec(A/gA)$ where
+$f, g \in A$ are nonzerodivisors
+(Lemma \ref{lemma-characterize-effective-Cartier-divisor}).
+Since $C \subset D$ we see
+that $f \in gA$. Then $I = fA/fgA$ has square zero and is invertible
+as an $A/gA$-module as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-invertible-sheaf-sum-effective-Cartier-divisors}
+Let $S$ be a scheme.
+Let $D_1$, $D_2$ be effective Cartier divisors on $S$.
+Let $D = D_1 + D_2$.
+Then there is a unique isomorphism
+$$
+\mathcal{O}_S(D_1) \otimes_{\mathcal{O}_S} \mathcal{O}_S(D_2)
+\longrightarrow
+\mathcal{O}_S(D)
+$$
+which maps $1_{D_1} \otimes 1_{D_2}$ to $1_D$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-effective-Cartier-divisors}
+Let $f : S' \to S$ be a morphism of schemes.
Let $D$ be an effective Cartier divisor on $S$.
+If the pullback of $D$ is defined then
+$f^*\mathcal{O}_S(D) = \mathcal{O}_{S'}(f^*D)$
+and the canonical section $1_D$ pulls back to
+the canonical section $1_{f^*D}$.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{definition}
+\label{definition-regular-section}
+Let $(X, \mathcal{O}_X)$ be a locally ringed space.
+Let $\mathcal{L}$ be an invertible sheaf on $X$.
+A global section $s \in \Gamma(X, \mathcal{L})$ is called a
+{\it regular section} if the map $\mathcal{O}_X \to \mathcal{L}$,
+$f \mapsto fs$ is injective.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-regular-section-structure-sheaf}
+Let $X$ be a locally ringed space. Let $f \in \Gamma(X, \mathcal{O}_X)$.
+The following are equivalent:
+\begin{enumerate}
+\item $f$ is a regular section, and
\item for any $x \in X$ the image of $f$ in $\mathcal{O}_{X, x}$
is a nonzerodivisor.
+\end{enumerate}
+If $X$ is a scheme these are also equivalent to
+\begin{enumerate}
+\item[(3)] for any affine open $\Spec(A) = U \subset X$
the image of $f$ in $A$ is a nonzerodivisor,
+\item[(4)] there exists an affine open covering
+$X = \bigcup \Spec(A_i)$ such that
+the image of $f$ in $A_i$ is a nonzerodivisor for all $i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
+Note that a global section $s$ of an invertible $\mathcal{O}_X$-module
+$\mathcal{L}$ may be seen as an $\mathcal{O}_X$-module map
+$s : \mathcal{O}_X \to \mathcal{L}$. Its dual is therefore a
+map $s : \mathcal{L}^{\otimes -1} \to \mathcal{O}_X$.
+(See Modules, Definition \ref{modules-definition-powers}
+for the definition of the dual invertible sheaf.)
+
+\begin{definition}
+\label{definition-zero-scheme-s}
+Let $X$ be a scheme. Let $\mathcal{L}$ be an invertible sheaf.
+Let $s \in \Gamma(X, \mathcal{L})$ be a global section.
+The {\it zero scheme} of $s$ is the closed subscheme $Z(s) \subset X$
+defined by the quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_X$ which is the image of the
+map $s : \mathcal{L}^{\otimes -1} \to \mathcal{O}_X$.
+\end{definition}
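
\noindent
For example, let $X = \mathbf{P}^n_k$ be projective space over a field
$k$ and let $\mathcal{L} = \mathcal{O}_X(d)$ for some $d \geq 1$. A
global section $s$ of $\mathcal{L}$ is given by a homogeneous polynomial
$F \in k[T_0, \ldots, T_n]$ of degree $d$ and the zero scheme $Z(s)$
is the hypersurface $V_+(F) \subset \mathbf{P}^n_k$. If $F$ is nonzero,
then $s$ is a regular section because $X$ is integral, and hence
$V_+(F)$ is an effective Cartier divisor by
Lemma \ref{lemma-zero-scheme} below.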
+
+\begin{lemma}
+\label{lemma-zero-scheme}
+Let $X$ be a scheme.
+Let $\mathcal{L}$ be an invertible sheaf.
+Let $s \in \Gamma(X, \mathcal{L})$.
+\begin{enumerate}
+\item Consider closed immersions $i : Z \to X$ such that
+$i^*s \in \Gamma(Z, i^*\mathcal{L})$ is zero
+ordered by inclusion. The zero scheme $Z(s)$ is the
+maximal element of this ordered set.
+\item For any morphism of schemes $f : Y \to X$ we have
+$f^*s = 0$ in $\Gamma(Y, f^*\mathcal{L})$ if and only if
+$f$ factors through $Z(s)$.
+\item The zero scheme $Z(s)$ is a locally principal closed subscheme.
+\item The zero scheme $Z(s)$ is an effective Cartier divisor
+if and only if $s$ is a regular section of $\mathcal{L}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-OD}
+\begin{slogan}
+Effective Cartier divisors on a scheme are the same as invertible sheaves
+with fixed regular global section.
+\end{slogan}
+Let $X$ be a scheme.
+\begin{enumerate}
+\item If $D \subset X$ is an effective Cartier divisor, then
+the canonical section $1_D$ of $\mathcal{O}_X(D)$ is regular.
+\item Conversely, if $s$ is a regular section of the invertible
+sheaf $\mathcal{L}$, then there exists a unique effective
+Cartier divisor $D = Z(s) \subset X$ and a unique isomorphism
+$\mathcal{O}_X(D) \to \mathcal{L}$ which maps $1_D$ to $s$.
+\end{enumerate}
+The constructions
+$D \mapsto (\mathcal{O}_X(D), 1_D)$ and $(\mathcal{L}, s) \mapsto Z(s)$
+give mutually inverse maps
+$$
+\left\{
+\begin{matrix}
+\text{effective Cartier divisors on }X
+\end{matrix}
+\right\}
+\leftrightarrow
+\left\{
+\begin{matrix}
+\text{isomorphism classes of pairs }(\mathcal{L}, s)\\
+\text{consisting of an invertible }
+\mathcal{O}_X\text{-module}\\
+\mathcal{L}\text{ and a regular global section }s
+\end{matrix}
+\right\}
+$$
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-ses-regular-section}
+Let $X$ be a scheme, $\mathcal{L}$ an invertible $\mathcal{O}_X$-module,
+and $s$ a regular section of $\mathcal{L}$. Then the zero scheme
+$D = Z(s)$ is an effective Cartier divisor on $X$ and there are
+short exact sequences
+$$
+0 \to \mathcal{O}_X \to \mathcal{L} \to i_*(\mathcal{L}|_D) \to 0
+\quad\text{and}\quad
+0 \to \mathcal{L}^{\otimes -1} \to \mathcal{O}_X \to i_*\mathcal{O}_D \to 0.
+$$
+Given an effective Cartier divisor $D \subset X$ using
+Lemmas \ref{lemma-characterize-OD} and
+\ref{lemma-conormal-effective-Cartier-divisor}
+we get
+$$
+0 \to \mathcal{O}_X \to \mathcal{O}_X(D) \to i_*(\mathcal{N}_{D/X}) \to 0
+\quad\text{and}\quad
+0 \to \mathcal{O}_X(-D) \to \mathcal{O}_X \to i_*(\mathcal{O}_D) \to 0
+$$
+\end{remark}
+
+
+
+
+
+
+\section{Effective Cartier divisors on Noetherian schemes}
+\label{section-Noetherian-effective-Cartier}
+
+\noindent
+In the locally Noetherian setting most of the discussion of
+effective Cartier divisors and regular sections simplifies somewhat.
+
+\begin{lemma}
+\label{lemma-regular-section-associated-points}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{L}$ be an invertible
+$\mathcal{O}_X$-module. Let $s \in \Gamma(X, \mathcal{L})$. Then $s$
+is a regular section if and only if $s$ does not vanish in the associated
+points of $X$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: reduce to the affine case and $\mathcal{L}$ trivial
+and then use Lemma \ref{lemma-regular-section-structure-sheaf} and
+Algebra, Lemma \ref{algebra-lemma-ass-zero-divisors}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-effective-Cartier-in-points}
+Let $X$ be a locally Noetherian scheme. Let $D \subset X$ be a closed subscheme
+corresponding to the quasi-coherent ideal sheaf
+$\mathcal{I} \subset \mathcal{O}_X$.
+\begin{enumerate}
+\item If for every $x \in D$ the ideal
+$\mathcal{I}_x \subset \mathcal{O}_{X, x}$
+can be generated by one element, then $D$ is locally principal.
+\item If for every $x \in D$ the ideal
+$\mathcal{I}_x \subset \mathcal{O}_{X, x}$
+can be generated by a single nonzerodivisor, then $D$ is an
+effective Cartier divisor.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\Spec(A)$ be an affine neighbourhood of a point $x \in D$.
+Let $\mathfrak p \subset A$ be the prime corresponding to $x$.
+Let $I \subset A$ be the ideal defining the trace of $D$ on
+$\Spec(A)$. Since $A$ is Noetherian (as $X$ is locally Noetherian)
+the ideal $I$ is generated by finitely many elements, say
+$I = (f_1, \ldots, f_r)$. Under the assumption of (1) we have
+$I_\mathfrak p = (f)$ for some $f \in A_\mathfrak p$.
+Then $f_i = g_i f$ for some $g_i \in A_\mathfrak p$.
+Write $g_i = a_i/h_i$ and $f = f'/h$ for some
+$a_i, h_i, f', h \in A$, $h_i, h \not \in \mathfrak p$.
+Then $I_{h_1 \ldots h_r h} \subset A_{h_1 \ldots h_r h}$ is
+principal, because it is generated by $f'$. This proves (1).
+For (2) we may assume $I = (f)$. The assumption implies
+that the image of $f$ in $A_\mathfrak p$ is a nonzerodivisor.
+Then $f$ is a nonzerodivisor on a neighbourhood of $x$ by
+Algebra, Lemma \ref{algebra-lemma-regular-sequence-in-neighbourhood}.
+This proves (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-effective-Cartier-codimension-1}
+Let $X$ be a locally Noetherian scheme.
+\begin{enumerate}
+\item Let $D \subset X$ be a locally principal closed subscheme.
+Let $\xi \in D$ be a generic point of an irreducible component of $D$.
+Then $\dim(\mathcal{O}_{X, \xi}) \leq 1$.
+\item Let $D \subset X$ be an effective Cartier divisor.
+Let $\xi \in D$ be a generic point of an irreducible component of $D$.
+Then $\dim(\mathcal{O}_{X, \xi}) = 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). By assumption we may assume $X = \Spec(A)$ and
+$D = \Spec(A/(f))$ where $A$ is a Noetherian ring and $f \in A$.
+Let $\xi$ correspond to the prime ideal $\mathfrak p \subset A$.
+The assumption that $\xi$ is a generic point of an irreducible
+component of $D$ signifies $\mathfrak p$ is minimal over $(f)$.
+Thus $\dim(A_\mathfrak p) \leq 1$ by
+Algebra, Lemma \ref{algebra-lemma-minimal-over-1}.
+
+\medskip\noindent
+Proof of (2). By part (1) we see that $\dim(\mathcal{O}_{X, \xi}) \leq 1$.
+On the other hand, the local equation $f$ is a nonzerodivisor in
+$A_\mathfrak p$ by Lemma \ref{lemma-characterize-effective-Cartier-divisor}
+which implies the dimension is at least $1$ (because there must be a
+prime in $A_\mathfrak p$ not containing $f$ by the elementary
+Algebra, Lemma \ref{algebra-lemma-Zariski-topology}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-effective-Cartier-divisor-dvr}
+Let $X$ be a Noetherian scheme. Let $D \subset X$ be an
+integral closed subscheme which is also an
+effective Cartier divisor. Then the local ring of $X$
+at the generic point of $D$ is a discrete valuation ring.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-characterize-effective-Cartier-divisor}
+we may assume $X = \Spec(A)$ and $D = \Spec(A/(f))$
+where $A$ is a Noetherian ring and $f \in A$ is a nonzerodivisor.
+The assumption that $D$ is integral signifies that $(f)$ is prime.
+Hence the local ring of $X$ at the generic point is $A_{(f)}$
+which is a Noetherian local ring whose maximal ideal is generated by
+a nonzerodivisor. Thus it is a discrete valuation ring by
+Algebra, Lemma \ref{algebra-lemma-characterize-dvr}.
+\end{proof}
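
\noindent
Note that the conclusion concerns the local ring of $X$ at the generic
point of $D$, not the local rings of $D$ at its closed points. For
example, let $X = \mathbf{A}^2_k = \Spec(k[x, y])$ over a field $k$
and let $D = V(y^2 - x^3 - x^2)$ be the cubic curve with a singular
point at the origin. Then $D$ is an integral closed subscheme which
is an effective Cartier divisor, and the local ring of $X$ at the
generic point of $D$ is the discrete valuation ring
$k[x, y]_{(y^2 - x^3 - x^2)}$, even though $D$ itself is not regular.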
+
+\begin{lemma}
+\label{lemma-effective-Cartier-divisor-Sk}
+Let $X$ be a locally Noetherian scheme. Let $D \subset X$ be an
+effective Cartier divisor. If $X$ is $(S_k)$, then $D$ is $(S_{k - 1})$.
+\end{lemma}
+
+\begin{proof}
+Let $x \in D$. Then $\mathcal{O}_{D, x} = \mathcal{O}_{X, x}/(f)$ where
+$f \in \mathcal{O}_{X, x}$ is a nonzerodivisor. By assumption we have
+$\text{depth}(\mathcal{O}_{X, x}) \geq \min(\dim(\mathcal{O}_{X, x}), k)$.
+By Algebra, Lemma \ref{algebra-lemma-depth-drops-by-one} we have
+$\text{depth}(\mathcal{O}_{D, x}) = \text{depth}(\mathcal{O}_{X, x}) - 1$
+and by Algebra, Lemma \ref{algebra-lemma-one-equation}
+$\dim(\mathcal{O}_{D, x}) = \dim(\mathcal{O}_{X, x}) - 1$.
+It follows that
+$\text{depth}(\mathcal{O}_{D, x}) \geq \min(\dim(\mathcal{O}_{D, x}), k - 1)$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-normal-effective-Cartier-divisor-S1}
+Let $X$ be a locally Noetherian normal scheme. Let $D \subset X$ be an
+effective Cartier divisor. Then $D$ is $(S_1)$.
+\end{lemma}
+
+\begin{proof}
+By Properties, Lemma \ref{properties-lemma-criterion-normal}
+we see that $X$ is $(S_2)$. Thus we conclude by
+Lemma \ref{lemma-effective-Cartier-divisor-Sk}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-weil-divisor-is-cartier-UFD}
+Let $X$ be a Noetherian scheme. Let $D \subset X$ be an integral
+closed subscheme. Assume that
+\begin{enumerate}
+\item $D$ has codimension $1$ in $X$, and
+\item $\mathcal{O}_{X, x}$ is a UFD for all $x \in D$.
+\end{enumerate}
+Then $D$ is an effective Cartier divisor.
+\end{lemma}
+
+\begin{proof}
+Let $x \in D$ and set $A = \mathcal{O}_{X, x}$. Let $\mathfrak p \subset A$
+correspond to the generic point of $D$. Then $A_\mathfrak p$ has dimension
+$1$ by assumption (1). Thus $\mathfrak p$ is a prime ideal of height $1$.
+Since $A$ is a UFD this implies that $\mathfrak p = (f)$ for some $f \in A$.
+Of course $f$ is a nonzerodivisor and we conclude by
+Lemma \ref{lemma-effective-Cartier-in-points}.
+\end{proof}
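
\noindent
The assumption on the local rings in
Lemma \ref{lemma-weil-divisor-is-cartier-UFD} cannot be omitted,
as the quadric cone shows.

\begin{example}
\label{example-weil-not-cartier-quadric-cone}
Let $k$ be a field, let $A = k[x, y, z]/(xy - z^2)$, and let
$X = \Spec(A)$. The closed subscheme $D = V(x, z)$ is integral of
codimension $1$ in $X$ since $A/(x, z) \cong k[y]$. However, the
$k$-vector space $(x, z)/\mathfrak m(x, z)$, where
$\mathfrak m = (x, y, z)$, has dimension $2$, so by Nakayama the ideal
$(x, z)A_\mathfrak m$ cannot be generated by one element. Hence $D$ is
not even a locally principal closed subscheme, and in particular not
an effective Cartier divisor. Of course the local ring
$A_\mathfrak m$ is not a UFD.
\end{example}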
+
+\begin{lemma}
+\label{lemma-codim-1-part}
+Let $X$ be a Noetherian scheme. Let $Z \subset X$ be a closed subscheme.
+Assume there exist integral effective Cartier divisors $D_i \subset X$
+and a closed subset $Z' \subset X$ of codimension $\geq 2$ such that
+$Z \subset Z' \cup \bigcup D_i$ set-theoretically.
+Then there exists an effective Cartier divisor of the form
+$$
+D = \sum a_i D_i \subset Z
+$$
+such that $D \to Z$ is an isomorphism away from codimension $2$ in $X$.
+The existence of the $D_i$ is guaranteed if $\mathcal{O}_{X, x}$
+is a UFD for all $x \in Z$ or if $X$ is regular.
+\end{lemma}
+
+\begin{proof}
+Let $\xi_i \in D_i$ be the generic point and let
+$\mathcal{O}_i = \mathcal{O}_{X, \xi_i}$ be the local ring
+which is a discrete valuation ring by
+Lemma \ref{lemma-integral-effective-Cartier-divisor-dvr}.
+Let $a_i \geq 0$ be the minimal valuation of an element of
+$\mathcal{I}_{Z, \xi_i} \subset \mathcal{O}_i$.
+We claim that the effective Cartier divisor $D = \sum a_i D_i$ works.
+
+\medskip\noindent
+Namely, suppose that $x \in X$. Let $A = \mathcal{O}_{X, x}$.
+Let $D_1, \ldots, D_n$ be the pairwise distinct divisors
+$D_i$ such that $x \in D_i$.
+For $1 \leq i \leq n$ let $f_i \in A$ be a local equation for $D_i$.
+Then $f_i$ is a prime element of $A$ and $\mathcal{O}_i = A_{(f_i)}$. Let
+$I = \mathcal{I}_{Z, x} \subset A$ be the stalk of the
+ideal sheaf of $Z$. By our choice of $a_i$ we have
+$I A_{(f_i)} = f_i^{a_i}A_{(f_i)}$. We claim that
+$I \subset (\prod_{i = 1, \ldots, n} f_i^{a_i})$.
+
+\medskip\noindent
+Proof of the claim. The localization map
+$\varphi : A/(f_i) \to A_{(f_i)}/f_iA_{(f_i)}$ is injective as
+the prime ideal $(f_i)$ is the inverse image of the maximal ideal
+$f_iA_{(f_i)}$. By induction on $n$ we deduce that
+$\varphi_n : A/(f_i^n)\to A_{(f_i)}/f_i^nA_{(f_i)}$ is also injective.
Since $I A_{(f_i)} = f_i^{a_i}A_{(f_i)}$ the image of $I$ under
$\varphi_{a_i}$ is zero, hence $I \subset (f_i^{a_i})$ for each $i$.
Thus, given $g \in I$, we may write $g = f_1^{a_1}g_1$
for some $g_1 \in A$. Since $D_1, \ldots, D_n$ are pairwise
distinct, $f_i$ is a unit in $A_{(f_j)}$ for $i \not = j$.
Comparing $g$ and $g_1$ in $A_{(f_i)}$
for $n \geq i > 1$, we find $g_1 \in (f_i^{a_i})$.
Repeating the previous process, we inductively write
$g_i = f_{i + 1}^{a_{i + 1}}g_{i + 1}$ for $n > i \geq 1$.
In conclusion, $g \in (\prod_{i = 1, \ldots, n} f_i^{a_i})$
for every $g \in I$ as desired.
+
+\medskip\noindent
+The claim shows that $\mathcal{I}_Z \subset \mathcal{I}_D$, i.e., that
+$D \subset Z$. Moreover, we also see that $D$ and $Z$ agree at the $\xi_i$,
which proves that $D \to Z$ is an isomorphism away from codimension $2$ in $X$.
+
+\medskip\noindent
+To see the final statements we argue as follows. A regular local
+ring is a UFD (More on Algebra, Lemma
+\ref{more-algebra-lemma-regular-local-UFD}) hence it suffices
+to argue in the UFD case. In that case, let
+$D_i$ be the irreducible components of $Z$
+which have codimension $1$ in $X$.
+By Lemma \ref{lemma-weil-divisor-is-cartier-UFD} each $D_i$
+is an effective Cartier divisor.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-codimension-1-is-effective-Cartier}
+Let $Z \subset X$ be a closed subscheme of a Noetherian scheme. Assume
+\begin{enumerate}
+\item $Z$ has no embedded points,
+\item every irreducible component of $Z$ has codimension $1$ in $X$,
+\item every local ring $\mathcal{O}_{X, x}$, $x \in Z$ is
+a UFD or $X$ is regular.
+\end{enumerate}
+Then $Z$ is an effective Cartier divisor.
+\end{lemma}
+
+\begin{proof}
+Let $D = \sum a_i D_i$ be as in Lemma \ref{lemma-codim-1-part}
+where $D_i \subset Z$ are the irreducible components of $Z$.
+If $D \to Z$ is not an isomorphism, then $\mathcal{O}_Z \to \mathcal{O}_D$
+has a nonzero kernel sitting in codimension $\geq 2$. This
+would mean that $Z$ has embedded points, which is forbidden
+by assumption (1). Hence $D \cong Z$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-UFD-one-equation-CM}
+Let $R$ be a Noetherian UFD. Let $I \subset R$ be an ideal
+such that $R/I$ has no embedded primes and such that
+every minimal prime over $I$ has height $1$.
+Then $I = (f)$ for some $f \in R$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-codimension-1-is-effective-Cartier}
+the ideal sheaf $\tilde I$ is invertible on $\Spec(R)$.
+By More on Algebra, Lemma \ref{more-algebra-lemma-UFD-Pic-trivial}
+it is generated by a single element.
+\end{proof}
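
\medskip\noindent
To illustrate the lemma with a hypothetical example (not part of the
statement): take $R = k[x, y]$ and $I = (x^2y)$. The minimal primes
over $I$ are $(x)$ and $(y)$, both of height $1$, and $R/I$ has no
embedded primes, so the lemma applies; indeed $I$ is generated by the
single element $f = x^2y$. By contrast, the ideal $(x, y)$ has a
minimal prime of height $2$ and is not principal.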
+
+\begin{lemma}
+\label{lemma-effective-Cartier-divisor-is-a-sum}
+Let $X$ be a Noetherian scheme. Let $D \subset X$ be an effective
+Cartier divisor. Assume that there exist integral effective Cartier
+divisors $D_i \subset X$ such that $D \subset \bigcup D_i$
+set theoretically. Then $D = \sum a_i D_i$ for some $a_i \geq 0$.
+The existence of the $D_i$ is guaranteed if $\mathcal{O}_{X, x}$
+is a UFD for all $x \in D$ or if $X$ is regular.
+\end{lemma}
+
+\begin{proof}
+Choose $a_i$ as in Lemma \ref{lemma-codim-1-part} and set $D' = \sum a_i D_i$.
+Then $D' \to D$ is an inclusion of effective Cartier divisors which
+is an isomorphism away from codimension $2$ on $X$. Pick $x \in X$.
Set $A = \mathcal{O}_{X, x}$ and let $f, f' \in A$ be nonzerodivisors
generating the ideals of $D$, $D'$ in $A$. Then $f = gf'$ for some $g \in A$.
+Moreover, for every prime $\mathfrak p$ of height $\leq 1$ of $A$ we see
+that $g$ maps to a unit of $A_\mathfrak p$. This implies that $g$ is
+a unit because the minimal primes over $(g)$ have height $1$
+(Algebra, Lemma \ref{algebra-lemma-minimal-over-1}).
+\end{proof}
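
\medskip\noindent
A simple illustrative case: on $X = \mathbf{A}^2_k$ the effective
Cartier divisor $D = V(x^2y)$ is set theoretically contained in
$D_1 \cup D_2$ where $D_1 = V(x)$ and $D_2 = V(y)$ are integral
effective Cartier divisors, and indeed $D = 2D_1 + D_2$ as predicted
by the lemma.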
+
+\begin{lemma}
+\label{lemma-quasi-projective-Noetherian-pic-effective-Cartier}
+\begin{slogan}
+On a projective scheme, every line bundle has a regular meromorphic section.
+\end{slogan}
+Let $X$ be a Noetherian scheme which has an ample invertible sheaf.
+Then every invertible $\mathcal{O}_X$-module is isomorphic to
+$$
+\mathcal{O}_X(D - D') =
+\mathcal{O}_X(D) \otimes_{\mathcal{O}_X} \mathcal{O}_X(D')^{\otimes -1}
+$$
+for some effective Cartier divisors $D, D'$ in $X$. Moreover, given a
+finite subset $E \subset X$ we may choose $D, D'$ such that
+$E \cap D = \emptyset$ and $E \cap D' = \emptyset$. If
+$X$ is quasi-affine, then we may choose $D' = \emptyset$.
+\end{lemma}
+
+\begin{proof}
+Let $x_1, \ldots, x_n$ be the associated points of $X$
+(Lemma \ref{lemma-finite-ass}).
+
+\medskip\noindent
+If $X$ is quasi-affine and $\mathcal{N}$ is any invertible
+$\mathcal{O}_X$-module, then we can pick a section $t$ of
+$\mathcal{N}$ which does not vanish at any of the points
+of $E \cup \{x_1, \ldots, x_n\}$, see Properties, Lemma
+\ref{properties-lemma-quasi-affine-invertible-nonvanishing-section}.
+Then $t$ is a regular section of $\mathcal{N}$ by
+Lemma \ref{lemma-regular-section-associated-points}.
+Hence $\mathcal{N} \cong \mathcal{O}_X(D)$ where
+$D = Z(t)$ is the effective Cartier divisor corresponding to $t$, see
+Lemma \ref{lemma-characterize-OD}. Since $E \cap D = \emptyset$
+by construction we are done in this case.
+
+\medskip\noindent
+Returning to the general case, let $\mathcal{L}$ be an ample invertible sheaf
+on $X$. There exists an $n > 0$ and a section
+$s \in \Gamma(X, \mathcal{L}^{\otimes n})$ such that $X_s$
+is affine and such that $E \cup \{x_1, \ldots, x_n\} \subset X_s$
+(Properties, Lemma \ref{properties-lemma-ample-finite-set-in-principal-affine}).
+
+\medskip\noindent
+Let $\mathcal{N}$ be an arbitrary invertible $\mathcal{O}_X$-module.
+By the quasi-affine case, we can find a section
+$t \in \mathcal{N}(X_s)$ which does not vanish at any point
+of $E \cup \{x_1, \ldots, x_n\}$.
+By Properties, Lemma \ref{properties-lemma-invert-s-sections}
+we see that for some $e \geq 0$ the section $s^e|_{X_s} t$ extends to
+a global section $\tau$ of $\mathcal{L}^{\otimes e} \otimes \mathcal{N}$.
+Thus both $\mathcal{L}^{\otimes e} \otimes \mathcal{N}$ and
+$\mathcal{L}^{\otimes e}$ are invertible sheaves which have global sections
+which do not vanish at any point of $E \cup \{x_1, \ldots, x_n\}$.
+Thus these are regular sections by
+Lemma \ref{lemma-regular-section-associated-points}.
+Hence $\mathcal{L}^{\otimes e} \otimes \mathcal{N} \cong \mathcal{O}_X(D)$
+and $\mathcal{L}^{\otimes e} \cong \mathcal{O}_X(D')$ for some
+effective Cartier divisors $D$ and $D'$, see Lemma \ref{lemma-characterize-OD}.
+By construction $E \cap D = \emptyset$ and $E \cap D' = \emptyset$
+and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-wedge-product-ses}
+Let $X$ be an integral regular scheme of dimension $2$.
+Let $i : D \to X$ be the immersion of an effective Cartier divisor.
+Let $\mathcal{F} \to \mathcal{F}' \to i_*\mathcal{G} \to 0$
+be an exact sequence of coherent $\mathcal{O}_X$-modules.
+Assume
+\begin{enumerate}
+\item $\mathcal{F}, \mathcal{F}'$ are locally free of rank $r$ on a nonempty
+open of $X$,
+\item $D$ is an integral scheme,
+\item $\mathcal{G}$ is a finite locally free $\mathcal{O}_D$-module
+of rank $s$.
+\end{enumerate}
+Then $\mathcal{L} = (\wedge^r\mathcal{F})^{**}$ and
+$\mathcal{L}' = (\wedge^r \mathcal{F}')^{**}$
+are invertible $\mathcal{O}_X$-modules and
+$\mathcal{L}' \cong \mathcal{L}(k D)$ for some
+$k \in \{0, \ldots, \min(s, r)\}$.
+\end{lemma}
+
+\begin{proof}
+The first statement follows from Lemma \ref{lemma-reflexive-over-regular-dim-2}
+as assumption (1) implies that $\mathcal{L}$ and $\mathcal{L}'$
have rank $1$. Since taking $\wedge^r$ and taking double duals are functorial,
+we obtain a canonical map $\sigma : \mathcal{L} \to \mathcal{L}'$
+which is an isomorphism over the nonempty open of (1), hence
+nonzero. To finish the proof, it suffices to see that
+$\sigma$ viewed as a global section of
+$\mathcal{L}' \otimes \mathcal{L}^{\otimes -1}$ does not
vanish at any codimension $1$ point of $X$, except at the generic
+point of $D$ and there with vanishing order at most $\min(s, r)$.
+
+\medskip\noindent
+Translated into algebra, we arrive at the following problem:
+Let $(A, \mathfrak m, \kappa)$ be a discrete valuation ring
+with fraction field $K$. Let $M \to M' \to N \to 0$ be an exact sequence
+of finite $A$-modules with $\dim_K(M \otimes K) = \dim_K(M' \otimes K) = r$
+and with $N \cong \kappa^{\oplus s}$. Show that the induced map
+$L = \wedge^r(M)^{**} \to L' = \wedge^r(M')^{**}$ vanishes to
+order at most $\min(s, r)$. We will use the structure theorem for
+modules over $A$, see
+More on Algebra, Lemma
+\ref{more-algebra-lemma-generalized-valuation-ring-modules} or
+\ref{more-algebra-lemma-modules-PID}.
+Dividing out a finite $A$-module by a torsion submodule does not
+change the double dual.
+Thus we may replace $M$ by $M/M_{tors}$ and $M'$ by
+$M'/\Im(M_{tors} \to M')$ and assume that $M$ is torsion free.
+Then $M \to M'$ is injective and $M'_{tors} \to N$ is injective.
+Hence we may replace $M'$ by $M'/M'_{tors}$ and $N$ by $N/M'_{tors}$.
+Thus we reduce to the case where $M$ and $M'$ are free of rank $r$
+and $N \cong \kappa^{\oplus s}$. In this case $\sigma$
+is the determinant of $M \to M'$ and vanishes to order $s$
+for example by Algebra, Lemma \ref{algebra-lemma-order-vanishing-determinant}.
+\end{proof}
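
\medskip\noindent
To illustrate the algebra problem at the end of the proof with
hypothetical data: let $A$ be a discrete valuation ring with uniformizer
$\pi$, let $M = M' = A^{\oplus 2}$, and let $M \to M'$ be given by the
matrix $\text{diag}(\pi, 1)$. Then the cokernel $N$ is isomorphic to
$\kappa$, so $s = 1$ and $r = 2$, and the determinant $\pi$ vanishes to
order $1 = \min(s, r)$.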
+
+
+
+
+
+
+
+
+
+\section{Complements of affine opens}
+\label{section-complement-affine-open}
+
+\noindent
+In this section we discuss the result that the complement of an
+affine open in a variety has pure codimension $1$.
+
+\begin{lemma}
+\label{lemma-affine-punctured-spec}
+Let $(A, \mathfrak m)$ be a Noetherian local ring.
+The punctured spectrum $U = \Spec(A) \setminus \{\mathfrak m\}$
+of $A$ is affine if and only if $\dim(A) \leq 1$.
+\end{lemma}
+
+\begin{proof}
+If $\dim(A) = 0$, then $U$ is empty hence affine (equal to the spectrum of
+the $0$ ring). If $\dim(A) = 1$, then we can choose an element
+$f \in \mathfrak m$ not contained in any of the finite number of minimal
+primes of $A$
+(Algebra, Lemmas \ref{algebra-lemma-Noetherian-irreducible-components} and
+\ref{algebra-lemma-silly}). Then $U = \Spec(A_f)$
+is affine.
+
+\medskip\noindent
+The converse is more interesting. We will give a somewhat nonstandard proof
+and discuss the standard argument in a remark below.
Assume $U = \Spec(B)$ is affine. Since affineness and dimension are
unaffected by passing to the reduction, we may replace $A$ by the quotient
by its ideal of nilpotent elements and assume $A$ is reduced.
+Set $Q = B/A$ viewed as an $A$-module.
+The support of $Q$ is $\{\mathfrak m\}$ as $A_\mathfrak p = B_\mathfrak p$
+for all nonmaximal primes $\mathfrak p$ of $A$.
+We may assume $\dim(A) \geq 1$, hence as above we can pick
$f \in \mathfrak m$ not contained in any of the minimal primes of $A$.
+Since $A$ is reduced this implies that $f$ is a nonzerodivisor.
+In particular $\dim(A/fA) = \dim(A) - 1$, see
+Algebra, Lemma \ref{algebra-lemma-one-equation}.
+Applying the snake lemma to multiplication by $f$ on the short
+exact sequence $0 \to A \to B \to Q \to 0$ we obtain
+$$
+0 \to Q[f] \to A/fA \to B/fB \to Q/fQ \to 0
+$$
+where $Q[f] = \Ker(f : Q \to Q)$.
+This implies that $Q[f]$ is a finite $A$-module. Since the support of
+$Q[f]$ is $\{\mathfrak m\}$ we see $l = \text{length}_A(Q[f]) < \infty$
+(Algebra, Lemma \ref{algebra-lemma-support-point}).
+Set $l_n = \text{length}_A(Q[f^n])$. The exact sequence
+$$
+0 \to Q[f^n] \to Q[f^{n + 1}] \xrightarrow{f^n} Q[f]
+$$
+shows inductively that $l_n < \infty$ and that $l_n \leq l_{n + 1}$.
+Considering the exact sequence
+$$
+0 \to Q[f] \to Q[f^{n + 1}] \xrightarrow{f} Q[f^n] \to Q/fQ
+$$
we see that the image of $Q[f^n]$ in $Q/fQ$ has length
+$l_n - l_{n + 1} + l \leq l$. Since $Q = \bigcup Q[f^n]$ we
+find that the length of $Q/fQ$ is at most $l$, i.e., bounded.
+Thus $Q/fQ$ is a finite $A$-module. Hence $A/fA \to B/fB$ is a
+finite ring map, in particular induces a closed map on spectra
+(Algebra, Lemmas \ref{algebra-lemma-integral-going-up} and
+\ref{algebra-lemma-going-up-closed}).
+On the other hand $\Spec(B/fB)$ is the punctured spectrum of $\Spec(A/fA)$.
+This is a contradiction unless $\Spec(B/fB) = \emptyset$ which
+means that $\dim(A/fA) = 0$ as desired.
+\end{proof}
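
\medskip\noindent
For example (an illustration, not part of the lemma): if $A = k[[x]]$,
then $U = \{(0)\} = \Spec(k((x)))$ is affine, in accordance with the
lemma. On the other hand, for $A = k[[x, y]]$ the punctured spectrum
is covered by the two affine opens $D(x)$ and $D(y)$ but is not itself
affine, as $\dim(A) = 2$.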
+
+\begin{remark}
+\label{remark-affine-punctured-spectrum-standard-proof}
+If $(A, \mathfrak m)$ is a Noetherian local normal domain of
+dimension $\geq 2$ and $U$
+is the punctured spectrum of $A$, then $\Gamma(U, \mathcal{O}_U) = A$.
+This algebraic version of Hartogs's theorem follows from the fact that
+$A = \bigcap_{\text{height}(\mathfrak p) = 1} A_\mathfrak p$
which we have seen in Algebra, Lemma
+\ref{algebra-lemma-normal-domain-intersection-localizations-height-1}.
+Thus in this case $U$ cannot be affine (since it would force $\mathfrak m$
+to be a point of $U$). This is often used as the starting point of
+the proof of Lemma \ref{lemma-affine-punctured-spec}.
+To reduce the case of a general Noetherian local ring to this case,
+we first complete (to get a Nagata local ring),
+then replace $A$ by $A/\mathfrak q$ for a suitable minimal prime,
+and then normalize. Each of these steps does not change the
+dimension and we obtain a contradiction.
+You can skip the completion step, but then the normalization in
+general is not a Noetherian domain. However, it is still a
+Krull domain of the same dimension (this is proved using
+Krull-Akizuki) and one can apply the same argument.
+\end{remark}
+
+\begin{remark}
+\label{remark-affine-puctured-spectrum-general}
+It is not clear how to characterize the non-Noetherian local
+rings $(A, \mathfrak m)$ whose punctured spectrum is affine.
+Such a ring has a finitely generated ideal $I$ with
+$\mathfrak m = \sqrt{I}$. Of course if we can take $I$
generated by $1$ element, then $A$ has an affine punctured
spectrum; this gives lots of non-Noetherian examples.
+Conversely, it follows from the argument in the proof of
+Lemma \ref{lemma-affine-punctured-spec}
+that such a ring cannot possess a nonzerodivisor $f \in \mathfrak m$
+with $H^0_I(A/fA) = 0$ (so $A$ cannot have a regular sequence
+of length $2$). Moreover, the same holds for any ring $A'$ which is
+the target of a local homomorphism of local rings $A \to A'$ such that
+$\mathfrak m_{A'} = \sqrt{\mathfrak mA'}$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-complement-affine-open-immersion}
+\begin{reference}
+\cite[EGA IV, Corollaire 21.12.7]{EGA4}
+\end{reference}
+Let $X$ be a locally Noetherian scheme. Let $U \subset X$ be an open subscheme
+such that the inclusion morphism $U \to X$ is affine.
+For every generic point $\xi$ of an irreducible component of
+$X \setminus U$ the local ring $\mathcal{O}_{X, \xi}$
+has dimension $\leq 1$. If $U$ is dense or if $\xi$ is in the closure
+of $U$, then $\dim(\mathcal{O}_{X, \xi}) = 1$.
+\end{lemma}
+
+\begin{proof}
+Since $\xi$ is a generic point of $X \setminus U$, we see that
+$$
+U_\xi = U \times_X \Spec(\mathcal{O}_{X, \xi}) \subset
+\Spec(\mathcal{O}_{X, \xi})
+$$
+is the punctured spectrum of $\mathcal{O}_{X, \xi}$ (hint: use
+Schemes, Lemma \ref{schemes-lemma-specialize-points}).
+As $U \to X$ is affine, we see that $U_\xi \to \Spec(\mathcal{O}_{X, \xi})$
+is affine (Morphisms, Lemma \ref{morphisms-lemma-base-change-affine})
+and we conclude that $U_\xi$ is affine.
+Hence $\dim(\mathcal{O}_{X, \xi}) \leq 1$ by
+Lemma \ref{lemma-affine-punctured-spec}.
+If $\xi \in \overline{U}$, then there is a specialization
+$\eta \to \xi$ where $\eta \in U$ (just take $\eta$ a generic
+point of an irreducible component of $\overline{U}$ which
+contains $\xi$; since $\overline{U}$ is locally Noetherian,
+hence locally has finitely many irreducible components, we see that
+$\eta \in U$). Then $\eta \in \Spec(\mathcal{O}_{X, \xi})$ and
+we see that the dimension cannot be $0$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-complement-affine-open}
+Let $X$ be a separated locally Noetherian scheme. Let $U \subset X$ be an
+affine open. For every generic point $\xi$ of an irreducible component of
+$X \setminus U$ the local ring $\mathcal{O}_{X, \xi}$
+has dimension $\leq 1$. If $U$ is dense or if $\xi$ is in the closure
+of $U$, then $\dim(\mathcal{O}_{X, \xi}) = 1$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-complement-affine-open-immersion}
+because the morphism $U \to X$ is affine by
+Morphisms, Lemma \ref{morphisms-lemma-affine-permanence}.
+\end{proof}
+
+\noindent
+The following lemma can sometimes be used to produce effective
+Cartier divisors.
+
+\begin{lemma}
+\label{lemma-complement-open-affine-effective-cartier-divisor}
+Let $X$ be a Noetherian separated scheme. Let $U \subset X$ be
+a dense affine open. If $\mathcal{O}_{X, x}$ is a UFD for all
+$x \in X \setminus U$, then there exists an effective Cartier
+divisor $D \subset X$ with $U = X \setminus D$.
+\end{lemma}
+
+\begin{proof}
+Since $X$ is Noetherian, the complement $X \setminus U$ has finitely
+many irreducible components $D_1, \ldots, D_r$
+(Properties, Lemma \ref{properties-lemma-Noetherian-irreducible-components}
+applied to the reduced induced subscheme structure on $X \setminus U$).
+Each $D_i \subset X$ has codimension $1$ by
+Lemma \ref{lemma-complement-affine-open}
+(and Properties, Lemma \ref{properties-lemma-codimension-local-ring}).
+Thus $D_i$ is an effective Cartier divisor by
+Lemma \ref{lemma-weil-divisor-is-cartier-UFD}.
+Hence we can take $D = D_1 + \ldots + D_r$.
+\end{proof}
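
\medskip\noindent
A typical example (for illustration): take $X = \mathbf{P}^n_k$ and
$U = D_+(T_0)$ the standard dense affine open. The local rings of $X$
are regular, hence UFDs, and the effective Cartier divisor produced by
the lemma is the hyperplane $D = V_+(T_0)$ with $U = X \setminus D$.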
+
+\begin{lemma}
+\label{lemma-complement-open-affine-effective-cartier-divisor-bis}
+Let $X$ be a Noetherian scheme with affine diagonal. Let $U \subset X$ be
+a dense affine open. If $\mathcal{O}_{X, x}$ is a UFD for all
+$x \in X \setminus U$, then there exists an effective Cartier
+divisor $D \subset X$ with $U = X \setminus D$.
+\end{lemma}
+
+\begin{proof}
+Since $X$ is Noetherian, the complement $X \setminus U$ has finitely
+many irreducible components $D_1, \ldots, D_r$
+(Properties, Lemma \ref{properties-lemma-Noetherian-irreducible-components}
+applied to the reduced induced subscheme structure on $X \setminus U$).
+We view $D_i$ as a reduced closed subscheme of $X$.
+Let $X = \bigcup_{j \in J} X_j$ be an affine open covering of $X$. For all
+$j$ in $J$, set $U_j = U \cap X_j$. Since $X$ has affine diagonal,
+the scheme
+$$
+U_j = X \times_{(X \times X)} (U \times X_j)
+$$
+is affine. Therefore, as $X_j$ is separated, it follows from
+Lemma \ref{lemma-complement-open-affine-effective-cartier-divisor}
+and its proof that for all $j \in J$ and $1 \leq i \leq r$ the
+intersection $D_i \cap X_j$ is either empty or an
+effective Cartier divisor in $X_j$.
+Thus $D_i \subset X$ is an effective Cartier divisor (as this is
+a local property). Hence we can take $D = D_1 + \ldots + D_r$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-ample-family}
+Let $X$ be a quasi-compact, regular scheme with affine diagonal.
+Then $X$ has an ample family of invertible modules
+(Morphisms, Definition
\ref{morphisms-definition-family-ample-invertible-modules}).
+\end{lemma}
+
+\begin{proof}
+Observe that $X$ is a finite disjoint union of integral schemes
+(Properties, Lemmas
+\ref{properties-lemma-regular-normal} and
+\ref{properties-lemma-normal-Noetherian}).
+Thus we may assume that $X$ is integral as well as Noetherian,
+regular, and having affine diagonal. Let $x \in X$.
+Choose an affine open neighbourhood $U \subset X$ of $x$.
+Since $X$ is integral, $U$ is dense in $X$.
+By More on Algebra, Lemma \ref{more-algebra-lemma-regular-local-UFD}
+the local rings of $X$ are UFDs. Hence by Lemma
+\ref{lemma-complement-open-affine-effective-cartier-divisor-bis}
+we can find an effective Cartier divisor $D \subset X$
+whose complement is $U$. Then the canonical section
+$s = 1_D$ of $\mathcal{L} = \mathcal{O}_X(D)$,
+see Definition \ref{definition-invertible-sheaf-effective-Cartier-divisor},
+vanishes exactly along $D$ hence $U = X_s$.
+Thus both conditions in Morphisms, Definition
+\ref{morphisms-definition-family-ample-invertible-modules}
+hold and we are done.
+\end{proof}
+
+
+
+
+\section{Norms}
+\label{section-norms}
+
+\noindent
+Let $\pi : X \to Y$ be a finite morphism of schemes and let $d \geq 1$
+be an integer. Let us say there exists a
+{\it norm of degree $d$ for $\pi$}\footnote{This is nonstandard
+notation.} if there exists a multiplicative map
+$$
+\text{Norm}_\pi : \pi_*\mathcal{O}_X \to \mathcal{O}_Y
+$$
+of sheaves such that
+\begin{enumerate}
+\item the composition
+$\mathcal{O}_Y \xrightarrow{\pi^\sharp} \pi_*\mathcal{O}_X
+\xrightarrow{\text{Norm}_\pi} \mathcal{O}_Y$ equals $g \mapsto g^d$, and
+\item for $V \subset Y$ open if $f \in \mathcal{O}_X(\pi^{-1}V)$
+is zero at $x \in \pi^{-1}(V)$, then $\text{Norm}_\pi(f)$
+is zero at $\pi(x)$.
+\end{enumerate}
+We observe that condition (1) forces $\pi$ to be surjective.
+Since $\text{Norm}_\pi$ is multiplicative it sends units to units
hence, given $y \in Y$, if $f$ is a regular function on $X$
which is defined and nonvanishing at every $x \in X$
with $\pi(x) = y$, then $\text{Norm}_\pi(f)$ is defined
and does not vanish at $y$. This holds without requiring (2);
in fact, the constructions in this section mostly use only condition (1),
and property (2) is needed only for certain vanishing statements
(in particular in the proof of Lemma \ref{lemma-norm-ample}).
+
+\begin{lemma}
+\label{lemma-finite-trivialize-invertible-upstairs}
+Let $\pi : X \to Y$ be a finite morphism of schemes.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $y \in Y$. There exists an open neighbourhood
+$V \subset Y$ of $y$ such that $\mathcal{L}|_{\pi^{-1}(V)}$ is trivial.
+\end{lemma}
+
+\begin{proof}
+Clearly we may assume $Y$ and hence $X$ affine. Since $\pi$ is finite the
+fibre $\pi^{-1}(\{y\})$ over $y$ is finite.
+Since $X$ is affine, we can pick $s \in \Gamma(X, \mathcal{L})$
+not vanishing in any point of $\pi^{-1}(\{y\})$. This follows
+from Properties, Lemma
+\ref{properties-lemma-quasi-affine-invertible-nonvanishing-section}
+but we also give a direct argument. Namely, we can
+pick a finite set $E \subset X$ of closed points such that
+every $x \in \pi^{-1}(\{y\})$ specializes to some point of $E$.
+For $x \in E$ denote $i_x : x \to X$ the closed immersion.
+Then
+$\mathcal{L} \to \bigoplus_{x \in E} i_{x, *}i_x^*\mathcal{L}$
+is a surjective map of quasi-coherent $\mathcal{O}_X$-modules,
+and hence the map
+$$
+\Gamma(X, \mathcal{L}) \to
+\bigoplus\nolimits_{x \in E} \mathcal{L}_x/\mathfrak m_x\mathcal{L}_x
+$$
+is surjective (as taking global sections is an exact functor on the
+category of quasi-coherent $\mathcal{O}_X$-modules, see
+Schemes, Lemma \ref{schemes-lemma-equivalence-quasi-coherent}).
+Thus we can find an $s \in \Gamma(X, \mathcal{L})$
+not vanishing at any point specializing to a point of $E$.
+Then $X_s \subset X$ is an open neighbourhood of $\pi^{-1}(\{y\})$.
+Since $\pi$ is finite, hence closed, we conclude that there is an
+open neighbourhood $V \subset Y$ of $y$ whose inverse image
+is contained in $X_s$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-norm-invertible}
+Let $\pi : X \to Y$ be a finite morphism of schemes. If there exists
+a norm of degree $d$ for $\pi$, then there exists a homomorphism of
+abelian groups
+$$
+\text{Norm}_\pi : \Pic(X) \to \Pic(Y)
+$$
+such that $\text{Norm}_\pi(\pi^*\mathcal{N}) \cong \mathcal{N}^{\otimes d}$
+for all invertible $\mathcal{O}_Y$-modules $\mathcal{N}$.
+\end{lemma}
+
+\begin{proof}
+We will use the correspondence between isomorphism classes of
+invertible $\mathcal{O}_X$-modules and elements of
+$H^1(X, \mathcal{O}_X^*)$ given in
+Cohomology, Lemma \ref{cohomology-lemma-h1-invertible}
+without further mention. We explain how to take the norm of an invertible
+$\mathcal{O}_X$-module $\mathcal{L}$. Namely, by
+Lemma \ref{lemma-finite-trivialize-invertible-upstairs}
+there exists an open covering $Y = \bigcup V_j$ such that
+$\mathcal{L}|_{\pi^{-1}V_j}$ is trivial. Choose a generating section
+$s_j \in \mathcal{L}(\pi^{-1}V_j)$ for each $j$.
+On the overlaps $\pi^{-1}V_j \cap \pi^{-1}V_{j'}$ we can write
+$$
+s_j = u_{jj'} s_{j'}
+$$
+for a unique $u_{jj'} \in \mathcal{O}^*_X(\pi^{-1}V_j \cap \pi^{-1}V_{j'})$.
+Thus we can consider the elements
+$$
+v_{jj'} = \text{Norm}_\pi(u_{jj'}) \in \mathcal{O}_Y^*(V_j \cap V_{j'})
+$$
+These elements satisfy the cocycle condition (because the
+$u_{jj'}$ do and $\text{Norm}_\pi$ is multiplicative) and
+therefore define an invertible $\mathcal{O}_Y$-module.
We omit the verification that this construction is well defined,
additive on Picard groups, and satisfies the property
+$\text{Norm}_\pi(\pi^*\mathcal{N}) \cong \mathcal{N}^{\otimes d}$
+for all invertible $\mathcal{O}_Y$-modules $\mathcal{N}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-norm-map-invertible}
+Let $\pi : X \to Y$ be a finite morphism of schemes. Assume there exists
+a norm of degree $d$ for $\pi$. For any $\mathcal{O}_X$-linear map
+$\varphi : \mathcal{L} \to \mathcal{L}'$
+of invertible $\mathcal{O}_X$-modules there is an $\mathcal{O}_Y$-linear
+map
+$$
+\text{Norm}_\pi(\varphi) :
+\text{Norm}_\pi(\mathcal{L})
+\longrightarrow
+\text{Norm}_\pi(\mathcal{L}')
+$$
+with $\text{Norm}_\pi(\mathcal{L})$, $\text{Norm}_\pi(\mathcal{L}')$
+as in Lemma \ref{lemma-norm-invertible}. Moreover, for
+$y \in Y$ the following are equivalent
+\begin{enumerate}
+\item $\varphi$ is zero at a point of $x \in X$ with $\pi(x) = y$, and
+\item $\text{Norm}_\pi(\varphi)$ is zero at $y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We choose an open covering $Y = \bigcup V_j$ such that
+$\mathcal{L}$ and $\mathcal{L}'$ are trivial over the opens $\pi^{-1}V_j$.
+This is possible by
+Lemma \ref{lemma-finite-trivialize-invertible-upstairs}.
+Choose generating sections
+$s_j$ and $s'_j$ of $\mathcal{L}$ and $\mathcal{L}'$
+over the opens $\pi^{-1}V_j$. Then $\varphi(s_j) = f_js'_j$
+for some $f_j \in \mathcal{O}_X(\pi^{-1}V_j)$.
+Define $\text{Norm}_\pi(\varphi)$ to be multiplication
by $\text{Norm}_\pi(f_j)$ on $V_j$. A simple
+calculation involving the cocycles used to construct
+$\text{Norm}_\pi(\mathcal{L})$, $\text{Norm}_\pi(\mathcal{L}')$
+in the proof of Lemma \ref{lemma-norm-invertible}
+shows that this defines
+a map as stated in the lemma. The final statement follows
from condition (2) in the definition of a norm of degree $d$.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-norm-ample}
+Let $\pi : X \to Y$ be a finite morphism of schemes. Assume $X$ has
+an ample invertible sheaf and there exists a norm of degree $d$
+for $\pi$. Then $Y$ has an ample invertible sheaf.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{L}$ be the ample invertible sheaf on $X$ given to us
+by assumption. We will prove that $\mathcal{N} = \text{Norm}_\pi(\mathcal{L})$
+is ample on $Y$.
+
+\medskip\noindent
+Since $X$ is quasi-compact (Properties, Definition
+\ref{properties-definition-ample}) and $X \to Y$ surjective
+(by the existence of $\text{Norm}_\pi$)
+we see that $Y$ is quasi-compact.
+Let $y \in Y$ be a point. To finish the proof
+we will show that there exists a section $t$ of some positive tensor
+power of $\mathcal{N}$ which does not vanish at $y$ such that $Y_t$
+is affine. To do this, choose an affine open neighbourhood $V \subset Y$
+of $y$. Choose $n \gg 0$ and a section
+$s \in \Gamma(X, \mathcal{L}^{\otimes n})$
+such that
+$$
+\pi^{-1}(\{y\}) \subset X_s \subset \pi^{-1}V
+$$
+by
+Properties, Lemma \ref{properties-lemma-ample-finite-set-in-principal-affine}.
+Then $t = \text{Norm}_\pi(s)$ is a section of $\mathcal{N}^{\otimes n}$
which does not vanish at $y$ and with $Y_t \subset V$, see
+Lemma \ref{lemma-norm-map-invertible}. Then $Y_t$
+is affine by Properties, Lemma \ref{properties-lemma-affine-cap-s-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-norm-quasi-affine}
+Let $\pi : X \to Y$ be a finite morphism of schemes. Assume $X$ is quasi-affine
+and there exists a norm of degree $d$ for $\pi$. Then $Y$ is quasi-affine.
+\end{lemma}
+
+\begin{proof}
+By Properties, Lemma \ref{properties-lemma-quasi-affine-O-ample}
+we see that $\mathcal{O}_X$ is an ample invertible sheaf on $X$.
+The proof of Lemma \ref{lemma-norm-ample} shows that
+$\text{Norm}_\pi(\mathcal{O}_X) = \mathcal{O}_Y$
+is an ample invertible $\mathcal{O}_Y$-module. Hence
+Properties, Lemma \ref{properties-lemma-quasi-affine-O-ample}
+shows that $Y$ is quasi-affine.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-locally-free-has-norm}
+Let $\pi : X \to Y$ be a finite locally free morphism of degree $d \geq 1$.
+Then there exists a canonical norm of degree $d$ whose formation commutes
+with arbitrary base change.
+\end{lemma}
+
+\begin{proof}
+Let $V \subset Y$ be an affine open such that $(\pi_*\mathcal{O}_X)|_V$
+is finite free of rank $d$. Choosing a basis we obtain an isomorphism
+$$
+\mathcal{O}_V^{\oplus d} \cong (\pi_*\mathcal{O}_X)|_V
+$$
+For every $f \in \pi_*\mathcal{O}_X(V) = \mathcal{O}_X(\pi^{-1}(V))$
multiplication by $f$ defines an $\mathcal{O}_V$-linear endomorphism
$m_f$ of the displayed free module. Thus we get a $d \times d$
+matrix $M_f \in \text{Mat}(d \times d, \mathcal{O}_Y(V))$ and we can set
+$$
+\text{Norm}_\pi(f) = \det(M_f)
+$$
+Since the determinant of a matrix is independent of the choice of
+the basis chosen we see that this is well defined which also means
+that this construction will glue to a global map as desired.
+Compatibility with base change is straightforward from the construction.
+
+\medskip\noindent
+Property (1) follows from the fact that the determinant of a
+$d \times d$ diagonal matrix with entries $g, g, \ldots, g$ is $g^d$.
+To see property (2) we may base change and assume that $Y$ is the
+spectrum of a field $k$. Then $X = \Spec(A)$ with $A$ a $k$-algebra
+with $\dim_k(A) = d$. If there exists an $x \in X$ such that
+$f \in A$ vanishes at $x$, then there exists a map $A \to \kappa$
+into a field such that $f$ maps to zero in $\kappa$. Then
+$f : A \to A$ cannot be surjective, hence $\det(f : A \to A) = 0$
+as desired.
+\end{proof}
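
\medskip\noindent
As a concrete illustration (with hypothetical data): suppose $\pi$
corresponds to the ring map $B \to A = B[t]/(t^2 - a)$, which is finite
free of degree $2$ with basis $1, t$. For $f = x + yt$ with
$x, y \in B$ multiplication by $f$ has matrix
$$
M_f =
\begin{pmatrix}
x & ay \\
y & x
\end{pmatrix}
$$
with respect to this basis, hence
$\text{Norm}_\pi(f) = \det(M_f) = x^2 - ay^2$. In particular, for
$g \in B$ we get $\text{Norm}_\pi(g) = g^2$ as in condition (1).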
+
+\begin{lemma}
+\label{lemma-norm-in-normal-case}
+Let $\pi : X \to Y$ be a finite surjective morphism with $X$ and $Y$
+integral and $Y$ normal. Then there exists a norm of degree
+$[R(X) : R(Y)]$ for $\pi$.
+\end{lemma}
+
+\begin{proof}
+Let $\Spec(B) \subset Y$ be an affine open subset and let
+$\Spec(A) \subset X$ be its inverse image. Then $A$ and $B$
+are domains. Let $K$ be the fraction
+field of $A$ and $L$ the fraction field of $B$. Picture:
+$$
+\xymatrix{
+L \ar[r] & K \\
+B \ar[u] \ar[r] & A \ar[u]
+}
+$$
+Since $K/L$ is a finite extension, there is a norm map
+$\text{Norm}_{K/L} : K^* \to L^*$ of degree $d = [K : L]$; this is given by
+mapping $f \in K$ to $\det_L(f : K \to K)$ as in the proof
+of Lemma \ref{lemma-finite-locally-free-has-norm}.
+Observe that the characteristic polynomial of $f : K \to K$
+is a power of the minimal polynomial of $f$ over $L$;
in particular $\text{Norm}_{K/L}(f)$ is, up to sign, a power of the
constant coefficient of the minimal polynomial of $f$ over $L$. Hence by
+Algebra, Lemma \ref{algebra-lemma-minimal-polynomial-normal-domain}
+$\text{Norm}_{K/L}$ maps $A$ into $B$.
+This determines a compatible system of maps
+on sections over affines and hence a global norm map
+$\text{Norm}_\pi$ of degree $d$.
+
+\medskip\noindent
+Property (1) is immediate from the construction.
+To see property (2) let $f \in A$ be contained in the
+prime ideal $\mathfrak p \subset A$. Let
+$f^m + b_1 f^{m - 1} + \ldots + b_m$ be the minimal
+polynomial of $f$ over $L$. By
+Algebra, Lemma \ref{algebra-lemma-minimal-polynomial-normal-domain}
we have $b_i \in B$. Hence $b_m \in B \cap \mathfrak p$.
Since $\text{Norm}_{K/L}(f) = \pm b_m^{d/m}$ (see above)
+we conclude that the norm vanishes in the image point of $\mathfrak p$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Frobenius-gives-norm-for-reduction}
+Let $X$ be a Noetherian scheme. Let $p$ be a prime number such that
+$p\mathcal{O}_X = 0$. Then for some $e > 0$ there exists a norm
+of degree $p^e$ for $X_{red} \to X$ where $X_{red}$ is the reduction
+of $X$.
+\end{lemma}
+
+\begin{proof}
+Let $A$ be a Noetherian ring with $pA = 0$. Let $I \subset A$ be the
+ideal of nilpotent elements. Then $I^n = 0$ for some $n$ (Algebra,
+Lemma \ref{algebra-lemma-Noetherian-power}).
+Pick $e$ such that $p^e \geq n$. Then
+$$
+A/I \longrightarrow A,\quad
+f \bmod I \longmapsto f^{p^e}
+$$
+is well defined. This produces a norm of degree $p^e$ for
+$\Spec(A/I) \to \Spec(A)$. Now if $X$ is obtained by glueing some
+affine schemes $\Spec(A_i)$ then for some $e \gg 0$ these maps
+glue to a norm map for $X_{red} \to X$. Details omitted.
+\end{proof}
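
\medskip\noindent
A minimal illustration (hypothetical data): take
$A = \mathbf{F}_2[x]/(x^2)$, so $p = 2$, $I = (x)$, and we can take
$e = 1$. The map $f \bmod I \mapsto f^2$ is well defined since
$(f + gx)^2 = f^2 + g^2x^2 = f^2$ in $A$, and the composition
$A \to A/I \to A$ sends $g = a + bx$ to $a^2 = g^2$, verifying
condition (1) for this norm of degree $2$.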
+
+\begin{proposition}
+\label{proposition-push-down-ample}
+Let $\pi : X \to Y$ be a finite surjective morphism of schemes.
+Assume that $X$ has an ample invertible $\mathcal{O}_X$-module. If
+\begin{enumerate}
+\item $\pi$ is finite locally free, or
+\item $Y$ is an integral normal scheme, or
+\item $Y$ is Noetherian, $p\mathcal{O}_Y = 0$, and $X = Y_{red}$,
+\end{enumerate}
+then $Y$ has an ample invertible $\mathcal{O}_Y$-module.
+\end{proposition}
+
+\begin{proof}
+Case (1) follows from a combination of
+Lemmas \ref{lemma-finite-locally-free-has-norm} and \ref{lemma-norm-ample}.
+Case (3) follows from a combination of
+Lemmas \ref{lemma-Frobenius-gives-norm-for-reduction} and
+\ref{lemma-norm-ample}.
+In case (2) we first replace $X$ by an irreducible component of $X$
+which dominates $Y$ (viewed as a reduced closed subscheme of $X$).
+Then we can apply Lemma \ref{lemma-norm-in-normal-case}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-push-down-quasi-affine}
+Let $\pi : X \to Y$ be a finite surjective morphism of schemes.
+Assume that $X$ is quasi-affine. If either
+\begin{enumerate}
+\item $\pi$ is finite locally free, or
+\item $Y$ is an integral normal scheme
+\end{enumerate}
+then $Y$ is quasi-affine.
+\end{lemma}
+
+\begin{proof}
+Case (1) follows from a combination of Lemmas
+\ref{lemma-finite-locally-free-has-norm} and \ref{lemma-norm-quasi-affine}.
+In case (2) we first replace $X$ by an irreducible component of $X$
+which dominates $Y$ (viewed as a reduced closed subscheme of $X$).
+Then we can apply Lemma \ref{lemma-norm-in-normal-case}.
+\end{proof}
+
+
+
+
+
+\section{Relative effective Cartier divisors}
+\label{section-effective-Cartier-morphisms}
+
+\noindent
+The following lemma shows that an effective Cartier divisor which is
+flat over the base is really a ``family of effective Cartier divisors''
+over the base. For example the restriction to any fibre is an effective
+Cartier divisor.
+
+\begin{lemma}
+\label{lemma-relative-Cartier}
+Let $f : X \to S$ be a morphism of schemes.
+Let $D \subset X$ be a closed subscheme.
+Assume
+\begin{enumerate}
+\item $D$ is an effective Cartier divisor, and
+\item $D \to S$ is a flat morphism.
+\end{enumerate}
+Then for every morphism of schemes $g : S' \to S$ the pullback
+$(g')^{-1}D$ is an effective Cartier divisor on $X' = S' \times_S X$
+where $g' : X' \to X$ is the projection.
+\end{lemma}
+
+\begin{proof}
+Using
+Lemma \ref{lemma-characterize-effective-Cartier-divisor}
+we translate this as follows into algebra. Let $A \to B$ be a ring
+map and $h \in B$. Assume $h$ is a nonzerodivisor and that $B/hB$ is flat
+over $A$. Then
+$$
+0 \to B \xrightarrow{h} B \to B/hB \to 0
+$$
+is a short exact sequence of $A$-modules with $B/hB$ flat over $A$. By
+Algebra, Lemma \ref{algebra-lemma-flat-tor-zero}
+this sequence remains exact on tensoring over $A$ with any module, in
+particular with any $A$-algebra $A'$.
+\end{proof}
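+
+\noindent
+Concretely, writing $B' = B \otimes_A A'$, tensoring the displayed
+sequence with $A'$ gives the exact sequence
+$$
+0 \to B' \xrightarrow{h \otimes 1} B' \to B'/hB' \to 0
+$$
+Thus the image of $h$ in $B'$ is a nonzerodivisor and
+$B'/hB' = (B/hB) \otimes_A A'$ is flat over $A'$, as flatness is
+preserved by arbitrary base change. In particular $(g')^{-1}D$
+is even a relative effective Cartier divisor on $X'/S'$.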
+
+\noindent
+This lemma is the motivation for the following definition.
+
+\begin{definition}
+\label{definition-relative-effective-Cartier-divisor}
+Let $f : X \to S$ be a morphism of schemes.
+A {\it relative effective Cartier divisor} on $X/S$ is an
+effective Cartier divisor $D \subset X$ such that $D \to S$
+is a flat morphism of schemes.
+\end{definition}
+
+\noindent
+We warn the reader that this may be nonstandard notation.
+In particular, in \cite[IV, Section 21.15]{EGA} the notion of a
+relative divisor is discussed only when $X \to S$ is flat and
+locally of finite presentation. Our definition is a bit more general.
+However, it turns out that if $x \in D$ then $X \to S$ is
+flat at $x$ in many cases (but not always).
+
+\begin{lemma}
+\label{lemma-sum-relative-effective-Cartier-divisor}
+Let $f : X \to S$ be a morphism of schemes. If $D_1, D_2 \subset X$
+are relative effective Cartier divisors on $X/S$, then so
+is $D_1 + D_2$ (Definition \ref{definition-sum-effective-Cartier-divisors}).
+\end{lemma}
+
+\begin{proof}
+This translates into the following algebra fact:
+Let $A \to B$ be a ring map and $h_1, h_2 \in B$.
+Assume the $h_i$ are nonzerodivisors and that $B/h_iB$ is flat over $A$.
+Then $h_1h_2$ is a nonzerodivisor and $B/h_1h_2B$ is flat over $A$.
+The reason is that we have a short exact sequence
+$$
+0 \to B/h_1B \to B/h_1h_2B \to B/h_2B \to 0
+$$
+where the first arrow is given by multiplication by $h_2$. Since
+the outer two are flat modules over $A$, so is the middle one, see
+Algebra, Lemma \ref{algebra-lemma-flat-ses}.
+\end{proof}
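+
+\noindent
+For instance, with $B = A[x]$ and $h_1 = h_2 = x$ the displayed
+sequence reads
+$$
+0 \to A[x]/(x) \xrightarrow{x} A[x]/(x^2) \to A[x]/(x) \to 0
+$$
+and all three terms are finite free, hence flat, $A$-modules; here
+$D_1 + D_2$ is cut out by $x^2$.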
+
+\begin{lemma}
+\label{lemma-difference-relative-effective-Cartier-divisor}
+Let $f : X \to S$ be a morphism of schemes. If $D_1, D_2 \subset X$
+are relative effective Cartier divisors on $X/S$ and $D_1 \subset D_2$
+as closed subschemes, then the effective Cartier divisor $D$
+such that $D_2 = D_1 + D$
+(Lemma \ref{lemma-difference-effective-Cartier-divisors}) is
+a relative effective Cartier divisor on $X/S$.
+\end{lemma}
+
+\begin{proof}
+This translates into the following algebra fact:
+Let $A \to B$ be a ring map and $h_1, h_2 \in B$.
+Assume the $h_i$ are nonzerodivisors, that $B/h_iB$ is flat over $A$, and
+that $(h_2) \subset (h_1)$. Then we can write $h_2 = h h_1$
+where $h \in B$ is a nonzerodivisor. We get a short exact sequence
+$$
+0 \to B/hB \to B/h_2B \to B/h_1B \to 0
+$$
+where the first arrow is given by multiplication by $h_1$. Since
+the right two are flat modules over $A$, so is the middle one, see
+Algebra, Lemma \ref{algebra-lemma-flat-ses}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-at-x}
+Let $f : X \to S$ be a morphism of schemes.
+Let $D \subset X$ be a relative effective Cartier divisor on $X/S$.
+If $x \in D$ and $\mathcal{O}_{X, x}$ is Noetherian, then $f$ is flat at $x$.
+\end{lemma}
+
+\begin{proof}
+Set $A = \mathcal{O}_{S, f(x)}$ and $B = \mathcal{O}_{X, x}$.
+Let $h \in B$ be an element which generates the ideal of $D$.
+Then $h$ is a nonzerodivisor in $B$ such that $B/hB$ is a flat
+local $A$-algebra. Let $I \subset A$ be a finitely generated ideal.
+Consider the commutative diagram
+$$
+\xymatrix{
+0 \ar[r] &
+B \ar[r]_h &
+B \ar[r] &
+B/hB \ar[r] & 0 \\
+0 \ar[r] &
+B \otimes_A I \ar[r]^h \ar[u] &
+B \otimes_A I \ar[r] \ar[u] &
+B/hB \otimes_A I \ar[r] \ar[u] & 0
+}
+$$
+The lower sequence is short exact as $B/hB$ is flat over $A$, see
+Algebra, Lemma \ref{algebra-lemma-flat-tor-zero}.
+The right vertical arrow is injective as $B/hB$ is flat over $A$, see
+Algebra, Lemma \ref{algebra-lemma-flat}.
+Hence multiplication by $h$ is surjective on the kernel $K$ of
+the middle vertical arrow. By Nakayama's lemma, see
+Algebra, Lemma \ref{algebra-lemma-NAK}
+we conclude that $K= 0$. Hence $B$ is flat over $A$, see
+Algebra, Lemma \ref{algebra-lemma-flat}.
+\end{proof}
+
+\noindent
+The following lemma relies on the algebraic version of
+openness of the flat locus. The scheme theoretic version can be found in
+More on Morphisms, Section \ref{more-morphisms-section-open-flat}.
+
+\begin{lemma}
+\label{lemma-flat-relative-Cartier-divisor}
+Let $f : X \to S$ be a morphism of schemes.
+Let $D \subset X$ be a relative effective Cartier divisor.
+If $f$ is locally of finite presentation, then there exists
+an open subscheme $U \subset X$ such that $D \subset U$ and
+such that $f|_U : U \to S$ is flat.
+\end{lemma}
+
+\begin{proof}
+Pick $x \in D$. It suffices to find an open neighbourhood $U \subset X$
+of $x$ such that $f|_U$ is flat. Hence the lemma reduces to the case
+that $X = \Spec(B)$ and $S = \Spec(A)$ are affine
+and that $D$ is given by a nonzerodivisor $h \in B$. By assumption
+$B$ is a finitely presented $A$-algebra and $B/hB$ is a flat
+$A$-algebra. We are going to use absolute Noetherian approximation.
+
+\medskip\noindent
+Write $B = A[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$. Assume
+$h$ is the image of $h' \in A[x_1, \ldots, x_n]$. Choose a finite type
+$\mathbf{Z}$-subalgebra $A_0 \subset A$ such that all the coefficients
+of the polynomials $h', g_1, \ldots, g_m$ are in $A_0$. Then we can set
+$B_0 = A_0[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$ and $h_0$ the image
+of $h'$ in $B_0$. Then $B = B_0 \otimes_{A_0} A$ and
+$B/hB = B_0/h_0B_0 \otimes_{A_0} A$. By Algebra, Lemma
+\ref{algebra-lemma-flat-finite-presentation-limit-flat}
+we may, after enlarging $A_0$, assume that $B_0/h_0B_0$ is flat
+over $A_0$. Let $K_0 = \Ker(h_0 : B_0 \to B_0)$.
+As $B_0$ is of finite type over $\mathbf{Z}$ we see that $K_0$ is
+a finitely generated ideal. Let $A_1 \subset A$ be a finite type
+$\mathbf{Z}$-subalgebra containing $A_0$ and denote $B_1$, $h_1$, $K_1$
+the corresponding objects over $A_1$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-H1-regular}
+the map $K_0 \otimes_{A_0} A_1 \to K_1$ is surjective. On the other hand,
+the kernel of $h : B \to B$ is zero by assumption. Hence every element
+of $K_0$ maps to zero in $K_1$ for sufficiently large subrings
+$A_1 \subset A$. Since $K_0$ is finitely generated, we conclude that
+$K_1 = 0$ for a suitable choice of $A_1$.
+
+\medskip\noindent
+Set $f_1 : X_1 \to S_1$ equal to $\Spec$ of the
+ring map $A_1 \to B_1$. Set $D_1 = \Spec(B_1/h_1B_1)$.
+Since $B = B_1 \otimes_{A_1} A$, i.e., $X = X_1 \times_{S_1} S$,
+it now suffices to prove the lemma for $X_1 \to S_1$ and the relative
+effective Cartier divisor $D_1$, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-module-flat}.
+Hence we have reduced to the case where $A$ is a Noetherian ring.
+In this case we know that the ring map $A \to B$ is flat at every
+prime $\mathfrak q$ of $V(h)$ by
+Lemma \ref{lemma-flat-at-x}.
+Combined with the fact that the flat locus is open in this case, see
+Algebra, Theorem \ref{algebra-theorem-openness-flatness}
+we win.
+\end{proof}
+
+\noindent
+There is also the following lemma (whose idea is apparently
+due to Michael Artin, see \cite{Nobile}) which needs no finiteness
+assumptions at all.
+
+\begin{lemma}
+\label{lemma-michael-artin}
+Let $f : X \to S$ be a morphism of schemes.
+Let $D \subset X$ be a relative effective Cartier divisor on $X/S$.
+If $f$ is flat at all points of $X \setminus D$, then $f$ is flat.
+\end{lemma}
+
+\begin{proof}
+This translates into the following algebra fact:
+Let $A \to B$ be a ring map and $h \in B$.
+Assume $h$ is a nonzerodivisor, that $B/hB$ is flat over $A$, and
+that the localization $B_h$ is flat over $A$. Then $B$ is flat over $A$.
+The reason is that we have a short exact sequence
+$$
+0 \to B \to B_h \to \colim_n (1/h^n)B/B \to 0
+$$
+and that the second and third terms are flat over $A$, which implies
+that $B$ is flat over $A$ (see
+Algebra, Lemma \ref{algebra-lemma-flat-ses}). Note that a filtered
+colimit of flat modules is flat (see
+Algebra, Lemma \ref{algebra-lemma-colimit-flat})
+and that by induction on $n$ each $(1/h^n)B/B \cong B/h^nB$ is flat over
+$A$ since it fits into the short exact sequence
+$$
+0 \to B/h^{n - 1}B \xrightarrow{h} B/h^nB \to B/hB \to 0
+$$
+Some details omitted.
+\end{proof}
+
+\begin{example}
+\label{example-relative-cartier-ambient-space-not-flat}
+Here is an example of a relative effective Cartier divisor $D$ where the
+ambient scheme is not flat in a neighbourhood of $D$. Namely, let
+$A = k[t]$ and
+$$
+B = k[t, x, y, x^{-1}y, x^{-2}y, \ldots]/(ty, tx^{-1}y, tx^{-2}y, \ldots)
+$$
+Then $B$ is not flat over $A$ but $B/xB \cong A$ is flat over $A$.
+Moreover $x$ is a nonzerodivisor and hence defines a relative effective
+Cartier divisor in $\Spec(B)$ over $\Spec(A)$.
+\end{example}
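+
+\noindent
+In the example one checks flatness and its failure directly:
+$ty = 0$ in $B$ while $y \not = 0$, so $B$ has nonzero $t$-torsion
+and hence is not flat over the principal ideal domain $A = k[t]$.
+On the other hand $x^{-n}y = x \cdot x^{-(n + 1)}y$ for all
+$n \geq 0$ (with $x^0y = y$), so all these generators lie in $xB$
+and $B/xB \cong k[t] = A$.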
+
+\noindent
+If the ambient scheme is flat and locally of finite presentation over
+the base, then we can characterize a relative effective Cartier divisor
+in terms of its fibres. See also
+More on Morphisms, Lemma \ref{more-morphisms-lemma-slice-given-element}
+for a slightly different take on this lemma.
+
+\begin{lemma}
+\label{lemma-fibre-Cartier}
+Let $\varphi : X \to S$ be a flat morphism which is locally of finite
+presentation. Let $Z \subset X$ be a closed subscheme.
+Let $x \in Z$ with image $s \in S$.
+\begin{enumerate}
+\item If $Z_s \subset X_s$ is a Cartier divisor in a neighbourhood of $x$,
+then there exists an open $U \subset X$ and a
+relative effective Cartier divisor $D \subset U$ such that
+$Z \cap U \subset D$ and $Z_s \cap U = D_s$.
+\item If $Z_s \subset X_s$ is a Cartier divisor in a neighbourhood of $x$,
+the morphism $Z \to X$ is of finite presentation, and $Z \to S$ is flat at
+$x$, then we can choose $U$ and $D$ such that $Z \cap U = D$.
+\item If $Z_s \subset X_s$ is a Cartier divisor in a neighbourhood of $x$
+and $Z$ is a locally principal closed subscheme of $X$ in a neighbourhood
+of $x$, then we can choose $U$ and $D$ such that $Z \cap U = D$.
+\end{enumerate}
+In particular, if $Z \to S$ is locally of finite presentation and flat and
+all fibres $Z_s \subset X_s$ are effective Cartier divisors, then
+$Z$ is a relative effective Cartier divisor. Similarly, if $Z$
+is a locally principal closed subscheme of $X$ such that all fibres
+$Z_s \subset X_s$ are effective Cartier divisors, then
+$Z$ is a relative effective Cartier divisor.
+\end{lemma}
+
+\begin{proof}
+Choose affine open neighbourhoods $\Spec(A)$ of $s$ and
+$\Spec(B)$ of $x$ such that
+$\varphi(\Spec(B)) \subset \Spec(A)$.
+Let $\mathfrak p \subset A$ be the prime ideal corresponding to $s$.
+Let $\mathfrak q \subset B$ be the prime ideal corresponding to $x$.
+Let $I \subset B$ be the ideal corresponding to $Z$.
+By the initial assumption of the lemma we know that
+$A \to B$ is flat and of finite presentation.
+The assumption in (1) means that, after shrinking $\Spec(B)$, we may
+assume $I(B \otimes_A \kappa(\mathfrak p))$ is generated by a single
+element which is a nonzerodivisor in $B \otimes_A \kappa(\mathfrak p)$.
+Say $f \in I$ maps to this generator. We claim that after inverting
+an element $g \in B$, $g \not \in \mathfrak q$ the closed subscheme
+$D = V(f) \subset \Spec(B_g)$ is a relative effective Cartier
+divisor.
+
+\medskip\noindent
+By
+Algebra, Lemma \ref{algebra-lemma-flat-finite-presentation-limit-flat}
+we can find a flat finite type ring map $A_0 \to B_0$ of Noetherian
+rings, an element $f_0 \in B_0$ mapping to $f$, a ring map
+$A_0 \to A$, and an isomorphism
+$A \otimes_{A_0} B_0 \cong B$. If $\mathfrak p_0 = A_0 \cap \mathfrak p$
+then we see that
+$$
+B \otimes_A \kappa(\mathfrak p) =
+\left(B_0 \otimes_{A_0} \kappa(\mathfrak p_0)\right)
+\otimes_{\kappa(\mathfrak p_0)} \kappa(\mathfrak p)
+$$
+hence $f_0$ is a nonzerodivisor in $B_0 \otimes_{A_0} \kappa(\mathfrak p_0)$.
+By
+Algebra, Lemma \ref{algebra-lemma-grothendieck}
+we see that $f_0$ is a nonzerodivisor in $(B_0)_{\mathfrak q_0}$
+where $\mathfrak q_0 = B_0 \cap \mathfrak q$ and
+that $(B_0/f_0B_0)_{\mathfrak q_0}$ is flat over $A_0$. Hence by
+Algebra, Lemma \ref{algebra-lemma-regular-sequence-in-neighbourhood}
+and
+Algebra, Theorem \ref{algebra-theorem-openness-flatness}
+there exists a $g_0 \in B_0$, $g_0 \not \in \mathfrak q_0$ such
+that $f_0$ is a nonzerodivisor in $(B_0)_{g_0}$ and such that
+$(B_0/f_0B_0)_{g_0}$ is flat over $A_0$. Hence we see that
+$D_0 = V(f_0) \subset \Spec((B_0)_{g_0})$ is a relative effective
+Cartier divisor. Since we know that this property is preserved under
+base change, see
+Lemma \ref{lemma-relative-Cartier},
+we obtain the claim mentioned above with $g$ equal to the image of $g_0$
+in $B$.
+
+\medskip\noindent
+At this point we have proved (1). To see (2) consider the closed
+immersion $Z \to D$. The surjective ring map
+$u : \mathcal{O}_{D, x} \to \mathcal{O}_{Z, x}$
+is a map of flat local $\mathcal{O}_{S, s}$-algebras which
+are essentially of finite presentation, and which becomes an
+isomorphism after dividing by $\mathfrak m_s$. Hence it is
+an isomorphism, see
+Algebra, Lemma \ref{algebra-lemma-mod-injective-general}.
+It follows that $Z \to D$ is an isomorphism in a neighbourhood
+of $x$, see
+Algebra, Lemma \ref{algebra-lemma-local-isomorphism}.
+To see (3), after possibly shrinking $U$ we may assume that
+the ideal of $D$ is generated by a single nonzerodivisor $f$
+and the ideal of $Z$ is generated by an element $g$. Then
+$f = gh$. But $g|_{U_s}$ and $f|_{U_s}$ cut out the same
+effective Cartier divisor in a neighbourhood of $x$. Hence
+$h|_{X_s}$ is a unit in $\mathcal{O}_{X_s, x}$, hence $h$ is
+a unit in $\mathcal{O}_{X, x}$ hence $h$ is a unit in an
+open neighbourhood of $x$. I.e., $Z \cap U = D$ after shrinking $U$.
+
+\medskip\noindent
+The final statements of the lemma follow immediately from
+parts (2) and (3), combined with the fact that $Z \to S$
+is locally of finite presentation if and only if $Z \to X$ is
+of finite presentation, see
+Morphisms, Lemmas \ref{morphisms-lemma-composition-finite-presentation} and
+\ref{morphisms-lemma-finite-presentation-permanence}.
+\end{proof}
+
+
+
+\section{The normal cone of an immersion}
+\label{section-normal-cone}
+
+\noindent
+Let $i : Z \to X$ be a closed immersion. Let
+$\mathcal{I} \subset \mathcal{O}_X$ be the corresponding quasi-coherent
+sheaf of ideals. Consider the quasi-coherent sheaf of graded
+$\mathcal{O}_X$-algebras
+$\bigoplus_{n \geq 0} \mathcal{I}^n/\mathcal{I}^{n + 1}$.
+Since the sheaves $\mathcal{I}^n/\mathcal{I}^{n + 1}$
+are each annihilated by $\mathcal{I}$ this graded algebra
+corresponds to a quasi-coherent sheaf of graded $\mathcal{O}_Z$-algebras
+by
+Morphisms, Lemma \ref{morphisms-lemma-i-star-equivalence}.
+This quasi-coherent graded $\mathcal{O}_Z$-algebra is called the
+{\it conormal algebra of $Z$ in $X$} and is often simply denoted
+$\bigoplus_{n \geq 0} \mathcal{I}^n/\mathcal{I}^{n + 1}$
+by the abuse of notation mentioned in
+Morphisms, Section \ref{morphisms-section-closed-immersions-quasi-coherent}.
+
+\medskip\noindent
+Let $f : Z \to X$ be an immersion. We define the conormal algebra of $f$
+as the conormal algebra of the closed immersion
+$i : Z \to X \setminus \partial Z$, where
+$\partial Z = \overline{Z} \setminus Z$. It is often denoted
+$\bigoplus_{n \geq 0} \mathcal{I}^n/\mathcal{I}^{n + 1}$
+where $\mathcal{I}$ is the ideal sheaf
+of the closed immersion $i : Z \to X \setminus \partial Z$.
+
+\begin{definition}
+\label{definition-conormal-sheaf}
+Let $f : Z \to X$ be an immersion. The {\it conormal algebra
+$\mathcal{C}_{Z/X, *}$ of $Z$ in $X$} or the {\it conormal algebra of $f$}
+is the quasi-coherent sheaf of graded $\mathcal{O}_Z$-algebras
+$\bigoplus_{n \geq 0} \mathcal{I}^n/\mathcal{I}^{n + 1}$ described above.
+\end{definition}
+
+\noindent
+Thus $\mathcal{C}_{Z/X, 1} = \mathcal{C}_{Z/X}$ is the conormal sheaf
+of the immersion. Also $\mathcal{C}_{Z/X, 0} = \mathcal{O}_Z$ and
+$\mathcal{C}_{Z/X, n}$ is a quasi-coherent $\mathcal{O}_Z$-module
+characterized by the property
+\begin{equation}
+\label{equation-conormal-in-degree-n}
+i_*\mathcal{C}_{Z/X, n} = \mathcal{I}^n/\mathcal{I}^{n + 1}
+\end{equation}
+where $i : Z \to X \setminus \partial Z$ and $\mathcal{I}$ is the ideal
+sheaf of $i$ as above. Finally, note that there is a canonical surjective
+map
+\begin{equation}
+\label{equation-conormal-algebra-quotient}
+\text{Sym}^*(\mathcal{C}_{Z/X}) \longrightarrow \mathcal{C}_{Z/X, *}
+\end{equation}
+of quasi-coherent graded $\mathcal{O}_Z$-algebras which is an isomorphism
+in degrees $0$ and $1$.
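+
+\noindent
+The map (\ref{equation-conormal-algebra-quotient}) need not be
+injective in degrees $\geq 2$. For example, let $R = k[x, y]/(xy)$,
+$I = (x, y)$, and $Z = V(I) \subset X = \Spec(R)$. Then $I/I^2$ is
+free of rank $2$ over $R/I = k$ with basis $x, y$, so
+$\text{Sym}^2(I/I^2)$ has dimension $3$. On the other hand
+$I^2 = (x^2, y^2)$ and $I^3 = (x^3, y^3)$, so $I^2/I^3$ has
+dimension $2$: the surjection kills the class of $x \otimes y$
+because $xy = 0$ in $R$.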
+
+\begin{lemma}
+\label{lemma-affine-conormal-sheaf}
+Let $i : Z \to X$ be an immersion. The conormal algebra
+of $i$ has the following properties:
+\begin{enumerate}
+\item Let $U \subset X$ be any open such that $i(Z)$ is
+a closed subset of $U$. Let $\mathcal{I} \subset \mathcal{O}_U$
+be the sheaf of ideals corresponding to the closed subscheme
+$i(Z) \subset U$. Then
+$$
+\mathcal{C}_{Z/X, *} =
+i^*\left(\bigoplus\nolimits_{n \geq 0} \mathcal{I}^n\right) =
+i^{-1}\left(
+\bigoplus\nolimits_{n \geq 0} \mathcal{I}^n/\mathcal{I}^{n + 1}
+\right)
+$$
+\item
+For any affine open $\Spec(R) = U \subset X$
+such that $Z \cap U = \Spec(R/I)$ there is a
+canonical isomorphism
+$\Gamma(Z \cap U, \mathcal{C}_{Z/X, *}) = \bigoplus_{n \geq 0} I^n/I^{n + 1}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Mostly clear from the definitions. Note that given a ring $R$ and
+an ideal $I$ of $R$ we have $I^n/I^{n + 1} = I^n \otimes_R R/I$.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-conormal-algebra-functorial}
+Let
+$$
+\xymatrix{
+Z \ar[r]_i \ar[d]_f & X \ar[d]^g \\
+Z' \ar[r]^{i'} & X'
+}
+$$
+be a commutative diagram in the category of schemes.
+Assume $i$, $i'$ immersions. There is a canonical map
+of graded $\mathcal{O}_Z$-algebras
+$$
+f^*\mathcal{C}_{Z'/X', *}
+\longrightarrow
+\mathcal{C}_{Z/X, *}
+$$
+characterized by the following property: For every pair of affine opens
+$(\Spec(R) = U \subset X, \Spec(R') = U' \subset X')$ with
+$f(U) \subset U'$ such that
+$Z \cap U = \Spec(R/I)$ and $Z' \cap U' = \Spec(R'/I')$
+the induced map
+$$
+\Gamma(Z' \cap U', \mathcal{C}_{Z'/X', *}) =
+\bigoplus\nolimits_{n \geq 0} (I')^n/(I')^{n + 1}
+\longrightarrow
+\bigoplus\nolimits_{n \geq 0} I^n/I^{n + 1} =
+\Gamma(Z \cap U, \mathcal{C}_{Z/X, *})
+$$
+is the one induced by the ring map $f^\sharp : R' \to R$ which
+has the property $f^\sharp(I') \subset I$.
+\end{lemma}
+
+\begin{proof}
+Let $\partial Z' = \overline{Z'} \setminus Z'$ and
+$\partial Z = \overline{Z} \setminus Z$. These are closed subsets of $X'$ and
+of $X$. Replacing $X'$ by $X' \setminus \partial Z'$ and $X$ by
+$X \setminus \big(g^{-1}(\partial Z') \cup \partial Z\big)$ we
+see that we may assume that $i$ and $i'$ are closed immersions.
+
+\medskip\noindent
+The fact that $g \circ i$ factors through $i'$ implies that
+$g^*\mathcal{I}'$ maps into $\mathcal{I}$ under the canonical
+map $g^*\mathcal{I}' \to \mathcal{O}_X$, see
+Schemes, Lemmas
+\ref{schemes-lemma-characterize-closed-subspace} and
+\ref{schemes-lemma-restrict-map-to-closed}.
+Hence we get an induced map of quasi-coherent sheaves
+$g^*((\mathcal{I}')^n/(\mathcal{I}')^{n + 1}) \to
+\mathcal{I}^n/\mathcal{I}^{n + 1}$.
+Pulling back by $i$ gives
+$i^*g^*((\mathcal{I}')^n/(\mathcal{I}')^{n + 1}) \to
+i^*(\mathcal{I}^n/\mathcal{I}^{n + 1})$.
+Note that
+$i^*(\mathcal{I}^n/\mathcal{I}^{n + 1}) = \mathcal{C}_{Z/X, n}$.
+On the other hand,
+$i^*g^*((\mathcal{I}')^n/(\mathcal{I}')^{n + 1}) =
+f^*(i')^*((\mathcal{I}')^n/(\mathcal{I}')^{n + 1}) =
+f^*\mathcal{C}_{Z'/X', n}$.
+This gives the desired map.
+
+\medskip\noindent
+Checking that the map is locally described as the given map
+$(I')^n/(I')^{n + 1} \to I^n/I^{n + 1}$ is a matter of unwinding the
+definitions and is omitted. Another observation is that given any
+$x \in i(Z)$ there do exist affine open neighbourhoods $U$, $U'$
+with $f(U) \subset U'$ and $Z \cap U$ as well as $U' \cap Z'$
+closed such that $x \in U$. Proof omitted. Hence the requirement
+of the lemma indeed characterizes the map (and could have been used
+to define it).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-conormal-algebra-functorial-flat}
+Let
+$$
+\xymatrix{
+Z \ar[r]_i \ar[d]_f & X \ar[d]^g \\
+Z' \ar[r]^{i'} & X'
+}
+$$
+be a fibre product diagram in the category of schemes with
+$i$, $i'$ immersions. Then the canonical map
+$f^*\mathcal{C}_{Z'/X', *} \to \mathcal{C}_{Z/X, *}$ of
+Lemma \ref{lemma-conormal-algebra-functorial}
+is surjective. If $g$ is flat, then it is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Let $R' \to R$ be a ring map, and $I' \subset R'$ an ideal.
+Set $I = I'R$. Then $(I')^n/(I')^{n + 1} \otimes_{R'} R \to I^n/I^{n + 1}$
+is surjective. If $R' \to R$ is flat, then $I^n = (I')^n \otimes_{R'} R$
+and we see the map is an isomorphism.
+\end{proof}
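+
+\noindent
+Without flatness the map can indeed fail to be injective. For
+example, take $R' = k[x]$, $I' = (x)$, and $R = R'/(x) = k$. Then
+$I = I'R = 0$, while for $n \geq 1$ the module
+$$
+(I')^n/(I')^{n + 1} \otimes_{R'} R \cong k
+$$
+is nonzero and maps onto $I^n/I^{n + 1} = 0$.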
+
+\begin{definition}
+\label{definition-normal-cone}
+Let $i : Z \to X$ be an immersion of schemes.
+The {\it normal cone $C_ZX$} of $Z$ in $X$ is
+$$
+C_ZX = \underline{\Spec}_Z(\mathcal{C}_{Z/X, *})
+$$
+see
+Constructions,
+Definitions \ref{constructions-definition-cone} and
+\ref{constructions-definition-abstract-cone}. The {\it normal bundle}
+of $Z$ in $X$ is the vector bundle
+$$
+N_ZX = \underline{\Spec}_Z(\text{Sym}(\mathcal{C}_{Z/X}))
+$$
+see
+Constructions,
+Definitions \ref{constructions-definition-vector-bundle} and
+\ref{constructions-definition-abstract-vector-bundle}.
+\end{definition}
+
+\noindent
+Thus $C_ZX \to Z$ is a cone over $Z$ and $N_ZX \to Z$ is a vector bundle
+over $Z$ (recall that in our terminology this does not imply that
+the conormal sheaf is a finite locally free sheaf). Moreover, the canonical
+surjection (\ref{equation-conormal-algebra-quotient}) of graded algebras
+defines a canonical closed immersion
+\begin{equation}
+\label{equation-normal-cone-in-normal-bundle}
+C_ZX \longrightarrow N_ZX
+\end{equation}
+of cones over $Z$.
+
+
+
+
+
+\section{Regular ideal sheaves}
+\label{section-regular-ideal-sheaves}
+
+\noindent
+In this section we generalize the notion of an effective Cartier divisor
+to higher codimension. Recall that a sequence of elements
+$f_1, \ldots, f_r$ of a ring $R$ is a {\it regular sequence} if for each
+$i = 1, \ldots, r$ the element $f_i$ is a nonzerodivisor on
+$R/(f_1, \ldots, f_{i - 1})$ and $R/(f_1, \ldots, f_r) \not = 0$, see
+Algebra, Definition \ref{algebra-definition-regular-sequence}.
+There are three closely related weaker conditions that we can impose.
+The first is to assume that $f_1, \ldots, f_r$ is a {\it Koszul-regular
+sequence}, i.e., that $H_i(K_\bullet(f_1, \ldots, f_r)) = 0$ for $i > 0$, see
+More on Algebra,
+Definition \ref{more-algebra-definition-koszul-regular-sequence}.
+The sequence is called an {\it $H_1$-regular sequence} if
+$H_1(K_\bullet(f_1, \ldots, f_r)) = 0$. Another condition we can impose
+is that with $J = (f_1, \ldots, f_r)$, the map
+$$
+R/J[T_1, \ldots, T_r]
+\longrightarrow
+\bigoplus\nolimits_{n \geq 0}
+J^n/J^{n + 1}
+$$
+which maps $T_i$ to $f_i \bmod J^2$ is an isomorphism. In this case
+we say that $f_1, \ldots, f_r$ is a
+{\it quasi-regular sequence}, see
+Algebra, Definition \ref{algebra-definition-quasi-regular-sequence}.
+Given an $R$-module $M$ there is also a notion of $M$-regular and
+$M$-quasi-regular sequence.
+
+\medskip\noindent
+We can generalize this to the case of ringed spaces as follows.
+Let $X$ be a ringed space and let
+$f_1, \ldots, f_r \in \Gamma(X, \mathcal{O}_X)$.
+We say that $f_1, \ldots, f_r$ is a {\it regular sequence} if
+for each $i = 1, \ldots, r$ the map
+\begin{equation}
+\label{equation-map-regular}
+f_i :
+\mathcal{O}_X/(f_1, \ldots, f_{i - 1})
+\longrightarrow
+\mathcal{O}_X/(f_1, \ldots, f_{i - 1})
+\end{equation}
+is an injective map of sheaves. We say that $f_1, \ldots, f_r$ is a
+{\it Koszul-regular sequence} if the Koszul complex
+\begin{equation}
+\label{equation-koszul}
+K_\bullet(\mathcal{O}_X, f_\bullet),
+\end{equation}
+see
+Modules, Definition \ref{modules-definition-koszul-complex},
+is acyclic in degrees $> 0$. We say that $f_1, \ldots, f_r$ is a
+{\it $H_1$-regular sequence} if the Koszul complex
+$K_\bullet(\mathcal{O}_X, f_\bullet)$ is exact in degree $1$. Finally,
+we say that $f_1, \ldots, f_r$ is a
+{\it quasi-regular} sequence if the map
+\begin{equation}
+\label{equation-map-quasi-regular}
+\mathcal{O}_X/\mathcal{J}[T_1, \ldots, T_r]
+\longrightarrow
+\bigoplus\nolimits_{d \geq 0} \mathcal{J}^d/\mathcal{J}^{d + 1}
+\end{equation}
+is an isomorphism of sheaves where $\mathcal{J} \subset \mathcal{O}_X$
+is the sheaf of ideals generated by $f_1, \ldots, f_r$. (There is also
+a notion of $\mathcal{F}$-regular and $\mathcal{F}$-quasi-regular sequence
+for a given $\mathcal{O}_X$-module $\mathcal{F}$ which we will introduce
+here if we ever need it.)
+
+\begin{lemma}
+\label{lemma-types-regular-sequences-implications}
+Let $X$ be a ringed space.
+Let $f_1, \ldots, f_r \in \Gamma(X, \mathcal{O}_X)$.
+We have the following implications
+$f_1, \ldots, f_r$ is a regular sequence $\Rightarrow$
+$f_1, \ldots, f_r$ is a Koszul-regular sequence $\Rightarrow$
+$f_1, \ldots, f_r$ is an $H_1$-regular sequence $\Rightarrow$
+$f_1, \ldots, f_r$ is a quasi-regular sequence.
+\end{lemma}
+
+\begin{proof}
+Since we may check exactness at stalks, a
+sequence $f_1, \ldots, f_r$ is a regular sequence if and only
+if the maps
+$$
+f_i :
+\mathcal{O}_{X, x}/(f_1, \ldots, f_{i - 1})
+\longrightarrow
+\mathcal{O}_{X, x}/(f_1, \ldots, f_{i - 1})
+$$
+are injective for all $x \in X$. In other words, the image of the sequence
+$f_1, \ldots, f_r$ in the ring $\mathcal{O}_{X, x}$ is a
+regular sequence for all $x \in X$. The other types of regularity
+can be checked stalkwise as well (details omitted).
+Hence the implications follow from
+More on Algebra, Lemmas
+\ref{more-algebra-lemma-regular-koszul-regular},
+\ref{more-algebra-lemma-koszul-regular-H1-regular}, and
+\ref{more-algebra-lemma-H1-regular-quasi-regular}.
+\end{proof}
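+
+\noindent
+The implications in
+Lemma \ref{lemma-types-regular-sequences-implications}
+cannot in general be reversed. For example, on
+$X = \Spec(k[x, y, z])$ the sequence $x, y(1 - x), z(1 - x)$ is
+regular. Since the Koszul complex is, up to isomorphism, unchanged
+under a permutation of $f_1, \ldots, f_r$, the permuted sequence
+$y(1 - x), z(1 - x), x$ is still Koszul-regular. It is not regular:
+$z(1 - x) \cdot y = z \cdot y(1 - x)$ lies in $(y(1 - x))$ while
+$y$ does not, so $z(1 - x)$ is a zerodivisor on
+$k[x, y, z]/(y(1 - x))$.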
+
+\begin{definition}
+\label{definition-regular-ideal-sheaf}
+Let $X$ be a ringed space. Let $\mathcal{J} \subset \mathcal{O}_X$
+be a sheaf of ideals.
+\begin{enumerate}
+\item We say $\mathcal{J}$ is {\it regular} if for every
+$x \in \text{Supp}(\mathcal{O}_X/\mathcal{J})$ there exists an open
+neighbourhood $x \in U \subset X$ and a regular sequence
+$f_1, \ldots, f_r \in \mathcal{O}_X(U)$ such that $\mathcal{J}|_U$
+is generated by $f_1, \ldots, f_r$.
+\item We say $\mathcal{J}$ is {\it Koszul-regular} if for every
+$x \in \text{Supp}(\mathcal{O}_X/\mathcal{J})$ there exists an open
+neighbourhood $x \in U \subset X$ and a Koszul-regular sequence
+$f_1, \ldots, f_r \in \mathcal{O}_X(U)$ such that $\mathcal{J}|_U$
+is generated by $f_1, \ldots, f_r$.
+\item We say $\mathcal{J}$ is {\it $H_1$-regular} if for every
+$x \in \text{Supp}(\mathcal{O}_X/\mathcal{J})$ there exists an open
+neighbourhood $x \in U \subset X$ and an $H_1$-regular sequence
+$f_1, \ldots, f_r \in \mathcal{O}_X(U)$ such that $\mathcal{J}|_U$
+is generated by $f_1, \ldots, f_r$.
+\item We say $\mathcal{J}$ is {\it quasi-regular} if for every
+$x \in \text{Supp}(\mathcal{O}_X/\mathcal{J})$ there exists an open
+neighbourhood $x \in U \subset X$ and a quasi-regular sequence
+$f_1, \ldots, f_r \in \mathcal{O}_X(U)$ such that $\mathcal{J}|_U$
+is generated by $f_1, \ldots, f_r$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Many properties of this notion immediately follow from the
+corresponding notions for regular and quasi-regular sequences
+in rings.
+
+\begin{lemma}
+\label{lemma-regular-quasi-regular-scheme}
+Let $X$ be a ringed space. Let $\mathcal{J}$ be a sheaf of ideals.
+We have the following implications:
+$\mathcal{J}$ is regular $\Rightarrow$
+$\mathcal{J}$ is Koszul-regular $\Rightarrow$
+$\mathcal{J}$ is $H_1$-regular $\Rightarrow$
+$\mathcal{J}$ is quasi-regular.
+\end{lemma}
+
+\begin{proof}
+The lemma immediately reduces to
+Lemma \ref{lemma-types-regular-sequences-implications}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-regular-ideal}
+Let $X$ be a locally ringed space. Let $\mathcal{J} \subset \mathcal{O}_X$
+be a sheaf of ideals. Then $\mathcal{J}$ is quasi-regular if and
+only if the following conditions are satisfied:
+\begin{enumerate}
+\item $\mathcal{J}$ is an $\mathcal{O}_X$-module of finite type,
+\item $\mathcal{J}/\mathcal{J}^2$ is a finite locally free
+$\mathcal{O}_X/\mathcal{J}$-module, and
+\item the canonical maps
+$$
+\text{Sym}^n_{\mathcal{O}_X/\mathcal{J}}(\mathcal{J}/\mathcal{J}^2)
+\longrightarrow
+\mathcal{J}^n/\mathcal{J}^{n + 1}
+$$
+are isomorphisms for all $n \geq 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that if $U \subset X$ is an open such that
+$\mathcal{J}|_U$ is generated by a quasi-regular sequence
+$f_1, \ldots, f_r \in \mathcal{O}_X(U)$ then $\mathcal{J}|_U$
+is of finite type, $\mathcal{J}|_U/\mathcal{J}^2|_U$ is free
+with basis $f_1, \ldots, f_r$, and the maps in (3) are isomorphisms
+because they are the coordinate-free formulation of the degree $n$
+part of (\ref{equation-map-quasi-regular}). Hence it is clear that
+being quasi-regular implies conditions (1), (2), and (3).
+
+\medskip\noindent
+Conversely, suppose that (1), (2), and (3) hold. Pick a point
+$x \in \text{Supp}(\mathcal{O}_X/\mathcal{J})$. Then there exists
+a neighbourhood $U \subset X$ of $x$ such that
+$\mathcal{J}|_U/\mathcal{J}^2|_U$
+is free of rank $r$ over $\mathcal{O}_U/\mathcal{J}|_U$.
+After possibly shrinking $U$ we may assume there exist
+$f_1, \ldots, f_r \in \mathcal{J}(U)$ which map to a basis
+of $\mathcal{J}|_U/\mathcal{J}^2|_U$ as an
+$\mathcal{O}_U/\mathcal{J}|_U$-module.
+In particular we see that the images of $f_1, \ldots, f_r$ generate
+$\mathcal{J}_x/\mathcal{J}^2_x$. Hence by Nakayama's lemma
+(Algebra, Lemma \ref{algebra-lemma-NAK})
+we see that $f_1, \ldots, f_r$ generate the stalk $\mathcal{J}_x$.
+Hence, since $\mathcal{J}$ is of finite type, by
+Modules, Lemma \ref{modules-lemma-finite-type-surjective-on-stalk}
+after shrinking $U$ we may assume that $f_1, \ldots, f_r$ generate
+$\mathcal{J}$. Finally, from (3) and the isomorphism
+$\mathcal{J}|_U/\mathcal{J}^2|_U = \bigoplus \mathcal{O}_U/\mathcal{J}|_U f_i$
+it is clear that $f_1, \ldots, f_r \in \mathcal{O}_X(U)$
+is a quasi-regular sequence.
+\end{proof}
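
\noindent
As an illustration of the conditions in
Lemma \ref{lemma-quasi-regular-ideal}, consider
$X = \mathbf{A}^2_k = \Spec(k[x, y])$ over a field $k$ and
$\mathcal{J} = \widetilde{I}$ with $I = (x, y)$ the ideal of the origin.
Then $x, y$ is a regular, hence quasi-regular, sequence generating $I$.
Accordingly, $\mathcal{J}$ is of finite type, $I/I^2$ is free of rank $2$
over $k[x, y]/I = k$ with basis $\overline{x}, \overline{y}$, and the map
$$
\text{Sym}^n_k(I/I^2) \longrightarrow I^n/I^{n + 1},
\quad
\overline{x}^a \overline{y}^b \longmapsto
x^a y^b \bmod I^{n + 1} \quad (a + b = n)
$$
is an isomorphism, as the classes of the monomials $x^a y^b$ with
$a + b = n$ form a $k$-basis of $I^n/I^{n + 1}$.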
+
+\begin{lemma}
+\label{lemma-generate-regular-ideal}
+Let $(X, \mathcal{O}_X)$ be a locally ringed space.
+Let $\mathcal{J} \subset \mathcal{O}_X$ be a sheaf of ideals.
+Let $x \in X$ and $f_1, \ldots, f_r \in \mathcal{J}_x$ whose images
+give a basis for the $\kappa(x)$-vector space
+$\mathcal{J}_x/\mathfrak m_x\mathcal{J}_x$.
+\begin{enumerate}
\item If $\mathcal{J}$ is quasi-regular, then there exists an open
neighbourhood $U \subset X$ of $x$ such that $f_1, \ldots, f_r \in \mathcal{O}_X(U)$
+form a quasi-regular sequence generating $\mathcal{J}|_U$.
\item If $\mathcal{J}$ is $H_1$-regular, then there exists an open
neighbourhood $U \subset X$ of $x$ such that $f_1, \ldots, f_r \in \mathcal{O}_X(U)$
+form an $H_1$-regular sequence generating $\mathcal{J}|_U$.
\item If $\mathcal{J}$ is Koszul-regular, then there exists an open
neighbourhood $U \subset X$ of $x$ such that $f_1, \ldots, f_r \in \mathcal{O}_X(U)$
form a Koszul-regular sequence generating $\mathcal{J}|_U$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First assume that $\mathcal{J}$ is quasi-regular. We may choose an
+open neighbourhood $U \subset X$ of $x$ and a quasi-regular sequence
+$g_1, \ldots, g_s \in \mathcal{O}_X(U)$ which generates $\mathcal{J}|_U$.
Note that this implies that $\mathcal{J}|_U/\mathcal{J}^2|_U$ is free of
rank $s$ over $\mathcal{O}_U/\mathcal{J}|_U$ (see
+Lemma \ref{lemma-quasi-regular-ideal}
+and its proof) and hence $r = s$.
+We may shrink $U$ and assume $f_1, \ldots, f_r \in \mathcal{J}(U)$.
+Thus we may write
+$$
+f_i = \sum a_{ij} g_j
+$$
+for some $a_{ij} \in \mathcal{O}_X(U)$. By assumption the matrix
+$A = (a_{ij})$ maps to an invertible matrix over $\kappa(x)$.
+Hence, after shrinking $U$ once more, we may assume that $(a_{ij})$
+is invertible. Thus we see that $f_1, \ldots, f_r$ give a basis
for $(\mathcal{J}/\mathcal{J}^2)|_U$, which proves that $f_1, \ldots, f_r$
+is a quasi-regular sequence over $U$.
+
+\medskip\noindent
+Note that in order to prove (2) and (3) we may, because the assumptions
+of (2) and (3) are stronger than the assumption in (1), already assume that
+$f_1, \ldots, f_r \in \mathcal{J}(U)$ and $f_i = \sum a_{ij}g_j$
+with $(a_{ij})$ invertible as above, where now $g_1, \ldots, g_r$
is an $H_1$-regular or Koszul-regular sequence. Since the Koszul complex
+on $f_1, \ldots, f_r$ is isomorphic to the Koszul complex on
+$g_1, \ldots, g_r$ via the matrix $(a_{ij})$ (see
+More on Algebra, Lemma \ref{more-algebra-lemma-change-basis})
+we conclude that $f_1, \ldots, f_r$ is $H_1$-regular or Koszul-regular
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-ideal-sheaf-quasi-coherent}
+Any regular, Koszul-regular, $H_1$-regular, or quasi-regular sheaf
+of ideals on a scheme is a finite type quasi-coherent sheaf of ideals.
+\end{lemma}
+
+\begin{proof}
+This follows as such a sheaf of ideals is locally generated by
+finitely many sections. And any sheaf of ideals locally generated
+by sections on a scheme is quasi-coherent, see
+Schemes, Lemma \ref{schemes-lemma-closed-subspace-scheme}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-ideal-sheaf-scheme}
+Let $X$ be a scheme. Let $\mathcal{J}$ be a sheaf of ideals.
+Then $\mathcal{J}$ is regular
+(resp.\ Koszul-regular, $H_1$-regular, quasi-regular) if and only if
+for every $x \in \text{Supp}(\mathcal{O}_X/\mathcal{J})$ there exists
+an affine open neighbourhood $x \in U \subset X$, $U = \Spec(A)$
+such that $\mathcal{J}|_U = \widetilde{I}$ and such that $I$
+is generated by a regular (resp.\ Koszul-regular, $H_1$-regular,
+quasi-regular) sequence $f_1, \ldots, f_r \in A$.
+\end{lemma}
+
+\begin{proof}
+By assumption we can find an open neighbourhood $U$ of $x$ over which
+$\mathcal{J}$ is generated by a
+regular (resp.\ Koszul-regular, $H_1$-regular, quasi-regular)
+sequence $f_1, \ldots, f_r \in \mathcal{O}_X(U)$. After shrinking
+$U$ we may assume that $U$ is affine, say $U = \Spec(A)$.
+Since $\mathcal{J}$ is quasi-coherent by
+Lemma \ref{lemma-regular-ideal-sheaf-quasi-coherent}
+we see that $\mathcal{J}|_U = \widetilde{I}$ for some ideal $I \subset A$.
+Now we can use the fact that
+$$
+\widetilde{\ } : \text{Mod}_A \longrightarrow \QCoh(\mathcal{O}_U)
+$$
+is an equivalence of categories which preserves exactness. For example
+the fact that the functions $f_i$ generate $\mathcal{J}$ means that
the $f_i$, seen as elements of $A$, generate $I$. The fact that
+(\ref{equation-map-regular}) is injective
+(resp.\ (\ref{equation-koszul}) is exact, (\ref{equation-koszul}) is exact
+in degree $1$, (\ref{equation-map-quasi-regular}) is an isomorphism)
implies the corresponding property of the multiplication map
$f_i : A/(f_1, \ldots, f_{i - 1}) \to A/(f_1, \ldots, f_{i - 1})$
+(resp.\ the complex $K_\bullet(A, f_1, \ldots, f_r)$, the
+map $A/I[T_1, \ldots, T_r] \to \bigoplus I^n/I^{n + 1}$).
+Thus $f_1, \ldots, f_r \in A$ is a regular
+(resp.\ Koszul-regular, $H_1$-regular, quasi-regular)
+sequence of the ring $A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Noetherian-scheme-regular-ideal}
+Let $X$ be a locally Noetherian scheme. Let $\mathcal{J} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. Let $x$ be a point of the support of
+$\mathcal{O}_X/\mathcal{J}$. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{J}_x$ is generated by a regular sequence in
+$\mathcal{O}_{X, x}$,
+\item $\mathcal{J}_x$ is generated by a Koszul-regular sequence in
+$\mathcal{O}_{X, x}$,
+\item $\mathcal{J}_x$ is generated by an $H_1$-regular sequence in
+$\mathcal{O}_{X, x}$,
+\item $\mathcal{J}_x$ is generated by a quasi-regular sequence in
+$\mathcal{O}_{X, x}$,
+\item there exists an affine neighbourhood $U = \Spec(A)$ of $x$ such
+that $\mathcal{J}|_U = \widetilde{I}$ and $I$ is generated by a
regular sequence in $A$,
+\item there exists an affine neighbourhood $U = \Spec(A)$ of $x$ such
+that $\mathcal{J}|_U = \widetilde{I}$ and $I$ is generated by a
Koszul-regular sequence in $A$,
+\item there exists an affine neighbourhood $U = \Spec(A)$ of $x$ such
+that $\mathcal{J}|_U = \widetilde{I}$ and $I$ is generated by an
$H_1$-regular sequence in $A$,
+\item there exists an affine neighbourhood $U = \Spec(A)$ of $x$ such
+that $\mathcal{J}|_U = \widetilde{I}$ and $I$ is generated by a
+quasi-regular sequence in $A$,
+\item there exists a neighbourhood $U$ of $x$ such that $\mathcal{J}|_U$
is regular,
+\item there exists a neighbourhood $U$ of $x$ such that $\mathcal{J}|_U$
+is Koszul-regular, and
+\item there exists a neighbourhood $U$ of $x$ such that $\mathcal{J}|_U$
+is $H_1$-regular, and
+\item there exists a neighbourhood $U$ of $x$ such that $\mathcal{J}|_U$
+is quasi-regular.
+\end{enumerate}
+In particular, on a locally Noetherian scheme the notions of
+regular, Koszul-regular, $H_1$-regular, or quasi-regular ideal sheaf all agree.
+\end{lemma}
+
+\begin{proof}
+It follows from
+Lemma \ref{lemma-regular-ideal-sheaf-scheme}
+that (5) $\Leftrightarrow$ (9), (6) $\Leftrightarrow$ (10),
+(7) $\Leftrightarrow$ (11), and (8) $\Leftrightarrow$ (12).
+It is clear that (5) $\Rightarrow$ (1), (6) $\Rightarrow$ (2),
+(7) $\Rightarrow$ (3), and (8) $\Rightarrow$ (4).
+We have (1) $\Rightarrow$ (5) by
+Algebra, Lemma \ref{algebra-lemma-regular-sequence-in-neighbourhood}.
+We have (9) $\Rightarrow$ (10) $\Rightarrow$ (11) $\Rightarrow$ (12) by
+Lemma \ref{lemma-regular-quasi-regular-scheme}.
Finally, (4) $\Rightarrow$ (1) by
Algebra, Lemma \ref{algebra-lemma-quasi-regular-regular}, and
(1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (4) because the
corresponding implications hold for sequences in any ring.
Now all 12 statements are equivalent.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Regular immersions}
+\label{section-regular-immersions}
+
+\noindent
+Let $i : Z \to X$ be an immersion of schemes. By definition this means
+there exists an open subscheme $U \subset X$ such that
+$Z$ is identified with a closed subscheme of $U$. Let
+$\mathcal{I} \subset \mathcal{O}_U$ be the corresponding quasi-coherent
+sheaf of ideals. Suppose $U' \subset X$ is a second such open
+subscheme, and denote $\mathcal{I}' \subset \mathcal{O}_{U'}$
+the corresponding quasi-coherent sheaf of ideals. Then
+$\mathcal{I}|_{U \cap U'} = \mathcal{I}'|_{U \cap U'}$.
+Moreover, the support of $\mathcal{O}_U/\mathcal{I}$
is $Z$, which is contained in $U \cap U'$ and is also the
+support of $\mathcal{O}_{U'}/\mathcal{I}'$. Hence it follows from
+Definition \ref{definition-regular-ideal-sheaf}
+that $\mathcal{I}$ is a regular ideal if and only if
+$\mathcal{I}'$ is a regular ideal. Similarly for being Koszul-regular,
+$H_1$-regular, or quasi-regular.
+
+\begin{definition}
+\label{definition-regular-immersion}
+Let $i : Z \to X$ be an immersion of schemes. Choose an open subscheme
+$U \subset X$ such that $i$ identifies $Z$ with a closed
+subscheme of $U$ and denote $\mathcal{I} \subset \mathcal{O}_U$
+the corresponding quasi-coherent sheaf of ideals.
+\begin{enumerate}
+\item We say $i$ is a {\it regular immersion} if
+$\mathcal{I}$ is regular.
+\item We say $i$ is a {\it Koszul-regular immersion} if
+$\mathcal{I}$ is Koszul-regular.
\item We say $i$ is an {\it $H_1$-regular immersion} if
+$\mathcal{I}$ is $H_1$-regular.
+\item We say $i$ is a {\it quasi-regular immersion} if
+$\mathcal{I}$ is quasi-regular.
+\end{enumerate}
+\end{definition}
+
+\noindent
+The discussion above shows that this is independent of the choice
+of $U$. The conditions are listed in decreasing order of strength, see
+Lemma \ref{lemma-regular-quasi-regular-immersion}.
+A Koszul-regular closed immersion is smooth locally a regular immersion, see
+Lemma \ref{lemma-koszul-regular-smooth-locally-regular}.
+In the locally Noetherian case all four notions agree, see
+Lemma \ref{lemma-Noetherian-scheme-regular-ideal}.
+
+\begin{lemma}
+\label{lemma-regular-quasi-regular-immersion}
+Let $i : Z \to X$ be an immersion of schemes.
+We have the following implications:
+$i$ is regular $\Rightarrow$
+$i$ is Koszul-regular $\Rightarrow$
+$i$ is $H_1$-regular $\Rightarrow$
+$i$ is quasi-regular.
+\end{lemma}
+
+\begin{proof}
+The lemma immediately reduces to
+Lemma \ref{lemma-regular-quasi-regular-scheme}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-immersion-noetherian}
+Let $i : Z \to X$ be an immersion of schemes.
+Assume $X$ is locally Noetherian. Then
+$i$ is regular $\Leftrightarrow$
+$i$ is Koszul-regular $\Leftrightarrow$
+$i$ is $H_1$-regular $\Leftrightarrow$
+$i$ is quasi-regular.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from
+Lemma \ref{lemma-regular-quasi-regular-immersion}
+and
+Lemma \ref{lemma-Noetherian-scheme-regular-ideal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-base-change-regular-immersion}
+Let $i : Z \to X$ be a regular (resp.\ Koszul-regular,
+$H_1$-regular, quasi-regular) immersion. Let $X' \to X$ be a flat
+morphism. Then the base change $i' : Z \times_X X' \to X'$
+is a regular (resp.\ Koszul-regular,
+$H_1$-regular, quasi-regular) immersion.
+\end{lemma}
+
+\begin{proof}
+Via
+Lemma \ref{lemma-regular-ideal-sheaf-scheme}
+this translates into the algebraic statements in
+Algebra, Lemmas \ref{algebra-lemma-flat-increases-depth} and
+\ref{algebra-lemma-flat-base-change-quasi-regular}
+and
+More on Algebra,
+Lemma \ref{more-algebra-lemma-koszul-regular-flat-base-change}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-regular-immersion}
+Let $i : Z \to X$ be an immersion of schemes. Then $i$ is a quasi-regular
immersion if and only if the following conditions are satisfied:
+\begin{enumerate}
+\item $i$ is locally of finite presentation,
+\item the conormal sheaf $\mathcal{C}_{Z/X}$ is finite locally free, and
+\item the map (\ref{equation-conormal-algebra-quotient}) is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+An open immersion is locally of finite presentation. Hence we may
+replace $X$ by an open subscheme $U \subset X$ such that $i$ identifies
+$Z$ with a closed subscheme of $U$, i.e., we may assume that $i$
+is a closed immersion. Let $\mathcal{I} \subset \mathcal{O}_X$ be the
+corresponding quasi-coherent sheaf of ideals. Recall, see
+Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-finite-presentation}
+that $\mathcal{I}$ is of finite type if and only if $i$ is locally
+of finite presentation. Hence the equivalence follows from
+Lemma \ref{lemma-quasi-regular-ideal}
+and unwinding the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-transitivity-conormal-quasi-regular}
+Let $Z \to Y \to X$ be immersions of schemes. Assume that
+$Z \to Y$ is $H_1$-regular. Then the canonical sequence of
+Morphisms, Lemma \ref{morphisms-lemma-transitivity-conormal}
+$$
+0 \to i^*\mathcal{C}_{Y/X} \to
+\mathcal{C}_{Z/X} \to
+\mathcal{C}_{Z/Y} \to 0
+$$
+is exact and locally split.
+\end{lemma}
+
+\begin{proof}
+Since $\mathcal{C}_{Z/Y}$ is finite locally free (see
+Lemma \ref{lemma-quasi-regular-immersion}
+and
+Lemma \ref{lemma-regular-quasi-regular-scheme})
+it suffices to prove that the sequence is exact. By what was proven in
+Morphisms, Lemma \ref{morphisms-lemma-transitivity-conormal}
+it suffices to show that the first map is injective.
+Working affine locally this reduces to the following question:
+Suppose that we have a ring $A$ and ideals $I \subset J \subset A$.
+Assume that $J/I \subset A/I$ is generated by an $H_1$-regular sequence.
+Does this imply that $I/I^2 \otimes_A A/J \to J/J^2$ is injective?
+Note that $I/I^2 \otimes_A A/J = I/IJ$. Hence we are trying to prove
+that $I \cap J^2 = IJ$. This is the result of
+More on Algebra, Lemma \ref{more-algebra-lemma-conormal-sequence-H1-regular}.
+\end{proof}
+
+\noindent
A composition of quasi-regular immersions need not be quasi-regular, see
+Algebra, Remark \ref{algebra-remark-join-quasi-regular-sequences}.
+The other types of regular immersions are preserved under composition.
+
+\begin{lemma}
+\label{lemma-composition-regular-immersion}
+Let $i : Z \to Y$ and $j : Y \to X$ be immersions of schemes.
+\begin{enumerate}
+\item If $i$ and $j$ are regular immersions, so is $j \circ i$.
+\item If $i$ and $j$ are Koszul-regular immersions, so is $j \circ i$.
+\item If $i$ and $j$ are $H_1$-regular immersions, so is $j \circ i$.
+\item If $i$ is an $H_1$-regular immersion and $j$ is a quasi-regular
+immersion, then $j \circ i$ is a quasi-regular immersion.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The algebraic version of (1) is
+Algebra, Lemma \ref{algebra-lemma-join-regular-sequences}.
+The algebraic version of (2) is
+More on Algebra, Lemma \ref{more-algebra-lemma-join-koszul-regular-sequences}.
+The algebraic version of (3) is
+More on Algebra, Lemma \ref{more-algebra-lemma-join-H1-regular-sequences}.
+The algebraic version of (4) is
+More on Algebra, Lemma \ref{more-algebra-lemma-join-quasi-regular-H1-regular}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-permanence-regular-immersion}
+Let $i : Z \to Y$ and $j : Y \to X$ be immersions of schemes. Assume
+that the sequence
+$$
+0 \to i^*\mathcal{C}_{Y/X} \to
+\mathcal{C}_{Z/X} \to
+\mathcal{C}_{Z/Y} \to 0
+$$
+of
+Morphisms, Lemma \ref{morphisms-lemma-transitivity-conormal}
+is exact and locally split.
+\begin{enumerate}
+\item If $j \circ i$ is a quasi-regular immersion, so is $i$.
\item If $j \circ i$ is an $H_1$-regular immersion, so is $i$.
+\item If both $j$ and $j \circ i$ are Koszul-regular immersions, so is $i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+After shrinking $Y$ and $X$ we may assume that $i$ and $j$ are closed
+immersions. Denote $\mathcal{I} \subset \mathcal{O}_X$ the ideal sheaf
+of $Y$ and $\mathcal{J} \subset \mathcal{O}_X$ the ideal sheaf of $Z$.
+The conormal sequence is $0 \to \mathcal{I}/\mathcal{I}\mathcal{J}
+\to \mathcal{J}/\mathcal{J}^2 \to
+\mathcal{J}/(\mathcal{I} + \mathcal{J}^2) \to 0$.
+Let $z \in Z$ and set $y = i(z)$, $x = j(y) = j(i(z))$.
+Choose $f_1, \ldots, f_n \in \mathcal{I}_x$ which map to a basis of
$\mathcal{I}_x/\mathfrak m_x\mathcal{I}_x$. Extend this to
+$f_1, \ldots, f_n, g_1, \ldots, g_m \in \mathcal{J}_x$
which map to a basis of $\mathcal{J}_x/\mathfrak m_x\mathcal{J}_x$.
+This is possible as we have assumed that the sequence of conormal
+sheaves is split in a neighbourhood of $z$, hence
+$\mathcal{I}_x/\mathfrak m_x\mathcal{I}_x \to
+\mathcal{J}_x/\mathfrak m_x\mathcal{J}_x$ is injective.
+
+\medskip\noindent
+Proof of (1). By
+Lemma \ref{lemma-generate-regular-ideal}
+we can find an affine open neighbourhood $U$ of $x$ such that
+$f_1, \ldots, f_n, g_1, \ldots, g_m$ forms a quasi-regular sequence
+generating $\mathcal{J}$. Hence by
+Algebra, Lemma \ref{algebra-lemma-truncate-quasi-regular}
+we see that $g_1, \ldots, g_m$ induces a quasi-regular sequence on
+$Y \cap U$ cutting out $Z$.
+
+\medskip\noindent
+Proof of (2). Exactly the same as the proof of (1) except using
+More on Algebra, Lemma \ref{more-algebra-lemma-truncate-H1-regular}.
+
+\medskip\noindent
+Proof of (3). By
+Lemma \ref{lemma-generate-regular-ideal}
+(applied twice)
+we can find an affine open neighbourhood $U$ of $x$ such that
+$f_1, \ldots, f_n$ forms a Koszul-regular sequence generating
+$\mathcal{I}$ and $f_1, \ldots, f_n, g_1, \ldots, g_m$ forms a
+Koszul-regular sequence generating $\mathcal{J}$. Hence by
+More on Algebra, Lemma \ref{more-algebra-lemma-truncate-koszul-regular}
+we see that $g_1, \ldots, g_m$ induces a Koszul-regular sequence on
+$Y \cap U$ cutting out $Z$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extra-permanence-regular-immersion-noetherian}
+Let $i : Z \to Y$ and $j : Y \to X$ be immersions of schemes.
+Pick $z \in Z$ and denote $y \in Y$, $x \in X$ the corresponding points.
+Assume $X$ is locally Noetherian.
+The following are equivalent
+\begin{enumerate}
+\item $i$ is a regular immersion in a neighbourhood of $z$ and $j$
+is a regular immersion in a neighbourhood of $y$,
+\item $i$ and $j \circ i$ are regular immersions in a neighbourhood of $z$,
+\item $j \circ i$ is a regular immersion in a neighbourhood of $z$ and the
+conormal sequence
+$$
+0 \to i^*\mathcal{C}_{Y/X} \to
+\mathcal{C}_{Z/X} \to
+\mathcal{C}_{Z/Y} \to 0
+$$
+is split exact in a neighbourhood of $z$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Since $X$ (and hence $Y$) is locally Noetherian, all four types of regular
immersions agree, and moreover we may check whether a morphism is a
+regular immersion on the level of local rings, see
+Lemma \ref{lemma-Noetherian-scheme-regular-ideal}.
+The implication (1) $\Rightarrow$ (2) is
+Lemma \ref{lemma-composition-regular-immersion}.
+The implication (2) $\Rightarrow$ (3) is
+Lemma \ref{lemma-transitivity-conormal-quasi-regular}.
+Thus it suffices to prove that (3) implies (1).
+
+\medskip\noindent
+Assume (3). Set $A = \mathcal{O}_{X, x}$. Denote $I \subset A$ the kernel
+of the surjective map $\mathcal{O}_{X, x} \to \mathcal{O}_{Y, y}$ and
+denote $J \subset A$ the kernel
+of the surjective map $\mathcal{O}_{X, x} \to \mathcal{O}_{Z, z}$.
+Note that any minimal sequence of elements generating $J$ in $A$
is a quasi-regular, hence regular, sequence, see
+Lemma \ref{lemma-generate-regular-ideal}.
+By assumption the conormal sequence
+$$
+0 \to I/IJ \to J/J^2 \to J/(I + J^2) \to 0
+$$
+is split exact as a sequence of $A/J$-modules. Hence we can pick
+a minimal system of generators $f_1, \ldots, f_n, g_1, \ldots, g_m$
+of $J$ with $f_1, \ldots, f_n \in I$ a minimal system of generators of $I$.
+As pointed out above $f_1, \ldots, f_n, g_1, \ldots, g_m$ is a regular
+sequence in $A$. It follows directly from the definition of a regular
+sequence that $f_1, \ldots, f_n$ is a regular sequence in $A$ and
+$\overline{g}_1, \ldots, \overline{g}_m$ is a regular sequence in
+$A/I$. Thus $j$ is a regular immersion at $y$ and $i$ is a regular
+immersion at $z$.
+\end{proof}
+
+\begin{remark}
+\label{remark-not-always-extra-permanence}
+In the situation of
+Lemma \ref{lemma-extra-permanence-regular-immersion-noetherian}
+parts (1), (2), (3) are {\bf not} equivalent to
+``$j \circ i$ and $j$ are regular immersions at $z$ and $y$''.
An example is $X = \mathbf{A}^1_k = \Spec(k[x])$,
$Y = \Spec(k[x]/(x^2))$ and $Z = \Spec(k[x]/(x))$.
Here $j$ and $j \circ i$ are regular immersions, as the ideals
$(x^2)$ and $(x)$ of $k[x]$ are generated by nonzerodivisors,
but $i$ is not, as the ideal of $Z$ in $Y$ is generated by the
class of $x$, which is a zerodivisor in $k[x]/(x^2)$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-koszul-regular-smooth-locally-regular}
Let $i : Z \to X$ be a Koszul-regular closed immersion.
+Then there exists a surjective smooth morphism $X' \to X$ such
+that the base change $i' : Z \times_X X' \to X'$ of $i$ is
+a regular immersion.
+\end{lemma}
+
+\begin{proof}
We may assume that $X$ is affine and that the ideal of $Z$ is generated
by a Koszul-regular sequence by replacing $X$ by the members of a suitable
+affine open covering (affine opens as in
+Lemma \ref{lemma-regular-ideal-sheaf-scheme}).
+The affine case is
+More on Algebra,
+Lemma \ref{more-algebra-lemma-Koszul-regular-flat-locally-regular}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-immersion-regular-regular-immersion}
+Let $i : Z \to X$ be an immersion. If $Z$ and $X$ are
+regular schemes, then $i$ is a regular immersion.
+\end{lemma}
+
+\begin{proof}
+Let $z \in Z$. By Lemma \ref{lemma-Noetherian-scheme-regular-ideal}
+it suffices to show that the kernel of
+$\mathcal{O}_{X, z} \to \mathcal{O}_{Z, z}$
+is generated by a regular sequence. This follows from
+Algebra, Lemmas \ref{algebra-lemma-regular-quotient-regular} and
+\ref{algebra-lemma-regular-ring-CM}.
+\end{proof}
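
\noindent
For example, if $X$ is a smooth variety over a field $k$ and
$Z \subset X$ is a smooth closed subvariety, then both $Z$ and $X$ are
regular schemes, so the inclusion $Z \to X$ is a regular immersion by
Lemma \ref{lemma-immersion-regular-regular-immersion}. Concretely, the
kernel of $\mathcal{O}_{X, z} \to \mathcal{O}_{Z, z}$ is in this case
generated by part of a regular system of parameters of
$\mathcal{O}_{X, z}$, which is a regular sequence.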
+
+
+
+
+
+\section{Relative regular immersions}
+\label{section-relative-regular-immersion}
+
+\noindent
+In this section we consider the base change property for regular immersions.
+The following lemma does not hold for regular immersions
or for Koszul-regular immersions, see
+Examples, Lemma \ref{examples-lemma-base-change-regular-sequence}.
+
+\begin{lemma}
+\label{lemma-relative-regular-immersion}
+Let $f : X \to S$ be a morphism of schemes.
+Let $i : Z \subset X$ be an immersion.
+Assume
+\begin{enumerate}
+\item $i$ is an $H_1$-regular (resp.\ quasi-regular) immersion, and
+\item $Z \to S$ is a flat morphism.
+\end{enumerate}
+Then for every morphism of schemes $g : S' \to S$ the base change
+$Z' = S' \times_S Z \to X' = S' \times_S X$
+is an $H_1$-regular (resp.\ quasi-regular) immersion.
+\end{lemma}
+
+\begin{proof}
+Unwinding the definitions and using
+Lemma \ref{lemma-regular-ideal-sheaf-scheme}
+this translates into More on Algebra, Lemma
+\ref{more-algebra-lemma-relative-regular-immersion-algebra}.
+\end{proof}
+
+\noindent
+This lemma is the motivation for the following definition.
+
+\begin{definition}
+\label{definition-relative-H1-regular-immersion}
+Let $f : X \to S$ be a morphism of schemes.
+Let $i : Z \to X$ be an immersion.
+\begin{enumerate}
+\item We say $i$ is a {\it relative quasi-regular immersion}
+if $Z \to S$ is flat and $i$ is a quasi-regular immersion.
+\item We say $i$ is a {\it relative $H_1$-regular immersion}
+if $Z \to S$ is flat and $i$ is an $H_1$-regular immersion.
+\end{enumerate}
+\end{definition}
+
+\noindent
+We warn the reader that this may be nonstandard notation.
+Lemma \ref{lemma-relative-regular-immersion}
+guarantees that relative quasi-regular (resp.\ $H_1$-regular)
+immersions are preserved under any base change.
+A relative $H_1$-regular immersion is a relative quasi-regular immersion, see
+Lemma \ref{lemma-regular-quasi-regular-immersion}.
+Please take a look at
+Lemma \ref{lemma-flat-relative-H1-regular}
+(or
+Lemma \ref{lemma-relative-regular-immersion-flat-in-neighbourhood})
+which shows that if $Z \to X$ is a relative $H_1$-regular
+(or quasi-regular) immersion and the ambient scheme is (flat and)
+locally of finite presentation over $S$, then $Z \to X$
+is actually a regular immersion and the same remains true after
+any base change.
+
+\begin{lemma}
+\label{lemma-quasi-regular-immersion-flat-at-x}
+Let $f : X \to S$ be a morphism of schemes.
+Let $Z \to X$ be a relative quasi-regular immersion.
+If $x \in Z$ and $\mathcal{O}_{X, x}$ is Noetherian, then $f$ is flat at $x$.
+\end{lemma}
+
+\begin{proof}
+Let $f_1, \ldots, f_r \in \mathcal{O}_{X, x}$ be a quasi-regular
+sequence cutting out the ideal of $Z$ at $x$. By
+Algebra, Lemma \ref{algebra-lemma-quasi-regular-regular}
+we know that $f_1, \ldots, f_r$ is a regular sequence.
+Hence $f_r$ is a nonzerodivisor on
+$\mathcal{O}_{X, x}/(f_1, \ldots, f_{r - 1})$ such that the
+quotient is a flat $\mathcal{O}_{S, f(x)}$-module.
+By
+Lemma \ref{lemma-flat-at-x}
+we conclude that $\mathcal{O}_{X, x}/(f_1, \ldots, f_{r - 1})$
+is a flat $\mathcal{O}_{S, f(x)}$-module.
+Continuing by induction we find that $\mathcal{O}_{X, x}$
is a flat $\mathcal{O}_{S, f(x)}$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-regular-immersion-flat-in-neighbourhood}
+Let $X \to S$ be a morphism of schemes.
+Let $Z \to X$ be an immersion.
+Assume
+\begin{enumerate}
+\item $X \to S$ is flat and locally of finite presentation,
+\item $Z \to X$ is a relative quasi-regular immersion.
+\end{enumerate}
+Then $Z \to X$ is a regular immersion and
+the same remains true after any base change.
+\end{lemma}
+
+\begin{proof}
Pick $x \in Z$ with image $s \in S$. To prove this it suffices to
find an affine open neighbourhood $U \subset X$ of $x$ such that the
result holds on $U$. Hence we may assume that $X$ is affine
and there exists a quasi-regular sequence
+$f_1, \ldots, f_r \in \Gamma(X, \mathcal{O}_X)$
+such that $Z = V(f_1, \ldots, f_r)$. By
+More on Algebra, Lemma
+\ref{more-algebra-lemma-relative-regular-immersion-algebra}
+the sequence $f_1|_{X_s}, \ldots, f_r|_{X_s}$ is a
+quasi-regular sequence in $\Gamma(X_s, \mathcal{O}_{X_s})$.
+Since $X_s$ is Noetherian, this implies, possibly after shrinking
+$X$ a bit, that $f_1|_{X_s}, \ldots, f_r|_{X_s}$ is a regular
+sequence, see
+Algebra, Lemmas \ref{algebra-lemma-quasi-regular-regular} and
+\ref{algebra-lemma-regular-sequence-in-neighbourhood}.
+By
+Lemma \ref{lemma-fibre-Cartier}
+it follows that $Z_1 = V(f_1) \subset X$ is a relative effective
+Cartier divisor, again after possibly shrinking $X$ a bit.
+Applying the same lemma again, but now to $Z_2 = V(f_1, f_2) \subset Z_1$
+we see that $Z_2 \subset Z_1$ is a relative effective Cartier divisor.
And so on until one reaches $Z = Z_r = V(f_1, \ldots, f_r)$.
+Since being a relative effective Cartier divisor is preserved under
+arbitrary base change, see
+Lemma \ref{lemma-relative-Cartier},
+we also see that the final statement of the lemma holds.
+\end{proof}
+
+\begin{remark}
+\label{remark-relative-regular-immersion-elements}
+The codimension of a relative quasi-regular immersion,
+if it is constant, does not change after a base change.
+In fact, if we have a ring map $A \to B$ and a quasi-regular
+sequence $f_1, \ldots, f_r \in B$ such that $B/(f_1, \ldots, f_r)$
+is flat over $A$, then for any ring map $A \to A'$
+we have a quasi-regular sequence
+$f_1 \otimes 1, \ldots, f_r \otimes 1$ in $B' = B \otimes_A A'$
+by More on Algebra, Lemma
+\ref{more-algebra-lemma-relative-regular-immersion-algebra}
+(which was used in the proof of
+Lemma \ref{lemma-relative-regular-immersion} above).
+Now the proof of
+Lemma \ref{lemma-relative-regular-immersion-flat-in-neighbourhood}
+shows that if $A \to B$ is flat and locally of finite
+presentation, then for every prime ideal $\mathfrak q' \subset B'$
+the sequence
+$f_1 \otimes 1, \ldots, f_r \otimes 1$ is even a
+regular sequence in the local ring $B'_{\mathfrak q'}$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-flat-relative-H1-regular}
+Let $X \to S$ be a morphism of schemes.
+Let $Z \to X$ be a relative $H_1$-regular immersion.
+Assume $X \to S$ is locally of finite presentation. Then
+\begin{enumerate}
+\item there exists an open subscheme $U \subset X$ such that
+$Z \subset U$ and such that $U \to S$ is flat, and
+\item $Z \to X$ is a regular immersion and the same remains
+true after any base change.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Pick $x \in Z$. To prove (1) it suffices to find an open neighbourhood
+$U \subset X$ of $x$ such that $U \to S$ is flat. Hence the lemma reduces
+to the case that $X = \Spec(B)$ and $S = \Spec(A)$ are affine
+and that $Z$ is given by an $H_1$-regular sequence $f_1, \ldots, f_r \in B$.
+By assumption $B$ is a finitely presented $A$-algebra and
+$B/(f_1, \ldots, f_r)B$ is a flat $A$-algebra. We are going to use
+absolute Noetherian approximation.
+
+\medskip\noindent
Write $B = A[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$ and let
$f_i' \in A[x_1, \ldots, x_n]$ be an element mapping to $f_i$. Choose a finite type
+$\mathbf{Z}$-subalgebra $A_0 \subset A$ such that all the coefficients
+of the polynomials $f_1', \ldots, f_r', g_1, \ldots, g_m$ are in $A_0$.
We set $B_0 = A_0[x_1, \ldots, x_n]/(g_1, \ldots, g_m)$ and we denote
$f_{0, i}$ the image of $f_i'$ in $B_0$. Then $B = B_0 \otimes_{A_0} A$
+and
+$$
+B/(f_1, \ldots, f_r) =
+B_0/(f_{0, 1}, \ldots, f_{0, r}) \otimes_{A_0} A.
+$$
+By
+Algebra, Lemma \ref{algebra-lemma-flat-finite-presentation-limit-flat}
+we may, after enlarging $A_0$, assume that
+$B_0/(f_{0, 1}, \ldots, f_{0, r})$ is flat over $A_0$.
+It may not be the case at this point that the Koszul cohomology group
+$H_1(K_\bullet(B_0, f_{0, 1}, \ldots, f_{0, r}))$ is zero.
+On the other hand, as $B_0$ is Noetherian, it is a finitely
+generated $B_0$-module. Let
+$\xi_1, \ldots, \xi_n \in H_1(K_\bullet(B_0, f_{0, 1}, \ldots, f_{0, r}))$
+be generators. Let $A_0 \subset A_1 \subset A$ be a larger finite type
+$\mathbf{Z}$-subalgebra of $A$. Denote $f_{1, i}$ the image
+of $f_{0, i}$ in $B_1 = B_0 \otimes_{A_0} A_1$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-H1-regular}
+the map
+$$
+H_1(K_\bullet(B_0, f_{0, 1}, \ldots, f_{0, r})) \otimes_{A_0} A_1
+\longrightarrow
+H_1(K_\bullet(B_1, f_{1, 1}, \ldots, f_{1, r}))
+$$
+is surjective. Furthermore, it is clear that the colimit (over all
+choices of $A_1$ as above) of the
+complexes $K_\bullet(B_1, f_{1, 1}, \ldots, f_{1, r})$ is the complex
+$K_\bullet(B, f_1, \ldots, f_r)$ which is acyclic in degree $1$. Hence
+$$
+\colim_{A_0 \subset A_1 \subset A}
+H_1(K_\bullet(B_1, f_{1, 1}, \ldots, f_{1, r}))
+= 0
+$$
+by
+Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}.
+Thus we can find a choice of $A_1$ such that $\xi_1, \ldots, \xi_n$
+all map to zero in $H_1(K_\bullet(B_1, f_{1, 1}, \ldots, f_{1, r}))$.
+In other words, the Koszul cohomology group
+$H_1(K_\bullet(B_1, f_{1, 1}, \ldots, f_{1, r}))$
+is zero.
+
+\medskip\noindent
+Consider the morphism of affine schemes
+$X_1 \to S_1$ equal to $\Spec$ of the
+ring map $A_1 \to B_1$ and
+$Z_1 = \Spec(B_1/(f_{1, 1}, \ldots, f_{1, r}))$.
+Since $B = B_1 \otimes_{A_1} A$, i.e., $X = X_1 \times_{S_1} S$,
and similarly $Z = Z_1 \times_{S_1} S$,
+it now suffices to prove (1) for $X_1 \to S_1$ and the relative
+$H_1$-regular immersion $Z_1 \to X_1$, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-module-flat}.
+Hence we have reduced to the case where $X \to S$ is a finite type
+morphism of Noetherian schemes.
+In this case we know that $X \to S$ is flat at every
+point of $Z$ by
+Lemma \ref{lemma-quasi-regular-immersion-flat-at-x}.
+Combined with the fact that the flat locus is open in this case, see
+Algebra, Theorem \ref{algebra-theorem-openness-flatness}
+we see that (1) holds. Part (2) then follows from an application of
+Lemma \ref{lemma-relative-regular-immersion-flat-in-neighbourhood}.
+\end{proof}
+
+\noindent
+If the ambient scheme is flat and locally of finite presentation over
+the base, then we can characterize a relative
+quasi-regular immersion in terms of its fibres.
+
+\begin{lemma}
+\label{lemma-fibre-quasi-regular}
+Let $\varphi : X \to S$ be a flat morphism which is locally of finite
+presentation. Let $T \subset X$ be a closed subscheme.
+Let $x \in T$ with image $s \in S$.
+\begin{enumerate}
+\item If $T_s \subset X_s$ is a quasi-regular immersion
+in a neighbourhood of $x$, then there exists an open
+$U \subset X$ and a relative quasi-regular immersion
+$Z \subset U$ such that $Z_s = T_s \cap U_s$ and $T \cap U \subset Z$.
+\item If $T_s \subset X_s$ is a quasi-regular immersion
+in a neighbourhood of $x$, the morphism $T \to X$ is of finite
+presentation, and $T \to S$ is flat at $x$, then we can choose $U$ and
+$Z$ as in (1) such that $T \cap U = Z$.
+\item If $T_s \subset X_s$ is a quasi-regular immersion in a neighbourhood
+of $x$, and $T$ is cut out by $c$ equations in a neighbourhood of $x$,
+where $c = \dim_x(X_s) - \dim_x(T_s)$, then we can choose $U$ and $Z$ as in (1)
+such that $T \cap U = Z$.
+\end{enumerate}
+In each case $Z \to U$ is a regular immersion by
+Lemma \ref{lemma-relative-regular-immersion-flat-in-neighbourhood}.
+In particular, if $T \to S$ is locally of finite presentation and flat and
+all fibres $T_s \subset X_s$ are quasi-regular immersions, then
+$T \to X$ is a relative quasi-regular immersion.
+\end{lemma}
+
+\begin{proof}
+Choose affine open neighbourhoods $\Spec(A)$ of $s$ and
+$\Spec(B)$ of $x$ such that
+$\varphi(\Spec(B)) \subset \Spec(A)$.
+Let $\mathfrak p \subset A$ be the prime ideal corresponding to $s$.
+Let $\mathfrak q \subset B$ be the prime ideal corresponding to $x$.
+Let $I \subset B$ be the ideal corresponding to $T$.
+By the initial assumption of the lemma we know that
+$A \to B$ is flat and of finite presentation.
+The assumption in (1) means that, after shrinking $\Spec(B)$, we may
+assume $I(B \otimes_A \kappa(\mathfrak p))$ is generated by a
+quasi-regular sequence of elements. After possibly localizing $B$
+at some $g \in B$, $g \not \in \mathfrak q$ we may assume there
+exist $f_1, \ldots, f_r \in I$ which map to a quasi-regular
+sequence in $B \otimes_A \kappa(\mathfrak p)$ which generates
+$I(B \otimes_A \kappa(\mathfrak p))$. By
+Algebra, Lemmas \ref{algebra-lemma-quasi-regular-regular} and
+\ref{algebra-lemma-regular-sequence-in-neighbourhood}
+we may assume after another localization that
+$f_1, \ldots, f_r \in I$ form a regular
+sequence in $B \otimes_A \kappa(\mathfrak p)$. By
+Lemma \ref{lemma-fibre-Cartier}
+it follows that $Z_1 = V(f_1) \subset \Spec(B)$
+is a relative effective Cartier divisor, again after possibly
+localizing $B$. Applying the same lemma again, but now to
+$Z_2 = V(f_1, f_2) \subset Z_1$ we see that $Z_2 \subset Z_1$
+is a relative effective Cartier divisor. And so on until one
+reaches $Z = Z_r = V(f_1, \ldots, f_r)$. Then
+$Z \to \Spec(B)$ is a regular immersion and $Z$ is
+flat over $S$, in particular $Z \to \Spec(B)$ is
+a relative quasi-regular immersion over $\Spec(A)$.
+This proves (1).
+
+\medskip\noindent
+To see (2) consider the closed immersion $T \cap U \to Z$. The surjective
+ring map $u : \mathcal{O}_{Z, x} \to \mathcal{O}_{T, x}$
+is a map of flat local $\mathcal{O}_{S, s}$-algebras which
+are essentially of finite presentation, and which becomes an
+isomorphism after dividing by $\mathfrak m_s$. Hence it is
+an isomorphism, see
+Algebra, Lemma \ref{algebra-lemma-mod-injective-general}.
+It follows that $T \cap U \to Z$ is an isomorphism in a neighbourhood
+of $x$, see
+Algebra, Lemma \ref{algebra-lemma-local-isomorphism}.
+
+\medskip\noindent
+To see (3), after possibly shrinking $U$ we may assume that
+the ideal of $Z$ is generated by a regular sequence $f_1, \ldots, f_r$
+(see our construction of $Z$ above)
+and the ideal of $T$ is generated by $g_1, \ldots, g_c$.
+We claim that $c = r$. Namely,
+\begin{align*}
+\dim_x(X_s) & = \dim(\mathcal{O}_{X_s, x}) +
+\text{trdeg}_{\kappa(s)}(\kappa(x)), \\
+\dim_x(T_s) & = \dim(\mathcal{O}_{T_s, x}) +
+\text{trdeg}_{\kappa(s)}(\kappa(x)), \\
+\dim(\mathcal{O}_{X_s, x}) & = \dim(\mathcal{O}_{T_s, x}) + r
+\end{align*}
+the first two equalities by
+Algebra, Lemma \ref{algebra-lemma-dimension-at-a-point-finite-type-field}
+and the last by $r$ applications of
+Algebra, Lemma \ref{algebra-lemma-one-equation}.
+As $T \subset Z$ we see that
+$f_i = \sum b_{ij} g_j$. But the ideals of $Z$ and $T$ cut out the same
+quasi-regular closed subscheme of $X_s$ in a neighbourhood of $x$. Hence
+the matrix $(b_{ij}) \bmod \mathfrak m_x$ is invertible (some details
+omitted). Hence $(b_{ij})$ is invertible in an
+open neighbourhood of $x$. In other words,
+$T \cap U = Z$ after shrinking $U$.
+
+\medskip\noindent
+The final statements of the lemma follow immediately from
+part (2), combined with the fact that $Z \to S$
+is locally of finite presentation if and only if $Z \to X$ is
+of finite presentation, see
+Morphisms, Lemmas \ref{morphisms-lemma-composition-finite-presentation} and
+\ref{morphisms-lemma-finite-presentation-permanence}.
+\end{proof}
+
+\noindent
+The following lemma is an enhancement of
+Morphisms, Lemma \ref{morphisms-lemma-section-smooth-morphism}.
+
+\begin{lemma}
+\label{lemma-section-smooth-regular-immersion}
+Let $f : X \to S$ be a smooth morphism of schemes.
+Let $\sigma : S \to X$ be a section of $f$.
+Then $\sigma$ is a regular immersion.
+\end{lemma}
+
+\begin{proof}
+By
+Schemes, Lemma \ref{schemes-lemma-semi-diagonal}
+the morphism $\sigma$ is an immersion.
+After replacing $X$ by an open neighbourhood of $\sigma(S)$
+we may assume that $\sigma$ is a closed immersion.
+Let $T = \sigma(S)$ be the corresponding closed subscheme of $X$.
+Since $T \to S$ is an isomorphism it is flat and of finite presentation.
+Also a smooth morphism is flat and locally of finite presentation, see
+Morphisms, Lemmas \ref{morphisms-lemma-smooth-flat} and
+\ref{morphisms-lemma-smooth-locally-finite-presentation}.
+Thus, according to
+Lemma \ref{lemma-fibre-quasi-regular},
+it suffices to show that $T_s \subset X_s$ is a quasi-regular closed
+subscheme. This follows immediately from
+Morphisms, Lemma \ref{morphisms-lemma-section-smooth-morphism}
+but we can also see it directly as follows.
+Let $k$ be a field and let $A$ be a smooth $k$-algebra.
+Let $\mathfrak m \subset A$ be a maximal ideal whose residue field is $k$.
+Then $\mathfrak m$ is generated by a quasi-regular sequence, possibly
+after replacing $A$ by $A_g$ for some $g \in A$, $g \not \in \mathfrak m$.
+In
+Algebra, Lemma \ref{algebra-lemma-characterize-smooth-over-field}
+we proved that $A_{\mathfrak m}$ is a regular local ring,
+hence $\mathfrak mA_{\mathfrak m}$ is generated by a regular sequence.
+This does indeed imply that $\mathfrak m$ is generated by a
+regular sequence (after replacing $A$ by $A_g$ for some $g \in A$,
+$g \not \in \mathfrak m$), see
+Algebra, Lemma \ref{algebra-lemma-regular-sequence-in-neighbourhood}.
+\end{proof}
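+
+\medskip\noindent
+For example, if $S = \Spec(A)$ is affine and
+$X = \mathbf{A}^1_S = \Spec(A[x])$ with $\sigma : S \to X$ the zero
+section, then $\sigma$ corresponds to the $A$-algebra map
+$A[x] \to A$, $x \mapsto 0$, whose kernel $(x)$ is generated by the
+nonzerodivisor $x$. Hence $\sigma$ is a regular immersion cut out by a
+single regular element, as predicted by the lemma.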
+
+\noindent
+The following lemma has a kind of converse, see
+Lemma \ref{lemma-push-regular-immersion-thru-smooth}.
+
+\begin{lemma}
+\label{lemma-lift-regular-immersion-to-smooth}
+Let
+$$
+\xymatrix{
+Y \ar[rd]_j \ar[rr]_i & & X \ar[ld] \\
+& S
+}
+$$
+be a commutative diagram of morphisms of schemes.
+Assume $X \to S$ smooth, and $i$, $j$ immersions.
+If $j$ is a regular (resp.\ Koszul-regular, $H_1$-regular, quasi-regular)
+immersion, then so is $i$.
+\end{lemma}
+
+\begin{proof}
+We can write $i$ as the composition
+$$
+Y \to Y \times_S X \to X
+$$
+By
+Lemma \ref{lemma-section-smooth-regular-immersion}
+the first arrow is a regular immersion.
+The second arrow is a flat base change of $Y \to S$, hence is a
+regular (resp.\ Koszul-regular, $H_1$-regular, quasi-regular) immersion, see
+Lemma \ref{lemma-flat-base-change-regular-immersion}.
+We conclude by an application of
+Lemma \ref{lemma-composition-regular-immersion}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-immersion-lci-into-smooth-regular-immersion}
+Let
+$$
+\xymatrix{
+Y \ar[rd] \ar[rr]_i & & X \ar[ld] \\
+& S
+}
+$$
+be a commutative diagram of morphisms of schemes.
+Assume that $Y \to S$ is syntomic, $X \to S$ smooth, and
+$i$ an immersion. Then $i$ is a regular immersion.
+\end{lemma}
+
+\begin{proof}
+After replacing $X$ by an open neighbourhood of $i(Y)$
+we may assume that $i$ is a closed immersion.
+Let $T = i(Y)$ be the corresponding closed subscheme of $X$. Since
+$T \cong Y$ the morphism $T \to S$ is flat and of finite presentation
+(Morphisms, Lemmas
+\ref{morphisms-lemma-syntomic-locally-finite-presentation} and
+\ref{morphisms-lemma-syntomic-flat}).
+Also a smooth morphism is flat and locally of finite presentation
+(Morphisms, Lemmas
+\ref{morphisms-lemma-smooth-flat} and
+\ref{morphisms-lemma-smooth-locally-finite-presentation}).
+Thus, according to
+Lemma \ref{lemma-fibre-quasi-regular},
+it suffices to show that $T_s \subset X_s$ is a quasi-regular closed
+subscheme. As $X_s$ is locally of finite type over a field, it is Noetherian
+(Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}).
+Thus we can check that $T_s \subset X_s$ is a quasi-regular immersion
+at points, see
+Lemma \ref{lemma-Noetherian-scheme-regular-ideal}.
+Take $t \in T_s$. By
+Morphisms, Lemma \ref{morphisms-lemma-local-complete-intersection}
+the local ring $\mathcal{O}_{T_s, t}$ is a local complete intersection
+over $\kappa(s)$.
+The local ring $\mathcal{O}_{X_s, t}$ is regular, see
+Algebra, Lemma \ref{algebra-lemma-characterize-smooth-over-field}.
+By
+Algebra, Lemma \ref{algebra-lemma-lci-local}
+we see that the kernel of the surjection
+$\mathcal{O}_{X_s, t} \to \mathcal{O}_{T_s, t}$ is generated by a regular
+sequence, which is what we had to show.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-immersion-smooth-into-smooth-regular-immersion}
+Let
+$$
+\xymatrix{
+Y \ar[rd] \ar[rr]_i & & X \ar[ld] \\
+& S
+}
+$$
+be a commutative diagram of morphisms of schemes.
+Assume that $Y \to S$ is smooth, $X \to S$ smooth, and
+$i$ an immersion. Then $i$ is a regular immersion.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Lemma \ref{lemma-immersion-lci-into-smooth-regular-immersion}
+because a smooth morphism is syntomic, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-syntomic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-push-regular-immersion-thru-smooth}
+Let
+$$
+\xymatrix{
+Y \ar[rd]_j \ar[rr]_i & & X \ar[ld] \\
+& S
+}
+$$
+be a commutative diagram of morphisms of schemes.
+Assume $X \to S$ smooth, and $i$, $j$ immersions.
+If $i$ is a Koszul-regular (resp.\ $H_1$-regular, quasi-regular)
+immersion, then so is $j$.
+\end{lemma}
+
+\begin{proof}
+Let $y \in Y$ be any point. Set $x = i(y)$ and set $s = j(y)$.
+It suffices to prove the result after replacing $X, S$ by open
+neighbourhoods $U, V$ of $x, s$ and $Y$ by an open neighbourhood
+of $y$ in $i^{-1}(U) \cap j^{-1}(V)$. Hence we may assume that
+$Y$, $X$ and $S$ are affine. In this case we can choose a closed
+immersion $h : X \to \mathbf{A}^n_S$ over $S$ for some $n$. Note that
+$h$ is a regular immersion by
+Lemma \ref{lemma-immersion-smooth-into-smooth-regular-immersion}.
+Hence $h \circ i$ is a Koszul-regular (resp.\ $H_1$-regular, quasi-regular)
+immersion, see
+Lemmas \ref{lemma-composition-regular-immersion} and
+\ref{lemma-regular-quasi-regular-immersion}.
+In this way we reduce to the case $X = \mathbf{A}^n_S$ and $S$ affine.
+
+\medskip\noindent
+After replacing $S$ by an affine open $V$ and replacing $Y$ by
+$j^{-1}(V)$ we may assume that $i$ and $j$ are closed immersions
+and $S$ is affine. Write $S = \Spec(A)$. Then $j : Y \to S$ defines an
+isomorphism of $Y$ to the closed subscheme $\Spec(A/I)$ for
+some ideal $I \subset A$. The map
+$i : Y = \Spec(A/I) \to
+\mathbf{A}^n_S = \Spec(A[x_1, \ldots, x_n])$
+corresponds to an $A$-algebra homomorphism
+$i^\sharp : A[x_1, \ldots, x_n] \to A/I$.
+Choose $a_i \in A$ which map to $i^\sharp(x_i)$ in $A/I$.
+Observe that the ideal of the closed immersion $i$ is
+$$
+J = (x_1 - a_1, \ldots, x_n - a_n) + IA[x_1, \ldots, x_n].
+$$
+Set $K = (x_1 - a_1, \ldots, x_n - a_n)$. We claim the sequence
+$$
+0 \to K/KJ \to J/J^2 \to J/(K + J^2) \to 0
+$$
+is split exact. To see this note that $K/K^2$ is free with basis
+$x_i - a_i$ over the ring $A[x_1, \ldots, x_n]/K \cong A$.
+Hence $K/KJ$ is free with the same basis over the ring
+$A[x_1, \ldots, x_n]/J \cong A/I$. On the other hand, taking derivatives
+gives a map
+$$
+\text{d}_{A[x_1, \ldots, x_n]/A} :
+J/J^2
+\longrightarrow
+\Omega_{A[x_1, \ldots, x_n]/A} \otimes_{A[x_1, \ldots, x_n]}
+A[x_1, \ldots, x_n]/J
+$$
+which maps the generators $x_i - a_i$ to the basis elements $\text{d}x_i$
+of the free module on the right. The claim follows. Moreover, note that
+$x_1 - a_1, \ldots, x_n - a_n$ is a regular sequence in
+$A[x_1, \ldots, x_n]$ with quotient ring
+$A[x_1, \ldots, x_n]/(x_1 - a_1, \ldots, x_n - a_n) \cong A$.
+Thus we have a factorization
+$$
+Y \to V(x_1 - a_1, \ldots, x_n - a_n) \to \mathbf{A}^n_S
+$$
+of our closed immersion $i$ where the composition is
+Koszul-regular (resp.\ $H_1$-regular, quasi-regular),
+the second arrow is a regular immersion, and the associated conormal
+sequence is split. Now the result follows from
+Lemma \ref{lemma-permanence-regular-immersion}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Meromorphic functions and sections}
+\label{section-meromorphic-functions}
+
+\noindent
+This section contains only the general definitions and some elementary results.
+See \cite{misconceptions} for some possible
+pitfalls\footnote{Danger, Will Robinson!}.
+
+\medskip\noindent
+Let $(X, \mathcal{O}_X)$ be a locally ringed space.
+For any open $U \subset X$ we have defined the set
+$\mathcal{S}(U) \subset \mathcal{O}_X(U)$
+of regular sections of $\mathcal{O}_X$ over $U$, see
+Definition \ref{definition-regular-section}. The restriction
+of a regular section to a smaller open is regular. Hence
+$\mathcal{S} : U \mapsto \mathcal{S}(U)$ is a subsheaf (of sets)
+of $\mathcal{O}_X$. We sometimes denote $\mathcal{S} = \mathcal{S}_X$
+if we want to indicate the dependence on $X$.
+Moreover, $\mathcal{S}(U)$
+is a multiplicative subset of the ring $\mathcal{O}_X(U)$ for
+each $U$. Hence we may consider
+the presheaf of rings
+$$
+U \longmapsto \mathcal{S}(U)^{-1} \mathcal{O}_X(U),
+$$
+see Modules, Lemma \ref{modules-lemma-simple-invert}.
+
+\begin{definition}
+\label{definition-sheaf-meromorphic-functions}
+Let $(X, \mathcal{O}_X)$ be a locally ringed space.
+The {\it sheaf of meromorphic functions on $X$} is
+the sheaf {\it $\mathcal{K}_X$} associated to the presheaf
+displayed above. A {\it meromorphic function} on $X$
+is a global section of $\mathcal{K}_X$.
+\end{definition}
+
+\noindent
+Since each element of each $\mathcal{S}(U)$ is a nonzerodivisor on
+$\mathcal{O}_X(U)$ we see that the natural map of sheaves
+of rings $\mathcal{O}_X \to \mathcal{K}_X$ is injective.
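+
+\medskip\noindent
+For instance, if $X$ is an integral scheme with generic point $\eta$,
+then for every nonempty open $U \subset X$ the ring $\mathcal{O}_X(U)$
+is a domain contained in $\kappa(\eta)$, so that
+$\mathcal{S}(U) = \mathcal{O}_X(U) \setminus \{0\}$ and
+$$
+\mathcal{S}(U)^{-1}\mathcal{O}_X(U)
+\subset \kappa(\eta) = \mathcal{O}_{X, \eta}.
+$$
+Sheafifying, one finds that $\mathcal{K}_X$ is the constant sheaf with
+value the function field $\kappa(\eta)$ of $X$.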
+
+\begin{example}
+\label{example-no-change}
+Let $A = \mathbf{C}[x, \{y_\alpha\}_{\alpha \in \mathbf{C}}]/
+((x - \alpha)y_\alpha, y_\alpha y_\beta)$. Any element of $A$
+can be written uniquely as
+$f(x) + \sum \lambda_\alpha y_\alpha$ with $f(x) \in \mathbf{C}[x]$
+and $\lambda_\alpha \in \mathbf{C}$.
+Let $X = \Spec(A)$.
+In this case $\mathcal{O}_X = \mathcal{K}_X$, since for
+any affine open $D(f)$ every nonzerodivisor of the ring $A_f$ is
+a unit (proof omitted).
+\end{example}
+
+\noindent
+Let $(X, \mathcal{O}_X)$ be a locally ringed space.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_X$-modules.
+Consider the presheaf $U \mapsto \mathcal{S}(U)^{-1}\mathcal{F}(U)$.
+Its sheafification is the sheaf
+$\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{K}_X$, see
+Modules, Lemma \ref{modules-lemma-simple-invert-module}.
+
+\begin{definition}
+\label{definition-meromorphic-section}
+Let $X$ be a locally ringed space.
+Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_X$-modules.
+\begin{enumerate}
+\item We denote
+$\mathcal{K}_X(\mathcal{F})$
+the sheaf of $\mathcal{K}_X$-modules which is
+the sheafification of the presheaf
+$U \mapsto \mathcal{S}(U)^{-1}\mathcal{F}(U)$. Equivalently
+$\mathcal{K}_X(\mathcal{F}) =
+\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{K}_X$ (see above).
+\item A {\it meromorphic section of $\mathcal{F}$}
+is a global section of $\mathcal{K}_X(\mathcal{F})$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+In particular we have
+$$
+\mathcal{K}_X(\mathcal{F})_x
+=
+\mathcal{F}_x \otimes_{\mathcal{O}_{X, x}} \mathcal{K}_{X, x}
+=
+\mathcal{S}_x^{-1}\mathcal{F}_x
+$$
+for any point $x \in X$. However, one has to be careful since it may
+not be the case that $\mathcal{S}_x$ is the set of nonzerodivisors
+in the local ring $\mathcal{O}_{X, x}$. Namely, there is always
+an injective map
+$$
+\mathcal{K}_{X, x} \longrightarrow Q(\mathcal{O}_{X, x})
+$$
+to the total quotient ring. It is also surjective if and only if
+$\mathcal{S}_x$ is the set of nonzerodivisors in $\mathcal{O}_{X, x}$.
+The sheaves of meromorphic sections aren't quasi-coherent
+modules in general, but they do have some properties in common
+with quasi-coherent modules.
+
+\begin{definition}
+\label{definition-pullback-meromorphic-sections}
+Let $f : (X, \mathcal{O}_X) \to (Y, \mathcal{O}_Y)$ be a morphism
+of locally ringed spaces. We say that {\it pullbacks of meromorphic
+functions are defined for $f$} if for every pair of open
+$U \subset X$, $V \subset Y$ such that $f(U) \subset V$, and any
+section $s \in \Gamma(V, \mathcal{S}_Y)$ the pullback
+$f^\sharp(s) \in \Gamma(U, \mathcal{O}_X)$ is an element
+of $\Gamma(U, \mathcal{S}_X)$.
+\end{definition}
+
+\noindent
+In this case there is an induced map
+$f^\sharp : f^{-1}\mathcal{K}_Y \to \mathcal{K}_X$,
+in other words we obtain a commutative diagram of morphisms
+of ringed spaces
+$$
+\xymatrix{
+(X, \mathcal{K}_X) \ar[r] \ar[d]^f &
+(X, \mathcal{O}_X) \ar[d]^f \\
+(Y, \mathcal{K}_Y) \ar[r] &
+(Y, \mathcal{O}_Y)
+}
+$$
+We sometimes denote $f^*(s) = f^\sharp(s)$ for a
+section $s \in \Gamma(Y, \mathcal{K}_Y)$.
+
+\begin{lemma}
+\label{lemma-pullback-meromorphic-sections-defined}
+Let $f : X \to Y$ be a morphism of schemes.
+In each of the following cases pullbacks of meromorphic
+functions are defined.
+\begin{enumerate}
+\item every weakly associated point of $X$ maps to
+a generic point of an irreducible component of $Y$,
+\item $X$, $Y$ are integral and $f$ is dominant,
+\item $X$ is integral and the generic point of $X$ maps
+to a generic point of an irreducible component of $Y$,
+\item $X$ is reduced and every generic point of every irreducible
+component of $X$ maps to the generic point of an irreducible component
+of $Y$,
+\item $X$ is locally Noetherian, and any associated point of
+$X$ maps to a generic point of an irreducible component of $Y$,
+\item $X$ is locally Noetherian, has no embedded points and
+any generic point of an irreducible component of
+$X$ maps to the generic point of an irreducible component of $Y$, and
+\item $f$ is flat.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$ and $Y$. Hence we reduce to the case where
+$X = \Spec(A)$, $Y = \Spec(R)$ and $f$ is given by a ring map
+$\varphi : R \to A$.
+By the characterization of regular sections of the structure sheaf
+in Lemma \ref{lemma-regular-section-structure-sheaf} we have to
+show that $R \to A$ maps nonzerodivisors to nonzerodivisors.
+Let $t \in R$ be a nonzerodivisor.
+
+\medskip\noindent
+If $R \to A$ is flat, then $t : R \to R$ being injective
+shows that $t : A \to A$ is injective. This proves (7).
+
+\medskip\noindent
+In the other cases we note that $t$ is not contained in any of
+the minimal primes of $R$ (because every element of a minimal
+prime in a ring is a zerodivisor).
+Hence in case (1) we see that $\varphi(t)$ is not contained
+in any weakly associated prime of $A$. Thus this case follows from
+Algebra, Lemma \ref{algebra-lemma-weakly-ass-zero-divisors}.
+Case (5) is a special case of (1) by Lemma \ref{lemma-ass-weakly-ass}.
+Case (6) follows from (5) and the definitions.
+Case (4) is a special case of (1) by
+Lemma \ref{lemma-weakass-reduced}.
+Cases (2) and (3) are special cases of (4).
+\end{proof}
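+
+\medskip\noindent
+On the other hand, pullbacks of meromorphic functions need not be
+defined. For example, consider the closed immersion
+$f : \Spec(k) \to \Spec(k[x])$ corresponding to the ring map
+$k[x] \to k$, $x \mapsto 0$. The nonzerodivisor $x$ pulls back
+to $0$, which is not a regular section.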
+
+
+\begin{lemma}
+\label{lemma-meromorphic-weakass-finite}
+Let $X$ be a scheme such that
+\begin{enumerate}
+\item[(a)] every weakly associated point of $X$ is a generic point of an
+irreducible component of $X$, and
+\item[(b)] any quasi-compact open has a finite number of irreducible components.
+\end{enumerate}
+Let $X^0$ be the set of generic points of irreducible components of $X$.
+Then we have
+$$
+\mathcal{K}_X =
+\bigoplus\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{O}_{X, \eta} =
+\prod\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{O}_{X, \eta}
+$$
+where $j_\eta : \Spec(\mathcal{O}_{X, \eta}) \to X$ is the canonical map
+of Schemes, Section \ref{schemes-section-points}. Moreover
+\begin{enumerate}
+\item $\mathcal{K}_X$ is a quasi-coherent sheaf of
+$\mathcal{O}_X$-algebras,
+\item for every quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ the sheaf
+$$
+\mathcal{K}_X(\mathcal{F}) =
+\bigoplus\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{F}_\eta =
+\prod\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{F}_\eta
+$$
+of meromorphic sections of $\mathcal{F}$
+is quasi-coherent,
+\item $\mathcal{S}_x \subset \mathcal{O}_{X, x}$
+is the set of nonzerodivisors for any $x \in X$,
+\item $\mathcal{K}_{X, x}$ is the total quotient ring of $\mathcal{O}_{X, x}$
+for any $x \in X$,
+\item $\mathcal{K}_X(U)$ equals the total quotient ring of $\mathcal{O}_X(U)$
+for any affine open $U \subset X$,
+\item the ring of rational functions of $X$
+(Morphisms, Definition \ref{morphisms-definition-rational-function})
+is the ring of meromorphic
+functions on $X$, in a formula: $R(X) = \Gamma(X, \mathcal{K}_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Observe that a locally finite direct sum of sheaves of modules
+is equal to the product, as can be checked on stalks.
+Then since $\mathcal{K}_X(\mathcal{F}) =
+\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{K}_X$
+we see that (2) follows from the other statements.
+Also, observe that part (6) follows from the initial
+statement of the lemma and Morphisms, Lemma
+\ref{morphisms-lemma-integral-scheme-rational-functions}
+when $X^0$ is finite; the general case of (6) follows from this
+by glueing (argument omitted).
+
+\medskip\noindent
+Let $j : Y = \coprod\nolimits_{\eta \in X^0} \Spec(\mathcal{O}_{X, \eta}) \to X$
+be the morphism induced by the morphisms $j_\eta$. We have to show that
+$\mathcal{K}_X = j_*\mathcal{O}_Y$.
+First note that $\mathcal{K}_Y = \mathcal{O}_Y$ as $Y$ is a disjoint
+union of spectra of local rings of dimension $0$: in a local
+ring of dimension zero any nonzerodivisor is a unit.
+Next, note that pullbacks of meromorphic
+functions are defined for $j$ by
+Lemma \ref{lemma-pullback-meromorphic-sections-defined}.
+This gives a map
+$$
+\mathcal{K}_X \longrightarrow j_*\mathcal{O}_Y.
+$$
+Let $\Spec(A) = U \subset X$ be an affine open. Then $A$ is a ring
+with finitely many minimal primes $\mathfrak q_1, \ldots, \mathfrak q_t$
+and every weakly associated prime of $A$ is one of the $\mathfrak q_i$.
+We obtain $Q(A) = \prod A_{\mathfrak q_i}$
+by Algebra, Lemmas \ref{algebra-lemma-total-ring-fractions-no-embedded-points}
+and \ref{algebra-lemma-weakly-ass-zero-divisors}.
+In other words, already the value of the presheaf
+$U \mapsto \mathcal{S}(U)^{-1}\mathcal{O}_X(U)$ agrees with
+$j_*\mathcal{O}_Y(U)$ on our affine open $U$. Hence the displayed
+map is an isomorphism which proves the first displayed equality in
+the statement of the lemma.
+
+\medskip\noindent
+Finally, we prove (1), (3), (4), and (5).
+Part (5) was established during the course of the proof of the
+equality $\mathcal{K}_X = j_*\mathcal{O}_Y$ above.
+The morphism $j$ is quasi-compact by our assumption
+that the set of irreducible components of $X$ is locally finite.
+Hence $j$ is quasi-compact and quasi-separated (as $Y$ is separated).
+By Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}
+$j_*\mathcal{O}_Y$ is quasi-coherent. This proves (1).
+Let $x \in X$. We may choose an affine open neighbourhood
+$U = \Spec(A)$ of $x$ all of whose irreducible components
+pass through $x$. Let $\mathfrak p \subset A$ be the prime ideal
+corresponding to $x$. Then $A \subset A_\mathfrak p$ because every
+weakly associated prime of $A$ is contained in $\mathfrak p$,
+hence elements of $A \setminus \mathfrak p$ are nonzerodivisors
+by Algebra, Lemma \ref{algebra-lemma-weakly-ass-zero-divisors}.
+It follows easily that any nonzerodivisor of $A_\mathfrak p$
+is the image of a nonzerodivisor on a (possibly smaller)
+affine open neighbourhood of $x$. This proves (3).
+Part (4) follows from part (3) by computing stalks.
+\end{proof}
+
+\begin{definition}
+\label{definition-regular-meromorphic-section}
+Let $X$ be a locally ringed space.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+A meromorphic section $s$ of $\mathcal{L}$ is said to be {\it regular}
+if the induced map
+$\mathcal{K}_X \to \mathcal{K}_X(\mathcal{L})$
+is injective. In other words, $s$ is a regular
+section of the invertible $\mathcal{K}_X$-module
+$\mathcal{K}_X(\mathcal{L})$, see
+Definition \ref{definition-regular-section}.
+\end{definition}
+
+\noindent
+Let us spell out when (regular) meromorphic sections can be pulled back.
+
+\begin{lemma}
+\label{lemma-meromorphic-sections-pullback}
+Let $f : X \to Y$ be a morphism of locally ringed spaces.
+Assume that pullbacks of meromorphic functions are defined
+for $f$ (see
+Definition \ref{definition-pullback-meromorphic-sections}).
+\begin{enumerate}
+\item Let $\mathcal{F}$ be a sheaf of $\mathcal{O}_Y$-modules.
+There is a canonical pullback map
+$f^* : \Gamma(Y, \mathcal{K}_Y(\mathcal{F})) \to
+\Gamma(X, \mathcal{K}_X(f^*\mathcal{F}))$
+for meromorphic sections of $\mathcal{F}$.
+\item Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+A regular meromorphic section $s$ of $\mathcal{L}$ pulls back
+to a regular meromorphic section $f^*s$ of $f^*\mathcal{L}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-meromorphic-ideal-denominators}
+Let $X$ be a scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $s$ be a regular meromorphic section of $\mathcal{L}$.
+Let us denote $\mathcal{I} \subset \mathcal{O}_X$ the
+sheaf of ideals defined by the rule
+$$
+\mathcal{I}(V)
+=
+\{f \in \mathcal{O}_X(V) \mid fs \in \mathcal{L}(V)\}.
+$$
+The formula makes sense since
+$\mathcal{L}(V) \subset \mathcal{K}_X(\mathcal{L})(V)$.
+Then $\mathcal{I}$ is a quasi-coherent sheaf of ideals and
+we have injective maps
+$$
+1 : \mathcal{I} \longrightarrow \mathcal{O}_X,
+\quad
+s : \mathcal{I} \longrightarrow \mathcal{L}
+$$
+whose cokernels are supported on closed nowhere dense subsets of $X$.
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$.
+Hence we may assume that $X = \Spec(A)$,
+and $\mathcal{L} = \mathcal{O}_X$. After shrinking further
+we may assume that $s = a/b$ with $a, b \in A$ {\it both}
+nonzerodivisors in $A$. Set $I = \{x \in A \mid x(a/b) \in A\}$.
+
+\medskip\noindent
+To show that $\mathcal{I}$ is quasi-coherent we have to show
+that $I_f = \{x \in A_f \mid x(a/b) \in A_f\}$ for every
+$f \in A$. If $c/f^n \in A_f$ satisfies $(c/f^n)(a/b) \in A_f$, then we see
+that $f^mc(a/b) \in A$ for some $m$, hence $c/f^n \in I_f$.
+Conversely it is easy to see that $I_f$ is contained in
+$\{x \in A_f \mid x(a/b) \in A_f\}$. This proves quasi-coherence.
+
+\medskip\noindent
+Let us prove the final statement. It is clear that $(b) \subset I$.
+Hence $V(I) \subset V(b)$ is a nowhere dense subset as $b$ is
+a nonzerodivisor. Thus the cokernel of $1$ is supported in a nowhere
+dense closed set. The same argument works for the cokernel
+of $s$ since $sb = a$ and hence $(a) \subset sI \subset A$.
+\end{proof}
+
+\begin{definition}
+\label{definition-regular-meromorphic-ideal-denominators}
+Let $X$ be a scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $s$ be a regular meromorphic section of $\mathcal{L}$.
+The sheaf of ideals $\mathcal{I}$ constructed in
+Lemma \ref{lemma-regular-meromorphic-ideal-denominators}
+is called the {\it ideal sheaf of denominators of $s$}.
+\end{definition}
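+
+\medskip\noindent
+For example, let $X = \Spec(k[x, y])$, let $\mathcal{L} = \mathcal{O}_X$,
+and let $s = x/y$. This is a regular meromorphic section since $x/y$ is
+a unit in the function field $k(x, y)$. The ideal of denominators of $s$
+is $I = (y)$: indeed, $f \cdot x/y \in k[x, y]$ if and only if $y$
+divides $fx$, if and only if $y$ divides $f$, as $y$ is a prime element
+not dividing $x$. The cokernels of $1 : I \to \mathcal{O}_X$ and
+$s : I \to \mathcal{L}$ are supported on the nowhere dense closed
+subsets $V(y)$ and $V(x)$ respectively.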
+
+
+
+
+\section{Meromorphic functions and sections; Noetherian case}
+\label{section-meromorphic-noetherian}
+
+\noindent
+For locally Noetherian schemes we can prove some results about the
+sheaf of meromorphic functions. However, there is an example in
+\cite{misconceptions} showing that $\mathcal{K}_X$ need not be quasi-coherent
+for a Noetherian scheme $X$.
+
+\begin{lemma}
+\label{lemma-meromorphic-section-restricts-to-zero}
+Let $X$ be a quasi-compact scheme. Let $h \in \Gamma(X, \mathcal{O}_X)$ and
+$f \in \Gamma(X, \mathcal{K}_X)$ such that $f$ restricts
+to zero on $X_h$. Then $h^n f = 0$ for some $n \gg 0$.
+\end{lemma}
+
+\begin{proof}
+We can find a covering of $X$ by affine opens $U$ such that $f|_U = s^{-1}a$
+with $a \in \mathcal{O}_X(U)$ and $s \in \mathcal{S}(U)$. Since $X$ is
+quasi-compact we can cover it by finitely many affine opens of this form.
+Thus it suffices to prove the lemma when $X = \Spec(A)$ and $f = s^{-1}a$.
+Note that $s \in A$ is a nonzerodivisor hence it suffices to prove
+the result when $f = a$. The condition $f|_{X_h} = 0$ implies that
+$a$ maps to zero in $A_h = \mathcal{O}_X(X_h)$ as
+$\mathcal{O}_X \subset \mathcal{K}_X$. Thus $h^na = 0$ for some $n > 0$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-Noetherian-K}
+Let $X$ be a locally Noetherian scheme.
+\begin{enumerate}
+\item For any $x \in X$ we have $\mathcal{S}_x \subset \mathcal{O}_{X, x}$
+is the set of nonzerodivisors, and hence $\mathcal{K}_{X, x}$
+is the total quotient ring of $\mathcal{O}_{X, x}$.
+\item For any affine open $U \subset X$ the ring
+$\mathcal{K}_X(U)$ equals the total quotient ring of $\mathcal{O}_X(U)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove this lemma we may assume $X$ is the spectrum of a Noetherian
+ring $A$. Say $x \in X$ corresponds to $\mathfrak p \subset A$.
+
+\medskip\noindent
+Proof of (1). It is clear that $\mathcal{S}_x$ is contained
+in the set of nonzerodivisors of $\mathcal{O}_{X, x} = A_\mathfrak p$.
+For the converse, let $f, g \in A$, $g \not \in \mathfrak p$ and
+assume $f/g$ is a nonzerodivisor in $A_{\mathfrak p}$. Let
+$I = \{a \in A \mid af = 0\}$. Then we see that $I_{\mathfrak p} = 0$ by
+exactness of localization. Since $A$ is Noetherian we see that $I$
+is finitely generated and hence that $g'I = 0$ for some $g' \in A$,
+$g' \not \in \mathfrak p$. Hence $f$ is a nonzerodivisor
+in $A_{g'}$, i.e., in a Zariski open neighbourhood of $\mathfrak p$.
+Thus $f/g$ is an element of $\mathcal{S}_x$.
+
+\medskip\noindent
+Proof of (2). Let $f \in \Gamma(X, \mathcal{K}_X)$ be a meromorphic function.
+Set $I = \{a \in A \mid af \in A\}$. Fix a prime $\mathfrak p \subset A$
+corresponding to the point $x \in X$. By (1) we can write the image of $f$
+in the stalk at $\mathfrak p$ as $a/b$, $a, b \in A_{\mathfrak p}$ with
+$b \in A_{\mathfrak p}$ not a zerodivisor. Write $b = c/d$ with
+$c, d \in A$, $d \not \in \mathfrak p$. Then $ad - cf$ is a section of
+$\mathcal{K}_X$ which vanishes in an open neighbourhood of $x$. Say it
+vanishes on $D(e)$ with $e \in A$, $e \not \in \mathfrak p$. Then
+$e^n(ad - cf) = 0$ for some $n \gg 0$ by
+Lemma \ref{lemma-meromorphic-section-restricts-to-zero}.
+Thus $e^nc \in I$ and $e^nc$ maps to a nonzerodivisor in
+$A_{\mathfrak p}$. Let
+$\text{Ass}(A) = \{\mathfrak q_1, \ldots, \mathfrak q_t\}$ be the
+associated primes of $A$. By looking at $IA_{\mathfrak q_i}$ and
+using Algebra, Lemma \ref{algebra-lemma-associated-primes-localize}
+the above says that
+$I \not \subset \mathfrak q_i$ for each $i$. By
+Algebra, Lemma \ref{algebra-lemma-silly}
+there exists an element $y \in I$, $y \not \in \bigcup \mathfrak q_i$.
+By Algebra, Lemma \ref{algebra-lemma-ass-zero-divisors}
+we see that $y$ is not a zerodivisor on $A$.
+Hence $f = (yf)/y$ is an element of the total ring of fractions of $A$.
+This proves (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-quasi-coherent-K}
+Let $X$ be a locally Noetherian scheme having no embedded points.
+Let $X^0$ be the set of generic points of irreducible components of $X$.
+Then we have
+$$
+\mathcal{K}_X =
+\bigoplus\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{O}_{X, \eta} =
+\prod\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{O}_{X, \eta}
+$$
+where $j_\eta : \Spec(\mathcal{O}_{X, \eta}) \to X$ is the canonical map
+of Schemes, Section \ref{schemes-section-points}. Moreover
+\begin{enumerate}
+\item $\mathcal{K}_X$ is a quasi-coherent sheaf of $\mathcal{O}_X$-algebras,
+\item for every quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ the sheaf
+$$
+\mathcal{K}_X(\mathcal{F}) =
+\bigoplus\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{F}_\eta =
+\prod\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{F}_\eta
+$$
+of meromorphic sections of $\mathcal{F}$ is quasi-coherent, and
+\item the ring of rational functions of $X$ is the ring of meromorphic
+functions on $X$, in a formula: $R(X) = \Gamma(X, \mathcal{K}_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma is a special case of
+Lemma \ref{lemma-meromorphic-weakass-finite}
+because in the locally Noetherian case
+weakly associated points are the same thing
+as associated points by Lemma \ref{lemma-ass-weakly-ass}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-meromorphic-section-exists-noetherian}
+Let $X$ be a locally Noetherian scheme having no embedded points.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Then $\mathcal{L}$ has a regular meromorphic section.
+\end{lemma}
+
+\begin{proof}
+For each generic point $\eta$ of $X$ pick a generator
+$s_\eta$ of the free rank $1$ module $\mathcal{L}_\eta$
+over the artinian local ring $\mathcal{O}_{X, \eta}$.
+It follows immediately from the description of
+$\mathcal{K}_X$ and $\mathcal{K}_X(\mathcal{L})$ in
+Lemma \ref{lemma-quasi-coherent-K} that $s = \prod s_\eta$
+is a regular meromorphic section of $\mathcal{L}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-make-maps-regular-section}
+Suppose given
+\begin{enumerate}
+\item $X$ a locally Noetherian scheme,
+\item $\mathcal{L}$ an invertible $\mathcal{O}_X$-module,
+\item $s$ a regular meromorphic section of $\mathcal{L}$, and
+\item $\mathcal{F}$ coherent on $X$
+without embedded associated points and $\text{Supp}(\mathcal{F}) = X$.
+\end{enumerate}
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the ideal of
+denominators of $s$. Let $T \subset X$ be the union
+of the supports of $\mathcal{O}_X/\mathcal{I}$ and
$\mathcal{L}/s(\mathcal{I})$, which is a nowhere dense closed
subset of $X$ according to
+Lemma \ref{lemma-regular-meromorphic-ideal-denominators}.
+Then there are canonical injective maps
+$$
+1 : \mathcal{I}\mathcal{F} \to \mathcal{F}, \quad
+s : \mathcal{I}\mathcal{F} \to \mathcal{F} \otimes_{\mathcal{O}_X}\mathcal{L}
+$$
+whose cokernels are supported on $T$.
+\end{lemma}
+
+\begin{proof}
+Reduce to the affine case with $\mathcal{L} \cong \mathcal{O}_X$,
+and $s = a/b$ with $a, b \in A$ both nonzerodivisors.
+Proof of reduction step omitted.
+Write $\mathcal{F} = \widetilde{M}$.
+Let $I = \{x \in A \mid x(a/b) \in A\}$
+so that $\mathcal{I} = \widetilde{I}$ (see
+proof of Lemma \ref{lemma-regular-meromorphic-ideal-denominators}).
+Note that $T = V(I) \cup V((a/b)I)$.
Consider the map $1 : IM \to M$ induced by the inclusion
$IM \subset M$; this is the map that gives rise to the map $1$ of the lemma.
+Consider on the other hand the map
+$\sigma : IM \to M_b, x \mapsto ax/b$.
+Since $b$ is not a zerodivisor in $A$, and since
+$M$ has support $\Spec(A)$ and no embedded primes we
+see that $b$ is a nonzerodivisor on $M$ also. Hence $M \subset M_b$.
+By definition of $I$ we have $\sigma(IM) \subset M$ as submodules
+of $M_b$. Hence we get an $A$-module map $s : IM \to M$ (namely the
+unique map such that $s(z)/1 = \sigma(z)$ in $M_b$ for all $z \in IM$).
+It is injective because $a$ is a nonzerodivisor also (on both $A$ and $M$).
+It is clear that $M/IM$ is annihilated by $I$ and that
+$M/s(IM)$ is annihilated by $(a/b)I$. Thus the lemma follows.
+\end{proof}
+
+
+
+
+
+\section{Meromorphic functions and sections; reduced case}
+\label{section-meromorphic-reduced}
+
+\noindent
+For a scheme which is reduced and which locally has finitely many irreducible
+components, the sheaf of meromorphic functions is quasi-coherent.
+
+\begin{lemma}
+\label{lemma-reduced-finite-irreducible}
+Let $X$ be a reduced scheme such that any quasi-compact open
+has a finite number of irreducible components. Let $X^0$ be the set
+of generic points of irreducible components of $X$. Then we have
+$$
+\mathcal{K}_X =
+\bigoplus\nolimits_{\eta \in X^0} j_{\eta, *}\kappa(\eta) =
+\prod\nolimits_{\eta \in X^0} j_{\eta, *}\kappa(\eta)
+$$
+where $j_\eta : \Spec(\kappa(\eta)) \to X$ is the canonical map
+of Schemes, Section \ref{schemes-section-points}. Moreover
+\begin{enumerate}
+\item $\mathcal{K}_X$ is a quasi-coherent sheaf of
+$\mathcal{O}_X$-algebras,
+\item for every quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ the sheaf
+$$
+\mathcal{K}_X(\mathcal{F}) =
+\bigoplus\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{F}_\eta =
+\prod\nolimits_{\eta \in X^0} j_{\eta, *}\mathcal{F}_\eta
+$$
+of meromorphic sections of $\mathcal{F}$
+is quasi-coherent,
+\item $\mathcal{S}_x \subset \mathcal{O}_{X, x}$
+is the set of nonzerodivisors for any $x \in X$,
+\item $\mathcal{K}_{X, x}$ is the total quotient ring of $\mathcal{O}_{X, x}$
+for any $x \in X$,
+\item $\mathcal{K}_X(U)$ equals the total quotient ring of $\mathcal{O}_X(U)$
+for any affine open $U \subset X$,
+\item the ring of rational functions of $X$ is the ring of meromorphic
+functions on $X$, in a formula: $R(X) = \Gamma(X, \mathcal{K}_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This lemma is a special case of
+Lemma \ref{lemma-meromorphic-weakass-finite}
+because on a reduced scheme the weakly associated
+points are the generic points by
+Lemma \ref{lemma-weakass-reduced}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduced-normalization}
+Let $X$ be a scheme.
+Assume $X$ is reduced and any quasi-compact open $U \subset X$
+has a finite number of irreducible components.
+Then the normalization morphism $\nu : X^\nu \to X$ is the
+morphism
+$$
+\underline{\Spec}_X(\mathcal{O}') \longrightarrow X
+$$
+where $\mathcal{O}' \subset \mathcal{K}_X$ is the integral
+closure of $\mathcal{O}_X$ in the sheaf of meromorphic functions.
+\end{lemma}
+
+\begin{proof}
+Compare the definition of the normalization morphism
+$\nu : X^\nu \to X$ (see
+Morphisms, Definition \ref{morphisms-definition-normalization})
+with the description of $\mathcal{K}_X$ in
+Lemma \ref{lemma-reduced-finite-irreducible} above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-meromorphic-functions-integral-scheme}
+Let $X$ be an integral scheme with generic point $\eta$. We have
+\begin{enumerate}
+\item the sheaf of meromorphic functions is
isomorphic to the constant sheaf with value the
function field of $X$ (see
Morphisms, Definition \ref{morphisms-definition-function-field}).
+\item for any quasi-coherent sheaf $\mathcal{F}$ on $X$ the
+sheaf $\mathcal{K}_X(\mathcal{F})$ is isomorphic to the
+constant sheaf with value $\mathcal{F}_\eta$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\noindent
+In some cases we can show regular meromorphic sections exist.
+
+\begin{lemma}
+\label{lemma-regular-meromorphic-section-exists}
+Let $X$ be a scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+In each of the following cases $\mathcal{L}$ has a regular meromorphic
+section:
+\begin{enumerate}
+\item $X$ is integral,
+\item $X$ is reduced and any quasi-compact open has a finite
+number of irreducible components,
+\item $X$ is locally Noetherian and has no embedded points.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In case (1) let $\eta \in X$ be the generic point. We have seen in
+Lemma \ref{lemma-meromorphic-functions-integral-scheme}
+that $\mathcal{K}_X$, resp.\ $\mathcal{K}_X(\mathcal{L})$
+is the constant sheaf with value
+$\kappa(\eta)$, resp.\ $\mathcal{L}_\eta$.
+Since $\dim_{\kappa(\eta)} \mathcal{L}_\eta = 1$
+we can pick a nonzero element $s \in \mathcal{L}_\eta$.
+Clearly $s$ is a regular meromorphic section of $\mathcal{L}$.
+In case (2) pick $s_\eta \in \mathcal{L}_\eta$ nonzero
+for all generic points $\eta$ of $X$; this is possible
+as $\mathcal{L}_\eta$ is a $1$-dimensional vector space
+over $\kappa(\eta)$.
+It follows immediately from the description of
+$\mathcal{K}_X$ and $\mathcal{K}_X(\mathcal{L})$
+in Lemma \ref{lemma-reduced-finite-irreducible}
+that $s = \prod s_\eta$ is a regular meromorphic section of $\mathcal{L}$.
+Case (3) is Lemma \ref{lemma-regular-meromorphic-section-exists-noetherian}.
+\end{proof}
+
+
+
+
+
+
+\section{Weil divisors}
+\label{section-Weil-divisors}
+
+\noindent
+We will introduce Weil divisors and rational equivalence of Weil
+divisors for locally Noetherian integral schemes.
+Since we are not assuming our schemes are quasi-compact we have
+to be a little careful when defining Weil divisors. We have to allow
+infinite sums of prime divisors because a rational function may have
+infinitely many poles for example. For quasi-compact schemes our
+Weil divisors are finite sums as usual. Here is a basic lemma we will
+often use to prove collections of closed subschemes are locally finite.
+
+\begin{lemma}
+\label{lemma-components-locally-finite}
+Let $X$ be a locally Noetherian scheme. Let $Z \subset X$ be a closed
+subscheme. The collection of irreducible components of $Z$
+is locally finite in $X$.
+\end{lemma}
+
+\begin{proof}
+Let $U \subset X$ be a quasi-compact open subscheme. Then $U$ is a Noetherian
+scheme, and hence has a Noetherian underlying topological space
+(Properties, Lemma \ref{properties-lemma-Noetherian-topology}).
+Hence every subspace is Noetherian and
+has finitely many irreducible components
+(see Topology, Lemma \ref{topology-lemma-Noetherian}).
+\end{proof}
+
+\noindent
+Recall that if $Z$ is an irreducible closed subset of a scheme $X$,
+then the codimension of $Z$ in $X$ is equal to the dimension
+of the local ring $\mathcal{O}_{X, \xi}$, where $\xi \in Z$
+is the generic point. See
+Properties, Lemma \ref{properties-lemma-codimension-local-ring}.
+
+\begin{definition}
+\label{definition-Weil-divisor}
+Let $X$ be a locally Noetherian integral scheme.
+\begin{enumerate}
+\item A {\it prime divisor} is an integral closed subscheme $Z \subset X$
+of codimension $1$.
+\item A {\it Weil divisor} is a formal sum $D = \sum n_Z Z$ where
+the sum is over prime divisors of $X$ and the collection
+$\{Z \mid n_Z \not = 0\}$ is locally finite
+(Topology, Definition \ref{topology-definition-locally-finite}).
+\end{enumerate}
+The group of all Weil divisors on $X$ is denoted $\text{Div}(X)$.
+\end{definition}
+
+\noindent
+Our next task is to define the Weil divisor associated to a rational
+function. In order to do this we use the order of vanishing of a
+rational function along a prime divisor which is defined as follows.
+
+\begin{definition}
+\label{definition-order-vanishing}
+Let $X$ be a locally Noetherian integral scheme. Let $f \in R(X)^*$.
+For every prime divisor $Z \subset X$ we define the
+{\it order of vanishing of $f$ along $Z$} as the integer
+$$
+\text{ord}_Z(f) = \text{ord}_{\mathcal{O}_{X, \xi}}(f)
+$$
+where the right hand side is the notion of
+Algebra, Definition \ref{algebra-definition-ord}
+and $\xi$ is the generic point of $Z$.
+\end{definition}
+
+\noindent
+Note that for $f, g \in R(X)^*$ we have
+$$
+\text{ord}_Z(fg) = \text{ord}_Z(f) + \text{ord}_Z(g).
+$$
+Of course it can happen that $\text{ord}_Z(f) < 0$.
+In this case we say that $f$ has a {\it pole} along $Z$
+and that $-\text{ord}_Z(f) > 0$ is the
+{\it order of pole of $f$ along $Z$}. It is important to note
+that the condition $\text{ord}_Z(f) \geq 0$ is {\bf not} equivalent
+to the condition $f \in \mathcal{O}_{X, \xi}$ unless the local
+ring $\mathcal{O}_{X, \xi}$ is a discrete valuation ring.
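
\medskip\noindent
A standard example illustrating this: let $X = \Spec(k[t^2, t^3])$ be
the cuspidal cubic curve $y^2 = x^3$, let $Z \subset X$ be the closed
point corresponding to the cusp (a prime divisor as $\dim X = 1$), and
let $f = t = t^3/t^2 \in R(X) = k(t)$. Writing $A = \mathcal{O}_{X, \xi}$
for the local ring at the point $\xi$ of $Z$, one computes
$$
\text{ord}_Z(f) =
\text{length}_A(A/t^3A) - \text{length}_A(A/t^2A) = 3 - 2 = 1
$$
so that $\text{ord}_Z(f) > 0$ even though $f \not \in A$, as
$t \not \in k[t^2, t^3]$.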
+
+\begin{lemma}
+\label{lemma-divisor-locally-finite}
+Let $X$ be a locally Noetherian integral scheme. Let $f \in R(X)^*$.
+Then the collections
+$$
+\{Z \subset X \mid Z\text{ a prime divisor with generic point }\xi
+\text{ and }f\text{ not in }\mathcal{O}_{X, \xi}\}
+$$
+and
+$$
+\{Z \subset X \mid Z \text{ a prime divisor and }\text{ord}_Z(f) \not = 0\}
+$$
+are locally finite in $X$.
+\end{lemma}
+
+\begin{proof}
+There exists a nonempty open subscheme $U \subset X$ such that $f$
+corresponds to a section of $\Gamma(U, \mathcal{O}_X^*)$. Hence the
+prime divisors which can occur in the sets of the lemma are all
+irreducible components of $X \setminus U$.
+Hence Lemma \ref{lemma-components-locally-finite} gives the desired result.
+\end{proof}
+
+\noindent
+This lemma allows us to make the following definition.
+
+\begin{definition}
+\label{definition-principal-divisor}
+Let $X$ be a locally Noetherian integral scheme. Let $f \in R(X)^*$.
+The {\it principal Weil divisor associated to $f$} is the Weil divisor
+$$
+\text{div}(f) = \text{div}_X(f) = \sum \text{ord}_Z(f) [Z]
+$$
+where the sum is over prime divisors and $\text{ord}_Z(f)$ is as in
+Definition \ref{definition-order-vanishing}. This makes sense
+by Lemma \ref{lemma-divisor-locally-finite}.
+\end{definition}
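
\noindent
For example, take $X = \Spec(\mathbf{Z})$. The prime divisors are the
closed subschemes $V(p) \subset X$ for $p$ a prime number,
$\text{ord}_{V(p)}$ is the $p$-adic valuation, and for instance
$$
\text{div}(12/5) = 2[V(2)] + [V(3)] - [V(5)].
$$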
+
+\begin{lemma}
+\label{lemma-div-additive}
+Let $X$ be a locally Noetherian integral scheme. Let $f, g \in R(X)^*$. Then
+$$
+\text{div}_X(fg) = \text{div}_X(f) + \text{div}_X(g)
+$$
+as Weil divisors on $X$.
+\end{lemma}
+
+\begin{proof}
+This is clear from the additivity of the $\text{ord}$ functions.
+\end{proof}
+
+\noindent
+We see from the lemma above that the collection of principal Weil divisors
forms a subgroup of the group of all Weil divisors. This leads to the following
+definition.
+
+\begin{definition}
+\label{definition-class-group}
+Let $X$ be a locally Noetherian integral scheme. The
+{\it Weil divisor class group} of $X$ is the quotient of
+the group of Weil divisors by the subgroup of principal Weil divisors.
+Notation: $\text{Cl}(X)$.
+\end{definition}
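
\noindent
For example, if $A$ is a Noetherian UFD, then every height $1$ prime
$\mathfrak p \subset A$ is generated by a prime element $f$, and one
checks that $\text{div}(f) = [V(\mathfrak p)]$ because $f$ is a
uniformizer in $A_\mathfrak p$ and a unit in $A_{\mathfrak p'}$ for every
other height $1$ prime $\mathfrak p'$. Hence every prime divisor of
$\Spec(A)$ is principal and $\text{Cl}(\Spec(A)) = 0$. In particular
$\text{Cl}(\Spec(\mathbf{Z})) = 0$ and $\text{Cl}(\mathbf{A}^n_k) = 0$.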
+
+\noindent
+By construction we obtain an exact complex
+\begin{equation}
+\label{equation-Weil-divisor-class}
+R(X)^* \xrightarrow{\text{div}} \text{Div}(X) \to \text{Cl}(X) \to 0
+\end{equation}
+which we can think of as a presentation of $\text{Cl}(X)$. Our next task
+is to relate the Weil divisor class group to the Picard group.
+
+
+
+
+
+\section{The Weil divisor class associated to an invertible module}
+\label{section-c1}
+
+\noindent
+In this section we go through exactly the same progression as in
+Section \ref{section-Weil-divisors} to define a canonical map
+$\Pic(X) \to \text{Cl}(X)$
+on a locally Noetherian integral scheme.
+
+\medskip\noindent
+Let $X$ be a scheme. Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $\xi \in X$ be a point. If $s_\xi, s'_\xi \in \mathcal{L}_\xi$ generate
+$\mathcal{L}_\xi$ as $\mathcal{O}_{X, \xi}$-module, then there exists a unit
+$u \in \mathcal{O}_{X, \xi}^*$ such that $s_\xi = u s'_\xi$.
The stalk of the sheaf of meromorphic sections
$\mathcal{K}_X(\mathcal{L})$ of $\mathcal{L}$
at $\xi$ is equal to
$\mathcal{K}_{X, \xi} \otimes_{\mathcal{O}_{X, \xi}} \mathcal{L}_\xi$.
Thus the image of any meromorphic section $s$
of $\mathcal{L}$ in the stalk at $\xi$ can be written as $s = fs_\xi$ with
$f \in \mathcal{K}_{X, \xi}$. Below we will abbreviate this by
saying $f = s/s_\xi$. Also, if $X$ is integral, then
$\mathcal{K}_{X, \xi} = R(X)$ is the function field of $X$,
so $s/s_\xi \in R(X)$. If $s$ is a regular meromorphic section,
then actually $s/s_\xi \in R(X)^*$. On an integral scheme a regular
meromorphic section is the same thing as a nonzero meromorphic section.
Finally, we see that $s/s_\xi$ is independent of the choice of $s_\xi$ up to
multiplication by a unit of the local ring $\mathcal{O}_{X, \xi}$.
+Putting everything together we see the following definition makes sense.
+
+\begin{definition}
+\label{definition-order-vanishing-meromorphic}
+Let $X$ be a locally Noetherian integral scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $s \in \Gamma(X, \mathcal{K}_X(\mathcal{L}))$
+be a regular meromorphic section of $\mathcal{L}$.
+For every prime divisor $Z \subset X$ we define the
+{\it order of vanishing of $s$ along $Z$} as the integer
+$$
+\text{ord}_{Z, \mathcal{L}}(s)
+= \text{ord}_{\mathcal{O}_{X, \xi}}(s/s_\xi)
+$$
+where the right hand side is the notion of
+Algebra, Definition \ref{algebra-definition-ord},
+$\xi \in Z$ is the generic point,
+and $s_\xi \in \mathcal{L}_\xi$ is a generator.
+\end{definition}
+
+\noindent
+As in the case of principal divisors we have the following lemma.
+
+\begin{lemma}
+\label{lemma-divisor-meromorphic-locally-finite}
+Let $X$ be a locally Noetherian integral scheme. Let $\mathcal{L}$ be an
+invertible $\mathcal{O}_X$-module. Let $s \in \mathcal{K}_X(\mathcal{L})$ be a
+regular (i.e., nonzero) meromorphic section of $\mathcal{L}$. Then the sets
+$$
+\{Z \subset X \mid Z \text{ a prime divisor with generic point }\xi
+\text{ and }s\text{ not in }\mathcal{L}_\xi\}
+$$
+and
+$$
+\{Z \subset X \mid Z \text{ is a prime divisor and }
+\text{ord}_{Z, \mathcal{L}}(s) \not = 0\}
+$$
+are locally finite in $X$.
+\end{lemma}
+
+\begin{proof}
+There exists a nonempty open subscheme $U \subset X$ such that $s$
+corresponds to a section of $\Gamma(U, \mathcal{L})$ which generates
+$\mathcal{L}$ over $U$. Hence the prime divisors which can occur
+in the sets of the lemma are all irreducible components of $X \setminus U$.
Hence Lemma \ref{lemma-components-locally-finite}
gives the desired result.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-divisor-meromorphic-well-defined}
+Let $X$ be a locally Noetherian integral scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $s, s' \in \mathcal{K}_X(\mathcal{L})$ be nonzero
+meromorphic sections of $\mathcal{L}$. Then $f = s/s'$
+is an element of $R(X)^*$ and we have
+$$
+\sum \text{ord}_{Z, \mathcal{L}}(s)[Z]
+=
+\sum \text{ord}_{Z, \mathcal{L}}(s')[Z]
++
+\text{div}(f)
+$$
+as Weil divisors.
+\end{lemma}
+
+\begin{proof}
+This is clear from the definitions.
+Note that Lemma \ref{lemma-divisor-meromorphic-locally-finite}
+guarantees that the sums are indeed Weil divisors.
+\end{proof}
+
+\begin{definition}
+\label{definition-divisor-invertible-sheaf}
+Let $X$ be a locally Noetherian integral scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item For any nonzero meromorphic section $s$ of $\mathcal{L}$
+we define the {\it Weil divisor associated to $s$} as
+$$
+\text{div}_\mathcal{L}(s) =
+\sum \text{ord}_{Z, \mathcal{L}}(s) [Z] \in \text{Div}(X)
+$$
+where the sum is over prime divisors.
\item We define the {\it Weil divisor class associated to $\mathcal{L}$}
+as the image of $\text{div}_\mathcal{L}(s)$ in $\text{Cl}(X)$
+where $s$ is any nonzero meromorphic section of $\mathcal{L}$ over
+$X$. This is well defined by
+Lemma \ref{lemma-divisor-meromorphic-well-defined}.
+\end{enumerate}
+\end{definition}
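
\noindent
For example, let $X = \mathbf{P}^1_k$ with homogeneous coordinates
$T_0, T_1$ and let $\mathcal{L} = \mathcal{O}_X(1)$. The global section
$T_0$ is in particular a nonzero meromorphic section of $\mathcal{L}$
and $\text{div}_\mathcal{L}(T_0) = [Z_0]$ where $Z_0$ is the closed
point $T_0 = 0$. The meromorphic section $T_0^2/T_1$ has
$\text{div}_\mathcal{L}(T_0^2/T_1) = 2[Z_0] - [Z_1]$ with $Z_1$ the
closed point $T_1 = 0$. The two divisors differ by $\text{div}(T_0/T_1)$,
in accordance with Lemma \ref{lemma-divisor-meromorphic-well-defined}.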
+
+\noindent
+As expected this construction is additive in the invertible module.
+
+\begin{lemma}
+\label{lemma-c1-additive}
+Let $X$ be a locally Noetherian integral scheme.
+Let $\mathcal{L}$, $\mathcal{N}$ be invertible $\mathcal{O}_X$-modules.
+Let $s$, resp.\ $t$ be a nonzero meromorphic section
+of $\mathcal{L}$, resp.\ $\mathcal{N}$. Then $st$ is a nonzero
+meromorphic section of $\mathcal{L} \otimes \mathcal{N}$, and
+$$
+\text{div}_{\mathcal{L} \otimes \mathcal{N}}(st)
+=
+\text{div}_\mathcal{L}(s) + \text{div}_\mathcal{N}(t)
+$$
+in $\text{Div}(X)$. In particular, the Weil divisor class of
+$\mathcal{L} \otimes_{\mathcal{O}_X} \mathcal{N}$ is the sum
+of the Weil divisor classes of $\mathcal{L}$ and $\mathcal{N}$.
+\end{lemma}
+
+\begin{proof}
+Let $s$, resp.\ $t$ be a nonzero meromorphic section
+of $\mathcal{L}$, resp.\ $\mathcal{N}$. Then $st$ is a nonzero
+meromorphic section of $\mathcal{L} \otimes \mathcal{N}$.
+Let $Z \subset X$ be a prime divisor. Let $\xi \in Z$ be its generic
+point. Choose generators $s_\xi \in \mathcal{L}_\xi$, and
+$t_\xi \in \mathcal{N}_\xi$. Then $s_\xi t_\xi$ is a generator
+for $(\mathcal{L} \otimes \mathcal{N})_\xi$.
+So $st/(s_\xi t_\xi) = (s/s_\xi)(t/t_\xi)$.
+Hence we see that
+$$
\text{ord}_{Z, \mathcal{L} \otimes \mathcal{N}}(st)
=
\text{ord}_{Z, \mathcal{L}}(s) + \text{ord}_{Z, \mathcal{N}}(t)
$$
by additivity of the $\text{ord}$ function of
Algebra, Definition \ref{algebra-definition-ord}.
Summing over the prime divisors $Z$ gives the equality of Weil divisors.
+\end{proof}
+
+\noindent
+In this way we obtain a homomorphism of abelian groups
+\begin{equation}
+\label{equation-c1}
+\Pic(X) \longrightarrow \text{Cl}(X)
+\end{equation}
+which assigns to an invertible module its Weil divisor class.
+
+\begin{lemma}
+\label{lemma-normal-c1-injective}
+Let $X$ be a locally Noetherian integral scheme. If $X$ is normal,
+then the map (\ref{equation-c1}) $\Pic(X) \to \text{Cl}(X)$
+is injective.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module whose
+associated Weil divisor class is trivial. Let $s$ be a regular
+meromorphic section of $\mathcal{L}$. The assumption means that
+$\text{div}_\mathcal{L}(s) = \text{div}(f)$ for some
+$f \in R(X)^*$. Then we see that $t = f^{-1}s$ is a regular
+meromorphic section of $\mathcal{L}$ with
+$\text{div}_\mathcal{L}(t) = 0$, see
+Lemma \ref{lemma-divisor-meromorphic-well-defined}.
+We will show that $t$ defines a trivialization of $\mathcal{L}$
+which finishes the proof of the lemma.
+In order to prove this we may work locally on $X$.
+Hence we may assume that $X = \Spec(A)$ is affine
+and that $\mathcal{L}$ is trivial. Then $A$ is a Noetherian normal
+domain and $t$ is an element of its fraction field
+such that $\text{ord}_{A_\mathfrak p}(t) = 0$
+for all height $1$ primes $\mathfrak p$ of $A$.
+Our goal is to show that $t$ is a unit of $A$.
+Since $A_\mathfrak p$ is a discrete valuation ring for height
+one primes of $A$ (Algebra, Lemma \ref{algebra-lemma-criterion-normal}), the
+condition signifies that $t \in A_\mathfrak p^*$ for all primes $\mathfrak p$
+of height $1$. This implies $t \in A$ and $t^{-1} \in A$ by
+Algebra, Lemma
+\ref{algebra-lemma-normal-domain-intersection-localizations-height-1}
+and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-rings-UFD-c1-bijective}
+Let $X$ be a locally Noetherian integral scheme. Consider the map
+(\ref{equation-c1}) $\Pic(X) \to \text{Cl}(X)$.
+The following are equivalent
+\begin{enumerate}
+\item the local rings of $X$ are UFDs, and
+\item $X$ is normal and $\Pic(X) \to \text{Cl}(X)$
+is surjective.
+\end{enumerate}
+In this case $\Pic(X) \to \text{Cl}(X)$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+If (1) holds, then $X$ is normal by
+Algebra, Lemma \ref{algebra-lemma-UFD-normal}.
+Hence the map (\ref{equation-c1}) is injective by
+Lemma \ref{lemma-normal-c1-injective}. Moreover,
+every prime divisor $D \subset X$ is an effective
+Cartier divisor by Lemma \ref{lemma-weil-divisor-is-cartier-UFD}.
+In this case the canonical section $1_D$ of $\mathcal{O}_X(D)$
+(Definition \ref{definition-invertible-sheaf-effective-Cartier-divisor})
+vanishes exactly along $D$ and we see that the class of $D$ is the
+image of $\mathcal{O}_X(D)$ under the map (\ref{equation-c1}).
+Thus the map is surjective as well.
+
+\medskip\noindent
+Assume (2) holds. Pick a prime divisor $D \subset X$.
+Since (\ref{equation-c1}) is surjective there exists an invertible
+sheaf $\mathcal{L}$, a regular meromorphic section $s$, and $f \in R(X)^*$
+such that $\text{div}_\mathcal{L}(s) + \text{div}(f) = [D]$.
+In other words, $\text{div}_\mathcal{L}(fs) = [D]$.
+Let $x \in X$ and let $A = \mathcal{O}_{X, x}$. Thus $A$ is
+a Noetherian local normal domain with fraction field $K = R(X)$.
+Every height $1$ prime of $A$ corresponds to a prime divisor on $X$
+and every invertible $\mathcal{O}_X$-module restricts to the
+trivial invertible module on $\Spec(A)$. It follows that for every
+height $1$ prime $\mathfrak p \subset A$ there exists an element $f \in K$
+such that $\text{ord}_{A_\mathfrak p}(f) = 1$ and
+$\text{ord}_{A_{\mathfrak p'}}(f) = 0$ for every other
+height one prime $\mathfrak p'$. Then $f \in A$ by Algebra, Lemma
+\ref{algebra-lemma-normal-domain-intersection-localizations-height-1}.
+Arguing in the same fashion we see that every element $g \in \mathfrak p$
+is of the form $g = af$ for some $a \in A$. Thus we see that every
+height one prime ideal of $A$ is principal and $A$ is a UFD
+by Algebra, Lemma \ref{algebra-lemma-characterize-UFD-height-1}.
+\end{proof}
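
\noindent
The standard example showing that normality alone does not give
surjectivity in (2) is the quadric cone
$X = \Spec(k[x, y, z]/(xy - z^2))$. Here $X$ is normal and $\Pic(X)$
is trivial, while $\text{Cl}(X) \cong \mathbf{Z}/2\mathbf{Z}$ generated
by the class of the line $x = z = 0$. Correspondingly, the local ring
at the vertex is not a UFD, as witnessed by the two factorizations in
$xy = z \cdot z$.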
+
+
+
+
+
+
+
+\section{More on invertible modules}
+\label{section-invertible-modules}
+
+\noindent
+In this section we discuss some properties of invertible modules.
+
+\begin{lemma}
+\label{lemma-in-image-pullback}
+Let $\varphi : X \to Y$ be a morphism of schemes. Let $\mathcal{L}$
+be an invertible $\mathcal{O}_X$-module. Assume that
+\begin{enumerate}
+\item $X$ is locally Noetherian,
+\item $Y$ is locally Noetherian, integral, and normal,
+\item $\varphi$ is flat with integral (hence nonempty) fibres,
+\item $\varphi$ is either quasi-compact or locally of finite type,
+\item $\mathcal{L}$ is trivial when restricted to the generic fibre of
+$\varphi$.
+\end{enumerate}
+Then $\mathcal{L} \cong \varphi^*\mathcal{N}$ for some invertible
+$\mathcal{O}_Y$-module $\mathcal{N}$.
+\end{lemma}
+
+\begin{proof}
+Let $\xi \in Y$ be the generic point. Let $X_\xi$ be the scheme theoretic
+fibre of $\varphi$ over $\xi$. Denote $\mathcal{L}_\xi$ the pullback of
+$\mathcal{L}$ to $X_\xi$. Assumption (5) means that $\mathcal{L}_\xi$
+is trivial. Choose a trivializing section
+$s \in \Gamma(X_\xi, \mathcal{L}_\xi)$. Observe that $X$ is integral by
+Lemma \ref{lemma-flat-over-integral-integral-fibre}.
+Hence we can think of $s$ as a regular meromorphic section of $\mathcal{L}$.
+Pullbacks of meromorphic functions are defined for
+$\varphi$ by Lemma \ref{lemma-pullback-meromorphic-sections-defined}.
+Let $\mathcal{N} \subset \mathcal{K}_Y$ be the $\mathcal{O}_Y$-module
+whose sections over an open $V \subset Y$ are those meromorphic functions
+$g \in \mathcal{K}_Y(V)$ such that
+$\varphi^*(g)s \in \mathcal{L}(\varphi^{-1}V)$.
+A priori $\varphi^*(g)s$ is a section of $\mathcal{K}_X(\mathcal{L})$ over
+$\varphi^{-1}V$. We claim that $\mathcal{N}$ is an invertible
+$\mathcal{O}_Y$-module and that the map
+$$
+\varphi^*\mathcal{N} \longrightarrow \mathcal{L},\quad
+g \longmapsto gs
+$$
+is an isomorphism.
+
+\medskip\noindent
+We first prove the claim in the following situation:
+$X$ and $Y$ are affine and $\mathcal{L}$ trivial. Say $Y = \Spec(R)$,
+$X = \Spec(A)$ and $s$ given by the element $s \in A \otimes_R K$
+where $K$ is the fraction field of $R$. We can write $s = a/r$
+for some nonzero $r \in R$ and $a \in A$. Since $s$ generates $\mathcal{L}$
+on the generic fibre we see that there exists an $s' \in A \otimes_R K$
+such that $ss' = 1$. Thus we see that $s = r'/a'$ for some
+nonzero $r' \in R$ and $a' \in A$. Let
+$\mathfrak p_1, \ldots, \mathfrak p_n \subset R$
+be the minimal primes over $rr'$. Each $R_{\mathfrak p_i}$
+is a discrete valuation ring
+(Algebra, Lemmas \ref{algebra-lemma-minimal-over-1} and
+\ref{algebra-lemma-criterion-normal}). By assumption
+$\mathfrak q_i = \mathfrak p_i A$ is a prime. Hence
+$\mathfrak q_i A_{\mathfrak q_i}$ is generated by a single element
+and we find that $A_{\mathfrak q_i}$ is a discrete valuation ring as
+well (Algebra, Lemma \ref{algebra-lemma-characterize-dvr}). Of course
+$R_{\mathfrak p_i} \to A_{\mathfrak q_i}$ has ramification index $1$.
+Let $e_i, e'_i \geq 0$ be the valuation of $a, a'$ in $A_{\mathfrak q_i}$.
+Then $e_i + e'_i$ is the valuation of $rr'$ in $R_{\mathfrak p_i}$. Note that
+$$
\mathfrak p_1^{(e_1 + e'_1)} \cap \ldots \cap \mathfrak p_n^{(e_n + e'_n)} =
+(rr')
+$$
+in $R$ by Algebra, Lemma
+\ref{algebra-lemma-normal-domain-intersection-localizations-height-1}.
+Set
+$$
I = \mathfrak p_1^{(e_1)} \cap \ldots \cap \mathfrak p_n^{(e_n)}
\quad\text{and}\quad
I' = \mathfrak p_1^{(e'_1)} \cap \ldots \cap \mathfrak p_n^{(e'_n)}
+$$
+so that $II' \subset (rr')$. Observe that
+$$
+IA =
(\mathfrak p_1^{(e_1)} \cap \ldots \cap \mathfrak p_n^{(e_n)})A =
(\mathfrak p_1A)^{(e_1)} \cap \ldots \cap (\mathfrak p_n A)^{(e_n)}
+$$
+by Algebra, Lemmas \ref{algebra-lemma-symbolic-power-flat-extension} and
+\ref{algebra-lemma-flat-intersect-ideals}. Similarly for $I'A$. Hence
+$a \in IA$ and $a' \in I'A$.
+We conclude that $IA \otimes_A I'A \to rr'A$ is surjective.
+By faithful flatness of $R \to A$ we find that
+$I \otimes_R I' \to (rr')$ is surjective as well.
+It follows that $II' = (rr')$ and $I$ and $I'$ are finite locally
+free of rank $1$, see
+Algebra, Lemma \ref{algebra-lemma-product-ideals-principal}.
+Thus Zariski locally on $R$ we can write $I = (g)$ and $I' = (g')$
+with $gg' = rr'$. Then $a = ug$ and $a' = u'g'$ for some $u, u' \in A$.
+We conclude that $u, u'$ are units. Thus Zariski locally on $R$
+we have $s = ug/r$ and the claim follows in this case.
+
+\medskip\noindent
+Let $y \in Y$ be a point.
+Pick $x \in X$ mapping to $y$. We may apply the result of the previous
+paragraph to $\Spec(\mathcal{O}_{X, x}) \to \Spec(\mathcal{O}_{Y, y})$.
+We conclude there exists an element $g \in R(Y)^*$ well defined up to
+multiplication by an element of $\mathcal{O}_{Y, y}^*$ such that
+$\varphi^*(g)s$ generates $\mathcal{L}_x$. Hence $\varphi^*(g)s$
+generates $\mathcal{L}$ in a neighbourhood $U$ of $x$.
+Suppose $x'$ is a second point lying over $y$ and $g' \in R(Y)^*$ is
+such that $\varphi^*(g')s$ generates $\mathcal{L}$ in an open neighbourhood
+$U'$ of $x'$. Then we can choose a point
+$x''$ in $U \cap U' \cap \varphi^{-1}(\{y\})$
+because the fibre is irreducible. By the uniqueness for
+the ring map $\mathcal{O}_{Y, y} \to \mathcal{O}_{X, x''}$
+we find that $g$ and $g'$ differ (multiplicatively)
+by an element in $\mathcal{O}_{Y, y}^*$. Hence we see that $\varphi^*(g)s$
+is a generator for $\mathcal{L}$ on an open neighbourhood
+of $\varphi^{-1}(y)$. Let $Z \subset X$ be the set of points
+$z \in X$ such that $\varphi^*(g)s$ does not generate $\mathcal{L}_z$.
+The arguments above show that $Z$ is closed and that $Z = \varphi^{-1}(T)$
+for some subset $T \subset Y$ with $y \not \in T$. If we can show that
+$T$ is closed, then $g$ will be a generator for $\mathcal{N}$ as an
+$\mathcal{O}_Y$-module in the open neighbourhood $Y \setminus T$ of $y$
+thereby finishing the proof (some details omitted).
+
+\medskip\noindent
+If $\varphi$ is quasi-compact, then $T$ is closed by
+Morphisms, Lemma \ref{morphisms-lemma-fpqc-quotient-topology}.
+If $\varphi$ is locally of finite type, then $\varphi$ is open
+by Morphisms, Lemma \ref{morphisms-lemma-fppf-open}.
+Then $Y \setminus T$ is open as the image of the open $X \setminus Z$.
+\end{proof}
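
\noindent
As a sample application, consider the projection
$\varphi : \mathbf{A}^n_Y \to Y$ with $Y$ integral, normal, and locally
Noetherian. Then $\varphi$ is flat, of finite type, and has integral
fibres, and any invertible module is trivial on the generic fibre
$\mathbf{A}^n_{\kappa(\xi)}$ because a polynomial ring over a field is
a UFD and hence has trivial Picard group. Thus the lemma shows that
$\Pic(Y) \to \Pic(\mathbf{A}^n_Y)$ is surjective; it is injective as
well because pulling back along the zero section is a retraction. Hence
$\Pic(\mathbf{A}^n_Y) = \Pic(Y)$ in this situation.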
+
+\begin{lemma}
+\label{lemma-closure-effective-cartier-divisor}
+Let $X$ be a locally Noetherian scheme. Let $U \subset X$ be
+an open and let $D \subset U$ be an effective Cartier divisor.
+If $\mathcal{O}_{X, x}$ is a UFD for all $x \in X \setminus U$,
+then there exists an effective Cartier divisor $D' \subset X$
+with $D = U \cap D'$.
+\end{lemma}
+
+\begin{proof}
+Let $D' \subset X$ be the scheme theoretic image of the morphism $D \to X$.
+Since $X$ is locally Noetherian the morphism $D \to X$ is quasi-compact, see
+Properties, Lemma \ref{properties-lemma-immersion-into-noetherian}.
+Hence the formation of $D'$ commutes with passing to opens in $X$ by
+Morphisms, Lemma \ref{morphisms-lemma-quasi-compact-scheme-theoretic-image}.
+Thus we may assume $X = \Spec(A)$ is affine.
+Let $I \subset A$ be the ideal corresponding to $D'$.
+Let $\mathfrak p \subset A$ be a prime ideal corresponding to a point
+of $X \setminus U$.
+To finish the proof it is enough to show that $I_\mathfrak p$ is generated
+by one element, see Lemma \ref{lemma-effective-Cartier-in-points}.
+Thus we may replace $X$ by $\Spec(A_\mathfrak p)$, see
+Morphisms, Lemma \ref{morphisms-lemma-flat-base-change-scheme-theoretic-image}.
+In other words, we may assume that $X$ is the spectrum of a local
+UFD $A$. Then all local rings of $A$ are UFDs. It follows that
+$D = \sum a_i D_i$ with $D_i \subset U$ an integral effective Cartier divisor,
+see Lemma \ref{lemma-effective-Cartier-divisor-is-a-sum}.
+The generic points $\xi_i$ of $D_i$ correspond to prime ideals
+$\mathfrak p_i \subset A$ of height $1$, see
+Lemma \ref{lemma-effective-Cartier-codimension-1}.
+Then $\mathfrak p_i = (f_i)$ for some prime element $f_i \in A$
+and we conclude that $D'$ is cut out by $\prod f_i^{a_i}$ as desired.
+\end{proof}
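+
+\noindent
+As a concrete instance of the lemma, let $X = \mathbf{A}^2_k = \Spec(k[x, y])$
+over a field $k$ and let $U = X \setminus \{(x, y)\}$ be the complement of the
+origin, whose local ring $k[x, y]_{(x, y)}$ is a UFD. An effective Cartier
+divisor such as $D = V(y) \cap U$ extends to the effective Cartier divisor
+$D' = V(y)$ on $X$, obtained, as in the proof, as the scheme theoretic image
+of $D \to X$.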
+
+\begin{lemma}
+\label{lemma-extend-invertible-module}
+Let $X$ be a locally Noetherian scheme. Let $U \subset X$ be
+an open and let $\mathcal{L}$ be an invertible $\mathcal{O}_U$-module.
+If $\mathcal{O}_{X, x}$ is a UFD for all $x \in X \setminus U$,
+then there exists an invertible $\mathcal{O}_X$-module $\mathcal{L}'$
+with $\mathcal{L} \cong \mathcal{L}'|_U$.
+\end{lemma}
+
+\begin{proof}
+Choose $x \in X$, $x \not \in U$. We will show there exists an
+affine open neighbourhood $W \subset X$, such that $\mathcal{L}|_{W \cap U}$
+extends to an invertible sheaf on $W$. This implies by glueing of
+sheaves (Sheaves, Section \ref{sheaves-section-glueing-sheaves})
+that we can extend $\mathcal{L}$ to the strictly bigger open $U \cup W$.
+Let $W = \Spec(A)$ be an affine open neighbourhood.
+Since $U \cap W$ is quasi-affine, we see that we can write
+$\mathcal{L}|_{W \cap U}$ as
+$\mathcal{O}(D_1) \otimes \mathcal{O}(D_2)^{\otimes -1}$ for some
+effective Cartier divisors $D_1, D_2 \subset W \cap U$, see
+Lemma \ref{lemma-quasi-projective-Noetherian-pic-effective-Cartier}.
+Then $D_1$ and $D_2$ extend to effective Cartier divisors of
+$W$ by Lemma \ref{lemma-closure-effective-cartier-divisor}
+which gives us the extension of the invertible sheaf.
+
+\medskip\noindent
+If $X$ is Noetherian (which is the case most used in practice), the above
+combined with Noetherian induction finishes the proof. In the general case
+we argue as follows. First, because every local ring of a point outside
+of $U$ is a domain and $X$ is locally Noetherian, we see that the closure
+of $U$ in $X$ is open. Thus we may assume that $U \subset X$ is dense
+and schematically dense.
+Now we consider the set $T$ of triples $(U', \mathcal{L}', \alpha)$
+where $U \subset U' \subset X$ is an open subscheme, $\mathcal{L}'$
+is an invertible $\mathcal{O}_{U'}$-module, and
+$\alpha : \mathcal{L}'|_U \to \mathcal{L}$ is an isomorphism.
+We endow $T$ with a partial ordering $\leq$ defined by the rule
+$(U', \mathcal{L}', \alpha) \leq (U'', \mathcal{L}'', \alpha')$
+if and only if $U' \subset U''$ and there exists an isomorphism
+$\beta : \mathcal{L}''|_{U'} \to \mathcal{L}'$ compatible with
+$\alpha$ and $\alpha'$. Observe that $\beta$ is unique (if it exists)
+because $U \subset X$ is dense. The first part of the proof shows that
+for any element $t = (U', \mathcal{L}', \alpha)$ of $T$ with $U' \not = X$
+there exists a $t' \in T$ with $t' > t$. Hence to finish the proof it
+suffices to show that Zorn's lemma applies. Thus consider a
+totally ordered subset $I \subset T$. If $i \in I$ corresponds to
+the triple $(U_i, \mathcal{L}_i, \alpha_i)$, then we can construct
+an invertible module $\mathcal{L}'$ on $U' = \bigcup U_i$ as follows.
+For $W \subset U'$ open and quasi-compact we see that
+$W \subset U_i$ for some $i$ and we set
+$$
+\mathcal{L}'(W) = \mathcal{L}_i(W)
+$$
+For the transition maps we use the $\beta$'s (which are unique and hence
+compose correctly). This defines an invertible $\mathcal{O}_{U'}$-module
+$\mathcal{L}'$ on the basis of quasi-compact opens of $U'$ which is sufficient
+to define an invertible module (Sheaves, Section \ref{sheaves-section-bases}).
+We omit the details.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open-subscheme-UFD}
+Let $R$ be a UFD. The Picard groups of the following are
+trivial.
+\begin{enumerate}
+\item $\Spec(R)$ and any open subscheme of it.
+\item $\mathbf{A}^n_R = \Spec(R[x_1, \ldots, x_n])$ and any open subscheme
+of it.
+\end{enumerate}
+In particular, the Picard group of any open subscheme of affine
+$n$-space $\mathbf{A}^n_k$ over a field $k$ is trivial.
+\end{lemma}
+
+\begin{proof}
+Since $R$ is a UFD so is any localization of it and any polynomial
+ring over it (Algebra, Lemma \ref{algebra-lemma-polynomial-ring-UFD}).
+Thus if $U \subset \mathbf{A}^n_R$ is open, then the map
+$\Pic(\mathbf{A}^n_R) \to \Pic(U)$ is surjective
+by Lemma \ref{lemma-extend-invertible-module}.
+The vanishing of $\Pic(\mathbf{A}^n_R)$ is equivalent to
+the vanishing of the Picard group of the UFD $R[x_1, \ldots, x_n]$
+which is proved in
+More on Algebra, Lemma \ref{more-algebra-lemma-UFD-Pic-trivial}.
+\end{proof}
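+
+\noindent
+To illustrate, take $R = k$ a field and $n = 2$ in part (2). Since $k[x, y]$
+is a UFD, the lemma gives
+$$
+\Pic(\mathbf{A}^2_k \setminus \{0\}) = 0
+$$
+even though the punctured affine plane is not an affine scheme: every
+invertible module on it extends to $\mathbf{A}^2_k$ and is hence trivial.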
+
+\begin{lemma}
+\label{lemma-Pic-projective-space-UFD}
+Let $R$ be a UFD. The Picard group of $\mathbf{P}^n_R$
+is $\mathbf{Z}$. More precisely, there is an isomorphism
+$$
+\mathbf{Z} \longrightarrow \Pic(\mathbf{P}^n_R),\quad
+m \longmapsto \mathcal{O}_{\mathbf{P}^n_R}(m)
+$$
+In particular, the Picard group of projective space $\mathbf{P}^n_k$
+over a field $k$ is $\mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Observe that the local rings of $X = \mathbf{P}^n_R$ are
+UFDs because $X$ is covered by affine pieces isomorphic
+to $\mathbf{A}^n_R$ and $R[x_1, \ldots, x_n]$ is a UFD
+(Algebra, Lemma \ref{algebra-lemma-polynomial-ring-UFD}).
+Hence $X$ is an integral Noetherian scheme all of whose
+local rings are UFDs and we see that $\Pic(X) = \text{Cl}(X)$
+by Lemma \ref{lemma-local-rings-UFD-c1-bijective}.
+
+\medskip\noindent
+The displayed map is a group homomorphism by
+Constructions, Lemma \ref{constructions-lemma-apply-modules}.
+The map is injective because $H^0$ of
+$\mathcal{O}_X$ and $\mathcal{O}_X(m)$ are non-isomorphic $R$-modules
+if $m > 0$, see Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cohomology-projective-space-over-ring}.
+Let $\mathcal{L}$ be an invertible module on $X$.
+Consider the open $U = D_+(T_0) \cong \mathbf{A}^n_R$.
+The complement $H = X \setminus U$ is a prime divisor because it is
+isomorphic to $\text{Proj}(R[T_1, \ldots, T_n])$ which is
+integral by the discussion in the previous paragraph.
+In fact $H$ is the zero scheme of the regular global
+section $T_0$ of $\mathcal{O}_X(1)$
+hence $\mathcal{O}_X(1)$ maps to the class of $H$ in $\text{Cl}(X)$.
+By Lemma \ref{lemma-open-subscheme-UFD} we see that
+$\mathcal{L}|_U \cong \mathcal{O}_U$.
+Let $s \in \mathcal{L}(U)$ be a trivializing section.
+Then we can think of $s$ as a regular meromorphic section
+of $\mathcal{L}$ and we see that necessarily
+$\text{div}_\mathcal{L}(s) = m[H]$ for some $m \in \mathbf{Z}$
+as $H$ is the only prime divisor of $X$ not meeting $U$.
+In other words, we see that $\mathcal{L}$ and
+$\mathcal{O}_X(m)$ map to the same element of $\text{Cl}(X)$
+and hence $\mathcal{L} \cong \mathcal{O}_X(m)$
+as desired.
+\end{proof}
+
+
+
+
+
+
+\section{Weil divisors on normal schemes}
+\label{section-weil-divisors-normal}
+
+\noindent
+First we discuss properties of reflexive modules.
+
+\begin{lemma}
+\label{lemma-reflexive-normal}
+Let $X$ be an integral locally Noetherian normal scheme.
+For $\mathcal{F}$ and $\mathcal{G}$ coherent reflexive
+$\mathcal{O}_X$-modules the map
+$$
+(\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{O}_X)
+\otimes_{\mathcal{O}_X} \mathcal{G})^{**} \to
+\SheafHom_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})
+$$
+is an isomorphism. The rule $\mathcal{F}, \mathcal{G} \mapsto
+(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G})^{**}$
+defines an abelian group law on the set of isomorphism classes of rank $1$
+coherent reflexive $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+Although not strictly necessary, we recommend reading
+Remark \ref{remark-tensor} before proceeding with the proof.
+Choose an open subscheme $j : U \to X$ such that
+every irreducible component of $X \setminus U$ has codimension $\geq 2$
+in $X$ and such that $j^*\mathcal{F}$ and $j^*\mathcal{G}$ are finite
+locally free, see Lemma \ref{lemma-reflexive-over-normal}.
+The map
+$$
+\SheafHom_{\mathcal{O}_U}(j^*\mathcal{F}, \mathcal{O}_U)
+\otimes_{\mathcal{O}_U} j^*\mathcal{G} \to
+\SheafHom_{\mathcal{O}_U}(j^*\mathcal{F}, j^*\mathcal{G})
+$$
+is an isomorphism, because we may check it locally and it is
+clear when the modules are finite free. Observe that $j^*$
+applied to the displayed arrow of the lemma gives the arrow
+we've just shown is an isomorphism (small detail omitted).
+Since $j^*$ defines an equivalence between coherent reflexive modules on $U$
+and coherent reflexive modules on $X$
+(by Lemma \ref{lemma-reflexive-S2-extend} and Serre's criterion
+Properties, Lemma \ref{properties-lemma-criterion-normal}),
+we conclude that the arrow of the lemma is an isomorphism too.
+If $\mathcal{F}$ has rank $1$, then $j^*\mathcal{F}$
+is an invertible $\mathcal{O}_U$-module and the reflexive module
+$\mathcal{F}^\vee = \SheafHom(\mathcal{F}, \mathcal{O}_X)$
+restricts to its inverse. It follows in the same manner as before that
+$(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{F}^\vee)^{**} = \mathcal{O}_X$.
+In this way we see that we have inverses for the group law
+given in the statement of the lemma.
+\end{proof}
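+
+\noindent
+When $\mathcal{F}$ and $\mathcal{G}$ are invertible, the tensor product
+$\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G}$ is invertible, hence
+reflexive, and therefore
+$$
+(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G})^{**} =
+\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G}
+$$
+Thus the group law of the lemma restricts to the usual group law on
+$\Pic(X)$, i.e., $\Pic(X)$ is a subgroup of the group of isomorphism
+classes of rank $1$ coherent reflexive $\mathcal{O}_X$-modules.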
+
+\begin{lemma}
+\label{lemma-normal-class-group}
+Let $X$ be an integral locally Noetherian normal scheme.
+The group of rank $1$ coherent reflexive $\mathcal{O}_X$-modules
+is isomorphic to the Weil divisor class group $\text{Cl}(X)$ of $X$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be a rank $1$ coherent reflexive $\mathcal{O}_X$-module.
+Choose an open $U \subset X$ such that
+every irreducible component of $X \setminus U$ has codimension $\geq 2$
+in $X$ and such that $\mathcal{F}|_U$ is invertible, see
+Lemma \ref{lemma-reflexive-over-normal}.
+Observe that $\text{Cl}(U) = \text{Cl}(X)$
+as the Weil divisor class group of $X$ only depends on
+its field of rational functions and the points of
+codimension $1$ and their local rings.
+Thus we can define the Weil divisor class of $\mathcal{F}$
+to be the Weil divisor class of $\mathcal{F}|_U$
+in $\text{Cl}(U)$. We omit the verification that this
+is independent of the choice of $U$.
+
+\medskip\noindent
+Denote $\text{Cl}'(X)$ the set of isomorphism classes of
+rank $1$ coherent reflexive $\mathcal{O}_X$-modules. The
+construction above gives a group homomorphism
+$$
+\text{Cl}'(X) \longrightarrow \text{Cl}(X)
+$$
+because for any pair $\mathcal{F}, \mathcal{G}$ of elements
+of $\text{Cl}'(X)$ we can choose a $U$ which works for both
+and the assignment (\ref{equation-c1}) sending an invertible
+module to its Weil divisor class is a homomorphism.
+If $\mathcal{F}$ is in the kernel of this map, then we find that
+$\mathcal{F}|_U$ is trivial (Lemma \ref{lemma-normal-c1-injective})
+and hence $\mathcal{F}$ is trivial too by
+Lemma \ref{lemma-reflexive-S2-extend} and Serre's criterion
+Properties, Lemma \ref{properties-lemma-criterion-normal}.
+To finish the proof it suffices to check the map is surjective.
+
+\medskip\noindent
+Let $D = \sum n_Z Z$ be a Weil divisor on $X$.
+We claim that there is an open $U \subset X$ such that
+every irreducible component of $X \setminus U$ has codimension $\geq 2$
+in $X$ and such that $Z|_U$ is an effective Cartier divisor
+for $n_Z \not = 0$. To prove the claim we may assume $X$ is affine.
+Then we may assume $D = n_1 Z_1 + \ldots + n_r Z_r$ is a finite sum
+with $Z_1, \ldots, Z_r$ pairwise distinct. After throwing out
+$Z_i \cap Z_j$ for $i \not = j$ we may assume $Z_1, \ldots, Z_r$
+are pairwise disjoint. This reduces us to the case of a single
+prime divisor $Z$ on $X$. As $X$ is $(R_1)$ by
+Properties, Lemma \ref{properties-lemma-criterion-normal}
+the local ring
+$\mathcal{O}_{X, \xi}$ at the generic point $\xi$ of $Z$ is a discrete
+valuation ring. Let $f \in \mathcal{O}_{X, \xi}$ be a uniformizer.
+Let $V \subset X$ be an open neighbourhood of $\xi$ such that
+$f$ is the image of an element $f \in \mathcal{O}_X(V)$.
+After shrinking $V$ we may assume that $Z \cap V = V(f)$
+scheme theoretically, since this is true in the local ring
+at $\xi$. In this case taking
+$$
+U = X \setminus (Z \setminus V) = (X \setminus Z) \cup V
+$$
+gives the desired open, thereby proving the claim.
+
+\medskip\noindent
+In order to show that the divisor class of $D$ is in the image,
+we may write $D = \sum_{n_Z < 0} n_Z Z - \sum_{n_Z > 0} (-n_Z) Z$.
+By additivity of the map constructed above, we
+may and do assume $n_Z \leq 0$ for all prime divisors $Z$
+(this step may be avoided if the reader so desires).
+Let $U \subset X$ be as in the claim above. If $U$ is quasi-compact,
+then we write $D|_U = -n_1 Z_1 - \ldots - n_r Z_r$ for
+pairwise distinct prime divisors $Z_i$ and $n_i > 0$ and
+we consider the invertible $\mathcal{O}_U$-module
+$$
+\mathcal{L} =
+\mathcal{I}_1^{n_1} \ldots \mathcal{I}_r^{n_r} \subset \mathcal{O}_U
+$$
+where $\mathcal{I}_i$ is the ideal sheaf of $Z_i$.
+This is invertible by our choice of $U$ and
+Lemma \ref{lemma-sum-effective-Cartier-divisors}.
+Also $\text{div}_\mathcal{L}(1) = D|_U$.
+Since $\mathcal{L} = \mathcal{F}|_U$ for some rank $1$ coherent
+reflexive $\mathcal{O}_X$-module $\mathcal{F}$ by
+Lemma \ref{lemma-reflexive-S2-extend} we find that $D$ is
+in the image of our map.
+
+\medskip\noindent
+If $U$ is not quasi-compact, then we define
+$\mathcal{L} \subset \mathcal{O}_U$ locally by the displayed formula
+above. We leave it to the reader to show that the construction glues and
+to finish the proof exactly as before. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-structure-sheaf-Xs}
+Let $X$ be an integral locally Noetherian normal scheme.
+Let $\mathcal{F}$ be a rank $1$ coherent reflexive $\mathcal{O}_X$-module.
+Let $s \in \Gamma(X, \mathcal{F})$. Let
+$$
+U = \{x \in X \mid s : \mathcal{O}_{X, x} \to \mathcal{F}_x
+\text{ is an isomorphism}\}
+$$
+Then $j : U \to X$ is an open subscheme of $X$ and
+$$
+j_*\mathcal{O}_U =
+\colim (\mathcal{O}_X \xrightarrow{s} \mathcal{F}
+\xrightarrow{s} \mathcal{F}^{[2]}
+\xrightarrow{s} \mathcal{F}^{[3]}
+\xrightarrow{s} \ldots)
+$$
+where $\mathcal{F}^{[1]} = \mathcal{F}$ and
+inductively $\mathcal{F}^{[n + 1]} =
+(\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{F}^{[n]})^{**}$.
+\end{lemma}
+
+\begin{proof}
+The set $U$ is open by Modules, Lemmas
+\ref{modules-lemma-finite-type-surjective-on-stalk} and
+\ref{modules-lemma-finite-type-to-coherent-injective-on-stalk}.
+Observe that $j$ is quasi-compact by
+Properties, Lemma \ref{properties-lemma-immersion-into-noetherian}.
+To prove the final statement it suffices to show for every
+quasi-compact open $W \subset X$ there is an isomorphism
+$$
+\colim \Gamma(W, \mathcal{F}^{[n]})
+\longrightarrow
+\Gamma(U \cap W, \mathcal{O}_U)
+$$
+of $\mathcal{O}_X(W)$-modules compatible with restriction maps.
+We will omit the verification of compatibilities.
+After replacing $X$ by $W$ and rewriting the above in
+terms of homs, we see that it suffices to construct an isomorphism
+$$
+\colim \Hom_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{F}^{[n]})
+\longrightarrow
+\Hom_{\mathcal{O}_U}(\mathcal{O}_U, \mathcal{O}_U)
+$$
+Choose an open $V \subset X$ such that every irreducible component of
+$X \setminus V$ has codimension $\geq 2$ in $X$ and such that
+$\mathcal{F}|_V$ is invertible, see Lemma \ref{lemma-reflexive-over-normal}.
+Then restriction defines an equivalence of categories
+between rank $1$ coherent reflexive modules on $X$ and $V$
+and between rank $1$ coherent reflexive modules on $U$ and $V \cap U$.
+See Lemma \ref{lemma-reflexive-S2-extend} and Serre's criterion
+Properties, Lemma \ref{properties-lemma-criterion-normal}.
+Thus it suffices to construct an isomorphism
+$$
+\colim \Gamma(V, (\mathcal{F}|_V)^{\otimes n}) \longrightarrow
+\Gamma(V \cap U, \mathcal{O}_U)
+$$
+Since $\mathcal{F}|_V$ is invertible and since $U \cap V$ is
+equal to the set of points where $s|_V$ generates this invertible module,
+this is a special case of
+Properties, Lemma \ref{properties-lemma-invert-s-sections}
+(there is an explicit formula for the map as well).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Xs-codim-complement}
+Assumptions and notation as in Lemma \ref{lemma-structure-sheaf-Xs}.
+If $s$ is nonzero, then every irreducible component of $X \setminus U$
+has codimension $1$ in $X$.
+\end{lemma}
+
+\begin{proof}
+Let $\xi \in X$ be a generic point of an irreducible component $Z$ of
+$X \setminus U$. After replacing $X$ by an open neighbourhood of
+$\xi$ we may assume that $Z = X \setminus U$ is irreducible. Since
+$s : \mathcal{O}_U \to \mathcal{F}|_U$ is an isomorphism, if
+the codimension of $Z$ in $X$ is $\geq 2$, then
+$s : \mathcal{O}_X \to \mathcal{F}$ is an isomorphism by
+Lemma \ref{lemma-reflexive-S2-extend} and Serre's criterion
+Properties, Lemma \ref{properties-lemma-criterion-normal}.
+This would mean that $Z = \emptyset$, a contradiction.
+\end{proof}
+
+\begin{remark}
+\label{remark-structure-sheaf-Xs}
+Let $A$ be a Noetherian normal domain. Let $M$ be a rank $1$ finite reflexive
+$A$-module. Let $s \in M$ be nonzero. Let $\mathfrak p_1, \ldots, \mathfrak p_r$
+be the height $1$ primes of $A$ in the support of $M/As$.
+Then the open $U$ of Lemma \ref{lemma-structure-sheaf-Xs} is
+$$
+U = \Spec(A) \setminus
+\left(V(\mathfrak p_1) \cup \ldots \cup V(\mathfrak p_r)\right)
+$$
+by Lemma \ref{lemma-Xs-codim-complement}. Moreover, if $M^{[n]}$
+denotes the reflexive hull of $M \otimes_A \ldots \otimes_A M$
+($n$ factors), then
+$$
+\Gamma(U, \mathcal{O}_U) = \colim M^{[n]}
+$$
+according to Lemma \ref{lemma-structure-sheaf-Xs}.
+\end{remark}
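+
+\noindent
+The simplest instance of the remark is $A = k[t]$, $M = A$, and $s = t$.
+Here $M/As = A/tA$ is supported at the single height $1$ prime
+$\mathfrak p = (t)$, so that $U = \Spec(A) \setminus V(t) = D(t)$.
+Moreover $M^{[n]} = A$ with transition maps given by multiplication by $t$
+and the formula of the remark becomes
+$$
+\Gamma(U, \mathcal{O}_U) =
+\colim (A \xrightarrow{t} A \xrightarrow{t} A \xrightarrow{t} \ldots) = A_t
+$$
+as expected.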
+
+\begin{lemma}
+\label{lemma-affine-Xs}
+Assumptions and notation as in Lemma \ref{lemma-structure-sheaf-Xs}.
+The following are equivalent
+\begin{enumerate}
+\item the inclusion morphism $j : U \to X$ is affine, and
+\item for every $x \in X \setminus U$ there is an $n > 0$
+such that $s^n \in \mathfrak m_x \mathcal{F}^{[n]}_x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). Then for $x \in X \setminus U$ the inverse image $U_x$ of $U$
+under the canonical morphism $f_x : \Spec(\mathcal{O}_{X, x}) \to X$ is affine
+and does not contain $x$. Thus $\mathfrak m_x \Gamma(U_x, \mathcal{O}_{U_x})$
+is the unit ideal. In particular, we see that we can write
+$$
+1 = \sum f_i g_i
+$$
+with $f_i \in \mathfrak m_x$ and $g_i \in \Gamma(U_x, \mathcal{O}_{U_x})$.
+By Lemma \ref{lemma-structure-sheaf-Xs} we have
+$\Gamma(U_x, \mathcal{O}_{U_x}) = \colim \mathcal{F}^{[n]}_x$
+with transition maps given by multiplication by $s$.
+Hence for some $n > 0$ we have
+$$
+s^n = \sum f_i t_i
+$$
+for some $t_i = s^ng_i \in \mathcal{F}^{[n]}_x$. Thus (2) holds.
+
+\medskip\noindent
+Conversely, assume that (2) holds. The condition that $j$ is affine is
+local on $X$, see
+Morphisms, Lemma \ref{morphisms-lemma-characterize-affine}.
+Thus we may and do assume that $X$ is affine. Our goal is to
+show that $U$ is affine.
+By Cohomology of Schemes, Lemma \ref{coherent-lemma-affine-if-quasi-affine}
+it suffices to show that $H^p(U, \mathcal{O}_U) = 0$ for $p > 0$.
+Since $H^p(U, \mathcal{O}_U) = H^0(X, R^pj_*\mathcal{O}_U)$
+(Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images-application})
+and since $R^pj_*\mathcal{O}_U$ is quasi-coherent
+(Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images})
+it is enough to show the stalk $(R^pj_*\mathcal{O}_U)_x$
+at a point $x \in X$ is zero. Consider the base change diagram
+$$
+\xymatrix{
+U_x \ar[d]_{j_x} \ar[r] & U \ar[d]^j \\
+\Spec(\mathcal{O}_{X, x}) \ar[r] & X
+}
+$$
+By Cohomology of Schemes, Lemma
+\ref{coherent-lemma-flat-base-change-cohomology} we have
+$(R^pj_*\mathcal{O}_U)_x = R^pj_{x, *}\mathcal{O}_{U_x}$.
+Hence we may assume $X$ is local with closed point $x$
+and we have to show $U$ is affine (because this is equivalent to
+the desired vanishing by the reference given above).
+In particular $d = \dim(X)$ is finite
+(Algebra, Proposition \ref{algebra-proposition-dimension}).
+If $x \in U$, then $U = X$ and the result is clear.
+If $d = 0$ and $x \not \in U$, then $U = \emptyset$
+and the result is clear. Now assume $d > 0$ and $x \not \in U$.
+Since $j_*\mathcal{O}_U = \colim \mathcal{F}^{[n]}$
+our assumption means that we can write
+$$
+1 = \sum f_i g_i
+$$
+for some $n > 0$, $f_i \in \mathfrak m_x$, and $g_i \in \mathcal{O}(U)$.
+By induction on $d$ we know that $D(f_i) \cap U$ is affine
+for all $i$: going through the whole argument just given with
+$X$ replaced by $D(f_i)$ we end up with Noetherian local rings
+whose dimension is strictly smaller than $d$. Hence $U$
+is affine by Properties, Lemma \ref{properties-lemma-characterize-affine}
+as desired.
+\end{proof}
+
+
+
+
+
+\section{Relative Proj}
+\label{section-relative-proj}
+
+\noindent
+In this section we prove some results on relative Proj, beginning with
+some very basic ones. Recall that a relative Proj is always
+separated over the base, see
+Constructions, Lemma \ref{constructions-lemma-relative-proj-separated}.
+
+\begin{lemma}
+\label{lemma-relative-proj-quasi-compact}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent graded
+$\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. If one of the following holds
+\begin{enumerate}
+\item $\mathcal{A}$ is of finite type as a sheaf of
+$\mathcal{A}_0$-algebras,
+\item $\mathcal{A}$ is generated by $\mathcal{A}_1$ as an
+$\mathcal{A}_0$-algebra and $\mathcal{A}_1$ is a finite type
+$\mathcal{A}_0$-module,
+\item there exists a finite type quasi-coherent $\mathcal{A}_0$-submodule
+$\mathcal{F} \subset \mathcal{A}_{+}$ such that
+$\mathcal{A}_{+}/\mathcal{F}\mathcal{A}$ is a locally nilpotent
+sheaf of ideals of $\mathcal{A}/\mathcal{F}\mathcal{A}$,
+\end{enumerate}
+then $p$ is quasi-compact.
+\end{lemma}
+
+\begin{proof}
+The question is local on the base, see
+Schemes, Lemma \ref{schemes-lemma-quasi-compact-affine}.
+Thus we may assume $S$ is affine.
+Say $S = \Spec(R)$ and $\mathcal{A}$ corresponds to the
+graded $R$-algebra $A$. Then $X = \text{Proj}(A)$, see
+Constructions, Section \ref{constructions-section-relative-proj-via-glueing}.
+In case (1) we may after possibly localizing more
+assume that $A$ is generated by homogeneous elements
+$f_1, \ldots, f_n \in A_{+}$ over $A_0$. Then
+$A_{+} = (f_1, \ldots, f_n)$ by
+Algebra, Lemma \ref{algebra-lemma-S-plus-generated}.
+In case (3) we see that $\mathcal{F} = \widetilde{M}$
+for some finite type $A_0$-module $M \subset A_{+}$. Say
+$M = \sum A_0f_i$. Say $f_i = \sum f_{i, j}$ is the decomposition
+into homogeneous pieces. The condition in (3) signifies that
+$A_{+} \subset \sqrt{(f_{i, j})}$. Thus in both cases we conclude that
+$\text{Proj}(A)$ is quasi-compact by
+Constructions, Lemma \ref{constructions-lemma-proj-quasi-compact}.
+Finally, (2) follows from (1).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-proj-finite-type}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent graded
+$\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. If $\mathcal{A}$ is of finite type as a sheaf of
+$\mathcal{O}_S$-algebras, then $p$ is of finite type and $\mathcal{O}_X(d)$
+is a finite type $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+The assumption implies that $p$ is quasi-compact, see
+Lemma \ref{lemma-relative-proj-quasi-compact}. Hence it suffices
+to show that $p$ is locally of finite type.
+Thus the question is local on the base and target, see
+Morphisms, Lemma \ref{morphisms-lemma-locally-finite-type-characterize}.
+Say $S = \Spec(R)$ and $\mathcal{A}$ corresponds to the
+graded $R$-algebra $A$. After further localizing on $S$ we may
+assume that $A$ is a finite type $R$-algebra. The scheme $X$ is constructed
+out of glueing the spectra of the rings $A_{(f)}$ for $f \in A_{+}$
+homogeneous. Each of these is of finite type over $R$ by
+Algebra, Lemma \ref{algebra-lemma-dehomogenize-finite-type} part (1).
+Thus $\text{Proj}(A)$ is of finite type over $R$.
+To see the statement on $\mathcal{O}_X(d)$ use part (2) of
+Algebra, Lemma \ref{algebra-lemma-dehomogenize-finite-type}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-proj-universally-closed}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent graded
+$\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. If $\mathcal{O}_S \to \mathcal{A}_0$
+is an integral algebra map\footnote{In other words, the integral
+closure of $\mathcal{O}_S$ in $\mathcal{A}_0$, see
+Morphisms, Definition \ref{morphisms-definition-integral-closure}, equals
+$\mathcal{A}_0$.} and $\mathcal{A}$ is of finite type as an
+$\mathcal{A}_0$-algebra, then $p$ is universally closed.
+\end{lemma}
+
+\begin{proof}
+The question is local on the base. Thus we may assume that $S = \Spec(R)$
+is affine. Let $\mathcal{A}$ be the quasi-coherent $\mathcal{O}_S$-algebra
+associated to the graded $R$-algebra $A$. The assumption is that $R \to A_0$
+is integral and $A$ is of finite type over $A_0$.
+Write $X \to \Spec(R)$ as the composition $X \to \Spec(A_0) \to \Spec(R)$.
+Since $R \to A_0$ is an integral ring map, we see that
+$\Spec(A_0) \to \Spec(R)$ is universally closed, see
+Morphisms, Lemma \ref{morphisms-lemma-integral-universally-closed}.
+The quasi-compact (see
+Constructions, Lemma \ref{constructions-lemma-proj-quasi-compact}) morphism
+$$
+X = \text{Proj}(A) \to \Spec(A_0)
+$$
+satisfies the existence part of the valuative criterion by
+Constructions, Lemma \ref{constructions-lemma-proj-valuative-criterion}
+and hence it is universally closed by
+Schemes, Proposition \ref{schemes-proposition-characterize-universally-closed}.
+Thus $X \to \Spec(R)$ is universally closed as a composition of
+universally closed morphisms.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-proj-proper}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent graded
+$\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. The following conditions are equivalent
+\begin{enumerate}
+\item $\mathcal{A}_0$ is a finite type $\mathcal{O}_S$-module
+and $\mathcal{A}$ is of finite type as an $\mathcal{A}_0$-algebra,
+\item $\mathcal{A}_0$ is a finite type $\mathcal{O}_S$-module
+and $\mathcal{A}$ is of finite type as an $\mathcal{O}_S$-algebra
+\end{enumerate}
+If these conditions hold, then $p$ is locally projective and in
+particular proper.
+\end{lemma}
+
+\begin{proof}
+Assume that $\mathcal{A}_0$ is a finite type $\mathcal{O}_S$-module.
+Choose an affine open $U = \Spec(R) \subset S$ such that $\mathcal{A}$
+corresponds to a graded $R$-algebra $A$ with $A_0$ a finite $R$-module.
+Condition (1) means that (after possibly localizing further on $S$)
+$A$ is a finite type $A_0$-algebra and condition (2) means that
+(after possibly localizing further on $S$) $A$ is a finite type
+$R$-algebra. Thus these conditions imply each other by
+Algebra, Lemma \ref{algebra-lemma-compose-finite-type}.
+
+\medskip\noindent
+A locally projective morphism is proper, see
+Morphisms, Lemma \ref{morphisms-lemma-locally-projective-proper}.
+Thus we may now assume that $S = \Spec(R)$ and $X = \text{Proj}(A)$
+and that $A_0$ is finite over $R$ and $A$ of finite type over $R$.
+We will show that $X = \text{Proj}(A) \to \Spec(R)$ is projective.
+We urge the reader to prove this for themselves, by directly constructing
+a closed immersion of $X$ into a projective space over $R$, instead
+of reading the argument we give below.
+
+\medskip\noindent
+By Lemma \ref{lemma-relative-proj-finite-type}
+we see that $X$ is of finite type over $\Spec(R)$.
+Constructions, Lemma \ref{constructions-lemma-ample-on-proj}
+tells us that $\mathcal{O}_X(d)$ is ample on $X$ for some $d \geq 1$
+(see Properties, Section \ref{properties-section-ample}).
+Hence $X \to \Spec(R)$ is quasi-projective (by
+Morphisms, Definition \ref{morphisms-definition-quasi-projective}).
+By Morphisms, Lemma \ref{morphisms-lemma-quasi-projective-open-projective}
+we conclude that $X$ is isomorphic to an open subscheme of a scheme
+projective over $\Spec(R)$. Therefore, to finish the proof, it suffices
+to show that $X \to \Spec(R)$ is universally closed (use
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}).
+This follows from Lemma \ref{lemma-relative-proj-universally-closed}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-proj-projective}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent graded
+$\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. If $\mathcal{A}$ is generated by
+$\mathcal{A}_1$ over $\mathcal{A}_0$ and $\mathcal{A}_1$
+is a finite type $\mathcal{O}_S$-module, then $p$ is projective.
+\end{lemma}
+
+\begin{proof}
+Namely, the morphism associated to the graded $\mathcal{O}_S$-algebra map
+$$
+\text{Sym}_{\mathcal{O}_S}^*(\mathcal{A}_1)
+\longrightarrow
+\mathcal{A}
+$$
+is a closed immersion $X \to \mathbf{P}(\mathcal{A}_1)$, see
+Constructions, Lemma
+\ref{constructions-lemma-surjective-generated-degree-1-map-relative-proj}.
+\end{proof}
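+
+\noindent
+For example, if $\mathcal{A} = \text{Sym}^*_{\mathcal{O}_S}(\mathcal{E})$
+is the symmetric algebra on a finite type quasi-coherent
+$\mathcal{O}_S$-module $\mathcal{E}$, then the displayed map (with
+$\mathcal{A}_1 = \mathcal{E}$) is an isomorphism and the lemma recovers the
+fact that $\mathbf{P}(\mathcal{E}) \to S$ is projective.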
+
+\begin{lemma}
+\label{lemma-relative-proj-flat}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent graded
+$\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. If $\mathcal{A}_d$ is a flat $\mathcal{O}_S$-module
+for $d \gg 0$, then $p$ is flat and $\mathcal{O}_X(d)$ is
+flat over $S$.
+\end{lemma}
+
+\begin{proof}
+Affine locally, flatness of $X$ over $S$ reduces to the following statement:
+Let $R$ be a ring, let $A$ be a graded $R$-algebra with
+$A_d$ flat over $R$ for $d \gg 0$, let $f \in A_d$
+for some $d > 0$, then $A_{(f)}$ is flat over $R$.
+Since $A_{(f)} = \colim A_{nd}$ where the transition maps
+are given by multiplication by $f$, this follows from
+Algebra, Lemma \ref{algebra-lemma-colimit-flat}.
+Argue similarly to get flatness of $\mathcal{O}_X(d)$ over $S$.
+\end{proof}
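+
+\noindent
+The identification $A_{(f)} = \colim A_{nd}$ used in the proof is given
+explicitly by
+$$
+\colim (A_0 \xrightarrow{f} A_d \xrightarrow{f} A_{2d}
+\xrightarrow{f} \ldots) \longrightarrow A_{(f)},
+\quad A_{nd} \ni a \longmapsto a/f^n
+$$
+Indeed, every element of $A_{(f)}$ can be written as $a/f^n$ with
+$a \in A_{nd}$, and $a/f^n = (fa)/f^{n + 1}$ shows compatibility with the
+transition maps.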
+
+\begin{lemma}
+\label{lemma-relative-proj-finite-presentation}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent graded
+$\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. If $\mathcal{A}$ is a finitely presented
+$\mathcal{O}_S$-algebra, then $p$ is of finite presentation
+and $\mathcal{O}_X(d)$ is an $\mathcal{O}_X$-module of finite presentation.
+\end{lemma}
+
+\begin{proof}
+Affine locally this reduces to the following statement:
+Let $R$ be a ring and let $A$ be a finitely presented graded $R$-algebra.
+Then $\text{Proj}(A) \to \Spec(R)$ is of finite presentation
+and $\mathcal{O}_{\text{Proj}(A)}(d)$ is a
+$\mathcal{O}_{\text{Proj}(A)}$-module of finite presentation.
+The finite presentation condition implies we can choose
+a presentation
+$$
+A = R[X_1, \ldots, X_n]/(F_1, \ldots, F_m)
+$$
+where $R[X_1, \ldots, X_n]$ is a polynomial ring graded by giving
+weights $d_i$ to $X_i$ and $F_1, \ldots, F_m$ are homogeneous polynomials
+of degree $e_j$. Let $R_0 \subset R$ be the subring generated by
+the coefficients of the polynomials $F_1, \ldots, F_m$.
+Then we set $A_0 = R_0[X_1, \ldots, X_n]/(F_1, \ldots, F_m)$.
+By construction $A = A_0 \otimes_{R_0} R$.
+Thus by
+Constructions, Lemma \ref{constructions-lemma-base-change-map-proj}
+it suffices to prove the result for $X_0 = \text{Proj}(A_0)$ over $R_0$.
+By Lemma \ref{lemma-relative-proj-finite-type}
+we know $X_0$ is of finite type over $R_0$ and
+$\mathcal{O}_{X_0}(d)$ is a quasi-coherent $\mathcal{O}_{X_0}$-module
+of finite type.
+Since $R_0$ is Noetherian (as a finitely generated $\mathbf{Z}$-algebra)
+we see that $X_0$ is of finite presentation over $R_0$
+(Morphisms, Lemma
+\ref{morphisms-lemma-noetherian-finite-type-finite-presentation})
+and $\mathcal{O}_{X_0}(d)$ is of finite presentation by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-Noetherian}.
+This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Closed subschemes of relative proj}
+\label{section-closed-in-relative-proj}
+
+\noindent
In this section we prove some auxiliary lemmas about closed subschemes
of relative proj.
+
+\begin{lemma}
+\label{lemma-closed-subscheme-proj}
+Let $S$ be a scheme. Let $\mathcal{A}$ be a quasi-coherent graded
+$\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. Let $i : Z \to X$ be a closed subscheme. Denote
+$\mathcal{I} \subset \mathcal{A}$ the kernel of the canonical map
+$$
+\mathcal{A}
+\longrightarrow
+\bigoplus\nolimits_{d \geq 0} p_*\left((i_*\mathcal{O}_Z)(d)\right).
+$$
+If $p$ is quasi-compact, then there is an isomorphism
+$Z = \underline{\text{Proj}}_S(\mathcal{A}/\mathcal{I})$.
+\end{lemma}
+
+\begin{proof}
+The morphism $p$ is separated by
+Constructions, Lemma \ref{constructions-lemma-relative-proj-separated}.
+As $p$ is quasi-compact, $p_*$ transforms quasi-coherent modules
+into quasi-coherent modules, see
+Schemes, Lemma \ref{schemes-lemma-push-forward-quasi-coherent}.
+Hence $\mathcal{I}$ is a quasi-coherent $\mathcal{O}_S$-module.
+In particular, $\mathcal{B} = \mathcal{A}/\mathcal{I}$ is a
+quasi-coherent graded $\mathcal{O}_S$-algebra. The functoriality
+morphism $Z' = \underline{\text{Proj}}_S(\mathcal{B}) \to
+\underline{\text{Proj}}_S(\mathcal{A})$ is everywhere defined and
+a closed immersion, see Constructions, Lemma
+\ref{constructions-lemma-surjective-graded-rings-map-relative-proj}.
+Hence it suffices to prove $Z = Z'$ as closed subschemes of $X$.
+
+\medskip\noindent
+Having said this, the question is local on the base and we may assume
+that $S = \Spec(R)$ and that $X = \text{Proj}(A)$ for some
+graded $R$-algebra $A$. Assume $\mathcal{I} = \widetilde{I}$
+for $I \subset A$ a graded ideal. By
+Constructions, Lemma \ref{constructions-lemma-proj-quasi-compact}
+there exist $f_0, \ldots, f_n \in A_{+}$ such that
$A_{+} \subset \sqrt{(f_0, \ldots, f_n)}$, in other words
+$X = \bigcup D_{+}(f_i)$. Therefore, it suffices to check that
+$Z \cap D_{+}(f_i) = Z' \cap D_{+}(f_i)$ for each $i$.
+By renumbering we may assume $i = 0$.
+Say $Z \cap D_{+}(f_0)$, resp.\ $Z' \cap D_{+}(f_0)$
+is cut out by the ideal $J$, resp.\ $J'$ of $A_{(f_0)}$.
+
+\medskip\noindent
+The inclusion $J' \subset J$.
+Let $d$ be the least common multiple of $\deg(f_0), \ldots, \deg(f_n)$.
+Note that each of the twists $\mathcal{O}_X(nd)$ is invertible, trivialized
+by $f_i^{nd/\deg(f_i)}$ over $D_{+}(f_i)$, and that for any quasi-coherent
+module $\mathcal{F}$ on $X$ the multiplication maps
+$\mathcal{O}_X(nd) \otimes_{\mathcal{O}_X} \mathcal{F}(m)
+\to \mathcal{F}(nd + m)$ are isomorphisms, see
+Constructions, Lemma \ref{constructions-lemma-when-invertible}.
+Observe that $J'$ is the ideal generated by the elements $g/f_0^e$ where
+$g \in I$ is homogeneous of degree $e\deg(f_0)$ (see proof of
+Constructions, Lemma
+\ref{constructions-lemma-surjective-graded-rings-map-proj}).
+Of course, by replacing $g$ by $f_0^lg$ for suitable $l$
we may always assume that $d | e$. Then, since $g$ vanishes as a section of
$\mathcal{O}_X(e\deg(f_0))$ restricted to $Z$ we see that
$g/f_0^e$ is an element of $J$. Thus $J' \subset J$.
+
+\medskip\noindent
+Conversely, suppose that $g/f_0^e \in J$. Again we may assume $d | e$.
+Pick $i \in \{1, \ldots, n\}$. Then $Z \cap D_{+}(f_i)$ is
+cut out by some ideal $J_i \subset A_{(f_i)}$. Moreover,
+$$
+J \cdot A_{(f_0f_i)} = J_i \cdot A_{(f_0f_i)}.
+$$
+The right hand side is the localization of $J_i$ with respect to
+$f_0^{\deg(f_i)}/f_i^{\deg(f_0)}$. It follows that
+$$
+f_0^{e_i}g/f_i^{(e_i + e)\deg(f_0)/\deg(f_i)} \in J_i
+$$
+for some $e_i \gg 0$ sufficiently divisible. This proves that
+$f_0^{\max(e_i)}g$ is an element of $I$, because its restriction to each
+affine open $D_{+}(f_i)$ vanishes on the closed subscheme
+$Z \cap D_{+}(f_i)$. Hence $g/f_0^e \in J'$ and we conclude $J \subset J'$
+as desired.
+\end{proof}
+
+\begin{example}
+\label{example-closed-subscheme-of-proj}
+Let $A$ be a graded ring. Let $X = \text{Proj}(A)$ and $S = \Spec(A_0)$.
+Given a graded ideal $I \subset A$ we obtain a closed subscheme
+$V_+(I) = \text{Proj}(A/I) \to X$ by Constructions, Lemma
+\ref{constructions-lemma-surjective-graded-rings-map-proj}.
+Translating the result of Lemma \ref{lemma-closed-subscheme-proj}
+we see that if $X$ is quasi-compact, then any closed subscheme $Z$
+is of the form $V_+(I(Z))$ where the graded ideal $I(Z) \subset A$
+is given by the rule
+$$
+I(Z) = \Ker(A \longrightarrow
+\bigoplus\nolimits_{n \geq 0} \Gamma(Z, \mathcal{O}_Z(n)))
+$$
+Then we can ask the following two natural questions:
+\begin{enumerate}
+\item Which ideals $I$ are of the form $I(Z)$?
+\item Can we describe the operation $I \mapsto I(V_+(I))$?
+\end{enumerate}
We will answer these questions when $A$ is Noetherian.
+
+\medskip\noindent
+First, assume that $A$ is generated by $A_1$ over $A_0$. In this case,
for any ideal $I \subset A$ the kernel of the map
$A/I \to \bigoplus\nolimits_n \Gamma(\text{Proj}(A/I), \mathcal{O}(n))$
+is the set of torsion elements of $A/I$, see
+Cohomology of Schemes, Proposition
+\ref{coherent-proposition-coherent-modules-on-proj}.
+Hence we conclude that
+$$
+I(V_+(I)) = \{x \in A \mid A_n x \subset I\text{ for some }n \geq 0\}
+$$
+The ideal on the right is sometimes called the saturation of $I$.
+This answers (2) and the answer to (1) is that an ideal is
+of the form $I(Z)$ if and only if it is saturated, i.e., equal
+to its own saturation.
+
+\medskip\noindent
+If $A$ is a general Noetherian graded ring, then we use
+Cohomology of Schemes, Proposition
+\ref{coherent-proposition-coherent-modules-on-proj-general}.
+Thus we see that for $d$ equal to the lcm of the degrees
+of generators of $A$ over $A_0$ we get
+$$
+I(V_+(I)) = \{x \in A \mid (Ax)_{nd} \subset I\text{ for all }n \gg 0\}
+$$
+This can be different from the saturation of $I$ if $d \not = 1$.
+For example, suppose that $A = \mathbf{Q}[x, y]$
+with $\deg(x) = 2$ and $\deg(y) = 3$. Then $d = 6$.
+Let $I = (y^2)$. Then we see $y \in I(V_+(I))$ because
+for any homogeneous $f \in A$ such that $6 | \deg(fy)$
we have $y | f$, hence $fy \in I$. It follows that
$I(V_+(I)) = (y)$. On the other hand $x^n y \not \in I$ for all $n$,
so $y$ is not in the saturation of $I$ and
$I(V_+(I))$ is not equal to the saturation.
+\end{example}
+
+\begin{lemma}
+\label{lemma-equation-codim-1-in-projective-space}
+Let $R$ be a UFD. Let $Z \subset \mathbf{P}^n_R$ be a closed subscheme
+which has no embedded points such that every irreducible component
+of $Z$ has codimension $1$ in $\mathbf{P}^n_R$.
+Then the ideal $I(Z) \subset R[T_0, \ldots, T_n]$ corresponding
+to $Z$ is principal.
+\end{lemma}
+
+\begin{proof}
+Observe that the local rings of $X = \mathbf{P}^n_R$ are
+UFDs because $X$ is covered by affine pieces isomorphic
+to $\mathbf{A}^n_R$ and $R[x_1, \ldots, x_n]$ is a UFD
+(Algebra, Lemma \ref{algebra-lemma-polynomial-ring-UFD}).
+Thus $Z$ is an effective Cartier divisor by
+Lemma \ref{lemma-codimension-1-is-effective-Cartier}.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the quasi-coherent
+sheaf of ideals corresponding to $Z$.
+Choose an isomorphism $\mathcal{O}(m) \to \mathcal{I}$
+for some $m \in \mathbf{Z}$, see
+Lemma \ref{lemma-Pic-projective-space-UFD}.
+Then the composition
+$$
+\mathcal{O}_X(m) \to \mathcal{I} \to \mathcal{O}_X
+$$
+is nonzero. We conclude that $m \leq 0$ and that the corresponding
+section of $\mathcal{O}_X(m)^{\otimes -1} = \mathcal{O}_X(-m)$
+is given by some $F \in R[T_0, \ldots, T_n]$ of degree $-m$, see
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cohomology-projective-space-over-ring}.
+Thus on the $i$th standard open $U_i = D_+(T_i)$ the
+closed subscheme $Z \cap U_i$ is cut out by the ideal
+$$
+(F(T_0/T_i, \ldots, T_n/T_i)) \subset R[T_0/T_i, \ldots, T_n/T_i]
+$$
Thus the set of homogeneous elements of the graded ideal
$I(Z) = \Ker(R[T_0, \ldots, T_n] \to \bigoplus\nolimits_k \Gamma(\mathcal{O}_Z(k)))$
is the set of homogeneous polynomials $G$ such that
+$$
+G(T_0/T_i, \ldots, T_n/T_i) \in (F(T_0/T_i, \ldots, T_n/T_i))
+$$
+for $i = 0, \ldots, n$. Clearing denominators, we see there exist
+$e_i \geq 0$ such that
+$$
+T_i^{e_i}G \in (F)
+$$
+for $i = 0, \ldots, n$. As $R$ is a UFD, so is $R[T_0, \ldots, T_n]$.
+Then $F | T_0^{e_0}G$ and $F | T_1^{e_1}G$ implies $F | G$ as
+$T_0^{e_0}$ and $T_1^{e_1}$ have no factor in common. Thus $I(Z) = (F)$.
+\end{proof}
+
+\noindent
+In case the closed subscheme is locally cut out by finitely many
+equations we can define it by a finite type ideal sheaf of
+$\mathcal{A}$.
+
+\begin{lemma}
+\label{lemma-closed-subscheme-proj-finite}
+Let $S$ be a quasi-compact and quasi-separated scheme.
+Let $\mathcal{A}$ be a quasi-coherent graded $\mathcal{O}_S$-algebra. Let
+$p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. Let $i : Z \to X$ be a closed subscheme.
If $p$ is quasi-compact and $i$ is of finite presentation, then there exist
a $d > 0$ and a quasi-coherent finite type $\mathcal{O}_S$-submodule
+$\mathcal{F} \subset \mathcal{A}_d$ such that
+$Z = \underline{\text{Proj}}_S(\mathcal{A}/\mathcal{F}\mathcal{A})$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-closed-subscheme-proj} we know there exists a
+quasi-coherent graded sheaf of ideals $\mathcal{I} \subset \mathcal{A}$
+such that $Z = \underline{\text{Proj}}(\mathcal{A}/\mathcal{I})$.
+Since $S$ is quasi-compact we can choose a finite affine open covering
+$S = U_1 \cup \ldots \cup U_n$. Say $U_i = \Spec(R_i)$. Let
+$\mathcal{A}|_{U_i}$ correspond to the graded $R_i$-algebra $A_i$ and
+$\mathcal{I}|_{U_i}$ to the graded ideal $I_i \subset A_i$. Note that
+$p^{-1}(U_i) = \text{Proj}(A_i)$ as schemes over $R_i$.
+Since $p$ is quasi-compact we can choose finitely many homogeneous
+elements $f_{i, j} \in A_{i, +}$ such that $p^{-1}(U_i) = D_{+}(f_{i, j})$.
+The condition on $Z \to X$ means that the ideal sheaf of $Z$ in
+$\mathcal{O}_X$ is of finite type, see
+Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-finite-presentation}.
+Hence we can find finitely many homogeneous elements
+$h_{i, j, k} \in I_i \cap A_{i, +}$ such that the ideal of
+$Z \cap D_{+}(f_{i, j})$ is generated by the elements
+$h_{i, j, k}/f_{i, j}^{e_{i, j, k}}$. Choose $d > 0$ to be a common multiple
+of all the integers $\deg(f_{i, j})$ and $\deg(h_{i, j, k})$.
+By Properties, Lemma \ref{properties-lemma-quasi-coherent-colimit-finite-type}
+there exists a finite type quasi-coherent $\mathcal{F} \subset \mathcal{I}_d$
+such that all the local sections
+$$
+h_{i, j, k}f_{i, j}^{(d - \deg(h_{i, j, k}))/\deg(f_{i, j})}
+$$
+are sections of $\mathcal{F}$. By construction $\mathcal{F}$ is a solution.
+\end{proof}
+
+\noindent
+The following version of Lemma \ref{lemma-closed-subscheme-proj-finite}
+will be used in the proof of
+Lemma \ref{lemma-composition-admissible-blowups}.
+
+\begin{lemma}
+\label{lemma-closed-subscheme-proj-finite-type}
+Let $S$ be a quasi-compact and quasi-separated scheme.
+Let $\mathcal{A}$ be a quasi-coherent graded $\mathcal{O}_S$-algebra.
+Let $p : X = \underline{\text{Proj}}_S(\mathcal{A}) \to S$ be the relative
+Proj of $\mathcal{A}$. Let $i : Z \to X$ be a closed subscheme.
+Let $U \subset X$ be an open. Assume that
+\begin{enumerate}
+\item $p$ is quasi-compact,
\item $i$ is of finite presentation,
+\item $U \cap p(i(Z)) = \emptyset$,
+\item $U$ is quasi-compact,
+\item $\mathcal{A}_n$ is a finite type $\mathcal{O}_S$-module for all $n$.
+\end{enumerate}
Then there exist a $d > 0$ and a quasi-coherent finite type
$\mathcal{O}_S$-submodule $\mathcal{F} \subset \mathcal{A}_d$ with (a)
+$Z = \underline{\text{Proj}}_S(\mathcal{A}/\mathcal{F}\mathcal{A})$
+and (b) the support of $\mathcal{A}_d/\mathcal{F}$ is disjoint from $U$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I} \subset \mathcal{A}$ be the sheaf of quasi-coherent
+graded ideals constructed in Lemma \ref{lemma-closed-subscheme-proj}.
+Let $U_i$, $R_i$, $A_i$, $I_i$, $f_{i, j}$, $h_{i, j, k}$, and $d$
+be as constructed in the proof of
+Lemma \ref{lemma-closed-subscheme-proj-finite}.
+Since $U \cap p(i(Z)) = \emptyset$ we see that
+$\mathcal{I}_d|_U = \mathcal{A}_d|_U$ (by our construction of
+$\mathcal{I}$ as a kernel). Since $U$ is quasi-compact we
+can choose a finite affine open covering $U = W_1 \cup \ldots \cup W_m$.
+Since $\mathcal{A}_d$ is of finite type we can find finitely many sections
+$g_{t, s} \in \mathcal{A}_d(W_t)$ which generate
+$\mathcal{A}_d|_{W_t} = \mathcal{I}_d|_{W_t}$
+as an $\mathcal{O}_{W_t}$-module. To finish the proof, note that by
+Properties, Lemma \ref{properties-lemma-quasi-coherent-colimit-finite-type}
+there exists a finite type $\mathcal{F} \subset \mathcal{I}_d$
+such that all the local sections
+$$
+h_{i, j, k}f_{i, j}^{(d - \deg(h_{i, j, k}))/\deg(f_{i, j})}
+\quad\text{and}\quad
+g_{t, s}
+$$
+are sections of $\mathcal{F}$. By construction $\mathcal{F}$ is a solution.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-conormal-sheaf-section-projective-bundle}
+Let $X$ be a scheme. Let $\mathcal{E}$ be a quasi-coherent
+$\mathcal{O}_X$-module. There is a bijection
+$$
+\left\{
+\begin{matrix}
+\text{sections }\sigma\text{ of the } \\
+\text{morphism } \mathbf{P}(\mathcal{E}) \to X
+\end{matrix}
+\right\}
+\leftrightarrow
+\left\{
+\begin{matrix}
+\text{surjections }\mathcal{E} \to \mathcal{L}\text{ where} \\
+\mathcal{L}\text{ is an invertible }\mathcal{O}_X\text{-module}
+\end{matrix}
+\right\}
+$$
+In this case $\sigma$ is a closed immersion and there is a canonical
+isomorphism
+$$
+\Ker(\mathcal{E} \to \mathcal{L})
+\otimes_{\mathcal{O}_X} \mathcal{L}^{\otimes -1}
+\longrightarrow
+\mathcal{C}_{\sigma(X)/\mathbf{P}(\mathcal{E})}
+$$
+Both the bijection and isomorphism are compatible with base change.
+\end{lemma}
+
+\begin{proof}
+Recall that $\pi : \mathbf{P}(\mathcal{E}) \to X$ is the relative proj of the
+symmetric algebra on $\mathcal{E}$, see
+Constructions, Definition \ref{constructions-definition-projective-bundle}.
Hence the description of the sections $\sigma$ follows immediately from
the description of the functor of points of $\mathbf{P}(\mathcal{E})$
in Constructions, Lemma \ref{constructions-lemma-apply-relative}.
+Since $\pi$ is separated, any section is a closed immersion
+(Constructions, Lemma \ref{constructions-lemma-relative-proj-separated} and
+Schemes, Lemma \ref{schemes-lemma-section-immersion}).
Let $U \subset X$ be an affine open and let $k, s \in \mathcal{E}(U)$
be local sections such that $k$ maps to
zero in $\mathcal{L}$ and $s$ maps to a generator $\overline{s}$
of $\mathcal{L}$.
Then $f = k/s$ is a section of $\mathcal{O}_{\mathbf{P}(\mathcal{E})}$
defined in an open neighbourhood $D_+(s)$ of $\sigma(U)$ in $\pi^{-1}(U)$.
Moreover, since $k$ maps to zero in $\mathcal{L}$ we see that
$f$ is a section of the ideal sheaf of $\sigma(U)$ in $\pi^{-1}(U)$.
+Thus we can take the image $\overline{f}$ of $f$ in
+$\mathcal{C}_{\sigma(X)/\mathbf{P}(\mathcal{E})}(U)$.
+We claim (1) that the image $\overline{f}$ depends only on the
+sections $k$ and $\overline{s}$ and not on the choice of $s$
+and (2) that we get an isomorphism over $U$ in this manner (see below).
+However, once (1) and (2) are established, we see that
+the construction is compatible with base change by $U' \to U$
+where $U'$ is affine, which proves that these local maps glue
+and are compatible with arbitrary base change.
+
+\medskip\noindent
+To prove (1) and (2) we make explicit what is going on.
+Namely, say $U = \Spec(A)$ and say $\mathcal{E} \to \mathcal{L}$
+corresponds to the map of $A$-modules $M \to N$. Then
+$k \in K = \Ker(M \to N)$ and $s \in M$ maps to a generator $\overline{s}$
+of $N$. Hence $M = K \oplus A s$. Thus
+$$
+\text{Sym}(M) = \text{Sym}(K)[s]
+$$
+Consider the identification $\text{Sym}(K) \to \text{Sym}(M)_{(s)}$
+via the rule $g \mapsto g/s^n$ for $g \in \text{Sym}^n(K)$.
+This gives an isomorphism $D_+(s) = \Spec(\text{Sym}(K))$ such
+that $\sigma$ corresponds to the ring map $\text{Sym}(K) \to A$
+mapping $K$ to zero. Via this isomorphism we see that the quasi-coherent
+module corresponding to $K$ is identified with
+$\mathcal{C}_{\sigma(U)/D_+(s)}$ proving (2).
+Finally, suppose that $s' = k' + s$ for some $k' \in K$.
+Then
+$$
+k/s' = (k/s) (s/s') = (k/s) (s'/s)^{-1} = (k/s) (1 + k'/s)^{-1}
+$$
+in an open neighbourhood of $\sigma(U)$ in $D_+(s)$. Thus we see that
+$s'/s$ restricts to $1$ on $\sigma(U)$ and we see that $k/s'$ maps to
+the same element of the conormal sheaf as does $k/s$ thereby proving (1).
+\end{proof}
+
+
+
+
+
+\section{Blowing up}
+\label{section-blowing-up}
+
+\noindent
+Blowing up is an important tool in algebraic geometry.
+
+\begin{definition}
+\label{definition-blow-up}
+Let $X$ be a scheme. Let $\mathcal{I} \subset \mathcal{O}_X$ be a
+quasi-coherent sheaf of ideals, and let $Z \subset X$ be the closed subscheme
+corresponding to $\mathcal{I}$, see
+Schemes, Definition \ref{schemes-definition-immersion}.
+The {\it blowing up of $X$ along $Z$}, or the
+{\it blowing up of $X$ in the ideal sheaf $\mathcal{I}$} is
+the morphism
+$$
+b :
+\underline{\text{Proj}}_X
+\left(\bigoplus\nolimits_{n \geq 0} \mathcal{I}^n\right)
+\longrightarrow
+X
+$$
+The {\it exceptional divisor} of the blowup is the inverse image
+$b^{-1}(Z)$. Sometimes $Z$ is called the {\it center} of the blowup.
+\end{definition}
+
+\noindent
+We will see later that the exceptional divisor is an effective Cartier
+divisor. Moreover, the blowing up is characterized as the ``smallest'' scheme
+over $X$ such that the inverse image of $Z$ is an effective Cartier divisor.
+
+\medskip\noindent
+If $b : X' \to X$ is the blowup of $X$ in $Z$, then we often denote
+$\mathcal{O}_{X'}(n)$ the twists of the structure sheaf. Note that these
+are invertible $\mathcal{O}_{X'}$-modules and that
+$\mathcal{O}_{X'}(n) = \mathcal{O}_{X'}(1)^{\otimes n}$
+because $X'$ is the relative Proj of a quasi-coherent graded
+$\mathcal{O}_X$-algebra which is generated in degree $1$, see
+Constructions, Lemma \ref{constructions-lemma-apply-relative}.
+Note that $\mathcal{O}_{X'}(1)$ is $b$-relatively very ample, even though
+$b$ need not be of finite type or even quasi-compact, because
+$X'$ comes equipped with a closed immersion into $\mathbf{P}(\mathcal{I})$,
+see Morphisms, Example \ref{morphisms-example-very-ample}.
+
+\begin{lemma}
+\label{lemma-blowing-up-affine}
+Let $X$ be a scheme. Let $\mathcal{I} \subset \mathcal{O}_X$ be a
+quasi-coherent sheaf of ideals. Let $U = \Spec(A)$ be an affine open
+subscheme of $X$ and let $I \subset A$ be the ideal corresponding to
+$\mathcal{I}|_U$. If $b : X' \to X$ is the blowup of $X$ in $\mathcal{I}$,
+then there is a canonical isomorphism
+$$
+b^{-1}(U) = \text{Proj}(\bigoplus\nolimits_{d \geq 0} I^d)
+$$
+of $b^{-1}(U)$ with the homogeneous spectrum of the Rees algebra
of $I$ in $A$. Moreover, $b^{-1}(U)$ has an affine open covering by the
spectra of the affine blowup algebras $A[\frac{I}{a}]$ for $a \in I$.
+\end{lemma}
+
+\begin{proof}
+The first statement is clear from the construction of the relative Proj via
+glueing, see Constructions, Section
+\ref{constructions-section-relative-proj-via-glueing}.
+For $a \in I$ denote $a^{(1)}$ the element $a$ seen as an element of
+degree $1$ in the Rees algebra $\bigoplus_{n \geq 0} I^n$.
+Since these elements generate the Rees algebra over $A$ we see that
+$\text{Proj}(\bigoplus_{d \geq 0} I^d)$ is covered by the affine opens
+$D_{+}(a^{(1)})$. The affine scheme $D_{+}(a^{(1)})$ is the spectrum of
+the affine blowup algebra $A' = A[\frac{I}{a}]$, see
+Algebra, Definition \ref{algebra-definition-blow-up}.
+This finishes the proof.
+\end{proof}
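\medskip\noindent
A standard example (included here only as an illustration): let
$A = k[x, y]$ be a polynomial ring over a field $k$ and $I = (x, y)$.
Since $x, y$ is a regular sequence, the Rees algebra is
$$
\bigoplus\nolimits_{n \geq 0} I^n = A[S, T]/(xT - yS)
$$
where $S = x^{(1)}$ and $T = y^{(1)}$ have degree $1$. Hence the blowup
of $\Spec(A)$ in $I$ is covered by the two affine opens
$D_{+}(x^{(1)}) = \Spec(A[\frac{I}{x}]) = \Spec(k[x, y/x])$ and
$D_{+}(y^{(1)}) = \Spec(k[y, x/y])$, each of which is an affine plane.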
+
+\begin{lemma}
+\label{lemma-flat-base-change-blowing-up}
+\begin{slogan}
+Blowing up commutes with flat base change.
+\end{slogan}
+Let $X_1 \to X_2$ be a flat morphism of schemes. Let $Z_2 \subset X_2$ be a
+closed subscheme. Let $Z_1$ be the inverse image of $Z_2$ in $X_1$.
+Let $X'_i$ be the blowup of $Z_i$ in $X_i$. Then there exists a cartesian
+diagram
+$$
+\xymatrix{
+X_1' \ar[r] \ar[d] & X_2' \ar[d] \\
+X_1 \ar[r] & X_2
+}
+$$
+of schemes.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I}_2$ be the ideal sheaf of $Z_2$ in $X_2$.
+Denote $g : X_1 \to X_2$ the given morphism. Then the ideal sheaf
+$\mathcal{I}_1$ of $Z_1$ is the image of
+$g^*\mathcal{I}_2 \to \mathcal{O}_{X_1}$
+(by definition of the inverse image, see
+Schemes, Definition \ref{schemes-definition-inverse-image-closed-subscheme}).
+By Constructions, Lemma \ref{constructions-lemma-relative-proj-base-change}
+we see that $X_1 \times_{X_2} X_2'$ is the relative Proj of
+$\bigoplus_{n \geq 0} g^*\mathcal{I}_2^n$. Because $g$ is flat the map
+$g^*\mathcal{I}_2^n \to \mathcal{O}_{X_1}$ is injective with image
+$\mathcal{I}_1^n$. Thus we see that $X_1 \times_{X_2} X_2' = X_1'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowing-up-gives-effective-Cartier-divisor}
+Let $X$ be a scheme. Let $Z \subset X$ be a closed subscheme.
+The blowing up $b : X' \to X$ of $Z$ in $X$
+has the following properties:
+\begin{enumerate}
+\item $b|_{b^{-1}(X \setminus Z)} : b^{-1}(X \setminus Z) \to X \setminus Z$
+is an isomorphism,
+\item the exceptional divisor $E = b^{-1}(Z)$ is an effective Cartier divisor
+on $X'$,
+\item there is a canonical isomorphism
+$\mathcal{O}_{X'}(-1) = \mathcal{O}_{X'}(E)$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+As blowing up commutes with restrictions to open subschemes
+(Lemma \ref{lemma-flat-base-change-blowing-up}) the first statement
+just means that $X' = X$ if $Z = \emptyset$. In this case we are blowing
+up in the ideal sheaf $\mathcal{I} = \mathcal{O}_X$ and the result follows from
+Constructions, Example \ref{constructions-example-trivial-proj}.
+
+\medskip\noindent
+The second statement is local on $X$, hence we may assume $X$ affine.
+Say $X = \Spec(A)$ and $Z = \Spec(A/I)$. By Lemma \ref{lemma-blowing-up-affine}
+we see that $X'$ is covered by the spectra of the affine blowup algebras
+$A' = A[\frac{I}{a}]$. Then $IA' = aA'$ and $a$ maps to a nonzerodivisor
+in $A'$ according to Algebra, Lemma \ref{algebra-lemma-affine-blowup}.
+This proves the lemma as the inverse image of $Z$ in $\Spec(A')$
+corresponds to $\Spec(A'/IA') \subset \Spec(A')$.
+
+\medskip\noindent
+Consider the canonical map
+$\psi_{univ, 1} : b^*\mathcal{I} \to \mathcal{O}_{X'}(1)$, see
+discussion following Constructions, Definition
+\ref{constructions-definition-relative-proj}.
+We claim that this factors through an isomorphism
+$\mathcal{I}_E \to \mathcal{O}_{X'}(1)$ (which proves the final assertion).
+Namely, on the affine open corresponding to the blowup algebra
+$A' = A[\frac{I}{a}]$ mentioned above $\psi_{univ, 1}$ corresponds to
+the $A'$-module map
+$$
+I \otimes_A A'
+\longrightarrow
+\left(\Big(\bigoplus\nolimits_{d \geq 0} I^d\Big)_{a^{(1)}}\right)_1
+$$
+where $a^{(1)}$ is as in Algebra, Definition \ref{algebra-definition-blow-up}.
+We omit the verification that this is the map
+$I \otimes_A A' \to IA' = aA'$.
+\end{proof}
+
+\begin{lemma}[Universal property blowing up]
+\label{lemma-universal-property-blowing-up}
+Let $X$ be a scheme. Let $Z \subset X$ be a closed subscheme.
+Let $\mathcal{C}$ be the full subcategory of $(\Sch/X)$ consisting
+of $Y \to X$ such that the inverse image of $Z$ is an effective
+Cartier divisor on $Y$. Then the blowing up $b : X' \to X$ of $Z$ in $X$
+is a final object of $\mathcal{C}$.
+\end{lemma}
+
+\begin{proof}
+We see that $b : X' \to X$ is an object of $\mathcal{C}$ according to
+Lemma \ref{lemma-blowing-up-gives-effective-Cartier-divisor}.
+Let $f : Y \to X$ be an object of $\mathcal{C}$. We have to show there exists
+a unique morphism $Y \to X'$ over $X$. Let $D = f^{-1}(Z)$.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be the ideal sheaf of $Z$
+and let $\mathcal{I}_D$ be the ideal sheaf of $D$. Then
+$f^*\mathcal{I} \to \mathcal{I}_D$ is a surjection
+to an invertible $\mathcal{O}_Y$-module. This extends to a map
+$\psi : \bigoplus f^*\mathcal{I}^d \to \bigoplus \mathcal{I}_D^d$
+of graded $\mathcal{O}_Y$-algebras. (We observe that
+$\mathcal{I}_D^d = \mathcal{I}_D^{\otimes d}$ as $D$ is an
+effective Cartier divisor.) By the material in
+Constructions, Section \ref{constructions-section-relative-proj}
+the triple $(1, f : Y \to X, \psi)$ defines a morphism $Y \to X'$ over $X$.
+The restriction
+$$
+Y \setminus D \longrightarrow X' \setminus b^{-1}(Z) = X \setminus Z
+$$
+is unique. The open $Y \setminus D$ is scheme theoretically dense in $Y$
+according to Lemma \ref{lemma-complement-effective-Cartier-divisor}.
+Thus the morphism $Y \to X'$ is unique by
+Morphisms, Lemma \ref{morphisms-lemma-equality-of-morphisms}
+(also $b$ is separated by Constructions, Lemma
+\ref{constructions-lemma-relative-proj-separated}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-affine-blowup}
+Let $b : X' \to X$ be the blowing up of the scheme $X$ along a closed
+subscheme $Z$. Let $U = \Spec(A)$ be an affine open of $X$ and let
+$I \subset A$ be the ideal corresponding to $Z \cap U$.
+Let $a \in I$ and let $x' \in X'$ be a point mapping to a point of $U$.
+Then $x'$ is a point of the affine open $U' = \Spec(A[\frac{I}{a}])$
+if and only if the image of $a$ in $\mathcal{O}_{X', x'}$ cuts
+out the exceptional divisor.
+\end{lemma}
+
+\begin{proof}
+Since the exceptional divisor over $U'$ is cut out by the image of
+$a$ in $A' = A[\frac{I}{a}]$ one direction is clear. Conversely, assume
+that the image of $a$ in $\mathcal{O}_{X', x'}$ cuts out $E$.
+Since every element of $I$ maps to an element of the ideal
+defining $E$ over $b^{-1}(U)$ we see that elements of $I$ become
+divisible by $a$ in $\mathcal{O}_{X', x'}$. Thus for $f \in I^n$
+we can write $f = \psi(f) a^n$ for some $\psi(f) \in \mathcal{O}_{X', x'}$.
+Observe that since $a$ maps to a nonzerodivisor of $\mathcal{O}_{X', x'}$
+the element $\psi(f)$ is uniquely characterized by this. Then we
+define
+$$
+A' \longrightarrow \mathcal{O}_{X', x'},\quad
+f/a^n \longmapsto \psi(f)
+$$
Here we use the description of blowup algebras given following
Algebra, Definition \ref{algebra-definition-blow-up}. The uniqueness mentioned
+above shows that this is an $A$-algebra homomorphism.
This gives a morphism $\Spec(\mathcal{O}_{X', x'}) \to \Spec(A') = U'$.
+By the universal property of blowing up
+(Lemma \ref{lemma-universal-property-blowing-up})
+this is a morphism over
+$X'$, which of course implies that $x' \in U'$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blow-up-effective-Cartier-divisor}
+Let $X$ be a scheme. Let $Z \subset X$ be an effective Cartier divisor.
+The blowup of $X$ in $Z$ is the identity morphism of $X$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the universal property of blowups
+(Lemma \ref{lemma-universal-property-blowing-up}).
+\end{proof}
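\medskip\noindent
One can also see this directly from Lemma \ref{lemma-blowing-up-affine}
in the affine case: if $X = \Spec(A)$ and $Z = V(f)$ for a nonzerodivisor
$f \in A$, then $I^n = (f^n) \cong A$ for all $n \geq 0$ and the Rees
algebra is the polynomial algebra
$$
\bigoplus\nolimits_{n \geq 0} I^n \cong A[T],
\quad a f^n \longmapsto a T^n
$$
(well defined as $f$ is a nonzerodivisor), whose Proj is
$D_{+}(T) = \Spec(A[T]_{(T)}) = \Spec(A)$.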
+
+\begin{lemma}
+\label{lemma-blow-up-reduced-scheme}
+Let $X$ be a scheme. Let $\mathcal{I} \subset \mathcal{O}_X$ be a
+quasi-coherent sheaf of ideals. If $X$ is reduced, then the
+blowup $X'$ of $X$ in $\mathcal{I}$ is reduced.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-blowing-up-affine}
+with Algebra, Lemma \ref{algebra-lemma-blowup-reduced}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blow-up-integral-scheme}
+Let $X$ be a scheme. Let $\mathcal{I} \subset \mathcal{O}_X$ be a
+nonzero quasi-coherent sheaf of ideals. If $X$ is integral, then the
+blowup $X'$ of $X$ in $\mathcal{I}$ is integral.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-blowing-up-affine}
+with Algebra, Lemma \ref{algebra-lemma-blowup-domain}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blow-up-and-irreducible-components}
+Let $X$ be a scheme. Let $Z \subset X$ be a closed subscheme.
+Let $b : X' \to X$ be the blowing up of $X$ along $Z$. Then
$b$ induces a bijective map from the set of generic points
+of irreducible components of $X'$ to the set of generic points of
+irreducible components of $X$ which are not in $Z$.
+\end{lemma}
+
+\begin{proof}
+The exceptional divisor $E \subset X'$ is an effective Cartier divisor
+and $X' \setminus E \to X \setminus Z$ is an isomorphism, see
+Lemma \ref{lemma-blowing-up-gives-effective-Cartier-divisor}.
+Thus it suffices to show the following: given an effective
+Cartier divisor $D \subset S$ of a scheme $S$ none of the
+generic points of irreducible components of $S$ are contained in $D$.
+To see this, we may replace $S$ by the members of an affine open
+covering. Hence by Lemma \ref{lemma-characterize-effective-Cartier-divisor}
+we may assume $S = \Spec(A)$ and $D = V(f)$ where $f \in A$
+is a nonzerodivisor. Then we have to show $f$ is not contained
+in any minimal prime ideal $\mathfrak p \subset A$.
If so, then $f$ would map to a nonzerodivisor contained
in the maximal ideal of $A_\mathfrak p$ which is a contradiction
with Algebra, Lemma \ref{algebra-lemma-minimal-prime-reduced-ring}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blow-up-pullback-effective-Cartier}
+Let $X$ be a scheme. Let $b : X' \to X$ be a blowup of $X$ in a closed
+subscheme. The pullback $b^{-1}D$ is defined
+for all effective Cartier divisors $D \subset X$
+and pullbacks of meromorphic functions are defined for $b$
+(Definitions
+\ref{definition-pullback-effective-Cartier-divisor} and
+\ref{definition-pullback-meromorphic-sections}).
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-blowing-up-affine} and
+\ref{lemma-characterize-effective-Cartier-divisor}
+this reduces to the following algebra fact:
+Let $A$ be a ring, $I \subset A$ an ideal, $a \in I$, and $x \in A$
+a nonzerodivisor. Then the image of $x$ in $A[\frac{I}{a}]$ is a
+nonzerodivisor. Namely, suppose that $x (y/a^n) = 0$ in $A[\frac{I}{a}]$.
+Then $a^mxy = 0$ in $A$ for some $m$. Hence $a^my = 0$ as $x$ is a
+nonzerodivisor. Whence $y/a^n$ is zero in $A[\frac{I}{a}]$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blowing-up-two-ideals}
+Let $X$ be a scheme. Let $\mathcal{I}, \mathcal{J} \subset \mathcal{O}_X$
+be quasi-coherent sheaves of ideals. Let $b : X' \to X$
+be the blowing up of $X$ in $\mathcal{I}$. Let $b' : X'' \to X'$ be the
+blowing up of $X'$ in $b^{-1}\mathcal{J} \mathcal{O}_{X'}$. Then $X'' \to X$
+is canonically isomorphic to the blowing up of $X$ in $\mathcal{I}\mathcal{J}$.
+\end{lemma}
+
+\begin{proof}
+Let $E \subset X'$ be the exceptional divisor of $b$ which is an effective
+Cartier divisor by
+Lemma \ref{lemma-blowing-up-gives-effective-Cartier-divisor}.
+Then $(b')^{-1}E$ is an effective Cartier divisor on $X''$ by
+Lemma \ref{lemma-blow-up-pullback-effective-Cartier}.
+Let $E' \subset X''$ be the exceptional divisor of $b'$ (also an effective
+Cartier divisor). Consider the effective Cartier divisor
+$E'' = E' + (b')^{-1}E$. By construction the ideal of $E''$ is
+$(b \circ b')^{-1}\mathcal{I} (b \circ b')^{-1}\mathcal{J} \mathcal{O}_{X''}$.
+Hence according to Lemma \ref{lemma-universal-property-blowing-up}
+there is a canonical morphism from $X''$ to the blowup $c : Y \to X$
+of $X$ in $\mathcal{I}\mathcal{J}$. Conversely, as $\mathcal{I}\mathcal{J}$
+pulls back to an invertible ideal we see that
+$c^{-1}\mathcal{I}\mathcal{O}_Y$ defines
+an effective Cartier divisor, see
+Lemma \ref{lemma-sum-closed-subschemes-effective-Cartier}.
Thus we obtain a morphism $c' : Y \to X'$ over $X$ by
Lemma \ref{lemma-universal-property-blowing-up}.
+Then $(c')^{-1}b^{-1}\mathcal{J}\mathcal{O}_Y = c^{-1}\mathcal{J}\mathcal{O}_Y$
which also defines an effective Cartier divisor. Thus we obtain a morphism
$c'' : Y \to X''$ over $X'$. We omit the verification that this
+morphism is inverse to the morphism $X'' \to Y$ constructed earlier.
+\end{proof}
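
\noindent
In particular, taking $\mathcal{J} = \mathcal{I}^{n - 1}$ in
Lemma \ref{lemma-blowing-up-two-ideals} we see that the blowing up of
$X$ in $\mathcal{I}^n$ is the blowing up of $X'$ in
$b^{-1}\mathcal{I}^{n - 1}\mathcal{O}_{X'} = \mathcal{I}_E^{n - 1}$,
where $\mathcal{I}_E$ is the invertible ideal of the exceptional divisor.
Blowing up in an invertible ideal does not change the scheme
(Lemma \ref{lemma-blow-up-effective-Cartier-divisor}), hence the blowing
up of $X$ in $\mathcal{I}^n$ is canonically isomorphic to the blowing up
of $X$ in $\mathcal{I}$ for all $n \geq 1$.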
+
+\begin{lemma}
+\label{lemma-blowing-up-projective}
+Let $X$ be a scheme. Let $\mathcal{I} \subset \mathcal{O}_X$ be a
+quasi-coherent sheaf of ideals. Let $b : X' \to X$ be the blowing up of $X$
+in the ideal sheaf $\mathcal{I}$. If $\mathcal{I}$ is of finite type, then
+\begin{enumerate}
+\item $b : X' \to X$ is a projective morphism, and
+\item $\mathcal{O}_{X'}(1)$ is a $b$-relatively ample invertible sheaf.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The surjection of graded $\mathcal{O}_X$-algebras
+$$
+\text{Sym}_{\mathcal{O}_X}^*(\mathcal{I})
+\longrightarrow
+\bigoplus\nolimits_{d \geq 0} \mathcal{I}^d
+$$
+defines via Constructions, Lemma
+\ref{constructions-lemma-surjective-generated-degree-1-map-relative-proj}
+a closed immersion
+$$
+X' = \underline{\text{Proj}}_X (\bigoplus\nolimits_{d \geq 0} \mathcal{I}^d)
+\longrightarrow
+\mathbf{P}(\mathcal{I}).
+$$
+Hence $b$ is projective, see
+Morphisms, Definition \ref{morphisms-definition-projective}.
+The second statement follows for example from the characterization
+of relatively ample invertible sheaves in
+Morphisms, Lemma \ref{morphisms-lemma-characterize-relatively-ample}.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-finite-type-blowups}
+\begin{slogan}
+Composition of blowing ups is a blowing up
+\end{slogan}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $Z \subset X$ be a closed subscheme of finite presentation.
+Let $b : X' \to X$ be the blowing up with center $Z$. Let $Z' \subset X'$ be
+a closed subscheme of finite presentation.
+Let $X'' \to X'$ be the blowing up with center $Z'$.
+There exists a closed subscheme $Y \subset X$ of finite presentation,
+such that
+\begin{enumerate}
+\item $Y = Z \cup b(Z')$ set theoretically, and
+\item the composition $X'' \to X$ is isomorphic to the blowing up
+of $X$ in $Y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The condition that $Z \to X$ is of finite presentation means that
+$Z$ is cut out by a finite type quasi-coherent sheaf of ideals
+$\mathcal{I} \subset \mathcal{O}_X$, see
+Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-finite-presentation}.
+Write $\mathcal{A} = \bigoplus_{n \geq 0} \mathcal{I}^n$ so that
+$X' = \underline{\text{Proj}}(\mathcal{A})$.
+Note that $X \setminus Z$ is a quasi-compact open of $X$ by
+Properties, Lemma \ref{properties-lemma-quasi-coherent-finite-type-ideals}.
+Since $b^{-1}(X \setminus Z) \to X \setminus Z$ is an isomorphism
+(Lemma \ref{lemma-blowing-up-gives-effective-Cartier-divisor}) the same
+result shows that
+$b^{-1}(X \setminus Z) \setminus Z'$ is quasi-compact open in $X'$.
+Hence $U = X \setminus (Z \cup b(Z'))$ is quasi-compact open in $X$.
+By Lemma \ref{lemma-closed-subscheme-proj-finite-type}
+there exist a $d > 0$ and a finite type
+$\mathcal{O}_X$-submodule $\mathcal{F} \subset \mathcal{I}^d$ such
+that $Z' = \underline{\text{Proj}}(\mathcal{A}/\mathcal{F}\mathcal{A})$
+and such that the support of $\mathcal{I}^d/\mathcal{F}$ is contained
+in $X \setminus U$.
+
+\medskip\noindent
+Since $\mathcal{F} \subset \mathcal{I}^d$ is an $\mathcal{O}_X$-submodule
+we may think of $\mathcal{F} \subset \mathcal{I}^d \subset \mathcal{O}_X$
+as a finite type quasi-coherent sheaf of ideals on $X$. Let's denote this
+$\mathcal{J} \subset \mathcal{O}_X$ to prevent confusion. Since
$\mathcal{I}^d / \mathcal{J}$ and $\mathcal{O}_X/\mathcal{I}^d$
+are supported on $X \setminus U$ we see that $V(\mathcal{J})$ is contained
+in $X \setminus U$. Conversely, as $\mathcal{J} \subset \mathcal{I}^d$
+we see that $Z \subset V(\mathcal{J})$. Over
+$X \setminus Z \cong X' \setminus b^{-1}(Z)$ the sheaf of ideals
+$\mathcal{J}$ cuts out $Z'$ (see displayed formula below). Hence
+$V(\mathcal{J})$ equals $Z \cup b(Z')$. It follows that also
+$V(\mathcal{I}\mathcal{J}) = Z \cup b(Z')$ set theoretically. Moreover,
+$\mathcal{I}\mathcal{J}$ is an ideal of finite type as a product of two such.
+We claim that $X'' \to X$ is isomorphic to the blowing up of $X$ in
+$\mathcal{I}\mathcal{J}$ which finishes the proof of the lemma by setting
+$Y = V(\mathcal{I}\mathcal{J})$.
+
+\medskip\noindent
+First, recall that the blowup of $X$ in $\mathcal{I}\mathcal{J}$
+is the same as the blowup of $X'$ in $b^{-1}\mathcal{J} \mathcal{O}_{X'}$,
+see Lemma \ref{lemma-blowing-up-two-ideals}.
+Hence it suffices to show that the blowup of $X'$ in
+$b^{-1}\mathcal{J} \mathcal{O}_{X'}$ agrees with the blowup of $X'$
+in $Z'$. We will show that
+$$
+b^{-1}\mathcal{J} \mathcal{O}_{X'} = \mathcal{I}_E^d \mathcal{I}_{Z'}
+$$
+as ideal sheaves on $X''$. This will prove what we want as
+$\mathcal{I}_E^d$ cuts out the effective Cartier divisor $dE$
+and we can use Lemmas \ref{lemma-blow-up-effective-Cartier-divisor} and
+\ref{lemma-blowing-up-two-ideals}.
+
+\medskip\noindent
+To see the displayed equality of the ideals we may work locally.
+With notation $A$, $I$, $a \in I$ as in Lemma \ref{lemma-blowing-up-affine}
we see that $\mathcal{F}$ corresponds to an $A$-submodule $M \subset I^d$
mapping isomorphically to an ideal $J \subset A$. The condition
+$Z' = \underline{\text{Proj}}(\mathcal{A}/\mathcal{F}\mathcal{A})$
+means that $Z' \cap \Spec(A[\frac{I}{a}])$ is cut out by the ideal
+generated by the elements $m/a^d$, $m \in M$. Say the element $m \in M$
+corresponds to the function $f \in J$. Then in the affine blowup algebra
+$A' = A[\frac{I}{a}]$ we see that $f = (a^dm)/a^d = a^d (m/a^d)$.
+Thus the equality holds.
+\end{proof}
+
+
+
+
+
+
+\section{Strict transform}
+\label{section-strict-transform}
+
+\noindent
+In this section we briefly discuss strict transform under blowing up.
+Let $S$ be a scheme and let $Z \subset S$ be a closed subscheme.
+Let $b : S' \to S$ be the blowing up of $S$ in $Z$ and denote $E \subset S'$
+the exceptional divisor $E = b^{-1}Z$. In the following we will often
+consider a scheme $X$ over $S$ and form the cartesian diagram
+$$
+\xymatrix{
+\text{pr}_{S'}^{-1}E \ar[r] \ar[d] &
+X \times_S S' \ar[r]_-{\text{pr}_X} \ar[d]_{\text{pr}_{S'}} &
+X \ar[d]^f \\
+E \ar[r] & S' \ar[r] & S
+}
+$$
+Since $E$ is an effective Cartier divisor
+(Lemma \ref{lemma-blowing-up-gives-effective-Cartier-divisor})
+we see that $\text{pr}_{S'}^{-1}E \subset X \times_S S'$
+is locally principal
+(Lemma \ref{lemma-pullback-locally-principal}).
+Thus the complement of $\text{pr}_{S'}^{-1}E$ in $X \times_S S'$
+is retrocompact
+(Lemma \ref{lemma-complement-locally-principal-closed-subscheme}).
+Consequently, for a quasi-coherent $\mathcal{O}_{X \times_S S'}$-module
+$\mathcal{G}$ the subsheaf of sections supported on $\text{pr}_{S'}^{-1}E$
+is a quasi-coherent submodule, see
+Properties, Lemma \ref{properties-lemma-sections-supported-on-closed-subset}.
+If $\mathcal{G}$ is a quasi-coherent sheaf of algebras, e.g.,
+$\mathcal{G} = \mathcal{O}_{X \times_S S'}$, then this subsheaf is an ideal
+of $\mathcal{G}$.
+
+\begin{definition}
+\label{definition-strict-transform}
+With $Z \subset S$ and $f : X \to S$ as above.
+\begin{enumerate}
+\item Given a quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$
+the {\it strict transform} of $\mathcal{F}$ with respect to the blowup
+of $S$ in $Z$ is the quotient $\mathcal{F}'$ of $\text{pr}_X^*\mathcal{F}$
+by the submodule of sections supported on $\text{pr}_{S'}^{-1}E$.
+\item The {\it strict transform} of $X$ is the closed subscheme
+$X' \subset X \times_S S'$ cut out by the quasi-coherent ideal of
+sections of $\mathcal{O}_{X \times_S S'}$ supported on $\text{pr}_{S'}^{-1}E$.
+\end{enumerate}
+\end{definition}
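
\noindent
For example, let $S = \Spec(k[x, y])$, let $Z = V(x, y)$, and let
$X = V(y) \subset S$ be a line through the origin. On the chart
$\Spec(A')$ of the blowing up $S' \to S$, where
$A' = k[x, y][\frac{(x, y)}{x}] = k[x, t]$ with $t = y/x$, we have
$$
X \times_S S' = \Spec(k[x, t]/(xt)),
$$
the union of the exceptional divisor $V(x)$ and the closed subscheme
$V(t)$. Since $xt = 0$, the ideal of sections of
$\mathcal{O}_{X \times_S S'}$ supported on $\text{pr}_{S'}^{-1}E$ is
$(t)$ on this chart. Hence the strict transform of $X$ is
$V(t) = \Spec(k[x])$, the closure of the isomorphic image of
$X \setminus Z$, whereas the total transform $X \times_S S'$ contains
the extra component $V(x)$.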
+
+\noindent
+Note that taking the strict transform along a blowup depends on the
+closed subscheme used for the blowup
+(and not just on the morphism $S' \to S$).
+This notion is often used for closed subschemes of $S$.
+It turns out that the strict transform of $X$ is a blowup of $X$.
+
+\begin{lemma}
+\label{lemma-strict-transform}
+In the situation of Definition \ref{definition-strict-transform}.
+\begin{enumerate}
+\item The strict transform $X'$ of $X$ is the blowup of $X$ in the closed
+subscheme $f^{-1}Z$ of $X$.
+\item For a quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ the
+strict transform $\mathcal{F}'$ is canonically isomorphic to
+the pushforward along $X' \to X \times_S S'$ of the strict transform of
+$\mathcal{F}$ relative to the blowing up $X' \to X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $X'' \to X$ be the blowup of $X$ in $f^{-1}Z$. By the universal
+property of blowing up (Lemma \ref{lemma-universal-property-blowing-up})
+there exists a commutative diagram
+$$
+\xymatrix{
+X'' \ar[r] \ar[d] & X \ar[d] \\
+S' \ar[r] & S
+}
+$$
+whence a morphism $X'' \to X \times_S S'$. Thus the first assertion
+is that this morphism is a closed immersion with image $X'$.
+The question is local on $X$. Thus we may assume $X$
+and $S$ are affine. Say that $S = \Spec(A)$, $X = \Spec(B)$, and $Z$
+is cut out by the ideal $I \subset A$. Set $J = IB$. The map
+$B \otimes_A \bigoplus_{n \geq 0} I^n \to \bigoplus_{n \geq 0} J^n$
+defines a closed immersion $X'' \to X \times_S S'$, see
+Constructions, Lemmas
+\ref{constructions-lemma-base-change-map-proj} and
+\ref{constructions-lemma-surjective-graded-rings-generated-degree-1-map-proj}.
+We omit the verification that this morphism is the same as the
+one constructed above from the universal property.
+Pick $a \in I$ corresponding to the affine open
+$\Spec(A[\frac{I}{a}]) \subset S'$, see Lemma \ref{lemma-blowing-up-affine}.
+The inverse image of $\Spec(A[\frac{I}{a}])$ in the strict transform
+$X'$ of $X$ is the spectrum of
+$$
+B' = (B \otimes_A A[\textstyle{\frac{I}{a}}])/a\text{-power-torsion}
+$$
+see Properties, Lemma
+\ref{properties-lemma-sections-supported-on-closed-subset}.
+On the other hand, letting $b \in J$ be the image of $a$ we see that
+$\Spec(B[\frac{J}{b}])$ is the inverse image of $\Spec(A[\frac{I}{a}])$
+in $X''$. By Algebra, Lemma \ref{algebra-lemma-blowup-base-change}
+the open $\Spec(B[\frac{J}{b}])$ maps isomorphically to the open subscheme
+$\text{pr}_{S'}^{-1}(\Spec(A[\frac{I}{a}]))$ of $X'$.
+Thus $X'' \to X'$ is an isomorphism.
+
+\medskip\noindent
+In the notation above, let $\mathcal{F}$ correspond to the $B$-module $N$.
+The strict transform of $\mathcal{F}$ corresponds to the
+$B \otimes_A A[\frac{I}{a}]$-module
+$$
+N' = (N \otimes_A A[\textstyle{\frac{I}{a}}])/a\text{-power-torsion}
+$$
+see Properties, Lemma
+\ref{properties-lemma-sections-supported-on-closed-subset}.
+The strict transform of $\mathcal{F}$ relative to the blowup of
+$X$ in $f^{-1}Z$ corresponds to the $B[\frac{J}{b}]$-module
$(N \otimes_B B[\frac{J}{b}])/b\text{-power-torsion}$. In exactly the same
+way as above one proves that these two modules are isomorphic.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-transform-flat}
+In the situation of Definition \ref{definition-strict-transform}.
+\begin{enumerate}
+\item If $X$ is flat over $S$ at all points lying over $Z$, then
+the strict transform of $X$ is equal to the base change $X \times_S S'$.
+\item Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+If $\mathcal{F}$ is flat over $S$ at all points lying over $Z$, then
+the strict transform $\mathcal{F}'$ of $\mathcal{F}$ is equal to the
+pullback $\text{pr}_X^*\mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will prove part (2) as it implies part (1) by the definition of the
+strict transform of a scheme over $S$. The question is local on $X$.
+Thus we may assume that $S = \Spec(A)$, $X = \Spec(B)$, and that
+$\mathcal{F}$ corresponds to the $B$-module $N$. Then $\mathcal{F}'$
+over the open $\Spec(B \otimes_A A[\frac{I}{a}])$ of $X \times_S S'$
+corresponds to the module
+$$
+N' = (N \otimes_A A[\textstyle{\frac{I}{a}}])/a\text{-power-torsion}
+$$
+see Properties, Lemma
+\ref{properties-lemma-sections-supported-on-closed-subset}.
+Thus we have to show that the $a$-power-torsion of
+$N \otimes_A A[\frac{I}{a}]$ is zero. Let $y \in N \otimes_A A[\frac{I}{a}]$
+with $a^n y = 0$. If $\mathfrak q \subset B$
+is a prime and $a \not \in \mathfrak q$, then $y$ maps to
zero in $(N \otimes_A A[\frac{I}{a}])_\mathfrak q$. On the other hand,
+if $a \in \mathfrak q$, then $N_\mathfrak q$ is a flat $A$-module
+and we see that
+$N_\mathfrak q \otimes_A A[\frac{I}{a}]
+=(N \otimes_A A[\frac{I}{a}])_\mathfrak q$
+has no $a$-power torsion (as $A[\frac{I}{a}]$ doesn't).
+Hence $y$ maps to zero in this localization as well. We conclude that
+$y$ is zero by
+Algebra, Lemma \ref{algebra-lemma-characterize-zero-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-transform-affine}
+Let $S$ be a scheme. Let $Z \subset S$ be a closed subscheme.
Let $b : S' \to S$ be the blowing up of $S$ in $Z$. Let
+$g : X \to Y$ be an affine morphism of schemes over $S$.
+Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$.
+Let $g' : X \times_S S' \to Y \times_S S'$ be the base change
+of $g$. Let $\mathcal{F}'$ be the strict transform of $\mathcal{F}$
+relative to $b$. Then $g'_*\mathcal{F}'$ is the strict transform
+of $g_*\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Observe that $g'_*\text{pr}_X^*\mathcal{F} = \text{pr}_Y^*g_*\mathcal{F}$
+by Cohomology of Schemes, Lemma \ref{coherent-lemma-affine-base-change}.
+Let $\mathcal{K} \subset \text{pr}_X^*\mathcal{F}$ be the subsheaf
+of sections supported in the inverse image of $Z$ in $X \times_S S'$.
+By Properties, Lemma
+\ref{properties-lemma-push-sections-supported-on-closed-subset}
+the pushforward $g'_*\mathcal{K}$ is the subsheaf of sections of
+$\text{pr}_Y^*g_*\mathcal{F}$ supported in the inverse
+image of $Z$ in $Y \times_S S'$. As $g'$ is affine
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-affine})
+we see that $g'_*$ is exact, hence we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-transform-different-centers}
+Let $S$ be a scheme. Let $Z \subset S$ be a closed subscheme.
+Let $D \subset S$ be an effective Cartier divisor.
+Let $Z' \subset S$ be the closed subscheme cut out by the product
+of the ideal sheaves of $Z$ and $D$.
+Let $S' \to S$ be the blowup of $S$ in $Z$.
+\begin{enumerate}
+\item The blowup of $S$ in $Z'$ is isomorphic to $S' \to S$.
+\item Let $f : X \to S$ be a morphism of schemes and let $\mathcal{F}$
+be a quasi-coherent $\mathcal{O}_X$-module. If $\mathcal{F}$ has
+no nonzero local sections supported in $f^{-1}D$, then the
+strict transform of $\mathcal{F}$ relative to the blowing up
+in $Z$ agrees with the strict transform of $\mathcal{F}$ relative
+to the blowing up of $S$ in $Z'$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first statement follows on combining
+Lemmas \ref{lemma-blowing-up-two-ideals} and
+\ref{lemma-blow-up-effective-Cartier-divisor}.
+Using Lemma \ref{lemma-blowing-up-affine} the second statement
+translates into the
+following algebra problem. Let $A$ be a ring, $I \subset A$ an ideal,
+$x \in A$ a nonzerodivisor, and $a \in I$. Let $M$ be an $A$-module
+whose $x$-torsion is zero. To show: the $a$-power torsion in
+$M \otimes_A A[\frac{I}{a}]$ is equal to the $xa$-power torsion.
+The reason for this is that the kernel and cokernel of the map
$A \to A[\frac{I}{a}]$ are $a$-power torsion, so this map becomes an
+isomorphism after inverting $a$. Hence the kernel
+and cokernel of $M \to M \otimes_A A[\frac{I}{a}]$ are $a$-power
+torsion too. This implies the result.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-transform-composition-blowups}
+Let $S$ be a scheme. Let $Z \subset S$ be a closed subscheme.
+Let $b : S' \to S$ be the blowing up with center $Z$. Let $Z' \subset S'$ be
+a closed subscheme. Let $S'' \to S'$ be the blowing up with center $Z'$.
+Let $Y \subset S$ be a closed subscheme such that
+$Y = Z \cup b(Z')$ set theoretically and the composition $S'' \to S$
+is isomorphic to the blowing up of $S$ in $Y$.
+In this situation, given any scheme $X$ over $S$ and
+$\mathcal{F} \in \QCoh(\mathcal{O}_X)$ we have
+\begin{enumerate}
+\item the strict transform of $\mathcal{F}$ with respect to the blowing
+up of $S$ in $Y$ is equal to the strict transform with respect to the
+blowup $S'' \to S'$ in $Z'$ of the strict transform of $\mathcal{F}$
+with respect to the blowup $S' \to S$ of $S$ in $Z$, and
+\item the strict transform of $X$ with respect to the blowing
+up of $S$ in $Y$ is equal to the strict transform with respect to the
+blowup $S'' \to S'$ in $Z'$ of the strict transform of $X$
+with respect to the blowup $S' \to S$ of $S$ in $Z$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}'$ be the strict transform of $\mathcal{F}$ with respect
+to the blowup $S' \to S$ of $S$ in $Z$.
+Let $\mathcal{F}''$ be the strict transform of $\mathcal{F}'$ with respect
+to the blowup $S'' \to S'$ of $S'$ in $Z'$.
+Let $\mathcal{G}$ be the strict transform of $\mathcal{F}$ with respect
+to the blowup $S'' \to S$ of $S$ in $Y$.
+We also label the morphisms
+$$
+\xymatrix{
+X \times_S S'' \ar[r]_q \ar[d]^{f''} &
+X \times_S S' \ar[r]_p \ar[d]^{f'} &
+X \ar[d]^f \\
+S'' \ar[r] & S' \ar[r] & S
+}
+$$
+By definition there is a surjection $p^*\mathcal{F} \to \mathcal{F}'$
+and a surjection $q^*\mathcal{F}' \to \mathcal{F}''$ which combine
+by right exactness of $q^*$ to a surjection
+$(p \circ q)^*\mathcal{F} \to \mathcal{F}''$. Also we have the surjection
+$(p \circ q)^*\mathcal{F} \to \mathcal{G}$. Thus it suffices to prove
+that these two surjections have the same kernel.
+
+\medskip\noindent
+The kernel of the surjection $p^*\mathcal{F} \to \mathcal{F}'$
+is supported on $(f \circ p)^{-1}Z$, so this map is an isomorphism at
+points in the complement. Hence the kernel of
+$q^*p^*\mathcal{F} \to q^*\mathcal{F}'$
+is supported on $(f \circ p \circ q)^{-1}Z$. The kernel of
+$q^*\mathcal{F}' \to \mathcal{F}''$ is supported on $(f' \circ q)^{-1}Z'$.
+Combined we see that the kernel of
+$(p \circ q)^*\mathcal{F} \to \mathcal{F}''$ is supported on
+$(f \circ p \circ q)^{-1}Z \cup (f' \circ q)^{-1}Z' =
+(f \circ p \circ q)^{-1}Y$.
+By construction of $\mathcal{G}$ we see that we obtain a factorization
+$(p \circ q)^*\mathcal{F} \to \mathcal{F}'' \to \mathcal{G}$.
+To finish the proof it suffices to show that $\mathcal{F}''$ has no
+nonzero (local) sections supported on
+$(f \circ p \circ q)^{-1}(Y) =
+(f \circ p \circ q)^{-1}Z \cup (f' \circ q)^{-1}Z'$.
+This follows from Lemma \ref{lemma-strict-transform-different-centers}
+applied to $\mathcal{F}'$ on $X \times_S S'$ over $S'$, the closed
+subscheme $Z'$ and the effective Cartier divisor $b^{-1}Z$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-transform-universally-injective}
+In the situation of Definition \ref{definition-strict-transform}.
+Suppose that
+$$
+0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0
+$$
+is an exact sequence of quasi-coherent sheaves on $X$ which remains
exact after any base change $T \to S$. Then the strict transforms
$\mathcal{F}'_i$ of the $\mathcal{F}_i$ relative to any blowup $S' \to S$
+form a short exact sequence
+$0 \to \mathcal{F}'_1 \to \mathcal{F}'_2 \to \mathcal{F}'_3 \to 0$ too.
+\end{lemma}
+
+\begin{proof}
+We may localize on $S$ and $X$ and assume both are affine.
+Then we may push $\mathcal{F}_i$ to $S$, see
+Lemma \ref{lemma-strict-transform-affine}.
+We may assume that our blowup is the morphism $1 : S \to S$
+associated to an effective Cartier divisor $D \subset S$.
+Then the translation into algebra is the following: Suppose that $A$
+is a ring and $0 \to M_1 \to M_2 \to M_3 \to 0$ is a universally
+exact sequence of $A$-modules. Let $a\in A$. Then the sequence
+$$
+0 \to
+M_1/a\text{-power torsion} \to
+M_2/a\text{-power torsion} \to
+M_3/a\text{-power torsion} \to 0
+$$
+is exact too. Namely, surjectivity of the last map and injectivity of
+the first map are immediate. The problem is exactness in the middle.
+Suppose that $x \in M_2$ maps to zero in $M_3/a\text{-power torsion}$.
Then $y = a^n x \in M_1$ for some $n$, and $y$ maps to zero in
$M_2/a^nM_2$. Since $M_1 \to M_2$ is universally injective we see that
$y$ maps to zero in $M_1/a^nM_1$. Thus $y = a^n z$ for some $z \in M_1$.
Then $a^n(x - z) = 0$, hence $z$ maps to the class of $x$ in
$M_2/a\text{-power torsion}$ as desired.
+\end{proof}
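
\noindent
The assumption that the sequence remains exact after base change cannot
be omitted. For example, the sequence of $\mathbf{Z}$-modules
$$
0 \to \mathbf{Z} \to \mathbf{Z} \to \mathbf{Z}/2\mathbf{Z} \to 0
$$
with first map multiplication by $2$ is exact but not universally exact
(tensoring with $\mathbf{Z}/2\mathbf{Z}$ destroys injectivity of the
first map). With $a = 2$ the quotients by $a$-power torsion are
$\mathbf{Z}$, $\mathbf{Z}$, and $0$, and the resulting sequence
$0 \to \mathbf{Z} \to \mathbf{Z} \to 0 \to 0$, with the middle map still
multiplication by $2$, is not exact in the middle.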
+
+
+
+
+
+
+
+\section{Admissible blowups}
+\label{section-admissible-blowups}
+
+\noindent
+To have a bit more control over our blowups we introduce the following
+standard terminology.
+
+\begin{definition}
+\label{definition-admissible-blowup}
+Let $X$ be a scheme. Let $U \subset X$ be an open subscheme. A morphism
+$X' \to X$ is called a {\it $U$-admissible blowup} if there exists a
+closed immersion $Z \to X$ of finite presentation with $Z$ disjoint from
+$U$ such that $X'$ is isomorphic to the blowup of $X$ in $Z$.
+\end{definition}
+
+\noindent
+We recall that $Z \to X$ is of finite presentation if and only if the
+ideal sheaf $\mathcal{I}_Z \subset \mathcal{O}_X$ is of finite type, see
+Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-finite-presentation}.
+In particular, a $U$-admissible blowup is a projective morphism, see
+Lemma \ref{lemma-blowing-up-projective}.
+Note that there can be multiple centers which give rise to the same morphism.
+Hence the requirement is just the existence of some center disjoint from
+$U$ which produces $X'$.
+Finally, as the morphism $b : X' \to X$ is an isomorphism over $U$ (see
+Lemma \ref{lemma-blowing-up-gives-effective-Cartier-divisor}) we will often
+abuse notation and think of $U$ as an open subscheme of $X'$ as well.
+
+\begin{lemma}
+\label{lemma-composition-admissible-blowups}
+\begin{slogan}
+Admissible blowups are stable under composition.
+\end{slogan}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $U \subset X$ be a quasi-compact open subscheme.
+Let $b : X' \to X$ be a $U$-admissible blowup.
+Let $X'' \to X'$ be a $U$-admissible blowup.
+Then the composition $X'' \to X$ is a $U$-admissible blowup.
+\end{lemma}
+
+\begin{proof}
+Immediate from the more precise
+Lemma \ref{lemma-composition-finite-type-blowups}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extend-admissible-blowups}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $U, V \subset X$ be quasi-compact open subschemes.
+Let $b : V' \to V$ be a $U \cap V$-admissible blowup.
+Then there exists a $U$-admissible blowup $X' \to X$
+whose restriction to $V$ is $V'$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I} \subset \mathcal{O}_V$ be the finite type
+quasi-coherent sheaf of ideals such that $V(\mathcal{I})$ is
+disjoint from $U \cap V$ and such that $V'$ is isomorphic to the
+blowup of $V$ in $\mathcal{I}$. Let
+$\mathcal{I}' \subset \mathcal{O}_{U \cup V}$ be the quasi-coherent
+sheaf of ideals whose restriction to $U$ is $\mathcal{O}_U$ and
+whose restriction to $V$ is $\mathcal{I}$ (see Sheaves, Section
+\ref{sheaves-section-glueing-sheaves}).
+By Properties, Lemma \ref{properties-lemma-extend}
+there exists a finite type quasi-coherent sheaf of ideals
+$\mathcal{J} \subset \mathcal{O}_X$ whose restriction to $U \cup V$ is
$\mathcal{I}'$. Then $V(\mathcal{J})$ is disjoint from $U$ and, since
blowing up commutes with restriction to open subschemes, the blowing up
of $X$ in $\mathcal{J}$ restricts to $V' \to V$ over $V$. The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dominate-admissible-blowups}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $U \subset X$ be a quasi-compact open subscheme.
+Let $b_i : X_i \to X$, $i = 1, \ldots, n$ be $U$-admissible blowups.
+There exists a $U$-admissible blowup $b : X' \to X$ such that
+(a) $b$ factors as $X' \to X_i \to X$ for $i = 1, \ldots, n$ and
+(b) each of the morphisms $X' \to X_i$ is a $U$-admissible blowup.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{I}_i \subset \mathcal{O}_X$ be the finite type
+quasi-coherent sheaf of ideals such that $V(\mathcal{I}_i)$ is
+disjoint from $U$ and such that $X_i$ is isomorphic to the
+blowup of $X$ in $\mathcal{I}_i$. Set
+$\mathcal{I} = \mathcal{I}_1 \cdot \ldots \cdot \mathcal{I}_n$
+and let $X'$ be the blowup of $X$ in $\mathcal{I}$. Then
$X' \to X$ factors through $b_i$ by Lemma \ref{lemma-blowing-up-two-ideals}.
The same lemma shows that $X' \to X_i$ is the blowing up of $X_i$ in the
finite type quasi-coherent ideal
$b_i^{-1}(\prod_{j \not = i} \mathcal{I}_j)\mathcal{O}_{X_i}$,
whose vanishing locus is disjoint from $U$. Hence each $X' \to X_i$ is a
$U$-admissible blowup as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-separate-disjoint-opens-by-blowing-up}
+\begin{slogan}
+Separate irreducible components by blowing up.
+\end{slogan}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $U, V$ be quasi-compact disjoint open subschemes of $X$.
Then there exists a $U \cup V$-admissible blowup $b : X' \to X$
+such that $X'$ is a disjoint union of open subschemes
+$X' = X'_1 \amalg X'_2$ with $b^{-1}(U) \subset X'_1$ and
+$b^{-1}(V) \subset X'_2$.
+\end{lemma}
+
+\begin{proof}
+Choose a finite type quasi-coherent sheaf of ideals $\mathcal{I}$,
+resp.\ $\mathcal{J}$ such that $X \setminus U = V(\mathcal{I})$,
+resp.\ $X \setminus V = V(\mathcal{J})$, see
+Properties, Lemma \ref{properties-lemma-quasi-coherent-finite-type-ideals}.
+Then $V(\mathcal{I}\mathcal{J}) = X$ set theoretically, hence
+$\mathcal{I}\mathcal{J}$ is a locally nilpotent sheaf of ideals.
+Since $\mathcal{I}$ and $\mathcal{J}$ are of finite type and $X$
+is quasi-compact there exists an $n > 0$ such that
+$\mathcal{I}^n \mathcal{J}^n = 0$. We may and do replace $\mathcal{I}$
+by $\mathcal{I}^n$ and $\mathcal{J}$ by $\mathcal{J}^n$. Whence
+$\mathcal{I} \mathcal{J} = 0$. Let $b : X' \to X$ be the blowing
+up in $\mathcal{I} + \mathcal{J}$. This is $U \cup V$-admissible
as $V(\mathcal{I} + \mathcal{J}) = X \setminus (U \cup V)$. We will show that
+$X'$ is a disjoint union of open subschemes $X' = X'_1 \amalg X'_2$
+such that $b^{-1}\mathcal{I}|_{X'_2} = 0$ and $b^{-1}\mathcal{J}|_{X'_1} = 0$
+which will prove the lemma.
+
+\medskip\noindent
+We will use the description of the blowing up in
Lemma \ref{lemma-blowing-up-affine}. Suppose that $W = \Spec(A) \subset X$
is an affine open such that $\mathcal{I}|_W$, resp.\ $\mathcal{J}|_W$
corresponds to the finitely generated ideal $I \subset A$, resp.\ $J \subset A$.
Then
$$
b^{-1}(W) = \text{Proj}(A \oplus (I + J) \oplus (I + J)^2 \oplus \ldots)
$$
This is covered by the affine open subsets $\Spec(A[\frac{I + J}{x}])$
and $\Spec(A[\frac{I + J}{y}])$ with $x \in I$ and $y \in J$. Since $x \in I$
is a nonzerodivisor in $A[\frac{I + J}{x}]$ and $IJ = 0$ we see that
$J A[\frac{I + J}{x}] = 0$. Since $y \in J$ is a nonzerodivisor
in $A[\frac{I + J}{y}]$ and $IJ = 0$ we see that
$I A[\frac{I + J}{y}] = 0$. Moreover,
$$
\Spec(A[\textstyle{\frac{I + J}{x}}]) \cap
\Spec(A[\textstyle{\frac{I + J}{y}}]) =
\Spec(A[\textstyle{\frac{I + J}{xy}}]) = \emptyset
$$
because $xy$ is both a nonzerodivisor and zero. Thus $b^{-1}(W)$
is the disjoint union of the open subscheme $W_1$ defined as the union
of the standard opens $\Spec(A[\frac{I + J}{x}])$ for $x \in I$ and the open
subscheme $W_2$ which is the union of the affine opens
$\Spec(A[\frac{I + J}{y}])$ for $y \in J$. We have seen that
$b^{-1}\mathcal{I}\mathcal{O}_{X'}$ restricts to zero on $W_2$
and $b^{-1}\mathcal{J}\mathcal{O}_{X'}$ restricts to zero on $W_1$.
We omit the verification that these open subschemes glue to global
open subschemes $X'_1$ and $X'_2$.
+\end{proof}
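
\noindent
For example, let $X = \Spec(A)$ with $A = k[x, y]/(xy)$ the union of
the coordinate axes, and let $U = D(x)$ and $V = D(y)$. Then
$\mathcal{I} = (x)$ and $\mathcal{J} = (y)$ satisfy
$\mathcal{I}\mathcal{J} = 0$, so the proof above applies with $n = 1$.
In the chart $\Spec(A[\frac{(x, y)}{x}])$ the element $x$ is a
nonzerodivisor while $xy = 0$, so $y$ and $y/x$ vanish and the chart is
$\Spec(k[x])$; similarly the other chart is $\Spec(k[y])$. Thus the
blowing up of $X$ in $(x, y)$ is the disjoint union of two affine lines,
one containing $b^{-1}(U)$ and the other $b^{-1}(V)$. In this example
$X' \to X$ is the normalization of $X$.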
+
+\begin{lemma}
+\label{lemma-blowing-up-denominators}
+Let $X$ be a locally Noetherian scheme.
+Let $\mathcal{L}$ be an invertible $\mathcal{O}_X$-module.
+Let $s$ be a regular meromorphic section of $\mathcal{L}$.
+Let $U \subset X$ be the maximal open subscheme such that
+$s$ corresponds to a section of $\mathcal{L}$ over $U$.
+The blowup $b : X' \to X$ in the ideal of denominators
+of $s$ is $U$-admissible. There exists an effective Cartier divisor
+$D \subset X'$ and an isomorphism
+$$
+b^*\mathcal{L} = \mathcal{O}_{X'}(D - E),
+$$
+where $E \subset X'$ is the exceptional divisor such that the
+meromorphic section $b^*s$ corresponds, via the isomorphism,
+to the meromorphic section $1_D \otimes (1_E)^{-1}$.
+\end{lemma}
+
+\begin{proof}
+From the definition of the ideal of denominators in
+Definition
+\ref{definition-regular-meromorphic-ideal-denominators}
+we immediately see that $b$ is a $U$-admissible blowup.
+For the notation $1_D$, $1_E$, and $\mathcal{O}_{X'}(D - E)$
+please see Definition
+\ref{definition-invertible-sheaf-effective-Cartier-divisor}.
+The pullback $b^*s$ is defined by
+Lemmas \ref{lemma-blow-up-pullback-effective-Cartier} and
+\ref{lemma-meromorphic-sections-pullback}.
+Thus the statement of the lemma makes sense.
+We can reinterpret the final assertion as saying
+that $b^*s$ is a global regular section of
+$b^*\mathcal{L}(E)$ whose zero scheme is $D$.
+This uniquely defines $D$ hence
+to prove the lemma we may work affine locally on $X$ and $X'$.
+Assume $X = \Spec(A)$ is affine and
+$\mathcal{L} = \mathcal{O}_X$. Then $s$ is a regular meromorphic
+function and shrinking further we may assume
+$s = a'/a$ with $a', a \in A$ nonzerodivisors.
+Then the ideal of denominators of $s$ corresponds
+to the ideal $I = \{x \in A \mid xa' \in aA\}$.
+Recall that $X'$ is covered by spectra of affine blowup
+algebras $A' = A[\frac{I}{x}]$ with $x \in I$
+(Lemma \ref{lemma-blowing-up-affine}). Fix $x \in I$ and
+write $xa' = a a''$ for some $a'' \in A$.
+The divisor $E \subset X'$ is cut out by $x \in A'$ over
+the spectrum of $A'$ and hence $1/x$ is a
+generator of $\mathcal{O}_{X'}(E)$ over $\Spec(A')$.
+Finally, in the total quotient ring
+of $A'$ we have $a'/a = a''/x$. Hence $b^*s = a'/a$ restricts
to a regular section of $\mathcal{O}_{X'}(E)$ which over $\Spec(A')$
is given by $a''/x$. This finishes the proof.
+(The divisor $D \cap \Spec(A')$ is cut out by the image of
+$a''$ in $A'$.)
+\end{proof}
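
\noindent
A simple example: let $X = \Spec(k[x, y])$, $\mathcal{L} = \mathcal{O}_X$,
and let $s = y/x$, a regular meromorphic section. The ideal of
denominators is
$$
I = \{z \in k[x, y] \mid zy \in (x)\} = (x)
$$
which is already invertible, so the blowing up $b$ is an isomorphism.
Here $E = V(x)$, and writing $x \cdot y = x \cdot a''$ with $a'' = y$
as in the proof we get $D = V(y)$. The lemma then identifies
$s = y/x$ with the meromorphic section $1_D \otimes (1_E)^{-1}$ of
$\mathcal{O}_X(D - E)$, i.e., $s$ has a simple zero along $V(y)$ and
a simple pole along $V(x)$.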
+
+
+
+
+
+\section{Blowing up and flatness}
+\label{section-blowup-flat}
+
+\noindent
+We continue the discussion started in
+More on Algebra, Section \ref{more-algebra-section-blowup-flat}.
+We will prove further results in More on Flatness, Section
+\ref{flat-section-blowup-flat}.
+
+
+\begin{lemma}
+\label{lemma-strict-transform-blowup-fitting-ideal}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a finite type
+quasi-coherent $\mathcal{O}_S$-module. Let $Z_k \subset S$ be the closed
+subscheme cut out by $\text{Fit}_k(\mathcal{F})$, see
+Section \ref{section-fitting-ideals}.
+Let $S' \to S$ be the blowup of $S$ in $Z_k$ and let
+$\mathcal{F}'$ be the strict transform of $\mathcal{F}$.
+Then $\mathcal{F}'$ can locally be generated by $\leq k$
+sections.
+\end{lemma}
+
+\begin{proof}
+Recall that $\mathcal{F}'$ can locally be generated by $\leq k$
+sections if and only if $\text{Fit}_k(\mathcal{F}') = \mathcal{O}_{S'}$, see
+Lemma \ref{lemma-fitting-ideal-generate-locally}.
+Hence this lemma is a translation of
+More on Algebra, Lemma \ref{more-algebra-lemma-blowup-fitting-ideal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-transform-blowup-fitting-ideal-locally-free}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a finite type
+quasi-coherent $\mathcal{O}_S$-module. Let $Z_k \subset S$ be the closed
+subscheme cut out by $\text{Fit}_k(\mathcal{F})$, see
+Section \ref{section-fitting-ideals}.
+Assume that $\mathcal{F}$ is locally free of rank $k$ on $S \setminus Z_k$.
+Let $S' \to S$ be the blowup of $S$ in $Z_k$ and let
+$\mathcal{F}'$ be the strict transform of $\mathcal{F}$.
+Then $\mathcal{F}'$ is locally free of rank $k$.
+\end{lemma}
+
+\begin{proof}
+Translation of More on Algebra, Lemma
+\ref{more-algebra-lemma-blowup-fitting-ideal-locally-free}.
+\end{proof}
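
\noindent
For example, let $S = \Spec(A)$ with $A = k[x, y]$ and let
$\mathcal{F}$ correspond to the ideal $I = (x, y)$ viewed as an
$A$-module. From the presentation
$$
A \longrightarrow A^{\oplus 2} \longrightarrow I \longrightarrow 0,
\quad 1 \longmapsto (y, -x)
$$
we get $\text{Fit}_1(\mathcal{F}) = (x, y)$, so $Z_1$ is the origin
and $\mathcal{F}$ is locally free of rank $1$ on $S \setminus Z_1$.
On the chart $\Spec(k[x, t])$, $t = y/x$, of the blowing up of $S$
in $Z_1$ the pullback of $\mathcal{F}$ is generated by $x$ and
$y = xt$, and dividing by the torsion submodule identifies the strict
transform with the invertible ideal $(x)$ of the exceptional divisor,
in accordance with
Lemma \ref{lemma-strict-transform-blowup-fitting-ideal-locally-free}.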
+
+\begin{lemma}
+\label{lemma-blowup-fitting-ideal}
+Let $X$ be a scheme. Let $\mathcal{F}$ be a finitely presented
+$\mathcal{O}_X$-module. Let $U \subset X$ be a scheme theoretically
+dense open such that $\mathcal{F}|_U$ is finite locally free of
+constant rank $r$. Then
+\begin{enumerate}
+\item the blowup $b : X' \to X$ of $X$ in the $r$th Fitting
+ideal of $\mathcal{F}$ is $U$-admissible,
+\item the strict transform $\mathcal{F}'$ of $\mathcal{F}$
+with respect to $b$ is locally free of rank $r$,
+\item the kernel $\mathcal{K}$ of the surjection
+$b^*\mathcal{F} \to \mathcal{F}'$ is
+finitely presented and $\mathcal{K}|_U = 0$,
+\item $b^*\mathcal{F}$ and $\mathcal{K}$ are perfect
+$\mathcal{O}_{X'}$-modules of tor dimension $\leq 1$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The ideal $\text{Fit}_r(\mathcal{F})$ is of finite type
+by Lemma \ref{lemma-fitting-ideal-of-finitely-presented}
+and its restriction to $U$ is equal to $\mathcal{O}_U$ by
+Lemma \ref{lemma-fitting-ideal-finite-locally-free}.
+Hence $b : X' \to X$ is $U$-admissible, see
+Definition \ref{definition-admissible-blowup}.
+
+\medskip\noindent
+By Lemma \ref{lemma-fitting-ideal-finite-locally-free}
+the restriction of $\text{Fit}_{r - 1}(\mathcal{F})$
+to $U$ is zero, and since $U$ is scheme theoretically dense
+we conclude that $\text{Fit}_{r - 1}(\mathcal{F}) = 0$
+on all of $X$. Thus it follows from
+Lemma \ref{lemma-fitting-ideal-finite-locally-free}
+that $\mathcal{F}$ is locally free of rank $r$
+on the complement of the subscheme cut out by the $r$th
+Fitting ideal of $\mathcal{F}$ (this complement may
+be bigger than $U$ which is why we had to do this step
+in the argument). Hence by
+Lemma \ref{lemma-strict-transform-blowup-fitting-ideal-locally-free}
+the strict transform $\mathcal{F}'$ is locally free of rank $r$.
+Consider the canonical surjection
+$$
+b^*\mathcal{F} \longrightarrow \mathcal{F}'.
+$$
+The kernel $\mathcal{K}$
+of this map is supported on the exceptional divisor
+of the blowup $b$ and hence $\mathcal{K}|_U = 0$.
+Finally, since $\mathcal{F}'$ is finite locally free
+and since the displayed arrow is surjective, we can
+locally on $X'$ write $b^*\mathcal{F}$ as the
+direct sum of $\mathcal{K}$ and $\mathcal{F}'$.
+Since $b^*\mathcal{F}$ is finitely presented
+(Modules, Lemma \ref{modules-lemma-pullback-finite-presentation})
+the same is true for $\mathcal{K}$.
+
+\medskip\noindent
+The statement on tor dimension follows from
+More on Algebra, Lemma \ref{more-algebra-lemma-fitting-ideals-and-pd1}.
+\end{proof}
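+
+\medskip\noindent
+As a simple (and easily checked) illustration of the last lemma, suppose
+$X = \Spec(A)$ is integral, $f \in A$ is a nonzero element, and
+$\mathcal{F}$ corresponds to $M = A \oplus A/(f)$, which is free of rank
+$r = 1$ on the dense open $U = D(f)$. The presentation
+$$
+A \xrightarrow{(0,\ f)} A^{\oplus 2} \longrightarrow M \longrightarrow 0
+$$
+shows $\text{Fit}_1(M) = (f)$, so the blowup in $\text{Fit}_1(M)$ is an
+isomorphism because $(f)$ is invertible. The strict transform is the
+quotient of $M$ by its sections supported on $V(f)$, that is,
+$\mathcal{F}' = A$, which is indeed locally free of rank $1$, and the
+kernel $\mathcal{K} = A/(f)$ has tor dimension $1$ by the displayed
+resolution.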
+
+
+
+
+
+\section{Modifications}
+\label{section-modifications}
+
+\noindent
+In this section we will collect results of the type: after a modification
+such and such are true. We will later see that a modification can be
+dominated by a blowup (More on Flatness, Lemma
+\ref{flat-lemma-dominate-modification-by-blowup}).
+
+\begin{lemma}
+\label{lemma-filter-after-modification}
+Let $X$ be an integral scheme. Let $\mathcal{E}$ be a finite locally free
+$\mathcal{O}_X$-module. There exists a modification $f : X' \to X$
+such that $f^*\mathcal{E}$ has a filtration whose successive quotients
+are invertible $\mathcal{O}_{X'}$-modules.
+\end{lemma}
+
+\begin{proof}
+We prove this by induction on the rank $r$ of $\mathcal{E}$.
+If $r = 1$ or $r = 0$ the lemma is obvious. Assume $r > 1$.
+Let $P = \mathbf{P}(\mathcal{E})$ with structure morphism $\pi : P \to X$,
+see Constructions, Section \ref{constructions-section-projective-bundle}.
+Then $\pi$ is proper (Lemma \ref{lemma-relative-proj-proper}).
+There is a canonical surjection
+$$
+\pi^*\mathcal{E} \to \mathcal{O}_P(1)
+$$
+whose kernel is finite locally free of rank $r - 1$.
+Choose a nonempty open subscheme $U \subset X$ such that
+$\mathcal{E}|_U \cong \mathcal{O}_U^{\oplus r}$.
+Then $P_U = \pi^{-1}(U)$ is isomorphic to $\mathbf{P}^{r - 1}_U$.
+In particular, there exists a section $s : U \to P_U$ of $\pi$.
+Let $X' \subset P$ be the scheme theoretic image of the
+morphism $U \to P_U \to P$. Then $X'$ is integral
+(Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-image-reduced}),
+the morphism $f = \pi|_{X'} : X' \to X$ is proper (Morphisms, Lemmas
+\ref{morphisms-lemma-closed-immersion-proper} and
+\ref{morphisms-lemma-composition-proper}), and
+$f^{-1}(U) \to U$ is an isomorphism. Hence $f$ is a modification
+(Morphisms, Definition \ref{morphisms-definition-modification}).
+By construction the pullback $f^*\mathcal{E}$ has a two step
+filtration whose quotient is invertible, because it is equal to
+$\mathcal{O}_P(1)|_{X'}$, and whose submodule $\mathcal{E}'$ is locally
+free of rank $r - 1$. By induction we can find a modification $g : X'' \to X'$
+such that $g^*\mathcal{E}'$ has a filtration as in the statement of
+the lemma. Thus $f \circ g : X'' \to X$ is the required modification.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extend-rational-map-after-modification}
+Let $S$ be a scheme. Let $X$, $Y$ be schemes over $S$.
+Assume $X$ is Noetherian and $Y$ is proper over $S$.
+Given an $S$-rational map $f : U \to Y$ from $X$ to $Y$
+there exists a morphism $p : X' \to X$ and an
+$S$-morphism $f' : X' \to Y$ such that
+\begin{enumerate}
+\item $p$ is proper and $p^{-1}(U) \to U$ is an isomorphism,
+\item $f'|_{p^{-1}(U)}$ is equal to $f \circ p|_{p^{-1}(U)}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $j : U \to X$ the inclusion morphism. Let $X' \subset Y \times_S X$
+be the scheme theoretic image of $(f, j) : U \to Y \times_S X$
+(Morphisms, Definition \ref{morphisms-definition-scheme-theoretic-image}).
+The projection $g : Y \times_S X \to X$ is proper
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}).
+The composition $p : X' \to X$ of $X' \to Y \times_S X$ and $g$ is proper
+(Morphisms, Lemmas \ref{morphisms-lemma-closed-immersion-proper} and
+\ref{morphisms-lemma-composition-proper}).
+Since $g$ is separated and $U \subset X$ is retrocompact (as $X$ is Noetherian)
+we conclude that $p^{-1}(U) \to U$ is an isomorphism by
+Morphisms, Lemma
+\ref{morphisms-lemma-scheme-theoretic-image-of-partial-section}.
+On the other hand, the composition $f' : X' \to Y$ of $X' \to Y \times_S X$
+and the projection $Y \times_S X \to Y$ agrees with $f$ on $p^{-1}(U)$.
+\end{proof}
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/dpa.tex b/books/stacks/dpa.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c5e60053c36aaef32f9b58ad2184704f385c7236
--- /dev/null
+++ b/books/stacks/dpa.tex
@@ -0,0 +1,2945 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Divided Power Algebra}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we talk about divided power algebras and what
+you can do with them. A reference is the book \cite{Berthelot}.
+
+
+
+
+
+\section{Divided powers}
+\label{section-divided-powers}
+
+\noindent
+In this section we collect some results on divided power rings.
+We will use the convention $0! = 1$ (as empty products should give $1$).
+
+\begin{definition}
+\label{definition-divided-powers}
+Let $A$ be a ring. Let $I$ be an ideal of $A$. A collection of maps
+$\gamma_n : I \to I$, $n > 0$ is called a {\it divided power structure}
+on $I$ if for all $n \geq 0$, $m > 0$, $x, y \in I$, and $a \in A$ we have
+\begin{enumerate}
+\item $\gamma_1(x) = x$, we also set $\gamma_0(x) = 1$,
+\item $\gamma_n(x)\gamma_m(x) = \frac{(n + m)!}{n! m!} \gamma_{n + m}(x)$,
+\item $\gamma_n(ax) = a^n \gamma_n(x)$,
+\item $\gamma_n(x + y) = \sum_{i = 0, \ldots, n} \gamma_i(x)\gamma_{n - i}(y)$,
+\item $\gamma_n(\gamma_m(x)) = \frac{(nm)!}{n! (m!)^n} \gamma_{nm}(x)$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that the rational numbers $\frac{(n + m)!}{n! m!}$
+and $\frac{(nm)!}{n! (m!)^n}$ occurring in the definition are in fact integers;
+the first is the number of ways to choose $n$ out of $n + m$ and
+the second counts the number of ways to divide a group of $nm$
+objects into $n$ groups of $m$.
+We make some remarks about the definition which show that
+$\gamma_n(x)$ is a replacement for $x^n/n!$ in $I$.
+
+\begin{lemma}
+\label{lemma-silly}
+Let $A$ be a ring. Let $I$ be an ideal of $A$.
+\begin{enumerate}
+\item If $\gamma$ is a divided power structure\footnote{Here
+and in the following, $\gamma$ stands short for a sequence
+of maps $\gamma_1, \gamma_2, \gamma_3, \ldots$ from $I$ to $I$.}
+on $I$, then
+$n! \gamma_n(x) = x^n$ for $n \geq 1$, $x \in I$.
+\end{enumerate}
+Assume $A$ is torsion free as a $\mathbf{Z}$-module.
+\begin{enumerate}
+\item[(2)] A divided power structure on $I$, if it exists, is unique.
+\item[(3)] If $\gamma_n : I \to I$ are maps then
+$$
+\gamma\text{ is a divided power structure}
+\Leftrightarrow
+n! \gamma_n(x) = x^n\ \forall x \in I, n \geq 1.
+$$
+\item[(4)] The ideal $I$ has a divided power structure
+if and only if there exists
+a set of generators $x_i$ of $I$ as an ideal such that
+for all $n \geq 1$ we have $x_i^n \in (n!)I$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). If $\gamma$ is a divided power structure, then condition
+(2) (applied to $1$ and $n-1$ instead of $n$ and $m$)
+implies that $n \gamma_n(x) = \gamma_1(x)\gamma_{n - 1}(x)$. Hence
+by induction and condition (1) we get $n! \gamma_n(x) = x^n$.
+
+\medskip\noindent
+Assume $A$ is torsion free as a $\mathbf{Z}$-module.
+Proof of (2). This is clear from (1).
+
+\medskip\noindent
+Proof of (3). Assume that $n! \gamma_n(x) = x^n$ for all $x \in I$ and
+$n \geq 1$. Since $A \subset A \otimes_{\mathbf{Z}} \mathbf{Q}$ it suffices
+to prove the axioms (1) -- (5) of Definition
+\ref{definition-divided-powers} in case $A$ is a $\mathbf{Q}$-algebra.
+In this case $\gamma_n(x) = x^n/n!$ and it is straightforward
+to verify (1) -- (5); for example, (4) corresponds to the binomial
+formula
+$$
+(x + y)^n = \sum_{i = 0, \ldots, n} \frac{n!}{i!(n - i)!} x^iy^{n - i}
+$$
+We encourage the reader to do the verifications
+to make sure that we have the coefficients correct.
+
+\medskip\noindent
+Proof of (4). Assume we have generators $x_i$ of $I$ as an ideal
+such that $x_i^n \in (n!)I$ for all $n \geq 1$. We claim that
+for all $x \in I$ we have $x^n \in (n!)I$. If the claim holds then
+we can set $\gamma_n(x) = x^n/n!$ which is a divided power structure by (3).
+To prove the claim we note that it holds for $x = ax_i$. Hence we see
+that the claim holds for a set of generators of $I$ as an abelian group.
+By induction on the length of an expression in terms of these, it suffices
+to prove the claim for $x + y$ if it holds for $x$ and $y$. This
+follows immediately from the binomial theorem.
+\end{proof}
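+
+\noindent
+As a sanity check of the coefficient in condition (5), take $n = m = 2$
+and work in a $\mathbf{Q}$-algebra, where $\gamma_k(x) = x^k/k!$. Then
+$$
+\gamma_2(\gamma_2(x)) = \frac{1}{2}\left(\frac{x^2}{2}\right)^2
+= \frac{x^4}{8} = 3\,\gamma_4(x)
+$$
+which agrees with $\frac{(nm)!}{n!(m!)^n} = \frac{4!}{2!(2!)^2} = 3$.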
+
+\begin{example}
+\label{example-ideal-generated-by-p}
+Let $p$ be a prime number.
+Let $A$ be a ring such that every integer $n$ not divisible by $p$
+is invertible, i.e., $A$ is a $\mathbf{Z}_{(p)}$-algebra. Then
+$I = pA$ has a canonical divided power structure. Namely, given
+$x = pa \in I$ we set
+$$
+\gamma_n(x) = \frac{p^n}{n!} a^n
+$$
+The reader verifies immediately that $p^n/n! \in p\mathbf{Z}_{(p)}$
+for $n \geq 1$ (for instance, this can be derived from the fact
+that the exponent of $p$ in the prime factorization of $n!$ is
+$\left\lfloor n/p \right\rfloor + \left\lfloor n/p^2 \right\rfloor
++ \left\lfloor n/p^3 \right\rfloor + \ldots$),
+so that the definition makes sense and gives us a sequence of
+maps $\gamma_n : I \to I$. It is a straightforward exercise to
+verify that conditions (1) -- (5) of
+Definition \ref{definition-divided-powers} are satisfied.
+Alternatively, it is clear that the definition works for
+$A_0 = \mathbf{Z}_{(p)}$ and then the result follows from
+Lemma \ref{lemma-gamma-extends}.
+\end{example}
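+
+\noindent
+For instance, with $p = 2$ and $n = 4$ the displayed formula for the
+exponent of $p$ in $n!$ gives
+$\left\lfloor 4/2 \right\rfloor + \left\lfloor 4/4 \right\rfloor = 3$,
+so that
+$$
+\frac{p^n}{n!} = \frac{2^4}{4!} = \frac{16}{24} = \frac{2}{3}
+\in 2\mathbf{Z}_{(2)}
+$$
+since $3$ is invertible in $\mathbf{Z}_{(2)}$.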
+
+\noindent
+We notice that $\gamma_n\left(0\right) = 0$ for any ideal $I$ of
+$A$ and any divided power structure $\gamma$ on $I$. (This follows
+from axiom (3) in Definition \ref{definition-divided-powers},
+applied to $a=0$.)
+
+\begin{lemma}
+\label{lemma-check-on-generators}
+Let $A$ be a ring. Let $I$ be an ideal of $A$. Let $\gamma_n : I \to I$,
+$n \geq 1$ be a sequence of maps. Assume
+\begin{enumerate}
+\item[(a)] (1), (3), and (4) of Definition \ref{definition-divided-powers}
+hold for all $x, y \in I$, and
+\item[(b)] properties (2) and (5) hold for $x$ in
+some set of generators of $I$ as an ideal.
+\end{enumerate}
+Then $\gamma$ is a divided power structure on $I$.
+\end{lemma}
+
+\begin{proof}
+The numbers (1), (2), (3), (4), (5) in this proof refer to the
+conditions listed in Definition \ref{definition-divided-powers}.
+Applying (3) we see that if (2) and (5) hold for $x$ then (2) and (5)
+hold for $ax$ for all $a \in A$. Hence we see (b) implies
+(2) and (5) hold for a set of generators
+of $I$ as an abelian group. Hence, by induction on the length
+of an expression in terms of these, it suffices to prove that, given
+$x, y \in I$ such that (2) and (5) hold for $x$ and $y$, then (2) and (5) hold
+for $x + y$.
+
+\medskip\noindent
+Proof of (2) for $x + y$. By (4) we have
+$$
+\gamma_n(x + y)\gamma_m(x + y) =
+\sum\nolimits_{i + j = n,\ k + l = m}
+\gamma_i(x)\gamma_k(x)\gamma_j(y)\gamma_l(y)
+$$
+Using (2) for $x$ and $y$ this equals
+$$
+\sum \frac{(i + k)!}{i!k!}\frac{(j + l)!}{j!l!}
+\gamma_{i + k}(x)\gamma_{j + l}(y)
+$$
+Comparing this with the expansion
+$$
+\gamma_{n + m}(x + y) = \sum \gamma_a(x)\gamma_b(y)
+$$
+we see that we have to prove that given $a + b = n + m$ we have
+$$
+\sum\nolimits_{i + k = a,\ j + l = b,\ i + j = n,\ k + l = m}
+\frac{(i + k)!}{i!k!}\frac{(j + l)!}{j!l!}
+=
+\frac{(n + m)!}{n!m!}.
+$$
+Instead of arguing this directly, we note that the result is true
+for the ideal $I = (x, y)$ in the polynomial ring $\mathbf{Q}[x, y]$
+because $\gamma_n(f) = f^n/n!$, $f \in I$ defines a divided power
+structure on $I$. Hence the equality of rational numbers above is true.
+
+\medskip\noindent
+Proof of (5) for $x + y$ given that (1) -- (4) hold and that (5)
+holds for $x$ and $y$. We will again reduce the proof to an equality
+of rational numbers. Namely, using (4) we can write
+$\gamma_n(\gamma_m(x + y)) = \gamma_n(\sum \gamma_i(x)\gamma_j(y))$.
+Using (4) we can write
+$\gamma_n(\gamma_m(x + y))$ as a sum of terms which are products of
+factors of the form $\gamma_k(\gamma_i(x)\gamma_j(y))$.
+If $i > 0$ then
+\begin{align*}
+\gamma_k(\gamma_i(x)\gamma_j(y)) & =
+\gamma_j(y)^k\gamma_k(\gamma_i(x)) \\
+& = \frac{(ki)!}{k!(i!)^k} \gamma_j(y)^k \gamma_{ki}(x) \\
+& =
+\frac{(ki)!}{k!(i!)^k} \frac{(kj)!}{(j!)^k} \gamma_{ki}(x) \gamma_{kj}(y)
+\end{align*}
+using (3) in the first equality, (5) for $x$ in the second, and
+(2) exactly $k$ times in the third. Using (5) for $y$ we see the
+same equality holds when $i = 0$. Continuing like this using all
+axioms but (5) we see that we can write
+$$
+\gamma_n(\gamma_m(x + y)) =
+\sum\nolimits_{i + j = nm} c_{ij}\gamma_i(x)\gamma_j(y)
+$$
+for certain universal constants $c_{ij} \in \mathbf{Z}$. Again the fact
+that the equality is valid in the polynomial ring $\mathbf{Q}[x, y]$
+implies that the coefficients $c_{ij}$ are all equal to $(nm)!/n!(m!)^n$
+as desired.
+\end{proof}
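+
+\noindent
+Incidentally, the equality of rational numbers in the proof of (2) above
+is an instance of Vandermonde's identity
+$\sum_i \binom{a}{i}\binom{b}{n - i} = \binom{a + b}{n}$: the constraints
+force $k = a - i$, $j = n - i$, and $l = b - j$, so the left hand side is
+$\sum_i \binom{a}{i}\binom{b}{n - i}$. For example, for
+$n = m = a = b = 1$ the two terms $(i, k, j, l) = (1, 0, 0, 1)$ and
+$(0, 1, 1, 0)$ each contribute $1$, matching
+$\frac{(n + m)!}{n!m!} = 2$.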
+
+\begin{lemma}
+\label{lemma-two-ideals}
+Let $A$ be a ring with two ideals $I, J \subset A$.
+Let $\gamma$ be a divided power structure on $I$ and let
+$\delta$ be a divided power structure on $J$.
+Then
+\begin{enumerate}
+\item $\gamma$ and $\delta$ agree on $IJ$,
+\item if $\gamma$ and $\delta$ agree on $I \cap J$ then they are
+the restriction of a unique divided power structure $\epsilon$
+on $I + J$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $x \in I$ and $y \in J$. Then
+$$
+\gamma_n(xy) = y^n\gamma_n(x) = n! \delta_n(y) \gamma_n(x) =
+\delta_n(y) x^n = \delta_n(xy).
+$$
+Hence $\gamma$ and $\delta$ agree on a set of (additive) generators
+of $IJ$. By property (4) of Definition \ref{definition-divided-powers}
+it follows that they agree on all of $IJ$.
+
+\medskip\noindent
+Assume $\gamma$ and $\delta$ agree on $I \cap J$.
+Let $z \in I + J$. Write $z = x + y$ with $x \in I$ and $y \in J$.
+Then we set
+$$
+\epsilon_n(z) = \sum \gamma_i(x)\delta_{n - i}(y)
+$$
+for all $n \geq 1$.
+To see that this is well defined, suppose that $z = x' + y'$ is another
+representation with $x' \in I$ and $y' \in J$. Then
+$w = x - x' = y' - y \in I \cap J$. Hence
+\begin{align*}
+\sum\nolimits_{i + j = n} \gamma_i(x)\delta_j(y)
+& =
+\sum\nolimits_{i + j = n} \gamma_i(x' + w)\delta_j(y) \\
+& =
+\sum\nolimits_{i' + l + j = n} \gamma_{i'}(x')\gamma_l(w)\delta_j(y) \\
+& =
+\sum\nolimits_{i' + l + j = n} \gamma_{i'}(x')\delta_l(w)\delta_j(y) \\
+& =
+\sum\nolimits_{i' + j' = n} \gamma_{i'}(x')\delta_{j'}(y + w) \\
+& =
+\sum\nolimits_{i' + j' = n} \gamma_{i'}(x')\delta_{j'}(y')
+\end{align*}
+as desired. Hence, we have defined maps
+$\epsilon_n : I + J \to I + J$ for all $n \geq 1$; it is easy
+to see that $\epsilon_n|_I = \gamma_n$ and
+$\epsilon_n|_J = \delta_n$.
+Next, we prove conditions (1) -- (5) of
+Definition \ref{definition-divided-powers} for the collection
+of maps $\epsilon_n$.
+Properties (1) and (3) are clear. To see (4), suppose
+that $z = x + y$ and $z' = x' + y'$ with $x, x' \in I$ and $y, y' \in J$
+and compute
+\begin{align*}
+\epsilon_n(z + z') & =
+\sum\nolimits_{a + b = n} \gamma_a(x + x')\delta_b(y + y') \\
+& =
+\sum\nolimits_{i + i' + j + j' = n}
+\gamma_i(x) \gamma_{i'}(x')\delta_j(y)\delta_{j'}(y') \\
+& =
+\sum\nolimits_{k = 0, \ldots, n}
+\sum\nolimits_{i+j=k} \gamma_i(x)\delta_j(y)
+\sum\nolimits_{i'+j'=n-k} \gamma_{i'}(x')\delta_{j'}(y') \\
+& =
+\sum\nolimits_{k = 0, \ldots, n}\epsilon_k(z)\epsilon_{n-k}(z')
+\end{align*}
+as desired. Now we see that it suffices to prove (2) and (5) for
+elements of $I$ or $J$, see Lemma \ref{lemma-check-on-generators}.
+This is clear because $\gamma$ and $\delta$ are divided power
+structures.
+
+\medskip\noindent
+The existence of a divided power structure $\epsilon$ on $I+J$
+whose restrictions to $I$ and $J$ are $\gamma$ and $\delta$ is
+thus proven; its uniqueness is rather clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nil}
+Let $p$ be a prime number. Let $A$ be a ring, let $I \subset A$ be an ideal,
+and let $\gamma$ be a divided power structure on $I$. Assume $p$ is nilpotent
+in $A/I$. Then $I$ is locally nilpotent if and only if $p$ is nilpotent in $A$.
+\end{lemma}
+
+\begin{proof}
+If $p^N = 0$ in $A$, then for $x \in I$ we have
+$x^{pN} = (pN)!\gamma_{pN}(x) = 0$ because $(pN)!$ is
+divisible by $p^N$. Conversely, assume $I$ is locally nilpotent.
+Since $p$ is nilpotent in $A/I$ by assumption, we have $p^r \in I$
+for some $r$. As every element of $I$ is nilpotent, $p^r$, and hence
+$p$, is nilpotent.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Divided power rings}
+\label{section-divided-power-rings}
+
+\noindent
+There is a category of divided power rings.
+Here is the definition.
+
+\begin{definition}
+\label{definition-divided-power-ring}
+A {\it divided power ring} is a triple $(A, I, \gamma)$ where
+$A$ is a ring, $I \subset A$ is an ideal, and $\gamma = (\gamma_n)_{n \geq 1}$
+is a divided power structure on $I$.
+A {\it homomorphism of divided power rings}
+$\varphi : (A, I, \gamma) \to (B, J, \delta)$ is a ring homomorphism
+$\varphi : A \to B$ such that $\varphi(I) \subset J$ and such that
+$\delta_n(\varphi(x)) = \varphi(\gamma_n(x))$ for all $x \in I$ and
+$n \geq 1$.
+\end{definition}
+
+\noindent
+We sometimes say ``let $(B, J, \delta)$ be a divided power algebra over
+$(A, I, \gamma)$'' to indicate that $(B, J, \delta)$ is a divided power ring
+which comes equipped with a homomorphism of divided power rings
+$(A, I, \gamma) \to (B, J, \delta)$.
+
+\begin{lemma}
+\label{lemma-limits}
+The category of divided power rings has all limits and they agree with
+limits in the category of rings.
+\end{lemma}
+
+\begin{proof}
+The empty limit is the zero ring (that's weird but we need it).
+The product of a collection of divided power rings $(A_t, I_t, \gamma_t)$,
+$t \in T$ is given by $(\prod A_t, \prod I_t, \gamma)$ where
+$\gamma_n((x_t)) = (\gamma_{t, n}(x_t))$.
+The equalizer of $\alpha, \beta : (A, I, \gamma) \to (B, J, \delta)$
+is just $C = \{a \in A \mid \alpha(a) = \beta(a)\}$ with ideal $C \cap I$
+and induced divided powers. It follows that all limits exist, see
+Categories, Lemma \ref{categories-lemma-limits-products-equalizers}.
+\end{proof}
+
+\noindent
+The following lemma illustrates a very general category theoretic
+phenomenon in the case of divided power algebras.
+
+\begin{lemma}
+\label{lemma-a-version-of-brown}
+Let $\mathcal{C}$ be the category of divided power rings. Let
+$F : \mathcal{C} \to \textit{Sets}$ be a functor.
+Assume that
+\begin{enumerate}
+\item there exists a cardinal $\kappa$ such that for every
+$f \in F(A, I, \gamma)$ there exists a morphism
+$(A', I', \gamma') \to (A, I, \gamma)$ of $\mathcal{C}$ such that $f$
+is the image of $f' \in F(A', I', \gamma')$ and $|A'| \leq \kappa$, and
+\item $F$ commutes with limits.
+\end{enumerate}
+Then $F$ is representable, i.e., there exists an object $(B, J, \delta)$
+of $\mathcal{C}$ such that
+$$
+F(A, I, \gamma) = \Hom_\mathcal{C}((B, J, \delta), (A, I, \gamma))
+$$
+functorially in $(A, I, \gamma)$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of
+Categories, Lemma \ref{categories-lemma-a-version-of-brown}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimits}
+The category of divided power rings has all colimits.
+\end{lemma}
+
+\begin{proof}
+The empty colimit is $\mathbf{Z}$ with divided power ideal $(0)$.
+Let's discuss general colimits. Let $\mathcal{C}$ be a category and let
+$c \mapsto (A_c, I_c, \gamma_c)$ be a diagram. Consider the functor
+$$
+F(B, J, \delta) = \lim_{c \in \mathcal{C}}
+\Hom((A_c, I_c, \gamma_c), (B, J, \delta))
+$$
+Note that any $f = (f_c)_{c \in \mathcal{C}} \in F(B, J, \delta)$ has the
+property that the images $f_c(A_c)$ generate a subring $B'$ of $B$ of bounded
+cardinality $\kappa$ and that the images $f_c(I_c)$ generate a
+divided power subideal $J'$ of $B'$. Hence $f$ factors as an element
+$f' \in F(B', J', \delta')$ followed by the inclusion $B' \to B$. Also,
+$F$ commutes with limits. Hence we may apply
+Lemma \ref{lemma-a-version-of-brown}
+to see that $F$ is representable and we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-forgetful}
+The forgetful functor $(A, I, \gamma) \mapsto A$ does not commute with
+colimits. For example, let
+$$
+\xymatrix{
+(B, J, \delta) \ar[r] & (B'', J'', \delta'') \\
+(A, I, \gamma) \ar[r] \ar[u] & (B', J', \delta') \ar[u]
+}
+$$
+be a pushout in the category of divided power rings.
+Then in general the map $B \otimes_A B' \to B''$ isn't an
+isomorphism. (It is always surjective.)
+An explicit example is given by
+$(A, I, \gamma) = (\mathbf{Z}, (0), \emptyset)$,
+$(B, J, \delta) = (\mathbf{Z}/4\mathbf{Z}, 2\mathbf{Z}/4\mathbf{Z}, \delta)$,
+and
+$(B', J', \delta') =
+(\mathbf{Z}/4\mathbf{Z}, 2\mathbf{Z}/4\mathbf{Z}, \delta')$
+where $\delta_2(2) = 2$ and $\delta'_2(2) = 0$.
+More precisely, using Lemma \ref{lemma-need-only-gamma-p}
+we let $\delta$, resp.\ $\delta'$ be the unique
+divided power structure on $J$, resp.\ $J'$ such that
+$\delta_2 : J \to J$, resp.\ $\delta'_2 : J' \to J'$
+is the map $0 \mapsto 0, 2 \mapsto 2$, resp.\ $0 \mapsto 0, 2 \mapsto 0$.
+Then $(B'', J'', \delta'') = (\mathbf{F}_2, (0), \emptyset)$
+which doesn't agree with the tensor product. However, note that it is always
+true that
+$$
+B''/J'' = B/J \otimes_{A/I} B'/J'
+$$
+as can be seen from the universal property of the pushout by considering
+maps into divided power algebras of the form $(C, (0), \emptyset)$.
+\end{remark}
+
+
+\section{Extending divided powers}
+\label{section-extend}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-extends}
+Given a divided power ring $(A, I, \gamma)$ and a ring map
+$A \to B$ we say $\gamma$ {\it extends} to $B$ if there exists a
+divided power structure $\bar \gamma$ on $IB$ such that
+$(A, I, \gamma) \to (B, IB, \bar\gamma)$ is a homomorphism of
+divided power rings.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-gamma-extends}
+Let $(A, I, \gamma)$ be a divided power ring.
+Let $A \to B$ be a ring map.
+If $\gamma$ extends to $B$ then it extends uniquely.
+Assume (at least) one of the following conditions holds
+\begin{enumerate}
+\item $IB = 0$,
+\item $I$ is principal, or
+\item $A \to B$ is flat.
+\end{enumerate}
+Then $\gamma$ extends to $B$.
+\end{lemma}
+
+\begin{proof}
+Any element of $IB$ can be written as a finite sum
+$\sum\nolimits_{i=1}^t b_ix_i$ with
+$b_i \in B$ and $x_i \in I$. If $\gamma$ extends to $\bar\gamma$ on $IB$
+then $\bar\gamma_n(x_i) = \gamma_n(x_i)$.
+Thus, conditions (3) and (4) in
+Definition \ref{definition-divided-powers} imply that
+$$
+\bar\gamma_n(\sum\nolimits_{i=1}^t b_ix_i) =
+\sum\nolimits_{n_1 + \ldots + n_t = n}
+\prod\nolimits_{i = 1}^t b_i^{n_i}\gamma_{n_i}(x_i)
+$$
+Thus we see that $\bar\gamma$ is unique if it exists.
+
+\medskip\noindent
+If $IB = 0$ then setting $\bar\gamma_n(0) = 0$ works. If $I = (x)$
+then we define $\bar\gamma_n(bx) = b^n\gamma_n(x)$. This is well defined:
+if $b'x = bx$, i.e., $(b - b')x = 0$ then
+\begin{align*}
+b^n\gamma_n(x) - (b')^n\gamma_n(x)
+& =
+(b^n - (b')^n)\gamma_n(x) \\
+& =
+(b^{n - 1} + \ldots + (b')^{n - 1})(b - b')\gamma_n(x) = 0
+\end{align*}
+because $\gamma_n(x)$ is divisible by $x$ (since
+$\gamma_n(I) \subset I$) and hence annihilated by $b - b'$.
+Next, we prove conditions (1) -- (5) of
+Definition \ref{definition-divided-powers}.
+Parts (1), (2), (3), (5) are obvious from the construction.
+For (4) suppose that $y, z \in IB$, say $y = bx$ and $z = cx$. Then
+$y + z = (b + c)x$ hence
+\begin{align*}
+\bar\gamma_n(y + z)
+& =
+(b + c)^n\gamma_n(x) \\
+& =
+\sum \frac{n!}{i!(n - i)!}b^ic^{n -i}\gamma_n(x) \\
+& =
+\sum b^ic^{n - i}\gamma_i(x)\gamma_{n - i}(x) \\
+& =
+\sum \bar\gamma_i(y)\bar\gamma_{n -i}(z)
+\end{align*}
+as desired.
+
+\medskip\noindent
+Assume $A \to B$ is flat. Suppose that $b_1, \ldots, b_r \in B$ and
+$x_1, \ldots, x_r \in I$. Then
+$$
+\bar\gamma_n(\sum b_ix_i) =
+\sum b_1^{e_1} \ldots b_r^{e_r} \gamma_{e_1}(x_1) \ldots \gamma_{e_r}(x_r)
+$$
+where the sum is over $e_1 + \ldots + e_r = n$
+if $\bar\gamma_n$ exists. Next suppose that we have $c_1, \ldots, c_s \in B$
+and $a_{ij} \in A$ such that $b_i = \sum a_{ij}c_j$.
+Setting $y_j = \sum a_{ij}x_i$ we claim that
+$$
+\sum b_1^{e_1} \ldots b_r^{e_r} \gamma_{e_1}(x_1) \ldots \gamma_{e_r}(x_r) =
+\sum c_1^{d_1} \ldots c_s^{d_s} \gamma_{d_1}(y_1) \ldots \gamma_{d_s}(y_s)
+$$
+in $B$ where on the right hand side we are summing over
+$d_1 + \ldots + d_s = n$. Namely, using the axioms of a divided power
+structure we can expand both sides into a sum with coefficients
+in $\mathbf{Z}[a_{ij}]$ of terms of the form
+$c_1^{d_1} \ldots c_s^{d_s}\gamma_{e_1}(x_1) \ldots \gamma_{e_r}(x_r)$.
+To see that the coefficients agree we note that the result is true
+in $\mathbf{Q}[x_1, \ldots, x_r, c_1, \ldots, c_s, a_{ij}]$ with
+$\gamma$ the unique divided power structure on $(x_1, \ldots, x_r)$.
+By Lazard's theorem (Algebra, Theorem \ref{algebra-theorem-lazard})
+we can write $B$ as a directed colimit of finite free $A$-modules.
+In particular, if $z \in IB$ is written as $z = \sum x_ib_i$ and
+$z = \sum x'_{i'}b'_{i'}$, then we can find $c_1, \ldots, c_s \in B$
+and $a_{ij}, a'_{i'j} \in A$ such that $b_i = \sum a_{ij}c_j$
+and $b'_{i'} = \sum a'_{i'j}c_j$ such that
+$y_j = \sum x_ia_{ij} = \sum x'_{i'}a'_{i'j}$ holds\footnote{This
+can also be proven without recourse to
+Algebra, Theorem \ref{algebra-theorem-lazard}. Indeed, if
+$z = \sum x_ib_i$ and $z = \sum x'_{i'}b'_{i'}$, then
+$\sum x_ib_i - \sum x'_{i'}b'_{i'} = 0$ is a relation in the
+$A$-module $B$. Thus, Algebra, Lemma \ref{algebra-lemma-flat-eq}
+(applied to the $x_i$ and $x'_{i'}$ taking the place of the $f_i$,
+and the $b_i$ and $b'_{i'}$ taking the role of the $x_i$) yields
+the existence of the $c_1, \ldots, c_s \in B$
+and $a_{ij}, a'_{i'j} \in A$ as required.}.
+Hence the procedure above gives a well defined map $\bar\gamma_n$
+on $IB$. By construction $\bar\gamma$ satisfies conditions (1), (3), and
+(4). Moreover, for $x \in I$ we have $\bar\gamma_n(x) = \gamma_n(x)$. Hence
+it follows from Lemma \ref{lemma-check-on-generators} that $\bar\gamma$
+is a divided power structure on $IB$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kernel}
+Let $(A, I, \gamma)$ be a divided power ring.
+\begin{enumerate}
+\item If $\varphi : (A, I, \gamma) \to (B, J, \delta)$ is a
+homomorphism of divided power rings, then $\Ker(\varphi) \cap I$
+is preserved by $\gamma_n$ for all $n \geq 1$.
+\item Let $\mathfrak a \subset A$ be an ideal and set
+$I' = I \cap \mathfrak a$. The following are equivalent
+\begin{enumerate}
+\item $I'$ is preserved by $\gamma_n$ for all $n > 0$,
+\item $\gamma$ extends to $A/\mathfrak a$, and
+\item there exist a set of generators $x_i$ of $I'$ as an ideal
+such that $\gamma_n(x_i) \in I'$ for all $n > 0$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). This is clear. Assume (2)(a). Define
+$\bar\gamma_n(x \bmod I') = \gamma_n(x) \bmod I'$ for $x \in I$.
+This is well defined since $\gamma_n(x + y) = \gamma_n(x) \bmod I'$
+for $y \in I'$ by Definition \ref{definition-divided-powers} (4) and
+the fact that $\gamma_j(y) \in I'$ by assumption. It is clear that
+$\bar\gamma$ is a divided power structure as $\gamma$ is one.
+Hence (2)(b) holds. Also, (2)(b) implies (2)(a) by part (1).
+It is clear that (2)(a) implies (2)(c). Assume (2)(c).
+Note that $\gamma_n(x) = a^n\gamma_n(x_i) \in I'$ for $x = ax_i$.
+Hence we see that $\gamma_n(x) \in I'$ for a set of generators of $I'$
+as an abelian group. By induction on the length of an expression in
+terms of these, it suffices to prove $\forall n : \gamma_n(x + y) \in I'$
+if $\forall n : \gamma_n(x), \gamma_n(y) \in I'$. This
+follows immediately from the fourth axiom of a divided power structure.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sub-dp-ideal}
+Let $(A, I, \gamma)$ be a divided power ring.
+Let $E \subset I$ be a subset.
+Then the smallest ideal $J \subset I$ preserved by $\gamma$
+and containing all $f \in E$ is the ideal $J$ generated by
+$\gamma_n(f)$, $n \geq 1$, $f \in E$.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-kernel}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extend-to-completion}
+Let $(A, I, \gamma)$ be a divided power ring. Let $p$ be a prime.
+If $p$ is nilpotent in $A/I$, then
+\begin{enumerate}
+\item the $p$-adic completion $A^\wedge = \lim_e A/p^eA$ surjects onto $A/I$,
+\item the kernel of this map is the $p$-adic completion $I^\wedge$ of $I$, and
+\item each $\gamma_n$ is continuous for the $p$-adic topology and extends
+to $\gamma_n^\wedge : I^\wedge \to I^\wedge$ defining a divided power
+structure on $I^\wedge$.
+\end{enumerate}
+If moreover $A$ is a $\mathbf{Z}_{(p)}$-algebra, then
+\begin{enumerate}
+\item[(4)] for $e$ large enough the ideal $p^eA \subset I$ is preserved by the
+divided power structure $\gamma$ and
+$$
+(A^\wedge, I^\wedge, \gamma^\wedge) = \lim_e (A/p^eA, I/p^eA, \bar\gamma)
+$$
+in the category of divided power rings.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $t \geq 1$ be an integer such that $p^tA/I = 0$, i.e., $p^tA \subset I$.
+The map $A^\wedge \to A/I$ is the composition $A^\wedge \to A/p^tA \to A/I$
+which is surjective (for example by
+Algebra, Lemma \ref{algebra-lemma-completion-generalities}).
+As $p^eI \subset p^eA \cap I \subset p^{e - t}I$ for $e \geq t$ we see
+that the kernel of the composition $A^\wedge \to A/I$ is the $p$-adic
+completion of $I$. The map $\gamma_n$ is continuous because
+$$
+\gamma_n(x + p^ey) =
+\sum\nolimits_{i + j = n} p^{je}\gamma_i(x)\gamma_j(y) =
+\gamma_n(x) \bmod p^eI
+$$
+by the axioms of a divided power structure. It is clear that the axioms
+for divided power structures are inherited by the maps $\gamma_n^\wedge$
+from the maps $\gamma_n$. Finally, to see the last statement say $e > t$.
+Then $p^eA \subset I$ and $\gamma_1(p^eA) \subset p^eA$ and for $n > 1$
+we have
+$$
+\gamma_n(p^ea) = p^n \gamma_n(p^{e - 1}a) = \frac{p^n}{n!} p^{n(e - 1)}a^n
+\in p^e A
+$$
+as $p^n/n! \in \mathbf{Z}_{(p)}$ and as $n \geq 2$ and $e \geq 2$ so
+$n(e - 1) \geq e$.
+This proves that $\gamma$ extends to $A/p^eA$, see Lemma \ref{lemma-kernel}.
+The statement on limits is clear from the construction of limits in
+the proof of Lemma \ref{lemma-limits}.
+\end{proof}
+
+
+
+
+\section{Divided power polynomial algebras}
+\label{section-divided-power-polynomial-ring}
+
+\noindent
+A very useful example is the {\it divided power polynomial algebra}.
+Let $A$ be a ring. Let $t \geq 1$. We will denote
+$A\langle x_1, \ldots, x_t \rangle$ the following $A$-algebra:
+As an $A$-module we set
+$$
+A\langle x_1, \ldots, x_t \rangle =
+\bigoplus\nolimits_{n_1, \ldots, n_t \geq 0} A x_1^{[n_1]} \ldots x_t^{[n_t]}
+$$
+with multiplication given by
+$$
+x_i^{[n]}x_i^{[m]} = \frac{(n + m)!}{n!m!}x_i^{[n + m]}.
+$$
+We also set $x_i = x_i^{[1]}$. Note that
+$1 = x_1^{[0]} \ldots x_t^{[0]}$. There is a similar construction
+which gives the divided power polynomial algebra in infinitely many
+variables. There is a canonical $A$-algebra map
+$A\langle x_1, \ldots, x_t \rangle \to A$ sending $x_i^{[n]}$ to zero
+for $n > 0$. The kernel of this map is denoted
+$A\langle x_1, \ldots, x_t \rangle_{+}$.
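To make the multiplication rule concrete, here is a minimal Python sketch (the representation and the name `dp_mul` are ours, not part of the text) of the one-variable algebra $A\langle x \rangle$ over $\mathbf{Z}$: an element is stored as a dictionary mapping $n$ to the coefficient of $x^{[n]}$. It illustrates that $x^n = n!\, x^{[n]}$, so $x^{[n]}$ behaves as "$x^n/n!$" without any division being performed.

```python
from math import comb

# Sketch of A<x> in one divided power variable over Z (representation ours).
# An element is a dict {n: c_n} representing sum_n c_n * x^[n].

def dp_mul(f, g):
    """Multiply using x^[n] * x^[m] = ((n+m)!/(n!m!)) * x^[n+m]."""
    out = {}
    for n, a in f.items():
        for m, b in g.items():
            out[n + m] = out.get(n + m, 0) + comb(n + m, n) * a * b
    return {k: v for k, v in out.items() if v}

x = {1: 1}  # the element x = x^[1]
p = {0: 1}  # the element 1 = x^[0]
for _ in range(5):
    p = dp_mul(p, x)
print(p)  # {5: 120}: indeed x^5 = 5! * x^[5]
```

Note that all arithmetic stays in $\mathbf{Z}$; the divided power bookkeeping is carried entirely by the binomial coefficients.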
+
+\begin{lemma}
+\label{lemma-divided-power-polynomial-algebra}
+Let $(A, I, \gamma)$ be a divided power ring.
+There exists a unique divided power structure $\delta$ on
+$$
+J = IA\langle x_1, \ldots, x_t \rangle + A\langle x_1, \ldots, x_t \rangle_{+}
+$$
+such that
+\begin{enumerate}
+\item $\delta_n(x_i) = x_i^{[n]}$, and
+\item $(A, I, \gamma) \to (A\langle x_1, \ldots, x_t \rangle, J, \delta)$
+is a homomorphism of divided power rings.
+\end{enumerate}
+Moreover, $(A\langle x_1, \ldots, x_t \rangle, J, \delta)$ has the
+following universal property: A homomorphism of divided power rings
+$\varphi : (A\langle x_1, \ldots, x_t \rangle, J, \delta) \to
+(C, K, \epsilon)$ is
+the same thing as a homomorphism of divided power rings
+$A \to C$ and elements $k_1, \ldots, k_t \in K$.
+\end{lemma}
+
+\begin{proof}
+We will prove the lemma in case of a divided power polynomial algebra
+in one variable. The result for the general case can be argued in exactly
+the same way, or by noting that $A\langle x_1, \ldots, x_t\rangle$ is
+isomorphic to the ring obtained by adjoining the divided power variables
+$x_1, \ldots, x_t$ one by one.
+
+\medskip\noindent
+Let $A\langle x \rangle_{+}$ be the ideal generated by
+$x, x^{[2]}, x^{[3]}, \ldots$.
+Note that $J = IA\langle x \rangle + A\langle x \rangle_{+}$
+and that
+$$
+IA\langle x \rangle \cap A\langle x \rangle_{+} =
+IA\langle x \rangle \cdot A\langle x \rangle_{+}
+$$
+Hence by Lemma \ref{lemma-two-ideals} it suffices to show that there
+exist divided power structures on the ideals $IA\langle x \rangle$ and
+$A\langle x \rangle_{+}$. The existence of the first follows from
+Lemma \ref{lemma-gamma-extends} as $A \to A\langle x \rangle$ is flat.
+For the second, note that if $A$ is torsion free, then we can apply
+Lemma \ref{lemma-silly} (4) to see that $\delta$ exists. Namely, choosing
+as generators the elements $x^{[m]}$ we see that
+$(x^{[m]})^n = \frac{(nm)!}{(m!)^n} x^{[nm]}$
+and $n!$ divides the integer $\frac{(nm)!}{(m!)^n}$.
+In general write $A = R/\mathfrak a$ for some torsion free ring $R$
+(e.g., a polynomial ring over $\mathbf{Z}$). The kernel of
+$R\langle x \rangle \to A\langle x \rangle$ is
+$\bigoplus \mathfrak a x^{[m]}$. Applying criterion (2)(c) of
+Lemma \ref{lemma-kernel} we see that the divided power structure
+on $R\langle x \rangle_{+}$ extends to $A\langle x \rangle$ as
+desired.
+
+\medskip\noindent
+Proof of the universal property. Given a homomorphism $\varphi : A \to C$
+of divided power rings and $k_1, \ldots, k_t \in K$ we consider
+$$
+A\langle x_1, \ldots, x_t \rangle \to C,\quad
+x_1^{[n_1]} \ldots x_t^{[n_t]} \longmapsto
+\epsilon_{n_1}(k_1) \ldots \epsilon_{n_t}(k_t)
+$$
+using $\varphi$ on coefficients. The only thing to check is that
+this is an $A$-algebra homomorphism (details omitted). The inverse
+construction is clear.
+\end{proof}
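The integrality fact used in the proof above, namely that $n!$ divides the integer $(nm)!/(m!)^n$, can be checked numerically: the quotient $(nm)!/(n!(m!)^n)$ counts partitions of a set of $nm$ elements into $n$ unordered blocks of size $m$, hence is an integer. A small Python verification (function name ours):

```python
from math import factorial

def multinomial_quotient(n, m):
    """Return the integer (nm)! / (m!)^n, asserting it really is an integer."""
    q, r = divmod(factorial(n * m), factorial(m) ** n)
    assert r == 0
    return q

# n! divides (nm)!/(m!)^n, as used for (x^[m])^n in the proof.
for n in range(1, 7):
    for m in range(1, 7):
        assert multinomial_quotient(n, m) % factorial(n) == 0
print("n! divides (nm)!/(m!)^n for all tested n, m")
```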
+
+\begin{remark}
+\label{remark-divided-power-polynomial-algebra}
+Let $(A, I, \gamma)$ be a divided power ring.
+There is a variant of Lemma \ref{lemma-divided-power-polynomial-algebra}
+for infinitely many variables. First note that if $s < t$ then there
+is a canonical map
+$$
+A\langle x_1, \ldots, x_s \rangle \to A\langle x_1, \ldots, x_t\rangle
+$$
+Hence if $W$ is any set, then we set
+$$
+A\langle x_w: w \in W\rangle =
+\colim_{E \subset W} A\langle x_e:e \in E\rangle
+$$
+(colimit over $E$ finite subset of $W$)
+with transition maps as above. By the definition of a colimit we see
+that the universal mapping property of $A\langle x_w: w \in W\rangle$ is
+completely analogous to the mapping property stated in
+Lemma \ref{lemma-divided-power-polynomial-algebra}.
+\end{remark}
+
+\noindent
+The following lemma can be found in \cite{BO}.
+
+\begin{lemma}
+\label{lemma-need-only-gamma-p}
+Let $p$ be a prime number. Let $A$ be a ring such that every integer $n$
+not divisible by $p$ is invertible, i.e., $A$ is a $\mathbf{Z}_{(p)}$-algebra.
+Let $I \subset A$ be an ideal. Two divided power structures
+$\gamma, \gamma'$ on $I$ are equal if and only if $\gamma_p = \gamma'_p$.
+Moreover, given a map $\delta : I \to I$ such that
+\begin{enumerate}
+\item $p!\delta(x) = x^p$ for all $x \in I$,
+\item $\delta(ax) = a^p\delta(x)$ for all $a \in A$, $x \in I$, and
+\item
+$\delta(x + y) =
+\delta(x) +
+\sum\nolimits_{i + j = p, i,j \geq 1} \frac{1}{i!j!} x^i y^j +
+\delta(y)$ for all $x, y \in I$,
+\end{enumerate}
+then there exists a unique divided power structure $\gamma$ on $I$ such
+that $\gamma_p = \delta$.
+\end{lemma}
+
+\begin{proof}
+If $n$ is not divisible by $p$, then $\gamma_n(x) = c x \gamma_{n - 1}(x)$
+where $c$ is a unit in $\mathbf{Z}_{(p)}$. Moreover,
+$$
+\gamma_{pm}(x) = c \gamma_m(\gamma_p(x))
+$$
+where $c$ is a unit in $\mathbf{Z}_{(p)}$. Thus the first assertion is clear.
+For the second assertion, we can, working backwards, use these equalities
+to define all $\gamma_n$. More precisely, if
+$n = a_0 + a_1p + \ldots + a_e p^e$ with $a_i \in \{0, \ldots, p - 1\}$ then
+we set
+$$
+\gamma_n(x) = c_n x^{a_0} \delta(x)^{a_1} \ldots \delta^e(x)^{a_e}
+$$
+for $c_n \in \mathbf{Z}_{(p)}$ defined by
+$$
+c_n =
+{(p!)^{a_1 + a_2(1 + p) + \ldots + a_e(1 + \ldots + p^{e - 1})}}/{n!}.
+$$
+Now we have to show the axioms (1) -- (5) of a divided power structure, see
+Definition \ref{definition-divided-powers}. We observe that (1) and (3) are
+immediate. Verification of (2) and (5) is by a direct calculation which
+we omit. Let $x, y \in I$. We claim there is a ring map
+$$
+\varphi : \mathbf{Z}_{(p)}\langle u, v \rangle \longrightarrow A
+$$
+which maps $u^{[n]}$ to $\gamma_n(x)$ and $v^{[n]}$ to $\gamma_n(y)$.
+By construction of $\mathbf{Z}_{(p)}\langle u, v \rangle$ this means
+we have to check that
+$$
+\gamma_n(x)\gamma_m(x) = \frac{(n + m)!}{n!m!} \gamma_{n + m}(x)
+$$
+in $A$ and similarly for $y$. This is true because (2) holds for $\gamma$.
+Let $\epsilon$ denote the divided power structure on the
+ideal $\mathbf{Z}_{(p)}\langle u, v\rangle_{+}$ of
+$\mathbf{Z}_{(p)}\langle u, v\rangle$.
+Next, we claim that $\varphi(\epsilon_n(f)) = \gamma_n(\varphi(f))$
+for $f \in \mathbf{Z}_{(p)}\langle u, v\rangle_{+}$ and all $n$.
+This is clear for $n = 0, 1, \ldots, p - 1$. For $n = p$ it suffices
+to prove it for a set of generators of the ideal
+$\mathbf{Z}_{(p)}\langle u, v\rangle_{+}$ because both $\epsilon_p$
+and $\gamma_p = \delta$ satisfy properties (1) and (3) of the lemma.
+Hence it suffices to prove that
+$\gamma_p(\gamma_n(x)) = \frac{(pn)!}{p!(n!)^p}\gamma_{pn}(x)$ and
+similarly for $y$, which follows as (5) holds for $\gamma$.
+Now, if $n = a_0 + a_1p + \ldots + a_e p^e$
+is an arbitrary integer written in $p$-adic expansion as above, then
+$$
+\epsilon_n(f) =
+c_n f^{a_0} \gamma_p(f)^{a_1} \ldots \gamma_p^e(f)^{a_e}
+$$
+because $\epsilon$ is a divided power structure. Hence we see that
+$\varphi(\epsilon_n(f)) = \gamma_n(\varphi(f))$ holds for all $n$.
+Applying this for $f = u + v$ we see that axiom (4) for $\gamma$
+follows from the fact that $\epsilon$ is a divided power structure.
+\end{proof}
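The fact that the coefficient $c_n$ in the proof above is a unit in $\mathbf{Z}_{(p)}$ comes down to Legendre's formula: $v_p(n!) = \sum_{i \geq 1} \lfloor n/p^i \rfloor$ equals exactly the exponent $a_1 + a_2(1 + p) + \ldots + a_e(1 + \ldots + p^{e - 1})$, while $v_p(p!) = 1$. The following Python check (function names ours) verifies this equality of valuations:

```python
from math import factorial

def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def exponent_E(n, p):
    """E = a_1 + a_2(1+p) + ... + a_e(1+...+p^{e-1}) for p-adic digits a_i of n."""
    digits = []
    while n:
        digits.append(n % p)
        n //= p
    # digit a_i contributes a_i * (1 + p + ... + p^{i-1}); a_0 contributes 0
    return sum(a * sum(p ** j for j in range(i)) for i, a in enumerate(digits))

# c_n = (p!)^E / n! has v_p = E - v_p(n!) = 0, so c_n is a unit in Z_(p).
for p in (2, 3, 5):
    for n in range(1, 200):
        assert vp(factorial(n), p) == exponent_E(n, p)
print("v_p(n!) equals the exponent of p! in c_n for all tested n, p")
```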
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Tate resolutions}
+\label{section-tate}
+
+\noindent
+In this section we briefly discuss the resolutions constructed in
+\cite{Tate-homology} and \cite{AH}
+which combine divided power structures with
+differential graded algebras.
+In this section we will use {\it homological notation} for
+differential graded algebras.
+Our differential graded algebras will sit in nonnegative homological
+degrees. Thus our differential graded algebras $(A, \text{d})$
+will be given as chain complexes
+$$
+\ldots \to A_2 \to A_1 \to A_0 \to 0 \to \ldots
+$$
+endowed with a multiplication.
+
+\medskip\noindent
+Let $R$ be a ring (commutative, as usual).
+In this section we will often consider graded
+$R$-algebras $A = \bigoplus_{d \geq 0} A_d$ whose components are
+zero in negative degrees. We will set $A_+ = \bigoplus_{d > 0} A_d$.
+We will write $A_{even} = \bigoplus_{d \geq 0} A_{2d}$ and
+$A_{odd} = \bigoplus_{d \geq 0} A_{2d + 1}$.
+Recall that $A$ is graded commutative if
+$x y = (-1)^{\deg(x)\deg(y)} y x$ for homogeneous elements $x, y$.
+Recall that $A$ is strictly graded commutative if in addition
+$x^2 = 0$ for homogeneous elements $x$ of odd degree. Finally, to understand
+the following definition, keep in mind that $\gamma_n(x) = x^n/n!$
+if $A$ is a $\mathbf{Q}$-algebra.
+
+\begin{definition}
+\label{definition-divided-powers-graded}
+Let $R$ be a ring. Let $A = \bigoplus_{d \geq 0} A_d$ be a graded
+$R$-algebra which is strictly graded commutative. A collection of maps
+$\gamma_n : A_{even, +} \to A_{even, +}$ defined for all $n > 0$ is called
+a {\it divided power structure} on $A$ if we have
+\begin{enumerate}
+\item $\gamma_n(x) \in A_{2nd}$ if $x \in A_{2d}$,
+\item $\gamma_1(x) = x$ for any $x$, we also set $\gamma_0(x) = 1$,
+\item $\gamma_n(x)\gamma_m(x) = \frac{(n + m)!}{n! m!} \gamma_{n + m}(x)$,
+\item $\gamma_n(xy) = x^n \gamma_n(y)$ for all $x \in A_{even}$ and
+$y \in A_{even, +}$,
+\item $\gamma_n(xy) = 0$ if $x, y \in A_{odd}$ are homogeneous and $n > 1$,
+\item if $x, y \in A_{even, +}$ then
+$\gamma_n(x + y) = \sum_{i = 0, \ldots, n} \gamma_i(x)\gamma_{n - i}(y)$,
+\item $\gamma_n(\gamma_m(x)) =
+\frac{(nm)!}{n! (m!)^n} \gamma_{nm}(x)$ for $x \in A_{even, +}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Observe that conditions (2), (3), (4), (6), and (7) imply that
+$\gamma$ is a ``usual'' divided power structure on the ideal
+$A_{even, +}$ of the (commutative) ring $A_{even}$, see
+Sections \ref{section-divided-powers},
+\ref{section-divided-power-rings},
+\ref{section-extend}, and
+\ref{section-divided-power-polynomial-ring}.
+In particular, we have $n! \gamma_n(x) = x^n$ for all $x \in A_{even, +}$.
+Condition (1) states that $\gamma$ is compatible with grading and condition
+(5) tells us $\gamma_n$ for $n > 1$ vanishes on products
+of homogeneous elements of odd degree. But note that it may happen
+that
+$$
+\gamma_2(z_1 z_2 + z_3 z_4) = z_1z_2z_3z_4
+$$
+is nonzero if $z_1, z_2, z_3, z_4$ are homogeneous elements of odd degree.
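This computation can be verified in a model: in the exterior algebra over $\mathbf{Q}$ on odd generators $z_1, \ldots, z_4$ one has $\gamma_2(w) = w^2/2$, and for $w = z_1z_2 + z_3z_4$ the square picks up each cross term twice. The following Python sketch (the representation by increasing index tuples with Koszul signs is ours) confirms $\gamma_2(z_1z_2 + z_3z_4) = z_1z_2z_3z_4$:

```python
from fractions import Fraction
from itertools import product

# Exterior algebra over Q on odd generators z1..z4: an element is a dict
# mapping a strictly increasing tuple of indices to a rational coefficient.

def mono_mul(s, t):
    """Multiply basis monomials z_s * z_t; return (sign, merged) or None."""
    if set(s) & set(t):
        return None  # z_i^2 = 0
    seq = list(s) + list(t)
    sign = 1
    # bubble sort; each adjacent transposition of odd generators flips the sign
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign, tuple(seq)

def mul(f, g):
    out = {}
    for (s, a), (t, b) in product(f.items(), g.items()):
        r = mono_mul(s, t)
        if r:
            sign, m = r
            out[m] = out.get(m, Fraction(0)) + sign * a * b
    return {k: v for k, v in out.items() if v}

w = {(1, 2): Fraction(1), (3, 4): Fraction(1)}        # z1 z2 + z3 z4
gamma2 = {k: v / 2 for k, v in mul(w, w).items()}     # gamma_2(w) = w^2 / 2
print(gamma2)  # {(1, 2, 3, 4): Fraction(1, 1)}, i.e. z1 z2 z3 z4
```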
+
+\begin{example}[Adjoining odd variable]
+\label{example-adjoining-odd}
+Let $R$ be a ring. Let $(A, \gamma)$ be a strictly graded commutative
+graded $R$-algebra endowed with a divided power structure as in the
+definition above. Let $d > 0$ be an odd integer.
+In this setting we can adjoin a variable $T$ of degree $d$ to $A$.
+Namely, set
+$$
+A\langle T \rangle = A \oplus AT
+$$
+with grading given by $A\langle T \rangle_m = A_m \oplus A_{m - d}T$.
+We claim there is a unique divided power structure on
+$A\langle T \rangle$ compatible with the given divided power
+structure on $A$. Namely, we set
+$$
+\gamma_n(x + yT) = \gamma_n(x) + \gamma_{n - 1}(x)yT
+$$
+for $x \in A_{even, +}$ and $y \in A_{odd}$.
+\end{example}
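For the formula above one can run a sanity check in a model where $A$ is a $\mathbf{Q}$-algebra, so $\gamma_k(x) = x^k/k!$, and where $yT$ is a square-zero "dual number" (since $y$ and $T$ are both odd, $(yT)^2 = 0$). The following Python sketch (names ours) verifies axiom (3), $\gamma_n(u)\gamma_m(u) = \frac{(n + m)!}{n!m!}\gamma_{n + m}(u)$, for $u = x + yT$:

```python
from fractions import Fraction
from math import comb, factorial

# Dual-number model: (a, b) stands for a + b*(yT) with (yT)^2 = 0, over Q.
def dmul(u, v):
    return (u[0] * v[0], u[0] * v[1] + u[1] * v[0])

def gamma(n, u):
    """gamma_n(x + yT) = gamma_n(x) + gamma_{n-1}(x) yT, gamma_k(x) = x^k/k!."""
    a, b = u
    gn = a ** n / Fraction(factorial(n))
    gn1 = a ** (n - 1) / Fraction(factorial(n - 1))
    return (gn, gn1 * b)

u = (Fraction(7), Fraction(3))  # a sample element x + yT
for n in range(1, 5):
    for m in range(1, 5):
        lhs = dmul(gamma(n, u), gamma(m, u))
        g = gamma(n + m, u)
        rhs = (comb(n + m, n) * g[0], comb(n + m, n) * g[1])
        assert lhs == rhs
print("gamma_n gamma_m = binom(n+m, n) gamma_{n+m} holds in the model")
```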
+
+\begin{example}[Adjoining even variable]
+\label{example-adjoining-even}
+Let $R$ be a ring. Let $(A, \gamma)$ be a strictly graded commutative
+graded $R$-algebra endowed with a divided power structure as in the
+definition above. Let $d > 0$ be an even integer.
+In this setting we can adjoin a variable $T$ of degree $d$ to $A$.
+Namely, set
+$$
+A\langle T \rangle = A \oplus AT \oplus AT^{(2)} \oplus AT^{(3)} \oplus \ldots
+$$
+with multiplication given by
+$$
+T^{(n)} T^{(m)} = \frac{(n + m)!}{n!m!} T^{(n + m)}
+$$
+and with grading given by
+$$
+A\langle T \rangle_m =
+A_m \oplus A_{m - d}T \oplus A_{m - 2d}T^{(2)} \oplus \ldots
+$$
+We claim there is a unique divided power structure on
+$A\langle T \rangle$ compatible with the given divided power
+structure on $A$ such that
+$\gamma_n(T^{(i)}) = \frac{(ni)!}{n!(i!)^n} T^{(ni)}$.
+To define the divided power structure we first set
+$$
+\gamma_n\left(\sum\nolimits_{i > 0} x_i T^{(i)}\right) =
+\sum\nolimits_{n = \sum e_i} \prod\nolimits_i x_i^{e_i} \gamma_{e_i}(T^{(i)})
+$$
+if $x_i$ is in $A_{even}$. If $x_0 \in A_{even, +}$
+then we take
+$$
+\gamma_n\left(\sum\nolimits_{i \geq 0} x_i T^{(i)}\right) =
+\sum\nolimits_{a + b = n}
+\gamma_a(x_0)\gamma_b\left(\sum\nolimits_{i > 0} x_iT^{(i)}\right)
+$$
+where $\gamma_b$ is as defined above.
+\end{example}
+
+\begin{remark}
+\label{remark-adjoining-set-of-variables}
+We can also adjoin a set (possibly infinite) of exterior or divided
+power generators in a given degree $d > 0$, rather than just one
+as in Examples \ref{example-adjoining-odd}
+and \ref{example-adjoining-even}. Namely,
+following Remark \ref{remark-divided-power-polynomial-algebra}:
+for $(A,\gamma)$
+as above and a set $J$, let $A\langle
+T_j:j\in J\rangle$ be the directed colimit of the algebras
+$A\langle T_j:j\in S\rangle$ over all finite subsets $S$
+of $J$. It is immediate that this algebra has a unique divided power
+structure, compatible with the given structure on $A$ and on
+each generator $T_j$.
+\end{remark}
+
+\noindent
+At this point we tie in the definition of divided power structures
+with differentials. To understand the definition note that
+$\text{d}(x^n/n!) = \text{d}(x) x^{n - 1}/(n - 1)!$ if $A$
+is a $\mathbf{Q}$-algebra and $x \in A_{even, +}$.
+
+\begin{definition}
+\label{definition-divided-powers-dga}
+Let $R$ be a ring. Let $A = \bigoplus_{d \geq 0} A_d$ be a
+differential graded $R$-algebra which is strictly graded commutative.
+A divided power structure $\gamma$ on $A$ is {\it compatible with
+the differential graded structure} if
+$\text{d}(\gamma_n(x)) = \text{d}(x) \gamma_{n - 1}(x)$ for
+all $x \in A_{even, +}$.
+\end{definition}
+
+\noindent
+Warning: Let $(A, \text{d}, \gamma)$ be as in
+Definition \ref{definition-divided-powers-dga}.
+It may not be true that $\gamma_n(x)$ is a boundary, if
+$x$ is a boundary. Thus $\gamma$ in general does not induce
+a divided power structure on the homology algebra $H(A)$.
+In some papers the authors put an additional compatibility
+condition in order to ensure that this is the case, but we elect
+not to do so.
+
+\begin{lemma}
+\label{lemma-dpdga-good}
+Let $(A, \text{d}, \gamma)$ and $(B, \text{d}, \gamma)$ be as in
+Definition \ref{definition-divided-powers-dga}. Let $f : A \to B$
+be a map of differential graded algebras compatible with divided
+power structures. Assume
+\begin{enumerate}
+\item $H_k(A) = 0$ for $k > 0$, and
+\item $f$ is surjective.
+\end{enumerate}
+Then $\gamma$ induces a divided power structure on the graded
+$R$-algebra $H(B)$.
+\end{lemma}
+
+\begin{proof}
+Suppose that $x$ and $x'$ are homogeneous of the same degree $2d$
+and define the same homology class in $H(B)$. Say $x' - x = \text{d}(w)$.
+Choose a lift $y \in A_{2d}$ of $x$ and a lift $z \in A_{2d + 1}$
+of $w$. Then $y' = y + \text{d}(z)$ is a lift of $x'$.
+Hence
+$$
+\gamma_n(y') = \sum \gamma_i(y) \gamma_{n - i}(\text{d}(z))
+= \gamma_n(y) +
+\sum\nolimits_{i < n} \gamma_i(y) \gamma_{n - i}(\text{d}(z))
+$$
+Since $A$ is acyclic in positive degrees and since
+$\text{d}(\gamma_j(\text{d}(z))) = 0$ for all $j$ we can write
+this as
+$$
+\gamma_n(y') = \gamma_n(y) +
+\sum\nolimits_{i < n} \gamma_i(y) \text{d}(z_i)
+$$
+for some $z_i$ in $A$. Moreover, for $0 < i < n$ we have
+$$
+\text{d}(\gamma_i(y) z_i) =
+\text{d}(\gamma_i(y))z_i + \gamma_i(y)\text{d}(z_i) =
+\text{d}(y) \gamma_{i - 1}(y) z_i + \gamma_i(y)\text{d}(z_i)
+$$
+and the first term maps to zero in $B$ as $\text{d}(y)$ maps to zero in $B$.
+Hence $\gamma_n(x')$ and $\gamma_n(x)$ map to the same element of $H(B)$.
+Thus we obtain a well defined map $\gamma_n : H_{2d}(B) \to H_{2nd}(B)$
+for all $d > 0$ and $n > 0$. We omit the verification that this
+defines a divided power structure on $H(B)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-div}
+Let $(A, \text{d}, \gamma)$ be as in
+Definition \ref{definition-divided-powers-dga}.
+Let $R \to R'$ be a ring map.
+Then $\text{d}$ and $\gamma$ induce similar structures on
+$A' = A \otimes_R R'$ such that $(A', \text{d}, \gamma)$ is as in
+Definition \ref{definition-divided-powers-dga}.
+\end{lemma}
+
+\begin{proof}
+Observe that $A'_{even} = A_{even} \otimes_R R'$ and
+$A'_{even, +} = A_{even, +} \otimes_R R'$. Hence we are trying to
+show that the divided powers $\gamma$ extend to $A'_{even}$
+(terminology as in Definition \ref{definition-extends}).
+Once we have shown $\gamma$ extends it follows easily that this
+extension has all the desired properties.
+
+\medskip\noindent
+Choose a polynomial $R$-algebra $P$ (on any set of generators)
+and a surjection of $R$-algebras
+$P \to R'$. The ring map $A_{even} \to A_{even} \otimes_R P$ is flat,
+hence the divided powers $\gamma$ extend to $A_{even} \otimes_R P$
+uniquely by Lemma \ref{lemma-gamma-extends}.
+Let $J = \Ker(P \to R')$. To show that $\gamma$ extends
+to $A \otimes_R R'$ it suffices to show that
+$I' = \Ker(A_{even, +} \otimes_R P \to A_{even, +} \otimes_R R')$
+is generated by elements $z$ such that $\gamma_n(z) \in I'$
+for all $n > 0$. This is clear as $I'$ is generated by elements
+of the form $x \otimes f$ with
+$x \in A_{even, +}$ and $f \in \Ker(P \to R')$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extend-differential}
+Let $(A, \text{d}, \gamma)$ be as in
+Definition \ref{definition-divided-powers-dga}.
+Let $d \geq 1$ be an integer.
+Let $A\langle T \rangle$ be the graded divided power polynomial algebra
+on $T$ with $\deg(T) = d$
+constructed in Example \ref{example-adjoining-odd} or
+\ref{example-adjoining-even}.
+Let $f \in A_{d - 1}$ be an element with $\text{d}(f) = 0$.
+There exists a unique differential $\text{d}$
+on $A\langle T\rangle$ such that $\text{d}(T) = f$ and
+such that $\text{d}$ is compatible with the divided power
+structure on $A\langle T \rangle$.
+\end{lemma}
+
+\begin{proof}
+This is proved by a direct computation which is omitted.
+\end{proof}
+
+\noindent
+In Lemma \ref{lemma-compute-cohomology-adjoin-variable}
+we will compute the cohomology of $A\langle T \rangle$ in some special cases.
+Here is Tate's construction, as extended
+by Avramov and Halperin.
+
+\begin{lemma}
+\label{lemma-tate-resolution}
+Let $R \to S$ be a homomorphism of commutative rings.
+There exists a factorization
+$$
+R \to A \to S
+$$
+with the following properties:
+\begin{enumerate}
+\item $(A, \text{d}, \gamma)$ is as in
+Definition \ref{definition-divided-powers-dga},
+\item $A \to S$ is a quasi-isomorphism (if we endow $S$ with
+the zero differential),
+\item $A_0 = R[x_j: j\in J] \to S$ is any surjection of a polynomial
+ring onto $S$, and
+\item $A$ is a graded divided power polynomial algebra over $R$.
+\end{enumerate}
+The last condition means that $A$ is constructed out of $A_0$ by
+successively adjoining a set of variables $T$ in each degree $> 0$ as in
+Example \ref{example-adjoining-odd} or \ref{example-adjoining-even}.
+Moreover, if $R$ is Noetherian and $R\to S$ is of finite type,
+then $A$ can be taken to have only finitely many generators in
+each degree.
+\end{lemma}
+
+\begin{proof}
+We write out the construction for the case that $R$ is Noetherian
+and $R\to S$ is of finite type. Without those assumptions, the proof
+is the same, except that we have to use some set (possibly
+infinite) of generators in each degree.
+
+\medskip\noindent
+Start of the construction: Let $A(0) = R[x_1, \ldots, x_n]$ be
+a (usual) polynomial ring and let $A(0) \to S$ be a surjection.
+As grading we take $A(0)_0 = A(0)$ and $A(0)_d = 0$ for $d \not = 0$.
+Thus $\text{d} = 0$ and $\gamma_n$, $n > 0$, is zero as well.
+
+\medskip\noindent
+Choose generators $f_1, \ldots, f_m \in R[x_1, \ldots, x_n]$
+for the kernel of the given map $A(0) = R[x_1, \ldots, x_n] \to S$.
+We apply Example \ref{example-adjoining-odd} $m$ times to get
+$$
+A(1) = A(0)\langle T_1, \ldots, T_m\rangle
+$$
+with $\deg(T_i) = 1$ as a graded divided power polynomial algebra.
+We set $\text{d}(T_i) = f_i$. Since $A(1)$ is a divided power polynomial
+algebra over $A(0)$ and since $\text{d}(f_i) = 0$
+this extends uniquely to a differential on $A(1)$ by
+Lemma \ref{lemma-extend-differential}.
+
+\medskip\noindent
+Induction hypothesis: Assume we are given factorizations
+$$
+R \to A(0) \to A(1) \to \ldots \to A(m) \to S
+$$
+where $A(0)$ and $A(1)$ are as above and each $R \to A(m') \to S$
+for $2 \leq m' \leq m$ satisfies properties (1) and (4)
+of the statement of the lemma and (2) replaced by the condition that
+$H_i(A(m')) \to H_i(S)$ is an isomorphism for
+$m' > i \geq 0$. The base case is $m = 1$.
+
+\medskip\noindent
+Induction step: Assume we have $R \to A(m) \to S$
+as in the induction hypothesis. Consider the
+group $H_m(A(m))$. This is a module over $H_0(A(m)) = S$.
+In fact, it is a subquotient of $A(m)_m$ which is a finite
+type module over $A(m)_0 = R[x_1, \ldots, x_n]$.
+Thus we can pick finitely many elements
+$$
+e_1, \ldots, e_t \in \Ker(\text{d} : A(m)_m \to A(m)_{m - 1})
+$$
+which map to generators of this module. Applying
+Example \ref{example-adjoining-odd} or
+\ref{example-adjoining-even} $t$ times we get
+$$
+A(m + 1) = A(m)\langle T_1, \ldots, T_t\rangle
+$$
+with $\deg(T_i) = m + 1$ as a graded divided power algebra. We set
+$\text{d}(T_i) = e_i$. Since $A(m+1)$ is a divided power polynomial
+algebra over $A(m)$ and since $\text{d}(e_i) = 0$
+this extends uniquely to a differential on $A(m + 1)$
+compatible with the divided power structure.
+Since we've added only material in degree $m + 1$ and higher we see
+that $H_i(A(m + 1)) = H_i(A(m))$ for $i < m$. Moreover, it is
+clear that $H_m(A(m + 1)) = 0$ by construction.
+
+\medskip\noindent
+To finish the proof we observe that we have shown there exists
+a sequence of maps
+$$
+R \to A(0) \to A(1) \to \ldots \to A(m) \to A(m + 1) \to \ldots \to S
+$$
+and to finish the proof we set $A = \colim A(m)$.
+\end{proof}
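As an illustration of the induction step, take the toy example (our choice) $R = \mathbf{Z}/8$ and $S = \mathbf{Z}/2 = R/(2)$. Here $A(0) = R$ and $A(1) = R\langle T \rangle$ with $\deg(T) = 1$, $T^2 = 0$, and $\text{d}(T) = 2$. The Python computation below shows that $H_0(A(1)) = S$ but $H_1(A(1)) \neq 0$, so the construction must continue by adjoining a divided power variable in degree $2$ with differential $4T$; in fact it never stops, consistent with the fact that $\mathbf{Z}/2$ does not have finite tor dimension over $\mathbf{Z}/8$.

```python
# First step of the Tate construction for R = Z/8 -> S = Z/2 = R/(2).
# A(1) = R + R*T with deg(T) = 1, T^2 = 0, and d(T) = 2.
R = 8
f = 2

# H_0(A(1)) = R / im(d) = R / 2R, which recovers S = Z/2
H0 = sorted({a % f for a in range(R)})
assert H0 == [0, 1]

# H_1(A(1)) = ker(d : R*T -> R) = {a in R : 2a = 0 mod 8} * T
H1 = sorted(a for a in range(R) if (f * a) % R == 0)
print(H1)  # [0, 4]: H_1 is nonzero, generated by the class of 4T
```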
+
+\begin{lemma}
+\label{lemma-tate-resoluton-pseudo-coherent-ring-map}
+Let $R \to S$ be a pseudo-coherent ring map (More on Algebra, Definition
+\ref{more-algebra-definition-pseudo-coherent-perfect}). Then
+Lemma \ref{lemma-tate-resolution} holds, with the resolution $A$ of $S$
+having finitely many generators in each degree.
+\end{lemma}
+
+\begin{proof}
+This is proved in exactly the same way as Lemma \ref{lemma-tate-resolution}.
+The only additional twist is that, given $A(m) \to S$ we have to
+show that $H_m = H_m(A(m))$ is a finite $R[x_1, \ldots, x_n]$-module
+(so that in the next step we need only add finitely many variables).
+Consider the complex
+$$
+\ldots \to A(m)_{m + 1} \to A(m)_m \to A(m)_{m - 1} \to
+\ldots \to A(m)_0 \to S \to 0
+$$
+Since $S$ is a pseudo-coherent $R[x_1, \ldots, x_n]$-module
+and since $A(m)_i$ is a finite free $R[x_1, \ldots, x_n]$-module
+we conclude that this is a pseudo-coherent complex, see
+More on Algebra, Lemma \ref{more-algebra-lemma-complex-pseudo-coherent-modules}.
+Since the complex is exact in (homological) degrees $> m$
+we conclude that $H_m$ is a finite $R[x_1, \ldots, x_n]$-module by
+More on Algebra, Lemma \ref{more-algebra-lemma-finite-cohomology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-uniqueness-tate-resolution}
+Let $R$ be a commutative ring. Suppose that $(A, \text{d}, \gamma)$ and
+$(B, \text{d}, \gamma)$ are as in
+Definition \ref{definition-divided-powers-dga}.
+Let $\overline{\varphi} : H_0(A) \to H_0(B)$ be an $R$-algebra map.
+Assume
+\begin{enumerate}
+\item $A$ is a graded divided power polynomial algebra over $R$.
+\item $H_k(B) = 0$ for $k > 0$.
+\end{enumerate}
+Then there exists a map $\varphi : A \to B$ of differential
+graded $R$-algebras compatible with divided powers
+that lifts $\overline{\varphi}$.
+\end{lemma}
+
+\begin{proof}
+The assumption means that $A$ is obtained from $R$ by successively adjoining
+some set of polynomial generators in degree zero, exterior generators
+in positive odd degrees, and divided power generators
+in positive even degrees. So we have a filtration
+$R \subset A(0) \subset A(1) \subset \ldots$
+of $A$ such that $A(m + 1)$ is obtained from $A(m)$ by adjoining
+generators of the appropriate type (which we simply call
+``divided power generators'') in degree $m + 1$.
+In particular, $A(0) \to H_0(A)$ is a surjection from a (usual) polynomial
+algebra over $R$ onto $H_0(A)$. Thus we can lift $\overline{\varphi}$
+to an $R$-algebra map $\varphi(0) : A(0) \to B_0$.
+
+\medskip\noindent
+Write $A(1) = A(0)\langle T_j:j\in J\rangle$ for some
+set $J$ of divided power variables $T_j$ of degree $1$. Let $f_j \in B_0$
+be $f_j = \varphi(0)(\text{d}(T_j))$. Observe that $f_j$
+maps to zero in $H_0(B)$ as $\text{d}T_j$ maps to zero in $H_0(A)$.
+Thus we can find $b_j \in B_1$ with $\text{d}(b_j) = f_j$.
+By the universal property of divided power polynomial algebras from
+Lemma \ref{lemma-divided-power-polynomial-algebra},
+we find a lift $\varphi(1) : A(1) \to B$ of $\varphi(0)$
+mapping $T_j$ to $b_j$.
+
+\medskip\noindent
+Having constructed $\varphi(m)$ for some $m \geq 1$ we can construct
+$\varphi(m + 1) : A(m + 1) \to B$ in exactly the same manner.
+We omit the details.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-divided-powers-on-tor}
+Let $R$ be a commutative ring. Let $S$ and $T$ be commutative $R$-algebras.
+Then there is a canonical structure
+of a strictly graded commutative $R$-algebra with divided powers on
+$$
+\operatorname{Tor}_*^R(S, T).
+$$
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $R \to A \to S$ as above. Since $A \to S$
+is a quasi-isomorphism and since $A_d$ is a free $R$-module,
+we see that the differential graded algebra $B = A \otimes_R T$ computes
+the Tor groups displayed in the lemma. Choose a surjection
+$R[y_j:j\in J] \to T$. Then we see that
+$B$ is a quotient of the differential graded algebra
+$A[y_j:j\in J]$ whose homology sits in degree $0$ (it is equal
+to $S[y_j:j\in J]$).
+By Lemma \ref{lemma-base-change-div} the differential graded algebras $B$ and
+$A[y_j:j\in J]$ have divided power structures compatible
+with the differentials. Hence we obtain our divided
+power structure on $H(B)$ by Lemma \ref{lemma-dpdga-good}.
+
+\medskip\noindent
+The divided power algebra structure constructed in this way is independent
+of the choice of $A$. Namely, if $A'$ is a second choice, then
+Lemma \ref{lemma-uniqueness-tate-resolution}
+implies there is a map $A \to A'$ preserving all structure and the
+augmentations towards $S$. Then the induced map
+$B = A \otimes_R T \to A' \otimes_R T = B'$ also preserves
+all structure
+and is a quasi-isomorphism. The induced isomorphism of
+Tor algebras is therefore compatible with products
+and divided powers.
+\end{proof}
+
+
+
+
+
+\section{Application to complete intersections}
+\label{section-application-ci}
+
+\noindent
+Let $R$ be a ring. Let $(A, \text{d}, \gamma)$ be as in
+Definition \ref{definition-divided-powers-dga}.
+A {\it derivation} of degree $2$ is an $R$-linear
+map $\theta : A \to A$ with the following
+properties
+\begin{enumerate}
+\item $\theta(A_d) \subset A_{d - 2}$,
+\item $\theta(xy) = \theta(x)y + x\theta(y)$,
+\item $\theta$ commutes with $\text{d}$,
+\item $\theta(\gamma_n(x)) = \theta(x) \gamma_{n - 1}(x)$
+for all $x \in A_{2d}$ and all $d > 0$.
+\end{enumerate}
+In the following lemma we construct a derivation.
+
+\begin{lemma}
+\label{lemma-get-derivation}
+Let $R$ be a ring. Let $(A, \text{d}, \gamma)$ be as in
+Definition \ref{definition-divided-powers-dga}.
+Let $R' \to R$ be a surjection of rings whose kernel
+has square zero and is generated by one element $f$.
+If $A$ is a graded divided power polynomial algebra over $R$
+with finitely many variables in each degree,
+then we obtain a derivation
+$\theta : A/IA \to A/IA$ where $I$ is the annihilator
+of $f$ in $R$.
+\end{lemma}
+
+\begin{proof}
+Since $A$ is a divided power polynomial algebra, we can find a divided
+power polynomial algebra $A'$ over $R'$ such that $A = A' \otimes_{R'} R$.
+Moreover, we can lift $\text{d}$ to an $R'$-linear
+operator $\text{d}$ on $A'$ such that
+\begin{enumerate}
+\item $\text{d}(xy) = \text{d}(x)y + (-1)^{\deg(x)}x \text{d}(y)$
+for $x, y \in A'$ homogeneous, and
+\item $\text{d}(\gamma_n(x)) = \text{d}(x) \gamma_{n - 1}(x)$ for
+$x \in A'_{even, +}$.
+\end{enumerate}
+We omit the details (hint: proceed one variable at a time).
+However, it may not be the case that $\text{d}^2$
+is zero on $A'$. It is clear that $\text{d}^2$ maps $A'$ into
+$fA' \cong A/IA$. Hence $\text{d}^2$ annihilates $fA'$ and factors
+as a map $A \to A/IA$. Since $\text{d}^2$ is $R$-linear we obtain
+our map $\theta : A/IA \to A/IA$. The verification of the properties
+of a derivation is immediate.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-theta}
+Assumption and notation as in Lemma \ref{lemma-get-derivation}.
+Suppose $S = H_0(A)$ is isomorphic to
+$R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$
+for some $n$, $m$, and $f_j \in R[x_1, \ldots, x_n]$.
+Moreover, suppose given a relation
+$$
+\sum r_j f_j = 0
+$$
+with $r_j \in R[x_1, \ldots, x_n]$.
+Choose $r'_j, f'_j \in R'[x_1, \ldots, x_n]$ lifting $r_j, f_j$.
+Write $\sum r'_j f'_j = gf$ for some $g \in R/I[x_1, \ldots, x_n]$.
+If $H_1(A) = 0$ and all the coefficients of each $r_j$ are in $I$, then
+there exists an element $\xi \in H_2(A/IA)$ such that
+$\theta(\xi) = g$ in $S/IS$.
+\end{lemma}
+
+\begin{proof}
+Let $A(0) \subset A(1) \subset A(2) \subset \ldots$ be the filtration
+of $A$ such that $A(m)$ is gotten from $A(m - 1)$ by adjoining divided
+power variables of degree $m$. Then $A(0)$ is a polynomial algebra
+over $R$ equipped with an $R$-algebra surjection $A(0) \to S$.
+Thus we can choose a map
+$$
+\varphi : R[x_1, \ldots, x_n] \to A(0)
+$$
+lifting the augmentations to $S$.
+Next, $A(1) = A(0)\langle T_1, \ldots, T_t \rangle$ for some divided
+power variables $T_i$ of degree $1$. Since $H_0(A) = S$ we
+can pick $\xi_j \in \sum A(0)T_i$ with $\text{d}(\xi_j) = \varphi(f_j)$.
+Then
+$$
+\text{d}\left(\sum \varphi(r_j) \xi_j\right) =
+\sum \varphi(r_j) \varphi(f_j) = \sum \varphi(r_jf_j) = 0
+$$
+Since $H_1(A) = 0$ we can pick $\xi \in A_2$ with
+$\text{d}(\xi) = \sum \varphi(r_j) \xi_j$.
+If the coefficients of $r_j$ are in $I$, then the same
+is true for $\varphi(r_j)$. In this case
+$\text{d}(\xi)$ dies in $A_1/IA_1$ and
+hence $\xi$ defines a class in $H_2(A/IA)$.
+
+\medskip\noindent
+The construction of $\theta$ in the proof of Lemma \ref{lemma-get-derivation}
+proceeds by successively lifting $A(i)$ to $A'(i)$ and lifting the
+differential $\text{d}$. We lift $\varphi$
+to $\varphi' : R'[x_1, \ldots, x_n] \to A'(0)$.
+Next, we have $A'(1) = A'(0)\langle T_1, \ldots, T_t\rangle$.
+Moreover, we can lift $\xi_j$ to $\xi'_j \in \sum A'(0)T_i$.
+Then $\text{d}(\xi'_j) = \varphi'(f'_j) + f a_j$ for some
+$a_j \in A'(0)$.
+Consider a lift $\xi' \in A'_2$ of $\xi$.
+Then we know that
+$$
+\text{d}(\xi') = \sum \varphi'(r'_j)\xi'_j + \sum fb_iT_i
+$$
+for some $b_i \in A(0)$. Applying $\text{d}$ again we find
+$$
+\theta(\xi) = \sum \varphi'(r'_j)\varphi'(f'_j) +
+\sum f \varphi'(r'_j) a_j + \sum fb_i \text{d}(T_i)
+$$
+The first term gives us what we want. The second term is zero
+because the coefficients of $r_j$ are in $I$ and hence are
+annihilated by $f$. The third term maps to zero in $H_0$
+because $\text{d}(T_i)$ maps to zero.
+\end{proof}
+
+\noindent
+The method of proof of the following lemma is apparently due to Gulliksen.
+
+\begin{lemma}
+\label{lemma-not-finite-pd}
+Let $R' \to R$ be a surjection of Noetherian rings whose kernel has square
+zero and is generated by one element $f$. Let
+$S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$.
+Let $\sum r_j f_j = 0$ be a relation in $R[x_1, \ldots, x_n]$.
+Assume that
+\begin{enumerate}
+\item each $r_j$ has coefficients in the annihilator $I$ of $f$ in $R$,
+\item for some lifts $r'_j, f'_j \in R'[x_1, \ldots, x_n]$ we have
+$\sum r'_j f'_j = gf$ where $g$ is not nilpotent in $S/IS$.
+\end{enumerate}
+Then $S$ does not have finite tor dimension over $R$ (i.e., $S$ is not
+a perfect $R$-algebra).
+\end{lemma}
+
+\begin{proof}
+Choose a Tate resolution $R \to A \to S$ as in
+Lemma \ref{lemma-tate-resolution}.
+Let $\xi \in H_2(A/IA)$ and $\theta : A/IA \to A/IA$ be the element
+and derivation found in Lemmas \ref{lemma-get-derivation} and
+\ref{lemma-compute-theta}.
+Observe that
+$$
+\theta^n(\gamma_n(\xi)) = g^n
+$$
+in $H_0(A/IA) = S/IS$.
Hence if $g$ is not nilpotent in $S/IS$, then $\gamma_n(\xi)$ is nonzero in
$H_{2n}(A/IA)$ for all $n > 0$. Since
+$H_{2n}(A/IA) = \text{Tor}^R_{2n}(S, R/I)$ we conclude.
+\end{proof}
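
\medskip\noindent
To illustrate Lemma \ref{lemma-not-finite-pd} in the simplest case, here
is a standard example (included for illustration only). Take
$R' = k[x]/(x^3) \to R = k[x]/(x^2)$ for a field $k$, so that the kernel
is generated by $f = x^2$ and has square zero. Take $n = 0$ and
$S = R/(f_1)$ with $f_1 = x$, so $S = k$, and take the relation
$r_1 f_1 = x \cdot x = 0$ in $R$. Then $I = \text{Ann}_R(f) = (x)$
contains $r_1 = x$, and for the lifts $r'_1 = f'_1 = x$ in $R'$ we have
$r'_1 f'_1 = x^2 = 1 \cdot f$, so $g = 1$, which is not nilpotent in
$S/IS = k$. The lemma thus recovers the classical fact that $k = R/(x)$
does not have finite tor dimension over $R = k[x]/(x^2)$.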
+
+\noindent
+The following result can be found in \cite{Rodicio}.
+
+\begin{lemma}
+\label{lemma-injective}
+Let $(A, \mathfrak m)$ be a Noetherian local ring. Let
+$I \subset J \subset A$ be proper ideals. If $A/J$ has finite
+tor dimension over $A/I$, then $I/\mathfrak m I \to J/\mathfrak m J$
+is injective.
+\end{lemma}
+
+\begin{proof}
+Let $f \in I$ be an element mapping to a nonzero element of $I/\mathfrak m I$
+which is mapped to zero in $J/\mathfrak mJ$. We can choose an ideal $I'$
+with $\mathfrak mI \subset I' \subset I$ such that $I/I'$ is generated by
+the image of $f$. Set $R = A/I$ and $R' = A/I'$. Let $J = (a_1, \ldots, a_m)$
+for some $a_j \in A$. Then $f = \sum b_j a_j$ for some $b_j \in \mathfrak m$.
Let $r_j, f_j \in R$ resp.\ $r'_j, f'_j \in R'$ be the images of $b_j, a_j$.
Then we see that we are
+in the situation of Lemma \ref{lemma-not-finite-pd}
+(with the ideal $I$ of that lemma equal to $\mathfrak m_R$)
+and the lemma is proved.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-regular-sequence}
+Let $(A, \mathfrak m)$ be a Noetherian local ring. Let
+$I \subset J \subset A$ be proper ideals. Assume
+\begin{enumerate}
+\item $A/J$ has finite tor dimension over $A/I$, and
+\item $J$ is generated by a regular sequence.
+\end{enumerate}
+Then $I$ is generated by a regular sequence and $J/I$
+is generated by a regular sequence.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-injective} we see that
+$I/\mathfrak m I \to J/\mathfrak m J$
+is injective. Thus we can find $s \leq r$ and a minimal system of
+generators $f_1, \ldots, f_r$ of $J$ such that $f_1, \ldots, f_s$ are in $I$
+and form a minimal system of generators of $I$.
+The lemma follows as any minimal system of generators of $J$
+is a regular sequence by
+More on Algebra, Lemmas
+\ref{more-algebra-lemma-independence-of-generators} and
+\ref{more-algebra-lemma-noetherian-finite-all-equivalent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-map-ci}
+Let $R \to S$ be a local ring map of Noetherian local rings.
+Let $I \subset R$ and $J \subset S$ be ideals with
+$IS \subset J$. If $R \to S$ is flat and $S/\mathfrak m_RS$ is
+regular, then the following are equivalent
+\begin{enumerate}
+\item $J$ is generated by a regular sequence and
+$S/J$ has finite tor dimension as a module over $R/I$,
+\item $J$ is generated by a regular sequence and
+$\text{Tor}^{R/I}_p(S/J, R/\mathfrak m_R)$ is nonzero
+for only finitely many $p$,
+\item $I$ is generated by a regular sequence
+and $J/IS$ is generated by a regular sequence in $S/IS$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If (3) holds, then $J$ is generated by a regular sequence, see for example
+More on Algebra, Lemmas
+\ref{more-algebra-lemma-join-koszul-regular-sequences} and
+\ref{more-algebra-lemma-noetherian-finite-all-equivalent}.
Moreover, if (3) holds, then $S/J = (S/IS)/(J/IS)$
+has finite projective dimension over $S/IS$ because the Koszul
+complex will be a finite free resolution of $S/J$ over $S/IS$.
+Since $R/I \to S/IS$ is flat, it then follows that $S/J$ has finite
+tor dimension over $R/I$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-flat-push-tor-amplitude}.
+Thus (3) implies (1).
+
+\medskip\noindent
+The implication (1) $\Rightarrow$ (2) is trivial.
+Assume (2). By
+More on Algebra, Lemma \ref{more-algebra-lemma-perfect-over-regular-local-ring}
+we find that $S/J$ has finite tor dimension over $S/IS$.
+Thus we can apply Lemma \ref{lemma-regular-sequence}
+to conclude that $IS$ and $J/IS$ are generated by regular sequences.
+Let $f_1, \ldots, f_r \in I$ be a minimal system of generators of $I$.
+Since $R \to S$ is flat, we see that $f_1, \ldots, f_r$ form a minimal
+system of generators for $IS$ in $S$. Thus $f_1, \ldots, f_r \in R$
+is a sequence of elements whose images in $S$ form a regular sequence
+by More on Algebra, Lemmas
+\ref{more-algebra-lemma-independence-of-generators} and
+\ref{more-algebra-lemma-noetherian-finite-all-equivalent}.
+Thus $f_1, \ldots, f_r$ is a regular sequence in $R$ by
+Algebra, Lemma \ref{algebra-lemma-flat-increases-depth}.
+\end{proof}
+
+
+
+
+
+\section{Local complete intersection rings}
+\label{section-lci}
+
+\noindent
+Let $(A, \mathfrak m)$ be a Noetherian complete local ring.
+By the Cohen structure theorem (see
+Algebra, Theorem \ref{algebra-theorem-cohen-structure-theorem})
+we can write $A$ as the quotient of a regular Noetherian
+complete local ring $R$. Let us say that $A$ is a
+{\it complete intersection}
+if there exists some surjection $R \to A$
+with $R$ a regular local ring such that the kernel
+is generated by a regular sequence.
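
\medskip\noindent
For example (standard examples, included for illustration), a regular
Noetherian complete local ring $R$ is a complete intersection, as the
kernel of the identity surjection $R \to R$ is generated by the empty
regular sequence, while $k[[x, y]]/(xy)$, for $k$ a field, is a
complete intersection which is not regular: $xy$ is a nonzerodivisor,
hence a regular sequence of length one, in the regular local ring
$k[[x, y]]$.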
+The following lemma shows this notion is independent of
+the choice of the surjection.
+
+\begin{lemma}
+\label{lemma-ci-well-defined}
+Let $(A, \mathfrak m)$ be a Noetherian complete local ring.
+The following are equivalent
+\begin{enumerate}
+\item for every surjection of local rings $R \to A$ with $R$
+a regular local ring, the kernel of $R \to A$ is generated
+by a regular sequence, and
+\item for some surjection of local rings $R \to A$ with $R$
+a regular local ring, the kernel of $R \to A$ is generated
+by a regular sequence.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $k$ be the residue field of $A$. If the characteristic of
+$k$ is $p > 0$, then we denote $\Lambda$ a Cohen ring
+(Algebra, Definition \ref{algebra-definition-cohen-ring})
+with residue field $k$ (Algebra, Lemma \ref{algebra-lemma-cohen-rings-exist}).
+If the characteristic of $k$ is $0$ we set $\Lambda = k$.
+Recall that $\Lambda[[x_1, \ldots, x_n]]$ for any $n$
+is formally smooth over $\mathbf{Z}$, resp.\ $\mathbf{Q}$
+in the $\mathfrak m$-adic topology, see
+More on Algebra, Lemma
+\ref{more-algebra-lemma-power-series-ring-over-Cohen-fs}.
+Fix a surjection $\Lambda[[x_1, \ldots, x_n]] \to A$ as in
+the Cohen structure theorem
+(Algebra, Theorem \ref{algebra-theorem-cohen-structure-theorem}).
+
+\medskip\noindent
+Let $R \to A$ be a surjection from a regular local ring $R$.
+Let $f_1, \ldots, f_r$ be a minimal sequence of generators
+of $\Ker(R \to A)$. We will use without further mention
+that an ideal in a Noetherian local ring is generated by a regular
+sequence if and only if any minimal set of generators is a
+regular sequence. Observe that $f_1, \ldots, f_r$
+is a regular sequence in $R$ if and only if $f_1, \ldots, f_r$
+is a regular sequence in the completion $R^\wedge$ by
+Algebra, Lemmas \ref{algebra-lemma-flat-increases-depth} and
+\ref{algebra-lemma-completion-flat}.
+Moreover, we have
+$$
+R^\wedge/(f_1, \ldots, f_r)R^\wedge =
(R/(f_1, \ldots, f_r))^\wedge = A^\wedge = A
+$$
+because $A$ is $\mathfrak m_A$-adically complete (first equality by
+Algebra, Lemma \ref{algebra-lemma-completion-tensor}). Finally,
+the ring $R^\wedge$ is regular since $R$ is regular
+(More on Algebra, Lemma \ref{more-algebra-lemma-completion-regular}).
+Hence we may assume $R$ is complete.
+
+\medskip\noindent
+If $R$ is complete we can choose a map
+$\Lambda[[x_1, \ldots, x_n]] \to R$ lifting the given map
+$\Lambda[[x_1, \ldots, x_n]] \to A$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-lift-continuous}.
+By adding some more variables $y_1, \ldots, y_m$ mapping
+to generators of the kernel of $R \to A$ we may assume that
+$\Lambda[[x_1, \ldots, x_n, y_1, \ldots, y_m]] \to R$ is surjective
+(some details omitted). Then we can consider the commutative diagram
+$$
+\xymatrix{
+\Lambda[[x_1, \ldots, x_n, y_1, \ldots, y_m]] \ar[r] \ar[d] & R \ar[d] \\
+\Lambda[[x_1, \ldots, x_n]] \ar[r] & A
+}
+$$
+By Algebra, Lemma \ref{algebra-lemma-ci-well-defined} we see that
+the condition for $R \to A$ is equivalent to the condition for
+the fixed chosen map
+$\Lambda[[x_1, \ldots, x_n]] \to A$. This finishes the proof of the lemma.
+\end{proof}
+
+\noindent
+The following two lemmas are sanity checks on the definition given above.
+
+\begin{lemma}
+\label{lemma-quotient-regular-ring-by-regular-sequence}
+Let $R$ be a regular ring. Let $\mathfrak p \subset R$ be a prime.
+Let $f_1, \ldots, f_r \in \mathfrak p$ be a regular sequence.
+Then the completion of
+$$
+A = (R/(f_1, \ldots, f_r))_\mathfrak p =
+R_\mathfrak p/(f_1, \ldots, f_r)R_\mathfrak p
+$$
+is a complete intersection in the sense defined above.
+\end{lemma}
+
+\begin{proof}
+The completion of $A$ is equal to
+$A^\wedge = R_\mathfrak p^\wedge/(f_1, \ldots, f_r)R_\mathfrak p^\wedge$
+because completion for finite modules over the Noetherian ring
+$R_\mathfrak p$ is exact
+(Algebra, Lemma \ref{algebra-lemma-completion-tensor}).
The image of the sequence $f_1, \ldots, f_r$ in $R_\mathfrak p$
is a regular sequence, and hence so is its image in $R_\mathfrak p^\wedge$ by
Algebra, Lemmas \ref{algebra-lemma-completion-flat} and
\ref{algebra-lemma-flat-increases-depth}.
+Moreover, $R_\mathfrak p^\wedge$ is a regular local ring by
+More on Algebra, Lemma \ref{more-algebra-lemma-completion-regular}.
+Hence the result holds by our definition of complete
+intersection for complete local rings.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of Algebra, Lemma \ref{algebra-lemma-lci}.
+
+\begin{lemma}
+\label{lemma-quotient-regular-ring}
+Let $R$ be a regular ring. Let $\mathfrak p \subset R$ be a prime.
+Let $I \subset \mathfrak p$ be an ideal.
+Set $A = (R/I)_\mathfrak p = R_\mathfrak p/I_\mathfrak p$.
+The following are equivalent
+\begin{enumerate}
+\item the completion of $A$
+is a complete intersection in the sense above,
+\item $I_\mathfrak p \subset R_\mathfrak p$ is generated
+by a regular sequence,
+\item the module $(I/I^2)_\mathfrak p$ can be generated by
+$\dim(R_\mathfrak p) - \dim(A)$ elements,
+\item add more here.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We may and do replace $R$ by its localization at $\mathfrak p$.
+Then $\mathfrak p = \mathfrak m$ is the maximal ideal of $R$
+and $A = R/I$. Let $f_1, \ldots, f_r \in I$ be a minimal sequence
+of generators. The completion of $A$ is equal to
+$A^\wedge = R^\wedge/(f_1, \ldots, f_r)R^\wedge$
+because completion for finite modules over the Noetherian ring
$R$ is exact
+(Algebra, Lemma \ref{algebra-lemma-completion-tensor}).
+
+\medskip\noindent
If (1) holds, then the kernel $IR^\wedge$ of $R^\wedge \to A^\wedge$
is generated by a regular sequence by Lemma \ref{lemma-ci-well-defined},
hence the image of the minimal system of generators $f_1, \ldots, f_r$
in $R^\wedge$ is a regular sequence. Hence it is a regular sequence
in $R$ by Algebra, Lemmas \ref{algebra-lemma-completion-flat} and
\ref{algebra-lemma-flat-increases-depth}. Thus (1) implies (2).
+
+\medskip\noindent
Assume (3) holds. Set $c = \dim(R) - \dim(A)$ and let $f_1, \ldots, f_c \in I$
map to generators of $I/I^2$. By Nakayama's lemma
(Algebra, Lemma \ref{algebra-lemma-NAK})
we see that $I = (f_1, \ldots, f_c)$. Since $R$ is regular and hence
Cohen-Macaulay (Algebra, Proposition \ref{algebra-proposition-CM-module}),
and since $\dim(R/(f_1, \ldots, f_c)) = \dim(R) - c$,
we see that $f_1, \ldots, f_c$ is a regular sequence.
Thus (3) implies (2). Conversely, if $I$ is generated by a regular
sequence, then this sequence has length $c = \dim(R) - \dim(A)$
and its classes generate $I/I^2$, so (2) implies (3).
Finally, (2) implies (1) by
Lemma \ref{lemma-quotient-regular-ring-by-regular-sequence}.
+\end{proof}
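
\medskip\noindent
Here is a standard example, included for illustration, where criterion
(3) of Lemma \ref{lemma-quotient-regular-ring} fails. Take
$R = k[x, y, z]$ for a field $k$, $\mathfrak p = (x, y, z)$, and
$I = (xy, xz, yz)$, so that $A$ is the local ring at the origin of the
union of the three coordinate axes. Then $\dim(R_\mathfrak p) = 3$ and
$\dim(A) = 1$, but since $I^2 \subset \mathfrak p I$ and the three
quadrics $xy, xz, yz$ are linearly independent modulo $\mathfrak p I$,
the module $(I/I^2)_\mathfrak p$ needs $3$ generators, while
$\dim(R_\mathfrak p) - \dim(A) = 2$. Hence the completion of $A$ is not
a complete intersection.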
+
+\noindent
+The following result is due to Avramov, see \cite{Avramov}.
+
+\begin{proposition}
+\label{proposition-avramov}
+Let $A \to B$ be a flat local homomorphism of Noetherian local rings.
+Then the following are equivalent
+\begin{enumerate}
+\item $B^\wedge$ is a complete intersection,
+\item $A^\wedge$ and $(B/\mathfrak m_A B)^\wedge$ are complete intersections.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Consider the diagram
+$$
+\xymatrix{
+B \ar[r] & B^\wedge \\
+A \ar[u] \ar[r] & A^\wedge \ar[u]
+}
+$$
+Since the horizontal maps are faithfully flat
+(Algebra, Lemma \ref{algebra-lemma-completion-faithfully-flat})
+we conclude that the right vertical arrow is flat
+(for example by Algebra, Lemma
+\ref{algebra-lemma-criterion-flatness-fibre-Noetherian}).
+Moreover, we have
+$(B/\mathfrak m_A B)^\wedge = B^\wedge/\mathfrak m_{A^\wedge} B^\wedge$
+by Algebra, Lemma \ref{algebra-lemma-completion-tensor}.
+Thus we may assume $A$ and $B$ are complete local Noetherian rings.
+
+\medskip\noindent
+Assume $A$ and $B$ are complete local Noetherian rings.
+Choose a diagram
+$$
+\xymatrix{
+S \ar[r] & B \\
+R \ar[u] \ar[r] & A \ar[u]
+}
+$$
+as in More on Algebra, Lemma
+\ref{more-algebra-lemma-embed-map-Noetherian-complete-local-rings}.
+Let $I = \Ker(R \to A)$ and $J = \Ker(S \to B)$.
+Note that since $R/I = A \to B = S/J$ is flat the map
$J/IS \otimes_R R/\mathfrak m_R \to J/(J \cap \mathfrak m_R S)$
+is an isomorphism. Hence a minimal system of generators of $J/IS$
+maps to a minimal system of generators of
+$\Ker(S/\mathfrak m_R S \to B/\mathfrak m_A B)$.
+Finally, $S/\mathfrak m_R S$ is a regular local ring.
+
+\medskip\noindent
+Assume (1) holds, i.e., $J$ is generated by a regular sequence.
+Since $A = R/I \to B = S/J$ is flat we see
+Lemma \ref{lemma-perfect-map-ci} applies and we deduce
+that $I$ and $J/IS$ are generated by regular sequences.
+We have $\dim(B) = \dim(A) + \dim(B/\mathfrak m_A B)$ and
+$\dim(S/IS) = \dim(A) + \dim(S/\mathfrak m_R S)$
+(Algebra, Lemma \ref{algebra-lemma-dimension-base-fibre-equals-total}).
+Thus $J/IS$ is generated by
+$$
\dim(S/IS) - \dim(S/J) = \dim(S/\mathfrak m_R S) - \dim(B/\mathfrak m_A B)
+$$
+elements (Algebra, Lemma \ref{algebra-lemma-one-equation}).
+It follows that $\Ker(S/\mathfrak m_R S \to B/\mathfrak m_A B)$
+is generated by the same number of elements (see above).
+Hence $\Ker(S/\mathfrak m_R S \to B/\mathfrak m_A B)$
+is generated by a regular sequence, see for example
+Lemma \ref{lemma-quotient-regular-ring}.
+In this way we see that (2) holds.
+
+\medskip\noindent
If (2) holds, then $I$ and $J/(J \cap \mathfrak m_R S)$
+are generated by regular sequences. Lifting these generators
+(see above), using flatness of $R/I \to S/IS$,
+and using Grothendieck's lemma
+(Algebra, Lemma \ref{algebra-lemma-grothendieck-regular-sequence})
+we find that $J/IS$ is generated by a regular sequence in $S/IS$.
+Thus Lemma \ref{lemma-perfect-map-ci} tells us that $J$
+is generated by a regular sequence, whence (1) holds.
+\end{proof}
+
+\begin{definition}
+\label{definition-lci}
+Let $A$ be a Noetherian ring.
+\begin{enumerate}
+\item If $A$ is local, then we say $A$ is a {\it complete intersection}
+if its completion is a complete intersection in the sense above.
+\item In general we say $A$ is a {\it local complete intersection}
+if all of its local rings are complete intersections.
+\end{enumerate}
+\end{definition}
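
\medskip\noindent
For example (standard examples, included for illustration), every
regular Noetherian ring is a local complete intersection: the completion
of each of its local rings is a regular local ring, hence a complete
intersection by Lemma \ref{lemma-quotient-regular-ring-by-regular-sequence}
applied with the empty regular sequence. On the other hand, the local
complete intersection $k[x, y]/(xy)$ over a field $k$ is not regular
at the origin.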
+
+\noindent
+We will check below that this does not conflict with the terminology
+introduced in
+Algebra, Definitions \ref{algebra-definition-lci-field} and
+\ref{algebra-definition-lci-local-ring}.
But first, we show this ``makes sense'' by showing
that if a Noetherian local ring $A$ is a complete intersection,
then $A$ is a local complete intersection,
i.e., all of its local rings are complete intersections.
+
+\begin{lemma}
+\label{lemma-ci-good}
+Let $(A, \mathfrak m)$ be a Noetherian local ring. Let
+$\mathfrak p \subset A$ be a prime ideal. If $A$ is a complete
+intersection, then $A_\mathfrak p$ is a complete intersection too.
+\end{lemma}
+
+\begin{proof}
+Choose a prime $\mathfrak q$ of $A^\wedge$ lying over $\mathfrak p$
+(this is possible as $A \to A^\wedge$ is faithfully flat by
+Algebra, Lemma \ref{algebra-lemma-completion-faithfully-flat}).
+Then $A_\mathfrak p \to (A^\wedge)_\mathfrak q$ is a flat local
+ring homomorphism. Thus by Proposition \ref{proposition-avramov}
we see that $A_\mathfrak p$ is a complete intersection if
+$(A^\wedge)_\mathfrak q$ is a complete intersection. Thus it suffices
+to prove the lemma in case $A$ is complete (this is the key step
+of the proof).
+
+\medskip\noindent
+Assume $A$ is complete. By definition we may write
+$A = R/(f_1, \ldots, f_r)$ for some regular sequence
+$f_1, \ldots, f_r$ in a regular local ring $R$.
+Let $\mathfrak q \subset R$ be the prime corresponding to $\mathfrak p$.
+Observe that $f_1, \ldots, f_r \in \mathfrak q$ and that
+$A_\mathfrak p = R_\mathfrak q/(f_1, \ldots, f_r)R_\mathfrak q$.
+Hence $A_\mathfrak p$ is a complete intersection by
+Lemma \ref{lemma-quotient-regular-ring-by-regular-sequence}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-lci-at-maximal-ideals}
+Let $A$ be a Noetherian ring. Then $A$ is a local complete intersection
+if and only if $A_\mathfrak m$ is a complete intersection for every
+maximal ideal $\mathfrak m$ of $A$.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from Lemma \ref{lemma-ci-good} and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-lci-agrees}
+Let $S$ be a finite type algebra over a field $k$.
+\begin{enumerate}
+\item for a prime $\mathfrak q \subset S$ the local ring $S_\mathfrak q$
+is a complete intersection in the sense of
+Algebra, Definition \ref{algebra-definition-lci-local-ring}
+if and only if $S_\mathfrak q$ is a complete
+intersection in the sense of Definition \ref{definition-lci}, and
+\item $S$ is a local complete intersection in the sense of
+Algebra, Definition \ref{algebra-definition-lci-field}
+if and only if $S$ is a local complete
+intersection in the sense of Definition \ref{definition-lci}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $k[x_1, \ldots, x_n] \to S$ be a surjection.
+Let $\mathfrak p \subset k[x_1, \ldots, x_n]$ be the prime ideal
+corresponding to $\mathfrak q$.
+Let $I \subset k[x_1, \ldots, x_n]$ be the kernel of our surjection.
+Note that $k[x_1, \ldots, x_n]_\mathfrak p \to S_\mathfrak q$
+is surjective with kernel $I_\mathfrak p$. Observe that
+$k[x_1, \ldots, x_n]$ is a regular ring by
+Algebra, Proposition \ref{algebra-proposition-finite-gl-dim-polynomial-ring}.
+Hence the equivalence of the two notions in (1) follows by
+combining
+Lemma \ref{lemma-quotient-regular-ring}
+with Algebra, Lemma \ref{algebra-lemma-lci-local}.
+
+\medskip\noindent
+Having proved (1) the equivalence in (2) follows from the
+definition and Algebra, Lemma \ref{algebra-lemma-lci-global}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-avramov}
+Let $A \to B$ be a flat local homomorphism of Noetherian local rings.
+Then the following are equivalent
+\begin{enumerate}
+\item $B$ is a complete intersection,
+\item $A$ and $B/\mathfrak m_A B$ are complete intersections.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Now that the definition makes sense, this is a trivial reformulation
+of the (nontrivial) Proposition \ref{proposition-avramov}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Local complete intersection maps}
+\label{section-lci-homomorphisms}
+
+\noindent
+Let $A \to B$ be a local homomorphism of Noetherian complete local rings.
+A consequence of the Cohen structure theorem is that we can find a
+commutative diagram
+$$
+\xymatrix{
+S \ar[r] & B \\
+& A \ar[lu] \ar[u]
+}
+$$
+of Noetherian complete local rings with
+$S \to B$ surjective, $A \to S$ flat, and
+$S/\mathfrak m_A S$ a regular local ring. This follows from
+More on Algebra, Lemma
+\ref{more-algebra-lemma-embed-map-Noetherian-complete-local-rings}.
+Let us (temporarily) say $A \to S \to B$ is a {\it good factorization}
+of $A \to B$ if $S$ is a Noetherian local ring,
+$A \to S \to B$ are local ring maps,
+$S \to B$ surjective, $A \to S$ flat, and $S/\mathfrak m_AS$ regular.
+Let us say that $A \to B$ is a
+{\it complete intersection homomorphism}
+if there exists some good factorization $A \to S \to B$
+such that the kernel of $S \to B$ is generated by a regular sequence.
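
\medskip\noindent
For example (a simple illustrative case, not taken from the text above),
for any Noetherian complete local ring $A$ and any $n \geq 1$ the map
$A \to B = A[[x]]/(x^n)$ is a complete intersection homomorphism:
the factorization $A \to S = A[[x]] \to B$ is good, because
$A \to A[[x]]$ is flat and $S/\mathfrak m_A S = (A/\mathfrak m_A)[[x]]$
is a regular local ring, and the kernel $(x^n)$ of $S \to B$ is generated
by the single nonzerodivisor $x^n$.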
+The following lemma shows this notion is independent of
+the choice of the diagram.
+
+\begin{lemma}
+\label{lemma-ci-map-well-defined}
+Let $A \to B$ be a local homomorphism of Noetherian complete local rings.
+The following are equivalent
+\begin{enumerate}
+\item for some good factorization $A \to S \to B$ the kernel of
+$S \to B$ is generated by a regular sequence, and
+\item for every good factorization $A \to S \to B$ the kernel of
+$S \to B$ is generated by a regular sequence.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $A \to S \to B$ be a good factorization.
+As $B$ is complete we obtain a factorization
+$A \to S^\wedge \to B$ where $S^\wedge$ is the completion of $S$.
+Note that this is also a good factorization:
+The ring map $S \to S^\wedge$ is flat
+(Algebra, Lemma \ref{algebra-lemma-completion-flat}),
+hence $A \to S^\wedge$ is flat.
+The ring $S^\wedge/\mathfrak m_A S^\wedge = (S/\mathfrak m_A S)^\wedge$
+is regular since $S/\mathfrak m_A S$ is regular
+(More on Algebra, Lemma \ref{more-algebra-lemma-completion-regular}).
+Let $f_1, \ldots, f_r$ be a minimal sequence of generators
+of $\Ker(S \to B)$. We will use without further mention
+that an ideal in a Noetherian local ring is generated by a regular
+sequence if and only if any minimal set of generators is a
+regular sequence. Observe that $f_1, \ldots, f_r$
+is a regular sequence in $S$ if and only if $f_1, \ldots, f_r$
+is a regular sequence in the completion $S^\wedge$ by
+Algebra, Lemma \ref{algebra-lemma-flat-increases-depth}.
+Moreover, we have
+$$
S^\wedge/(f_1, \ldots, f_r)S^\wedge =
(S/(f_1, \ldots, f_r))^\wedge = B^\wedge = B
+$$
+because $B$ is $\mathfrak m_B$-adically complete (first equality by
+Algebra, Lemma \ref{algebra-lemma-completion-tensor}).
+Thus the kernel of $S \to B$ is generated by a regular sequence
+if and only if the kernel of $S^\wedge \to B$ is generated by a
+regular sequence.
+Hence it suffices to consider good factorizations where $S$ is complete.
+
+\medskip\noindent
+Assume we have two factorizations $A \to S \to B$ and
+$A \to S' \to B$ with $S$ and $S'$ complete. By
+More on Algebra, Lemma \ref{more-algebra-lemma-dominate-two-surjections}
+the ring $S \times_B S'$ is a Noetherian complete local ring.
+Hence, using More on Algebra, Lemma
+\ref{more-algebra-lemma-embed-map-Noetherian-complete-local-rings}
+we can choose a good factorization $A \to S'' \to S \times_B S'$
+with $S''$ complete. Thus it suffices to show:
+If $A \to S' \to S \to B$ are comparable good factorizations,
+then $\Ker(S \to B)$ is generated by a regular sequence
+if and only if $\Ker(S' \to B)$ is generated by a regular sequence.
+
+\medskip\noindent
Let $A \to S' \to S \to B$ be comparable good factorizations.
First, since $S'/\mathfrak m_A S' \to S/\mathfrak m_A S$ is
a surjection of regular local rings, the kernel is generated
by a regular sequence
$\overline{x}_1, \ldots, \overline{x}_c \in
\mathfrak m_{S'}/\mathfrak m_A S'$
which can be extended to a regular system of parameters for
the regular local ring $S'/\mathfrak m_A S'$, see
Algebra, Lemma \ref{algebra-lemma-regular-quotient-regular}.
Set $I = \Ker(S' \to S)$. By flatness of $S$ over $A$ we have
$$
I/\mathfrak m_A I =
\Ker(S'/\mathfrak m_A S' \to S/\mathfrak m_A S) =
(\overline{x}_1, \ldots, \overline{x}_c).
$$
Choose lifts $x_1, \ldots, x_c \in I$. These lifts form a regular sequence
generating $I$ as $S'$ is flat over $A$, see
Algebra, Lemma \ref{algebra-lemma-grothendieck-regular-sequence}.
+
+\medskip\noindent
+We conclude that if also $\Ker(S \to B)$ is generated by a
+regular sequence, then so is $\Ker(S' \to B)$, see
+More on Algebra, Lemmas
+\ref{more-algebra-lemma-join-koszul-regular-sequences} and
+\ref{more-algebra-lemma-noetherian-finite-all-equivalent}.
+
+\medskip\noindent
+Conversely, assume that $J = \Ker(S' \to B)$ is generated
+by a regular sequence. Because the generators $x_1, \ldots, x_c$
+of $I$ map to linearly independent elements of
+$\mathfrak m_{S'}/\mathfrak m_{S'}^2$ we see that
+$I/\mathfrak m_{S'}I \to J/\mathfrak m_{S'}J$ is injective.
+Hence there exists a minimal system of generators
+$x_1, \ldots, x_c, y_1, \ldots, y_d$ for $J$.
+Then $x_1, \ldots, x_c, y_1, \ldots, y_d$ is a regular sequence
+and it follows that the images of $y_1, \ldots, y_d$ in $S$
+form a regular sequence generating $\Ker(S \to B)$.
+This finishes the proof of the lemma.
+\end{proof}
+
+\noindent
In the following proposition observe that the condition on the vanishing
of the Tor's holds in particular if $B$ has finite tor dimension over $A$,
and thus in particular if $B$ is flat over $A$.
+
+\begin{proposition}
+\label{proposition-avramov-map}
+Let $A \to B$ be a local homomorphism of Noetherian local rings.
+Then the following are equivalent
+\begin{enumerate}
+\item $B$ is a complete intersection and
+$\text{Tor}^A_p(B, A/\mathfrak m_A)$ is nonzero for only finitely many $p$,
+\item $A$ is a complete intersection and
+$A^\wedge \to B^\wedge$ is a complete intersection homomorphism
+in the sense defined above.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Let $F_\bullet \to A/\mathfrak m_A$ be a resolution by finite
+free $A$-modules. Observe that
+$\text{Tor}^A_p(B, A/\mathfrak m_A)$
+is the $p$th homology of the complex $F_\bullet \otimes_A B$.
+Let $F_\bullet^\wedge = F_\bullet \otimes_A A^\wedge$ be the completion.
+Then $F_\bullet^\wedge$ is a resolution of $A^\wedge/\mathfrak m_{A^\wedge}$
+by finite free $A^\wedge$-modules (as $A \to A^\wedge$ is flat and completion
+on finite modules is exact, see
+Algebra, Lemmas \ref{algebra-lemma-completion-tensor} and
+\ref{algebra-lemma-completion-flat}).
+It follows that
+$$
+F_\bullet^\wedge \otimes_{A^\wedge} B^\wedge =
+F_\bullet \otimes_A B \otimes_B B^\wedge
+$$
+By flatness of $B \to B^\wedge$ we conclude that
+$$
+\text{Tor}^{A^\wedge}_p(B^\wedge, A^\wedge/\mathfrak m_{A^\wedge}) =
+\text{Tor}^A_p(B, A/\mathfrak m_A) \otimes_B B^\wedge
+$$
+In this way we see that the condition in (1) on the local ring map $A \to B$
+is equivalent to the same condition for the local ring map
+$A^\wedge \to B^\wedge$.
+Thus we may assume $A$ and $B$ are complete local Noetherian rings
+(since the other conditions are formulated in terms of the completions
+in any case).
+
+\medskip\noindent
+Assume $A$ and $B$ are complete local Noetherian rings.
+Choose a diagram
+$$
+\xymatrix{
+S \ar[r] & B \\
+R \ar[u] \ar[r] & A \ar[u]
+}
+$$
+as in More on Algebra, Lemma
+\ref{more-algebra-lemma-embed-map-Noetherian-complete-local-rings}.
+Let $I = \Ker(R \to A)$ and $J = \Ker(S \to B)$.
+The proposition now follows from Lemma \ref{lemma-perfect-map-ci}.
+\end{proof}
+
+\begin{remark}
+\label{remark-no-good-ci-map}
It appears difficult to define a good notion of ``local complete
+intersection homomorphisms'' for maps between general Noetherian rings.
+The reason is that, for a local Noetherian ring $A$, the fibres of
$A \to A^\wedge$ need not be local complete intersection rings.
Thus, if $A \to B$ is a local homomorphism of local Noetherian rings
such that the map of completions $A^\wedge \to B^\wedge$ is a
complete intersection homomorphism in the sense defined above, and if
$\mathfrak q \subset B$ is a prime lying over $\mathfrak p \subset A$,
then $(A_\mathfrak p)^\wedge \to (B_\mathfrak q)^\wedge$ is in general
{\bf not} a complete intersection homomorphism in the sense
defined above. A solution can be had by working exclusively with
+excellent Noetherian rings. More generally, one could work with
+those Noetherian rings whose formal fibres are complete
+intersections, see \cite{Rodicio-ci}.
+We will develop this theory in
+Dualizing Complexes, Section \ref{dualizing-section-formal-fibres}.
+\end{remark}
+
+\noindent
To finish off this section we compare the notion defined above
+with the notion introduced in
+More on Algebra, Section \ref{section-lci}.
+
+\begin{lemma}
+\label{lemma-well-defined-if-you-can-find-good-factorization}
+Consider a commutative diagram
+$$
+\xymatrix{
+S \ar[r] & B \\
+& A \ar[lu] \ar[u]
+}
+$$
+of Noetherian local rings with $S \to B$ surjective, $A \to S$ flat, and
+$S/\mathfrak m_A S$ a regular local ring. The following are equivalent
+\begin{enumerate}
+\item $\Ker(S \to B)$ is generated by a regular sequence, and
+\item $A^\wedge \to B^\wedge$ is a complete intersection homomorphism
+as defined above.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: the proof is identical to the argument given in
+the first paragraph of the proof of Lemma \ref{lemma-ci-map-well-defined}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-type-lci-map}
+Let $A$ be a Noetherian ring.
+Let $A \to B$ be a finite type ring map.
+The following are equivalent
+\begin{enumerate}
+\item $A \to B$ is a local complete intersection in the sense of
+More on Algebra, Definition
+\ref{more-algebra-definition-local-complete-intersection},
+\item for every prime $\mathfrak q \subset B$ and with
+$\mathfrak p = A \cap \mathfrak q$ the ring map
+$(A_\mathfrak p)^\wedge \to (B_\mathfrak q)^\wedge$ is
+a complete intersection homomorphism in the sense defined above.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose a surjection $R = A[x_1, \ldots, x_n] \to B$.
+Observe that $A \to R$ is flat with regular fibres.
+Let $I$ be the kernel of $R \to B$.
+Assume (2). Then we see that
+$I$ is locally generated by a regular sequence
+by
+Lemma \ref{lemma-well-defined-if-you-can-find-good-factorization}
+and
+Algebra, Lemma \ref{algebra-lemma-regular-sequence-in-neighbourhood}.
+In other words, (1) holds.
+Conversely, assume (1). Then after localizing on $R$ and $B$
+we can assume that $I$ is generated by a Koszul regular sequence.
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-noetherian-finite-all-equivalent}
+we find that $I$ is locally generated by a regular sequence.
Hence (2) holds by
+Lemma \ref{lemma-well-defined-if-you-can-find-good-factorization}.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-avramov-map-finite-type}
+Let $A$ be a Noetherian ring. Let $A \to B$ be a finite type ring map
+such that the image of $\Spec(B) \to \Spec(A)$ contains all closed
+points of $\Spec(A)$. Then the following are equivalent
+\begin{enumerate}
+\item $B$ is a complete intersection and $A \to B$ has finite
+tor dimension,
+\item $A$ is a complete intersection and $A \to B$ is a local complete
+intersection in the sense of More on Algebra, Definition
+\ref{more-algebra-definition-local-complete-intersection}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is a reformulation of Proposition \ref{proposition-avramov-map}
+via Lemma \ref{lemma-finite-type-lci-map}.
+We omit the details.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Smooth ring maps and diagonals}
+\label{section-smooth-diagonal-perfect}
+
+\noindent
+In this section we use the material above to characterize smooth ring maps as
+those whose diagonal is perfect.
+
+\begin{lemma}
+\label{lemma-local-perfect-diagonal}
+Let $A \to B$ be a local ring homomorphism of Noetherian local rings such that
+$B$ is flat and essentially of finite type over $A$. If
+$$
+B \otimes_A B \longrightarrow B
+$$
+is a perfect ring map, i.e., if $B$ has finite tor dimension over
+$B \otimes_A B$, then $B$ is the localization of a smooth $A$-algebra.
+\end{lemma}
+
+\begin{proof}
+As $B$ is essentially of finite type over $A$, so is $B \otimes_A B$ and
+in particular $B \otimes_A B$ is Noetherian. Hence the quotient $B$ of
+$B \otimes_A B$ is pseudo-coherent over $B \otimes_A B$
+(More on Algebra, Lemma \ref{more-algebra-lemma-Noetherian-pseudo-coherent})
+which explains why perfectness of the ring map (More on Algebra, Definition
+\ref{more-algebra-definition-pseudo-coherent-perfect}) agrees with the
+condition of finite tor dimension.
+
+\medskip\noindent
+We may write $B = R/K$ where $R$ is the localization of $A[x_1, \ldots, x_n]$
+at a prime ideal and $K \subset R$ is an ideal. Denote
+$\mathfrak m \subset R \otimes_A R$ the maximal ideal which is the inverse
+image of the maximal ideal of $B$ via the surjection
+$R \otimes_A R \to B \otimes_A B \to B$. Then we have surjections
+$$
+(R \otimes_A R)_\mathfrak m \to (B \otimes_A B)_\mathfrak m \to B
+$$
+and hence ideals $I \subset J \subset (R \otimes_A R)_\mathfrak m$
+as in Lemma \ref{lemma-injective}. We conclude that
+$I/\mathfrak m I \to J/\mathfrak m J$ is injective.
+
+\medskip\noindent
+Let $K = (f_1, \ldots, f_r)$ with $r$ minimal. We may and do assume that
+$f_i \in R$ is the image of an element of $A[x_1, \ldots, x_n]$ which we
+also denote $f_i$. Observe that $I$ is generated
+by $f_1 \otimes 1, \ldots, f_r \otimes 1$ and
+$1 \otimes f_1, \ldots, 1 \otimes f_r$. We claim that this is a minimal
+set of generators of $I$. Namely, if $\kappa$ is the common residue field
+of $R$, $B$, $(R \otimes_A R)_\mathfrak m$, and $(B \otimes_A B)_\mathfrak m$
+then we have a map
+$R \otimes_A R \to R \otimes_A \kappa \oplus \kappa \otimes_A R$
+which factors through $(R \otimes_A R)_\mathfrak m$. Since $B$ is
+flat over $A$ and since we have the short exact sequence
+$0 \to K \to R \to B \to 0$ we see that
+$K \otimes_A \kappa \subset R \otimes_A \kappa$, see
+Algebra, Lemma \ref{algebra-lemma-flat-tor-zero}.
+Thus restricting the map
+$(R \otimes_A R)_\mathfrak m \to R \otimes_A \kappa \oplus \kappa \otimes_A R$
+to $I$ we obtain a map
+$$
+I \to K \otimes_A \kappa \oplus \kappa \otimes_A K \to
+K \otimes_B \kappa \oplus \kappa \otimes_B K.
+$$
+The elements
+$f_1 \otimes 1, \ldots, f_r \otimes 1, 1 \otimes f_1, \ldots, 1 \otimes f_r$
+map to a basis of the target of this map, since by Nakayama's lemma
+(Algebra, Lemma \ref{algebra-lemma-NAK})
+$f_1, \ldots, f_r$ map to a basis of $K \otimes_B \kappa$.
+This proves our claim.
+
+\medskip\noindent
+The ideal $J$ is generated by $f_1 \otimes 1, \ldots, f_r \otimes 1$
+and the elements $x_1 \otimes 1 - 1 \otimes x_1, \ldots,
+x_n \otimes 1 - 1 \otimes x_n$ (for the proof it suffices to
+see that these elements are contained in the ideal $J$).
+Now we can write
+$$
+f_i \otimes 1 - 1 \otimes f_i =
+\sum g_{ij} (x_j \otimes 1 - 1 \otimes x_j)
+$$
+for some $g_{ij}$ in $(R \otimes_A R)_\mathfrak m$. This is a general
+fact about elements of $A[x_1, \ldots, x_n]$ whose proof we omit.
+Denote $a_{ij} \in \kappa$ the image of $g_{ij}$. Another computation
+shows that $a_{ij}$ is the image of $\partial f_i / \partial x_j$ in $\kappa$.
+The injectivity of $I/\mathfrak m I \to J/\mathfrak m J$ and the remarks
+made above force the matrix $(a_{ij})$ to have maximal rank $r$.
+Set
+$$
+C = A[x_1, \ldots, x_n]/(f_1, \ldots, f_r)
+$$
+and consider the naive cotangent complex
+$\NL_{C/A} \cong (C^{\oplus r} \to C^{\oplus n})$
+where the map is given by the matrix of partial derivatives.
+Thus $\NL_{C/A} \otimes_C B$
+is isomorphic to a free $B$-module of rank $n - r$ placed in degree $0$.
+Hence $C_g$ is smooth over $A$ for some $g \in C$ mapping to a unit
+in $B$, see Algebra, Lemma \ref{algebra-lemma-smooth-at-point}.
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-diagonal}
+Let $A \to B$ be a flat finite type ring map of Noetherian rings. If
+$$
+B \otimes_A B \longrightarrow B
+$$
+is a perfect ring map, i.e., if $B$ has finite tor dimension over
+$B \otimes_A B$, then $B$ is a smooth $A$-algebra.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-local-perfect-diagonal}
+and general facts about smooth ring maps, see
+Algebra, Lemmas \ref{algebra-lemma-smooth-at-point} and
+\ref{algebra-lemma-locally-smooth}.
+Alternatively, the reader can slightly modify the proof of
+Lemma \ref{lemma-local-perfect-diagonal} to prove
+this lemma.
+\end{proof}
+
+
+
+
+
+
+\section{Freeness of the conormal module}
+\label{section-freeness-conormal}
+
+\noindent
+Tate resolutions and derivations on them can be used to prove
+(stronger) versions of the results in this section, see \cite{Iyengar}.
+Two more elementary references are
+\cite{Vasconcelos} and \cite{Ferrand-lci}.
+
+\begin{lemma}
+\label{lemma-free-summand-in-ideal-finite-proj-dim}
+\begin{reference}
+\cite{Vasconcelos}
+\end{reference}
+Let $R$ be a Noetherian local ring. Let $I \subset R$ be an ideal
+of finite projective dimension over $R$. If $F \subset I/I^2$ is a
+direct summand isomorphic to $R/I$, then there exists a nonzerodivisor
+$x \in I$ such that the image of $x$ in $I/I^2$ generates $F$.
+\end{lemma}
+
+\begin{proof}
+By assumption we may choose a finite free resolution
+$$
+0 \to R^{\oplus n_e} \to R^{\oplus n_{e-1}} \to \ldots \to
+R^{\oplus n_1} \to R \to R/I \to 0
+$$
+Then $\varphi_1 : R^{\oplus n_1} \to R$ has rank $1$ and
+we see that $I$ contains a nonzerodivisor $y$ by
+Algebra, Proposition \ref{algebra-proposition-what-exact}.
+Let $\mathfrak p_1, \ldots, \mathfrak p_n$ be the associated
+primes of $R$, see Algebra, Lemma \ref{algebra-lemma-finite-ass}.
+Let $I^2 \subset J \subset I$ be an ideal such that $J/I^2 = F$.
+Then $J \not \subset \mathfrak p_i$ for all $i$
+as $y^2 \in J$ and $y^2 \not \in \mathfrak p_i$, see
+Algebra, Lemma \ref{algebra-lemma-ass-zero-divisors}.
+By Nakayama's lemma (Algebra, Lemma \ref{algebra-lemma-NAK})
+we have $J \not \subset \mathfrak m J + I^2$.
+By Algebra, Lemma \ref{algebra-lemma-silly}
+we can pick $x \in J$, $x \not \in \mathfrak m J + I^2$ and
+$x \not \in \mathfrak p_i$ for $i = 1, \ldots, n$.
+Then $x$ is a nonzerodivisor and the image
+of $x$ in $I/I^2$ generates (by Nakayama's lemma)
+the summand $J/I^2 \cong R/I$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vasconcelos}
+\begin{reference}
+Local version of \cite[Theorem 1.1]{Vasconcelos}
+\end{reference}
+Let $R$ be a Noetherian local ring. Let $I \subset R$ be an ideal
+of finite projective dimension over $R$. If $F \subset I/I^2$
+is a direct summand free of rank $r$, then there exists a regular sequence
+$x_1, \ldots, x_r \in I$ such that $x_1 \bmod I^2, \ldots, x_r \bmod I^2$
+generate $F$.
+\end{lemma}
+
+\begin{proof}
+If $r = 0$ there is nothing to prove. Assume $r > 0$. We may pick
+$x \in I$ such that $x$ is a nonzerodivisor and $x \bmod I^2$
+generates a summand of $F$ isomorphic to $R/I$, see
+Lemma \ref{lemma-free-summand-in-ideal-finite-proj-dim}.
+Consider the ring $R' = R/(x)$ and the ideal $I' = I/(x)$.
+Of course $R'/I' = R/I$. The short exact sequence
+$$
+0 \to R/I \xrightarrow{x} I/xI \to I' \to 0
+$$
+splits because the map $I/xI \to I/I^2$ sends $xR/xI$
+to a direct summand. Now $I/xI = I \otimes_R^\mathbf{L} R'$ has
+finite projective dimension over $R'$, see
+More on Algebra, Lemmas \ref{more-algebra-lemma-perfect-module} and
+\ref{more-algebra-lemma-pull-perfect}.
+Hence the summand $I'$ has finite projective dimension over $R'$.
+On the other hand, we have the short exact sequence
+$0 \to xR/xI \to I/I^2 \to I'/(I')^2 \to 0$ and we conclude
+$I'/(I')^2$ has the free direct summand $F' = F/(R/I \cdot x)$
+of rank $r - 1$. By induction on $r$ we may
+pick a regular sequence $x'_2, \ldots, x'_r \in I'$
+such that their congruence classes freely generate $F'$.
+If $x_1 = x$ and $x_2, \ldots, x_r$ are any elements of $I$ lifting
+$x'_2, \ldots, x'_r$, then we see that the lemma holds.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-regular-ideal}
+\begin{reference}
+Variant of \cite[Corollary 1]{Vasconcelos}. See also
+\cite{Iyengar} and \cite{Ferrand-lci}.
+\end{reference}
+Let $R$ be a Noetherian ring. Let $I \subset R$ be an ideal
+which has finite projective dimension and such that $I/I^2$ is
+finite locally free over $R/I$. Then $I$ is a regular ideal
+(More on Algebra, Definition \ref{more-algebra-definition-regular-ideal}).
+\end{proposition}
+
+\begin{proof}
+By Algebra, Lemma \ref{algebra-lemma-regular-sequence-in-neighbourhood}
+it suffices to show that $I_\mathfrak p \subset R_\mathfrak p$ is generated
+by a regular sequence for every $\mathfrak p \supset I$. Thus we may
+assume $R$ is local. If $I/I^2$ has rank $r$, then by
+Lemma \ref{lemma-vasconcelos} we find a regular sequence
+$x_1, \ldots, x_r \in I$ generating $I/I^2$. By
+Nakayama (Algebra, Lemma \ref{algebra-lemma-NAK})
+we conclude that $I$ is generated by $x_1, \ldots, x_r$.
+\end{proof}
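+
+\noindent
+The hypothesis that $I$ have finite projective dimension cannot be dropped
+in Proposition \ref{proposition-regular-ideal}. Here is a standard example
+(not taken from the references above): let $R = k[x]/(x^2)$ for a field $k$
+and let $I = (x)$. Then $I^2 = 0$ and
+$$
+I/I^2 = I \cong k = R/I
+$$
+is finite free of rank $1$ over $R/I$. However, every element of $I$ is a
+zerodivisor, so $I$ is not generated by a regular sequence. Correspondingly,
+$I$ has infinite projective dimension over $R$, as the minimal free
+resolution $\ldots \to R \xrightarrow{x} R \xrightarrow{x} R \to I \to 0$
+is periodic.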
+
+\noindent
+For any local complete intersection homomorphism $A \to B$
+of rings, the naive cotangent complex $\NL_{B/A}$ is perfect
+of tor-amplitude in $[-1, 0]$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-lci-NL}.
+Using the above, we can show that this sometimes
+characterizes local complete intersection homomorphisms.
+
+\begin{lemma}
+\label{lemma-perfect-NL-lci}
+Let $A \to B$ be a perfect (More on Algebra, Definition
+\ref{more-algebra-definition-pseudo-coherent-perfect})
+ring homomorphism of Noetherian rings. Then the following are equivalent
+\begin{enumerate}
+\item $\NL_{B/A}$ has tor-amplitude in $[-1, 0]$,
+\item $\NL_{B/A}$ is a perfect object of $D(B)$
+with tor-amplitude in $[-1, 0]$, and
+\item $A \to B$ is a local complete intersection
+(More on Algebra, Definition
+\ref{more-algebra-definition-local-complete-intersection}).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Write $B = A[x_1, \ldots, x_n]/I$. Then $\NL_{B/A}$ is represented by
+the complex
+$$
+I/I^2 \longrightarrow \bigoplus B \text{d}x_i
+$$
+of $B$-modules with $I/I^2$ placed in degree $-1$. Since the term in
+degree $0$ is finite free, this complex has tor-amplitude in $[-1, 0]$ if and
+only if $I/I^2$ is a flat $B$-module, see
+More on Algebra, Lemma \ref{more-algebra-lemma-last-one-flat}.
+Since $I/I^2$ is a finite $B$-module and $B$ is Noetherian, this is true
+if and only if $I/I^2$ is a finite locally free $B$-module
+(Algebra, Lemma \ref{algebra-lemma-finite-projective}).
+Thus the equivalence of (1) and (2) is clear. Moreover, the equivalence
+of (1) and (3) also follows if we apply
+Proposition \ref{proposition-regular-ideal}
+(and the observation that a regular ideal is a Koszul regular
+ideal as well as a quasi-regular ideal, see
+More on Algebra, Section \ref{more-algebra-section-ideals}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-fp-NL-lci}
+Let $A \to B$ be a flat ring map of finite presentation.
+Then the following are equivalent
+\begin{enumerate}
+\item $\NL_{B/A}$ has tor-amplitude in $[-1, 0]$,
+\item $\NL_{B/A}$ is a perfect object of $D(B)$
+with tor-amplitude in $[-1, 0]$,
+\item $A \to B$ is syntomic
+(Algebra, Definition \ref{algebra-definition-lci}), and
+\item $A \to B$ is a local complete intersection
+(More on Algebra, Definition
+\ref{more-algebra-definition-local-complete-intersection}).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (3) and (4) is More on Algebra, Lemma
+\ref{more-algebra-lemma-syntomic-lci}.
+
+\medskip\noindent
+If $A \to B$ is syntomic, then we can find a cocartesian diagram
+$$
+\xymatrix{
+B_0 \ar[r] & B \\
+A_0 \ar[r] \ar[u] & A \ar[u]
+}
+$$
+such that $A_0 \to B_0$ is syntomic and $A_0$ is Noetherian, see
+Algebra, Lemmas \ref{algebra-lemma-limit-module-finite-presentation} and
+\ref{algebra-lemma-colimit-lci}. By Lemma \ref{lemma-perfect-NL-lci}
+we see that $\NL_{B_0/A_0}$ is perfect of tor-amplitude in $[-1, 0]$.
+By More on Algebra, Lemma \ref{more-algebra-lemma-base-change-NL-flat}
+we conclude the same thing is true for
+$\NL_{B/A} = \NL_{B_0/A_0} \otimes_{B_0}^\mathbf{L} B$ (see
+also More on Algebra, Lemmas \ref{more-algebra-lemma-pull-tor-amplitude} and
+\ref{more-algebra-lemma-pull-perfect}).
+This proves that (3) implies (2).
+
+\medskip\noindent
+Assume (1). By More on Algebra, Lemma
+\ref{more-algebra-lemma-base-change-NL-flat}
+for every ring map $A \to k$ where
+$k$ is a field, we see that $\NL_{B \otimes_A k/k}$ has
+tor-amplitude in $[-1, 0]$ (see
+More on Algebra, Lemma \ref{more-algebra-lemma-pull-tor-amplitude}).
+Hence by Lemma \ref{lemma-perfect-NL-lci} we see that $k \to B \otimes_A k$ is
+a local complete intersection homomorphism. Thus $A \to B$
+is syntomic by definition. This proves (1) implies (3)
+and finishes the proof.
+\end{proof}
+
+
+
+
+
+\section{Koszul complexes and Tate resolutions}
+\label{section-koszul-vs-tate}
+
+\noindent
+In this section we ``lift'' the result of
+More on Algebra, Lemma \ref{more-algebra-lemma-sequence-Koszul-complexes}
+to the category of differential graded algebras endowed with divided
+powers compatible with the differential graded structure (beware
+that in this section we represent Koszul complexes as chain complexes
+whereas in loc.\ cit.\ we use cochain complexes).
+
+\medskip\noindent
+Let $R$ be a ring. Let $I \subset R$ be an ideal generated
+by $f_1, \ldots, f_r \in R$. For $n \geq 1$ we denote
+$$
+K_n = K_{n, \bullet} = R\langle \xi_1, \ldots, \xi_r\rangle
+$$
+the differential graded Koszul algebra with $\xi_i$ in degree $1$ and
+$\text{d}(\xi_i) = f_i^n$. There exists a unique divided power structure on this
+(as in Definition \ref{definition-divided-powers-dga}), see
+Example \ref{example-adjoining-odd}. For $m > n$ the transition map
+$K_m \to K_n$ is the differential graded algebra map compatible with
+divided powers given by sending $\xi_i$ to $f_i^{m - n}\xi_i$.
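+
+\medskip\noindent
+As a sanity check, the transition maps are indeed compatible with the
+differentials: in $K_m$ we have $\text{d}(\xi_i) = f_i^m$, while in $K_n$
+the image $f_i^{m - n}\xi_i$ of $\xi_i$ satisfies
+$$
+\text{d}(f_i^{m - n}\xi_i) = f_i^{m - n}\text{d}(\xi_i)
+= f_i^{m - n}f_i^n = f_i^m
+$$
+In degree $0$ the transition map is the identity on $R$ and hence induces
+the canonical surjection
+$R/(f_1^m, \ldots, f_r^m) \to R/(f_1^n, \ldots, f_r^n)$ on $H_0$.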
+
+\begin{lemma}
+\label{lemma-lift-tate-to-koszul}
+In the situation above, if $R$ is Noetherian, then
+for every $n$ there exists an $N \geq n$ and maps
+$$
+K_N \to A \to R/(f_1^N, \ldots, f_r^N)\quad\text{and}\quad A \to K_n
+$$
+with the following properties
+\begin{enumerate}
+\item $(A, \text{d}, \gamma)$ is as in
+Definition \ref{definition-divided-powers-dga},
+\item $A \to R/(f_1^N, \ldots, f_r^N)$ is a quasi-isomorphism,
+\item the composition $K_N \to A \to R/(f_1^N, \ldots, f_r^N)$
+is the canonical map,
+\item the composition $K_N \to A \to K_n$ is the transition map,
+\item $A_0 = R \to R/(f_1^N, \ldots, f_r^N)$ is the canonical
+surjection,
+\item $A$ is a graded divided power polynomial algebra over $R$
+with finitely many generators in each degree, and
+\item $A \to K_n$ is a homomorphism of differential graded $R$-algebras
+compatible with divided powers which induces the canonical map
+$R/(f_1^N, \ldots, f_r^N) \to R/(f_1^n, \ldots, f_r^n)$ on
+homology in degree $0$.
+\end{enumerate}
+Condition (6) means that $A$ is constructed out of $A_0$ by
+successively adjoining a finite set of variables $T$ in each degree
+$> 0$ as in Example \ref{example-adjoining-odd} or \ref{example-adjoining-even}.
+\end{lemma}
+
+\begin{proof}
+Fix $n$. If $r = 0$, then we can just pick $A = R$. Assume $r > 0$. By
+More on Algebra, Lemma \ref{more-algebra-lemma-sequence-Koszul-complexes}
+(translated into the language of chain complexes) we can choose
+$$
+n_{r} > n_{r - 1} > \ldots > n_1 > n_0 = n
+$$
+such that the transition maps $K_{n_{i + 1}} \to K_{n_i}$ on Koszul
+algebras (see above) induce the zero map on homology in degrees $> 0$.
+We will prove the lemma with $N = n_r$.
+
+\medskip\noindent
+We will construct $A$ exactly as in the statement and proof of
+Lemma \ref{lemma-tate-resolution}. Thus we will have
+$$
+A = \colim A(m),\quad\text{and}\quad
+A(0) \to A(1) \to A(2) \to \ldots \to R/(f_1^N, \ldots, f_r^N)
+$$
+This will immediately give us properties (1), (2), (5), and (6).
+To finish the proof we will construct the $R$-algebra maps
+$K_N \to A \to K_n$. To do this we will construct
+\begin{enumerate}
+\item an isomorphism $A(1) \to K_N = K_{n_r}$,
+\item a map $A(2) \to K_{n_{r - 1}}$,
+\item $\ldots$
+\item a map $A(r) \to K_{n_1}$,
+\item a map $A(r + 1) \to K_{n_0} = K_n$, and
+\item a map $A \to K_n$.
+\end{enumerate}
+In each of these steps the map constructed will be between
+differential graded algebras compatibly endowed with divided powers
+and each of the maps will be compatible with the
+previous one via the transition maps between the Koszul algebras
+and each of the maps will induce the obvious canonical map
+on homology in degree $0$.
+
+\medskip\noindent
+Recall that $A(0) = R$. For $m = 1$, the proof of
+Lemma \ref{lemma-tate-resolution}
+chooses $A(1) = R\langle T_1, \ldots, T_r\rangle$ with
+$T_i$ of degree $1$ and with $\text{d}(T_i) = f_i^N$.
+Namely, the $f_i^N$ are generators of the kernel of
+$A(0) \to R/(f_1^N, \ldots, f_r^N)$.
+Thus for $A(1) \to K_N = K_{n_r}$ we use the map
+$$
+\varphi_1 : A(1) \longrightarrow K_{n_r},\quad T_i \longmapsto \xi_i
+$$
+which is an isomorphism.
+
+\medskip\noindent
+For $m = 2$, the construction in the proof of Lemma \ref{lemma-tate-resolution}
+chooses generators $e_1, \ldots, e_t \in \Ker(\text{d} : A(1)_1 \to A(1)_0)$.
+The construction proceeds by taking
+$A(2) = A(1)\langle T_1, \ldots, T_t\rangle$
+as a divided power polynomial algebra with $T_i$ of degree $2$
+and with $\text{d}(T_i) = e_i$.
+Since $\varphi_1(e_i)$ is a cycle in $K_{n_r}$
+we see that its image in $K_{n_{r - 1}}$ is a boundary by
+our choice of $n_r$ and $n_{r - 1}$ above.
+Hence we can construct the following commutative diagram
+$$
+\xymatrix{
+A(1) \ar[d] \ar[r]_{\varphi_1} & K_{n_r} \ar[d] \\
+A(2) \ar[r]^{\varphi_2} & K_{n_{r - 1}}
+}
+$$
+by sending $T_i$ to an element in degree $2$ whose boundary is the
+image of $\varphi_1(e_i)$. The map $\varphi_2$ exists and is compatible
+with the differential and the divided powers by the universal property
+of the divided power polynomial algebra.
+
+\medskip\noindent
+The algebra $A(m)$ and the map $\varphi_m : A(m) \to K_{n_{r + 1 - m}}$
+are constructed in exactly the same manner for $m = 2, \ldots, r$.
+
+\medskip\noindent
+Given the map $A(r) \to K_{n_1}$ we see that the composition
+$H_r(A(r)) \to H_r(K_{n_1}) \to H_r(K_{n_0}) \subset (K_{n_0})_r$
+is zero, hence we can extend this to $A(r + 1) \to K_{n_0} = K_n$
+by sending the new polynomial generators of $A(r + 1)$ to zero.
+
+\medskip\noindent
+Having constructed $A(r + 1) \to K_{n_0} = K_n$ we can simply
+extend to $A(r + 2), A(r + 3), \ldots$ in the only possible way
+by sending the new polynomial generators to zero.
+This finishes the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-pro-system-koszul}
+In the situation above, if $R$ is Noetherian,
+we can inductively choose a sequence
+of integers $1 = n_0 < n_1 < n_2 < \ldots $ such that
+for $i = 1, 2, 3, \ldots$ we have maps
+$K_{n_i} \to A_i \to R/(f_1^{n_i}, \ldots, f_r^{n_i})$
+and $A_i \to K_{n_{i - 1}}$ as in Lemma \ref{lemma-lift-tate-to-koszul}.
+Denote $A_{i + 1} \to A_i$ the composition $A_{i + 1} \to K_{n_i} \to A_i$.
+Then the diagram
+$$
+\xymatrix{
+K_{n_1} \ar[d] &
+K_{n_2} \ar[d] \ar[l] &
+K_{n_3} \ar[d] \ar[l] &
+\ldots \ar[l] \\
+A_1 \ar[d] &
+A_2 \ar[l] \ar[d] &
+A_3 \ar[l] \ar[d] &
+\ldots \ar[l] \\
+K_1 &
+K_{n_1} \ar[l] &
+K_{n_2} \ar[l] &
+\ldots \ar[l]
+}
+$$
+commutes. In this way we see that the inverse systems
+$(K_n)$ and $(A_n)$ are pro-isomorphic in the category of
+differential graded $R$-algebras with compatible divided powers.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-compute-cohomology-adjoin-variable}
+Let $(A, \text{d}, \gamma)$, $d \geq 1$, $f \in A_{d - 1}$,
+and $A\langle T \rangle$ be as in Lemma \ref{lemma-extend-differential}.
+\begin{enumerate}
+\item If $d = 1$, then there is a long exact sequence
+$$
+\ldots \to H_0(A) \xrightarrow{f} H_0(A) \to H_0(A\langle T \rangle) \to 0
+$$
+\item For $d = 2$ there is a bounded spectral sequence
+$(E_1)_{i, j} = H_{j - i}(A) \cdot T^{[i]}$
+converging to $H_{i + j}(A\langle T \rangle)$. The differential
+$(d_1)_{i, j} : H_{j - i}(A) \cdot T^{[i]} \to
+H_{j - i + 1}(A) \cdot T^{[i - 1]}$
+sends $\xi \cdot T^{[i]}$ to the class of $f \xi \cdot T^{[i - 1]}$.
+\item Add more here for other degrees as needed.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For $d = 1$, we have a short exact sequence of complexes
+$$
+0 \to A \to A\langle T \rangle \to A \cdot T \to 0
+$$
+and the result (1) follows easily from this. For $d = 2$ we view
+$A\langle T \rangle$ as a filtered chain complex with subcomplexes
+$$
+F^pA\langle T \rangle = \bigoplus\nolimits_{i \leq p} A \cdot T^{[i]}
+$$
+Applying the spectral sequence of
+Homology, Section \ref{homology-section-filtered-complex}
+(translated into chain complexes) we obtain (2).
+\end{proof}
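+
+\medskip\noindent
+To illustrate part (1) in the simplest case: take $A = R$ concentrated in
+degree $0$ and $f \in R$. Then $A\langle T \rangle$ is the Koszul complex
+$(R \xrightarrow{f} R)$ and the long exact sequence reduces to
+$$
+0 \to H_1(A\langle T \rangle) \to R \xrightarrow{f} R
+\to H_0(A\langle T \rangle) \to 0
+$$
+so that $H_0(A\langle T \rangle) = R/fR$ and $H_1(A\langle T \rangle)$
+is the annihilator of $f$ in $R$.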
+
+\noindent
+The following lemma will be needed later.
+
+\begin{lemma}
+\label{lemma-construct-some-maps}
+In the situation above, for all $n \geq t \geq 1$ there exists an $N > n$
+and a map
+$$
+K_t \longrightarrow K_n \otimes_R K_t
+$$
+in the derived category of left differential graded $K_N$-modules
+whose composition with the multiplication map is the transition map
+(in either direction).
+\end{lemma}
+
+\begin{proof}
+We first prove this for $r = 1$. Set $f = f_1$.
+Write $K_t = R\langle x \rangle$,
+$K_n = R\langle y \rangle$, and $K_N = R\langle z \rangle$
+with $x$, $y$, $z$ of degree $1$ and
+$\text{d}(x) = f^t$, $\text{d}(y) = f^n$, and
+$\text{d}(z) = f^N$. For all $N > t$ we claim there is a quasi-isomorphism
+$$
+B_{N, t} = R\langle x, z, u \rangle
+\longrightarrow
+K_t,\quad
+x \mapsto x,\quad
+z \mapsto f^{N - t}x,\quad
+u \mapsto 0
+$$
+Here the left hand side denotes the divided power polynomial algebra
+in variables $x$ and $z$ of degree $1$ and $u$ of degree $2$ with
+$\text{d}(x) = f^t$, $\text{d}(z) = f^N$, and
+$\text{d}(u) = z - f^{N - t}x$. To prove the claim,
+we observe that the following three submodules of
+$H_*(R\langle x, z\rangle)$ are the same
+\begin{enumerate}
+\item the kernel of $H_*(R\langle x, z\rangle) \to H_*(K_t)$,
+\item the image of
+$z - f^{N - t}x : H_*(R\langle x, z\rangle) \to H_*(R\langle x, z\rangle)$, and
+\item the kernel of
+$z - f^{N - t}x : H_*(R\langle x, z\rangle) \to H_*(R\langle x, z\rangle)$.
+\end{enumerate}
+This observation is proved by a direct computation\footnote{Hint: setting
+$z' = z - f^{N - t}x$ we see that
+$R\langle x, z\rangle = R\langle x, z'\rangle$ with $\text{d}(z') = 0$
+and moreover the map $R\langle x, z'\rangle \to K_t$ is
+the map killing $z'$.} which we omit. Then we can
+apply Lemma \ref{lemma-compute-cohomology-adjoin-variable} part (2)
+to see that the claim is true.
+
+\medskip\noindent
+Via the homomorphism $K_N \to B_{N, t}$ of differential graded $R$-algebras
+sending $z$ to $z$, we may view $B_{N, t} \to K_t$ as a quasi-isomorphism of
+left differential graded $K_N$-modules. To define the arrow in the statement
+of the lemma we use the homomorphism
+$$
+B_{N, t} = R\langle x, z, u \rangle \to K_n \otimes_R K_t,\quad
+x \mapsto 1 \otimes x,\quad
+z \mapsto f^{N - n}y \otimes 1,\quad
+u \mapsto - f^{N - n - t}y \otimes x
+$$
+This makes sense as long as we assume $N \geq n + t$. It is a
+pleasant computation to show that the (pre or post) composition with
+the multiplication map is the transition map.
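+
+\medskip\noindent
+For example, let us verify that the displayed map is compatible with the
+differentials on the generator $u$ (a piece of the computation alluded to
+above). On the one hand, $\text{d}(u) = z - f^{N - t}x$ maps to
+$f^{N - n}(y \otimes 1) - f^{N - t}(1 \otimes x)$. On the other hand,
+by the Leibniz rule in $K_n \otimes_R K_t$ we have
+$$
+\text{d}(- f^{N - n - t}\, y \otimes x)
+= - f^{N - n - t}\left(f^n (1 \otimes x) - f^t (y \otimes 1)\right)
+= f^{N - n}(y \otimes 1) - f^{N - t}(1 \otimes x)
+$$
+and the two agree.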
+
+\medskip\noindent
+For $r > 1$ we proceed by writing each of the Koszul algebras as a
+tensor product of Koszul algebras in $1$ variable and we apply the
+previous construction. In other words, we write
+$$
+K_t = R\langle x_1, \ldots, x_r\rangle =
+R\langle x_1\rangle \otimes_R \ldots \otimes_R R\langle x_r\rangle
+$$
+where $x_i$ is in degree $1$ and $\text{d}(x_i) = f_i^t$.
+In the case $r > 1$ we then use
+$$
+B_{N, t} =
+R\langle x_1, z_1, u_1 \rangle
+\otimes_R \ldots \otimes_R
+R\langle x_r, z_r, u_r \rangle
+$$
+where $x_i, z_i$ have degree $1$ and $u_i$ has degree $2$ and we have
+$\text{d}(x_i) = f_i^t$, $\text{d}(z_i) = f_i^N$, and
+$\text{d}(u_i) = z_i - f_i^{N - t}x_i$.
+The tensor product map $B_{N, t} \to K_t$ will be a quasi-isomorphism
+as it is a tensor product of quasi-isomorphisms between bounded above
+complexes of free $R$-modules. Finally, we define the map
+$$
+B_{N, t}
+\to
+K_n \otimes_R K_t =
+R\langle y_1, \ldots, y_r\rangle \otimes_R
+R\langle x_1, \ldots, x_r\rangle
+$$
+as the tensor product of the maps constructed in the case of $r = 1$
+or simply by the rules $x_i \mapsto 1 \otimes x_i$,
+$z_i \mapsto f_i^{N - n}y_i \otimes 1$, and
+$u_i \mapsto - f_i^{N - n - t}y_i \otimes x_i$ which makes
+sense as long as $N \geq n + t$. We omit the details.
+\end{proof}
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/duality.tex b/books/stacks/duality.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c1c4411aa6f745b7e5cfda4f4797582fb55bbf49
--- /dev/null
+++ b/books/stacks/duality.tex
@@ -0,0 +1,9208 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Duality for Schemes}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+This chapter studies relative duality for morphisms of schemes
+and the dualizing complex on a scheme. A reference is \cite{RD}.
+
+\medskip\noindent
+Dualizing complexes for Noetherian rings were defined and studied in
+Dualizing Complexes, Section \ref{dualizing-section-dualizing} ff.
+In this chapter we continue this by studying dualizing complexes on
+schemes, see Section \ref{section-dualizing-schemes}.
+
+\medskip\noindent
+The bulk of this chapter is devoted to studying the right adjoint
+of pushforward in the setting of derived categories of sheaves
+of modules with quasi-coherent cohomology sheaves.
+See Sections
+\ref{section-twisted-inverse-image},
+\ref{section-restriction-to-opens},
+\ref{section-base-change-map},
+\ref{section-base-change-II},
+\ref{section-trace},
+\ref{section-compare-with-pullback},
+\ref{section-sections-with-exact-support},
+\ref{section-duality-finite},
+\ref{section-perfect-proper},
+\ref{section-dualizing-Cartier}, and
+\ref{section-examples}.
+Here we follow the papers
+\cite{Neeman-Grothendieck}, \cite{LN},
+\cite{Lipman-notes}, and \cite{Neeman-improvement}.
+
+\medskip\noindent
+We discuss the important and useful upper shriek functors $f^!$ for
+separated morphisms of finite type between Noetherian schemes in
+Sections \ref{section-upper-shriek},
+\ref{section-upper-shriek-properties}, and
+\ref{section-base-change-shriek}
+culminating in the overview Section
+\ref{section-duality}.
+
+\medskip\noindent
+In Section \ref{section-glue}
+we explain an alternative theory of duality and dualizing
+complexes when working over a fixed locally Noetherian
+base endowed with a dualizing complex (this section corresponds
+to a remark in Hartshorne's book).
+
+\medskip\noindent
+In the remaining sections we give a few applications.
+
+\medskip\noindent
+This chapter is continued by the chapter on duality
+on algebraic spaces, see
+Duality for Spaces, Section \ref{spaces-duality-section-introduction}.
+
+
+
+
+
+
+
+\section{Dualizing complexes on schemes}
+\label{section-dualizing-schemes}
+
+\noindent
+We define a dualizing complex on a locally Noetherian scheme
+to be a complex which affine locally comes from a dualizing
+complex on the corresponding ring. This is not completely
+standard but agrees with all definitions in the literature
+on Noetherian schemes of finite dimension.
+
+\begin{lemma}
+\label{lemma-equivalent-definitions}
+Let $X$ be a locally Noetherian scheme. Let $K$ be an object of
+$D(\mathcal{O}_X)$. The following are equivalent
+\begin{enumerate}
+\item For every affine open $U = \Spec(A) \subset X$ there exists
+a dualizing complex $\omega_A^\bullet$ for $A$ such that
+$K|_U$ is isomorphic to the image of $\omega_A^\bullet$ by
+the functor $\widetilde{} : D(A) \to D(\mathcal{O}_U)$.
+\item There is an affine open covering $X = \bigcup U_i$, $U_i = \Spec(A_i)$
+such that for each $i$ there exists a dualizing complex $\omega_i^\bullet$ for
+$A_i$ such that $K|_{U_i}$ is isomorphic to the image of $\omega_i^\bullet$ by
+the functor $\widetilde{} : D(A_i) \to D(\mathcal{O}_{U_i})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2) and let $U = \Spec(A)$ be an affine open of $X$.
+Since condition (2) implies that $K$ is in $D_\QCoh(\mathcal{O}_X)$
+we find an object $\omega_A^\bullet$ in $D(A)$ whose associated
+complex of quasi-coherent sheaves is isomorphic to $K|_U$, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-compare-bounded}.
+We will show that $\omega_A^\bullet$ is a dualizing complex for $A$
+which will finish the proof.
+
+\medskip\noindent
+Since $X = \bigcup U_i$ is an open covering, we can find a standard
+open covering $U = D(f_1) \cup \ldots \cup D(f_m)$ such that
+each $D(f_j)$ is a standard open in one of the affine opens $U_i$, see
+Schemes, Lemma \ref{schemes-lemma-standard-open-two-affines}.
+Say $D(f_j) = D(g_j)$ for $g_j \in A_{i_j}$.
+Then $A_{f_j} \cong (A_{i_j})_{g_j}$ and we have
+$$
+(\omega_A^\bullet)_{f_j} \cong (\omega_i^\bullet)_{g_j}
+$$
+in the derived category by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-compare-bounded}.
+By Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing-localize}
+we find that
+the complex $(\omega_A^\bullet)_{f_j}$ is a dualizing complex over
+$A_{f_j}$ for $j = 1, \ldots, m$. This implies that $\omega_A^\bullet$
+is dualizing by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing-glue}.
+\end{proof}
+
+\begin{definition}
+\label{definition-dualizing-scheme}
+Let $X$ be a locally Noetherian scheme. An object $K$ of
+$D(\mathcal{O}_X)$ is called a {\it dualizing complex} if
+$K$ satisfies the equivalent conditions of
+Lemma \ref{lemma-equivalent-definitions}.
+\end{definition}
+
+\noindent
+Please see remarks made at the beginning of this section.
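+
+\medskip\noindent
+For example (by the results on Gorenstein rings in the chapter on dualizing
+complexes; we do not give precise references here): if $X = \Spec(A)$ for a
+Noetherian Gorenstein ring $A$ of finite dimension, then $A$ is a dualizing
+complex over itself, and hence $\mathcal{O}_X$ viewed as an object of
+$D(\mathcal{O}_X)$ is a dualizing complex on $X$.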
+
+\begin{lemma}
+\label{lemma-affine-duality}
+Let $A$ be a Noetherian ring and let $X = \Spec(A)$. Let $K, L$ be objects
+of $D(A)$. If $K \in D_{\textit{Coh}}(A)$ and $L$ has finite injective
+dimension, then
+$$
+R\SheafHom_{\mathcal{O}_X}(\widetilde{K}, \widetilde{L})
+=
+\widetilde{R\Hom_A(K, L)}
+$$
+in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+We may assume that $L$ is given by a finite complex $I^\bullet$
+of injective $A$-modules. By induction on the length of $I^\bullet$
+and compatibility of the constructions with distinguished triangles,
+we reduce to the case that $L = I[0]$ where $I$ is an injective $A$-module.
+In this case, Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}, tells us that
+the $n$th cohomology sheaf of
+$R\SheafHom_{\mathcal{O}_X}(\widetilde{K}, \widetilde{L})$
+is the sheaf associated to the presheaf
+$$
+D(f) \longmapsto \Ext^n_{A_f}(K \otimes_A A_f, I \otimes_A A_f)
+$$
+Since $A$ is Noetherian, the $A_f$-module $I \otimes_A A_f$ is injective
+(Dualizing Complexes, Lemma
+\ref{dualizing-lemma-localization-injective-modules}). Hence we see that
+\begin{align*}
+\Ext^n_{A_f}(K \otimes_A A_f, I \otimes_A A_f)
+& =
+\Hom_{A_f}(H^{-n}(K \otimes_A A_f), I \otimes_A A_f) \\
+& =
+\Hom_{A_f}(H^{-n}(K) \otimes_A A_f, I \otimes_A A_f) \\
+& =
+\Hom_A(H^{-n}(K), I) \otimes_A A_f
+\end{align*}
+The last equality because $H^{-n}(K)$ is a finite $A$-module, see
+Algebra, Lemma \ref{algebra-lemma-hom-from-finitely-presented}.
+This proves that the canonical map
+$$
+\widetilde{R\Hom_A(K, L)}
+\longrightarrow
+R\SheafHom_{\mathcal{O}_X}(\widetilde{K}, \widetilde{L})
+$$
+is a quasi-isomorphism in this case and the proof is done.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-internal-hom-evaluate-isom}
+Let $X$ be a Noetherian scheme. Let $K, L, M \in D_\QCoh(\mathcal{O}_X)$.
+Then the map
+$$
+R\SheafHom(L, M) \otimes_{\mathcal{O}_X}^\mathbf{L} K
+\longrightarrow
+R\SheafHom(R\SheafHom(K, L), M)
+$$
+of Cohomology, Lemma \ref{cohomology-lemma-internal-hom-evaluate}
+is an isomorphism in the following two cases
+\begin{enumerate}
+\item $K \in D^-_{\textit{Coh}}(\mathcal{O}_X)$,
+$L \in D^+_{\textit{Coh}}(\mathcal{O}_X)$, and $M$ affine locally has
+finite injective dimension (see proof), or
+\item $K$ and $L$ are in $D_{\textit{Coh}}(\mathcal{O}_X)$,
+the object $R\SheafHom(L, M)$ has finite tor dimension, and
+$L$ and $M$ affine locally have finite injective dimension
+(in particular $L$ and $M$ are bounded).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We say $M$ has affine locally finite injective dimension
+if $X$ has an open covering by affines $U = \Spec(A)$ such that the object
+of $D(A)$ corresponding to $M|_U$ (Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-compare-bounded})
+has finite injective dimension\footnote{This condition is independent of the
+choice of the affine open cover of the Noetherian scheme $X$.
+Details omitted.}. To prove the lemma we may
+replace $X$ by $U$, i.e., we may assume $X = \Spec(A)$
+for some Noetherian ring $A$. Observe that
+$R\SheafHom(K, L)$ is in $D^+_{\textit{Coh}}(\mathcal{O}_X)$ by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-coherent-internal-hom}.
+Moreover, the formation of the left and right hand side
+of the arrow commutes with the functor $D(A) \to D_\QCoh(\mathcal{O}_X)$ by
+Lemma \ref{lemma-affine-duality} and
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}
+(to be sure this uses the assumptions on $K$, $L$, $M$ and what we just
+proved about $R\SheafHom(K, L)$).
+Then finally the arrow is an isomorphism by
More on Algebra, Lemma
\ref{more-algebra-lemma-internal-hom-evaluate-isomorphism} part (2).
+
+\medskip\noindent
+Proof of (2). We argue as above. A small change is that here we get
+$R\SheafHom(K, L)$ in $D_{\textit{Coh}}(\mathcal{O}_X)$ because
+affine locally (which is allowable by Lemma \ref{lemma-affine-duality})
+we may appeal to Dualizing Complexes, Lemma
+\ref{dualizing-lemma-finite-ext-into-bounded-injective}.
+Then we finally conclude by
+More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-evaluate-isomorphism-technical}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-schemes}
+Let $K$ be a dualizing complex on a locally Noetherian scheme $X$.
+Then $K$ is an object of $D_{\textit{Coh}}(\mathcal{O}_X)$
+and $D = R\SheafHom_{\mathcal{O}_X}(-, K)$ induces an anti-equivalence
+$$
+D :
+D_{\textit{Coh}}(\mathcal{O}_X)
+\longrightarrow
+D_{\textit{Coh}}(\mathcal{O}_X)
+$$
+which comes equipped with a canonical isomorphism
+$\text{id} \to D \circ D$. If $X$ is quasi-compact, then
+$D$ exchanges $D^+_{\textit{Coh}}(\mathcal{O}_X)$ and
+$D^-_{\textit{Coh}}(\mathcal{O}_X)$ and induces an equivalence
+$D^b_{\textit{Coh}}(\mathcal{O}_X) \to D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Let $U \subset X$ be an affine open. Say $U = \Spec(A)$ and
+let $\omega_A^\bullet$ be a dualizing complex for $A$
+corresponding to $K|_U$
+as in Lemma \ref{lemma-equivalent-definitions}.
+By Lemma \ref{lemma-affine-duality} the diagram
+$$
+\xymatrix{
+D_{\textit{Coh}}(A) \ar[r] \ar[d]_{R\Hom_A(-, \omega_A^\bullet)} &
+D_{\textit{Coh}}(\mathcal{O}_U) \ar[d]^{R\SheafHom_{\mathcal{O}_X}(-, K|_U)} \\
+D_{\textit{Coh}}(A) \ar[r] &
+D(\mathcal{O}_U)
+}
+$$
+commutes. We conclude that $D$ sends $D_{\textit{Coh}}(\mathcal{O}_X)$ into
+$D_{\textit{Coh}}(\mathcal{O}_X)$. Moreover, the canonical map
+$$
+L
+\longrightarrow
+R\SheafHom_{\mathcal{O}_X}(K, K) \otimes_{\mathcal{O}_X}^\mathbf{L} L
+\longrightarrow
+R\SheafHom_{\mathcal{O}_X}(R\SheafHom_{\mathcal{O}_X}(L, K), K)
+$$
+(using Cohomology, Lemma \ref{cohomology-lemma-internal-hom-evaluate}
+for the second arrow)
+is an isomorphism for all $L$ because this is true on affines by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing}\footnote{An
+alternative is to first show that
+$R\SheafHom_{\mathcal{O}_X}(K, K) = \mathcal{O}_X$ by
+working affine locally and then use
+Lemma \ref{lemma-internal-hom-evaluate-isom} part (2)
+to see the map is an isomorphism.}
+and we have already seen on affines that we recover what
+happens in algebra.
+The statement on boundedness properties of the functor $D$
+in the quasi-compact case also follows from the corresponding
+statements of Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing}.
+\end{proof}
+
+\noindent
+Let $X$ be a locally ringed space. Recall that an object $L$ of
+$D(\mathcal{O}_X)$ is {\it invertible} if it is an invertible object
+for the symmetric monoidal structure on $D(\mathcal{O}_X)$ given
+by derived tensor product. In
+Cohomology, Lemma \ref{cohomology-lemma-invertible-derived}
+we have seen this means $L$ is perfect and there is an open covering
+$X = \bigcup U_i$ such that $L|_{U_i} \cong \mathcal{O}_{U_i}[-n_i]$
+for some integers $n_i$. In this case, the function
+$$
+x \mapsto n_x,\quad
+\text{where }n_x\text{ is the unique integer such that }
+H^{n_x}(L_x) \not = 0
+$$
+is locally constant on $X$. In particular, we have
+$L = \bigoplus H^n(L)[-n]$ which gives a well defined complex of
+$\mathcal{O}_X$-modules (with zero differentials) representing $L$.
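\medskip\noindent
As a simple illustration (not needed in what follows): if
$X = U_1 \amalg U_2$ is a disjoint union of two nonempty open and closed
subschemes, then the object
$$
L = \mathcal{O}_{U_1}[0] \oplus \mathcal{O}_{U_2}[-1]
$$
(extensions by zero understood) is invertible in $D(\mathcal{O}_X)$;
its inverse for the derived tensor product is
$\mathcal{O}_{U_1}[0] \oplus \mathcal{O}_{U_2}[1]$. Here the locally
constant function $x \mapsto n_x$ takes the value $0$ on $U_1$ and
$1$ on $U_2$, which shows that it need not be constant when $X$
is disconnected.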
+
+\begin{lemma}
+\label{lemma-dualizing-unique-schemes}
+Let $X$ be a locally Noetherian scheme. If $K$ and $K'$ are dualizing
+complexes on $X$, then $K'$ is isomorphic to
+$K \otimes_{\mathcal{O}_X}^\mathbf{L} L$
+for some invertible object $L$ of $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Set
+$$
+L = R\SheafHom_{\mathcal{O}_X}(K, K')
+$$
+This is an invertible object of $D(\mathcal{O}_X)$, because affine locally
+this is true, see Dualizing Complexes, Lemma
+\ref{dualizing-lemma-dualizing-unique} and its proof.
+The evaluation map $L \otimes_{\mathcal{O}_X}^\mathbf{L} K \to K'$
+is an isomorphism for the same reason.
+\end{proof}
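\medskip\noindent
As a trivial illustration of Lemma \ref{lemma-dualizing-unique-schemes},
take $X = \Spec(k)$ for a field $k$. The dualizing complexes on $X$ are,
up to isomorphism, exactly the shifts $k[n]$, $n \in \mathbf{Z}$, and for
$K = k[n]$ and $K' = k[m]$ we get
$$
L = R\SheafHom_{\mathcal{O}_X}(K, K') = k[m - n]
$$
which is indeed an invertible object with
$K' = K \otimes_{\mathcal{O}_X}^\mathbf{L} L$.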
+
+\begin{lemma}
+\label{lemma-dimension-function-scheme}
+Let $X$ be a locally Noetherian scheme. Let $\omega_X^\bullet$
+be a dualizing complex on $X$. Then $X$ is universally catenary
+and the function
+$X \to \mathbf{Z}$ defined by
+$$
+x \longmapsto \delta(x)\text{ such that }
+\omega_{X, x}^\bullet[-\delta(x)]
+\text{ is a normalized dualizing complex over }
+\mathcal{O}_{X, x}
+$$
+is a dimension function.
+\end{lemma}
+
+\begin{proof}
+Immediate from the affine case
+Dualizing Complexes, Lemma \ref{dualizing-lemma-dimension-function}
+and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sitting-in-degrees}
+Let $X$ be a locally Noetherian scheme. Let $\omega_X^\bullet$
+be a dualizing complex on $X$ with associated dimension function $\delta$.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module. Set
+$\mathcal{E}^i = \SheafExt^{-i}_{\mathcal{O}_X}(\mathcal{F}, \omega_X^\bullet)$.
+Then $\mathcal{E}^i$ is a coherent $\mathcal{O}_X$-module and
+for $x \in X$ we have
+\begin{enumerate}
+\item $\mathcal{E}^i_x$ is nonzero only for
+$\delta(x) \leq i \leq \delta(x) + \dim(\text{Supp}(\mathcal{F}_x))$,
+\item $\dim(\text{Supp}(\mathcal{E}^{i + \delta(x)}_x)) \leq i$,
\item $\text{depth}(\mathcal{F}_x)$ is the smallest integer
$i \geq 0$ such that $\mathcal{E}^{i + \delta(x)}_x \not = 0$, and
+\item we have
+$x \in \text{Supp}(\bigoplus_{j \leq i} \mathcal{E}^j)
+\Leftrightarrow
+\text{depth}_{\mathcal{O}_{X, x}}(\mathcal{F}_x) + \delta(x) \leq i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Lemma \ref{lemma-dualizing-schemes} tells us that $\mathcal{E}^i$
+is coherent. Choosing an affine neighbourhood of $x$ and using
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}
+and
+More on Algebra, Lemma
+\ref{more-algebra-lemma-base-change-RHom} part (3)
+we have
+$$
+\mathcal{E}^i_x =
+\SheafExt^{-i}_{\mathcal{O}_X}(\mathcal{F}, \omega_X^\bullet)_x =
+\Ext^{-i}_{\mathcal{O}_{X, x}}(\mathcal{F}_x,
+\omega_{X, x}^\bullet) =
+\Ext^{\delta(x) - i}_{\mathcal{O}_{X, x}}(\mathcal{F}_x,
+\omega_{X, x}^\bullet[-\delta(x)])
+$$
+By construction of $\delta$ in Lemma \ref{lemma-dimension-function-scheme}
+this reduces parts (1), (2), and (3) to
+Dualizing Complexes, Lemma \ref{dualizing-lemma-sitting-in-degrees}.
+Part (4) is a formal consequence of (3) and (1).
+\end{proof}
+
+
+
+
+\section{Right adjoint of pushforward}
+\label{section-twisted-inverse-image}
+
+\noindent
+References for this section and the following are
+\cite{Neeman-Grothendieck}, \cite{LN},
+\cite{Lipman-notes}, and \cite{Neeman-improvement}.
+
+\medskip\noindent
+Let $f : X \to Y$ be a morphism of schemes.
+In this section we consider the right adjoint to the functor
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$.
+In the literature, if this functor exists, then it is sometimes
+denoted $f^{\times}$. This notation is not universally accepted and we refrain
+from using it. We will not use the notation $f^!$ for such a functor,
+as this would clash (for general morphisms $f$) with the notation in
+\cite{RD}.
+
+\begin{lemma}
+\label{lemma-twisted-inverse-image}
+\begin{reference}
+This is almost the same as \cite[Example 4.2]{Neeman-Grothendieck}.
+\end{reference}
+Let $f : X \to Y$ be a morphism between quasi-separated and quasi-compact
schemes. The functor
$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ has a
+right adjoint.
+\end{lemma}
+
+\begin{proof}
+We will prove a right adjoint exists by verifying the hypotheses of
+Derived Categories, Proposition \ref{derived-proposition-brown}.
+First off, the category $D_\QCoh(\mathcal{O}_X)$ has direct sums, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-direct-sums}.
+The category $D_\QCoh(\mathcal{O}_X)$ is compactly generated by
+Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-bondal-van-den-Bergh}.
+Since $X$ and $Y$ are quasi-compact and quasi-separated, so is $f$, see
+Schemes, Lemmas \ref{schemes-lemma-compose-after-separated} and
+\ref{schemes-lemma-quasi-compact-permanence}.
+Hence the functor $Rf_*$ commutes with direct sums, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-pushforward-direct-sums}.
+This finishes the proof.
+\end{proof}
+
+\begin{example}
+\label{example-affine-twisted-inverse-image}
+Let $A \to B$ be a ring map. Let $Y = \Spec(A)$ and $X = \Spec(B)$
+and $f : X \to Y$ the morphism corresponding to $A \to B$.
+Then $Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+corresponds to restriction $D(B) \to D(A)$ via
+the equivalences $D(B) \to D_\QCoh(\mathcal{O}_X)$ and
+$D(A) \to D_\QCoh(\mathcal{O}_Y)$. Hence the right adjoint
+corresponds to the functor $K \longmapsto R\Hom(B, K)$ of
+Dualizing Complexes, Section \ref{dualizing-section-trivial}.
+\end{example}
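\medskip\noindent
For a concrete instance of Example \ref{example-affine-twisted-inverse-image},
let $k$ be a field, $A = k[x]$, and $B = A/(x)$, so that $f$ is the inclusion
of the origin in $\mathbf{A}^1_k$. Using the free resolution
$0 \to A \xrightarrow{x} A \to B \to 0$ we compute
$$
R\Hom(B, A) = (A \xrightarrow{x} A) \cong k[-1]
$$
where the two-term complex sits in degrees $0$ and $1$. Thus the right
adjoint sends $\mathcal{O}_Y$ to $k$ placed in degree $1$; the shift by
the codimension of the closed immersion is a first instance of the shift
familiar from duality theory.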
+
+\begin{example}
+\label{example-does-not-preserve-coherent}
+If $f : X \to Y$ is a separated finite type morphism of Noetherian schemes,
+then the right adjoint of
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ does not map
+$D_{\textit{Coh}}(\mathcal{O}_Y)$ into
+$D_{\textit{Coh}}(\mathcal{O}_X)$. Namely, let $k$ be a field and
+consider the morphism $f : \mathbf{A}^1_k \to \Spec(k)$. By
+Example \ref{example-affine-twisted-inverse-image}
+this corresponds to the question of whether
+$R\Hom(B, -)$ maps $D_{\textit{Coh}}(A)$ into $D_{\textit{Coh}}(B)$
+where $A = k$ and $B = k[x]$. This is not true because
+$$
+R\Hom(k[x], k) = \left(\prod\nolimits_{n \geq 0} k\right)[0]
+$$
+which is not a finite $k[x]$-module. Hence $a(\mathcal{O}_Y)$
+does not have coherent cohomology sheaves.
+\end{example}
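\medskip\noindent
To verify the displayed formula in
Example \ref{example-does-not-preserve-coherent}: since $A = k$ is a field,
the module $k$ is injective and hence
$$
R\Hom(k[x], k) = \Hom_k(k[x], k) = \prod\nolimits_{n \geq 0} k
$$
sitting in degree $0$, where the identification sends $\varphi$ to
$(\varphi(x^n))_{n \geq 0}$. This $k[x]$-module is not finite: a finite
$k[x]$-module has countable dimension over $k$, whereas
$\prod_{n \geq 0} k$ does not (Erd\H{o}s--Kaplansky).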
+
+\begin{example}
+\label{example-does-not-preserve-bounded-above}
+If $f : X \to Y$ is a proper or even finite morphism of Noetherian schemes,
+then the right adjoint of
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+does not map $D_\QCoh^-(\mathcal{O}_Y)$ into
+$D_\QCoh^-(\mathcal{O}_X)$. Namely, let $k$ be a field, let
+$k[\epsilon]$ be the dual numbers over $k$, let
+$X = \Spec(k)$, and let $Y = \Spec(k[\epsilon])$.
Then $\Ext^i_{k[\epsilon]}(k, k)$ is nonzero for all $i \geq 0$.
Hence $a(\widetilde{k})$, where $\widetilde{k}$ is the quasi-coherent
module on $Y$ associated to the $k[\epsilon]$-module $k$, is not
bounded above by Example \ref{example-affine-twisted-inverse-image}.
+\end{example}
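\medskip\noindent
The nonvanishing of the groups $\Ext^i_{k[\epsilon]}(k, k)$ in
Example \ref{example-does-not-preserve-bounded-above} can be seen from
the periodic free resolution
$$
\ldots \xrightarrow{\epsilon} k[\epsilon]
\xrightarrow{\epsilon} k[\epsilon]
\xrightarrow{\epsilon} k[\epsilon] \to k \to 0
$$
Applying $\Hom_{k[\epsilon]}(-, k)$ turns every transition map into zero,
because $\epsilon$ acts as zero on $k$. Hence
$\Ext^i_{k[\epsilon]}(k, k) = k$ for all $i \geq 0$.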
+
+\begin{lemma}
+\label{lemma-twisted-inverse-image-bounded-below}
+Let $f : X \to Y$ be a morphism of quasi-compact and quasi-separated
+schemes. Let $a : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_X)$
+be the right adjoint to $Rf_*$ of Lemma \ref{lemma-twisted-inverse-image}.
+Then $a$ maps $D^+_\QCoh(\mathcal{O}_Y)$ into $D^+_\QCoh(\mathcal{O}_X)$.
+In fact, there exists an integer $N$ such that
+$H^i(K) = 0$ for $i \leq c$ implies $H^i(a(K)) = 0$ for $i \leq c - N$.
+\end{lemma}
+
+\begin{proof}
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-direct-image}
+the functor $Rf_*$ has finite cohomological dimension. In other words,
there exists an integer $N$ such that
+$H^i(Rf_*L) = 0$ for $i \geq N + c$ if $H^i(L) = 0$ for $i \geq c$.
+Say $K \in D^+_\QCoh(\mathcal{O}_Y)$ has $H^i(K) = 0$ for $i \leq c$.
+Then
+$$
+\Hom_{D(\mathcal{O}_X)}(\tau_{\leq c - N}a(K), a(K)) =
+\Hom_{D(\mathcal{O}_Y)}(Rf_*\tau_{\leq c - N}a(K), K) = 0
+$$
+by what we said above. Clearly, this implies that
+$H^i(a(K)) = 0$ for $i \leq c - N$.
+\end{proof}
+
+\noindent
+Let $f : X \to Y$ be a morphism of quasi-separated and quasi-compact
+schemes. Let $a$ denote the right adjoint to
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$. For every
+$K \in D_\QCoh(\mathcal{O}_Y)$ and $L \in D_\QCoh(\mathcal{O}_X)$
+we obtain a canonical map
+\begin{equation}
+\label{equation-sheafy-trace}
+Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K))
+\longrightarrow
+R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K)
+\end{equation}
+Namely, this map is constructed as the composition
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K)) \to
+R\SheafHom_{\mathcal{O}_Y}(Rf_*L, Rf_*a(K)) \to
+R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K)
+$$
+where the first arrow is
+Cohomology, Remark
+\ref{cohomology-remark-projection-formula-for-internal-hom}
+and the second arrow is the counit $Rf_*a(K) \to K$ of the adjunction.
+
+\begin{lemma}
+\label{lemma-iso-on-RSheafHom}
+Let $f : X \to Y$ be a morphism of quasi-compact and quasi-separated schemes.
+Let $a$ be the right adjoint to
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$.
+Let $L \in D_\QCoh(\mathcal{O}_X)$ and $K \in D_\QCoh(\mathcal{O}_Y)$.
+Then the map (\ref{equation-sheafy-trace})
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K))
+\longrightarrow
+R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K)
+$$
+becomes an isomorphism after applying the functor
+$DQ_Y : D(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_Y)$
+discussed in Derived Categories of Schemes, Section
+\ref{perfect-section-better-coherator}.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense as $DQ_Y$ exists by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-better-coherator}.
+Since $DQ_Y$ is the right adjoint to the inclusion
+functor $D_\QCoh(\mathcal{O}_Y) \to D(\mathcal{O}_Y)$
+to prove the lemma we have to show that for any $M \in D_\QCoh(\mathcal{O}_Y)$
the map (\ref{equation-sheafy-trace}) induces a bijection
+$$
+\Hom_Y(M, Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K)))
+\longrightarrow
+\Hom_Y(M, R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K))
+$$
+To see this we use the following string of equalities
+\begin{align*}
+\Hom_Y(M, Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K)))
+& =
+\Hom_X(Lf^*M, R\SheafHom_{\mathcal{O}_X}(L, a(K))) \\
+& =
+\Hom_X(Lf^*M \otimes_{\mathcal{O}_X}^\mathbf{L} L, a(K)) \\
+& =
+\Hom_Y(Rf_*(Lf^*M \otimes_{\mathcal{O}_X}^\mathbf{L} L), K) \\
+& =
+\Hom_Y(M \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*L, K) \\
+& =
+\Hom_Y(M, R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K))
+\end{align*}
+The first equality holds by Cohomology, Lemma \ref{cohomology-lemma-adjoint}.
+The second equality by Cohomology, Lemma \ref{cohomology-lemma-internal-hom}.
+The third equality by construction of $a$.
+The fourth equality by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change} (this is the important step).
+The fifth by Cohomology, Lemma \ref{cohomology-lemma-internal-hom}.
+\end{proof}
+
+\begin{example}
+\label{example-iso-on-RSheafHom}
+The statement of Lemma \ref{lemma-iso-on-RSheafHom} is not true without
+applying the ``coherator'' $DQ_Y$. Indeed, suppose $Y = \Spec(R)$ and
+$X = \mathbf{A}^1_R$. Take $L = \mathcal{O}_X$ and $K = \mathcal{O}_Y$.
+The left hand side of the arrow is in $D_\QCoh(\mathcal{O}_Y)$ but
+the right hand side of the arrow is isomorphic to
+$\prod_{n \geq 0} \mathcal{O}_Y$ which is not quasi-coherent.
+\end{example}
+
+\begin{remark}
+\label{remark-iso-on-RSheafHom}
+In the situation of Lemma \ref{lemma-iso-on-RSheafHom} we have
+$$
+DQ_Y(Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K))) =
+Rf_* DQ_X(R\SheafHom_{\mathcal{O}_X}(L, a(K)))
+$$
+by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-pushforward-better-coherator}.
+Thus if $R\SheafHom_{\mathcal{O}_X}(L, a(K)) \in D_\QCoh(\mathcal{O}_X)$,
+then we can ``erase'' the $DQ_Y$ on the left hand side of the arrow.
+On the other hand, if we know that
+$R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K) \in D_\QCoh(\mathcal{O}_Y)$,
+then we can ``erase'' the $DQ_Y$ from the right hand side of the arrow.
+If both are true then we see that (\ref{equation-sheafy-trace})
+is an isomorphism. Combining this with
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}
+we see that $Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K)) \to
+R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K)$ is an isomorphism if
+\begin{enumerate}
+\item $L$ and $Rf_*L$ are perfect, or
+\item $K$ is bounded below and $L$ and $Rf_*L$ are pseudo-coherent.
+\end{enumerate}
+For (2) we use that $a(K)$ is bounded below if $K$
+is bounded below, see Lemma \ref{lemma-twisted-inverse-image-bounded-below}.
+\end{remark}
+
+\begin{example}
+\label{example-iso-on-RSheafHom-noetherian}
+Let $f : X \to Y$ be a proper morphism of Noetherian schemes,
+$L \in D^-_{\textit{Coh}}(X)$ and $K \in D^+_{\QCoh}(\mathcal{O}_Y)$.
+Then the map $Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K)) \to
+R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K)$ is an isomorphism.
+Namely, the complexes $L$ and $Rf_*L$ are pseudo-coherent by
+Derived Categories of Schemes, Lemmas
+\ref{perfect-lemma-identify-pseudo-coherent-noetherian} and
+\ref{perfect-lemma-direct-image-coherent}
+and the discussion in Remark \ref{remark-iso-on-RSheafHom} applies.
+\end{example}
+
+\begin{lemma}
+\label{lemma-iso-global-hom}
+Let $f : X \to Y$ be a morphism of quasi-separated and quasi-compact
+schemes.
+For all $L \in D_\QCoh(\mathcal{O}_X)$ and $K \in D_\QCoh(\mathcal{O}_Y)$
+(\ref{equation-sheafy-trace}) induces an isomorphism
+$R\Hom_X(L, a(K)) \to R\Hom_Y(Rf_*L, K)$ of global derived homs.
+\end{lemma}
+
+\begin{proof}
+By the construction in
+Cohomology, Section \ref{cohomology-section-global-RHom}
+we have
+$$
+R\Hom_X(L, a(K)) =
+R\Gamma(X, R\SheafHom_{\mathcal{O}_X}(L, a(K))) =
+R\Gamma(Y, Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K)))
+$$
+and
+$$
+R\Hom_Y(Rf_*L, K) = R\Gamma(Y, R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K))
+$$
+Thus the lemma is a consequence of Lemma \ref{lemma-iso-on-RSheafHom}.
+Namely, a map $E \to E'$ in $D(\mathcal{O}_Y)$ which induces
+an isomorphism $DQ_Y(E) \to DQ_Y(E')$ induces a quasi-isomorphism
+$R\Gamma(Y, E) \to R\Gamma(Y, E')$. Indeed we have
+$H^i(Y, E) = \Ext^i_Y(\mathcal{O}_Y, E) = \Hom(\mathcal{O}_Y[-i], E) =
+\Hom(\mathcal{O}_Y[-i], DQ_Y(E))$ because $\mathcal{O}_Y[-i]$
+is in $D_\QCoh(\mathcal{O}_Y)$ and $DQ_Y$ is the right adjoint
+to the inclusion functor $D_\QCoh(\mathcal{O}_Y) \to D(\mathcal{O}_Y)$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Right adjoint of pushforward and restriction to opens}
+\label{section-restriction-to-opens}
+
+\noindent
In this section we study the question to what extent the right adjoint
of pushforward commutes with restriction to open subschemes. This is
a base change question, so let us first discuss this more generally.
+
+\medskip\noindent
We often want to know whether the right adjoint to pushforward commutes
+with base change. Thus we consider a cartesian square
+\begin{equation}
+\label{equation-base-change}
+\vcenter{
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+}
+\end{equation}
+of quasi-compact and quasi-separated schemes.
+Denote
+$$
+a : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_X)
+\quad\text{and}\quad
+a' : D_\QCoh(\mathcal{O}_{Y'}) \to D_\QCoh(\mathcal{O}_{X'})
+$$
+the right adjoints to $Rf_*$ and $Rf'_*$
+(Lemma \ref{lemma-twisted-inverse-image}).
+Consider the base change map of
+Cohomology, Remark \ref{cohomology-remark-base-change}.
+It induces a transformation of functors
+$$
+Lg^* \circ Rf_* \longrightarrow Rf'_* \circ L(g')^*
+$$
+on derived categories of sheaves with quasi-coherent cohomology.
+Hence a transformation between the right adjoints in the opposite direction
+$$
+a \circ Rg_* \longleftarrow Rg'_* \circ a'
+$$
+
+\begin{lemma}
+\label{lemma-flat-precompose-pus}
+In diagram (\ref{equation-base-change}) assume that $g$ is flat or
+more generally that $f$ and $g$ are Tor independent. Then
+$a \circ Rg_* \leftarrow Rg'_* \circ a'$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+In this case the base change map
+$Lg^* \circ Rf_* K \longrightarrow Rf'_* \circ L(g')^*K$
+is an isomorphism for every $K$ in $D_\QCoh(\mathcal{O}_X)$ by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-compare-base-change}.
+Thus the corresponding transformation between adjoint functors
+is an isomorphism as well.
+\end{proof}
+
+\noindent
+Let $f : X \to Y$ be a morphism of quasi-compact and quasi-separated
+schemes. Let $V \subset Y$ be a quasi-compact open subscheme and set
+$U = f^{-1}(V)$. This gives a cartesian square
+$$
+\xymatrix{
+U \ar[r]_{j'} \ar[d]_{f|_U} & X \ar[d]^f \\
+V \ar[r]^j & Y
+}
+$$
+as in (\ref{equation-base-change}). By Lemma \ref{lemma-flat-precompose-pus}
+the map $\xi : a \circ Rj_* \leftarrow Rj'_* \circ a'$ is an isomorphism
+where $a$ and $a'$ are the right adjoints to
+$Rf_*$ and $R(f|_U)_*$. We obtain a transformation
+of functors $D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_U)$
+\begin{equation}
+\label{equation-sheafy}
+(j')^* \circ a \to
+(j')^* \circ a \circ Rj_* \circ j^* \xrightarrow{\xi^{-1}}
+(j')^* \circ Rj'_* \circ a' \circ j^* \to a' \circ j^*
+\end{equation}
+where the first arrow comes from $\text{id} \to Rj_* \circ j^*$
+and the final arrow from the isomorphism $(j')^* \circ Rj'_* \to \text{id}$.
+In particular, we see that (\ref{equation-sheafy}) is an isomorphism
+when evaluated on $K$ if and only if $a(K)|_U \to a(Rj_*(K|_V))|_U$
+is an isomorphism.
+
+\begin{example}
+\label{example-not-supported-on-inverse-image}
+There is a finite morphism $f : X \to Y$ of Noetherian schemes
+such that (\ref{equation-sheafy}) is not an isomorphism
+when evaluated on some
+$K \in D_{\textit{Coh}}(\mathcal{O}_Y)$.
+Namely, let $X = \Spec(B) \to Y = \Spec(A)$ with
+$A = k[x, \epsilon]$ where $k$ is a field and $\epsilon^2 = 0$ and
+$B = k[x] = A/(\epsilon)$. For $n \in \mathbf{N}$ set
+$M_n = A/(\epsilon, x^n)$. Observe that
+$$
+\Ext^i_A(B, M_n) = M_n,\quad i \geq 0
+$$
+because $B$ has the free periodic resolution
+$\ldots \to A \to A \to A$ with maps given by multiplication by $\epsilon$.
+Consider the object
+$K = \bigoplus M_n[n] = \prod M_n[n]$
+of $D_{\textit{Coh}}(A)$ (equality in $D(A)$ by
+Derived Categories, Lemmas \ref{derived-lemma-direct-sums} and
+\ref{derived-lemma-products}). Then we see that $a(K)$ corresponds
+to $R\Hom(B, K)$ by Example \ref{example-affine-twisted-inverse-image} and
+$$
+H^0(R\Hom(B, K)) = \Ext^0_A(B, K) =
\prod\nolimits_{n \geq 1} \Ext^n_A(B, M_n) =
+\prod\nolimits_{n \geq 1} M_n
+$$
by the above. But this module has elements that are not annihilated
by any power of $x$, whereas every element of the cohomology of $K$
is annihilated by a power of $x$. In other words, the map
(\ref{equation-sheafy}) with $V = D(x) \subset Y$ and
$U = D(x) \subset X$ cannot be an isomorphism when evaluated on the
complex $K$, because $(j')^*(a(K))$ is nonzero and $a'(j^*K)$ is zero.
+\end{example}
+
+\begin{lemma}
+\label{lemma-when-sheafy}
+Let $f : X \to Y$ be a morphism of quasi-compact and quasi-separated
+schemes. Let $a$ be the right adjoint to
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$.
+Let $V \subset Y$ be quasi-compact open with inverse image $U \subset X$.
+\begin{enumerate}
+\item For every $Q \in D_\QCoh^+(\mathcal{O}_Y)$
+supported on $Y \setminus V$ the image $a(Q)$ is supported on
+$X \setminus U$ if and only if (\ref{equation-sheafy})
+is an isomorphism on all $K$ in $D_\QCoh^+(\mathcal{O}_Y)$.
+\item For every $Q \in D_\QCoh(\mathcal{O}_Y)$
+supported on $Y \setminus V$ the image $a(Q)$ is supported on
+$X \setminus U$ if and only if (\ref{equation-sheafy})
+is an isomorphism on all $K$ in $D_\QCoh(\mathcal{O}_Y)$.
+\item If $a$ commutes with direct sums, then the equivalent conditions of
+(1) imply the equivalent conditions of (2).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $K \in D_\QCoh^+(\mathcal{O}_Y)$.
+Choose a distinguished triangle
+$$
+K \to Rj_*K|_V \to Q \to K[1]
+$$
+Observe that $Q$ is in $D_\QCoh^+(\mathcal{O}_Y)$
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-direct-image})
+and is supported on $Y \setminus V$
+(Derived Categories of Schemes, Definition
+\ref{perfect-definition-supported-on}).
+Applying $a$ we obtain a distinguished triangle
+$$
+a(K) \to a(Rj_*K|_V) \to a(Q) \to a(K)[1]
+$$
+on $X$. If $a(Q)$ is supported on $X \setminus U$, then
+restricting to $U$ the map $a(K)|_U \to a(Rj_*K|_V)|_U$ is an
+isomorphism, i.e., (\ref{equation-sheafy}) is an isomorphism on $K$.
+The converse is immediate.
+
+\medskip\noindent
+The proof of (2) is exactly the same as the proof of (1).
+
+\medskip\noindent
+Proof of (3). Assume the equivalent conditions of (1) hold.
+Set $T = Y \setminus V$.
+We will use the notation $D_{\QCoh, T}(\mathcal{O}_Y)$ and
+$D_{\QCoh, f^{-1}(T)}(\mathcal{O}_X)$ to denote complexes
+whose cohomology sheaves are supported on $T$ and $f^{-1}(T)$.
+Since $a$ commutes with direct sums, the strictly full, saturated, triangulated
+subcategory $\mathcal{D}$ with objects
+$$
+\{Q \in D_{\QCoh, T}(\mathcal{O}_Y) \mid
+a(Q) \in D_{\QCoh, f^{-1}(T)}(\mathcal{O}_X)\}
+$$
+is preserved by direct sums and hence derived colimits.
+On the other hand, the category $D_{\QCoh, T}(\mathcal{O}_Y)$
+is generated by a perfect object $E$
+(see Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-generator-with-support}).
+By assumption we see that $E \in \mathcal{D}$.
+By Derived Categories, Lemma \ref{derived-lemma-write-as-colimit}
+every object $Q$ of $D_{\QCoh, T}(\mathcal{O}_Y)$ is a derived
+colimit of a system $Q_1 \to Q_2 \to Q_3 \to \ldots$
+such that the cones of the transition maps are direct sums
+of shifts of $E$. Arguing by induction we see that
+$Q_n \in \mathcal{D}$ for all $n$ and finally that $Q$ is
+in $\mathcal{D}$. Thus the equivalent conditions of (2) hold.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-noetherian}
+Let $Y$ be a quasi-compact and quasi-separated scheme.
+Let $f : X \to Y$ be a proper morphism. If\footnote{This proof works for those
+morphisms of quasi-compact and quasi-separated schemes such that
+$Rf_*P$ is pseudo-coherent for all $P$ perfect on $X$. It follows
+easily from a theorem of Kiehl \cite{Kiehl} that this holds if
+$f$ is proper and pseudo-coherent. This is the correct generality
+for this lemma and some of the other results in this chapter.}
+\begin{enumerate}
+\item $f$ is flat and of finite presentation, or
+\item $Y$ is Noetherian
+\end{enumerate}
+then the equivalent conditions of Lemma \ref{lemma-when-sheafy} part (1)
+hold for all quasi-compact opens $V$ of $Y$.
+\end{lemma}
+
+\begin{proof}
+Let $Q \in D^+_\QCoh(\mathcal{O}_Y)$ be supported on $Y \setminus V$.
+To get a contradiction, assume that $a(Q)$ is not supported on
+$X \setminus U$. Then we can find a perfect complex $P_U$ on $U$
+and a nonzero map $P_U \to a(Q)|_U$ (follows from
+Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-bondal-van-den-Bergh}). Then using
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-lift-map-from-perfect-complex-with-support}
+we may assume there is a perfect complex $P$ on $X$ and a map
+$P \to a(Q)$ whose restriction to $U$ is nonzero.
+By definition of $a$ this map
+is adjoint to a map $Rf_*P \to Q$.
+
+\medskip\noindent
+The complex $Rf_*P$ is pseudo-coherent. In case (1) this follows
+from Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-pseudo-coherent-direct-image-general}.
+In case (2) this follows from
+Derived Categories of Schemes, Lemmas
+\ref{perfect-lemma-direct-image-coherent} and
+\ref{perfect-lemma-identify-pseudo-coherent-noetherian}.
+Thus we may apply
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-map-from-pseudo-coherent-to-complex-with-support}
+and get a map $I \to \mathcal{O}_Y$ of perfect complexes
+whose restriction to $V$ is an isomorphism such that the composition
+$I \otimes^\mathbf{L}_{\mathcal{O}_Y} Rf_*P \to Rf_*P \to Q$ is zero.
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change}
+we have $I \otimes^\mathbf{L}_{\mathcal{O}_Y} Rf_*P =
+Rf_*(Lf^*I \otimes^\mathbf{L}_{\mathcal{O}_X} P)$.
+We conclude that the composition
+$$
+Lf^*I \otimes^\mathbf{L}_{\mathcal{O}_X} P \to P \to a(Q)
+$$
+is zero. However, the restriction to $U$ is the map
+$P|_U \to a(Q)|_U$ which we assumed to be nonzero.
+This contradiction finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Right adjoint of pushforward and base change, I}
+\label{section-base-change-map}
+
+\noindent
+The map (\ref{equation-sheafy}) is a special case of a base change map.
+Namely, suppose that we have a cartesian diagram
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+of quasi-compact and quasi-separated schemes, i.e., a diagram as in
+(\ref{equation-base-change}). Assume $f$ and $g$ are {\bf Tor independent}.
+Then we can consider the morphism of functors
+$D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_{X'})$
+given by the composition
+\begin{equation}
+\label{equation-base-change-map}
+L(g')^* \circ a \to
+L(g')^* \circ a \circ Rg_* \circ Lg^* \leftarrow
+L(g')^* \circ Rg'_* \circ a' \circ Lg^* \to a' \circ Lg^*
+\end{equation}
+The first arrow comes from the adjunction map $\text{id} \to Rg_* Lg^*$
+and the last arrow from the adjunction map $L(g')^*Rg'_* \to \text{id}$.
+We need the assumption on Tor independence to invert the arrow
+in the middle, see Lemma \ref{lemma-flat-precompose-pus}.
+Alternatively, we can think of (\ref{equation-base-change-map}) by
+adjointness of $L(g')^*$ and $R(g')_*$ as a natural transformation
+$$
+a \to a \circ Rg_* \circ Lg^* \leftarrow Rg'_* \circ a' \circ Lg^*
+$$
where again the second arrow is invertible. If $M \in D_\QCoh(\mathcal{O}_X)$
+and $K \in D_\QCoh(\mathcal{O}_Y)$
+then on Yoneda functors this map is given by
+\begin{align*}
+\Hom_X(M, a(K))
+& =
+\Hom_Y(Rf_*M, K) \\
+& \to
+\Hom_Y(Rf_*M, Rg_* Lg^*K) \\
+& =
+\Hom_{Y'}(Lg^*Rf_*M, Lg^*K) \\
+& \leftarrow
+\Hom_{Y'}(Rf'_* L(g')^*M, Lg^*K) \\
+& =
+\Hom_{X'}(L(g')^*M, a'(Lg^*K)) \\
+& =
+\Hom_X(M, Rg'_*a'(Lg^*K))
+\end{align*}
(where the arrow pointing left is invertible by the base
+change theorem given in
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-compare-base-change})
+which makes things a little bit more explicit.
+
+\medskip\noindent
+In this section we first prove that the base change map satisfies
+some natural compatibilities with regards to stacking squares as in
+Cohomology, Remarks \ref{cohomology-remark-compose-base-change} and
+\ref{cohomology-remark-compose-base-change-horizontal} for the usual
+base change map. We suggest the reader skip the rest of this section
+on a first reading.
+
+\begin{lemma}
+\label{lemma-compose-base-change-maps}
+Consider a commutative diagram
+$$
+\xymatrix{
+X' \ar[r]_k \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^l \ar[d]_{g'} & Y \ar[d]^g \\
+Z' \ar[r]^m & Z
+}
+$$
+of quasi-compact and quasi-separated schemes where
+both diagrams are cartesian and where $f$ and $l$
+as well as $g$ and $m$ are Tor independent.
+Then the maps (\ref{equation-base-change-map})
+for the two squares compose to give the base
+change map for the outer rectangle (see proof for a precise statement).
+\end{lemma}
+
+\begin{proof}
+It follows from the assumptions that $g \circ f$ and $m$ are Tor
+independent (details omitted), hence the statement makes sense.
+In this proof we write $k^*$ in place of $Lk^*$ and $f_*$ instead
+of $Rf_*$. Let $a$, $b$, and $c$ be the right adjoints of
+Lemma \ref{lemma-twisted-inverse-image}
+for $f$, $g$, and $g \circ f$ and similarly for the primed versions.
+The arrow corresponding to the top square is the composition
+$$
+\gamma_{top} :
+k^* \circ a \to k^* \circ a \circ l_* \circ l^*
+\xleftarrow{\xi_{top}} k^* \circ k_* \circ a' \circ l^* \to a' \circ l^*
+$$
+where $\xi_{top} : k_* \circ a' \to a \circ l_*$
+is an isomorphism (hence can be inverted)
+and is the arrow ``dual'' to the base change map
+$l^* \circ f_* \to f'_* \circ k^*$. The outer arrows come
+from the canonical maps $1 \to l_* \circ l^*$ and $k^* \circ k_* \to 1$.
+Similarly for the second square we have
+$$
+\gamma_{bot} :
+l^* \circ b \to l^* \circ b \circ m_* \circ m^*
+\xleftarrow{\xi_{bot}} l^* \circ l_* \circ b' \circ m^* \to b' \circ m^*
+$$
+For the outer rectangle we get
+$$
+\gamma_{rect} :
+k^* \circ c \to k^* \circ c \circ m_* \circ m^*
+\xleftarrow{\xi_{rect}} k^* \circ k_* \circ c' \circ m^* \to c' \circ m^*
+$$
+We have $(g \circ f)_* = g_* \circ f_*$ and hence
+$c = a \circ b$ and similarly $c' = a' \circ b'$.
+The statement of the lemma is that $\gamma_{rect}$
+is equal to the composition
+$$
+k^* \circ c = k^* \circ a \circ b \xrightarrow{\gamma_{top}}
+a' \circ l^* \circ b \xrightarrow{\gamma_{bot}}
+a' \circ b' \circ m^* = c' \circ m^*
+$$
+To see this we contemplate the following diagram:
+$$
+\xymatrix{
+& & k^* \circ a \circ b \ar[d] \ar[lldd] \\
+& & k^* \circ a \circ l_* \circ l^* \circ b \ar[ld] \\
+k^* \circ a \circ b \circ m_* \circ m^* \ar[r] &
+k^* \circ a \circ l_* \circ l^* \circ b \circ m_* \circ m^* &
+k^* \circ k_* \circ a' \circ l^* \circ b \ar[u]_{\xi_{top}} \ar[d] \ar[ld] \\
+& k^*\circ k_* \circ a' \circ l^* \circ b \circ m_* \circ m^*
+\ar[u]_{\xi_{top}} \ar[rd] &
+a' \circ l^* \circ b \ar[d] \\
+k^* \circ k_* \circ a' \circ b' \circ m^* \ar[uu]_{\xi_{rect}} \ar[ddrr] &
+k^*\circ k_* \circ a' \circ l^* \circ l_* \circ b' \circ m^*
+\ar[u]_{\xi_{bot}} \ar[l] \ar[dr] &
+a' \circ l^* \circ b \circ m_* \circ m^* \\
+& & a' \circ l^* \circ l_* \circ b' \circ m^* \ar[u]_{\xi_{bot}} \ar[d] \\
+& & a' \circ b' \circ m^*
+}
+$$
+Going down the right hand side we have the composition and going
+down the left hand side we have $\gamma_{rect}$.
+All the quadrilaterals on the right hand side of this diagram commute
+by Categories, Lemma \ref{categories-lemma-properties-2-cat-cats}
+or more simply the discussion preceding
+Categories, Definition \ref{categories-definition-horizontal-composition}.
+Hence we see that it suffices to show the diagram
+$$
+\xymatrix{
+a \circ l_* \circ l^* \circ b \circ m_* &
+a \circ b \circ m_* \ar[l] \\
+k_* \circ a' \circ l^* \circ b \circ m_* \ar[u]_{\xi_{top}} & \\
+k_* \circ a' \circ l^* \circ l_* \circ b' \ar[u]_{\xi_{bot}} \ar[r] &
+k_* \circ a' \circ b' \ar[uu]_{\xi_{rect}}
+}
+$$
+becomes commutative if we invert the arrows $\xi_{top}$, $\xi_{bot}$,
+and $\xi_{rect}$ (note that this is different from asking the
+diagram to be commutative). However, the diagram
+$$
+\xymatrix{
+& a \circ l_* \circ l^* \circ b \circ m_* \\
+a \circ l_* \circ l^* \circ l_* \circ b'
+\ar[ru]^{\xi_{bot}} & &
+k_* \circ a' \circ l^* \circ b \circ m_* \ar[ul]_{\xi_{top}} \\
+& k_* \circ a' \circ l^* \circ l_* \circ b'
+\ar[ul]^{\xi_{top}} \ar[ur]_{\xi_{bot}}
+}
+$$
+commutes by Categories, Lemma \ref{categories-lemma-properties-2-cat-cats}.
+Since the diagrams
+$$
+\vcenter{
+\xymatrix{
a \circ l_* \circ l^* \circ b \circ m_* & a \circ b \circ m_* \ar[l] \\
+a \circ l_* \circ l^* \circ l_* \circ b' \ar[u] &
+a \circ l_* \circ b' \ar[l] \ar[u]
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+a \circ l_* \circ l^* \circ l_* \circ b' \ar[r] & a \circ l_* \circ b' \\
+k_* \circ a' \circ l^* \circ l_* \circ b' \ar[u] \ar[r] &
+k_* \circ a' \circ b' \ar[u]
+}
+}
+$$
+commute (see references cited) and since the composition of
+$l_* \to l_* \circ l^* \circ l_* \to l_*$ is the identity,
+we find that it suffices to prove that
+$$
k_* \circ a' \circ b' \xrightarrow{\xi_{top}} a \circ l_* \circ b'
\xrightarrow{\xi_{bot}} a \circ b \circ m_*
+$$
+is equal to $\xi_{rect}$ (via the identifications $a \circ b = c$
+and $a' \circ b' = c'$). This is the statement dual to
+Cohomology, Remark \ref{cohomology-remark-compose-base-change}
+and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-base-change-maps-horizontal}
+Consider a commutative diagram
+$$
+\xymatrix{
+X'' \ar[r]_{g'} \ar[d]_{f''} & X' \ar[r]_g \ar[d]_{f'} & X \ar[d]^f \\
+Y'' \ar[r]^{h'} & Y' \ar[r]^h & Y
+}
+$$
+of quasi-compact and quasi-separated schemes where
+both diagrams are cartesian and where $f$ and $h$
+as well as $f'$ and $h'$ are Tor independent.
+Then the maps (\ref{equation-base-change-map})
+for the two squares compose to give the base
+change map for the outer rectangle (see proof for a precise statement).
+\end{lemma}
+
+\begin{proof}
+It follows from the assumptions that $f$ and $h \circ h'$ are Tor
+independent (details omitted), hence the statement makes sense.
+In this proof we write $g^*$ in place of $Lg^*$ and $f_*$ instead
+of $Rf_*$. Let $a$, $a'$, and $a''$ be the right adjoints of
+Lemma \ref{lemma-twisted-inverse-image}
+for $f$, $f'$, and $f''$. The arrow corresponding to the right
+square is the composition
+$$
+\gamma_{right} :
+g^* \circ a \to g^* \circ a \circ h_* \circ h^*
+\xleftarrow{\xi_{right}} g^* \circ g_* \circ a' \circ h^* \to a' \circ h^*
+$$
+where $\xi_{right} : g_* \circ a' \to a \circ h_*$
+is an isomorphism (hence can be inverted)
+and is the arrow ``dual'' to the base change map
+$h^* \circ f_* \to f'_* \circ g^*$. The outer arrows come
+from the canonical maps $1 \to h_* \circ h^*$ and $g^* \circ g_* \to 1$.
+Similarly for the left square we have
+$$
+\gamma_{left} :
+(g')^* \circ a' \to (g')^* \circ a' \circ (h')_* \circ (h')^*
+\xleftarrow{\xi_{left}}
+(g')^* \circ (g')_* \circ a'' \circ (h')^* \to a'' \circ (h')^*
+$$
+For the outer rectangle we get
+$$
+\gamma_{rect} :
+k^* \circ a \to
+k^* \circ a \circ m_* \circ m^* \xleftarrow{\xi_{rect}}
+k^* \circ k_* \circ a'' \circ m^* \to
+a'' \circ m^*
+$$
+where $k = g \circ g'$ and $m = h \circ h'$.
+We have $k^* = (g')^* \circ g^*$ and $m^* = (h')^* \circ h^*$.
+The statement of the lemma is that $\gamma_{rect}$
+is equal to the composition
+$$
+k^* \circ a =
+(g')^* \circ g^* \circ a \xrightarrow{\gamma_{right}}
+(g')^* \circ a' \circ h^* \xrightarrow{\gamma_{left}}
+a'' \circ (h')^* \circ h^* = a'' \circ m^*
+$$
+To see this we contemplate the following diagram
+$$
+\xymatrix{
+& (g')^* \circ g^* \circ a \ar[d] \ar[ddl] \\
+& (g')^* \circ g^* \circ a \circ h_* \circ h^* \ar[ld] \\
+(g')^* \circ g^* \circ a \circ h_* \circ (h')_* \circ (h')^* \circ h^* &
+(g')^* \circ g^* \circ g_* \circ a' \circ h^*
+\ar[u]_{\xi_{right}} \ar[d] \ar[ld] \\
+(g')^* \circ g^* \circ g_* \circ a' \circ (h')_* \circ (h')^* \circ h^*
+\ar[u]_{\xi_{right}} \ar[dr] &
+(g')^* \circ a' \circ h^* \ar[d] \\
+(g')^* \circ g^* \circ g_* \circ (g')_* \circ a'' \circ (h')^* \circ h^*
+\ar[u]_{\xi_{left}} \ar[ddr] \ar[dr] &
+(g')^* \circ a' \circ (h')_* \circ (h')^* \circ h^* \\
+& (g')^*\circ (g')_* \circ a'' \circ (h')^* \circ h^*
+\ar[u]_{\xi_{left}} \ar[d] \\
+& a'' \circ (h')^* \circ h^*
+}
+$$
+Going down the right hand side we have the composition and going
+down the left hand side we have $\gamma_{rect}$.
+All the quadrilaterals on the right hand side of this diagram commute
+by Categories, Lemma \ref{categories-lemma-properties-2-cat-cats}
+or more simply the discussion preceding
+Categories, Definition \ref{categories-definition-horizontal-composition}.
+Hence we see that it suffices to show that
+$$
+g_* \circ (g')_* \circ a'' \xrightarrow{\xi_{left}}
+g_* \circ a' \circ (h')_* \xrightarrow{\xi_{right}}
+a \circ h_* \circ (h')_*
+$$
+is equal to $\xi_{rect}$. This is the statement dual to
+Cohomology, Remark \ref{cohomology-remark-compose-base-change-horizontal}
+and the proof is complete.
+\end{proof}
+
+\begin{remark}
+\label{remark-going-around}
+Consider a commutative diagram
+$$
+\xymatrix{
+X'' \ar[r]_{k'} \ar[d]_{f''} & X' \ar[r]_k \ar[d]_{f'} & X \ar[d]^f \\
+Y'' \ar[r]^{l'} \ar[d]_{g''} & Y' \ar[r]^l \ar[d]_{g'} & Y \ar[d]^g \\
+Z'' \ar[r]^{m'} & Z' \ar[r]^m & Z
+}
+$$
+of quasi-compact and quasi-separated schemes where
+all squares are cartesian and where
+$(f, l)$, $(g, m)$, $(f', l')$, $(g', m')$ are
+Tor independent pairs of maps.
+Let $a$, $a'$, $a''$, $b$, $b'$, $b''$ be the
+right adjoints of Lemma \ref{lemma-twisted-inverse-image}
+for $f$, $f'$, $f''$, $g$, $g'$, $g''$.
+Let us label the squares of the diagram $A$, $B$, $C$, $D$
+as follows
+$$
+\begin{matrix}
+A & B \\
+C & D
+\end{matrix}
+$$
+Then the maps (\ref{equation-base-change-map})
+for the squares are (where we use $k^* = Lk^*$, etc)
+$$
+\begin{matrix}
+\gamma_A : (k')^* \circ a' \to a'' \circ (l')^* &
+\gamma_B : k^* \circ a \to a' \circ l^* \\
+\gamma_C : (l')^* \circ b' \to b'' \circ (m')^* &
+\gamma_D : l^* \circ b \to b' \circ m^*
+\end{matrix}
+$$
+For the $2 \times 1$ and $1 \times 2$ rectangles we have four further
+base change maps
+$$
+\begin{matrix}
+\gamma_{A + B} : (k \circ k')^* \circ a \to a'' \circ (l \circ l')^* \\
+\gamma_{C + D} : (l \circ l')^* \circ b \to b'' \circ (m \circ m')^* \\
+\gamma_{A + C} : (k')^* \circ (a' \circ b') \to (a'' \circ b'') \circ (m')^* \\
+\gamma_{B + D} : k^* \circ (a \circ b) \to (a' \circ b') \circ m^*
+\end{matrix}
+$$
+By Lemma \ref{lemma-compose-base-change-maps-horizontal} we have
+$$
+\gamma_{A + B} = \gamma_A \circ \gamma_B, \quad
+\gamma_{C + D} = \gamma_C \circ \gamma_D
+$$
+and by Lemma \ref{lemma-compose-base-change-maps} we have
+$$
+\gamma_{A + C} = \gamma_C \circ \gamma_A, \quad
+\gamma_{B + D} = \gamma_D \circ \gamma_B
+$$
+Here it would be more correct to write
+$\gamma_{A + B} = (\gamma_A \star \text{id}_{l^*}) \circ
+(\text{id}_{(k')^*} \star \gamma_B)$ with notation as in
+Categories, Section \ref{categories-section-formal-cat-cat}
+and similarly for the others. However, we continue the
+abuse of notation used in the proofs of
+Lemmas \ref{lemma-compose-base-change-maps} and
+\ref{lemma-compose-base-change-maps-horizontal}
+of dropping $\star$ products with identities as one can figure
+out which ones to add as long as the source and target of the
transformation are known.
+Having said all of this we find (a priori) two transformations
+$$
+(k')^* \circ k^* \circ a \circ b
+\longrightarrow
+a'' \circ b'' \circ (m')^* \circ m^*
+$$
+namely
+$$
+\gamma_C \circ \gamma_A \circ \gamma_D \circ \gamma_B =
+\gamma_{A + C} \circ \gamma_{B + D}
+$$
+and
+$$
+\gamma_C \circ \gamma_D \circ \gamma_A \circ \gamma_B =
+\gamma_{C + D} \circ \gamma_{A + B}
+$$
The point of this remark is that these transformations
are equal. Namely, to see this it suffices to show that
+$$
+\xymatrix{
+(k')^* \circ a' \circ l^* \circ b \ar[r]_{\gamma_D} \ar[d]_{\gamma_A} &
+(k')^* \circ a' \circ b' \circ m^* \ar[d]^{\gamma_A} \\
+a'' \circ (l')^* \circ l^* \circ b \ar[r]^{\gamma_D} &
+a'' \circ (l')^* \circ b' \circ m^*
+}
+$$
+commutes. This is true by
+Categories, Lemma \ref{categories-lemma-properties-2-cat-cats}
+or more simply the discussion preceding
+Categories, Definition \ref{categories-definition-horizontal-composition}.
+\end{remark}
+
+
+
+
+
+
+
+\section{Right adjoint of pushforward and base change, II}
+\label{section-base-change-II}
+
+\noindent
+In this section we prove that the base change map of
+Section \ref{section-base-change-map} is an isomorphism
+in some cases. We first observe that it suffices to check
+over affine opens, provided formation of the right adjoint
+of pushforward commutes with restriction to opens.
+
+\begin{remark}
+\label{remark-check-over-affines}
+Consider a cartesian diagram
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+of quasi-compact and quasi-separated schemes with $(g, f)$ Tor independent.
+Let $V \subset Y$ and $V' \subset Y'$ be affine opens with
+$g(V') \subset V$. Form the cartesian diagrams
+$$
+\vcenter{
+\xymatrix{
+U \ar[r] \ar[d] & X \ar[d] \\
+V \ar[r] & Y
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+U' \ar[r] \ar[d] & X' \ar[d] \\
+V' \ar[r] & Y'
+}
+}
+$$
Assume that (\ref{equation-sheafy}) is an isomorphism with respect to $K$
and the first diagram, and that (\ref{equation-sheafy}) is an isomorphism
with respect to $Lg^*K$ and the second diagram.
+Then the restriction of the base change map (\ref{equation-base-change-map})
+$$
+L(g')^*a(K) \longrightarrow a'(Lg^*K)
+$$
+to $U'$ is isomorphic to the base change map
+(\ref{equation-base-change-map}) for $K|_V$ and the
+cartesian diagram
+$$
+\xymatrix{
+U' \ar[r] \ar[d] & U \ar[d] \\
+V' \ar[r] & V
+}
+$$
+This follows from the fact that (\ref{equation-sheafy})
+is a special case of the base change map (\ref{equation-base-change-map})
+and that the base change maps compose correctly if we stack squares
+horizontally, see Lemma \ref{lemma-compose-base-change-maps-horizontal}.
Thus in order to check that the base change map restricted to $U'$
is an isomorphism it suffices to work with the last diagram.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-more-base-change}
+In diagram (\ref{equation-base-change}) assume
+\begin{enumerate}
+\item $g : Y' \to Y$ is a morphism of affine schemes,
+\item $f : X \to Y$ is proper, and
+\item $f$ and $g$ are Tor independent.
+\end{enumerate}
+Then the base change map (\ref{equation-base-change-map}) induces an
+isomorphism
+$$
+L(g')^*a(K) \longrightarrow a'(Lg^*K)
+$$
+in the following cases
+\begin{enumerate}
+\item for all $K \in D_\QCoh(\mathcal{O}_X)$ if $f$
+is flat of finite presentation,
+\item for all $K \in D_\QCoh(\mathcal{O}_X)$ if $f$
+is perfect and $Y$ Noetherian,
+\item for $K \in D_\QCoh^+(\mathcal{O}_X)$ if $g$ has finite Tor dimension
+and $Y$ Noetherian.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Write $Y = \Spec(A)$ and $Y' = \Spec(A')$. As a base change of an affine
+morphism, the morphism $g'$ is affine. Let $M$ be a perfect generator
+for $D_\QCoh(\mathcal{O}_X)$, see Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-bondal-van-den-Bergh}. Then $L(g')^*M$ is a
+generator for $D_\QCoh(\mathcal{O}_{X'})$, see
+Derived Categories of Schemes, Remark \ref{perfect-remark-pullback-generator}.
+Hence it suffices to show that (\ref{equation-base-change-map})
+induces an isomorphism
+\begin{equation}
+\label{equation-iso}
+R\Hom_{X'}(L(g')^*M, L(g')^*a(K))
+\longrightarrow
+R\Hom_{X'}(L(g')^*M, a'(Lg^*K))
+\end{equation}
+of global hom complexes, see
+Cohomology, Section \ref{cohomology-section-global-RHom},
+as this will imply the cone of $L(g')^*a(K) \to a'(Lg^*K)$
+is zero.
+The structure of the proof is as follows: we will first show that
+these Hom complexes are isomorphic and in the last part of the proof
+we will show that the isomorphism is induced by (\ref{equation-iso}).
+
+\medskip\noindent
+The left hand side. Because $M$ is perfect, the canonical map
+$$
+R\Hom_X(M, a(K)) \otimes^\mathbf{L}_A A'
+\longrightarrow
+R\Hom_{X'}(L(g')^*M, L(g')^*a(K))
+$$
+is an isomorphism by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-morphism-and-hom-out-of-perfect}.
+We can combine this with the isomorphism
+$R\Hom_Y(Rf_*M, K) = R\Hom_X(M, a(K))$
+of Lemma \ref{lemma-iso-global-hom}
+to get that the left hand side equals
+$R\Hom_Y(Rf_*M, K) \otimes^\mathbf{L}_A A'$.
+
+\medskip\noindent
+The right hand side. Here we first use the isomorphism
+$$
+R\Hom_{X'}(L(g')^*M, a'(Lg^*K)) = R\Hom_{Y'}(Rf'_*L(g')^*M, Lg^*K)
+$$
of Lemma \ref{lemma-iso-global-hom}. Then we use that the base change
map $Lg^*Rf_*M \to Rf'_*L(g')^*M$ is an isomorphism by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-compare-base-change}.
+Hence we may rewrite this as $R\Hom_{Y'}(Lg^*Rf_*M, Lg^*K)$.
+Since $Y$, $Y'$ are affine and $K$, $Rf_*M$ are in $D_\QCoh(\mathcal{O}_Y)$
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-direct-image})
+we have a canonical map
+$$
+\beta :
+R\Hom_Y(Rf_*M, K) \otimes^\mathbf{L}_A A'
+\longrightarrow
+R\Hom_{Y'}(Lg^*Rf_*M, Lg^*K)
+$$
+in $D(A')$. This is the arrow
+More on Algebra, Equation (\ref{more-algebra-equation-base-change-RHom})
+where we have used Derived Categories of Schemes, Lemmas
+\ref{perfect-lemma-affine-compare-bounded} and
+\ref{perfect-lemma-quasi-coherence-internal-hom}
+to translate back and forth into algebra.
+\begin{enumerate}
+\item If $f$ is flat and of finite presentation, the complex $Rf_*M$
+is perfect on $Y$ by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}
+and $\beta$ is an isomorphism by
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom} part (1).
+\item If $f$ is perfect and $Y$ Noetherian, the complex $Rf_*M$
+is perfect on $Y$ by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-perfect-proper-perfect-direct-image}
+and $\beta$ is an isomorphism as before.
\item If $g$ has finite Tor dimension and $Y$ is Noetherian,
+the complex $Rf_*M$ is pseudo-coherent on $Y$
+(Derived Categories of Schemes, Lemmas
+\ref{perfect-lemma-direct-image-coherent} and
+\ref{perfect-lemma-identify-pseudo-coherent-noetherian})
+and $\beta$ is an isomorphism by
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom} part (4).
+\end{enumerate}
+We conclude that we obtain the same answer as in the previous paragraph.
+
+\medskip\noindent
+In the rest of the proof we show that the identifications of
+the left and right hand side of (\ref{equation-iso})
+given in the second and third paragraph are in fact given by
+(\ref{equation-iso}). To make our formulas manageable
we will use $(-, -)_X = R\Hom_X(-, -)$, use $- \otimes A'$
instead of $- \otimes_A^\mathbf{L} A'$, and we will abbreviate
+$g^* = Lg^*$ and $f_* = Rf_*$. Consider the following
+commutative diagram
+$$
+\xymatrix{
+((g')^*M, (g')^*a(K))_{X'} \ar[d] &
+(M, a(K))_X \otimes A' \ar[l]^-\alpha \ar[d] &
+(f_*M, K)_Y \otimes A' \ar@{=}[l] \ar[d] \\
+((g')^*M, (g')^*a(g_*g^*K))_{X'} &
+(M, a(g_*g^*K))_X \otimes A' \ar[l]^-\alpha &
+(f_*M, g_*g^*K)_Y \otimes A' \ar@{=}[l] \ar@/_4pc/[dd]_{\mu'} \\
+((g')^*M, (g')^*g'_*a'(g^*K))_{X'} \ar[u] \ar[d] &
+(M, g'_*a'(g^*K))_X \otimes A' \ar[u] \ar[l]^-\alpha \ar[ld]^\mu &
(f_*M, K)_Y \otimes A' \ar[d]^\beta \\
+((g')^*M, a'(g^*K))_{X'} &
+(f'_*(g')^*M, g^*K)_{Y'} \ar@{=}[l] \ar[r] &
+(g^*f_*M, g^*K)_{Y'}
+}
+$$
+The arrows labeled $\alpha$ are the maps from
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-morphism-and-hom-out-of-perfect}
+for the diagram with corners $X', X, Y', Y$.
+The upper part of the diagram is commutative as the horizontal arrows are
+functorial in the entries.
+The middle vertical arrows come from the invertible transformation
+$g'_* \circ a' \to a \circ g_*$ of Lemma \ref{lemma-flat-precompose-pus}
+and therefore the middle square is commutative.
+Going down the left hand side is (\ref{equation-iso}).
+The upper horizontal arrows provide the identifications used in the
+second paragraph of the proof.
+The lower horizontal arrows including $\beta$ provide the identifications
+used in the third paragraph of the proof. Given $E \in D(A)$,
+$E' \in D(A')$, and $c : E \to E'$ in $D(A)$ we will denote
+$\mu_c : E \otimes A' \to E'$ the map induced by $c$
+and the adjointness of restriction and base change;
+if $c$ is clear we write $\mu = \mu_c$, i.e., we
drop $c$ from the notation. The map $\mu$ in the diagram is of this
form with $c$ given by the identification
$(M, g'_*a'(g^*K))_X = ((g')^*M, a'(g^*K))_{X'}$;
the triangle involving $\mu$ is commutative by
+Derived Categories of Schemes, Remark \ref{perfect-remark-multiplication-map}.
+
+\medskip\noindent
+Observe that
+$$
+\xymatrix{
+(M, a(g_*g^*K))_X &
+(f_*M, g_* g^*K)_Y \ar@{=}[l] &
+(g^*f_*M, g^*K)_{Y'} \ar@{=}[l] \\
+(M, g'_* a'(g^*K))_X \ar[u] &
+((g')^*M, a'(g^*K))_{X'} \ar@{=}[l] &
+(f'_*(g')^*M, g^*K)_{Y'} \ar@{=}[l] \ar[u]
+}
+$$
+is commutative by the very definition of the transformation
$g'_* \circ a' \to a \circ g_*$. With $\mu'$ as above,
corresponding to the identification
$(f_*M, g_*g^*K)_Y = (g^*f_*M, g^*K)_{Y'}$, the
hexagon commutes as well. Thus it suffices to show that
+$\beta$ is equal to the composition of
$(f_*M, K)_Y \otimes A' \to (f_*M, g_*g^*K)_Y \otimes A'$
+and $\mu'$. To do this, it suffices to prove the two induced maps
+$(f_*M, K)_Y \to (g^*f_*M, g^*K)_{Y'}$ are the same.
+In other words, it suffices to show the diagram
+$$
+\xymatrix{
+R\Hom_A(E, K) \ar[rr]_{\text{induced by }\beta} \ar[rd] & &
+R\Hom_{A'}(E \otimes_A^\mathbf{L} A', K \otimes_A^\mathbf{L} A') \\
+& R\Hom_A(E, K \otimes_A^\mathbf{L} A') \ar[ru]
+}
+$$
+commutes for all $E, K \in D(A)$. Since this is how $\beta$ is constructed in
+More on Algebra, Section \ref{more-algebra-section-base-change-RHom}
+the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Right adjoint of pushforward and trace maps}
+\label{section-trace}
+
+\noindent
+Let $f : X \to Y$ be a morphism of quasi-compact and quasi-separated
+schemes. Let $a : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_X)$
+be the right adjoint as in Lemma \ref{lemma-twisted-inverse-image}. By
+Categories, Section \ref{categories-section-adjoint} we obtain a
+transformation of functors
+$$
+\text{Tr}_f : Rf_* \circ a \longrightarrow \text{id}
+$$
+The corresponding map $\text{Tr}_{f, K} : Rf_*a(K) \longrightarrow K$
+for $K \in D_\QCoh(\mathcal{O}_Y)$ is sometimes called the {\it trace map}.
+This is the map which has the property that the bijection
+$$
+\Hom_X(L, a(K)) \longrightarrow \Hom_Y(Rf_*L, K)
+$$
+for $L \in D_\QCoh(\mathcal{O}_X)$ which characterizes the right adjoint
+is given by
+$$
+\varphi \longmapsto \text{Tr}_{f, K} \circ Rf_*\varphi
+$$
+The map (\ref{equation-sheafy-trace})
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(L, a(K))
+\longrightarrow
+R\SheafHom_{\mathcal{O}_Y}(Rf_*L, K)
+$$
+comes about by composition with $\text{Tr}_{f, K}$.
+Every trace map we are going to consider in this section will be a
+special case of this trace map. Before we discuss some special cases
+we show that formation of the trace map commutes with base change.
+
+\begin{lemma}[Trace map and base change]
+\label{lemma-trace-map-and-base-change}
+Suppose we have a diagram (\ref{equation-base-change}) where $f$ and $g$
are Tor independent. Then the maps
+$1 \star \text{Tr}_f : Lg^* \circ Rf_* \circ a \to Lg^*$ and
+$\text{Tr}_{f'} \star 1 : Rf'_* \circ a' \circ Lg^* \to Lg^*$
+agree via the base change maps
+$\beta : Lg^* \circ Rf_* \to Rf'_* \circ L(g')^*$
+(Cohomology, Remark \ref{cohomology-remark-base-change})
+and $\alpha : L(g')^* \circ a \to a' \circ Lg^*$
+(\ref{equation-base-change-map}).
+More precisely, the diagram
+$$
+\xymatrix{
+Lg^* \circ Rf_* \circ a
+\ar[d]_{\beta \star 1} \ar[r]_-{1 \star \text{Tr}_f} &
+Lg^* \\
+Rf'_* \circ L(g')^* \circ a \ar[r]^{1 \star \alpha} &
+Rf'_* \circ a' \circ Lg^* \ar[u]_{\text{Tr}_{f'} \star 1}
+}
+$$
+of transformations of functors commutes.
+\end{lemma}
+
+\begin{proof}
+In this proof we write $f_*$ for $Rf_*$ and $g^*$ for $Lg^*$ and we
+drop $\star$ products with identities as one can figure out which ones
to add as long as the source and target of the transformation are known.
+Recall that $\beta : g^* \circ f_* \to f'_* \circ (g')^*$ is an isomorphism
+and that $\alpha$ is defined using
+the isomorphism $\beta^\vee : g'_* \circ a' \to a \circ g_*$
+which is the adjoint of $\beta$, see Lemma \ref{lemma-flat-precompose-pus}
+and its proof. First we note that the top horizontal arrow
+of the diagram in the lemma is equal to the composition
+$$
+g^* \circ f_* \circ a \to
+g^* \circ f_* \circ a \circ g_* \circ g^* \to
+g^* \circ g_* \circ g^* \to g^*
+$$
+where the first arrow is the unit for $(g^*, g_*)$, the second arrow
+is $\text{Tr}_f$, and the third arrow is the counit for $(g^*, g_*)$.
+This is a simple consequence of the fact that the composition
+$g^* \to g^* \circ g_* \circ g^* \to g^*$ of unit and counit is the identity.
+Consider the diagram
+$$
+\xymatrix{
+& g^* \circ f_* \circ a \ar[ld]_\beta \ar[d] \ar[r]_{\text{Tr}_f} & g^* \\
+f'_* \circ (g')^* \circ a \ar[dr] &
+g^* \circ f_* \circ a \circ g_* \circ g^* \ar[d]_\beta \ar[ru] &
+g^* \circ f_* \circ g'_* \circ a' \circ g^* \ar[l]_{\beta^\vee} \ar[d]_\beta &
+f'_* \circ a' \circ g^* \ar[lu]_{\text{Tr}_{f'}} \\
+& f'_* \circ (g')^* \circ a \circ g_* \circ g^* &
+f'_* \circ (g')^* \circ g'_* \circ a' \circ g^* \ar[ru] \ar[l]_{\beta^\vee}
+}
+$$
In this diagram the two squares commute by
+Categories, Lemma \ref{categories-lemma-properties-2-cat-cats}
+or more simply the discussion preceding
+Categories, Definition \ref{categories-definition-horizontal-composition}.
+The triangle commutes by the discussion above. By
+Categories, Lemma
+\ref{categories-lemma-transformation-between-functors-and-adjoints}
+the square
+$$
+\xymatrix{
+g^* \circ f_* \circ g'_* \circ a' \ar[d]_{\beta^\vee} \ar[r]_-\beta &
+f'_* \circ (g')^* \circ g'_* \circ a' \ar[d] \\
+g^* \circ f_* \circ a \circ g_* \ar[r] &
+\text{id}
+}
+$$
+commutes which implies the pentagon in the big diagram commutes.
Since $\beta$ and $\beta^\vee$ are isomorphisms, and since going around
the outside of the big diagram equals
$\text{Tr}_f \circ \alpha \circ \beta$ by definition, this proves the lemma.
+\end{proof}
+
+\noindent
+Let $f : X \to Y$ be a morphism of quasi-compact and quasi-separated
+schemes. Let $a : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_X)$
+be the right adjoint of $Rf_*$ as in
+Lemma \ref{lemma-twisted-inverse-image}. By
+Categories, Section \ref{categories-section-adjoint} we obtain a
+transformation of functors
+$$
+\eta_f : \text{id} \to a \circ Rf_*
+$$
+which is called the unit of the adjunction.
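
\medskip\noindent
For example, if $f$ corresponds to a ring map $A \to B$ as in
Example \ref{example-affine-twisted-inverse-image}, then for $M$ in
$D(B) = D_\QCoh(\mathcal{O}_X)$ the unit $\eta_f$ evaluated at $M$
unwinds to the canonical map
$$
M \longrightarrow R\Hom(B, M)
$$
which in degree zero sends a section $m$ to the $A$-linear map
$b \mapsto bm$.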
+
+\begin{lemma}
+\label{lemma-unit-and-base-change}
+Suppose we have a diagram (\ref{equation-base-change}) where $f$ and $g$
+are tor independent. Then the maps
+$1 \star \eta_f : L(g')^* \to L(g')^* \circ a \circ Rf_*$ and
+$\eta_{f'} \star 1 : L(g')^* \to a' \circ Rf'_* \circ L(g')^*$
+agree via the base change maps
+$\beta : Lg^* \circ Rf_* \to Rf'_* \circ L(g')^*$
+(Cohomology, Remark \ref{cohomology-remark-base-change})
+and $\alpha : L(g')^* \circ a \to a' \circ Lg^*$
+(\ref{equation-base-change-map}).
+More precisely, the diagram
+$$
+\xymatrix{
+L(g')^* \ar[r]_-{1 \star \eta_f} \ar[d]_{\eta_{f'} \star 1} &
+L(g')^* \circ a \circ Rf_* \ar[d]^\alpha \\
+a' \circ Rf'_* \circ L(g')^* &
+a' \circ Lg^* \circ Rf_* \ar[l]_-\beta
+}
+$$
+of transformations of functors commutes.
+\end{lemma}
+
+\begin{proof}
+This proof is dual to the proof of Lemma \ref{lemma-trace-map-and-base-change}.
+In this proof we write $f_*$ for $Rf_*$ and $g^*$ for $Lg^*$ and we
+drop $\star$ products with identities as one can figure out which ones
to add as long as the source and target of the transformation are known.
+Recall that $\beta : g^* \circ f_* \to f'_* \circ (g')^*$ is an isomorphism
+and that $\alpha$ is defined using
+the isomorphism $\beta^\vee : g'_* \circ a' \to a \circ g_*$
+which is the adjoint of $\beta$, see Lemma \ref{lemma-flat-precompose-pus}
+and its proof. First we note that the left vertical arrow
+of the diagram in the lemma is equal to the composition
+$$
+(g')^* \to (g')^* \circ g'_* \circ (g')^* \to
+(g')^* \circ g'_* \circ a' \circ f'_* \circ (g')^* \to
+a' \circ f'_* \circ (g')^*
+$$
+where the first arrow is the unit for $((g')^*, g'_*)$, the second arrow
+is $\eta_{f'}$, and the third arrow is the counit for $((g')^*, g'_*)$.
+This is a simple consequence of the fact that the composition
+$(g')^* \to (g')^* \circ (g')_* \circ (g')^* \to (g')^*$
+of unit and counit is the identity. Consider the diagram
+$$
+\xymatrix{
+& (g')^* \circ a \circ f_* \ar[r] &
+(g')^* \circ a \circ g_* \circ g^* \circ f_*
+\ar[ld]_\beta \\
+(g')^* \ar[ru]^{\eta_f} \ar[dd]_{\eta_{f'}} \ar[rd] &
+(g')^* \circ a \circ g_* \circ f'_* \circ (g')^* &
+(g')^* \circ g'_* \circ a' \circ g^* \circ f_*
+\ar[u]_{\beta^\vee} \ar[ld]_\beta \ar[d] \\
+& (g')^* \circ g'_* \circ a' \circ f'_* \circ (g')^*
+\ar[ld] \ar[u]_{\beta^\vee} &
+a' \circ g^* \circ f_* \ar[lld]^\beta \\
+a' \circ f'_* \circ (g')^*
+}
+$$
In this diagram the two squares commute by
+Categories, Lemma \ref{categories-lemma-properties-2-cat-cats}
+or more simply the discussion preceding
+Categories, Definition \ref{categories-definition-horizontal-composition}.
+The triangle commutes by the discussion above. By the dual of
+Categories, Lemma
+\ref{categories-lemma-transformation-between-functors-and-adjoints}
+the square
+$$
+\xymatrix{
\text{id} \ar[r] \ar[d] &
g'_* \circ a' \circ f'_* \circ (g')^* \ar[d]^{\beta^\vee} \\
a \circ g_* \circ g^* \circ f_* \ar[r]^\beta &
a \circ g_* \circ f'_* \circ (g')^*
+}
+$$
+commutes which implies the pentagon in the big diagram commutes.
Since $\beta$ and $\beta^\vee$ are isomorphisms, and since going around
the outside of the big diagram equals
$\beta \circ \alpha \circ \eta_f$ by definition, this proves the lemma.
+\end{proof}
+
+\begin{example}
+\label{example-trace-affine}
+Let $A \to B$ be a ring map. Let $Y = \Spec(A)$ and $X = \Spec(B)$
+and $f : X \to Y$ the morphism corresponding to $A \to B$. As seen
+in Example \ref{example-affine-twisted-inverse-image}
+the right adjoint of
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+sends an object $K$ of $D(A) = D_\QCoh(\mathcal{O}_Y)$ to $R\Hom(B, K)$ in
+$D(B) = D_\QCoh(\mathcal{O}_X)$. The trace map is the map
+$$
+\text{Tr}_{f, K} : R\Hom(B, K) \longrightarrow R\Hom(A, K) = K
+$$
+induced by the $A$-module map $A \to B$.
+\end{example}
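
\medskip\noindent
Continuing Example \ref{example-trace-affine}, suppose $B = A/(f)$ for
some nonzerodivisor $f \in A$ and take $K = A$. Computing $R\Hom(B, A)$
by means of the free resolution $(A \xrightarrow{f} A)$ of $B$ over $A$
we find
$$
a(A) = R\Hom(B, A) \cong B[-1]
$$
in $D(B)$. Thus for this closed immersion of codimension $1$ the object
$a(\mathcal{O}_Y)$ is the invertible module $B$ placed in cohomological
degree $1$.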
+
+
+
+
+\section{Right adjoint of pushforward and pullback}
+\label{section-compare-with-pullback}
+
+\noindent
+Let $f : X \to Y$ be a morphism of quasi-compact and quasi-separated
+schemes. Let $a$ be the right adjoint of pushforward as in
+Lemma \ref{lemma-twisted-inverse-image}. For $K, L \in D_\QCoh(\mathcal{O}_Y)$
+there is a canonical map
+$$
+Lf^*K \otimes^\mathbf{L}_{\mathcal{O}_X} a(L)
+\longrightarrow
+a(K \otimes_{\mathcal{O}_Y}^\mathbf{L} L)
+$$
+Namely, this map is adjoint to a map
+$$
+Rf_*(Lf^*K \otimes^\mathbf{L}_{\mathcal{O}_X} a(L)) =
+K \otimes^\mathbf{L}_{\mathcal{O}_Y} Rf_*(a(L))
+\longrightarrow
+K \otimes^\mathbf{L}_{\mathcal{O}_Y} L
+$$
+(equality by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change})
+for which we use the trace map $Rf_*a(L) \to L$.
+When $L = \mathcal{O}_Y$ we obtain a map
+\begin{equation}
+\label{equation-compare-with-pullback}
+Lf^*K \otimes^\mathbf{L}_{\mathcal{O}_X} a(\mathcal{O}_Y) \longrightarrow a(K)
+\end{equation}
+functorial in $K$ and compatible with distinguished triangles.
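
\medskip\noindent
In the situation of Example \ref{example-trace-affine}, i.e., for a
morphism of affine schemes corresponding to a ring map $A \to B$,
unwinding the definitions shows that (\ref{equation-compare-with-pullback})
becomes the canonical map
$$
K \otimes_A^\mathbf{L} R\Hom(B, A) \longrightarrow R\Hom(B, K)
$$
in $D(B)$, which in degree zero sends $x \otimes \varphi$ to the
$A$-linear map $b \mapsto \varphi(b)x$.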
+
+\begin{lemma}
+\label{lemma-compare-with-pullback-perfect}
+Let $f : X \to Y$ be a morphism of quasi-compact and quasi-separated
+schemes. The map
+$Lf^*K \otimes^\mathbf{L}_{\mathcal{O}_X} a(L) \to
+a(K \otimes_{\mathcal{O}_Y}^\mathbf{L} L)$
+defined above for $K, L \in D_\QCoh(\mathcal{O}_Y)$
+is an isomorphism if $K$ is perfect. In particular,
+(\ref{equation-compare-with-pullback}) is an isomorphism if $K$ is perfect.
+\end{lemma}
+
+\begin{proof}
+Let $K^\vee$ be the ``dual'' to $K$, see
+Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+For $M \in D_\QCoh(\mathcal{O}_X)$ we have
+\begin{align*}
+\Hom_{D(\mathcal{O}_Y)}(Rf_*M, K \otimes^\mathbf{L}_{\mathcal{O}_Y} L)
+& =
+\Hom_{D(\mathcal{O}_Y)}(
+Rf_*M \otimes^\mathbf{L}_{\mathcal{O}_Y} K^\vee, L) \\
+& =
+\Hom_{D(\mathcal{O}_X)}(
+M \otimes^\mathbf{L}_{\mathcal{O}_X} Lf^*K^\vee, a(L)) \\
+& =
+\Hom_{D(\mathcal{O}_X)}(M,
+Lf^*K \otimes^\mathbf{L}_{\mathcal{O}_X} a(L))
+\end{align*}
+The first and third equalities hold by the universal property of the
+dual $K^\vee$ (and of its pullback $Lf^*K^\vee$, which is dual to $Lf^*K$).
+The second equality holds by the definition of $a$ and the projection formula
+(Cohomology, Lemma \ref{cohomology-lemma-projection-formula-perfect})
+or the more general Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change}.
+Hence the result by the Yoneda lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restriction-compare-with-pullback}
+Suppose we have a diagram (\ref{equation-base-change}) where $f$ and $g$
+are tor independent. Let $K \in D_\QCoh(\mathcal{O}_Y)$. The diagram
+$$
+\xymatrix{
+L(g')^*(Lf^*K \otimes^\mathbf{L}_{\mathcal{O}_X} a(\mathcal{O}_Y))
+\ar[r] \ar[d] & L(g')^*a(K) \ar[d] \\
+L(f')^*Lg^*K \otimes_{\mathcal{O}_{X'}}^\mathbf{L} a'(\mathcal{O}_{Y'})
+\ar[r] & a'(Lg^*K)
+}
+$$
+commutes where the horizontal arrows are the maps
+(\ref{equation-compare-with-pullback}) for $K$ and $Lg^*K$
+and the vertical maps are constructed using
+Cohomology, Remark \ref{cohomology-remark-base-change} and
+(\ref{equation-base-change-map}).
+\end{lemma}
+
+\begin{proof}
+In this proof we will write $f_*$ for $Rf_*$ and $f^*$ for $Lf^*$, etc,
+and we will write $\otimes$ for $\otimes^\mathbf{L}_{\mathcal{O}_X}$, etc.
+Let us write (\ref{equation-compare-with-pullback}) as the composition
+\begin{align*}
+f^*K \otimes a(\mathcal{O}_Y)
+& \to
+a(f_*(f^*K \otimes a(\mathcal{O}_Y))) \\
+& \leftarrow
+a(K \otimes f_*a(\mathcal{O}_Y)) \\
+& \to
+a(K \otimes \mathcal{O}_Y) \\
+& \to
+a(K)
+\end{align*}
+Here the first arrow is the unit $\eta_f$, the second arrow is $a$
+applied to Cohomology, Equation
+(\ref{cohomology-equation-projection-formula-map}) which is an
+isomorphism by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change}, the third arrow is
+$a$ applied to $\text{id}_K \otimes \text{Tr}_f$, and the fourth
+arrow is $a$ applied to the isomorphism $K \otimes \mathcal{O}_Y = K$.
+The proof of the lemma consists in showing that each of these
+maps gives rise to a commutative square as in the statement of the lemma.
+For $\eta_f$ and $\text{Tr}_f$ this is
+Lemmas \ref{lemma-unit-and-base-change} and
+\ref{lemma-trace-map-and-base-change}.
+For the arrow using Cohomology, Equation
+(\ref{cohomology-equation-projection-formula-map})
+this is Cohomology, Remark \ref{cohomology-remark-compatible-with-diagram}.
+For the multiplication map it is clear. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-on-open}
+Let $f : X \to Y$ be a proper morphism of Noetherian schemes. Let $V \subset Y$
+be an open such that $f^{-1}(V) \to V$ is an isomorphism. Then for
+$K \in D_\QCoh^+(\mathcal{O}_Y)$ the map (\ref{equation-compare-with-pullback})
+restricts to an isomorphism over $f^{-1}(V)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-proper-noetherian} the map (\ref{equation-sheafy}) is an
+isomorphism for objects of $D_\QCoh^+(\mathcal{O}_Y)$. Hence
+Lemma \ref{lemma-restriction-compare-with-pullback} tells us the
+restriction of (\ref{equation-compare-with-pullback}) for $K$
+to $f^{-1}(V)$ is the map (\ref{equation-compare-with-pullback})
+for $K|_V$ and $f^{-1}(V) \to V$. Thus it suffices to show that
+the map is an isomorphism when $f$ is the identity morphism. This is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-transitivity-compare-with-pullback}
+Let $f : X \to Y$ and $g : Y \to Z$ be composable morphisms of quasi-compact
+and quasi-separated schemes and set $h = g \circ f$. Let $a, b, c$ be the
+adjoints of Lemma \ref{lemma-twisted-inverse-image} for $f, g, h$.
+For any $K \in D_\QCoh(\mathcal{O}_Z)$ the diagram
+$$
+\xymatrix{
+Lf^*(Lg^*K \otimes_{\mathcal{O}_Y}^\mathbf{L}
+b(\mathcal{O}_Z)) \otimes_{\mathcal{O}_X}^\mathbf{L} a(\mathcal{O}_Y)
+\ar@{=}[d] \ar[r] &
+a(Lg^*K \otimes_{\mathcal{O}_Y}^\mathbf{L} b(\mathcal{O}_Z)) \ar[r] &
+a(b(K)) \ar@{=}[d] \\
+Lh^*K \otimes_{\mathcal{O}_X}^\mathbf{L} Lf^*b(\mathcal{O}_Z)
+\otimes_{\mathcal{O}_X}^\mathbf{L} a(\mathcal{O}_Y) \ar[r] &
+Lh^*K \otimes_{\mathcal{O}_X}^\mathbf{L} c(\mathcal{O}_Z) \ar[r] &
+c(K)
+}
+$$
+is commutative where the arrows are (\ref{equation-compare-with-pullback})
+and we have used $Lh^* = Lf^* \circ Lg^*$ and $c = a \circ b$.
+\end{lemma}
+
+\begin{proof}
+In this proof we will write $f_*$ for $Rf_*$ and $f^*$ for $Lf^*$, etc,
+and we will write $\otimes$ for $\otimes^\mathbf{L}_{\mathcal{O}_X}$, etc.
+The composition of the top arrows is adjoint to a map
+$$
+g_*f_*(f^*(g^*K \otimes b(\mathcal{O}_Z)) \otimes a(\mathcal{O}_Y)) \to K
+$$
+The left hand side is equal to
+$K \otimes g_*f_*(f^*b(\mathcal{O}_Z) \otimes a(\mathcal{O}_Y))$ by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change}
+and inspection of the definitions shows the map comes from the map
+$$
+g_*f_*(f^*b(\mathcal{O}_Z) \otimes a(\mathcal{O}_Y))
+\xleftarrow{g_*\epsilon}
+g_*(b(\mathcal{O}_Z) \otimes f_*a(\mathcal{O}_Y)) \xrightarrow{g_*\alpha}
+g_*(b(\mathcal{O}_Z)) \xrightarrow{\beta} \mathcal{O}_Z
+$$
+tensored with $\text{id}_K$. Here $\epsilon$ is the isomorphism from
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change} and
+$\beta$ comes from the counit map
+$g_*b \to \text{id}$. Similarly, the composition of the lower
+horizontal arrows is adjoint to $\text{id}_K$ tensored with the composition
+$$
+g_*f_*(f^*b(\mathcal{O}_Z) \otimes a(\mathcal{O}_Y)) \xrightarrow{g_*f_*\delta}
+g_*f_*(ab(\mathcal{O}_Z)) \xrightarrow{g_*\gamma}
+g_*(b(\mathcal{O}_Z)) \xrightarrow{\beta}
+\mathcal{O}_Z
+$$
+where $\gamma$ comes from the counit map $f_*a \to \text{id}$
+and $\delta$ is the map whose adjoint is the composition
+$$
+f_*(f^*b(\mathcal{O}_Z) \otimes a(\mathcal{O}_Y))
+\xleftarrow{\epsilon}
+b(\mathcal{O}_Z) \otimes f_*a(\mathcal{O}_Y) \xrightarrow{\alpha}
+b(\mathcal{O}_Z)
+$$
+By general properties of adjoint functors, adjoint maps, and counits
+(see Categories, Section \ref{categories-section-adjoint})
+we have $\gamma \circ f_*\delta = \alpha \circ \epsilon^{-1}$ as desired.
+\end{proof}
+
+
+
+
+
+\section{Right adjoint of pushforward for closed immersions}
+\label{section-sections-with-exact-support}
+
+\noindent
+Let $i : (Z, \mathcal{O}_Z) \to (X, \mathcal{O}_X)$ be a morphism
+of ringed spaces such that $i$ is a homeomorphism onto a closed
+subset and such that $i^\sharp : \mathcal{O}_X \to i_*\mathcal{O}_Z$
+is surjective. (For example a closed immersion of schemes.)
+Let $\mathcal{I} = \Ker(i^\sharp)$. For a sheaf
+of $\mathcal{O}_X$-modules $\mathcal{F}$ the sheaf
+$$
+\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Z, \mathcal{F})
+$$
+is a sheaf of $\mathcal{O}_X$-modules annihilated by $\mathcal{I}$.
+Hence by Modules, Lemma \ref{modules-lemma-i-star-equivalence}
+there is a sheaf of $\mathcal{O}_Z$-modules,
+which we will denote $\SheafHom(\mathcal{O}_Z, \mathcal{F})$,
+such that
+$$
+i_*\SheafHom(\mathcal{O}_Z, \mathcal{F}) =
+\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Z, \mathcal{F})
+$$
+as $\mathcal{O}_X$-modules. We spell out what this means.
+
+\begin{lemma}
+\label{lemma-compute-sheaf-with-exact-support}
+With notation as above. The functor $\SheafHom(\mathcal{O}_Z, -)$ is a
+right adjoint to the functor
+$i_* : \textit{Mod}(\mathcal{O}_Z) \to \textit{Mod}(\mathcal{O}_X)$.
+For $V \subset Z$ open we have
+$$
+\Gamma(V, \SheafHom(\mathcal{O}_Z, \mathcal{F})) =
+\{s \in \Gamma(U, \mathcal{F}) \mid \mathcal{I}s = 0\}
+$$
+where $U \subset X$ is an open whose intersection with $Z$ is $V$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{G}$ be a sheaf of $\mathcal{O}_Z$-modules. Then
+$$
+\Hom_{\mathcal{O}_X}(i_*\mathcal{G}, \mathcal{F}) =
+\Hom_{i_*\mathcal{O}_Z}(i_*\mathcal{G},
+\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Z, \mathcal{F})) =
+\Hom_{\mathcal{O}_Z}(\mathcal{G}, \SheafHom(\mathcal{O}_Z, \mathcal{F}))
+$$
+The first equality by
+Modules, Lemma \ref{modules-lemma-adjoint-tensor-restrict}
+and the second by the fully faithfulness of $i_*$, see
+Modules, Lemma \ref{modules-lemma-i-star-equivalence}.
+The description of sections is left to the reader.
+\end{proof}
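+
+\noindent
+For instance, here is a minimal affine sketch of the description of
+sections (assuming $X = \Spec(A)$, $Z = \Spec(A/I)$, and
+$\mathcal{F} = \widetilde{M}$ for an $A$-module $M$): taking $V = Z$
+and $U = X$ in the lemma we obtain
+$$
+\Gamma(Z, \SheafHom(\mathcal{O}_Z, \mathcal{F})) =
+\{m \in M \mid Im = 0\} = \Hom_A(A/I, M)
+$$
+which is the usual $I$-torsion submodule of $M$ viewed as an $A/I$-module.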
+
+\noindent
+The functor
+$$
+\textit{Mod}(\mathcal{O}_X)
+\longrightarrow
+\textit{Mod}(\mathcal{O}_Z),
+\quad
+\mathcal{F} \longmapsto \SheafHom(\mathcal{O}_Z, \mathcal{F})
+$$
+is left exact and has a derived extension
+$$
+R\SheafHom(\mathcal{O}_Z, -) : D(\mathcal{O}_X) \to D(\mathcal{O}_Z).
+$$
+
+\begin{lemma}
+\label{lemma-sheaf-with-exact-support-adjoint}
+With notation as above. The functor $R\SheafHom(\mathcal{O}_Z, -)$
+is the right adjoint of the functor
+$Ri_* : D(\mathcal{O}_Z) \to D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+This is a consequence of the fact that $i_*$ and
+$\SheafHom(\mathcal{O}_Z, -)$ are adjoint functors by
+Lemma \ref{lemma-compute-sheaf-with-exact-support}. See
+Derived Categories, Lemma \ref{derived-lemma-derived-adjoint-functors}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sheaf-with-exact-support-ext}
+With notation as above. We have
+$$
+Ri_*R\SheafHom(\mathcal{O}_Z, K) =
+R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Z, K)
+$$
+in $D(\mathcal{O}_X)$ for all $K$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the construction of the functor
+$R\SheafHom(\mathcal{O}_Z, -)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sheaf-with-exact-support-internal-home}
+With notation as above. For $M \in D(\mathcal{O}_Z)$ we have
+$$
+R\SheafHom_{\mathcal{O}_X}(Ri_*M, K) =
+Ri_*R\SheafHom_{\mathcal{O}_Z}(M, R\SheafHom(\mathcal{O}_Z, K))
+$$
+in $D(\mathcal{O}_X)$ for all $K$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the construction of the functor
+$R\SheafHom(\mathcal{O}_Z, -)$ and the fact that if
+$\mathcal{K}^\bullet$ is a K-injective complex of
+$\mathcal{O}_X$-modules, then $\SheafHom(\mathcal{O}_Z, \mathcal{K}^\bullet)$
+is a K-injective complex of $\mathcal{O}_Z$-modules, see
+Derived Categories, Lemma \ref{derived-lemma-adjoint-preserve-K-injectives}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sheaf-with-exact-support-quasi-coherent}
+Let $i : Z \to X$ be a pseudo-coherent closed immersion of schemes
+(any closed immersion if $X$ is locally Noetherian).
+Then
+\begin{enumerate}
+\item $R\SheafHom(\mathcal{O}_Z, -)$ maps $D^+_\QCoh(\mathcal{O}_X)$
+into $D^+_\QCoh(\mathcal{O}_Z)$, and
+\item if $X = \Spec(A)$ and $Z = \Spec(B)$, then the diagram
+$$
+\xymatrix{
+D^+(B) \ar[r] & D_\QCoh^+(\mathcal{O}_Z) \\
+D^+(A) \ar[r] \ar[u]^{R\Hom(B, -)} &
+D_\QCoh^+(\mathcal{O}_X) \ar[u]_{R\SheafHom(\mathcal{O}_Z, -)}
+}
+$$
+is commutative.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To explain the parenthetical remark, if $X$ is locally Noetherian, then
+$i$ is pseudo-coherent by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-Noetherian-pseudo-coherent}.
+
+\medskip\noindent
+Let $K$ be an object of $D^+_\QCoh(\mathcal{O}_X)$. To prove (1), by
+Morphisms, Lemma \ref{morphisms-lemma-i-star-equivalence}
+it suffices to show that $i_*$ applied to
+$H^n(R\SheafHom(\mathcal{O}_Z, K))$ produces a
+quasi-coherent module on $X$. By
+Lemma \ref{lemma-sheaf-with-exact-support-ext}
+this means we have to show that
+$R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Z, K)$
+is in $D_\QCoh(\mathcal{O}_X)$. Since $i$ is pseudo-coherent
+the sheaf $\mathcal{O}_Z$ is a pseudo-coherent $\mathcal{O}_X$-module.
+Hence the result follows from
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}.
+
+\medskip\noindent
+Assume $X = \Spec(A)$ and $Z = \Spec(B)$ as in (2).
+Let $I^\bullet$ be a bounded below complex of injective $A$-modules
+representing an object $K$ of $D^+(A)$.
+Then we know that $R\Hom(B, K) = \Hom_A(B, I^\bullet)$ viewed
+as a complex of $B$-modules. Choose a quasi-isomorphism
+$$
+\widetilde{I^\bullet} \longrightarrow \mathcal{I}^\bullet
+$$
+where $\mathcal{I}^\bullet$ is a bounded below complex of injective
+$\mathcal{O}_X$-modules. It follows from the description of
+the functor $\SheafHom(\mathcal{O}_Z, -)$ in
+Lemma \ref{lemma-compute-sheaf-with-exact-support}
+that there is a map
+$$
+\Hom_A(B, I^\bullet)
+\longrightarrow
+\Gamma(Z, \SheafHom(\mathcal{O}_Z, \mathcal{I}^\bullet))
+$$
+Observe that $\SheafHom(\mathcal{O}_Z, \mathcal{I}^\bullet)$
+represents $R\SheafHom(\mathcal{O}_Z, \widetilde{K})$.
+Applying the universal property of the $\widetilde{\ }$ functor we
+obtain a map
+$$
+\widetilde{\Hom_A(B, I^\bullet)}
+\longrightarrow
+R\SheafHom(\mathcal{O}_Z, \widetilde{K})
+$$
+in $D(\mathcal{O}_Z)$. We may check that this map is an isomorphism in
+$D(\mathcal{O}_Z)$ after applying $i_*$. However, once we apply
+$i_*$ we obtain the isomorphism of Derived Categories of Schemes,
+Lemma \ref{perfect-lemma-quasi-coherence-internal-hom}
+via the identification of
+Lemma \ref{lemma-sheaf-with-exact-support-ext}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sheaf-with-exact-support-coherent}
+Let $i : Z \to X$ be a closed immersion of schemes.
+Assume $X$ is locally Noetherian.
+Then $R\SheafHom(\mathcal{O}_Z, -)$ maps $D^+_{\textit{Coh}}(\mathcal{O}_X)$
+into $D^+_{\textit{Coh}}(\mathcal{O}_Z)$.
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$, hence we may assume that $X$ is affine.
+Say $X = \Spec(A)$ and $Z = \Spec(B)$ with $A$ Noetherian and
+$A \to B$ surjective. In this case, we can apply
+Lemma \ref{lemma-sheaf-with-exact-support-quasi-coherent}
+to translate the question into algebra.
+The corresponding algebra result is a consequence of
+Dualizing Complexes, Lemma \ref{dualizing-lemma-exact-support-coherent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-twisted-inverse-image-closed}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $i : Z \to X$ be a pseudo-coherent closed immersion
+(if $X$ is Noetherian, then any closed immersion is pseudo-coherent).
+Let $a : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Z)$ be the
+right adjoint to $Ri_*$. Then there is a functorial isomorphism
+$$
+a(K) = R\SheafHom(\mathcal{O}_Z, K)
+$$
+for $K \in D_\QCoh^+(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+(The parenthetical statement follows from More on Morphisms, Lemma
+\ref{more-morphisms-lemma-Noetherian-pseudo-coherent}.)
+By Lemma \ref{lemma-sheaf-with-exact-support-adjoint}
+the functor $R\SheafHom(\mathcal{O}_Z, -)$ is a right adjoint
+to $Ri_* : D(\mathcal{O}_Z) \to D(\mathcal{O}_X)$. Moreover,
+by Lemma \ref{lemma-sheaf-with-exact-support-quasi-coherent}
+and Lemma \ref{lemma-twisted-inverse-image-bounded-below}
+both $R\SheafHom(\mathcal{O}_Z, -)$ and $a$ map
+$D_\QCoh^+(\mathcal{O}_X)$ into $D_\QCoh^+(\mathcal{O}_Z)$.
+Hence we obtain the isomorphism by uniqueness of adjoint
+functors.
+\end{proof}
+
+\begin{example}
+\label{example-trace-closed-immersion}
+If $i : Z \to X$ is a closed immersion of Noetherian schemes, then
+the diagram
+$$
+\xymatrix{
+i_*a(K) \ar[rr]_-{\text{Tr}_{i, K}} \ar@{=}[d] & &
+K \ar@{=}[d] \\
+i_*R\SheafHom(\mathcal{O}_Z, K) \ar@{=}[r] &
+R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_Z, K)
+\ar[r] & K
+}
+$$
+is commutative for $K \in D_\QCoh^+(\mathcal{O}_X)$.
+Here the horizontal equality sign is
+Lemma \ref{lemma-sheaf-with-exact-support-ext} and the
+lower horizontal arrow is induced
+by the map $\mathcal{O}_X \to i_*\mathcal{O}_Z$. The commutativity
+of the diagram is a consequence of
+Lemma \ref{lemma-twisted-inverse-image-closed}.
+\end{example}
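+
+\noindent
+As a concrete sanity check (an illustration, not part of the general
+discussion), take $i : Z = \Spec(k) \to X = \Spec(k[t])$ the inclusion
+of the origin in the affine line over a field $k$. Applying
+$\Hom_{k[t]}(-, k[t])$ to the free resolution
+$0 \to k[t] \xrightarrow{t} k[t] \to k \to 0$ gives the complex
+$k[t] \xrightarrow{t} k[t]$ in degrees $0$ and $1$, whence
+$$
+a(\mathcal{O}_X) = R\SheafHom(\mathcal{O}_Z, \mathcal{O}_X)
+\quad\text{corresponds to}\quad
+R\Hom_{k[t]}(k, k[t]) = k[-1]
+$$
+Thus the right adjoint of pushforward places the structure sheaf in
+cohomological degree $1$, matching the codimension of $Z$ in $X$.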
+
+
+
+
+\section{Right adjoint of pushforward for closed immersions and base change}
+\label{section-sections-with-exact-support-base-change}
+
+\noindent
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+Z' \ar[r]_{i'} \ar[d]_g & X' \ar[d]^f \\
+Z \ar[r]^i & X
+}
+$$
+where $i$ is a closed immersion. If $Z$ and $X'$ are
+tor independent over $X$, then there is a canonical
+base change map
+\begin{equation}
+\label{equation-base-change-exact-support}
+Lg^*R\SheafHom(\mathcal{O}_Z, K)
+\longrightarrow
+R\SheafHom(\mathcal{O}_{Z'}, Lf^*K)
+\end{equation}
+in $D(\mathcal{O}_{Z'})$ functorial for $K$ in $D(\mathcal{O}_X)$.
+Namely, by adjointness of Lemma \ref{lemma-sheaf-with-exact-support-adjoint}
+such an arrow is the same thing as a map
+$$
+Ri'_*Lg^*R\SheafHom(\mathcal{O}_Z, K)
+\longrightarrow
+Lf^*K
+$$
+in $D(\mathcal{O}_{X'})$. By tor independence we have
+$Ri'_* \circ Lg^* = Lf^* \circ Ri_*$ (see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-compare-base-change-closed-immersion}).
+Thus this is the same thing as a map
+$$
+Lf^*Ri_*R\SheafHom(\mathcal{O}_Z, K)
+\longrightarrow
+Lf^*K
+$$
+For this we can use $Lf^*(can)$ where
+$can : Ri_* R\SheafHom(\mathcal{O}_Z, K) \to K$ is the
+counit of the adjunction.
+
+\begin{lemma}
+\label{lemma-check-base-change-is-iso}
+In the situation above, the map (\ref{equation-base-change-exact-support})
+is an isomorphism if and only if the base change map
+$$
+Lf^*R\SheafHom_{\mathcal{O}_X}(\mathcal{O}_Z, K)
+\longrightarrow
+R\SheafHom_{\mathcal{O}_{X'}}(\mathcal{O}_{Z'}, Lf^*K)
+$$
+of Cohomology, Remark \ref{cohomology-remark-prepare-fancy-base-change}
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense because $\mathcal{O}_{Z'} = Lf^*\mathcal{O}_Z$
+by the assumed tor independence.
+Since $i'_*$ is exact and faithful we see that it suffices to show
+the map (\ref{equation-base-change-exact-support})
+is an isomorphism after applying $Ri'_*$. Since
+$Ri'_* \circ Lg^* = Lf^* \circ Ri_*$ by the assumed tor independence and
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-compare-base-change-closed-immersion}
+we obtain a map
+$$
+Lf^*Ri_*R\SheafHom(\mathcal{O}_Z, K)
+\longrightarrow
+Ri'_*R\SheafHom(\mathcal{O}_{Z'}, Lf^*K)
+$$
+whose source and target are as in the statement of the lemma by
+Lemma \ref{lemma-sheaf-with-exact-support-ext}. We omit the
+verification that this is the same map as the one constructed
+in Cohomology, Remark \ref{cohomology-remark-prepare-fancy-base-change}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-bc-sheaf-with-exact-support}
+In the situation above, assume $f$ is flat and $i$ pseudo-coherent.
+Then (\ref{equation-base-change-exact-support}) is an isomorphism
+for $K$ in $D^+_\QCoh(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+First proof. To prove this map is an isomorphism, we may work locally.
+Hence we may assume $X$, $X'$, $Z$, $Z'$ are affine, say corresponding
+to the rings $A$, $A'$, $B$, $B'$. Then $B$ and $A'$ are tor independent
+over $A$. By Lemma \ref{lemma-check-base-change-is-iso} it suffices
+to check that
+$$
+R\Hom_A(B, K) \otimes_A^\mathbf{L} A' =
+R\Hom_{A'}(B', K \otimes_A^\mathbf{L} A')
+$$
+in $D(A')$ for all $K \in D^+(A)$. Here we use
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}
+and the fact that $B$, resp.\ $B'$ is pseudo-coherent as an
+$A$-module, resp.\ $A'$-module
+to compare derived hom on the level of rings and schemes.
+The displayed equality follows from
+More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-evaluate-tensor-isomorphism}
+part (3). See also the discussion in Dualizing Complexes, Section
+\ref{dualizing-section-base-change-trivial-duality}.
+
+\medskip\noindent
+Second proof\footnote{This proof shows
+it suffices to assume $K$ is in $D^+(\mathcal{O}_X)$.}.
+Let $z' \in Z'$ with image $z \in Z$.
+First show that (\ref{equation-base-change-exact-support})
+on stalks at $z'$ induces the map
+$$
+R\Hom(\mathcal{O}_{Z, z}, K_z)
+\otimes_{\mathcal{O}_{Z, z}}^\mathbf{L} \mathcal{O}_{Z', z'}
+\longrightarrow
+R\Hom(\mathcal{O}_{Z', z'},
+K_z \otimes_{\mathcal{O}_{X, z}}^\mathbf{L} \mathcal{O}_{X', z'})
+$$
+from Dualizing Complexes, Equation (\ref{dualizing-equation-base-change}).
+Namely, the constructions of these maps are identical.
+Then apply Dualizing Complexes, Lemma \ref{dualizing-lemma-flat-bc-surjection}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sheaf-with-exact-support-tensor}
+Let $i : Z \to X$ be a pseudo-coherent closed immersion of schemes.
+Let $M \in D_\QCoh(\mathcal{O}_X)$ locally have tor-amplitude in $[a, \infty)$.
+Let $K \in D_\QCoh^+(\mathcal{O}_X)$. Then there is a canonical isomorphism
+$$
+R\SheafHom(\mathcal{O}_Z, K) \otimes_{\mathcal{O}_Z}^\mathbf{L} Li^*M =
+R\SheafHom(\mathcal{O}_Z, K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+$$
+in $D(\mathcal{O}_Z)$.
+\end{lemma}
+
+\begin{proof}
+A map from LHS to RHS is the same thing as a map
+$$
+Ri_*R\SheafHom(\mathcal{O}_Z, K) \otimes_{\mathcal{O}_X}^\mathbf{L} M
+\longrightarrow
+K \otimes_{\mathcal{O}_X}^\mathbf{L} M
+$$
+by Lemmas \ref{lemma-sheaf-with-exact-support-adjoint} and
+\ref{lemma-sheaf-with-exact-support-ext}. For this map we take the
+counit $Ri_*R\SheafHom(\mathcal{O}_Z, K) \to K$ tensored with $\text{id}_M$.
+To see this map is an isomorphism under the hypotheses given,
+translate into algebra using
+Lemma \ref{lemma-sheaf-with-exact-support-quasi-coherent}
+and then for example use More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-evaluate-tensor-isomorphism} part (3).
+Instead of using Lemma \ref{lemma-sheaf-with-exact-support-quasi-coherent}
+you can look at stalks as in the second proof of
+Lemma \ref{lemma-flat-bc-sheaf-with-exact-support}.
+\end{proof}
+
+
+
+\section{Right adjoint of pushforward for finite morphisms}
+\label{section-duality-finite}
+
+\noindent
+If $i : Z \to X$ is a closed immersion of schemes, then there is
+a right adjoint $\SheafHom(\mathcal{O}_Z, -)$ to the functor
+$i_* : \textit{Mod}(\mathcal{O}_Z) \to \textit{Mod}(\mathcal{O}_X)$
+whose derived extension $R\SheafHom(\mathcal{O}_Z, -)$
+is the right adjoint to $Ri_* : D(\mathcal{O}_Z) \to D(\mathcal{O}_X)$. See
+Section \ref{section-sections-with-exact-support}.
+In the case of a finite morphism $f : Y \to X$ this strategy
+cannot work, as the functor
+$f_* : \textit{Mod}(\mathcal{O}_Y) \to \textit{Mod}(\mathcal{O}_X)$
+is not exact in general and hence does not have a right adjoint.
+A replacement is to consider the exact functor
+$\textit{Mod}(f_*\mathcal{O}_Y) \to \textit{Mod}(\mathcal{O}_X)$
+and consider the corresponding right adjoint and its derived
+extension.
+
+\medskip\noindent
+Let $f : Y \to X$ be an affine morphism of schemes. For a sheaf
+of $\mathcal{O}_X$-modules $\mathcal{F}$ the sheaf
+$$
+\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, \mathcal{F})
+$$
+is a sheaf of $f_*\mathcal{O}_Y$-modules. We obtain a functor
+$\textit{Mod}(\mathcal{O}_X) \to \textit{Mod}(f_*\mathcal{O}_Y)$
+which we will denote $\SheafHom(f_*\mathcal{O}_Y, -)$.
+
+\begin{lemma}
+\label{lemma-compute-sheafhom-affine}
+With notation as above. The functor $\SheafHom(f_*\mathcal{O}_Y, -)$ is a
+right adjoint to the restriction functor
+$\textit{Mod}(f_*\mathcal{O}_Y) \to \textit{Mod}(\mathcal{O}_X)$.
+For an affine open $U \subset X$ we have
+$$
+\Gamma(U, \SheafHom(f_*\mathcal{O}_Y, \mathcal{F})) =
+\Hom_A(B, \mathcal{F}(U))
+$$
+where $A = \mathcal{O}_X(U)$ and $B = \mathcal{O}_Y(f^{-1}(U))$.
+\end{lemma}
+
+\begin{proof}
+Adjointness follows from
+Modules, Lemma \ref{modules-lemma-adjoint-tensor-restrict}.
+As $f$ is affine we see that $f_*\mathcal{O}_Y$ is
+the quasi-coherent sheaf corresponding to $B$ viewed
+as an $A$-module. Hence the description of sections over $U$ follows from
+Schemes, Lemma \ref{schemes-lemma-compare-constructions}.
+\end{proof}
+
+\noindent
+The functor $\SheafHom(f_*\mathcal{O}_Y, -)$ is left exact. Let
+$$
+R\SheafHom(f_*\mathcal{O}_Y, -) :
+D(\mathcal{O}_X)
+\longrightarrow
+D(f_*\mathcal{O}_Y)
+$$
+be its derived extension.
+
+\begin{lemma}
+\label{lemma-sheafhom-affine-adjoint}
+With notation as above. The functor $R\SheafHom(f_*\mathcal{O}_Y, -)$
+is the right adjoint of the restriction functor
+$D(f_*\mathcal{O}_Y) \to D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-compute-sheafhom-affine}
+and
+Derived Categories, Lemma \ref{derived-lemma-derived-adjoint-functors}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sheafhom-affine-ext}
+With notation as above. The composition
+$$
+D(\mathcal{O}_X) \xrightarrow{R\SheafHom(f_*\mathcal{O}_Y, -)}
+D(f_*\mathcal{O}_Y) \to D(\mathcal{O}_X)
+$$
+is the functor $K \mapsto R\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, K)$.
+\end{lemma}
+
+\begin{proof}
+This is immediate from the construction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-twisted}
+Let $f : Y \to X$ be a finite pseudo-coherent morphism of schemes
+(a finite morphism of Noetherian schemes is pseudo-coherent).
+The functor $R\SheafHom(f_*\mathcal{O}_Y, -)$ maps
+$D_\QCoh^+(\mathcal{O}_X)$ into $D_\QCoh^+(f_*\mathcal{O}_Y)$.
+If $X$ is quasi-compact and quasi-separated, then the diagram
+$$
+\xymatrix{
+D_\QCoh^+(\mathcal{O}_X) \ar[rr]_a \ar[rd]_{R\SheafHom(f_*\mathcal{O}_Y, -)}
+& & D_\QCoh^+(\mathcal{O}_Y) \ar[ld]^\Phi \\
+& D_\QCoh^+(f_*\mathcal{O}_Y)
+}
+$$
+is commutative, where $a$ is the right adjoint of
+Lemma \ref{lemma-twisted-inverse-image} for $f$ and $\Phi$ is the equivalence
+of Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-morphism-equivalence}.
+\end{lemma}
+
+\begin{proof}
+(The parenthetical remark follows from More on Morphisms, Lemma
+\ref{more-morphisms-lemma-Noetherian-pseudo-coherent}.)
+Since $f$ is pseudo-coherent, the $\mathcal{O}_X$-module $f_*\mathcal{O}_Y$
+is pseudo-coherent, see More on Morphisms, Lemma
+\ref{more-morphisms-lemma-finite-pseudo-coherent}.
+Thus $R\SheafHom(f_*\mathcal{O}_Y, -)$ maps
+$D_\QCoh^+(\mathcal{O}_X)$ into
+$D_\QCoh^+(f_*\mathcal{O}_Y)$, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}.
+Then $\Phi \circ a$ and $R\SheafHom(f_*\mathcal{O}_Y, -)$
+agree on $D_\QCoh^+(\mathcal{O}_X)$ because these functors are
+both right adjoint to the restriction functor
+$D_\QCoh^+(f_*\mathcal{O}_Y) \to D_\QCoh^+(\mathcal{O}_X)$. To see this
+use Lemmas \ref{lemma-twisted-inverse-image-bounded-below} and
+\ref{lemma-sheafhom-affine-adjoint}.
+\end{proof}
+
+\begin{remark}
+\label{remark-trace-map-finite}
+If $f : Y \to X$ is a finite morphism of Noetherian schemes, then the diagram
+$$
+\xymatrix{
+Rf_*a(K) \ar[r]_-{\text{Tr}_{f, K}} \ar@{=}[d] & K \ar@{=}[d] \\
+R\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, K) \ar[r] & K
+}
+$$
+is commutative for $K \in D_\QCoh^+(\mathcal{O}_X)$. This follows
+from Lemma \ref{lemma-finite-twisted}. The lower horizontal
+arrow is induced by the map $\mathcal{O}_X \to f_*\mathcal{O}_Y$ and the
+upper horizontal arrow is the trace map discussed in
+Section \ref{section-trace}.
+\end{remark}
+
+
+
+
+
+
+\section{Right adjoint of pushforward for proper flat morphisms}
+\label{section-proper-flat}
+
+\noindent
+For proper, flat, and finitely presented morphisms of quasi-compact
+and quasi-separated schemes the right adjoint of pushforward
+enjoys some remarkable properties.
+
+\begin{lemma}
+\label{lemma-proper-flat}
+Let $Y$ be a quasi-compact and quasi-separated scheme.
+Let $f : X \to Y$ be a morphism of schemes which is proper, flat, and
+of finite presentation.
+Let $a$ be the right adjoint for
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ of
+Lemma \ref{lemma-twisted-inverse-image}. Then $a$ commutes with direct sums.
+\end{lemma}
+
+\begin{proof}
+Let $P$ be a perfect object of $D(\mathcal{O}_X)$. By
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}
+the complex $Rf_*P$ is perfect on $Y$.
+Let $K_i$ be a family of objects of $D_\QCoh(\mathcal{O}_Y)$.
+Then
+\begin{align*}
+\Hom_{D(\mathcal{O}_X)}(P, a(\bigoplus K_i))
+& =
+\Hom_{D(\mathcal{O}_Y)}(Rf_*P, \bigoplus K_i) \\
+& =
+\bigoplus \Hom_{D(\mathcal{O}_Y)}(Rf_*P, K_i) \\
+& =
+\bigoplus \Hom_{D(\mathcal{O}_X)}(P, a(K_i))
+\end{align*}
+because a perfect object is compact (Derived Categories of Schemes,
+Proposition \ref{perfect-proposition-compact-is-perfect}).
+Since $D_\QCoh(\mathcal{O}_X)$ has a perfect generator
+(Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-bondal-van-den-Bergh})
+we conclude that the map $\bigoplus a(K_i) \to a(\bigoplus K_i)$
+is an isomorphism, i.e., $a$ commutes with direct sums.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-flat-relative}
+Let $Y$ be a quasi-compact and quasi-separated scheme.
+Let $f : X \to Y$ be a morphism of schemes which is proper, flat, and
+of finite presentation.
+Let $a$ be the right adjoint for
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ of
+Lemma \ref{lemma-twisted-inverse-image}. Then
+\begin{enumerate}
+\item for every closed $T \subset Y$, if $Q \in D_\QCoh(\mathcal{O}_Y)$
+is supported on $T$,
+then $a(Q)$ is supported on $f^{-1}(T)$, and
+\item for every open $V \subset Y$ and any $K \in D_\QCoh(\mathcal{O}_Y)$
+the map (\ref{equation-sheafy}) is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-when-sheafy},
+\ref{lemma-proper-noetherian}, and
+\ref{lemma-proper-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-with-pullback-flat-proper}
+Let $Y$ be a quasi-compact and quasi-separated scheme.
+Let $f : X \to Y$ be a morphism of schemes which is proper, flat, and
+of finite presentation.
+The map (\ref{equation-compare-with-pullback}) is an isomorphism
+for every object $K$ of $D_\QCoh(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-proper-flat} we know that $a$ commutes
+with direct sums. Hence the collection of objects of
+$D_\QCoh(\mathcal{O}_Y)$ for which (\ref{equation-compare-with-pullback})
+is an isomorphism is a strictly full, saturated, triangulated
+subcategory of $D_\QCoh(\mathcal{O}_Y)$ which is moreover
+preserved under taking direct sums. Since $D_\QCoh(\mathcal{O}_Y)$
+is a module category (Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-DQCoh-is-Ddga}) generated by a single
+perfect object (Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-bondal-van-den-Bergh})
+we can argue as in
+More on Algebra, Remark \ref{more-algebra-remark-P-resolution}
+to see that it suffices to prove (\ref{equation-compare-with-pullback})
+is an isomorphism for a single perfect object.
+However, the result holds for perfect objects, see
+Lemma \ref{lemma-compare-with-pullback-perfect}.
+\end{proof}
+
+\noindent
+The following lemma shows that the base change map
+(\ref{equation-base-change-map}) is an isomorphism
+for proper, flat morphisms of finite presentation.
+We will see in
+Example \ref{example-base-change-wrong}
+that this does not remain true for perfect proper morphisms;
+in that case one has to impose a tor independence condition.
+
+\begin{lemma}
+\label{lemma-proper-flat-base-change}
+Let $g : Y' \to Y$ be a morphism of quasi-compact and quasi-separated schemes.
+Let $f : X \to Y$ be a proper, flat morphism of finite presentation.
+Then the base change map (\ref{equation-base-change-map}) is an isomorphism
+for all $K \in D_\QCoh(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-proper-flat-relative} formation of the
+functors $a$ and $a'$ commutes with restriction to opens of $Y$ and $Y'$.
+Hence we may assume $Y' \to Y$ is a morphism of affine schemes, see
+Remark \ref{remark-check-over-affines}. In this
+case the statement follows from Lemma \ref{lemma-more-base-change}.
+\end{proof}
+
+\begin{remark}
+\label{remark-relative-dualizing-complex}
+Let $Y$ be a quasi-compact and quasi-separated scheme.
+Let $f : X \to Y$ be a proper, flat morphism of finite presentation.
+Let $a$ be the adjoint of Lemma \ref{lemma-twisted-inverse-image} for $f$.
+In this situation, $\omega_{X/Y}^\bullet = a(\mathcal{O}_Y)$
+is sometimes called the {\it relative dualizing complex}. By
+Lemma \ref{lemma-compare-with-pullback-flat-proper}
+there is a functorial isomorphism
+$a(K) = Lf^*K \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_{X/Y}^\bullet$
+for $K \in D_\QCoh(\mathcal{O}_Y)$. Moreover, the trace map
+$$
+\text{Tr}_{f, \mathcal{O}_Y} : Rf_*\omega_{X/Y}^\bullet \to \mathcal{O}_Y
+$$
+of Section \ref{section-trace} induces the trace map for all $K$
+in $D_\QCoh(\mathcal{O}_Y)$. More precisely the diagram
+$$
+\xymatrix{
+Rf_*a(K) \ar[rrr]_{\text{Tr}_{f, K}} \ar@{=}[d] & & &
+K \ar@{=}[d] \\
+Rf_*(Lf^*K \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_{X/Y}^\bullet)
+\ar@{=}[r] &
+K \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*\omega_{X/Y}^\bullet
+\ar[rr]^-{\text{id}_K \otimes \text{Tr}_{f, \mathcal{O}_Y}} & & K
+}
+$$
commutes, where the equality on the lower right is
Derived Categories of Schemes, Lemma \ref{perfect-lemma-cohomology-base-change}.
+If $g : Y' \to Y$ is a
+morphism of quasi-compact and quasi-separated schemes
+and $X' = Y' \times_Y X$, then by
+Lemma \ref{lemma-proper-flat-base-change} we have
+$\omega_{X'/Y'}^\bullet = L(g')^*\omega_{X/Y}^\bullet$ where $g' : X' \to X$
+is the projection and by Lemma \ref{lemma-trace-map-and-base-change}
+the trace map
+$$
+\text{Tr}_{f', \mathcal{O}_{Y'}} :
+Rf'_*\omega_{X'/Y'}^\bullet \to \mathcal{O}_{Y'}
+$$
+for $f' : X' \to Y'$ is the base change of $\text{Tr}_{f, \mathcal{O}_Y}$
+via the base change isomorphism.
+\end{remark}
+
+\begin{remark}
+\label{remark-relative-dualizing-complex-relative-cup-product}
+Let $f : X \to Y$, $\omega^\bullet_{X/Y}$, and $\text{Tr}_{f, \mathcal{O}_Y}$
+be as in Remark \ref{remark-relative-dualizing-complex}.
+Let $K$ and $M$ be in $D_\QCoh(\mathcal{O}_X)$ with
+$M$ pseudo-coherent (for example perfect). Suppose given a map
+$K \otimes_{\mathcal{O}_X}^\mathbf{L} M \to \omega^\bullet_{X/Y}$
+which corresponds to an isomorphism
+$K \to R\SheafHom_{\mathcal{O}_X}(M, \omega^\bullet_{X/Y})$
+via Cohomology, Equation (\ref{cohomology-equation-internal-hom}).
+Then the relative cup product
+(Cohomology, Remark \ref{cohomology-remark-cup-product})
+$$
+Rf_*K \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*M
+\to
+Rf_*(K \otimes_{\mathcal{O}_X}^\mathbf{L} M)
+\to
+Rf_*\omega^\bullet_{X/Y}
+\xrightarrow{\text{Tr}_{f, \mathcal{O}_Y}}
+\mathcal{O}_Y
+$$
+determines an isomorphism
+$Rf_*K \to R\SheafHom_{\mathcal{O}_Y}(Rf_*M, \mathcal{O}_Y)$.
+Namely, since $\omega^\bullet_{X/Y} = a(\mathcal{O}_Y)$
+the canonical map (\ref{equation-sheafy-trace})
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(M, \omega^\bullet_{X/Y}) \to
+R\SheafHom_{\mathcal{O}_Y}(Rf_*M, \mathcal{O}_Y)
+$$
+is an isomorphism by
+Lemma \ref{lemma-iso-on-RSheafHom} and
+Remark \ref{remark-iso-on-RSheafHom}
+and the fact that $M$ and $Rf_*M$ are pseudo-coherent, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-pseudo-coherent-direct-image-general}.
+To see that the relative cup product
+induces this isomorphism use the commutativity of the diagram in
+Cohomology, Remark \ref{cohomology-remark-relative-cup-and-composition}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-properties-relative-dualizing}
+Let $Y$ be a quasi-compact and quasi-separated scheme.
+Let $f : X \to Y$ be a morphism of schemes which is
+proper, flat, and of finite presentation with
+relative dualizing complex $\omega_{X/Y}^\bullet$
+(Remark \ref{remark-relative-dualizing-complex}).
+Then
+\begin{enumerate}
+\item $\omega_{X/Y}^\bullet$ is a $Y$-perfect object of $D(\mathcal{O}_X)$,
+\item $Rf_*\omega_{X/Y}^\bullet$ has vanishing cohomology sheaves
+in positive degrees,
+\item $\mathcal{O}_X \to
+R\SheafHom_{\mathcal{O}_X}(\omega_{X/Y}^\bullet, \omega_{X/Y}^\bullet)$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In view of the fact that formation of $\omega_{X/Y}^\bullet$ commutes
+with base change (see Remark \ref{remark-relative-dualizing-complex}),
+we may and do assume that $Y$ is affine. For a perfect object $E$ of
+$D(\mathcal{O}_X)$ we have
+\begin{align*}
+Rf_*(E \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_{X/Y}^\bullet)
+& =
+Rf_*R\SheafHom_{\mathcal{O}_X}(E^\vee, \omega_{X/Y}^\bullet) \\
+& =
+R\SheafHom_{\mathcal{O}_Y}(Rf_*E^\vee, \mathcal{O}_Y) \\
+& =
+(Rf_*E^\vee)^\vee
+\end{align*}
+For the first equality, see
+Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+For the second equality, see Lemma \ref{lemma-iso-on-RSheafHom},
+Remark \ref{remark-iso-on-RSheafHom}, and
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}.
+The third equality is the definition of the dual. In particular
+these references also show that the outcome is a perfect object
+of $D(\mathcal{O}_Y)$. We conclude that $\omega_{X/Y}^\bullet$
+is $Y$-perfect by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-characterize-relatively-perfect}.
+This proves (1).
+
+\medskip\noindent
+Let $M$ be an object of $D_\QCoh(\mathcal{O}_Y)$. Then
+\begin{align*}
+\Hom_Y(M, Rf_*\omega_{X/Y}^\bullet) & =
+\Hom_X(Lf^*M, \omega_{X/Y}^\bullet) \\
+& =
+\Hom_Y(Rf_*Lf^*M, \mathcal{O}_Y) \\
+& =
\Hom_Y(M \otimes_{\mathcal{O}_Y}^\mathbf{L} Rf_*\mathcal{O}_X, \mathcal{O}_Y)
+\end{align*}
+The first equality holds by Cohomology, Lemma
+\ref{cohomology-lemma-adjoint}.
+The second equality by construction of $a$.
+The third equality by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change}.
+Recall $Rf_*\mathcal{O}_X$ is perfect of tor amplitude in $[0, N]$
+for some $N$, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}.
+Thus we can represent $Rf_*\mathcal{O}_X$ by a complex of
+finite projective modules sitting in degrees $[0, N]$
+(using More on Algebra, Lemma \ref{more-algebra-lemma-perfect}
+and the fact that $Y$ is affine).
+Hence if $M = \mathcal{O}_Y[-i]$ for some $i > 0$, then the last
+group is zero. Since $Y$ is affine we conclude that
+$H^i(Rf_*\omega_{X/Y}^\bullet) = 0$ for $i > 0$.
+This proves (2).
+
+\medskip\noindent
+Let $E$ be a perfect object of $D_\QCoh(\mathcal{O}_X)$. Then
+we have
+\begin{align*}
\Hom_X(E, R\SheafHom_{\mathcal{O}_X}(\omega_{X/Y}^\bullet, \omega_{X/Y}^\bullet))
+& =
+\Hom_X(E \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_{X/Y}^\bullet,
+\omega_{X/Y}^\bullet) \\
+& =
+\Hom_Y(Rf_*(E \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_{X/Y}^\bullet),
+\mathcal{O}_Y) \\
+& =
+\Hom_Y(Rf_*(R\SheafHom_{\mathcal{O}_X}(E^\vee, \omega_{X/Y}^\bullet)),
+\mathcal{O}_Y) \\
+& =
+\Hom_Y(R\SheafHom_{\mathcal{O}_Y}(Rf_*E^\vee, \mathcal{O}_Y),
+\mathcal{O}_Y) \\
+& =
+R\Gamma(Y, Rf_*E^\vee) \\
+& =
+\Hom_X(E, \mathcal{O}_X)
+\end{align*}
+The first equality holds by Cohomology, Lemma
+\ref{cohomology-lemma-internal-hom}.
+The second equality is the definition of $\omega_{X/Y}^\bullet$.
+The third equality comes from the construction of the dual perfect
+complex $E^\vee$, see Cohomology, Lemma
+\ref{cohomology-lemma-dual-perfect-complex}.
+The fourth equality follows from the equality
+$Rf_*R\SheafHom_{\mathcal{O}_X}(E^\vee, \omega_{X/Y}^\bullet) =
+R\SheafHom_{\mathcal{O}_Y}(Rf_*E^\vee, \mathcal{O}_Y)$
+shown in the first paragraph of the proof.
+The fifth equality holds by double duality for perfect complexes
+(Cohomology, Lemma
+\ref{cohomology-lemma-dual-perfect-complex})
+and the fact that $Rf_*E$ is perfect by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}.
+The last equality is Leray for $f$.
+This string of equalities essentially shows (3)
+holds by the Yoneda lemma. Namely, the object
+$R\SheafHom(\omega_{X/Y}^\bullet, \omega_{X/Y}^\bullet)$
+is in $D_\QCoh(\mathcal{O}_X)$ by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}.
+Taking $E = \mathcal{O}_X$ in the above we get a map
+$\alpha : \mathcal{O}_X \to
+R\SheafHom_{\mathcal{O}_X}(\omega_{X/Y}^\bullet, \omega_{X/Y}^\bullet)$
+corresponding to
+$\text{id}_{\mathcal{O}_X} \in \Hom_X(\mathcal{O}_X, \mathcal{O}_X)$.
+Since all the isomorphisms above are functorial in $E$ we
+see that the cone on $\alpha$ is an object $C$ of $D_\QCoh(\mathcal{O}_X)$
+such that $\Hom(E, C) = 0$ for all perfect $E$.
+Since the perfect objects generate
+(Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-bondal-van-den-Bergh})
+we conclude that $\alpha$ is an isomorphism.
+\end{proof}
+
+\begin{lemma}[Rigidity]
+\label{lemma-van-den-bergh}
+Let $Y$ be a quasi-compact and quasi-separated scheme.
+Let $f : X \to Y$ be a proper, flat morphism of finite presentation
+with relative dualizing complex $\omega_{X/Y}^\bullet$
+(Remark \ref{remark-relative-dualizing-complex}).
+There is a canonical isomorphism
+\begin{equation}
+\label{equation-pre-rigid}
+\mathcal{O}_X =
+c(L\text{pr}_1^*\omega_{X/Y}^\bullet) =
+c(L\text{pr}_2^*\omega_{X/Y}^\bullet)
+\end{equation}
+and a canonical isomorphism
+\begin{equation}
+\label{equation-rigid}
+\omega_{X/Y}^\bullet =
+c\left(L\text{pr}_1^*\omega_{X/Y}^\bullet
+\otimes_{\mathcal{O}_{X \times_Y X}}^\mathbf{L}
+L\text{pr}_2^*\omega_{X/Y}^\bullet\right)
+\end{equation}
+where $c$ is the right adjoint of
+Lemma \ref{lemma-twisted-inverse-image}
+for the diagonal $\Delta : X \to X \times_Y X$.
+\end{lemma}
+
+\begin{proof}
+Let $a$ be the right adjoint to $Rf_*$ as in
+Lemma \ref{lemma-twisted-inverse-image}.
+Consider the cartesian square
+$$
+\xymatrix{
+X \times_Y X \ar[r]_q \ar[d]_p & X \ar[d]_f \\
+X \ar[r]^f & Y
+}
+$$
+Let $b$ be the right adjoint for $p$
+as in Lemma \ref{lemma-twisted-inverse-image}. Then
+\begin{align*}
+\omega_{X/Y}^\bullet
+& =
+c(b(\omega_{X/Y}^\bullet)) \\
+& =
+c(Lp^*\omega_{X/Y}^\bullet
+\otimes_{\mathcal{O}_{X \times_Y X}}^\mathbf{L} b(\mathcal{O}_X)) \\
+& =
+c(Lp^*\omega_{X/Y}^\bullet
+\otimes_{\mathcal{O}_{X \times_Y X}}^\mathbf{L}
+Lq^*a(\mathcal{O}_Y)) \\
+& =
+c(Lp^*\omega_{X/Y}^\bullet
+\otimes_{\mathcal{O}_{X \times_Y X}}^\mathbf{L}
+Lq^*\omega_{X/Y}^\bullet)
+\end{align*}
as in (\ref{equation-rigid}). The explanation is as follows:
+\begin{enumerate}
+\item The first equality holds as $\text{id} = c \circ b$ because
+$\text{id}_X = p \circ \Delta$.
+\item The second equality holds by
+Lemma \ref{lemma-compare-with-pullback-flat-proper}.
+\item The third holds by Lemma \ref{lemma-proper-flat-base-change}
+and the fact that $\mathcal{O}_X = Lf^*\mathcal{O}_Y$.
+\item The fourth holds because $\omega_{X/Y}^\bullet = a(\mathcal{O}_Y)$.
+\end{enumerate}
+Equation (\ref{equation-pre-rigid}) is proved in exactly the same way.
+\end{proof}
+
+\begin{remark}
+\label{remark-van-den-bergh}
+Lemma \ref{lemma-van-den-bergh} means our relative dualizing
+complex is {\it rigid} in a sense analogous to the notion introduced
+in \cite{vdB-rigid}. Namely, since the functor on the right of
+(\ref{equation-rigid})
+is ``quadratic'' in $\omega_{X/Y}^\bullet$ and the functor on the left
+of (\ref{equation-rigid})
+is ``linear'' this ``pins down'' the complex $\omega_{X/Y}^\bullet$
+to some extent. There is an approach to duality theory using
+``rigid'' (relative) dualizing complexes, see for example
+\cite{Neeman-rigid}, \cite{Yekutieli-rigid}, and \cite{Yekutieli-Zhang}.
+We will return to this in Section \ref{section-relative-dualizing-complexes}.
+\end{remark}
+
+
+
+
+
+\section{Right adjoint of pushforward for perfect proper morphisms}
+\label{section-perfect-proper}
+
+\noindent
+The correct generality for this section would be to consider
+perfect proper morphisms of quasi-compact and quasi-separated
+schemes, see \cite{LN}.
+
+\begin{lemma}
+\label{lemma-proper-flat-noetherian}
+Let $f : X \to Y$ be a perfect proper morphism of Noetherian schemes.
+Let $a$ be the right adjoint for
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ of
+Lemma \ref{lemma-twisted-inverse-image}. Then $a$ commutes with direct sums.
+\end{lemma}
+
+\begin{proof}
+Let $P$ be a perfect object of $D(\mathcal{O}_X)$. By
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-perfect-proper-perfect-direct-image}
+the complex $Rf_*P$ is perfect on $Y$.
+Let $K_i$ be a family of objects of $D_\QCoh(\mathcal{O}_Y)$.
+Then
+\begin{align*}
+\Hom_{D(\mathcal{O}_X)}(P, a(\bigoplus K_i))
+& =
+\Hom_{D(\mathcal{O}_Y)}(Rf_*P, \bigoplus K_i) \\
+& =
+\bigoplus \Hom_{D(\mathcal{O}_Y)}(Rf_*P, K_i) \\
+& =
+\bigoplus \Hom_{D(\mathcal{O}_X)}(P, a(K_i))
+\end{align*}
+because a perfect object is compact (Derived Categories of Schemes,
+Proposition \ref{perfect-proposition-compact-is-perfect}).
+Since $D_\QCoh(\mathcal{O}_X)$ has a perfect generator
+(Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-bondal-van-den-Bergh})
+we conclude that the map $\bigoplus a(K_i) \to a(\bigoplus K_i)$
+is an isomorphism, i.e., $a$ commutes with direct sums.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-flat-noetherian-relative}
+Let $f : X \to Y$ be a perfect proper morphism of Noetherian schemes.
+Let $a$ be the right adjoint for
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ of
+Lemma \ref{lemma-twisted-inverse-image}. Then
+\begin{enumerate}
\item for every closed $T \subset Y$, if $Q \in D_\QCoh(\mathcal{O}_Y)$
is supported on $T$,
then $a(Q)$ is supported on $f^{-1}(T)$, and
\item for every open $V \subset Y$ and any $K \in D_\QCoh(\mathcal{O}_Y)$
the map (\ref{equation-sheafy}) is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-when-sheafy},
+\ref{lemma-proper-noetherian}, and
+\ref{lemma-proper-flat-noetherian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-with-pullback-flat-proper-noetherian}
+Let $f : X \to Y$ be a perfect proper morphism of Noetherian
+schemes. The map (\ref{equation-compare-with-pullback}) is an isomorphism
+for every object $K$ of $D_\QCoh(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-proper-flat-noetherian} we know that $a$ commutes
+with direct sums. Hence the collection of objects of
+$D_\QCoh(\mathcal{O}_Y)$ for which (\ref{equation-compare-with-pullback})
+is an isomorphism is a strictly full, saturated, triangulated
+subcategory of $D_\QCoh(\mathcal{O}_Y)$ which is moreover
+preserved under taking direct sums. Since $D_\QCoh(\mathcal{O}_Y)$
+is a module category (Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-DQCoh-is-Ddga}) generated by a single
+perfect object (Derived Categories of Schemes, Theorem
+\ref{perfect-theorem-bondal-van-den-Bergh})
+we can argue as in
+More on Algebra, Remark \ref{more-algebra-remark-P-resolution}
+to see that it suffices to prove (\ref{equation-compare-with-pullback})
+is an isomorphism for a single perfect object.
+However, the result holds for perfect objects, see
+Lemma \ref{lemma-compare-with-pullback-perfect}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-perfect-base-change}
+Let $f : X \to Y$ be a perfect proper morphism of Noetherian schemes.
+Let $g : Y' \to Y$ be a morphism with $Y'$ Noetherian. If $X$ and
+$Y'$ are tor independent over $Y$, then the base
+change map (\ref{equation-base-change-map}) is an isomorphism
+for all $K \in D_\QCoh(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-proper-flat-noetherian-relative} formation of the
+functors $a$ and $a'$ commutes with restriction to opens of $Y$ and $Y'$.
+Hence we may assume $Y' \to Y$ is a morphism of affine schemes, see
+Remark \ref{remark-check-over-affines}. In this
+case the statement follows from Lemma \ref{lemma-more-base-change}.
+\end{proof}
+
+
+
+\section{Right adjoint of pushforward for effective Cartier divisors}
+\label{section-dualizing-Cartier}
+
+\noindent
+Let $X$ be a scheme and let $i : D \to X$ be the inclusion of an
+effective Cartier divisor. Denote $\mathcal{N} = i^*\mathcal{O}_X(D)$
+the normal sheaf of $i$, see
+Morphisms, Section \ref{morphisms-section-conormal-sheaf}
+and
+Divisors, Section \ref{divisors-section-effective-Cartier-divisors}.
+Recall that $R\SheafHom(\mathcal{O}_D, -)$
+denotes the right adjoint to $i_* : D(\mathcal{O}_D) \to D(\mathcal{O}_X)$
+and has the property
+$i_*R\SheafHom(\mathcal{O}_D, -) =
+R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_D, -)$,
+see Section \ref{section-sections-with-exact-support}.
+
+\begin{lemma}
+\label{lemma-compute-for-effective-Cartier}
+As above, let $X$ be a scheme and let $D \subset X$ be an
+effective Cartier divisor. There is a canonical isomorphism
+$R\SheafHom(\mathcal{O}_D, \mathcal{O}_X) = \mathcal{N}[-1]$
+in $D(\mathcal{O}_D)$.
+\end{lemma}
+
+\begin{proof}
+Equivalently, we are saying that $R\SheafHom(\mathcal{O}_D, \mathcal{O}_X)$
+has a unique nonzero cohomology sheaf in degree $1$ and that this
+sheaf is isomorphic to $\mathcal{N}$. Since $i_*$ is exact and fully
+faithful, it suffices to prove that
+$i_*R\SheafHom(\mathcal{O}_D, \mathcal{O}_X)$ is isomorphic
+to $i_*\mathcal{N}[-1]$. We have
+$i_*R\SheafHom(\mathcal{O}_D, \mathcal{O}_X) =
+R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_D, \mathcal{O}_X)$
+by Lemma \ref{lemma-sheaf-with-exact-support-ext}. We have a resolution
+$$
+0 \to \mathcal{I} \to \mathcal{O}_X \to i_*\mathcal{O}_D \to 0
+$$
+where $\mathcal{I}$ is the ideal sheaf of $D$
+which we can use to compute. Since
+$R\SheafHom_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{O}_X) = \mathcal{O}_X$ and
+$R\SheafHom_{\mathcal{O}_X}(\mathcal{I}, \mathcal{O}_X) = \mathcal{O}_X(D)$ by
+a local computation, we see that
+$$
+R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_D, \mathcal{O}_X) =
+(\mathcal{O}_X \to \mathcal{O}_X(D))
+$$
+where on the right hand side we have $\mathcal{O}_X$ in degree $0$
+and $\mathcal{O}_X(D)$ in degree $1$. The result follows from the
+short exact sequence
+$$
+0 \to \mathcal{O}_X \to \mathcal{O}_X(D) \to i_*\mathcal{N} \to 0
+$$
+coming from the fact that $D$ is the zero scheme of the canonical section
+of $\mathcal{O}_X(D)$ and from the fact that
+$\mathcal{N} = i^*\mathcal{O}_X(D)$.
+\end{proof}
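The local computation invoked in the proof can be made completely explicit. A minimal affine sketch (with $Y = \Spec(A)$ and $D = V(f)$ for a nonzerodivisor $f \in A$; this notation is introduced here only for illustration):

```latex
% Resolve i_*\mathcal{O}_D = A/fA by the two-term complex (A \xrightarrow{f} A).
% Applying \Hom_A(-, A) termwise and taking cohomology gives
\[
\text{Ext}^0_A(A/fA, A) = \text{Ker}(f : A \to A) = 0,
\qquad
\text{Ext}^1_A(A/fA, A) = \text{Coker}(f : A \to A) = A/fA
\]
% Hence R\Hom_A(A/fA, A) \cong (A/fA)[-1], matching \mathcal{N}[-1]:
% over this chart f trivializes \mathcal{N} = i^*\mathcal{O}_X(D).
```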
+
+\noindent
+For every object $K$ of $D(\mathcal{O}_X)$ there is a canonical map
+\begin{equation}
+\label{equation-map-effective-Cartier}
+Li^*K
+\otimes_{\mathcal{O}_D}^\mathbf{L}
+R\SheafHom(\mathcal{O}_D, \mathcal{O}_X)
+\longrightarrow
+R\SheafHom(\mathcal{O}_D, K)
+\end{equation}
+in $D(\mathcal{O}_D)$ functorial in $K$ and
+compatible with distinguished triangles.
+Namely, this map is adjoint to a map
+$$
+i_*(Li^*K \otimes^\mathbf{L}_{\mathcal{O}_D}
+R\SheafHom(\mathcal{O}_D, \mathcal{O}_X)) =
+K \otimes^\mathbf{L}_{\mathcal{O}_X}
+R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_D, \mathcal{O}_X)
+\longrightarrow K
+$$
+where the equality is
+Cohomology, Lemma \ref{cohomology-lemma-projection-formula-closed-immersion}
+and the arrow comes from the canonical map
+$R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_D, \mathcal{O}_X) \to \mathcal{O}_X$
+induced by $\mathcal{O}_X \to i_*\mathcal{O}_D$.
+
+\medskip\noindent
+If $K \in D_\QCoh(\mathcal{O}_X)$, then
+(\ref{equation-map-effective-Cartier}) is equal to
+(\ref{equation-compare-with-pullback}) via the identification
+$a(K) = R\SheafHom(\mathcal{O}_D, K)$ of
+Lemma \ref{lemma-twisted-inverse-image-closed}.
+If $K \in D_\QCoh(\mathcal{O}_X)$ and $X$ is Noetherian, then
+the following lemma is a special case of
+Lemma \ref{lemma-compare-with-pullback-flat-proper-noetherian}.
+
+\begin{lemma}
+\label{lemma-sheaf-with-exact-support-effective-Cartier}
+As above, let $X$ be a scheme and let $D \subset X$ be an
+effective Cartier divisor. Then (\ref{equation-map-effective-Cartier})
+combined with Lemma \ref{lemma-compute-for-effective-Cartier}
+defines an isomorphism
+$$
+Li^*K \otimes_{\mathcal{O}_D}^\mathbf{L} \mathcal{N}[-1]
+\longrightarrow
+R\SheafHom(\mathcal{O}_D, K)
+$$
+functorial in $K$ in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Since $i_*$ is exact and fully faithful on modules, to prove the map is an
+isomorphism, it suffices to show that it is an isomorphism after applying
+$i_*$. We will use the short exact sequences
+$0 \to \mathcal{I} \to \mathcal{O}_X \to i_*\mathcal{O}_D \to 0$
+and
+$0 \to \mathcal{O}_X \to \mathcal{O}_X(D) \to i_*\mathcal{N} \to 0$
+used in the proof of Lemma \ref{lemma-compute-for-effective-Cartier}
+without further mention. By
+Cohomology, Lemma \ref{cohomology-lemma-projection-formula-closed-immersion}
+which was used to define the map (\ref{equation-map-effective-Cartier})
+the left hand side becomes
+$$
+K \otimes_{\mathcal{O}_X}^\mathbf{L} i_*\mathcal{N}[-1] =
+K \otimes_{\mathcal{O}_X}^\mathbf{L} (\mathcal{O}_X \to \mathcal{O}_X(D))
+$$
+The right hand side becomes
+\begin{align*}
+R\SheafHom_{\mathcal{O}_X}(i_*\mathcal{O}_D, K) & =
+R\SheafHom_{\mathcal{O}_X}((\mathcal{I} \to \mathcal{O}_X), K) \\
+& =
+R\SheafHom_{\mathcal{O}_X}((\mathcal{I} \to \mathcal{O}_X), \mathcal{O}_X)
+\otimes_{\mathcal{O}_X}^\mathbf{L} K
+\end{align*}
+the final equality by
+Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+Since the map comes from the isomorphism
+$$
+R\SheafHom_{\mathcal{O}_X}((\mathcal{I} \to \mathcal{O}_X), \mathcal{O}_X)
+= (\mathcal{O}_X \to \mathcal{O}_X(D))
+$$
+the lemma is clear.
+\end{proof}
+
+
+
+
+
+\section{Right adjoint of pushforward in examples}
+\label{section-examples}
+
+\noindent
+In this section we compute the right adjoint to pushforward in
+some examples. The isomorphisms are canonical but only in the weakest
+possible sense, i.e., we do not prove or claim that these isomorphisms are
+compatible with various operations such as base change and compositions
+of morphisms. There is a huge literature on these types of issues; the reader
+can start with the material in \cite{RD}, \cite{Conrad-GD}
+(these citations use a different starting point for duality but address the
+issue of constructing canonical representatives for relative dualizing
+complexes) and then continue looking at works by
+Joseph Lipman and collaborators.
+
+\begin{lemma}
+\label{lemma-upper-shriek-P1}
+Let $Y$ be a Noetherian scheme. Let $\mathcal{E}$ be a finite locally
+free $\mathcal{O}_Y$-module of rank $n + 1$ with determinant
+$\mathcal{L} = \wedge^{n + 1}(\mathcal{E})$.
+Let $f : X = \mathbf{P}(\mathcal{E}) \to Y$ be the projection.
+Let $a$ be the right adjoint for
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ of
+Lemma \ref{lemma-twisted-inverse-image}.
+Then there is an isomorphism
+$$
+c : f^*\mathcal{L}(-n - 1)[n] \longrightarrow a(\mathcal{O}_Y)
+$$
+In particular, if $\mathcal{E} = \mathcal{O}_Y^{\oplus n + 1}$, then
+$X = \mathbf{P}^n_Y$ and we obtain
+$a(\mathcal{O}_Y) = \mathcal{O}_X(-n - 1)[n]$.
+\end{lemma}
+
+\begin{proof}
+In (the proof of) Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cohomology-projective-bundle}
+we constructed a canonical isomorphism
+$$
+R^nf_*(f^*\mathcal{L}(-n - 1)) \longrightarrow \mathcal{O}_Y
+$$
+Moreover, $Rf_*(f^*\mathcal{L}(-n - 1))[n] = R^nf_*(f^*\mathcal{L}(-n - 1))$,
+i.e., the other higher direct images are zero. Thus we find an isomorphism
+$$
+Rf_*(f^*\mathcal{L}(-n - 1)[n]) \longrightarrow \mathcal{O}_Y
+$$
+This isomorphism determines $c$ as in the statement of the lemma
+because $a$ is the right adjoint of $Rf_*$.
By Lemma \ref{lemma-proper-noetherian} the construction of $a$
is local on the base. In particular, to check that
$c$ is an isomorphism, we may work locally on $Y$.
+In other words, we may assume $Y$ is affine and
+$\mathcal{E} = \mathcal{O}_Y^{\oplus n + 1}$.
In this case the sheaves $\mathcal{O}_X, \mathcal{O}_X(-1), \ldots,
\mathcal{O}_X(-n)$ generate $D_\QCoh(\mathcal{O}_X)$, see
Derived Categories of Schemes, Lemma \ref{perfect-lemma-generator-P1}.
+Hence it suffices to show that
+$c : \mathcal{O}_X(-n - 1)[n] \to a(\mathcal{O}_Y)$
+is transformed into an isomorphism under the functors
+$$
+F_{i, p}(-) = \Hom_{D(\mathcal{O}_X)}(\mathcal{O}_X(i), (-)[p])
+$$
+for $i \in \{-n, \ldots, 0\}$ and $p \in \mathbf{Z}$.
+For $F_{0, p}$ this holds by construction of the arrow $c$!
+For $i \in \{-n, \ldots, -1\}$ we have
+$$
\Hom_{D(\mathcal{O}_X)}(\mathcal{O}_X(i), \mathcal{O}_X(-n - 1)[n + p]) =
H^{n + p}(X, \mathcal{O}_X(-n - 1 - i)) = 0
+$$
+by the computation of cohomology of projective space
+(Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cohomology-projective-space-over-ring})
+and we have
+$$
+\Hom_{D(\mathcal{O}_X)}(\mathcal{O}_X(i), a(\mathcal{O}_Y)[p]) =
+\Hom_{D(\mathcal{O}_Y)}(Rf_*\mathcal{O}_X(i), \mathcal{O}_Y[p]) = 0
+$$
+because $Rf_*\mathcal{O}_X(i) = 0$ by the same lemma.
+Hence the source and the target of $F_{i, p}(c)$ vanish
+and $F_{i, p}(c)$ is necessarily an isomorphism.
+This finishes the proof.
+\end{proof}
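As an illustration (a special case, not needed for the proof): take $Y = \Spec(k)$ for a field $k$ and $n = 1$, so $a(\mathcal{O}_Y) = \mathcal{O}_{\mathbf{P}^1_k}(-2)[1]$. For $d \geq 0$ the adjunction defining $a$ recovers classical Serre duality on $\mathbf{P}^1_k$:

```latex
\[
H^0(\mathbf{P}^1_k, \mathcal{O}(d))^\vee
= \Hom_{D(k)}\big(R\Gamma(\mathbf{P}^1_k, \mathcal{O}(d)), k\big)
= \Hom_{D(\mathcal{O}_{\mathbf{P}^1_k})}\big(\mathcal{O}(d), \mathcal{O}(-2)[1]\big)
= H^1(\mathbf{P}^1_k, \mathcal{O}(-2 - d))
\]
% The outer terms both have dimension d + 1 over k, as expected.
```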
+
+\begin{example}
+\label{example-base-change-wrong}
+The base change map (\ref{equation-base-change-map}) is not an
+isomorphism if $f$ is perfect proper and $g$ is perfect.
+Let $k$ be a field. Let $Y = \mathbf{A}^2_k$ and let $f : X \to Y$
+be the blowup of $Y$ in the origin. Denote $E \subset X$ the
+exceptional divisor. Then we can factor $f$ as
+$$
+X \xrightarrow{i} \mathbf{P}^1_Y \xrightarrow{p} Y
+$$
+This gives a factorization $a = c \circ b$ where
+$a$, $b$, and $c$ are the right adjoints of
+Lemma \ref{lemma-twisted-inverse-image}
+of $Rf_*$, $Rp_*$, and $Ri_*$. Denote $\mathcal{O}(n)$ the
+Serre twist of the structure sheaf on $\mathbf{P}^1_Y$ and
+denote $\mathcal{O}_X(n)$ its restriction to $X$.
+Note that $X \subset \mathbf{P}^1_Y$ is cut out by
+a degree one equation, hence $\mathcal{O}(X) = \mathcal{O}(1)$.
+By Lemma \ref{lemma-upper-shriek-P1} we have
+$b(\mathcal{O}_Y) = \mathcal{O}(-2)[1]$.
+By Lemma \ref{lemma-twisted-inverse-image-closed}
+we have
+$$
+a(\mathcal{O}_Y) = c(b(\mathcal{O}_Y)) =
+c(\mathcal{O}(-2)[1]) =
+R\SheafHom(\mathcal{O}_X, \mathcal{O}(-2)[1]) =
+\mathcal{O}_X(-1)
+$$
The last equality holds by
Lemma \ref{lemma-sheaf-with-exact-support-effective-Cartier}.
+Let $Y' = \Spec(k)$ be the origin in $Y$. The restriction of
+$a(\mathcal{O}_Y)$ to $X' = E = \mathbf{P}^1_k$
+is an invertible sheaf of degree $-1$ placed in cohomological
+degree $0$. But on the other hand,
+$a'(\mathcal{O}_{\Spec(k)}) = \mathcal{O}_E(-2)[1]$
+which is an invertible sheaf of degree $-2$ placed in
cohomological degree $-1$, so the two differ. In this example
the hypothesis of tor independence in Lemma \ref{lemma-more-base-change}
is violated.
+\end{example}
+
+\begin{lemma}
+\label{lemma-ext}
+Let $Y$ be a ringed space. Let $\mathcal{I} \subset \mathcal{O}_Y$
+be a sheaf of ideals. Set $\mathcal{O}_X = \mathcal{O}_Y/\mathcal{I}$ and
+$\mathcal{N} =
+\SheafHom_{\mathcal{O}_Y}(\mathcal{I}/\mathcal{I}^2, \mathcal{O}_X)$.
+There is a canonical isomorphism
+$c : \mathcal{N} \to
+\SheafExt^1_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+$.
+\end{lemma}
+
+\begin{proof}
+Consider the canonical short exact sequence
+\begin{equation}
+\label{equation-second-order-thickening}
+0 \to \mathcal{I}/\mathcal{I}^2 \to \mathcal{O}_Y/\mathcal{I}^2 \to
+\mathcal{O}_X \to 0
+\end{equation}
+Let $U \subset X$ be open and let $s \in \mathcal{N}(U)$. Then we can
+pushout (\ref{equation-second-order-thickening}) via $s$ to
+get an extension $E_s$ of $\mathcal{O}_X|_U$ by $\mathcal{O}_X|_U$.
+This in turn defines a section $c(s)$ of
+$\SheafExt^1_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)$
+over $U$.
+See Cohomology, Lemma \ref{cohomology-lemma-section-RHom-over-U}
+and Derived Categories, Lemma \ref{derived-lemma-ext-1}.
+Conversely, given an extension
+$$
+0 \to \mathcal{O}_X|_U \to \mathcal{E} \to \mathcal{O}_X|_U \to 0
+$$
+of $\mathcal{O}_U$-modules, we can find an open covering
+$U = \bigcup U_i$ and sections $e_i \in \mathcal{E}(U_i)$
+mapping to $1 \in \mathcal{O}_X(U_i)$. Then $e_i$ defines a map
+$\mathcal{O}_Y|_{U_i} \to \mathcal{E}|_{U_i}$ whose kernel
+contains $\mathcal{I}^2$. In this way we see that
+$\mathcal{E}|_{U_i}$ comes from a pushout as above.
+This shows that $c$ is surjective. We omit the proof
+of injectivity.
+\end{proof}
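A quick sanity check of the identification (an affine toy case, hypothetical and not part of the proof): take $Y = \Spec(k[x])$ and $\mathcal{I} = (x)$, so that $\mathcal{O}_X$ corresponds to $k = k[x]/(x)$. Both sides are then one-dimensional over $k$:

```latex
% Left side: the conormal module (x)/(x^2) is a 1-dimensional k-vector space, so
\[
\mathcal{N} = \Hom_{k[x]}\big((x)/(x^2), k\big) \cong k
\]
% Right side: resolve k by 0 \to k[x] \xrightarrow{x} k[x] \to k \to 0 and
% apply \Hom_{k[x]}(-, k); the induced differential is multiplication by x = 0:
\[
\text{Ext}^1_{k[x]}(k, k) = H^1\big(k \xrightarrow{0} k\big) \cong k
\]
```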
+
+\begin{lemma}
+\label{lemma-regular-ideal-ext}
+Let $Y$ be a ringed space. Let $\mathcal{I} \subset \mathcal{O}_Y$
+be a sheaf of ideals. Set $\mathcal{O}_X = \mathcal{O}_Y/\mathcal{I}$.
+If $\mathcal{I}$ is Koszul-regular
+(Divisors, Definition \ref{divisors-definition-regular-ideal-sheaf})
+then composition on $R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)$
+defines isomorphisms
+$$
+\wedge^i(\SheafExt^1_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X))
+\longrightarrow
+\SheafExt^i_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+$$
+for all $i$.
+\end{lemma}
+
+\begin{proof}
+By composition we mean the map
+$$
+R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+\otimes_{\mathcal{O}_Y}^\mathbf{L}
+R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+\longrightarrow
+R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+$$
+of Cohomology, Lemma \ref{cohomology-lemma-internal-hom-composition}.
+This induces multiplication maps
+$$
+\SheafExt^a_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+\otimes_{\mathcal{O}_Y}
+\SheafExt^b_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+\longrightarrow
+\SheafExt^{a + b}_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+$$
+Please compare with
+More on Algebra, Equation (\ref{more-algebra-equation-simple-tor-product}).
+The statement of the lemma means that the induced map
+$$
+\SheafExt^1_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+\otimes \ldots \otimes
+\SheafExt^1_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+\longrightarrow
+\SheafExt^i_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+$$
+factors through the wedge product and then induces an isomorphism.
+To see this is true we may work locally on $Y$. Hence we may assume
+that we have global sections $f_1, \ldots, f_r$ of $\mathcal{O}_Y$
+which generate $\mathcal{I}$ and which form a Koszul regular sequence.
+Denote
+$$
+\mathcal{A} = \mathcal{O}_Y\langle \xi_1, \ldots, \xi_r\rangle
+$$
+the sheaf of strictly commutative differential graded $\mathcal{O}_Y$-algebras
+which is a (divided power) polynomial algebra on
+$\xi_1, \ldots, \xi_r$ in degree $-1$ over $\mathcal{O}_Y$
+with differential $\text{d}$ given by the rule $\text{d}\xi_i = f_i$.
+Let us denote $\mathcal{A}^\bullet$ the underlying
+complex of $\mathcal{O}_Y$-modules which is the Koszul complex
+mentioned above. Thus the canonical map
+$\mathcal{A}^\bullet \to \mathcal{O}_X$
+is a quasi-isomorphism. We obtain quasi-isomorphisms
+$$
+R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X) \to
+\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{A}^\bullet) \to
+\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{O}_X)
+$$
+by Cohomology, Lemma \ref{cohomology-lemma-Rhom-strictly-perfect}.
+The differentials of the latter complex are zero, and hence
+$$
+\SheafExt^i_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+\cong \SheafHom_{\mathcal{O}_Y}(\mathcal{A}^{-i}, \mathcal{O}_X)
+$$
+For $j \in \{1, \ldots, r\}$ let $\delta_j : \mathcal{A} \to \mathcal{A}$
+be the derivation of degree $1$ with $\delta_j(\xi_i) = \delta_{ij}$
+(Kronecker delta). A computation shows that
$\delta_j \circ \text{d} = - \text{d} \circ \delta_j$, which shows that
we get a morphism of complexes
+$$
+\delta_j : \mathcal{A}^\bullet \to \mathcal{A}^\bullet[1].
+$$
+Whence $\delta_j$ defines a section of the corresponding
+$\SheafExt$-sheaf.
+Another computation shows that $\delta_1, \ldots, \delta_r$
+map to a basis for $\SheafHom_{\mathcal{O}_Y}(\mathcal{A}^{-1}, \mathcal{O}_X)$
+over $\mathcal{O}_X$.
+Since it is clear that $\delta_j \circ \delta_j = 0$
+and $\delta_j \circ \delta_{j'} = - \delta_{j'} \circ \delta_j$
+as endomorphisms of $\mathcal{A}$ and hence in the
+$\SheafExt$-sheaves
+we obtain the statement that our map above factors through
the exterior power. To see that we get the desired isomorphism,
the reader checks that the elements
+$$
+\delta_{j_1} \circ \ldots \circ \delta_{j_i}
+$$
+for $j_1 < \ldots < j_i$ map to a basis of the sheaf
+$\SheafHom_{\mathcal{O}_Y}(\mathcal{A}^{-i}, \mathcal{O}_X)$
+over $\mathcal{O}_X$.
+\end{proof}
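For concreteness, here is a sketch of the case $r = 2$ with the sign conventions above: if $f_1, f_2$ generate $\mathcal{I}$ and form a Koszul regular sequence, then $\mathcal{A}^\bullet$ and the resulting $\SheafExt$-sheaves are

```latex
% Koszul complex in degrees -2, -1, 0 with d\xi_i = f_i, hence
% d(\xi_1\xi_2) = f_1\xi_2 - f_2\xi_1:
\[
\mathcal{A}^\bullet : \quad
0 \to \mathcal{O}_Y
\xrightarrow{\ (-f_2,\, f_1)\ } \mathcal{O}_Y^{\oplus 2}
\xrightarrow{\ (f_1,\, f_2)\ } \mathcal{O}_Y \to 0
\]
% Applying \SheafHom^\bullet(-, \mathcal{O}_X) kills the differentials
% (the f_i map to zero in \mathcal{O}_X); the \delta_j give a basis in
% degree 1 and \delta_1 \circ \delta_2 a basis in degree 2:
\[
\SheafExt^1_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
= \mathcal{O}_X \delta_1 \oplus \mathcal{O}_X \delta_2,
\qquad
\SheafExt^2_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
= \mathcal{O}_X\,(\delta_1 \circ \delta_2)
= \wedge^2 \SheafExt^1_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
\]
```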
+
+\begin{lemma}
+\label{lemma-regular-immersion-ext}
+Let $Y$ be a ringed space. Let $\mathcal{I} \subset \mathcal{O}_Y$
+be a sheaf of ideals. Set $\mathcal{O}_X = \mathcal{O}_Y/\mathcal{I}$ and
+$\mathcal{N} =
+\SheafHom_{\mathcal{O}_Y}(\mathcal{I}/\mathcal{I}^2, \mathcal{O}_X)$.
+If $\mathcal{I}$ is Koszul-regular
+(Divisors, Definition \ref{divisors-definition-regular-ideal-sheaf}) then
+$$
+R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_Y) =
+\wedge^r \mathcal{N}[r]
+$$
+where $r : Y \to \{1, 2, 3, \ldots \}$ sends $y$ to
+the minimal number of generators of $\mathcal{I}$ needed in a neighbourhood
+of $y$.
+\end{lemma}
+
+\begin{proof}
+We can use Lemmas \ref{lemma-ext} and \ref{lemma-regular-ideal-ext}
+to see that we have isomorphisms
+$\wedge^i\mathcal{N} \to
+\SheafExt^i_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)$
+for $i \geq 0$. Thus it suffices to show that the map
+$\mathcal{O}_Y \to \mathcal{O}_X$ induces an isomorphism
+$$
+\SheafExt^r_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_Y)
+\longrightarrow
+\SheafExt^r_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_X)
+$$
+and that
+$\SheafExt^i_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_Y)$
+is zero for $i \not = r$. These statements are local on $Y$. Thus
+we may assume that we have global sections $f_1, \ldots, f_r$ of
+$\mathcal{O}_Y$ which generate $\mathcal{I}$ and which form a
+Koszul regular sequence. Let $\mathcal{A}^\bullet$
+be the Koszul complex on $f_1, \ldots, f_r$ as introduced in the proof of
+Lemma \ref{lemma-regular-ideal-ext}. Then
+$$
+R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_Y) =
+\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{O}_Y)
+$$
+by Cohomology, Lemma \ref{cohomology-lemma-Rhom-strictly-perfect}.
+Denote $1 \in H^0(\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{O}_Y))$
+the identity map of $\mathcal{A}^0 = \mathcal{O}_Y \to \mathcal{O}_Y$.
+With $\delta_j$ as in the proof of Lemma \ref{lemma-regular-ideal-ext}
+we get an isomorphism of graded $\mathcal{O}_Y$-modules
+$$
+\mathcal{O}_Y\langle \delta_1, \ldots, \delta_r\rangle
+\longrightarrow
+\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{O}_Y)
+$$
+by mapping $\delta_{j_1} \ldots \delta_{j_i}$ to
+$1 \circ \delta_{j_1} \circ \ldots \circ \delta_{j_i}$ in degree $i$.
+Via this isomorphism the differential on the right hand side
+induces a differential $\text{d}$ on the left hand side.
+By our sign rules we have $\text{d}(1) = - \sum f_j \delta_j$.
+Since $\delta_j : \mathcal{A}^\bullet \to \mathcal{A}^\bullet[1]$
+is a morphism of complexes, it follows that
+$$
+\text{d}(\delta_{j_1} \ldots \delta_{j_i}) =
+(- \sum f_j \delta_j )\delta_{j_1} \ldots \delta_{j_i}
+$$
+Observe that we have $\text{d} = \sum f_j \delta_j$ on the differential
+graded algebra $\mathcal{A}$. Therefore the map defined by the rule
+$$
+1 \circ \delta_{j_1} \ldots \delta_{j_i} \longmapsto
+(\delta_{j_1} \circ \ldots \circ \delta_{j_i})(\xi_1 \ldots \xi_r)
+$$
defines an isomorphism of complexes
$$
\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{O}_Y)
\longrightarrow \mathcal{A}^\bullet[-r]
$$
if $r$ is odd, and an isomorphism of graded sheaves commuting with the
differentials up to sign if $r$ is even.
+In any case these complexes have isomorphic cohomology, which shows the
+desired vanishing. The isomorphism on cohomology in degree $r$
+under the map
+$$
+\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{O}_Y)
+\longrightarrow
+\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{O}_X)
+$$
+also follows in a straightforward manner from this.
+(We observe that our choice of conventions regarding
+Koszul complexes does intervene in the definition
+of the isomorphism
$R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_Y) =
\wedge^r \mathcal{N}[-r]$.)
+\end{proof}
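\noindent
As an illustration of the simplest case of the lemma (a sketch):
suppose $\mathcal{I}$ is generated by a single global nonzerodivisor $f$,
so $r = 1$. The Koszul complex is
$\mathcal{A}^\bullet = (\mathcal{O}_Y \to \mathcal{O}_Y)$
sitting in degrees $-1$ and $0$ with differential multiplication by $f$.
Hence
$$
\SheafHom^\bullet(\mathcal{A}^\bullet, \mathcal{O}_Y)
= (\mathcal{O}_Y \to \mathcal{O}_Y)
$$
sits in degrees $0$ and $1$, again with differential multiplication by $f$
up to sign. Its cohomology is zero in degree $0$ because $f$ is a
nonzerodivisor, and is $\mathcal{O}_Y/f\mathcal{O}_Y = \mathcal{O}_X$
in degree $1$. Since
$\mathcal{N} = \SheafHom_{\mathcal{O}_Y}(\mathcal{I}/\mathcal{I}^2, \mathcal{O}_X)$
is free of rank $1$ on the basis dual to the class of $f$, we find that
$R\SheafHom_{\mathcal{O}_Y}(\mathcal{O}_X, \mathcal{O}_Y)$ is
$\mathcal{N}$ placed in cohomological degree $r = 1$.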
+
+\begin{lemma}
+\label{lemma-regular-immersion}
+Let $Y$ be a quasi-compact and quasi-separated scheme.
+Let $i : X \to Y$ be a Koszul-regular closed immersion.
+Let $a$ be the right adjoint of
+$Ri_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ of
+Lemma \ref{lemma-twisted-inverse-image}. Then there is an isomorphism
+$$
+\wedge^r\mathcal{N}[-r] \longrightarrow a(\mathcal{O}_Y)
+$$
+where
+$\mathcal{N} = \SheafHom_{\mathcal{O}_X}(\mathcal{C}_{X/Y}, \mathcal{O}_X)$
+is the normal sheaf of $i$
+(Morphisms, Section \ref{morphisms-section-conormal-sheaf})
+and $r$ is its rank viewed as a locally constant
+function on $X$.
+\end{lemma}
+
+\begin{proof}
+Recall, from Lemmas \ref{lemma-twisted-inverse-image-closed}
+and \ref{lemma-sheaf-with-exact-support-ext},
+that $a(\mathcal{O}_Y)$ is an object of $D_\QCoh(\mathcal{O}_X)$ whose
+pushforward to $Y$ is
+$R\SheafHom_{\mathcal{O}_Y}(i_*\mathcal{O}_X, \mathcal{O}_Y)$.
+Thus the result follows from Lemma \ref{lemma-regular-immersion-ext}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-proper}
+Let $S$ be a Noetherian scheme.
+Let $f : X \to S$ be a smooth proper morphism of relative dimension $d$.
+Let $a$ be the right adjoint of
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_S)$ as in
+Lemma \ref{lemma-twisted-inverse-image}. Then there is an isomorphism
+$$
+\wedge^d \Omega_{X/S}[d] \longrightarrow a(\mathcal{O}_S)
+$$
+in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Set $\omega_{X/S}^\bullet = a(\mathcal{O}_S)$ as in
+Remark \ref{remark-relative-dualizing-complex}.
+Let $c$ be the right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+$\Delta : X \to X \times_S X$. Because $\Delta$
+is the diagonal of a smooth morphism it is a
+Koszul-regular immersion, see Divisors, Lemma
+\ref{divisors-lemma-immersion-smooth-into-smooth-regular-immersion}.
+In particular, $\Delta$ is a perfect proper morphism
+(More on Morphisms, Lemma \ref{more-morphisms-lemma-regular-immersion-perfect})
+and we obtain
+\begin{align*}
+\mathcal{O}_X
+& =
+c(L\text{pr}_1^*\omega_{X/S}^\bullet) \\
+& =
+L\Delta^*(L\text{pr}_1^*\omega_{X/S}^\bullet)
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+c(\mathcal{O}_{X \times_S X}) \\
+& =
+\omega_{X/S}^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L}
+c(\mathcal{O}_{X \times_S X}) \\
+& =
+\omega_{X/S}^\bullet
+\otimes_{\mathcal{O}_X}^\mathbf{L}
+\wedge^d(\mathcal{N}_\Delta)[-d]
+\end{align*}
+The first equality is (\ref{equation-pre-rigid}) because
+$\omega_{X/S}^\bullet = a(\mathcal{O}_S)$. The second equality by
+Lemma \ref{lemma-compare-with-pullback-flat-proper-noetherian}.
+The third equality because $\text{pr}_1 \circ \Delta = \text{id}_X$.
+The fourth equality by Lemma \ref{lemma-regular-immersion}.
+Observe that $\wedge^d(\mathcal{N}_\Delta)$ is an invertible
+$\mathcal{O}_X$-module. Hence $\wedge^d(\mathcal{N}_\Delta)[-d]$
+is an invertible object of $D(\mathcal{O}_X)$ and we conclude that
+$a(\mathcal{O}_S) = \omega_{X/S}^\bullet = \wedge^d(\mathcal{C}_\Delta)[d]$.
+Since the conormal sheaf $\mathcal{C}_\Delta$ of $\Delta$ is
+$\Omega_{X/S}$ by
+Morphisms, Lemma \ref{morphisms-lemma-differentials-diagonal}
+the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+\section{Upper shriek functors}
+\label{section-upper-shriek}
+
+\noindent
+In this section, we construct the functors $f^!$ for morphisms
+between schemes which are of finite type and separated
+over a fixed Noetherian base using compactifications.
+As is customary in coherent duality, there are a number of diagrams
+that have to be shown to be commutative. We suggest the reader,
+after reading the construction, skips the verification of the
+lemmas and continues to the next section where we discuss
+properties of the upper shriek functors.
+
+\begin{situation}
+\label{situation-shriek}
+Here $S$ is a Noetherian scheme and $\textit{FTS}_S$ is the category whose
+\begin{enumerate}
+\item objects are schemes $X$ over $S$ such that the structure
+morphism $X \to S$ is both separated and of finite type, and
+\item morphisms $f : X \to Y$ between objects are morphisms
+of schemes over $S$.
+\end{enumerate}
+\end{situation}
+
+\noindent
+In Situation \ref{situation-shriek} given a morphism $f : X \to Y$
+in $\textit{FTS}_S$, we will define an exact functor
+$$
+f^! : D_\QCoh^+(\mathcal{O}_Y) \to D_\QCoh^+(\mathcal{O}_X)
+$$
+of triangulated categories. Namely, we choose a compactification
+$X \to \overline{X}$ over $Y$ which is possible by
+More on Flatness, Theorem \ref{flat-theorem-nagata} and
+Lemma \ref{flat-lemma-compactifyable}.
+Denote $\overline{f} : \overline{X} \to Y$
+the structure morphism. Let
+$\overline{a} : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_{\overline{X}})$
+be the right adjoint of $R\overline{f}_*$
+constructed in Lemma \ref{lemma-twisted-inverse-image}. Then we set
+$$
+f^!K = \overline{a}(K)|_X
+$$
+for $K \in D_\QCoh^+(\mathcal{O}_Y)$. The result is an object of
+$D_\QCoh^+(\mathcal{O}_X)$ by
+Lemma \ref{lemma-twisted-inverse-image-bounded-below}.
+
+\begin{lemma}
+\label{lemma-shriek-well-defined}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism
+of $\textit{FTS}_S$. The functor $f^!$ is, up to canonical isomorphism,
+independent of the choice of the compactification.
+\end{lemma}
+
+\begin{proof}
+The category of compactifications of $X$ over $Y$ is defined
+in More on Flatness, Section \ref{flat-section-compactify}.
+By More on Flatness, Theorem \ref{flat-theorem-nagata} and
+Lemma \ref{flat-lemma-compactifyable} it is nonempty.
+To every choice of a compactification
+$$
+j : X \to \overline{X},\quad \overline{f} : \overline{X} \to Y
+$$
+the construction above associates the functor $j^* \circ \overline{a} :
+D_\QCoh^+(\mathcal{O}_Y) \to D_\QCoh^+(\mathcal{O}_X)$
+where $\overline{a}$ is the right adjoint of $R\overline{f}_*$
+constructed in Lemma \ref{lemma-twisted-inverse-image}.
+
+\medskip\noindent
+Suppose given a morphism $g : \overline{X}_1 \to \overline{X}_2$
+between compactifications $j_i : X \to \overline{X}_i$ over $Y$
+such that $g^{-1}(j_2(X)) = j_1(X)$\footnote{This may fail
+with our definition of compactification. See
+More on Flatness, Section \ref{flat-section-compactify}.}.
+Let $\overline{c}$ be the right adjoint of
+Lemma \ref{lemma-twisted-inverse-image} for $g$.
+Then $\overline{c} \circ \overline{a}_2 = \overline{a}_1$
+because these functors are adjoint to
+$R\overline{f}_{2, *} \circ Rg_* = R(\overline{f}_2 \circ g)_*$.
+By (\ref{equation-sheafy}) we have a canonical transformation
+$$
+j_1^* \circ \overline{c} \longrightarrow j_2^*
+$$
+of functors
+$D^+_\QCoh(\mathcal{O}_{\overline{X}_2}) \to D^+_\QCoh(\mathcal{O}_X)$
+which is an isomorphism by Lemma \ref{lemma-proper-noetherian}.
+The composition
+$$
+j_1^* \circ \overline{a}_1 \longrightarrow
+j_1^* \circ \overline{c} \circ \overline{a}_2 \longrightarrow
+j_2^* \circ \overline{a}_2
+$$
+is an isomorphism of functors which we will denote by $\alpha_g$.
+
+\medskip\noindent
+Consider two compactifications $j_i : X \to \overline{X}_i$, $i = 1, 2$
+of $X$ over $Y$. By More on Flatness, Lemma
+\ref{flat-lemma-compactifications-cofiltered} part (b)
+we can find a compactification $j : X \to \overline{X}$
+with dense image and morphisms
$g_i : \overline{X} \to \overline{X}_i$ of compactifications.
+By More on Flatness, Lemma
+\ref{flat-lemma-compactifications-cofiltered} part (c)
we have $g_i^{-1}(j_i(X)) = j(X)$. Hence we get isomorphisms
+$$
+\alpha_{g_i} :
+j^* \circ \overline{a}
+\longrightarrow
+j_i^* \circ \overline{a}_i
+$$
+by the previous paragraph. We obtain an isomorphism
+$$
+\alpha_{g_2} \circ \alpha_{g_1}^{-1} :
+j_1^* \circ \overline{a}_1 \to j_2^* \circ \overline{a}_2
+$$
+To finish the proof we have to show that these isomorphisms are well defined.
We claim it suffices to show that the composition of two of the
isomorphisms constructed in the previous paragraph is again such an
isomorphism (for a precise statement see the next paragraph). We suggest
the reader check this on a napkin, but we will also spell it out
completely in the rest of this paragraph.
+Namely, consider a second choice of a compactification
+$j' : X \to \overline{X}'$ with dense image
+and morphisms of compactifications $g'_i : \overline{X}' \to \overline{X}_i$.
+By More on Flatness, Lemma \ref{flat-lemma-compactifications-cofiltered}
+we can find a compactification $j'' : X \to \overline{X}''$
+with dense image and morphisms of compactifications
+$h : \overline{X}'' \to \overline{X}$ and
+$h' : \overline{X}'' \to \overline{X}'$. We may even assume
+$g_1 \circ h = g'_1 \circ h'$ and $g_2 \circ h = g'_2 \circ h'$.
+The result of the next paragraph gives
+$$
+\alpha_{g_i} \circ \alpha_h = \alpha_{g_i \circ h} =
+\alpha_{g'_i \circ h'} = \alpha_{g'_i} \circ \alpha_{h'}
+$$
+for $i = 1, 2$. Since these are all isomorphisms of functors
+we conclude that $\alpha_{g_2} \circ \alpha_{g_1}^{-1} =
+\alpha_{g'_2} \circ \alpha_{g'_1}^{-1}$ as desired.
+
+\medskip\noindent
+Suppose given compactifications $j_i : X \to \overline{X}_i$
+for $i = 1, 2, 3$. Suppose given morphisms
+$g : \overline{X}_1 \to \overline{X}_2$ and
+$h : \overline{X}_2 \to \overline{X}_3$ of compactifications
such that $g^{-1}(j_2(X)) = j_1(X)$ and $h^{-1}(j_3(X)) = j_2(X)$.
+Let $\overline{a}_i$ be as above. The claim above means that
+$$
\alpha_h \circ \alpha_g = \alpha_{h \circ g} :
+j_1^* \circ \overline{a}_1 \to j_3^* \circ \overline{a}_3
+$$
+Let $\overline{c}$, resp.\ $\overline{d}$ be the right adjoint of
+Lemma \ref{lemma-twisted-inverse-image} for $g$, resp.\ $h$.
+Then $\overline{c} \circ \overline{a}_2 = \overline{a}_1$ and
+$\overline{d} \circ \overline{a}_3 = \overline{a}_2$
+and there are canonical transformations
+$$
+j_1^* \circ \overline{c} \longrightarrow j_2^*
+\quad\text{and}\quad
+j_2^* \circ \overline{d} \longrightarrow j_3^*
+$$
+of functors
+$D^+_\QCoh(\mathcal{O}_{\overline{X}_2}) \to D^+_\QCoh(\mathcal{O}_X)$
+and
+$D^+_\QCoh(\mathcal{O}_{\overline{X}_3}) \to D^+_\QCoh(\mathcal{O}_X)$
+for the same reasons as above. Denote $\overline{e}$ the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image}
+for $h \circ g$. There is a canonical transformation
+$$
+j_1^* \circ \overline{e} \longrightarrow j_3^*
+$$
+of functors
+$D^+_\QCoh(\mathcal{O}_{\overline{X}_3}) \to D^+_\QCoh(\mathcal{O}_X)$
+given by (\ref{equation-sheafy}). Spelling things out we have to
+show that the composition
+$$
+\alpha_h \circ \alpha_g :
+j_1^* \circ \overline{a}_1 \to
+j_1^* \circ \overline{c} \circ \overline{a}_2 \to
+j_2^* \circ \overline{a}_2 \to
+j_2^* \circ \overline{d} \circ \overline{a}_3 \to
+j_3^* \circ \overline{a}_3
+$$
+is the same as the composition
+$$
+\alpha_{h \circ g} :
+j_1^* \circ \overline{a}_1 \to
+j_1^* \circ \overline{e} \circ \overline{a}_3 \to
+j_3^* \circ \overline{a}_3
+$$
+We split this into two parts. The first is to show that the diagram
+$$
+\xymatrix{
+\overline{a}_1 \ar[r] \ar[d] & \overline{c} \circ \overline{a}_2 \ar[d] \\
+\overline{e} \circ \overline{a}_3 \ar[r] &
+\overline{c} \circ \overline{d} \circ \overline{a}_3
+}
+$$
+commutes where the lower horizontal arrow comes from the identification
+$\overline{e} = \overline{c} \circ \overline{d}$. This is true
+because the corresponding diagram of total direct image functors
+$$
+\xymatrix{
+R\overline{f}_{1, *} \ar[r] \ar[d] & Rg_* \circ R\overline{f}_{2, *} \ar[d] \\
+R(h \circ g)_* \circ R\overline{f}_{3, *} \ar[r] &
+Rg_* \circ Rh_* \circ R\overline{f}_{3, *}
+}
+$$
+is commutative (insert future reference here). The second part
+is to show that the composition
+$$
+j_1^* \circ \overline{c} \circ \overline{d} \to
+j_2^* \circ \overline{d} \to j_3^*
+$$
+is equal to the map
+$$
+j_1^* \circ \overline{e} \to j_3^*
+$$
+via the identification $\overline{e} = \overline{c} \circ \overline{d}$.
+This was proven in Lemma \ref{lemma-compose-base-change-maps}
+(note that in the current case the morphisms $f', g'$ of that
+lemma are equal to $\text{id}_X$).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-upper-shriek-composition}
+In Situation \ref{situation-shriek} let $f : X \to Y$ and $g : Y \to Z$
+be composable morphisms of $\textit{FTS}_S$. Then there is a canonical
+isomorphism $(g \circ f)^! \to f^! \circ g^!$.
+\end{lemma}
+
+\begin{proof}
+Choose a compactification $i : Y \to \overline{Y}$ of $Y$ over $Z$.
+Choose a compactification $X \to \overline{X}$ of $X$ over
+$\overline{Y}$. This uses More on Flatness, Theorem \ref{flat-theorem-nagata}
+and Lemma \ref{flat-lemma-compactifyable} twice.
+Let $\overline{a}$ be the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+$\overline{X} \to \overline{Y}$ and let $\overline{b}$
+be the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+$\overline{Y} \to Z$.
+Then $\overline{a} \circ \overline{b}$ is the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+the composition $\overline{X} \to Z$.
+Hence $g^! = i^* \circ \overline{b}$ and
+$(g \circ f)^! = (X \to \overline{X})^* \circ \overline{a} \circ \overline{b}$.
+Let $U$ be the inverse image of $Y$ in $\overline{X}$
+so that we get the commutative diagram
+$$
+\xymatrix{
+X \ar[r]_j \ar[d] & U \ar[dl] \ar[r]_{j'} & \overline{X} \ar[dl] \\
+Y \ar[r]_i \ar[d] & \overline{Y} \ar[dl] \\
+Z
+}
+$$
+Let $\overline{a}'$ be the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+$U \to Y$.
+Then $f^! = j^* \circ \overline{a}'$. We obtain
+$$
+\gamma : (j')^* \circ \overline{a} \to \overline{a}' \circ i^*
+$$
+by (\ref{equation-sheafy}) and we can use it to define
+$$
+(g \circ f)^! =
+(j' \circ j)^* \circ \overline{a} \circ \overline{b} =
+j^* \circ (j')^* \circ \overline{a} \circ \overline{b}
+\to
+j^* \circ \overline{a}' \circ i^* \circ \overline{b} =
+f^! \circ g^!
+$$
+which is an isomorphism on objects of $D_\QCoh^+(\mathcal{O}_Z)$ by
+Lemma \ref{lemma-proper-noetherian}. To finish the proof we show that
+this isomorphism is independent of choices made.
+
+\medskip\noindent
+Suppose we have two diagrams
+$$
+\vcenter{
+\xymatrix{
+X \ar[r]_{j_1} \ar[d] & U_1 \ar[dl] \ar[r]_{j'_1} & \overline{X}_1 \ar[dl] \\
+Y \ar[r]_{i_1} \ar[d] & \overline{Y}_1 \ar[dl] \\
+Z
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+X \ar[r]_{j_2} \ar[d] & U_2 \ar[dl] \ar[r]_{j'_2} & \overline{X}_2 \ar[dl] \\
+Y \ar[r]_{i_2} \ar[d] & \overline{Y}_2 \ar[dl] \\
+Z
+}
+}
+$$
+We can first choose a compactification $i : Y \to \overline{Y}$
+with dense image of $Y$ over $Z$ which dominates both
+$\overline{Y}_1$ and $\overline{Y}_2$,
+see More on Flatness, Lemma \ref{flat-lemma-compactifications-cofiltered}.
+By More on Flatness, Lemma \ref{flat-lemma-right-multiplicative-system} and
+Categories, Lemmas \ref{categories-lemma-morphisms-right-localization} and
+\ref{categories-lemma-equality-morphisms-right-localization}
+we can choose a compactification $X \to \overline{X}$ with dense image of
+$X$ over $\overline{Y}$ with morphisms $\overline{X} \to \overline{X}_1$
+and $\overline{X} \to \overline{X}_2$ and such that the composition
+$\overline{X} \to \overline{Y} \to \overline{Y}_1$ is equal to
+the composition $\overline{X} \to \overline{X}_1 \to \overline{Y}_1$
+and such that the composition
+$\overline{X} \to \overline{Y} \to \overline{Y}_2$ is equal to
+the composition $\overline{X} \to \overline{X}_2 \to \overline{Y}_2$.
+Thus we see that it suffices to compare the maps
+determined by our diagrams when we have a commutative diagram
+as follows
+$$
+\xymatrix{
+X \ar[rr]_{j_1} \ar@{=}[d] & &
+U_1 \ar[d] \ar[ddll] \ar[rr]_{j'_1} & &
+\overline{X}_1 \ar[d] \ar[ddll] \\
+X \ar'[r][rr]^-{j_2} \ar[d] & &
+U_2 \ar'[dl][ddll] \ar'[r][rr]^-{j'_2} & &
+\overline{X}_2 \ar[ddll] \\
+Y \ar[rr]^{i_1} \ar@{=}[d] & & \overline{Y}_1 \ar[d] \\
+Y \ar[rr]^{i_2} \ar[d] & & \overline{Y}_2 \ar[dll] \\
+Z
+}
+$$
and moreover the compactifications $X \to \overline{X}_1$ and
$Y \to \overline{Y}_1$ have dense image.
We use $\overline{a}_i$, $\overline{a}'_i$, $\overline{b}_i$,
$\overline{c}$, $\overline{c}'$, and $\overline{d}$ for the
right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
$\overline{X}_i \to \overline{Y}_i$, $U_i \to Y$,
$\overline{Y}_i \to Z$,
$\overline{X}_1 \to \overline{X}_2$, $U_1 \to U_2$, and
$\overline{Y}_1 \to \overline{Y}_2$ respectively.
+Each of the squares
+$$
+\xymatrix{
+X \ar[r] \ar[d] \ar@{}[dr]|A & U_1 \ar[d] \\
+X \ar[r] & U_2
+}
+\quad
+\xymatrix{
+U_2 \ar[r] \ar[d] \ar@{}[dr]|B & \overline{X}_2 \ar[d] \\
+Y \ar[r] & \overline{Y}_2
+}
+\quad
+\xymatrix{
+U_1 \ar[r] \ar[d] \ar@{}[dr]|C & \overline{X}_1 \ar[d] \\
+Y \ar[r] & \overline{Y}_1
+}
+\quad
+\xymatrix{
+Y \ar[r] \ar[d] \ar@{}[dr]|D & \overline{Y}_1 \ar[d] \\
+Y \ar[r] & \overline{Y}_2
+}
+\quad
+\xymatrix{
+X \ar[r] \ar[d] \ar@{}[dr]|E & \overline{X}_1 \ar[d] \\
+X \ar[r] & \overline{X}_2
+}
+$$
+is cartesian (see
+More on Flatness, Lemma \ref{flat-lemma-compactifications-cofiltered} part (c)
+for A, D, E and recall that $U_i$ is the inverse image of $Y$
+by $\overline{X}_i \to \overline{Y}_i$ for B, C) and hence
+gives rise to a base change map (\ref{equation-sheafy}) as follows
+$$
+\begin{matrix}
+\gamma_A : j_1^* \circ \overline{c}' \to j_2^* &
+\gamma_B : (j_2')^* \circ \overline{a}_2 \to \overline{a}'_2 \circ i_2^* &
+\gamma_C : (j_1')^* \circ \overline{a}_1 \to \overline{a}'_1 \circ i_1^* \\
+\gamma_D : i_1^* \circ \overline{d} \to i_2^* &
+\gamma_E : (j'_1 \circ j_1)^* \circ \overline{c} \to (j'_2 \circ j_2)^*
+\end{matrix}
+$$
+Denote $f_1^! = j_1^* \circ \overline{a}'_1$,
+$f_2^! = j_2^* \circ \overline{a}'_2$,
+$g_1^! = i_1^* \circ \overline{b}_1$,
+$g_2^! = i_2^* \circ \overline{b}_2$,
+$(g \circ f)_1^! =
+(j_1' \circ j_1)^* \circ \overline{a}_1 \circ \overline{b}_1$, and
+$(g \circ f)^!_2 =
+(j_2' \circ j_2)^* \circ \overline{a}_2 \circ \overline{b}_2$.
+The construction given in the first paragraph of the proof
+and in Lemma \ref{lemma-shriek-well-defined} uses
+\begin{enumerate}
+\item $\gamma_C$ for the map $(g \circ f)^!_1 \to f_1^! \circ g_1^!$,
+\item $\gamma_B$ for the map $(g \circ f)^!_2 \to f_2^! \circ g_2^!$,
+\item $\gamma_A$ for the map $f_1^! \to f_2^!$,
+\item $\gamma_D$ for the map $g_1^! \to g_2^!$, and
+\item $\gamma_E$ for the map $(g \circ f)^!_1 \to (g \circ f)^!_2$.
+\end{enumerate}
+We have to show that the diagram
+$$
+\xymatrix{
+(g \circ f)^!_1 \ar[r]_{\gamma_E} \ar[d]_{\gamma_C} &
+(g \circ f)^!_2 \ar[d]_{\gamma_B} \\
+f_1^! \circ g_1^! \ar[r]^{\gamma_A \circ \gamma_D} & f_2^! \circ g_2^!
+}
+$$
+is commutative. We will use
+Lemmas \ref{lemma-compose-base-change-maps} and
+\ref{lemma-compose-base-change-maps-horizontal}
+and with (abuse of) notation as in
+Remark \ref{remark-going-around} (in particular
+dropping $\star$ products with identity transformations
+from the notation).
+We can write $\gamma_E = \gamma_A \circ \gamma_F$ where
+$$
+\xymatrix{
+U_1 \ar[r] \ar[d] \ar@{}[rd]|F & \overline{X}_1 \ar[d] \\
+U_2 \ar[r] & \overline{X}_2
+}
+$$
+Thus we see that
+$$
+\gamma_B \circ \gamma_E = \gamma_B \circ \gamma_A \circ \gamma_F
+= \gamma_A \circ \gamma_B \circ \gamma_F
+$$
+the last equality because the two squares $A$ and $B$ only
+intersect in one point (similar to the last argument in
+Remark \ref{remark-going-around}). Thus it suffices to prove that
+$\gamma_D \circ \gamma_C = \gamma_B \circ \gamma_F$.
+Since both of these are equal to the map (\ref{equation-sheafy})
+for the square
+$$
+\xymatrix{
+U_1 \ar[r] \ar[d] & \overline{X}_1 \ar[d] \\
+Y \ar[r] & \overline{Y}_2
+}
+$$
+we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pseudo-functor}
+In Situation \ref{situation-shriek} the constructions of
+Lemmas \ref{lemma-shriek-well-defined} and \ref{lemma-upper-shriek-composition}
+define a pseudo functor from the category $\textit{FTS}_S$
+into the $2$-category of categories (see Categories, Definition
+\ref{categories-definition-functor-into-2-category}).
+\end{lemma}
+
+\begin{proof}
+To show this we have to prove given morphisms
+$f : X \to Y$, $g : Y \to Z$, $h : Z \to T$
+that
+$$
+\xymatrix{
+(h \circ g \circ f)^! \ar[r]_{\gamma_{A + B}} \ar[d]_{\gamma_{B + C}} &
+f^! \circ (h \circ g)^! \ar[d]^{\gamma_C} \\
+(g \circ f)^! \circ h^! \ar[r]^{\gamma_A} & f^! \circ g^! \circ h^!
+}
+$$
+is commutative (for the meaning of the $\gamma$'s, see below).
+To do this we choose a compactification $\overline{Z}$
+of $Z$ over $T$, then a compactification $\overline{Y}$ of $Y$ over
+$\overline{Z}$, and then a compactification $\overline{X}$ of
+$X$ over $\overline{Y}$. This uses
+More on Flatness, Theorem \ref{flat-theorem-nagata} and
+Lemma \ref{flat-lemma-compactifyable}.
+Let $W \subset \overline{Y}$ be the inverse image of $Z$ under
+$\overline{Y} \to \overline{Z}$ and let $U \subset V \subset \overline{X}$
+be the inverse images of $Y \subset W$ under $\overline{X} \to \overline{Y}$.
+This produces the following diagram
+$$
+\xymatrix{
+X \ar[d]_f \ar[r] & U \ar[r] \ar[d] \ar@{}[dr]|A &
+V \ar[d] \ar[r] \ar@{}[rd]|B & \overline{X} \ar[d] \\
+Y \ar[d]_g \ar[r] & Y \ar[r] \ar[d] & W \ar[r] \ar[d] \ar@{}[rd]|C &
+\overline{Y} \ar[d] \\
+Z \ar[d]_h \ar[r] & Z \ar[d] \ar[r] & Z \ar[d] \ar[r] & \overline{Z} \ar[d] \\
+T \ar[r] & T \ar[r] & T \ar[r] & T
+}
+$$
+Without introducing tons of notation but arguing exactly
+as in the proof of Lemma \ref{lemma-upper-shriek-composition}
+we see that the maps in the first displayed diagram use the
+maps (\ref{equation-sheafy}) for the rectangles
+$A + B$, $B + C$, $A$, and $C$ as indicated. Since by
+Lemmas \ref{lemma-compose-base-change-maps} and
+\ref{lemma-compose-base-change-maps-horizontal}
+we have $\gamma_{A + B} = \gamma_A \circ \gamma_B$ and
+$\gamma_{B + C} = \gamma_C \circ \gamma_B$ we conclude
+that the desired equality holds provided
+$\gamma_A \circ \gamma_C = \gamma_C \circ \gamma_A$.
+This is true because the two squares $A$ and $C$ only
+intersect in one point (similar to the last argument in
+Remark \ref{remark-going-around}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-map-pullback-to-shriek-well-defined}
+In Situation \ref{situation-shriek} let
+$f : X \to Y$ be a morphism of $\textit{FTS}_S$. There are canonical maps
+$$
+\mu_{f, K} :
+Lf^*K \otimes_{\mathcal{O}_X}^\mathbf{L} f^!\mathcal{O}_Y
+\longrightarrow
+f^!K
+$$
+functorial in $K$ in $D^+_\QCoh(\mathcal{O}_Y)$.
+If $g : Y \to Z$ is another morphism of $\textit{FTS}_S$, then
+the diagram
+$$
+\xymatrix{
+Lf^*(Lg^*K \otimes_{\mathcal{O}_Y}^\mathbf{L} g^!\mathcal{O}_Z)
+\otimes_{\mathcal{O}_X}^\mathbf{L} f^!\mathcal{O}_Y
+\ar@{=}[d] \ar[r]_-{\mu_f} &
+f^!(Lg^*K \otimes_{\mathcal{O}_Y}^\mathbf{L} g^!\mathcal{O}_Z)
+\ar[r]_-{f^!\mu_g} &
+f^!g^!K \ar@{=}[d] \\
+Lf^*Lg^*K \otimes_{\mathcal{O}_X}^\mathbf{L} Lf^* g^!\mathcal{O}_Z
+\otimes_{\mathcal{O}_X}^\mathbf{L} f^!\mathcal{O}_Y \ar[r]^-{\mu_f} &
+Lf^*Lg^*K \otimes_{\mathcal{O}_X}^\mathbf{L} f^!g^!\mathcal{O}_Z
+\ar[r]^-{\mu_{g \circ f}} & f^!g^!K
+}
+$$
+commutes for all $K \in D^+_\QCoh(\mathcal{O}_Z)$.
+\end{lemma}
+
+\begin{proof}
+If $f$ is proper, then $f^! = a$ and we can use
+(\ref{equation-compare-with-pullback}) and if $g$ is also proper,
+then Lemma \ref{lemma-transitivity-compare-with-pullback} proves
+the commutativity of the diagram (in greater generality).
+
+\medskip\noindent
+Let us define the map $\mu_{f, K}$. Choose a compactification
+$j : X \to \overline{X}$ of $X$ over $Y$. Since $f^!$ is defined
+as $j^* \circ \overline{a}$ we obtain $\mu_{f, K}$ as the restriction
+of the map (\ref{equation-compare-with-pullback})
+$$
+L\overline{f}^*K \otimes_{\mathcal{O}_{\overline{X}}}^\mathbf{L}
+\overline{a}(\mathcal{O}_Y)
+\longrightarrow
+\overline{a}(K)
+$$
+to $X$. To see this is independent of the choice of the compactification
+we argue as in the proof of Lemma \ref{lemma-shriek-well-defined}.
+We urge the reader to read the proof of that lemma first.
+
+\medskip\noindent
+Assume given a morphism $g : \overline{X}_1 \to \overline{X}_2$
+between compactifications $j_i : X \to \overline{X}_i$ over $Y$
+such that $g^{-1}(j_2(X)) = j_1(X)$. Denote $\overline{c}$ the
+right adjoint for pushforward of Lemma \ref{lemma-twisted-inverse-image}
+for the morphism $g$. The maps
+$$
L\overline{f}_1^*K \otimes_{\mathcal{O}_{\overline{X}_1}}^\mathbf{L}
+\overline{a}_1(\mathcal{O}_Y)
+\longrightarrow
+\overline{a}_1(K)
+\quad\text{and}\quad
L\overline{f}_2^*K \otimes_{\mathcal{O}_{\overline{X}_2}}^\mathbf{L}
+\overline{a}_2(\mathcal{O}_Y)
+\longrightarrow
+\overline{a}_2(K)
+$$
+fit into the commutative diagram
+$$
+\xymatrix{
+Lg^*(L\overline{f}_2^*K \otimes^\mathbf{L}
+\overline{a}_2(\mathcal{O}_Y))
+\otimes^\mathbf{L} \overline{c}(\mathcal{O}_{\overline{X}_2})
+\ar@{=}[d] \ar[r]_-\sigma &
+\overline{c}(L\overline{f}_2^*K \otimes^\mathbf{L}
+\overline{a}_2(\mathcal{O}_Y)) \ar[r] &
+\overline{c}(\overline{a}_2(K)) \ar@{=}[d] \\
+L\overline{f}_1^*K \otimes^\mathbf{L} Lg^*\overline{a}_2(\mathcal{O}_Y)
+\otimes^\mathbf{L} \overline{c}(\mathcal{O}_{\overline{X}_2})
+\ar[r]^-{1 \otimes \tau} &
+L\overline{f}_1^*K \otimes^\mathbf{L} \overline{a}_1(\mathcal{O}_Y) \ar[r] &
+\overline{a}_1(K)
+}
+$$
+by Lemma \ref{lemma-transitivity-compare-with-pullback}. By
+Lemma \ref{lemma-compare-on-open} the maps $\sigma$ and $\tau$
+restrict to an isomorphism over $X$. In fact, we can say more.
+Recall that in the proof of Lemma \ref{lemma-shriek-well-defined} we used
+the map (\ref{equation-sheafy}) $\gamma : j_1^* \circ \overline{c} \to j_2^*$
+to construct our isomorphism
+$\alpha_g : j_1^* \circ \overline{a}_1 \to j_2^* \circ \overline{a}_2$.
Pulling back the map $\sigma$ by $j_1$ we obtain the identity
+map on $j_2^*\left(L\overline{f}_2^*K \otimes^\mathbf{L}
+\overline{a}_2(\mathcal{O}_Y)\right)$ if we identify
+$j_1^*\overline{c}(\mathcal{O}_{\overline{X}_2})$
+with $\mathcal{O}_X$ via $j_1^* \circ \overline{c} \to j_2^*$, see
+Lemma \ref{lemma-restriction-compare-with-pullback}.
+Similarly, the map $\tau : Lg^*\overline{a}_2(\mathcal{O}_Y)
+\otimes^\mathbf{L} \overline{c}(\mathcal{O}_{\overline{X}_2}) \to
+\overline{a}_1(\mathcal{O}_Y) = \overline{c}(\overline{a}_2(\mathcal{O}_Y))$
+pulls back to the identity map on $j_2^*\overline{a}_2(\mathcal{O}_Y)$.
+We conclude that pulling back by $j_1$ and applying $\gamma$ wherever
+we can we obtain a commutative diagram
+$$
+\xymatrix{
+j_2^*\left(L\overline{f}_2^*K \otimes^\mathbf{L}
+\overline{a}_2(\mathcal{O}_Y)\right) \ar[r] \ar[d] &
+j_2^*\overline{a}_2(K) \\
+j_1^*L\overline{f}_1^*K \otimes^\mathbf{L} j_2^*\overline{a}_2(\mathcal{O}_Y) &
+j_1^*(L\overline{f}_1^*K \otimes^\mathbf{L} \overline{a}_1(\mathcal{O}_Y))
+\ar[r] \ar[l]_{1 \otimes \alpha_g} &
+j_1^* \overline{a}_1(K) \ar[lu]_{\alpha_g}
+}
+$$
+The commutativity of this diagram exactly tells us that the map
+$\mu_{f, K}$ constructed using the compactification $\overline{X}_1$
+is the same as the map $\mu_{f, K}$ constructed using the compactification
+$\overline{X}_2$ via the identification $\alpha_g$ used in the proof
+of Lemma \ref{lemma-shriek-well-defined}. Some categorical arguments
+exactly as in the proof of Lemma \ref{lemma-shriek-well-defined}
+now show that $\mu_{f, K}$ is well defined (small detail omitted).
+
+\medskip\noindent
+Having said this, the commutativity of the diagram in the statement
+of our lemma follows from the construction of the isomorphism
+$(g \circ f)^! \to f^! \circ g^!$ (first part of the proof of
+Lemma \ref{lemma-upper-shriek-composition} using
+$\overline{X} \to \overline{Y} \to Z$) and the result
+of Lemma \ref{lemma-transitivity-compare-with-pullback}
+for $\overline{X} \to \overline{Y} \to Z$.
+\end{proof}
+
+
+
+\section{Properties of upper shriek functors}
+\label{section-upper-shriek-properties}
+
+\noindent
+Here are some properties of the upper shriek functors.
+
+\begin{lemma}
+\label{lemma-shriek-open-immersion}
+In Situation \ref{situation-shriek} let $Y$ be an object
+of $\textit{FTS}_S$ and let $j : X \to Y$ be an open immersion.
+Then there is a canonical isomorphism $j^! = j^*$ of functors.
+\end{lemma}
+
+\noindent
+For an \'etale morphism $f : X \to Y$ of $\textit{FTS}_S$
+we also have $f^* \cong f^!$, see Lemma \ref{lemma-shriek-etale}.
+
+\begin{proof}
+In this case we may choose $\overline{X} = Y$ as our compactification.
+Then the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+$\text{id} : Y \to Y$ is the
+identity functor and hence $j^! = j^*$ by definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restrict-before-or-after}
+In Situation \ref{situation-shriek} let
+$$
+\xymatrix{
+U \ar[r]_j \ar[d]_g & X \ar[d]^f \\
+V \ar[r]^{j'} & Y
+}
+$$
+be a commutative diagram of $\textit{FTS}_S$ where $j$ and $j'$ are
+open immersions. Then $j^* \circ f^! = g^! \circ (j')^*$ as functors
+$D^+_\QCoh(\mathcal{O}_Y) \to D^+(\mathcal{O}_U)$.
+\end{lemma}
+
+\begin{proof}
+Let $h = f \circ j = j' \circ g$. By
+Lemma \ref{lemma-upper-shriek-composition} we have
+$h^! = j^! \circ f^! = g^! \circ (j')^!$. By
+Lemma \ref{lemma-shriek-open-immersion}
+we have $j^! = j^*$ and $(j')^! = (j')^*$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-affine-line}
+In Situation \ref{situation-shriek} let $Y$ be an object of $\textit{FTS}_S$
+and let $f : X = \mathbf{A}^1_Y \to Y$ be
+the projection. Then there is a (noncanonical) isomorphism
+$f^!(-) \cong Lf^*(-) [1]$ of functors.
+\end{lemma}
+
+\begin{proof}
+Since $X = \mathbf{A}^1_Y \subset \mathbf{P}^1_Y$
+and since $\mathcal{O}_{\mathbf{P}^1_Y}(-2)|_X \cong \mathcal{O}_X$
+this follows from Lemmas \ref{lemma-upper-shriek-P1} and
+\ref{lemma-compare-with-pullback-flat-proper-noetherian}.
+\end{proof}
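
\medskip\noindent
For example, suppose $Y = \Spec(R)$ is affine, so that
$X = \mathbf{A}^1_Y = \Spec(R[x])$. If $E \in D^+_\QCoh(\mathcal{O}_Y)$
corresponds to $K \in D^+(R)$, then $f^!E$ corresponds to
$$
K \otimes_R^\mathbf{L} R[x]\, [1]
$$
in $D^+(R[x])$; see Remark \ref{remark-local-calculation-shriek} below
for the general local description. In particular
$f^!\mathcal{O}_Y \cong \mathcal{O}_X[1]$.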
+
+\begin{lemma}
+\label{lemma-shriek-closed-immersion}
+In Situation \ref{situation-shriek} let $Y$ be an object of
+$\textit{FTS}_S$ and let $i : X \to Y$ be a closed immersion.
+Then there is a canonical isomorphism
+$i^!(-) = R\SheafHom(\mathcal{O}_X, -)$ of functors.
+\end{lemma}
+
+\begin{proof}
+This is a restatement of Lemma \ref{lemma-twisted-inverse-image-closed}.
+\end{proof}
+
+\begin{remark}[Local description upper shriek]
+\label{remark-local-calculation-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Using the lemmas above we can compute
+$f^!$ locally as follows. Suppose that we are given affine opens
+$$
+\xymatrix{
+U \ar[r]_j \ar[d]_g & X \ar[d]^f \\
+V \ar[r]^i & Y
+}
+$$
+Since $j^! \circ f^! = g^! \circ i^!$
+(Lemma \ref{lemma-upper-shriek-composition})
+and since $j^!$ and $i^!$ are given by restriction
+(Lemma \ref{lemma-shriek-open-immersion})
+we see that
+$$
+(f^!E)|_U = g^!(E|_V)
+$$
for any $E \in D^+_\QCoh(\mathcal{O}_Y)$. Write
+$U = \Spec(A)$ and $V = \Spec(R)$ and let $\varphi : R \to A$
+be the finite type ring map corresponding to $g$.
+Choose a presentation $A = P/I$ where $P = R[x_1, \ldots, x_n]$
+is a polynomial algebra in $n$ variables over $R$. Choose an
+object $K \in D^+(R)$ corresponding to $E|_V$
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-compare-bounded}).
+Then we claim that $f^!E|_U$ corresponds to
+$$
+\varphi^!(K) = R\Hom(A, K \otimes_R^\mathbf{L} P)[n]
+$$
+where $R\Hom(A, -) : D(P) \to D(A)$ is the functor of
+Dualizing Complexes, Section \ref{dualizing-section-trivial}
+and where $\varphi^! : D(R) \to D(A)$ is the functor of
+Dualizing Complexes, Section
+\ref{dualizing-section-relative-dualizing-complex-algebraic}.
+Namely, the choice of presentation
+gives a factorization
+$$
+U \rightarrow \mathbf{A}^n_V \to \mathbf{A}^{n - 1}_V \to \ldots \to
+\mathbf{A}^1_V \to V
+$$
+Applying Lemma \ref{lemma-shriek-affine-line} exactly $n$ times we see that
+$(\mathbf{A}^n_V \to V)^!(E|_V)$ corresponds to
+$K \otimes_R^\mathbf{L} P[n]$. By Lemmas
+\ref{lemma-sheaf-with-exact-support-quasi-coherent} and
+\ref{lemma-shriek-closed-immersion} the last step corresponds to
+applying $R\Hom(A, -)$.
+\end{remark}
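
\medskip\noindent
As a sanity check of the formula in
Remark \ref{remark-local-calculation-shriek}, consider the hypersurface
case: $A = P/(f)$ with $P = R[x]$ and $f \in P$ a nonzerodivisor, so
that $n = 1$. Resolving $A$ by the complex $P \xrightarrow{f} P$ we find
$R\Hom(A, P) \cong A[-1]$, and since $A$ is a perfect $P$-module we get
$$
\varphi^!(K) = R\Hom(A, K \otimes_R^\mathbf{L} P)[1]
\cong K \otimes_R^\mathbf{L} A
$$
If moreover $f$ is monic, then $A$ is finite free as an $R$-module and
this is consistent with the description of upper shriek for finite
morphisms in Section \ref{section-duality}, because $\Hom_R(A, R)$ is
free of rank $1$ as an $A$-module.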
+
+\begin{lemma}
+\label{lemma-shriek-coherent}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Then $f^!$ maps
+$D_{\textit{Coh}}^+(\mathcal{O}_Y)$ into $D_{\textit{Coh}}^+(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$ hence we may assume that $X$ and $Y$ are
+affine schemes. In this case we can factor $f : X \to Y$ as
+$$
+X \xrightarrow{i} \mathbf{A}^n_Y \to \mathbf{A}^{n - 1}_Y \to \ldots \to
+\mathbf{A}^1_Y \to Y
+$$
where $i$ is a closed immersion. The lemma follows from
Lemmas \ref{lemma-shriek-affine-line} and
\ref{lemma-sheaf-with-exact-support-coherent},
Dualizing Complexes, Lemma
\ref{dualizing-lemma-dualizing-polynomial-ring},
and induction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-dualizing}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. If $K$ is a dualizing complex
+for $Y$, then $f^!K$ is a dualizing complex for $X$.
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$ hence we may assume that $X$ and $Y$ are
+affine schemes. In this case we can factor $f : X \to Y$ as
+$$
+X \xrightarrow{i} \mathbf{A}^n_Y \to \mathbf{A}^{n - 1}_Y \to \ldots \to
+\mathbf{A}^1_Y \to Y
+$$
+where $i$ is a closed immersion. By Lemma \ref{lemma-shriek-affine-line} and
+Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing-polynomial-ring}
+and induction we see that
$p^!K$ is a dualizing complex on $\mathbf{A}^n_Y$ where
+$p : \mathbf{A}^n_Y \to Y$ is the projection. Similarly, by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing-quotient}
+and Lemmas
+\ref{lemma-sheaf-with-exact-support-quasi-coherent} and
+\ref{lemma-shriek-closed-immersion} we see that $i^!$
+transforms dualizing complexes into dualizing complexes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-via-duality}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Let $K$ be a dualizing complex
+on $Y$. Set $D_Y(M) = R\SheafHom_{\mathcal{O}_Y}(M, K)$ for
+$M \in D_{\textit{Coh}}(\mathcal{O}_Y)$ and
+$D_X(E) = R\SheafHom_{\mathcal{O}_X}(E, f^!K)$ for
+$E \in D_{\textit{Coh}}(\mathcal{O}_X)$. Then there is a canonical
+isomorphism
+$$
+f^!M \longrightarrow D_X(Lf^*D_Y(M))
+$$
+for $M \in D_{\textit{Coh}}^+(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
Choose a compactification $j : X \subset \overline{X}$ of $X$ over $Y$
+(More on Flatness, Theorem \ref{flat-theorem-nagata} and
+Lemma \ref{flat-lemma-compactifyable}). Let $a$ be the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+$\overline{X} \to Y$. Set
+$D_{\overline{X}}(E) = R\SheafHom_{\mathcal{O}_{\overline{X}}}(E, a(K))$
+for $E \in D_{\textit{Coh}}(\mathcal{O}_{\overline{X}})$.
+Since formation of $R\SheafHom$ commutes with restriction to opens
+and since $f^! = j^* \circ a$ we see that it suffices to prove that
+there is a canonical isomorphism
+$$
+a(M) \longrightarrow D_{\overline{X}}(L\overline{f}^*D_Y(M))
+$$
for $M \in D_{\textit{Coh}}^+(\mathcal{O}_Y)$. For
$F \in D_\QCoh(\mathcal{O}_{\overline{X}})$ we have
+\begin{align*}
+\Hom_{\overline{X}}(
+F, D_{\overline{X}}(L\overline{f}^*D_Y(M)))
+& =
+\Hom_{\overline{X}}(
F \otimes_{\mathcal{O}_{\overline{X}}}^\mathbf{L} L\overline{f}^*D_Y(M), a(K)) \\
+& =
+\Hom_Y(
R\overline{f}_*(F \otimes_{\mathcal{O}_{\overline{X}}}^\mathbf{L} L\overline{f}^*D_Y(M)),
+K) \\
+& =
+\Hom_Y(
+R\overline{f}_*(F) \otimes_{\mathcal{O}_Y}^\mathbf{L} D_Y(M),
+K) \\
+& =
+\Hom_Y(
+R\overline{f}_*(F), D_Y(D_Y(M))) \\
+& =
+\Hom_Y(R\overline{f}_*(F), M) \\
+& = \Hom_{\overline{X}}(F, a(M))
+\end{align*}
+The first equality by Cohomology, Lemma \ref{cohomology-lemma-internal-hom}.
+The second by definition of $a$.
+The third by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change}.
+The fourth equality by Cohomology, Lemma \ref{cohomology-lemma-internal-hom}
+and the definition of $D_Y$.
+The fifth equality by Lemma \ref{lemma-dualizing-schemes}.
+The final equality by definition of $a$.
+Hence we see that $a(M) = D_{\overline{X}}(L\overline{f}^*D_Y(M))$
+by Yoneda's lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-perfect-comparison-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Assume $f$ is perfect (e.g., flat). Then
+\begin{enumerate}
+\item[(a)] $f^!$ maps $D_{\textit{Coh}}^b(\mathcal{O}_Y)$ into
+$D_{\textit{Coh}}^b(\mathcal{O}_X)$,
+\item[(b)] the map
+$\mu_{f, K} :
+Lf^*K \otimes_{\mathcal{O}_X}^\mathbf{L} f^!\mathcal{O}_Y
+\to
+f^!K$
+of Lemma \ref{lemma-map-pullback-to-shriek-well-defined}
+is an isomorphism for all $K \in D_\QCoh^+(\mathcal{O}_Y)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+(A flat morphism of finite presentation is perfect, see
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-flat-finite-presentation-perfect}.)
+We begin with a series of preliminary remarks.
+\begin{enumerate}
+\item We already know that $f^!$ sends $D_{\textit{Coh}}^+(\mathcal{O}_Y)$
+into $D_{\textit{Coh}}^+(\mathcal{O}_X)$, see
+Lemma \ref{lemma-shriek-coherent}.
+\item If $f$ is an open immersion, then (a) and (b) are true because
+we can take $\overline{X} = Y$ in the construction of $f^!$ and $\mu_f$.
+See also Lemma \ref{lemma-shriek-open-immersion}.
+\item If $f$ is a perfect proper morphism, then (b) is true by
+Lemma \ref{lemma-compare-with-pullback-flat-proper-noetherian}.
+\item If there exists an open covering $X = \bigcup U_i$ and (a) is
+true for $U_i \to Y$, then (a) is true for $X \to Y$. Same for (b).
+This holds because the construction of $f^!$ and $\mu_f$ commutes
+with passing to open subschemes.
+\item If $g : Y \to Z$ is a second perfect morphism in $\textit{FTS}_S$
+and (b) holds for $f$ and $g$, then
+$f^!g^!\mathcal{O}_Z =
+Lf^*g^!\mathcal{O}_Z \otimes_{\mathcal{O}_X}^\mathbf{L} f^!\mathcal{O}_Y$
+and (b) holds for $g \circ f$ by the commutative diagram
+of Lemma \ref{lemma-map-pullback-to-shriek-well-defined}.
+\item If (a) and (b) hold for both $f$ and $g$, then
+(a) and (b) hold for $g \circ f$. Namely, then $f^!g^!\mathcal{O}_Z$
+is bounded above (by the previous point) and $L(g \circ f)^*$ has finite
+cohomological dimension and (a) follows from (b) which we saw above.
+\end{enumerate}
+From these points we see it suffices to prove the result in case $X$ is affine.
+Choose an immersion $X \to \mathbf{A}^n_Y$
+(Morphisms, Lemma \ref{morphisms-lemma-quasi-affine-finite-type-over-S})
+which we factor as $X \to U \to \mathbf{A}^n_Y \to Y$ where $X \to U$
+is a closed immersion and $U \subset \mathbf{A}^n_Y$ is open.
+Note that $X \to U$ is a perfect closed immersion by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-perfect-permanence}.
+Thus it suffices to prove the lemma for a perfect closed immersion
+and for the projection $\mathbf{A}^n_Y \to Y$.
+
+\medskip\noindent
+Let $f : X \to Y$ be a perfect closed immersion. We already know (b) holds.
+Let $K \in D^b_{\textit{Coh}}(\mathcal{O}_Y)$.
+Then $f^!K = R\SheafHom(\mathcal{O}_X, K)$
+(Lemma \ref{lemma-shriek-closed-immersion})
+and $f_*f^!K = R\SheafHom_{\mathcal{O}_Y}(f_*\mathcal{O}_X, K)$.
+Since $f$ is perfect, the complex $f_*\mathcal{O}_X$ is perfect
+and hence $R\SheafHom_{\mathcal{O}_Y}(f_*\mathcal{O}_X, K)$ is bounded above.
+This proves that (a) holds. Some details omitted.
+
+\medskip\noindent
+Let $f : \mathbf{A}^n_Y \to Y$ be the projection. Then (a) holds
+by repeated application of Lemma \ref{lemma-shriek-affine-line}.
+Finally, (b) is true because it holds for $\mathbf{P}^n_Y \to Y$
+(flat and proper) and because $\mathbf{A}^n_Y \subset \mathbf{P}^n_Y$
+is an open.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-shriek-relatively-perfect}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. If $f$ is flat, then
+$f^!\mathcal{O}_Y$ is a $Y$-perfect object of $D(\mathcal{O}_X)$ and
+$\mathcal{O}_X \to
+R\SheafHom_{\mathcal{O}_X}(f^!\mathcal{O}_Y, f^!\mathcal{O}_Y)$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Both assertions are local on $X$. Thus we may assume $X$ and $Y$ are
+affine. Then Remark \ref{remark-local-calculation-shriek}
+turns the lemma into an algebra lemma, namely
+Dualizing Complexes, Lemma \ref{dualizing-lemma-relative-dualizing-algebraic}.
+(Use Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-locally-rel-perfect} to match the languages.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Assume $f : X \to Y$ is a local complete
+intersection morphism. Then
+\begin{enumerate}
+\item $f^!\mathcal{O}_Y$ is an invertible object of $D(\mathcal{O}_X)$, and
+\item $f^!$ maps perfect complexes to perfect complexes.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall that a local complete intersection morphism is perfect, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-lci-properties}.
+By Lemma \ref{lemma-perfect-comparison-shriek} it suffices to show
+that $f^!\mathcal{O}_Y$ is an invertible object in $D(\mathcal{O}_X)$.
+This question is local on $X$ and $Y$. Hence we may assume that $X \to Y$
+factors as $X \to \mathbf{A}^n_Y \to Y$ where the first arrow is a
+Koszul regular immersion. See More on Morphisms, Section
+\ref{more-morphisms-section-lci}.
+The result holds for $\mathbf{A}^n_Y \to Y$
+by Lemma \ref{lemma-shriek-affine-line}. Thus it suffices to prove
+the lemma when $f$ is a Koszul regular immersion.
+Working locally once again we reduce to the case
+$X = \Spec(A)$ and $Y = \Spec(B)$, where $A = B/(f_1, \ldots, f_r)$
+for some regular sequence $f_1, \ldots, f_r \in B$
+(use that for Noetherian local rings the notion of Koszul
+regular and regular are the same, see
+More on Algebra, Lemma
+\ref{more-algebra-lemma-noetherian-finite-all-equivalent}).
+Thus $X \to Y$ is a composition
+$$
+X = X_r \to X_{r - 1} \to \ldots \to X_1 \to X_0 = Y
+$$
+where each arrow is the inclusion of an effective Cartier divisor.
+In this way we reduce to the case of an inclusion of an effective
+Cartier divisor $i : D \to X$. In this case
$i^!\mathcal{O}_X = \mathcal{N}[-1]$ by
+Lemma \ref{lemma-compute-for-effective-Cartier} and the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+\section{Base change for upper shriek}
+\label{section-base-change-shriek}
+
+\noindent
+In Situation \ref{situation-shriek} let
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+be a cartesian diagram in $\textit{FTS}_S$ such that
+$X$ and $Y'$ are Tor independent over $Y$. Our setup is currently
+not sufficient to construct a base change map
+$L(g')^* \circ f^! \to (f')^! \circ Lg^*$ in this generality.
+The reason is that in general it will not be possible to choose
+a compactification $j : X \to \overline{X}$ over $Y$ such that
$\overline{X}$ and $Y'$ are Tor independent over $Y$ and hence
+our construction of the base change map in
+Section \ref{section-base-change-map} does not apply\footnote{
+The reader who is well versed with derived algebraic geometry
+will realize this is not a ``real'' problem. Namely, taking
+$\overline{X}'$ to be the derived fibre product of
+$\overline{X}$ and $Y'$ over $Y$, one can argue exactly as in
+the proof of Lemma \ref{lemma-base-change-shriek-flat}
+to define this map. After all, the Tor independence
+of $X$ and $Y'$ guarantees that $X'$ will be an open subscheme
+of the derived scheme $\overline{X}'$.}.
+
+\medskip\noindent
+A partial remedy will be found in
+Section \ref{section-relative-dualizing-complexes}.
+Namely, if the morphism $f$ is flat, then there is a good
+notion of a relative dualizing complex and using
Lemmas \ref{lemma-compactifyable-relative-dualizing},
\ref{lemma-base-change-relative-dualizing},
+and \ref{lemma-perfect-comparison-shriek}
+we may construct a canonical base change isomorphism.
+If we ever need to use this, we will add precise statements and
+proofs later in this chapter.
+
+\begin{lemma}
+\label{lemma-base-change-shriek-flat}
+In Situation \ref{situation-shriek} let
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+be a cartesian diagram of $\textit{FTS}_S$ with $g$ flat.
+Then there is an isomorphism
+$L(g')^* \circ f^! \to (f')^! \circ Lg^*$ on
+$D_\QCoh^+(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+Namely, because $g$ is flat, for every choice of
+compactification $j : X \to \overline{X}$ of $X$ over $Y$
+the scheme $\overline{X}$ is Tor independent of $Y'$.
+Denote $j' : X' \to \overline{X}'$ the
+base change of $j$ and $\overline{g}' : \overline{X}' \to \overline{X}$
+the projection. We define the base change map as the composition
+$$
+L(g')^* \circ f^! = L(g')^* \circ j^* \circ a =
+(j')^* \circ L(\overline{g}')^* \circ a \longrightarrow
+(j')^* \circ a' \circ Lg^* = (f')^! \circ Lg^*
+$$
+where the middle arrow is the base change map
+(\ref{equation-base-change-map})
+and $a$ and $a'$ are the right adjoints to pushforward
+of Lemma \ref{lemma-twisted-inverse-image}
+for $\overline{X} \to Y$ and $\overline{X}' \to Y'$.
+This construction is independent of the choice of
+compactification (we will formulate a precise lemma
+and prove it, if we ever need this result).
+
+\medskip\noindent
+To finish the proof it suffices to show that the base change
+map $L(g')^* \circ a \to a' \circ Lg^*$ is an isomorphism
+on $D_\QCoh^+(\mathcal{O}_Y)$.
+By Lemma \ref{lemma-proper-noetherian} formation of $a$ and $a'$
+commutes with restriction to affine opens of $Y$ and $Y'$.
+Thus by Remark \ref{remark-check-over-affines}
+we may assume that $Y$ and $Y'$ are affine.
Thus the result follows from Lemma \ref{lemma-more-base-change}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-etale}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be an \'etale
+morphism of $\textit{FTS}_S$. Then $f^! \cong f^*$ as functors on
+$D^+_\QCoh(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+We are going to use that an \'etale morphism is flat, syntomic,
+and a local complete intersection morphism
(Morphisms, Lemmas \ref{morphisms-lemma-etale-syntomic} and
+\ref{morphisms-lemma-etale-flat} and
+More on Morphisms, Lemma \ref{more-morphisms-lemma-flat-lci}).
+By Lemma \ref{lemma-perfect-comparison-shriek} it suffices
+to show $f^!\mathcal{O}_Y = \mathcal{O}_X$.
+By Lemma \ref{lemma-lci-shriek} we know that $f^!\mathcal{O}_Y$
+is an invertible module. Consider the commutative diagram
+$$
+\xymatrix{
+X \times_Y X \ar[r]_{p_2} \ar[d]_{p_1} & X \ar[d]^f \\
+X \ar[r]^f & Y
+}
+$$
+and the diagonal $\Delta : X \to X \times_Y X$. Since $\Delta$
+is an open immersion (by Morphisms, Lemmas
+\ref{morphisms-lemma-diagonal-unramified-morphism} and
+\ref{morphisms-lemma-etale-smooth-unramified}), by
+Lemma \ref{lemma-shriek-open-immersion} we have $\Delta^! = \Delta^*$.
+By Lemma \ref{lemma-upper-shriek-composition} we have
+$\Delta^! \circ p_1^! \circ f^! = f^!$.
+By Lemma \ref{lemma-base-change-shriek-flat} applied to
+the diagram we have $p_1^!\mathcal{O}_X = p_2^*f^!\mathcal{O}_Y$.
+Hence we conclude
+$$
+f^!\mathcal{O}_Y = \Delta^!p_1^!f^!\mathcal{O}_Y =
+\Delta^*(p_1^*f^!\mathcal{O}_Y \otimes p_1^!\mathcal{O}_X) =
+\Delta^*(p_2^*f^!\mathcal{O}_Y \otimes p_1^*f^!\mathcal{O}_Y) =
+(f^!\mathcal{O}_Y)^{\otimes 2}
+$$
+where in the second step we have used
+Lemma \ref{lemma-perfect-comparison-shriek} once more.
+Thus $f^!\mathcal{O}_Y = \mathcal{O}_X$ as desired.
+\end{proof}
+
+
+\noindent
In the rest of this section, we formulate some easy-to-prove
+results which would be consequences of a good theory of the
+base change map.
+
+\begin{lemma}[Makeshift base change]
+\label{lemma-base-change-locally}
+In Situation \ref{situation-shriek} let
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+be a cartesian diagram of $\textit{FTS}_S$.
+Let $E \in D^+_\QCoh(\mathcal{O}_Y)$ be an object
such that $Lg^*E$ is in $D^+(\mathcal{O}_{Y'})$.
+If $f$ is flat, then $L(g')^*f^!E$ and $(f')^!Lg^*E$
+restrict to isomorphic objects of $D(\mathcal{O}_{U'})$
+for $U' \subset X'$ affine open mapping into affine opens of $Y$, $Y'$, and $X$.
+\end{lemma}
+
+\begin{proof}
+By our assumptions we immediately reduce to the case where
+$X$, $Y$, $Y'$, and $X'$ are affine.
+Say $Y = \Spec(R)$, $Y' = \Spec(R')$, $X = \Spec(A)$, and $X' = \Spec(A')$.
+Then $A' = A \otimes_R R'$. Let
+$E$ correspond to $K \in D^+(R)$.
+Denoting $\varphi : R \to A$ and $\varphi' : R' \to A'$
+the given maps we see from
+Remark \ref{remark-local-calculation-shriek}
+that $L(g')^*f^!E$ and $(f')^!Lg^*E$ correspond to
+$\varphi^!(K) \otimes_A^\mathbf{L} A'$ and
+$(\varphi')^!(K \otimes_R^\mathbf{L} R')$
+where $\varphi^!$ and $(\varphi')^!$ are the functors from
+Dualizing Complexes, Section
+\ref{dualizing-section-relative-dualizing-complex-algebraic}.
+The result follows from
+Dualizing Complexes, Lemma \ref{dualizing-lemma-bc-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-fibres}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Assume $f$ is flat. Set
$\omega_{X/Y}^\bullet = f^!\mathcal{O}_Y$ in $D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+Let $y \in Y$ and $h : X_y \to X$ the projection.
+Then $Lh^*\omega_{X/Y}^\bullet$ is a dualizing complex
+on $X_y$.
+\end{lemma}
+
+\begin{proof}
+The complex $\omega_{X/Y}^\bullet$ is in $D^b_{\textit{Coh}}$
+by Lemma \ref{lemma-perfect-comparison-shriek}.
+Being a dualizing complex is a local property.
+Hence by Lemma \ref{lemma-base-change-locally}
+it suffices to show that $(X_y \to y)^!\mathcal{O}_y$
+is a dualizing complex on $X_y$.
+This follows from Lemma \ref{lemma-shriek-dualizing}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{A duality theory}
+\label{section-duality}
+
+\noindent
+In this section we spell out what kind of a duality theory our very general
+results above give for finite type separated schemes over a fixed
+Noetherian base scheme.
+
+\medskip\noindent
+Recall that a dualizing complex on a Noetherian scheme $X$, is an
+object of $D(\mathcal{O}_X)$ which affine locally gives a dualizing
+complex for the corresponding rings, see
+Definition \ref{definition-dualizing-scheme}.
+
+\medskip\noindent
+Given a Noetherian scheme $S$ denote $\textit{FTS}_S$ the category
+of schemes which are of finite type and separated over $S$. Then:
+\begin{enumerate}
+\item the functors $f^!$ turn $D_\QCoh^+$ into a pseudo functor
+on $\textit{FTS}_S$,
+\item if $f : X \to Y$ is a proper morphism in $\textit{FTS}_S$,
+then $f^!$ is the restriction of the right adjoint of
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+to $D_\QCoh^+(\mathcal{O}_Y)$ and there is a canonical isomorphism
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(K, f^!M)
+\to
+R\SheafHom_{\mathcal{O}_Y}(Rf_*K, M)
+$$
+for all $K \in D_{\textit{Coh}}^-(\mathcal{O}_X)$ and
+$M \in D_\QCoh^+(\mathcal{O}_Y)$,
+\item if an object $X$ of $\textit{FTS}_S$ has a dualizing complex
+$\omega_X^\bullet$, then the functor
+$D_X = R\SheafHom_{\mathcal{O}_X}(-, \omega_X^\bullet)$
+defines an involution of $D_{\textit{Coh}}(\mathcal{O}_X)$
+switching $D_{\textit{Coh}}^+(\mathcal{O}_X)$ and
+$D_{\textit{Coh}}^-(\mathcal{O}_X)$ and fixing
+$D_{\textit{Coh}}^b(\mathcal{O}_X)$,
+\item if $f : X \to Y$ is a morphism of $\textit{FTS}_S$
+and $\omega_Y^\bullet$ is a dualizing complex on $Y$, then
+\begin{enumerate}
+\item $\omega_X^\bullet = f^!\omega_Y^\bullet$ is a dualizing complex for $X$,
+\item $f^!M = D_X(Lf^*D_Y(M))$ canonically for
+$M \in D_{\textit{Coh}}^+(\mathcal{O}_Y)$, and
+\item if in addition $f$ is proper then
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(K, \omega_X^\bullet) =
+R\SheafHom_{\mathcal{O}_Y}(Rf_*K, \omega_Y^\bullet)
+$$
for $K$ in $D^-_{\textit{Coh}}(\mathcal{O}_X)$,
+\end{enumerate}
+\item if $f : X \to Y$ is a closed immersion in $\textit{FTS}_S$,
+then $f^!(-) = R\SheafHom(\mathcal{O}_X, -)$,
+\item if $f : Y \to X$ is a finite morphism in $\textit{FTS}_S$,
+then $f_*f^!(-) = R\SheafHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y, -)$,
+\item if $f : X \to Y$ is the inclusion of an effective Cartier divisor
+into an object of $\textit{FTS}_S$, then
$f^!(-) = Lf^*(-) \otimes_{\mathcal{O}_X} \mathcal{O}_Y(X)|_X[-1]$,
+\item if $f : X \to Y$ is a Koszul regular immersion of codimension $c$
+into an object of $\textit{FTS}_S$, then
+$f^!(-) \cong Lf^*(-) \otimes_{\mathcal{O}_X} \wedge^c\mathcal{N}[-c]$, and
+\item if $f : X \to Y$ is a smooth proper morphism of relative dimension $d$
+in $\textit{FTS}_S$, then
$f^!(-) \cong Lf^*(-) \otimes_{\mathcal{O}_X} \Omega^d_{X/Y}[d]$.
+\end{enumerate}
+This follows from Lemmas
+\ref{lemma-dualizing-schemes},
+\ref{lemma-iso-on-RSheafHom},
+\ref{lemma-twisted-inverse-image-closed},
+\ref{lemma-finite-twisted},
+\ref{lemma-sheaf-with-exact-support-effective-Cartier},
+\ref{lemma-regular-immersion},
+\ref{lemma-smooth-proper},
+\ref{lemma-upper-shriek-composition},
+\ref{lemma-pseudo-functor},
+\ref{lemma-shriek-closed-immersion},
+\ref{lemma-shriek-dualizing},
+\ref{lemma-shriek-via-duality}, and
+\ref{lemma-perfect-comparison-shriek} and
+Example \ref{example-iso-on-RSheafHom-noetherian}.
+We have obtained our functors by a very abstract procedure
+which finally rests on invoking an existence theorem
+(Derived Categories, Proposition \ref{derived-proposition-brown}).
+This means we have, in general, no explicit description of the functors $f^!$.
+This can sometimes be a problem. But in fact, it is often enough to know
+the existence of a dualizing complex and the duality isomorphism
+to pin down $f^!$.
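
\medskip\noindent
For example, if $S = \Spec(k)$ for a field $k$, $\omega_S^\bullet = k[0]$,
and $f : X \to \Spec(k)$ is smooth and proper of dimension $d$, then
(9) gives $\omega_X^\bullet = f^!k \cong \Omega^d_{X/k}[d]$, and the
duality in (2) applied to a coherent sheaf $\mathcal{F}$ on $X$, after
taking cohomology in degree $-i$, recovers classical Serre duality
$$
\Ext^{d - i}_{\mathcal{O}_X}(\mathcal{F}, \Omega^d_{X/k})
\cong
H^i(X, \mathcal{F})^\vee
$$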
+
+
+
+
+
+
+\section{Glueing dualizing complexes}
+\label{section-glue}
+
+\noindent
+We will now use glueing of dualizing complexes to get a theory which works for
+all finite type schemes over $S$ given a pair $(S, \omega_S^\bullet)$
+as in Situation \ref{situation-dualizing}. This is similar to
+\cite[Remark on page 310]{RD}.
+
+\begin{situation}
+\label{situation-dualizing}
+Here $S$ is a Noetherian scheme and $\omega_S^\bullet$ is a dualizing
+complex.
+\end{situation}
+
+\noindent
+In Situation \ref{situation-dualizing} let $X$ be a scheme of finite type
+over $S$. Let $\mathcal{U} : X = \bigcup_{i = 1, \ldots, n} U_i$
+be a finite open covering of $X$ by objects of $\textit{FTS}_S$, see
+Situation \ref{situation-shriek}. All this means is that the morphisms
+$U_i \to S$ are separated (as they are already of finite type).
+Every affine scheme of finite type over $S$ is an object of $\textit{FTS}_S$
+by Schemes, Lemma \ref{schemes-lemma-compose-after-separated}
+hence such open coverings certainly exist.
+Then for each $i, j, k \in \{1, \ldots, n\}$
+the morphisms $p_i : U_i \to S$, $p_{ij} : U_i \cap U_j \to S$,
+and $p_{ijk} : U_i \cap U_j \cap U_k \to S$ are separated and
+each of these schemes is an object of $\textit{FTS}_S$.
+From such an open covering we obtain
+\begin{enumerate}
+\item $\omega_i^\bullet = p_i^!\omega_S^\bullet$
+a dualizing complex on $U_i$, see Section \ref{section-duality},
+\item for each $i, j$ a canonical isomorphism
+$\varphi_{ij} :
+\omega_i^\bullet|_{U_i \cap U_j} \to \omega_j^\bullet|_{U_i \cap U_j}$, and
+\item
+\label{item-cocycle-glueing}
+for each $i, j, k$ we have
+$$
+\varphi_{ik}|_{U_i \cap U_j \cap U_k} =
+\varphi_{jk}|_{U_i \cap U_j \cap U_k} \circ
+\varphi_{ij}|_{U_i \cap U_j \cap U_k}
+$$
+in $D(\mathcal{O}_{U_i \cap U_j \cap U_k})$.
+\end{enumerate}
+Here, in (2) we use that $(U_i \cap U_j \to U_i)^!$
+is given by restriction (Lemma \ref{lemma-shriek-open-immersion})
+and that we have canonical isomorphisms
+$$
+(U_i \cap U_j \to U_i)^! \circ p_i^! = p_{ij}^! =
+(U_i \cap U_j \to U_j)^! \circ p_j^!
+$$
+by Lemma \ref{lemma-upper-shriek-composition} and to get (3) we use
+that the upper shriek functors form a pseudo functor by
+Lemma \ref{lemma-pseudo-functor}.
+
+\medskip\noindent
+In the situation just described a
+{\it dualizing complex normalized relative to $\omega_S^\bullet$
+and $\mathcal{U}$} is a pair $(K, \alpha_i)$ where $K \in D(\mathcal{O}_X)$
+and $\alpha_i : K|_{U_i} \to \omega_i^\bullet$ are isomorphisms
+such that $\varphi_{ij}$ is given by
+$\alpha_j|_{U_i \cap U_j} \circ \alpha_i^{-1}|_{U_i \cap U_j}$.
+Since being a dualizing complex on a scheme is a local property
+we see that dualizing complexes normalized relative to $\omega_S^\bullet$
+and $\mathcal{U}$ are indeed dualizing complexes.
+
+\begin{lemma}
+\label{lemma-good-dualizing-unique}
+In Situation \ref{situation-dualizing} let $X$ be a scheme of finite type
+over $S$ and let $\mathcal{U}$ be a finite open covering of $X$
+by schemes separated over $S$. If there exists a dualizing complex
+normalized relative to $\omega_S^\bullet$ and $\mathcal{U}$, then it is unique
+up to unique isomorphism.
+\end{lemma}
+
+\begin{proof}
+If $(K, \alpha_i)$ and $(K', \alpha_i')$ are two, then we consider
+$L = R\SheafHom_{\mathcal{O}_X}(K, K')$.
+By Lemma \ref{lemma-dualizing-unique-schemes}
+and its proof, this is an invertible object of $D(\mathcal{O}_X)$.
+Using $\alpha_i$ and $\alpha'_i$ we obtain an isomorphism
+$$
+\alpha_i^t \otimes \alpha'_i :
+L|_{U_i} \longrightarrow
+R\SheafHom_{\mathcal{O}_X}(\omega_i^\bullet, \omega_i^\bullet) =
+\mathcal{O}_{U_i}[0]
+$$
+This already implies that $L = H^0(L)[0]$ in $D(\mathcal{O}_X)$.
+Moreover, $H^0(L)$ is an invertible sheaf with given trivializations
+on the opens $U_i$ of $X$. Finally, the condition that
+$\alpha_j|_{U_i \cap U_j} \circ \alpha_i^{-1}|_{U_i \cap U_j}$
+and
+$\alpha'_j|_{U_i \cap U_j} \circ (\alpha'_i)^{-1}|_{U_i \cap U_j}$
+both give $\varphi_{ij}$ implies that the transition maps
+are $1$ and we get an isomorphism $H^0(L) = \mathcal{O}_X$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-good-dualizing-independence-covering}
+In Situation \ref{situation-dualizing} let $X$ be a scheme of finite type
+over $S$ and let $\mathcal{U}$, $\mathcal{V}$ be two finite open coverings
+of $X$ by schemes separated over $S$.
+If there exists a dualizing complex normalized
+relative to $\omega_S^\bullet$ and $\mathcal{U}$, then
+there exists a dualizing complex normalized relative to
+$\omega_S^\bullet$ and $\mathcal{V}$ and these complexes are
+canonically isomorphic.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove this when $\mathcal{U}$ is given by the opens
+$U_1, \ldots, U_n$ and $\mathcal{V}$ by the opens $U_1, \ldots, U_{n + m}$.
+In fact, we may and do even assume $m = 1$.
+To go from a dualizing complex $(K, \alpha_i)$ normalized
+relative to $\omega_S^\bullet$ and $\mathcal{V}$ to a
+dualizing complex normalized relative to $\omega_S^\bullet$ and $\mathcal{U}$
+is achieved by forgetting about $\alpha_i$ for $i = n + 1$. Conversely, let
+$(K, \alpha_i)$ be a dualizing complex normalized relative to
+$\omega_S^\bullet$ and $\mathcal{U}$.
+To finish the proof we need to construct a map
+$\alpha_{n + 1} : K|_{U_{n + 1}} \to \omega_{n + 1}^\bullet$ satisfying
+the desired conditions.
+To do this we observe that $U_{n + 1} = \bigcup U_i \cap U_{n + 1}$
+is an open covering.
+It is clear that $(K|_{U_{n + 1}}, \alpha_i|_{U_i \cap U_{n + 1}})$
+is a dualizing complex normalized relative to $\omega_S^\bullet$
+and the covering $U_{n + 1} = \bigcup U_i \cap U_{n + 1}$.
+On the other hand, by condition (\ref{item-cocycle-glueing}) the pair
+$(\omega_{n + 1}^\bullet|_{U_{n + 1}}, \varphi_{n + 1i})$
+is another dualizing complex normalized relative to $\omega_S^\bullet$
+and the covering
+$U_{n + 1} = \bigcup U_i \cap U_{n + 1}$.
+By Lemma \ref{lemma-good-dualizing-unique} we obtain a unique isomorphism
+$$
+\alpha_{n + 1} : K|_{U_{n + 1}} \longrightarrow \omega_{n + 1}^\bullet
+$$
+compatible with the given local isomorphisms.
+It is a pleasant exercise to show that this means it satisfies
+the required property.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-existence-good-dualizing}
+In Situation \ref{situation-dualizing} let $X$ be a scheme of finite type
+over $S$ and let $\mathcal{U}$ be a finite open covering
+of $X$ by schemes separated over $S$. Then there exists
+a dualizing complex normalized relative to $\omega_S^\bullet$ and
+$\mathcal{U}$.
+\end{lemma}
+
+\begin{proof}
+Say $\mathcal{U} : X = \bigcup_{i = 1, \ldots, n} U_i$.
+We prove the lemma by induction on $n$. The base case $n = 1$ is immediate.
+Assume $n > 1$. Set $X' = U_1 \cup \ldots \cup U_{n - 1}$
+and let $(K', \{\alpha'_i\}_{i = 1, \ldots, n - 1})$
+be a dualizing complex normalized relative to $\omega_S^\bullet$
+and $\mathcal{U}' : X' = \bigcup_{i = 1, \ldots, n - 1} U_i$.
+It is clear that $(K'|_{X' \cap U_n}, \alpha'_i|_{U_i \cap U_n})$
+is a dualizing complex normalized relative to $\omega_S^\bullet$
+and the covering
+$X' \cap U_n = \bigcup_{i = 1, \ldots, n - 1} U_i \cap U_n$.
+On the other hand, by condition (\ref{item-cocycle-glueing}) the pair
+$(\omega_n^\bullet|_{X' \cap U_n}, \varphi_{ni})$
+is another dualizing complex normalized relative to $\omega_S^\bullet$
+and the covering
+$X' \cap U_n = \bigcup_{i = 1, \ldots, n - 1} U_i \cap U_n$.
+By Lemma \ref{lemma-good-dualizing-unique} we obtain a unique isomorphism
+$$
\epsilon : K'|_{X' \cap U_n} \longrightarrow \omega_n^\bullet|_{X' \cap U_n}
+$$
+compatible with the given local isomorphisms.
+By Cohomology, Lemma \ref{cohomology-lemma-glue}
+we obtain $K \in D(\mathcal{O}_X)$ together with
+isomorphisms $\beta : K|_{X'} \to K'$ and
+$\gamma : K|_{U_n} \to \omega_n^\bullet$ such that
+$\epsilon = \gamma|_{X'\cap U_n} \circ \beta|_{X' \cap U_n}^{-1}$.
+Then we define
+$$
+\alpha_i = \alpha'_i \circ \beta|_{U_i}, i = 1, \ldots, n - 1,
+\text{ and }
+\alpha_n = \gamma
+$$
+We still need to verify that $\varphi_{ij}$ is given by
+$\alpha_j|_{U_i \cap U_j} \circ \alpha_i^{-1}|_{U_i \cap U_j}$.
+For $i, j \leq n - 1$ this follows from the corresponding
+condition for $\alpha_i'$. For $i = j = n$ it is clear as well.
+If $i < j = n$, then we get
+$$
+\alpha_n|_{U_i \cap U_n} \circ \alpha_i^{-1}|_{U_i \cap U_n} =
+\gamma|_{U_i \cap U_n} \circ \beta^{-1}|_{U_i \cap U_n}
+\circ (\alpha'_i)^{-1}|_{U_i \cap U_n} =
+\epsilon|_{U_i \cap U_n} \circ (\alpha'_i)^{-1}|_{U_i \cap U_n}
+$$
+This is equal to $\varphi_{in}$ exactly because $\epsilon$
+is the unique map compatible with the maps
+$\alpha_i'$ and $\varphi_{ni}$.
+\end{proof}
+
+\noindent
+Let $(S, \omega_S^\bullet)$ be as in Situation \ref{situation-dualizing}.
+The upshot of the lemmas above is that given any scheme $X$ of finite type
+over $S$, there is a pair $(K, \alpha_U)$ given up to unique isomorphism,
+consisting of an object $K \in D(\mathcal{O}_X)$ and isomorphisms
+$\alpha_U : K|_U \to \omega_U^\bullet$ for every open subscheme
+$U \subset X$ which is separated over $S$. Here
+$\omega_U^\bullet = (U \to S)^!\omega_S^\bullet$ is a dualizing
+complex on $U$, see Section \ref{section-duality}. Moreover, if
+$\mathcal{U} : X = \bigcup U_i$ is a finite open covering
+by opens which are separated over $S$, then
+$(K, \alpha_{U_i})$ is a dualizing complex normalized relative to
+$\omega_S^\bullet$ and $\mathcal{U}$.
+Namely, uniqueness up to unique isomorphism by
+Lemma \ref{lemma-good-dualizing-unique},
+existence for one open covering by
+Lemma \ref{lemma-existence-good-dualizing}, and
+the fact that $K$ then works for all open coverings is
+Lemma \ref{lemma-good-dualizing-independence-covering}.
+
+\begin{definition}
+\label{definition-good-dualizing}
+Let $S$ be a Noetherian scheme and let $\omega_S^\bullet$ be a dualizing
+complex on $S$. Let $X$ be a scheme of finite type over $S$.
+The complex $K$ constructed above is called the
+{\it dualizing complex normalized relative to $\omega_S^\bullet$}
+and is denoted $\omega_X^\bullet$.
+\end{definition}
+
+\noindent
+As the terminology suggests, a dualizing complex normalized relative to
+$\omega_S^\bullet$ is not just an object of the derived category of $X$
+but comes equipped with the local isomorphisms described above.
+This does not conflict with setting
+$\omega_X^\bullet = p^!\omega_S^\bullet$ where $p : X \to S$ is the
+structure morphism if $X$ is separated over $S$. More generally
+we have the following sanity check.
+
+\begin{lemma}
+\label{lemma-good-over-both}
+Let $(S, \omega_S^\bullet)$ be as in Situation \ref{situation-dualizing}.
+Let $f : X \to Y$ be a morphism of finite type schemes over $S$.
+Let $\omega_X^\bullet$ and $\omega_Y^\bullet$ be dualizing complexes
+normalized relative to $\omega_S^\bullet$. Then $\omega_X^\bullet$
+is a dualizing complex normalized relative to $\omega_Y^\bullet$.
+\end{lemma}
+
+\begin{proof}
+This is just a matter of bookkeeping.
+Choose a finite affine open covering $\mathcal{V} : Y = \bigcup V_j$.
+For each $j$ choose a finite affine open covering
+$f^{-1}(V_j) = \bigcup_i U_{ji}$.
+Set $\mathcal{U} : X = \bigcup U_{ji}$. The schemes $V_j$ and $U_{ji}$ are
+separated over $S$, hence we have the upper shriek functors for
+$q_j : V_j \to S$, $p_{ji} : U_{ji} \to S$ and
+$f_{ji} : U_{ji} \to V_j$ and $f_{ji}' : U_{ji} \to Y$.
+Let $(L, \beta_j)$ be a dualizing complex normalized relative to
+$\omega_S^\bullet$ and $\mathcal{V}$.
+Let $(K, \gamma_{ji})$ be a dualizing complex normalized relative to
+$\omega_S^\bullet$ and $\mathcal{U}$.
+(In other words, $L = \omega_Y^\bullet$ and $K = \omega_X^\bullet$.)
+We can define
+$$
+\alpha_{ji} :
+K|_{U_{ji}} \xrightarrow{\gamma_{ji}}
+p_{ji}^!\omega_S^\bullet = f_{ji}^!q_j^!\omega_S^\bullet
+\xrightarrow{f_{ji}^!\beta_j^{-1}} f_{ji}^!(L|_{V_j}) =
+(f_{ji}')^!(L)
+$$
+To finish the proof we have to show that
+$\alpha_{ji}|_{U_{ji} \cap U_{j'i'}}
+\circ \alpha_{j'i'}^{-1}|_{U_{ji} \cap U_{j'i'}}$
+is the canonical isomorphism
+$(f_{ji}')^!(L)|_{U_{ji} \cap U_{j'i'}} \to
+(f_{j'i'}')^!(L)|_{U_{ji} \cap U_{j'i'}}$. This is formal and we
+omit the details.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open-immersion-good-dualizing-complex}
+Let $(S, \omega_S^\bullet)$ be as in Situation \ref{situation-dualizing}.
+Let $j : X \to Y$ be an open immersion of schemes of finite type over $S$.
+Let $\omega_X^\bullet$ and $\omega_Y^\bullet$ be dualizing complexes
+normalized relative to $\omega_S^\bullet$. Then there is a canonical
+isomorphism $\omega_X^\bullet = \omega_Y^\bullet|_X$.
+\end{lemma}
+
+\begin{proof}
+Immediate from the construction of normalized dualizing complexes
+given just above
+Definition \ref{definition-good-dualizing}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-map-good-dualizing-complex}
+Let $(S, \omega_S^\bullet)$ be as in Situation \ref{situation-dualizing}.
+Let $f : X \to Y$ be a proper morphism of schemes of finite type over $S$.
+Let $\omega_X^\bullet$ and $\omega_Y^\bullet$ be dualizing complexes
+normalized relative to $\omega_S^\bullet$. Let $a$ be the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+$f$. Then there is a canonical isomorphism
+$a(\omega_Y^\bullet) = \omega_X^\bullet$.
+\end{lemma}
+
+\begin{proof}
+Let $p : X \to S$ and $q : Y \to S$ be the structure morphisms.
+If $X$ and $Y$ are separated over $S$, then this follows
+from the fact that $\omega_X^\bullet = p^!\omega_S^\bullet$,
+$\omega_Y^\bullet = q^!\omega_S^\bullet$, $f^! = a$, and
+$f^! \circ q^! = p^!$ (Lemma \ref{lemma-upper-shriek-composition}).
+In the general case we first use Lemma \ref{lemma-good-over-both}
+to reduce to the case $Y = S$. In this case $X$ and $Y$
+are separated over $S$ and we've just seen the result.
+\end{proof}
+
+\noindent
+Let $(S, \omega_S^\bullet)$ be as in Situation \ref{situation-dualizing}.
+For a scheme $X$ of finite type over $S$ denote $\omega_X^\bullet$ the
+dualizing complex for $X$ normalized relative to $\omega_S^\bullet$.
+Define $D_X(-) = R\SheafHom_{\mathcal{O}_X}(-, \omega_X^\bullet)$
+as in Lemma \ref{lemma-dualizing-schemes}.
+Let $f : X \to Y$ be a morphism of finite type schemes over $S$.
+Define
+$$
+f_{new}^! = D_X \circ Lf^* \circ D_Y :
+D_{\textit{Coh}}^+(\mathcal{O}_Y)
+\to
+D_{\textit{Coh}}^+(\mathcal{O}_X)
+$$
+If $f : X \to Y$ and $g : Y \to Z$ are composable
+morphisms between schemes of finite type over $S$, define
+\begin{align*}
+(g \circ f)^!_{new} & = D_X \circ L(g \circ f)^* \circ D_Z \\
+& = D_X \circ Lf^* \circ Lg^* \circ D_Z \\
+& \to D_X \circ Lf^* \circ D_Y \circ D_Y \circ Lg^* \circ D_Z \\
+& = f^!_{new} \circ g^!_{new}
+\end{align*}
+where the arrow is defined in Lemma \ref{lemma-dualizing-schemes}.
+We collect the results together in the following lemma.
+
+\begin{lemma}
+\label{lemma-duality-bootstrap}
+Let $(S, \omega_S^\bullet)$ be as in Situation \ref{situation-dualizing}.
+With $f^!_{new}$ and $\omega_X^\bullet$ defined for all (morphisms of)
+schemes of finite type over $S$ as above:
+\begin{enumerate}
+\item the functors $f^!_{new}$ and the arrows
+$(g \circ f)^!_{new} \to f^!_{new} \circ g^!_{new}$
+turn $D_{\textit{Coh}}^+$ into a pseudo functor from the category of
+schemes of finite type over $S$ into the $2$-category of categories,
+\item $\omega_X^\bullet = (X \to S)^!_{new} \omega_S^\bullet$,
+\item the functor $D_X$
+defines an involution of $D_{\textit{Coh}}(\mathcal{O}_X)$
+switching $D_{\textit{Coh}}^+(\mathcal{O}_X)$ and
+$D_{\textit{Coh}}^-(\mathcal{O}_X)$ and fixing
+$D_{\textit{Coh}}^b(\mathcal{O}_X)$,
+\item $\omega_X^\bullet = f^!_{new}\omega_Y^\bullet$ for
+$f : X \to Y$ a morphism of finite type schemes over $S$,
+\item $f^!_{new}M = D_X(Lf^*D_Y(M))$ for
+$M \in D_{\textit{Coh}}^+(\mathcal{O}_Y)$, and
+\item if in addition $f$ is proper, then $f^!_{new}$ is isomorphic
+to the restriction of the right adjoint of
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$ to
+$D_{\textit{Coh}}^+(\mathcal{O}_Y)$ and there is a canonical isomorphism
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(K, f^!_{new}M)
+\to
+R\SheafHom_{\mathcal{O}_Y}(Rf_*K, M)
+$$
+for $K \in D^-_{\textit{Coh}}(\mathcal{O}_X)$ and
+$M \in D_{\textit{Coh}}^+(\mathcal{O}_Y)$, and
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(K, \omega_X^\bullet) =
+R\SheafHom_{\mathcal{O}_Y}(Rf_*K, \omega_Y^\bullet)
+$$
+for $K \in D^-_{\textit{Coh}}(\mathcal{O}_X)$.
+\end{enumerate}
+If $X$ is separated over $S$, then
+$\omega_X^\bullet$ is canonically isomorphic to
+$(X \to S)^!\omega_S^\bullet$ and
+if $f$ is a morphism between schemes separated
+over $S$, then there is a canonical isomorphism\footnote{We haven't
+checked that these are compatible with the isomorphisms
+$(g \circ f)^! \to f^! \circ g^!$ and
+$(g \circ f)^!_{new} \to f^!_{new} \circ g^!_{new}$. We will do this
+here if we need this later.}
+$f_{new}^!K = f^!K$ for $K$ in $D_{\textit{Coh}}^+$.
+\end{lemma}
+
+\begin{proof}
+Let $f : X \to Y$, $g : Y \to Z$, $h : Z \to T$ be morphisms of schemes
+of finite type over $S$. We have to show that
+$$
+\xymatrix{
+(h \circ g \circ f)^!_{new} \ar[r] \ar[d] &
+f^!_{new} \circ (h \circ g)^!_{new} \ar[d] \\
+(g \circ f)^!_{new} \circ h^!_{new} \ar[r] &
+f^!_{new} \circ g^!_{new} \circ h^!_{new}
+}
+$$
+is commutative. Let $\eta_Y : \text{id} \to D_Y^2$
+and $\eta_Z : \text{id} \to D_Z^2$ be the canonical isomorphisms
+of Lemma \ref{lemma-dualizing-schemes}. Then, using
+Categories, Lemma \ref{categories-lemma-properties-2-cat-cats},
+a computation (omitted) shows that both arrows
+$(h \circ g \circ f)^!_{new} \to f^!_{new} \circ g^!_{new} \circ h^!_{new}$
+are given by
+$$
+1 \star \eta_Y \star 1 \star \eta_Z \star 1 :
+D_X \circ Lf^* \circ Lg^* \circ Lh^* \circ D_T
+\longrightarrow
+D_X \circ Lf^* \circ D_Y^2 \circ Lg^* \circ D_Z^2 \circ Lh^* \circ D_T
+$$
+This proves (1). Part (2) is immediate from the definition of
+$(X \to S)^!_{new}$ and the fact that $D_S(\omega_S^\bullet) = \mathcal{O}_S$.
+Part (3) is Lemma \ref{lemma-dualizing-schemes}.
+Part (4) follows by the same argument as part (2).
+Part (5) is the definition of $f^!_{new}$.
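+
+\medskip\noindent
+Explicitly, the computation behind part (4) (and, with $Y = S$, part (2)) is
+$$
+f^!_{new}\omega_Y^\bullet =
+D_X(Lf^*D_Y(\omega_Y^\bullet)) =
+D_X(Lf^*\mathcal{O}_Y) =
+D_X(\mathcal{O}_X) =
+\omega_X^\bullet
+$$
+using that
+$D_Y(\omega_Y^\bullet) =
+R\SheafHom_{\mathcal{O}_Y}(\omega_Y^\bullet, \omega_Y^\bullet) =
+\mathcal{O}_Y$ for any dualizing complex.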
+
+\medskip\noindent
+Proof of (6). Let $a$ be the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for the
+proper morphism $f : X \to Y$ of schemes of finite type over $S$.
+The issue is that we do not know $X$ or $Y$ is
+separated over $S$ (and in general this won't be true)
+hence we cannot immediately apply
+Lemma \ref{lemma-shriek-via-duality} to $f$ over $S$.
+To get around this we use the canonical identification
+$\omega_X^\bullet = a(\omega_Y^\bullet)$ of
+Lemma \ref{lemma-proper-map-good-dualizing-complex}.
+Hence $f^!_{new}$ is the restriction of $a$ to
+$D_{\textit{Coh}}^+(\mathcal{O}_Y)$ by Lemma \ref{lemma-shriek-via-duality}
+applied to $f : X \to Y$ over the base scheme $Y$!
+The displayed equalities hold by
+Example \ref{example-iso-on-RSheafHom-noetherian}.
+
+\medskip\noindent
+The final assertions follow from the construction of normalized
+dualizing complexes and the already used Lemma \ref{lemma-shriek-via-duality}.
+\end{proof}
+
+\begin{remark}
+\label{remark-independent-omega-S}
+Let $S$ be a Noetherian scheme which has a dualizing complex.
+Let $f : X \to Y$ be a morphism of schemes of finite type
+over $S$. Then the functor
+$$
+f_{new}^! : D^+_{\textit{Coh}}(\mathcal{O}_Y) \to D^+_{\textit{Coh}}(\mathcal{O}_X)
+$$
+is independent of the choice of the dualizing complex $\omega_S^\bullet$
+up to canonical isomorphism. We sketch the proof. Any second dualizing complex
+is of the form $\omega_S^\bullet \otimes_{\mathcal{O}_S}^\mathbf{L} \mathcal{L}$
+where $\mathcal{L}$ is an invertible object of $D(\mathcal{O}_S)$, see
+Lemma \ref{lemma-dualizing-unique-schemes}.
+For any separated morphism $p : U \to S$ of finite type we have
+$p^!(\omega_S^\bullet \otimes^\mathbf{L}_{\mathcal{O}_S} \mathcal{L}) =
+p^!(\omega_S^\bullet) \otimes^\mathbf{L}_{\mathcal{O}_U} Lp^*\mathcal{L}$
+by Lemma \ref{lemma-compare-with-pullback-perfect}.
+Hence, if $\omega_X^\bullet$ and $\omega_Y^\bullet$ are the
+dualizing complexes normalized relative to $\omega_S^\bullet$ we see that
+$\omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} La^*\mathcal{L}$ and
+$\omega_Y^\bullet \otimes_{\mathcal{O}_Y}^\mathbf{L} Lb^*\mathcal{L}$
+are the dualizing complexes normalized relative to
+$\omega_S^\bullet \otimes_{\mathcal{O}_S}^\mathbf{L} \mathcal{L}$
+(where $a : X \to S$ and $b : Y \to S$ are the structure morphisms).
+Then the result follows as
+\begin{align*}
+& R\SheafHom_{\mathcal{O}_X}(Lf^*R\SheafHom_{\mathcal{O}_Y}(K,
+\omega_Y^\bullet \otimes_{\mathcal{O}_Y}^\mathbf{L} Lb^*\mathcal{L}),
+\omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} La^*\mathcal{L}) \\
+& = R\SheafHom_{\mathcal{O}_X}(Lf^*R(\SheafHom_{\mathcal{O}_Y}(K,
+\omega_Y^\bullet) \otimes_{\mathcal{O}_Y}^\mathbf{L} Lb^*\mathcal{L}),
+\omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} La^*\mathcal{L}) \\
+& = R\SheafHom_{\mathcal{O}_X}(Lf^*R\SheafHom_{\mathcal{O}_Y}(K,
+\omega_Y^\bullet) \otimes_{\mathcal{O}_X}^\mathbf{L} La^*\mathcal{L},
+\omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} La^*\mathcal{L}) \\
+& = R\SheafHom_{\mathcal{O}_X}(Lf^*R\SheafHom_{\mathcal{O}_Y}(K,
+\omega_Y^\bullet), \omega_X^\bullet)
+\end{align*}
+for $K \in D^+_{\textit{Coh}}(\mathcal{O}_Y)$.
+The last equality holds because $La^*\mathcal{L}$ is invertible in
+$D(\mathcal{O}_X)$.
+\end{remark}
+
+
+\begin{example}
+\label{example-trace-proper}
+Let $S$ be a Noetherian scheme and let $\omega_S^\bullet$ be a
+dualizing complex. Let $f : X \to Y$ be a proper morphism of finite
+type schemes over $S$. Let $\omega_X^\bullet$ and $\omega_Y^\bullet$
+be dualizing complexes normalized relative to $\omega_S^\bullet$.
+In this situation we have $a(\omega_Y^\bullet) = \omega_X^\bullet$
+(Lemma \ref{lemma-proper-map-good-dualizing-complex})
+and hence the trace map (Section \ref{section-trace}) is a canonical arrow
+$$
+\text{Tr}_f : Rf_*\omega_X^\bullet \longrightarrow \omega_Y^\bullet
+$$
+which produces the isomorphisms (Lemma \ref{lemma-duality-bootstrap})
+$$
+\Hom_X(L, \omega_X^\bullet) = \Hom_Y(Rf_*L, \omega_Y^\bullet)
+$$
+and
+$$
+Rf_*R\SheafHom_{\mathcal{O}_X}(L, \omega_X^\bullet) =
+R\SheafHom_{\mathcal{O}_Y}(Rf_*L, \omega_Y^\bullet)
+$$
+for $L$ in $D_\QCoh(\mathcal{O}_X)$.
+\end{example}
+
+\begin{remark}
+\label{remark-dualizing-finite}
+Let $S$ be a Noetherian scheme and let $\omega_S^\bullet$ be a dualizing
+complex. Let $f : X \to Y$ be a finite morphism between schemes of finite
+type over $S$. Let $\omega_X^\bullet$ and $\omega_Y^\bullet$ be
+dualizing complexes normalized relative to $\omega_S^\bullet$.
+Then we have
+$$
+f_*\omega_X^\bullet = R\SheafHom(f_*\mathcal{O}_X, \omega_Y^\bullet)
+$$
+in $D_\QCoh^+(f_*\mathcal{O}_X)$ by Lemmas \ref{lemma-finite-twisted} and
+\ref{lemma-proper-map-good-dualizing-complex}
+and the trace map of Example \ref{example-trace-proper} is the map
+$$
+\text{Tr}_f : Rf_*\omega_X^\bullet = f_*\omega_X^\bullet =
+R\SheafHom(f_*\mathcal{O}_X, \omega_Y^\bullet) \longrightarrow
+\omega_Y^\bullet
+$$
+which often goes under the name ``evaluation at $1$''.
+\end{remark}
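+
+\noindent
+Informally, say $Y = \Spec(A)$ and $X = \Spec(B)$ with $B$ finite over $A$.
+Then the equality of the remark corresponds to
+$\omega_B^\bullet = R\Hom_A(B, \omega_A^\bullet)$ viewed as a complex of
+$B$-modules and, after choosing a bounded complex $I^\bullet$ of injective
+$A$-modules representing $\omega_A^\bullet$, the trace map is given
+termwise by evaluation
+$$
+\Hom_A(B, I^i) \longrightarrow I^i, \quad
+\varphi \longmapsto \varphi(1)
+$$
+which explains the name.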
+
+\begin{remark}
+\label{remark-relative-dualizing-complex-shriek}
+Let $f : X \to Y$ be a flat proper morphism of finite type
+schemes over a pair $(S, \omega_S^\bullet)$ as in
+Situation \ref{situation-dualizing}. The relative dualizing complex
+(Remark \ref{remark-relative-dualizing-complex}) is
+$\omega_{X/Y}^\bullet = a(\mathcal{O}_Y)$. By
+Lemma \ref{lemma-proper-map-good-dualizing-complex}
+we have the first canonical isomorphism in
+$$
+\omega_X^\bullet = a(\omega_Y^\bullet) =
+Lf^*\omega_Y^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_{X/Y}^\bullet
+$$
+in $D(\mathcal{O}_X)$. The second canonical isomorphism follows from the
+discussion in Remark \ref{remark-relative-dualizing-complex}.
+\end{remark}
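+
+\noindent
+For example, if in addition $f$ is smooth of relative dimension $d$,
+then $\omega_{X/Y}^\bullet \cong \wedge^d \Omega_{X/Y}[d]$
+and the isomorphism of the remark becomes
+$$
+\omega_X^\bullet =
+Lf^*\omega_Y^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L}
+\wedge^d \Omega_{X/Y}[d]
+$$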
+
+
+
+
+\section{Dimension functions}
+\label{section-dimension-functions}
+
+\noindent
+We need a bit more information about how the dimension functions change
+when passing to a scheme of finite type over another.
+
+\begin{lemma}
+\label{lemma-good-dualizing-normalized}
+Let $S$ be a Noetherian scheme and let $\omega_S^\bullet$ be a
+dualizing complex. Let $X$ be a scheme of finite type over $S$ and let
+$\omega_X^\bullet$ be the dualizing complex normalized relative
+to $\omega_S^\bullet$. If $x \in X$ is a closed point lying over
+a closed point $s$ of $S$, then $\omega_{X, x}^\bullet$
+is a normalized dualizing complex over $\mathcal{O}_{X, x}$
+provided that $\omega_{S, s}^\bullet$ is a normalized dualizing
+complex over $\mathcal{O}_{S, s}$.
+\end{lemma}
+
+\begin{proof}
+We may replace $X$ by an affine neighbourhood of $x$, hence we may
+and do assume that $f : X \to S$ is separated.
+Then $\omega_X^\bullet = f^!\omega_S^\bullet$. We have to show that
+$R\Hom_{\mathcal{O}_{X, x}}(\kappa(x), \omega_{X, x}^\bullet)$
+is sitting in degree $0$. Let $i_x : x \to X$ denote the inclusion
+morphism which is a closed immersion as $x$ is a closed point.
+Hence $R\Hom_{\mathcal{O}_{X, x}}(\kappa(x), \omega_{X, x}^\bullet)$
+represents $i_x^!\omega_X^\bullet$ by
+Lemma \ref{lemma-shriek-closed-immersion}.
+Consider the commutative diagram
+$$
+\xymatrix{
+x \ar[r]_{i_x} \ar[d]_\pi & X \ar[d]^f \\
+s \ar[r]^{i_s} & S
+}
+$$
+By Morphisms, Lemma
+\ref{morphisms-lemma-closed-point-fibre-locally-finite-type}
+the extension $\kappa(x)/\kappa(s)$ is finite and hence
+$\pi$ is a finite morphism. We conclude that
+$$
+i_x^!\omega_X^\bullet = i_x^! f^! \omega_S^\bullet =
+\pi^! i_s^! \omega_S^\bullet
+$$
+Thus if $\omega_{S, s}^\bullet$ is a normalized dualizing complex
+over $\mathcal{O}_{S, s}$, then $i_s^!\omega_S^\bullet = \kappa(s)[0]$
+by the same reasoning as above. We have
+$$
+R\pi_*(\pi^!(\kappa(s)[0])) =
+R\SheafHom_{\mathcal{O}_s}(R\pi_*(\kappa(x)[0]), \kappa(s)[0]) =
+\widetilde{\Hom_{\kappa(s)}(\kappa(x), \kappa(s))}
+$$
+The first equality by Example \ref{example-iso-on-RSheafHom-noetherian}
+applied with $L = \kappa(x)[0]$. The second equality holds because
+$\pi_*$ is exact.
+Thus $\pi^!(\kappa(s)[0])$ is supported in degree $0$ and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-good-dualizing-dimension-function}
+Let $S$ be a Noetherian scheme and let $\omega_S^\bullet$ be a
+dualizing complex. Let $f : X \to S$ be of finite type
+and let $\omega_X^\bullet$ be the dualizing complex
+normalized relative to $\omega_S^\bullet$. For all $x \in X$ we have
+$$
+\delta_X(x) - \delta_S(f(x)) = \text{trdeg}_{\kappa(f(x))}(\kappa(x))
+$$
+where $\delta_S$, resp.\ $\delta_X$
+is the dimension function of
+$\omega_S^\bullet$, resp.\ $\omega_X^\bullet$, see
+Lemma \ref{lemma-dimension-function-scheme}.
+\end{lemma}
+
+\begin{proof}
+We may replace $X$ by an affine neighbourhood of $x$. Hence we may
+and do assume there is a compactification $X \subset \overline{X}$
+over $S$. Then we may replace $X$ by $\overline{X}$ and assume
+that $X$ is proper over $S$. We may also assume $X$ is connected
+by replacing $X$ by the connected component of $X$ containing $x$.
+Next, recall that both $\delta_X$ and the function
+$x \mapsto \delta_S(f(x)) + \text{trdeg}_{\kappa(f(x))}(\kappa(x))$
+are dimension functions on $X$, see
+Morphisms, Lemma \ref{morphisms-lemma-dimension-function-propagates}
+(and the fact that $S$ is universally catenary by
+Lemma \ref{lemma-dimension-function-scheme}).
+By Topology, Lemma \ref{topology-lemma-dimension-function-unique}
+we see that the difference is locally constant, hence constant as $X$ is
+connected. Thus it suffices to prove equality in any point of $X$.
+By Properties, Lemma \ref{properties-lemma-locally-Noetherian-closed-point}
+the scheme $X$ has a closed point $x$. Since $X \to S$ is proper
+the image $s$ of $x$ is closed in $S$. Thus we may apply
+Lemma \ref{lemma-good-dualizing-normalized} to conclude.
+\end{proof}
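+
+\noindent
+For example, if $S = \Spec(k)$ for a field $k$ and
+$\omega_S^\bullet = k[0]$, then $\delta_S = 0$ and the formula of the
+lemma reads $\delta_X(x) = \text{trdeg}_k(\kappa(x))$. In particular
+$\delta_X$ vanishes at closed points of $X$ and takes the value
+$\dim(\overline{\{\xi\}})$ at a generic point $\xi$ of an irreducible
+component of $X$.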
+
+\begin{lemma}
+\label{lemma-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Let $x \in X$ with image $y \in Y$. Then
+$$
+H^i(f^!\mathcal{O}_Y)_x \not = 0
+\Rightarrow - \dim_x(X_y) \leq i.
+$$
+\end{lemma}
+
+\begin{proof}
+Since the statement is local on $X$ we may assume $X$
+and $Y$ are affine schemes. Write
+$X = \Spec(A)$ and $Y = \Spec(R)$.
+Then $f^!\mathcal{O}_Y$ corresponds to the relative dualizing
+complex $\omega_{A/R}^\bullet$ of
+Dualizing Complexes, Section
+\ref{dualizing-section-relative-dualizing-complexes-Noetherian}
+by Remark \ref{remark-local-calculation-shriek}.
+Thus the lemma follows from Dualizing Complexes, Lemma
+\ref{dualizing-lemma-relative-dualizing-trivial-vanishing}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Let $x \in X$ with image $y \in Y$.
+If $f$ is flat, then
+$$
+H^i(f^!\mathcal{O}_Y)_x \not = 0
+\Rightarrow - \dim_x(X_y) \leq i \leq 0.
+$$
+In fact, if all fibres of $f$ have dimension $\leq d$, then
+$f^!\mathcal{O}_Y$ has tor-amplitude in $[-d, 0]$ as an object
+of $D(X, f^{-1}\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+Arguing exactly as in the proof of Lemma \ref{lemma-shriek}
+this follows from Dualizing Complexes, Lemma
+\ref{dualizing-lemma-relative-dualizing-flat-vanishing}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-over-CM}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Let $x \in X$ with image $y \in Y$. Assume
+\begin{enumerate}
+\item $\mathcal{O}_{Y, y}$ is Cohen-Macaulay, and
+\item $\text{trdeg}_{\kappa(f(\xi))}(\kappa(\xi)) \leq r$
+for any generic point $\xi$ of an irreducible component
+of $X$ containing $x$.
+\end{enumerate}
+Then
+$$
+H^i(f^!\mathcal{O}_Y)_x \not = 0
+\Rightarrow - r \leq i
+$$
+and the stalk $H^{-r}(f^!\mathcal{O}_Y)_x$ is $(S_2)$ as an
+$\mathcal{O}_{X, x}$-module.
+\end{lemma}
+
+\begin{proof}
+After replacing $X$ by an open neighbourhood of $x$, we may
+assume every irreducible component of $X$ passes through $x$.
+Then arguing exactly as in the proof of Lemma \ref{lemma-shriek}
+this follows from Dualizing Complexes, Lemma
+\ref{dualizing-lemma-relative-dualizing-CM-vanishing}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-quasi-finite-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. If $f$ is flat and quasi-finite, then
+$$
+f^!\mathcal{O}_Y = \omega_{X/Y}[0]
+$$
+for some coherent $\mathcal{O}_X$-module $\omega_{X/Y}$ flat over $Y$.
+\end{lemma}
+
+\begin{proof}
+Consequence of Lemma \ref{lemma-flat-shriek} and the fact that the
+cohomology sheaves of $f^!\mathcal{O}_Y$ are coherent by
+Lemma \ref{lemma-shriek-coherent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. If $f$ is Cohen-Macaulay (More on Morphisms, Definition
+\ref{more-morphisms-definition-CM}), then
+$$
+f^!\mathcal{O}_Y = \omega_{X/Y}[d]
+$$
+for some coherent $\mathcal{O}_X$-module $\omega_{X/Y}$ flat over $Y$
+where $d$ is the locally constant
+function on $X$ which gives the relative dimension of $X$ over $Y$.
+\end{lemma}
+
+\begin{proof}
+The relative dimension $d$ is well defined and locally constant by
+Morphisms, Lemma
+\ref{morphisms-lemma-flat-finite-presentation-CM-fibres-relative-dimension}.
+The cohomology sheaves of $f^!\mathcal{O}_Y$ are coherent by
+Lemma \ref{lemma-shriek-coherent}.
+We will get flatness of $\omega_{X/Y}$ from Lemma \ref{lemma-flat-shriek}
+if we can show the other cohomology sheaves of $f^!\mathcal{O}_Y$
+are zero.
+
+\medskip\noindent
+The question is local on $X$, hence we may assume $X$ and $Y$ are affine
+and the morphism has relative dimension $d$. If $d = 0$, then the
+result follows directly from Lemma \ref{lemma-flat-quasi-finite-shriek}.
+If $d > 0$, then we may assume there is a factorization
+$$
+X \xrightarrow{g} \mathbf{A}^d_Y \xrightarrow{p} Y
+$$
+with $g$ quasi-finite and flat, see More on Morphisms, Lemma
+\ref{more-morphisms-lemma-flat-finite-presentation-characterize-CM}.
+Then $f^! = g^! \circ p^!$. By Lemma \ref{lemma-shriek-affine-line}
+we see that $p^!\mathcal{O}_Y \cong \mathcal{O}_{\mathbf{A}^d_Y}[d]$.
+We conclude by the case $d = 0$.
+\end{proof}
+
+\begin{remark}
+\label{remark-the-same-is-true}
+Let $S$ be a Noetherian scheme endowed with a dualizing complex
+$\omega_S^\bullet$. In this case
+Lemmas \ref{lemma-shriek}, \ref{lemma-flat-shriek},
+\ref{lemma-flat-quasi-finite-shriek}, and \ref{lemma-CM-shriek}
+are true for any morphism $f : X \to Y$ of finite type schemes over $S$
+but with $f^!$ replaced by $f_{new}^!$. This is clear because in each
+case the proof reduces immediately to the affine case
+and then $f^! = f_{new}^!$ by Lemma \ref{lemma-duality-bootstrap}.
+\end{remark}
+
+
+
+
+\section{Dualizing modules}
+\label{section-dualizing-module}
+
+
+\noindent
+This section is a continuation of
+Dualizing Complexes, Section \ref{dualizing-section-dualizing-module}.
+
+\medskip\noindent
+Let $X$ be a Noetherian scheme and let $\omega_X^\bullet$ be a
+dualizing complex. Let $n \in \mathbf{Z}$ be the smallest integer such that
+$H^n(\omega_X^\bullet)$ is nonzero. In other words, $-n$ is the maximal
+value of the dimension function associated to $\omega_X^\bullet$
+(Lemma \ref{lemma-dimension-function-scheme}).
+Sometimes $H^n(\omega_X^\bullet)$
+is called a {\it dualizing module} or {\it dualizing sheaf}
+for $X$ and then it is often denoted
+by $\omega_X$. We will say ``let $\omega_X$ be a dualizing module''
+to indicate the above.
+
+\medskip\noindent
+Care has to be taken when using dualizing modules $\omega_X$ on Noetherian
+schemes $X$:
+\begin{enumerate}
+\item the integer $n$ may change when passing from $X$ to an open $U$
+of $X$ and then it won't be true that $\omega_X|_U = \omega_U$,
+\item the dualizing complex isn't unique; the dualizing module
+is only unique up to tensoring by an invertible module.
+\end{enumerate}
+The second problem will often be irrelevant because we will work
+with $X$ of finite type over a base scheme $S$ which is
+endowed with a fixed dualizing complex $\omega_S^\bullet$ and
+$\omega_X^\bullet$ will be the dualizing complex normalized relative
+to $\omega_S^\bullet$.
+The first problem will not occur if $X$ is equidimensional, more precisely,
+if the dimension function associated to $\omega_X^\bullet$
+(Lemma \ref{lemma-dimension-function-scheme})
+maps every generic point of $X$ to the same integer.
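+
+\noindent
+For example, let $X = \Spec(k[x, y, z]/(xz, yz))$ over a field $k$,
+i.e., the union of the plane $V(z)$ and the line $V(x, y)$ in
+$\mathbf{A}^3_k$, and let $\omega_X^\bullet$ be the dualizing complex
+normalized relative to $k[0]$. Then $n = -2$ and
+$\omega_X = H^{-2}(\omega_X^\bullet)$ is supported on the plane
+(Lemma \ref{lemma-dualizing-module}). On the open $U = X \setminus V(z)$,
+which is isomorphic to the line minus the origin, we have
+$\omega_U = H^{-1}(\omega_U^\bullet)$ whereas $\omega_X|_U = 0$.
+This illustrates the first problem.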
+
+\begin{example}
+\label{example-proper-over-local}
+Say $S = \Spec(A)$ with $(A, \mathfrak m, \kappa)$
+a local Noetherian ring, and $\omega_S^\bullet$ corresponds to
+a normalized dualizing complex $\omega_A^\bullet$. Then if
+$f : X \to S$ is proper over $S$ and $\omega_X^\bullet = f^!\omega_S^\bullet$
+the coherent sheaf
+$$
+\omega_X = H^{-\dim(X)}(\omega_X^\bullet)
+$$
+is a dualizing module and is often called the dualizing module
+of $X$ (with $S$ and $\omega_S^\bullet$ being understood). We will
+see that this has good properties.
+\end{example}
+
+\begin{example}
+\label{example-equidimensional-over-field}
+Say $X$ is an equidimensional scheme of finite type
+over a field $k$. Then it is customary to take
+$\omega_X^\bullet$ the dualizing complex normalized relative to $k[0]$
+and to refer to
+$$
+\omega_X = H^{-\dim(X)}(\omega_X^\bullet)
+$$
+as the dualizing module of $X$. If $X$ is separated over $k$, then
+$\omega_X^\bullet = f^!\mathcal{O}_{\Spec(k)}$ where
+$f : X \to \Spec(k)$ is the structure morphism by
+Lemma \ref{lemma-duality-bootstrap}. If $X$ is proper over $k$, then
+this is a special case of Example \ref{example-proper-over-local}.
+\end{example}
+
+\begin{lemma}
+\label{lemma-dualizing-module}
+Let $X$ be a connected Noetherian scheme and let $\omega_X$ be a dualizing
+module on $X$. The support of $\omega_X$ is the union of the irreducible
+components of maximal dimension with respect to any dimension function
+and $\omega_X$ is a coherent $\mathcal{O}_X$-module having property $(S_2)$.
+\end{lemma}
+
+\begin{proof}
+By our conventions discussed above there exists a dualizing complex
+$\omega_X^\bullet$ such that $\omega_X$ is the leftmost nonvanishing
+cohomology sheaf. Since $X$ is connected, any two dimension functions
+differ by a constant
+(Topology, Lemma \ref{topology-lemma-dimension-function-unique}).
+Hence we may use the
+dimension function associated to $\omega_X^\bullet$
+(Lemma \ref{lemma-dimension-function-scheme}).
+With these remarks in place, the lemma now
+follows from Dualizing Complexes, Lemma
+\ref{dualizing-lemma-depth-dualizing-module}
+and the definitions (in particular
+Cohomology of Schemes, Definition \ref{coherent-definition-depth}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-good-dualizing}
+Let $X/A$ with $\omega_X^\bullet$ and $\omega_X$ be as in
+Example \ref{example-proper-over-local}. Then
+\begin{enumerate}
+\item $H^i(\omega_X^\bullet) \not = 0 \Rightarrow
+i \in \{-\dim(X), \ldots, 0\}$,
+\item the dimension of the support of $H^i(\omega_X^\bullet)$ is at most $-i$,
+\item $\text{Supp}(\omega_X)$ is the union of
+the components of dimension $\dim(X)$, and
+\item $\omega_X$ has property $(S_2)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\delta_X$ and $\delta_S$ be the dimension functions associated to
+$\omega_X^\bullet$ and $\omega_S^\bullet$ as in
+Lemma \ref{lemma-good-dualizing-dimension-function}.
+As $X$ is proper over $A$, every closed subscheme of $X$ contains
+a closed point $x$ which maps to the closed point $s \in S$
+and $\delta_X(x) = \delta_S(s) = 0$. Hence
+$\delta_X(\xi) = \dim(\overline{\{\xi\}})$ for any point
+$\xi \in X$. Hence we can check each of
+the statements of the lemma by looking at what happens over
+$\Spec(\mathcal{O}_{X, x})$ in which case the result follows
+from Dualizing Complexes, Lemmas \ref{dualizing-lemma-sitting-in-degrees} and
+\ref{dualizing-lemma-depth-dualizing-module}.
+Some details omitted.
+The last two statements can also be deduced from
+Lemma \ref{lemma-dualizing-module}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-module-proper-over-A}
+Let $X/A$ with dualizing module $\omega_X$ be as in
+Example \ref{example-proper-over-local}.
+Let $d = \dim(X_s)$ be the dimension
+of the closed fibre. If $\dim(X) = d + \dim(A)$, then
+the dualizing module $\omega_X$ represents the functor
+$$
+\mathcal{F} \longmapsto \Hom_A(H^d(X, \mathcal{F}), \omega_A)
+$$
+on the category of coherent $\mathcal{O}_X$-modules.
+\end{lemma}
+
+\begin{proof}
+We have
+\begin{align*}
+\Hom_X(\mathcal{F}, \omega_X)
+& =
+\Ext^{-\dim(X)}_X(\mathcal{F}, \omega_X^\bullet) \\
+& =
+\Hom_X(\mathcal{F}[\dim(X)], \omega_X^\bullet) \\
+& =
+\Hom_X(\mathcal{F}[\dim(X)], f^!(\omega_A^\bullet)) \\
+& =
+\Hom_S(Rf_*\mathcal{F}[\dim(X)], \omega_A^\bullet) \\
+& =
+\Hom_A(H^d(X, \mathcal{F}), \omega_A)
+\end{align*}
+The first equality because $H^i(\omega_X^\bullet) = 0$ for
+$i < -\dim(X)$, see Lemma \ref{lemma-vanishing-good-dualizing} and
+Derived Categories, Lemma \ref{derived-lemma-negative-exts}.
+The second equality follows from the definition of Ext groups.
+The third equality is our choice of $\omega_X^\bullet$.
+The fourth equality holds because $f^!$ is the
+right adjoint of Lemma \ref{lemma-twisted-inverse-image} for
+$f$, see Section \ref{section-duality}.
+The final equality holds because $R^if_*\mathcal{F}$ is zero
+for $i > d$ (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-higher-direct-images-zero-above-dimension-fibre})
+and $H^j(\omega_A^\bullet)$ is zero for $j < -\dim(A)$.
+\end{proof}
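+
+\noindent
+Taking $A = k$ a field and $\omega_A^\bullet = k[0]$ in
+Lemma \ref{lemma-dualizing-module-proper-over-A} recovers a form of
+Serre duality: if $X$ is proper over $k$ of dimension $d$, then
+$$
+\Hom_X(\mathcal{F}, \omega_X) = \Hom_k(H^d(X, \mathcal{F}), k)
+$$
+functorially in the coherent $\mathcal{O}_X$-module $\mathcal{F}$.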
+
+
+
+
+
+
+
+
+\section{Cohen-Macaulay schemes}
+\label{section-CM}
+
+\noindent
+This section is the continuation of Dualizing Complexes, Section
+\ref{dualizing-section-CM}.
+Duality takes a particularly simple form for Cohen-Macaulay schemes.
+
+\begin{lemma}
+\label{lemma-dualizing-module-CM-scheme}
+Let $X$ be a locally Noetherian scheme with dualizing complex
+$\omega_X^\bullet$.
+\begin{enumerate}
+\item $X$ is Cohen-Macaulay $\Leftrightarrow$ $\omega_X^\bullet$
+locally has a unique nonzero cohomology sheaf,
\item $\mathcal{O}_{X, x}$ is Cohen-Macaulay $\Leftrightarrow$
$\omega_{X, x}^\bullet$ has a unique nonzero cohomology module,
+\item $U = \{x \in X \mid \mathcal{O}_{X, x}\text{ is Cohen-Macaulay}\}$
+is open and Cohen-Macaulay.
+\end{enumerate}
+If $X$ is connected and Cohen-Macaulay, then there is an integer $n$
+and a coherent Cohen-Macaulay $\mathcal{O}_X$-module $\omega_X$
+such that $\omega_X^\bullet = \omega_X[-n]$.
+\end{lemma}
+
+\begin{proof}
+By definition and Dualizing Complexes, Lemma
+\ref{dualizing-lemma-dualizing-localize} for every $x \in X$
+the complex $\omega_{X, x}^\bullet$ is a dualizing complex over
+$\mathcal{O}_{X, x}$. By
+Dualizing Complexes, Lemma \ref{dualizing-lemma-apply-CM}
+we see that (2) holds.
+
+\medskip\noindent
+To see (3) assume that $\mathcal{O}_{X, x}$ is Cohen-Macaulay.
+Let $n_x$ be the unique integer such that
+$H^{n_{x}}(\omega_{X, x}^\bullet)$ is nonzero.
For an affine neighbourhood $V \subset X$
of $x$ the restriction $\omega_X^\bullet|_V$ lies in
$D^b_{\textit{Coh}}(\mathcal{O}_V)$,
hence there are only finitely many nonzero coherent modules
$H^i(\omega_X^\bullet)|_V$. Thus after shrinking $V$ we may assume
+only $H^{n_x}$ is nonzero, see
+Modules, Lemma \ref{modules-lemma-finite-type-stalk-zero}.
+In this way we see that $\mathcal{O}_{X, v}$ is Cohen-Macaulay
+for every $v \in V$. This proves that $U$ is open as well
+as a Cohen-Macaulay scheme.
+
+\medskip\noindent
+Proof of (1). The implication $\Leftarrow$ follows from (2).
+The implication $\Rightarrow$ follows from the discussion
+in the previous paragraph, where we showed that if $\mathcal{O}_{X, x}$
+is Cohen-Macaulay, then in a neighbourhood of $x$ the complex
+$\omega_X^\bullet$ has only one nonzero cohomology sheaf.
+
+\medskip\noindent
+Assume $X$ is connected and Cohen-Macaulay. The above shows that
+the map $x \mapsto n_x$ is locally constant.
+Since $X$ is connected it is constant, say equal to $n$.
+Setting $\omega_X = H^n(\omega_X^\bullet)$ we see that the lemma
+holds because $\omega_X$ is Cohen-Macaulay by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-apply-CM}
+(and Cohomology of Schemes, Definition
+\ref{coherent-definition-Cohen-Macaulay}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-has-dualizing-module-CM-scheme}
+Let $X$ be a locally Noetherian scheme. If there exists a coherent sheaf
+$\omega_X$ such that $\omega_X[0]$ is a dualizing complex on $X$, then
+$X$ is a Cohen-Macaulay scheme.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from
+Dualizing Complexes, Lemma \ref{dualizing-lemma-has-dualizing-module-CM}
+and our definitions.
+\end{proof}
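\noindent
A concrete illustration of the two lemmas above (a standard example, not
taken from the references just cited): for the affine line with an embedded
point
$$
X = \Spec\left(k[x, y]/(x^2, xy)\right)
$$
we have $\dim(\mathcal{O}_{X, 0}) = 1$ while
$\text{depth}(\mathcal{O}_{X, 0}) = 0$, because the class of $x$ is a
nonzero element killed by the maximal ideal. Hence $X$ is not
Cohen-Macaulay at the origin. Correspondingly, a normalized dualizing
complex for $\mathcal{O}_{X, 0}$ has nonzero cohomology in both degrees
$-1$ and $0$, so no dualizing complex for $X$ can be a shifted module.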
+
+\begin{lemma}
+\label{lemma-affine-flat-Noetherian-CM}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism
+of $\textit{FTS}_S$. Let $x \in X$. If $f$ is flat, then
+the following are equivalent
+\begin{enumerate}
+\item $f$ is Cohen-Macaulay at $x$,
+\item $f^!\mathcal{O}_Y$ has a unique nonzero cohomology sheaf
+in a neighbourhood of $x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+One direction of the lemma follows from Lemma \ref{lemma-CM-shriek}.
+To prove the converse, we may assume $f^!\mathcal{O}_Y$ has a unique
+nonzero cohomology sheaf. Let $y = f(x)$. Let $\xi_1, \ldots, \xi_n \in X_y$
+be the generic points of the fibre $X_y$ specializing to $x$.
+Let $d_1, \ldots, d_n$ be the dimensions of the corresponding
+irreducible components of $X_y$. The morphism $f : X \to Y$ is Cohen-Macaulay
at $\xi_i$ by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-flat-finite-presentation-CM-open}.
+Hence by Lemma \ref{lemma-CM-shriek} we see that
+$d_1 = \ldots = d_n$. If $d$ denotes the common value, then $d = \dim_x(X_y)$.
+After shrinking $X$ we may assume all fibres have dimension at most $d$
+(Morphisms, Lemma \ref{morphisms-lemma-openness-bounded-dimension-fibres}).
+Then the only nonzero cohomology sheaf $\omega = H^{-d}(f^!\mathcal{O}_Y)$
+is flat over $Y$ by Lemma \ref{lemma-flat-shriek}.
+Hence, if $h : X_y \to X$ denotes the canonical morphism, then
+$Lh^*(f^!\mathcal{O}_Y) = Lh^*(\omega[d]) = (h^*\omega)[d]$
+by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-tor-independence-and-tor-amplitude}.
+Thus $h^*\omega[d]$ is the dualizing complex of $X_y$ by
+Lemma \ref{lemma-relative-dualizing-fibres}.
+Hence $X_y$ is Cohen-Macaulay by
+Lemma \ref{lemma-dualizing-module-CM-scheme}.
+This proves $f$ is Cohen-Macaulay at $x$ as desired.
+\end{proof}
+
+\begin{remark}
+\label{remark-CM-morphism-compare-dualizing}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism
+of $\textit{FTS}_S$. Assume $f$ is a Cohen-Macaulay morphism of
+relative dimension $d$. Let $\omega_{X/Y} = H^{-d}(f^!\mathcal{O}_Y)$
+be the unique nonzero cohomology sheaf of $f^!\mathcal{O}_Y$, see
+Lemma \ref{lemma-CM-shriek}.
+Then there is a canonical isomorphism
+$$
+f^!K = Lf^*K \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_{X/Y}[d]
+$$
+for $K \in D^+_\QCoh(\mathcal{O}_Y)$, see
+Lemma \ref{lemma-perfect-comparison-shriek}. In particular, if
+$S$ has a dualizing complex $\omega_S^\bullet$,
+$\omega_Y^\bullet = (Y \to S)^!\omega_S^\bullet$, and
+$\omega_X^\bullet = (X \to S)^!\omega_S^\bullet$
+then we have
+$$
+\omega_X^\bullet =
+Lf^*\omega_Y^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} \omega_{X/Y}[d]
+$$
+Thus if further $X$ and $Y$ are connected and Cohen-Macaulay and
+if $\omega_Y$ and $\omega_X$ denote the unique nonzero cohomology
+sheaves of $\omega_Y^\bullet$ and $\omega_X^\bullet$, then we
+have
+$$
+\omega_X = f^*\omega_Y \otimes_{\mathcal{O}_X} \omega_{X/Y}.
+$$
+Similar results hold for $X$ and $Y$ arbitrary finite type schemes
+over $S$ (i.e., not necessarily separated over $S$)
+with dualizing complexes normalized with respect to
+$\omega_S^\bullet$ as in Section \ref{section-glue}.
+\end{remark}
+
+
+
+
+
+\section{Gorenstein schemes}
+\label{section-gorenstein}
+
+\noindent
+This section is the continuation of Dualizing Complexes, Section
+\ref{dualizing-section-gorenstein}.
+
+\begin{definition}
+\label{definition-gorenstein}
+Let $X$ be a scheme. We say $X$ is {\it Gorenstein} if $X$ is
+locally Noetherian and $\mathcal{O}_{X, x}$ is Gorenstein for all $x \in X$.
+\end{definition}
+
+\noindent
+This definition makes sense because a Noetherian ring is said to
+be Gorenstein if and only if all of its local rings are Gorenstein,
+see Dualizing Complexes, Definition \ref{dualizing-definition-gorenstein}.
+
+\begin{lemma}
+\label{lemma-gorenstein-CM}
+A Gorenstein scheme is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+Looking affine locally this follows from the corresponding
+result in algebra, namely
+Dualizing Complexes, Lemma \ref{dualizing-lemma-gorenstein-CM}.
+\end{proof}
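\noindent
The converse fails already in dimension zero. A standard example: the
Artinian local ring
$$
A = k[x, y]/(x^2, xy, y^2)
$$
over a field $k$ is Cohen-Macaulay (as is any Artinian local ring), but it
is not Gorenstein: its socle is the maximal ideal $(x, y)$, which has
dimension $2$ over $k$, whereas a zero dimensional Gorenstein local ring
has $1$-dimensional socle. Thus $\Spec(A)$ is a Cohen-Macaulay scheme
which is not Gorenstein.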
+
+\begin{lemma}
+\label{lemma-regular-gorenstein}
+A regular scheme is Gorenstein.
+\end{lemma}
+
+\begin{proof}
+Looking affine locally this follows from the corresponding
+result in algebra, namely
+Dualizing Complexes, Lemma \ref{dualizing-lemma-regular-gorenstein}.
+\end{proof}
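\noindent
The converse does not hold: hypersurface singularities give standard
examples. For instance the nodal curve
$$
X = \Spec\left(k[x, y]/(xy)\right)
$$
is a local complete intersection over $k$, hence Gorenstein by
Lemma \ref{lemma-gorenstein-lci} below, but the local ring of $X$ at the
origin is not regular.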
+
+\begin{lemma}
+\label{lemma-gorenstein}
+Let $X$ be a locally Noetherian scheme.
+\begin{enumerate}
+\item If $X$ has a dualizing complex $\omega_X^\bullet$, then
+\begin{enumerate}
+\item $X$ is Gorenstein $\Leftrightarrow$ $\omega_X^\bullet$ is an invertible
+object of $D(\mathcal{O}_X)$,
+\item $\mathcal{O}_{X, x}$ is Gorenstein $\Leftrightarrow$
+$\omega_{X, x}^\bullet$ is an invertible object of $D(\mathcal{O}_{X, x})$,
+\item $U = \{x \in X \mid \mathcal{O}_{X, x}\text{ is Gorenstein}\}$
+is an open Gorenstein subscheme.
+\end{enumerate}
+\item If $X$ is Gorenstein, then $X$ has a dualizing complex if and
+only if $\mathcal{O}_X[0]$ is a dualizing complex.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Looking affine locally this follows from the corresponding
+result in algebra, namely
+Dualizing Complexes, Lemma \ref{dualizing-lemma-gorenstein}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gorenstein-lci}
+If $f : Y \to X$ is a local complete intersection morphism
+with $X$ a Gorenstein scheme, then $Y$ is Gorenstein.
+\end{lemma}
+
+\begin{proof}
+By More on Morphisms, Lemma \ref{more-morphisms-lemma-affine-lci}
+it suffices to prove the corresponding statement about ring maps.
+This is Dualizing Complexes, Lemma \ref{dualizing-lemma-gorenstein-lci}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gorenstein-local-syntomic}
+The property $\mathcal{P}(S) =$``$S$ is Gorenstein''
+is local in the syntomic topology.
+\end{lemma}
+
+\begin{proof}
+Let $\{S_i \to S\}$ be a syntomic covering. The scheme $S$ is locally
+Noetherian if and only if each $S_i$ is Noetherian, see
+Descent, Lemma \ref{descent-lemma-Noetherian-local-fppf}.
+Thus we may now assume $S$ and $S_i$ are locally Noetherian.
+If $S$ is Gorenstein, then
+each $S_i$ is Gorenstein by Lemma \ref{lemma-gorenstein-lci}.
+Conversely, if each $S_i$ is Gorenstein, then for each point
+$s \in S$ we can pick $i$ and $t \in S_i$ mapping to $s$.
+Then $\mathcal{O}_{S, s} \to \mathcal{O}_{S_i, t}$
+is a flat local ring homomorphism with $\mathcal{O}_{S_i, t}$
+Gorenstein. Hence $\mathcal{O}_{S, s}$ is Gorenstein by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-flat-under-gorenstein}.
+\end{proof}
+
+
+
+
+
+
+\section{Gorenstein morphisms}
+\label{section-gorenstein-morphisms}
+
+\noindent
+This section is one in a series. The corresponding sections for
+normal morphisms,
+regular morphisms, and
+Cohen-Macaulay morphisms
+can be found in More on Morphisms, Sections
+\ref{more-morphisms-section-normal},
+\ref{more-morphisms-section-regular}, and
+\ref{more-morphisms-section-CM}.
+
+\medskip\noindent
+The following lemma says that it does not make sense to define
+geometrically Gorenstein schemes, since these would be the
+same as Gorenstein schemes.
+
+\begin{lemma}
+\label{lemma-gorenstein-base-change}
+Let $X$ be a locally Noetherian scheme over the field $k$.
+Let $k'/k$ be a finitely generated field extension.
+Let $x \in X$ be a point, and let $x' \in X_{k'}$ be a point lying
+over $x$. Then we have
+$$
+\mathcal{O}_{X, x}\text{ is Gorenstein}
+\Leftrightarrow
+\mathcal{O}_{X_{k'}, x'}\text{ is Gorenstein}
+$$
+If $X$ is locally of finite type over $k$, the same holds for any
+field extension $k'/k$.
+\end{lemma}
+
+\begin{proof}
+In both cases the ring map $\mathcal{O}_{X, x} \to \mathcal{O}_{X_{k'}, x'}$
+is a faithfully flat local homomorphism of Noetherian local rings.
+Thus if $\mathcal{O}_{X_{k'}, x'}$ is Gorenstein, then so is
+$\mathcal{O}_{X, x}$ by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-flat-under-gorenstein}.
+To go up, we use
+Dualizing Complexes, Lemma \ref{dualizing-lemma-flat-under-gorenstein} as well.
+Thus we have to show that
+$$
+\mathcal{O}_{X_{k'}, x'}/\mathfrak m_x \mathcal{O}_{X_{k'}, x'} =
+\kappa(x) \otimes_k k'
+$$
+is Gorenstein. Note that in the first case $k \to k'$ is finitely
+generated and in the second case $k \to \kappa(x)$ is finitely
+generated. Hence this follows as property (A) holds for
+Gorenstein, see Dualizing Complexes, Lemma
+\ref{dualizing-lemma-formal-fibres-gorenstein}.
+\end{proof}
+
+\noindent
+The lemma above guarantees that the following is the correct definition
+of Gorenstein morphisms.
+
+\begin{definition}
+\label{definition-gorenstein-morphism}
+Let $f : X \to Y$ be a morphism of schemes.
+Assume that all the fibres $X_y$ are locally Noetherian schemes.
+\begin{enumerate}
+\item Let $x \in X$, and $y = f(x)$. We say that $f$ is
+{\it Gorenstein at $x$} if $f$ is flat at $x$, and the
+local ring of the scheme $X_y$ at $x$ is Gorenstein.
+\item We say $f$ is a {\it Gorenstein morphism} if $f$ is
+Gorenstein at every point of $X$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Here is a translation.
+
+\begin{lemma}
+\label{lemma-gorenstein-morphism}
+Let $f : X \to Y$ be a morphism of schemes.
+Assume all fibres of $f$ are locally Noetherian.
+The following are equivalent
+\begin{enumerate}
+\item $f$ is Gorenstein, and
+\item $f$ is flat and its fibres are Gorenstein schemes.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows directly from the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gorenstein-CM-morphism}
+A Gorenstein morphism is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-gorenstein-CM} and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-gorenstein}
+A syntomic morphism is Gorenstein. Equivalently a flat
+local complete intersection morphism is Gorenstein.
+\end{lemma}
+
+\begin{proof}
+Recall that a syntomic morphism is flat and its fibres
+are local complete intersections over fields, see
+Morphisms, Lemma \ref{morphisms-lemma-syntomic-flat-fibres}.
+Since a local complete intersection over a field is a Gorenstein scheme
+by Lemma \ref{lemma-gorenstein-lci} we conclude.
+The properties ``syntomic'' and ``flat and local
+complete intersection morphism'' are equivalent by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-flat-lci}.
+\end{proof}
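\noindent
As a standard example of a Gorenstein morphism which is not smooth,
consider the family
$$
\Spec\left(k[t][x, y]/(xy - t)\right) \longrightarrow \Spec(k[t])
$$
This morphism is flat and each fibre is a plane curve, i.e., a hypersurface
in $\mathbf{A}^2$ over the residue field, hence a local complete
intersection. Thus the morphism is syntomic and therefore Gorenstein,
although the fibre over $t = 0$ is the nodal curve $xy = 0$.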
+
+\begin{lemma}
+\label{lemma-composition-gorenstein}
+Let $f : X \to Y$ and $g : Y \to Z$ be morphisms. Assume that the
+fibres $X_y$, $Y_z$ and $X_z$ of $f$, $g$, and $g \circ f$ are
+locally Noetherian.
+\begin{enumerate}
+\item If $f$ is Gorenstein at $x$ and $g$ is Gorenstein
+at $f(x)$, then $g \circ f$ is Gorenstein at $x$.
+\item If $f$ and $g$ are Gorenstein, then $g \circ f$ is Gorenstein.
+\item If $g \circ f$ is Gorenstein at $x$ and $f$ is flat at $x$,
+then $f$ is Gorenstein at $x$ and $g$ is Gorenstein at $f(x)$.
\item If $g \circ f$ is Gorenstein and $f$ is flat, then
+$f$ is Gorenstein and $g$ is Gorenstein at every point in
+the image of $f$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+After translating into algebra this follows from
+Dualizing Complexes, Lemma \ref{dualizing-lemma-flat-under-gorenstein}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-morphism-from-gorenstein-scheme}
+\begin{slogan}
Gorensteinness of the total space of a flat fibration implies
the same for base and fibres
+\end{slogan}
+Let $f : X \to Y$ be a flat morphism of locally Noetherian schemes.
+If $X$ is Gorenstein, then $f$ is Gorenstein and $\mathcal{O}_{Y, f(x)}$
+is Gorenstein for all $x \in X$.
+\end{lemma}
+
+\begin{proof}
+After translating into algebra this follows from
+Dualizing Complexes, Lemma \ref{dualizing-lemma-flat-under-gorenstein}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-gorenstein}
+Let $f : X \to Y$ be a morphism of schemes.
+Assume that all the fibres $X_y$ are locally Noetherian schemes.
+Let $Y' \to Y$ be locally of finite type. Let $f' : X' \to Y'$
+be the base change of $f$.
+Let $x' \in X'$ be a point with image $x \in X$.
+\begin{enumerate}
+\item If $f$ is Gorenstein at $x$, then
+$f' : X' \to Y'$ is Gorenstein at $x'$.
\item If $f$ is flat at $x$ and $f'$ is Gorenstein at $x'$, then $f$
+is Gorenstein at $x$.
+\item If $Y' \to Y$ is flat at $f'(x')$ and $f'$ is Gorenstein at
+$x'$, then $f$ is Gorenstein at $x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Note that the assumption on $Y' \to Y$ implies that for $y' \in Y'$
+mapping to $y \in Y$ the field extension $\kappa(y')/\kappa(y)$
+is finitely generated. Hence also all the fibres
+$X'_{y'} = (X_y)_{\kappa(y')}$ are locally Noetherian, see
+Varieties, Lemma \ref{varieties-lemma-locally-Noetherian-base-change}.
+Thus the lemma makes sense. Set $y' = f'(x')$ and $y = f(x)$.
+Hence we get the following commutative diagram of local rings
+$$
+\xymatrix{
+\mathcal{O}_{X', x'} & \mathcal{O}_{X, x} \ar[l] \\
+\mathcal{O}_{Y', y'} \ar[u] & \mathcal{O}_{Y, y} \ar[l] \ar[u]
+}
+$$
+where the upper left corner is a localization of the tensor product
+of the upper right and lower left corners over the lower right corner.
+
+\medskip\noindent
+Assume $f$ is Gorenstein at $x$.
+The flatness of $\mathcal{O}_{Y, y} \to \mathcal{O}_{X, x}$
+implies the flatness of $\mathcal{O}_{Y', y'} \to \mathcal{O}_{X', x'}$, see
+Algebra, Lemma \ref{algebra-lemma-base-change-flat-up-down}.
+The fact that $\mathcal{O}_{X, x}/\mathfrak m_y\mathcal{O}_{X, x}$
+is Gorenstein implies that
+$\mathcal{O}_{X', x'}/\mathfrak m_{y'}\mathcal{O}_{X', x'}$
+is Gorenstein, see
+Lemma \ref{lemma-gorenstein-base-change}. Hence we see that $f'$
+is Gorenstein at $x'$.
+
+\medskip\noindent
+Assume $f$ is flat at $x$ and $f'$ is Gorenstein at $x'$.
+The fact that $\mathcal{O}_{X', x'}/\mathfrak m_{y'}\mathcal{O}_{X', x'}$
+is Gorenstein implies that
+$\mathcal{O}_{X, x}/\mathfrak m_y\mathcal{O}_{X, x}$
+is Gorenstein, see
+Lemma \ref{lemma-gorenstein-base-change}. Hence we see that $f$
+is Gorenstein at $x$.
+
+\medskip\noindent
+Assume $Y' \to Y$ is flat at $y'$ and $f'$ is Gorenstein at
+$x'$. The flatness of $\mathcal{O}_{Y', y'} \to \mathcal{O}_{X', x'}$
+and $\mathcal{O}_{Y, y} \to \mathcal{O}_{Y', y'}$ implies the flatness
+of $\mathcal{O}_{Y, y} \to \mathcal{O}_{X, x}$, see
+Algebra, Lemma \ref{algebra-lemma-base-change-flat-up-down}.
+The fact that $\mathcal{O}_{X', x'}/\mathfrak m_{y'}\mathcal{O}_{X', x'}$
+is Gorenstein implies that
+$\mathcal{O}_{X, x}/\mathfrak m_y\mathcal{O}_{X, x}$
+is Gorenstein, see
+Lemma \ref{lemma-gorenstein-base-change}. Hence we see that $f$
+is Gorenstein at $x$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-lft-base-change-gorenstein}
+Let $f : X \to Y$ be a morphism of schemes which is flat and
+locally of finite type. Then formation of the set
+$\{x \in X \mid f\text{ is Gorenstein at }x\}$
+commutes with arbitrary base change.
+\end{lemma}
+
+\begin{proof}
+The assumption implies any fibre of $f$ is locally of finite type
+over a field and hence locally Noetherian and the same is true for
+any base change. Thus the statement makes sense. Looking at
+fibres we reduce to the following problem: let $X$ be a scheme
+locally of finite type over a field $k$,
+let $K/k$ be a field extension, and
+let $x_K \in X_K$ be a point with image $x \in X$.
+Problem: show that $\mathcal{O}_{X_K, x_K}$ is Gorenstein if and only if
+$\mathcal{O}_{X, x}$ is Gorenstein.
+
+\medskip\noindent
+The problem can be solved using a bit of algebra as follows.
+Choose an affine open $\Spec(A) \subset X$ containing $x$.
+Say $x$ corresponds to $\mathfrak p \subset A$.
+With $A_K = A \otimes_k K$ we see that $\Spec(A_K) \subset X_K$
+contains $x_K$. Say $x_K$ corresponds to $\mathfrak p_K \subset A_K$.
+Let $\omega_A^\bullet$ be a dualizing complex for $A$.
+By Dualizing Complexes, Lemma
+\ref{dualizing-lemma-base-change-dualizing-over-field}
+$\omega_A^\bullet \otimes_A A_K$ is a dualizing complex for $A_K$.
+Now we are done because
+$A_\mathfrak p \to (A_K)_{\mathfrak p_K}$ is a flat local
+homomorphism of Noetherian rings and hence
+$(\omega_A^\bullet)_\mathfrak p$ is an invertible object
+of $D(A_\mathfrak p)$ if and only if
+$(\omega_A^\bullet)_\mathfrak p \otimes_{A_\mathfrak p} (A_K)_{\mathfrak p_K}$
+is an invertible object of $D((A_K)_{\mathfrak p_K})$.
+Some details omitted; hint: look at cohomology modules.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-affine-flat-Noetherian-gorenstein}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism
+of $\textit{FTS}_S$. Let $x \in X$. If $f$ is flat, then
+the following are equivalent
+\begin{enumerate}
+\item $f$ is Gorenstein at $x$,
+\item $f^!\mathcal{O}_Y$ is isomorphic to an invertible object
+in a neighbourhood of $x$.
+\end{enumerate}
+In particular, the set of points where $f$ is Gorenstein is
+open in $X$.
+\end{lemma}
+
+\begin{proof}
Set $\omega^\bullet = f^!\mathcal{O}_Y$, set $y = f(x)$, and denote
$h : X_y \to X$ the canonical morphism. By
Lemma \ref{lemma-relative-dualizing-fibres}
the complex $\omega^\bullet$ is a bounded complex with coherent cohomology
sheaves whose derived restriction $Lh^*\omega^\bullet$
to the fibre $X_y$ is a dualizing complex on $X_y$.
Denote $i : x \to X_y$ the inclusion of the point $x$.
+Then the following are equivalent
+\begin{enumerate}
+\item $f$ is Gorenstein at $x$,
+\item $\mathcal{O}_{X_y, x}$ is Gorenstein,
+\item $Lh^*\omega^\bullet$ is invertible in a neighbourhood of $x$,
\item $Li^* Lh^* \omega^\bullet$ has exactly one nonzero cohomology
module, which has dimension $1$ over $\kappa(x)$,
\item $L(h \circ i)^* \omega^\bullet$ has exactly one nonzero cohomology
module, which has dimension $1$ over $\kappa(x)$,
+\item $\omega^\bullet$ is invertible in a neighbourhood of $x$.
+\end{enumerate}
+The equivalence of (1) and (2) is by definition (as $f$ is flat).
+The equivalence of (2) and (3) follows from
+Lemma \ref{lemma-gorenstein}.
+The equivalence of (3) and (4) follows from
+More on Algebra, Lemma
+\ref{more-algebra-lemma-lift-bounded-pseudo-coherent-to-perfect}.
+The equivalence of (4) and (5) holds because
+$Li^* Lh^* = L(h \circ i)^*$.
+The equivalence of (5) and (6) holds by
+More on Algebra, Lemma
+\ref{more-algebra-lemma-lift-bounded-pseudo-coherent-to-perfect}.
+Thus the lemma is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-finite-presentation-characterize-gorenstein}
+Let $f : X \to S$ be a morphism of schemes which is flat and locally
+of finite presentation. Let $x \in X$ with image $s \in S$.
+Set $d = \dim_x(X_s)$. The following are equivalent
+\begin{enumerate}
+\item $f$ is Gorenstein at $x$,
+\item there exists an open neighbourhood $U \subset X$ of $x$
+and a locally quasi-finite morphism $U \to \mathbf{A}^d_S$ over $S$
+which is Gorenstein at $x$,
+\item there exists an open neighbourhood $U \subset X$ of $x$
+and a locally quasi-finite Gorenstein morphism $U \to \mathbf{A}^d_S$ over $S$,
+\item for any $S$-morphism $g : U \to \mathbf{A}^d_S$
+of an open neighbourhood $U \subset X$ of $x$ we have:
+$g$ is quasi-finite at $x$ $\Rightarrow$ $g$ is Gorenstein at $x$.
+\end{enumerate}
+In particular, the set of points where $f$ is Gorenstein is open in $X$.
+\end{lemma}
+
+\begin{proof}
+Choose affine open $U = \Spec(A) \subset X$ with $x \in U$ and
+$V = \Spec(R) \subset S$ with $f(U) \subset V$. Then $R \to A$
+is a flat ring map of finite presentation. Let $\mathfrak p \subset A$
+be the prime ideal corresponding to $x$. After replacing $A$ by a
+principal localization we may assume there exists a quasi-finite map
+$R[x_1, \ldots, x_d] \to A$, see
+Algebra, Lemma \ref{algebra-lemma-quasi-finite-over-polynomial-algebra}.
+Thus there exists at least one pair $(U, g)$ consisting of an
+open neighbourhood $U \subset X$ of $x$ and a locally\footnote{If $S$
+is quasi-separated, then $g$ will be quasi-finite.} quasi-finite morphism
+$g : U \to \mathbf{A}^d_S$.
+
+\medskip\noindent
+Having said this, the lemma translates into the following algebra
+problem (translation omitted). Given $R \to A$ flat and of finite
+presentation, a prime $\mathfrak p \subset A$ and
+$\varphi : R[x_1, \ldots, x_d] \to A$ quasi-finite at $\mathfrak p$
+the following are equivalent
+\begin{enumerate}
+\item[(a)] $\Spec(\varphi)$ is Gorenstein at $\mathfrak p$, and
+\item[(b)] $\Spec(A) \to \Spec(R)$ is Gorenstein at $\mathfrak p$.
+\item[(c)] $\Spec(A) \to \Spec(R)$ is Gorenstein in an open neighbourhood
+of $\mathfrak p$.
+\end{enumerate}
In each case $R[x_1, \ldots, x_d] \to A$ is flat at $\mathfrak p$
hence by openness of flatness
(Algebra, Theorem \ref{algebra-theorem-openness-flatness}),
we may assume $R[x_1, \ldots, x_d] \to A$
is flat (replace $A$ by a suitable principal localization).
By Algebra, Lemma \ref{algebra-lemma-flat-finite-presentation-limit-flat}
there exists $R_0 \subset R$ and $R_0[x_1, \ldots, x_d] \to A_0$
such that $R_0$ is of finite type over $\mathbf{Z}$ and
$R_0 \to A_0$ is of finite type and $R_0[x_1, \ldots, x_d] \to A_0$ is flat.
+Note that the set of points where a flat finite type morphism
+is Gorenstein commutes with base change by
+Lemma \ref{lemma-base-change-gorenstein}.
+In this way we reduce to the case where $R$ is Noetherian.
+
+\medskip\noindent
Thus we may assume $X$ and $S$ affine and that
we have a factorization of $f$ of the form
$$
X \xrightarrow{g} \mathbf{A}^d_S \xrightarrow{p} S
$$
with $g$ flat and quasi-finite and $S$ Noetherian. Then $X$ and
$\mathbf{A}^d_S$ are separated over $S$ and we have
$$
f^!\mathcal{O}_S = g^!p^!\mathcal{O}_S = g^!\mathcal{O}_{\mathbf{A}^d_S}[d]
$$
by known properties of upper shriek functors
(Lemmas \ref{lemma-upper-shriek-composition} and
\ref{lemma-shriek-affine-line}).
+Hence the equivalence of (a), (b), and (c) by
+Lemma \ref{lemma-affine-flat-Noetherian-gorenstein}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gorenstein-local-source-and-target}
+The property
+$\mathcal{P}(f)=$``the fibres of $f$ are locally Noetherian and $f$ is
+Gorenstein'' is local in the fppf topology on the target and
+local in the syntomic topology on the source.
+\end{lemma}
+
+\begin{proof}
+We have
+$\mathcal{P}(f) =
+\mathcal{P}_1(f) \wedge \mathcal{P}_2(f)$
+where
+$\mathcal{P}_1(f)=$``$f$ is flat'', and
+$\mathcal{P}_2(f)=$``the fibres of $f$ are locally Noetherian
+and Gorenstein''.
+We know that $\mathcal{P}_1$ is
+local in the fppf topology on the source and the target, see
+Descent, Lemmas \ref{descent-lemma-descending-property-flat} and
+\ref{descent-lemma-flat-fpqc-local-source}. Thus we have to deal
+with $\mathcal{P}_2$.
+
+\medskip\noindent
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\{\varphi_i : Y_i \to Y\}_{i \in I}$ be an fppf covering of $Y$.
+Denote $f_i : X_i \to Y_i$ the base change of $f$ by $\varphi_i$.
+Let $i \in I$ and let $y_i \in Y_i$ be a point.
+Set $y = \varphi_i(y_i)$. Note that
+$$
X_{i, y_i} = \Spec(\kappa(y_i)) \times_{\Spec(\kappa(y))} X_y
$$
and that $\kappa(y_i)/\kappa(y)$ is a finitely generated field
+extension. Hence if $X_y$ is locally Noetherian, then
+$X_{i, y_i}$ is locally Noetherian, see
+Varieties, Lemma \ref{varieties-lemma-locally-Noetherian-base-change}.
+And if in addition $X_y$ is Gorenstein,
+then $X_{i, y_i}$ is Gorenstein, see
+Lemma \ref{lemma-gorenstein-base-change}.
+Thus $\mathcal{P}_2$ is fppf local on the target.
+
+\medskip\noindent
+Let $\{X_i \to X\}$ be a syntomic covering of $X$.
+Let $y \in Y$. In this case $\{X_{i, y} \to X_y\}$ is a
+syntomic covering of the fibre. Hence the locality of $\mathcal{P}_2$
+for the syntomic topology on the source follows from
+Lemma \ref{lemma-gorenstein-local-syntomic}.
+\end{proof}
+
+
+
+
+\section{More on dualizing complexes}
+\label{section-more-dualizing}
+
+\noindent
+Some lemmas which don't fit anywhere else very well.
+
+\begin{lemma}
+\label{lemma-descent-ascent}
+Let $f : X \to Y$ be a morphism of locally Noetherian schemes. Assume
+\begin{enumerate}
+\item $f$ is syntomic and surjective, or
+\item $f$ is a surjective flat local complete intersection morphism, or
+\item $f$ is a surjective Gorenstein morphism of finite type.
+\end{enumerate}
+Then $K \in D_\QCoh(\mathcal{O}_Y)$ is a dualizing complex on $Y$ if and only
+if $Lf^*K$ is a dualizing complex on $X$.
+\end{lemma}
+
+\begin{proof}
+Taking affine opens and using
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-affine-compare-bounded}
+this translates into
+Dualizing Complexes, Lemma \ref{dualizing-lemma-descent-ascent}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Duality for proper schemes over fields}
+\label{section-duality-proper-over-field}
+
+\noindent
+In this section we work out the consequences of the very general
+material above on dualizing complexes and duality for proper schemes
+over fields.
+
+\begin{lemma}
+\label{lemma-duality-proper-over-field}
+Let $X$ be a proper scheme over a field $k$. There exists a dualizing complex
+$\omega_X^\bullet$ with the following properties
+\begin{enumerate}
+\item $H^i(\omega_X^\bullet)$ is nonzero only for $i \in [-\dim(X), 0]$,
\item $\omega_X = H^{-\dim(X)}(\omega_X^\bullet)$ is a coherent
$(S_2)$-module whose support is the union of the irreducible components
of $X$ of dimension $\dim(X)$,
+\item the dimension of the support of $H^i(\omega_X^\bullet)$ is at most $-i$,
+\item for $x \in X$ closed the module
+$H^i(\omega_{X, x}^\bullet) \oplus \ldots \oplus H^0(\omega_{X, x}^\bullet)$
+is nonzero if and only if $\text{depth}(\mathcal{O}_{X, x}) \leq -i$,
+\item for $K \in D_\QCoh(\mathcal{O}_X)$ there are functorial
+isomorphisms\footnote{This property
+characterizes $\omega_X^\bullet$ in $D_\QCoh(\mathcal{O}_X)$
+up to unique isomorphism by the Yoneda lemma. Since $\omega_X^\bullet$
+is in $D^b_{\textit{Coh}}(\mathcal{O}_X)$ in fact it suffices to consider
+$K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$.}
+$$
+\Ext^i_X(K, \omega_X^\bullet) = \Hom_k(H^{-i}(X, K), k)
+$$
+compatible with shifts and distinguished triangles,
+\item there are functorial isomorphisms
+$\Hom(\mathcal{F}, \omega_X) = \Hom_k(H^{\dim(X)}(X, \mathcal{F}), k)$
+for $\mathcal{F}$ quasi-coherent on $X$, and
+\item if $X \to \Spec(k)$ is smooth of relative dimension $d$,
+then $\omega_X^\bullet \cong \wedge^d\Omega_{X/k}[d]$ and
+$\omega_X \cong \wedge^d\Omega_{X/k}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $f : X \to \Spec(k)$ the structure morphism. Let $a$ be the
+right adjoint of pushforward of this morphism, see
+Lemma \ref{lemma-twisted-inverse-image}. Consider the relative dualizing
+complex
+$$
+\omega_X^\bullet = a(\mathcal{O}_{\Spec(k)})
+$$
+Compare with Remark \ref{remark-relative-dualizing-complex}.
+Since $f$ is proper we have
+$f^!(\mathcal{O}_{\Spec(k)}) = a(\mathcal{O}_{\Spec(k)})$ by
+definition, see Section \ref{section-upper-shriek}.
+Applying Lemma \ref{lemma-shriek-dualizing} we find that
+$\omega_X^\bullet$ is a dualizing complex. Moreover, we see that
+$\omega_X^\bullet$ and $\omega_X$ are as in
+Example \ref{example-proper-over-local} and as in
+Example \ref{example-equidimensional-over-field}.
+
+\medskip\noindent
+Parts (1), (2), and (3) follow from
+Lemma \ref{lemma-vanishing-good-dualizing}.
+
+\medskip\noindent
+For a closed point $x \in X$ we see that $\omega_{X, x}^\bullet$ is a
+normalized dualizing complex over $\mathcal{O}_{X, x}$, see
+Lemma \ref{lemma-good-dualizing-normalized}.
+Part (4) then follows from Dualizing Complexes, Lemma
+\ref{dualizing-lemma-depth-in-terms-dualizing-complex}.
+
+\medskip\noindent
+Part (5) holds by construction as $a$ is the right adjoint to
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D(\mathcal{O}_{\Spec(k)}) = D(k)$
+which we can identify with $K \mapsto R\Gamma(X, K)$. We also use
+that the derived category $D(k)$ of $k$-modules is the same as the
+category of graded $k$-vector spaces.
+
+\medskip\noindent
+Part (6) follows from Lemma \ref{lemma-dualizing-module-proper-over-A}
+for coherent $\mathcal{F}$ and in general by unwinding
+(5) for $K = \mathcal{F}[0]$ and $i = -\dim(X)$.
+
+\medskip\noindent
+Part (7) follows from Lemma \ref{lemma-smooth-proper}.
+\end{proof}
+
+\begin{remark}
+\label{remark-duality-proper-over-field}
+Let $k$, $X$, and $\omega_X^\bullet$
+be as in Lemma \ref{lemma-duality-proper-over-field}.
+The identity on the complex $\omega_X^\bullet$ corresponds, via
+the functorial isomorphism in part (5), to a map
+$$
+t : H^0(X, \omega_X^\bullet) \longrightarrow k
+$$
For an arbitrary $K$ in $D_\QCoh(\mathcal{O}_X)$ the identification
of $\Hom(K, \omega_X^\bullet)$ with $H^0(X, K)^\vee$ in part (5)
+corresponds to the pairing
+$$
+\Hom_X(K, \omega_X^\bullet) \times H^0(X, K) \longrightarrow k,\quad
+(\alpha, \beta) \longmapsto t(\alpha(\beta))
+$$
+This follows from the functoriality of the isomorphisms in (5). Similarly
+for any $i \in \mathbf{Z}$ we get the pairing
+$$
+\Ext^i_X(K, \omega_X^\bullet) \times H^{-i}(X, K) \longrightarrow k,\quad
+(\alpha, \beta) \longmapsto t(\alpha(\beta))
+$$
+Here we think of $\alpha$ as a morphism $K[-i] \to \omega_X^\bullet$
+and $\beta$ as an element of $H^0(X, K[-i])$ in order to define
+$\alpha(\beta)$. Observe that if $K$ is general, then we only know
+that this pairing is nondegenerate on one side: the pairing induces an
+isomorphism of $\Hom_X(K, \omega_X^\bullet)$,
+resp.\ $\Ext^i_X(K, \omega_X^\bullet)$ with the $k$-linear dual of $H^0(X, K)$,
+resp.\ $H^{-i}(X, K)$ but in general not vice versa. If $K$
+is in $D^b_{\textit{Coh}}(\mathcal{O}_X)$, then
+$\Hom_X(K, \omega_X^\bullet)$, $\Ext^i_X(K, \omega_X^\bullet)$,
+$H^0(X, K)$, and $H^{-i}(X, K)$ are finite dimensional $k$-vector spaces (by
+Derived Categories of Schemes, Lemmas
+\ref{perfect-lemma-coherent-internal-hom} and
+\ref{perfect-lemma-direct-image-coherent-bdd-below})
+and the pairings are perfect in the usual sense.
+\end{remark}
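+
+\noindent
+To orient the reader we describe the pairing above in a classical
+special case; this is only an illustration and will not be used later.
+Take $X = \mathbf{P}^1_k$, so that
+$\omega_X^\bullet = \Omega_{\mathbf{P}^1_k/k}[1] =
+\mathcal{O}_{\mathbf{P}^1_k}(-2)[1]$, compare with
+Lemma \ref{lemma-smooth-proper}. For
+$K = \mathcal{O}_{\mathbf{P}^1_k}(n)$ and $i = 0$ the pairing of
+Remark \ref{remark-duality-proper-over-field} becomes
+$$
+H^1(\mathbf{P}^1_k, \mathcal{O}(-n - 2)) \times
+H^0(\mathbf{P}^1_k, \mathcal{O}(n)) \longrightarrow k
+$$
+which for $n \geq 0$ is a perfect pairing of $k$-vector spaces of
+dimension $n + 1$, i.e., classical Serre duality on the projective line.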
+
+\begin{remark}
+\label{remark-coherent-duality-proper-over-field}
+We continue the discussion in Remark \ref{remark-duality-proper-over-field}
+and we use the same notation $k$, $X$, $\omega_X^\bullet$, and $t$.
+If $\mathcal{F}$ is a coherent $\mathcal{O}_X$-module we obtain
+perfect pairings
+$$
+\langle -, - \rangle :
+\Ext^i_X(\mathcal{F}, \omega_X^\bullet) \times H^{-i}(X,\mathcal{F})
+\longrightarrow k,\quad
+(\alpha, \beta) \longmapsto t(\alpha(\beta))
+$$
+of finite dimensional $k$-vector spaces. These pairings satisfy the
+following (obvious) functoriality: if $\varphi : \mathcal{F} \to \mathcal{G}$
+is a homomorphism of coherent $\mathcal{O}_X$-modules, then we have
+$$
+\langle \alpha \circ \varphi, \beta \rangle =
+\langle \alpha, \varphi(\beta) \rangle
+$$
+for $\alpha \in \Ext^i_X(\mathcal{G}, \omega_X^\bullet)$ and
+$\beta \in H^{-i}(X, \mathcal{F})$. In other words, the $k$-linear map
+$\Ext^i_X(\mathcal{G}, \omega_X^\bullet) \to
+\Ext^i_X(\mathcal{F}, \omega_X^\bullet)$ induced by $\varphi$
+is, via the pairings, the $k$-linear dual of the $k$-linear map
+$H^{-i}(X, \mathcal{F}) \to H^{-i}(X, \mathcal{G})$ induced
+by $\varphi$. Formulated in this manner, this still works if
+$\varphi$ is a homomorphism of quasi-coherent $\mathcal{O}_X$-modules.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-duality-proper-over-field-perfect}
+Let $k$, $X$, and $\omega_X^\bullet$
+be as in Lemma \ref{lemma-duality-proper-over-field}.
+Let $t : H^0(X, \omega_X^\bullet) \to k$ be as in
+Remark \ref{remark-duality-proper-over-field}.
+Let $E \in D(\mathcal{O}_X)$ be perfect. Then the pairings
+$$
+H^i(X, \omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} E^\vee)
+\times
+H^{-i}(X, E) \longrightarrow k, \quad
+(\xi, \eta) \longmapsto
+t((1_{\omega_X^\bullet} \otimes \epsilon)(\xi \cup \eta))
+$$
+are perfect for all $i$. Here $\cup$ denotes the cup product
+of Cohomology, Section \ref{cohomology-section-cup-product} and
+$\epsilon : E^\vee \otimes_{\mathcal{O}_X}^\mathbf{L} E \to \mathcal{O}_X$
+is as in Cohomology, Example \ref{cohomology-example-dual-derived}.
+\end{lemma}
+
+\begin{proof}
+By replacing $E$ with $E[-i]$ this reduces to the case $i = 0$.
+By Cohomology, Lemma \ref{cohomology-lemma-ext-composition-is-cup}
+we see that the pairing is the same as the one discussed
+in Remark \ref{remark-duality-proper-over-field}
+whence the result by the discussion in that remark.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-duality-proper-over-field-CM}
+Let $X$ be a proper scheme over a field $k$ which is Cohen-Macaulay
+and equidimensional of dimension $d$. The module $\omega_X$
+of Lemma \ref{lemma-duality-proper-over-field} has the following properties
+\begin{enumerate}
+\item $\omega_X$ is a dualizing module on $X$
+(Section \ref{section-dualizing-module}),
+\item $\omega_X$ is a coherent Cohen-Macaulay module whose support is $X$,
+\item there are functorial isomorphisms
+$\Ext^i_X(K, \omega_X[d]) = \Hom_k(H^{-i}(X, K), k)$
+compatible with shifts and distinguished triangles for
+$K \in D_\QCoh(\mathcal{O}_X)$,
+\item there are functorial isomorphisms
+$\Ext^{d - i}_X(\mathcal{F}, \omega_X) = \Hom_k(H^i(X, \mathcal{F}), k)$
+for $\mathcal{F}$ quasi-coherent on $X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear from Lemma \ref{lemma-duality-proper-over-field}
+that $\omega_X$ is a dualizing module (as it is the leftmost
+nonvanishing cohomology sheaf of a dualizing complex).
+We have $\omega_X^\bullet = \omega_X[d]$ and $\omega_X$ is Cohen-Macaulay
+as $X$ is Cohen-Macaulay, see Lemma \ref{lemma-dualizing-module-CM-scheme}.
+The other statements follow from this combined with the
+corresponding statements of Lemma \ref{lemma-duality-proper-over-field}.
+\end{proof}
+
+\begin{remark}
+\label{remark-rework-duality-locally-free-CM}
+Let $X$ be a proper Cohen-Macaulay scheme over a field $k$
+which is equidimensional of dimension $d$.
+Let $\omega_X^\bullet$ and $\omega_X$ be as in
+Lemma \ref{lemma-duality-proper-over-field}.
+By Lemma \ref{lemma-duality-proper-over-field-CM}
+we have $\omega_X^\bullet = \omega_X[d]$.
+Let $t : H^d(X, \omega_X) \to k$ be the map of
+Remark \ref{remark-duality-proper-over-field}.
+Let $\mathcal{E}$ be a finite locally free $\mathcal{O}_X$-module
+with dual $\mathcal{E}^\vee$. Then we have perfect pairings
+$$
+H^i(X, \omega_X \otimes_{\mathcal{O}_X} \mathcal{E}^\vee)
+\times
+H^{d - i}(X, \mathcal{E})
+\longrightarrow
+k,\quad
+(\xi, \eta) \longmapsto t((1 \otimes \epsilon)(\xi \cup \eta))
+$$
+where $\cup$ is the cup-product and
+$\epsilon : \mathcal{E}^\vee \otimes_{\mathcal{O}_X} \mathcal{E}
+\to \mathcal{O}_X$ is the evaluation map.
+This is a special case of Lemma \ref{lemma-duality-proper-over-field-perfect}.
+\end{remark}
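+
+\noindent
+For example (only as an illustration), if $X$ is a smooth projective
+curve over $k$, then $d = 1$ and $\omega_X = \Omega_{X/k}$.
+Taking $\mathcal{E} = \mathcal{O}_X$ and $i = 0$ in
+Remark \ref{remark-rework-duality-locally-free-CM}
+we obtain a perfect pairing
+$$
+H^0(X, \Omega_{X/k}) \times H^1(X, \mathcal{O}_X) \longrightarrow k
+$$
+In particular
+$\dim_k H^1(X, \mathcal{O}_X) = \dim_k H^0(X, \Omega_{X/k})$.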
+
+\noindent
+Here is a sanity check for the dualizing complex.
+
+\begin{lemma}
+\label{lemma-sanity-check-duality}
+Let $X$ be a proper scheme over a field $k$. Let $\omega_X^\bullet$ and
+$\omega_X$ be as in Lemma \ref{lemma-duality-proper-over-field}.
+\begin{enumerate}
+\item If $X \to \Spec(k)$ factors as $X \to \Spec(k') \to \Spec(k)$
+for some field $k'$, then $\omega_X^\bullet$ and $\omega_X$
+are as in Lemma \ref{lemma-duality-proper-over-field} for the morphism
+$X \to \Spec(k')$.
+\item If $K/k$ is a field extension, then the pullback of
+$\omega_X^\bullet$ and $\omega_X$ to the base change $X_K$
+are as in Lemma \ref{lemma-duality-proper-over-field} for the morphism
+$X_K \to \Spec(K)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Denote $f : X \to \Spec(k)$ the structure morphism and denote
+$f' : X \to \Spec(k')$ the given factorization.
+In the proof of Lemma \ref{lemma-duality-proper-over-field}
+we took $\omega_X^\bullet = a(\mathcal{O}_{\Spec(k)})$
+where $a$ is the right adjoint of
+Lemma \ref{lemma-twisted-inverse-image} for $f$.
+Thus we have to show
+$a(\mathcal{O}_{\Spec(k)}) \cong a'(\mathcal{O}_{\Spec(k')})$
+where $a'$ is the right adjoint of
+Lemma \ref{lemma-twisted-inverse-image} for $f'$.
+Since $k' \subset H^0(X, \mathcal{O}_X)$ we see that $k'/k$ is a finite
+extension (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-proper-over-affine-cohomology-finite}).
+By uniqueness of adjoints we have $a = a' \circ b$ where
+$b$ is the right adjoint of Lemma
+\ref{lemma-twisted-inverse-image} for $g : \Spec(k') \to \Spec(k)$.
+Another way to say this: we have $f^! = (f')^! \circ g^!$.
+Thus it suffices to show that $\Hom_k(k', k) \cong k'$ as
+$k'$-modules, see Example \ref{example-affine-twisted-inverse-image}.
+This holds because these are $k'$-vector spaces of
+the same dimension (namely dimension $1$).
+
+\medskip\noindent
+Proof of (2). This holds because we have base change for $a$ by
+Lemma \ref{lemma-more-base-change}. See discussion in
+Remark \ref{remark-relative-dualizing-complex}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Relative dualizing complexes}
+\label{section-relative-dualizing-complexes}
+
+\noindent
+For a proper, flat morphism of finite presentation we have a
+rigid relative dualizing complex, see
+Remark \ref{remark-relative-dualizing-complex} and
+Lemma \ref{lemma-van-den-bergh}. For a separated and finite type
+morphism $f : X \to Y$ of Noetherian schemes, we can consider
+$f^!\mathcal{O}_Y$. In this section we define relative dualizing complexes
+for morphisms which are flat and locally of finite presentation
+(but not necessarily quasi-separated or quasi-compact) between
+schemes (not necessarily locally Noetherian).
+We show such complexes exist, are unique up to unique
+isomorphism, and agree with the cases mentioned above.
+Before reading this section, please read
+Dualizing Complexes, Section
+\ref{dualizing-section-relative-dualizing-complexes}.
+
+\begin{definition}
+\label{definition-relative-dualizing-complex}
+Let $X \to S$ be a morphism of schemes which is flat and
+locally of finite presentation. Let $W \subset X \times_S X$
+be any open such that the diagonal $\Delta_{X/S} : X \to X \times_S X$
+factors through a closed immersion $\Delta : X \to W$.
+A {\it relative dualizing complex} is a
+pair $(K, \xi)$ consisting of an object $K \in D(\mathcal{O}_X)$
+and a map
+$$
+\xi : \Delta_*\mathcal{O}_X \longrightarrow L\text{pr}_1^*K|_W
+$$
+in $D(\mathcal{O}_W)$ such that
+\begin{enumerate}
+\item $K$ is $S$-perfect (Derived Categories of Schemes, Definition
+\ref{perfect-definition-relatively-perfect}), and
+\item $\xi$ defines an isomorphism of $\Delta_*\mathcal{O}_X$
+with
+$R\SheafHom_{\mathcal{O}_W}(
+\Delta_*\mathcal{O}_X, L\text{pr}_1^*K|_W)$.
+\end{enumerate}
+\end{definition}
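+
+\noindent
+Before analyzing this definition, we mention a guiding example
+(which we will not use): if $X \to S$ is smooth of relative
+dimension $d$, then one can show that
+$K = \wedge^d\Omega_{X/S}[d]$ is the first component of a relative
+dualizing complex; compare with Lemma \ref{lemma-smooth-proper} and
+Lemma \ref{lemma-compactifyable-relative-dualizing}.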
+
+\noindent
+By Lemma \ref{lemma-sheaf-with-exact-support-ext} condition (2)
+is equivalent to the existence of an isomorphism
+$$
+\mathcal{O}_X \longrightarrow
+R\SheafHom(\mathcal{O}_X, L\text{pr}_1^*K|_W)
+$$
+in $D(\mathcal{O}_X)$ whose pushforward via $\Delta$ is equal to $\xi$.
+Since $R\SheafHom(\mathcal{O}_X, L\text{pr}_1^*K|_W)$ is independent
+of the choice of the open $W$, so is the category of pairs $(K, \xi)$.
+If $X \to S$ is separated, then we can choose $W = X \times_S X$.
+We will reduce many of the arguments to the case of rings
+using the following lemma.
+
+\begin{lemma}
+\label{lemma-relative-dualizing-complex-algebra}
+Let $X \to S$ be a morphism of schemes which is flat and
+locally of finite presentation. Let $(K, \xi)$ be a relative
+dualizing complex. Then for any commutative diagram
+$$
+\xymatrix{
+\Spec(A) \ar[d] \ar[r] & X \ar[d] \\
+\Spec(R) \ar[r] & S
+}
+$$
+whose horizontal arrows are open immersions, the
+restriction of $K$ to $\Spec(A)$ corresponds via
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-compare-bounded}
+to a relative dualizing complex for $R \to A$
+in the sense of Dualizing Complexes, Definition
+\ref{dualizing-definition-relative-dualizing-complex}.
+\end{lemma}
+
+\begin{proof}
+Since formation of $R\SheafHom$ commutes with restrictions to
+opens we may as well assume $X = \Spec(A)$ and $S = \Spec(R)$.
+Observe that relatively perfect objects of $D(\mathcal{O}_X)$
+are pseudo-coherent and hence are in $D_\QCoh(\mathcal{O}_X)$
+(Derived Categories of Schemes, Lemma \ref{perfect-lemma-pseudo-coherent}).
+Thus the statement makes sense.
+Observe that taking $\Delta_*$, $L\text{pr}_1^*$, and
+$R\SheafHom$ is compatible with what happens on the algebraic
+side by
+Derived Categories of Schemes,
+Lemmas \ref{perfect-lemma-quasi-coherence-pushforward},
+\ref{perfect-lemma-quasi-coherence-pullback},
+\ref{perfect-lemma-quasi-coherence-internal-hom}.
+For the last one we observe that $L\text{pr}_1^*K$
+is $S$-perfect (hence bounded below) and that $\Delta_*\mathcal{O}_X$
+is a pseudo-coherent object of $D(\mathcal{O}_W)$;
+translated into algebra this means that $A$ is pseudo-coherent
+as an $A \otimes_R A$-module which follows from
+More on Algebra, Lemma
+\ref{more-algebra-lemma-more-relative-pseudo-coherent-is-moot}
+applied to $R \to A \otimes_R A \to A$.
+Thus we recover exactly the conditions in
+Dualizing Complexes, Definition
+\ref{dualizing-definition-relative-dualizing-complex}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-RHom}
+Let $X \to S$ be a morphism of schemes which is flat and
+locally of finite presentation. Let $(K, \xi)$ be a relative
+dualizing complex. Then
+$\mathcal{O}_X \to R\SheafHom_{\mathcal{O}_X}(K, K)$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Looking affine locally this reduces using
+Lemma \ref{lemma-relative-dualizing-complex-algebra}
+to the algebraic case which is
+Dualizing Complexes, Lemma \ref{dualizing-lemma-relative-dualizing-RHom}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-uniqueness-relative-dualizing}
+Let $X \to S$ be a morphism of schemes which is flat and
+locally of finite presentation. If $(K, \xi)$ and $(L, \eta)$
+are two relative dualizing complexes on $X/S$, then there is a unique
+isomorphism $K \to L$ sending $\xi$ to $\eta$.
+\end{lemma}
+
+\begin{proof}
+Let $U \subset X$ be an affine open mapping into an
+affine open of $S$. Then there is an isomorphism
+$K|_U \to L|_U$ by Lemma \ref{lemma-relative-dualizing-complex-algebra} and
+Dualizing Complexes, Lemma
+\ref{dualizing-lemma-uniqueness-relative-dualizing}.
+The reader can adapt the argument of that lemma
+to the case of schemes to obtain a proof in this case.
+We will instead use a glueing argument.
+
+\medskip\noindent
+Suppose we have an isomorphism $\alpha : K \to L$.
+Then $\alpha(\xi) = u \eta$ for some invertible section
+$u \in H^0(W, \Delta_*\mathcal{O}_X) = H^0(X, \mathcal{O}_X)$.
+(Because both $\eta$ and $\alpha(\xi)$ are generators
+of an invertible $\Delta_*\mathcal{O}_X$-module by assumption.)
+Hence after replacing $\alpha$ by $u^{-1}\alpha$
+we see that $\alpha(\xi) = \eta$.
+Since the automorphism group of
+$K$ is $H^0(X, \mathcal{O}_X^*)$ by Lemma \ref{lemma-relative-dualizing-RHom}
+there is at most one such $\alpha$.
+
+\medskip\noindent
+Let $\mathcal{B}$ be the collection of affine opens of
+$X$ which map into an affine open of $S$. For each $U \in \mathcal{B}$
+we have a unique isomorphism $\alpha_U : K|_U \to L|_U$
+mapping $\xi$ to $\eta$ by the discussion in the previous
+two paragraphs.
+Observe that $\text{Ext}^i(K|_U, K|_U) = 0$ for $i < 0$
+and any open $U$ of $X$ by Lemma \ref{lemma-relative-dualizing-RHom}.
+By Cohomology, Lemma \ref{cohomology-lemma-vanishing-and-glueing}
+applied to $\text{id} : X \to X$ we get a unique morphism
+$\alpha : K \to L$ agreeing
+with $\alpha_U$ for all $U \in \mathcal{B}$.
+Then $\alpha$ sends $\xi$ to $\eta$ as this is true locally.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-existence-relative-dualizing}
+Let $X \to S$ be a morphism of schemes which is
+flat and locally of finite presentation.
+There exists a relative dualizing complex $(K, \xi)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{B}$ be the collection of affine opens of
+$X$ which map into an affine open of $S$. For each $U$
+we have a relative dualizing complex $(K_U, \xi_U)$ for
+$U$ over $S$. Namely, choose an affine open
+$V \subset S$ such that $U \to X \to S$ factors through $V$.
+Write $U = \Spec(A)$ and $V = \Spec(R)$. By
+Dualizing Complexes, Lemma \ref{dualizing-lemma-base-change-relative-dualizing}
+there exists a relative dualizing complex $K_A \in D(A)$
+for $R \to A$. Arguing backwards through the proof of
+Lemma \ref{lemma-relative-dualizing-complex-algebra}
+this determines a $V$-perfect object $K_U \in D(\mathcal{O}_U)$
+and a map
+$$
+\xi_U : \Delta_*\mathcal{O}_U \to L\text{pr}_1^*K_U
+$$
+in $D(\mathcal{O}_{U \times_V U})$. Since being $V$-perfect is the
+same as being $S$-perfect and since $U \times_V U = U \times_S U$
+we find that $(K_U, \xi_U)$ is as desired.
+
+\medskip\noindent
+If $U' \subset U \subset X$ with $U', U \in \mathcal{B}$, then
+we have a unique isomorphism $\rho_{U'}^U : K_U|_{U'} \to K_{U'}$
+in $D(\mathcal{O}_{U'})$ sending $\xi_U|_{U' \times_S U'}$
+to $\xi_{U'}$ by Lemma \ref{lemma-uniqueness-relative-dualizing}
+(note that trivially the restriction of a relative dualizing
+complex to an open is a relative dualizing complex).
+The uniqueness guarantees that
+$\rho^U_{U''} = \rho^{U'}_{U''} \circ \rho^U_{U'}|_{U''}$
+for $U'' \subset U' \subset U$ in $\mathcal{B}$.
+Observe that $\text{Ext}^i(K_U, K_U) = 0$ for $i < 0$
+for $U \in \mathcal{B}$ by Lemma \ref{lemma-relative-dualizing-RHom}
+applied to $U/S$ and $K_U$.
+Thus the BBD glueing lemma
+(Cohomology, Theorem \ref{cohomology-theorem-glueing-bbd-general})
+tells us there is a unique solution, namely, an object
+$K \in D(\mathcal{O}_X)$ and isomorphisms $\rho_U : K|_U \to K_U$
+such that we have
+$\rho^U_{U'} \circ \rho_U|_{U'} = \rho_{U'}$ for all $U' \subset U$,
+$U, U' \in \mathcal{B}$.
+
+\medskip\noindent
+To finish the proof we have to construct the map
+$$
+\xi : \Delta_*\mathcal{O}_X \longrightarrow L\text{pr}_1^*K|_W
+$$
+in $D(\mathcal{O}_W)$ inducing an isomorphism from $\Delta_*\mathcal{O}_X$ to
+$R\SheafHom_{\mathcal{O}_W}(\Delta_*\mathcal{O}_X, L\text{pr}_1^*K|_W)$.
+Since we may change $W$, we choose
+$W = \bigcup_{U \in \mathcal{B}} U \times_S U$.
+We can use $\rho_U$ to get isomorphisms
+$$
+R\SheafHom_{\mathcal{O}_W}(
+\Delta_*\mathcal{O}_X, L\text{pr}_1^*K|_W)|_{U \times_S U}
+\xrightarrow{\rho_U}
+R\SheafHom_{\mathcal{O}_{U \times_S U}}(
+\Delta_*\mathcal{O}_U, L\text{pr}_1^*K_U)
+$$
+As $W$ is covered by the opens $U \times_S U$
+we conclude that the cohomology sheaves of
+$R\SheafHom_{\mathcal{O}_W}(\Delta_*\mathcal{O}_X, L\text{pr}_1^*K|_W)$
+are zero except in degree $0$. Moreover, we obtain isomorphisms
+$$
+H^0\left(U \times_S U, R\SheafHom_{\mathcal{O}_W}(\Delta_*\mathcal{O}_X,
+L\text{pr}_1^*K|_W)\right)
+\xrightarrow{\rho_U}
+H^0\left(R\SheafHom_{\mathcal{O}_{U \times_S U}}(
+\Delta_*\mathcal{O}_U, L\text{pr}_1^*K_U)\right)
+$$
+Let $\tau_U$ in the LHS be an element mapping to $\xi_U$ under this map.
+The compatibilities between
+$\rho^U_{U'}$, $\xi_U$, $\xi_{U'}$, $\rho_U$, and $\rho_{U'}$
+for $U' \subset U \subset X$ open $U', U \in \mathcal{B}$
+imply that $\tau_U|_{U' \times_S U'} = \tau_{U'}$.
+Thus we get a global section $\tau$ of the $0$th cohomology sheaf
+$H^0(R\SheafHom_{\mathcal{O}_W}(\Delta_*\mathcal{O}_X, L\text{pr}_1^*K|_W))$.
+Since the other cohomology sheaves of
+$R\SheafHom_{\mathcal{O}_W}(\Delta_*\mathcal{O}_X, L\text{pr}_1^*K|_W)$
+are zero, this global section $\tau$
+determines a morphism $\xi$ as desired. Since the restriction
+of $\xi$ to $U \times_S U$ gives $\xi_U$, we see that it
+satisfies the final condition of
+Definition \ref{definition-relative-dualizing-complex}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-relative-dualizing}
+Consider a cartesian square
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_{g'} & X \ar[d]^f \\
+S' \ar[r]^g & S
+}
+$$
+of schemes. Assume $X \to S$ is flat and locally of finite presentation.
+Let $(K, \xi)$ be a relative dualizing complex for $f$.
+Set $K' = L(g')^*K$. Let $\xi'$ be the derived base change of $\xi$
+(see proof). Then $(K', \xi')$ is a relative dualizing complex for $f'$.
+\end{lemma}
+
+\begin{proof}
+Consider the cartesian square
+$$
+\xymatrix{
+X' \ar[d]_{\Delta_{X'/S'}} \ar[r] & X \ar[d]^{\Delta_{X/S}} \\
+X' \times_{S'} X' \ar[r]^{g' \times g'} & X \times_S X
+}
+$$
+Choose $W \subset X \times_S X$ open such that $\Delta_{X/S}$
+factors through a closed immersion $\Delta : X \to W$.
+Choose $W' \subset X' \times_{S'} X'$ open such that $\Delta_{X'/S'}$
+factors through a closed immersion $\Delta' : X' \to W'$
+and such that $(g' \times g')(W') \subset W$. Let us still denote
+$g' \times g' : W' \to W$ the induced morphism. We have
+$$
+L(g' \times g')^*\Delta_*\mathcal{O}_X =
+\Delta'_*\mathcal{O}_{X'}
+\quad\text{and}\quad
+L(g' \times g')^*L\text{pr}_1^*K|_W =
+L\text{pr}_1^*K'|_{W'}
+$$
+The first equality holds because $X$ and $X' \times_{S'} X'$
+are tor independent over $X \times_S X$ (see for example
+More on Morphisms, Lemma \ref{more-morphisms-lemma-case-of-tor-independence}).
+The second holds by transitivity of derived pullback
+(Cohomology, Lemma \ref{cohomology-lemma-derived-pullback-composition}).
+Thus $\xi' = L(g' \times g')^*\xi$ can be viewed as a map
+$$
+\xi' : \Delta'_*\mathcal{O}_{X'} \longrightarrow L\text{pr}_1^*K'|_{W'}
+$$
+Having said this the proof of the lemma is straightforward.
+First, $K'$ is $S'$-perfect by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-base-change-relatively-perfect}.
+To check that $\xi'$ induces an isomorphism
+of $\Delta'_*\mathcal{O}_{X'}$ to
+$R\SheafHom_{\mathcal{O}_{W'}}(
+\Delta'_*\mathcal{O}_{X'}, L\text{pr}_1^*K'|_{W'})$
+we may work affine locally. By
+Lemma \ref{lemma-relative-dualizing-complex-algebra}
+we reduce to the corresponding statement in algebra
+which is proven in Dualizing Complexes, Lemma
+\ref{dualizing-lemma-base-change-relative-dualizing}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-proper-relative-dualizing}
+Let $S$ be a quasi-compact and quasi-separated scheme.
+Let $f : X \to S$ be a proper, flat morphism of finite presentation.
+The relative dualizing complex $\omega_{X/S}^\bullet$ of
+Remark \ref{remark-relative-dualizing-complex}
+together with (\ref{equation-pre-rigid}) is a relative
+dualizing complex in the sense of
+Definition \ref{definition-relative-dualizing-complex}.
+\end{lemma}
+
+\begin{proof}
+In Lemma \ref{lemma-properties-relative-dualizing} we proved that
+$\omega_{X/S}^\bullet$ is $S$-perfect.
+Let $c$ be the right adjoint of
+Lemma \ref{lemma-twisted-inverse-image}
+for the diagonal $\Delta : X \to X \times_S X$.
+Then we can apply $\Delta_*$ to (\ref{equation-pre-rigid})
+to get an isomorphism
+$$
+\Delta_*\mathcal{O}_X \to
+\Delta_*(c(L\text{pr}_1^*\omega_{X/S}^\bullet)) =
+R\SheafHom_{\mathcal{O}_{X \times_S X}}(
+\Delta_*\mathcal{O}_X, L\text{pr}_1^*\omega_{X/S}^\bullet)
+$$
+The equality holds by
+Lemmas \ref{lemma-twisted-inverse-image-closed} and
+\ref{lemma-sheaf-with-exact-support-ext}.
+This finishes the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-relative-dualizing-complex-bis}
+Let $X \to S$ be a morphism of schemes which is flat, proper, and
+of finite presentation. By Lemma \ref{lemma-existence-relative-dualizing}
+there exists a relative dualizing complex $(\omega_{X/S}^\bullet, \xi)$
+in the sense of Definition \ref{definition-relative-dualizing-complex}.
+Consider any morphism $g : S' \to S$ where $S'$ is quasi-compact and
+quasi-separated (for example an affine open of $S$).
+By Lemma \ref{lemma-base-change-relative-dualizing}
+we see that $(L(g')^*\omega_{X/S}^\bullet, L(g')^*\xi)$ is
+a relative dualizing complex for the base change $f' : X' \to S'$
+in the sense of Definition \ref{definition-relative-dualizing-complex}.
+Let $\omega_{X'/S'}^\bullet$ be the relative dualizing complex
+for $X' \to S'$ in the sense of Remark \ref{remark-relative-dualizing-complex}.
+Combining Lemmas \ref{lemma-flat-proper-relative-dualizing} and
+\ref{lemma-uniqueness-relative-dualizing}
+we see that there is a unique isomorphism
+$$
+\omega_{X'/S'}^\bullet \longrightarrow L(g')^*\omega_{X/S}^\bullet
+$$
+compatible with (\ref{equation-pre-rigid}) and $L(g')^*\xi$.
+These isomorphisms are compatible with morphisms between
+quasi-compact and quasi-separated schemes over $S$
+and the base change isomorphisms of Lemma \ref{lemma-proper-flat-base-change}
+(if we ever need this compatibility we will carefully state and prove
+it here).
+\end{remark}
+
+\begin{lemma}
+\label{lemma-compactifyable-relative-dualizing}
+In Situation \ref{situation-shriek} let $f : X \to Y$
+be a morphism of $\textit{FTS}_S$. If $f$ is flat, then
+$f^!\mathcal{O}_Y$ is (the first component of)
+a relative dualizing complex for $X$ over $Y$ in the sense of
+Definition \ref{definition-relative-dualizing-complex}.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-flat-shriek-relatively-perfect}
+we have that $f^!\mathcal{O}_Y$
+is $Y$-perfect. As $f$ is separated the diagonal
+$\Delta : X \to X \times_Y X$ is a closed immersion and
+$\Delta_*\Delta^!(-) =
+R\SheafHom_{\mathcal{O}_{X \times_Y X}}(\mathcal{O}_X, -)$, see
+Lemmas
+\ref{lemma-twisted-inverse-image-closed} and
+\ref{lemma-sheaf-with-exact-support-ext}.
+Hence to finish the proof it suffices to show
+$\Delta^!(L\text{pr}_1^*f^!(\mathcal{O}_Y)) \cong \mathcal{O}_X$
+where $\text{pr}_1 : X \times_Y X \to X$ is the first projection.
+We have
+$$
+\mathcal{O}_X = \Delta^! \text{pr}_1^!\mathcal{O}_X =
+\Delta^! \text{pr}_1^! L\text{pr}_2^*\mathcal{O}_Y =
+\Delta^!(L\text{pr}_1^* f^!\mathcal{O}_Y)
+$$
+where $\text{pr}_2 : X \times_Y X \to X$ is the second projection
+and where we have used the base change isomorphism
+$\text{pr}_1^! \circ L\text{pr}_2^* = L\text{pr}_1^* \circ f^!$ of
+Lemma \ref{lemma-base-change-shriek-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-composition}
+Let $f : Y \to X$ and $X \to S$ be morphisms of schemes
+which are flat and of finite presentation.
+Let $(K, \xi)$ and $(M, \eta)$
+be relative dualizing complexes for $X \to S$ and $Y \to X$, respectively.
+Set $E = M \otimes_{\mathcal{O}_Y}^\mathbf{L} Lf^*K$.
+Then $(E, \zeta)$ is a relative dualizing complex for $Y \to S$ for
+a suitable $\zeta$.
+\end{lemma}
+
+\begin{proof}
+Using Lemma \ref{lemma-relative-dualizing-complex-algebra}
+and the algebraic version of this lemma (Dualizing Complexes, Lemma
+\ref{dualizing-lemma-relative-dualizing-composition})
+we see that $E$
+is affine locally the first component of a relative dualizing complex.
+In particular we see that $E$
+is $S$-perfect since this may be checked affine locally, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-locally-rel-perfect}.
+
+\medskip\noindent
+Let us first prove the existence of $\zeta$ in case the
+morphisms $X \to S$ and $Y \to X$ are separated so that
+$\Delta_{X/S}$, $\Delta_{Y/X}$, and $\Delta_{Y/S}$
+are closed immersions. Consider the following diagram
+$$
+\xymatrix{
+& & Y \ar@{=}[r] & Y \ar[d]^f \\
+Y \ar[r]_{\Delta_{Y/X}} &
+Y \times_X Y \ar[d]_m \ar[r]_\delta \ar[ru]_q &
+Y \times_S Y \ar[d]^{f \times f} \ar[ru]_p & X\\
+& X \ar[r]^{\Delta_{X/S}} & X \times_S X \ar[ru]_r
+}
+$$
+where $p$, $q$, $r$ are the first projections.
+By Lemma \ref{lemma-sheaf-with-exact-support-internal-home}
+we have
+$$
+R\SheafHom_{\mathcal{O}_{Y \times_S Y}}(
+\Delta_{Y/S, *}\mathcal{O}_Y, Lp^*E) =
+R\delta_*\left(R\SheafHom_{\mathcal{O}_{Y \times_X Y}}(
+\Delta_{Y/X, *}\mathcal{O}_Y,
+R\SheafHom(\mathcal{O}_{Y \times_X Y}, Lp^*E))\right)
+$$
+By Lemma \ref{lemma-sheaf-with-exact-support-tensor} we have
+$$
+R\SheafHom(\mathcal{O}_{Y \times_X Y}, Lp^*E) =
+R\SheafHom(\mathcal{O}_{Y \times_X Y}, L(f \times f)^*Lr^*K)
+\otimes_{\mathcal{O}_{Y \times_S Y}}^\mathbf{L} Lq^*M
+$$
+By Lemma \ref{lemma-flat-bc-sheaf-with-exact-support} we have
+$$
+R\SheafHom(\mathcal{O}_{Y \times_X Y}, L(f \times f)^*Lr^*K) =
+Lm^*R\SheafHom(\mathcal{O}_X, Lr^*K)
+$$
+The last expression is isomorphic (via $\xi$) to
+$Lm^*\mathcal{O}_X = \mathcal{O}_{Y \times_X Y}$.
+Hence the expression preceding is isomorphic to
+$Lq^*M$. Hence
+$$
+R\SheafHom_{\mathcal{O}_{Y \times_S Y}}(
+\Delta_{Y/S, *}\mathcal{O}_Y, Lp^*E) =
+R\delta_*\left(R\SheafHom_{\mathcal{O}_{Y \times_X Y}}(
+\Delta_{Y/X, *}\mathcal{O}_Y, Lq^*M)\right)
+$$
+The material inside the parentheses is isomorphic to
+$\Delta_{Y/X, *}\mathcal{O}_Y$ via $\eta$.
+This finishes the proof in the separated case.
+
+\medskip\noindent
+In the general case we choose an open $W \subset X \times_S X$
+such that $\Delta_{X/S}$ factors through a closed immersion
+$\Delta : X \to W$ and we choose an open $V \subset Y \times_X Y$
+such that $\Delta_{Y/X}$ factors through a closed immersion
+$\Delta' : Y \to V$. Finally, choose an open
+$W' \subset Y \times_S Y$ whose intersection with $Y \times_X Y$
+gives $V$ and which maps into $W$. Then we consider the diagram
+$$
+\xymatrix{
+& & Y \ar@{=}[r] & Y \ar[d]^f \\
+Y \ar[r]_{\Delta'} &
+V \ar[d]_m \ar[r]_\delta \ar[ru]_q &
+W' \ar[d]^{f \times f} \ar[ru]_p & X\\
+& X \ar[r]^\Delta & W \ar[ru]_r
+}
+$$
+and we use exactly the same argument as before.
+\end{proof}
+
+
+
+
+
+
+\section{The fundamental class of an lci morphism}
+\label{section-fundamental-class}
+
+\noindent
+In this section we will use the computations made in
+Section \ref{section-examples}. Thus our result will suffer
+from the same kind of non-uniqueness as we have in that section.
+
+\begin{lemma}
+\label{lemma-determinant}
+Let $X$ be a locally ringed space. Let
+$$
+\mathcal{E}_1 \xrightarrow{\alpha} \mathcal{E}_0 \to \mathcal{F} \to 0
+$$
+be an exact sequence of $\mathcal{O}_X$-modules.
+Assume $\mathcal{E}_1$ and $\mathcal{E}_0$ are locally
+free of ranks $r_1, r_0$. Then there is a canonical map
+$$
+\wedge^{r_0 - r_1}\mathcal{F}
+\longrightarrow
+\wedge^{r_1}(\mathcal{E}_1^\vee) \otimes \wedge^{r_0}\mathcal{E}_0
+$$
+which is an isomorphism on the stalk at $x \in X$
+if and only if $\mathcal{F}$ is locally free of rank $r_0 - r_1$
+in an open neighbourhood of $x$.
+\end{lemma}
+
+\begin{proof}
+If $r_1 > r_0$ then $\wedge^{r_0 - r_1}\mathcal{F} = 0$ by convention
+and the unique map cannot be an isomorphism. Thus we may assume
+$r = r_0 - r_1 \geq 0$. Define the map by the formula
+$$
+s_1 \wedge \ldots \wedge s_r \mapsto
+t_1^\vee \wedge \ldots \wedge t_{r_1}^\vee \otimes
+\alpha(t_1) \wedge \ldots \wedge \alpha(t_{r_1}) \wedge
+\tilde s_1 \wedge \ldots \wedge \tilde s_r
+$$
+where $t_1, \ldots, t_{r_1}$ is a local basis for $\mathcal{E}_1$,
+correspondingly
+$t_1^\vee, \ldots, t_{r_1}^\vee$ is the dual basis for $\mathcal{E}_1^\vee$,
+and $\tilde s_i$ is a local lift of $s_i$ to a section of $\mathcal{E}_0$.
+We omit the proof that this is well defined.
+
+\medskip\noindent
+If $\mathcal{F}$ is locally free of rank $r$, then it is straightforward
+to verify that the map is an isomorphism. Conversely, assume the map
+is an isomorphism on stalks at $x$. Then $\wedge^r\mathcal{F}_x$
+is invertible. This implies that $\mathcal{F}_x$ is generated by
+at most $r$ elements. This can only happen if $\alpha$ has rank
+$r_1$ modulo $\mathfrak m_x$, i.e., $\alpha$ has maximal rank modulo
+$\mathfrak m_x$. This implies that $\alpha$ has maximal rank
+in a neighbourhood of $x$ and hence $\mathcal{F}$ is locally free
+of rank $r$ in a neighbourhood as desired.
+\end{proof}
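+
+\noindent
+A simple example may clarify the lemma (this is only an illustration).
+Take $X = \Spec(k[x])$, let $\mathcal{E}_1 = \mathcal{E}_0 = \mathcal{O}_X$,
+and let $\alpha$ be multiplication by $x$, so that
+$\mathcal{F} = \mathcal{O}_X/(x)$ is the skyscraper at the origin.
+Then $r = 0$ and, after identifying
+$\mathcal{E}_1^\vee \otimes \mathcal{E}_0$ with $\mathcal{O}_X$,
+the map of Lemma \ref{lemma-determinant} becomes
+$$
+\wedge^0\mathcal{F} = \mathcal{O}_X \longrightarrow \mathcal{O}_X,\quad
+1 \longmapsto x
+$$
+This is an isomorphism on the stalk at a point if and only if the point
+lies away from the origin, which is exactly the locus where
+$\mathcal{F}$ is locally free (of rank $0$).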
+
+\begin{lemma}
+\label{lemma-fundamental-class-lci}
+Let $Y$ be a Noetherian scheme. Let $f : X \to Y$ be a
+local complete intersection morphism which factors
+as an immersion $X \to P$ followed by a proper smooth morphism $P \to Y$.
+Let $r$ be the locally constant function on
+$X$ such that $\omega_{X/Y} = H^{-r}(f^!\mathcal{O}_Y)$
+is the unique nonzero cohomology sheaf of $f^!\mathcal{O}_Y$, see
+Lemma \ref{lemma-lci-shriek}.
+Then there is a map
+$$
+\wedge^r\Omega_{X/Y} \longrightarrow \omega_{X/Y}
+$$
+which is an isomorphism on the stalk at a point $x$ if and only
+if $f$ is smooth at $x$.
+\end{lemma}
+
+\begin{proof}
+The assumption implies that $X$ is compactifiable over $Y$, hence $f^!$
+is defined, see Section \ref{section-upper-shriek}.
+Let $j : W \to P$ be an open subscheme such that
+$X \to P$ factors through a closed immersion $i : X \to W$.
+Moreover, we have $f^! = i^! \circ j^! \circ g^!$ where
+$g : P \to Y$ is the given morphism.
+We have $g^!\mathcal{O}_Y = \wedge^d\Omega_{P/Y}[d]$ by
+Lemma \ref{lemma-smooth-proper} where $d$ is the locally
+constant function giving the relative dimension of $P$ over $Y$.
+We have $j^! = j^*$. We have $i^!\mathcal{O}_W = \wedge^c\mathcal{N}[-c]$
+where $c$ is the codimension of $X$ in $W$ (a locally constant
+function on $X$) and where $\mathcal{N}$ is the normal sheaf of
+the Koszul-regular immersion $i$, see Lemma \ref{lemma-regular-immersion}.
+Combining the above we find
+$$
+f^!\mathcal{O}_Y =
+\left(\wedge^c\mathcal{N} \otimes_{\mathcal{O}_X}
+\wedge^d\Omega_{P/Y}|_X\right)[d - c]
+$$
+where we have also used Lemma \ref{lemma-perfect-comparison-shriek}.
+Thus $r = d|_X - c$ as locally constant functions on $X$.
+The conormal sheaf of $X \to P$ is the module
+$\mathcal{I}/\mathcal{I}^2$ where $\mathcal{I} \subset \mathcal{O}_W$
+is the ideal sheaf of $i$, see
+Morphisms, Section \ref{morphisms-section-conormal-sheaf}.
+Consider the canonical exact sequence
+$$
+\mathcal{I}/\mathcal{I}^2 \to
+\Omega_{P/Y}|_X \to \Omega_{X/Y} \to 0
+$$
+of Morphisms, Lemma \ref{morphisms-lemma-differentials-relative-immersion}.
+We obtain our map by an application of Lemma \ref{lemma-determinant}.
+
+\medskip\noindent
+If $f$ is smooth at $x$, then the map is an isomorphism by an application of
+Lemma \ref{lemma-determinant}
+and the fact that $\Omega_{X/Y}$ is locally free at $x$
+of rank $r$. Conversely, assume that our map is an isomorphism on stalks
+at $x$. Then Lemma \ref{lemma-determinant} shows that $\Omega_{X/Y}$
+is free of rank $r$ after replacing $X$ by an open neighbourhood of $x$.
+On the other hand, we may also assume that $X = \Spec(A)$ and
+$Y = \Spec(R)$ where $A = R[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$
+and where $f_1, \ldots, f_m$ is a Koszul regular sequence
+(this follows from the definition of local complete intersection morphisms).
+Clearly this implies $r = n - m$. We conclude that the rank of the matrix
+of partials $\partial f_j/\partial x_i$ in the residue field at $x$ is $m$.
+Thus after reordering the variables we may assume
+the determinant of $(\partial f_j/\partial x_i)_{1 \leq i, j \leq m}$
+is invertible in an open neighbourhood of $x$. It follows
+that $R \to A$ is smooth at this point, see for example
+Algebra, Example \ref{algebra-example-make-standard-smooth}.
+\end{proof}
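+
+\medskip\noindent
+As an illustration (not used in what follows), consider the nodal
+curve $X = \Spec(k[x, y]/(xy))$ over $Y = \Spec(k)$. This is a local
+complete intersection over $k$ (a hypersurface in $\mathbf{A}^2_k$)
+admitting an immersion into the smooth proper scheme $\mathbf{P}^2_k$,
+and we have $r = 1$. Since $X$ is Gorenstein the module
+$\omega_{X/Y}$ is invertible. The map
+$\Omega_{X/Y} \to \omega_{X/Y}$ of the lemma is an isomorphism on
+stalks exactly away from the origin: at the origin the fibre of
+$\Omega_{X/Y}$ is $2$-dimensional, spanned by the classes of
+$\text{d}x$ and $\text{d}y$, whereas the fibre of $\omega_{X/Y}$
+is $1$-dimensional.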
+
+\begin{lemma}
+\label{lemma-fundamental-class-almost-lci}
+Let $f : X \to Y$ be a morphism of schemes. Let $r \geq 0$. Assume
+\begin{enumerate}
+\item $Y$ is Cohen-Macaulay (Properties, Definition
+\ref{properties-definition-Cohen-Macaulay}),
+\item $f$ factors as $X \to P \to Y$ where the first morphism is
+an immersion and the second is smooth and proper,
+\item if $x \in X$ and $\dim(\mathcal{O}_{X, x}) \leq 1$,
+then $f$ is Koszul at $x$ (More on Morphisms, Definition
+\ref{more-morphisms-definition-lci}), and
+\item if $\xi$ is a generic point of an irreducible component of $X$, then
+we have
+$\text{trdeg}_{\kappa(f(\xi))} \kappa(\xi) = r$.
+\end{enumerate}
+Then with $\omega_{X/Y} = H^{-r}(f^!\mathcal{O}_Y)$ there is a map
+$$
+\wedge^r\Omega_{X/Y} \longrightarrow \omega_{X/Y}
+$$
+which is an isomorphism on the locus where $f$ is smooth.
+\end{lemma}
+
+\begin{proof}
+Let $U \subset X$ be the open subscheme over which $f$ is a
+local complete intersection morphism. Since $f$ has relative
+dimension $r$ at all generic points by assumption (4) we
+see that the locally constant function of
+Lemma \ref{lemma-fundamental-class-lci}
+is constant with value $r$ and we obtain a map
+$$
+\wedge^r\Omega_{X/Y}|_U = \wedge^r \Omega_{U/Y}
+\longrightarrow
+\omega_{U/Y} = \omega_{X/Y}|_U
+$$
+which is an isomorphism at the points where $f$ is smooth (this locus
+is contained in $U$ because a smooth morphism is a local complete
+intersection morphism). By Lemma \ref{lemma-shriek-over-CM}
+and the assumption that $Y$ is Cohen-Macaulay
+the module $\omega_{X/Y}$ is $(S_2)$.
+Since $U$ contains all the points of codimension $\leq 1$ by condition (3)
+and using Divisors, Lemma \ref{divisors-lemma-depth-2-hartog}
+we see that $j_*\omega_{U/Y} = \omega_{X/Y}$ where $j : U \to X$
+is the inclusion morphism.
+Hence the map over $U$ extends to $X$ and the proof
+is complete.
+\end{proof}
+
+
+
+
+
+
+\section{Extension by zero for coherent modules}
+\label{section-extension-by-zero}
+
+\noindent
+The material in this section and the next few can be found
+in the appendix by Deligne of \cite{RD}.
+
+\medskip\noindent
+In this section $j : U \to X$ will be an open immersion of Noetherian schemes.
+We are going to consider inverse systems $(K_n)$ in
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$ constructed as follows.
+Let $\mathcal{F}^\bullet$ be a bounded complex of coherent
+$\mathcal{O}_X$-modules. Let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals with $V(\mathcal{I}) = X \setminus U$.
+Then we can set
+$$
+K_n = \mathcal{I}^n\mathcal{F}^\bullet
+$$
+More precisely, $K_n$ is the object of $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+represented by the complex whose term in degree $q$
+is the coherent submodule $\mathcal{I}^n\mathcal{F}^q$ of $\mathcal{F}^q$.
+Observe that the maps $\ldots \to K_3 \to K_2 \to K_1$ induce isomorphisms
+on restriction to $U$. Let us call such a system a {\it Deligne system}.
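+
+\medskip\noindent
+For example, if $X = \Spec(A)$ is the spectrum of a Noetherian ring,
+$U = D(f)$ is a principal open, $\mathcal{I}$ is the ideal sheaf
+corresponding to $(f) \subset A$, and $\mathcal{F}^\bullet$ consists
+of the single coherent module $\widetilde{M}$ placed in degree $0$
+for a finite $A$-module $M$, then the Deligne system is given by the
+chain of submodules
+$$
+\ldots \subset f^3M \subset f^2M \subset fM \subset M
+$$
+each placed in degree $0$, and each restriction
+$(\mathcal{I}^n\widetilde{M})|_U$ is the module associated to
+$M_f$ because $f$ is invertible on $U$.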
+
+\begin{lemma}
+\label{lemma-lift-map}
+Let $j : U \to X$ be an open immersion of Noetherian schemes.
+Let $(K_n)$ be a Deligne system and denote
+$K \in D^b_{\textit{Coh}}(\mathcal{O}_U)$ the value
+of the constant system $(K_n|_U)$. Let $L$ be an object of
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+Then $\colim \Hom_X(K_n, L) = \Hom_U(K, L|_U)$.
+\end{lemma}
+
+\begin{proof}
+Let $L \to M \to N \to L[1]$ be a distinguished triangle in
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$. Then we obtain
+a commutative diagram
+$$
+\xymatrix{
+\ldots \ar[r] &
+\colim \Hom_X(K_n, L) \ar[r] \ar[d] &
+\colim \Hom_X(K_n, M) \ar[r] \ar[d] &
+\colim \Hom_X(K_n, N) \ar[r] \ar[d] &
+\ldots \\
+\ldots \ar[r] &
+\Hom_U(K, L|_U) \ar[r] &
+\Hom_U(K, M|_U) \ar[r] &
+\Hom_U(K, N|_U) \ar[r] &
+\ldots
+}
+$$
+whose rows are exact by Derived Categories, Lemma
+\ref{derived-lemma-representable-homological} and
+Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}.
+Hence if the statement of the lemma holds for
+$N[-1]$, $L$, $N$, and $L[1]$ then it holds for $M$ by the 5-lemma.
+Thus, using the distinguished triangles
+for the canonical truncations of $L$ (see Derived Categories, Remark
+\ref{derived-remark-truncation-distinguished-triangle})
+we reduce to the case that $L$ has only one nonzero cohomology sheaf.
+
+\medskip\noindent
+Choose a bounded complex $\mathcal{F}^\bullet$ of coherent
+$\mathcal{O}_X$-modules and a quasi-coherent ideal
+$\mathcal{I} \subset \mathcal{O}_X$ cutting out $X \setminus U$
+such that $K_n$ is represented by $\mathcal{I}^n\mathcal{F}^\bullet$.
+Using ``stupid'' truncations we obtain compatible termwise split short
+exact sequences of complexes
+$$
+0 \to \sigma_{\geq a + 1} \mathcal{I}^n\mathcal{F}^\bullet \to
+\mathcal{I}^n\mathcal{F}^\bullet \to
+\sigma_{\leq a} \mathcal{I}^n\mathcal{F}^\bullet \to 0
+$$
+which in turn correspond to compatible systems of distinguished
+triangles in $D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+Arguing as above we reduce to the case where $\mathcal{F}^\bullet$
+has only one nonzero term. This reduces us to the case discussed in
+the next paragraph.
+
+\medskip\noindent
+Given a coherent $\mathcal{O}_X$-module $\mathcal{F}$ and a coherent
+$\mathcal{O}_X$-module $\mathcal{G}$ we
+have to show that the canonical map
+$$
+\colim \Ext^i_X(\mathcal{I}^n\mathcal{F}, \mathcal{G})
+\longrightarrow
+\Ext^i_U(\mathcal{F}|_U, \mathcal{G}|_U)
+$$
+is an isomorphism for all $i \geq 0$. For $i = 0$ this is
+Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}.
+Assume $i > 0$.
+
+\medskip\noindent
+Injectivity. Let $\xi \in \Ext^i_X(\mathcal{I}^n\mathcal{F}, \mathcal{G})$
+be an element whose restriction to $U$ is zero. We have to show there exists
+an $m \geq n$ such that the restriction of $\xi$ to
+$\mathcal{I}^m\mathcal{F} = \mathcal{I}^{m - n}\mathcal{I}^n\mathcal{F}$
+is zero. After replacing $\mathcal{F}$ by $\mathcal{I}^n\mathcal{F}$
+we may assume $n = 0$, i.e., we have
+$\xi \in \Ext^i_X(\mathcal{F}, \mathcal{G})$
+whose restriction to $U$ is zero. By
+Derived Categories of Schemes, Proposition \ref{perfect-proposition-DCoh}
+we have
+$D^b_{\textit{Coh}}(\mathcal{O}_X) = D^b(\textit{Coh}(\mathcal{O}_X))$.
+Hence we can compute the $\Ext$ group in the abelian category of
+coherent $\mathcal{O}_X$-modules. This implies there exists a
+surjection $\alpha : \mathcal{F}'' \to \mathcal{F}$ such that
+$\xi \circ \alpha = 0$ (this is where we use that $i > 0$).
+Set $\mathcal{F}' = \Ker(\alpha)$ so that we have a short exact
+sequence
+$$
+0 \to \mathcal{F}' \to \mathcal{F}'' \to \mathcal{F} \to 0
+$$
+It follows that $\xi$ is the image of an element
+$\xi' \in \Ext^{i - 1}_X(\mathcal{F}', \mathcal{G})$
+whose restriction to $U$ is in the image of
+$\Ext^{i - 1}_U(\mathcal{F}''|_U, \mathcal{G}|_U) \to
+\Ext^{i - 1}_U(\mathcal{F}'|_U, \mathcal{G}|_U)$.
+By Artin-Rees the inverse systems $(\mathcal{I}^n\mathcal{F}')$ and
+$(\mathcal{I}^n \mathcal{F}'' \cap \mathcal{F}')$ are pro-isomorphic, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-Artin-Rees}.
+Since we have the compatible system of short exact sequences
+$$
+0 \to
+\mathcal{F}' \cap \mathcal{I}^n\mathcal{F}'' \to
+\mathcal{I}^n\mathcal{F}'' \to
+\mathcal{I}^n\mathcal{F} \to 0
+$$
+we obtain a commutative diagram
+$$
+\xymatrix{
+\colim \Ext^{i - 1}_X(\mathcal{I}^n\mathcal{F}'', \mathcal{G})
+\ar[r] \ar[d] &
+\colim \Ext^{i - 1}_X(\mathcal{F}' \cap \mathcal{I}^n\mathcal{F}'', \mathcal{G})
+\ar[r] \ar[d] &
+\colim \Ext^i_X(\mathcal{I}^n\mathcal{F}, \mathcal{G})
+\ar[d] \\
+\Ext^{i - 1}_U(\mathcal{F}''|_U, \mathcal{G}|_U) \ar[r] &
+\Ext^{i - 1}_U(\mathcal{F}'|_U, \mathcal{G}|_U) \ar[r] &
+\Ext^i_U(\mathcal{F}|_U, \mathcal{G}|_U)
+}
+$$
+with exact rows. By induction on $i$ and the comment on inverse systems above
+we find that the left two vertical arrows are isomorphisms.
+Now $\xi$ gives an element in the top right group which is the image
+of $\xi'$ in the middle top group, which in turn maps to an element of
+the bottom middle group coming from some element in the left bottom group.
+We conclude that $\xi$ maps to zero in
+$\Ext^i_X(\mathcal{I}^n\mathcal{F}, \mathcal{G})$
+for some $n$ as desired.
+
+\medskip\noindent
+Surjectivity. Let $\xi \in \Ext^i_U(\mathcal{F}|_U, \mathcal{G}|_U)$.
+Arguing as above using that $i > 0$ we can find a surjection
+$\mathcal{H} \to \mathcal{F}|_U$ of coherent $\mathcal{O}_U$-modules
+such that $\xi$ maps to zero in $\Ext^i_U(\mathcal{H}, \mathcal{G}|_U)$.
+Then we can find a map $\varphi : \mathcal{F}'' \to \mathcal{F}$
+of coherent $\mathcal{O}_X$-modules whose restriction to $U$ is
+$\mathcal{H} \to \mathcal{F}|_U$, see
+Properties, Lemma \ref{properties-lemma-extend-finite-presentation}.
+Observe that the lemma doesn't guarantee $\varphi$ is surjective
+but this won't matter (it is possible to pick a surjective $\varphi$
+with a little bit of additional work).
+Denote $\mathcal{F}' = \Ker(\varphi)$. The short exact sequence
+$$
+0 \to \mathcal{F}'|_U \to \mathcal{F}''|_U \to \mathcal{F}|_U \to 0
+$$
+shows that $\xi$ is the image of an element
+$\xi' \in \Ext^{i - 1}_U(\mathcal{F}'|_U, \mathcal{G}|_U)$.
+By induction on $i$ we can find an $n$ such that
+$\xi'$ is the image of some $\xi'_n$ in
+$\Ext^{i - 1}_X(\mathcal{I}^n\mathcal{F}', \mathcal{G})$.
+By Artin-Rees we can find an $m \geq n$ such that
+$\mathcal{F}' \cap \mathcal{I}^m\mathcal{F}'' \subset
+\mathcal{I}^n\mathcal{F}'$. Using the short exact sequence
+$$
+0 \to \mathcal{F}' \cap \mathcal{I}^m\mathcal{F}'' \to
+\mathcal{I}^m\mathcal{F}'' \to \mathcal{I}^m\Im(\varphi) \to 0
+$$
+the image of $\xi'_n$ in
+$\Ext^{i - 1}_X(\mathcal{F}' \cap \mathcal{I}^m\mathcal{F}'', \mathcal{G})$
+maps by the boundary map to an element $\xi_m$ of
+$\Ext^i_X(\mathcal{I}^m\Im(\varphi), \mathcal{G})$ whose restriction
+to $U$ is $\xi$. Since $\Im(\varphi)$ and $\mathcal{F}$ agree over $U$
+we see that $\mathcal{F}/\mathcal{I}^m\Im(\varphi)$ is supported
+on $X \setminus U$. Hence there exists an $l \geq m$
+such that $\mathcal{I}^l\mathcal{F} \subset \mathcal{I}^m\Im(\varphi)$, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-power-ideal-kills-sheaf}.
+Taking the image of $\xi_m$ in
+$\Ext^i_X(\mathcal{I}^l\mathcal{F}, \mathcal{G})$ we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lift-map-plus}
+The result of Lemma \ref{lemma-lift-map} holds even for
+$L \in D^+_{\textit{Coh}}(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Namely, if $(K_n)$ is a Deligne system then there exists a
+$b \in \mathbf{Z}$ such that $H^i(K_n) = 0$ for $i > b$.
+Then $\Hom(K_n, L) = \Hom(K_n, \tau_{\leq b}L)$ and
+$\Hom(K, L) = \Hom(K, \tau_{\leq b}L)$. Hence using
+the result of the lemma for $\tau_{\leq b}L$ we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extension-by-zero}
+Let $j : U \to X$ be an open immersion of Noetherian schemes.
+\begin{enumerate}
+\item Let $(K_n)$ and $(L_n)$ be Deligne systems.
+Let $K$ and $L$ be the values of the constant systems
+$(K_n|_U)$ and $(L_n|_U)$. Given a morphism $\alpha : K \to L$
+of $D(\mathcal{O}_U)$
+there is a unique morphism of pro-systems $(K_n) \to (L_n)$
+of $D^b_{\textit{Coh}}(\mathcal{O}_X)$ whose restriction to $U$ is $\alpha$.
+\item Given $K \in D^b_{\textit{Coh}}(\mathcal{O}_U)$ there exists a
+Deligne system $(K_n)$ such that $(K_n|_U)$ is constant
+with value $K$.
+\item The pro-object $(K_n)$ of $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+of (2) is unique up to unique isomorphism (as a pro-object).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is an immediate consequence of Lemma \ref{lemma-lift-map}
+and the fact that morphisms between pro-systems are
+the same as morphisms between the functors they corepresent, see
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}.
+
+\medskip\noindent
+Let $K$ be as in (2). We can choose $K' \in D^b_{\textit{Coh}}(\mathcal{O}_X)$
+whose restriction to $U$ is isomorphic to $K$, see
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-lift-coherent}.
+By Derived Categories of Schemes, Proposition \ref{perfect-proposition-DCoh}
+we can represent $K'$ by a bounded complex $\mathcal{F}^\bullet$
+of coherent $\mathcal{O}_X$-modules. Choose a quasi-coherent sheaf
+of ideals $\mathcal{I} \subset \mathcal{O}_X$ whose vanishing
+locus is $X \setminus U$ (for example choose $\mathcal{I}$ to correspond
+to the reduced induced subscheme structure on $X \setminus U$).
+Then we can set $K_n$ equal to the object represented by the complex
+$\mathcal{I}^n\mathcal{F}^\bullet$ as in the introduction
+to this section.
+
+\medskip\noindent
+Part (3) is immediate from parts (1) and (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-extension-by-zero-triangle}
+Let $j : U \to X$ be an open immersion of Noetherian schemes. Let
+$$
+K \to L \to M \to K[1]
+$$
+be a distinguished triangle of $D^b_{\textit{Coh}}(\mathcal{O}_U)$.
+Then there exists an inverse system of distinguished triangles
+$$
+K_n \to L_n \to M_n \to K_n[1]
+$$
+in $D^b_{\textit{Coh}}(\mathcal{O}_X)$ such that $(K_n)$, $(L_n)$, $(M_n)$
+are Deligne systems and such that the restriction of these
+distinguished triangles to $U$ is isomorphic to the distinguished triangle
+we started out with.
+\end{lemma}
+
+\begin{proof}
+Let $(K_n)$ be as in Lemma \ref{lemma-extension-by-zero} part (2).
+Choose an object $L'$ of $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+whose restriction to $U$ is $L$ (we can do this as the lemma shows).
+By Lemma \ref{lemma-lift-map} we can find an $n$ and a morphism
+$K_n \to L'$ on $X$ whose restriction to $U$ is the given arrow
+$K \to L$. Setting $K' = K_n$ we conclude there is a morphism
+$K' \to L'$ of $D^b_{\textit{Coh}}(\mathcal{O}_X)$ whose restriction
+to $U$ is the given arrow $K \to L$.
+
+\medskip\noindent
+By Derived Categories of Schemes, Proposition \ref{perfect-proposition-DCoh}
+we can find a morphism $\alpha^\bullet : \mathcal{F}^\bullet \to
+\mathcal{G}^\bullet$ of bounded complexes
+of coherent $\mathcal{O}_X$-modules representing $K' \to L'$.
+Choose a quasi-coherent sheaf of ideals $\mathcal{I} \subset \mathcal{O}_X$
+whose vanishing locus is $X \setminus U$.
+Then we let
+$K_n = \mathcal{I}^n\mathcal{F}^\bullet$
+and
+$L_n = \mathcal{I}^n\mathcal{G}^\bullet$.
+Observe that $\alpha^\bullet$ induces a morphism of
+complexes $\alpha_n^\bullet : \mathcal{I}^n\mathcal{F}^\bullet \to
+\mathcal{I}^n\mathcal{G}^\bullet$. From the construction of cones in
+Derived Categories, Section \ref{derived-section-cones}
+it is clear that
+$$
+C(\alpha_n)^\bullet = \mathcal{I}^nC(\alpha^\bullet)
+$$
+and hence we can set $M_n = C(\alpha_n)^\bullet$. Namely, we
+have a compatible system of distinguished triangles
+(see discussion in Derived Categories, Section
+\ref{derived-section-canonical-delta-functor})
+$$
+K_n \to L_n \to M_n \to K_n[1]
+$$
+whose restriction to $U$ is isomorphic to the distinguished
+triangle we started out with by axiom TR3 and Derived Categories,
+Lemma \ref{derived-lemma-third-isomorphism-triangle}.
+\end{proof}
+
+\begin{remark}
+\label{remark-extension-by-zero}
+Let $j : U \to X$ be an open immersion of Noetherian schemes.
+Sending $K \in D^b_{\textit{Coh}}(\mathcal{O}_U)$ to a Deligne
+system whose restriction to $U$ is $K$ determines a functor
+$$
+Rj_! :
+D^b_{\textit{Coh}}(\mathcal{O}_U)
+\longrightarrow
+\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_X)
+$$
+which is ``exact'' by Lemma \ref{lemma-extension-by-zero-triangle} and
+which is
+``left adjoint'' to the functor
+$j^* : D^b_{\textit{Coh}}(\mathcal{O}_X) \to
+D^b_{\textit{Coh}}(\mathcal{O}_U)$ by Lemma \ref{lemma-lift-map}.
+\end{remark}
+
+\begin{remark}
+\label{remark-extension-by-zero-linear-pro-system}
+Let $(A_n)$ and $(B_n)$ be inverse systems of a category $\mathcal{C}$.
+Let us say a linear-pro-morphism from $(A_n)$ to $(B_n)$ is given
+by a compatible family of morphisms $\varphi_n : A_{cn + d} \to B_n$
+for all $n \geq 1$ for some fixed integers $c, d \geq 1$.
+We'll say $(\varphi_n : A_{cn + d} \to B_n)$ and
+$(\psi_n : A_{c'n + d'} \to B_n)$ determine the same morphism
+if there exist $c'' \geq \max(c, c')$ and $d'' \geq \max(d, d')$
+such that the two induced morphisms $A_{c'' n + d''} \to B_n$ are the same
+for all $n$. It seems likely that Deligne systems $(K_n)$ with given value on
+$U$ are well defined up to linear-pro-isomorphisms. If we ever need this
+we will carefully formulate and prove this here.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-deligne-system-2-out-of-3}
+Let $j : U \to X$ be an open immersion of Noetherian schemes.
+Let
+$$
+K_n \to L_n \to M_n \to K_n[1]
+$$
+be an inverse system of distinguished triangles in
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$. If $(K_n)$ and $(M_n)$
+are pro-isomorphic to Deligne systems, then so is $(L_n)$.
+\end{lemma}
+
+\begin{proof}
+Observe that the systems $(K_n|_U)$ and $(M_n|_U)$ are essentially constant
+as they are pro-isomorphic to constant systems.
+Denote $K$ and $M$ their values. By Derived Categories, Lemma
+\ref{derived-lemma-essentially-constant-2-out-of-3}
+we see that the inverse system $(L_n|_U)$ is essentially constant as well.
+Denote $L$ its value.
+Let $N \in D^b_{\textit{Coh}}(\mathcal{O}_X)$. Consider the commutative
+diagram
+$$
+\xymatrix{
+\ldots \ar[r] &
+\colim \Hom_X(M_n, N) \ar[r] \ar[d] &
+\colim \Hom_X(L_n, N) \ar[r] \ar[d] &
+\colim \Hom_X(K_n, N) \ar[r] \ar[d] &
+\ldots \\
+\ldots \ar[r] &
+\Hom_U(M, N|_U) \ar[r] &
+\Hom_U(L, N|_U) \ar[r] &
+\Hom_U(K, N|_U) \ar[r] &
+\ldots
+}
+$$
+By Lemma \ref{lemma-lift-map} and the fact that isomorphic ind-systems
+have the same colimit, we see that the vertical arrows two to the right
+and two to the left of the middle one are isomorphisms. By the 5-lemma
+we conclude that the
+middle vertical arrow is an isomorphism. Now, if $(L'_n)$ is a Deligne system
+whose restriction to $U$ has constant value $L$ (which
+exists by Lemma \ref{lemma-extension-by-zero}), then
+we have $\colim \Hom_X(L'_n, N) = \Hom_U(L, N|_U)$ as well.
+Hence the pro-systems $(L_n)$ and $(L'_n)$ are
+pro-isomorphic by
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-consequence-Artin-Rees-bis}
+Let $X$ be a Noetherian scheme. Let $\mathcal{I} \subset \mathcal{O}_X$
+be a quasi-coherent sheaf of ideals. Let $\mathcal{F}^\bullet$ be a
+complex of coherent $\mathcal{O}_X$-modules. Let $p \in \mathbf{Z}$.
+Set $\mathcal{H} = H^p(\mathcal{F}^\bullet)$ and
+$\mathcal{H}_n = H^p(\mathcal{I}^n\mathcal{F}^\bullet)$.
+Then there are canonical $\mathcal{O}_X$-module maps
+$$
+\ldots \to \mathcal{H}_3 \to \mathcal{H}_2 \to \mathcal{H}_1 \to \mathcal{H}
+$$
+There exists a $c > 0$ such that for $n \geq c$ the image of
+$\mathcal{H}_n \to \mathcal{H}$ is contained in
+$\mathcal{I}^{n - c}\mathcal{H}$ and there is a canonical
+$\mathcal{O}_X$-module map
+$\mathcal{I}^n\mathcal{H} \to \mathcal{H}_{n - c}$ such that the compositions
+$$
+\mathcal{I}^n \mathcal{H} \to \mathcal{H}_{n - c} \to
+\mathcal{I}^{n - 2c}\mathcal{H}
+\quad\text{and}\quad
+\mathcal{H}_n \to \mathcal{I}^{n - c}\mathcal{H} \to \mathcal{H}_{n - 2c}
+$$
+are the canonical ones. In particular, the inverse systems
+$(\mathcal{H}_n)$ and $(\mathcal{I}^n\mathcal{H})$
+are isomorphic as pro-objects of $\textit{Mod}(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+If $X$ is affine, translated into algebra this is More on Algebra, Lemma
+\ref{more-algebra-lemma-consequence-Artin-Rees-bis}.
+In the general case, argue exactly as in the proof of that lemma
+replacing the reference to Artin-Rees in algebra with a reference to
+Cohomology of Schemes, Lemma \ref{coherent-lemma-Artin-Rees}.
+Details omitted.
+\end{proof}
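+
+\medskip\noindent
+A simple example, illustrating that the transition maps may even
+be zero: let $X = \Spec(k[x])$, let $\mathcal{I}$ correspond to the
+ideal $(x)$, and let $\mathcal{F}^\bullet$ be the complex
+$\mathcal{O}_X \xrightarrow{x} \mathcal{O}_X$ placed in degrees
+$0$ and $1$. For $p = 1$ we have $\mathcal{H} = \widetilde{k[x]/(x)}$
+and $\mathcal{H}_n = \widetilde{x^nk[x]/x^{n + 1}k[x]}$, and the maps
+$\mathcal{H}_{n + 1} \to \mathcal{H}_n$ and
+$\mathcal{H}_n \to \mathcal{H}$ are zero for $n \geq 1$.
+The conclusion of the lemma holds with $c = 1$: both
+$(\mathcal{H}_n)$ and $(\mathcal{I}^n\mathcal{H})$ are pro-zero.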
+
+\begin{lemma}
+\label{lemma-characterize-extension-by-zero-algebra}
+Let $j : U \to X$ be an open immersion of Noetherian schemes.
+Let $\mathcal{I} \subset \mathcal{O}_X$ be a quasi-coherent sheaf
+of ideals with $V(\mathcal{I}) = X \setminus U$.
+Let $a \leq b$ be integers. Let $(K_n)$ be an inverse system of
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$
+such that $H^i(K_n) = 0$ for $i \not \in [a, b]$.
+The following are equivalent
+\begin{enumerate}
+\item $(K_n)$ is pro-isomorphic to a Deligne system,
+\item for every $p \in \mathbf{Z}$ there exists a coherent
+$\mathcal{O}_X$-module $\mathcal{F}$ such that the pro-systems
+$(H^p(K_n))$ and $(\mathcal{I}^n\mathcal{F})$ are pro-isomorphic.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1). To prove (2) holds we may assume $(K_n)$ is a Deligne system.
+By definition we may choose a bounded complex $\mathcal{F}^\bullet$
+of coherent $\mathcal{O}_X$-modules and a quasi-coherent
+sheaf of ideals $\mathcal{I} \subset \mathcal{O}_X$ cutting out
+$X \setminus U$ such that
+$K_n$ is represented by $\mathcal{I}^n\mathcal{F}^\bullet$.
+Thus the result follows from Lemma \ref{lemma-consequence-Artin-Rees-bis}.
+
+\medskip\noindent
+Assume (2). We will prove that $(K_n)$ is as in (1) by induction on
+$b - a$. If $a = b$, then $K_n \cong H^a(K_n)[-a]$ and (1) holds by
+assumption (2) since $(\mathcal{I}^n\mathcal{F})[-a]$ is a Deligne system.
+If $a < b$ then we consider the compatible system of
+distinguished triangles
+$$
+\tau_{\leq a}K_n \to K_n \to \tau_{\geq a + 1}K_n \to (\tau_{\leq a}K_n)[1]
+$$
+See Derived Categories, Remark
+\ref{derived-remark-truncation-distinguished-triangle}.
+By induction on $b - a$ we know that $\tau_{\leq a}K_n$ and
+$\tau_{\geq a + 1}K_n$ are pro-isomorphic to Deligne systems.
+We conclude by Lemma \ref{lemma-deligne-system-2-out-of-3}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-extension-by-zero}
+Let $j : U \to X$ be an open immersion of Noetherian schemes. Let
+$(K_n)$ be an inverse system in $D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+Let $X = W_1 \cup \ldots \cup W_r$ be an open covering.
+The following are equivalent
+\begin{enumerate}
+\item $(K_n)$ is pro-isomorphic to a Deligne system,
+\item for each $i$ the restriction $(K_n|_{W_i})$
+is pro-isomorphic to a Deligne system with respect to
+the open immersion $U \cap W_i \to W_i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By induction on $r$. If $r = 1$ then the result is clear.
+Assume $r > 1$. Set $V = W_1 \cup \ldots \cup W_{r - 1}$.
+By induction we see that $(K_n|_V)$ is pro-isomorphic to a Deligne system.
+This reduces us to the discussion in the next paragraph.
+
+\medskip\noindent
+Assume $X = V \cup W$ is an open covering and
+$(K_n|_W)$ and $(K_n|_V)$ are pro-isomorphic to Deligne systems.
+We have to show that $(K_n)$ is pro-isomorphic to a Deligne system.
+Observe that $(K_n|_{V \cap W})$ is pro-isomorphic to a Deligne system
+(it follows immediately from the construction of Deligne systems
+that restriction to an open preserves them). In particular the pro-systems
+$(K_n|_{U \cap V})$,
+$(K_n|_{U \cap W})$, and
+$(K_n|_{U \cap V \cap W})$
+are essentially constant. It follows from the distinguished triangles
+in Cohomology, Lemma \ref{cohomology-lemma-exact-sequence-j-star} and
+Derived Categories, Lemma \ref{derived-lemma-essentially-constant-2-out-of-3}
+that $(K_n|_U)$ is essentially constant.
+Denote $K \in D^b_{\textit{Coh}}(\mathcal{O}_U)$ the value of this system.
+Let $L$ be an object of $D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+Consider the diagram
+$$
+\xymatrix{
+\colim \Ext^{-1}(K_n|_V, L|_V) \oplus
+\colim \Ext^{-1}(K_n|_W, L|_W) \ar[r] \ar[d] &
+\Ext^{-1}(K|_{U \cap V}, L|_{U \cap V}) \oplus
+\Ext^{-1}(K|_{U \cap W}, L|_{U \cap W}) \ar[d] \\
+\colim \Ext^{-1}(K_n|_{V \cap W}, L|_{V \cap W}) \ar[r] \ar[d] &
+\Ext^{-1}(K|_{U \cap V \cap W}, L|_{U \cap V \cap W}) \ar[d] \\
+\colim \Hom(K_n, L) \ar[d] \ar[r] &
+\Hom(K|_U, L|_U) \ar[d] \\
+\colim \Hom(K_n|_V, L|_V) \oplus \colim \Hom(K_n|_W, L|_W) \ar[r] \ar[d] &
+\Hom(K|_{U \cap V}, L|_{U \cap V}) \oplus
+\Hom(K|_{U \cap W}, L|_{U \cap W}) \ar[d] \\
+\colim \Hom(K_n|_{V \cap W}, L|_{V \cap W}) \ar[r] &
+\Hom(K|_{U \cap V \cap W}, L|_{U \cap V \cap W})
+}
+$$
+The vertical sequences are exact by
+Cohomology, Lemma \ref{cohomology-lemma-mayer-vietoris-hom}
+and the fact that filtered colimits are exact.
+All horizontal arrows except for the middle one are isomorphisms
+by Lemma \ref{lemma-lift-map} and the fact that pro-isomorphic systems
+have the same colimits. Hence the middle one is an isomorphism too by
+the 5-lemma. It follows that $(K_n)$ is pro-isomorphic to
+a Deligne system for $K$. Namely, if $(K'_n)$ is a Deligne system
+whose restriction to $U$ has constant value $K$ (which
+exists by Lemma \ref{lemma-extension-by-zero}), then
+we have $\colim \Hom_X(K'_n, L) = \Hom_U(K, L|_U)$ as well.
+Hence the pro-systems $(K_n)$ and $(K'_n)$ are
+pro-isomorphic by
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensoring-Deligne-system}
+Let $j : U \to X$ be an open immersion of Noetherian schemes. Let
+$\mathcal{I} \subset \mathcal{O}_X$ be a quasi-coherent sheaf of
+ideals with $V(\mathcal{I}) = X \setminus U$.
+Let $K$ be in $D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+Then
+$$
+K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{I}^n
+$$
+is pro-isomorphic to a Deligne system with constant value $K|_U$ over $U$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-characterize-extension-by-zero} the question is
+local on $X$. Thus we may assume $X$ is the spectrum of a Noetherian
+ring. In this case the statement follows from the algebra version which is
+More on Algebra, Lemma \ref{more-algebra-lemma-tensoring-Deligne-system}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Preliminaries to compactly supported cohomology}
+\label{section-preliminaries-compactly-supported}
+
+\noindent
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism
+in the category $\textit{FTS}_S$. Using the constructions in the
+previous section, we will construct a functor
+$$
+Rf_! :
+D^b_{\textit{Coh}}(\mathcal{O}_X)
+\longrightarrow
+\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)
+$$
+which reduces to the functor of Remark \ref{remark-extension-by-zero}
+if $f$ is an open immersion and in general is constructed using a
+compactification of $f$. Before we do this, we need the following lemmas
+to prove our construction is well defined.
+
+\begin{lemma}
+\label{lemma-well-defined-pre}
+Let $f : X \to Y$ be a proper morphism of Noetherian schemes.
+Let $V \subset Y$ be an open subscheme and set $U = f^{-1}(V)$.
+Picture
+$$
+\xymatrix{
+U \ar[r]_j \ar[d]_g & X \ar[d]^f \\
+V \ar[r]^{j'} & Y
+}
+$$
+Then we have a canonical isomorphism $Rj'_! \circ Rg_* \to Rf_* \circ Rj_!$
+of functors $D^b_{\textit{Coh}}(\mathcal{O}_U) \to
+\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)$ where
+$Rj_!$ and $Rj'_!$ are as in Remark \ref{remark-extension-by-zero}.
+\end{lemma}
+
+\begin{proof}[First proof]
+Let $K$ be an object of $D^b_{\textit{Coh}}(\mathcal{O}_U)$. Let $(K_n)$
+be a Deligne system for $U \to X$ whose restriction to $U$ is constant
+with value $K$. Of course this means that $(K_n)$ represents $Rj_!K$
+in $\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_X)$. Observe that
+both $Rj'_!Rg_*K$ and $Rf_*Rj_!K$ restrict to the constant pro-object with
+value $Rg_*K$ on $V$. This is immediate for the first one and for the second
+one it follows from the fact that $(Rf_*K_n)|_V = Rg_*(K_n|_U) = Rg_*K$.
+By the uniqueness of Deligne systems in Lemma \ref{lemma-extension-by-zero}
+it suffices to show that $(Rf_*K_n)$ is pro-isomorphic to a
+Deligne system. The lemma referenced will also show that the
+isomorphism we obtain is functorial.
+
+\medskip\noindent
+Proof that $(Rf_*K_n)$ is pro-isomorphic to a Deligne system.
+First, we observe that the question is independent of the choice
+of the Deligne system $(K_n)$ corresponding to $K$ (by the aforementioned
+uniqueness). By Lemmas \ref{lemma-extension-by-zero-triangle} and
+\ref{lemma-deligne-system-2-out-of-3}
+if we have a distinguished triangle
+$$
+K \to L \to M \to K[1]
+$$
+in $D^b_{\textit{Coh}}(\mathcal{O}_U)$ and the result holds for
+$K$ and $M$, then the result holds for $L$. Using the distinguished
+triangles of canonical truncations (Derived Categories, Remark
+\ref{derived-remark-truncation-distinguished-triangle})
+we reduce to the problem studied in the next paragraph.
+
+\medskip\noindent
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+Let $\mathcal{J} \subset \mathcal{O}_Y$ be a quasi-coherent
+sheaf of ideals cutting out $Y \setminus V$. Denote
+$\mathcal{J}^n\mathcal{F}$ the image of
+$f^*\mathcal{J}^n \otimes \mathcal{F} \to \mathcal{F}$.
+We have to show that $(Rf_*(\mathcal{J}^n\mathcal{F}))$
+is pro-isomorphic to a Deligne system.
+By Lemma \ref{lemma-characterize-extension-by-zero}
+the question is local on $Y$. Thus we may assume $Y = \Spec(A)$ is affine
+and $\mathcal{J}$ corresponds to an ideal $I \subset A$. By
+Lemma \ref{lemma-characterize-extension-by-zero-algebra}
+it suffices to show that the inverse system of cohomology modules
+$(H^p(X, I^n\mathcal{F}))$ is pro-isomorphic to the inverse system
+$(I^n M)$ for some finite $A$-module $M$.
+This is shown in Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cohomology-powers-ideal-application}.
+\end{proof}
+
+\begin{proof}[Second proof]
+Let $K$ be an object of $D^b_{\textit{Coh}}(\mathcal{O}_U)$.
+Let $L$ be an object of $D^b_{\textit{Coh}}(\mathcal{O}_Y)$.
+We will construct a bijection
+$$
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)}(Rj'_!Rg_*K, L)
+\longrightarrow
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)}(Rf_*Rj_!K, L)
+$$
+functorial in $K$ and $L$. Fixing $K$ this will determine an isomorphism
+of pro-objects
+$Rf_*Rj_!K \to Rj'_!Rg_*K$
+by Categories, Remark \ref{categories-remark-pro-category-copresheaves}
+and varying $K$ we obtain that this determines
+an isomorphism of functors. To actually produce the isomorphism we
+use the sequence of functorial equalities
+\begin{align*}
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)}(Rj'_!Rg_*K, L)
+& =
+\Hom_V(Rg_*K, L|_V) \\
+& =
+\Hom_U(K, g^!(L|_V)) \\
+& =
+\Hom_U(K, f^!L|_U) \\
+& =
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_X)}(Rj_!K, f^!L) \\
+& =
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)}(Rf_*Rj_!K, L)
+\end{align*}
+The first equality is true by Lemma \ref{lemma-lift-map}.
+The second equality is true because $g$ is proper (as the base change
+of $f$ to $V$) and hence $g^!$ is the right adjoint of pushforward by
+construction, see Section \ref{section-upper-shriek}.
+The third equality holds as $g^!(L|_V) = f^!L|_U$ by
+Lemma \ref{lemma-restrict-before-or-after}.
+Since $f^!L$ is in $D^+_{\textit{Coh}}(\mathcal{O}_X)$ by
+Lemma \ref{lemma-shriek-coherent} the fourth equality follows from
+Lemma \ref{lemma-lift-map-plus}. The fifth equality holds again
+because $f^!$ is the right adjoint to $Rf_*$ as $f$ is proper.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-well-defined}
+Let $j : U \to X$ be an open immersion of Noetherian schemes.
+Let $j' : U \to X'$ be a compactification of $U$ over $X$ (see proof)
+and denote $f : X' \to X$ the structure morphism.
+Then we have a canonical isomorphism $Rj_! \to Rf_* \circ R(j')_!$
+of functors $D^b_{\textit{Coh}}(\mathcal{O}_U) \to
+\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_X)$ where
+$Rj_!$ and $Rj'_!$ are as in Remark \ref{remark-extension-by-zero}.
+\end{lemma}
+
+\begin{proof}
+The fact that $X'$ is a compactification of $U$ over $X$ means
+precisely that $f : X' \to X$ is proper, that $j'$ is an open immersion,
+and $j = f \circ j'$. See
+More on Flatness, Section \ref{flat-section-compactify}.
+If $j'(U) = f^{-1}(j(U))$, then the lemma follows immediately from
+Lemma \ref{lemma-well-defined-pre}.
+If $j'(U) \not = f^{-1}(j(U))$, then denote $X'' \subset X'$ the
+scheme theoretic closure of $j' : U \to X'$ and denote
+$j'' : U \to X''$ the corresponding open immersion.
+Picture
+$$
+\xymatrix{
+& & X'' \ar[d]^{f'} \\
+& & X' \ar[d]^f \\
+U \ar[rr]^j \ar[rru]^{j'} \ar[rruu]^{j''} & & X
+}
+$$
+By
+More on Flatness, Lemma \ref{flat-lemma-compactifications-cofiltered} part (c)
+and the discussion above we have isomorphisms
+$Rf'_* \circ Rj''_! = Rj'_!$ and $R(f \circ f')_* \circ Rj''_! = Rj_!$.
+Since $R(f \circ f')_* = Rf_* \circ Rf'_*$ we conclude.
+\end{proof}
+
+\begin{remark}
+\label{remark-covariance-open-j-lower-shriek}
+Let $X \supset U \supset U'$ be open subschemes of a Noetherian scheme $X$.
+Denote $j : U \to X$ and $j' : U' \to X$ the inclusion morphisms.
+We claim there is a canonical map
+$$
+Rj'_!(K|_{U'}) \longrightarrow Rj_!K
+$$
+functorial for $K$ in $D^b_{\textit{Coh}}(\mathcal{O}_U)$. Namely, by
+Lemma \ref{lemma-lift-map} we have for any $L$ in
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$ the map
+\begin{align*}
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_X)}(Rj_!K, L)
+& =
+\Hom_U(K, L|_U) \\
+& \to
+\Hom_{U'}(K|_{U'}, L|_{U'}) \\
+& =
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_X)}(Rj'_!(K|_{U'}), L)
+\end{align*}
+functorial in $L$ and $K$. The functoriality in $L$ shows by
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}
+that we obtain a canonical map $Rj'_!(K|_{U'}) \to Rj_!K$ which is
+functorial in $K$ by the functoriality of the arrow above in $K$.
+
+\medskip\noindent
+Here is an explicit construction of this arrow. Namely, suppose
+that $\mathcal{F}^\bullet$ is a bounded complex of coherent
+$\mathcal{O}_X$-modules whose restriction to $U$ represents $K$
+in the derived category. We have seen in the proof of
+Lemma \ref{lemma-extension-by-zero}
+that such a complex always exists. Let $\mathcal{I}$, resp.\ $\mathcal{I}'$
+be a quasi-coherent sheaf of ideals on $X$ with
+$V(\mathcal{I}) = X \setminus U$, resp.\ $V(\mathcal{I}') = X \setminus U'$.
+After replacing $\mathcal{I}$ by $\mathcal{I} + \mathcal{I}'$
+we may assume $\mathcal{I}' \subset \mathcal{I}$.
+By construction $Rj_!K$, resp.\ $Rj'_!(K|_{U'})$ is represented by the
+inverse system $(K_n)$, resp.\ $(K'_n)$ of $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+with
+$$
+K_n = \mathcal{I}^n\mathcal{F}^\bullet
+\quad\text{resp.}\quad
+K'_n = (\mathcal{I}')^n\mathcal{F}^\bullet
+$$
+Clearly the map constructed above is given by the maps
+$K'_n \to K_n$ coming from the inclusions
+$(\mathcal{I}')^n \subset \mathcal{I}^n$.
+\end{remark}
+
+
+
+
+
+
+\section{Compactly supported cohomology for coherent modules}
+\label{section-compactly-supported}
+
+\noindent
+In Situation \ref{situation-shriek} given a morphism $f : X \to Y$
+in $\textit{FTS}_S$, we will define a functor
+$$
+Rf_! : D^b_{\textit{Coh}}(\mathcal{O}_X)
+\longrightarrow
+\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)
+$$
+Namely, we choose a compactification $j : X \to \overline{X}$ over $Y$
+which is possible by More on Flatness, Theorem \ref{flat-theorem-nagata} and
+Lemma \ref{flat-lemma-compactifyable}. Denote
+$\overline{f} : \overline{X} \to Y$ the structure morphism. Then we set
+$$
+Rf_!K = R\overline{f}_* Rj_! K
+$$
+for $K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$ where $Rj_!$ is
+as in Remark \ref{remark-extension-by-zero}.
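+
+\medskip\noindent
+As a sanity check on this definition, if $f$ is itself proper, then we
+may choose the compactification $j = \text{id} : X \to \overline{X} = X$.
+In this case the ideal cutting out the (empty) complement of the open
+may be taken to be the unit ideal, so the inverse system defining
+$Rj_!K$ is the constant system with value $K$, and hence
+$$
+Rf_!K = Rf_*K
+$$
+viewed as a constant pro-object. In other words, for proper morphisms
+the compactly supported direct image agrees with the ordinary derived
+pushforward.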
+
+\begin{lemma}
+\label{lemma-lower-shriek-well-defined}
+The functor $Rf_!$ is, up to isomorphism, independent
+of the choice of the compactification.
+\end{lemma}
+
+\noindent
+In fact, the functor $Rf_!$ will be characterized as a ``left adjoint''
+to $f^!$ which will determine it up to unique isomorphism.
+
+\begin{proof}
+Consider the category of compactifications of $X$ over $Y$, which is
+cofiltered according to More on Flatness, Theorem \ref{flat-theorem-nagata} and
+Lemmas \ref{flat-lemma-compactifications-cofiltered} and
+\ref{flat-lemma-compactifyable}.
+To every choice of a compactification
+$$
+j : X \to \overline{X},\quad \overline{f} : \overline{X} \to Y
+$$
+the construction above associates the functor $R\overline{f}_* \circ Rj_!$.
+Suppose given a morphism $g : \overline{X}_1 \to \overline{X}_2$
+between compactifications $j_i : X \to \overline{X}_i$ over $Y$.
+Then we get an isomorphism
+$$
+R\overline{f}_{2, *} \circ Rj_{2, !} =
+R\overline{f}_{2, *} \circ Rg_* \circ Rj_{1, !} =
+R\overline{f}_{1, *} \circ Rj_{1, !}
+$$
+using Lemma \ref{lemma-well-defined} in the first equality. In this way
+we see our functor is independent of the choice of compactification
+up to isomorphism.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-duality-compactly-supported}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Then the functors $Rf_!$ and $f^!$ are adjoint in
+the following sense: for all $K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$
+and $L \in D^+_{\textit{Coh}}(\mathcal{O}_Y)$ we have
+$$
+\Hom_X(K, f^!L) =
+\Hom_{\text{Pro-}D^+_{\textit{Coh}}(\mathcal{O}_Y)}(Rf_!K, L)
+$$
+bifunctorially in $K$ and $L$.
+\end{proposition}
+
+\begin{proof}
+Choose a compactification $j : X \to \overline{X}$ over $Y$ and
+denote $\overline{f} : \overline{X} \to Y$ the structure morphism.
+Then we have
+\begin{align*}
+\Hom_X(K, f^!L)
+& =
+\Hom_X(K, j^*\overline{f}{}^!L) \\
+& =
+\Hom_{\text{Pro-}D^+_{\textit{Coh}}(\mathcal{O}_{\overline{X}})}
+(Rj_!K, \overline{f}{}^!L) \\
+& =
+\Hom_{\text{Pro-}D^+_{\textit{Coh}}(\mathcal{O}_Y)}(R\overline{f}_*Rj_!K, L) \\
+& =
+\Hom_{\text{Pro-}D^+_{\textit{Coh}}(\mathcal{O}_Y)}(Rf_!K, L)
+\end{align*}
+The first equality follows immediately from the construction of $f^!$ in
+Section \ref{section-upper-shriek}.
+By Lemma \ref{lemma-shriek-coherent} we have $\overline{f}{}^!L$
+in $D^+_{\textit{Coh}}(\mathcal{O}_{\overline{X}})$ hence the second
+equality follows from Lemma \ref{lemma-lift-map-plus}.
+Since $\overline{f}$ is proper the functor $\overline{f}{}^!$
+is the right adjoint of pushforward by construction. This is
+why we have the third equality.
+The fourth equality holds because $Rf_! = R\overline{f}_* Rj_!$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compactly-supported-triangle}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$. Let
+$$
+K \to L \to M \to K[1]
+$$
+be a distinguished triangle of $D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+Then there exists an inverse system of distinguished triangles
+$$
+K_n \to L_n \to M_n \to K_n[1]
+$$
+in $D^b_{\textit{Coh}}(\mathcal{O}_Y)$ such that the pro-systems
+$(K_n)$, $(L_n)$, and $(M_n)$ give $Rf_!K$, $Rf_!L$, and $Rf_!M$.
+\end{lemma}
+
+\begin{proof}
+Choose a compactification $j : X \to \overline{X}$ over $Y$ and
+denote $\overline{f} : \overline{X} \to Y$ the structure morphism.
+Choose an inverse system of distinguished triangles
+$$
+\overline{K}_n \to \overline{L}_n \to \overline{M}_n \to \overline{K}_n[1]
+$$
+in $D^b_{\textit{Coh}}(\mathcal{O}_{\overline{X}})$ as in
+Lemma \ref{lemma-extension-by-zero-triangle} corresponding to the
+open immersion $j$ and the given distinguished triangle. Take
+$K_n = R\overline{f}_*\overline{K}_n$
+and similarly for $L_n$ and $M_n$. This works by the very definition
+of $Rf_!$.
+\end{proof}
+
+\begin{remark}
+\label{remark-compose-inverse-systems}
+Let $\mathcal{C}$ be a category. Suppose given an inverse system
+$$
+\ldots \xrightarrow{\alpha_4} (M_{3, n}) \xrightarrow{\alpha_3} (M_{2, n})
+\xrightarrow{\alpha_2} (M_{1, n})
+$$
+of inverse systems in the category of pro-objects of $\mathcal{C}$.
+In other words, the arrows $\alpha_i$ are morphisms of pro-objects. By
+Categories, Example \ref{categories-example-pro-morphism-inverse-systems}
+we can represent each $\alpha_i$ by a pair $(m_i, a_i)$ where
+$m_i : \mathbf{N} \to \mathbf{N}$ is an increasing function and
+$a_{i, n} : M_{i, m_i(n)} \to M_{i - 1, n}$ is a morphism of $\mathcal{C}$
+making the diagrams
+$$
+\xymatrix{
+\ldots \ar[r] &
+M_{i, m_i(3)} \ar[d]^{a_{i, 3}} \ar[r] &
+M_{i, m_i(2)} \ar[d]^{a_{i, 2}} \ar[r] &
+M_{i, m_i(1)} \ar[d]^{a_{i, 1}} \\
+\ldots \ar[r] &
+M_{i - 1, 3} \ar[r] &
+M_{i - 1, 2} \ar[r] &
+M_{i - 1, 1}
+}
+$$
+commute. By replacing $m_i(n)$ by $\max(n, m_i(n))$ and adjusting
+the morphisms $a_{i, n}$ accordingly (as in the example referenced)
+we may assume that $m_i(n) \geq n$. In this situation consider the
+inverse system
+$$
+\ldots \to
+M_{4, m_4(m_3(m_2(4)))} \to
+M_{3, m_3(m_2(3))} \to
+M_{2, m_2(2)} \to
+M_{1, 1}
+$$
+with general term
+$$
+M_k = M_{k, m_k(m_{k - 1}(\ldots (m_2(k))\ldots))}
+$$
+For any object $N$ of $\mathcal{C}$ we have
+$$
+\colim_i \colim_n \Mor_\mathcal{C}(M_{i, n}, N) =
+\colim_k \Mor_\mathcal{C}(M_k, N)
+$$
+We omit the details. In other words, we see that the inverse system $(M_k)$
+has the property
+$$
+\colim_i \Mor_{\text{Pro-}\mathcal{C}}((M_{i, n}), N) =
+\Mor_{\text{Pro-}\mathcal{C}}((M_k), N)
+$$
+This property determines the inverse system $(M_k)$ up to pro-isomorphism
+by the discussion in
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}.
+In this way we can turn certain inverse systems in $\text{Pro-}\mathcal{C}$
+into pro-objects with countable index categories.
+\end{remark}
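+
+\medskip\noindent
+Here is a sketch of the omitted verification, under the normalization
+$m_i(n) \geq n$ made above. Write
+$c(k) = m_k(m_{k - 1}(\ldots (m_2(k))\ldots))$ so that $M_k = M_{k, c(k)}$.
+Given $i$, $n$, and a morphism $\varphi : M_{i, n} \to N$, choose any
+$k \geq \max(i, n)$. Composing the morphisms $a_{j, -}$ for
+$j = k, k - 1, \ldots, i + 1$ and then transition maps within the
+$i$th system, we obtain
+$$
+M_k = M_{k, c(k)} \to M_{i, m_i(\ldots(m_2(k))\ldots)} \to M_{i, n}
+$$
+which makes sense because $m_i(\ldots(m_2(k))\ldots) \geq k \geq n$.
+Composing with $\varphi$ gives an element of
+$\colim_k \Mor_\mathcal{C}(M_k, N)$; one checks this is compatible
+with the transition maps on both sides and that the resulting map of
+colimits is a bijection by a cofinality argument.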
+
+\begin{remark}
+\label{remark-composition-lower-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ and $g : Y \to Z$
+be composable morphisms of $\textit{FTS}_S$. Let us define the composition
+$$
+Rg_! \circ Rf_! :
+D^b_{\textit{Coh}}(\mathcal{O}_X)
+\longrightarrow
+\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Z)
+$$
+Namely, by the very construction of $Rf_!$
+for $K$ in $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+the output $Rf_!K$ is the pro-isomorphism class
+of an inverse system $(M_n)$ in $D^b_{\textit{Coh}}(\mathcal{O}_Y)$.
+Then, since $Rg_!$ is constructed similarly, we see that
+$$
+\ldots \to Rg_!M_3 \to Rg_!M_2 \to Rg_!M_1
+$$
+is an inverse system in $\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Z)$.
+By the discussion in Remark \ref{remark-compose-inverse-systems}
+there is a unique pro-isomorphism class, which we will denote
+$Rg_! Rf_! K$, of inverse systems in $D^b_{\textit{Coh}}(\mathcal{O}_Z)$
+such that
+$$
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Z)}(Rg_!Rf_!K, L) =
+\colim_n \Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Z)}(Rg_!M_n, L)
+$$
+We omit the discussion necessary to see that this construction
+is functorial in $K$ as it will immediately follow from the next lemma.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-composition-lower-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ and $g : Y \to Z$
+be composable morphisms of $\textit{FTS}_S$. With notation as in
+Remark \ref{remark-composition-lower-shriek} we have
+$Rg_! \circ Rf_! = R(g \circ f)_!$.
+\end{lemma}
+
+\begin{proof}
+By the discussion in
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}
+it suffices to show that we obtain the same answer if we compute
+$\Hom$ into $L$ in $D^b_{\textit{Coh}}(\mathcal{O}_Z)$. To do this
+we compute, using the notation in
+Remark \ref{remark-composition-lower-shriek}, as follows
+\begin{align*}
+\Hom_Z(Rg_!Rf_!K, L)
+& =
+\colim_n \Hom_Z(Rg_!M_n, L) \\
+& =
+\colim_n \Hom_Y(M_n, g^!L) \\
+& =
+\Hom_Y(Rf_!K, g^!L) \\
+& =
+\Hom_X(K, f^!g^!L) \\
+& =
+\Hom_X(K, (g \circ f)^!L) \\
+& =
+\Hom_Z(R(g \circ f)_!K, L)
+\end{align*}
+The first equality is the definition of $Rg_!Rf_!K$. The second
+equality is Proposition \ref{proposition-duality-compactly-supported}
+for $g$.
+The third equality is the fact that $Rf_!K$ is given by $(M_n)$.
+The fourth equality is
+Proposition \ref{proposition-duality-compactly-supported} for $f$.
+The fifth equality is Lemma \ref{lemma-upper-shriek-composition}.
+The sixth is
+Proposition \ref{proposition-duality-compactly-supported} for $g \circ f$.
+\end{proof}
+
+\begin{remark}
+\label{remark-covariance-open-lower-shriek}
+In Situation \ref{situation-shriek} let $f : X \to Y$ be a morphism of
+$\textit{FTS}_S$ and let $U \subset X$ be an open. Set
+$g = f|_U : U \to Y$. Then there is a canonical morphism
+$$
+Rg_!(K|_U) \longrightarrow Rf_!K
+$$
+functorial in $K$ in $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+which can be defined in at least three ways.
+\begin{enumerate}
+\item Denote $i : U \to X$ the inclusion morphism. We have
+$Rg_! = Rf_! \circ Ri_!$ by
+Lemma \ref{lemma-composition-lower-shriek}
+and we can use $Rf_!$ applied to the map $Ri_!(K|_U) \to K$
+which is a special case of
+Remark \ref{remark-covariance-open-j-lower-shriek}.
+\item Choose a compactification $j : X \to \overline{X}$
+of $X$ over $Y$ with structure morphism $\overline{f} : \overline{X} \to Y$.
+Set $j' = j \circ i : U \to \overline{X}$. We can
+use that $Rf_! = R\overline{f}_* \circ Rj_!$ and
+$Rg_! = R\overline{f}_* \circ Rj'_!$
+and we can use $R\overline{f}_*$ applied to the map
+$Rj'_!(K|_U) \to Rj_!K$ of
+Remark \ref{remark-covariance-open-j-lower-shriek}.
+\item We can use
+\begin{align*}
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)}(Rf_!K, L)
+& =
+\Hom_X(K, f^!L) \\
+& \to
+\Hom_U(K|_U, f^!L|_U) \\
+& =
+\Hom_U(K|_U, g^!L) \\
+& =
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)}(Rg_!(K|_U), L)
+\end{align*}
+functorial in $L$ and $K$. Here we have used
+Proposition \ref{proposition-duality-compactly-supported}
+twice and the construction of upper shriek functors which
+shows that $g^! = i^* \circ f^!$. The functoriality in $L$ shows by
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}
+that we obtain a canonical map $Rg_!(K|_U) \to Rf_!K$ in
+$\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)$ which is
+functorial in $K$ by the functoriality of the arrow above in $K$.
+\end{enumerate}
+Each of these three constructions gives the same arrow; we omit the
+details.
+\end{remark}
+
+\begin{remark}
+\label{remark-covariance-etale-lower-shriek}
+Let us generalize the covariance of compactly supported cohomology
+given in Remark \ref{remark-covariance-open-lower-shriek}
+to \'etale morphisms. Namely, in Situation \ref{situation-shriek}
+suppose given a commutative diagram
+$$
+\xymatrix{
+U \ar[rr]_h \ar[rd]_g & & X \ar[ld]^f \\
+& Y
+}
+$$
+of $\textit{FTS}_S$ with $h$ \'etale. Then there is a canonical morphism
+$$
+Rg_!(h^*K) \longrightarrow Rf_!K
+$$
+functorial in $K$ in $D^b_{\textit{Coh}}(\mathcal{O}_X)$. We define
+this transformation using the sequence of maps
+\begin{align*}
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)}(Rf_!K, L)
+& =
+\Hom_X(K, f^!L) \\
+& \to
+\Hom_U(h^*K, h^*(f^!L)) \\
+& =
+\Hom_U(h^*K, h^!f^!L) \\
+& =
+\Hom_U(h^*K, g^!L) \\
+& =
+\Hom_{\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)}(Rg_!(h^*K), L)
+\end{align*}
+functorial in $L$ and $K$. Here we have used
+Proposition \ref{proposition-duality-compactly-supported}
+twice, we have used the equality $h^* = h^!$ of
+Lemma \ref{lemma-shriek-etale}, and we have used the equality
+$h^! \circ f^! = g^!$ of Lemma \ref{lemma-upper-shriek-composition}.
+The functoriality in $L$ shows by
+Categories, Remark \ref{categories-remark-pro-category-copresheaves}
+that we obtain a canonical map $Rg_!(h^*K) \to Rf_!K$ in
+$\text{Pro-}D^b_{\textit{Coh}}(\mathcal{O}_Y)$ which is
+functorial in $K$ by the functoriality of the arrow above in $K$.
+\end{remark}
+
+\begin{remark}
+\label{remark-covariance-lower-shriek}
+In Remarks \ref{remark-covariance-open-lower-shriek} and
+\ref{remark-covariance-etale-lower-shriek} we have seen
+that the construction of compactly supported cohomology is
+covariant with respect to open immersions and \'etale morphisms.
+In fact, the correct generality is that given a commutative diagram
+$$
+\xymatrix{
+U \ar[rr]_h \ar[rd]_g & & X \ar[ld]^f \\
+& Y
+}
+$$
+of $\textit{FTS}_S$ with $h$ flat and quasi-finite there exists a
+canonical transformation
+$$
+Rg_! \circ h^* \longrightarrow Rf_!
+$$
+As in Remark \ref{remark-covariance-etale-lower-shriek}
+this map can be constructed using a transformation of functors
+$h^* \to h^!$ on $D^+_{\textit{Coh}}(\mathcal{O}_X)$. Recall that
+$h^!K = h^*K \otimes \omega_{U/X}$ where $\omega_{U/X} = h^!\mathcal{O}_X$
+is the relative dualizing sheaf of the flat quasi-finite morphism $h$ (see
+Lemmas \ref{lemma-perfect-comparison-shriek} and
+\ref{lemma-flat-quasi-finite-shriek}).
+Recall that $\omega_{U/X}$ agrees with the relative dualizing
+module which will be constructed in Discriminants, Remark
+\ref{discriminant-remark-relative-dualizing-for-quasi-finite},
+see Discriminants, Lemma \ref{discriminant-lemma-compare-dualizing}.
+Thus we can use the trace element
+$\tau_{U/X} : \mathcal{O}_U \to \omega_{U/X}$
+which will be constructed in Discriminants, Remark
+\ref{discriminant-remark-relative-dualizing-for-flat-quasi-finite}
+to define our transformation.
+If we ever need this, we will precisely formulate
+and prove the result here.
+\end{remark}
+
+
+
+
+
+
+
+\section{Duality for compactly supported cohomology}
+\label{section-duality-compactly-supported}
+
+\noindent
+Let $k$ be a field. Let $U$ be a separated scheme of finite type over $k$.
+Let $K$ be an object of $D^b_{\textit{Coh}}(\mathcal{O}_U)$. Let us define
+the compactly supported cohomology $H^i_c(U, K)$ of $K$ as follows.
+Choose an open immersion $j : U \to X$ into a scheme proper over $k$
+and a Deligne system $(K_n)$ for $j : U \to X$ whose restriction
+to $U$ is constant with value $K$. Then we set
+$$
+H^i_c(U, K) = \lim H^i(X, K_n)
+$$
+We view this as a topological $k$-vector space using the limit topology
+(see More on Algebra, Section \ref{more-algebra-section-topological-ring}).
+There are several points to make here.
+
+\medskip\noindent
+First, this definition is independent of the choice of $X$ and $(K_n)$.
+Namely, if $p : U \to \Spec(k)$ denotes the structure morphism, then
+we already know that $Rp_!K = (R\Gamma(X, K_n))$
+is well defined up to pro-isomorphism in $D(k)$ hence so is the limit
+defining $H^i_c(U, K)$.
+
+\medskip\noindent
+Second, it may seem more natural to use the expression
+$$
+H^i(R\lim R\Gamma(X, K_n)) = H^i(X, R\lim K_n)
+$$
+but this would give the same answer: since the $k$-vector spaces
+$H^j(X, K_n)$ are finite dimensional, the inverse systems
+$(H^j(X, K_n))_n$ satisfy the Mittag-Leffler condition and hence the
+$R^1\lim$ terms of Cohomology, Lemma
+\ref{cohomology-lemma-RGamma-commutes-with-Rlim} vanish.
+
+\medskip\noindent
+If $U' \subset U$ is an open subscheme, then there is a canonical map
+$$
+H^i_c(U', K|_{U'}) \longrightarrow H^i_c(U, K)
+$$
+functorial for $K$ in $D^b_{\textit{Coh}}(\mathcal{O}_U)$.
+See for example Remark \ref{remark-covariance-open-lower-shriek}.
+In fact, using Remark \ref{remark-covariance-etale-lower-shriek}
+we see that more generally such a map exists for an \'etale morphism
+$U' \to U$ of separated schemes of finite type over $k$.
+
+\medskip\noindent
+If $V$ is a $k$-vector space then we put a topology on $\Hom_k(V, k)$
+as follows: write $V = \bigcup V_i$ as the filtered union of its finite
+dimensional $k$-subvector spaces and use the limit topology on
+$\Hom_k(V, k) = \lim \Hom_k(V_i, k)$. If $\dim_k V < \infty$ then
+the topology on $\Hom_k(V, k)$ is discrete. More generally, if
+$V = \colim_n V_n$ is written as a directed colimit of finite dimensional
+vector spaces, then $\Hom_k(V, k) = \lim \Hom_k(V_n, k)$ as topological
+vector spaces.
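+
+\medskip\noindent
+For a concrete illustration, take $V = k[t]$ and let $V_n \subset V$
+be the subspace of polynomials of degree $< n$. Then
+$$
+\Hom_k(k[t], k) = \lim \Hom_k(V_n, k) \cong k[[t]],\quad
+\lambda \longmapsto \sum\nolimits_{i \geq 0} \lambda(t^i)\, t^i
+$$
+and the limit topology on the left corresponds to the product
+topology on $k[[t]]$, which (with $k$ discrete) agrees with the
+$t$-adic topology.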
+
+\begin{lemma}
+\label{lemma-duality-compact-support}
+Let $p : U \to \Spec(k)$ be separated of finite type where $k$ is a field.
+Let $\omega_{U/k}^\bullet = p^!\mathcal{O}_{\Spec(k)}$.
+There are canonical isomorphisms
+$$
+\Hom_k(H^i(U, K), k) =
+H^{-i}_c(U, R\SheafHom_{\mathcal{O}_U}(K, \omega_{U/k}^\bullet))
+$$
+of topological $k$-vector spaces
+functorial for $K$ in $D^b_{\textit{Coh}}(\mathcal{O}_U)$.
+\end{lemma}
+
+\begin{proof}
+Choose a compactification $j : U \to X$ over $k$. Let
+$\mathcal{I} \subset \mathcal{O}_X$ be a quasi-coherent ideal
+sheaf with $V(\mathcal{I}) = X \setminus U$.
+By Derived Categories of Schemes, Proposition \ref{perfect-proposition-DCoh}
+we may choose $M \in D^b_{\textit{Coh}}(\mathcal{O}_X)$
+with $K = M|_U$. We have
+$$
+H^i(U, K) =
+\Ext^i_U(\mathcal{O}_U, M|_U) =
+\colim \Ext^i_X(\mathcal{I}^n, M) =
+\colim H^i(X, R\SheafHom_{\mathcal{O}_X}(\mathcal{I}^n, M))
+$$
+by Lemma \ref{lemma-lift-map}.
+Since $\mathcal{I}^n$ is a coherent $\mathcal{O}_X$-module,
+we have $\mathcal{I}^n$ in $D^-_{\textit{Coh}}(\mathcal{O}_X)$,
+hence $R\SheafHom_{\mathcal{O}_X}(\mathcal{I}^n, M)$ is in
+$D^+_{\textit{Coh}}(\mathcal{O}_X)$ by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-coherent-internal-hom}.
+
+\medskip\noindent
+Let $\omega_{X/k}^\bullet = q^!\mathcal{O}_{\Spec(k)}$ where
+$q : X \to \Spec(k)$ is the structure morphism, see
+Section \ref{section-duality-proper-over-field}. We find that
+\begin{align*}
+\Hom_k(
+&
+H^i(X, R\SheafHom_{\mathcal{O}_X}(\mathcal{I}^n, M)), k) \\
+& =
+\Ext^{-i}_X(R\SheafHom_{\mathcal{O}_X}(\mathcal{I}^n, M),
+\omega_{X/k}^\bullet) \\
+& =
+H^{-i}(X, R\SheafHom_{\mathcal{O}_X}(R\SheafHom_{\mathcal{O}_X}(
+\mathcal{I}^n, M), \omega_{X/k}^\bullet))
+\end{align*}
+by Lemma \ref{lemma-duality-proper-over-field}. By
+Lemma \ref{lemma-internal-hom-evaluate-isom} part (1) the canonical map
+$$
+R\SheafHom_{\mathcal{O}_X}(M, \omega_{X/k}^\bullet)
+\otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{I}^n
+\longrightarrow
+R\SheafHom_{\mathcal{O}_X}(R\SheafHom_{\mathcal{O}_X}(
+\mathcal{I}^n, M), \omega_{X/k}^\bullet)
+$$
+is an isomorphism. Observe that
+$\omega^\bullet_{U/k} = \omega^\bullet_{X/k}|_U$
+because $p^!$ is constructed as $q^!$ composed with restriction to $U$.
+Hence $R\SheafHom_{\mathcal{O}_X}(M, \omega_{X/k}^\bullet)$ is
+an object of $D^b_{\textit{Coh}}(\mathcal{O}_X)$ which restricts
+to $R\SheafHom_{\mathcal{O}_U}(K, \omega_{U/k}^\bullet)$ on $U$.
+Hence by Lemma \ref{lemma-tensoring-Deligne-system} we conclude that
+$$
+\lim H^{-i}(X, R\SheafHom_{\mathcal{O}_X}(M, \omega_{X/k}^\bullet)
+\otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{I}^n)
+$$
+is an avatar for the right hand side of the equality of the lemma.
+Combining all the isomorphisms obtained in this manner we get
+the isomorphism of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-duality-compact-support-restrict-open}
+With notation as in Lemma \ref{lemma-duality-compact-support}
+suppose $U' \subset U$ is an open subscheme. Then the diagram
+$$
+\xymatrix{
+\Hom_k(H^i(U, K), k) \ar[rr] & &
+H^{-i}_c(U, R\SheafHom_{\mathcal{O}_U}(K, \omega_{U/k}^\bullet)) \\
+\Hom_k(H^i(U', K|_{U'}), k) \ar[rr] \ar[u] & &
+H^{-i}_c(U', R\SheafHom_{\mathcal{O}_{U'}}(K|_{U'}, \omega_{U'/k}^\bullet)) \ar[u]
+}
+$$
+is commutative. Here the horizontal arrows are the isomorphisms of
+Lemma \ref{lemma-duality-compact-support}, the vertical arrow on the
+left is the contragredient to the restriction map
+$H^i(U, K) \to H^i(U', K|_{U'})$, and the right vertical arrow
+is Remark \ref{remark-covariance-open-lower-shriek} (see discussion
+before the lemma).
+\end{lemma}
+
+\begin{proof}
+We strongly urge the reader to skip this proof. Choose $X$ and $M$ as
+in the proof of Lemma \ref{lemma-duality-compact-support}. We are going to drop
+the subscript $\mathcal{O}_X$ from $R\SheafHom$ and $\otimes^\mathbf{L}$.
+We write
+$$
+H^i(U, K) = \colim H^i(X, R\SheafHom(\mathcal{I}^n, M))
+$$
+and
+$$
+H^i(U', K|_{U'}) = \colim H^i(X, R\SheafHom((\mathcal{I}')^n, M))
+$$
+as in the proof of Lemma \ref{lemma-duality-compact-support} where we choose
+$\mathcal{I}' \subset \mathcal{I}$ as in the discussion in
+Remark \ref{remark-covariance-open-j-lower-shriek} so that the map
+$H^i(U, K) \to H^i(U', K|_{U'})$ is induced by the maps
+$(\mathcal{I}')^n \to \mathcal{I}^n$.
+We similarly write
+$$
+H^i_c(U, R\SheafHom(K, \omega_{U/k}^\bullet)) =
+\lim H^i(X, R\SheafHom(M, \omega_{X/k}^\bullet)
+\otimes^\mathbf{L} \mathcal{I}^n)
+$$
+and
+$$
+H^i_c(U', R\SheafHom(K|_{U'}, \omega_{U'/k}^\bullet)) =
+\lim H^i(X, R\SheafHom(M, \omega_{X/k}^\bullet)
+\otimes^\mathbf{L} (\mathcal{I}')^n)
+$$
+so that the arrow $H^i_c(U', R\SheafHom(K|_{U'}, \omega_{U'/k}^\bullet))
+\to H^i_c(U, R\SheafHom(K, \omega_{U/k}^\bullet))$ is similarly
+deduced from the maps $(\mathcal{I}')^n \to \mathcal{I}^n$.
+The diagrams
+$$
+\xymatrix{
+R\SheafHom(M, \omega_{X/k}^\bullet)
+\otimes^\mathbf{L} \mathcal{I}^n
+\ar[rr] & &
+R\SheafHom(R\SheafHom(\mathcal{I}^n, M), \omega_{X/k}^\bullet) \\
+R\SheafHom(M, \omega_{X/k}^\bullet) \otimes^\mathbf{L} (\mathcal{I}')^n
+\ar[rr] \ar[u] & &
+R\SheafHom(R\SheafHom((\mathcal{I}')^n, M), \omega_{X/k}^\bullet) \ar[u]
+}
+$$
+commute because the construction of the horizontal arrows in
+Cohomology, Lemma \ref{cohomology-lemma-internal-hom-evaluate}
+is functorial in all three entries. Hence we finally come down
+to the assertion that the diagrams
+$$
+\xymatrix{
+\Hom_k(H^i(X, R\SheafHom(\mathcal{I}^n, M)), k) \ar[r] &
+H^{-i}(X, R\SheafHom(R\SheafHom(
+\mathcal{I}^n, M), \omega_{X/k}^\bullet)) \\
+\Hom_k(H^i(X, R\SheafHom((\mathcal{I}')^n, M)), k) \ar[r] \ar[u] &
+H^{-i}(X, R\SheafHom(R\SheafHom(
+(\mathcal{I}')^n, M), \omega_{X/k}^\bullet)) \ar[u]
+}
+$$
+commute. This is true because the duality isomorphism
+$$
+\Hom_k(H^i(X, L), k) = \Ext^{-i}_X(L, \omega_{X/k}^\bullet) =
+H^{-i}(X, R\SheafHom(L, \omega_{X/k}^\bullet))
+$$
+is functorial for $L$ in $D_\QCoh(\mathcal{O}_X)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-h0-compactly-supported}
+Let $X$ be a proper scheme over a field $k$. Let
+$K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$ with $H^i(K) = 0$
+for $i < 0$. Set $\mathcal{F} = H^0(K)$.
+Let $Z \subset X$ be closed with complement $U = X \setminus Z$.
+Then
+$$
+H^0_c(U, K|_U) \subset H^0(X, \mathcal{F})
+$$
+is given by those global sections of $\mathcal{F}$ which
+vanish in an open neighbourhood of $Z$.
+\end{lemma}
+
+\begin{proof}
+Consider the map
+$H^0_c(U, K|_U) \to H^0_c(X, K) = H^0(X, K) = H^0(X, \mathcal{F})$ of
+Remark \ref{remark-covariance-open-lower-shriek}. To study this
+we represent $K$ by a bounded complex $\mathcal{F}^\bullet$ with
+$\mathcal{F}^i = 0$ for $i < 0$. Then we have by definition
+$$
+H^0_c(U, K|_U) = \lim H^0(X, \mathcal{I}^n\mathcal{F}^\bullet)
+= \lim \Ker(
+H^0(X, \mathcal{I}^n\mathcal{F}^0) \to
+H^0(X, \mathcal{I}^n\mathcal{F}^1))
+$$
+By Artin-Rees (Cohomology of Schemes, Lemma \ref{coherent-lemma-Artin-Rees})
+this is the same as $\lim H^0(X, \mathcal{I}^n\mathcal{F})$.
+Thus the arrow $H^0_c(U, K|_U) \to H^0(X, \mathcal{F})$
+is injective and the image consists of those global sections of $\mathcal{F}$
+which are contained in the subsheaf $\mathcal{I}^n\mathcal{F}$
+for any $n$. The characterization of these as the sections which
+vanish in a neighbourhood of $Z$ comes from Krull's intersection
+theorem (Algebra, Lemma \ref{algebra-lemma-intersect-powers-ideal-module-zero})
+by looking at stalks of $\mathcal{F}$. See discussion in
+Algebra, Remark \ref{algebra-remark-intersection-powers-ideal}
+for the case of functions.
+\end{proof}
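+
+\medskip\noindent
+As an illustration, take $X = \mathbf{P}^1_k$, $Z = \{\infty\}$,
+$U = \mathbf{A}^1_k$, and $K = \mathcal{O}_X$. The global sections
+of $\mathcal{O}_{\mathbf{P}^1_k}$ are the constants, and a constant
+vanishing in a neighbourhood of $\infty$ is zero. Hence the lemma gives
+$$
+H^0_c(\mathbf{A}^1_k, \mathcal{O}) = 0
+$$
+in accordance with the intuition that $\mathbf{A}^1_k$ admits no
+nonzero regular function vanishing near the boundary.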
+
+
+
+
+\section{Lichtenbaum's theorem}
+\label{section-lichtenbaum}
+
+\noindent
+The theorem below was conjectured by Lichtenbaum and proved by Grothendieck
+(see \cite{Hartshorne-local-cohomology}). There is a very nice proof of the
+theorem by Kleiman in \cite{Kleiman-Lichtenbaum}. A generalization of
+the theorem to the case of cohomology with supports can be found in
+\cite{Lyubeznik-Lichtenbaum}. The most interesting part of the argument
+is contained in the proof of the following lemma.
+
+\begin{lemma}
+\label{lemma-lichtenbaum}
+Let $U$ be a variety. Let $\mathcal{F}$ be a coherent $\mathcal{O}_U$-module.
+If $H^d(U, \mathcal{F})$ is nonzero, then $\dim(U) \geq d$ and if
+equality holds, then $U$ is proper.
+\end{lemma}
+
+\begin{proof}
+By Grothendieck's vanishing result in
+Cohomology, Proposition \ref{cohomology-proposition-vanishing-Noetherian}
+we conclude that $\dim(U) \geq d$.
+Assume $\dim(U) = d$. Choose a compactification $U \to X$
+such that $U$ is dense in $X$. (This is possible by
+More on Flatness, Theorem \ref{flat-theorem-nagata} and
+Lemma \ref{flat-lemma-compactifyable}.) After replacing $X$ by its
+reduction we find that $X$ is a proper variety of dimension $d$
+and we see that $U$ is proper if and only if $U = X$.
+Set $Z = X \setminus U$. We will show that $H^d(U, \mathcal{F})$
+is zero if $Z$ is nonempty.
+
+\medskip\noindent
+Choose a coherent $\mathcal{O}_X$-module
+$\mathcal{G}$ whose restriction to $U$ is $\mathcal{F}$, see
+Properties, Lemma \ref{properties-lemma-lift-finite-presentation}.
+Let $\omega_X^\bullet$ denote the dualizing complex of $X$ as in
+Section \ref{section-duality-proper-over-field}.
+Set $\omega_U^\bullet = \omega_X^\bullet|_U$.
+Then $H^d(U, \mathcal{F})$ is dual to
+$$
+H^{-d}_c(U, R\SheafHom_{\mathcal{O}_U}(\mathcal{F}, \omega_U^\bullet))
+$$
+by Lemma \ref{lemma-duality-compact-support}. By
+Lemma \ref{lemma-duality-proper-over-field} we see that
+the cohomology sheaves of $\omega_X^\bullet$ vanish in degrees $< -d$
+and $H^{-d}(\omega_X^\bullet) = \omega_X$ is a coherent
+$\mathcal{O}_X$-module which is $(S_2)$ and whose support is $X$.
+In particular, $\omega_X$ is torsion free, see
+Divisors, Lemma \ref{divisors-lemma-torsion-free-finite-noetherian-domain}.
+Thus we see that the cohomology sheaf
+$$
+H^{-d}(R\SheafHom_{\mathcal{O}_X}(\mathcal{G}, \omega_X^\bullet)) =
+\SheafHom(\mathcal{G}, \omega_X)
+$$
+is torsion free, see
+Divisors, Lemma \ref{divisors-lemma-hom-into-torsion-free}.
+Consequently this sheaf has no nonzero sections vanishing
+on any nonempty open of $X$ (those would be torsion sections).
+Thus it follows from Lemma \ref{lemma-h0-compactly-supported} that
+$H^{-d}_c(U, R\SheafHom_{\mathcal{O}_U}(\mathcal{F}, \omega_U^\bullet))$
+is zero, and hence $H^d(U, \mathcal{F})$ is zero as desired.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-lichtenbaum}
+Let $X$ be a nonempty separated scheme of finite type over a field $k$.
+Let $d = \dim(X)$. The following are equivalent
+\begin{enumerate}
+\item $H^d(X, \mathcal{F}) = 0$ for all coherent $\mathcal{O}_X$-modules
+$\mathcal{F}$ on $X$,
+\item $H^d(X, \mathcal{F}) = 0$ for all quasi-coherent $\mathcal{O}_X$-modules
+$\mathcal{F}$ on $X$, and
+\item no irreducible component $X' \subset X$ of dimension $d$
+is proper over $k$.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Assume there exists an irreducible component $X' \subset X$ (which we view
+as an integral closed subscheme) which is proper and has dimension $d$.
+Let $\omega_{X'}$ be a dualizing module of $X'$ over $k$, see
+Lemma \ref{lemma-duality-proper-over-field}. Then
+$H^d(X', \omega_{X'})$ is nonzero as it is dual to $H^0(X', \mathcal{O}_{X'})$
+by the lemma. Hence we see that $H^d(X, \omega_{X'}) = H^d(X', \omega_{X'})$
+is nonzero and we conclude that (1) does not hold.
+In this way we see that (1) implies (3).
+
+\medskip\noindent
+Let us prove that (3) implies (1).
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module such that
+$H^d(X, \mathcal{F})$ is nonzero. Choose a filtration
+$$
+0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset
+\ldots \subset \mathcal{F}_m = \mathcal{F}
+$$
+as in Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-filter}.
+We obtain exact sequences
+$$
+H^d(X, \mathcal{F}_i) \to H^d(X, \mathcal{F}_{i + 1}) \to
+H^d(X, \mathcal{F}_{i + 1}/\mathcal{F}_i)
+$$
+Thus for some $i \in \{1, \ldots, m\}$ we find that
+$H^d(X, \mathcal{F}_{i + 1}/\mathcal{F}_i)$ is nonzero.
+By our choice of the filtration this means that there exists
+an integral closed subscheme $Z \subset X$
+and a nonzero coherent sheaf of ideals $\mathcal{I} \subset \mathcal{O}_Z$
+such that $H^d(Z, \mathcal{I})$ is nonzero.
+By Lemma \ref{lemma-lichtenbaum}
+we conclude $\dim(Z) = d$ and $Z$ is proper over $k$,
+contradicting (3). Hence (3) implies (1).
+
+\medskip\noindent
+Finally, let us show that (1) and (2) are equivalent for any Noetherian scheme
+$X$. Namely, (2) trivially implies (1). On the other hand, assume (1) and
+let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module. Then we can write
+$\mathcal{F} = \colim \mathcal{F}_i$ as the filtered colimit of its coherent
+submodules, see
+Properties, Lemma \ref{properties-lemma-quasi-coherent-colimit-finite-type}.
+Then we have $H^d(X, \mathcal{F}) = \colim H^d(X, \mathcal{F}_i) = 0$
+by Cohomology, Lemma \ref{cohomology-lemma-quasi-separated-cohomology-colimit}.
+Thus (2) is true.
+\end{proof}
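+
+\medskip\noindent
+For example, if $X$ is affine over $k$ of dimension $d > 0$, then no
+irreducible component of $X$ is proper over $k$: a scheme which is both
+affine and proper over $k$ is finite over $k$, hence has dimension $0$.
+Correspondingly, the theorem predicts
+$$
+H^d(X, \mathcal{F}) = 0
+$$
+for every quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$, which also
+follows directly from Serre's vanishing theorem for quasi-coherent
+modules on affine schemes.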
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/dualizing.tex b/books/stacks/dualizing.tex
new file mode 100644
index 0000000000000000000000000000000000000000..adc91e4a841fed7b3974fe91439527509924deb0
--- /dev/null
+++ b/books/stacks/dualizing.tex
@@ -0,0 +1,5616 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Dualizing Complexes}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we discuss dualizing complexes in commutative algebra.
+A reference is \cite{RD}.
+
+\medskip\noindent
+We begin with a discussion of
+essential surjections and essential injections,
+projective covers,
+injective hulls,
+duality for Artinian rings, and
+study injective hulls of residue fields,
+leading quickly to a proof of Matlis duality.
+See Sections \ref{section-essential},
+\ref{section-injective-modules},
+\ref{section-projective-cover},
+\ref{section-injective-hull},
+\ref{section-artinian}, and
+\ref{section-hull-residue-field} and
+Proposition \ref{proposition-matlis}.
+
+\medskip\noindent
+This is followed by three sections discussing local cohomology in
+great generality, see Sections \ref{section-bad-local-cohomology},
+\ref{section-local-cohomology}, and \ref{section-local-cohomology-noetherian}.
+We apply some of this to a discussion of depth in
+Section \ref{section-depth}. In another application we show how,
+given a finitely generated ideal $I$ of a ring $A$, the
+``$I$-complete'' and ``$I$-torsion'' objects
+of the derived category of $A$ are equivalent, see
+Section \ref{section-torsion-and-complete}.
+To learn more about local cohomology, for example the finiteness
+theorem (which relies on local duality -- see below) please visit
+Local Cohomology, Section \ref{local-cohomology-section-introduction}.
+
+\medskip\noindent
+The bulk of this chapter is devoted to duality for a ring map and
+dualizing complexes. See
+Sections \ref{section-trivial},
+\ref{section-base-change-trivial-duality},
+\ref{section-dualizing},
+\ref{section-dualizing-local},
+\ref{section-dimension-function},
+\ref{section-local-duality},
+\ref{section-dualizing-module},
+\ref{section-CM},
+\ref{section-gorenstein},
+\ref{section-ubiquity-dualizing}, and
+\ref{section-formal-fibres}.
+The key definition is that of a dualizing complex
+$\omega_A^\bullet$ over a Noetherian ring $A$ as an object
+$\omega_A^\bullet \in D^{+}(A)$ whose cohomology modules
+$H^i(\omega_A^\bullet)$ are finite $A$-modules, which has
+finite injective dimension, and is such that the map
+$$
+A \longrightarrow R\Hom_A(\omega_A^\bullet, \omega_A^\bullet)
+$$
+is a quasi-isomorphism. After establishing some elementary properties
+of dualizing complexes, we show a dualizing complex gives rise to a
+dimension function. Next, we prove Grothendieck's local duality theorem.
+After briefly discussing dualizing modules and Cohen-Macaulay rings,
+we introduce Gorenstein rings and we show many familiar Noetherian
+rings have dualizing complexes. In a last section we apply the material
+to show there is a good theory of Noetherian local rings whose formal fibres
+are Gorenstein or local complete intersections.
+
+\medskip\noindent
+In the last few sections, we describe an algebraic construction of
+the ``upper shriek functors'' used in algebraic geometry, for example
+in the book \cite{RD}. This topic is continued in the chapter on
+duality for schemes. See
+Duality for Schemes, Section \ref{duality-section-introduction}.
+
+
+
+
+
+
+
+\section{Essential surjections and injections}
+\label{section-essential}
+
+\noindent
+We will mostly work in categories of modules, but we may as well make
+the definition in general.
+
+\begin{definition}
+\label{definition-essential}
+Let $\mathcal{A}$ be an abelian category.
+\begin{enumerate}
+\item An injection $A \subset B$ of $\mathcal{A}$ is {\it essential},
+or we say that $B$ is an {\it essential extension of} $A$,
+if every nonzero subobject $B' \subset B$ has nonzero intersection with $A$.
+\item A surjection $f : A \to B$ of $\mathcal{A}$ is {\it essential}
+if for every proper subobject $A' \subset A$ we have $f(A') \not = B$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Some lemmas about this notion.
+
+\begin{lemma}
+\label{lemma-essential}
+Let $\mathcal{A}$ be an abelian category.
+\begin{enumerate}
+\item If $A \subset B$ and $B \subset C$ are essential extensions, then
+$A \subset C$ is an essential extension.
+\item If $A \subset B$ is an essential extension and $C \subset B$
+is a subobject, then $A \cap C \subset C$ is an essential extension.
+\item If $A \to B$ and $B \to C$ are essential surjections, then
+$A \to C$ is an essential surjection.
+\item Given an essential surjection $f : A \to B$ and a surjection
+$A \to C$ with kernel $K$, the morphism $C \to B/f(K)$ is an essential
+surjection.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-union-essential-extensions}
+Let $R$ be a ring. Let $M$ be an $R$-module. Let $E = \colim E_i$
+be a filtered colimit of $R$-modules. Suppose given a compatible
+system of essential injections $M \to E_i$ of $R$-modules.
+Then $M \to E$ is an essential injection.
+\end{lemma}
+
+\begin{proof}
+Immediate from the definitions and the fact that filtered
+colimits are exact (Algebra, Lemma \ref{algebra-lemma-directed-colimit-exact}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-essential-extension}
+Let $R$ be a ring. Let $M \subset N$ be $R$-modules. The following
+are equivalent
+\begin{enumerate}
+\item $M \subset N$ is an essential extension,
+\item for all $x \in N$ nonzero there exists an $f \in R$ such that $fx \in M$
+and $fx \not = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (1) and let $x \in N$ be a nonzero element. By (1) we have
+$Rx \cap M \not = 0$. This implies (2).
+
+\medskip\noindent
+Assume (2). Let $N' \subset N$ be a nonzero submodule. Pick $x \in N'$
+nonzero. By (2) we can find $f \in R$ with $fx \in M$ and $fx \not = 0$.
+Thus $N' \cap M \not = 0$.
+\end{proof}
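+
+\noindent
+For example, the extension $p\mathbf{Z}/p^2\mathbf{Z} \subset
+\mathbf{Z}/p^2\mathbf{Z}$ of $\mathbf{Z}$-modules is essential: a nonzero
+$x \in \mathbf{Z}/p^2\mathbf{Z}$ either lies in $p\mathbf{Z}/p^2\mathbf{Z}$
+already, or satisfies $px \not = 0$ with $px \in p\mathbf{Z}/p^2\mathbf{Z}$,
+as required by (2). On the other hand, the inclusion of the first summand in
+$$
+\mathbf{Z}/p\mathbf{Z} \oplus \mathbf{Z}/p\mathbf{Z}
+$$
+is not essential, since the second summand meets it in $0$.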
+
+
+
+
+\section{Injective modules}
+\label{section-injective-modules}
+
+\noindent
+Some results about injective modules over rings.
+
+\begin{lemma}
+\label{lemma-product-injectives}
+Let $R$ be a ring. Any product of injective $R$-modules is injective.
+\end{lemma}
+
+\begin{proof}
+Special case of Homology, Lemma \ref{homology-lemma-product-injectives}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-flat}
+Let $R \to S$ be a flat ring map. If $E$ is an injective $S$-module,
+then $E$ is injective as an $R$-module.
+\end{lemma}
+
+\begin{proof}
+This is true because $\Hom_R(M, E) = \Hom_S(M \otimes_R S, E)$
+by Algebra, Lemma \ref{algebra-lemma-adjoint-tensor-restrict}
+and the fact that tensoring with $S$ is exact.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-epimorphism}
+Let $R \to S$ be an epimorphism of rings. Let $E$ be an $S$-module.
+If $E$ is injective as an $R$-module, then $E$ is an injective $S$-module.
+\end{lemma}
+
+\begin{proof}
+This is true because $\Hom_R(N, E) = \Hom_S(N, E)$ for any $S$-module $N$,
+see Algebra, Lemma \ref{algebra-lemma-epimorphism-modules}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-injective}
+Let $R \to S$ be a ring map. If $E$ is an injective $R$-module,
+then $\Hom_R(S, E)$ is an injective $S$-module.
+\end{lemma}
+
+\begin{proof}
+This is true because $\Hom_S(N, \Hom_R(S, E)) = \Hom_R(N, E)$ by
+Algebra, Lemma \ref{algebra-lemma-adjoint-hom-restrict}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-essential-extensions-in-injective}
+Let $R$ be a ring. Let $I$ be an injective $R$-module. Let $E \subset I$
+be a submodule. The following are equivalent
+\begin{enumerate}
+\item $E$ is injective, and
+\item for all $E \subset E' \subset I$ with $E \subset E'$ essential
+we have $E = E'$.
+\end{enumerate}
+In particular, an $R$-module is injective if and only if every essential
+extension is trivial.
+\end{lemma}
+
+\begin{proof}
+The final assertion follows from the first and the fact that the
+category of $R$-modules has enough injectives
+(More on Algebra, Section \ref{more-algebra-section-injectives-modules}).
+
+\medskip\noindent
+Assume (1). Let $E \subset E' \subset I$ as in (2).
+Then the map $\text{id}_E : E \to E$ can be extended
+to a map $\alpha : E' \to E$. The kernel of $\alpha$ has to be
+zero because it intersects $E$ trivially and $E'$ is an essential
+extension. Hence $E = E'$.
+
+\medskip\noindent
+Assume (2). Let $M \subset N$ be $R$-modules and let $\varphi : M \to E$
+be an $R$-module map. In order to prove (1) we have to show that
+$\varphi$ extends to a morphism $N \to E$. Consider the set $\mathcal{S}$
+of pairs
+$(M', \varphi')$ where $M \subset M' \subset N$ and $\varphi' : M' \to E$
+is an $R$-module map agreeing with $\varphi$ on $M$. We define an ordering
+on $\mathcal{S}$ by the rule $(M', \varphi') \leq (M'', \varphi'')$
+if and only if $M' \subset M''$ and $\varphi''|_{M'} = \varphi'$.
+It is clear that we can take the maximum of a totally ordered subset
+of $\mathcal{S}$. Hence by Zorn's lemma we may assume $(M, \varphi)$
+is a maximal element.
+
+\medskip\noindent
+Choose an extension $\psi : N \to I$ of $\varphi$ composed
+with the inclusion $E \to I$. This is possible as $I$ is injective.
+If $\psi(N) \subset E$, then $\psi$ is the desired extension.
+If $\psi(N)$ is not contained in $E$, then by (2) the inclusion
+$E \subset E + \psi(N)$ is not essential. Hence
+we can find a nonzero submodule $K \subset E + \psi(N)$ meeting $E$ in $0$.
+This means that $M' = \psi^{-1}(E + K)$ strictly contains $M$.
+Thus we can extend $\varphi$ to $M'$ using
+$$
+M' \xrightarrow{\psi|_{M'}} E + K \to (E + K)/K = E
+$$
+This contradicts the maximality of $(M, \varphi)$.
+\end{proof}
+
+\begin{example}
+\label{example-reduced-ring-injective}
+Let $R$ be a reduced ring. Let $\mathfrak p \subset R$ be a minimal prime
+so that $K = R_\mathfrak p$ is a field
+(Algebra, Lemma \ref{algebra-lemma-minimal-prime-reduced-ring}).
+Then $K$ is an injective $R$-module. Namely, we have
+$\Hom_R(M, K) = \Hom_K(M_\mathfrak p, K)$ for any $R$-module
+$M$. Since localization is an exact functor and taking duals is
+an exact functor on $K$-vector spaces we conclude $\Hom_R(-, K)$
+is an exact functor, i.e., $K$ is an injective $R$-module.
+\end{example}
+
+\begin{lemma}
+\label{lemma-sum-injective-modules}
+Let $R$ be a Noetherian ring. A direct sum of injective modules
+is injective.
+\end{lemma}
+
+\begin{proof}
+Let $\{E_i\}_{i \in I}$ be a family of injective modules parametrized
+by a set $I$. Set $E = \bigoplus_{i \in I} E_i$. To show that $E$ is
+injective we use Injectives, Lemma \ref{injectives-lemma-criterion-baer}.
+Thus let $\varphi : \mathfrak a \to E$ be a module map from an ideal
+$\mathfrak a \subset R$ into $E$. As $\mathfrak a$ is a finite $R$-module
+(because $R$ is Noetherian) we can find finitely many indices
+$i_1, \ldots, i_r \in I$ such that $\varphi$ maps into
+$\bigoplus_{j = 1, \ldots, r} E_{i_j}$. This finite direct sum is injective
+by Lemma \ref{lemma-product-injectives}, hence $\varphi$ extends to a
+module map $R \to \bigoplus_{j = 1, \ldots, r} E_{i_j} \subset E$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization-injective-modules}
+Let $R$ be a Noetherian ring. Let $S \subset R$ be a multiplicative
+subset. If $E$ is an injective $R$-module, then $S^{-1}E$ is an
+injective $S^{-1}R$-module.
+\end{lemma}
+
+\begin{proof}
+Since $R \to S^{-1}R$ is an epimorphism of rings, it suffices
+to show that $S^{-1}E$ is injective as an $R$-module, see
+Lemma \ref{lemma-injective-epimorphism}.
+To show this we use Injectives, Lemma \ref{injectives-lemma-criterion-baer}.
+Thus let $I \subset R$ be an ideal and let
+$\varphi : I \to S^{-1} E$ be an $R$-module map.
+As $I$ is a finitely presented $R$-module (because $R$ is Noetherian)
+we can find an $f \in S$ and an $R$-module map $I \to E$
+such that $f\varphi$ is the composition $I \to E \to S^{-1}E$
+(Algebra, Lemma \ref{algebra-lemma-hom-from-finitely-presented}).
+Then we can extend $I \to E$ to a homomorphism $R \to E$.
+Then the composition
+$$
+R \to E \to S^{-1}E \xrightarrow{f^{-1}} S^{-1}E
+$$
+is the desired extension of $\varphi$ to $R$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-module-divide}
+Let $R$ be a Noetherian ring. Let $I$ be an injective $R$-module.
+\begin{enumerate}
+\item Let $f \in R$. Then $E = \bigcup I[f^n] = I[f^\infty]$
+is an injective submodule of $I$.
+\item Let $J \subset R$ be an ideal. Then the $J$-power torsion
+submodule $I[J^\infty]$ is an injective submodule of $I$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-essential-extensions-in-injective}
+to prove (1).
+Suppose that $E \subset E' \subset I$ and that $E'$ is an essential
+extension of $E$. We will show that $E' = E$. If not, then we can
+find $x \in E'$ with $x \not \in E$.
+Let $J = \{ a \in R \mid ax \in E\}$. Since $R$ is Noetherian,
+we may write $J = (g_1, \ldots, g_t)$ for some
+$g_i \in R$. By definition $E$ is the set of elements of $I$ annihilated
+by powers of $f$, so we may choose integers $n_i$ so that $f^{n_i}g_ix = 0$.
+Set $n = \max\{ n_i \}$. Then $x' = f^n x$ is an element of $E'$
+not in $E$ and is annihilated by $J$. Set $J' = \{ a \in R \mid ax' \in E \}$
+so $J \subset J'$. Conversely, we have $a \in J'$ if and only if $ax' \in E$
+if and only if $f^m a x' = 0$ for some $m \geq 0$. But then
+$f^m a x' = f^{m + n} a x$ implies $ax \in E$, i.e., $a \in J$.
+Hence $J = J'$. Thus $J = J' = \text{Ann}(x')$, so $Rx' \cap E = 0$.
+Hence $E'$ is not an essential extension of $E$, a contradiction.
+
+\medskip\noindent
+To prove (2) write $J = (f_1, \ldots, f_t)$. Then
+$I[J^\infty]$ is equal to
+$$
+(\ldots((I[f_1^\infty])[f_2^\infty])\ldots)[f_t^\infty]
+$$
+and the result follows from (1) and induction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-dimension-over-polynomial-ring}
+Let $A$ be a Noetherian ring. Let $E$ be an injective $A$-module.
+Then $E \otimes_A A[x]$ has injective-amplitude $[0, 1]$
+as an object of $D(A[x])$. In particular, $E \otimes_A A[x]$
+has finite injective dimension as an $A[x]$-module.
+\end{lemma}
+
+\begin{proof}
+Let us write $E[x] = E \otimes_A A[x]$. Consider the short exact
+sequence of $A[x]$-modules
+$$
+0 \to E[x] \to \Hom_A(A[x], E[x]) \to \Hom_A(A[x], E[x]) \to 0
+$$
+where the first map sends $p \in E[x]$ to $f \mapsto fp$ and the
+second map sends $\varphi$ to $f \mapsto \varphi(xf) - x\varphi(f)$.
+The second map is surjective because
+$\Hom_A(A[x], E[x]) = \prod_{n \geq 0} E[x]$ as an abelian group and
+the map sends $(e_n)$ to $(e_{n + 1} - xe_n)$ which is surjective.
+As an $A$-module we have $E[x] \cong \bigoplus_{n \geq 0} E$
+which is injective by Lemma \ref{lemma-sum-injective-modules}.
+Hence the $A[x]$-module $\Hom_A(A[x], E[x])$ is injective by
+Lemma \ref{lemma-hom-injective} and the proof is complete.
+\end{proof}
+
+
+
+\section{Projective covers}
+\label{section-projective-cover}
+
+\noindent
+In this section we briefly discuss projective covers.
+
+\begin{definition}
+\label{definition-projective-cover}
+Let $R$ be a ring. A surjection $P \to M$ of $R$-modules is said
+to be a {\it projective cover}, or sometimes a {\it projective envelope},
+if $P$ is a projective $R$-module and $P \to M$ is an essential
+surjection.
+\end{definition}
+
+\noindent
+Projective covers do not always exist. For example, if $k$ is a field
+and $R = k[x]$ is the polynomial ring over $k$, then the module $M = R/(x)$
+does not have a projective cover. Namely, for any surjection $f : P \to M$
+with $P$ projective over $R$, the proper submodule $(x - 1)P$ surjects
+onto $M$. Hence $f$ is not essential.
+
+\begin{lemma}
+\label{lemma-projective-cover-unique}
+Let $R$ be a ring and let $M$ be an $R$-module. If a projective cover
+of $M$ exists, then it is unique up to isomorphism.
+\end{lemma}
+
+\begin{proof}
+Let $P \to M$ and $P' \to M$ be projective covers. Because $P$ is a
+projective $R$-module and $P' \to M$ is surjective, we can find an
+$R$-module map $\alpha : P \to P'$ compatible with the maps to $M$.
+Since $P' \to M$ is essential, we see that $\alpha$ is surjective.
+As $P'$ is a projective $R$-module we can choose a direct sum decomposition
+$P = \Ker(\alpha) \oplus P'$. Since $P' \to M$ is surjective
+and since $P \to M$ is essential we conclude that $\Ker(\alpha)$
+is zero as desired.
+\end{proof}
+
+\noindent
+Here is an example where projective covers exist.
+
+\begin{lemma}
+\label{lemma-projective-covers-local}
+Let $(R, \mathfrak m, \kappa)$ be a local ring. Any finite $R$-module has
+a projective cover.
+\end{lemma}
+
+\begin{proof}
+Let $M$ be a finite $R$-module. Let $r = \dim_\kappa(M/\mathfrak m M)$.
+Choose $x_1, \ldots, x_r \in M$ mapping to a basis of $M/\mathfrak m M$.
+Consider the map $f : R^{\oplus r} \to M$ sending the $i$th standard
+basis vector to $x_i$. By Nakayama's lemma this is
+a surjection (Algebra, Lemma \ref{algebra-lemma-NAK}). If
+$N \subset R^{\oplus r}$ is a proper submodule, then
+$N/\mathfrak m N \to \kappa^{\oplus r}$ is not surjective (by
+Nakayama's lemma again) hence $N/\mathfrak m N \to M/\mathfrak m M$
+is not surjective. Thus $f$ is an essential surjection.
+\end{proof}
+
+
+
+
+
+
+
+\section{Injective hulls}
+\label{section-injective-hull}
+
+\noindent
+In this section we briefly discuss injective hulls.
+
+\begin{definition}
+\label{definition-injective-hull}
+Let $R$ be a ring. An injection $M \to I$ of $R$-modules is said
+to be an {\it injective hull} if $I$ is an injective $R$-module and
+$M \to I$ is an essential injection.
+\end{definition}
+
+\noindent
+Injective hulls always exist.
+
+\begin{lemma}
+\label{lemma-injective-hull}
+Let $R$ be a ring. Any $R$-module has an injective hull.
+\end{lemma}
+
+\begin{proof}
+Let $M$ be an $R$-module. By
+More on Algebra, Section \ref{more-algebra-section-injectives-modules}
+the category of $R$-modules has enough injectives.
+Choose an injection $M \to I$ with $I$ an injective $R$-module.
+Consider the set $\mathcal{S}$ of submodules $M \subset E \subset I$
+such that $E$ is an essential extension of $M$. We order $\mathcal{S}$
+by inclusion. If $\{E_\alpha\}$ is a totally ordered subset
+of $\mathcal{S}$, then $\bigcup E_\alpha$ is an essential extension of $M$
+too (Lemma \ref{lemma-union-essential-extensions}).
+Thus we can apply Zorn's lemma and find a maximal element
+$E \in \mathcal{S}$. We claim $M \subset E$ is an injective hull, i.e.,
+$E$ is an injective $R$-module. This follows from
+Lemma \ref{lemma-essential-extensions-in-injective}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-hull-unique}
+Let $R$ be a ring. Let $M$, $N$ be $R$-modules and let $M \to E$
+and $N \to E'$ be injective hulls. Then
+\begin{enumerate}
+\item for any $R$-module map $\varphi : M \to N$ there exists an
+$R$-module map $\psi : E \to E'$ such that
+$$
+\xymatrix{
+M \ar[r] \ar[d]_\varphi & E \ar[d]^\psi \\
+N \ar[r] & E'
+}
+$$
+commutes,
+\item if $\varphi$ is injective, then $\psi$ is injective,
+\item if $\varphi$ is an essential injection, then $\psi$ is an isomorphism,
+\item if $\varphi$ is an isomorphism, then $\psi$ is an isomorphism,
+\item if $M \to I$ is an embedding of $M$ into an injective $R$-module,
+then there is an isomorphism $I \cong E \oplus I'$ compatible with
+the embeddings of $M$.
+\end{enumerate}
+In particular, the injective hull $E$ of $M$ is unique up to isomorphism.
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from the fact that $E'$ is an injective $R$-module.
+Part (2) follows as $\Ker(\psi) \cap M = 0$
+and $E$ is an essential extension of $M$.
+Assume $\varphi$ is an essential injection. Then
+$E \cong \psi(E) \subset E'$ by (2) which implies
+$E' = \psi(E) \oplus E''$ because $E$ is injective.
+Since $E'$ is an essential extension of
+$M$ (Lemma \ref{lemma-essential}) we get $E'' = 0$.
+Part (4) is a special case of (3).
+Assume $M \to I$ as in (5).
+Choose a map $\alpha : E \to I$ extending the map $M \to I$.
+Arguing as before we see that $\alpha$ is injective.
+Thus as before $\alpha(E)$ splits off from $I$.
+This proves (5).
+\end{proof}
+
+\begin{example}
+\label{example-injective-hull-domain}
+Let $R$ be a domain with fraction field $K$. Then $R \subset K$ is an
+injective hull of $R$. Namely, by
+Example \ref{example-reduced-ring-injective} we see that $K$ is an injective
+$R$-module and by Lemma \ref{lemma-essential-extension} we see that
+$R \subset K$ is an essential extension.
+\end{example}
+
+\begin{definition}
+\label{definition-indecomposable}
+An object $X$ of an additive category is called {\it indecomposable}
+if it is nonzero and if $X = Y \oplus Z$, then either $Y = 0$ or $Z = 0$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-indecomposable-injective}
+Let $R$ be a ring. Let $E$ be an indecomposable injective $R$-module.
+Then
+\begin{enumerate}
+\item $E$ is the injective hull of any nonzero submodule of $E$,
+\item the intersection of any two nonzero submodules of $E$ is nonzero,
+\item $\text{End}_R(E)$ is a (possibly noncommutative) local ring whose
+maximal ideal consists of the $\varphi : E \to E$ whose kernel is
+nonzero, and
+\item the set of zerodivisors on $E$ is a prime ideal $\mathfrak p$ of $R$
+and $E$ is an injective $R_\mathfrak p$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from Lemma \ref{lemma-injective-hull-unique}.
+Part (2) follows from part (1) and the definition of injective hulls.
+
+\medskip\noindent
+Proof of (3). Set $A = \text{End}_R(E)$ and
+$I = \{\varphi \in A \mid \Ker(\varphi) \not = 0\}$.
+The statement means that $I$ is a two sided ideal and
+that any $\varphi \in A$, $\varphi \not \in I$ is invertible.
+Suppose $\varphi$ and $\psi$ are not injective.
+Then $\Ker(\varphi) \cap \Ker(\psi)$ is nonzero
+by (2). Hence $\varphi + \psi \in I$. It follows that $I$
+is a two sided ideal. If $\varphi \in A$, $\varphi \not \in I$,
+then $E \cong \varphi(E) \subset E$ is an injective submodule,
+hence $E = \varphi(E)$ because $E$ is indecomposable.
+
+\medskip\noindent
+Proof of (4). Consider the ring map $R \to A$ and let $\mathfrak p \subset R$
+be the inverse image of the maximal ideal $I$. Then it is clear
+that $\mathfrak p$ is a prime ideal and that $R \to A$ extends to
+$R_\mathfrak p \to A$. Thus $E$ is an $R_\mathfrak p$-module.
+It follows from Lemma \ref{lemma-injective-epimorphism} that $E$ is injective
+as an $R_\mathfrak p$-module.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-hull-indecomposable}
+Let $\mathfrak p \subset R$ be a prime of a ring $R$.
+Let $E$ be the injective hull of $R/\mathfrak p$. Then
+\begin{enumerate}
+\item $E$ is indecomposable,
+\item $E$ is the injective hull of $\kappa(\mathfrak p)$,
+\item $E$ is the injective hull of $\kappa(\mathfrak p)$
+over the ring $R_\mathfrak p$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-essential-extension} the inclusion
+$R/\mathfrak p \subset \kappa(\mathfrak p)$ is an essential
+extension. Then Lemma \ref{lemma-injective-hull-unique}
+shows (2) holds. For $f \in R$, $f \not \in \mathfrak p$
+the map $f : \kappa(\mathfrak p) \to \kappa(\mathfrak p)$ is an isomorphism
+hence the map $f : E \to E$ is an isomorphism,
+see Lemma \ref{lemma-injective-hull-unique}.
+Thus $E$ is an $R_\mathfrak p$-module. It is injective
+as an $R_\mathfrak p$-module by Lemma \ref{lemma-injective-epimorphism}.
+Finally, let $E' \subset E$ be a nonzero injective $R$-submodule.
+Then $J = (R/\mathfrak p) \cap E'$ is nonzero. After shrinking $E'$
+we may assume that $E'$ is the injective hull of $J$ (see
+Lemma \ref{lemma-injective-hull-unique} for example).
+Observe that $R/\mathfrak p$ is an essential extension of $J$ for example by
+Lemma \ref{lemma-essential-extension}. Hence $E' \to E$
+is an isomorphism by Lemma \ref{lemma-injective-hull-unique} part (3).
+Hence $E$ is indecomposable.
+\end{proof}
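+
+\noindent
+A classical example: take $R = \mathbf{Z}$ and $\mathfrak p = (p)$ for a
+prime number $p$. Then the injective hull of
+$R/\mathfrak p = \mathbf{Z}/p\mathbf{Z}$ is the Pr\"ufer group
+$$
+E = \mathbf{Z}[1/p]/\mathbf{Z}
+$$
+Namely, $E$ is divisible, hence injective as a $\mathbf{Z}$-module, and
+every nonzero subgroup of $E$ contains the class of $1/p$, so that
+$\mathbf{Z}/p\mathbf{Z} \subset E$ is an essential extension. In agreement
+with the lemma, $E$ is indecomposable and is a module over
+$R_\mathfrak p = \mathbf{Z}_{(p)}$.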
+
+\begin{lemma}
+\label{lemma-indecomposable-injective-noetherian}
+Let $R$ be a Noetherian ring. Let $E$ be an indecomposable injective
+$R$-module. Then there exists a prime ideal $\mathfrak p$ of $R$ such that
+$E$ is the injective hull of $\kappa(\mathfrak p)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p$ be the prime ideal found in
+Lemma \ref{lemma-indecomposable-injective}.
+Say $\mathfrak p = (f_1, \ldots, f_r)$.
+Pick a nonzero element $x \in \bigcap \Ker(f_i : E \to E)$,
+see Lemma \ref{lemma-indecomposable-injective}.
+Then $(R_\mathfrak p)x$ is a module isomorphic to $\kappa(\mathfrak p)$
+inside $E$. We conclude by Lemma \ref{lemma-indecomposable-injective}.
+\end{proof}
+
+\begin{proposition}[Structure of injective modules over Noetherian rings]
+\label{proposition-structure-injectives-noetherian}
+Let $R$ be a Noetherian ring.
+Every injective module is a direct sum of indecomposable injective modules.
+Every indecomposable injective module is the injective hull of
+the residue field at a prime.
+\end{proposition}
+
+\begin{proof}
+The second statement is Lemma \ref{lemma-indecomposable-injective-noetherian}.
+For the first statement, let $I$ be an injective $R$-module.
+We will use transfinite recursion to construct $I_\alpha \subset I$
+for ordinals $\alpha$ which are direct sums of indecomposable injective
+$R$-modules $E_{\beta + 1}$ for $\beta < \alpha$.
+For $\alpha = 0$ we let $I_0 = 0$. Suppose given an ordinal $\alpha$
+such that $I_\alpha$ has been constructed. Then $I_\alpha$ is an
+injective $R$-module by Lemma \ref{lemma-sum-injective-modules}.
+Hence $I \cong I_\alpha \oplus I'$. If $I' = 0$ we are done.
+If not, then $I'$ has an associated prime by
+Algebra, Lemma \ref{algebra-lemma-ass-zero}.
+Thus $I'$ contains a copy of $R/\mathfrak p$ for some prime $\mathfrak p$.
+Hence $I'$ contains an indecomposable submodule $E$ by
+Lemmas \ref{lemma-injective-hull-unique} and
+\ref{lemma-injective-hull-indecomposable}. Set $E_{\alpha + 1} = E$ and
+$I_{\alpha + 1} = I_\alpha \oplus E_{\alpha + 1}$.
+If $\alpha$ is a limit ordinal and $I_\beta$ has been constructed
+for $\beta < \alpha$, then we set
+$I_\alpha = \bigcup_{\beta < \alpha} I_\beta$.
+Observe that $I_\alpha = \bigoplus_{\beta < \alpha} E_{\beta + 1}$.
+The construction must stop, for otherwise the $I_\alpha$ would form a
+strictly increasing chain of submodules of $I$ indexed by all ordinals,
+which is impossible for reasons of cardinality. Thus $I = I_\alpha$ for
+some ordinal $\alpha$ and this concludes the proof.
+\end{proof}
+
+
+
+\section{Duality over Artinian local rings}
+\label{section-artinian}
+
+\noindent
+Let $(R, \mathfrak m, \kappa)$ be an artinian local ring.
+Recall that this implies $R$ is Noetherian and that $R$ has finite
+length as an $R$-module. Moreover an $R$-module is finite if and
+only if it has finite length. We will use these facts without
+further mention in this section. Please see
+Algebra, Sections \ref{algebra-section-length} and
+\ref{algebra-section-artinian}
+and
+Algebra, Proposition \ref{algebra-proposition-dimension-zero-ring}
+for more details.
+
+\begin{lemma}
+\label{lemma-finite}
+Let $(R, \mathfrak m, \kappa)$ be an artinian local ring.
+Let $E$ be an injective hull of $\kappa$. For every finite
+$R$-module $M$ we have
+$$
+\text{length}_R(M) = \text{length}_R(\Hom_R(M, E))
+$$
+In particular, the injective hull $E$ of $\kappa$ is a finite $R$-module.
+\end{lemma}
+
+\begin{proof}
+Because $E$ is an essential extension of $\kappa$ we have
+$\kappa = E[\mathfrak m]$ where $E[\mathfrak m]$ is the
+$\mathfrak m$-torsion in $E$ (notation as in More on Algebra, Section
+\ref{more-algebra-section-torsion}).
+Hence $\Hom_R(\kappa, E) \cong \kappa$ and the equality of lengths
+holds for $M = \kappa$. We prove the displayed equality of the lemma
+by induction on the length of $M$. If $M$ is nonzero there exists a surjection
+$M \to \kappa$ with kernel $M'$. Since the functor $M \mapsto \Hom_R(M, E)$
+is exact we obtain a short exact sequence
+$$
+0 \to \Hom_R(\kappa, E) \to \Hom_R(M, E) \to \Hom_R(M', E) \to 0.
+$$
+Additivity of length for this sequence and the sequence
+$0 \to M' \to M \to \kappa \to 0$ and the equality for $M'$ (induction
+hypothesis) and $\kappa$ implies the equality for $M$.
+The final statement of the lemma follows as $E = \Hom_R(R, E)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-evaluate}
+Let $(R, \mathfrak m, \kappa)$ be an artinian local ring.
+Let $E$ be an injective hull of $\kappa$.
+For any finite $R$-module $M$ the evaluation map
+$$
+M \longrightarrow \Hom_R(\Hom_R(M, E), E)
+$$
+is an isomorphism. In particular $R = \Hom_R(E, E)$.
+\end{lemma}
+
+\begin{proof}
+Observe that the displayed arrow is injective. Namely, if $x \in M$ is
+a nonzero element, then there is a nonzero map $Rx \to \kappa$ which
+we can extend to a map $\varphi : M \to E$ that does not vanish on $x$.
+Since the source and target of the arrow have the same length by
+Lemma \ref{lemma-finite}
+we conclude it is an isomorphism. The final statement follows
+on taking $M = R$.
+\end{proof}
+
+\noindent
+To state the next lemma, we denote by $\text{Mod}^{fg}_R$ the category of
+finite modules over a ring $R$.
+
+\begin{lemma}
+\label{lemma-duality}
+Let $(R, \mathfrak m, \kappa)$ be an artinian local ring.
+Let $E$ be an injective hull of $\kappa$.
+The functor $D(-) = \Hom_R(-, E)$ induces an exact anti-equivalence
+$\text{Mod}^{fg}_R \to \text{Mod}^{fg}_R$ and
+$D \circ D \cong \text{id}$.
+\end{lemma}
+
+\begin{proof}
+Note that $D$ sends finite $R$-modules to finite $R$-modules by
+Lemma \ref{lemma-finite} and is exact because $E$ is injective.
+We have seen that $D \circ D \cong \text{id}$ on $\text{Mod}^{fg}_R$
+in Lemma \ref{lemma-evaluate}. It follows immediately that
+$D$ is an anti-equivalence.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-duality-torsion-cotorsion}
+Assumptions and notation as in Lemma \ref{lemma-duality}.
+Let $I \subset R$ be an ideal and $M$ a finite $R$-module.
+Then
+$$
+D(M[I]) = D(M)/ID(M) \quad\text{and}\quad D(M/IM) = D(M)[I]
+$$
+\end{lemma}
+
+\begin{proof}
+Say $I = (f_1, \ldots, f_t)$. Consider the map
+$$
+M^{\oplus t} \xrightarrow{f_1, \ldots, f_t} M
+$$
+with cokernel $M/IM$. Applying the exact functor $D$ we conclude that
+$D(M/IM)$ is $D(M)[I]$. The other case is proved in the same way.
+\end{proof}
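+
+\noindent
+As an illustration, let $k$ be a field and $R = k[x]/(x^n)$ with maximal
+ideal $\mathfrak m = (x)$. Then $R$ is an injective hull of $\kappa = k$
+over itself: the socle $(x^{n - 1}) \cong \kappa$ is contained in every
+nonzero ideal $(x^i)$, so $\kappa \subset R$ is an essential extension,
+and $R$ is injective as an $R$-module by Baer's criterion, because any
+$R$-module map $(x^i) \to R$ sends $x^i$ into $(x^i)$ and hence extends
+to $R$. Thus $D(M) = \Hom_R(M, R)$ and for instance
+$$
+D(R/(x^i)) = R[(x^i)] = (x^{n - i}) \cong R/(x^i)
+$$
+in accordance with Lemma \ref{lemma-duality-torsion-cotorsion} and the
+length equality of Lemma \ref{lemma-finite}.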
+
+
+
+\section{Injective hull of the residue field}
+\label{section-hull-residue-field}
+
+\noindent
+Most of our results will be for Noetherian local rings in this section.
+
+\begin{lemma}
+\label{lemma-quotient}
+Let $R \to S$ be a surjective map of local rings with kernel $I$.
+Let $E$ be the injective hull of the residue field of $R$ over $R$.
+Then $E[I]$ is the injective hull of the residue field of $S$ over $S$.
+\end{lemma}
+
+\begin{proof}
+Observe that $E[I] = \Hom_R(S, E)$ as $S = R/I$. Hence $E[I]$ is an injective
+$S$-module by Lemma \ref{lemma-hom-injective}. Since $E$ is an essential
+extension of $\kappa = R/\mathfrak m_R$ it follows that $E[I]$ is an
+essential extension of $\kappa$ as well. The result follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion-submodule-sum-injective-hulls}
+Let $(R, \mathfrak m, \kappa)$ be a local ring.
+Let $E$ be the injective hull of $\kappa$.
+Let $M$ be an $\mathfrak m$-power torsion $R$-module
+with $n = \dim_\kappa(M[\mathfrak m]) < \infty$.
+Then $M$ is isomorphic to a submodule of $E^{\oplus n}$.
+\end{lemma}
+
+\begin{proof}
+Observe that $E^{\oplus n}$ is the injective hull of
+$\kappa^{\oplus n} = M[\mathfrak m]$. Thus there is an $R$-module map
+$M \to E^{\oplus n}$ which is injective on $M[\mathfrak m]$.
+Since $M$ is $\mathfrak m$-power torsion, the inclusion
+$M[\mathfrak m] \subset M$ is an essential extension
+(for example by Lemma \ref{lemma-essential-extension}).
+Hence the kernel of $M \to E^{\oplus n}$ is zero.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-union-artinian}
+Let $(R, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Let $E$ be an injective hull of $\kappa$ over $R$.
+Let $E_n$ be an injective hull of $\kappa$ over $R/\mathfrak m^n$.
+Then $E = \bigcup E_n$ and $E_n = E[\mathfrak m^n]$.
+\end{lemma}
+
+\begin{proof}
+We have $E_n = E[\mathfrak m^n]$ by Lemma \ref{lemma-quotient}.
+We have $E = \bigcup E_n$ because $\bigcup E_n = E[\mathfrak m^\infty]$
+is an injective $R$-submodule which contains $\kappa$, see
+Lemma \ref{lemma-injective-module-divide}.
+\end{proof}
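+
+\begin{example}
+\label{example-hull-p-adic}
+A standard example: for $R = \mathbf{Z}_p$ (or $R = \mathbf{Z}_{(p)}$)
+with residue field $\kappa = \mathbf{F}_p$ the injective hull of
+$\kappa$ is $E = \mathbf{Q}_p/\mathbf{Z}_p$. Here
+$E_n = E[\mathfrak m^n] = \frac{1}{p^n}\mathbf{Z}_p/\mathbf{Z}_p
+\cong \mathbf{Z}/p^n\mathbf{Z}$
+is the injective hull of $\kappa$ over $\mathbf{Z}/p^n\mathbf{Z}$
+and visibly $E = \bigcup E_n$.
+\end{example}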
+
+\noindent
+The following lemma tells us that the injective hull of the residue
+field of a Noetherian local ring depends only on the completion.
+
+\begin{lemma}
+\label{lemma-compare}
+Let $R \to S$ be a flat local homomorphism of local Noetherian rings
+such that $R/\mathfrak m_R \cong S/\mathfrak m_R S$.
+Then the injective hull of the residue field
+of $R$ is the injective hull of the residue field of $S$.
+\end{lemma}
+
+\begin{proof}
+Note that $\mathfrak m_RS = \mathfrak m_S$ as the quotient by the former
+is a field. Set $\kappa = R/\mathfrak m_R = S/\mathfrak m_S$.
+Let $E_R$ be the injective hull of $\kappa$ over $R$.
+Let $E_S$ be the injective hull of $\kappa$ over $S$.
+Observe that $E_S$ is an injective $R$-module by
+Lemma \ref{lemma-injective-flat}.
+Choose an extension $E_R \to E_S$ of the identification of
+residue fields. This map is an isomorphism by
+Lemma \ref{lemma-union-artinian}
+because $R \to S$ induces an isomorphism
+$R/\mathfrak m_R^n \to S/\mathfrak m_S^n$ for all $n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-endos}
+Let $(R, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Let $E$ be an injective hull of $\kappa$ over $R$. Then
+$\Hom_R(E, E)$ is canonically isomorphic to the completion of $R$.
+\end{lemma}
+
+\begin{proof}
+Write $E = \bigcup E_n$ with $E_n = E[\mathfrak m^n]$ as in
+Lemma \ref{lemma-union-artinian}. Any endomorphism of $E$
+preserves this filtration. Hence
+$$
+\Hom_R(E, E) = \lim \Hom_R(E_n, E_n)
+$$
+The lemma follows as
+$\Hom_R(E_n, E_n) = \Hom_{R/\mathfrak m^n}(E_n, E_n) = R/\mathfrak m^n$
+by Lemma \ref{lemma-evaluate}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-hull-has-dcc}
+Let $(R, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Let $E$ be an injective hull of $\kappa$ over $R$. Then
+$E$ satisfies the descending chain condition.
+\end{lemma}
+
+\begin{proof}
+If $E \supset M_1 \supset M_2 \supset \ldots$ is a sequence of submodules, then
+$$
+\Hom_R(E, E) \to \Hom_R(M_1, E) \to \Hom_R(M_2, E) \to \ldots
+$$
+is a sequence of surjections. By Lemma \ref{lemma-endos} each of these is a
+module over the completion $R^\wedge = \Hom_R(E, E)$.
+Since $R^\wedge$ is Noetherian
+(Algebra, Lemma \ref{algebra-lemma-completion-Noetherian-Noetherian})
+the sequence stabilizes: $\Hom_R(M_n, E) = \Hom_R(M_{n + 1}, E) = \ldots$.
+Since $E$ is injective, this can only happen if $\Hom_R(M_n/M_{n + 1}, E)$
+is zero. However, if $M_n/M_{n + 1}$ is nonzero, then it contains a
+nonzero element annihilated by $\mathfrak m$, because $E$ is
+$\mathfrak m$-power torsion by Lemma \ref{lemma-union-artinian}.
+In this case $M_n/M_{n + 1}$ has a nonzero map into $E$, contradicting
+the assumed vanishing. This finishes the proof.
+\end{proof}
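+
+\begin{example}
+\label{example-dcc-p-adic}
+For $R = \mathbf{Z}_p$ and $E = \mathbf{Q}_p/\mathbf{Z}_p$ the proper
+submodules of $E$ are exactly the finite modules
+$0 \subset \mathbf{Z}/p \subset \mathbf{Z}/p^2 \subset \ldots$
+Hence any descending chain of submodules of $E$ stabilizes, while the
+displayed chain shows $E$ does not satisfy the ascending chain condition.
+\end{example}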
+
+\begin{lemma}
+\label{lemma-describe-categories}
+Let $(R, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Let $E$ be an injective hull of $\kappa$.
+\begin{enumerate}
+\item For an $R$-module $M$ the following are equivalent:
+\begin{enumerate}
+\item $M$ satisfies the ascending chain condition,
+\item $M$ is a finite $R$-module, and
+\item there exist $n, m$ and an exact sequence
+$R^{\oplus m} \to R^{\oplus n} \to M \to 0$.
+\end{enumerate}
+\item For an $R$-module $M$ the following are equivalent:
+\begin{enumerate}
+\item $M$ satisfies the descending chain condition,
+\item $M$ is $\mathfrak m$-power torsion and
+$\dim_\kappa(M[\mathfrak m]) < \infty$, and
+\item there exist $n, m$ and an exact sequence
+$0 \to M \to E^{\oplus n} \to E^{\oplus m}$.
+\end{enumerate}
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We omit the proof of (1).
+
+\medskip\noindent
+Let $M$ be an $R$-module with the descending chain condition. Let $x \in M$.
+Then $\mathfrak m^n x$ is a descending chain of submodules, hence stabilizes.
+Thus $\mathfrak m^nx = \mathfrak m^{n + 1}x$ for some $n$. By Nakayama's lemma
+(Algebra, Lemma \ref{algebra-lemma-NAK}) this implies $\mathfrak m^n x = 0$,
+i.e., $x$ is $\mathfrak m$-power torsion. Since $M[\mathfrak m]$ is a vector
+space over $\kappa$ it has to be finite dimensional in order to have the
+descending chain condition.
+
+\medskip\noindent
+Assume that $M$ is $\mathfrak m$-power torsion and has a finite dimensional
+$\mathfrak m$-torsion submodule $M[\mathfrak m]$. By
+Lemma \ref{lemma-torsion-submodule-sum-injective-hulls}
+we see that $M$ is a submodule of $E^{\oplus n}$ for some $n$.
+Consider the quotient $N = E^{\oplus n}/M$. By
+Lemma \ref{lemma-injective-hull-has-dcc} the module $E$ has the
+descending chain condition hence so do $E^{\oplus n}$ and $N$.
+Therefore $N$ satisfies (2)(a) which implies $N$ satisfies
+(2)(b) by the second paragraph of the proof. Thus by
+Lemma \ref{lemma-torsion-submodule-sum-injective-hulls}
+again we see that $N$ is a submodule of $E^{\oplus m}$ for some $m$.
+Thus we have a short exact sequence
+$0 \to M \to E^{\oplus n} \to E^{\oplus m}$.
+
+\medskip\noindent
+Assume we have a short exact sequence
+$0 \to M \to E^{\oplus n} \to E^{\oplus m}$.
+Since $E$ satisfies the descending chain condition by
+Lemma \ref{lemma-injective-hull-has-dcc}
+so does $M$.
+\end{proof}
+
+\begin{proposition}[Matlis duality]
+\label{proposition-matlis}
+Let $(R, \mathfrak m, \kappa)$ be a complete local Noetherian ring.
+Let $E$ be an injective hull of $\kappa$ over $R$. The functor
+$D(-) = \Hom_R(-, E)$ induces an anti-equivalence
+$$
+\left\{
+\begin{matrix}
+R\text{-modules with the} \\
+\text{descending chain condition}
+\end{matrix}
+\right\}
+\longleftrightarrow
+\left\{
+\begin{matrix}
+R\text{-modules with the} \\
+\text{ascending chain condition}
+\end{matrix}
+\right\}
+$$
+and we have $D \circ D = \text{id}$ on either side of the equivalence.
+\end{proposition}
+
+\begin{proof}
+By Lemma \ref{lemma-endos} we have $R = \Hom_R(E, E) = D(E)$.
+Of course we have $E = \Hom_R(R, E) = D(R)$. Since $E$ is injective
+the functor $D$ is exact. The result now follows immediately from the
+description of the categories in
+Lemma \ref{lemma-describe-categories}.
+\end{proof}
+
+\begin{remark}
+\label{remark-matlis}
+Let $(R, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Let $E$ be an injective hull of $\kappa$ over $R$. Here is an
+addendum to Matlis duality: If $N$ is an $\mathfrak m$-power torsion module
+and $M = \Hom_R(N, E)$ is a finite module over the completion of $R$,
+then $N$ satisfies the descending chain condition. Namely, for any
+submodules $N'' \subset N' \subset N$ with $N'' \not = N'$, we can
+find an embedding $\kappa \subset N'/N''$ and hence a nonzero
+map $N' \to E$ annihilating $N''$ which we can extend to a map $N \to E$
+annihilating $N''$. Thus $N' \mapsto M' = \Hom_R(N/N', E) \subset M$
+is a strictly inclusion reversing map from submodules of $N$ to submodules
+of $M$, whence the conclusion.
+\end{remark}
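+
+\begin{example}
+\label{example-matlis-p-adic}
+For $R = \mathbf{Z}_p$ and $E = \mathbf{Q}_p/\mathbf{Z}_p$ the functor
+$D(-) = \Hom_{\mathbf{Z}_p}(-, \mathbf{Q}_p/\mathbf{Z}_p)$ exchanges
+$\mathbf{Z}_p$ and $\mathbf{Q}_p/\mathbf{Z}_p$ and sends
+$\mathbf{Z}/p^n\mathbf{Z}$ to itself. Thus Matlis duality matches the
+finite module $\mathbf{Z}_p^{\oplus a} \oplus \bigoplus \mathbf{Z}/p^{n_i}$
+with the module
+$(\mathbf{Q}_p/\mathbf{Z}_p)^{\oplus a} \oplus \bigoplus \mathbf{Z}/p^{n_i}$
+satisfying the descending chain condition.
+\end{example}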
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Deriving torsion}
+\label{section-bad-local-cohomology}
+
+\noindent
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal
+(if $I$ is not finitely generated perhaps a different definition
+should be used). Let $Z = V(I) \subset \Spec(A)$. Recall that the
+category $I^\infty\text{-torsion}$ of $I$-power torsion modules
+only depends on the closed subset $Z$ and not on the choice of the
+finitely generated ideal $I$ such that $Z = V(I)$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-local-cohomology-closed}.
+In this section we will consider the functor
+$$
+H^0_{I} : \text{Mod}_A \longrightarrow I^\infty\text{-torsion},\quad
+M \longmapsto M[I^\infty] = \bigcup M[I^n]
+$$
+which sends $M$ to its submodule of $I$-power torsion elements.
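+
+\noindent
+For example, if $A = \mathbf{Z}$, $I = (p)$, and $M = \mathbf{Q}/\mathbf{Z}$,
+then $H^0_I(M) = M[p^\infty] = \mathbf{Z}[1/p]/\mathbf{Z}$ is the
+$p$-primary part of $\mathbf{Q}/\mathbf{Z}$.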
+
+\medskip\noindent
+Let $A$ be a ring and let $I$ be a finitely generated ideal.
+Note that $I^\infty\text{-torsion}$ is a Grothendieck
+abelian category (direct sums exist, filtered colimits are
+exact, and $\bigoplus A/I^n$ is a generator by
+More on Algebra, Lemma \ref{more-algebra-lemma-I-power-torsion-presentation}).
+Hence the derived category $D(I^\infty\text{-torsion})$ exists, see
+Injectives, Remark \ref{injectives-remark-existence-D}.
+Our functor $H^0_I$ is left exact and has a derived extension
+which we will denote
+$$
+R\Gamma_I : D(A) \longrightarrow D(I^\infty\text{-torsion}).
+$$
+{\bf Warning:} this functor does not deserve the name
+local cohomology unless the ring $A$ is Noetherian.
+The functors $H^0_I$, $R\Gamma_I$, and the satellites $H^p_I$
+only depend on the closed subset $Z \subset \Spec(A)$ and not
+on the choice of the finitely generated ideal $I$ such that
+$V(I) = Z$. However, we insist on using the subscript $I$ for
+the functors above as the notation $R\Gamma_Z$ is going
+to be used for a different functor, see
+(\ref{equation-local-cohomology}), which
+agrees with the functor $R\Gamma_I$ only (as far as we know)
+in case $A$ is Noetherian
+(see Lemma \ref{lemma-local-cohomology-noetherian}).
+
+\begin{lemma}
+\label{lemma-adjoint}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+The functor $R\Gamma_I$ is right adjoint to the functor
+$D(I^\infty\text{-torsion}) \to D(A)$.
+\end{lemma}
+
+\begin{proof}
+This follows from the fact that taking $I$-power torsion submodules
+is the right adjoint to the inclusion functor
+$I^\infty\text{-torsion} \to \text{Mod}_A$. See
+Derived Categories, Lemma \ref{derived-lemma-derived-adjoint-functors}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-cohomology-ext}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+For any object $K$ of $D(A)$ we have
+$$
+R\Gamma_I(K) = \text{hocolim}\ R\Hom_A(A/I^n, K)
+$$
+in $D(A)$ and
+$$
+R^q\Gamma_I(K) = \colim_n \Ext_A^q(A/I^n, K)
+$$
+as modules for all $q \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Let $J^\bullet$ be a K-injective complex representing $K$. Then
+$$
+R\Gamma_I(K) = J^\bullet[I^\infty] = \colim J^\bullet[I^n] =
+\colim \Hom_A(A/I^n, J^\bullet)
+$$
+where the first equality is the definition of $R\Gamma_I(K)$.
+By Derived Categories, Lemma \ref{derived-lemma-colim-hocolim}
+we obtain the first displayed equality in the statement of the lemma.
+The second displayed equality in the statement of the lemma then
+follows because $H^q(\Hom_A(A/I^n, J^\bullet)) = \Ext^q_A(A/I^n, K)$
+and because filtered colimits are exact in the category of abelian
+groups.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bad-local-cohomology-vanishes}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+Let $K^\bullet$ be a complex of $A$-modules such that
+$f : K^\bullet \to K^\bullet$ is an isomorphism for some
+$f \in I$, i.e., $K^\bullet$ is a complex of $A_f$-modules. Then
+$R\Gamma_I(K^\bullet) = 0$.
+\end{lemma}
+
+\begin{proof}
+Namely, in this case the cohomology modules of $R\Gamma_I(K^\bullet)$
+are $f$-power torsion modules on which $f$ acts as an automorphism.
+Hence the cohomology modules are zero and the object is zero.
+\end{proof}
+
+\noindent
+Let $A$ be a ring and $I \subset A$ a finitely generated ideal.
+By More on Algebra, Lemma \ref{more-algebra-lemma-I-power-torsion}
+the category of $I$-power torsion modules is a Serre subcategory
+of the category of all $A$-modules, hence there is a functor
+\begin{equation}
+\label{equation-compare-torsion}
+D(I^\infty\text{-torsion}) \to D_{I^\infty\text{-torsion}}(A)
+\end{equation}
+see Derived Categories, Section \ref{derived-section-triangulated-sub}.
+
+\begin{lemma}
+\label{lemma-not-equal}
+Let $A$ be a ring and let $I$ be a finitely generated ideal.
+Let $M$ and $N$ be $I$-power torsion modules.
+\begin{enumerate}
+\item $\Hom_{D(A)}(M, N) = \Hom_{D({I^\infty\text{-torsion}})}(M, N)$,
+\item $\Ext^1_{D(A)}(M, N) =
+\Ext^1_{D({I^\infty\text{-torsion}})}(M, N)$,
+\item $\Ext^2_{D({I^\infty\text{-torsion}})}(M, N) \to
+\Ext^2_{D(A)}(M, N)$ is not surjective in general,
+\item (\ref{equation-compare-torsion}) is not an equivalence in general.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (1) and (2) follow immediately from the fact that $I$-power torsion
+forms a Serre subcategory of $\text{Mod}_A$. Part (4) follows from
+part (3).
+
+\medskip\noindent
+For part (3) let $A$ be a ring with an element $f \in A$ such that
+$A[f]$ contains a nonzero element $x$ annihilated by $f$ and
+$A$ contains elements $x_n$ with $f^nx_n = x$. Such a ring $A$
+exists because we can take
+$$
+A = \mathbf{Z}[f, x, x_n]/(fx, f^nx_n - x)
+$$
+Given $A$ set $I = (f)$. Then the exact sequence
+$$
+0 \to A[f] \to A \xrightarrow{f} A \to A/fA \to 0
+$$
+defines an element in $\Ext^2_A(A/fA, A[f])$. We claim this
+element does not come from an element of
+$\Ext^2_{D(f^\infty\text{-torsion})}(A/fA, A[f])$.
+Namely, if it did, then there would be an exact sequence
+$$
+0 \to A[f] \to M \to N \to A/fA \to 0
+$$
+where $M$ and $N$ are $f$-power torsion modules defining the same
+$2$-extension class. Since $A \to A$ is a complex of free modules
+and since the $2$-extension classes are the same
+we would be able to find a map
+$$
+\xymatrix{
+0 \ar[r] &
+A[f] \ar[r] \ar[d] &
+A \ar[r] \ar[d]_\varphi &
+A \ar[r] \ar[d]_\psi &
+A/fA \ar[r] \ar[d] & 0 \\
+0 \ar[r] &
+A[f] \ar[r] &
+M \ar[r] &
+N \ar[r] &
+A/fA \ar[r] & 0
+}
+$$
+(some details omitted). Then we could replace $M$ by the image of
+$\varphi$ and $N$ by the image of $\psi$. Then $M$ would be a cyclic
+module, hence $f^n M = 0$ for some $n$. Considering $\varphi(x_{n + 1})$
+we get a contradiction with the fact that $f^{n + 1}x_{n + 1} = x$ is
+nonzero in $A[f]$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Local cohomology}
+\label{section-local-cohomology}
+
+\noindent
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+Set $Z = V(I) \subset \Spec(A)$. We will construct a functor
+\begin{equation}
+\label{equation-local-cohomology}
+R\Gamma_Z : D(A) \longrightarrow D_{I^\infty\text{-torsion}}(A)
+\end{equation}
+which is right adjoint to the inclusion functor. For notation
+see Section \ref{section-bad-local-cohomology}. The cohomology
+modules of $R\Gamma_Z(K)$ are the {\it local cohomology groups
+of $K$ with respect to $Z$}.
+By Lemma \ref{lemma-not-equal} this functor will in general {\bf not} be
+equal to $R\Gamma_I( - )$ even viewed as functors into $D(A)$.
+In Section \ref{section-local-cohomology-noetherian}
+we will show that if $A$ is Noetherian, then the two agree.
+
+\medskip\noindent
+We will continue the discussion of local cohomology in
+the chapter on local cohomology, see
+Local Cohomology, Section \ref{local-cohomology-section-introduction}.
+For example, there we will show that $R\Gamma_Z$ computes cohomology
+with support in $Z$ for the associated complex of quasi-coherent sheaves
+on $\Spec(A)$. See Local Cohomology, Lemma
+\ref{local-cohomology-lemma-local-cohomology-is-local-cohomology}.
+
+
+\begin{lemma}
+\label{lemma-local-cohomology-adjoint}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+There exists a right adjoint $R\Gamma_Z$ (\ref{equation-local-cohomology})
+to the inclusion functor $D_{I^\infty\text{-torsion}}(A) \to D(A)$.
+In fact, if $I$ is generated by $f_1, \ldots, f_r \in A$, then we have
+$$
+R\Gamma_Z(K) =
+(A \to \prod\nolimits_{i_0} A_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} A_{f_{i_0}f_{i_1}}
+\to \ldots \to A_{f_1\ldots f_r}) \otimes_A^\mathbf{L} K
+$$
+functorially in $K \in D(A)$.
+\end{lemma}
+
+\begin{proof}
+Say $I = (f_1, \ldots, f_r)$.
+Let $K^\bullet$ be a complex of $A$-modules.
+There is a canonical map of complexes
+$$
+(A \to \prod\nolimits_{i_0} A_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} A_{f_{i_0}f_{i_1}} \to
+\ldots \to A_{f_1\ldots f_r}) \longrightarrow A.
+$$
+from the extended {\v C}ech complex to $A$.
+Tensoring with $K^\bullet$, taking associated total complex,
+we get a map
+$$
+\text{Tot}\left(
+K^\bullet \otimes_A
+(A \to \prod\nolimits_{i_0} A_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} A_{f_{i_0}f_{i_1}} \to
+\ldots \to A_{f_1\ldots f_r})\right)
+\longrightarrow
+K^\bullet
+$$
+in $D(A)$. We claim the cohomology modules of the complex on the left are
+$I$-power torsion, i.e., the LHS is an object of
+$D_{I^\infty\text{-torsion}}(A)$. Namely, we have
+$$
+(A \to \prod\nolimits_{i_0} A_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} A_{f_{i_0}f_{i_1}} \to
+\ldots \to A_{f_1\ldots f_r}) = \colim K(A, f_1^n, \ldots, f_r^n)
+$$
+by More on Algebra, Lemma
+\ref{more-algebra-lemma-extended-alternating-Cech-is-colimit-koszul}.
+Moreover, multiplication by $f_i^n$ on the complex
+$K(A, f_1^n, \ldots, f_r^n)$ is homotopic to zero by
+More on Algebra, Lemma \ref{more-algebra-lemma-homotopy-koszul}.
+Since
+$$
+H^q\left(\text{LHS}\right) =
+\colim H^q(\text{Tot}(K^\bullet \otimes_A K(A, f_1^n, \ldots, f_r^n)))
+$$
+we obtain our claim. On the other hand, if $K^\bullet$ is an
+object of $D_{I^\infty\text{-torsion}}(A)$, then the complexes
+$K^\bullet \otimes_A A_{f_{i_0} \ldots f_{i_p}}$ have vanishing
+cohomology. Hence in this case the map $\text{LHS} \to K^\bullet$
+is an isomorphism in $D(A)$. The construction
+$$
+R\Gamma_Z(K^\bullet) =
+\text{Tot}\left(
+K^\bullet \otimes_A
+(A \to \prod\nolimits_{i_0} A_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} A_{f_{i_0}f_{i_1}} \to
+\ldots \to A_{f_1\ldots f_r})\right)
+$$
+is functorial in $K^\bullet$ and defines an exact functor
+$D(A) \to D_{I^\infty\text{-torsion}}(A)$ between
+triangulated categories. It follows formally from the
+existence of the natural transformation $R\Gamma_Z \to \text{id}$
+given above and the fact that this evaluates to an isomorphism
+on $K^\bullet$ in the subcategory, that $R\Gamma_Z$ is the desired
+right adjoint.
+\end{proof}
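+
+\noindent
+For instance, if $I = (f)$ is principal, the lemma says
+$R\Gamma_Z(K) = (A \to A_f) \otimes_A^\mathbf{L} K$ with $A$ placed in
+degree $0$. Taking $A = \mathbf{Z}$, $f = p$, and $K = A$ we find
+$H^0_Z(\mathbf{Z}) = \mathbf{Z}[p^\infty] = 0$ and
+$H^1_Z(\mathbf{Z}) = \mathbf{Z}[1/p]/\mathbf{Z}$.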
+
+\begin{lemma}
+\label{lemma-local-cohomology-and-restriction}
+Let $A \to B$ be a ring homomorphism and let $I \subset A$
+be a finitely generated ideal. Set $J = IB$. Set $Z = V(I)$
+and $Y = V(J)$. Then
+$$
+R\Gamma_Z(M_A) = R\Gamma_Y(M)_A
+$$
+functorially in $M \in D(B)$. Here $(-)_A$ denotes the restriction
+functors $D(B) \to D(A)$ and
+$D_{J^\infty\text{-torsion}}(B) \to D_{I^\infty\text{-torsion}}(A)$.
+\end{lemma}
+
+\begin{proof}
+This follows from uniqueness of adjoint functors as both
+$R\Gamma_Z((-)_A)$ and $R\Gamma_Y(-)_A$
+are right adjoint to the functor $D_{I^\infty\text{-torsion}}(A) \to D(B)$,
+$K \mapsto K \otimes_A^\mathbf{L} B$.
+Alternatively, one can use the description of $R\Gamma_Z$ and $R\Gamma_Y$
+in terms of alternating {\v C}ech complexes
+(Lemma \ref{lemma-local-cohomology-adjoint}).
+Namely, if $I = (f_1, \ldots, f_r)$ then $J$ is generated by the images
+$g_1, \ldots, g_r \in B$ of $f_1, \ldots, f_r$.
+Then the statement of the lemma follows from the existence of
+a canonical isomorphism
+\begin{align*}
+& M_A \otimes_A (A \to \prod\nolimits_{i_0} A_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} A_{f_{i_0}f_{i_1}}
+\to \ldots \to A_{f_1\ldots f_r}) \\
+& =
+M \otimes_B (B \to \prod\nolimits_{i_0} B_{g_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} B_{g_{i_0}g_{i_1}}
+\to \ldots \to B_{g_1\ldots g_r})
+\end{align*}
+for any $B$-module $M$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion-change-rings}
+Let $A \to B$ be a ring homomorphism and let $I \subset A$
+be a finitely generated ideal. Set $J = IB$. Let $Z = V(I)$ and $Y = V(J)$.
+Then
+$$
+R\Gamma_Z(K) \otimes_A^\mathbf{L} B = R\Gamma_Y(K \otimes_A^\mathbf{L} B)
+$$
+functorially in $K \in D(A)$.
+\end{lemma}
+
+\begin{proof}
+Write $I = (f_1, \ldots, f_r)$. Then $J$ is generated by the images
+$g_1, \ldots, g_r \in B$ of $f_1, \ldots, f_r$. Then we have
+$$
+(A \to \prod A_{f_{i_0}} \to \ldots \to A_{f_1\ldots f_r}) \otimes_A B =
+(B \to \prod B_{g_{i_0}} \to \ldots \to B_{g_1\ldots g_r})
+$$
+as complexes of $B$-modules. Represent $K$ by a K-flat complex $K^\bullet$
+of $A$-modules. Since the total complexes associated to
+$$
+K^\bullet \otimes_A
+(A \to \prod A_{f_{i_0}} \to \ldots \to A_{f_1\ldots f_r}) \otimes_A B
+$$
+and
+$$
+K^\bullet \otimes_A B \otimes_B
+(B \to \prod B_{g_{i_0}} \to \ldots \to B_{g_1\ldots g_r})
+$$
+represent the left and right hand side of the displayed formula of the
+lemma (see Lemma \ref{lemma-local-cohomology-adjoint}) we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-cohomology-vanishes}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+Let $K^\bullet$ be a complex of $A$-modules such that
+$f : K^\bullet \to K^\bullet$ is an isomorphism for some
+$f \in I$, i.e., $K^\bullet$ is a complex of $A_f$-modules. Then
+$R\Gamma_Z(K^\bullet) = 0$.
+\end{lemma}
+
+\begin{proof}
+Namely, in this case the cohomology modules of $R\Gamma_Z(K^\bullet)$
+are $f$-power torsion modules on which $f$ acts as an automorphism.
+Hence the cohomology modules are zero and the object is zero.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion-tensor-product}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+For $K, L \in D(A)$ we have
+$$
+R\Gamma_Z(K \otimes_A^\mathbf{L} L) =
+K \otimes_A^\mathbf{L} R\Gamma_Z(L) =
+R\Gamma_Z(K) \otimes_A^\mathbf{L} L =
+R\Gamma_Z(K) \otimes_A^\mathbf{L} R\Gamma_Z(L)
+$$
+If $K$ or $L$ is in $D_{I^\infty\text{-torsion}}(A)$ then so is
+$K \otimes_A^\mathbf{L} L$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-local-cohomology-adjoint} we know that
+$R\Gamma_Z$ is given by $C \otimes^\mathbf{L} -$ for some $C \in D(A)$.
+Hence, for $K, L \in D(A)$ general we have
+$$
+R\Gamma_Z(K \otimes_A^\mathbf{L} L) =
+K \otimes^\mathbf{L} L \otimes_A^\mathbf{L} C =
+K \otimes_A^\mathbf{L} R\Gamma_Z(L)
+$$
+The other equalities follow formally from this one. This also implies
+the last statement of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-cohomology-ss}
+Let $A$ be a ring and let $I, J \subset A$ be finitely generated
+ideals. Set $Z = V(I)$ and $Y = V(J)$. Then $Z \cap Y = V(I + J)$
+and $R\Gamma_Y \circ R\Gamma_Z = R\Gamma_{Y \cap Z}$ as functors
+$D(A) \to D_{(I + J)^\infty\text{-torsion}}(A)$. For $K \in D^+(A)$
+there is a spectral sequence
+$$
+E_2^{p, q} = H^p_Y(H^q_Z(K)) \Rightarrow H^{p + q}_{Y \cap Z}(K)
+$$
+as in Derived Categories, Lemma
+\ref{derived-lemma-grothendieck-spectral-sequence}.
+\end{lemma}
+
+\begin{proof}
+There is a bit of abuse of notation in the lemma as strictly
+speaking we cannot compose $R\Gamma_Y$ and $R\Gamma_Z$. The
+meaning of the statement is simply that we are composing
+$R\Gamma_Z$ with the inclusion $D_{I^\infty\text{-torsion}}(A) \to D(A)$
+and then with $R\Gamma_Y$. Then the equality
+$R\Gamma_Y \circ R\Gamma_Z = R\Gamma_{Y \cap Z}$
+follows from the fact that
+$$
+D_{I^\infty\text{-torsion}}(A) \to D(A) \xrightarrow{R\Gamma_Y}
+D_{(I + J)^\infty\text{-torsion}}(A)
+$$
+is right adjoint to the inclusion
+$D_{(I + J)^\infty\text{-torsion}}(A) \to D_{I^\infty\text{-torsion}}(A)$.
+Alternatively one can prove the formula using
+Lemma \ref{lemma-local-cohomology-adjoint}
+and the fact that the tensor product of
+extended {\v C}ech complexes on $f_1, \ldots, f_r$ and
+$g_1, \ldots, g_m$ is the extended {\v C}ech complex on
+$f_1, \ldots, f_r, g_1, \ldots, g_m$.
+The final assertion follows from this and the cited lemma.
+\end{proof}
+
+\noindent
+The following lemma is the analogue of
+More on Algebra, Lemma
+\ref{more-algebra-lemma-restriction-derived-complete-equivalence}
+for complexes with torsion cohomologies.
+
+\begin{lemma}
+\label{lemma-torsion-flat-change-rings}
+Let $A \to B$ be a flat ring map and let $I \subset A$ be a finitely
+generated ideal such that $A/I = B/IB$. Then base change and
+restriction induce quasi-inverse equivalences
+$D_{I^\infty\text{-torsion}}(A) = D_{(IB)^\infty\text{-torsion}}(B)$.
+\end{lemma}
+
+\begin{proof}
+More precisely the functors are $K \mapsto K \otimes_A^\mathbf{L} B$
+for $K$ in $D_{I^\infty\text{-torsion}}(A)$ and $M \mapsto M_A$
+for $M$ in $D_{(IB)^\infty\text{-torsion}}(B)$. The reason this works
+is that $H^i(K \otimes_A^\mathbf{L} B) = H^i(K) \otimes_A B = H^i(K)$.
+The first equality holds as $A \to B$ is flat and the second by
+More on Algebra, Lemma \ref{more-algebra-lemma-neighbourhood-isomorphism}.
+\end{proof}
+
+\noindent
+The following lemma was shown for $\Hom$ and $\Ext^1$ of modules in
+More on Algebra, Lemmas \ref{more-algebra-lemma-neighbourhood-equivalence} and
+\ref{more-algebra-lemma-neighbourhood-extensions}.
+
+\begin{lemma}
+\label{lemma-neighbourhood-extensions}
+Let $A \to B$ be a flat ring map and let $I \subset A$ be a
+finitely generated ideal such that $A/I \to B/IB$ is an isomorphism.
+For $K \in D_{I^\infty\text{-torsion}}(A)$ and $L \in D(A)$
+the map
+$$
+R\Hom_A(K, L) \longrightarrow R\Hom_B(K \otimes_A B, L \otimes_A B)
+$$
+is a quasi-isomorphism. In particular, if $M$, $N$ are $A$-modules and
+$M$ is $I$-power torsion, then the canonical map
+$$
+\Ext^i_A(M, N)
+\longrightarrow
+\Ext^i_B(M \otimes_A B, N \otimes_A B)
+$$
+is an isomorphism for all $i$.
+\end{lemma}
+
+\begin{proof}
+Let $Z = V(I) \subset \Spec(A)$ and $Y = V(IB) \subset \Spec(B)$.
+Since the cohomology modules of $K$ are $I$-power torsion, the
+canonical map $R\Gamma_Z(L) \to L$ induces an isomorphism
+$$
+R\Hom_A(K, R\Gamma_Z(L)) \to R\Hom_A(K, L)
+$$
+in $D(A)$. Similarly, the cohomology modules of $K \otimes_A B$ are
+$IB$-power torsion and we have an isomorphism
+$$
+R\Hom_B(K \otimes_A B, R\Gamma_Y(L \otimes_A B)) \to
+R\Hom_B(K \otimes_A B, L \otimes_A B)
+$$
+in $D(B)$.
+By Lemma \ref{lemma-torsion-change-rings} we have
+$R\Gamma_Z(L) \otimes_A B = R\Gamma_Y(L \otimes_A B)$.
+Hence it suffices to show that the map
+$$
+R\Hom_A(K, R\Gamma_Z(L)) \to R\Hom_B(K \otimes_A B, R\Gamma_Z(L) \otimes_A B)
+$$
+is a quasi-isomorphism. This follows from
+Lemma \ref{lemma-torsion-flat-change-rings}.
+\end{proof}
+
+
+
+
+\section{Local cohomology for Noetherian rings}
+\label{section-local-cohomology-noetherian}
+
+\noindent
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+Set $Z = V(I) \subset \Spec(A)$. Recall that (\ref{equation-compare-torsion})
+is the functor
+$$
+D(I^\infty\text{-torsion}) \to D_{I^\infty\text{-torsion}}(A)
+$$
+In fact, there is a natural transformation of functors
+\begin{equation}
+\label{equation-compare-torsion-functors}
+(\ref{equation-compare-torsion}) \circ R\Gamma_I(-)
+\longrightarrow
+R\Gamma_Z(-)
+\end{equation}
+Namely, given a complex of $A$-modules $K^\bullet$ the canonical map
+$R\Gamma_I(K^\bullet) \to K^\bullet$ in $D(A)$ factors (uniquely)
+through $R\Gamma_Z(K^\bullet)$ as $R\Gamma_I(K^\bullet)$ has
+$I$-power torsion cohomology modules (see Lemma \ref{lemma-adjoint}).
+In general this map is not an isomorphism (we've seen this in
+Lemma \ref{lemma-not-equal}).
+
+\begin{lemma}
+\label{lemma-local-cohomology-noetherian}
+Let $A$ be a Noetherian ring and let $I \subset A$ be an ideal.
+\begin{enumerate}
+\item the adjunction $R\Gamma_I(K) \to K$ is an isomorphism
+for $K \in D_{I^\infty\text{-torsion}}(A)$,
+\item the functor
+(\ref{equation-compare-torsion})
+$D(I^\infty\text{-torsion}) \to D_{I^\infty\text{-torsion}}(A)$
+is an equivalence,
+\item the transformation of functors
+(\ref{equation-compare-torsion-functors}) is an isomorphism,
+in other words $R\Gamma_I(K) = R\Gamma_Z(K)$ for $K \in D(A)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+A formal argument, which we omit, shows that it suffices to prove (1).
+
+\medskip\noindent
+Let $M$ be an $I$-power torsion $A$-module. Choose an embedding
+$M \to J$ into an injective $A$-module. Then $J[I^\infty]$ is
+an injective $A$-module, see Lemma \ref{lemma-injective-module-divide},
+and we obtain an embedding $M \to J[I^\infty]$.
+Thus every $I$-power torsion module has an injective resolution
+$M \to J^\bullet$ with $J^n$ also $I$-power torsion. It follows
+that $R\Gamma_I(M) = M$ (this is not a triviality and this is not
+true in general if $A$ is not Noetherian). Next, suppose that
+$K \in D_{I^\infty\text{-torsion}}^+(A)$. Then the spectral sequence
+$$
+R^q\Gamma_I(H^p(K)) \Rightarrow R^{p + q}\Gamma_I(K)
+$$
+(Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor})
+converges and above we have seen that only the terms with $q = 0$
+are nonzero. Thus we see that $R\Gamma_I(K) \to K$ is an isomorphism.
+
+\medskip\noindent
+Suppose $K$ is an arbitrary object of $D_{I^\infty\text{-torsion}}(A)$.
+We have
+$$
+R^q\Gamma_I(K) = \colim \Ext^q_A(A/I^n, K)
+$$
+by Lemma \ref{lemma-local-cohomology-ext}. Choose $f_1, \ldots, f_r \in A$
+generating $I$. Let $K_n^\bullet = K(A, f_1^n, \ldots, f_r^n)$ be the
+Koszul complex with terms in degrees $-r, \ldots, 0$. Since the
+pro-objects $\{A/I^n\}$ and $\{K_n^\bullet\}$ in $D(A)$ are the same by
+More on Algebra, Lemma \ref{more-algebra-lemma-sequence-Koszul-complexes},
+we see that
+$$
+R^q\Gamma_I(K) = \colim \Ext^q_A(K_n^\bullet, K)
+$$
+Pick any complex $K^\bullet$ of $A$-modules representing $K$.
+Since $K_n^\bullet$ is a finite complex of finite free modules we see
+that
+$$
+\Ext^q_A(K_n^\bullet, K) =
+H^q(\text{Tot}((K_n^\bullet)^\vee \otimes_A K^\bullet))
+$$
+where $(K_n^\bullet)^\vee$ is the dual of the complex $K_n^\bullet$.
+See More on Algebra, Lemma \ref{more-algebra-lemma-RHom-out-of-projective}.
+As $(K_n^\bullet)^\vee$ is a complex of finite free $A$-modules sitting
+in degrees $0, \ldots, r$ we see that the terms of the complex
+$\text{Tot}((K_n^\bullet)^\vee \otimes_A K^\bullet)$ are the
+same as the terms of the complex
+$\text{Tot}((K_n^\bullet)^\vee \otimes_A \tau_{\geq q - r - 2} K^\bullet)$
+in degrees $q - 1$ and higher. Hence we see that
+$$
+\Ext^q_A(K_n^\bullet, K) = \Ext^q_A(K_n^\bullet, \tau_{\geq q - r - 2}K)
+$$
+for all $n$. It follows that
+$$
+R^q\Gamma_I(K) = R^q\Gamma_I(\tau_{\geq q - r - 2}K) =
+H^q(\tau_{\geq q - r - 2}K) = H^q(K)
+$$
+Thus we see that the map $R\Gamma_I(K) \to K$ is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-local-cohomology-noetherian}
+Let $A$ be a Noetherian ring and let $I = (f_1, \ldots, f_r)$ be an ideal
+of $A$. Set $Z = V(I) \subset \Spec(A)$. There are canonical isomorphisms
+$$
+R\Gamma_I(A) \to
+(A \to \prod\nolimits_{i_0} A_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} A_{f_{i_0}f_{i_1}} \to
+\ldots \to A_{f_1\ldots f_r}) \to R\Gamma_Z(A)
+$$
+in $D(A)$. If $M$ is an $A$-module, then we have similarly
+$$
+R\Gamma_I(M) \cong
+(M \to \prod\nolimits_{i_0} M_{f_{i_0}} \to
+\prod\nolimits_{i_0 < i_1} M_{f_{i_0}f_{i_1}} \to
+\ldots \to M_{f_1\ldots f_r}) \cong R\Gamma_Z(M)
+$$
+in $D(A)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-local-cohomology-noetherian}
+and the computation of the functor $R\Gamma_Z$ in
+Lemma \ref{lemma-local-cohomology-adjoint}.
+\end{proof}
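+
+\noindent
+As an example of the formula, let $A = k[x, y]$ be a polynomial ring
+over a field $k$ and let $I = (x, y)$. The extended {\v C}ech complex is
+$A \to A_x \times A_y \to A_{xy}$ and one computes
+$H^0_I(A) = H^1_I(A) = 0$ whereas $H^2_I(A) = A_{xy}/(A_x + A_y)$ has
+a $k$-basis given by the monomials $x^{-a}y^{-b}$ with $a, b > 0$.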
+
+\begin{lemma}
+\label{lemma-local-cohomology-change-rings}
+If $A \to B$ is a homomorphism of Noetherian rings and $I \subset A$
+is an ideal, then in $D(B)$ we have
+$$
+R\Gamma_I(A) \otimes_A^\mathbf{L} B =
+R\Gamma_Z(A) \otimes_A^\mathbf{L} B =
+R\Gamma_Y(B) = R\Gamma_{IB}(B)
+$$
+where $Y = V(IB) \subset \Spec(B)$.
+\end{lemma}
+
+\begin{proof}
+Combine Lemmas \ref{lemma-compute-local-cohomology-noetherian} and
+\ref{lemma-torsion-change-rings}.
+\end{proof}
+
+
+
+
+
+
+\section{Depth}
+\label{section-depth}
+
+\noindent
+In this section we revisit the notion of depth introduced in
+Algebra, Section \ref{algebra-section-depth}.
+
+\begin{lemma}
+\label{lemma-depth}
+Let $A$ be a Noetherian ring, let $I \subset A$ be an ideal, and
+let $M$ be a finite $A$-module such that $IM \not = M$. Then
+the following integers are equal:
+\begin{enumerate}
+\item $\text{depth}_I(M)$,
+\item the smallest integer $i$ such that $\Ext_A^i(A/I, M)$
+is nonzero, and
+\item the smallest integer $i$ such that $H^i_I(M)$ is nonzero.
+\end{enumerate}
+Moreover, we have $\Ext^i_A(N, M) = 0$ for $i < \text{depth}_I(M)$
+for any finite $A$-module $N$ annihilated by a power of $I$.
+\end{lemma}
+
+\begin{proof}
+We prove the equality of (1) and (2) by induction on $\text{depth}_I(M)$
+which is allowed by
+Algebra, Lemma \ref{algebra-lemma-depth-finite-noetherian}.
+
+\medskip\noindent
+Base case. If $\text{depth}_I(M) = 0$, then $I$ is contained in the union
+of the associated primes of $M$
+(Algebra, Lemma \ref{algebra-lemma-ass-zero-divisors}).
+By prime avoidance (Algebra, Lemma \ref{algebra-lemma-silly})
+we see that $I \subset \mathfrak p$ for some associated prime $\mathfrak p$.
+Hence $\Hom_A(A/I, M)$
+is nonzero. Thus equality holds in this case.
+
+\medskip\noindent
+Assume that $\text{depth}_I(M) > 0$. Let $f \in I$ be a nonzerodivisor
+on $M$ such that $\text{depth}_I(M/fM) = \text{depth}_I(M) - 1$.
+Consider the short exact sequence
+$$
+0 \to M \to M \to M/fM \to 0
+$$
+and the associated long exact sequence for $\Ext^*_A(A/I, -)$.
+Note that $\Ext^i_A(A/I, M)$ is a finite $A/I$-module
+(Algebra, Lemmas \ref{algebra-lemma-ext-noetherian} and
+\ref{algebra-lemma-annihilate-ext}). Hence we obtain
+$$
+\Hom_A(A/I, M/fM) = \Ext^1_A(A/I, M)
+$$
+and short exact sequences
+$$
0 \to \Ext^i_A(A/I, M) \to \Ext^i_A(A/I, M/fM) \to
+\Ext^{i + 1}_A(A/I, M) \to 0
+$$
Thus the equality of (1) and (2) follows by induction.
+
+\medskip\noindent
+Observe that $\text{depth}_I(M) = \text{depth}_{I^n}(M)$ for all $n \geq 1$
+for example by Algebra, Lemma \ref{algebra-lemma-regular-sequence-powers}.
+Hence by the equality of (1) and (2) we see that
+$\Ext^i_A(A/I^n, M) = 0$ for all $n$ and $i < \text{depth}_I(M)$.
+Let $N$ be a finite $A$-module annihilated by a power of $I$.
+Then we can choose a short exact sequence
+$$
+0 \to N' \to (A/I^n)^{\oplus m} \to N \to 0
+$$
+for some $n, m \geq 0$. Then
+$\Hom_A(N, M) \subset \Hom_A((A/I^n)^{\oplus m}, M)$
+and
$\Ext^i_A(N, M) \subset \Ext^{i - 1}_A(N', M)$
for $i < \text{depth}_I(M)$. Thus a simple induction argument
shows that the final statement of the lemma holds.
+
+\medskip\noindent
+Finally, we prove that (3) is equal to (1) and (2).
+We have $H^p_I(M) = \colim \Ext^p_A(A/I^n, M)$ by
+Lemma \ref{lemma-local-cohomology-ext}.
+Thus we see that $H^i_I(M) = 0$ for $i < \text{depth}_I(M)$.
+For $i = \text{depth}_I(M)$, using the vanishing of
+$\Ext_A^{i - 1}(I/I^n, M)$ we see that the map
+$\Ext_A^i(A/I, M) \to H_I^i(M)$ is injective which
+proves nonvanishing in the correct degree.
+\end{proof}
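
\begin{example}
\label{example-depth-polynomial-ring}
To see Lemma \ref{lemma-depth} in action, take $A = k[x, y]$ for a
field $k$, $I = (x, y)$, and $M = A$. The sequence $x, y$ is regular
on $A$, so $\text{depth}_I(A) = 2$. Computing $\Ext^i_A(A/I, A)$ with
the Koszul resolution $0 \to A \to A^{\oplus 2} \to A \to A/I \to 0$
gives $\Ext^0_A(A/I, A) = \Ext^1_A(A/I, A) = 0$ and
$\Ext^2_A(A/I, A) = A/I$. Similarly, the complex
$(A \to A_x \times A_y \to A_{xy})$ shows $H^0_I(A) = H^1_I(A) = 0$,
while $H^2_I(A)$ is the cokernel of $A_x \times A_y \to A_{xy}$, which
is nonzero as it contains the class of $x^{-1}y^{-1}$.
\end{example}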
+
+\begin{lemma}
+\label{lemma-depth-in-ses}
+Let $A$ be a Noetherian ring. Let $0 \to N' \to N \to N'' \to 0$
+be a short exact sequence of finite $A$-modules.
+Let $I \subset A$ be an ideal.
+\begin{enumerate}
+\item
+$\text{depth}_I(N) \geq \min\{\text{depth}_I(N'), \text{depth}_I(N'')\}$
+\item
+$\text{depth}_I(N'') \geq \min\{\text{depth}_I(N), \text{depth}_I(N') - 1\}$
+\item
+$\text{depth}_I(N') \geq \min\{\text{depth}_I(N), \text{depth}_I(N'') + 1\}$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $IN \not = N$, $IN' \not = N'$, and $IN'' \not = N''$. Then we
+can use the characterization of depth using the Ext groups
+$\Ext^i(A/I, N)$, see Lemma \ref{lemma-depth},
+and use the long exact cohomology sequence
+$$
+\begin{matrix}
+0
+\to \Hom_A(A/I, N')
+\to \Hom_A(A/I, N)
+\to \Hom_A(A/I, N'')
+\\
+\phantom{0\ }
+\to \Ext^1_A(A/I, N')
+\to \Ext^1_A(A/I, N)
+\to \Ext^1_A(A/I, N'')
+\to \ldots
+\end{matrix}
+$$
+from Algebra, Lemma \ref{algebra-lemma-long-exact-seq-ext}.
+This argument also works if $IN = N$
+because in this case $\Ext^i_A(A/I, N) = 0$ for all $i$.
Similarly in case $IN' = N'$ and/or $IN'' = N''$.
+\end{proof}
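
\begin{example}
\label{example-depth-in-ses-strict}
The inequalities of Lemma \ref{lemma-depth-in-ses} can be strict.
Take $A = k[[x, y]]$ for a field $k$, let $I = \mathfrak m = (x, y)$,
and consider the short exact sequence
$0 \to \mathfrak m \to A \to k \to 0$. Here
$\text{depth}_I(A) = 2$ and $\text{depth}_I(k) = 0$. We claim
$\text{depth}_I(\mathfrak m) = 1$. Indeed, $x$ is a nonzerodivisor
on $\mathfrak m$, and the class of $x$ in $\mathfrak m/x\mathfrak m$
is nonzero and annihilated by $\mathfrak m$, so
$\text{depth}_I(\mathfrak m/x\mathfrak m) = 0$ and we conclude by
Lemma \ref{lemma-depth-drops-by-one}. Thus inequality (1) is strict
for this sequence, while (3) is an equality.
\end{example}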
+
+\begin{lemma}
+\label{lemma-depth-drops-by-one}
+Let $A$ be a Noetherian ring, let $I \subset A$ be an ideal, and
let $M$ be a finite $A$-module with $IM \not = M$.
+\begin{enumerate}
+\item If $x \in I$ is a nonzerodivisor on $M$, then
+$\text{depth}_I(M/xM) = \text{depth}_I(M) - 1$.
+\item Any $M$-regular sequence $x_1, \ldots, x_r$ in $I$ can be extended to an
+$M$-regular sequence in $I$ of length $\text{depth}_I(M)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (2) is a formal consequence of part (1). Let $x \in I$ be as in (1).
+By the short exact sequence $0 \to M \to M \to M/xM \to 0$ and
+Lemma \ref{lemma-depth-in-ses} we see that
+$\text{depth}_I(M/xM) \geq \text{depth}_I(M) - 1$.
+On the other hand, if $x_1, \ldots, x_r \in I$
+is a regular sequence for $M/xM$, then $x, x_1, \ldots, x_r$
+is a regular sequence for $M$. Hence (1) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-depth-CM}
Let $R$ be a Noetherian local ring, let $M$ be a finite Cohen-Macaulay
$R$-module, and let $I \subset R$ be a nontrivial ideal. Then
+$$
+\text{depth}_I(M) = \dim(\text{Supp}(M)) - \dim(\text{Supp}(M/IM)).
+$$
+\end{lemma}
+
+\begin{proof}
+We will prove this by induction on $\text{depth}_I(M)$.
+
+\medskip\noindent
+If $\text{depth}_I(M) = 0$, then $I$ is contained in one
+of the associated primes $\mathfrak p$ of $M$
+(Algebra, Lemma \ref{algebra-lemma-ideal-nonzerodivisor}).
+Then $\mathfrak p \in \text{Supp}(M/IM)$, hence
+$\dim(\text{Supp}(M/IM)) \geq \dim(R/\mathfrak p) = \dim(\text{Supp}(M))$
+where equality holds by
+Algebra, Lemma \ref{algebra-lemma-CM-ass-minimal-support}.
+Thus the lemma holds in this case.
+
+\medskip\noindent
+If $\text{depth}_I(M) > 0$, we pick $x \in I$ which is a
+nonzerodivisor on $M$. Note that $(M/xM)/I(M/xM) = M/IM$.
+On the other hand we have
+$\text{depth}_I(M/xM) = \text{depth}_I(M) - 1$
+by Lemma \ref{lemma-depth-drops-by-one}
+and $\dim(\text{Supp}(M/xM)) = \dim(\text{Supp}(M)) - 1$
+by Algebra, Lemma \ref{algebra-lemma-one-equation-module}.
Thus the result follows by the induction hypothesis.
+\end{proof}
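
\begin{example}
\label{example-depth-CM}
As an illustration of Lemma \ref{lemma-depth-CM}, take
$R = k[[x, y]]$ for a field $k$, $M = R$, and $I = (x)$. Then $M$ is
Cohen-Macaulay and $\text{depth}_I(M) = 1$: the element $x$ is a
nonzerodivisor on $M$ and every element of $I$ acts as zero on
$M/xM = k[[y]]$. On the other side of the formula we have
$\dim(\text{Supp}(M)) - \dim(\text{Supp}(M/IM)) = 2 - 1 = 1$.
\end{example}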
+
+\begin{lemma}
+\label{lemma-depth-flat-CM}
+Let $R \to S$ be a flat local ring homomorphism of Noetherian local
+rings. Denote $\mathfrak m \subset R$ the maximal ideal.
+Let $I \subset S$ be an ideal.
+If $S/\mathfrak mS$ is Cohen-Macaulay, then
+$$
\text{depth}_I(S) \geq \dim(S/\mathfrak m S) - \dim(S/(\mathfrak m S + I))
+$$
+\end{lemma}
+
+\begin{proof}
+By Algebra, Lemma \ref{algebra-lemma-grothendieck-regular-sequence}
+any sequence in $S$ which maps to a regular sequence in $S/\mathfrak mS$
+is a regular sequence in $S$. Thus it suffices to prove the lemma
+in case $R$ is a field. This is a special case of Lemma \ref{lemma-depth-CM}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-divide-by-torsion}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+Let $M$ be an $A$-module. Let $Z = V(I)$.
+Then $H^0_I(M) = H^0_Z(M)$. Let $N$ be the common value and
+set $M' = M/N$. Then
+\begin{enumerate}
+\item $H^0_I(M') = 0$ and $H^p_I(M) = H^p_I(M')$ and $H^p_I(N) = 0$
+for all $p > 0$,
+\item $H^0_Z(M') = 0$ and $H^p_Z(M) = H^p_Z(M')$ and $H^p_Z(N) = 0$
+for all $p > 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By definition $H^0_I(M) = M[I^\infty]$ is $I$-power torsion.
+By Lemma \ref{lemma-local-cohomology-adjoint} we see that
+$$
+H^0_Z(M) = \Ker(M \longrightarrow M_{f_1} \times \ldots \times M_{f_r})
+$$
+if $I = (f_1, \ldots, f_r)$. Thus $H^0_I(M) \subset H^0_Z(M)$ and
conversely, if $x \in H^0_Z(M)$, then it is annihilated by $f_i^{e_i}$
for some $e_i \geq 1$ for each $i$, hence annihilated by some power of $I$.
+This proves the first equality and moreover $N$ is $I$-power torsion.
+By Lemma \ref{lemma-adjoint} we see that $R\Gamma_I(N) = N$.
+By Lemma \ref{lemma-local-cohomology-adjoint} we see that $R\Gamma_Z(N) = N$.
+This proves the higher vanishing of $H^p_I(N)$ and $H^p_Z(N)$ in (1) and (2).
+The vanishing of $H^0_I(M')$ and $H^0_Z(M')$ follow from the preceding
+remarks and the fact that $M'$ is $I$-power torsion free by
+More on Algebra, Lemma \ref{more-algebra-lemma-divide-by-torsion}.
+The equality of higher cohomologies for $M$ and $M'$ follow
+immediately from the long exact cohomology sequence.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Torsion versus complete modules}
+\label{section-torsion-and-complete}
+
+\noindent
+Let $A$ be a ring and let $I$ be a finitely generated ideal.
+In this case we can consider the derived category
+$D_{I^\infty\text{-torsion}}(A)$ of complexes
+with $I$-power torsion cohomology modules
+(Section \ref{section-local-cohomology})
+and the derived category
+$D_{comp}(A, I)$ of derived complete complexes
+(More on Algebra, Section \ref{more-algebra-section-derived-completion}).
+In this section we show these categories are equivalent.
+A more general statement can be found in
+\cite{Dwyer-Greenlees}.
+
+\begin{lemma}
+\label{lemma-complete-and-local}
+\begin{slogan}
+Results of this nature are sometimes referred to as Greenlees-May duality.
+\end{slogan}
+Let $A$ be a ring and let $I$ be a finitely generated ideal.
+Let $R\Gamma_Z$ be as in Lemma \ref{lemma-local-cohomology-adjoint}.
+Let ${\ }^\wedge$ denote derived completion as in
+More on Algebra, Lemma \ref{more-algebra-lemma-derived-completion}.
+For an object $K$ in $D(A)$ we have
+$$
+R\Gamma_Z(K^\wedge) = R\Gamma_Z(K)
+\quad\text{and}\quad
+(R\Gamma_Z(K))^\wedge = K^\wedge
+$$
+in $D(A)$.
+\end{lemma}
+
+\begin{proof}
+Choose $f_1, \ldots, f_r \in A$ generating $I$. Recall that
+$$
K^\wedge = R\Hom_A\left((A \to \prod A_{f_{i_0}}
\to \prod A_{f_{i_0}f_{i_1}} \to \ldots \to A_{f_1 \ldots f_r}), K\right)
+$$
+by More on Algebra, Lemma \ref{more-algebra-lemma-derived-completion}.
+Hence the cone $C = \text{Cone}(K \to K^\wedge)$
+is given by
+$$
R\Hom_A\left((\prod A_{f_{i_0}}
\to \prod A_{f_{i_0}f_{i_1}} \to \ldots \to A_{f_1 \ldots f_r}), K\right)
+$$
+which can be represented by a complex endowed with a finite filtration
+whose successive quotients are isomorphic to
+$$
+R\Hom_A(A_{f_{i_0} \ldots f_{i_p}}, K), \quad p > 0
+$$
+These complexes vanish on applying $R\Gamma_Z$, see
+Lemma \ref{lemma-local-cohomology-vanishes}. Applying $R\Gamma_Z$
+to the distinguished triangle $K \to K^\wedge \to C \to K[1]$
+we see that the first formula of the lemma is correct.
+
+\medskip\noindent
+Recall that
+$$
+R\Gamma_Z(K) =
K \otimes^\mathbf{L} (A \to \prod A_{f_{i_0}}
\to \prod A_{f_{i_0}f_{i_1}} \to \ldots \to A_{f_1 \ldots f_r})
+$$
+by Lemma \ref{lemma-local-cohomology-adjoint}.
+Hence the cone $C = \text{Cone}(R\Gamma_Z(K) \to K)$
+can be represented by a complex endowed with a finite filtration
+whose successive quotients are isomorphic to
+$$
+K \otimes_A A_{f_{i_0} \ldots f_{i_p}}, \quad p > 0
+$$
+These complexes vanish on applying ${\ }^\wedge$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-derived-completion-vanishes}.
+Applying derived completion to the distinguished triangle
+$R\Gamma_Z(K) \to K \to C \to R\Gamma_Z(K)[1]$
+we see that the second formula of the lemma is correct.
+\end{proof}
+
+\noindent
+The following result is a special case of a very general phenomenon
+concerning admissible subcategories of a triangulated category.
+
+\begin{proposition}
+\label{proposition-torsion-complete}
+\begin{reference}
+This is a special case of \cite[Theorem 1.1]{Porta-Liran-Yekutieli}.
+\end{reference}
+Let $A$ be a ring and let $I \subset A$ be a finitely generated ideal.
+The functors $R\Gamma_Z$ and ${\ }^\wedge$
+define quasi-inverse equivalences of categories
+$$
+D_{I^\infty\text{-torsion}}(A) \leftrightarrow D_{comp}(A, I)
+$$
+\end{proposition}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-complete-and-local}.
+\end{proof}
+
+\noindent
+The following addendum of the proposition above makes the
+correspondence on morphisms more precise.
+
+\begin{lemma}
+\label{lemma-compare-RHom}
+With notation as in Lemma \ref{lemma-complete-and-local}.
+For objects $K, L$ in $D(A)$ there is a canonical isomorphism
+$$
+R\Hom_A(K^\wedge, L^\wedge) \longrightarrow R\Hom_A(R\Gamma_Z(K), R\Gamma_Z(L))
+$$
+in $D(A)$.
+\end{lemma}
+
+\begin{proof}
+Say $I = (f_1, \ldots, f_r)$. Denote
+$C = (A \to \prod A_{f_i} \to \ldots \to A_{f_1 \ldots f_r})$ the
+alternating {\v C}ech complex. Then derived completion is given by
+$R\Hom_A(C, -)$ (More on Algebra, Lemma
+\ref{more-algebra-lemma-derived-completion}) and local cohomology by
+$C \otimes^\mathbf{L} -$ (Lemma \ref{lemma-local-cohomology-adjoint}).
+Combining the isomorphism
+$$
+R\Hom_A(K \otimes^\mathbf{L} C, L \otimes^\mathbf{L} C) =
+R\Hom_A(K, R\Hom_A(C, L \otimes^\mathbf{L} C))
+$$
+(More on Algebra, Lemma \ref{more-algebra-lemma-internal-hom})
+and the map
+$$
+L \to R\Hom_A(C, L \otimes^\mathbf{L} C)
+$$
+(More on Algebra, Lemma \ref{more-algebra-lemma-internal-hom-diagonal})
+we obtain a map
+$$
+\gamma :
+R\Hom_A(K, L)
+\longrightarrow
+R\Hom_A(K \otimes^\mathbf{L} C, L \otimes^\mathbf{L} C)
+$$
+On the other hand, the right hand side is derived complete as it is
+equal to
+$$
+R\Hom_A(C, R\Hom_A(K, L \otimes^\mathbf{L} C)).
+$$
+Thus $\gamma$ factors through the derived completion of
+$R\Hom_A(K, L)$ by the universal property of derived completion.
+However, the derived completion goes inside the $R\Hom_A$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-completion-RHom}
+and we obtain the desired map.
+
+\medskip\noindent
+To show that the map of the lemma is an isomorphism
+we may assume that $K$ and $L$ are derived complete, i.e.,
+$K = K^\wedge$ and $L = L^\wedge$. In this case we are
+looking at the map
+$$
+\gamma : R\Hom_A(K, L) \longrightarrow R\Hom_A(R\Gamma_Z(K), R\Gamma_Z(L))
+$$
+By Proposition \ref{proposition-torsion-complete} we know that
+the cohomology groups
+of the left and the right hand side coincide. In other words,
+we have to check that the map $\gamma$ sends a morphism
+$\alpha : K \to L$ in $D(A)$ to the morphism
+$R\Gamma_Z(\alpha) : R\Gamma_Z(K) \to R\Gamma_Z(L)$.
+We omit the verification (hint: note that $R\Gamma_Z(\alpha)$
+is just the map
+$\alpha \otimes \text{id}_C :
+K \otimes^\mathbf{L} C
+\to
+L \otimes^\mathbf{L} C$ which is almost the same as the
+construction of the map in
+More on Algebra, Lemma \ref{more-algebra-lemma-internal-hom-diagonal}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-local}
+Let $I$ and $J$ be ideals in a Noetherian ring $A$. Let $M$ be a finite
$A$-module. Set $Z = V(J)$. Consider the derived $I$-adic completion
+$R\Gamma_Z(M)^\wedge$ of local cohomology. Then
+\begin{enumerate}
+\item we have $R\Gamma_Z(M)^\wedge = R\lim R\Gamma_Z(M/I^nM)$, and
+\item there are short exact sequences
+$$
+0 \to R^1\lim H^{i - 1}_Z(M/I^nM) \to H^i(R\Gamma_Z(M)^\wedge) \to
+\lim H^i_Z(M/I^nM) \to 0
+$$
+\end{enumerate}
+In particular $R\Gamma_Z(M)^\wedge$ has vanishing cohomology
+in negative degrees.
+\end{lemma}
+
+\begin{proof}
+Suppose that $J = (g_1, \ldots, g_m)$.
+Then $R\Gamma_Z(M)$ is computed by the complex
+$$
+M \to \prod M_{g_{j_0}} \to \prod M_{g_{j_0}g_{j_1}} \to
+\ldots \to M_{g_1g_2\ldots g_m}
+$$
+by Lemma \ref{lemma-local-cohomology-adjoint}.
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-when-derived-completion-is-completion}
+the derived $I$-adic completion of
+this complex is given by the complex
+$$
+\lim M/I^nM \to \prod \lim (M/I^nM)_{g_{j_0}} \to
+\ldots \to \lim (M/I^nM)_{g_1g_2\ldots g_m}
+$$
+of usual completions. Since $R\Gamma_Z(M/I^nM)$ is computed by
+the complex $ M/I^nM \to \prod (M/I^nM)_{g_{j_0}} \to
+\ldots \to (M/I^nM)_{g_1g_2\ldots g_m}$ and since the
+transition maps between these complexes are surjective,
+we conclude that (1) holds by
+More on Algebra, Lemma \ref{more-algebra-lemma-compute-Rlim-modules}.
+Part (2) then follows from More on Algebra, Lemma
+\ref{more-algebra-lemma-break-long-exact-sequence-modules}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-local-H0}
+With notation and hypotheses as in Lemma \ref{lemma-completion-local}
+assume $A$ is $I$-adically complete. Then
+$$
+H^0(R\Gamma_Z(M)^\wedge) = \colim H^0_{V(J')}(M)
+$$
+where the filtered colimit is over $J' \subset J$ such that
+$V(J') \cap V(I) = V(J) \cap V(I)$.
+\end{lemma}
+
+\begin{proof}
Since $A$ is $I$-adically complete and $M$ is a finite $A$-module,
the module $M$ is $I$-adically complete.
+The proof of Lemma \ref{lemma-completion-local} shows that
+$$
+H^0(R\Gamma_Z(M)^\wedge) =
+\Ker(M^\wedge \to \prod M_{g_j}^\wedge) =
+\Ker(M \to \prod M_{g_j}^\wedge)
+$$
+where on the right hand side we have usual $I$-adic completion.
+The kernel $K_j$ of $M_{g_j} \to M_{g_j}^\wedge$ is $\bigcap I^n M_{g_j}$.
+By Algebra, Lemma \ref{algebra-lemma-intersection-powers-ideal-module}
+for every $\mathfrak p \in V(IA_{g_j})$ we find an
+$f \in A_{g_j}$, $f \not \in \mathfrak p$ such that $(K_j)_f = 0$.
+
+\medskip\noindent
+Let $s \in H^0(R\Gamma_Z(M)^\wedge)$.
+By the above we may think of $s$ as an element of $M$.
+The support $Z'$ of $s$ intersected with $D(g_j)$ is disjoint from
+$D(g_j) \cap V(I)$ by the arguments above.
+Thus $Z'$ is a closed subset of $\Spec(A)$ with $Z' \cap V(I) \subset V(J)$.
+Then $Z' \cup V(J) = V(J')$ for some ideal $J' \subset J$ with
+$V(J') \cap V(I) \subset V(J)$ and we have $s \in H^0_{V(J')}(M)$.
+Conversely, any $s \in H^0_{V(J')}(M)$ with $J' \subset J$ and
+$V(J') \cap V(I) \subset V(J)$ maps to zero in $M_{g_j}^\wedge$ for all $j$.
+This proves the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Trivial duality for a ring map}
+\label{section-trivial}
+
+\noindent
+Let $A \to B$ be a ring homomorphism. Consider the functor
+$$
+\Hom_A(B, -) : \text{Mod}_A \longrightarrow \text{Mod}_B,\quad
+M \longmapsto \Hom_A(B, M)
+$$
+This functor is left exact and has a derived extension
+$R\Hom(B, -) : D(A) \to D(B)$.
+
+\begin{lemma}
+\label{lemma-right-adjoint}
+Let $A \to B$ be a ring homomorphism. The functor $R\Hom(B, -)$
+constructed above is right adjoint to the restriction functor
+$D(B) \to D(A)$.
+\end{lemma}
+
+\begin{proof}
+This is a consequence of the fact that restriction and $\Hom_A(B, -)$ are
+adjoint functors by Algebra, Lemma \ref{algebra-lemma-adjoint-hom-restrict}.
+See Derived Categories, Lemma \ref{derived-lemma-derived-adjoint-functors}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-right-adjoints}
+Let $A \to B \to C$ be ring maps. Then
+$R\Hom(C, -) \circ R\Hom(B, -) : D(A) \to D(C)$
+is the functor $R\Hom(C, -) : D(A) \to D(C)$.
+\end{lemma}
+
+\begin{proof}
+Follows from uniqueness of right adjoints and Lemma \ref{lemma-right-adjoint}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RHom-ext}
+Let $\varphi : A \to B$ be a ring homomorphism. For $K$ in $D(A)$ we have
+$$
+\varphi_*R\Hom(B, K) = R\Hom_A(B, K)
+$$
+where $\varphi_* : D(B) \to D(A)$ is restriction. In particular
+$R^q\Hom(B, K) = \Ext_A^q(B, K)$.
+\end{lemma}
+
+\begin{proof}
+Choose a K-injective complex $I^\bullet$ representing $K$.
+Then $R\Hom(B, K)$ is represented by the complex $\Hom_A(B, I^\bullet)$
+of $B$-modules. Since this complex, as a complex of $A$-modules,
+represents $R\Hom_A(B, K)$ we see that the lemma is true.
+\end{proof}
+
+\noindent
+Let $A$ be a Noetherian ring. We will denote
+$$
+D_{\textit{Coh}}(A) \subset D(A)
+$$
+the full subcategory consisting of those objects $K$ of $D(A)$
+whose cohomology modules are all finite $A$-modules. This makes sense
+by Derived Categories, Section \ref{derived-section-triangulated-sub}
+because as $A$ is Noetherian, the subcategory of finite $A$-modules
+is a Serre subcategory of $\text{Mod}_A$.
+
+\begin{lemma}
+\label{lemma-exact-support-coherent}
+With notation as above, assume $A \to B$ is a finite ring map of
+Noetherian rings. Then $R\Hom(B, -)$ maps
+$D^+_{\textit{Coh}}(A)$ into $D^+_{\textit{Coh}}(B)$.
+\end{lemma}
+
+\begin{proof}
+We have to show: if $K \in D^+(A)$ has finite cohomology modules, then the
+complex $R\Hom(B, K)$ has finite cohomology modules too.
+This follows for example from Lemma \ref{lemma-RHom-ext}
+if we can show the ext modules $\Ext^i_A(B, K)$
+are finite $A$-modules. Since $K$ is bounded below there is a
+convergent spectral sequence
+$$
\Ext^p_A(B, H^q(K)) \Rightarrow \Ext^{p + q}_A(B, K)
+$$
+This finishes the proof as the modules $\Ext^p_A(B, H^q(K))$
+are finite by
+Algebra, Lemma \ref{algebra-lemma-ext-noetherian}.
+\end{proof}
+
+\begin{remark}
+\label{remark-exact-support}
+Let $A$ be a ring and let $I \subset A$ be an ideal. Set $B = A/I$.
+In this case the functor $\Hom_A(B, -)$ is equal to the functor
+$$
+\text{Mod}_A \longrightarrow \text{Mod}_B,\quad M \longmapsto M[I]
+$$
+which sends $M$ to the submodule of $I$-torsion.
+\end{remark}
+
+\begin{situation}
+\label{situation-resolution}
+Let $R \to A$ be a ring map.
+We will give an alternative construction of $R\Hom(A, -)$
+which will stand us in good stead later in this chapter.
+Namely, suppose we have a differential graded algebra $(E, d)$
+over $R$ and a quasi-isomorphism $E \to A$ where we view $A$
+as a differential graded algebra over $R$ with zero differential.
+Then we have commutative diagrams
+$$
+\vcenter{
+\xymatrix{
+D(E, \text{d}) \ar[rd] & & D(A) \ar[ll] \ar[ld] \\
+& D(R)
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+D(E, \text{d}) \ar[rr]_{- \otimes_E^\mathbf{L} A} & & D(A) \\
+& D(R) \ar[lu]^{- \otimes_R^\mathbf{L} E} \ar[ru]_{- \otimes_R^\mathbf{L} A}
+}
+}
+$$
+where the horizontal arrows are equivalences of categories
+(Differential Graded Algebra, Lemma \ref{dga-lemma-qis-equivalence}).
+It is clear that the first diagram commutes.
+The second diagram commutes because the first one does
+and our functors are their left adjoints
+(Differential Graded Algebra, Example \ref{dga-example-map-hom-tensor})
+or because we have $E \otimes^\mathbf{L}_E A = E \otimes_E A$
+and we can use
+Differential Graded Algebra, Lemma
+\ref{dga-lemma-compose-tensor-functors-general}.
+\end{situation}
+
+\begin{lemma}
+\label{lemma-RHom-dga}
+In Situation \ref{situation-resolution} the functor $R\Hom(A, -)$
+is equal to the composition of
+$R\Hom(E, -) : D(R) \to D(E, \text{d})$
+and the equivalence $- \otimes^\mathbf{L}_E A : D(E, \text{d}) \to D(A)$.
+\end{lemma}
+
+\begin{proof}
+This is true because $R\Hom(E, -)$ is the right adjoint
+to $- \otimes^\mathbf{L}_R E$, see
+Differential Graded Algebra, Lemma \ref{dga-lemma-tensor-hom-adjoint}.
+Hence this functor plays the same role as the functor
+$R\Hom(A, -)$ for the map $R \to A$ (Lemma \ref{lemma-right-adjoint}),
+whence these functors must correspond via the equivalence
+$- \otimes^\mathbf{L}_E A : D(E, \text{d}) \to D(A)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RHom-is-tensor}
+In Situation \ref{situation-resolution} assume that
+\begin{enumerate}
+\item $E$ viewed as an object of $D(R)$ is compact, and
+\item $N = \Hom^\bullet_R(E^\bullet, R)$ computes $R\Hom(E, R)$.
+\end{enumerate}
+Then $R\Hom(E, -) : D(R) \to D(E)$ is isomorphic to
+$K \mapsto K \otimes_R^\mathbf{L} N$.
+\end{lemma}
+
+\begin{proof}
+Special case of Differential Graded Algebra, Lemma
+\ref{dga-lemma-RHom-is-tensor}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-RHom-is-tensor-special}
+In Situation \ref{situation-resolution} assume $A$ is a perfect $R$-module.
+Then
+$$
+R\Hom(A, -) : D(R) \to D(A)
+$$
+is given by $K \mapsto K \otimes_R^\mathbf{L} M$
+where $M = R\Hom(A, R) \in D(A)$.
+\end{lemma}
+
+\begin{proof}
+We apply Divided Power Algebra, Lemma
+\ref{dpa-lemma-tate-resoluton-pseudo-coherent-ring-map}
+to choose a Tate resolution $(E, \text{d})$ of $A$ over $R$.
+Note that $E^i = 0$ for $i > 0$, $E^0 = R[x_1, \ldots, x_n]$
+is a polynomial algebra, and $E^i$ is a finite free $E^0$-module
+for $i < 0$. It follows that $E$ viewed as a complex of $R$-modules
+is a bounded above complex of free $R$-modules.
+We check the assumptions of Lemma \ref{lemma-RHom-is-tensor}.
+The first holds because $A$ is perfect
+(hence compact by More on Algebra, Proposition
+\ref{more-algebra-proposition-perfect-is-compact})
+and the second by
+More on Algebra, Lemma \ref{more-algebra-lemma-RHom-out-of-projective}.
From the lemma we conclude that $K \mapsto R\Hom(E, K)$ is
+isomorphic to $K \mapsto K \otimes_R^\mathbf{L} N$ for
+some differential graded $E$-module $N$. Observe that
+$$
+(R \otimes_R E) \otimes_E^\mathbf{L} A = R \otimes_E E \otimes_E A
+$$
+in $D(A)$. Hence by Differential Graded Algebra, Lemma
+\ref{dga-lemma-compose-tensor-functors-general-algebra}
+we conclude that the composition of
+$- \otimes_R^\mathbf{L} N$ and $- \otimes_R^\mathbf{L} A$
is of the form $- \otimes_R^\mathbf{L} M$ for some $M \in D(A)$.
+To finish the proof we apply Lemma \ref{lemma-RHom-dga}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-for-effective-Cartier-algebraic}
+Let $R \to A$ be a surjective ring map whose kernel $I$
+is an invertible $R$-module. The functor
+$R\Hom(A, -) : D(R) \to D(A)$
+is isomorphic to $K \mapsto K \otimes_R^\mathbf{L} N[-1]$
where $N$ is the inverse of the invertible $A$-module $I \otimes_R A$.
+\end{lemma}
+
+\begin{proof}
+Since $A$ has the finite projective resolution
+$$
+0 \to I \to R \to A \to 0
+$$
+we see that $A$ is a perfect $R$-module. By
+Lemma \ref{lemma-RHom-is-tensor-special} it suffices
+to prove that $R\Hom(A, R)$ is represented by $N[-1]$ in $D(A)$.
+This means $R\Hom(A, R)$ has a unique nonzero
+cohomology module, namely $N$ in degree $1$. As
$\text{Mod}_A \to \text{Mod}_R$ is fully faithful it suffices to prove
+this after applying the restriction functor $i_* : D(A) \to D(R)$.
+By Lemma \ref{lemma-RHom-ext} we have
+$$
+i_*R\Hom(A, R) = R\Hom_R(A, R)
+$$
+Using the finite projective resolution above we find that the latter
+is represented by the complex $R \to I^{\otimes -1}$ with $R$
+in degree $0$. The map $R \to I^{\otimes -1}$ is injective
+and the cokernel is $N$.
+\end{proof}
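
\begin{example}
\label{example-effective-Cartier-duality}
To make Lemma \ref{lemma-compute-for-effective-Cartier-algebraic}
concrete, take $R = k[x]$ for a field $k$ and $A = R/(x)$, so that
$I = (x)$ is an invertible $R$-module. Here
$I \otimes_R A = (x)/(x^2)$ is free of rank $1$ over $A$, hence
$N \cong A$. For $K = R$ the lemma gives $R\Hom(A, R) = A[-1]$,
matching the direct computation with the resolution
$0 \to R \xrightarrow{x} R \to A \to 0$, which yields
$\Ext^1_R(A, R) = R/xR = A$ and $\Ext^i_R(A, R) = 0$ for $i \not = 1$.
\end{example}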
+
+
+
+
+
+\section{Base change for trivial duality}
+\label{section-base-change-trivial-duality}
+
+\noindent
+In this section we consider a cocartesian square of rings
+$$
+\xymatrix{
+A \ar[r]_\alpha & A' \\
+R \ar[u]^\varphi \ar[r]^\rho & R' \ar[u]_{\varphi'}
+}
+$$
+In other words, we have $A' = A \otimes_R R'$. If $A$ and $R'$
+are {\bf tor independent over} $R$ then there is a canonical base change map
+\begin{equation}
+\label{equation-base-change}
+R\Hom(A, K) \otimes_A^\mathbf{L} A'
+\longrightarrow
+R\Hom(A', K \otimes_R^\mathbf{L} R')
+\end{equation}
+in $D(A')$ functorial for $K$ in $D(R)$. Namely, by the adjointness
+of Lemma \ref{lemma-right-adjoint} such an arrow is the same thing as a map
+$$
+\varphi'_*\left(R\Hom(A, K) \otimes_A^\mathbf{L} A'\right)
+\longrightarrow
+K \otimes_R^\mathbf{L} R'
+$$
+in $D(R')$ where $\varphi'_* : D(A') \to D(R')$ is the restriction functor.
+We may apply
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-comparison}
+to the left hand side to get that this is the same thing as a map
+$$
+\varphi_*(R\Hom(A, K)) \otimes_R^\mathbf{L} R'
+\longrightarrow
+K \otimes_R^\mathbf{L} R'
+$$
+in $D(R')$ where $\varphi_* : D(A) \to D(R)$ is the restriction functor.
For this we can choose $\text{can} \otimes^\mathbf{L} \text{id}_{R'}$
where $\text{can} : \varphi_*(R\Hom(A, K)) \to K$ is the
counit of the adjunction between $R\Hom(A, -)$ and $\varphi_*$.
+
+\begin{lemma}
+\label{lemma-check-base-change-is-iso}
+In the situation above, the map (\ref{equation-base-change})
+is an isomorphism if and only if the map
+$$
+R\Hom_R(A, K) \otimes_R^\mathbf{L} R'
+\longrightarrow
+R\Hom_R(A, K \otimes_R^\mathbf{L} R')
+$$
+of More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-diagonal-better} is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+To see that the map is an isomorphism, it suffices to prove it
+is an isomorphism after applying $\varphi'_*$.
+Applying the functor $\varphi'_*$ to (\ref{equation-base-change})
+and using that $A' = A \otimes_R^\mathbf{L} R'$
+we obtain the base change map
+$R\Hom_R(A, K) \otimes_R^\mathbf{L} R' \to
+R\Hom_{R'}(A \otimes_R^\mathbf{L} R', K \otimes_R^\mathbf{L} R')$
+for derived hom of
+More on Algebra, Equation (\ref{more-algebra-equation-base-change-RHom}).
+Unwinding the left and right hand side exactly as in the proof of
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}
+and in particular using
+More on Algebra, Lemma \ref{more-algebra-lemma-upgrade-adjoint-tensor-RHom}
+gives the desired result.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-bc-surjection}
+Let $R \to A$ and $R \to R'$ be ring maps and $A' = A \otimes_R R'$.
+Assume
+\begin{enumerate}
+\item $A$ is pseudo-coherent as an $R$-module,
+\item $R'$ has finite tor dimension as an $R$-module (for example
+$R \to R'$ is flat),
+\item $A$ and $R'$ are tor independent over $R$.
+\end{enumerate}
+Then (\ref{equation-base-change}) is an isomorphism for $K \in D^+(R)$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-check-base-change-is-iso} and
+More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-evaluate-tensor-isomorphism} part (4).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bc-surjection}
+Let $R \to A$ and $R \to R'$ be ring maps and $A' = A \otimes_R R'$.
+Assume
+\begin{enumerate}
+\item $A$ is perfect as an $R$-module,
+\item $A$ and $R'$ are tor independent over $R$.
+\end{enumerate}
+Then (\ref{equation-base-change}) is an isomorphism for all $K \in D(R)$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-check-base-change-is-iso} and
+More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-evaluate-tensor-isomorphism} part (1).
+\end{proof}
+
+
+
+
+
+
+
+\section{Dualizing complexes}
+\label{section-dualizing}
+
+\noindent
+In this section we define dualizing complexes for Noetherian rings.
+
+\begin{definition}
+\label{definition-dualizing}
+Let $A$ be a Noetherian ring. A {\it dualizing complex} is a
+complex of $A$-modules $\omega_A^\bullet$ such that
+\begin{enumerate}
+\item $\omega_A^\bullet$ has finite injective dimension,
+\item $H^i(\omega_A^\bullet)$ is a finite $A$-module for all $i$, and
+\item $A \to R\Hom_A(\omega_A^\bullet, \omega_A^\bullet)$
+is a quasi-isomorphism.
+\end{enumerate}
+\end{definition}
+
+\noindent
+This definition takes some time getting used to. It is perhaps a good
+idea to prove some of the following lemmas yourself without reading
+the proofs.
+
+\begin{lemma}
+\label{lemma-finite-ext-into-bounded-injective}
+Let $A$ be a Noetherian ring. Let $K, L \in D_{\textit{Coh}}(A)$
+and assume $L$ has finite injective dimension. Then
+$R\Hom_A(K, L)$ is in $D_{\textit{Coh}}(A)$.
+\end{lemma}
+
+\begin{proof}
+Pick an integer $n$ and consider the distinguished triangle
+$$
+\tau_{\leq n}K \to K \to \tau_{\geq n + 1}K \to \tau_{\leq n}K[1]
+$$
+see Derived Categories, Remark
+\ref{derived-remark-truncation-distinguished-triangle}.
+Since $L$ has finite injective dimension we see
+that $R\Hom_A(\tau_{\geq n + 1}K, L)$ has vanishing
+cohomology in degrees $\geq c - n$ for some constant $c$.
+Hence, given $i$, we see that
+$\Ext^i_A(K, L) \to \Ext^i_A(\tau_{\leq n}K, L)$
is an isomorphism for all $n \gg - i$. By
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-coherent-internal-hom}
+applied to $\tau_{\leq n}K$ and $L$
we conclude that $\Ext^i_A(K, L)$ is
+a finite $A$-module for all $i$. Hence $R\Hom_A(K, L)$
+is indeed an object of $D_{\textit{Coh}}(A)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing}
+Let $A$ be a Noetherian ring. If $\omega_A^\bullet$ is a dualizing
+complex, then the functor
+$$
+D : K \longmapsto R\Hom_A(K, \omega_A^\bullet)
+$$
+is an anti-equivalence $D_{\textit{Coh}}(A) \to D_{\textit{Coh}}(A)$
+which exchanges $D^+_{\textit{Coh}}(A)$ and $D^-_{\textit{Coh}}(A)$
+and induces an anti-equivalence
+$D^b_{\textit{Coh}}(A) \to D^b_{\textit{Coh}}(A)$.
+Moreover $D \circ D$ is isomorphic to the identity functor.
+\end{lemma}
+
+\begin{proof}
+Let $K$ be an object of $D_{\textit{Coh}}(A)$. From
+Lemma \ref{lemma-finite-ext-into-bounded-injective}
+we see $R\Hom_A(K, \omega_A^\bullet)$ is an object of $D_{\textit{Coh}}(A)$.
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-evaluate-isomorphism-technical}
+and the assumptions on the dualizing complex
+we obtain a canonical isomorphism
+$$
+K = R\Hom_A(\omega_A^\bullet, \omega_A^\bullet) \otimes_A^\mathbf{L} K
+\longrightarrow
+R\Hom_A(R\Hom_A(K, \omega_A^\bullet), \omega_A^\bullet)
+$$
+Thus our functor has a quasi-inverse and the proof is complete.
+\end{proof}
+
+\noindent
+Let $R$ be a ring. Recall that an object $L$ of $D(R)$ is
+{\it invertible} if it is an invertible object for the
+symmetric monoidal structure on $D(R)$ given by derived
+tensor product. In
+More on Algebra, Lemma \ref{more-algebra-lemma-invertible-derived}
+we have seen that this means that $L$ is perfect, that
+$L = \bigoplus H^n(L)[-n]$ with the sum finite and each $H^n(L)$
+finite projective, and that there is an open covering
+$\Spec(R) = \bigcup D(f_i)$ such that
+$L \otimes_R R_{f_i} \cong R_{f_i}[-n_i]$ for some integers $n_i$.
+
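+\noindent
+For example, if $R$ is local with maximal ideal $\mathfrak m$, then
+in the description above some $f_i$ does not lie in $\mathfrak m$,
+hence is a unit, and we find that an invertible object of $D(R)$ is
+simply a shift of the ring:
+$$
+L \cong R[-n] \quad\text{for some } n \in \mathbf{Z}.
+$$
+Compare with the local structure of $F(A)$ obtained at the end of
+the proof of the following lemma.
+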
+\begin{lemma}
+\label{lemma-equivalence-comes-from-invertible}
+Let $A$ be a Noetherian ring. Let
+$F : D^b_{\textit{Coh}}(A) \to D^b_{\textit{Coh}}(A)$ be an $A$-linear
+equivalence of categories. Then $F(A)$ is an invertible object of $D(A)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak m \subset A$ be a maximal ideal with residue field $\kappa$.
+Consider the object $F(\kappa)$. Since
+$\kappa = \Hom_{D(A)}(\kappa, \kappa)$ we find that all
+cohomology groups of $F(\kappa)$ are annihilated by $\mathfrak m$.
+We also see that
+$$
+\Ext^i_A(\kappa, \kappa) = \Ext^i_A(F(\kappa), F(\kappa))
+= \Hom_{D(A)}(F(\kappa), F(\kappa)[i])
+$$
+is zero for $i < 0$. Say $H^a(F(\kappa)) \not = 0$ and
+$H^b(F(\kappa)) \not = 0$ with $a$ minimal and $b$ maximal
+(so in particular $a \leq b$). Then there is a nonzero map
+$$
+F(\kappa) \to H^b(F(\kappa))[-b] \to H^a(F(\kappa))[-b]
+\to F(\kappa)[a - b]
+$$
+in $D(A)$ (nonzero because it induces a nonzero map on cohomology).
+This proves that $b = a$. We conclude that $F(\kappa) = \kappa[-a]$.
+
+\medskip\noindent
+Let $G$ be a quasi-inverse to our functor $F$. Arguing as above
+we find an integer $b$ such that $G(\kappa) = \kappa[-b]$.
+On composing we find $a + b = 0$. Let $E$ be a finite $A$-module
+which is annihilated by a power of $\mathfrak m$. Arguing by
+induction on the length of $E$ we find that $G(E) = E'[-b]$
+for some finite $A$-module $E'$ annihilated by a power of
+$\mathfrak m$. Then $E[-a] = F(E')$.
+Next, we consider the groups
+$$
+\Ext^i_A(A, E') = \Ext^i_A(F(A), F(E')) =
+\Hom_{D(A)}(F(A), E[-a + i])
+$$
+The left hand side is nonzero if and only if $i = 0$, in which case
+it equals $E'$. Applying this with $E = E' = \kappa$ and using
+Nakayama's lemma we find that $H^j(F(A))_\mathfrak m$ is zero for
+$j > a$ and generated by $1$ element for $j = a$. On the other hand, if
+$H^j(F(A))_\mathfrak m$ is not zero for some $j < a$, then
+there is a nonzero map $F(A) \to E[-a + i]$ for some $i \not = 0$ and some
+$E$ (More on Algebra, Lemma \ref{more-algebra-lemma-detect-cohomology})
+which is a contradiction.
+Thus we see that $F(A)_\mathfrak m = M[-a]$
+for some $A_\mathfrak m$-module $M$ generated by $1$ element.
+However, since
+$$
+A_\mathfrak m = \Hom_{D(A)}(A, A)_\mathfrak m =
+\Hom_{D(A)}(F(A), F(A))_\mathfrak m = \Hom_{A_\mathfrak m}(M, M)
+$$
+we see that $M \cong A_\mathfrak m$. We conclude that there exists
+an element $f \in A$, $f \not \in \mathfrak m$ such that
+$F(A)_f$ is isomorphic to $A_f[-a]$. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-unique}
+Let $A$ be a Noetherian ring. If $\omega_A^\bullet$ and
+$(\omega'_A)^\bullet$ are dualizing complexes, then
+$(\omega'_A)^\bullet$ is quasi-isomorphic to
+$\omega_A^\bullet \otimes_A^\mathbf{L} L$
+for some invertible object $L$ of $D(A)$.
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-dualizing} and
+\ref{lemma-equivalence-comes-from-invertible} the functor
+$K \mapsto R\Hom_A(R\Hom_A(K, \omega_A^\bullet), (\omega_A')^\bullet)$
+maps $A$ to an invertible object $L$. In other words, there is
+an isomorphism
+$$
+L \longrightarrow R\Hom_A(\omega_A^\bullet, (\omega_A')^\bullet)
+$$
+Since $L$ has finite tor dimension, this means that we can apply
+More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-evaluate-isomorphism-technical}
+to see that
+$$
+R\Hom_A(\omega_A^\bullet, (\omega'_A)^\bullet) \otimes_A^\mathbf{L} K
+\longrightarrow
+R\Hom_A(R\Hom_A(K, \omega_A^\bullet), (\omega_A')^\bullet)
+$$
+is an isomorphism for $K$ in $D^b_{\textit{Coh}}(A)$.
+In particular, setting $K = \omega_A^\bullet$ finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-localize}
+Let $A$ be a Noetherian ring. Let $B = S^{-1}A$ be a localization.
+If $\omega_A^\bullet$ is a dualizing
+complex, then $\omega_A^\bullet \otimes_A B$ is a dualizing
+complex for $B$.
+\end{lemma}
+
+\begin{proof}
+Let $\omega_A^\bullet \to I^\bullet$ be a quasi-isomorphism
+with $I^\bullet$ a bounded complex of injectives.
+Then $S^{-1}I^\bullet$ is a bounded complex of injective
+$B = S^{-1}A$-modules (Lemma \ref{lemma-localization-injective-modules})
+representing $\omega_A^\bullet \otimes_A B$.
+Thus $\omega_A^\bullet \otimes_A B$ has finite injective dimension.
+Since $H^i(\omega_A^\bullet \otimes_A B) = H^i(\omega_A^\bullet) \otimes_A B$
+by flatness of $A \to B$ we see that $\omega_A^\bullet \otimes_A B$
+has finite cohomology modules. Finally, the map
+$$
+B \longrightarrow
+R\Hom_A(\omega_A^\bullet \otimes_A B, \omega_A^\bullet \otimes_A B)
+$$
+is a quasi-isomorphism as formation of internal hom commutes with
+flat base change in this case, see
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-glue}
+Let $A$ be a Noetherian ring. Let $f_1, \ldots, f_n \in A$
+generate the unit ideal. If $\omega_A^\bullet$ is a complex
+of $A$-modules such that $(\omega_A^\bullet)_{f_i}$ is a dualizing
+complex for $A_{f_i}$ for all $i$, then $\omega_A^\bullet$ is a dualizing
+complex for $A$.
+\end{lemma}
+
+\begin{proof}
+Consider the double complex
+$$
+\prod\nolimits_{i_0} (\omega_A^\bullet)_{f_{i_0}}
+\to
+\prod\nolimits_{i_0 < i_1} (\omega_A^\bullet)_{f_{i_0}f_{i_1}}
+\to \ldots
+$$
+The associated total complex is quasi-isomorphic to $\omega_A^\bullet$
+for example by Descent, Remark \ref{descent-remark-standard-covering}
+or by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-alternating-cech-complex-complex-computes-cohomology}.
+By assumption the complexes $(\omega_A^\bullet)_{f_i}$ have
+finite injective dimension as complexes of $A_{f_i}$-modules.
+This implies that each of the complexes
+$(\omega_A^\bullet)_{f_{i_0} \ldots f_{i_p}}$, $p > 0$ has
+finite injective dimension over $A_{f_{i_0} \ldots f_{i_p}}$,
+see Lemma \ref{lemma-localization-injective-modules}.
+This in turn implies that each of the complexes
+$(\omega_A^\bullet)_{f_{i_0} \ldots f_{i_p}}$, $p > 0$ has
+finite injective dimension over $A$, see
+Lemma \ref{lemma-injective-flat}. Hence $\omega_A^\bullet$
+has finite injective dimension as a complex of $A$-modules
+(as it can be represented by a complex endowed with
+a finite filtration whose graded parts have finite injective
+dimension). Since $H^n(\omega_A^\bullet)_{f_i}$ is a finite
+$A_{f_i}$-module for each $n$ and $i$ we see that $H^n(\omega_A^\bullet)$
+is a finite $A$-module, see Algebra, Lemma \ref{algebra-lemma-cover}.
+Finally, the (derived) base change of the map
+$A \to R\Hom_A(\omega_A^\bullet, \omega_A^\bullet)$ to $A_{f_i}$
+is the map
+$A_{f_i} \to R\Hom_A((\omega_A^\bullet)_{f_i}, (\omega_A^\bullet)_{f_i})$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}.
+Hence we deduce that
+$A \to R\Hom_A(\omega_A^\bullet, \omega_A^\bullet)$
+is an isomorphism and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-finite}
+Let $A \to B$ be a finite ring map of Noetherian rings.
+Let $\omega_A^\bullet$ be a dualizing complex.
+Then $R\Hom(B, \omega_A^\bullet)$ is a dualizing complex for $B$.
+\end{lemma}
+
+\begin{proof}
+Let $\omega_A^\bullet \to I^\bullet$ be a quasi-isomorphism
+with $I^\bullet$ a bounded complex of injectives.
+Then $\Hom_A(B, I^\bullet)$ is a bounded complex of injective
+$B$-modules (Lemma \ref{lemma-hom-injective}) representing
+$R\Hom(B, \omega_A^\bullet)$.
+Thus $R\Hom(B, \omega_A^\bullet)$ has finite injective dimension.
+By Lemma \ref{lemma-exact-support-coherent} it is an object of
+$D_{\textit{Coh}}(B)$. Finally, we compute
+$$
+\Hom_{D(B)}(R\Hom(B, \omega_A^\bullet), R\Hom(B, \omega_A^\bullet)) =
+\Hom_{D(A)}(R\Hom(B, \omega_A^\bullet), \omega_A^\bullet) = B
+$$
+and for $n \not = 0$ we compute
+$$
+\Hom_{D(B)}(R\Hom(B, \omega_A^\bullet), R\Hom(B, \omega_A^\bullet)[n]) =
+\Hom_{D(A)}(R\Hom(B, \omega_A^\bullet), \omega_A^\bullet[n]) = 0
+$$
+which proves the last property of a dualizing complex.
+In the displayed equations, the first
+equality holds by Lemma \ref{lemma-right-adjoint}
+and the second equality holds by Lemma \ref{lemma-dualizing}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-quotient}
+Let $A \to B$ be a surjective homomorphism of Noetherian rings.
+Let $\omega_A^\bullet$ be a dualizing complex.
+Then $R\Hom(B, \omega_A^\bullet)$ is a dualizing complex for $B$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-dualizing-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-polynomial-ring}
+Let $A$ be a Noetherian ring. If $\omega_A^\bullet$ is a dualizing
+complex, then $\omega_A^\bullet \otimes_A A[x]$ is a dualizing
+complex for $A[x]$.
+\end{lemma}
+
+\begin{proof}
+Set $B = A[x]$ and $\omega_B^\bullet = \omega_A^\bullet \otimes_A B$.
+It follows from Lemma \ref{lemma-injective-dimension-over-polynomial-ring}
+and More on Algebra, Lemma \ref{more-algebra-lemma-finite-injective-dimension}
+that $\omega_B^\bullet$ has finite injective dimension.
+Since $H^i(\omega_B^\bullet) = H^i(\omega_A^\bullet) \otimes_A B$
+by flatness of $A \to B$ we see that $\omega_A^\bullet \otimes_A B$
+has finite cohomology modules. Finally, the map
+$$
+B \longrightarrow R\Hom_B(\omega_B^\bullet, \omega_B^\bullet)
+$$
+is a quasi-isomorphism as formation of internal hom commutes with
+flat base change in this case, see
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-dualizing-essentially-finite-type}
+Let $A$ be a Noetherian ring which has a dualizing complex.
+Then any $A$-algebra essentially of finite type over $A$
+has a dualizing complex.
+\end{proposition}
+
+\begin{proof}
+This follows from a combination of
+Lemmas \ref{lemma-dualizing-localize},
+\ref{lemma-dualizing-quotient}, and \ref{lemma-dualizing-polynomial-ring}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-find-function}
+Let $A$ be a Noetherian ring. Let $\omega_A^\bullet$ be a dualizing
+complex. Let $\mathfrak m \subset A$ be a maximal ideal and set
+$\kappa = A/\mathfrak m$. Then
+$R\Hom_A(\kappa, \omega_A^\bullet) \cong \kappa[n]$ for some
+$n \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+This is true because $R\Hom_A(\kappa, \omega_A^\bullet)$ is a dualizing
+complex over $\kappa$ (Lemma \ref{lemma-dualizing-quotient}),
+because dualizing complexes over $\kappa$ are unique up to shifts
+(Lemma \ref{lemma-dualizing-unique}), and because $\kappa$ is a
+dualizing complex over $\kappa$.
+\end{proof}
+
+
+
+
+\section{Dualizing complexes over local rings}
+\label{section-dualizing-local}
+
+\noindent
+In this section $(A, \mathfrak m, \kappa)$ will be a Noetherian local
+ring endowed with a dualizing complex $\omega_A^\bullet$ such that
+the integer $n$ of Lemma \ref{lemma-find-function} is zero.
+More precisely, we assume that $R\Hom_A(\kappa, \omega_A^\bullet) = \kappa[0]$.
+In this case we will say that the dualizing complex is {\it normalized}.
+Observe that a normalized dualizing complex is unique up to
+isomorphism and that any other dualizing complex for $A$ is isomorphic
+to a shift of a normalized one (Lemma \ref{lemma-dualizing-unique}).
+
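+\noindent
+For example, if $A = \kappa$ is a field, then $\kappa[0]$ is a
+normalized dualizing complex. If $A$ is a regular local ring of
+dimension $d$, then $A[d]$ is a normalized dualizing complex:
+a standard computation with the Koszul complex on a regular system
+of parameters gives
+$$
+\Ext^i_A(\kappa, A) =
+\left\{
+\begin{matrix}
+\kappa & \text{if } i = d \\
+0 & \text{else}
+\end{matrix}
+\right.
+$$
+so that $R\Hom_A(\kappa, A[d]) = \kappa[0]$ as required.
+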
+\begin{lemma}
+\label{lemma-normalized-finite}
+Let $(A, \mathfrak m, \kappa) \to (B, \mathfrak m', \kappa')$
+be a finite local map of Noetherian local rings. Let $\omega_A^\bullet$
+be a normalized dualizing complex. Then
+$\omega_B^\bullet = R\Hom(B, \omega_A^\bullet)$ is a
+normalized dualizing complex for $B$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-dualizing-finite} the complex
+$\omega_B^\bullet$ is dualizing for $B$. We have
+$$
+R\Hom_B(\kappa', \omega_B^\bullet) =
+R\Hom_B(\kappa', R\Hom(B, \omega_A^\bullet)) =
+R\Hom_A(\kappa', \omega_A^\bullet)
+$$
+by Lemma \ref{lemma-right-adjoint}. Since $\kappa'$ is isomorphic
+to a finite direct sum of copies of $\kappa$ as an $A$-module
+and since $\omega_A^\bullet$ is normalized, we
+see that this complex has cohomology only in degree $0$.
+Thus $\omega_B^\bullet$ is a normalized dualizing complex as well.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-normalized-quotient}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local
+ring with normalized dualizing complex $\omega_A^\bullet$.
+Let $A \to B$ be surjective. Then
+$\omega_B^\bullet = R\Hom_A(B, \omega_A^\bullet)$ is a
+normalized dualizing complex for $B$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-normalized-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-equivalence-finite-length}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local
+ring. Let $F$ be an $A$-linear self-equivalence of the category of
+finite length $A$-modules. Then $F$ is isomorphic to the identity functor.
+\end{lemma}
+
+\begin{proof}
+Since $\kappa$ is the unique simple object of the category we have
+$F(\kappa) \cong \kappa$. Since our category is abelian, we find that
+$F$ is exact. Hence $F(E)$ has the same length as $E$ for all finite
+length modules $E$.
+Since $\Hom(E, \kappa) = \Hom(F(E), F(\kappa)) \cong \Hom(F(E), \kappa)$
+we conclude from Nakayama's lemma that $E$ and $F(E)$ have the same
+number of generators. Hence $F(A/\mathfrak m^n)$ is a cyclic $A$-module.
+Pick a generator $e \in F(A/\mathfrak m^n)$.
+Since $F$ is $A$-linear we conclude that $\mathfrak m^n e = 0$.
+Hence there is a well defined map $A/\mathfrak m^n \to F(A/\mathfrak m^n)$
+sending $1$ to $e$, which has to be
+an isomorphism as the lengths are equal. Pick an element
+$$
+e \in \lim F(A/\mathfrak m^n)
+$$
+which maps to a generator for all $n$ (small argument omitted).
+Then we obtain a system of isomorphisms
+$A/\mathfrak m^n \to F(A/\mathfrak m^n)$ compatible with all
+$A$-module maps $A/\mathfrak m^n \to A/\mathfrak m^{n'}$ (by $A$-linearity
+of $F$ again). Since any finite length module is a cokernel
+of a map between direct sums of cyclic modules, we obtain the isomorphism
+of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-finite-length}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local
+ring with normalized dualizing complex $\omega_A^\bullet$.
+Let $E$ be an injective hull of $\kappa$. Then there exists
+a functorial isomorphism
+$$
+R\Hom_A(N, \omega_A^\bullet) = \Hom_A(N, E)[0]
+$$
+for $N$ running through the finite length $A$-modules.
+\end{lemma}
+
+\begin{proof}
+By induction on the length of $N$ we see that $R\Hom_A(N, \omega_A^\bullet)$
+is a module of finite length sitting in degree $0$. Thus
+$R\Hom_A(-, \omega_A^\bullet)$ induces an anti-equivalence
+on the category of finite length modules. Since the same is true
+for $\Hom_A(-, E)$ by Proposition \ref{proposition-matlis} we see that
+$$
+N \longmapsto \Hom_A(R\Hom_A(N, \omega_A^\bullet), E)
+$$
+is an equivalence as in Lemma \ref{lemma-equivalence-finite-length}.
+Hence it is isomorphic to the identity functor.
+Since $\Hom_A(-, E)$ applied twice is the identity
+(Proposition \ref{proposition-matlis}) we obtain
+the statement of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sitting-in-degrees}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring with
+normalized dualizing complex $\omega_A^\bullet$. Let $M$ be a finite
+$A$-module and let $d = \dim(\text{Supp}(M))$. Then
+\begin{enumerate}
+\item if $\Ext^i_A(M, \omega_A^\bullet)$ is nonzero, then
+$i \in \{-d, \ldots, 0\}$,
+\item the dimension of the support of $\Ext^i_A(M, \omega_A^\bullet)$
+is at most $-i$,
+\item $\text{depth}(M)$ is the smallest integer $\delta \geq 0$ such that
+$\Ext^{-\delta}_A(M, \omega_A^\bullet) \not = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We prove this by induction on $d$. If $d = 0$, this follows from
+Lemma \ref{lemma-dualizing-finite-length} and Matlis duality
+(Proposition \ref{proposition-matlis}) which guarantees that
+$\Hom_A(M, E)$ is nonzero if $M$ is nonzero.
+
+\medskip\noindent
+Assume the result holds for modules with support of dimension $< d$ and that
+$M$ has depth $> 0$. Choose an $f \in \mathfrak m$ which is a nonzerodivisor
+on $M$ and consider the short exact sequence
+$$
+0 \to M \to M \to M/fM \to 0
+$$
+Since $\dim(\text{Supp}(M/fM)) = d - 1$
+(Algebra, Lemma \ref{algebra-lemma-one-equation-module}) we
+may apply the induction hypothesis.
+Writing
+$E^i = \Ext^i_A(M, \omega_A^\bullet)$ and
+$F^i = \Ext^i_A(M/fM, \omega_A^\bullet)$
+we obtain a long exact sequence
+$$
+\ldots \to F^i \to E^i \xrightarrow{f} E^i \to F^{i + 1} \to \ldots
+$$
+By induction $E^i/fE^i = 0$ for
+$i + 1 \not \in \{-\dim(\text{Supp}(M/fM)), \ldots, -\text{depth}(M/fM)\}$.
+By Nakayama's lemma (Algebra, Lemma \ref{algebra-lemma-NAK})
+and Algebra, Lemma \ref{algebra-lemma-depth-drops-by-one}
+we conclude $E^i = 0$ for
+$i \not \in \{-\dim(\text{Supp}(M)), \ldots, -\text{depth}(M)\}$.
+Moreover, in the boundary case $i = - \text{depth}(M)$ we deduce that $E^i$
+is nonzero as $F^{i + 1}$ is nonzero by induction.
+Since $E^i/fE^i \subset F^{i + 1}$ we get
+$$
+\dim(\text{Supp}(F^{i + 1})) \geq \dim(\text{Supp}(E^i/fE^i))
+\geq \dim(\text{Supp}(E^i)) - 1
+$$
+where the second inequality holds by the lemma cited above.
+This gives the dimension estimate (2).
+
+\medskip\noindent
+If $M$ has depth $0$ and $d > 0$ we let $N = M[\mathfrak m^\infty]$ and set
+$M' = M/N$ (compare with Lemma \ref{lemma-divide-by-torsion}).
+Then $M'$ has depth $> 0$ and $\dim(\text{Supp}(M')) = d$.
+Thus we know the result for $M'$ and since
+$R\Hom_A(N, \omega_A^\bullet) = \Hom_A(N, E)$
+(Lemma \ref{lemma-dualizing-finite-length})
+the long exact cohomology sequence of $\Ext$'s implies the
+result for $M$.
+\end{proof}
+
+\begin{remark}
+\label{remark-vanishing-for-arbitrary-modules}
+Let $(A, \mathfrak m)$ and $\omega_A^\bullet$ be as in
+Lemma \ref{lemma-sitting-in-degrees}.
+By More on Algebra, Lemma \ref{more-algebra-lemma-injective-amplitude}
+we see that $\omega_A^\bullet$ has injective-amplitude in $[-d, 0]$
+because part (3) of that lemma applies.
+In particular, for any $A$-module $M$ (not necessarily finite) we have
+$\Ext^i_A(M, \omega_A^\bullet) = 0$ for $i \not \in \{-d, \ldots, 0\}$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-local-CM}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring
+with normalized dualizing complex $\omega_A^\bullet$. Let $M$
+be a finite $A$-module. The following are equivalent
+\begin{enumerate}
+\item $M$ is Cohen-Macaulay,
+\item $\Ext^i_A(M, \omega_A^\bullet)$ is nonzero for a single $i$,
+\item $\Ext^{-i}_A(M, \omega_A^\bullet)$ is zero for
+$i \not = \dim(\text{Supp}(M))$.
+\end{enumerate}
+Denote by $CM_d$ the category of finite Cohen-Macaulay $A$-modules
+of depth $d$. Then $M \mapsto \Ext^{-d}_A(M, \omega_A^\bullet)$
+defines an anti-auto-equivalence of $CM_d$.
+\end{lemma}
+
+\begin{proof}
+We will use the results of Lemma \ref{lemma-sitting-in-degrees}
+without further mention. Fix a finite module $M$.
+If $M$ is Cohen-Macaulay, then only
+$\Ext^{-d}_A(M, \omega_A^\bullet)$ can be nonzero,
+hence (1) $\Rightarrow$ (3).
+The implication (3) $\Rightarrow$ (2) is immediate.
+Assume (2) and let $N = \Ext^{-\delta}_A(M, \omega_A^\bullet)$
+be the nonzero $\Ext$ where $\delta = \text{depth}(M)$. Then, since
+$$
+M[0] = R\Hom_A(R\Hom_A(M, \omega_A^\bullet), \omega_A^\bullet) =
+R\Hom_A(N[\delta], \omega_A^\bullet)
+$$
+(Lemma \ref{lemma-dualizing})
+we conclude that $M = \Ext_A^{-\delta}(N, \omega_A^\bullet)$.
+Thus $\delta \geq \dim(\text{Supp}(M))$. However,
+since we also know that $\delta \leq \dim(\text{Supp}(M))$
+(Algebra, Lemma \ref{algebra-lemma-bound-depth}) we conclude that $M$ is
+Cohen-Macaulay.
+
+\medskip\noindent
+To prove the final statement, it suffices to show that
+$N = \Ext^{-d}_A(M, \omega_A^\bullet)$ is in $CM_d$
+for $M$ in $CM_d$. Above we have seen that
+$M[0] = R\Hom_A(N[d], \omega_A^\bullet)$ and this proves the
+desired result by the equivalence of (1) and (3).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-artinian}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local
+ring with normalized dualizing complex $\omega_A^\bullet$.
+If $\dim(A) = 0$, then $\omega_A^\bullet \cong E[0]$
+where $E$ is an injective hull of the residue field.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-dualizing-finite-length}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-divide-by-finite-length-ideal}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local
+ring with normalized dualizing complex. Let $I \subset \mathfrak m$ be an
+ideal of finite length. Set $B = A/I$. Then there is a distinguished
+triangle
+$$
+\omega_B^\bullet \to \omega_A^\bullet \to \Hom_A(I, E)[0] \to
+\omega_B^\bullet[1]
+$$
+in $D(A)$ where $E$ is an injective hull of $\kappa$ and
+$\omega_B^\bullet$ is a normalized dualizing complex for $B$.
+\end{lemma}
+
+\begin{proof}
+Use the short exact sequence $0 \to I \to A \to B \to 0$
+and Lemmas \ref{lemma-dualizing-finite-length} and
+\ref{lemma-normalized-quotient}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-divide-by-nonzerodivisor}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local
+ring with normalized dualizing complex $\omega_A^\bullet$.
+Let $f \in \mathfrak m$ be a
+nonzerodivisor. Set $B = A/(f)$. Then there is a distinguished
+triangle
+$$
+\omega_B^\bullet \to \omega_A^\bullet \to \omega_A^\bullet \to
+\omega_B^\bullet[1]
+$$
+in $D(A)$ where $\omega_B^\bullet$ is a normalized dualizing complex
+for $B$.
+\end{lemma}
+
+\begin{proof}
+Use the short exact sequence $0 \to A \to A \to B \to 0$
+and Lemma \ref{lemma-normalized-quotient}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-nonvanishing-generically-local}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring with
+normalized dualizing complex $\omega_A^\bullet$.
+Let $\mathfrak p$ be a minimal prime of $A$ with
+$\dim(A/\mathfrak p) = e$. Then
+$H^i(\omega_A^\bullet)_\mathfrak p$ is nonzero
+if and only if $i = -e$.
+\end{lemma}
+
+\begin{proof}
+Since $A_\mathfrak p$ has dimension zero, there exists an integer
+$n > 0$ such that $\mathfrak p^nA_\mathfrak p$ is zero.
+Set $B = A/\mathfrak p^n$ and
+$\omega_B^\bullet = R\Hom_A(B, \omega_A^\bullet)$.
+Since $B_\mathfrak p = A_\mathfrak p$ we see that
+$$
+(\omega_B^\bullet)_\mathfrak p =
+R\Hom_A(B, \omega_A^\bullet) \otimes_A^\mathbf{L} A_\mathfrak p =
+R\Hom_{A_\mathfrak p}(B_\mathfrak p, (\omega_A^\bullet)_\mathfrak p) =
+(\omega_A^\bullet)_\mathfrak p
+$$
+The second equality holds by
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}.
+By Lemma \ref{lemma-normalized-quotient} we may replace $A$ by $B$.
+After doing so, we see that $\dim(A) = e$. Then we see that
+$H^i(\omega_A^\bullet)_\mathfrak p$ can only be nonzero if $i = -e$
+by Lemma \ref{lemma-sitting-in-degrees} parts (1) and (2).
+On the other hand, since $(\omega_A^\bullet)_\mathfrak p$
+is a dualizing complex for the nonzero ring $A_\mathfrak p$
+(Lemma \ref{lemma-dualizing-localize})
+we see that the remaining module has to be nonzero.
+\end{proof}
+
+
+
+
+
+\section{Dualizing complexes and dimension functions}
+\label{section-dimension-function}
+
+\noindent
+Our results in the local setting have the following consequence:
+a Noetherian ring which has a dualizing complex is a
+universally catenary ring of finite dimension.
+
+\begin{lemma}
+\label{lemma-nonvanishing-generically}
+Let $A$ be a Noetherian ring. Let $\mathfrak p$ be a minimal prime
+of $A$. Then $H^i(\omega_A^\bullet)_\mathfrak p$ is nonzero
+for exactly one $i$.
+\end{lemma}
+
+\begin{proof}
+The complex $\omega_A^\bullet \otimes_A A_\mathfrak p$
+is a dualizing complex for $A_\mathfrak p$
+(Lemma \ref{lemma-dualizing-localize}).
+The dimension of $A_\mathfrak p$ is zero as $\mathfrak p$
+is minimal. Hence the result follows from
+Lemma \ref{lemma-dualizing-artinian}.
+\end{proof}
+
+\noindent
+Let $A$ be a Noetherian ring and let $\omega_A^\bullet$ be a dualizing
+complex. Lemma \ref{lemma-find-function} allows us to define a function
+$$
+\delta = \delta_{\omega_A^\bullet} : \Spec(A) \longrightarrow \mathbf{Z}
+$$
+by mapping $\mathfrak p$ to the integer of Lemma \ref{lemma-find-function}
+for the dualizing complex $(\omega_A^\bullet)_\mathfrak p$
+over $A_\mathfrak p$ (Lemma \ref{lemma-dualizing-localize})
+and the residue field $\kappa(\mathfrak p)$. To be precise, we define
+$\delta(\mathfrak p)$ to be the unique integer such that
+$$
+(\omega_A^\bullet)_\mathfrak p[-\delta(\mathfrak p)]
+$$
+is a normalized dualizing complex over the Noetherian local ring
+$A_\mathfrak p$.
+
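+\noindent
+For example, suppose $A$ is a regular Noetherian ring of finite
+dimension and $\omega_A^\bullet = A[0]$. Since
+$A_\mathfrak p[\dim(A_\mathfrak p)]$ is a normalized dualizing complex
+over the regular local ring $A_\mathfrak p$ (a standard computation
+with the Koszul complex), we obtain
+$$
+\delta(\mathfrak p) = -\dim(A_\mathfrak p)
+$$
+for every prime $\mathfrak p$. The reader may check directly that this
+is a dimension function on $\Spec(A)$, using that $A$ is catenary.
+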
+\begin{lemma}
+\label{lemma-quotient-function}
+Let $A$ be a Noetherian ring and let $\omega_A^\bullet$ be a dualizing
+complex. Let $A \to B$ be a surjective ring map and let
+$\omega_B^\bullet = R\Hom(B, \omega_A^\bullet)$ be the dualizing
+complex for $B$ of Lemma \ref{lemma-dualizing-quotient}. Then we have
+$$
+\delta_{\omega_B^\bullet} = \delta_{\omega_A^\bullet}|_{\Spec(B)}
+$$
+\end{lemma}
+
+\begin{proof}
+This follows from the definition of the functions and
+Lemma \ref{lemma-normalized-quotient}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dimension-function}
+Let $A$ be a Noetherian ring and let $\omega_A^\bullet$ be a dualizing
+complex. The function $\delta = \delta_{\omega_A^\bullet}$
+defined above is a dimension function
+(Topology, Definition \ref{topology-definition-dimension-function}).
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p \subset \mathfrak q$ be an immediate specialization.
+We have to show that $\delta(\mathfrak p) = \delta(\mathfrak q) + 1$.
+We may replace $A$ by $A/\mathfrak p$, the complex $\omega_A^\bullet$ by
+$\omega_{A/\mathfrak p}^\bullet = R\Hom(A/\mathfrak p, \omega_A^\bullet)$,
+the prime $\mathfrak p$ by $(0)$, and the prime $\mathfrak q$
+by $\mathfrak q/\mathfrak p$,
+see Lemma \ref{lemma-quotient-function}. Thus we may assume that
+$A$ is a domain, $\mathfrak p = (0)$, and $\mathfrak q$ is a prime
+ideal of height $1$.
+
+\medskip\noindent
+Then $H^i(\omega_A^\bullet)_{(0)}$ is nonzero
+for exactly one $i$, say $i_0$, by Lemma \ref{lemma-nonvanishing-generically}.
+In fact $i_0 = -\delta((0))$ because
+$(\omega_A^\bullet)_{(0)}[-\delta((0))]$
+is a normalized dualizing complex over the field $A_{(0)}$.
+
+\medskip\noindent
+On the other hand $(\omega_A^\bullet)_\mathfrak q[-\delta(\mathfrak q)]$
+is a normalized dualizing complex for $A_\mathfrak q$. By
+Lemma \ref{lemma-nonvanishing-generically-local}
+we see that
+$$
+H^e((\omega_A^\bullet)_\mathfrak q[-\delta(\mathfrak q)])_{(0)} =
+H^{e - \delta(\mathfrak q)}(\omega_A^\bullet)_{(0)}
+$$
+is nonzero only for $e = -\dim(A_\mathfrak q) = -1$.
+We conclude
+$$
+-\delta((0)) = -1 - \delta(\mathfrak q)
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-universally-catenary}
+Let $A$ be a Noetherian ring which has a dualizing
+complex. Then $A$ is universally catenary of finite dimension.
+\end{lemma}
+
+\begin{proof}
+Because $\Spec(A)$ has a dimension function by
+Lemma \ref{lemma-dimension-function}
+it is catenary, see
+Topology, Lemma \ref{topology-lemma-dimension-function-catenary}.
+Hence $A$ is catenary, see
+Algebra, Lemma \ref{algebra-lemma-catenary}.
+It follows from
+Proposition \ref{proposition-dualizing-essentially-finite-type}
+that $A$ is universally catenary.
+
+\medskip\noindent
+Because any dualizing complex $\omega_A^\bullet$ is
+in $D^b_{\textit{Coh}}(A)$ the values of the function
+$\delta_{\omega_A^\bullet}$ at minimal primes are bounded by
+Lemma \ref{lemma-nonvanishing-generically}.
+On the other hand, for a maximal ideal $\mathfrak m$ with
+residue field $\kappa$ the integer $i = -\delta(\mathfrak m)$
+is the unique integer such that
+$\Ext_A^i(\kappa, \omega_A^\bullet)$ is nonzero
+(Lemma \ref{lemma-find-function}).
+Since $\omega_A^\bullet$ has finite injective dimension
+these values are bounded too. Since the dimension of
+$A$ is the maximal value of $\delta(\mathfrak p) - \delta(\mathfrak m)$
+where $\mathfrak p \subset \mathfrak m$ is a pair
+consisting of a minimal prime and a maximal ideal we find that the
+dimension of $\Spec(A)$ is bounded.
+\end{proof}
+
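+\noindent
+This gives a practical obstruction: a Noetherian ring of infinite
+Krull dimension, or a Noetherian local ring which is not universally
+catenary (such rings exist by examples of Nagata), cannot have a
+dualizing complex.
+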
+\begin{lemma}
+\label{lemma-depth-dualizing-module}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring with
+normalized dualizing complex $\omega_A^\bullet$. Let $d = \dim(A)$
+and $\omega_A = H^{-d}(\omega_A^\bullet)$. Then
+\begin{enumerate}
+\item the support of $\omega_A$ is the union of the irreducible components
+of $\Spec(A)$ of dimension $d$,
+\item $\omega_A$ satisfies $(S_2)$, see
+Algebra, Definition \ref{algebra-definition-conditions}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-sitting-in-degrees} without further mention.
+By Lemma \ref{lemma-nonvanishing-generically-local} the support
+of $\omega_A$ contains the irreducible components of dimension $d$.
+Let $\mathfrak p \subset A$ be a prime. By Lemma \ref{lemma-dimension-function}
+the complex $(\omega_A^\bullet)_{\mathfrak p}[-\dim(A/\mathfrak p)]$
+is a normalized dualizing complex for $A_\mathfrak p$. Hence if
+$\dim(A/\mathfrak p) + \dim(A_\mathfrak p) < d$, then
+$(\omega_A)_\mathfrak p = 0$.
+This proves the support of $\omega_A$ is the union of the irreducible
+components of dimension $d$, because the complement of this union
+is exactly the primes $\mathfrak p$ of $A$ for which
+$\dim(A/\mathfrak p) + \dim(A_\mathfrak p) < d$ as $A$ is catenary
+(Lemma \ref{lemma-universally-catenary}).
+On the other hand, if $\dim(A/\mathfrak p) + \dim(A_\mathfrak p) = d$, then
+$$
+(\omega_A)_\mathfrak p =
+H^{-\dim(A_\mathfrak p)}\left(
+(\omega_A^\bullet)_{\mathfrak p}[-\dim(A/\mathfrak p)] \right)
+$$
+Hence in order to prove $\omega_A$ has $(S_2)$ it suffices to show that
+the depth of $\omega_A$ is at least $\min(\dim(A), 2)$.
+We prove this by induction on $\dim(A)$. The case $\dim(A) = 0$ is
+trivial.
+
+\medskip\noindent
+Assume $\text{depth}(A) > 0$. Choose a nonzerodivisor $f \in \mathfrak m$
+and set $B = A/fA$. Then $\dim(B) = \dim(A) - 1$ and we may apply the
+induction hypothesis to $B$. By Lemma \ref{lemma-divide-by-nonzerodivisor}
+we see that multiplication by $f$ is injective on $\omega_A$ and we get
+$\omega_A/f\omega_A \subset \omega_B$. This proves the depth of $\omega_A$
+is at least $1$. If $\dim(A) > 1$, then $\dim(B) > 0$ and $\omega_B$
+has depth $ > 0$. Hence $\omega_A$ has depth $> 1$ and we conclude in
+this case.
+
+\medskip\noindent
+Assume $\dim(A) > 0$ and $\text{depth}(A) = 0$. Let
+$I = A[\mathfrak m^\infty]$ and set $B = A/I$. Then $B$ has
+depth $\geq 1$ and $\omega_A = \omega_B$ by
+Lemma \ref{lemma-divide-by-finite-length-ideal}.
+Since we proved the result for $\omega_B$ above the proof is done.
+\end{proof}
+
+
+
+
+
+\section{The local duality theorem}
+\label{section-local-duality}
+
+\noindent
+The main result in this section is due to Grothendieck.
+
+\begin{lemma}
+\label{lemma-local-cohomology-of-dualizing}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Let $\omega_A^\bullet$ be a normalized dualizing complex.
+Let $Z = V(\mathfrak m) \subset \Spec(A)$.
+Then $E = R^0\Gamma_Z(\omega_A^\bullet)$ is an injective hull of
+$\kappa$ and $R\Gamma_Z(\omega_A^\bullet) = E[0]$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-local-cohomology-noetherian} we have
+$R\Gamma_{\mathfrak m} = R\Gamma_Z$. Thus
+$$
+R\Gamma_Z(\omega_A^\bullet) =
+R\Gamma_{\mathfrak m}(\omega_A^\bullet) =
+\text{hocolim}\ R\Hom_A(A/\mathfrak m^n, \omega_A^\bullet)
+$$
+by Lemma \ref{lemma-local-cohomology-ext}. Let $E'$ be an injective
+hull of the residue field.
+By Lemma \ref{lemma-dualizing-finite-length}
+we can find isomorphisms
+$$
+R\Hom_A(A/\mathfrak m^n, \omega_A^\bullet) \cong \Hom_A(A/\mathfrak m^n, E')[0]
+$$
+compatible with transition maps. Since
+$E' = \bigcup E'[\mathfrak m^n] = \colim \Hom_A(A/\mathfrak m^n, E')$
+by Lemma \ref{lemma-union-artinian}
+we conclude that $E \cong E'$ and that all other cohomology
+groups of the complex $R\Gamma_Z(\omega_A^\bullet)$ are zero.
+\end{proof}
+
+\begin{remark}
+\label{remark-specific-injective-hull}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring
+with a normalized dualizing complex $\omega_A^\bullet$.
+By Lemma \ref{lemma-local-cohomology-of-dualizing}
+above we see that $R\Gamma_Z(\omega_A^\bullet)$
+is an injective hull of the residue field placed in degree $0$.
+In fact, this gives a ``construction'' or ``realization''
+of the injective hull which is slightly more canonical than
+just picking any old injective hull. Namely, a normalized
+dualizing complex is unique up to isomorphism, with group
+of automorphisms the group of units of $A$, whereas an
+injective hull of $\kappa$ is unique up to isomorphism, with
+group of automorphisms the group of units of the completion
+$A^\wedge$ of $A$ with respect to $\mathfrak m$.
+\end{remark}
+
+\noindent
+Here is the main result of this section.
+
+\begin{theorem}
+\label{theorem-local-duality}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Let $\omega_A^\bullet$ be a normalized dualizing complex.
+Let $E$ be an injective hull of the residue field.
+Let $Z = V(\mathfrak m) \subset \Spec(A)$.
+Denote ${}^\wedge$ derived completion with respect to $\mathfrak m$.
+Then
+$$
+R\Hom_A(K, \omega_A^\bullet)^\wedge \cong R\Hom_A(R\Gamma_Z(K), E[0])
+$$
+for $K$ in $D(A)$.
+\end{theorem}
+
+\begin{proof}
+Observe that $E[0] \cong R\Gamma_Z(\omega_A^\bullet)$ by
+Lemma \ref{lemma-local-cohomology-of-dualizing}.
+By More on Algebra, Lemma \ref{more-algebra-lemma-completion-RHom}
+completion on the left hand side goes inside.
+Thus we have to prove
+$$
+R\Hom_A(K^\wedge, (\omega_A^\bullet)^\wedge)
+=
+R\Hom_A(R\Gamma_Z(K), R\Gamma_Z(\omega_A^\bullet))
+$$
+This follows from the equivalence between
+$D_{comp}(A, \mathfrak m)$ and $D_{\mathfrak m^\infty\text{-torsion}}(A)$
+given in Proposition \ref{proposition-torsion-complete}.
+More precisely, it is a special case of Lemma \ref{lemma-compare-RHom}.
+\end{proof}
+
+\noindent
+Here is a special case of the theorem above.
+
+\begin{lemma}
+\label{lemma-special-case-local-duality}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Let $\omega_A^\bullet$ be a normalized dualizing complex.
+Let $E$ be an injective hull of the residue field.
+Let $K \in D_{\textit{Coh}}(A)$. Then
+$$
+\Ext^{-i}_A(K, \omega_A^\bullet)^\wedge =
+\Hom_A(H^i_{\mathfrak m}(K), E)
+$$
+where ${}^\wedge$ denotes $\mathfrak m$-adic completion.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-dualizing} we see that $R\Hom_A(K, \omega_A^\bullet)$
+is an object of $D_{\textit{Coh}}(A)$.
+It follows that the cohomology modules of the derived completion
+of $R\Hom_A(K, \omega_A^\bullet)$ are equal to the usual completions
+$\Ext^i_A(K, \omega_A^\bullet)^\wedge$ by
+More on Algebra, Lemma
+\ref{more-algebra-lemma-derived-completion-pseudo-coherent}.
+On the other hand, we have $R\Gamma_{\mathfrak m} = R\Gamma_Z$
+for $Z = V(\mathfrak m)$ by Lemma \ref{lemma-local-cohomology-noetherian}.
+Moreover, the functor $\Hom_A(-, E)$ is exact hence
+factors through cohomology.
+Hence the lemma is consequence of
+Theorem \ref{theorem-local-duality}.
+\end{proof}
+
+
+
+
+
+\section{Dualizing modules}
+\label{section-dualizing-module}
+
+\noindent
+If $(A, \mathfrak m, \kappa)$ is a Noetherian local ring and
+$\omega_A^\bullet$ is a normalized dualizing complex, then
+we say the module $\omega_A = H^{-\dim(A)}(\omega_A^\bullet)$, described
+in Lemma \ref{lemma-depth-dualizing-module},
+is a {\it dualizing module}
+for $A$. This module is a canonical module of $A$.
+It seems generally agreed upon to define a {\it canonical module}
+for a Noetherian local ring $(A, \mathfrak m, \kappa)$ to be
+a finite $A$-module $K$ such that
+$$
+\Hom_A(K, E) \cong H^{\dim(A)}_\mathfrak m(A)
+$$
+where $E$ is an injective hull of the residue field. A dualizing
+module is canonical because
+$$
+\Hom_A(H^{\dim(A)}_\mathfrak m(A), E) = (\omega_A)^\wedge
+$$
+by Lemma \ref{lemma-special-case-local-duality}
+and hence applying
+$\Hom_A(-, E)$ we get
+\begin{align*}
+\Hom_A(\omega_A, E)
+& =
+\Hom_A((\omega_A)^\wedge, E) \\
+& =
+\Hom_A(\Hom_A(H^{\dim(A)}_\mathfrak m(A), E), E) \\
+& = H^{\dim(A)}_\mathfrak m(A)
+\end{align*}
+the first equality because $E$ is $\mathfrak m$-power torsion, the
+second by the above, and the third by Matlis duality
+(Proposition \ref{proposition-matlis}).
+The utility of the definition
+of a canonical module given above lies in the fact that it makes sense
+even if $A$ does not have a dualizing complex.
+
+
+
+
+\section{Cohen-Macaulay rings}
+\label{section-CM}
+
+\noindent
+Cohen-Macaulay modules and rings were studied in
+Algebra, Sections \ref{algebra-section-CM} and \ref{algebra-section-CM-ring}.
+
+\begin{lemma}
+\label{lemma-depth-in-terms-dualizing-complex}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring with
+normalized dualizing complex $\omega_A^\bullet$.
+Then $\text{depth}(A)$ is equal to the smallest integer $\delta \geq 0$
+such that $H^{-\delta}(\omega_A^\bullet) \not = 0$.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from
+Lemma \ref{lemma-sitting-in-degrees}.
+Here are two other ways to see that it is true.
+
+\medskip\noindent
+First alternative. By Nakayama's lemma we see that
+$\delta$ is the smallest integer such that
+$\Hom_A(H^{-\delta}(\omega_A^\bullet), \kappa) \not = 0$.
+In other words, it is the smallest integer such that
+$\Ext_A^{-\delta}(\omega_A^\bullet, \kappa)$
+is nonzero. Using Lemma \ref{lemma-dualizing} and the fact that
+$\omega_A^\bullet$ is normalized this is equal to the
+smallest integer such that $\Ext_A^\delta(\kappa, A)$ is
+nonzero. This is equal to the depth of $A$ by
+Algebra, Lemma \ref{algebra-lemma-depth-ext}.
+
+\medskip\noindent
+Second alternative. By the local duality theorem
+(in the form of Lemma \ref{lemma-special-case-local-duality})
+$\delta$ is the smallest integer such that $H^\delta_\mathfrak m(A)$
+is nonzero. This is equal to the depth of $A$ by
+Lemma \ref{lemma-depth}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-apply-CM}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring
+with normalized dualizing complex $\omega_A^\bullet$
+and dualizing module $\omega_A = H^{-\dim(A)}(\omega_A^\bullet)$.
+The following are equivalent
+\begin{enumerate}
+\item $A$ is Cohen-Macaulay,
+\item $\omega_A^\bullet$ is concentrated in a single degree, and
+\item $\omega_A^\bullet = \omega_A[\dim(A)]$.
+\end{enumerate}
+In this case $\omega_A$ is a maximal Cohen-Macaulay module.
+\end{lemma}
+
+\begin{proof}
+Follows immediately from Lemma \ref{lemma-local-CM}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-has-dualizing-module-CM}
+Let $A$ be a Noetherian ring. If there exists a finite $A$-module
+$\omega_A$ such that $\omega_A[0]$ is a dualizing complex, then
+$A$ is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+We may replace $A$ by the localization at a prime
+(Lemma \ref{lemma-dualizing-localize} and
+Algebra, Definition \ref{algebra-definition-ring-CM}).
+In this case the result follows immediately from
+Lemma \ref{lemma-apply-CM}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM-open}
+Let $A$ be a Noetherian ring with dualizing complex $\omega_A^\bullet$.
+Let $M$ be a finite $A$-module. Then
+$$
+U = \{\mathfrak p \in \Spec(A) \mid M_\mathfrak p\text{ is Cohen-Macaulay}\}
+$$
+is an open subset of $\Spec(A)$ whose intersection with
+$\text{Supp}(M)$ is dense.
+\end{lemma}
+
+\begin{proof}
+If $\mathfrak p$ is a generic point of $\text{Supp}(M)$, then
+$\text{depth}(M_\mathfrak p) = \dim(M_\mathfrak p) = 0$
+and hence $\mathfrak p \in U$. This proves denseness.
+If $\mathfrak p \in U$, then we see that
+$$
+R\Hom_A(M, \omega_A^\bullet)_\mathfrak p =
+R\Hom_{A_\mathfrak p}(M_\mathfrak p, (\omega_A^\bullet)_\mathfrak p)
+$$
+has a unique nonzero cohomology module, say in degree $i_0$, by
+Lemma \ref{lemma-local-CM}.
+Since $R\Hom_A(M, \omega_A^\bullet)$
+has only a finite number of nonzero cohomology modules $H^i$
+and since each of these is a finite $A$-module, we can
+find an $f \in A$, $f \not \in \mathfrak p$ such that
+$(H^i)_f = 0$ for $i \not = i_0$. Then
+$R\Hom_A(M, \omega_A^\bullet)_f$ has a unique nonzero cohomology
+module and reversing the arguments just given we find
+that $D(f) \subset U$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-CM}
+Let $A$ be a Noetherian ring. If $A$ has a dualizing complex
+$\omega_A^\bullet$, then
+$\{\mathfrak p \in \Spec(A) \mid A_\mathfrak p\text{ is Cohen-Macaulay}\}$
+is a dense open subset of $\Spec(A)$.
+\end{lemma}
+
+\begin{proof}
+Immediate consequence of Lemma \ref{lemma-CM-open} and the definitions.
+\end{proof}
+
+
+
+
+
+
+\section{Gorenstein rings}
+\label{section-gorenstein}
+
+\noindent
+So far, the only explicit example of a dualizing complex we have seen
+is $\kappa[0]$ over a field $\kappa$, see the proof of
+Lemma \ref{lemma-find-function}.
+By Proposition \ref{proposition-dualizing-essentially-finite-type}
+this means that any finite type algebra over a field has a dualizing
+complex. However, it turns out that there are Noetherian (local) rings
+which do not have a dualizing complex. Namely, we have seen that
+a ring which has a dualizing complex is universally catenary
+(Lemma \ref{lemma-universally-catenary})
+but there are examples of
+Noetherian local rings which are not catenary, see
+Examples, Section \ref{examples-section-non-catenary-Noetherian-local}.
+
+\medskip\noindent
+Nonetheless many rings in algebraic geometry have dualizing complexes
+simply because they are quotients of Gorenstein rings. This condition
+is in fact both necessary and sufficient. That is: a Noetherian ring
+has a dualizing complex if and only if it is a quotient of a finite
+dimensional Gorenstein ring. This was conjectured by Sharp (\cite{Sharp})
+and proved by Kawasaki, see \cite[Corollary 1.4]{Kawasaki}.
+Returning to our current topic, here is the definition of Gorenstein rings.
+
+\begin{definition}
+\label{definition-gorenstein}
+Gorenstein rings.
+\begin{enumerate}
+\item Let $A$ be a Noetherian local ring. We say $A$ is {\it Gorenstein}
+if $A[0]$ is a dualizing complex for $A$.
+\item Let $A$ be a Noetherian ring. We say $A$ is {\it Gorenstein}
+if $A_\mathfrak p$ is Gorenstein for every prime $\mathfrak p$ of $A$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+This definition makes sense, because if $A[0]$ is a dualizing complex
+for $A$, then $S^{-1}A[0]$ is a dualizing complex for $S^{-1}A$ by
+Lemma \ref{lemma-dualizing-localize}.
+We will see later that a finite dimensional Noetherian ring is Gorenstein
+if and only if it has finite injective dimension as a module over itself.
+
+\begin{lemma}
+\label{lemma-gorenstein-CM}
+A Gorenstein ring is Cohen-Macaulay.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-apply-CM}.
+\end{proof}
+
+\noindent
+An example of a Gorenstein ring is a regular ring.
+
+\begin{lemma}
+\label{lemma-regular-gorenstein}
+A regular local ring is Gorenstein.
+A regular ring is Gorenstein.
+\end{lemma}
+
+\begin{proof}
+Let $A$ be a regular ring of finite dimension $d$. Then $A$ has finite
+global dimension $d$, see
+Algebra, Lemma \ref{algebra-lemma-finite-gl-dim-finite-dim-regular}.
+Hence $\Ext^{d + 1}_A(M, A) = 0$ for all $A$-modules $M$, see
+Algebra, Lemma \ref{algebra-lemma-projective-dimension-ext}.
+Thus $A$ has finite injective dimension as an $A$-module by
+More on Algebra, Lemma \ref{more-algebra-lemma-injective-amplitude}.
+It follows that $A[0]$ is a dualizing complex, hence $A$ is
+Gorenstein by the remark following the definition.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gorenstein}
+Let $A$ be a Noetherian ring.
+\begin{enumerate}
+\item If $A$ has a dualizing complex $\omega_A^\bullet$, then
+\begin{enumerate}
+\item $A$ is Gorenstein $\Leftrightarrow$ $\omega_A^\bullet$ is an invertible
+object of $D(A)$,
+\item $A_\mathfrak p$ is Gorenstein $\Leftrightarrow$
+$(\omega_A^\bullet)_\mathfrak p$ is an invertible object of
+$D(A_\mathfrak p)$,
+\item $\{\mathfrak p \in \Spec(A) \mid A_\mathfrak p\text{ is Gorenstein}\}$
+is an open subset.
+\end{enumerate}
+\item If $A$ is Gorenstein, then $A$ has a dualizing complex if and
+only if $A[0]$ is a dualizing complex.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For invertible objects of $D(A)$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-invertible-derived}
+and the discussion in Section \ref{section-dualizing}.
+
+\medskip\noindent
+By Lemma \ref{lemma-dualizing-localize} for every
+$\mathfrak p$ the complex $(\omega_A^\bullet)_\mathfrak p$ is a
+dualizing complex over $A_\mathfrak p$. By definition and uniqueness
+of dualizing complexes (Lemma \ref{lemma-dualizing-unique})
+we see that (1)(b) holds.
+
+\medskip\noindent
+To see (1)(c) assume that $A_\mathfrak p$ is Gorenstein.
+Let $n$ be the unique integer such that
+$H^{n}((\omega_A^\bullet)_\mathfrak p)$
+is nonzero and isomorphic to $A_\mathfrak p$.
+Since $\omega_A^\bullet$ is in $D^b_{\textit{Coh}}(A)$
+there are finitely many nonzero finite $A$-modules
+$H^i(\omega_A^\bullet)$. Thus there exists some
+$f \in A$, $f \not \in \mathfrak p$
+such that only $H^{n}((\omega_A^\bullet)_f)$
+is nonzero and generated by $1$ element over $A_f$.
+Since dualizing complexes are faithful (by definition)
+we conclude that $A_f \cong H^{n}((\omega_A^\bullet)_f)$.
+In this way we see that $A_\mathfrak q$ is Gorenstein
+for every $\mathfrak q \in D(f)$. This proves that the set
+in (1)(c) is open.
+
+\medskip\noindent
+Proof of (1)(a). The implication $\Leftarrow$ follows from (1)(b).
+The implication $\Rightarrow$ follows from the discussion
+in the previous paragraph, where we showed that if $A_\mathfrak p$
+is Gorenstein, then for some $f \in A$, $f \not \in \mathfrak p$
+the complex $(\omega_A^\bullet)_f$ has only one nonzero cohomology module
+which is invertible.
+
+\medskip\noindent
+If $A[0]$ is a dualizing complex then $A$ is Gorenstein by
+part (1). Conversely, we see that part (1) shows that
+$\omega_A^\bullet$ is locally isomorphic to a shift of $A$.
+Since being a dualizing complex is local
+(Lemma \ref{lemma-dualizing-glue})
+the result is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gorenstein-ext}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local ring.
+Then $A$ is Gorenstein if and only if $\Ext^i_A(\kappa, A)$
+is zero for $i \gg 0$.
+\end{lemma}
+
+\begin{proof}
+Observe that $A[0]$ is a dualizing complex for $A$ if and only
+if $A$ has finite injective dimension as an $A$-module
+(follows immediately from Definition \ref{definition-dualizing}).
+Thus the lemma follows from More on Algebra, Lemma
+\ref{more-algebra-lemma-finite-injective-dimension-Noetherian-local}.
+\end{proof}
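+
+\medskip\noindent
+Lemma \ref{lemma-gorenstein-ext} can be used to exhibit non-Gorenstein
+rings. For instance, consider $A = k[x, y]/(x^2, xy, y^2)$ over a field
+$k$. Since $\mathfrak m^2 = 0$ the first syzygy of $\kappa$ in a minimal
+free resolution is $\mathfrak m \cong \kappa^{\oplus 2}$, and hence every
+syzygy is a direct sum of copies of $\kappa$. Using the sequence
+$0 \to \mathfrak m \to A \to \kappa \to 0$ one checks that
+$\Ext^1_A(\kappa, A) \not = 0$, and then dimension shifting gives
+$\Ext^i_A(\kappa, A) \not = 0$ for all $i \geq 0$. Thus $A$ is
+Cohen-Macaulay (being Artinian local) but not Gorenstein.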
+
+\begin{lemma}
+\label{lemma-gorenstein-divide-by-nonzerodivisor}
+Let $(A, \mathfrak m, \kappa)$
+be a Noetherian local ring. Let $f \in \mathfrak m$ be a
+nonzerodivisor. Set $B = A/(f)$. Then $A$ is Gorenstein if and
+only if $B$ is Gorenstein.
+\end{lemma}
+
+\begin{proof}
+If $A$ is Gorenstein, then $B$ is Gorenstein by
+Lemma \ref{lemma-divide-by-nonzerodivisor}.
+Conversely, suppose that $B$ is Gorenstein. Then
+$\Ext^i_B(\kappa, B)$ is zero for $i \gg 0$
+(Lemma \ref{lemma-gorenstein-ext}).
+Recall that $R\Hom(B, -) : D(A) \to D(B)$ is a right adjoint
+to restriction (Lemma \ref{lemma-right-adjoint}).
+Hence
+$$
+R\Hom_A(\kappa, A) = R\Hom_B(\kappa, R\Hom(B, A)) =
+R\Hom_B(\kappa, B[1])
+$$
+The final equality by direct computation or by
+Lemma \ref{lemma-compute-for-effective-Cartier-algebraic}.
+Thus we see that $\Ext^i_A(\kappa, A)$ is zero for
+$i \gg 0$ and $A$ is Gorenstein (Lemma \ref{lemma-gorenstein-ext}).
+\end{proof}
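+
+\medskip\noindent
+Combining Lemma \ref{lemma-gorenstein-divide-by-nonzerodivisor} with
+Lemma \ref{lemma-regular-gorenstein} we see for instance that
+hypersurface rings are Gorenstein: if $R$ is a regular local ring and
+$f \in \mathfrak m_R$ is nonzero, then $R/(f)$ is Gorenstein because
+$f$ is a nonzerodivisor ($R$ being a domain). In particular
+$k[[x]]/(x^n)$ is Gorenstein for any field $k$ and any $n \geq 1$.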
+
+\begin{lemma}
+\label{lemma-gorenstein-lci}
+If $A \to B$ is a local complete intersection homomorphism of rings and
+$A$ is a Noetherian Gorenstein ring, then $B$ is a Gorenstein ring.
+\end{lemma}
+
+\begin{proof}
+By More on Algebra, Definition
+\ref{more-algebra-definition-local-complete-intersection}
+we can write $B = A[x_1, \ldots, x_n]/I$
+where $I$ is a Koszul-regular ideal. Observe that a polynomial
+ring over a Gorenstein ring $A$ is Gorenstein: reduce to
+$A$ local and then use Lemmas \ref{lemma-dualizing-polynomial-ring} and
+\ref{lemma-gorenstein}.
+A Koszul-regular ideal is by definition locally generated
+by a Koszul-regular sequence, see More on Algebra, Section
+\ref{more-algebra-section-ideals}.
+Looking at local rings of $A[x_1, \ldots, x_n]$
+we see it suffices to show: if $R$ is a Noetherian local
+Gorenstein ring and $f_1, \ldots, f_c \in \mathfrak m_R$
+is a Koszul regular sequence, then $R/(f_1, \ldots, f_c)$ is Gorenstein.
+This follows from
+Lemma \ref{lemma-gorenstein-divide-by-nonzerodivisor} and
+the fact that a Koszul regular sequence in $R$ is just a
+regular sequence (More on Algebra, Lemma
+\ref{more-algebra-lemma-noetherian-finite-all-equivalent}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-under-gorenstein}
+Let $A \to B$ be a flat local homomorphism of Noetherian local rings.
+The following are equivalent
+\begin{enumerate}
+\item $B$ is Gorenstein, and
+\item $A$ and $B/\mathfrak m_A B$ are Gorenstein.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Below we will use, without further mention, that a Gorenstein local ring
+has finite injective dimension over itself, as well as
+Lemma \ref{lemma-gorenstein-ext}.
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-pseudo-coherence-and-base-change-ext}
+we have
+$$
+\Ext^i_A(\kappa_A, A) \otimes_A B =
+\Ext^i_B(B/\mathfrak m_A B, B)
+$$
+for all $i$.
+
+\medskip\noindent
+Assume (2). Using that
+$R\Hom(B/\mathfrak m_A B, -) : D(B) \to D(B/\mathfrak m_A B)$ is a
+right adjoint to restriction (Lemma \ref{lemma-right-adjoint}) we obtain
+$$
+R\Hom_B(\kappa_B, B) =
+R\Hom_{B/\mathfrak m_A B}(\kappa_B, R\Hom(B/\mathfrak m_A B, B))
+$$
+The cohomology modules of $R\Hom(B/\mathfrak m_A B, B)$ are the modules
+$\Ext^i_B(B/\mathfrak m_A B, B) =
+\Ext^i_A(\kappa_A, A) \otimes_A B$.
+Since $A$ is Gorenstein, we conclude only a finite number of these are nonzero
+and each is isomorphic to a direct sum of copies of $B/\mathfrak m_A B$.
+Hence since $B/\mathfrak m_A B$ is Gorenstein we conclude that
+$R\Hom_B(\kappa_B, B)$ has only a finite number of nonzero
+cohomology modules. Hence $B$ is Gorenstein.
+
+\medskip\noindent
+Assume (1). Since $B$ has finite injective dimension,
+$\Ext^i_B(B/\mathfrak m_A B, B)$ is $0$ for $i \gg 0$.
+Since $A \to B$ is faithfully flat
+we conclude that $\Ext^i_A(\kappa_A, A)$ is $0$
+for $i \gg 0$. We conclude that $A$ is Gorenstein. This implies that
+$\Ext^i_A(\kappa_A, A)$ is nonzero for exactly one $i$,
+namely for $i = \dim(A)$, and
+$\Ext^{\dim(A)}_A(\kappa_A, A) \cong \kappa_A$
+(see Lemmas \ref{lemma-normalized-finite}, \ref{lemma-apply-CM}, and
+\ref{lemma-gorenstein-CM}).
+Thus we see that
+$\Ext^i_B(B/\mathfrak m_A B, B)$ is zero except for one $i$,
+namely $i = \dim(A)$ and
+$\Ext^{\dim(A)}_B(B/\mathfrak m_A B, B) \cong B/\mathfrak m_A B$.
+Thus $B/\mathfrak m_A B$ is Gorenstein by
+Lemma \ref{lemma-normalized-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tor-injective-hull}
+Let $(A, \mathfrak m, \kappa)$ be a Noetherian local Gorenstein ring
+of dimension $d$. Let $E$ be the injective hull of $\kappa$. Then
+$\text{Tor}_i^A(E, \kappa)$ is zero for $i \not = d$
+and $\text{Tor}_d^A(E, \kappa) = \kappa$.
+\end{lemma}
+
+\begin{proof}
+Since $A$ is Gorenstein $\omega_A^\bullet = A[d]$ is a
+normalized dualizing complex for $A$.
+Also $E$ is the only nonzero cohomology module of
+$R\Gamma_\mathfrak m(\omega_A^\bullet)$ sitting in degree $0$, see
+Lemma \ref{lemma-local-cohomology-of-dualizing}.
+By Lemma \ref{lemma-torsion-tensor-product} we have
+$$
+E \otimes_A^\mathbf{L} \kappa =
+R\Gamma_\mathfrak m(\omega_A^\bullet) \otimes_A^\mathbf{L} \kappa =
+R\Gamma_\mathfrak m(\omega_A^\bullet \otimes_A^\mathbf{L} \kappa) =
+R\Gamma_\mathfrak m(\kappa[d]) = \kappa[d]
+$$
+and the lemma follows.
+\end{proof}
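+
+\medskip\noindent
+As a sanity check for Lemma \ref{lemma-tor-injective-hull}, take
+$A = k[[x]]$ with $d = 1$, so that $E = k((x))/k[[x]]$ is an injective
+hull of $\kappa = k$. Computing $\text{Tor}$ via the free resolution
+$0 \to A \xrightarrow{x} A \to \kappa \to 0$ we find
+$\text{Tor}_0^A(E, \kappa) = E/xE = 0$ as $E$ is divisible, and
+$\text{Tor}_1^A(E, \kappa) = \Ker(x : E \to E) =
+x^{-1}k[[x]]/k[[x]] \cong \kappa$ as predicted.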
+
+
+
+
+\section{The ubiquity of dualizing complexes}
+\label{section-ubiquity-dualizing}
+
+\noindent
+Many Noetherian rings have dualizing complexes.
+
+\begin{lemma}
+\label{lemma-flat-unramified}
+Let $A \to B$ be a local homomorphism of Noetherian local rings.
+Let $\omega_A^\bullet$ be a normalized dualizing complex.
+If $A \to B$ is flat and $\mathfrak m_A B = \mathfrak m_B$,
+then $\omega_A^\bullet \otimes_A B$ is a normalized dualizing
+complex for $B$.
+\end{lemma}
+
+\begin{proof}
+It is clear that $\omega_A^\bullet \otimes_A B$ is in $D^b_{\textit{Coh}}(B)$.
+Let $\kappa_A$ and $\kappa_B$ be the residue fields of $A$ and $B$.
+By More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}
+we see that
+$$
+R\Hom_B(\kappa_B, \omega_A^\bullet \otimes_A B) =
+R\Hom_A(\kappa_A, \omega_A^\bullet) \otimes_A B =
+\kappa_A[0] \otimes_A B = \kappa_B[0]
+$$
+Thus $\omega_A^\bullet \otimes_A B$ has finite injective dimension by
+More on Algebra, Lemma
+\ref{more-algebra-lemma-finite-injective-dimension-Noetherian-local}.
+Finally, we can use the same arguments to see that
+$$
+R\Hom_B(\omega_A^\bullet \otimes_A B, \omega_A^\bullet \otimes_A B) =
+R\Hom_A(\omega_A^\bullet, \omega_A^\bullet) \otimes_A B = A \otimes_A B = B
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-iso-mod-I}
+Let $A \to B$ be a flat map of Noetherian rings. Let
+$I \subset A$ be an ideal such that $A/I = B/IB$ and
+such that $IB$ is contained in the Jacobson radical of $B$.
+Let $\omega_A^\bullet$ be a dualizing complex.
+Then $\omega_A^\bullet \otimes_A B$ is a dualizing
+complex for $B$.
+\end{lemma}
+
+\begin{proof}
+It is clear that $\omega_A^\bullet \otimes_A B$ is in $D^b_{\textit{Coh}}(B)$.
+By More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}
+we see that
+$$
+R\Hom_B(K \otimes_A B, \omega_A^\bullet \otimes_A B) =
+R\Hom_A(K, \omega_A^\bullet) \otimes_A B
+$$
+for any $K \in D^b_{\textit{Coh}}(A)$. For any ideal
+$IB \subset J \subset B$ there is a unique ideal $I \subset J' \subset A$
+such that $A/J' \otimes_A B = B/J$. Thus $\omega_A^\bullet \otimes_A B$
+has finite injective dimension by
+More on Algebra, Lemma
+\ref{more-algebra-lemma-finite-injective-dimension-Noetherian-radical}.
+Finally, we also have
+$$
+R\Hom_B(\omega_A^\bullet \otimes_A B, \omega_A^\bullet \otimes_A B) =
+R\Hom_A(\omega_A^\bullet, \omega_A^\bullet) \otimes_A B = A \otimes_A B = B
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion-henselization-dualizing}
+Let $A$ be a Noetherian ring and let $I \subset A$ be an ideal.
+Let $\omega_A^\bullet$ be a dualizing complex.
+\begin{enumerate}
+\item $\omega_A^\bullet \otimes_A A^h$ is a dualizing complex on the
+henselization $(A^h, I^h)$ of the pair $(A, I)$,
+\item $\omega_A^\bullet \otimes_A A^\wedge$ is a dualizing complex on
+the $I$-adic completion $A^\wedge$, and
+\item if $A$ is local, then $\omega_A^\bullet \otimes_A A^h$,
+resp.\ $\omega_A^\bullet \otimes_A A^{sh}$ is a dualizing complex
+on the henselization, resp.\ strict henselization of $A$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemmas \ref{lemma-flat-unramified} and
+\ref{lemma-flat-iso-mod-I}.
+See More on Algebra, Sections \ref{more-algebra-section-henselian-pairs},
+\ref{more-algebra-section-permanence-completion}, and
+\ref{more-algebra-section-permanence-henselization} and
+Algebra, Sections \ref{algebra-section-completion} and
+\ref{algebra-section-completion-noetherian}
+for information on completions and henselizations.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ubiquity-dualizing}
+The following types of rings have a dualizing complex:
+\begin{enumerate}
+\item fields,
+\item Noetherian complete local rings,
+\item $\mathbf{Z}$,
+\item Dedekind domains,
+\item any ring which is obtained from one of the rings above by
+taking an algebra essentially of finite type, or by taking an
+ideal-adic completion, or by taking a henselization,
+or by taking a strict henselization.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (5) follows from Proposition
+\ref{proposition-dualizing-essentially-finite-type}
+and Lemma \ref{lemma-completion-henselization-dualizing}.
+By Lemma \ref{lemma-regular-gorenstein} a regular local ring has a
+dualizing complex.
+A complete Noetherian local ring is the quotient of a regular
+local ring by the Cohen structure theorem
+(Algebra, Theorem \ref{algebra-theorem-cohen-structure-theorem}).
+Let $A$ be a Dedekind domain. Then every ideal $I$ is a finite
+projective $A$-module (follows from
+Algebra, Lemma \ref{algebra-lemma-finite-projective}
+and the fact that the local rings of $A$ are discrete valuation rings
+and hence PIDs). Thus every $A$-module has injective dimension
+at most $1$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-injective-amplitude}.
+It follows easily that $A[0]$ is a dualizing complex.
+\end{proof}
+
+
+
+
+
+\section{Formal fibres}
+\label{section-formal-fibres}
+
+\noindent
+This section is a continuation of
+More on Algebra, Section \ref{more-algebra-section-properties-formal-fibres}.
+There we saw there is a (fairly) good theory of Noetherian rings $A$
+whose local rings have Cohen-Macaulay formal fibres. Namely, we proved
+(1) it suffices to check the formal fibres of localizations at
+maximal ideals are Cohen-Macaulay,
+(2) the property is inherited by rings of finite type over $A$,
+(3) the fibres of $A \to A^\wedge$ are Cohen-Macaulay for
+any completion $A^\wedge$ of $A$, and
+(4) the property is inherited by henselizations of $A$. See
+More on Algebra, Lemma \ref{more-algebra-lemma-check-P-ring-maximal-ideals},
+Proposition \ref{more-algebra-proposition-finite-type-over-P-ring},
+Lemma \ref{more-algebra-lemma-map-P-ring-to-completion-P}, and
+Lemma \ref{more-algebra-lemma-henselization-pair-P-ring}.
+Similarly, for Noetherian rings whose local rings have formal fibres
+which are geometrically reduced, geometrically normal, $(S_n)$, and
+geometrically $(R_n)$.
+In this section we will see that the same is true for Noetherian rings
+whose local rings have formal fibres which are Gorenstein
+or local complete intersections.
+This is relevant to this chapter because a Noetherian ring which has a
+dualizing complex is an example.
+
+\begin{lemma}
+\label{lemma-formal-fibres-gorenstein}
+Properties (A), (B), (C), (D), and (E) of
+More on Algebra, Section \ref{more-algebra-section-properties-formal-fibres}
+hold for $P(k \to R) =$``$R$ is a Gorenstein ring''.
+\end{lemma}
+
+\begin{proof}
+Since we already know the result holds for Cohen-Macaulay instead
+of Gorenstein, we may in each step assume the ring we have is
+Cohen-Macaulay. This is not particularly helpful for the proof, but
+psychologically may be useful.
+
+\medskip\noindent
+Part (A). Let $K/k$ be a finitely generated field extension.
+Let $R$ be a Gorenstein $k$-algebra.
+We can find a global complete intersection
+$A = k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+over $k$ such that $K$ is isomorphic to the fraction field of $A$, see
+Algebra, Lemma \ref{algebra-lemma-colimit-syntomic}.
+Then $R \to R \otimes_k A$ is a relative global complete intersection.
+Hence $R \otimes_k A$ is Gorenstein by Lemma \ref{lemma-gorenstein-lci}.
+Thus $R \otimes_k K$ is too as a localization.
+
+\medskip\noindent
+Proof of (B). This is clear because a ring is Gorenstein
+if and only if all of its local rings are Gorenstein.
+
+\medskip\noindent
+Part (C). Let $A \to B \to C$ be flat maps of Noetherian rings.
+Assume the fibres of $A \to B$ are Gorenstein and $B \to C$ is regular.
+We have to show the fibres of $A \to C$ are Gorenstein.
+Clearly, we may assume $A = k$ is a field. Then we may assume that
+$B \to C$ is a regular local homomorphism of Noetherian local rings.
+Then $B$ is Gorenstein and $C/\mathfrak m_B C$ is regular, in
+particular Gorenstein (Lemma \ref{lemma-regular-gorenstein}).
+Then $C$ is Gorenstein by
+Lemma \ref{lemma-flat-under-gorenstein}.
+
+\medskip\noindent
+Part (D). This follows from Lemma \ref{lemma-flat-under-gorenstein}.
+Part (E) is immediate as the condition does not refer to the ground field.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-dualizing-gorenstein-formal-fibres}
+Let $A$ be a Noetherian local ring. If $A$ has a dualizing complex,
+then the formal fibres of $A$ are Gorenstein.
+\end{lemma}
+
+\begin{proof}
+Let $\mathfrak p$ be a prime of $A$. The formal fibre of $A$ at $\mathfrak p$
+is isomorphic to the formal fibre of $A/\mathfrak p$ at $(0)$. The quotient
+$A/\mathfrak p$ has a dualizing complex
+(Lemma \ref{lemma-dualizing-quotient}).
+Thus it suffices to check the statement
+when $A$ is a local domain and $\mathfrak p = (0)$.
+Let $\omega_A^\bullet$ be a dualizing complex for $A$. Then
+$\omega_A^\bullet \otimes_A A^\wedge$ is a dualizing complex
+for the completion $A^\wedge$
+(Lemma \ref{lemma-flat-unramified}).
+Then $\omega_A^\bullet \otimes_A K$ is a dualizing
+complex for the fraction field $K$ of $A$
+(Lemma \ref{lemma-dualizing-localize}).
+Hence $\omega_A^\bullet \otimes_A K$
+is isomorphic to $K[n]$ for some $n \in \mathbf{Z}$.
+Similarly, we conclude a dualizing complex for the formal fibre
+$A^\wedge \otimes_A K$ is
+$$
+\omega_A^\bullet \otimes_A A^\wedge \otimes_{A^\wedge} (A^\wedge \otimes_A K) =
+(\omega_A^\bullet \otimes_A K) \otimes_K (A^\wedge \otimes_A K) \cong
+(A^\wedge \otimes_A K)[n]
+$$
+as desired.
+\end{proof}
+
+\noindent
+Here is the verification promised in
+Divided Power Algebra, Remark \ref{dpa-remark-no-good-ci-map}.
+
+\begin{lemma}
+\label{lemma-formal-fibres-lci}
+Properties (A), (B), (C), (D), and (E) of
+More on Algebra, Section \ref{more-algebra-section-properties-formal-fibres}
+hold for $P(k \to R) =$``$R$ is a local complete intersection''.
+See Divided Power Algebra, Definition \ref{dpa-definition-lci}.
+\end{lemma}
+
+\begin{proof}
+Part (A). Let $K/k$ be a finitely generated field extension.
+Let $R$ be a $k$-algebra which is a local complete intersection.
+We can find a global complete intersection
+$A = k[x_1, \ldots, x_n]/(f_1, \ldots, f_c)$
+over $k$ such that $K$ is isomorphic to the fraction field of $A$, see
+Algebra, Lemma \ref{algebra-lemma-colimit-syntomic}.
+Then $R \to R \otimes_k A$ is a relative global complete intersection.
+It follows that $R \otimes_k A$ is a local complete intersection
+by Divided Power Algebra, Lemma \ref{dpa-lemma-avramov}.
+
+\medskip\noindent
+Proof of (B). This is clear
+because a ring is a local complete intersection if and only if all of its
+local rings are complete intersections.
+
+\medskip\noindent
+Part (C). Let $A \to B \to C$ be flat maps of Noetherian rings.
+Assume the fibres of $A \to B$ are local complete intersections
+and $B \to C$ is regular. We have to show the fibres of $A \to C$
+are local complete intersections. Clearly, we may assume $A = k$ is a field.
+Then we may assume that $B \to C$ is a regular local homomorphism
+of Noetherian local rings. Then $B$ is a complete intersection and
+$C/\mathfrak m_B C$ is regular, in particular a complete intersection
+(by definition). Then $C$ is a complete intersection by
+Divided Power Algebra, Lemma \ref{dpa-lemma-avramov}.
+
+\medskip\noindent
+Part (D). This follows by the same arguments as in (C) from
+the other implication in
+Divided Power Algebra, Lemma \ref{dpa-lemma-avramov}.
+Part (E) is immediate as the condition does not refer to the ground
+field.
+\end{proof}
+
+
+
+
+
+
+\section{Upper shriek algebraically}
+\label{section-relative-dualizing-complex-algebraic}
+
+\noindent
+For a finite type homomorphism $R \to A$ of Noetherian rings
+we will construct a functor $\varphi^! : D(R) \to D(A)$
+well defined up to nonunique isomorphism which,
+as we will see in Duality for Schemes, Remark
+\ref{duality-remark-local-calculation-shriek},
+agrees up to isomorphism with the upper shriek functors
+one encounters in the duality theory for schemes.
+To motivate the construction we mention two additional properties:
+\begin{enumerate}
+\item $\varphi^!$ sends a dualizing complex for $R$ (if it exists)
+to a dualizing complex for $A$, and
+\item $\omega_{A/R}^\bullet = \varphi^!(R)$ is a kind of
+relative dualizing complex: it lies in $D^b_{\textit{Coh}}(A)$ and restricts
+to a dualizing complex on the fibres provided $R \to A$ is flat.
+\end{enumerate}
+These statements are Lemmas \ref{lemma-shriek-dualizing-algebraic} and
+\ref{lemma-relative-dualizing-algebraic}.
+
+\medskip\noindent
+Let $\varphi : R \to A$ be a finite type homomorphism of Noetherian rings.
+We will define a functor $\varphi^! : D(R) \to D(A)$ in the following way
+\begin{enumerate}
+\item If $\varphi : R \to A$ is surjective we set
+$\varphi^!(K) = R\Hom(A, K)$. Here we use the functor
+$R\Hom(A, -) : D(R) \to D(A)$ of
+Section \ref{section-trivial}, and
+\item in general we choose a surjection $\psi : P \to A$ with
+$P = R[x_1, \ldots, x_n]$ and we set
+$\varphi^!(K) = \psi^!(K \otimes_R^\mathbf{L} P)[n]$.
+Here we use the functor
+$- \otimes_R^\mathbf{L} P : D(R) \to D(P)$
+of More on Algebra, Section \ref{more-algebra-section-derived-base-change}.
+\end{enumerate}
+Note the shift $[n]$ by the number of variables in the polynomial
+ring. This construction is {\bf not} canonical and the functor
+$\varphi^!$ will only be well defined up to a (nonunique) isomorphism of
+functors\footnote{It is possible to make the construction canonical:
+use $\Omega^n_{P/R}[n]$ instead of $P[n]$ in the
+construction and use this in Lemma \ref{lemma-well-defined}.
+The material in this section becomes a lot more involved
+if one wants to do this.}.
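+
+\medskip\noindent
+As a sanity check of the shift convention, take $\varphi = \text{id}_R$
+and the surjection $\psi : P = R[x] \to R$ sending $x$ to $0$. Since $x$
+is a nonzerodivisor in $P$ and $R = P/(x)$,
+Lemma \ref{lemma-compute-for-effective-Cartier-algebraic} tells us
+$\psi^!(M) = M \otimes_P^\mathbf{L} R[-1]$ and hence
+$$
+\varphi^!(K) = \psi^!(K \otimes_R^\mathbf{L} P)[1] =
+(K \otimes_R^\mathbf{L} P) \otimes_P^\mathbf{L} R[-1][1] = K
+$$
+as one would expect for the identity map.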
+
+\begin{lemma}
+\label{lemma-well-defined}
+Let $\varphi : R \to A$ be a finite type homomorphism of
+Noetherian rings. The functor $\varphi^!$ is well defined
+up to isomorphism.
+\end{lemma}
+
+\begin{proof}
+Suppose that $\psi_1 : P_1 = R[x_1, \ldots, x_n] \to A$ and
+$\psi_2 : P_2 = R[y_1, \ldots, y_m] \to A$ are two
+surjections from polynomial rings onto $A$. Then we get a
+commutative diagram
+$$
+\xymatrix{
+R[x_1, \ldots, x_n, y_1, \ldots, y_m]
+\ar[d]^{x_i \mapsto g_i} \ar[rr]_-{y_j \mapsto f_j} & &
+R[x_1, \ldots, x_n] \ar[d] \\
+R[y_1, \ldots, y_m] \ar[rr] & & A
+}
+$$
+where $f_j$ and $g_i$ are chosen such that $\psi_1(f_j) = \psi_2(y_j)$
+and $\psi_2(g_i) = \psi_1(x_i)$. By symmetry it suffices to prove
+the functors defined using $P \to A$ and $P[y_1, \ldots, y_m] \to A$
+are isomorphic. By induction we may assume $m = 1$. This reduces
+us to the case discussed in the next paragraph.
+
+\medskip\noindent
+Here $\psi : P \to A$ is given and $\chi : P[y] \to A$ induces
+$\psi$ on $P$. Write $Q = P[y]$.
+Choose $g \in P$ with $\psi(g) = \chi(y)$.
+Denote $\pi : Q \to P$ the $P$-algebra map
+with $\pi(y) = g$. Then $\chi = \psi \circ \pi$ and hence
+$\chi^! = \psi^! \circ \pi^!$ as both are
+adjoint to the restriction functor $D(A) \to D(Q)$ by the material
+in Section \ref{section-trivial}. Thus
+$$
+\chi^!\left(K \otimes_R^\mathbf{L} Q\right)[n + 1] =
+\psi^!\left(\pi^!\left(K \otimes_R^\mathbf{L} Q\right)[1]\right)[n]
+$$
+Hence it suffices to show that
+$\pi^!(K \otimes_R^\mathbf{L} Q)[1] = K \otimes_R^\mathbf{L} P$.
+For this it suffices to show that the functor
+$\pi^! : D(Q) \to D(P)$
+is isomorphic to $M \mapsto M \otimes_Q^\mathbf{L} P[-1]$.
+Since $P = Q/(y - g)$ and $y - g$ is a nonzerodivisor in $Q$,
+this follows from Lemma \ref{lemma-compute-for-effective-Cartier-algebraic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-boundedness}
+Let $\varphi : R \to A$ be a finite type homomorphism of Noetherian rings.
+\begin{enumerate}
+\item $\varphi^!$ maps $D^+(R)$ into $D^+(A)$ and
+$D^+_{\textit{Coh}}(R)$ into $D^+_{\textit{Coh}}(A)$.
+\item if $\varphi$ is perfect, then $\varphi^!$ maps
+$D^-(R)$ into $D^-(A)$,
+$D^-_{\textit{Coh}}(R)$ into $D^-_{\textit{Coh}}(A)$, and
+$D^b_{\textit{Coh}}(R)$ into $D^b_{\textit{Coh}}(A)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $R \to P \to A$ as in the definition of $\varphi^!$.
+The functor $- \otimes_R^\mathbf{L} P : D(R) \to D(P)$ preserves
+the subcategories
+$D^+, D^+_{\textit{Coh}}, D^-, D^-_{\textit{Coh}}, D^b_{\textit{Coh}}$.
+The functor $R\Hom(A, -) : D(P) \to D(A)$
+preserves $D^+$ and $D^+_{\textit{Coh}}$ by
+Lemma \ref{lemma-exact-support-coherent}.
+If $R \to A$ is perfect, then $A$ is perfect as a $P$-module, see
+More on Algebra, Lemma \ref{more-algebra-lemma-perfect-ring-map}.
+Recall that the restriction of $R\Hom(A, K)$ to $D(P)$ is
+$R\Hom_P(A, K)$. By More on Algebra, Lemma
+\ref{more-algebra-lemma-dual-perfect-complex}
+we have $R\Hom_P(A, K) = E \otimes_P^\mathbf{L} K$ for
+some perfect $E \in D(P)$. Since we can represent $E$ by
+a finite complex of finite projective $P$-modules
+it is clear that $R\Hom_P(A, K)$ is in
+$D^-(P), D^-_{\textit{Coh}}(P), D^b_{\textit{Coh}}(P)$
+as soon as $K$ is. Since the restriction functor
+$D(A) \to D(P)$ reflects these subcategories, the
+proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-dualizing-algebraic}
+Let $\varphi$ be a finite type homomorphism of Noetherian rings.
+If $\omega_R^\bullet$ is a dualizing complex for $R$, then
+$\varphi^!(\omega_R^\bullet)$ is a dualizing complex for $A$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemmas
+\ref{lemma-dualizing-polynomial-ring} and
+\ref{lemma-dualizing-quotient},
+\end{proof}
+
+\begin{lemma}
+\label{lemma-flat-bc}
+Let $R \to R'$ be a flat homomorphism of Noetherian rings.
+Let $\varphi : R \to A$ be a finite type ring map.
+Let $\varphi' : R' \to A' = A \otimes_R R'$ be the map induced by $\varphi$.
+Then there are functorial maps
+$$
+\varphi^!(K) \otimes_A^\mathbf{L} A' \longrightarrow
+(\varphi')^!(K \otimes_R^\mathbf{L} R')
+$$
+for $K$ in $D(R)$, which are isomorphisms for $K \in D^+(R)$.
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $R \to P \to A$ where $P$ is a polynomial ring over $R$.
+This gives a corresponding factorization $R' \to P' \to A'$ by base change.
+Since we have $(K \otimes_R^\mathbf{L} P) \otimes_P^\mathbf{L} P' =
+(K \otimes_R^\mathbf{L} R') \otimes_{R'}^\mathbf{L} P'$
+by More on Algebra, Lemma \ref{more-algebra-lemma-double-base-change}
+it suffices to construct maps
+$$
+R\Hom(A, K \otimes_R^\mathbf{L} P[n]) \otimes_A^\mathbf{L} A'
+\longrightarrow
+R\Hom(A', (K \otimes_R^\mathbf{L} P[n]) \otimes_P^\mathbf{L} P')
+$$
+functorial in $K$. For this we use the map (\ref{equation-base-change})
+constructed in Section \ref{section-base-change-trivial-duality}
+for $P, A, P', A'$.
+The map is an isomorphism for $K \in D^+(R)$ by
+Lemma \ref{lemma-flat-bc-surjection}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bc}
+Let $R \to R'$ be a homomorphism of Noetherian rings.
+Let $\varphi : R \to A$ be a perfect ring map
+(More on Algebra, Definition
+\ref{more-algebra-definition-pseudo-coherent-perfect})
+such that $R'$ and $A$ are tor independent over $R$.
+Let $\varphi' : R' \to A' = A \otimes_R R'$ be the map induced by $\varphi$.
+Then we have a functorial isomorphism
+$$
+\varphi^!(K) \otimes_A^\mathbf{L} A' =
+(\varphi')^!(K \otimes_R^\mathbf{L} R')
+$$
+for $K$ in $D(R)$.
+\end{lemma}
+
+\begin{proof}
+We may choose a factorization $R \to P \to A$ where $P$
+is a polynomial ring over $R$ such that $A$ is a perfect $P$-module, see
+More on Algebra, Lemma \ref{more-algebra-lemma-perfect-ring-map}.
+This gives a corresponding factorization $R' \to P' \to A'$ by base change.
+Since we have $(K \otimes_R^\mathbf{L} P) \otimes_P^\mathbf{L} P' =
+(K \otimes_R^\mathbf{L} R') \otimes_{R'}^\mathbf{L} P'$
+by More on Algebra, Lemma \ref{more-algebra-lemma-double-base-change}
+it suffices to construct maps
+$$
+R\Hom(A, K \otimes_R^\mathbf{L} P[n]) \otimes_A^\mathbf{L} A'
+\longrightarrow
+R\Hom(A', (K \otimes_R^\mathbf{L} P[n]) \otimes_P^\mathbf{L} P')
+$$
+functorial in $K$. We have
+$$
+A \otimes_P^\mathbf{L} P' = A \otimes_R^\mathbf{L} R' = A'
+$$
+The first equality by
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-comparison}
+applied to $R, R', P, P'$. The second equality because
+$A$ and $R'$ are tor independent over $R$. Hence $A$ and $P'$ are
+tor independent over $P$ and we can use the map (\ref{equation-base-change})
+constructed in Section \ref{section-base-change-trivial-duality} for
+$P, A, P', A'$
+to get the desired arrow. By Lemma \ref{lemma-bc-surjection},
+to finish the proof it suffices to show that $A$ is a perfect $P$-module,
+which we saw above.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bc-flat}
+Let $R \to R'$ be a homomorphism of Noetherian rings.
+Let $\varphi : R \to A$ be flat of finite type.
+Let $\varphi' : R' \to A' = A \otimes_R R'$ be the map induced by $\varphi$.
+Then we have a functorial isomorphism
+$$
+\varphi^!(K) \otimes_A^\mathbf{L} A' =
+(\varphi')^!(K \otimes_R^\mathbf{L} R')
+$$
+for $K$ in $D(R)$.
+\end{lemma}
+
+\begin{proof}
+Special case of Lemma \ref{lemma-bc} by
+More on Algebra, Lemma
+\ref{more-algebra-lemma-flat-finite-presentation-perfect}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition-shriek-algebraic}
+Let $A \xrightarrow{a} B \xrightarrow{b} C$ be finite type homomorphisms of
+Noetherian rings. Then there is a transformation of functors
+$b^! \circ a^! \to (b \circ a)^!$ which is an isomorphism on $D^+(A)$.
+\end{lemma}
+
+\begin{proof}
+Choose a polynomial ring $P = A[x_1, \ldots, x_n]$ over $A$
+and a surjection $P \to B$. Choose elements $c_1, \ldots, c_m \in C$
+generating $C$ over $B$. Set $Q = P[y_1, \ldots, y_m]$ and
+denote $Q' = Q \otimes_P B = B[y_1, \ldots, y_m]$.
+Let $\chi : Q' \to C$ be the surjection sending $y_j$ to $c_j$.
+Picture
+$$
+\xymatrix{
+& Q \ar[r]_{\psi'} & Q' \ar[r]_\chi & C \\
+A \ar[r] & P \ar[r]^\psi \ar[u] & B \ar[u]
+}
+$$
+By Lemma \ref{lemma-flat-bc-surjection} for $M \in D(P)$ we have an arrow
+$\psi^!(M) \otimes_B^\mathbf{L} Q' \to (\psi')^!(M \otimes_P^\mathbf{L} Q)$
+which is an isomorphism whenever $M$ is bounded below. Also
+we have $\chi^! \circ (\psi')^! = (\chi \circ \psi')^!$ as both
+functors are adjoint to the restriction functor $D(C) \to D(Q)$
+by Section \ref{section-trivial}. Then we see
+\begin{align*}
+b^!(a^!(K))
+& =
+\chi^!(\psi^!(K \otimes_A^\mathbf{L} P)[n] \otimes_B^\mathbf{L} Q')[m] \\
+& \to
+\chi^!((\psi')^!(K \otimes_A^\mathbf{L} P \otimes_P^\mathbf{L} Q))[n + m] \\
+& =
+(\chi \circ \psi')^!(K\otimes_A^\mathbf{L} Q)[n + m] \\
+& =
+(b \circ a)^!(K)
+\end{align*}
+where we have used in addition to the above
+More on Algebra, Lemma \ref{more-algebra-lemma-double-base-change}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-upper-shriek-finite}
+Let $\varphi : R \to A$ be a finite map of Noetherian rings.
+Then $\varphi^!$ is isomorphic to the functor
+$R\Hom(A, -) : D(R) \to D(A)$ from
+Section \ref{section-trivial}.
+\end{lemma}
+
+\begin{proof}
+Suppose that $A$ is generated by $n > 1$ elements over $R$.
+Then we can factor $R \to A$ as a composition of two finite ring maps
+where in both steps the number of generators is $< n$.
+Since we have Lemma \ref{lemma-composition-shriek-algebraic} and
+Lemma \ref{lemma-composition-right-adjoints}
+we conclude that it suffices
+to prove the lemma when $A$ is generated by one element over $R$.
+Since $A$ is finite over $R$, it follows that $A$ is a quotient
+of $B = R[x]/(f)$ where $f$ is a monic polynomial in $x$
+(Algebra, Lemma \ref{algebra-lemma-finite-is-integral}).
+Again using the lemmas on composition and the fact that we
+have agreement for surjections by definition, we conclude that
+it suffices to prove the lemma for $R \to B = R[x]/(f)$.
+In this case, the functor $\varphi^!$ is isomorphic to
+$K \mapsto K \otimes_R^\mathbf{L} B$; this follows from
+Lemma \ref{lemma-compute-for-effective-Cartier-algebraic}
+for the map $R[x] \to B$ (note that the shift in the definition
+of $\varphi^!$ and in the lemma add up to zero).
+For the functor $R\Hom(B, -) : D(R) \to D(B)$ we can use
+Lemma \ref{lemma-RHom-is-tensor-special}
+to see that it suffices to show $\Hom_R(B, R) \cong B$
+as $B$-modules. Suppose that $f$ has degree $d$.
+Then an $R$-basis for $B$ is given by $1, x, \ldots, x^{d - 1}$.
+Let $\delta_i : B \to R$, $i = 0, \ldots, d - 1$
+be the $R$-linear map which picks off the coefficient
+of $x^i$ with respect to the given basis. Then
+$\delta_0, \ldots, \delta_{d - 1}$ is a basis for $\Hom_R(B, R)$.
+Finally, for $0 \leq i \leq d - 1$ a computation shows that
+$$
+x^i \delta_{d - 1} =
+\delta_{d - 1 - i} + c_1 \delta_{d - i} + \ldots + c_i \delta_{d - 1}
+$$
+for some $c_1, \ldots, c_d \in R$\footnote{If
+$f = x^d + a_1 x^{d - 1} + \ldots + a_d$, then
+$c_1 = -a_1$, $c_2 = a_1^2 - a_2$, $c_3 = -a_1^3 + 2a_1a_2 -a_3$, etc.}.
+Hence $\Hom_R(B, R)$ is a principal $B$-module with generator
+$\delta_{d - 1}$. By looking
+at ranks we conclude that it is a rank $1$ free $B$-module.
+\end{proof}
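+
+\noindent
+For example, if $f = x^2 + a_1 x + a_2$, so that $d = 2$ and $B$ has
+$R$-basis $1, x$, one finds $x\delta_1 = \delta_0 - a_1 \delta_1$:
+indeed $(x\delta_1)(1) = \delta_1(x) = 1$ and
+$(x\delta_1)(x) = \delta_1(x^2) = \delta_1(-a_1 x - a_2) = -a_1$.
+Thus $\delta_1$ generates $\Hom_R(B, R)$ as a $B$-module,
+illustrating the computation in the proof above.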
+
+\begin{lemma}
+\label{lemma-upper-shriek-localize}
+Let $R$ be a Noetherian ring and let $f \in R$.
+If $\varphi$ denotes the map $R \to R_f$, then $\varphi^!$
+is isomorphic to $- \otimes_R^\mathbf{L} R_f$.
+More generally, if $\varphi : R \to R'$ is a map such that
+$\Spec(R') \to \Spec(R)$ is an open immersion, then
+$\varphi^!$ is isomorphic to $- \otimes_R^\mathbf{L} R'$.
+\end{lemma}
+
+\begin{proof}
+Choose the presentation $R \to R[x] \to R[x]/(fx - 1) = R_f$ and observe
+that $fx - 1$ is a nonzerodivisor in $R[x]$. Thus we can apply
+Lemma \ref{lemma-compute-for-effective-Cartier-algebraic}
+to compute the functor $\varphi^!$. Details omitted;
+note that the shift in the definition
+of $\varphi^!$ and in the lemma add up to zero.
+
+\medskip\noindent
+In the general case note that $R' \otimes_R R' = R'$.
+Hence the result follows from the base change results
+above. Either Lemma \ref{lemma-flat-bc} or
+Lemma \ref{lemma-bc} will do.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-upper-shriek-is-tensor-functor}
+Let $\varphi : R \to A$ be a perfect homomorphism of Noetherian rings
+(for example $\varphi$ is flat of finite type).
+Then $\varphi^!(K) = K \otimes_R^\mathbf{L} \varphi^!(R)$
+for $K \in D(R)$.
+\end{lemma}
+
+\begin{proof}
+(The parenthetical statement follows from
+More on Algebra, Lemma
+\ref{more-algebra-lemma-flat-finite-presentation-perfect}.)
+We can choose a factorization $R \to P \to A$ where $P$ is a polynomial
+ring in $n$ variables over $R$ and then $A$ is a perfect $P$-module, see
+More on Algebra, Lemma \ref{more-algebra-lemma-perfect-ring-map}.
+Recall that $\varphi^!(K) = R\Hom(A, K \otimes_R^\mathbf{L} P[n])$.
+Thus the result follows from
+Lemma \ref{lemma-RHom-is-tensor-special}
+and More on Algebra, Lemma \ref{more-algebra-lemma-double-base-change}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-if-have-omega}
+Let $\varphi : A \to B$ be a finite type homomorphism of Noetherian rings.
+Let $\omega_A^\bullet$ be a dualizing complex for $A$. Set
+$\omega_B^\bullet = \varphi^!(\omega_A^\bullet)$. Denote
+$D_A(K) = R\Hom_A(K, \omega_A^\bullet)$ for $K \in D_{\textit{Coh}}(A)$
+and
+$D_B(L) = R\Hom_B(L, \omega_B^\bullet)$ for $L \in D_{\textit{Coh}}(B)$.
+Then there is a functorial isomorphism
+$$
+\varphi^!(K) = D_B(D_A(K) \otimes_A^\mathbf{L} B)
+$$
+for $K \in D_{\textit{Coh}}(A)$.
+\end{lemma}
+
+\begin{proof}
+Observe that $\omega_B^\bullet$ is a dualizing complex for $B$ by
+Lemma \ref{lemma-shriek-dualizing-algebraic}.
+Let $A \to B \to C$ be finite type homomorphisms of Noetherian rings.
+If the lemma holds for $A \to B$ and $B \to C$, then the lemma holds for
+$A \to C$. This follows from
+Lemma \ref{lemma-composition-shriek-algebraic}
+and the fact that $D_B \circ D_B \cong \text{id}$ by
+Lemma \ref{lemma-dualizing}.
+Thus it suffices to prove the lemma in case $A \to B$ is
+a surjection and in the case where $B$ is a
+polynomial ring over $A$.
+
+\medskip\noindent
+Assume $B = A[x_1, \ldots, x_n]$. Since $D_A \circ D_A \cong \text{id}$,
+it suffices to prove
+$D_B(K \otimes_A B) \cong D_A(K) \otimes_A B[n]$ for $K$
+in $D_{\textit{Coh}}(A)$.
+Choose a bounded complex $I^\bullet$ of injectives representing
+$\omega_A^\bullet$. Choose a quasi-isomorphism
+$I^\bullet \otimes_A B \to J^\bullet$ where $J^\bullet$
+is a bounded complex of $B$-modules. Given a complex
+$K^\bullet$ of $A$-modules, consider the obvious
+map of complexes
+$$
+\Hom^\bullet(K^\bullet, I^\bullet) \otimes_A B[n]
+\longrightarrow
+\Hom^\bullet(K^\bullet \otimes_A B, J^\bullet[n])
+$$
+The left hand side represents $D_A(K) \otimes_A B[n]$ and the right hand
+side represents $D_B(K \otimes_A B)$. Thus it suffices to prove this
+map is a quasi-isomorphism if the cohomology modules
+of $K^\bullet$ are finite $A$-modules. Observe that the
+cohomology of the complex in degree $r$ (on either side)
+only depends on finitely many of the $K^i$. Thus we may
+replace $K^\bullet$ by a truncation, i.e., we may assume
+$K^\bullet$ represents an object of $D^-_{\textit{Coh}}(A)$.
+Then $K^\bullet$ is quasi-isomorphic to a bounded
+above complex of finite free $A$-modules.
+Therefore we may assume $K^\bullet$ is a bounded
+above complex of finite free $A$-modules.
+In this case it is easy to see that the
+displayed map is an isomorphism of complexes, which finishes
+the proof in this case.
+
+\medskip\noindent
+Assume that $A \to B$ is surjective. Denote $i_* : D(B) \to D(A)$
+the restriction functor and recall that $\varphi^!(-) = R\Hom(A, -)$
+is a right adjoint to $i_*$ (Lemma \ref{lemma-right-adjoint}).
+For $F \in D(B)$ we have
+\begin{align*}
+\Hom_B(F, D_B(D_A(K) \otimes_A^\mathbf{L} B))
+& =
+\Hom_B((D_A(K) \otimes_A^\mathbf{L} B) \otimes_B^\mathbf{L} F,
+\omega_B^\bullet) \\
+& =
+\Hom_A(D_A(K) \otimes_A^\mathbf{L} i_*F, \omega_A^\bullet) \\
+& =
+\Hom_A(i_*F, D_A(D_A(K))) \\
+& =
+\Hom_A(i_*F, K) \\
+& =
+\Hom_B(F, \varphi^!(K))
+\end{align*}
+The first equality follows from More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom} and the definition
+of $D_B$. The second equality by the adjointness mentioned
+above and the equality
+$i_*((D_A(K) \otimes_A^\mathbf{L} B) \otimes_B^\mathbf{L} F) =
+D_A(K) \otimes_A^\mathbf{L} i_*F$
+(More on Algebra, Lemma \ref{more-algebra-lemma-derived-base-change}).
+The third equality follows from More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom}. The fourth because
+$D_A \circ D_A = \text{id}$. The final equality by adjointness again.
+Thus the result holds by the Yoneda lemma.
+\end{proof}
+
+
+
+
+
+
+\section{Relative dualizing complexes in the Noetherian case}
+\label{section-relative-dualizing-complexes-Noetherian}
+
+\noindent
+Let $\varphi : R \to A$ be a finite type homomorphism of
+Noetherian rings. Then we define the {\it relative dualizing
+complex of $A$ over $R$} as the object
+$$
+\omega_{A/R}^\bullet = \varphi^!(R)
+$$
+of $D(A)$. Here $\varphi^!$ is as in
+Section \ref{section-relative-dualizing-complex-algebraic}.
+From the material in that section we see that
+$\omega_{A/R}^\bullet$ is well defined up to (non-unique) isomorphism.
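+
+\medskip\noindent
+For example, if $A = R[x]/(f)$ for some nonzerodivisor $f \in R[x]$, then
+applying Lemma \ref{lemma-compute-for-effective-Cartier-algebraic}
+to the surjection $R[x] \to A$ we find
+$$
+\omega_{A/R}^\bullet = R\Hom(A, R[x])[1] = A[-1][1] = A
+$$
+sitting in degree $0$, in agreement with the invertibility
+statement of Lemma \ref{lemma-lci-shriek} below.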
+
+\begin{lemma}
+\label{lemma-base-change-relative-algebraic}
+Let $R \to R'$ be a homomorphism of Noetherian rings.
+Let $R \to A$ be of finite type. Set $A' = A \otimes_R R'$. If
+\begin{enumerate}
+\item $R \to R'$ is flat, or
+\item $R \to A$ is flat, or
+\item $R \to A$ is perfect
+and $R'$ and $A$ are tor independent over $R$,
+\end{enumerate}
+then there is an isomorphism
+$\omega_{A/R}^\bullet \otimes_A^\mathbf{L} A' \to \omega^\bullet_{A'/R'}$
+in $D(A')$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemmas \ref{lemma-flat-bc}, \ref{lemma-bc-flat}, and
+\ref{lemma-bc} and the definitions.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-algebraic}
+Let $\varphi : R \to A$ be a flat finite type map of Noetherian rings.
+Then
+\begin{enumerate}
+\item $\omega_{A/R}^\bullet$ is in $D^b_{\textit{Coh}}(A)$
+and $R$-perfect (More on Algebra,
+Definition \ref{more-algebra-definition-relatively-perfect}),
+\item $A \to R\Hom_A(\omega_{A/R}^\bullet, \omega_{A/R}^\bullet)$
+is an isomorphism, and
+\item for every map $R \to k$ to a field the base change
+$\omega_{A/R}^\bullet \otimes_A^\mathbf{L} (A \otimes_R k)$
+is a dualizing complex for $A \otimes_R k$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose $R \to P \to A$ as in the definition of $\varphi^!$.
+Recall that $R \to A$ is a perfect ring map
+(More on Algebra, Lemma
+\ref{more-algebra-lemma-flat-finite-presentation-perfect}) and
+hence $A$ is perfect as a $P$-module
+(More on Algebra, Lemma \ref{more-algebra-lemma-perfect-ring-map}).
+This shows that $\omega_{A/R}^\bullet$ is in $D^b_{\textit{Coh}}(A)$
+by Lemma \ref{lemma-shriek-boundedness}.
+To show $\omega_{A/R}^\bullet$ is $R$-perfect it suffices to
+show it has finite tor dimension as a complex of $R$-modules.
+This is true because
+$\omega_{A/R}^\bullet = \varphi^!(R) = R\Hom(A, P)[n]$
+maps to $R\Hom_P(A, P)[n]$ in $D(P)$, which is perfect in $D(P)$
+(More on Algebra, Lemma \ref{more-algebra-lemma-dual-perfect-complex}),
+hence has finite tor dimension in $D(R)$
+as $R \to P$ is flat. This proves (1).
+
+\medskip\noindent
+Proof of (2). The object
+$R\Hom_A(\omega_{A/R}^\bullet, \omega_{A/R}^\bullet)$
+of $D(A)$ maps in $D(P)$ to
+\begin{align*}
+R\Hom_P(\omega_{A/R}^\bullet, R\Hom(A, P)[n])
+& =
+R\Hom_P(R\Hom_P(A, P)[n], P)[n] \\
+& =
+R\Hom_P(R\Hom_P(A, P), P)
+\end{align*}
+This is equal to $A$ by the already used
+More on Algebra, Lemma \ref{more-algebra-lemma-dual-perfect-complex}.
+
+\medskip\noindent
+Proof of (3). By Lemma \ref{lemma-base-change-relative-algebraic}
+there is an isomorphism
+$$
+\omega_{A/R}^\bullet \otimes_A^\mathbf{L} (A \otimes_R k) \cong
+\omega^\bullet_{A \otimes_R k/k}
+$$
+and the right hand side is a dualizing complex by
+Lemma \ref{lemma-shriek-dualizing-algebraic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-dualizing-over-field}
+Let $K/k$ be an extension of fields. Let $A$ be a finite type
+$k$-algebra. Let $A_K = A \otimes_k K$. If
+$\omega_A^\bullet$ is a dualizing complex for $A$, then
+$\omega_A^\bullet \otimes_A A_K$ is a dualizing complex for $A_K$.
+\end{lemma}
+
+\begin{proof}
+By the uniqueness of dualizing complexes, it doesn't matter which
+dualizing complex we pick for $A$; we omit the detailed proof.
+Denote $\varphi : k \to A$ the algebra structure.
+We may take $\omega_A^\bullet = \varphi^!(k[0])$ by
+Lemma \ref{lemma-shriek-dualizing-algebraic}.
+We conclude by
+Lemma \ref{lemma-relative-dualizing-algebraic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-lci-shriek}
+Let $\varphi : R \to A$ be a local complete intersection homomorphism of
+Noetherian rings. Then $\omega_{A/R}^\bullet$ is an invertible object of
+$D(A)$ and $\varphi^!(K) = K \otimes_R^\mathbf{L} \omega_{A/R}^\bullet$
+for all $K \in D(R)$.
+\end{lemma}
+
+\begin{proof}
+Recall that a local complete intersection homomorphism is a perfect
+ring map by More on Algebra, Lemma \ref{more-algebra-lemma-lci-perfect}.
+Hence the final statement holds by
+Lemma \ref{lemma-upper-shriek-is-tensor-functor}.
+By More on Algebra, Definition
+\ref{more-algebra-definition-local-complete-intersection}
+we can write $A = R[x_1, \ldots, x_n]/I$ where $I$ is a
+Koszul-regular ideal.
+The construction of $\varphi^!$ in
+Section \ref{section-relative-dualizing-complex-algebraic}
+shows that it suffices to show the lemma in case
+$A = R/I$ where $I \subset R$ is a Koszul-regular ideal.
+Checking $\omega_{A/R}^\bullet$ is invertible in $D(A)$
+is local on $\Spec(A)$ by More on Algebra, Lemma
+\ref{more-algebra-lemma-invertible-derived}.
+Moreover, formation of $\omega_{A/R}^\bullet$ commutes with
+localization on $R$ by Lemma \ref{lemma-flat-bc}.
+Combining
+More on Algebra, Definition \ref{more-algebra-definition-regular-ideal} and
+Lemma \ref{more-algebra-lemma-noetherian-finite-all-equivalent} and
+Algebra, Lemma \ref{algebra-lemma-regular-sequence-in-neighbourhood}
+we can find $g_1, \ldots, g_r \in R$ generating the unit ideal in $A$
+such that $I_{g_j} \subset R_{g_j}$ is generated by a regular sequence.
+Thus we may assume $A = R/(f_1, \ldots, f_c)$ where $f_1, \ldots, f_c$
+is a regular sequence in $R$. Then we consider the ring maps
+$$
+R \to R/(f_1) \to R/(f_1, f_2) \to \ldots \to R/(f_1, \ldots, f_c) = A
+$$
+and we use Lemma \ref{lemma-composition-shriek-algebraic}
+(and the final statement already proven)
+to see that it suffices to prove the lemma for each step.
+Finally, in case $A = R/(f)$ for some nonzerodivisor $f$
+we see that the lemma is true since $\varphi^!(R) = R\Hom(A, R)$
+is invertible by Lemma \ref{lemma-compute-for-effective-Cartier-algebraic}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gorenstein-shriek}
+Let $\varphi : R \to A$ be a flat finite type homomorphism of Noetherian rings.
+The following are equivalent
+\begin{enumerate}
+\item the fibres $A \otimes_R \kappa(\mathfrak p)$ are Gorenstein
+for all primes $\mathfrak p \subset R$, and
+\item $\omega_{A/R}^\bullet$ is an invertible object of $D(A)$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-invertible-derived}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If (2) holds, then the fibre rings $A \otimes_R \kappa(\mathfrak p)$
+have invertible dualizing complexes, and hence are Gorenstein.
+See Lemmas \ref{lemma-relative-dualizing-algebraic} and \ref{lemma-gorenstein}.
+
+\medskip\noindent
+For the converse, assume (1).
+Observe that $\omega_{A/R}^\bullet$ is in $D^b_{\textit{Coh}}(A)$
+by Lemma \ref{lemma-shriek-boundedness} (since flat finite type homomorphisms
+of Noetherian rings are perfect, see
+More on Algebra, Lemma
+\ref{more-algebra-lemma-flat-finite-presentation-perfect}).
+Take a prime $\mathfrak q \subset A$ lying over $\mathfrak p \subset R$.
+Then
+$$
+\omega_{A/R}^\bullet \otimes_A^\mathbf{L} \kappa(\mathfrak q) =
+\omega_{A/R}^\bullet \otimes_A^\mathbf{L}
+(A \otimes_R \kappa(\mathfrak p))
+\otimes_{(A \otimes_R \kappa(\mathfrak p))}^\mathbf{L}
+\kappa(\mathfrak q)
+$$
+Applying Lemmas \ref{lemma-relative-dualizing-algebraic} and
+\ref{lemma-gorenstein} and assumption (1) we find that this complex has $1$
+nonzero cohomology group which is a $1$-dimensional
+$\kappa(\mathfrak q)$-vector space. By
+More on Algebra, Lemma
+\ref{more-algebra-lemma-lift-bounded-pseudo-coherent-to-perfect}
+we conclude that $(\omega_{A/R}^\bullet)_f$ is an invertible
+object of $D(A_f)$ for some $f \in A$, $f \not \in \mathfrak q$.
+This proves (2) holds.
+\end{proof}
+
+\noindent
+The following lemma is useful to see how dimension functions change
+when passing to a finite type algebra over a Noetherian ring.
+
+\begin{lemma}
+\label{lemma-shriek-normalized}
+Let $\varphi : R \to A$ be a finite type homomorphism of Noetherian rings.
+Assume $R$ local and let $\mathfrak m \subset A$ be a maximal
+ideal lying over the maximal ideal of $R$. If $\omega_R^\bullet$
+is a normalized dualizing complex for $R$, then
+$\varphi^!(\omega_R^\bullet)_\mathfrak m$ is a normalized
+dualizing complex for $A_\mathfrak m$.
+\end{lemma}
+
+\begin{proof}
+We already know that $\varphi^!(\omega_R^\bullet)$ is a dualizing
+complex for $A$, see Lemma \ref{lemma-shriek-dualizing-algebraic}.
+Choose a factorization $R \to P \to A$ with $P = R[x_1, \ldots, x_n]$
+as in the construction of $\varphi^!$. If we can prove the
+lemma for $R \to P$ and the maximal ideal $\mathfrak m'$ of $P$ corresponding to
+$\mathfrak m$, then we obtain the result for $R \to A$ by
+applying Lemma \ref{lemma-normalized-quotient} to
+$P_{\mathfrak m'} \to A_\mathfrak m$ or by applying
+Lemma \ref{lemma-quotient-function} to $P \to A$.
+In the case $A = R[x_1, \ldots, x_n]$ we see that
+$\dim(A_\mathfrak m) = \dim(R) + n$ for example by
+Algebra, Lemma \ref{algebra-lemma-dimension-base-fibre-equals-total}
+(combined with Algebra, Lemma \ref{algebra-lemma-dim-affine-space}
+to compute the dimension of the fibre).
+The fact that $\omega_R^\bullet$ is normalized means
+that $i = -\dim(R)$ is the smallest index such that
+$H^i(\omega_R^\bullet)$ is nonzero (follows from
+Lemmas \ref{lemma-sitting-in-degrees} and
+\ref{lemma-nonvanishing-generically-local}).
+Then $\varphi^!(\omega_R^\bullet)_\mathfrak m =
+\omega_R^\bullet \otimes_R A_\mathfrak m[n]$
+has its first nonzero cohomology module in degree $-\dim(R) - n$
+and therefore is the normalized dualizing complex for $A_\mathfrak m$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-trivial-vanishing}
+Let $R \to A$ be a finite type homomorphism of Noetherian rings.
+Let $\mathfrak q \subset A$ be a prime ideal lying over
+$\mathfrak p \subset R$. Then
+$$
+H^i(\omega_{A/R}^\bullet)_\mathfrak q \not = 0
+\Rightarrow - d \leq i
+$$
+where $d$ is the dimension of the fibre of $\Spec(A) \to \Spec(R)$
+over $\mathfrak p$ at the point $\mathfrak q$.
+\end{lemma}
+
+\begin{proof}
+Choose a factorization $R \to P \to A$ with $P = R[x_1, \ldots, x_n]$
+as in Section \ref{section-relative-dualizing-complex-algebraic}
+so that $\omega_{A/R}^\bullet = R\Hom(A, P)[n]$.
+We have to show that $R\Hom(A, P)_\mathfrak q$
+has vanishing cohomology in degrees $< n - d$.
+By Lemma \ref{lemma-RHom-ext} this means we have to
+show that $\Ext_P^i(P/I, P)_{\mathfrak r} = 0$ for $i < n - d$
+where $\mathfrak r \subset P$ is the prime corresponding to $\mathfrak q$
+and $I$ is the kernel of $P \to A$.
+We may rewrite this as
+$\Ext_{P_\mathfrak r}^i(P_\mathfrak r/IP_\mathfrak r, P_\mathfrak r)$
+by More on Algebra, Lemma
+\ref{more-algebra-lemma-pseudo-coherence-and-base-change-ext}.
+Thus we have to show
+$$
+\text{depth}_{IP_\mathfrak r}(P_\mathfrak r) \geq n - d
+$$
+by Lemma \ref{lemma-depth}.
+By Lemma \ref{lemma-depth-flat-CM} we have
+$$
+\text{depth}_{IP_\mathfrak r}(P_\mathfrak r) \geq
+\dim((P \otimes_R \kappa(\mathfrak p))_\mathfrak r) -
+\dim((P/I \otimes_R \kappa(\mathfrak p))_\mathfrak r)
+$$
+The two expressions on the right hand side agree by
+Algebra, Lemma \ref{algebra-lemma-codimension}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-flat-vanishing}
+Let $R \to A$ be a flat finite type homomorphism of Noetherian rings.
+Let $\mathfrak q \subset A$ be a prime ideal lying over
+$\mathfrak p \subset R$. Then
+$$
+H^i(\omega_{A/R}^\bullet)_\mathfrak q \not = 0
+\Rightarrow - d \leq i \leq 0
+$$
+where $d$ is the dimension of the fibre of $\Spec(A) \to \Spec(R)$
+over $\mathfrak p$ at the point $\mathfrak q$. If all fibres of
+$\Spec(A) \to \Spec(R)$ have dimension $\leq d$, then
+$\omega_{A/R}^\bullet$ has tor amplitude in $[-d, 0]$
+as a complex of $R$-modules.
+\end{lemma}
+
+\begin{proof}
+The lower bound has been shown in
+Lemma \ref{lemma-relative-dualizing-trivial-vanishing}.
+Choose a factorization $R \to P \to A$ with $P = R[x_1, \ldots, x_n]$
+as in Section \ref{section-relative-dualizing-complex-algebraic}
+so that $\omega_{A/R}^\bullet = R\Hom(A, P)[n]$.
+The upper bound means that $\Ext^i_P(A, P)$ is zero for $i > n$.
+This follows from
+More on Algebra, Lemma \ref{more-algebra-lemma-perfect-over-polynomial-ring}
+which shows that $A$ is a perfect $P$-module with
+tor amplitude in $[-n, 0]$.
+
+\medskip\noindent
+Proof of the final statement. Let $R \to R'$ be a ring homomorphism
+of Noetherian rings. Set $A' = A \otimes_R R'$. Then
+$$
+\omega_{A'/R'}^\bullet =
+\omega_{A/R}^\bullet \otimes_A^\mathbf{L} A' =
+\omega_{A/R}^\bullet \otimes_R^\mathbf{L} R'
+$$
+The first isomorphism by Lemma \ref{lemma-base-change-relative-algebraic}
+and the second, which takes place in $D(R')$, by
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-comparison}.
+By the first part of the proof
+(note that the fibres of $\Spec(A') \to \Spec(R')$ have dimension $\leq d$)
+we conclude that $\omega_{A/R}^\bullet \otimes_R^\mathbf{L} R'$
+has cohomology only in degrees $[-d, 0]$. Taking $R' = R \oplus M$
+to be the square zero thickening of $R$ by a finite $R$-module $M$,
+we see that $R\Hom(A, P) \otimes_R^\mathbf{L} M$
+has cohomology only in the interval $[-d, 0]$ for any finite $R$-module $M$.
+Since any $R$-module is a filtered colimit of finite $R$-modules
+and since tensor products commute with colimits we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-CM-vanishing}
+Let $R \to A$ be a finite type homomorphism of Noetherian rings.
+Let $\mathfrak p \subset R$ be a prime ideal. Assume
+\begin{enumerate}
+\item $R_\mathfrak p$ is Cohen-Macaulay, and
+\item for any minimal prime $\mathfrak q \subset A$ we have
+$\text{trdeg}_{\kappa(R \cap \mathfrak q)} \kappa(\mathfrak q) \leq r$.
+\end{enumerate}
+Then
+$$
+H^i(\omega_{A/R}^\bullet)_\mathfrak p \not = 0 \Rightarrow - r \leq i
+$$
+and $H^{-r}(\omega_{A/R}^\bullet)_\mathfrak p$ is $(S_2)$
+as an $A_\mathfrak p$-module.
+\end{lemma}
+
+\begin{proof}
+We may replace $R$ by $R_\mathfrak p$ by
+Lemma \ref{lemma-base-change-relative-algebraic}.
+Thus we may assume $R$ is a Cohen-Macaulay local ring
+and we have to show the assertions of the lemma
+for the $A$-modules $H^i(\omega_{A/R}^\bullet)$.
+
+\medskip\noindent
+Let $R^\wedge$ be the completion of $R$.
+The map $R \to R^\wedge$ is flat and $R^\wedge$ is Cohen-Macaulay
+(More on Algebra, Lemma \ref{more-algebra-lemma-completion-CM}).
+Observe that the minimal primes of $A \otimes_R R^\wedge$
+lie over minimal primes of $A$ by the flatness of
+$A \to A \otimes_R R^\wedge$ (and going down for flatness, see
+Algebra, Lemma \ref{algebra-lemma-flat-going-down}).
+Thus condition (2) holds for the finite type ring map
+$R^\wedge \to A \otimes_R R^\wedge$ by
+Morphisms, Lemma \ref{morphisms-lemma-dimension-fibre-after-base-change}.
+Appealing to Lemma \ref{lemma-base-change-relative-algebraic}
+once again it suffices to prove the lemma for
+$R^\wedge \to A \otimes_R R^\wedge$. In this way, using
+Lemma \ref{lemma-ubiquity-dualizing},
+we may assume $R$ is a Noetherian local
+Cohen-Macaulay ring which has a dualizing complex $\omega_R^\bullet$.
+
+\medskip\noindent
+Let $\mathfrak m \subset A$ be a maximal ideal.
+It suffices to show that the assertions of
+the lemma hold for $H^i(\omega_{A/R}^\bullet)_\mathfrak m$.
+If $\mathfrak m$ does not lie over the maximal ideal of $R$,
+then we replace $R$ by a localization to reduce to this case
+(small detail omitted).
+
+\medskip\noindent
+We may assume $\omega_R^\bullet$ is normalized.
+Setting $d = \dim(R)$ we see that $\omega_R^\bullet = \omega_R[d]$
+for some $R$-module $\omega_R$, see
+Lemma \ref{lemma-apply-CM}. Set
+$\omega_A^\bullet = \varphi^!(\omega_R^\bullet)$.
+By Lemma \ref{lemma-relative-dualizing-if-have-omega} we have
+$$
+\omega_{A/R}^\bullet =
+R\Hom_A(\omega_R[d] \otimes_R^\mathbf{L} A, \omega_A^\bullet)
+$$
+By the dimension formula we have $\dim(A_\mathfrak m) \leq d + r$, see
+Morphisms, Lemma \ref{morphisms-lemma-dimension-formula-general}
+and use that $\kappa(\mathfrak m)$ is finite over the residue field of $R$
+by the Hilbert Nullstellensatz.
+By Lemma \ref{lemma-shriek-normalized}
+we see that $(\omega_A^\bullet)_\mathfrak m$
+is a normalized dualizing complex for $A_\mathfrak m$.
+Hence $H^i((\omega_A^\bullet)_\mathfrak m)$ is nonzero
+only for $-d - r \leq i \leq 0$, see
+Lemma \ref{lemma-sitting-in-degrees}.
+Since $\omega_R[d] \otimes_R^\mathbf{L} A$ lives in
+degrees $\leq -d$ we conclude the vanishing holds.
+Finally, we also see that
+$$
+H^{-r}(\omega_{A/R}^\bullet)_\mathfrak m =
+\Hom_A(\omega_R \otimes_R A, H^{-d - r}(\omega_A^\bullet))_\mathfrak m
+$$
+Since $H^{-d - r}(\omega_A^\bullet)_\mathfrak m$ is $(S_2)$ by
+Lemma \ref{lemma-depth-dualizing-module}
+we find that the final statement is true by
+More on Algebra, Lemma \ref{more-algebra-lemma-hom-into-S2}.
+\end{proof}
+
+
+
+\section{More on dualizing complexes}
+\label{section-more-dualizing}
+
+\noindent
+Some lemmas which don't fit anywhere else very well.
+
+\begin{lemma}
+\label{lemma-descent}
+Let $A \to B$ be a faithfully flat map of Noetherian rings.
+If $K \in D(A)$ and $K \otimes_A^\mathbf{L} B$
+is a dualizing complex for $B$, then $K$ is a dualizing complex
+for $A$.
+\end{lemma}
+
+\begin{proof}
+Since $A \to B$ is flat we have
+$H^i(K) \otimes_A B = H^i(K \otimes_A^\mathbf{L} B)$.
+Since $K \otimes_A^\mathbf{L} B$ is in $D^b_{\textit{Coh}}(B)$
+we first find that $K$ is in $D^b(A)$ and then we see that
+$H^i(K)$ is a finite $A$-module by
+Algebra, Lemma \ref{algebra-lemma-descend-properties-modules}.
+Let $M$ be a finite $A$-module. Then
+$$
+R\Hom_A(M, K) \otimes_A B = R\Hom_B(M \otimes_A B, K \otimes_A^\mathbf{L} B)
+$$
+by More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}.
+Since $K \otimes_A^\mathbf{L} B$ has finite injective dimension,
+say injective-amplitude in $[a, b]$, we see that the right hand side
+has vanishing cohomology in degrees $> b$.
+Since $A \to B$ is faithfully flat, we find
+that $R\Hom_A(M, K)$ has vanishing cohomology in degrees $> b$.
+Thus $K$ has finite injective dimension by
+More on Algebra, Lemma \ref{more-algebra-lemma-injective-amplitude}.
+To finish the proof we have to show that the map
+$A \to R\Hom_A(K, K)$ is an isomorphism.
+For this we again use
+More on Algebra, Lemma \ref{more-algebra-lemma-base-change-RHom}
+and the fact that
+$B \to R\Hom_B(K \otimes_A^\mathbf{L} B, K \otimes_A^\mathbf{L} B)$
+is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-ascent}
+Let $\varphi : A \to B$ be a homomorphism of Noetherian rings. Assume
+\begin{enumerate}
+\item $A \to B$ is syntomic and induces a surjective map on spectra, or
+\item $A \to B$ is a faithfully flat local complete intersection, or
+\item $A \to B$ is faithfully flat of finite type with Gorenstein fibres.
+\end{enumerate}
+Then $K \in D(A)$ is a dualizing complex for $A$ if and only if
+$K \otimes_A^\mathbf{L} B$ is a dualizing complex for $B$.
+\end{lemma}
+
+\begin{proof}
+Observe that $A \to B$ satisfies (1) if and only if $A \to B$
+satisfies (2) by More on Algebra, Lemma \ref{more-algebra-lemma-syntomic-lci}.
+Observe that in both (2) and (3) the relative dualizing
+complex $\varphi^!(A) = \omega_{B/A}^\bullet$ is an invertible
+object of $D(B)$, see
+Lemmas \ref{lemma-lci-shriek} and \ref{lemma-gorenstein-shriek}.
+Moreover we have
+$\varphi^!(K) = K \otimes_A^\mathbf{L} \omega_{B/A}^\bullet$
+in both cases, see Lemma \ref{lemma-upper-shriek-is-tensor-functor}
+for case (3).
+Thus $\varphi^!(K)$ is the same as $K \otimes_A^\mathbf{L} B$
+up to tensoring with an invertible object of $D(B)$.
+Hence $\varphi^!(K)$ is a dualizing complex for $B$
+if and only if $K \otimes_A^\mathbf{L} B$ is
+(as being a dualizing complex is local and invariant under shifts).
+Thus we see that if $K$ is dualizing for $A$, then
+$K \otimes_A^\mathbf{L} B$ is dualizing for $B$ by
+Lemma \ref{lemma-shriek-dualizing-algebraic}.
+To descend the property, see
+Lemma \ref{lemma-descent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-hull-goes-up}
+Let $(A, \mathfrak m, \kappa) \to (B, \mathfrak n, l)$
+be a flat local homomorphism of Noetherian rings such that
+$\mathfrak n = \mathfrak m B$. If $E$ is the injective
+hull of $\kappa$, then $E \otimes_A B$ is the injective
+hull of $l$.
+\end{lemma}
+
+\begin{proof}
+Write $E = \bigcup E_n$ as in Lemma \ref{lemma-union-artinian}.
+It suffices to show that $E_n \otimes_{A/\mathfrak m^n} B/\mathfrak n^n$
+is the injective hull of $l$ over $B/\mathfrak n^n$.
+This reduces us to the case where $A$ and $B$ are Artinian local.
+Observe that $\text{length}_A(A) = \text{length}_B(B)$ and
+$\text{length}_A(E) = \text{length}_B(E \otimes_A B)$
+by Algebra, Lemma \ref{algebra-lemma-pullback-module}.
+By Lemma \ref{lemma-finite} we have
+$\text{length}_A(E) = \text{length}_A(A)$ and
+$\text{length}_B(E') = \text{length}_B(B)$
+where $E'$ is the injective hull of $l$ over $B$.
+We conclude $\text{length}_B(E') = \text{length}_B(E \otimes_A B)$.
+Observe that
+$$
+\dim_l((E \otimes_A B)[\mathfrak n]) =
+\dim_l(E[\mathfrak m] \otimes_A B) =
+\dim_\kappa(E[\mathfrak m]) = 1
+$$
+where we have used flatness of $A \to B$ and $\mathfrak n = \mathfrak mB$.
+Thus there is an injective $B$-module map $E \otimes_A B \to E'$
+by Lemma \ref{lemma-torsion-submodule-sum-injective-hulls}.
+By equality of lengths shown above this is an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-injective-goes-up}
+Let $\varphi : A \to B$ be a flat homomorphism of Noetherian rings such
+that for all primes $\mathfrak q \subset B$ we have
+$\mathfrak p B_\mathfrak q = \mathfrak qB_\mathfrak q$
+where $\mathfrak p = \varphi^{-1}(\mathfrak q)$, for example
+if $\varphi$ is \'etale.
+If $I$ is an injective $A$-module, then $I \otimes_A B$ is
+an injective $B$-module.
+\end{lemma}
+
+\begin{proof}
+\'Etale maps satisfy the assumption by
+Algebra, Lemma \ref{algebra-lemma-etale-at-prime}.
+By Lemma \ref{lemma-sum-injective-modules} and
+Proposition \ref{proposition-structure-injectives-noetherian}
+we may assume $I$ is the injective hull of $\kappa(\mathfrak p)$
+for some prime $\mathfrak p \subset A$.
+Then $I$ is a module over $A_\mathfrak p$.
+It suffices to prove $I \otimes_A B = I \otimes_{A_\mathfrak p} B_\mathfrak p$
+is injective as a $B_\mathfrak p$-module, see
+Lemma \ref{lemma-injective-flat}.
+Thus we may assume $(A, \mathfrak m, \kappa)$ is local Noetherian
+and $I = E$ is the injective hull of the residue field $\kappa$.
+Our assumption implies that the Noetherian ring $B/\mathfrak m B$
+is a product of fields (details omitted).
+Thus there are finitely many prime ideals
+$\mathfrak m_1, \ldots, \mathfrak m_n$ in $B$
+lying over $\mathfrak m$ and they are all maximal ideals.
+Write $E = \bigcup E_n$ as in Lemma \ref{lemma-union-artinian}.
+Then $E \otimes_A B = \bigcup E_n \otimes_A B$
+and $E_n \otimes_A B$ is a finite $B$-module with support
+$\{\mathfrak m_1, \ldots, \mathfrak m_n\}$ hence decomposes
+as a product over the localizations at $\mathfrak m_i$.
+Thus $E \otimes_A B = \prod (E \otimes_A B)_{\mathfrak m_i}$.
+Since $(E \otimes_A B)_{\mathfrak m_i} = E \otimes_A B_{\mathfrak m_i}$
+is the injective hull of the residue field of $\mathfrak m_i$
+by Lemma \ref{lemma-injective-hull-goes-up} we conclude.
+\end{proof}
+
+
+
+
+
+
+\section{Relative dualizing complexes}
+\label{section-relative-dualizing-complexes}
+
+\noindent
+For a finite type ring map $\varphi : R \to A$ of Noetherian rings
+we have the relative dualizing complex $\omega_{A/R}^\bullet = \varphi^!(R)$
+considered in Section \ref{section-relative-dualizing-complexes-Noetherian}.
+If $R$ is not Noetherian, a similarly constructed complex will
+in general not have good properties. In this section, we give a
+definition of a relative dualizing complex for a flat and finitely presented
+ring map $R \to A$ of non-Noetherian rings. The definition is
+chosen to globalize to flat and finitely presented morphisms
+of schemes, see Duality for Schemes, Section
+\ref{duality-section-relative-dualizing-complexes}. We will
+show that relative dualizing complexes exist (when the definition
+applies), are unique up to (noncanonical) isomorphism,
+and that in the Noetherian case we recover the complex of
+Section \ref{section-relative-dualizing-complexes-Noetherian}.
+
+\medskip\noindent
+The Noetherian reader may safely skip this section!
+
+\begin{definition}
+\label{definition-relative-dualizing-complex}
+Let $R \to A$ be a flat ring map of finite presentation.
+A {\it relative dualizing complex} is an object $K \in D(A)$ such that
+\begin{enumerate}
+\item $K$ is $R$-perfect (More on Algebra, Definition
+\ref{more-algebra-definition-relatively-perfect}), and
+\item $R\Hom_{A \otimes_R A}(A, K \otimes_A^\mathbf{L} (A \otimes_R A))$
+is isomorphic to $A$.
+\end{enumerate}
+\end{definition}
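+
+\noindent
+For orientation, here is a sketch of the simplest case (using only the
+standard Koszul resolution; this computation is not part of the formal
+development): take $A = R[x]$. Then $K = A[1]$ is a relative dualizing
+complex. Indeed, $K$ is $R$-perfect, and writing
+$A \otimes_R A = R[x, y]$ with the multiplication map given by
+$x, y \mapsto x$, the Koszul resolution
+$$
+0 \to R[x, y] \xrightarrow{x - y} R[x, y] \to A \to 0
+$$
+shows $R\Hom_{A \otimes_R A}(A, A \otimes_R A) = A[-1]$, whence
+$$
+R\Hom_{A \otimes_R A}(A, K \otimes_A^\mathbf{L} (A \otimes_R A)) =
+A[-1][1] = A
+$$
+as required in part (2) of the definition.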
+
+\noindent
+To understand this definition you may have to read and understand some
+of the following lemmas. Lemmas \ref{lemma-relative-dualizing-noetherian} and
+\ref{lemma-uniqueness-relative-dualizing} show this definition
+does not clash with the definition in
+Section \ref{section-relative-dualizing-complexes-Noetherian}.
+
+\begin{lemma}
+\label{lemma-uniqueness-relative-dualizing}
+Let $R \to A$ be a flat ring map of finite presentation.
+Any two relative dualizing complexes for $R \to A$ are isomorphic.
+\end{lemma}
+
+\begin{proof}
+Let $K$ and $L$ be two relative dualizing complexes for $R \to A$.
+Denote $K_1 = K \otimes_A^\mathbf{L} (A \otimes_R A)$
+and $L_2 = (A \otimes_R A) \otimes_A^\mathbf{L} L$ the
+derived base changes via the first and second coprojections
+$A \to A \otimes_R A$. By symmetry the assumption on $L_2$
+implies that $R\Hom_{A \otimes_R A}(A, L_2)$ is isomorphic to $A$.
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-internal-hom-evaluate-tensor-isomorphism} part (3)
+applied twice we have
+$$
+A \otimes_{A \otimes_R A}^\mathbf{L} L_2 \cong
+R\Hom_{A \otimes_R A}(A, K_1 \otimes_{A \otimes_R A}^\mathbf{L} L_2) \cong
+A \otimes_{A \otimes_R A}^\mathbf{L} K_1
+$$
+Applying the restriction functor $D(A \otimes_R A) \to D(A)$
+for either coprojection we obtain the desired result.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-noetherian}
+Let $\varphi : R \to A$ be a flat finite type ring map of Noetherian rings.
+Then the relative dualizing complex $\omega_{A/R}^\bullet = \varphi^!(R)$
+of Section \ref{section-relative-dualizing-complexes-Noetherian}
+is a relative dualizing complex in the sense of
+Definition \ref{definition-relative-dualizing-complex}.
+\end{lemma}
+
+\begin{proof}
+From Lemma \ref{lemma-relative-dualizing-algebraic} we see that
+$\varphi^!(R)$ is $R$-perfect.
+Denote $\delta : A \otimes_R A \to A$ the multiplication map
+and $p_1, p_2 : A \to A \otimes_R A$ the coprojections.
+Then
+$$
+\varphi^!(R) \otimes_A^\mathbf{L} (A \otimes_R A) =
+\varphi^!(R) \otimes_{A, p_1}^\mathbf{L} (A \otimes_R A) =
+p_2^!(A)
+$$
+by Lemma \ref{lemma-flat-bc}. Recall that
+$
+R\Hom_{A \otimes_R A}(A, \varphi^!(R) \otimes_A^\mathbf{L} (A \otimes_R A))
+$
+is the image of $\delta^!(\varphi^!(R) \otimes_A^\mathbf{L} (A \otimes_R A))$
+under the restriction map $\delta_* : D(A) \to D(A \otimes_R A)$.
+Use the definition of $\delta^!$ from
+Section \ref{section-relative-dualizing-complex-algebraic}
+and Lemma \ref{lemma-RHom-ext}.
+Since $\delta^!(p_2^!(A)) \cong A$ by
+Lemma \ref{lemma-composition-shriek-algebraic}
+we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-relative-dualizing}
+Let $R \to A$ be a flat ring map of finite presentation. Then
+\begin{enumerate}
+\item there exists a relative dualizing complex $K$ in $D(A)$, and
+\item for any ring map $R \to R'$ setting $A' = A \otimes_R R'$
+and $K' = K \otimes_A^\mathbf{L} A'$, then $K'$ is a
+relative dualizing complex for $R' \to A'$.
+\end{enumerate}
+Moreover, if
+$$
+\xi : A \longrightarrow K \otimes_A^\mathbf{L} (A \otimes_R A)
+$$
+is a generator for the cyclic module
+$\Hom_{D(A \otimes_R A)}(A, K \otimes_A^\mathbf{L} (A \otimes_R A))$
+then in (2) the derived base change of $\xi$ by
+$A \otimes_R A \to A' \otimes_{R'} A'$ is a generator for
+the cyclic module
+$\Hom_{D(A' \otimes_{R'} A')}(A',
+K' \otimes_{A'}^\mathbf{L} (A' \otimes_{R'} A'))$
+\end{lemma}
+
+\begin{proof}
+We first reduce to the Noetherian case. By
+Algebra, Lemma \ref{algebra-lemma-flat-finite-presentation-limit-flat}
+there exists a finite type $\mathbf{Z}$ subalgebra $R_0 \subset R$
+and a flat finite type ring map $R_0 \to A_0$ such that
+$A = A_0 \otimes_{R_0} R$. By Lemma \ref{lemma-relative-dualizing-noetherian}
+there exists a relative
+dualizing complex $K_0 \in D(A_0)$.
+Thus if we show (2) for $K_0$, then we find that
+$K_0 \otimes_{A_0}^\mathbf{L} A$ is
+a relative dualizing complex for $R \to A$ and that it also satisfies (2)
+by transitivity of derived base change.
+The uniqueness of relative dualizing complexes
+(Lemma \ref{lemma-uniqueness-relative-dualizing})
+then shows that this holds for
+any relative dualizing complex.
+
+\medskip\noindent
+Assume $R$ Noetherian and let $K$ be a relative dualizing complex
+for $R \to A$. Given a ring map $R \to R'$ set $A' = A \otimes_R R'$
+and $K' = K \otimes_A^\mathbf{L} A'$. To finish the proof we have
+to show that $K'$ is a relative dualizing complex for $R' \to A'$.
+By More on Algebra, Lemma
+\ref{more-algebra-lemma-base-change-relatively-perfect}
+we see that $K'$ is $R'$-perfect in all cases.
+By Lemmas \ref{lemma-base-change-relative-algebraic} and
+\ref{lemma-relative-dualizing-noetherian}
+if $R'$ is Noetherian, then $K'$ is a relative dualizing complex
+for $R' \to A'$ (in either sense).
+Transitivity of derived tensor product shows that
+$K \otimes_A^\mathbf{L} (A \otimes_R A)
+\otimes_{A \otimes_R A}^\mathbf{L} (A' \otimes_{R'} A') =
+K' \otimes_{A'}^\mathbf{L} (A' \otimes_{R'} A')$.
+Flatness of $R \to A$ guarantees that
+$A \otimes_{A \otimes_R A}^\mathbf{L} (A' \otimes_{R'} A') = A'$;
+namely $A \otimes_R A$ and $R'$ are tor independent over $R$
+so we can apply More on Algebra, Lemma
+\ref{more-algebra-lemma-base-change-comparison}.
+Finally, $A$ is pseudo-coherent as an $A \otimes_R A$-module
+by More on Algebra, Lemma
+\ref{more-algebra-lemma-more-relative-pseudo-coherent-is-moot}. Thus
+we have checked all the assumptions of
+More on Algebra, Lemma
+\ref{more-algebra-lemma-compute-RHom-relatively-perfect}.
+We find there exists a bounded below complex
+$E^\bullet$ of $R$-flat finitely presented $A \otimes_R A$-modules
+such that $E^\bullet \otimes_R R'$ represents
+$R\Hom_{A' \otimes_{R'} A'}(A',
+K' \otimes_{A'}^\mathbf{L} (A' \otimes_{R'} A'))$
+and these identifications are compatible with derived base change.
+Let $n \in \mathbf{Z}$, $n \not = 0$.
+Define $Q^n$ by the sequence
+$$
+E^{n - 1} \to E^n \to Q^n \to 0
+$$
+Let $\mathfrak p \subset R$ be a prime ideal.
+Since $\kappa(\mathfrak p)$ is a Noetherian ring, we know that
+$H^n(E^\bullet \otimes_R \kappa(\mathfrak p)) = 0$, see remarks above.
+Chasing diagrams this means that
+$$
+Q^n \otimes_R \kappa(\mathfrak p) \to E^{n + 1} \otimes_R \kappa(\mathfrak p)
+$$
+is injective. Hence for a prime $\mathfrak q$ of $A \otimes_R A$
+lying over $\mathfrak p$ we have $Q^n_\mathfrak q$ is $R_\mathfrak p$-flat
+and $Q^n_\mathfrak q \to E^{n + 1}_\mathfrak q$ is
+$R_\mathfrak p$-universally injective, see
+Algebra, Lemma \ref{algebra-lemma-mod-injective}.
+Since this holds for all primes,
+we conclude that $Q^n$ is $R$-flat
+and $Q^n \to E^{n + 1}$ is $R$-universally injective. In particular
+$H^n(E^\bullet \otimes_R R') = 0$ for any ring map $R \to R'$.
+Let $Z^0 = \Ker(E^0 \to E^1)$. Since there is an exact sequence
+$0 \to Z^0 \to E^0 \to E^1 \to Q^1 \to 0$ we see that $Z^0$
+is $R$-flat and that
+$Z^0 \otimes_R R' = \Ker(E^0 \otimes_R R' \to E^1 \otimes_R R')$
+for all $R \to R'$. Then the short exact sequence
+$0 \to Q^{-1} \to Z^0 \to H^0(E^\bullet) \to 0$
+shows that
+$$
+H^0(E^\bullet \otimes_R R') = H^0(E^\bullet) \otimes_R R'
+= A \otimes_R R' = A'
+$$
+as desired. This equality furthermore gives the final assertion
+of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-RHom}
+Let $R \to A$ be a flat ring map of finite presentation.
+Let $K$ be a relative dualizing complex.
+Then $A \to R\Hom_A(K, K)$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+By
+Algebra, Lemma \ref{algebra-lemma-flat-finite-presentation-limit-flat}
+there exists a finite type $\mathbf{Z}$ subalgebra $R_0 \subset R$
+and a flat finite type ring map $R_0 \to A_0$ such that
+$A = A_0 \otimes_{R_0} R$. By Lemmas
+\ref{lemma-uniqueness-relative-dualizing},
+\ref{lemma-relative-dualizing-noetherian}, and
+\ref{lemma-base-change-relative-dualizing}
+there exists a relative dualizing complex $K_0 \in D(A_0)$
+and its derived base change is $K$.
+This reduces us to the situation discussed in the next paragraph.
+
+\medskip\noindent
+Assume $R$ Noetherian and let $K$ be a relative dualizing complex
+for $R \to A$. Given a ring map $R \to R'$ set $A' = A \otimes_R R'$
+and $K' = K \otimes_A^\mathbf{L} A'$. To finish the proof we show
+$R\Hom_{A'}(K', K') = A'$. By Lemma \ref{lemma-relative-dualizing-algebraic}
+we know this is true whenever $R'$ is Noetherian.
+Since a general $R'$ is a filtered colimit of Noetherian
+$R$-algebras, we find the result holds by
+More on Algebra, Lemma \ref{more-algebra-lemma-colimit-relatively-perfect}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-dualizing-composition}
+Let $R \to A \to B$ be ring maps which are flat and of finite presentation.
+Let $K_{A/R}$ and $K_{B/A}$ be relative dualizing complexes for $R \to A$
+and $A \to B$. Then $K = K_{A/R} \otimes_A^\mathbf{L} K_{B/A}$
+is a relative dualizing complex for $R \to B$.
+\end{lemma}
+
+\begin{proof}
+We will use reduction to the Noetherian case.
+Namely, by Algebra, Lemma
+\ref{algebra-lemma-flat-finite-presentation-limit-flat}
+there exists a finite type $\mathbf{Z}$ subalgebra $R_0 \subset R$
+and a flat finite type ring map $R_0 \to A_0$ such that
+$A = A_0 \otimes_{R_0} R$. After increasing $R_0$ and correspondingly
+replacing $A_0$ we may assume there is a flat
+finite type ring map $A_0 \to B_0$ such that $B = B_0 \otimes_{R_0} R$
+(use the same lemma). If we prove the lemma for $R_0 \to A_0 \to B_0$,
+then the lemma follows by Lemmas
+\ref{lemma-uniqueness-relative-dualizing},
+\ref{lemma-relative-dualizing-noetherian}, and
+\ref{lemma-base-change-relative-dualizing}.
+This reduces us to the situation discussed in the next paragraph.
+
+\medskip\noindent
+Assume $R$ is Noetherian and denote $\varphi : R \to A$ and
+$\psi : A \to B$ the given ring maps. Then $K_{A/R} \cong \varphi^!(R)$ and
+$K_{B/A} \cong \psi^!(A)$, see references given above.
+Then
+$$
+K = K_{A/R} \otimes_A^\mathbf{L} K_{B/A} \cong
+\varphi^!(R) \otimes_A^\mathbf{L} \psi^!(A) \cong
+\psi^!(\varphi^!(R)) \cong (\psi \circ \varphi)^!(R)
+$$
+by Lemmas \ref{lemma-upper-shriek-is-tensor-functor} and
+\ref{lemma-composition-shriek-algebraic}. Thus $K$ is a relative
+dualizing complex for $R \to B$.
+\end{proof}
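+
+\noindent
+For example, if $A = R[x]$ and $B = A[y]$, then
+$K_{A/R} \cong A[1]$ and $K_{B/A} \cong B[1]$
+(the relative dualizing complex of a polynomial ring in one variable
+is a shifted copy of the ring), and the lemma yields
+$$
+K_{B/R} = A[1] \otimes_A^\mathbf{L} B[1] \cong B[2]
+$$
+in agreement with $\omega_{B/R}^\bullet = R\Hom_P(B, P)[2] \cong B[2]$
+for the factorization $R \to P \to B$ with $P = R[x, y] = B$.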
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/equiv.tex b/books/stacks/equiv.tex
new file mode 100644
index 0000000000000000000000000000000000000000..510ecf77e74fb1472324546b8facca48b07f94e3
--- /dev/null
+++ b/books/stacks/equiv.tex
@@ -0,0 +1,4449 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Derived Categories of Varieties}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this chapter we continue the discussion started in
+Derived Categories of Schemes, Section \ref{perfect-section-introduction}.
+We will discuss Fourier-Mukai transforms,
+first studied by Mukai in \cite{Mukai}.
+We will prove Orlov's theorem on derived equivalences (\cite{Orlov-K3}).
+We also discuss the countability of derived equivalence
+classes proved by Anel and To\"en in \cite{AT}.
+
+\medskip\noindent
+A good introduction to this material is the book
+\cite{Huybrechts} by Daniel Huybrechts. Some other
+papers which helped popularize this topic are
+\begin{enumerate}
+\item the paper by Bondal and Kapranov, see \cite{Bondal-Kapranov}
+\item the paper by Bondal and Orlov, see \cite{Bondal-Orlov}
+\item the paper by Bondal and Van den Bergh, see \cite{BvdB}
+\item the papers by Beilinson, see
+\cite{Beilinson} and \cite{Beilinson-derived}
+\item the paper by Orlov, see \cite{Orlov-AV}
+\item the paper by Orlov, see \cite{Orlov-motives}
+\item the paper by Rouquier, see \cite{Rouquier-dimensions}
+\item there are many more we could mention here.
+\end{enumerate}
+
+
+
+
+\section{Conventions and notation}
+\label{section-conventions}
+
+\noindent
+Let $k$ be a field. A $k$-linear triangulated category $\mathcal{T}$
+is a triangulated category (Derived Categories, Section
+\ref{derived-section-triangulated-definitions})
+which is endowed with a $k$-linear structure
+(Differential Graded Algebra, Section \ref{dga-section-linear})
+such that the translation functors $[n] : \mathcal{T} \to \mathcal{T}$
+are $k$-linear for all $n \in \mathbf{Z}$.
+
+\medskip\noindent
+Let $k$ be a field. We denote $\text{Vect}_k$ the category of
+$k$-vector spaces. For a $k$-vector space $V$ we denote
+$V^\vee$ the $k$-linear dual of $V$, i.e., $V^\vee = \Hom_k(V, k)$.
+
+\medskip\noindent
+Let $X$ be a scheme. We denote $D_{perf}(\mathcal{O}_X)$ the full
+subcategory of $D(\mathcal{O}_X)$ consisting of perfect complexes
+(Cohomology, Section \ref{cohomology-section-perfect}).
+If $X$ is Noetherian then
+$D_{perf}(\mathcal{O}_X) \subset D^b_{\textit{Coh}}(\mathcal{O}_X)$, see
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-on-noetherian}.
+If $X$ is Noetherian and regular, then
+$D_{perf}(\mathcal{O}_X) = D^b_{\textit{Coh}}(\mathcal{O}_X)$, see
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-on-regular}.
+
+\medskip\noindent
+Let $k$ be a field. Let $X$ and $Y$ be schemes over $k$. In this
+situation we will write $X \times Y$ instead of $X \times_{\Spec(k)} Y$.
+
+
+\medskip\noindent
+Let $S$ be a scheme. Let $X$, $Y$ be schemes over $S$.
+Let $\mathcal{F}$ be an $\mathcal{O}_X$-module and let
+$\mathcal{G}$ be an $\mathcal{O}_Y$-module. We set
+$$
+\mathcal{F} \boxtimes \mathcal{G} =
+\text{pr}_1^*\mathcal{F} \otimes_{\mathcal{O}_{X \times_S Y}}
+\text{pr}_2^*\mathcal{G}
+$$
+as $\mathcal{O}_{X \times_S Y}$-modules.
+If $K \in D(\mathcal{O}_X)$ and $M \in D(\mathcal{O}_Y)$ then we set
+$$
+K \boxtimes M =
+L\text{pr}_1^*K \otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L} L\text{pr}_2^*M
+$$
+as an object of $D(\mathcal{O}_{X \times_S Y})$.
+Thus our notation is potentially ambiguous, but context should make it clear
+which of the two is meant.
+
+
+
+
+
+\section{Serre functors}
+\label{section-Serre-functors}
+
+\noindent
+The material in this section is taken from \cite{Bondal-Kapranov}.
+
+\begin{lemma}
+\label{lemma-Serre-functor-exists}
+Let $k$ be a field. Let $\mathcal{T}$ be a $k$-linear
+triangulated category such that $\dim_k \Hom_\mathcal{T}(X, Y) < \infty$
+for all $X, Y \in \Ob(\mathcal{T})$. The following are equivalent
+\begin{enumerate}
+\item there exists a $k$-linear equivalence
+$S : \mathcal{T} \to \mathcal{T}$ and $k$-linear isomorphisms
+$c_{X, Y} : \Hom_\mathcal{T}(X, Y) \to \Hom_\mathcal{T}(Y, S(X))^\vee$
+functorial in $X, Y \in \Ob(\mathcal{T})$,
+\item for every $X \in \Ob(\mathcal{T})$
+the functor $Y \mapsto \Hom_\mathcal{T}(X, Y)^\vee$
+is representable and the functor $Y \mapsto \Hom_\mathcal{T}(Y, X)^\vee$
+is corepresentable.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Condition (1) implies (2) since given $(S, c)$ and $X \in \Ob(\mathcal{T})$
+the object $S(X)$ represents the functor
+$Y \mapsto \Hom_\mathcal{T}(X, Y)^\vee$ and the object $S^{-1}(X)$ corepresents
+the functor $Y \mapsto \Hom_\mathcal{T}(Y, X)^\vee$.
+
+\medskip\noindent
+Assume (2). We will repeatedly use the Yoneda lemma, see
+Categories, Lemma \ref{categories-lemma-yoneda}.
+For every $X$ denote $S(X)$ the object representing the
+functor $Y \mapsto \Hom_\mathcal{T}(X, Y)^\vee$. Given
+$\varphi : X \to X'$, we obtain a unique arrow $S(\varphi) : S(X) \to S(X')$
+determined by the corresponding transformation of functors
+$\Hom_\mathcal{T}(X, -)^\vee \to \Hom_\mathcal{T}(X', -)^\vee$.
+Thus $S$ is a functor and we obtain the isomorphisms $c_{X, Y}$
+by construction. It remains to show that $S$ is an equivalence.
+For every $X$ denote $S'(X)$ the object corepresenting the
+functor $Y \mapsto \Hom_\mathcal{T}(Y, X)^\vee$. Arguing as
+above we find that $S'$ is a functor. We claim that $S'$
+is quasi-inverse to $S$. To see this observe that
+$$
+\Hom_\mathcal{T}(X, Y) = \Hom_\mathcal{T}(Y, S(X))^\vee =
+\Hom_\mathcal{T}(S'(S(X)), Y)
+$$
+bifunctorially, i.e., we find $S' \circ S \cong \text{id}_\mathcal{T}$.
+Similarly, we have
+$$
+\Hom_\mathcal{T}(Y, X) = \Hom_\mathcal{T}(S'(X), Y)^\vee =
+\Hom_\mathcal{T}(Y, S(S'(X)))
+$$
+and we find $S \circ S' \cong \text{id}_\mathcal{T}$.
+\end{proof}
+
+\begin{definition}
+\label{definition-Serre-functor}
+Let $k$ be a field. Let $\mathcal{T}$ be a $k$-linear
+triangulated category such that $\dim_k \Hom_\mathcal{T}(X, Y) < \infty$
+for all $X, Y \in \Ob(\mathcal{T})$. We say {\it a Serre functor
+exists} if the equivalent conditions of Lemma \ref{lemma-Serre-functor-exists}
+are satisfied. In this case a {\it Serre functor} is a $k$-linear equivalence
+$S : \mathcal{T} \to \mathcal{T}$ endowed with $k$-linear isomorphisms
+$c_{X, Y} : \Hom_\mathcal{T}(X, Y) \to \Hom_\mathcal{T}(Y, S(X))^\vee$
+functorial in $X, Y \in \Ob(\mathcal{T})$.
+\end{definition}
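+
+\noindent
+A simple example to keep in mind: let $\mathcal{T}$ be the bounded
+derived category of the category of finite dimensional $k$-vector
+spaces. Then $S = \text{id}_\mathcal{T}$ is a Serre functor, with
+$c_{X, Y} : \Hom_\mathcal{T}(X, Y) \to \Hom_\mathcal{T}(Y, X)^\vee$
+given by the trace pairing $f \mapsto (g \mapsto \text{tr}(f \circ g))$;
+this pairing is perfect precisely because the Hom spaces involved are
+finite dimensional.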
+
+\begin{lemma}
+\label{lemma-Serre-functor}
+In the situation of Definition \ref{definition-Serre-functor}.
+If a Serre functor exists, then it is unique up to unique isomorphism and
+it is an exact functor of triangulated categories.
+\end{lemma}
+
+\begin{proof}
+Given a Serre functor $S$ the object $S(X)$ represents
+the functor $Y \mapsto \Hom_\mathcal{T}(X, Y)^\vee$.
+Thus the object $S(X)$ together with the functorial identification
+$\Hom_\mathcal{T}(X, Y)^\vee = \Hom_\mathcal{T}(Y, S(X))$
+is determined up to unique isomorphism by the Yoneda lemma
+(Categories, Lemma \ref{categories-lemma-yoneda}).
+Moreover, for $\varphi : X \to X'$, the arrow $S(\varphi) : S(X) \to S(X')$
+is uniquely determined by the corresponding transformation of functors
+$\Hom_\mathcal{T}(X, -)^\vee \to \Hom_\mathcal{T}(X', -)^\vee$.
+
+\medskip\noindent
+For objects $X, Y$ of $\mathcal{T}$ we have
+\begin{align*}
+\Hom(Y, S(X)[1])^\vee
+& =
+\Hom(Y[-1], S(X))^\vee \\
+& =
+\Hom(X, Y[-1]) \\
+& =
+\Hom(X[1], Y) \\
+& =
+\Hom(Y, S(X[1]))^\vee
+\end{align*}
+By the Yoneda lemma we conclude that there is a unique isomorphism
+$S(X[1]) \to S(X)[1]$ inducing the isomorphism from top left to bottom right.
+Since each of the isomorphisms above is functorial in both $X$ and $Y$
+we find that this defines an isomorphism of functors
+$S \circ [1] \to [1] \circ S$.
+
+\medskip\noindent
+Let $(A, B, C, f, g, h)$ be a distinguished triangle in $\mathcal{T}$.
+We have to show that the triangle $(S(A), S(B), S(C), S(f), S(g), S(h))$
+is distinguished. Here we use the canonical isomorphism $S(A[1]) \to S(A)[1]$
+constructed above to identify the target $S(A[1])$ of $S(h)$ with $S(A)[1]$.
+We first observe that for any $X$ in $\mathcal{T}$
+the triangle $(S(A), S(B), S(C), S(f), S(g), S(h))$ induces
+a long exact sequence
+$$
+\ldots \to
+\Hom(X, S(A)) \to
+\Hom(X, S(B)) \to
+\Hom(X, S(C)) \to
+\Hom(X, S(A)[1]) \to \ldots
+$$
+of finite dimensional $k$-vector spaces. Namely, this sequence is
+$k$-linear dual of the sequence
+$$
+\ldots \leftarrow
+\Hom(A, X) \leftarrow
+\Hom(B, X) \leftarrow
+\Hom(C, X) \leftarrow
+\Hom(A[1], X) \leftarrow
+\ldots
+$$
+which is exact by Derived Categories, Lemma
+\ref{derived-lemma-representable-homological}.
+Next, we choose a distinguished triangle $(S(A), E, S(C), i, p, S(h))$
+which is possible by axioms TR1 and TR2. We want to construct the dotted
+arrow making following diagram commute
+$$
+\xymatrix{
+S(C)[-1] \ar[r]_-{S(h[-1])} &
+S(A) \ar[r]_{S(f)} &
+S(B) \ar[r]_{S(g)} &
+S(C) \ar[r]_{S(h)} &
+S(A)[1] \\
+S(C)[-1] \ar[r]^-{S(h[-1])} \ar@{=}[u] &
+S(A) \ar[r]^i \ar@{=}[u] &
+E \ar[r]^p \ar@{..>}[u]^\varphi &
+S(C) \ar[r]^{S(h)} \ar@{=}[u] &
+S(A)[1] \ar@{=}[u]
+}
+$$
Namely, if we have $\varphi$, then we claim that for any $X$ the resulting
map $\Hom(X, E) \to \Hom(X, S(B))$ will be an isomorphism of $k$-vector
spaces. Indeed, we will obtain a commutative diagram
+$$
+\xymatrix{
+\Hom(X, S(C)[-1]) \ar[r] &
+\Hom(X, S(A)) \ar[r] &
+\Hom(X, S(B)) \ar[r] &
+\Hom(X, S(C)) \ar[r] &
+\Hom(X, S(A)[1]) \\
+\Hom(X, S(C)[-1]) \ar[r] \ar@{=}[u] &
+\Hom(X, S(A)) \ar[r] \ar@{=}[u] &
+\Hom(X, E) \ar[r] \ar[u]^\varphi &
+\Hom(X, S(C)) \ar[r] \ar@{=}[u] &
+\Hom(X, S(A)[1]) \ar@{=}[u]
+}
+$$
+with exact rows (see above) and we can apply the 5 lemma
+(Homology, Lemma \ref{homology-lemma-five-lemma}) to see
+that the middle arrow is an isomorphism. By the Yoneda lemma
+we conclude that $\varphi$ is an isomorphism.
+To find $\varphi$ consider the following diagram
+$$
+\xymatrix{
+\Hom(E, S(C)) \ar[r] &
+\Hom(S(A), S(C)) \\
+\Hom(E, S(B)) \ar[u] \ar[r] &
+\Hom(S(A), S(B)) \ar[u]
+}
+$$
+The elements $p$ and $S(f)$ in positions $(0, 1)$ and
+$(1, 0)$ define a cohomology class $\xi$ in the total complex
of this double complex. The existence of $\varphi$ is
equivalent to the vanishing of $\xi$. If we take $k$-linear duals
+of this and we use the defining property of $S$ we obtain
+$$
+\xymatrix{
+\Hom(C, E) \ar[d] &
+\Hom(C, S(A)) \ar[l] \ar[d] \\
+\Hom(B, E) &
+\Hom(B, S(A)) \ar[l]
+}
+$$
+Since both $A \to B \to C$ and $S(A) \to E \to S(C)$ are distinguished
+triangles, we know by TR3 that given elements $\alpha \in \Hom(C, E)$
+and $\beta \in \Hom(B, S(A))$ mapping to the same element in
+$\Hom(B, E)$, there exists an element in $\Hom(C, S(A))$ mapping
+to both $\alpha$ and $\beta$. In other words, the cohomology of
+the total complex associated to this double complex is zero in degree
+$1$, i.e., the degree corresponding to $\Hom(C, E) \oplus \Hom(B, S(A))$.
Taking duals, the same must be true for the previous double complex,
which concludes the proof.
+\end{proof}
+
+
+
+
+
+
+\section{Examples of Serre functors}
+\label{section-examples-Serre-functors}
+
+\noindent
+The lemma below is the standard example.
+
+\begin{lemma}
+\label{lemma-Serre-functor-Gorenstein-proper}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ which is Gorenstein.
+Consider the complex $\omega_X^\bullet$ of
Duality for Schemes, Lemma \ref{duality-lemma-duality-proper-over-field}.
+Then the functor
+$$
+S : D_{perf}(\mathcal{O}_X) \longrightarrow D_{perf}(\mathcal{O}_X),\quad
+K \longmapsto S(K) = \omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} K
+$$
+is a Serre functor.
+\end{lemma}
+
+\begin{proof}
The statement makes sense because $\dim \Hom_X(K, L) < \infty$
+for $K, L \in D_{perf}(\mathcal{O}_X)$ by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-ext-finite}.
+Since $X$ is Gorenstein the dualizing complex $\omega_X^\bullet$
+is an invertible object of $D(\mathcal{O}_X)$, see
+Duality for Schemes, Lemma \ref{duality-lemma-gorenstein}.
+In particular, locally on $X$ the complex $\omega_X^\bullet$
+has one nonzero cohomology sheaf which is an invertible module, see
+Cohomology, Lemma \ref{cohomology-lemma-invertible-derived}.
+Thus $S(K)$ lies in $D_{perf}(\mathcal{O}_X)$.
+On the other hand, the invertibility of $\omega_X^\bullet$
+clearly implies that $S$ is a self-equivalence of $D_{perf}(\mathcal{O}_X)$.
+Finally, we have to find an isomorphism
+$$
+c_{K, L} : \Hom_X(K, L) \longrightarrow
+\Hom_X(L, \omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} K)^\vee
+$$
+bifunctorially in $K, L$. To do this we use the canonical isomorphisms
+$$
+\Hom_X(K, L) = H^0(X, L \otimes_{\mathcal{O}_X}^\mathbf{L} K^\vee)
+$$
+and
+$$
+\Hom_X(L, \omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} K) =
+H^0(X,
+\omega_X^\bullet \otimes_{\mathcal{O}_X}^\mathbf{L} K
+\otimes_{\mathcal{O}_X}^\mathbf{L} L^\vee)
+$$
+given in Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+Since $(L \otimes_{\mathcal{O}_X}^\mathbf{L} K^\vee)^\vee =
+(K^\vee)^\vee \otimes_{\mathcal{O}_X}^\mathbf{L} L^\vee$
+and since there is a canonical isomorphism $K \to (K^\vee)^\vee$
+we find these $k$-vector spaces are canonically dual by
+Duality for Schemes, Lemma
+\ref{duality-lemma-duality-proper-over-field-perfect}.
+This produces the isomorphisms $c_{K, L}$.
+We omit the proof that these isomorphisms are functorial.
+\end{proof}
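\medskip\noindent
When $X$ is in addition smooth over $k$ of pure dimension $d$, the dualizing
complex is $\omega_X^\bullet = \omega_X[d]$ where
$\omega_X = \wedge^d \Omega_{X/k}$ is invertible, and the Serre functor of
the lemma takes the familiar classical shape; the following display sketches
the resulting form of the duality.

```latex
% Serre functor for X smooth and proper over k of pure dimension d:
% S(K) = omega_X^bullet (x)^L K = (omega_X (x)^L K)[d], so the defining
% property of S becomes the classical form of Serre duality.
S(K) = (\omega_X \otimes_{\mathcal{O}_X}^\mathbf{L} K)[d],
\qquad
\Hom_X(K, L) \cong
\Hom_X(L, (\omega_X \otimes_{\mathcal{O}_X}^\mathbf{L} K)[d])^\vee
```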
+
+
+
+
+
+\section{Characterizing coherent modules}
+\label{section-coherent}
+
+\noindent
+This section is in some sense a continuation of the discussion
+in Derived Categories of Schemes, Section \ref{perfect-section-pseudo-coherent}
+and More on Morphisms, Section
+\ref{more-morphisms-section-characterize-pseudo-coherent}.
+
+\medskip\noindent
+Before we can state the result we need some notation.
+Let $k$ be a field. Let $n \geq 0$ be an integer.
+Let $S = k[X_0, \ldots, X_n]$. For an integer $e$ denote
+$S_e \subset S$ the homogeneous polynomials of degree $e$.
+Consider the (noncommutative) $k$-algebra
+$$
+R =
+\left(
+\begin{matrix}
+S_0 & S_1 & S_2 & \ldots & \ldots \\
+0 & S_0 & S_1 & \ldots & \ldots\\
+0 & 0 & S_0 & \ldots & \ldots \\
+\ldots & \ldots & \ldots & \ldots & \ldots \\
+0 & \ldots & \ldots & \ldots & S_0
+\end{matrix}
+\right)
+$$
+(with $n + 1$ rows and columns) with obvious multiplication and addition.
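\medskip\noindent
For instance, when $n = 1$ we have $S = k[X_0, X_1]$, so that
$\dim_k S_0 = 1$ and $\dim_k S_1 = 2$, and the algebra $R$ can be written
out completely:

```latex
% The algebra R for n = 1, where S = k[X_0, X_1].
% The entry S_1 = k X_0 + k X_1 is 2-dimensional, hence dim_k R = 1 + 2 + 1 = 4.
R =
\left(
\begin{matrix}
S_0 & S_1 \\
0 & S_0
\end{matrix}
\right),
\qquad
\dim_k R = \dim_k S_0 + \dim_k S_1 + \dim_k S_0 = 4
```

In particular $\dim_k R < \infty$ for every $n$, which is the finiteness
used in the proof of the next lemma.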
+
+\begin{lemma}
+\label{lemma-perfect-for-R}
+With $k$, $n$, and $R$ as above, for an object $K$ of $D(R)$
+the following are equivalent
+\begin{enumerate}
+\item $\sum_{i \in \mathbf{Z}} \dim_k H^i(K) < \infty$, and
+\item $K$ is a compact object.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $K$ is a compact object, then $K$ can be represented by a complex
+$M^\bullet$ which is finite projective as a graded $R$-module, see
+Differential Graded Algebra, Lemma \ref{dga-lemma-compact}.
+Since $\dim_k R < \infty$ we conclude $\sum \dim_k M^i < \infty$
+and a fortiori $\sum \dim_k H^i(M^\bullet) < \infty$.
+(One can also easily deduce this implication from the easier
+Differential Graded Algebra, Proposition \ref{dga-proposition-compact}.)
+
+\medskip\noindent
+Assume $K$ satisfies (1). Consider the distinguished triangle
of truncations $\tau_{\leq m}K \to K \to \tau_{\geq m + 1}K$, see
+Derived Categories, Remark
+\ref{derived-remark-truncation-distinguished-triangle}.
+It is clear that both $\tau_{\leq m}K$ and $\tau_{\geq m + 1} K$
+satisfy (1). If we can show both are compact, then so is $K$, see
+Derived Categories, Lemma \ref{derived-lemma-compact-objects-subcategory}.
+Hence, arguing on the number of nonzero cohomology modules of $K$
+we may assume $H^i(K)$ is nonzero only for one $i$.
+Shifting, we may assume $K$ is given by the complex
+consisting of a single finite dimensional $R$-module $M$ sitting
+in degree $0$.
+
+\medskip\noindent
+Since $\dim_k(M) < \infty$ we see that $M$ is Artinian as an $R$-module.
+Thus it suffices to show that every simple $R$-module represents a
+compact object of $D(R)$. Observe that
+$$
+I =
+\left(
+\begin{matrix}
+0 & S_1 & S_2 & \ldots & \ldots \\
+0 & 0 & S_1 & \ldots & \ldots\\
+0 & 0 & 0 & \ldots & \ldots \\
+\ldots & \ldots & \ldots & \ldots & \ldots \\
+0 & \ldots & \ldots & \ldots & 0
+\end{matrix}
+\right)
+$$
is a nilpotent two-sided ideal of $R$ and that $R/I$
+is a commutative $k$-algebra isomorphic to a product of $n + 1$ copies of
+$k$ (placed along the diagonal in the matrix, i.e., $R/I$ can be lifted
+to a $k$-subalgebra of $R$). It follows that $R$ has exactly $n + 1$
+isomorphism classes of simple modules $M_0, \ldots, M_n$ (sitting along
+the diagonal). Consider the right $R$-module $P_i$ of row vectors
+$$
+P_i =
+\left(
+\begin{matrix}
+0 &
+\ldots &
+0 &
+S_0 &
+\ldots &
+S_{i - 1} &
+S_i
+\end{matrix}
+\right)
+$$
+with obvious multiplication $P_i \times R \to P_i$. Then we see that
+$R \cong P_0 \oplus \ldots \oplus P_n$ as a right $R$-module. Since clearly
+$R$ is a compact object of $D(R)$, we conclude each $P_i$ is a compact
+object of $D(R)$. (We of course also conclude each $P_i$ is projective
+as an $R$-module, but this isn't what we have to show in this proof.)
+Clearly, $P_0 = M_0$ is the first of our simple $R$-modules.
+For $P_1$ we have a short exact sequence
+$$
+0 \to P_0^{\oplus n + 1} \to P_1 \to M_1 \to 0
+$$
+which proves that $M_1$ fits into a distinguished triangle whose
+other members are compact objects and hence $M_1$ is a compact
+object of $D(R)$. More generally, there exists a short exact sequence
+$$
+0 \to C_i \to P_i \to M_i \to 0
+$$
+where $C_i$ is a finite dimensional $R$-module whose simple constituents
+are isomorphic to $M_j$ for $j < i$. By induction, we first conclude that
+$C_i$ determines a compact object of $D(R)$ whereupon we conclude that $M_i$
+does too as desired.
+\end{proof}
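\medskip\noindent
As a sanity check on the displayed sequence for $P_1$, one can count
dimensions: the kernel of $P_1 \to M_1$ is the row module
$(0, \ldots, 0, 0, S_1)$, which decomposes as a direct sum of $n + 1$
copies of $P_0$, one for each of the variables $X_0, \ldots, X_n$
spanning $S_1$.

```latex
% Dimension count for 0 -> P_0^{(n + 1)} -> P_1 -> M_1 -> 0:
% dim_k P_1 = dim_k S_0 + dim_k S_1 = 1 + (n + 1) and dim_k M_1 = 1,
% so the kernel has dimension n + 1 = (n + 1) dim_k P_0 as expected.
\dim_k P_1 = \dim_k S_0 + \dim_k S_1 = n + 2,
\qquad
\dim_k \Ker(P_1 \to M_1) = n + 1 = (n + 1)\dim_k P_0
```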
+
+\begin{lemma}
+\label{lemma-coherent-on-projective-space}
+Let $k$ be a field. Let $n \geq 0$. Let
+$K \in D_\QCoh(\mathcal{O}_{\mathbf{P}^n_k})$.
+The following are equivalent
+\begin{enumerate}
+\item $K$ is in $D^b_{\textit{Coh}}(\mathcal{O}_{\mathbf{P}^n_k})$,
+\item $\sum_{i \in \mathbf{Z}}
+\dim_k H^i(\mathbf{P}^n_k, E \otimes^\mathbf{L} K) < \infty$
+for each perfect object $E$ of
+$D(\mathcal{O}_{\mathbf{P}^n_k})$,
+\item $\sum_{i \in \mathbf{Z}}
+\dim_k \Ext^i_{\mathbf{P}^n_k}(E, K) < \infty$
+for each perfect object $E$ of $D(\mathcal{O}_{\mathbf{P}^n_k})$,
+\item $\sum_{i \in \mathbf{Z}} \dim_k H^i(\mathbf{P}^n_k,
+K \otimes^\mathbf{L} \mathcal{O}_{\mathbf{P}^n_k}(d)) < \infty$
+for $d = 0, 1, \ldots, n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (2) and (3) are equivalent by
+Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+If (1) is true, then for $E$ perfect the derived tensor product
+$E \otimes^\mathbf{L} K$ is in
+$D^b_{\textit{Coh}}(\mathcal{O}_{\mathbf{P}^n_k})$
+and we see that (2) holds by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-direct-image-coherent}.
+It is clear that (2) implies (4) as $\mathcal{O}_{\mathbf{P}^n_k}(d)$
+can be viewed
+as a perfect object of the derived category of $\mathbf{P}^n_k$.
+Thus it suffices to prove that (4) implies (1).
+
+\medskip\noindent
+Assume (4). Let $R$ be as in Lemma \ref{lemma-perfect-for-R}.
+Let $P = \bigoplus_{d = 0, \ldots, n} \mathcal{O}_{\mathbf{P}^n_k}(-d)$.
+Recall that $R = \text{End}_{\mathbf{P}^n_k}(P)$ whereas all other
+self-Exts of $P$ are zero and that $P$ determines an equivalence
+$- \otimes^\mathbf{L} P : D(R) \to D_\QCoh(\mathcal{O}_{\mathbf{P}^n_k})$
+by Derived Categories of Schemes, Lemma \ref{perfect-lemma-Pn-module-category}.
+Say $K$ corresponds to $L$ in $D(R)$. Then
+\begin{align*}
+H^i(L)
+& =
+\Ext^i_{D(R)}(R, L) \\
+& =
+\Ext^i_{\mathbf{P}^n_k}(P, K) \\
+& =
H^i(\mathbf{P}^n_k, K \otimes^\mathbf{L} P^\vee) \\
& =
\bigoplus\nolimits_{d = 0, \ldots, n}
H^i(\mathbf{P}^n_k, K \otimes^\mathbf{L} \mathcal{O}(d))
+\end{align*}
+by Differential Graded Algebra, Lemma
+\ref{dga-lemma-upgrade-tensor-with-complex-derived}
+(and the fact that $- \otimes^\mathbf{L} P$ is an equivalence)
+and Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+Thus our assumption (4) implies that $L$ satisfies condition (2) of
+Lemma \ref{lemma-perfect-for-R} and hence is a compact object of $D(R)$.
+Therefore $K$ is a compact object of
+$D_\QCoh(\mathcal{O}_{\mathbf{P}^n_k})$.
+Thus $K$ is perfect by
+Derived Categories of Schemes, Proposition
+\ref{perfect-proposition-compact-is-perfect}.
+Since $D_{perf}(\mathcal{O}_{\mathbf{P}^n_k}) =
+D^b_{\textit{Coh}}(\mathcal{O}_{\mathbf{P}^n_k})$
+by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-on-regular}
+we conclude (1) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finiteness}
+Let $X$ be a scheme proper over a field $k$. Let
+$K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$ and let $E$ in $D(\mathcal{O}_X)$
+be perfect. Then
+$\sum_{i \in \mathbf{Z}} \dim_k \Ext^i_X(E, K) < \infty$.
+\end{lemma}
+
+\begin{proof}
+This follows for example by combining
+Derived Categories of Schemes, Lemmas \ref{perfect-lemma-ext-finite} and
+\ref{perfect-lemma-ext-from-perfect-into-bounded-QCoh}.
+Alternative proof: combine
+Derived Categories of Schemes, Lemmas
+\ref{perfect-lemma-perfect-on-noetherian} and
+\ref{perfect-lemma-direct-image-coherent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-dbcoh-projective}
+\begin{reference}
+\cite[Lemma 7.46]{Rouquier-dimensions} and implicit in
+\cite[Theorem A.1]{BvdB}
+\end{reference}
+Let $X$ be a projective scheme over a field $k$. Let
+$K \in \Ob(D_\QCoh(\mathcal{O}_X))$. The following are equivalent
+\begin{enumerate}
+\item $K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$, and
+\item $\sum_{i \in \mathbf{Z}} \dim_k \Ext^i_X(E, K) < \infty$
+for all perfect $E$ in $D(\mathcal{O}_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) follows from
+Lemma \ref{lemma-finiteness}.
+
+\medskip\noindent
+Assume (2).
+Choose a closed immersion $i : X \to \mathbf{P}^n_k$. It suffices to show
that $Ri_*K$ is in $D^b_{\textit{Coh}}(\mathcal{O}_{\mathbf{P}^n_k})$ since a quasi-coherent
+module $\mathcal{F}$ on $X$ is coherent, resp.\ zero if and only if
+$i_*\mathcal{F}$ is coherent, resp.\ zero. For a perfect object $E$
+of $D(\mathcal{O}_{\mathbf{P}^n_k})$, $Li^*E$ is a perfect object of
+$D(\mathcal{O}_X)$ and
+$$
+\Ext^q_{\mathbf{P}^n_k}(E, Ri_*K) = \Ext^q_X(Li^*E, K)
+$$
+Hence by our assumption we see that
+$\sum_{q \in \mathbf{Z}} \dim_k \Ext^q_{\mathbf{P}^n_k}(E, Ri_*K) < \infty$.
+We conclude by Lemma \ref{lemma-coherent-on-projective-space}.
+\end{proof}
+
+
+
+
+
+\section{A representability theorem}
+\label{section-bondal-van-den-bergh}
+
+\noindent
+The material in this section is taken from \cite{BvdB}.
+
+\medskip\noindent
+Let $\mathcal{T}$ be a $k$-linear triangulated category.
+In this section we consider $k$-linear cohomological functors
+$H$ from $\mathcal{T}$ to the category of $k$-vector spaces.
+This will mean $H$ is a functor
+$$
+H : \mathcal{T}^{opp} \longrightarrow \text{Vect}_k
+$$
+which is $k$-linear such that for any distinguished triangle
+$X \to Y \to Z$ in $\mathcal{T}$ the sequence $H(Z) \to H(Y) \to H(X)$
+is an exact sequence of $k$-vector spaces. See
+Derived Categories, Definition \ref{derived-definition-homological}
+and Differential Graded Algebra, Section \ref{dga-section-linear}.
+
+\begin{lemma}
+\label{lemma-maps-from-compact-filtered}
+Let $\mathcal{D}$ be a triangulated category. Let
+$\mathcal{D}' \subset \mathcal{D}$ be a full triangulated subcategory. Let
+$X \in \Ob(\mathcal{D})$. The category of arrows $E \to X$ with
+$E \in \Ob(\mathcal{D}')$ is filtered.
+\end{lemma}
+
+\begin{proof}
+We check the conditions of
+Categories, Definition \ref{categories-definition-directed}.
+The category is nonempty because it contains $0 \to X$.
+If $E_i \to X$, $i = 1, 2$ are objects, then $E_1 \oplus E_2 \to X$
+is an object and there are morphisms $(E_i \to X) \to (E_1 \oplus E_2 \to X)$.
+Finally, suppose that $a, b : (E \to X) \to (E' \to X)$ are morphisms.
+Choose a distinguished triangle $E \xrightarrow{a - b} E' \to E''$
+in $\mathcal{D}'$. By Axiom TR3 we obtain a morphism of triangles
+$$
+\xymatrix{
+E \ar[r]_{a - b} \ar[d] &
+E' \ar[d] \ar[r] & E'' \ar[d] \\
+0 \ar[r] &
+X \ar[r] &
+X
+}
+$$
+and we find that the resulting arrow $(E' \to X) \to (E'' \to X)$
+equalizes $a$ and $b$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-van-den-bergh}
+\begin{reference}
+\cite[Lemma 2.14]{CKN}
+\end{reference}
+Let $k$ be a field. Let $\mathcal{D}$ be a $k$-linear triangulated category
+which has direct sums and is compactly generated.
+Denote $\mathcal{D}_c$ the full
+subcategory of compact objects. Let $H : \mathcal{D}_c^{opp} \to \text{Vect}_k$
+be a $k$-linear cohomological functor such that
+$\dim_k H(X) < \infty$ for all $X \in \Ob(\mathcal{D}_c)$.
+Then $H$ is isomorphic to the functor $X \mapsto \Hom(X, Y)$
+for some $Y \in \Ob(\mathcal{D})$.
+\end{lemma}
+
+\begin{proof}
+We will use Derived Categories, Lemma
+\ref{derived-lemma-compact-objects-subcategory} without further mention.
+Denote $G : \mathcal{D}_c \to \text{Vect}_k$ the $k$-linear homological
+functor which sends $X$ to $H(X)^\vee$. For any object $Y$ of $\mathcal{D}$
+we set
+$$
+G'(Y) = \colim_{X \to Y, X \in \Ob(\mathcal{D}_c)} G(X)
+$$
+The colimit is filtered by Lemma \ref{lemma-maps-from-compact-filtered}.
+We claim that $G'$ is a $k$-linear homological functor,
+the restriction of $G'$ to $\mathcal{D}_c$ is $G$, and $G'$
+sends direct sums to direct sums.
+
+\medskip\noindent
+Namely, suppose that $Y_1 \to Y_2 \to Y_3$ is a distinguished triangle.
+Let $\xi \in G'(Y_2)$ map to zero in $G'(Y_3)$. Since the colimit is
filtered, $\xi$ is represented by some $X \to Y_2$ with
+$X \in \Ob(\mathcal{D}_c)$ and $g \in G(X)$.
+The fact that $\xi$ maps to zero in $G'(Y_3)$ means the composition
$X \to Y_2 \to Y_3$ factors as $X \to X' \to Y_3$ with $X' \in \Ob(\mathcal{D}_c)$
+and $g$ mapping to zero in $G(X')$. Choose a distinguished
+triangle $X'' \to X \to X'$. Then $X'' \in \Ob(\mathcal{D}_c)$.
+Since $G$ is homological we find that $g$ is the image of some
$g'' \in G(X'')$. By Axiom TR3 the maps $X \to Y_2$ and $X' \to Y_3$ fit into
+a morphism of distinguished triangles
+$(X'' \to X \to X') \to (Y_1 \to Y_2 \to Y_3)$
+and we find that indeed $\xi$ is the image of the
+element of $G'(Y_1)$ represented by $X'' \to Y_1$ and $g'' \in G(X'')$.
+
+\medskip\noindent
+If $Y \in \Ob(\mathcal{D}_c)$, then $\text{id} : Y \to Y$ is the final
+object in the category of arrows $X \to Y$ with $X \in \Ob(\mathcal{D}_c)$.
+Hence we see that $G'(Y) = G(Y)$ in this case and the
+statement on restriction holds. Let $Y = \bigoplus_{i \in I} Y_i$
+be a direct sum. Let $a : X \to Y$ with $X \in \Ob(\mathcal{D}_c)$
+and $g \in G(X)$ represent an element $\xi$ of $G'(Y)$.
+The morphism $a : X \to Y$ can be uniquely written as a sum of morphisms
+$a_i : X \to Y_i$ almost all zero as $X$ is a compact object of $\mathcal{D}$.
+Let $I' = \{i \in I \mid a_i \not = 0\}$. Then we can factor
+$a$ as the composition
+$$
+X \xrightarrow{(1, \ldots, 1)}
+\bigoplus\nolimits_{i \in I'} X
+\xrightarrow{\bigoplus_{i \in I'} a_i}
+\bigoplus\nolimits_{i \in I} Y_i = Y
+$$
+We conclude that $\xi = \sum_{i \in I'} \xi_i$
+is the sum of the images of the elements
+$\xi_i \in G'(Y_i)$ corresponding to $a_i : X \to Y_i$
+and $g \in G(X)$. Hence $\bigoplus G'(Y_i) \to G'(Y)$
+is surjective. We omit the (trivial) verification that it is injective.
+
+\medskip\noindent
+It follows that the functor $Y \mapsto G'(Y)^\vee$ is cohomological
+and sends direct sums to direct products. Hence by Brown representability,
+see Derived Categories, Proposition \ref{derived-proposition-brown}
+we conclude that there exists a $Y \in \Ob(\mathcal{D})$
+and an isomorphism
+$G'(Z)^\vee = \Hom(Z, Y)$ functorially in $Z$.
+For $X \in \Ob(\mathcal{D}_c)$ we have
+$G'(X)^\vee = G(X)^\vee = (H(X)^\vee)^\vee = H(X)$
+because $\dim_k H(X) < \infty$ and the proof is complete.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-bondal-van-den-bergh}
+\begin{reference}
+\cite[Theorem A.1]{BvdB}
+\end{reference}
+Let $X$ be a projective scheme over a field $k$.
+Let $F : D_{perf}(\mathcal{O}_X)^{opp} \to \text{Vect}_k$
+be a $k$-linear cohomological functor such that
+$$
+\sum\nolimits_{n \in \mathbf{Z}} \dim_k F(E[n]) < \infty
+$$
+for all $E \in D_{perf}(\mathcal{O}_X)$. Then $F$ is isomorphic to a functor
+of the form $E \mapsto \Hom_X(E, K)$ for some
+$K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+\end{theorem}
+
+\begin{proof}
+The derived category $D_\QCoh(\mathcal{O}_X)$ has direct sums,
+is compactly generated, and $D_{perf}(\mathcal{O}_X)$ is the full subcategory
+of compact objects, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-direct-sums},
+Theorem \ref{perfect-theorem-bondal-van-den-Bergh}, and
+Proposition \ref{perfect-proposition-compact-is-perfect}.
+By Lemma \ref{lemma-van-den-bergh} we may assume
+$F(E) = \Hom_X(E, K)$ for some $K \in \Ob(D_\QCoh(\mathcal{O}_X))$.
+Then it follows that $K$ is in $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+by Lemma \ref{lemma-characterize-dbcoh-projective}.
+\end{proof}
+
+
+
+
+\section{Representability in the regular proper case}
+\label{section-regular-proper}
+
+\noindent
+Theorem \ref{theorem-bondal-van-den-bergh}
+also holds for regular (for example smooth) proper varieties. This
+is proven in \cite{BvdB} using their general characterization
+of ``right saturated'' $k$-linear triangulated categories. In this
+section we give a quick and dirty proof of this result using a little
+bit of duality.
+
+\begin{lemma}
+\label{lemma-trace-map}
+\begin{reference}
+The proof given here follows the argument given in
+\cite[Remark 3.4]{MS}
+\end{reference}
+Let $f : X' \to X$ be a proper birational morphism of integral Noetherian
+schemes with $X$ regular. The map $\mathcal{O}_X \to Rf_*\mathcal{O}_{X'}$
+canonically splits in $D(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Set $E = Rf_*\mathcal{O}_{X'}$ in $D(\mathcal{O}_X)$.
+Observe that $E$ is in $D^b_{\textit{Coh}}(\mathcal{O}_X)$ by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-direct-image-coherent}.
+By
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-on-regular}
+we find that $E$ is a perfect object of $D(\mathcal{O}_X)$.
+Since $\mathcal{O}_{X'}$ is a sheaf of algebras, we have the
+relative cup product $\mu : E \otimes_{\mathcal{O}_X}^\mathbf{L} E \to E$
+by Cohomology, Remark \ref{cohomology-remark-cup-product}.
+Let $\sigma : E \otimes E^\vee \to E^\vee \otimes E$ be the commutativity
+constraint on the symmetric monoidal category $D(\mathcal{O}_X)$
+(Cohomology, Lemma \ref{cohomology-lemma-symmetric-monoidal-derived}).
+Denote $\eta : \mathcal{O}_X \to E \otimes E^\vee$ and
+$\epsilon : E^\vee \otimes E \to \mathcal{O}_X$ the maps
+constructed in Cohomology, Example \ref{cohomology-example-dual-derived}.
+Then we can consider the map
+$$
+E \xrightarrow{\eta \otimes 1} E \otimes E^\vee \otimes E
+\xrightarrow{\sigma \otimes 1} E^\vee \otimes E \otimes E
+\xrightarrow{1 \otimes \mu} E^\vee \otimes E
+\xrightarrow{\epsilon} \mathcal{O}_X
+$$
We claim that this map is a one-sided inverse to the map in the
+statement of the lemma. To see this it suffices to show that
+the composition $\mathcal{O}_X \to \mathcal{O}_X$ is the identity
+map. This we may do in the generic point of $X$ (or on an open
+subscheme of $X$ over which $f$ is an isomorphism). In this
+case $E = \mathcal{O}_X$ and $\mu$ is the usual multiplication map
+and the result is clear.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-dbcoh-proper-regular}
+Let $X$ be a proper scheme over a field $k$ which is regular. Let
+$K \in \Ob(D_\QCoh(\mathcal{O}_X))$. The following are equivalent
+\begin{enumerate}
+\item $K \in D^b_{\textit{Coh}}(\mathcal{O}_X) = D_{perf}(\mathcal{O}_X)$, and
+\item $\sum_{i \in \mathbf{Z}} \dim_k \Ext^i_X(E, K) < \infty$
+for all perfect $E$ in $D(\mathcal{O}_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equality in (1) holds by Derived Categories of Schemes,
+Lemma \ref{perfect-lemma-perfect-on-regular}.
+The implication (1) $\Rightarrow$ (2) follows from
+Lemma \ref{lemma-finiteness}.
+The implication (2) $\Rightarrow$ (1) follows from
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-characterize-relatively-perfect}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bondal-van-den-bergh}
+Let $X$ be a proper scheme over a field $k$ which is regular.
+\begin{enumerate}
+\item Let $F : D_{perf}(\mathcal{O}_X)^{opp} \to \text{Vect}_k$
+be a $k$-linear cohomological functor such that
+$$
+\sum\nolimits_{n \in \mathbf{Z}} \dim_k F(E[n]) < \infty
+$$
+for all $E \in D_{perf}(\mathcal{O}_X)$. Then $F$ is isomorphic to a functor
+of the form $E \mapsto \Hom_X(E, K)$ for some $K \in D_{perf}(\mathcal{O}_X)$.
+\item Let $G : D_{perf}(\mathcal{O}_X) \to \text{Vect}_k$
+be a $k$-linear homological functor such that
+$$
+\sum\nolimits_{n \in \mathbf{Z}} \dim_k G(E[n]) < \infty
+$$
+for all $E \in D_{perf}(\mathcal{O}_X)$. Then $G$ is isomorphic to a functor
+of the form $E \mapsto \Hom_X(K, E)$ for some $K \in D_{perf}(\mathcal{O}_X)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). The derived category $D_\QCoh(\mathcal{O}_X)$ has direct sums,
+is compactly generated, and $D_{perf}(\mathcal{O}_X)$ is the full subcategory
+of compact objects, see
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-direct-sums},
+Theorem \ref{perfect-theorem-bondal-van-den-Bergh}, and
+Proposition \ref{perfect-proposition-compact-is-perfect}.
+By Lemma \ref{lemma-van-den-bergh} we may assume
+$F(E) = \Hom_X(E, K)$ for some $K \in \Ob(D_\QCoh(\mathcal{O}_X))$.
+Then it follows that $K$ is in $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+by Lemma \ref{lemma-characterize-dbcoh-proper-regular}.
+
+\medskip\noindent
+Proof of (2). Consider the contravariant functor $E \mapsto E^\vee$
+on $D_{perf}(\mathcal{O}_X)$, see
+Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+This functor is an exact anti-self-equivalence of $D_{perf}(\mathcal{O}_X)$.
+Hence we may apply part (1) to the functor $F(E) = G(E^\vee)$ to find
+$K \in D_{perf}(\mathcal{O}_X)$ such that $G(E^\vee) = \Hom_X(E, K)$.
+It follows that $G(E) = \Hom_X(E^\vee, K) = \Hom_X(K^\vee, E)$
+and we conclude that taking $K^\vee$ works.
+\end{proof}
+
+
+
+
+
+
+\section{Existence of adjoints}
+\label{section-adjoints}
+
+\noindent
+As a consequence of the results in the paper of Bondal and van den Bergh
+we get the following automatic existence of adjoints.
+
+\begin{lemma}
+\label{lemma-always-right-adjoints}
+Let $k$ be a field. Let $X$ and $Y$ be proper schemes over $k$.
If $X$ is regular, then any $k$-linear exact functor
+$F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+has an exact right adjoint and an exact left adjoint.
+\end{lemma}
+
+\begin{proof}
+If an adjoint exists it is an exact functor by the very general
+Derived Categories, Lemma \ref{derived-lemma-adjoint-is-exact}.
+
+\medskip\noindent
+Let us prove the existence of a right adjoint.
+To see existence, it suffices to show that for
+$M \in D_{perf}(\mathcal{O}_Y)$ the contravariant functor
+$K \mapsto \Hom_Y(F(K), M)$ is representable.
+This functor is contravariant, $k$-linear, and cohomological.
+Hence by Lemma \ref{lemma-bondal-van-den-bergh} part (1)
+it suffices to show that
+$$
+\sum\nolimits_{i \in \mathbf{Z}} \dim_k \Ext^i_Y(F(K), M) < \infty
+$$
+This follows from Lemma \ref{lemma-finiteness}.
+
+\medskip\noindent
+For the existence of the left adjoint we argue in the same
+manner using part (2) of Lemma \ref{lemma-bondal-van-den-bergh}.
+\end{proof}
+
+
+
+
+
+
+\section{Fourier-Mukai functors}
+\label{section-fourier-mukai}
+
+\noindent
+These functors were first introduced in \cite{Mukai}.
+
+\begin{definition}
+\label{definition-fourier-mukai-functor}
+Let $S$ be a scheme. Let $X$ and $Y$ be schemes over $S$.
+Let $K \in D(\mathcal{O}_{X \times_S Y})$. The exact functor
+$$
+\Phi_K : D(\mathcal{O}_X) \longrightarrow D(\mathcal{O}_Y),\quad
+M \longmapsto R\text{pr}_{2, *}(
+L\text{pr}_1^*M \otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L} K)
+$$
+of triangulated categories is called a {\it Fourier-Mukai functor}
+and $K$ is called a {\it Fourier-Mukai kernel} for this functor.
+Moreover,
+\begin{enumerate}
+\item if $\Phi_K$ sends $D_\QCoh(\mathcal{O}_X)$ into $D_\QCoh(\mathcal{O}_Y)$
+then the resulting exact functor
+$\Phi_K : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+is called a Fourier-Mukai functor,
+\item if $\Phi_K$ sends $D_{perf}(\mathcal{O}_X)$ into
+$D_{perf}(\mathcal{O}_Y)$ then the resulting exact functor
+$\Phi_K : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+is called a Fourier-Mukai functor, and
+\item if $X$ and $Y$ are Noetherian and $\Phi_K$ sends
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$ into $D^b_{\textit{Coh}}(\mathcal{O}_Y)$
+then the resulting exact functor
+$\Phi_K : D^b_{\textit{Coh}}(\mathcal{O}_X) \to
+D^b_{\textit{Coh}}(\mathcal{O}_Y)$
+is called a Fourier-Mukai functor.
+Similarly for $D_{\textit{Coh}}$, $D^+_{\textit{Coh}}$, $D^-_{\textit{Coh}}$.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-fourier-Mukai-QCoh}
+Let $S$ be a scheme. Let $X$ and $Y$ be schemes over $S$.
+Let $K \in D(\mathcal{O}_{X \times_S Y})$.
+The corresponding Fourier-Mukai functor $\Phi_K$ sends
+$D_\QCoh(\mathcal{O}_X)$ into $D_\QCoh(\mathcal{O}_Y)$
+if $K$ is in $D_\QCoh(\mathcal{O}_{X \times_S Y})$ and $X \to S$ is
+quasi-compact and quasi-separated.
+\end{lemma}
+
+\begin{proof}
+This follows from the fact that derived pullback preserves
+$D_\QCoh$
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-pullback}),
+derived tensor products preserve $D_\QCoh$
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-tensor-product}),
+the projection $\text{pr}_2 : X \times_S Y \to Y$ is
+quasi-compact and quasi-separated
+(Schemes, Lemmas
+\ref{schemes-lemma-quasi-compact-preserved-base-change} and
+\ref{schemes-lemma-separated-permanence}), and
+total direct image along a quasi-separated and quasi-compact
+morphism preserves $D_\QCoh$
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-direct-image}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compose-fourier-mukai}
+Let $S$ be a scheme. Let $X, Y, Z$ be schemes over $S$. Assume
+$X \to S$, $Y \to S$, and $Z \to S$ are quasi-compact and quasi-separated.
+Let $K \in D_\QCoh(\mathcal{O}_{X \times_S Y})$.
+Let $K' \in D_\QCoh(\mathcal{O}_{Y \times_S Z})$.
+Consider the Fourier-Mukai functors
+$\Phi_K : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+and $\Phi_{K'} : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_Z)$.
+If $X$ and $Z$ are tor independent over $S$ and $Y \to S$ is flat,
+then
+$$
+\Phi_{K'} \circ \Phi_K = \Phi_{K''} :
+D_\QCoh(\mathcal{O}_X)
+\longrightarrow
+D_\QCoh(\mathcal{O}_Z)
+$$
+where
+$$
+K'' = R\text{pr}_{13, *}(
+L\text{pr}_{12}^*K
+\otimes_{\mathcal{O}_{X \times_S Y \times_S Z}}^\mathbf{L}
+L\text{pr}_{23}^*K')
+$$
+in $D_\QCoh(\mathcal{O}_{X \times_S Z})$.
+\end{lemma}
+
+\begin{proof}
+The statement makes sense by Lemma \ref{lemma-fourier-Mukai-QCoh}.
+We are going to use
+Derived Categories of Schemes, Lemmas
+\ref{perfect-lemma-quasi-coherence-pullback},
+\ref{perfect-lemma-quasi-coherence-tensor-product}, and
+\ref{perfect-lemma-quasi-coherence-direct-image}
+and Schemes, Lemmas
+\ref{schemes-lemma-quasi-compact-preserved-base-change} and
+\ref{schemes-lemma-separated-permanence}
+without further mention.
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-base-change-tor-independent}
+we see that $X \times_S Y$ and $Y \times_S Z$ are tor independent
+over $Y$. This means that we have base change for the cartesian diagram
+$$
+\xymatrix{
+X \times_S Y \times_S Z \ar[d] \ar[r] &
+Y \times_S Z \ar[d]^{p^{YZ}_Y} \\
+X \times_S Y \ar[r]^{p^{XY}_Y} & Y
+}
+$$
+for complexes with quasi-coherent cohomology sheaves, see
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-compare-base-change}.
+Abbreviating $p^* = Lp^*$, $p_* = Rp_*$ and $\otimes = \otimes^\mathbf{L}$
+we have for $M \in D_\QCoh(\mathcal{O}_X)$ the sequence of equalities
+\begin{align*}
+\Phi_{K'}(\Phi_K(M))
+& =
+p^{YZ}_{Z, *}(p^{YZ, *}_Y p^{XY}_{Y, *}(p^{XY, *}_X M \otimes K) \otimes K') \\
+& =
+p^{YZ}_{Z, *}(\text{pr}_{23, *} \text{pr}_{12}^*(p^{XY, *}_X M \otimes K)
+\otimes K') \\
+& =
+p^{YZ}_{Z, *}(\text{pr}_{23, *}(\text{pr}_1^*M \otimes \text{pr}_{12}^*K)
+\otimes K') \\
+& =
+p^{YZ}_{Z, *}(\text{pr}_{23, *}(\text{pr}_1^*M \otimes \text{pr}_{12}^*K
+\otimes \text{pr}_{23}^*K')) \\
+& =
+\text{pr}_{3, *}(\text{pr}_1^*M \otimes \text{pr}_{12}^*K
+\otimes \text{pr}_{23}^*K') \\
+& =
+p^{XZ}_{Z, *}\text{pr}_{13, *}(\text{pr}_1^*M \otimes \text{pr}_{12}^*K
+\otimes \text{pr}_{23}^*K') \\
+& =
+p^{XZ}_{Z, *} (p^{XZ, *}_X M \otimes \text{pr}_{13, *}(\text{pr}_{12}^*K
+\otimes \text{pr}_{23}^*K'))
+\end{align*}
as desired. Here we have used the remark on base change in the
second equality and we have used Derived Categories of Schemes, Lemma
\ref{perfect-lemma-cohomology-base-change} in the $4$th and
last equalities.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fourier-mukai}
+Let $S$ be a scheme. Let $X$ and $Y$ be schemes over $S$.
+Let $K \in D(\mathcal{O}_{X \times_S Y})$.
+The corresponding Fourier-Mukai functor $\Phi_K$ sends
+$D_{perf}(\mathcal{O}_X)$ into $D_{perf}(\mathcal{O}_Y)$ if at least
+one of the following conditions is satisfied:
+\begin{enumerate}
+\item $S$ is Noetherian, $X \to S$ and $Y \to S$ are of finite type,
+$K \in D^b_{\textit{Coh}}(\mathcal{O}_{X \times_S Y})$, the support of $H^i(K)$
+is proper over $Y$ for all $i$, and $K$ has finite tor dimension
+as an object of $D(\text{pr}_2^{-1}\mathcal{O}_Y)$,
+\item $X \to S$ is of finite presentation and $K$ can be represented
+by a bounded complex $\mathcal{K}^\bullet$ of finitely presented
+$\mathcal{O}_{X \times_S Y}$-modules, flat over $Y$, with support
+proper over $Y$,
+\item $X \to S$ is a proper flat morphism of finite presentation
+and $K$ is perfect,
\item $S$ is Noetherian, $X \to S$ is flat and proper, and $K$ is perfect,
+\item $X \to S$ is a proper flat morphism of finite presentation
+and $K$ is $Y$-perfect,
+\item $S$ is Noetherian, $X \to S$ is flat and proper, and $K$ is
+$Y$-perfect.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $M$ is perfect on $X$, then $L\text{pr}_1^*M$
+is perfect on $X \times_S Y$, see
+Cohomology, Lemma \ref{cohomology-lemma-perfect-pullback}.
+We will use this without further mention below.
+We will also use that if $X \to S$ is of finite type, or proper, or
+flat, or of finite presentation, then the same thing is true for
+the base change $\text{pr}_2 : X \times_S Y \to Y$, see
+Morphisms, Lemmas
+\ref{morphisms-lemma-base-change-finite-type},
+\ref{morphisms-lemma-base-change-proper},
+\ref{morphisms-lemma-base-change-flat}, and
+\ref{morphisms-lemma-base-change-finite-presentation}.
+
+\medskip\noindent
+Part (1) follows from
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-direct-image}
+combined with
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-on-noetherian}.
+
+\medskip\noindent
+Part (2) follows from
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-base-change-tensor-perfect}.
+
+\medskip\noindent
+Part (3) follows from
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}.
+
+\medskip\noindent
+Part (4) follows from part (3) and the fact that a finite type
+morphism of Noetherian schemes is of finite presentation by Morphisms, Lemma
+\ref{morphisms-lemma-noetherian-finite-type-finite-presentation}.
+
+\medskip\noindent
+Part (5) follows from
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-derived-pushforward-rel-perfect} combined with
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-perfect-relatively-perfect}.
+
+\medskip\noindent
+Part (6) follows from part (5) in the same way that part (4) follows from
+part (3).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fourier-mukai-Coh}
+Let $S$ be a Noetherian scheme. Let $X$ and $Y$ be schemes of finite type
+over $S$. Let $K \in D^b_{\textit{Coh}}(\mathcal{O}_{X \times_S Y})$.
+The corresponding Fourier-Mukai functor $\Phi_K$ sends
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$ into $D^b_{\textit{Coh}}(\mathcal{O}_Y)$
+if at least one of the following conditions is satisfied:
+\begin{enumerate}
+\item the support of $H^i(K)$ is proper over $Y$ for all $i$, and $K$
+has finite tor dimension as an object of $D(\text{pr}_1^{-1}\mathcal{O}_X)$,
+\item $K$ can be represented by a bounded complex $\mathcal{K}^\bullet$
+of coherent $\mathcal{O}_{X \times_S Y}$-modules, flat over $X$, with support
+proper over $Y$,
+\item the support of $H^i(K)$ is proper over $Y$ for all $i$
+and $X$ is a regular scheme,
+\item $K$ is perfect, the support of $H^i(K)$ is proper over $Y$ for all $i$,
+and $Y \to S$ is flat.
+\end{enumerate}
Furthermore, in each case the support condition is automatic
+if $X \to S$ is proper.
+\end{lemma}
+
+\begin{proof}
+Let $M$ be an object of $D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+In each case we will use Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-direct-image-coherent} to show that
+$$
+\Phi_K(M) = R\text{pr}_{2, *}(
+L\text{pr}_1^*M
+\otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L}
+K)
+$$
+is in $D^b_{\textit{Coh}}(\mathcal{O}_Y)$. The derived tensor product
+$L\text{pr}_1^*M \otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L} K$
+is a pseudo-coherent object of $D(\mathcal{O}_{X \times_S Y})$
+(by
+Cohomology, Lemma \ref{cohomology-lemma-pseudo-coherent-pullback},
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-identify-pseudo-coherent-noetherian}, and
+Cohomology, Lemma \ref{cohomology-lemma-tensor-pseudo-coherent})
+whence has coherent cohomology sheaves (by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-identify-pseudo-coherent-noetherian} again).
In each case the supports of the cohomology sheaves
$H^i(L\text{pr}_1^*M \otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L} K)$
are proper over $Y$, as these supports are contained in the
+union of the supports of the $H^i(K)$. Hence in each case
+it suffices to prove that this tensor product is bounded below.
+
+\medskip\noindent
+Case (1). By Cohomology, Lemma \ref{cohomology-lemma-variant-derived-pullback}
+we have
+$$
+L\text{pr}_1^*M
+\otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L}
+K
+\cong
+\text{pr}_1^{-1}M
+\otimes_{\text{pr}_1^{-1}\mathcal{O}_X}^\mathbf{L}
+K
+$$
with obvious notation. Hence the assumption on tor dimension
and the fact that $M$ has only a finite number of nonzero
cohomology sheaves imply the bound we want.
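To spell this out, say $H^i(M) = 0$ for $i \not \in [m_0, m_1]$ and say
$K$ has tor amplitude in $[s, t]$ as an object of
$D(\text{pr}_1^{-1}\mathcal{O}_X)$; the bounds $m_0, m_1, s, t$ are
notation introduced only for this remark. Then
$$
H^i\left(\text{pr}_1^{-1}M
\otimes_{\text{pr}_1^{-1}\mathcal{O}_X}^\mathbf{L} K\right) = 0
\quad\text{for } i \not \in [m_0 + s, m_1 + t]
$$
so the tensor product is indeed bounded.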
+
+\medskip\noindent
Case (2) follows because here the assumption implies that $K$ has
finite tor dimension as an object of $D(\text{pr}_1^{-1}\mathcal{O}_X)$,
hence the argument in the previous paragraph applies.
+
+\medskip\noindent
+In Case (3) it is also the case that $K$ has finite tor dimension
+as an object of $D(\text{pr}_1^{-1}\mathcal{O}_X)$. Namely, choose
+affine opens $U = \Spec(A)$ and $V = \Spec(B)$ of $X$ and $Y$ mapping into the
+affine open $W = \Spec(R)$ of $S$. Then
+$K|_{U \times V}$ is given by a bounded complex of finite
+$A \otimes_R B$-modules $M^\bullet$. Since $A$ is a regular ring
+of finite dimension we see that each $M^i$ has finite projective dimension
+as an $A$-module (Algebra, Lemma
+\ref{algebra-lemma-finite-gl-dim-finite-dim-regular})
+and hence finite tor dimension as an $A$-module.
+Thus $M^\bullet$ has finite tor dimension as a complex of $A$-modules
+(More on Algebra, Lemma
+\ref{more-algebra-lemma-complex-finite-tor-dimension-modules}).
Since $X \times_S Y$ is quasi-compact we conclude there exists an
interval $[a, b]$
such that for every point $z \in X \times_S Y$ the stalk $K_z$
+has tor amplitude in $[a, b]$ over $\mathcal{O}_{X, \text{pr}_1(z)}$.
+This implies $K$ has bounded tor dimension as an object of
+$D(\text{pr}_1^{-1}\mathcal{O}_X)$, see
+Cohomology, Lemma \ref{cohomology-lemma-tor-amplitude-stalk}.
We conclude as in the previous two paragraphs.
+
+\medskip\noindent
+Case (4). With notation as above, the ring map $R \to B$ is flat.
+Hence the ring map $A \to A \otimes_R B$ is flat. Hence any projective
+$A \otimes_R B$-module is $A$-flat. Thus any perfect complex of
+$A \otimes_R B$-modules has finite tor dimension as a complex
+of $A$-modules and we conclude as before.
+\end{proof}
+
+\begin{example}
+\label{example-diagonal-fourier-mukai}
+Let $X \to S$ be a separated morphism of schemes. Then the diagonal
+$\Delta : X \to X \times_S X$ is a closed immersion and hence
+$\mathcal{O}_\Delta = \Delta_*\mathcal{O}_X = R\Delta_*\mathcal{O}_X$
+is a quasi-coherent $\mathcal{O}_{X \times_S X}$-module of finite type
+which is flat over $X$ (under either projection). The Fourier-Mukai functor
+$\Phi_{\mathcal{O}_\Delta}$ is equal to the identity in this case.
+Namely, for any $M \in D(\mathcal{O}_X)$ we have
+\begin{align*}
+L\text{pr}_1^*M \otimes_{\mathcal{O}_{X \times_S X}}^\mathbf{L}
+\mathcal{O}_\Delta
+& =
+L\text{pr}_1^*M \otimes_{\mathcal{O}_{X \times_S X}}^\mathbf{L}
+R\Delta_*\mathcal{O}_X \\
+& =
+R\Delta_*(
+L\Delta^*L\text{pr}_1^*M \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_X) \\
+& =
+R\Delta_*(M)
+\end{align*}
+The first equality we discussed above.
+The second equality is Cohomology, Lemma
+\ref{cohomology-lemma-projection-formula-closed-immersion}.
+The third because $\text{pr}_1 \circ \Delta = \text{id}_X$ and we have
+Cohomology, Lemma \ref{cohomology-lemma-derived-pullback-composition}.
+If we push this to $X$ using $R\text{pr}_{2, *}$
+we obtain $M$ by
+Cohomology, Lemma \ref{cohomology-lemma-derived-pushforward-composition}
+and the fact that $\text{pr}_2 \circ \Delta = \text{id}_X$.
+\end{example}
+
+\begin{lemma}
+\label{lemma-fourier-mukai-right-adjoint}
+\begin{reference}
+Compare with discussion in \cite{Rizzardo}.
+\end{reference}
+Let $X \to S$ and $Y \to S$ be morphisms of quasi-compact and quasi-separated
+schemes. Let $\Phi : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+be a Fourier-Mukai functor with pseudo-coherent kernel
+$K \in D_\QCoh(\mathcal{O}_{X \times_S Y})$.
+Let $a : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_{X \times_S Y})$
+be the right adjoint to $R\text{pr}_{2, *}$, see
+Duality for Schemes, Lemma \ref{duality-lemma-twisted-inverse-image}.
+Denote
+$$
+K' = (Y \times_S X \to X \times_S Y)^*
+R\SheafHom_{\mathcal{O}_{X \times_S Y}}(K, a(\mathcal{O}_Y)) \in
+D_\QCoh(\mathcal{O}_{Y \times_S X})
+$$
+and denote $\Phi' : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_X)$
+the corresponding Fourier-Mukai transform. There is a canonical map
+$$
+\Hom_X(M, \Phi'(N)) \longrightarrow \Hom_Y(\Phi(M), N)
+$$
+functorial in $M$ in $D_\QCoh(\mathcal{O}_X)$ and $N$ in
+$D_\QCoh(\mathcal{O}_Y)$ which is an isomorphism if
+\begin{enumerate}
+\item $N$ is perfect, or
+\item $K$ is perfect and $X \to S$ is proper flat and of finite presentation.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-fourier-Mukai-QCoh} we obtain a functor $\Phi$
+as in the statement. Observe that $a(\mathcal{O}_Y)$ is in
+$D^+_\QCoh(\mathcal{O}_{X \times_S Y})$ by Duality for Schemes,
+Lemma \ref{duality-lemma-twisted-inverse-image-bounded-below}.
+Hence for $K$ pseudo-coherent we have
+$K' \in D_\QCoh(\mathcal{O}_{Y \times_S X})$
+by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-quasi-coherence-internal-hom}
and we obtain $\Phi'$ as indicated.
+
+\medskip\noindent
+We abbreviate
+$\otimes^\mathbf{L} = \otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L}$
+and
+$\SheafHom = R\SheafHom_{\mathcal{O}_{X \times_S Y}}$.
+Let $M$ be in $D_\QCoh(\mathcal{O}_X)$ and let
+$N$ be in $D_\QCoh(\mathcal{O}_Y)$. We have
+\begin{align*}
+\Hom_Y(\Phi(M), N)
+& =
+\Hom_Y(R\text{pr}_{2, *}(L\text{pr}_1^*M \otimes^\mathbf{L} K), N) \\
+& =
+\Hom_{X \times_S Y}(L\text{pr}_1^*M \otimes^\mathbf{L} K, a(N)) \\
+& =
+\Hom_{X \times_S Y}(L\text{pr}_1^*M,
+R\SheafHom(K, a(N))) \\
+& =
+\Hom_X(M, R\text{pr}_{1, *}R\SheafHom(K, a(N)))
+\end{align*}
+where we have used Cohomology, Lemmas \ref{cohomology-lemma-internal-hom}
+and \ref{cohomology-lemma-adjoint}. There are canonical maps
+$$
+L\text{pr}_2^*N \otimes^\mathbf{L} R\SheafHom(K, a(\mathcal{O}_Y))
+\xrightarrow{\alpha}
+R\SheafHom(K, L\text{pr}_2^*N \otimes^\mathbf{L} a(\mathcal{O}_Y))
+\xrightarrow{\beta}
+R\SheafHom(K, a(N))
+$$
+Here $\alpha$ is
+Cohomology, Lemma \ref{cohomology-lemma-internal-hom-diagonal-better}
+and $\beta$ is Duality for Schemes, Equation
+(\ref{duality-equation-compare-with-pullback}).
+Combining all of these arrows we obtain the functorial displayed
+arrow in the statement of the lemma.
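To spell out how the arrows combine: the flip $Y \times_S X \to X \times_S Y$
exchanges the two projections, so
$$
\Phi'(N) = R\text{pr}_{1, *}(L\text{pr}_2^*N \otimes^\mathbf{L}
R\SheafHom(K, a(\mathcal{O}_Y)))
$$
and the map of the lemma is obtained by applying $R\text{pr}_{1, *}$
to $\beta \circ \alpha$ and composing with the string of equalities above:
$$
\Hom_X(M, \Phi'(N)) \longrightarrow
\Hom_X(M, R\text{pr}_{1, *}R\SheafHom(K, a(N))) =
\Hom_Y(\Phi(M), N)
$$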
+
+\medskip\noindent
+The arrow $\alpha$ is an isomorphism by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-internal-hom-evaluate-tensor-isomorphism}
+as soon as either $K$ or $N$ is perfect.
+The arrow $\beta$ is an isomorphism if $N$ is perfect by
+Duality for Schemes, Lemma \ref{duality-lemma-compare-with-pullback-perfect}
+or in general if $X \to S$ is
+flat proper of finite presentation by
+Duality for Schemes, Lemma
+\ref{duality-lemma-compare-with-pullback-flat-proper}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fourier-mukai-left-adjoint}
+\begin{reference}
+Compare with discussion in \cite{Rizzardo}.
+\end{reference}
+Let $S$ be a Noetherian scheme. Let $Y \to S$ be a flat proper
+Gorenstein morphism and let $X \to S$ be a finite type morphism.
+Denote $\omega^\bullet_{Y/S}$ the relative dualizing complex of
+$Y$ over $S$. Let $\Phi : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+be a Fourier-Mukai functor with perfect kernel
+$K \in D_\QCoh(\mathcal{O}_{X \times_S Y})$. Denote
+$$
+K' = (Y \times_S X \to X \times_S Y)^*(K^\vee
+\otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L}
+L\text{pr}_2^*\omega^\bullet_{Y/S})
+\in
+D_\QCoh(\mathcal{O}_{Y \times_S X})
+$$
+and denote $\Phi' : D_\QCoh(\mathcal{O}_Y) \to D_\QCoh(\mathcal{O}_X)$
+the corresponding Fourier-Mukai transform. There is a canonical
+isomorphism
+$$
+\Hom_Y(N, \Phi(M)) \longrightarrow \Hom_X(\Phi'(N), M)
+$$
+functorial in $M$ in $D_\QCoh(\mathcal{O}_X)$ and $N$ in
+$D_\QCoh(\mathcal{O}_Y)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-fourier-Mukai-QCoh} we obtain a functor $\Phi$
+as in the statement.
+
+\medskip\noindent
+Observe that formation of the relative dualizing complex commutes
+with base change in our setting, see Duality for Schemes,
+Remark \ref{duality-remark-relative-dualizing-complex}.
+Thus $L\text{pr}_2^*\omega^\bullet_{Y/S} = \omega^\bullet_{X \times_S Y/X}$.
+Moreover, we observe that $\omega^\bullet_{Y/S}$ is an
+invertible object of the derived category, see Duality for Schemes, Lemma
+\ref{duality-lemma-affine-flat-Noetherian-gorenstein}, and a fortiori
+perfect.
+
+\medskip\noindent
+To actually prove the lemma we're going to cheat. Namely, we will
show that if we swap the roles of $X$ and $Y$ and of $K$ and $K'$,
then these are as in Lemma \ref{lemma-fourier-mukai-right-adjoint}
+and we get the result. It is clear that $K'$ is perfect as a
+tensor product of perfect objects so that the discussion in
+Lemma \ref{lemma-fourier-mukai-right-adjoint} applies to it.
+To show that the procedure of
+Lemma \ref{lemma-fourier-mukai-right-adjoint} applied to $K'$
+on $Y \times_S X$ produces a complex isomorphic to $K$ it suffices
+(details omitted) to show that
+$$
+R\SheafHom(R\SheafHom(K, \omega^\bullet_{X \times_S Y/X}),
+\omega^\bullet_{X \times_S Y/X}) = K
+$$
+This is clear because $K$ is perfect and $\omega^\bullet_{X \times_S Y/X}$
+is invertible; details omitted. Thus
+Lemma \ref{lemma-fourier-mukai-right-adjoint} produces a map
+$$
+\Hom_Y(N, \Phi(M)) \longrightarrow \Hom_X(\Phi'(N), M)
+$$
+functorial in $M$ in $D_\QCoh(\mathcal{O}_X)$ and $N$ in
+$D_\QCoh(\mathcal{O}_Y)$ which is an isomorphism because
+$K'$ is perfect. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fourier-mukai-flat-proper-over-noetherian}
+Let $S$ be a Noetherian scheme.
+\begin{enumerate}
+\item For $X$, $Y$ proper and flat over $S$ and $K$ in
+$D_{perf}(\mathcal{O}_{X \times_S Y})$ we obtain a Fourier-Mukai functor
+$\Phi_K : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$.
+\item For $X$, $Y$, $Z$ proper and flat over $S$, $K \in
+D_{perf}(\mathcal{O}_{X \times_S Y})$, $K' \in
+D_{perf}(\mathcal{O}_{Y \times_S Z})$ the composition
+$\Phi_{K'} \circ \Phi_K : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Z)$
+is equal to $\Phi_{K''}$ with $K'' \in D_{perf}(\mathcal{O}_{X \times_S Z})$
+computed as in Lemma \ref{lemma-compose-fourier-mukai},
+\item For $X$, $Y$, $K$, $\Phi_K$ as in (1) if $X \to S$ is Gorenstein, then
+$\Phi_{K'} : D_{perf}(\mathcal{O}_Y) \to D_{perf}(\mathcal{O}_X)$ is a right
+adjoint to $\Phi_K$ where $K' \in D_{perf}(\mathcal{O}_{Y \times_S X})$
+is the pullback of $L\text{pr}_1^*\omega_{X/S}^\bullet
+\otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L} K^\vee$ by
+$Y \times_S X \to X \times_S Y$.
+\item For $X$, $Y$, $K$, $\Phi_K$ as in (1) if $Y \to S$ is Gorenstein, then
+$\Phi_{K''} : D_{perf}(\mathcal{O}_Y) \to D_{perf}(\mathcal{O}_X)$ is a left
+adjoint to $\Phi_K$ where $K'' \in D_{perf}(\mathcal{O}_{Y \times_S X})$
+is the pullback of $L\text{pr}_2^*\omega_{Y/S}^\bullet
+\otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L} K^\vee$ by
+$Y \times_S X \to X \times_S Y$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is immediate from Lemma \ref{lemma-fourier-mukai} part (4).
+
+\medskip\noindent
+Part (2) follows from Lemma \ref{lemma-compose-fourier-mukai} and the
+fact that
+$K'' = R\text{pr}_{13, *}(
+L\text{pr}_{12}^*K
+\otimes_{\mathcal{O}_{X \times_S Y \times_S Z}}^\mathbf{L}
+L\text{pr}_{23}^*K')$ is perfect for example by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image}.
+
+\medskip\noindent
+The adjointness in part (3) on all complexes with quasi-coherent cohomology
+sheaves follows from Lemma \ref{lemma-fourier-mukai-right-adjoint} with
+$K'$ equal to the pullback of
+$R\SheafHom_{\mathcal{O}_{X \times_S Y}}(K, a(\mathcal{O}_Y))$
+by $Y \times_S X \to X \times_S Y$ where $a$ is the right adjoint
+to $R\text{pr}_{2, *} : D_\QCoh(\mathcal{O}_{X \times_S Y}) \to
+D_\QCoh(\mathcal{O}_Y)$. Denote $f : X \to S$ the structure morphism of $X$.
+Since $f$ is proper the functor
+$f^! : D_\QCoh^+(\mathcal{O}_S) \to D_\QCoh^+(\mathcal{O}_X)$
+is the restriction to $D_\QCoh^+(\mathcal{O}_S)$
+of the right adjoint to
+$Rf_* : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_S)$, see
+Duality for Schemes, Section \ref{duality-section-upper-shriek}.
+Hence the relative dualizing complex $\omega_{X/S}^\bullet$ as defined in
+Duality for Schemes, Remark
+\ref{duality-remark-relative-dualizing-complex}
+is equal to $\omega_{X/S}^\bullet = f^!\mathcal{O}_S$.
+Since formation of the relative dualizing complex
+commutes with base change (see Duality for Schemes, Remark
+\ref{duality-remark-relative-dualizing-complex}) we see that
+$a(\mathcal{O}_Y) = L\text{pr}_1^*\omega_{X/S}^\bullet$.
+Thus
+$$
+R\SheafHom_{\mathcal{O}_{X \times_S Y}}(K, a(\mathcal{O}_Y))
+\cong
+L\text{pr}_1^*\omega_{X/S}^\bullet
+\otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L} K^\vee
+$$
+by Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+Finally, since $X \to S$ is assumed Gorenstein the relative dualizing complex
+is invertible: this follows from Duality for Schemes, Lemma
+\ref{duality-lemma-affine-flat-Noetherian-gorenstein}.
+We conclude that $\omega_{X/S}^\bullet$ is perfect
+(Cohomology, Lemma \ref{cohomology-lemma-invertible-derived})
+and hence $K'$ is perfect.
+Therefore $\Phi_{K'}$ does indeed map $D_{perf}(\mathcal{O}_Y)$
+into $D_{perf}(\mathcal{O}_X)$ which finishes the proof of (3).
+
+\medskip\noindent
+The proof of (4) is the same as the proof of (3) except one uses
+Lemma \ref{lemma-fourier-mukai-left-adjoint} instead of
+Lemma \ref{lemma-fourier-mukai-right-adjoint}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Resolutions and bounds}
+\label{section-tricks-smooth}
+
+\noindent
+The diagonal of a smooth proper scheme has a nice resolution.
+
+\begin{lemma}
+\label{lemma-on-product}
+Let $R$ be a Noetherian ring. Let $X$, $Y$ be finite type schemes over $R$
+having the resolution property. For any coherent
$\mathcal{O}_{X \times_R Y}$-module $\mathcal{F}$ there exists
+a surjection $\mathcal{E} \boxtimes \mathcal{G} \to \mathcal{F}$
+where $\mathcal{E}$ is a finite locally free $\mathcal{O}_X$-module
+and $\mathcal{G}$ is a finite locally free $\mathcal{O}_Y$-module.
+\end{lemma}
+
+\begin{proof}
+Let $U \subset X$ and $V \subset Y$ be affine open subschemes. Let
+$\mathcal{I} \subset \mathcal{O}_X$ be the ideal sheaf of the
+reduced induced closed subscheme structure on $X \setminus U$.
+Similarly, let $\mathcal{I}' \subset \mathcal{O}_Y$ be the ideal sheaf of the
+reduced induced closed subscheme structure on $Y \setminus V$.
+Then the ideal sheaf
+$$
+\mathcal{J} = \Im(\text{pr}_1^*\mathcal{I} \otimes_{\mathcal{O}_{X \times_R Y}}
+\text{pr}_2^*\mathcal{I}' \to \mathcal{O}_{X \times_R Y})
+$$
+satisfies $V(\mathcal{J}) = X \times_R Y \setminus U \times_R V$.
+For any section $s \in \mathcal{F}(U \times_R V)$ we can find an integer
+$n > 0$ and a map $\mathcal{J}^n \to \mathcal{F}$ whose restriction to
+$U \times_R V$ gives $s$, see
+Cohomology of Schemes, Lemma \ref{coherent-lemma-homs-over-open}.
+By assumption we can choose surjections
+$\mathcal{E} \to \mathcal{I}$ and $\mathcal{G} \to \mathcal{I}'$.
+These produce corresponding surjections
+$$
+\mathcal{E} \boxtimes \mathcal{G} \to \mathcal{J}
+\quad\text{and}\quad
+\mathcal{E}^{\otimes n} \boxtimes \mathcal{G}^{\otimes n} \to \mathcal{J}^n
+$$
+and hence a map
+$\mathcal{E}^{\otimes n} \boxtimes \mathcal{G}^{\otimes n} \to \mathcal{F}$
+whose image contains the section $s$ over $U \times_R V$.
+Since we can cover $X \times_R Y$ by a finite number of affine opens
+of the form $U \times_R V$ and since $\mathcal{F}|_{U \times_R V}$
+is generated by finitely many sections (Properties, Lemma
+\ref{properties-lemma-finite-type-module})
+we conclude that there exists a surjection
+$$
+\bigoplus\nolimits_{j = 1, \ldots, N}
+\mathcal{E}_j^{\otimes n_j} \boxtimes \mathcal{G}_j^{\otimes n_j}
+\to \mathcal{F}
+$$
+where $\mathcal{E}_j$ is finite locally free on $X$ and
+$\mathcal{G}_j$ is finite locally free on $Y$.
+Setting $\mathcal{E} = \bigoplus \mathcal{E}_j^{\otimes n_j}$
+and $\mathcal{G} = \bigoplus \mathcal{G}_j^{\otimes n_j}$
+we conclude that the lemma is true.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-on-product-general}
+Let $R$ be a ring. Let $X$, $Y$ be quasi-compact and quasi-separated
+schemes over $R$ having the resolution property. For any finite
+type quasi-coherent $\mathcal{O}_{X \times_R Y}$-module $\mathcal{F}$
there exists a surjection $\mathcal{E} \boxtimes \mathcal{G} \to \mathcal{F}$
+where $\mathcal{E}$ is a finite locally free $\mathcal{O}_X$-module
+and $\mathcal{G}$ is a finite locally free $\mathcal{O}_Y$-module.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-on-product} by a limit argument.
+We urge the reader to skip the proof.
+Since $X \times_R Y$ is a closed subscheme of $X \times_\mathbf{Z} Y$
+it is harmless if we replace $R$ by $\mathbf{Z}$.
+We can write $\mathcal{F}$ as the quotient of
+a finitely presented $\mathcal{O}_{X \times_R Y}$-module by
+Properties, Lemma
+\ref{properties-lemma-finite-directed-colimit-surjective-maps}.
+Hence we may assume $\mathcal{F}$ is of
+finite presentation. Next we can write $X = \lim X_i$
+with $X_i$ of finite presentation over $\mathbf{Z}$ and similarly
+$Y = \lim Y_j$, see Limits, Proposition \ref{limits-proposition-approximate}.
+Then $\mathcal{F}$ will descend to $\mathcal{F}_{ij}$ on some $X_i \times_R Y_j$
+(Limits, Lemma \ref{limits-lemma-descend-modules-finite-presentation}) and
+so does the property of having the resolution property
+(Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-resolution-property-descends}).
+Then we apply Lemma \ref{lemma-on-product}
to $\mathcal{F}_{ij}$ and pull back.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal-resolution}
+Let $R$ be a Noetherian ring. Let $X$ be a separated finite type scheme
+over $R$ which has the resolution property. Set
+$\mathcal{O}_\Delta = \Delta_*(\mathcal{O}_X)$ where
$\Delta : X \to X \times_R X$ is the diagonal of $X/R$.
+There exists a resolution
+$$
+\ldots \to
+\mathcal{E}_2 \boxtimes \mathcal{G}_2 \to
+\mathcal{E}_1 \boxtimes \mathcal{G}_1 \to
+\mathcal{E}_0 \boxtimes \mathcal{G}_0 \to
+\mathcal{O}_\Delta \to 0
+$$
+where each $\mathcal{E}_i$ and $\mathcal{G}_i$ is a finite locally
+free $\mathcal{O}_X$-module.
+\end{lemma}
+
+\begin{proof}
+Since $X$ is separated, the diagonal morphism $\Delta$ is a closed
+immersion and hence $\mathcal{O}_\Delta$ is a coherent
+$\mathcal{O}_{X \times_R X}$-module (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-i-star-equivalence}).
+Thus the lemma follows immediately from Lemma \ref{lemma-on-product}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Ext-0-regular}
+Let $X$ be a regular Noetherian scheme of dimension $d < \infty$. Then
+\begin{enumerate}
+\item for $\mathcal{F}$, $\mathcal{G}$ coherent $\mathcal{O}_X$-modules
+we have $\Ext^n_X(\mathcal{F}, \mathcal{G}) = 0$ for $n > d$, and
+\item for $K, L \in D^b_{\textit{Coh}}(\mathcal{O}_X)$ and $a \in \mathbf{Z}$
+if $H^i(K) = 0$ for $i < a + d$ and $H^i(L) = 0$ for $i \geq a$ then
+$\Hom_X(K, L) = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove (1) we use the spectral sequence
+$$
+H^p(X, \SheafExt^q(\mathcal{F}, \mathcal{G})) \Rightarrow
+\Ext^{p + q}_X(\mathcal{F}, \mathcal{G})
+$$
+of Cohomology, Section \ref{cohomology-section-ext}. Let $x \in X$.
+We have
+$$
+\SheafExt^q(\mathcal{F}, \mathcal{G})_x =
+\SheafExt^q_{\mathcal{O}_{X, x}}(\mathcal{F}_x, \mathcal{G}_x)
+$$
+see Cohomology, Lemma \ref{cohomology-lemma-stalk-internal-hom}
+(this also uses that $\mathcal{F}$ is pseudo-coherent by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-identify-pseudo-coherent-noetherian}).
+Set $d_x = \dim(\mathcal{O}_{X, x})$.
+Since $\mathcal{O}_{X, x}$ is regular the ring
+$\mathcal{O}_{X, x}$ has global dimension $d_x$, see
+Algebra, Proposition \ref{algebra-proposition-regular-finite-gl-dim}.
+Thus $\SheafExt^q_{\mathcal{O}_{X, x}}(\mathcal{F}_x, \mathcal{G}_x)$
+is zero for $q > d_x$. It follows that the modules
+$\SheafExt^q(\mathcal{F}, \mathcal{G})$ have support
+of dimension at most $d - q$. Hence we have
+$H^p(X, \SheafExt^q(\mathcal{F}, \mathcal{G})) = 0$ for $p > d - q$
+by Cohomology, Proposition \ref{cohomology-proposition-vanishing-Noetherian}.
If $n = p + q > d$, then $p > d - q$, so all these groups vanish
and the spectral sequence gives $\Ext^n_X(\mathcal{F}, \mathcal{G}) = 0$.
This proves (1).
+
+\medskip\noindent
+Proof of (2).
+We may use induction on the number of nonzero cohomology sheaves
of $K$ and $L$. The case where these numbers are both at most $1$ follows
+from (1). If the number of nonzero cohomology sheaves of $K$
+is $> 1$, then we let $i \in \mathbf{Z}$ be minimal such that
+$H^i(K)$ is nonzero. We obtain a distinguished triangle
+$$
+H^i(K)[-i] \to K \to \tau_{\geq i + 1}K
+$$
+(Derived Categories, Remark
+\ref{derived-remark-truncation-distinguished-triangle})
+and we get the vanishing of $\Hom(K, L)$ from the vanishing
+of $\Hom(H^i(K)[-i], L)$ and $\Hom(\tau_{\geq i + 1}K, L)$
+by Derived Categories, Lemma \ref{derived-lemma-representable-homological}.
Similarly if $L$ has more than one nonzero cohomology sheaf.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-split-complex-regular}
+Let $X$ be a regular Noetherian scheme of dimension $d < \infty$.
+Let $K \in D^b_{\textit{Coh}}(\mathcal{O}_X)$ and $a \in \mathbf{Z}$.
+If $H^i(K) = 0$ for $a < i < a + d$, then
+$K = \tau_{\leq a}K \oplus \tau_{\geq a + d}K$.
+\end{lemma}
+
+\begin{proof}
+We have $\tau_{\leq a}K = \tau_{\leq a + d - 1}K$ by the assumed
+vanishing of cohomology sheaves. By Derived Categories, Remark
+\ref{derived-remark-truncation-distinguished-triangle}
+we have a distinguished triangle
+$$
+\tau_{\leq a}K \to K \to \tau_{\geq a + d}K \xrightarrow{\delta}
+(\tau_{\leq a}K)[1]
+$$
+By Derived Categories, Lemma \ref{derived-lemma-split} it
suffices to show that the morphism $\delta$ is zero.
Since $H^i(\tau_{\geq a + d}K) = 0$ for $i < a + d$ and
$H^i((\tau_{\leq a}K)[1]) = H^{i + 1}(\tau_{\leq a}K) = 0$ for $i \geq a$,
this follows from Lemma \ref{lemma-Ext-0-regular} part (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal-trick}
+Let $k$ be a field. Let $X$ be a quasi-compact separated smooth scheme over $k$.
+There exist finite locally free $\mathcal{O}_X$-modules
+$\mathcal{E}$ and $\mathcal{G}$ such that
+$$
+\mathcal{O}_\Delta \in \langle \mathcal{E} \boxtimes \mathcal{G} \rangle
+$$
+in $D(\mathcal{O}_{X \times X})$ where the notation is as in
+Derived Categories, Section \ref{derived-section-generators}.
+\end{lemma}
+
+\begin{proof}
+Recall that $X$ is regular by
+Varieties, Lemma \ref{varieties-lemma-smooth-regular}.
+Hence $X$ has the resolution property by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-regular-resolution-property}.
+Hence we may choose a resolution as in Lemma \ref{lemma-diagonal-resolution}.
+Say $\dim(X) = d$. Since $X \times X$ is smooth over $k$ it is regular.
+Hence $X \times X$ is a regular Noetherian scheme with
+$\dim(X \times X) = 2d$. The object
+$$
+K = (\mathcal{E}_{2d} \boxtimes \mathcal{G}_{2d} \to
+\ldots \to
+\mathcal{E}_0 \boxtimes \mathcal{G}_0)
+$$
+of $D_{perf}(\mathcal{O}_{X \times X})$
+has cohomology sheaves $\mathcal{O}_\Delta$
+in degree $0$ and $\Ker(\mathcal{E}_{2d} \boxtimes \mathcal{G}_{2d} \to
+\mathcal{E}_{2d-1} \boxtimes \mathcal{G}_{2d-1})$ in degree $-2d$ and zero
+in all other degrees.
+Hence by Lemma \ref{lemma-split-complex-regular} we see that
+$\mathcal{O}_\Delta$ is a summand of $K$ in
+$D_{perf}(\mathcal{O}_{X \times X})$.
+Clearly, the object $K$ is in
+$$
+\left\langle
+\bigoplus\nolimits_{i = 0, \ldots, 2d} \mathcal{E}_i \boxtimes \mathcal{G}_i
+\right\rangle
+\subset
+\left\langle
+\left(\bigoplus\nolimits_{i = 0, \ldots, 2d} \mathcal{E}_i\right)
+\boxtimes
+\left(\bigoplus\nolimits_{i = 0, \ldots, 2d} \mathcal{G}_i\right)
+\right\rangle
+$$
+which finishes the proof. (The reader may consult
+Derived Categories, Lemmas \ref{derived-lemma-generated-by-E-explicit} and
+\ref{derived-lemma-in-cone-n} to see that our object is contained in this
+category.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-proper-strong-generator}
+Let $k$ be a field. Let $X$ be a scheme proper and smooth over $k$.
+Then $D_{perf}(\mathcal{O}_X)$
+has a strong generator.
+\end{lemma}
+
+\begin{proof}
+Using Lemma \ref{lemma-diagonal-trick} choose finite locally free
+$\mathcal{O}_X$-modules $\mathcal{E}$ and $\mathcal{G}$ such that
+$\mathcal{O}_\Delta \in \langle \mathcal{E} \boxtimes \mathcal{G} \rangle$
+in $D(\mathcal{O}_{X \times X})$. We claim that $\mathcal{G}$
+is a strong generator for $D_{perf}(\mathcal{O}_X)$. With notation as in
+Derived Categories, Section \ref{derived-section-operate-on-full}
+choose $m, n \geq 1$ such that
+$$
+\mathcal{O}_\Delta \in
+smd(add(\mathcal{E} \boxtimes \mathcal{G}[-m, m])^{\star n})
+$$
+This is possible by Derived Categories, Lemma
+\ref{derived-lemma-find-smallest-containing-E}.
+Let $K$ be an object of $D_{perf}(\mathcal{O}_X)$. Since
+$L\text{pr}_1^*K \otimes_{\mathcal{O}_{X \times X}}^\mathbf{L} -$
+is an exact functor and since
+$$
+L\text{pr}_1^*K \otimes_{\mathcal{O}_{X \times X}}^\mathbf{L}
+(\mathcal{E} \boxtimes \mathcal{G}) =
+(K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{E}) \boxtimes \mathcal{G}
+$$
+we conclude from
+Derived Categories, Remark \ref{derived-remark-operations-functor} that
+$$
+L\text{pr}_1^*K
+\otimes_{\mathcal{O}_{X \times X}}^\mathbf{L}
+\mathcal{O}_\Delta
+\in
+smd(add(
+(K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{E})
+\boxtimes \mathcal{G}[-m, m])^{\star n})
+$$
+Applying the exact functor $R\text{pr}_{2, *}$ and observing that
+$$
+R\text{pr}_{2, *}
+\left((K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{E}) \boxtimes
+\mathcal{G}\right) =
+R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{E})
+\otimes_k \mathcal{G}
+$$
+by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change} we conclude that
+$$
+K = R\text{pr}_{2, *}(L\text{pr}_1^*K
+\otimes_{\mathcal{O}_{X \times X}}^\mathbf{L} \mathcal{O}_\Delta)
+\in
+smd(add(R\Gamma(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{E})
+\otimes_k \mathcal{G}[-m, m])^{\star n})
+$$
+The equality follows from the discussion in
+Example \ref{example-diagonal-fourier-mukai}.
+Since $K$ is perfect, there exist $a \leq b$ such that
+$H^i(X, K)$ is nonzero only for $i \in [a, b]$. Since $X$ is proper,
+each $H^i(X, K)$ is finite dimensional. We conclude that
+the right hand side is contained in
+$smd(add(\mathcal{G}[-m + a, m + b])^{\star n})$ which is
+itself contained in $\langle \mathcal{G} \rangle_n$ by one of the
+references given above. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-diagonal-trick-proper}
+Let $k$ be a field. Let $X$ be a proper smooth scheme over $k$.
There exist integers $m, n \geq 1$ and a finite locally free
+$\mathcal{O}_X$-module $\mathcal{G}$ such that every coherent
+$\mathcal{O}_X$-module is contained in $smd(add(\mathcal{G}[-m, m])^{\star n})$
+with notation as in Derived Categories, Section
+\ref{derived-section-operate-on-full}.
+\end{lemma}
+
+\begin{proof}
+In the proof of Lemma \ref{lemma-smooth-proper-strong-generator}
+we have shown that there exist $m', n \geq 1$ such that for any
+coherent $\mathcal{O}_X$-module $\mathcal{F}$,
+$$
+\mathcal{F} \in smd(add(\mathcal{G}[-m' + a, m' + b])^{\star n})
+$$
+for any $a \leq b$ such that $H^i(X, \mathcal{F})$ is nonzero only
+for $i \in [a, b]$. Thus we can take $a = 0$ and $b = \dim(X)$.
+Taking $m = \max(m', m' + b)$ finishes the proof.
+\end{proof}
+
+\noindent
+The following lemma is the boundedness result referred to
+in the title of this section.
+
+\begin{lemma}
+\label{lemma-boundedness}
+Let $k$ be a field. Let $X$ be a smooth proper scheme over $k$.
+Let $\mathcal{A}$ be an abelian category. Let
+$H : D_{perf}(\mathcal{O}_X) \to \mathcal{A}$ be a homological
+functor (Derived Categories, Definition \ref{derived-definition-homological})
+such that for all $K$ in $D_{perf}(\mathcal{O}_X)$ the object
+$H^i(K)$ is nonzero for only a finite number of $i \in \mathbf{Z}$.
+Then there exists an integer $m \geq 1$ such that
+$H^i(\mathcal{F}) = 0$ for any coherent $\mathcal{O}_X$-module
+$\mathcal{F}$ and $i \not \in [-m, m]$.
+Similarly for cohomological functors.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-diagonal-trick-proper} with
+Derived Categories, Lemma \ref{derived-lemma-forward-cone-n}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bounded-fibres}
+Let $k$ be a field. Let $X$, $Y$ be finite type schemes over $k$.
+Let $K_0 \to K_1 \to K_2 \to \ldots$ be a system of objects
+of $D_{perf}(\mathcal{O}_{X \times Y})$ and $m \geq 0$ an integer such that
+\begin{enumerate}
+\item $H^q(K_i)$ is nonzero only for $q \leq m$,
+\item for every coherent $\mathcal{O}_X$-module $\mathcal{F}$ with
+$\dim(\text{Supp}(\mathcal{F})) = 0$ the object
+$$
+R\text{pr}_{2, *}(
+\text{pr}_1^*\mathcal{F} \otimes_{\mathcal{O}_{X \times Y}}^\mathbf{L}
+K_n)
+$$
+has vanishing cohomology sheaves in degrees outside
+$[-m, m] \cup [-m - n, m - n]$ and for $n > 2m$ the transition maps
+induce isomorphisms on cohomology sheaves in degrees in $[-m, m]$.
+\end{enumerate}
+Then $K_n$ has vanishing cohomology sheaves in degrees outside
+$[-m, m] \cup [-m - n, m - n]$ and for $n > 2m$ the
+transition maps induce isomorphisms on cohomology sheaves in degrees in
+$[-m, m]$. Moreover, if $X$ and $Y$ are smooth over $k$, then for $n$
+large enough we find $K_n = K \oplus C_n$ in
+$D_{perf}(\mathcal{O}_{X \times Y})$
where $K$ has cohomology only in degrees $[-m, m]$ and $C_n$ only in
+degrees $[-m - n, m - n]$ and the transition maps
+define isomorphisms between various copies of $K$.
+\end{lemma}
+
+\begin{proof}
+Let $Z$ be the scheme theoretic support of an $\mathcal{F}$ as in (2).
+Then $Z \to \Spec(k)$ is finite, hence $Z \times Y \to Y$ is finite.
+It follows that for an object $M$ of $D_\QCoh(\mathcal{O}_{X \times Y})$
+with cohomology sheaves supported on $Z \times Y$ we have
+$H^i(R\text{pr}_{2, *}(M)) = \text{pr}_{2, *}H^i(M)$ and the functor
+$\text{pr}_{2, *}$ is faithful on quasi-coherent modules supported
+on $Z \times Y$; details omitted. Hence we see that the objects
+$$
+\text{pr}_1^*\mathcal{F} \otimes_{\mathcal{O}_{X \times Y}}^\mathbf{L} K_n
+$$
+in $D_{perf}(\mathcal{O}_{X \times Y})$ have vanishing cohomology sheaves
+outside $[-m, m] \cup [-m - n, m - n]$ and for $n > 2m$ the transition maps
+induce isomorphisms on cohomology sheaves in $[-m, m]$.
+Let $z \in X \times Y$ be a closed point mapping to the closed point
+$x \in X$. Then we know that
+$$
+K_{n, z} \otimes_{\mathcal{O}_{X \times Y, z}}^\mathbf{L}
+\mathcal{O}_{X \times Y, z}/\mathfrak m_x^t\mathcal{O}_{X \times Y, z}
+$$
+has nonzero cohomology only in the intervals
+$[-m, m] \cup [-m - n, m - n]$.
+We conclude by More on Algebra, Lemma
+\ref{more-algebra-lemma-kollar-kovacs-pseudo-coherent}
+that $K_{n, z}$ only has nonzero cohomology
+in degrees $[-m, m] \cup [-m - n, m - n]$. Since this holds for all
+closed points of $X \times Y$, we conclude $K_n$ only has nonzero
+cohomology sheaves in degrees $[-m, m] \cup [-m - n, m - n]$.
+In exactly the same way we see that the maps $K_n \to K_{n + 1}$
+are isomorphisms on cohomology sheaves in degrees $[-m, m]$
+for $n > 2m$.
+
+\medskip\noindent
+If $X$ and $Y$ are smooth over $k$, then $X \times Y$ is smooth
+over $k$ and hence regular by
+Varieties, Lemma \ref{varieties-lemma-smooth-regular}.
+Thus we will obtain the direct sum decomposition of $K_n$
+as soon as $n > 2m + \dim(X \times Y)$ from
+Lemma \ref{lemma-split-complex-regular}. The final statement
+is clear from this.
+\end{proof}
+
+
+
+
+
+\section{Sibling functors}
+\label{section-sibling}
+
+\noindent
In this section we prove some categorical results on the following notion.
+
+\begin{definition}
+\label{definition-siblings}
+Let $\mathcal{A}$ be an abelian category. Let $\mathcal{D}$ be a
+triangulated category. We say two exact functors of triangulated categories
+$$
+F, F' : D^b(\mathcal{A}) \longrightarrow \mathcal{D}
+$$
+are {\it siblings}, or we say $F'$ is a {\it sibling} of $F$,
+if the following two conditions are satisfied
+\begin{enumerate}
+\item the functors $F \circ i$ and $F' \circ i$ are isomorphic
+where $i : \mathcal{A} \to D^b(\mathcal{A})$ is the inclusion functor, and
+\item $F(K) \cong F'(K)$ for any $K$ in $D^b(\mathcal{A})$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Sometimes the second condition is a consequence of the first.
+
+\begin{lemma}
+\label{lemma-sibling-fully-faithful}
+Let $\mathcal{A}$ be an abelian category. Let $\mathcal{D}$ be a
+triangulated category. Let
+$F, F' : D^b(\mathcal{A}) \longrightarrow \mathcal{D}$
+be exact functors of triangulated categories. Assume
+\begin{enumerate}
+\item the functors $F \circ i$ and $F' \circ i$ are isomorphic
+where $i : \mathcal{A} \to D^b(\mathcal{A})$ is the inclusion functor, and
+\item for all $X, Y \in \Ob(\mathcal{A})$ we have
+$\Ext^q_\mathcal{D}(F(X), F(Y)) = 0$ for $q < 0$ (for example
+if $F$ is fully faithful).
+\end{enumerate}
+Then $F$ and $F'$ are siblings.
+\end{lemma}
+
+\begin{proof}
+Let $K \in D^b(\mathcal{A})$. We will show $F(K)$ is isomorphic to $F'(K)$.
+We can represent $K$ by a bounded complex $A^\bullet$ of objects of
+$\mathcal{A}$. After replacing $K$ by a translation we may
+assume $A^i = 0$ for $i > 0$. Choose $n \geq 0$ such that $A^{-i} = 0$
+for $i > n$. The objects
+$$
+M_i = (A^{-i} \to \ldots \to A^0)[-i],\quad i = 0, \ldots, n
+$$
+form a Postnikov system in $D^b(\mathcal{A})$ for the complex
+$A^\bullet = A^{-n} \to \ldots \to A^0$ in $D^b(\mathcal{A})$.
+See Derived Categories, Example \ref{derived-example-key-postnikov}.
+Since both $F$ and $F'$ are exact functors of triangulated categories both
+$$
+F(M_i)
+\quad\text{and}\quad
+F'(M_i)
+$$
+form a Postnikov system in $\mathcal{D}$ for the complex
+$$
+F(A^{-n}) \to \ldots \to F(A^0) =
+F'(A^{-n}) \to \ldots \to F'(A^0)
+$$
+Since all negative $\Ext$s between these objects vanish by assumption
+we conclude by uniqueness of Postnikov systems
+(Derived Categories, Lemma \ref{derived-lemma-existence-postnikov-system})
+that $F(K) = F(M_n[n]) \cong F'(M_n[n]) = F'(K)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sibling-faithful}
+Let $F$ and $F'$ be siblings as in Definition \ref{definition-siblings}.
+Then
+\begin{enumerate}
+\item if $F$ is essentially surjective, then $F'$ is essentially
+surjective,
+\item if $F$ is fully faithful, then $F'$ is fully faithful.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is immediate from property (2) for siblings.
+
+\medskip\noindent
+Assume $F$ is fully faithful. Denote $\mathcal{D}' \subset \mathcal{D}$
+the essential image of $F$ so that $F : D^b(\mathcal{A}) \to \mathcal{D}'$
+is an equivalence. Since the functor $F'$ factors through $\mathcal{D}'$
+by property (2) for siblings, we can consider the functor
+$H = F^{-1} \circ F' : D^b(\mathcal{A}) \to D^b(\mathcal{A})$.
+Observe that $H$ is a sibling of the identity functor.
+Since it suffices to prove that $H$ is fully faithful,
+we reduce to the problem discussed in the next paragraph.
+
+\medskip\noindent
+Set $\mathcal{D} = D^b(\mathcal{A})$. We have to show a sibling
+$F : \mathcal{D} \to \mathcal{D}$ of the identity functor is fully faithful.
+Denote $a_X : X \to F(X)$ the functorial isomorphism for
+$X \in \Ob(\mathcal{A})$ given to us by Definition \ref{definition-siblings}.
+For any $K$ in $\mathcal{D}$ and distinguished triangle
+$K_1 \to K_2 \to K_3$ of $\mathcal{D}$
+if the maps
+$$
+F : \Hom(K, K_i[n]) \to \Hom(F(K), F(K_i[n]))
+$$
+are isomorphisms for all $n \in \mathbf{Z}$ and $i = 1, 3$, then the
+same is true for $i = 2$ and all $n \in \mathbf{Z}$. This uses the
$5$-lemma (Homology, Lemma \ref{homology-lemma-five-lemma}) and
+Derived Categories, Lemma \ref{derived-lemma-representable-homological};
+details omitted. Similarly, if the maps
+$$
+F : \Hom(K_i[n], K) \to \Hom(F(K_i[n]), F(K))
+$$
+are isomorphisms for all $n \in \mathbf{Z}$ and $i = 1, 3$, then the
+same is true for $i = 2$ and all $n \in \mathbf{Z}$. Using the canonical
+truncations and induction on the number of nonzero cohomology objects,
+we see that it is enough to show
+$$
+F : \Ext^q(X, Y) \to \Ext^q(F(X), F(Y))
+$$
+is bijective for all $X, Y \in \Ob(\mathcal{A})$ and all $q \in \mathbf{Z}$.
+Since $F$ is a sibling of $\text{id}$ we have $F(X) \cong X$ and
+$F(Y) \cong Y$ hence the right hand side is zero for $q < 0$.
+The case $q = 0$ is OK by our assumption that $F$ is a sibling of
+the identity functor. It remains to prove the cases $q > 0$.
+
+\medskip\noindent
+The case $q = 1$: Injectivity. An element $\xi$ of $\Ext^1(X, Y)$
+gives rise to a distinguished triangle
+$$
+Y \to E \to X \xrightarrow{\xi} Y[1]
+$$
+Observe that $E \in \Ob(\mathcal{A})$. Since $F$ is a sibling of the
+identity functor we obtain a commutative diagram
+$$
+\xymatrix{
+E \ar[d] \ar[r] & X \ar[d] \\
+F(E) \ar[r] & F(X)
+}
+$$
+whose vertical arrows are the isomorphisms $a_E$ and $a_X$.
+By TR3 the distinguished triangle associated to $\xi$ we started
+with is isomorphic to the distinguished triangle
+$$
+F(Y) \to F(E) \to F(X) \xrightarrow{F(\xi)} F(Y[1]) = F(Y)[1]
+$$
+Thus $\xi = 0$ if and only if $F(\xi)$ is zero, i.e., we see that
+$F : \Ext^1(X, Y) \to \Ext^1(F(X), F(Y))$ is injective.
+
+\medskip\noindent
+The case $q = 1$: Surjectivity. Let $\theta$ be an element of
+$\Ext^1(F(X), F(Y))$. This defines an extension of $F(X)$ by $F(Y)$
+in $\mathcal{A}$ which we may write as $F(E)$
+as $F$ is a sibling of the identity functor. We thus get a distinguished
+triangle
+$$
+F(Y) \xrightarrow{F(\alpha)} F(E)
+\xrightarrow{F(\beta)} F(X)
+\xrightarrow{\theta} F(Y[1]) = F(Y)[1]
+$$
+for some morphisms $\alpha : Y \to E$ and $\beta : E \to X$.
+Since $F$ is a sibling of the identity functor, the sequence
+$0 \to Y \to E \to X \to 0$
+is a short exact sequence in $\mathcal{A}$! Hence we obtain a
+distinguished triangle
+$$
+Y \xrightarrow{\alpha} E \xrightarrow{\beta} X \xrightarrow{\delta} Y[1]
+$$
+for some morphism $\delta : X \to Y[1]$. Applying the exact functor
+$F$ we obtain the distinguished triangle
+$$
+F(Y) \xrightarrow{F(\alpha)} F(E) \xrightarrow{F(\beta)} F(X)
+\xrightarrow{F(\delta)} F(Y)[1]
+$$
+Arguing as above, we see that these triangles are isomorphic.
+Hence there exists a commutative diagram
+$$
+\xymatrix{
+F(X) \ar[d]^\gamma \ar[r]_{F(\delta)} & F(Y[1]) \ar[d]_\epsilon \\
+F(X) \ar[r]^\theta & F(Y[1])
+}
+$$
+for some isomorphisms $\gamma$, $\epsilon$ (we can say more but we won't
+need more information). We may write $\gamma = F(\gamma')$ and
+$\epsilon = F(\epsilon')$. Then we have
+$\theta = F(\epsilon' \circ \delta \circ (\gamma')^{-1})$
+and we see the surjectivity holds.
+
+\medskip\noindent
+The case $q > 1$: surjectivity. Using Yoneda extensions, see
+Derived Categories, Section \ref{derived-section-ext}, we find that for any
+element $\xi$ in $\Ext^q(F(X), F(Y))$ we can find
+$F(X) = B_0, B_1, \ldots, B_{q - 1}, B_q = F(Y) \in \Ob(\mathcal{A})$ and
+elements
+$$
+\xi_i \in \Ext^1(B_{i - 1}, B_i)
+$$
+such that $\xi$ is the composition $\xi_q \circ \ldots \circ \xi_1$.
+Write $B_i = F(A_i)$ (of course we have $A_i = B_i$ but we don't
+need to use this) so that
+$$
+\xi_i = F(\eta_i) \in \Ext^1(F(A_{i - 1}), F(A_i))
+\quad\text{with}\quad
+\eta_i \in \Ext^1(A_{i - 1}, A_i)
+$$
+by surjectivity for $q = 1$. Then $\eta = \eta_q \circ \ldots \circ \eta_1$
+is an element of $\Ext^q(X, Y)$ with $F(\eta) = \xi$.
+
+\medskip\noindent
+The case $q > 1$: injectivity. An element $\xi$ of $\Ext^q(X, Y)$
+gives rise to a distinguished triangle
+$$
+Y[q - 1] \to E \to X \xrightarrow{\xi} Y[q]
+$$
+Applying $F$ we obtain a distinguished triangle
+$$
+F(Y)[q - 1] \to F(E) \to F(X) \xrightarrow{F(\xi)} F(Y)[q]
+$$
+If $F(\xi) = 0$, then $F(E) \cong F(Y)[q - 1] \oplus F(X)$
+in $\mathcal{D}$, see
+Derived Categories, Lemma \ref{derived-lemma-split}.
+Since $F$ is a sibling of the identity functor we have
+$E \cong F(E)$ and hence
+$$
+E \cong F(E) \cong F(Y)[q - 1] \oplus F(X) \cong Y[q - 1] \oplus X
+$$
+In other words, $E$ is isomorphic to the
+direct sum of its cohomology objects. This implies that the
+initial distinguished triangle is split, i.e., $\xi = 0$.
+\end{proof}
+
+\noindent
+Let us make a nonstandard definition. Let $\mathcal{A}$ be an abelian
+category. Let us say $\mathcal{A}$ {\it has enough negative objects}
+if given any $X \in \Ob(\mathcal{A})$ there exists an object $N$ such that
+\begin{enumerate}
+\item there is a surjection $N \to X$ and
+\item $\Hom(X, N) = 0$.
+\end{enumerate}
+Let us prove a couple of lemmas about this notion in order to
+help with the proof of Proposition \ref{proposition-siblings-isomorphic}.
+
+\begin{lemma}
+\label{lemma-good-map}
+Let $\mathcal{A}$ be an abelian category with enough negative objects.
+Let $X \in D^b(\mathcal{A})$. Let $b \in \mathbf{Z}$ with
+$H^i(X) = 0$ for $i > b$. Then
+there exists a map $N[-b] \to X$ such that the induced map
+$N \to H^b(X)$ is surjective and $\Hom(H^b(X), N) = 0$.
+\end{lemma}
+
+\begin{proof}
+Using the truncation functors we can represent $X$ by a complex
+$A^a \to A^{a + 1} \to \ldots \to A^b$ of objects of $\mathcal{A}$.
+Choose $N$ in $\mathcal{A}$ such that there exists a surjection
+$t : N \to A^b$ and such that $\Hom(A^b, N) = 0$. Then the surjection $t$
+defines a map $N[-b] \to X$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-good-map-zero}
+Let $\mathcal{A}$ be an abelian category with enough negative objects.
+Let $f : X \to X'$ be a morphism of $D^b(\mathcal{A})$. Let $b \in \mathbf{Z}$
+such that $H^i(X) = 0$ for $i > b$ and $H^i(X') = 0$ for $i \geq b$.
+Then there exists a map $N[-b] \to X$ such that the induced map
+$N \to H^b(X)$ is surjective, such that $\Hom(H^b(X), N) = 0$, and
+such that the composition $N[-b] \to X \to X'$ is zero.
+\end{lemma}
+
+\begin{proof}
+We can represent $f$ by a map $f^\bullet : A^\bullet \to B^\bullet$
+of bounded complexes of objects of $\mathcal{A}$, see for example
+Derived Categories, Lemma \ref{derived-lemma-bounded-derived}.
+Consider the object
+$$
+C = \Ker(A^b \to A^{b + 1}) \times_{\Ker(B^b \to B^{b + 1})} B^{b - 1}
+$$
+of $\mathcal{A}$. Since $H^b(B^\bullet) = 0$ we see that
+$C \to H^b(A^\bullet)$ is surjective. On the other hand, the map
+$C \to A^b \to B^b$ is the same as the map $C \to B^{b - 1} \to B^b$
+and hence the composition $C[-b] \to X \to X'$ is zero.
+Since $\mathcal{A}$ has enough negative objects, we can find an object $N$
+which has a surjection $N \to C \oplus H^b(X)$ such that
+$\Hom(C \oplus H^b(X), N) = 0$. Then $N$ together with the map
+$N[-b] \to X$ is a solution to the problem posed by the lemma.
+\end{proof}
+
+\noindent
+We encourage the reader to read the original
+\cite[Proposition 2.16]{Orlov-K3} for the marvellous ideas
+that go into the proof of the following proposition.
+
+\begin{proposition}
+\label{proposition-siblings-isomorphic}
+\begin{reference}
+\cite[Proposition 2.16]{Orlov-K3}; the fact that we do not need
+to assume vanishing of $\Ext^q(N, X)$ for $q > 0$ in the definition
+of negative objects above is due to \cite{Canonaco-Stellari}.
+\end{reference}
+Let $F$ and $F'$ be siblings as in Definition \ref{definition-siblings}.
+Assume that $F$ is fully faithful and that $\mathcal{A}$ has enough
+negative objects (see above). Then $F$ and $F'$ are isomorphic functors.
+\end{proposition}
+
+\begin{proof}
+By part (2) of Definition \ref{definition-siblings} the image of the functor
+$F'$ is contained in the essential image of the functor $F$. Hence
+the functor $H = F^{-1} \circ F'$ is a sibling of the identity functor.
+This reduces us to the case described in the next paragraph.
+
+\medskip\noindent
+Let $\mathcal{D} = D^b(\mathcal{A})$. We have to show a sibling
+$F : \mathcal{D} \to \mathcal{D}$ of the identity functor is
+isomorphic to the identity functor. Given an object $X$ of $\mathcal{D}$
+let us say $X$ has {\it width} $w = w(X)$ if $w \geq 0$ is minimal
+such that there exists an integer $a \in \mathbf{Z}$ with $H^i(X) = 0$
+for $i \not \in [a, a + w - 1]$. Since $F$ is a sibling of the identity
and since $F \circ [n] = [n] \circ F$ we are already given isomorphisms
+$$
+c_X : X \to F(X)
+$$
+for $w(X) \leq 1$ compatible with shifts. Moreover, if $X = A[-a]$ and
+$X' = A'[-a]$ for some $A, A' \in \Ob(\mathcal{A})$ then for any morphism
+$f : X \to X'$ the diagram
+\begin{equation}
+\label{equation-to-show}
+\vcenter{
+\xymatrix{
+X \ar[d]_{c_X} \ar[r]_f &
+X' \ar[d]^{c_{X'}} \\
+F(X) \ar[r]^{F(f)} &
+F(X')
+}
+}
+\end{equation}
+is commutative.
+
+\medskip\noindent
+Next, let us show that for any morphism $f : X \to X'$ with
+$w(X), w(X') \leq 1$ the diagram (\ref{equation-to-show}) commutes.
+If $X$ or $X'$ is zero, this is clear. If not then we can write
+$X = A[-a]$ and $X' = A'[-a']$ for unique $A, A'$ in $\mathcal{A}$
+and $a, a' \in \mathbf{Z}$. The case $a = a'$ was discussed above.
+If $a' > a$, then $f = 0$ (Derived Categories, Lemma
+\ref{derived-lemma-negative-exts}) and the result is clear.
+If $a' < a$ then $f$ corresponds to an element
+$\xi \in \Ext^q(A, A')$ with $q = a - a'$. Using Yoneda extensions, see
+Derived Categories, Section \ref{derived-section-ext}, we can find
+$A = A_0, A_1, \ldots, A_{q - 1}, A_q = A' \in \Ob(\mathcal{A})$ and
+elements
+$$
+\xi_i \in \Ext^1(A_{i - 1}, A_i)
+$$
+such that $\xi$ is the composition $\xi_q \circ \ldots \circ \xi_1$.
+In other words, setting $X_i = A_i[-a + i]$
+we obtain morphisms
+$$
+X = X_0 \xrightarrow{f_1} X_1 \to \ldots \to X_{q - 1}
+\xrightarrow{f_q} X_q = X'
+$$
whose composition is $f$. Since the commutativity of (\ref{equation-to-show})
+for $f_1, \ldots, f_q$ implies it for $f$, this reduces us to the case $q = 1$.
+In this case after shifting we may assume we have a distinguished triangle
+$$
+A' \to E \to A \xrightarrow{f} A'[1]
+$$
+Observe that $E$ is an object of $\mathcal{A}$. Consider the following
+diagram
+$$
+\xymatrix{
+E \ar[d]_{c_E} \ar[r] &
+A \ar[d]_{c_A} \ar[r]_f &
+A'[1] \ar[d]^{c_{A'}[1]}
+\ar@{..>}@<-1ex>[d]_\gamma \ar@{..>}[ld]^\epsilon \ar[r] &
+E[1] \ar[d]^{c_E[1]} \\
+F(E) \ar[r] &
+F(A) \ar[r]^{F(f)} &
+F(A')[1] \ar[r] &
+F(E)[1]
+}
+$$
+whose rows are distinguished triangles.
+The square on the right commutes already but we don't yet know that
+the middle square does. By the axioms of a triangulated category
+we can find a morphism $\gamma$ which does make the diagram commute.
+Then $\gamma - c_{A'}[1]$ composed with
+$F(A')[1] \to F(E)[1]$ is zero hence we
+can find $\epsilon : A'[1] \to F(A)$ such that
+$\gamma - c_{A'}[1] = F(f) \circ \epsilon$. However, any arrow
+$A'[1] \to F(A)$ is zero as it is a negative ext class
+between objects of $\mathcal{A}$. Hence $\gamma = c_{A'}[1]$
+and we conclude the middle square commutes too which is what we
+wanted to show.
+
+\medskip\noindent
+To finish the proof we are going to argue by induction on $w$
+that there exist isomorphisms $c_X : X \to F(X)$ for all
+$X$ with $w(X) \leq w$ compatible with all morphisms between
+such objects. The base case $w = 1$ was shown above. Assume
+we know the result for some $w \geq 1$.
+
+\medskip\noindent
+Let $X$ be an object with $w(X) = w + 1$. Pick $a \in \mathbf{Z}$ with
+$H^i(X) = 0$ for $i \not \in [a, a + w]$. Set $b = a + w$ so that
+$H^b(X)$ is nonzero. Choose $N[-b] \to X$ as in Lemma \ref{lemma-good-map}.
Choose a distinguished triangle
+$$
+N[-b] \to X \to Y \to N[-b + 1]
+$$
+Computing the long exact cohomology sequence we find
+$w(Y) \leq w$. Hence by induction we find the solid arrows
+in the following diagram
+$$
+\xymatrix{
+N[-b] \ar[r] \ar[d]_{c_N[-b]} &
+X \ar[r] \ar@{..>}[d]_{c_{N[-b] \to X}} &
+Y \ar[r] \ar[d]^{c_Y} &
+N[-b + 1] \ar[d]^{c_N[-b + 1]} \\
+F(N)[-b] \ar[r] &
+F(X) \ar[r] &
+F(Y) \ar[r] &
+F(N)[-b + 1]
+}
+$$
+We obtain the dotted arrow $c_{N[-b] \to X}$.
+By Derived Categories, Lemma \ref{derived-lemma-uniqueness-third-arrow}
+the dotted arrow is unique because $\Hom(X, F(N)[-b]) \cong \Hom(X, N[-b]) = 0$
+by our choice of $N$. In fact, $c_{N[-b] \to X}$ is the unique dotted
+arrow making the square with vertices $X, Y, F(X), F(Y)$ commute.
+
+\medskip\noindent
+Let $N'[-b] \to X$ be another map as in Lemma \ref{lemma-good-map}
+and let us prove that $c_{N[-b] \to X} = c_{N'[-b] \to X}$.
+Observe that the map $(N \oplus N')[-b] \to X$ also satisfies the
+conditions of Lemma \ref{lemma-good-map}.
+Thus we may assume $N'[-b] \to X$ factors
+as $N'[-b] \to N[-b] \to X$ for some morphism $N' \to N$.
+Choose distinguished triangles $N[-b] \to X \to Y \to N[-b + 1]$ and
+$N'[-b] \to X \to Y' \to N'[-b + 1]$. By axiom TR3 we can find
a morphism $g : Y' \to Y$ which together with $\text{id}_X$ and $N' \to N$
+forms a morphism of triangles. Since we have
+(\ref{equation-to-show}) for $g$ we conclude that
+$$
+(F(X) \to F(Y)) \circ c_{N'[-b] \to X} = (F(X) \to F(Y)) \circ c_{N[-b] \to X}
+$$
+The uniqueness of $c_{N[-b] \to X}$ pointed out in the construction
+above now shows that $c_{N'[-b] \to X} = c_{N[-b] \to X}$.
+
+\medskip\noindent
+Thus we can now define for $X$ of width $w + 1$ the isomorphism
+$c_X : X \to F(X)$ as the common value of the maps $c_{N[-b] \to X}$
+where $N[-b] \to X$ is as in Lemma \ref{lemma-good-map}. To finish
+the proof, we have to show that the diagrams (\ref{equation-to-show})
+commute for all morphisms $f : X \to X'$ between objects with $w(X) \leq w + 1$
+and $w(X') \leq w + 1$. Choose $a \leq b \leq a + w$ such that
+$H^i(X) = 0$ for $i \not \in [a, b]$ and
+$a' \leq b' \leq a' + w$ such that $H^i(X') = 0$ for
+$i \not \in [a', b']$. We will use induction on
+$(b' - a') + (b - a)$ to show the claim. (The base case
+is when this number is zero which is OK because $w \geq 1$.)
+We distinguish two cases.
+
+\medskip\noindent
+Case I: $b' < b$. In this case, by Lemma \ref{lemma-good-map-zero}
+we may choose $N[-b] \to X$ as in Lemma \ref{lemma-good-map}
+such that the composition $N[-b] \to X \to X'$ is zero.
Choose a distinguished triangle $N[-b] \to X \to Y \to N[-b + 1]$. Since
+$N[-b] \to X'$ is zero, we find that $f$ factors
+as $X \to Y \to X'$. Since $H^i(Y)$ is nonzero only for $i \in [a, b - 1]$
+we see by induction that (\ref{equation-to-show}) commutes for
+$Y \to X'$. The diagram (\ref{equation-to-show}) commutes for
+$X \to Y$ by construction if $w(X) = w + 1$ and by our first
+induction hypothesis if $w(X) \leq w$.
+Hence (\ref{equation-to-show}) commutes for $f$.
+
+\medskip\noindent
+Case II: $b' \geq b$. In this case we choose $N'[-b'] \to X'$
+as in Lemma \ref{lemma-good-map}.
+We may also assume that $\Hom(H^{b'}(X), N') = 0$ (this is
+relevant only if $b' = b$), for example because we can
+replace $N'$ by an object $N''$ which surjects onto $N' \oplus H^{b'}(X)$
+and such that $\Hom(N' \oplus H^{b'}(X), N'') = 0$.
+We choose a distinguished triangle
+$N'[-b'] \to X' \to Y' \to N'[-b' + 1]$. Since
+$\Hom(X, X') \to \Hom(X, Y')$ is injective by our choice of $N'$
+(details omitted) the same is true for
+$\Hom(X, F(X')) \to \Hom(X, F(Y'))$.
+Hence it suffices in this case to check that
+(\ref{equation-to-show}) commutes for the composition $X \to Y'$
+of the morphisms $X \to X' \to Y'$.
+Since $H^i(Y')$ is nonzero only for $i \in [a', b' - 1]$
+we conclude by induction hypothesis.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Deducing fully faithfulness}
+\label{section-get-fully-faithful}
+
+\noindent
It will be useful for us to know when a functor is fully faithful,
and hence we offer the following variant of \cite[Lemma 2.15]{Orlov-K3}.
+
+\begin{lemma}
+\label{lemma-get-fully-faithful}
+\begin{reference}
+Variant of \cite[Lemma 2.15]{Orlov-K3}
+\end{reference}
+Let $F : \mathcal{D} \to \mathcal{D}'$ be an exact functor of
+triangulated categories. Let $S \subset \Ob(\mathcal{D})$ be
+a set of objects. Assume
+\begin{enumerate}
+\item $F$ has both right and left adjoints,
+\item for $K \in \mathcal{D}$ if $\Hom(E, K[i]) = 0$ for all
+$E \in S$ and $i \in \mathbf{Z}$ then $K = 0$,
+\item for $K \in \mathcal{D}$ if $\Hom(K, E[i]) = 0$ for all
+$E \in S$ and $i \in \mathbf{Z}$ then $K = 0$,
+\item the map $\Hom(E, E'[i]) \to \Hom(F(E), F(E')[i])$ induced by $F$
+is bijective for all $E, E' \in S$ and $i \in \mathbf{Z}$.
+\end{enumerate}
+Then $F$ is fully faithful.
+\end{lemma}
+
+\begin{proof}
+Denote $F_r$ and $F_l$ the right and left adjoints of $F$. For
+$E \in S$ choose a distinguished triangle
+$$
+E \to F_r(F(E)) \to C \to E[1]
+$$
+where the first arrow is the unit of the adjunction. For $E' \in S$ we have
+$$
+\Hom(E', F_r(F(E))[i]) = \Hom(F(E'), F(E)[i]) = \Hom(E', E[i])
+$$
+The last equality holds by assumption (4).
+Hence applying the homological functor $\Hom(E', -)$
+(Derived Categories, Lemma \ref{derived-lemma-representable-homological})
+to the distinguished triangle above we conclude that $\Hom(E', C[i]) = 0$
+for all $i \in \mathbf{Z}$ and $E' \in S$. By assumption (2) we conclude
+that $C = 0$ and $E = F_r(F(E))$.
+
+\medskip\noindent
+For $K \in \Ob(\mathcal{D})$ choose a distinguished triangle
+$$
+F_l(F(K)) \to K \to C \to F_l(F(K))[1]
+$$
+where the first arrow is the counit of the adjunction. For $E \in S$
+we have
+$$
+\Hom(F_l(F(K)), E[i]) = \Hom(F(K), F(E)[i]) =
+\Hom(K, F_r(F(E))[i]) = \Hom(K, E[i])
+$$
+where the last equality holds by the result of the first paragraph.
+Thus we conclude as before that $\Hom(C, E[i]) = 0$ for all $E \in S$
+and $i \in \mathbf{Z}$. Hence $C = 0$ by assumption (3).
+Thus $F$ is fully faithful by Categories, Lemma
+\ref{categories-lemma-adjoint-fully-faithful}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-duality-at-point}
+Let $k$ be a field. Let $X$ be a scheme of finite type over $k$ which
+is regular. Let $x \in X$ be a closed point. For a coherent
+$\mathcal{O}_X$-module $\mathcal{F}$ supported at $x$ choose
+a coherent $\mathcal{O}_X$-module $\mathcal{F}'$ supported at $x$
+such that $\mathcal{F}_x$ and $\mathcal{F}'_x$ are Matlis dual.
+Then there is an isomorphism
+$$
+\Hom_X(\mathcal{F}, M) =
+H^0(X, M \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}'[-d_x])
+$$
where $d_x = \dim(\mathcal{O}_{X, x})$,
functorial in $M$ in $D_{perf}(\mathcal{O}_X)$.
+\end{lemma}
+
+\begin{proof}
+Since $\mathcal{F}$ is supported at $x$ we have
+$$
+\Hom_X(\mathcal{F}, M) =
+\Hom_{\mathcal{O}_{X, x}}(\mathcal{F}_x, M_x)
+$$
+and similarly we have
+$$
+H^0(X, M \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{F}'[-d_x]) =
+\text{Tor}^{\mathcal{O}_{X, x}}_{d_x}(M_x, \mathcal{F}'_x)
+$$
+Thus it suffices to show that given a Noetherian regular local ring $A$
+of dimension $d$ and a finite length $A$-module $N$, if
+$N'$ is the Matlis dual to $N$, then there exists a functorial isomorphism
+$$
+\Hom_A(N, K) = \text{Tor}^A_d(K, N')
+$$
+for $K$ in $D_{perf}(A)$. We can write the left hand side as
+$H^0(R\Hom_A(N, A) \otimes_A^\mathbf{L} K)$ by
+More on Algebra, Lemma \ref{more-algebra-lemma-dual-perfect-complex}
+and the fact that $N$ determines a perfect object of $D(A)$.
+Hence the formula holds because
+$$
+R\Hom_A(N, A) = R\Hom_A(N, A[d])[-d] = N'[-d]
+$$
+by Dualizing Complexes, Lemma \ref{dualizing-lemma-dualizing-finite-length}
+and the fact that $A[d]$ is a normalized dualizing complex over $A$
+($A$ is Gorenstein by
+Dualizing Complexes, Lemma \ref{dualizing-lemma-regular-gorenstein}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-orthogonal-point-sheaf}
+Let $k$ be a field. Let $X$ be a scheme of finite type over $k$ which
+is regular. Let $x \in X$ be a closed point and denote $\mathcal{O}_x$
+the skyscraper sheaf at $x$ with value $\kappa(x)$. Let $K$ in
+$D_{perf}(\mathcal{O}_X)$.
+\begin{enumerate}
+\item If $\Ext^i_X(\mathcal{O}_x, K) = 0$ then there exists an open
+neighbourhood $U$ of $x$ such that $H^{i - d_x}(K)|_U = 0$ where
+$d_x = \dim(\mathcal{O}_{X, x})$.
+\item If $\Hom_X(\mathcal{O}_x, K[i]) = 0$ for all
+$i \in \mathbf{Z}$, then $K$ is zero in an open neighbourhood of $x$.
+\item If $\Ext^i_X(K, \mathcal{O}_x) = 0$ then there exists an open
+neighbourhood $U$ of $x$ such that $H^i(K^\vee)|_U = 0$.
+\item If $\Hom_X(K, \mathcal{O}_x[i]) = 0$ for all
+$i \in \mathbf{Z}$, then $K$ is zero in an open neighbourhood of $x$.
+\item If $H^i(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_x) = 0$
+then there exists an open neighbourhood $U$ of $x$ such that
+$H^i(K)|_U = 0$.
+\item If $H^i(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_x) = 0$
+for $i \in \mathbf{Z}$ then $K$ is zero in an
+open neighbourhood of $x$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Observe that $H^i(X, K \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_x)$
+is equal to $K_x \otimes_{\mathcal{O}_{X, x}}^\mathbf{L} \kappa(x)$.
+Hence part (5) follows from More on Algebra, Lemma
+\ref{more-algebra-lemma-cut-complex-in-two}.
+Part (6) follows from part (5).
+Part (1) follows from part (5), Lemma \ref{lemma-duality-at-point}, and the
+fact that the Matlis dual of $\kappa(x)$ is $\kappa(x)$.
+Part (2) follows from part (1).
+Part (3) follows from part (5) and the fact that
+$\Ext^i(K, \mathcal{O}_x) =
+H^i(X, K^\vee \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{O}_x)$ by
+Cohomology, Lemma \ref{cohomology-lemma-dual-perfect-complex}.
+Part (4) follows from part (3) and the fact that $K \cong (K^\vee)^\vee$
+by the lemma just cited.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-into-point-sheaf}
+Let $X$ be a Noetherian scheme. Let $x \in X$ be a closed point and
+denote $\mathcal{O}_x$ the skyscraper sheaf at $x$ with value $\kappa(x)$.
+Let $K$ in $D^b_{\textit{Coh}}(\mathcal{O}_X)$. Let $b \in \mathbf{Z}$.
+The following are equivalent
+\begin{enumerate}
+\item $H^i(K)_x = 0$ for all $i > b$ and
+\item $\Hom_X(K, \mathcal{O}_x[-i]) = 0$ for all $i > b$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Consider the complex $K_x$ in $D^b_{\textit{Coh}}(\mathcal{O}_{X, x})$.
There exists an integer $b_x \in \mathbf{Z}$ such that $K_x$
+can be represented by a bounded above complex
+$$
+\ldots \to
+\mathcal{O}_{X, x}^{\oplus n_{b_x - 2}} \to
+\mathcal{O}_{X, x}^{\oplus n_{b_x - 1}} \to
+\mathcal{O}_{X, x}^{\oplus n_{b_x}} \to 0 \to \ldots
+$$
+with $\mathcal{O}_{X, x}^{\oplus n_i}$ sitting in degree $i$
+where all the transition maps are given by matrices whose
+coefficients are in $\mathfrak m_x$. See
+More on Algebra, Lemma
+\ref{more-algebra-lemma-lift-pseudo-coherent-from-residue-field}.
+The result follows easily from this (and the equivalent
+conditions hold if and only if $b \geq b_x$).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-get-fully-faithful-geometric}
+Let $k$ be a field. Let $X$ and $Y$ be proper schemes over $k$.
+Assume $X$ is regular. Then a $k$-linear exact functor
+$F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+is fully faithful if and only if
+for any closed points $x, x' \in X$ the maps
+$$
+F : \Ext^i_X(\mathcal{O}_x, \mathcal{O}_{x'})
+\longrightarrow
+\Ext^i_Y(F(\mathcal{O}_x), F(\mathcal{O}_{x'}))
+$$
+are isomorphisms for all $i \in \mathbf{Z}$.
+Here $\mathcal{O}_x$ is the skyscraper sheaf at $x$ with value $\kappa(x)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-always-right-adjoints} the functor $F$
+has both a left and a right adjoint. Thus we may apply the criterion
+of Lemma \ref{lemma-get-fully-faithful}
+because assumptions (2) and (3) of that lemma
+follow from Lemma \ref{lemma-orthogonal-point-sheaf}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noah-pre}
+\begin{reference}
+Email from Noah Olander of Jun 9, 2020
+\end{reference}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ which is regular.
+Let $F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_X)$
+be a $k$-linear exact functor. Assume for every coherent
+$\mathcal{O}_X$-module $\mathcal{F}$ with $\dim(\text{Supp}(\mathcal{F})) = 0$
+there is an isomorphism $\mathcal{F} \cong F(\mathcal{F})$.
+Then $F$ is fully faithful.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-get-fully-faithful-geometric} it suffices to show
+that the maps
+$$
+F : \Ext^i_X(\mathcal{O}_x, \mathcal{O}_{x'})
+\longrightarrow
+\Ext^i_X(F(\mathcal{O}_x), F(\mathcal{O}_{x'}))
+$$
+are isomorphisms for all $i \in \mathbf{Z}$ and all closed points
+$x, x' \in X$. By assumption, the source and the target are isomorphic.
+If $x \not = x'$, then both sides are zero and the result is true.
+If $x = x'$, then it suffices to prove that the map is either injective
+or surjective. For $i < 0$ both sides are zero and the result is true.
+For $i = 0$ any nonzero map $\alpha : \mathcal{O}_x \to \mathcal{O}_x$ of
+$\mathcal{O}_X$-modules is an isomorphism. Hence $F(\alpha)$ is an
isomorphism too and so $F(\alpha)$ is nonzero. This proves the result
for $i = 0$.
+For $i = 1$ a nonzero element $\xi$ in $\Ext^1(\mathcal{O}_x, \mathcal{O}_x)$
+corresponds to a nonsplit short exact sequence
+$$
+0 \to \mathcal{O}_x \to \mathcal{F} \to \mathcal{O}_x \to 0
+$$
+Since $F(\mathcal{F}) \cong \mathcal{F}$ we see that $F(\mathcal{F})$
+is a nonsplit extension of $\mathcal{O}_x$ by $\mathcal{O}_x$ as well.
+Since $\mathcal{O}_x \cong F(\mathcal{O}_x)$ is a simple
+$\mathcal{O}_X$-module and $\mathcal{F} \cong F(\mathcal{F})$ has
+length $2$, we see that in the distinguished triangle
+$$
+F(\mathcal{O}_x) \to F(\mathcal{F}) \to F(\mathcal{O}_x)
+\xrightarrow{F(\xi)} F(\mathcal{O}_x)[1]
+$$
+the first two arrows must form a short exact sequence which must be
+isomorphic to the above short exact sequence and hence is nonsplit.
+It follows that $F(\xi)$ is nonzero and we conclude for $i = 1$.
+For $i > 1$ composition of ext classes defines a surjection
+$$
+\Ext^1(F(\mathcal{O}_x), F(\mathcal{O}_x)) \otimes \ldots \otimes
+\Ext^1(F(\mathcal{O}_x), F(\mathcal{O}_x))
+\longrightarrow
+\Ext^i(F(\mathcal{O}_x), F(\mathcal{O}_x))
+$$
+See Duality for Schemes, Lemma \ref{duality-lemma-regular-ideal-ext}.
Hence surjectivity in degree $1$ implies surjectivity in all degrees $i > 1$.
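In more detail (a sketch of this last step, not part of the cited lemma):
since $F$ is an exact functor, it is compatible with shifts and with
composition of morphisms in the derived category, so for $i > 1$ there is
a commutative diagram
$$
\xymatrix{
\Ext^1(\mathcal{O}_x, \mathcal{O}_x)^{\otimes i} \ar[r] \ar[d] &
\Ext^i(\mathcal{O}_x, \mathcal{O}_x) \ar[d] \\
\Ext^1(F(\mathcal{O}_x), F(\mathcal{O}_x))^{\otimes i} \ar[r] &
\Ext^i(F(\mathcal{O}_x), F(\mathcal{O}_x))
}
$$
whose horizontal arrows are composition of Ext classes and whose vertical
arrows are induced by $F$. The top horizontal arrow is surjective by the
lemma cited above applied to $\mathcal{O}_x$, and the left vertical arrow
is bijective: it is injective by the case $i = 1$, and its source and
target are isomorphic finite dimensional $k$-vector spaces. Hence the
right vertical arrow is surjective.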
+This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Special functors}
+\label{section-special-functors}
+
+\noindent
+In this section we prove some results on functors of a special type
+that we will use later in this chapter.
+
+\begin{definition}
+\label{definition-siblings-geometric}
+Let $k$ be a field. Let $X$, $Y$ be finite type schemes over $k$.
+Recall that
+$D^b_{\textit{Coh}}(\mathcal{O}_X) = D^b(\textit{Coh}(\mathcal{O}_X))$
+by Derived Categories of Schemes, Proposition \ref{perfect-proposition-DCoh}.
+We say two $k$-linear exact functors
+$$
+F, F' :
+D^b_{\textit{Coh}}(\mathcal{O}_X) = D^b(\textit{Coh}(\mathcal{O}_X))
+\longrightarrow
+D^b_{\textit{Coh}}(\mathcal{O}_Y)
+$$
+are {\it siblings}, or we say $F'$ is a {\it sibling} of $F$ if $F$ and $F'$
+are siblings in the sense of Definition \ref{definition-siblings}
+with abelian category being $\textit{Coh}(\mathcal{O}_X)$.
+If $X$ is regular then
+$D_{perf}(\mathcal{O}_X) = D^b_{\textit{Coh}}(\mathcal{O}_X)$ by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-on-noetherian}
+and we use the same terminology for $k$-linear exact functors
+$F, F' : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-exact-functor-preserving-Coh}
+Let $k$ be a field. Let $X$, $Y$ be finite type schemes over $k$ with
+$X$ separated. Let
+$F : D^b_{\textit{Coh}}(\mathcal{O}_X) \to D^b_{\textit{Coh}}(\mathcal{O}_Y)$
+be a $k$-linear exact functor sending
+$\textit{Coh}(\mathcal{O}_X) \subset D^b_{\textit{Coh}}(\mathcal{O}_X)$
+into
+$\textit{Coh}(\mathcal{O}_Y) \subset D^b_{\textit{Coh}}(\mathcal{O}_Y)$.
+Then there exists a Fourier-Mukai functor
+$F' : D^b_{\textit{Coh}}(\mathcal{O}_X) \to D^b_{\textit{Coh}}(\mathcal{O}_Y)$
+whose kernel is a coherent $\mathcal{O}_{X \times Y}$-module $\mathcal{K}$
flat over $X$ and with support finite over $Y$, and such that
$F'$ is a sibling of $F$.
+\end{lemma}
+
+\begin{proof}
+Denote $H : \textit{Coh}(\mathcal{O}_X) \to \textit{Coh}(\mathcal{O}_Y)$
+the restriction of $F$. Since $F$ is an exact functor of triangulated
+categories, we see that $H$ is an exact functor of abelian categories.
+Of course $H$ is $k$-linear as $F$ is. By
+Functors and Morphisms, Lemma \ref{functors-lemma-functor-coherent-over-field}
+we obtain a coherent $\mathcal{O}_{X \times Y}$-module
+$\mathcal{K}$ which is flat over $X$ and has support finite over $Y$.
+Let $F'$ be the Fourier-Mukai functor defined using $\mathcal{K}$
so that $F'$ restricts to $H$ on $\textit{Coh}(\mathcal{O}_X)$.
+The functor $F'$ sends $D^b_{\textit{Coh}}(\mathcal{O}_X)$
+into $D^b_{\textit{Coh}}(\mathcal{O}_Y)$ by
+Lemma \ref{lemma-fourier-mukai-Coh}.
+Observe that $F$ and $F'$ satisfy the first and second
+condition of Lemma \ref{lemma-sibling-fully-faithful} and hence are siblings.
+\end{proof}
+
+\begin{remark}
+\label{remark-difficult}
+If $F, F' : D^b_{\textit{Coh}}(\mathcal{O}_X) \to \mathcal{D}$ are siblings, $F$
+is fully faithful, and $X$ is reduced and projective over $k$ then
+$F \cong F'$; this follows from
+Proposition \ref{proposition-siblings-isomorphic} via the argument
+given in the proof of Theorem \ref{theorem-fully-faithful}.
+However, in general we do not know whether siblings are isomorphic.
+Even in the situation of Lemma \ref{lemma-exact-functor-preserving-Coh}
+it seems difficult to prove that the siblings $F$ and $F'$
+are isomorphic functors. If $X$ is smooth and proper over $k$
+and $F$ is fully faithful, then $F \cong F'$ as is shown in
+\cite{Noah}.
+If you have a proof or a counter example in more general situations,
+please email
+\href{mailto:stacks.project@gmail.com}{stacks.project@gmail.com}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-two-functors-pre}
+Let $k$ be a field. Let $X$, $Y$ be proper schemes over $k$. Assume
+$X$ is regular. Let
+$F, G : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+be $k$-linear exact functors such that
+\begin{enumerate}
+\item $F(\mathcal{F}) \cong G(\mathcal{F})$ for any coherent
+$\mathcal{O}_X$-module $\mathcal{F}$ with $\dim(\text{Supp}(\mathcal{F})) = 0$,
+\item $F$ is fully faithful.
+\end{enumerate}
+Then the essential image of $G$ is contained in the essential
+image of $F$.
+\end{lemma}
+
+\begin{proof}
Recall that $F$ and $G$ both have left and right adjoints, see
+Lemma \ref{lemma-always-right-adjoints}. In particular
+the essential image $\mathcal{A} \subset D_{perf}(\mathcal{O}_Y)$ of $F$
+satisfies the equivalent conditions of
+Derived Categories, Lemma \ref{derived-lemma-right-adjoint}.
+We claim that $G$ factors through $\mathcal{A}$.
+Since $\mathcal{A} = {}^\perp(\mathcal{A}^\perp)$ by
+Derived Categories, Lemma \ref{derived-lemma-right-adjoint}
+it suffices to show that $\Hom_Y(G(M), N) = 0$ for
+all $M$ in $D_{perf}(\mathcal{O}_X)$ and $N \in \mathcal{A}^\perp$.
+We have
+$$
+\Hom_Y(G(M), N) = \Hom_X(M, G_r(N))
+$$
+where $G_r$ is the right adjoint to $G$. Thus it suffices to prove
+that $G_r(N) = 0$. Since
+$G(\mathcal{F}) \cong F(\mathcal{F})$ for $\mathcal{F}$ as in (1)
+we see that
+$$
+\Hom_X(\mathcal{F}, G_r(N)) =
+\Hom_Y(G(\mathcal{F}), N) =
+\Hom_Y(F(\mathcal{F}), N) = 0
+$$
+as $N$ is in the right orthogonal to the essential image $\mathcal{A}$ of $F$.
+Of course, the same vanishing holds for $\Hom_X(\mathcal{F}, G_r(N)[i])$
+for any $i \in \mathbf{Z}$. Thus $G_r(N) = 0$ by
+Lemma \ref{lemma-orthogonal-point-sheaf} and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-noah}
+\begin{reference}
+Email from Noah Olander of Jun 8, 2020
+\end{reference}
+Let $k$ be a field. Let $X$ be a proper scheme over $k$ which is regular.
+Let $F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_X)$
+be a $k$-linear exact functor. Assume for every coherent
+$\mathcal{O}_X$-module $\mathcal{F}$ with $\dim(\text{Supp}(\mathcal{F})) = 0$
+there is an isomorphism $\mathcal{F} \cong F(\mathcal{F})$.
+Then there exists an automorphism $f : X \to X$ over $k$
+which induces the identity on the
+underlying topological space\footnote{This often forces $f$
+to be the identity, see Varieties, Lemma \ref{varieties-lemma-automorphism}.}
+and an invertible $\mathcal{O}_X$-module $\mathcal{L}$
+such that $F$ and $F'(M) = f^*M \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{L}$
+are siblings.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-noah-pre} the functor $F$ is fully faithful.
+By Lemma \ref{lemma-two-functors-pre} the essential image of
+the identity functor is contained in the essential image of $F$, i.e.,
+we see that $F$ is essentially surjective. Thus $F$ is an equivalence.
+Observe that the quasi-inverse $F^{-1}$ satisfies the same assumptions
+as $F$.
+
+\medskip\noindent
+Let $M \in D_{perf}(\mathcal{O}_X)$ and say $H^i(M) = 0$ for $i > b$.
+Since $F$ is fully faithful, we see that
+$$
+\Hom_X(M, \mathcal{O}_x[-i]) =
+\Hom_X(F(M), F(\mathcal{O}_x)[-i]) \cong
+\Hom_X(F(M), \mathcal{O}_x[-i])
+$$
+for any $i \in \mathbf{Z}$ for any closed point $x$ of $X$.
+Thus by Lemma \ref{lemma-hom-into-point-sheaf} we see that $F(M)$
+has vanishing cohomology sheaves in degrees $> b$.
+
+\medskip\noindent
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module. By
+the above $F(\mathcal{F})$ has nonzero cohomology sheaves
+only in degrees $\leq 0$.
+Set $\mathcal{G} = H^0(F(\mathcal{F}))$. Choose a distinguished
+triangle
+$$
+K \to F(\mathcal{F}) \to \mathcal{G} \to K[1]
+$$
+Then $K$ has nonvanishing cohomology sheaves only in
+degrees $\leq -1$.
+Applying $F^{-1}$ we obtain a distinguished triangle
+$$
F^{-1}(K) \to \mathcal{F} \to F^{-1}(\mathcal{G}) \to F^{-1}(K)[1]
+$$
+Since $F^{-1}(K)$ has nonvanishing cohomology sheaves only
+in degrees $\leq -1$ (by the previous paragraph applied to $F^{-1}$)
+we see that the arrow $F^{-1}(K) \to \mathcal{F}$ is zero
+(Derived Categories, Lemma \ref{derived-lemma-negative-exts}).
+Hence $K \to F(\mathcal{F})$ is zero, which implies
+that $F(\mathcal{F}) = \mathcal{G}$ by our choice of the
+first distinguished triangle.
+
+\medskip\noindent
+From the preceding paragraph, we deduce that $F$ preserves
+$\textit{Coh}(\mathcal{O}_X)$ and indeed defines an equivalence
+$H : \textit{Coh}(\mathcal{O}_X) \to \textit{Coh}(\mathcal{O}_X)$.
+By Functors and Morphisms, Lemma
+\ref{functors-lemma-equivalence-coherent-over-field}
+we get an automorphism $f : X \to X$ over $k$
+and an invertible $\mathcal{O}_X$-module $\mathcal{L}$
+such that $H(\mathcal{F}) = f^*\mathcal{F} \otimes \mathcal{L}$.
+Set $F'(M) = f^*M \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{L}$.
+Using Lemma \ref{lemma-sibling-fully-faithful}
+we see that $F$ and $F'$ are siblings.
+To see that $f$ is the identity on the underlying topological
+space of $X$, we use that $F(\mathcal{O}_x) \cong \mathcal{O}_x$
+and that the support of $\mathcal{O}_x$ is $\{x\}$.
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-two-functors}
+Let $k$ be a field. Let $X$, $Y$ be proper schemes over $k$.
Assume $X$ is regular.
+Let $F, G : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+be $k$-linear exact functors such that
+\begin{enumerate}
+\item $F(\mathcal{F}) \cong G(\mathcal{F})$ for any coherent
+$\mathcal{O}_X$-module $\mathcal{F}$ with $\dim(\text{Supp}(\mathcal{F})) = 0$,
+\item $F$ is fully faithful, and
+\item $G$ is a Fourier-Mukai functor whose kernel is in
+$D_{perf}(\mathcal{O}_{X \times Y})$.
+\end{enumerate}
+Then there exists a Fourier-Mukai functor
+$F' : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+whose kernel is in $D_{perf}(\mathcal{O}_{X \times Y})$
+such that $F$ and $F'$ are siblings.
+\end{lemma}
+
+\begin{proof}
+The essential image of $G$ is contained in the essential
+image of $F$ by Lemma \ref{lemma-two-functors-pre}.
+Consider the functor $H = F^{-1} \circ G$
+which makes sense as $F$ is fully faithful.
+By Lemma \ref{lemma-noah} we obtain an automorphism $f : X \to X$
+and an invertible $\mathcal{O}_X$-module $\mathcal{L}$ such that
+the functor $H' : K \mapsto f^*K \otimes \mathcal{L}$
+is a sibling of $H$. In particular
+$H$ is an auto-equivalence by Lemma \ref{lemma-sibling-faithful}
+and $H$ induces an auto-equivalence of
+$\textit{Coh}(\mathcal{O}_X)$ (as this is true for its sibling functor $H'$).
+Thus the quasi-inverses $H^{-1}$ and $(H')^{-1}$ exist, are siblings
+(small detail omitted), and $(H')^{-1}$ sends $M$ to
+$(f^{-1})^*(M \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{L}^{\otimes -1})$
+which is a Fourier-Mukai functor (details omitted).
+Then of course $F = G \circ H^{-1}$ is a sibling of
+$G \circ (H')^{-1}$. Since compositions of Fourier-Mukai
+functors are Fourier-Mukai by
+Lemma \ref{lemma-compose-fourier-mukai}
+we conclude.
+\end{proof}
+
+
+
+
+
+\section{Fully faithful functors}
+\label{section-fully-faithful}
+
+\noindent
+Our goal is to prove fully faithful functors between derived categories
+are siblings of Fourier-Mukai functors, following
+\cite{Orlov-K3} and \cite{Ballard}.
+
+\begin{situation}
+\label{situation-fully-faithful}
+Here $k$ is a field. We have proper smooth schemes $X$ and $Y$ over $k$.
+We have a $k$-linear, exact, fully faithful functor
+$F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$.
+\end{situation}
+
+\noindent
+Before reading on, it makes sense to read at least some of
+Derived Categories, Section \ref{derived-section-postnikov}.
+
+\medskip\noindent
+Recall that $X$ is regular and hence has the resolution property
+(Varieties, Lemma \ref{varieties-lemma-smooth-regular} and
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-regular-resolution-property}). Thus
+on $X \times X$ we may choose a resolution
+$$
+\ldots \to
+\mathcal{E}_2 \boxtimes \mathcal{G}_2 \to
+\mathcal{E}_1 \boxtimes \mathcal{G}_1 \to
+\mathcal{E}_0 \boxtimes \mathcal{G}_0 \to
+\mathcal{O}_\Delta \to 0
+$$
+where each $\mathcal{E}_i$ and $\mathcal{G}_i$ is a finite locally
+free $\mathcal{O}_X$-module, see Lemma \ref{lemma-diagonal-resolution}.
+Using the complex
+\begin{equation}
+\label{equation-original-complex}
+\ldots \to
+\mathcal{E}_2 \boxtimes \mathcal{G}_2 \to
+\mathcal{E}_1 \boxtimes \mathcal{G}_1 \to
+\mathcal{E}_0 \boxtimes \mathcal{G}_0
+\end{equation}
in $D_{perf}(\mathcal{O}_{X \times X})$ as in
Derived Categories, Example \ref{derived-example-key-postnikov},
if for each $n$ we denote
$$
M_n = (\mathcal{E}_n \boxtimes \mathcal{G}_n \to \ldots \to
\mathcal{E}_0 \boxtimes \mathcal{G}_0)[-n]
$$
then we obtain an infinite Postnikov system for the complex
+(\ref{equation-original-complex}). This means
+the morphisms $M_0 \to M_1[1] \to M_2[2] \to \ldots$ and
+$M_n \to \mathcal{E}_n \boxtimes \mathcal{G}_n$ and
+$\mathcal{E}_n \boxtimes \mathcal{G}_n \to M_{n - 1}$
+satisfy certain conditions documented in
+Derived Categories, Definition \ref{derived-definition-postnikov-system}.
+Set
+$$
+\mathcal{F}_n = \Ker(\mathcal{E}_n \boxtimes \mathcal{G}_n \to
+\mathcal{E}_{n - 1} \boxtimes \mathcal{G}_{n - 1})
+$$
+Observe that since $\mathcal{O}_\Delta$ is flat over $X$ via $\text{pr}_1$
+the same is true for $\mathcal{F}_n$ for all $n$ (this is a convenient though
+not essential observation). We have
+$$
+H^q(M_n[n]) = \left\{
+\begin{matrix}
+\mathcal{O}_\Delta & \text{if} & q = 0 \\
+\mathcal{F}_n & \text{if} & q = -n \\
+0 & \text{if} & q \not = 0, -n
+\end{matrix}
+\right.
+$$
+Thus for $n \geq \dim(X \times X)$ we have
+$$
+M_n[n] \cong \mathcal{O}_\Delta \oplus \mathcal{F}_n[n]
+$$
+in $D_{perf}(\mathcal{O}_{X \times X})$ by
+Lemma \ref{lemma-split-complex-regular}.
+
+\medskip\noindent
+We are interested in the complex
+\begin{equation}
+\label{equation-complex}
+\ldots \to
+\mathcal{E}_2 \boxtimes F(\mathcal{G}_2) \to
+\mathcal{E}_1 \boxtimes F(\mathcal{G}_1) \to
+\mathcal{E}_0 \boxtimes F(\mathcal{G}_0)
+\end{equation}
+in $D_{perf}(\mathcal{O}_{X \times Y})$
+as the ``totalization'' of this complex should
+give us the kernel of the Fourier-Mukai functor we are trying to construct.
+For all $i, j \geq 0$ we have
+\begin{align*}
+\Ext^q_{X \times Y}(\mathcal{E}_i \boxtimes F(\mathcal{G}_i),
+\mathcal{E}_j \boxtimes F(\mathcal{G}_j))
+& =
+\bigoplus\nolimits_p
+\Ext^{q + p}_X(\mathcal{E}_i, \mathcal{E}_j) \otimes_k
+\Ext^{-p}_Y(F(\mathcal{G}_i), F(\mathcal{G}_j)) \\
+& =
+\bigoplus\nolimits_p
+\Ext^{q + p}_X(\mathcal{E}_i, \mathcal{E}_j) \otimes_k
+\Ext^{-p}_X(\mathcal{G}_i, \mathcal{G}_j)
+\end{align*}
+The second equality holds because $F$ is
+fully faithful and the first by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-kunneth-Ext}.
We find these $\Ext^q$ are zero for $q < 0$: a nonzero summand
requires both $q + p \geq 0$ and $-p \geq 0$, hence $q \geq 0$.
+Hence by
+Derived Categories, Lemma \ref{derived-lemma-existence-postnikov-system}
+we can build an infinite Postnikov system $K_0, K_1, K_2, \ldots$
+in $D_{perf}(\mathcal{O}_{X \times Y})$ for the complex
+(\ref{equation-complex}).
+Parallel to what happens with $M_0, M_1, M_2, \ldots$ this means we
+obtain morphisms
+$K_0 \to K_1[1] \to K_2[2] \to \ldots$ and
+$K_n \to \mathcal{E}_n \boxtimes F(\mathcal{G}_n)$ and
+$\mathcal{E}_n \boxtimes F(\mathcal{G}_n) \to K_{n - 1}$
+in $D_{perf}(\mathcal{O}_{X \times Y})$
+satisfying certain conditions documented in
+Derived Categories, Definition \ref{derived-definition-postnikov-system}.
+
+\medskip\noindent
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module whose support
+has a finite number of points, i.e., with $\dim(\text{Supp}(\mathcal{F})) = 0$.
+Consider the exact functor of triangulated categories
+$$
+D_{perf}(\mathcal{O}_{X \times Y})
+\longrightarrow
+D_{perf}(\mathcal{O}_Y),\quad
+N \longmapsto R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times Y}} N)
+$$
+It follows that the objects $R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times Y}} K_i)$
+form a Postnikov system for the complex in
+$D_{perf}(\mathcal{O}_Y)$ with terms
+$$
+R\text{pr}_{2, *}(
+(\mathcal{F} \otimes \mathcal{E}_i) \boxtimes F(\mathcal{G}_i)) =
+\Gamma(X, \mathcal{F} \otimes \mathcal{E}_i) \otimes_k F(\mathcal{G}_i) =
+F(\Gamma(X, \mathcal{F} \otimes \mathcal{E}_i) \otimes_k \mathcal{G}_i)
+$$
+Here we have used that $\mathcal{F} \otimes \mathcal{E}_i$ has
+vanishing higher cohomology as its support has dimension $0$.
+On the other hand, applying the exact functor
+$$
+D_{perf}(\mathcal{O}_{X \times X})
+\longrightarrow
+D_{perf}(\mathcal{O}_Y),\quad
+N \longmapsto F(R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times X}} N))
+$$
+we find that the objects
+$F(R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times X}} M_n))$
+form a second infinite Postnikov system
+for the complex in $D_{perf}(\mathcal{O}_Y)$ with terms
+$$
+F(R\text{pr}_{2, *}(
+(\mathcal{F} \otimes \mathcal{E}_i) \boxtimes \mathcal{G}_i)) =
+F(\Gamma(X, \mathcal{F} \otimes \mathcal{E}_i) \otimes_k \mathcal{G}_i)
+$$
+This is the same as before! By uniqueness of Postnikov systems
+(Derived Categories, Lemma \ref{derived-lemma-existence-postnikov-system})
+which applies because
+$$
+\Ext^q_Y(
+F(\Gamma(X, \mathcal{F} \otimes \mathcal{E}_i) \otimes_k \mathcal{G}_i),
+F(\Gamma(X, \mathcal{F} \otimes \mathcal{E}_j) \otimes_k \mathcal{G}_j)) = 0,
+\quad q < 0
+$$
+as $F$ is fully faithful, we find a system of isomorphisms
+$$
+F(R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times X}} M_n[n]))
+\cong
+R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times Y}} K_n[n])
+$$
+in $D_{perf}(\mathcal{O}_Y)$ compatible with the morphisms in
+$D_{perf}(\mathcal{O}_Y)$ induced by the morphisms
+$$
+M_{n - 1}[n - 1] \to M_n[n]
+\quad\text{and}\quad
+K_{n - 1}[n - 1] \to K_n[n]
+$$
+$$
+M_n \to \mathcal{E}_n \boxtimes \mathcal{G}_n
+\quad\text{and}\quad
+K_n \to \mathcal{E}_n \boxtimes F(\mathcal{G}_n)
+$$
+$$
+\mathcal{E}_n \boxtimes \mathcal{G}_n \to M_{n - 1}
+\quad\text{and}\quad
+\mathcal{E}_n \boxtimes F(\mathcal{G}_n) \to K_{n - 1}
+$$
+which are part of the structure of Postnikov systems.
+For $n$ sufficiently large we obtain a direct sum decomposition
+$$
+F(R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times X}} M_n[n]))
+=
+F(\mathcal{F}) \oplus
+F(R\text{pr}_{2, *}(
\text{pr}_1^*\mathcal{F} \otimes_{\mathcal{O}_{X \times X}} \mathcal{F}_n
+))[n]
+$$
+corresponding to the direct sum decomposition of $M_n$ constructed above
+(we are using the flatness of $\mathcal{F}_n$ over $X$ via $\text{pr}_1$
+to write a usual tensor product in the formula above, but this isn't
+essential for the argument).
+By Lemma \ref{lemma-boundedness} we find there exists an integer $m \geq 0$
+such that the first summand in this direct sum decomposition has nonzero
+cohomology sheaves only in the interval $[-m, m]$ and the
+second summand in this direct sum decomposition has nonzero cohomology
+sheaves only in the interval $[-m - n, m + \dim(X) - n]$.
+We conclude the system $K_0 \to K_1[1] \to K_2[2] \to \ldots$
+in $D_{perf}(\mathcal{O}_{X \times Y})$ satisfies the assumptions of
+Lemma \ref{lemma-bounded-fibres} after possibly replacing $m$ by
+a larger integer. We conclude we can write
+$$
+K_n[n] = K \oplus C_n
+$$
+for $n \gg 0$ compatible with transition maps and with $C_n$
+having nonzero cohomology sheaves only in the range $[-m - n, m - n]$.
+Denote $G$ the Fourier-Mukai functor corresponding to $K$.
+Putting everything together we find
+$$
+\begin{matrix}
+G(\mathcal{F}) \oplus
+R\text{pr}_{2, *}(
+\text{pr}_1^*\mathcal{F} \otimes_{\mathcal{O}_{X \times Y}}^\mathbf{L} C_n)
+\cong \\
+R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times Y}} K_n[n]) \cong \\
+F(R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
+\otimes^\mathbf{L}_{\mathcal{O}_{X \times X}} M_n[n]))
+\cong \\
+F(\mathcal{F}) \oplus
+F(R\text{pr}_{2, *}(
\text{pr}_1^*\mathcal{F} \otimes_{\mathcal{O}_{X \times X}} \mathcal{F}_n
+))[n]
+\end{matrix}
+$$
+Looking at the degrees that objects live in we conclude that for $n \gg m$
+we obtain an isomorphism
+$$
+F(\mathcal{F}) \cong G(\mathcal{F})
+$$
+Moreover, recall that this holds for every coherent $\mathcal{F}$ on $X$
+whose support has dimension $0$.
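
\medskip\noindent
Let us make the degree count in the last step explicit; what follows is
only a sketch. Write the chain of isomorphisms above as
$$
G(\mathcal{F}) \oplus A_n \cong F(\mathcal{F}) \oplus B_n
$$
where $A_n = R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F}
\otimes_{\mathcal{O}_{X \times Y}}^\mathbf{L} C_n)$ and $B_n$ denotes the
shifted last summand. After possibly enlarging $m$ we may assume
$F(\mathcal{F})$ and $G(\mathcal{F})$ have nonzero cohomology sheaves only
in degrees $[-m, m]$; for $G(\mathcal{F})$ this uses that a Fourier-Mukai
functor with perfect kernel has bounded cohomological amplitude, compare
Lemma \ref{lemma-boundedness}. On the other hand, $A_n$ and $B_n$ have
nonzero cohomology sheaves only in degrees $\leq m + \dim(X) - n$.
Hence for $n > 2m + \dim(X)$, applying the truncation functor
$\tau_{\geq -m}$ (which commutes with finite direct sums) to the displayed
isomorphism kills $A_n$ and $B_n$ and produces the isomorphism
$F(\mathcal{F}) \cong G(\mathcal{F})$.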
+
+\begin{lemma}
+\label{lemma-fully-faithful}
+Let $k$ be a field. Let $X$ and $Y$ be smooth proper schemes over $k$.
+Given a $k$-linear, exact, fully faithful functor
+$F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+there exists a Fourier-Mukai functor
+$F' : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$ whose kernel
+is in $D_{perf}(\mathcal{O}_{X \times Y})$ which is a sibling to $F$.
+\end{lemma}
+
+\begin{proof}
+Apply Lemma \ref{lemma-two-functors} to $F$ and the functor
+$G$ constructed above.
+\end{proof}
+
+\noindent
+The following theorem is also true without assuming $X$ is projective,
+see \cite{Noah}.
+
+\begin{theorem}[Orlov]
+\label{theorem-fully-faithful}
+\begin{reference}
+\cite[Theorem 2.2]{Orlov-K3}; this is shown in \cite{Noah}
+without the assumption that $X$ be projective
+\end{reference}
+Let $k$ be a field. Let $X$ and $Y$ be smooth proper schemes over $k$
+with $X$ projective over $k$. Any $k$-linear fully faithful exact
+functor $F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+is a Fourier-Mukai functor for some kernel in
+$D_{perf}(\mathcal{O}_{X \times Y})$.
+\end{theorem}
+
+\begin{proof}
+Let $F'$ be the Fourier-Mukai functor which is a sibling of $F$
+as in Lemma \ref{lemma-fully-faithful}.
+By Proposition \ref{proposition-siblings-isomorphic} we have $F \cong F'$
+provided we can show that $\textit{Coh}(\mathcal{O}_X)$ has enough
+negative objects. However, if $X = \Spec(k)$ for example, then
+this isn't true. Thus we first decompose $X = \coprod X_i$
+into its connected (and irreducible) components and we
+argue that it suffices to prove the result for each of the
+(fully faithful) composition functors
+$$
+F_i :
+D_{perf}(\mathcal{O}_{X_i}) \to
+D_{perf}(\mathcal{O}_X) \to
+D_{perf}(\mathcal{O}_Y)
+$$
+Details omitted. Thus we may assume $X$ is irreducible.
+
+\medskip\noindent
+The case $\dim(X) = 0$. Here $X$ is the spectrum of a finite (separable)
+extension $k'/k$ and hence $D_{perf}(\mathcal{O}_X)$
+is equivalent to the category
+of graded $k'$-vector spaces such that $\mathcal{O}_X$ corresponds to the
+trivial $1$-dimensional vector space in degree $0$.
+It is straightforward to see that any two
+siblings $F, F' : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+are isomorphic. Namely, we are given an isomorphism
+$F(\mathcal{O}_X) \cong F'(\mathcal{O}_X)$
compatible with the action of the $k$-algebra
+$k' = \text{End}_{D_{perf}(\mathcal{O}_X)}(\mathcal{O}_X)$
+which extends canonically to an isomorphism on any graded $k'$-vector space.
+
+\medskip\noindent
The case $\dim(X) > 0$. Here $X$ is a projective smooth
variety of dimension $> 0$. Let $\mathcal{F}$ be a coherent
+$\mathcal{O}_X$-module. We have to show there exists a
+coherent module $\mathcal{N}$ such that
+\begin{enumerate}
+\item there is a surjection $\mathcal{N} \to \mathcal{F}$ and
+\item $\Hom(\mathcal{F}, \mathcal{N}) = 0$.
+\end{enumerate}
+Choose an ample invertible $\mathcal{O}_X$-module $\mathcal{L}$.
+We claim that $\mathcal{N} = (\mathcal{L}^{\otimes n})^{\oplus r}$
+will work for $n \ll 0$ and $r$ large enough.
+Condition (1) follows from
+Properties, Proposition \ref{properties-proposition-characterize-ample}.
+Finally, we have
+$$
+\Hom(\mathcal{F}, \mathcal{L}^{\otimes n}) =
+H^0(X, \SheafHom(\mathcal{F}, \mathcal{L}^{\otimes n})) =
+H^0(X, \SheafHom(\mathcal{F}, \mathcal{O}_X) \otimes \mathcal{L}^{\otimes n})
+$$
+Since the dual $\SheafHom(\mathcal{F}, \mathcal{O}_X)$ is torsion free, this
+vanishes for $n \ll 0$ by Varieties, Lemma
+\ref{varieties-lemma-vanishin-h0-negative}. This finishes the proof.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-equivalence}
+Let $k$ be a field. Let $X$ and $Y$ be smooth proper schemes over $k$.
+If $F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+is a $k$-linear exact equivalence of triangulated categories then
+there exists a Fourier-Mukai functor
+$F' : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$ whose
+kernel is in $D_{perf}(\mathcal{O}_{X \times Y})$
+which is an equivalence and a sibling of $F$.
+\end{proposition}
+
+\begin{proof}
+The functor $F'$ of Lemma \ref{lemma-fully-faithful}
+is an equivalence by Lemma \ref{lemma-sibling-faithful}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-uniqueness}
+Let $k$ be a field. Let $X$ be a smooth proper scheme over $k$.
+Let $K \in D_{perf}(\mathcal{O}_{X \times X})$. If the Fourier-Mukai
+functor $\Phi_K : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_X)$
+is isomorphic to the identity functor, then
$K \cong \Delta_*\mathcal{O}_X$ in $D_{perf}(\mathcal{O}_{X \times X})$.
+\end{lemma}
+
+\begin{proof}
+Let $i$ be the minimal integer such that the cohomology sheaf $H^i(K)$ is
+nonzero. Let $\mathcal{E}$ and $\mathcal{G}$ be finite locally free
+$\mathcal{O}_X$-modules. Then
+\begin{align*}
+H^i(X \times X, K \otimes_{\mathcal{O}_{X \times X}}^\mathbf{L}
+(\mathcal{E} \boxtimes \mathcal{G}))
+& =
+H^i(X, R\text{pr}_{2, *}(K \otimes_{\mathcal{O}_{X \times X}}^\mathbf{L}
+(\mathcal{E} \boxtimes \mathcal{G}))) \\
+& =
+H^i(X, \Phi_K(\mathcal{E}) \otimes_{\mathcal{O}_X}^\mathbf{L} \mathcal{G}) \\
+& \cong
+H^i(X, \mathcal{E} \otimes \mathcal{G})
+\end{align*}
+which is zero if $i < 0$. On the other hand, we can choose
+$\mathcal{E}$ and $\mathcal{G}$ such that there is a surjection
+$\mathcal{E}^\vee \boxtimes \mathcal{G}^\vee \to H^i(K)$
+by Lemma \ref{lemma-on-product}.
+In this case the left hand side of the equalities is nonzero.
+Hence we conclude that $H^i(K) = 0$ for $i < 0$.
+
+\medskip\noindent
+Let $i$ be the maximal integer such that $H^i(K)$ is nonzero.
The same argument, now taking $\mathcal{E}$ and $\mathcal{G}$
with support of dimension $0$, shows that $i \leq 0$.
+Hence we conclude that $K$ is given by a single coherent
+$\mathcal{O}_{X \times X}$-module $\mathcal{K}$ sitting in degree $0$.
+
+\medskip\noindent
+Since $R\text{pr}_{2, *}(\text{pr}_1^*\mathcal{F} \otimes \mathcal{K})$
+is $\mathcal{F}$, by taking $\mathcal{F}$ supported at closed points
+we see that the support of $\mathcal{K}$ is finite over $X$ via
+$\text{pr}_2$. Since $R\text{pr}_{2, *}(\mathcal{K}) \cong \mathcal{O}_X$
+we conclude by Functors and Morphisms, Lemma
+\ref{functors-lemma-pushforward-invertible-pre}
+that $\mathcal{K} = s_*\mathcal{O}_X$ for some section $s : X \to X \times X$
+of the second projection. Then $\Phi_K(M) = f^*M$ where
+$f = \text{pr}_1 \circ s$ and this can happen only if $s$
+is the diagonal morphism as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{A category of Fourier-Mukai kernels}
+\label{section-category-Fourier-Mukai-kernels}
+
+\noindent
+Let $S$ be a scheme. We claim there is a category
+with
+\begin{enumerate}
+\item Objects are proper smooth schemes over $S$.
+\item Morphisms from $X$ to $Y$ are isomorphism classes
+of objects of $D_{perf}(\mathcal{O}_{X \times_S Y})$.
+\item Composition of the isomorphism class of
+$K \in D_{perf}(\mathcal{O}_{X \times_S Y})$
+and the isomorphism class of $K'$ in $D_{perf}(\mathcal{O}_{Y \times_S Z})$
+is the isomorphism class of
+$$
+R\text{pr}_{13, *}(
+L\text{pr}_{12}^*K
+\otimes_{\mathcal{O}_{X \times_S Y \times_S Z}}^\mathbf{L}
+L\text{pr}_{23}^*K')
+$$
+which is in $D_{perf}(\mathcal{O}_{X \times_S Z})$ by
+Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}.
+\item The identity morphism from $X$ to $X$ is the
+isomorphism class of $\Delta_{X/S, *}\mathcal{O}_X$
+which is in $D_{perf}(\mathcal{O}_{X \times_S X})$
+by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-perfect-closed-immersion-perfect-direct-image}
+and the fact that $\Delta_{X/S}$ is a perfect morphism by
+Divisors, Lemma
+\ref{divisors-lemma-immersion-smooth-into-smooth-regular-immersion} and
+More on Morphisms, Lemma \ref{more-morphisms-lemma-regular-immersion-perfect}.
+\end{enumerate}
+Let us check that associativity of composition
+of morphisms holds; we omit verifying that the identity
+morphisms are indeed identities. To see this suppose we have
+$X, Y, Z, W$ and
+$c \in D_{perf}(\mathcal{O}_{X \times_S Y})$,
+$c' \in D_{perf}(\mathcal{O}_{Y \times_S Z})$, and
+$c'' \in D_{perf}(\mathcal{O}_{Z \times_S W})$. Then we have
+\begin{align*}
+c'' \circ (c' \circ c)
+& \cong
+\text{pr}^{134}_{14, *}(
+\text{pr}^{134, *}_{13}
+\text{pr}^{123}_{13, *}(\text{pr}^{123, *}_{12}c \otimes
+\text{pr}^{123, *}_{23}c')
+\otimes \text{pr}^{134, *}_{34}c'') \\
+& \cong
+\text{pr}^{134}_{14, *}(
+\text{pr}^{1234}_{134, *}
+\text{pr}^{1234, *}_{123}(\text{pr}^{123, *}_{12}c \otimes
+\text{pr}^{123, *}_{23}c')
+\otimes \text{pr}^{134, *}_{34}c'') \\
+& \cong
+\text{pr}^{134}_{14, *}(
+\text{pr}^{1234}_{134, *}
+(\text{pr}^{1234, *}_{12}c \otimes
+\text{pr}^{1234, *}_{23}c')
+\otimes \text{pr}^{134, *}_{34}c'') \\
+& \cong
+\text{pr}^{134}_{14, *}
+\text{pr}^{1234}_{134, *}
+((\text{pr}^{1234, *}_{12}c \otimes
+\text{pr}^{1234, *}_{23}c')
+\otimes \text{pr}^{1234, *}_{34}c'') \\
+& \cong
+\text{pr}^{1234}_{14, *}(
+(\text{pr}^{1234, *}_{12}c \otimes
+\text{pr}^{1234, *}_{23}c') \otimes
+\text{pr}^{1234, *}_{34}c'')
+\end{align*}
+Here we use the notation
+$$
\text{pr}^{1234}_{134} : X \times_S Y \times_S Z \times_S W
\to X \times_S Z \times_S W
\quad\text{and}\quad
\text{pr}^{134}_{14} : X \times_S Z \times_S W \to X \times_S W
+$$
+the projections and similarly for other indices.
+We also write $\text{pr}_*$ instead of $R\text{pr}_*$ and
+$\text{pr}^*$ instead of $L\text{pr}^*$ and we drop
+all super and sub scripts on $\otimes$.
+The first equality is the definition of the composition.
+The second equality holds because
+$\text{pr}^{134, *}_{13} \text{pr}^{123}_{13, *} =
+\text{pr}^{1234}_{134, *} \text{pr}^{1234, *}_{123}$
+by base change (Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-compare-base-change}).
+The third equality holds because pullbacks compose
+correctly and pass through tensor products, see
+Cohomology, Lemmas \ref{cohomology-lemma-derived-pullback-composition} and
+\ref{cohomology-lemma-pullback-tensor-product}.
+The fourth equality follows from the ``projection formula'' for
$\text{pr}^{1234}_{134}$, see Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-cohomology-base-change}.
+The fifth equality is that proper pushforward is compatible
+with composition, see
+Cohomology, Lemma \ref{cohomology-lemma-derived-pushforward-composition}.
+Since tensor product is associative
+this concludes the proof of associativity of composition.
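\medskip\noindent
For example, the cartesian square underlying the base change used in the
second equality is
$$
\xymatrix{
X \times_S Y \times_S Z \times_S W
\ar[r]^-{\text{pr}^{1234}_{123}} \ar[d]_{\text{pr}^{1234}_{134}} &
X \times_S Y \times_S Z \ar[d]^{\text{pr}^{123}_{13}} \\
X \times_S Z \times_S W \ar[r]^-{\text{pr}^{134}_{13}} &
X \times_S Z
}
$$
in which both composites are the projection onto $X \times_S Z$.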
+
+\begin{lemma}
+\label{lemma-base-change-is-functor}
+Let $S' \to S$ be a morphism of schemes.
+The rule which sends
+\begin{enumerate}
+\item a smooth proper scheme $X$ over $S$ to $X' = S' \times_S X$, and
\item for smooth proper schemes $X$ and $Y$ over $S$, the isomorphism
class of an object $K$
of $D_{perf}(\mathcal{O}_{X \times_S Y})$ to the isomorphism class of
+$L(X' \times_{S'} Y' \to X \times_S Y)^*K$
+in $D_{perf}(\mathcal{O}_{X' \times_{S'} Y'})$
+\end{enumerate}
+is a functor from the category defined for $S$ to the category
+defined for $S'$.
+\end{lemma}
+
+\begin{proof}
+To see this suppose we have $X, Y, Z$ and
+$K \in D_{perf}(\mathcal{O}_{X \times_S Y})$ and
+$M \in D_{perf}(\mathcal{O}_{Y \times_S Z})$.
+Denote
+$K' \in D_{perf}(\mathcal{O}_{X' \times_{S'} Y'})$ and
+$M' \in D_{perf}(\mathcal{O}_{Y' \times_{S'} Z'})$
+their pullbacks as in the statement of the lemma.
+The diagram
+$$
+\xymatrix{
+X' \times_{S'} Y' \times_{S'} Z' \ar[r] \ar[d]_{\text{pr}'_{13}} &
+X \times_S Y \times_S Z \ar[d]^{\text{pr}_{13}} \\
+X' \times_{S'} Z' \ar[r] &
+X \times_S Z
+}
+$$
+is cartesian and $\text{pr}_{13}$ is proper and smooth.
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}
+we see that the derived pullback by the lower horizontal
+arrow of the composition
+$$
+R\text{pr}_{13, *}(
+L\text{pr}_{12}^*K
+\otimes_{\mathcal{O}_{X \times_S Y \times_S Z}}^\mathbf{L}
+L\text{pr}_{23}^*M)
+$$
+indeed is (canonically) isomorphic to
+$$
+R\text{pr}'_{13, *}(
+L(\text{pr}'_{12})^*K'
+\otimes_{\mathcal{O}_{X' \times_{S'} Y' \times_{S'} Z'}}^\mathbf{L}
+L(\text{pr}'_{23})^*M')
+$$
+as desired. Some details omitted.
+\end{proof}
+
+
+
+
+
+\section{Relative equivalences}
+\label{section-relative-equivalences}
+
+\noindent
+In this section we prove some lemmas about the following concept.
+
+\begin{definition}
+\label{definition-relative-equivalence-kernel}
+Let $S$ be a scheme. Let $X \to S$ and $Y \to S$ be smooth proper morphisms.
+An object $K \in D_{perf}(\mathcal{O}_{X \times_S Y})$
+is said to be {\it the Fourier-Mukai kernel of a relative equivalence
+from $X$ to $Y$ over $S$}
if there exists an object $K' \in D_{perf}(\mathcal{O}_{Y \times_S X})$
+such that
+$$
+\Delta_{X/S, *}\mathcal{O}_X \cong
+R\text{pr}_{13, *}(L\text{pr}_{12}^*K
+\otimes_{\mathcal{O}_{X \times_S Y \times_S X}}^\mathbf{L}
+L\text{pr}_{23}^*K')
+$$
+in $D(\mathcal{O}_{X \times_S X})$ and
+$$
+\Delta_{Y/S, *}\mathcal{O}_Y \cong
+R\text{pr}_{13, *}(L\text{pr}_{12}^*K'
+\otimes_{\mathcal{O}_{Y \times_S X \times_S Y}}^\mathbf{L}
+L\text{pr}_{23}^*K)
+$$
+in $D(\mathcal{O}_{Y \times_S Y})$. In other words, the isomorphism class
+of $K$ defines an invertible arrow in the category defined in
+Section \ref{section-category-Fourier-Mukai-kernels}.
+\end{definition}
+
+\noindent
+The language is intentionally cumbersome.
+
+\begin{lemma}
+\label{lemma-equivalences-rek}
+With notation as in Definition \ref{definition-relative-equivalence-kernel}
+let $K$ be the Fourier-Mukai kernel of a relative equivalence from $X$
+to $Y$ over $S$. Then the corresponding Fourier-Mukai functors
+$\Phi_K : D_\QCoh(\mathcal{O}_X) \to D_\QCoh(\mathcal{O}_Y)$
+(Lemma \ref{lemma-fourier-Mukai-QCoh})
+and $\Phi_K : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
+(Lemma \ref{lemma-fourier-mukai})
+are equivalences.
+\end{lemma}
+
+\begin{proof}
+Immediate from Lemma \ref{lemma-compose-fourier-mukai} and
+Example \ref{example-diagonal-fourier-mukai}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-rek}
+With notation as in Definition \ref{definition-relative-equivalence-kernel}
+let $K$ be the Fourier-Mukai kernel of a relative equivalence from $X$
+to $Y$ over $S$. Let $S_1 \to S$ be a morphism of schemes. Let
+$X_1 = S_1 \times_S X$ and $Y_1 = S_1 \times_S Y$. Then the pullback
+$K_1 = L(X_1 \times_{S_1} Y_1 \to X \times_S Y)^*K$ is
+the Fourier-Mukai kernel of a relative equivalence from $X_1$
+to $Y_1$ over $S_1$.
+\end{lemma}
+
+\begin{proof}
+Let $K' \in D_{perf}(\mathcal{O}_{Y \times_S X})$ be the object assumed to
+exist in Definition \ref{definition-relative-equivalence-kernel}.
+Denote $K'_1$ the pullback of $K'$ by
+$Y_1 \times_{S_1} X_1 \to Y \times_S X$.
+Then it suffices to prove that we have
+$$
\Delta_{X_1/S_1, *}\mathcal{O}_{X_1} \cong
+R\text{pr}_{13, *}(L\text{pr}_{12}^*K_1
+\otimes_{\mathcal{O}_{X_1 \times_{S_1} Y_1 \times_{S_1} X_1}}^\mathbf{L}
+L\text{pr}_{23}^*K_1')
+$$
+in $D(\mathcal{O}_{X_1 \times_{S_1} X_1})$ and similarly for the other
+condition. Since
+$$
+\xymatrix{
+X_1 \times_{S_1} Y_1 \times_{S_1} X_1 \ar[r] \ar[d]_{\text{pr}_{13}} &
+X \times_S Y \times_S X \ar[d]^{\text{pr}_{13}} \\
+X_1 \times_{S_1} X_1 \ar[r] &
+X \times_S X
+}
+$$
+is cartesian it suffices by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}
+to prove that
+$$
+\Delta_{X_1/S_1, *}\mathcal{O}_{X_1}
+\cong
+L(X_1 \times_{S_1} X_1 \to X \times_S X)^*\Delta_{X/S, *}\mathcal{O}_X
+$$
+This in turn will be true if $X$ and $X_1 \times_{S_1} X_1$ are tor
+independent over $X \times_S X$, see
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-compare-base-change}.
+This tor independence can be seen directly but also follows from
+the more general More on Morphisms, Lemma
+\ref{more-morphisms-lemma-case-of-tor-independence} applied to the square
+with corners $X, X, X, S$ and its base change by $S_1 \to S$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descend-rek}
+Let $S = \lim_{i \in I} S_i$ be a limit of a directed system of schemes
+with affine transition morphisms $g_{i'i} : S_{i'} \to S_i$.
+We assume that $S_i$ is quasi-compact and quasi-separated for all $i \in I$.
+Let $0 \in I$. Let $X_0 \to S_0$ and $Y_0 \to S_0$ be smooth proper morphisms.
+We set $X_i = S_i \times_{S_0} X_0$ for $i \geq 0$
+and $X = S \times_{S_0} X_0$ and similarly for $Y_0$. If $K$ is the
+Fourier-Mukai kernel of a relative equivalence from $X$ to $Y$ over $S$
+then for some $i \geq 0$ there exists a
+Fourier-Mukai kernel of a relative equivalence from $X_i$ to $Y_i$ over $S_i$.
+\end{lemma}
+
+\begin{proof}
+Let $K' \in D_{perf}(\mathcal{O}_{Y \times_S X})$ be the object assumed to
+exist in Definition \ref{definition-relative-equivalence-kernel}.
Since $X \times_S Y = \lim X_i \times_{S_i} Y_i$ and
$Y \times_S X = \lim Y_i \times_{S_i} X_i$ there exist an
$i$ and objects $K_i \in D_{perf}(\mathcal{O}_{X_i \times_{S_i} Y_i})$
and $K'_i \in D_{perf}(\mathcal{O}_{Y_i \times_{S_i} X_i})$
whose pullbacks to $X \times_S Y$ and $Y \times_S X$ give $K$ and $K'$.
+See Derived Categories of Schemes, Lemma \ref{perfect-lemma-descend-perfect}.
+By Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-flat-proper-perfect-direct-image-general}
+the object
+$$
+R\text{pr}_{13, *}(L\text{pr}_{12}^*K_i
+\otimes_{\mathcal{O}_{X_i \times_{S_i} Y_i \times_{S_i} X_i}}^\mathbf{L}
+L\text{pr}_{23}^*K_i')
+$$
+is perfect and its pullback to $X \times_S X$ is equal to
+$$
+R\text{pr}_{13, *}(L\text{pr}_{12}^*K
+\otimes_{\mathcal{O}_{X \times_S Y \times_S X}}^\mathbf{L}
+L\text{pr}_{23}^*K') \cong \Delta_{X/S, *}\mathcal{O}_X
+$$
+See proof of Lemma \ref{lemma-base-change-rek}.
On the other hand, since $X_i \to S_i$ is smooth and separated the
+object
+$$
+\Delta_{i, *}\mathcal{O}_{X_i}
+$$
+of $D(\mathcal{O}_{X_i \times_{S_i} X_i})$ is also perfect
+(by More on Morphisms, Lemmas
+\ref{more-morphisms-lemma-smooth-diagonal-perfect} and
+\ref{more-morphisms-lemma-perfect-proper-perfect-direct-image}) and
+its pullback to $X \times_S X$ is equal to
+$$
+\Delta_{X/S, *}\mathcal{O}_X
+$$
+See proof of Lemma \ref{lemma-base-change-rek}. Thus by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-descend-perfect}
+after increasing $i$ we may assume that
+$$
+\Delta_{i, *}\mathcal{O}_{X_i} \cong
+R\text{pr}_{13, *}(L\text{pr}_{12}^*K_i
+\otimes_{\mathcal{O}_{X_i \times_{S_i} Y_i \times_{S_i} X_i}}^\mathbf{L}
+L\text{pr}_{23}^*K_i')
+$$
+as desired. The same works for the roles of $K$ and $K'$ reversed.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{No deformations}
+\label{section-no-deformations}
+
+\noindent
The title of this section refers to Lemma \ref{lemma-no-deformations} below.
+
+\begin{lemma}
+\label{lemma-deform-koszul}
+Let $(R, \mathfrak m, \kappa) \to (A, \mathfrak n, \lambda)$
be a flat local ring homomorphism of local rings
+which is essentially of finite presentation.
+Let $\overline{f}_1, \ldots, \overline{f}_r \in \mathfrak n/\mathfrak m A
+\subset A/\mathfrak m A$ be a regular sequence. Let $K \in D(A)$. Assume
+\begin{enumerate}
+\item $K$ is perfect,
+\item $K \otimes_A^\mathbf{L} A/\mathfrak m A$ is isomorphic in
+$D(A/\mathfrak m A)$ to the
+Koszul complex on $\overline{f}_1, \ldots, \overline{f}_r$.
+\end{enumerate}
+Then $K$ is isomorphic in $D(A)$ to a Koszul complex on a regular sequence
+$f_1, \ldots, f_r \in A$ lifting the given elements
+$\overline{f}_1, \ldots, \overline{f}_r$. Moreover, $A/(f_1, \ldots, f_r)$
+is flat over $R$.
+\end{lemma}
+
+\begin{proof}
+Let us use chain complexes in the proof of this lemma.
+The Koszul complex $K_\bullet(\overline{f}_1, \ldots, \overline{f}_r)$
+is defined in More on Algebra, Definition
+\ref{more-algebra-definition-koszul-complex}.
+By More on Algebra, Lemma \ref{more-algebra-lemma-lift-complex-stably-frees}
+we can represent $K$ by a complex
+$$
+K_\bullet :
+A \to A^{\oplus r} \to \ldots \to A^{\oplus r} \to A
+$$
+whose tensor product with $A/\mathfrak mA$ is equal (!)
+to $K_\bullet(\overline{f}_1, \ldots, \overline{f}_r)$.
+Denote $f_1, \ldots, f_r \in A$ the components of the
+arrow $A^{\oplus r} \to A$. These $f_i$ are lifts of the
+$\overline{f}_i$. By Algebra, Lemma
+\ref{algebra-lemma-grothendieck-regular-sequence-general}
+$f_1, \ldots, f_r$ form a regular sequence in $A$ and $A/(f_1, \ldots, f_r)$
+is flat over $R$. Let $J = (f_1, \ldots, f_r) \subset A$.
+Consider the diagram
+$$
+\xymatrix{
+K_\bullet \ar[rd] \ar@{..>}[rr]_{\varphi_\bullet} & &
+K_\bullet(f_1, \ldots, f_r) \ar[ld] \\
+& A/J
+}
+$$
+Since $f_1, \ldots, f_r$ is a regular sequence the south-west arrow
+is a quasi-isomorphism (see
+More on Algebra, Lemma \ref{more-algebra-lemma-regular-koszul-regular}).
+Hence we can find the dotted arrow making the
+diagram commute for example by
+Algebra, Lemma \ref{algebra-lemma-compare-resolutions}.
+Reducing modulo $\mathfrak m$ we obtain a commutative diagram
+$$
+\xymatrix{
+K_\bullet(\overline{f}_1, \ldots, \overline{f}_r)
+\ar[rd] \ar[rr]_{\overline{\varphi}_\bullet} & &
+K_\bullet(\overline{f}_1, \ldots, \overline{f}_r) \ar[ld] \\
+& (A/\mathfrak m A)/(\overline{f}_1, \ldots, \overline{f}_r)
+}
+$$
+by our choice of $K_\bullet$. Thus $\overline{\varphi}$ is an isomorphism
+in the derived category $D(A/\mathfrak m A)$. It follows that
+$\overline{\varphi} \otimes_{A/\mathfrak m A}^\mathbf{L} \lambda$
+is an isomorphism. Since $\overline{f}_i \in \mathfrak n / \mathfrak m A$
+we see that
+$$
+\text{Tor}_i^{A/\mathfrak m A}(
+K_\bullet(\overline{f}_1, \ldots, \overline{f}_r), \lambda)
+=
+K_i(\overline{f}_1, \ldots, \overline{f}_r) \otimes_{A/\mathfrak m A} \lambda
+$$
+Hence $\varphi_i \bmod \mathfrak n$ is invertible.
+Since $A$ is local this means that $\varphi_i$ is an
+isomorphism and the proof is complete.
+\end{proof}
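\medskip\noindent
For concreteness, if $r = 2$, then with one common choice of sign
conventions the Koszul complex appearing in the proof is the chain complex
$$
K_\bullet(f_1, f_2) :
\quad
A \xrightarrow{\left(\begin{smallmatrix} -f_2 \\ f_1 \end{smallmatrix}\right)}
A^{\oplus 2} \xrightarrow{(f_1,\ f_2)} A
$$
in degrees $2, 1, 0$, and the lemma reconstructs this complex, together
with the flatness of $A/(f_1, f_2)$ over $R$, from its reduction modulo
$\mathfrak m$.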
+
+\begin{lemma}
+\label{lemma-limit-arguments}
+Let $R \to S$ be a finite type flat ring map of Noetherian rings.
+Let $\mathfrak q \subset S$ be a prime ideal lying over
+$\mathfrak p \subset R$. Let $K \in D(S)$ be perfect.
+Let $f_1, \ldots, f_r \in \mathfrak q S_\mathfrak q$
+be a regular sequence such that $S_\mathfrak q/(f_1, \ldots, f_r)$
+is flat over $R$ and such that
+$K \otimes_S^\mathbf{L} S_\mathfrak q$ is isomorphic to the
+Koszul complex on $f_1, \ldots, f_r$. Then there exists a
+$g \in S$, $g \not \in \mathfrak q$ such that
+\begin{enumerate}
+\item $f_1, \ldots, f_r$ are the images of
+$f'_1, \ldots, f'_r \in S_g$,
+\item $f'_1, \ldots, f'_r$ form a regular sequence in $S_g$,
+\item $S_g/(f'_1, \ldots, f'_r)$ is flat over $R$,
+\item $K \otimes_S^\mathbf{L} S_g$ is isomorphic to the
Koszul complex on $f'_1, \ldots, f'_r$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We can find $g \in S$, $g \not \in \mathfrak q$ with property (1) by
+the definition of localizations. After replacing $g$ by
+$gg'$ for some $g' \in S$, $g' \not \in \mathfrak q$
+we may assume (2) holds, see
+Algebra, Lemma \ref{algebra-lemma-regular-sequence-in-neighbourhood}.
By Algebra, Theorem \ref{algebra-theorem-openness-flatness}
we find that $S_g/(f'_1, \ldots, f'_r)$ is flat over $R$
at all primes in an open neighbourhood of $\mathfrak q$.
+Hence after once more replacing $g$ by $gg'$ for some
+$g' \in S$, $g' \not \in \mathfrak q$ we may assume (3) holds as well.
+Finally, we get (4) for a further replacement by
+More on Algebra, Lemma \ref{more-algebra-lemma-colimit-perfect-complexes}.
+\end{proof}
+
+\noindent
+For a generalization of the following lemma, please see
+More on Morphisms of Spaces, Lemma
+\ref{spaces-more-morphisms-lemma-where-isomorphism}.
+
+\begin{lemma}
+\label{lemma-isomorphism-in-neighbourhood}
+Let $S$ be a Noetherian scheme. Let $s \in S$.
+Let $p : X \to Y$ be a morphism of schemes over $S$.
+Assume
+\begin{enumerate}
+\item $Y \to S$ and $X \to S$ proper,
+\item $X$ is flat over $S$,
+\item $X_s \to Y_s$ an isomorphism.
+\end{enumerate}
+Then there exists an open neighbourhood $U \subset S$ of $s$
+such that the base change $X_U \to Y_U$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+The morphism $p$ is proper by Morphisms, Lemma
+\ref{morphisms-lemma-closed-immersion-proper}.
+By Cohomology of Schemes, Lemma
+\ref{coherent-lemma-proper-finite-fibre-finite-in-neighbourhood}
+there is an open $Y_s \subset V \subset Y$ such that
+$p|_{p^{-1}(V)} : p^{-1}(V) \to V$ is finite.
+By More on Morphisms, Theorem
+\ref{more-morphisms-theorem-criterion-flatness-fibre-Noetherian}
+there is an open $X_s \subset U \subset X$ such that
+$p|_U : U \to Y$ is flat. After removing the images of
+$X \setminus U$ and $Y \setminus V$ (which are closed subsets
+not containing $s$) we may assume $p$ is flat and finite.
+Then $p$ is open (Morphisms, Lemma \ref{morphisms-lemma-fppf-open})
+and $Y_s \subset p(X) \subset Y$ hence after shrinking $S$
+we may assume $p$ is surjective.
+As $p_s : X_s \to Y_s$ is an isomorphism, the map
+$$
+p^\sharp : \mathcal{O}_Y \longrightarrow p_*\mathcal{O}_X
+$$
+of coherent $\mathcal{O}_Y$-modules ($p$ is finite)
+becomes an isomorphism after pullback by $i : Y_s \to Y$
+(by Cohomology of Schemes, Lemma
+\ref{coherent-lemma-affine-base-change} for example).
+By Nakayama's lemma, this implies that
+$\mathcal{O}_{Y, y} \to (p_*\mathcal{O}_X)_y$ is surjective
+for all $y \in Y_s$. Hence there is an open $Y_s \subset V \subset Y$
+such that $p^\sharp|_V$ is surjective
+(Modules, Lemma \ref{modules-lemma-finite-type-surjective-on-stalk}).
+Hence after shrinking $S$ once more we may assume
+$p^\sharp$ is surjective which means that $p$ is a closed
+immersion (as $p$ is already finite).
+Thus now $p$ is a surjective flat closed immersion
+of Noetherian schemes and hence an isomorphism, see
+Morphisms, Section \ref{morphisms-section-flat-closed-immersions}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-no-deformations}
+Let $k$ be a field. Let $S$ be a finite type scheme over $k$
+with $k$-rational point $s$. Let $Y \to S$ be a smooth proper morphism.
Let $X = Y_s \times S$ be the constant family over $S$ with fibre
+$Y_s$. Let $K$ be the Fourier-Mukai kernel of a relative equivalence
from $X$ to $Y$ over $S$. Assume that
+$$
L(Y_s \times Y_s \to X \times_S Y)^*K \cong
+\Delta_{Y_s/k, *} \mathcal{O}_{Y_s}
+$$
+in $D(\mathcal{O}_{Y_s \times Y_s})$. Then there is an open neighbourhood
+$s \in U \subset S$ such that $Y|_U$ is isomorphic to $Y_s \times U$ over $U$.
+\end{lemma}
+
+\begin{proof}
+Denote $i : Y_s \times Y_s = X_s \times Y_s \to X \times_S Y$
+the natural closed immersion. (We will write $Y_s$ and not $X_s$
+for the fibre of $X$ over $s$ from now on.) Let
+$z \in Y_s \times Y_s = (X \times_S Y)_s \subset X \times_S Y$
+be a closed point. As indicated we think of $z$ both as a closed point
+of $Y_s \times Y_s$ as well as a closed point of $X \times_S Y$.
+
+\medskip\noindent
+Case I: $z \not \in \Delta_{Y_s/k}(Y_s)$. Denote $\mathcal{O}_z$
+the coherent $\mathcal{O}_{Y_s \times Y_s}$-module supported at $z$
+whose value is $\kappa(z)$. Then $i_*\mathcal{O}_z$ is the
+coherent $\mathcal{O}_{X \times_S Y}$-module supported at $z$
+whose value is $\kappa(z)$. Our assumption means that
+$$
K \otimes_{\mathcal{O}_{X \times_S Y}}^\mathbf{L} i_*\mathcal{O}_z =
i_*(Li^*K \otimes_{\mathcal{O}_{Y_s \times Y_s}}^\mathbf{L} \mathcal{O}_z) = 0
+$$
+Hence by Lemma \ref{lemma-orthogonal-point-sheaf}
+we find an open neighbourhood $U(z) \subset X \times_S Y$ of $z$
+such that $K|_{U(z)} = 0$. In this case we set $Z(z) = \emptyset$
+as closed subscheme of $U(z)$.
+
+\medskip\noindent
+Case II: $z \in \Delta_{Y_s/k}(Y_s)$. Since $Y_s$ is smooth over $k$
+we know that $\Delta_{Y_s/k} : Y_s \to Y_s \times Y_s$ is a
+regular immersion, see More on Morphisms, Lemma
+\ref{more-morphisms-lemma-smooth-diagonal-perfect}.
+Choose a regular sequence $\overline{f}_1, \ldots, \overline{f}_r \in
+\mathcal{O}_{Y_s \times Y_s, z}$ cutting out the ideal sheaf of
+$\Delta_{Y_s/k}(Y_s)$. Since a regular sequence is Koszul-regular
+(More on Algebra, Lemma \ref{more-algebra-lemma-regular-koszul-regular})
+our assumption means that
+$$
+K_z \otimes_{\mathcal{O}_{X \times_S Y, z}}^\mathbf{L}
+\mathcal{O}_{Y_s \times Y_s, z}
+\in D(\mathcal{O}_{Y_s \times Y_s, z})
+$$
+is represented by the Koszul complex on
+$\overline{f}_1, \ldots, \overline{f}_r$ over
+$\mathcal{O}_{Y_s \times Y_s, z}$.
+By Lemma \ref{lemma-deform-koszul} applied to
+$\mathcal{O}_{S, s} \to \mathcal{O}_{X \times_S Y, z}$
+we conclude that $K_z \in D(\mathcal{O}_{X \times_S Y, z})$ is
+represented by the Koszul complex on a regular sequence
+$f_1, \ldots, f_r \in \mathcal{O}_{X \times_S Y, z}$
+lifting the regular sequence
+$\overline{f}_1, \ldots, \overline{f}_r$
such that moreover $\mathcal{O}_{X \times_S Y, z}/(f_1, \ldots, f_r)$
+is flat over $\mathcal{O}_{S, s}$.
+By some limit arguments (Lemma \ref{lemma-limit-arguments})
+we conclude that there exists an affine open neighbourhood
+$U(z) \subset X \times_S Y$ of $z$ and a closed subscheme
+$Z(z) \subset U(z)$ such that
+\begin{enumerate}
+\item $Z(z) \to U(z)$ is a regular closed immersion,
+\item $K|_{U(z)}$ is quasi-isomorphic to $\mathcal{O}_{Z(z)}$,
+\item $Z(z) \to S$ is flat,
+\item $Z(z)_s = \Delta_{Y_s/k}(Y_s) \cap U(z)_s$
+as closed subschemes of $U(z)_s$.
+\end{enumerate}
+
+\noindent
By property (2), for closed points $z, z' \in Y_s \times Y_s$, we
+find that $Z(z) \cap U(z') = Z(z') \cap U(z)$ as closed subschemes.
+Hence we obtain an open neighbourhood
+$$
+U = \bigcup\nolimits_{z \in Y_s \times Y_s\text{ closed}} U(z)
+$$
+of $Y_s \times Y_s$ in $X \times_S Y$ and a closed subscheme $Z \subset U$
+such that (1) $Z \to U$ is a regular closed immersion,
+(2) $Z \to S$ is flat, and (3) $Z_s = \Delta_{Y_s/k}(Y_s)$.
+Since $X \times_S Y \to S$ is proper, after replacing $S$
+by an open neighbourhood of $s$ we may assume $U = X \times_S Y$.
+Since the projections $Z_s \to Y_s$ and $Z_s \to X_s$
+are isomorphisms, we conclude that after shrinking $S$
+we may assume $Z \to Y$ and $Z \to X$ are isomorphisms, see
+Lemma \ref{lemma-isomorphism-in-neighbourhood}.
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-no-deformations-better}
+Let $k$ be an algebraically closed field. Let $X$
+be a smooth proper scheme over $k$.
+Let $f : Y \to S$ be a smooth proper morphism with $S$ of finite type over $k$.
+Let $K$ be the Fourier-Mukai kernel of a relative equivalence
+from $X \times S$ to $Y$ over $S$. Then $S$ can be covered by
+open subschemes $U$ such that there is a $U$-isomorphism
+$f^{-1}(U) \cong Y_0 \times U$ for some $Y_0$ proper and smooth over $k$.
+\end{lemma}
+
+\begin{proof}
+Choose a closed point $s \in S$. Since $k$ is algebraically closed
+this is a $k$-rational point. Set $Y_0 = Y_s$. The restriction
+$K_0$ of $K$ to $X \times Y_0$ is the Fourier-Mukai kernel of a
+relative equivalence from $X$ to $Y_0$ over $\Spec(k)$ by
+Lemma \ref{lemma-base-change-rek}. Let $K'_0$ in
+$D_{perf}(\mathcal{O}_{Y_0 \times X})$ be the
+object assumed to
+exist in Definition \ref{definition-relative-equivalence-kernel}.
+Then $K'_0$ is the Fourier-Mukai kernel of a
+relative equivalence from $Y_0$ to $X$ over $\Spec(k)$
+by the symmetry inherent in
+Definition \ref{definition-relative-equivalence-kernel}.
+Hence by
+Lemma \ref{lemma-base-change-rek}
+we see that the pullback
+$$
M = L(Y_0 \times X \times S \to Y_0 \times X)^*K'_0
+$$
+on $(Y_0 \times S) \times_S (X \times S) = Y_0 \times X \times S$
+is the Fourier-Mukai kernel of a
+relative equivalence from $Y_0 \times S$ to $X \times S$ over $S$.
+Now consider the kernel
+$$
+K_{new} =
+R\text{pr}_{13, *}(L\text{pr}_{12}^*M
+\otimes_{\mathcal{O}_{(Y_0 \times S) \times_S (X \times S)
+\times_S Y}}^\mathbf{L}
+L\text{pr}_{23}^*K)
+$$
+on $(Y_0 \times S) \times_S Y$. This is the Fourier-Mukai kernel of a
+relative equivalence from $Y_0 \times S$ to $Y$ over $S$ since it is
+the composition of two invertible arrows in
+the category constructed in
+Section \ref{section-category-Fourier-Mukai-kernels}.
+Moreover, this composition passes through base change
+(Lemma \ref{lemma-base-change-is-functor}).
+Hence we see that the pullback of $K_{new}$ to
+$((Y_0 \times S) \times_S Y)_s = Y_0 \times Y_0$
+is equal to the composition of $K_0$ and $K'_0$
+and hence equal to the identity in this category.
+In other words, we have
+$$
+L(Y_0 \times Y_0 \to (Y_0 \times S) \times_S Y)^*K_{new}
+\cong
+\Delta_{Y_0/k, *}\mathcal{O}_{Y_0}
+$$
+Thus by Lemma \ref{lemma-no-deformations} we conclude that $Y \to S$
+is isomorphic to $Y_0 \times S$ in an open neighbourhood of $s$.
+This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+\section{Countability}
+\label{section-countability}
+
+\noindent
+In this section we prove some elementary lemmas about countability
+of certain sets. Let $\mathcal{C}$ be a category. In this section
+we will say that $\mathcal{C}$ is {\it countable} if
+\begin{enumerate}
+\item for any $X, Y \in \Ob(\mathcal{C})$ the set
+$\Mor_\mathcal{C}(X, Y)$ is countable, and
+\item the set of isomorphism classes of objects of $\mathcal{C}$
+is countable.
+\end{enumerate}
+
+\begin{lemma}
+\label{lemma-countable-finite-type}
+Let $R$ be a countable Noetherian ring. Then the category of schemes of finite
+type over $R$ is countable.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
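\medskip\noindent
The omitted argument is a counting exercise along the following lines.
Since $R$ is countable, each polynomial ring $R[x_1, \ldots, x_n]$ is
countable, so up to isomorphism there are at most countably many
finite type $R$-algebras
$$
R[x_1, \ldots, x_n]/(g_1, \ldots, g_m),
\quad n, m \geq 0, \quad g_j \in R[x_1, \ldots, x_n]
$$
A scheme of finite type over $R$ is glued from finitely many spectra of
such algebras along affine opens, and morphisms of such schemes are
likewise determined by countably much affine-local data.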
+
+\begin{lemma}
+\label{lemma-countable-abelian}
+Let $\mathcal{A}$ be a countable abelian category.
+Then $D^b(\mathcal{A})$ is countable.
+\end{lemma}
+
+\begin{proof}
Every object of $D^b(\mathcal{A})$ is represented by a bounded complex
of objects of $\mathcal{A}$, and such a complex is determined by finitely
many objects and morphisms of $\mathcal{A}$. Hence the set of
isomorphism classes of objects of $D^b(\mathcal{A})$ is countable.
Moreover, for bounded complexes $A^\bullet$ and $B^\bullet$ of $\mathcal{A}$
the set $\Hom_{K^b(\mathcal{A})}(A^\bullet, B^\bullet)$ is countable,
as a chain map consists of finitely many morphisms of $\mathcal{A}$.
+We have
+$$
+\Hom_{D^b(\mathcal{A})}(A^\bullet, B^\bullet) =
+\colim_{s : (A')^\bullet \to A^\bullet
+\text{ qis and }(A')^\bullet\text{ bounded}}
+\Hom_{K^b(\mathcal{A})}((A')^\bullet, B^\bullet)
+$$
+by Derived Categories, Lemma \ref{derived-lemma-bounded-derived}.
Thus this is a countable set: up to isomorphism there are only countably
many such quasi-isomorphisms $s$, and the colimit is a quotient of the
corresponding countable union of countable sets.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-countable-perfect}
+Let $X$ be a scheme of finite type over a countable Noetherian ring.
+Then the categories $D_{perf}(\mathcal{O}_X)$ and
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$ are countable.
+\end{lemma}
+
+\begin{proof}
+Observe that $X$ is Noetherian by
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-noetherian}.
+Hence $D_{perf}(\mathcal{O}_X)$ is a full subcategory of
+$D^b_{\textit{Coh}}(\mathcal{O}_X)$ by
+Derived Categories of Schemes, Lemma \ref{perfect-lemma-perfect-on-noetherian}.
+Thus it suffices to prove
+the result for $D^b_{\textit{Coh}}(\mathcal{O}_X)$.
+Recall that
+$D^b_{\textit{Coh}}(\mathcal{O}_X) = D^b(\textit{Coh}(\mathcal{O}_X))$
+by
+Derived Categories of Schemes, Proposition \ref{perfect-proposition-DCoh}.
+Hence by Lemma \ref{lemma-countable-abelian}
+it suffices to prove that $\textit{Coh}(\mathcal{O}_X)$ is
+countable. This we omit.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-countable-isos}
+Let $K$ be an algebraically closed field.
+Let $S$ be a finite type scheme over $K$.
+Let $X \to S$ and $Y \to S$ be finite type morphisms.
+There exists a countable set $I$ and for $i \in I$ a pair
+$(S_i \to S, h_i)$ with the following properties
+\begin{enumerate}
+\item $S_i \to S$ is a morphism of finite type, set
+$X_i = X \times_S S_i$ and $Y_i = Y \times_S S_i$,
+\item $h_i : X_i \to Y_i$ is an isomorphism over $S_i$, and
+\item for any closed point $s \in S(K)$ if $X_s \cong Y_s$
+over $K = \kappa(s)$ then $s$ is in the image of $S_i \to S$
+for some $i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The field $K$ is the filtered union of its countable subfields.
+Dually, $\Spec(K)$ is the cofiltered limit of the spectra
+of the countable subfields of $K$.
+Hence Limits, Lemma \ref{limits-lemma-descend-finite-presentation}
+guarantees that we can find a countable subfield
+$k$ and morphisms $X_0 \to S_0$ and $Y_0 \to S_0$
+of schemes of finite type over $k$ such that
+$X \to S$ and $Y \to S$ are the base changes of these.
+
+\medskip\noindent
+By Lemma \ref{lemma-countable-finite-type} there is a countable set $I$ and
+pairs $(S_{0, i} \to S_0, h_{0, i})$ such that
+\begin{enumerate}
+\item $S_{0, i} \to S_0$ is a morphism of finite type, set
+$X_{0, i} = X_0 \times_{S_0} S_{0, i}$ and
+$Y_{0, i} = Y_0 \times_{S_0} S_{0, i}$,
\item $h_{0, i} : X_{0, i} \to Y_{0, i}$ is an isomorphism over $S_{0, i}$,
\end{enumerate}
and such that every pair $(T \to S_0, h_T)$ with $T \to S_0$ of finite type
+and $h_T : X_0 \times_{S_0} T \to Y_0 \times_{S_0} T$ an isomorphism
+is isomorphic to one of these.
+Denote $(S_i \to S, h_i)$ the base change of $(S_{0, i} \to S_0, h_{0, i})$
+by $\Spec(K) \to \Spec(k)$.
+We claim this works.
+
+\medskip\noindent
+Let $s \in S(K)$ and let $h_s : X_s \to Y_s$ be an isomorphism over
+$K = \kappa(s)$. We can write $K$ as the filtered union of its
+finitely generated $k$-subalgebras. Hence by
+Limits, Proposition
+\ref{limits-proposition-characterize-locally-finite-presentation} and
+Lemma \ref{limits-lemma-descend-finite-presentation}
+we can find such a finitely generated $k$-subalgebra
+$K \supset A \supset k$ such that
+\begin{enumerate}
+\item there is a commutative diagram
+$$
+\xymatrix{
+\Spec(K) \ar[d]_s \ar[r] &
+\Spec(A) \ar[d]^{s'} \\
+S \ar[r] &
+S_0}
+$$
+for some morphism $s' : \Spec(A) \to S_0$ over $k$,
+\item $h_s$ is the base change of an isomorphism
$h_{s'} : X_0 \times_{S_0, s'} \Spec(A) \to
Y_0 \times_{S_0, s'} \Spec(A)$ over $A$.
+\end{enumerate}
+Of course, then $(s' : \Spec(A) \to S_0, h_{s'})$ is isomorphic
+to the pair $(S_{0, i} \to S_0, h_{0, i})$ for some $i \in I$.
+This concludes the proof because the commutative diagram
+in (1) shows that $s$ is in the image of
+the base change of $s'$ to $\Spec(K)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-countable-equivs}
+Let $K$ be an algebraically closed field. There exists a countable set $I$
+and for $i \in I$ a pair $(S_i/K, X_i \to S_i, Y_i \to S_i, M_i)$
+with the following properties
+\begin{enumerate}
+\item $S_i$ is a scheme of finite type over $K$,
+\item $X_i \to S_i$ and $Y_i \to S_i$ are proper smooth
+morphisms of schemes,
+\item $M_i \in D_{perf}(\mathcal{O}_{X_i \times_{S_i} Y_i})$
+is the Fourier-Mukai kernel of a relative equivalence from
+$X_i$ to $Y_i$ over $S_i$, and
+\item for any smooth proper schemes $X$ and $Y$ over $K$
+such that there is a $K$-linear exact equivalence
+$D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$
there exists an $i \in I$ and an $s \in S_i(K)$
+such that $X \cong (X_i)_s$ and $Y \cong (Y_i)_s$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Choose a countable subfield $k \subset K$, for example the prime field.
+By Lemmas \ref{lemma-countable-finite-type} and \ref{lemma-countable-perfect}
+there exists a countable set of isomorphism classes of systems
+over $k$ satisfying parts (1), (2), (3) of the lemma.
+Thus we can choose a countable set
+$I$ and for each $i \in I$ such a system
+$$
+(S_{0, i}/k, X_{0, i} \to S_{0, i}, Y_{0, i} \to S_{0, i}, M_{0, i})
+$$
+over $k$ such that each isomorphism class occurs at least once.
+Denote $(S_i/K, X_i \to S_i, Y_i \to S_i, M_i)$ the base change
+of the displayed system to $K$. This system has properties (1), (2), (3),
+see Lemma \ref{lemma-base-change-rek}. Let us prove property (4).
+
+\medskip\noindent
+Consider smooth proper schemes $X$ and $Y$ over $K$
+such that there is a $K$-linear exact equivalence
+$F : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$.
+By Proposition \ref{proposition-equivalence}
+we may assume that there exists an object
+$M \in D_{perf}(\mathcal{O}_{X \times Y})$
+such that $F = \Phi_M$ is the corresponding Fourier-Mukai functor.
+By Lemma \ref{lemma-fourier-mukai-flat-proper-over-noetherian}
+there is an $M'$ in $D_{perf}(\mathcal{O}_{Y \times X})$
+such that $\Phi_{M'}$ is the right adjoint to $\Phi_M$.
+Since $\Phi_M$ is an equivalence, this means that
+$\Phi_{M'}$ is the quasi-inverse to $\Phi_M$.
+By Lemma \ref{lemma-fourier-mukai-flat-proper-over-noetherian}
+we see that the Fourier-Mukai functors defined by the objects
+$$
+A = R\text{pr}_{13, *}(
+L\text{pr}_{12}^*M
+\otimes_{\mathcal{O}_{X \times Y \times X}}^\mathbf{L}
+L\text{pr}_{23}^*M')
+$$
+in $D_{perf}(\mathcal{O}_{X \times X})$ and
+$$
+B = R\text{pr}_{13, *}(
+L\text{pr}_{12}^*M'
+\otimes_{\mathcal{O}_{Y \times X \times Y}}^\mathbf{L}
+L\text{pr}_{23}^*M)
+$$
+in $D_{perf}(\mathcal{O}_{Y \times Y})$
+are isomorphic to
+$\text{id} : D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_X)$
+and
$\text{id} : D_{perf}(\mathcal{O}_Y) \to D_{perf}(\mathcal{O}_Y)$,
respectively. Hence
+$A \cong \Delta_{X/K, *}\mathcal{O}_X$ and
+$B \cong \Delta_{Y/K, *}\mathcal{O}_Y$
+by Lemma \ref{lemma-uniqueness}. Hence we see that $M$ is the
+Fourier-Mukai kernel of a relative equivalence from $X$ to $Y$
+over $K$ by definition.
+
+\medskip\noindent
+We can write $K$ as the filtered colimit of its finite type
+$k$-subalgebras $A \subset K$. By
+Limits, Lemma \ref{limits-lemma-descend-finite-presentation}
+we can find $X_0, Y_0$ of finite type over $A$ whose
+base changes to $K$ produce $X$ and $Y$.
+By Limits, Lemmas
+\ref{limits-lemma-eventually-proper} and \ref{limits-lemma-descend-smooth}
+after enlarging $A$ we may assume $X_0$ and $Y_0$
+are smooth and proper over $A$.
+By Lemma \ref{lemma-descend-rek}
+after enlarging $A$ we may assume $M$ is the pullback of
+some $M_0 \in D_{perf}(\mathcal{O}_{X_0 \times_{\Spec(A)} Y_0})$
+which is the Fourier-Mukai kernel of a relative equivalence
+from $X_0$ to $Y_0$ over $\Spec(A)$.
+Thus, setting $S_0 = \Spec(A)$, we see that $(S_0/k, X_0 \to S_0, Y_0 \to S_0, M_0)$
+is isomorphic to
+$(S_{0, i}/k, X_{0, i} \to S_{0, i}, Y_{0, i} \to S_{0, i}, M_{0, i})$
+for some $i \in I$.
+Since $S_i = S_{0, i} \times_{\Spec(k)} \Spec(K)$
+we conclude that (4) is true with $s : \Spec(K) \to S_i$
+induced by the morphism $\Spec(K) \to \Spec(A) \cong S_{0, i}$
+we get from $A \subset K$.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Countability of derived equivalent varieties}
+\label{section-countable-derived-equivalent}
+
+\noindent
+In this section we prove a result of Anel and To\"en, see \cite{AT}.
+
+\begin{definition}
+\label{definition-derived-equivalent}
+Let $k$ be a field. Let $X$ and $Y$ be smooth projective schemes over $k$.
+We say $X$ and $Y$ are {\it derived equivalent} if there exists a $k$-linear
+exact equivalence
+$D_{perf}(\mathcal{O}_X) \to D_{perf}(\mathcal{O}_Y)$.
+\end{definition}
+
+\noindent
+Here is the result.
+
+\begin{theorem}
+\label{theorem-countable}
+\begin{reference}
+Slight improvement of \cite{AT}
+\end{reference}
+Let $K$ be an algebraically closed field. Let $\mathbf{X}$ be a smooth proper
+scheme over $K$. There are at most countably many isomorphism classes
+of smooth proper schemes $\mathbf{Y}$ over $K$ which are derived
+equivalent to $\mathbf{X}$.
+\end{theorem}
+
+\begin{proof}
+Choose a countable set $I$ and for $i \in I$ systems
+$(S_i/K, X_i \to S_i, Y_i \to S_i, M_i)$ satisfying properties
+(1), (2), (3), and (4) of Lemma \ref{lemma-countable-equivs}.
+Pick $i \in I$ and set $S = S_i$, $X = X_i$, $Y = Y_i$, and
+$M = M_i$. Clearly it suffices to show that
+the set of isomorphism classes of fibres $Y_s$ for $s \in S(K)$
+such that $X_s \cong \mathbf{X}$ is countable.
+This we prove in the next paragraph.
+
+\medskip\noindent
+Let $S$ be a finite type scheme over $K$, let $X \to S$ and $Y \to S$
+be proper smooth morphisms, and let $M \in D_{perf}(\mathcal{O}_{X \times_S Y})$
+be the Fourier-Mukai kernel of a relative equivalence from $X$
+to $Y$ over $S$. We will show
+the set of isomorphism classes of fibres $Y_s$ for $s \in S(K)$
+such that $X_s \cong \mathbf{X}$ is countable.
+By Lemma \ref{lemma-countable-isos} applied
+to the families $\mathbf{X} \times S \to S$ and $X \to S$
+there exists a countable set $I$ and for $i \in I$ a pair
+$(S_i \to S, h_i)$ with the following properties
+\begin{enumerate}
+\item $S_i \to S$ is a morphism of finite type, set
+$X_i = X \times_S S_i$,
+\item $h_i : \mathbf{X} \times S_i \to X_i$
+is an isomorphism over $S_i$, and
+\item for any closed point $s \in S(K)$ if $\mathbf{X} \cong X_s$
+over $K = \kappa(s)$ then $s$ is in the image of $S_i \to S$
+for some $i$.
+\end{enumerate}
+Set $Y_i = Y \times_S S_i$. Denote
+$M_i \in D_{perf}(\mathcal{O}_{X_i \times_{S_i} Y_i})$
+the pullback of $M$. By Lemma \ref{lemma-base-change-rek}
+$M_i$ is the Fourier-Mukai kernel of a relative equivalence from
+$X_i$ to $Y_i$ over $S_i$. Since $I$ is countable, by
+property (3) it suffices to prove that
+the set of isomorphism classes of fibres $Y_{i, s}$ for $s \in S_i(K)$
+is countable.
+In fact, this number is finite by
+Lemma \ref{lemma-no-deformations-better}
+and the proof is complete.
+\end{proof}
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
+
diff --git a/books/stacks/etale-cohomology.tex b/books/stacks/etale-cohomology.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f6418dd139c90e3d3c0b87707f1bd151a237e5fe
--- /dev/null
+++ b/books/stacks/etale-cohomology.tex
@@ -0,0 +1,23689 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{\'Etale Cohomology}
+
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+This chapter is the first in a series of chapters on the \'etale cohomology
+of schemes. In this chapter we discuss the very basics of the \'etale
+topology and cohomology of abelian sheaves in this topology. Many of the
+topics discussed may be safely skipped on a first reading; please see
+the advice in the next section as to how to decide what to skip.
+
+\medskip\noindent
+The initial version of this chapter was formed by the notes
+of the first part of a course on \'etale cohomology taught by Johan de Jong
+at Columbia University in the Fall of 2009. The original note takers were
+Thibaut Pugin, Zachary Maddock and Min Lee. The second part of the
+course can be found in the chapter on the trace formula, see
+The Trace Formula, Section \ref{trace-section-introduction}.
+
+
+
+
+\section{Which sections to skip on a first reading?}
+\label{section-skip}
+
+\noindent
+We want to use the material in this chapter for the development of
+theory related to algebraic spaces, Deligne-Mumford stacks, algebraic stacks,
+etc. Thus we have added some pretty technical material to the original
+exposition of \'etale cohomology for schemes. The reader can recognize this
+material by the frequency of the word ``topos'', or by discussions related
+to set theory, or by proofs dealing with very general properties of morphisms
+of schemes. Some of these discussions can be skipped on a first reading.
+
+\medskip\noindent
+In particular, we suggest that the reader skip the following sections:
+\begin{enumerate}
+\item Comparing big and small topoi,
+Section \ref{section-compare}.
+\item Recovering morphisms,
+Section \ref{section-morphisms}.
+\item Push and pull,
+Section \ref{section-monomorphisms}.
+\item Property (A),
+Section \ref{section-A}.
+\item Property (B),
+Section \ref{section-B}.
+\item Property (C),
+Section \ref{section-C}.
+\item Topological invariance of the small \'etale site,
+Section \ref{section-topological-invariance}.
+\item Integral universally injective morphisms,
+Section \ref{section-integral-universally-injective}.
+\item Big sites and pushforward,
+Section \ref{section-big}.
+\item Exactness of big lower shriek,
+Section \ref{section-exactness-lower-shriek}.
+\end{enumerate}
+Besides these sections there are some sporadic results that may be skipped
+that the reader can recognize by the keywords given above.
+
+
+
+%9.08.09
+\section{Prologue}
+\label{section-prologue}
+
+\noindent
+These lectures are about another cohomology theory. The first thing to remark
+is that the Zariski topology is not entirely satisfactory. One of the main
+reasons that it fails to give the results that we would want is that if $X$ is
+a complex variety and $\mathcal{F}$ is a constant sheaf then
+$$
+H^i(X, \mathcal{F}) = 0, \quad \text{ for all } i > 0.
+$$
+The reason for that is the following. In an irreducible scheme (a variety in
+particular), any two nonempty open subsets meet, and so the restriction
+mappings of a constant sheaf are surjective. We say that the sheaf is
+{\it flasque}. In this case, all higher {\v C}ech cohomology groups vanish, and
+so do all higher Zariski cohomology groups. In other words, there are ``not
+enough'' open sets in the Zariski topology to detect this higher cohomology.
+
+\medskip\noindent
+On the other hand, if $X$ is a smooth projective complex variety, then
+$$
+H_{Betti}^{2 \dim X}(X (\mathbf{C}), \Lambda) = \Lambda \quad \text{ for }
+\Lambda = \mathbf{Z}, \ \mathbf{Z}/n\mathbf{Z},
+$$
+where $X(\mathbf{C})$ means the set of complex points of $X$. This is a feature
+that would be nice to replicate in algebraic geometry, in positive
+characteristic in particular.
+
+
+
+
+\section{The \'etale topology}
+\label{section-etale-topology}
+
+\noindent
+It is very hard to simply ``add'' extra open sets to refine the Zariski
+topology. One efficient way to define a topology is to consider not only open
+sets, but also some schemes that lie over them. To define the \'etale topology,
+one considers all morphisms $\varphi : U \to X$ which are \'etale. If
+$X$ is a smooth projective variety over $\mathbf{C}$, then this means
+\begin{enumerate}
+\item $U$ is a disjoint union of smooth varieties, and
+\item $\varphi$ is (analytically) locally an isomorphism.
+\end{enumerate}
+The word ``analytically'' refers to the usual (transcendental) topology over
+$\mathbf{C}$. So the second condition means that the derivative of $\varphi$
+has full rank everywhere (and in particular all the components of $U$
+have the same dimension as $X$).
+
+\medskip\noindent
+A double cover -- loosely defined as a finite degree $2$ map between varieties
+-- for example
+$$
+\Spec(\mathbf{C}[t])
+\longrightarrow
+\Spec(\mathbf{C}[t]),
+\quad t \longmapsto t^2
+$$
+will not be an \'etale morphism if it has a fibre consisting of a single point.
+In the example this happens when $t = 0$. For a finite map between varieties
+over $\mathbf{C}$ to be \'etale all the fibers should have the same number of
+points. Removing the point $t = 0$ from the source of the map in the example
+will make the morphism \'etale. But we can remove other points from the source
+of the morphism also, and the morphism will still be \'etale. To consider the
+\'etale topology, we have to look at all such morphisms. Unlike the Zariski
+topology, these need not be merely open subsets of $X$, even though their
+images always are.
+
+\begin{definition}
+\label{definition-etale-covering-initial}
+A family of morphisms $\{ \varphi_i : U_i \to X\}_{i \in I}$ is
+called an {\it \'etale covering} if each $\varphi_i$ is an \'etale morphism
+and their images cover $X$, i.e.,
+$X = \bigcup_{i \in I} \varphi_i(U_i)$.
+\end{definition}
+
+\noindent
+This ``defines'' the \'etale topology. In other words, we can now say what the
+sheaves are. An {\it \'etale sheaf} $\mathcal{F}$ of sets
+(resp.\ abelian groups, vector spaces, etc) on $X$ is the data:
+\begin{enumerate}
+\item for each \'etale morphism $\varphi : U \to X$ a set
+(resp.\ abelian group, vector space, etc) $\mathcal{F}(U)$,
+\item for each pair $U, \ U'$ of \'etale schemes over $X$,
+and each morphism $U \to U'$ over $X$ (which is
+automatically \'etale) a restriction map
+$\rho^{U'}_U : \mathcal{F}(U') \to \mathcal{F}(U)$.
+\end{enumerate}
+These data have to satisfy the condition that $\rho^U_U = \text{id}$
+in case of the identity morphism $U \to U$
+and that $\rho^{U'}_U \circ \rho^{U''}_{U'} = \rho^{U''}_U$
+when we have morphisms $U \to U' \to U''$ of schemes \'etale over $X$
+as well as the following {\it sheaf axiom}:
+\begin{itemize}
+\item[(*)] for every \'etale covering $\{ \varphi_i : U_i \to U\}_{i \in
+I}$, the diagram
+$$
+\xymatrix{
+\emptyset \ar[r] &
+\mathcal{F} (U) \ar[r] &
+\prod_{i \in I} \mathcal{F} (U_i) \ar@<1ex>[r] \ar@<-1ex>[r] &
+\prod_{i, j \in I} \mathcal{F} (U_i \times_U U_j)
+}
+$$
+is exact in the category of sets (resp.\ abelian groups, vector spaces, etc).
+\end{itemize}
+
+\begin{remark}
+\label{remark-i-is-j}
+In the last statement, it is essential not to forget the case where $i = j$
+which is in general a highly nontrivial condition (unlike in the Zariski
+topology). In fact, frequently important coverings have only one element.
+\end{remark}
+
+\noindent
+Since the identity is an \'etale morphism, we can compute the global sections
+of an \'etale sheaf, and cohomology will simply be the corresponding
+right-derived functors. In other words, once more theory has been developed and
+statements have been made precise, there will be no obstacle to defining
+cohomology.
+
+
+
+
+\section{Feats of the \'etale topology}
+\label{section-feats}
+
+\noindent
+For a natural number $n \in \mathbf{N} = \{1, 2, 3, 4, \ldots\}$ it is true that
+$$
+H_\etale^2 (\mathbf{P}^1_\mathbf{C}, \mathbf{Z}/n\mathbf{Z}) =
+\mathbf{Z}/n\mathbf{Z}.
+$$
+More generally, if $X$ is a complex variety, then its \'etale Betti numbers
+with coefficients in a finite field agree with the usual Betti numbers of
+$X(\mathbf{C})$, i.e.,
+$$
+\dim_{\mathbf{F}_q} H_\etale^{2i} (X, \mathbf{F}_q) =
+\dim_{\mathbf{F}_q} H_{Betti}^{2i} (X(\mathbf{C}), \mathbf{F}_q).
+$$
+This is extremely satisfactory. However, these equalities only hold for torsion
+coefficients, not in general. For integer coefficients, one has
+$$
+H_\etale^2 (\mathbf{P}^1_\mathbf{C}, \mathbf{Z}) = 0.
+$$
+By contrast $H_{Betti}^2(\mathbf{P}^1(\mathbf{C}), \mathbf{Z}) = \mathbf{Z}$
+as the topological space $\mathbf{P}^1(\mathbf{C})$ is homeomorphic to
+a $2$-sphere.
+There are ways to get back to nontorsion coefficients from torsion ones by a
+limit procedure which we will come to shortly.
+
+
+
+
+\section{A computation}
+\label{section-computation}
+
+\noindent
+How do we compute the cohomology of $\mathbf{P}^1_\mathbf{C}$ with coefficients
+$\Lambda = \mathbf{Z}/n\mathbf{Z}$?
+We use {\v C}ech cohomology. A covering of $\mathbf{P}^1_\mathbf{C}$ is given
+by the two standard opens $U_0, U_1$, which are both
+isomorphic to $\mathbf{A}^1_\mathbf{C}$, and whose intersection is isomorphic
+to $\mathbf{A}^1_\mathbf{C} \setminus \{0\} = \mathbf{G}_{m, \mathbf{C}}$.
+It turns out that the Mayer-Vietoris sequence holds in \'etale cohomology.
+This gives an exact sequence
+$$
+H_\etale^{i-1}(U_0\cap U_1, \Lambda) \to
+H_\etale^i(\mathbf{P}^1_\mathbf{C}, \Lambda) \to
+H_\etale^i(U_0, \Lambda) \oplus
+H_\etale^i(U_1, \Lambda) \to H_\etale^i(U_0\cap U_1,
+\Lambda).
+$$
+To get the answer we expect, we would need to show that the direct sum in the
+third term vanishes. In fact, it is true that, as for the usual topology,
+$$
+H_\etale^q (\mathbf{A}^1_\mathbf{C}, \Lambda) = 0
+\quad \text{ for } q \geq 1,
+$$
+and
+$$
+H_\etale^q (\mathbf{A}^1_\mathbf{C} \setminus \{0\}, \Lambda) = \left\{
+\begin{matrix}
+\Lambda & \text{ if }q = 1\text{, and} \\
+0 & \text{ for }q \geq 2.
+\end{matrix}
+\right.
+$$
+These results are already quite hard (what is an elementary proof?). Let us
+explain how we would compute this once the machinery of \'etale cohomology is
+at our disposal.
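+
+\medskip\noindent
+Granting the displayed vanishing statements, the degree $2$ piece of the
+Mayer-Vietoris sequence collapses to
+$$
+0 \to H_\etale^1(U_0 \cap U_1, \Lambda) \to
+H_\etale^2(\mathbf{P}^1_\mathbf{C}, \Lambda) \to 0
+$$
+using $H_\etale^1(U_j, \Lambda) = 0$ on the left and
+$H_\etale^2(U_j, \Lambda) = 0$ on the right. Hence
+$H_\etale^2(\mathbf{P}^1_\mathbf{C}, \Lambda) \cong
+H_\etale^1(\mathbf{G}_{m, \mathbf{C}}, \Lambda) = \Lambda$ as expected.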
+
+\medskip\noindent
+{\bf Higher cohomology.} This is taken care of by the following general
+fact: if $X$ is an affine curve over $\mathbf{C}$, then
+$$
+H_\etale^q (X, \mathbf{Z}/n\mathbf{Z}) = 0 \quad \text{ for } q \geq 2.
+$$
+This is proved by considering the generic point of the curve and doing some
+Galois cohomology. So we only have to worry about the cohomology in degree 1.
+
+\medskip\noindent
+{\bf Cohomology in degree 1.} We use the following identifications:
+\begin{eqnarray*}
+H_\etale^1 (X, \mathbf{Z}/n\mathbf{Z}) = \left\{
+\begin{matrix}
+\text{sheaves of sets }\mathcal{F}\text{ on the \'etale site }X_\etale
+\text{ endowed with an} \\
+\text{action }\mathbf{Z}/n\mathbf{Z} \times \mathcal{F} \to \mathcal{F}
+\text{ such that }\mathcal{F}\text{ is a }\mathbf{Z}/n\mathbf{Z}\text{-torsor.}
+\end{matrix}
+\right\}
+\Big/ \cong
+\\
+ = \left\{
+\begin{matrix}
+\text{morphisms }Y \to X\text{ which are finite \'etale together} \\
+\text{ with a free }\mathbf{Z}/n\mathbf{Z}\text{ action such that }
+X = Y/(\mathbf{Z}/n\mathbf{Z}).
+\end{matrix}
+\right\}
+\Big/ \cong.
+\end{eqnarray*}
+The first identification is very general (it is true for any cohomology theory
+on a site) and has nothing to do with the \'etale topology. The second
+identification is a consequence of descent theory. The last set describes a
+collection of geometric objects on which we can get our hands.
+
+\medskip\noindent
+The curve $\mathbf{A}^1_\mathbf{C}$ has no nontrivial finite \'etale covering
+and hence
+$H_\etale^1 (\mathbf{A}^1_\mathbf{C}, \mathbf{Z}/n\mathbf{Z}) = 0$.
+This can be seen either topologically or by using the argument in the next
+paragraph.
+
+\medskip\noindent
+Let us describe the finite \'etale coverings
+$\varphi : Y \to \mathbf{A}^1_\mathbf{C} \setminus \{0\}$.
+It suffices to consider the case where $Y$ is
+connected, which we assume. We are going to find out what $Y$ can be
+by applying the Riemann-Hurwitz formula (of course this is a bit silly, and
+you can go ahead and skip the next section if you like).
+Say that this morphism is $n$ to 1, and consider a
+projective compactification
+$$
+\xymatrix{
+{Y\ } \ar@{^{(}->}[r] \ar[d]^\varphi &
+{\bar Y} \ar[d]^{\bar\varphi} \\
+{\mathbf{A}^1_\mathbf{C} \setminus \{0\}} \ar@{^{(}->}[r] &
+{\mathbf{P}^1_\mathbf{C}}
+}
+$$
+Even though $\varphi$ is \'etale and does not ramify, $\bar{\varphi}$ may
+ramify at 0 and $\infty$. Say that the preimages of 0 are the points $y_1,
+\ldots, y_r$ with indices of ramification $e_1, \ldots, e_r$, and that the
+preimages of $\infty$ are the points $y_1', \ldots, y_s'$ with indices of
+ramification $d_1, \ldots, d_s$. In particular, $\sum e_i = n = \sum d_j$.
+Applying the Riemann-Hurwitz formula, we get
+$$
+2 g_Y - 2 = -2n + \sum (e_i - 1) + \sum (d_j - 1)
+$$
+and therefore $g_Y = 0$, $r = s = 1$ and $e_1 = d_1 = n$.
+Hence $Y \cong {\mathbf{A}^1_\mathbf{C} \setminus \{0\}}$, and it is easy to
+see that $\varphi(z) = \lambda z^n$ for some $\lambda \in \mathbf{C}^*$.
+After reparametrizing $Y$ we may assume $\lambda = 1$. Thus our
+covering is given by taking the $n$th root of the coordinate on
+$\mathbf{A}^1_{\mathbf{C}} \setminus \{0\}$.
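+
+\medskip\noindent
+To spell out the arithmetic in the Riemann-Hurwitz step: since
+$\sum e_i = n = \sum d_j$ we have
+$$
+2 g_Y - 2 = -2n + \sum (e_i - 1) + \sum (d_j - 1)
+= -2n + (n - r) + (n - s) = -(r + s).
+$$
+As $g_Y \geq 0$ and $r, s \geq 1$, the only possibility is
+$r = s = 1$ and $g_Y = 0$, and then $e_1 = n = d_1$.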
+
+\medskip\noindent
+Remember that we need to classify the coverings of
+${\mathbf{A}^1_\mathbf{C} \setminus \{0\}}$ together with free
+$\mathbf{Z}/n\mathbf{Z}$-actions on them.
+In our case any such action corresponds
+to an automorphism of $Y$ sending $z$ to $\zeta_n z$, where $\zeta_n$ is a
+primitive $n$th root of unity. There are $\phi(n)$ such actions
+(here $\phi(n)$ means the Euler function). Thus there are exactly
+$\phi(n)$ connected finite \'etale coverings with a given free
+$\mathbf{Z}/n\mathbf{Z}$-action, each corresponding to a primitive
+$n$th root of unity. We leave it to the reader to see that the
+disconnected finite \'etale degree $n$ coverings of
+$\mathbf{A}^1_{\mathbf{C}} \setminus \{0\}$ with a given free
+$\mathbf{Z}/n\mathbf{Z}$-action correspond one-to-one with $n$th
+roots of $1$ which are not primitive.
+In other words, this computation shows that
+$$
+H_\etale^1 (\mathbf{A}^1_\mathbf{C} \setminus \{0\},
+\mathbf{Z}/n\mathbf{Z}) =
+\Hom(\mu_n(\mathbf{C}), \mathbf{Z}/n\mathbf{Z}) \cong \mathbf{Z}/n\mathbf{Z}.
+$$
+The first identification is canonical, the second isn't, see
+Remark \ref{remark-normalize-H1-Gm}.
+Since the proof of Riemann-Hurwitz does not use the computation of
+cohomology, the above actually constitutes a proof (provided we
+fill in the details on vanishing, etc).
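+
+\medskip\noindent
+For example, for $n = 12$ the connected coverings correspond to the
+primitive $12$th roots of unity $\zeta_{12}^k$ with
+$k \in \{1, 5, 7, 11\}$, and indeed $\phi(12) = 4$.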
+
+
+
+
+\section{Nontorsion coefficients}
+\label{section-nontorsion}
+
+\noindent
+To study nontorsion coefficients, one makes the following definition:
+$$
+H_\etale^i (X, \mathbf{Q}_\ell) :=
+\left( \lim_n H_\etale^i(X, \mathbf{Z}/\ell^n\mathbf{Z}) \right)
+\otimes_{\mathbf{Z}_\ell} \mathbf{Q}_\ell.
+$$
+The symbol $\lim_n$ denotes the {\it limit} of the system of
+cohomology groups $H_\etale^i(X, \mathbf{Z}/\ell^n\mathbf{Z})$ indexed
+by $n$, see
+Categories, Section \ref{categories-section-posets-limits}.
+Thus we will need to study systems of sheaves satisfying some compatibility
+conditions.
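+
+\medskip\noindent
+For example, an element of
+$\mathbf{Z}_2 = \lim_n \mathbf{Z}/2^n\mathbf{Z}$ is a sequence of residues,
+each reducing to the previous one; thus $-1$ corresponds to the compatible
+system $(1, 3, 7, 15, \ldots)$, i.e., to $2^n - 1 \bmod 2^n$ for all $n$.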
+
+
+
+
+\section{Sheaf theory}
+\label{section-sheaf-theory}
+%9.10.09
+
+\noindent
+At this point we start talking about sites and sheaves in earnest.
+There is an amazing amount of useful abstract material that could fit
+in the next few sections. Some of this material is worked out in earlier
+chapters, such as the chapter on sites, modules on sites, and cohomology
+on sites. We try to refrain from adding too much material here, just
+enough so the material later in this chapter makes sense.
+
+
+
+
+\section{Presheaves}
+\label{section-presheaves}
+
+\noindent
+A reference for this section is
+Sites, Section \ref{sites-section-presheaves}.
+
+\begin{definition}
+\label{definition-presheaf}
+Let $\mathcal{C}$ be a category. A {\it presheaf of sets} (respectively, an
+{\it abelian presheaf}) on $\mathcal{C}$ is a functor $\mathcal{C}^{opp} \to
+\textit{Sets}$ (resp.\ $\textit{Ab}$).
+\end{definition}
+
+\noindent
+{\bf Terminology.} If $U \in \Ob(\mathcal{C})$, then elements of
+$\mathcal{F}(U)$ are called {\it sections} of $\mathcal{F}$ over
+$U$. For $\varphi : V \to U$ in $\mathcal{C}$, the
+map $\mathcal{F}(\varphi) : \mathcal{F}(U) \to \mathcal{F}(V)$
+is called the {\it restriction map} and is often denoted $s \mapsto s|_V$
+or sometimes $s \mapsto \varphi^*s$. The notation $s|_V$ is ambiguous
+since the restriction map depends on $\varphi$, but it is a standard
+abuse of notation. We also use the notation
+$\Gamma(U, \mathcal{F}) = \mathcal{F}(U)$.
+
+\medskip\noindent
+Saying that $\mathcal{F}$ is a functor means that if
+$W \to V \to U$ are morphisms in $\mathcal{C}$ and
+$s \in \Gamma(U, \mathcal{F})$ then
+$(s|_V)|_W = s |_W$, with the abuse of
+notation just seen. Moreover, the restriction mappings corresponding to
+the identity morphisms $\text{id}_U : U \to U$ are the identity.
+
+\medskip\noindent
+The category of presheaves of sets (respectively of abelian presheaves) on
+$\mathcal{C}$ is denoted $\textit{PSh} (\mathcal{C})$ (resp. $\textit{PAb}
+(\mathcal{C})$). It is the category of functors from $\mathcal{C}^{opp}$ to
+$\textit{Sets}$ (resp. $\textit{Ab}$), which is to say that the morphisms of
+presheaves are natural transformations of functors. We only consider the
+categories $\textit{PSh}(\mathcal{C})$ and $\textit{PAb}(\mathcal{C})$
+when the category $\mathcal{C}$ is small. (Our convention is that a category
+is small unless otherwise mentioned, and if it isn't small it should be
+listed in Categories, Remark \ref{categories-remark-big-categories}.)
+
+\begin{example}
+\label{example-representable-presheaf}
+Given an object $X \in \Ob(\mathcal{C})$, we consider the functor
+$$
+\begin{matrix}
+h_X : & \mathcal{C}^{opp} & \longrightarrow & \textit{Sets} \\
+& U & \longmapsto & h_X(U) = \Mor_\mathcal{C}(U, X) \\
+& V \xrightarrow{\varphi} U & \longmapsto &
+\varphi \circ - : h_X(U) \to h_X(V).
+\end{matrix}
+$$
+It is a presheaf, called the {\it representable presheaf associated to $X$.}
+It is not true that representable presheaves are sheaves in every topology on
+every site.
+\end{example}
+
+\begin{lemma}[Yoneda]
+\label{lemma-yoneda}
+\begin{slogan}
+Morphisms between objects are in bijection with natural transformations
+between the functors they represent.
+\end{slogan}
+Let $\mathcal{C}$ be a category, and $X, Y \in
+\Ob(\mathcal{C})$. There is a natural bijection
+$$
+\begin{matrix}
+\Mor_\mathcal{C}(X, Y) &
+\longrightarrow &
+\Mor_{\textit{PSh}(\mathcal{C})} (h_X, h_Y) \\
+\psi &
+\longmapsto &
+h_\psi = \psi \circ - : h_X \to h_Y.
+\end{matrix}
+$$
+\end{lemma}
+
+\begin{proof}
+See
+Categories, Lemma \ref{categories-lemma-yoneda}.
+\end{proof}
+
+
+
+
+\section{Sites}
+\label{section-sites}
+
+
+\begin{definition}
+\label{definition-family-morphisms-fixed-target}
+Let $\mathcal{C}$ be a category. A {\it family of morphisms with fixed target}
+$\mathcal{U} = \{\varphi_i : U_i \to U\}_{i\in I}$ is the data of
+\begin{enumerate}
+\item an object $U \in \mathcal{C}$,
+\item a set $I$ (possibly empty), and
+\item for all $i\in I$, a morphism $\varphi_i : U_i \to U$ of $\mathcal{C}$
+with target $U$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+There is a notion of a {\it morphism of families of morphisms with fixed
+target}. A special case of that is the notion of a {\it refinement}.
+A reference for this material is
+Sites, Section \ref{sites-section-refinements}.
+
+\begin{definition}
+\label{definition-site}
+A {\it site}\footnote{What we call a site is called a category endowed with
+a pretopology in \cite[Expos\'e II, D\'efinition 1.3]{SGA4}.
+In \cite{ArtinTopologies} it is called a category with a Grothendieck
+topology.} consists of a category $\mathcal{C}$ and a set
+$\text{Cov}(\mathcal{C})$ consisting of families of morphisms with fixed target
+called {\it coverings}, such that
+\begin{enumerate}
+\item (isomorphism) if $\varphi : V \to U$ is an isomorphism in $\mathcal{C}$,
+then $\{\varphi : V \to U\}$ is a covering,
+\item (locality) if $\{\varphi_i : U_i \to U\}_{i\in I}$ is a covering and
+for all $i \in I$ we are given a covering
+$\{\psi_{ij} : U_{ij} \to U_i \}_{j\in I_i}$, then
+$$
+\{
+\varphi_i \circ \psi_{ij} : U_{ij} \to U
+\}_{(i, j)\in \prod_{i\in I} \{i\} \times I_i}
+$$
+is also a covering, and
+\item (base change) if $\{U_i \to U\}_{i\in I}$
+is a covering and $V \to U$ is a morphism in $\mathcal{C}$, then
+\begin{enumerate}
+\item for all $i \in I$ the fibre product
+$U_i \times_U V$ exists in $\mathcal{C}$, and
+\item $\{U_i \times_U V \to V\}_{i\in I}$ is a covering.
+\end{enumerate}
+\end{enumerate}
+\end{definition}
+
+\noindent
+For us the category underlying a site is always ``small'', i.e., its
+collection of objects form a set, and the collection of coverings of
+a site is a set as well (as in the definition above). We will mostly,
+in this chapter, leave out the arguments that cut down the collection
+of objects and coverings to a set. For further discussion, see
+Sites, Remark \ref{sites-remark-no-big-sites}.
+
+\begin{example}
+\label{example-site-topological-space}
+If $X$ is a topological space, then it has an associated site $X_{Zar}$
+defined as follows: the objects of $X_{Zar}$ are the open subsets of $X$,
+the morphisms between these are the inclusion mappings, and the coverings are
+the usual topological (surjective) coverings. Observe that if
+$U, V \subset W \subset X$ are open subsets then $U \times_W V = U \cap V$
+exists: this category has fiber products. All the verifications are trivial and
+everything works as expected.
+\end{example}
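+
+\medskip\noindent
+For instance, the base change axiom of Definition \ref{definition-site}
+holds in $X_{Zar}$: if $U = \bigcup_{i \in I} U_i$ is a covering and
+$V \subset U$ is open, then
+$\bigcup_{i \in I} (U_i \cap V) = \left(\bigcup_{i \in I} U_i\right) \cap V = V$,
+so $\{U_i \cap V \to V\}_{i \in I}$ is again a covering.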
+
+
+
+
+\section{Sheaves}
+\label{section-sheaves}
+
+\begin{definition}
+\label{definition-sheaf}
+A presheaf $\mathcal{F}$ of sets (resp. abelian presheaf) on a site
+$\mathcal{C}$ is said to be a {\it separated presheaf} if for all coverings
+$\{\varphi_i : U_i \to U\}_{i\in I} \in \text{Cov} (\mathcal{C})$
+the map
+$$
+\mathcal{F}(U) \longrightarrow \prod\nolimits_{i\in I} \mathcal{F}(U_i)
+$$
+is injective. Here the map is $s \mapsto (s|_{U_i})_{i\in I}$.
+The presheaf $\mathcal{F}$ is a {\it sheaf} if for all coverings
+$\{\varphi_i : U_i \to U\}_{i\in I} \in \text{Cov} (\mathcal{C})$, the
+diagram
+\begin{equation}
+\label{equation-sheaf-axiom}
+\xymatrix{
+\mathcal{F}(U) \ar[r] &
+\prod_{i\in I} \mathcal{F}(U_i) \ar@<1ex>[r] \ar@<-1ex>[r] &
+\prod_{i, j \in I} \mathcal{F}(U_i \times_U U_j),
+}
+\end{equation}
+where the first map is $s \mapsto (s|_{U_i})_{i\in I}$ and the two
+maps on the right are
+$(s_i)_{i\in I} \mapsto (s_i |_{U_i \times_U U_j})$ and
+$(s_i)_{i\in I} \mapsto (s_j |_{U_i \times_U U_j})$,
+is an equalizer diagram in the category of sets (resp.\ abelian groups).
+\end{definition}
+
+\begin{remark}
+\label{remark-empty-covering}
+For the empty covering (where $I = \emptyset$), this implies that
+$\mathcal{F}(\emptyset)$ is an empty product, which is a final object in the
+corresponding category (a singleton, for both $\textit{Sets}$ and
+$\textit{Ab}$).
+\end{remark}
+
+\begin{example}
+\label{example-sheaf-site-space}
+Working this out for the site $X_{Zar}$ associated to a topological
+space, see Example \ref{example-site-topological-space}, gives the usual
+notion of sheaves.
+\end{example}
+
+\begin{definition}
+\label{definition-category-sheaves}
+We denote $\Sh(\mathcal{C})$ (resp.\ $\textit{Ab}(\mathcal{C})$)
+the full subcategory of $\textit{PSh}(\mathcal{C})$
+(resp.\ $\textit{PAb}(\mathcal{C})$) whose objects are sheaves. This is the
+{\it category of sheaves of sets} (resp.\ {\it abelian sheaves}) on
+$\mathcal{C}$.
+\end{definition}
+
+
+
+
+\section{The example of G-sets}
+\label{section-G-sets}
+
+\noindent
+Let $G$ be a group and define a site $\mathcal{T}_G$ as follows: the underlying
+category is the category of $G$-sets, i.e., its objects are sets endowed
+with a left $G$-action and the morphisms are equivariant maps; and the
+coverings of $\mathcal{T}_G$ are the families
+$\{\varphi_i : U_i \to U\}_{i\in I}$ satisfying
+$U = \bigcup_{i\in I} \varphi_i(U_i)$.
+
+\medskip\noindent
+There is a special object in the site $\mathcal{T}_G$, namely the $G$-set $G$
+endowed with its natural action by left translations. We denote it ${}_G G$.
+Observe that there is a natural group isomorphism
+$$
+\begin{matrix}
+\rho : & G^{opp} & \longrightarrow & \text{Aut}_{G\textit{-Sets}}({}_G G) \\
+& g & \longmapsto & (h \mapsto hg).
+\end{matrix}
+$$
+In particular, for any presheaf $\mathcal{F}$, the set $\mathcal{F}({}_G G)$
+inherits a $G$-action via $\rho$. (Note that by contravariance of
+$\mathcal{F}$, the set $\mathcal{F}({}_G G)$ is again a left $G$-set.) In fact,
+the functor
+$$
+\begin{matrix}
+\Sh(\mathcal{T}_G) & \longrightarrow & G\textit{-Sets} \\
+\mathcal{F} & \longmapsto & \mathcal{F}({}_G G)
+\end{matrix}
+$$
+is an equivalence of categories. Its quasi-inverse is the functor $X \mapsto
+h_X$. Without giving the complete proof (which can be found in
+Sites, Section \ref{sites-section-example-sheaf-G-sets})
+let us try to explain why this is true.
+\begin{enumerate}
+\item
+If $S$ is a $G$-set, we can decompose it into orbits $S = \coprod_{i\in I}
+O_i$. The sheaf axiom for the covering $\{O_i \to S\}_{i\in I}$ says that
+$$
+\xymatrix{
+\mathcal{F}(S) \ar[r] &
+\prod_{i\in I} \mathcal{F}(O_i) \ar@<1ex>[r] \ar@<-1ex>[r] &
+\prod_{i, j \in I} \mathcal{F}(O_i \times_S O_j)
+}
+$$
+is an equalizer. Observing that fibered products in $G\textit{-Sets}$ are
+induced from fibered products in $\textit{Sets}$, and using the fact that
+$\mathcal{F}(\emptyset)$ is a $G$-singleton, we get that
+$$
+\prod_{i, j \in I} \mathcal{F}(O_i \times_S O_j) = \prod_{i \in I}
+\mathcal{F}(O_i)
+$$
+and the two maps above are in fact the same. Therefore the sheaf axiom merely
+says that $\mathcal{F}(S) = \prod_{i\in I} \mathcal{F}(O_i)$.
+\item
+If $S$ is the $G$-set $S= G/H$ and $\mathcal{F}$ is a sheaf on $\mathcal{T}_G$,
+then we claim that
+$$
+\mathcal{F}(G/H) = \mathcal{F}({}_G G)^H
+$$
+and in particular $\mathcal{F}(\{*\}) = \mathcal{F}({}_G G)^G$. To see this,
+let's use the sheaf axiom for the covering $\{ {}_G G \to G/H \}$ of $S$. We
+have an isomorphism
+\begin{eqnarray*}
+{}_G G \times_{G/H} {}_G G & \cong & G \times H \\
+(g_1, g_2) & \longmapsto & (g_1, g_1^{-1} g_2)
+\end{eqnarray*}
+and the right hand side is a disjoint union of copies of ${}_G G$
+(as a $G$-set). Hence the sheaf axiom
+reads
+$$
+\xymatrix{
+\mathcal{F} (G/H) \ar[r] &
+\mathcal{F}({}_G G) \ar@<1ex>[r] \ar@<-1ex>[r] &
+\prod_{h\in H} \mathcal{F}({}_G G)
+}
+$$
+where the two maps on the right are $s \mapsto (s)_{h \in H}$ and $s \mapsto
+(hs)_{h \in H}$. Therefore $\mathcal{F}(G/H) = \mathcal{F}({}_G G)^H$ as
+claimed.
+\end{enumerate}
+This doesn't quite prove the claimed equivalence of categories, but it shows at
+least that a sheaf $\mathcal{F}$ is entirely determined by its sections over
+${}_G G$. Details (and set theoretical remarks) can be found in
+Sites, Section \ref{sites-section-example-sheaf-G-sets}.
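+
+\medskip\noindent
+Indeed, in the last displayed diagram the equalizer of the two maps is
+$$
+\{ s \in \mathcal{F}({}_G G) : hs = s \text{ for all } h \in H \}
+= \mathcal{F}({}_G G)^H,
+$$
+which is exactly the claimed identification
+$\mathcal{F}(G/H) = \mathcal{F}({}_G G)^H$.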
+
+
+
+
+\section{Sheafification}
+\label{section-sheafification}
+
+\begin{definition}
+\label{definition-0-cech}
+Let $\mathcal{F}$ be a presheaf on the site $\mathcal{C}$ and
+$\mathcal{U} = \{U_i \to U\} \in \text{Cov} (\mathcal{C})$.
+We define the {\it zeroth {\v C}ech cohomology group} of
+$\mathcal{F}$ with respect to $\mathcal{U}$ by
+$$
+\check H^0 (\mathcal{U}, \mathcal{F}) =
+\left\{
+(s_i)_{i\in I} \in \prod\nolimits_{i\in I }\mathcal{F}(U_i)
+\text{ such that }
+s_i|_{U_i \times_U U_j} = s_j |_{U_i \times_U U_j}
+\right\}.
+$$
+\end{definition}
+
+\noindent
+There is a canonical map
+$\mathcal{F}(U) \to \check H^0 (\mathcal{U}, \mathcal{F})$,
+$s \mapsto (s |_{U_i})_{i\in I}$.
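Note that $\mathcal{F}$ is a sheaf if and only if this canonical map is a
bijection for every covering $\mathcal{U}$: indeed,
$\check H^0(\mathcal{U}, \mathcal{F})$ is precisely the equalizer of the
two maps
$\prod_i \mathcal{F}(U_i) \to \prod_{i, j} \mathcal{F}(U_i \times_U U_j)$
occurring in the sheaf axiom.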
+We say that a {\it morphism of coverings} from a covering
+$\mathcal{V} = \{V_j \to V\}_{j \in J}$ to $\mathcal{U}$ is a triple
$(\chi, \alpha, \{\chi_j\}_{j \in J})$, where
+$\chi : V \to U$ is a morphism,
+$\alpha : J \to I$ is a map of sets, and for all
+$j \in J$ the morphism $\chi_j$ fits into a commutative diagram
+$$
+\xymatrix{
+V_j \ar[rr]_{\chi_j} \ar[d] & & U_{\alpha(j)} \ar[d] \\
+V \ar[rr]^\chi & & U.
+}
+$$
+Given the data $\chi, \alpha, \{\chi_j\}_{j \in J}$ we define
+\begin{eqnarray*}
+\check H^0(\mathcal{U}, \mathcal{F}) & \longrightarrow &
+\check H^0(\mathcal{V}, \mathcal{F}) \\
+(s_i)_{i\in I} & \longmapsto &
+\left(\chi_j^*\left(s_{\alpha(j)}\right)\right)_{j\in J}.
+\end{eqnarray*}
+We then claim that
+\begin{enumerate}
+\item the map is well-defined, and
+\item depends only on $\chi$ and is independent of the choice of
+$\alpha, \{\chi_j\}_{j \in J}$.
+\end{enumerate}
+We omit the proof of the first fact.
+To see part (2), consider another triple $(\psi, \beta, \psi_j)$ with
+$\chi = \psi$. Then we have the commutative diagram
+$$
+\xymatrix{
+V_j \ar[rrr]_{(\chi_j, \psi_j)} \ar[dd] & & &
+U_{\alpha(j)} \times_U U_{\beta(j)} \ar[dl] \ar[dr] \\
+& & U_{\alpha(j)} \ar[dr] & &
+U_{\beta(j)} \ar[dl] \\
+V \ar[rrr]^{\chi = \psi} & & & U.
+}
+$$
Given an element
$s = (s_i)_{i \in I} \in \check H^0(\mathcal{U}, \mathcal{F})$, its image in
$\mathcal{F}(V_j)$ under the map given by
+$(\chi, \alpha, \{\chi_j\}_{j \in J})$
+is $\chi_j^*s_{\alpha(j)}$, and
+its image under the map given by $(\psi, \beta, \{\psi_j\}_{j \in J})$
+is $\psi_j^*s_{\beta(j)}$. These
+two are equal since by assumption $s \in \check H^0(\mathcal{U}, \mathcal{F})$
+and hence both are equal to the pullback of the common value
+$$
+s_{\alpha(j)}|_{U_{\alpha(j)} \times_U U_{\beta(j)}} =
+s_{\beta(j)}|_{U_{\alpha(j)} \times_U U_{\beta(j)}}
+$$
+pulled back by the map $(\chi_j, \psi_j)$ in the diagram.
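In particular, taking $V = U$ and $\chi = \text{id}_U$, any refinement of a
covering of $U$ induces a map on $\check H^0$ which is independent of the
choices of $\alpha$ and $\chi_j$ by part (2). This is what makes the colimit
in Theorem \ref{theorem-sheafification} below well defined.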
+
+\begin{theorem}
+\label{theorem-sheafification}
+Let $\mathcal{C}$ be a site and $\mathcal{F}$ a presheaf on $\mathcal{C}$.
+\begin{enumerate}
+\item The rule
+$$
+U \mapsto \mathcal{F}^+(U) :=
+\colim_{\mathcal{U} \text{ covering of }U}
+\check H^0(\mathcal{U}, \mathcal{F})
+$$
is a presheaf, and the colimit is a directed colimit.
+\item There is a canonical map of presheaves $\mathcal{F} \to \mathcal{F}^+$.
+\item If $\mathcal{F}$ is a separated presheaf then $\mathcal{F}^+$ is a sheaf
+and the map in (2) is injective.
+\item $\mathcal{F}^+$ is a separated presheaf.
+\item $\mathcal{F}^\# = (\mathcal{F}^+)^+$ is a sheaf, and the canonical
+map induces a functorial isomorphism
+$$
+\Hom_{\textit{PSh}(\mathcal{C})}(\mathcal{F}, \mathcal{G}) =
+\Hom_{\Sh(\mathcal{C})}(\mathcal{F}^\#, \mathcal{G})
+$$
+for any $\mathcal{G} \in \Sh(\mathcal{C})$.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+See Sites, Theorem \ref{sites-theorem-plus}.
+\end{proof}
+
+\noindent
In other words, the sheafification functor
$\mathcal{F} \mapsto \mathcal{F}^\#$ is left adjoint to the inclusion
(forgetful) functor
$\Sh(\mathcal{C}) \to \textit{PSh}(\mathcal{C})$.
+
+
+
+
+\section{Cohomology}
+\label{section-cohomology}
+
+\noindent
+The following is the basic result that makes it possible to define cohomology
+for abelian sheaves on sites.
+
+\begin{theorem}
+\label{theorem-enough-injectives}
+The category of abelian sheaves on a site is an abelian category
+which has enough injectives.
+\end{theorem}
+
+\begin{proof}
+See
+Modules on Sites, Lemma \ref{sites-modules-lemma-abelian-abelian} and
+Injectives, Theorem \ref{injectives-theorem-sheaves-injectives}.
+\end{proof}
+
+\noindent
+So we can define cohomology as the right-derived functors of the
+sections functor: if $U \in \Ob(\mathcal{C})$ and
+$\mathcal{F} \in \textit{Ab}(\mathcal{C})$,
+$$
+H^p(U, \mathcal{F}) :=
+R^p\Gamma(U, \mathcal{F}) =
+H^p(\Gamma(U, \mathcal{I}^\bullet))
+$$
+where $\mathcal{F} \to \mathcal{I}^\bullet$ is an injective resolution. To do
+this, we should check that the functor $\Gamma(U, -)$ is left exact. This is
+true and is part of why the category $\textit{Ab}(\mathcal{C})$ is abelian,
+see
+Modules on Sites, Lemma \ref{sites-modules-lemma-abelian-abelian}.
+For more general discussion of cohomology on sites (including the
+global sections functor and its right derived functors), see
+Cohomology on Sites, Section \ref{sites-cohomology-section-cohomology-sheaves}.
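
\medskip\noindent
In particular $H^0(U, \mathcal{F}) = \mathcal{F}(U)$: by left exactness of
$\Gamma(U, -)$ the sequence
$$
0 \to \mathcal{F}(U) \to \mathcal{I}^0(U) \to \mathcal{I}^1(U)
$$
is exact, and the term on the left computes the zeroth cohomology of the
complex $\Gamma(U, \mathcal{I}^\bullet)$.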
+
+
+
+\section{The fpqc topology}
+\label{section-fpqc}
+%9.15.09
+
+\noindent
+Before doing \'etale cohomology we study a bit the fpqc topology, since
+it works well for quasi-coherent sheaves.
+
+\begin{definition}
+\label{definition-fpqc-covering}
+Let $T$ be a scheme. An {\it fpqc covering} of $T$ is a family
+$\{ \varphi_i : T_i \to T\}_{i \in I}$ such that
+\begin{enumerate}
+\item each $\varphi_i$ is a flat morphism and
+$\bigcup_{i\in I} \varphi_i(T_i) = T$, and
+\item for each affine open $U \subset T$ there exists a finite
+set $K$, a map $\mathbf{i} : K \to I$ and affine opens
+$U_{\mathbf{i}(k)} \subset T_{\mathbf{i}(k)}$ such that
+$U = \bigcup_{k \in K} \varphi_{\mathbf{i}(k)}(U_{\mathbf{i}(k)})$.
+\end{enumerate}
+\end{definition}
+
+\begin{remark}
+\label{remark-fpqc}
+The first condition corresponds to fp, which stands for
{\it fid\`element plat}, faithfully flat in French, and
+the second to qc, {\it quasi-compact}. The second part of
+the first condition is unnecessary when the second condition holds.
+\end{remark}
+
+\begin{example}
+\label{example-fpqc-coverings}
+Examples of fpqc coverings.
+\begin{enumerate}
+\item Any Zariski open covering of $T$ is an fpqc covering.
+\item A family $\{\Spec(B) \to \Spec(A)\}$ is an fpqc
+covering if and only if $A \to B$ is a faithfully flat ring map.
+\item If $f: X \to Y$ is flat, surjective and quasi-compact, then $\{ f: X\to
+Y\}$ is an fpqc covering.
+\item The morphism
+$\varphi :
+\coprod_{x \in \mathbf{A}^1_k} \Spec(\mathcal{O}_{\mathbf{A}^1_k, x})
+\to \mathbf{A}^1_k$,
+where $k$ is a field, is flat and surjective. It is not quasi-compact, and
+in fact the family $\{\varphi\}$ is not an fpqc covering.
+\item Write
+$\mathbf{A}^2_k = \Spec(k[x, y])$. Denote $i_x : D(x) \to \mathbf{A}^2_k$
+and $i_y : D(y) \to \mathbf{A}^2_k$ the standard opens.
+Then the families
+$\{i_x, i_y, \Spec(k[[x, y]]) \to \mathbf{A}^2_k\}$
+and
+$\{i_x, i_y, \Spec(\mathcal{O}_{\mathbf{A}^2_k, 0}) \to \mathbf{A}^2_k\}$
+are fpqc coverings.
+\end{enumerate}
+\end{example}
+
+\begin{lemma}
+\label{lemma-site-fpqc}
+The collection of fpqc coverings on the category of schemes
satisfies the axioms of a site.
+\end{lemma}
+
+\begin{proof}
+See Topologies, Lemma \ref{topologies-lemma-fpqc}.
+\end{proof}
+
+\noindent
+It seems that this lemma allows us to define the fpqc site of the category
+of schemes. However, there is a set theoretical problem that comes up when
+considering the fpqc topology, see
+Topologies, Section \ref{topologies-section-fpqc}.
It comes from our requirement that sites be ``small'', combined with the
fact that no small category of schemes can contain a cofinal system of fpqc
coverings of a given nonempty scheme. Although this does not strictly
speaking prevent
+us from defining ``partial'' fpqc
+sites, it does not seem prudent to do so. The work-around is to allow
+the notion of a sheaf for the fpqc topology (see below) but to prohibit
+considering the category of all fpqc sheaves.
+
+\begin{definition}
+\label{definition-sheaf-property-fpqc}
+Let $S$ be a scheme. The category of schemes over $S$ is denoted
+$\Sch/S$. Consider a functor
+$\mathcal{F} : (\Sch/S)^{opp} \to \textit{Sets}$, in other words
+a presheaf of sets. We say $\mathcal{F}$
+{\it satisfies the sheaf property for the fpqc topology}
+if for every fpqc covering $\{U_i \to U\}_{i \in I}$ of schemes over $S$
+the diagram (\ref{equation-sheaf-axiom}) is an equalizer diagram.
+\end{definition}
+
+\noindent
+We similarly say that $\mathcal{F}$
+{\it satisfies the sheaf property for the Zariski topology} if for
+every open covering $U = \bigcup_{i \in I} U_i$ the diagram
+(\ref{equation-sheaf-axiom}) is an equalizer diagram. See
+Schemes, Definition \ref{schemes-definition-representable-by-open-immersions}.
+Clearly, this is equivalent to saying that for every scheme $T$ over $S$ the
+restriction of $\mathcal{F}$ to the opens of $T$ is a (usual) sheaf.
+
+\begin{lemma}
+\label{lemma-fpqc-sheaves}
+Let $\mathcal{F}$ be a presheaf on $\Sch/S$. Then
+$\mathcal{F}$ satisfies the sheaf property for the fpqc topology
+if and only if
+\begin{enumerate}
+\item $\mathcal{F}$ satisfies the sheaf property with respect to the
+Zariski topology, and
+\item for every faithfully flat morphism $\Spec(B) \to \Spec(A)$
+of affine schemes over $S$, the sheaf axiom holds for the covering
+$\{\Spec(B) \to \Spec(A)\}$. Namely, this means that
+$$
+\xymatrix{
+\mathcal{F}(\Spec(A)) \ar[r] &
+\mathcal{F}(\Spec(B)) \ar@<1ex>[r] \ar@<-1ex>[r] &
+\mathcal{F}(\Spec(B \otimes_A B))
+}
+$$
+is an equalizer diagram.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+See Topologies, Lemma \ref{topologies-lemma-sheaf-property-fpqc}.
+\end{proof}
+
+\noindent
+An alternative way to think of a presheaf $\mathcal{F}$ on
+$\Sch/S$ which satisfies the sheaf condition for the
+fpqc topology is as the following data:
+\begin{enumerate}
+\item for each $T/S$, a usual (i.e., Zariski) sheaf $\mathcal{F}_T$ on
+$T_{Zar}$,
+\item for every map $f : T' \to T$ over $S$, a restriction mapping
+$f^{-1}\mathcal{F}_T \to \mathcal{F}_{T'}$
+\end{enumerate}
+such that
+\begin{enumerate}
+\item[(a)] the restriction mappings are functorial,
+\item[(b)] if $f : T' \to T$ is an open immersion then the restriction
+mapping $f^{-1}\mathcal{F}_T \to \mathcal{F}_{T'}$ is an isomorphism, and
+\item[(c)] for every faithfully flat morphism
+$\Spec(B) \to \Spec(A)$ over $S$, the diagram
+$$
+\xymatrix{
+\mathcal{F}_{\Spec(A)}(\Spec(A)) \ar[r] &
+\mathcal{F}_{\Spec(B)}(\Spec(B)) \ar@<1ex>[r] \ar@<-1ex>[r] &
+\mathcal{F}_{\Spec(B \otimes_A B)}(\Spec(B \otimes_A B))
+}
+$$
+is an equalizer.
+\end{enumerate}
+Data (1) and (2) and conditions (a), (b) give the data of a presheaf
+on $\Sch/S$ satisfying the sheaf condition for the Zariski topology.
+By Lemma \ref{lemma-fpqc-sheaves} condition (c) then suffices to get the
+sheaf condition for the fpqc topology.
+
+\begin{example}
+\label{example-quasi-coherent}
+Consider the presheaf
+$$
+\begin{matrix}
+\mathcal{F} : & (\Sch/S)^{opp} & \longrightarrow & \textit{Ab} \\
+& T/S & \longmapsto & \Gamma(T, \Omega_{T/S}).
+\end{matrix}
+$$
+The compatibility of differentials with localization implies that
+$\mathcal{F}$ is a sheaf on the Zariski site.
+However, it does not satisfy the sheaf condition for the fpqc topology.
+Namely, consider the case
+$S = \Spec(\mathbf{F}_p)$ and the morphism
+$$
+\varphi :
+V = \Spec(\mathbf{F}_p[v])
+\to
+U = \Spec(\mathbf{F}_p[u])
+$$
given by mapping $u$ to $v^p$. The family $\{\varphi\}$ is an fpqc covering,
yet the restriction mapping
$\mathcal{F}(U) \to \mathcal{F}(V)$
sends the generator $\text{d}u$ to
$\text{d}(v^p) = p v^{p - 1} \text{d}v = 0$
(we are in characteristic $p$), so
it is the zero map, and the diagram
+$$
+\xymatrix{
+\mathcal{F}(U) \ar[r]^{0} &
+\mathcal{F}(V) \ar@<1ex>[r] \ar@<-1ex>[r] &
+\mathcal{F}(V \times_U V)
+}
+$$
+is not an equalizer. We will see later that $\mathcal{F}$ does in fact
+give rise to a sheaf on the \'etale and smooth sites.
+\end{example}
+
+\begin{lemma}
+\label{lemma-representable-sheaf-fpqc}
+Any representable presheaf on $\Sch/S$ satisfies the
+sheaf condition for the fpqc topology.
+\end{lemma}
+
+\begin{proof}
+See
+Descent, Lemma \ref{descent-lemma-fpqc-universal-effective-epimorphisms}.
+\end{proof}
+
+\noindent
+We will return to this later, since the proof of this fact uses
+descent for quasi-coherent sheaves, which we will discuss in the next
+section. A fancy way of expressing the lemma is to say that
+{\it the fpqc topology is weaker than the canonical topology}, or
+that the fpqc topology is {\it subcanonical}. In the setting of sites
+this is discussed in
+Sites, Section \ref{sites-section-representable-sheaves}.
+
+\begin{remark}
+\label{remark-fpqc-finest}
The fpqc topology is finer than the Zariski, \'etale, smooth, syntomic,
and fppf
+topologies. Hence any presheaf
+satisfying the sheaf condition for the fpqc topology will be a
+sheaf on the Zariski, \'etale, smooth, syntomic, and fppf
+sites. In particular
+representable presheaves will be sheaves on the \'etale site of a scheme
+for example.
+\end{remark}
+
+\begin{example}
+\label{example-additive-group-sheaf}
+Let $S$ be a scheme.
+Consider the additive group scheme $\mathbf{G}_{a, S} = \mathbf{A}^1_S$
+over $S$, see
+Groupoids, Example \ref{groupoids-example-additive-group}.
+The associated representable presheaf is given by
+$$
+h_{\mathbf{G}_{a, S}}(T) =
+\Mor_S(T, \mathbf{G}_{a, S}) =
+\Gamma(T, \mathcal{O}_T).
+$$
+By the above we now know that this is a presheaf of sets which satisfies the
+sheaf condition for the fpqc topology. On the other hand, it is clearly
+a presheaf of rings as well. Hence we can think of this as a functor
+$$
+\begin{matrix}
+\mathcal{O} : &
+(\Sch/S)^{opp} &
+\longrightarrow &
+\textit{Rings} \\
+&
+T/S &
+\longmapsto &
+\Gamma(T, \mathcal{O}_T)
+\end{matrix}
+$$
+which satisfies the sheaf condition for the fpqc topology.
+Correspondingly there is a notion of $\mathcal{O}$-module, and so on and
+so forth.
+\end{example}
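
\noindent
A standard companion example (not spelled out above): the multiplicative
group scheme $\mathbf{G}_{m, S}$ over $S$ represents the functor of units,
$$
h_{\mathbf{G}_{m, S}}(T) =
\Mor_S(T, \mathbf{G}_{m, S}) =
\Gamma(T, \mathcal{O}_T)^*,
$$
so by Lemma \ref{lemma-representable-sheaf-fpqc} the presheaf
$T/S \mapsto \Gamma(T, \mathcal{O}_T)^*$ satisfies the sheaf condition for
the fpqc topology as well.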
+
+
+
+
+\section{Faithfully flat descent}
+\label{section-fpqc-descent}
+
+\noindent
+In this section we discuss faithfully flat descent for quasi-coherent
+modules. More precisely, we will prove quasi-coherent modules satisfy
+effective descent with respect to fpqc coverings.
+
+\begin{definition}
+\label{definition-descent-datum}
+Let $\mathcal{U} = \{ t_i : T_i \to T\}_{i \in I}$ be a family of
+morphisms of schemes with fixed target. A {\it descent datum} for
+quasi-coherent sheaves with respect to $\mathcal{U}$ is a collection
+$((\mathcal{F}_i)_{i \in I}, (\varphi_{ij})_{i, j \in I})$ where
+\begin{enumerate}
+\item $\mathcal{F}_i$ is a quasi-coherent sheaf on $T_i$, and
+\item $\varphi_{ij} : \text{pr}_0^* \mathcal{F}_i \to
+\text{pr}_1^* \mathcal{F}_j$ is an isomorphism of modules
+on $T_i \times_T T_j$,
+\end{enumerate}
+such that the {\it cocycle condition} holds: the diagrams
+$$
+\xymatrix{
+\text{pr}_0^*\mathcal{F}_i \ar[dr]_{\text{pr}_{02}^*\varphi_{ik}}
+\ar[rr]^{\text{pr}_{01}^*\varphi_{ij}} & &
+\text{pr}_1^*\mathcal{F}_j \ar[dl]^{\text{pr}_{12}^*\varphi_{jk}} \\
+& \text{pr}_2^*\mathcal{F}_k
+}
+$$
+commute on $T_i \times_T T_j \times_T T_k$.
This descent datum is called {\it effective} if there exists a quasi-coherent
+sheaf $\mathcal{F}$ over $T$ and $\mathcal{O}_{T_i}$-module isomorphisms
+$\varphi_i : t_i^* \mathcal{F} \cong \mathcal{F}_i$ compatible with
+the maps $\varphi_{ij}$, namely
+$$
+\varphi_{ij} = \text{pr}_1^* (\varphi_j) \circ \text{pr}_0^* (\varphi_i)^{-1}.
+$$
+\end{definition}
+
+\noindent
+In this and the next section we discuss some ingredients of the proof
+of the following theorem, as well as some related material.
+
+\begin{theorem}
+\label{theorem-descent-quasi-coherent}
+If $\mathcal{V} = \{T_i \to T\}_{i\in I}$ is an fpqc covering, then all
+descent data for quasi-coherent sheaves with respect to $\mathcal{V}$
+are effective.
+\end{theorem}
+
+\begin{proof}
+See
+Descent, Proposition \ref{descent-proposition-fpqc-descent-quasi-coherent}.
+\end{proof}
+
+\noindent
+In other words, the fibered category of quasi-coherent sheaves is a stack on
+the fpqc site.
+The proof of the theorem is in two steps. The first one is to realize that for
+Zariski coverings this is easy (or well-known) using standard glueing of
+sheaves (see
+Sheaves, Section \ref{sheaves-section-glueing-sheaves})
+and the locality of quasi-coherence. The second step is the case of an
+fpqc covering of the form $\{\Spec(B) \to \Spec(A)\}$
+where $A \to B$ is a faithfully flat ring map.
+This is a lemma in algebra, which we now present.
+
+\medskip\noindent
+{\bf Descent of modules.}
+If $A \to B$ is a ring map, we consider the complex
+$$
+(B/A)_\bullet : B \to B \otimes_A B \to B \otimes_A B \otimes_A B \to \ldots
+$$
where $B$ is in degree $0$, $B \otimes_A B$ in degree $1$, etc., and the
maps are
+given by
+\begin{eqnarray*}
+b & \mapsto & 1 \otimes b - b \otimes 1, \\
+b_0 \otimes b_1 & \mapsto & 1 \otimes b_0 \otimes b_1 - b_0 \otimes 1 \otimes
+b_1 + b_0 \otimes b_1 \otimes 1, \\
+& \text{etc.}
+\end{eqnarray*}
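
\noindent
In closed form, the differential in degree $p$ inserts a $1$ in each
possible spot with alternating signs:
$$
\text{d}(b_0 \otimes \cdots \otimes b_p)
=
\sum\nolimits_{i = 0}^{p + 1} (-1)^i\,
b_0 \otimes \cdots \otimes b_{i - 1} \otimes 1 \otimes b_i \otimes \cdots
\otimes b_p.
$$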
+
+\begin{lemma}
+\label{lemma-algebra-descent}
+If $A \to B$ is faithfully flat, then the complex $(B/A)_\bullet$ is exact in
+positive degrees, and $H^0((B/A)_\bullet) = A$.
+\end{lemma}
+
+\begin{proof}
+See Descent, Lemma \ref{descent-lemma-ff-exact}.
+\end{proof}
+
+\noindent
+Grothendieck proves this in three steps. Firstly, he assumes that the map $A
+\to B$ has a section, and constructs an explicit homotopy to the complex where
+$A$ is the only nonzero term, in degree 0. Secondly, he observes that to prove
+the result, it suffices to do so after a faithfully flat base change $A \to
+A'$, replacing $B$ with $B' = B \otimes_A A'$. Thirdly, he applies the
faithfully flat base change $A \to A' = B$ and remarks that the map
+$A' = B \to B' = B \otimes_A B$ has a natural section.
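
\medskip\noindent
In the first step the homotopy can be made explicit (a sketch; the maps
$\sigma$ and $h$ are our notation): let $\sigma : B \to A$ be a section,
i.e., a ring map with $\sigma \circ (A \to B) = \text{id}_A$. Define
$h : B^{\otimes p + 1} \to B^{\otimes p}$ by
$$
h(b_0 \otimes b_1 \otimes \cdots \otimes b_p) =
\sigma(b_0)\, b_1 \otimes \cdots \otimes b_p
$$
for $p \geq 1$ and $h = \sigma$ in degree $0$. A direct computation shows
$\text{d}h + h\text{d} = \text{id}$ in positive degrees, where $\text{d}$ is
the differential of $(B/A)_\bullet$, and
$h\text{d} = \text{id} - (A \to B) \circ \sigma$ in degree $0$, which proves
the lemma in this case.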
+
+\medskip\noindent
+The same strategy proves the following lemma.
+
+\begin{lemma}
+\label{lemma-descent-modules}
+If $A \to B$ is faithfully flat and $M$ is an $A$-module, then the
+complex $(B/A)_\bullet \otimes_A M$ is exact in positive degrees, and
+$H^0((B/A)_\bullet \otimes_A M) = M$.
+\end{lemma}
+
+\begin{proof}
+See Descent, Lemma \ref{descent-lemma-ff-exact}.
+\end{proof}
+
+\begin{definition}
+\label{definition-descent-datum-modules}
+Let $A \to B$ be a ring map and $N$ a $B$-module. A {\it descent datum} for
+$N$ with respect to $A \to B$ is an isomorphism
+$\varphi : N \otimes_A B \cong B \otimes_A N$ of $B \otimes_A B$-modules such
+that the diagram of $B \otimes_A B \otimes_A B$-modules
+$$
+\xymatrix{
+{N \otimes_A B \otimes_A B} \ar[dr]_{\varphi_{02}} \ar[rr]^{\varphi_{01}} & &
+{B \otimes_A N \otimes_A B} \ar[dl]^{\varphi_{12}} \\
+& {B \otimes_A B \otimes_A N}
+}
+$$
+commutes where $\varphi_{01} = \varphi \otimes \text{id}_B$ and similarly
+for $\varphi_{12}$ and $\varphi_{02}$.
+\end{definition}
+
+\noindent
If $N' = B \otimes_A M$ for some $A$-module $M$, then it has a canonical
descent
+datum given by the map
+$$
+\begin{matrix}
+\varphi_\text{can}: & N' \otimes_A B & \to & B \otimes_A N' \\
+& b_0 \otimes m \otimes b_1 & \mapsto & b_0 \otimes b_1 \otimes m.
+\end{matrix}
+$$
+
+\begin{definition}
+\label{definition-effective-modules}
+A descent datum $(N, \varphi)$ is called {\it effective} if there exists an
+$A$-module $M$ such that $(N, \varphi) \cong (B \otimes_A M,
+\varphi_\text{can})$, with the obvious notion of isomorphism of descent data.
+\end{definition}
+
+\noindent
Theorem \ref{theorem-descent-quasi-coherent} is a consequence of the
following result.
+
+\begin{theorem}
+\label{theorem-descent-modules}
+If $A \to B$ is faithfully flat then descent data with respect to $A\to B$
+are effective.
+\end{theorem}
+
+\begin{proof}
+See
+Descent, Proposition \ref{descent-proposition-descent-module}.
+See also
+Descent, Remark \ref{descent-remark-homotopy-equivalent-cosimplicial-algebras}
+for an alternative view of the proof.
+\end{proof}
+
+\begin{remarks}
+\label{remarks-theorem-modules-exactness}
+The results on descent of modules have several applications:
+\begin{enumerate}
+\item The exactness of the {\v C}ech complex in positive degrees for
+the covering $\{\Spec(B) \to \Spec(A)\}$ where $A \to B$ is
+faithfully flat. This will give some vanishing of cohomology.
+\item If $(N, \varphi)$ is a descent datum with respect to a faithfully
+flat map $A \to B$, then the corresponding $A$-module is given by
+$$
+M = \Ker \left(
+\begin{matrix}
+N & \longrightarrow & B \otimes_A N \\
+n & \longmapsto & 1 \otimes n - \varphi(n \otimes 1)
+\end{matrix}
+\right).
+$$
+See
+Descent, Proposition \ref{descent-proposition-descent-module}.
+\end{enumerate}
+\end{remarks}
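
\noindent
As a consistency check for part (2): for the canonical descent datum on
$N = B \otimes_A M$ the map
$n \mapsto 1 \otimes n - \varphi_\text{can}(n \otimes 1)$
sends $\sum b_i \otimes m_i$ to
$$
\sum 1 \otimes b_i \otimes m_i - \sum b_i \otimes 1 \otimes m_i,
$$
which is the degree $0$ differential of $(B/A)_\bullet \otimes_A M$. Hence
the kernel is $H^0((B/A)_\bullet \otimes_A M) = M$ by
Lemma \ref{lemma-descent-modules}, as it should be.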
+
+
+
+
+%9.17.09
+\section{Quasi-coherent sheaves}
+\label{section-quasi-coherent}
+
+\noindent
+We can apply the descent of modules to study quasi-coherent sheaves.
+
+\begin{proposition}
+\label{proposition-quasi-coherent-sheaf-fpqc}
+For any quasi-coherent sheaf $\mathcal{F}$ on $S$ the presheaf
+$$
+\begin{matrix}
\mathcal{F}^a : & (\Sch/S)^{opp} & \to & \textit{Ab}\\
+& (f: T \to S) & \mapsto & \Gamma(T, f^*\mathcal{F})
+\end{matrix}
+$$
+is an $\mathcal{O}$-module which satisfies the sheaf condition for the
+fpqc topology.
+\end{proposition}
+
+\begin{proof}
+This is proved in
+Descent, Lemma \ref{descent-lemma-sheaf-condition-holds}.
+We indicate the proof here. As established in
+Lemma \ref{lemma-fpqc-sheaves},
+it is enough to check the sheaf property
+on Zariski coverings and faithfully flat morphisms of affine schemes. The
+sheaf property for Zariski coverings is standard scheme theory, since
+$\Gamma(U, i^\ast \mathcal{F}) = \mathcal{F}(U)$ when
+$i : U \hookrightarrow S$ is an open immersion.
+
+\medskip\noindent
+For $\left\{\Spec(B)\to \Spec(A)\right\}$ with $A\to B$ faithfully
+flat and
+$\mathcal{F}|_{\Spec(A)} = \widetilde{M}$
+this corresponds to the fact that
+$M = H^0\left((B/A)_\bullet \otimes_A M \right)$, i.e., that
+\begin{align*}
+0 \to M \to B \otimes_A M \to B \otimes_A B \otimes_A M
+\end{align*}
+is exact by
+Lemma \ref{lemma-descent-modules}.
+\end{proof}
+
+\noindent
+There is an abstract notion of a quasi-coherent sheaf on a ringed site.
+We briefly introduce this here. For more information please consult
+Modules on Sites, Section \ref{sites-modules-section-local}.
+Let $\mathcal{C}$ be a category, and let $U$ be an object of $\mathcal{C}$.
+Then $\mathcal{C}/U$ indicates the category of objects over $U$, see
+Categories, Example \ref{categories-example-category-over-X}.
+If $\mathcal{C}$ is a site, then $\mathcal{C}/U$ is a site as well, namely
+the coverings of $V/U$ are families $\{V_i/U \to V/U\}$ of morphisms
+of $\mathcal{C}/U$ with fixed target such that
+$\{V_i \to V\}$ is a covering of $\mathcal{C}$. Moreover, given any
+sheaf $\mathcal{F}$ on $\mathcal{C}$ the {\it restriction}
+$\mathcal{F}|_{\mathcal{C}/U}$ (defined in the obvious manner)
+is a sheaf as well. See
+Sites, Section \ref{sites-section-localize}
+for details.
+
+\begin{definition}
+\label{definition-ringed-site}
+Let $\mathcal{C}$ be a {\it ringed site}, i.e., a site endowed with a
+sheaf of rings $\mathcal{O}$. A sheaf of $\mathcal{O}$-modules $\mathcal{F}$ on
+$\mathcal{C}$ is called {\it quasi-coherent} if for all
+$U \in \Ob(\mathcal{C})$ there exists a covering
+$\{U_i \to U\}_{i\in I}$ of $\mathcal{C}$ such that the restriction
+$\mathcal{F}|_{\mathcal{C}/U_i}$ is isomorphic to the cokernel of
+an $\mathcal{O}$-linear map of free $\mathcal{O}$-modules
+$$
+\bigoplus\nolimits_{k \in K} \mathcal{O}|_{\mathcal{C}/U_i}
+\longrightarrow
+\bigoplus\nolimits_{l \in L} \mathcal{O}|_{\mathcal{C}/U_i}.
+$$
The direct sum over $K$ is the sheaf associated to the presheaf
$V \mapsto \bigoplus_{k \in K} \mathcal{O}(V)$, and similarly for the
direct sum over $L$.
+\end{definition}
+
+\noindent
Although it is useful to be able to give a general definition as above,
this notion is not well behaved in general.
+
+\begin{remark}
+\label{remark-final-object}
+In the case where $\mathcal{C}$ has a final object, e.g.\ $S$, it
+suffices to check the condition of the definition for
+$U = S$ in the above statement. See
+Modules on Sites, Lemma \ref{sites-modules-lemma-local-final-object}.
+\end{remark}
+
+\begin{theorem}[Meta theorem on quasi-coherent sheaves]
+\label{theorem-quasi-coherent}
+Let $S$ be a scheme.
+Let $\mathcal{C}$ be a site. Assume that
+\begin{enumerate}
+\item the underlying category $\mathcal{C}$ is a
+full subcategory of $\Sch/S$,
+\item any Zariski covering of $T \in \Ob(\mathcal{C})$
+can be refined by a covering of $\mathcal{C}$,
+\item $S/S$ is an object of $\mathcal{C}$,
+\item every covering of $\mathcal{C}$ is an fpqc covering of schemes.
+\end{enumerate}
+Then the presheaf $\mathcal{O}$ is a sheaf on $\mathcal{C}$ and
+any quasi-coherent $\mathcal{O}$-module on $(\mathcal{C}, \mathcal{O})$
+is of the form $\mathcal{F}^a$ for some quasi-coherent sheaf
+$\mathcal{F}$ on $S$.
+\end{theorem}
+
+\begin{proof}
+After some formal arguments this is exactly Theorem
+\ref{theorem-descent-quasi-coherent}. Details omitted. In
+Descent, Proposition \ref{descent-proposition-equivalence-quasi-coherent}
+we prove a more precise version of the theorem for the
+big Zariski, fppf, \'etale, smooth, and syntomic sites of $S$,
+as well as the small Zariski and \'etale sites of $S$.
+\end{proof}
+
+\noindent
+In other words, there is no difference between quasi-coherent
+modules on the scheme $S$ and quasi-coherent $\mathcal{O}$-modules
+on sites $\mathcal{C}$ as in the theorem. More precise statements
+for the big and small sites $(\Sch/S)_{fppf}$, $S_\etale$, etc
+can be found in Descent, Sections
+\ref{descent-section-quasi-coherent-sheaves},
+\ref{descent-section-quasi-coherent-cohomology}, and
+\ref{descent-section-quasi-coherent-sheaves-bis}.
+In this chapter we will sometimes refer to a
+``site as in Theorem \ref{theorem-quasi-coherent}''
+in order to conveniently state results which hold in any of those
+situations.
+
+
+
+
+
+
+\section{{\v C}ech cohomology}
+\label{section-cech-cohomology}
+
+\noindent
+Our next goal is to use descent theory to show that
+$H^i(\mathcal{C}, \mathcal{F}^a) = H_{Zar}^i(S, \mathcal{F})$
+for all quasi-coherent sheaves $\mathcal{F}$ on $S$, and
+any site $\mathcal{C}$ as in Theorem \ref{theorem-quasi-coherent}.
+To this end, we introduce {\v C}ech cohomology on sites.
+See \cite{ArtinTopologies} and
+Cohomology on Sites, Sections \ref{sites-cohomology-section-cech},
+\ref{sites-cohomology-section-cech-functor}
+and \ref{sites-cohomology-section-cech-cohomology-cohomology}
+for more details.
+
+\begin{definition}
+\label{definition-cech-complex}
+Let $\mathcal{C}$ be a category,
+$\mathcal{U} = \{U_i \to U\}_{i \in I}$ a family of morphisms of $\mathcal{C}$
+with fixed target, and $\mathcal{F} \in \textit{PAb}(\mathcal{C})$ an abelian
+presheaf. We define the {\it {\v C}ech complex}
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$ by
+$$
+\prod_{i_0\in I} \mathcal{F}(U_{i_0}) \to
+\prod_{i_0, i_1\in I} \mathcal{F}(U_{i_0} \times_U U_{i_1}) \to
+\prod_{i_0, i_1, i_2 \in I}
+\mathcal{F}(U_{i_0} \times_U U_{i_1} \times_U U_{i_2}) \to \ldots
+$$
+where the first term is in degree 0, and the maps are the usual ones. Again, it
+is essential to allow the case $i_0 = i_1$ etc. The
+{\it {\v C}ech cohomology groups} are defined by
+$$
+\check{H}^p(\mathcal{U}, \mathcal{F}) =
+H^p(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})).
+$$
+\end{definition}
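
\noindent
Explicitly, ``the usual'' differentials are the alternating sums of the maps
omitting one index:
$$
\text{d}(s)_{i_0 \ldots i_{p + 1}}
=
\sum\nolimits_{j = 0}^{p + 1} (-1)^j\,
s_{i_0 \ldots \hat i_j \ldots i_{p + 1}}
|_{U_{i_0} \times_U \cdots \times_U U_{i_{p + 1}}}.
$$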
+
+\begin{lemma}
+\label{lemma-cech-presheaves}
+The functor $\check{\mathcal{C}}^\bullet(\mathcal{U}, -)$
+is exact on the category $\textit{PAb}(\mathcal{C})$.
+\end{lemma}
+
+\noindent
+In other words, if $0\to \mathcal{F}_1\to \mathcal{F}_2\to \mathcal{F}_3\to 0$
+is a short exact sequence of presheaves of abelian groups, then
+$$
+0 \to \check{\mathcal{C}}^\bullet\left(\mathcal{U}, \mathcal{F}_1\right)
+\to\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}_2) \to
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}_3)\to 0
+$$
+is a short exact sequence of complexes.
+
+\begin{proof}
+This follows at once from the definition of a short exact sequence of
+presheaves. Namely, as the category of abelian presheaves is the category of
+functors on some category with values in $\textit{Ab}$, it is automatically an
+abelian category: a sequence $\mathcal{F}_1\to \mathcal{F}_2\to \mathcal{F}_3$
+is exact in $\textit{PAb}$ if and only if for all
+$U \in \Ob(\mathcal{C})$, the sequence
+$\mathcal{F}_1(U) \to \mathcal{F}_2(U) \to \mathcal{F}_3(U)$ is exact in
+$\textit{Ab}$. So the complex above is merely a product of short exact
+sequences in each degree. See also
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cech-exact-presheaves}.
+\end{proof}
+
+\noindent
+This shows that $\check{H}^\bullet(\mathcal{U}, -)$ is a $\delta$-functor.
+We now proceed to show that it is a universal $\delta$-functor. We thus need to
+show that it is an {\it effaceable} functor. We start by recalling the Yoneda
+lemma.
+
+\begin{lemma}[Yoneda Lemma]
+\label{lemma-yoneda-presheaf}
+For any presheaf $\mathcal{F}$ on a category $\mathcal{C}$ there is a
+functorial isomorphism
+$$
+\Hom_{\textit{PSh}(\mathcal{C})}(h_U, \mathcal{F}) =
+\mathcal{F}(U).
+$$
+\end{lemma}
+
+\begin{proof}
+See Categories, Lemma \ref{categories-lemma-yoneda}.
+\end{proof}
+
+\noindent
Given a set $E$ we denote (in this section) by
$\mathbf{Z}[E]$ the free abelian group on $E$. In a formula
+$\mathbf{Z}[E] = \bigoplus_{e \in E} \mathbf{Z}$, i.e., $\mathbf{Z}[E]$ is
+a free $\mathbf{Z}$-module having a basis consisting of the elements of $E$.
+Using this notation we introduce the free abelian presheaf on a
+presheaf of sets.
+
+\begin{definition}
+\label{definition-free-abelian-presheaf}
+Let $\mathcal{C}$ be a category.
+Given a presheaf of sets $\mathcal{G}$, we define the
+{\it free abelian presheaf on $\mathcal{G}$},
+denoted $\mathbf{Z}_\mathcal{G}$, by the rule
+$$
+\mathbf{Z}_\mathcal{G}(U)
+=
+\mathbf{Z}[\mathcal{G}(U)]
+$$
+for $U \in \Ob(\mathcal{C})$
+with restriction maps induced by the restriction maps of $\mathcal{G}$.
+In the special case $\mathcal{G} = h_U$ we write simply
+$\mathbf{Z}_U = \mathbf{Z}_{h_U}$.
+\end{definition}
+
+\noindent
+The functor $\mathcal{G} \mapsto \mathbf{Z}_\mathcal{G}$ is left adjoint to the
+forgetful functor $\textit{PAb}(\mathcal{C}) \to \textit{PSh}(\mathcal{C})$.
+Thus, for any presheaf $\mathcal{F}$, there is a canonical isomorphism
+$$
+\Hom_{\textit{PAb}(\mathcal{C})}(\mathbf{Z}_U, \mathcal{F})
+=
+\Hom_{\textit{PSh}(\mathcal{C})}(h_U, \mathcal{F})
+=
+\mathcal{F}(U)
+$$
+the last equality by the Yoneda lemma. In particular, we have the following
+result.
+
+\begin{lemma}
+\label{lemma-cech-complex-describe}
+The {\v C}ech complex $\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$
+can be described explicitly as follows
+\begin{eqnarray*}
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})
+& = &
+\left(
+\prod_{i_0 \in I}
+\Hom_{\textit{PAb}(\mathcal{C})}(\mathbf{Z}_{U_{i_0}}, \mathcal{F}) \to
+\prod_{i_0, i_1 \in I}
+\Hom_{\textit{PAb}(\mathcal{C})}(
+\mathbf{Z}_{U_{i_0} \times_U U_{i_1}}, \mathcal{F}) \to \ldots
+\right) \\
+& = &
+\Hom_{\textit{PAb}(\mathcal{C})}\left(
+\left(
+\bigoplus_{i_0 \in I} \mathbf{Z}_{U_{i_0}} \leftarrow
+\bigoplus_{i_0, i_1 \in I} \mathbf{Z}_{U_{i_0} \times_U U_{i_1}} \leftarrow
+\ldots
+\right), \mathcal{F}\right)
+\end{eqnarray*}
+\end{lemma}
+
+\begin{proof}
+This follows from the formula above. See
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cech-map-into}.
+\end{proof}
+
+\noindent
+This reduces us to studying only the complex in the first argument of the
+last $\Hom$.
+
+\begin{lemma}
+\label{lemma-exact}
+The complex of abelian presheaves
+\begin{align*}
+\mathbf{Z}_\mathcal{U}^\bullet \quad : \quad
+\bigoplus_{i_0 \in I} \mathbf{Z}_{U_{i_0}} \leftarrow
+\bigoplus_{i_0, i_1 \in I} \mathbf{Z}_{U_{i_0} \times_U U_{i_1}} \leftarrow
+\bigoplus_{i_0, i_1, i_2 \in I}
+\mathbf{Z}_{U_{i_0} \times_U U_{i_1} \times_U U_{i_2}} \leftarrow
+\ldots
+\end{align*}
+is exact in all degrees except $0$ in $\textit{PAb}(\mathcal{C})$.
+\end{lemma}
+
+\begin{proof}
+For any $V\in \Ob(\mathcal{C})$ the complex of abelian groups
+$\mathbf{Z}_\mathcal{U}^\bullet(V)$ is
+$$
+\begin{matrix}
+\mathbf{Z}\left[
+\coprod_{i_0\in I} \Mor_\mathcal{C}(V, U_{i_0})\right]
+\leftarrow
+\mathbf{Z}\left[
+\coprod_{i_0, i_1 \in I}
+\Mor_\mathcal{C}(V, U_{i_0} \times_U U_{i_1})\right]
+\leftarrow \ldots = \\
+\bigoplus_{\varphi : V \to U}
+\left(
+\mathbf{Z}\left[\coprod_{i_0 \in I} \Mor_\varphi(V, U_{i_0})\right]
+\leftarrow
+\mathbf{Z}\left[\coprod_{i_0, i_1\in I} \Mor_\varphi(V, U_{i_0}) \times
+\Mor_\varphi(V, U_{i_1})\right]
+\leftarrow
+\ldots
+\right)
+\end{matrix}
+$$
+where
+$$
+\Mor_{\varphi}(V, U_i)
+=
+\{ V \to U_i \text{ such that } V \to U_i \to U \text{ equals } \varphi \}.
+$$
+Set $S_\varphi = \coprod_{i\in I} \Mor_\varphi(V, U_i)$, so that
+$$
+\mathbf{Z}_\mathcal{U}^\bullet(V)
+=
+\bigoplus_{\varphi : V \to U}
+\left(
+\mathbf{Z}[S_\varphi] \leftarrow
+\mathbf{Z}[S_\varphi \times S_\varphi] \leftarrow
+\mathbf{Z}[S_\varphi \times S_\varphi \times S_\varphi] \leftarrow
+\ldots \right).
+$$
+Thus it suffices to show that for each $S = S_\varphi$, the complex
+\begin{align*}
+\mathbf{Z}[S] \leftarrow
+\mathbf{Z}[S \times S] \leftarrow
+\mathbf{Z}[S \times S \times S] \leftarrow \ldots
+\end{align*}
+is exact in negative degrees. To see this, we can give an explicit homotopy.
Fix $s\in S$ and define $K : \eta_{(s_0, \ldots, s_p)} \mapsto \eta_{(s, s_0,
\ldots, s_p)}$. One easily checks that $K$ is a nullhomotopy for the operator
+$$
+\delta :
+\eta_{(s_0, \ldots, s_p)}
+\mapsto
\sum\nolimits_{i = 0}^p (-1)^i \eta_{(s_0, \ldots, \hat s_i, \ldots, s_p)}.
+$$
+See
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-homology-complex}
+for more details.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-hom-injective}
+Let $\mathcal{C}$ be a category. If $\mathcal{I}$ is an injective object of
+$\textit{PAb}(\mathcal{C})$ and $\mathcal{U}$ is a family of morphisms with
+fixed target in $\mathcal{C}$, then $\check H^p(\mathcal{U}, \mathcal{I}) = 0$
+for all $p > 0$.
+\end{lemma}
+
+\begin{proof}
+The {\v C}ech complex is the result of applying the functor
+$\Hom_{\textit{PAb}(\mathcal{C})}(-, \mathcal{I}) $ to the complex $
+\mathbf{Z}^\bullet_\mathcal{U} $, i.e.,
+$$
+\check H^p(\mathcal{U}, \mathcal{I}) = H^p
+(\Hom_{\textit{PAb}(\mathcal{C})} (\mathbf{Z}^\bullet_\mathcal{U},
+\mathcal{I})).
+$$
+But we have just seen that $\mathbf{Z}^\bullet_\mathcal{U}$ is exact in
+negative degrees, and the functor $\Hom_{\textit{PAb}(\mathcal{C})}(-,
+\mathcal{I})$ is exact, hence $\Hom_{\textit{PAb}(\mathcal{C})}
+(\mathbf{Z}^\bullet_\mathcal{U}, \mathcal{I})$ is exact in positive degrees.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-cech-derived}
+On $\textit{PAb}(\mathcal{C})$ the functors $\check{H}^p(\mathcal{U}, -)$ are
+the right derived functors of $\check{H}^0(\mathcal{U}, -)$.
+\end{theorem}
+
+\begin{proof}
By Lemma \ref{lemma-hom-injective}, the functors
+$\check H^p(\mathcal{U}, -)$ are universal
+$\delta$-functors since they are effaceable.
+So are the right derived functors of $\check H^0(\mathcal{U}, -)$. Since they
+agree in degree $0$, they agree by the universal property of universal
+$\delta$-functors. For more details see
+Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-cech-cohomology-derived-presheaves}.
+\end{proof}
+
+\begin{remark}
+\label{remark-presheaves-no-topology}
Observe that all of the preceding statements are about presheaves, so we have
not made use of the topology yet.
+\end{remark}
+
+
+
+
+\section{The {\v C}ech-to-cohomology spectral sequence}
+\label{section-cech-ss}
+
+\noindent
+This spectral sequence is fundamental in proving foundational results on
+cohomology of sheaves.
+
+\begin{lemma}
+\label{lemma-forget-injectives}
+The forgetful functor $\textit{Ab}(\mathcal{C})\to \textit{PAb}(\mathcal{C})$
+transforms injectives into injectives.
+\end{lemma}
+
+\begin{proof}
+This is formal using the fact that the forgetful functor has a left adjoint,
+namely sheafification, which is an exact functor. For more details see
+Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-injective-abelian-sheaf-injective-presheaf}.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-cech-ss}
+Let $\mathcal{C}$ be a site. For any covering
+$\mathcal{U} = \{U_i \to U\}_{i \in I}$ of $U \in \Ob(\mathcal{C})$
+and any abelian sheaf $\mathcal{F}$ on $\mathcal{C}$
+there is a spectral sequence
+$$
+E_2^{p, q}
+=
+\check H^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))
+\Rightarrow
+H^{p+q}(U, \mathcal{F}),
+$$
+where $\underline{H}^q(\mathcal{F})$ is the abelian presheaf
+$V \mapsto H^q(V, \mathcal{F})$.
+\end{theorem}
+
+\begin{proof}
+Choose an injective resolution $\mathcal{F}\to \mathcal{I}^\bullet$ in
+$\textit{Ab}(\mathcal{C})$, and consider the double complex
+$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)$
+and the maps
+$$
+\xymatrix{
\Gamma(U, \mathcal{I}^\bullet) \ar[r] &
+\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet) \\
+& \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \ar[u]
+}
+$$
+Here the horizontal map is the natural map
$\Gamma(U, \mathcal{I}^\bullet) \to
+\check{\mathcal{C}}^0(\mathcal{U}, \mathcal{I}^\bullet)$
+to the left column, and the vertical map is induced by
+$\mathcal{F}\to \mathcal{I}^0$ and lands in the bottom row.
+By assumption, $\mathcal{I}^\bullet$ is a complex of injectives in
+$\textit{Ab}(\mathcal{C})$, hence by
+Lemma \ref{lemma-forget-injectives}, it is a complex of injectives in
+$\textit{PAb}(\mathcal{C})$. Thus, the rows of the double complex are
+exact in positive degrees (Lemma \ref{lemma-hom-injective}), and
+the kernel of $\check{\mathcal{C}}^0(\mathcal{U}, \mathcal{I}^\bullet)
+\to \check{\mathcal{C}}^1(\mathcal{U}, \mathcal{I}^\bullet)$
+is equal to
+$\Gamma(U, \mathcal{I}^\bullet)$, since $\mathcal{I}^\bullet$
is a complex of sheaves. In particular, the cohomology of the total complex
computes the derived functor cohomology of the global sections functor,
i.e., $H^{p + q}(U, \mathcal{F})$ in total degree $p + q$.
+
+\medskip\noindent
+For the vertical direction, the $q$th cohomology group of the $p$th column is
+$$
+\prod_{i_0, \ldots, i_p}
+H^q(U_{i_0} \times_U \ldots \times_U U_{i_p}, \mathcal{F})
+=
+\prod_{i_0, \ldots, i_p}
+\underline{H}^q(\mathcal{F})(U_{i_0} \times_U \ldots \times_U U_{i_p})
+$$
+in the entry $E_1^{p, q}$. So this is a standard double complex spectral
+sequence, and the $E_2$-page is as prescribed. For more details see
+Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-cech-spectral-sequence}.
+\end{proof}
+
+\begin{remark}
+\label{remark-grothendieck-ss}
+This is a Grothendieck spectral sequence for the composition of functors
+$$
+\textit{Ab}(\mathcal{C}) \longrightarrow
+\textit{PAb}(\mathcal{C}) \xrightarrow{\check H^0} \textit{Ab}.
+$$
+\end{remark}
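\noindent
As with any first quadrant spectral sequence, the low degree terms of the
spectral sequence of Theorem \ref{theorem-cech-ss} fit into an exact sequence
$$
0 \to \check H^1(\mathcal{U}, \mathcal{F}) \to H^1(U, \mathcal{F}) \to
\check H^0(\mathcal{U}, \underline{H}^1(\mathcal{F})) \to
\check H^2(\mathcal{U}, \mathcal{F}) \to H^2(U, \mathcal{F})
$$
which is often all that is needed in applications.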
+
+
+
+
+
+
+
+
+\section{Big and small sites of schemes}
+\label{section-big-small}
+
+\noindent
+Let $S$ be a scheme.
+Let $\tau$ be one of the topologies we will be discussing.
+Thus $\tau \in \{fppf, syntomic, smooth, \etale, Zariski\}$.
+Of course if you are only interested in the \'etale topology, then
+you can simply assume $\tau = \etale$ throughout. Moreover, we will
+discuss \'etale morphisms, \'etale coverings, and \'etale sites
+in more detail starting in Section \ref{section-etale-site}.
+In order to proceed with the discussion of cohomology of
+quasi-coherent sheaves it is convenient to introduce the
+big $\tau$-site and in case $\tau \in \{\etale, Zariski\}$, the
+small $\tau$-site of $S$. In order to do this we first introduce
+the notion of a $\tau$-covering.
+
+\begin{definition}
+\label{definition-tau-covering}
+(See
+Topologies, Definitions
+\ref{topologies-definition-fppf-covering},
+\ref{topologies-definition-syntomic-covering},
+\ref{topologies-definition-smooth-covering},
+\ref{topologies-definition-etale-covering}, and
+\ref{topologies-definition-zariski-covering}.)
+Let $\tau \in \{fppf, syntomic, smooth, \etale, Zariski\}$.
+A family of morphisms of schemes $\{f_i : T_i \to T\}_{i \in I}$ with fixed
+target is called a {\it $\tau$-covering} if and only if
+each $f_i$ is flat of finite presentation, syntomic, smooth, \'etale,
+resp.\ an open immersion, and we have $\bigcup f_i(T_i) = T$.
+\end{definition}
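\noindent
For example, if $T = \Spec(A)$ is affine and $f_1, \ldots, f_n \in A$
generate the unit ideal, then
$$
\{\Spec(A_{f_i}) \to \Spec(A)\}_{i = 1, \ldots, n}
$$
is a Zariski covering, hence a $\tau$-covering for every $\tau$ in the list
above. For a covering which is \'etale but not Zariski, one can take
$\{\Spec(A[x]/(x^2 - f)) \to \Spec(A)\}$ with $f$ and $2$ invertible in $A$:
this morphism is finite locally free of rank $2$, hence surjective, and
\'etale because the derivative $2x$ is invertible in $A[x]/(x^2 - f)$.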
+
+\noindent
+The class of all $\tau$-coverings satisfies the axioms
+(1), (2) and (3) of
+Definition \ref{definition-site} (our definition of a site), see
+Topologies, Lemmas
+\ref{topologies-lemma-fppf},
+\ref{topologies-lemma-syntomic},
+\ref{topologies-lemma-smooth},
+\ref{topologies-lemma-etale}, and
+\ref{topologies-lemma-zariski}.
+
+\medskip\noindent
+Let us introduce the sites we will be working with.
+Contrary to what happens in
+\cite{SGA4}, we do not want to choose a universe. Instead we pick a ``partial
+universe'' (which is a suitably large set as in
+Sets, Section \ref{sets-section-sets-hierarchy}), and consider all schemes
+contained in this set. Of course we make sure that our favorite base scheme
+$S$ is contained in the partial universe. Having picked the underlying category
+we pick a suitably large set of $\tau$-coverings which turns this into a site.
+The details are in the chapter on topologies on schemes; there is a lot of
+freedom in the choices made, but in the end the actual choices made will not
+affect the \'etale (or other) cohomology of $S$ (just as in \cite{SGA4} the
+actual choice of universe doesn't matter at the end). Moreover, the way the
+material is written the reader who is happy using strongly inaccessible
+cardinals (i.e., universes) can do so as a substitute.
+
+\begin{definition}
+\label{definition-tau-site}
+Let $S$ be a scheme.
+Let $\tau \in \{fppf, syntomic, smooth, \etale, \linebreak[0] Zariski\}$.
+\begin{enumerate}
+\item A {\it big $\tau$-site of $S$} is any of the sites
+$(\Sch/S)_\tau$ constructed as explained above and in more detail in
+Topologies, Definitions
+\ref{topologies-definition-big-small-fppf},
+\ref{topologies-definition-big-small-syntomic},
+\ref{topologies-definition-big-small-smooth},
+\ref{topologies-definition-big-small-etale}, and
+\ref{topologies-definition-big-small-Zariski}.
+\item If $\tau \in \{\etale, Zariski\}$, then the
+{\it small $\tau$-site of $S$}
+is the full subcategory $S_\tau$ of $(\Sch/S)_\tau$ whose objects
+are schemes $T$ over $S$ whose structure morphism $T \to S$ is \'etale,
+resp.\ an open immersion. A covering in $S_\tau$ is a covering
+$\{U_i \to U\}$ in $(\Sch/S)_\tau$
+such that $U$ is an object of $S_\tau$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+The underlying category of the site $(\Sch/S)_\tau$ has reasonable
+``closure'' properties, i.e., given a scheme $T$ in it any locally closed
+subscheme of $T$ is isomorphic to an object of $(\Sch/S)_\tau$.
+Other such closure properties are: closed under fibre products of schemes,
+taking countable disjoint unions,
+taking finite type schemes over a given scheme, given an affine scheme
+$\Spec(R)$ one can complete, localize, or take the quotient of $R$
+by an ideal while staying inside the category, etc.
+On the other hand, for example arbitrary disjoint unions
+of schemes in $(\Sch/S)_\tau$ will take you outside of it.
+Also note that, given an object $T$ of $(\Sch/S)_\tau$ there will exist
+$\tau$-coverings $\{T_i \to T\}_{i \in I}$ (as in
+Definition \ref{definition-tau-covering})
+which are not coverings in $(\Sch/S)_\tau$ for example
+because the schemes $T_i$ are not objects of the category
+$(\Sch/S)_\tau$. But our choice of the sites $(\Sch/S)_\tau$
+is such that there always does exist
+a covering $\{U_j \to T\}_{j \in J}$ of $(\Sch/S)_\tau$ which refines
+the covering $\{T_i \to T\}_{i \in I}$, see
+Topologies, Lemmas
+\ref{topologies-lemma-fppf-induced},
+\ref{topologies-lemma-syntomic-induced},
+\ref{topologies-lemma-smooth-induced},
+\ref{topologies-lemma-etale-induced}, and
+\ref{topologies-lemma-zariski-induced}.
+We will mostly ignore these issues in this chapter.
+
+\medskip\noindent
+If $\mathcal{F}$ is a sheaf on $(\Sch/S)_\tau$ or $S_\tau$, then
+we denote
+$$
+H^p_\tau(U, \mathcal{F}), \text{ in particular }
+H^p_\tau(S, \mathcal{F})
+$$
+the cohomology groups of $\mathcal{F}$ over the object $U$ of the site, see
+Section \ref{section-cohomology}. Thus we have
+$H^p_{fppf}(S, \mathcal{F})$,
+$H^p_{syntomic}(S, \mathcal{F})$,
+$H^p_{smooth}(S, \mathcal{F})$,
+$H^p_\etale(S, \mathcal{F})$, and
+$H^p_{Zar}(S, \mathcal{F})$. The last two are potentially ambiguous since
+they might refer to either the big or small \'etale or Zariski site. However,
+this ambiguity is harmless by the following lemma.
+
+\begin{lemma}
+\label{lemma-compare-cohomology-big-small}
+Let $\tau \in \{\etale, Zariski\}$.
+If $\mathcal{F}$ is an abelian sheaf defined on
+$(\Sch/S)_\tau$, then
+the cohomology groups of $\mathcal{F}$ over $S$ agree with the cohomology
+groups of $\mathcal{F}|_{S_\tau}$ over $S$.
+\end{lemma}
+
+\begin{proof}
+By
+Topologies, Lemmas \ref{topologies-lemma-at-the-bottom} and
+\ref{topologies-lemma-at-the-bottom-etale}
+the functors $S_\tau \to (\Sch/S)_\tau$
+satisfy the hypotheses of
+Sites, Lemma \ref{sites-lemma-bigger-site}.
+Hence our lemma follows from
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cohomology-bigger-site}.
+\end{proof}
+
+\noindent
+The category of sheaves on the big or small \'etale site of $S$ depends only
+on the full subcategory of $(\Sch/S)_\etale$ or $S_\etale$ consisting of
+affines and one only needs to consider the standard \'etale coverings between
+them (as defined below). This gives rise to sites
+$(\textit{Aff}/S)_\etale$ and $S_{affine, \etale}$, see
+Topologies, Definition \ref{topologies-definition-big-small-etale}.
+The comparison results are proven in Topologies,
+Lemmas \ref{topologies-lemma-affine-big-site-etale} and
+\ref{topologies-lemma-alternative}. Here is our definition of
+standard coverings in some of the topologies we will consider in this chapter.
+
+\begin{definition}
+\label{definition-standard-tau}
+(See
+Topologies, Definitions
+\ref{topologies-definition-standard-fppf},
+\ref{topologies-definition-standard-syntomic},
+\ref{topologies-definition-standard-smooth},
+\ref{topologies-definition-standard-etale}, and
+\ref{topologies-definition-standard-Zariski}.)
+Let $\tau \in \{fppf, syntomic, smooth, \etale, Zariski\}$.
+Let $T$ be an affine scheme.
+A {\it standard $\tau$-covering} of $T$ is a family
$\{f_j : U_j \to T\}_{j = 1, \ldots, m}$ with each $U_j$ affine,
and each $f_j$ flat and of finite presentation,
standard syntomic, standard smooth, \'etale, resp.\ the immersion of a
standard principal open in $T$, and with $T = \bigcup f_j(U_j)$.
+\end{definition}
+
+\begin{lemma}
+\label{lemma-tau-affine}
+Let $\tau \in \{fppf, syntomic, smooth, \etale, Zariski\}$.
+Any $\tau$-covering of an affine scheme can be refined by a
+standard $\tau$-covering.
+\end{lemma}
+
+\begin{proof}
+See
+Topologies, Lemmas
+\ref{topologies-lemma-fppf-affine},
+\ref{topologies-lemma-syntomic-affine},
+\ref{topologies-lemma-smooth-affine},
+\ref{topologies-lemma-etale-affine}, and
+\ref{topologies-lemma-zariski-affine}.
+\end{proof}
+
+\noindent
+For completeness we state and prove the invariance under choice of partial
+universe of the cohomology groups we are considering. We will prove invariance
+of the small \'etale topos in
+Lemma \ref{lemma-etale-topos-independent-partial-universe} below.
+For notation and terminology used in this lemma we refer to
+Topologies, Section \ref{topologies-section-change-alpha}.
+
+\begin{lemma}
+\label{lemma-cohomology-enlarge-partial-universe}
+Let $\tau \in \{fppf, syntomic, smooth, \etale, Zariski\}$.
+Let $S$ be a scheme.
+Let $(\Sch/S)_\tau$ and $(\Sch'/S)_\tau$ be two
+big $\tau$-sites of $S$, and assume that the first is contained in the second.
+In this case
+\begin{enumerate}
+\item for any abelian sheaf $\mathcal{F}'$ defined on $(\Sch'/S)_\tau$ and
+any object $U$ of $(\Sch/S)_\tau$ we have
+$$
+H^p_\tau(U, \mathcal{F}'|_{(\Sch/S)_\tau}) =
+H^p_\tau(U, \mathcal{F}')
+$$
+In words: the cohomology of $\mathcal{F}'$ over $U$ computed in the bigger site
+agrees with the cohomology of $\mathcal{F}'$ restricted to the smaller site
+over $U$.
+\item for any abelian sheaf $\mathcal{F}$ on $(\Sch/S)_\tau$ there is an
abelian sheaf $\mathcal{F}'$ on $(\Sch'/S)_\tau$ whose restriction to
+$(\Sch/S)_\tau$ is isomorphic to $\mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Topologies, Lemma \ref{topologies-lemma-change-alpha} the inclusion functor
+$(\Sch/S)_\tau \to (\Sch'/S)_\tau$ satisfies the assumptions of
+Sites, Lemma \ref{sites-lemma-bigger-site}. This implies (2) and (1)
+follows from
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cohomology-bigger-site}.
+\end{proof}
+
+
+
+
+\section{The \'etale topos}
+\label{section-etale-topos}
+
+\noindent
+A {\it topos} is the category of sheaves of sets on a site, see
Sites, Definition \ref{sites-definition-topos}. Hence it is customary
to use the phrase ``\'etale topos of a scheme'' to refer to
+the category of sheaves on the small \'etale site of a scheme.
+Here is the formal definition.
+
+\begin{definition}
+\label{definition-etale-topos}
+Let $S$ be a scheme.
+\begin{enumerate}
+\item The {\it \'etale topos}, or the {\it small \'etale topos}
+of $S$ is the category $\Sh(S_\etale)$ of sheaves of sets on
+the small \'etale site of $S$.
+\item The {\it Zariski topos}, or the {\it small Zariski topos}
+of $S$ is the category $\Sh(S_{Zar})$ of sheaves of sets on the
+small Zariski site of $S$.
+\item For $\tau \in \{fppf, syntomic, smooth, \etale, Zariski\}$ a
{\it big $\tau$-topos} is the category of sheaves of sets on a
big $\tau$-site of $S$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Note that the small Zariski topos of $S$ is simply the category of sheaves
+of sets on the underlying topological space of $S$, see
+Topologies, Lemma \ref{topologies-lemma-Zariski-usual}.
+Whereas the small \'etale topos does not depend on the choices made in the
+construction of the small \'etale site, in general the big topoi do depend
+on those choices.
+
+\medskip\noindent
+It turns out that the big or small \'etale topos only depends on the
+full subcategory of $(\Sch/S)_\etale$ or $S_\etale$ consisting of
+affines, see
+Topologies, Lemmas \ref{topologies-lemma-affine-big-site-etale} and
+\ref{topologies-lemma-alternative}.
+We will use this for example in the proof of the following lemma.
+
+\begin{lemma}
+\label{lemma-etale-topos-independent-partial-universe}
+Let $S$ be a scheme. The \'etale topos of $S$ is independent
+(up to canonical equivalence) of the construction of the small
+\'etale site in Definition \ref{definition-tau-site}.
+\end{lemma}
+
+\begin{proof}
We have to show that, given two big \'etale sites
$\Sch_\etale$ and $\Sch_\etale'$ containing
$S$, we have $\Sh(S_\etale) \cong \Sh(S_\etale')$
with the obvious notation. By Topologies, Lemma \ref{topologies-lemma-contained-in}
+we may assume $\Sch_\etale \subset \Sch_\etale'$.
+By Sets, Lemma \ref{sets-lemma-what-is-in-it}
+any affine scheme \'etale over $S$ is isomorphic to an object
+of both $\Sch_\etale$ and $\Sch_\etale'$.
+Thus the induced functor
+$S_{affine, \etale} \to S_{affine, \etale}'$
+is an equivalence. Moreover, it is clear that both this functor
+and a quasi-inverse map transform standard \'etale coverings into
+standard \'etale coverings.
+Hence the result follows from
+Topologies, Lemma \ref{topologies-lemma-alternative}.
+\end{proof}
+
+
+
+
+
+\section{Cohomology of quasi-coherent sheaves}
+\label{section-cohomology-quasi-coherent}
+%9.22.09
+
+\noindent
+We start with a simple lemma (which holds in greater generality than
+stated). It says that the {\v C}ech complex of a standard covering is
+equal to the {\v C}ech complex of an fpqc covering of the form
+$\{\Spec(B) \to \Spec(A)\}$ with $A \to B$ faithfully flat.
+
+\begin{lemma}
+\label{lemma-cech-complex}
+Let $\tau \in \{fppf, syntomic, smooth, \etale, Zariski\}$.
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be an abelian sheaf on $(\Sch/S)_\tau$, or on
+$S_\tau$ in case $\tau = \etale$, and let
+$\mathcal{U} = \{U_i \to U\}_{i \in I}$
+be a standard $\tau$-covering of this site.
+Let $V = \coprod_{i \in I} U_i$. Then
+\begin{enumerate}
+\item $V$ is an affine scheme,
+\item $\mathcal{V} = \{V \to U\}$ is an fpqc covering
+and also a $\tau$-covering unless $\tau = Zariski$,
+\item the {\v C}ech complexes
+$\check{\mathcal{C}}^\bullet (\mathcal{U}, \mathcal{F})$ and
+$\check{\mathcal{C}}^\bullet (\mathcal{V}, \mathcal{F})$ agree.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
The definition of a standard $\tau$-covering is given in
+Topologies, Definition
+\ref{topologies-definition-standard-Zariski},
+\ref{topologies-definition-standard-etale},
+\ref{topologies-definition-standard-smooth},
+\ref{topologies-definition-standard-syntomic}, and
+\ref{topologies-definition-standard-fppf}.
+By definition each of the schemes
+$U_i$ is affine and $I$ is a finite set. Hence $V$ is an affine scheme.
+It is clear that $V \to U$ is flat and surjective, hence
+$\mathcal{V}$ is an fpqc covering, see
+Example \ref{example-fpqc-coverings}.
+Excepting the Zariski case, the covering $\mathcal{V}$
+is also a $\tau$-covering, see
+Topologies, Definition
+\ref{topologies-definition-etale-covering},
+\ref{topologies-definition-smooth-covering},
+\ref{topologies-definition-syntomic-covering}, and
+\ref{topologies-definition-fppf-covering}.
+
+\medskip\noindent
+Note that $\mathcal{U}$ is a refinement of $\mathcal{V}$
+and hence there is a map of {\v C}ech complexes
+$\check{\mathcal{C}}^\bullet (\mathcal{V}, \mathcal{F}) \to
+\check{\mathcal{C}}^\bullet (\mathcal{U}, \mathcal{F})$, see
+Cohomology on Sites,
+Equation (\ref{sites-cohomology-equation-map-cech-complexes}).
+Next, we observe that if $T = \coprod_{j \in J} T_j$ is a
+disjoint union of schemes in the site on which $\mathcal{F}$ is defined
+then the family of morphisms with fixed target
+$\{T_j \to T\}_{j \in J}$ is a Zariski covering, and so
+\begin{equation}
+\label{equation-sheaf-coprod}
+\mathcal{F}(T) =
+\mathcal{F}(\coprod\nolimits_{j \in J} T_j) =
+\prod\nolimits_{j \in J} \mathcal{F}(T_j)
+\end{equation}
+by the sheaf condition of $\mathcal{F}$.
+This implies the map of {\v C}ech complexes above is an isomorphism
+in each degree because
+$$
+V \times_U \ldots \times_U V
+=
\coprod\nolimits_{i_0, \ldots, i_p} U_{i_0} \times_U \ldots \times_U U_{i_p}
+$$
+as schemes.
+\end{proof}
+
+\noindent
+Note that Equality (\ref{equation-sheaf-coprod})
is false for a general presheaf. Even for sheaves it does not hold on an
arbitrary site, since coproducts may not give rise to coverings, and may not
be disjoint. But it does hold for all the usual sites (at least all the ones
we will study).
+
+\begin{remark}
+\label{remark-refinement}
+In the statement of Lemma \ref{lemma-cech-complex} the covering $\mathcal{U}$
+is a refinement of $\mathcal{V}$ but not the other way around. Coverings
+of the form $\{V \to U\}$ do not form an initial subcategory of the
+category of all coverings of $U$. Yet it is still true that
+we can compute {\v C}ech cohomology $\check H^n(U, \mathcal{F})$ (which
+is defined as the colimit over the opposite of the category of
+coverings $\mathcal{U}$ of $U$ of the {\v C}ech cohomology groups of
+$\mathcal{F}$ with respect to $\mathcal{U}$) in terms of the coverings
+$\{V \to U\}$. We will formulate a precise lemma (it only works for sheaves)
+and add it here if we ever need it.
+\end{remark}
+
+\begin{lemma}[Locality of cohomology]
+\label{lemma-locality-cohomology}
+Let $\mathcal{C}$ be a site, $\mathcal{F}$ an abelian sheaf on $\mathcal{C}$,
+$U$ an object of $\mathcal{C}$, $p > 0$ an integer and $\xi \in
+H^p(U, \mathcal{F})$. Then there exists a covering
+$\mathcal{U} = \{U_i \to U\}_{i \in I}$ of $U$ in $\mathcal{C}$
+such that $\xi |_{U_i} = 0$ for all $i \in I$.
+\end{lemma}
+
+\begin{proof}
+Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$. Then
+$\xi$ is represented by a cocycle $\tilde{\xi} \in \mathcal{I}^p(U)$
+with $d^p(\tilde{\xi}) = 0$. By assumption, the sequence
$\mathcal{I}^{p - 1} \to \mathcal{I}^p \to \mathcal{I}^{p + 1}$ is exact in
+$\textit{Ab}(\mathcal{C})$, which means that there exists a covering
+$\mathcal{U} = \{U_i \to U\}_{i \in I}$ such that
+$\tilde{\xi}|_{U_i} = d^{p - 1}(\xi_i)$ for some
+$\xi_i \in \mathcal{I}^{p-1}(U_i)$. Since
+the cohomology class $\xi|_{U_i}$ is represented by the cocycle
+$\tilde{\xi}|_{U_i}$ which is a coboundary, it vanishes.
+For more details see
+Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-kill-cohomology-class-on-covering}.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-zariski-fpqc-quasi-coherent}
+Let $S$ be a scheme and $\mathcal{F}$ a quasi-coherent $\mathcal{O}_S$-module.
+Let $\mathcal{C}$ be either $(\Sch/S)_\tau$ for
+$\tau \in \{fppf, syntomic, smooth, \etale, Zariski\}$ or
+$S_\etale$. Then
+$$
+H^p(S, \mathcal{F}) = H^p_\tau(S, \mathcal{F}^a)
+$$
+for all $p \geq 0$ where
+\begin{enumerate}
+\item the left hand side indicates the usual cohomology of the sheaf
+$\mathcal{F}$ on the underlying topological space of the scheme $S$, and
+\item the right hand side indicates cohomology
+of the abelian sheaf $\mathcal{F}^a$ (see
+Proposition \ref{proposition-quasi-coherent-sheaf-fpqc})
+on the site $\mathcal{C}$.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+We are going to show that
+$H^p(U, f^*\mathcal{F}) = H^p_\tau(U, \mathcal{F}^a)$
+for any object $f : U \to S$ of the site $\mathcal{C}$.
+The result is true for $p = 0$ by the sheaf property.
+
+\medskip\noindent
+Assume that $U$ is affine. Then we want to prove that
+$H^p_\tau(U, \mathcal{F}^a) = 0$ for all $p > 0$. We use induction on $p$.
+\begin{enumerate}
\item[$p = 1$]
+Pick $\xi \in H^1_\tau(U, \mathcal{F}^a)$.
+By Lemma \ref{lemma-locality-cohomology},
there exists a $\tau$-covering $\mathcal{U} = \{U_i \to U\}_{i \in I}$
+such that $\xi|_{U_i} = 0$ for all $i \in I$. Up to refining
+$\mathcal{U}$, we may assume that $\mathcal{U}$ is a standard
+$\tau$-covering. Applying the spectral sequence of
+Theorem \ref{theorem-cech-ss},
+we see that $\xi$ comes from a cohomology class
+$\check \xi \in \check H^1(\mathcal{U}, \mathcal{F}^a)$.
+Consider the covering $\mathcal{V} = \{\coprod_{i\in I} U_i \to U\}$. By
+Lemma \ref{lemma-cech-complex},
+$\check H^\bullet(\mathcal{U}, \mathcal{F}^a) =
+\check H^\bullet(\mathcal{V}, \mathcal{F}^a)$.
+On the other hand, since $\mathcal{V}$ is a covering of the form
+$\{\Spec(B) \to \Spec(A)\}$ and $f^*\mathcal{F} = \widetilde{M}$
for some $A$-module $M$, we see that the {\v C}ech complex
+$\check{\mathcal{C}}^\bullet(\mathcal{V}, \mathcal{F})$
+is none other than the complex $(B/A)_\bullet \otimes_A M$.
+Now by Lemma \ref{lemma-descent-modules},
+$H^p((B/A)_\bullet \otimes_A M) = 0$ for $p > 0$, hence $\check \xi = 0$
+and so $\xi = 0$.
\item[$p > 1$]
+Pick $\xi \in H^p_\tau(U, \mathcal{F}^a)$. By
+Lemma \ref{lemma-locality-cohomology},
there exists a $\tau$-covering $\mathcal{U} = \{U_i \to U\}_{i \in I}$
+such that $\xi|_{U_i} = 0$ for all $i \in I$. Up to refining
+$\mathcal{U}$, we may assume that $\mathcal{U}$ is a standard
+$\tau$-covering. We apply the spectral sequence of
+Theorem \ref{theorem-cech-ss}.
+Observe that the intersections $U_{i_0} \times_U \ldots \times_U U_{i_p}$
+are affine, so that by induction hypothesis the cohomology groups
+$$
+E_2^{p, q} = \check H^p(\mathcal{U}, \underline{H}^q(\mathcal{F}^a))
+$$
+vanish for all $0 < q < p$. We see that $\xi$ must come from a
+$\check \xi \in \check H^p(\mathcal{U}, \mathcal{F}^a)$. Replacing
+$\mathcal{U}$ with the covering $\mathcal{V}$ containing only one morphism
+and using Lemma \ref{lemma-descent-modules} again,
+we see that the {\v C}ech cohomology class $\check \xi$ must be zero,
+hence $\xi = 0$.
+\end{enumerate}
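\medskip\noindent
In both steps above, writing $\mathcal{V} = \{\Spec(B) \to \Spec(A)\}$,
the complex $(B/A)_\bullet \otimes_A M$ is, explicitly, the Amitsur complex
$$
B \otimes_A M \to
B \otimes_A B \otimes_A M \to
B \otimes_A B \otimes_A B \otimes_A M \to \ldots
$$
whose differentials are the alternating sums of the maps inserting a $1$
into each possible tensor slot. Lemma \ref{lemma-descent-modules} says that
this complex is exact in positive degrees with $H^0 = M$ whenever
$A \to B$ is faithfully flat.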
+Next, assume that $U$ is separated. Choose an affine open covering
+$U = \bigcup_{i \in I} U_i$ of $U$. The family
+$\mathcal{U} = \{U_i \to U\}_{i \in I}$ is then an fpqc covering,
+and all the intersections
+$U_{i_0} \times_U \ldots \times_U U_{i_p}$ are affine
+since $U$ is separated. So all rows of the spectral sequence of
+Theorem \ref{theorem-cech-ss}
+are zero, except the zeroth row. Therefore
+$$
+H^p_\tau(U, \mathcal{F}^a) =
+\check H^p(\mathcal{U}, \mathcal{F}^a) =
+\check H^p(\mathcal{U}, \mathcal{F}) = H^p(U, \mathcal{F})
+$$
+where the last equality results from standard scheme theory, see
+Cohomology of Schemes, Lemma
+\ref{coherent-lemma-cech-cohomology-quasi-coherent}.
+
+\medskip\noindent
+The general case is technical and (to extend the proof as given here)
+requires a discussion about maps of spectral sequences, so we won't treat it.
+It follows from
+Descent, Proposition \ref{descent-proposition-same-cohomology-quasi-coherent}
+(whose proof takes a slightly different approach) combined with
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cohomology-of-open}.
+\end{proof}
+
+\begin{remark}
+\label{remark-right-derived-global-sections}
+Comment on Theorem \ref{theorem-zariski-fpqc-quasi-coherent}.
+Since $S$ is a final object in the category $\mathcal{C}$, the cohomology
+groups on the right-hand side are merely the right derived functors of the
+global sections functor. In fact the proof shows that
+$H^p(U, f^*\mathcal{F}) = H^p_\tau(U, \mathcal{F}^a)$
+for any object $f : U \to S$ of the site $\mathcal{C}$.
+\end{remark}
+
+
+
+
+
+\section{Examples of sheaves}
+\label{section-examples-sheaves}
+
+\noindent
+Let $S$ and $\tau$ be as in Section \ref{section-big-small}.
+We have already seen that any representable presheaf is a sheaf on
+$(\Sch/S)_\tau$ or $S_\tau$, see
+Lemma \ref{lemma-representable-sheaf-fpqc}
+and
+Remark \ref{remark-fpqc-finest}.
+Here are some special cases.
+
+\begin{definition}
+\label{definition-additive-sheaf}
+On any of the sites $(\Sch/S)_\tau$ or $S_\tau$ of
+Section \ref{section-big-small}.
+\begin{enumerate}
+\item The sheaf $T \mapsto \Gamma(T, \mathcal{O}_T)$ is denoted
+$\mathcal{O}_S$, or $\mathbf{G}_a$, or $\mathbf{G}_{a, S}$ if we
+want to indicate the base scheme.
+\item Similarly, the sheaf
+$T \mapsto \Gamma(T, \mathcal{O}^*_T)$ is denoted $\mathcal{O}_S^*$, or
+$\mathbf{G}_m$, or $\mathbf{G}_{m, S}$ if we want
+to indicate the base scheme.
+\item The {\it constant sheaf} $\underline{\mathbf{Z}/n\mathbf{Z}}$ on any
+site is the sheafification of the constant presheaf
+$U \mapsto \mathbf{Z}/n\mathbf{Z}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+The first is a sheaf by
+Theorem \ref{theorem-quasi-coherent}
for example. The second is a subpresheaf of the first, which is easily seen
+to be a sheaf itself. The third is a sheaf by definition.
+Note that each of these sheaves is representable.
+The first and second by the schemes $\mathbf{G}_{a, S}$ and
+$\mathbf{G}_{m, S}$, see
+Groupoids, Section \ref{groupoids-section-group-schemes}.
+The third by the finite \'etale group scheme $\mathbf{Z}/n\mathbf{Z}_S$
+sometimes denoted $(\mathbf{Z}/n\mathbf{Z})_S$
+which is just $n$ copies of $S$ endowed
+with the obvious group scheme structure over $S$, see
+Groupoids, Example \ref{groupoids-example-constant-group}
+and the following remark.
+
+\begin{remark}
+\label{remark-constant-locally-constant-maps}
+Let $G$ be an abstract group.
+On any of the sites $(\Sch/S)_\tau$ or $S_\tau$ of
+Section \ref{section-big-small}
+the sheafification $\underline{G}$
+of the constant presheaf associated to $G$ in the
+{\it Zariski topology} of the site already gives
+$$
+\Gamma(U, \underline{G}) =
+\{\text{Zariski locally constant maps }U \to G\}
+$$
+This Zariski sheaf is representable by the group scheme $G_S$ according to
+Groupoids, Example \ref{groupoids-example-constant-group}.
+By
+Lemma \ref{lemma-representable-sheaf-fpqc}
+any representable presheaf satisfies the sheaf condition for the
+$\tau$-topology as well, and hence we conclude that the Zariski
+sheafification $\underline{G}$ above is also the $\tau$-sheafification.
+\end{remark}
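\noindent
Concretely, if $U$ is connected then $\Gamma(U, \underline{G}) = G$, and if
$U$ has exactly two connected components then
$\Gamma(U, \underline{G}) = G \times G$. In particular the constant sheaf
$\underline{\mathbf{Z}/n\mathbf{Z}}$ of
Definition \ref{definition-additive-sheaf} has sections
$\mathbf{Z}/n\mathbf{Z}$ over every connected $U$.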
+
+\begin{definition}
+\label{definition-structure-sheaf}
+Let $S$ be a scheme. The {\it structure sheaf} of $S$ is the sheaf of rings
+$\mathcal{O}_S$
+on any of the sites $S_{Zar}$, $S_\etale$, or $(\Sch/S)_\tau$
+discussed above.
+\end{definition}
+
+\noindent
+If there is some possible confusion as to which site we are working on
+then we will indicate this by using indices. For example we may use
+$\mathcal{O}_{S_\etale}$ to stress the fact that we are working on the
+small \'etale site of $S$.
+
+\begin{remark}
+\label{remark-special-case-fpqc-cohomology-quasi-coherent}
+In the terminology introduced above a special case of
+Theorem \ref{theorem-zariski-fpqc-quasi-coherent}
+is
+$$
+H_{fppf}^p(X, \mathbf{G}_a) =
+H_\etale^p(X, \mathbf{G}_a) =
+H_{Zar}^p(X, \mathbf{G}_a) =
+H^p(X, \mathcal{O}_X)
+$$
+for all $p \geq 0$. Moreover, we could use the notation
+$H^p_{fppf}(X, \mathcal{O}_X)$ to indicate the cohomology of the
+structure sheaf on the big fppf site of $X$.
+\end{remark}
+
+
+
+
+\section{Picard groups}
+\label{section-picard-groups}
+
+\noindent
+The following theorem is sometimes called ``Hilbert 90''.
+
+\begin{theorem}
+\label{theorem-picard-group}
+For any scheme $X$ we have canonical identifications
+\begin{align*}
+H_{fppf}^1(X, \mathbf{G}_m) & = H^1_{syntomic}(X, \mathbf{G}_m) \\
+& = H^1_{smooth}(X, \mathbf{G}_m) \\
+& = H_\etale^1(X, \mathbf{G}_m) \\
+& = H^1_{Zar}(X, \mathbf{G}_m) \\
+& = \Pic(X) \\
+& = H^1(X, \mathcal{O}_X^*)
+\end{align*}
+\end{theorem}
+
+\begin{proof}
+Let $\tau$ be one of the topologies considered in
+Section \ref{section-big-small}.
+By
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-h1-invertible}
+we see that
+$H^1_\tau(X, \mathbf{G}_m) =
+H^1_\tau(X, \mathcal{O}_\tau^*) =
+\Pic(\mathcal{O}_\tau)$
+where $\mathcal{O}_\tau$ is the structure sheaf of the site
+$(\Sch/X)_\tau$. Now an invertible $\mathcal{O}_\tau$-module
+is a quasi-coherent $\mathcal{O}_\tau$-module.
+By Theorem \ref{theorem-quasi-coherent} or the more precise
+Descent, Proposition \ref{descent-proposition-equivalence-quasi-coherent}
+we see that $\Pic(\mathcal{O}_\tau) = \Pic(X)$.
+The last equality is proved in the same way.
+\end{proof}
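\medskip\noindent
For example, if $S = \Spec(A)$ with $A$ a local ring, then every invertible
$A$-module is free, hence $\Pic(S) = 0$ and all of the cohomology groups in
the theorem vanish. For $A = k$ a field this recovers, via the standard
identification of $H_\etale^1(\Spec(k), \mathbf{G}_m)$ with the Galois
cohomology group $H^1(\text{Gal}(k^{sep}/k), (k^{sep})^*)$, the classical
theorem of Hilbert 90.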
+
+
+
+
+
+
+\section{The \'etale site}
+\label{section-etale-site}
+
+\noindent
+At this point we start exploring the \'etale site of a scheme in
+more detail. As a first step we discuss a little the notion of an
+\'etale morphism.
+
+
+
+
+
+\section{\'Etale morphisms}
+\label{section-etale-morphism}
+
+\noindent
+For more details, see
+Morphisms, Section \ref{morphisms-section-etale}
+for the formal definition and
+\'Etale Morphisms, Sections
+\ref{etale-section-etale-morphisms},
+\ref{etale-section-structure-etale-map},
+\ref{etale-section-etale-smooth},
+\ref{etale-section-topological-etale},
+\ref{etale-section-functorial-etale}, and
+\ref{etale-section-properties-permanence}
+for a survey of interesting properties of \'etale morphisms.
+
+\medskip\noindent
+Recall that an algebra $A$ over an algebraically closed field $k$ is
+{\it smooth} if it is of finite type and the module of differentials
+$\Omega_{A/k}$ is finite locally free of rank equal to the dimension.
+A scheme $X$ over $k$ is {\it smooth} over $k$ if it is locally of finite
+type and each affine open is the spectrum of a smooth $k$-algebra.
+If $k$ is not algebraically closed then a $k$-algebra $A$ is
+a smooth $k$-algebra if $A \otimes_k \overline{k}$ is a smooth
+$\overline{k}$-algebra. A ring map $A \to B$ is smooth if it is
+flat, finitely presented, and for all primes $\mathfrak p \subset A$
+the fibre ring $\kappa(\mathfrak p) \otimes_A B$ is smooth over the residue
+field $\kappa(\mathfrak p)$. More generally, a morphism of schemes is
+{\it smooth} if it is flat, locally of finite presentation, and the
+geometric fibers are smooth.
+
+\medskip\noindent
+For these facts please see
+Morphisms, Section \ref{morphisms-section-smooth}.
+Using this we may define an \'etale morphism as follows.
+
+\begin{definition}
+\label{definition-etale-morphism}
+A morphism of schemes is {\it \'etale} if it is smooth of relative dimension 0.
+\end{definition}
+
+\noindent
+In particular, a morphism of schemes $X \to S$ is \'etale if it is smooth
+and $\Omega_{X/S} = 0$.
+
+\begin{proposition}
+\label{proposition-etale-morphisms}
+Facts on \'etale morphisms.
+\begin{enumerate}
+\item Let $k$ be a field. A morphism of schemes $U \to \Spec(k)$ is
+\'etale if and only if $U \cong \coprod_{i \in I} \Spec(k_i)$
+such that for each $i \in I$
+the ring $k_i$ is a field which is a finite separable extension of $k$.
+\item Let $\varphi : U \to S$ be a morphism of schemes. The following
+conditions are equivalent:
+\begin{enumerate}
+\item $\varphi$ is \'etale,
+\item $\varphi$ is locally finitely presented, flat, and all its fibres are
+\'etale,
+\item $\varphi$ is flat, unramified and locally of finite presentation.
+\end{enumerate}
+\item A ring map $A \to B$ is \'etale if and only if
+$B \cong A[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$
+such that $\Delta = \det \left( \frac{\partial f_i}{\partial x_j} \right)$
+is invertible in $B$.
+\item The base change of an \'etale morphism is \'etale.
+\item Compositions of \'etale morphisms are \'etale.
+\item Fibre products and products of \'etale morphisms are \'etale.
+\item An \'etale morphism has relative dimension 0.
+\item Let $Y \to X$ be an \'etale morphism.
+If $X$ is reduced (respectively regular) then so is $Y$.
+\item \'Etale morphisms are open.
+\item If $X \to S$ and $Y \to S$ are \'etale, then any
+$S$-morphism $X \to Y$ is also \'etale.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+We have proved these facts (and more) in the preceding chapters.
+Here is a list of references:
+(1) Morphisms, Lemma \ref{morphisms-lemma-etale-over-field}.
+(2) Morphisms, Lemmas \ref{morphisms-lemma-etale-flat-etale-fibres}
+and \ref{morphisms-lemma-flat-unramified-etale}.
+(3) Algebra, Lemma \ref{algebra-lemma-etale-standard-smooth}.
+(4) Morphisms, Lemma \ref{morphisms-lemma-base-change-etale}.
+(5) Morphisms, Lemma \ref{morphisms-lemma-composition-etale}.
+(6) Follows formally from (4) and (5).
+(7) Morphisms, Lemmas \ref{morphisms-lemma-etale-locally-quasi-finite}
+and \ref{morphisms-lemma-locally-quasi-finite-rel-dimension-0}.
+(8) See Algebra, Lemmas \ref{algebra-lemma-reduced-goes-up} and
+\ref{algebra-lemma-Rk-goes-up}, see also more results of this kind
+in \'Etale Morphisms, Section \ref{etale-section-properties-permanence}.
+(9) See Morphisms, Lemma \ref{morphisms-lemma-fppf-open} and
+\ref{morphisms-lemma-etale-flat}.
+(10) See Morphisms, Lemma \ref{morphisms-lemma-etale-permanence}.
+\end{proof}
+
+\begin{definition}
+\label{definition-standard-etale}
+A ring map $A \to B$ is called {\it standard \'etale} if
+$B \cong \left(A[t]/(f)\right)_g$ with $f, g \in A[t]$, with $f$ monic,
+and $\text{d}f/\text{d}t$ invertible in $B$.
+\end{definition}
+
+\noindent
+It is true that a standard \'etale ring map is \'etale. Namely, suppose
+that $B = \left(A[t]/(f)\right)_g$ with $f, g \in A[t]$, with $f$ monic,
+and $\text{d}f/\text{d}t$ invertible in $B$. Then $A[t]/(f)$ is a finite
+free $A$-module of rank equal to the degree of the monic polynomial $f$.
+Hence $B$, as a localization of this free algebra is finitely presented
+and flat over $A$. To finish the proof that $B$ is \'etale it suffices
+to show that the fibre rings
+$$
+\kappa(\mathfrak p) \otimes_A B
+\cong
+\kappa(\mathfrak p) \otimes_A (A[t]/(f))_g
+\cong
+\kappa(\mathfrak p)[t, 1/\overline{g}]/(\overline{f})
+$$
+are finite products of finite separable field extensions.
+Here $\overline{f}, \overline{g} \in \kappa(\mathfrak p)[t]$ are
+the images of $f$ and $g$. Let
+$$
+\overline{f} = \overline{f}_1 \ldots \overline{f}_a
+\overline{f}_{a + 1}^{e_1} \ldots \overline{f}_{a + b}^{e_b}
+$$
+be the factorization of $\overline{f}$ into powers of pairwise distinct
+irreducible monic factors $\overline{f}_i$ with $e_1, \ldots, e_b > 0$.
+By assumption $\text{d}\overline{f}/\text{d}t$ is invertible in
+$\kappa(\mathfrak p)[t, 1/\overline{g}]$. Hence we see that
the $\overline{f}_i$ for $i > a$ are invertible. We conclude
+that
+$$
+\kappa(\mathfrak p)[t, 1/\overline{g}]/(\overline{f})
+\cong
+\prod\nolimits_{i \in I} \kappa(\mathfrak p)[t]/(\overline{f}_i)
+$$
+where $I \subset \{1, \ldots, a\}$ is the subset of indices $i$ such that
+$\overline{f}_i$ does not divide $\overline{g}$. Moreover, the image of
+$\text{d}\overline{f}/\text{d}t$ in the factor
+$\kappa(\mathfrak p)[t]/(\overline{f}_i)$ is clearly equal to a
+unit times $\text{d}\overline{f}_i/\text{d}t$. Hence we conclude that
+$\kappa_i = \kappa(\mathfrak p)[t]/(\overline{f}_i)$ is a finite field
+extension of $\kappa(\mathfrak p)$ generated by one element whose
+minimal polynomial is separable, i.e., the field extension
+$\kappa_i/\kappa(\mathfrak p)$ is finite separable as desired.
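\medskip\noindent
For example, if $n$ and $a$ are invertible in $A$, then
$A \to B = A[t]/(t^n - a)$ is standard \'etale (with $g = 1$):
the polynomial $f = t^n - a$ is monic, the class of $t$ is invertible
in $B$ because $t \cdot a^{-1}t^{n - 1} = 1$, and hence
$\text{d}f/\text{d}t = nt^{n - 1}$ is invertible in $B$. This is exactly
the type of ring map used in the proof of
Lemma \ref{lemma-kummer-sequence} below.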
+
+\medskip\noindent
+It turns out that any \'etale ring map is locally standard \'etale.
+To formulate this we introduce the following notation.
+A ring map $A \to B$ is {\it \'etale at a prime $\mathfrak q$} of $B$ if there
+exists $h \in B$, $h \not \in \mathfrak q$ such that $A \to B_h$ is \'etale.
+Here is the result.
+
+\begin{theorem}
+\label{theorem-standard-etale}
+A ring map $A \to B$ is \'etale at a prime $\mathfrak q$ if and only if there
+exists $g \in B$, $g \not \in \mathfrak q$ such that $B_g$ is standard
+\'etale over $A$.
+\end{theorem}
+
+\begin{proof}
+See
+Algebra, Proposition \ref{algebra-proposition-etale-locally-standard}.
+\end{proof}
+
+
+
+
+
+\section{\'Etale coverings}
+\label{section-etale-covering}
+
+\noindent
+We recall the definition.
+
+\begin{definition}
+\label{definition-etale-covering}
+An {\it \'etale covering} of a scheme $U$ is a family of morphisms
+of schemes
+$\{\varphi_i : U_i \to U\}_{i \in I}$ such that
+\begin{enumerate}
+\item each $\varphi_i$ is an \'etale morphism,
+\item the $U_i$ cover $U$, i.e., $U = \bigcup_{i\in I}\varphi_i(U_i)$.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-etale-fpqc}
+Any \'etale covering is an fpqc covering.
+\end{lemma}
+
+\begin{proof}
+(See also
+Topologies,
+Lemma \ref{topologies-lemma-zariski-etale-smooth-syntomic-fppf-fpqc}.)
+Let $\{\varphi_i : U_i \to U\}_{i \in I}$ be an \'etale covering.
Since an \'etale morphism is flat, and the images of the $\varphi_i$
cover $U$ by assumption, the property fp (faithfully flat) is satisfied.
+To check the property qc (quasi-compact), let $V \subset U$ be an affine
+open, and write $\varphi_i^{-1}(V) = \bigcup_{j \in J_i} V_{ij}$
+for some affine opens $V_{ij} \subset U_i$. Since $\varphi_i$ is open
+(as \'etale morphisms are open), we see that
+$V = \bigcup_{i\in I} \bigcup_{j \in J_i} \varphi_i(V_{ij})$
+is an open covering of $V$.
+Further, since $V$ is quasi-compact, this covering has a finite
+refinement.
+\end{proof}
+
+\noindent
+So any statement which is true for fpqc coverings
+remains true {\it a fortiori} for \'etale coverings. For
+instance, the \'etale site is subcanonical.
+
+\begin{definition}
+\label{definition-big-etale-site}
+(For more details see Section \ref{section-big-small}, or
+Topologies, Section \ref{topologies-section-etale}.)
+Let $S$ be a scheme.
+The {\it big \'etale site over $S$} is the site
+$(\Sch/S)_\etale$, see
+Definition \ref{definition-tau-site}.
+The {\it small \'etale site over $S$} is the site $S_\etale$, see
+Definition \ref{definition-tau-site}.
+We define similarly the {\it big} and {\it small Zariski sites} on $S$,
+denoted $(\Sch/S)_{Zar}$ and $S_{Zar}$.
+\end{definition}
+
+\noindent
Loosely speaking the big \'etale site of $S$ is made up of schemes over $S$
with coverings the \'etale coverings. The small \'etale site of $S$ is made
up of schemes \'etale over $S$ with coverings the \'etale coverings.
Actually any morphism between objects of $S_\etale$ is \'etale, by
Proposition \ref{proposition-etale-morphisms},
+hence to check that $\{U_i \to U\}_{i \in I}$ in $S_\etale$
+is a covering it suffices to check that $\coprod U_i \to U$ is surjective.
+
+\medskip\noindent
+The small \'etale site has fewer objects than the big \'etale site, it
+contains only the ``opens'' of the \'etale topology on $S$. It is a full
+subcategory of the big \'etale site, and its topology is induced from the
+topology on the big site. Hence it is true that the restriction functor
+from the big \'etale site to the small one is exact and maps injectives to
+injectives. This has the following consequence.
+
+\begin{proposition}
+\label{proposition-cohomology-restrict-small-site}
+Let $S$ be a scheme and $\mathcal{F}$ an abelian sheaf on
+$(\Sch/S)_\etale$.
+Then $\mathcal{F}|_{S_\etale}$ is a sheaf on $S_\etale$ and
+$$
+H^p_\etale(S, \mathcal{F}|_{S_\etale}) =
+H^p_\etale(S, \mathcal{F})
+$$
+for all $p \geq 0$.
+\end{proposition}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-compare-cohomology-big-small}.
+\end{proof}
+
+\noindent
+In accordance with the general notation introduced in
+Section \ref{section-big-small}
+we write $H_\etale^p(S, \mathcal{F})$ for the above cohomology group.
+
+
+
+
+
+%9.24.09
+\section{Kummer theory}
+\label{section-kummer}
+
+\noindent
+Let $n \in \mathbf{N}$ and consider the functor $\mu_n$ defined by
+$$
+\begin{matrix}
+\Sch^{opp} & \longrightarrow & \textit{Ab} \\
+S & \longmapsto &
+\mu_n(S)
+=
+\{t \in \Gamma(S, \mathcal{O}_S^*) \mid t^n = 1 \}.
+\end{matrix}
+$$
+By
+Groupoids, Example \ref{groupoids-example-roots-of-unity}
+this is a representable functor, and the scheme representing it
+is denoted $\mu_n$ also. By
+Lemma \ref{lemma-representable-sheaf-fpqc}
+this functor satisfies the sheaf condition for the fpqc topology
(in particular, it also satisfies the sheaf condition for the
\'etale, Zariski, etc.\ topologies).
+
+\begin{lemma}
+\label{lemma-kummer-sequence}
If $n \in \Gamma(S, \mathcal{O}_S^*)$ then
+$$
+0 \to
+\mu_{n, S} \to
+\mathbf{G}_{m, S} \xrightarrow{(\cdot)^n}
+\mathbf{G}_{m, S} \to 0
+$$
+is a short exact sequence of sheaves on both the small and
+big \'etale site of $S$.
+\end{lemma}
+
+\begin{proof}
+By definition the sheaf $\mu_{n, S}$ is the kernel of the map
+$(\cdot)^n$. Hence it suffices to show that the last map is surjective.
+Let $U$ be a scheme over $S$. Let
+$f \in \mathbf{G}_m(U) = \Gamma(U, \mathcal{O}_U^*)$.
+We need to show that we can find an \'etale cover of
+$U$ over the members of which the restriction of $f$ is an $n$th power.
+Set
+$$
+U' =
+\underline{\Spec}_U(\mathcal{O}_U[T]/(T^n-f))
+\xrightarrow{\pi}
+U.
+$$
+(See
+Constructions, Section \ref{constructions-section-spec-via-glueing} or
+\ref{constructions-section-spec}
+for a discussion of the relative spectrum.)
+Let $\Spec(A) \subset U$ be an affine open, and say $f|_{\Spec(A)}$ corresponds
+to the unit $a \in A^*$. Then $\pi^{-1}(\Spec(A)) = \Spec(B)$ with
+$B = A[T]/(T^n - a)$. The ring map $A \to B$ is finite free of rank $n$,
+hence it is faithfully flat, and hence we conclude that
+$\Spec(B) \to \Spec(A)$ is surjective. Since this holds for every
+affine open in $U$ we conclude that $\pi$ is surjective.
+In addition, $n$ and $T^{n - 1}$ are invertible in $B$, so
+$nT^{n-1} \in B^*$ and the ring map $A \to B$ is standard \'etale,
+in particular \'etale. Since this holds for every affine open of $U$
+we conclude that $\pi$ is \'etale. Hence
+$\mathcal{U} = \{\pi : U' \to U\}$ is an \'etale covering.
+Moreover, $f|_{U'} = (f')^n$ where $f'$ is the class of $T$
+in $\Gamma(U', \mathcal{O}_{U'}^*)$, so $\mathcal{U}$ has the desired property.
+\end{proof}
+
+\begin{remark}
+\label{remark-no-kummer-sequence-zariski}
+Lemma \ref{lemma-kummer-sequence} is false when ``\'etale'' is replaced
+with ``Zariski''.
+Since the \'etale topology is coarser than the smooth topology, see
Topologies, Lemma \ref{topologies-lemma-zariski-etale-smooth},
it follows that the sequence is also exact in the smooth topology.
+\end{remark}
+
+\noindent
+By
+Theorem \ref{theorem-picard-group}
+and
+Lemma \ref{lemma-kummer-sequence}
+and general properties of cohomology we obtain
+the long exact cohomology sequence
+$$
+\xymatrix{
+0 \ar[r] &
+H_\etale^0(S, \mu_{n, S}) \ar[r] &
+\Gamma(S, \mathcal{O}_S^*) \ar[r]^{(\cdot)^n} &
+\Gamma(S, \mathcal{O}_S^*) \ar@(rd, ul)[rdllllr]
+\\
+& H_\etale^1(S, \mu_{n, S}) \ar[r] &
+\Pic(S) \ar[r]^{(\cdot)^n} &
+\Pic(S) \ar@(rd, ul)[rdllllr] \\
+& H_\etale^2(S, \mu_{n, S}) \ar[r] &
+\ldots
+}
+$$
+at least if $n$ is invertible on $S$. When $n$ is not invertible on $S$
+we can apply the following lemma.
+
+\begin{lemma}
+\label{lemma-kummer-sequence-syntomic}
+For any $n \in \mathbf{N}$ the sequence
+$$
+0 \to
+\mu_{n, S} \to
+\mathbf{G}_{m, S} \xrightarrow{(\cdot)^n}
+\mathbf{G}_{m, S} \to 0
+$$
+is a short exact sequence of sheaves on the site
+$(\Sch/S)_{fppf}$ and $(\Sch/S)_{syntomic}$.
+\end{lemma}
+
+\begin{proof}
+By definition the sheaf $\mu_{n, S}$ is the kernel of the map
+$(\cdot)^n$. Hence it suffices to show that the last map is surjective.
Since the syntomic topology is coarser than the fppf topology, see
+Topologies, Lemma \ref{topologies-lemma-zariski-etale-smooth-syntomic-fppf},
+it suffices to prove this for the syntomic topology.
+Let $U$ be a scheme over $S$. Let
+$f \in \mathbf{G}_m(U) = \Gamma(U, \mathcal{O}_U^*)$.
+We need to show that we can find a syntomic cover of
+$U$ over the members of which the restriction of $f$ is an $n$th power.
+Set
+$$
+U' =
+\underline{\Spec}_U(\mathcal{O}_U[T]/(T^n-f))
+\xrightarrow{\pi}
+U.
+$$
+(See
+Constructions, Section \ref{constructions-section-spec-via-glueing} or
+\ref{constructions-section-spec}
+for a discussion of the relative spectrum.)
+Let $\Spec(A) \subset U$ be an affine open, and say $f|_{\Spec(A)}$ corresponds
+to the unit $a \in A^*$. Then $\pi^{-1}(\Spec(A)) = \Spec(B)$ with
+$B = A[T]/(T^n - a)$. The ring map $A \to B$ is finite free of rank $n$,
+hence it is faithfully flat, and hence we conclude that
+$\Spec(B) \to \Spec(A)$ is surjective. Since this holds for every
+affine open in $U$ we conclude that $\pi$ is surjective.
+In addition, $B$ is a global relative complete intersection over $A$, so
+the ring map $A \to B$ is standard syntomic,
+in particular syntomic. Since this holds for every affine open of $U$
+we conclude that $\pi$ is syntomic. Hence
+$\mathcal{U} = \{\pi : U' \to U\}$ is a syntomic covering.
+Moreover, $f|_{U'} = (f')^n$ where $f'$ is the class of $T$
+in $\Gamma(U', \mathcal{O}_{U'}^*)$, so $\mathcal{U}$ has the desired property.
+\end{proof}
+
+\begin{remark}
+\label{remark-no-kummer-sequence-smooth-etale-zariski}
+Lemma \ref{lemma-kummer-sequence-syntomic}
+is false for the smooth, \'etale, or Zariski topology.
+\end{remark}
+
+\noindent
+By
+Theorem \ref{theorem-picard-group}
+and
+Lemma \ref{lemma-kummer-sequence-syntomic}
+and general properties of cohomology we obtain
+the long exact cohomology sequence
+$$
+\xymatrix{
+0 \ar[r] &
+H_{fppf}^0(S, \mu_{n, S}) \ar[r] &
+\Gamma(S, \mathcal{O}_S^*) \ar[r]^{(\cdot)^n} &
+\Gamma(S, \mathcal{O}_S^*) \ar@(rd, ul)[rdllllr]
+\\
+& H_{fppf}^1(S, \mu_{n, S}) \ar[r] &
+\Pic(S) \ar[r]^{(\cdot)^n} &
+\Pic(S) \ar@(rd, ul)[rdllllr] \\
+& H_{fppf}^2(S, \mu_{n, S}) \ar[r] &
+\ldots
+}
+$$
+for any scheme $S$ and any integer $n$. Of course there is a similar sequence
+with syntomic cohomology.
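\medskip\noindent
Splitting up the long exact sequence above we obtain, for every scheme $S$
and every integer $n$, a short exact sequence
$$
0 \to
\Gamma(S, \mathcal{O}_S^*)/\Gamma(S, \mathcal{O}_S^*)^n \to
H_{fppf}^1(S, \mu_{n, S}) \to
\Pic(S)[n] \to 0
$$
where $\Pic(S)[n]$ denotes the $n$-torsion subgroup of $\Pic(S)$. For
example, if $S = \Spec(k)$ is the spectrum of a field, then $\Pic(S) = 0$
and we find $H_{fppf}^1(S, \mu_{n, S}) = k^*/(k^*)^n$.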
+
+\medskip\noindent
+Let $n \in \mathbf{N}$ and let $S$ be any scheme.
+There is another more direct way to describe the first cohomology group with
+values in $\mu_n$. Consider pairs
+$(\mathcal{L}, \alpha)$ where $\mathcal{L}$ is an invertible sheaf on $S$
+and $\alpha : \mathcal{L}^{\otimes n} \to \mathcal{O}_S$ is a trivialization
+of the $n$th tensor power of $\mathcal{L}$.
+Let $(\mathcal{L}', \alpha')$ be a second such pair.
+An isomorphism $\varphi : (\mathcal{L}, \alpha) \to (\mathcal{L}', \alpha')$
+is an isomorphism $\varphi : \mathcal{L} \to \mathcal{L}'$ of invertible
+sheaves such that the diagram
+$$
+\xymatrix{
+\mathcal{L}^{\otimes n} \ar[d]_{\varphi^{\otimes n}} \ar[r]_\alpha &
+\mathcal{O}_S \ar[d]^1 \\
+(\mathcal{L}')^{\otimes n} \ar[r]^{\alpha'} &
+\mathcal{O}_S \\
+}
+$$
+commutes. Thus we have
+\begin{equation}
+\label{equation-isomorphisms-pairs}
+\mathit{Isom}_S((\mathcal{L}, \alpha), (\mathcal{L}', \alpha'))
+=
+\left\{
+\begin{matrix}
+\emptyset & \text{if} & \text{they are not isomorphic} \\
H^0(S, \mu_{n, S})\cdot \varphi & \text{if} &
\varphi \text{ is an isomorphism of pairs}
+\end{matrix}
+\right.
+\end{equation}
+Moreover, given two pairs $(\mathcal{L}, \alpha)$, $(\mathcal{L}', \alpha')$
+the tensor product
+$$
+(\mathcal{L}, \alpha) \otimes (\mathcal{L}', \alpha')
+=
+(\mathcal{L} \otimes \mathcal{L}', \alpha \otimes \alpha')
+$$
+is another pair. The pair $(\mathcal{O}_S, 1)$ is an identity for this
+tensor product operation, and an inverse is given by
+$$
+(\mathcal{L}, \alpha)^{-1} = (\mathcal{L}^{\otimes -1}, \alpha^{\otimes -1}).
+$$
+Hence the collection of isomorphism classes of pairs forms an abelian group.
+Note that
+$$
+(\mathcal{L}, \alpha)^{\otimes n}
+=
+(\mathcal{L}^{\otimes n}, \alpha^{\otimes n})
+\xrightarrow{\alpha}
+(\mathcal{O}_S, 1)
+$$
+is an isomorphism
+hence every element of this group has order dividing $n$. We warn the reader
+that this group is in general {\bf not} the $n$-torsion in $\Pic(S)$.
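\medskip\noindent
Here is an example illustrating the warning. Let $S = \Spec(k)$ be the
spectrum of a field. Every invertible sheaf on $S$ is trivial, so every pair
is isomorphic to one of the form $(\mathcal{O}_S, u)$ with $u \in k^*$. An
isomorphism $(\mathcal{O}_S, u) \to (\mathcal{O}_S, u')$ is multiplication
by some $c \in k^*$ such that $u = c^n u'$. Hence the group of isomorphism
classes of pairs is $k^*/(k^*)^n$, which is often nonzero, whereas the
$n$-torsion of $\Pic(S) = 0$ is trivial.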
+
+\begin{lemma}
+\label{lemma-describe-h1-mun}
+Let $S$ be a scheme. There is a canonical identification
+$$
+H_\etale^1(S, \mu_n) =
+\text{group of pairs }(\mathcal{L}, \alpha)\text{ up to isomorphism as above}
+$$
+if $n$ is invertible on $S$. In general we have
+$$
+H_{fppf}^1(S, \mu_n) =
+\text{group of pairs }(\mathcal{L}, \alpha)\text{ up to isomorphism as above}.
+$$
+The same result holds with fppf replaced by syntomic.
+\end{lemma}
+
+\begin{proof}
+We first prove the second isomorphism.
+Let $(\mathcal{L}, \alpha)$ be a pair as above.
+Choose an affine open covering $S = \bigcup U_i$ such that
+$\mathcal{L}|_{U_i} \cong \mathcal{O}_{U_i}$. Say $s_i \in \mathcal{L}(U_i)$
+is a generator. Then $\alpha(s_i^{\otimes n}) = f_i \in \mathcal{O}_S^*(U_i)$.
+Writing $U_i = \Spec(A_i)$ we see there exists a global
+relative complete intersection $A_i \to B_i = A_i[T]/(T^n - f_i)$
+such that $f_i$ maps to an $n$th power in $B_i$. In other words, setting
+$V_i = \Spec(B_i)$ we obtain a syntomic covering
+$\mathcal{V} = \{V_i \to S\}_{i \in I}$ and trivializations
+$\varphi_i : (\mathcal{L}, \alpha)|_{V_i} \to (\mathcal{O}_{V_i}, 1)$.
+
+\medskip\noindent
+We will use this result (the existence of the covering $\mathcal{V}$)
+to associate to this pair a cohomology class in
+$H^1_{syntomic}(S, \mu_{n, S})$. We give two (equivalent) constructions.
+
+\medskip\noindent
+First construction: using {\v C}ech cohomology.
+Over the double overlaps $V_i \times_S V_j$ we have the isomorphism
+$$
+(\mathcal{O}_{V_i \times_S V_j}, 1)
+\xrightarrow{\text{pr}_0^*\varphi_i^{-1}}
+(\mathcal{L}|_{V_i \times_S V_j}, \alpha|_{V_i \times_S V_j})
+\xrightarrow{\text{pr}_1^*\varphi_j}
+(\mathcal{O}_{V_i \times_S V_j}, 1)
+$$
+of pairs. By (\ref{equation-isomorphisms-pairs}) this is given by an
+element $\zeta_{ij} \in \mu_n(V_i \times_S V_j)$. We omit the verification
+that these $\zeta_{ij}$'s give a $1$-cocycle, i.e., give
+an element $(\zeta_{i_0i_1}) \in \check C(\mathcal{V}, \mu_n)$
+with $d(\zeta_{i_0i_1}) = 0$. Thus its class is an element in
+$\check H^1(\mathcal{V}, \mu_n)$ and by
+Theorem \ref{theorem-cech-ss}
+it maps to a cohomology class in $H^1_{syntomic}(S, \mu_{n, S})$.
+
+\medskip\noindent
Second construction: using torsors. Consider the presheaf
+$$
+\mu_n(\mathcal{L}, \alpha) :
+U
+\longmapsto
+\mathit{Isom}_U((\mathcal{O}_U, 1), (\mathcal{L}, \alpha)|_U)
+$$
+on $(\Sch/S)_{syntomic}$.
+We may view this as a subpresheaf of
+$\SheafHom_\mathcal{O}(\mathcal{O}, \mathcal{L})$ (internal hom
+sheaf, see
+Modules on Sites, Section \ref{sites-modules-section-internal-hom}).
+Since the conditions defining this subpresheaf are local, we see that it is
+a sheaf.
+By (\ref{equation-isomorphisms-pairs}) this sheaf has a free action of
+the sheaf $\mu_{n, S}$. Hence the only thing we have to check is that
+it locally has sections. This is true because of the existence of the
+trivializing cover $\mathcal{V}$. Hence $\mu_n(\mathcal{L}, \alpha)$
+is a $\mu_{n, S}$-torsor and by
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-torsors-h1}
+we obtain a corresponding element of $H^1_{syntomic}(S, \mu_{n, S})$.
+
+\medskip\noindent
It remains to show the following:
+\begin{enumerate}
+\item The two constructions give the same cohomology class.
+\item Isomorphic pairs give rise to the same cohomology class.
+\item The cohomology class of
+$(\mathcal{L}, \alpha) \otimes (\mathcal{L}', \alpha')$
+is the sum of the cohomology classes of
+$(\mathcal{L}, \alpha)$ and $(\mathcal{L}', \alpha')$.
+\item If the cohomology class is trivial, then the pair is trivial.
+\item Any element of $H^1_{syntomic}(S, \mu_{n, S})$ is the
+cohomology class of a pair.
+\end{enumerate}
+We omit the proof of (1). Part (2) is clear from the second construction,
+since isomorphic torsors give the same cohomology classes.
+Part (3) is clear from the first construction, since the resulting
+{\v C}ech classes add up. Part (4) is clear from the second construction
+since a torsor is trivial if and only if it has a global section, see
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-trivial-torsor}.
+
+\medskip\noindent
+Part (5) can be seen as follows (although a direct proof would be
+preferable). Suppose $\xi \in H^1_{syntomic}(S, \mu_{n, S})$.
+Then $\xi$ maps to an element
+$\overline{\xi} \in H^1_{syntomic}(S, \mathbf{G}_{m, S})$
+with $n \overline{\xi} = 0$. By
+Theorem \ref{theorem-picard-group}
+we see that $\overline{\xi}$ corresponds to an invertible sheaf $\mathcal{L}$
+whose $n$th tensor power is isomorphic to $\mathcal{O}_S$.
+Hence there exists a pair $(\mathcal{L}, \alpha')$ whose cohomology
+class $\xi'$ has the same image $\overline{\xi'}$ in
+$H^1_{syntomic}(S, \mathbf{G}_{m, S})$. Thus it suffices to show
+that $\xi - \xi'$ is the class of a pair. By construction, and the
+long exact cohomology sequence above, we see that
+$\xi - \xi' = \partial(f)$ for some $f \in H^0(S, \mathcal{O}_S^*)$.
+Consider the pair $(\mathcal{O}_S, f)$. We omit the verification
+that the cohomology class of this pair is $\partial(f)$, which
+finishes the proof of the first identification (with fppf replaced
+with syntomic).
+
+\medskip\noindent
+To see the first, note that if $n$ is invertible on $S$, then the
+covering $\mathcal{V}$ constructed in the first part of the proof
+is actually an \'etale covering (compare with the proof of
+Lemma \ref{lemma-kummer-sequence}). The rest of the proof is independent
+of the topology, apart from the very last argument which uses that
+the Kummer sequence is exact, i.e., uses Lemma \ref{lemma-kummer-sequence}.
+\end{proof}
+
+
+
+
+
+
+\section{Neighborhoods, stalks and points}
+\label{section-stalks}
+
+\noindent
+We can associate to any geometric point of $S$ a stalk functor which is
+exact. A map of sheaves on $S_\etale$ is an isomorphism if and only
+if it
+is an isomorphism on all these stalks. A complex of abelian sheaves is
+exact if and only if the complex of stalks is exact at all geometric points.
+Altogether this means that the small \'etale site of a scheme $S$
+has enough points. It also turns out that any point of the small \'etale topos
+of $S$ (an abstract notion) is given by a geometric point.
+Thus in some sense the small \'etale topos of $S$ can be understood in
+terms of geometric points and neighbourhoods.
+
+\begin{definition}
+\label{definition-geometric-point}
+Let $S$ be a scheme.
+\begin{enumerate}
+\item A {\it geometric point} of $S$ is a morphism
+$\Spec(k) \to S$ where $k$ is algebraically closed.
Such a point is usually denoted $\overline{s}$, i.e., by an overlined
lower case letter. We often use $\overline{s}$ to denote the scheme
+$\Spec(k)$ as well as the morphism, and we use $\kappa(\overline{s})$
+to denote $k$.
+\item We say $\overline{s}$ {\it lies over} $s$
+to indicate that $s \in S$ is the image of $\overline{s}$.
+\item An {\it \'etale neighborhood} of a geometric point $\overline{s}$
+of $S$ is a commutative diagram
+$$
+\xymatrix{
+& U \ar[d]^\varphi \\
{\overline{s}} \ar[r]^{\overline{s}} \ar[ur]^{\overline{u}} & S
+}
+$$
+where $\varphi$ is an \'etale morphism of schemes.
+We write $(U, \overline{u}) \to (S, \overline{s})$.
+\item A {\it morphism of \'etale neighborhoods}
+$(U, \overline{u}) \to (U', \overline{u}')$
+is an $S$-morphism $h: U \to U'$
+such that $\overline{u}' = h \circ \overline{u}$.
+\end{enumerate}
+\end{definition}
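\medskip\noindent
For example, if $S = \Spec(\mathbf{Q})$, then any choice of an algebraic
closure $\overline{\mathbf{Q}}$ of $\mathbf{Q}$ determines a geometric point
$\Spec(\overline{\mathbf{Q}}) \to \Spec(\mathbf{Q})$, and so does the
inclusion $\mathbf{Q} \subset \mathbf{C}$, since $\mathbf{C}$ is
algebraically closed. In particular, many different geometric points may lie
over the same point $s \in S$.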
+
+\begin{remark}
+\label{remark-etale-between-etale}
+Since $U$ and $U'$ are \'etale over $S$, any $S$-morphism
+between them is also \'etale, see
+Proposition \ref{proposition-etale-morphisms}.
+In particular all morphisms of \'etale neighborhoods are \'etale.
+\end{remark}
+
+\begin{remark}
+\label{remark-etale-neighbourhoods}
+Let $S$ be a scheme and $s \in S$ a point. In
+More on Morphisms,
+Definition \ref{more-morphisms-definition-etale-neighbourhood}
+we defined the notion of an \'etale neighbourhood $(U, u) \to (S, s)$
+of $(S, s)$. If $\overline{s}$ is a geometric point of $S$ lying over
+$s$, then any \'etale neighbourhood $(U, \overline{u}) \to (S, \overline{s})$
+gives rise to an \'etale neighbourhood $(U, u)$ of $(S, s)$ by taking
+$u \in U$ to be the unique point of $U$ such that $\overline{u}$
+lies over $u$. Conversely, given an \'etale neighbourhood $(U, u)$
+of $(S, s)$ the residue field extension $\kappa(u)/\kappa(s)$
+is finite separable (see
+Proposition \ref{proposition-etale-morphisms})
+and hence we can find an embedding $\kappa(u) \subset \kappa(\overline{s})$
+over $\kappa(s)$. In other words, we can find a geometric point
+$\overline{u}$ of $U$ lying over $u$ such that $(U, \overline{u})$
+is an \'etale neighbourhood of $(S, \overline{s})$.
+We will use these observations to go between the two types of
+\'etale neighbourhoods.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-cofinal-etale}
+Let $S$ be a scheme, and let $\overline{s}$ be a geometric point of $S$.
+The category of \'etale neighborhoods is cofiltered. More precisely:
+\begin{enumerate}
+\item Let $(U_i, \overline{u}_i)_{i = 1, 2}$ be two \'etale neighborhoods of
+$\overline{s}$ in $S$. Then there exists a third \'etale neighborhood
+$(U, \overline{u})$ and morphisms
+$(U, \overline{u}) \to (U_i, \overline{u}_i)$, $i = 1, 2$.
+\item Let $h_1, h_2: (U, \overline{u}) \to (U', \overline{u}')$ be two
+morphisms between \'etale neighborhoods of $\overline{s}$. Then there exist an
+\'etale neighborhood $(U'', \overline{u}'')$ and a morphism
+$h : (U'', \overline{u}'') \to (U, \overline{u})$
+which equalizes $h_1$ and $h_2$, i.e., such that
+$h_1 \circ h = h_2 \circ h$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+For part (1), consider the fibre product $U = U_1 \times_S U_2$.
+It is \'etale over both $U_1$ and $U_2$ because \'etale morphisms are
+preserved under base change, see
+Proposition \ref{proposition-etale-morphisms}.
+The map $\overline{s} \to U$ defined by $(\overline{u}_1, \overline{u}_2)$
+gives it the structure of an \'etale neighborhood mapping to both
+$U_1$ and $U_2$. For part (2), define $U''$ as the fibre product
+$$
+\xymatrix{
+U'' \ar[r] \ar[d] & U \ar[d]^{(h_1, h_2)} \\
+U' \ar[r]^-\Delta & U' \times_S U'.
+}
+$$
+Since $\overline{u}$ and $\overline{u}'$ agree over $S$ with $\overline{s}$,
+we see that $\overline{u}'' = (\overline{u}, \overline{u}')$ is a geometric
+point of $U''$. In particular $U'' \not = \emptyset$.
+Moreover, since $U'$ is \'etale over $S$, so is the fibre product
+$U'\times_S U'$ (see
+Proposition \ref{proposition-etale-morphisms}).
+Hence the vertical arrow $(h_1, h_2)$ is \'etale by
+Remark \ref{remark-etale-between-etale} above.
+Therefore $U''$ is \'etale over $U'$ by base change, and hence also
+\'etale over $S$ (because compositions of \'etale morphisms are \'etale).
+Thus $(U'', \overline{u}'')$ is a solution to the problem.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-geometric-lift-to-cover}
+Let $S$ be a scheme.
+Let $\overline{s}$ be a geometric point of $S$.
+Let $(U, \overline{u})$ be an \'etale neighborhood of $\overline{s}$.
+Let $\mathcal{U} = \{\varphi_i : U_i \to U \}_{i\in I}$ be an \'etale covering.
+Then there exist $i \in I$ and $\overline{u}_i : \overline{s} \to U_i$
+such that $\varphi_i : (U_i, \overline{u}_i) \to (U, \overline{u})$
+is a morphism of \'etale neighborhoods.
+\end{lemma}
+
+\begin{proof}
+As $U = \bigcup_{i\in I} \varphi_i(U_i)$, the fibre product
+$\overline{s} \times_{\overline{u}, U, \varphi_i} U_i$ is not empty
+for some $i$. Then look at the cartesian diagram
+$$
+\xymatrix{
+\overline{s} \times_{\overline{u}, U, \varphi_i} U_i
+\ar[d]^{\text{pr}_1} \ar[r]_-{\text{pr}_2} & U_i
+\ar[d]^{\varphi_i} \\
+\Spec(k) = \overline{s} \ar@/^1pc/[u]^\sigma
+\ar[r]^-{\overline{u}} & U
+}
+$$
The projection $\text{pr}_1$ is the base change of the \'etale morphism
$\varphi_i$, so it is \'etale, see
+Proposition \ref{proposition-etale-morphisms}.
Therefore, $\overline{s} \times_{\overline{u}, U, \varphi_i} U_i$
is a disjoint union of spectra of finite separable extensions of $k$, by
Proposition \ref{proposition-etale-morphisms}. Here
+$\overline{s} = \Spec(k)$. But $k$ is algebraically closed, so all
+these extensions are trivial, and there exists a section $\sigma$ of
+$\text{pr}_1$. The composition
+$\text{pr}_2 \circ \sigma$ gives a map compatible with $\overline{u}$.
+\end{proof}
+
+\begin{definition}
+\label{definition-stalk}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be a presheaf on $S_\etale$.
+Let $\overline{s}$ be a geometric point of $S$.
+The {\it stalk} of $\mathcal{F}$ at $\overline{s}$ is
+$$
+\mathcal{F}_{\overline{s}}
+=
+\colim_{(U, \overline{u})} \mathcal{F}(U)
+$$
+where $(U, \overline{u})$ runs over all \'etale
+neighborhoods of $\overline{s}$ in $S$.
+\end{definition}
+
+\noindent
+By Lemma \ref{lemma-cofinal-etale}, this colimit is over a filtered
+index category, namely the opposite of the category of \'etale neighbourhoods.
+In other words, an element of $\mathcal{F}_{\overline{s}}$ can be
+thought of as a triple $(U, \overline{u}, \sigma)$ where
+$\sigma \in \mathcal{F}(U)$. Two triples
+$(U, \overline{u}, \sigma)$, $(U', \overline{u}', \sigma')$
+define the same element of the stalk if there exists a third
+\'etale neighbourhood $(U'', \overline{u}'')$ and morphisms of \'etale
+neighbourhoods $h : (U'', \overline{u}'') \to (U, \overline{u})$,
+$h' : (U'', \overline{u}'') \to (U', \overline{u}')$ such that
+$h^*\sigma = (h')^*\sigma'$ in $\mathcal{F}(U'')$. See
+Categories, Section \ref{categories-section-directed-colimits}.
+
+\begin{lemma}
+\label{lemma-stalk-gives-point}
+Let $S$ be a scheme. Let $\overline{s}$ be a geometric point of $S$.
+Consider the functor
+\begin{align*}
+u : S_\etale & \longrightarrow \textit{Sets}, \\
+U & \longmapsto
+|U_{\overline{s}}|
+=
+\{\overline{u} \text{ such that }(U, \overline{u})
+\text{ is an \'etale neighbourhood of }\overline{s}\}.
+\end{align*}
+Here $|U_{\overline{s}}|$ denotes the underlying set of the geometric fibre.
+Then $u$ defines a point $p$ of the site $S_\etale$
+(Sites, Definition \ref{sites-definition-point})
+and its associated stalk functor $\mathcal{F} \mapsto \mathcal{F}_p$
+(Sites, Equation \ref{sites-equation-stalk})
+is the functor $\mathcal{F} \mapsto \mathcal{F}_{\overline{s}}$
+defined above.
+\end{lemma}
+
+\begin{proof}
+In the proof of
+Lemma \ref{lemma-geometric-lift-to-cover}
+we have seen that the scheme $U_{\overline{s}}$ is a disjoint union of
+schemes isomorphic to $\overline{s}$. Thus we can also think of
+$|U_{\overline{s}}|$ as the set of geometric points of $U$ lying over
+$\overline{s}$, i.e., as the collection of morphisms
+$\overline{u} : \overline{s} \to U$ fitting into the diagram of
+Definition \ref{definition-geometric-point}.
+From this it follows that $u(S)$ is a singleton, and that
+$u(U \times_V W) = u(U) \times_{u(V)} u(W)$
+whenever $U \to V$ and $W \to V$ are morphisms in $S_\etale$.
+And, given a covering $\{U_i \to U\}_{i \in I}$ in $S_\etale$
+we see that $\coprod u(U_i) \to u(U)$ is surjective by
+Lemma \ref{lemma-geometric-lift-to-cover}.
+Hence
+Sites, Proposition \ref{sites-proposition-point-limits}
+applies, so $p$ is a point of the site $S_\etale$.
+Finally, our functor $\mathcal{F} \mapsto \mathcal{F}_{\overline{s}}$
+is given by exactly the same colimit as the functor
+$\mathcal{F} \mapsto \mathcal{F}_p$ associated to $p$ in
+Sites, Equation \ref{sites-equation-stalk}
+which proves the final assertion.
+\end{proof}
+
+\begin{remark}
+\label{remark-map-stalks}
+Let $S$ be a scheme and let $\overline{s} : \Spec(k) \to S$
+and $\overline{s}' : \Spec(k') \to S$ be two geometric points of
+$S$. A {\it morphism $a : \overline{s} \to \overline{s}'$ of geometric points}
+is simply a morphism $a : \Spec(k) \to \Spec(k')$ such that
+$\overline{s}' \circ a = \overline{s}$. Given such a morphism we obtain
+a functor from the category of \'etale neighbourhoods of $\overline{s}'$
+to the category of \'etale neighbourhoods of $\overline{s}$ by the rule
+$(U, \overline{u}') \mapsto (U, \overline{u}' \circ a)$. Hence we obtain
+a canonical map
+$$
+\mathcal{F}_{\overline{s}'}
+=
+\colim_{(U, \overline{u}')} \mathcal{F}(U)
+\longrightarrow
+\colim_{(U, \overline{u})} \mathcal{F}(U)
+=
+\mathcal{F}_{\overline{s}}
+$$
+from Categories, Lemma \ref{categories-lemma-functorial-colimit}. Using the
+description of elements of stalks as triples this maps the element of
+$\mathcal{F}_{\overline{s}'}$ represented by the triple
+$(U, \overline{u}', \sigma)$ to the element of $\mathcal{F}_{\overline{s}}$
+represented by the triple $(U, \overline{u}' \circ a, \sigma)$.
+Since the functor above is clearly an equivalence we conclude that this
+canonical map is an isomorphism of stalk functors.
+
+\medskip\noindent
+Let us make sure we have the map of stalks corresponding to $a$ pointing
+in the correct direction. Note that the above means, according to
+Sites, Definition \ref{sites-definition-morphism-points},
+that $a$ defines a morphism $a : p \to p'$ between the points $p, p'$ of
+the site $S_\etale$ associated to $\overline{s}, \overline{s}'$ by
+Lemma \ref{lemma-stalk-gives-point}. There are more general morphisms of
+points (corresponding to specializations of points of $S$) which we will
+describe later, and which will not be isomorphisms, see Section
+\ref{section-specialization}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-stalk-exact}
+Let $S$ be a scheme. Let $\overline{s}$ be a geometric point of $S$.
+\begin{enumerate}
+\item The stalk functor
+$\textit{PAb}(S_\etale) \to \textit{Ab}$,
+$\mathcal{F} \mapsto \mathcal{F}_{\overline{s}}$
+is exact.
+\item We have $(\mathcal{F}^\#)_{\overline{s}} = \mathcal{F}_{\overline{s}}$
+for any presheaf of sets $\mathcal{F}$ on $S_\etale$.
+\item The functor
+$\textit{Ab}(S_\etale) \to \textit{Ab}$,
+$\mathcal{F} \mapsto \mathcal{F}_{\overline{s}}$ is exact.
+\item Similarly the functors
+$\textit{PSh}(S_\etale) \to \textit{Sets}$ and
+$\Sh(S_\etale) \to \textit{Sets}$ given by the stalk functor
$\mathcal{F} \mapsto \mathcal{F}_{\overline{s}}$ are exact (see
+Categories, Definition \ref{categories-definition-exact})
+and commute with arbitrary colimits.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Before we indicate how to prove this by direct arguments
+we note that the result follows from the general material in
+Modules on Sites, Section \ref{sites-modules-section-stalks}.
+This is true because $\mathcal{F} \mapsto \mathcal{F}_{\overline{s}}$
+comes from a point of the small \'etale site of $S$, see
+Lemma \ref{lemma-stalk-gives-point}.
+We will only give a direct proof of (1), (2) and (3), and omit
+a direct proof of (4).
+
+\medskip\noindent
+Exactness as a functor on $\textit{PAb}(S_\etale)$ is formal from the
+fact that directed colimits commute with all colimits and with finite
+limits. The identification of the stalks in (2) is via the map
+$$
+\kappa :
+\mathcal{F}_{\overline{s}}
+\longrightarrow
+(\mathcal{F}^\#)_{\overline{s}}
+$$
+induced by the natural morphism $\mathcal{F}\to \mathcal{F}^\#$, see
+Theorem \ref{theorem-sheafification}.
+We claim that this map is an isomorphism of abelian groups. We will show
+injectivity and omit the proof of surjectivity.
+
+\medskip\noindent
+Let $\sigma\in \mathcal{F}_{\overline{s}}$.
+There exists an \'etale neighborhood
+$(U, \overline{u})\to (S, \overline{s})$ such that $\sigma$ is the image of some
+section $s \in \mathcal{F}(U)$. If $\kappa(\sigma) = 0$ in
+$(\mathcal{F}^\#)_{\overline{s}}$ then there exists a morphism of \'etale
+neighborhoods $(U', \overline{u}')\to (U, \overline{u})$ such that
+$s|_{U'}$ is zero in $\mathcal{F}^\#(U')$. It follows there
+exists an \'etale covering
+$\{U_i'\to U'\}_{i\in I}$ such that $s|_{U_i'}=0$ in
+$\mathcal{F}(U_i')$ for all $i$. By Lemma \ref{lemma-geometric-lift-to-cover}
+there exist $i \in I$ and a morphism
+$\overline{u}_i': \overline{s} \to U_i'$ such that
+$(U_i', \overline{u}_i') \to (U', \overline{u}')\to (U, \overline{u})$
are morphisms of \'etale neighborhoods. Hence $\sigma = 0$,
since $(U_i', \overline{u}_i') \to (U, \overline{u})$
is a morphism of \'etale neighbourhoods with
$s|_{U'_i} = 0$. This proves that $\kappa$ is injective.
+
+\medskip\noindent
+To show that the functor $\textit{Ab}(S_\etale) \to \textit{Ab}$ is
+exact, consider any short exact sequence in $\textit{Ab}(S_\etale)$:
+$
+0\to \mathcal{F}\to \mathcal{G}\to \mathcal H \to 0.
+$
+This gives us the exact sequence of presheaves
+$$
+0 \to \mathcal{F} \to \mathcal{G} \to \mathcal H \to
+\mathcal H/^p\mathcal{G} \to 0,
+$$
+where $/^p$ denotes the quotient in $\textit{PAb}(S_\etale)$.
Taking stalks at
$\overline{s}$, we see that $(\mathcal H /^p\mathcal{G})_{\overline{s}} =
(\mathcal H /\mathcal{G})_{\overline{s}} = 0$, since the sheafification of
$\mathcal H/^p\mathcal{G}$ is $0$.
+Therefore,
+$$
+0\to \mathcal{F}_{\overline{s}} \to \mathcal{G}_{\overline{s}} \to
+\mathcal{H}_{\overline{s}} \to 0 = (\mathcal H/^p\mathcal{G})_{\overline{s}}
+$$
+is exact, since taking stalks is exact as a functor from presheaves.
+\end{proof}
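\medskip\noindent
As a quick illustration of part (2), let $A$ be an abelian group and
let $\underline{A}$ denote the sheafification of the constant presheaf
on $S_\etale$ with value $A$. Since the stalk of a presheaf agrees
with the stalk of its sheafification, we obtain
$$
(\underline{A})_{\overline{s}}
=
\colim_{(U, \overline{u})} A
=
A
$$
because the category of \'etale neighbourhoods is cofiltered (in
particular nonempty and connected) and all transition maps in the
colimit are the identity.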
+
+\begin{theorem}
+\label{theorem-exactness-stalks}
+Let $S$ be a scheme.
+A map $a : \mathcal{F} \to \mathcal{G}$ of sheaves of sets is injective
+(resp.\ surjective) if and only if the map on stalks
+$a_{\overline{s}} : \mathcal{F}_{\overline{s}} \to \mathcal{G}_{\overline{s}}$
+is injective (resp.\ surjective) for all geometric points of $S$.
+A sequence of abelian sheaves on $S_\etale$ is exact
+if and only if it is exact on all stalks at geometric points of $S$.
+\end{theorem}
+
+\begin{proof}
+The necessity of exactness on stalks follows from
+Lemma \ref{lemma-stalk-exact}.
+For the converse, it suffices to show that a map of sheaves is surjective
+(respectively injective) if and only if it is surjective (respectively
+injective) on all stalks. We prove this in the case of surjectivity, and omit
+the proof in the case of injectivity.
+
+\medskip\noindent
+Let $\alpha : \mathcal{F} \to \mathcal{G}$ be a map of sheaves such
+that $\mathcal{F}_{\overline{s}} \to \mathcal{G}_{\overline{s}}$
+is surjective for all geometric points. Fix
+$U \in \Ob(S_\etale)$
and $s \in \mathcal{G}(U)$. For every $u \in U$ choose a geometric
point $\overline{u}$ of $U$ lying over $u$ and an \'etale neighborhood
+$(V_u , \overline{v}_u) \to (U, \overline{u})$ such that
+$s|_{V_u} = \alpha(s_{V_u})$ for some
+$s_{V_u} \in \mathcal{F}(V_u)$.
+This is possible since $\alpha$ is surjective on
+stalks. Then $\{V_u \to U\}_{u \in U}$
+is an \'etale covering on which the restrictions of $s$
+are in the image of the map $\alpha$.
+Thus, $\alpha$ is surjective, see
+Sites, Section \ref{sites-section-sheaves-injective}.
+\end{proof}
+
+\begin{remarks}
+\label{remarks-enough-points}
+On points of the geometric sites.
+\begin{enumerate}
+\item Theorem \ref{theorem-exactness-stalks} says that the family of points
+of $S_\etale$ given by the geometric points of $S$
+(Lemma \ref{lemma-stalk-gives-point}) is conservative, see
+Sites, Definition \ref{sites-definition-enough-points}.
+In particular $S_\etale$ has enough points.
+\item Suppose $\mathcal{F}$ is a sheaf on the big \'etale site
+\label{item-stalks-big}
+of $S$. Let $T \to S$ be an object of the big \'etale site of $S$,
+and let $\overline{t}$ be a geometric point of $T$. Then we define
+$\mathcal{F}_{\overline{t}}$ as the stalk
+of the restriction $\mathcal{F}|_{T_\etale}$ of $\mathcal{F}$
+to the small \'etale site of $T$. In other words, we can define
+the stalk of $\mathcal{F}$ at any geometric point of any
+scheme $T/S \in \Ob((\Sch/S)_\etale)$.
+\item The big \'etale site of $S$ also has enough points, by
+considering all geometric points of all objects of this site, see
+(\ref{item-stalks-big}).
+\end{enumerate}
+\end{remarks}
+
+\noindent
+The following lemma should be skipped on a first reading.
+
+\begin{lemma}
+\label{lemma-points-small-etale-site}
+Let $S$ be a scheme.
+\begin{enumerate}
+\item Let $p$ be a point of the small \'etale site
+$S_\etale$ of $S$ given by a functor
+$u : S_\etale \to \textit{Sets}$.
+Then there exists a geometric point $\overline{s}$ of $S$ such that
+$p$ is isomorphic to the point of $S_\etale$ associated to
+$\overline{s}$ in
+Lemma \ref{lemma-stalk-gives-point}.
+\item Let $p : \Sh(pt) \to \Sh(S_\etale)$ be a point
+of the small \'etale topos of $S$. Then $p$ comes from a geometric point
+of $S$, i.e., the stalk functor $\mathcal{F} \mapsto \mathcal{F}_p$
+is isomorphic to a stalk functor as defined in
+Definition \ref{definition-stalk}.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By
+Sites, Lemma \ref{sites-lemma-point-site-topos}
+there is a one to one correspondence between points of the site and points
+of the associated topos, hence it suffices to prove (1).
+By
+Sites, Proposition \ref{sites-proposition-point-limits}
+the functor $u$ has the following properties:
+(a) $u(S) = \{*\}$, (b) $u(U \times_V W) = u(U) \times_{u(V)} u(W)$, and
+(c) if $\{U_i \to U\}$ is an \'etale covering, then
+$\coprod u(U_i) \to u(U)$ is surjective.
+In particular, if $U' \subset U$ is an open subscheme,
+then $u(U') \subset u(U)$. Moreover, by
+Sites, Lemma \ref{sites-lemma-point-site-topos}
+we can write $u(U) = p^{-1}(h_U^\#)$, in other words $u(U)$ is the
+stalk of the representable sheaf $h_U$. If
+$U = V \amalg W$, then we see that $h_U = (h_V \amalg h_W)^\#$ and we get
+$u(U) = u(V) \amalg u(W)$ since $p^{-1}$ is exact.
+
+\medskip\noindent
+Consider the restriction of $u$ to $S_{Zar}$. By
+Sites, Examples \ref{sites-example-point-topological} and
+\ref{sites-example-point-topology}
+there exists a unique point $s \in S$ such that for $S' \subset S$ open we
+have $u(S') = \{*\}$ if $s \in S'$ and $u(S') = \emptyset$ if $s \not \in S'$.
+Note that if $\varphi : U \to S$ is an object of $S_\etale$ then
+$\varphi(U) \subset S$ is open (see
+Proposition \ref{proposition-etale-morphisms})
+and $\{U \to \varphi(U)\}$ is an \'etale covering. Hence we conclude that
$u(U) \not = \emptyset \Leftrightarrow s \in \varphi(U)$.
+
+\medskip\noindent
+Pick a geometric point $\overline{s} : \overline{s} \to S$ lying over $s$, see
+Definition \ref{definition-geometric-point}
+for customary abuse of notation. Suppose that $\varphi : U \to S$ is an object
+of $S_\etale$ with $U$ affine. Note that $\varphi$ is separated, and
+that the fibre $U_s$ of $\varphi$ over $s$ is an affine scheme over
+$\Spec(\kappa(s))$ which is the spectrum of a finite product of
+finite separable extensions $k_i$ of $\kappa(s)$. Hence we may apply
+\'Etale Morphisms, Lemma \ref{etale-lemma-etale-etale-local-technical}
+to get an \'etale neighbourhood $(V, \overline{v})$ of $(S, \overline{s})$
+such that
+$$
+U \times_S V = U_1 \amalg \ldots \amalg U_n \amalg W
+$$
+with $U_i \to V$ an isomorphism and $W$ having no point lying over
+$\overline{v}$. Thus we conclude that
+$$
+u(U) \times u(V) =
+u(U \times_S V) =
+u(U_1) \amalg \ldots \amalg u(U_n) \amalg u(W)
+$$
+and of course also $u(U_i) = u(V)$. After shrinking $V$ a bit we can
+assume that $V$ has exactly one point lying over $s$, and hence $W$ has no
+point lying over $s$. By the above this then gives $u(W) = \emptyset$.
+Hence we obtain
+$$
+u(U) \times u(V) =
+u(U_1) \amalg \ldots \amalg u(U_n) =
+\coprod\nolimits_{i = 1, \ldots, n} u(V)
+$$
+Note that $u(V) \not = \emptyset$ as $s$ is in the image of $V \to S$.
+In particular, we see that in this situation $u(U)$ is a finite
+set with $n$ elements.
+
+\medskip\noindent
+Consider the limit
+$$
+\lim_{(V, \overline{v})} u(V)
+$$
+over the category of \'etale neighbourhoods $(V, \overline{v})$ of
+$\overline{s}$. It is clear that we get the same value when taking
+the limit over the subcategory of $(V, \overline{v})$ with $V$ affine.
+By the previous paragraph (applied with the roles of $V$ and $U$ switched)
+we see that in this case $u(V)$ is always a finite nonempty set.
Moreover, the limit is over a cofiltered category, see
+Lemma \ref{lemma-cofinal-etale}.
+Hence by
+Categories, Section \ref{categories-section-codirected-limits}
+the limit is nonempty. Pick an element $x$ from this limit.
This means we obtain an element $x_{V, \overline{v}} \in u(V)$ for
+every \'etale neighbourhood $(V, \overline{v})$ of $(S, \overline{s})$
+such that for every morphism of \'etale neighbourhoods
+$\varphi : (V', \overline{v}') \to (V, \overline{v})$ we have
+$u(\varphi)(x_{V', \overline{v}'}) = x_{V, \overline{v}}$.
+
+\medskip\noindent
+We will use the choice of $x$ to construct a functorial bijective map
+$$
+c : |U_{\overline{s}}| \longrightarrow u(U)
+$$
+for $U \in \Ob(S_\etale)$ which will conclude the proof. See
+Lemma \ref{lemma-stalk-gives-point}
+and its proof for a description of $|U_{\overline{s}}|$.
+First we claim that it suffices to construct the map for $U$ affine.
+We omit the proof of this claim.
+Assume $U \to S$ in $S_\etale$ with $U$ affine, and let
+$\overline{u} : \overline{s} \to U$ be an element of $|U_{\overline{s}}|$.
+Choose a $(V, \overline{v})$ such that $U \times_S V$ decomposes
+as in the third paragraph of the proof.
+Then the pair $(\overline{u}, \overline{v})$ gives a geometric point of
+$U \times_S V$ lying over $\overline{v}$ and determines one of the
+components $U_i$ of $U \times_S V$. More precisely, there exists
+a section $\sigma : V \to U \times_S V$ of the projection $\text{pr}_U$
+such that $(\overline{u}, \overline{v}) = \sigma \circ \overline{v}$. Set
+$c(\overline{u}) = u(\text{pr}_U)(u(\sigma)(x_{V, \overline{v}})) \in u(U)$.
+We have to check this is independent of the choice of $(V, \overline{v})$. By
+Lemma \ref{lemma-cofinal-etale}
+the category of \'etale neighbourhoods is cofiltered.
Hence it suffices to show
that given a morphism of \'etale neighbourhoods
+$\varphi : (V', \overline{v}') \to (V, \overline{v})$ and a choice of a
+section $\sigma' : V' \to U \times_S V'$ of the projection such that
$(\overline{u}, \overline{v}') = \sigma' \circ \overline{v}'$
+we have $u(\sigma')(x_{V', \overline{v}'}) = u(\sigma)(x_{V, \overline{v}})$.
+Consider the diagram
+$$
+\xymatrix{
+V' \ar[d]^{\sigma'} \ar[r]_\varphi & V \ar[d]^\sigma \\
+U \times_S V' \ar[r]^{1 \times \varphi} &
+U \times_S V
+}
+$$
+Now, it may not be the case that this diagram commutes. The reason is
+that the schemes $V'$ and $V$ may not be connected, and hence
+the decompositions used to construct $\sigma'$ and $\sigma$ above may
+not be unique. But we do know that
+$\sigma \circ \varphi \circ \overline{v}' =
+(1 \times \varphi) \circ \sigma' \circ \overline{v}'$
+by construction. Hence, since $U \times_S V$ is \'etale over $S$,
+there exists an open neighbourhood
$V'' \subset V'$ of $\overline{v}'$ such that the diagram does
+commute when restricted to $V''$, see
+Morphisms, Lemma \ref{morphisms-lemma-value-at-one-point}.
+This means we may extend the diagram above to
+$$
+\xymatrix{
+V'' \ar[r] \ar[d]^{\sigma'|_{V''}} &
+V' \ar[d]^{\sigma'} \ar[r]_\varphi &
+V \ar[d]^\sigma \\
+U \times_S V'' \ar[r] &
+U \times_S V' \ar[r]^{1 \times \varphi} &
+U \times_S V
+}
+$$
+such that the left square and the outer rectangle commute.
+Since $u$ is a functor this implies that
+$x_{V'', \overline{v}'}$ maps to the same element in
+$u(U \times_S V)$ no matter which route we take through the
+diagram. On the other hand, it maps to the elements
+$x_{V', \overline{v}'}$ and $x_{V, \overline{v}}$ in
+$u(V')$ and $u(V)$. This implies the desired equality
+$u(\sigma')(x_{V', \overline{v}'}) = u(\sigma)(x_{V, \overline{v}})$.
+
+\medskip\noindent
+In a similar manner one proves that the construction
+$c : |U_{\overline{s}}| \to u(U)$ is functorial in $U$;
+details omitted. And finally, by the results of the
+third paragraph it is clear that the map $c$ is bijective
+which ends the proof of the lemma.
+\end{proof}
+
+
+
+
+
+
+
+\section{Points in other topologies}
+\label{section-points-topologies}
+
+\noindent
+In this section we briefly discuss the existence of points for some
+sites other than the \'etale site of a scheme. We refer to
+Sites, Section \ref{sites-section-sites-enough-points}
+and
+Topologies, Section \ref{topologies-section-procedure} ff
+for the terminology used in this section.
+All of the geometric sites have enough points.
+
+\begin{lemma}
+\label{lemma-points-fppf}
Let $S$ be a scheme. All of the following sites have enough points:
$S_{affine, Zar}$, $S_{Zar}$, $S_{affine, \etale}$, $S_\etale$,
+$(\Sch/S)_{Zar}$, $(\textit{Aff}/S)_{Zar}$,
+$(\Sch/S)_\etale$, $(\textit{Aff}/S)_\etale$,
+$(\Sch/S)_{smooth}$, $(\textit{Aff}/S)_{smooth}$,
+$(\Sch/S)_{syntomic}$, $(\textit{Aff}/S)_{syntomic}$,
+$(\Sch/S)_{fppf}$, and $(\textit{Aff}/S)_{fppf}$.
+\end{lemma}
+
+\begin{proof}
+For each of the big sites the associated topos is equivalent to the
+topos defined by the site $(\textit{Aff}/S)_\tau$, see
+Topologies, Lemmas \ref{topologies-lemma-affine-big-site-Zariski},
+\ref{topologies-lemma-affine-big-site-etale},
+\ref{topologies-lemma-affine-big-site-smooth},
+\ref{topologies-lemma-affine-big-site-syntomic}, and
+\ref{topologies-lemma-affine-big-site-fppf}.
+The result for the sites $(\textit{Aff}/S)_\tau$ follows immediately
+from Deligne's result
+Sites, Lemma \ref{sites-lemma-criterion-points}.
+
+\medskip\noindent
+The result for $S_{Zar}$ is clear. The result for $S_{affine, Zar}$
+follows from Deligne's result. The result for $S_\etale$
+either follows from (the proof of)
+Theorem \ref{theorem-exactness-stalks}
+or from Topologies, Lemma \ref{topologies-lemma-alternative}
+and Deligne's result applied to $S_{affine, \etale}$.
+\end{proof}
+
+\noindent
+The lemma above guarantees the existence of points, but it doesn't
+tell us what these points look like. We can explicitly construct
+{\it some} points as follows.
+Suppose $\overline{s} : \Spec(k) \to S$ is a geometric
+point with $k$ algebraically closed. Consider the functor
+$$
+u : (\Sch/S)_{fppf} \longrightarrow \textit{Sets},
+\quad
+u(U) = U(k) = \Mor_S(\Spec(k), U).
+$$
+Note that $U \mapsto U(k)$ commutes with finite limits as
+$S(k) = \{\overline{s}\}$ and
+$(U_1 \times_U U_2)(k) = U_1(k) \times_{U(k)} U_2(k)$.
+Moreover, if $\{U_i \to U\}$ is an fppf covering, then
+$\coprod U_i(k) \to U(k)$ is surjective.
+By
+Sites, Proposition \ref{sites-proposition-point-limits}
+we see that $u$ defines a point $p$ of $(\Sch/S)_{fppf}$ with
+stalks
+$$
+\mathcal{F}_p = \colim_{(U, x)} \mathcal{F}(U)
+$$
+where the colimit is over pairs $U \to S$, $x \in U(k)$ as usual.
+But... this category has an initial object, namely
+$(\Spec(k), \text{id})$, hence we see that
+$$
+\mathcal{F}_p = \mathcal{F}(\Spec(k))
+$$
+which isn't terribly interesting! In fact, in general these points won't
+form a conservative family of points. A more interesting type of point
+is described in the following remark.
+
+\begin{remark}
+\label{remark-points-fppf-site}
+\begin{reference}
+This is discussed in \cite{Schroeer}.
+\end{reference}
+Let $S = \Spec(A)$ be an affine scheme. Let $(p, u)$ be a point of
+the site $(\textit{Aff}/S)_{fppf}$, see
+Sites, Sections \ref{sites-section-points} and
+\ref{sites-section-construct-points}. Let $B = \mathcal{O}_p$ be the stalk
+of the structure sheaf at the point $p$. Recall that
+$$
+B = \colim_{(U, x)} \mathcal{O}(U) =
+\colim_{(\Spec(C), x_C)} C
+$$
+where $x_C \in u(\Spec(C))$. It can happen that
+$\Spec(B)$ is an object of $(\textit{Aff}/S)_{fppf}$
+and that there is an element $x_B \in u(\Spec(B))$ mapping to
+the compatible system $x_C$. In this case the system of neighbourhoods
+has an initial object and it follows that
+$\mathcal{F}_p = \mathcal{F}(\Spec(B))$ for any sheaf $\mathcal{F}$
+on $(\textit{Aff}/S)_{fppf}$. It is straightforward
+to see that if $\mathcal{F} \mapsto \mathcal{F}(\Spec(B))$ defines a point
+of $\Sh((\textit{Aff}/S)_{fppf})$, then
+$B$ has to be a local $A$-algebra such that for every faithfully flat,
+finitely presented ring map $B \to B'$ there is a section $B' \to B$.
+Conversely, for any such $A$-algebra $B$ the functor
+$\mathcal{F} \mapsto \mathcal{F}(\Spec(B))$ is the stalk functor
+of a point. Details omitted. It is not clear what a general point of the
+site $(\textit{Aff}/S)_{fppf}$ looks like.
+\end{remark}
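\medskip\noindent
For instance, if the local $A$-algebra $B$ happens to be an
algebraically closed field $K$, then the condition of the remark is
satisfied: for any faithfully flat, finitely presented ring map
$K \to B'$ the ring $B'$ is a nonzero finitely presented $K$-algebra,
and such an algebra has a $K$-point by the Nullstellensatz, i.e.,
there is a section $B' \to K$. This recovers the points
$u(U) = U(K)$ constructed above.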
+
+
+
+
+
+
+
+
+
+
+
+\section{Supports of abelian sheaves}
+\label{section-support}
+
+\noindent
+First we talk about supports of local sections.
+
+\begin{lemma}
+\label{lemma-support-subsheaf-final}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a subsheaf of the final
+object of the \'etale topos of $S$ (see
+Sites, Example \ref{sites-example-singleton-sheaf}).
+Then there exists a unique open
+$W \subset S$ such that $\mathcal{F} = h_W$.
+\end{lemma}
+
+\begin{proof}
+The condition means that $\mathcal{F}(U)$ is a singleton or
+empty for all $\varphi : U \to S$ in $\Ob(S_\etale)$.
+In particular local sections always glue. If
+$\mathcal{F}(U) \not = \emptyset$, then
+$\mathcal{F}(\varphi(U)) \not = \emptyset$ because
+$\{\varphi : U \to \varphi(U)\}$ is a covering.
+Hence we can take
+$W = \bigcup_{\varphi : U \to S, \mathcal{F}(U) \not = \emptyset} \varphi(U)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-zero-over-image}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be an abelian sheaf on $S_\etale$.
+Let $\sigma \in \mathcal{F}(U)$ be a local section.
+There exists an open subset $W \subset U$ such that
+\begin{enumerate}
+\item $W \subset U$ is the largest Zariski open subset of $U$ such
+that $\sigma|_W = 0$,
+\item for every $\varphi : V \to U$ in $S_\etale$ we have
+$$
+\sigma|_V = 0 \Leftrightarrow \varphi(V) \subset W,
+$$
+\item for every geometric point $\overline{u}$ of $U$ we have
+$$
+(U, \overline{u}, \sigma) = 0\text{ in }\mathcal{F}_{\overline{s}}
+\Leftrightarrow
+\overline{u} \in W
+$$
+where $\overline{s} = (U \to S) \circ \overline{u}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $\mathcal{F}$ is a sheaf in the \'etale topology the restriction of
+$\mathcal{F}$ to $U_{Zar}$ is a sheaf on $U$ in the Zariski topology.
+Hence there exists a Zariski open $W$ having property (1), see
+Modules, Lemma \ref{modules-lemma-support-section-closed}. Let
+$\varphi : V \to U$ be an arrow of $S_\etale$. Note that
+$\varphi(V) \subset U$ is an open subset and that
+$\{V \to \varphi(V)\}$ is an \'etale covering. Hence if
+$\sigma|_V = 0$, then by the sheaf condition for $\mathcal{F}$ we
+see that $\sigma|_{\varphi(V)} = 0$. This proves (2).
+To prove (3) we have to show that if $(U, \overline{u}, \sigma)$
+defines the zero element of $\mathcal{F}_{\overline{s}}$, then
+$\overline{u} \in W$. This is true because the assumption means
+there exists a morphism of \'etale neighbourhoods
+$(V, \overline{v}) \to (U, \overline{u})$ such that
+$\sigma|_V = 0$. Hence by (2) we see that $V \to U$ maps into $W$, and
+hence $\overline{u} \in W$.
+\end{proof}
+
+\noindent
+Let $S$ be a scheme. Let $s \in S$.
+Let $\mathcal{F}$ be a sheaf on $S_\etale$. By
+Remark \ref{remark-map-stalks}
the isomorphism class of the stalk of the sheaf $\mathcal{F}$
at a geometric point lying over $s$ is well defined.
+
+\begin{definition}
+\label{definition-support}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be an abelian sheaf on $S_\etale$.
+\begin{enumerate}
+\item The {\it support of $\mathcal{F}$} is the set of
+points $s \in S$ such that $\mathcal{F}_{\overline{s}} \not = 0$
+for any (some) geometric point $\overline{s}$ lying over $s$.
+\item Let $\sigma \in \mathcal{F}(U)$ be a section.
+The {\it support of $\sigma$} is the closed subset $U \setminus W$, where
+$W \subset U$ is the largest open subset of $U$ on which $\sigma$
+restricts to zero (see
+Lemma \ref{lemma-zero-over-image}).
+\end{enumerate}
+\end{definition}
+
+\noindent
+In general the support of an abelian sheaf is not closed.
For example, suppose that $S = \mathbf{A}^1_{\mathbf{C}}$.
Let $i_t : \Spec(\mathbf{C}) \to S$ be the inclusion of the
point $t \in \mathbf{C}$.
We will see later that $\mathcal{F}_t = i_{t, *}(\mathbf{Z}/2\mathbf{Z})$
is an abelian sheaf whose support is exactly $\{t\}$, see
+Section \ref{section-closed-immersions}.
+Then
+$$
+\bigoplus\nolimits_{n \in \mathbf{N}} \mathcal{F}_n
+$$
+is an abelian sheaf with support $\{1, 2, 3, \ldots\} \subset S$.
+This is true because taking stalks commutes with colimits, see
+Lemma \ref{lemma-stalk-exact}.
This gives an example of an abelian sheaf whose support is not closed.
+Here are some basic facts on supports of sheaves and sections.
+
+\begin{lemma}
+\label{lemma-support-section-closed}
+Let $S$ be a scheme.
+Let $\mathcal{F}$ be an abelian sheaf on $S_\etale$.
+Let $U \in \Ob(S_\etale)$ and $\sigma \in \mathcal{F}(U)$.
+\begin{enumerate}
+\item The support of $\sigma$ is closed in $U$.
+\item The support of $\sigma + \sigma'$ is contained in the union of
+the supports of $\sigma, \sigma' \in \mathcal{F}(U)$.
+\item If $\varphi : \mathcal{F} \to \mathcal{G}$ is a map of
+abelian sheaves on $S_\etale$, then the support of $\varphi(\sigma)$
+is contained in the support of $\sigma \in \mathcal{F}(U)$.
+\item The support of $\mathcal{F}$ is the union of the images of the
+supports of all local sections of $\mathcal{F}$.
+\item If $\mathcal{F} \to \mathcal{G}$ is surjective then the support
+of $\mathcal{G}$ is a subset of the support of $\mathcal{F}$.
+\item If $\mathcal{F} \to \mathcal{G}$ is injective then the support
+of $\mathcal{F}$ is a subset of the support of $\mathcal{G}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) holds by definition.
Parts (2) and (3) hold because they hold for the restrictions of
$\mathcal{F}$ and $\mathcal{G}$ to $U_{Zar}$, see
+Modules, Lemma \ref{modules-lemma-support-section-closed}.
+Part (4) is a direct consequence of
+Lemma \ref{lemma-zero-over-image} part (3).
+Parts (5) and (6) follow from the other parts.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-sheaf-rings-closed}
+The support of a sheaf of rings on $S_\etale$ is closed.
+\end{lemma}
+
+\begin{proof}
+This is true because (according to our conventions)
+a ring is $0$ if and only if
+$1 = 0$, and hence the support of a sheaf of rings
+is the support of the unit section.
+\end{proof}
+
+
+
+
+\section{Henselian rings}
+\label{section-henselian-ring}
+
+\noindent
+We begin by stating a theorem which has already been used many times
+in the Stacks project. There are many versions of this result; here we
+just state the algebraic version.
+
+\begin{theorem}
+\label{theorem-quasi-finite-etale-locally}
Let $A \to B$ be a finite type ring map and $\mathfrak p \subset A$ a prime
+ideal. Then there exist an \'etale ring map $A \to A'$ and a prime
+$\mathfrak p' \subset A'$ lying over $\mathfrak p$ such that
+\begin{enumerate}
+\item
+$\kappa(\mathfrak p) = \kappa(\mathfrak p')$,
+\item
$B \otimes_A A' = B_1 \times \ldots \times B_r \times C$,
\item
$A' \to B_i$ is finite and there exists a unique prime
$\mathfrak q_i \subset B_i$ lying
over $\mathfrak p'$, and
+\item all irreducible components of the fibre
+$\Spec(C \otimes_{A'} \kappa(\mathfrak p'))$ of $C$ over $\mathfrak p'$
+have dimension at least 1.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+See Algebra, Lemma \ref{algebra-lemma-etale-makes-quasi-finite-finite}, or
+see \cite[Th\'eor\`eme 18.12.1]{EGA4}. For a slew of versions in terms of
+morphisms of schemes, see
+More on Morphisms, Section \ref{more-morphisms-section-etale-localization}.
+\end{proof}
+
+\noindent
+Recall Hensel's lemma.
+There are many versions of this lemma. Here are two:
+\begin{enumerate}
\item[(f)] if $f \in \mathbf{Z}_p[T]$ is monic and
$f \bmod p = g_0 h_0$ with $\gcd(g_0, h_0) = 1$, then $f$ factors
as $f = gh$ with $\bar g = g_0$ and $\bar h = h_0$,
\item[(r)] if $f \in \mathbf{Z}_p[T]$ is monic, $a_0 \in \mathbf{F}_p$,
$\bar f(a_0) = 0$ but $\bar f'(a_0) \neq 0$,
then there exists an $a \in \mathbf{Z}_p$ with
$f(a) = 0$ and $\bar a = a_0$.
+\end{enumerate}
+Both versions are true (we will see this later). The first version
+asks for lifts of factorizations into coprime parts,
+and the second version asks for lifts of simple roots
modulo the maximal ideal. It turns out that for a general local
ring these two conditions are equivalent to each other, and to many
other conditions. We use the root lifting
+property as the definition of a henselian local ring as it is
+often the easiest one to check.
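
\medskip\noindent
For instance, take $f = T^2 - 2 \in \mathbf{Z}_7[T]$ and
$a_0 = 3 \in \mathbf{F}_7$. Then $\bar f(a_0) = 9 - 2 = 0$ in
$\mathbf{F}_7$ and $\bar f'(a_0) = 6 \neq 0$, so version (r) produces an
$a \in \mathbf{Z}_7$ with $a^2 = 2$ and $a \equiv 3 \bmod 7$: the prime
$2$ has a square root in $\mathbf{Z}_7$. By contrast, for
$f = T^2 - 7$ the hypotheses fail, since $\bar f = T^2$ has only the
root $0$ and $\bar f'(0) = 0$; indeed $7$ has no square root in
$\mathbf{Z}_7$.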
+
+%10.01.09
+\begin{definition}
+\label{definition-henselian}
+(See Algebra, Definition \ref{algebra-definition-henselian}.)
+A local ring $(R, \mathfrak m, \kappa)$ is called
+{\it henselian} if for all
+$f \in R[T]$ monic, for all $a_0 \in \kappa$ such that
+$\bar f(a_0) = 0$ and $\bar f'(a_0) \neq 0$, there exists
+an $a \in R$ such that $f(a) = 0$ and $a \bmod \mathfrak m = a_0$.
+\end{definition}
+
+\noindent
A good class of examples of henselian local rings
to keep in mind is that of complete local rings.
+Recall
+(Algebra, Definition \ref{algebra-definition-complete-local-ring})
+that a complete local ring is a local ring $(R, \mathfrak m)$ such that
+$R \cong \lim_n R/\mathfrak m^n$, i.e., it is complete and separated
+for the $\mathfrak m$-adic topology.
+
+\begin{theorem}
+\label{theorem-hensel}
+Complete local rings are henselian.
+\end{theorem}
+
+\begin{proof}
+Newton's method. See
+Algebra, Lemma \ref{algebra-lemma-complete-henselian}.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-henselian}
+Let $(R, \mathfrak m, \kappa)$ be a local ring. The following are equivalent:
+\begin{enumerate}
+\item $R$ is henselian,
+\item for any $f\in R[T]$ and any factorization $\bar f = g_0 h_0$ in
+$\kappa[T]$ with $\gcd(g_0, h_0)=1$, there exists a factorization $f = gh$ in
+$R[T]$ with $\bar g = g_0$ and $\bar h = h_0$,
+\item any finite $R$-algebra $S$ is isomorphic to a finite product of
+local rings finite over $R$,
+\item any finite type $R$-algebra $A$ is isomorphic to a product
+$A \cong A' \times C$ where $A' \cong A_1 \times \ldots \times A_r$
+is a product of finite local $R$-algebras and all the irreducible
+components of $C \otimes_R \kappa$ have dimension at least 1,
+\item if $A$ is an \'etale $R$-algebra and $\mathfrak n$ is a maximal ideal of
+$A$ lying over $\mathfrak m$ such that $\kappa \cong A/\mathfrak n$, then there
+exists an isomorphism $\varphi : A \cong R \times A'$ such that
+$\varphi(\mathfrak n) = \mathfrak m \times A' \subset R \times A'$.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+This is just a subset of the results from
+Algebra, Lemma \ref{algebra-lemma-characterize-henselian}.
+Note that part (5) above corresponds to part (8) of
+Algebra, Lemma \ref{algebra-lemma-characterize-henselian}
+but is formulated slightly differently.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-over-henselian}
+If $R$ is henselian and $A$ is a finite $R$-algebra, then $A$ is a finite
+product of henselian local rings.
+\end{lemma}
+
+\begin{proof}
+See
+Algebra, Lemma \ref{algebra-lemma-finite-over-henselian}.
+\end{proof}
+
+\begin{definition}
+\label{definition-strictly-henselian}
+A local ring $R$ is called {\it strictly henselian} if it is henselian and its
+residue field is separably closed.
+\end{definition}
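
\noindent
For example, a separably closed field is strictly henselian, and by
Theorem \ref{theorem-hensel} any complete local ring with separably
closed residue field, such as $\mathbf{C}[[t]]$, is strictly henselian.
The ring $\mathbf{Z}_p$ is henselian (being complete) but not strictly
henselian, as its residue field $\mathbf{F}_p$ is not separably closed.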
+
+\begin{example}
+\label{example-powerseries}
+In the case $R = \mathbf{C}[[t]]$, the \'etale $R$-algebras are finite products
+of the trivial extension $R \to R$ and the extensions
+$R \to R[X, X^{-1}]/(X^n-t)$.
+The latter ones factor through the open $D(t) \subset \Spec(R)$, so any
+\'etale covering can be refined by the covering
+$\{\text{id} : \Spec(R) \to \Spec(R)\}$. We will see below that
this is a somewhat general fact about \'etale coverings of spectra of henselian
+rings. This will show that higher \'etale cohomology of the spectrum of a
+strictly henselian ring is zero.
+\end{example}
+
+\begin{theorem}
+\label{theorem-henselization}
+Let $(R, \mathfrak m, \kappa)$ be a local ring and
+$\kappa\subset\kappa^{sep}$ a separable algebraic closure.
+There exist canonical flat local ring maps $R \to R^h \to R^{sh}$ where
+\begin{enumerate}
+\item $R^h$, $R^{sh}$ are filtered colimits of \'etale $R$-algebras,
+\item $R^h$ is henselian, $R^{sh}$ is strictly henselian,
+\item $\mathfrak m R^h$ (resp.\ $\mathfrak m R^{sh}$) is the
+maximal ideal of $R^h$ (resp.\ $R^{sh}$), and
+\item $\kappa = R^h/\mathfrak m R^h$, and
+$\kappa^{sep} = R^{sh}/\mathfrak m R^{sh}$ as extensions of $\kappa$.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+The structure of $R^h$ and $R^{sh}$ is described in
+Algebra, Lemmas \ref{algebra-lemma-henselization} and
+\ref{algebra-lemma-strict-henselization}.
+\end{proof}
+
+\noindent
+The rings constructed in Theorem \ref{theorem-henselization}
+are called respectively the {\it henselization} and the
+{\it strict henselization} of the local ring $R$, see
+Algebra, Definition \ref{algebra-definition-henselization}.
+Many of the properties of $R$ are reflected in its (strict) henselization,
+see More on Algebra,
+Section \ref{more-algebra-section-permanence-henselization}.
+
+
+
+
+\section{Stalks of the structure sheaf}
+\label{section-stalks-structure-sheaf}
+
+\noindent
+In this section we identify the stalk of the structure sheaf at a geometric
+point with the strict henselization of the local ring at the corresponding
+``usual'' point.
+
+\begin{lemma}
+\label{lemma-describe-etale-local-ring}
+\begin{slogan}
+The stalk of the structure sheaf of a scheme
+in the etale topology is the strict henselization.
+\end{slogan}
+Let $S$ be a scheme.
+Let $\overline{s}$ be a geometric point of $S$ lying over $s \in S$.
+Let $\kappa = \kappa(s)$ and let
+$\kappa \subset \kappa^{sep} \subset \kappa(\overline{s})$ denote
+the separable algebraic closure of $\kappa$ in $\kappa(\overline{s})$.
+Then there is a canonical identification
+$$
+(\mathcal{O}_{S, s})^{sh}
+\cong
+(\mathcal{O}_S)_{\overline{s}}
+$$
+where the left hand side is the strict henselization of the local ring
+$\mathcal{O}_{S, s}$ as described in
+Theorem \ref{theorem-henselization}
+and right hand side is the stalk of the structure sheaf
+$\mathcal{O}_S$ on $S_\etale$ at
+the geometric point $\overline{s}$.
+\end{lemma}
+
+\begin{proof}
+Let $\Spec(A) \subset S$ be an affine neighbourhood of $s$.
+Let $\mathfrak p \subset A$ be the prime ideal corresponding to $s$.
+With these choices we have canonical isomorphisms
+$\mathcal{O}_{S, s} = A_{\mathfrak p}$ and $\kappa(s) = \kappa(\mathfrak p)$.
+Thus we have
+$\kappa(\mathfrak p) \subset \kappa^{sep} \subset \kappa(\overline{s})$.
+Recall that
+$$
+(\mathcal{O}_S)_{\overline{s}} =
+\colim_{(U, \overline{u})} \mathcal{O}(U)
+$$
where the colimit is over the \'etale neighbourhoods of $(S, \overline{s})$.
+A cofinal system is given by those \'etale neighbourhoods $(U, \overline{u})$
+such that $U$ is affine and $U \to S$ factors through $\Spec(A)$.
+In other words, we see that
+$$
+(\mathcal{O}_S)_{\overline{s}} = \colim_{(B, \mathfrak q, \phi)} B
+$$
+where the colimit is over \'etale $A$-algebras $B$ endowed with a prime
+$\mathfrak q$ lying over $\mathfrak p$ and a
+$\kappa(\mathfrak p)$-algebra map
+$\phi : \kappa(\mathfrak q) \to \kappa(\overline{s})$.
+Note that since $\kappa(\mathfrak q)$ is finite separable over
+$\kappa(\mathfrak p)$ the image of $\phi$ is contained in $\kappa^{sep}$.
+Via these translations the result of the lemma is equivalent
+to the result of
+Algebra, Lemma \ref{algebra-lemma-strict-henselization-different}.
+\end{proof}
+
+\begin{definition}
+\label{definition-etale-local-rings}
+Let $S$ be a scheme. Let $\overline{s}$ be a geometric point of $S$
+lying over the point $s \in S$.
+\begin{enumerate}
+\item The {\it \'etale local ring of $S$ at $\overline{s}$}
+is the stalk of the structure sheaf $\mathcal{O}_S$ on $S_\etale$
+at $\overline{s}$. We sometimes call this the
+{\it strict henselization of $\mathcal{O}_{S, s}$} relative
+to the geometric point $\overline{s}$.
+Notation used: $\mathcal{O}_{S, \overline{s}}^{sh}$.
+\item The {\it henselization of $\mathcal{O}_{S, s}$} is the
+henselization of the local ring of $S$ at $s$. See
+Algebra, Definition \ref{algebra-definition-henselization},
+and
+Theorem \ref{theorem-henselization}.
+Notation: $\mathcal{O}_{S, s}^h$.
+\item The {\it strict henselization of $S$ at $\overline{s}$}
+is the scheme $\Spec(\mathcal{O}_{S, \overline{s}}^{sh})$.
+\item The {\it henselization of $S$ at $s$} is the scheme
+$\Spec(\mathcal{O}_{S, s}^h)$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Let $f : T \to S$ be a morphism of schemes. Let $\overline{t}$
+be a geometric point of $T$ with image $\overline{s}$ in $S$.
+Let $t \in T$ and $s \in S$ be their images.
+Then we obtain a canonical commutative diagram
+$$
+\xymatrix{
+\Spec(\mathcal{O}^h_{T, t}) \ar[r] \ar[d] &
+\Spec(\mathcal{O}^{sh}_{T, \overline{t}}) \ar[r] \ar[d] &
+T \ar[d]^f \\
+\Spec(\mathcal{O}^h_{S, s}) \ar[r] &
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \ar[r] &
+S
+}
+$$
+of henselizations and strict henselizations of $T$ and $S$. You can
+prove this by choosing affine neighbourhoods of $t$ and $s$ and using
+the functoriality of (strict) henselizations given by
+Algebra, Lemmas \ref{algebra-lemma-henselian-functorial-improve} and
+\ref{algebra-lemma-strictly-henselian-functorial-improve}.
+
+\begin{lemma}
+\label{lemma-describe-henselization}
+Let $S$ be a scheme. Let $s \in S$. Then we have
+$$
+\mathcal{O}_{S, s}^h =
+\colim_{(U, u)} \mathcal{O}(U)
+$$
+where the colimit is over the filtered category of
+\'etale neighbourhoods $(U, u)$ of $(S, s)$ such that
+$\kappa(s) = \kappa(u)$.
+\end{lemma}
+
+\begin{proof}
+This lemma is a copy of
+More on Morphisms, Lemma \ref{more-morphisms-lemma-describe-henselization}.
+\end{proof}
+
+\begin{remark}
+\label{remark-henselization-Noetherian}
+Let $S$ be a scheme. Let $s \in S$.
+If $S$ is locally Noetherian then $\mathcal{O}_{S, s}^h$
+is also Noetherian and it has the same completion:
+$$
+\widehat{\mathcal{O}_{S, s}} \cong \widehat{\mathcal{O}_{S, s}^h}.
+$$
+In particular,
+$\mathcal{O}_{S, s} \subset
+\mathcal{O}_{S, s}^h \subset
+\widehat{\mathcal{O}_{S, s}}$.
+The henselization of $\mathcal{O}_{S, s}$ is in general much
+smaller than its completion and inherits many of its properties.
+For example, if $\mathcal{O}_{S, s}$ is reduced, then so is
+$\mathcal{O}_{S, s}^h$, but this is not true for the completion in general.
+Insert future references here.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-etale-site-locally-ringed}
+Let $S$ be a scheme. The small \'etale site $S_\etale$ endowed with
+its structure sheaf $\mathcal{O}_S$ is a locally ringed site, see
+Modules on Sites, Definition \ref{sites-modules-definition-locally-ringed}.
+\end{lemma}
+
+\begin{proof}
+This follows because the stalks
+$(\mathcal{O}_S)_{\overline{s}} = \mathcal{O}^{sh}_{S, \overline{s}}$ are
+local, and because $S_\etale$ has enough points, see
+Lemma \ref{lemma-describe-etale-local-ring},
+Theorem \ref{theorem-exactness-stalks},
+and
+Remarks \ref{remarks-enough-points}.
+See
+Modules on Sites, Lemmas \ref{sites-modules-lemma-locally-ringed-stalk} and
+\ref{sites-modules-lemma-ringed-stalk-not-zero}
+for the fact that this implies the small \'etale site is locally ringed.
+\end{proof}
+
+
+
+
+
+
+\section{Functoriality of small \'etale topos}
+\label{section-functoriality}
+
+\noindent
+So far we haven't yet discussed the functoriality of the \'etale site, in
+other words what happens when given a morphism of schemes. A precise formal
+discussion can be found in
+Topologies, Section \ref{topologies-section-etale}.
+In this and the next sections we discuss this material briefly specifically
+in the setting of small \'etale sites.
+
+\medskip\noindent
+Let $f : X \to Y$ be a morphism of schemes. We obtain a functor
+\begin{equation}
+\label{equation-functorial}
+u : Y_\etale \longrightarrow X_\etale, \quad
+V/Y \longmapsto X \times_Y V/X.
+\end{equation}
This functor has the following important properties:
+\begin{enumerate}
+\item $u(\text{final object}) = \text{final object}$,
+\item $u$ preserves fibre products,
+\item if $\{V_j \to V\}$ is a covering in $Y_\etale$, then
+$\{u(V_j) \to u(V)\}$ is a covering in $X_\etale$.
+\end{enumerate}
+Each of these is easy to check (omitted). As a consequence we obtain what
+is called a {\it morphism of sites}
+$$
+f_{small} : X_\etale \longrightarrow Y_\etale,
+$$
+see
+Sites, Definition \ref{sites-definition-morphism-sites}
+and
+Sites, Proposition \ref{sites-proposition-get-morphism}.
+It is not necessary to know about the abstract notion in detail
+in order to work with \'etale sheaves and \'etale cohomology.
+It usually suffices to know that there are functors
+$f_{small, *}$ (pushforward) and $f_{small}^{-1}$ (pullback)
+on \'etale sheaves, and to know some of their simple properties.
+We will discuss these properties in the next sections, but we will
+sometimes refer to the more abstract material for proofs since
+that is often the natural setting to prove them.
+
+
+\section{Direct images}
+\label{section-direct-image}
+
+\noindent
+Let us define the pushforward of a presheaf.
+
+\begin{definition}
+\label{definition-direct-image-presheaf}
+Let $f: X\to Y$ be a morphism of schemes.
Let $\mathcal{F}$ be a presheaf of sets on $X_\etale$.
+The {\it direct image}, or {\it pushforward} of $\mathcal{F}$
+(under $f$) is
+$$
+f_*\mathcal{F} : Y_\etale^{opp} \longrightarrow \textit{Sets}, \quad
+(V/Y) \longmapsto \mathcal{F}(X \times_Y V/X).
+$$
+We sometimes write $f_* = f_{small, *}$ to distinguish from other
+direct image functors (such as usual Zariski pushforward or $f_{big, *}$).
+\end{definition}
+
+\noindent
+This is a well-defined \'etale presheaf since the base change of an \'etale
+morphism is again \'etale. A more categorical way of saying this is that
+$f_*\mathcal{F}$ is the composition of functors $\mathcal{F} \circ u$
+where $u$ is as in Equation (\ref{equation-functorial}). This makes it
+clear that the construction is functorial in the presheaf
+$\mathcal{F}$ and hence we obtain a functor
+$$
+f_* = f_{small, *} :
+\textit{PSh}(X_\etale)
+\longrightarrow
+\textit{PSh}(Y_\etale)
+$$
+Note that if $\mathcal{F}$ is a presheaf of abelian groups, then
+$f_*\mathcal{F}$ is also a presheaf of abelian groups and we obtain
+$$
+f_* = f_{small, *} :
+\textit{PAb}(X_\etale)
+\longrightarrow
+\textit{PAb}(Y_\etale)
+$$
+as before (i.e., defined by exactly the same rule).
+
+\begin{remark}
+\label{remark-direct-image-sheaf}
+We claim that the direct image of a sheaf is a sheaf.
+Namely, if $\{V_j \to V\}$ is an \'etale covering in $Y_\etale$
+then $\{X \times_Y V_j \to X \times_Y V\}$ is an \'etale covering in
+$X_\etale$. Hence the sheaf condition for $\mathcal{F}$ with respect
to $\{X \times_Y V_j \to X \times_Y V\}$
is equivalent to the sheaf condition for $f_*\mathcal{F}$ with respect to
$\{V_j \to V\}$. Thus if $\mathcal{F}$ is a sheaf, so is
+$f_*\mathcal{F}$.
+\end{remark}
+
+\begin{definition}
+\label{definition-direct-image-sheaf}
+Let $f: X\to Y$ be a morphism of schemes.
Let $\mathcal{F}$ be a sheaf of sets on $X_\etale$.
+The {\it direct image}, or {\it pushforward} of $\mathcal{F}$
+(under $f$) is
+$$
+f_*\mathcal{F} : Y_\etale^{opp} \longrightarrow \textit{Sets}, \quad
+(V/Y) \longmapsto \mathcal{F}(X \times_Y V/X)
+$$
+which is a sheaf by
+Remark \ref{remark-direct-image-sheaf}.
+We sometimes write $f_* = f_{small, *}$ to distinguish from other
+direct image functors (such as usual Zariski pushforward or $f_{big, *}$).
+\end{definition}
+
+\noindent
+The exact same discussion as above applies and we obtain functors
+$$
+f_* = f_{small, *} :
+\Sh(X_\etale)
+\longrightarrow
+\Sh(Y_\etale)
+$$
+and
+$$
+f_* = f_{small, *} :
+\textit{Ab}(X_\etale)
+\longrightarrow
+\textit{Ab}(Y_\etale)
+$$
+called {\it direct image} again.
+
+\medskip\noindent
+The functor $f_*$ on abelian sheaves is left exact. (See
+Homology, Section \ref{homology-section-functors}
+for what it means for a functor between abelian categories to be left exact.)
+Namely, if
+$0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3$
+is exact on $X_\etale$, then for every
+$U/X \in \Ob(X_\etale)$
+the sequence of abelian groups
+$0 \to \mathcal{F}_1(U) \to \mathcal{F}_2(U) \to \mathcal{F}_3(U)$
+is exact. Hence for every $V/Y \in \Ob(Y_\etale)$
+the sequence of abelian groups
+$0 \to f_*\mathcal{F}_1(V) \to f_*\mathcal{F}_2(V) \to f_*\mathcal{F}_3(V)$
+is exact, because this is the previous sequence with $U = X \times_Y V$.
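
\medskip\noindent
However, $f_*$ is not exact in general. A standard example (in which we
assume $n$ is invertible on $X$) is the Kummer sequence
$$
0 \to \mu_n \to \mathbf{G}_m \xrightarrow{(\cdot)^n} \mathbf{G}_m \to 0
$$
of abelian sheaves on $X_\etale$. If $X$ is a variety over an
algebraically closed field $k$ and $f : X \to \Spec(k)$ is the structure
morphism, then applying $f_*$ amounts to taking global sections, and the
$n$-th power map $\mathcal{O}^*(X) \to \mathcal{O}^*(X)$ need not be
surjective: for $X = \mathbf{G}_{m, k}$ the coordinate function $t$ is
not an $n$-th power. The failure of right exactness of $f_*$ is exactly
what its right derived functors measure.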
+
+\begin{definition}
+\label{definition-higher-direct-images}
+Let $f: X \to Y$ be a morphism of schemes.
+The right derived functors $\{R^pf_*\}_{p \geq 1}$ of
+$f_* : \textit{Ab}(X_\etale) \to \textit{Ab}(Y_\etale)$
+are called {\it higher direct images}.
+\end{definition}
+
+\noindent
+The higher direct images and their derived category variants are
+discussed in more detail in (insert future reference here).
+
+
+
+\section{Inverse image}
+\label{section-inverse-image}
+
+\noindent
+In this section we briefly discuss pullback of sheaves on the small
+\'etale sites. The precise construction of this is in
+Topologies, Section \ref{topologies-section-etale}.
+
+\begin{definition}
+\label{definition-inverse-image}
+Let $f: X\to Y$ be a morphism of schemes. The {\it inverse image}, or
+{\it pullback}\footnote{We use the notation $f^{-1}$ for pullbacks of
+sheaves of sets or sheaves of abelian groups, and we reserve $f^*$ for
+pullbacks of sheaves of modules via a morphism of ringed sites/topoi.}
+functors are the functors
+$$
+f^{-1} = f_{small}^{-1} :
+\Sh(Y_\etale)
+\longrightarrow
+\Sh(X_\etale)
+$$
+and
+$$
+f^{-1} = f_{small}^{-1} :
+\textit{Ab}(Y_\etale)
+\longrightarrow
+\textit{Ab}(X_\etale)
+$$
+which are left adjoint to $f_* = f_{small, *}$. Thus
+$f^{-1}$ is characterized by the fact that
+$$
+\Hom_{{\Sh(X_\etale)}} (f^{-1}\mathcal{G}, \mathcal{F})
+=
+\Hom_{\Sh(Y_\etale)} (\mathcal{G}, f_*\mathcal{F})
+$$
+functorially, for any $\mathcal{F} \in \Sh(X_\etale)$ and
+$\mathcal{G} \in \Sh(Y_\etale)$. We similarly have
+$$
+\Hom_{{\textit{Ab}(X_\etale)}} (f^{-1}\mathcal{G}, \mathcal{F})
+=
+\Hom_{\textit{Ab}(Y_\etale)} (\mathcal{G}, f_*\mathcal{F})
+$$
+for $\mathcal{F} \in \textit{Ab}(X_\etale)$ and
+$\mathcal{G} \in \textit{Ab}(Y_\etale)$.
+\end{definition}
+
+\noindent
+It is not trivial that such an adjoint exists.
+On the other hand, it exists in a fairly general setting, see
+Remark \ref{remark-functoriality-general}
+below. The general machinery shows that $f^{-1}\mathcal{G}$
+is the sheaf associated to the presheaf
+\begin{equation}
+\label{equation-pullback}
+U/X
+\longmapsto
+\colim_{U \to X \times_Y V} \mathcal{G}(V/Y)
+\end{equation}
+where the colimit is over the category of pairs
+$(V/Y, \varphi : U/X \to X \times_Y V/X)$.
+To see this apply
+Sites, Proposition \ref{sites-proposition-get-morphism}
+to the functor $u$ of Equation (\ref{equation-functorial})
and use the description of $u_s = (u_p)^\#$ in
+Sites, Sections \ref{sites-section-continuous-functors} and
+\ref{sites-section-functoriality-PSh}.
+We will occasionally use this formula for the pullback
+in order to prove some of its basic properties.
+
+\begin{lemma}
+\label{lemma-stalk-pullback}
+Let $f : X \to Y$ be a morphism of schemes.
+\begin{enumerate}
+\item The functor
+$f^{-1} : \textit{Ab}(Y_\etale) \to \textit{Ab}(X_\etale)$
+is exact.
+\item The functor
+$f^{-1} : \Sh(Y_\etale) \to \Sh(X_\etale)$
+is exact, i.e., it commutes with finite limits and colimits, see
+Categories, Definition \ref{categories-definition-exact}.
+\item Let $\overline{x} \to X$ be a geometric point.
+Let $\mathcal{G}$ be a sheaf on $Y_\etale$.
+Then there is a canonical identification
+$$
(f^{-1}\mathcal{G})_{\overline{x}} = \mathcal{G}_{\overline{y}}
+$$
+where $\overline{y} = f \circ \overline{x}$.
+\item For any $V \to Y$ \'etale we have $f^{-1}h_V = h_{X \times_Y V}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The exactness of $f^{-1}$ on sheaves of sets is a consequence of
+Sites, Proposition \ref{sites-proposition-get-morphism}
+applied to our functor $u$ of Equation (\ref{equation-functorial}).
+In fact the exactness of pullback is part of the definition of
+a morphism of topoi (or sites if you like). Thus we see (2) holds.
It implies part (1) since given an abelian sheaf $\mathcal{G}$ on
$Y_\etale$
the underlying sheaf of sets of $f^{-1}\mathcal{G}$ is the same
as $f^{-1}$ of the underlying sheaf of sets of $\mathcal{G}$, see
+Sites, Section \ref{sites-section-sheaves-algebraic-structures}.
+See also
+Modules on Sites, Lemma \ref{sites-modules-lemma-flat-pullback-exact}.
+In the literature (1) and (2) are sometimes deduced from (3) via
+Theorem \ref{theorem-exactness-stalks}.
+
+\medskip\noindent
+Part (3) is a general fact about stalks of pullbacks, see
+Sites, Lemma \ref{sites-lemma-point-morphism-sites}.
+We will also prove (3) directly as follows. Note that by
+Lemma \ref{lemma-stalk-exact}
+taking stalks commutes with sheafification.
+Now recall that $f^{-1}\mathcal{G}$ is the sheaf
+associated to the presheaf
+$$
U \longmapsto \colim_{U \to X \times_Y V} \mathcal{G}(V),
+$$
+see Equation (\ref{equation-pullback}).
+Thus we have
+\begin{align*}
+(f^{-1}\mathcal{G})_{\overline{x}}
+& = \colim_{(U, \overline{u})} f^{-1}\mathcal{G}(U) \\
+& = \colim_{(U, \overline{u})}
+\colim_{a : U \to X \times_Y V} \mathcal{G}(V) \\
+& = \colim_{(V, \overline{v})} \mathcal{G}(V) \\
+& = \mathcal{G}_{\overline{y}}
+\end{align*}
Here, in the third equality, the pair $(U, \overline{u})$ together with
the map $a : U \to X \times_Y V$ corresponds to the pair
$(V, a \circ \overline{u})$.
+
+\medskip\noindent
+Part (4) can be proved in a similar manner by identifying the colimits
+which define $f^{-1}h_V$. Or you can use
+Yoneda's lemma (Categories, Lemma \ref{categories-lemma-yoneda})
+and the functorial equalities
+$$
+\Mor_{\Sh(X_\etale)}(f^{-1}h_V, \mathcal{F}) =
+\Mor_{\Sh(Y_\etale)}(h_V, f_*\mathcal{F}) =
+f_*\mathcal{F}(V) = \mathcal{F}(X \times_Y V)
+$$
+combined with the fact that representable presheaves are sheaves. See also
+Sites, Lemma \ref{sites-lemma-pullback-representable-sheaf}
+for a completely general result.
+\end{proof}
+
+\noindent
+The pair of functors $(f_*, f^{-1})$ define a morphism of small \'etale topoi
+$$
+f_{small} :
+\Sh(X_\etale)
+\longrightarrow
+\Sh(Y_\etale)
+$$
+Many generalities on cohomology of sheaves hold for topoi and
+morphisms of topoi. We will try to point out when results are
+general and when they are specific to the \'etale topos.
+
+\begin{remark}
+\label{remark-functoriality-general}
+More generally, let $\mathcal{C}_1, \mathcal{C}_2$ be sites, and
+assume they have final objects and fibre products. Let
+$u: \mathcal{C}_2 \to \mathcal{C}_1$ be a functor satisfying:
+\begin{enumerate}
+\item if $\{V_i \to V\}$ is a covering of $\mathcal{C}_2$, then
+$\{u(V_i) \to u(V)\}$ is a covering of $\mathcal{C}_1$ (we
+say that $u$ is {\it continuous}), and
+\item $u$ commutes with finite limits (i.e., $u$ is left exact, i.e.,
+$u$ preserves fibre products and final objects).
+\end{enumerate}
+Then one can define
+$f_*: \Sh(\mathcal{C}_1) \to \Sh(\mathcal{C}_2)$
+by $ f_* \mathcal{F}(V) = \mathcal{F}(u(V))$.
+Moreover, there exists an exact functor $f^{-1}$ which
+is left adjoint to $f_*$, see
+Sites, Definition \ref{sites-definition-morphism-sites} and
+Proposition \ref{sites-proposition-get-morphism}.
+Warning: It is not enough to require simply that $u$ is continuous
+and commutes with fibre products in order to get a morphism of topoi.
+\end{remark}
+
+
+
+
+\section{Functoriality of big topoi}
+\label{section-functoriality-big-topoi}
+
+\noindent
+Given a morphism of schemes $f : X \to Y$ there are a whole host of
+morphisms of topoi associated to $f$, see
+Topologies, Section \ref{topologies-section-change-topologies}
+for a list. Perhaps the most used ones are the morphisms of topoi
+$$
+f_{big} = f_{big, \tau} :
+\Sh((\Sch/X)_\tau)
+\longrightarrow
+\Sh((\Sch/Y)_\tau)
+$$
+where $\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$.
+These each correspond to a continuous functor
+$$
+(\Sch/Y)_\tau \longrightarrow (\Sch/X)_\tau, \quad
+V/Y \longmapsto X \times_Y V/X
+$$
which preserves final objects, fibre products and coverings, and hence
+defines a morphism of sites
+$$
+f_{big} : (\Sch/X)_\tau \longrightarrow (\Sch/Y)_\tau.
+$$
+See
+Topologies, Sections \ref{topologies-section-zariski},
+\ref{topologies-section-etale},
+\ref{topologies-section-smooth},
+\ref{topologies-section-syntomic}, and
+\ref{topologies-section-fppf}.
+In particular, pushforward along $f_{big}$ is given by the rule
+$$
+(f_{big, *}\mathcal{F})(V/Y) = \mathcal{F}(X \times_Y V/X)
+$$
+It turns out that these morphisms of topoi have an inverse
+image functor $f_{big}^{-1}$ which is very easy to describe.
+Namely, we have
+$$
+(f_{big}^{-1}\mathcal{G})(U/X) = \mathcal{G}(U/Y)
+$$
+where the structure morphism of $U/Y$ is the composition of the
+structure morphism $U \to X$ with $f$, see
+Topologies, Lemmas \ref{topologies-lemma-morphism-big},
+\ref{topologies-lemma-morphism-big-etale},
+\ref{topologies-lemma-morphism-big-smooth},
+\ref{topologies-lemma-morphism-big-syntomic}, and
+\ref{topologies-lemma-morphism-big-fppf}.
+
+
+
+
+
+
+
+\section{Functoriality and sheaves of modules}
+\label{section-morphisms-modules}
+
+\noindent
+In this section we are going to reformulate some of the material explained in
+Descent, Sections \ref{descent-section-quasi-coherent-sheaves},
+\ref{descent-section-quasi-coherent-cohomology}, and
+\ref{descent-section-quasi-coherent-sheaves-bis}
+in the setting of \'etale topologies. Let $f : X \to Y$ be a morphism of
+schemes. We have seen above, see
+Sections \ref{section-functoriality}, \ref{section-direct-image}, and
+\ref{section-inverse-image}
+that this induces a morphism $f_{small}$ of small \'etale sites. In
+Descent, Remark \ref{descent-remark-change-topologies-ringed}
+we have seen that $f$ also induces a natural map
+$$
+f_{small}^\sharp :
+\mathcal{O}_{Y_\etale}
+\longrightarrow
+f_{small, *}\mathcal{O}_{X_\etale}
+$$
+of sheaves of rings on
+$Y_\etale$ such that $(f_{small}, f_{small}^\sharp)$
+is a morphism of ringed sites. See
+Modules on Sites, Definition \ref{sites-modules-definition-ringed-site}
+for the definition of a morphism of ringed sites.
+Let us just recall here that $f_{small}^\sharp$ is defined by the
+compatible system of maps
+$$
+\text{pr}_V^\sharp : \mathcal{O}(V) \longrightarrow \mathcal{O}(X \times_Y V)
+$$
+for $V$ varying over the objects of $Y_\etale$.
+
+\medskip\noindent
+It is clear that this construction is compatible with compositions of
+morphisms of schemes. More precisely, if $f : X \to Y$ and $g : Y \to Z$
+are morphisms of schemes, then we have
+$$
+(g_{small}, g_{small}^\sharp)
+\circ
+(f_{small}, f_{small}^\sharp)
+=
+((g \circ f)_{small}, (g \circ f)_{small}^\sharp)
+$$
+as morphisms of ringed topoi. Moreover, by
+Modules on Sites, Definition \ref{sites-modules-definition-pushforward}
+we see that given a morphism $f : X \to Y$ of schemes
+we get well defined pullback and direct image functors
+\begin{align*}
+f_{small}^* :
+\textit{Mod}(\mathcal{O}_{Y_\etale})
+\longrightarrow
+\textit{Mod}(\mathcal{O}_{X_\etale}), \\
+f_{small, *} :
+\textit{Mod}(\mathcal{O}_{X_\etale})
+\longrightarrow
+\textit{Mod}(\mathcal{O}_{Y_\etale})
+\end{align*}
+which are adjoint in the usual way. If $g : Y \to Z$ is another morphism
+of schemes, then we have
+$(g \circ f)_{small}^* = f_{small}^* \circ g_{small}^*$
+and $(g \circ f)_{small, *} = g_{small, *} \circ f_{small, *}$
+because of what we said about compositions.
+
+\medskip\noindent
+There is quite a bit of difference between the category
of all $\mathcal{O}_X$-modules on $X$ and the category of all
+$\mathcal{O}_{X_\etale}$-modules on $X_\etale$. But the
+results of
+Descent, Sections \ref{descent-section-quasi-coherent-sheaves},
+\ref{descent-section-quasi-coherent-cohomology}, and
+\ref{descent-section-quasi-coherent-sheaves-bis}
+tell us that there is not much difference between considering quasi-coherent
+modules on $S$ and quasi-coherent modules on $S_\etale$.
+(We have already seen this in
+Theorem \ref{theorem-quasi-coherent}
+for example.)
+In particular, if $f : X \to Y$ is any morphism of schemes, then
+the pullback functors $f_{small}^*$ and $f^*$ match for
+quasi-coherent sheaves, see
+Descent,
+Proposition \ref{descent-proposition-equivalence-quasi-coherent-functorial}.
+Moreover, the same is true for pushforward provided $f$ is
+quasi-compact and quasi-separated, see
+Descent, Lemma \ref{descent-lemma-higher-direct-images-small-etale}.
+
+\medskip\noindent
+A few words about functoriality of the structure sheaf on big sites.
+Let $f : X \to Y$ be a morphism of schemes. Choose any of the
+topologies $\tau \in \{Zariski,\linebreak[0]
+\etale,\linebreak[0] smooth,\linebreak[0] syntomic,\linebreak[0]
+fppf\}$. Then the morphism
+$f_{big} : (\Sch/X)_\tau \to (\Sch/Y)_\tau$
+becomes a morphism of ringed sites by a map
+$$
+f_{big}^\sharp : \mathcal{O}_Y \longrightarrow f_{big, *}\mathcal{O}_X
+$$
+see Descent, Remark \ref{descent-remark-change-topologies-ringed}.
+In fact it is given by the same construction as in the case of small
+sites explained above.
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Comparing topologies}
+\label{section-compare-topologies}
+
+\noindent
+In this section we start studying what happens when you compare
+sheaves with respect to different topologies.
+
+\begin{lemma}
+\label{lemma-where-sections-are-equal}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a sheaf of sets on $S_\etale$.
+Let $s, t \in \mathcal{F}(S)$. Then there exists an open $W \subset S$
+characterized by the following property: A morphism $f : T \to S$
+factors through $W$ if and only if $s|_T = t|_T$ (restriction is
+pullback by $f_{small}$).
+\end{lemma}
+
+\begin{proof}
+Consider the presheaf which assigns to $U \in \Ob(S_\etale)$ the empty set
+if $s|_U \not = t|_U$ and a singleton else. It is clear that this is
+a subsheaf of the final object of $\Sh(S_\etale)$. By
+Lemma \ref{lemma-support-subsheaf-final}
+we find an open $W \subset S$ representing this presheaf.
+For a geometric point $\overline{x}$ of $S$ we see that $\overline{x} \in W$
+if and only if the stalks of $s$ and $t$ at $\overline{x}$ agree.
+By the description of stalks of pullbacks in
+Lemma \ref{lemma-stalk-pullback}
+we see that $W$ has the desired property.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-describe-pullback}
+Let $S$ be a scheme. Let $\tau \in \{Zariski, \etale\}$. Consider the morphism
+$$
+\pi_S : (\Sch/S)_\tau \longrightarrow S_\tau
+$$
+of Topologies, Lemma \ref{topologies-lemma-at-the-bottom} or
+\ref{topologies-lemma-at-the-bottom-etale}. Let $\mathcal{F}$ be a sheaf on
+$S_\tau$. Then $\pi_S^{-1}\mathcal{F}$ is given by the rule
+$$
+(\pi_S^{-1}\mathcal{F})(T) = \Gamma(T_\tau, f_{small}^{-1}\mathcal{F})
+$$
+where $f : T \to S$. Moreover, $\pi_S^{-1}\mathcal{F}$ satisfies the
+sheaf condition with respect to fpqc coverings.
+\end{lemma}
+
+\begin{proof}
+Observe that we have a morphism $i_f : \Sh(T_\tau) \to \Sh((\Sch/S)_\tau)$
+such that $\pi_S \circ i_f = f_{small}$ as morphisms
+$T_\tau \to S_\tau$, see
+Topologies, Lemmas \ref{topologies-lemma-put-in-T},
+\ref{topologies-lemma-morphism-big-small},
+\ref{topologies-lemma-put-in-T-etale}, and
+\ref{topologies-lemma-morphism-big-small-etale}.
+Since pullback is transitive we see that
+$i_f^{-1} \pi_S^{-1}\mathcal{F} = f_{small}^{-1}\mathcal{F}$ as desired.
+
+\medskip\noindent
+Let $\{g_i : T_i \to T\}_{i \in I}$ be an fpqc covering.
+The final statement means the following: Given a sheaf $\mathcal{G}$
+on $T_\tau$ and given sections
+$s_i \in \Gamma(T_i, g_{i, small}^{-1}\mathcal{G})$ whose pullbacks
+to $T_i \times_T T_j$ agree, there is a unique section $s$ of $\mathcal{G}$
+over $T$ whose pullback to $T_i$ agrees with $s_i$.
+
+\medskip\noindent
+Let $V \to T$ be an object of $T_\tau$ and let $t \in \mathcal{G}(V)$.
+For every $i$ there is a largest open $W_i \subset T_i \times_T V$
+such that the pullbacks of $s_i$ and $t$ agree as sections of the pullback
+of $\mathcal{G}$ to $W_i \subset T_i \times_T V$, see
+Lemma \ref{lemma-where-sections-are-equal}.
+Because $s_i$ and $s_j$ agree over $T_i \times_T T_j$ we find
+that $W_i$ and $W_j$ pullback to the same open over
+$T_i \times_T T_j \times_T V$. By
+Descent, Lemma \ref{descent-lemma-open-fpqc-covering}
+we find an open $W \subset V$ whose inverse image to $T_i \times_T V$
+recovers $W_i$.
+
+\medskip\noindent
+By construction of $g_{i, small}^{-1}\mathcal{G}$ there exists
+a $\tau$-covering $\{T_{ij} \to T_i\}_{j \in J_i}$, for each $j$ an
+open immersion or \'etale morphism $V_{ij} \to T$, a section
+$t_{ij} \in \mathcal{G}(V_{ij})$, and commutative diagrams
+$$
+\xymatrix{
+T_{ij} \ar[r] \ar[d] & V_{ij} \ar[d] \\
+T_i \ar[r] & T
+}
+$$
+such that $s_i|_{T_{ij}}$ is the pullback of $t_{ij}$. In other words,
+after replacing the covering $\{T_i \to T\}$ by $\{T_{ij} \to T\}$
+we may assume there are factorizations $T_i \to V_i \to T$ with
+$V_i \in \Ob(T_\tau)$ and sections $t_i \in \mathcal{G}(V_i)$
+pulling back to $s_i$ over $T_i$.
+By the result of the previous paragraph we find opens $W_i \subset V_i$
+such that $t_i|_{W_i}$ ``agrees with'' every $s_j$ over $T_j \times_T W_i$.
+Note that $T_i \to V_i$ factors through $W_i$.
+Hence $\{W_i \to T\}$ is a $\tau$-covering and the sections $t_i|_{W_i}$
+glue to the desired section $s$ of $\mathcal{G}$ over $T$. This proves
+the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sections-upstairs}
+Let $S$ be a scheme. Let $f : T \to S$ be a morphism such that
+\begin{enumerate}
+\item $f$ is flat and quasi-compact, and
+\item the geometric fibres of $f$ are connected.
+\end{enumerate}
+Let $\mathcal{F}$ be a sheaf on $S_\etale$.
+Then $\Gamma(S, \mathcal{F}) = \Gamma(T, f^{-1}_{small}\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+There is a canonical map
+$\Gamma(S, \mathcal{F}) \to \Gamma(T, f_{small}^{-1}\mathcal{F})$.
+Since $f$ is surjective (its geometric fibres are connected, hence
+nonempty) we see that this map is injective.
+
+\medskip\noindent
+To show that the map is surjective, let
+$\alpha \in \Gamma(T, f_{small}^{-1}\mathcal{F})$.
+Since $\{T \to S\}$ is an fpqc covering we can use
+Lemma \ref{lemma-describe-pullback} to see that it suffices to prove that
+$\alpha$ pulls back to the same section over $T \times_S T$ by the
+two projections. Let $\overline{s} \to S$ be a geometric point.
+It suffices to show the agreement holds over $(T \times_S T)_{\overline{s}}$
+as every geometric point of $T \times_S T$ is contained in one of
+these geometric fibres. In other words, we are trying to show that
+$\alpha|_{T_{\overline{s}}}$ pulls back to the same section over
+$$
+(T \times_S T)_{\overline{s}} =
+T_{\overline{s}} \times_{\overline{s}} T_{\overline{s}}
+$$
+by the two projections to $T_{\overline{s}}$.
+However, since $\mathcal{F}|_{T_{\overline{s}}}$ is the
+pullback of $\mathcal{F}|_{\overline{s}}$ it is a constant sheaf
+with value $\mathcal{F}_{\overline{s}}$. Since $T_{\overline{s}}$
+is connected by assumption, any section of a constant sheaf on it is constant.
+Hence $\alpha|_{T_{\overline{s}}}$ corresponds to an element
+of $\mathcal{F}_{\overline{s}}$. Thus the two pullbacks to
+$(T \times_S T)_{\overline{s}}$ both
+correspond to this same element and we conclude.
+\end{proof}
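+
+\noindent
+As a quick sanity check of Lemma \ref{lemma-sections-upstairs} we spell
+out a standard special case.
+
+\begin{example}
+Let $S$ be a scheme and let $f : \mathbf{A}^n_S \to S$ be the structure
+morphism. Then $f$ is affine, hence quasi-compact, it is flat, and its
+geometric fibres are the connected schemes $\mathbf{A}^n_\kappa$ with
+$\kappa$ an algebraically closed field. Thus the lemma gives
+$$
+\Gamma(S, \mathcal{F}) =
+\Gamma(\mathbf{A}^n_S, f_{small}^{-1}\mathcal{F})
+$$
+for every sheaf $\mathcal{F}$ on $S_\etale$. The same reasoning applies
+to $\mathbf{P}^n_S \to S$, which is flat, quasi-compact, and has
+connected geometric fibres.
+\end{example}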
+
+\noindent
+Here is a version of Lemma \ref{lemma-sections-upstairs}
+where we do not assume that the morphism is flat.
+
+\begin{lemma}
+\label{lemma-sections-upstairs-submersive}
+Let $S$ be a scheme. Let $f : X \to S$ be a morphism such that
+\begin{enumerate}
+\item $f$ is submersive, and
+\item the geometric fibres of $f$ are connected.
+\end{enumerate}
+Let $\mathcal{F}$ be a sheaf on $S_\etale$.
+Then $\Gamma(S, \mathcal{F}) = \Gamma(X, f^{-1}_{small}\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+There is a canonical map
+$\Gamma(S, \mathcal{F}) \to \Gamma(X, f_{small}^{-1}\mathcal{F})$.
+Since $f$ is surjective (its geometric fibres are connected, hence
+nonempty) we see that this map is injective.
+
+\medskip\noindent
+To show that the map is surjective, let
+$\tau \in \Gamma(X, f_{small}^{-1}\mathcal{F})$.
+It suffices to find
+an \'etale covering $\{U_i \to S\}$ and
+sections $\sigma_i \in \mathcal{F}(U_i)$
+such that $\sigma_i$ pulls back to $\tau|_{X \times_S U_i}$.
+Namely, the injectivity shown above guarantees
+that $\sigma_i$ and $\sigma_j$ restrict to the same
+section of $\mathcal{F}$ over $U_i \times_S U_j$.
+Thus we obtain a unique section $\sigma \in \mathcal{F}(S)$
+which restricts to $\sigma_i$ over $U_i$.
+Then the pullback of $\sigma$ to $X$ is $\tau$
+because this is true locally.
+
+\medskip\noindent
+Let $\overline{x}$ be a geometric point of $X$ with image $\overline{s}$
+in $S$. Consider the image of $\tau$ in the stalk
+$$
+(f_{small}^{-1}\mathcal{F})_{\overline{x}} = \mathcal{F}_{\overline{s}}
+$$
+See Lemma \ref{lemma-stalk-pullback}.
+We can find an \'etale neighbourhood $U \to S$ of $\overline{s}$
+and a section $\sigma \in \mathcal{F}(U)$ mapping to this image
+in the stalk. Thus after replacing $S$ by $U$ and $X$ by $X \times_S U$
+we may assume there exists a section $\sigma$ of $\mathcal{F}$ over $S$
+whose image in $(f_{small}^{-1}\mathcal{F})_{\overline{x}}$ is the
+same as $\tau$.
+
+\medskip\noindent
+By Lemma \ref{lemma-where-sections-are-equal}
+there exists a maximal open $W \subset X$ such that
+$f_{small}^{-1}\sigma$ and $\tau$ agree over $W$
+and the formation of $W$ commutes with further pullback.
+Observe that the pullback of $\mathcal{F}$ to the
+geometric fibre $X_{\overline{s}}$ is the pullback
+of $\mathcal{F}_{\overline{s}}$ viewed as a sheaf on
+$\overline{s}$ by $X_{\overline{s}} \to \overline{s}$.
+Hence we see that $\tau$ and $\sigma$ give sections
+of the constant sheaf with value $\mathcal{F}_{\overline{s}}$
+on $X_{\overline{s}}$ which agree in one point. Since
+$X_{\overline{s}}$ is connected by assumption, we conclude
+that $W$ contains $X_{\overline{s}}$. The same argument for different
+geometric fibres shows that $W$ contains every fibre it meets.
+Since $f$ is submersive, we conclude that $W$ is the inverse
+image of an open neighbourhood of the image of $\overline{s}$ in $S$.
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sections-base-field-extension}
+Let $K/k$ be an extension of fields with $k$ separably
+algebraically closed. Let $S$ be a scheme over $k$. Denote
+$p : S_K = S \times_{\Spec(k)} \Spec(K) \to S$ the projection.
+Let $\mathcal{F}$ be a sheaf on $S_\etale$.
+Then $\Gamma(S, \mathcal{F}) = \Gamma(S_K, p^{-1}_{small}\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+Follows from Lemma \ref{lemma-sections-upstairs}. Namely, it is clear
+that $p$ is flat and quasi-compact as the base change of
+$\Spec(K) \to \Spec(k)$. On the other hand, if $\overline{s} : \Spec(L) \to S$
+is a geometric point, then the fibre of $p$ over $\overline{s}$
+is the spectrum of $K \otimes_k L$ which is irreducible hence connected by
+Algebra, Lemma \ref{algebra-lemma-separably-closed-irreducible}.
+\end{proof}
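+
+\noindent
+The following reformulation of
+Lemma \ref{lemma-sections-base-field-extension} for constant sheaves
+may help to see what it says.
+
+\begin{example}
+In the situation of Lemma \ref{lemma-sections-base-field-extension}
+let $\mathcal{F}$ be the constant sheaf on $S_\etale$ with value a set
+$E$. Then $\Gamma(S, \mathcal{F})$ is the set of locally constant maps
+$S \to E$ and the lemma says that every locally constant map
+$S_K \to E$ is the pullback of a unique locally constant map
+$S \to E$. Taking $E = \{0, 1\}$ we recover the fact that $S$ is
+connected if and only if $S_K$ is connected.
+\end{example}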
+
+
+
+
+
+
+
+
+
+\section{Recovering morphisms}
+\label{section-morphisms}
+
+\noindent
+In this section we prove that the rule which associates to a scheme
+its locally ringed small \'etale topos is fully faithful in a suitable
+sense, see
+Theorem \ref{theorem-fully-faithful}.
+
+\begin{lemma}
+\label{lemma-morphism-locally-ringed}
+Let $f : X \to Y$ be a morphism of schemes.
+The morphism of ringed sites $(f_{small}, f_{small}^\sharp)$
+associated to $f$ is a morphism of locally ringed sites, see
+Modules on Sites,
+Definition \ref{sites-modules-definition-morphism-locally-ringed-topoi}.
+\end{lemma}
+
+\begin{proof}
+Note that the assertion makes sense since we have seen that
+$(X_\etale, \mathcal{O}_{X_\etale})$ and
+$(Y_\etale, \mathcal{O}_{Y_\etale})$
+are locally ringed sites, see
+Lemma \ref{lemma-etale-site-locally-ringed}.
+Moreover, we know that $X_\etale$ has enough points, see
+Theorem \ref{theorem-exactness-stalks}
+and
+Remarks \ref{remarks-enough-points}.
+Hence it suffices to prove that $(f_{small}, f_{small}^\sharp)$
+satisfies condition (3) of
+Modules on Sites,
+Lemma \ref{sites-modules-lemma-locally-ringed-morphism}.
+To see this take a point $p$ of $X_\etale$. By
+Lemma \ref{lemma-points-small-etale-site}
+$p$ corresponds to a geometric point $\overline{x}$ of $X$.
+By
+Lemma \ref{lemma-stalk-pullback}
+the point $q = f_{small} \circ p$ corresponds to the
+geometric point $\overline{y} = f \circ \overline{x}$ of $Y$.
+Hence the assertion we have to prove is that the induced map
+of stalks
+$$
+(\mathcal{O}_Y)_{\overline{y}} \longrightarrow (\mathcal{O}_X)_{\overline{x}}
+$$
+is a local ring map. Suppose that $a \in (\mathcal{O}_Y)_{\overline{y}}$
+is an element of the left hand side which maps to an element of the maximal
+ideal of the right hand side. Suppose that $a$ is the equivalence class
+of a triple $(V, \overline{v}, a)$ with $V \to Y$ \'etale,
+$\overline{v} : \overline{x} \to V$ over $Y$, and $a \in \mathcal{O}(V)$.
+It maps to the equivalence class of
+$(X \times_Y V, \overline{x} \times \overline{v}, \text{pr}_V^\sharp(a))$
+in the local ring $(\mathcal{O}_X)_{\overline{x}}$. But it is clear that
+being in the maximal ideal means that pulling back $\text{pr}_V^\sharp(a)$
+to an element of $\kappa(\overline{x})$ gives zero. Hence also pulling back
+$a$ to $\kappa(\overline{x})$ gives zero. Since $\kappa(\overline{y})$
+injects into $\kappa(\overline{x})$, this means that $a$ lies in the
+maximal ideal of $(\mathcal{O}_Y)_{\overline{y}}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-2-morphism}
+Let $X$, $Y$ be schemes. Let $f : X \to Y$ be a morphism of schemes.
+Let $t$ be a $2$-morphism from $(f_{small}, f_{small}^\sharp)$ to itself, see
+Modules on Sites,
+Definition \ref{sites-modules-definition-2-morphism-ringed-topoi}.
+Then $t = \text{id}$.
+\end{lemma}
+
+\begin{proof}
+This means that $t : f^{-1}_{small} \to f^{-1}_{small}$
+is a transformation of functors such that the diagram
+$$
+\xymatrix{
+f_{small}^{-1}\mathcal{O}_Y
+\ar[rd]_{f_{small}^\sharp} & &
+f_{small}^{-1}\mathcal{O}_Y \ar[ll]^t \ar[ld]^{f_{small}^\sharp} \\
+& \mathcal{O}_X
+}
+$$
+is commutative. Suppose $V \to Y$ is \'etale with $V$ affine. By
+Morphisms, Lemma \ref{morphisms-lemma-quasi-affine-finite-type-over-S}
+we may choose an immersion $i : V \to \mathbf{A}^n_Y$ over $Y$.
+In terms of sheaves this means that $i$ induces an injection
+$h_i : h_V \to \prod_{j = 1, \ldots, n} \mathcal{O}_Y$ of sheaves.
+The base change $i'$ of $i$ to $X$ is an immersion
+(Schemes, Lemma \ref{schemes-lemma-base-change-immersion}).
+Hence $i' : X \times_Y V \to \mathbf{A}^n_X$ is an immersion, which
+in turn means that
+$h_{i'} : h_{X \times_Y V} \to \prod_{j = 1, \ldots, n} \mathcal{O}_X$
+is an injection of sheaves.
+Via the identification $f_{small}^{-1}h_V = h_{X \times_Y V}$ of
+Lemma \ref{lemma-stalk-pullback}
+the map $h_{i'}$ is equal to
+$$
+\xymatrix{
+f_{small}^{-1}h_V \ar[r]^-{f^{-1}h_i} &
+\prod_{j = 1, \ldots, n} f_{small}^{-1}\mathcal{O}_Y
+\ar[r]^{\prod f^\sharp} &
+\prod_{j = 1, \ldots, n} \mathcal{O}_X
+}
+$$
+(verification omitted). This means that the map
+$t : f_{small}^{-1}h_V \to f_{small}^{-1}h_V$
+fits into the commutative diagram
+$$
+\xymatrix{
+f_{small}^{-1}h_V \ar[r]^-{f^{-1}h_i} \ar[d]^t &
+\prod_{j = 1, \ldots, n} f_{small}^{-1}\mathcal{O}_Y
+\ar[r]^-{\prod f^\sharp} \ar[d]^{\prod t} &
+\prod_{j = 1, \ldots, n} \mathcal{O}_X \ar[d]^{\text{id}}\\
+f_{small}^{-1}h_V \ar[r]^-{f^{-1}h_i} &
+\prod_{j = 1, \ldots, n} f_{small}^{-1}\mathcal{O}_Y
+\ar[r]^-{\prod f^\sharp} &
+\prod_{j = 1, \ldots, n} \mathcal{O}_X
+}
+$$
+The commutativity of the right square holds by our assumption on $t$
+explained above.
+Since the composition of the horizontal arrows is injective
+by the discussion above we conclude that the left vertical arrow
+is the identity map as well. Any sheaf of sets on
+$Y_\etale$ admits a surjection from a (huge) coproduct of sheaves
+of the form $h_V$ with $V$ affine (combine
+Topologies, Lemma \ref{topologies-lemma-alternative}
+with
+Sites, Lemma \ref{sites-lemma-sheaf-coequalizer-representable}).
+Thus we conclude that $t : f_{small}^{-1} \to f_{small}^{-1}$
+is the identity transformation as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-faithful}
+Let $X$, $Y$ be schemes.
+Any two morphisms $a, b : X \to Y$ of schemes
+for which there exists a $2$-isomorphism
+$(a_{small}, a_{small}^\sharp) \cong (b_{small}, b_{small}^\sharp)$
+in the $2$-category of ringed topoi are equal.
+\end{lemma}
+
+\begin{proof}
+Let us argue this carefully since it is a bit confusing.
+Let $t : a_{small}^{-1} \to b_{small}^{-1}$ be the $2$-isomorphism.
+Consider any open $V \subset Y$. Note that $h_V$ is a subsheaf of the
+final sheaf $*$. Thus both $a_{small}^{-1}h_V = h_{a^{-1}(V)}$
+and $b_{small}^{-1}h_V = h_{b^{-1}(V)}$ are subsheaves of the final sheaf.
+Thus the isomorphism
+$$
+t : a_{small}^{-1}h_V = h_{a^{-1}(V)} \to b_{small}^{-1}h_V = h_{b^{-1}(V)}
+$$
+has to be the identity, and $a^{-1}(V) = b^{-1}(V)$.
+It follows that $a$ and $b$ are equal on underlying topological spaces.
+Next, take a section $f \in \mathcal{O}_Y(V)$. This determines and
+is determined by a map of sheaves of sets
+$f : h_V \to \mathcal{O}_Y$.
+Pull this back and apply $t$ to get a commutative diagram
+$$
+\xymatrix{
+h_{b^{-1}(V)} \ar@{=}[r] &
+b_{small}^{-1}h_V \ar[d]^{b_{small}^{-1}(f)} & &
+a_{small}^{-1}h_V \ar[d]^{a_{small}^{-1}(f)} \ar[ll]^t &
+h_{a^{-1}(V)} \ar@{=}[l]
+\\
+& b_{small}^{-1}\mathcal{O}_Y
+\ar[rd]_{b^\sharp} & &
+a_{small}^{-1}\mathcal{O}_Y \ar[ll]^t \ar[ld]^{a^\sharp} \\
+& & \mathcal{O}_X
+}
+$$
+where the triangle is commutative by definition of a $2$-isomorphism in
+Modules on Sites, Section \ref{sites-modules-section-2-category}.
+Above we have seen that the composition of the top horizontal
+arrows comes from the identity $a^{-1}(V) = b^{-1}(V)$.
+Thus the commutativity of the diagram tells us that
+$a_{small}^\sharp(f) = b_{small}^\sharp(f)$ in
+$\mathcal{O}_X(a^{-1}(V)) = \mathcal{O}_X(b^{-1}(V))$.
+Since this holds for every open $V$ and every $f \in \mathcal{O}_Y(V)$
+we conclude that $a = b$ as morphisms of schemes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphism-ringed-etale-topoi-affines}
+Let $X$, $Y$ be affine schemes.
+Let
+$$
+(g, g^\#) :
+(\Sh(X_\etale), \mathcal{O}_X)
+\longrightarrow
+(\Sh(Y_\etale), \mathcal{O}_Y)
+$$
+be a morphism of locally ringed topoi. Then there exists a
+unique morphism of schemes $f : X \to Y$ such that
+$(g, g^\#)$ is $2$-isomorphic to $(f_{small}, f_{small}^\sharp)$,
+see
+Modules on Sites,
+Definition \ref{sites-modules-definition-2-morphism-ringed-topoi}.
+\end{lemma}
+
+\begin{proof}
+In this proof we write $\mathcal{O}_X$ for the structure sheaf
+of the small \'etale site $X_\etale$, and similarly for
+$\mathcal{O}_Y$. Say $Y = \Spec(B)$ and $X = \Spec(A)$. Since
+$B = \Gamma(Y_\etale, \mathcal{O}_Y)$,
+$A = \Gamma(X_\etale, \mathcal{O}_X)$
+we see that $g^\sharp$ induces a ring map $\varphi : B \to A$.
+Let $f = \Spec(\varphi) : X \to Y$ be the corresponding morphism
+of affine schemes. We will show this $f$ does the job.
+
+\medskip\noindent
+Let $V \to Y$ be an affine scheme \'etale over $Y$. Thus we may write
+$V = \Spec(C)$ with $C$ an \'etale $B$-algebra. We can write
+$$
+C = B[x_1, \ldots, x_n]/(P_1, \ldots, P_n)
+$$
+with $P_i$ polynomials such that $\Delta = \det(\partial P_i/ \partial x_j)$
+is invertible in $C$, see for example
+Algebra, Lemma \ref{algebra-lemma-etale-standard-smooth}.
+If $T$ is a scheme over $Y$, then a $T$-valued point of $V$ is given by
+$n$ elements of $\Gamma(T, \mathcal{O}_T)$ which satisfy the polynomial
+equations $P_1 = 0, \ldots, P_n = 0$. In other words, the sheaf $h_V$
+on $Y_\etale$ is the equalizer of the two maps
+$$
+\xymatrix{
+\prod\nolimits_{i = 1, \ldots, n} \mathcal{O}_Y
+\ar@<1ex>[r]^a \ar@<-1ex>[r]_b &
+\prod\nolimits_{j = 1, \ldots, n} \mathcal{O}_Y
+}
+$$
+where $b(h_1, \ldots, h_n) = 0$ and
+$a(h_1, \ldots, h_n) =
+(P_1(h_1, \ldots, h_n), \ldots, P_n(h_1, \ldots, h_n))$.
+Since $g^{-1}$ is exact we conclude that the top row of the
+following solid commutative diagram is an equalizer diagram as well:
+$$
+\xymatrix{
+g^{-1}h_V \ar[r] \ar@{..>}[d] &
+\prod\nolimits_{i = 1, \ldots, n} g^{-1}\mathcal{O}_Y
+\ar@<1ex>[r]^{g^{-1}a} \ar@<-1ex>[r]_{g^{-1}b} \ar[d]^{\prod g^\sharp} &
+\prod\nolimits_{j = 1, \ldots, n} g^{-1}\mathcal{O}_Y \ar[d]^{\prod g^\sharp}\\
+h_{X \times_Y V} \ar[r] &
+\prod\nolimits_{i = 1, \ldots, n} \mathcal{O}_X
+\ar@<1ex>[r]^{a'} \ar@<-1ex>[r]_{b'} &
+\prod\nolimits_{j = 1, \ldots, n} \mathcal{O}_X \\
+}
+$$
+Here $b'$ is the zero map and $a'$ is the map defined by the
+images $P'_i = \varphi(P_i) \in A[x_1, \ldots, x_n]$ via the same
+rule
+$a'(h_1, \ldots, h_n) =
+(P'_1(h_1, \ldots, h_n), \ldots, P'_n(h_1, \ldots, h_n))$
+that $a$ was defined by. The commutativity of the diagram follows from
+the fact that $\varphi = g^\sharp$ on global sections. The lower
+row is an equalizer diagram also, by exactly the same arguments as
+before since $X \times_Y V$ is the affine scheme
+$\Spec(A \otimes_B C)$ and
+$A \otimes_B C = A[x_1, \ldots, x_n]/(P'_1, \ldots, P'_n)$.
+Thus we obtain a unique dotted arrow
+$g^{-1}h_V \to h_{X \times_Y V}$ fitting into the diagram above.
+
+\medskip\noindent
+We claim that the map of sheaves $g^{-1}h_V \to h_{X \times_Y V}$
+is an isomorphism. Since the small \'etale site of $X$ has enough points
+(Theorem \ref{theorem-exactness-stalks})
+it suffices to prove this on stalks. Hence let $\overline{x}$ be a
+geometric point of $X$, and denote $p$ the associated point of the
+small \'etale topos of $X$. Set $q = g \circ p$. This is a point of
+the small \'etale topos of $Y$. By
+Lemma \ref{lemma-points-small-etale-site}
+we see that $q$ corresponds to a geometric point $\overline{y}$ of
+$Y$. Consider the map of stalks
+$$
+(g^\sharp)_p :
+(\mathcal{O}_Y)_{\overline{y}} =
+\mathcal{O}_{Y, q} =
+(g^{-1}\mathcal{O}_Y)_p
+\longrightarrow
+\mathcal{O}_{X, p} =
+(\mathcal{O}_X)_{\overline{x}}
+$$
+Since $(g, g^\sharp)$ is a morphism of {\it locally} ringed topoi
+$(g^\sharp)_p$ is a local ring homomorphism of strictly henselian
+local rings. Applying localization to the big commutative diagram above and
+Algebra, Lemma \ref{algebra-lemma-strictly-henselian-solutions}
+we conclude that $(g^{-1}h_V)_p \to (h_{X \times_Y V})_p$ is an isomorphism
+as desired.
+
+\medskip\noindent
+We claim that the isomorphisms $g^{-1}h_V \to h_{X \times_Y V}$ are
+functorial. Namely, suppose that $V_1 \to V_2$ is a morphism of affine
+schemes \'etale over $Y$. Write
+$V_i = \Spec(C_i)$ with
+$$
+C_i = B[x_{i, 1}, \ldots, x_{i, n_i}]/(P_{i, 1}, \ldots, P_{i, n_i})
+$$
+The morphism $V_1 \to V_2$ is given by a $B$-algebra map $C_2 \to C_1$
+which in turn is given by some polynomials
+$Q_j \in B[x_{1, 1}, \ldots, x_{1, n_1}]$ for $j = 1, \ldots, n_2$.
+Then it is an easy matter to show that the diagram of sheaves
+$$
+\xymatrix{
+h_{V_1} \ar[d] \ar[r] & \prod_{i = 1, \ldots, n_1} \mathcal{O}_Y
+\ar[d]^{Q_1, \ldots, Q_{n_2}}\\
+h_{V_2} \ar[r] & \prod_{i = 1, \ldots, n_2} \mathcal{O}_Y
+}
+$$
+is commutative, and pulling back to $X_\etale$ we obtain the
+solid commutative diagram
+$$
+\xymatrix{
+g^{-1}h_{V_1} \ar@{..>}[dd] \ar[rrd] \ar[r] &
+\prod_{i = 1, \ldots, n_1} g^{-1}\mathcal{O}_Y
+\ar[dd]^{g^\sharp}
+\ar[rrd]^{Q_1, \ldots, Q_{n_2}} \\
+& & g^{-1}h_{V_2} \ar@{..>}[dd] \ar[r] &
+\prod_{i = 1, \ldots, n_2} g^{-1}\mathcal{O}_Y
+\ar[dd]^{g^\sharp} \\
+h_{X \times_Y V_1} \ar[r] \ar[rrd] &
+\prod\nolimits_{i = 1, \ldots, n_1} \mathcal{O}_X
+\ar[rrd]^{Q'_1, \ldots, Q'_{n_2}} \\
+& & h_{X \times_Y V_2} \ar[r] &
+\prod\nolimits_{i = 1, \ldots, n_2} \mathcal{O}_X
+}
+$$
+where $Q'_j \in A[x_{1, 1}, \ldots, x_{1, n_1}]$ is the image of
+$Q_j$ via $\varphi$. Since the dotted arrows exist and make the
+two squares commute, and since the horizontal arrows are injective,
+we see that the whole diagram commutes. This proves functoriality
+(and also that the construction of $g^{-1}h_V \to h_{X \times_Y V}$
+is independent of the choice of the presentation, although
+strictly speaking we do not need to show this).
+
+\medskip\noindent
+At this point we are able to show that $f_{small, *} \cong g_*$.
+Namely, let $\mathcal{F}$ be a sheaf on $X_\etale$. For every
+$V \in \Ob(Y_\etale)$ affine we have
+\begin{align*}
+(g_*\mathcal{F})(V)
+& =
+\Mor_{\Sh(Y_\etale)}(h_V, g_*\mathcal{F}) \\
+& =
+\Mor_{\Sh(X_\etale)}(g^{-1}h_V, \mathcal{F}) \\
+& =
+\Mor_{\Sh(X_\etale)}(h_{X \times_Y V}, \mathcal{F}) \\
+& =
+\mathcal{F}(X \times_Y V) \\
+& =
+f_{small, *}\mathcal{F}(V)
+\end{align*}
+where in the third equality we use the isomorphism
+$g^{-1}h_V \cong h_{X \times_Y V}$ constructed above. These isomorphisms
+are clearly functorial in $\mathcal{F}$ and functorial in $V$
+as the isomorphisms $g^{-1}h_V \cong h_{X \times_Y V}$ are functorial.
+Now any sheaf on $Y_\etale$ is determined by the restriction
+to the subcategory of affine schemes
+(Topologies, Lemma \ref{topologies-lemma-alternative}),
+and hence we obtain an isomorphism of functors $f_{small, *} \cong g_*$
+as desired.
+
+\medskip\noindent
+Finally, we have to check that, via the isomorphism
+$f_{small, *} \cong g_*$ above, the maps $f_{small}^\sharp$ and
+$g^\sharp$ agree. By construction this is already the case for the
+global sections of $\mathcal{O}_Y$, i.e., for the elements of $B$.
+We only need to check the result on
+sections over an affine $V$ \'etale over $Y$ (by
+Topologies, Lemma \ref{topologies-lemma-alternative}
+again). Writing
+$V = \Spec(C)$, $C = B[x_i]/(P_j)$ as before it suffices
+to check that the coordinate functions $x_i$ are mapped to
+the same sections of $\mathcal{O}_X$ over $X \times_Y V$.
+And this is exactly what it means that the diagram
+$$
+\xymatrix{
+g^{-1}h_V \ar[r] \ar@{..>}[d] &
+\prod\nolimits_{i = 1, \ldots, n} g^{-1}\mathcal{O}_Y
+\ar[d]^{\prod g^\sharp} \\
+h_{X \times_Y V} \ar[r] &
+\prod\nolimits_{i = 1, \ldots, n} \mathcal{O}_X
+}
+$$
+commutes. Thus the lemma is proved.
+\end{proof}
+
+\noindent
+Here is a version for general schemes.
+
+\begin{theorem}
+\label{theorem-fully-faithful}
+Let $X$, $Y$ be schemes. Let
+$$
+(g, g^\#) :
+(\Sh(X_\etale), \mathcal{O}_X)
+\longrightarrow
+(\Sh(Y_\etale), \mathcal{O}_Y)
+$$
+be a morphism of locally ringed topoi. Then there exists a
+unique morphism of schemes $f : X \to Y$ such that
+$(g, g^\#)$ is isomorphic to $(f_{small}, f_{small}^\sharp)$.
+In other words, the construction
+$$
+\Sch \longrightarrow \textit{Locally ringed topoi},
+\quad
+X \longmapsto (\Sh(X_\etale), \mathcal{O}_X)
+$$
+is fully faithful (morphisms up to $2$-isomorphisms on the right hand side).
+\end{theorem}
+
+\begin{proof}
+You can prove this theorem by carefully adjusting the arguments of
+the proof of
+Lemma \ref{lemma-morphism-ringed-etale-topoi-affines}
+to the global setting. However, we want to indicate how we
+can glue the result of that lemma to get a global morphism
+due to the rigidity provided by the result of
+Lemma \ref{lemma-2-morphism}.
+Unfortunately, this is a bit messy.
+
+\medskip\noindent
+Let us prove existence when $Y$ is affine. In this case choose an
+affine open covering $X = \bigcup U_i$. For each $i$ the inclusion
+morphism $j_i : U_i \to X$ induces a morphism of locally ringed topoi
+$(j_{i, small}, j_{i, small}^\sharp) :
+(\Sh(U_{i, \etale}), \mathcal{O}_{U_i})
+\to
+(\Sh(X_\etale), \mathcal{O}_X)$
+by
+Lemma \ref{lemma-morphism-locally-ringed}.
+We can compose this with $(g, g^\sharp)$ to obtain a morphism
+of locally ringed topoi
+$$
+(g, g^\sharp) \circ (j_{i, small}, j_{i, small}^\sharp) :
+(\Sh(U_{i, \etale}), \mathcal{O}_{U_i})
+\to
+(\Sh(Y_\etale), \mathcal{O}_Y)
+$$
+see
+Modules on Sites,
+Lemma \ref{sites-modules-lemma-composition-morphisms-locally-ringed-topoi}.
+By
+Lemma \ref{lemma-morphism-ringed-etale-topoi-affines}
+there exists a unique morphism of schemes $f_i : U_i \to Y$
+and a $2$-isomorphism
+$$
+t_i :
+(f_{i, small}, f_{i, small}^\sharp)
+\longrightarrow
+(g, g^\sharp) \circ (j_{i, small}, j_{i, small}^\sharp).
+$$
+Set $U_{i, i'} = U_i \cap U_{i'}$, and denote $j_{i, i'} : U_{i, i'} \to U_i$
+the inclusion morphism. Since we have
+$j_i \circ j_{i, i'} = j_{i'} \circ j_{i', i}$
+we see that
+\begin{align*}
+(g, g^\sharp) \circ
+(j_{i, small}, j_{i, small}^\sharp) \circ
+(j_{i, i', small}, j_{i, i', small}^\sharp)
+= \\
+(g, g^\sharp) \circ
+(j_{i', small}, j_{i', small}^\sharp) \circ
+(j_{i', i, small}, j_{i', i, small}^\sharp)
+\end{align*}
+Hence by uniqueness (see
+Lemma \ref{lemma-faithful})
+we conclude that
+$f_i \circ j_{i, i'} = f_{i'} \circ j_{i', i}$, in other words the
+morphisms of schemes $f_i$ glue to a global morphism of
+schemes $f : X \to Y$ with $f_i = f \circ j_i$. Consider the diagram
+of $2$-isomorphisms (where we drop the components ${}^\sharp$ to ease the
+notation)
+$$
+\xymatrix{
+g \circ j_{i, small} \circ j_{i, i', small}
+\ar[rr]^{t_i \star \text{id}_{j_{i, i', small}}}
+\ar@{=}[d] & &
+f_{small} \circ j_{i, small} \circ j_{i, i', small} \ar@{=}[d] \\
+g \circ j_{i', small} \circ j_{i', i, small}
+\ar[rr]^{t_{i'} \star \text{id}_{j_{i', i, small}}} & &
+f_{small} \circ j_{i', small} \circ j_{i', i, small}
+}
+$$
+The notation $\star$ indicates horizontal composition, see
+Categories, Definition \ref{categories-definition-2-category}
+in general and
+Sites, Section \ref{sites-section-2-category}
+for our particular case. By the result of
+Lemma \ref{lemma-2-morphism}
+this diagram commutes. Hence for any sheaf $\mathcal{G}$
+on $Y_\etale$ the isomorphisms
+$t_i : f_{small}^{-1}\mathcal{G}|_{U_i} \to g^{-1}\mathcal{G}|_{U_i}$
+agree over $U_{i, i'}$ and we obtain a global isomorphism
+$t : f_{small}^{-1}\mathcal{G} \to g^{-1}\mathcal{G}$.
+It is clear that this isomorphism is functorial in $\mathcal{G}$
+and is compatible with the maps $f_{small}^\sharp$ and $g^\sharp$
+(because it is compatible with these maps locally).
+This proves the theorem in case $Y$ is affine.
+
+\medskip\noindent
+In the general case, let $V \subset Y$ be an affine open.
+Then $h_V$ is a subsheaf of the final sheaf $*$ on $Y_\etale$.
+As $g$ is exact we see that $g^{-1}h_V$ is a subsheaf of the final
+sheaf on $X_\etale$. Hence by
+Lemma \ref{lemma-support-subsheaf-final}
+there exists an open subscheme $W \subset X$ such that $g^{-1}h_V = h_W$. By
+Modules on Sites,
+Lemma \ref{sites-modules-lemma-localize-morphism-locally-ringed-topoi}
+there exists a commutative diagram of morphisms of locally ringed
+topoi
+$$
+\xymatrix{
+(\Sh(W_\etale), \mathcal{O}_W) \ar[r] \ar[d]_{g'} &
+(\Sh(X_\etale), \mathcal{O}_X) \ar[d]^g \\
+(\Sh(V_\etale), \mathcal{O}_V) \ar[r] &
+(\Sh(Y_\etale), \mathcal{O}_Y)
+}
+$$
+where the horizontal arrows are the localization morphisms
+(induced by the inclusion morphisms $V \to Y$ and $W \to X$)
+and where $g'$ is induced from $g$. By the result of the preceding
+paragraph we obtain a morphism of schemes $f' : W \to V$ and
+a $2$-isomorphism
+$t : (f'_{small}, (f'_{small})^\sharp) \to (g', (g')^\sharp)$.
+Exactly as before these morphisms $f'$ (for varying affine opens $V \subset Y$)
+agree on overlaps by uniqueness, so we get a morphism $f : X \to Y$.
+Moreover, the $2$-isomorphisms $t$ are compatible on overlaps by
+Lemma \ref{lemma-2-morphism}
+again and we obtain a global $2$-isomorphism
+$(f_{small}, (f_{small})^\sharp) \to (g, g^\sharp)$
+as desired. Some details omitted.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Push and pull}
+\label{section-monomorphisms}
+
+\noindent
+Let $f : X \to Y$ be a morphism of schemes.
+Here is a list of conditions we will consider in what follows:
+\begin{enumerate}
+\item[(A)] For every \'etale morphism $U \to X$ and $u \in U$ there exist
+an \'etale morphism $V \to Y$ and a disjoint union decomposition
+$X \times_Y V = W \amalg W'$ and a morphism $h : W \to U$ over $X$
+with $u$ in the image of $h$.
+\item[(B)] For every $V \to Y$ \'etale, and every \'etale covering
+$\{U_i \to X \times_Y V\}$ there exists an \'etale covering
+$\{V_j \to V\}$ such that for each $j$ we have
+$X \times_Y V_j = \coprod W_{ij}$ where $W_{ij} \to X \times_Y V$
+factors through $U_i \to X \times_Y V$ for some $i$.
+\item[(C)] For every $U \to X$ \'etale, there exists a $V \to Y$ \'etale
+and a surjective morphism $X \times_Y V \to U$ over $X$.
+\end{enumerate}
+It turns out that each of these properties has meaning in terms of
+the behaviour of the functor $f_{small, *}$. We will work this
+out in the next few sections.
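+
+\noindent
+Before doing so we give an easily verified example, not needed in what
+follows, of a morphism satisfying (A).
+
+\begin{example}
+Property (A) holds if $f : X \to Y$ is an open immersion. Namely, if
+$U \to X$ is \'etale, then the composition $V = U \to X \to Y$ is
+\'etale as well, and since $f$ is a monomorphism we have
+$X \times_Y V = U$. Thus we may take the decomposition
+$X \times_Y V = U \amalg \emptyset$ and $h = \text{id} : U \to U$.
+This is also a special case of Lemma \ref{lemma-locally-quasi-finite-A}
+below, as open immersions are separated and locally quasi-finite.
+\end{example}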
+
+
+
+\section{Property (A)}
+\label{section-A}
+
+\noindent
+Please see Section \ref{section-monomorphisms} for the definition of property
+(A).
+
+\begin{lemma}
+\label{lemma-property-A-implies}
+Let $f : X \to Y$ be a morphism of schemes.
+Assume (A).
+\begin{enumerate}
+\item
+$f_{small, *} :
+\textit{Ab}(X_\etale)
+\to
+\textit{Ab}(Y_\etale)$
+reflects injections and surjections,
+\item $f_{small}^{-1}f_{small, *}\mathcal{F} \to \mathcal{F}$
+is surjective for any abelian sheaf $\mathcal{F}$ on $X_\etale$,
+\item
+$f_{small, *} :
+\textit{Ab}(X_\etale)
+\to
+\textit{Ab}(Y_\etale)$
+is faithful.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be an abelian sheaf on $X_\etale$.
+Let $U$ be an object of $X_\etale$. By assumption we can find a
+covering $\{W_i \to U\}$ in $X_\etale$ such that each $W_i$ is
+an open and closed subscheme of $X \times_Y V_i$ for some object
+$V_i$ of $Y_\etale$. The sheaf condition shows that
+$$
+\mathcal{F}(U) \subset \prod \mathcal{F}(W_i)
+$$
+and that $\mathcal{F}(W_i)$ is a direct summand of
+$\mathcal{F}(X \times_Y V_i) = f_{small, *}\mathcal{F}(V_i)$.
+Hence it is clear that $f_{small, *}$ reflects injections.
+
+\medskip\noindent
+Next, suppose that $a : \mathcal{G} \to \mathcal{F}$ is a map of
+abelian sheaves such that $f_{small, *}a$ is surjective. Let
+$s \in \mathcal{F}(U)$ with $U$ as above. With $W_i$, $V_i$ as
+above we see that it suffices to show that $s|_{W_i}$ is \'etale locally
+the image of a section of $\mathcal{G}$ under $a$. Since $\mathcal{F}(W_i)$
+is a direct summand of $\mathcal{F}(X \times_Y V_i)$
+it suffices to show that for any $V \in \Ob(Y_\etale)$
+any element $s \in \mathcal{F}(X \times_Y V)$
+is \'etale locally on $X \times_Y V$ the image of a section of
+$\mathcal{G}$ under $a$. Since
+$\mathcal{F}(X \times_Y V) = f_{small, *}\mathcal{F}(V)$
+we see by assumption that there exists a covering $\{V_j \to V\}$ such that
+$s$ is the image of
+$s_j \in f_{small, *}\mathcal{G}(V_j) = \mathcal{G}(X \times_Y V_j)$.
+This proves $f_{small, *}$ reflects surjections.
+
+\medskip\noindent
+Parts (2), (3) follow formally from part (1), see
+Modules on Sites, Lemma \ref{sites-modules-lemma-reflect-surjections}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-quasi-finite-A}
+Let $f : X \to Y$ be a separated locally quasi-finite morphism of schemes.
+Then property (A) above holds.
+\end{lemma}
+
+\begin{proof}
+Let $U \to X$ be an \'etale morphism and $u \in U$.
+The geometric statement (A) reduces directly to the case where $U$ and $Y$
+are affine schemes. Denote $x \in X$ and $y \in Y$ the
+images of $u$. Since $X \to Y$ is locally quasi-finite, and $U \to X$ is
+locally quasi-finite (see
+Morphisms, Lemma \ref{morphisms-lemma-etale-locally-quasi-finite})
+we see that $U \to Y$ is locally quasi-finite (see
+Morphisms, Lemma \ref{morphisms-lemma-composition-quasi-finite}).
+Moreover both $X \to Y$ and $U \to Y$ are separated. Thus
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-etale-splits-off-quasi-finite-part-technical-variant}
+applies to both morphisms. This means we may pick an \'etale neighbourhood
+$(V, v) \to (Y, y)$ such that
+$$
+X \times_Y V = W \amalg R, \quad
+U \times_Y V = W' \amalg R'
+$$
+and points $w \in W$, $w' \in W'$ such that
+\begin{enumerate}
+\item $W$, $R$ are open and closed in $X \times_Y V$,
+\item $W'$, $R'$ are open and closed in $U \times_Y V$,
+\item $W \to V$ and $W' \to V$ are finite,
+\item $w$, $w'$ map to $v$,
+\item $\kappa(v) \subset \kappa(w)$ and $\kappa(v) \subset \kappa(w')$
+are purely inseparable, and
+\item no other point of $W$ or $W'$ maps to $v$.
+\end{enumerate}
+Here is a commutative diagram
+$$
+\xymatrix{
+U \ar[d] & U \times_Y V \ar[l] \ar[d] & W' \amalg R' \ar[d] \ar[l] \\
+X \ar[d] & X \times_Y V \ar[l] \ar[d] & W \amalg R \ar[l] \\
+Y & V \ar[l]
+}
+$$
After shrinking $V$ we may assume that $W'$ maps into $W$:
just remove from $V$ the image of the inverse image of $R$ in $W'$; this
image is a closed set (as $W' \to V$ is finite) not containing $v$.
+Then $W' \to W$ is finite because both $W \to V$ and $W' \to V$ are finite.
+Hence $W' \to W$ is finite \'etale, and there is exactly one point in the
+fibre over $w$ with $\kappa(w) = \kappa(w')$. Hence $W' \to W$ is an
+isomorphism in an open neighbourhood $W^\circ$ of $w$, see
+\'Etale Morphisms, Lemma \ref{etale-lemma-finite-etale-one-point}.
+Since $W \to V$ is finite the image of $W \setminus W^\circ$ is a closed
+subset $T$ of $V$ not containing $v$. Thus after replacing $V$ by
+$V \setminus T$ we may assume that $W' \to W$ is an isomorphism.
+Now the decomposition $X \times_Y V = W \amalg R$ and the morphism
+$W \to U$ are as desired and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-A}
+Let $f : X \to Y$ be an integral morphism of schemes.
+Then property (A) holds.
+\end{lemma}
+
+\begin{proof}
+Let $U \to X$ be \'etale, and let $u \in U$ be a point.
+We have to find $V \to Y$ \'etale, a disjoint union decomposition
+$X \times_Y V = W \amalg W'$ and an $X$-morphism $W \to U$
+with $u$ in the image. We may shrink $U$ and $Y$ and assume
+$U$ and $Y$ are affine. In this case also $X$ is affine, since
+an integral morphism is affine by definition. Write $Y = \Spec(A)$,
+$X = \Spec(B)$ and $U = \Spec(C)$. Then $A \to B$ is an
+integral ring map, and $B \to C$ is an \'etale ring map. By
+Algebra, Lemma \ref{algebra-lemma-etale}
+we can find a finite $A$-subalgebra $B' \subset B$ and an \'etale ring
+map $B' \to C'$ such that $C = B \otimes_{B'} C'$. Thus the question
+reduces to the \'etale morphism
+$U' = \Spec(C') \to X' = \Spec(B')$
+over the finite morphism $X' \to Y$. In this case the result follows from
+Lemma \ref{lemma-locally-quasi-finite-A}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-when-push-pull-surjective}
+Let $f : X \to Y$ be a morphism of schemes. Denote
+$f_{small} :
+\Sh(X_\etale)
+\to
+\Sh(Y_\etale)$
+the associated morphism of small \'etale topoi. Assume at least one
+of the following
+\begin{enumerate}
+\item $f$ is integral, or
+\item $f$ is separated and locally quasi-finite.
+\end{enumerate}
+Then the functor
+$f_{small, *} :
+\textit{Ab}(X_\etale)
+\to
+\textit{Ab}(Y_\etale)$
+has the following properties
+\begin{enumerate}
+\item the map
+$f_{small}^{-1}f_{small, *}\mathcal{F} \to \mathcal{F}$
+is always surjective,
+\item $f_{small, *}$ is faithful, and
+\item $f_{small, *}$ reflects injections and surjections.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Combine
+Lemmas \ref{lemma-locally-quasi-finite-A},
+\ref{lemma-integral-A}, and
+\ref{lemma-property-A-implies}.
+\end{proof}
+
+
+
+\section{Property (B)}
+\label{section-B}
+
+\noindent
+Please see Section \ref{section-monomorphisms} for the definition of property
+(B).
+
+\begin{lemma}
+\label{lemma-property-B-implies}
+Let $f : X \to Y$ be a morphism of schemes. Assume (B) holds.
+Then the functor
+$f_{small, *} :
+\Sh(X_\etale)
+\to
+\Sh(Y_\etale)$
+transforms surjections into surjections.
+\end{lemma}
+
+\begin{proof}
+This follows from
+Sites, Lemma \ref{sites-lemma-weaker}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-simplify-B}
+Let $f : X \to Y$ be a morphism of schemes. Suppose
+\begin{enumerate}
+\item $V \to Y$ is an \'etale morphism of schemes,
+\item $\{U_i \to X \times_Y V\}$ is an \'etale covering, and
+\item $v \in V$ is a point.
+\end{enumerate}
+Assume that for any such data there exists an \'etale neighbourhood
+$(V', v') \to (V, v)$, a disjoint union decomposition
+$X \times_Y V' = \coprod W'_i$, and morphisms $W'_i \to U_i$
+over $X \times_Y V$. Then property (B) holds.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-B}
+Let $f : X \to Y$ be a finite morphism of schemes.
+Then property (B) holds.
+\end{lemma}
+
+\begin{proof}
+Consider $V \to Y$ \'etale, $\{U_i \to X \times_Y V\}$ an \'etale covering, and
$v \in V$. We have to find a $V' \to V$ and a decomposition and maps as in
+Lemma \ref{lemma-simplify-B}.
+We may shrink $V$ and $Y$, hence we may assume that $V$ and $Y$ are affine.
+Since $X$ is finite over $Y$, this also implies that $X$ is affine.
+During the proof we may (finitely often) replace $(V, v)$ by an
+\'etale neighbourhood $(V', v')$ and correspondingly the covering
+$\{U_i \to X \times_Y V\}$ by $\{V' \times_V U_i \to X \times_Y V'\}$.
+
+\medskip\noindent
+Since $X \times_Y V \to V$ is finite there exist finitely
+many (pairwise distinct) points $x_1, \ldots, x_n \in X \times_Y V$
+mapping to $v$. We may apply
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-etale-splits-off-quasi-finite-part-technical-variant}
+to $X \times_Y V \to V$ and the points $x_1, \ldots, x_n$ lying over
+$v$ and find an \'etale neighbourhood $(V', v') \to (V, v)$
+such that
+$$
+X \times_Y V' = R \amalg \coprod T_a
+$$
+with $T_a \to V'$ finite with exactly one point $p_a$ lying over $v'$
+and moreover $\kappa(v') \subset \kappa(p_a)$ purely inseparable, and
+such that $R \to V'$ has empty fibre over $v'$.
+Because $X \to Y$ is finite, also $R \to V'$ is finite. Hence after
+shrinking $V'$ we may assume that $R = \emptyset$. Thus we may
+assume that $X \times_Y V = X_1 \amalg \ldots \amalg X_n$ with
+exactly one point $x_l \in X_l$ lying over $v$ with moreover
+$\kappa(v) \subset \kappa(x_l)$ purely inseparable. Note that this
+property is preserved under refinement of the \'etale neighbourhood
+$(V, v)$.
+
+\medskip\noindent
+For each $l$ choose an $i_l$ and a point $u_l \in U_{i_l}$ mapping to $x_l$.
+Now we apply property (A) for the finite morphism
+$X \times_Y V \to V$ and the \'etale
+morphisms $U_{i_l} \to X \times_Y V$ and the points $u_l$.
This is permissible by
Lemma \ref{lemma-integral-A}.
This produces an \'etale neighbourhood $(V', v') \to (V, v)$
+and decompositions
+$$
+X \times_Y V' = W_l \amalg R_l
+$$
and $X$-morphisms $a_l : W_l \to U_{i_l}$ whose image contains $u_l$.
+Here is a picture:
+$$
+\xymatrix{
+& & & U_{i_l} \ar[d] & \\
+W_l \ar[rrru] \ar[r] & W_l \amalg R_l \ar@{=}[r] &
+X \times_Y V' \ar[r] \ar[d] &
+X \times_Y V \ar[r] \ar[d] & X \ar[d] \\
+& & V' \ar[r] & V \ar[r] & Y
+}
+$$
+After replacing $(V, v)$ by $(V', v')$ we conclude that each
+$x_l$ is contained in an open and closed neighbourhood $W_l$ such that
+the inclusion morphism $W_l \to X \times_Y V$ factors through
+$U_i \to X \times_Y V$ for some $i$. Replacing $W_l$ by $W_l \cap X_l$
+we see that these open and closed sets are disjoint and moreover
+that $\{x_1, \ldots, x_n\} \subset W_1 \cup \ldots \cup W_n$.
+Since $X \times_Y V \to V$ is finite we may shrink $V$ and assume that
+$X \times_Y V = W_1 \amalg \ldots \amalg W_n$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-B}
+Let $f : X \to Y$ be an integral morphism of schemes.
+Then property (B) holds.
+\end{lemma}
+
+\begin{proof}
+Consider $V \to Y$ \'etale, $\{U_i \to X \times_Y V\}$ an \'etale covering, and
$v \in V$. We have to find a $V' \to V$ and a decomposition and maps as in
+Lemma \ref{lemma-simplify-B}.
+We may shrink $V$ and $Y$, hence we may assume that $V$ and $Y$ are affine.
+Since $X$ is integral over $Y$, this also implies that $X$ and
+$X \times_Y V$ are affine. We may refine the covering
+$\{U_i \to X \times_Y V\}$, and hence we may assume that
+$\{U_i \to X \times_Y V\}_{i = 1, \ldots, n}$ is a standard \'etale covering.
+Write $Y = \Spec(A)$, $X = \Spec(B)$,
+$V = \Spec(C)$, and $U_i = \Spec(B_i)$.
+Then $A \to B$ is an integral ring map, and $B \otimes_A C \to B_i$ are
+\'etale ring maps. By
+Algebra, Lemma \ref{algebra-lemma-etale}
+we can find a finite $A$-subalgebra $B' \subset B$ and an \'etale ring
+map $B' \otimes_A C \to B'_i$ for $i = 1, \ldots, n$
+such that $B_i = B \otimes_{B'} B'_i$. Thus the question
+reduces to the \'etale covering
+$\{\Spec(B'_i) \to X' \times_Y V\}_{i = 1, \ldots, n}$
+with $X' = \Spec(B')$ finite over $Y$.
+In this case the result follows from
+Lemma \ref{lemma-finite-B}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-what-integral}
+Let $f : X \to Y$ be a morphism of schemes.
+Assume $f$ is integral (for example finite).
+Then
+\begin{enumerate}
+\item $f_{small, *}$ transforms surjections into surjections (on sheaves
+of sets and on abelian sheaves),
+\item $f_{small}^{-1}f_{small, *}\mathcal{F} \to \mathcal{F}$
+is surjective for any abelian sheaf $\mathcal{F}$ on $X_\etale$,
+\item
+$f_{small, *} :
+\textit{Ab}(X_\etale)
+\to
+\textit{Ab}(Y_\etale)$
+is faithful and reflects injections and surjections, and
+\item
+$f_{small, *} :
+\textit{Ab}(X_\etale)
+\to
+\textit{Ab}(Y_\etale)$
+is exact.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Parts (2), (3) we have seen in
+Lemma \ref{lemma-when-push-pull-surjective}.
+Part (1) follows from
+Lemmas \ref{lemma-integral-B} and \ref{lemma-property-B-implies}.
+Part (4) is a consequence of part (1), see
+Modules on Sites, Lemma \ref{sites-modules-lemma-exactness}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Property (C)}
+\label{section-C}
+
+\noindent
+Please see Section \ref{section-monomorphisms} for the definition of property
+(C).
+
+\begin{lemma}
+\label{lemma-property-C-implies}
+Let $f : X \to Y$ be a morphism of schemes. Assume (C) holds. Then the functor
+$f_{small, *} :
+\Sh(X_\etale)
+\to
+\Sh(Y_\etale)$
+reflects injections and surjections.
+\end{lemma}
+
+\begin{proof}
+Follows from
+Sites, Lemma \ref{sites-lemma-cover-from-below}.
+We omit the verification that property (C) implies that the functor
+$Y_\etale \to X_\etale$, $V \mapsto X \times_Y V$
+satisfies the assumption of
+Sites, Lemma \ref{sites-lemma-cover-from-below}.
+\end{proof}
+
+\begin{remark}
+\label{remark-property-C-strong}
+Property (C) holds if $f : X \to Y$ is an open immersion. Namely, if
+$U \in \Ob(X_\etale)$, then we can view $U$ also as an object
+of $Y_\etale$ and $U \times_Y X = U$. Hence property (C)
+does not imply that $f_{small, *}$ is exact as this is not
+the case for open immersions (in general).
+\end{remark}
+
+\begin{lemma}
+\label{lemma-property-C-closed-implies}
+Let $f : X \to Y$ be a morphism of schemes. Assume that
+for any $V \to Y$ \'etale we have that
+\begin{enumerate}
+\item $X \times_Y V \to V$ has property (C), and
+\item $X \times_Y V \to V$ is closed.
+\end{enumerate}
+Then the functor
+$Y_\etale \to X_\etale$, $V \mapsto X \times_Y V$
+is almost cocontinuous, see
+Sites, Definition \ref{sites-definition-almost-cocontinuous}.
+\end{lemma}
+
+\begin{proof}
+Let $V \to Y$ be an object of $Y_\etale$ and let
+$\{U_i \to X \times_Y V\}_{i \in I}$ be a covering of $X_\etale$.
+By assumption (1) for each $i$ we can find an \'etale morphism
+$h_i : V_i \to V$ and a surjective morphism $X \times_Y V_i \to U_i$
+over $X \times_Y V$. Note that $\bigcup h_i(V_i) \subset V$ is an
+open set containing the closed set $Z = \Im(X \times_Y V \to V)$.
+Let $h_0 : V_0 = V \setminus Z \to V$ be the open immersion.
+It is clear that $\{V_i \to V\}_{i \in I \cup \{0\}}$ is an
+\'etale covering such that for each $i \in I \cup \{0\}$ we have
+either $V_i \times_Y X = \emptyset$ (namely if $i = 0$), or
+$V_i \times_Y X \to V \times_Y X$ factors through $U_i \to X \times_Y V$
+(if $i \not = 0$). Hence the functor $Y_\etale \to X_\etale$
+is almost cocontinuous.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-homeo-onto-image-C}
+Let $f : X \to Y$ be an integral morphism of schemes which defines
+a homeomorphism of $X$ with a closed subset of $Y$.
+Then property (C) holds.
+\end{lemma}
+
+\begin{proof}
+Let $g : U \to X$ be an \'etale morphism. We need to find an object
+$V \to Y$ of $Y_\etale$ and a surjective morphism $X \times_Y V \to U$
+over $X$. Suppose that for every $u \in U$ we can find an object
+$V_u \to Y$ of $Y_\etale$ and a morphism $h_u : X \times_Y V_u \to U$
+over $X$ with $u \in \Im(h_u)$. Then we can take $V = \coprod V_u$
+and $h = \coprod h_u$ and we win. Hence given a point
+$u \in U$ we find a pair $(V_u, h_u)$ as above. To do this we may
+shrink $U$ and assume that $U$ is affine. In this case
+$g : U \to X$ is locally quasi-finite. Let
+$g^{-1}(g(\{u\})) = \{u, u_2, \ldots, u_n\}$. Since there are no
+specializations $u_i \leadsto u$ we may replace $U$ by an affine neighbourhood
+so that $g^{-1}(g(\{u\})) = \{u\}$.
+
+\medskip\noindent
+The image $g(U) \subset X$ is open,
+hence $f(g(U))$ is locally closed in $Y$. Choose an open $V \subset Y$ such
+that $f(g(U)) = f(X) \cap V$. It follows that $g$ factors through
+$X \times_Y V$ and that the resulting $\{U \to X \times_Y V\}$ is an \'etale
covering. Since $f$ has property (B), see
+Lemma \ref{lemma-integral-B},
+we see that there exists an \'etale covering $\{V_j \to V\}$ such that
+$X \times_Y V_j \to X \times_Y V$ factor through $U$.
+This implies that $V' = \coprod V_j$ is \'etale over $Y$ and that there is a
+morphism $h : X \times_Y V' \to U$ whose image
surjects onto $g(U)$. Since $u$ is the only point of $U$ in its fibre
over $g(u)$ it must be in the image of $h$ and we win.
+\end{proof}
+
+\noindent
+We urge the reader to think of the following lemma as a
+way station\footnote{A way station is a place where people stop to eat
+and rest when they are on a long journey.} on the journey towards the
+ultimate truth regarding $f_{small, *}$ for integral universally injective
+morphisms.
+
+\begin{lemma}
+\label{lemma-integral-universally-injective}
+Let $f : X \to Y$ be a morphism of schemes. Assume that $f$ is
+universally injective and integral (for example a closed immersion).
+Then
+\begin{enumerate}
+\item
+$f_{small, *} :
+\Sh(X_\etale)
+\to
+\Sh(Y_\etale)$
+reflects injections and surjections,
+\item
+$f_{small, *} :
+\Sh(X_\etale)
+\to
+\Sh(Y_\etale)$
+commutes with pushouts and coequalizers (and more generally
+finite connected colimits),
+\item $f_{small, *}$ transforms surjections into surjections (on sheaves
+of sets and on abelian sheaves),
+\item the map
+$f_{small}^{-1}f_{small, *}\mathcal{F} \to \mathcal{F}$
+is surjective for any sheaf (of sets or of abelian groups)
+$\mathcal{F}$ on $X_\etale$,
+\item the functor $f_{small, *}$ is faithful (on sheaves of sets and
+on abelian sheaves),
+\item
+$f_{small, *} :
+\textit{Ab}(X_\etale)
+\to
+\textit{Ab}(Y_\etale)$
+is exact, and
+\item the functor
+$Y_\etale \to X_\etale$, $V \mapsto X \times_Y V$ is
+almost cocontinuous.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By
+Lemmas \ref{lemma-integral-A},
+\ref{lemma-integral-B} and
+\ref{lemma-integral-homeo-onto-image-C}
+we know that the morphism $f$ has properties (A), (B), and (C).
+Moreover, by
+Lemma \ref{lemma-property-C-closed-implies}
+we know that the functor $Y_\etale \to X_\etale$ is
+almost cocontinuous. Now we have
+\begin{enumerate}
+\item property (C) implies (1) by
+Lemma \ref{lemma-property-C-implies},
\item almost cocontinuous implies (2) by
+Sites, Lemma \ref{sites-lemma-morphism-of-sites-almost-cocontinuous},
+\item property (B) implies (3) by
+Lemma \ref{lemma-property-B-implies}.
+\end{enumerate}
+Properties (4), (5), and (6) follow formally from the first three, see
+Sites, Lemma \ref{sites-lemma-exactness-properties}
+and
+Modules on Sites, Lemma \ref{sites-modules-lemma-exactness}.
+Property (7) we saw above.
+\end{proof}
+
+
+
+
+\section{Topological invariance of the small \'etale site}
+\label{section-topological-invariance}
+
+\noindent
+In the following theorem we show that the small \'etale site is a topological
+invariant in the following sense: If $f : X \to Y$ is a morphism of schemes
+which is a universal homeomorphism, then $X_\etale \cong Y_\etale$
+as sites. This improves the result of
+\'Etale Morphisms, Theorem \ref{etale-theorem-remarkable-equivalence}.
+We first prove the result for morphisms and then we state the result
+for categories.
+
+\begin{theorem}
+\label{theorem-etale-topological}
+Let $X$ and $Y$ be two schemes over a base scheme $S$. Let
+$S' \to S$ be a universal homeomorphism.
+Denote $X'$ (resp.\ $Y'$) the base change to $S'$.
+If $X$ is \'etale over $S$, then the map
+$$
+\Mor_S(Y, X) \longrightarrow \Mor_{S'}(Y', X')
+$$
+is bijective.
+\end{theorem}
+
+\begin{proof}
+After base changing via $Y \to S$, we may assume that $Y = S$.
+Thus we may and do assume both $X$ and $Y$ are \'etale over $S$.
+In other words, the theorem states that the base change functor
+is a fully faithful functor from the category of schemes \'etale
+over $S$ to the category of schemes \'etale over $S'$.
+
+\medskip\noindent
+Consider the forgetful functor
+\begin{equation}
+\label{equation-descent-etale-forget}
+\begin{matrix}
+\text{descent data }(X', \varphi')\text{ relative to }S'/S \\
+\text{ with }X'\text{ \'etale over }S'
+\end{matrix}
+\longrightarrow
+\text{schemes }X'\text{ \'etale over }S'
+\end{equation}
+We claim this functor is an equivalence. On the other hand, the
+functor
+\begin{equation}
+\label{equation-descent-etale}
+\text{schemes }X\text{ \'etale over }S \longrightarrow
+\begin{matrix}
+\text{descent data }(X', \varphi')\text{ relative to }S'/S \\
+\text{ with }X'\text{ \'etale over }S'
+\end{matrix}
+\end{equation}
+is fully faithful by \'Etale Morphisms, Lemma
+\ref{etale-lemma-fully-faithful-cases}.
+Thus the claim implies the theorem.
+
+\medskip\noindent
+Proof of the claim.
+Recall that a universal homeomorphism is the same thing as an
+integral, universally injective, surjective morphism, see
+Morphisms, Lemma \ref{morphisms-lemma-universal-homeomorphism}.
+In particular, the diagonal $\Delta : S' \to S' \times_S S'$ is a thickening
+by Morphisms, Lemma \ref{morphisms-lemma-universally-injective}.
+Thus by \'Etale Morphisms, Theorem
+\ref{etale-theorem-etale-topological}
+we see that given $X' \to S'$ \'etale there is a unique isomorphism
+$$
+\varphi' : X' \times_S S' \to S' \times_S X'
+$$
+of schemes \'etale over $S' \times_S S'$ which pulls back under
+$\Delta$ to $\text{id} : X' \to X'$ over $S'$.
+Since $S' \to S' \times_S S' \times_S S'$
+is a thickening as well (it is bijective and a closed immersion)
+we conclude that $(X', \varphi')$ is a descent datum relative to $S'/S$.
+The canonical nature of the construction of $\varphi'$ shows
+that it is compatible with morphisms between schemes \'etale over $S'$.
+In other words, we obtain a quasi-inverse
+$X' \mapsto (X', \varphi')$ of the functor
+(\ref{equation-descent-etale-forget}). This proves the claim and
+finishes the proof of the theorem.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-topological-invariance}
+\begin{reference}
+\cite[IV Theorem 18.1.2]{EGA}
+\end{reference}
+Let $f : X \to Y$ be a morphism of schemes.
+Assume $f$ is integral, universally injective and surjective
+(i.e., $f$ is a universal homeomorphism, see
+Morphisms, Lemma \ref{morphisms-lemma-universal-homeomorphism}).
+The functor
+$$
+V \longmapsto V_X = X \times_Y V
+$$
+defines an equivalence of categories
+$$
+\{
+\text{schemes }V\text{ \'etale over }Y
+\}
+\leftrightarrow
+\{
+\text{schemes }U\text{ \'etale over }X
+\}
+$$
+\end{theorem}
+
+\noindent
+We give two proofs. The first uses effectivity of descent
+for quasi-compact, separated, \'etale morphisms relative
+to surjective integral morphisms. The second uses the material
+on properties (A), (B), and (C) discussed earlier in the chapter.
+
+\begin{proof}[First proof]
+By Theorem \ref{theorem-etale-topological}
+we see that the functor is fully faithful.
+It remains to show that the functor is essentially surjective.
+Let $U \to X$ be an \'etale morphism of schemes.
+
+\medskip\noindent
+Suppose that the result holds if $U$ and $Y$ are affine.
+In that case, we choose an affine open covering
+$U = \bigcup U_i$ such that each $U_i$ maps
+into an affine open of $Y$. By assumption (affine case) we can
+find \'etale morphisms $V_i \to Y$ such that $X \times_Y V_i \cong U_i$
+as schemes over $X$. Let $V_{i, i'} \subset V_i$
+be the open subscheme whose underlying topological space
+corresponds to $U_i \cap U_{i'}$. Because we have isomorphisms
+$$
+X \times_Y V_{i, i'} \cong U_i \cap U_{i'} \cong X \times_Y V_{i', i}
+$$
+as schemes over $X$ we see by fully faithfulness that we
+obtain isomorphisms
+$\theta_{i, i'} : V_{i, i'} \to V_{i', i}$ of schemes over $Y$.
+We omit the verification that these isomorphisms satisfy the
+cocycle condition of Schemes, Section \ref{schemes-section-glueing-schemes}.
+Applying Schemes, Lemma \ref{schemes-lemma-glue-schemes}
+we obtain a scheme $V \to Y$ by
+glueing the schemes $V_i$ along the identifications $\theta_{i, i'}$.
+It is clear that $V \to Y$ is \'etale and $X \times_Y V \cong U$
+by construction.
+
+\medskip\noindent
+Thus it suffices to show the lemma in case $U$ and $Y$ are affine.
+Recall that in the proof of Theorem \ref{theorem-etale-topological}
+we showed that $U$ comes with a unique descent datum
+$(U, \varphi)$ relative to $X/Y$. By
+\'Etale Morphisms, Proposition \ref{etale-proposition-effective}
+(which applies because $U \to X$ is quasi-compact and separated
+as well as \'etale by our reduction to the affine case)
+there exists an \'etale morphism $V \to Y$ such that
+$X \times_Y V \cong U$ and the proof is complete.
+\end{proof}
+
+\begin{proof}[Second proof]
+By Theorem \ref{theorem-etale-topological}
+we see that the functor is fully faithful.
+It remains to show that the functor is essentially surjective.
+Let $U \to X$ be an \'etale morphism of schemes.
+
+\medskip\noindent
+Suppose that the result holds if $U$ and $Y$ are affine.
+In that case, we choose an affine open covering
+$U = \bigcup U_i$ such that each $U_i$ maps
+into an affine open of $Y$. By assumption (affine case) we can
+find \'etale morphisms $V_i \to Y$ such that $X \times_Y V_i \cong U_i$
+as schemes over $X$. Let $V_{i, i'} \subset V_i$
+be the open subscheme whose underlying topological space
+corresponds to $U_i \cap U_{i'}$. Because we have isomorphisms
+$$
+X \times_Y V_{i, i'} \cong U_i \cap U_{i'} \cong X \times_Y V_{i', i}
+$$
+as schemes over $X$ we see by fully faithfulness that we
+obtain isomorphisms
+$\theta_{i, i'} : V_{i, i'} \to V_{i', i}$ of schemes over $Y$.
+We omit the verification that these isomorphisms satisfy the
+cocycle condition of Schemes, Section \ref{schemes-section-glueing-schemes}.
+Applying Schemes, Lemma \ref{schemes-lemma-glue-schemes}
+we obtain a scheme $V \to Y$ by
+glueing the schemes $V_i$ along the identifications $\theta_{i, i'}$.
+It is clear that $V \to Y$ is \'etale and $X \times_Y V \cong U$
+by construction.
+
+\medskip\noindent
+Thus it suffices to prove that the functor
+\begin{equation}
+\label{equation-affine-etale}
+\{
+\text{affine schemes }V\text{ \'etale over }Y
+\}
+\leftrightarrow
+\{
+\text{affine schemes }U\text{ \'etale over }X
+\}
+\end{equation}
+is essentially surjective when $X$ and $Y$ are affine.
+
+\medskip\noindent
+Let $U \to X$ be an affine scheme \'etale over $X$.
+We have to find $V \to Y$ \'etale (and affine) such that $X \times_Y V$
+is isomorphic to $U$ over $X$. Note that an \'etale morphism of affines
+has universally bounded fibres, see
+Morphisms,
+Lemmas \ref{morphisms-lemma-etale-locally-quasi-finite} and
+\ref{morphisms-lemma-locally-quasi-finite-qc-source-universally-bounded}.
+Hence we can do induction on the integer $n$ bounding the degree of the fibres
+of $U \to X$. See
+Morphisms, Lemma \ref{morphisms-lemma-etale-universally-bounded}
+for a description of this integer in the case of an \'etale morphism.
+If $n = 1$, then $U \to X$ is an open immersion (see
+\'Etale Morphisms, Theorem \ref{etale-theorem-etale-radicial-open}),
+and the result is clear. Assume $n > 1$.
+
+\medskip\noindent
+By
+Lemma \ref{lemma-integral-homeo-onto-image-C}
+there exists an \'etale morphism of schemes $W \to Y$ and a
+surjective morphism $W_X \to U$ over $X$.
+As $U$ is quasi-compact we may replace $W$ by a disjoint union of
+finitely many affine opens of $W$, hence we may assume that $W$
+is affine as well. Here is a diagram
+$$
+\xymatrix{
+U \ar[d] & U \times_Y W \ar[l] \ar[d] & W_X \amalg R \ar@{=}[l]\\
+X \ar[d] & W_X \ar[l] \ar[d] \\
+Y & W \ar[l]
+}
+$$
+The disjoint union decomposition arises because by construction the
+\'etale morphism of affine schemes $U \times_Y W \to W_X$ has a section.
+OK, and now we see that the morphism $R \to X \times_Y W$ is an \'etale
+morphism of affine schemes whose fibres have degree universally bounded
+by $n - 1$. Hence by induction assumption there exists a scheme
+$V' \to W$ \'etale such that $R \cong W_X \times_W V'$.
+Taking $V'' = W \amalg V'$ we find a scheme $V''$ \'etale over $W$ whose
+base change to $W_X$ is isomorphic to $U \times_Y W$
+over $X \times_Y W$.
+
+\medskip\noindent
+At this point we can use descent to find $V$ over $Y$ whose base
+change to $X$ is isomorphic to $U$ over $X$. Namely, by the fully
+faithfulness of the functor (\ref{equation-affine-etale})
+corresponding to the universal homeomorphism
+$X \times_Y (W \times_Y W) \to (W \times_Y W)$
+there exists a unique isomorphism $\varphi : V'' \times_Y W \to W \times_Y V''$
+whose base change to $X \times_Y (W \times_Y W)$ is the canonical
+descent datum for $U \times_Y W$ over $X \times_Y W$. In particular
+$\varphi$ satisfies the cocycle condition. Hence by
+Descent, Lemma \ref{descent-lemma-affine}
+we see that $\varphi$ is effective (recall that all schemes above are affine).
+Thus we obtain $V \to Y$ and an isomorphism $V'' \cong W \times_Y V$
+such that the canonical descent datum on $W \times_Y V/W/Y$ agrees
+with $\varphi$. Note that $V \to Y$ is \'etale, by
+Descent, Lemma \ref{descent-lemma-descending-property-etale}.
+Moreover, there is an isomorphism $V_X \cong U$ which comes from
+descending the isomorphism
+$$
+V_X \times_X W_X =
+X \times_Y V \times_Y W =
+(X \times_Y W) \times_W (W \times_Y V) \cong
+W_X \times_W V'' \cong U \times_Y W
+$$
+which we have by construction. Some details omitted.
+\end{proof}
+
+\begin{remark}
+\label{remark-affine-inside-equivalence}
+In the situation of
+Theorem \ref{theorem-topological-invariance}
+it is also true that $V \mapsto V_X$ induces an equivalence
+between those \'etale morphisms $V \to Y$ with $V$ affine and
+those \'etale morphisms $U \to X$ with $U$ affine.
+This follows for example from
+Limits, Proposition \ref{limits-proposition-affine}.
+\end{remark}
+
+\begin{proposition}[Topological invariance of \'etale cohomology]
+\label{proposition-topological-invariance}
+Let $X_0 \to X$ be a universal homeomorphism of schemes
+(for example the closed immersion defined by a nilpotent sheaf of ideals).
+Then
+\begin{enumerate}
+\item the \'etale sites $X_\etale$ and $(X_0)_\etale$ are isomorphic,
+\item the \'etale topoi $\Sh(X_\etale)$ and $\Sh((X_0)_\etale)$
+are equivalent, and
+\item $H^q_\etale(X, \mathcal{F}) = H^q_\etale(X_0, \mathcal{F}|_{X_0})$
+for all $q$ and
+for any abelian sheaf $\mathcal{F}$ on $X_\etale$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+The equivalence of categories $X_\etale \to (X_0)_\etale$ is
+given by Theorem \ref{theorem-topological-invariance}. We omit
+the proof that under this equivalence the \'etale coverings correspond.
+Hence (1) holds. Parts (2) and (3) follow formally from (1).
+\end{proof}
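\medskip\noindent
For example, for any scheme $X$ the closed immersion $X_{red} \to X$ is
integral, universally injective and surjective, hence a universal
homeomorphism. Thus the proposition gives
$$
H^q_\etale(X, \mathcal{F}) = H^q_\etale(X_{red}, \mathcal{F}|_{X_{red}})
$$
for all $q$ and all abelian sheaves $\mathcal{F}$ on $X_\etale$. In other
words, \'etale cohomology cannot distinguish a scheme from its reduction.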
+
+
+
+
+
+
+\section{Closed immersions and pushforward}
+\label{section-closed-immersions}
+
+\noindent
+Before stating and proving
+Proposition \ref{proposition-closed-immersion-pushforward}
+in its correct generality we briefly state and prove it for
+closed immersions. Namely, some of the preceding arguments
+are quite a bit easier to follow in the case of a closed immersion and
+so we repeat them here in their simplified form.
+
+\medskip\noindent
+In the rest of this section $i : Z \to X$ is a closed immersion.
+The functor
+$$
+\Sch/X \longrightarrow \Sch/Z, \quad
+U \longmapsto U_Z = Z \times_X U
+$$
+will be denoted $U \mapsto U_Z$ as indicated. Since being a closed immersion
+is preserved under arbitrary base change the scheme $U_Z$ is a closed subscheme
+of $U$.
+
+\begin{lemma}
+\label{lemma-closed-immersion-almost-full}
+Let $i : Z \to X$ be a closed immersion of schemes.
+Let $U, U'$ be schemes \'etale over $X$. Let $h : U_Z \to U'_Z$
+be a morphism over $Z$. Then there exists a diagram
+$$
+\xymatrix{
+U & W \ar[l]_a \ar[r]^b & U'
+}
+$$
+such that $a_Z : W_Z \to U_Z$ is an isomorphism and $h = b_Z \circ (a_Z)^{-1}$.
+\end{lemma}
+
+\begin{proof}
Consider the scheme $M = U \times_X U'$. The graph $\Gamma_h \subset M_Z$
+of $h$ is open. This is true for example as $\Gamma_h$ is the image of a
+section of the \'etale morphism $\text{pr}_{1, Z} : M_Z \to U_Z$, see
+\'Etale Morphisms, Proposition \ref{etale-proposition-properties-sections}.
+Hence there exists an open subscheme $W \subset M$ whose intersection with
+the closed subset $M_Z$ is $\Gamma_h$. Set $a = \text{pr}_1|_W$
+and $b = \text{pr}_2|_W$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-immersion-almost-essentially-surjective}
+Let $i : Z \to X$ be a closed immersion of schemes.
+Let $V \to Z$ be an \'etale morphism of schemes.
+There exist \'etale morphisms $U_i \to X$ and morphisms
+$U_{i, Z} \to V$ such that $\{U_{i, Z} \to V\}$
+is a Zariski covering of $V$.
+\end{lemma}
+
+\begin{proof}
+Since we only have to find a Zariski covering of $V$ consisting of schemes
+of the form $U_Z$ with $U$ \'etale over $X$, we may Zariski localize on $X$
+and $V$. Hence we may assume $X$ and $V$ affine. In the affine case this is
+Algebra, Lemma \ref{algebra-lemma-lift-etale}.
+\end{proof}
+
+\noindent
+If $\overline{x} : \Spec(k) \to X$ is a geometric point of $X$, then
+either $\overline{x}$ factors (uniquely) through the closed subscheme $Z$, or
+$Z_{\overline{x}} = \emptyset$. If $\overline{x}$ factors through $Z$
+we say that $\overline{x}$ is a geometric point of $Z$ (because it is) and
+we use the notation ``$\overline{x} \in Z$'' to indicate this.
+
+\begin{lemma}
+\label{lemma-stalk-pushforward-closed-immersion}
+Let $i : Z \to X$ be a closed immersion of schemes.
+Let $\mathcal{G}$ be a sheaf of sets on $Z_\etale$.
+Let $\overline{x}$ be a geometric point of $X$.
+Then
+$$
+(i_{small, *}\mathcal{G})_{\overline{x}} =
+\left\{
+\begin{matrix}
+* & \text{if} & \overline{x} \not \in Z \\
+\mathcal{G}_{\overline{x}} & \text{if} & \overline{x} \in Z
+\end{matrix}
+\right.
+$$
+where $*$ denotes a singleton set.
+\end{lemma}
+
+\begin{proof}
Note that $i_{small, *}\mathcal{G}|_{U_\etale} = *$ is the final
object in the category of \'etale sheaves on $U = X \setminus Z$, i.e.,
the sheaf which associates a singleton set to each scheme \'etale over $U$.
+This explains the value of $(i_{small, *}\mathcal{G})_{\overline{x}}$
+if $\overline{x} \not \in Z$.
+
+\medskip\noindent
+Next, suppose that $\overline{x} \in Z$. Note that
+$$
+(i_{small, *}\mathcal{G})_{\overline{x}}
+=
+\colim_{(U, \overline{u})} \mathcal{G}(U_Z)
+$$
+and on the other hand
+$$
+\mathcal{G}_{\overline{x}}
+=
+\colim_{(V, \overline{v})} \mathcal{G}(V).
+$$
+Let $\mathcal{C}_1 = \{(U, \overline{u})\}^{opp}$ be the opposite of the
+category of \'etale neighbourhoods of $\overline{x}$ in $X$, and let
+$\mathcal{C}_2 = \{(V, \overline{v})\}^{opp}$ be the opposite of the
+category of \'etale neighbourhoods of $\overline{x}$ in $Z$. The canonical map
+$$
+\mathcal{G}_{\overline{x}}
+\longrightarrow
+(i_{small, *}\mathcal{G})_{\overline{x}}
+$$
+corresponds to the functor $F : \mathcal{C}_1 \to \mathcal{C}_2$,
+$F(U, \overline{u}) = (U_Z, \overline{x})$. Now
+Lemmas \ref{lemma-closed-immersion-almost-essentially-surjective} and
+\ref{lemma-closed-immersion-almost-full}
+imply that $\mathcal{C}_1$ is cofinal in $\mathcal{C}_2$, see
+Categories, Definition \ref{categories-definition-cofinal}.
+Hence it follows that the displayed arrow is an isomorphism, see
+Categories, Lemma \ref{categories-lemma-cofinal}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-closed-immersion-pushforward}
+Let $i : Z \to X$ be a closed immersion of schemes.
+\begin{enumerate}
+\item The functor
+$$
+i_{small, *} :
+\Sh(Z_\etale)
+\longrightarrow
+\Sh(X_\etale)
+$$
is fully faithful and its essential image consists of those sheaves of sets
$\mathcal{F}$ on $X_\etale$ whose restriction to $X \setminus Z$ is
isomorphic to $*$, and
+\item the functor
+$$
+i_{small, *} :
+\textit{Ab}(Z_\etale)
+\longrightarrow
+\textit{Ab}(X_\etale)
+$$
is fully faithful and its essential image consists of those abelian sheaves on
$X_\etale$ whose support is contained in $Z$.
+\end{enumerate}
+In both cases $i_{small}^{-1}$ is a left inverse to the functor
+$i_{small, *}$.
+\end{proposition}
+
+\begin{proof}
+Let's discuss the case of sheaves of sets.
+For any sheaf $\mathcal{G}$ on $Z$ the morphism
+$i_{small}^{-1}i_{small, *}\mathcal{G} \to \mathcal{G}$
+is an isomorphism by
+Lemma \ref{lemma-stalk-pushforward-closed-immersion}
+(and
+Theorem \ref{theorem-exactness-stalks}).
+This implies formally that $i_{small, *}$ is fully faithful, see
+Sites, Lemma \ref{sites-lemma-exactness-properties}.
+It is clear that $i_{small, *}\mathcal{G}|_{U_\etale} \cong *$
+where $U = X \setminus Z$. Conversely, suppose that $\mathcal{F}$
+is a sheaf of sets on $X$ such that $\mathcal{F}|_{U_\etale} \cong *$.
+Consider the adjunction mapping
+$$
+\mathcal{F} \longrightarrow i_{small, *}i_{small}^{-1}\mathcal{F}
+$$
+Combining
+Lemmas \ref{lemma-stalk-pushforward-closed-immersion} and
+\ref{lemma-stalk-pullback}
+we see that it is an isomorphism. This finishes the proof of (1).
+The proof of (2) is identical.
+\end{proof}
+
+
+
+
+
+\section{Integral universally injective morphisms}
+\label{section-integral-universally-injective}
+
+\noindent
+Here is the general version of
+Proposition \ref{proposition-closed-immersion-pushforward}.
+
+\begin{proposition}
+\label{proposition-integral-universally-injective-pushforward}
+Let $f : X \to Y$ be a morphism of schemes which is integral
+and universally injective.
+\begin{enumerate}
+\item The functor
+$$
+f_{small, *} :
+\Sh(X_\etale)
+\longrightarrow
+\Sh(Y_\etale)
+$$
is fully faithful and its essential image consists of those sheaves of sets
$\mathcal{F}$ on $Y_\etale$ whose restriction to $Y \setminus f(X)$ is
isomorphic to $*$, and
+\item the functor
+$$
+f_{small, *} :
+\textit{Ab}(X_\etale)
+\longrightarrow
+\textit{Ab}(Y_\etale)
+$$
is fully faithful and its essential image consists of those abelian sheaves on
$Y_\etale$ whose support is contained in $f(X)$.
+\end{enumerate}
+In both cases $f_{small}^{-1}$ is a left inverse to the functor
+$f_{small, *}$.
+\end{proposition}
+
+\begin{proof}
+We may factor $f$ as
+$$
+\xymatrix{
+X \ar[r]^h & Z \ar[r]^i & Y
+}
+$$
+where $h$ is integral, universally injective and surjective
+and $i : Z \to Y$ is a closed immersion.
+Apply
+Proposition \ref{proposition-closed-immersion-pushforward}
+to $i$ and apply
+Theorem \ref{theorem-topological-invariance}
+to $h$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Big sites and pushforward}
+\label{section-big}
+
+\noindent
+In this section we prove some technical results on $f_{big, *}$ for
+certain types of morphisms of schemes.
+
+\begin{lemma}
+\label{lemma-monomorphism-big-push-pull}
+Let $\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$.
+Let $f : X \to Y$ be a monomorphism of schemes.
+Then the canonical map
+$f_{big}^{-1}f_{big, *}\mathcal{F} \to \mathcal{F}$
+is an isomorphism for any sheaf $\mathcal{F}$ on
+$(\Sch/X)_\tau$.
+\end{lemma}
+
+\begin{proof}
+In this case the functor $(\Sch/X)_\tau \to (\Sch/Y)_\tau$
+is continuous, cocontinuous and fully faithful. Hence the result follows from
+Sites, Lemma \ref{sites-lemma-back-and-forth}.
+\end{proof}
+
+\begin{remark}
+\label{remark-push-pull-shriek}
+In the situation of
+Lemma \ref{lemma-monomorphism-big-push-pull}
+it is true that the canonical map
+$\mathcal{F} \to f_{big}^{-1}f_{big!}\mathcal{F}$
+is an isomorphism for any sheaf of sets $\mathcal{F}$ on
+$(\Sch/X)_\tau$. The proof is the same. This also
+holds for sheaves of abelian groups. However, note
+that the functor $f_{big!}$ for sheaves of abelian groups is defined in
+Modules on Sites, Section \ref{sites-modules-section-exactness-lower-shriek}
+and is in general different from $f_{big!}$ on sheaves of sets.
+The result for sheaves of abelian groups follows from
+Modules on Sites, Lemma \ref{sites-modules-lemma-back-and-forth}.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-closed-immersion-cover-from-below}
+Let $f : X \to Y$ be a closed immersion of schemes.
+Let $U \to X$ be a syntomic (resp.\ smooth, resp.\ \'etale) morphism.
+Then there exist syntomic (resp.\ smooth, resp.\ \'etale) morphisms
+$V_i \to Y$ and morphisms $V_i \times_Y X \to U$ such that
+$\{V_i \times_Y X \to U\}$ is a Zariski covering of $U$.
+\end{lemma}
+
+\begin{proof}
Let us prove the lemma in the syntomic case.
+The question is local on $U$. Thus we may assume that $U$ is
+an affine scheme mapping into an affine of $Y$.
+Hence we reduce to proving the following case:
+$Y = \Spec(A)$, $X = \Spec(A/I)$, and
+$U = \Spec(\overline{B})$, where
$A/I \to \overline{B}$ is a syntomic ring map.
+By Algebra, Lemma \ref{algebra-lemma-lift-syntomic}
+we can find elements $\overline{g}_i \in \overline{B}$
+such that
+$\overline{B}_{\overline{g}_i} = A_i/IA_i$ for certain syntomic ring maps
+$A \to A_i$.
+This proves the lemma in the syntomic case.
+The proof of the smooth case is the same except it uses
+Algebra, Lemma \ref{algebra-lemma-lift-smooth}.
+In the \'etale case use
+Algebra, Lemma \ref{algebra-lemma-lift-etale}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-prepare-closed-immersion-almost-cocontinuous}
+Let $f : X \to Y$ be a closed immersion of schemes.
+Let $\{U_i \to X\}$ be a syntomic (resp.\ smooth, resp.\ \'etale) covering.
+There exists a syntomic (resp.\ smooth, resp.\ \'etale) covering $\{V_j \to Y\}$
+such that for each $j$, either $V_j \times_Y X = \emptyset$, or the
+morphism $V_j \times_Y X \to X$ factors through $U_i$ for some $i$.
+\end{lemma}
+
+\begin{proof}
+For each $i$ we can choose syntomic (resp.\ smooth, resp.\ \'etale) morphisms
+$g_{ij} : V_{ij} \to Y$ and morphisms $V_{ij} \times_Y X \to U_i$ over $X$,
+such that $\{V_{ij} \times_Y X \to U_i\}$ are Zariski coverings, see
+Lemma \ref{lemma-closed-immersion-cover-from-below}.
+This in particular implies that
+$\bigcup_{ij} g_{ij}(V_{ij})$ contains the closed subset $f(X)$.
+Hence the family of syntomic (resp.\ smooth, resp.\ \'etale) maps $g_{ij}$
+together with the open immersion $Y \setminus f(X) \to Y$ forms the desired
+syntomic (resp.\ smooth, resp.\ \'etale) covering of $Y$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-immersion-almost-cocontinuous}
+Let $f : X \to Y$ be a closed immersion of schemes.
+Let $\tau \in \{syntomic, smooth, \etale\}$.
+The functor $V \mapsto X \times_Y V$ defines an almost
+cocontinuous functor (see
+Sites, Definition \ref{sites-definition-almost-cocontinuous})
+$(\Sch/Y)_\tau \to (\Sch/X)_\tau$ between
+big $\tau$ sites.
+\end{lemma}
+
+\begin{proof}
+We have to show the following: given a morphism $V \to Y$
+and any syntomic (resp.\ smooth, resp.\ \'etale)
+covering $\{U_i \to X \times_Y V\}$, there exists a
syntomic (resp.\ smooth, resp.\ \'etale) covering $\{V_j \to V\}$
such that for each $j$, either $X \times_Y V_j$ is empty, or
$X \times_Y V_j \to X \times_Y V$ factors through one of
+the $U_i$. This follows on applying
+Lemma \ref{lemma-prepare-closed-immersion-almost-cocontinuous}
+above to the closed immersion $X \times_Y V \to V$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-immersion-pushforward-exact}
+Let $f : X \to Y$ be a closed immersion of schemes.
+Let $\tau \in \{syntomic, smooth, \etale\}$.
+\begin{enumerate}
+\item The pushforward
+$f_{big, *} :
+\Sh((\Sch/X)_\tau)
+\to
+\Sh((\Sch/Y)_\tau)$
+commutes with coequalizers and pushouts.
+\item The pushforward
+$f_{big, *} :
+\textit{Ab}((\Sch/X)_\tau)
+\to
+\textit{Ab}((\Sch/Y)_\tau)$
+is exact.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from
+Sites, Lemma \ref{sites-lemma-morphism-of-sites-almost-cocontinuous},
+Modules on Sites,
+Lemma \ref{sites-modules-lemma-morphism-ringed-sites-almost-cocontinuous},
+and
+Lemma \ref{lemma-closed-immersion-almost-cocontinuous}
+above.
+\end{proof}
+
+\begin{remark}
+\label{remark-fppf-closed-immersion-not-closed}
+In Lemma \ref{lemma-closed-immersion-pushforward-exact} the case $\tau = fppf$
+is missing. The reason is that given a ring $A$, an ideal $I$ and a
+faithfully flat, finitely presented ring map $A/I \to \overline{B}$, there
+is no reason to think that one can find {\it any} flat finitely presented ring
+map $A \to B$ with $B/IB \not = 0$ such that $A/I \to B/IB$ factors through
+$\overline{B}$. Hence the proof of
+Lemma \ref{lemma-closed-immersion-almost-cocontinuous}
+does not work for the fppf topology.
+In fact it is likely false that
+$f_{big, *} : \textit{Ab}((\Sch/X)_{fppf})
+\to \textit{Ab}((\Sch/Y)_{fppf})$
+is exact when $f$ is a closed immersion.
+If you know an example, please email
+\href{mailto:stacks.project@gmail.com}{stacks.project@gmail.com}.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Exactness of big lower shriek}
+\label{section-exactness-lower-shriek}
+
+\noindent
+This is just the following technical result. Note that the functor $f_{big!}$
+has nothing whatsoever to do with cohomology with compact support in
+general.
+
+\begin{lemma}
+\label{lemma-exactness-lower-shriek}
+Let $\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$.
+Let $f : X \to Y$ be a morphism of schemes. Let
+$$
+f_{big} :
+\Sh((\Sch/X)_\tau)
+\longrightarrow
+\Sh((\Sch/Y)_\tau)
+$$
+be the corresponding morphism of topoi as in
+Topologies, Lemma
+\ref{topologies-lemma-morphism-big},
+\ref{topologies-lemma-morphism-big-etale},
+\ref{topologies-lemma-morphism-big-smooth},
+\ref{topologies-lemma-morphism-big-syntomic}, or
+\ref{topologies-lemma-morphism-big-fppf}.
+\begin{enumerate}
+\item The functor
+$f_{big}^{-1} : \textit{Ab}((\Sch/Y)_\tau) \to \textit{Ab}((\Sch/X)_\tau)$
+has a left adjoint
+$$
+f_{big!} : \textit{Ab}((\Sch/X)_\tau) \to \textit{Ab}((\Sch/Y)_\tau)
+$$
+which is exact.
+\item The functor
+$f_{big}^* :
+\textit{Mod}((\Sch/Y)_\tau, \mathcal{O})
+\to
+\textit{Mod}((\Sch/X)_\tau, \mathcal{O})$
+has a left adjoint
+$$
+f_{big!} :
+\textit{Mod}((\Sch/X)_\tau, \mathcal{O})
+\to
+\textit{Mod}((\Sch/Y)_\tau, \mathcal{O})
+$$
+which is exact.
+\end{enumerate}
+Moreover, the two functors $f_{big!}$ agree on underlying sheaves
+of abelian groups.
+\end{lemma}
+
+\begin{proof}
+Recall that $f_{big}$ is the morphism of topoi associated to the
+continuous and cocontinuous functor
+$u : (\Sch/X)_\tau \to (\Sch/Y)_\tau$, $U/X \mapsto U/Y$.
+Moreover, we have $f_{big}^{-1}\mathcal{O} = \mathcal{O}$.
+Hence the existence of $f_{big!}$ follows from
+Modules on Sites, Lemma \ref{sites-modules-lemma-g-shriek-adjoint},
+respectively
+Modules on Sites, Lemma \ref{sites-modules-lemma-lower-shriek-modules}.
+Note that if $U$ is an object of $(\Sch/X)_\tau$ then the functor
+$u$ induces an equivalence of categories
+$$
+u' :
+(\Sch/X)_\tau/U
+\longrightarrow
+(\Sch/Y)_\tau/U
+$$
+because both sides of the arrow are equal to $(\Sch/U)_\tau$.
+Hence the agreement of $f_{big!}$ on underlying abelian sheaves
+follows from the discussion in
+Modules on Sites, Remark \ref{sites-modules-remark-when-shriek-equal}.
+The exactness of $f_{big!}$ follows from
+Modules on Sites, Lemma \ref{sites-modules-lemma-exactness-lower-shriek}
as the functor $u$ above commutes with fibre products and equalizers.
+\end{proof}
+
+\noindent
+Next, we prove a technical lemma that will be useful later when comparing
+sheaves of modules on different sites associated to algebraic stacks.
+
+\begin{lemma}
+\label{lemma-compare-structure-sheaves}
+Let $X$ be a scheme. Let
+$\tau \in \{Zariski, \etale, smooth, syntomic, fppf\}$.
+Let $\mathcal{C}_1 \subset \mathcal{C}_2 \subset (\Sch/X)_\tau$ be full
+subcategories with the following properties:
+\begin{enumerate}
+\item For an object $U/X$ of $\mathcal{C}_t$,
+\begin{enumerate}
+\item if $\{U_i \to U\}$ is a covering of $(\Sch/X)_\tau$, then
+$U_i/X$ is an object of $\mathcal{C}_t$,
+\item $U \times \mathbf{A}^1/X$ is an object of $\mathcal{C}_t$.
+\end{enumerate}
+\item $X/X$ is an object of $\mathcal{C}_t$.
+\end{enumerate}
+We endow $\mathcal{C}_t$ with the structure of a site whose coverings are
+exactly those coverings $\{U_i \to U\}$ of $(\Sch/X)_\tau$ with
+$U \in \Ob(\mathcal{C}_t)$. Then
+\begin{enumerate}
+\item[(a)] The functor $\mathcal{C}_1 \to \mathcal{C}_2$
+is fully faithful, continuous, and cocontinuous.
+\end{enumerate}
+Denote $g : \Sh(\mathcal{C}_1) \to \Sh(\mathcal{C}_2)$ the corresponding
+morphism of topoi. Denote $\mathcal{O}_t$ the restriction of $\mathcal{O}$
+to $\mathcal{C}_t$. Denote $g_!$ the functor of
+Modules on Sites, Definition \ref{sites-modules-definition-g-shriek}.
+\begin{enumerate}
+\item[(b)] The canonical map $g_!\mathcal{O}_1 \to \mathcal{O}_2$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assertion (a) is immediate from the definitions.
+In this proof all schemes are schemes over $X$ and all morphisms of
+schemes are morphisms of schemes over $X$. Note that $g^{-1}$ is
+given by restriction, so that for an object $U$ of $\mathcal{C}_1$
+we have $\mathcal{O}_1(U) = \mathcal{O}_2(U) = \mathcal{O}(U)$.
+Recall that $g_!\mathcal{O}_1$ is the sheaf associated to the presheaf
+$g_{p!}\mathcal{O}_1$ which associates to $V$ in $\mathcal{C}_2$ the group
+$$
+\colim_{V \to U} \mathcal{O}(U)
+$$
+where $U$ runs over the objects of $\mathcal{C}_1$ and the colimit is
+taken in the category of abelian groups. Below we will use frequently
+that if
+$$
+V \to U \to U'
+$$
+are morphisms with $U, U' \in \Ob(\mathcal{C}_1)$
+and if $f' \in \mathcal{O}(U')$ restricts to $f \in \mathcal{O}(U)$,
+then $(V \to U, f)$ and $(V \to U', f')$ define the same element of the
+colimit. Also, $g_!\mathcal{O}_1 \to \mathcal{O}_2$ maps the element
+$(V \to U, f)$ simply to the pullback of $f$ to $V$.
+
+\medskip\noindent
+Surjectivity. Let $V$ be a scheme and let $h \in \mathcal{O}(V)$.
+Then we obtain a morphism $V \to X \times \mathbf{A}^1$ induced by $h$
+and the structure morphism $V \to X$. Writing
+$\mathbf{A}^1 = \Spec(\mathbf{Z}[x])$ we see the element
+$x \in \mathcal{O}(X \times \mathbf{A}^1)$ pulls
+back to $h$. Since $X \times \mathbf{A}^1$ is an object of $\mathcal{C}_1$
+by assumptions (1)(b) and (2) we obtain the desired surjectivity.
+
+\medskip\noindent
+Injectivity. Let $V$ be a scheme. Let
+$s = \sum_{i = 1, \ldots, n} (V \to U_i, f_i)$ be an element of the colimit
+displayed above. For any $i$ we can use the morphism
+$f_i : U_i \to X \times \mathbf{A}^1$
+to see that $(V \to U_i, f_i)$ defines the same element of the colimit as
+$(f_i : V \to X \times \mathbf{A}^1, x)$. Then we can consider
+$$
+f_1 \times \ldots \times f_n : V \to X \times \mathbf{A}^n
+$$
+and we see that $s$ is equivalent in the colimit to
+$$
+\sum\nolimits_{i = 1, \ldots, n}
+(f_1 \times \ldots \times f_n : V \to X \times \mathbf{A}^n, x_i) =
+(f_1 \times \ldots \times f_n : V \to X \times \mathbf{A}^n,
+x_1 + \ldots + x_n)
+$$
Now, if $x_1 + \ldots + x_n$ pulls back to zero on $V$, then
$f_1 \times \ldots \times f_n$ factors through the vanishing locus of
$x_1 + \ldots + x_n$ in $X \times \mathbf{A}^n$, which is isomorphic
to $X \times \mathbf{A}^{n - 1}$. Hence we see
that $s$ is equivalent to zero in the colimit.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+%9.29.09
+\section{\'Etale cohomology}
+\label{section-etale-cohomology}
+
+\noindent
+In the following sections we prove some basic results on \'etale cohomology.
+Here is an example of something we know for cohomology of topological
+spaces which also holds for \'etale cohomology.
+
+\begin{lemma}[Mayer-Vietoris for \'etale cohomology]
+\label{lemma-mayer-vietoris}
+Let $X$ be a scheme. Suppose that $X = U \cup V$ is a
+union of two opens. For any abelian sheaf $\mathcal{F}$ on $X_\etale$
+there exists a long exact cohomology sequence
+$$
+\begin{matrix}
+0 \to
+H^0_\etale(X, \mathcal{F}) \to
+H^0_\etale(U, \mathcal{F}) \oplus H^0_\etale(V, \mathcal{F}) \to
+H^0_\etale(U \cap V, \mathcal{F}) \phantom{\to \ldots} \\
+\phantom{0} \to H^1_\etale(X, \mathcal{F}) \to
+H^1_\etale(U, \mathcal{F}) \oplus H^1_\etale(V, \mathcal{F}) \to
+H^1_\etale(U \cap V, \mathcal{F}) \to \ldots
+\end{matrix}
+$$
+This long exact sequence is functorial in $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Observe that if $\mathcal{I}$ is an injective abelian sheaf, then
+$$
+0 \to \mathcal{I}(X) \to \mathcal{I}(U) \oplus \mathcal{I}(V) \to
+\mathcal{I}(U \cap V) \to 0
+$$
is exact, where the first map is $s \mapsto (s|_U, s|_V)$ and the second is
$(s, t) \mapsto s|_{U \cap V} - t|_{U \cap V}$. Exactness in the first and
middle spots holds as $\mathcal{I}$ is a sheaf. It is exact on the right, because
+$\mathcal{I}(U) \to \mathcal{I}(U \cap V)$ is surjective by
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-restriction-along-monomorphism-surjective}.
+Another way to prove it would be to show that the cokernel
+of the map $\mathcal{I}(U) \oplus \mathcal{I}(V) \to
+\mathcal{I}(U \cap V)$ is the first {\v C}ech cohomology group
+of $\mathcal{I}$ with respect to the covering
+$X = U \cup V$ which vanishes by
+Lemmas \ref{lemma-hom-injective} and \ref{lemma-forget-injectives}.
+Thus, if $\mathcal{F} \to \mathcal{I}^\bullet$ is an injective
+resolution, then
+$$
+0 \to \mathcal{I}^\bullet(X) \to
+\mathcal{I}^\bullet(U) \oplus \mathcal{I}^\bullet(V) \to
+\mathcal{I}^\bullet(U \cap V) \to 0
+$$
+is a short exact sequence of complexes and the associated long
+exact cohomology sequence is the sequence of the statement of the lemma.
+\end{proof}
+
+\begin{lemma}[Relative Mayer-Vietoris]
+\label{lemma-relative-mayer-vietoris}
+Let $f : X \to Y$ be a morphism of schemes. Suppose that $X = U \cup V$
+is a union of two open subschemes. Denote
+$a = f|_U : U \to Y$, $b = f|_V : V \to Y$, and
+$c = f|_{U \cap V} : U \cap V \to Y$.
+For every abelian sheaf $\mathcal{F}$ on $X_\etale$
+there exists a long exact sequence
+$$
+0 \to
+f_*\mathcal{F} \to
+a_*(\mathcal{F}|_U) \oplus b_*(\mathcal{F}|_V) \to
+c_*(\mathcal{F}|_{U \cap V}) \to
+R^1f_*\mathcal{F} \to \ldots
+$$
+on $Y_\etale$.
+This long exact sequence is functorial in $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F} \to \mathcal{I}^\bullet$ be an injective resolution
+of $\mathcal{F}$ on $X_\etale$. We claim that we
+get a short exact sequence of complexes
+$$
+0 \to
+f_*\mathcal{I}^\bullet \to
+a_*\mathcal{I}^\bullet|_U \oplus b_*\mathcal{I}^\bullet|_V \to
+c_*\mathcal{I}^\bullet|_{U \cap V} \to
+0.
+$$
+Namely, for any $W$ in $Y_\etale$, and for any $n \geq 0$ the
+corresponding sequence of groups of sections over $W$
+$$
+0 \to
+\mathcal{I}^n(W \times_Y X) \to
+\mathcal{I}^n(W \times_Y U)
+\oplus \mathcal{I}^n(W \times_Y V) \to
+\mathcal{I}^n(W \times_Y (U \cap V)) \to
+0
+$$
+was shown to be short exact in the proof of Lemma \ref{lemma-mayer-vietoris}.
+The lemma follows by taking cohomology sheaves and using the fact that
+$\mathcal{I}^\bullet|_U$ is an injective resolution of $\mathcal{F}|_U$
+and similarly for $\mathcal{I}^\bullet|_V$, $\mathcal{I}^\bullet|_{U \cap V}$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Colimits}
+\label{section-colimit}
+
+\noindent
+We recall that if $(\mathcal{F}_i, \varphi_{ii'})$ is a diagram of sheaves on
+a site $\mathcal{C}$ its colimit (in the category of sheaves) is the
+sheafification of the presheaf $U \mapsto \colim_i \mathcal{F}_i(U)$. See
+Sites, Lemma \ref{sites-lemma-colimit-sheaves}.
If the system is directed and $U$ is a quasi-compact object of
$\mathcal{C}$ which has a cofinal system of coverings by quasi-compact
objects, then $(\colim_i \mathcal{F}_i)(U) = \colim_i \mathcal{F}_i(U)$, see
+Sites, Lemma \ref{sites-lemma-directed-colimits-sections}.
+See Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-colim-works-over-collection}
+for a result dealing with higher cohomology groups of colimits
+of abelian sheaves.
+
+\medskip\noindent
+In Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-colimit}
+we generalize this result to a system of sheaves
+on an inverse system of sites. Here is the corresponding
+notion in the case of a system of \'etale sheaves living
+on an inverse system of schemes.
+
+\begin{definition}
+\label{definition-inverse-system-sheaves}
+Let $I$ be a preordered set. Let $(X_i, f_{i'i})$ be an inverse
+system of schemes over $I$.
+A {\it system $(\mathcal{F}_i, \varphi_{i'i})$ of sheaves
+on $(X_i, f_{i'i})$} is given by
+\begin{enumerate}
+\item a sheaf $\mathcal{F}_i$ on $(X_i)_\etale$ for all $i \in I$,
+\item for $i' \geq i$ a map
+$\varphi_{i'i} : f_{i'i}^{-1}\mathcal{F}_i \to \mathcal{F}_{i'}$
+of sheaves on $(X_{i'})_\etale$
+\end{enumerate}
+such that $\varphi_{i''i} = \varphi_{i''i'} \circ f_{i'' i'}^{-1}\varphi_{i'i}$
+whenever $i'' \geq i' \geq i$.
+\end{definition}
+
+\noindent
+In the situation of Definition \ref{definition-inverse-system-sheaves},
+assume $I$ is a directed set and the transition morphisms $f_{i'i}$ affine.
+Let $X = \lim X_i$ be the limit in the category of schemes, see
+Limits, Section \ref{limits-section-limits}.
+Denote $f_i : X \to X_i$ the projection morphisms and consider the maps
+$$
+f_i^{-1}\mathcal{F}_i = f_{i'}^{-1}f_{i'i}^{-1}\mathcal{F}_i
+\xrightarrow{f_{i'}^{-1}\varphi_{i'i}}
+f_{i'}^{-1}\mathcal{F}_{i'}
+$$
+This turns $f_i^{-1}\mathcal{F}_i$ into a system of sheaves on $X_\etale$
+over $I$ (it is a good exercise to check this).
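Indeed, here is a sketch of the check: set
$\psi_{i'i} = f_{i'}^{-1}\varphi_{i'i} :
f_i^{-1}\mathcal{F}_i \to f_{i'}^{-1}\mathcal{F}_{i'}$, which makes sense
as $f_i = f_{i'i} \circ f_{i'}$. For $i'' \geq i' \geq i$ we have
$f_{i'}^{-1} = f_{i''}^{-1} \circ f_{i''i'}^{-1}$
(as $f_{i'} = f_{i''i'} \circ f_{i''}$), whence
$$
\psi_{i''i}
= f_{i''}^{-1}\varphi_{i''i}
= f_{i''}^{-1}\varphi_{i''i'} \circ f_{i''}^{-1}f_{i''i'}^{-1}\varphi_{i'i}
= \psi_{i''i'} \circ \psi_{i'i}
$$
by the compatibility condition in
Definition \ref{definition-inverse-system-sheaves}.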
+We often want to know whether there is an isomorphism
+$$
+H^q_\etale(X, \colim f_i^{-1}\mathcal{F}_i) =
+\colim H^q_\etale(X_i, \mathcal{F}_i)
+$$
+It will turn out this is true if $X_i$ is quasi-compact and quasi-separated
+for all $i$, see Theorem \ref{theorem-colimit}.
+
+\begin{lemma}
+\label{lemma-colimit-affine-sites}
+Let $I$ be a directed set. Let $(X_i, f_{i'i})$ be an inverse
+system of schemes over $I$ with affine transition morphisms.
+Let $X = \lim_{i \in I} X_i$. With
+notation as in Topologies, Lemma \ref{topologies-lemma-alternative} we have
+$$
+X_{affine, \etale} = \colim (X_i)_{affine, \etale}
+$$
+as sites in the sense of
+Sites, Lemma \ref{sites-lemma-colimit-sites}.
+\end{lemma}
+
+\begin{proof}
+Let us first prove this when $X$ and $X_i$ are quasi-compact and
+quasi-separated for all $i$ (as this is true in all cases of
+interest). In this case any object of
+$X_{affine, \etale}$, resp.\ $(X_i)_{affine, \etale}$ is
+of finite presentation over $X$. Moreover, the
+category of schemes of finite presentation over $X$ is the
+colimit of the categories of schemes of finite presentation
+over $X_i$, see
+Limits, Lemma \ref{limits-lemma-descend-finite-presentation}.
+The same holds for the subcategories of affine objects \'etale
+over $X$ by Limits, Lemmas
+\ref{limits-lemma-limit-affine} and \ref{limits-lemma-descend-etale}.
+Finally, if $\{U^j \to U\}$ is a covering of $X_{affine, \etale}$
and if $U_i^j \to U_i$ is a morphism of affine schemes \'etale over
+$X_i$ whose base change to $X$ is $U^j \to U$, then we see that
+the base change of $\{U^j_i \to U_i\}$ to some $X_{i'}$ is
+a covering for $i'$ large enough, see
+Limits, Lemma \ref{limits-lemma-descend-surjective}.
+
+\medskip\noindent
+In the general case, let $U$ be an object of $X_{affine, \etale}$.
+Then $U \to X$ is \'etale and separated (as $U$ is separated)
+but in general not quasi-compact. Still, $U \to X$ is locally
+of finite presentation and hence by
+Limits, Lemma \ref{limits-lemma-descend-finite-presentation-variant}
+there exists an $i$, a quasi-compact and quasi-separated scheme $U_i$, and
+a morphism $U_i \to X_i$ which is locally of finite presentation
+whose base change to $X$ is $U \to X$. Then $U = \lim_{i' \geq i} U_{i'}$
+where $U_{i'} = U_i \times_{X_i} X_{i'}$.
+After increasing $i$ we may assume $U_i$ is affine, see
+Limits, Lemma \ref{limits-lemma-limit-affine}.
+To check that $U_i \to X_i$ is \'etale for $i$ sufficiently large,
+choose a finite affine open covering $U_i = U_{i, 1} \cup \ldots \cup U_{i, m}$
+such that $U_{i, j} \to U_i \to X_i$ maps into an affine open
+$W_{i, j} \subset X_i$. Then we can apply
+Limits, Lemma \ref{limits-lemma-descend-etale}
+to see that $U_{i, j} \to W_{i, j}$ is \'etale
+after possibly increasing $i$.
+In this way we see that the functor
+$\colim (X_i)_{affine, \etale} \to X_{affine, \etale}$
+is essentially surjective. Fully faithfulness follows
+directly from the already used
+Limits, Lemma \ref{limits-lemma-descend-finite-presentation-variant}.
+The statement on coverings is proved in exactly the same
+manner as done in the first paragraph of the proof.
+\end{proof}
+
+\noindent
+Using the above we get the following general result on colimits and cohomology.
+
+\begin{theorem}
+\label{theorem-colimit}
+Let $X = \lim_{i \in I} X_i$ be a limit of a directed system of schemes
+with affine transition morphisms $f_{i'i} : X_{i'} \to X_i$. We assume
+that $X_i$ is quasi-compact and quasi-separated for all $i \in I$.
+Let $(\mathcal{F}_i, \varphi_{i'i})$ be a system of abelian sheaves
+on $(X_i, f_{i'i})$. Denote $f_i : X \to X_i$ the projection and set
+$\mathcal{F} = \colim f_i^{-1}\mathcal{F}_i$. Then
+$$
+\colim_{i\in I} H_\etale^p(X_i, \mathcal{F}_i) = H_\etale^p(X, \mathcal{F}).
+$$
+for all $p \geq 0$.
+\end{theorem}
+
+\begin{proof}
+By Topologies, Lemma \ref{topologies-lemma-alternative}
+we can compute the cohomology
+of $\mathcal{F}$ on $X_{affine, \etale}$.
Thus the result follows by combining
+Lemma \ref{lemma-colimit-affine-sites}
+and
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-colimit}.
+\end{proof}
+
+\noindent
+The following two results are special cases of the theorem above.
+
+\begin{lemma}
+\label{lemma-colimit}
+Let $X$ be a quasi-compact and quasi-separated scheme. Let $I$
+be a directed set. Let $(\mathcal{F}_i, \varphi_{ij})$ be a system
+of abelian sheaves on $X_\etale$ over $I$. Then
+$$
+\colim_{i\in I} H_\etale^p(X, \mathcal{F}_i) = H_\etale^p(X,
+\colim_{i\in I} \mathcal{F}_i).
+$$
+\end{lemma}
+
+\begin{proof}
+This is a special case of Theorem \ref{theorem-colimit}.
+We also sketch a direct proof.
+We prove it for all $X$ at the same time, by induction on $p$.
+\begin{enumerate}
+\item
+For any quasi-compact and quasi-separated scheme $X$ and any \'etale covering
+$\mathcal{U}$ of $X$, show that there exists a refinement
+$\mathcal{V} = \{V_j \to X\}_{j\in J}$ with $J$ finite and each $V_j$
+quasi-compact and quasi-separated such that all
+$V_{j_0} \times_X \ldots \times_X V_{j_p}$ are also
+quasi-compact and quasi-separated.
+\item
+Using the previous step and the definition of colimits in the category of
+sheaves, show that the theorem holds for $p = 0$ and all $X$.
+\item
+Using the locality of cohomology
+(Lemma \ref{lemma-locality-cohomology}),
+the {\v C}ech-to-cohomology spectral sequence
+(Theorem \ref{theorem-cech-ss}) and the fact that the induction
+hypothesis applies to all
+$V_{j_0} \times_X \ldots \times_X V_{j_p}$
+in the above situation, prove the induction step $p \to p + 1$.
+\end{enumerate}
+\end{proof}
+
+\begin{lemma}
+\label{lemma-directed-colimit-cohomology}
+Let $A$ be a ring, $(I, \leq)$ a directed set and $(B_i, \varphi_{ij})$ a
+system of $A$-algebras. Set $B = \colim_{i\in I} B_i$. Let $X \to \Spec(A)$
+be a quasi-compact and quasi-separated morphism of schemes. Let
$\mathcal{F}$ be an abelian sheaf on $X_\etale$.
+Denote $Y_i = X \times_{\Spec(A)} \Spec(B_i)$,
+$Y = X \times_{\Spec(A)} \Spec(B)$,
+$\mathcal{G}_i = (Y_i \to X)^{-1}\mathcal{F}$ and
+$\mathcal{G} = (Y \to X)^{-1}\mathcal{F}$. Then
+$$
+H_\etale^p(Y, \mathcal{G}) =
+\colim_{i\in I} H_\etale^p (Y_i, \mathcal{G}_i).
+$$
+\end{lemma}
+
+\begin{proof}
+This is a special case of Theorem \ref{theorem-colimit}.
+We also outline a direct proof as follows.
+\begin{enumerate}
+\item Given $V \to Y$ \'etale with $V$ quasi-compact and
+quasi-separated, there exist $i\in I$ and $V_i \to Y_i$ such that
+$V = V_i \times_{Y_i} Y$.
+If all the schemes considered were affine, this would correspond to the
+following algebra statement: if $B = \colim B_i$ and $B \to C$ is \'etale,
+then there exist $i \in I$ and $B_i \to C_i$ \'etale such that
+$C \cong B \otimes_{B_i} C_i$.
+This is proved in Algebra, Lemma \ref{algebra-lemma-etale}.
+\item In the situation of (1) show that
+$\mathcal{G}(V) = \colim_{i' \geq i} \mathcal{G}_{i'}(V_{i'})$
+where $V_{i'}$ is the base change of $V_i$ to $Y_{i'}$.
+\item By (1), we see that for every \'etale covering
+$\mathcal{V} = \{V_j \to Y\}_{j\in J}$ with $J$ finite and the
$V_j$'s quasi-compact and quasi-separated, there exists $i \in I$ and
+an \'etale covering $\mathcal{V}_i = \{V_{ij} \to Y_i\}_{j \in J}$
+such that $\mathcal{V} \cong \mathcal{V}_i \times_{Y_i} Y$.
+\item Show that (2) and (3) imply
+$$
+\check H^*(\mathcal{V}, \mathcal{G})=
+\colim_{i\in I} \check H^*(\mathcal{V}_i, \mathcal{G}_i).
+$$
+\item Cleverly use the {\v C}ech-to-cohomology spectral sequence
+(Theorem \ref{theorem-cech-ss}).
+\end{enumerate}
+\end{proof}
+
+\begin{lemma}
+\label{lemma-higher-direct-images}
+Let $f: X\to Y$ be a morphism of schemes and $\mathcal{F}\in
+\textit{Ab}(X_\etale)$. Then $R^pf_*\mathcal{F}$ is the sheaf
+associated to the presheaf
+$$
+(V \to Y) \longmapsto H_\etale^p(X \times_Y V, \mathcal{F}|_{X \times_Y V}).
+$$
+More generally, for $K \in D(X_\etale)$ we have that $R^pf_*K$ is the
+sheaf associated to the presheaf
+$$
+(V \to Y) \longmapsto H_\etale^p(X \times_Y V, K|_{X \times_Y V}).
+$$
+\end{lemma}
+
+\begin{proof}
+This lemma is valid for topological spaces, and the proof in this case is the
+same. See Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-higher-direct-images}
+for the case of a sheaf and see
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-sheafification-cohomology}
+for the case of a complex of abelian sheaves.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-colimit}
+Let $S$ be a scheme. Let $X = \lim_{i \in I} X_i$ be a limit of a
+directed system of schemes over $S$ with affine transition morphisms
+$f_{i'i} : X_{i'} \to X_i$. We assume the structure morphisms
+$g_i : X_i \to S$ and $g : X \to S$ are quasi-compact and quasi-separated.
+Let $(\mathcal{F}_i, \varphi_{i'i})$ be a system of abelian sheaves
+on $(X_i, f_{i'i})$. Denote $f_i : X \to X_i$ the projection and set
+$\mathcal{F} = \colim f_i^{-1}\mathcal{F}_i$. Then
+$$
+\colim_{i\in I} R^p g_{i, *} \mathcal{F}_i = R^p g_* \mathcal{F}
+$$
+for all $p \geq 0$.
+\end{lemma}
+
+\begin{proof}
+Recall (Lemma \ref{lemma-higher-direct-images})
+that $R^p g_{i, *} \mathcal{F}_i$ is the sheaf associated to the
+presheaf $U \mapsto H^p_\etale(U \times_S X_i, \mathcal{F}_i)$
+and similarly for $R^pg_*\mathcal{F}$. Moreover, the colimit of a
+system of sheaves is the sheafification of the colimit on the level
+of presheaves. Note that every object of $S_\etale$ has a covering
+by quasi-compact and quasi-separated objects (e.g., affine schemes).
+Moreover, if $U$ is a quasi-compact and quasi-separated object,
+then we have
+$$
+\colim H^p_\etale(U \times_S X_i, \mathcal{F}_i) =
+H^p_\etale(U \times_S X, \mathcal{F})
+$$
+by Theorem \ref{theorem-colimit}. Thus the lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-colimit-general}
+Let $I$ be a directed set. Let $g_i : X_i \to S_i$ be an inverse system of
+morphisms of schemes over $I$. Assume $g_i$ is quasi-compact and
+quasi-separated and for $i' \geq i$ the transition morphisms
+$f_{i'i} : X_{i'} \to X_i$ and $h_{i'i} : S_{i'} \to S_i$ are affine.
+Let $g : X \to S$ be the limit of the morphisms $g_i$, see
+Limits, Section \ref{limits-section-limits}.
+Denote $f_i : X \to X_i$ and $h_i : S \to S_i$ the projections.
+Let $(\mathcal{F}_i, \varphi_{i'i})$ be a system of sheaves
+on $(X_i, f_{i'i})$. Set $\mathcal{F} = \colim f_i^{-1}\mathcal{F}_i$. Then
+$$
+R^p g_* \mathcal{F} =
+\colim_{i \in I} h_i^{-1}R^p g_{i, *} \mathcal{F}_i
+$$
+for all $p \geq 0$.
+\end{lemma}
+
+\begin{proof}
+How is the map of the lemma constructed?
+For $i' \geq i$ we have a commutative diagram
+$$
+\xymatrix{
+X \ar[r]_{f_{i'}} \ar[d]_g &
+X_{i'} \ar[r]_{f_{i'i}} \ar[d]_{g_{i'}} &
+X_i \ar[d]^{g_i} \\
+S \ar[r]^{h_{i'}} &
+S_{i'} \ar[r]^{h_{i'i}} &
+S_i
+}
+$$
+If we combine the base change map
+$h_{i'i}^{-1}Rg_{i, *}\mathcal{F}_i \to Rg_{i', *}f_{i'i}^{-1}\mathcal{F}_i$
+(Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-base-change-map-flat-case} or
+Remark \ref{sites-cohomology-remark-base-change})
+with the map $Rg_{i', *}\varphi_{i'i}$, then we obtain
+$\psi_{i'i} : h_{i' i}^{-1} R^p g_{i, *} \mathcal{F}_i \to
+R^pg_{i', *} \mathcal{F}_{i'}$. Similarly, using the left square
+in the diagram we obtain maps
+$\psi_i : h_i^{-1}R^pg_{i, *}\mathcal{F}_i \to R^pg_*\mathcal{F}$.
+The maps $h_{i'}^{-1}\psi_{i'i}$ and $\psi_i$ are the maps used in
+the statement of the lemma. For this to make sense, we have to check that
+$\psi_{i''i} = \psi_{i''i'} \circ h_{i''i'}^{-1}\psi_{i'i}$ and
+$\psi_{i'} \circ h_{i'}^{-1}\psi_{i'i} = \psi_i$; this follows
+from Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-compose-base-change-horizontal}.
+
+\medskip\noindent
+Proof of the equality. First proof using
+dimension shifting\footnote{You can also use this method
+to produce the maps in the lemma.}. For any
+$U$ affine and \'etale over $S$ by Theorem \ref{theorem-colimit}
+we have
+$$
+g_*\mathcal{F}(U) =
+H^0(U \times_S X, \mathcal{F}) =
+\colim H^0(U_i \times_{S_i} X_i, \mathcal{F}_i) =
+\colim g_{i, *}\mathcal{F}_i(U_i)
+$$
+where the colimit is over those $i$ large enough such that
+there exists a $U_i$ affine and \'etale over $S_i$
+whose base change to $S$ is $U$ (see
+Lemma \ref{lemma-colimit-affine-sites}).
+The right hand side is equal to
+$(\colim h_i^{-1}g_{i, *}\mathcal{F}_i)(U)$ by
+Sites, Lemma \ref{sites-lemma-colimit}.
+This proves the lemma for $p = 0$.
+If $(\mathcal{G}_i, \varphi_{i'i})$ is a system with
+$\mathcal{G} = \colim f_i^{-1}\mathcal{G}_i$
+such that $\mathcal{G}_i$ is an injective abelian sheaf on $X_i$
+for all $i$, then for any $U$ affine and \'etale over $S$ by
+Theorem \ref{theorem-colimit} we have
+$$
+H^p(U \times_S X, \mathcal{G}) =
+\colim H^p(U_i \times_{S_i} X_i, \mathcal{G}_i) = 0
+$$
+for $p > 0$ (same colimit as before). Hence $R^pg_*\mathcal{G} = 0$
+and we get the result for $p > 0$ for such a system.
+In general we may choose a short exact sequence of systems
+$$
+0 \to (\mathcal{F}_i, \varphi_{i'i}) \to
+(\mathcal{G}_i, \varphi_{i'i}) \to
+(\mathcal{Q}_i, \varphi_{i'i}) \to 0
+$$
+where $(\mathcal{G}_i, \varphi_{i'i})$ is as above, see
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-colim-sites-injective}.
+By induction the lemma holds for $p - 1$ and by the above we have
+vanishing for $p$ and $(\mathcal{G}_i, \varphi_{i'i})$.
+Hence the result for $p$
+and $(\mathcal{F}_i, \varphi_{i'i})$ by the long exact sequence
+of cohomology.
+
+\medskip\noindent
+Second proof.
+Recall that $S_{affine, \etale} = \colim (S_i)_{affine, \etale}$, see
+Lemma \ref{lemma-colimit-affine-sites}. Thus if $U$ is an object of
+$S_{affine, \etale}$, then we can write $U = U_i \times_{S_i} S$
+for some $i$ and some $U_i$ in $(S_i)_{affine, \etale}$ and
+$$
+(\colim_{i \in I} h_i^{-1}R^p g_{i, *} \mathcal{F}_i)(U) =
+\colim_{i' \geq i} (R^p g_{i', *}\mathcal{F}_{i'})(U_i \times_{S_i} S_{i'})
+$$
+by Sites, Lemma \ref{sites-lemma-colimit} and the construction of the
+transition maps in the system described above. Since
+$R^pg_{i', *}\mathcal{F}_{i'}$ is the sheaf associated to the presheaf
+$U_{i'} \mapsto H^p(U_{i'} \times_{S_{i'}} X_{i'}, \mathcal{F}_{i'})$
+and since $R^pg_*\mathcal{F}$ is the sheaf associated to the presheaf
+$U \mapsto H^p(U \times_S X, \mathcal{F})$
+(Lemma \ref{lemma-higher-direct-images})
+we obtain a canonical commutative diagram
+$$
+\xymatrix{
+\colim_{i' \geq i}
+H^p(U_i \times_{S_i} X_{i'}, \mathcal{F}_{i'}) \ar[r] \ar[d] &
+\colim_{i' \geq i}
+(R^p g_{i', *}\mathcal{F}_{i'})(U_i \times_{S_i} S_{i'}) \ar[d] \\
+H^p(U \times_S X, \mathcal{F}) \ar[r] &
+R^pg_*\mathcal{F}(U)
+}
+$$
+Observe that the left hand vertical arrow is an isomorphism
+by Theorem \ref{theorem-colimit}. We're trying to show that
+the right hand vertical arrow is an isomorphism. However, we
+already know that the source and target of this arrow
+are sheaves on $S_{affine, \etale}$. Hence it suffices to
+show: (1) an element in the target, locally comes from an
+element in the source and (2) an element in the source
+which maps to zero in the target locally vanishes.
+Part (1) follows immediately from the above and the fact that
+the lower horizontal arrow comes from a map of presheaves
+which becomes an isomorphism after sheafification.
+For part (2), say $\xi \in \colim_{i' \geq i}
+(R^p g_{i', *}\mathcal{F}_{i'})(U_i \times_{S_i} S_{i'})$
+is in the kernel. Choose an $i' \geq i$ and
+$\xi_{i'} \in (R^p g_{i', *}\mathcal{F}_{i'})(U_i \times_{S_i} S_{i'})$
+representing $\xi$.
+Choose a standard \'etale covering
+$\{U_{i', k} \to U_i \times_{S_i} S_{i'}\}_{k = 1, \ldots, m}$
+such that $\xi_{i'}|_{U_{i', k}}$ comes from
+$\xi_{i', k} \in H^p(U_{i', k} \times_{S_{i'}} X_{i'}, \mathcal{F}_{i'})$.
+Since it is enough to prove that $\xi$ dies locally, we
+may replace $U$ by the members of the \'etale
+covering $\{U_{i', k} \times_{S_{i'}} S \to U = U_i \times_{S_i} S\}$.
+After this replacement we see that $\xi$ is the image of
+an element $\xi'$ of the group
+$\colim_{i' \geq i} H^p(U_i \times_{S_i} X_{i'}, \mathcal{F}_{i'})$
+in the diagram above. Since $\xi'$ maps to zero in $R^pg_*\mathcal{F}(U)$
+we can do another replacement and assume that $\xi'$ maps
+to zero in $H^p(U \times_S X, \mathcal{F})$.
+However, since the left vertical arrow is an isomorphism
+we then conclude $\xi' = 0$ hence $\xi = 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-linus-hamann}
+Let $X = \lim_{i \in I} X_i$ be a directed limit of schemes
+with affine transition morphisms $f_{i'i}$ and projection morphisms
+$f_i : X \to X_i$. Let $\mathcal{F}$ be a sheaf on $X_\etale$. Then
+\begin{enumerate}
+\item there are canonical maps
+$\varphi_{i'i} : f_{i'i}^{-1}f_{i, *}\mathcal{F} \to f_{i', *}\mathcal{F}$
+such that $(f_{i, *}\mathcal{F}, \varphi_{i'i})$ is a system of
+sheaves on $(X_i, f_{i'i})$ as in
+Definition \ref{definition-inverse-system-sheaves}, and
+\item $\mathcal{F} = \colim f_i^{-1}f_{i, *}\mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Via Topologies, Lemma \ref{topologies-lemma-alternative} and
+Lemma \ref{lemma-colimit-affine-sites} this is a special case of
+Sites, Lemma \ref{sites-lemma-colimit-push-pull}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compute-strangely}
+Let $I$ be a directed set. Let $g_i : X_i \to S_i$ be an inverse system of
+morphisms of schemes over $I$. Assume $g_i$ is quasi-compact and
+quasi-separated and for $i' \geq i$ the transition morphisms
+$X_{i'} \to X_i$ and $S_{i'} \to S_i$ are affine.
+Let $g : X \to S$ be the limit of the morphisms $g_i$, see
+Limits, Section \ref{limits-section-limits}.
+Denote $f_i : X \to X_i$ and $h_i : S \to S_i$ the projections.
+Let $\mathcal{F}$ be an abelian sheaf on $X$. Then we have
+$$
+R^pg_*\mathcal{F} = \colim_{i \in I} h_i^{-1}R^pg_{i, *}(f_{i, *}\mathcal{F})
+$$
+\end{lemma}
+
+\begin{proof}
+Formal combination of Lemmas \ref{lemma-relative-colimit-general}
+and \ref{lemma-linus-hamann}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Colimits and complexes}
+\label{section-colimit-complexes}
+
+\noindent
+In this section we discuss taking cohomology of systems of complexes
+in various settings, continuing the discussion for sheaves started
+in Section \ref{section-colimit}.
+We strongly urge the reader not to read this section unless absolutely
+necessary.
+
+\begin{lemma}
+\label{lemma-colimit-variant-complexes}
+Let $X = \lim_{i \in I} X_i$ be a limit of a directed system of schemes
+with affine transition morphisms $f_{i'i} : X_{i'} \to X_i$. We assume
+that $X_i$ is quasi-compact and quasi-separated for all $i \in I$.
+Let $\mathcal{F}_i^\bullet$ be a complex of abelian sheaves on
+$X_{i, \etale}$. Let $\varphi_{i'i} : f_{i'i}^{-1}\mathcal{F}_i^\bullet \to
+\mathcal{F}_{i'}^\bullet$ be a map of complexes on $X_{i', \etale}$
+such that $\varphi_{i''i} = \varphi_{i''i'} \circ f_{i'' i'}^{-1}\varphi_{i'i}$
+whenever $i'' \geq i' \geq i$. Assume there is an integer $a$ such that
+$\mathcal{F}_i^n = 0$ for $n < a$ and all $i \in I$.
+Then we have
+$$
+H^p_\etale(X, \colim f_i^{-1}\mathcal{F}_i^\bullet) =
+\colim H^p_\etale(X_i, \mathcal{F}^\bullet_i)
+$$
+where $f_i : X \to X_i$ is the projection.
+\end{lemma}
+
+\begin{proof}
+This is a consequence of Theorem \ref{theorem-colimit}. Set
+$\mathcal{F}^\bullet = \colim f_i^{-1}\mathcal{F}_i^\bullet$.
+The theorem tells us that
+$$
+\colim_{i \in I} H_\etale^p(X_i, \mathcal{F}_i^n) =
+H_\etale^p(X, \mathcal{F}^n)
+$$
+for all $n, p \in \mathbf{Z}$. Let us use the spectral sequences
+$$
+E_{1, i}^{s, t} = H_\etale^t(X_i, \mathcal{F}_i^s) \Rightarrow
+H_\etale^{s + t}(X_i, \mathcal{F}_i^\bullet)
+$$
+and
+$$
+E_1^{s, t} = H_\etale^t(X, \mathcal{F}^s) \Rightarrow
+H_\etale^{s + t}(X, \mathcal{F}^\bullet)
+$$
+of Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}.
+Since $\mathcal{F}_i^n = 0$ for $n < a$ (with $a$ independent of $i$)
+we see that only a fixed finite number of terms $E_{1, i}^{s, t}$
+(independent of $i$) and $E_1^{s, t}$ contribute to
+$H^q_\etale(X_i, \mathcal{F}_i^\bullet)$ and
+$H^q_\etale(X, \mathcal{F}^\bullet)$ and $E_1^{s, t} = \colim E_{1, i}^{s, t}$.
+This implies what we want. Some details omitted.
+(There is an alternative argument using ``stupid'' truncations
+of complexes which avoids using spectral sequences.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-sum-bounded-below-cohomology}
+Let $X$ be a quasi-compact and quasi-separated scheme. Let
+$K_i \in D(X_\etale)$, $i \in I$ be a family of objects.
+Assume given $a \in \mathbf{Z}$ such that $H^n(K_i) = 0$ for $n < a$
+and $i \in I$. Then $R\Gamma(X, \bigoplus_i K_i) =
+\bigoplus_i R\Gamma(X, K_i)$.
+\end{lemma}
+
+\begin{proof}
+We have to show that $H^p(X, \bigoplus_i K_i) =
+\bigoplus_i H^p(X, K_i)$ for all $p \in \mathbf{Z}$.
+Choose complexes $\mathcal{F}_i^\bullet$ representing $K_i$
+such that $\mathcal{F}_i^n = 0$ for $n < a$.
+The direct sum of the complexes $\mathcal{F}_i^\bullet$
+represents the object $\bigoplus K_i$ by
+Injectives, Lemma \ref{injectives-lemma-derived-products}.
+Since $\bigoplus \mathcal{F}_i^\bullet$ is the filtered
+colimit of the finite direct sums, the result follows
+from Lemma \ref{lemma-colimit-variant-complexes}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-colimit-complexes}
+Let $S$ be a scheme. Let $X = \lim_{i \in I} X_i$ be a limit of a
+directed system of schemes over $S$ with affine transition morphisms
+$f_{i'i} : X_{i'} \to X_i$. We assume that $X_i$ is quasi-compact and
+quasi-separated for all $i \in I$. Let $K \in D^+(S_\etale)$. Then
+$$
+\colim_{i \in I} H_\etale^p(X_i, K|_{X_i}) = H_\etale^p(X, K|_X)
+$$
+for all $p \in \mathbf{Z}$ where $K|_{X_i}$ and $K|_X$
+are the pullbacks of $K$ to $X_i$ and $X$.
+\end{lemma}
+
+\begin{proof}
+We may represent $K$ by a bounded below complex $\mathcal{G}^\bullet$
+of abelian sheaves on $S_\etale$. Say $\mathcal{G}^n = 0$ for $n < a$.
+Denote $\mathcal{F}^\bullet_i$ and $\mathcal{F}^\bullet$ the pullbacks
+of this complex to $X_i$ and $X$. These complexes represent the
+objects $K|_{X_i}$ and $K|_X$ and we have
+$\mathcal{F}^\bullet = \colim f_i^{-1}\mathcal{F}_i^\bullet$
+termwise. Hence the lemma follows from
+Lemma \ref{lemma-colimit-variant-complexes}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-colimit-general-complexes}
+Let $I$, $g_i : X_i \to S_i$, $g : X \to S$, $f_i$, $h_i$ be as in
+Lemma \ref{lemma-relative-colimit-general}.
+Let $0 \in I$ and $K_0 \in D^+(X_{0, \etale})$.
+For $i \geq 0$ denote $K_i$ the pullback of $K_0$ to $X_i$.
+Denote $K$ the pullback of $K_0$ to $X$. Then
+$$
+R^pg_*K = \colim_{i \geq 0} h_i^{-1}R^pg_{i, *}K_i
+$$
+for all $p \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+Fix an integer $p_0 \in \mathbf{Z}$.
+Let $a$ be an integer such that $H^j(K_0) = 0$ for $j < a$.
+We will prove the formula holds for all $p \leq p_0$
+by descending induction on $a$. If
+$a > p_0$, then we see that the left and right hand side of
+the formula are zero for $p \leq p_0$ by trivial vanishing, see
+Derived Categories, Lemma \ref{derived-lemma-negative-vanishing}.
+Assume $a \leq p_0$. Consider the distinguished triangle
+$$
+H^a(K_0)[-a] \to K_0 \to \tau_{\geq a + 1}K_0
+$$
+Pulling back this distinguished triangle to $X_i$ and $X$
+gives compatible distinguished triangles for $K_i$ and $K$.
+For $p \leq p_0$ we consider the commutative diagram
+$$
+\xymatrix{
+\colim_{i \geq 0} h_i^{-1}R^{p - 1}g_{i, *}(\tau_{\geq a + 1}K_i)
+\ar[r]_-\alpha \ar[d] &
+R^{p - 1}g_*(\tau_{\geq a + 1}K) \ar[d] \\
+\colim_{i \geq 0} h_i^{-1}R^pg_{i, *}(H^a(K_i)[-a])
+\ar[r]_-\beta \ar[d] &
+R^pg_*(H^a(K)[-a]) \ar[d] \\
+\colim_{i \geq 0} h_i^{-1}R^pg_{i, *}K_i
+\ar[r]_-\gamma \ar[d] &
+R^pg_*K \ar[d] \\
+\colim_{i \geq 0} h_i^{-1}R^pg_{i, *}(\tau_{\geq a + 1}K_i)
+\ar[r]_-\delta \ar[d] &
+R^pg_*\tau_{\geq a + 1}K \ar[d] \\
+\colim_{i \geq 0} h_i^{-1}R^{p + 1}g_{i, *}(H^a(K_i)[-a])
+\ar[r]^-\epsilon &
+R^{p + 1}g_*(H^a(K)[-a])
+}
+$$
+with exact columns. The arrows $\beta$ and $\epsilon$ are isomorphisms by
+Lemma \ref{lemma-relative-colimit-general}.
+The arrows $\alpha$ and $\delta$ are isomorphisms
+by induction hypothesis.
+Hence $\gamma$ is an isomorphism as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-colimit-general-really-complexes}
+Let $I$, $g_i : X_i \to S_i$, $g : X \to S$, $f_{i'i}$, $f_i$, $h_i$
+be as in Lemma \ref{lemma-relative-colimit-general}.
+Let $\mathcal{F}_i^\bullet$ be a complex of abelian sheaves on
+$X_{i, \etale}$. Let $\varphi_{i'i} : f_{i'i}^{-1}\mathcal{F}_i^\bullet \to
+\mathcal{F}_{i'}^\bullet$ be a map of complexes on $X_{i, \etale}$
+such that $\varphi_{i''i} = \varphi_{i''i'} \circ f_{i'' i'}^{-1}\varphi_{i'i}$
+whenever $i'' \geq i' \geq i$. Assume there is an integer $a$ such that
+$\mathcal{F}_i^n = 0$ for $n < a$ and all $i \in I$. Then
+$$
+R^pg_*(\colim f_i^{-1}\mathcal{F}_i^\bullet) =
+\colim_{i \geq 0} h_i^{-1}R^pg_{i, *}\mathcal{F}_i^\bullet
+$$
+for all $p \in \mathbf{Z}$.
+\end{lemma}
+
+\begin{proof}
+This is a consequence of Lemma \ref{lemma-relative-colimit-general}. Set
+$\mathcal{F}^\bullet = \colim f_i^{-1}\mathcal{F}_i^\bullet$.
+The lemma tells us that
+$$
+\colim_{i \in I} h_i^{-1}R^pg_{i, *}\mathcal{F}_i^n = R^pg_*\mathcal{F}^n
+$$
+for all $n, p \in \mathbf{Z}$. Let us use the spectral sequences
+$$
+E_{1, i}^{s, t} = R^tg_{i, *}\mathcal{F}_i^s \Rightarrow
+R^{s + t}g_{i, *}\mathcal{F}_i^\bullet
+$$
+and
+$$
+E_1^{s, t} = R^tg_*\mathcal{F}^s \Rightarrow
+R^{s + t}g_*\mathcal{F}^\bullet
+$$
+of Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}.
+Since $\mathcal{F}_i^n = 0$ for $n < a$ (with $a$ independent of $i$)
+we see that only a fixed finite number of terms $E_{1, i}^{s, t}$
+(independent of $i$) and $E_1^{s, t}$ contribute and
+$E_1^{s, t} = \colim E_{1, i}^{s, t}$.
+This implies what we want. Some details omitted.
+(There is an alternative argument using ``stupid'' truncations
+of complexes which avoids using spectral sequences.)
+\end{proof}
+
+\begin{lemma}
+\label{lemma-direct-sum-bounded-below-pushforward}
+Let $f : X \to Y$ be a quasi-compact and quasi-separated morphism of
+schemes. Let $K_i \in D(X_\etale)$, $i \in I$ be a family of objects.
+Assume given $a \in \mathbf{Z}$ such that $H^n(K_i) = 0$ for $n < a$
+and $i \in I$. Then $Rf_*(\bigoplus_i K_i) = \bigoplus_i Rf_*K_i$.
+\end{lemma}
+
+\begin{proof}
+We have to show that $R^pf_*(\bigoplus_i K_i) =
+\bigoplus_i R^pf_*K_i$ for all $p \in \mathbf{Z}$.
+Choose complexes $\mathcal{F}_i^\bullet$ representing $K_i$
+such that $\mathcal{F}_i^n = 0$ for $n < a$.
+The direct sum of the complexes $\mathcal{F}_i^\bullet$
+represents the object $\bigoplus K_i$ by
+Injectives, Lemma \ref{injectives-lemma-derived-products}.
+Since $\bigoplus \mathcal{F}_i^\bullet$ is the filtered
+colimit of the finite direct sums, the result follows
+from Lemma \ref{lemma-relative-colimit-general-really-complexes}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Stalks of higher direct images}
+\label{section-stalks-direct-image}
+
+\noindent
+The stalks of higher direct images can often be computed as follows.
+
+\begin{theorem}
+\label{theorem-higher-direct-images}
+Let $f: X \to S$ be a quasi-compact and quasi-separated morphism of schemes,
+$\mathcal{F}$ an abelian sheaf on $X_\etale$, and $\overline{s}$ a
+geometric point of $S$ lying over $s \in S$. Then
+$$
+\left(R^nf_* \mathcal{F}\right)_{\overline{s}} =
+H_\etale^n( X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}),
+p^{-1}\mathcal{F})
+$$
+where $p : X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}) \to X$
+is the projection. For $K \in D^+(X_\etale)$ and $n \in \mathbf{Z}$
+we have
+$$
+\left(R^nf_*K\right)_{\overline{s}} =
+H_\etale^n(X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}), p^{-1}K)
+$$
+In fact, we have
+$$
+\left(Rf_*K\right)_{\overline{s}}
+=
+R\Gamma_\etale(X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}), p^{-1}K)
+$$
+in $D^+(\textit{Ab})$.
+\end{theorem}
+
+\begin{proof}
+Let $\mathcal{I}$ be the category of \'etale neighborhoods of $\overline{s}$
+on $S$. By Lemma \ref{lemma-higher-direct-images}
+we have
+$$
+(R^nf_*\mathcal{F})_{\overline{s}} =
+\colim_{(V, \overline{v}) \in \mathcal{I}^{opp}}
+H_\etale^n(X \times_S V, \mathcal{F}|_{X \times_S V}).
+$$
+We may replace $\mathcal{I}$ by the initial subcategory consisting
+of affine \'etale neighbourhoods of $\overline{s}$. Observe that
+$$
+\Spec(\mathcal{O}_{S, \overline{s}}^{sh}) =
+\lim_{(V, \overline{v}) \in \mathcal{I}} V
+$$
+by Lemma \ref{lemma-describe-etale-local-ring} and
+Limits, Lemma
+\ref{limits-lemma-directed-inverse-system-affine-schemes-has-limit}.
+Since fibre products commute with limits we also obtain
+$$
+X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}) =
+\lim_{(V, \overline{v}) \in \mathcal{I}} X \times_S V
+$$
+We conclude by Lemma \ref{lemma-directed-colimit-cohomology}.
+For the second variant, use the same argument using
+Lemma \ref{lemma-colimit-complexes} instead of
+Lemma \ref{lemma-directed-colimit-cohomology}.
+
+\medskip\noindent
+To see that the last statement is true, it suffices to produce a map
+$\left(Rf_*K\right)_{\overline{s}} \to
+R\Gamma_\etale(X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}), p^{-1}K)$
+in $D^+(\textit{Ab})$ which realizes the isomorphisms on cohomology
+groups in degree $n$ above for all $n$. To do this, choose a
+bounded below complex $\mathcal{J}^\bullet$ of injective abelian
+sheaves on $X_\etale$ representing $K$. The complex
+$f_*\mathcal{J}^\bullet$ represents $Rf_*K$. Thus the complex
+$$
+(f_*\mathcal{J}^\bullet)_{\overline{s}} =
+\colim_{(V, \overline{v}) \in \mathcal{I}^{opp}}
+(f_*\mathcal{J}^\bullet)(V)
+$$
+represents $(Rf_*K)_{\overline{s}}$. For each $V$ we have maps
+$$
+(f_*\mathcal{J}^\bullet)(V) =
+\Gamma(X \times_S V, \mathcal{J}^\bullet)
+\longrightarrow
+\Gamma(X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}),
+p^{-1}\mathcal{J}^\bullet)
+$$
+and the target complex represents
+$R\Gamma_\etale(X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}), p^{-1}K)$
+in $D^+(\textit{Ab})$.
+Taking the colimit of these maps we obtain the result.
+\end{proof}
+
+\begin{remark}
+\label{remark-stalk-fibre}
+Let $f : X \to S$ be a morphism of schemes.
+Let $K \in D(X_\etale)$.
+Let $\overline{s}$ be a geometric point of $S$.
+There are always canonical maps
+$$
+(Rf_*K)_{\overline{s}}
+\longrightarrow
+R\Gamma(X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}), p^{-1}K)
+\longrightarrow
+R\Gamma(X_{\overline{s}}, K|_{X_{\overline{s}}})
+$$
+where $p : X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}) \to X$
+is the projection. Namely, consider the commutative diagram
+$$
+\xymatrix{
+X_{\overline{s}} \ar[r] \ar[d]^{f_{\overline{s}}} &
+X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}) \ar[r]_-p \ar[d]^{f'} &
+X \ar[d]^f \\
+\overline{s} \ar[r]^-i &
+\Spec(\mathcal{O}_{S, \overline{s}}^{sh}) \ar[r]^-j &
+S
+}
+$$
+We have the base change maps
+$$
+i^{-1}Rf'_*(p^{-1}K) \to Rf_{\overline{s}, *}(K|_{X_{\overline{s}}})
+\quad\text{and}\quad
+j^{-1}Rf_*K \to Rf'_*(p^{-1}K)
+$$
+(Cohomology on Sites, Remark \ref{sites-cohomology-remark-base-change})
+for the two squares in this diagram.
+Taking global sections we obtain the desired maps.
+By Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-compose-base-change-horizontal}
+the composition of these two maps is the
+usual (base change) map
+$(Rf_*K)_{\overline{s}} \to R\Gamma(X_{\overline{s}}, K|_{X_{\overline{s}}})$.
+\end{remark}
+
+
+
+
+
+
+\section{The Leray spectral sequence}
+\label{section-leray}
+
+\begin{lemma}
+\label{lemma-prepare-leray}
+Let $f: X \to Y$ be a morphism and $\mathcal{I}$ an injective object of
+$\textit{Ab}(X_\etale)$. Let $V \in \Ob(Y_\etale)$. Then
+\begin{enumerate}
+\item for any covering $\mathcal{V} = \{V_j\to V\}_{j \in J}$ we have
+$\check H^p(\mathcal{V}, f_*\mathcal{I}) = 0$ for all $p > 0$,
+\item $f_*\mathcal{I}$ is acyclic for the functor $\Gamma(V, -)$, and
+\item if $g : Y \to Z$, then $f_*\mathcal{I}$ is acyclic for $g_*$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Observe that $\check{\mathcal{C}}^\bullet(\mathcal{V}, f_*\mathcal{I}) =
+\check{\mathcal{C}}^\bullet(\mathcal{V} \times_Y X, \mathcal{I})$
+which has vanishing higher cohomology groups by Lemma \ref{lemma-hom-injective}.
+This proves (1). The second statement follows as a sheaf which has
+vanishing higher {\v C}ech cohomology groups for any covering has vanishing
+higher cohomology groups. This is a wonderful exercise in using the
+{\v C}ech-to-cohomology spectral sequence, but see
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cech-vanish-collection}
+for details and a more precise and general statement.
+Part (3) is a consequence of (2) and the description of
+$R^pg_*$ in Lemma \ref{lemma-higher-direct-images}.
+\end{proof}
+
+\noindent
+Using the formalism of Grothendieck spectral sequences, this gives the
+following.
+
+\begin{proposition}[Leray spectral sequence]
+\label{proposition-leray}
+Let $f: X \to Y$ be a morphism of schemes and $\mathcal{F}$ an \'etale sheaf on
+$X$. Then there is a spectral sequence
+$$
+E_2^{p, q} = H_\etale^p(Y, R^qf_*\mathcal{F}) \Rightarrow
+H_\etale^{p+q}(X, \mathcal{F}).
+$$
+\end{proposition}
+
+\begin{proof}
+See Lemma \ref{lemma-prepare-leray} and see
+Derived Categories, Section
+\ref{derived-section-composition-right-derived-functors}.
+\end{proof}
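+
+\noindent
+As for any first quadrant spectral sequence, the low degree terms of the
+Leray spectral sequence give an exact sequence
+$$
+0 \to H_\etale^1(Y, f_*\mathcal{F}) \to H_\etale^1(X, \mathcal{F}) \to
+H_\etale^0(Y, R^1f_*\mathcal{F}) \to H_\etale^2(Y, f_*\mathcal{F}) \to
+H_\etale^2(X, \mathcal{F})
+$$
+which is often all one needs in practice.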
+
+
+
+
+
+
+
+
+
+\section{Vanishing of finite higher direct images}
+\label{section-vanishing-finite-morphism}
+
+\noindent
+The next goal is to prove that the higher direct images of a finite morphism of
+schemes vanish.
+
+\begin{lemma}
+\label{lemma-vanishing-etale-cohomology-strictly-henselian}
+Let $R$ be a strictly henselian local ring. Set $S = \Spec(R)$ and let
+$\overline{s}$ be its closed point. Then the global
+sections functor
+$\Gamma(S, -) : \textit{Ab}(S_\etale) \to \textit{Ab}$
+is exact. In fact we have $\Gamma(S, \mathcal{F}) = \mathcal{F}_{\overline{s}}$
+for any sheaf of sets $\mathcal{F}$. In particular
+$$
+\forall p\geq 1, \quad H_\etale^p(S, \mathcal{F})=0
+$$
+for all $\mathcal{F}\in \textit{Ab}(S_\etale)$.
+\end{lemma}
+
+\begin{proof}
+If we show that $\Gamma(S, \mathcal{F}) = \mathcal{F}_{\overline{s}}$
+then $\Gamma(S, -)$ is exact as the stalk functor is exact.
+Let $(U, \overline{u})$ be an \'etale neighbourhood of $\overline{s}$.
+Pick an affine open neighborhood $\Spec(A)$ of $\overline{u}$ in $U$.
+Then $R \to A$ is \'etale and $\kappa(\overline{s}) = \kappa(\overline{u})$.
+By Theorem \ref{theorem-henselian} we see that $A \cong R \times A'$
+as an $R$-algebra compatible with maps to
+$\kappa(\overline{s}) = \kappa(\overline{u})$.
+Hence we get a section
+$$
+\xymatrix{
+\Spec(A) \ar[r] & U \ar[d]\\
+& S \ar[ul]
+}
+$$
+It follows that in the system of \'etale neighbourhoods of $\overline{s}$
+the identity map $(S, \overline{s}) \to (S, \overline{s})$ is cofinal.
+Hence $\Gamma(S, \mathcal{F}) = \mathcal{F}_{\overline{s}}$.
+The final statement of the lemma follows as the higher derived
+functors of an exact functor are zero, see
+Derived Categories, Lemma \ref{derived-lemma-right-derived-exact-functor}.
+\end{proof}
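+
+\noindent
+For example, if $k$ is a separably closed field, then $k$ is strictly
+henselian and the lemma gives
+$$
+\Gamma(\Spec(k), \mathcal{F}) = \mathcal{F}_{\overline{s}}
+\quad\text{and}\quad
+H_\etale^p(\Spec(k), \mathcal{F}) = 0 \text{ for } p \geq 1
+$$
+where $\overline{s}$ is a geometric point lying over the unique point
+of $\Spec(k)$.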
+
+\begin{proposition}
+\label{proposition-finite-higher-direct-image-zero}
+Let $f : X \to Y$ be a finite morphism of schemes.
+\begin{enumerate}
+\item For any geometric point $\overline{y} : \Spec(k) \to Y$ we have
+$$
+(f_*\mathcal{F})_{\overline{y}} =
+\prod\nolimits_{\overline{x} : \Spec(k) \to X,\ f(\overline{x}) =
+\overline{y}} \mathcal{F}_{\overline{x}}.
+$$
+for $\mathcal{F}$ in $\Sh(X_\etale)$ and
+$$
+(f_*\mathcal{F})_{\overline{y}} =
+\bigoplus\nolimits_{\overline{x} : \Spec(k) \to X,\ f(\overline{x}) =
+\overline{y}} \mathcal{F}_{\overline{x}}.
+$$
+for $\mathcal{F}$ in $\textit{Ab}(X_\etale)$.
+\item For any $q \geq 1$ we have $R^q f_*\mathcal{F} = 0$
+for $\mathcal{F}$ in $\textit{Ab}(X_\etale)$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Let $X_{\overline{y}}^{sh}$ denote the fiber product
+$X \times_Y \Spec(\mathcal{O}_{Y, \overline{y}}^{sh})$.
+By Theorem \ref{theorem-higher-direct-images}
+the stalk of $R^qf_*\mathcal{F}$ at $\overline{y}$ is computed by
+$H_\etale^q(X_{\overline{y}}^{sh}, \mathcal{F})$.
+Since $f$ is finite, $X_{\overline{y}}^{sh}$ is finite over
+$\Spec(\mathcal{O}_{Y, \overline{y}}^{sh})$, thus
+$X_{\overline{y}}^{sh} = \Spec(A)$ for some ring $A$
+finite over $\mathcal{O}_{Y, \overline{y}}^{sh}$.
+Since the latter is strictly henselian,
+Lemma \ref{lemma-finite-over-henselian}
+implies that $A$ is a finite product of henselian local rings
+$A = A_1 \times \ldots \times A_r$. Since the residue field of
+$\mathcal{O}_{Y, \overline{y}}^{sh}$ is separably closed the
+same is true for each $A_i$. Hence $A_i$ is strictly henselian.
+This implies that $X_{\overline{y}}^{sh} = \coprod_{i = 1}^r \Spec(A_i)$.
+The vanishing of
+Lemma \ref{lemma-vanishing-etale-cohomology-strictly-henselian}
+implies that $(R^qf_*\mathcal{F})_{\overline{y}} = 0$ for $q > 0$
+which implies (2) by Theorem \ref{theorem-exactness-stalks}.
+Part (1) follows from the corresponding statement of
+Lemma \ref{lemma-vanishing-etale-cohomology-strictly-henselian}.
+\end{proof}
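+
+\noindent
+Combined with the Leray spectral sequence
+(Proposition \ref{proposition-leray}), part (2) shows that for a finite
+morphism $f : X \to Y$ the spectral sequence degenerates and
+$$
+H_\etale^q(Y, f_*\mathcal{F}) = H_\etale^q(X, \mathcal{F})
+$$
+for all $q \geq 0$ and all $\mathcal{F}$ in $\textit{Ab}(X_\etale)$.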
+
+\begin{lemma}
+\label{lemma-finite-pushforward-commutes-with-base-change}
+Consider a cartesian square
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+of schemes with $f$ a finite morphism.
+For any sheaf of sets $\mathcal{F}$ on $X_\etale$ we have
+$f'_*(g')^{-1}\mathcal{F} = g^{-1}f_*\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+In great generality there is a pullback map
+$g^{-1}f_*\mathcal{F} \to f'_*(g')^{-1}\mathcal{F}$, see
+Sites, Section \ref{sites-section-pullback}.
+It suffices to check on stalks (Theorem \ref{theorem-exactness-stalks}).
+Let $\overline{y}' : \Spec(k) \to Y'$ be a geometric point.
+We have
+\begin{align*}
+(f'_*(g')^{-1}\mathcal{F})_{\overline{y}'}
+& =
+\prod\nolimits_{\overline{x}' : \Spec(k) \to X',\ f' \circ \overline{x}' =
+\overline{y}'}
+((g')^{-1}\mathcal{F})_{\overline{x}'} \\
+& =
+\prod\nolimits_{\overline{x}' : \Spec(k) \to X',\ f' \circ \overline{x}' =
+\overline{y}'} \mathcal{F}_{g' \circ \overline{x}'} \\
+& =
+\prod\nolimits_{\overline{x} : \Spec(k) \to X,\ f \circ \overline{x} =
+g \circ \overline{y}'} \mathcal{F}_{\overline{x}} \\
+& =
+(f_*\mathcal{F})_{g \circ \overline{y}'} \\
+& =
+(g^{-1}f_*\mathcal{F})_{\overline{y}'}
+\end{align*}
+The first equality by
+Proposition \ref{proposition-finite-higher-direct-image-zero}.
+The second equality by
+Lemma \ref{lemma-stalk-pullback}.
+The third equality holds because the diagram is a cartesian square
+and hence the map
+$$
+\{\overline{x}' : \Spec(k) \to X',\ f' \circ \overline{x}' =
+\overline{y}'\}
+\longrightarrow
+\{\overline{x} : \Spec(k) \to X,\ f \circ \overline{x} =
+g \circ \overline{y}'\}
+$$
+sending $\overline{x}'$ to $g' \circ \overline{x}'$ is a bijection.
+The fourth equality by
+Proposition \ref{proposition-finite-higher-direct-image-zero}.
+The fifth equality by
+Lemma \ref{lemma-stalk-pullback}.
+\end{proof}
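+
+\noindent
+For instance, if $Y' = \Spec(k)$ is a geometric point $\overline{y}$
+of $Y$, then taking global sections of the equality
+$f'_*(g')^{-1}\mathcal{F} = g^{-1}f_*\mathcal{F}$ recovers the formula
+$$
+(f_*\mathcal{F})_{\overline{y}} =
+\prod\nolimits_{\overline{x} : \Spec(k) \to X,\ f(\overline{x}) =
+\overline{y}} \mathcal{F}_{\overline{x}}
+$$
+of Proposition \ref{proposition-finite-higher-direct-image-zero}.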
+
+\begin{lemma}
+\label{lemma-integral-pushforward-commutes-with-base-change}
+Consider a cartesian square
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+of schemes with $f$ an integral morphism.
+For any sheaf of sets $\mathcal{F}$ on $X_\etale$ we have
+$f'_*(g')^{-1}\mathcal{F} = g^{-1}f_*\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+The question is local on $Y$ and hence we may assume $Y$ is affine.
+Then we can write $X = \lim X_i$ with $f_i : X_i \to Y$ finite
+(this is easy in the affine case, but see
+Limits, Lemma \ref{limits-lemma-integral-limit-finite-and-finite-presentation}
+for a reference). Denote $p_{i'i} : X_{i'} \to X_i$
+the transition morphisms and $p_i : X \to X_i$ the projections.
+Setting $\mathcal{F}_i = p_{i, *}\mathcal{F}$ we obtain
+from Lemma \ref{lemma-linus-hamann}
+a system $(\mathcal{F}_i, \varphi_{i'i})$
+with $\mathcal{F} = \colim p_i^{-1}\mathcal{F}_i$.
+We get $f_*\mathcal{F} = \colim f_{i, *}\mathcal{F}_i$
+from Lemma \ref{lemma-relative-colimit}.
+Set $X'_i = Y' \times_Y X_i$ with projections $f'_i$ and $g'_i$.
+Then $X' = \lim X'_i$ as limits commute with limits.
+Denote $p'_i : X' \to X'_i$ the projections. We have
+\begin{align*}
+g^{-1}f_*\mathcal{F}
+& =
+g^{-1} \colim f_{i, *}\mathcal{F}_i \\
+& =
+\colim g^{-1}f_{i, *}\mathcal{F}_i \\
+& =
+\colim f'_{i, *}(g'_i)^{-1}\mathcal{F}_i \\
+& =
+f'_*(\colim (p'_i)^{-1}(g'_i)^{-1}\mathcal{F}_i) \\
+& =
+f'_*(\colim (g')^{-1}p_i^{-1}\mathcal{F}_i) \\
+& =
+f'_*(g')^{-1} \colim p_i^{-1}\mathcal{F}_i \\
+& =
+f'_*(g')^{-1}\mathcal{F}
+\end{align*}
+as desired. For the first equality see above.
+For the second use that pullback commutes with colimits.
+For the third use the finite case, see
+Lemma \ref{lemma-finite-pushforward-commutes-with-base-change}.
+For the fourth use Lemma \ref{lemma-relative-colimit}.
+For the fifth use that $g'_i \circ p'_i = p_i \circ g'$.
+For the sixth use that pullback commutes with colimits.
+For the seventh use $\mathcal{F} = \colim p_i^{-1}\mathcal{F}_i$.
+\end{proof}
+
+\noindent
+The following lemma is a case of cohomological descent dealing with
+\'etale sheaves and finite surjective morphisms. We will significantly
+generalize this result once we prove the proper base change theorem.
+
+\begin{lemma}
+\label{lemma-cohomological-descent-finite}
+Let $f : X \to Y$ be a surjective finite morphism of schemes.
+Set $f_n : X_n \to Y$ equal to the $(n + 1)$-fold fibre product
+of $X$ over $Y$. For $\mathcal{F} \in \textit{Ab}(Y_\etale)$ set
+$\mathcal{F}_n = f_{n, *}f_n^{-1}\mathcal{F}$. There is an exact
+sequence
+$$
+0 \to \mathcal{F} \to \mathcal{F}_0 \to \mathcal{F}_1 \to
+\mathcal{F}_2 \to \ldots
+$$
+on $Y_\etale$. Moreover, there is a spectral sequence
+$$
+E_1^{p, q} = H^q_\etale(X_p, f_p^{-1}\mathcal{F})
+$$
+converging to $H^{p + q}(Y_\etale, \mathcal{F})$.
+This spectral sequence is functorial in $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+If we prove the first statement of the lemma, then we obtain a spectral
+sequence with $E_1^{p, q} = H^q_\etale(Y, \mathcal{F}_p)$ converging
+to $H^{p + q}(Y_\etale, \mathcal{F})$, see
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}.
+On the other hand, since
+$R^if_{p, *}f_p^{-1}\mathcal{F} = 0$ for $i > 0$
+(Proposition \ref{proposition-finite-higher-direct-image-zero})
+we get
+$$
+H^q_\etale(X_p, f_p^{-1}\mathcal{F}) =
+H^q_\etale(Y, f_{p, *}f_p^{-1} \mathcal{F}) =
+H^q_\etale(Y, \mathcal{F}_p)
+$$
+by Proposition \ref{proposition-leray}
+and we get the spectral sequence of the lemma.
+
+\medskip\noindent
+To prove the first statement of the lemma, observe that
+$X_n$ forms a simplicial scheme over $Y$, see
+Simplicial, Example \ref{simplicial-example-fibre-products-simplicial-object}.
+Observe moreover, that for each of the projections
+$d_j : X_{n + 1} \to X_n$ there is a map
+$d_j^{-1} f_n^{-1}\mathcal{F} \to f_{n + 1}^{-1}\mathcal{F}$.
+These maps induce maps
+$$
+\delta_j : \mathcal{F}_n \to \mathcal{F}_{n + 1}
+$$
+for $j = 0, \ldots, n + 1$. We use the alternating sum of these maps
+to define the differentials $\mathcal{F}_n \to \mathcal{F}_{n + 1}$.
+Similarly, there is a canonical augmentation $\mathcal{F} \to \mathcal{F}_0$,
+namely this is just the canonical map $\mathcal{F} \to f_*f^{-1}\mathcal{F}$.
+To check that this sequence of sheaves is an exact complex it suffices
+to check on stalks at geometric points (Theorem \ref{theorem-exactness-stalks}).
+Thus we let $\overline{y} : \Spec(k) \to Y$ be a geometric point. Let
+$E = \{\overline{x} : \Spec(k) \to X \mid f \circ \overline{x} = \overline{y}\}$.
+Then $E$ is a finite nonempty set and we see that
+$$
+(\mathcal{F}_n)_{\overline{y}} =
+\bigoplus\nolimits_{e \in E^{n + 1}} \mathcal{F}_{\overline{y}}
+$$
+by Proposition \ref{proposition-finite-higher-direct-image-zero}
+and Lemma \ref{lemma-stalk-pullback}.
+Thus we have to see that given an abelian group $M$ the sequence
+$$
+0 \to M \to \bigoplus\nolimits_{e \in E} M \to
+\bigoplus\nolimits_{e \in E^2} M \to \ldots
+$$
+is exact. Here the first map is the diagonal map and the map
+$\bigoplus_{e \in E^{n + 1}} M \to \bigoplus_{e \in E^{n + 2}} M$
+is the alternating sum of the maps induced by the $(n + 2)$
+projections $E^{n + 2} \to E^{n + 1}$. This can be shown directly
+or deduced by applying Simplicial, Lemma
+\ref{simplicial-lemma-fibre-products-simplicial-object-w-section}
+to the map $E \to \{*\}$.
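+One way to see this directly is via a contracting homotopy.
+Identify $\bigoplus_{e \in E^{n + 1}} M$ with the group
+$\text{Map}(E^{n + 1}, M)$ of $M$-valued functions on $E^{n + 1}$
+(possible as $E$ is finite), fix an element $e^* \in E$, and define
+$$
+h : \text{Map}(E^{n + 1}, M) \longrightarrow \text{Map}(E^n, M),
+\quad
+(hf)(e_1, \ldots, e_n) = f(e^*, e_1, \ldots, e_n)
+$$
+with the convention that in degree $0$ the map $h$ sends
+$f \in \text{Map}(E, M)$ to $f(e^*) \in M$.
+A direct computation (with a suitable choice of signs in the
+alternating sums) shows $\text{d} \circ h + h \circ \text{d} = \text{id}$,
+so the complex is exact.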
+\end{proof}
+
+\begin{remark}
+\label{remark-cohomological-descent-finite}
+In the situation of Lemma \ref{lemma-cohomological-descent-finite}
+if $\mathcal{G}$ is a sheaf of sets on $Y_\etale$, then we have
+$$
+\Gamma(Y, \mathcal{G}) =
+\text{Equalizer}(
+\xymatrix{
+\Gamma(X_0, f_0^{-1}\mathcal{G})
+\ar@<1ex>[r] \ar@<-1ex>[r] &
+\Gamma(X_1, f_1^{-1}\mathcal{G})
+}
+)
+$$
+This is proved in exactly the same way, by showing that
+the sheaf $\mathcal{G}$ is the equalizer of the two maps
+$f_{0, *}f_0^{-1}\mathcal{G} \to f_{1, *}f_1^{-1}\mathcal{G}$.
+\end{remark}
+
+
+
+
+
+
+
+
+
+%10.06.09
+\section{Galois action on stalks}
+\label{section-galois-action-stalks}
+
+\noindent
+In this section we define an action of the absolute Galois group of a residue
+field of a point $s$ of $S$ on the stalk functor at any geometric point lying
+over $s$.
+
+\medskip\noindent
+We define the Galois action on stalks as follows.
+Let $S$ be a scheme.
+Let $\overline{s}$ be a geometric point of $S$.
+Let $\sigma \in \text{Aut}(\kappa(\overline{s})/\kappa(s))$.
+Define an action of $\sigma$ on the stalk $\mathcal{F}_{\overline{s}}$
+of a sheaf $\mathcal{F}$ as follows
+\begin{equation}
+\label{equation-galois-action}
+\begin{matrix}
+\mathcal{F}_{\overline{s}} &
+\longrightarrow &
+\mathcal{F}_{\overline{s}} \\
+(U, \overline{u}, t) &
+\longmapsto &
+(U, \overline{u} \circ \Spec(\sigma), t)
+\end{matrix}
+\end{equation}
+where we use the description of elements of the stalk in terms of triples
+as in the discussion following
+Definition \ref{definition-stalk}.
+This is a left action, since if
+$\sigma_i \in \text{Aut}(\kappa(\overline{s})/\kappa(s))$
+then
+\begin{align*}
+\sigma_1 \cdot (\sigma_2 \cdot (U, \overline{u}, t))
+& =
+\sigma_1 \cdot (U, \overline{u} \circ \Spec(\sigma_2), t) \\
+& =
+(U, \overline{u} \circ \Spec(\sigma_2) \circ \Spec(\sigma_1), t) \\
+& =
+(U, \overline{u} \circ \Spec(\sigma_1 \circ \sigma_2), t) \\
+& =
+(\sigma_1 \circ \sigma_2) \cdot (U, \overline{u}, t)
+\end{align*}
+It is clear that this action is functorial in the sheaf $\mathcal{F}$.
+We note that we could have defined this action by referring directly to
+Remark \ref{remark-map-stalks}.
+
+\begin{definition}
+\label{definition-algebraic-geometric-point}
+Let $S$ be a scheme.
+Let $\overline{s}$ be a geometric point lying over the point $s$ of $S$.
+Let $\kappa(s) \subset \kappa(s)^{sep} \subset \kappa(\overline{s})$
+denote the separable algebraic closure of $\kappa(s)$ in the algebraically
+closed field $\kappa(\overline{s})$.
+\begin{enumerate}
+\item In this situation the {\it absolute Galois group} of $\kappa(s)$
+is $\text{Gal}(\kappa(s)^{sep}/\kappa(s))$. It is sometimes denoted
+$\text{Gal}_{\kappa(s)}$.
+\item The geometric point $\overline{s}$ is called
+{\it algebraic} if $\kappa(s) \subset \kappa(\overline{s})$ is
+an algebraic closure of $\kappa(s)$.
+\end{enumerate}
+\end{definition}
+
+\begin{example}
+\label{example-stupid}
+The geometric point
+$\Spec(\mathbf{C}) \to \Spec(\mathbf{Q})$
+is not algebraic.
+\end{example}
+
+\noindent
+Let $\kappa(s) \subset \kappa(s)^{sep} \subset \kappa(\overline{s})$
+be as in the definition. Note that as $\kappa(\overline{s})$ is algebraically
+closed the map
+$$
+\text{Aut}(\kappa(\overline{s})/\kappa(s))
+\longrightarrow
+\text{Gal}(\kappa(s)^{sep}/\kappa(s)) = \text{Gal}_{\kappa(s)}
+$$
+is surjective. Suppose $(U, \overline{u})$ is an
+\'etale neighbourhood of $\overline{s}$, and say $\overline{u}$ lies over
+the point $u$ of $U$. Since $U \to S$ is \'etale, the residue field extension
+$\kappa(u)/\kappa(s)$ is finite separable.
+This implies the following:
+\begin{enumerate}
+\item If $\sigma \in \text{Aut}(\kappa(\overline{s})/\kappa(s)^{sep})$
+then $\sigma$ acts trivially on $\mathcal{F}_{\overline{s}}$.
+\item More precisely, the action of
+$\text{Aut}(\kappa(\overline{s})/\kappa(s))$
+determines and is determined by an action of the absolute Galois group
+$\text{Gal}_{\kappa(s)}$ on $\mathcal{F}_{\overline{s}}$.
+\item Given $(U, \overline{u}, t)$ representing an element $\xi$ of
+$\mathcal{F}_{\overline{s}}$ any element of
+$\text{Gal}(\kappa(s)^{sep}/K)$ acts trivially, where
+$\kappa(s) \subset K \subset \kappa(s)^{sep}$ is the image of
+$\overline{u}^\sharp : \kappa(u) \to \kappa(\overline{s})$.
+\end{enumerate}
+Altogether we see that $\mathcal{F}_{\overline{s}}$ becomes a
+$\text{Gal}_{\kappa(s)}$-set (see
+Fundamental Groups, Definition \ref{pione-definition-G-set-continuous}).
+Hence we may think of the stalk functor as a functor
+$$
+\Sh(S_\etale) \longrightarrow
+\text{Gal}_{\kappa(s)}\textit{-Sets},
+\quad
+\mathcal{F} \longmapsto \mathcal{F}_{\overline{s}}
+$$
+and from now on we usually do think about the stalk functor in this way.
+
+\begin{theorem}
+\label{theorem-equivalence-sheaves-point}
+Let $S = \Spec(K)$ with $K$ a field.
+Let $\overline{s}$ be a geometric point of $S$.
+Let $G = \text{Gal}_{\kappa(s)}$ denote the absolute Galois group.
+Taking stalks induces an equivalence of categories
+$$
+\Sh(S_\etale) \longrightarrow G\textit{-Sets},
+\quad
+\mathcal{F} \longmapsto \mathcal{F}_{\overline{s}}.
+$$
+\end{theorem}
+
+\begin{proof}
+Let us construct the inverse to this functor. In
+Fundamental Groups, Lemma \ref{pione-lemma-sheaves-point}
+we have seen that given a $G$-set $M$ there exists an \'etale morphism
+$X \to \Spec(K)$
+such that $\Mor_K(\Spec(K^{sep}), X)$ is
+isomorphic to $M$ as a $G$-set. Consider the sheaf
+$\mathcal{F}$ on $\Spec(K)_\etale$ defined by
+the rule $U \mapsto \Mor_K(U, X)$. This is a sheaf as the \'etale
+topology is subcanonical. Then we see that
+$\mathcal{F}_{\overline{s}} = \Mor_K(\Spec(K^{sep}), X) = M$
+as $G$-sets (details omitted). This gives the inverse of the functor and
+we win.
+\end{proof}
+
+\begin{remark}
+\label{remark-every-sheaf-representable}
+Another way to state the conclusion of
+Theorem \ref{theorem-equivalence-sheaves-point} and
+Fundamental Groups, Lemma \ref{pione-lemma-sheaves-point}
+is to say that every sheaf on $\Spec(K)_\etale$ is representable
+by a scheme $X$ \'etale over $\Spec(K)$.
+This does not mean that every sheaf is representable in the sense of
+Sites, Definition \ref{sites-definition-representable-sheaf}.
+The reason is that in our construction of $\Spec(K)_\etale$
+we chose a sufficiently large set of schemes \'etale over $\Spec(K)$,
+whereas sheaves on $\Spec(K)_\etale$ form a proper class.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-global-sections-point}
+Assumptions and notations as in
+Theorem \ref{theorem-equivalence-sheaves-point}.
+There is a functorial bijection
+$$
+\Gamma(S, \mathcal{F}) = (\mathcal{F}_{\overline{s}})^G
+$$
+\end{lemma}
+
+\begin{proof}
+We can prove this using formal arguments and the result of
+Theorem \ref{theorem-equivalence-sheaves-point}
+as follows. Given a sheaf $\mathcal{F}$ corresponding to
+the $G$-set $M = \mathcal{F}_{\overline{s}}$ we have
+\begin{eqnarray*}
+\Gamma(S, \mathcal{F}) & = &
+\Mor_{\Sh(S_\etale)}(h_{\Spec(K)}, \mathcal{F})
+\\
+& = & \Mor_{G\textit{-Sets}}(\{*\}, M) \\
+& = & M^G
+\end{eqnarray*}
+Here the first identification is explained in
+Sites, Sections \ref{sites-section-presheaves} and
+\ref{sites-section-representable-sheaves},
+the second results from
+Theorem \ref{theorem-equivalence-sheaves-point}
+and the third is clear. We will also give a direct proof\footnote{For
+the doubting Thomases out there.}.
+
+\medskip\noindent
+Suppose that $t \in \Gamma(S, \mathcal{F})$ is a global section.
+Then the triple $(S, \overline{s}, t)$ defines an element of
+$\mathcal{F}_{\overline{s}}$ which is clearly invariant under the
+action of $G$. Conversely, suppose that $(U, \overline{u}, t)$
+defines an element of $\mathcal{F}_{\overline{s}}$ which is invariant.
+Then we may shrink $U$ and assume $U = \Spec(L)$ for some
+finite separable field extension $L$ of $K$, see
+Proposition \ref{proposition-etale-morphisms}.
+In this case the map $\mathcal{F}(U) \to \mathcal{F}_{\overline{s}}$
+is injective, because for any morphism of \'etale neighbourhoods
+$(U', \overline{u}') \to (U, \overline{u})$ the restriction map
+$\mathcal{F}(U) \to \mathcal{F}(U')$ is injective since $U' \to U$
+is a covering of $S_\etale$.
+After enlarging $L$ a bit we may assume $K \subset L$ is a finite
+Galois extension. At this point we use that
+$$
+\Spec(L) \times_{\Spec(K)} \Spec(L)
+=
+\coprod\nolimits_{\sigma \in \text{Gal}(L/K)} \Spec(L)
+$$
+where the maps $\Spec(L) \to \Spec(L \otimes_K L)$
+come from the ring maps $a \otimes b \mapsto a\sigma(b)$. Hence we
+see that the condition that $(U, \overline{u}, t)$ is invariant
+under all of $G$ implies that $t \in \mathcal{F}(\Spec(L))$
+maps to the same element of
+$\mathcal{F}(\Spec(L) \times_{\Spec(K)} \Spec(L))$
+via restriction by either projection (this uses the injectivity mentioned
+above; details omitted). Hence the sheaf condition of $\mathcal{F}$
+for the \'etale covering $\{\Spec(L) \to \Spec(K)\}$ kicks
+in and we conclude that $t$ comes from a unique section of $\mathcal{F}$
+over $\Spec(K)$.
+\end{proof}
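+
+\medskip\noindent
+For example, if $K = \mathbf{R}$ and $L = \mathbf{C}$, the decomposition
+used in the proof is the classical isomorphism
+$\mathbf{C} \otimes_\mathbf{R} \mathbf{C} \cong \mathbf{C} \times \mathbf{C}$,
+$a \otimes b \mapsto (ab, a\overline{b})$, with the two factors indexed
+by the elements of $\text{Gal}(\mathbf{C}/\mathbf{R})$.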
+
+\begin{remark}
+\label{remark-stalk-pullback}
+Let $S$ be a scheme and let $\overline{s} : \Spec(k) \to S$
+be a geometric point of $S$. By definition this means that $k$
+is algebraically closed. In particular the absolute Galois group of $k$
+is trivial. Hence by
+Theorem \ref{theorem-equivalence-sheaves-point}
+the category of sheaves on $\Spec(k)_\etale$ is equivalent
+to the category of sets. The equivalence is given by taking
+sections over $\Spec(k)$. This finally provides us with an
+alternative definition of the stalk functor. Namely, the functor
+$$
+\Sh(S_\etale) \longrightarrow \textit{Sets}, \quad
+\mathcal{F} \longmapsto \mathcal{F}_{\overline{s}}
+$$
+is isomorphic to the functor
+$$
+\Sh(S_\etale)
+\longrightarrow
+\Sh(\Spec(k)_\etale) = \textit{Sets},
+\quad
+\mathcal{F} \longmapsto \overline{s}^*\mathcal{F}
+$$
+To prove this rigorously one can use
+Lemma \ref{lemma-stalk-pullback} part (3)
+with $f = \overline{s}$. Moreover, having said this the general case of
+Lemma \ref{lemma-stalk-pullback} part (3)
+follows from functoriality of pullbacks.
+\end{remark}
+
+
+
+
+
+
+\section{Group cohomology}
+\label{section-group-cohomology}
+
+\noindent
+In the following, if we write $H^i(G, M)$ we will mean that $G$
+is a topological group and $M$ a discrete $G$-module with
+continuous $G$-action and $H^i(G, -)$ is the $i$th right derived
+functor on the category $\text{Mod}_G$ of such $G$-modules, see
+Definitions \ref{definition-G-module-continuous} and
+\ref{definition-galois-cohomology}. This includes
+the case of an abstract group $G$, which simply means that $G$ is viewed
+as a topological group with the discrete topology.
+
+\medskip\noindent
+When the module has a nondiscrete topology, we will use the notation
+$H^i_{cont}(G, M)$ to indicate the continuous cohomology groups
+introduced in \cite{Tate}, see
+Section \ref{section-continuous-group-cohomology}.
+
+\begin{definition}
+\label{definition-G-module-continuous}
+Let $G$ be a topological group.
+\begin{enumerate}
+\item A {\it $G$-module}, sometimes called a {\it discrete $G$-module},
+is an abelian group $M$ endowed with a left action $a : G \times M \to M$
+by group homomorphisms such that $a$ is continuous when $M$ is given the
+discrete topology.
+\item A {\it morphism of $G$-modules} $f : M \to N$ is a
+$G$-equivariant homomorphism from $M$ to $N$.
+\item The category of $G$-modules is denoted $\text{Mod}_G$.
+\end{enumerate}
+Let $R$ be a ring.
+\begin{enumerate}
+\item An {\it $R\text{-}G$-module} is an $R$-module $M$ endowed with
+a left action $a : G \times M \to M$ by $R$-linear maps such that $a$
+is continuous when $M$ is given the discrete topology.
+\item A {\it morphism of $R\text{-}G$-modules} $f : M \to N$ is a
+$G$-equivariant $R$-module map from $M$ to $N$.
+\item The category of $R\text{-}G$-modules is denoted $\text{Mod}_{R, G}$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+The condition that $a : G \times M \to M$ is continuous is equivalent
+to the condition that the stabilizer of any $x \in M$ is open in $G$.
+If $G$ is an abstract group then this corresponds to the notion of an
+abelian group endowed with a $G$-action provided we endow $G$ with the
+discrete topology. Observe that $\text{Mod}_{\mathbf{Z}, G} = \text{Mod}_G$.
+
+\medskip\noindent
+The category $\text{Mod}_G$ has enough injectives, see
+Injectives, Lemma \ref{injectives-lemma-G-modules}.
+Consider the left exact functor
+$$
+\text{Mod}_G \longrightarrow \textit{Ab},
+\quad
+M \longmapsto M^G =
+\{x \in M \mid g \cdot x = x\ \forall g \in G\}
+$$
+We sometimes denote $M^G = H^0(G, M)$ and sometimes we write
+$M^G = \Gamma_G(M)$. This functor has a total right derived functor
+$R\Gamma_G(M)$ and $i$th right derived functor
+$R^i\Gamma_G(M) = H^i(G, M)$ for any $i \geq 0$.
+
+\medskip\noindent
+The same construction works for
+$H^0(G, -) : \text{Mod}_{R, G} \to \text{Mod}_R$. We will see in
+Lemma \ref{lemma-modules-abelian} that this agrees with the cohomology
+of the underlying $G$-module.
+
+\begin{definition}
+\label{definition-galois-cohomology}
+Let $G$ be a topological group. Let $M$ be a discrete $G$-module
+with continuous $G$-action. In other words, $M$ is an object
+of the category $\text{Mod}_G$ introduced in
+Definition \ref{definition-G-module-continuous}.
+\begin{enumerate}
+\item The right derived functors $H^i(G, M)$ of $H^0(G, M)$ on the
+category $\text{Mod}_G$ are called the
+{\it continuous group cohomology groups} of $M$.
+\item If $G$ is an abstract group endowed with the discrete topology
+then the $H^i(G, M)$ are called the {\it group cohomology groups} of $M$.
+\item If $G$ is a Galois group, then the groups $H^i(G, M)$ are called
+the {\it Galois cohomology groups} of $M$.
+\item If $G$ is the absolute Galois group of a field $K$, then the groups
+$H^i(G, M)$ are sometimes called the {\it Galois cohomology groups of $K$
+with coefficients in $M$}. In this case we sometimes write
+$H^i(K, M)$ instead of $H^i(G, M)$.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-modules-abelian}
+Let $G$ be a topological group. Let $R$ be a ring.
+For every $i \geq 0$ the diagram
+$$
+\xymatrix{
+\text{Mod}_{R, G} \ar[rr]_{H^i(G, -)} \ar[d] & &
+\text{Mod}_R \ar[d] \\
+\text{Mod}_G \ar[rr]^{H^i(G, -)} & &
+\textit{Ab}
+}
+$$
+whose vertical arrows are the forgetful functors is commutative.
+\end{lemma}
+
+\begin{proof}
+Let us denote the forgetful functor $F : \text{Mod}_{R, G} \to \text{Mod}_G$.
+Then $F$ has a left adjoint $H : \text{Mod}_G \to \text{Mod}_{R, G}$
+given by $H(M) = M \otimes_\mathbf{Z} R$. Observe that every object of
+$\text{Mod}_G$ is a quotient of a direct sum of modules of the form
+$\mathbf{Z}[G/U]$ where $U \subset G$ is an open subgroup.
+Here $\mathbf{Z}[G/U]$ denotes the $G$-module of finite
+$\mathbf{Z}$-linear combinations of the left cosets $gU$ in $G$
+endowed with its natural left $G$-action.
+Thus every bounded above complex in $\text{Mod}_G$ is quasi-isomorphic
+to a bounded above complex in $\text{Mod}_G$ whose underlying
+terms are flat $\mathbf{Z}$-modules
+(Derived Categories, Lemma \ref{derived-lemma-subcategory-left-resolution}).
+Thus it is clear that $LH$ exists on $D^-(\text{Mod}_G)$ and is computed by
+evaluating $H$ on any complex whose terms are flat $\mathbf{Z}$-modules;
+this follows from
+Derived Categories, Lemma \ref{derived-lemma-subcategory-left-acyclics} and
+Proposition \ref{derived-proposition-enough-acyclics}.
+We conclude from Derived Categories, Lemma
+\ref{derived-lemma-pre-derived-adjoint-functors}
+that
+$$
+\text{Ext}^i(\mathbf{Z}, F(M)) = \text{Ext}^i(R, M)
+$$
+for $M$ in $\text{Mod}_{R, G}$.
+Observe that $H^0(G, -) = \Hom(\mathbf{Z}, -)$ on
+$\text{Mod}_G$ where $\mathbf{Z}$ denotes the $G$-module
+with trivial action. Hence
+$H^i(G, -) = \text{Ext}^i(\mathbf{Z}, -)$ on $\text{Mod}_G$.
+Similarly we have $H^i(G, -) = \text{Ext}^i(R, -)$ on
+$\text{Mod}_{R, G}$. Combining everything we see that the lemma is true.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ext-modules-hom}
+Let $G$ be a topological group. Let $R$ be a ring.
+Let $M$, $N$ be $R\text{-}G$-modules. If $M$ is finite projective
+as an $R$-module, then
+$\text{Ext}^i(M, N) = H^i(G, M^\vee \otimes_R N)$ (for notation
+see proof).
+\end{lemma}
+
+\begin{proof}
+Here $M^\vee = \Hom_R(M, R)$ is endowed with the contragredient
+action of $G$, namely $(g \cdot \lambda)(m) = \lambda(g^{-1} \cdot m)$
+for $g \in G$, $\lambda \in M^\vee$, $m \in M$. The action of $G$ on
+$M^\vee \otimes_R N$ is the diagonal one, i.e., given by
+$g \cdot (\lambda \otimes n) = g \cdot \lambda \otimes g \cdot n$.
+Note that for a third $R\text{-}G$-module $E$ we have
+$\Hom(E, M^\vee \otimes_R N) = \Hom(M \otimes_R E, N)$.
+Namely, this is true on the level of $R$-modules by
+Algebra, Lemmas \ref{algebra-lemma-hom-from-tensor-product} and
+\ref{algebra-lemma-evaluation-map-iso-finite-projective}
+and the definitions of $G$-actions are chosen such that it
+remains true for $R\text{-}G$-modules. It follows that
+$M^\vee \otimes_R N$ is an injective $R\text{-}G$-module
+if $N$ is an injective $R\text{-}G$-module. Hence if
+$N \to N^\bullet$ is an injective resolution, then
+$M^\vee \otimes_R N \to M^\vee \otimes_R N^\bullet$
+is an injective resolution. Then
+$$
+\Hom(M, N^\bullet) = \Hom(R, M^\vee \otimes_R N^\bullet) =
+(M^\vee \otimes_R N^\bullet)^G
+$$
+Since the left hand side computes $\text{Ext}^i(M, N)$ and the right
+hand side computes $H^i(G, M^\vee \otimes_R N)$ the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-dim-group-cohomology}
+Let $G$ be a topological group. Let $k$ be a field.
+Let $V$ be a $k\text{-}G$-module.
+If $G$ is topologically finitely generated and
+$\dim_k(V) < \infty$, then $\dim_k H^1(G, V) < \infty$.
+\end{lemma}
+
+\begin{proof}
+Let $g_1, \ldots, g_r \in G$ be elements which topologically generate $G$,
+i.e., this means that the subgroup generated by $g_1, \ldots, g_r$ is dense.
+By Lemma \ref{lemma-ext-modules-hom}
+we see that $H^1(G, V)$ is the $k$-vector space of extensions
+$$
+0 \to V \to E \to k \to 0
+$$
+of $k\text{-}G$-modules. Choose $e \in E$ mapping to $1 \in k$.
+Write
+$$
+g_i \cdot e = v_i + e
+$$
+for some $v_i \in V$. This is possible because $g_i \cdot 1 = 1$.
+We claim that the list of elements $v_1, \ldots, v_r \in V$
+determines the isomorphism class of the extension $E$.
+Once we prove this the lemma follows as this means that our
+Ext vector space is isomorphic to a subquotient of the $k$-vector
+space $V^{\oplus r}$; some details omitted.
+Since $E$ is an object of the category defined in
+Definition \ref{definition-G-module-continuous}
+we know there is an open subgroup $U$ such that
+$u \cdot e = e$ for all $u \in U$.
+Now pick any $g \in G$. Then $gU$ contains a word $w$ in
+the elements $g_1, \ldots, g_r$.
+Say $gu = w$. Since the element $w \cdot e$ is determined by
+$v_1, \ldots, v_r$, we see that $g \cdot e = (gu) \cdot e = w \cdot e$
+is too.
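+To fill in the omitted details: replacing the lift $e$ by $e + v$ for
+some $v \in V$ replaces each $v_i$ by $v_i + g_i \cdot v - v$. Hence the
+tuple $(v_1, \ldots, v_r)$ is well defined in $V^{\oplus r}/W$, where
+$W$ is the image of the linear map
+$v \mapsto (g_1 \cdot v - v, \ldots, g_r \cdot v - v)$,
+and the claim above shows that the resulting map from the Ext vector
+space to $V^{\oplus r}/W$ is injective. As
+$\dim_k V^{\oplus r} < \infty$ this proves the lemma.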
+\end{proof}
+
+\begin{lemma}
+\label{lemma-profinite-group-cohomology-torsion}
+Let $G$ be a profinite topological group.
+Then
+\begin{enumerate}
+\item $H^i(G, M)$ is torsion for $i > 0$ and any $G$-module $M$, and
+\item $H^i(G, M) = 0$ if $M$ is a $\mathbf{Q}$-vector space.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). By dimension shifting we see that it suffices
+to show that $H^1(G, M)$ is torsion for every $G$-module $M$.
+Choose an exact sequence $0 \to M \to I \to N \to 0$ with $I$
+an injective object of the category of $G$-modules.
+Then any element of $H^1(G, M)$ is the image of an element
+$y \in N^G$. Choose $x \in I$ mapping to $y$.
+The stabilizer $U \subset G$ of $x$ is open, hence
+has finite index $r$. Let $g_1, \ldots, g_r \in G$ be a system
+of representatives for $G/U$. Then $\sum g_i(x)$ is an invariant
+element of $I$ which maps to $ry$. Thus $r$ kills the element
+of $H^1(G, M)$ we started with. Part (2) follows as then
+$H^i(G, M)$ is both a $\mathbf{Q}$-vector space and torsion.
+\end{proof}
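+
+\medskip\noindent
+In particular, if $G$ is a finite group endowed with the discrete
+topology, then the argument in the proof shows that every element of
+$H^i(G, M)$ for $i > 0$ is annihilated by $|G|$, recovering a classical
+fact about the cohomology of finite groups.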
+
+
+
+
+
+\section{Tate's continuous cohomology}
+\label{section-continuous-group-cohomology}
+
+\noindent
+Tate's continuous cohomology (\cite{Tate}) is defined by the complex of
+continuous inhomogeneous cochains. We can define this when $M$ is an
+arbitrary topological abelian group endowed with a continuous $G$-action.
+Namely, we consider the complex
+$$
+C^\bullet_{cont}(G, M) :
+M \to \text{Maps}_{cont}(G, M) \to
+\text{Maps}_{cont}(G \times G, M) \to \ldots
+$$
+where the boundary map is defined for $n \geq 1$ by the rule
+\begin{align*}
+\text{d}(f)(g_1, \ldots, g_{n + 1})
+& = g_1(f(g_2, \ldots, g_{n + 1})) \\
+&
++ \sum\nolimits_{j = 1, \ldots, n}
+(-1)^jf(g_1, \ldots, g_jg_{j + 1}, \ldots, g_{n + 1}) \\
+&
++ (-1)^{n + 1}f(g_1, \ldots, g_n)
+\end{align*}
+and for $n = 0$ sends $m \in M$ to the map $g \mapsto g(m) - m$. We define
+$$
+H^i_{cont}(G, M) = H^i(C^\bullet_{cont}(G, M))
+$$
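+For example, in degree $1$ this recovers the classical description by
+crossed homomorphisms: a continuous $1$-cocycle is a continuous map
+$f : G \to M$ with $f(g_1g_2) = g_1(f(g_2)) + f(g_1)$, and the
+$1$-coboundaries are the maps of the form $g \mapsto g(m) - m$ for
+$m \in M$. In particular, if $G$ acts trivially on $M$, then
+$H^1_{cont}(G, M)$ is the group of continuous homomorphisms $G \to M$.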
+Since the terms of the complex involve continuous maps from $G$ and
+self products of $G$ into the topological module $M$, it is not clear
+that this turns a short exact sequence of topological modules into
+a long exact cohomology sequence. Another difficulty is that the category
+of topological abelian groups isn't an abelian category!
+
+\medskip\noindent
+However, a short exact sequence of discrete $G$-modules does give
+rise to a short exact sequence of complexes of continuous cochains
+and hence a long exact cohomology sequence of continuous
+cohomology groups $H^i_{cont}(G, -)$.
+Therefore, on the category $\text{Mod}_G$ of
+Definition \ref{definition-G-module-continuous} the functors
+$H^i_{cont}(G, M)$ form a cohomological $\delta$-functor as defined
+in Homology, Section \ref{homology-section-cohomological-delta-functor}.
+Since the cohomology $H^i(G, M)$
+of Definition \ref{definition-galois-cohomology}
+is a universal $\delta$-functor
+(Derived Categories, Lemma \ref{derived-lemma-right-derived-delta-functor})
+we obtain canonical maps
+$$
+H^i(G, M) \longrightarrow H^i_{cont}(G, M)
+$$
+for $M \in \text{Mod}_G$. It is known that these maps are
+isomorphisms when $G$ is an abstract group (i.e., $G$ has
+the discrete topology) or when $G$ is a profinite group
+(insert future reference here).
+If you know an example showing this map is not an isomorphism
+for a topological group $G$ and $M \in \Ob(\text{Mod}_G)$
+please email
+\href{mailto:stacks.project@gmail.com}{stacks.project@gmail.com}.
+
+
+
+
+\section{Cohomology of a point}
+\label{section-cohomology-point}
+
+\noindent
+As a consequence of the discussion in the preceding sections
+we obtain the equivalence of \'etale cohomology of the spectrum of a
+field with Galois cohomology.
+
+\begin{lemma}
+\label{lemma-equivalence-abelian-sheaves-point}
+Let $S = \Spec(K)$ with $K$ a field.
+Let $\overline{s}$ be a geometric point of $S$.
+Let $G = \text{Gal}_{\kappa(s)}$ denote the absolute Galois group.
+The stalk functor induces an equivalence of categories
+$$
+\textit{Ab}(S_\etale) \longrightarrow \text{Mod}_G,
+\quad
+\mathcal{F} \longmapsto \mathcal{F}_{\overline{s}}.
+$$
+\end{lemma}
+
+\begin{proof}
+In
+Theorem \ref{theorem-equivalence-sheaves-point}
+we have seen the equivalence between sheaves of sets and $G$-sets.
+The current lemma follows formally from this as an abelian sheaf is just
+a sheaf of sets endowed with a commutative group law, and a $G$-module
+is just a $G$-set endowed with a commutative group law.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-cohomology-point}
+Notation and assumptions as in
+Lemma \ref{lemma-equivalence-abelian-sheaves-point}.
+Let $\mathcal{F}$ be an abelian sheaf on $\Spec(K)_\etale$
+which corresponds to the $G$-module $M$.
+Then
+\begin{enumerate}
+\item in $D(\textit{Ab})$ we have a canonical isomorphism
+$R\Gamma(S, \mathcal{F}) = R\Gamma_G(M)$,
+\item $H_\etale^0(S, \mathcal{F}) = M^G$, and
+\item $H_\etale^q(S, \mathcal{F}) = H^q(G, M)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Combine
+Lemma \ref{lemma-equivalence-abelian-sheaves-point}
+with
+Lemma \ref{lemma-global-sections-point}.
+\end{proof}
+
+\begin{example}
+\label{example-sheaves-point}
+Sheaves on $\Spec(K)_\etale$.
+Let $G = \text{Gal}(K^{sep}/K)$ be the absolute Galois group of $K$.
+\begin{enumerate}
+\item The constant sheaf $\underline{\mathbf{Z}/n\mathbf{Z}}$ corresponds to
+the module $\mathbf{Z}/n\mathbf{Z}$ with trivial $G$-action,
+\item the sheaf $\mathbf{G}_m|_{\Spec(K)_\etale}$ corresponds to
+$(K^{sep})^*$ with its $G$-action,
+\item the sheaf $\mathbf{G}_a|_{\Spec(K)_\etale}$ corresponds to
+$(K^{sep}, +)$ with its $G$-action, and
+\item the sheaf $\mu_n|_{\Spec(K)_\etale}$ corresponds to
+$\mu_n(K^{sep})$ with its $G$-action.
+\end{enumerate}
+By
+Remark \ref{remark-special-case-fpqc-cohomology-quasi-coherent}
+and
+Theorem \ref{theorem-picard-group}
+we have the following identifications for cohomology groups:
+\begin{align*}
+H_\etale^0(S_\etale, \mathbf{G}_m) & =
+\Gamma(S, \mathcal{O}_S^*) \\
+H_\etale^1(S_\etale, \mathbf{G}_m) & =
+H_{Zar}^1(S, \mathcal{O}_S^*) = \Pic(S) \\
+H_\etale^i(S_\etale, \mathbf{G}_a) & =
+H_{Zar}^i(S, \mathcal{O}_S)
+\end{align*}
+Also, for any quasi-coherent sheaf $\mathcal{F}$ on $S_\etale$ we have
+$$
+H^i(S_\etale, \mathcal{F}) = H_{Zar}^i(S, \mathcal{F}),
+$$
+see
+Theorem \ref{theorem-zariski-fpqc-quasi-coherent}.
+In particular, this gives the following sequence of equalities
+$$
+0 =
+\Pic(\Spec(K)) =
+H_\etale^1(\Spec(K)_\etale, \mathbf{G}_m) =
+H^1(G, (K^{sep})^*)
+$$
+which is none other than Hilbert's Theorem 90. Similarly, for $i \geq 1$,
+$$
+0 = H^i(\Spec(K), \mathcal{O})
+= H_\etale^i(\Spec(K)_\etale, \mathbf{G}_a)
+= H^i(G, K^{sep})
+$$
+where the $K^{sep}$ indicates $K^{sep}$ as a Galois module with addition
+as group law. In this way we may consider the work we have done so far as
+a complicated way of computing Galois cohomology groups.
+\end{example}
+
+\noindent
+The following result is a curiosity and should be skipped on a
+first reading.
+
+\begin{lemma}
+\label{lemma-all-modules-quasi-coherent}
+Let $R$ be a local ring of dimension $0$. Let $S = \Spec(R)$.
+Then every $\mathcal{O}_S$-module on $S_\etale$ is quasi-coherent.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be an $\mathcal{O}_S$-module on $S_\etale$.
+We have to show that $\mathcal{F}$ is determined by the
+$R$-module $M = \Gamma(S, \mathcal{F})$.
+More precisely, if $\pi : X \to S$ is \'etale we have to
+show that $\Gamma(X, \mathcal{F}) = \Gamma(X, \pi^*\widetilde{M})$.
+
+\medskip\noindent
+Let $\mathfrak m \subset R$ be the maximal ideal and let
+$\kappa$ be the residue field. By
+Algebra, Lemma \ref{algebra-lemma-local-dimension-zero-henselian}
+the local ring $R$ is henselian. If $X \to S$ is \'etale,
+then the underlying topological space of $X$ is discrete
+by Morphisms, Lemma \ref{morphisms-lemma-etale-over-field}
+and hence $X$ is a disjoint union of affine schemes
+each having one point. Moreover, if $X = \Spec(A)$ is affine and
+has one point, then $R \to A$ is finite \'etale by
+Algebra, Lemma \ref{algebra-lemma-mop-up}.
+We have to show that $\Gamma(X, \mathcal{F}) = M \otimes_R A$
+in this case.
+
+\medskip\noindent
+The functor $A \mapsto A/\mathfrak m A$ defines an equivalence of
+the category of finite \'etale $R$-algebras
+with the category of finite separable $\kappa$-algebras by
+Algebra, Lemma \ref{algebra-lemma-henselian-cat-finite-etale}.
+Let us first consider the case where $A/\mathfrak m A$
+is a Galois extension of $\kappa$ with Galois group $G$.
+For each $\sigma \in G$ let $\sigma : A \to A$ denote the
+corresponding automorphism of $A$ over $R$.
+Let $N = \Gamma(X, \mathcal{F})$.
+Then $\Spec(\sigma) : X \to X$ is an automorphism over $S$
+and hence pullback by this defines a map $\sigma : N \to N$
+which is a $\sigma$-linear map: $\sigma(an) = \sigma(a) \sigma(n)$
+for $a \in A$ and $n \in N$.
+We will apply Galois descent to the quasi-coherent module
+$\widetilde{N}$ on $X$ endowed with the isomorphisms
+coming from the action of $\sigma$ on $N$. See Descent, Lemma
+\ref{descent-lemma-galois-descent-more-general}.
+This lemma tells us there is an isomorphism $N = N^G \otimes_R A$.
+On the other hand, it is clear that $N^G = M$ by the sheaf property
+for $\mathcal{F}$. Thus the required isomorphism holds.
+
+\medskip\noindent
+The general case (with $A$ local and finite \'etale over $R$)
+is deduced from the Galois case as follows. Choose $A \to B$
+finite \'etale such that $B$ is local with residue field
+Galois over $\kappa$. Let $G = \text{Aut}(B/R) = \text{Gal}(\kappa_B/\kappa)$.
+Let $H \subset G$ be the Galois group corresponding to the
+Galois extension $\kappa_B/\kappa_A$. Then as above one
+shows that $\Gamma(X, \mathcal{F}) = \Gamma(\Spec(B), \mathcal{F})^H$.
+By the result for Galois extensions (used twice) we get
+$$
+\Gamma(X, \mathcal{F}) = (M \otimes_R B)^H = M \otimes_R A
+$$
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Cohomology of curves}
+\label{section-cohomology-curves}
+
+\noindent
+The next task at hand is to compute the \'etale cohomology of a smooth curve
+over an algebraically closed field with torsion coefficients, and in
+particular show that it vanishes in degree at least 3. To prove this, we
+will compute cohomology at the generic point, which
+amounts to some Galois cohomology.
+
+
+
+
+
+\section{Brauer groups}
+\label{section-brauer-groups}
+
+\noindent
+Brauer groups of fields are defined using finite central simple algebras.
+In this section we review the relevant facts about Brauer groups, most of
+which are discussed in the chapter
+Brauer Groups, Section \ref{brauer-section-introduction}.
+For other references, see \cite{SerreCorpsLocaux},
+\cite{SerreGaloisCohomology} or \cite{Weil}.
+
+\begin{theorem}
+\label{theorem-central-simple-algebra}
+Let $K$ be a field. For a unital, associative (not necessarily commutative)
+$K$-algebra $A$ the following are equivalent
+\begin{enumerate}
+\item $A$ is a finite central simple $K$-algebra,
+\item $A$ is a finite dimensional $K$-vector space, $K$ is the center of $A$,
+and $A$ has no nontrivial two-sided ideal,
+\item there exists $d \geq 1$ such that
+$A \otimes_K \bar K \cong \text{Mat}(d \times d, \bar K)$,
+\item there exists $d \geq 1$ such that
+$A \otimes_K K^{sep} \cong \text{Mat}(d \times d, K^{sep})$,
+\item there exist $d \geq 1$ and a finite Galois extension $K'/K$
+such that
+$A \otimes_K K' \cong \text{Mat}(d \times d, K')$,
+\item there exist $n \geq 1$ and a finite central skew field $D$
+over $K$ such that $A \cong \text{Mat}(n \times n, D)$.
+\end{enumerate}
+The integer $d$ is called the {\it degree} of $A$.
+\end{theorem}
+
+\begin{proof}
+This is a copy of
+Brauer Groups, Lemma \ref{brauer-lemma-finite-central-simple-algebra}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-brauer-inverse}
+Let $A$ be a finite central simple algebra over $K$. Then
+$$
+\begin{matrix}
+A \otimes_K A^{opp} & \longrightarrow & \text{End}_K(A) \\
+\ a \otimes a' & \longmapsto & (x \mapsto a x a')
+\end{matrix}
+$$
+is an isomorphism of algebras over $K$.
+\end{lemma}
+
+\begin{proof}
+See
+Brauer Groups, Lemma \ref{brauer-lemma-inverse}.
+\end{proof}
+
+\begin{definition}
+\label{definition-brauer-equivalent}
+Two finite central simple algebras $A_1$ and $A_2$ over $K$ are called
+{\it similar}, or {\it equivalent}, if there exist $m, n \geq 1$
+such that $\text{Mat}(n \times n, A_1)
+\cong \text{Mat}(m \times m, A_2)$. We write $A_1 \sim A_2$.
+\end{definition}
+
+\noindent
+By Brauer Groups, Lemma \ref{brauer-lemma-similar} this is an
+equivalence relation.
+
+\begin{definition}
+\label{definition-brauer-group}
+Let $K$ be a field. The {\it Brauer group} of $K$ is the set $\text{Br} (K)$
+of similarity classes of finite central simple algebras over $K$, endowed with
+the group law induced by tensor product (over $K$). The class of $A$ in
+$\text{Br}(K)$ is denoted by $[A]$. The neutral element is
+$[K] = [\text{Mat}(d \times d, K)]$ for any $d \geq 1$.
+\end{definition}
+
+\noindent
+The previous lemma implies that inverses exist and that $-[A] = [A^{opp}]$.
+The Brauer group of a field is always torsion.
+In fact, we will see that $[A]$ has order dividing $\deg(A)$
+for any finite central simple algebra $A$ (see
+Lemma \ref{lemma-annihilated-by-degree}).
+In general the Brauer group is not finitely generated, for example
+the Brauer group of a non-Archimedean local field is $\mathbf{Q}/\mathbf{Z}$.
+The Brauer group of $\mathbf{C}(x, y)$ is uncountable.
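+
+\medskip\noindent
+A classical example: the Hamilton quaternions
+$\mathbf{H} = \mathbf{R} \oplus \mathbf{R}i \oplus \mathbf{R}j
+\oplus \mathbf{R}ij$ with $i^2 = j^2 = -1$ and $ji = -ij$
+form a finite central skew field over $\mathbf{R}$. The assignments
+$$
+i \otimes 1 \longmapsto
+\left(
+\begin{matrix}
+\sqrt{-1} & 0 \\
+0 & -\sqrt{-1}
+\end{matrix}
+\right),
+\quad
+j \otimes 1 \longmapsto
+\left(
+\begin{matrix}
+0 & 1 \\
+-1 & 0
+\end{matrix}
+\right)
+$$
+extend to an isomorphism
+$\mathbf{H} \otimes_\mathbf{R} \mathbf{C} \cong
+\text{Mat}(2 \times 2, \mathbf{C})$, so $\mathbf{H}$ has degree $2$.
+By the theorem of Frobenius the only finite central skew fields
+over $\mathbf{R}$ are $\mathbf{R}$ and $\mathbf{H}$, hence
+$\text{Br}(\mathbf{R}) = \{[\mathbf{R}], [\mathbf{H}]\} \cong
+\mathbf{Z}/2\mathbf{Z}$; in particular the order of $[\mathbf{H}]$
+equals its degree.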
+
+\begin{lemma}
+\label{lemma-central-simple-algebra-pgln}
+\begin{slogan}
+Central simple algebras are classified by Galois cohomology of PGL.
+\end{slogan}
+Let $K$ be a field and let $K^{sep}$ be a separable algebraic closure.
+Then the set of isomorphism classes of central simple algebras of degree
+$d$ over $K$ is in bijection with the non-abelian cohomology
+$H^1(\text{Gal}(K^{sep}/K), \text{PGL}_d(K^{sep}))$.
+\end{lemma}
+
+\begin{proof}[Sketch of proof]
+The Skolem-Noether theorem (see
+Brauer Groups, Theorem \ref{brauer-theorem-skolem-noether})
+implies that for any field $L$ the group
+$\text{Aut}_{L\text{-Algebras}}(\text{Mat}_d(L))$
+equals $\text{PGL}_d(L)$. By
+Theorem \ref{theorem-central-simple-algebra}, we see that
+central simple algebras of degree $d$ correspond
+to forms of the $K$-algebra $\text{Mat}_d(K)$.
+Combined we see that isomorphism classes of degree $d$ central
+simple algebras correspond to elements of
+$H^1(\text{Gal}(K^{sep}/K), \text{PGL}_d(K^{sep}))$.
+For more details on twisting, see for example
+\cite{SilvermanEllipticCurves}.
+\end{proof}
+
+\noindent
+If $A$ is a finite central simple algebra of degree $d$ over a field $K$,
+we denote $\xi_A$ the corresponding cohomology class in
+$H^1(\text{Gal}(K^{sep}/K), \text{PGL}_d(K^{sep}))$.
+Consider the short exact sequence
+$$
+1 \to (K^{sep})^* \to \text{GL}_d(K^{sep}) \to \text{PGL}_d(K^{sep}) \to 1,
+$$
+which gives rise to a long exact cohomology sequence (up to degree 2) with
+coboundary map
+$$
+\delta_d :
+H^1(\text{Gal}(K^{sep}/K), \text{PGL}_d(K^{sep}))
+\longrightarrow
+H^2(\text{Gal}(K^{sep}/K), (K^{sep})^*).
+$$
+Explicitly, this is given as follows: if $\xi$ is a cohomology class
+represented by the 1-cocycle $(g_\sigma)$, then $\delta_d(\xi)$ is the
+class of the 2-cocycle
+\begin{equation}
+\label{equation-two-cocycle}
+(\sigma, \tau)
+\longmapsto
+\tilde g_\sigma^{-1} \tilde g_{\sigma \tau} \sigma(\tilde g_\tau^{-1})
+\in (K^{sep})^*
+\end{equation}
+where $\tilde g_\sigma \in \text{GL}_d(K^{sep})$ is a lift of $g_\sigma$.
+Using this we can make explicit the map
+$$
+\delta : \text{Br}(K) \longrightarrow H^2(\text{Gal}(K^{sep}/K), (K^{sep})^*),
+\quad
+[A] \longmapsto \delta_{\deg A} (\xi_A)
+$$
+as follows. Assume $A$ has degree $d$ over $K$. Choose an isomorphism
+$\varphi : \text{Mat}_d(K^{sep}) \to A \otimes_K K^{sep}$. For
+$\sigma \in \text{Gal}(K^{sep}/K)$ choose an element
+$\tilde g_\sigma \in \text{GL}_d(K^{sep})$ such that
+$\varphi^{-1} \circ \sigma(\varphi)$ is equal to the map
+$x \mapsto \tilde g_\sigma x \tilde g_\sigma^{-1}$. The class in $H^2$
+is defined by the 2-cocycle (\ref{equation-two-cocycle}).
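+
+\medskip\noindent
+As an example, take $K = \mathbf{R}$ and let $A = \mathbf{H}$ be the
+Hamilton quaternions, a finite central simple algebra of degree $2$
+over $\mathbf{R}$. Choose the isomorphism
+$\varphi : \text{Mat}_2(\mathbf{C}) \to
+\mathbf{H} \otimes_\mathbf{R} \mathbf{C}$ determined by
+$$
+\varphi^{-1}(i \otimes 1) =
+\left(
+\begin{matrix}
+\sqrt{-1} & 0 \\
+0 & -\sqrt{-1}
+\end{matrix}
+\right),
+\quad
+\varphi^{-1}(j \otimes 1) =
+\left(
+\begin{matrix}
+0 & 1 \\
+-1 & 0
+\end{matrix}
+\right)
+$$
+Writing $\sigma$ for complex conjugation, a direct computation shows
+that $\varphi^{-1} \circ \sigma(\varphi)$ is conjugation by
+$\tilde g_\sigma = \varphi^{-1}(j \otimes 1)$. Since
+$\tilde g_\sigma$ has real entries and $\tilde g_\sigma^2 = -1$,
+the 2-cocycle (\ref{equation-two-cocycle}), taking $\tilde g_1 = 1$,
+has the value
+$\tilde g_\sigma^{-1} \tilde g_1 \sigma(\tilde g_\sigma^{-1}) =
+\tilde g_\sigma^{-2} = -1$ at $(\sigma, \sigma)$ and the value $1$
+elsewhere. As $-1$ is not a norm from $\mathbf{C}$, we conclude
+that $\delta([\mathbf{H}])$ is the nonzero element of
+$H^2(\text{Gal}(\mathbf{C}/\mathbf{R}), \mathbf{C}^*) =
+\mathbf{R}^*/N_{\mathbf{C}/\mathbf{R}}(\mathbf{C}^*) \cong
+\mathbf{Z}/2\mathbf{Z}$.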
+
+\begin{theorem}
+\label{theorem-brauer-delta}
+Let $K$ be a field with separable algebraic closure $K^{sep}$. The map
+$\delta : \text{Br}(K) \to H^2(\text{Gal}(K^{sep}/K), (K^{sep})^*)$
+defined above is a group isomorphism.
+\end{theorem}
+
+\begin{proof}[Sketch of proof]
+To prove that $\delta$ defines a group homomorphism, i.e., that
+$\delta(A \otimes_K B) = \delta(A) + \delta(B)$, one computes
+directly with cocycles.
+
+\medskip\noindent
+Injectivity of $\delta$. In the abelian case ($d = 1$), one has the
+identification
+$$
+H^1(\text{Gal}(K^{sep}/K), \text{GL}_d(K^{sep})) =
+H_\etale^1(\Spec(K), \text{GL}_d(\mathcal{O}))
+$$
+the latter of which is trivial by fpqc descent. If this were true in the
+non-abelian case, this would readily imply injectivity of $\delta$. (See
+\cite{SGA4.5}.) Rather, to prove this, one can reinterpret $\delta([A])$ as the
+obstruction to the existence of a $K$-vector space $V$ with a left $A$-module
+structure and such that $\dim_K V = \deg A$. In the case where $V$ exists, one
+has $A \cong \text{End}_K(V)$.
+
+\medskip\noindent
+For surjectivity, pick a
+cohomology class $\xi \in H^2(\text{Gal}(K^{sep}/K), (K^{sep})^*)$,
+then there exists a finite Galois extension $K^{sep}/K'/K$
+such that $\xi$ is the image of some
+$\xi' \in H^2(\text{Gal}(K'/K), (K')^*)$. Then write
+down an explicit central simple algebra over $K$ using the data $K', \xi'$.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{The Brauer group of a scheme}
+\label{section-brauer-scheme}
+
+\noindent
+Let $S$ be a scheme. An $\mathcal{O}_S$-algebra
+$\mathcal{A}$ is called {\it Azumaya} if it is \'etale locally a matrix
+algebra, i.e., if there exists an \'etale covering
+$\mathcal{U} = \{ \varphi_i : U_i \to S\}_{i \in I}$ such that
+$\varphi_i^*\mathcal{A} \cong \text{Mat}_{d_i}(\mathcal{O}_{U_i})$
+for some $d_i \geq 1$. Two such
+$\mathcal{A}$ and $\mathcal{B}$ are called {\it equivalent} if there exist
+finite locally free $\mathcal{O}_S$-modules $\mathcal{F}$ and $\mathcal{G}$
+which have positive rank at every $s \in S$ such that
+$$
+\mathcal{A} \otimes_{\mathcal{O}_S}
+\SheafHom_{\mathcal{O}_S}(\mathcal{F}, \mathcal{F})
+\cong
+\mathcal{B} \otimes_{\mathcal{O}_S}
+\SheafHom_{\mathcal{O}_S}(\mathcal{G}, \mathcal{G})
+$$
+as $\mathcal{O}_S$-algebras. The {\it Brauer group} of
+$S$ is the set $\text{Br}(S)$ of equivalence classes of Azumaya
+$\mathcal{O}_S$-algebras with the operation induced by tensor product (over
+$\mathcal{O}_S$).
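+
+\medskip\noindent
+For $S = \Spec(K)$ the spectrum of a field this recovers the
+definitions of Section \ref{section-brauer-groups}: by
+Theorem \ref{theorem-central-simple-algebra} an Azumaya
+$\mathcal{O}_S$-algebra is the same thing as a finite central
+simple $K$-algebra, and since finite locally free modules over a
+field are free, two such algebras $A$ and $B$ are equivalent if
+and only if
+$\text{Mat}(n \times n, A) \cong \text{Mat}(m \times m, B)$
+for some $n, m \geq 1$, as in
+Definition \ref{definition-brauer-equivalent}.
+Thus $\text{Br}(\Spec(K)) = \text{Br}(K)$.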
+
+\begin{lemma}
+\label{lemma-end-unique-up-to-invertible}
+Let $S$ be a scheme. Let $\mathcal{F}$ and $\mathcal{G}$ be finite locally
+free sheaves of $\mathcal{O}_S$-modules of positive rank. If there
+exists an isomorphism
+$\SheafHom_{\mathcal{O}_S}(\mathcal{F}, \mathcal{F}) \cong
+\SheafHom_{\mathcal{O}_S}(\mathcal{G}, \mathcal{G})$ of
+$\mathcal{O}_S$-algebras, then there exists an invertible sheaf
+$\mathcal{L}$ on $S$ such that
+$\mathcal{F} \otimes_{\mathcal{O}_S} \mathcal{L} \cong \mathcal{G}$
+and such that this isomorphism induces the given isomorphism of
+endomorphism algebras.
+\end{lemma}
+
+\begin{proof}
+Fix an isomorphism
+$\SheafHom_{\mathcal{O}_S}(\mathcal{F}, \mathcal{F}) \to
+\SheafHom_{\mathcal{O}_S}(\mathcal{G}, \mathcal{G})$.
+Consider the sheaf $\mathcal{L} \subset \SheafHom(\mathcal{F}, \mathcal{G})$
+generated as an $\mathcal{O}_S$-module by the local isomorphisms
+$\varphi : \mathcal{F} \to \mathcal{G}$ such that conjugation by
+$\varphi$ is the given isomorphism of endomorphism algebras.
+A local calculation (reducing to the case that $\mathcal{F}$ and $\mathcal{G}$
+are finite free and $S$ is affine) shows that $\mathcal{L}$ is invertible.
+Another local calculation shows that the evaluation map
+$$
+\mathcal{F} \otimes_{\mathcal{O}_S} \mathcal{L} \longrightarrow \mathcal{G}
+$$
+is an isomorphism.
+\end{proof}
+
+\noindent
+The argument given in the proof of the following lemma can be found in
+\cite{Saltman-torsion}.
+
+\begin{lemma}
+\label{lemma-annihilated-by-degree}
+\begin{reference}
+Argument taken from \cite{Saltman-torsion}.
+\end{reference}
+Let $S$ be a scheme. Let $\mathcal{A}$ be an Azumaya algebra which is
+locally free of rank $d^2$ over $S$. Then the class
+of $\mathcal{A}$ in the Brauer group of $S$ is annihilated by $d$.
+\end{lemma}
+
+\begin{proof}
+Choose an \'etale covering $\{U_i \to S\}$ and choose isomorphisms
+$\mathcal{A}|_{U_i} \to \SheafHom(\mathcal{F}_i, \mathcal{F}_i)$
+for some locally free $\mathcal{O}_{U_i}$-modules $\mathcal{F}_i$
+of rank $d$. (We may assume $\mathcal{F}_i$ is free.) Consider the
+composition
+$$
+p_i : \mathcal{F}_i^{\otimes d} \to
+\wedge^d(\mathcal{F}_i) \to \mathcal{F}_i^{\otimes d}
+$$
+The first arrow is the usual projection and the second arrow is
+the isomorphism of the top exterior power of $\mathcal{F}_i$ with
+the submodule of sections of $\mathcal{F}_i^{\otimes d}$ which transform
+according to the sign character under the action of the symmetric group
+on $d$ letters. Then $p_i^2 = d! p_i$ and the rank of $p_i$ is $1$.
+Using the given isomorphism
+$\mathcal{A}|_{U_i} \to \SheafHom(\mathcal{F}_i, \mathcal{F}_i)$
+and the canonical isomorphism
+$$
+\SheafHom(\mathcal{F}_i, \mathcal{F}_i)^{\otimes d} =
+\SheafHom(\mathcal{F}_i^{\otimes d}, \mathcal{F}_i^{\otimes d})
+$$
+we may think of $p_i$ as a section of $\mathcal{A}^{\otimes d}$
+over $U_i$. We claim that $p_i|_{U_i \times_S U_j} = p_j|_{U_i \times_S U_j}$
+as sections of $\mathcal{A}^{\otimes d}$. Namely, applying
+Lemma \ref{lemma-end-unique-up-to-invertible}
+we obtain an invertible sheaf $\mathcal{L}_{ij}$ and a canonical isomorphism
+$$
+\mathcal{F}_i|_{U_i \times_S U_j} \otimes \mathcal{L}_{ij}
+\longrightarrow
+\mathcal{F}_j|_{U_i \times_S U_j}.
+$$
+Using this isomorphism we see that $p_i$ maps to $p_j$.
+Since $\mathcal{A}^{\otimes d}$ is a sheaf on $S_\etale$
+(Proposition \ref{proposition-quasi-coherent-sheaf-fpqc}) we find a canonical
+global section $p \in \Gamma(S, \mathcal{A}^{\otimes d})$. A local calculation
+shows that
+$$
+\mathcal{H} =
+\Im(\mathcal{A}^{\otimes d} \to \mathcal{A}^{\otimes d}, f \mapsto fp)
+$$
+is a locally free module of rank $d^d$ and that (left) multiplication
+by $\mathcal{A}^{\otimes d}$ induces an isomorphism
+$\mathcal{A}^{\otimes d} \to \SheafHom(\mathcal{H}, \mathcal{H})$.
+In other words, $\mathcal{A}^{\otimes d}$ is the trivial element
+of the Brauer group of $S$ as desired.
+\end{proof}
+
+\noindent
+In this setting, the analogue of the isomorphism $\delta$ of
+Theorem \ref{theorem-brauer-delta}
+is a map
+$$
+\delta_S: \text{Br}(S) \to H_\etale^2(S, \mathbf{G}_m).
+$$
+It is true that $\delta_S$ is injective. If $S$ is quasi-compact or
+connected, then $\text{Br}(S)$ is a torsion group, so in this case the
+image of $\delta_S$ is contained in the {\it cohomological Brauer group} of $S$
+$$
+\text{Br}'(S) := H_\etale^2(S, \mathbf{G}_m)_\text{torsion}.
+$$
+So if $S$ is quasi-compact or connected, there is an inclusion $\text{Br}(S)
+\subset \text{Br}'(S)$. This is not always an equality: there exists a
+nonseparated singular surface $S$ for which $\text{Br}(S) \subset
+\text{Br}'(S)$ is a strict inclusion. If $S$ is quasi-projective, then
+$\text{Br}(S) = \text{Br}'(S)$. However, it is not known whether this holds for
+a smooth proper variety over $\mathbf{C}$, say.
+
+
+
+
+\section{The Artin-Schreier sequence}
+\label{section-artin-schreier}
+
+\noindent
+Let $p$ be a prime number. Let $S$ be a scheme in characteristic $p$.
+The {\it Artin-Schreier} sequence is the short exact sequence
+$$
+0 \longrightarrow \underline{\mathbf{Z}/p\mathbf{Z}}_S \longrightarrow
+\mathbf{G}_{a, S} \xrightarrow{F-1} \mathbf{G}_{a, S} \longrightarrow 0
+$$
+where $F - 1$ is the map $x \mapsto x^p - x$.
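+
+\medskip\noindent
+Exactness of this sequence on $S_\etale$ may be seen as follows.
+The kernel of $F - 1$ is $\underline{\mathbf{Z}/p\mathbf{Z}}_S$:
+since $x^p - x = \prod_{\lambda \in \mathbf{F}_p} (x - \lambda)$,
+a section of $\mathbf{G}_{a, S}$ killed by $F - 1$ decomposes its
+domain into open and closed pieces on which the section is a
+constant in $\mathbf{F}_p$. Surjectivity of $F - 1$ holds \'etale
+locally: for an $\mathbf{F}_p$-algebra $A$ and $a \in A$ the ring
+map $A \to A[x]/(x^p - x - a)$ is \'etale because the derivative
+of $x^p - x - a$ is $-1$, and in this extension $x$ maps to $a$
+under $F - 1$.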
+
+\begin{lemma}
+\label{lemma-vanishing-affine-char-p-p}
+Let $p$ be a prime. Let $S$ be a scheme of characteristic $p$.
+\begin{enumerate}
+\item If $S$ is affine, then
+$H_\etale^q(S, \underline{\mathbf{Z}/p\mathbf{Z}}) = 0$ for all
+$q \geq 2$.
+\item If $S$ is a quasi-compact and quasi-separated scheme of
+dimension $d$, then $H_\etale^q(S, \underline{\mathbf{Z}/p\mathbf{Z}}) = 0$
+for all $q \geq 2 + d$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Recall that the \'etale cohomology of the structure sheaf is equal
+to its cohomology on the underlying topological space
+(Theorem \ref{theorem-zariski-fpqc-quasi-coherent}).
+The first statement follows from the Artin-Schreier exact sequence
+and the vanishing of cohomology of the structure sheaf on an affine
+scheme (Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherent-affine-cohomology-zero}).
+The second statement follows by the same argument, using
+the vanishing result of Cohomology, Proposition
+\ref{cohomology-proposition-cohomological-dimension-spectral}
+and the fact that $S$ is a spectral space
+(Properties, Lemma
+\ref{properties-lemma-quasi-compact-quasi-separated-spectral}).
+\end{proof}
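+
+\medskip\noindent
+For an affine scheme $S = \Spec(A)$ of characteristic $p$ the same
+argument also identifies the cohomology in degrees $0$ and $1$:
+since $H^q(S, \mathcal{O}_S) = 0$ for $q \geq 1$, the long exact
+sequence associated to the Artin-Schreier sequence gives
+$$
+H_\etale^0(S, \underline{\mathbf{Z}/p\mathbf{Z}}) =
+\text{Ker}(F - 1 : A \to A)
+\quad\text{and}\quad
+H_\etale^1(S, \underline{\mathbf{Z}/p\mathbf{Z}}) =
+A/\{a^p - a \mid a \in A\}
+$$
+the latter being the classical Artin-Schreier description of
+$\mathbf{Z}/p\mathbf{Z}$-covers of $S$.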
+
+\begin{lemma}
+\label{lemma-F-1}
+Let $k$ be an algebraically closed field of characteristic $p > 0$.
+Let $V$ be a finite dimensional $k$-vector space. Let $F : V \to V$
+be a Frobenius linear map, i.e., an additive map such that
+$F(\lambda v) = \lambda^p F(v)$ for all $\lambda \in k$ and $v \in V$.
+Then $F - 1 : V \to V$ is surjective with kernel a finite dimensional
+$\mathbf{F}_p$-vector space of dimension $\leq \dim_k(V)$.
+\end{lemma}
+
+\begin{proof}
+If $F = 0$, then the statement holds. If we have a filtration of $V$ by
+$F$-stable subvector spaces such that the statement holds for each
+graded piece, then it holds for $(V, F)$. Combining these two remarks
+we may assume the kernel of $F$ is zero.
+
+\medskip\noindent
+Choose a basis $v_1, \ldots, v_n$ of $V$ and write
+$F(v_i) = \sum a_{ij} v_j$. Observe that $v = \sum \lambda_i v_i$
+is in the kernel if and only if $\sum \lambda_i^p a_{ij} v_j = 0$.
+Since the kernel of $F$ is zero and every element of $k$ has a
+$p$th root, the matrix $(a_{ij})$ is invertible: a nontrivial
+relation $\sum_i c_i a_{ij} = 0$ would give the nonzero kernel
+element $\sum_i c_i^{1/p} v_i$.
+Let $(b_{ij})$ be its inverse. Then to see that $F - 1$
+is surjective we pick $w = \sum \mu_i v_i \in V$ and we try to solve
+$$
+(F - 1)(\sum \lambda_i v_i) =
+\sum \lambda_i^p a_{ij} v_j - \sum \lambda_j v_j = \sum \mu_j v_j
+$$
+This is equivalent to
+$$
+\sum \lambda_j^p v_j - \sum b_{ij} \lambda_i v_j = \sum b_{ij} \mu_i v_j
+$$
+in other words
+$$
+\lambda_j^p - \sum b_{ij} \lambda_i = \sum b_{ij} \mu_i,
+\quad j = 1, \ldots, \dim(V).
+$$
+The algebra
+$$
+A = k[x_1, \ldots, x_n]/
+(x_j^p - \sum b_{ij} x_i - \sum b_{ij} \mu_i)
+$$
+is standard smooth over $k$
+(Algebra, Definition \ref{algebra-definition-standard-smooth})
+because the matrix $(b_{ij})$ is invertible and the partial derivatives
+of $x_j^p$ are zero. A basis of $A$ over $k$ is the set of monomials
+$x_1^{e_1} \ldots x_n^{e_n}$ with $e_i < p$, hence $\dim_k(A) = p^n$.
+Since $k$ is algebraically closed we see that $\Spec(A)$ has exactly
+$p^n$ points. It follows that $F - 1$ is surjective and every fibre
+has $p^n$ points, i.e., the kernel of $F - 1$ is a group with $p^n$ elements.
+\end{proof}
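+
+\medskip\noindent
+For a concrete instance of the lemma, take $V = k$ of dimension $1$
+and $F(\lambda) = c \lambda^p$ for some $c \in k^*$. The kernel of
+$F - 1$ is the set of solutions of $c\lambda^p - \lambda = 0$,
+namely $0$ together with the $p - 1$ distinct roots of
+$c\lambda^{p - 1} = 1$; this is an $\mathbf{F}_p$-vector space of
+dimension $1 = \dim_k(V)$. Surjectivity of $F - 1$ holds because
+$c\lambda^p - \lambda - \mu$ has a root in $k$ for every
+$\mu \in k$, and in fact $p$ distinct roots as this polynomial is
+separable, in agreement with the fibre count in the proof.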
+
+\begin{lemma}
+\label{lemma-top-cohomology-coherent}
+Let $X$ be a separated scheme of finite type over a field $k$.
+Let $\mathcal{F}$ be a coherent sheaf of $\mathcal{O}_X$-modules.
+Then $\dim_k H^d(X, \mathcal{F}) < \infty$ where $d = \dim(X)$.
+\end{lemma}
+
+\begin{proof}
+We will prove this by induction on $d$. The case $d = 0$ holds because
+in that case $X$ is the spectrum of a finite dimensional $k$-algebra $A$
+(Varieties, Lemma \ref{varieties-lemma-algebraic-scheme-dim-0})
+and every coherent sheaf $\mathcal{F}$ corresponds to a finite $A$-module
+$M = H^0(X, \mathcal{F})$ which has $\dim_k M < \infty$.
+
+\medskip\noindent
+Assume $d > 0$ and the result has been shown for separated schemes
+of finite type of dimension $< d$. The scheme $X$ is Noetherian. Consider
+the property $\mathcal{P}$ of coherent sheaves on $X$ defined by the rule
+$$
+\mathcal{P}(\mathcal{F}) \Leftrightarrow
+\dim_k H^d(X, \mathcal{F}) < \infty
+$$
+We are going to use the result of
+Cohomology of Schemes, Lemma \ref{coherent-lemma-property-initial}
+to prove that $\mathcal{P}$ holds for every coherent sheaf on $X$.
+
+\medskip\noindent
+Let
+$$
+0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{F}_2 \to 0
+$$
+be a short exact sequence of coherent sheaves on $X$.
+Consider the long exact sequence of cohomology
+$$
+H^d(X, \mathcal{F}_1) \to
+H^d(X, \mathcal{F}) \to
+H^d(X, \mathcal{F}_2)
+$$
+Thus if $\mathcal{P}$ holds for $\mathcal{F}_1$ and $\mathcal{F}_2$,
+then it holds for $\mathcal{F}$.
+
+\medskip\noindent
+Let $Z \subset X$ be an integral closed subscheme. Let $\mathcal{I}$
+be a coherent sheaf of ideals on $Z$. To finish the proof we have to show
+that $H^d(X, i_*\mathcal{I}) = H^d(Z, \mathcal{I})$ is finite dimensional.
+If $\dim(Z) < d$, then the result holds because the cohomology group
+will be zero (Cohomology, Proposition
+\ref{cohomology-proposition-vanishing-Noetherian}).
+In this way we reduce to the situation discussed in the following paragraph.
+
+\medskip\noindent
+Assume $X$ is a variety of dimension $d$ and
+$\mathcal{F} = \mathcal{I}$ is a coherent ideal sheaf. In this
+case we have a short exact sequence
+$$
+0 \to \mathcal{I} \to \mathcal{O}_X \to i_*\mathcal{O}_Z \to 0
+$$
+where $i : Z \to X$ is the closed subscheme defined by $\mathcal{I}$.
+By induction hypothesis we see that
+$H^{d - 1}(Z, \mathcal{O}_Z) = H^{d - 1}(X, i_*\mathcal{O}_Z)$ is
+finite dimensional. Thus we see that it suffices to prove the result
+for the structure sheaf.
+
+\medskip\noindent
+We can apply Chow's lemma
+(Cohomology of Schemes, Lemma \ref{coherent-lemma-chow-Noetherian})
+to the morphism $X \to \Spec(k)$. Thus we get a diagram
+$$
+\xymatrix{
+X \ar[rd]_g & X' \ar[d]^-{g'} \ar[l]^\pi \ar[r]_i & \mathbf{P}^n_k \ar[dl] \\
+& \Spec(k) &
+}
+$$
+as in the statement of Chow's lemma. Also, let $U \subset X$ be
+the dense open subscheme such that $\pi^{-1}(U) \to U$ is an isomorphism.
+We may assume $X'$ is a variety as well, see
+Cohomology of Schemes, Remark \ref{coherent-remark-chow-Noetherian}.
+The morphism $i' = (i, \pi) : X' \to \mathbf{P}^n_X$ is
+a closed immersion (loc. cit.). Hence
+$$
+\mathcal{L} = i^*\mathcal{O}_{\mathbf{P}^n_k}(1) \cong
+(i')^*\mathcal{O}_{\mathbf{P}^n_X}(1)
+$$
+is $\pi$-relatively ample (for example by
+Morphisms, Lemma \ref{morphisms-lemma-characterize-ample-on-finite-type}).
+Hence by Cohomology of Schemes, Lemma \ref{coherent-lemma-kill-by-twisting}
+there exists an $n \geq 0$ such that
+$R^p\pi_*\mathcal{L}^{\otimes n} = 0$ for all $p > 0$.
+Set $\mathcal{G} = \pi_*\mathcal{L}^{\otimes n}$.
+Choose any nonzero global section $s$ of $\mathcal{L}^{\otimes n}$.
+Since $\mathcal{G} = \pi_*\mathcal{L}^{\otimes n}$, the section $s$
+corresponds to a section of $\mathcal{G}$, i.e., a map
+$\mathcal{O}_X \to \mathcal{G}$.
+Since $s|_{\pi^{-1}(U)} \not = 0$ (as $X'$ is a variety and
+$\mathcal{L}$ is invertible), we see that
+$\mathcal{O}_X|_U \to \mathcal{G}|_U$
+is nonzero. As $\mathcal{G}|_U = \mathcal{L}^{\otimes n}|_{\pi^{-1}(U)}$
+is invertible we conclude that we have a short exact sequence
+$$
+0 \to \mathcal{O}_X \to \mathcal{G} \to \mathcal{Q} \to 0
+$$
+where $\mathcal{Q}$ is coherent and supported on a proper
+closed subscheme of $X$. Arguing as before using our induction
+hypothesis, we see that it
+suffices to prove $\dim H^d(X, \mathcal{G}) < \infty$.
+
+\medskip\noindent
+By the Leray spectral sequence
+(Cohomology, Lemma \ref{cohomology-lemma-apply-Leray})
+we see that $H^d(X, \mathcal{G}) = H^d(X', \mathcal{L}^{\otimes n})$.
+Let $\overline{X}' \subset \mathbf{P}^n_k$ be the closure
+of $X'$. Then $\overline{X}'$ is a projective variety of dimension $d$
+over $k$ and $X' \subset \overline{X}'$ is a dense open.
+The invertible sheaf $\mathcal{L}^{\otimes n}$ is the restriction of
+$\mathcal{O}_{\overline{X}'}(n)$ to $X'$. By
+Cohomology, Proposition
+\ref{cohomology-proposition-cohomological-dimension-spectral}
+the map
+$$
+H^d(\overline{X}', \mathcal{O}_{\overline{X}'}(n))
+\longrightarrow
+H^d(X', \mathcal{L}^{\otimes n})
+$$
+is surjective. Since the cohomology group on the left has
+finite dimension by
+Cohomology of Schemes, Lemma \ref{coherent-lemma-coherent-projective}
+the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-variety-char-p-p}
+Let $X$ be separated of finite type over an algebraically closed
+field $k$ of characteristic $p > 0$. Then
+$H_\etale^q(X, \underline{\mathbf{Z}/p\mathbf{Z}}) = 0$ for
+$q \geq \dim(X) + 1$.
+\end{lemma}
+
+\begin{proof}
+Let $d = \dim(X)$. By the vanishing established in
+Lemma \ref{lemma-vanishing-affine-char-p-p}
+it suffices to show that
+$H_\etale^{d + 1}(X, \underline{\mathbf{Z}/p\mathbf{Z}}) = 0$.
+By Lemma \ref{lemma-top-cohomology-coherent} we see that
+$H^d(X, \mathcal{O}_X)$ is a finite dimensional $k$-vector space.
+Hence the long exact cohomology sequence associated to the
+Artin-Schreier sequence ends with
+$$
+H^d(X, \mathcal{O}_X) \xrightarrow{F - 1}
+H^d(X, \mathcal{O}_X) \to H^{d + 1}_\etale(X, \underline{\mathbf{Z}/p\mathbf{Z}}) \to 0
+$$
+By Lemma \ref{lemma-F-1} the map $F - 1$ in this sequence is surjective.
+This proves the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finiteness-proper-variety-char-p-p}
+Let $X$ be a proper scheme over an algebraically closed
+field $k$ of characteristic $p > 0$. Then
+\begin{enumerate}
+\item $H_\etale^q(X, \underline{\mathbf{Z}/p\mathbf{Z}})$
+is a finite $\mathbf{Z}/p\mathbf{Z}$-module for all $q$, and
+\item $H^q_\etale(X, \underline{\mathbf{Z}/p\mathbf{Z}}) \to
+H^q_\etale(X_{k'}, \underline{\mathbf{Z}/p\mathbf{Z}})$
+is an isomorphism if $k'/k$ is an extension of algebraically
+closed fields.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Cohomology of Schemes, Lemma
+\ref{coherent-lemma-proper-over-affine-cohomology-finite}
+and the comparison of cohomology of
+Theorem \ref{theorem-zariski-fpqc-quasi-coherent} the cohomology groups
+$H^q_\etale(X, \mathbf{G}_a) = H^q(X, \mathcal{O}_X)$ are
+finite dimensional $k$-vector spaces. Hence by
+Lemma \ref{lemma-F-1} the long exact cohomology sequence
+associated to the Artin-Schreier sequence splits into
+short exact sequences
+$$
+0 \to H_\etale^q(X, \underline{\mathbf{Z}/p\mathbf{Z}}) \to
+H^q(X, \mathcal{O}_X) \xrightarrow{F - 1} H^q(X, \mathcal{O}_X) \to 0
+$$
+and moreover the $\mathbf{F}_p$-dimension of the cohomology groups
+$H_\etale^q(X, \underline{\mathbf{Z}/p\mathbf{Z}})$ is at most the
+$k$-dimension of the vector space $H^q(X, \mathcal{O}_X)$.
+This proves the first statement. The second statement follows
+as $H^q(X, \mathcal{O}_X) \otimes_k k' \to H^q(X_{k'}, \mathcal{O}_{X_{k'}})$
+is an isomorphism by flat base change
+(Cohomology of Schemes,
+Lemma \ref{coherent-lemma-flat-base-change-cohomology}).
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Locally constant sheaves}
+\label{section-locally-constant}
+
+\noindent
+This section is the analogue of
+Modules on Sites, Section \ref{sites-modules-section-locally-constant}
+for the \'etale site.
+
+\begin{definition}
+\label{definition-finite-locally-constant}
+Let $X$ be a scheme.
+Let $\mathcal{F}$ be a sheaf of sets on $X_\etale$.
+\begin{enumerate}
+\item Let $E$ be a set. We say $\mathcal{F}$ is the
+{\it constant sheaf with value $E$} if $\mathcal{F}$ is the
+sheafification of the presheaf $U \mapsto E$.
+Notation: $\underline{E}_X$ or $\underline{E}$.
+\item We say $\mathcal{F}$ is a {\it constant sheaf} if it is
+isomorphic to a sheaf as in (1).
+\item We say $\mathcal{F}$ is {\it locally constant} if there exists a
+covering $\{U_i \to X\}$ such that $\mathcal{F}|_{U_i}$ is a constant sheaf.
+\item We say that $\mathcal{F}$ is {\it finite locally constant} if it
+is locally constant and the values are finite sets.
+\end{enumerate}
+Let $\mathcal{F}$ be a sheaf of abelian groups on $X_\etale$.
+\begin{enumerate}
+\item Let $A$ be an abelian group.
+We say $\mathcal{F}$ is the {\it constant sheaf with value $A$} if
+$\mathcal{F}$ is the sheafification of the presheaf $U \mapsto A$.
+Notation: $\underline{A}_X$ or $\underline{A}$.
+\item We say $\mathcal{F}$ is a {\it constant sheaf} if it is isomorphic
+as an abelian sheaf to a sheaf as in (1).
+\item We say $\mathcal{F}$ is {\it locally constant} if there exists a
+covering $\{U_i \to X\}$ such that $\mathcal{F}|_{U_i}$ is a constant sheaf.
+\item We say that $\mathcal{F}$ is {\it finite locally constant} if it
+is locally constant and the values are finite abelian groups.
+\end{enumerate}
+Let $\Lambda$ be a ring. Let $\mathcal{F}$ be a sheaf of $\Lambda$-modules
+on $X_\etale$.
+\begin{enumerate}
+\item Let $M$ be a $\Lambda$-module.
+We say $\mathcal{F}$ is the {\it constant sheaf with value $M$} if
+$\mathcal{F}$ is the sheafification of the presheaf $U \mapsto M$.
+Notation: $\underline{M}_X$ or $\underline{M}$.
+\item We say $\mathcal{F}$ is a {\it constant sheaf} if it is isomorphic
+as a sheaf of $\Lambda$-modules to a sheaf as in (1).
+\item We say $\mathcal{F}$ is {\it locally constant} if there exists a
+covering $\{U_i \to X\}$ such that $\mathcal{F}|_{U_i}$ is a constant sheaf.
+\end{enumerate}
+\end{definition}
+
+\begin{lemma}
+\label{lemma-pullback-locally-constant}
+Let $f : X \to Y$ be a morphism of schemes. If $\mathcal{G}$ is a
+locally constant sheaf of sets, abelian groups, or $\Lambda$-modules
+on $Y_\etale$, the same is true for $f^{-1}\mathcal{G}$
+on $X_\etale$.
+\end{lemma}
+
+\begin{proof}
+Holds for any morphism of topoi, see
+Modules on Sites, Lemma \ref{sites-modules-lemma-pullback-locally-constant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-locally-constant}
+Let $f : X \to Y$ be a finite \'etale morphism of schemes.
+If $\mathcal{F}$ is a (finite) locally constant sheaf of sets,
+(finite) locally constant sheaf of abelian groups, or
+(finite type) locally constant sheaf of $\Lambda$-modules
+on $X_\etale$, the same is true for $f_*\mathcal{F}$
+on $Y_\etale$.
+\end{lemma}
+
+\begin{proof}
+The construction of $f_*$ commutes with \'etale localization.
+A finite \'etale morphism is locally isomorphic to a disjoint union
+of isomorphisms, see
+\'Etale Morphisms, Lemma \ref{etale-lemma-finite-etale-etale-local}.
+Thus the lemma says that if $\mathcal{F}_i$, $i = 1, \ldots, n$
+are (finite) locally constant sheaves of sets, then
+$\prod_{i = 1, \ldots, n} \mathcal{F}_i$ is too.
+This is clear. Similarly for sheaves of abelian groups and modules.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-finite-locally-constant}
+Let $X$ be a scheme and $\mathcal{F}$ a sheaf of sets on $X_\etale$.
+Then the following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is finite locally constant, and
+\item $\mathcal{F} = h_U$ for some finite \'etale morphism $U \to X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+A finite \'etale morphism is locally isomorphic to a disjoint union
+of isomorphisms, see
+\'Etale Morphisms, Lemma \ref{etale-lemma-finite-etale-etale-local}.
+Thus (2) implies (1). Conversely, if $\mathcal{F}$ is finite locally
+constant, then there exists an \'etale covering $\{X_i \to X\}$ such that
+$\mathcal{F}|_{X_i}$ is representable by $U_i \to X_i$ finite \'etale.
+Arguing exactly as in the proof of
+Descent, Lemma \ref{descent-lemma-descent-data-sheaves}
+we obtain a descent datum for schemes $(U_i, \varphi_{ij})$ relative to
+$\{X_i \to X\}$ (details omitted). This descent datum is effective for
+example by Descent, Lemma \ref{descent-lemma-affine}
+and the resulting morphism of schemes $U \to X$ is finite \'etale
+by Descent, Lemmas \ref{descent-lemma-descending-property-finite} and
+\ref{descent-lemma-descending-property-etale}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-morphism-locally-constant}
+Let $X$ be a scheme.
+\begin{enumerate}
+\item Let $\varphi : \mathcal{F} \to \mathcal{G}$ be a map
+of locally constant sheaves of sets on $X_\etale$.
+If $\mathcal{F}$ is finite locally constant, there exists an
+\'etale covering $\{U_i \to X\}$ such that
+$\varphi|_{U_i}$ is the map of constant sheaves associated to
+a map of sets.
+\item Let $\varphi : \mathcal{F} \to \mathcal{G}$ be a map
+of locally constant sheaves of abelian groups on $X_\etale$.
+If $\mathcal{F}$ is finite locally constant, there exists an \'etale
+covering $\{U_i \to X\}$ such that $\varphi|_{U_i}$ is the map of
+constant abelian sheaves associated to a map of abelian groups.
+\item Let $\Lambda$ be a ring.
+Let $\varphi : \mathcal{F} \to \mathcal{G}$ be a map
+of locally constant sheaves of $\Lambda$-modules on $X_\etale$.
+If $\mathcal{F}$ is of finite type, then there exists an \'etale covering
+$\{U_i \to X\}$ such that $\varphi|_{U_i}$ is the map of constant
+sheaves of $\Lambda$-modules associated to a map of $\Lambda$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This holds on any site, see
+Modules on Sites, Lemma \ref{sites-modules-lemma-morphism-locally-constant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kernel-finite-locally-constant}
+Let $X$ be a scheme.
+\begin{enumerate}
+\item The category of finite locally constant sheaves of sets
+is closed under finite limits and colimits inside $\Sh(X_\etale)$.
+\item The category of finite locally constant abelian sheaves is a
+weak Serre subcategory of $\textit{Ab}(X_\etale)$.
+\item Let $\Lambda$ be a Noetherian ring. The category of
+finite type, locally constant sheaves of $\Lambda$-modules on
+$X_\etale$ is a weak Serre subcategory of
+$\textit{Mod}(X_\etale, \Lambda)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This holds on any site, see
+Modules on Sites, Lemma
+\ref{sites-modules-lemma-kernel-finite-locally-constant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-product-locally-constant}
+Let $X$ be a scheme. Let $\Lambda$ be a ring.
+The tensor product of two locally constant sheaves of $\Lambda$-modules
+on $X_\etale$ is a locally constant sheaf of $\Lambda$-modules.
+\end{lemma}
+
+\begin{proof}
+This holds on any site, see
+Modules on Sites, Lemma
+\ref{sites-modules-lemma-tensor-product-locally-constant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-connected-locally-constant}
+Let $X$ be a connected scheme. Let $\Lambda$ be a ring and let
+$\mathcal{F}$ be a locally constant sheaf of $\Lambda$-modules.
+Then there exists a $\Lambda$-module $M$ and an \'etale covering
+$\{U_i \to X\}$ such that $\mathcal{F}|_{U_i} \cong \underline{M}|_{U_i}$.
+\end{lemma}
+
+\begin{proof}
+Choose an \'etale covering
+$\{U_i \to X\}$ such that $\mathcal{F}|_{U_i}$ is constant, say
+$\mathcal{F}|_{U_i} \cong \underline{M_i}_{U_i}$.
+Observe that $U_i \times_X U_j$ is empty if $M_i$ is not isomorphic
+to $M_j$.
+For each $\Lambda$-module $M$ let $I_M = \{i \in I \mid M_i \cong M\}$.
+As \'etale morphisms are open we see that
+$U_M = \bigcup_{i \in I_M} \Im(U_i \to X)$
+is an open subset of $X$. Then $X = \coprod U_M$ is a disjoint
+open covering of $X$. As $X$ is connected only one $U_M$ is nonempty
+and the lemma follows.
+\end{proof}
+
+
+
+
+
+
+\section{Locally constant sheaves and the fundamental group}
+\label{section-pione}
+
+\noindent
+We can relate locally constant sheaves to the fundamental group
+of a scheme in some cases.
+
+\begin{lemma}
+\label{lemma-locally-constant-on-connected}
+Let $X$ be a connected scheme. Let $\overline{x}$ be a geometric point of $X$.
+\begin{enumerate}
+\item There is an equivalence of categories
+$$
+\left\{
+\begin{matrix}
+\text{finite locally constant}\\
+\text{sheaves of sets on }X_\etale
+\end{matrix}
+\right\}
+\longleftrightarrow
+\left\{
+\begin{matrix}
+\text{finite }\pi_1(X, \overline{x})\text{-sets}
+\end{matrix}
+\right\}
+$$
+\item There is an equivalence of categories
+$$
+\left\{
+\begin{matrix}
+\text{finite locally constant}\\
+\text{sheaves of abelian groups on }X_\etale
+\end{matrix}
+\right\}
+\longleftrightarrow
+\left\{
+\begin{matrix}
+\text{finite }\pi_1(X, \overline{x})\text{-modules}
+\end{matrix}
+\right\}
+$$
+\item Let $\Lambda$ be a finite ring. There is an equivalence of categories
+$$
+\left\{
+\begin{matrix}
+\text{finite type, locally constant}\\
+\text{sheaves of }\Lambda\text{-modules on }X_\etale
+\end{matrix}
+\right\}
+\longleftrightarrow
+\left\{
+\begin{matrix}
+\text{finite }\pi_1(X, \overline{x})\text{-modules endowed}\\
+\text{with commuting }\Lambda\text{-module structure}
+\end{matrix}
+\right\}
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We observe that $\pi_1(X, \overline{x})$ is a profinite
+topological group, see Fundamental Groups, Definition
+\ref{pione-definition-fundamental-group}.
+The left hand categories are defined in
+Section \ref{section-locally-constant}.
+The notation used in the right hand categories is taken from
+Fundamental Groups, Definition \ref{pione-definition-G-set-continuous}
+for sets and
+Definition \ref{pione-definition-G-module-continuous} for abelian groups.
+This explains the notation.
+
+\medskip\noindent
+Assertion (1) follows from
+Lemma \ref{lemma-characterize-finite-locally-constant}
+and Fundamental Groups, Theorem \ref{pione-theorem-fundamental-group}.
+Parts (2) and (3) follow immediately from this by endowing the underlying
+(sheaves of) sets with additional structure. For example, a finite
+locally constant sheaf of abelian groups on $X_\etale$ is the same thing
+as a finite locally constant sheaf of sets $\mathcal{F}$
+together with a map $+ : \mathcal{F} \times \mathcal{F} \to \mathcal{F}$
+satisfying the usual axioms. The equivalence in (1) sends products
+to products and hence sends $+$ to an addition on the corresponding
+finite $\pi_1(X, \overline{x})$-set. Since $\pi_1(X, \overline{x})$-modules
+are the same thing as $\pi_1(X, \overline{x})$-sets with a compatible
+abelian group structure we obtain (2). Part (3) is proved in
+exactly the same way.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-constant-on-connected-geom-unibranch}
+Let $X$ be an irreducible, geometrically unibranch scheme.
+Let $\overline{x}$ be a geometric point of $X$.
+Let $\Lambda$ be a ring. There is an equivalence of categories
+$$
+\left\{
+\begin{matrix}
+\text{finite type, locally constant}\\
+\text{sheaves of }\Lambda\text{-modules on }X_\etale
+\end{matrix}
+\right\}
+\longleftrightarrow
+\left\{
+\begin{matrix}
+\text{finite }\Lambda\text{-modules }M\text{ endowed}\\
+\text{with a continuous }\pi_1(X, \overline{x})\text{-action}
+\end{matrix}
+\right\}
+$$
+\end{lemma}
+
+\begin{proof}
+The proof given in Lemma \ref{lemma-locally-constant-on-connected}
+does not work as a finite $\Lambda$-module $M$ may not have a finite
+underlying set.
+
+\medskip\noindent
+Let $\nu : X^\nu \to X$ be the normalization morphism. By Morphisms, Lemma
+\ref{morphisms-lemma-normalization-geom-unibranch-univ-homeo}
+this is a universal homeomorphism. By
+Fundamental Groups, Proposition \ref{pione-proposition-universal-homeomorphism}
+this induces an isomorphism
+$\pi_1(X^\nu, \overline{x}) \to \pi_1(X, \overline{x})$
+and by Theorem \ref{theorem-topological-invariance} we get
+an equivalence of category between finite type, locally constant
+$\Lambda$-modules on $X_\etale$ and on $X^\nu_\etale$.
+This reduces us to the case where $X$ is an integral normal scheme.
+
+\medskip\noindent
+Assume $X$ is an integral normal scheme. Let $\eta \in X$ be the generic point.
+Let $\overline{\eta}$ be a geometric point lying over $\eta$.
+By Fundamental Groups, Proposition \ref{pione-proposition-normal}
+we have a continuous surjection
+$$
+\text{Gal}(\kappa(\eta)^{sep}/\kappa(\eta)) = \pi_1(\eta, \overline{\eta})
+\longrightarrow
+\pi_1(X, \overline{\eta})
+$$
+whose kernel is described in Fundamental Groups, Lemma
+\ref{pione-lemma-normal-pione-quotient-inertia}.
+Let $\mathcal{F}$ be a finite type, locally constant sheaf of $\Lambda$-modules
+on $X_\etale$. Let $M = \mathcal{F}_{\overline{\eta}}$
+be the stalk of $\mathcal{F}$ at $\overline{\eta}$.
+We obtain a continuous action of $\text{Gal}(\kappa(\eta)^{sep}/\kappa(\eta))$
+on $M$ by Section \ref{section-galois-action-stalks}.
+Our goal is to show that this action factors through the
+displayed surjection.
+Since $\mathcal{F}$ is of finite type, $M$ is a finite $\Lambda$-module.
+Since $\mathcal{F}$ is locally constant, for every $x \in X$
+the restriction of $\mathcal{F}$ to $\Spec(\mathcal{O}_{X, x}^{sh})$
+is constant. Hence the action of $\text{Gal}(K^{sep}/K_x^{sh})$
+(with notation as in Fundamental Groups, Lemma
+\ref{pione-lemma-normal-pione-quotient-inertia})
+on $M$ is trivial. We conclude we have the factorization as desired.
+
+\medskip\noindent
+On the other hand, suppose we have a finite $\Lambda$-module $M$
+with a continuous action of $\pi_1(X, \overline{\eta})$.
+We are going to construct an $\mathcal{F}$ such that
+$M \cong \mathcal{F}_{\overline{\eta}}$ as
+$\Lambda[\pi_1(X, \overline{\eta})]$-modules.
+Choose generators $m_1, \ldots, m_r \in M$.
+Since the action of $\pi_1(X, \overline{\eta})$ on $M$ is
+continuous, for each $i$ there exists an open subgroup
+$H_i$ of the profinite group $\pi_1(X, \overline{\eta})$
+such that every $\gamma \in H_i$ fixes $m_i$. We conclude that
+every element of the open subgroup $H = \bigcap_{i = 1, \ldots, r} H_i$
+fixes every element of $M$. After shrinking $H$ we may assume $H$
+is an open normal subgroup of $\pi_1(X, \overline{\eta})$.
+Set $G = \pi_1(X, \overline{\eta})/H$. Let $f : Y \to X$ be the
+corresponding Galois finite \'etale $G$-cover.
+We can view $f_*\underline{\mathbf{Z}}$ as a sheaf of
+$\mathbf{Z}[G]$-modules on $X_\etale$. Then we just take
+$$
+\mathcal{F} =
+f_*\underline{\mathbf{Z}} \otimes_{\underline{\mathbf{Z}[G]}} \underline{M}
+$$
+We leave it to the reader to compute
+$\mathcal{F}_{\overline{\eta}}$.
+We also omit the verification that this construction
+is the inverse to the construction in the previous paragraph.
+\end{proof}
+
+\begin{remark}
+\label{remark-functorial-locally-constant-on-connected}
+The equivalences of Lemmas \ref{lemma-locally-constant-on-connected} and
+\ref{lemma-locally-constant-on-connected-geom-unibranch}
+are compatible with pullbacks. For example, suppose $f : Y \to X$
+is a morphism of connected schemes. Let $\overline{y}$ be a geometric
+point of $Y$ and set $\overline{x} = f(\overline{y})$.
+Then the diagram
+$$
+\xymatrix{
+\text{finite locally constant sheaves of sets on }Y_\etale
+\ar[r] &
+\text{finite }\pi_1(Y, \overline{y})\text{-sets} \\
+\text{finite locally constant sheaves of sets on }X_\etale
+\ar[r] \ar[u]_{f^{-1}} &
+\text{finite }\pi_1(X, \overline{x})\text{-sets} \ar[u]
+}
+$$
+is commutative, where the vertical arrow on the right comes
+from the continuous homomorphism
+$\pi_1(Y, \overline{y}) \to \pi_1(X, \overline{x})$
+induced by $f$. This follows immediately from
+the commutative diagram in
+Fundamental Groups, Theorem \ref{pione-theorem-fundamental-group}.
+A similar result holds for the other cases.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+\section{M\'ethode de la trace}
+\label{section-trace-method}
+
+\noindent
+A reference for this section is \cite[Expos\'e IX, \S 5]{SGA4}.
+The material here will be used in the proof of
+Lemma \ref{lemma-vanishing-easier} below.
+
+\medskip\noindent
+Let $f : Y \to X$ be an \'etale morphism of schemes. There
+is a sequence
+$$
+f_!, f^{-1}, f_*
+$$
+of adjoint functors between
+$\textit{Ab}(X_\etale)$ and $\textit{Ab}(Y_\etale)$.
+The functor $f_!$ is discussed in Section \ref{section-extension-by-zero}.
+The adjunction map $\text{id} \to f_* f^{-1}$ is called {\it restriction}.
+The adjunction map $f_! f^{-1} \to \text{id}$ is often
+called the {\it trace map}. If $f$ is finite \'etale, then $f_* = f_!$
+(Lemma \ref{lemma-shriek-equals-star-finite-etale}) and
+we can view this as a map $f_*f^{-1} \to \text{id}$.
+
+\begin{definition}
+\label{definition-trace-map}
+Let $f : Y \to X$ be a finite \'etale morphism of schemes.
+The map $f_* f^{-1} \to \text{id}$ described above and explicitly below
+is called the {\it trace}.
+\end{definition}
+
+\noindent
+Let $f : Y \to X$ be a finite \'etale morphism of schemes. The trace map is
+characterized by the following two properties:
+\begin{enumerate}
+\item it commutes with \'etale localization on $X$ and
+\item if $Y = \coprod_{i = 1}^d X$ then the trace map is
+the sum map $f_*f^{-1} \mathcal{F} = \mathcal{F}^{\oplus d} \to \mathcal{F}$.
+\end{enumerate}
+By \'Etale Morphisms, Lemma \ref{etale-lemma-finite-etale-etale-local}
+every finite \'etale morphism $f : Y \to X$ is \'etale locally on $X$
+of the form given in (2) for some integer $d \geq 0$. Hence we
+can define the trace map using the characterization given; in particular
+we do not need to know about the existence of $f_!$ and the agreement
+of $f_!$ with $f_*$ in order to construct the trace map.
+This description shows that if $f$ has constant degree $d$, then
+the composition
+$$
+\mathcal{F} \xrightarrow{res}
+f_* f^{-1} \mathcal{F} \xrightarrow{trace}
+\mathcal{F}
+$$
+is multiplication by $d$. The ``m\'ethode de la trace''
+is the following observation: if $\mathcal{F}$
+is an abelian sheaf on $X_\etale$ such that multiplication by $d$
+on $\mathcal{F}$ is an isomorphism, then the map
+$$
+H^n_\etale(X, \mathcal{F}) \longrightarrow H^n_\etale(Y, f^{-1}\mathcal{F})
+$$
+is injective. Namely, we have
+$$
+H^n_\etale(Y, f^{-1}\mathcal{F}) = H^n_\etale(X, f_*f^{-1}\mathcal{F})
+$$
+by the vanishing of the higher direct images
+(Proposition \ref{proposition-finite-higher-direct-image-zero})
+and the Leray spectral sequence
+(Proposition \ref{proposition-leray}).
+Thus we can consider the maps
+$$
+H^n_\etale(X, \mathcal{F}) \to
+H^n_\etale(Y, f^{-1}\mathcal{F})= H^n_\etale(X, f_*f^{-1}\mathcal{F})
+\xrightarrow{trace}
+H^n_\etale(X, \mathcal{F})
+$$
+and the composition is an isomorphism (under our assumption on $\mathcal{F}$
+and $f$). In particular, if
+$H_\etale^q(Y, f^{-1}\mathcal{F}) = 0$ then
+$H_\etale^q(X, \mathcal{F}) = 0$ as well.
+Indeed, multiplication by $d$ induces an
+isomorphism on $H_\etale^q(X, \mathcal{F})$ which factors through
+$H_\etale^q(Y, f^{-1}\mathcal{F})= 0$.
+
+\medskip\noindent
+This is often combined with the following.
+
+\begin{lemma}
+\label{lemma-pullback-filtered}
+Let $S$ be a connected scheme. Let $\ell$ be a prime number. Let
+$\mathcal{F}$ be a finite type, locally constant sheaf of
+$\mathbf{F}_\ell$-vector spaces on $S_\etale$.
+Then there exists a finite \'etale morphism
+$f : T \to S$ of degree prime to $\ell$ such that $f^{-1}\mathcal{F}$
+has a finite filtration whose successive quotients are
+$\underline{\mathbf{Z}/\ell\mathbf{Z}}_T$.
+\end{lemma}
+
+\begin{proof}
+Choose a geometric point $\overline{s}$ of $S$.
+Via the equivalence of Lemma \ref{lemma-locally-constant-on-connected}
+the sheaf $\mathcal{F}$ corresponds to a finite dimensional
+$\mathbf{F}_\ell$-vector space $V$ with a continuous
+$\pi_1(S, \overline{s})$-action.
+Let $G \subset \text{Aut}(V)$ be the image of the homomorphism
+$\rho : \pi_1(S, \overline{s}) \to \text{Aut}(V)$ giving the action.
+Observe that $G$ is finite.
+The surjective continuous homomorphism
+$\overline{\rho} : \pi_1(S, \overline{s}) \to G$
+corresponds to a Galois object $Y \to S$ of
+$\textit{F\'Et}_S$ with automorphism group $G = \text{Aut}(Y/S)$, see
+Fundamental Groups, Section \ref{pione-section-finite-etale-under-galois}.
+Let $H \subset G$ be an $\ell$-Sylow subgroup.
+We claim that $T = Y/H \to S$ works. Namely, let $\overline{t} \in T$
+be a geometric point over $\overline{s}$. The image of
+$\pi_1(T, \overline{t}) \to \pi_1(S, \overline{s})$
+is $(\overline{\rho})^{-1}(H)$ as follows from the functorial
+nature of fundamental groups. Hence the action of $\pi_1(T, \overline{t})$
+on $V$ corresponding to $f^{-1}\mathcal{F}$ is through
+the map $\pi_1(T, \overline{t}) \to H$, see
+Remark \ref{remark-functorial-locally-constant-on-connected}. As
+$H$ is a finite $\ell$-group, the irreducible constituents of the
+representation $\rho|_{\pi_1(T, \overline{t})}$
+are each trivial of rank $1$ (this is a simple lemma on
+representation theory of finite groups; insert future reference here).
+Via the equivalence of
+Lemma \ref{lemma-locally-constant-on-connected}
+this means $f^{-1}\mathcal{F}$ is a successive extension of
+constant sheaves with value $\underline{\mathbf{Z}/\ell\mathbf{Z}}_T$.
+Moreover the degree of $T = Y/H \to S$ is prime to $\ell$
+as it is equal to the index of $H$ in $G$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-action-l-group}
+Let $\Lambda$ be a Noetherian ring.
+Let $\ell$ be a prime number and $n \geq 1$.
+Let $H$ be a finite $\ell$-group.
+Let $M$ be a finite $\Lambda[H]$-module annihilated by $\ell^n$.
+Then there is a finite filtration
+$0 = M_0 \subset M_1 \subset \ldots \subset M_t = M$
+by $\Lambda[H]$-submodules
+such that $H$ acts trivially on $M_{i + 1}/M_i$ for all
+$i = 0, \ldots, t - 1$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: Show that the augmentation ideal $\mathfrak m$
+of the noncommutative ring $\mathbf{Z}/\ell^n\mathbf{Z}[H]$
+is nilpotent.
+\end{proof}
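+
+\medskip\noindent
+To illustrate the hint in the simplest case (only an example; the
+general case is similar): let $H = \langle \sigma \rangle$ be cyclic of
+order $\ell$ and $n = 1$. The augmentation ideal
+$\mathfrak m \subset \mathbf{F}_\ell[H]$ is generated by $\sigma - 1$
+and
+$$
+(\sigma - 1)^\ell = \sigma^\ell - 1 = 0
+$$
+in $\mathbf{F}_\ell[H]$ because the binomial coefficients
+$\binom{\ell}{i}$ for $0 < i < \ell$ vanish modulo $\ell$. Hence
+$\mathfrak m^\ell = 0$ and the filtration
+$M \supset \mathfrak m M \supset \ldots \supset \mathfrak m^\ell M = 0$
+has trivial $H$-action on its successive quotients.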
+
+\begin{lemma}
+\label{lemma-pullback-filtered-modules}
+Let $S$ be an irreducible, geometrically unibranch scheme.
+Let $\ell$ be a prime number and $n \geq 1$. Let $\Lambda$
+be a Noetherian ring. Let $\mathcal{F}$ be a finite type,
+locally constant sheaf of $\Lambda$-modules on $S_\etale$
+which is annihilated by $\ell^n$. Then there exists a
+finite \'etale morphism $f : T \to S$ of degree prime to $\ell$
+such that $f^{-1}\mathcal{F}$ has a finite filtration whose
+successive quotients are of the form $\underline{M}_T$
+for some finite $\Lambda$-modules $M$.
+\end{lemma}
+
+\begin{proof}
+Choose a geometric point $\overline{s}$ of $S$.
+Via the equivalence of
+Lemma \ref{lemma-locally-constant-on-connected-geom-unibranch}
+the sheaf $\mathcal{F}$ corresponds to a finite $\Lambda$-module
+$M$ with a continuous $\pi_1(S, \overline{s})$-action.
+Let $G \subset \text{Aut}(M)$ be the image of the homomorphism
+$\rho : \pi_1(S, \overline{s}) \to \text{Aut}(M)$ giving the action.
+Observe that $G$ is finite as $M$ is a finite $\Lambda$-module
+(see proof of Lemma \ref{lemma-locally-constant-on-connected-geom-unibranch}).
+The surjective continuous homomorphism
+$\overline{\rho} : \pi_1(S, \overline{s}) \to G$
+corresponds to a Galois object $Y \to S$ of
+$\textit{F\'Et}_S$ with automorphism group $G = \text{Aut}(Y/S)$, see
+Fundamental Groups, Section \ref{pione-section-finite-etale-under-galois}.
+Let $H \subset G$ be an $\ell$-Sylow subgroup.
+We claim that $T = Y/H \to S$ works. Namely, let $\overline{t} \in T$
+be a geometric point over $\overline{s}$. The image of
+$\pi_1(T, \overline{t}) \to \pi_1(S, \overline{s})$
+is $(\overline{\rho})^{-1}(H)$ as follows from the functorial
+nature of fundamental groups. Hence the action of $\pi_1(T, \overline{t})$
+on $M$ corresponding to $f^{-1}\mathcal{F}$ is through
+the map $\pi_1(T, \overline{t}) \to H$, see
+Remark \ref{remark-functorial-locally-constant-on-connected}.
+Let $0 = M_0 \subset M_1 \subset \ldots \subset M_t = M$
+be as in Lemma \ref{lemma-action-l-group}.
+This induces a filtration
+$0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \ldots
+\subset \mathcal{F}_t = f^{-1}\mathcal{F}$
+such that the successive quotients are constant with
+value $M_{i + 1}/M_i$.
+Finally, the degree of $T = Y/H \to S$ is prime to $\ell$
+as it is equal to the index of $H$ in $G$.
+\end{proof}
+
+
+
+
+
+
+\section{Galois cohomology}
+\label{section-galois-cohomology}
+
+\noindent
+In this section we prove a result on Galois cohomology
+(Proposition \ref{proposition-serre-galois})
+using \'etale cohomology and the trick from
+Section \ref{section-trace-method}.
+This will allow us to prove vanishing of higher \'etale cohomology groups
+over the spectrum of a field.
+
+\begin{lemma}
+\label{lemma-nonvanishing-inherited}
+Let $\ell$ be a prime number and $n$ an integer $> 0$.
+Let $S$ be a quasi-compact and quasi-separated scheme.
+Let $X = \lim_{i \in I} X_i$ be the limit of a
+directed system of $S$-schemes each $X_i \to S$
+being finite \'etale of constant degree relatively prime to $\ell$.
+The following are equivalent:
+\begin{enumerate}
+\item there exists an $\ell$-power torsion sheaf
+$\mathcal{G}$ on $S$ such that $H_\etale^n(S, \mathcal{G}) \neq 0$ and
+\item there exists an $\ell$-power torsion sheaf $\mathcal{F}$ on $X$
+such that $H_\etale^n(X, \mathcal{F}) \neq 0$.
+\end{enumerate}
+In fact, given
+$\mathcal{G}$ we can take $\mathcal{F} = g^{-1}\mathcal{G}$
+and given
+$\mathcal{F}$ we can take $\mathcal{G} = g_*\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Let $g : X \to S$ and $g_i : X_i \to S$ denote the structure morphisms.
+Fix an $\ell$-power torsion sheaf $\mathcal{G}$ on $S$
+with $H^n_\etale(S, \mathcal{G}) \not = 0$.
+The system given by $\mathcal{G}_i = g_i^{-1}\mathcal{G}$
+satisfies the conditions of Theorem \ref{theorem-colimit}
+with colimit sheaf given by $g^{-1}\mathcal{G}$. This tells
+us that:
+$$
+\colim_{i\in I} H^n_\etale(X_i, g_i^{-1}\mathcal{G}) =
+H^n_\etale(X, g^{-1}\mathcal{G})
+$$
+Since each $g_i$ is a finite \'etale morphism of degree prime
+to $\ell$ we can apply ``la m\'ethode de la trace'' and we find
+the maps
+$$
+H^n_\etale(S, \mathcal{G}) \to H^n_\etale(X_i, g_i^{-1}\mathcal{G})
+$$
+are all injective (and compatible with the transition maps).
+See Section \ref{section-trace-method}. Thus, the colimit is non-zero, i.e.,
+$H^n(X,g^{-1}\mathcal{G}) \neq 0$, giving us the desired result with
+$\mathcal{F} = g^{-1}\mathcal{G}$.
+
+\medskip\noindent
+Conversely, suppose given an $\ell$-power torsion sheaf $\mathcal{F}$ on $X$
+with $H^n_\etale(X, \mathcal{F}) \not = 0$. We note that since the $g_i$
+are finite morphisms the higher direct images vanish
+(Proposition \ref{proposition-finite-higher-direct-image-zero}).
+Then, by applying Lemma \ref{lemma-relative-colimit}
+we may also conclude the same for $g$.
+The vanishing of the higher direct images tells us that
+$H^n_\etale(X, \mathcal{F}) = H^n(S, g_*\mathcal{F}) \neq 0$
+by Leray (Proposition \ref{proposition-leray})
+giving us what we want with $\mathcal{G} = g_*\mathcal{F}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduce-to-l-group}
+Let $\ell$ be a prime number and $n$ an integer $> 0$.
+Let $K$ be a field with $G = \text{Gal}(K^{sep}/K)$ and let
+$H \subset G$ be a maximal pro-$\ell$ subgroup with $L/K$
+being the corresponding field extension. Then
+$H^n_\etale(\Spec(K), \mathcal{F}) = 0$ for all
+$\ell$-power torsion $\mathcal{F}$ if and only if
+$H^n_\etale(\Spec(L), \underline{\mathbf{Z}/\ell\mathbf{Z}}) = 0$.
+\end{lemma}
+
+\begin{proof}
+Write $L = \bigcup L_i$ as the union of its finite subextensions over $K$.
+Our choice of $H$ implies that $[L_i : K]$ is prime to $\ell$.
+Thus $\Spec(L) = \lim_{i \in I} \Spec(L_i)$ as in
+Lemma \ref{lemma-nonvanishing-inherited}.
+Thus we may replace $K$ by $L$ and assume that
+the absolute Galois group $G$ of $K$ is a
+profinite pro-$\ell$ group.
+
+\medskip\noindent
+Assume $H^n(\Spec(K), \underline{\mathbf{Z}/\ell\mathbf{Z}}) = 0$.
+Let $\mathcal{F}$ be an $\ell$-power torsion sheaf on $\Spec(K)_\etale$.
+We will show that $H^n_\etale(\Spec(K), \mathcal{F}) = 0$.
+By the correspondence specified in
+Lemma \ref{lemma-equivalence-abelian-sheaves-point}
+our sheaf $\mathcal{F}$ corresponds to an $\ell$-power torsion
+$G$-module $M$. Any finite set of elements $x_1, \ldots, x_m \in M$
+must be fixed by an open subgroup $U$ by continuity.
+Let $M'$ be the module spanned by the orbits of $x_1, \ldots, x_m$.
+This is a finite abelian $\ell$-group
+as each $x_i$ is killed by a power of $\ell$
+and the orbits are finite. Since $M$ is the filtered colimit of
+these submodules $M'$, we see that $\mathcal{F}$ is the filtered
+colimit of the corresponding subsheaves $\mathcal{F}' \subset \mathcal{F}$.
+Applying Theorem \ref{theorem-colimit} to this colimit, we reduce
+to the case where $\mathcal{F}$ is a finite locally constant sheaf.
+
+\medskip\noindent
+Let $M$ be a finite abelian $\ell$-group with a continuous action
+of the profinite pro-$\ell$ group $G$. Then there is a $G$-invariant
+filtration
+$$
+0 = M_0 \subset M_1 \subset \ldots \subset M_r = M
+$$
+such that $M_{i + 1}/M_i \cong \mathbf{Z}/\ell \mathbf{Z}$ with
+trivial $G$-action (this is a simple lemma on representation
+theory of finite groups; insert future reference here).
+Thus the corresponding sheaf $\mathcal{F}$ has a filtration
+$$
+0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \ldots \subset
+\mathcal{F}_r = \mathcal{F}
+$$
+with successive quotients isomorphic to
+$\underline{\mathbf{Z}/\ell \mathbf{Z}}$.
+Thus by induction and the long exact cohomology
+sequence we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduce-to-l-group-higher}
+Let $\ell$ be a prime number and $n$ an integer $> 0$.
+Let $K$ be a field with $G = \text{Gal}(K^{sep}/K)$ and let
+$H \subset G$ be a maximal pro-$\ell$ subgroup
+with $L/K$ being the corresponding field extension.
+Then $H^q_\etale(\Spec(K),\mathcal{F}) = 0$ for $q \geq n$ and all
+$\ell$-power torsion sheaves $\mathcal{F}$ if and only if
+$H^n_\etale(\Spec(L), \underline{\mathbf{Z}/\ell\mathbf{Z}}) = 0$.
+\end{lemma}
+
+\begin{proof}
+The forward direction is trivial, so we need only prove the reverse direction.
+We proceed by induction on $q$. The case of $q = n$ is
+Lemma \ref{lemma-reduce-to-l-group}. Now let
+$\mathcal{F}$ be an $\ell$-power torsion sheaf on $\Spec(K)$.
+Let $f : \Spec(K^{sep}) \rightarrow \Spec(K)$
+be the inclusion of a geometric point.
+Then consider the exact sequence:
+$$
+0 \rightarrow \mathcal{F} \xrightarrow{res}
+f_* f^{-1} \mathcal{F} \rightarrow f_* f^{-1} \mathcal{F}/\mathcal{F}
+\rightarrow 0
+$$
+Note that $K^{sep}$ may be written as the filtered colimit of finite
+separable extensions. Thus $f$
+is the limit of a directed system of finite \'etale morphisms.
+We may, as was seen in the proof of
+Lemma \ref{lemma-nonvanishing-inherited}, conclude that $f$ has
+vanishing higher direct images. Thus, we may express the higher cohomology of
+$f_* f^{-1} \mathcal{F}$ as the higher cohomology on the geometric point which
+clearly vanishes. Hence, as everything here is still $\ell$-torsion, we may use
+the inductive hypothesis in conjunction with the long-exact cohomology sequence
+to conclude the result for $q + 1$.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-serre-galois}
+\begin{reference}
+\cite[Chapter II, Section 3, Proposition 5]{SerreGaloisCohomology}
+\end{reference}
+Let $K$ be a field with separable algebraic closure $K^{sep}$.
+Assume that for any finite extension $K'$ of $K$ we have
+$\text{Br}(K') = 0$. Then
+\begin{enumerate}
+\item $H^q(\text{Gal}(K^{sep}/K), (K^{sep})^*) = 0$
+for all $q \geq 1$, and
+\item $H^q(\text{Gal}(K^{sep}/K), M) = 0$
+for any torsion $\text{Gal}(K^{sep}/K)$-module $M$ and any $q \geq 2$.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Set $p = \text{char}(K)$.
+By Lemma \ref{lemma-compare-cohomology-point},
+Theorem \ref{theorem-brauer-delta},
+and Example \ref{example-sheaves-point}
+the proposition is equivalent to showing that if
+$H^2(\Spec(K'),\mathbf{G}_m|_{\Spec(K')_\etale}) = 0$
+for all finite extensions $K'/K$ then:
+\begin{itemize}
+\item $H^q(\Spec(K),\mathbf{G}_m|_{\Spec(K)_\etale}) = 0$
+for all $q \geq 1$, and
+\item $H^q(\Spec(K),\mathcal{F}) = 0$
+for any torsion sheaf $\mathcal{F}$ and any $q \geq 2$.
+\end{itemize}
+We prove the second part first.
+Since $\mathcal{F}$ is a torsion sheaf, we may use the
+$\ell$-primary decomposition as well as the compatibility of
+cohomology with colimits (i.e., direct sums, see Theorem \ref{theorem-colimit})
+to reduce to showing $H^q(\Spec(K),\mathcal{F}) = 0$, $q \geq 2$
+for all $\ell$-power torsion sheaves for every prime $\ell$. This
+allows us to analyze each prime individually.
+
+\medskip\noindent
+Suppose that $\ell \neq p$. For any extension $K'/K$
+consider the Kummer sequence (Lemma \ref{lemma-kummer-sequence})
+$$
+0 \to
+\mu_{\ell, \Spec{K'}} \to
+\mathbf{G}_{m, \Spec{K'}} \xrightarrow{(\cdot)^{\ell}}
+\mathbf{G}_{m, \Spec{K'}} \to 0
+$$
+We have $H^q(\Spec{K'},\mathbf{G}_m|_{\Spec(K')_\etale}) = 0$
+for $q = 2$ by assumption and for $q = 1$ by
+Theorem \ref{theorem-picard-group} combined with
+$\Pic(K') = (0)$. Thus, by the long-exact cohomology
+sequence we may conclude that $H^2(\Spec{K'}, \mu_\ell) = 0$ for
+any separable $K'/K$. Now let $H$ be a maximal pro-$\ell$ subgroup
+of the absolute Galois group of $K$ and let $L$ be the
+corresponding extension. We can write $L$ as the colimit of finite extensions,
+applying Theorem \ref{theorem-colimit} to this colimit we see that
+$H^2(\Spec(L), \mu_\ell) = 0$.
+Now $\mu_\ell$ must be the constant sheaf. If it weren't, there
+would exist a nontrivial Galois extension of $L$ of degree relatively
+prime to $\ell$ (namely, the extension obtained by adjoining the
+$\ell$th roots of unity to $L$), which is impossible by definition of $L$.
+Hence, via Lemma \ref{lemma-reduce-to-l-group-higher},
+we conclude the result for $\ell \neq p$.
+
+\medskip\noindent
+Now suppose that $\ell = p$. We consider the
+Artin-Schreier exact sequence (Section \ref{section-artin-schreier})
+$$
+0 \longrightarrow \underline{\mathbf{Z}/p\mathbf{Z}}_{\Spec{K}} \longrightarrow
+\mathbf{G}_{a, \Spec{K}} \xrightarrow{F-1} \mathbf{G}_{a, \Spec{K}}
+\longrightarrow 0
+$$
+where $F - 1$ is the map $x \mapsto x^p - x$. Then note that the higher
+cohomology of $\mathbf{G}_{a, \Spec{K}}$ vanishes, by
+Remark \ref{remark-special-case-fpqc-cohomology-quasi-coherent} and the
+vanishing of the higher cohomology of the structure sheaf of an affine scheme
+(Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherent-affine-cohomology-zero}).
+Note this can be applied to any field of
+characteristic $p$. In particular, we can apply it to the field extension $L$
+defined by a maximal pro-$p$ subgroup $H$. This allows us to conclude
+$H^n(\Spec{L}, \underline{\mathbf{Z}/p\mathbf{Z}}_{\Spec{L}}) = 0$
+for $n \geq 2$, from which the result follows for $\ell = p$, by
+Lemma \ref{lemma-reduce-to-l-group-higher}.
+
+\medskip\noindent
+To finish the proof we still have to show that
+$H^q(\text{Gal}(K^{sep}/K), (K^{sep})^*) = 0$ for all $q \geq 1$.
+Set $G = \text{Gal}(K^{sep}/K)$ and set $M = (K^{sep})^*$
+viewed as a $G$-module. We have already shown (above) that
+$H^1(G, M) = 0$ and $H^2(G, M) = 0$. Consider the exact sequence
+$$
+0 \to A \to M \to M \otimes \mathbf{Q} \to B \to 0
+$$
+of $G$-modules. By the above we have $H^i(G, A) = 0$
+and $H^i(G, B) = 0$ for $i > 1$ since $A$ and $B$ are
+torsion $G$-modules. By
+Lemma \ref{lemma-profinite-group-cohomology-torsion}
+we have $H^i(G, M \otimes \mathbf{Q}) = 0$ for $i > 0$.
+It is a pleasant exercise to see that this implies that
+$H^i(G, M) = 0$ also for $i \geq 3$.
+\end{proof}
+
+%10.08.09
+
+\begin{definition}
+\label{definition-Cr}
+A field $K$ is called {\it $C_r$}
+if for all integers $d, n$ with $0 < d^r < n$ and every $f \in K[T_1,
+\ldots, T_n]$ homogeneous of degree $d$, there exist $\alpha = (\alpha_1,
+\ldots, \alpha_n)$, $\alpha_i \in K$ not all zero, such that $f(\alpha) = 0$.
+Such an $\alpha$ is called a {\it nontrivial solution} of $f$.
+\end{definition}
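+
+\noindent
+For example, taking $r = 1$ and $d = 2$: over a $C_1$ field $K$ every
+quadratic form in $n \geq 3$ variables has a nontrivial zero, since
+$d^r = 2 < n$. Concretely, for $a, b, c \in K^*$ the equation
+$$
+a T_1^2 + b T_2^2 + c T_3^2 = 0
+$$
+has a solution $(\alpha_1, \alpha_2, \alpha_3) \neq (0, 0, 0)$
+with $\alpha_i \in K$.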
+
+\begin{example}
+\label{example-algebraically-closed-field-Cr}
+An algebraically closed field is $C_r$.
+\end{example}
+
+\noindent
+In fact, we have the following simple lemma.
+
+\begin{lemma}
+\label{lemma-algebraically-closed-find-solutions}
+Let $k$ be an algebraically closed field. Let
+$f_1, \ldots, f_s \in k[T_1, \ldots, T_n]$
+be homogeneous polynomials of degree $d_1, \ldots, d_s$ with $d_i
+> 0$. If $s < n$, then the system $f_1 = \ldots = f_s = 0$ has a
+nontrivial common solution.
+\end{lemma}
+
+\begin{proof}
+This follows from dimension theory, for example in the form of
+Varieties, Lemma \ref{varieties-lemma-intersection-in-affine-space}
+applied $s - 1$ times.
+\end{proof}
+
+\noindent
+The following result computes the Brauer group of $C_1$ fields.
+
+\begin{theorem}
+\label{theorem-C1-brauer-group-zero}
+Let $K$ be a $C_1$ field. Then $\text{Br}(K) = 0$.
+\end{theorem}
+
+\begin{proof}
+Let $D$ be a finite dimensional division algebra over $K$ with center $K$. We
+have seen that
+$$
+D \otimes_K K^{sep} \cong \text{Mat}_d(K^{sep})
+$$
+uniquely up to inner isomorphism. Hence the determinant $\det :
+\text{Mat}_d(K^{sep}) \to K^{sep}$ is Galois invariant and descends to a
+homogeneous degree $d$ map
+$$
+\det = N_\text{red} : D \longrightarrow K
+$$
+called the {\it reduced norm}. Since $K$ is $C_1$, if $d > 1$, then there
+exists a nonzero $x \in D$ with $N_\text{red}(x) = 0$. This clearly implies
+that $x$ is not invertible, which is a contradiction. Hence $\text{Br}(K) = 0$.
+\end{proof}
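+
+\noindent
+To make the preceding proof explicit in the smallest case (a standard
+example, not needed in what follows): if $\text{char}(K) \neq 2$ and
+$D$ has basis $1, i, j, ij$ over $K$ with $i^2 = a$, $j^2 = b$ and
+$ij = -ji$ (a quaternion algebra), then the reduced norm is
+$$
+N_\text{red}(x + yi + zj + wij) = x^2 - ay^2 - bz^2 + abw^2,
+$$
+a homogeneous form of degree $d = 2$ in $n = 4$ variables. As $2 < 4$,
+over a $C_1$ field this form has a nontrivial zero, so such a $D$ is
+never a division algebra.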
+
+\begin{definition}
+\label{definition-variety}
+Let $k$ be a field. A {\it variety} is a separated, integral scheme of
+finite type over $k$. A {\it curve} is a variety of dimension $1$.
+\end{definition}
+
+\begin{theorem}[Tsen's theorem]
+\label{theorem-tsen}
+The function field of a variety of dimension $r$ over an algebraically closed
+field $k$ is $C_r$.
+\end{theorem}
+
+\begin{proof}
+For projective space one can show directly that the field
+$k(x_1, \ldots, x_r)$ is $C_r$ (exercise).
+
+\medskip\noindent
+General case. Let $X$ be a variety of dimension $r$ with function field
+$k(X)$; without loss of generality, we may assume $X$ is projective.
+Let $f \in k(X)[T_1, \ldots, T_n]_d$ with $0 < d^r < n$.
+Say the coefficients of $f$ are in $\Gamma(X, \mathcal{O}_X(H))$
for some ample divisor $H \subset X$. Let
+$\mathbf{\alpha} = (\alpha_1, \ldots, \alpha_n)$ with $\alpha_i \in \Gamma(X,
+\mathcal{O}_X(eH))$. Then $f(\mathbf{\alpha}) \in \Gamma(X,
+\mathcal{O}_X((de + 1)H))$. Consider the system of equations $f(\mathbf{\alpha})
+=0$. Then by asymptotic Riemann-Roch
+(Varieties, Proposition \ref{varieties-proposition-asymptotic-riemann-roch})
+there exists a $c > 0$ such that
+\begin{itemize}
+\item the number of variables is
+$n\dim_k \Gamma(X, \mathcal{O}_X(eH)) \sim n e^r c$, and
+\item the number of equations is
+$\dim_k \Gamma(X, \mathcal{O}_X((de + 1)H)) \sim (de + 1)^r c$.
+\end{itemize}
Since $n > d^r$, we have $n e^r > (de + 1)^r$ for $e$ large enough, so for
such $e$ there are more variables than equations. The equations are
+homogeneous hence there is a solution by
+Lemma \ref{lemma-algebraically-closed-find-solutions}.
+\end{proof}
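
\noindent
For the convenience of the reader, here is a sketch of the counting argument
for the exercise in the case $r = 1$, i.e., $K = k(x)$. Let
$f \in K[T_1, \ldots, T_n]$ be homogeneous of degree $d < n$; clearing
denominators we may assume the coefficients lie in $k[x]$ and have degree
$\leq h$. Substitute $T_i = \sum_{j = 0}^e a_{ij} x^j$ with unknowns
$a_{ij} \in k$. Then $f(T_1, \ldots, T_n)$ is a polynomial in $x$ of degree
$\leq de + h$ whose coefficients are homogeneous forms of degree $d$ in the
$n(e + 1)$ unknowns $a_{ij}$. The vanishing of $f$ thus amounts to at most
$de + h + 1$ homogeneous equations, and since $n > d$ we have
$n(e + 1) > de + h + 1$ for $e$ large enough. Now apply
Lemma \ref{lemma-algebraically-closed-find-solutions}.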
+
+\begin{lemma}
+\label{lemma-curve-brauer-zero}
+Let $C$ be a curve over an algebraically closed field $k$. Then
+the Brauer group of the function field of $C$ is zero:
+$\text{Br}(k(C)) = 0$.
+\end{lemma}
+
+\begin{proof}
+This is clear from Tsen's theorem,
+Theorem \ref{theorem-tsen} and
+Theorem \ref{theorem-C1-brauer-group-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-Gm-function-field-curve}
+Let $k$ be an algebraically closed field and $K/k$ a field extension
+of transcendence degree 1. Then for all $q \geq 1$,
+$H_\etale^q(\Spec(K), \mathbf{G}_m) = 0$.
+\end{lemma}
+
+\begin{proof}
+Recall that
+$H_\etale^q(\Spec(K), \mathbf{G}_m) = H^q(\text{Gal}(K^{sep}/K), (K^{sep})^*)$
+by Lemma \ref{lemma-compare-cohomology-point}.
+Thus by Proposition \ref{proposition-serre-galois}
+it suffices to show that if $K'/K$ is a finite field extension, then
+$\text{Br}(K') = 0$. Now observe that $K' = \colim K''$, where $K''$ runs
over the finitely generated subextensions of $K'/k$ of
+transcendence degree $1$.
+Note that $\text{Br}(K') = \colim \text{Br}(K'')$ which reduces us
+to a finitely generated field extension $K''/k$ of transcendence
+degree $1$. Such a field is the function field of a curve over $k$,
+hence has trivial Brauer group by
+Lemma \ref{lemma-curve-brauer-zero}.
+\end{proof}
+
+
+
+
+
+
+\section{Higher vanishing for the multiplicative group}
+\label{section-higher-Gm}
+
+\noindent
+In this section, we fix an algebraically closed field $k$ and a smooth curve
+$X$ over $k$. We denote $i_x : x \hookrightarrow X$ the inclusion of a closed
+point of $X$ and $j : \eta \hookrightarrow X$ the inclusion of the generic
+point. We also denote $X_0$ the set of closed points of $X$.
+
+\begin{theorem}[The Fundamental Exact Sequence]
+\label{theorem-fundamental-exact-sequence}
+There is a short exact sequence of \'etale sheaves on $X$
+$$
+0 \longrightarrow
+\mathbf{G}_{m, X} \longrightarrow
+j_* \mathbf{G}_{m, \eta} \longrightarrow
+\bigoplus\nolimits_{x \in X_0} {i_x}_* \underline{\mathbf{Z}}
+\longrightarrow 0.
+$$
+\end{theorem}
+
+\begin{proof}
+Let $\varphi : U \to X$ be an \'etale morphism. Then by properties of
+\'etale morphisms (Proposition \ref{proposition-etale-morphisms}),
+$U = \coprod_i U_i$ where each $U_i$ is a smooth curve mapping to $X$.
+The above sequence for $U$ is a product of the corresponding sequences
+for each $U_i$, so it suffices to treat the case where $U$ is connected,
+hence irreducible. In this case, there is a well known exact sequence
+$$
+1 \longrightarrow
+\Gamma(U, \mathcal{O}_U^*) \longrightarrow
+k(U)^* \longrightarrow \bigoplus\nolimits_{y \in U_0} \mathbf{Z}_y.
+$$
+This amounts to a sequence
+$$
+0 \longrightarrow \Gamma(U, \mathcal{O}_U^*) \longrightarrow
+\Gamma(\eta \times_X U, \mathcal{O}_{\eta \times_X U}^*) \longrightarrow
+\bigoplus\nolimits_{x \in X_0}
+\Gamma(x \times_X U, \underline{\mathbf{Z}})
+$$
+which, unfolding definitions, is nothing but a sequence
+$$
+0 \longrightarrow \mathbf{G}_m(U) \longrightarrow
+j_* \mathbf{G}_{m, \eta}(U) \longrightarrow
+\left(\bigoplus\nolimits_{x \in X_0} {i_x}_* \underline{\mathbf{Z}}\right)(U).
+$$
+This defines the maps in the Fundamental Exact Sequence and shows it is exact
except possibly at the last step. To see surjectivity, recall that if
$U$ is a nonsingular curve and $D$ is a divisor on $U$,
then there exists a Zariski open covering $\{U_j \to U\}$ of $U$
such that $D|_{U_j} = \text{div}(f_j)$ for some $f_j \in k(U)^*$. This says
exactly that the last map is surjective as a map of sheaves.
+\end{proof}
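
\noindent
For example, take $X = \mathbf{A}^1_k$ and evaluate the sequence of the
theorem on global sections. Since $\Pic(\mathbf{A}^1_k) = 0$ one obtains the
exact sequence
$$
1 \longrightarrow k^* \longrightarrow k(t)^*
\xrightarrow{\ \text{div}\ } \bigoplus\nolimits_{a \in k} \mathbf{Z}
\longrightarrow 0
$$
where $\text{div}$ records the order of vanishing at each closed point;
surjectivity on the right holds because $k[t]$ is a principal ideal domain,
so every divisor on $\mathbf{A}^1_k$ is principal.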
+
+\begin{lemma}
+\label{lemma-higher-direct-jstar-Gm}
+For any $q \geq 1$, $R^q j_*\mathbf{G}_{m, \eta} = 0$.
+\end{lemma}
+
+\begin{proof}
+We need to show that $(R^q j_*\mathbf{G}_{m, \eta})_{\bar x} = 0$ for every
+geometric point $\bar x$ of $X$.
+
+\medskip\noindent
+Assume that $\bar x$ lies over a closed point $x$ of $X$. Let $\Spec(A)$
+be an affine open neighbourhood of $x$ in $X$, and $K$ the fraction field
+of $A$. Then
+$$
+\Spec(\mathcal{O}^{sh}_{X, \bar x}) \times_X \eta =
+\Spec(\mathcal{O}^{sh}_{X, \bar x} \otimes_A K).
+$$
+The ring $\mathcal{O}^{sh}_{X, \bar x} \otimes_A K$ is a localization of
+the discrete valuation ring $\mathcal{O}^{sh}_{X, \bar x}$, so it is either
+$\mathcal{O}^{sh}_{X, \bar x}$ again, or its fraction field
+$K^{sh}_{\bar x}$. But since some local uniformizer gets inverted, it must
+be the latter. Hence
+$$
(R^q j_*\mathbf{G}_{m, \eta})_{\bar x} =
H_\etale^q(\Spec(K^{sh}_{\bar x}), \mathbf{G}_m).
+$$
+Now recall that $\mathcal{O}^{sh}_{X, \bar x} =
+\colim_{(U, \bar u) \to \bar x} \mathcal{O}(U) = \colim_{A \subset B} B$
+where $A \to B$ is \'etale, hence $K^{sh}_{\bar x}$ is an algebraic
+extension of $K = k(X)$, and we may apply
+Lemma \ref{lemma-cohomology-Gm-function-field-curve}
+to get the vanishing.
+
+\medskip\noindent
+Assume that $\bar x = \bar \eta$ lies over the generic point $\eta$ of $X$ (in
+fact, this case is superfluous). Then
+$\mathcal{O}^{sh}_{X, \bar \eta} = \kappa(\eta)^{sep}$ and thus
+\begin{eqnarray*}
+(R^q j_*\mathbf{G}_{m, \eta})_{\bar \eta}
+& = &
+H_\etale^q(\Spec(\kappa(\eta)^{sep}) \times_X \eta, \mathbf{G}_m) \\
+& = & H_\etale^q (\Spec(\kappa(\eta)^{sep}), \mathbf{G}_m) \\
+& = & 0 \ \ \text{ for } q \geq 1
+\end{eqnarray*}
+since the corresponding Galois group is trivial.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-jstar-Gm}
+For all $p \geq 1$, $H_\etale^p(X, j_*\mathbf{G}_{m, \eta}) = 0$.
+\end{lemma}
+
+\begin{proof}
The Leray spectral sequence reads
$$
E_2^{p, q} = H_\etale^p(X, R^qj_*\mathbf{G}_{m, \eta}) \Rightarrow
H_\etale^{p+q}(\eta, \mathbf{G}_{m, \eta}).
$$
The terms with $q \geq 1$ vanish by Lemma \ref{lemma-higher-direct-jstar-Gm},
so the spectral sequence degenerates and gives
$H_\etale^p(X, j_*\mathbf{G}_{m, \eta}) \cong
H_\etale^p(\eta, \mathbf{G}_{m, \eta})$. This vanishes for $p \geq 1$ by
Lemma \ref{lemma-cohomology-Gm-function-field-curve}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-istar-Z}
+For all $q \geq 1$, $H_\etale^q(X, \bigoplus_{x \in X_0} {i_x}_*
+\underline{\mathbf{Z}}) = 0$.
+\end{lemma}
+
+\begin{proof}
+For $X$ quasi-compact and quasi-separated, cohomology commutes with colimits,
+so it suffices to show the vanishing of $H_\etale^q(X, {i_x}_*
+\underline{\mathbf{Z}})$. But then the inclusion $i_x$ of a closed point is
+finite so $R^p {i_x}_* \underline{\mathbf{Z}} = 0$ for all $p \geq 1$ by
+Proposition \ref{proposition-finite-higher-direct-image-zero}.
+Applying the Leray spectral sequence, we see that
+$H_\etale^q(X, {i_x}_* \underline{\mathbf{Z}}) =
+H_\etale^q(x, \underline{\mathbf{Z}})$.
+Finally, since $x$ is the spectrum of an
+algebraically closed field, all higher cohomology on $x$ vanishes.
+\end{proof}
+
+\noindent
+Concluding this series of lemmata, we get the following result.
+
+\begin{theorem}
+\label{theorem-vanishing-cohomology-Gm-curve}
+Let $X$ be a smooth curve over an algebraically closed field. Then
+$$
+H_\etale^q(X, \mathbf{G}_m) = 0 \ \ \text{ for all } q \geq 2.
+$$
+\end{theorem}
+
+\begin{proof}
Combine the long exact cohomology sequence associated to the short exact
sequence of Theorem \ref{theorem-fundamental-exact-sequence} with the
vanishing results of Lemmas \ref{lemma-cohomology-jstar-Gm} and
\ref{lemma-cohomology-istar-Z}.
+\end{proof}
+
+\noindent
+We also get the cohomology long exact sequence
+$$
+0 \to
+H_\etale^0(X, \mathbf{G}_m) \to
H_\etale^0(X, j_*\mathbf{G}_{m, \eta}) \to
+H_\etale^0(X, \bigoplus {i_x}_*\underline{\mathbf{Z}}) \to
+H_\etale^1(X, \mathbf{G}_m) \to 0
+$$
which is nothing but the familiar
+$$
+0 \to H_{Zar}^0(X, \mathcal{O}_X^*) \to k(X)^* \to \text{Div}(X)
+\to \Pic(X) \to 0.
+$$
+
+
+
+
+
+\section{Picard groups of curves}
+\label{section-pic-curves}
+
+\noindent
+Our next step is to use the Kummer sequence to deduce some information
+about the cohomology group of a curve with finite coefficients. In order
+to get vanishing in the long exact sequence, we review some facts about
+Picard groups.
+
+\medskip\noindent
+Let $X$ be a smooth projective curve over an algebraically closed field $k$.
+Let $g = \dim_k H^1(X, \mathcal{O}_X)$ be the genus of $X$.
+There exists a short exact sequence
+$$
+0 \to \Pic^0(X) \to \Pic(X) \xrightarrow{\deg} \mathbf{Z} \to 0.
+$$
+The abelian group $\Pic^0(X)$ can be identified with
+$\Pic^0(X) = \underline{\Picardfunctor}^0_{X/k}(k)$, i.e.,
+the $k$-valued points of an abelian variety
+$\underline{\Picardfunctor}^0_{X/k}$ over $k$ of dimension $g$.
+Consequently, if $n \in k^*$ then
+$\Pic^0(X)[n] \cong (\mathbf{Z}/n\mathbf{Z})^{2g}$
+as abelian groups. See
+Picard Schemes of Curves, Section \ref{pic-section-picard-curve}
+and
+Groupoids, Section \ref{groupoids-section-abelian-varieties}.
+This key fact, namely the description of the torsion in the
+Picard group of a smooth projective curve over an algebraically
closed field, does not appear to have an elementary proof.
+
+\begin{lemma}
+\label{lemma-cohomology-smooth-projective-curve}
+Let $X$ be a smooth projective curve of genus $g$ over an
+algebraically closed field $k$ and let $n \geq 1$ be invertible in $k$.
+Then there are canonical identifications
+$$
+H_\etale^q(X, \mu_n) =
+\left\{
+\begin{matrix}
+\mu_n(k) & \text{ if }q = 0, \\
+\Pic^0(X)[n] & \text{ if }q = 1, \\
+\mathbf{Z}/n\mathbf{Z} & \text{ if }q = 2, \\
+0 & \text{ if }q \geq 3.
+\end{matrix}
+\right.
+$$
Since $k$ is algebraically closed and $n$ is invertible in $k$, a choice of
primitive $n$th root of unity gives an isomorphism
$\mu_n \cong \underline{\mathbf{Z}/n\mathbf{Z}}$, whence (noncanonical)
identifications
+$$
+H_\etale^q(X, \underline{\mathbf{Z}/n\mathbf{Z}}) \cong
+\left\{
+\begin{matrix}
+\mathbf{Z}/n\mathbf{Z} & \text{ if }q = 0, \\
+(\mathbf{Z}/n\mathbf{Z})^{2g} & \text{ if }q = 1, \\
+\mathbf{Z}/n\mathbf{Z} & \text{ if }q = 2, \\
+0 & \text{ if }q \geq 3.
+\end{matrix}
+\right.
+$$
+\end{lemma}
+
+\begin{proof}
+Theorems \ref{theorem-picard-group} and
+\ref{theorem-vanishing-cohomology-Gm-curve}
+determine the \'etale cohomology of $\mathbf{G}_m$ on $X$
+in terms of the Picard group of $X$.
+The Kummer sequence $0\to \mu_{n, X} \to \mathbf{G}_{m, X}
+\to \mathbf{G}_{m, X}\to 0$ (Lemma \ref{lemma-kummer-sequence})
+then gives us the long exact cohomology sequence
+$$
+\xymatrix{
+0 \ar[r] & \mu_n(k) \ar[r] &
+k^* \ar[r]^{(\cdot)^n} &
+k^* \ar@(rd, ul)[rdllllr] \\
+& H_\etale^1(X, \mu_n) \ar[r] &
+\Pic(X) \ar[r]^{(\cdot)^n} &
+\Pic(X) \ar@(rd, ul)[rdllllr] \\
+& H_\etale^2(X, \mu_n) \ar[r] & 0 \ar[r] & 0 \ldots
+}
+$$
+The $n$th power map $k^* \to k^*$ is surjective since $k$ is algebraically
+closed. So we need to compute the kernel and cokernel of the map
+$n : \Pic(X) \to \Pic(X)$. Consider the commutative diagram with
+exact rows
+$$
+\xymatrix{
+0 \ar[r] &
+\Pic^0(X) \ar[r] \ar@{>>}[d]^{(\cdot)^n} &
+\Pic(X) \ar[r]_-\deg \ar[d]^{(\cdot)^n} &
+\mathbf{Z} \ar[r] \ar@{^{(}->}[d]^n & 0 \\
+0 \ar[r] &
+\Pic^0(X) \ar[r] &
+\Pic(X) \ar[r]^-\deg &
+\mathbf{Z} \ar[r] & 0
+}
+$$
+The group $\Pic^0(X)$ is the $k$-points of
+the group scheme $\underline{\Picardfunctor}^0_{X/k}$, see
+Picard Schemes of Curves, Lemma \ref{pic-lemma-picard-pieces}.
+The same lemma tells us that $\underline{\Picardfunctor}^0_{X/k}$
+is a $g$-dimensional abelian variety over $k$ as defined in
+Groupoids, Definition \ref{groupoids-definition-abelian-variety}.
+Hence the left vertical map is surjective by
+Groupoids, Proposition \ref{groupoids-proposition-review-abelian-varieties}.
+Applying the snake lemma gives canonical identifications as stated
+in the lemma.
+
+\medskip\noindent
+To get the noncanonical identifications of the lemma we need to
+show the kernel of $n : \Pic^0(X) \to \Pic^0(X)$
+is isomorphic to $(\mathbf{Z}/n\mathbf{Z})^{\oplus 2g}$.
+This is also part of Groupoids, Proposition
+\ref{groupoids-proposition-review-abelian-varieties}.
+\end{proof}
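
\noindent
For example, take $X = \mathbf{P}^1_k$, so that $g = 0$ and
$\Pic^0(X) = 0$. The lemma gives
$$
H_\etale^0(\mathbf{P}^1_k, \mu_n) = \mu_n(k), \quad
H_\etale^1(\mathbf{P}^1_k, \mu_n) = 0, \quad
H_\etale^2(\mathbf{P}^1_k, \mu_n) = \mathbf{Z}/n\mathbf{Z},
$$
the vanishing of $H^1$ reflecting the fact that every line bundle of
degree $0$ on $\mathbf{P}^1_k$ is trivial.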
+
+\begin{lemma}
+\label{lemma-pullback-on-h2-curve}
+Let $\pi : X \to Y$ be a nonconstant morphism of smooth projective curves
+over an algebraically closed field $k$ and let $n \geq 1$ be invertible in $k$.
+The map
+$$
+\pi^* : H^2_\etale(Y, \mu_n) \longrightarrow H^2_\etale(X, \mu_n)
+$$
+is given by multiplication by the degree of $\pi$.
+\end{lemma}
+
+\begin{proof}
+Observe that the statement makes sense as we have identified both
+cohomology groups $ H^2_\etale(Y, \mu_n)$ and $H^2_\etale(X, \mu_n)$
+with $\mathbf{Z}/n\mathbf{Z}$ in
+Lemma \ref{lemma-cohomology-smooth-projective-curve}.
+In fact, if $\mathcal{L}$ is a line bundle of degree $1$
+on $Y$ with class $[\mathcal{L}] \in H^1_\etale(Y, \mathbf{G}_m)$,
+then the coboundary of $[\mathcal{L}]$ is the generator of
+$H^2_\etale(Y, \mu_n)$. Here the coboundary is the coboundary
+of the long exact sequence of cohomology associated to the Kummer
+sequence. Thus the result of the lemma follows from the fact that
+the degree of the line bundle $\pi^*\mathcal{L}$ on $X$ is $\deg(\pi)$.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-cohomology-mu-smooth-curve}
+Let $X$ be an affine smooth curve over an algebraically closed field $k$ and
+$n\in k^*$. Then
+\begin{enumerate}
+\item
+$H_\etale^0(X, \mu_n) = \mu_n(k)$;
+\item
+$H_\etale^1(X, \mu_n) \cong
+\left(\mathbf{Z}/n\mathbf{Z}\right)^{2g+r-1}$, where
+$r$ is the number of points in $\bar X - X$ for some smooth projective
+compactification $\bar X$ of $X$, and
+\item
+for all $q\geq 2$, $H_\etale^q(X, \mu_n) = 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Write $X = \bar X - \{ x_1, \ldots, x_r\}$. Then $\Pic(X) =
+\Pic(\bar X)/ R$, where $R$ is the subgroup generated by
+$\mathcal{O}_{\bar X}(x_i)$, $1 \leq i \leq r$. Since $r \geq 1$, we see that
+$\Pic^0(\bar X) \to \Pic(X)$ is surjective, hence $\Pic(X)$ is
+divisible. Applying the Kummer sequence, we get (1) and (3). For (2), recall
+that
+\begin{align*}
+H_\etale^1(X, \mu_n)
+& =
+\{(\mathcal L, \alpha) |
+\mathcal L \in \Pic(X),
+\alpha : \mathcal{L}^{\otimes n} \to \mathcal{O}_X\}/\cong \\
+& =
+\{(\bar{\mathcal L},\ D,\ \bar \alpha)\}/\tilde{R}
+\end{align*}
+where $\bar{\mathcal L} \in \Pic^0(\bar X)$, $D$ is a divisor on $\bar X$
+supported on $\left\{x_1, \ldots, x_r\right\}$ and $ \bar{\alpha}:
+\bar{\mathcal L}^{\otimes n} \cong \mathcal{O}_{\bar{X}}(D)$ is an isomorphism.
+Note that $D$ must have degree 0. Further $\tilde{R}$ is the subgroup of
+triples of the form $(\mathcal{O}_{\bar X}(D'), n D', 1^{\otimes n})$ where
+$D'$ is supported on $\left\{x_1, \ldots, x_r\right\}$ and has degree 0. Thus,
+we get an exact sequence
+$$
+0 \longrightarrow
+H_\etale^1(\bar X, \mu_n) \longrightarrow
+H_\etale^1(X, \mu_n) \longrightarrow
+\bigoplus_{i = 1}^r \mathbf{Z}/n\mathbf{Z}
+\xrightarrow{\ \sum\ }
+\mathbf{Z}/n\mathbf{Z} \longrightarrow 0
+$$
+where the middle map sends the class of a triple $(\bar{ \mathcal L}, D, \bar
+\alpha)$ with $D = \sum_{i = 1}^r a_i (x_i)$ to the $r$-tuple
+$(a_i)_{i = 1}^r$. It now suffices to use
+Lemma \ref{lemma-cohomology-smooth-projective-curve}
+to count ranks.
+\end{proof}
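
\noindent
For example, take $\bar X = \mathbf{P}^1_k$. For $X = \mathbf{A}^1_k$ we have
$g = 0$ and $r = 1$, so the lemma gives
$H_\etale^1(\mathbf{A}^1_k, \mu_n) = 0$, whereas for
$X = \mathbf{G}_{m, k} = \mathbf{P}^1_k \setminus \{0, \infty\}$ we have
$r = 2$ and
$H_\etale^1(\mathbf{G}_{m, k}, \mu_n) \cong \mathbf{Z}/n\mathbf{Z}$,
in agreement with Remark \ref{remark-normalize-H1-Gm}.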
+
+\begin{remark}
+\label{remark-natural-proof}
The ``natural'' way to prove the previous lemma is to excise $X$ from $\bar
X$. This is possible, but we have not yet developed the necessary theory.
+\end{remark}
+
+\begin{remark}
+\label{remark-normalize-H1-Gm}
+Let $k$ be an algebraically closed field. Let $n$ be an integer prime to
+the characteristic of $k$. Recall that
+$$
+\mathbf{G}_{m, k} = \mathbf{A}^1_k \setminus \{0\} =
\mathbf{P}^1_k \setminus \{0, \infty\}.
+$$
+We claim there is a canonical isomorphism
+$$
H^1_\etale(\mathbf{G}_{m, k}, \mu_n) = \mathbf{Z}/n\mathbf{Z}.
+$$
+What does this mean? This means there is an element $1_k$ in
+$H^1_\etale(\mathbf{G}_{m, k}, \mu_n)$ such that for
+every morphism $\Spec(k') \to \Spec(k)$ the pullback map on
+\'etale cohomology for the map $\mathbf{G}_{m, k'} \to \mathbf{G}_{m, k}$
+maps $1_k$ to $1_{k'}$. (In particular this element is
+fixed under all automorphisms of $k$.) To see this, consider the
$\mu_{n, \mathbf{Z}[1/n]}$-torsor
$\mathbf{G}_{m, \mathbf{Z}[1/n]} \to \mathbf{G}_{m, \mathbf{Z}[1/n]}$,
+$x \mapsto x^n$. By the identification of torsors with
+first cohomology, this pulls back to give our canonical elements $1_k$.
+Twisting back we see that there are canonical identifications
+$$
+H^1_\etale(\mathbf{G}_{m, k}, \mathbf{Z}/n\mathbf{Z}) =
+\Hom(\mu_n(k), \mathbf{Z}/n\mathbf{Z}),
+$$
+i.e., these isomorphisms are compatible with respect to maps of
+algebraically closed fields, in particular with respect to
+automorphisms of $k$.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+\section{Extension by zero}
+\label{section-extension-by-zero}
+
+\noindent
+The general material in
+Modules on Sites, Section \ref{sites-modules-section-localize}
+allows us to make the following definition.
+
+\begin{definition}
+\label{definition-extension-zero}
+Let $j : U \to X$ be an \'etale morphism of schemes.
+\begin{enumerate}
+\item The restriction functor
+$j^{-1} : \Sh(X_\etale) \to \Sh(U_\etale)$
+has a left adjoint
+$j_!^{Sh} : \Sh(U_\etale) \to \Sh(X_\etale)$.
+\item The restriction functor
+$j^{-1} : \textit{Ab}(X_\etale) \to \textit{Ab}(U_\etale)$
+has a left adjoint which is denoted
+$j_! : \textit{Ab}(U_\etale) \to \textit{Ab}(X_\etale)$
+and called {\it extension by zero}.
+\item Let $\Lambda$ be a ring. The restriction functor
+$j^{-1} : \textit{Mod}(X_\etale, \Lambda) \to
+\textit{Mod}(U_\etale, \Lambda)$
+has a left adjoint which is denoted
+$j_! : \textit{Mod}(U_\etale, \Lambda) \to
+\textit{Mod}(X_\etale, \Lambda)$
+and called {\it extension by zero}.
+\end{enumerate}
+\end{definition}
+
+\noindent
If $\mathcal{F}$ is an abelian sheaf on $U_\etale$, then
+$j_!\mathcal{F} \not = j_!^{Sh}\mathcal{F}$ in general. On the other hand
+$j_!$ for sheaves of $\Lambda$-modules agrees with $j_!$ on underlying
+abelian sheaves
+(Modules on Sites, Remark \ref{sites-modules-remark-localize-shriek-equal}).
+The functor $j_!$ is characterized by the functorial isomorphism
+$$
+\Hom_X(j_!\mathcal{F}, \mathcal{G}) = \Hom_U(\mathcal{F}, j^{-1}\mathcal{G})
+$$
+for all $\mathcal{F} \in \textit{Ab}(U_\etale)$ and
+$\mathcal{G} \in \textit{Ab}(X_\etale)$. Similarly for
+sheaves of $\Lambda$-modules.
+
+\medskip\noindent
+To describe the functors in Definition \ref{definition-extension-zero}
+more explicitly, recall that $j^{-1}$ is just the restriction
+via the functor $U_\etale \to X_\etale$. In other words,
+$j^{-1}\mathcal{G}(U') = \mathcal{G}(U')$ for $U'$ \'etale over $U$.
+On the other hand, for $\mathcal{F} \in \textit{Ab}(U_\etale)$
+we consider the presheaf
+\begin{equation}
+\label{equation-j-p-shriek}
+j_{p!}\mathcal{F} : X_\etale \longrightarrow \textit{Ab},
+\quad
V \longmapsto \bigoplus\nolimits_{\varphi : V \to U}
\mathcal{F}(V \xrightarrow{\varphi} U)
+\end{equation}
+Then $j_!\mathcal{F}$ is the sheafification of $j_{p!}\mathcal{F}$.
+This is proven in Modules on Sites, Lemma
+\ref{sites-modules-lemma-extension-by-zero};
+more generally see the discussion in
+Modules on Sites, Sections \ref{sites-modules-section-localize} and
+\ref{sites-modules-section-exactness-lower-shriek}.
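
\noindent
For example, if $j : U \to X$ is an open immersion, then for $V \to X$
\'etale a morphism $\varphi : V \to U$ over $X$ exists if and only if the
image of $V \to X$ is contained in $U$, and in that case it is unique
(as $j$ is a monomorphism). Hence
$$
j_{p!}\mathcal{F}(V) =
\left\{
\begin{matrix}
\mathcal{F}(V) & \text{ if } V \to X \text{ factors through } U, \\
0 & \text{ else,}
\end{matrix}
\right.
$$
the second case being the empty direct sum.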
+
+\begin{exercise}
+\label{exercise-jshriek-direct}
+Prove directly that the functor $j_!$ defined as the sheafification of
+the functor $j_{p!}$ given in
+(\ref{equation-j-p-shriek}) is a left adjoint to $j^{-1}$.
+\end{exercise}
+
+\begin{proposition}
+\label{proposition-describe-jshriek}
+Let $j : U \to X$ be an \'etale morphism of schemes.
+Let $\mathcal{F}$ in $\textit{Ab}(U_\etale)$.
+If $\overline{x} : \Spec(k) \to X$ is a geometric point of $X$, then
+$$
+(j_!\mathcal{F})_{\overline{x}} =
+\bigoplus\nolimits_{\overline{u} : \Spec(k) \to U,\ j(\overline{u}) =
\overline{x}} \mathcal{F}_{\overline{u}}.
+$$
+In particular, $j_!$ is an exact functor.
+\end{proposition}
+
+\begin{proof}
+Exactness of $j_!$ is very general, see Modules on Sites,
+Lemma \ref{sites-modules-lemma-extension-by-zero-exact}.
+Of course it does also follow from the description of stalks.
+The formula for the stalk follows from
+Modules on Sites, Lemma \ref{sites-modules-lemma-stalk-j-shriek}
+and the description of points of the small \'etale site
+in terms of geometric points, see Lemma \ref{lemma-points-small-etale-site}.
+
+\medskip\noindent
+For later use we note that the isomorphism
+\begin{align*}
+(j_!\mathcal{F})_{\overline{x}}
+& =
+(j_{p!}\mathcal{F})_{\overline{x}} \\
+& =
+\colim_{(V, \overline{v})} j_{p!}\mathcal{F}(V) \\
+& =
+\colim_{(V, \overline{v})}
+\bigoplus\nolimits_{\varphi : V \to U}
+\mathcal{F}(V \xrightarrow{\varphi} U) \\
+& \to
+\bigoplus\nolimits_{\overline{u} : \Spec(k) \to U,\ j(\overline{u}) =
\overline{x}} \mathcal{F}_{\overline{u}}
+\end{align*}
+constructed in Modules on Sites, Lemma \ref{sites-modules-lemma-stalk-j-shriek}
+sends $(V, \overline{v}, \varphi, s)$ to the class of $s$ in the stalk
+of $\mathcal{F}$ at $\overline{u} = \varphi(\overline{v})$.
+\end{proof}
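
\noindent
For example, if $j : U = X \amalg X \to X$ is the canonical (finite \'etale)
morphism and $\mathcal{F}$ corresponds to the pair of abelian sheaves
$(\mathcal{F}_1, \mathcal{F}_2)$, then every geometric point $\overline{x}$
of $X$ has exactly two lifts to $U$ and the proposition gives
$(j_!\mathcal{F})_{\overline{x}} =
\mathcal{F}_{1, \overline{x}} \oplus \mathcal{F}_{2, \overline{x}}$.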
+
+\begin{lemma}
+\label{lemma-jshriek-open}
+Let $j : U \to X$ be an open immersion of schemes. For any
+abelian sheaf $\mathcal{F}$ on $U_\etale$, the adjunction mappings
+$j^{-1}j_*\mathcal{F} \to \mathcal{F}$ and
+$\mathcal{F} \to j^{-1}j_!\mathcal{F}$ are isomorphisms.
+In fact, $j_!\mathcal{F}$ is the unique abelian sheaf on $X_\etale$
+whose restriction to $U$ is $\mathcal{F}$ and whose stalks at
+geometric points of $X \setminus U$ are zero.
+\end{lemma}
+
+\begin{proof}
+We encourage the reader to prove the first statement by working through the
+definitions, but here we just use that it is a special case of the very general
+Modules on Sites, Lemma \ref{sites-modules-lemma-restrict-back}.
+For the second statement, observe that if $\mathcal{G}$ is an abelian sheaf
+on $X_\etale$ whose restriction to $U$ is $\mathcal{F}$, then we obtain
+by adjointness a map $j_!\mathcal{F} \to \mathcal{G}$. This map
+is then an isomorphism at stalks of geometric points of $U$ by
+Proposition \ref{proposition-describe-jshriek}.
+Thus if $\mathcal{G}$ has vanishing stalks at geometric points
+of $X \setminus U$, then $j_!\mathcal{F} \to \mathcal{G}$ is an
+isomorphism by Theorem \ref{theorem-exactness-stalks}.
+\end{proof}
+
+\begin{lemma}[Extension by zero commutes with base change]
+\label{lemma-shriek-base-change}
+Let $f: Y \to X$ be a morphism of schemes. Let $j: V \to X$ be an \'etale
+morphism. Consider the fibre product
+$$
+\xymatrix{
+V' = Y \times_X V \ar[d]_{f'} \ar[r]_-{j'} & Y \ar[d]^f \\
+V \ar[r]^j & X
+}
+$$
+Then we have $j'_! f'^{-1} = f^{-1} j_!$ on abelian sheaves and on
+sheaves of modules.
+\end{lemma}
+
+\begin{proof}
+This is true because $j'_! f'^{-1}$ is left adjoint to
+$f'_* (j')^{-1}$ and $f^{-1} j_!$ is left adjoint to $j^{-1}f_*$.
+Further $f'_* (j')^{-1} = j^{-1}f_*$ because $f_*$ commutes with
+\'etale localization (by construction). In fact, the lemma holds very generally
+in the setting of a morphism of sites, see
+Modules on Sites, Lemma
+\ref{sites-modules-lemma-localize-morphism-ringed-sites}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-into-star-separated-etale}
+Let $j : U \to X$ be separated and \'etale. Then there is a functorial
+injective map $j_!\mathcal{F} \to j_*\mathcal{F}$
+on abelian sheaves and sheaves of $\Lambda$-modules.
+\end{lemma}
+
+\begin{proof}
+We prove this in the case of abelian sheaves. Let us construct a canonical map
+$$
+j_{p!}\mathcal{F} \to j_*\mathcal{F}
+$$
+of abelian presheaves on $X_\etale$ for any abelian sheaf
+$\mathcal{F}$ on $U_\etale$ where $j_{p!}$ is as in
+(\ref{equation-j-p-shriek}). Sheafification of this map will
+be the desired map $j_!\mathcal{F} \to j_*\mathcal{F}$.
+Evaluating both sides on $V \to X$ \'etale we obtain
+$$
+j_{p!}\mathcal{F}(V) =
+\bigoplus\nolimits_{\varphi : V \to U} \mathcal{F}(V \xrightarrow{\varphi} U)
+\quad\text{and}\quad
+j_*\mathcal{F}(V) = \mathcal{F}(V \times_X U)
+$$
+For each $\varphi$ we have an open and closed immersion
+$$
+\Gamma_\varphi = (1, \varphi) : V \longrightarrow V \times_X U
+$$
+over $U$. It is open as it is a morphism between schemes \'etale
+over $U$ and it is closed as it is a section of a scheme separated
+over $V$ (Schemes, Lemma \ref{schemes-lemma-section-immersion}).
+Thus for a section $s_\varphi \in \mathcal{F}(V \xrightarrow{\varphi} U)$
+there exists a unique section $s'_\varphi$ in $\mathcal{F}(V \times_X U)$
+which pulls back to $s_\varphi$ by $\Gamma_\varphi$
+and which restricts to zero on the complement of the image of $\Gamma_\varphi$.
+
+\medskip\noindent
+To show that our map is injective suppose that
+$\sum_{i = 1, \ldots, n} s_{\varphi_i}$ is an element of
+$j_{p!}\mathcal{F}(V)$ in the formula above maps to zero
+in $j_*\mathcal{F}(V)$. Our task is to show that
+$\sum_{i = 1, \ldots, n} s_{\varphi_i}$ restricts to zero
+on the members of an \'etale covering of $V$.
+Looking at all pairwise equalizers
+(which are open and closed in $V$) of the morphisms
+$\varphi_i : V \to U$ and working locally on $V$, we
+may assume the images of the morphisms
+$\Gamma_{\varphi_1}, \ldots, \Gamma_{\varphi_n}$
+are pairwise disjoint. Since our assumption is that
+$\sum_{i = 1, \ldots, n} s'_{\varphi_i} = 0$
+we then immediately conclude that $s'_{\varphi_i} = 0$
+for each $i$ (by the disjointness of the supports of these
+sections), whence $s_{\varphi_i} = 0$ for all $i$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-shriek-equals-star-finite-etale}
+Let $j : U \to X$ be finite and \'etale. Then the map
+$j_! \to j_*$ of Lemma \ref{lemma-shriek-into-star-separated-etale}
+is an isomorphism
+on abelian sheaves and sheaves of $\Lambda$-modules.
+\end{lemma}
+
\begin{proof}
It suffices to check $j_!\mathcal{F} \to j_*\mathcal{F}$
+is an isomorphism \'etale locally on $X$.
+Thus we may assume $U \to X$ is a finite disjoint union
+of isomorphisms, see
+\'Etale Morphisms, Lemma \ref{etale-lemma-finite-etale-etale-local}.
+We omit the proof in this case.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-ses-associated-to-open}
+Let $X$ be a scheme. Let $Z \subset X$ be a closed subscheme and let
+$U \subset X$ be the complement. Denote $i : Z \to X$ and $j : U \to X$
+the inclusion morphisms. For every abelian sheaf $\mathcal{F}$ on $X_\etale$
+there is a canonical short exact sequence
+$$
+0 \to j_!j^{-1}\mathcal{F} \to \mathcal{F} \to i_*i^{-1}\mathcal{F} \to 0
+$$
+on $X_\etale$.
+\end{lemma}
+
+\begin{proof}
+We obtain the maps by the adjointness properties of the functors
+involved. For a geometric point $\overline{x}$ in $X$ we have either
+$\overline{x} \in U$ in which case the map on the left hand side
+is an isomorphism on stalks and the stalk of $i_*i^{-1}\mathcal{F}$
+is zero or $\overline{x} \in Z$ in which case the map on the right hand side
+is an isomorphism on stalks and the stalk of $j_!j^{-1}\mathcal{F}$
+is zero. Here we have used the description of stalks of
+Lemma \ref{lemma-stalk-pushforward-closed-immersion} and
+Proposition \ref{proposition-describe-jshriek}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compatible-shriek-push-finite}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+U \ar[d]_g \ar[r]_{j'} & X \ar[d]^f \\
+V \ar[r]^j & Y
+}
+$$
+where $f$ is finite, $g$ is \'etale, and $j$ is an open immersion.
+Then $f_* \circ j'_! = j_! \circ g_*$ as functors
+$\textit{Ab}(U_\etale) \to \textit{Ab}(Y_\etale)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be an object of $\textit{Ab}(U_\etale)$.
+Let $\overline{y}$ be a geometric point of $Y$ not contained
+in the open $V$. Then
+$$
+(f_*j'_!\mathcal{F})_{\overline{y}} =
+\bigoplus\nolimits_{\overline{x},\ f(\overline{x}) = \overline{y}}
+(j'_!\mathcal{F})_{\overline{x}} = 0
+$$
+by Proposition \ref{proposition-finite-higher-direct-image-zero} and
because the stalks of $j'_!\mathcal{F}$ at geometric points $\overline{x}$
not in $U$ are zero by Lemma \ref{lemma-jshriek-open}.
On the other hand, we have
+$$
+j^{-1}f_*j'_!\mathcal{F} =
+g_*(j')^{-1}j'_!\mathcal{F} =
+g_*\mathcal{F}
+$$
by Lemmas \ref{lemma-finite-pushforward-commutes-with-base-change}
and \ref{lemma-jshriek-open}.
+Hence by the characterization of
+$j_!$ in Lemma \ref{lemma-jshriek-open} we see that
+$f_*j'_!\mathcal{F} = j_!g_*\mathcal{F}$.
+We omit the verification that this identification
+is functorial in $\mathcal{F}$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Constructible sheaves}
+\label{section-constructible}
+
+\noindent
+Let $X$ be a scheme. A {\it constructible locally closed subscheme} of $X$
+is a locally closed subscheme $T \subset X$ such that the underlying
+topological space of $T$ is a constructible subset of $X$.
+If $T, T' \subset X$ are locally
+closed subschemes with the same underlying topological space, then
+$T_\etale \cong T'_\etale$ by the topological
+invariance of the \'etale site (Theorem \ref{theorem-topological-invariance}).
+Thus in the following definition we may assume our locally closed
+subschemes are reduced.
+
+\begin{definition}
+\label{definition-constructible}
+Let $X$ be a scheme.
+\begin{enumerate}
+\item A sheaf of sets on $X_\etale$ is {\it constructible}
+if for every affine open $U \subset X$ there exists a finite decomposition
+of $U$ into constructible locally closed subschemes $U = \coprod_i U_i$
+such that $\mathcal{F}|_{U_i}$ is finite locally constant for all $i$.
+\item A sheaf of abelian groups on $X_\etale$ is {\it constructible}
+if for every affine open $U \subset X$ there exists a finite decomposition
+of $U$ into constructible locally closed subschemes $U = \coprod_i U_i$
+such that $\mathcal{F}|_{U_i}$ is finite locally constant for all $i$.
+\item Let $\Lambda$ be a Noetherian ring. A sheaf of $\Lambda$-modules
+on $X_\etale$ is {\it constructible} if for every affine open
+$U \subset X$ there exists a finite decomposition
+of $U$ into constructible locally closed subschemes
+$U = \coprod_i U_i$ such that
+$\mathcal{F}|_{U_i}$ is of finite type and locally constant for all $i$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+It seems that this is the accepted definition. An alternative, which lends
+itself more readily to generalizations beyond the \'etale site of a scheme,
+would have been to define constructible sheaves by starting with
+$h_U$, $j_{U!}\mathbf{Z}/n\mathbf{Z}$, and $j_{U!}\underline{\Lambda}$
+where $U$ runs over all quasi-compact and quasi-separated objects
+of $X_\etale$, and then take the smallest full subcategory of
+$\Sh(X_\etale)$, $\textit{Ab}(X_\etale)$, and
+$\textit{Mod}(X_\etale, \underline{\Lambda})$ containing these
+and closed under finite limits and colimits. It follows from
+Lemma \ref{lemma-constructible-abelian}
+and
+Lemmas \ref{lemma-category-constructible-sets},
+\ref{lemma-category-constructible-abelian}, and
+\ref{lemma-category-constructible-modules}
+that this produces the same category if $X$ is quasi-compact and
+quasi-separated. In general this does not produce the same
+category however.
+
+\medskip\noindent
+A disjoint union decomposition $U = \coprod U_i$ of a scheme by
+locally closed subschemes will be called a {\it partition} of $U$
+(compare with Topology, Section \ref{topology-section-stratifications}).
+
+\begin{lemma}
+\label{lemma-constructible-quasi-compact-quasi-separated}
+Let $X$ be a quasi-compact and quasi-separated scheme. Let $\mathcal{F}$
+be a sheaf of sets on $X_\etale$. The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is constructible,
+\item there exists an open covering $X = \bigcup U_i$ such that
+$\mathcal{F}|_{U_i}$ is constructible, and
+\item there exists a partition $X = \bigcup X_i$ by constructible
+locally closed subschemes such that $\mathcal{F}|_{X_i}$ is finite
+locally constant.
+\end{enumerate}
+A similar statement holds for abelian sheaves and sheaves of
+$\Lambda$-modules if $\Lambda$ is Noetherian.
+\end{lemma}
+
+\begin{proof}
+It is clear that (1) implies (2).
+
+\medskip\noindent
+Assume (2). For every $x \in X$ we can find an $i$ and an affine open
+neighbourhood $V_x \subset U_i$ of $x$. Hence we can find a finite
+affine open covering $X = \bigcup V_j$ such that for each $j$ there
+exists a finite decomposition $V_j = \coprod V_{j, k}$ by locally closed
+constructible subsets such that $\mathcal{F}|_{V_{j, k}}$ is finite
+locally constant. By Topology, Lemma
+\ref{topology-lemma-quasi-compact-open-immersion-constructible-image}
+each $V_{j, k}$ is constructible as a subset of $X$.
+By Topology, Lemma
+\ref{topology-lemma-constructible-partition-refined-by-stratification}
+we can find a finite stratification $X = \coprod X_l$ with constructible
+locally closed strata such that each
+$V_{j, k}$ is a union of $X_l$. Thus (3) holds.
+
+\medskip\noindent
+Assume (3) holds. Let $U \subset X$ be an affine open.
+Then $U \cap X_i$ is a constructible locally closed subset of $U$
+(for example by Properties, Lemma \ref{properties-lemma-locally-constructible})
+and $U = \coprod U \cap X_i$ is a partition of $U$ as in
+Definition \ref{definition-constructible}. Thus (1) holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-constructible-constructible}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $\mathcal{F}$ be a sheaf of sets, abelian groups,
+$\Lambda$-modules (with $\Lambda$ Noetherian) on $X_\etale$.
+If there exist constructible locally closed subschemes $T_i \subset X$
+such that (a) $X = \bigcup T_i$ and (b) $\mathcal{F}|_{T_i}$ is
+constructible, then $\mathcal{F}$ is constructible.
+\end{lemma}
+
+\begin{proof}
+First, we can assume the covering is finite, as $X$ is quasi-compact
+in the constructible topology
+(Topology, Lemma \ref{topology-lemma-constructible-hausdorff-quasi-compact}
+and
+Properties, Lemma
+\ref{properties-lemma-quasi-compact-quasi-separated-spectral}).
+Observe that each $T_i$ is a quasi-compact and quasi-separated
+scheme in its own right (because it is constructible in $X$; details omitted).
+Thus we can find a finite partition $T_i = \coprod T_{i, j}$ into
+locally closed constructible parts of $T_i$ such that
+$\mathcal{F}|_{T_{i, j}}$ is finite locally constant
+(Lemma \ref{lemma-constructible-quasi-compact-quasi-separated}).
+By Topology, Lemma \ref{topology-lemma-constructible-in-constructible}
+we see that $T_{i, j}$ is a constructible locally closed subscheme of $X$.
+Then we can apply Topology, Lemma
+\ref{topology-lemma-constructible-partition-refined-by-stratification}
+to $X = \bigcup T_{i, j}$ to find the desired partition of $X$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-constructible-local}
+Let $X$ be a scheme. Checking constructibility of a sheaf
+of sets, abelian groups, $\Lambda$-modules (with $\Lambda$ Noetherian)
+can be done Zariski locally on $X$.
+\end{lemma}
+
+\begin{proof}
+The statement means if $X = \bigcup U_i$ is an open covering
+such that $\mathcal{F}|_{U_i}$ is constructible, then $\mathcal{F}$
+is constructible. If $U \subset X$ is affine open, then
+$U = \bigcup U \cap U_i$ and $\mathcal{F}|_{U \cap U_i}$ is constructible
+(it is trivial that the restriction of a constructible sheaf to
+an open is constructible). It follows from
+Lemma \ref{lemma-constructible-quasi-compact-quasi-separated}
+that $\mathcal{F}|_U$ is constructible, i.e., a suitable partition
+of $U$ exists.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-constructible}
+Let $f : X \to Y$ be a morphism of schemes. If $\mathcal{F}$ is a
+constructible sheaf of sets, abelian groups, or $\Lambda$-modules
+(with $\Lambda$ Noetherian) on $Y_\etale$, the same
+is true for $f^{-1}\mathcal{F}$ on $X_\etale$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-constructible-local} this reduces to the case
+where $X$ and $Y$ are affine. By
+Lemma \ref{lemma-constructible-quasi-compact-quasi-separated}
+it suffices to find a finite partition of $X$ by constructible
+locally closed subschemes such that $f^{-1}\mathcal{F}$ is finite locally
+constant on each of them.
+To find it we just pull back the partition of $Y$ adapted to
+$\mathcal{F}$ and use
+Lemma \ref{lemma-pullback-locally-constant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-constructible-abelian}
+Let $X$ be a scheme.
+\begin{enumerate}
+\item The category of constructible sheaves of sets
+is closed under finite limits and colimits inside $\Sh(X_\etale)$.
+\item The category of constructible abelian sheaves is a
+weak Serre subcategory of $\textit{Ab}(X_\etale)$.
+\item Let $\Lambda$ be a Noetherian ring. The category of
+constructible sheaves of $\Lambda$-modules on
+$X_\etale$ is a weak Serre subcategory of
+$\textit{Mod}(X_\etale, \Lambda)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We prove (3). We will use the criterion of
+Homology, Lemma \ref{homology-lemma-characterize-weak-serre-subcategory}.
+Suppose that $\varphi : \mathcal{F} \to \mathcal{G}$
+is a map of constructible sheaves of $\Lambda$-modules.
+We have to show that $\mathcal{K} = \Ker(\varphi)$ and
+$\mathcal{Q} = \Coker(\varphi)$ are constructible.
+Similarly, suppose that
+$0 \to \mathcal{F} \to \mathcal{E} \to \mathcal{G} \to 0$
+is a short exact sequence of sheaves of $\Lambda$-modules
+with $\mathcal{F}$, $\mathcal{G}$ constructible. We have to show
+that $\mathcal{E}$ is constructible.
+In both cases we can replace $X$ with the members of an
+affine open covering. Hence we may assume $X$ is affine.
+Then we may further replace $X$ by the members of a finite
+partition of $X$ by constructible locally closed subschemes
+on which $\mathcal{F}$ and $\mathcal{G}$ are of finite type and
+locally constant. Thus we may apply
+Lemma \ref{lemma-kernel-finite-locally-constant} to conclude.
+
+\medskip\noindent
+The proofs of (1) and (2) are very similar and are omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-constructible}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+\begin{enumerate}
+\item Let $\mathcal{F} \to \mathcal{G}$ be a map of constructible
+sheaves of sets on $X_\etale$. Then the set of points $x \in X$
+where $\mathcal{F}_{\overline{x}} \to \mathcal{G}_{\overline{x}}$
+is surjective, resp.\ injective, resp.\ is isomorphic to a given map
+of sets, is constructible in $X$.
+\item Let $\mathcal{F}$ be a constructible abelian sheaf on $X_\etale$.
+The support of $\mathcal{F}$ is constructible.
+\item Let $\Lambda$ be a Noetherian ring.
+Let $\mathcal{F}$ be a constructible sheaf of $\Lambda$-modules on $X_\etale$.
+The support of $\mathcal{F}$ is constructible.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1).
+Let $X = \coprod X_i$ be a partition of $X$ by locally closed constructible
+subschemes such that both $\mathcal{F}$ and $\mathcal{G}$ are
+finite locally constant over the parts (use
+Lemma \ref{lemma-constructible-quasi-compact-quasi-separated}
+for both $\mathcal{F}$ and $\mathcal{G}$ and choose a common
+refinement). Then apply Lemma \ref{lemma-morphism-locally-constant}
+to the restriction of the map to each part.
+
+\medskip\noindent
+The proof of (2) and (3) is omitted.
+\end{proof}
+
+\noindent
+The following lemma will turn out to be very useful later on.
+It roughly says that the category of constructible sheaves
+has a kind of weak ``Noetherian'' property.
+
+\begin{lemma}
+\label{lemma-colimit-constructible}
+Let $X$ be a quasi-compact and quasi-separated scheme. Let
+$\mathcal{F} = \colim_{i \in I} \mathcal{F}_i$ be a filtered
+colimit of sheaves of sets, abelian sheaves, or sheaves of modules.
+\begin{enumerate}
+\item If $\mathcal{F}$ and $\mathcal{F}_i$ are constructible sheaves of
+sets, then the ind-object $\mathcal{F}_i$ is essentially constant with
+value $\mathcal{F}$.
+\item If $\mathcal{F}$ and $\mathcal{F}_i$ are constructible sheaves of
+abelian groups, then the ind-object $\mathcal{F}_i$ is essentially constant
+with value $\mathcal{F}$.
+\item Let $\Lambda$ be a Noetherian ring.
+If $\mathcal{F}$ and $\mathcal{F}_i$ are constructible sheaves of
+$\Lambda$-modules, then the ind-object $\mathcal{F}_i$ is essentially constant
+with value $\mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We will use without further mention that finite limits
+and colimits of constructible sheaves are constructible
+(Lemma \ref{lemma-constructible-abelian}).
+For each $i$ let $T_i \subset X$ be the set of points $x \in X$
+where $\mathcal{F}_{i, \overline{x}} \to \mathcal{F}_{\overline{x}}$
+is not surjective. Because $\mathcal{F}_i$ and $\mathcal{F}$ are
+constructible $T_i$ is a constructible subset of $X$
+(Lemma \ref{lemma-support-constructible}).
+Since the stalks of $\mathcal{F}$ are finite
+and since $\mathcal{F} = \colim_{i \in I} \mathcal{F}_i$ we see
+that for all $x \in X$ we have $x \not \in T_i$ for $i$ large enough.
+Since $X$ is a spectral space by Properties, Lemma
+\ref{properties-lemma-quasi-compact-quasi-separated-spectral}
+the constructible topology on $X$ is quasi-compact by
+Topology, Lemma \ref{topology-lemma-constructible-hausdorff-quasi-compact}.
+Thus $T_i = \emptyset$ for $i$ large enough. Thus
+$\mathcal{F}_i \to \mathcal{F}$ is surjective for $i$ large enough.
+Assume now that $\mathcal{F}_i \to \mathcal{F}$ is surjective for all $i$.
+Choose $i \in I$. For $i' \geq i$ denote $S_{i'} \subset X$ the set of
+points $x$ such that the number of elements in
+$\Im(\mathcal{F}_{i, \overline{x}} \to \mathcal{F}_{\overline{x}})$
+is not equal to the number of elements in
+$\Im(\mathcal{F}_{i, \overline{x}} \to \mathcal{F}_{i', \overline{x}})$.
+Because $\mathcal{F}_i$, $\mathcal{F}_{i'}$ and $\mathcal{F}$ are
+constructible $S_{i'}$ is a constructible subset of $X$
+(details omitted; hint: use Lemma \ref{lemma-support-constructible}).
+Since the stalks of $\mathcal{F}_i$ and $\mathcal{F}$
+are finite and since $\mathcal{F} = \colim_{i' \geq i} \mathcal{F}_{i'}$
+we see that for all $x \in X$ we have $x \not \in S_{i'}$ for $i'$
+large enough. By the same argument as above we can find a large $i'$ such
+that $S_{i'} = \emptyset$. Thus $\mathcal{F}_i \to \mathcal{F}_{i'}$
+factors through $\mathcal{F}$ as desired.
+
+\medskip\noindent
+Proof of (2). Observe that a constructible abelian sheaf is a constructible
+sheaf of sets. Thus case (2) follows from (1).
+
+\medskip\noindent
+Proof of (3). We will use without further mention that the category of
+constructible sheaves of $\Lambda$-modules is abelian
+(Lemma \ref{lemma-constructible-abelian}).
+For each $i$ let $\mathcal{Q}_i$ be the cokernel of the map
+$\mathcal{F}_i \to \mathcal{F}$. The support $T_i$ of $\mathcal{Q}_i$
+is a constructible subset of $X$ as $\mathcal{Q}_i$ is constructible
+(Lemma \ref{lemma-support-constructible}).
+Since the stalks of $\mathcal{F}$ are finite $\Lambda$-modules
+and since $\mathcal{F} = \colim_{i \in I} \mathcal{F}_i$ we see
+that for all $x \in X$ we have $x \not \in T_i$ for $i$ large enough.
+Since $X$ is a spectral space by Properties, Lemma
+\ref{properties-lemma-quasi-compact-quasi-separated-spectral}
+the constructible topology on $X$ is quasi-compact by
+Topology, Lemma \ref{topology-lemma-constructible-hausdorff-quasi-compact}.
+Thus $T_i = \emptyset$, that is, $\mathcal{F}_i \to \mathcal{F}$ is
+surjective, for $i$ large enough. Next, assume that
+$\mathcal{F}_i \to \mathcal{F}$ is surjective for all $i$.
+Choose $i \in I$. For $i' \geq i$ denote $\mathcal{K}_{i'}$ the
+image of $\Ker(\mathcal{F}_i \to \mathcal{F})$ in $\mathcal{F}_{i'}$.
+The support $S_{i'}$ of $\mathcal{K}_{i'}$
+is a constructible subset of $X$ as $\mathcal{K}_{i'}$ is constructible.
+Since the stalks of $\Ker(\mathcal{F}_i \to \mathcal{F})$
+are finite $\Lambda$-modules and since
+$\mathcal{F} = \colim_{i' \geq i} \mathcal{F}_{i'}$ we see
+that for all $x \in X$ we have $x \not \in S_{i'}$ for $i'$ large enough.
+By the same argument as above we can find a large $i'$ such
+that $S_{i'} = \emptyset$. Thus $\mathcal{F}_i \to \mathcal{F}_{i'}$
+factors through $\mathcal{F}$ as desired.
+\end{proof}
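+
+\medskip\noindent
+The hypothesis that the colimit itself be constructible cannot be
+omitted in Lemma \ref{lemma-colimit-constructible}. For example (with
+$\ell$ a prime number), the filtered system of constant sheaves
+$$
+\underline{\mathbf{Z}/\ell\mathbf{Z}}
+\xrightarrow{\ell}
+\underline{\mathbf{Z}/\ell^2\mathbf{Z}}
+\xrightarrow{\ell}
+\underline{\mathbf{Z}/\ell^3\mathbf{Z}}
+\xrightarrow{\ell} \cdots
+$$
+with injective transition maps given by multiplication by $\ell$ has
+constructible terms, but its colimit is the constant sheaf with value
+$\mathbf{Q}_\ell/\mathbf{Z}_\ell$, whose stalks are infinite. Hence
+the colimit is not constructible and the ind-object is not
+essentially constant.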
+
+\begin{lemma}
+\label{lemma-tensor-product-constructible}
+Let $X$ be a scheme. Let $\Lambda$ be a Noetherian ring.
+The tensor product of two constructible sheaves of $\Lambda$-modules
+on $X_\etale$ is a constructible sheaf of $\Lambda$-modules.
+\end{lemma}
+
+\begin{proof}
+The question immediately reduces to the case where $X$ is affine.
+Since any two partitions of $X$ with constructible locally
+closed strata have a common refinement of the same type and
+since pullbacks commute with tensor product we reduce to
+Lemma \ref{lemma-tensor-product-locally-constant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-constructible}
+Let $\Lambda \to \Lambda'$ be a homomorphism of Noetherian rings.
+Let $X$ be a scheme. Let $\mathcal{F}$ be a constructible
+sheaf of $\Lambda$-modules on $X_\etale$. Then
+$\mathcal{F} \otimes_{\underline{\Lambda}} \underline{\Lambda'}$
+is a constructible sheaf of $\Lambda'$-modules.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: affine locally you can use the same stratification.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Auxiliary lemmas on morphisms}
+\label{section-stratify-morphisms}
+
+\noindent
+In this section we collect some lemmas that are useful for proving
+functoriality properties of constructible sheaves.
+
+\begin{lemma}
+\label{lemma-etale-stratified-finite}
+Let $U \to X$ be an \'etale morphism of quasi-compact and quasi-separated
+schemes (for example an \'etale morphism of Noetherian schemes). Then there
+exists a partition $X = \coprod_i X_i$ by constructible locally closed
+subschemes such that $X_i \times_X U \to X_i$ is finite \'etale for all $i$.
+\end{lemma}
+
+\begin{proof}
+If $U \to X$ is separated, then this is
+More on Morphisms, Lemma \ref{more-morphisms-lemma-stratify-flat-fp-qf}.
+In general, we may assume $X$ is affine. Choose a finite affine open
+covering $U = \bigcup U_j$. Apply the previous case to all the morphisms
+$U_j \to X$ and $U_j \cap U_{j'} \to X$ and choose a common
+refinement $X = \coprod X_i$ of the resulting partitions.
+After refining the partition further we may assume $X_i$ affine as well.
+Fix $i$ and set $V = U \times_X X_i$. The morphisms
+$V_j = U_j \times_X X_i \to X_i$ and
+$V_{jj'} = (U_j \cap U_{j'}) \times_X X_i \to X_i$ are finite \'etale.
+Hence $V_j$ and $V_{jj'}$ are affine schemes and $V_{jj'} \subset V_j$
+is closed as well as open (since $V_{jj'} \to X_i$ is proper, so
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}
+applies). Then $V = \bigcup V_j$ is separated because
+$\mathcal{O}(V_j) \to \mathcal{O}(V_{jj'})$ is surjective, see
+Schemes, Lemma \ref{schemes-lemma-characterize-separated}.
+Thus the separated case applies to $V \to X_i$ and we can further
+refine the partition if needed (in fact no further refinement is
+necessary, but we do not need this).
+\end{proof}
+
+\noindent
+In the Noetherian case one can prove the preceding lemma by
+Noetherian induction and the following amusing lemma.
+
+\begin{lemma}
+\label{lemma-generically-finite}
+Let $f : X \to Y$ be a morphism of schemes which is quasi-compact,
+quasi-separated, and locally of finite type. If $\eta$ is a generic point
+of an irreducible component of $Y$ such that $f^{-1}(\eta)$ is finite, then
+there exists an open $V \subset Y$ containing $\eta$ such that
+$f^{-1}(V) \to V$ is finite.
+\end{lemma}
+
+\begin{proof}
+This is Morphisms, Lemma \ref{morphisms-lemma-generically-finite}.
+\end{proof}
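+
+\medskip\noindent
+For instance, if $f : X \to \mathbf{A}^2_k$ is the blowing up of the
+affine plane over a field $k$ at the origin, then the fibre of $f$
+over the generic point $\eta$ is a single point, so $f^{-1}(\eta)$ is
+finite, and the lemma produces an open $V$ containing $\eta$ over
+which $f$ is finite; indeed one may take
+$V = \mathbf{A}^2_k \setminus \{0\}$, over which $f$ is an
+isomorphism. No such $V$ can contain the origin, whose fibre is a
+projective line.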
+
+\noindent
+The statement of the following lemma can be strengthened a bit.
+
+\begin{lemma}
+\label{lemma-decompose-quasi-finite-morphism}
+Let $f : Y \to X$ be a quasi-finite and finitely presented
+morphism of affine schemes.
+\begin{enumerate}
+\item There exists a surjective morphism of affine schemes $X' \to X$ and a
+closed subscheme $Z' \subset Y' = X' \times_X Y$ such that
+\begin{enumerate}
+\item $Z' \subset Y'$ is a thickening, and
+\item $Z' \to X'$ is a finite \'etale morphism.
+\end{enumerate}
+\item There exists a finite partition $X = \coprod X_i$ by
+locally closed, constructible, affine strata, and surjective finite locally
+free morphisms $X'_i \to X_i$ such that the reduction of
+$Y'_i = X'_i \times_X Y \to X'_i$ is isomorphic to
+$\coprod_{j = 1}^{n_i} (X'_i)_{red} \to (X'_i)_{red}$ for some $n_i$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Setting $X' = \coprod X'_i$ we see that (2) implies (1).
+Write $X = \Spec(A)$ and $Y = \Spec(B)$. Write $A$ as a filtered colimit
+of finite type $\mathbf{Z}$-algebras $A_i$. Since $B$ is an $A$-algebra of
+finite presentation, we see that there exists $0 \in I$ and a
+finite type ring map $A_0 \to B_0$ such that $B = \colim B_i$ with
+$B_i = A_i \otimes_{A_0} B_0$, see
+Algebra, Lemma \ref{algebra-lemma-colimit-category-fp-algebras}.
+For $i$ sufficiently large we see that $A_i \to B_i$ is
+quasi-finite, see Limits, Lemma \ref{limits-lemma-descend-quasi-finite}.
+Thus we reduce to the case of finite type algebras over $\mathbf{Z}$,
+in particular we reduce to the Noetherian case. (Details omitted.)
+
+\medskip\noindent
+Assume $X$ and $Y$ Noetherian. In this case any locally closed
+subset of $X$ is constructible. By Lemma \ref{lemma-generically-finite}
+and Noetherian induction we see that
+there is a finite partition $X = \coprod X_i$ of $X$
+by locally closed strata such that $Y \times_X X_i \to X_i$ is finite.
+We can refine this partition to get affine strata.
+Thus after replacing $X$ by $X' = \coprod X_i$ we may assume
+$Y \to X$ is finite.
+
+\medskip\noindent
+Assume $X$ and $Y$ Noetherian and $Y \to X$ finite.
+Suppose that we can prove (2) after base change by a surjective,
+flat, quasi-finite morphism $U \to X$. Thus we have a partition
+$U = \coprod U_i$ and finite locally free morphisms $U'_i \to U_i$
+such that $U'_i \times_X Y \to U'_i$ is isomorphic to
+$\coprod_{j = 1}^{n_i} (U'_i)_{red} \to (U'_i)_{red}$ for some $n_i$.
+Then, by the argument in the previous paragraph, we can find a
+partition $X = \coprod X_j$ with locally closed affine strata such that
+$X_j \times_X U_i \to X_j$ is finite for all $i, j$. By
+Morphisms, Lemma \ref{morphisms-lemma-finite-flat}
+each $X_j \times_X U_i \to X_j$ is finite locally free.
+Hence $X_j \times_X U'_i \to X_j$ is finite locally free
+(Morphisms, Lemma \ref{morphisms-lemma-composition-finite-locally-free}).
+It follows that $X = \coprod X_j$ and $X_j' = \coprod_i X_j \times_X U'_i$
+is a solution for $Y \to X$. Thus it suffices to prove
+the result (in the Noetherian case) after a surjective flat quasi-finite
+base change.
+
+\medskip\noindent
+Applying Morphisms, Lemma \ref{morphisms-lemma-massage-finite}
+we see we may assume that $Y$ is a closed subscheme of an
+affine scheme $Z$ which is (set theoretically) a finite union
+$Z = \bigcup_{i \in I} Z_i$ of closed subschemes mapping isomorphically
+to $X$. In this case we will find a finite partition of $X = \coprod X_j$
+with affine locally closed strata that works (in other words $X'_j = X_j$).
+Set $T_i = Y \cap Z_i$. This is a closed subscheme of $X$.
+As $X$ is Noetherian we can find a finite partition of $X = \coprod X_j$
+by affine locally closed subschemes, such that each
+$X_j \times_X T_i$ is (set theoretically) a union of strata $X_j \times_X Z_i$.
+Replacing $X$ by $X_j$ we see that we may assume $I = I_1 \amalg I_2$
+with $Z_i \subset Y$ for $i \in I_1$ and $Z_i \cap Y = \emptyset$ for
+$i \in I_2$. Replacing $Z$ by $\bigcup_{i \in I_1} Z_i$ we see that we
+may assume $Y = Z$.
+Finally, we can replace $X$ again by the members of a partition
+as above such that for every $i, i' \in I$ the intersection
+$Z_i \cap Z_{i'}$ is either empty or (set theoretically) equal
+to $Z_i$ and $Z_{i'}$. This clearly means that $Y$ is (set theoretically)
+equal to a disjoint union of the $Z_i$ which is what we wanted to show.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{More on constructible sheaves}
+\label{section-more-constructible}
+
+\noindent
+Let $\Lambda$ be a Noetherian ring. Let $X$ be a scheme.
+We often consider $X_\etale$ as a ringed site with
+sheaf of rings $\underline{\Lambda}$. In case of abelian sheaves
+we often take $\Lambda = \mathbf{Z}/n\mathbf{Z}$ for a suitable
+integer $n$.
+
+\begin{lemma}
+\label{lemma-jshriek-constructible}
+Let $j : U \to X$ be an \'etale morphism of quasi-compact and
+quasi-separated schemes.
+\begin{enumerate}
+\item The sheaf $h_U$ is a constructible sheaf of sets.
+\item The sheaf $j_!\underline{M}$ is a constructible abelian sheaf
+for a finite abelian group $M$.
+\item If $\Lambda$ is a Noetherian ring and $M$ is a finite $\Lambda$-module,
+then $j_!\underline{M}$ is a constructible sheaf of $\Lambda$-modules
+on $X_\etale$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-etale-stratified-finite} there is a partition
+$\coprod_i X_i$ such that $\pi_i : j^{-1}(X_i) \to X_i$ is finite \'etale.
+The restriction of $h_U$ to $X_i$ is $h_{j^{-1}(X_i)}$ which is finite
+locally constant by Lemma \ref{lemma-characterize-finite-locally-constant}.
+For cases (2) and (3) we note that
+$$
+j_!(\underline{M})|_{X_i} =
+\pi_{i!}(\underline{M}) =
+\pi_{i*}(\underline{M})
+$$
+by Lemmas \ref{lemma-shriek-base-change} and
+\ref{lemma-shriek-equals-star-finite-etale}.
+Thus it suffices to show the lemma for $\pi : Y \to X$ finite \'etale.
+This is Lemma \ref{lemma-pushforward-locally-constant}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion-colimit-constructible}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+\begin{enumerate}
+\item Let $\mathcal{F}$ be a sheaf of sets on $X_\etale$.
+Then $\mathcal{F}$ is a filtered colimit of constructible
+sheaves of sets.
+\item Let $\mathcal{F}$ be a torsion abelian sheaf on $X_\etale$.
+Then $\mathcal{F}$ is a filtered colimit of constructible abelian sheaves.
+\item Let $\Lambda$ be a Noetherian ring and $\mathcal{F}$ a sheaf
+of $\Lambda$-modules on $X_\etale$. Then
+$\mathcal{F}$ is a filtered colimit of constructible sheaves of
+$\Lambda$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{B}$ be the collection of quasi-compact and quasi-separated
+objects of $X_\etale$. By Modules on Sites,
+Lemma \ref{sites-modules-lemma-module-filtered-colimit-constructibles}
+any sheaf of sets is a filtered colimit of sheaves of the form
+$$
+\text{Coequalizer}\left(
+\xymatrix{
+\coprod\nolimits_{j = 1, \ldots, m} h_{V_j}
+\ar@<1ex>[r] \ar@<-1ex>[r] &
+\coprod\nolimits_{i = 1, \ldots, n} h_{U_i}
+}
+\right)
+$$
+with $V_j$ and $U_i$ quasi-compact and quasi-separated objects
+of $X_\etale$. By
+Lemmas \ref{lemma-jshriek-constructible} and \ref{lemma-constructible-abelian}
+these coequalizers are constructible. This proves (1).
+
+\medskip\noindent
+Let $\Lambda$ be a Noetherian ring.
+By Modules on Sites,
+Lemma \ref{sites-modules-lemma-module-filtered-colimit-constructibles}
+every sheaf of $\Lambda$-modules $\mathcal{F}$ is a filtered colimit
+of modules of the form
+$$
+\Coker\left(
+\bigoplus\nolimits_{j = 1, \ldots, m} j_{V_j!}\underline{\Lambda}_{V_j}
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} j_{U_i!}\underline{\Lambda}_{U_i}
+\right)
+$$
+with $V_j$ and $U_i$ quasi-compact and quasi-separated objects
+of $X_\etale$. By
+Lemmas \ref{lemma-jshriek-constructible} and \ref{lemma-constructible-abelian}
+these cokernels are constructible. This proves (3).
+
+\medskip\noindent
+Proof of (2). First write $\mathcal{F} = \bigcup \mathcal{F}[n]$ where
+$\mathcal{F}[n]$ is the $n$-torsion subsheaf. Then we can view
+$\mathcal{F}[n]$ as a sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules
+and apply (3).
+\end{proof}
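+
+\medskip\noindent
+For example, in part (2) the constant sheaf
+$\underline{\mathbf{Q}/\mathbf{Z}}$ is exhibited as a filtered
+colimit of constructible abelian sheaves by its torsion subsheaves:
+$$
+\underline{\mathbf{Q}/\mathbf{Z}}
+= \colim_n \underline{\mathbf{Q}/\mathbf{Z}}[n]
+= \colim_n \underline{\frac{1}{n}\mathbf{Z}/\mathbf{Z}},
+$$
+where $n$ runs over the positive integers ordered by divisibility and
+each $\frac{1}{n}\mathbf{Z}/\mathbf{Z} \cong \mathbf{Z}/n\mathbf{Z}$
+is finite.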
+
+\begin{lemma}
+\label{lemma-check-constructible}
+Let $f : X \to Y$ be a surjective morphism of quasi-compact and
+quasi-separated schemes.
+\begin{enumerate}
+\item Let $\mathcal{F}$ be a sheaf of sets on $Y_\etale$. Then $\mathcal{F}$
+is constructible if and only if $f^{-1}\mathcal{F}$ is constructible.
+\item Let $\mathcal{F}$ be an abelian sheaf on $Y_\etale$. Then $\mathcal{F}$
+is constructible if and only if $f^{-1}\mathcal{F}$ is constructible.
+\item Let $\Lambda$ be a Noetherian ring.
+Let $\mathcal{F}$ be sheaf of $\Lambda$-modules on $Y_\etale$.
+Then $\mathcal{F}$ is constructible if and only if $f^{-1}\mathcal{F}$
+is constructible.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+One implication follows from Lemma \ref{lemma-pullback-constructible}.
+For the converse, assume $f^{-1}\mathcal{F}$ is constructible.
+Write $\mathcal{F} = \colim \mathcal{F}_i$ as a
+filtered colimit of constructible sheaves (of sets, abelian groups, or modules)
+using Lemma \ref{lemma-torsion-colimit-constructible}.
+Since $f^{-1}$ is a left adjoint it commutes with colimits
+(Categories, Lemma \ref{categories-lemma-adjoint-exact}) and we see that
+$f^{-1}\mathcal{F} = \colim f^{-1}\mathcal{F}_i$.
+By Lemma \ref{lemma-colimit-constructible} we see that
+$f^{-1}\mathcal{F}_i \to f^{-1}\mathcal{F}$
+is surjective for all $i$ large enough.
+Since $f$ is surjective we conclude (by looking at stalks using
+Lemma \ref{lemma-stalk-pullback} and
+Theorem \ref{theorem-exactness-stalks})
+that $\mathcal{F}_i \to \mathcal{F}$ is surjective for all $i$ large enough.
+Thus $\mathcal{F}$ is the quotient of a constructible sheaf $\mathcal{G}$.
+Applying the argument once more to
+$\mathcal{G} \times_\mathcal{F} \mathcal{G}$ or
+the kernel of $\mathcal{G} \to \mathcal{F}$
+we conclude using that $f^{-1}$ is exact and that the category of
+constructible sheaves (of sets, abelian groups, or modules) is
+closed under finite (co)limits and (co)kernels inside
+$\Sh(Y_\etale)$, $\Sh(X_\etale)$, $\textit{Ab}(Y_\etale)$,
+$\textit{Ab}(X_\etale)$, $\textit{Mod}(Y_\etale, \Lambda)$, and
+$\textit{Mod}(X_\etale, \Lambda)$, see
+Lemma \ref{lemma-constructible-abelian}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pushforward-constructible}
+Let $f : X \to Y$ be a finite \'etale morphism of schemes. Let $\Lambda$ be a
+Noetherian ring. If $\mathcal{F}$ is a constructible sheaf of sets,
+constructible sheaf of abelian groups, or constructible sheaf of
+$\Lambda$-modules on $X_\etale$, the same is true for
+$f_*\mathcal{F}$ on $Y_\etale$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-constructible-local} it suffices to check this
+Zariski locally on $Y$ and by Lemma \ref{lemma-check-constructible}
+we may replace $Y$ by an \'etale cover (the construction of $f_*$
+commutes with \'etale localization). A finite \'etale morphism is
+\'etale locally isomorphic to a disjoint union of isomorphisms, see
+\'Etale Morphisms, Lemma \ref{etale-lemma-finite-etale-etale-local}.
+Thus, in the case of sheaves of sets, the lemma says that if
+$\mathcal{F}_i$, $i = 1, \ldots, n$ are constructible sheaves of sets, then
+$\prod_{i = 1, \ldots, n} \mathcal{F}_i$ is too.
+This is clear. Similarly for sheaves of abelian groups and modules.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-category-constructible-sets}
+Let $X$ be a quasi-compact and quasi-separated scheme. The category of
+constructible sheaves of sets is the full subcategory of $\Sh(X_\etale)$
+consisting of sheaves $\mathcal{F}$ which are coequalizers
+$$
+\xymatrix{
+\mathcal{F}_1
+\ar@<1ex>[r] \ar@<-1ex>[r]
+&
+\mathcal{F}_0 \ar[r]
+&
+\mathcal{F}}
+$$
+such that $\mathcal{F}_i$, $i = 0, 1$ is a finite coproduct of sheaves of
+the form $h_U$ with $U$ a quasi-compact and quasi-separated
+object of $X_\etale$.
+\end{lemma}
+
+\begin{proof}
+In the proof of Lemma \ref{lemma-torsion-colimit-constructible}
+we have seen that sheaves of this form are constructible.
+For the converse, suppose that for every constructible sheaf of
+sets $\mathcal{F}$ we can find a surjection $\mathcal{F}_0 \to \mathcal{F}$
+with $\mathcal{F}_0$ as in the lemma. Then we find our surjection
+$\mathcal{F}_1 \to \mathcal{F}_0 \times_\mathcal{F} \mathcal{F}_0$
+because the latter is constructible by Lemma \ref{lemma-constructible-abelian}.
+
+\medskip\noindent
+By Topology, Lemma
+\ref{topology-lemma-constructible-partition-refined-by-stratification}
+we may choose a finite stratification
+$X = \coprod_{i \in I} X_i$ such that $\mathcal{F}$ is finite locally
+constant on each stratum. We will prove the result by induction on
+the cardinality of $I$. Let $i \in I$ be a minimal element in the
+partial ordering of $I$. Then $X_i \subset X$ is closed.
+By induction, there exist finitely many quasi-compact and quasi-separated
+objects $U_\alpha$ of $(X \setminus X_i)_\etale$ and a surjective
+map $\coprod h_{U_\alpha} \to \mathcal{F}|_{X \setminus X_i}$.
+These determine a map
+$$
+\coprod h_{U_\alpha} \to \mathcal{F}
+$$
+which is surjective after restricting to $X \setminus X_i$. By
+Lemma \ref{lemma-characterize-finite-locally-constant}
+we see that $\mathcal{F}|_{X_i} = h_V$ for some scheme $V$ finite \'etale
+over $X_i$. Let $\overline{v}$ be a geometric point of $V$ lying
+over $\overline{x} \in X_i$. We may think of $\overline{v}$ as an
+element of the stalk $\mathcal{F}_{\overline{x}} = V_{\overline{x}}$.
+Thus we can find an \'etale neighbourhood $(U, \overline{u})$
+of $\overline{x}$ and a section $s \in \mathcal{F}(U)$ whose stalk at
+$\overline{x}$ gives $\overline{v}$. Thinking of $s$ as a map
+$s : h_U \to \mathcal{F}$, restricting to $X_i$ we obtain a morphism
+$s|_{X_i} : U \times_X X_i \to V$ over $X_i$ which maps $\overline{u}$
+to $\overline{v}$. Since $V$ is quasi-compact (finite over the closed
+subscheme $X_i$ of the quasi-compact scheme $X$) a finite number
+$s^{(1)}, \ldots, s^{(m)}$ of these sections of $\mathcal{F}$ over
+$U^{(1)}, \ldots, U^{(m)}$ will determine a jointly
+surjective map
+$$
+\coprod s^{(j)}|_{X_i} : \coprod U^{(j)} \times_X X_i \longrightarrow V
+$$
+Then we obtain the surjection
+$$
+\coprod h_{U_\alpha} \amalg \coprod h_{U^{(j)}} \to \mathcal{F}
+$$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-category-constructible-modules}
+Let $X$ be a quasi-compact and quasi-separated scheme. Let $\Lambda$
+be a Noetherian ring. The category of constructible sheaves of
+$\Lambda$-modules is exactly the category of modules of the form
+$$
+\Coker\left(
+\bigoplus\nolimits_{j = 1, \ldots, m} j_{V_j!}\underline{\Lambda}_{V_j}
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} j_{U_i!}\underline{\Lambda}_{U_i}
+\right)
+$$
+with $V_j$ and $U_i$ quasi-compact and quasi-separated objects of
+$X_\etale$. In fact, we can even assume $U_i$ and $V_j$ affine.
+\end{lemma}
+
+\begin{proof}
+In the proof of Lemma \ref{lemma-torsion-colimit-constructible}
+we have seen modules of this form are constructible. Since the
+category of constructible modules is abelian
+(Lemma \ref{lemma-constructible-abelian})
+it suffices to prove that given a constructible module $\mathcal{F}$
+there is a surjection
+$$
+\bigoplus\nolimits_{i = 1, \ldots, n} j_{U_i!}\underline{\Lambda}_{U_i}
+\longrightarrow \mathcal{F}
+$$
+for some affine objects $U_i$ in $X_\etale$. By
+Modules on Sites, Lemma
+\ref{sites-modules-lemma-module-filtered-colimit-constructibles}
+there is a surjection
+$$
+\Psi :
+\bigoplus\nolimits_{i \in I} j_{U_i!}\underline{\Lambda}_{U_i}
+\longrightarrow
+\mathcal{F}
+$$
+with $U_i$ affine and the direct sum over a possibly infinite
+index set $I$. For every finite subset $I' \subset I$ set
+$$
+T_{I'} = \text{Supp}(\Coker(
+\bigoplus\nolimits_{i \in I'} j_{U_i!}\underline{\Lambda}_{U_i}
+\longrightarrow \mathcal{F}))
+$$
+By the very definition of constructible sheaves, the set $T_{I'}$
+is a constructible subset of $X$. We want to show that $T_{I'} = \emptyset$
+for some $I'$. Since every stalk $\mathcal{F}_{\overline{x}}$ is
+a finite type $\Lambda$-module and since $\Psi$ is surjective, for
+every $x \in X$ there is an $I'$ such that $x \not \in T_{I'}$.
+In other words we have
+$\emptyset = \bigcap_{I' \subset I\text{ finite}} T_{I'}$. Since
+$X$ is a spectral space by Properties, Lemma
+\ref{properties-lemma-quasi-compact-quasi-separated-spectral}
+the constructible topology on $X$ is quasi-compact by
+Topology, Lemma \ref{topology-lemma-constructible-hausdorff-quasi-compact}.
+Thus $T_{I'} = \emptyset$ for some $I' \subset I$ finite
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-category-constructible-abelian}
+Let $X$ be a quasi-compact and quasi-separated scheme. The category of
+constructible abelian sheaves is exactly the category of abelian
+sheaves of the form
+$$
+\Coker\left(
+\bigoplus\nolimits_{j = 1, \ldots, m}
+j_{V_j!}\underline{\mathbf{Z}/m_j\mathbf{Z}}_{V_j}
+\longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n}
+j_{U_i!}\underline{\mathbf{Z}/n_i\mathbf{Z}}_{U_i}
+\right)
+$$
+with $V_j$ and $U_i$ quasi-compact and quasi-separated objects of
+$X_\etale$ and $m_j$, $n_i$ positive integers.
+In fact, we can even assume $U_i$ and $V_j$ affine.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-category-constructible-modules}
+applied with $\Lambda = \mathbf{Z}/n\mathbf{Z}$
+and the fact that, since $X$ is quasi-compact, every constructible
+abelian sheaf is annihilated by some positive integer $n$ (details omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-constructible-is-compact}
+Let $X$ be a quasi-compact and quasi-separated scheme. Let $\Lambda$ be a
+Noetherian ring. Let $\mathcal{F}$ be a constructible sheaf of sets, abelian
+groups, or $\Lambda$-modules on $X_\etale$. Let
+$\mathcal{G} = \colim \mathcal{G}_i$ be a filtered colimit of sheaves of
+sets, abelian groups, or $\Lambda$-modules. Then
+$$
+\Mor(\mathcal{F}, \mathcal{G}) = \colim \Mor(\mathcal{F}, \mathcal{G}_i)
+$$
+in the category of sheaves of sets, abelian groups, or $\Lambda$-modules on
+$X_\etale$.
+\end{lemma}
+
+\begin{proof}
+The case of sheaves of sets. By Lemma \ref{lemma-category-constructible-sets}
+it suffices to prove the lemma for $h_U$ where $U$ is a quasi-compact
+and quasi-separated object of $X_\etale$. Recall that
+$\Mor(h_U, \mathcal{G}) = \mathcal{G}(U)$. Hence the result
+follows from Sites, Lemma \ref{sites-lemma-directed-colimits-sections}.
+
+\medskip\noindent
+In the case of abelian sheaves or sheaves of modules, the result follows
+in the same way using
+Lemmas \ref{lemma-category-constructible-abelian} and
+\ref{lemma-category-constructible-modules}.
+For the case of abelian sheaves, we add that
+$\Mor(j_{U!}\underline{\mathbf{Z}/n\mathbf{Z}}, \mathcal{G})$
+is equal to the $n$-torsion elements of $\mathcal{G}(U)$.
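+Indeed, by the adjunction between extension by zero and restriction we have
+$$
+\Mor(j_{U!}\underline{\mathbf{Z}/n\mathbf{Z}}, \mathcal{G}) =
+\Mor(\underline{\mathbf{Z}/n\mathbf{Z}}, \mathcal{G}|_U)
+$$
+and a map $\underline{\mathbf{Z}/n\mathbf{Z}} \to \mathcal{G}|_U$ is
+determined by the image of the section $1$, which may be any
+$s \in \mathcal{G}(U)$ with $ns = 0$.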
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-pushforward-constructible}
+Let $f : X \to Y$ be a finite and finitely presented morphism of schemes.
+Let $\Lambda$ be a Noetherian ring. If $\mathcal{F}$ is a constructible
+sheaf of sets, abelian groups, or $\Lambda$-modules on $X_\etale$,
+then $f_*\mathcal{F}$ is too.
+\end{lemma}
+
+\begin{proof}
+It suffices to prove this when $X$ and $Y$ are affine by
+Lemma \ref{lemma-constructible-local}.
+By
+Lemmas \ref{lemma-finite-pushforward-commutes-with-base-change} and
+\ref{lemma-check-constructible} we may base change to any
+affine scheme surjective over $X$. By
+Lemma \ref{lemma-decompose-quasi-finite-morphism}
+this reduces us to the case of a finite \'etale morphism
+(because a thickening leads to an equivalence of \'etale topoi
+and even small \'etale sites, see
+Theorem \ref{theorem-topological-invariance}).
+The finite \'etale case is
+Lemma \ref{lemma-pushforward-constructible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-category-is-colimit}
+Let $X = \lim_{i \in I} X_i$ be a limit of a directed
+system of schemes with affine transition morphisms.
+We assume that $X_i$ is quasi-compact and quasi-separated
+for all $i \in I$.
+\begin{enumerate}
+\item The category of constructible sheaves of sets on $X_\etale$
+is the colimit of the categories of constructible sheaves of sets
+on $(X_i)_\etale$.
+\item The category of constructible abelian sheaves on $X_\etale$
+is the colimit of the categories of constructible abelian sheaves
+on $(X_i)_\etale$.
+\item Let $\Lambda$ be a Noetherian ring. The category of constructible
+sheaves of $\Lambda$-modules on $X_\etale$ is the colimit of the
+categories of constructible sheaves of $\Lambda$-modules on $(X_i)_\etale$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Denote $f_i : X \to X_i$ the projection maps.
+There are 3 parts to the proof corresponding to ``faithful'',
+``fully faithful'', and ``essentially surjective''.
+
+\medskip\noindent
+Faithful. Choose $0 \in I$ and let $\mathcal{F}_0$, $\mathcal{G}_0$ be
+constructible sheaves on $X_0$. Suppose that
+$a, b : \mathcal{F}_0 \to \mathcal{G}_0$ are maps such that
+$f_0^{-1}a = f_0^{-1}b$. Let $E \subset X_0$ be the set
+of points $x \in X_0$ such that $a_{\overline{x}} = b_{\overline{x}}$.
+By Lemma \ref{lemma-support-constructible} the subset
+$E \subset X_0$ is constructible. By assumption $X \to X_0$ maps into $E$.
+By Limits, Lemma \ref{limits-lemma-limit-contained-in-constructible}
+we find an $i \geq 0$ such that $X_i \to X_0$ maps into $E$.
+Hence $f_{i0}^{-1}a = f_{i0}^{-1}b$.
+
+\medskip\noindent
+Fully faithful. Choose $0 \in I$ and let $\mathcal{F}_0$, $\mathcal{G}_0$ be
+constructible sheaves on $X_0$. Suppose that
+$a : f_0^{-1}\mathcal{F}_0 \to f_0^{-1}\mathcal{G}_0$ is a map.
+We claim there is an $i$ and a map
+$a_i : f_{i0}^{-1}\mathcal{F}_0 \to f_{i0}^{-1}\mathcal{G}_0$
+which pulls back to $a$ on $X$.
+By Lemma \ref{lemma-category-constructible-sets}
+we can replace $\mathcal{F}_0$
+by a finite coproduct of sheaves represented by quasi-compact
+and quasi-separated objects of $(X_0)_\etale$.
+Thus we have to show: If $U_0 \to X_0$ is such an object
+of $(X_0)_\etale$, then
+$$
+f_0^{-1}\mathcal{G}_0(U) = \colim_{i \geq 0} f_{i0}^{-1}\mathcal{G}_0(U_i)
+$$
+where $U = X \times_{X_0} U_0$ and $U_i = X_i \times_{X_0} U_0$.
+This is a special case of Theorem \ref{theorem-colimit}.
+
+\medskip\noindent
+Essentially surjective. We have to show every constructible $\mathcal{F}$
+on $X$ is isomorphic to $f_i^{-1}\mathcal{F}_i$ for some constructible
+$\mathcal{F}_i$ on $X_i$. Applying
+Lemma \ref{lemma-category-constructible-sets}
+and using the results of the previous two paragraphs, we see that
+it suffices to prove this for $h_U$ for some quasi-compact
+and quasi-separated object $U$ of $X_\etale$.
+In this case we have to show that $U$ is the base change of
+a quasi-compact and quasi-separated scheme \'etale over $X_i$ for some $i$.
+This follows from
+Limits, Lemmas \ref{limits-lemma-descend-finite-presentation} and
+\ref{limits-lemma-descend-etale}.
+
+\medskip\noindent
+Proof of (3). The argument is very similar to the argument for
+sheaves of sets, but using
+Lemma \ref{lemma-category-constructible-modules}
+instead of
+Lemma \ref{lemma-category-constructible-sets}. Details omitted.
+Part (2) follows from part (3) because every constructible abelian
+sheaf over a quasi-compact scheme is a constructible sheaf of
+$\mathbf{Z}/n\mathbf{Z}$-modules for some $n$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-category-loc-cst-is-colimit}
+Let $X = \lim_{i \in I} X_i$ be a limit of a directed
+system of schemes with affine transition morphisms.
+We assume that $X_i$ is quasi-compact and quasi-separated
+for all $i \in I$.
+\begin{enumerate}
+\item The category of finite locally constant sheaves on $X_\etale$
+is the colimit of the categories of finite locally constant sheaves
+on $(X_i)_\etale$.
+\item The category of finite locally constant abelian sheaves on $X_\etale$
+is the colimit of the categories of finite locally constant abelian sheaves
+on $(X_i)_\etale$.
+\item Let $\Lambda$ be a Noetherian ring. The category of finite type,
+locally constant sheaves of $\Lambda$-modules on $X_\etale$
+is the colimit of the categories of finite type, locally constant
+sheaves of $\Lambda$-modules on $(X_i)_\etale$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-category-is-colimit} the functor in each case
+is fully faithful. By the same lemma, all we have to show to finish
+the proof in case (1) is the following: given a constructible sheaf
+$\mathcal{F}_i$ on $X_i$ whose pullback $\mathcal{F}$ to $X$ is
+finite locally constant, there exists an $i' \geq i$ such that the
+pullback $\mathcal{F}_{i'}$ of $\mathcal{F}_i$ to $X_{i'}$
+is finite locally constant. By assumption there exists an \'etale covering
+$\mathcal{U} = \{U_j \to X\}_{j \in J}$
+such that $\mathcal{F}|_{U_j} \cong \underline{S_j}$ for some finite set $S_j$.
+We may assume $U_j$ is affine for all $j \in J$.
+Since $X$ is quasi-compact, we may assume $J$ finite.
+By Lemma \ref{lemma-colimit-affine-sites} we can find an $i' \geq i$
+and an \'etale covering $\mathcal{U}_{i'} = \{U_{i', j} \to X_{i'}\}_{j \in J}$
+whose base change to $X$ is $\mathcal{U}$. Then
+$\mathcal{F}_{i'}|_{U_{i', j}}$ and $\underline{S_j}$ are
+constructible sheaves on $(U_{i', j})_\etale$ whose pullbacks to
+$U_j$ are isomorphic. Hence after increasing $i'$ we get
+that $\mathcal{F}_{i'}|_{U_{i', j}}$ and $\underline{S_j}$
+are isomorphic. Thus $\mathcal{F}_{i'}$ is finite locally constant.
+The proof in cases (2) and (3) is exactly the same.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-irreducible-subsheaf-constant-zero}
+Let $X$ be an irreducible scheme with generic point $\eta$.
+\begin{enumerate}
+\item Let $S' \subset S$ be an inclusion of sets. If we have
+$\underline{S'} \subset \mathcal{G} \subset \underline{S}$
+in $\Sh(X_\etale)$ and $S' = \mathcal{G}_{\overline{\eta}}$, then
+$\mathcal{G} = \underline{S'}$.
+\item Let $A' \subset A$ be an inclusion of abelian groups. If we have
+$\underline{A'} \subset \mathcal{G} \subset \underline{A}$
+in $\textit{Ab}(X_\etale)$ and $A' = \mathcal{G}_{\overline{\eta}}$, then
+$\mathcal{G} = \underline{A'}$.
+\item Let $M' \subset M$ be an inclusion of modules over a ring $\Lambda$.
+If we have $\underline{M'} \subset \mathcal{G} \subset \underline{M}$
+in $\textit{Mod}(X_\etale, \underline{\Lambda})$
+and $M' = \mathcal{G}_{\overline{\eta}}$, then
+$\mathcal{G} = \underline{M'}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This is true because for every \'etale morphism $U \to X$
+with $U \not = \emptyset$ the point $\eta$ is in the image.
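+
+\medskip\noindent
+For instance, in case (1) the argument may be spelled out as follows.
+A section $s \in \mathcal{G}(U)$ for some \'etale $U \to X$ with
+$U \not = \emptyset$ is in particular a locally constant map $s : U \to S$.
+Given $u \in U$ the set $s^{-1}(s(u)) \subset U$ is open and nonempty,
+hence its open image in the irreducible scheme $X$ contains $\eta$, and
+choosing a point of it lying over $\eta$ shows
+$$
+s(u) \in \mathcal{G}_{\overline{\eta}} = S'.
+$$
+Thus $\mathcal{G}(U) \subset \underline{S'}(U)$ for all $U$, that is,
+$\mathcal{G} = \underline{S'}$.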
+\end{proof}
+
+\begin{lemma}
+\label{lemma-push-constant-sheaf-from-open}
+Let $X$ be an integral normal scheme with function field $K$.
+Let $E$ be a set.
+\begin{enumerate}
+\item Let $g : \Spec(K) \to X$ be the inclusion of the generic point.
+Then $g_*\underline{E} = \underline{E}$.
+\item Let $j : U \to X$ be the inclusion of a nonempty open. Then
+$j_*\underline{E} = \underline{E}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $x \in X$ be a point. Let
+$\mathcal{O}^{sh}_{X, \overline{x}}$
+be a strict henselization of $\mathcal{O}_{X, x}$.
+By More on Algebra, Lemma \ref{more-algebra-lemma-henselization-normal}
+we see that $\mathcal{O}^{sh}_{X, \overline{x}}$ is a normal domain.
+Hence $\Spec(K) \times_X \Spec(\mathcal{O}^{sh}_{X, \overline{x}})$
+is irreducible. It follows that
+the stalk $(g_*\underline{E})_{\overline{x}}$ is equal to $E$,
+see Theorem \ref{theorem-higher-direct-images}.
+
+\medskip\noindent
+Proof of (2). Since $g$ factors through $j$ there is a map
+$j_*\underline{E} \to g_*\underline{E}$. This map is injective because
+for every scheme $V$ \'etale over $X$ the set $\Spec(K) \times_X V$
+is dense in $U \times_X V$. On the other hand, we have a map
+$\underline{E} \to j_*\underline{E}$ and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-zero-in-generic-point}
+Let $X$ be a quasi-compact and quasi-separated scheme. Let
+$\eta \in X$ be a generic point of an irreducible component of $X$.
+\begin{enumerate}
+\item Let $\mathcal{F}$ be a torsion abelian sheaf on $X_\etale$
+whose stalk $\mathcal{F}_{\overline{\eta}}$ is zero.
+Then $\mathcal{F} = \colim \mathcal{F}_i$ is a filtered colimit of
+constructible abelian sheaves $\mathcal{F}_i$ such that for each $i$
+the support of $\mathcal{F}_i$ is contained
+in a closed subscheme not containing $\eta$.
+\item Let $\Lambda$ be a Noetherian ring and $\mathcal{F}$ a sheaf
+of $\Lambda$-modules on $X_\etale$ whose stalk
+$\mathcal{F}_{\overline{\eta}}$ is zero. Then
+$\mathcal{F} = \colim \mathcal{F}_i$
+is a filtered colimit of constructible sheaves of
+$\Lambda$-modules $\mathcal{F}_i$ such that for each $i$
+the support of $\mathcal{F}_i$ is contained in a closed subscheme
+not containing $\eta$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We can write $\mathcal{F} = \colim_{i \in I} \mathcal{F}_i$
+with $\mathcal{F}_i$ constructible abelian by
+Lemma \ref{lemma-torsion-colimit-constructible}.
+Choose $i \in I$. Since the stalk $\mathcal{F}_{\overline{\eta}}$ is zero
+by assumption, we see that there exists an $i'(i) \geq i$ such that
+$\mathcal{F}_{i, \overline{\eta}} \to \mathcal{F}_{i'(i), \overline{\eta}}$
+is zero, see
+Lemma \ref{lemma-colimit-constructible}.
+Then $\mathcal{G}_i = \Im(\mathcal{F}_i \to \mathcal{F}_{i'(i)})$
+is a constructible abelian sheaf (Lemma \ref{lemma-constructible-abelian})
+whose stalk at $\eta$ is zero.
+Hence the support $E_i$ of $\mathcal{G}_i$ is a constructible
+subset of $X$ not containing $\eta$. Since
+$\eta$ is a generic point of an irreducible component of
+$X$, we see that $\eta \not \in Z_i = \overline{E_i}$ by
+Topology, Lemma \ref{topology-lemma-generic-point-in-constructible}.
+Define a new directed set $I'$ by using the set $I$ with
+ordering defined by the rule
+$i_1$ is greater than or equal to $i_2$ if and only if $i_1 \geq i'(i_2)$.
+Then the sheaves $\mathcal{G}_i$ form a system over $I'$
+with colimit $\mathcal{F}$ and the proof is complete.
+
+\medskip\noindent
+The proof in case (2) is exactly the same and we omit it.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Constructible sheaves on Noetherian schemes}
+\label{section-noetherian-constructible}
+
+\noindent
+If $X$ is a Noetherian scheme then any locally closed subset is a
+constructible locally closed subset
+(Topology, Lemma \ref{topology-lemma-constructible-Noetherian-space}).
+Hence an abelian sheaf $\mathcal{F}$ on $X_\etale$ is constructible
+if and only if there exists a finite partition $X = \coprod X_i$
+such that $\mathcal{F}|_{X_i}$ is finite locally constant.
+(By convention a partition of a topological space has locally
+closed parts, see Topology, Section \ref{topology-section-stratifications}.)
+In other words, we can omit the adjective ``constructible'' in
+Definition \ref{definition-constructible}. Actually, the category
+of constructible sheaves on
+Noetherian schemes has some additional properties which we will
+catalogue in this section.
+
+\begin{proposition}
+\label{proposition-constructible-over-noetherian}
+Let $X$ be a Noetherian scheme. Let $\Lambda$ be a Noetherian ring.
+\begin{enumerate}
+\item Any sub or quotient sheaf of a constructible sheaf of sets
+is constructible.
+\item The category of constructible abelian sheaves on $X_\etale$ is a
+(strong) Serre subcategory of $\textit{Ab}(X_\etale)$. In particular,
+every sub and quotient sheaf of a constructible abelian sheaf
+on $X_\etale$ is constructible.
+\item The category of constructible sheaves of $\Lambda$-modules
+on $X_\etale$ is a (strong) Serre subcategory of
+$\textit{Mod}(X_\etale, \Lambda)$. In particular, every submodule
+and quotient module of a constructible sheaf of $\Lambda$-modules
+on $X_\etale$ is constructible.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Proof of (1). Let $\mathcal{G} \subset \mathcal{F}$ with $\mathcal{F}$
+a constructible sheaf of sets on $X_\etale$. Let $\eta \in X$ be a generic
+point of an irreducible component of $X$. By Noetherian induction
+it suffices to find an open neighbourhood $U$ of $\eta$ such that
+$\mathcal{G}|_U$ is locally constant. To do this we may replace $X$
+by an \'etale neighbourhood of $\eta$.
+Hence we may assume $\mathcal{F}$ is constant and $X$ is irreducible.
+
+\medskip\noindent
+Say $\mathcal{F} = \underline{S}$ for some finite set $S$.
+Then $S' = \mathcal{G}_{\overline{\eta}} \subset S$,
+say $S' = \{s_1, \ldots, s_t\}$.
+Pick an \'etale neighbourhood $(U, \overline{u})$ of $\overline{\eta}$
+and sections $\sigma_1, \ldots, \sigma_t \in \mathcal{G}(U)$ which map to
+$s_i$ in $\mathcal{G}_{\overline{\eta}} \subset S$.
+Since $\sigma_i$ maps to an element
+$s_i \in S' \subset S = \Gamma(X, \mathcal{F})$
+we see that the two pullbacks of $\sigma_i$ to $U \times_X U$
+are the same as sections of $\mathcal{G}$. By the sheaf condition
+for $\mathcal{G}$ we find that $\sigma_i$ comes from a section
+of $\mathcal{G}$ over the open $\Im(U \to X)$ of $X$.
+Shrinking $X$ we may assume
+$\underline{S'} \subset \mathcal{G} \subset \underline{S}$.
+Then we see that $\underline{S'} = \mathcal{G}$ by
+Lemma \ref{lemma-irreducible-subsheaf-constant-zero}.
+
+\medskip\noindent
+Let $\mathcal{F} \to \mathcal{Q}$ be a surjection with $\mathcal{F}$
+a constructible sheaf of sets on $X_\etale$. Then set
+$\mathcal{G} = \mathcal{F} \times_\mathcal{Q} \mathcal{F}$.
+By the first part of the proof we see that $\mathcal{G}$ is
+constructible as a subsheaf of $\mathcal{F} \times \mathcal{F}$.
+This in turn implies that $\mathcal{Q}$ is constructible, see
+Lemma \ref{lemma-constructible-abelian}.
+
+\medskip\noindent
+Proof of (3). We already know that constructible sheaves of modules
+form a weak Serre subcategory, see Lemma \ref{lemma-constructible-abelian}.
+Thus it suffices to show the statement on submodules.
+
+\medskip\noindent
+Let $\mathcal{G} \subset \mathcal{F}$ be a submodule of a
+constructible sheaf of $\Lambda$-modules on $X_\etale$. Let $\eta \in X$
+be a generic point of an irreducible component of $X$. By Noetherian induction
+it suffices to find an open neighbourhood $U$ of $\eta$ such that
+$\mathcal{G}|_U$ is locally constant. To do this we may replace $X$
+by an \'etale neighbourhood of $\eta$. Hence we may assume $\mathcal{F}$
+is constant and $X$ is irreducible.
+
+\medskip\noindent
+Say $\mathcal{F} = \underline{M}$ for some finite $\Lambda$-module $M$.
+Then $M' = \mathcal{G}_{\overline{\eta}} \subset M$. Pick finitely
+many elements $s_1, \ldots, s_t$ generating $M'$ as a $\Lambda$-module.
+(This is possible as $\Lambda$ is Noetherian and $M$ is finite.)
+Pick an \'etale neighbourhood $(U, \overline{u})$ of $\overline{\eta}$
+and sections $\sigma_1, \ldots, \sigma_t \in \mathcal{G}(U)$ which map to
+$s_i$ in $\mathcal{G}_{\overline{\eta}} \subset M$.
+Since $\sigma_i$ maps to an element
+$s_i \in M' \subset M = \Gamma(X, \mathcal{F})$
+we see that the two pullbacks of $\sigma_i$ to $U \times_X U$
+are the same as sections of $\mathcal{G}$. By the sheaf condition
+for $\mathcal{G}$ we find that $\sigma_i$ comes from a section
+of $\mathcal{G}$ over the open $\Im(U \to X)$ of $X$.
+Shrinking $X$ we may assume
+$\underline{M'} \subset \mathcal{G} \subset \underline{M}$.
+Then we see that $\underline{M'} = \mathcal{G}$ by
+Lemma \ref{lemma-irreducible-subsheaf-constant-zero}.
+
+\medskip\noindent
+Proof of (2). This follows in the usual manner from (3). Details
+omitted.
+\end{proof}
+
+\noindent
+The following lemma tells us that every object of the abelian category of
+constructible sheaves on $X$ is ``Noetherian'', i.e., satisfies
+a.c.c.\ for subobjects.
+
+\begin{lemma}
+\label{lemma-constructible-over-noetherian-noetherian}
+Let $X$ be a Noetherian scheme. Let $\Lambda$ be a Noetherian ring.
+Consider inclusions
+$$
+\mathcal{F}_1 \subset \mathcal{F}_2 \subset \mathcal{F}_3 \subset \ldots
+\subset \mathcal{F}
+$$
+in the category of sheaves of sets, abelian groups, or $\Lambda$-modules.
+If $\mathcal{F}$ is constructible, then for some $n$
+we have $\mathcal{F}_n = \mathcal{F}_{n + 1} = \mathcal{F}_{n + 2} = \ldots$.
+\end{lemma}
+
+\begin{proof}
+By Proposition \ref{proposition-constructible-over-noetherian}
+we see that $\mathcal{F}_i$ and $\colim \mathcal{F}_i$ are constructible.
+Then the lemma follows from
+Lemma \ref{lemma-colimit-constructible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-constructible-maps-into-constant}
+Let $X$ be a Noetherian scheme.
+\begin{enumerate}
+\item Let $\mathcal{F}$ be a constructible sheaf of sets on $X_\etale$.
+There exists an injective map of sheaves
+$$
+\mathcal{F} \longrightarrow
+\prod\nolimits_{i = 1, \ldots, n} f_{i, *}\underline{E_i}
+$$
+where $f_i : Y_i \to X$ is a finite morphism and $E_i$ is a finite set.
+\item Let $\mathcal{F}$ be a constructible abelian sheaf on $X_\etale$.
+There exists an injective map of abelian sheaves
+$$
+\mathcal{F} \longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} f_{i, *}\underline{M_i}
+$$
+where $f_i : Y_i \to X$ is a finite morphism and
+$M_i$ is a finite abelian group.
+\item Let $\Lambda$ be a Noetherian ring.
+Let $\mathcal{F}$ be a constructible sheaf of $\Lambda$-modules on $X_\etale$.
+There exists an injective map of sheaves of modules
+$$
+\mathcal{F} \longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} f_{i, *}\underline{M_i}
+$$
+where $f_i : Y_i \to X$ is a finite morphism and
+$M_i$ is a finite $\Lambda$-module.
+\end{enumerate}
+Moreover, we may assume each $Y_i$ is irreducible, reduced, maps onto
+an irreducible and reduced closed subscheme $Z_i \subset X$ such that
+$Y_i \to Z_i$ is finite \'etale over a nonempty open of $Z_i$.
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Because we have the ascending chain condition for
+subsheaves of $\mathcal{F}$
+(Lemma \ref{lemma-constructible-over-noetherian-noetherian}), it
+suffices to show that for every point $x \in X$ we
+can find a map $\varphi : \mathcal{F} \to f_*\underline{E}$ where
+$f : Y \to X$ is finite and $E$ is a finite set such that
+$\varphi_{\overline{x}} : \mathcal{F}_{\overline{x}} \to
+(f_*\underline{E})_{\overline{x}}$ is injective.
+(This argument can be avoided by picking a partition of $X$ as in
+Lemma \ref{lemma-constructible-quasi-compact-quasi-separated}
+and constructing a $Y_i \to X$ for each irreducible component
+of each part.)
+Let $Z \subset X$ be the induced reduced scheme structure
+(Schemes, Definition \ref{schemes-definition-reduced-induced-scheme})
+on $\overline{\{x\}}$.
+Since $\mathcal{F}$ is constructible, there is a finite separable
+extension $K/\kappa(x)$ such that
+$\mathcal{F}|_{\Spec(K)}$ is the constant sheaf with value $E$
+for some finite set $E$. Let $Y \to Z$ be the normalization
+of $Z$ in $\Spec(K)$.
+By Morphisms, Lemma \ref{morphisms-lemma-normal-normalization}
+we see that $Y$ is a normal integral scheme.
+As $K/\kappa(x)$ is a finite extension, it is clear that $K$ is the function
+field of $Y$. Denote $g : \Spec(K) \to Y$ the inclusion.
+The map $\mathcal{F}|_{\Spec(K)} \to \underline{E}$ is adjoint
+to a map $\mathcal{F}|_Y \to g_*\underline{E} = \underline{E}$
+(Lemma \ref{lemma-push-constant-sheaf-from-open}).
+This in turn is adjoint to a map
+$\varphi : \mathcal{F} \to f_*\underline{E}$.
+Observe that the stalk of $\varphi$ at a geometric point
+$\overline{x}$ is injective: we may take a lift $\overline{y} \in Y$
+of $\overline{x}$ and the commutative diagram
+$$
+\xymatrix{
+\mathcal{F}_{\overline{x}} \ar@{=}[r] \ar[d] &
+(\mathcal{F}|_Y)_{\overline{y}} \ar@{=}[d] \\
+(f_*\underline{E})_{\overline{x}} \ar[r] &
+\underline{E}_{\overline{y}}
+}
+$$
+proves the injectivity. We are not yet done, however, as the
+morphism $f : Y \to Z$ is integral but in general not
+finite\footnote{If $X$ is a Nagata scheme, for example of finite
+type over a field, then $Y \to Z$ is finite.}.
+
+\medskip\noindent
+To fix the problem stated in the last sentence of the previous paragraph,
+we write $Y = \lim_{i \in I} Y_i$ with $Y_i$ irreducible, integral, and
+finite over $Z$. Namely, apply Properties, Lemma
+\ref{properties-lemma-integral-algebra-directed-colimit-finite}
+to $f_*\mathcal{O}_Y$ viewed as a sheaf of $\mathcal{O}_Z$-algebras
+and apply the functor $\underline{\Spec}_Z$.
+Then $f_*\underline{E} = \colim f_{i, *}\underline{E}$
+by Lemma \ref{lemma-relative-colimit}.
+By Lemma \ref{lemma-constructible-is-compact} the map
+$\mathcal{F} \to f_*\underline{E}$
+factors through $f_{i, *}\underline{E}$ for some $i$.
+Since $Y_i \to Z$ is a finite morphism of integral schemes
+and since the function field extension
+induced by this morphism is finite separable, we see that the
+morphism is finite \'etale over a nonempty open of $Z$ (use
+Algebra, Lemma \ref{algebra-lemma-smooth-at-generic-point}; details omitted).
+This finishes the proof of (1).
+
+\medskip\noindent
+The proofs of (2) and (3) are identical to the proof of (1).
+\end{proof}
+
+\noindent
+In the following lemma we use a standard trick to reduce a very general
+statement to the Noetherian case.
+
+\begin{lemma}
+\label{lemma-constructible-maps-into-constant-general}
+\begin{reference}
+\cite[Expos\'e IX, Proposition 2.14]{SGA4}
+\end{reference}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+\begin{enumerate}
+\item Let $\mathcal{F}$ be a constructible sheaf of sets on $X_\etale$.
+There exists an injective map of sheaves
+$$
+\mathcal{F} \longrightarrow
+\prod\nolimits_{i = 1, \ldots, n} f_{i, *}\underline{E_i}
+$$
+where $f_i : Y_i \to X$ is a finite and finitely presented morphism and
+$E_i$ is a finite set.
+\item Let $\mathcal{F}$ be a constructible abelian sheaf on $X_\etale$.
+There exists an injective map of abelian sheaves
+$$
+\mathcal{F} \longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} f_{i, *}\underline{M_i}
+$$
+where $f_i : Y_i \to X$ is a finite and finitely presented morphism and
+$M_i$ is a finite abelian group.
+\item Let $\Lambda$ be a Noetherian ring.
+Let $\mathcal{F}$ be a constructible sheaf of $\Lambda$-modules on $X_\etale$.
+There exists an injective map of sheaves of modules
+$$
+\mathcal{F} \longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} f_{i, *}\underline{M_i}
+$$
+where $f_i : Y_i \to X$ is a finite and finitely presented morphism and
+$M_i$ is a finite $\Lambda$-module.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We will reduce this lemma to the Noetherian case by absolute Noetherian
+approximation. Namely, by
+Limits, Proposition \ref{limits-proposition-approximate}
+we can write $X = \lim_{t \in T} X_t$ with each $X_t$ of finite type over
+$\Spec(\mathbf{Z})$ and with affine transition morphisms. By
+Lemma \ref{lemma-category-is-colimit}
+the category of constructible sheaves (of sets, abelian groups, or
+$\Lambda$-modules) on $X_\etale$ is the colimit of the corresponding
+categories for $X_t$. Thus our constructible sheaf $\mathcal{F}$
+is the pullback of a similar constructible sheaf $\mathcal{F}_t$
+over $X_t$ for some $t$. Then we apply the Noetherian case
+(Lemma \ref{lemma-constructible-maps-into-constant})
+to find an injection
+$$
+\mathcal{F}_t \longrightarrow
+\prod\nolimits_{i = 1, \ldots, n} f_{i, *}\underline{E_i}
+\quad\text{or}\quad
+\mathcal{F}_t \longrightarrow
+\bigoplus\nolimits_{i = 1, \ldots, n} f_{i, *}\underline{M_i}
+$$
+over $X_t$ for some finite morphisms $f_i : Y_i \to X_t$.
+Since $X_t$ is Noetherian the morphisms $f_i$ are of finite presentation.
+Since pullback is exact and since formation of $f_{i, *}$ commutes
+with base change
+(Lemma \ref{lemma-finite-pushforward-commutes-with-base-change}), we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-support-in-subset}
+Let $X$ be a Noetherian scheme. Let $E \subset X$ be a subset closed
+under specialization.
+\begin{enumerate}
+\item Let $\mathcal{F}$ be a torsion abelian sheaf on $X_\etale$
+whose support is contained in $E$. Then $\mathcal{F} = \colim \mathcal{F}_i$
+is a filtered colimit of constructible abelian sheaves $\mathcal{F}_i$
+such that for each $i$ the support of $\mathcal{F}_i$ is contained in
+a closed subset contained in $E$.
+\item Let $\Lambda$ be a Noetherian ring and $\mathcal{F}$ a sheaf
+of $\Lambda$-modules on $X_\etale$ whose support is contained in $E$.
+Then $\mathcal{F} = \colim \mathcal{F}_i$
+is a filtered colimit of constructible sheaves of
+$\Lambda$-modules $\mathcal{F}_i$ such that for each $i$
+the support of $\mathcal{F}_i$ is contained in a closed subset
+contained in $E$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We can write $\mathcal{F} = \colim_{i \in I} \mathcal{F}_i$
+with $\mathcal{F}_i$ constructible abelian by
+Lemma \ref{lemma-torsion-colimit-constructible}.
+By Proposition \ref{proposition-constructible-over-noetherian}
+the image $\mathcal{F}'_i \subset \mathcal{F}$
+of the map $\mathcal{F}_i \to \mathcal{F}$ is constructible.
+Thus $\mathcal{F} = \colim \mathcal{F}'_i$ and the support
+of $\mathcal{F}'_i$ is contained in $E$.
+Since the support of $\mathcal{F}'_i$ is constructible
+(by our definition of constructible sheaves), we see
+that its closure is also contained in $E$, see for example
+Topology, Lemma
+\ref{topology-lemma-constructible-stable-specialization-closed}.
+
+\medskip\noindent
+The proof in case (2) is exactly the same and we omit it.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Specializations and \'etale sheaves}
+\label{section-specialization}
+
+\noindent
+Topological picture: Let $X$ be a topological space and let $x' \leadsto x$
+be a specialization of points in $X$. Then every open neighbourhood of $x$
+contains $x'$. Hence for any sheaf $\mathcal{F}$ on $X$ there is a
+{\it specialization map}
+$$
+sp : \mathcal{F}_x \longrightarrow \mathcal{F}_{x'}
+$$
+of stalks sending the equivalence class of the pair $(U, s)$ in
+$\mathcal{F}_x$ to the equivalence class of the pair $(U, s)$ in
+$\mathcal{F}_{x'}$; see Sheaves, Section \ref{sheaves-section-stalks}
+for the description of stalks in terms of equivalence classes of pairs.
+Of course this map is functorial in $\mathcal{F}$, i.e., $sp$
+is a transformation of functors.
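+
+\medskip\noindent
+For example, for a constant sheaf $\underline{S}$ the specialization map
+is the identity on $S$. In general $sp$ is neither injective nor surjective:
+if $j : U \to X$ is the inclusion of an open subset with $x' \in U$ but
+$x \not \in U$ and $A$ is a nonzero abelian group, then for the extension
+by zero $\mathcal{F} = j_!\underline{A}$ we obtain
+$$
+sp : \mathcal{F}_x = 0 \longrightarrow A = \mathcal{F}_{x'}
+$$
+while for the pushforward $\mathcal{F} = i_*\underline{A}$ along the
+inclusion $i : Z \to X$ of a closed subset with $x \in Z$ but
+$x' \not \in Z$ we obtain $sp : A \to 0$.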
+
+\medskip\noindent
+For sheaves in the \'etale topology we can mimic this construction, see
+\cite[Expos\'e VII, 7.7, page 397]{SGA4}. To do this suppose we have
+a scheme $S$, a geometric point $\overline{s}$ of $S$, and a geometric
+point $\overline{t}$ of $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+For any sheaf $\mathcal{F}$ on $S_\etale$ we will construct the
+{\it specialization map}
+$$
+sp : \mathcal{F}_{\overline{s}} \longrightarrow \mathcal{F}_{\overline{t}}
+$$
+Here we have abused language: instead of writing
+$\mathcal{F}_{\overline{t}}$ we should write $\mathcal{F}_{p(\overline{t})}$
+where $p : \Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \to S$ is the canonical
+morphism. Recall that
+$$
+\mathcal{F}_{\overline{s}} = \colim_{(U, \overline{u})} \mathcal{F}(U)
+$$
+where the colimit is over all \'etale neighbourhoods $(U, \overline{u})$
+of $(S, \overline{s})$, see Section \ref{section-stalks}. Since
+$\mathcal{O}^{sh}_{S, \overline{s}}$ is the stalk of the
+structure sheaf, we find for every \'etale neighbourhood
+$(U, \overline{u})$ of $(S, \overline{s})$ a canonical map
+$\mathcal{O}_{U, u} \to \mathcal{O}^{sh}_{S, \overline{s}}$.
+Hence we get a unique factorization
+$$
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \to U \to S
+$$
+If $\overline{v}$ denotes the image of $\overline{t}$ in $U$, then we see that
+$(U, \overline{v})$ is an \'etale neighbourhood of $(S, \overline{t})$.
+This construction defines a functor from the category of \'etale neighbourhoods
+of $(S, \overline{s})$ to the category of \'etale neighbourhoods of
+$(S, \overline{t})$. Thus we may define the map
+$sp : \mathcal{F}_{\overline{s}} \to \mathcal{F}_{\overline{t}}$
+by sending the equivalence class of
+$(U, \overline{u}, \sigma)$ where $\sigma \in \mathcal{F}(U)$
+to the equivalence class of $(U, \overline{v}, \sigma)$.
+
+\medskip\noindent
+Let $K \in D(S_\etale)$. With $\overline{s}$ and $\overline{t}$
+as above we have the {\it specialization map}
+$$
+sp : K_{\overline{s}} \longrightarrow K_{\overline{t}}
+\quad\text{in}\quad D(\textit{Ab})
+$$
+Namely, if $K$ is represented by the complex $\mathcal{F}^\bullet$
+of abelian sheaves, then we simply take the map
+$$
+K_{\overline{s}} = \mathcal{F}^\bullet_{\overline{s}}
+\longrightarrow
+\mathcal{F}^\bullet_{\overline{t}} = K_{\overline{t}}
+$$
+which is termwise given by the specialization maps for sheaves
+constructed above. This is independent of the choice of complex
+representing $K$ by the exactness of the stalk functors (i.e.,
+taking stalks of complexes is well defined on the derived category).
+
+\medskip\noindent
+Clearly the construction is functorial in the sheaf $\mathcal{F}$ on
+$S_\etale$. If we think of the stalk functors as morphisms of topoi
+$\overline{s}, \overline{t} : \textit{Sets} \to \Sh(S_\etale)$, then
+we may think of $sp$ as a $2$-morphism
+$$
+\xymatrix{
+\textit{Sets}
+\rrtwocell^{\overline{t}}_{\overline{s}}{\ sp}
+&
+&
+\Sh(S_\etale)
+}
+$$
+of topoi.
+
+\begin{remark}[Alternative description of sp]
+\label{remark-alternative-sp}
+Let $S$, $\overline{s}$, and $\overline{t}$ be as above.
+Another way to describe the specialization map is to use that
+$$
+\mathcal{F}_{\overline{s}} =
+\Gamma(\Spec(\mathcal{O}^{sh}_{S, \overline{s}}), p^{-1}\mathcal{F})
+\quad\text{and}\quad
+\mathcal{F}_{\overline{t}} = \Gamma(\overline{t},
+\overline{t}^{-1}p^{-1}\mathcal{F})
+$$
+The first equality follows from Theorem \ref{theorem-higher-direct-images}
+applied to $\text{id}_S : S \to S$ and the second equality follows
+from Lemma \ref{lemma-stalk-pullback}. Then we can think of $sp$
+as the map
+$$
+sp :
+\mathcal{F}_{\overline{s}} =
+\Gamma(\Spec(\mathcal{O}^{sh}_{S, \overline{s}}), p^{-1}\mathcal{F})
+\xrightarrow{\text{pullback by }\overline{t}}
+\Gamma(\overline{t}, \overline{t}^{-1}p^{-1}\mathcal{F}) =
+\mathcal{F}_{\overline{t}}
+$$
+\end{remark}
+
+\begin{remark}[Yet another description of sp]
+\label{remark-another-sp}
+Let $S$, $\overline{s}$, and $\overline{t}$ be as above.
+Another alternative is to use the unique morphism
+$$
+c :
+\Spec(\mathcal{O}^{sh}_{S, \overline{t}})
+\longrightarrow
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}})
+$$
+over $S$ which is compatible with the given morphism
+$\overline{t} \to \Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+and the morphism
$\overline{t} \to \Spec(\mathcal{O}^{sh}_{S, \overline{t}})$.
+The uniqueness and existence of the displayed arrow
+follows from Algebra, Lemma \ref{algebra-lemma-map-into-henselian-colimit}
+applied to $\mathcal{O}_{S, s}$, $\mathcal{O}^{sh}_{S, \overline{t}}$, and
the map $\mathcal{O}^{sh}_{S, \overline{s}} \to \kappa(\overline{t})$.
+We obtain
+$$
+sp :
+\mathcal{F}_{\overline{s}} =
+\Gamma(\Spec(\mathcal{O}^{sh}_{S, \overline{s}}), \mathcal{F})
+\xrightarrow{\text{pullback by }c}
+\Gamma(\Spec(\mathcal{O}^{sh}_{S, \overline{t}}), \mathcal{F}) =
+\mathcal{F}_{\overline{t}}
+$$
+(with obvious notational conventions).
+In fact this procedure also works for objects $K$ in $D(S_\etale)$:
+the specialization map for $K$ is equal to the map
+$$
+sp :
+K_{\overline{s}} =
+R\Gamma(\Spec(\mathcal{O}^{sh}_{S, \overline{s}}), K)
+\xrightarrow{\text{pullback by }c}
+R\Gamma(\Spec(\mathcal{O}^{sh}_{S, \overline{t}}), K) =
+K_{\overline{t}}
+$$
+The equality signs are valid as taking global sections over
the strictly henselian schemes
+$\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$ and
+$\Spec(\mathcal{O}^{sh}_{S, \overline{t}})$
+is exact (and the same as taking stalks at $\overline{s}$ and $\overline{t}$)
+and hence no subtleties related to the fact that $K$ may be unbounded
+arise.
+\end{remark}
+
+\begin{remark}[Lifting specializations]
+\label{remark-can-lift}
Let $S$ be a scheme and let $t \leadsto s$ be a specialization of
points of $S$. Choose geometric points $\overline{t}$ and $\overline{s}$
+lying over $t$ and $s$. Since $t$ corresponds to a point of
+$\Spec(\mathcal{O}_{S, s})$ by
+Schemes, Lemma \ref{schemes-lemma-specialize-points}
+and since $\mathcal{O}_{S, s} \to \mathcal{O}^{sh}_{S, \overline{s}}$
+is faithfully flat, we can find a point
+$t' \in \Spec(\mathcal{O}^{sh}_{S, \overline{s}})$ mapping to $t$.
+As $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$ is a limit of
+schemes \'etale over $S$ we see that $\kappa(t')/\kappa(t)$
+is a separable algebraic extension (usually not finite of course).
+Since $\kappa(\overline{t})$ is algebraically closed, we can
+choose an embedding $\kappa(t') \to \kappa(\overline{t})$
+as extensions of $\kappa(t)$. This choice gives us a
+commutative diagram
+$$
+\xymatrix{
+\overline{t} \ar[d] \ar[r] &
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \ar[d] &
+\overline{s} \ar[l] \ar[d] \\
+t \ar[r] & S & s \ar[l]
+}
+$$
+of points and geometric points. Thus if $t \leadsto s$
+we can always ``lift'' $\overline{t}$ to a geometric point
+of the strict henselization of $S$ at $\overline{s}$
+and get specialization maps as above.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-specialization-map-pullback}
Let $g : S' \to S$ be a morphism of schemes.
Let $\overline{s}'$ be a geometric point of $S'$, and let
+$\overline{t}'$ be a geometric point of
+$\Spec(\mathcal{O}^{sh}_{S', \overline{s}'})$. Denote
+$\overline{s} = g(\overline{s}')$ and $\overline{t} = h(\overline{t}')$
+where $h : \Spec(\mathcal{O}^{sh}_{S', \overline{s}'}) \to
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$ is the canonical morphism.
+For any sheaf $\mathcal{F}$ on $S_\etale$ the specialization map
+$$
+sp :
(g^{-1}\mathcal{F})_{\overline{s}'}
\longrightarrow
(g^{-1}\mathcal{F})_{\overline{t}'}
+$$
+is equal to the specialization map
+$sp : \mathcal{F}_{\overline{s}} \to \mathcal{F}_{\overline{t}}$
+via the identifications
$(g^{-1}\mathcal{F})_{\overline{s}'} = \mathcal{F}_{\overline{s}}$ and
$(g^{-1}\mathcal{F})_{\overline{t}'} = \mathcal{F}_{\overline{t}}$
+of Lemma \ref{lemma-stalk-pullback}.
+\end{lemma}
+
+\begin{proof}
+Omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-locally-constant}
Let $S$ be a scheme such that every quasi-compact open of $S$ has
a finite number of irreducible components (for example if $S$ has a
+Noetherian underlying topological space, or if $S$ is locally Noetherian).
+Let $\mathcal{F}$ be a sheaf of sets on $S_\etale$.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is finite locally constant, and
+\item all stalks of $\mathcal{F}$ are finite sets and all specialization maps
+$sp : \mathcal{F}_{\overline{s}} \to \mathcal{F}_{\overline{t}}$
+are bijective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume (2). Let $\overline{s}$ be a geometric point of $S$ lying over
+$s \in S$. In order to prove (1) we have to find an \'etale neighbourhood
+$(U, \overline{u})$ of $(S, \overline{s})$ such that $\mathcal{F}|_U$
+is constant. We may and do assume $S$ is affine.
+
+\medskip\noindent
+Since $\mathcal{F}_{\overline{s}}$ is finite, we can choose
+$(U, \overline{u})$, $n \geq 0$, and pairwise distinct elements
+$\sigma_1, \ldots, \sigma_n \in \mathcal{F}(U)$
+such that $\{\sigma_1, \ldots, \sigma_n\} \subset \mathcal{F}(U)$ maps
+bijectively to $\mathcal{F}_{\overline{s}}$ via the map
+$\mathcal{F}(U) \to \mathcal{F}_{\overline{s}}$.
+Consider the map
+$$
+\varphi : \underline{\{1, \ldots, n\}} \longrightarrow \mathcal{F}|_U
+$$
+on $U_\etale$ defined by $\sigma_1, \ldots, \sigma_n$.
+This map is a bijection on stalks at $\overline{u}$
+by construction. Let us consider the subset
+$$
+E = \{u' \in U \mid \varphi_{\overline{u}'}\text{ is bijective}\} \subset U
+$$
+Here $\overline{u}'$ is any geometric point of $U$ lying over $u'$
+(the condition is independent of the choice by Remark \ref{remark-map-stalks}).
+The image $u \in U$ of $\overline{u}$ is in $E$.
+By our assumption on the specialization maps for $\mathcal{F}$,
+by Remark \ref{remark-can-lift}, and by
+Lemma \ref{lemma-specialization-map-pullback}
+we see that $E$ is closed under specializations
+and generalizations in the topological space $U$.
+
+\medskip\noindent
+After shrinking $U$ we may assume $U$ is affine too. By
+Descent, Lemma \ref{descent-lemma-locally-finite-nr-irred-local-fppf}
+we see that $U$ has a finite number of irreducible components.
+After removing the irreducible components which do not pass
+through $u$, we may assume every irreducible component of $U$
+passes through $u$. Since $U$ is a sober topological space
+it follows that $E = U$ and we conclude that $\varphi$ is an isomorphism by
+Theorem \ref{theorem-exactness-stalks}. Thus (1) follows.
+
+\medskip\noindent
+We omit the proof that (1) implies (2).
+\end{proof}
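
\noindent
The following example (not part of the lemma above) illustrates
how condition (2) can fail. Let $S$ be the spectrum of a discrete
valuation ring with closed point $s$ and generic point $t$, let
$i : \{s\} \to S$ be the closed immersion, and let $A$ be a finite
set with at least two elements. The sheaf $\mathcal{F} = i_*\underline{A}$
has finite stalks, but the specialization map
$$
sp : \mathcal{F}_{\overline{s}} = A
\longrightarrow
\mathcal{F}_{\overline{t}} = \{*\}
$$
is not bijective, matching the fact that $\mathcal{F}$ is not
locally constant.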
+
+\begin{lemma}
+\label{lemma-characterize-locally-constant-module}
Let $S$ be a scheme such that every quasi-compact open of $S$ has
a finite number of irreducible components (for example if $S$ has a
+Noetherian underlying topological space, or if $S$ is locally Noetherian).
+Let $\Lambda$ be a Noetherian ring.
+Let $\mathcal{F}$ be a sheaf of $\Lambda$-modules on $S_\etale$.
+The following are equivalent
+\begin{enumerate}
+\item $\mathcal{F}$ is a finite type, locally constant sheaf
+of $\Lambda$-modules, and
+\item all stalks of $\mathcal{F}$ are finite $\Lambda$-modules and
+all specialization maps
+$sp : \mathcal{F}_{\overline{s}} \to \mathcal{F}_{\overline{t}}$
+are bijective.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The proof of this lemma is the same as the proof of
+Lemma \ref{lemma-characterize-locally-constant}.
+Assume (2). Let $\overline{s}$ be a geometric point of $S$ lying over
+$s \in S$. In order to prove (1) we have to find an \'etale neighbourhood
+$(U, \overline{u})$ of $(S, \overline{s})$ such that $\mathcal{F}|_U$
+is constant. We may and do assume $S$ is affine.
+
+\medskip\noindent
+Since $M = \mathcal{F}_{\overline{s}}$ is a finite $\Lambda$-module
+and $\Lambda$ is Noetherian, we can choose a presentation
+$$
+\Lambda^{\oplus m} \xrightarrow{A} \Lambda^{\oplus n} \to M \to 0
+$$
+for some matrix $A = (a_{ji})$ with coefficients in $\Lambda$.
+We can choose $(U, \overline{u})$ and elements
+$\sigma_1, \ldots, \sigma_n \in \mathcal{F}(U)$
such that $\sum_i a_{ji}\sigma_i = 0$ in $\mathcal{F}(U)$ for all $j$ and
+such that the images of $\sigma_i$ in $\mathcal{F}_{\overline{s}} = M$
are the images of the standard basis elements of
$\Lambda^{\oplus n}$ in the presentation of $M$ given above.
+Consider the map
+$$
+\varphi : \underline{M} \longrightarrow \mathcal{F}|_U
+$$
+on $U_\etale$ defined by $\sigma_1, \ldots, \sigma_n$.
+This map is a bijection on stalks at $\overline{u}$
+by construction. Let us consider the subset
+$$
+E = \{u' \in U \mid \varphi_{\overline{u}'}\text{ is bijective}\} \subset U
+$$
+Here $\overline{u}'$ is any geometric point of $U$ lying over $u'$
+(the condition is independent of the choice by Remark \ref{remark-map-stalks}).
+The image $u \in U$ of $\overline{u}$ is in $E$.
+By our assumption on the specialization maps for $\mathcal{F}$,
+by Remark \ref{remark-can-lift}, and by
+Lemma \ref{lemma-specialization-map-pullback}
+we see that $E$ is closed under specializations
+and generalizations in the topological space $U$.
+
+\medskip\noindent
+After shrinking $U$ we may assume $U$ is affine too. By
+Descent, Lemma \ref{descent-lemma-locally-finite-nr-irred-local-fppf}
+we see that $U$ has a finite number of irreducible components.
+After removing the irreducible components which do not pass
+through $u$, we may assume every irreducible component of $U$
+passes through $u$. Since $U$ is a sober topological space
+it follows that $E = U$ and we conclude that $\varphi$ is an isomorphism by
+Theorem \ref{theorem-exactness-stalks}. Thus (1) follows.
+
+\medskip\noindent
+We omit the proof that (1) implies (2).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-specialization-map-pushforward}
+Let $f : X \to S$ be a quasi-compact and quasi-separated morphism of schemes.
+Let $K \in D^+(X_\etale)$. Let $\overline{s}$ be a geometric point of $S$
+and let $\overline{t}$ be a geometric point of
+$\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$. We have a commutative
+diagram
+$$
+\xymatrix{
+(Rf_*K)_{\overline{s}} \ar[r]_{sp} \ar@{=}[d] &
+(Rf_*K)_{\overline{t}} \ar@{=}[d] \\
+R\Gamma(X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}}), K)
+\ar[r] &
+R\Gamma(X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{t}}), K)
+}
+$$
+where the bottom horizontal arrow arises as pullback by the morphism
+$\text{id}_X \times c$ where
+$c : \Spec(\mathcal{O}^{sh}_{S, \overline{t}}) \to
\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+is the morphism introduced in Remark \ref{remark-another-sp}.
+The vertical arrows are given by
+Theorem \ref{theorem-higher-direct-images}.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from the description
+of $sp$ in Remark \ref{remark-another-sp}.
+\end{proof}
+
+\begin{remark}
+\label{remark-specialization-map-and-fibres}
+Let $f : X \to S$ be a morphism of schemes. Let $K \in D(X_\etale)$.
+Let $\overline{s}$ be a geometric point of $S$ and let $\overline{t}$
+be a geometric point of $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+Let $c$ be as in Remark \ref{remark-another-sp}.
+We can always make a commutative diagram
+$$
+\xymatrix{
+(Rf_*K)_{\overline{s}} \ar[r] \ar[d]_{sp} &
+R\Gamma(X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}), K)
+\ar[r] \ar[d]_{(\text{id}_X \times c)^{-1}} &
+R\Gamma(X_{\overline{s}}, K) \\
+(Rf_*K)_{\overline{t}} \ar[r] &
+R\Gamma(X \times_S \Spec(\mathcal{O}_{S, \overline{t}}^{sh}), K) \ar[r] &
+R\Gamma(X_{\overline{t}}, K)
+}
+$$
+where the horizontal arrows are those of Remark \ref{remark-stalk-fibre}.
+In general there won't be a vertical map on the right
+between the cohomologies of $K$ on the fibres fitting into this
+diagram, even in the case of Lemma \ref{lemma-specialization-map-pushforward}.
+\end{remark}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Complexes with constructible cohomology}
+\label{section-Dc}
+
+\noindent
+Let $\Lambda$ be a ring. Denote $D(X_\etale, \Lambda)$ the derived category
+of sheaves of $\Lambda$-modules on $X_\etale$.
We denote by $D^b(X_\etale, \Lambda)$ (respectively $D^+$, $D^-$)
the full subcategory of bounded (resp. bounded below, bounded above)
complexes in
+$D(X_\etale, \Lambda)$.
+
+\begin{definition}
+\label{definition-c}
+Let $X$ be a scheme. Let $\Lambda$ be a Noetherian ring.
+We denote {\it $D_c(X_\etale, \Lambda)$} the full subcategory
+of $D(X_\etale, \Lambda)$ of complexes whose cohomology sheaves
+are constructible sheaves of $\Lambda$-modules.
+\end{definition}
+
+\noindent
+This definition makes sense by Lemma \ref{lemma-constructible-abelian} and
+Derived Categories, Section \ref{derived-section-triangulated-sub}.
+Thus we see that $D_c(X_\etale, \Lambda)$ is a strictly full, saturated
+triangulated subcategory of $D(X_\etale, \Lambda)$.
+
+\begin{lemma}
+\label{lemma-restrict-and-shriek-from-etale-c}
+Let $\Lambda$ be a Noetherian ring.
+If $j : U \to X$ is an \'etale morphism of schemes, then
+\begin{enumerate}
+\item $K|_U \in D_c(U_\etale, \Lambda)$ if $K \in D_c(X_\etale, \Lambda)$, and
+\item $j_!M \in D_c(X_\etale, \Lambda)$ if $M \in D_c(U_\etale, \Lambda)$ and
+the morphism $j$ is quasi-compact and quasi-separated.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The first assertion is clear. The second follows from the fact
+that $j_!$ is exact and Lemma \ref{lemma-jshriek-constructible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-c}
+Let $\Lambda$ be a Noetherian ring.
+Let $f : X \to Y$ be a morphism of schemes. If $K \in D_c(Y_\etale, \Lambda)$
+then $Lf^*K \in D_c(X_\etale, \Lambda)$.
+\end{lemma}
+
+\begin{proof}
This follows from the exactness of $f^{-1} = f^*$ and
Lemma \ref{lemma-pullback-constructible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-one-constructible}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $\Lambda$ be a Noetherian ring. Let $K \in D(X_\etale, \Lambda)$
+and $b \in \mathbf{Z}$ such that $H^b(K)$ is constructible.
Then there exists a sheaf $\mathcal{F}$ which is a finite direct sum
+of $j_{U!}\underline{\Lambda}$ with $U \in \Ob(X_\etale)$ affine and
+a map $\mathcal{F}[-b] \to K$ in $D(X_\etale, \Lambda)$
+inducing a surjection $\mathcal{F} \to H^b(K)$.
+\end{lemma}
+
+\begin{proof}
+Represent $K$ by a complex $\mathcal{K}^\bullet$ of sheaves of
+$\Lambda$-modules. Consider the surjection
+$$
+\Ker(\mathcal{K}^b \to \mathcal{K}^{b + 1})
+\longrightarrow
+H^b(K)
+$$
+By Modules on Sites, Lemma
+\ref{sites-modules-lemma-module-quotient-direct-sum}
+we may choose a surjection
+$\bigoplus_{i \in I} j_{U_i!} \underline{\Lambda} \to
+\Ker(\mathcal{K}^b \to \mathcal{K}^{b + 1})$
+with $U_i$ affine. For $I' \subset I$ finite, denote
+$\mathcal{H}_{I'} \subset H^b(K)$ the image of
+$\bigoplus_{i \in I'} j_{U_i!} \underline{\Lambda}$. By
+Lemma \ref{lemma-colimit-constructible}
+we see that $\mathcal{H}_{I'} = H^b(K)$ for some $I' \subset I$ finite.
The lemma follows by taking
+$\mathcal{F} = \bigoplus_{i \in I'} j_{U_i!} \underline{\Lambda}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-bounded-above-c}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $\Lambda$ be a Noetherian ring. Let $K \in D^-(X_\etale, \Lambda)$. Then
+the following are equivalent
+\begin{enumerate}
+\item $K$ is in $D_c(X_\etale, \Lambda)$,
+\item $K$ can be represented by a bounded above complex
+whose terms are finite direct sums of $j_{U!}\underline{\Lambda}$
+with $U \in \Ob(X_\etale)$ affine,
+\item $K$ can be represented by a bounded above complex
+of flat constructible sheaves of $\Lambda$-modules.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (2) implies (3) and that (3) implies (1).
+Assume $K$ is in $D_c^-(X_\etale, \Lambda)$.
+Say $H^i(K) = 0$ for $i > b$. By induction on $a$
+we will construct a complex $\mathcal{F}^a \to \ldots \to \mathcal{F}^b$
+such that each $\mathcal{F}^i$ is a finite direct sum
+of $j_{U!}\underline{\Lambda}$ with $U \in \Ob(X_\etale)$ affine
+and a map $\mathcal{F}^\bullet \to K$ which induces an isomorphism
+$H^i(\mathcal{F}^\bullet) \to H^i(K)$ for $i > a$ and a surjection
+$H^a(\mathcal{F}^\bullet) \to H^a(K)$.
+For $a = b$ this can be done by Lemma \ref{lemma-one-constructible}.
+Given such a datum choose a distinguished triangle
+$$
+\mathcal{F}^\bullet \to K \to L \to \mathcal{F}^\bullet[1]
+$$
+Then we see that $H^i(L) = 0$ for $i \geq a$. Choose
$\mathcal{F}^{a - 1}[-a + 1] \to L$ as in
Lemma \ref{lemma-one-constructible}. The composition
$\mathcal{F}^{a - 1}[-a + 1] \to L \to \mathcal{F}^\bullet[1]$
+corresponds to a map $\mathcal{F}^{a - 1} \to \mathcal{F}^a$
+such that the composition with $\mathcal{F}^a \to \mathcal{F}^{a + 1}$
+is zero. By TR4 we obtain a map
+$$
+(\mathcal{F}^{a - 1} \to \ldots \to \mathcal{F}^b) \to K
+$$
+in $D(X_\etale, \Lambda)$. This finishes the induction step and the
+proof of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-tensor-c}
+Let $X$ be a scheme. Let $\Lambda$ be a Noetherian ring.
+Let $K, L \in D_c^-(X_\etale, \Lambda)$. Then
+$K \otimes_\Lambda^\mathbf{L} L$ is in $D_c^-(X_\etale, \Lambda)$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-bounded-above-c} and
+\ref{lemma-tensor-product-constructible}.
+\end{proof}
+
+
+
+
+\section{Tor finite with constructible cohomology}
+\label{section-ctf}
+
+\noindent
+Let $X$ be a scheme and let $\Lambda$ be a Noetherian ring. An often used
+subcategory of the derived category $D_c(X_\etale, \Lambda)$ defined
+in Section \ref{section-Dc} is the full subcategory consisting of objects
+having (locally) finite tor dimension. Here is the formal definition.
+
+\begin{definition}
+\label{definition-ctf}
+Let $X$ be a scheme. Let $\Lambda$ be a Noetherian ring. We denote
+{\it $D_{ctf}(X_\etale, \Lambda)$} the full subcategory of
+$D_c(X_\etale, \Lambda)$
+consisting of objects having locally finite tor dimension.
+\end{definition}
+
+\noindent
+This is a strictly full, saturated triangulated subcategory of
+$D_c(X_\etale, \Lambda)$ and $D(X_\etale, \Lambda)$. By our conventions, see
+Cohomology on Sites, Definition \ref{sites-cohomology-definition-tor-amplitude},
+we see that
+$$
+D_{ctf}(X_\etale, \Lambda) \subset D^b_c(X_\etale, \Lambda)
+\subset D^b(X_\etale, \Lambda)
+$$
+if $X$ is quasi-compact. A good way to think about objects of
+$D_{ctf}(X_\etale, \Lambda)$ is given in Lemma \ref{lemma-when-ctf}.
+
+\begin{remark}
+\label{remark-different}
+Objects in the derived category $D_{ctf}(X_\etale, \Lambda)$ in some sense have
+better global properties than the perfect objects in $D(\mathcal{O}_X)$.
+Namely, it can happen that a complex of $\mathcal{O}_X$-modules
+is locally quasi-isomorphic to a finite complex of finite locally free
+$\mathcal{O}_X$-modules, without being
+globally quasi-isomorphic to a bounded complex of locally free
+$\mathcal{O}_X$-modules. The following lemma shows this does not
+happen for $D_{ctf}$ on a Noetherian scheme.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-when-ctf}
+Let $\Lambda$ be a Noetherian ring. Let $X$ be a quasi-compact
+and quasi-separated scheme. Let $K \in D(X_\etale, \Lambda)$. The following
+are equivalent
+\begin{enumerate}
+\item $K \in D_{ctf}(X_\etale, \Lambda)$, and
+\item $K$ can be represented by a finite complex of constructible
+flat sheaves of $\Lambda$-modules.
+\end{enumerate}
+In fact, if $K$ has tor amplitude in $[a, b]$ then we can represent
+$K$ by a complex $\mathcal{F}^a \to \ldots \to \mathcal{F}^b$ with
+$\mathcal{F}^p$ a constructible flat sheaf of $\Lambda$-modules.
+\end{lemma}
+
+\begin{proof}
+It is clear that a finite complex of constructible
+flat sheaves of $\Lambda$-modules has finite tor dimension.
+It is also clear that it is an object of $D_c(X_\etale, \Lambda)$.
+Thus we see that (2) implies (1).
+
+\medskip\noindent
+Assume (1). Choose $a, b \in \mathbf{Z}$ such that
+$H^i(K \otimes_\Lambda^\mathbf{L} \mathcal{G}) = 0$ if
+$i \not \in [a, b]$ for all sheaves of $\Lambda$-modules $\mathcal{G}$.
We will prove that the final assertion holds by induction on $b - a$. If
$a = b$, then $K = H^a(K)[-a]$ where $H^a(K)$ is a flat constructible
sheaf and the result holds. Next, assume $b > a$. Represent $K$
+by a complex $\mathcal{K}^\bullet$ of sheaves of $\Lambda$-modules.
+Consider the surjection
+$$
+\Ker(\mathcal{K}^b \to \mathcal{K}^{b + 1})
+\longrightarrow
+H^b(K)
+$$
+By Lemma \ref{lemma-category-constructible-modules}
+we can find finitely many affine schemes $U_i$ \'etale over $X$ and a
+surjection $\bigoplus j_{U_i!}\underline{\Lambda}_{U_i} \to H^b(K)$.
+After replacing $U_i$ by standard \'etale coverings $\{U_{ij} \to U_i\}$
+we may assume this surjection lifts to a map
+$\mathcal{F} = \bigoplus j_{U_i!}\underline{\Lambda}_{U_i} \to
+\Ker(\mathcal{K}^b \to \mathcal{K}^{b + 1})$.
+This map determines a distinguished triangle
+$$
+\mathcal{F}[-b] \to K \to L \to \mathcal{F}[-b + 1]
+$$
+in $D(X_\etale, \Lambda)$. Since $D_{ctf}(X_\etale, \Lambda)$ is a triangulated
+subcategory we see that $L$ is in it too. In fact $L$ has
+tor amplitude in $[a, b - 1]$ as $\mathcal{F}$ surjects onto
+$H^b(K)$ (details omitted). By induction hypothesis we can find
+a finite complex $\mathcal{F}^a \to \ldots \to \mathcal{F}^{b - 1}$
+of flat constructible sheaves of $\Lambda$-modules representing $L$.
+The map $L \to \mathcal{F}[-b + 1]$ corresponds to a map
$\mathcal{F}^{b - 1} \to \mathcal{F}$ annihilating the image
of $\mathcal{F}^{b - 2} \to \mathcal{F}^{b - 1}$. Setting
$\mathcal{F}^b = \mathcal{F}$, it follows
from axiom TR3 that $K$ is represented by the complex
+$$
+\mathcal{F}^a \to \ldots \to \mathcal{F}^{b - 1} \to \mathcal{F}^b
+$$
+which finishes the proof.
+\end{proof}
+
+\begin{remark}
+\label{remark-projective-each-degree}
+Let $\Lambda$ be a Noetherian ring. Let $X$ be a scheme.
+For a bounded complex $K^\bullet$ of constructible flat $\Lambda$-modules
+on $X_\etale$
+each stalk $K^p_{\overline{x}}$ is a finite projective $\Lambda$-module.
+Hence the stalks of the complex are perfect complexes of $\Lambda$-modules.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-restrict-and-shriek-from-etale-ctf}
+Let $\Lambda$ be a Noetherian ring.
+If $j : U \to X$ is an \'etale morphism of schemes, then
+\begin{enumerate}
+\item $K|_U \in D_{ctf}(U_\etale, \Lambda)$ if
+$K \in D_{ctf}(X_\etale, \Lambda)$, and
+\item $j_!M \in D_{ctf}(X_\etale, \Lambda)$ if
+$M \in D_{ctf}(U_\etale, \Lambda)$ and
+the morphism $j$ is quasi-compact and quasi-separated.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Perhaps the easiest way to prove this lemma is to reduce to the
+case where $X$ is affine and then apply Lemma \ref{lemma-when-ctf}
+to translate it into a statement about finite complexes
+of flat constructible sheaves of $\Lambda$-modules
+where the result follows from Lemma \ref{lemma-jshriek-constructible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pullback-ctf}
+Let $\Lambda$ be a Noetherian ring. Let $f : X \to Y$ be a morphism of schemes.
+If $K \in D_{ctf}(Y_\etale, \Lambda)$ then
+$Lf^*K \in D_{ctf}(X_\etale, \Lambda)$.
+\end{lemma}
+
+\begin{proof}
+Apply Lemma \ref{lemma-when-ctf} to reduce this to a question
+about finite complexes of flat constructible sheaves of $\Lambda$-modules.
Then the statement follows from the exactness of $f^{-1} = f^*$ and
Lemma \ref{lemma-pullback-constructible}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-connected-ctf-locally-constant}
+Let $X$ be a connected scheme. Let $\Lambda$ be a Noetherian ring.
+Let $K \in D_{ctf}(X_\etale, \Lambda)$ have locally constant cohomology sheaves.
+Then there exists a finite complex of finite projective $\Lambda$-modules
+$M^\bullet$ and an \'etale covering $\{U_i \to X\}$ such that
+$K|_{U_i} \cong \underline{M^\bullet}|_{U_i}$ in $D(U_{i, \etale}, \Lambda)$.
+\end{lemma}
+
+\begin{proof}
+Choose an \'etale covering $\{U_i \to X\}$ such that $K|_{U_i}$
is constant, say $K|_{U_i} \cong \underline{M_i^\bullet}|_{U_i}$
+for some finite complex of finite $\Lambda$-modules $M_i^\bullet$.
+See Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-locally-constant}.
+Observe that $U_i \times_X U_j$ is empty if $M_i^\bullet$
+is not isomorphic to $M_j^\bullet$ in $D(\Lambda)$.
+For each complex of $\Lambda$-modules $M^\bullet$ let
+$I_{M^\bullet} =
+\{i \in I \mid M_i^\bullet \cong M^\bullet\text{ in }D(\Lambda)\}$.
+As \'etale morphisms are open we see that
+$U_{M^\bullet} = \bigcup_{i \in I_{M^\bullet}} \Im(U_i \to X)$
+is an open subset of $X$. Then $X = \coprod U_{M^\bullet}$ is a disjoint
+open covering of $X$. As $X$ is connected only one $U_{M^\bullet}$
+is nonempty. As $K$ is in $D_{ctf}(X_\etale, \Lambda)$ we see that $M^\bullet$
+is a perfect complex of $\Lambda$-modules, see
+More on Algebra, Lemma \ref{more-algebra-lemma-perfect}.
+Hence we may assume $M^\bullet$ is a finite complex of finite projective
+$\Lambda$-modules.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Torsion sheaves}
+\label{section-torsion}
+
+\noindent
+A brief section on torsion abelian sheaves and their \'etale cohomology.
+Let $\mathcal{C}$ be a site. We have shown in
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-torsion}
+that any object in $D(\mathcal{C})$ whose cohomology sheaves are
+torsion sheaves, can be represented by a complex all of whose terms
+are torsion.
+
+\begin{lemma}
+\label{lemma-torsion-cohomology}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+\begin{enumerate}
+\item If $\mathcal{F}$ is a torsion abelian sheaf on $X_\etale$, then
+$H^n_\etale(X, \mathcal{F})$ is a torsion abelian group for all $n$.
+\item If $K$ in $D^+(X_\etale)$ has torsion cohomology sheaves, then
+$H^n_\etale(X, K)$ is a torsion abelian group for all $n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
To prove (1) we write $\mathcal{F} = \bigcup \mathcal{F}[d]$ where
$\mathcal{F}[d]$ is the $d$-torsion subsheaf. By
+Lemma \ref{lemma-colimit} we have
+$H^n_\etale(X, \mathcal{F}) = \colim H^n_\etale(X, \mathcal{F}[d])$.
+This proves (1) as $H^n_\etale(X, \mathcal{F}[d])$ is annihilated by $d$.
+
+\medskip\noindent
+To prove (2) we can use the spectral sequence
+$E_2^{p, q} = H^p_\etale(X, H^q(K))$ converging to $H^n_\etale(X, K)$
+(Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor})
+and the result for sheaves.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-torsion-direct-image}
+Let $f : X \to Y$ be a quasi-compact and quasi-separated
+morphism of schemes.
+\begin{enumerate}
+\item If $\mathcal{F}$ is a torsion abelian sheaf on $X_\etale$, then
+$R^nf_*\mathcal{F}$ is a torsion abelian sheaf on $Y_\etale$ for all $n$.
+\item If $K$ in $D^+(X_\etale)$ has torsion cohomology sheaves, then
+$Rf_*K$ is an object of $D^+(Y_\etale)$ whose cohomology sheaves are
+torsion abelian sheaves.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Recall that $R^nf_*\mathcal{F}$ is the sheaf associated
+to the presheaf $V \mapsto H^n_\etale(X \times_Y V, \mathcal{F})$
+on $Y_\etale$. See Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-higher-direct-images}.
+If we choose $V$ affine, then $X \times_Y V$ is
+quasi-compact and quasi-separated because $f$ is, hence
+we can apply Lemma \ref{lemma-torsion-cohomology} to see that
+$H^n_\etale(X \times_Y V, \mathcal{F})$ is torsion.
+
+\medskip\noindent
+Proof of (2). Recall that $R^nf_*K$ is the sheaf associated
+to the presheaf $V \mapsto H^n_\etale(X \times_Y V, K)$
+on $Y_\etale$. See Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-unbounded-describe-higher-direct-images}.
+If we choose $V$ affine, then $X \times_Y V$ is
+quasi-compact and quasi-separated because $f$ is, hence
+we can apply Lemma \ref{lemma-torsion-cohomology} to see that
+$H^n_\etale(X \times_Y V, K)$ is torsion.
+\end{proof}
+
+
+
+
+
+
+
+
+
+\section{Cohomology with support in a closed subscheme}
+\label{section-cohomology-support}
+
+\noindent
+Let $X$ be a scheme and let $Z \subset X$ be a closed subscheme.
+Let $\mathcal{F}$ be an abelian sheaf on $X_\etale$. We let
+$$
+\Gamma_Z(X, \mathcal{F}) =
+\{s \in \mathcal{F}(X) \mid \text{Supp}(s) \subset Z\}
+$$
+be the sections with support in $Z$ (Definition \ref{definition-support}).
+This is a left exact functor which is not exact in general.
+Hence we obtain a derived functor
+$$
+R\Gamma_Z(X, -) : D(X_\etale) \longrightarrow D(\textit{Ab})
+$$
+and cohomology groups with support in $Z$ defined by
+$H^q_Z(X, \mathcal{F}) = R^q\Gamma_Z(X, \mathcal{F})$.
+
+\medskip\noindent
+Let $\mathcal{I}$ be an injective abelian sheaf on $X_\etale$.
+Let $U = X \setminus Z$.
+Then the restriction map $\mathcal{I}(X) \to \mathcal{I}(U)$ is surjective
+(Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-restriction-along-monomorphism-surjective})
+with kernel $\Gamma_Z(X, \mathcal{I})$. It immediately follows that
+for $K \in D(X_\etale)$ there is a distinguished triangle
+$$
+R\Gamma_Z(X, K) \to R\Gamma(X, K) \to R\Gamma(U, K) \to R\Gamma_Z(X, K)[1]
+$$
+in $D(\textit{Ab})$. As a consequence we obtain a long exact cohomology
+sequence
+$$
+\ldots \to H^i_Z(X, K) \to H^i(X, K) \to H^i(U, K) \to
+H^{i + 1}_Z(X, K) \to \ldots
+$$
+for any $K$ in $D(X_\etale)$.
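
\medskip\noindent
Two degenerate cases of the triangle above may serve as a check.
If $Z = X$, then $U = \emptyset$ so that $R\Gamma(U, K) = 0$ and the
long exact sequence gives
$$
H^i_Z(X, K) = H^i(X, K) \quad\text{for all } i
$$
If instead $Z = \emptyset$, then $\Gamma_Z(X, -) = 0$ and the long
exact sequence reduces to the isomorphisms
$H^i(X, K) \cong H^i(U, K)$ with $U = X$.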
+
+\medskip\noindent
+For an abelian sheaf $\mathcal{F}$ on $X_\etale$ we can consider
+the {\it subsheaf of sections with support in $Z$}, denoted
+$\mathcal{H}_Z(\mathcal{F})$, defined by the rule
+$$
+\mathcal{H}_Z(\mathcal{F})(U) =
+\{s \in \mathcal{F}(U) \mid \text{Supp}(s) \subset U \times_X Z\}
+$$
+Here we use the support of a section from Definition \ref{definition-support}.
+Using the equivalence of
+Proposition \ref{proposition-closed-immersion-pushforward}
+we may view $\mathcal{H}_Z(\mathcal{F})$ as an abelian sheaf on
+$Z_\etale$. Thus we obtain a functor
+$$
+\textit{Ab}(X_\etale) \longrightarrow \textit{Ab}(Z_\etale),\quad
+\mathcal{F} \longmapsto \mathcal{H}_Z(\mathcal{F})
+$$
+which is left exact, but in general not exact.
+
+\begin{lemma}
+\label{lemma-sections-with-support-acyclic}
+Let $i : Z \to X$ be a closed immersion of schemes.
+Let $\mathcal{I}$ be an injective abelian sheaf on $X_\etale$.
+Then $\mathcal{H}_Z(\mathcal{I})$ is an injective abelian sheaf
+on $Z_\etale$.
+\end{lemma}
+
+\begin{proof}
+Observe that for any abelian sheaf $\mathcal{G}$ on $Z_\etale$
+we have
+$$
+\Hom_Z(\mathcal{G}, \mathcal{H}_Z(\mathcal{F})) =
+\Hom_X(i_*\mathcal{G}, \mathcal{F})
+$$
+because after all any section of $i_*\mathcal{G}$ has support in $Z$.
+Since $i_*$ is exact (Section \ref{section-closed-immersions}) and as
+$\mathcal{I}$ is injective on $X_\etale$ we conclude that
+$\mathcal{H}_Z(\mathcal{I})$ is injective on $Z_\etale$.
+\end{proof}
+
+\noindent
+Denote
+$$
+R\mathcal{H}_Z : D(X_\etale) \longrightarrow D(Z_\etale)
+$$
+the derived functor. We set
+$\mathcal{H}^q_Z(\mathcal{F}) = R^q\mathcal{H}_Z(\mathcal{F})$ so that
+$\mathcal{H}^0_Z(\mathcal{F}) = \mathcal{H}_Z(\mathcal{F})$.
+By the lemma above we have a Grothendieck spectral sequence
+$$
+E_2^{p, q} = H^p(Z, \mathcal{H}^q_Z(\mathcal{F}))
+\Rightarrow H^{p + q}_Z(X, \mathcal{F})
+$$
+
+\begin{lemma}
+\label{lemma-cohomology-with-support-sheaf-on-support}
+Let $i : Z \to X$ be a closed immersion of schemes.
+Let $\mathcal{G}$ be an injective abelian sheaf on $Z_\etale$.
+Then $\mathcal{H}^p_Z(i_*\mathcal{G}) = 0$ for $p > 0$.
+\end{lemma}
+
+\begin{proof}
+This is true because the functor $i_*$ is exact and transforms
+injective abelian sheaves into injective abelian sheaves
+(Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-pushforward-injective-flat}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-with-support-triangle}
+Let $i : Z \to X$ be a closed immersion of schemes.
+Let $j : U \to X$ be the inclusion of the complement of $Z$.
+Let $\mathcal{F}$ be an abelian sheaf on $X_\etale$.
+There is a distinguished triangle
+$$
+i_*R\mathcal{H}_Z(\mathcal{F}) \to \mathcal{F} \to Rj_*(\mathcal{F}|_U) \to
+i_*R\mathcal{H}_Z(\mathcal{F})[1]
+$$
+in $D(X_\etale)$. This produces an exact sequence
+$$
+0 \to i_*\mathcal{H}_Z(\mathcal{F}) \to \mathcal{F} \to
+j_*(\mathcal{F}|_U) \to i_*\mathcal{H}^1_Z(\mathcal{F}) \to 0
+$$
+and isomorphisms
+$R^pj_*(\mathcal{F}|_U) \cong i_*\mathcal{H}^{p + 1}_Z(\mathcal{F})$
+for $p \geq 1$.
+\end{lemma}
+
+\begin{proof}
+To get the distinguished triangle, choose an injective resolution
+$\mathcal{F} \to \mathcal{I}^\bullet$. Then we obtain a short exact
+sequence of complexes
+$$
+0 \to
+i_*\mathcal{H}_Z(\mathcal{I}^\bullet) \to \mathcal{I}^\bullet
+\to j_*(\mathcal{I}^\bullet|_U) \to 0
+$$
by the discussion above. Thus we obtain the distinguished triangle from
Derived Categories, Section \ref{derived-section-canonical-delta-functor}.
The final statements follow from the long exact sequence of cohomology
sheaves associated to this triangle, using that $\mathcal{F}$ sits in
degree $0$ and that the $p$th cohomology sheaf of
$i_*R\mathcal{H}_Z(\mathcal{F})$ is $i_*\mathcal{H}^p_Z(\mathcal{F})$
by exactness of $i_*$.
+\end{proof}
+
+\noindent
+Let $X$ be a scheme and let $Z \subset X$ be a closed subscheme.
+We denote $D_Z(X_\etale)$ the strictly full saturated triangulated
+subcategory of $D(X_\etale)$ consisting of complexes whose cohomology
+sheaves are supported on $Z$. Note that $D_Z(X_\etale)$ only
+depends on the underlying closed subset of $X$.
+
+\begin{lemma}
+\label{lemma-complexes-with-support-on-closed}
+Let $i : Z \to X$ be a closed immersion of schemes.
+The map $Ri_{small, *} = i_{small, *} : D(Z_\etale) \to D(X_\etale)$
+induces an equivalence $D(Z_\etale) \to D_Z(X_\etale)$ with quasi-inverse
+$$
+i_{small}^{-1}|_{D_Z(X_\etale)} = R\mathcal{H}_Z|_{D_Z(X_\etale)}
+$$
+\end{lemma}
+
+\begin{proof}
Recall that $i_{small}^{-1}$ and $i_{small, *}$ form an adjoint pair of
exact functors such that $i_{small}^{-1}i_{small, *}$ is isomorphic to
the identity functor on abelian sheaves. See
+Proposition \ref{proposition-closed-immersion-pushforward} and
+Lemma \ref{lemma-stalk-pullback}. Thus
+$i_{small, *} : D(Z_\etale) \to D_Z(X_\etale)$ is fully faithful and
+$i_{small}^{-1}$ determines
+a left inverse. On the other hand, suppose that $K$ is an object of
+$D_Z(X_\etale)$ and consider the adjunction map
+$K \to i_{small, *}i_{small}^{-1}K$.
+Using exactness of $i_{small, *}$ and $i_{small}^{-1}$
+this induces the adjunction maps
+$H^n(K) \to i_{small, *}i_{small}^{-1}H^n(K)$ on cohomology sheaves.
+Since these cohomology
+sheaves are supported on $Z$ we see these adjunction maps are isomorphisms
+and we conclude that $D(Z_\etale) \to D_Z(X_\etale)$ is an equivalence.
+
+\medskip\noindent
+To finish the proof we have to show that $R\mathcal{H}_Z(K) = i_{small}^{-1}K$
+if $K$ is an object of $D_Z(X_\etale)$. To do this we can use that
+$K = i_{small, *}i_{small}^{-1}K$
+as we've just proved this is the case. Then we
+can choose a K-injective representative $\mathcal{I}^\bullet$ for
+$i_{small}^{-1}K$.
+Since $i_{small, *}$ is the right adjoint to the exact functor
+$i_{small}^{-1}$, the
+complex $i_{small, *}\mathcal{I}^\bullet$ is K-injective
+(Derived Categories, Lemma \ref{derived-lemma-adjoint-preserve-K-injectives}).
+We see that $R\mathcal{H}_Z(K)$ is computed by
+$\mathcal{H}_Z(i_{small, *}\mathcal{I}^\bullet) = \mathcal{I}^\bullet$
+as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomology-with-support-quasi-coherent}
+Let $X$ be a scheme. Let $Z \subset X$ be a closed subscheme.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module
+and denote $\mathcal{F}^a$ the associated quasi-coherent sheaf
+on the small \'etale site of $X$
+(Proposition \ref{proposition-quasi-coherent-sheaf-fpqc}). Then
+\begin{enumerate}
+\item $H^q_Z(X, \mathcal{F})$ agrees with $H^q_Z(X_\etale, \mathcal{F}^a)$,
+\item if the complement of $Z$ is retrocompact in $X$, then
+$i_*\mathcal{H}^q_Z(\mathcal{F}^a)$ is a quasi-coherent sheaf of
+$\mathcal{O}_X$-modules equal to $(i_*\mathcal{H}^q_Z(\mathcal{F}))^a$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $j : U \to X$ be the inclusion of the complement of $Z$.
+The statement (1) on cohomology groups follows from the long
+exact sequences for cohomology with supports and the agreements
+$H^q(X_\etale, \mathcal{F}^a) = H^q(X, \mathcal{F})$ and
+$H^q(U_\etale, \mathcal{F}^a) = H^q(U, \mathcal{F})$, see
+Theorem \ref{theorem-zariski-fpqc-quasi-coherent}.
+If $j : U \to X$ is a quasi-compact morphism, i.e., if $U \subset X$
+is retrocompact, then $R^qj_*$ transforms quasi-coherent sheaves
+into quasi-coherent sheaves
+(Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherence-higher-direct-images})
+and commutes with taking associated
+sheaf on \'etale sites
+(Descent, Lemma \ref{descent-lemma-higher-direct-images-small-etale}).
+We conclude by applying
+Lemma \ref{lemma-cohomology-with-support-triangle}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Schemes with strictly henselian local rings}
+\label{section-strictly-henselian-local-rings}
+
+\noindent
+In this section we collect some results about the \'etale cohomology
+of schemes whose local rings are strictly henselian. For example,
+here is a fun generalization of
+Lemma \ref{lemma-vanishing-etale-cohomology-strictly-henselian}.
+
+\begin{lemma}
+\label{lemma-local-rings-strictly-henselian}
+Let $S$ be a scheme all of whose local rings are strictly henselian.
+Then for any abelian sheaf $\mathcal{F}$ on $S_\etale$ we have
+$H^i(S_\etale, \mathcal{F}) = H^i(S_{Zar}, \mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+Let $\epsilon : S_\etale \to S_{Zar}$ be the morphism of sites given
+by the inclusion functor. The Zariski sheaf $R^p\epsilon_*\mathcal{F}$
+is the sheaf associated to the presheaf $U \mapsto H^p_\etale(U, \mathcal{F})$.
Thus the stalk at $s \in S$ is
$\colim H^p_\etale(U, \mathcal{F}) =
H^p_\etale(\Spec(\mathcal{O}_{S, s}), \mathcal{G}_s)$
where $\mathcal{G}_s$ denotes the pullback of $\mathcal{F}$
to $\Spec(\mathcal{O}_{S, s})$, see
+Lemma \ref{lemma-directed-colimit-cohomology}.
Thus the sheaves $R^p\epsilon_*\mathcal{F}$ for $p > 0$ have vanishing
stalks, hence are zero, by
Lemma \ref{lemma-vanishing-etale-cohomology-strictly-henselian}
+and we conclude by the Leray spectral sequence.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gabber-criterion}
+Let $R$ be a ring all of whose local rings are strictly henselian.
+Let $\mathcal{F}$ be a sheaf on $\Spec(R)_\etale$.
+Assume that for all $f, g \in R$ the kernel of
+$$
+H^1_\etale(D(f + g), \mathcal{F})
+\longrightarrow
+H^1_\etale(D(f(f + g)), \mathcal{F}) \oplus
+H^1_\etale(D(g(f + g)), \mathcal{F})
+$$
+is zero. Then $H^q_\etale(\Spec(R), \mathcal{F}) = 0$ for $q > 0$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-local-rings-strictly-henselian} we see that
+\'etale cohomology of $\mathcal{F}$ agrees with Zariski cohomology
+on any open of $\Spec(R)$. We will prove by induction on $i$ the
+statement: for $h \in R$ we have $H^q_\etale(D(h), \mathcal{F}) = 0$
+for $1 \leq q \leq i$. The base case $i = 0$ is trivial. Assume $i \geq 1$.
+
+\medskip\noindent
+Let $\xi \in H^q_\etale(D(h), \mathcal{F})$ for some $1 \leq q \leq i$
+and $h \in R$. If $q < i$ then we are done by induction, so we assume $q = i$.
+After replacing $R$ by $R_h$ we may assume
+$\xi \in H^i_\etale(\Spec(R), \mathcal{F})$; some details omitted.
+Let $I \subset R$ be the set of elements $f \in R$ such that
+$\xi|_{D(f)} = 0$. Since $\xi$ is Zariski locally trivial, it follows
+that for every prime $\mathfrak p$ of $R$ there exists an $f \in I$
+with $f \not \in \mathfrak p$. Thus if we can show that $I$ is an
+ideal, then $1 \in I$ and we're done. It is clear that
+$f \in I$, $r \in R$ implies $rf \in I$. Thus we assume that $f, g \in I$
+and we show that $f + g \in I$. If $q = i = 1$, then this is exactly the
+assumption of the lemma! Whence the result for $i = 1$. For
+$q = i > 1$, note that
+$$
+D(f + g) = D(f(f + g)) \cup D(g(f + g))
+$$
+By Mayer-Vietoris (Cohomology, Lemma \ref{cohomology-lemma-mayer-vietoris}
+which applies as \'etale cohomology on open subschemes of $\Spec(R)$ equals
+Zariski cohomology) we have an exact sequence
+$$
+\xymatrix{
+H^{i - 1}_\etale(D(fg(f + g)), \mathcal{F}) \ar[d] \\
+H^i_\etale(D(f + g), \mathcal{F}) \ar[d] \\
+H^i_\etale(D(f(f + g)), \mathcal{F}) \oplus
+H^i_\etale(D(g(f + g)), \mathcal{F})
+}
+$$
+and the result follows as the first group is zero by induction.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-affine-only-closed-points}
+Let $S$ be an affine scheme such that
+(1) all points are closed, and (2) all residue fields are separably
+algebraically closed. Then
+for any abelian sheaf $\mathcal{F}$ on $S_\etale$ we have
+$H^i(S_\etale, \mathcal{F}) = 0$ for $i > 0$.
+\end{lemma}
+
+\begin{proof}
+Condition (1) implies that the underlying topological space
+of $S$ is profinite, see
+Algebra, Lemma \ref{algebra-lemma-ring-with-only-minimal-primes}.
Thus the higher cohomology groups of an abelian sheaf on the topological
space $S$ (i.e., Zariski cohomology) are trivial, see
+Cohomology, Lemma \ref{cohomology-lemma-vanishing-for-profinite}.
+The local rings are strictly henselian by
+Algebra, Lemma \ref{algebra-lemma-local-dimension-zero-henselian}.
+Thus \'etale cohomology of $S$ is computed by Zariski cohomology
+by Lemma \ref{lemma-local-rings-strictly-henselian}
+and the proof is done.
+\end{proof}
+
+\noindent
+The spectrum of an absolutely integrally closed ring
+is an example of a scheme all of whose local rings are
+strictly henselian, see More on Algebra, Lemma
+\ref{more-algebra-lemma-absolutely-integrally-closed-strictly-henselian}.
+It turns out that normal domains with separably closed fraction
+fields have an even stronger property as explained in the
+following lemma.
+
+\begin{lemma}
+\label{lemma-normal-scheme-with-alg-closed-function-field}
+Let $X$ be an integral normal scheme with separably closed
+function field.
+\begin{enumerate}
+\item A separated \'etale morphism $U \to X$ is a
+disjoint union of open immersions.
+\item All local rings of $X$ are strictly henselian.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
Let $R$ be a normal domain whose fraction field $K$ is separably
algebraically closed. Let $R \to A$ be an \'etale ring map. Then
+$A \otimes_R K$ is as a $K$-algebra a finite product
+$\prod_{i = 1, \ldots, n} K$ of copies of $K$. Let $e_i$, $i = 1, \ldots, n$
+be the corresponding idempotents of $A \otimes_R K$. Since $A$ is normal
+(Algebra, Lemma \ref{algebra-lemma-normal-goes-up})
+the idempotents $e_i$ are in $A$
+(Algebra, Lemma \ref{algebra-lemma-normal-ring-integrally-closed}).
+Hence $A = \prod Ae_i$ and we may assume $A \otimes_R K = K$.
+Since $A \subset A \otimes_R K = K$ (by flatness of $R \to A$ and
+since $R \subset K$) we conclude that $A$ is a domain.
+By the same argument we conclude that
+$A \otimes_R A \subset (A \otimes_R A) \otimes_R K = K$.
+It follows that the map $A \otimes_R A \to A$ is
+injective as well as surjective. Thus $R \to A$ defines an
+open immersion by
+Morphisms, Lemma \ref{morphisms-lemma-universally-injective}
+and
+\'Etale Morphisms, Theorem \ref{etale-theorem-etale-radicial-open}.
+
+\medskip\noindent
+Let $f : U \to X$ be a separated \'etale morphism. Let $\eta \in X$
+be the generic point and let $f^{-1}(\{\eta\}) = \{\xi_i\}_{i \in I}$.
+The result of the previous paragraph shows the following:
+For any affine open $U' \subset U$ whose image in $X$ is contained in
+an affine we have $U' = \coprod_{i \in I} U'_i$ where $U'_i$
is the set of points of $U'$ which are specializations of $\xi_i$.
+Moreover, the morphism $U'_i \to X$ is an open immersion.
+It follows that $U_i = \overline{\{\xi_i\}}$ is an open and closed
+subscheme of $U$ and that $U_i \to X$ is locally on the source
+an isomorphism. By Morphisms,
+Lemma \ref{morphisms-lemma-distinct-local-rings}
the fact that $U_i \to X$ is separated implies that
+$U_i \to X$ is injective and we conclude that $U_i \to X$
+is an open immersion, i.e., (1) holds.
+
+\medskip\noindent
+Part (2) follows from part (1) and the description of the strict
+henselization of $\mathcal{O}_{X, x}$ as the local ring at $\overline{x}$
+on the \'etale site of $X$ (Lemma \ref{lemma-describe-etale-local-ring}).
+It can also be proved directly, see
+Fundamental Groups, Lemma
+\ref{pione-lemma-normal-local-domain-separablly-closed-fraction-field}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-Rf-star-zero-normal-with-alg-closed-function-field}
+Let $f : X \to Y$ be a morphism of schemes where $X$ is an integral
+normal scheme with separably closed function field. Then
+$R^qf_*\underline{M} = 0$ for $q > 0$ and any abelian group $M$.
+\end{lemma}
+
+\begin{proof}
+Recall that $R^qf_*\underline{M}$ is the sheaf associated
+to the presheaf $V \mapsto H^q_\etale(V \times_Y X, M)$ on $Y_\etale$, see
+Lemma \ref{lemma-higher-direct-images}.
+If $V$ is affine, then $V \times_Y X \to X$ is separated and \'etale.
+Hence $V \times_Y X = \coprod U_i$ is a disjoint union of open
+subschemes $U_i$ of $X$, see
+Lemma \ref{lemma-normal-scheme-with-alg-closed-function-field}.
+By Lemma \ref{lemma-local-rings-strictly-henselian}
+we see that $H^q_\etale(U_i, M)$ is equal to
+$H^q_{Zar}(U_i, M)$. This vanishes by
+Cohomology, Lemma \ref{cohomology-lemma-irreducible-constant-cohomology-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-closed-of-affine-normal-scheme-with-alg-closed-function-field}
+Let $X$ be an affine integral normal scheme with separably closed
+function field. Let $Z \subset X$ be a closed subscheme. Let
+$V \to Z$ be an \'etale morphism with $V$ affine. Then $V$ is a finite
+disjoint union of open subschemes of $Z$. If $V \to Z$ is
+surjective and finite \'etale, then $V \to Z$ has a section.
+\end{lemma}
+
+\begin{proof}
+By Algebra, Lemma \ref{algebra-lemma-lift-etale}
+we can lift $V$ to an affine scheme $U$ \'etale over $X$.
+Apply Lemma \ref{lemma-normal-scheme-with-alg-closed-function-field}
+to $U \to X$ to get the first statement.
+
+\medskip\noindent
+The final statement is a consequence of the first.
+Let $V = \coprod_{i = 1, \ldots, n} V_i$ be a finite
+decomposition into open and
+closed subschemes with $V_i \to Z$ an open immersion.
+As $V \to Z$ is finite we see that $V_i \to Z$ is also closed.
+Let $U_i \subset Z$ be the image. Then we have a decomposition
+into open and closed subschemes
+$$
+Z =
+\coprod\nolimits_{(A, B)}
+\bigcap\nolimits_{i \in A} U_i \cap
+\bigcap\nolimits_{i \in B} U_i^c
+$$
+where the disjoint union is over $\{1, \ldots, n\} = A \amalg B$
+where $A$ has at least one element.
Each nonempty stratum is contained in $U_i$ for any $i \in A$;
inverting the open and closed immersion $V_i \to Z$ over such a stratum,
we find our section.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gabber-for-h1-absolutely-algebraically-closed}
+Let $X$ be a normal integral affine scheme with separably closed
+function field. Let $Z \subset X$ be a closed subscheme.
+For any finite abelian group $M$ we have $H^1_\etale(Z, \underline{M}) = 0$.
+\end{lemma}
+
+\begin{proof}
+By Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-torsors-h1}
+an element of $H^1_\etale(Z, \underline{M})$ corresponds to a
+$\underline{M}$-torsor $\mathcal{F}$ on $Z_\etale$.
+Such a torsor is clearly a finite locally constant sheaf.
+Hence $\mathcal{F}$ is representable by a scheme $V$ finite
+\'etale over $Z$, Lemma \ref{lemma-characterize-finite-locally-constant}.
+Of course $V \to Z$ is surjective as a torsor is locally trivial.
+Since $V \to Z$ has a section by
+Lemma \ref{lemma-closed-of-affine-normal-scheme-with-alg-closed-function-field}
+we are done.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-gabber-for-absolutely-algebraically-closed}
+Let $X$ be a normal integral affine scheme with separably closed
+function field. Let $Z \subset X$ be a closed subscheme.
+For any finite abelian group $M$ we have
+$H^q_\etale(Z, \underline{M}) = 0$ for $q \geq 1$.
+\end{lemma}
+
+\begin{proof}
+Write $X = \Spec(R)$ and $Z = \Spec(R')$ so that we have a surjection
+of rings $R \to R'$. All local rings of $R'$ are strictly henselian by
+Lemma \ref{lemma-normal-scheme-with-alg-closed-function-field} and
+Algebra, Lemma \ref{algebra-lemma-quotient-strict-henselization}.
+Furthermore, we see that for any $f' \in R'$ there is a surjection
+$R_f \to R'_{f'}$ where $f \in R$ is a lift of $f'$.
+Since $R_f$ is a normal domain with separably closed fraction field
+we see that $H^1_\etale(D(f'), \underline{M}) = 0$ by
+Lemma \ref{lemma-gabber-for-h1-absolutely-algebraically-closed}.
+Thus we may apply Lemma \ref{lemma-gabber-criterion} to $Z = \Spec(R')$
+to conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-integral-cover-trivial-cohomology}
+Let $X$ be an affine scheme.
+\begin{enumerate}
+\item There exists an integral surjective morphism $X' \to X$ such that for
+every closed subscheme $Z' \subset X'$, every finite abelian group $M$, and
+every $q \geq 1$ we have $H^q_\etale(Z', \underline{M}) = 0$.
+\item For any closed subscheme $Z \subset X$, finite abelian group $M$,
+$q \geq 1$, and $\xi \in H^q_\etale(Z, \underline{M})$ there exists a
+finite surjective morphism $X' \to X$ of finite presentation such that
+$\xi$ pulls back to zero in $H^q_\etale(X' \times_X Z, \underline{M})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Write $X = \Spec(A)$. Write $A = \mathbf{Z}[x_i]/J$ for some ideal $J$.
+Let $R$ be the integral closure of $\mathbf{Z}[x_i]$ in an algebraic
+closure of the fraction field of $\mathbf{Z}[x_i]$. Let
+$A' = R/JR$ and set $X' = \Spec(A')$. This gives an example as in (1) by
+Lemma \ref{lemma-gabber-for-absolutely-algebraically-closed}.
+
+\medskip\noindent
+Proof of (2). Let $X' \to X$ be the integral surjective morphism we found
+above. Certainly, $\xi$ maps to zero in
+$H^q_\etale(X' \times_X Z, \underline{M})$. We may write $X'$ as a
+limit $X' = \lim X'_i$ of schemes finite and of finite presentation
+over $X$; this is easy to do in our current affine case, but it
+is a special case of the more general Limits, Lemma
+\ref{limits-lemma-integral-limit-finite-and-finite-presentation}.
+By Lemma \ref{lemma-directed-colimit-cohomology}
+we see that $\xi$ maps to zero in $H^q_\etale(X'_i \times_X Z, \underline{M})$
+for some $i$ large enough.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Absolutely integrally closed vanishing}
+\label{section-aic-vanishing}
+
+\noindent
+Recall that we say a ring $R$ is absolutely integrally closed
+if every monic polynomial over $R$ has a root in $R$
+(More on Algebra, Definition
+\ref{more-algebra-definition-absolutely-integrally-closed}).
+In this section we prove that the \'etale cohomology of
+$\Spec(R)$ with coefficients in a finite torsion group
+vanishes in positive degrees (Proposition \ref{proposition-aic-vanishing})
+thereby slightly improving the earlier
+Lemma \ref{lemma-gabber-for-absolutely-algebraically-closed}.
+We suggest the reader skip this section.
+
+\begin{lemma}
+\label{lemma-find-extension}
+Let $A$ be a ring. Let $a, b \in A$ such that
+$aA + bA = A$ and $a \bmod bA$ is a root of unity.
+Then there exists a monogenic extension $A \subset B$
+and an element $y \in B$ such that $u = a - by$ is a unit.
+\end{lemma}
+
+\begin{proof}
+Say $a^n \equiv 1 \bmod bA$. In particular $a^i$ is a unit modulo
+$b^mA$ for all $i, m \geq 1$. We claim there exist
+$a_1, \ldots, a_n \in A$ such that
+$$
+1 = a^n + a_1 a^{n - 1}b + a_2 a^{n - 2}b^2 + \ldots + a_n b^n
+$$
+Namely, since $1 - a^n \in bA$ we can find an element $a_1 \in A$
+such that $1 - a^n - a_1 a^{n - 1} b \in b^2A$ using the unit property
+of $a^{n - 1}$ modulo $bA$. Next, we can find an element $a_2 \in A$
+such that $1 - a^n - a_1 a^{n - 1} b - a_2 a^{n - 2} b^2 \in b^3A$.
+And so on. Eventually we find $a_1, \ldots, a_{n - 1} \in A$
+such that $1 - (a^n + a_1 a^{n - 1}b + a_2 a^{n - 2}b^2 +
+\ldots + a_{n - 1} ab^{n - 1}) \in b^nA$. This allows us to find
+$a_n \in A$ such that the displayed equality holds.
+
+\medskip\noindent
+With $a_1, \ldots, a_n$ as above we claim that setting
+$$
+B = A[y]/(y^n + a_1 y^{n - 1} + a_2 y^{n - 2} + \ldots + a_n)
+$$
+works. Namely, suppose that $\mathfrak q \subset B$ is a prime ideal
+lying over $\mathfrak p \subset A$. To get a contradiction
+assume $u = a - by$ is in $\mathfrak q$. If $b \in \mathfrak p$
+then $a \not \in \mathfrak p$ as $aA + bA = A$ and hence $u$ is not
+in $\mathfrak q$. Thus we may assume $b \not \in \mathfrak p$, i.e.,
+$b \not \in \mathfrak q$. This implies that $y \bmod \mathfrak q$
+is equal to $a/b \bmod \mathfrak q$. However, then we obtain
+$$
+0 = y^n + a_1 y^{n - 1} + a_2 y^{n - 2} + \ldots + a_n =
+b^{-n}(a^n + a_1 a^{n - 1}b + a_2 a^{n - 2}b^2 + \ldots + a_nb^n) =
+b^{-n}
+$$
+a contradiction. This finishes the proof.
+\end{proof}
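\noindent
As an illustration of the lemma (with numerical choices made purely
for the example) take $A = \mathbf{Z}$, $a = -1$, $b = 3$, so that
$a \bmod bA$ is a root of unity with $n = 2$. Here the equality
$$
1 = a^2 + a_1 ab + a_2 b^2 = 1 + (-3)a_1 + 9a_2
$$
holds for $a_1 = 3$ and $a_2 = 1$, giving
$B = \mathbf{Z}[y]/(y^2 + 3y + 1)$ and $u = a - by = -1 - 3y$.
Indeed $u$ is a unit: writing $y_1, y_2$ for the roots of
$y^2 + 3y + 1$, the norm of $u$ is
$(-1 - 3y_1)(-1 - 3y_2) = 1 + 3(y_1 + y_2) + 9y_1y_2 = 1 - 9 + 9 = 1$.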
+
+\noindent
+In order to explain the proof we need to introduce some group schemes.
+Fix a prime number $\ell$. Let
+$$
+A = \mathbf{Z}[\zeta] =
+\mathbf{Z}[x]/(x^{\ell - 1} + x^{\ell - 2} + \ldots + 1)
+$$
+In other words $A$ is the monogenic extension of $\mathbf{Z}$ generated by
+a primitive $\ell$th root of unity $\zeta$. We set
+$$
+\pi = \zeta - 1
+$$
A calculation shows that $\ell$ is divisible by $\pi^{\ell - 1}$
in $A$: evaluating the factorization
$x^{\ell - 1} + x^{\ell - 2} + \ldots + 1 =
\prod_{i = 1}^{\ell - 1}(x - \zeta^i)$ at $x = 1$ gives
$\ell = \prod_{i = 1}^{\ell - 1}(1 - \zeta^i)$ and each factor
$1 - \zeta^i = -\pi(\zeta^{i - 1} + \ldots + \zeta + 1)$ is divisible
by $\pi$. Our first group scheme over $A$ is
+$$
+G = \Spec(A[s, \frac{1}{\pi s + 1}])
+$$
+with group law given by the comultiplication
+$$
+\mu :
+A[s, \frac{1}{\pi s + 1}]
+\longrightarrow
+A[s, \frac{1}{\pi s + 1}] \otimes_A A[s, \frac{1}{\pi s + 1}],\quad
+s \longmapsto \pi s \otimes s + s \otimes 1 + 1 \otimes s
+$$
+With this choice we have
+$$
+\mu(\pi s + 1) = (\pi s + 1) \otimes (\pi s + 1)
+$$
+and hence we indeed have an $A$-algebra map as indicated. We omit the
+verification that this indeed defines a group law. Our second group
+scheme over $A$ is
+$$
+H = \Spec(A[t, \frac{1}{\pi^\ell t + 1}])
+$$
+with group law given by the comultiplication
+$$
+\mu : A[t, \frac{1}{\pi^\ell t + 1}]
+\longrightarrow
+A[t, \frac{1}{\pi^\ell t + 1}] \otimes_A A[t, \frac{1}{\pi^\ell t + 1}],\quad
+t \longmapsto \pi^\ell t \otimes t + t \otimes 1 + 1 \otimes t
+$$
+The same verification as before shows that this defines a group law.
+Next, we observe that the polynomial
+$$
+\Phi(s) = \frac{(\pi s + 1)^\ell - 1}{\pi^\ell}
+$$
is in $A[s]$ and of degree $\ell$ and monic in $s$. Namely, the coefficient
of $s^i$ for $0 < i < \ell$ is equal to ${\ell \choose i}\pi^{i - \ell}$;
since $\ell$ divides ${\ell \choose i}$ for $0 < i < \ell$ and
$\pi^{\ell - 1}$ divides $\ell$ in $A$, this is an element of $A$.
+We obtain a ring map
+$$
+A[t, \frac{1}{\pi^\ell t + 1}]
+\longrightarrow
+A[s, \frac{1}{\pi s + 1}],\quad
+t \longmapsto \Phi(s)
+$$
+which the reader easily verifies is compatible with the comultiplications.
+Thus we get a morphism of group schemes
+$$
+f : G \to H
+$$
+The following lemma in particular shows that this morphism is faithfully
+flat (in fact we will see that it is finite \'etale surjective).
+
+\begin{lemma}
+\label{lemma-monogenic-one}
+We have
+$$
+A[s, \frac{1}{\pi s + 1}] =
+\left(A[t, \frac{1}{\pi^\ell t + 1}]\right)[s]/(\Phi(s) - t)
+$$
+In particular, the Hopf algebra of $G$ is a monogenic extension
+of the Hopf algebra of $H$.
+\end{lemma}
+
+\begin{proof}
+Follows from the discussion above and the shape of $\Phi(s)$.
+In particular, note that using $\Phi(s) = t$ the element
+$\frac{1}{\pi^\ell t + 1}$ becomes the element
+$\frac{1}{(\pi s + 1)^\ell}$.
+\end{proof}
+
+\noindent
+Next, let us compute the kernel of $f$. Since the origin of $H$ is
+given by $t = 0$ in $H$ we see that the kernel of $f$ is given
+by $\Phi(s) = 0$. Now observe that the $A$-valued points
+$\sigma_0, \ldots, \sigma_{\ell - 1}$
+of $G$ given by
+$$
+\sigma_i :
+s = \frac{\zeta^i - 1}{\pi} = \frac{\zeta^i - 1}{\zeta - 1} =
+\zeta^{i - 1} + \zeta^{i - 2} + \ldots + 1,\quad
+i = 0, 1, \ldots, \ell - 1
+$$
+are certainly contained in $\Ker(f)$. Moreover, these are all pairwise
+distinct in {\bf all} fibres of $G \to \Spec(A)$. Also, the reader
+computes that $\sigma_i +_G \sigma_j = \sigma_{i + j \bmod \ell}$.
+Hence we find a closed immersion of group schemes
+$$
+\underline{\mathbf{Z}/\ell \mathbf{Z}}_A \longrightarrow \Ker(f)
+$$
+sending $i$ to $\sigma_i$. However, by construction $\Ker(f)$
+is finite flat over $\Spec(A)$ of degree $\ell$. Hence we conclude
+that this map is an isomorphism. All in all we conclude that
+we have a short exact sequence
+\begin{equation}
+\label{equation-ses}
+0 \to
+\underline{\mathbf{Z}/\ell \mathbf{Z}}_A
+\to G
+\to H
+\to 0
+\end{equation}
+of group schemes over $A$.
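\medskip\noindent
The computation $\sigma_i +_G \sigma_j = \sigma_{i + j \bmod \ell}$
left to the reader above can be done as follows. The group law of $G$
sends $(s_1, s_2)$ to $\pi s_1 s_2 + s_1 + s_2$, so that
$$
\pi(s_1 +_G s_2) + 1 = \pi^2 s_1 s_2 + \pi s_1 + \pi s_2 + 1
= (\pi s_1 + 1)(\pi s_2 + 1)
$$
Since $\pi \sigma_i + 1 = \zeta^i$ this gives
$\pi(\sigma_i +_G \sigma_j) + 1 = \zeta^{i + j} =
\pi \sigma_{i + j \bmod \ell} + 1$ as desired.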
+
+\begin{lemma}
+\label{lemma-lift-points-H-to-G}
+Let $R$ be an $A$-algebra which is absolutely integrally closed. Then
+$G(R) \to H(R)$ is surjective.
+\end{lemma}
+
+\begin{proof}
+Let $h \in H(R)$ correspond to the $A$-algebra map
$A[t, \frac{1}{\pi^\ell t + 1}] \to R$ sending $t$ to an element $a \in R$.
Since $\Phi(s)$ is monic and $R$ is absolutely integrally closed,
we can find $b \in R$
with $\Phi(b) = a$. By Lemma \ref{lemma-monogenic-one}
+sending $s$ to $b$ we obtain a unique $A$-algebra map
+$A[s, \frac{1}{\pi s + 1}] \to R$ compatible
+with the map $A[t, \frac{1}{\pi^\ell t + 1}] \to R$ above.
+This in turn corresponds to an element $g \in G(R)$
+mapping to $h \in H(R)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-interpolate}
+Let $R$ be an $A$-algebra which is absolutely integrally closed.
+Let $I, J \subset R$ be ideals with $I + J = R$.
+There exists a $g \in G(R)$ such
+that $g \bmod I = \sigma_0$ and $g \bmod J = \sigma_1$.
+\end{lemma}
+
+\begin{proof}
+Choose $x \in I$ such that $x \equiv 1 \bmod J$.
+We may and do replace $I$ by $xR$ and $J$ by $(x - 1)R$.
+Then we are looking for an $s \in R$ such that
+\begin{enumerate}
+\item $1 + \pi s$ is a unit,
+\item $s \equiv 0 \bmod xR$, and
+\item $s \equiv 1 \bmod (x - 1)R$.
+\end{enumerate}
+The last two conditions say that $s = x + x(x - 1)y$ for some $y \in R$.
+The first condition says that $1 + \pi s = 1 + \pi x + \pi x (x - 1) y$
+needs to be a unit of $R$.
+However, note that $1 + \pi x$ and $\pi x (x - 1)$ generate the
+unit ideal of $R$ and that $1 + \pi x$ is an $\ell$th root of $1$
+modulo $\pi x (x - 1)$\footnote{Because $1 + \pi x$ is congruent
+to $1$ modulo $\pi$, congruent to $1$ modulo $x$, and congruent to
+$1 + \pi = \zeta$ modulo $x - 1$ and because we have
+$(\pi) \cap (x) \cap (x - 1) = (\pi x (x - 1))$ in $A[x]$.}.
+Thus we win by Lemma \ref{lemma-find-extension} and the
+fact that $R$ is absolutely integrally closed.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-aic-vanishing}
+Let $R$ be an absolutely integrally closed ring.
+Let $M$ be a finite abelian group.
+Then $H^i_\etale(\Spec(R), \underline{M}) = 0$ for $i > 0$.
+\end{proposition}
+
+\begin{proof}
+Since any finite abelian group has a finite filtration whose
+subquotients are cyclic of prime order, we may assume
+$M = \mathbf{Z}/\ell\mathbf{Z}$ where $\ell$ is a prime number.
+
+\medskip\noindent
+Observe that all local rings of $R$ are strictly henselian, see
+More on Algebra, Lemma
+\ref{more-algebra-lemma-absolutely-integrally-closed-strictly-henselian}.
+Furthermore, any localization of $R$ is also absolutely integrally closed
+by More on Algebra, Lemma
+\ref{more-algebra-lemma-absolutely-integrally-closed-quotient-localization}.
+Thus Lemma \ref{lemma-gabber-criterion} tells us it suffices to
+show that the kernel of
+$$
+H^1_\etale(D(f + g), \mathbf{Z}/\ell\mathbf{Z})
+\longrightarrow
+H^1_\etale(D(f(f + g)), \mathbf{Z}/\ell\mathbf{Z}) \oplus
+H^1_\etale(D(g(f + g)), \mathbf{Z}/\ell\mathbf{Z})
+$$
+is zero for any $f, g \in R$. After replacing $R$ by $R_{f + g}$ we
+reduce to the following claim: given
+$\xi \in H^1_\etale(\Spec(R), \mathbf{Z}/\ell\mathbf{Z})$
+and an affine open covering $\Spec(R) = U \cup V$
+such that $\xi|_U$ and $\xi|_V$ are trivial, then $\xi = 0$.
+
+\medskip\noindent
+Let $A = \mathbf{Z}[\zeta]$ as above.
+Since $\mathbf{Z} \subset A$ is monogenic, we can find a ring
+map $A \to R$. From now on we think of $R$ as an $A$-algebra
+and we think of $\Spec(R)$ as a scheme over $\Spec(A)$.
+If we base change the short exact sequence (\ref{equation-ses})
+to $\Spec(R)$ and take \'etale cohomology we obtain
+$$
+G(R) \to H(R) \to
+H^1_\etale(\Spec(R), \mathbf{Z}/\ell\mathbf{Z}) \to
+H^1_\etale(\Spec(R), G)
+$$
+Please keep this in mind during the rest of the proof.
+
+\medskip\noindent
+Let $\tau \in \Gamma(U \cap V, \mathbf{Z}/\ell\mathbf{Z})$
+be a section whose boundary in the Mayer-Vietoris sequence
+(Lemma \ref{lemma-mayer-vietoris}) gives $\xi$.
+For $i = 0, 1, \ldots, \ell - 1$ let $A_i \subset U \cap V$
+be the open and closed subset where $\tau$ has the value $i \bmod \ell$.
+Thus we have a finite disjoint union decomposition
+$$
+U \cap V = A_0 \amalg \ldots \amalg A_{\ell - 1}
+$$
+such that $\tau$ is constant on each $A_i$. For
+$i = 0, 1, \ldots, \ell - 1$
+denote $\tau_i \in H^0(U \cap V, \mathbf{Z}/\ell\mathbf{Z})$
+the element which is equal to $1$ on $A_i$ and
+equal to $0$ on $A_j$ for $j \not = i$.
+Then $\tau$ is a sum of multiples of the $\tau_i$\footnote{Modulo
+calculation errors we have $\tau = \sum i \tau_i$.}.
+Hence it suffices to show that the cohomology class corresponding
+to $\tau_i$ is trivial. This reduces us to the case where
+$\tau$ takes only two distinct values, namely $1$ and $0$.
+
+\medskip\noindent
+Assume $\tau$ takes only the values $1$ and $0$. Write
+$$
+U \cap V = A \amalg B
+$$
+where $A$ is the locus where $\tau = 0$ and $B$ is the locus
+where $\tau = 1$. Then $A$ and $B$ are disjoint closed subsets.
+Denote $\overline{A}$ and $\overline{B}$ the closures of $A$ and $B$ in
+$\Spec(R)$. Then we have a ``banana'': namely
+we have
+$$
+\overline{A} \cap \overline{B} = Z_1 \amalg Z_2
+$$
+with $Z_1 \subset U$ and $Z_2 \subset V$ disjoint closed subsets.
+Set $T_1 = \Spec(R) \setminus V$ and $T_2 = \Spec(R) \setminus U$.
+Observe that $Z_1 \subset T_1 \subset U$, $Z_2 \subset T_2 \subset V$, and
+$T_1 \cap T_2 = \emptyset$. Topologically we can write
+$$
+\Spec(R) = \overline{A} \cup \overline{B} \cup T_1 \cup T_2
+$$
+We suggest drawing a picture to visualize this.
+In order to prove that $\xi$ is zero, we may and do replace $R$ by
+its reduction (Proposition \ref{proposition-topological-invariance}).
+Below, we think of $A$, $\overline{A}$, $B$, $\overline{B}$, $T_1$, $T_2$
+as reduced closed subschemes of $\Spec(R)$. Next, as scheme structures on $Z_1$
+and $Z_2$ we use
+$$
+Z_1 = \overline{A} \cap (\overline{B} \cup T_1)
+\quad\text{and}\quad
+Z_2 = \overline{A} \cap (\overline{B} \cup T_2)
+$$
+(scheme theoretic unions and intersections as in Morphisms, Definition
+\ref{morphisms-definition-scheme-theoretic-intersection-union}).
+
+\medskip\noindent
+Denote $X$ the $G$-torsor over $\Spec(R)$ corresponding to the image
+of $\xi$ in $H^1(\Spec(R), G)$. If $X$ is trivial, then $\xi$
+comes from an element $h \in H(R)$ (see exact sequence of cohomology
+above). However, then by Lemma \ref{lemma-lift-points-H-to-G}
+the element $h$ lifts to an element of $G(R)$ and we conclude
+$\xi = 0$ as desired. Thus our goal is to prove that $X$ is trivial.
+
+\medskip\noindent
+Recall that the embedding $\mathbf{Z}/\ell \mathbf{Z} \to G(R)$
+sends $i \bmod \ell$ to $\sigma_i \in G(R)$. Observe that $\overline{A}$
+is the spectrum of an absolutely integrally closed ring (namely
+a quotient of $R$). By
+Lemma \ref{lemma-interpolate} we can find $g \in G(\overline{A})$ with
+$g|_{\overline{A} \cap Z_1} = \sigma_0$ and
+$g|_{\overline{A} \cap Z_2} = \sigma_1$ (scheme theoretically).
+Then we can define
+\begin{enumerate}
+\item $g_1 \in G(U)$ which is $g$ on $\overline{A} \cap U$, which
+is $\sigma_0$ on $\overline{B} \cap U$, and $\sigma_0$ on $T_1$, and
+\item $g_2 \in G(V)$ which is $g$ on $\overline{A} \cap V$, which
+is $\sigma_1$ on $\overline{B} \cap V$, and $\sigma_1$ on $T_2$.
+\end{enumerate}
+Namely, to find $g_1$ as in (1) we glue the section
+$\sigma_0$ on $\Omega = (\overline{B} \cup T_1) \cap U$
+to the restriction of the section $g$ on $\Omega' = \overline{A} \cap U$.
+Note that $U = \Omega \cup \Omega'$ (scheme theoretically)
+because $U$ is reduced and $\Omega \cap \Omega' = Z_1$ (scheme theoretically)
+by our choice of $Z_1$. Hence by
+Morphisms, Lemma \ref{morphisms-lemma-scheme-theoretic-union}
+we have that $U$ is the pushout of $\Omega$ and $\Omega'$
+along $Z_1$. Thus we can find $g_1$.
+Similarly for the existence of $g_2$ in (2).
+Then we have
+$$
+\tau = g_2|_{A \cup B} - g_1|_{A \cup B} \quad(\text{difference in the group law of } G)
+$$
+and we see that $X$ is trivial thereby finishing the proof.
+\end{proof}
+
+
+\section{Affine analog of proper base change}
+\label{section-gabber-affine-proper}
+
+\noindent
+In this section we discuss a result by Ofer Gabber, see
+\cite{gabber-affine-proper}. This was also proved by Roland Huber, see
+\cite{Huber-henselian}. We have already done some of the work needed
+for Gabber's proof in Section \ref{section-strictly-henselian-local-rings}.
+
+\begin{lemma}
+\label{lemma-efface-cohomology-on-closed-by-finite-cover}
+Let $X$ be an affine scheme. Let $\mathcal{F}$ be a torsion abelian sheaf
+on $X_\etale$. Let $Z \subset X$ be a closed subscheme. Let
+$\xi \in H^q_\etale(Z, \mathcal{F}|_Z)$ for some $q > 0$.
+Then there exists an injective map $\mathcal{F} \to \mathcal{F}'$
+of torsion abelian sheaves on $X_\etale$ such that
+the image of $\xi$ in $H^q_\etale(Z, \mathcal{F}'|_Z)$ is zero.
+\end{lemma}
+
+\begin{proof}
+By Lemmas \ref{lemma-torsion-colimit-constructible} and \ref{lemma-colimit}
+we can find a map $\mathcal{G} \to \mathcal{F}$ with $\mathcal{G}$
+a constructible abelian sheaf and $\xi$ coming from an element $\zeta$ of
+$H^q_\etale(Z, \mathcal{G}|_Z)$. Suppose we can find an injective map
+$\mathcal{G} \to \mathcal{G}'$ of torsion abelian sheaves on $X_\etale$
+such that the image of $\zeta$ in $H^q_\etale(Z, \mathcal{G}'|_Z)$ is zero.
+Then we can take $\mathcal{F}'$ to be the pushout
+$$
+\mathcal{F}' = \mathcal{G}' \amalg_{\mathcal{G}} \mathcal{F}
+$$
+and we conclude the result of the lemma holds. (Observe that restriction
+to $Z$ is exact, so commutes with finite limits and colimits and moreover
+it commutes with arbitrary colimits as a left adjoint to pushforward.)
+Thus we may assume $\mathcal{F}$ is constructible.
+
+\medskip\noindent
+Assume $\mathcal{F}$ is constructible. By
+Lemma \ref{lemma-constructible-maps-into-constant-general}
+it suffices to prove the result when $\mathcal{F}$
+is of the form $f_*\underline{M}$ where $M$ is a finite abelian group
+and $f : Y \to X$ is a finite morphism of finite presentation
+(such sheaves are still constructible by
+Lemma \ref{lemma-finite-pushforward-constructible}
+but we won't need this).
+Since formation of $f_*$ commutes with any base change
+(Lemma \ref{lemma-finite-pushforward-commutes-with-base-change})
+we see that the restriction of $f_*\underline{M}$ to $Z$ is
+equal to the pushforward of $\underline{M}$ via
+$Y \times_X Z \to Z$. By the Leray spectral sequence
+(Proposition \ref{proposition-leray})
+and vanishing of higher direct images
+(Proposition \ref{proposition-finite-higher-direct-image-zero}),
+we find
+$$
+H^q_\etale(Z, f_*\underline{M}|_Z) = H^q_\etale(Y \times_X Z, \underline{M}).
+$$
+By Lemma \ref{lemma-integral-cover-trivial-cohomology}
+we can find a finite surjective morphism $Y' \to Y$ of finite presentation
+such that $\xi$ maps to zero in $H^q(Y' \times_X Z, \underline{M})$.
+Denoting $f' : Y' \to X$ the composition $Y' \to Y \to X$ we claim
+the map
+$$
+f_*\underline{M} \longrightarrow f'_*\underline{M}
+$$
+is injective which finishes the proof by what was said above.
+To see the desired injectivity we can look at stalks. Namely,
+if $\overline{x} : \Spec(k) \to X$ is a geometric point, then
+$$
+(f_*\underline{M})_{\overline{x}} =
+\bigoplus\nolimits_{f(\overline{y}) = \overline{x}} M
+$$
+by Proposition \ref{proposition-finite-higher-direct-image-zero}
+and similarly for the other sheaf.
+Since $Y' \to Y$ is surjective and finite we see that
+the induced map on geometric points lifting $\overline{x}$ is
+surjective too and we conclude.
+\end{proof}
+
+\noindent
+The lemma above will take care of higher cohomology groups in
+Gabber's result. The following lemma will be used to deal with
+global sections.
+
+\begin{lemma}
+\label{lemma-gabber-h0}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $i : Z \to X$ be a closed immersion. Assume that
+\begin{enumerate}
+\item for any sheaf $\mathcal{F}$ on $X_{Zar}$ the map
+$\Gamma(X, \mathcal{F}) \to \Gamma(Z, i^{-1}\mathcal{F})$
+is bijective, and
+\item for any finite morphism $X' \to X$ assumption (1) holds
+for $Z \times_X X' \to X'$.
+\end{enumerate}
+Then for any sheaf $\mathcal{F}$ on $X_\etale$ we have
+$\Gamma(X, \mathcal{F}) = \Gamma(Z, i^{-1}_{small}\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be a sheaf on $X_\etale$. There is a canonical
+(base change) map
+$$
+i^{-1}(\mathcal{F}|_{X_{Zar}})
+\longrightarrow
+(i_{small}^{-1}\mathcal{F})|_{Z_{Zar}}
+$$
+of sheaves on $Z_{Zar}$. We will show this map is injective by looking
+at stalks. The stalk on the left hand side at $z \in Z$
+is the stalk of $\mathcal{F}|_{X_{Zar}}$ at $z$. The stalk on the right
+hand side is the colimit over all elementary \'etale neighbourhoods
+$(U, u) \to (X, z)$ such that $U \times_X Z \to Z$ has a section over
+a neighbourhood of $z$. As \'etale morphisms are open, the image of
+$U \to X$ is an open neighbourhood $U_0$ of $z$ in $X$. The map
+$\mathcal{F}(U_0) \to \mathcal{F}(U)$ is injective by the sheaf
+condition for $\mathcal{F}$ with respect to the \'etale covering $U \to U_0$.
+Taking the colimit over all $U$ and $U_0$ we obtain injectivity on stalks.
+
+\medskip\noindent
+It follows from this and assumption (1) that the map
+$\Gamma(X, \mathcal{F}) \to \Gamma(Z, i^{-1}_{small}\mathcal{F})$
+is injective. By (2) the same thing is true on all $X'$ finite over $X$.
+
+\medskip\noindent
+Let $s \in \Gamma(Z, i^{-1}_{small}\mathcal{F})$. By construction of
+$i^{-1}_{small}\mathcal{F}$ there exists an \'etale covering
+$\{V_j \to Z\}$, \'etale morphisms $U_j \to X$, sections
+$s_j \in \mathcal{F}(U_j)$ and morphisms $V_j \to U_j$ over $X$
+such that $s|_{V_j}$ is the pullback of $s_j$.
+Observe that every nonempty closed subscheme $T \subset X$ meets $Z$;
+this follows from assumption (1) applied to the sheaf
+$(T \to X)_*\underline{\mathbf{Z}}$ for example. Since the $V_j$ cover $Z$,
+the image of $\coprod U_j \to X$ is an open subset containing $Z$; its
+closed complement does not meet $Z$, hence is empty, and we see that
+$\coprod U_j \to X$ is surjective.
+By More on Morphisms, Lemma
+\ref{more-morphisms-lemma-there-is-a-scheme-integral-over}
+we can find a finite surjective morphism $X' \to X$
+such that $X' \to X$ Zariski locally factors through $\coprod U_j \to X$.
+It follows that $s|_{Z'}$, where $Z' = Z \times_X X'$, Zariski locally
+comes from a section of $\mathcal{F}|_{X'}$. In other words,
+$s|_{Z'}$ comes from $t' \in \Gamma(X', \mathcal{F}|_{X'})$
+by assumption (2).
+By injectivity we conclude that the two pullbacks of $t'$ to
+$X' \times_X X'$ are the same (after all this is true for
+the pullbacks of $s$ to $Z' \times_Z Z'$). Hence we conclude
+$t'$ comes from a section of $\mathcal{F}$ over $X$ by
+Remark \ref{remark-cohomological-descent-finite}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-connected-topological}
+Let $Z \subset X$ be a closed subset of a topological space $X$.
+Assume
+\begin{enumerate}
+\item $X$ is a spectral space
+(Topology, Definition \ref{topology-definition-spectral-space}), and
+\item for $x \in X$ the intersection $Z \cap \overline{\{x\}}$
+is connected (in particular nonempty).
+\end{enumerate}
+If $Z = Z_1 \amalg Z_2$ with $Z_i$ closed in $Z$,
+then there exists a decomposition $X = X_1 \amalg X_2$ with
+$X_i$ closed in $X$ and $Z_i = Z \cap X_i$.
+\end{lemma}
+
+\begin{proof}
+Observe that $Z_i$ is quasi-compact. Hence the set of points
+$W_i$ specializing to $Z_i$ is closed in the constructible topology
+by Topology, Lemma \ref{topology-lemma-make-spectral-space}.
+Assumption (2) implies that $X = W_1 \amalg W_2$.
+Let $x \in \overline{W_1}$. By Topology, Lemma
+\ref{topology-lemma-constructible-stable-specialization-closed} part (1)
+there exists a specialization $x_1 \leadsto x$ with $x_1 \in W_1$.
+Thus $\overline{\{x\}} \subset \overline{\{x_1\}}$ and we see
+that $x \in W_1$. In other words, setting $X_i = W_i$ does the job.
+\end{proof}
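+
+\noindent
+A quick illustration, not part of the original text, of why hypothesis (2)
+cannot be dropped: take $X = \Spec(\mathbf{Z})$, a spectral space, and
+$Z = V(6) = \{(2), (3)\}$. Then $Z = \{(2)\} \amalg \{(3)\}$ is a
+decomposition into closed subsets, but $X$ is connected, so no decomposition
+$X = X_1 \amalg X_2$ as in the lemma can exist. Indeed, hypothesis (2)
+fails here: for the generic point $\eta = (0)$ the set
+$\overline{\{\eta\}} \cap Z = \{(2), (3)\}$ is not connected.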
+
+\begin{lemma}
+\label{lemma-h0-topological}
+Let $Z \subset X$ be a closed subset of a topological space $X$.
+Assume
+\begin{enumerate}
+\item $X$ is a spectral space
+(Topology, Definition \ref{topology-definition-spectral-space}), and
+\item for $x \in X$ the intersection $Z \cap \overline{\{x\}}$
+is connected (in particular nonempty).
+\end{enumerate}
+Then for any sheaf $\mathcal{F}$ on $X$ we have
+$\Gamma(X, \mathcal{F}) = \Gamma(Z, \mathcal{F}|_Z)$.
+\end{lemma}
+
+\begin{proof}
+If $x \leadsto x'$ is a specialization of points, then there is a
+canonical map $\mathcal{F}_{x'} \to \mathcal{F}_x$ compatible with
+sections over opens and functorial in $\mathcal{F}$. Since every point
+of $X$ specializes to a point of $Z$ it follows that
+$\Gamma(X, \mathcal{F}) \to \Gamma(Z, \mathcal{F}|_Z)$ is injective.
+The difficult part is to show that it is surjective.
+
+\medskip\noindent
+Let $\mathcal{B}$ be the set of all quasi-compact opens of $X$.
+Write $\mathcal{F}$ as a filtered colimit $\mathcal{F} = \colim \mathcal{F}_i$
+where each $\mathcal{F}_i$ is as in
+Modules, Equation (\ref{modules-equation-towards-constructible-sets}).
+See Modules, Lemma \ref{modules-lemma-filtered-colimit-constructibles}.
+Then $\mathcal{F}|_Z = \colim \mathcal{F}_i|_Z$ as restriction to $Z$
+is a left adjoint (Categories, Lemma \ref{categories-lemma-adjoint-exact} and
+Sheaves, Lemma \ref{sheaves-lemma-f-map}).
+By Sheaves, Lemma \ref{sheaves-lemma-directed-colimits-sections}
+the functors $\Gamma(X, -)$ and $\Gamma(Z, -)$ commute with filtered colimits.
+Hence we may assume our sheaf $\mathcal{F}$ is as in
+Modules, Equation (\ref{modules-equation-towards-constructible-sets}).
+
+\medskip\noindent
+Suppose that we have an embedding $\mathcal{F} \subset \mathcal{G}$.
+Then we have
+$$
+\Gamma(X, \mathcal{F}) =
+\Gamma(Z, \mathcal{F}|_Z) \cap \Gamma(X, \mathcal{G})
+$$
+where the intersection takes place in $\Gamma(Z, \mathcal{G}|_Z)$.
+This follows from the first remark of the proof because we can check
+whether a global section of $\mathcal{G}$ is in $\mathcal{F}$ by looking
+at the stalks and because every point of $X$ specializes to a point of $Z$.
+
+\medskip\noindent
+By Modules, Lemma \ref{modules-lemma-constructible-in-constant}
+there is an injection $\mathcal{F} \to \prod (Z_i \to X)_*\underline{S_i}$
+where the product is finite, $Z_i \subset X$ is closed, and $S_i$ is finite.
+Thus it suffices to prove surjectivity for the sheaves
+$(Z_i \to X)_*\underline{S_i}$. Observe that
+$$
+\Gamma(X, (Z_i \to X)_*\underline{S_i}) = \Gamma(Z_i, \underline{S_i})
+\quad\text{and}\quad
+\Gamma(Z, ((Z_i \to X)_*\underline{S_i})|_Z) =
+\Gamma(Z \cap Z_i, \underline{S_i})
+$$
+Moreover, conditions (1) and (2) are inherited by $Z_i$; this is clear
+for (2) and follows from
+Topology, Lemma \ref{topology-lemma-spectral-sub} for (1). Thus it
+suffices to prove the lemma in the case of a (finite) constant sheaf.
+This case is a restatement of Lemma \ref{lemma-connected-topological}
+which finishes the proof.
+\end{proof}
+
+\begin{example}
+\label{example-quinard}
+Lemma \ref{lemma-h0-topological} is false if $X$ is not spectral.
+Here is an example: Let $Y$ be a $T_1$ topological space, and
+$y \in Y$ a non-open point (for instance $Y = \mathbf{R}$ with the usual
+topology and $y = 0$). Let $X = Y \amalg \{ x \}$, endowed with
+the topology whose closed sets are $\emptyset$, $\{y\}$, and all
+$F \amalg \{ x \}$, where $F$ is a closed subset of $Y$. Then
+$Z = \{x, y\}$ is a closed subset of $X$, which satisfies assumption (2)
+of Lemma \ref{lemma-h0-topological}. But $X$ is connected, while $Z$ is not.
+The conclusion of the lemma thus fails for the constant sheaf
+with value $\{0, 1\}$ on $X$.
+\end{example}
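+
+\noindent
+To make the example explicit (an elaboration, not in the original): the set
+$\{x\}$ is closed in $X$ (take $F = \emptyset$), so
+$Z = \{x\} \amalg \{y\}$ is a decomposition into closed subsets and $Z$
+is disconnected. On the other hand, the only closed subsets of $X$ not
+containing $x$ are $\emptyset$ and $\{y\}$, and
+$X \setminus \{y\} = (Y \setminus \{y\}) \amalg \{x\}$ is closed if and
+only if $\{y\}$ is open in $Y$, which it is not. Hence $X$ is connected,
+and the constant sheaf with value $\{0, 1\}$ has two global sections on
+$X$ but four on $Z$.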
+
+\begin{lemma}
+\label{lemma-h0-henselian-pair}
+Let $(A, I)$ be a henselian pair. Set $X = \Spec(A)$ and
+$Z = \Spec(A/I)$. For any sheaf $\mathcal{F}$ on $X_\etale$
+we have $\Gamma(X, \mathcal{F}) = \Gamma(Z, \mathcal{F}|_Z)$.
+\end{lemma}
+
+\begin{proof}
+Recall that the spectrum of any ring is a spectral space, see
+Algebra, Lemma \ref{algebra-lemma-spec-spectral}. By
+More on Algebra, Lemma
+\ref{more-algebra-lemma-irreducible-henselian-pair-connected}
+we see that $\overline{\{x\}} \cap Z$ is connected for every $x \in X$.
+By Lemma \ref{lemma-h0-topological} we see that the statement
+is true for sheaves on $X_{Zar}$. For any finite morphism $X' \to X$
+we have $X' = \Spec(A')$ and $Z \times_X X' = \Spec(A'/IA')$
+with $(A', IA')$ a henselian pair, see More on Algebra, Lemma
+\ref{more-algebra-lemma-integral-over-henselian-pair}
+and we get the same statement for sheaves on $(X')_{Zar}$.
+Thus we can apply Lemma \ref{lemma-gabber-h0} to conclude.
+\end{proof}
+
+\noindent
+Finally, we can state and prove Gabber's theorem.
+
+\begin{theorem}[Gabber]
+\label{theorem-gabber}
+Let $(A, I)$ be a henselian pair. Set $X = \Spec(A)$ and
+$Z = \Spec(A/I)$. For any torsion abelian sheaf $\mathcal{F}$ on $X_\etale$
+we have $H^q_\etale(X, \mathcal{F}) = H^q_\etale(Z, \mathcal{F}|_Z)$.
+\end{theorem}
+
+\begin{proof}
+The result holds for $q = 0$ by Lemma \ref{lemma-h0-henselian-pair}.
+Let $q \geq 1$. Suppose the result has been shown in all degrees $< q$.
+Let $\mathcal{F}$ be a torsion abelian sheaf. Let
+$\mathcal{F} \to \mathcal{F}'$
+be an injective map of torsion abelian sheaves (to be chosen later)
+with cokernel $\mathcal{Q}$ so that we have the short exact sequence
+$$
+0 \to \mathcal{F} \to \mathcal{F}' \to \mathcal{Q} \to 0
+$$
+of torsion abelian sheaves on $X_\etale$. This gives a map of long exact
+cohomology sequences over $X$ and $Z$ part of which looks like
+$$
+\xymatrix{
+H^{q - 1}_\etale(X, \mathcal{F}') \ar[d] \ar[r] &
+H^{q - 1}_\etale(X, \mathcal{Q}) \ar[d] \ar[r] &
+H^q_\etale(X, \mathcal{F}) \ar[d] \ar[r] &
+H^q_\etale(X, \mathcal{F}') \ar[d] \\
+H^{q - 1}_\etale(Z, \mathcal{F}'|_Z) \ar[r] &
+H^{q - 1}_\etale(Z, \mathcal{Q}|_Z) \ar[r] &
+H^q_\etale(Z, \mathcal{F}|_Z) \ar[r] &
+H^q_\etale(Z, \mathcal{F}'|_Z)
+}
+$$
+Using this commutative diagram of abelian groups with exact rows
+we will finish the proof.
+
+\medskip\noindent
+Injectivity for $\mathcal{F}$. Let $\xi$ be a nonzero element of
+$H^q_\etale(X, \mathcal{F})$. By
+Lemma \ref{lemma-efface-cohomology-on-closed-by-finite-cover} applied with
+$Z = X$ (!) we can find $\mathcal{F} \subset \mathcal{F}'$ such that
+$\xi$ maps to zero in $H^q_\etale(X, \mathcal{F}')$. Then $\xi$ is the image of
+an element of $H^{q - 1}_\etale(X, \mathcal{Q})$ and bijectivity
+for $q - 1$ implies $\xi$ does not map to zero in
+$H^q_\etale(Z, \mathcal{F}|_Z)$.
+
+\medskip\noindent
+Surjectivity for $\mathcal{F}$. Let $\xi$ be an element of
+$H^q_\etale(Z, \mathcal{F}|_Z)$. By
+Lemma \ref{lemma-efface-cohomology-on-closed-by-finite-cover} applied with
+$Z = Z$ we can find $\mathcal{F} \subset \mathcal{F}'$ such that
+$\xi$ maps to zero in $H^q_\etale(Z, \mathcal{F}'|_Z)$. Then $\xi$ is the image of
+an element of $H^{q - 1}_\etale(Z, \mathcal{Q}|_Z)$ and bijectivity
+for $q - 1$ implies $\xi$ is in the image of the vertical map.
+\end{proof}
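+
+\noindent
+For the reader's convenience we spell out the diagram chase behind the
+injectivity step (an elaboration, not part of the original proof). Suppose
+$\xi \in H^q_\etale(X, \mathcal{F})$ is nonzero but restricts to zero in
+$H^q_\etale(Z, \mathcal{F}|_Z)$. With $\mathcal{F}'$ chosen as in the proof
+we have $\xi = \partial(\xi')$ for some
+$\xi' \in H^{q - 1}_\etale(X, \mathcal{Q})$. As
+$\partial(\xi'|_Z) = \xi|_Z = 0$, exactness of the bottom row shows that
+$\xi'|_Z$ lifts to an element $\eta \in H^{q - 1}_\etale(Z, \mathcal{F}'|_Z)$.
+By bijectivity in degree $q - 1$, applied to both $\mathcal{F}'$ and
+$\mathcal{Q}$, the element $\eta$ comes from some
+$\tilde{\eta} \in H^{q - 1}_\etale(X, \mathcal{F}')$ whose image in
+$H^{q - 1}_\etale(X, \mathcal{Q})$ equals $\xi'$. But then
+$\xi = \partial(\xi') = 0$ by exactness of the top row, a contradiction.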
+
+\begin{lemma}
+\label{lemma-vanishing-restriction-injective}
+Let $X$ be a scheme with affine diagonal which can be covered by
+$n + 1$ affine opens. Let $Z \subset X$ be a closed subscheme.
+Let $\mathcal{A}$ be a torsion sheaf of rings on $X_\etale$
+and let $\mathcal{I}$ be an injective sheaf of $\mathcal{A}$-modules
+on $X_\etale$.
+Then $H^q_\etale(Z, \mathcal{I}|_Z) = 0$ for $q > n$.
+\end{lemma}
+
+\begin{proof}
+We will prove this by induction on $n$. If $n = 0$, then $X$ is affine.
+Say $X = \Spec(A)$ and $Z = \Spec(A/I)$. Let $A^h$ be the filtered colimit
+of \'etale $A$-algebras $B$ such that $A/I \to B/IB$ is an isomorphism.
+Then $(A^h, IA^h)$ is a henselian pair and $A/I = A^h/IA^h$, see
+More on Algebra, Lemma \ref{more-algebra-lemma-henselization}
+and its proof. Set $X^h = \Spec(A^h)$.
+By Theorem \ref{theorem-gabber}
+we see that
+$$
+H^q_\etale(Z, \mathcal{I}|_Z) = H^q_\etale(X^h, \mathcal{I}|_{X^h})
+$$
+By Theorem \ref{theorem-colimit} we have
+$$
+H^q_\etale(X^h, \mathcal{I}|_{X^h}) =
+\colim_{A \to B} H^q_\etale(\Spec(B), \mathcal{I}|_{\Spec(B)})
+$$
+where the colimit is over the $A$-algebras $B$ as above.
+Since the morphisms $\Spec(B) \to \Spec(A)$ are \'etale,
+the restriction $\mathcal{I}|_{\Spec(B)}$ is an injective
+sheaf of $\mathcal{A}|_{\Spec(B)}$-modules
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-cohomology-of-open}).
+Thus the cohomology groups on the right are zero and we get the
+result in this case.
+
+\medskip\noindent
+Induction step. We can use Mayer-Vietoris to do the induction step.
+Namely, suppose that $X = U \cup V$ where $U$ is a union of $n$ affine
+opens and $V$ is affine. Then, using that the diagonal of $X$ is affine,
+we see that $U \cap V$ is the union of $n$ affine opens. Mayer-Vietoris
+gives an exact sequence
+$$
+H^{q - 1}_\etale(U \cap V \cap Z, \mathcal{I}|_Z) \to
+H^q_\etale(Z, \mathcal{I}|_Z) \to
+H^q_\etale(U \cap Z, \mathcal{I}|_Z) \oplus
+H^q_\etale(V \cap Z, \mathcal{I}|_Z)
+$$
+and by our induction hypothesis we obtain vanishing for $q > n$ as desired.
+\end{proof}
+
+
+
+
+
+\section{Cohomology of torsion sheaves on curves}
+\label{section-vanishing-torsion}
+
+\noindent
+The goal of this section is to prove the basic finiteness and vanishing
+results for cohomology of torsion sheaves on curves, see
+Theorem \ref{theorem-vanishing-affine-curves}.
+In Section \ref{section-vanishing-torsion-coefficients}
+we will discuss constructible sheaves of torsion modules
+over a Noetherian ring.
+
+\begin{situation}
+\label{situation-what-to-prove}
+Here $k$ is an algebraically closed field, $X$ is a separated, finite type
+scheme of dimension $\leq 1$ over $k$, and $\mathcal{F}$ is a torsion
+abelian sheaf on $X_\etale$.
+\end{situation}
+
+\noindent
+In Situation \ref{situation-what-to-prove}
+we want to prove the following statements
+\begin{enumerate}
+\item
+\label{item-vanishing}
+$H^q_\etale(X, \mathcal{F}) = 0$ for $q > 2$,
+\item
+\label{item-vanishing-affine}
+$H^q_\etale(X, \mathcal{F}) = 0$ for $q > 1$ if $X$ is affine,
+\item
+\label{item-vanishing-p-p}
+$H^q_\etale(X, \mathcal{F}) = 0$ for $q > 1$ if $p = \text{char}(k) > 0$
+and $\mathcal{F}$ is $p$-power torsion,
+\item
+\label{item-finite-prime-to-p}
+$H^q_\etale(X, \mathcal{F})$ is finite if $\mathcal{F}$ is
+constructible and torsion prime to $\text{char}(k)$,
+\item
+\label{item-finite-proper}
+$H^q_\etale(X, \mathcal{F})$ is finite if $X$ is proper and
+$\mathcal{F}$ constructible,
+\item
+\label{item-base-change-prime-to-p}
+$H^q_\etale(X, \mathcal{F}) \to
+H^q_\etale(X_{k'}, \mathcal{F}|_{X_{k'}})$ is an isomorphism
+for any extension $k'/k$ of algebraically closed fields
+if $\mathcal{F}$ is torsion prime to $\text{char}(k)$,
+\item
+\label{item-base-change-proper}
+$H^q_\etale(X, \mathcal{F}) \to
+H^q_\etale(X_{k'}, \mathcal{F}|_{X_{k'}})$ is an isomorphism
+for any extension $k'/k$ of algebraically closed fields
+if $X$ is proper,
+\item
+\label{item-surjective}
+$H^2_\etale(X, \mathcal{F}) \to H^2_\etale(U, \mathcal{F})$
+is surjective for all $U \subset X$ open.
+\end{enumerate}
+Given any Situation \ref{situation-what-to-prove}
+we will say that
+``statements (\ref{item-vanishing}) -- (\ref{item-surjective}) hold''
+if those statements that apply to the given situation are true.
+We start the proof with the following consequence of our computation
+of cohomology with constant coefficients.
+
+\begin{lemma}
+\label{lemma-constant-smooth-statements}
+In Situation \ref{situation-what-to-prove}
+assume $X$ is smooth and $\mathcal{F} = \underline{\mathbf{Z}/\ell\mathbf{Z}}$
+for some prime number $\ell$. Then statements
+(\ref{item-vanishing}) -- (\ref{item-surjective}) hold
+for $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Since $X$ is smooth, we see that $X$ is a finite disjoint union of
+smooth curves. Hence we may assume $X$ is a smooth curve.
+
+\medskip\noindent
+Case I: $\ell$ different from the characteristic of $k$.
+This case follows from
+Lemma \ref{lemma-cohomology-smooth-projective-curve}
+(projective case) and
+Lemma \ref{lemma-vanishing-cohomology-mu-smooth-curve}
+(affine case). Statement (\ref{item-base-change-prime-to-p})
+on cohomology and extension of algebraically closed ground
+field follows from the fact that the genus $g$ and the number
+of ``punctures'' $r$ do not change when passing from $k$ to $k'$.
+Statement (\ref{item-surjective}) follows as $H^2_\etale(U, \mathcal{F})$
+is zero as soon as $U \not = X$, because then $U$ is affine
+(Varieties, Lemmas \ref{varieties-lemma-proper-minus-point} and
+\ref{varieties-lemma-curve-affine-projective}).
+
+\medskip\noindent
+Case II: $\ell$ is equal to the characteristic of $k$.
+Vanishing holds by Lemma \ref{lemma-vanishing-variety-char-p-p}.
+Statements (\ref{item-finite-proper}) and (\ref{item-base-change-proper})
+follow from
+Lemma \ref{lemma-finiteness-proper-variety-char-p-p}.
+\end{proof}
+
+\begin{remark}[Invariance under extension of algebraically closed ground field]
+\label{remark-invariance}
+Let $k$ be an algebraically closed field of characteristic $p > 0$.
+In Section \ref{section-artin-schreier} we have seen that there is
+an exact sequence
+$$
+k[x] \to k[x] \to
+H^1_\etale(\mathbf{A}^1_k, \mathbf{Z}/p\mathbf{Z}) \to 0
+$$
+where the first arrow maps $f(x)$ to $f^p - f$. A set of representatives
+for the cokernel is formed by the polynomials
+$$
+\sum\nolimits_{p \nmid n} \lambda_n x^n
+$$
+with $\lambda_n \in k$. (If $k$ is not algebraically closed
+you have to add some constants to this as well.) In particular
+when $k'/k$ is an algebraically closed extension, then
+the map
+$$
+H^1_\etale(\mathbf{A}^1_k, \mathbf{Z}/p\mathbf{Z})
+\to
+H^1_\etale(\mathbf{A}^1_{k'}, \mathbf{Z}/p\mathbf{Z})
+$$
+is not an isomorphism in general. In particular, the map
+$\pi_1(\mathbf{A}^1_{k'}) \to \pi_1(\mathbf{A}^1_k)$
+between \'etale fundamental groups (insert future reference here)
+is not an isomorphism either. Thus the \'etale homotopy type
+of the affine line depends on the algebraically closed ground field.
+From Lemma \ref{lemma-constant-smooth-statements} above we see that
+this is a phenomenon which only happens in characteristic $p$
+with $p$-power torsion coefficients.
+\end{remark}
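+
+\noindent
+As a sanity check (not part of the original remark), here is why the
+polynomials $\sum_{p \nmid n} \lambda_n x^n$ represent the cokernel.
+For $m \geq 1$ the identity
+$$
+x^{pm} = \left((x^m)^p - x^m\right) + x^m
+$$
+shows $x^{pm} \equiv x^m$ modulo the image of $f \mapsto f^p - f$, so
+repeatedly dividing exponents by $p$ reduces any polynomial to one
+supported on exponents prime to $p$; constants die because $t^p - t = c$
+has a solution in the algebraically closed field $k$. Conversely, a
+nonzero polynomial $g = \sum_{p \nmid n} \lambda_n x^n$ is not of the
+form $f^p - f$: for nonconstant $f$ the degree of $f^p - f$ equals
+$p\deg(f)$, which is divisible by $p$, whereas $\deg(g)$ is prime to $p$.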
+
+\begin{lemma}
+\label{lemma-ses-statements}
+Let $k$ be an algebraically closed field. Let $X$ be a separated finite
+type scheme over $k$ of dimension $\leq 1$. Let
+$0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{F}_2 \to 0$
+be a short exact sequence of torsion abelian sheaves on $X$.
+If statements (\ref{item-vanishing}) -- (\ref{item-surjective}) hold
+for $\mathcal{F}_1$ and $\mathcal{F}_2$, then they hold
+for $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+This is mostly immediate from the definitions and the long exact sequence
+of cohomology. Also observe that $\mathcal{F}$ is constructible
+(resp.\ of torsion prime to the characteristic of $k$) if and only if
+both $\mathcal{F}_1$ and $\mathcal{F}_2$ are constructible
+(resp.\ of torsion prime to the characteristic of $k$). See
+Proposition \ref{proposition-constructible-over-noetherian}.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-pushforward-statements}
+Let $k$ be an algebraically closed field. Let $f : X \to Y$ be a
+finite morphism of separated finite type schemes over $k$ of
+dimension $\leq 1$. Let $\mathcal{F}$ be a torsion abelian sheaf on $X$.
+If statements (\ref{item-vanishing}) -- (\ref{item-surjective}) hold
+for $\mathcal{F}$, then they hold for $f_*\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Namely, we have $H^q_\etale(X, \mathcal{F}) = H^q_\etale(Y, f_*\mathcal{F})$
+by the vanishing of $R^qf_*$ for $q > 0$
+(Proposition \ref{proposition-finite-higher-direct-image-zero}) and
+the Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-apply-Leray}).
+For (\ref{item-surjective}) use that formation of $f_*$ commutes with
+arbitrary base change
+(Lemma \ref{lemma-finite-pushforward-commutes-with-base-change}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restrict-to-open}
+In Situation \ref{situation-what-to-prove} assume $\mathcal{F}$ is
+constructible. Let $j : X' \to X$ be the inclusion of a dense open subscheme.
+Then statements
+(\ref{item-vanishing}) -- (\ref{item-surjective}) hold for $\mathcal{F}$
+if and only if they hold for $j_!j^{-1}\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Since $X'$ is dense, we see that $Z = X \setminus X'$ has dimension $0$
+and hence is a finite set $Z = \{x_1, \ldots, x_n\}$ of $k$-rational points.
+Consider the short exact sequence
+$$
+0 \to j_!j^{-1}\mathcal{F} \to \mathcal{F} \to i_*i^{-1}\mathcal{F} \to 0
+$$
+of Lemma \ref{lemma-ses-associated-to-open}. Observe that
+$H^q_\etale(X, i_*i^{-1}\mathcal{F}) = H^q_\etale(Z, i^{-1}\mathcal{F})$.
+Namely, $i : Z \to X$ is a closed immersion, hence finite,
+hence we have the vanishing of $R^qi_*$ for $q > 0$ by
+Proposition \ref{proposition-finite-higher-direct-image-zero}, and
+hence the equality follows from the Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-apply-Leray}).
+Since $Z$ is a disjoint union of spectra of algebraically closed
+fields, we conclude that $H^q_\etale(Z, i^{-1}\mathcal{F}) = 0$
+for $q > 0$ and
+$$
+H^0_\etale(Z, i^{-1}\mathcal{F}) =
+\bigoplus\nolimits_{i = 1, \ldots, n} \mathcal{F}_{x_i}
+$$
+which is finite as $\mathcal{F}_{x_i}$ is finite due to the
+assumption that $\mathcal{F}$ is constructible.
+The long exact cohomology sequence gives an exact sequence
+$$
+0 \to
+H^0_\etale(X, j_!j^{-1}\mathcal{F}) \to
+H^0_\etale(X, \mathcal{F}) \to
+H^0_\etale(Z, i^{-1}\mathcal{F}) \to
+H^1_\etale(X, j_!j^{-1}\mathcal{F}) \to
+H^1_\etale(X, \mathcal{F}) \to 0
+$$
+and isomorphisms $H^q_\etale(X, j_!j^{-1}\mathcal{F}) \to
+H^q_\etale(X, \mathcal{F})$ for $q > 1$.
+
+\medskip\noindent
+At this point it is easy to deduce that each of
+(\ref{item-vanishing}) -- (\ref{item-surjective})
+holds for $\mathcal{F}$ if and only if it holds for
+$j_!j^{-1}\mathcal{F}$. We make a few small remarks to help the reader:
+(a) if $\mathcal{F}$ is torsion prime to the characteristic
+of $k$, then so is $j_!j^{-1}\mathcal{F}$,
+(b) the sheaf $j_!j^{-1}\mathcal{F}$ is constructible,
+(c) we have $H^0_\etale(Z, i^{-1}\mathcal{F}) =
+H^0_\etale(Z_{k'}, i^{-1}\mathcal{F}|_{Z_{k'}})$, and (d)
+if $U \subset X$ is an open, then $U' = U \cap X'$ is dense in $U$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-even-easier}
+In Situation \ref{situation-what-to-prove} assume $X$ is smooth.
+Let $j : U \to X$ be an open immersion. Let $\ell$ be a prime number.
+Let $\mathcal{F} = j_!\underline{\mathbf{Z}/\ell\mathbf{Z}}$.
+Then statements (\ref{item-vanishing}) -- (\ref{item-surjective}) hold
+for $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Since $X$ is smooth, it is a disjoint union of smooth curves
+and hence we may assume $X$ is a curve (i.e., irreducible).
+Then either $U = \emptyset$ and there is nothing to prove
+or $U \subset X$ is dense. In this case the lemma follows from
+Lemmas \ref{lemma-constant-smooth-statements} and
+\ref{lemma-restrict-to-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-somewhat-easier}
+In Situation \ref{situation-what-to-prove} assume $X$ is reduced.
+Let $j : U \to X$ be an open immersion. Let $\ell$ be a prime number
+and $\mathcal{F} = j_!\underline{\mathbf{Z}/\ell\mathbf{Z}}$.
+Then statements (\ref{item-vanishing}) -- (\ref{item-surjective}) hold
+for $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+The difference with Lemma \ref{lemma-even-easier} is that here we do not
+assume $X$ is smooth. Let $\nu : X^\nu \to X$ be the normalization
+morphism. Then $\nu$ is finite
+(Varieties, Lemma \ref{varieties-lemma-normalization-locally-algebraic})
+and $X^\nu$ is smooth
+(Varieties, Lemma \ref{varieties-lemma-regular-point-on-curve}).
+Let $j^\nu : U^\nu \to X^\nu$ be the inverse image of $U$.
+By Lemma \ref{lemma-even-easier} the result holds for
+$j^\nu_!\underline{\mathbf{Z}/\ell\mathbf{Z}}$.
+By Lemma \ref{lemma-finite-pushforward-statements}
+the result holds for $\nu_*j^\nu_!\underline{\mathbf{Z}/\ell\mathbf{Z}}$.
+In general it won't be true that
+$\nu_*j^\nu_!\underline{\mathbf{Z}/\ell\mathbf{Z}}$ is equal to
+$j_!\underline{\mathbf{Z}/\ell\mathbf{Z}}$
+but we can work around this as follows.
+As $X$ is reduced the morphism $\nu : X^\nu \to X$
+is an isomorphism over a dense open $j' : X' \to X$
+(Varieties, Lemma \ref{varieties-lemma-normalization-locally-algebraic}).
+Over this open we have agreement
+$$
+(j')^{-1}(\nu_*j^\nu_!\underline{\mathbf{Z}/\ell\mathbf{Z}})
+=
+(j')^{-1}(j_!\underline{\mathbf{Z}/\ell\mathbf{Z}})
+$$
+Using Lemma \ref{lemma-restrict-to-open}
+twice for $j' : X' \to X$ and the sheaves above
+we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-easier}
+In Situation \ref{situation-what-to-prove} assume $X$ is reduced.
+Let $j : U \to X$ be an open immersion with $U$ connected. Let
+$\ell$ be a prime number. Let $\mathcal{G}$ be a finite locally
+constant sheaf of $\mathbf{F}_\ell$-vector spaces on $U$. Let
+$\mathcal{F} = j_!\mathcal{G}$. Then statements
+(\ref{item-vanishing}) -- (\ref{item-surjective}) hold for $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Let $f : V \to U$ be a finite \'etale morphism of degree prime to $\ell$
+as in Lemma \ref{lemma-pullback-filtered}. The discussion in
+Section \ref{section-trace-method} gives maps
+$$
+\mathcal{G} \to f_*f^{-1}\mathcal{G} \to \mathcal{G}
+$$
+whose composition is an isomorphism. Hence it suffices to prove the
+lemma with $\mathcal{F} = j_!f_*f^{-1}\mathcal{G}$.
+By Zariski's main theorem
+(More on Morphisms, Lemma
+\ref{more-morphisms-lemma-quasi-finite-separated-pass-through-finite})
+we can choose a diagram
+$$
+\xymatrix{
+V \ar[r]_{j'} \ar[d]_f & Y \ar[d]^{\overline{f}} \\
+U \ar[r]^j & X
+}
+$$
+with $\overline{f} : Y \to X$ finite and $j'$ an open immersion
+with dense image. We may replace $Y$ by its reduction; this does
+not change $V$, as $V$ is reduced, being \'etale over the reduced scheme $U$.
+Since $f$ is finite and $V$ dense in $Y$ we have $V = U \times_X Y$. By
+Lemma \ref{lemma-compatible-shriek-push-finite} we have
+$$
+j_!f_*f^{-1}\mathcal{G} = \overline{f}_*j'_!f^{-1}\mathcal{G}
+$$
+By Lemma \ref{lemma-finite-pushforward-statements} it suffices to
+consider $j'_!f^{-1}\mathcal{G}$.
+The existence of the filtration given by
+Lemma \ref{lemma-pullback-filtered},
+the fact that $j'_!$ is exact, and
+Lemma \ref{lemma-ses-statements}
+reduces us to the case
+$\mathcal{F} = j'_!\underline{\mathbf{Z}/\ell\mathbf{Z}}$
+which is Lemma \ref{lemma-somewhat-easier}.
+\end{proof}
+
+
+%10.20.09
+
+\begin{theorem}
+\label{theorem-vanishing-affine-curves}
+If $k$ is an algebraically closed field, $X$ is a separated, finite type
+scheme of dimension $\leq 1$ over $k$, and $\mathcal{F}$ is a torsion
+abelian sheaf on $X_\etale$, then
+\begin{enumerate}
+\item
+$H^q_\etale(X, \mathcal{F}) = 0$ for $q > 2$,
+\item
+$H^q_\etale(X, \mathcal{F}) = 0$ for $q > 1$ if $X$ is affine,
+\item
+$H^q_\etale(X, \mathcal{F}) = 0$ for $q > 1$ if $p = \text{char}(k) > 0$
+and $\mathcal{F}$ is $p$-power torsion,
+\item
+$H^q_\etale(X, \mathcal{F})$ is finite if $\mathcal{F}$ is
+constructible and torsion prime to $\text{char}(k)$,
+\item
+$H^q_\etale(X, \mathcal{F})$ is finite if $X$ is proper and
+$\mathcal{F}$ constructible,
+\item
+$H^q_\etale(X, \mathcal{F}) \to
+H^q_\etale(X_{k'}, \mathcal{F}|_{X_{k'}})$ is an isomorphism
+for any extension $k'/k$ of algebraically closed fields
+if $\mathcal{F}$ is torsion prime to $\text{char}(k)$,
+\item
+$H^q_\etale(X, \mathcal{F}) \to
+H^q_\etale(X_{k'}, \mathcal{F}|_{X_{k'}})$ is an isomorphism
+for any extension $k'/k$ of algebraically closed fields
+if $X$ is proper,
+\item
+$H^2_\etale(X, \mathcal{F}) \to H^2_\etale(U, \mathcal{F})$
+is surjective for all $U \subset X$ open.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+The theorem says that in Situation \ref{situation-what-to-prove}
+statements (\ref{item-vanishing}) -- (\ref{item-surjective}) hold.
+Our first step is to replace $X$ by its reduction, which is permissible
+by Proposition \ref{proposition-topological-invariance}.
+By Lemma \ref{lemma-torsion-colimit-constructible} we can write
+$\mathcal{F}$ as a filtered colimit of constructible abelian sheaves.
+Taking cohomology commutes with colimits, see Lemma \ref{lemma-colimit}.
+Moreover, pullback via $X_{k'} \to X$ commutes with colimits as a left
+adjoint. Thus it suffices to prove the statements for a constructible sheaf.
+
+\medskip\noindent
+In this paragraph we use Lemma \ref{lemma-ses-statements} without further
+mention. Writing
+$\mathcal{F} = \mathcal{F}_1 \oplus \ldots \oplus \mathcal{F}_r$
+where $\mathcal{F}_i$ is $\ell_i$-primary for some prime $\ell_i$, we may
+assume that $\ell^n$ kills $\mathcal{F}$ for some prime $\ell$. Now consider
+the exact sequence
+$$
+0 \to \mathcal{F}[\ell] \to \mathcal{F} \to \mathcal{F}/\mathcal{F}[\ell] \to 0.
+$$
+Since $\mathcal{F}/\mathcal{F}[\ell]$ is killed by $\ell^{n - 1}$,
+induction on $n$ shows it suffices to treat the case where
+$\mathcal{F}$ is $\ell$-torsion.
+This means that $\mathcal{F}$ is a constructible sheaf of
+$\mathbf{F}_\ell$-vector spaces for some prime number $\ell$.
+
+\medskip\noindent
+By definition this means there is a dense open $U \subset X$
+such that $\mathcal{F}|_U$ is a finite locally constant sheaf of
+$\mathbf{F}_\ell$-vector spaces. Since $\dim(X) \leq 1$ we may
+assume, after shrinking $U$, that $U = U_1 \amalg \ldots \amalg U_m$
+is a disjoint union of irreducible schemes (just remove the closed
+points which lie in the intersections of $\geq 2$ components of $U$).
+By Lemma \ref{lemma-restrict-to-open} we reduce to the case
+$\mathcal{F} = j_!\mathcal{G}$ where $\mathcal{G}$ is a finite
+locally constant sheaf of $\mathbf{F}_\ell$-vector spaces on $U$.
+
+\medskip\noindent
+Since we chose $U = U_1 \amalg \ldots \amalg U_m$ with $U_i$ irreducible
+we have
+$$
+j_!\mathcal{G} =
+j_{1!}(\mathcal{G}|_{U_1}) \oplus \ldots \oplus
+j_{m!}(\mathcal{G}|_{U_m})
+$$
+where $j_i : U_i \to X$ is the inclusion morphism.
+The case of $j_{i!}(\mathcal{G}|_{U_i})$ is handled in
+Lemma \ref{lemma-vanishing-easier}.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-vanishing-curves}
+Let $X$ be a finite type, dimension $1$ scheme over an
+algebraically closed field $k$. Let $\mathcal{F}$ be a torsion sheaf
+on $X_\etale$. Then
+$$
+H_\etale^q(X, \mathcal{F}) = 0, \quad \forall q \geq 3.
+$$
+If $X$ affine then also $H_\etale^2(X, \mathcal{F}) = 0$.
+\end{theorem}
+
+\begin{proof}
+If $X$ is separated, this follows immediately from the more precise
+Theorem \ref{theorem-vanishing-affine-curves}.
+If $X$ is nonseparated, choose an affine open covering
+$X = X_1 \cup \ldots \cup X_n$. By induction on $n$ we may assume
+the vanishing holds over $U = X_1 \cup \ldots \cup X_{n - 1}$.
+Then Mayer-Vietoris (Lemma \ref{lemma-mayer-vietoris}) gives
+$$
+H^2_\etale(U, \mathcal{F}) \oplus H^2_\etale(X_n, \mathcal{F}) \to
+H^2_\etale(U \cap X_n, \mathcal{F}) \to
+H^3_\etale(X, \mathcal{F}) \to 0
+$$
+However, $U \cap X_n$ is an open subscheme of an affine scheme
+of dimension $\leq 1$ and hence itself affine; thus the group
+$H^2_\etale(U \cap X_n, \mathcal{F})$ vanishes
+by Theorem \ref{theorem-vanishing-affine-curves}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-dim-1-separably-closed}
+Let $k'/k$ be an extension of separably closed fields.
+Let $X$ be a proper scheme over $k$ of dimension $\leq 1$.
+Let $\mathcal{F}$ be a torsion abelian sheaf on $X$.
+Then the map $H^q_\etale(X, \mathcal{F}) \to
+H^q_\etale(X_{k'}, \mathcal{F}|_{X_{k'}})$ is an isomorphism
+for $q \geq 0$.
+\end{lemma}
+
+\begin{proof}
+We have seen this for algebraically closed fields in
+Theorem \ref{theorem-vanishing-affine-curves}.
+Given $k \subset k'$ as in the statement of the lemma we can
+choose a diagram
+$$
+\xymatrix{
+k' \ar[r] & \overline{k}' \\
+k \ar[u] \ar[r] & \overline{k} \ar[u]
+}
+$$
+where $k \subset \overline{k}$ and $k' \subset \overline{k}'$ are
+the algebraic closures. Since $k$ and $k'$ are separably closed
+the field extensions
+$\overline{k}/k$ and $\overline{k}'/k'$
+are algebraic and purely inseparable. In this case the morphisms
+$X_{\overline{k}} \to X$ and $X_{\overline{k}'} \to X_{k'}$
+are universal homeomorphisms. Thus the cohomology of $\mathcal{F}$
+may be computed on $X_{\overline{k}}$ and the cohomology
+of $\mathcal{F}|_{X_{k'}}$ may be computed on $X_{\overline{k}'}$,
+see Proposition \ref{proposition-topological-invariance}.
+Hence we deduce the general case from the case of algebraically
+closed fields.
+\end{proof}
+
+
+
+
+
+
+\section{Cohomology of torsion modules on curves}
+\label{section-vanishing-torsion-coefficients}
+
+\noindent
+In this section we repeat the arguments of
+Section \ref{section-vanishing-torsion}
+for constructible sheaves of modules over a Noetherian ring
+which are torsion. We start with the most interesting step.
+
+\begin{lemma}
+\label{lemma-constant-statements-coefficients}
+Let $\Lambda$ be a Noetherian ring, let $M$ be a finite $\Lambda$-module which
+is annihilated by an integer $n > 0$, let $k$ be an algebraically closed field,
+and let $X$ be a separated, finite type scheme of dimension $\leq 1$ over $k$.
+Then
+\begin{enumerate}
+\item $H^q_\etale(X, \underline{M})$ is a finite $\Lambda$-module
+if $n$ is prime to $\text{char}(k)$,
+\item $H^q_\etale(X, \underline{M})$ is a finite $\Lambda$-module
+if $X$ is proper.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+If $n = \ell n'$ for some prime number $\ell$, then
+we get a short exact sequence $0 \to M[\ell] \to M \to M' \to 0$
+of finite $\Lambda$-modules and $M'$ is annihilated by $n'$.
+This produces a corresponding short exact sequence of constant
+sheaves, which in turn gives rise to an exact sequence
+of cohomology modules
+$$
+H^q_\etale(X, \underline{M[\ell]}) \to
+H^q_\etale(X, \underline{M}) \to
+H^q_\etale(X, \underline{M'})
+$$
+Thus, if we can show the result in case $M$ is annihilated by
+a prime number, then by induction on $n$ we win.
+
+\medskip\noindent
+Let $\ell$ be a prime number such that $\ell$ annihilates $M$.
+Then we can replace $\Lambda$ by the $\mathbf{F}_\ell$-algebra
+$\Lambda/\ell \Lambda$. Namely, the cohomology of $\underline{M}$
+as a sheaf of $\Lambda$-modules is the same as the cohomology
+of $\underline{M}$ as a sheaf of $\Lambda/\ell \Lambda$-modules,
+for example by
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-cohomology-modules-abelian-agree}.
+
+\medskip\noindent
+Assume $\ell$ is a prime number such that $\ell$ annihilates $M$
+and $\Lambda$.
+Let us reduce to the case where $M$ is a finite free $\Lambda$-module.
+Namely, choose a short exact sequence
+$$
+0 \to N \to \Lambda^{\oplus m} \to M \to 0
+$$
+This determines an exact sequence
+$$
+H^q_\etale(X, \underline{\Lambda^{\oplus m}}) \to
+H^q_\etale(X, \underline{M}) \to
+H^{q + 1}_\etale(X, \underline{N})
+$$
+By descending induction on $q$ we get the result for $M$
+if we know the result for $\Lambda^{\oplus m}$.
+Here we use that our cohomology groups
+vanish in degrees $> 2$ by
+Theorem \ref{theorem-vanishing-affine-curves}.
+
+\medskip\noindent
+Let $\ell$ be a prime number and assume that $\ell$ annihilates
+$\Lambda$. It remains to show that the cohomology groups
+$H^q_\etale(X, \underline{\Lambda})$ are finite $\Lambda$-modules.
+We will use a trick to show this; the ``correct'' argument uses
+a coefficient theorem which we will show later.
+Choose a basis $\Lambda = \bigoplus_{i \in I} \mathbf{F}_\ell e_i$
+such that $e_0 = 1$ for some $0 \in I$.
+The choice of this basis determines an isomorphism
+$$
+\underline{\Lambda} = \bigoplus \underline{\mathbf{F}_\ell} e_i
+$$
+of sheaves on $X_\etale$. Thus we see that
+$$
+H^q_\etale(X, \underline{\Lambda}) =
+H^q_\etale(X, \bigoplus \underline{\mathbf{F}_\ell} e_i) =
+\bigoplus H^q_\etale(X, \underline{\mathbf{F}_\ell})e_i
+$$
+since taking cohomology over $X$ commutes with direct sums
+by Theorem \ref{theorem-colimit}
+(or Lemma \ref{lemma-colimit} or
+Lemma \ref{lemma-direct-sum-bounded-below-cohomology}).
+Since we already know that $H^q_\etale(X, \underline{\mathbf{F}_\ell})$
+is a finite dimensional $\mathbf{F}_\ell$-vector space
+(by Theorem \ref{theorem-vanishing-affine-curves}),
+we see that $H^q_\etale(X, \underline{\Lambda})$
+is free over $\Lambda$ of the same rank. Namely, given
+a basis $\xi_1, \ldots, \xi_m$ of $H^q_\etale(X, \underline{\mathbf{F}_\ell})$
+we see that $\xi_1 e_0, \ldots, \xi_m e_0$ form a $\Lambda$-basis for
+$H^q_\etale(X, \underline{\Lambda})$.
+\end{proof}
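+
+\noindent
+To illustrate the trick in the final paragraph of the proof (the
+following example is not needed in what follows), take
+$\Lambda = \mathbf{F}_\ell[t]/(t^2)$ with basis $e_0 = 1$ and $e_1 = t$.
+Then
+$$
+H^q_\etale(X, \underline{\Lambda}) =
+H^q_\etale(X, \underline{\mathbf{F}_\ell})\, e_0 \oplus
+H^q_\etale(X, \underline{\mathbf{F}_\ell})\, e_1
+$$
+and multiplication by $t$ maps the first summand isomorphically onto
+the second and annihilates the second. Hence if
+$\xi_1, \ldots, \xi_m$ is an $\mathbf{F}_\ell$-basis of
+$H^q_\etale(X, \underline{\mathbf{F}_\ell})$, then
+$\xi_1 e_0, \ldots, \xi_m e_0$ is a $\Lambda$-basis and
+$H^q_\etale(X, \underline{\Lambda}) \cong \Lambda^{\oplus m}$.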
+
+\begin{lemma}
+\label{lemma-finite-pushforward-coefficients}
+Let $\Lambda$ be a Noetherian ring, let $k$ be an algebraically closed field,
+let $f : X \to Y$ be a finite morphism of separated finite type schemes
+over $k$ of dimension $\leq 1$, and let $\mathcal{F}$ be a sheaf
+of $\Lambda$-modules on $X_\etale$. If
+$H^q_\etale(X, \mathcal{F})$ is a finite $\Lambda$-module, then
+so is $H^q_\etale(Y, f_*\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+Namely, we have $H^q_\etale(X, \mathcal{F}) = H^q_\etale(Y, f_*\mathcal{F})$
+by the vanishing of $R^qf_*$ for $q > 0$
+(Proposition \ref{proposition-finite-higher-direct-image-zero}) and
+the Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-apply-Leray}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-restrict-to-open-coefficients}
+Let $\Lambda$ be a Noetherian ring, let $k$ be an algebraically closed field,
+let $X$ be a separated finite type scheme over $k$ of dimension $\leq 1$,
+let $\mathcal{F}$ be a constructible sheaf of $\Lambda$-modules on $X_\etale$,
+and let $j : X' \to X$ be the inclusion of a dense open subscheme. Then
+$H^q_\etale(X, \mathcal{F})$ is a finite $\Lambda$-module if and only if
+$H^q_\etale(X, j_!j^{-1}\mathcal{F})$ is a finite $\Lambda$-module.
+\end{lemma}
+
+\begin{proof}
+Since $X'$ is dense, we see that $Z = X \setminus X'$ has dimension $0$
+and hence is a finite set $Z = \{x_1, \ldots, x_n\}$ of $k$-rational points.
+Consider the short exact sequence
+$$
+0 \to j_!j^{-1}\mathcal{F} \to \mathcal{F} \to i_*i^{-1}\mathcal{F} \to 0
+$$
+of Lemma \ref{lemma-ses-associated-to-open}. Observe that
+$H^q_\etale(X, i_*i^{-1}\mathcal{F}) = H^q_\etale(Z, i^{-1}\mathcal{F})$.
+Namely, $i : Z \to X$ is a closed immersion, hence finite,
+hence we have the vanishing of $R^qi_*$ for $q > 0$ by
+Proposition \ref{proposition-finite-higher-direct-image-zero}, and
+hence the equality follows from the Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-apply-Leray}).
+Since $Z$ is a disjoint union of spectra of algebraically closed
+fields, we conclude that $H^q_\etale(Z, i^{-1}\mathcal{F}) = 0$
+for $q > 0$ and
+$$
+H^0_\etale(Z, i^{-1}\mathcal{F}) =
+\bigoplus\nolimits_{i = 1, \ldots, n} \mathcal{F}_{x_i}
+$$
+which is a finite $\Lambda$-module because each stalk $\mathcal{F}_{x_i}$
+is a finite $\Lambda$-module by the assumption that $\mathcal{F}$ is a
+constructible sheaf of $\Lambda$-modules.
+The long exact cohomology sequence gives an exact sequence
+$$
+0 \to
+H^0_\etale(X, j_!j^{-1}\mathcal{F}) \to
+H^0_\etale(X, \mathcal{F}) \to
+H^0_\etale(Z, i^{-1}\mathcal{F}) \to
+H^1_\etale(X, j_!j^{-1}\mathcal{F}) \to
+H^1_\etale(X, \mathcal{F}) \to 0
+$$
+and isomorphisms $H^q_\etale(X, j_!j^{-1}\mathcal{F}) \to
+H^q_\etale(X, \mathcal{F})$ for $q > 1$. The lemma follows easily from this.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-somewhat-easier-coefficients}
+Let $\Lambda$ be a Noetherian ring, let $M$ be a finite $\Lambda$-module which
+is annihilated by an integer $n > 0$, let $k$ be an algebraically closed field,
+let $X$ be a separated, finite type scheme of dimension $\leq 1$ over $k$, and
+let $j : U \to X$ be an open immersion. Then
+\begin{enumerate}
+\item $H^q_\etale(X, j_!\underline{M})$ is a finite $\Lambda$-module
+if $n$ is prime to $\text{char}(k)$,
+\item $H^q_\etale(X, j_!\underline{M})$ is a finite $\Lambda$-module
+if $X$ is proper.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $\dim(X) \leq 1$ there is an open $V \subset X$ which is disjoint from
+$U$ such that $X' = U \cup V$ is dense open in $X$ (details omitted).
+If $j' : X' \to X$ denotes the inclusion morphism, then we see that
+$j_!\underline{M}$ is a direct summand of $j'_!\underline{M}$.
+Hence it suffices to prove the lemma in case $U$ is open and dense in $X$.
+This case follows from Lemmas \ref{lemma-restrict-to-open-coefficients}
+and \ref{lemma-constant-statements-coefficients}.
+\end{proof}
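+
+\noindent
+To spell out the direct summand claim in the proof above: writing
+$j_V : V \to X$ for the inclusion of $V$, the decomposition
+$X' = U \amalg V$ gives
+$$
+j'_!\underline{M} = j_!\underline{M} \oplus j_{V!}\underline{M}
+$$
+because extension by zero from a disjoint union of opens is the
+direct sum of the extensions by zero from the pieces.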
+
+\begin{lemma}
+\label{lemma-ses-statements-coefficients}
+Let $\Lambda$ be a Noetherian ring, let $k$ be an algebraically closed field,
+let $X$ be a separated finite type scheme over $k$ of dimension $\leq 1$, and
+let $0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{F}_2 \to 0$
+be a short exact sequence of sheaves of $\Lambda$-modules on $X_\etale$. If
+$H^q_\etale(X, \mathcal{F}_i)$, $i = 1, 2$ are finite $\Lambda$-modules
+then $H^q_\etale(X, \mathcal{F})$ is a finite $\Lambda$-module.
+\end{lemma}
+
+\begin{proof}
+Immediate from the long exact sequence of cohomology.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-easier-coefficients}
+Let $\Lambda$ be a Noetherian ring, let $k$ be an algebraically closed field,
+let $X$ be a separated, finite type scheme of dimension $\leq 1$ over $k$,
+let $j : U \to X$ be an open immersion with $U$ connected, let
+$\ell$ be a prime number, let $n > 0$, and let $\mathcal{G}$ be a finite type,
+locally constant sheaf of $\Lambda$-modules on $U_\etale$ annihilated by
+$\ell^n$. Then
+\begin{enumerate}
+\item $H^q_\etale(X, j_!\mathcal{G})$ is a finite $\Lambda$-module
+if $\ell$ is prime to $\text{char}(k)$,
+\item $H^q_\etale(X, j_!\mathcal{G})$ is a finite $\Lambda$-module
+if $X$ is proper.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Let $f : V \to U$ be a finite \'etale morphism of degree prime to $\ell$
+as in Lemma \ref{lemma-pullback-filtered-modules}. The discussion in
+Section \ref{section-trace-method} gives maps
+$$
+\mathcal{G} \to f_*f^{-1}\mathcal{G} \to \mathcal{G}
+$$
+whose composition is multiplication by $\deg(f)$, hence an isomorphism
+as $\deg(f)$ is prime to $\ell$. Thus it suffices to prove the
+finiteness of $H^q_\etale(X, j_!f_*f^{-1}\mathcal{G})$.
+By Zariski's Main theorem
+(More on Morphisms, Lemma
+\ref{more-morphisms-lemma-quasi-finite-separated-pass-through-finite})
+we can choose a diagram
+$$
+\xymatrix{
+V \ar[r]_{j'} \ar[d]_f & Y \ar[d]^{\overline{f}} \\
+U \ar[r]^j & X
+}
+$$
+with $\overline{f} : Y \to X$ finite and $j'$ an open immersion with
+dense image. Since $f$ is finite and $V$ dense in $Y$ we have
+$V = U \times_X Y$. By Lemma \ref{lemma-compatible-shriek-push-finite} we have
+$$
+j_!f_*f^{-1}\mathcal{G} = \overline{f}_*j'_!f^{-1}\mathcal{G}
+$$
+By Lemma \ref{lemma-finite-pushforward-coefficients} it suffices to
+consider $j'_!f^{-1}\mathcal{G}$. The existence of the filtration given by
+Lemma \ref{lemma-pullback-filtered-modules},
+the fact that $j'_!$ is exact, and
+Lemma \ref{lemma-ses-statements-coefficients}
+reduces us to the case
+$\mathcal{F} = j'_!\underline{M}$ for a finite $\Lambda$-module $M$
+which is Lemma \ref{lemma-somewhat-easier-coefficients}.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-vanishing-affine-curves-coefficients}
+Let $\Lambda$ be a Noetherian ring, let $k$ be an algebraically closed field,
+let $X$ be a separated, finite type scheme of dimension $\leq 1$ over $k$,
+and let $\mathcal{F}$ be a constructible sheaf of $\Lambda$-modules
+on $X_\etale$ which is torsion. Then
+\begin{enumerate}
+\item
+\label{item-finite-prime-to-p-coefficients}
+$H^q_\etale(X, \mathcal{F})$ is a finite $\Lambda$-module
+if $\mathcal{F}$ is torsion prime to $\text{char}(k)$,
+\item
+\label{item-finite-proper-coefficients}
+$H^q_\etale(X, \mathcal{F})$ is a finite $\Lambda$-module if $X$ is proper.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Write $\mathcal{F} = \mathcal{F}_1 \oplus \ldots \oplus \mathcal{F}_r$
+where $\mathcal{F}_i$ is annihilated by $\ell_i^{n_i}$
+for some prime $\ell_i$ and integer $n_i > 0$.
+By Lemma \ref{lemma-ses-statements-coefficients} it suffices
+to prove the theorem for $\mathcal{F}_i$. Thus we may and do
+assume that $\ell^n$ kills $\mathcal{F}$ for some prime $\ell$
+and integer $n > 0$.
+
+\medskip\noindent
+Since $\mathcal{F}$ is constructible as a sheaf of $\Lambda$-modules,
+there is a dense open $U \subset X$ such that $\mathcal{F}|_U$ is a
+finite type, locally constant sheaf of $\Lambda$-modules.
+Since $\dim(X) \leq 1$ we may assume, after shrinking $U$, that
+$U = U_1 \amalg \ldots \amalg U_m$ is a disjoint union of irreducible
+schemes (just remove the closed points which lie in the intersections
+of $\geq 2$ components of $U$). By
+Lemma \ref{lemma-restrict-to-open-coefficients}
+we reduce to the case
+$\mathcal{F} = j_!\mathcal{G}$ where $\mathcal{G}$ is a finite type,
+locally constant sheaf of $\Lambda$-modules on $U$ (and annihilated
+by $\ell^n$).
+
+\medskip\noindent
+Since we chose $U = U_1 \amalg \ldots \amalg U_m$ with $U_i$ irreducible
+we have
+$$
+j_!\mathcal{G} =
+j_{1!}(\mathcal{G}|_{U_1}) \oplus \ldots \oplus
+j_{m!}(\mathcal{G}|_{U_m})
+$$
+where $j_i : U_i \to X$ is the inclusion morphism.
+The case of $j_{i!}(\mathcal{G}|_{U_i})$ is handled in
+Lemma \ref{lemma-vanishing-easier-coefficients}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{First cohomology of proper schemes}
+\label{section-finite-etale-over-proper}
+
+\noindent
+In Fundamental Groups, Section \ref{pione-section-finite-etale-over-proper}
+we have seen, in some sense, that taking
+$R^1f_*\underline{G}$ commutes with base change if $f : X \to Y$
+is a proper morphism and $G$ is a finite group (not necessarily
+commutative). In this section
+we deduce a useful consequence of these results.
+
+\begin{lemma}
+\label{lemma-proper-over-henselian-and-h1}
+Let $A$ be a henselian local ring. Let $X$ be a proper scheme over $A$
+with closed fibre $X_0$. Let $M$ be a finite abelian group.
+Then $H^1_\etale(X, \underline{M}) = H^1_\etale(X_0, \underline{M})$.
+\end{lemma}
+
+\begin{proof}
+By Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-torsors-h1}
+an element of $H^1_\etale(X, \underline{M})$ corresponds to a
+$\underline{M}$-torsor $\mathcal{F}$ on $X_\etale$.
+Such a torsor is clearly a finite locally constant sheaf.
+Hence $\mathcal{F}$ is representable by a scheme $V$ finite
+\'etale over $X$, Lemma \ref{lemma-characterize-finite-locally-constant}.
+Conversely, a scheme $V$ finite \'etale over $X$ with an $M$-action
+which turns it into an $M$-torsor over $X$ gives rise to a cohomology
+class. The same translation between cohomology classes over $X_0$ and
+torsors finite \'etale over $X_0$ holds. Thus the lemma
+is a consequence of the equivalence of categories of
+Fundamental Groups, Lemma
+\ref{pione-lemma-finite-etale-on-proper-over-henselian}.
+\end{proof}
+
+\noindent
+The following technical lemma is a key ingredient in the proof of
+the proper base change theorem. The argument works word for word
+for any proper scheme over $A$ whose special fibre has dimension
+$\leq 1$, but in fact the conclusion will be a consequence of the
+proper base change theorem and we only need this particular version
+in its proof.
+
+\begin{lemma}
+\label{lemma-efface-cohomology-on-fibre-by-finite-cover}
+Let $A$ be a henselian local ring. Let $X = \mathbf{P}^1_A$.
+Let $X_0 \subset X$ be the closed fibre. Let $\ell$ be a prime
+number. Let $\mathcal{I}$ be an injective sheaf of
+$\mathbf{Z}/\ell\mathbf{Z}$-modules on $X_\etale$. Then
+$H^q_\etale(X_0, \mathcal{I}|_{X_0}) = 0$ for $q > 0$.
+\end{lemma}
+
+\begin{proof}
+Observe that $X$ is a separated scheme which can be covered by $2$
+affine opens. Hence for $q > 1$ this follows from Gabber's affine
+variant of the proper base change theorem, see
+Lemma \ref{lemma-vanishing-restriction-injective}.
+Thus we may assume $q = 1$. Let
+$\xi \in H^1_\etale(X_0, \mathcal{I}|_{X_0})$.
+Our goal is to show that $\xi$ is zero.
+By Lemmas \ref{lemma-torsion-colimit-constructible} and
+\ref{lemma-colimit} we can find a map $\mathcal{F} \to \mathcal{I}$
+with $\mathcal{F}$ a constructible sheaf of
+$\mathbf{Z}/\ell\mathbf{Z}$-modules
+and $\xi$ coming from an element $\zeta$ of
+$H^1_\etale(X_0, \mathcal{F}|_{X_0})$. Suppose we have an injective map
+$\mathcal{F} \to \mathcal{F}'$ of sheaves of
+$\mathbf{Z}/\ell\mathbf{Z}$-modules on $X_\etale$.
+Since $\mathcal{I}$ is injective we can extend the given map
+$\mathcal{F} \to \mathcal{I}$ to a map $\mathcal{F}' \to \mathcal{I}$.
+In this situation we may replace $\mathcal{F}$ by $\mathcal{F}'$
+and $\zeta$ by the image of $\zeta$ in $H^1_\etale(X_0, \mathcal{F}'|_{X_0})$.
+Also, if $\mathcal{F} = \mathcal{F}_1 \oplus \mathcal{F}_2$ is a direct sum,
+then we may replace $\mathcal{F}$ by $\mathcal{F}_i$
+and $\zeta$ by the image of $\zeta$ in $H^1_\etale(X_0, \mathcal{F}_i|_{X_0})$.
+
+\medskip\noindent
+By Lemma \ref{lemma-constructible-maps-into-constant-general}
+and the remarks above we may assume $\mathcal{F}$
+is of the form $f_*\underline{M}$ where $M$ is a finite
+$\mathbf{Z}/\ell\mathbf{Z}$-module
+and $f : Y \to X$ is a finite morphism of finite presentation
+(such sheaves are still constructible by
+Lemma \ref{lemma-finite-pushforward-constructible}
+but we won't need this).
+Since formation of $f_*$ commutes with any base change
+(Lemma \ref{lemma-finite-pushforward-commutes-with-base-change})
+we see that the restriction of $f_*\underline{M}$ to $X_0$ is
+equal to the pushforward of $\underline{M}$ via the induced morphism
+$Y_0 \to X_0$ of special fibres. By the Leray spectral sequence
+(Proposition \ref{proposition-leray})
+and vanishing of higher direct images
+(Proposition \ref{proposition-finite-higher-direct-image-zero}),
+we find
+$$
+H^1_\etale(X_0, f_*\underline{M}|_{X_0}) = H^1_\etale(Y_0, \underline{M}).
+$$
+Since $Y \to \Spec(A)$ is proper we can use
+Lemma \ref{lemma-proper-over-henselian-and-h1} to see that
+$H^1_\etale(Y_0, \underline{M})$ is equal to
+$H^1_\etale(Y, \underline{M})$. Thus we see that our cohomology
+class $\zeta$ lifts to a cohomology class
+$$
+\tilde\zeta \in H^1_\etale(Y, \underline{M}) = H^1_\etale(X, f_*\underline{M})
+$$
+However, $\tilde \zeta$ maps to zero in
+$H^1_\etale(X, \mathcal{I})$ as $\mathcal{I}$ is injective
+and by commutativity of
+$$
+\xymatrix{
+H^1_\etale(X, f_*\underline{M}) \ar[r] \ar[d] &
+H^1_\etale(X, \mathcal{I}) \ar[d] \\
+H^1_\etale(X_0, (f_*\underline{M})|_{X_0}) \ar[r] &
+H^1_\etale(X_0, \mathcal{I}|_{X_0})
+}
+$$
+we conclude that the image $\xi$ of $\zeta$ is zero as well.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Preliminaries on base change}
+\label{section-base-change-preliminaries}
+
+\noindent
+If you are interested in either the smooth base change theorem
+or the proper base change theorem, you should skip directly to
+the corresponding sections.
+In this section and the next few sections we consider commutative diagrams
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+of schemes; we usually assume this diagram is cartesian,
+i.e., $Y = X \times_S T$. A commutative diagram as above
+gives rise to a commutative diagram
+$$
+\xymatrix{
+X_\etale \ar[d]_{f_{small}} & Y_\etale \ar[d]^{e_{small}} \ar[l]^{h_{small}} \\
+S_\etale & T_\etale \ar[l]_{g_{small}}
+}
+$$
+of small \'etale sites. Let us use the notation
+$$
+f^{-1} = f_{small}^{-1}, \quad
+g_* = g_{small, *}, \quad
+e^{-1} = e_{small}^{-1}, \text{ and}\quad
+h_* = h_{small, *}.
+$$
+By Sites, Section \ref{sites-section-pullback}
+we get a base change or pullback map
+$$
+f^{-1}g_*\mathcal{F}
+\longrightarrow
+h_*e^{-1}\mathcal{F}
+$$
+for a sheaf $\mathcal{F}$ on $T_\etale$. If $\mathcal{F}$ is an abelian
+sheaf on $T_\etale$, then we get a derived base change map
+$$
+f^{-1}Rg_*\mathcal{F}
+\longrightarrow
+Rh_*e^{-1}\mathcal{F}
+$$
+see Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-base-change-map-flat-case}.
+Finally, if $K$ is an arbitrary object of $D(T_\etale)$
+there is a base change map
+$$
+f^{-1}Rg_*K
+\longrightarrow
+Rh_*e^{-1}K
+$$
+see
+Cohomology on Sites, Remark \ref{sites-cohomology-remark-base-change}.
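+
+\medskip\noindent
+For the convenience of the reader we sketch the construction of the
+first of these maps. Since $f \circ h = g \circ e$ we have
+$g_*e_* = f_*h_*$, and the base change map is the composition
+$$
+f^{-1}g_*\mathcal{F}
+\longrightarrow
+f^{-1}g_*e_*e^{-1}\mathcal{F} =
+f^{-1}f_*h_*e^{-1}\mathcal{F}
+\longrightarrow
+h_*e^{-1}\mathcal{F}
+$$
+where the first arrow comes from the unit $\text{id} \to e_*e^{-1}$
+and the second from the counit $f^{-1}f_* \to \text{id}$; see the
+references above for details.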
+
+\begin{lemma}
+\label{lemma-base-change-local}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+Let $\{U_i \to X\}$ be an \'etale covering such that $U_i \to S$
+factors as $U_i \to V_i \to S$ with $V_i \to S$ \'etale
+and consider the cartesian diagrams
+$$
+\xymatrix{
+U_i \ar[d]_{f_i} & U_i \times_X Y \ar[l]^{h_i} \ar[d]^{e_i} \\
+V_i & V_i \times_S T \ar[l]_{g_i}
+}
+$$
+Let $\mathcal{F}$ be a sheaf on $T_\etale$. Let $K$ in $D(T_\etale)$.
+Set $K_i = K|_{V_i \times_S T}$ and
+$\mathcal{F}_i = \mathcal{F}|_{V_i \times_S T}$.
+\begin{enumerate}
+\item If $f_i^{-1}g_{i, *}\mathcal{F}_i = h_{i, *}e_i^{-1}\mathcal{F}_i$
+for all $i$, then $f^{-1}g_*\mathcal{F} = h_*e^{-1}\mathcal{F}$.
+\item If $f_i^{-1}Rg_{i, *}K_i = Rh_{i, *}e_i^{-1}K_i$
+for all $i$, then $f^{-1}Rg_*K = Rh_*e^{-1}K$.
+\item If $\mathcal{F}$ is an abelian sheaf and
+$f_i^{-1}R^qg_{i, *}\mathcal{F}_i = R^qh_{i, *}e_i^{-1}\mathcal{F}_i$
+for all $i$, then
+$f^{-1}R^qg_*\mathcal{F} = R^qh_*e^{-1}\mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). First we observe that
+$$
+(f^{-1}g_*\mathcal{F})|_{U_i} =
+f_i^{-1}(g_*\mathcal{F}|_{V_i}) =
+f_i^{-1}g_{i, *}\mathcal{F}_i
+$$
+The first equality because $U_i \to X \to S$ is equal to
+$U_i \to V_i \to S$ and the second equality because
+$g_*\mathcal{F}|_{V_i} = g_{i, *}\mathcal{F}_i$ by
+Sites, Lemma \ref{sites-lemma-localize-morphism-strong}.
+Similarly we have
+$$
+(h_*e^{-1}\mathcal{F})|_{U_i} =
+h_{i, *}(e^{-1}\mathcal{F}|_{U_i \times_X Y}) =
+h_{i, *}e_i^{-1}\mathcal{F}_i
+$$
+Thus if the base change maps
+$f_i^{-1}g_{i, *}\mathcal{F}_i \to h_{i, *}e_i^{-1}\mathcal{F}_i$
+are isomorphisms for all $i$, then the base change map
+$f^{-1}g_*\mathcal{F} \to h_*e^{-1}\mathcal{F}$
+restricts to an isomorphism over $U_i$ for all $i$
+and we conclude it is an isomorphism as $\{U_i \to X\}$
+is an \'etale covering.
+
+\medskip\noindent
+For the other two statements we replace the appeal to
+Sites, Lemma \ref{sites-lemma-localize-morphism-strong}
+by an appeal to
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-restrict-direct-image-open}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-compose}
+Consider a tower of cartesian diagrams of schemes
+$$
+\xymatrix{
+W \ar[d]_i & Z \ar[l]^j \ar[d]^k \\
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+Let $K$ in $D(T_\etale)$. If
+$$
+f^{-1}Rg_*K \to Rh_*e^{-1}K
+\quad\text{and}\quad
+i^{-1}Rh_*e^{-1}K \to Rj_*k^{-1}e^{-1}K
+$$
+are isomorphisms, then
+$(f \circ i)^{-1}Rg_*K \to Rj_*(e \circ k)^{-1}K$
+is an isomorphism.
+Similarly, if $\mathcal{F}$ is an abelian sheaf on $T_\etale$ and if
+$$
+f^{-1}R^qg_*\mathcal{F} \to R^qh_*e^{-1}\mathcal{F}
+\quad\text{and}\quad
+i^{-1}R^qh_*e^{-1}\mathcal{F} \to R^qj_*k^{-1}e^{-1}\mathcal{F}
+$$
+are isomorphisms, then
+$(f \circ i)^{-1}R^qg_*\mathcal{F} \to R^qj_*(e \circ k)^{-1}\mathcal{F}$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This is formal, provided one checks that the composition of these
+base change maps is the base change maps for the outer rectangle, see
+Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-compose-base-change-horizontal}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-Rf-star-colim}
+Let $I$ be a directed set. Consider an inverse system of
+cartesian diagrams of schemes
+$$
+\xymatrix{
+X_i \ar[d]_{f_i} & Y_i \ar[l]^{h_i} \ar[d]^{e_i} \\
+S_i & T_i \ar[l]_{g_i}
+}
+$$
+with affine transition morphisms and with $g_i$ quasi-compact and
+quasi-separated. Set $X = \lim X_i$,
+$S = \lim S_i$, $T = \lim T_i$ and $Y = \lim Y_i$ to
+obtain the cartesian diagram
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+Let $(\mathcal{F}_i, \varphi_{i'i})$ be a system of sheaves on
+$(T_i)$ as in Definition \ref{definition-inverse-system-sheaves}. Set
+$\mathcal{F} = \colim p_i^{-1}\mathcal{F}_i$ on $T$
+where $p_i : T \to T_i$ is the projection.
+Then we have the following
+\begin{enumerate}
+\item If $f_i^{-1}g_{i, *}\mathcal{F}_i = h_{i, *}e_i^{-1}\mathcal{F}_i$
+for all $i$, then
+$f^{-1}g_*\mathcal{F} = h_*e^{-1}\mathcal{F}$.
+\item If $\mathcal{F}_i$ is an abelian sheaf for all $i$ and
+$f_i^{-1}R^qg_{i, *}\mathcal{F}_i = R^qh_{i, *}e_i^{-1}\mathcal{F}_i$
+for all $i$, then
+$f^{-1}R^qg_*\mathcal{F} = R^qh_*e^{-1}\mathcal{F}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We prove (2) and we omit the proof of (1). We will use without further
+mention that pullback of sheaves commutes with colimits as it is a
+left adjoint. Observe that $h_i$ is quasi-compact and quasi-separated as a
+base change of $g_i$.
+Denoting $q_i : Y \to Y_i$ the projections, observe that
+$e^{-1}\mathcal{F} = \colim e^{-1}p_i^{-1}\mathcal{F}_i =
+\colim q_i^{-1}e_i^{-1}\mathcal{F}_i$.
+By Lemma \ref{lemma-relative-colimit-general}
+this gives
+$$
+R^qh_*e^{-1}\mathcal{F} = \colim r_i^{-1}R^qh_{i, *}e_i^{-1}\mathcal{F}_i
+$$
+where $r_i : X \to X_i$ is the projection.
+Similarly, we have
+$$
+f^{-1}R^qg_*\mathcal{F} =
+f^{-1}\colim s_i^{-1}R^qg_{i, *}\mathcal{F}_i =
+\colim r_i^{-1}f_i^{-1}R^qg_{i, *}\mathcal{F}_i
+$$
+where $s_i : S \to S_i$ is the projection. The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-Rf-star-colim-complexes}
+Let $I$, $X_i$, $Y_i$, $S_i$, $T_i$, $f_i$, $h_i$, $e_i$, $g_i$,
+$X$, $Y$, $S$, $T$, $f$, $h$, $e$, $g$ be as in the statement
+of Lemma \ref{lemma-base-change-Rf-star-colim}.
+Let $0 \in I$ and let $K_0 \in D^+(T_{0, \etale})$.
+For $i \in I$, $i \geq 0$ denote $K_i$ the pullback of
+$K_0$ to $T_i$. Denote $K$ the pullback of $K_0$ to $T$.
+If $f_i^{-1}Rg_{i, *}K_i = Rh_{i, *}e_i^{-1}K_i$
+for all $i \geq 0$, then $f^{-1}Rg_*K = Rh_*e^{-1}K$.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that the base change map
+$f^{-1}Rg_*K \to Rh_*e^{-1}K$ induces an isomorphism
+on cohomology sheaves. In other words, we have to show
+that $f^{-1}R^pg_*K \to R^ph_*e^{-1}K$ is an isomorphism
+for all $p \in \mathbf{Z}$ if we are given that
+$f_i^{-1}R^pg_{i, *}K_i \to R^ph_{i, *}e_i^{-1}K_i$
+is an isomorphism for all $i \geq 0$ and $p \in \mathbf{Z}$.
+At this point we can argue exactly as in the proof of
+Lemma \ref{lemma-base-change-Rf-star-colim}
+replacing the reference to
+Lemma \ref{lemma-relative-colimit-general}
+by a reference to
+Lemma \ref{lemma-relative-colimit-general-complexes}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-f-star-general-stalks}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+where $g : T \to S$ is quasi-compact and quasi-separated.
+Let $\mathcal{F}$ be an
+abelian sheaf on $T_\etale$. Let $q \geq 0$. The following are equivalent
+\begin{enumerate}
+\item For every geometric point $\overline{x}$ of $X$ with image
+$\overline{s} = f(\overline{x})$ we have
+$$
+H^q(\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \times_S T, \mathcal{F})
+=
+H^q(\Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \times_S T, \mathcal{F})
+$$
+\item $f^{-1}R^qg_*\mathcal{F} \to R^qh_*e^{-1}\mathcal{F}$
+is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Since $Y = X \times_S T$ we have
+$\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \times_X Y =
+\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \times_S T$. Thus
+the map in (1) is the map of stalks at $\overline{x}$ for the map
+in (2) by Theorem \ref{theorem-higher-direct-images} (and
+Lemma \ref{lemma-stalk-pullback}).
+Thus the result follows from Theorem \ref{theorem-exactness-stalks}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-check-stalks-better}
+Let $f : X \to S$ be a morphism of schemes.
+Let $\overline{x}$ be a geometric point of $X$ with image $\overline{s}$ in $S$.
+Let $\Spec(K) \to \Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+be a morphism with $K$ a separably closed field. Let $\mathcal{F}$ be an
+abelian sheaf on $\Spec(K)_\etale$. Let $q \geq 0$. The following are
+equivalent
+\begin{enumerate}
+\item
+$H^q(\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \times_S \Spec(K), \mathcal{F}) =
+H^q(\Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \times_S \Spec(K), \mathcal{F})$
+\item
+$H^q(\Spec(\mathcal{O}^{sh}_{X, \overline{x}})
+\times_{\Spec(\mathcal{O}^{sh}_{S, \overline{s}})} \Spec(K), \mathcal{F}) =
+H^q(\Spec(K), \mathcal{F})$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Observe that $\Spec(K) \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+is the spectrum of a filtered colimit of \'etale algebras over $K$.
+Since $K$ is separably closed, each \'etale $K$-algebra
+is a finite product of copies of $K$. Thus we can write
+$$
+\Spec(K) \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}}) =
+\lim_{i \in I} \coprod\nolimits_{a \in A_i} \Spec(K)
+$$
+as a cofiltered limit where each term is a disjoint union
+of copies of $\Spec(K)$ over a finite set $A_i$.
+Note that $A_i$ is nonempty as we are given
+$\Spec(K) \to \Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+It follows that
+\begin{align*}
+\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \times_S \Spec(K)
+& =
+\Spec(\mathcal{O}^{sh}_{X, \overline{x}})
+\times_{\Spec(\mathcal{O}^{sh}_{S, \overline{s}})}
+\left(
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}})
+\times_S \Spec(K)\right) \\
+& =
+\lim_{i \in I} \coprod\nolimits_{a \in A_i}
+\Spec(\mathcal{O}^{sh}_{X, \overline{x}})
+\times_{\Spec(\mathcal{O}^{sh}_{S, \overline{s}})} \Spec(K)
+\end{align*}
+Since taking cohomology in our setting commutes with limits
+of schemes (Theorem \ref{theorem-colimit}) we conclude.
+\end{proof}
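+
+\noindent
+To spell out the concluding step of the proof above: cohomology takes the
+finite disjoint unions appearing in the last displayed formula to finite
+products, so combined with Theorem \ref{theorem-colimit} we obtain
+$$
+H^q(\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \times_S \Spec(K), \mathcal{F})
+=
+\colim_{i \in I} \prod\nolimits_{a \in A_i}
+H^q(\Spec(\mathcal{O}^{sh}_{X, \overline{x}})
+\times_{\Spec(\mathcal{O}^{sh}_{S, \overline{s}})} \Spec(K), \mathcal{F})
+$$
+and similarly
+$H^q(\Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \times_S \Spec(K), \mathcal{F})$
+is the colimit of the products
+$\prod_{a \in A_i} H^q(\Spec(K), \mathcal{F})$.
+Comparing the factors in these two colimits gives the equivalence
+of (1) and (2).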
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Base change for pushforward}
+\label{section-base-change-f-star}
+
+\noindent
+This section is preliminary and should be skipped on a first reading.
+In this section we discuss for what morphisms $f : X \to S$ we have
+$f^{-1}g_* = h_*e^{-1}$ on all sheaves (of sets) for every cartesian
+diagram
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+with $g$ quasi-compact and quasi-separated.
+
+\begin{lemma}
+\label{lemma-base-change-f-star-general}
+Consider the cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+Assume that $f$ is flat and every object $U$ of $X_\etale$ has
+a covering $\{U_i \to U\}$ such that $U_i \to S$
+factors as $U_i \to V_i \to S$ with $V_i \to S$
+\'etale and $U_i \to V_i$ quasi-compact with
+geometrically connected fibres.
+Then for any sheaf $\mathcal{F}$ of sets on $T_\etale$ we have
+$f^{-1}g_*\mathcal{F} = h_*e^{-1}\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Let $U \to X$ be an \'etale morphism such that $U \to S$ factors as
+$U \to V \to S$ with $V \to S$ \'etale and $U \to V$ quasi-compact
+with geometrically connected fibres. Observe that $U \to V$ is flat
+(More on Flatness, Lemma \ref{flat-lemma-etale-flat-up-down}).
+We claim that
+\begin{align*}
+f^{-1}g_*\mathcal{F}(U)
+& = g_*\mathcal{F}(V) \\
+& = \mathcal{F}(V \times_S T) \\
+& = e^{-1}\mathcal{F}(U \times_X Y) \\
+& = h_*e^{-1}\mathcal{F}(U)
+\end{align*}
+Namely, thinking of $U$ as an object of $X_\etale$ and
+$V$ as an object of $S_\etale$ we see that the first equality
+follows from Lemma \ref{lemma-sections-upstairs}\footnote{Strictly
+speaking, we are also using that the restriction of $f^{-1}g_*\mathcal{F}$
+to $U_\etale$ is the pullback via $U \to V$ of the restriction of
+$g_*\mathcal{F}$ to $V_\etale$. See
+Sites, Lemma \ref{sites-lemma-localize-morphism-strong}.}.
+Thinking of $V \times_S T$ as an object of $T_\etale$
+the second equality follows from the definition of $g_*$.
+Observe that $U \times_X Y = U \times_S T$ (because $Y = X \times_S T$)
+and hence $U \times_X Y \to V \times_S T$
+has geometrically connected fibres as a base change of $U \to V$.
+Thinking of $U \times_X Y$ as an object of $Y_\etale$, we see that
+the third equality follows from Lemma \ref{lemma-sections-upstairs}
+as before. Finally, the fourth equality follows from the definition
+of $h_*$.
+
+\medskip\noindent
+Since by assumption every object of $X_\etale$ has an \'etale
+covering to which the argument of the previous paragraph applies
+we see that the lemma is true.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fppf-reduced-fibres-base-change-f-star}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+where $f$ is flat and locally of finite presentation
+with geometrically reduced fibres.
+Then $f^{-1}g_*\mathcal{F} = h_*e^{-1}\mathcal{F}$
+for any sheaf $\mathcal{F}$ on $T_\etale$.
+\end{lemma}
+
+\begin{proof}
+Combine Lemma \ref{lemma-base-change-f-star-general} with
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-cover-by-geometrically-connected}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-f-star-field}
+Consider the cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+Assume that $S$ is the spectrum of a separably closed field.
+Then $f^{-1}g_*\mathcal{F} = h_*e^{-1}\mathcal{F}$
+for any sheaf $\mathcal{F}$ on $T_\etale$.
+\end{lemma}
+
+\begin{proof}
+We may work locally on $X$. Hence we may assume $X$ is affine.
+Then we can write $X$ as a cofiltered limit of affine schemes of
+finite type over $S$. By Lemma \ref{lemma-base-change-Rf-star-colim}
+we may assume that $X$ is of finite type over $S$.
+Then Lemma \ref{lemma-base-change-f-star-general}
+applies because any scheme of finite
+type over a separably closed field is a finite disjoint
+union of connected and geometrically connected schemes
+(see Varieties, Lemma
+\ref{varieties-lemma-separably-closed-field-connected-components}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-f-star-valuation}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+Assume that
+\begin{enumerate}
+\item $f$ is flat and open,
+\item the residue fields of $S$ are separably algebraically closed,
+\item given an \'etale morphism $U \to X$ with $U$ affine
+we can write $U$ as a finite disjoint union of open subschemes
+of $X$ (for example if $X$ is a normal integral scheme
+with separably closed function field),
+\item any nonempty open of a fibre $X_s$ of $f$ is connected
+(for example if $X_s$ is irreducible or empty).
+\end{enumerate}
+Then for any sheaf $\mathcal{F}$ of sets on $T_\etale$ we have
+$f^{-1}g_*\mathcal{F} = h_*e^{-1}\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: the assumptions almost trivially imply
+the condition of Lemma \ref{lemma-base-change-f-star-general}.
+The ``for example'' in part (3) follows from
+Lemma \ref{lemma-normal-scheme-with-alg-closed-function-field}.
+\end{proof}
+
+\noindent
+The following lemma doesn't really belong here but there does not
+seem to be a good place for it anywhere.
+
+\begin{lemma}
+\label{lemma-fppf-reduced-fibres-pullback-products}
+Let $f : X \to S$ be a morphism of schemes which is flat and
+locally of finite presentation with geometrically reduced fibres.
+Then $f^{-1} : \Sh(S_\etale) \to \Sh(X_\etale)$ commutes
+with products.
+\end{lemma}
+
+\begin{proof}
+Let $I$ be a set and let $\mathcal{G}_i$ be a sheaf on $S_\etale$
+for $i \in I$.
+Let $U \to X$ be an \'etale morphism such that $U \to S$ factors as
+$U \to V \to S$ with $V \to S$ \'etale and $U \to V$ flat of finite
+presentation with geometrically connected fibres. Then we have
+\begin{align*}
+f^{-1}(\prod \mathcal{G}_i)(U)
+& =
+(\prod \mathcal{G}_i)(V) \\
+& =
+\prod \mathcal{G}_i(V) \\
+& =
+\prod f^{-1}\mathcal{G}_i(U) \\
+& =
+(\prod f^{-1}\mathcal{G}_i)(U)
+\end{align*}
+where we have used Lemma \ref{lemma-sections-upstairs}
+in the first and third equality
+(we are also using that the restriction of $f^{-1}\mathcal{G}$
+to $U_\etale$ is the pullback via $U \to V$ of the restriction of
+$\mathcal{G}$ to $V_\etale$, see
+Sites, Lemma \ref{sites-lemma-localize-morphism-strong}).
+By More on Morphisms, Lemma
+\ref{more-morphisms-lemma-cover-by-geometrically-connected}
+every object $U$ of $X_\etale$ has an \'etale covering
+$\{U_i \to U\}$ such that the discussion in the previous
+paragraph applies to $U_i$. The lemma follows.
+\end{proof}
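+
+\noindent
+Note that $f^{-1}$ commutes with all colimits for any morphism $f$, being a
+left adjoint. The content of the lemma above is that under the stated
+hypotheses the canonical map
+$$
+f^{-1}\left(\prod\nolimits_{i \in I} \mathcal{G}_i\right)
+\longrightarrow
+\prod\nolimits_{i \in I} f^{-1}\mathcal{G}_i
+$$
+is an isomorphism; for a general morphism of schemes this map need not be
+an isomorphism when $I$ is infinite.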
+
+\begin{lemma}
+\label{lemma-base-change-f-star}
+Let $f : X \to S$ be a flat morphism of schemes such
+that for every geometric point $\overline{x}$ of $X$ the map
+$$
+\mathcal{O}_{S, f(\overline{x})}^{sh}
+\longrightarrow
+\mathcal{O}_{X, \overline{x}}^{sh}
+$$
+has geometrically connected fibres. Then for every
+cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+with $g$ quasi-compact and quasi-separated we have
+$f^{-1}g_*\mathcal{F} = h_*e^{-1}\mathcal{F}$
+for any sheaf $\mathcal{F}$ of sets on $T_\etale$.
+\end{lemma}
+
+\begin{proof}
+It suffices to check equality on stalks, see
+Theorem \ref{theorem-exactness-stalks}.
+By Theorem \ref{theorem-higher-direct-images} we have
+$$
+(h_*e^{-1}\mathcal{F})_{\overline{x}} =
+\Gamma(\Spec(\mathcal{O}_{X, \overline{x}}^{sh}) \times_X Y, e^{-1}\mathcal{F})
+$$
+and we have similarly
+$$
+(f^{-1}g_*\mathcal{F})_{\overline{x}} =
+(g_*\mathcal{F})_{f(\overline{x})} =
+\Gamma(\Spec(\mathcal{O}_{S, f(\overline{x})}^{sh}) \times_S T, \mathcal{F})
+$$
+These sets are equal by an application of Lemma \ref{lemma-sections-upstairs}
+to the morphism
+$$
+\Spec(\mathcal{O}_{X, \overline{x}}^{sh}) \times_X Y
+\longrightarrow
+\Spec(\mathcal{O}_{S, f(\overline{x})}^{sh}) \times_S T
+$$
+which is a base change of
+$\Spec(\mathcal{O}_{X, \overline{x}}^{sh}) \to
+\Spec(\mathcal{O}_{S, f(\overline{x})}^{sh})$
+because $Y = X \times_S T$.
+\end{proof}
+
+
+
+
+
+
+
+\section{Base change for higher direct images}
+\label{section-base-change}
+
+\noindent
+This section is the analogue of Section \ref{section-base-change-f-star}
+for higher direct images.
+This section is preliminary and should be skipped on a first reading.
+
+\begin{remark}
+\label{remark-base-change-holds}
+Let $f : X \to S$ be a morphism of schemes. Let $n$ be an integer.
+We will say $BC(f, n, q_0)$ is true if for every commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f & X' \ar[l] \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e \\
+S & S' \ar[l] & T \ar[l]_g
+}
+$$
+with $X' = X \times_S S'$ and $Y = X' \times_{S'} T$ and
+$g$ quasi-compact and quasi-separated, and every abelian sheaf
+$\mathcal{F}$ on $T_\etale$ annihilated by $n$ the base change map
+$$
+(f')^{-1}R^qg_*\mathcal{F}
+\longrightarrow
+R^qh_*e^{-1}\mathcal{F}
+$$
+is an isomorphism for $q \leq q_0$.
+\end{remark}
+
+\begin{lemma}
+\label{lemma-base-change-q-injective}
+With $f : X \to S$ and $n$ as in Remark \ref{remark-base-change-holds}
+assume for some $q \geq 1$ we have $BC(f, n, q - 1)$. Then
+for every commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f & X' \ar[l] \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e \\
+S & S' \ar[l] & T \ar[l]_g
+}
+$$
+with $X' = X \times_S S'$ and $Y = X' \times_{S'} T$ and
+$g$ quasi-compact and quasi-separated, and every abelian sheaf
+$\mathcal{F}$ on $T_\etale$ annihilated by $n$
+\begin{enumerate}
+\item the base change map
+$(f')^{-1}R^qg_*\mathcal{F}\to R^qh_*e^{-1}\mathcal{F}$
+is injective,
+\item if $\mathcal{F} \subset \mathcal{G}$ where $\mathcal{G}$
+on $T_\etale$ is annihilated by $n$, then
+$$
+\Coker\left(
+(f')^{-1}R^qg_*\mathcal{F}\to R^qh_*e^{-1}\mathcal{F}
+\right)
+\subset
+\Coker\left(
+(f')^{-1}R^qg_*\mathcal{G}\to R^qh_*e^{-1}\mathcal{G}
+\right)
+$$
+\item if in (2) the sheaf $\mathcal{G}$ is an injective sheaf
+of $\mathbf{Z}/n\mathbf{Z}$-modules, then
+$$
+\Coker\left((f')^{-1}R^qg_*\mathcal{F}\to R^qh_*e^{-1}\mathcal{F} \right)
+\subset R^qh_*e^{-1}\mathcal{G}
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Choose a short exact sequence
+$0 \to \mathcal{F} \to \mathcal{I} \to \mathcal{Q} \to 0$
+where $\mathcal{I}$ is an injective sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules.
+Consider the induced diagram
+$$
+\xymatrix{
+(f')^{-1}R^{q - 1}g_*\mathcal{I} \ar[d]_{\cong} \ar[r] &
+(f')^{-1}R^{q - 1}g_*\mathcal{Q} \ar[d]_{\cong} \ar[r] &
+(f')^{-1}R^qg_*\mathcal{F} \ar[d] \ar[r] &
+0 \ar[d] \\
+R^{q - 1}h_*e^{-1}\mathcal{I} \ar[r] &
+R^{q - 1}h_*e^{-1}\mathcal{Q} \ar[r] &
+R^qh_*e^{-1}\mathcal{F} \ar[r] &
+R^qh_*e^{-1}\mathcal{I}
+}
+$$
+with exact rows. We have the zero in the right upper corner
+as $\mathcal{I}$ is injective. The left two vertical arrows are
+isomorphisms by $BC(f, n, q - 1)$. We conclude that part (1) holds.
+The above also shows that
+$$
+\Coker\left(
+(f')^{-1}R^qg_*\mathcal{F}\to R^qh_*e^{-1}\mathcal{F}
+\right)
+\subset
+R^qh_*e^{-1}\mathcal{I}
+$$
+hence part (3) holds. To prove (2) choose
+$\mathcal{F} \subset \mathcal{G} \subset \mathcal{I}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-q-integral-top}
+With $f : X \to S$ and $n$ as in Remark \ref{remark-base-change-holds}
+assume for some $q \geq 1$ we have $BC(f, n, q - 1)$. Consider
+commutative diagrams
+$$
+\vcenter{
+\xymatrix{
+X \ar[d]_f &
+X' \ar[d]_{f'} \ar[l] &
+Y \ar[l]^h \ar[d]^e &
+Y' \ar[l]^{\pi'} \ar[d]^{e'} \\
+S &
+S' \ar[l] &
+T \ar[l]_g &
+T' \ar[l]_\pi
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+X' \ar[d]_{f'} & & Y' \ar[ll]^{h' = h \circ \pi'} \ar[d]^{e'} \\
+S' & & T' \ar[ll]_{g' = g \circ \pi}
+}
+}
+$$
+where all squares are cartesian, $g$ quasi-compact and quasi-separated, and
+$\pi$ is integral surjective. Let $\mathcal{F}$ be an abelian sheaf
+on $T_\etale$ annihilated by $n$ and set $\mathcal{F}' = \pi^{-1}\mathcal{F}$.
+If the base change map
+$$
+(f')^{-1}R^qg'_*\mathcal{F}' \longrightarrow R^qh'_*(e')^{-1}\mathcal{F}'
+$$
+is an isomorphism, then the base change map
+$(f')^{-1}R^qg_*\mathcal{F} \to R^qh_*e^{-1}\mathcal{F}$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Observe that $\mathcal{F} \to \pi_*\pi^{-1}\mathcal{F} = \pi_*\mathcal{F}'$
+is injective
+as $\pi$ is surjective (check on stalks). Thus by
+Lemma \ref{lemma-base-change-q-injective}
+we see that it suffices to show that the base change map
+$$
+(f')^{-1}R^qg_*\pi_*\mathcal{F}'
+\longrightarrow
+R^qh_*e^{-1}\pi_*\mathcal{F}'
+$$
+is an isomorphism. This follows from the assumption because
+we have $R^qg_*\pi_*\mathcal{F}' = R^qg'_*\mathcal{F}'$,
+we have $e^{-1}\pi_*\mathcal{F}' =\pi'_*(e')^{-1}\mathcal{F}'$, and
+we have $R^qh_*\pi'_*(e')^{-1}\mathcal{F}' = R^qh'_*(e')^{-1}\mathcal{F}'$.
+These equalities follow from Lemmas
+\ref{lemma-integral-pushforward-commutes-with-base-change} and
+\ref{lemma-what-integral} and the relative Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-relative-Leray}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-q-integral-bottom}
+With $f : X \to S$ and $n$ as in Remark \ref{remark-base-change-holds}
+assume for some $q \geq 1$ we have $BC(f, n, q - 1)$. Consider
+commutative diagrams
+$$
+\vcenter{
+\xymatrix{
+X \ar[d]_f &
+X' \ar[d]_{f'} \ar[l] &
+X'' \ar[l]^{\pi'} \ar[d]_{f''} &
+Y \ar[l]^{h'} \ar[d]^e \\
+S &
+S' \ar[l] &
+S'' \ar[l]_\pi &
+T \ar[l]_{g'}
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+X' \ar[d]_{f'} & & Y \ar[ll]^{h = h' \circ \pi'} \ar[d]^e \\
+S' & & T \ar[ll]_{g = g' \circ \pi}
+}
+}
+$$
+where all squares are cartesian, $g'$ quasi-compact and quasi-separated, and
+$\pi$ is integral. Let $\mathcal{F}$ be an abelian sheaf
+on $T_\etale$ annihilated by $n$. If the base change map
+$$
+(f')^{-1}R^qg_*\mathcal{F} \longrightarrow R^qh_*e^{-1}\mathcal{F}
+$$
+is an isomorphism, then the base change map
+$(f'')^{-1}R^qg'_*\mathcal{F} \to R^qh'_*e^{-1}\mathcal{F}$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Since $\pi$ and $\pi'$ are integral we have $R\pi_* = \pi_*$ and
+$R\pi'_* = \pi'_*$, see Lemma \ref{lemma-what-integral}.
+We also have $(f')^{-1}\pi_* = \pi'_*(f'')^{-1}$. Thus we see that
+$\pi'_*(f'')^{-1}R^qg'_*\mathcal{F} = (f')^{-1}R^qg_*\mathcal{F}$
+and
+$\pi'_*R^qh'_*e^{-1}\mathcal{F} = R^qh_*e^{-1}\mathcal{F}$.
+Thus the assumption means that our map becomes an
+isomorphism after applying the functor $\pi'_*$.
+Hence we see that it is an isomorphism by Lemma \ref{lemma-what-integral}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-formal-argument}
+Let $T$ be a quasi-compact and quasi-separated scheme.
+Let $P$ be a property for quasi-compact and quasi-separated
+schemes over $T$. Assume
+\begin{enumerate}
+\item If $T'' \to T'$ is a thickening of quasi-compact and
+quasi-separated schemes over $T$, then $P(T'')$ if and only if $P(T')$.
+\item If $T' = \lim T_i$ is a limit of an inverse system of
+quasi-compact and quasi-separated schemes over $T$ with affine
+transition morphisms and $P(T_i)$ holds for all $i$, then
+$P(T')$ holds.
+\item If $Z \subset T'$ is a closed subscheme with
+quasi-compact complement $V \subset T'$ and $P(T')$ holds,
+then either $P(V)$ or $P(Z)$ holds.
+\end{enumerate}
+Then $P(T)$ implies $P(\Spec(K))$ for some morphism $\Spec(K) \to T$
+where $K$ is a field.
+\end{lemma}
+
+\begin{proof}
+Consider the set $\mathfrak T$ of closed subschemes $T' \subset T$
+such that $P(T')$. By assumption (2) this set has a minimal element,
+say $T'$. By assumption (1) we see that $T'$ is reduced.
+Let $\eta \in T'$ be the generic point of an irreducible
+component of $T'$. Then $\eta = \Spec(K)$ for some field
+$K$ and $\eta = \lim V$ where the limit is over the affine
+open subschemes $V \subset T'$ containing $\eta$.
+By assumption (3) and the minimality of $T'$ we see
+that $P(V)$ holds for all these $V$. Hence $P(\eta)$
+by (2) and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-does-not-hold-pre}
+With $f : X \to S$ and $n$ as in Remark \ref{remark-base-change-holds}
+assume for some $q \geq 1$ we have that $BC(f, n, q - 1)$ is true, but
+$BC(f, n, q)$ is not. Then there exists a commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f & X' \ar[d]_{f'} \ar[l] & Y \ar[l]^h \ar[d]^e \\
+S & S' \ar[l] & \Spec(K) \ar[l]_g
+}
+$$
+where $X' = X \times_S S'$, $Y = X' \times_{S'} \Spec(K)$,
+$K$ is a field, and $\mathcal{F}$ is an abelian sheaf
+on $\Spec(K)$ annihilated by $n$ such that
+$(f')^{-1}R^qg_*\mathcal{F} \to R^qh_*e^{-1}\mathcal{F}$
+is not an isomorphism.
+\end{lemma}
+
+\begin{proof}
+Choose a commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f & X' \ar[l] \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e \\
+S & S' \ar[l] & T \ar[l]_g
+}
+$$
+with $X' = X \times_S S'$ and $Y = X' \times_{S'} T$ and
+$g$ quasi-compact and quasi-separated, and an abelian sheaf
+$\mathcal{F}$ on $T_\etale$ annihilated by $n$ such that
+the base change map
+$(f')^{-1}R^qg_*\mathcal{F} \to R^qh_*e^{-1}\mathcal{F}$
+is not an isomorphism. Of course we may and do replace $S'$
+by an affine open of $S'$; this implies that $T$ is quasi-compact
+and quasi-separated. By Lemma \ref{lemma-base-change-q-injective} we see
+$(f')^{-1}R^qg_*\mathcal{F} \to R^qh_*e^{-1}\mathcal{F}$
+is injective. Pick a geometric point $\overline{x}$ of $X'$
+and an element $\xi$ of $(R^qh_*e^{-1}\mathcal{F})_{\overline{x}}$
+which is not in the image of the map
+$((f')^{-1}R^qg_*\mathcal{F})_{\overline{x}} \to
+(R^qh_*e^{-1}\mathcal{F})_{\overline{x}}$.
+
+\medskip\noindent
+Consider a morphism $\pi : T' \to T$ with $T'$ quasi-compact
+and quasi-separated and denote $\mathcal{F}' = \pi^{-1}\mathcal{F}$.
+Denote $\pi' : Y' = Y \times_T T' \to Y$ the base change of $\pi$
+and $e' : Y' \to T'$ the base change of $e$. Picture
+$$
+\vcenter{
+\xymatrix{
+X' \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e & Y' \ar[l]^{\pi'} \ar[d]^{e'} \\
+S' & T \ar[l]_g & T' \ar[l]_\pi
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+X' \ar[d]_{f'} & & Y' \ar[ll]^{h' = h \circ \pi'} \ar[d]^{e'} \\
+S' & & T' \ar[ll]_{g' = g \circ \pi}
+}
+}
+$$
+Using pullback maps we obtain a canonical commutative diagram
+$$
+\xymatrix{
+(f')^{-1}R^qg_*\mathcal{F} \ar[r] \ar[d] &
+(f')^{-1}R^qg'_*\mathcal{F}' \ar[d] \\
+R^qh_*e^{-1}\mathcal{F} \ar[r] &
+R^qh'_*(e')^{-1}\mathcal{F}'
+}
+$$
+of abelian sheaves on $X'$. Let $P(T')$ be the property
+\begin{itemize}
+\item The image $\xi'$ of $\xi$ in
+$(R^qh'_*(e')^{-1}\mathcal{F}')_{\overline{x}}$ is
+not in the image of the map
+$((f')^{-1}R^qg'_*\mathcal{F}')_{\overline{x}} \to
+(R^qh'_*(e')^{-1}\mathcal{F}')_{\overline{x}}$.
+\end{itemize}
+We claim that hypotheses (1), (2), and (3) of
+Lemma \ref{lemma-formal-argument} hold for $P$
+which proves our lemma.
+
+\medskip\noindent
+Condition (1) of Lemma \ref{lemma-formal-argument}
+holds for $P$ because the \'etale topology of a scheme
+and a thickening of the scheme is the same. See
+Proposition \ref{proposition-topological-invariance}.
+
+\medskip\noindent
+Suppose that $I$ is a directed set and that $T_i$
+is an inverse system over $I$ of quasi-compact and quasi-separated
+schemes over $T$ with affine transition morphisms.
+Set $T' = \lim T_i$. Denote $\mathcal{F}'$ and $\mathcal{F}_i$
+the pullback of $\mathcal{F}$ to $T'$, resp.\ $T_i$. Consider
+the diagrams
+$$
+\vcenter{
+\xymatrix{
+X' \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e & Y_i \ar[l]^{\pi_i'} \ar[d]^{e_i} \\
+S' & T \ar[l]_g & T_i \ar[l]_{\pi_i}
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+X' \ar[d]_{f'} & & Y_i \ar[ll]^{h_i = h \circ \pi_i'} \ar[d]^{e_i} \\
+S' & & T_i \ar[ll]_{g_i = g \circ \pi_i}
+}
+}
+$$
+as in the previous paragraph. It is clear that $\mathcal{F}'$ on
+$T'$ is the colimit of the pullbacks of $\mathcal{F}_i$ to $T'$
+and that $(e')^{-1}\mathcal{F}'$ is the colimit of the pullbacks
+of $e_i^{-1}\mathcal{F}_i$ to $Y'$.
+By Lemma \ref{lemma-relative-colimit-general}
+we have
+$$
+R^qh'_*(e')^{-1}\mathcal{F}' = \colim R^qh_{i, *}e_i^{-1}\mathcal{F}_i
+\quad\text{and}\quad
+(f')^{-1}R^qg'_*\mathcal{F}' = \colim (f')^{-1}R^qg_{i, *}\mathcal{F}_i
+$$
+It follows that if $P(T_i)$ is true for all $i$, then
+$P(T')$ holds. Thus condition (2) of Lemma \ref{lemma-formal-argument}
+holds for $P$.
+
+\medskip\noindent
+The most interesting is condition (3) of Lemma \ref{lemma-formal-argument}.
+Assume $T'$ is a quasi-compact and quasi-separated scheme over $T$
+such that $P(T')$ is true.
+Let $Z \subset T'$ be a closed subscheme with complement $V \subset T'$
+quasi-compact. Consider the diagram
+$$
+\xymatrix{
+Y' \times_{T'} Z \ar[d]_{e_Z} \ar[r]_{i'} &
+Y' \ar[d]_{e'} &
+Y' \times_{T'} V \ar[l]^{j'} \ar[d]^{e_V} \\
+Z \ar[r]^i &
+T' &
+V \ar[l]_j
+}
+$$
+Choose an injective map $j^{-1}\mathcal{F}' \to \mathcal{J}$
+where $\mathcal{J}$ is an injective sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules
+on $V$. Looking at stalks we see that the map
+$$
+\mathcal{F}' \to \mathcal{G} = j_*\mathcal{J} \oplus i_*i^{-1}\mathcal{F}'
+$$
+is injective. Thus $\xi'$ maps to a nonzero element of
+\begin{align*}
+& \Coker\left(
+((f')^{-1}R^qg'_*\mathcal{G})_{\overline{x}}
+\to
+(R^qh'_*(e')^{-1}\mathcal{G})_{\overline{x}}
+\right) = \\
+&
+\Coker\left(
+((f')^{-1}R^qg'_*j_*\mathcal{J})_{\overline{x}}
+\to
+(R^qh'_*(e')^{-1}j_*\mathcal{J})_{\overline{x}}
+\right) \oplus \\
+& \Coker\left(
+((f')^{-1}R^qg'_*i_*i^{-1}\mathcal{F}')_{\overline{x}}
+\to
+(R^qh'_*(e')^{-1}i_*i^{-1}\mathcal{F}')_{\overline{x}}
+\right)
+\end{align*}
+by part (2) of Lemma \ref{lemma-base-change-q-injective}.
+If $\xi'$ does not map to zero in the second summand, then
+we use
+$$
+(f')^{-1}R^qg'_*i_*i^{-1}\mathcal{F}' =
+(f')^{-1}R^q(g' \circ i)_*i^{-1}\mathcal{F}'
+$$
+(because $Ri_* = i_*$ by
+Proposition \ref{proposition-finite-higher-direct-image-zero}) and
+$$
+R^qh'_*(e')^{-1}i_*i^{-1}\mathcal{F}' =
+R^qh'_*i'_*e_Z^{-1}i^{-1}\mathcal{F}' =
+R^q(h' \circ i')_*e_Z^{-1}i^{-1}\mathcal{F}'
+$$
+(first equality by
+Lemma \ref{lemma-finite-pushforward-commutes-with-base-change}
+and the second because
+$Ri'_* = i'_*$ by
+Proposition \ref{proposition-finite-higher-direct-image-zero})
+to see that we have $P(Z)$.
+Finally, suppose $\xi'$ does not map to zero in the first summand.
+We have
+$$
+(e')^{-1}j_*\mathcal{J} = j'_*e_V^{-1}\mathcal{J}
+\quad\text{and}\quad
+R^aj'_*e_V^{-1}\mathcal{J} = 0, \quad a = 1, \ldots, q - 1
+$$
+by $BC(f, n, q - 1)$ applied to the diagram
+$$
+\xymatrix{
+X \ar[d]_f & Y' \ar[l] \ar[d]_{e'} &
+Y' \times_{T'} V \ar[l]^{j'} \ar[d]^{e_V} \\
+S & T' \ar[l] & V \ar[l]_j
+}
+$$
+and the fact that $\mathcal{J}$ is injective.
+By the relative Leray spectral sequence for $h' \circ j'$
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-relative-Leray})
+we deduce that
+$$
+R^qh'_*(e')^{-1}j_*\mathcal{J} =
+R^qh'_*j'_*e_V^{-1}\mathcal{J}
+\longrightarrow
+R^q(h' \circ j')_* e_V^{-1}\mathcal{J}
+$$
+is injective. Thus $\xi'$ maps to a nonzero element of
+$(R^q(h' \circ j')_* e_V^{-1}\mathcal{J})_{\overline{x}}$.
+Applying part (3) of Lemma \ref{lemma-base-change-q-injective}
+to the injection $j^{-1}\mathcal{F}' \to \mathcal{J}$
+we conclude that $P(V)$ holds.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-does-not-hold}
+With $f : X \to S$ and $n$ as in Remark \ref{remark-base-change-holds}
+assume for some $q \geq 1$ we have that
+$BC(f, n, q - 1)$ is true, but $BC(f, n, q)$ is not.
+Then there exists a commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f & X' \ar[d] \ar[l] & Y \ar[l]^h \ar[d] \\
+S & S' \ar[l] & \Spec(K) \ar[l]
+}
+$$
+with both squares cartesian, where
+\begin{enumerate}
+\item $S'$ is affine, integral, and normal with algebraically
+closed function field,
+\item $K$ is algebraically closed and $\Spec(K) \to S'$
+is dominant (in other words $K$ is an extension of
+the function field of $S'$)
+\end{enumerate}
+and there exists an integer $d | n$
+such that $R^qh_*(\mathbf{Z}/d\mathbf{Z})$ is nonzero.
+\end{lemma}
+
+\noindent
+Conversely, nonvanishing of $R^qh_*(\mathbf{Z}/d\mathbf{Z})$
+in the lemma implies $BC(f, n, q)$ isn't true as
+Lemma \ref{lemma-Rf-star-zero-normal-with-alg-closed-function-field}
+shows that $R^q(\Spec(K) \to S')_*\mathbf{Z}/d\mathbf{Z} = 0$.
+
+\begin{proof}
+First choose a diagram and $\mathcal{F}$ as in
+Lemma \ref{lemma-base-change-does-not-hold-pre}.
+We may and do assume $S'$ is affine (this is obvious, but
+see proof of the lemma in case of doubt).
+By Lemma \ref{lemma-base-change-q-integral-top}
+we may assume $K$ is algebraically closed.
+Then $\mathcal{F}$ corresponds to a $\mathbf{Z}/n\mathbf{Z}$-module.
+Such a module is a direct sum of copies of $\mathbf{Z}/d\mathbf{Z}$
+for varying $d | n$ hence we may assume $\mathcal{F}$ is
+constant with value $\mathbf{Z}/d\mathbf{Z}$.
+By Lemma \ref{lemma-base-change-q-integral-bottom}
+we may replace $S'$ by the normalization
+of $S'$ in $\Spec(K)$ which finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{Smooth base change}
+\label{section-smooth-base-change}
+
+\noindent
+In this section we prove the smooth base change theorem.
+
+\begin{lemma}
+\label{lemma-smooth-base-change-fields}
+Let $K/k$ be an extension of fields. Let $X$ be a smooth affine curve
+over $k$ with a rational point $x \in X(k)$. Let $\mathcal{F}$ be an abelian
+sheaf on $\Spec(K)$ annihilated by an integer $n$ invertible in $k$.
+Let $q > 0$ and
+$$
+\xi \in H^q(X_K, (X_K \to \Spec(K))^{-1}\mathcal{F})
+$$
+There exist
+\begin{enumerate}
+\item finite extensions $K'/K$ and $k'/k$ with $k' \subset K'$,
+\item a finite \'etale Galois cover $Z \to X_{k'}$ with group $G$
+\end{enumerate}
+such that the order of $G$ divides a power of $n$, such that
+$Z \to X_{k'}$ is split over $x_{k'}$, and
+such that $\xi$ dies in $H^q(Z_{K'}, (Z_{K'} \to \Spec(K))^{-1}\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+For $q > 1$ we know that $\xi$ dies in
+$H^q(X_{\overline{K}}, (X_{\overline{K}} \to \Spec(K))^{-1}\mathcal{F})$
+(Theorem \ref{theorem-vanishing-affine-curves}).
+By Lemma \ref{lemma-directed-colimit-cohomology} we see that
+this means there is a finite extension $K'/K$ such that
+$\xi$ dies in $H^q(X_{K'}, (X_{K'} \to \Spec(K))^{-1}\mathcal{F})$.
+Thus we can take $k' = k$ and $Z = X$ in this case.
+
+\medskip\noindent
+Assume $q = 1$. Recall that $\mathcal{F}$ corresponds to a
+discrete module $M$ with continuous $\text{Gal}_K$-action, see
+Lemma \ref{lemma-equivalence-abelian-sheaves-point}.
+Since $M$ is $n$-torsion, it is the union of its finite
+$\text{Gal}_K$-stable subgroups. Thus
+we reduce to the case where $M$ is a finite abelian group annihilated by $n$,
+see Lemma \ref{lemma-colimit}. After replacing $K$ by a finite extension
+we may assume that the action of $\text{Gal}_K$ on $M$ is trivial.
+Thus we may assume $\mathcal{F} = \underline{M}$ is the constant
+sheaf with value a finite abelian group $M$ annihilated by $n$.
+
+\medskip\noindent
+We can write $M$ as a direct sum of cyclic groups.
+Any two finite \'etale Galois coverings whose Galois groups
+have order invertible in $k$, can be dominated by a third one whose
+Galois group has order invertible in $k$
+(Fundamental Groups, Section \ref{pione-section-finite-etale-under-galois}).
+Thus it suffices to prove the lemma when
+$M = \mathbf{Z}/d\mathbf{Z}$ where $d | n$.
+
+\medskip\noindent
+Assume $M = \mathbf{Z}/d\mathbf{Z}$ where $d | n$.
+In this case $\overline{\xi} = \xi|_{X_{\overline{K}}}$ is an element of
+$$
+H^1(X_{\overline{k}}, \mathbf{Z}/d\mathbf{Z}) =
+H^1(X_{\overline{K}}, \mathbf{Z}/d\mathbf{Z})
+$$
+See Theorem \ref{theorem-vanishing-affine-curves}.
+This group classifies $\mathbf{Z}/d\mathbf{Z}$-torsors, see
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-torsors-h1}.
+The torsor corresponding to $\overline{\xi}$ (viewed as a sheaf on
+$X_{\overline{k}, \etale}$) in turn gives rise to a finite \'etale morphism
+$T \to X_{\overline{k}}$ endowed with an action of $\mathbf{Z}/d\mathbf{Z}$
+transitive on the fibre of $T$ over $x_{\overline{k}}$, see
+Lemma \ref{lemma-characterize-finite-locally-constant}.
+Choose a connected component $T' \subset T$ (if $\overline{\xi}$ has
+order $d$, then $T$ is already connected).
+Then $T' \to X_{\overline{k}}$ is a finite \'etale Galois
+cover whose Galois group is a subgroup $G \subset \mathbf{Z}/d\mathbf{Z}$
+(small detail omitted). Moreover the element $\overline{\xi}$ maps to zero
+under the map $H^1(X_{\overline{k}}, \mathbf{Z}/d\mathbf{Z}) \to
+H^1(T', \mathbf{Z}/d\mathbf{Z})$ as this is one
+of the defining properties of $T$.
+
+\medskip\noindent
+Next, we use a limit argument to choose a finite extension $k'/k$
+contained in $\overline{k}$ such that $T' \to X_{\overline{k}}$
+descends to a finite \'etale Galois cover $Z \to X_{k'}$ with group $G$.
+See Limits, Lemmas \ref{limits-lemma-descend-finite-presentation},
+\ref{limits-lemma-descend-finite-finite-presentation}, and
+\ref{limits-lemma-descend-etale}.
+After increasing $k'$ we may assume that $Z$ splits over $x_{k'}$.
+The image of $\xi$ in
+$H^1(Z_{\overline{K}}, \mathbf{Z}/d\mathbf{Z})$ is zero by construction.
+Thus by Lemma \ref{lemma-directed-colimit-cohomology}
+we can find a finite subextension $\overline{K}/K'/K$
+containing $k'$ such that $\xi$ dies in $H^1(Z_{K'}, \mathbf{Z}/d\mathbf{Z})$
+and this finishes the proof.
+\end{proof}
+
+\begin{theorem}[Smooth base change]
+\label{theorem-smooth-base-change}
+Consider a cartesian diagram of schemes
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+where $f$ is smooth and $g$ quasi-compact and quasi-separated. Then
+$$
+f^{-1}R^qg_*\mathcal{F} = R^qh_*e^{-1}\mathcal{F}
+$$
+for any $q$ and any abelian sheaf $\mathcal{F}$
+on $T_\etale$ all of whose stalks at geometric points are torsion of
+orders invertible on $S$.
+\end{theorem}
+
+\begin{proof}[First proof of smooth base change]
+This proof is very long but more direct (using less general theory)
+than the second proof given below.
+
+\medskip\noindent
+The theorem is local on $X_\etale$. More precisely, suppose we have
+$U \to X$ \'etale such that $U \to S$ factors as $U \to V \to S$
+with $V \to S$ \'etale. Then we can consider the cartesian square
+$$
+\xymatrix{
+U \ar[d]_{f'} & U \times_X Y \ar[l]^{h'} \ar[d]^{e'} \\
+V & V \times_S T \ar[l]_{g'}
+}
+$$
+and setting $\mathcal{F}' = \mathcal{F}|_{V \times_S T}$
+we have $f^{-1}R^qg_*\mathcal{F}|_U = (f')^{-1}R^qg'_*\mathcal{F}'$
+and $R^qh_*e^{-1}\mathcal{F}|_U = R^qh'_*(e')^{-1}\mathcal{F}'$
+(as follows from the compatibility of localization with morphisms of sites, see
+Sites, Lemma \ref{sites-lemma-localize-morphism-strong} and
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-restrict-direct-image-open}).
+Thus it suffices to produce an \'etale covering of $X$ by
+$U \to X$ and factorizations $U \to V \to S$
+as above such that the theorem holds for the diagram with
+$f'$, $h'$, $g'$, $e'$.
+
+\medskip\noindent
+By the local structure of smooth morphisms, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-etale-over-affine-space},
+we may assume $X$ and $S$ are affine and $X \to S$
+factors through an \'etale morphism $X \to \mathbf{A}^d_S$.
+If we have a tower of cartesian diagrams
+$$
+\xymatrix{
+W \ar[d]_i & Z \ar[l]^j \ar[d]^k \\
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e \\
+S & T \ar[l]_g
+}
+$$
+and the theorem holds for the bottom and top squares, then
+the theorem holds for the outer rectangle; this is formal.
+Writing $X \to S$ as the composition
+$$
+X \to \mathbf{A}^{d - 1}_S \to \mathbf{A}^{d - 2}_S \to \ldots \to
+\mathbf{A}^1_S \to S
+$$
+we conclude that it suffices to prove the theorem when $X$ and $S$
+are affine and $X \to S$ has relative dimension $1$.
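+
+\medskip\noindent
+To spell out why the claim about the tower above is formal
+(a sketch; we suppress the verification that the resulting identification
+agrees with the canonical base change map for the outer rectangle):
+apply the statement for the bottom square to $\mathcal{F}$ and the statement
+for the top square to the sheaf $e^{-1}\mathcal{F}$ on $Y_\etale$ to get
+$$
+(f \circ i)^{-1}R^qg_*\mathcal{F} =
+i^{-1}f^{-1}R^qg_*\mathcal{F} =
+i^{-1}R^qh_*e^{-1}\mathcal{F} =
+R^qj_*k^{-1}e^{-1}\mathcal{F} =
+R^qj_*(e \circ k)^{-1}\mathcal{F}
+$$
+for every $q$.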
+
+\medskip\noindent
+For every $n \geq 1$ invertible on $S$, let $\mathcal{F}[n]$
+be the subsheaf of sections of $\mathcal{F}$ annihilated by $n$. Then
+$\mathcal{F} = \colim \mathcal{F}[n]$ by our assumption on
+the stalks of $\mathcal{F}$. The functors $e^{-1}$ and $f^{-1}$
+commute with colimits as they are left adjoints. The functors
+$R^qh_*$ and $R^qg_*$ commute with filtered colimits by
+Lemma \ref{lemma-relative-colimit}.
+Thus it suffices to prove the theorem for $\mathcal{F}[n]$.
+From now on we fix an integer $n$, we work with
+sheaves of $\mathbf{Z}/n\mathbf{Z}$-modules and
+we assume $S$ is a scheme over $\Spec(\mathbf{Z}[1/n])$.
+
+\medskip\noindent
+Next, we reduce to the case where $T$ is affine. Since $g$ is quasi-compact
+and quasi-separated and $S$ is affine, the scheme $T$ is quasi-compact and
+quasi-separated. Thus we can use the induction principle of
+Cohomology of Schemes, Lemma \ref{coherent-lemma-induction-principle}.
+Hence it suffices to show that if $T = W \cup W'$
+is an open covering and the theorem holds for the squares
+$$
+\xymatrix{
+X \ar[d] & e^{-1}(W) \ar[l]^i \ar[d] \\
+S & W \ar[l]_a
+}
+\quad
+\xymatrix{
+X \ar[d] & e^{-1}(W') \ar[l]^j \ar[d] \\
+S & W' \ar[l]_b
+}
+\quad
+\xymatrix{
+X \ar[d] & e^{-1}(W \cap W') \ar[l]^-k \ar[d] \\
+S & W \cap W' \ar[l]_c
+}
+$$
+then the theorem holds for the original diagram. To see this we consider the
+diagram
+$$
+\xymatrix{
+f^{-1}R^{q - 1}c_*\mathcal{F}|_{W \cap W'} \ar[d]^{\cong} \ar[r] &
+f^{-1}R^qg_*\mathcal{F} \ar[d] \ar[r] &
+f^{-1}R^qa_*\mathcal{F}|_W \oplus
+f^{-1}R^qb_*\mathcal{F}|_{W'} \ar[d]_{\cong} \\
+R^qk_*e^{-1}\mathcal{F}|_{e^{-1}(W \cap W')} \ar[r] &
+R^qh_*e^{-1}\mathcal{F} \ar[r] &
+R^qi_*e^{-1}\mathcal{F}|_{e^{-1}(W)} \oplus
+R^qj_*e^{-1}\mathcal{F}|_{e^{-1}(W')}
+}
+$$
+whose rows are the long exact sequences of
+Lemma \ref{lemma-relative-mayer-vietoris}.
+Thus the $5$-lemma gives the desired conclusion.
+
+\medskip\noindent
+Summarizing, we may assume $S$, $X$, $T$, and $Y$ are affine,
+$\mathcal{F}$ is $n$-torsion, $X \to S$ is smooth of relative dimension $1$,
+and $S$ is a scheme over $\mathbf{Z}[1/n]$.
+We will prove the theorem by induction on $q$. The base case $q = 0$
+is handled by Lemma \ref{lemma-fppf-reduced-fibres-base-change-f-star}.
+Assume $q > 0$ and the theorem holds for all smaller degrees.
+Choose a short exact sequence
+$0 \to \mathcal{F} \to \mathcal{I} \to \mathcal{Q} \to 0$
+where $\mathcal{I}$ is an injective sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules.
+Consider the induced diagram
+$$
+\xymatrix{
+f^{-1}R^{q - 1}g_*\mathcal{I} \ar[d]_{\cong} \ar[r] &
+f^{-1}R^{q - 1}g_*\mathcal{Q} \ar[d]_{\cong} \ar[r] &
+f^{-1}R^qg_*\mathcal{F} \ar[d] \ar[r] &
+0 \ar[d] \\
+R^{q - 1}h_*e^{-1}\mathcal{I} \ar[r] &
+R^{q - 1}h_*e^{-1}\mathcal{Q} \ar[r] &
+R^qh_*e^{-1}\mathcal{F} \ar[r] &
+R^qh_*e^{-1}\mathcal{I}
+}
+$$
+with exact rows. We have the zero in the right upper corner
+as $\mathcal{I}$ is injective. The left two vertical arrows are
+isomorphisms by induction hypothesis. Thus it suffices to prove
+that $R^qh_*e^{-1}\mathcal{I} = 0$.
+
+\medskip\noindent
+Write $S = \Spec(A)$ and $T = \Spec(B)$ and say the morphism
+$T \to S$ is given by the ring map $A \to B$. We can write
+$A \to B = \colim_{i \in I} (A_i \to B_i)$ as a filtered
+colimit of maps of rings of finite type over $\mathbf{Z}[1/n]$
+(see Algebra, Lemma \ref{algebra-lemma-limit-no-condition}).
+For $i \in I$ we set $S_i = \Spec(A_i)$ and $T_i = \Spec(B_i)$.
+For $i$ large enough we can find a smooth morphism $X_i \to S_i$
+of relative dimension $1$ such that $X = X_i \times_{S_i} S$, see
+Limits, Lemmas \ref{limits-lemma-descend-finite-presentation},
+\ref{limits-lemma-descend-smooth}, and
+\ref{limits-lemma-descend-dimension-d}. Set $Y_i = X_i \times_{S_i} T_i$
+to get squares
+$$
+\xymatrix{
+X_i \ar[d]_{f_i} & Y_i \ar[l]^{h_i} \ar[d]^{e_i} \\
+S_i & T_i \ar[l]_{g_i}
+}
+$$
+Observe that $\mathcal{I}_i = (T \to T_i)_*\mathcal{I}$
+is an injective sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules on
+$T_i$, see Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-pushforward-injective-flat}.
+We have $\mathcal{I} = \colim (T \to T_i)^{-1}\mathcal{I}_i$
+by Lemma \ref{lemma-linus-hamann}. Pulling back by $e$ we get
+$e^{-1}\mathcal{I} = \colim (Y \to Y_i)^{-1}e_i^{-1}\mathcal{I}_i$.
+By Lemma \ref{lemma-relative-colimit-general} applied to the system
+of morphisms $Y_i \to X_i$ with limit $Y \to X$ we have
+$$
+R^qh_*e^{-1}\mathcal{I} =
+\colim (X \to X_i)^{-1} R^qh_{i, *} e_i^{-1}\mathcal{I}_i
+$$
+This reduces us to the case where $T$ and $S$ are affine of finite
+type over $\mathbf{Z}[1/n]$.
+
+\medskip\noindent
+Summarizing, we have an integer $q \geq 1$ such that the theorem holds
+in degrees $< q$, the schemes $S$ and $T$ are affine of finite
+type over $\mathbf{Z}[1/n]$, we have
+$X \to S$ smooth of relative dimension $1$ with $X$ affine, and
+$\mathcal{I}$ is an injective sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules
+and we have to show that $R^qh_*e^{-1}\mathcal{I} = 0$.
+We will do this by induction on $\dim(T)$.
+
+\medskip\noindent
+The base case is $T = \emptyset$, i.e., $\dim(T) < 0$.
+If you don't like this, you can take as your base case
+the case $\dim(T) = 0$. In this case $T \to S$ is finite
+(in fact even $T \to \Spec(\mathbf{Z}[1/n])$ is finite as the
+target is Jacobson; details omitted), so
+$h$ is finite too and hence has
+vanishing higher direct images (see references below).
+
+\medskip\noindent
+Assume $\dim(T) = d \geq 0$ and we know the result for all situations where $T$
+has lower dimension. Pick $U$ affine and \'etale over $X$ and a section $\xi$
+of $R^qh_*e^{-1}\mathcal{I}$ over $U$. We have to show
+that $\xi$ is zero. Of course, we may replace $X$ by $U$
+(and correspondingly $Y$ by $U \times_X Y$)
+and assume $\xi \in H^0(X, R^qh_*e^{-1}\mathcal{I})$.
+Moreover, since $R^qh_*e^{-1}\mathcal{I}$ is a sheaf,
+it suffices to prove that $\xi$ is zero locally on $X$.
+Hence we may replace $X$ by the members of an \'etale covering.
+In particular, using Lemma \ref{lemma-higher-direct-images}
+we may assume that $\xi$ is the image of an element
+$\tilde \xi \in H^q(Y, e^{-1}\mathcal{I})$.
+In terms of $\tilde \xi$ our task is to show that
+$\tilde \xi$ dies in $H^q(U_i \times_X Y, e^{-1}\mathcal{I})$
+for some \'etale covering $\{U_i \to X\}$.
+
+\medskip\noindent
+By More on Morphisms, Lemma \ref{more-morphisms-lemma-cover-smooth-by-special}
+we may assume that $X \to S$ factors as $X \to V \to S$
+where $V \to S$ is \'etale and $X \to V$ is a smooth morphism
+of affine schemes of relative dimension $1$, has a section, and
+has geometrically connected fibres. Observe that
+$\dim(V \times_S T) \leq \dim(T) = d$
+for example by More on Algebra, Lemma
+\ref{more-algebra-lemma-dimension-etale-extension}.
+Hence we may then replace $S$ by $V$ and $T$ by $V \times_S T$
+(exactly as in the discussion in the first paragraph of the proof).
+Thus we may assume $X \to S$ is smooth of relative dimension $1$,
+geometrically connected fibres, and has a section $\sigma : S \to X$.
+
+\medskip\noindent
+Let $\pi : T' \to T$ be a finite surjective morphism.
+We will use below that $\dim(T') \leq \dim(T) = d$, see
+Algebra, Lemma \ref{algebra-lemma-integral-dim-up}.
+Choose an injective map $\pi^{-1}\mathcal{I} \to \mathcal{I}'$
+into an injective sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules.
+Then $\mathcal{I} \to \pi_*\mathcal{I}'$ is injective
+and hence has a splitting (as $\mathcal{I}$ is an injective
+sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules).
+Denote $\pi' : Y' = Y \times_T T' \to Y$ the base change of $\pi$
+and $e' : Y' \to T'$ the base change of $e$. Picture
+$$
+\xymatrix{
+X \ar[d]_f & Y \ar[l]^h \ar[d]^e & Y' \ar[l]^{\pi'} \ar[d]^{e'} \\
+S & T \ar[l]_g & T' \ar[l]_\pi
+}
+$$
+By Proposition \ref{proposition-finite-higher-direct-image-zero} and
+Lemma \ref{lemma-finite-pushforward-commutes-with-base-change} we have
+$R\pi'_*(e')^{-1}\mathcal{I}' = e^{-1}\pi_*\mathcal{I}'$.
+Thus by the Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-Leray})
+we have
+$$
+H^q(Y', (e')^{-1}\mathcal{I}') =
+H^q(Y, e^{-1}\pi_*\mathcal{I}') \supset H^q(Y, e^{-1}\mathcal{I})
+$$
+and this remains true after base change by any $U \to X$ \'etale.
+Thus we may replace $T$ by $T'$, $\mathcal{I}$ by $\mathcal{I}'$
+and $\tilde \xi$ by its image in $H^q(Y', (e')^{-1}\mathcal{I}')$.
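+
+\medskip\noindent
+Here is why the displayed inclusion holds (a sketch; the sheaf $\mathcal{C}$
+is notation introduced only for this remark): the splitting of
+$\mathcal{I} \to \pi_*\mathcal{I}'$ chosen above gives a direct sum
+decomposition $\pi_*\mathcal{I}' = \mathcal{I} \oplus \mathcal{C}$ of sheaves
+of $\mathbf{Z}/n\mathbf{Z}$-modules. Since $e^{-1}$ and $H^q(Y, -)$ commute
+with finite direct sums we obtain
+$$
+H^q(Y, e^{-1}\pi_*\mathcal{I}') =
+H^q(Y, e^{-1}\mathcal{I}) \oplus H^q(Y, e^{-1}\mathcal{C})
+$$
+so that $H^q(Y, e^{-1}\mathcal{I})$ is a direct summand and an element of it
+vanishes as soon as its image in $H^q(Y', (e')^{-1}\mathcal{I}')$ does.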
+
+\medskip\noindent
+Suppose we have a factorization $T \to S' \to S$ where
+$\pi : S' \to S$ is finite.
+Setting $X' = S' \times_S X$ we can consider the induced diagram
+$$
+\xymatrix{
+X \ar[d]_f & X' \ar[l]^{\pi'} \ar[d]^{f'} & Y \ar[l]^{h'} \ar[d]^e \\
+S & S' \ar[l]_\pi & T \ar[l]_g
+}
+$$
+Since $\pi'$ has vanishing higher direct images we see that
+$R^qh_*e^{-1}\mathcal{I} = \pi'_*R^qh'_*e^{-1}\mathcal{I}$
+by the Leray spectral sequence. Hence
+$H^0(X, R^qh_*e^{-1}\mathcal{I}) = H^0(X', R^qh'_*e^{-1}\mathcal{I})$.
+Thus $\xi$ is zero if and only if the corresponding section of
+$R^qh'_*e^{-1}\mathcal{I}$ is zero\footnote{This
+step can also be seen another way. Namely, we have to show that
+there is an \'etale covering $\{U_i \to X\}$ such that
+$\tilde \xi$ dies in $H^q(U_i \times_X Y, e^{-1}\mathcal{I})$.
+However, if we prove there is an \'etale covering
+$\{U'_j \to X'\}$ such that $\tilde \xi$ dies in
+$H^q(U'_j \times_{X'} Y, e^{-1}\mathcal{I})$, then by property (B)
+for $X' \to X$ (Lemma \ref{lemma-finite-B}) there exists an \'etale covering
+$\{U_i \to X\}$ such that $U_i \times_X X'$ is a disjoint union
+of schemes over $X'$ each of which factors through $U'_j$ for some $j$.
+Thus we see that $\tilde \xi$ dies in $H^q(U_i \times_X Y, e^{-1}\mathcal{I})$
+as desired.}.
+Thus we may replace $S$ by $S'$ and $X$ by $X'$.
+Observe that $\sigma : S \to X$ base changes to $\sigma' : S' \to X'$
+and hence after this replacement it is still true that $X \to S$
+has a section $\sigma$ and geometrically connected fibres.
+
+\medskip\noindent
+We will use that $S$ and $T$ are Nagata schemes, see
+Algebra, Proposition \ref{algebra-proposition-ubiquity-nagata},
+which guarantees
+that various normalizations are finite, see
+Morphisms, Lemmas \ref{morphisms-lemma-nagata-normalization-finite} and
+\ref{morphisms-lemma-nagata-normalization}.
+In particular, we may first replace $T$ by its normalization
+and then replace $S$ by the normalization of $S$ in $T$.
+Then $T \to S$ is a disjoint union of dominant
+morphisms of integral normal schemes, see
+Morphisms, Lemma \ref{morphisms-lemma-normal-normalization}.
+Clearly we may argue one connected component
+at a time, hence we may assume $T \to S$ is a
+dominant morphism of integral normal schemes.
+
+\medskip\noindent
+Let $s \in S$ and $t \in T$ be the generic points. By
+Lemma \ref{lemma-smooth-base-change-fields} there exist finite field
+extensions $K/\kappa(t)$ and $k/\kappa(s)$ such that $k$ is contained in $K$
+and a finite \'etale Galois covering $Z \to X_k$ with Galois group $G$
+of order dividing a power of $n$ split over $\sigma(\Spec(k))$
+such that $\tilde \xi$ maps to zero in $H^q(Z_K, e^{-1}\mathcal{I}|_{Z_K})$.
+Let $T' \to T$ be the normalization of $T$ in $\Spec(K)$
+and let $S' \to S$ be the normalization of $S$ in $\Spec(k)$.
+Then we obtain a commutative diagram
+$$
+\xymatrix{
+S' \ar[d] & T' \ar[l] \ar[d] \\
+S & T \ar[l]
+}
+$$
+whose vertical arrows are finite. By the arguments given above we
+may and do replace $S$ and $T$ by $S'$ and $T'$ (and correspondingly
+$X$ by $X \times_S S'$ and $Y$ by $Y \times_T T'$). After this replacement
+we conclude we have a finite \'etale Galois covering
+$Z \to X_s$ of the generic fibre of $X \to S$
+with Galois group $G$ of order dividing a power of $n$
+split over $\sigma(s)$ such that $\tilde \xi$ maps to zero in
+$H^q(Z_t, (Z_t \to Y)^{-1}e^{-1}\mathcal{I})$.
+Here $Z_t = Z \times_S t = Z \times_s t = Z \times_{X_s} Y_t$.
+Since $n$ is invertible on $S$,
+by Fundamental Groups, Lemma \ref{pione-lemma-extend-covering}
+we can find a finite \'etale morphism $U \to X$ whose restriction to $X_s$
+is $Z$.
+
+\medskip\noindent
+At this point we replace $X$ by $U$ and $Y$ by $U \times_X Y$.
+After this replacement it may
+no longer be the case that the fibres of $X \to S$ are geometrically
+connected (there still is a section but we won't use this), but what
+we gain is that after this replacement $\tilde \xi$ maps to zero
+in $H^q(Y_t, e^{-1}\mathcal{I})$, i.e., $\tilde \xi$ restricts to
+zero on the generic fibre of $Y \to T$.
+
+\medskip\noindent
+Recall that $t$ is the spectrum of the function field of $T$, i.e.,
+as a scheme $t$ is the limit of the nonempty affine open subschemes of $T$.
+By Lemma \ref{lemma-directed-colimit-cohomology} we conclude there exists
+a nonempty open subscheme $V \subset T$ such that $\tilde \xi$ maps to zero in
+$H^q(Y \times_T V, e^{-1}\mathcal{I}|_{Y \times_T V})$.
+
+\medskip\noindent
+Denote $Z = T \setminus V$. Consider the diagram
+$$
+\xymatrix{
+Y \times_T Z \ar[d]_{e_Z} \ar[r]_{i'} &
+Y \ar[d]_e &
+Y \times_T V \ar[l]^{j'} \ar[d]^{e_V} \\
+Z \ar[r]^i &
+T &
+V \ar[l]_j
+}
+$$
+Choose an injection $i^{-1}\mathcal{I} \to \mathcal{I}'$
+into an injective sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules on $Z$.
+Looking at stalks we see that the map
+$$
+\mathcal{I} \to j_*\mathcal{I}|_V \oplus i_*\mathcal{I}'
+$$
+is injective and hence splits as $\mathcal{I}$ is an injective sheaf
+of $\mathbf{Z}/n\mathbf{Z}$-modules. Thus it suffices to show that
+$\tilde \xi$ maps to zero in
+$$
+H^q(Y, e^{-1}j_*\mathcal{I}|_V) \oplus
+H^q(Y, e^{-1}i_*\mathcal{I}')
+$$
+at least after replacing $X$ by the members of an \'etale covering.
+Observe that
+$$
+e^{-1}j_*\mathcal{I}|_V = j'_*e_V^{-1}\mathcal{I}|_V,\quad
+e^{-1}i_*\mathcal{I}' = i'_*e_Z^{-1}\mathcal{I}'
+$$
+By induction hypothesis on $q$ we see that
+$$
+R^aj'_*e_V^{-1}\mathcal{I}|_V = 0, \quad a = 1, \ldots, q - 1
+$$
+By the Leray spectral sequence for $j'$ and the vanishing
+above it follows that
+$$
+H^q(Y, j'_*(e_V^{-1}\mathcal{I}|_V))
+\longrightarrow
+H^q(Y \times_T V, e_V^{-1}\mathcal{I}|_V) =
+H^q(Y \times_T V, e^{-1}\mathcal{I}|_{Y \times_T V})
+$$
+is injective. Thus the image of $\tilde \xi$
+in the first summand above vanishes because we know $\tilde \xi$ vanishes
+in $H^q(Y \times_T V, e^{-1}\mathcal{I}|_{Y \times_T V})$.
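+
+\medskip\noindent
+To spell out the injectivity just used (a sketch): write
+$\mathcal{G} = e_V^{-1}\mathcal{I}|_V$. The Leray spectral sequence for $j'$
+reads
+$$
+E_2^{p, a} = H^p(Y, R^aj'_*\mathcal{G})
+\Rightarrow
+H^{p + a}(Y \times_T V, \mathcal{G})
+$$
+and its rows $1 \leq a \leq q - 1$ vanish by the induction hypothesis.
+The differentials into $E_r^{q, 0}$ start from $E_r^{q - r, r - 1}$, which
+lies in one of these rows for $2 \leq r \leq q$ and in a negative column for
+$r > q$, while the differentials out of $E_r^{q, 0}$ land in negative rows.
+Hence $E_2^{q, 0} = E_\infty^{q, 0}$, which is the smallest piece of the
+filtration on $H^q(Y \times_T V, \mathcal{G})$ and in particular a subgroup.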
+Since $\dim(Z) < \dim(T) = d$ by induction the image of $\tilde \xi$
+in the second summand
+$$
+H^q(Y, e^{-1}i_*\mathcal{I}') =
+H^q(Y, i'_*e_Z^{-1}\mathcal{I}') =
+H^q(Y \times_T Z, e_Z^{-1}\mathcal{I}')
+$$
+dies after replacing $X$ by the members of a suitable \'etale covering.
+This finishes the proof of the smooth base change theorem.
+\end{proof}
+
+\begin{proof}[Second proof of smooth base change]
+This proof is the same as the longer first proof;
+it is shorter only in that we have split out the
+arguments used in a number of lemmas.
+
+\medskip\noindent
+The case of $q = 0$ is
+Lemma \ref{lemma-fppf-reduced-fibres-base-change-f-star}.
+Thus we may assume $q > 0$ and the result is true
+for all smaller degrees.
+
+\medskip\noindent
+For every $n \geq 1$ invertible on $S$, let $\mathcal{F}[n]$
+be the subsheaf of sections of $\mathcal{F}$ annihilated by $n$. Then
+$\mathcal{F} = \colim \mathcal{F}[n]$ by our assumption on
+the stalks of $\mathcal{F}$. The functors $e^{-1}$ and $f^{-1}$
+commute with colimits as they are left adjoints. The functors
+$R^qh_*$ and $R^qg_*$ commute with filtered colimits by
+Lemma \ref{lemma-relative-colimit}.
+Thus it suffices to prove the theorem for $\mathcal{F}[n]$.
+From now on we fix an integer $n$ invertible on $S$ and we work with
+sheaves of $\mathbf{Z}/n\mathbf{Z}$-modules.
+
+\medskip\noindent
+By Lemma \ref{lemma-base-change-local} the question is \'etale local
+on $X$ and $S$. By the local structure of smooth morphisms, see
+Morphisms, Lemma \ref{morphisms-lemma-smooth-etale-over-affine-space},
+we may assume $X$ and $S$ are affine and $X \to S$
+factors through an \'etale morphism $X \to \mathbf{A}^d_S$.
+Writing $X \to S$ as the composition
+$$
+X \to \mathbf{A}^{d - 1}_S \to \mathbf{A}^{d - 2}_S \to \ldots \to
+\mathbf{A}^1_S \to S
+$$
+we conclude from Lemma \ref{lemma-base-change-compose}
+that it suffices to prove the theorem when $X$ and $S$
+are affine and $X \to S$ has relative dimension $1$.
+
+\medskip\noindent
+By Lemma \ref{lemma-base-change-does-not-hold} it suffices to show that
+$R^qh_*\mathbf{Z}/d\mathbf{Z} = 0$ for $d | n$ whenever we have a
+cartesian diagram
+$$
+\xymatrix{
+X \ar[d] & Y \ar[d] \ar[l]^h \\
+S & \Spec(K) \ar[l]
+}
+$$
+where $X \to S$ is affine and smooth of relative dimension $1$,
+$S$ is the spectrum of a normal
+domain $A$ with algebraically closed fraction field $L$,
+and $K/L$ is an extension of algebraically closed fields.
+
+\medskip\noindent
+Recall that $R^qh_*\mathbf{Z}/d\mathbf{Z}$ is the sheaf associated
+to the presheaf
+$$
+U \longmapsto
+H^q(U \times_X Y, \mathbf{Z}/d\mathbf{Z}) =
+H^q(U \times_S \Spec(K), \mathbf{Z}/d\mathbf{Z})
+$$
+on $X_\etale$ (Lemma \ref{lemma-higher-direct-images}).
+Thus it suffices to show: given $U$ and
+$\xi \in H^q(U \times_S \Spec(K), \mathbf{Z}/d\mathbf{Z})$
+there exists an \'etale covering $\{U_i \to U\}$
+such that $\xi$ dies in $H^q(U_i \times_S \Spec(K), \mathbf{Z}/d\mathbf{Z})$.
+
+\medskip\noindent
+Of course we may take $U$ affine. Then $U \times_S \Spec(K)$
+is a (smooth) affine curve over $K$ and hence we have vanishing
+for $q > 1$ by Theorem \ref{theorem-vanishing-affine-curves}.
+
+\medskip\noindent
+Final case: $q = 1$. We may replace $U$ by the members of an
+\'etale covering as in More on Morphisms, Lemma
+\ref{more-morphisms-lemma-cover-smooth-by-special}.
+Then $U \to S$ factors as $U \to V \to S$ where
+$U \to V$ has geometrically connected fibres,
+$U$, $V$ are affine, $V \to S$ is \'etale, and there is
+a section $\sigma : V \to U$. By
+Lemma \ref{lemma-normal-scheme-with-alg-closed-function-field}
+we see that $V$ is isomorphic to a (finite) disjoint union of
+(affine) open subschemes of $S$.
+Clearly we may replace $S$ by one of these
+and $X$ by the corresponding component of $U$.
+Thus we may assume $X \to S$ has geometrically connected
+fibres, has a section $\sigma$, and
+$\xi \in H^1(Y, \mathbf{Z}/d\mathbf{Z})$.
+Since $K$ and $L$ are algebraically closed we have
+$$
+H^1(X_L, \mathbf{Z}/d\mathbf{Z}) =
+H^1(Y, \mathbf{Z}/d\mathbf{Z})
+$$
+See Lemma \ref{lemma-base-change-dim-1-separably-closed}.
+Thus there is a finite \'etale Galois covering
+$Z \to X_L$ with Galois group $G \subset \mathbf{Z}/d\mathbf{Z}$
+which annihilates $\xi$.
+You can either see this by looking at the statement or
+proof of Lemma \ref{lemma-smooth-base-change-fields}
+or by using directly that $\xi$ corresponds to a
+$\mathbf{Z}/d\mathbf{Z}$-torsor over $X_L$.
+Finally, by
+Fundamental Groups, Lemma \ref{pione-lemma-extend-covering-general}
+we find a (necessarily surjective)
+finite \'etale morphism $X' \to X$
+whose restriction to $X_L$ is $Z \to X_L$.
+Since $\xi$ dies in $X'_K$ this finishes the proof.
+\end{proof}
+
+\noindent
+The following immediate consequence of the smooth base change
+theorem is what is often used in practice.
+
+\begin{lemma}
+\label{lemma-smooth-base-change-general}
+Let $S$ be a scheme. Let $S' = \lim S_i$ be a directed inverse
+limit of schemes $S_i$ smooth over $S$ with affine transition
+morphisms. Let $f : X \to S$ be quasi-compact and quasi-separated
+and form the fibre square
+$$
+\xymatrix{
+X' \ar[d]_{f'} \ar[r]_{g'} & X \ar[d]^f \\
+S' \ar[r]^g & S
+}
+$$
+Then
+$$
+g^{-1}Rf_*E = R(f')_*(g')^{-1}E
+$$
+for any $E \in D^+(X_\etale)$ whose cohomology sheaves $H^q(E)$
+have stalks which are torsion of orders invertible on $S$.
+\end{lemma}
+
+\begin{proof}
+Consider the spectral sequences
+$$
+E_2^{p, q} = R^pf_*H^q(E)
+\quad\text{and}\quad
+{E'}_2^{p, q} = R^pf'_*H^q((g')^{-1}E) = R^pf'_*(g')^{-1}H^q(E)
+$$
+converging to $R^nf_*E$ and $R^nf'_*(g')^{-1}E$.
+These spectral sequences are constructed in
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}.
+Combining the smooth base change theorem
+(Theorem \ref{theorem-smooth-base-change})
+with Lemma \ref{lemma-base-change-Rf-star-colim} we see that
+$$
+g^{-1}R^pf_*H^q(E) = R^p(f')_*(g')^{-1}H^q(E)
+$$
+Since $g^{-1}$ is exact, pulling back the first spectral sequence gives a
+spectral sequence with $E_2$-page $g^{-1}R^pf_*H^q(E)$ converging to
+$g^{-1}R^nf_*E$. Combining all of the above we get the lemma.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Applications of smooth base change}
+\label{section-applications-smooth-base-change}
+
+\noindent
+In this section we discuss some more or less immediate
+consequences of the smooth base change theorem.
+
+\begin{lemma}
+\label{lemma-base-change-field-extension}
+Let $L/K$ be an extension of fields. Let $g : T \to S$ be a quasi-compact
+and quasi-separated morphism of schemes over $K$. Denote
+$g_L : T_L \to S_L$ the base change of $g$ to $\Spec(L)$.
+Let $E \in D^+(T_\etale)$ have cohomology sheaves whose stalks
+are torsion of orders invertible in $K$. Let $E_L$ be the
+pullback of $E$ to $(T_L)_\etale$. Then
+$Rg_{L, *}E_L$ is the pullback of $Rg_*E$ to $S_L$.
+\end{lemma}
+
+\begin{proof}
+If $L/K$ is separable, then $L$ is a filtered colimit of smooth
+$K$-algebras, see
+Algebra, Lemma \ref{algebra-lemma-colimit-syntomic}.
+Thus the lemma in this case follows immediately from
+Lemma \ref{lemma-smooth-base-change-general}.
+In the general case, let $K'$ and $L'$ be the perfect closures
+(Algebra, Definition \ref{algebra-definition-perfection})
+of $K$ and $L$. Then $\Spec(K') \to \Spec(K)$ and
+$\Spec(L') \to \Spec(L)$ are universal homeomorphisms
+as $K'/K$ and $L'/L$ are purely inseparable
+(see Algebra, Lemma \ref{algebra-lemma-p-ring-map}).
+Thus we have $(T_{K'})_\etale = T_\etale$,
+$(S_{K'})_\etale = S_\etale$,
+$(T_{L'})_\etale = (T_L)_\etale$, and
+$(S_{L'})_\etale = (S_L)_\etale$ by
+the topological invariance of \'etale cohomology, see
+Proposition \ref{proposition-topological-invariance}.
+This reduces the lemma to the case of the field
+extension $L'/K'$ which is separable (by definition of
+perfect fields, see Algebra, Definition \ref{algebra-definition-perfect}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-smooth-base-change-separably-closed}
+Let $K/k$ be an extension of separably closed fields. Let $X$
+be a quasi-compact and quasi-separated scheme over $k$.
+Let $E \in D^+(X_\etale)$ have cohomology sheaves whose stalks
+are torsion of orders invertible in $k$. Then
+\begin{enumerate}
+\item the maps $H^q_\etale(X, E) \to H^q_\etale(X_K, E|_{X_K})$
+are isomorphisms, and
+\item $E \to R(X_K \to X)_*E|_{X_K}$ is an isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). First let $\overline{k}$ and $\overline{K}$
+be the algebraic closures
+of $k$ and $K$. The morphisms $\Spec(\overline{k}) \to \Spec(k)$ and
+$\Spec(\overline{K}) \to \Spec(K)$ are universal homeomorphisms
+as $\overline{k}/k$ and $\overline{K}/K$ are purely inseparable
+(see Algebra, Lemma \ref{algebra-lemma-p-ring-map}).
+Thus $H^q_\etale(X, E) =
+H^q_\etale(X_{\overline{k}}, E|_{X_{\overline{k}}})$ by
+the topological invariance of \'etale cohomology, see
+Proposition \ref{proposition-topological-invariance}.
+Similarly for $X_K$ and $X_{\overline{K}}$.
+Thus we may assume $k$ and $K$ are algebraically closed.
+In this case $K$ is a limit of smooth $k$-algebras, see
+Algebra, Lemma \ref{algebra-lemma-colimit-syntomic}.
+We conclude our lemma is a special case of
+Theorem \ref{theorem-smooth-base-change} as reformulated in
+Lemma \ref{lemma-smooth-base-change-general}.
+
+\medskip\noindent
+Proof of (2). For any quasi-compact and quasi-separated $U$ in $X_\etale$
+the above shows that the restriction of the map
+$E \to R(X_K \to X)_*E|_{X_K}$ determines an isomorphism on cohomology.
+Since every object of $X_\etale$ has an \'etale covering by such $U$
+this proves the desired statement.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-base-change-does-not-hold-post}
+With $f : X \to S$ and $n$ as in Remark \ref{remark-base-change-holds}
+assume $n$ is invertible on $S$ and that for some $q \geq 1$
+we have that $BC(f, n, q - 1)$ is true, but $BC(f, n, q)$ is not.
+Then there exists a commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f & X' \ar[d] \ar[l] & Y \ar[l]^h \ar[d] \\
+S & S' \ar[l] & \Spec(K) \ar[l]
+}
+$$
+with both squares cartesian, where $S'$ is affine, integral, and normal
+with algebraically closed function field $K$ and there exists an integer
+$d | n$ such that $R^qh_*(\mathbf{Z}/d\mathbf{Z})$ is nonzero.
+\end{lemma}
+
+\begin{proof}
+First choose a diagram and $\mathcal{F}$ as in
+Lemma \ref{lemma-base-change-does-not-hold}.
+We may and do assume $S'$ is affine (this is obvious, but
+see proof of the lemma in case of doubt).
+Let $K'$ be the function field of $S'$
+and let $Y' = X' \times_{S'} \Spec(K')$ to get the diagram
+$$
+\xymatrix{
+X \ar[d]_f &
+X' \ar[d] \ar[l] &
+Y' \ar[l]^{h'} \ar[d] &
+Y \ar[l] \ar[d] \\
+S &
+S' \ar[l] &
+\Spec(K') \ar[l] &
+\Spec(K) \ar[l]
+}
+$$
+By Lemma \ref{lemma-smooth-base-change-separably-closed}
+the total direct image $R(Y \to Y')_*\mathbf{Z}/d\mathbf{Z}$
+is isomorphic to $\mathbf{Z}/d\mathbf{Z}$ in $D(Y'_\etale)$; here
+we use that $n$ is invertible on $S$.
+Since $h = h' \circ (Y \to Y')$ the relative Leray spectral sequence gives
+$Rh_*\mathbf{Z}/d\mathbf{Z} =
+Rh'_*R(Y \to Y')_*\mathbf{Z}/d\mathbf{Z} =
+Rh'_*\mathbf{Z}/d\mathbf{Z}$.
+This finishes the proof.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{The proper base change theorem}
+\label{section-proper-base-change}
+
+\noindent
+The proper base change theorem is stated and proved in this section.
+Our approach follows roughly the proof in \cite[XII, Theorem 5.1]{SGA4}
+using Gabber's ideas (from the affine case) to slightly simplify
+the arguments.
+
+\begin{lemma}
+\label{lemma-zariski-h0-proper-over-henselian-pair}
+Let $(A, I)$ be a henselian pair. Let $f : X \to \Spec(A)$ be a proper morphism
+of schemes. Let $Z = X \times_{\Spec(A)} \Spec(A/I)$. For any
+sheaf $\mathcal{F}$ on the topological space associated to $X$ we
+have $\Gamma(X, \mathcal{F}) = \Gamma(Z, \mathcal{F}|_Z)$.
+\end{lemma}
+
+\begin{proof}
+We will use Lemma \ref{lemma-h0-topological} to prove this. First observe
+that the underlying topological space of $X$ is spectral by Properties, Lemma
+\ref{properties-lemma-quasi-compact-quasi-separated-spectral}.
+Let $Y \subset X$ be an irreducible closed subscheme. To finish the proof
+we show that $Y \cap Z = Y \times_{\Spec(A)} \Spec(A/I)$ is connected.
+Replacing $X$ by $Y$
+we may assume that $X$ is irreducible and we have to show that $Z$
+is connected. Let $X \to \Spec(B) \to \Spec(A)$ be the Stein factorization
+of $f$ (More on Morphisms, Theorem
+\ref{more-morphisms-theorem-stein-factorization-general}).
+Then $A \to B$ is integral and $(B, IB)$ is a henselian pair
+(More on Algebra, Lemma \ref{more-algebra-lemma-integral-over-henselian-pair}).
+Thus we may assume the fibres of $X \to \Spec(A)$ are geometrically
+connected. On the other hand, the image $T \subset \Spec(A)$ of $f$
+is irreducible and closed as $X$ is proper over $A$. Hence $T \cap V(I)$
+is connected by More on Algebra, Lemma
+\ref{more-algebra-lemma-irreducible-henselian-pair-connected}.
+Now $Z \to T \cap V(I)$
+is a surjective closed map with connected fibres.
+The result now follows from Topology, Lemma
+\ref{topology-lemma-connected-fibres-quotient-topology-connected-components}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-h0-proper-over-henselian-pair}
+Let $(A, I)$ be a henselian pair. Let $f : X \to \Spec(A)$ be a proper morphism
+of schemes. Let $i : Z \to X$ be the closed immersion of
+$X \times_{\Spec(A)} \Spec(A/I)$ into $X$. For any
+sheaf $\mathcal{F}$ on $X_\etale$ we
+have $\Gamma(X, \mathcal{F}) = \Gamma(Z, i_{small}^{-1}\mathcal{F})$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-gabber-h0} and
+\ref{lemma-zariski-h0-proper-over-henselian-pair}
+and the fact that any scheme finite over $X$ is proper over $\Spec(A)$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-h0-proper-over-henselian-local}
+Let $A$ be a henselian local ring. Let $f : X \to \Spec(A)$
+be a proper morphism of schemes. Let $X_0 \subset X$ be the fibre of
+$f$ over the closed point. For any sheaf $\mathcal{F}$ on $X_\etale$ we
+have $\Gamma(X, \mathcal{F}) = \Gamma(X_0, \mathcal{F}|_{X_0})$.
+\end{lemma}
+
+\begin{proof}
+This is a special case of Lemma \ref{lemma-h0-proper-over-henselian-pair}.
+\end{proof}
+
+\noindent
+Let $f : X \to S$ be a morphism of schemes. Let
+$\overline{s} : \Spec(k) \to S$ be a geometric point. The fibre
+of $f$ at $\overline{s}$ is the scheme
+$X_{\overline{s}} = \Spec(k) \times_{\overline{s}, S} X$ viewed
+as a scheme over $\Spec(k)$. If $\mathcal{F}$ is a sheaf on
+$X_\etale$, then denote
+$\mathcal{F}_{\overline{s}} = p_{small}^{-1}\mathcal{F}$
+the pullback of $\mathcal{F}$ to $(X_{\overline{s}})_\etale$.
+In the following we will consider the set
+$$
+\Gamma(X_{\overline{s}}, \mathcal{F}_{\overline{s}})
+$$
+Let $s \in S$ be the image point of $\overline{s}$. Let $\kappa(s)^{sep}$
+be the separable algebraic closure of $\kappa(s)$ in $k$ as in
+Definition \ref{definition-algebraic-geometric-point}.
+By Lemma \ref{lemma-sections-base-field-extension}
+pullback defines a bijection
+$$
+\Gamma(X_{\kappa(s)^{sep}},
+p_{sep}^{-1} \mathcal{F})
+\longrightarrow
+\Gamma(X_{\overline{s}}, \mathcal{F}_{\overline{s}})
+$$
+where $p_{sep} : X_{\kappa(s)^{sep}} = \Spec(\kappa(s)^{sep}) \times_S X \to X$
+is the projection.
+
+\begin{lemma}
+\label{lemma-proper-pushforward-stalk}
+Let $f : X \to S$ be a proper morphism of schemes. Let
+$\overline{s} \to S$ be a geometric point.
+For any sheaf $\mathcal{F}$ on $X_\etale$
+the canonical map
+$$
+(f_*\mathcal{F})_{\overline{s}} \longrightarrow
+\Gamma(X_{\overline{s}}, \mathcal{F}_{\overline{s}})
+$$
+is bijective.
+\end{lemma}
+
+\begin{proof}
+By Theorem \ref{theorem-higher-direct-images} (for sheaves of sets)
+we have
+$$
+(f_*\mathcal{F})_{\overline{s}} =
+\Gamma(X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}),
+p_{small}^{-1}\mathcal{F})
+$$
+where $p : X \times_S \Spec(\mathcal{O}_{S, \overline{s}}^{sh}) \to X$
+is the projection. Since the residue field of the strictly henselian
+local ring $\mathcal{O}_{S, \overline{s}}^{sh}$ is $\kappa(s)^{sep}$
+we conclude from the discussion above the lemma and
+Lemma \ref{lemma-h0-proper-over-henselian-local}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-base-change-f-star}
+Let $f : X \to Y$ be a proper morphism of schemes. Let $g : Y' \to Y$
+be a morphism of schemes. Set $X' = Y' \times_Y X$ with projections
+$f' : X' \to Y'$ and $g' : X' \to X$. Let $\mathcal{F}$ be any sheaf on
+$X_\etale$. Then $g^{-1}f_*\mathcal{F} = f'_*(g')^{-1}\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+There is a canonical map $g^{-1}f_*\mathcal{F} \to f'_*(g')^{-1}\mathcal{F}$.
+Namely, it is adjoint to the map
+$$
+f_*\mathcal{F} \longrightarrow
+g_*f'_*(g')^{-1}\mathcal{F} = f_*g'_*(g')^{-1}\mathcal{F}
+$$
+which is $f_*$ applied to the canonical map
+$\mathcal{F} \to g'_*(g')^{-1}\mathcal{F}$. To check this map is an
+isomorphism we can compute what happens on stalks.
+Let $y' : \Spec(k) \to Y'$ be a geometric point with image $y$ in $Y$.
+By Lemma \ref{lemma-proper-pushforward-stalk} the stalks are
+$\Gamma(X'_{y'}, \mathcal{F}_{y'})$ and $\Gamma(X_y, \mathcal{F}_y)$
+respectively. Here the sheaves $\mathcal{F}_y$ and $\mathcal{F}_{y'}$
+are the pullbacks of $\mathcal{F}$ by the projections $X_y \to X$
+and $X'_{y'} \to X$. Thus we see that the groups agree by
+Lemma \ref{lemma-sections-base-field-extension}. We omit the
+verification that this isomorphism is compatible with our map.
+\end{proof}
+
+
+\noindent
+At this point we start discussing the proper base change theorem.
+To do so we introduce some notation. Consider a commutative diagram
+\begin{equation}
+\label{equation-base-change-diagram}
+\vcenter{
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+}
+\end{equation}
+of morphisms of schemes. Then we obtain a commutative diagram of sites
+$$
+\xymatrix{
+X'_\etale \ar[r]_{g'_{small}} \ar[d]_{f'_{small}} &
+X_\etale \ar[d]^{f_{small}} \\
+Y'_\etale \ar[r]^{g_{small}} &
+Y_\etale
+}
+$$
+For any object $E$ of $D(X_\etale)$ we obtain a canonical base change map
+\begin{equation}
+\label{equation-base-change}
+g_{small}^{-1}Rf_{small, *}E \longrightarrow Rf'_{small, *}(g'_{small})^{-1}E
+\end{equation}
+in $D(Y'_\etale)$. See Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-base-change} where we use the constant
+sheaf $\mathbf{Z}$ as our sheaf of rings.
+We will usually omit the subscripts ${}_{small}$ in this formula.
+For example, if $E = \mathcal{F}[0]$ where $\mathcal{F}$ is an abelian
+sheaf on $X_\etale$, the base change map is a map
+\begin{equation}
+\label{equation-base-change-sheaf}
+g^{-1}Rf_*\mathcal{F} \longrightarrow Rf'_*(g')^{-1}\mathcal{F}
+\end{equation}
+in $D(Y'_\etale)$.
+
+\medskip\noindent
+The map (\ref{equation-base-change}) has no chance of being an isomorphism
+in the generality given above. The goal is to show
+it is an isomorphism if the diagram (\ref{equation-base-change-diagram})
+is cartesian, $f : X \to Y$ proper, the cohomology sheaves of $E$
+are torsion, and $E$ is bounded below.
+To study this question we introduce the following terminology.
+Let us say that {\it cohomology commutes with base change
+for $f : X \to Y$} if (\ref{equation-base-change-sheaf})
+is an isomorphism for every diagram (\ref{equation-base-change-diagram})
+where $X' = Y' \times_Y X$ and every torsion abelian sheaf $\mathcal{F}$.
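+
+\medskip\noindent
+The following standard example (not used in the arguments below)
+shows that properness cannot simply be dropped. Let $k$ be an
+algebraically closed field, let $f : \mathbf{G}_{m, k} \to \mathbf{A}^1_k$
+be the open immersion, let $g : \Spec(k) \to \mathbf{A}^1_k$ be the
+inclusion of the origin, and let
+$\mathcal{F} = \underline{\mathbf{Z}/2\mathbf{Z}}$.
+Then $X' = \Spec(k) \times_{\mathbf{A}^1_k} \mathbf{G}_{m, k} = \emptyset$
+so that $f'_*(g')^{-1}\mathcal{F} = 0$. On the other hand, denoting
+$\overline{0}$ a geometric point lying over the origin, we have
+$$
+(f_*\mathcal{F})_{\overline{0}} =
+\Gamma(\mathbf{G}_{m, k} \times_{\mathbf{A}^1_k}
+\Spec(\mathcal{O}^{sh}_{\mathbf{A}^1_k, \overline{0}}),
+\mathbf{Z}/2\mathbf{Z}) = \mathbf{Z}/2\mathbf{Z}
+$$
+because the punctured spectrum of the strictly henselian local ring
+at the origin is nonempty and connected.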
+
+\begin{lemma}
+\label{lemma-proper-base-change-in-terms-of-injectives}
+Let $f : X \to Y$ be a proper morphism of schemes.
+The following are equivalent
+\begin{enumerate}
+\item cohomology commutes with base change for $f$ (see above),
+\item for every prime number $\ell$ and every injective
+sheaf of $\mathbf{Z}/\ell\mathbf{Z}$-modules $\mathcal{I}$
+on $X_\etale$ and every diagram (\ref{equation-base-change-diagram})
+where $X' = Y' \times_Y X$ the sheaves
+$R^qf'_*(g')^{-1}\mathcal{I}$ are zero for $q > 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+It is clear that (1) implies (2). Conversely, assume (2) and let
+$\mathcal{F}$ be a torsion abelian sheaf on $X_\etale$. Let $Y' \to Y$
+be a morphism of schemes and let $X' = Y' \times_Y X$
+with projections $g' : X' \to X$ and $f' : X' \to Y'$ as in
+diagram (\ref{equation-base-change-diagram}).
+We want to show the maps of sheaves
+$$
+g^{-1}R^qf_*\mathcal{F} \longrightarrow R^qf'_*(g')^{-1}\mathcal{F}
+$$
+are isomorphisms for all $q \geq 0$.
+
+\medskip\noindent
+For every $n \geq 1$, let $\mathcal{F}[n]$ be the subsheaf of sections
+of $\mathcal{F}$ annihilated by $n$. Then
+$\mathcal{F} = \colim \mathcal{F}[n]$.
+The functors $g^{-1}$ and $(g')^{-1}$ commute with arbitrary colimits
+(as left adjoints). Taking higher direct images along $f$ or $f'$
+commutes with filtered colimits by Lemma \ref{lemma-relative-colimit}.
+Hence we see that
+$$
+g^{-1}R^qf_*\mathcal{F} = \colim g^{-1}R^qf_*\mathcal{F}[n]
+\quad\text{and}\quad
+R^qf'_*(g')^{-1}\mathcal{F} =
+\colim R^qf'_*(g')^{-1}\mathcal{F}[n]
+$$
+Thus it suffices to prove the result in case $\mathcal{F}$ is
+annihilated by a positive integer $n$.
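+
+\medskip\noindent
+As an illustration of the colimit above (not needed for the proof):
+for the constant sheaf $\mathcal{F} = \underline{\mathbf{Q}/\mathbf{Z}}$
+we have
+$$
+\mathcal{F}[n] = \underline{\tfrac{1}{n}\mathbf{Z}/\mathbf{Z}}
+\cong \underline{\mathbf{Z}/n\mathbf{Z}}
+$$
+and $\mathcal{F} = \colim \mathcal{F}[n]$ recovers the presentation
+of $\mathbf{Q}/\mathbf{Z}$ as the filtered colimit of its cyclic
+subgroups $\frac{1}{n}\mathbf{Z}/\mathbf{Z}$, ordered by divisibility
+of $n$.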
+
+\medskip\noindent
+If $n = \ell n'$ for some prime number $\ell$, then we obtain a short
+exact sequence
+$$
+0 \to \mathcal{F}[\ell] \to \mathcal{F} \to
+\mathcal{F}/\mathcal{F}[\ell] \to 0
+$$
+Observe that $\mathcal{F}/\mathcal{F}[\ell]$ is annihilated by $n'$.
+Moreover, if the result holds for both $\mathcal{F}[\ell]$ and
+$\mathcal{F}/\mathcal{F}[\ell]$, then the result holds by
+the long exact sequence of higher direct images (and the $5$ lemma).
+In this way we reduce to the case that $\mathcal{F}$ is annihilated
+by a prime number $\ell$.
+
+\medskip\noindent
+Assume $\mathcal{F}$ is annihilated by a prime number $\ell$.
+Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$
+in $D(X_\etale, \mathbf{Z}/\ell\mathbf{Z})$. Applying assumption
+(2) and Leray's acyclicity lemma
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity})
+we see that
+$$
+f'_*(g')^{-1}\mathcal{I}^\bullet
+$$
+computes $Rf'_*(g')^{-1}\mathcal{F}$. We conclude by applying
+Lemma \ref{lemma-proper-base-change-f-star}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sandwich}
+Let $f : X \to Y$ and $g : Y \to Z$ be proper morphisms of schemes. Assume
+\begin{enumerate}
+\item cohomology commutes with base change for $f$,
+\item cohomology commutes with base change for $g \circ f$, and
+\item $f$ is surjective.
+\end{enumerate}
+Then cohomology commutes with base change for $g$.
+\end{lemma}
+
+\begin{proof}
+We will use the equivalence of
+Lemma \ref{lemma-proper-base-change-in-terms-of-injectives}
+without further mention. Let $\ell$ be a prime number.
+Let $\mathcal{I}$ be an injective sheaf of
+$\mathbf{Z}/\ell\mathbf{Z}$-modules on $Y_\etale$.
+Choose an injective map of sheaves $f^{-1}\mathcal{I} \to \mathcal{J}$
+where $\mathcal{J}$ is an injective sheaf of
+$\mathbf{Z}/\ell\mathbf{Z}$-modules on $X_\etale$.
+Since $f$ is surjective the map $\mathcal{I} \to f_*\mathcal{J}$
+is injective (look at stalks in geometric points).
+Since $\mathcal{I}$ is injective we see that $\mathcal{I}$
+is a direct summand of $f_*\mathcal{J}$. Thus it suffices
+to prove the desired vanishing for $f_*\mathcal{J}$.
+
+\medskip\noindent
+Let $Z' \to Z$ be a morphism of schemes and set
+$Y' = Z' \times_Z Y$ and $X' = Z' \times_Z X = Y' \times_Y X$.
+Denote $a : X' \to X$, $b : Y' \to Y$, and $c : Z' \to Z$ the
+projections. Similarly for $f' : X' \to Y'$ and $g' : Y' \to Z'$.
+By Lemma \ref{lemma-proper-base-change-f-star} we have
+$b^{-1}f_*\mathcal{J} = f'_*a^{-1}\mathcal{J}$.
+On the other hand, we know that $R^qf'_*a^{-1}\mathcal{J}$ and
+$R^q(g' \circ f')_*a^{-1}\mathcal{J}$ are zero for $q > 0$.
+Using the spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-relative-Leray})
+$$
+R^pg'_* R^qf'_* a^{-1}\mathcal{J} \Rightarrow
+R^{p + q}(g' \circ f')_* a^{-1}\mathcal{J}
+$$
+we conclude that
+$ R^pg'_*(b^{-1}f_*\mathcal{J}) = R^pg'_*(f'_*a^{-1}\mathcal{J}) = 0$
+for $p > 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-composition}
+Let $f : X \to Y$ and $g : Y \to Z$ be proper morphisms of schemes. Assume
+\begin{enumerate}
+\item cohomology commutes with base change for $f$, and
+\item cohomology commutes with base change for $g$.
+\end{enumerate}
+Then cohomology commutes with base change for $g \circ f$.
+\end{lemma}
+
+\begin{proof}
+We will use the equivalence of
+Lemma \ref{lemma-proper-base-change-in-terms-of-injectives}
+without further mention. Let $\ell$ be a prime number.
+Let $\mathcal{I}$ be an injective sheaf of
+$\mathbf{Z}/\ell\mathbf{Z}$-modules on $X_\etale$.
+Then $f_*\mathcal{I}$ is an injective sheaf of
+$\mathbf{Z}/\ell\mathbf{Z}$-modules on $Y_\etale$
+(Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-pushforward-injective-flat}).
+The result follows formally from this, but we will also
+spell it out.
+
+\medskip\noindent
+Let $Z' \to Z$ be a morphism of schemes and set
+$Y' = Z' \times_Z Y$ and $X' = Z' \times_Z X = Y' \times_Y X$.
+Denote $a : X' \to X$, $b : Y' \to Y$, and $c : Z' \to Z$ the
+projections. Similarly for $f' : X' \to Y'$ and $g' : Y' \to Z'$.
+By Lemma \ref{lemma-proper-base-change-f-star} we have
+$b^{-1}f_*\mathcal{I} = f'_*a^{-1}\mathcal{I}$.
+On the other hand, we know that $R^qf'_*a^{-1}\mathcal{I}$ and
+$R^q(g')_*b^{-1}f_*\mathcal{I}$ are zero for $q > 0$.
+Using the spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-relative-Leray})
+$$
+R^pg'_* R^qf'_* a^{-1}\mathcal{I} \Rightarrow
+R^{p + q}(g' \circ f')_* a^{-1}\mathcal{I}
+$$
+we conclude that $R^p(g' \circ f')_*a^{-1}\mathcal{I} = 0$ for
+$p > 0$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite}
+\begin{slogan}
+Proper base change for \'etale cohomology holds for finite morphisms.
+\end{slogan}
+Let $f : X \to Y$ be a finite morphism of schemes.
+Then cohomology commutes with base change for $f$.
+\end{lemma}
+
+\begin{proof}
+Observe that a finite morphism is proper, see
+Morphisms, Lemma \ref{morphisms-lemma-finite-proper}.
+Moreover, the base change of a finite morphism is finite, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-finite}.
+Thus the result follows from
+Lemma \ref{lemma-proper-base-change-in-terms-of-injectives}
+combined with
+Proposition \ref{proposition-finite-higher-direct-image-zero}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduce-to-P1}
+To prove that cohomology commutes with base change for
+every proper morphism of schemes it suffices to prove it
+holds for the morphism $\mathbf{P}^1_S \to S$ for every scheme $S$.
+\end{lemma}
+
+\begin{proof}
+Let $f : X \to Y$ be a proper morphism of schemes.
+Let $Y = \bigcup Y_i$ be an affine open covering
+and set $X_i = f^{-1}(Y_i)$. If we can prove
+cohomology commutes with base change for $X_i \to Y_i$,
+then cohomology commutes with base change for $f$.
+Namely, the formation of the higher direct images
+commutes with Zariski (and even \'etale) localization
+on the base, see
+Lemma \ref{lemma-higher-direct-images}.
+Thus we may assume $Y$ is affine.
+
+\medskip\noindent
+Let $Y$ be an affine scheme and let $X \to Y$ be a proper morphism.
+By Chow's lemma there exists a commutative diagram
+$$
+\xymatrix{
+X \ar[rd] & X' \ar[d] \ar[l]^\pi \ar[r] & \mathbf{P}^n_Y \ar[dl] \\
+& Y &
+}
+$$
+where $X' \to \mathbf{P}^n_Y$ is an immersion, and
+$\pi : X' \to X$ is proper and surjective, see
+Limits, Lemma \ref{limits-lemma-chow-finite-type}.
+Since $X \to Y$ is proper, we find that $X' \to Y$ is proper
+(Morphisms, Lemma \ref{morphisms-lemma-composition-proper}).
+Hence $X' \to \mathbf{P}^n_Y$ is a closed immersion
+(Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}).
+It follows that $X' \to X \times_Y \mathbf{P}^n_Y = \mathbf{P}^n_X$
+is a closed immersion (as an immersion with closed image).
+
+\medskip\noindent
+By Lemma \ref{lemma-sandwich}
+it suffices to prove cohomology commutes with base change for
+$\pi$ and $X' \to Y$. These morphisms both factor as a closed
+immersion followed by a projection $\mathbf{P}^n_S \to S$ (for some $S$).
+By Lemma \ref{lemma-finite} the result holds for closed
+immersions (as closed immersions are finite).
+By Lemma \ref{lemma-composition} it suffices to prove the
+result for projections $\mathbf{P}^n_S \to S$.
+
+\medskip\noindent
+For every $n \geq 1$ there is a finite surjective morphism
+$$
+\mathbf{P}^1_S \times_S \ldots \times_S \mathbf{P}^1_S
+\longrightarrow
+\mathbf{P}^n_S
+$$
+given on coordinates by
+$$
+((x_1 : y_1), (x_2 : y_2), \ldots, (x_n : y_n))
+\longmapsto
+(F_0 : \ldots : F_n)
+$$
+where $F_0, \ldots, F_n$ are the polynomials in
+$x_1, y_1, \ldots, x_n, y_n$ with integer coefficients such that
+$$
+\prod (x_i t + y_i) = F_0 t^n + F_1 t^{n - 1} + \ldots + F_n
+$$
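+For example (the explicit case $n = 2$, recorded here for concreteness):
+$$
+(x_1 t + y_1)(x_2 t + y_2) =
+x_1 x_2 t^2 + (x_1 y_2 + x_2 y_1) t + y_1 y_2
+$$
+so the morphism $\mathbf{P}^1_S \times_S \mathbf{P}^1_S \to \mathbf{P}^2_S$
+is given by
+$((x_1 : y_1), (x_2 : y_2)) \longmapsto
+(x_1 x_2 : x_1 y_2 + x_2 y_1 : y_1 y_2)$.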
+Applying
+Lemmas \ref{lemma-sandwich}, \ref{lemma-finite}, and \ref{lemma-composition}
+one more time we conclude that the lemma is true.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-proper-base-change}
+Let $f : X \to Y$ be a proper morphism of schemes. Let $g : Y' \to Y$ be
+a morphism of schemes. Set $X' = Y' \times_Y X$
+and consider the cartesian diagram
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+Y' \ar[r]^g & Y
+}
+$$
+Let $\mathcal{F}$ be an abelian torsion sheaf on $X_\etale$.
+Then the base change map
+$$
+g^{-1}Rf_*\mathcal{F} \longrightarrow Rf'_*(g')^{-1}\mathcal{F}
+$$
+is an isomorphism.
+\end{theorem}
+
+\begin{proof}
+In the terminology introduced above, this means that cohomology commutes
+with base change for every proper morphism of schemes. By
+Lemma \ref{lemma-reduce-to-P1}
+it suffices to prove that cohomology commutes with base change
+for the morphism $\mathbf{P}^1_S \to S$ for every scheme $S$.
+
+\medskip\noindent
+Let $S$ be the spectrum of a strictly henselian local ring with closed
+point $s$. Set $X = \mathbf{P}^1_S$ and $X_0 = X_s = \mathbf{P}^1_s$.
+Let $\mathcal{F}$ be a sheaf of $\mathbf{Z}/\ell\mathbf{Z}$-modules
+on $X_\etale$. The key to our proof is that
+$$
+H^q_\etale(X, \mathcal{F}) = H^q_\etale(X_0, \mathcal{F}|_{X_0}).
+$$
+Namely, choose a resolution $\mathcal{F} \to \mathcal{I}^\bullet$
+by injective sheaves of $\mathbf{Z}/\ell\mathbf{Z}$-modules.
+Then $\mathcal{I}^\bullet|_{X_0}$ is a resolution of $\mathcal{F}|_{X_0}$
+by right $H^0_\etale(X_0, -)$-acyclic objects, see
+Lemma \ref{lemma-efface-cohomology-on-fibre-by-finite-cover}.
+Leray's acyclicity lemma tells us the right hand side is computed by
+the complex $H^0_\etale(X_0, \mathcal{I}^\bullet|_{X_0})$
+which is equal to $H^0_\etale(X, \mathcal{I}^\bullet)$ by
+Lemma \ref{lemma-h0-proper-over-henselian-local}. This complex
+computes the left hand side.
+
+\medskip\noindent
+Assume $S$ is general and $\mathcal{F}$ is a sheaf of
+$\mathbf{Z}/\ell\mathbf{Z}$-modules on $X_\etale$.
+Let $\overline{s} : \Spec(k) \to S$ be a geometric point
+of $S$ lying over $s \in S$. We have
+$$
+(R^qf_*\mathcal{F})_{\overline{s}} =
+H^q_\etale(\mathbf{P}^1_{\mathcal{O}_{S, \overline{s}}^{sh}},
+\mathcal{F}|_{\mathbf{P}^1_{\mathcal{O}_{S, \overline{s}}^{sh}}}) =
+H^q_\etale(\mathbf{P}^1_{\kappa(s)^{sep}},
+\mathcal{F}|_{\mathbf{P}^1_{\kappa(s)^{sep}}})
+$$
+where $\kappa(s)^{sep}$ is the residue field of
+$\mathcal{O}_{S, \overline{s}}^{sh}$, i.e., the separable algebraic
+closure of $\kappa(s)$ in $k$.
+The first equality holds by Theorem \ref{theorem-higher-direct-images}
+and the second by the displayed formula in the
+previous paragraph.
+
+\medskip\noindent
+Finally, consider any morphism of schemes $g : T \to S$ where
+$S$ and $\mathcal{F}$ are as above.
+Let $f' : \mathbf{P}^1_T \to T$ be the projection and let
+$g' : \mathbf{P}^1_T \to \mathbf{P}^1_S$ be the morphism induced
+by $g$. Consider the base change map
+$$
+g^{-1}R^qf_*\mathcal{F}
+\longrightarrow
+R^qf'_*(g')^{-1}\mathcal{F}
+$$
+Let $\overline{t}$ be a geometric point of $T$ with image
+$\overline{s} = g(\overline{t})$. By our discussion
+above the map on stalks at $\overline{t}$ is the map
+$$
+H^q_\etale(\mathbf{P}^1_{\kappa(s)^{sep}},
+\mathcal{F}|_{\mathbf{P}^1_{\kappa(s)^{sep}}})
+\longrightarrow
+H^q_\etale(\mathbf{P}^1_{\kappa(t)^{sep}},
+\mathcal{F}|_{\mathbf{P}^1_{\kappa(t)^{sep}}})
+$$
+Since $\kappa(s)^{sep} \subset \kappa(t)^{sep}$ this map is an
+isomorphism by Lemma \ref{lemma-base-change-dim-1-separably-closed}.
+
+\medskip\noindent
+This proves cohomology commutes with base change for
+$\mathbf{P}^1_S \to S$ and sheaves of $\mathbf{Z}/\ell\mathbf{Z}$-modules.
+In particular, for an injective sheaf of $\mathbf{Z}/\ell\mathbf{Z}$-modules
+the higher direct images of any base change are zero.
+In other words, condition (2) of
+Lemma \ref{lemma-proper-base-change-in-terms-of-injectives}
+holds and the proof is complete.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-base-change}
+Let $f : X \to Y$ be a proper morphism of schemes. Let $g : Y' \to Y$ be
+a morphism of schemes. Set $X' = Y' \times_Y X$ and denote
+$f' : X' \to Y'$ and $g' : X' \to X$ the projections.
+Let $E \in D^+(X_\etale)$ have torsion cohomology sheaves.
+Then the base change map (\ref{equation-base-change})
+$g^{-1}Rf_*E \to Rf'_*(g')^{-1}E$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This is a simple consequence of the proper base change theorem
+(Theorem \ref{theorem-proper-base-change}) using the spectral
+sequences
+$$
+E_2^{p, q} = R^pf_*H^q(E)
+\quad\text{and}\quad
+{E'}_2^{p, q} = R^pf'_*(g')^{-1}H^q(E)
+$$
+converging to $R^nf_*E$ and $R^nf'_*(g')^{-1}E$.
+The spectral sequences are constructed in
+Derived Categories, Lemma \ref{derived-lemma-two-ss-complex-functor}.
+Some details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-base-change-stalk}
+Let $f : X \to Y$ be a proper morphism of schemes. Let $\overline{y} \to Y$
+be a geometric point.
+\begin{enumerate}
+\item For a torsion abelian sheaf $\mathcal{F}$ on $X_\etale$ we have
+$(R^nf_*\mathcal{F})_{\overline{y}} =
+H^n_\etale(X_{\overline{y}}, \mathcal{F}_{\overline{y}})$.
+\item For $E \in D^+(X_\etale)$ with torsion cohomology sheaves we have
+$(R^nf_*E)_{\overline{y}} =
+H^n_\etale(X_{\overline{y}}, E|_{X_{\overline{y}}})$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+In the statement, $\mathcal{F}_{\overline{y}}$ denotes the pullback
+of $\mathcal{F}$ to the scheme theoretic fibre
+$X_{\overline{y}} = \overline{y} \times_Y X$.
+Since pulling back by $\overline{y} \to Y$ produces the
+stalk of $\mathcal{F}$, the first statement of the lemma
+is a special case of Theorem \ref{theorem-proper-base-change}.
+The second one is a special case of Lemma \ref{lemma-proper-base-change}.
+\end{proof}
+
+
+
+
+
+
+\section{Applications of proper base change}
+\label{section-applications-proper-base-change}
+
+\noindent
+In this section we discuss some more or less immediate
+consequences of the proper base change theorem.
+
+\begin{lemma}
+\label{lemma-base-change-separably-closed}
+Let $K/k$ be an extension of separably closed fields.
+Let $X$ be a proper scheme over $k$.
+Let $\mathcal{F}$ be a torsion abelian sheaf on $X_\etale$.
+Then the map $H^q_\etale(X, \mathcal{F}) \to
+H^q_\etale(X_K, \mathcal{F}|_{X_K})$ is an isomorphism
+for $q \geq 0$.
+\end{lemma}
+
+\begin{proof}
+Looking at stalks we see that
+this is a special case of
+Theorem \ref{theorem-proper-base-change}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomological-dimension-proper}
+Let $f : X \to Y$ be a proper morphism of schemes
+all of whose fibres have dimension $\leq n$.
+Then for any abelian torsion sheaf $\mathcal{F}$ on $X_\etale$
+we have $R^qf_*\mathcal{F} = 0$ for $q > 2n$.
+\end{lemma}
+
+\begin{proof}
+We will prove this by induction on $n$ for all proper morphisms.
+
+\medskip\noindent
+If $n = 0$, then $f$ is a finite morphism
+(More on Morphisms, Lemma \ref{more-morphisms-lemma-characterize-finite})
+and the
+result is true by Proposition \ref{proposition-finite-higher-direct-image-zero}.
+
+\medskip\noindent
+If $n > 0$, then using Lemma \ref{lemma-proper-base-change-stalk}
+we see that it suffices to prove $H^i_\etale(X, \mathcal{F}) = 0$
+for $i > 2n$ whenever $X$ is a proper scheme of dimension $\leq n$
+over an algebraically closed field $k$
+and $\mathcal{F}$ is a torsion abelian sheaf on $X_\etale$.
+
+\medskip\noindent
+If $n = 1$ this follows from Theorem \ref{theorem-vanishing-curves}.
+Assume $n > 1$. By Proposition \ref{proposition-topological-invariance}
+we may replace $X$ by its reduction.
+Let $\nu : X^\nu \to X$ be the normalization.
+This is a surjective birational finite morphism
+(see Varieties, Lemma \ref{varieties-lemma-normalization-locally-algebraic})
+and hence an isomorphism over a dense open $U \subset X$
+(Morphisms, Lemma \ref{morphisms-lemma-birational-birational}).
+Then we see that
+$c : \mathcal{F} \to \nu_*\nu^{-1}\mathcal{F}$ is injective
+(as $\nu$ is surjective) and an isomorphism over $U$.
+Denote $i : Z \to X$ the inclusion of the complement of $U$.
+Since $U$ is dense in $X$ we have $\dim(Z) < \dim(X) = n$.
+By Proposition \ref{proposition-closed-immersion-pushforward} we have
+$\Coker(c) = i_*\mathcal{G}$ for some
+abelian torsion sheaf $\mathcal{G}$ on $Z_\etale$.
+Then $H^q_\etale(X, \Coker(c)) = H^q_\etale(Z, \mathcal{G})$
+(by Proposition \ref{proposition-finite-higher-direct-image-zero}
+and the Leray spectral sequence) and by induction hypothesis we conclude that
+the cokernel of $c$ has cohomology in degrees $\leq 2(n - 1)$.
+Thus it suffices to prove the result for $\nu_*\nu^{-1}\mathcal{F}$.
+As $\nu$ is finite this reduces us to showing
+that $H^i_\etale(X^\nu, \nu^{-1}\mathcal{F})$ is
+zero for $i > 2n$. This case is treated in the next paragraph.
+
+\medskip\noindent
+Assume $X$ is an integral normal proper scheme over $k$ of dimension $n$.
+Choose a nonconstant rational function $f$ on $X$. The graph
+$X' \subset X \times \mathbf{P}^1_k$ of $f$ sits into a diagram
+$$
+X \xleftarrow{b} X' \xrightarrow{f} \mathbf{P}^1_k
+$$
+Observe that $b$ is an isomorphism over an open subscheme
+$U \subset X$ whose complement is a closed subscheme
+$Z \subset X$ of codimension $\geq 2$. Namely, $U$ is the
+domain of definition of $f$ which contains all codimension $1$
+points of $X$, see
+Morphisms, Lemmas \ref{morphisms-lemma-rational-map-from-reduced-to-separated}
+and \ref{morphisms-lemma-extend-across}
+(combined with Serre's criterion for normality, see
+Properties, Lemma \ref{properties-lemma-criterion-normal}).
+Moreover the fibres of $b$ have dimension $\leq 1$ (as closed subschemes
+of $\mathbf{P}^1$). Hence $R^ib_*b^{-1}\mathcal{F}$ is nonzero only
+if $i \in \{0, 1, 2\}$ by induction. Choose a distinguished triangle
+$$
+\mathcal{F} \to Rb_*b^{-1}\mathcal{F} \to Q \to \mathcal{F}[1]
+$$
+Using that $\mathcal{F} \to b_*b^{-1}\mathcal{F}$ is injective
+as before and using what we just said, we see that $Q$ has nonzero
+cohomology sheaves only in degrees $0, 1, 2$ sitting on $Z$.
+Moreover, these cohomology sheaves are torsion by
+Lemma \ref{lemma-torsion-direct-image}.
+By induction we see that $H^i(X, Q)$ is zero for
+$i > 2 + 2\dim(Z) \leq 2 + 2(n - 2) = 2n - 2$. Thus it suffices
+to prove that $H^i(X', b^{-1}\mathcal{F}) = 0$ for
+$i > 2n$. At this point we use the morphism
+$$
+f : X' \to \mathbf{P}^1_k
+$$
+whose fibres have dimension $< n$. Hence by induction we see that
+$R^if_*b^{-1}\mathcal{F} = 0$ for $i > 2(n - 1)$.
+We conclude by the Leray spectral sequence
+$$
+H^i(\mathbf{P}^1_k, R^jf_*b^{-1}\mathcal{F})
+\Rightarrow
+H^{i + j}(X', b^{-1}\mathcal{F})
+$$
+and the fact that $\dim(\mathbf{P}^1_k) = 1$: by the case $n = 1$
+the groups $H^i(\mathbf{P}^1_k, R^jf_*b^{-1}\mathcal{F})$ vanish
+for $i > 2$, so $H^{i + j}(X', b^{-1}\mathcal{F}) = 0$ for
+$i + j > 2 + 2(n - 1) = 2n$.
+\end{proof}
+
+\noindent
+When working with mod $n$ coefficients we can do proper
+base change for unbounded complexes.
+
+\begin{lemma}
+\label{lemma-proper-base-change-mod-n}
+Let $f : X \to Y$ be a proper morphism of schemes. Let $g : Y' \to Y$ be
+a morphism of schemes. Set $X' = Y' \times_Y X$ and denote
+$f' : X' \to Y'$ and $g' : X' \to X$ the projections.
+Let $n \geq 1$ be an integer.
+Let $E \in D(X_\etale, \mathbf{Z}/n\mathbf{Z})$.
+Then the base change map (\ref{equation-base-change})
+$g^{-1}Rf_*E \to Rf'_*(g')^{-1}E$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+It is enough to prove this when $Y$ and $Y'$ are quasi-compact.
+By Morphisms, Lemma
+\ref{morphisms-lemma-morphism-finite-type-bounded-dimension}
+we see that the dimension of the fibres of
+$f : X \to Y$ and $f' : X' \to Y'$ are bounded. Thus
+Lemma \ref{lemma-cohomological-dimension-proper} implies that
+$$
+f_* :
+\textit{Mod}(X_\etale, \mathbf{Z}/n\mathbf{Z})
+\longrightarrow
+\textit{Mod}(Y_\etale, \mathbf{Z}/n\mathbf{Z})
+$$
+and
+$$
+f'_* :
+\textit{Mod}(X'_\etale, \mathbf{Z}/n\mathbf{Z})
+\longrightarrow
+\textit{Mod}(Y'_\etale, \mathbf{Z}/n\mathbf{Z})
+$$
+have finite cohomological dimension in the sense of
+Derived Categories, Lemma \ref{derived-lemma-unbounded-right-derived}.
+Choose a K-injective complex $\mathcal{I}^\bullet$
+of $\mathbf{Z}/n\mathbf{Z}$-modules each of whose
+terms $\mathcal{I}^m$ is an injective sheaf of
+$\mathbf{Z}/n\mathbf{Z}$-modules representing $E$.
+See Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}.
+By the usual proper base change theorem we find
+that $R^qf'_*(g')^{-1}\mathcal{I}^m = 0$ for $q > 0$, see
+Theorem \ref{theorem-proper-base-change}.
+Hence we conclude by
+Derived Categories, Lemma \ref{derived-lemma-unbounded-right-derived}
+that we may compute $Rf'_*(g')^{-1}E$ by the complex
+$f'_*(g')^{-1}\mathcal{I}^\bullet$. Another application
+of the usual proper base change theorem shows that
+this is equal to $g^{-1}f_*\mathcal{I}^\bullet$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pull-out-constant}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+Let $E \in D^+(X_\etale)$ and $K \in D^+(\mathbf{Z})$.
+Then
+$$
+R\Gamma(X, E \otimes_\mathbf{Z}^\mathbf{L} \underline{K}) =
+R\Gamma(X, E) \otimes_\mathbf{Z}^\mathbf{L} K
+$$
+\end{lemma}
+
+\begin{proof}
+Say $H^i(E) = 0$ for $i < a$ and $H^j(K) = 0$ for $j < b$.
+We may represent $K$ by a bounded below complex
+$K^\bullet$ of torsion free $\mathbf{Z}$-modules.
+(Choose a K-flat complex $L^\bullet$ representing $K$ and then
+take $K^\bullet = \tau_{\geq b - 1}L^\bullet$. This works because
+$\mathbf{Z}$ has global dimension $1$. See
+More on Algebra, Lemma \ref{more-algebra-lemma-last-one-flat}.)
+We may represent $E$ by a bounded below complex
+$\mathcal{E}^\bullet$.
+Then $E \otimes_\mathbf{Z}^\mathbf{L} \underline{K}$ is
+represented by
+$$
+\text{Tot}(\mathcal{E}^\bullet \otimes_\mathbf{Z} \underline{K}^\bullet)
+$$
+Using distinguished triangles
+$$
+\sigma_{\geq -a + n + 1}K^\bullet
+\to K^\bullet \to
+\sigma_{\leq -a + n}K^\bullet
+$$
+and the trivial vanishing
+$$
+H^n(X,
+\text{Tot}(\mathcal{E}^\bullet \otimes_\mathbf{Z}
+\sigma_{\geq -a + n + 1}\underline{K}^\bullet)) = 0
+$$
+and
+$$
+H^n(R\Gamma(X, E)
+\otimes_\mathbf{Z}^\mathbf{L} \sigma_{\geq -a + n + 1}K^\bullet) = 0
+$$
+we reduce to the case where $K^\bullet$ is a bounded complex of
+flat $\mathbf{Z}$-modules. Repeating the argument we reduce to the
+case where $K^\bullet$ is equal to a single flat $\mathbf{Z}$-module
+sitting in some degree. Next, using the stupid
+truncations for $\mathcal{E}^\bullet$
+we reduce in exactly the same manner to the case where
+$\mathcal{E}^\bullet$ is a single abelian sheaf sitting
+in some degree. Thus it suffices to show that
+$$
+H^n(X, \mathcal{E} \otimes_\mathbf{Z} \underline{M}) =
+H^n(X, \mathcal{E}) \otimes_\mathbf{Z} M
+$$
+when $M$ is a flat $\mathbf{Z}$-module and $\mathcal{E}$
+is an abelian sheaf on $X$. In this case we
+write $M$ as a filtered colimit of finite free $\mathbf{Z}$-modules
+(Lazard's theorem, see Algebra, Theorem \ref{algebra-theorem-lazard}).
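+For instance (a standard example, not needed in what follows), the
+flat $\mathbf{Z}$-module $\mathbf{Q}$ is the filtered colimit
+$$
+\mathbf{Q} = \colim \tfrac{1}{n!}\mathbf{Z}
+$$
+of free $\mathbf{Z}$-modules of rank $1$ along the inclusions
+$\frac{1}{n!}\mathbf{Z} \subset \frac{1}{(n + 1)!}\mathbf{Z}$.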
+By Theorem \ref{theorem-colimit} this reduces us to the case of a finite
+free $\mathbf{Z}$-module $M$ in which case the result is trivially true.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projection-formula-proper}
+Let $f : X \to Y$ be a proper morphism of schemes.
+Let $E \in D^+(X_\etale)$ have torsion cohomology sheaves.
+Let $K \in D^+(Y_\etale)$. Then
+$$
+Rf_*E \otimes_\mathbf{Z}^\mathbf{L} K =
+Rf_*(E \otimes_\mathbf{Z}^\mathbf{L} f^{-1}K)
+$$
+in $D^+(Y_\etale)$.
+\end{lemma}
+
+\begin{proof}
+There is a canonical map from left to right by
+Cohomology on Sites, Section \ref{sites-cohomology-section-projection-formula}.
+We will check the equality on stalks.
+Recall that computing derived tensor products commutes with pullbacks.
+See Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-pullback-tensor-product}.
+Thus we have
+$$
+(E \otimes_\mathbf{Z}^\mathbf{L} f^{-1}K)_{\overline{x}}
+=
+E_{\overline{x}} \otimes_\mathbf{Z}^\mathbf{L} K_{\overline{y}}
+$$
+where $\overline{y}$ is the image of $\overline{x}$ in $Y$.
+Since $\mathbf{Z}$ has global dimension $1$ we see that
+this complex has vanishing cohomology in degree $< - 1 + a + b$
+if $H^i(E) = 0$ for $i < a$ and $H^j(K) = 0$ for $j < b$.
+Moreover, since $H^i(E)$ is a torsion abelian
+sheaf for each $i$, the same is true for the cohomology
+sheaves of the complex $E \otimes_\mathbf{Z}^\mathbf{L} f^{-1}K$.
+Namely, we have
+$$
+(E \otimes_\mathbf{Z}^\mathbf{L} f^{-1}K)
+\otimes_{\mathbf{Z}}^\mathbf{L} \mathbf{Q} =
+(E \otimes_\mathbf{Z}^\mathbf{L} \mathbf{Q})
+\otimes_{\mathbf{Q}}^\mathbf{L}
+(f^{-1}K \otimes_{\mathbf{Z}}^\mathbf{L} \mathbf{Q})
+$$
+which is zero in the derived category.
+In this way we see that Lemma \ref{lemma-proper-base-change-stalk}
+applies to both sides to see that it suffices to show
+$$
+R\Gamma(X_{\overline{y}},
+E|_{X_{\overline{y}}} \otimes_\mathbf{Z}^\mathbf{L}
+(X_{\overline{y}} \to \overline{y})^{-1}K_{\overline{y}}) =
+R\Gamma(X_{\overline{y}},
+E|_{X_{\overline{y}}}) \otimes_\mathbf{Z}^\mathbf{L} K_{\overline{y}}
+$$
+This is shown in Lemma \ref{lemma-pull-out-constant}.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Local acyclicity}
+\label{section-local-acyclicity}
+
+\noindent
+In this section we deduce local acyclicity of smooth morphisms
+from the smooth base change theorem. In SGA 4 or SGA 4.5 the authors
+first prove a version of local acyclicity for smooth morphisms
+and then deduce the smooth base change theorem.
+
+\medskip\noindent
+We will use the formulation of local acyclicity given by
+Deligne \cite[Definition 2.12, page 242]{SGA4.5}.
+Let $f : X \to S$ be a morphism of schemes.
+Let $\overline{x}$ be a geometric point of $X$ with image
+$\overline{s} = f(\overline{x})$ in $S$.
+Let $\overline{t}$ be a geometric point of
+$\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+We obtain a commutative diagram
+$$
+\xymatrix{
+F_{\overline{x}, \overline{t}} =
+\overline{t} \times_{\Spec(\mathcal{O}^{sh}_{S, \overline{s}})}
+\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \ar[r] \ar[d] &
+\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \ar[r] \ar[d] &
+X \ar[d] \\
+\overline{t} \ar[r] &
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \ar[r] &
+S
+}
+$$
+The scheme $F_{\overline{x}, \overline{t}}$ is called
+a {\it variety of vanishing cycles of $f$ at $\overline{x}$}.
+Let $K$ be an object of $D(X_\etale)$. For any morphism of schemes
+$g : Y\to X$ we write $R\Gamma(Y, K)$ instead of
+$R\Gamma(Y_\etale, g_{small}^{-1}K)$. Since
+$\mathcal{O}^{sh}_{X, \overline{x}}$ is strictly henselian
+we have
+$K_{\overline{x}} = R\Gamma(\Spec(\mathcal{O}^{sh}_{X, \overline{x}}), K)$.
+Thus we obtain a canonical map
+\begin{equation}
+\label{equation-alpha-K}
+\alpha_{K, \overline{x}, \overline{t}} :
+K_{\overline{x}}
+\longrightarrow
+R\Gamma(F_{\overline{x}, \overline{t}}, K)
+\end{equation}
+by pulling back cohomology along
+$F_{\overline{x}, \overline{t}} \to \Spec(\mathcal{O}^{sh}_{X, \overline{x}})$.
+
+\begin{definition}
+\label{definition-locally-acyclic}
+\begin{reference}
+\cite[Definition 2.12, page 242]{SGA4.5} and
+\cite[Definition (1.3), page 54]{SGA4.5}
+\end{reference}
+Let $f : X \to S$ be a morphism of schemes.
+Let $K$ be an object of $D(X_\etale)$.
+\begin{enumerate}
+\item
+Let $\overline{x}$ be a geometric point of $X$ with image
+$\overline{s} = f(\overline{x})$.
+We say $f$ is {\it locally acyclic at $\overline{x}$ relative to $K$}
+if for every geometric point $\overline{t}$ of
+$\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$ the
+map (\ref{equation-alpha-K}) is an isomorphism\footnote{We do not
+assume $\overline{t}$ is an algebraic geometric point of
+$\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$. Often using
+Lemma \ref{lemma-smooth-base-change-separably-closed}
+one may reduce to this case.}.
+\item We say $f$ is {\it locally acyclic relative to $K$}
+if $f$ is locally acyclic at $\overline{x}$ relative to $K$
+for every geometric point $\overline{x}$ of $X$.
+\item We say $f$ is {\it universally locally acyclic relative to $K$}
+if for any morphism $S' \to S$ of schemes the base change $f' : X' \to S'$
+is locally acyclic relative to the pullback of $K$ to $X'$.
+\item We say $f$ is {\it locally acyclic} if for all geometric
+points $\overline{x}$ of $X$ and any integer $n$ prime to the characteristic
+of $\kappa(\overline{x})$, the morphism $f$ is locally acyclic
+at $\overline{x}$ relative to the constant sheaf with value
+$\mathbf{Z}/n\mathbf{Z}$.
+\item We say $f$ is {\it universally locally acyclic} if
+for any morphism $S' \to S$ of schemes the base change $f' : X' \to S'$
+is locally acyclic.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Let $M$ be an abelian group. Then local acyclicity of $f : X \to S$
+with respect to the constant sheaf $\underline{M}$ boils down
+to the requirement that
+$$
+H^q(F_{\overline{x}, \overline{t}}, \underline{M}) =
+\left\{
+\begin{matrix}
+M & \text{if} & q = 0 \\
+0 & \text{if} & q \not = 0
+\end{matrix}
+\right.
+$$
+for any geometric point $\overline{x}$ of $X$ and any geometric
+point $\overline{t}$ of $\Spec(\mathcal{O}^{sh}_{S, f(\overline{x})})$.
+In this way we see that being locally acyclic corresponds to the
+vanishing of the higher cohomology groups of the geometric fibres
+$F_{\overline{x}, \overline{t}}$ of the maps between the
+strict henselizations at $\overline{x}$ and $\overline{s}$.
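
\medskip\noindent
As an illustration (this is a special case of
Proposition \ref{proposition-smooth-locally-acyclic} below), suppose
that $f$ is \'etale. Then
$\mathcal{O}^{sh}_{S, \overline{s}} \to \mathcal{O}^{sh}_{X, \overline{x}}$
is an isomorphism and hence
$$
F_{\overline{x}, \overline{t}} =
\overline{t} \times_{\Spec(\mathcal{O}^{sh}_{S, \overline{s}})}
\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) = \overline{t}
$$
Since $\overline{t}$ is the spectrum of a separably closed field,
the displayed condition above holds and we see that \'etale
morphisms are locally acyclic.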
+
+\begin{proposition}
+\label{proposition-smooth-locally-acyclic}
+Let $f : X \to S$ be a smooth morphism of schemes.
+Then $f$ is universally locally acyclic.
+\end{proposition}
+
+\begin{proof}
+Since the base change of a smooth morphism is smooth, it suffices
+to show that smooth morphisms are locally acyclic.
+Let $\overline{x}$ be a geometric point of $X$ with image
+$\overline{s} = f(\overline{x})$. Let
+$\overline{t}$ be a geometric point of
+$\Spec(\mathcal{O}^{sh}_{S, f(\overline{x})})$.
+Since we are trying to prove a property of the ring map
+$\mathcal{O}^{sh}_{S, \overline{s}} \to \mathcal{O}^{sh}_{X, \overline{x}}$
+(see discussion following Definition \ref{definition-locally-acyclic})
+we may and do replace $f : X \to S$ by the base change
+$X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \to
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+Thus we may and do assume that $S$ is the spectrum of a strictly
+henselian local ring and that $\overline{s}$ lies over
+the closed point of $S$.
+
+\medskip\noindent
+We will apply Lemma \ref{lemma-base-change-f-star-general-stalks}
+to the diagram
+$$
+\xymatrix{
+X \ar[d]_f & X_{\overline{t}} \ar[l]^h \ar[d]^e \\
+S & \overline{t} \ar[l]_g
+}
+$$
+and the sheaf $\mathcal{F} = \underline{M}$ where $M = \mathbf{Z}/n\mathbf{Z}$
+for some integer $n$ prime to the characteristic of the residue field of
+$\overline{x}$. We know that the map
+$f^{-1}R^qg_*\mathcal{F} \to R^qh_*e^{-1}\mathcal{F}$
+is an isomorphism by smooth base change, see
+Theorem \ref{theorem-smooth-base-change} (the assumption
+on torsion holds by our choice of $n$).
+Thus Lemma \ref{lemma-base-change-f-star-general-stalks}
+gives us the middle equality in
+$$
+H^q(F_{\overline{x}, \overline{t}}, \underline{M}) =
+H^q(\Spec(\mathcal{O}^{sh}_{X, \overline{x}}) \times_S \overline{t},
+\underline{M})
+=
+H^q(\Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \times_S \overline{t},
+\underline{M}) =
+H^q(\overline{t}, \underline{M})
+$$
+For the outer two equalities we use that
+$S = \Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+Since $\overline{t}$ is the spectrum of a separably
+closed field we conclude that
+$$
+H^q(F_{\overline{x}, \overline{t}}, \underline{M}) =
+\left\{
+\begin{matrix}
+M & \text{if} & q = 0 \\
+0 & \text{if} & q \not = 0
+\end{matrix}
+\right.
+$$
+which is what we had to show (see discussion following
+Definition \ref{definition-locally-acyclic}).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-acyclic-locally-constant}
+Let $f : X \to S$ be a morphism of schemes. Let $\mathcal{F}$ be a
+locally constant abelian sheaf on $X_\etale$ such that for every geometric
+point $\overline{x}$ of $X$ the abelian group
+$\mathcal{F}_{\overline{x}}$ is a torsion group all of whose elements have
+order prime to the characteristic of the
+residue field of $\overline{x}$. If $f$ is locally acyclic, then $f$ is
+locally acyclic relative to $\mathcal{F}$.
+\end{lemma}
+
+\begin{proof}
+Namely, let $\overline{x}$ be a geometric point of $X$.
+Since $\mathcal{F}$ is locally constant we see that
+the restriction of $\mathcal{F}$ to $\Spec(\mathcal{O}^{sh}_{X, \overline{x}})$
+is isomorphic to the constant sheaf $\underline{M}$ with
+$M = \mathcal{F}_{\overline{x}}$. By assumption we can write
+$M = \colim M_i$ as a filtered colimit of finite abelian groups
+$M_i$ of order prime to the characteristic of the residue field of
+$\overline{x}$. Consider a geometric point $\overline{t}$ of
+$\Spec(\mathcal{O}^{sh}_{S, f(\overline{x})})$.
+Since $F_{\overline{x}, \overline{t}}$ is affine, we have
+$$
+H^q(F_{\overline{x}, \overline{t}}, \underline{M}) =
+\colim H^q(F_{\overline{x}, \overline{t}}, \underline{M_i})
+$$
+by Lemma \ref{lemma-colimit}.
+For each $i$ we can write $M_i = \bigoplus \mathbf{Z}/n_{i, j}\mathbf{Z}$
+as a finite direct sum for some integers $n_{i, j}$ prime to the
+characteristic of the residue field of $\overline{x}$.
+Since $f$ is locally acyclic we see that
+$$
+H^q(F_{\overline{x}, \overline{t}},
+\underline{\mathbf{Z}/n_{i, j}\mathbf{Z}}) =
+\left\{
+\begin{matrix}
+\mathbf{Z}/n_{i, j}\mathbf{Z} & \text{if} & q = 0 \\
+0 & \text{if} & q \not = 0
+\end{matrix}
+\right.
+$$
+See discussion following Definition \ref{definition-locally-acyclic}.
+Taking the direct sums and the colimit we conclude that
+$$
+H^q(F_{\overline{x}, \overline{t}}, \underline{M}) =
+\left\{
+\begin{matrix}
+M & \text{if} & q = 0 \\
+0 & \text{if} & q \not = 0
+\end{matrix}
+\right.
+$$
+and we win.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-locally-acyclic-quasi-finite-base-change}
+Let
+$$
+\xymatrix{
+X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^f \\
+S' \ar[r]^g & S
+}
+$$
+be a cartesian diagram of schemes. Let $K$ be an object of $D(X_\etale)$.
+Let $\overline{x}'$ be a geometric point of $X'$ with image $\overline{x}$
+in $X$. If
+\begin{enumerate}
+\item $f$ is locally acyclic at $\overline{x}$ relative to $K$ and
+\item $g$ is locally quasi-finite, or $S' = \lim S_i$
+is a directed inverse limit of schemes locally quasi-finite over $S$
+with affine transition morphisms, or $g : S' \to S$ is integral,
+\end{enumerate}
then $f'$ is locally acyclic at $\overline{x}'$ relative to $(g')^{-1}K$.
+\end{lemma}
+
+\begin{proof}
+Denote $\overline{s}'$ and $\overline{s}$ the images of
+$\overline{x}'$ and $\overline{x}$ in $S'$ and $S$.
Let $\overline{t}'$ be a geometric point of
$\Spec(\mathcal{O}^{sh}_{S', \overline{s}'})$ and denote
+$\overline{t}$ the image in $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+By Algebra, Lemma
+\ref{algebra-lemma-base-change-strict-henselization-quasi-finite}
+and our assumptions on $g$ we have
+$$
+\mathcal{O}^{sh}_{X, \overline{x}}
+\otimes_{\mathcal{O}^{sh}_{S, \overline{s}}}
+\mathcal{O}^{sh}_{S', \overline{s}'}
+\longrightarrow
+\mathcal{O}^{sh}_{X', \overline{x}'}
+$$
+is an isomorphism. Since by our conventions
+$\kappa(\overline{t}) = \kappa(\overline{t}')$
+we conclude that
+$$
+F_{\overline{x}', \overline{t}'} =
+\Spec\left(
+\mathcal{O}^{sh}_{X', \overline{x}'}
+\otimes_{\mathcal{O}^{sh}_{S', \overline{s}'}}
+\kappa(\overline{t}')\right) =
+\Spec\left(
+\mathcal{O}^{sh}_{X, \overline{x}}
+\otimes_{\mathcal{O}^{sh}_{S, \overline{s}}}
+\kappa(\overline{t})\right) =
+F_{\overline{x}, \overline{t}}
+$$
+In other words, the varieties of vanishing cycles
+of $f'$ at $\overline{x}'$ are examples of varieties of vanishing
+cycles of $f$ at $\overline{x}$. The lemma follows
+immediately from this and the definitions.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+\section{The cospecialization map}
+\label{section-cospecialization}
+
+\noindent
+Let $f : X \to S$ be a morphism of schemes. Let $\overline{x}$ be a
+geometric point of $X$ with image $\overline{s} = f(\overline{x})$ in $S$.
+Let $\overline{t}$ be a geometric point of
+$\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+Let $K \in D(X_\etale)$. For any morphism $g : Y \to X$ of schemes
+we write $K|_Y$ instead of $g_{small}^{-1}K$ and $R\Gamma(Y, K)$
+instead of $R\Gamma(Y_\etale, g_{small}^{-1}K)$.
+We claim that if
+\begin{enumerate}
+\item $K$ is bounded below, i.e., $K \in D^+(X_\etale)$,
+\item $f$ is locally acyclic relative to $K$
+\end{enumerate}
+then there is a {\it cospecialization map}
+$$
+cosp :
+R\Gamma(X_{\overline{t}}, K)
+\longrightarrow
+R\Gamma(X_{\overline{s}}, K)
+$$
+which will be closely related to the specialization map
+considered in Section \ref{section-specialization} and especially
+Remark \ref{remark-specialization-map-and-fibres}.
+
+\medskip\noindent
+To construct the map we consider the morphisms
+$$
+X_{\overline{t}}
+\xrightarrow{h}
+X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}})
+\xleftarrow{i}
+X_{\overline{s}}
+$$
+The unit of the adjunction between $h^{-1}$ and $Rh_*$ gives a map
+$$
+\beta_{K, \overline{s}, \overline{t}} :
+K|_{X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}})}
+\longrightarrow
+Rh_*(K|_{X_{\overline{t}}})
+$$
+in $D((X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}}))_\etale)$.
+Lemma \ref{lemma-beta-is-isomorphism} below shows that the pullback
+$i^{-1}\beta_{K, \overline{s}, \overline{t}}$
+is an isomorphism under the assumptions above.
+Thus we can define the cospecialization map as the composition
+\begin{align*}
+R\Gamma(X_{\overline{t}}, K)
+& =
+R\Gamma(X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}}),
+Rh_*(K|_{X_{\overline{t}}})) \\
+&
+\xrightarrow{i^{-1}}
+R\Gamma(X_{\overline{s}}, i^{-1}Rh_*(K|_{X_{\overline{t}}})) \\
+&
+\xrightarrow{(i^{-1}\beta_{K, \overline{s}, \overline{t}})^{-1}}
+R\Gamma(X_{\overline{s}},
+i^{-1}(K|_{X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}})})) \\
+& =
+R\Gamma(X_{\overline{s}}, K)
+\end{align*}
+
+\begin{lemma}
+\label{lemma-beta-is-isomorphism}
+The map $i^{-1}\beta_{K, \overline{s}, \overline{t}}$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+The construction of the maps $h$, $i$, $\beta_{K, \overline{s}, \overline{t}}$
+only depends on the
+base change of $X$ and $K$ to $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+Thus we may and do assume that $S$ is a strictly henselian scheme
+with closed point $\overline{s}$. Observe that the local acyclicity
+of $f$ relative to $K$ is preserved by this base change (for example
+by Lemma \ref{lemma-locally-acyclic-quasi-finite-base-change} or just
+directly by comparing strictly henselian rings in this very special case).
+
+\medskip\noindent
+Let $\overline{x}$ be a geometric point of $X_{\overline{s}}$.
Equivalently, let $\overline{x}$ be a geometric point of $X$ whose image
under $f$ is $\overline{s}$. Let us compute the stalk of
+$i^{-1}\beta_{K, \overline{s}, \overline{t}}$
+at $\overline{x}$. First, we have
+$$
+(i^{-1}\beta_{K, \overline{s}, \overline{t}})_{\overline{x}} =
+(\beta_{K, \overline{s}, \overline{t}})_{\overline{x}}
+$$
+since pullback preserves stalks, see Lemma \ref{lemma-stalk-pullback}.
+Since we are in the situation $S = \Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+we see that $h : X_{\overline{t}} \to X$ has the property that
+$X_{\overline{t}} \times_X \Spec(\mathcal{O}^{sh}_{X, \overline{x}})
+= F_{\overline{x}, \overline{t}}$. Thus we see that
+$$
+(\beta_{K, \overline{s}, \overline{t}})_{\overline{x}} :
+K_{\overline{x}}
+\longrightarrow
+Rh_*(K|_{X_{\overline{t}}})_{\overline{x}} =
+R\Gamma(F_{\overline{x}, \overline{t}}, K)
+$$
+where the equal sign is Theorem \ref{theorem-higher-direct-images}.
+It follows that the map $(\beta_{K, \overline{s}, \overline{t}})_{\overline{x}}$
+is none other than the map $\alpha_{K, \overline{x}, \overline{t}}$ used
+in Definition \ref{definition-locally-acyclic}. The result follows as we may
+check whether a map is an isomorphism in stalks by
+Theorem \ref{theorem-exactness-stalks}.
+\end{proof}
+
+\noindent
The cospecialization map, when it exists, is trying to be the inverse
of the specialization map.
+
+\begin{lemma}
+\label{lemma-specialization-cospecialization}
+In the situation above, if in addition
+$f$ is quasi-compact and quasi-separated, then the diagram
+$$
+\xymatrix{
+(Rf_*K)_{\overline{s}} \ar[r] \ar[d]_{sp} &
+R\Gamma(X_{\overline{s}}, K) \\
+(Rf_*K)_{\overline{t}} \ar[r] &
+R\Gamma(X_{\overline{t}}, K) \ar[u]_{cosp}
+}
+$$
+is commutative.
+\end{lemma}
+
+\begin{proof}
+As in the proof of Lemma \ref{lemma-beta-is-isomorphism}
+we may replace $S$ by $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$.
+Then our maps simplify to $h : X_{\overline{t}} \to X$,
+$i : X_{\overline{s}} \to X$, and
+$\beta_{K, \overline{s}, \overline{t}} : K \to Rh_*(K|_{X_{\overline{t}}})$.
+Using that $(Rf_*K)_{\overline{s}} = R\Gamma(X, K)$ by
+Theorem \ref{theorem-higher-direct-images}
+the composition of $sp$ with the base change map
+$(Rf_*K)_{\overline{t}} \to R\Gamma(X_{\overline{t}}, K)$
+is just pullback of cohomology along $h$.
+This is the same as the map
+$$
+R\Gamma(X, K) \xrightarrow{\beta_{K, \overline{s}, \overline{t}}}
+R\Gamma(X, Rh_*(K|_{X_{\overline{t}}})) =
+R\Gamma(X_{\overline{t}}, K)
+$$
+Now the map $cosp$ first inverts the $=$ sign in this displayed
+formula, then pulls back along $i$, and finally applies
+the inverse of $i^{-1}\beta_{K, \overline{s}, \overline{t}}$.
+Hence we get the desired commutativity.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sp-isom-proper-torsion-loc-ac}
+Let $f : X \to S$ be a morphism of schemes. Let $K \in D(X_\etale)$.
+Assume
+\begin{enumerate}
+\item $K$ is bounded below, i.e., $K \in D^+(X_\etale)$,
+\item $f$ is locally acyclic relative to $K$,
+\item $f$ is proper, and
+\item $K$ has torsion cohomology sheaves.
+\end{enumerate}
+Then for every geometric point $\overline{s}$ of $S$ and every geometric
+point $\overline{t}$ of $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+both the specialization map
+$sp : (Rf_*K)_{\overline{s}} \to (Rf_*K)_{\overline{t}}$
+and the cospecialization map
+$cosp : R\Gamma(X_{\overline{t}}, K) \to R\Gamma(X_{\overline{s}}, K)$
+are isomorphisms.
+\end{lemma}
+
+\begin{proof}
+By the proper base change theorem (in the form of
+Lemma \ref{lemma-proper-base-change-stalk}) we have
+$(Rf_*K)_{\overline{s}} = R\Gamma(X_{\overline{s}}, K)$
+and similarly for $\overline{t}$. The ``correct'' proof would be
to check that the argument in
+Lemma \ref{lemma-specialization-cospecialization}
+shows that $sp$ and $cosp$ are inverse isomorphisms in this case.
+Instead we will show directly that $cosp$ is an isomorphism.
+From the discussion above we see that $cosp$ is an isomorphism
+if and only if pullback by $i$
+$$
+R\Gamma(X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}}),
+Rh_*(K|_{X_{\overline{t}}}))
+\longrightarrow
+R\Gamma(X_{\overline{s}}, i^{-1}Rh_*(K|_{X_{\overline{t}}}))
+$$
+is an isomorphism in $D^+(\textit{Ab})$. This is true by the proper
+base change theorem for the proper morphism
+$f' : X \times_S \Spec(\mathcal{O}^{sh}_{S, \overline{s}}) \to
+\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+by the morphism $\overline{s} \to \Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+and the complex $K' = Rh_*(K|_{X_{\overline{t}}})$. The complex
+$K'$ is bounded below and has torsion cohomology sheaves
+by Lemma \ref{lemma-torsion-direct-image}.
+Since $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$ is strictly henselian
+with $\overline{s}$ lying over the closed point,
+we see that the source of the displayed arrow equals
+$(Rf'_*K')_{\overline{s}}$ and the target equals
+$R\Gamma(X_{\overline{s}}, K')$ and the displayed map is an isomorphism
+by the already used
+Lemma \ref{lemma-proper-base-change-stalk}.
+Thus we see that three out of the four arrows in the diagram
+of Lemma \ref{lemma-specialization-cospecialization} are isomorphisms
+and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-sp-isom-proper-loc-cst-torsion}
+Let $f : X \to S$ be a morphism of schemes. Let $\mathcal{F}$
+be an abelian sheaf on $X_\etale$. Assume
+\begin{enumerate}
+\item $f$ is smooth and proper
+\item $\mathcal{F}$ is locally constant, and
+\item $\mathcal{F}_{\overline{x}}$ is a torsion group all of
+whose elements have order prime to the residue characteristic of
+$\overline{x}$ for every geometric point $\overline{x}$ of $X$.
+\end{enumerate}
+Then for every geometric point $\overline{s}$ of $S$ and every geometric
+point $\overline{t}$ of $\Spec(\mathcal{O}^{sh}_{S, \overline{s}})$
+the specialization map
+$sp : (Rf_*\mathcal{F})_{\overline{s}} \to (Rf_*\mathcal{F})_{\overline{t}}$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemmas \ref{lemma-sp-isom-proper-torsion-loc-ac}
+and \ref{lemma-locally-acyclic-locally-constant} and
+Proposition \ref{proposition-smooth-locally-acyclic}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Cohomological dimension}
+\label{section-cd}
+
+\noindent
+We can deduce some bounds on the cohomological dimension of
+schemes and on the cohomological dimension of fields using
+the results in Section \ref{section-vanishing-torsion}
+and one, seemingly innocuous, application of the proper base
+change theorem (in the proof of Proposition \ref{proposition-cd-affine}).
+
+\begin{definition}
+\label{definition-cd}
+Let $X$ be a quasi-compact and quasi-separated scheme.
+The {\it cohomological dimension of $X$} is the smallest
+element
+$$
+\text{cd}(X) \in \{0, 1, 2, \ldots\} \cup \{\infty\}
+$$
+such that for any abelian torsion sheaf $\mathcal{F}$
+on $X_\etale$ we have $H^i_\etale(X, \mathcal{F}) = 0$
+for $i > \text{cd}(X)$. If $X = \Spec(A)$ we sometimes
+call this the cohomological dimension of $A$.
+\end{definition}
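
\noindent
For instance, if $A$ is a strictly henselian local ring, then
$\text{cd}(A) = 0$: the \'etale cohomology of $\Spec(A)$ with
coefficients in any abelian sheaf vanishes in positive degrees.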
+
+\noindent
+If the scheme is in characteristic $p$, then we often can
+obtain sharper bounds for the vanishing of cohomology of
+$p$-power torsion sheaves. We will address this elsewhere
+(insert future reference here).
+
+\begin{lemma}
+\label{lemma-cd-limit}
+Let $X = \lim X_i$ be a directed limit of a system of
+quasi-compact and quasi-separated schemes with affine
+transition morphisms. Then $\text{cd}(X) \leq \max \text{cd}(X_i)$.
+\end{lemma}
+
+\begin{proof}
+Denote $f_i : X \to X_i$ the projections.
+Let $\mathcal{F}$ be an abelian torsion sheaf on $X_\etale$.
Then we have $\mathcal{F} = \colim f_i^{-1}f_{i, *}\mathcal{F}$
+by Lemma \ref{lemma-linus-hamann}.
+Thus $H^q_\etale(X, \mathcal{F}) =
+\colim H^q_\etale(X_i, f_{i, *}\mathcal{F})$
+by Theorem \ref{theorem-colimit}.
+The lemma follows.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cd-curve-over-field}
+Let $K$ be a field. Let $X$ be a $1$-dimensional
+affine scheme of finite type over $K$. Then
+$\text{cd}(X) \leq 1 + \text{cd}(K)$.
+\end{lemma}
+
+\begin{proof}
+Let $\mathcal{F}$ be an abelian torsion sheaf on $X_\etale$.
+Consider the Leray spectral sequence for the morphism
+$f : X \to \Spec(K)$. We obtain
+$$
+E_2^{p, q} = H^p(\Spec(K), R^qf_*\mathcal{F})
+$$
+converging to $H^{p + q}_\etale(X, \mathcal{F})$.
+The stalk of $R^qf_*\mathcal{F}$ at a geometric point
+$\Spec(\overline{K}) \to \Spec(K)$ is the cohomology of the
+pullback of $\mathcal{F}$ to $X_{\overline{K}}$.
Hence it vanishes in degrees $\geq 2$ by
Theorem \ref{theorem-vanishing-affine-curves}.
Since $R^qf_*\mathcal{F}$ is a torsion abelian sheaf
(Lemma \ref{lemma-torsion-direct-image}), we also have
$E_2^{p, q} = 0$ for $p > \text{cd}(K)$. Combining these vanishings
we conclude that $H^n_\etale(X, \mathcal{F}) = 0$ for
$n > 1 + \text{cd}(K)$.
\end{proof}
+
+\begin{lemma}
+\label{lemma-cd-field-extension}
+Let $L/K$ be a field extension. Then we have
+$\text{cd}(L) \leq \text{cd}(K) + \text{trdeg}_K(L)$.
+\end{lemma}
+
+\begin{proof}
+If $\text{trdeg}_K(L) = \infty$, then this is clear.
+If not then we can find a sequence of extensions
+$L= L_r/L_{r - 1}/ \ldots /L_1/L_0 = K$ such that
+$\text{trdeg}_{L_i}(L_{i + 1}) = 1$ and $r = \text{trdeg}_K(L)$.
+Hence it suffices to prove the lemma in the case that $r = 1$.
+In this case we can write $L = \colim A_i$
+as a filtered colimit of its finite type $K$-subalgebras.
+By Lemma \ref{lemma-cd-limit} it suffices to prove that
+$\text{cd}(A_i) \leq 1 + \text{cd}(K)$. This follows
+from Lemma \ref{lemma-cd-curve-over-field}.
+\end{proof}
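
\noindent
For instance, if $K$ is separably closed, then $\text{cd}(K) = 0$
and the lemma gives
$$
\text{cd}(L) \leq \text{trdeg}_K(L)
$$
for any extension $L/K$.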
+
+\begin{lemma}
+\label{lemma-strictly-henselian}
+Let $K$ be a field. Let $X$ be a scheme of finite type over $K$.
+Let $x \in X$. Set $a = \text{trdeg}_K(\kappa(x))$
+and $d = \dim_x(X)$. Then there is a map
+$$
+K(t_1, \ldots, t_a)^{sep} \longrightarrow \mathcal{O}_{X, x}^{sh}
+$$
+such that
+\begin{enumerate}
+\item the residue field of $\mathcal{O}_{X, x}^{sh}$ is a purely inseparable
+extension of $K(t_1, \ldots, t_a)^{sep}$,
+\item $\mathcal{O}_{X, x}^{sh}$ is a filtered colimit of finite
+type $K(t_1, \ldots, t_a)^{sep}$-algebras of dimension $\leq d - a$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We may assume $X$ is affine. By Noether normalization, after possibly
+shrinking $X$ again, we can choose a finite morphism
+$\pi : X \to \mathbf{A}^d_K$, see
+Algebra, Lemma \ref{algebra-lemma-Noether-normalization-at-point}.
+Since $\kappa(x)$ is a finite extension of the residue field of $\pi(x)$,
+this residue field has transcendence degree $a$ over $K$ as well.
+Thus we can find a finite morphism $\pi' : \mathbf{A}^d_K \to \mathbf{A}^d_K$
+such that $\pi'(\pi(x))$ corresponds to the generic point of the linear
+subspace $\mathbf{A}^a_K \subset \mathbf{A}^d_K$ given by setting
+the last $d - a$ coordinates equal to zero. Hence the composition
+$$
+X \xrightarrow{\pi' \circ \pi} \mathbf{A}^d_K \xrightarrow{p} \mathbf{A}^a_K
+$$
+of $\pi' \circ \pi$ and the projection $p$ onto the first $a$ coordinates
+maps $x$ to the generic point $\eta \in \mathbf{A}^a_K$. The induced map
+$$
+K(t_1, \ldots, t_a)^{sep} =
\mathcal{O}_{\mathbf{A}^a_K, \eta}^{sh}
+\longrightarrow \mathcal{O}_{X, x}^{sh}
+$$
+on \'etale local rings satisfies (1) since it is clear that the residue field
+of $\mathcal{O}_{X, x}^{sh}$ is an algebraic extension of the separably closed
+field $K(t_1, \ldots, t_a)^{sep}$. On the other hand, if $X = \Spec(B)$,
+then $\mathcal{O}_{X, x}^{sh} = \colim B_j$ is a filtered colimit
+of \'etale $B$-algebras $B_j$. Observe that $B_j$ is quasi-finite over
+$K[t_1, \ldots, t_d]$ as $B$ is finite over $K[t_1, \ldots, t_d]$.
+We may similarly write
+$K(t_1, \ldots, t_a)^{sep} = \colim A_i$ as a filtered colimit
+of \'etale $K[t_1, \ldots, t_a]$-algebras. For every $i$ we can
find a $j$ such that
+$A_i \to K(t_1, \ldots, t_a)^{sep} \to \mathcal{O}_{X, x}^{sh}$
+factors through a map $\psi_{i, j} : A_i \to B_j$. Then $B_j$ is quasi-finite
+over $A_i[t_{a + 1}, \ldots, t_d]$. Hence
+$$
+B_{i, j} = B_j \otimes_{\psi_{i, j}, A_i} K(t_1, \ldots, t_a)^{sep}
+$$
+has dimension $\leq d - a$ as it is quasi-finite over
+$K(t_1, \ldots, t_a)^{sep}[t_{a + 1}, \ldots, t_d]$.
+The proof of (2) is now finished as $\mathcal{O}_{X, x}^{sh}$ is a
+filtered colimit\footnote{Let $R$ be a ring.
+Let $A = \colim_{i \in I} A_i$
+be a filtered colimit of finitely presented $R$-algebras.
+Let $B = \colim_{j \in J} B_j$ be a filtered colimit of $R$-algebras.
+Let $A \to B$ be an $R$-algebra map.
+Assume that for all $i \in I$ there is a $j \in J$ and an $R$-algebra map
+$\psi_{i, j} : A_i \to B_j$.
+Say $(i', j', \psi_{i', j'}) \geq (i, j, \psi_{i, j})$ if
+$i' \geq i$, $j' \geq j$, and $\psi_{i, j}$ and $\psi_{i', j'}$
+are compatible. Then the collection of triples forms a
directed set and $B = \colim B_j \otimes_{\psi_{i, j}, A_i} A$.}
+of the algebras $B_{i, j}$. Some details omitted.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-cd-affine}
+Let $K$ be a field. Let $X$ be an affine scheme of finite type over $K$.
+Then we have $\text{cd}(X) \leq \dim(X) + \text{cd}(K)$.
+\end{proposition}
+
+\begin{proof}
+We will prove this by induction on $\dim(X)$.
+Let $\mathcal{F}$ be an abelian torsion sheaf on $X_\etale$.
+
+\medskip\noindent
+The case $\dim(X) = 0$. In this case the structure morphism
+$f : X \to \Spec(K)$ is finite. Hence we see that $R^if_*\mathcal{F} = 0$
+for $i > 0$, see Proposition \ref{proposition-finite-higher-direct-image-zero}.
+Thus $H^i_\etale(X, \mathcal{F}) = H^i_\etale(\Spec(K), f_*\mathcal{F})$
+by the Leray spectral sequence for $f$
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-Leray})
+and the result is clear.
+
+\medskip\noindent
+The case $\dim(X) = 1$. This is Lemma \ref{lemma-cd-curve-over-field}.
+
+\medskip\noindent
+Assume $d = \dim(X) > 1$ and the proposition holds for finite type
+affine schemes of dimension $< d$ over fields.
+By Noether normalization, see for example
+Varieties, Lemma \ref{varieties-lemma-noether-normalization-affine},
there exists a finite morphism $\pi : X \to \mathbf{A}^d_K$.
Recall that $R^i\pi_*\mathcal{F} = 0$ for $i > 0$ by
Proposition \ref{proposition-finite-higher-direct-image-zero}.
By the Leray spectral sequence for $\pi$
(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-Leray})
we conclude that it suffices to prove the result for $\pi_*\mathcal{F}$
on $\mathbf{A}^d_K$.
+
+\medskip\noindent
+Interlude I. Let $j : X \to Y$ be an open immersion of smooth
+$d$-dimensional varieties over $K$ (not necessarily affine)
+whose complement is the support of an effective Cartier divisor $D$.
+The sheaves $R^qj_*\mathcal{F}$ for $q > 0$ are supported on $D$.
+We claim that $(R^qj_*\mathcal{F})_{\overline{y}} = 0$
+for $a = \text{trdeg}_K(\kappa(y)) > d - q$. Namely, by
+Theorem \ref{theorem-higher-direct-images} we have
+$$
+(R^qj_*\mathcal{F})_{\overline{y}} =
+H^q(\Spec(\mathcal{O}_{Y, y}^{sh}) \times_Y X, \mathcal{F})
+$$
Choose a local equation $f \in \mathfrak m_y \subset \mathcal{O}_{Y, y}$
+for $D$. Then we have
+$$
+\Spec(\mathcal{O}_{Y, y}^{sh}) \times_Y X =
+\Spec(\mathcal{O}_{Y, y}^{sh}[1/f])
+$$
+Using Lemma \ref{lemma-strictly-henselian} we get an embedding
+$$
+K(t_1, \ldots, t_a)^{sep}(x) =
+K(t_1, \ldots, t_a)^{sep}[x]_{(x)}[1/x]
+\longrightarrow
+\mathcal{O}_{Y, y}^{sh}[1/f]
+$$
+Since the transcendence degree over $K$ of the fraction field of
$\mathcal{O}_{Y, y}^{sh}$ is $d$, we see that $\mathcal{O}_{Y, y}^{sh}[1/f]$
is a filtered colimit of finite type algebras of dimension $\leq d - a - 1$
over the field $K(t_1, \ldots, t_a)^{sep}(x)$, which itself has cohomological
dimension at most $1$ by Lemma \ref{lemma-cd-field-extension}.
+hypothesis and Lemma \ref{lemma-cd-limit}
+we obtain the desired vanishing.
+
+\medskip\noindent
+Interlude II. Let $Z$ be a smooth variety over $K$ of dimension $d - 1$.
+Let $E_a \subset Z$ be the set of points $z \in Z$ with
+$\text{trdeg}_K(\kappa(z)) \leq a$. Observe that $E_a$ is closed
+under specialization, see
+Varieties, Lemma \ref{varieties-lemma-dimension-locally-algebraic}.
+Suppose that $\mathcal{G}$ is a torsion abelian sheaf on $Z$
+whose support is contained in $E_a$. Then we claim that
+$H^b_\etale(Z, \mathcal{G}) = 0$ for $b > a + \text{cd}(K)$.
+Namely, we can write $\mathcal{G} = \colim \mathcal{G}_i$
+with $\mathcal{G}_i$ a torsion abelian sheaf
+supported on a closed subscheme $Z_i$ contained in $E_a$, see
+Lemma \ref{lemma-support-in-subset}.
+Then the induction hypothesis kicks in to imply
+the desired vanishing for $\mathcal{G}_i$\footnote{Here
+we first use Proposition \ref{proposition-closed-immersion-pushforward}
+to write $\mathcal{G}_i$ as the pushforward of a sheaf on $Z_i$,
+the induction hypothesis gives the vanishing for this sheaf on $Z_i$, and
+the Leray spectral sequence for $Z_i \to Z$ gives the vanishing
+for $\mathcal{G}_i$.}. Finally, we
+conclude by Theorem \ref{theorem-colimit}.
+
+\medskip\noindent
+Consider the commutative diagram
+$$
+\xymatrix{
+\mathbf{A}^d_K \ar[rd]_f \ar[rr]_-j & &
+\mathbf{P}^1_K \times_K \mathbf{A}^{d - 1}_K \ar[ld]^g \\
+& \mathbf{A}^{d - 1}_K
+}
+$$
+Observe that $j$ is an open immersion of smooth $d$-dimensional
+varieties whose complement is an effective Cartier divisor $D$. Thus
+we may use the results obtained in interlude I.
+We are going to study the relative Leray spectral sequence
+$$
+E_2^{p, q} = R^pg_*R^qj_*\mathcal{F} \Rightarrow R^{p + q}f_*\mathcal{F}
+$$
+Since $R^qj_*\mathcal{F}$ for $q > 0$ is supported on $D$ and since
+$g|_D : D \to \mathbf{A}^{d - 1}_K$ is an isomorphism, we find
+$R^pg_*R^qj_*\mathcal{F} = 0$ for $p > 0$ and $q > 0$. Moreover, we have
+$R^qj_*\mathcal{F} = 0$ for $q > d$. On the other hand, $g$ is a
+proper morphism of relative dimension $1$. Hence by
+Lemma \ref{lemma-cohomological-dimension-proper}
+we see that $R^pg_*j_*\mathcal{F} = 0$ for $p > 2$. Thus the $E_2$-page
+of the spectral sequence looks like this
+$$
+\begin{matrix}
+g_*R^dj_*\mathcal{F} & 0 & 0 \\
+\ldots & \ldots & \ldots \\
+g_*R^2j_*\mathcal{F} & 0 & 0 \\
+g_*R^1j_*\mathcal{F} & 0 & 0 \\
+g_*j_*\mathcal{F} & R^1g_*j_*\mathcal{F} & R^2g_*j_*\mathcal{F}
+\end{matrix}
+$$
+We conclude that $R^qf_*\mathcal{F} = g_*R^qj_*\mathcal{F}$ for $q > 2$.
+By interlude I we see that the support of $R^qf_*\mathcal{F}$ for $q > 2$
+is contained in the set of points of $\mathbf{A}^{d - 1}_K$
+whose residue field has transcendence degree $\leq d - q$.
+By interlude II
+$$
+H^p(\mathbf{A}^{d - 1}_K, R^qf_*\mathcal{F}) = 0
+\text{ for }p > d - q + \text{cd}(K)\text{ and }q > 2
+$$
+On the other hand, by Theorem \ref{theorem-higher-direct-images}
we have $(R^2f_*\mathcal{F})_{\overline{\eta}} =
+H^2(\mathbf{A}^1_{\overline{\eta}}, \mathcal{F}) = 0$
+(vanishing by the case of dimension $1$) where $\eta$ is the
+generic point of $\mathbf{A}^{d - 1}_K$.
+Hence by interlude II again we see
+$$
+H^p(\mathbf{A}^{d - 1}_K, R^2f_*\mathcal{F}) = 0
+\text{ for }p > d - 2 + \text{cd}(K)
+$$
+Finally, we have
+$$
+H^p(\mathbf{A}^{d - 1}_K, R^qf_*\mathcal{F}) = 0
+\text{ for }p > d - 1 + \text{cd}(K)\text{ and }q = 0, 1
+$$
+by induction hypothesis. Combining everything we just said
+with the Leray spectral sequence
+$H^p(\mathbf{A}^{d - 1}_K, R^qf_*\mathcal{F}) \Rightarrow
+H^{p + q}(\mathbf{A}^d_K, \mathcal{F})$ we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-interlude-II}
+Let $K$ be a field. Let $X$ be an affine scheme of finite type over $K$.
+Let $E_a \subset X$ be the set of points
+$x \in X$ with $\text{trdeg}_K(\kappa(x)) \leq a$.
+Let $\mathcal{F}$ be an abelian torsion sheaf on $X_\etale$
+whose support is contained in $E_a$. Then
+$H^b_\etale(X, \mathcal{F}) = 0$ for $b > a + \text{cd}(K)$.
+\end{lemma}
+
+\begin{proof}
+We can write $\mathcal{F} = \colim \mathcal{F}_i$
+with $\mathcal{F}_i$ a torsion abelian sheaf
+supported on a closed subscheme $Z_i$ contained in $E_a$, see
+Lemma \ref{lemma-support-in-subset}.
+Then Proposition \ref{proposition-cd-affine} gives
+the desired vanishing for $\mathcal{F}_i$. Details omitted;
+hints: first use Proposition \ref{proposition-closed-immersion-pushforward}
+to write $\mathcal{F}_i$ as the pushforward of a sheaf on $Z_i$,
+use the vanishing for this sheaf on $Z_i$, and use
+the Leray spectral sequence for $Z_i \to Z$ to get the vanishing
+for $\mathcal{F}_i$. Finally, we
+conclude by Theorem \ref{theorem-colimit}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-interlude-I}
+Let $f : X \to Y$ be an affine morphism of schemes of finite
+type over a field $K$. Let $E_a(X)$ be the set of points $x \in X$
+with $\text{trdeg}_K(\kappa(x)) \leq a$.
+Let $\mathcal{F}$ be an abelian torsion sheaf on $X_\etale$
+whose support is contained in $E_a(X)$. Then
+$R^qf_*\mathcal{F}$ has support contained in
+$E_{a - q}(Y)$.
+\end{lemma}
+
+\begin{proof}
+The question is local on $Y$ hence we can assume $Y$ is affine.
+Then $X$ is affine too and we can choose a diagram
+$$
+\xymatrix{
+X \ar[d]_f \ar[r]_i & \mathbf{A}^{n + m}_K \ar[d]^{\text{pr}} \\
+Y \ar[r]^j & \mathbf{A}^n_K
+}
+$$
+where the horizontal arrows are closed immersions and the vertical
+arrow on the right is the projection (details omitted).
+Then $j_*R^qf_*\mathcal{F} = R^q\text{pr}_*i_*\mathcal{F}$
+by the vanishing of the higher direct images of $i$ and $j$, see
+Proposition \ref{proposition-finite-higher-direct-image-zero}.
+Moreover, the description of the stalks of $j_*$ in the proposition
+shows that it suffices to prove the vanishing for $j_*R^qf_*\mathcal{F}$.
+Thus we may assume $f$ is the projection
+morphism $\text{pr} : \mathbf{A}^{n + m}_K \to \mathbf{A}^n_K$
+and an abelian torsion sheaf $\mathcal{F}$ on $\mathbf{A}^{n + m}_K$
+satisfying the assumption in the statement of the lemma.
+
+\medskip\noindent
+Let $y$ be a point in $\mathbf{A}^n_K$.
+By Theorem \ref{theorem-higher-direct-images} we have
+$$
+(R^q\text{pr}_*\mathcal{F})_{\overline{y}} =
+H^q(\mathbf{A}^{n + m}_K \times_{\mathbf{A}^n_K} \Spec(\mathcal{O}_{Y, y}^{sh}),
+\mathcal{F}) =
+H^q(\mathbf{A}^m_{\mathcal{O}_{Y, y}^{sh}}, \mathcal{F})
+$$
+Say $b = \text{trdeg}_K(\kappa(y))$. From Lemma \ref{lemma-strictly-henselian}
+we get an embedding
+$$
+L = K(t_1, \ldots, t_b)^{sep} \longrightarrow \mathcal{O}_{Y, y}^{sh}
+$$
+Write $\mathcal{O}_{Y, y}^{sh} = \colim B_i$ as the filtered
+colimit of finite type $L$-subalgebras $B_i \subset \mathcal{O}_{Y, y}^{sh}$
+containing the ring $K[T_1, \ldots, T_n]$ of regular functions
+on $\mathbf{A}^n_K$. Then we get
+$$
+\mathbf{A}^m_{\mathcal{O}_{Y, y}^{sh}} =
+\lim \mathbf{A}^m_{B_i}
+$$
+If $z \in \mathbf{A}^m_{B_i}$ is a point in the support of
+$\mathcal{F}$, then the image $x$ of $z$ in $\mathbf{A}^{m + n}_K$
+satisfies $\text{trdeg}_K(\kappa(x)) \leq a$ by our assumption on
+$\mathcal{F}$ in the lemma.
+Since $\mathcal{O}_{Y, y}^{sh}$ is a filtered colimit of
+\'etale algebras over $K[T_1, \ldots, T_n]$ and since
+$B_i \subset \mathcal{O}_{Y, y}^{sh}$
+we see that $\kappa(z)/\kappa(x)$ is algebraic
+(some details omitted). Then $\text{trdeg}_K(\kappa(z)) \leq a$
+and hence $\text{trdeg}_L(\kappa(z)) \leq a - b$, because
+$\text{trdeg}_K(L) = b$ and transcendence degree is additive in towers.
+By Lemma \ref{lemma-interlude-II} applied over the separably closed
+field $L$ (so $\text{cd}(L) = 0$) we see that
+$$
+H^q(\mathbf{A}^m_{B_i}, \mathcal{F}) = 0\text{ for }q > a - b
+$$
+Thus by Theorem \ref{theorem-colimit} we get
+$(R^q\text{pr}_*\mathcal{F})_{\overline{y}} = 0$ for $q > a - b$
+as desired.
+\end{proof}
+
+
+
+
+
+
+
+
+\section{Finite cohomological dimension}
+\label{section-finite-cd}
+
+\noindent
+We continue the discussion started in Section \ref{section-cd}.
+
+\begin{definition}
+\label{definition-cd-f}
+Let $f : X \to Y$ be a quasi-compact and quasi-separated
+morphism of schemes.
+The {\it cohomological dimension of $f$} is the smallest
+element
+$$
+\text{cd}(f) \in \{0, 1, 2, \ldots\} \cup \{\infty\}
+$$
+such that for any abelian torsion sheaf $\mathcal{F}$
+on $X_\etale$ we have $R^if_*\mathcal{F} = 0$
+for $i > \text{cd}(f)$.
+\end{definition}
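+
+\noindent
+For example, a finite morphism $f$ has $\text{cd}(f) = 0$ by
+Proposition \ref{proposition-finite-higher-direct-image-zero}, and a
+proper morphism $f$ whose fibres have dimension $\leq d$ has
+$\text{cd}(f) \leq 2d$ by
+Lemma \ref{lemma-cohomological-dimension-proper}.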
+
+\begin{lemma}
+\label{lemma-finite-cd}
+Let $K$ be a field.
+\begin{enumerate}
+\item If $f : X \to Y$ is a morphism of finite type schemes over $K$,
+then $\text{cd}(f) < \infty$.
+\item If $\text{cd}(K) < \infty$, then $\text{cd}(X) < \infty$
+for any finite type scheme $X$ over $K$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We may assume $Y$ is affine. We will use the
+induction principle of
+Cohomology of Schemes, Lemma \ref{coherent-lemma-induction-principle}
+to prove this. If $X$ is affine too, then the result holds by
+Lemma \ref{lemma-interlude-I}. Thus it suffices to show that if
+$X = U \cup V$ and the result is true for $U \to Y$, $V \to Y$, and
+$U \cap V \to Y$, then it is true for $f$. This follows from the
+relative Mayer-Vietoris sequence, see
+Lemma \ref{lemma-relative-mayer-vietoris}.
+
+\medskip\noindent
+Proof of (2). We will use the induction principle of
+Cohomology of Schemes, Lemma \ref{coherent-lemma-induction-principle}
+to prove this. If $X$ is affine, then the result holds by
+Proposition \ref{proposition-cd-affine}. Thus it suffices to show that if
+$X = U \cup V$ and the result is true for $U$, $V$, and
+$U \cap V $, then it is true for $X$. This follows from the
+Mayer-Vietoris sequence, see
+Lemma \ref{lemma-mayer-vietoris}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-cd-mod-n-direct-sums}
+Cohomology and direct sums. Let $n \geq 1$ be an integer.
+\begin{enumerate}
+\item Let $f : X \to Y$ be a quasi-compact and quasi-separated morphism
+of schemes with $\text{cd}(f) < \infty$. Then the functor
+$$
+Rf_* :
+D(X_\etale, \mathbf{Z}/n\mathbf{Z})
+\longrightarrow
+D(Y_\etale, \mathbf{Z}/n\mathbf{Z})
+$$
+commutes with direct sums.
+\item Let $X$ be a quasi-compact and quasi-separated scheme with
+$\text{cd}(X) < \infty$. Then the functor
+$$
+R\Gamma(X, -) :
+D(X_\etale, \mathbf{Z}/n\mathbf{Z})
+\longrightarrow
+D(\mathbf{Z}/n\mathbf{Z})
+$$
+commutes with direct sums.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Since $\text{cd}(f) < \infty$ we see that
+$$
+f_* :
+\textit{Mod}(X_\etale, \mathbf{Z}/n\mathbf{Z})
+\longrightarrow
+\textit{Mod}(Y_\etale, \mathbf{Z}/n\mathbf{Z})
+$$
+has finite cohomological dimension in the sense of
+Derived Categories, Lemma \ref{derived-lemma-unbounded-right-derived}.
+Let $I$ be a set and for $i \in I$ let $E_i$ be
+an object of $D(X_\etale, \mathbf{Z}/n\mathbf{Z})$.
+Choose a K-injective complex $\mathcal{I}_i^\bullet$
+of $\mathbf{Z}/n\mathbf{Z}$-modules each of whose
+terms $\mathcal{I}_i^m$ is an injective sheaf of
+$\mathbf{Z}/n\mathbf{Z}$-modules representing $E_i$.
+See Injectives, Theorem
+\ref{injectives-theorem-K-injective-embedding-grothendieck}.
+Then $\bigoplus E_i$ is represented by the complex
+$\bigoplus \mathcal{I}_i^\bullet$ (termwise direct sum), see
+Injectives, Lemma \ref{injectives-lemma-derived-products}.
+By Lemma \ref{lemma-relative-colimit} we have
+$$
+R^qf_*(\bigoplus \mathcal{I}_i^m) =
+\bigoplus R^qf_*(\mathcal{I}_i^m) = 0
+$$
+for $q > 0$ and any degree $m$. Hence we conclude by
+Derived Categories, Lemma \ref{derived-lemma-unbounded-right-derived}
+that we may compute $Rf_*(\bigoplus E_i)$ by the complex
+$$
+f_*(\bigoplus \mathcal{I}_i^\bullet) =
+\bigoplus f_*(\mathcal{I}_i^\bullet)
+$$
+(equality again by Lemma \ref{lemma-relative-colimit}) which
+represents $\bigoplus Rf_*E_i$ by the already used
+Injectives, Lemma \ref{injectives-lemma-derived-products}.
+
+\medskip\noindent
+Proof of (2). This is identical to the proof of (1)
+and we omit it.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-mod-n-direct-sums}
+Let $f : X \to Y$ be a proper morphism of schemes. Let $n \geq 1$
+be an integer. Then the functor
+$$
+Rf_* :
+D(X_\etale, \mathbf{Z}/n\mathbf{Z})
+\longrightarrow
+D(Y_\etale, \mathbf{Z}/n\mathbf{Z})
+$$
+commutes with direct sums.
+\end{lemma}
+
+\begin{proof}
+It is enough to prove this when $Y$ is quasi-compact. By
+Morphisms, Lemma \ref{morphisms-lemma-morphism-finite-type-bounded-dimension}
+we see that the dimension of the fibres of
+$f : X \to Y$ is bounded.
+Thus Lemma \ref{lemma-cohomological-dimension-proper}
+implies that $\text{cd}(f) < \infty$. Hence the
+result by Lemma \ref{lemma-finite-cd-mod-n-direct-sums}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-pull-out-constant-mod-n}
+Let $X$ be a quasi-compact and quasi-separated scheme
+such that $\text{cd}(X) < \infty$. Let $\Lambda$ be a torsion ring.
+Let $E \in D(X_\etale, \Lambda)$ and $K \in D(\Lambda)$. Then
+$$
+R\Gamma(X, E \otimes_\Lambda^\mathbf{L} \underline{K}) =
+R\Gamma(X, E) \otimes_\Lambda^\mathbf{L} K
+$$
+\end{lemma}
+
+\begin{proof}
+There is a canonical map from left to right by
+Cohomology on Sites, Section \ref{sites-cohomology-section-projection-formula}.
+Let $T(K)$ be the property that the statement of the
+lemma holds for $K \in D(\Lambda)$.
+We will check conditions (1), (2), and (3) of
+More on Algebra, Remark \ref{more-algebra-remark-P-resolution}
+hold for $T$ to conclude.
+Property (1) holds because both sides of the equality
+commute with direct sums, see
+Lemma \ref{lemma-finite-cd-mod-n-direct-sums}.
+Property (2) holds because we are comparing exact functors
+between triangulated categories and we can use
+Derived Categories, Lemma \ref{derived-lemma-third-isomorphism-triangle}.
+Property (3) says the lemma holds when $K = \Lambda[k]$
+for any shift $k \in \mathbf{Z}$ and this is obvious.
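+In more detail, for $K = \Lambda[k]$ both sides reduce to a shift of
+$R\Gamma(X, E)$:
+$$
+R\Gamma(X, E \otimes_\Lambda^\mathbf{L} \underline{\Lambda[k]}) =
+R\Gamma(X, E[k]) = R\Gamma(X, E)[k] =
+R\Gamma(X, E) \otimes_\Lambda^\mathbf{L} \Lambda[k]
+$$
+compatibly with the canonical map.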
+\end{proof}
+
+\begin{lemma}
+\label{lemma-projection-formula-proper-mod-n}
+Let $f : X \to Y$ be a proper morphism of schemes. Let $\Lambda$
+be a torsion ring. Let $E \in D(X_\etale, \Lambda)$ and
+$K \in D(Y_\etale, \Lambda)$. Then
+$$
+Rf_*E \otimes_\Lambda^\mathbf{L} K =
+Rf_*(E \otimes_\Lambda^\mathbf{L} f^{-1}K)
+$$
+in $D(Y_\etale, \Lambda)$.
+\end{lemma}
+
+\begin{proof}
+There is a canonical map from left to right by
+Cohomology on Sites, Section \ref{sites-cohomology-section-projection-formula}.
+We will check the equality on stalks at $\overline{y}$.
+By the proper base change (in the form of
+Lemma \ref{lemma-proper-base-change-mod-n} where $Y' = \overline{y}$)
+this reduces to the case where $Y$ is the spectrum of
+an algebraically closed field.
+This is shown in Lemma \ref{lemma-pull-out-constant-mod-n}
+where we use that $\text{cd}(X) < \infty$ by
+Lemma \ref{lemma-cohomological-dimension-proper}.
+\end{proof}
+
+
+
+
+
+
+\section{K\"unneth in \'etale cohomology}
+\label{section-kunneth}
+
+\noindent
+We first prove a K\"unneth formula in case one of the factors
+is proper. Then we use this formula to prove a base change property
+for open immersions. This then gives a
+``base change by morphisms towards spectra of fields''
+(akin to smooth base change).
+Finally we use this to get a more general K\"unneth formula.
+
+\begin{remark}
+\label{remark-define-kunneth-map}
+Consider a cartesian diagram in the category of schemes:
+$$
+\xymatrix{
+X \times_S Y \ar[d]_p \ar[r]_q \ar[rd]_c & Y \ar[d]^g \\
+X \ar[r]^f & S
+}
+$$
+Let $\Lambda$ be a ring and let $E \in D(X_\etale, \Lambda)$
+and $K \in D(Y_\etale, \Lambda)$. Then there is a canonical map
+$$
+Rf_*E \otimes_\Lambda^\mathbf{L} Rg_*K
+\longrightarrow
+Rc_*(p^{-1}E \otimes_\Lambda^\mathbf{L} q^{-1}K)
+$$
+For example we can define this using the canonical maps
+$Rf_*E \to Rc_*p^{-1}E$ and $Rg_*K \to Rc_*q^{-1}K$ and
+the relative cup product defined in Cohomology on Sites,
+Remark \ref{sites-cohomology-remark-cup-product}.
+Or you can use the adjoint to the map
+$$
+c^{-1}(Rf_*E \otimes_\Lambda^\mathbf{L} Rg_*K)
+=
+p^{-1}f^{-1}Rf_*E \otimes_\Lambda^\mathbf{L} q^{-1} g^{-1}Rg_*K
+\to
+p^{-1}E \otimes_\Lambda^\mathbf{L} q^{-1}K
+$$
+which uses the adjunction maps $f^{-1}Rf_*E \to E$ and
+$g^{-1}Rg_*K \to K$.
+\end{remark}
+
+
+\begin{lemma}
+\label{lemma-kunneth-one-proper}
+Let $k$ be a separably closed field. Let $X$ be a proper scheme over $k$.
+Let $Y$ be a quasi-compact and quasi-separated scheme over $k$.
+\begin{enumerate}
+\item If $E \in D^+(X_\etale)$ has torsion cohomology sheaves and
+$K \in D^+(Y_\etale)$, then
+$$
+R\Gamma(X \times_{\Spec(k)} Y,
+\text{pr}_1^{-1}E
+\otimes_\mathbf{Z}^\mathbf{L}
+\text{pr}_2^{-1}K
+)
+=
+R\Gamma(X, E)
+\otimes_\mathbf{Z}^\mathbf{L}
+R\Gamma(Y, K)
+$$
+\item If $n \geq 1$ is an integer, $Y$ is of finite type over $k$,
+$E \in D(X_\etale, \mathbf{Z}/n\mathbf{Z})$, and
+$K \in D(Y_\etale, \mathbf{Z}/n\mathbf{Z})$, then
+$$
+R\Gamma(X \times_{\Spec(k)} Y,
+\text{pr}_1^{-1}E
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+\text{pr}_2^{-1}K
+)
+=
+R\Gamma(X, E)
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+R\Gamma(Y, K)
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). By Lemma \ref{lemma-projection-formula-proper} we have
+$$
+R\text{pr}_{2, *}(
+\text{pr}_1^{-1}E
+\otimes_\mathbf{Z}^\mathbf{L}
+\text{pr}_2^{-1}K) =
+R\text{pr}_{2, *}(\text{pr}_1^{-1}E)
+\otimes_\mathbf{Z}^\mathbf{L}
+K
+$$
+By proper base change (in the form of Lemma \ref{lemma-proper-base-change})
+this is equal to the object
+$$
+\underline{R\Gamma(X, E)}
+\otimes_\mathbf{Z}^\mathbf{L}
+K
+$$
+of $D(Y_\etale)$. Taking $R\Gamma(Y, -)$ on this object reproduces the
+left hand side of the equality in (1) by the Leray spectral sequence
+for $\text{pr}_2$. Thus we conclude by
+Lemma \ref{lemma-pull-out-constant}.
+
+\medskip\noindent
+Proof of (2). This is exactly the same as the proof of (1)
+except that we use Lemmas \ref{lemma-projection-formula-proper-mod-n},
+\ref{lemma-proper-base-change-mod-n}, and
+\ref{lemma-pull-out-constant-mod-n} as well as
+$\text{cd}(Y) < \infty$ by Lemma \ref{lemma-finite-cd}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-supported-in-closed-points}
+Let $K$ be a separably closed field. Let $X$ be a scheme of finite
+type over $K$. Let $\mathcal{F}$ be an abelian sheaf on $X_\etale$
+whose support is contained in the set of closed points of $X$.
+Then $H^q(X, \mathcal{F}) = 0$ for $q > 0$ and $\mathcal{F}$
+is globally generated.
+\end{lemma}
+
+\begin{proof}
+(If $\mathcal{F}$ is torsion, then the vanishing follows immediately
+from Lemma \ref{lemma-interlude-II}.)
+By Lemma \ref{lemma-support-in-subset} we can write $\mathcal{F}$
+as a filtered colimit of constructible sheaves $\mathcal{F}_i$
+of $\mathbf{Z}$-modules whose supports $Z_i \subset X$ are finite
+sets of closed points. By
+Proposition \ref{proposition-closed-immersion-pushforward}
+such a sheaf is of the form $(Z_i \to X)_*\mathcal{G}_i$
+where $\mathcal{G}_i$ is a sheaf on $Z_i$. As $K$ is separably closed,
+the scheme $Z_i$ is a finite disjoint union of spectra of separably closed
+fields. Recall that $H^q(Z_i, \mathcal{G}_i) = H^q(X, \mathcal{F}_i)$
+by the Leray spectral sequence for $Z_i \to X$ and vanishing of higher
+direct images for this morphism
+(Proposition \ref{proposition-finite-higher-direct-image-zero}).
+By Lemmas \ref{lemma-equivalence-abelian-sheaves-point} and
+\ref{lemma-compare-cohomology-point}
+we see that $H^q(Z_i, \mathcal{G}_i)$ is zero for $q > 0$
+and that $H^0(Z_i, \mathcal{G}_i)$ generates $\mathcal{G}_i$.
+We conclude the vanishing of $H^q(X, \mathcal{F}_i)$ for $q > 0$
+and that $\mathcal{F}_i$ is generated by global sections.
+By Theorem \ref{theorem-colimit} we see that
+$H^q(X, \mathcal{F}) = 0$ for $q > 0$.
+The proof is now done because a
+filtered colimit of globally generated sheaves of abelian groups
+is globally generated (details omitted).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-vanishing-closed-points}
+Let $K$ be a separably closed field. Let $X$ be a scheme of finite
+type over $K$. Let $Q \in D(X_\etale)$. Assume that $Q_{\overline{x}}$
+is nonzero only if $x$ is a closed point of $X$. Then
+$$
+Q = 0 \Leftrightarrow H^i(X, Q) = 0 \text{ for all }i
+$$
+\end{lemma}
+
+\begin{proof}
+The implication from left to right is trivial. Thus we need
+to prove the reverse implication.
+
+\medskip\noindent
+Assume $Q$ is bounded below; this case suffices for almost all
+applications. If $Q$ is not zero, then we can look at the smallest $i$
+such that the cohomology sheaf $H^i(Q)$ is nonzero.
+By Lemma \ref{lemma-supported-in-closed-points} we have
+$H^i(X, Q) = H^0(X, H^i(Q)) \not = 0$ and we conclude.
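+In more detail, the first equality comes from the spectral sequence
+$$
+E_2^{p, q} = H^p(X, H^q(Q)) \Rightarrow H^{p + q}(X, Q)
+$$
+which degenerates because $E_2^{p, q} = 0$ for $p > 0$ by
+Lemma \ref{lemma-supported-in-closed-points}, and $H^0(X, H^i(Q))$
+is nonzero because $H^i(Q)$ is nonzero and globally generated by
+the same lemma.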
+
+\medskip\noindent
+General case. Let $\mathcal{B} \subset \Ob(X_\etale)$ be the
+quasi-compact objects. By Lemma \ref{lemma-supported-in-closed-points}
+the assumptions of Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-cohomology-over-U-trivial}
+are satisfied. We conclude that $H^q(U, Q) = H^0(U, H^q(Q))$
+for all $U \in \mathcal{B}$. In particular, this holds for $U = X$.
+Thus the conclusion follows from Lemma \ref{lemma-supported-in-closed-points}
+as $Q$ is zero in $D(X_\etale)$ if and only if $H^q(Q)$ is zero
+for all $q$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kunneth-localize-on-X}
+Let $K$ be a field. Let $j : U \to X$ be an open immersion of
+schemes of finite type over $K$. Let $Y$ be a scheme of finite type
+over $K$. Consider the diagram
+$$
+\xymatrix{
+Y \times_{\Spec(K)} X \ar[d]_q &
+Y \times_{\Spec(K)} U \ar[l]^h \ar[d]^p \\
+X & U \ar[l]_j
+}
+$$
+Then the base change map $q^{-1}Rj_*\mathcal{F} \to Rh_*p^{-1}\mathcal{F}$
+is an isomorphism for $\mathcal{F}$ an abelian sheaf on $U_\etale$
+whose stalks are torsion of orders invertible in $K$.
+\end{lemma}
+
+\begin{proof}
+Write $\mathcal{F} = \colim \mathcal{F}[n]$ where the colimit
+is over the multiplicative system of integers invertible in $K$.
+Since cohomology commutes with filtered colimits in our situation
+(for a precise reference see Lemma \ref{lemma-base-change-Rf-star-colim}),
+it suffices to prove the lemma for $\mathcal{F}[n]$. Thus we may
+assume $\mathcal{F}$ is a sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules
+for some $n$ invertible in $K$ (we will use this at the very end of
+the proof). In the proof we use the short hand $X \times_K Y$ for the fibre
+product over $\Spec(K)$.
+We will prove the lemma by induction on $\dim(X) + \dim(Y)$.
+The lemma is trivial if $\dim(X) \leq 0$, since in this case
+$U$ is an open and closed subscheme of $X$.
+Choose a point $z \in X \times_K Y$. We will show
+the stalk at $\overline{z}$ is an isomorphism.
+
+\medskip\noindent
+Suppose that $z \mapsto x \in X$ and assume $\text{trdeg}_K(\kappa(x)) > 0$.
+Set $X' = \Spec(\mathcal{O}_{X, x}^{sh})$
+and denote $U' \subset X'$ the inverse image of $U$.
+Consider the base change
+$$
+\xymatrix{
+Y \times_K X' \ar[d]_{q'} &
+Y \times_K U' \ar[l]^{h'} \ar[d]^{p'} \\
+X' & U' \ar[l]_{j'}
+}
+$$
+of our diagram by $X' \to X$. Observe that $X' \to X$ is a filtered
+colimit of \'etale morphisms. By smooth base change in the form of
+Lemma \ref{lemma-smooth-base-change-general} the pullback of
+$q^{-1}Rj_*\mathcal{F} \to Rh_*p^{-1}\mathcal{F}$ by $X' \to X$ to
+$Y \times_K X'$ is the map
+$(q')^{-1}Rj'_*\mathcal{F}' \to Rh'_*(p')^{-1}\mathcal{F}'$ where
+$\mathcal{F}'$ is the pullback of $\mathcal{F}$ to $U'$.
+(In this step it would suffice to use \'etale base change which is
+an essentially trivial result.) So it suffices to show
+that $(q')^{-1}Rj'_*\mathcal{F}' \to Rh'_*(p')^{-1}\mathcal{F}'$
+is an isomorphism in order to prove that our original
+map is an isomorphism on stalks at $\overline{z}$.
+By Lemma \ref{lemma-strictly-henselian}
+there is a separably closed field $L/K$ such that
+$X' = \lim X_i$ with $X_i$ affine of finite type over $L$
+and $\dim(X_i) < \dim(X)$. For $i$ large enough there
+exists an open $U_i \subset X_i$ restricting to $U'$ in $X'$.
+We may apply the induction hypothesis to the diagram
+$$
+\vcenter{
+\xymatrix{
+Y \times_K X_i \ar[d]_{q_i} &
+Y \times_K U_i \ar[l]^{h_i} \ar[d]^{p_i} \\
+X_i & U_i \ar[l]_{j_i}
+}
+}
+\quad\text{equal to}\quad
+\vcenter{
+\xymatrix{
+Y_L \times_L X_i \ar[d]_{q_i} &
+Y_L \times_L U_i \ar[l]^{h_i} \ar[d]^{p_i} \\
+X_i & U_i \ar[l]_{j_i}
+}
+}
+$$
+over the field $L$ and the pullback of $\mathcal{F}$ to these diagrams.
+By Lemma \ref{lemma-base-change-Rf-star-colim} we conclude that the map
+$(q')^{-1}Rj'_*\mathcal{F}' \to Rh'_*(p')^{-1}\mathcal{F}'$ is an isomorphism.
+
+\medskip\noindent
+Suppose that $z \mapsto y \in Y$ and assume $\text{trdeg}_K(\kappa(y)) > 0$.
+Let $Y' = \Spec(\mathcal{O}_{Y, y}^{sh})$.
+By Lemma \ref{lemma-strictly-henselian} there is a separably closed field
+$L/K$ such that $Y' = \lim Y_i$ with $Y_i$ affine of finite type over $L$
+and $\dim(Y_i) < \dim(Y)$. In particular $Y'$ is a scheme over $L$.
+Denote with a subscript $L$ the base change from schemes over $K$
+to schemes over $L$. Consider the commutative diagrams
+$$
+\vcenter{
+\xymatrix{
+Y' \times_K X \ar[d]_f &
+Y' \times_K U \ar[l]^{h'} \ar[d]^{f'} \\
+Y \times_K X \ar[d]_q &
+Y \times_K U \ar[l]^h \ar[d]^p \\
+X & U \ar[l]_j
+}
+}
+\quad\text{and}\quad
+\vcenter{
+\xymatrix{
+Y' \times_L X_L \ar[d]_{q'} &
+Y' \times_L U_L \ar[l]^{h'} \ar[d]^{p'} \\
+X_L \ar[d] &
+U_L \ar[l]^{j_L} \ar[d] \\
+X & U \ar[l]_j
+}
+}
+$$
+and observe the top and bottom rows are the same on the left and the right.
+By smooth base change we see that
+$f^{-1}Rh_*p^{-1}\mathcal{F} = Rh'_*(f')^{-1}p^{-1}\mathcal{F}$
+(similarly to the previous paragraph).
+By smooth base change for $\Spec(L) \to \Spec(K)$
+(Lemma \ref{lemma-base-change-field-extension})
+we see that $Rj_{L, *}\mathcal{F}_L$ is the pullback of
+$Rj_*\mathcal{F}$ to $X_L$. Combining these two observations,
+we conclude that it suffices to prove the base change map
+for the upper square in the diagram on the right
+is an isomorphism in order to prove that our original
+map is an isomorphism on stalks at $\overline{z}$\footnote{Here we
+use that a ``vertical composition'' of base change maps is a base change
+map as explained in Cohomology on Sites, Remark
+\ref{sites-cohomology-remark-compose-base-change}.}.
+Then using that $Y' = \lim Y_i$ and arguing exactly
+as in the previous paragraph we see that the induction
+hypothesis forces our map over $Y' \times_K X$ to be
+an isomorphism.
+
+\medskip\noindent
+Thus any counter example with $\dim(X) + \dim(Y)$ minimal
+would only have nonisomorphisms
+$q^{-1}Rj_*\mathcal{F} \to Rh_*p^{-1}\mathcal{F}$
+on stalks at closed points of $X \times_K Y$ (because a point
+$z$ of $X \times_K Y$ is a closed point if and only if
+both the image of $z$ in $X$ and in $Y$ are closed).
+Since it is enough to prove the isomorphism locally,
+we may assume $X$ and $Y$ are affine. However, then
+we can choose an open dense immersion $Y \to Y'$ with $Y'$
+projective. (Choose a closed immersion $Y \to \mathbf{A}^n_K$
+and let $Y'$ be the scheme theoretic closure of $Y$ in
+$\mathbf{P}^n_K$.) Then $\dim(Y') = \dim(Y)$ and hence
+we get a ``minimal'' counter example with $Y$ projective over $K$.
+In the next paragraph we show that this can't happen.
+
+\medskip\noindent
+Consider a diagram as in the statement of the lemma
+such that $q^{-1}Rj_*\mathcal{F} \to Rh_*p^{-1}\mathcal{F}$
+is an isomorphism at all non-closed points of $X \times_K Y$
+and such that $Y$ is projective.
+The restriction of the map to $(X \times_K Y)_{K^{sep}}$
+is the corresponding map for the diagram of the lemma
+base changed to $K^{sep}$. Thus we may and do assume $K$
+is separably closed. Choose a distinguished triangle
+$$
+q^{-1}Rj_*\mathcal{F} \to Rh_*p^{-1}\mathcal{F} \to Q \to
+(q^{-1}Rj_*\mathcal{F})[1]
+$$
+in $D((X \times_K Y)_\etale)$. Since $Q$ is supported in
+closed points we see that it suffices to prove
+$H^i(X \times_K Y, Q) = 0$ for all $i$, see
+Lemma \ref{lemma-vanishing-closed-points}.
+Thus it suffices to prove that
+$q^{-1}Rj_*\mathcal{F} \to Rh_*p^{-1}\mathcal{F}$
+induces an isomorphism on cohomology. Recall that $\mathcal{F}$
+is annihilated by $n$ invertible in $K$.
+By the K\"unneth formula of Lemma \ref{lemma-kunneth-one-proper}
+we have
+\begin{align*}
+R\Gamma(X \times_K Y, q^{-1}Rj_*\mathcal{F})
+&=
+R\Gamma(X, Rj_*\mathcal{F})
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+R\Gamma(Y, \mathbf{Z}/n\mathbf{Z}) \\
+& =
+R\Gamma(U, \mathcal{F})
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+R\Gamma(Y, \mathbf{Z}/n\mathbf{Z})
+\end{align*}
+and
+$$
+R\Gamma(X \times_K Y, Rh_*p^{-1}\mathcal{F}) =
+R\Gamma(U \times_K Y, p^{-1}\mathcal{F}) =
+R\Gamma(U, \mathcal{F})
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+R\Gamma(Y, \mathbf{Z}/n\mathbf{Z})
+$$
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-punctual-base-change}
+Let $K$ be a field. For any commutative diagram
+$$
+\xymatrix{
+X \ar[d] & X' \ar[l] \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e \\
+\Spec(K) & S' \ar[l] & T \ar[l]_g
+}
+$$
+of schemes over $K$ with
+$X' = X \times_{\Spec(K)} S'$ and $Y = X' \times_{S'} T$ and
+$g$ quasi-compact and quasi-separated, and for every abelian sheaf
+$\mathcal{F}$ on $T_\etale$ whose stalks are torsion of orders
+invertible in $K$, the base change map
+$$
+(f')^{-1}Rg_*\mathcal{F}
+\longrightarrow
+Rh_*e^{-1}\mathcal{F}
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+The question is local on $X$, hence we may assume $X$ is affine.
+By Limits, Lemma \ref{limits-lemma-relative-approximation}
+we can write $X = \lim X_i$ as a cofiltered limit with affine
+transition morphisms of schemes $X_i$ of finite type over $K$.
+Denote $X'_i = X_i \times_{\Spec(K)} S'$ and $Y_i = X'_i \times_{S'} T$.
+By Lemma \ref{lemma-base-change-Rf-star-colim}
+it suffices to prove the statement for the squares with
+$X_i$, $X'_i$, $Y_i$ in place of $X$, $X'$, $Y$.
+Thus we may assume $X$ is of finite type over $K$.
+Similarly, we may write
+$\mathcal{F} = \colim \mathcal{F}[n]$ where the colimit
+is over the multiplicative system of integers invertible in $K$.
+The same lemma used above reduces us to the case where
+$\mathcal{F}$ is a sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules
+for some $n$ invertible in $K$.
+
+\medskip\noindent
+We may replace $K$ by its algebraic closure $\overline{K}$.
+Namely, formation of direct image commutes with base change
+to $\overline{K}$ according to
+Lemma \ref{lemma-base-change-field-extension} (works for both
+$g$ and $h$). And it suffices to prove the agreement after
+restriction to $X'_{\overline{K}}$. Next, we may replace
+$X$ by its reduction as we have the topological invariance of \'etale
+cohomology, see
+Proposition \ref{proposition-topological-invariance}.
+After this replacement the morphism $X \to \Spec(K)$
+is flat, finite presentation, with geometrically reduced fibres
+and the same is true for any base change, in particular for $X' \to S'$.
+Hence $(f')^{-1}g_*\mathcal{F} \to h_*e^{-1}\mathcal{F}$
+is an isomorphism by Lemma \ref{lemma-fppf-reduced-fibres-base-change-f-star}.
+
+\medskip\noindent
+At this point we may apply
+Lemma \ref{lemma-base-change-does-not-hold-post}
+to see that it suffices to prove: given a commutative diagram
+$$
+\xymatrix{
+X \ar[d]_f & X' \ar[d] \ar[l] & Y \ar[l]^h \ar[d] \\
+\Spec(K) & S' \ar[l] & \Spec(L) \ar[l]
+}
+$$
+with both squares cartesian, where $S'$ is affine, integral, and normal
+with algebraically closed function field $K$, then
+$R^qh_*(\mathbf{Z}/d\mathbf{Z})$ is zero for $q > 0$ and $d | n$.
+Observe that this vanishing is equivalent to the statement that
+$$
+(f')^{-1}R^q(\Spec(L) \to S')_*\mathbf{Z}/d\mathbf{Z}
+\longrightarrow
+R^qh_*\mathbf{Z}/d\mathbf{Z}
+$$
+is an isomorphism, because the left hand side is zero for example by
+Lemma \ref{lemma-Rf-star-zero-normal-with-alg-closed-function-field}.
+
+\medskip\noindent
+Write $S' = \Spec(B)$ so that $L$ is the fraction field of $B$.
+Write $B = \bigcup_{i \in I} B_i$ as the union of its finite type
+$K$-subalgebras $B_i$. Let $J$ be the set of pairs $(i, g)$ where
+$i \in I$ and $g \in B_i$ nonzero with ordering
+$(i', g') \geq (i, g)$ if and only if $i' \geq i$ and
+$g$ maps to an invertible element of $(B_{i'})_{g'}$.
+Then $L = \colim_{(i, g) \in J} (B_i)_g$.
+For $j = (i, g) \in J$ set $S_j = \Spec(B_i)$
+and $U_j = \Spec((B_i)_g)$.
+Then
+$$
+\vcenter{
+\xymatrix{
+X' \ar[d] & Y \ar[l]^h \ar[d] \\
+S' & \Spec(L) \ar[l]
+}
+}
+\quad\text{is the colimit of}\quad
+\vcenter{
+\xymatrix{
+X \times_K S_j \ar[d] & X \times_K U_j \ar[l]^{h_j} \ar[d] \\
+S_j & U_j \ar[l]
+}
+}
+$$
+Thus we may apply Lemma \ref{lemma-base-change-Rf-star-colim}
+to see that it suffices to prove
+base change holds in the diagrams on the right which is what we
+proved in Lemma \ref{lemma-kunneth-localize-on-X}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-punctual-base-change-upgrade}
+Let $K$ be a field. Let $n \geq 1$ be invertible in $K$.
+Consider a commutative diagram
+$$
+\xymatrix{
+X \ar[d] & X' \ar[l]^p \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e \\
+\Spec(K) & S' \ar[l] & T \ar[l]_g
+}
+$$
+of schemes with
+$X' = X \times_{\Spec(K)} S'$ and $Y = X' \times_{S'} T$ and
+$g$ quasi-compact and quasi-separated. The canonical map
+$$
+p^{-1}E \otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L} (f')^{-1}Rg_*F
+\longrightarrow
+Rh_*(h^{-1}p^{-1}E \otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L} e^{-1}F)
+$$
+is an isomorphism if $E$ in $D^+(X_\etale, \mathbf{Z}/n\mathbf{Z})$
+has tor amplitude in $[a, \infty]$ for some $a \in \mathbf{Z}$ and
+$F$ in $D^+(T_\etale, \mathbf{Z}/n\mathbf{Z})$.
+\end{lemma}
+
+\begin{proof}
+This lemma is a generalization of Lemma \ref{lemma-punctual-base-change}
+to objects of the derived category; the assertion of our lemma is true because
+in Lemma \ref{lemma-punctual-base-change} the scheme $X$ over $K$
+is arbitrary. We strongly urge the reader to skip the laborious proof
+(alternative: read only the last paragraph).
+
+\medskip\noindent
+We may represent $E$ by a bounded below K-flat complex
+$\mathcal{E}^\bullet$ consisting of flat $\mathbf{Z}/n\mathbf{Z}$-modules.
+See Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-bounded-below-tor-amplitude}.
+Choose an integer $b$ such that $H^i(F) = 0$ for $i < b$.
+Choose a large integer $N$ and consider the short exact sequence
+$$
+0 \to \sigma_{\geq N + 1}\mathcal{E}^\bullet \to
+\mathcal{E}^\bullet \to
+\sigma_{\leq N}\mathcal{E}^\bullet \to 0
+$$
+of stupid truncations. This produces a distinguished triangle
+$E'' \to E \to E' \to E''[1]$ in $D(X_\etale, \mathbf{Z}/n\mathbf{Z})$.
+For fixed $F$ both sides of the arrow
+in the statement of the lemma are exact functors in $E$. Observe that
+$$
+p^{-1}E'' \otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L} (f')^{-1}Rg_*F
+\quad\text{and}\quad
+Rh_*(h^{-1}p^{-1}E'' \otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L} e^{-1}F)
+$$
+are sitting in degrees $\geq N + b$. Hence, if we can prove the lemma
+for the object $E'$, then we see that the lemma holds in degrees
+$\leq N + b$ and we will conclude. Some details omitted.
+Thus we may assume $E$ is represented
+by a bounded complex of flat $\mathbf{Z}/n\mathbf{Z}$-modules.
+By another argument of the same nature, we may assume
+$E$ is given by a single flat $\mathbf{Z}/n\mathbf{Z}$-module
+$\mathcal{E}$.
+
+\medskip\noindent
+Next, we use the same arguments for the variable $F$
+to reduce to the case where $F$ is given by a single
+sheaf of $\mathbf{Z}/n\mathbf{Z}$-modules $\mathcal{F}$.
+Say $\mathcal{F}$ is annihilated by an integer $m | n$.
+If $\ell$ is a prime number dividing $m$ and $m > \ell$,
+then we can look at the short exact sequence
+$0 \to \mathcal{F}[\ell] \to \mathcal{F} \to
+\mathcal{F}/\mathcal{F}[\ell] \to 0$
+and reduce to smaller $m$. This finally reduces us to
+the case where $\mathcal{F}$ is annihilated by a prime
+number $\ell$ dividing $n$.
+In this case observe that
+$$
+p^{-1}\mathcal{E}
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+(f')^{-1}Rg_*\mathcal{F}
+=
+p^{-1}(\mathcal{E}/\ell \mathcal{E})
+\otimes_{\mathbf{F}_\ell}^\mathbf{L}
+(f')^{-1}Rg_*\mathcal{F}
+$$
+by the flatness of $\mathcal{E}$. Similarly for the other term.
+This reduces us to the case where we are working with sheaves
+of $\mathbf{F}_\ell$-vector spaces, which is discussed in the
+next paragraph.
+
+\medskip\noindent
+Assume $\ell$ is a prime number invertible in $K$.
+Assume $\mathcal{E}$, $\mathcal{F}$ are sheaves of
+$\mathbf{F}_\ell$-vector spaces on $X_\etale$ and $T_\etale$.
+We want to show that
+$$
+p^{-1}\mathcal{E} \otimes_{\mathbf{F}_\ell} (f')^{-1}R^qg_*\mathcal{F}
+\longrightarrow
+R^qh_*(h^{-1}p^{-1}\mathcal{E} \otimes_{\mathbf{F}_\ell} e^{-1}\mathcal{F})
+$$
+is an isomorphism for every $q \geq 0$. This question is local on $X$
+hence we may assume $X$ is affine. We can write $\mathcal{E}$
+as a filtered colimit of constructible sheaves of
+$\mathbf{F}_\ell$-vector spaces on $X_\etale$, see
+Lemma \ref{lemma-torsion-colimit-constructible}.
+Since tensor products commute
+with filtered colimits and since
+higher direct images do too (Lemma \ref{lemma-relative-colimit})
+we may assume $\mathcal{E}$ is a constructible sheaf of
+$\mathbf{F}_\ell$-vector spaces on $X_\etale$.
+Then we can choose an integer $m$ and finite and finitely presented morphisms
+$\pi_i : X_i \to X$, $i = 1, \ldots, m$
+such that there is an injective map
+$$
+\mathcal{E} \to
+\bigoplus\nolimits_{i = 1, \ldots, m}
+\pi_{i, *}\mathbf{F}_\ell
+$$
+See Lemma \ref{lemma-constructible-maps-into-constant-general}.
+Observe that the direct sum is a constructible sheaf as well
+(Lemma \ref{lemma-finite-pushforward-constructible}).
+Thus the cokernel is constructible too
+(Lemma \ref{lemma-constructible-abelian}).
+By dimension shifting, i.e., induction on $q$,
+on the category of constructible sheaves of
+$\mathbf{F}_\ell$-vector spaces on $X_\etale$, it suffices to prove the
+result for the sheaves $\pi_{i, *}\mathbf{F}_\ell$
+(details omitted; hint: start with proving injectivity for $q = 0$
+for all constructible $\mathcal{E}$).
+To prove this case we extend the diagram of the lemma to
+$$
+\xymatrix{
+X_i \ar[d]^{\pi_i} &
+X'_i \ar[l]^{p_i} \ar[d]^{\pi'_i} &
+Y_i \ar[l]^{h_i} \ar[d]^{\rho_i} \\
+X \ar[d] & X' \ar[l]^p \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e \\
+\Spec(K) & S' \ar[l] & T \ar[l]_g
+}
+$$
+with all squares cartesian. In the equations below we use that
+$R\pi_{i, *} = \pi_{i, *}$ (and similarly for $\pi'_i$ and $\rho_i$),
+the Leray spectral sequence,
+Lemma \ref{lemma-finite-pushforward-commutes-with-base-change}, and
+Lemma \ref{lemma-projection-formula-proper-mod-n}
+(although this lemma is almost trivial for finite morphisms),
+all applied to $\pi_i$, $\pi'_i$, and $\rho_i$.
+Doing so we see that
+\begin{align*}
+p^{-1}\pi_{i, *}\mathbf{F}_\ell
+\otimes_{\mathbf{F}_\ell} (f')^{-1}R^qg_*\mathcal{F}
+& =
+\pi'_{i, *}\mathbf{F}_\ell
+\otimes_{\mathbf{F}_\ell} (f')^{-1}R^qg_*\mathcal{F} \\
+& =
+\pi'_{i, *}((\pi'_i)^{-1} (f')^{-1}R^qg_*\mathcal{F})
+\end{align*}
+Similarly, we have
+\begin{align*}
+R^qh_*(h^{-1}p^{-1} \pi_{i, *}\mathbf{F}_\ell
+\otimes_{\mathbf{F}_\ell} e^{-1}\mathcal{F})
+& =
+R^qh_*(\rho_{i, *}\mathbf{F}_\ell
+\otimes_{\mathbf{F}_\ell} e^{-1}\mathcal{F}) \\
+& =
+R^qh_*(\rho_i^{-1}e^{-1}\mathcal{F}) \\
+& =
+\pi'_{i, *}R^qh_{i, *}\rho_i^{-1}e^{-1}\mathcal{F}
+\end{align*}
+Since
+$R^qh_{i, *} \rho_i^{-1}e^{-1}\mathcal{F} =
+(\pi'_i)^{-1} (f')^{-1}R^qg_*\mathcal{F}$ by
+Lemma \ref{lemma-punctual-base-change}
+we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-punctual-base-change-upgrade-unbounded}
+Let $K$ be a field. Let $n \geq 1$ be invertible in $K$.
+Consider a commutative diagram
+$$
+\xymatrix{
+X \ar[d] & X' \ar[l]^p \ar[d]_{f'} & Y \ar[l]^h \ar[d]^e \\
+\Spec(K) & S' \ar[l] & T \ar[l]_g
+}
+$$
+of schemes of finite type over $K$ with
+$X' = X \times_{\Spec(K)} S'$ and $Y = X' \times_{S'} T$.
+The canonical map
+$$
+p^{-1}E \otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L} (f')^{-1}Rg_*F
+\longrightarrow
+Rh_*(h^{-1}p^{-1}E \otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L} e^{-1}F)
+$$
+is an isomorphism for $E$ in $D(X_\etale, \mathbf{Z}/n\mathbf{Z})$
+and $F$ in $D(T_\etale, \mathbf{Z}/n\mathbf{Z})$.
+\end{lemma}
+
+\begin{proof}
+We will reduce this to
+Lemma \ref{lemma-punctual-base-change-upgrade}
+using that our functors commute with direct sums.
+We suggest the reader skip the proof.
+Recall that derived tensor product commutes with
+direct sums. Recall that (derived) pullback commutes with direct sums.
+Recall that $Rh_*$ and $Rg_*$ commute with direct sums, see
+Lemmas \ref{lemma-finite-cd} and
+\ref{lemma-finite-cd-mod-n-direct-sums}
+(this is where we use our schemes are of finite type
+over $K$).
+
+\medskip\noindent
+To finish the proof we can argue as follows.
+First we write $E = \text{hocolim} \tau_{\leq N} E$.
+Since our functors commute with direct sums, they commute
+with homotopy colimits. Hence it suffices to prove
+the lemma for $E$ bounded above. Similarly for $F$ we
+may assume $F$ is bounded above.
+Then we can represent $E$ by a bounded above complex
+$\mathcal{E}^\bullet$ of sheaves of $\mathbf{Z}/n\mathbf{Z}$-modules.
+Then
+$$
+\mathcal{E}^\bullet = \colim \sigma_{\geq -N}\mathcal{E}^\bullet
+$$
+(stupid truncations).
+Thus we may assume $\mathcal{E}^\bullet$ is a bounded
+complex of sheaves of $\mathbf{Z}/n\mathbf{Z}$-modules.
+For $F$ we choose a bounded above complex of
+flat(!) sheaves of $\mathbf{Z}/n\mathbf{Z}$-modules.
+Then we reduce to the case where $F$ is represented
+by a bounded complex of flat sheaves of $\mathbf{Z}/n\mathbf{Z}$-modules.
+At this point Lemma \ref{lemma-punctual-base-change-upgrade}
+kicks in and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-kunneth}
+Let $k$ be a separably closed field. Let $X$ and $Y$ be
+finite type schemes over $k$. Let $n \geq 1$ be an integer
+invertible in $k$. Then for
+$E \in D(X_\etale, \mathbf{Z}/n\mathbf{Z})$ and
+$K \in D(Y_\etale, \mathbf{Z}/n\mathbf{Z})$
+we have
+$$
+R\Gamma(X \times_{\Spec(k)} Y,
+\text{pr}_1^{-1}E
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+\text{pr}_2^{-1}K
+)
+=
+R\Gamma(X, E)
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+R\Gamma(Y, K)
+$$
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-punctual-base-change-upgrade-unbounded} we have
+$$
+R\text{pr}_{1, *}(
+\text{pr}_1^{-1}E
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+\text{pr}_2^{-1}K) =
+E \otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+\underline{R\Gamma(Y, K)}
+$$
+We conclude by
+Lemma \ref{lemma-pull-out-constant-mod-n}
+which we may use because
+$\text{cd}(X) < \infty$ by Lemma \ref{lemma-finite-cd}.
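+
+\medskip\noindent
+In other words, the proof is the following chain of identifications
+(the first equality combines $R\Gamma(X \times_{\Spec(k)} Y, -) =
+R\Gamma(X, -) \circ R\text{pr}_{1, *}$ with the displayed formula,
+and the second is Lemma \ref{lemma-pull-out-constant-mod-n}; we omit
+the verification that these agree with the canonical map):
+\begin{align*}
+R\Gamma(X \times_{\Spec(k)} Y,
+\text{pr}_1^{-1}E
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+\text{pr}_2^{-1}K)
+& =
+R\Gamma(X, E \otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+\underline{R\Gamma(Y, K)}) \\
+& =
+R\Gamma(X, E)
+\otimes_{\mathbf{Z}/n\mathbf{Z}}^\mathbf{L}
+R\Gamma(Y, K)
+\end{align*}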
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Comparing chaotic and Zariski topologies}
+\label{section-compare-chaotic-Zariski}
+
+\noindent
+When constructing the structure sheaf of an affine scheme, we first
+construct the values on affine opens, and then we extend to all opens.
+A similar construction is often useful for constructing complexes
+of abelian groups on a scheme $X$. Recall that $X_{affine, Zar}$
+denotes the category of affine opens of $X$ with topology given
+by standard Zariski coverings, see
+Topologies, Definition \ref{topologies-definition-big-small-Zariski}.
+We remind the reader that the topos of $X_{affine, Zar}$
+is the small Zariski topos of $X$, see Topologies, Lemma
+\ref{topologies-lemma-alternative-zariski}.
+In this section we denote $X_{affine}$ the same underlying
+category with the chaotic topology, i.e., such that sheaves
+agree with presheaves. We obtain a morphism of sites
+$$
+\epsilon : X_{affine, Zar} \longrightarrow X_{affine}
+$$
+as in Cohomology on Sites, Section \ref{sites-cohomology-section-compare}.
+
+\begin{lemma}
+\label{lemma-check-zar}
+In the situation above let $K$ be an object of $D^+(X_{affine})$.
+Then $K$ is in the essential image of the (fully faithful) functor
+$R\epsilon_* : D(X_{affine, Zar}) \to D(X_{affine})$ if and only
+if the following two conditions hold
+\begin{enumerate}
+\item $R\Gamma(\emptyset, K)$ is zero in $D(\textit{Ab})$, and
+\item if $U = V \cup W$ with $U, V, W \subset X$ affine open and
+$V, W \subset U$ standard open
+(Algebra, Definition \ref{algebra-definition-Zariski-topology}), then
+the map $c^K_{U, V, W, V \cap W}$ of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-c-square}
+is a quasi-isomorphism.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+(The functor $R\epsilon_*$ is fully faithful by the discussion in
+Cohomology on Sites, Section \ref{sites-cohomology-section-compare}.)
+Except for a snafu having to do with the empty set,
+this follows from the very general Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-descent-squares} whose hypotheses hold by
+Schemes, Lemma \ref{schemes-lemma-sheaf-on-affines} and
+Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-descent-squares-helper}.
+
+\medskip\noindent
+To get around the snafu, denote $X_{affine, almost-chaotic}$
+the site where the empty object $\emptyset$ has two coverings,
+namely, $\{\emptyset \to \emptyset\}$ and the empty covering
+(see Sites, Example \ref{sites-example-site-topological} for a
+discussion). Then we have morphisms of sites
+$$
+X_{affine, Zar} \to X_{affine, almost-chaotic} \to X_{affine}
+$$
+The argument above works for the first arrow. Then we leave it
+to the reader to see that an object $K$ of $D^+(X_{affine})$
+is in the essential image of the (fully faithful) functor
+$D(X_{affine, almost-chaotic}) \to D(X_{affine})$ if and only
+if $R\Gamma(\emptyset, K)$ is zero in $D(\textit{Ab})$.
+\end{proof}
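+
+\medskip\noindent
+Assuming the description of the maps $c^K$ in
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-c-square},
+condition (2) of Lemma \ref{lemma-check-zar} can be reformulated as
+a Mayer-Vietoris property: for every $U = V \cup W$ as in the lemma
+the square
+$$
+\xymatrix{
+R\Gamma(U, K) \ar[r] \ar[d] & R\Gamma(V, K) \ar[d] \\
+R\Gamma(W, K) \ar[r] & R\Gamma(V \cap W, K)
+}
+$$
+has to be a homotopy pullback square in $D(\textit{Ab})$.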
+
+
+
+
+
+
+
+
+
+\section{Comparing big and small topoi}
+\label{section-compare}
+
+\noindent
+Let $S$ be a scheme. In
+Topologies, Lemma \ref{topologies-lemma-at-the-bottom-etale}
+we have introduced comparison morphisms
+$\pi_S : (\Sch/S)_\etale \to S_\etale$ and
+$i_S : \Sh(S_\etale) \to \Sh((\Sch/S)_\etale)$
+with $\pi_S \circ i_S = \text{id}$ and $\pi_{S, *} = i_S^{-1}$.
+More generally, if $f : T \to S$ is an object of $(\Sch/S)_\etale$,
+then there is a morphism $i_f : \Sh(T_\etale) \to \Sh((\Sch/S)_\etale)$
+such that $f_{small} = \pi_S \circ i_f$, see
+Topologies, Lemmas \ref{topologies-lemma-put-in-T-etale} and
+\ref{topologies-lemma-morphism-big-small-etale}. In
+Descent, Remark \ref{descent-remark-change-topologies-ringed}
+we have extended these to a morphism of ringed sites
+$$
+\pi_S : ((\Sch/S)_\etale, \mathcal{O}) \to (S_\etale, \mathcal{O}_S)
+$$
+and morphisms of ringed topoi
+$$
+i_S : (\Sh(S_\etale), \mathcal{O}_S) \to (\Sh((\Sch/S)_\etale), \mathcal{O})
+$$
+and
+$$
+i_f : (\Sh(T_\etale), \mathcal{O}_T) \to (\Sh((\Sch/S)_\etale), \mathcal{O})
+$$
+Note that the restriction $i_S^{-1} = \pi_{S, *}$ (see
+Topologies, Definition \ref{topologies-definition-restriction-small-etale})
+transforms $\mathcal{O}$ into $\mathcal{O}_S$.
+Similarly, $i_f^{-1}$ transforms $\mathcal{O}$ into $\mathcal{O}_T$.
+See Descent, Remark \ref{descent-remark-change-topologies-ringed}.
+Hence $i_S^*\mathcal{F} = i_S^{-1}\mathcal{F}$ and
+$i_f^*\mathcal{F} = i_f^{-1}\mathcal{F}$ for any $\mathcal{O}$-module
+$\mathcal{F}$ on $(\Sch/S)_\etale$. In particular $i_S^*$ and $i_f^*$
+are exact functors. The functor $i_S^*$ is often denoted
+$\mathcal{F} \mapsto \mathcal{F}|_{S_\etale}$ (and this does not
+conflict with the notation in
+Topologies, Definition \ref{topologies-definition-restriction-small-etale}).
+
+\begin{lemma}
+\label{lemma-compare-injectives}
+Let $S$ be a scheme. Let $T$ be an object of $(\Sch/S)_\etale$.
+\begin{enumerate}
+\item If $\mathcal{I}$ is injective in $\textit{Ab}((\Sch/S)_\etale)$, then
+\begin{enumerate}
+\item $i_f^{-1}\mathcal{I}$ is injective in $\textit{Ab}(T_\etale)$,
+\item $\mathcal{I}|_{S_\etale}$ is injective in $\textit{Ab}(S_\etale)$,
+\end{enumerate}
+\item If $\mathcal{I}^\bullet$ is a K-injective complex
+in $\textit{Ab}((\Sch/S)_\etale)$, then
+\begin{enumerate}
+\item $i_f^{-1}\mathcal{I}^\bullet$ is a K-injective complex in
+$\textit{Ab}(T_\etale)$,
+\item $\mathcal{I}^\bullet|_{S_\etale}$ is a K-injective complex in
+$\textit{Ab}(S_\etale)$,
+\end{enumerate}
+\end{enumerate}
+The corresponding statements for modules do not hold.
+\end{lemma}
+
+\begin{proof}
+Parts (1)(b) and (2)(b)
+follow formally from the fact that the restriction functor
+$\pi_{S, *} = i_S^{-1}$ is a right adjoint of the exact functor
+$\pi_S^{-1}$, see
+Homology, Lemma \ref{homology-lemma-adjoint-preserve-injectives} and
+Derived Categories, Lemma \ref{derived-lemma-adjoint-preserve-K-injectives}.
+
+\medskip\noindent
+Parts (1)(a) and (2)(a) can be seen in two ways. First proof: We can use
+that $i_f^{-1}$ is a right adjoint of the exact functor $i_{f, !}$.
+This functor is constructed in
+Topologies, Lemma \ref{topologies-lemma-put-in-T-etale}
+for sheaves of sets and for abelian sheaves in
+Modules on Sites, Lemma \ref{sites-modules-lemma-g-shriek-adjoint}.
+It is shown in Modules on Sites, Lemma
+\ref{sites-modules-lemma-exactness-lower-shriek} that it is exact.
+Second proof. We can use that $i_f = i_T \circ f_{big}$ as is shown
+in Topologies, Lemma \ref{topologies-lemma-morphism-big-small-etale}.
+Since $f_{big}$ is a localization, we see that pullback by it
+preserves injectives and K-injectives, see
+Cohomology on Sites, Lemmas \ref{sites-cohomology-lemma-cohomology-of-open} and
+\ref{sites-cohomology-lemma-restrict-K-injective-to-open}.
+Then we apply the already proved parts (1)(b) and (2)(b)
+to the functor $i_T^{-1}$ to conclude.
+
+\medskip\noindent
+Let $S = \Spec(\mathbf{Z})$ and consider the map
+$2 : \mathcal{O}_S \to \mathcal{O}_S$. This is an injective map
+of $\mathcal{O}_S$-modules on $S_\etale$. However, the pullback
+$\pi_S^*(2) : \mathcal{O} \to \mathcal{O}$ is not injective
+as we see by evaluating on $\Spec(\mathbf{F}_2)$. Now choose
+an injection $\alpha : \mathcal{O} \to \mathcal{I}$ into an injective
+$\mathcal{O}$-module $\mathcal{I}$ on $(\Sch/S)_\etale$.
+Then consider the diagram
+$$
+\xymatrix{
+\mathcal{O}_S \ar[d]_2 \ar[rr]_{\alpha|_{S_\etale}} & &
+\mathcal{I}|_{S_\etale} \\
+\mathcal{O}_S \ar@{..>}[rru]
+}
+$$
+Then the dotted arrow cannot exist in the category of $\mathcal{O}_S$-modules
+because it would mean
+(by adjunction) that the injective map $\alpha$ factors through
+the noninjective map $\pi_S^*(2)$ which cannot be the case.
+Thus $\mathcal{I}|_{S_\etale}$ is not an injective $\mathcal{O}_S$-module.
+\end{proof}
+
+\noindent
+Let $f : T \to S$ be a morphism of schemes. The commutative diagram of
+Topologies, Lemma \ref{topologies-lemma-morphism-big-small-etale} (3)
+leads to a commutative diagram of ringed sites
+$$
+\xymatrix{
+(T_\etale, \mathcal{O}_T) \ar[d]_{f_{small}} &
+((\Sch/T)_\etale, \mathcal{O}) \ar[d]^{f_{big}} \ar[l]^{\pi_T} \\
+(S_\etale, \mathcal{O}_S) &
+((\Sch/S)_\etale, \mathcal{O}) \ar[l]_{\pi_S}
+}
+$$
+as one easily sees by writing out the definitions of
+$f_{small}^\sharp$, $f_{big}^\sharp$, $\pi_S^\sharp$, and $\pi_T^\sharp$.
+In particular this means that
+\begin{equation}
+\label{equation-compare-big-small}
+(f_{big, *}\mathcal{F})|_{S_\etale} =
+f_{small, *}(\mathcal{F}|_{T_\etale})
+\end{equation}
+for any sheaf $\mathcal{F}$ on $(\Sch/T)_\etale$ and if $\mathcal{F}$
+is a sheaf of $\mathcal{O}$-modules, then (\ref{equation-compare-big-small})
+is an isomorphism of $\mathcal{O}_S$-modules on $S_\etale$.
+
+\begin{lemma}
+\label{lemma-compare-higher-direct-image}
+Let $f : T \to S$ be a morphism of schemes.
+\begin{enumerate}
+\item For $K$ in $D((\Sch/T)_\etale)$ we have
+$
+(Rf_{big, *}K)|_{S_\etale} = Rf_{small, *}(K|_{T_\etale})
+$
+in $D(S_\etale)$.
+\item For $K$ in $D((\Sch/T)_\etale, \mathcal{O})$ we have
+$
+(Rf_{big, *}K)|_{S_\etale} = Rf_{small, *}(K|_{T_\etale})
+$
+in $D(\textit{Mod}(S_\etale, \mathcal{O}_S))$.
+\end{enumerate}
+More generally, let $g : S' \to S$ be an object of $(\Sch/S)_\etale$.
+Consider the fibre product
+$$
+\xymatrix{
+T' \ar[r]_{g'} \ar[d]_{f'} & T \ar[d]^f \\
+S' \ar[r]^g & S
+}
+$$
+Then
+\begin{enumerate}
+\item[(3)] For $K$ in $D((\Sch/T)_\etale)$ we have
+$i_g^{-1}(Rf_{big, *}K) = Rf'_{small, *}(i_{g'}^{-1}K)$
+in $D(S'_\etale)$.
+\item[(4)] For $K$ in $D((\Sch/T)_\etale, \mathcal{O})$ we have
+$i_g^*(Rf_{big, *}K) = Rf'_{small, *}(i_{g'}^*K)$
+in $D(\textit{Mod}(S'_\etale, \mathcal{O}_{S'}))$.
+\item[(5)] For $K$ in $D((\Sch/T)_\etale)$ we have
+$g_{big}^{-1}(Rf_{big, *}K) = Rf'_{big, *}((g'_{big})^{-1}K)$
+in $D((\Sch/S')_\etale)$.
+\item[(6)] For $K$ in $D((\Sch/T)_\etale, \mathcal{O})$ we have
+$g_{big}^*(Rf_{big, *}K) = Rf'_{big, *}((g'_{big})^*K)$
+in $D(\textit{Mod}(S'_\etale, \mathcal{O}_{S'}))$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) follows from
+Lemma \ref{lemma-compare-injectives}
+and (\ref{equation-compare-big-small})
+on choosing a K-injective complex of abelian sheaves representing $K$.
+
+\medskip\noindent
+Part (3) follows from Lemma \ref{lemma-compare-injectives}
+and Topologies, Lemma
+\ref{topologies-lemma-morphism-big-small-cartesian-diagram-etale}
+on choosing a K-injective complex of abelian sheaves representing $K$.
+
+\medskip\noindent
+Part (5) is Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-localize-cartesian-square}.
+
+\medskip\noindent
+Part (6) is Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-localize-cartesian-square-modules}.
+
+\medskip\noindent
+Part (2) can be proved as follows. Above we have seen
+that $\pi_S \circ f_{big} = f_{small} \circ \pi_T$ as morphisms
+of ringed sites. Hence we obtain
+$R\pi_{S, *} \circ Rf_{big, *} = Rf_{small, *} \circ R\pi_{T, *}$
+by Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-derived-pushforward-composition}.
+Since the restriction functors $\pi_{S, *}$ and $\pi_{T, *}$
+are exact, we conclude.
+
+\medskip\noindent
+Part (4) follows from part (6) and part (2) applied to $f' : T' \to S'$.
+\end{proof}
+
+\noindent
+Let $S$ be a scheme and let $\mathcal{H}$ be an abelian sheaf on
+$(\Sch/S)_\etale$. Recall that $H^n_\etale(U, \mathcal{H})$ denotes
+the cohomology of $\mathcal{H}$ over an object $U$ of $(\Sch/S)_\etale$.
+
+\begin{lemma}
+\label{lemma-compare-cohomology}
+Let $f : T \to S$ be a morphism of schemes. Then
+\begin{enumerate}
+\item For $K$ in $D(S_\etale)$ we have
+$H^n_\etale(S, \pi_S^{-1}K) = H^n(S_\etale, K)$.
+\item For $K$ in $D(S_\etale, \mathcal{O}_S)$ we have
+$H^n_\etale(S, L\pi_S^*K) = H^n(S_\etale, K)$.
+\item For $K$ in $D(S_\etale)$ we have
+$H^n_\etale(T, \pi_S^{-1}K) = H^n(T_\etale, f_{small}^{-1}K)$.
+\item For $K$ in $D(S_\etale, \mathcal{O}_S)$ we have
+$H^n_\etale(T, L\pi_S^*K) = H^n(T_\etale, Lf_{small}^*K)$.
+\item For $M$ in $D((\Sch/S)_\etale)$ we have
+$H^n_\etale(T, M) = H^n(T_\etale, i_f^{-1}M)$.
+\item For $M$ in $D((\Sch/S)_\etale, \mathcal{O})$ we have
+$H^n_\etale(T, M) = H^n(T_\etale, i_f^*M)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+To prove (5) represent $M$ by a K-injective complex of abelian sheaves
+and apply Lemma \ref{lemma-compare-injectives}
+and work out the definitions. Part (3) follows from
+this as $i_f^{-1}\pi_S^{-1} = f_{small}^{-1}$. Part (1) is a special
+case of (3).
+
+\medskip\noindent
+Part (6) follows from the very general Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-pullback-same-cohomology}. Then part
+(4) follows because $Lf_{small}^* = i_f^* \circ L\pi_S^*$.
+Part (2) is a special case of (4).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomological-descent-etale}
+Let $S$ be a scheme. For $K \in D(S_\etale)$ the map
+$$
+K \longrightarrow R\pi_{S, *}\pi_S^{-1}K
+$$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+This is true because both $\pi_S^{-1}$ and $\pi_{S, *} = i_S^{-1}$
+are exact functors and the composition $\pi_{S, *} \circ \pi_S^{-1}$
+is the identity functor.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-higher-direct-image-proper}
+Let $f : T \to S$ be a proper morphism of schemes. Then we have
+\begin{enumerate}
+\item $\pi_S^{-1} \circ f_{small, *} = f_{big, *} \circ \pi_T^{-1}$
+as functors $\Sh(T_\etale) \to \Sh((\Sch/S)_\etale)$,
+\item $\pi_S^{-1}Rf_{small, *}K = Rf_{big, *}\pi_T^{-1}K$
+for $K$ in $D^+(T_\etale)$ whose cohomology sheaves are torsion,
+\item $\pi_S^{-1}Rf_{small, *}K = Rf_{big, *}\pi_T^{-1}K$
+for $K$ in $D(T_\etale, \mathbf{Z}/n\mathbf{Z})$, and
+\item $\pi_S^{-1}Rf_{small, *}K = Rf_{big, *}\pi_T^{-1}K$
+for all $K$ in $D(T_\etale)$ if $f$ is finite.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Let $\mathcal{F}$ be a sheaf on $T_\etale$.
+Let $g : S' \to S$ be an object of $(\Sch/S)_\etale$. Consider the
+fibre product
+$$
+\xymatrix{
+T' \ar[r]_{f'} \ar[d]_{g'} & S' \ar[d]^g \\
+T \ar[r]^f & S
+}
+$$
+Then we have
+$$
+(f_{big, *}\pi_T^{-1}\mathcal{F})(S') =
+(\pi_T^{-1}\mathcal{F})(T') =
+((g'_{small})^{-1}\mathcal{F})(T') =
+(f'_{small, *}(g'_{small})^{-1}\mathcal{F})(S')
+$$
+the second equality by Lemma \ref{lemma-describe-pullback}.
+On the other hand
+$$
+(\pi_S^{-1}f_{small, *}\mathcal{F})(S') =
+(g_{small}^{-1}f_{small, *}\mathcal{F})(S')
+$$
+again by Lemma \ref{lemma-describe-pullback}.
+Hence by proper base change for sheaves of sets
+(Lemma \ref{lemma-proper-base-change-f-star})
+we conclude the two sets are canonically isomorphic.
+The isomorphism is compatible with restriction mappings
+and defines an isomorphism
+$\pi_S^{-1}f_{small, *}\mathcal{F} = f_{big, *}\pi_T^{-1}\mathcal{F}$.
+Thus an isomorphism of functors
+$\pi_S^{-1} \circ f_{small, *} = f_{big, *} \circ \pi_T^{-1}$.
+
+\medskip\noindent
+Proof of (2). There is a canonical base change map
+$\pi_S^{-1}Rf_{small, *}K \to Rf_{big, *}\pi_T^{-1}K$
+for any $K$ in $D(T_\etale)$, see
+Cohomology on Sites, Remark \ref{sites-cohomology-remark-base-change}.
+To prove it is an isomorphism, it suffices to prove the pull back of
+the base change map by $i_g : \Sh(S'_\etale) \to \Sh((\Sch/S)_\etale)$
+is an isomorphism for any object $g : S' \to S$ of $(\Sch/S)_\etale$.
+Let $T', g', f'$ be as in the previous paragraph.
+The pullback of the base change map is
+\begin{align*}
+g_{small}^{-1}Rf_{small, *}K
+& =
+i_g^{-1}\pi_S^{-1}Rf_{small, *}K \\
+& \to
+i_g^{-1}Rf_{big, *}\pi_T^{-1}K \\
+& =
+Rf'_{small, *}(i_{g'}^{-1}\pi_T^{-1}K) \\
+& =
+Rf'_{small, *}((g'_{small})^{-1}K)
+\end{align*}
+where we have used $\pi_S \circ i_g = g_{small}$,
+$\pi_T \circ i_{g'} = g'_{small}$, and
+Lemma \ref{lemma-compare-higher-direct-image}.
+This map is an isomorphism by the proper base change theorem
+(Lemma \ref{lemma-proper-base-change}) provided $K$ is bounded
+below and the cohomology sheaves of $K$ are torsion.
+
+\medskip\noindent
+The proof of part (3) is the same as the proof of part (2), except
+we use Lemma \ref{lemma-proper-base-change-mod-n}
+instead of Lemma \ref{lemma-proper-base-change}.
+
+\medskip\noindent
+Proof of (4). If $f$ is finite, then the functors
+$f_{small, *}$ and $f_{big, *}$ are exact. This follows
+from Proposition \ref{proposition-finite-higher-direct-image-zero}
+for $f_{small}$. Since any base change $f'$ of $f$ is finite too,
+we conclude from Lemma \ref{lemma-compare-higher-direct-image} part (3)
+that $f_{big, *}$ is exact too (as the higher derived functors are zero).
+Thus this case follows from part (1).
+\end{proof}
+
+
+
+
+\section{Comparing fppf and \'etale topologies}
+\label{section-fppf-etale}
+
+\noindent
+A model for this section is the section on the comparison of the
+usual topology and the qc topology on locally compact topological
+spaces as discussed in Cohomology on Sites, Section
+\ref{sites-cohomology-section-cohomology-LC}.
+We first review some material from
+Topologies, Sections
+\ref{topologies-section-change-topologies} and
+\ref{topologies-section-etale}.
+
+\medskip\noindent
+Let $S$ be a scheme and let $(\Sch/S)_{fppf}$ be an fppf site.
+On the same underlying category we have a second topology,
+namely the \'etale topology, and hence a second site
+$(\Sch/S)_\etale$. The identity functor
+$(\Sch/S)_\etale \to (\Sch/S)_{fppf}$ is continuous and defines
+a morphism of sites
+$$
+\epsilon_S : (\Sch/S)_{fppf} \longrightarrow (\Sch/S)_\etale
+$$
+See Cohomology on Sites, Section \ref{sites-cohomology-section-compare}.
+Please note that $\epsilon_{S, *}$ is the identity functor on underlying
+presheaves and that $\epsilon_S^{-1}$ associates to an \'etale sheaf the
+fppf sheafification. Let $S_\etale$ be the small \'etale site.
+There is a morphism of sites
+$$
+\pi_S : (\Sch/S)_\etale \longrightarrow S_\etale
+$$
+given by the continuous functor
+$S_\etale \to (\Sch/S)_\etale$, $U \mapsto U$.
+Namely, $S_\etale$ has fibre products and a final object and the
+functor above commutes with these and
+Sites, Proposition \ref{sites-proposition-get-morphism} applies.
+
+\begin{lemma}
+\label{lemma-describe-pullback-pi-fppf}
+With notation as above.
+Let $\mathcal{F}$ be a sheaf on $S_\etale$. The rule
+$$
+(\Sch/S)_{fppf} \longrightarrow \textit{Sets},\quad
+(f : X \to S) \longmapsto \Gamma(X, f_{small}^{-1}\mathcal{F})
+$$
+is a sheaf and a fortiori a sheaf on $(\Sch/S)_\etale$.
+In fact this sheaf is equal to
+$\pi_S^{-1}\mathcal{F}$ on $(\Sch/S)_\etale$ and
+$\epsilon_S^{-1}\pi_S^{-1}\mathcal{F}$ on $(\Sch/S)_{fppf}$.
+\end{lemma}
+
+\begin{proof}
+The statement about the \'etale topology is the content
+of Lemma \ref{lemma-describe-pullback}. To finish the proof it
+suffices to show that $\pi_S^{-1}\mathcal{F}$ is a sheaf for the fppf
+topology. This is shown in Lemma \ref{lemma-describe-pullback}
+as well.
+\end{proof}
+
+\noindent
+In the situation of Lemma \ref{lemma-describe-pullback-pi-fppf}
+the composition $\pi_S \circ \epsilon_S$ together with the equality
+of the lemma determines a morphism of sites
+$$
+a_S : (\Sch/S)_{fppf} \longrightarrow S_\etale
+$$
+
+\begin{lemma}
+\label{lemma-push-pull-fppf-etale}
+With notation as above.
+Let $f : X \to Y$ be a morphism of $(\Sch/S)_{fppf}$.
+Then there are commutative diagrams of topoi
+$$
+\xymatrix{
+\Sh((\Sch/X)_{fppf}) \ar[rr]_{f_{big, fppf}} \ar[d]_{\epsilon_X} & &
+\Sh((\Sch/Y)_{fppf}) \ar[d]^{\epsilon_Y} \\
+\Sh((\Sch/X)_\etale) \ar[rr]^{f_{big, \etale}} & &
+\Sh((\Sch/Y)_\etale)
+}
+$$
+and
+$$
+\xymatrix{
+\Sh((\Sch/X)_{fppf}) \ar[rr]_{f_{big, fppf}} \ar[d]_{a_X} & &
+\Sh((\Sch/Y)_{fppf}) \ar[d]^{a_Y} \\
+\Sh(X_\etale) \ar[rr]^{f_{small}} & &
+\Sh(Y_\etale)
+}
+$$
+with $a_X = \pi_X \circ \epsilon_X$ and $a_Y = \pi_Y \circ \epsilon_Y$.
+\end{lemma}
+
+\begin{proof}
+The commutativity of the diagrams follows from the discussion in
+Topologies, Section \ref{topologies-section-change-topologies}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-push-pull-fppf-etale}
+In Lemma \ref{lemma-push-pull-fppf-etale} if $f$ is proper, then we have
+$a_Y^{-1} \circ f_{small, *} = f_{big, fppf, *} \circ a_X^{-1}$.
+\end{lemma}
+
+\begin{proof}
+You can prove this by repeating the proof of
+Lemma \ref{lemma-compare-higher-direct-image-proper} part (1);
+we will instead deduce the result from this.
+As $\epsilon_{Y, *}$ is the identity functor on underlying presheaves,
+it reflects isomorphisms. The description
+in Lemma \ref{lemma-describe-pullback-pi-fppf}
+shows that $\epsilon_{Y, *} \circ a_Y^{-1} = \pi_Y^{-1}$
+and similarly for $X$. To show that the canonical map
+$a_Y^{-1}f_{small, *}\mathcal{F} \to f_{big, fppf, *}a_X^{-1}\mathcal{F}$
+is an isomorphism, it suffices to show that
+\begin{align*}
+\pi_Y^{-1}f_{small, *}\mathcal{F}
+& =
+\epsilon_{Y, *}a_Y^{-1}f_{small, *}\mathcal{F} \\
+& \to
+\epsilon_{Y, *}f_{big, fppf, *}a_X^{-1}\mathcal{F} \\
+& =
+f_{big, \etale, *} \epsilon_{X, *}a_X^{-1}\mathcal{F} \\
+& =
+f_{big, \etale, *}\pi_X^{-1}\mathcal{F}
+\end{align*}
+is an isomorphism. This is part
+(1) of Lemma \ref{lemma-compare-higher-direct-image-proper}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-descent-sheaf-fppf-etale}
+In Lemma \ref{lemma-push-pull-fppf-etale} assume
+$f$ is flat, locally of finite presentation, and surjective.
+Then the functor
+$$
+\Sh(Y_\etale) \longrightarrow
+\left\{
+(\mathcal{G}, \mathcal{H}, \alpha)
+\middle|
+\begin{matrix}
+\mathcal{G} \in \Sh(X_\etale),\ \mathcal{H} \in \Sh((\Sch/Y)_{fppf}), \\
+\alpha : a_X^{-1}\mathcal{G} \to f_{big, fppf}^{-1}\mathcal{H}
+\text{ an isomorphism}
+\end{matrix}
+\right\}
+$$
+sending $\mathcal{F}$ to
+$(f_{small}^{-1}\mathcal{F}, a_Y^{-1}\mathcal{F}, \text{can})$
+is an equivalence.
+\end{lemma}
+
+\begin{proof}
+The functor $a_X^{-1}$ is fully faithful (as $a_{X, *}a_X^{-1} = \text{id}$ by
+Lemma \ref{lemma-describe-pullback-pi-fppf}). Hence the forgetful functor
+$(\mathcal{G}, \mathcal{H}, \alpha) \mapsto \mathcal{H}$ identifies the
+category of triples with a full subcategory of $\Sh((\Sch/Y)_{fppf})$.
+Moreover, the functor $a_Y^{-1}$ is fully faithful, hence the functor
+in the lemma is fully faithful as well.
+
+\medskip\noindent
+Suppose that we have an \'etale covering $\{Y_i \to Y\}$.
+Let $f_i : X_i \to Y_i$ be the base change of $f$.
+Denote $f_{ij} = f_i \times f_j : X_i \times_X X_j \to Y_i \times_Y Y_j$.
+Claim: if the lemma is true for $f_i$ and $f_{ij}$ for all $i, j$, then
+the lemma is true for $f$. To see this, note that the given \'etale covering
+determines an \'etale covering of the final object in each of
+the four sites $Y_\etale, X_\etale, (\Sch/Y)_{fppf}, (\Sch/X)_{fppf}$.
+Thus the category of sheaves is equivalent to the category of
+glueing data for this covering
+(Sites, Lemma \ref{sites-lemma-mapping-property-glue})
+in each of the four cases. A huge commutative diagram of
+categories then finishes the proof of the claim. We omit the details.
+The claim shows that we may work \'etale locally on $Y$.
+
+\medskip\noindent
+Note that $\{X \to Y\}$ is an fppf covering. Working \'etale locally on $Y$,
+we may assume there exists a morphism $s : X' \to X$ such that the composition
+$f' = f \circ s : X' \to Y$ is surjective finite locally free, see
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-dominate-fppf-etale-locally}.
+Claim: if the lemma is true for $f'$, then it is true for $f$.
+Namely, given a triple $(\mathcal{G}, \mathcal{H}, \alpha)$
+for $f$, we can pull back by $s$ to get a triple
+$(s_{small}^{-1}\mathcal{G}, \mathcal{H}, s_{big, fppf}^{-1}\alpha)$
+for $f'$. A solution for this triple gives a sheaf
+$\mathcal{F}$ on $Y_\etale$ with $a_Y^{-1}\mathcal{F} = \mathcal{H}$.
+By the first paragraph of the proof this means the triple is
+in the essential image. This reduces us to
+the case described in the next paragraph.
+
+\medskip\noindent
+Assume $f$ is surjective finite locally free.
+Let $(\mathcal{G}, \mathcal{H}, \alpha)$ be a triple.
+In this case consider the triple
+$$
+(\mathcal{G}_1, \mathcal{H}_1, \alpha_1) =
+(f_{small}^{-1}f_{small, *}\mathcal{G},
+f_{big, fppf, *}f_{big, fppf}^{-1}\mathcal{H}, \alpha_1)
+$$
+where $\alpha_1$ comes from the identifications
+\begin{align*}
+a_X^{-1}f_{small}^{-1}f_{small, *}\mathcal{G}
+& =
+f_{big, fppf}^{-1}a_Y^{-1}f_{small, *}\mathcal{G} \\
+& =
+f_{big, fppf}^{-1}f_{big, fppf, *}a_X^{-1}\mathcal{G} \\
+& \to
+f_{big, fppf}^{-1}f_{big, fppf, *}f_{big, fppf}^{-1}\mathcal{H}
+\end{align*}
+where the third equality is Lemma \ref{lemma-proper-push-pull-fppf-etale}
+and the arrow is given by $\alpha$.
+This triple is in the image of our functor because
+$\mathcal{F}_1 = f_{small, *}\mathcal{G}$ is a solution
+(to see this use Lemma \ref{lemma-proper-push-pull-fppf-etale} again;
+details omitted). There is a canonical map of triples
+$$
+(\mathcal{G}, \mathcal{H}, \alpha)
+\to
+(\mathcal{G}_1, \mathcal{H}_1, \alpha_1)
+$$
+which uses the unit $\text{id} \to f_{big, fppf, *}f_{big, fppf}^{-1}$
+on the second entry (it is enough to prescribe morphisms on the
+second entry by the first paragraph of the proof). Since
+$\{f : X \to Y\}$ is an fppf covering the map
+$\mathcal{H} \to \mathcal{H}_1$ is injective (details omitted).
+Set
+$$
+\mathcal{G}_2 = \mathcal{G}_1 \amalg_\mathcal{G} \mathcal{G}_1
+\quad\text{and}\quad
+\mathcal{H}_2 = \mathcal{H}_1 \amalg_\mathcal{H} \mathcal{H}_1
+$$
+and let $\alpha_2$ be the induced isomorphism (pullback functors
+are exact, so this makes sense). Then $\mathcal{H}$ is the
+equalizer of the two maps $\mathcal{H}_1 \to \mathcal{H}_2$.
+Repeating the arguments above for the triple
+$(\mathcal{G}_2, \mathcal{H}_2, \alpha_2)$
+we find an injective morphism of triples
+$$
+(\mathcal{G}_2, \mathcal{H}_2, \alpha_2)
+\to
+(\mathcal{G}_3, \mathcal{H}_3, \alpha_3)
+$$
+such that this last triple is in the image of our functor.
+Say it corresponds to $\mathcal{F}_3$ in $\Sh(Y_\etale)$.
+By fully faithfulness we obtain two maps
+$\mathcal{F}_1 \to \mathcal{F}_3$ and we can let
+$\mathcal{F}$ be the equalizer of these two maps.
+By exactness of the pullback functors involved we
+find that $a_Y^{-1}\mathcal{F} = \mathcal{H}$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-fppf-etale}
+Consider the comparison morphism
+$\epsilon : (\Sch/S)_{fppf} \to (\Sch/S)_\etale$.
+Let $\mathcal{P}$ denote the class of finite morphisms of schemes.
+For $X$ in $(\Sch/S)_\etale$ denote
+$\mathcal{A}'_X \subset \textit{Ab}((\Sch/X)_\etale)$
+the full subcategory consisting of sheaves of the form
+$\pi_X^{-1}\mathcal{F}$ with $\mathcal{F}$ in $\textit{Ab}(X_\etale)$.
+Then Cohomology on Sites, Properties
+(\ref{sites-cohomology-item-base-change-P}),
+(\ref{sites-cohomology-item-restriction-A}),
+(\ref{sites-cohomology-item-A-sheaf}),
+(\ref{sites-cohomology-item-A-and-P}), and
+(\ref{sites-cohomology-item-refine-tau-by-P})
+of Cohomology on Sites, Situation
+\ref{sites-cohomology-situation-compare} hold.
+\end{lemma}
+
+\begin{proof}
+We first show that $\mathcal{A}'_X \subset \textit{Ab}((\Sch/X)_\etale)$
+is a weak Serre subcategory by checking conditions (1), (2), (3), and (4)
+of Homology, Lemma \ref{homology-lemma-characterize-weak-serre-subcategory}.
+Parts (1), (2), (3) are immediate as $\pi_X^{-1}$ is exact and
+fully faithful for example by Lemma \ref{lemma-cohomological-descent-etale}. If
+$0 \to \pi_X^{-1}\mathcal{F} \to \mathcal{G} \to \pi_X^{-1}\mathcal{F}' \to 0$
+is a short exact sequence in $\textit{Ab}((\Sch/X)_\etale)$
+then $0 \to \mathcal{F} \to \pi_{X, *}\mathcal{G} \to \mathcal{F}' \to 0$
+is exact by Lemma \ref{lemma-cohomological-descent-etale}.
+Hence $\mathcal{G} = \pi_X^{-1}\pi_{X, *}\mathcal{G}$ is in
+$\mathcal{A}'_X$ which checks the final condition.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-base-change-P}) holds
+by the existence of fibre products of schemes
+and the fact that the base change of a finite morphism of
+schemes is a finite morphism of schemes, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-finite}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-restriction-A})
+follows from the commutative diagram (3) in
+Topologies, Lemma \ref{topologies-lemma-morphism-big-small-etale}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-A-sheaf}) is
+Lemma \ref{lemma-describe-pullback-pi-fppf}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-A-and-P}) holds by
+Lemma \ref{lemma-compare-higher-direct-image-proper} part (4).
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-refine-tau-by-P})
+is implied by
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-dominate-fppf-etale-locally}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-V-C-all-n-etale-fppf}
+With notation as above.
+\begin{enumerate}
+\item For $X \in \Ob((\Sch/S)_{fppf})$ and an abelian sheaf $\mathcal{F}$
+on $X_\etale$ we have
+$\epsilon_{X, *}a_X^{-1}\mathcal{F} = \pi_X^{-1}\mathcal{F}$
+and $R^i\epsilon_{X, *}(a_X^{-1}\mathcal{F}) = 0$ for $i > 0$.
+\item For a finite morphism $f : X \to Y$ in $(\Sch/S)_{fppf}$
+and abelian sheaf $\mathcal{F}$ on $X$ we have
+$a_Y^{-1}(R^if_{small, *}\mathcal{F}) =
+R^if_{big, fppf, *}(a_X^{-1}\mathcal{F})$
+for all $i$.
+\item For a scheme $X$ and $K$ in $D^+(X_\etale)$ the map
+$\pi_X^{-1}K \to R\epsilon_{X, *}(a_X^{-1}K)$ is an isomorphism.
+\item For a finite morphism $f : X \to Y$ of schemes
+and $K$ in $D^+(X_\etale)$ we have
+$a_Y^{-1}(Rf_{small, *}K) = Rf_{big, fppf, *}(a_X^{-1}K)$.
+\item For a proper morphism $f : X \to Y$ of schemes
+and $K$ in $D^+(X_\etale)$ with torsion cohomology sheaves we have
+$a_Y^{-1}(Rf_{small, *}K) = Rf_{big, fppf, *}(a_X^{-1}K)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-compare-fppf-etale} the lemmas in
+Cohomology on Sites, Section \ref{sites-cohomology-section-compare-general}
+all apply to our current setting. To translate the results
+observe that the category $\mathcal{A}_X$ of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-A}
+is the essential image of
+$a_X^{-1} : \textit{Ab}(X_\etale) \to \textit{Ab}((\Sch/X)_{fppf})$.
+
+\medskip\noindent
+Part (1) is equivalent to $(V_n)$ for all $n$ which holds by
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-V-C-all-n-general}.
+
+\medskip\noindent
+Part (2) follows by applying $\epsilon_Y^{-1}$ to the conclusion of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-V-implies-C-general}.
+
+\medskip\noindent
+Part (3) follows from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-V-C-all-n-general} part (1)
+because $\pi_X^{-1}K$ is in $D^+_{\mathcal{A}'_X}((\Sch/X)_\etale)$
+and $a_X^{-1} = \epsilon_X^{-1} \circ \pi_X^{-1}$.
+
+\medskip\noindent
+Part (4) follows from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-V-C-all-n-general} part (2)
+for the same reason.
+
+\medskip\noindent
+Part (5). We use that
+\begin{align*}
+R\epsilon_{Y, *}Rf_{big, fppf, *}a_X^{-1}K
+& =
+Rf_{big, \etale, *}R\epsilon_{X, *}a_X^{-1}K \\
+& =
+Rf_{big, \etale, *}\pi_X^{-1}K \\
+& =
+\pi_Y^{-1}Rf_{small, *}K \\
+& =
+R\epsilon_{Y, *} a_Y^{-1}Rf_{small, *}K
+\end{align*}
+The first equality holds by the commutative diagram in
+Lemma \ref{lemma-push-pull-fppf-etale}
+and Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-derived-pushforward-composition}.
+The second equality is (3). The third is
+Lemma \ref{lemma-compare-higher-direct-image-proper} part (2).
+The fourth is (3) again. Thus the base change map
+$a_Y^{-1}(Rf_{small, *}K) \to Rf_{big, fppf, *}(a_X^{-1}K)$
+induces an isomorphism
+$$
+R\epsilon_{Y, *}a_Y^{-1}Rf_{small, *}K \to
+R\epsilon_{Y, *}Rf_{big, fppf, *}a_X^{-1}K
+$$
+The proof is finished by the following remark: a map
+$\alpha : a_Y^{-1}L \to M$ with $L$ in $D^+(Y_\etale)$
+and $M$ in $D^+((\Sch/Y)_{fppf})$ such that $R\epsilon_{Y, *}\alpha$
+is an isomorphism, is an isomorphism. Namely,
+we show by induction on $i$ that $H^i(\alpha)$ is an isomorphism.
+This is true for all sufficiently small $i$.
+If it holds for $i \leq i_0$, then we see that
+$R^j\epsilon_{Y, *}H^i(M) = 0$ for $j > 0$ and $i \leq i_0$
+by (1) because $H^i(M) = a_Y^{-1}H^i(L)$ in this range.
+Hence $\epsilon_{Y, *}H^{i_0 + 1}(M) = H^{i_0 + 1}(R\epsilon_{Y, *}M)$
+by a spectral sequence argument.
+Thus $\epsilon_{Y, *}H^{i_0 + 1}(M) = \pi_Y^{-1}H^{i_0 + 1}(L) =
+\epsilon_{Y, *}a_Y^{-1}H^{i_0 + 1}(L)$.
+This implies $H^{i_0 + 1}(\alpha)$ is an isomorphism
+(because $\epsilon_{Y, *}$ reflects isomorphisms as it is the
+identity on underlying presheaves) as desired.
+\end{proof}
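+
+\medskip\noindent
+To spell out the spectral sequence argument at the end of the proof
+(this is merely a restatement in the notation above): the spectral sequence
+$$
+E_2^{p, q} = R^p\epsilon_{Y, *}H^q(M)
+\Rightarrow
+H^{p + q}(R\epsilon_{Y, *}M)
+$$
+has $E_2^{p, q} = 0$ for $p > 0$ and $q \leq i_0$ by the vanishing
+established in the induction step. Hence every term contributing to
+$H^{i_0 + 1}(R\epsilon_{Y, *}M)$ with $p > 0$ vanishes, and the
+differentials into and out of $E_2^{0, i_0 + 1}$ land in zero groups,
+so that
+$$
+\epsilon_{Y, *}H^{i_0 + 1}(M) = E_2^{0, i_0 + 1} =
+H^{i_0 + 1}(R\epsilon_{Y, *}M)
+$$
+as used in the proof.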
+
+\begin{lemma}
+\label{lemma-cohomological-descent-etale-fppf}
+Let $X$ be a scheme. For $K \in D^+(X_\etale)$ the map
+$$
+K \longrightarrow Ra_{X, *}a_X^{-1}K
+$$
+is an isomorphism with $a_X : \Sh((\Sch/X)_{fppf}) \to \Sh(X_\etale)$
+as above.
+\end{lemma}
+
+\begin{proof}
+We first reduce the statement to the case where
+$K$ is given by a single abelian sheaf. Namely, represent $K$
+by a bounded below complex $\mathcal{F}^\bullet$. By the case of a
+sheaf we see that
+$\mathcal{F}^n = a_{X, *} a_X^{-1} \mathcal{F}^n$
+and that the sheaves $R^qa_{X, *}a_X^{-1}\mathcal{F}^n$
+are zero for $q > 0$. By Leray's acyclicity lemma
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity})
+applied to $a_X^{-1}\mathcal{F}^\bullet$
+and the functor $a_{X, *}$ we conclude. From now on assume $K = \mathcal{F}$.
+
+\medskip\noindent
+By Lemma \ref{lemma-describe-pullback-pi-fppf} we have
+$a_{X, *}a_X^{-1}\mathcal{F} = \mathcal{F}$. Thus it suffices to show that
+$R^qa_{X, *}a_X^{-1}\mathcal{F} = 0$ for $q > 0$.
+For this we can use $a_X = \epsilon_X \circ \pi_X$ and
+the Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-relative-Leray}).
+By Lemma \ref{lemma-V-C-all-n-etale-fppf}
+we have $R^i\epsilon_{X, *}(a_X^{-1}\mathcal{F}) = 0$ for $i > 0$
+and
+$\epsilon_{X, *}a_X^{-1}\mathcal{F} = \pi_X^{-1}\mathcal{F}$.
+By Lemma \ref{lemma-cohomological-descent-etale} we have
+$R^j\pi_{X, *}(\pi_X^{-1}\mathcal{F}) = 0$ for $j > 0$.
+This concludes the proof.
+\end{proof}
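+
+\medskip\noindent
+In other words, the Leray spectral sequence in the proof above degenerates:
+since $R^i\epsilon_{X, *}(a_X^{-1}\mathcal{F}) = 0$ for $i > 0$ we obtain
+$$
+R^qa_{X, *}(a_X^{-1}\mathcal{F}) =
+R^q\pi_{X, *}(\epsilon_{X, *}a_X^{-1}\mathcal{F}) =
+R^q\pi_{X, *}(\pi_X^{-1}\mathcal{F}) = 0
+\quad\text{for } q > 0
+$$
+which is exactly the required vanishing.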
+
+\begin{lemma}
+\label{lemma-compare-cohomology-etale-fppf}
+For a scheme $X$ and $a_X : \Sh((\Sch/X)_{fppf}) \to \Sh(X_\etale)$
+as above:
+\begin{enumerate}
+\item $H^q(X_\etale, \mathcal{F}) = H^q_{fppf}(X, a_X^{-1}\mathcal{F})$
+for an abelian sheaf $\mathcal{F}$ on $X_\etale$,
+\item $H^q(X_\etale, K) = H^q_{fppf}(X, a_X^{-1}K)$ for $K \in D^+(X_\etale)$.
+\end{enumerate}
+Example: if $A$ is an abelian group, then
+$H^q_\etale(X, \underline{A}) = H^q_{fppf}(X, \underline{A})$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-cohomological-descent-etale-fppf}
+by Cohomology on Sites, Remark \ref{sites-cohomology-remark-before-Leray}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Comparing fppf and \'etale topologies: modules}
+\label{section-fppf-etale-modules}
+
+\noindent
+We continue the discussion in Section \ref{section-fppf-etale} but in this
+section we briefly discuss what happens for sheaves of modules.
+
+\medskip\noindent
+Let $S$ be a scheme. The morphisms of sites $\epsilon_S$, $\pi_S$, and
+their composition $a_S$ introduced in Section \ref{section-fppf-etale}
+have natural enhancements to morphisms of ringed sites. The first
+is written as
+$$
+\epsilon_S :
+((\Sch/S)_{fppf}, \mathcal{O})
+\longrightarrow
+((\Sch/S)_\etale, \mathcal{O})
+$$
+Note that we can use the same symbol for the structure sheaf as indeed
+the sheaves have the same underlying presheaf. The second is
+$$
+\pi_S :
+((\Sch/S)_\etale, \mathcal{O})
+\longrightarrow
+(S_\etale, \mathcal{O}_S)
+$$
+The third is the morphism
+$$
+a_S :
+((\Sch/S)_{fppf}, \mathcal{O})
+\longrightarrow
+(S_\etale, \mathcal{O}_S)
+$$
+We already know that the category of quasi-coherent modules on the
+scheme $S$ is the same as the category of quasi-coherent modules
+on $(S_\etale, \mathcal{O}_S)$, see
+Descent, Proposition \ref{descent-proposition-equivalence-quasi-coherent}.
+Since we are interested in stating a comparison between
+\'etale and fppf cohomology, we will in the rest of this
+section think of quasi-coherent sheaves in terms of the
+small \'etale site.
+Let us review what we already know about quasi-coherent
+modules on these sites.
+
+\begin{lemma}
+\label{lemma-review-quasi-coherent}
+Let $S$ be a scheme. Let $\mathcal{F}$ be a quasi-coherent
+$\mathcal{O}_S$-module on $S_\etale$.
+\begin{enumerate}
+\item The rule
+$$
+\mathcal{F}^a : (\Sch/S)_\etale \longrightarrow \textit{Ab},\quad
+(f : T \to S) \longmapsto \Gamma(T, f_{small}^*\mathcal{F})
+$$
+satisfies the sheaf condition for fppf and a fortiori \'etale coverings,
+\item $\mathcal{F}^a = \pi_S^*\mathcal{F}$ on $(\Sch/S)_\etale$,
+\item $\mathcal{F}^a = a_S^*\mathcal{F}$ on $(\Sch/S)_{fppf}$,
+\item the rule $\mathcal{F} \mapsto \mathcal{F}^a$ defines
+an equivalence between quasi-coherent $\mathcal{O}_S$-modules
+and quasi-coherent modules on
+$((\Sch/S)_\etale, \mathcal{O})$,
+\item the rule $\mathcal{F} \mapsto \mathcal{F}^a$ defines
+an equivalence between quasi-coherent $\mathcal{O}_S$-modules
+and quasi-coherent modules on
+$((\Sch/S)_{fppf}, \mathcal{O})$,
+\item we have $\epsilon_{S, *}a_S^*\mathcal{F} = \pi_S^*\mathcal{F}$
+and $a_{S, *}a_S^*\mathcal{F} = \mathcal{F}$,
+\item we have $R^i\epsilon_{S, *}(a_S^*\mathcal{F}) = 0$
+and $R^ia_{S, *}(a_S^*\mathcal{F}) = 0$ for $i > 0$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We urge the reader to find their own proof of these results
+based on the material in
+Descent, Sections \ref{descent-section-quasi-coherent-sheaves},
+\ref{descent-section-quasi-coherent-cohomology}, and
+\ref{descent-section-quasi-coherent-sheaves-bis}.
+
+\medskip\noindent
+We first explain why the notation in this lemma is consistent with our
+earlier use of the notation $\mathcal{F}^a$ in
+Sections \ref{section-quasi-coherent} and
+\ref{section-cohomology-quasi-coherent}
+and in
+Descent, Section \ref{descent-section-quasi-coherent-sheaves}.
+Namely, we know by
+Descent, Proposition \ref{descent-proposition-equivalence-quasi-coherent}
+that there exists a quasi-coherent module
+$\mathcal{F}_0$ on the scheme $S$ (in other words on the small
+Zariski site) such that $\mathcal{F}$ is the restriction of the
+rule
+$$
+\mathcal{F}_0^a : (\Sch/S)_\etale \longrightarrow \textit{Ab},\quad
+(f : T \to S) \longmapsto \Gamma(T, f^*\mathcal{F}_0)
+$$
+to the subcategory $S_\etale \subset (\Sch/S)_\etale$
+where here $f^*$ denotes usual pullback of sheaves of modules on schemes.
+Since $\mathcal{F}_0^a$ is pullback by the morphism of ringed
+sites
+$$
+((\Sch/S)_\etale, \mathcal{O}) \longrightarrow (S_{Zar}, \mathcal{O}_{S_{Zar}})
+$$
+by Descent, Remark \ref{descent-remark-change-topologies-ringed-sites}
+it follows immediately (from composition of pullbacks) that
+$\mathcal{F}^a = \mathcal{F}_0^a$. This proves the sheaf property
+even for fpqc coverings by
+Descent, Lemma \ref{descent-lemma-sheaf-condition-holds} (see also
+Proposition \ref{proposition-quasi-coherent-sheaf-fpqc}).
+Then (2) and (3) follow
+again by Descent, Remark \ref{descent-remark-change-topologies-ringed-sites}
+and (4) and (5) follow from
+Descent, Proposition \ref{descent-proposition-equivalence-quasi-coherent}
+(see also the meta result
+Theorem \ref{theorem-quasi-coherent}).
+
+\medskip\noindent
+Part (6) is immediate from the description of the sheaf
+$\mathcal{F}^a = \pi_S^*\mathcal{F} = a_S^*\mathcal{F}$.
+
+\medskip\noindent
+For any abelian sheaf $\mathcal{H}$ on $(\Sch/S)_{fppf}$ the
+higher direct image $R^p\epsilon_{S, *}\mathcal{H}$ is the sheaf
+associated to the presheaf $U \mapsto H^p_{fppf}(U, \mathcal{H})$
+on $(\Sch/S)_\etale$. See
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-higher-direct-images}.
+Hence to prove
+$R^p\epsilon_{S, *}a_S^*\mathcal{F} = R^p\epsilon_{S, *}\mathcal{F}^a = 0$
+for $p > 0$ it suffices to show that any scheme $U$ over $S$
+has an \'etale covering $\{U_i \to U\}_{i \in I}$ such that
+$H^p_{fppf}(U_i, \mathcal{F}^a) = 0$ for $p > 0$.
+If we take an open covering by affines, then the required
+vanishing follows from comparison with usual cohomology
+(Descent, Proposition \ref{descent-proposition-same-cohomology-quasi-coherent}
+or
+Theorem \ref{theorem-zariski-fpqc-quasi-coherent})
+and the vanishing of cohomology of quasi-coherent sheaves
+on affine schemes afforded by Cohomology of Schemes, Lemma
+\ref{coherent-lemma-quasi-coherent-affine-cohomology-zero}.
+
+\medskip\noindent
+To show that $R^pa_{S, *}a_S^*\mathcal{F} = R^pa_{S, *}\mathcal{F}^a = 0$
+for $p > 0$ we argue in exactly the same manner. This finishes the proof.
+\end{proof}
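+
+\medskip\noindent
+Concretely, in the final step one covers a scheme $U$ over $S$ by open
+affines $U_i$; writing $f : U \to S$ for the structure morphism and
+$\mathcal{F}_0$ for the quasi-coherent module on $S$ corresponding to
+$\mathcal{F}$ as in the proof, the cited comparison results give
+$$
+H^p_{fppf}(U_i, \mathcal{F}^a) = H^p(U_i, (f^*\mathcal{F}_0)|_{U_i}) = 0
+\quad\text{for } p > 0
+$$
+where the final vanishing is the vanishing of cohomology of
+quasi-coherent modules on affine schemes.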
+
+\begin{lemma}
+\label{lemma-cohomological-descent-etale-fppf-modules}
+Let $S$ be a scheme. For $\mathcal{F}$ a quasi-coherent
+$\mathcal{O}_S$-module on $S_\etale$ the maps
+$$
+\pi_S^*\mathcal{F} \longrightarrow R\epsilon_{S, *}(a_S^*\mathcal{F})
+\quad\text{and}\quad
+\mathcal{F} \longrightarrow Ra_{S, *}(a_S^*\mathcal{F})
+$$
+are isomorphisms with
+$a_S : \Sh((\Sch/S)_{fppf}) \to \Sh(S_\etale)$ as above.
+\end{lemma}
+
+\begin{proof}
+This is an immediate consequence of
+parts (6) and (7) of
+Lemma \ref{lemma-review-quasi-coherent}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomological-descent-complex-modules}
+Let $S = \Spec(A)$ be an affine scheme. Let $M^\bullet$ be a complex
+of $A$-modules. Consider the complex $\mathcal{F}^\bullet$ of
+presheaves of $\mathcal{O}$-modules on
+$(\textit{Aff}/S)_{fppf}$ given by the rule
+$$
+(U/S) = (\Spec(B)/\Spec(A)) \longmapsto M^\bullet \otimes_A B
+$$
+Then this is a complex of modules and the canonical map
+$$
+M^\bullet \longrightarrow
+R\Gamma((\textit{Aff}/S)_{fppf}, \mathcal{F}^\bullet)
+$$
+is a quasi-isomorphism.
+\end{lemma}
+
+\begin{proof}
+Each $\mathcal{F}^n$ is a sheaf of modules as it agrees with the
+restriction of the module $\mathcal{G}^n = (\widetilde{M}^n)^a$ of
+Lemma \ref{lemma-review-quasi-coherent} to
+$(\textit{Aff}/S)_{fppf} \subset (\Sch/S)_{fppf}$.
+Since this inclusion defines an equivalence of ringed topoi
+(Topologies, Lemma \ref{topologies-lemma-affine-big-site-fppf}),
+we have
+$$
+R\Gamma((\textit{Aff}/S)_{fppf}, \mathcal{F}^\bullet) =
+R\Gamma((\Sch/S)_{fppf}, \mathcal{G}^\bullet)
+$$
+We observe that $M^\bullet = R\Gamma(S, \widetilde{M}^\bullet)$
+for example by Derived Categories of Schemes, Lemma
+\ref{perfect-lemma-affine-compare-bounded}. Hence we are trying
+to show the comparison map
+$$
+R\Gamma(S, \widetilde{M}^\bullet)
+\longrightarrow
+R\Gamma((\Sch/S)_{fppf}, (\widetilde{M}^\bullet)^a)
+$$
+is an isomorphism. If $M^\bullet$ is bounded below, then this
+holds by Descent, Proposition
+\ref{descent-proposition-same-cohomology-quasi-coherent}
+and the first spectral sequence of Derived Categories, Lemma
+\ref{derived-lemma-two-ss-complex-functor}.
+For the general case, let us write $M^\bullet = \lim M_n^\bullet$
+with $M_n^\bullet = \tau_{\geq -n}M^\bullet$. Then for each $p$ the system
+$M_n^p$ is eventually constant with value $M^p$. We claim that
+$$
+(\widetilde{M}^\bullet)^a = R\lim (\widetilde{M}_n^\bullet)^a
+$$
+Namely, it suffices to show that the natural map from left to right
+induces an isomorphism on cohomology over any affine object
+$U = \Spec(B)$ of $(\Sch/S)_{fppf}$. For $i \in \mathbf{Z}$ and
+$n > |i|$ we have
+$$
+H^i(U, (\widetilde{M}_n^\bullet)^a) =
+H^i(\tau_{\geq -n}M^\bullet \otimes_A B) =
+H^i(M^\bullet \otimes_A B)
+$$
+The first equality holds by the bounded below case treated above.
+Thus we see from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-RGamma-commutes-with-Rlim}
+that the claim holds. Then we finally get
+\begin{align*}
+R\Gamma((\Sch/S)_{fppf}, (\widetilde{M}^\bullet)^a)
+& =
+R\Gamma((\Sch/S)_{fppf}, R\lim (\widetilde{M}_n^\bullet)^a) \\
+& =
+R\lim R\Gamma((\Sch/S)_{fppf}, (\widetilde{M}_n^\bullet)^a) \\
+& =
+R\lim M_n^\bullet \\
+& =
+M^\bullet
+\end{align*}
+as desired. The second equality holds because $R\lim$ commutes with
+$R\Gamma$, see Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-RGamma-commutes-with-Rlim}.
+\end{proof}
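+
+\medskip\noindent
+As a reminder on the truncations used above: the term of
+$\tau_{\geq -n}M^\bullet$ in degree $p$ is $M^p$ for $p > -n$, is
+$\Coker(M^{-n - 1} \to M^{-n})$ for $p = -n$, and is $0$ for $p < -n$.
+Hence for fixed $p$ and all $n > -p$ we have $M_n^p = M^p$, which is
+the sense in which the system $M_n^p$ is eventually constant.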
+
+
+
+
+
+
+
+
+
+
+\section{Comparing ph and \'etale topologies}
+\label{section-ph-etale}
+
+\noindent
+A model for this section is the section on the comparison of the
+usual topology and the qc topology on locally compact topological
+spaces as discussed in Cohomology on Sites, Section
+\ref{sites-cohomology-section-cohomology-LC}.
+We first review some material from
+Topologies, Sections
+\ref{topologies-section-change-topologies} and
+\ref{topologies-section-etale}.
+
+\medskip\noindent
+Let $S$ be a scheme and let $(\Sch/S)_{ph}$ be a ph site.
+On the same underlying category we have a second topology,
+namely the \'etale topology, and hence a second site
+$(\Sch/S)_\etale$. The identity functor
+$(\Sch/S)_\etale \to (\Sch/S)_{ph}$ is continuous
+(by More on Morphisms, Lemma \ref{more-morphisms-lemma-fppf-ph}
+and Topologies, Lemma
+\ref{topologies-lemma-zariski-etale-smooth-syntomic-fppf})
+and defines a morphism of sites
+$$
+\epsilon_S : (\Sch/S)_{ph} \longrightarrow (\Sch/S)_\etale
+$$
+See Cohomology on Sites, Section \ref{sites-cohomology-section-compare}.
+Please note that $\epsilon_{S, *}$ is the identity functor on underlying
+presheaves and that $\epsilon_S^{-1}$ associates to an \'etale sheaf the
+ph sheafification.
+Let $S_\etale$ be the small \'etale site.
+There is a morphism of sites
+$$
+\pi_S : (\Sch/S)_\etale \longrightarrow S_\etale
+$$
+given by the continuous functor
+$S_\etale \to (\Sch/S)_\etale$, $U \mapsto U$.
+Namely, $S_\etale$ has fibre products and a final object and the
+functor above commutes with these and
+Sites, Proposition \ref{sites-proposition-get-morphism} applies.
+
+\begin{lemma}
+\label{lemma-describe-pullback-pi-ph}
+With notation as above.
+Let $\mathcal{F}$ be a sheaf on $S_\etale$. The rule
+$$
+(\Sch/S)_{ph} \longrightarrow \textit{Sets},\quad
+(f : X \to S) \longmapsto \Gamma(X, f_{small}^{-1}\mathcal{F})
+$$
+is a sheaf and a fortiori a sheaf on $(\Sch/S)_\etale$.
+In fact this sheaf is equal to
+$\pi_S^{-1}\mathcal{F}$ on $(\Sch/S)_\etale$ and
+$\epsilon_S^{-1}\pi_S^{-1}\mathcal{F}$ on $(\Sch/S)_{ph}$.
+\end{lemma}
+
+\begin{proof}
+The statement about the \'etale topology is the content
+of Lemma \ref{lemma-describe-pullback}. To finish the proof it
+suffices to show that $\pi_S^{-1}\mathcal{F}$ is a sheaf for the ph
+topology. By Topologies, Lemma \ref{topologies-lemma-characterize-sheaf}
+it suffices to show that given a proper surjective morphism
+$V \to U$ of schemes over $S$ we have an equalizer diagram
+$$
+\xymatrix{
+(\pi_S^{-1}\mathcal{F})(U) \ar[r] &
+(\pi_S^{-1}\mathcal{F})(V) \ar@<1ex>[r] \ar@<-1ex>[r] &
+(\pi_S^{-1}\mathcal{F})(V \times_U V)
+}
+$$
+Set $\mathcal{G} = \pi_S^{-1}\mathcal{F}|_{U_\etale}$.
+Consider the commutative diagram
+$$
+\xymatrix{
+V \times_U V \ar[r] \ar[rd]_g \ar[d] & V \ar[d]^f \\
+V \ar[r]^f & U
+}
+$$
+We have
+$$
+(\pi_S^{-1}\mathcal{F})(V) = \Gamma(V, f^{-1}\mathcal{G}) =
+\Gamma(U, f_*f^{-1}\mathcal{G})
+$$
+where we use $f_*$ and $f^{-1}$ to denote functorialities between
+small \'etale sites. Second, we have
+$$
+(\pi_S^{-1}\mathcal{F})(V \times_U V) =
+\Gamma(V \times_U V, g^{-1}\mathcal{G}) =
+\Gamma(U, g_*g^{-1}\mathcal{G})
+$$
+The two maps in the equalizer diagram come from the two maps
+$$
+f_*f^{-1}\mathcal{G} \longrightarrow g_*g^{-1}\mathcal{G}
+$$
+Thus it suffices to prove $\mathcal{G}$ is
+the equalizer of these two maps of sheaves.
+Let $\overline{u}$ be a geometric point of $U$. Set
+$\Omega = \mathcal{G}_{\overline{u}}$.
+Taking stalks at $\overline{u}$ by
+Lemma \ref{lemma-proper-pushforward-stalk}
+we obtain the two maps
+$$
+H^0(V_{\overline{u}}, \underline{\Omega}) \longrightarrow
+H^0((V \times_U V)_{\overline{u}}, \underline{\Omega}) =
+H^0(V_{\overline{u}} \times_{\overline{u}} V_{\overline{u}},
+\underline{\Omega})
+$$
+where $\underline{\Omega}$ indicates the constant sheaf with value
+$\Omega$. Of course these maps are the pullback by the projection maps.
+Then it is clear that the sections coming from pullback
+by projection onto the first factor are constant on the fibres of
+the first projection, and sections coming from pullback
+by projection onto the second factor are constant on the fibres of
+the second projection. The sections in the intersection of the images
+of these pullback maps are constant on all of
+$V_{\overline{u}} \times_{\overline{u}} V_{\overline{u}}$, i.e.,
+these come from elements of $\Omega$ as desired.
+\end{proof}
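+
+\medskip\noindent
+Informally, the last step of the proof can be phrased as follows.
+A section $s$ lying in the images of both pullback maps can be written
+as $s(v_1, v_2) = t(v_1) = t'(v_2)$ for suitable sections $t, t'$ of
+$\underline{\Omega}$ over $V_{\overline{u}}$. Since $V_{\overline{u}}$
+is nonempty ($V \to U$ being surjective), fixing any point $v_2$ shows
+that $t(v_1) = t'(v_2)$ does not depend on $v_1$. Hence $s$ is the
+constant section with value in $\Omega$ as claimed.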
+
+\noindent
+In the situation of Lemma \ref{lemma-describe-pullback-pi-ph}
+the composition of $\epsilon_S$ and $\pi_S$ and the equality
+determine a morphism of sites
+$$
+a_S : (\Sch/S)_{ph} \longrightarrow S_\etale
+$$
+
+\begin{lemma}
+\label{lemma-push-pull-ph-etale}
+With notation as above.
+Let $f : X \to Y$ be a morphism of $(\Sch/S)_{ph}$.
+Then there are commutative diagrams of topoi
+$$
+\xymatrix{
+\Sh((\Sch/X)_{ph}) \ar[rr]_{f_{big, ph}} \ar[d]_{\epsilon_X} & &
+\Sh((\Sch/Y)_{ph}) \ar[d]^{\epsilon_Y} \\
+\Sh((\Sch/X)_\etale) \ar[rr]^{f_{big, \etale}} & &
+\Sh((\Sch/Y)_\etale)
+}
+$$
+and
+$$
+\xymatrix{
+\Sh((\Sch/X)_{ph}) \ar[rr]_{f_{big, ph}} \ar[d]_{a_X} & &
+\Sh((\Sch/Y)_{ph}) \ar[d]^{a_Y} \\
+\Sh(X_\etale) \ar[rr]^{f_{small}} & &
+\Sh(Y_\etale)
+}
+$$
+with $a_X = \pi_X \circ \epsilon_X$ and $a_Y = \pi_Y \circ \epsilon_Y$.
+\end{lemma}
+
+\begin{proof}
+The commutativity of the diagrams follows from the discussion in
+Topologies, Section \ref{topologies-section-change-topologies}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-push-pull-ph-etale}
+In Lemma \ref{lemma-push-pull-ph-etale} if $f$ is proper, then we have
+$a_Y^{-1} \circ f_{small, *} = f_{big, ph, *} \circ a_X^{-1}$.
+\end{lemma}
+
+\begin{proof}
+You can prove this by repeating the proof of
+Lemma \ref{lemma-compare-higher-direct-image-proper} part (1);
+we will instead deduce the result from this.
+As $\epsilon_{Y, *}$ is the identity functor on underlying presheaves,
+it reflects isomorphisms. The description
+in Lemma \ref{lemma-describe-pullback-pi-ph}
+shows that $\epsilon_{Y, *} \circ a_Y^{-1} = \pi_Y^{-1}$
+and similarly for $X$. To show that the canonical map
+$a_Y^{-1}f_{small, *}\mathcal{F} \to f_{big, ph, *}a_X^{-1}\mathcal{F}$
+is an isomorphism, it suffices to show that
+\begin{align*}
+\pi_Y^{-1}f_{small, *}\mathcal{F}
+& =
+\epsilon_{Y, *}a_Y^{-1}f_{small, *}\mathcal{F} \\
+& \to
+\epsilon_{Y, *}f_{big, ph, *}a_X^{-1}\mathcal{F} \\
+& =
+f_{big, \etale, *} \epsilon_{X, *}a_X^{-1}\mathcal{F} \\
+& =
+f_{big, \etale, *}\pi_X^{-1}\mathcal{F}
+\end{align*}
+is an isomorphism. This is part
+(1) of Lemma \ref{lemma-compare-higher-direct-image-proper}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-ph-etale}
+Consider the comparison morphism
+$\epsilon : (\Sch/S)_{ph} \to (\Sch/S)_\etale$.
+Let $\mathcal{P}$ denote the class of proper morphisms of schemes.
+For $X$ in $(\Sch/S)_\etale$ denote
+$\mathcal{A}'_X \subset \textit{Ab}((\Sch/X)_\etale)$
+the full subcategory consisting of sheaves of the form
+$\pi_X^{-1}\mathcal{F}$ where $\mathcal{F}$ is a
+torsion abelian sheaf on $X_\etale$.
+Then Cohomology on Sites, Properties
+(\ref{sites-cohomology-item-base-change-P}),
+(\ref{sites-cohomology-item-restriction-A}),
+(\ref{sites-cohomology-item-A-sheaf}),
+(\ref{sites-cohomology-item-A-and-P}), and
+(\ref{sites-cohomology-item-refine-tau-by-P})
+of Cohomology on Sites, Situation
+\ref{sites-cohomology-situation-compare} hold.
+\end{lemma}
+
+\begin{proof}
+We first show that $\mathcal{A}'_X \subset \textit{Ab}((\Sch/X)_\etale)$
+is a weak Serre subcategory by checking conditions (1), (2), (3), and (4)
+of Homology, Lemma \ref{homology-lemma-characterize-weak-serre-subcategory}.
+Parts (1), (2), (3) are immediate as $\pi_X^{-1}$ is exact and
+fully faithful for example by Lemma \ref{lemma-cohomological-descent-etale}. If
+$0 \to \pi_X^{-1}\mathcal{F} \to \mathcal{G} \to \pi_X^{-1}\mathcal{F}' \to 0$
+is a short exact sequence in $\textit{Ab}((\Sch/X)_\etale)$
+then $0 \to \mathcal{F} \to \pi_{X, *}\mathcal{G} \to \mathcal{F}' \to 0$
+is exact by Lemma \ref{lemma-cohomological-descent-etale}.
+In particular we see that $\pi_{X, *}\mathcal{G}$ is an abelian
+torsion sheaf on $X_\etale$.
+Hence $\mathcal{G} = \pi_X^{-1}\pi_{X, *}\mathcal{G}$ is in
+$\mathcal{A}'_X$ which checks the final condition.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-base-change-P}) holds
+by the existence of fibre products of schemes
+and the fact that the base change of a proper morphism of
+schemes is a proper morphism of schemes, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-restriction-A})
+follows from the commutative diagram (3) in
+Topologies, Lemma \ref{topologies-lemma-morphism-big-small-etale}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-A-sheaf}) is
+Lemma \ref{lemma-describe-pullback-pi-ph}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-A-and-P}) holds by
+Lemma \ref{lemma-compare-higher-direct-image-proper} part (2)
+and the fact that $R^if_{small, *}\mathcal{F}$
+is torsion if $\mathcal{F}$ is an abelian torsion
+sheaf on $X_\etale$, see Lemma \ref{lemma-torsion-direct-image}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-refine-tau-by-P})
+follows from More on Morphisms, Lemma
+\ref{more-morphisms-lemma-dominate-fppf-etale-locally}
+combined with the fact that a finite morphism is proper
+and a surjective proper morphism defines a ph covering, see
+Topologies, Lemma \ref{topologies-lemma-surjective-proper-ph}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-V-C-all-n-etale-ph}
+With notation as above.
+\begin{enumerate}
+\item For $X \in \Ob((\Sch/S)_{ph})$ and an abelian torsion sheaf $\mathcal{F}$
+on $X_\etale$ we have
+$\epsilon_{X, *}a_X^{-1}\mathcal{F} = \pi_X^{-1}\mathcal{F}$
+and $R^i\epsilon_{X, *}(a_X^{-1}\mathcal{F}) = 0$ for $i > 0$.
+\item For a proper morphism $f : X \to Y$ in $(\Sch/S)_{ph}$
+and abelian torsion sheaf $\mathcal{F}$ on $X$ we have
+$a_Y^{-1}(R^if_{small, *}\mathcal{F}) =
+R^if_{big, ph, *}(a_X^{-1}\mathcal{F})$
+for all $i$.
+\item For a scheme $X$ and $K$ in $D^+(X_\etale)$ with torsion
+cohomology sheaves the map
+$\pi_X^{-1}K \to R\epsilon_{X, *}(a_X^{-1}K)$ is an isomorphism.
+\item For a proper morphism $f : X \to Y$ of schemes
+and $K$ in $D^+(X_\etale)$ with torsion cohomology sheaves we have
+$a_Y^{-1}(Rf_{small, *}K) = Rf_{big, ph, *}(a_X^{-1}K)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-compare-ph-etale} the lemmas in
+Cohomology on Sites, Section \ref{sites-cohomology-section-compare-general}
+all apply to our current setting. To translate the results
+observe that the category $\mathcal{A}_X$ of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-A}
+is the full subcategory of $\textit{Ab}((\Sch/X)_{ph})$
+consisting of sheaves of the form $a_X^{-1}\mathcal{F}$
+where $\mathcal{F}$ is an abelian torsion sheaf on $X_\etale$.
+
+\medskip\noindent
+Part (1) is equivalent to $(V_n)$ for all $n$ which holds by
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-V-C-all-n-general}.
+
+\medskip\noindent
+Part (2) follows by applying $\epsilon_Y^{-1}$ to the conclusion of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-V-implies-C-general}.
+
+\medskip\noindent
+Part (3) follows from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-V-C-all-n-general} part (1)
+because $\pi_X^{-1}K$ is in $D^+_{\mathcal{A}'_X}((\Sch/X)_\etale)$
+and $a_X^{-1} = \epsilon_X^{-1} \circ \pi_X^{-1}$.
+
+\medskip\noindent
+Part (4) follows from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-V-C-all-n-general} part (2)
+for the same reason.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomological-descent-etale-ph}
+Let $X$ be a scheme. For $K \in D^+(X_\etale)$ with torsion cohomology
+sheaves the map
+$$
+K \longrightarrow Ra_{X, *}a_X^{-1}K
+$$
+is an isomorphism with $a_X : \Sh((\Sch/X)_{ph}) \to \Sh(X_\etale)$ as above.
+\end{lemma}
+
+\begin{proof}
+We first reduce the statement to the case where
+$K$ is given by a single abelian sheaf. Namely, represent $K$
+by a bounded below complex $\mathcal{F}^\bullet$ of torsion
+abelian sheaves. This is possible by Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-torsion}. By the case of a
+sheaf we see that
+$\mathcal{F}^n = a_{X, *} a_X^{-1} \mathcal{F}^n$
+and that the sheaves $R^qa_{X, *}a_X^{-1}\mathcal{F}^n$
+are zero for $q > 0$. By Leray's acyclicity lemma
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity})
+applied to $a_X^{-1}\mathcal{F}^\bullet$
+and the functor $a_{X, *}$ we conclude. From now on assume $K = \mathcal{F}$
+where $\mathcal{F}$ is a torsion abelian sheaf.
+
+\medskip\noindent
+By Lemma \ref{lemma-describe-pullback-pi-ph} we have
+$a_{X, *}a_X^{-1}\mathcal{F} = \mathcal{F}$. Thus it suffices to show that
+$R^qa_{X, *}a_X^{-1}\mathcal{F} = 0$ for $q > 0$.
+For this we can use $a_X = \epsilon_X \circ \pi_X$ and
+the Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-relative-Leray}).
+By Lemma \ref{lemma-V-C-all-n-etale-ph}
+we have $R^i\epsilon_{X, *}(a_X^{-1}\mathcal{F}) = 0$ for $i > 0$
+and $\epsilon_{X, *}a_X^{-1}\mathcal{F} = \pi_X^{-1}\mathcal{F}$.
+By Lemma \ref{lemma-cohomological-descent-etale} we have
+$R^j\pi_{X, *}(\pi_X^{-1}\mathcal{F}) = 0$ for $j > 0$.
+This concludes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-cohomology-etale-ph}
+For a scheme $X$ and $a_X : \Sh((\Sch/X)_{ph}) \to \Sh(X_\etale)$
+as above:
+\begin{enumerate}
+\item $H^q(X_\etale, \mathcal{F}) = H^q_{ph}(X, a_X^{-1}\mathcal{F})$
+for a torsion abelian sheaf $\mathcal{F}$ on $X_\etale$,
+\item $H^q(X_\etale, K) = H^q_{ph}(X, a_X^{-1}K)$
+for $K \in D^+(X_\etale)$ with torsion cohomology sheaves.
+\end{enumerate}
+Example: if $A$ is a torsion abelian group, then
+$H^q_\etale(X, \underline{A}) = H^q_{ph}(X, \underline{A})$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-cohomological-descent-etale-ph}
+by Cohomology on Sites, Remark \ref{sites-cohomology-remark-before-Leray}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Comparing h and \'etale topologies}
+\label{section-h-etale}
+
+\noindent
+A model for this section is the section on the comparison of the
+usual topology and the qc topology on locally compact topological
+spaces as discussed in
+Cohomology on Sites, Section \ref{sites-cohomology-section-cohomology-LC}.
+Moreover, this section is almost word for word the same as the
+section comparing the ph and \'etale topologies.
+We first review some material from
+Topologies, Sections
+\ref{topologies-section-change-topologies} and
+\ref{topologies-section-etale} and
+More on Flatness, Section \ref{flat-section-h}.
+
+\medskip\noindent
+Let $S$ be a scheme and let $(\Sch/S)_h$ be an h site.
+On the same underlying category we have a second topology,
+namely the \'etale topology, and hence a second site
+$(\Sch/S)_\etale$. The identity functor
+$(\Sch/S)_\etale \to (\Sch/S)_h$ is continuous
+(by More on Flatness, Lemma \ref{flat-lemma-zariski-h}
+and Topologies, Lemma
+\ref{topologies-lemma-zariski-etale-smooth-syntomic-fppf})
+and defines a morphism of sites
+$$
+\epsilon_S : (\Sch/S)_h \longrightarrow (\Sch/S)_\etale
+$$
+See Cohomology on Sites, Section \ref{sites-cohomology-section-compare}.
+Please note that $\epsilon_{S, *}$ is the identity functor on underlying
+presheaves and that $\epsilon_S^{-1}$ associates to an \'etale sheaf the
+h sheafification. Let $S_\etale$ be the small \'etale site.
+There is a morphism of sites
+$$
+\pi_S : (\Sch/S)_\etale \longrightarrow S_\etale
+$$
+given by the continuous functor
+$S_\etale \to (\Sch/S)_\etale$, $U \mapsto U$.
Namely, $S_\etale$ has fibre products and a final object, the
functor above commutes with these, and hence
Sites, Proposition \ref{sites-proposition-get-morphism} applies.
+
+\begin{lemma}
+\label{lemma-describe-pullback-pi-h}
+With notation as above.
+Let $\mathcal{F}$ be a sheaf on $S_\etale$. The rule
+$$
+(\Sch/S)_h \longrightarrow \textit{Sets},\quad
+(f : X \to S) \longmapsto \Gamma(X, f_{small}^{-1}\mathcal{F})
+$$
+is a sheaf and a fortiori a sheaf on $(\Sch/S)_\etale$.
+In fact this sheaf is equal to
+$\pi_S^{-1}\mathcal{F}$ on $(\Sch/S)_\etale$ and
+$\epsilon_S^{-1}\pi_S^{-1}\mathcal{F}$ on $(\Sch/S)_h$.
+\end{lemma}
+
+\begin{proof}
+The statement about the \'etale topology is the content
+of Lemma \ref{lemma-describe-pullback}. To finish the proof it
+suffices to show that $\pi_S^{-1}\mathcal{F}$ is a sheaf for the h
+topology. However, in Lemma \ref{lemma-describe-pullback-pi-ph}
+we have shown that
+$\pi_S^{-1}\mathcal{F}$ is a sheaf even in the stronger ph
+topology.
+\end{proof}
+
+\noindent
+In the situation of Lemma \ref{lemma-describe-pullback-pi-h}
the composition $\pi_S \circ \epsilon_S$ and the equality of the lemma
determine a morphism of sites
+$$
+a_S : (\Sch/S)_h \longrightarrow S_\etale
+$$
+
+\begin{lemma}
+\label{lemma-push-pull-h-etale}
+With notation as above.
+Let $f : X \to Y$ be a morphism of $(\Sch/S)_h$.
+Then there are commutative diagrams of topoi
+$$
+\xymatrix{
+\Sh((\Sch/X)_h) \ar[rr]_{f_{big, h}} \ar[d]_{\epsilon_X} & &
+\Sh((\Sch/Y)_h) \ar[d]^{\epsilon_Y} \\
+\Sh((\Sch/X)_\etale) \ar[rr]^{f_{big, \etale}} & &
+\Sh((\Sch/Y)_\etale)
+}
+$$
+and
+$$
+\xymatrix{
+\Sh((\Sch/X)_h) \ar[rr]_{f_{big, h}} \ar[d]_{a_X} & &
+\Sh((\Sch/Y)_h) \ar[d]^{a_Y} \\
+\Sh(X_\etale) \ar[rr]^{f_{small}} & &
+\Sh(Y_\etale)
+}
+$$
with $a_X = \pi_X \circ \epsilon_X$ and $a_Y = \pi_Y \circ \epsilon_Y$.
+\end{lemma}
+
+\begin{proof}
+The commutativity of the diagrams follows similarly to what was said in
+Topologies, Section \ref{topologies-section-change-topologies}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-push-pull-h-etale}
+In Lemma \ref{lemma-push-pull-h-etale} if $f$ is proper, then we have
+$a_Y^{-1} \circ f_{small, *} = f_{big, h, *} \circ a_X^{-1}$.
+\end{lemma}
+
+\begin{proof}
One can prove this by repeating the proof of
Lemma \ref{lemma-compare-higher-direct-image-proper} part (1);
we will instead deduce the result from that lemma.
+As $\epsilon_{Y, *}$ is the identity functor on underlying presheaves,
+it reflects isomorphisms. The description
+in Lemma \ref{lemma-describe-pullback-pi-h}
+shows that $\epsilon_{Y, *} \circ a_Y^{-1} = \pi_Y^{-1}$
+and similarly for $X$. To show that the canonical map
+$a_Y^{-1}f_{small, *}\mathcal{F} \to f_{big, h, *}a_X^{-1}\mathcal{F}$
+is an isomorphism, it suffices to show that
+\begin{align*}
+\pi_Y^{-1}f_{small, *}\mathcal{F}
+& =
+\epsilon_{Y, *}a_Y^{-1}f_{small, *}\mathcal{F} \\
+& \to
+\epsilon_{Y, *}f_{big, h, *}a_X^{-1}\mathcal{F} \\
+& =
+f_{big, \etale, *} \epsilon_{X, *}a_X^{-1}\mathcal{F} \\
+& =
+f_{big, \etale, *}\pi_X^{-1}\mathcal{F}
+\end{align*}
+is an isomorphism. This is part
+(1) of Lemma \ref{lemma-compare-higher-direct-image-proper}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-h-etale}
+Consider the comparison morphism $\epsilon : (\Sch/S)_h \to (\Sch/S)_\etale$.
+Let $\mathcal{P}$ denote the class of proper morphisms.
+For $X$ in $(\Sch/S)_\etale$ denote
+$\mathcal{A}'_X \subset \textit{Ab}((\Sch/X)_\etale)$
+the full subcategory consisting of sheaves of the form
+$\pi_X^{-1}\mathcal{F}$ where $\mathcal{F}$ is a
torsion abelian sheaf on $X_\etale$.
+Then Cohomology on Sites, Properties
+(\ref{sites-cohomology-item-base-change-P}),
+(\ref{sites-cohomology-item-restriction-A}),
+(\ref{sites-cohomology-item-A-sheaf}),
+(\ref{sites-cohomology-item-A-and-P}), and
+(\ref{sites-cohomology-item-refine-tau-by-P})
+of Cohomology on Sites, Situation
+\ref{sites-cohomology-situation-compare} hold.
+\end{lemma}
+
+\begin{proof}
+We first show that $\mathcal{A}'_X \subset \textit{Ab}((\Sch/X)_\etale)$
+is a weak Serre subcategory by checking conditions (1), (2), (3), and (4)
+of Homology, Lemma \ref{homology-lemma-characterize-weak-serre-subcategory}.
Parts (1), (2), and (3) are immediate as $\pi_X^{-1}$ is exact and
fully faithful, for example by Lemma \ref{lemma-cohomological-descent-etale}. If
+$0 \to \pi_X^{-1}\mathcal{F} \to \mathcal{G} \to \pi_X^{-1}\mathcal{F}' \to 0$
+is a short exact sequence in $\textit{Ab}((\Sch/X)_\etale)$
+then $0 \to \mathcal{F} \to \pi_{X, *}\mathcal{G} \to \mathcal{F}' \to 0$
+is exact by Lemma \ref{lemma-cohomological-descent-etale}.
+In particular we see that $\pi_{X, *}\mathcal{G}$ is an abelian
+torsion sheaf on $X_\etale$.
+Hence $\mathcal{G} = \pi_X^{-1}\pi_{X, *}\mathcal{G}$ is in
+$\mathcal{A}'_X$ which checks the final condition.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-base-change-P}) holds
+by the existence of fibre products of schemes,
+the fact that the base change of a proper morphism of
+schemes is a proper morphism of schemes, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-proper}, and
+the fact that the base change of a morphism of finite presentation
+is a morphism of finite presentation, see
+Morphisms, Lemma \ref{morphisms-lemma-base-change-finite-presentation}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-restriction-A})
+follows from the commutative diagram (3) in
+Topologies, Lemma \ref{topologies-lemma-morphism-big-small-etale}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-A-sheaf}) is
+Lemma \ref{lemma-describe-pullback-pi-h}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-A-and-P}) holds by
+Lemma \ref{lemma-compare-higher-direct-image-proper} part (2)
and the fact that $R^if_{small, *}\mathcal{F}$
+is torsion if $\mathcal{F}$ is an abelian torsion
+sheaf on $X_\etale$, see Lemma \ref{lemma-torsion-direct-image}.
+
+\medskip\noindent
+Cohomology on Sites, Property (\ref{sites-cohomology-item-refine-tau-by-P})
+is implied by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-dominate-fppf-etale-locally}
+combined with the fact that a surjective finite locally free morphism
+is surjective, proper, and of finite presentation and hence
+defines a h-covering by More on Flatness, Lemma
+\ref{flat-lemma-surjective-proper-finite-presentation-h}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-V-C-all-n-etale-h}
+With notation as above.
+\begin{enumerate}
+\item For $X \in \Ob((\Sch/S)_{h})$ and an abelian torsion sheaf $\mathcal{F}$
+on $X_\etale$ we have
+$\epsilon_{X, *}a_X^{-1}\mathcal{F} = \pi_X^{-1}\mathcal{F}$
+and $R^i\epsilon_{X, *}(a_X^{-1}\mathcal{F}) = 0$ for $i > 0$.
+\item For a proper morphism $f : X \to Y$ in $(\Sch/S)_h$
and an abelian torsion sheaf $\mathcal{F}$ on $X_\etale$ we have
+$a_Y^{-1}(R^if_{small, *}\mathcal{F}) =
+R^if_{big, h, *}(a_X^{-1}\mathcal{F})$
+for all $i$.
+\item For a scheme $X$ and $K$ in $D^+(X_\etale)$ with torsion
+cohomology sheaves the map
+$\pi_X^{-1}K \to R\epsilon_{X, *}(a_X^{-1}K)$ is an isomorphism.
+\item For a proper morphism $f : X \to Y$ of schemes
+and $K$ in $D^+(X_\etale)$ with torsion cohomology sheaves we have
+$a_Y^{-1}(Rf_{small, *}K) = Rf_{big, h, *}(a_X^{-1}K)$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-compare-h-etale} the lemmas in
+Cohomology on Sites, Section \ref{sites-cohomology-section-compare-general}
+all apply to our current setting. To translate the results
+observe that the category $\mathcal{A}_X$ of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-A}
+is the full subcategory of $\textit{Ab}((\Sch/X)_h)$
+consisting of sheaves of the form $a_X^{-1}\mathcal{F}$
+where $\mathcal{F}$ is an abelian torsion sheaf on $X_\etale$.
+
+\medskip\noindent
Part (1) is equivalent to $(V_n)$ for all $n$, which holds by
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-V-C-all-n-general}.
+
+\medskip\noindent
+Part (2) follows by applying $\epsilon_Y^{-1}$ to the conclusion of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-V-implies-C-general}.
+
+\medskip\noindent
+Part (3) follows from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-V-C-all-n-general} part (1)
+because $\pi_X^{-1}K$ is in $D^+_{\mathcal{A}'_X}((\Sch/X)_\etale)$
and $a_X^{-1} = \epsilon_X^{-1} \circ \pi_X^{-1}$.
+
+\medskip\noindent
+Part (4) follows from Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-V-C-all-n-general} part (2)
+for the same reason.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-cohomological-descent-etale-h}
+Let $X$ be a scheme. For $K \in D^+(X_\etale)$ with torsion cohomology
+sheaves the map
+$$
+K \longrightarrow Ra_{X, *}a_X^{-1}K
+$$
+is an isomorphism with $a_X : \Sh((\Sch/X)_h) \to \Sh(X_\etale)$ as above.
+\end{lemma}
+
+\begin{proof}
+We first reduce the statement to the case where
+$K$ is given by a single abelian sheaf. Namely, represent $K$
+by a bounded below complex $\mathcal{F}^\bullet$ of torsion
+abelian sheaves. This is possible by Cohomology on Sites, Lemma
+\ref{sites-cohomology-lemma-torsion}. By the case of a
+sheaf we see that
+$\mathcal{F}^n = a_{X, *} a_X^{-1} \mathcal{F}^n$
+and that the sheaves $R^qa_{X, *}a_X^{-1}\mathcal{F}^n$
+are zero for $q > 0$. By Leray's acyclicity lemma
+(Derived Categories, Lemma \ref{derived-lemma-leray-acyclicity})
+applied to $a_X^{-1}\mathcal{F}^\bullet$
+and the functor $a_{X, *}$ we conclude. From now on assume $K = \mathcal{F}$
+where $\mathcal{F}$ is a torsion abelian sheaf.
+
+\medskip\noindent
+By Lemma \ref{lemma-describe-pullback-pi-h} we have
+$a_{X, *}a_X^{-1}\mathcal{F} = \mathcal{F}$. Thus it suffices to show that
+$R^qa_{X, *}a_X^{-1}\mathcal{F} = 0$ for $q > 0$.
For this we can use $a_X = \pi_X \circ \epsilon_X$ and
+the Leray spectral sequence
+(Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-relative-Leray}).
+By Lemma \ref{lemma-V-C-all-n-etale-h}
+we have $R^i\epsilon_{X, *}(a_X^{-1}\mathcal{F}) = 0$ for $i > 0$
+and $\epsilon_{X, *}a_X^{-1}\mathcal{F} = \pi_X^{-1}\mathcal{F}$.
+By Lemma \ref{lemma-cohomological-descent-etale} we have
+$R^j\pi_{X, *}(\pi_X^{-1}\mathcal{F}) = 0$ for $j > 0$.
+This concludes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-compare-cohomology-etale-h}
+For a scheme $X$ and $a_X : \Sh((\Sch/X)_h) \to \Sh(X_\etale)$
+as above:
+\begin{enumerate}
+\item $H^q(X_\etale, \mathcal{F}) = H^q_h(X, a_X^{-1}\mathcal{F})$
+for a torsion abelian sheaf $\mathcal{F}$ on $X_\etale$,
+\item $H^q(X_\etale, K) = H^q_h(X, a_X^{-1}K)$
+for $K \in D^+(X_\etale)$ with torsion cohomology sheaves.
+\end{enumerate}
+Example: if $A$ is a torsion abelian group, then
+$H^q_\etale(X, \underline{A}) = H^q_h(X, \underline{A})$.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-cohomological-descent-etale-h}
+by Cohomology on Sites, Remark \ref{sites-cohomology-remark-before-Leray}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+\section{Descending \'etale sheaves}
+\label{section-glueing-etale}
+
+\noindent
We prove that \'etale sheaves ``glue'' in the fppf and h topologies
and we prove some related results. We have already shown the
following:
+\begin{enumerate}
+\item Lemma \ref{lemma-describe-pullback} tells us that a sheaf
+on the small \'etale site of a scheme $S$ determines a sheaf on the
+big \'etale site of $S$ satisfying the sheaf condition for
+fpqc coverings (and a fortiori for Zariski, \'etale, smooth, syntomic,
+and fppf coverings),
+\item Lemma \ref{lemma-describe-pullback-pi-fppf} is a restatement of
+the previous point for the fppf topology,
+\item Lemma \ref{lemma-describe-pullback-pi-ph} proves the same for the
+ph topology,
+\item Lemma \ref{lemma-describe-pullback-pi-h} proves the same for the
+h topology,
+\item Lemma \ref{lemma-descent-sheaf-fppf-etale} is a version of
+fppf descent for \'etale sheaves, and
+\item Remark \ref{remark-cohomological-descent-finite} tells us that
+we have descent of \'etale sheaves for finite surjective morphisms
+(we will clarify and strengthen this below).
+\end{enumerate}
+In the chapter on simplicial spaces we will prove some additional
+results on this, see for example
+Simplicial Spaces, Sections \ref{spaces-simplicial-section-fppf-hypercovering}
+and \ref{spaces-simplicial-section-proper-hypercovering-spaces}.
+
+\medskip\noindent
+In order to conveniently express our results we need some notation.
Let $\mathcal{U} = \{f_i : X_i \to X\}_{i \in I}$
+be a family of morphisms of schemes with fixed target.
+A {\it descent datum} for \'etale sheaves with respect to
+$\mathcal{U}$ is a family
+$((\mathcal{F}_i)_{i \in I}, (\varphi_{ij})_{i, j \in I})$ where
+\begin{enumerate}
+\item $\mathcal{F}_i$ is in $\Sh(X_{i, \etale})$, and
+\item $\varphi_{ij} :
+\text{pr}_{0, small}^{-1} \mathcal{F}_i
+\longrightarrow
+\text{pr}_{1, small}^{-1} \mathcal{F}_j$
+is an isomorphism in $\Sh((X_i \times_X X_j)_\etale)$
+\end{enumerate}
+such that the {\it cocycle condition} holds: the diagrams
+$$
+\xymatrix{
+\text{pr}_{0, small}^{-1}\mathcal{F}_i
+\ar[dr]_{\text{pr}_{02, small}^{-1}\varphi_{ik}}
+\ar[rr]^{\text{pr}_{01, small}^{-1}\varphi_{ij}} & &
+\text{pr}_{1, small}^{-1}\mathcal{F}_j
+\ar[dl]^{\text{pr}_{12, small}^{-1}\varphi_{jk}} \\
+& \text{pr}_{2, small}^{-1}\mathcal{F}_k
+}
+$$
+commute in $\Sh((X_i \times_X X_j \times_X X_k)_\etale)$.
+There is an obvious notion of {\it morphisms of descent data}
+and we obtain a category of descent data.
+A descent datum
+$((\mathcal{F}_i)_{i \in I}, (\varphi_{ij})_{i, j \in I})$
is called {\it effective} if there exist a sheaf
$\mathcal{F}$ in $\Sh(X_\etale)$ and isomorphisms
+$\varphi_i : f_{i, small}^{-1} \mathcal{F} \to \mathcal{F}_i$
+in $\Sh(X_{i, \etale})$ compatible with the $\varphi_{ij}$, i.e.,
+such that
+$$
+\varphi_{ij} =
+\text{pr}_{1, small}^{-1} (\varphi_j) \circ
+\text{pr}_{0, small}^{-1} (\varphi_i^{-1})
+$$
+Another way to say this is the following. Given an object $\mathcal{F}$
+of $\Sh(X_\etale)$ we obtain the {\it canonical descent datum}
$(f_{i, small}^{-1}\mathcal{F}, c_{ij})$ where $c_{ij}$
+is the canonical isomorphism
+$$
+c_{ij} : \text{pr}_{0, small}^{-1} f_{i, small}^{-1}\mathcal{F}
+\longrightarrow
+\text{pr}_{1, small}^{-1} f_{j, small}^{-1}\mathcal{F}
+$$
+The descent datum
+$((\mathcal{F}_i)_{i \in I}, (\varphi_{ij})_{i, j \in I})$
+is effective if and only if it is isomorphic to the canonical
+descent datum associated to some $\mathcal{F}$ in $\Sh(X_\etale)$.
+
+\medskip\noindent
+If the family consists of a single morphism $\{X \to Y\}$,
+then we think of a descent datum as a pair $(\mathcal{F}, \varphi)$
+where $\mathcal{F}$ is an object of $\Sh(X_\etale)$ and
+$\varphi$ is an isomorphism
+$$
+\text{pr}_{0, small}^{-1} \mathcal{F}
+\longrightarrow
+\text{pr}_{1, small}^{-1} \mathcal{F}
+$$
+in $\Sh((X \times_Y X)_\etale)$ such that the cocycle condition holds:
+$$
+\xymatrix{
+\text{pr}_{0, small}^{-1}\mathcal{F}
+\ar[dr]_{\text{pr}_{02, small}^{-1}\varphi}
+\ar[rr]^{\text{pr}_{01, small}^{-1}\varphi} & &
+\text{pr}_{1, small}^{-1}\mathcal{F}
+\ar[dl]^{\text{pr}_{12, small}^{-1}\varphi} \\
+& \text{pr}_{2, small}^{-1}\mathcal{F}
+}
+$$
+commutes in $\Sh((X \times_Y X \times_Y X)_\etale)$.
+There is a notion of morphisms of descent data and effectivity
+exactly as before.
+
+\medskip\noindent
+We first prove effective descent for surjective integral morphisms.
+
+\begin{lemma}
+\label{lemma-glue-etale-sheaf-section}
+Let $f : X \to Y$ be a morphism of schemes which has a section. Then the
+functor
+$$
+\Sh(Y_\etale)
+\longrightarrow
+\text{descent data for \'etale sheaves wrt }\{X \to Y\}
+$$
+sending $\mathcal{G}$ in $\Sh(Y_\etale)$ to the canonical descent datum
+is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+This is formal and depends only on functoriality of the pullback
+functors. We omit the details. Hint: If $s : Y \to X$ is a section,
+then a quasi-inverse is the functor sending $(\mathcal{F}, \varphi)$
+to $s_{small}^{-1}\mathcal{F}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-etale-sheaf-integral-surjective}
+Let $f : X \to Y$ be a surjective integral morphism of schemes.
+The functor
+$$
+\Sh(Y_\etale)
+\longrightarrow
+\text{descent data for \'etale sheaves wrt }\{X \to Y\}
+$$
+is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+In this proof we drop the subscript ${}_{small}$ from our pullback
+and pushforward functors.
+Denote $X_1 = X \times_Y X$ and denote $f_1 : X_1 \to Y$ the
+morphism $f \circ \text{pr}_0 = f \circ \text{pr}_1$.
+Let $(\mathcal{F}, \varphi)$ be a descent datum for $\{X \to Y\}$.
+Let us set $\mathcal{F}_1 = \text{pr}_0^{-1}\mathcal{F}$.
+We may think of $\varphi$ as defining an isomorphism
+$\mathcal{F}_1 \to \text{pr}_1^{-1}\mathcal{F}$.
+We claim that the rule which sends a descent datum
+$(\mathcal{F}, \varphi)$
+to the sheaf
+$$
+\mathcal{G} =
+\text{Equalizer}\left(
+\xymatrix{
+f_*\mathcal{F}
+\ar@<1ex>[r] \ar@<-1ex>[r] &
+f_{1, *}\mathcal{F}_1
+}
+\right)
+$$
+is a quasi-inverse to the functor in the statement of the lemma.
+The first of the two arrows comes from the map
+$$
+f_*\mathcal{F} \to
+f_*\text{pr}_{0, *}\text{pr}_0^{-1}\mathcal{F} =
+f_{1, *}\mathcal{F}_1
+$$
+and the second arrow comes from the map
+$$
+f_*\mathcal{F} \to
+f_* \text{pr}_{1, *}\text{pr}_1^{-1}\mathcal{F}
+\xleftarrow{\varphi}
+f_* \text{pr}_{0, *} \text{pr}_0^{-1}\mathcal{F} =
+f_{1, *}\mathcal{F}_1
+$$
+where the arrow pointing left is invertible.
To prove that this works we have to show
that the canonical map $f^{-1}\mathcal{G} \to \mathcal{F}$
is an isomorphism; we omit the verification that this suffices.
In order to prove this it
suffices to check after pulling back by the collection of all morphisms
$\Spec(k) \to Y$ where $k$ is an algebraically closed field.
Namely, the corresponding base changes $X_k \to X$ are jointly
+surjective and we can check whether a map of sheaves on $X_\etale$
+is an isomorphism by looking at stalks on geometric points, see
+Theorem \ref{theorem-exactness-stalks}.
+By Lemma \ref{lemma-integral-pushforward-commutes-with-base-change}
+the construction of $\mathcal{G}$ from the descent datum
+$(\mathcal{F}, \varphi)$ commutes with any base change.
Thus we may assume $Y$ is the spectrum of an algebraically
closed field (note that base change preserves the properties
of the morphism $f$, see
Morphisms, Lemmas \ref{morphisms-lemma-base-change-surjective}
and \ref{morphisms-lemma-base-change-finite}).
+In this case the morphism $X \to Y$ has a section, so we
+know that the functor is an equivalence by
+Lemma \ref{lemma-glue-etale-sheaf-section}.
+However, the reader may show that the functor is an equivalence
+if and only if the construction above is a quasi-inverse;
+details omitted. This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-etale-sheaf-proper-surjective}
+Let $f : X \to Y$ be a surjective proper morphism of schemes.
+The functor
+$$
+\Sh(Y_\etale)
+\longrightarrow
+\text{descent data for \'etale sheaves wrt }\{X \to Y\}
+$$
+is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+The exact same proof as given in
+Lemma \ref{lemma-glue-etale-sheaf-integral-surjective}
+works, except the appeal to
+Lemma \ref{lemma-integral-pushforward-commutes-with-base-change}
+should be replaced by an appeal to
+Lemma \ref{lemma-proper-base-change-f-star}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-etale-sheaf-check-after-base-change}
+Let $f : X \to Y$ be a morphism of schemes. Let $Z \to Y$ be a surjective
+integral morphism of schemes or a surjective proper morphism of schemes.
+If the functors
+$$
+\Sh(Z_\etale)
+\longrightarrow
+\text{descent data for \'etale sheaves wrt }\{X \times_Y Z \to Z\}
+$$
+and
+$$
+\Sh((Z \times_Y Z)_\etale)
+\longrightarrow
+\text{descent data for \'etale sheaves wrt }
+\{X \times_Y (Z \times_Y Z) \to Z \times_Y Z\}
+$$
+are equivalences of categories, then
+$$
+\Sh(Y_\etale)
+\longrightarrow
+\text{descent data for \'etale sheaves wrt }\{X \to Y\}
+$$
+is an equivalence.
+\end{lemma}
+
+\begin{proof}
+Formal consequence of the definitions and
+Lemmas \ref{lemma-glue-etale-sheaf-integral-surjective} and
+\ref{lemma-glue-etale-sheaf-proper-surjective}. Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-etale-sheaf-fppf-cover}
+Let $f : X \to Y$ be a morphism of schemes which is
surjective, flat, and locally of finite presentation.
+The functor
+$$
+\Sh(Y_\etale)
+\longrightarrow
+\text{descent data for \'etale sheaves wrt }\{X \to Y\}
+$$
+is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+Exactly as in the proof of
+Lemma \ref{lemma-glue-etale-sheaf-integral-surjective}
+we claim a quasi-inverse is given by the functor sending
+$(\mathcal{F}, \varphi)$ to
+$$
+\mathcal{G} =
+\text{Equalizer}\left(
+\xymatrix{
+f_*\mathcal{F}
+\ar@<1ex>[r] \ar@<-1ex>[r] &
+f_{1, *}\mathcal{F}_1
+}
+\right)
+$$
+and in order to prove this it suffices to show that
+$f^{-1}\mathcal{G} \to \mathcal{F}$ is an isomorphism.
+This we may check locally, hence we may and do assume $Y$
+is affine. Then we can find a finite surjective morphism
$Z \to Y$ and an open covering
$Z = \bigcup W_i$ such that each $W_i \to Y$ factors through $X$.
+See
+More on Morphisms, Lemma \ref{more-morphisms-lemma-dominate-fppf-finite}.
+Applying Lemma
+\ref{lemma-glue-etale-sheaf-check-after-base-change}
+we see that it suffices to prove
+the lemma after replacing $Y$ by $Z$ and $Z \times_Y Z$ and $f$
+by its base change. Thus we may assume $f$ has sections Zariski locally.
Using again that the problem is local on $Y$, we reduce
to the case where $f$ has a section, which is handled by
Lemma \ref{lemma-glue-etale-sheaf-section}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-etale-sheaf-fppf}
+Let $\{f_i : X_i \to X\}$ be an fppf covering of schemes.
+The functor
+$$
+\Sh(X_\etale)
+\longrightarrow
+\text{descent data for \'etale sheaves wrt }\{f_i : X_i \to X\}
+$$
+is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+We have Lemma \ref{lemma-glue-etale-sheaf-fppf-cover}
+for the morphism $f : \coprod X_i \to X$.
+Then a formal argument shows that descent data for $f$
+are the same thing as descent data for the covering, compare
+with Descent, Lemma \ref{descent-lemma-family-is-one}.
+Details omitted.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-glue-etale-sheaf-modification}
+Let $f : X' \to X$ be a proper morphism of schemes. Let $i : Z \to X$
+be a closed immersion. Set $E = Z \times_X X'$. Picture
+$$
+\xymatrix{
+E \ar[d]_g \ar[r]_j & X' \ar[d]^f \\
+Z \ar[r]^i & X
+}
+$$
+If $f$ is an isomorphism over $X \setminus Z$, then the functor
+$$
+\Sh(X_\etale)
+\longrightarrow
+\Sh(X'_\etale) \times_{\Sh(E_\etale)} \Sh(Z_\etale)
+$$
+is an equivalence of categories.
+\end{lemma}
+
+\begin{proof}
+We will work with the $2$-fibre product category as constructed in
+Categories, Example \ref{categories-example-2-fibre-product-categories}.
+The functor sends $\mathcal{F}$ to the triple
+$(f^{-1}\mathcal{F}, i^{-1}\mathcal{F}, c)$ where
+$c : j^{-1}f^{-1}\mathcal{F} \to g^{-1}i^{-1}\mathcal{F}$
+is the canonical isomorphism. We will construct a quasi-inverse functor. Let
+$(\mathcal{F}', \mathcal{G}, \alpha)$ be an object
+of the right hand side of the arrow.
+We obtain an isomorphism
+$$
+i^{-1}f_*\mathcal{F}' = g_*j^{-1}\mathcal{F}'
+\xrightarrow{g_*\alpha}
+g_*g^{-1}\mathcal{G}
+$$
+The first equality is Lemma \ref{lemma-proper-base-change-f-star}.
+Using this we obtain maps
+$i_*\mathcal{G} \to i_*g_*g^{-1}\mathcal{G}$
and $f_*\mathcal{F}' \to i_*g_*g^{-1}\mathcal{G}$. We set
+$$
+\mathcal{F} = f_*\mathcal{F}' \times_{i_*g_*g^{-1}\mathcal{G}} i_*\mathcal{G}
+$$
+and we claim that $\mathcal{F}$ is an object of the left hand side
+of the arrow whose image in the right hand side is isomorphic to
+the triple we started out with. Let us compute the stalk of $\mathcal{F}$
+at a geometric point $\overline{x}$ of $X$.
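Since taking stalks commutes with finite limits (a stalk being a
filtered colimit of sections), we have
$$
\mathcal{F}_{\overline{x}} =
(f_*\mathcal{F}')_{\overline{x}}
\times_{(i_*g_*g^{-1}\mathcal{G})_{\overline{x}}}
(i_*\mathcal{G})_{\overline{x}}
$$
and it suffices to identify the three stalks on the right hand side.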
+
+\medskip\noindent
+If $\overline{x}$ is not
+in $Z$, then on the one hand $\overline{x}$ comes from a unique
+geometric point $\overline{x}'$ of $X'$ and
+$\mathcal{F}'_{\overline{x}'} = (f_*\mathcal{F}')_{\overline{x}}$
and on the other hand the stalks $(i_*\mathcal{G})_{\overline{x}}$
and $(i_*g_*g^{-1}\mathcal{G})_{\overline{x}}$ are singletons.
+Hence we see that $\mathcal{F}_{\overline{x}}$ equals
+$\mathcal{F}'_{\overline{x}'}$.
+
+\medskip\noindent
+If $\overline{x}$ is in $Z$, i.e., $\overline{x}$ is the image of
+a geometric point $\overline{z}$ of $Z$, then we obtain
+$(i_*\mathcal{G})_{\overline{x}} = \mathcal{G}_{\overline{z}}$
+and
+$$
+(i_*g_*g^{-1}\mathcal{G})_{\overline{x}} =
+(g_*g^{-1}\mathcal{G})_{\overline{z}} =
+\Gamma(E_{\overline{z}}, g^{-1}\mathcal{G}|_{E_{\overline{z}}})
+$$
+(by the proper base change for pushforward used above)
+and similarly
+$$
+(f_*\mathcal{F}')_{\overline{x}} =
+\Gamma(X'_{\overline{x}}, \mathcal{F}'|_{X'_{\overline{x}}})
+$$
+Since we have the identification
+$E_{\overline{z}} = X'_{\overline{x}}$ and since $\alpha$
+defines an isomorphism between the sheaves
+$\mathcal{F}'|_{X'_{\overline{x}}}$ and
+$g^{-1}\mathcal{G}|_{E_{\overline{z}}}$
+we conclude that we get
+$$
+\mathcal{F}_{\overline{x}} = \mathcal{G}_{\overline{z}}
+$$
+in this case.
+
+\medskip\noindent
+To finish the proof, we observe that there are canonical maps
+$i^{-1}\mathcal{F} \to \mathcal{G}$ and $f^{-1}\mathcal{F} \to \mathcal{F}'$
+compatible with $\alpha$ which on stalks produce the isomorphisms
+we saw above. We omit the careful construction of these maps.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-h-descent-etale-sheaves}
+Let $S$ be a scheme. Then the category fibred in groupoids
+$$
+p : \mathcal{S} \longrightarrow (\Sch/S)_h
+$$
+whose fibre category over $U$ is the category $\Sh(U_\etale)$
+of sheaves on the small \'etale site of $U$ is a stack in groupoids.
+\end{lemma}
+
+\begin{proof}
+To prove the lemma we will check conditions (1), (2), and (3) of
+More on Flatness, Lemma \ref{flat-lemma-refine-check-h-stack}.
+
+\medskip\noindent
+Condition (1) holds because we have glueing for sheaves (and
+Zariski coverings are \'etale coverings). See
+Sites, Lemma \ref{sites-lemma-glue-sheaves}.
+
+\medskip\noindent
+To see condition (2), suppose that $f : X \to Y$ is a surjective,
+flat, proper morphism of finite presentation over $S$ with $Y$ affine.
+Then we have descent for $\{X \to Y\}$ by either
+Lemma \ref{lemma-glue-etale-sheaf-fppf-cover} or
+Lemma \ref{lemma-glue-etale-sheaf-proper-surjective}.
+
+\medskip\noindent
+Condition (3) follows immediately from the more general
+Lemma \ref{lemma-glue-etale-sheaf-modification}.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Blow up squares and \'etale cohomology}
+\label{section-blow-up-square}
+
+\noindent
+Blow up squares are introduced in
+More on Flatness, Section \ref{flat-section-blow-up-ph}.
+Using the proper base change theorem we can see that
+we have a Mayer-Vietoris type result for blow up squares.
+
+\begin{lemma}
+\label{lemma-blow-up-square-cohomological-descent}
+Let $X$ be a scheme and let $Z \subset X$ be a closed subscheme
+cut out by a quasi-coherent ideal of finite type. Consider the
+corresponding blow up square
+$$
+\xymatrix{
+E \ar[d]_\pi \ar[r]_j & X' \ar[d]^b \\
+Z \ar[r]^i & X
+}
+$$
+For $K \in D^+(X_\etale)$ with torsion cohomology sheaves
+we have a distinguished triangle
+$$
+K \to Ri_*(K|_Z) \oplus Rb_*(K|_{X'}) \to Rc_*(K|_E) \to K[1]
+$$
+in $D(X_\etale)$ where $c = i \circ \pi = b \circ j$.
+\end{lemma}
+
+\begin{proof}
+The notation $K|_{X'}$ stands for $b_{small}^{-1}K$.
+Choose a bounded below complex $\mathcal{F}^\bullet$
+of abelian sheaves representing $K$. Observe that
+$i_*(\mathcal{F}^\bullet|_Z)$ represents $Ri_*(K|_Z)$
+because $i_*$ is exact
+(Proposition \ref{proposition-finite-higher-direct-image-zero}).
+Choose a quasi-isomorphism
+$b_{small}^{-1}\mathcal{F}^\bullet \to \mathcal{I}^\bullet$
+where $\mathcal{I}^\bullet$ is a bounded below complex of injective
+abelian sheaves on $X'_\etale$. This map is adjoint to a map
+$\mathcal{F}^\bullet \to b_*(\mathcal{I}^\bullet)$ and
+$b_*(\mathcal{I}^\bullet)$ represents $Rb_*(K|_{X'})$.
+We have $\pi_*(\mathcal{I}^\bullet|_E) = (b_*\mathcal{I}^\bullet)|_Z$
+by Lemma \ref{lemma-proper-base-change-f-star} and by
+Lemma \ref{lemma-proper-base-change} this complex represents
+$R\pi_*(K|_E)$. Hence the map
+$$
+Ri_*(K|_Z) \oplus Rb_*(K|_{X'}) \to Rc_*(K|_E)
+$$
+is represented by the surjective map of bounded below complexes
+$$
+i_*(\mathcal{F}^\bullet|_Z) \oplus
+b_*(\mathcal{I}^\bullet)
+\to
+i_*\left(b_*(\mathcal{I}^\bullet)|_Z\right)
+$$
+To get our distinguished triangle it suffices to show that
+the canonical map
+$\mathcal{F}^\bullet \to i_*(\mathcal{F}^\bullet|_Z) \oplus
+b_*(\mathcal{I}^\bullet)$
+maps quasi-isomorphically onto the kernel of the map
+of complexes displayed above (namely a short exact sequence
+of complexes determines a distinguished triangle in the derived
+category, see
+Derived Categories, Section \ref{derived-section-canonical-delta-functor}).
+We may check this on stalks at a geometric point $\overline{x}$ of $X$.
+If $\overline{x}$ is not in $Z$, then $X' \to X$ is an isomorphism
+over an open neighbourhood of $\overline{x}$. Thus, if $\overline{x}'$
+denotes the corresponding geometric point of $X'$ in this case, then
+we have to show that
+$$
+\mathcal{F}^\bullet_{\overline{x}} \to \mathcal{I}^\bullet_{\overline{x}'}
+$$
+is a quasi-isomorphism. This is true by our choice of $\mathcal{I}^\bullet$.
+If $\overline{x}$ is in $Z$, then
$b_*(\mathcal{I}^\bullet)_{\overline{x}} \to
+i_*\left(b_*(\mathcal{I}^\bullet)|_Z\right)_{\overline{x}}$
+is an isomorphism of complexes of abelian groups. Hence the
+kernel is equal to
+$i_*(\mathcal{F}^\bullet|_Z)_{\overline{x}} =
+\mathcal{F}^\bullet_{\overline{x}}$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blow-up-square-etale-cohomology}
+Let $X$ be a scheme and let $K \in D^+(X_\etale)$ have
+torsion cohomology sheaves. Let $Z \subset X$ be a closed subscheme
+cut out by a quasi-coherent ideal of finite type. Consider the
+corresponding blow up square
+$$
+\xymatrix{
+E \ar[d] \ar[r] & X' \ar[d]^b \\
+Z \ar[r] & X
+}
+$$
+Then there is a canonical long exact sequence
+$$
+H^p_\etale(X, K) \to
+H^p_\etale(X', K|_{X'}) \oplus
+H^p_\etale(Z, K|_Z) \to
+H^p_\etale(E, K|_E) \to
+H^{p + 1}_\etale(X, K)
+$$
+\end{lemma}
+
+\begin{proof}[First proof]
+This follows immediately from
+Lemma \ref{lemma-blow-up-square-cohomological-descent}
+and the fact that
+$$
+R\Gamma(X, Rb_*(K|_{X'})) = R\Gamma(X', K|_{X'})
+$$
+(see Cohomology on Sites, Section \ref{sites-cohomology-section-leray})
+and similarly for the others.
+\end{proof}
+
+\begin{proof}[Second proof]
+By Lemma \ref{lemma-compare-cohomology-etale-ph}
+these cohomology groups are the cohomology of
+$X, X', E, Z$ with values in some complex of abelian sheaves
+on the site $(\Sch/X)_{ph}$. (Namely, the object
+$a_X^{-1}K$ of the derived category, see
+Lemma \ref{lemma-describe-pullback-pi-ph} above
+and recall that $K|_{X'} = b_{small}^{-1}K$.)
+By More on Flatness, Lemma \ref{flat-lemma-blow-up-square-ph}
+the ph sheafification of the diagram of representable
+presheaves is cocartesian. Thus the lemma follows from the very general
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-square-triangle}
+applied to the site $(\Sch/X)_{ph}$ and the commutative diagram
+of the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-blow-up-square-equivalence}
+Let $X$ be a scheme and let $Z \subset X$ be a closed subscheme
+cut out by a quasi-coherent ideal of finite type. Consider the
+corresponding blow up square
+$$
+\xymatrix{
+E \ar[d]_\pi \ar[r]_j & X' \ar[d]^b \\
+Z \ar[r]^i & X
+}
+$$
+Suppose given
+\begin{enumerate}
+\item an object $K'$ of $D^+(X'_\etale)$ with torsion cohomology sheaves,
+\item an object $L$ of $D^+(Z_\etale)$ with torsion cohomology sheaves, and
+\item an isomorphism $\gamma : K'|_E \to L|_E$.
+\end{enumerate}
+Then there exists an object $K$ of $D^+(X_\etale)$
+and isomorphisms $f : K|_{X'} \to K'$, $g : K|_Z \to L$ such
+that $\gamma = g|_E \circ f^{-1}|_E$.
+Moreover, given
+\begin{enumerate}
+\item an object $M$ of $D^+(X_\etale)$ with torsion cohomology sheaves,
+\item a morphism $\alpha : K' \to M|_{X'}$ of $D(X'_\etale)$,
+\item a morphism $\beta : L \to M|_Z$ of $D(Z_\etale)$,
+\end{enumerate}
+such that
+$$
+\alpha|_E = \beta|_E \circ \gamma.
+$$
Then there exists a morphism $K \to M$ in $D(X_\etale)$
whose restriction to $X'$ is $\alpha \circ f$
and whose restriction to $Z$ is $\beta \circ g$.
+\end{lemma}
+
+\begin{proof}
+If $K$ exists, then Lemma \ref{lemma-blow-up-square-cohomological-descent}
+tells us a distinguished triangle that it fits in. Thus we simply choose
+a distinguished triangle
+$$
+K \to Ri_*(L) \oplus Rb_*(K') \to Rc_*(L|_E) \to K[1]
+$$
+where $c = i \circ \pi = b \circ j$. Here the map $Ri_*(L) \to Rc_*(L|_E)$
is $Ri_*$ applied to the adjunction mapping $L \to R\pi_*(L|_E)$.
+The map $Rb_*(K') \to Rc_*(L|_E)$ is the composition of the canonical map
$Rb_*(K') \to Rc_*(K'|_E)$ and $Rc_*(\gamma)$.
+The maps $g$ and $f$ of the statement of the lemma are the adjoints
+of these maps. If we restrict this distinguished triangle to $Z$
then the map $Rb_*(K') \to Rc_*(L|_E)$ becomes an isomorphism
+by the base change theorem (Lemma \ref{lemma-proper-base-change}) and hence
+the map $g : K|_Z \to L$ is an isomorphism.
+Looking at the distinguished triangle we see that $f : K|_{X'} \to K'$
+is an isomorphism over $X' \setminus E = X \setminus Z$.
+Moreover, we have $\gamma \circ f|_E = g|_E$ by construction.
+Then since $\gamma$ and $g$ are isomorphisms we conclude
+that $f$ induces isomorphisms on stalks at geometric points of $E$
+as well. Thus $f$ is an isomorphism.
+
+\medskip\noindent
+For the final statement, we may replace $K'$ by $K|_{X'}$,
+$L$ by $K|_Z$, and $\gamma$ by the canonical identification.
+Observe that $\alpha$ and $\beta$ induce a commutative square
+$$
+\xymatrix{
+K \ar[r] \ar@{..>}[d] &
Ri_*(K|_Z) \oplus Rb_*(K|_{X'}) \ar[r] \ar[d]_{Ri_*\beta \oplus Rb_*\alpha} &
Rc_*(K|_E) \ar[r] \ar[d]_{Rc_*(\alpha|_E)} &
+K[1] \ar@{..>}[d] \\
+M \ar[r] &
+Ri_*(M|_Z) \oplus Rb_*(M|_{X'}) \ar[r] &
+Rc_*(M|_E) \ar[r] &
+M[1]
+}
+$$
+Thus by the axioms of a derived category we get a dotted
+arrow producing a morphism of distinguished triangles.
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+\section{Almost blow up squares and the h topology}
+\label{section-blow-up-h}
+
+\noindent
+In this section we continue the discussion in
+More on Flatness, Section \ref{flat-section-blow-up-h}.
+For the convenience of the reader we recall that an
+almost blow up square is a commutative diagram
+\begin{equation}
+\label{equation-almost-blow-up-square}
+\vcenter{
+\xymatrix{
+E \ar[d] \ar[r] & X' \ar[d]^b \\
+Z \ar[r] & X
+}
+}
+\end{equation}
+of schemes satisfying the following conditions:
+\begin{enumerate}
+\item $Z \to X$ is a closed immersion of finite presentation,
+\item $E = b^{-1}(Z)$ is a locally principal closed subscheme of $X'$,
+\item $b$ is proper and of finite presentation,
+\item the closed subscheme $X'' \subset X'$ cut out by the quasi-coherent
+ideal of sections of $\mathcal{O}_{X'}$ supported on $E$
+(Properties, Lemma \ref{properties-lemma-sections-supported-on-closed-subset})
+is the blow up of $X$ in $Z$.
+\end{enumerate}
+It follows that the morphism $b$ induces an isomorphism
+$X' \setminus E \to X \setminus Z$.
+
+\medskip\noindent
+We are going to give a criterion for ``h sheafiness'' for
+objects in the derived category of the big fppf site
+$(\Sch/S)_{fppf}$ of a scheme $S$. On the same underlying category
+we have a second topology, namely the h topology
+(More on Flatness, Section \ref{flat-section-h}).
+Recall that fppf coverings are h coverings
+(More on Flatness, Lemma \ref{flat-lemma-zariski-h}). Hence we may
+consider the morphism
+$$
+\epsilon : (\Sch/S)_h \longrightarrow (\Sch/S)_{fppf}
+$$
+See Cohomology on Sites, Section \ref{sites-cohomology-section-compare}.
+In particular, we have a fully faithful functor
+$$
+R\epsilon_* : D((\Sch/S)_h) \longrightarrow D((\Sch/S)_{fppf})
+$$
+and we can ask: what is the essential image of this functor?
+
+\begin{lemma}
+\label{lemma-blow-up-square-h}
+With notation as above, if $K$ is in the essential image
of $R\epsilon_*$, then the maps $c^K_{X, X', Z, E}$ of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-c-square}
+are quasi-isomorphisms.
+\end{lemma}
+
+\begin{proof}
+Denote ${}^\#$ sheafification in the h topology.
+We have seen in More on Flatness, Lemma \ref{flat-lemma-blow-up-square-h}
+that $h_X^\# = h_Z^\# \amalg_{h_E^\#} h_{X'}^\#$. On the other hand,
+the map $h_E^\# \to h_{X'}^\#$ is injective as $E \to X'$ is a
+monomorphism. Thus this lemma is a special case of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-descent-squares-helper}
+(which itself is a formal consequence of
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-square-triangle}).
+\end{proof}
+
+\begin{proposition}
+\label{proposition-check-h}
+Let $K$ be an object of $D^+((\Sch/S)_{fppf})$.
+Then $K$ is in the essential image of
+$R\epsilon_* : D((\Sch/S)_h) \to D((\Sch/S)_{fppf})$
+if and only if $c^K_{X, X', Z, E}$ is a quasi-isomorphism
+for every almost blow up square (\ref{equation-almost-blow-up-square})
+in $(\Sch/S)_h$ with $X$ affine.
+\end{proposition}
+
+\begin{proof}
+We prove this by applying
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-descent-squares}
+whose hypotheses hold by
+Lemma \ref{lemma-blow-up-square-h} and
+More on Flatness, Proposition \ref{flat-proposition-check-h}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-refine-check-h}
+Let $K$ be an object of $D^+((\Sch/S)_{fppf})$. Then $K$ is in the
+essential image of $R\epsilon_* : D((\Sch/S)_h) \to D((\Sch/S)_{fppf})$
+if and only if $c^K_{X, X', Z, E}$ is a quasi-isomorphism
+for every almost blow up square as in
+More on Flatness, Examples \ref{flat-example-one-generator} and
+\ref{flat-example-two-generators}.
+\end{lemma}
+
+\begin{proof}
+We prove this by applying
+Cohomology on Sites, Lemma \ref{sites-cohomology-lemma-descent-squares}
+whose hypotheses hold by
+Lemma \ref{lemma-blow-up-square-h} and
More on Flatness, Lemma \ref{flat-lemma-refine-check-h}.
+\end{proof}
+
+
+
+
+
+
+\section{Cohomology of the structure sheaf in the h topology}
+\label{section-cohomology-O-h}
+
+\noindent
+Let $p$ be a prime number. Let $(\mathcal{C}, \mathcal{O})$ be a ringed site
+with $p\mathcal{O} = 0$. Then we set $\colim_F \mathcal{O}$
+equal to the colimit in the category of sheaves of rings of the system
+$$
+\mathcal{O} \xrightarrow{F} \mathcal{O} \xrightarrow{F}
+\mathcal{O} \xrightarrow{F} \ldots
+$$
+where $F : \mathcal{O} \to \mathcal{O}$, $f \mapsto f^p$
+is the Frobenius endomorphism.
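
\medskip\noindent
As an illustration (not needed in the sequel), take $A = \mathbf{F}_p[t]$
and consider the system of rings
$$
\mathbf{F}_p[t] \xrightarrow{F} \mathbf{F}_p[t] \xrightarrow{F}
\mathbf{F}_p[t] \xrightarrow{F} \ldots
$$
Since $\mathbf{F}_p[t]$ is reduced, the colimit may be identified with the
perfection $\bigcup\nolimits_n \mathbf{F}_p[t^{1/p^n}]$ by sending the
$n$-th copy of $\mathbf{F}_p[t]$ isomorphically onto
$\mathbf{F}_p[t^{1/p^n}]$ via $t \mapsto t^{1/p^n}$; under these
identifications the Frobenius transition maps become the inclusions
$\mathbf{F}_p[t^{1/p^n}] \subset \mathbf{F}_p[t^{1/p^{n + 1}}]$.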
+
+\begin{lemma}
+\label{lemma-h-sheaf-colim-F}
+Let $p$ be a prime number. Let $S$ be a scheme over $\mathbf{F}_p$.
+Consider the sheaf $\mathcal{O}^{perf} = \colim_F \mathcal{O}$
+on $(\Sch/S)_{fppf}$. Then $\mathcal{O}^{perf}$ is in the essential
+image of $R\epsilon_* : D((\Sch/S)_h) \to D((\Sch/S)_{fppf})$.
+\end{lemma}
+
+\begin{proof}
+We prove this using the criterion of Lemma \ref{lemma-refine-check-h}.
Before checking the conditions, we note that for a
+quasi-compact and quasi-separated object $X$ of
+$(\Sch/S)_{fppf}$ we have
+$$
+H^i_{fppf}(X, \mathcal{O}^{perf}) = \colim_F H^i_{fppf}(X, \mathcal{O})
+$$
+See Cohomology on Sites,
+Lemma \ref{sites-cohomology-lemma-colim-works-over-collection}.
+We will also use that $H^i_{fppf}(X, \mathcal{O}) = H^i(X, \mathcal{O})$, see
+Descent, Proposition \ref{descent-proposition-same-cohomology-quasi-coherent}.
+
+\medskip\noindent
+Let $A, f, J$ be as in
+More on Flatness, Example \ref{flat-example-one-generator}
+and consider the associated almost blow up square.
+Since $X$, $X'$, $Z$, $E$ are affine, we have no
+higher cohomology of $\mathcal{O}$. Hence we only
+have to check that
+$$
+0 \to
+\mathcal{O}^{perf}(X) \to
+\mathcal{O}^{perf}(X') \oplus \mathcal{O}^{perf}(Z) \to
+\mathcal{O}^{perf}(E) \to 0
+$$
+is a short exact sequence. This was shown in (the proof of)
+More on Flatness, Lemma \ref{flat-lemma-h-sheaf-colim-F}.
+
+\medskip\noindent
+Let $X, X', Z, E$ be as in
+More on Flatness, Example \ref{flat-example-two-generators}.
+Since $X$ and $Z$ are affine we have
$H^p(X, \mathcal{O}_X) = H^p(Z, \mathcal{O}_Z) = 0$ for $p > 0$.
+By More on Flatness, Lemma \ref{flat-lemma-funny-blow-up}
+we have $H^p(X', \mathcal{O}_{X'}) = 0$ for $p > 0$.
+Since $E = \mathbf{P}^1_Z$ and $Z$ is affine we also have
+$H^p(E, \mathcal{O}_E) = 0$ for $p > 0$. As in the previous
+paragraph we reduce to checking that
+$$
+0 \to
+\mathcal{O}^{perf}(X) \to
+\mathcal{O}^{perf}(X') \oplus \mathcal{O}^{perf}(Z) \to
+\mathcal{O}^{perf}(E) \to 0
+$$
+is a short exact sequence. This was shown in (the proof of)
+More on Flatness, Lemma \ref{flat-lemma-h-sheaf-colim-F}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-h-cohomology-structure-sheaf}
+Let $p$ be a prime number. Let $S$ be a quasi-compact and quasi-separated
+scheme over $\mathbf{F}_p$. Then
+$$
+H^i((\Sch/S)_h, \mathcal{O}^h) =
+\colim_F H^i(S, \mathcal{O})
+$$
+Here on the left hand side by $\mathcal{O}^h$ we mean
+the h sheafification of the structure sheaf.
+\end{proposition}
+
+\begin{proof}
+This is just a reformulation of Lemma \ref{lemma-h-sheaf-colim-F}.
+Recall that
+$\mathcal{O}^h = \mathcal{O}^{perf} = \colim_F \mathcal{O}$, see
+More on Flatness, Lemma \ref{flat-lemma-char-p}.
+By Lemma \ref{lemma-h-sheaf-colim-F} we see that
+$\mathcal{O}^{perf}$ viewed as an object of $D((\Sch/S)_{fppf})$
+is of the form $R\epsilon_*K$ for some $K \in D((\Sch/S)_h)$.
+Then $K = \epsilon^{-1}\mathcal{O}^{perf}$ which is actually
+equal to $\mathcal{O}^{perf}$ because $\mathcal{O}^{perf}$ is an h sheaf. See
+Cohomology on Sites, Section \ref{sites-cohomology-section-compare}.
+Hence $R\epsilon_*\mathcal{O}^{perf} = \mathcal{O}^{perf}$
+(with apologies for the confusing notation).
Thus the proposition now follows from Leray
+$$
+R\Gamma_h(S, \mathcal{O}^{perf}) =
+R\Gamma_{fppf}(S, R\epsilon_*\mathcal{O}^{perf}) =
+R\Gamma_{fppf}(S, \mathcal{O}^{perf})
+$$
+and the fact that
+$$
+H^i_{fppf}(S, \mathcal{O}^{perf}) =
+H^i_{fppf}(S, \colim_F \mathcal{O}) =
+\colim_F H^i_{fppf}(S, \mathcal{O})
+$$
+as $S$ is quasi-compact and quasi-separated
+(see proof of Lemma \ref{lemma-h-sheaf-colim-F}).
+\end{proof}
+
+
+
+
+
+
+
+
+
+
+
+
+\input{chapters}
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/etale.tex b/books/stacks/etale.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e14399a52926998d9ccd3d12dd8516579d151be1
--- /dev/null
+++ b/books/stacks/etale.tex
@@ -0,0 +1,2767 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{\'Etale Morphisms of Schemes}
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+In this Chapter, we discuss \'etale morphisms of schemes. We illustrate
+some of the more important concepts by working with the Noetherian case.
+Our principal goal is to collect for the reader enough commutative
+algebra results to start reading a treatise on \'etale cohomology. An
+auxiliary goal is to provide enough evidence to ensure that the reader stops
+calling the phrase ``the \'etale topology of schemes'' an exercise in general
+nonsense, if (s)he does indulge in such blasphemy.
+
+\medskip\noindent
+We will refer to the other
+chapters of the Stacks project for standard results in algebraic geometry
+(on schemes and commutative algebra). We will provide detailed
+proofs of the new results that we state here.
+
+
+
+
+\section{Conventions}
+\label{section-conventions}
+
+\noindent
+In this chapter, frequently schemes will be assumed locally Noetherian
+and frequently rings will be assumed Noetherian. But in all the statements
+we will reiterate this when necessary, and make sure we list all the
+hypotheses! On the other hand, here are some general facts that we will use
+often and are useful to keep in mind:
+\begin{enumerate}
+\item A ring homomorphism $A \to B$ of finite type with $A$ Noetherian
+is of finite presentation. See Algebra,
+Lemma \ref{algebra-lemma-Noetherian-finite-type-is-finite-presentation}.
+\item A morphism (locally) of finite type between locally Noetherian schemes
+is automatically (locally) of finite presentation.
+See Morphisms,
+Lemma \ref{morphisms-lemma-noetherian-finite-type-finite-presentation}.
+\item Add more like this here.
+\end{enumerate}
+
+
+
+
+\section{Unramified morphisms}
+\label{section-unramified-definition}
+
+\noindent
+We first define ``unramified homomorphisms of local rings'' for Noetherian
+local rings. We cannot use the term ``unramified'' as there already is
+a notion of
+an unramified ring map (Algebra, Section \ref{algebra-section-unramified})
+and it is different. After discussing the notion a bit we
+globalize it to describe unramified morphisms of locally Noetherian schemes.
+
+\begin{definition}
+\label{definition-unramified-rings}
+Let $A$, $B$ be Noetherian local rings. A local homomorphism $A \to B$
is said to be an {\it unramified homomorphism of local rings} if
+\begin{enumerate}
+\item $\mathfrak m_AB = \mathfrak m_B$,
+\item $\kappa(\mathfrak m_B)$ is a finite separable extension of
+$\kappa(\mathfrak m_A)$, and
+\item $B$ is essentially of finite type over $A$ (this means
+that $B$ is the localization of a finite type $A$-algebra at a prime).
+\end{enumerate}
+\end{definition}
+
+\noindent
+This is the local version of the
+definition in Algebra, Section \ref{algebra-section-unramified}.
+In that section a ring map $R \to S$ is defined to be unramified if and
+only if it is of finite type, and $\Omega_{S/R} = 0$.
+We say $R \to S$ is unramified at a prime $\mathfrak q \subset S$
+if there exists a $g \in S$, $g \not \in \mathfrak q$ such that
+$R \to S_g$ is an unramified ring map. It is shown in
+Algebra, Lemmas \ref{algebra-lemma-unramified-at-prime} and
+\ref{algebra-lemma-characterize-unramified} that given a ring
+map $R \to S$ of finite type, and a prime $\mathfrak q$ of $S$
+lying over $\mathfrak p \subset R$, then we have
+$$
+R \to S\text{ is unramified at }\mathfrak q
+\Leftrightarrow
+\mathfrak pS_{\mathfrak q} = \mathfrak q S_{\mathfrak q}
+\text{ and }
+\kappa(\mathfrak p) \subset \kappa(\mathfrak q)\text{ finite separable}
+$$
+Thus we see that for a local homomorphism of local rings the properties
+of our definition above are closely related to the question of
+being unramified. In fact, we have proved the following lemma.
+
+\begin{lemma}
+\label{lemma-characterize-unramified-Noetherian}
+\begin{slogan}
+Unramifiedness is a stalk local condition.
+\end{slogan}
+Let $A \to B$ be of finite type with $A$ a Noetherian ring.
+Let $\mathfrak q$ be a prime of $B$ lying over $\mathfrak p \subset A$.
+Then $A \to B$ is unramified at $\mathfrak q$ if and only if
+$A_{\mathfrak p} \to B_{\mathfrak q}$ is an unramified homomorphism
+of local rings.
+\end{lemma}
+
+\begin{proof}
+See discussion above.
+\end{proof}
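
\medskip\noindent
As a classical illustration of the lemma (not needed in what follows),
consider $\mathbf{Z} \to \mathbf{Z}[i]$. If
$\mathfrak q \subset \mathbf{Z}[i]$ lies over an odd prime
$(p) \subset \mathbf{Z}$, then
$p\mathbf{Z}[i]_{\mathfrak q} = \mathfrak q\mathbf{Z}[i]_{\mathfrak q}$
and the residue field extension is an extension of finite fields, hence
finite separable; thus $\mathbf{Z} \to \mathbf{Z}[i]$ is unramified
at $\mathfrak q$. On the other hand, at $\mathfrak q = (1 + i)$
lying over $(2)$ we have $(2) = (1 + i)^2$ as ideals of $\mathbf{Z}[i]$, so
$2\mathbf{Z}[i]_{\mathfrak q} =
\mathfrak q^2\mathbf{Z}[i]_{\mathfrak q} \not =
\mathfrak q\mathbf{Z}[i]_{\mathfrak q}$
and $\mathbf{Z} \to \mathbf{Z}[i]$ is ramified at $\mathfrak q$.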
+
+\noindent
+We will characterize the property of being unramified in terms
+of completions. For a Noetherian local ring $A$
+we denote $A^\wedge$ the completion of $A$ with respect to the
+maximal ideal. It is also a Noetherian local ring, see
+Algebra, Lemma \ref{algebra-lemma-completion-Noetherian-Noetherian}.
+
+\begin{lemma}
+\label{lemma-unramified-completions}
+Let $A$, $B$ be Noetherian local rings.
+Let $A \to B$ be a local homomorphism.
+\begin{enumerate}
+\item if $A \to B$ is an unramified homomorphism of local rings,
+then $B^\wedge$ is a finite $A^\wedge$ module,
+\item if $A \to B$ is an unramified homomorphism of local rings and
+$\kappa(\mathfrak m_A) = \kappa(\mathfrak m_B)$,
+then $A^\wedge \to B^\wedge$ is surjective,
+\item if $A \to B$ is an unramified homomorphism of local rings
+and $\kappa(\mathfrak m_A)$
+is separably closed, then $A^\wedge \to B^\wedge$ is surjective,
+\item if $A$ and $B$ are complete discrete valuation rings, then
+$A \to B$ is an unramified homomorphism of local rings
+if and only if the uniformizer for $A$ maps to a uniformizer for $B$,
+and the residue field extension is finite separable (and $B$ is
+essentially of finite type over $A$).
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Part (1) is a special case of
+Algebra, Lemma \ref{algebra-lemma-finite-after-completion}.
+For part (2), note that the $\kappa(\mathfrak m_A)$-vector space
+$B^\wedge/\mathfrak m_{A^\wedge}B^\wedge$
+is generated by $1$. Hence by Nakayama's lemma
+(Algebra, Lemma \ref{algebra-lemma-NAK}) the map
+$A^\wedge \to B^\wedge$ is surjective.
+Part (3) is a special case of part (2).
+Part (4) is immediate from the definitions.
+\end{proof}
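
\medskip\noindent
To illustrate part (4), let $k'/k$ be a finite separable extension of
fields and consider the complete discrete valuation rings $A = k[[t]]$
and $B = k'[[t]]$. The map $A \to B$ sending $t \mapsto t$ is an
unramified homomorphism of local rings: the uniformizer maps to a
uniformizer, the residue field extension $k'/k$ is finite separable,
and $B$ is finite over $A$, hence essentially of finite type.
By contrast, the map $k[[t]] \to k[[s]]$, $t \mapsto s^2$ is not
unramified, as the uniformizer $t$ maps to the square of a uniformizer,
so that $\mathfrak m_A B = (s^2) \not = (s) = \mathfrak m_B$.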
+
+\begin{lemma}
+\label{lemma-characterize-unramified-completions}
+Let $A$, $B$ be Noetherian local rings.
+Let $A \to B$ be a local homomorphism such that $B$ is
+essentially of finite type over $A$.
+The following are equivalent
+\begin{enumerate}
+\item $A \to B$ is an unramified homomorphism of local rings
+\item $A^\wedge \to B^\wedge$ is an unramified homomorphism of local rings, and
+\item $A^\wedge \to B^\wedge$ is unramified.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (1) and (2) follows from the fact that
+$\mathfrak m_AA^\wedge$ is the maximal ideal of $A^\wedge$
+(and similarly for $B$) and faithful flatness of $B \to B^\wedge$.
+For example if $A^\wedge \to B^\wedge$ is unramified, then
+$\mathfrak m_AB^\wedge = (\mathfrak m_AB)B^\wedge = \mathfrak m_BB^\wedge$
+and hence $\mathfrak m_AB = \mathfrak m_B$.
+
+\medskip\noindent
+Assume the equivalent conditions (1) and (2).
+By Lemma \ref{lemma-unramified-completions}
+we see that $A^\wedge \to B^\wedge$ is
+finite. Hence $A^\wedge \to B^\wedge$ is of finite presentation, and by
+Algebra, Lemma \ref{algebra-lemma-characterize-unramified}
+we conclude that $A^\wedge \to B^\wedge$ is unramified at
+$\mathfrak m_{B^\wedge}$. Since $B^\wedge$ is local we conclude
+that $A^\wedge \to B^\wedge$ is unramified.
+
+\medskip\noindent
+Assume (3). By Algebra, Lemma \ref{algebra-lemma-unramified-at-prime}
+we conclude that $A^\wedge \to B^\wedge$ is an unramified homomorphism
+of local rings, i.e., (2) holds.
+\end{proof}
+
+\begin{definition}
+\label{definition-unramified-schemes}
+(See Morphisms, Definition \ref{morphisms-definition-unramified}
+for the definition in the general case.)
+Let $Y$ be a locally Noetherian scheme.
+Let $f : X \to Y$ be locally of finite type.
+Let $x \in X$.
+\begin{enumerate}
+\item We say $f$ is {\it unramified at $x$} if
+$\mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$
+is an unramified homomorphism of local rings.
+\item The morphism $f : X \to Y$ is said to be {\it unramified}
+if it is unramified at all points of $X$.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Let us prove that this definition agrees with the definition in the
+chapter on morphisms of schemes. This in particular guarantees that the
+set of points where a morphism is unramified is open.
+
+\begin{lemma}
+\label{lemma-unramified-definition}
+Let $Y$ be a locally Noetherian scheme.
+Let $f : X \to Y$ be locally of finite type.
+Let $x \in X$. The morphism $f$ is unramified at $x$ in
+the sense of Definition \ref{definition-unramified-schemes}
+if and only if it is unramified in
+the sense of Morphisms, Definition \ref{morphisms-definition-unramified}.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-characterize-unramified-Noetherian}
+and the definitions.
+\end{proof}
+
+\noindent
+Here are some results on unramified morphisms.
+The formulations as given in this list apply only to
+morphisms locally of finite type between locally Noetherian schemes.
+In each case we give a reference to the general result as
+proved earlier in the project, but in some cases one can
+prove the result more easily in the Noetherian case.
+Here is the list:
+\begin{enumerate}
+\item Unramifiedness is local on the source and the target in the Zariski
+topology.
+\item Unramified morphisms are stable under base change and composition.
+See Morphisms, Lemmas \ref{morphisms-lemma-base-change-unramified}
+and \ref{morphisms-lemma-composition-unramified}.
\item Unramified morphisms of schemes are locally quasi-finite,
and quasi-compact unramified morphisms are quasi-finite.
See Morphisms, Lemma \ref{morphisms-lemma-unramified-quasi-finite}.
+\item Unramified morphisms have relative dimension $0$. See
+Morphisms, Definition \ref{morphisms-definition-relative-dimension-d}
+and
+Morphisms, Lemma \ref{morphisms-lemma-locally-quasi-finite-rel-dimension-0}.
+\item A morphism is unramified if and only if all its fibres are unramified.
+That is, unramifiedness can be checked on the scheme theoretic fibres. See
+Morphisms, Lemma \ref{morphisms-lemma-unramified-etale-fibres}.
+\item Let $X$ and $Y$ be unramified over a base scheme $S$.
+Any $S$-morphism from $X$ to $Y$ is unramified.
+See Morphisms, Lemma \ref{morphisms-lemma-unramified-permanence}.
+\end{enumerate}
+
+\section{Three other characterizations of unramified morphisms}
+\label{section-three-other}
+
+\noindent
+The following theorem gives three equivalent notions of being
+unramified at a point. See
+Morphisms, Lemma \ref{morphisms-lemma-unramified-at-point}
+for (part of) the statement for general schemes.
+
+\begin{theorem}
+\label{theorem-unramified-equivalence}
+Let $Y$ be a locally Noetherian scheme.
+Let $f : X \to Y$ be a morphism of schemes which is locally of finite type.
+Let $x$ be a point of $X$. The following are equivalent
+\begin{enumerate}
+\item $f$ is unramified at $x$,
+\item the stalk $\Omega_{X/Y, x}$ of the module of relative differentials
+at $x$ is trivial,
+\item there exist open neighbourhoods $U$ of $x$ and $V$ of $f(x)$, and a
+commutative diagram
+$$
+\xymatrix{
+U \ar[rr]_i \ar[rd] & & \mathbf{A}^n_V \ar[ld] \\
+& V
+}
+$$
+where $i$ is a closed immersion defined by a
+quasi-coherent sheaf of ideals $\mathcal{I}$ such that the differentials
+$\text{d}g$ for $g \in \mathcal{I}_{i(x)}$ generate
+$\Omega_{\mathbf{A}^n_V/V, i(x)}$, and
+\item the diagonal $\Delta_{X/Y} : X \to X \times_Y X$
+is a local isomorphism at $x$.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+The equivalence of (1) and (2) is proved in
+Morphisms, Lemma \ref{morphisms-lemma-unramified-at-point}.
+
+\medskip\noindent
+If $f$ is unramified at $x$, then $f$ is unramified in an open
+neighbourhood of $x$; this does not follow immediately
+from Definition \ref{definition-unramified-schemes} of this chapter
+but it does follow from
+Morphisms, Definition \ref{morphisms-definition-unramified} which we
+proved to be equivalent in
+Lemma \ref{lemma-unramified-definition}.
+Choose affine opens $V \subset Y$, $U \subset X$
+with $f(U) \subset V$ and $x \in U$, such that $f$ is
+unramified on $U$, i.e., $f|_U : U \to V$ is unramified.
+By Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism}
+the morphism $U \to U \times_V U$
+is an open immersion. This proves that (1) implies (4).
+
+\medskip\noindent
+If $\Delta_{X/Y}$ is a local isomorphism at $x$, then
+$\Omega_{X/Y, x} = 0$ by
+Morphisms, Lemma \ref{morphisms-lemma-differentials-diagonal}.
+Hence we see that (4) implies (2).
+At this point we know that (1), (2) and (4) are all equivalent.
+
+\medskip\noindent
+Assume (3). The assumption on the diagram combined with
+Morphisms, Lemma \ref{morphisms-lemma-differentials-relative-immersion}
+show that $\Omega_{U/V, x} = 0$. Since $\Omega_{U/V, x} = \Omega_{X/Y, x}$
+we conclude (2) holds.
+
+\medskip\noindent
+Finally, assume that (2) holds. To prove (3) we may localize on
+$X$ and $Y$ and assume that $X$ and $Y$ are affine.
+Say $X = \Spec(B)$ and $Y = \Spec(A)$.
+The point $x \in X$ corresponds to a prime $\mathfrak q \subset B$.
+Our assumption is that $\Omega_{B/A, \mathfrak q} = 0$
+(see Morphisms, Lemma \ref{morphisms-lemma-differentials-affine} for the
+relationship between differentials on schemes and modules
+of differentials in commutative algebra).
+Since $Y$ is locally Noetherian and $f$ locally of finite type
+we see that $A$ is Noetherian and
+$B \cong A[x_1, \ldots, x_n]/(f_1, \ldots, f_m)$, see
+Properties, Lemma \ref{properties-lemma-locally-Noetherian} and
+Morphisms, Lemma \ref{morphisms-lemma-locally-finite-type-characterize}.
+In particular, $\Omega_{B/A}$ is a finite $B$-module. Hence we
+can find a single $g \in B$, $g \not \in \mathfrak q$ such that
+the principal localization $(\Omega_{B/A})_g$ is zero. Hence after
+replacing $B$ by $B_g$ we see that $\Omega_{B/A} = 0$ (formation
+of modules of differentials commutes with localization, see
+Algebra, Lemma \ref{algebra-lemma-differentials-localize}). This means that
+$\text{d}(f_j)$ generate the kernel of the canonical map
+$\Omega_{A[x_1, \ldots, x_n]/A} \otimes_A B \to \Omega_{B/A}$.
+Thus the surjection $A[x_1, \ldots, x_n] \to B$ of $A$-algebras gives the
+commutative diagram of (3), and the theorem is proved.
+\end{proof}
+
+\noindent
+How can we use this theorem? Well, here are a few remarks:
+\begin{enumerate}
+\item Suppose that
+$f : X \to Y$ and $g : Y \to Z$ are two morphisms locally of finite
+type between locally Noetherian schemes. There is a canonical short
+exact sequence
+$$
+f^*(\Omega_{Y/Z}) \to \Omega_{X/Z} \to \Omega_{X/Y} \to 0
+$$
+see Morphisms, Lemma \ref{morphisms-lemma-triangle-differentials}.
+The theorem therefore implies that if $g \circ f$ is unramified,
+then so is $f$. This is
+Morphisms, Lemma \ref{morphisms-lemma-unramified-permanence}.
+\item Since $\Omega_{X/Y}$ is isomorphic to the conormal sheaf
+of the diagonal morphism
+(Morphisms, Lemma \ref{morphisms-lemma-differentials-diagonal})
we see that if $X \to Y$ is a monomorphism of
locally Noetherian schemes which is locally of finite type,
+then $X \to Y$ is unramified.
+In particular, open and closed immersions of locally Noetherian schemes
+are unramified. See
+Morphisms, Lemmas
+\ref{morphisms-lemma-open-immersion-unramified} and
+\ref{morphisms-lemma-closed-immersion-unramified}.
+\item The theorem also implies that the set of points
where a morphism $f : X \to Y$ (locally of finite type between locally Noetherian
+schemes) is not unramified is
+the support of the coherent sheaf $\Omega_{X/Y}$.
+This allows one to give a scheme theoretic definition to the
+``ramification locus''.
+\end{enumerate}
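
\medskip\noindent
As an example of the last remark (purely for illustration), let $k$ be a
field of characteristic $\not = 2$ and consider the squaring map
$f : \mathbf{A}^1_k \to \mathbf{A}^1_k$ corresponding to
$k[t] \to k[x]$, $t \mapsto x^2$. Then
$$
\Omega_{k[x]/k[t]} = k[x]\,\text{d}x/(2x\,\text{d}x) \cong k[x]/(x)
$$
since $2$ is invertible. Hence the support of $\Omega_{X/Y}$ is the
origin, i.e., $f$ is unramified exactly at the points with $x \not = 0$
and the ramification locus is the closed subscheme
$V(x) \subset \mathbf{A}^1_k$.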
+
+\section{The functorial characterization of unramified morphisms}
+\label{section-functorial-unramified}
+
+\noindent
+In basic algebraic geometry we learn that some classes of morphisms can be
+characterized functorially, and that such descriptions are quite useful.
+Unramified morphisms too have such a characterization.
+
+\begin{theorem}
+\label{theorem-formally-unramified}
+Let $f : X \to S$ be a morphism of schemes.
+Assume $S$ is a locally Noetherian scheme, and $f$ is locally of finite type.
+Then the following are equivalent:
+\begin{enumerate}
+\item $f$ is unramified,
+\item the morphism $f$ is formally unramified:
+for any affine $S$-scheme $T$ and subscheme $T_0$ of $T$
+defined by a square-zero ideal,
+the natural map
+$$
+\Hom_S(T, X) \longrightarrow \Hom_S(T_0, X)
+$$
+is injective.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+See More on Morphisms,
+Lemma \ref{more-morphisms-lemma-unramified-formally-unramified}
+for a more general statement and proof.
+What follows is a sketch of the proof in the current case.
+
+\medskip\noindent
+Firstly, one checks both properties are local on the source and the target.
Thus we may assume that $S$ and $X$ are affine.
+Say $X = \Spec(B)$ and $S = \Spec(R)$.
+Say $T = \Spec(C)$. Let $J$ be the square-zero ideal of $C$
+with $T_0 = \Spec(C/J)$. Assume that we are given the diagram
+$$
+\xymatrix{
+& B \ar[d]^\phi \ar[rd]^{\bar{\phi}}
+& \\
+R \ar[r] \ar[ur] & C \ar[r]
+& C/J
+}
+$$
+Secondly, one checks that the association $\phi' \mapsto \phi' - \phi$
+gives a bijection between the set of liftings of $\bar{\phi}$ and the module
+$\text{Der}_R(B, J)$. Thus, we obtain the implication (1) $\Rightarrow$ (2)
+via the description of unramified morphisms having trivial module
+of differentials, see Theorem \ref{theorem-unramified-equivalence}.
+
+\medskip\noindent
+To obtain the reverse implication, consider the surjection
+$q : C = (B \otimes_R B)/I^2 \to B = C/J$ defined by the square zero ideal
+$J = I/I^2$ where $I$ is the kernel of the multiplication map
+$B \otimes_R B \to B$. We already have a lifting $B \to C$ defined by, say,
+$b \mapsto b \otimes 1$. Thus, by the same reasoning as above, we obtain a
+bijective correspondence between liftings of $\text{id} : B \to C/J$ and
+$\text{Der}_R(B, J)$. The hypothesis therefore implies that the latter module is
trivial. But we know that $J \cong \Omega_{B/R}$. Thus $B$ is unramified over $R$.
+\end{proof}
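
\medskip\noindent
A quick sanity check of the theorem (an illustration only): take
$X = \mathbf{A}^1_S$ with $S = \Spec(R)$ affine. For an affine
$S$-scheme $T = \Spec(C)$ and $T_0 = \Spec(C/J)$ with $J^2 = 0$, the map
$$
\Hom_S(T, \mathbf{A}^1_S) = C \longrightarrow C/J = \Hom_S(T_0, \mathbf{A}^1_S)
$$
fails to be injective as soon as $J \not = 0$. This matches the fact
that $\mathbf{A}^1_S \to S$ is not unramified: the module
$\Omega_{\mathbf{A}^1_S/S}$ is free of rank $1$, and the liftings of a
given morphism $T_0 \to \mathbf{A}^1_S$ form a torsor under
$\text{Der}_R(R[x], J) \cong J$.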
+
+
+
+\section{Topological properties of unramified morphisms}
+\label{section-topological-unramified}
+
+\noindent
+The first topological result that will be of utility to us is one which says
+that unramified and separated morphisms have ``nice'' sections.
+The material in this section does not require any Noetherian hypotheses.
+
+\begin{proposition}
+\label{proposition-properties-sections}
+Sections of unramified morphisms.
+\begin{enumerate}
+\item Any section of an unramified morphism is an open immersion.
+\item Any section of a separated morphism is a closed immersion.
+\item Any section of an unramified separated morphism is open and closed.
+\end{enumerate}
+\end{proposition}
+
+\begin{proof}
+Fix a base scheme $S$.
+If $f : X' \to X$ is any $S$-morphism, then the graph
+$\Gamma_f : X' \to X' \times_S X$
+is obtained as the base change of the diagonal
+$\Delta_{X/S} : X \to X \times_S X$ via the projection
+$X' \times_S X \to X \times_S X$.
+If $g : X \to S$ is separated (resp. unramified)
+then the diagonal is a closed immersion (resp. open immersion)
+by Schemes, Definition \ref{schemes-definition-separated}
+(resp.\ Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism}).
+Hence so is the graph as a base change (by
+Schemes, Lemma \ref{schemes-lemma-base-change-immersion}).
+In the special case $X' = S$, we obtain (1), resp.\ (2).
+Part (3) follows on combining (1) and (2).
+\end{proof}
+
+\noindent
+We can now explicitly describe the sections of unramified morphisms.
+
+\begin{theorem}
+\label{theorem-sections-unramified-maps}
+Let $Y$ be a connected scheme.
+Let $f : X \to Y$ be unramified and separated.
+Every section of $f$ is an isomorphism onto a connected component.
+There exists a bijective correspondence
+$$
+\text{sections of }f
+\leftrightarrow
+\left\{
+\begin{matrix}
+\text{connected components }X'\text{ of }X\text{ such that}\\
+\text{the induced map }X' \to Y\text{ is an isomorphism}
+\end{matrix}
+\right\}
+$$
+In particular, given $x \in X$ there is at most one
+section passing through $x$.
+\end{theorem}
+
+\begin{proof}
+Direct from Proposition \ref{proposition-properties-sections} part (3).
+\end{proof}
+
+\noindent
+The preceding theorem gives us some idea of the ``rigidity'' of unramified
+morphisms. Further indication is provided by the following proposition
+which, besides being intrinsically interesting, is also useful in the
+theory of the algebraic fundamental group (see \cite[Expos\'e V]{SGA1}).
+See also the more general
+Morphisms, Lemma \ref{morphisms-lemma-value-at-one-point}.
+
+\begin{proposition}
+\label{proposition-equality}
+Let $S$ be a scheme.
+Let $\pi : X \to S$ be unramified and separated.
+Let $Y$ be an $S$-scheme and $y \in Y$ a point.
+Let $f, g : Y \to X$ be two $S$-morphisms. Assume
+\begin{enumerate}
+\item $Y$ is connected,
+\item $x = f(y) = g(y)$, and
+\item the induced maps $f^\sharp, g^\sharp : \kappa(x) \to \kappa(y)$
+on residue fields are equal.
+\end{enumerate}
+Then $f = g$.
+\end{proposition}
+
+\begin{proof}
+The maps $f, g : Y \to X$ define maps $f', g' : Y \to X_Y = Y \times_S X$
+which are sections of the structure map $X_Y \to Y$.
+Note that $f = g$ if and only if $f' = g'$.
+The structure map $X_Y \to Y$ is the base change of $\pi$ and hence
+unramified and separated also (see
+Morphisms, Lemmas \ref{morphisms-lemma-base-change-unramified} and
+Schemes, Lemma \ref{schemes-lemma-separated-permanence}).
+Thus according to Theorem \ref{theorem-sections-unramified-maps}
+it suffices to prove that $f'$ and $g'$ pass through the same
+point of $X_Y$. And this is exactly what the hypotheses (2) and (3)
+guarantee, namely $f'(y) = g'(y) \in X_Y$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finitely-many-maps-to-unramified}
+Let $S$ be a Noetherian scheme. Let $X \to S$ be a quasi-compact unramified
+morphism. Let $Y \to S$ be a morphism with $Y$ Noetherian. Then
+$\Mor_S(Y, X)$ is a finite set.
+\end{lemma}
+
+\begin{proof}
+Assume first $X \to S$ is separated (which is often the case in practice).
+Since $Y$ is Noetherian it has finitely many connected components. Thus we
+may assume $Y$ is connected. Choose a point $y \in Y$ with image $s \in S$.
+Since $X \to S$ is unramified and quasi-compact,
+the fibre $X_s$ is finite, say $X_s = \{x_1, \ldots, x_n\}$
+and $\kappa(x_i)/\kappa(s)$ is a finite field extension.
+See Morphisms, Lemmas \ref{morphisms-lemma-unramified-quasi-finite},
+\ref{morphisms-lemma-residue-field-quasi-finite}, and
+\ref{morphisms-lemma-quasi-finite}.
+For each $i$ there are at most finitely many $\kappa(s)$-algebra
+maps $\kappa(x_i) \to \kappa(y)$ (by elementary field theory).
+Thus $\Mor_S(Y, X)$ is finite by
+Proposition \ref{proposition-equality}.
+
+\medskip\noindent
+General case. There exists a nonempty open $U \subset S$ such
+that $X_U \to U$ is finite (in particular separated), see
+Morphisms, Lemma \ref{morphisms-lemma-generically-finite}
+(the lemma applies since we've already seen above that a quasi-compact
+unramified morphism is quasi-finite and since $X \to S$ is quasi-separated by
+Morphisms, Lemma \ref{morphisms-lemma-finite-type-Noetherian-quasi-separated}).
+Let $Z \subset S$ be the reduced closed subscheme supported on
+the complement of $U$. By Noetherian induction, we see that
+$\Mor_Z(Y_Z, X_Z)$ is finite (details omitted).
+By the result of the first paragraph the set
+$\Mor_U(Y_U, X_U)$ is finite. Thus it suffices to show that
+$$
+\Mor_S(Y, X) \longrightarrow \Mor_Z(Y_Z, X_Z) \times \Mor_U(Y_U, X_U)
+$$
+is injective. This follows from the fact that the set of points where
+two morphisms $a, b : Y \to X$ agree is open in $Y$, because
+$\Delta : X \to X \times_S X$ is an open immersion, see
+Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Universally injective, unramified morphisms}
+\label{section-universally-injective-unramified}
+
+\noindent
+Recall that a morphism of schemes $f : X \to Y$ is universally
+injective if any base change of $f$ is injective (on underlying
+topological spaces), see
+Morphisms, Definition \ref{morphisms-definition-universally-injective}.
+Universally injective and unramified morphisms can be
+characterized as follows.
+
+\begin{lemma}
+\label{lemma-universally-injective-unramified}
+Let $f : X \to S$ be a morphism of schemes.
+The following are equivalent:
+\begin{enumerate}
+\item $f$ is unramified and a monomorphism,
+\item $f$ is unramified and universally injective,
+\item $f$ is locally of finite type and a monomorphism,
+\item $f$ is universally injective, locally of finite type, and
+formally unramified,
+\item $f$ is locally of finite type and $X_s$ is either empty
+or $X_s \to s$ is an isomorphism for all $s \in S$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+We have seen in
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-unramified-formally-unramified}
+that being formally unramified and locally of finite type is the same thing
+as being unramified. Hence (4) is equivalent to (2).
+A monomorphism is certainly universally injective and
+formally unramified hence (3) implies (4).
+It is clear that (1) implies (3). Finally, if (2) holds, then
+$\Delta : X \to X \times_S X$ is both an open immersion
+(Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism})
+and surjective
+(Morphisms, Lemma \ref{morphisms-lemma-universally-injective})
+hence an isomorphism, i.e., $f$ is a monomorphism. In this way we see that
+(2) implies (1).
+
+\medskip\noindent
+Condition (3) implies (5) because monomorphisms are preserved under
+base change
+(Schemes, Lemma \ref{schemes-lemma-base-change-monomorphism})
+and because of the description of monomorphisms towards the spectra of fields
+in
+Schemes, Lemma \ref{schemes-lemma-mono-towards-spec-field}.
+Condition (5) implies (4) by
+Morphisms, Lemmas \ref{morphisms-lemma-universally-injective} and
+\ref{morphisms-lemma-unramified-etale-fibres}.
+\end{proof}
+
+\noindent
+This leads to the following useful characterization of closed immersions.
+
+\begin{lemma}
+\label{lemma-characterize-closed-immersion}
+Let $f : X \to S$ be a morphism of schemes.
+The following are equivalent:
+\begin{enumerate}
+\item $f$ is a closed immersion,
+\item $f$ is a proper monomorphism,
+\item $f$ is proper, unramified, and universally injective,
+\item $f$ is universally closed, unramified, and a monomorphism,
+\item $f$ is universally closed, unramified, and universally injective,
+\item $f$ is universally closed, locally of finite type, and a monomorphism,
+\item $f$ is universally closed, universally injective, locally of
+finite type, and formally unramified.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The equivalence of (4) -- (7) follows immediately from
+Lemma \ref{lemma-universally-injective-unramified}.
+
+\medskip\noindent
+Let $f : X \to S$ satisfy (6). Then $f$ is separated, see
+Schemes, Lemma \ref{schemes-lemma-monomorphism-separated}
+and has finite fibres. Hence
+More on Morphisms, Lemma \ref{more-morphisms-lemma-characterize-finite}
+shows $f$ is finite. Then
+Morphisms, Lemma \ref{morphisms-lemma-finite-monomorphism-closed}
+implies $f$ is a closed immersion, i.e., (1) holds.
+
+\medskip\noindent
+Note that (1) $\Rightarrow$ (2) because a closed immersion is
+proper and a monomorphism
+(Morphisms, Lemma \ref{morphisms-lemma-closed-immersion-proper}
+and
+Schemes, Lemma \ref{schemes-lemma-immersions-monomorphisms}).
+By
+Lemma \ref{lemma-universally-injective-unramified}
+we see that (2) implies (3). It is clear that (3) implies (5).
+\end{proof}
+
+\noindent
+Here is another result of a similar flavor.
+
+\begin{lemma}
+\label{lemma-finite-unramified-one-point}
+Let $\pi : X \to S$ be a morphism of schemes. Let $s \in S$.
+Assume that
+\begin{enumerate}
+\item $\pi$ is finite,
+\item $\pi$ is unramified,
+\item $\pi^{-1}(\{s\}) = \{x\}$, and
+\item $\kappa(s) \subset \kappa(x)$ is purely
+inseparable\footnote{In view of condition (2)
+this is equivalent to $\kappa(s) = \kappa(x)$.}.
+\end{enumerate}
+Then there exists an open neighbourhood $U$ of $s$ such that
+$\pi|_{\pi^{-1}(U)} : \pi^{-1}(U) \to U$ is a closed immersion.
+\end{lemma}
+
+\begin{proof}
+The question is local on $S$. Hence we may assume that $S = \Spec(A)$.
+By definition of a finite morphism this implies $X = \Spec(B)$.
+Note that the ring map $\varphi : A \to B$ defining $\pi$
+is a finite unramified ring map.
+Let $\mathfrak p \subset A$ be the prime corresponding to $s$.
+Let $\mathfrak q \subset B$ be the prime corresponding to $x$.
+Conditions (2), (3) and (4) imply that
+$B_{\mathfrak q}/\mathfrak pB_{\mathfrak q} = \kappa(\mathfrak p)$.
+By Algebra, Lemma \ref{algebra-lemma-unique-prime-over-localize-below}
+we have $B_{\mathfrak q} = B_{\mathfrak p}$
+(note that a finite ring map satisfies going up, see
+Algebra, Section \ref{algebra-section-going-up}.)
+Hence we see that
+$B_{\mathfrak p}/\mathfrak pB_{\mathfrak p} = \kappa(\mathfrak p)$.
+As $B$ is a finite $A$-module we see from Nakayama's lemma (see
+Algebra, Lemma \ref{algebra-lemma-NAK})
+that $B_{\mathfrak p} = \varphi(A_{\mathfrak p})$. Hence (using the finiteness
+of $B$ as an $A$-module again) there exists an
+$f \in A$, $f \not \in \mathfrak p$ such that $B_f = \varphi(A_f)$
+as desired.
+\end{proof}
+
+\noindent
+The topological results presented above will be used to give a functorial
+characterization of \'etale morphisms similar to Theorem
+\ref{theorem-formally-unramified}.
+
+
+
+
+\section{Examples of unramified morphisms}
+\label{section-examples}
+
+\noindent
+Here are a few examples.
+
+\begin{example}
+\label{example-etale-field-extensions}
+Let $k$ be a field.
+Unramified quasi-compact morphisms $X \to \Spec(k)$ are affine.
+This is true because $X$ has dimension $0$ and is Noetherian,
+hence is a finite discrete set, and each point gives an affine open,
+so $X$ is a finite disjoint union of affines hence affine.
+Noether normalization forces $X$ to be the spectrum of a finite
+$k$-algebra $A$.
+This algebra is a product of finite separable field extensions of $k$.
+Thus, an unramified quasi-compact morphism to $\Spec(k)$
+corresponds to a finite number of finite separable field extensions of $k$.
+In particular, an unramified morphism with a connected source and a one point
+target is forced to be a finite separable field extension.
+As we will see later, $X \to \Spec(k)$ is \'etale if and
+only if it is unramified. Thus, in this case at least, we obtain a very easy
+description of the \'etale topology of a scheme. Of course, the cohomology of
+this topology is another story.
+\end{example}
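+
+\noindent
+To illustrate, $\Spec(\mathbf{Q}(\sqrt{2})) \to \Spec(\mathbf{Q})$ is
+unramified, being a finite separable extension. On the other hand, for
+$k = \mathbf{F}_p(t)$ the morphism $\Spec(k(t^{1/p})) \to \Spec(k)$ is
+not unramified: writing $k(t^{1/p}) = k[s]/(s^p - t)$ we find
+$$
+\Omega_{k(t^{1/p})/k} = k(t^{1/p})\,\text{d}s/(ps^{p - 1}\,\text{d}s)
+= k(t^{1/p})\,\text{d}s \neq 0
+$$
+since $p = 0$ in $k$, in accordance with the extension being inseparable.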
+
+\begin{example}
+\label{example-standard-etale}
+Property (3) in
+Theorem \ref{theorem-unramified-equivalence}
+gives us a canonical source of examples for unramified morphisms.
+Fix a ring $R$ and an integer $n$. Let $I = (g_1, \ldots, g_m)$ be an
+ideal in $R[x_1, \ldots, x_n]$. Let $\mathfrak q \subset R[x_1, \ldots, x_n]$
+be a prime. Assume $I \subset \mathfrak q$ and that the matrix
+$$
+\left(\frac{\partial g_i}{\partial x_j}\right) \bmod \mathfrak q
+\quad\in\quad
+\text{Mat}(n \times m, \kappa(\mathfrak q))
+$$
+has rank $n$. Then the morphism
+$f : Z = \Spec(R[x_1, \ldots, x_n]/I) \to \Spec(R)$
+is unramified at the point $x \in Z \subset \mathbf{A}^n_R$ corresponding
+to $\mathfrak q$. Clearly we must have $m \geq n$.
+In the extreme case $m = n$, i.e., when the differential of the map
+$\mathbf{A}^n_R \to \mathbf{A}^n_R$ defined by the $g_i$'s
+is an isomorphism of the tangent spaces, $f$ is also flat
+at $x$ and, hence, is an \'etale map (see Algebra,
+Definition \ref{algebra-definition-standard-smooth},
+Lemma \ref{algebra-lemma-standard-smooth} and
+Example \ref{algebra-example-make-standard-smooth}).
+\end{example}
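+
+\noindent
+A simple instance of this criterion (with $n = m = 1$): let $R$ be an
+$\mathbf{F}_p$-algebra and $g_1 = x_1^p - x_1 - a$ for some $a \in R$.
+The matrix
+$$
+\left(\partial g_1/\partial x_1\right) = (px_1^{p - 1} - 1) = (-1)
+$$
+has rank $1$ modulo every prime, so the Artin-Schreier cover
+$\Spec(R[x_1]/(x_1^p - x_1 - a)) \to \Spec(R)$ is unramified, and in
+fact \'etale, at every point.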
+
+\begin{example}
+\label{example-number-theory-etale}
+Fix an extension of number fields $L/K$ with rings of integers
+$\mathcal{O}_L$ and $\mathcal{O}_K$. The injection $K \to L$ defines a
+morphism $f : \Spec(\mathcal{O}_L) \to \Spec(\mathcal{O}_K)$.
+As discussed above, the points where $f$ is unramified in our sense
+correspond to the set of points where $f$ is unramified in the conventional
+sense. In the conventional sense, the locus of ramification in
+$\Spec(\mathcal{O}_L)$ can be defined as the vanishing set of the
+different; this is an ideal in $\mathcal{O}_L$. In fact, the different is
+nothing but the annihilator of the module
+$\Omega_{\mathcal{O}_L/\mathcal{O}_K}$. Similarly, the
+discriminant is an ideal in $\mathcal{O}_K$, namely it is the
+norm of the different.
+The vanishing set of the discriminant is precisely the set
+of primes of $K$ which ramify in $L$.
+Thus, denoting by $X$ the complement of the closed subset
+defined by the different in $\Spec(\mathcal{O}_L)$,
+we obtain a morphism $X \to \Spec(\mathcal{O}_K)$ which is unramified.
+Furthermore, this morphism is also flat, as any local homomorphism
+of discrete valuation rings is flat, and hence this morphism is
+actually \'etale. If $L/K$ is finite Galois, then denoting by
+$Y$ the complement of the closed subset defined by the discriminant in
+$\Spec(\mathcal{O}_K)$, we see that we get even a
+finite \'etale morphism $X \to Y$.
+Thus, this is an example of a finite \'etale covering.
+\end{example}
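+
+\noindent
+A classical illustration: take $K = \mathbf{Q}$ and $L = \mathbf{Q}(i)$,
+so that $\mathcal{O}_L = \mathbf{Z}[i] = \mathbf{Z}[t]/(t^2 + 1)$. Then
+$$
+\Omega_{\mathbf{Z}[i]/\mathbf{Z}} =
+\mathbf{Z}[i]\,\text{d}t/(2t\,\text{d}t) \cong \mathbf{Z}[i]/(2),
+$$
+so the different is the annihilator $(2) = (1 + i)^2$ and the
+discriminant is its norm $(4) \subset \mathbf{Z}$. Inverting the
+different yields a finite \'etale morphism
+$$
+\Spec(\mathbf{Z}[i][1/2]) \longrightarrow \Spec(\mathbf{Z}[1/2])
+$$
+and indeed $2$ is the only prime of $\mathbf{Q}$ which ramifies
+in $\mathbf{Q}(i)$.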
+
+
+
+
+
+\section{Flat morphisms}
+\label{section-flat-morphisms}
+
+\noindent
+This section simply exists to summarize the properties of flatness that will
+be useful to us. Thus, we will be content with stating the theorems precisely
+and giving references for the proofs.
+
+\medskip\noindent
+After briefly recalling the necessary facts about flat modules over Noetherian
+rings, we state a theorem of Grothendieck which gives sufficient conditions
+for ``hyperplane sections'' of certain modules to be flat.
+
+\begin{definition}
+\label{definition-flat-rings}
+Flatness of modules and rings.
+\begin{enumerate}
+\item A module $N$ over a ring $A$ is said to be {\it flat}
+if the functor $M \mapsto M \otimes_A N$ is exact.
+\item If this functor is also faithful, we say that
+$N$ is {\it faithfully flat} over $A$.
+\item A morphism of rings $f : A \to B$ is said to be
+{\it flat (resp. faithfully flat)}
+if the functor $M \mapsto M \otimes_A B$ is exact
+(resp. faithful and exact).
+\end{enumerate}
+\end{definition}
+
+\noindent
+Here is a list of facts with references to the algebra chapter.
+\begin{enumerate}
+\item Free and projective modules are flat. This is clear for free modules
+and follows for projective modules as they are direct summands of free
+modules and $\otimes$ commutes with direct sums.
+\item Flatness is a local property, that is, $M$ is flat over $A$
+if and only if $M_{\mathfrak p}$ is flat over $A_{\mathfrak p}$ for all
+$\mathfrak p \in \Spec(A)$.
+See Algebra, Lemma \ref{algebra-lemma-flat-localization}.
+\item If $M$ is a flat $A$-module and $A \to B$ is a ring map,
+then $M \otimes_A B$ is a flat $B$-module. See
+Algebra, Lemma \ref{algebra-lemma-flat-base-change}.
+\item Finite flat modules over local rings are free.
+See Algebra, Lemma \ref{algebra-lemma-finite-flat-local}.
+\item If $f : A \to B$ is a morphism of arbitrary rings,
+$f$ is flat if and only if the induced maps
+$A_{f^{-1}(\mathfrak q)} \to B_{\mathfrak q}$ are flat for all
+$\mathfrak q \in \Spec(B)$.
+See Algebra, Lemma \ref{algebra-lemma-flat-localization}.
+\item If $f : A \to B$ is a local homomorphism of local rings,
+$f$ is flat if and only if it is faithfully flat.
+See Algebra, Lemma \ref{algebra-lemma-local-flat-ff}.
+\item A map $A \to B$ of rings is faithfully flat if and only if it is
+flat and the induced map on spectra is surjective.
+See Algebra, Lemma \ref{algebra-lemma-ff-rings}.
+\item If $A$ is a Noetherian local ring, the completion
+$A^\wedge$ is faithfully flat over $A$.
+See Algebra, Lemma \ref{algebra-lemma-completion-faithfully-flat}.
+\item Let $A$ be a Noetherian local ring and $M$ an $A$-module.
+Then $M$ is flat over $A$ if and only if $M \otimes_A A^\wedge$
+is flat over $A^\wedge$. (Combine the previous statement with
+Algebra, Lemma \ref{algebra-lemma-flatness-descends}.)
+\end{enumerate}
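+As a quick illustration of the definition: for $n \geq 2$ the
+$\mathbf{Z}$-module $\mathbf{Z}/n\mathbf{Z}$ is not flat, since tensoring
+the injective map $n : \mathbf{Z} \to \mathbf{Z}$ with
+$\mathbf{Z}/n\mathbf{Z}$ produces the zero map
+$\mathbf{Z}/n\mathbf{Z} \to \mathbf{Z}/n\mathbf{Z}$, which is not
+injective. On the other hand, any localization $S^{-1}A$ is flat over
+$A$ because localization is an exact functor.
+
+\medskip\noindent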
+Before we move on to the geometric category, we present Grothendieck's
+theorem, which provides a convenient recipe for producing flat
+modules.
+
+\begin{theorem}
+\label{theorem-flatness-grothendieck}
+Let $A$, $B$ be Noetherian local rings.
+Let $f : A \to B$ be a local homomorphism.
+If $M$ is a finite $B$-module that is flat as an $A$-module,
+and $t \in \mathfrak m_B$ is an element such that multiplication
+by $t$ is injective on $M/\mathfrak m_AM$, then $M/tM$ is also $A$-flat.
+\end{theorem}
+
+\begin{proof}
+See Algebra, Lemma \ref{algebra-lemma-mod-injective}.
+See also \cite[Section 20]{MatCA}.
+\end{proof}
+
+\begin{definition}
+\label{definition-flat-schemes}
+(See Morphisms, Definition \ref{morphisms-definition-flat}).
+Let $f : X \to Y$ be a morphism of schemes.
+Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_X$-module.
+\begin{enumerate}
+\item Let $x \in X$. We say $\mathcal{F}$ is
+{\it flat over $Y$ at $x \in X$} if $\mathcal{F}_x$
+is a flat $\mathcal{O}_{Y, f(x)}$-module.
+This uses the map $\mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$ to
+think of $\mathcal{F}_x$ as a $\mathcal{O}_{Y, f(x)}$-module.
+\item Let $x \in X$. We say $f$ is {\it flat at $x \in X$}
+if $\mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$ is flat.
+\item We say $f$ is {\it flat} if it is flat at all points of $X$.
+\item A morphism $f : X \to Y$ that is flat and surjective is sometimes
+said to be {\it faithfully flat}.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Once again, here is a list of results:
+\begin{enumerate}
+\item The property (of a morphism) of being flat is, by fiat,
+local in the Zariski topology on the source and the target.
+\item Open immersions are flat. (This is clear because they induce
+isomorphisms on local rings.)
+\item Flat morphisms are stable under base change and composition. See
+Morphisms, Lemmas \ref{morphisms-lemma-base-change-flat} and
+\ref{morphisms-lemma-composition-flat}.
+\item If $f : X \to Y$ is flat, then the pullback functor
+$\QCoh(\mathcal{O}_Y) \to \QCoh(\mathcal{O}_X)$ is exact.
+This is immediate by looking at stalks.
+\item Let $f : X \to Y$ be a morphism of schemes, and assume $Y$
+is quasi-compact and quasi-separated. In this case
+if the functor $f^*$ is exact then $f$ is flat.
+(Proof omitted. Hint: Use
+Properties, Lemma \ref{properties-lemma-extend-trivial} to see that
+$Y$ has ``enough'' ideal sheaves and use the characterization of
+flatness in Algebra, Lemma \ref{algebra-lemma-flat}.)
+\end{enumerate}
+
+
+
+\section{Topological properties of flat morphisms}
+\label{section-topological-flat}
+
+\noindent
+We ``recall'' below some openness properties that flat morphisms enjoy.
+
+\begin{theorem}
+\label{theorem-flat-open}
+Let $Y$ be a locally Noetherian scheme.
+Let $f : X \to Y$ be a morphism which is locally of finite type.
+Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module.
+The set of points in $X$ where $\mathcal{F}$ is flat over $Y$ is an open set.
+In particular the set of points where $f$ is flat is open in $X$.
+\end{theorem}
+
+\begin{proof}
+See More on Morphisms, Theorem \ref{more-morphisms-theorem-openness-flatness}.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-flat-map-open}
+Let $Y$ be a locally Noetherian scheme.
+Let $f : X \to Y$ be a morphism which is flat and locally of finite type.
+Then $f$ is (universally) open.
+\end{theorem}
+
+\begin{proof}
+See Morphisms, Lemma \ref{morphisms-lemma-fppf-open}.
+\end{proof}
+
+\begin{theorem}
+\label{theorem-flat-is-quotient}
+A faithfully flat quasi-compact morphism is a quotient map for
+the Zariski topology.
+\end{theorem}
+
+\begin{proof}
+See Morphisms, Lemma \ref{morphisms-lemma-fpqc-quotient-topology}.
+\end{proof}
+
+\noindent
+An important reason to study flat morphisms is that they provide the adequate
+framework for capturing the notion of a family of schemes parametrized by the
+points of another scheme. Naively one may think that any morphism $f : X \to S$
+should be thought of as a family parametrized by the points of $S$. However,
+without a flatness restriction on $f$, really bizarre things can happen in
+this so-called family. For instance, we aren't guaranteed that relative
+dimension (dimension of the fibres) is constant in a family. Other numerical
+invariants, such as the Hilbert polynomial, may also change from fibre to
+fibre. Flatness prevents such things from happening and, therefore, provides
+some ``continuity'' to the fibres.
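+
+\medskip\noindent
+The standard example of such a pathology is the projection
+$$
+\Spec(k[x, y]/(xy)) \longrightarrow \Spec(k[x]).
+$$
+The fibre over a point $x = c$ with $c \neq 0$ is a single reduced point,
+while the fibre over $x = 0$ is an affine line, so the fibre dimension
+jumps. Accordingly, the morphism is not flat at the points lying over
+$x = 0$: the element $y$ is killed by $x$, so $k[x, y]/(xy)$ has
+$x$-torsion, whereas a flat module over the domain $k[x]$ is torsion free.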
+
+
+\section{\'Etale morphisms}
+\label{section-etale-morphisms}
+
+\noindent
+In this section, we will define \'etale morphisms and prove a number of
+important properties about them. The most important one, no doubt, is the
+functorial characterization presented in Theorem \ref{theorem-formally-etale}.
+Following this, we will also discuss a few properties of rings which are
+insensitive to an \'etale extension (properties which hold for a ring
+if and only if they hold for all its \'etale extensions) to motivate the basic
+tenet of \'etale cohomology -- \'etale morphisms are the algebraic analogue of
+local isomorphisms.
+
+\medskip\noindent
+As the title suggests, we will define the class of \'etale morphisms -- the
+class of morphisms (whose surjective families) we shall deem to be coverings
+in the category of schemes over a base scheme $S$ in order to define the
+\'etale site $S_\etale$. Intuitively, an \'etale morphism is supposed
+to capture the idea of a covering space and, therefore, should be close to a
+local isomorphism. If we're working with varieties over algebraically closed
+fields, this last statement can be made into a definition provided we replace
+``local isomorphism'' with ``formal local isomorphism'' (isomorphism after
+completion). One can then give a definition over any base field by asking
+that the base change to the algebraic closure be \'etale (in the
+aforementioned sense). But, rather than proceeding via such aesthetically
+displeasing constructions, we will adopt a cleaner, albeit slightly more
+abstract, algebraic approach.
+
+\medskip\noindent
+We first define ``\'etale homomorphisms of local rings'' for Noetherian
+local rings. We cannot use the term ``\'etale'', as there already
+is a notion of an \'etale ring map
+(Algebra, Section \ref{algebra-section-etale})
+and it is different.
+
+\begin{definition}
+\label{definition-etale-ring}
+Let $A$, $B$ be Noetherian local rings.
+A local homomorphism $f : A \to B$ is said to be an
+{\it \'etale homomorphism of local rings}
+if it is flat and an unramified homomorphism of local rings
+(please see Definition \ref{definition-unramified-rings}).
+\end{definition}
+
+\noindent
+This is the local version of the definition of an \'etale ring map in
+Algebra, Section \ref{algebra-section-etale}.
+The exact definition
+given in that section is that it is a smooth ring map of relative
+dimension $0$. It is shown (in
+Algebra, Lemma \ref{algebra-lemma-etale-standard-smooth})
+that an \'etale $R$-algebra $S$ always has a presentation
+$$
+S = R[x_1, \ldots, x_n]/(f_1, \ldots, f_n)
+$$
+such that
+$$
+g =
+\det
+\left(
+\begin{matrix}
+\partial f_1/\partial x_1 &
+\partial f_2/\partial x_1 &
+\ldots &
+\partial f_n/\partial x_1 \\
+\partial f_1/\partial x_2 &
+\partial f_2/\partial x_2 &
+\ldots &
+\partial f_n/\partial x_2 \\
+\ldots & \ldots & \ldots & \ldots \\
+\partial f_1/\partial x_n &
+\partial f_2/\partial x_n &
+\ldots &
+\partial f_n/\partial x_n
+\end{matrix}
+\right)
+$$
+maps to an invertible element in $S$. The following two lemmas link the two
+notions.
+
+\begin{lemma}
+\label{lemma-characterize-etale-Noetherian}
+Let $A \to B$ be of finite type with $A$ a Noetherian ring.
+Let $\mathfrak q$ be a prime of $B$ lying over $\mathfrak p \subset A$.
+Then $A \to B$ is \'etale at $\mathfrak q$ if and only if
+$A_{\mathfrak p} \to B_{\mathfrak q}$ is an \'etale homomorphism
+of local rings.
+\end{lemma}
+
+\begin{proof}
+See Algebra, Lemmas \ref{algebra-lemma-etale} (flatness of \'etale maps),
+\ref{algebra-lemma-etale-at-prime} (\'etale maps are unramified)
+and \ref{algebra-lemma-characterize-etale} (flat and unramified maps
+are \'etale).
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-etale-completions}
+Let $A$, $B$ be Noetherian local rings.
+Let $A \to B$ be a local homomorphism such that $B$ is essentially of
+finite type over $A$.
+The following are equivalent
+\begin{enumerate}
+\item $A \to B$ is an \'etale homomorphism of local rings
+\item $A^\wedge \to B^\wedge$ is an \'etale homomorphism of local rings, and
+\item $A^\wedge \to B^\wedge$ is \'etale.
+\end{enumerate}
+Moreover, in this case $B^\wedge \cong (A^\wedge)^{\oplus n}$ as
+$A^\wedge$-modules for some $n \geq 1$.
+\end{lemma}
+
+\begin{proof}
+To see the equivalences of (1), (2) and (3), as we have the corresponding
+results for unramified ring maps
+(Lemma \ref{lemma-characterize-unramified-completions})
+it suffices to prove that
+$A \to B$ is flat if and only if $A^\wedge \to B^\wedge$ is flat.
+This is clear from our lists of properties of flat maps since
+the ring maps $A \to A^\wedge$ and $B \to B^\wedge$ are faithfully flat.
+For the final statement, by Lemma \ref{lemma-unramified-completions}
+we see that $B^\wedge$ is a finite flat $A^\wedge$ module.
+Hence it is finite free by our list
+of properties on flat modules in Section \ref{section-flat-morphisms}.
+\end{proof}
+
+\noindent
+The integer $n$ which occurs in the lemma above
+is nothing other than the degree
+$[\kappa(\mathfrak m_B) : \kappa(\mathfrak m_A)]$ of the residue field
+extension. In particular, if $\kappa(\mathfrak m_A)$
+is separably closed, we see that $A^\wedge \to B^\wedge$
+is an isomorphism, which vindicates our earlier claims.
+
+\begin{definition}
+\label{definition-etale-schemes-1}
+(See Morphisms, Definition \ref{morphisms-definition-etale}.)
+Let $Y$ be a locally Noetherian scheme.
+Let $f : X \to Y$ be a morphism of schemes which is locally of finite type.
+\begin{enumerate}
+\item Let $x \in X$. We say $f$ is {\it \'etale at $x \in X$} if
+$\mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$ is an
+\'etale homomorphism of local rings.
+\item The morphism is said to be {\it \'etale} if it is \'etale at all its
+points.
+\end{enumerate}
+\end{definition}
+
+\noindent
+Let us prove that this definition agrees with the definition in the
+chapter on morphisms of schemes. This in particular guarantees that the
+set of points where a morphism is \'etale is open.
+
+\begin{lemma}
+\label{lemma-etale-definition}
+Let $Y$ be a locally Noetherian scheme.
+Let $f : X \to Y$ be locally of finite type.
+Let $x \in X$. The morphism $f$ is \'etale at $x$ in
+the sense of Definition \ref{definition-etale-schemes-1}
+if and only if it is \'etale at $x$ in
+the sense of Morphisms, Definition \ref{morphisms-definition-etale}.
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-characterize-etale-Noetherian}
+and the definitions.
+\end{proof}
+
+\noindent
+Here are some results on \'etale morphisms.
+The formulations as given in this list apply only to
+morphisms locally of finite type between locally Noetherian schemes.
+In each case we give a reference to the general result as
+proved earlier in the project, but in some cases one can
+prove the result more easily in the Noetherian case.
+Here is the list:
+\begin{enumerate}
+\item An \'etale morphism is unramified. (Clear from our definitions.)
+\item \'Etaleness is local on the source and the target in the Zariski
+topology.
+\item \'Etale morphisms are stable under base change and composition.
+See Morphisms, Lemmas \ref{morphisms-lemma-base-change-etale}
+and \ref{morphisms-lemma-composition-etale}.
+\item \'Etale morphisms of schemes are locally quasi-finite
+and quasi-compact \'etale morphisms are quasi-finite. (This is
+true because it holds for unramified morphisms as seen earlier.)
+\item \'Etale morphisms have relative dimension $0$. See
+Morphisms, Definition \ref{morphisms-definition-relative-dimension-d}
+and
+Morphisms, Lemma \ref{morphisms-lemma-locally-quasi-finite-rel-dimension-0}.
+\item A morphism is \'etale if and only if it is flat and
+all its fibres are \'etale. See
+Morphisms, Lemma \ref{morphisms-lemma-etale-flat-etale-fibres}.
+\item \'Etale morphisms are open. This is true because an \'etale
+morphism is flat, and Theorem \ref{theorem-flat-map-open}.
+\item Let $X$ and $Y$ be \'etale over a base scheme $S$.
+Any $S$-morphism from $X$ to $Y$ is \'etale.
+See Morphisms, Lemma \ref{morphisms-lemma-etale-permanence}.
+\end{enumerate}
+
+
+
+
+
+
+\section{The structure theorem}
+\label{section-structure-etale-map}
+
+\noindent
+We present a theorem which describes the local structure of \'etale
+and unramified morphisms. Besides its obvious independent importance,
+this theorem also allows us to make the transition to another
+definition of \'etale morphisms that captures the geometric intuition better
+than the one we've used so far.
+
+\medskip\noindent
+To state it we need the notion of a {\it standard \'etale ring map}, see
+Algebra, Definition \ref{algebra-definition-standard-etale}.
+Namely, suppose that $R$ is a ring and $f, g \in R[t]$ are polynomials
+such that
+\begin{enumerate}
+\item[(a)] $f$ is a monic polynomial, and
+\item[(b)] $f' = \text{d}f/\text{d}t$ is invertible in the localization
+$R[t]_g/(f)$.
+\end{enumerate}
+Then the map
+$$
+R \longrightarrow R[t]_g/(f) = R[t, 1/g]/(f)
+$$
+is a standard \'etale algebra, and any standard \'etale algebra is isomorphic
+to one of these. It is a pleasant exercise to prove that such a ring map
+is flat and unramified, and hence \'etale (as expected of course).
+A special case of a standard \'etale ring map is any ring map
+$$
+R \longrightarrow R[t]_{f'}/(f) = R[t, 1/f']/(f)
+$$
+with $f$ a monic polynomial, and any standard \'etale algebra is (isomorphic to)
+a principal localization of one of these.
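+
+\medskip\noindent
+For example, if $a \in R$ and $2a$ is invertible in $R$, then with
+$f = t^2 - a$ and $g = 1$ the ring map
+$$
+R \longrightarrow R[t]/(t^2 - a)
+$$
+is standard \'etale: the derivative $f' = 2t$ satisfies $(2t)^2 = 4a$
+in $R[t]/(t^2 - a)$, hence is invertible there.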
+
+\begin{theorem}
+\label{theorem-structure-etale}
+Let $f : A \to B$ be an \'etale homomorphism of local rings.
+Then there exist $f, g \in A[t]$ such that
+\begin{enumerate}
+\item $B' = A[t]_g/(f)$ is standard \'etale -- see (a) and (b) above, and
+\item $B$ is isomorphic to a localization of $B'$ at a prime.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Write $B = B'_{\mathfrak q}$ for some finite type $A$-algebra $B'$
+(we can do this because $B$ is essentially of finite type over $A$).
+By Lemma \ref{lemma-characterize-etale-Noetherian}
+we see that $A \to B'$ is \'etale at $\mathfrak q$.
+Hence we may apply
+Algebra, Proposition \ref{algebra-proposition-etale-locally-standard}
+to see that a principal localization of $B'$ is standard \'etale.
+\end{proof}
+
+\noindent
+Here is the version for unramified homomorphisms of local rings.
+
+\begin{theorem}
+\label{theorem-structure-unramified}
Let $A \to B$ be an unramified homomorphism of local rings.
+Then there exist $f, g \in A[t]$ such that
+\begin{enumerate}
+\item $B' = A[t]_g/(f)$ is standard \'etale -- see (a) and (b) above, and
+\item $B$ is isomorphic to a quotient of a localization of $B'$ at a prime.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+Write $B = B'_{\mathfrak q}$ for some finite type $A$-algebra $B'$
+(we can do this because $B$ is essentially of finite type over $A$).
+By Lemma \ref{lemma-characterize-unramified-Noetherian}
+we see that $A \to B'$ is unramified at $\mathfrak q$.
+Hence we may apply
+Algebra, Proposition \ref{algebra-proposition-unramified-locally-standard}
+to see that a principal localization of $B'$ is a quotient of a
+standard \'etale $A$-algebra.
+\end{proof}
+
+\noindent
+Via standard lifting arguments, one then obtains the following geometric
+statement which will be of essential use to us.
+
+\begin{theorem}
+\label{theorem-geometric-structure}
+Let $\varphi : X \to Y$ be a morphism of schemes. Let $x \in X$.
+Let $V \subset Y$ be an affine open neighbourhood of $\varphi(x)$.
If $\varphi$ is \'etale at $x$, then there exists an affine open
+$U \subset X$ with $x \in U$ and $\varphi(U) \subset V$
+such that we have the following diagram
+$$
+\xymatrix{
+X \ar[d] & U \ar[l] \ar[d] \ar[r]_-j & \Spec(R[t]_{f'}/(f)) \ar[d] \\
+Y & V \ar[l] \ar@{=}[r] & \Spec(R)
+}
+$$
+where $j$ is an open immersion, and $f \in R[t]$ is monic.
+\end{theorem}
+
+\begin{proof}
+This is equivalent to
+Morphisms, Lemma \ref{morphisms-lemma-etale-locally-standard-etale}
+although the statements differ slightly.
See also Varieties, Lemma \ref{varieties-lemma-geometric-structure-unramified}
+for a variant for unramified morphisms.
+\end{proof}
+
+
+\section{\'Etale and smooth morphisms}
+\label{section-etale-smooth}
+
+\noindent
+An \'etale morphism is smooth of relative dimension zero.
+The projection $\mathbf{A}^n_S \to S$ is a standard example
+of a smooth morphism of relative dimension $n$.
+It turns out that any smooth morphism is \'etale locally
+of this form. Here is the precise statement.
+
+\begin{theorem}
+\label{theorem-smooth-etale-over-n-space}
+Let $\varphi : X \to Y$ be a morphism of schemes.
+Let $x \in X$.
+If $\varphi$ is smooth at $x$, then
+there exist an integer $n \geq 0$ and affine opens
+$V \subset Y$ and $U \subset X$ with $x \in U$ and $\varphi(U) \subset V$
+such that there exists a commutative diagram
+$$
+\xymatrix{
+X \ar[d] & U \ar[l] \ar[d] \ar[r]_-\pi &
+\mathbf{A}^n_R \ar[d] \ar@{=}[r] & \Spec(R[x_1, \ldots, x_n]) \ar[dl] \\
+Y & V \ar[l] \ar@{=}[r] & \Spec(R)
+}
+$$
+where $\pi$ is \'etale.
+\end{theorem}
+
+\begin{proof}
+See
+Morphisms, Lemma \ref{morphisms-lemma-smooth-etale-over-affine-space}.
+\end{proof}
+
+
+
+
+\section{Topological properties of \'etale morphisms}
+\label{section-topological-etale}
+
+\noindent
+We present a few of the topological properties of \'etale and
+unramified morphisms. First, we give what Grothendieck
+calls the {\it fundamental property of \'etale morphisms}, see
+\cite[Expos\'e I.5]{SGA1}.
+
+\begin{theorem}
+\label{theorem-etale-radicial-open}
+Let $f : X \to Y$ be a morphism of schemes.
+The following are equivalent:
+\begin{enumerate}
+\item $f$ is an open immersion,
+\item $f$ is universally injective and \'etale, and
+\item $f$ is a flat monomorphism, locally of finite presentation.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+An open immersion is universally injective
+since any base change of an open immersion
+is an open immersion. Moreover, it is \'etale by
+Morphisms, Lemma \ref{morphisms-lemma-open-immersion-etale}.
+Hence (1) implies (2).
+
+\medskip\noindent
+Assume $f$ is universally injective and \'etale.
+Since $f$ is \'etale it is flat and locally of finite presentation, see
+Morphisms, Lemmas \ref{morphisms-lemma-etale-flat} and
+\ref{morphisms-lemma-etale-locally-finite-presentation}.
+By
+Lemma \ref{lemma-universally-injective-unramified}
+we see that $f$ is a monomorphism. Hence (2) implies (3).
+
+\medskip\noindent
+Assume $f$ is flat, locally of finite presentation, and a monomorphism.
+Then $f$ is open, see
+Morphisms, Lemma \ref{morphisms-lemma-fppf-open}.
+Thus we may replace $Y$ by $f(X)$ and we may assume $f$ is
+surjective. Then $f$ is open and bijective hence a homeomorphism.
+Hence $f$ is quasi-compact. Hence
+Descent, Lemma
+\ref{descent-lemma-flat-surjective-quasi-compact-monomorphism-isomorphism}
+shows that $f$ is an isomorphism and we win.
+\end{proof}
+
+\noindent
+Here is another result of a similar flavor.
+
+\begin{lemma}
+\label{lemma-finite-etale-one-point}
+Let $\pi : X \to S$ be a morphism of schemes. Let $s \in S$.
+Assume that
+\begin{enumerate}
+\item $\pi$ is finite,
+\item $\pi$ is \'etale,
+\item $\pi^{-1}(\{s\}) = \{x\}$, and
+\item $\kappa(s) \subset \kappa(x)$ is purely
+inseparable\footnote{In view of condition (2)
+this is equivalent to $\kappa(s) = \kappa(x)$.}.
+\end{enumerate}
+Then there exists an open neighbourhood $U$ of $s$ such that
+$\pi|_{\pi^{-1}(U)} : \pi^{-1}(U) \to U$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+By
+Lemma \ref{lemma-finite-unramified-one-point}
+there exists an open neighbourhood $U$ of $s$ such that
+$\pi|_{\pi^{-1}(U)} : \pi^{-1}(U) \to U$ is a closed immersion.
+But a morphism which is \'etale and a closed immersion is an
+open immersion (for example by
+Theorem \ref{theorem-etale-radicial-open}).
+Hence after shrinking $U$ we obtain an isomorphism.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-relative-frobenius-etale}
+Let $U \to X$ be an \'etale morphism of schemes
+where $X$ is a scheme in characteristic $p$.
+Then the relative Frobenius $F_{U/X} : U \to U \times_{X, F_X} X$
+is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+The morphism $F_{U/X}$ is a universal homeomorphism by
+Varieties, Lemma \ref{varieties-lemma-relative-frobenius}.
+The morphism $F_{U/X}$ is \'etale as a
+morphism between schemes \'etale over $X$
+(Morphisms, Lemma \ref{morphisms-lemma-etale-permanence}).
+Hence $F_{U/X}$ is an isomorphism by
+Theorem \ref{theorem-etale-radicial-open}.
+\end{proof}
+
+
+
+
+
+
+\section{Topological invariance of the \'etale topology}
+\label{section-topological-invariance}
+
+\noindent
Next, we present a crucial theorem which, roughly speaking, says
that \'etaleness is a topological property.
+
+\begin{theorem}
+\label{theorem-etale-topological}
+Let $X$ and $Y$ be two schemes over a base scheme $S$. Let $S_0$ be a closed
+subscheme of $S$ with the same underlying topological space
+(for example if the ideal sheaf of $S_0$ in $S$ has square zero).
+Denote $X_0$ (resp.\ $Y_0$) the base change $S_0 \times_S X$
+(resp.\ $S_0 \times_S Y$).
+If $X$ is \'etale over $S$, then the map
+$$
+\Mor_S(Y, X) \longrightarrow \Mor_{S_0}(Y_0, X_0)
+$$
+is bijective.
+\end{theorem}
+
+\begin{proof}
+After base changing via $Y \to S$, we may assume that $Y = S$.
+In this case the theorem states that any $S$-morphism $\sigma_0 : S_0 \to X$
+actually factors uniquely through a section $S \to X$ of the
+\'etale structure morphism $f : X \to S$.
+
+\medskip\noindent
+Uniqueness. Suppose we have two sections $\sigma, \sigma'$
+through which $\sigma_0$ factors. Because $X \to S$ is \'etale
+we see that $\Delta : X \to X \times_S X$ is an open immersion
+(Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism}).
+The morphism $(\sigma, \sigma') : S \to X \times_S X$ factors
through this open subscheme because for any $s \in S$ we have
+$(\sigma, \sigma')(s) = (\sigma_0(s), \sigma_0(s))$. Thus
+$\sigma = \sigma'$.
+
+\medskip\noindent
+To prove existence we first reduce to the affine case
+(we suggest the reader skip this step).
+Let $X = \bigcup X_i$ be an affine open covering such
+that each $X_i$ maps into an affine open $S_i$ of $S$.
+For every $s \in S$ we can choose an $i$ such that
+$\sigma_0(s) \in X_i$.
+Choose an affine open neighbourhood $U \subset S_i$ of $s$
+such that $\sigma_0(U_0) \subset X_{i, 0}$. Note that
+$X' = X_i \times_S U = X_i \times_{S_i} U$ is affine.
+If we can lift $\sigma_0|_{U_0} : U_0 \to X'_0$ to
+$U \to X'$, then by uniqueness these local lifts will glue
+to a global morphism $S \to X$. Thus we may assume $S$ and
+$X$ are affine.
+
+\medskip\noindent
+Existence when $S$ and $X$ are affine. Write $S = \Spec(A)$
+and $X = \Spec(B)$. Then $A \to B$ is \'etale and in particular
+smooth (of relative dimension $0$). As $|S_0| = |S|$ we see
+that $S_0 = \Spec(A/I)$ with $I \subset A$ locally nilpotent.
+Thus existence follows from
+Algebra, Lemma \ref{algebra-lemma-smooth-strong-lift}.
+\end{proof}
+
+\noindent
From the proof of the preceding theorem, we also obtain one direction of the
+promised functorial characterization of \'etale morphisms. The following
+theorem will be strengthened in
+\'Etale Cohomology,
+Theorem \ref{etale-cohomology-theorem-topological-invariance}.
+
\begin{theorem}[Une \'equivalence remarquable de cat\'egories]
+\label{theorem-remarkable-equivalence}
+\begin{reference}
+\cite[IV, Theorem 18.1.2]{EGA}
+\end{reference}
+Let $S$ be a scheme.
+Let $S_0 \subset S$ be a closed subscheme with the same underlying
+topological space (for example if the ideal sheaf of $S_0$ in $S$
+has square zero). The functor
+$$
+X \longmapsto X_0 = S_0 \times_S X
+$$
+defines an equivalence of categories
+$$
+\{
+\text{schemes }X\text{ \'etale over }S
+\}
+\leftrightarrow
+\{
+\text{schemes }X_0\text{ \'etale over }S_0
+\}
+$$
+\end{theorem}
+
+\begin{proof}
+By Theorem \ref{theorem-etale-topological}
+we see that this functor is fully faithful.
+It remains to show that the functor is essentially surjective.
+Let $Y \to S_0$ be an \'etale morphism of schemes.
+
+\medskip\noindent
+Suppose that the result holds if $S$ and $Y$ are affine.
+In that case, we choose an affine open covering
+$Y = \bigcup V_j$ such that each $V_j$ maps
+into an affine open of $S$. By assumption (affine case) we can
+find \'etale morphisms $W_j \to S$ such that $W_{j, 0} \cong V_j$
+(as schemes over $S_0$). Let $W_{j, j'} \subset W_j$
+be the open subscheme whose underlying topological space
+corresponds to $V_j \cap V_{j'}$. Because we have isomorphisms
+$$
+W_{j, j', 0} \cong V_j \cap V_{j'} \cong W_{j', j, 0}
+$$
as schemes over $S_0$ we see by full faithfulness that we
+obtain isomorphisms
+$\theta_{j, j'} : W_{j, j'} \to W_{j', j}$ of schemes over $S$.
+We omit the verification that these isomorphisms satisfy the
+cocycle condition of Schemes, Section \ref{schemes-section-glueing-schemes}.
+Applying Schemes, Lemma \ref{schemes-lemma-glue-schemes}
+we obtain a scheme $X \to S$ by
+glueing the schemes $W_j$ along the identifications $\theta_{j, j'}$.
+It is clear that $X \to S$ is \'etale and $X_0 \cong Y$ by construction.
+
+\medskip\noindent
Thus it suffices to prove the result in the case that $S$ and $Y$ are affine.
+Say $S = \Spec(R)$ and $S_0 = \Spec(R/I)$ with $I$ locally nilpotent.
+By Algebra, Lemma \ref{algebra-lemma-etale-standard-smooth} we know that
+$Y$ is the spectrum of a ring $\overline{A}$ with
+$$
+\overline{A} = (R/I)[x_1, \ldots, x_n]/(\overline{f}_1, \ldots, \overline{f}_n)
+$$
+such that
+$$
+\overline{g} =
+\det
+\left(
+\begin{matrix}
+\partial \overline{f}_1/\partial x_1 &
+\partial \overline{f}_2/\partial x_1 &
+\ldots &
+\partial \overline{f}_n/\partial x_1 \\
+\partial \overline{f}_1/\partial x_2 &
+\partial \overline{f}_2/\partial x_2 &
+\ldots &
+\partial \overline{f}_n/\partial x_2 \\
+\ldots & \ldots & \ldots & \ldots \\
+\partial \overline{f}_1/\partial x_n &
+\partial \overline{f}_2/\partial x_n &
+\ldots &
+\partial \overline{f}_n/\partial x_n
+\end{matrix}
+\right)
+$$
+maps to an invertible element in $\overline{A}$. Choose any lifts
+$f_i \in R[x_1, \ldots, x_n]$. Set
+$$
+A = R[x_1, \ldots, x_n]/(f_1, \ldots, f_n)
+$$
+Since $I$ is locally nilpotent the ideal $IA$ is locally nilpotent
+(Algebra, Lemma \ref{algebra-lemma-locally-nilpotent}).
+Observe that $\overline{A} = A/IA$.
+It follows that the determinant of the matrix of partials of the
+$f_i$ is invertible in the algebra $A$ by
+Algebra, Lemma \ref{algebra-lemma-locally-nilpotent-unit}.
+Hence $R \to A$ is \'etale and the proof is complete.
+\end{proof}
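
\medskip\noindent
Here is the simplest instance of the construction in the proof (a sketch, with notation not taken from the statement): let $S = \Spec(k[\epsilon]/(\epsilon^2))$ and $S_0 = \Spec(k)$ for a field $k$. An affine \'etale $S_0$-scheme of the form $Y = \Spec(k[x]/(\overline{f}))$ with $\overline{f}'$ invertible in $k[x]/(\overline{f})$ lifts as follows: choose any $f \in (k[\epsilon]/(\epsilon^2))[x]$ reducing to $\overline{f}$ and set
$$
X = \Spec\left((k[\epsilon]/(\epsilon^2))[x]/(f)\right).
$$
Since the ideal $(\epsilon)$ is nilpotent, invertibility of $\overline{f}'$ modulo $\epsilon$ forces $f'$ to be invertible in the displayed algebra, so $X \to S$ is \'etale and $X_0 \cong Y$, as the theorem predicts.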
+
+
+
+\section{The functorial characterization}
+\label{section-functorial-etale}
+
+\noindent
+We finally present the promised functorial characterization.
+Thus there are four ways to think about \'etale morphisms of schemes:
+\begin{enumerate}
\item as smooth morphisms of relative dimension $0$,
+\item as locally finitely presented, flat, and unramified morphisms,
+\item using the structure theorem, and
+\item using the functorial characterization.
+\end{enumerate}
+
+\begin{theorem}
+\label{theorem-formally-etale}
+Let $f : X \to S$ be a morphism that is locally of finite presentation.
+The following are equivalent
+\begin{enumerate}
+\item $f$ is \'etale,
+\item for all affine $S$-schemes $Y$, and closed subschemes $Y_0 \subset Y$
+defined by square-zero ideals, the natural map
+$$
+\Mor_S(Y, X) \longrightarrow \Mor_S(Y_0, X)
+$$
+is bijective.
+\end{enumerate}
+\end{theorem}
+
+\begin{proof}
+This is
+More on Morphisms, Lemma \ref{more-morphisms-lemma-etale-formally-etale}.
+\end{proof}
+
+\noindent
+This characterization says that solutions to the equations defining $X$ can
+be lifted uniquely through nilpotent thickenings.
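
\medskip\noindent
To make this concrete, suppose (as a sketch using the structure theorem) that $X = \Spec(R[t]_{f'}/(f))$ is standard \'etale over $S = \Spec(R)$, that $Y = \Spec(C)$, and that $Y_0 = \Spec(C/J)$ with $J^2 = 0$. An $S$-morphism $Y_0 \to X$ is a solution $\overline{a} \in C/J$ of $f(\overline{a}) = 0$ with $f'(\overline{a})$ invertible. Given any lift $a \in C$ of $\overline{a}$, the element $f(a)$ lies in $J$ and $f'(a)$ is invertible (as $J$ is nilpotent), so the Newton step
$$
a' = a - f(a)/f'(a)
$$
satisfies $f(a') = f(a) - f'(a) \cdot f(a)/f'(a) = 0$, because the higher Taylor terms involve $(f(a)/f'(a))^2 \in J^2 = 0$. One checks similarly that $a'$ is the unique solution lifting $\overline{a}$, matching the bijectivity in the theorem.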
+
+
+
+\section{\'Etale local structure of unramified morphisms}
+\label{section-unramified-etale-local}
+
+\noindent
+In the chapter
+More on Morphisms, Section \ref{more-morphisms-section-etale-localization}
+the reader can find some results on the \'etale local structure of
+quasi-finite morphisms. In this section we want to combine this
+with the topological properties of unramified morphisms we have seen
+in this chapter. The basic overall picture to keep in mind is
+$$
+\xymatrix{
+V \ar[r] \ar[dr] & X_U \ar[d] \ar[r] & X \ar[d]^f \\
+& U \ar[r] & S
+}
+$$
+see
+More on Morphisms, Equation (\ref{more-morphisms-equation-basic-diagram}).
+We start with a very general case.
+
+\begin{lemma}
+\label{lemma-unramified-etale-local}
+Let $f : X \to S$ be a morphism of schemes.
+Let $x_1, \ldots, x_n \in X$ be points having the same image $s$ in $S$.
+Assume $f$ is unramified at each $x_i$.
+Then there exists an \'etale neighbourhood $(U, u) \to (S, s)$
+and opens $V_{i, j} \subset X_U$, $i = 1, \ldots, n$, $j = 1, \ldots, m_i$
+such that
+\begin{enumerate}
+\item $V_{i, j} \to U$ is a closed immersion passing through $u$,
+\item $u$ is not in the image of $V_{i, j} \cap V_{i', j'}$ unless
+$i = i'$ and $j = j'$, and
+\item any point of $(X_U)_u$ mapping to $x_i$ is in some $V_{i, j}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+By
+Morphisms, Definition \ref{morphisms-definition-unramified}
+there exists an open neighbourhood of each $x_i$ which is locally of finite
+type over $S$. Replacing $X$ by an open neighbourhood of $\{x_1, \ldots, x_n\}$
+we may assume $f$ is locally of finite type. Apply
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-etale-makes-quasi-finite-finite-multiple-points-var}
+to get the \'etale neighbourhood $(U, u)$ and the opens $V_{i, j}$ finite over
+$U$. By
+Lemma \ref{lemma-finite-unramified-one-point}
+after possibly shrinking $U$ we get that $V_{i, j} \to U$ is a closed
+immersion.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-unramified-etale-local-technical}
+Let $f : X \to S$ be a morphism of schemes.
+Let $x_1, \ldots, x_n \in X$ be points having the same image $s$ in $S$.
+Assume $f$ is separated and $f$ is unramified at each $x_i$.
+Then there exists an \'etale neighbourhood $(U, u) \to (S, s)$
+and a disjoint union decomposition
+$$
+X_U =
+W \amalg \coprod\nolimits_{i, j} V_{i, j}
+$$
+such that
+\begin{enumerate}
+\item $V_{i, j} \to U$ is a closed immersion passing through $u$,
+\item the fibre $W_u$ contains no point mapping to any $x_i$.
+\end{enumerate}
+In particular, if $f^{-1}(\{s\}) = \{x_1, \ldots, x_n\}$, then
+the fibre $W_u$ is empty.
+\end{lemma}
+
+\begin{proof}
+Apply
+Lemma \ref{lemma-unramified-etale-local}.
+We may assume $U$ is affine, so $X_U$ is separated.
+Then $V_{i, j} \to X_U$ is a closed map, see
+Morphisms, Lemma \ref{morphisms-lemma-image-proper-scheme-closed}.
+Suppose $(i, j) \not = (i', j')$.
+Then $V_{i, j} \cap V_{i', j'}$ is closed in $V_{i, j}$ and
+its image in $U$ does not contain $u$.
+Hence after shrinking $U$ we may assume that
+$V_{i, j} \cap V_{i', j'} = \emptyset$. Moreover, $\bigcup V_{i, j}$ is
+a closed and open subscheme of $X_U$ and hence has an open and closed
+complement $W$. This finishes the proof.
+\end{proof}
+
+\noindent
+The following lemma is in some sense much weaker than the preceding one
+but it may be useful to state it explicitly here. It says that a finite
+unramified morphism is \'etale locally on the base a closed immersion.
+
+\begin{lemma}
+\label{lemma-finite-unramified-etale-local}
+Let $f : X \to S$ be a finite unramified morphism of schemes.
+Let $s \in S$.
+There exists an \'etale neighbourhood $(U, u) \to (S, s)$
+and a finite disjoint union decomposition
+$$
+X_U = \coprod\nolimits_j V_j
+$$
+such that each $V_j \to U$ is a closed immersion.
+\end{lemma}
+
+\begin{proof}
+Since $X \to S$ is finite the fibre over $s$ is a finite set
+$\{x_1, \ldots, x_n\}$ of points of $X$. Apply
+Lemma \ref{lemma-unramified-etale-local-technical}
+to this set (a finite morphism is separated, see
+Morphisms, Section \ref{morphisms-section-integral}).
+The image of $W$ in $U$ is a closed
+subset (as $X_U \to U$ is finite, hence proper) which does not
+contain $u$. After removing this from $U$ we see that $W = \emptyset$
+as desired.
+\end{proof}
+
+
+
+
+\section{\'Etale local structure of \'etale morphisms}
+\label{section-etale-local-etale}
+
+\noindent
+This is a bit silly, but perhaps helps form intuition about \'etale
+morphisms. We simply copy over the results of
+Section \ref{section-unramified-etale-local}
+and change ``closed immersion'' into ``isomorphism''.
+
+\begin{lemma}
+\label{lemma-etale-etale-local}
+Let $f : X \to S$ be a morphism of schemes.
+Let $x_1, \ldots, x_n \in X$ be points having the same image $s$ in $S$.
+Assume $f$ is \'etale at each $x_i$.
+Then there exists an \'etale neighbourhood $(U, u) \to (S, s)$
+and opens $V_{i, j} \subset X_U$, $i = 1, \ldots, n$, $j = 1, \ldots, m_i$
+such that
+\begin{enumerate}
+\item $V_{i, j} \to U$ is an isomorphism,
+\item $u$ is not in the image of $V_{i, j} \cap V_{i', j'}$ unless
+$i = i'$ and $j = j'$, and
+\item any point of $(X_U)_u$ mapping to $x_i$ is in some $V_{i, j}$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+An \'etale morphism is unramified, hence we may apply
+Lemma \ref{lemma-unramified-etale-local}.
+Now $V_{i, j} \to U$ is a closed immersion and \'etale.
+Hence it is an open immersion, for example by
+Theorem \ref{theorem-etale-radicial-open}.
+Replace $U$ by the intersection of the images of $V_{i, j} \to U$
+to get the lemma.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-etale-etale-local-technical}
+Let $f : X \to S$ be a morphism of schemes.
+Let $x_1, \ldots, x_n \in X$ be points having the same image $s$ in $S$.
+Assume $f$ is separated and $f$ is \'etale at each $x_i$.
+Then there exists an \'etale neighbourhood $(U, u) \to (S, s)$
+and a finite disjoint union decomposition
+$$
+X_U =
+W \amalg \coprod\nolimits_{i, j} V_{i, j}
+$$
+of schemes such that
+\begin{enumerate}
+\item $V_{i, j} \to U$ is an isomorphism,
+\item the fibre $W_u$ contains no point mapping to any $x_i$.
+\end{enumerate}
+In particular, if $f^{-1}(\{s\}) = \{x_1, \ldots, x_n\}$, then
+the fibre $W_u$ is empty.
+\end{lemma}
+
+\begin{proof}
+An \'etale morphism is unramified, hence we may apply
+Lemma \ref{lemma-unramified-etale-local-technical}.
+As in the proof of
+Lemma \ref{lemma-etale-etale-local}
+the morphisms $V_{i, j} \to U$ are open immersions and
+we win after replacing $U$ by the intersection of their
+images.
+\end{proof}
+
+\noindent
+The following lemma is in some sense much weaker than the preceding one
+but it may be useful to state it explicitly here. It says that a finite
+\'etale morphism is \'etale locally on the base a
+``topological covering space'', i.e., a finite product of copies of the base.
+
+\begin{lemma}
+\label{lemma-finite-etale-etale-local}
+Let $f : X \to S$ be a finite \'etale morphism of schemes.
+Let $s \in S$. There exists an \'etale neighbourhood $(U, u) \to (S, s)$
+and a finite disjoint union decomposition
+$$
+X_U = \coprod\nolimits_j V_j
+$$
+of schemes such that each $V_j \to U$ is an isomorphism.
+\end{lemma}
+
+\begin{proof}
+An \'etale morphism is unramified, hence we may apply
+Lemma \ref{lemma-finite-unramified-etale-local}.
+As in the proof of
+Lemma \ref{lemma-etale-etale-local}
we see that each $V_j \to U$ is an open immersion and we win
+after replacing $U$ by the intersection of their images.
+\end{proof}
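
\medskip\noindent
A concrete illustration (a standard example, not part of the lemma): let $R = \mathbf{C}[x, 1/x]$, let $S = \Spec(R)$, and let $X = \Spec(A)$ with $A = R[t]/(t^2 - x)$. Then $X \to S$ is finite \'etale of degree $2$ and nontrivial on every Zariski open of $S$ (no localization of $R$ contains a square root of $x$). However, after the \'etale base change along $X \to S$ itself we get
$$
A \otimes_R A = A[u]/(u^2 - x) = A[u]/((u - t)(u + t)) \cong A \times A
$$
where the last isomorphism holds because $(u - t)$ and $(u + t)$ generate the unit ideal, their difference $2t$ being invertible in $A$. Thus the covering becomes trivial \'etale locally on the base, as the lemma predicts.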
+
+
+
+
+\section{Permanence properties}
+\label{section-properties-permanence}
+
+\noindent
+In what follows, we present a few ``permanence''
+properties of \'etale homomorphisms of Noetherian local rings
+(as defined in Definition \ref{definition-etale-ring}). See
+More on Algebra, Sections \ref{more-algebra-section-permanence-completion} and
+\ref{more-algebra-section-permanence-henselization}
+for the analogue of this material for the completion and
+henselization of a Noetherian local ring.
+
+\begin{lemma}
+\label{lemma-etale-dimension}
+Let $A$, $B$ be Noetherian local rings.
Let $A \to B$ be an \'etale homomorphism of local rings.
+Then $\dim(A) = \dim(B)$.
+\end{lemma}
+
+\begin{proof}
+See for example
+Algebra, Lemma \ref{algebra-lemma-dimension-base-fibre-equals-total}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-etale-depth}
+Let $A$, $B$ be Noetherian local rings.
+Let $f : A \to B$ be an \'etale homomorphism of local rings.
Then $\text{depth}(A) = \text{depth}(B)$.
+\end{proposition}
+
+\begin{proof}
+See Algebra, Lemma \ref{algebra-lemma-apply-grothendieck}.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-etale-CM}
+\begin{slogan}
+Being Cohen-Macaulay ascends and descends along \'etale maps.
+\end{slogan}
+Let $A$, $B$ be Noetherian local rings.
+Let $f : A \to B$ be an \'etale homomorphism of local rings.
+Then $A$ is Cohen-Macaulay if and only if $B$ is so.
+\end{proposition}
+
+\begin{proof}
+A local ring $A$ is Cohen-Macaulay if and only if $\dim(A) = \text{depth}(A)$.
As both of these invariants are preserved under an \'etale extension,
+the claim follows.
+\end{proof}
+
+\begin{proposition}
+\label{proposition-etale-regular}
+Let $A$, $B$ be Noetherian local rings.
+Let $f : A \to B$ be an \'etale homomorphism of local rings.
+Then $A$ is regular if and only if $B$ is so.
+\end{proposition}
+
+\begin{proof}
+If $B$ is regular, then $A$ is regular by
+Algebra, Lemma \ref{algebra-lemma-flat-under-regular}.
+Assume $A$ is regular. Let $\mathfrak m$ be the maximal ideal
+of $A$. Then $\dim_{\kappa(\mathfrak m)} \mathfrak m/\mathfrak m^2 =
+\dim(A) = \dim(B)$ (see Lemma \ref{lemma-etale-dimension}).
+On the other hand, $\mathfrak mB$ is the maximal ideal of
$B$ and hence $\mathfrak m_B/\mathfrak m_B^2 = \mathfrak m B/\mathfrak m^2 B$
+is generated by at most $\dim(B)$ elements. Thus $B$ is regular.
+(You can also use the slightly more general
+Algebra, Lemma \ref{algebra-lemma-flat-over-regular-with-regular-fibre}.)
+\end{proof}
+
+
+\begin{proposition}
+\label{proposition-etale-reduced}
+Let $A$, $B$ be Noetherian local rings.
+Let $f : A \to B$ be an \'etale homomorphism of local rings.
+Then $A$ is reduced if and only if $B$ is so.
+\end{proposition}
+
+\begin{proof}
+It is clear from the faithful flatness of $A \to B$ that if $B$ is reduced, so
+is $A$. See also Algebra, Lemma \ref{algebra-lemma-descent-reduced}.
+Conversely, assume $A$ is reduced. By assumption $B$ is a localization
+of a finite type $A$-algebra $B'$ at some prime $\mathfrak q$.
+After replacing $B'$ by a localization we may assume that $B'$
+is \'etale over $A$, see Lemma \ref{lemma-characterize-etale-Noetherian}.
+Then we see that Algebra, Lemma \ref{algebra-lemma-reduced-goes-up} applies to
+$A \to B'$ and $B'$ is reduced. Hence $B$ is reduced.
+\end{proof}
+
+\begin{remark}
+\label{remark-technicality-needed}
+The result on ``reducedness'' does not hold with a weaker
+definition of \'etale local ring maps $A \to B$ where one
+drops the assumption that $B$ is essentially of finite type over $A$.
+Namely, it can happen that a Noetherian local domain $A$ has nonreduced
+completion $A^\wedge$, see
+Examples, Section \ref{examples-section-local-completion-nonreduced}.
+But the ring map $A \to A^\wedge$ is flat, and $\mathfrak m_AA^\wedge$
+is the maximal ideal of $A^\wedge$ and of course $A$ and $A^\wedge$ have
+the same residue fields. This is why it is important to consider
+this notion only for ring extensions which are essentially of finite type
+(or essentially of finite presentation if $A$ is not Noetherian).
+\end{remark}
+
+\begin{proposition}
+\label{proposition-etale-normal}
+\begin{reference}
\cite[Expos\'e I, Theorem 9.5 part (i)]{SGA1}
+\end{reference}
+Let $A$, $B$ be Noetherian local rings.
+Let $f : A \to B$ be an \'etale homomorphism of local rings.
+Then $A$ is a normal domain if and only if $B$ is so.
+\end{proposition}
+
+\begin{proof}
+See
+Algebra, Lemma \ref{algebra-lemma-descent-normal}
+for descending normality. Conversely, assume $A$ is normal.
+By assumption $B$ is a localization of a finite type $A$-algebra
+$B'$ at some prime $\mathfrak q$. After replacing $B'$ by a localization
+we may assume that $B'$ is \'etale over $A$, see
+Lemma \ref{lemma-characterize-etale-Noetherian}.
+Then we see that
+Algebra, Lemma \ref{algebra-lemma-normal-goes-up}
+applies to $A \to B'$ and we conclude that $B'$ is normal.
+Hence $B$ is a normal domain.
+\end{proof}
+
+\noindent
The preceding propositions give some indication as to why we'd like to think
+of \'etale maps as ``local isomorphisms''. Another property that gives an
+excellent indication that we have the ``right'' definition is the fact that
+for $\mathbf{C}$-schemes of finite type, a morphism is \'etale if and only if
+the associated morphism on analytic spaces (the $\mathbf{C}$-valued points given
+the complex topology) is a local isomorphism in the analytic sense (open
+embedding locally on the source). This fact can be proven with the aid of the
+structure theorem and the fact that the analytification commutes with the
+formation of the completed local rings -- the details are left to the reader.
+
+
+
+
+
+
+
+
+\section{Descending \'etale morphisms}
+\label{section-descending-etale}
+
+\noindent
+In order to understand the language used in this section we encourage
+the reader to take a look at
+Descent, Section \ref{descent-section-descent-datum}.
+Let $f : X \to S$ be a morphism of schemes. Consider the
+pullback functor
+\begin{equation}
+\label{equation-descent-etale}
+\text{schemes }U\text{ \'etale over }S \longrightarrow
+\begin{matrix}
+\text{descent data }(V, \varphi)\text{ relative to }X/S \\
+\text{ with }V\text{ \'etale over }X
+\end{matrix}
+\end{equation}
+sending $U$ to the canonical descent datum $(X \times_S U, can)$.
+
+\begin{lemma}
+\label{lemma-faithful}
+If $f : X \to S$ is surjective, then the functor
+(\ref{equation-descent-etale}) is faithful.
+\end{lemma}
+
+\begin{proof}
+Let $a, b : U_1 \to U_2$ be two morphisms between schemes \'etale over $S$.
+Assume the base changes of $a$ and $b$ to $X$ agree.
+We have to show that $a = b$.
+By Proposition \ref{proposition-equality} it suffices to
+show that $a$ and $b$ agree on points and residue fields.
+This is clear because for every $u \in U_1$ we can find a point
+$v \in X \times_S U_1$ mapping to $u$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful}
+Assume $f : X \to S$ is submersive and any \'etale base change
+of $f$ is submersive. Then the functor
+(\ref{equation-descent-etale}) is fully faithful.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-faithful} the functor is faithful.
+Let $U_1 \to S$ and $U_2 \to S$ be \'etale morphisms
+and let $a : X \times_S U_1 \to X \times_S U_2$ be a
+morphism compatible with canonical descent data.
+We will prove that $a$ is the base change of a morphism $U_1 \to U_2$.
+
+\medskip\noindent
+Let $U'_2 \subset U_2$ be an open subscheme. Consider
+$W = a^{-1}(X \times_S U'_2)$. This is an open subscheme
+of $X \times_S U_1$ which is compatible with the canonical
+descent datum on $V_1 = X \times_S U_1$. This means that the
+two inverse images of $W$ by the projections
+$V_1 \times_{U_1} V_1 \to V_1$ agree. Since $V_1 \to U_1$
+is surjective (as the base change of $X \to S$) we conclude
+that $W$ is the inverse image of some subset $U'_1 \subset U_1$.
+Since $W$ is open, our assumption on $f$ implies that $U'_1 \subset U_1$
+is open.
+
+\medskip\noindent
+Let $U_2 = \bigcup U_{2, i}$ be an affine open covering.
+By the result of the preceding paragraph we obtain an open
+covering $U_1 = \bigcup U_{1, i}$ such that
+$X \times_S U_{1, i} = a^{-1}(X \times_S U_{2, i})$.
+If we can prove there exists a morphism $U_{1, i} \to U_{2, i}$
+whose base change is the morphism
+$a_i : X \times_S U_{1, i} \to X \times_S U_{2, i}$
+then we can glue these morphisms to a morphism $U_1 \to U_2$
+(using faithfulness). In this way we reduce to the case that
+$U_2$ is affine. In particular $U_2 \to S$ is separated
+(Schemes, Lemma \ref{schemes-lemma-compose-after-separated}).
+
+\medskip\noindent
+Assume $U_2 \to S$ is separated. Then the graph $\Gamma_a$ of $a$
+is a closed subscheme of
+$$
+V = (X \times_S U_1) \times_X (X \times_S U_2) = X \times_S U_1 \times_S U_2
+$$
+by Schemes, Lemma \ref{schemes-lemma-semi-diagonal}.
+On the other hand the graph is open for example
+because it is a section of an \'etale morphism
+(Proposition \ref{proposition-properties-sections}).
+Since $a$ is a morphism of descent data, the two inverse images of
+$\Gamma_a \subset V$ under the projections
+$V \times_{U_1 \times_S U_2} V \to V$ are the same.
+Hence arguing as in the second paragraph of the proof we
+find an open and closed subscheme $\Gamma \subset U_1 \times_S U_2$
+whose base change to $X$ gives $\Gamma_a$. Then
+$\Gamma \to U_1$ is an \'etale morphism whose base change
+to $X$ is an isomorphism. This means that $\Gamma \to U_1$
+is universally bijective, hence an isomorphism
+by Theorem \ref{theorem-etale-radicial-open}.
+Thus $\Gamma$ is the graph of a morphism $U_1 \to U_2$
+and the base change of this morphism is $a$ as desired.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-fully-faithful-cases}
+Let $f : X \to S$ be a morphism of schemes. In the following
+cases the functor (\ref{equation-descent-etale}) is fully faithful:
+\begin{enumerate}
+\item $f$ is surjective and universally closed
+(e.g., finite, integral, or proper),
+\item $f$ is surjective and universally open
(e.g., locally of finite presentation and flat, smooth, or \'etale),
+\item $f$ is surjective, quasi-compact, and flat.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+This follows from Lemma \ref{lemma-fully-faithful}.
+For example a closed surjective map of topological spaces
+is submersive (Topology, Lemma
+\ref{topology-lemma-closed-morphism-quotient-topology}).
+Finite, integral, and proper morphisms are universally closed, see
+Morphisms, Lemmas \ref{morphisms-lemma-integral-universally-closed} and
+\ref{morphisms-lemma-finite-proper} and
+Definition \ref{morphisms-definition-proper}.
+On the other hand an open surjective map of topological spaces
+is submersive (Topology, Lemma
+\ref{topology-lemma-open-morphism-quotient-topology}).
+Flat locally finitely presented, smooth, and \'etale morphisms are
+universally open, see
+Morphisms, Lemmas \ref{morphisms-lemma-fppf-open},
+\ref{morphisms-lemma-smooth-open}, and
+\ref{morphisms-lemma-etale-open}.
+The case of surjective, quasi-compact, flat morphisms follows
+from Morphisms, Lemma \ref{morphisms-lemma-fpqc-quotient-topology}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-reduce-to-affine}
+Let $f : X \to S$ be a morphism of schemes.
+Let $(V, \varphi)$ be a descent datum relative to $X/S$
+with $V \to X$ \'etale. Let $S = \bigcup S_i$ be an
+open covering. Assume that
+\begin{enumerate}
+\item the pullback of the descent datum $(V, \varphi)$
+to $X \times_S S_i/S_i$ is effective,
+\item the functor (\ref{equation-descent-etale})
+for $X \times_S (S_i \cap S_j) \to (S_i \cap S_j)$ is fully faithful, and
+\item the functor (\ref{equation-descent-etale})
+for $X \times_S (S_i \cap S_j \cap S_k) \to (S_i \cap S_j \cap S_k)$
+is faithful.
+\end{enumerate}
+Then $(V, \varphi)$ is effective.
+\end{lemma}
+
+\begin{proof}
+(Recall that pullbacks of descent data are defined in
+Descent, Definition \ref{descent-definition-pullback-functor}.)
+Set $X_i = X \times_S S_i$. Denote $(V_i, \varphi_i)$ the pullback
+of $(V, \varphi)$ to $X_i/S_i$.
+By assumption (1) we can find an \'etale morphism $U_i \to S_i$
+which comes with an isomorphism $X_i \times_{S_i} U_i \to V_i$ compatible with
+$can$ and $\varphi_i$. By assumption (2) we obtain isomorphisms
+$\psi_{ij} : U_i \times_{S_i} (S_i \cap S_j) \to
+U_j \times_{S_j} (S_i \cap S_j)$.
+By assumption (3) these isomorphisms satisfy the cocycle condition
so that $(U_i, \psi_{ij})$ is a descent datum for the
+Zariski covering $\{S_i \to S\}$. Then Descent, Lemma
+\ref{descent-lemma-Zariski-refinement-coverings-equivalence}
+(which is essentially just a reformulation of
+Schemes, Section \ref{schemes-section-glueing-schemes})
+tells us that there exists a morphism of schemes $U \to S$
+and isomorphisms $U \times_S S_i \to U_i$ compatible
+with $\psi_{ij}$. The isomorphisms $U \times_S S_i \to U_i$
+determine corresponding isomorphisms $X_i \times_S U \to V_i$
+which glue to a morphism $X \times_S U \to V$ compatible
+with the canonical descent datum and $\varphi$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-split-henselian}
+Let $(A, I)$ be a henselian pair. Let $U \to \Spec(A)$ be a
+quasi-compact, separated, \'etale morphism such that
+$U \times_{\Spec(A)} \Spec(A/I) \to \Spec(A/I)$ is finite.
+Then
+$$
+U = U_{fin} \amalg U_{away}
+$$
where $U_{fin} \to \Spec(A)$ is finite and $U_{away}$ has
no points lying over $V(I)$.
+\end{lemma}
+
+\begin{proof}
+By Zariski's main theorem, the scheme $U$ is quasi-affine.
+In fact, we can find an open immersion $U \to T$ with $T$ affine and
+$T \to \Spec(A)$ finite, see More on Morphisms, Lemma
+\ref{more-morphisms-lemma-quasi-finite-separated-pass-through-finite}.
+Write $Z = \Spec(A/I)$ and denote $U_Z \to T_Z$ the base change.
+Since $U_Z \to Z$ is finite, we see that $U_Z \to T_Z$ is closed
+as well as open. Hence by
+More on Algebra, Lemma \ref{more-algebra-lemma-characterize-henselian-pair}
+we obtain a unique decomposition $T = T' \amalg T''$ with $T'_Z = U_Z$.
+Set $U_{fin} = U \cap T'$ and $U_{away} = U \cap T''$. Since
+$T'_Z \subset U_Z$ we see that all closed points of $T'$ are in $U$
+hence $T' \subset U$, hence $U_{fin} = T'$, hence $U_{fin} \to \Spec(A)$
+is finite. We omit the proof
+of uniqueness of the decomposition.
+\end{proof}
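
\medskip\noindent
For a concrete illustration of Lemma \ref{lemma-split-henselian}
(this example is not used in what follows), take the henselian pair
$(A, I) = (\mathbf{Z}_p, p\mathbf{Z}_p)$ and
$$
U = \Spec(\mathbf{Z}_p[x]/(x^2 - x)) \amalg \Spec(\mathbf{Q}_p)
$$
Both pieces are quasi-compact, separated, and \'etale over
$\Spec(\mathbf{Z}_p)$: the first because $2x - 1$ is a unit in
$\mathbf{Z}_p[x]/(x^2 - x)$ (its square is $1$ modulo $x^2 - x$),
and the second because $\Spec(\mathbf{Q}_p)$ is the open complement
of the closed point. The closed fibre
$\Spec(\mathbf{F}_p[x]/(x^2 - x))$ is finite over $\Spec(\mathbf{F}_p)$
and the decomposition of the lemma is
$$
U_{fin} = \Spec(\mathbf{Z}_p[x]/(x^2 - x)), \quad
U_{away} = \Spec(\mathbf{Q}_p)
$$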
+
+\begin{proposition}
+\label{proposition-effective}
+Let $f : X \to S$ be a surjective integral morphism.
+The functor (\ref{equation-descent-etale}) induces an equivalence
+$$
+\begin{matrix}
+\text{schemes quasi-compact,}\\
+\text{separated, \'etale over }S
+\end{matrix}
+\longrightarrow
+\begin{matrix}
+\text{descent data }(V, \varphi)\text{ relative to }X/S\text{ with}\\
+V\text{ quasi-compact, separated, \'etale over }X
+\end{matrix}
+$$
+\end{proposition}
+
+\begin{proof}
+By Lemma \ref{lemma-fully-faithful-cases} the
+functor (\ref{equation-descent-etale})
is fully faithful and the same remains true after any
base change $S' \to S$. Let $(V, \varphi)$ be a descent datum
relative to $X/S$ with $V \to X$ quasi-compact, separated, and \'etale.
+We can use Lemma \ref{lemma-reduce-to-affine}
+to see that it suffices to prove the effectivity
+Zariski locally on $S$. In particular we may and do
+assume that $S$ is affine.
+
+\medskip\noindent
+If $S$ is affine we can find a directed set $\Lambda$ and
+an inverse system $X_\lambda \to S_\lambda$
+of finite morphisms of affine schemes of finite type over
+$\Spec(\mathbf{Z})$ such that $(X \to S) = \lim (X_\lambda \to S_\lambda)$.
+See Algebra, Lemma \ref{algebra-lemma-limit-integral}.
+Since limits commute with limits we deduce that
+$X \times_S X = \lim X_\lambda \times_{S_\lambda} X_\lambda$
+and
+$X \times_S X \times_S X = \lim
+X_\lambda \times_{S_\lambda} X_\lambda \times_{S_\lambda} X_\lambda$.
+Observe that $V \to X$ is a morphism of finite presentation.
Using Limits, Lemma \ref{limits-lemma-descend-finite-presentation}
we can find a $\lambda$ and a descent datum $(V_\lambda, \varphi_\lambda)$
+relative to $X_\lambda/S_\lambda$ whose pullback to $X/S$ is
+$(V, \varphi)$. Of course it is enough to show that
+$(V_\lambda, \varphi_\lambda)$ is effective. Note that $V_\lambda$
+is quasi-compact by construction.
+After possibly increasing $\lambda$ we may assume
+that $V_\lambda \to X_\lambda$ is separated and \'etale, see
Limits, Lemmas \ref{limits-lemma-descend-separated-finite-presentation} and
+\ref{limits-lemma-descend-etale}.
+Thus we may assume that $f$ is finite surjective and
+$S$ affine of finite type over $\mathbf{Z}$.
+
+\medskip\noindent
+Consider an open $S' \subset S$ such that the pullback $(V', \varphi')$
+of $(V, \varphi)$ to $X' = X \times_S S'$ is effective. Below we will
prove that $S' \not = S$ implies there is a strictly larger open over
+which the descent datum is effective. Since $S$ is Noetherian (and hence
+has a Noetherian underlying topological space) this will finish the proof.
+Let $\xi \in S$ be a generic point of an irreducible component of the
+closed subset $Z = S \setminus S'$.
+If $\xi \in S'' \subset S$ is an open over which the descent datum is
+effective, then the descent datum is effective over
+$S' \cup S''$ by the glueing argument of the first paragraph. Thus
+in the rest of the proof we may replace $S$ by an affine open
+neighbourhood of $\xi$.
+
+\medskip\noindent
After a first such replacement we may assume that $Z$ is irreducible
with generic point $\xi$. Let us endow $Z$ with the reduced induced
+closed subscheme structure. After another shrinking we may assume
+$X_Z = X \times_S Z = f^{-1}(Z) \to Z$ is flat, see
+Morphisms, Proposition \ref{morphisms-proposition-generic-flatness}.
+Let $(V_Z, \varphi_Z)$ be the pullback of the descent datum to $X_Z/Z$.
+By More on Morphisms, Lemma
+\ref{more-morphisms-lemma-separated-locally-quasi-finite-morphisms-fppf-descend}
+this descent datum is effective and we obtain an \'etale morphism
+$U_Z \to Z$ whose base change is isomorphic to $V_Z$ in a manner
+compatible with descent data.
+Of course $U_Z \to Z$ is quasi-compact and separated
+(Descent, Lemmas \ref{descent-lemma-descending-property-quasi-compact} and
+\ref{descent-lemma-descending-property-separated}).
+Thus after shrinking once more we may assume
+that $U_Z \to Z$ is finite, see
+Morphisms, Lemma \ref{morphisms-lemma-generically-finite}.
+
+\medskip\noindent
+Let $S = \Spec(A)$ and let $I \subset A$ be the prime ideal corresponding
+to $Z \subset S$. Let $(A^h, IA^h)$ be the henselization of the pair
+$(A, I)$. Denote $S^h = \Spec(A^h)$ and $Z^h = V(IA^h) \cong Z$.
+We claim that it suffices to show effectivity after base change to
+$S^h$. Namely, $\{S^h \to S, S' \to S\}$ is an fpqc covering
+($A \to A^h$ is flat by More on Algebra, Lemma
+\ref{more-algebra-lemma-henselization-flat}) and
+by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-separated-locally-quasi-finite-morphisms-fppf-descend}
+we have fpqc descent for separated \'etale morphisms.
+Namely, if $U^h \to S^h$ and $U' \to S'$ are the objects
+corresponding to the pullbacks $(V^h, \varphi^h)$ and
+$(V', \varphi')$, then the required isomorphisms
+$$
U^h \times_S S^h \to S^h \times_S U^h
+\quad\text{and}\quad
+U^h \times_S S' \to S^h \times_S U'
+$$
are obtained by the full faithfulness pointed out in the first
+paragraph. In this way we reduce to the situation described in
+the next paragraph.
+
+\medskip\noindent
+Here $S = \Spec(A)$, $Z = V(I)$, $S' = S \setminus Z$ where
+$(A, I)$ is a henselian pair, we have $U' \to S'$ corresponding
+to the descent datum $(V', \varphi')$ and we have a finite \'etale
+morphism $U_Z \to Z$ corresponding to the descent datum
+$(V_Z, \varphi_Z)$. We no longer have that $A$ is of finite type
+over $\mathbf{Z}$; but the rest of the argument will not even use
+that $A$ is Noetherian.
+By More on Algebra, Lemma \ref{more-algebra-lemma-finite-etale-equivalence}
+we can find a finite \'etale morphism $U_{fin} \to S$ whose
+restriction to $Z$ is isomorphic to $U_Z \to Z$.
+Write $X = \Spec(B)$ and $Y = V(IB)$. Since $(B, IB)$ is a henselian pair
+(More on Algebra, Lemma \ref{more-algebra-lemma-integral-over-henselian-pair})
and since the restriction of $V \to X$ to $Y$
is finite (as a base change of $U_Z \to Z$) we see that
+there is a canonical disjoint union decomposition
+$$
+V = V_{fin} \amalg V_{away}
+$$
where $V_{fin} \to X$ is finite and where $V_{away}$ has no
+points lying over $Y$. See Lemma \ref{lemma-split-henselian}.
+Using the uniqueness of this decomposition over $X \times_S X$
+we see that $\varphi$ preserves it and we obtain
+$$
+(V, \varphi) = (V_{fin}, \varphi_{fin}) \amalg (V_{away}, \varphi_{away})
+$$
+in the category of descent data.
+By More on Algebra, Lemma \ref{more-algebra-lemma-finite-etale-equivalence}
+there is a unique isomorphism
+$$
+X \times_S U_{fin} \longrightarrow V_{fin}
+$$
+compatible with the given isomorphism $Y \times_Z U_Z \to V \times_X Y$
+over $Y$.
+By the uniqueness we see that this isomorphism is compatible
+with descent data, i.e.,
+$(X \times_S U_{fin}, can) \cong (V_{fin}, \varphi_{fin})$.
Denote $U'_{fin} = U_{fin} \times_S S'$. By full faithfulness
+we obtain a morphism $U'_{fin} \to U'$ which is
+the inclusion of an open (and closed) subscheme.
+Then we set $U = U_{fin} \amalg_{U'_{fin}} U'$ (glueing of schemes as
+in Schemes, Section \ref{schemes-section-glueing-schemes}).
+The morphisms $X \times_S U_{fin} \to V$ and
+$X \times_S U' \to V$ glue to a morphism $X \times_S U \to V$
+which is the desired isomorphism.
+\end{proof}
+
+
+
+
+
+\section{Normal crossings divisors}
+\label{section-normal-crossings}
+
+\noindent
+Here is the definition.
+
+\begin{definition}
+\label{definition-strict-normal-crossings}
+Let $X$ be a locally Noetherian scheme. A
+{\it strict normal crossings divisor}
+on $X$ is an effective Cartier divisor $D \subset X$ such that
+for every $p \in D$ the local ring $\mathcal{O}_{X, p}$ is regular
+and there exists a regular system of parameters
+$x_1, \ldots, x_d \in \mathfrak m_p$ and $1 \leq r \leq d$
+such that $D$ is cut out by $x_1 \ldots x_r$ in $\mathcal{O}_{X, p}$.
+\end{definition}
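
\medskip\noindent
The basic example to keep in mind is a union of coordinate hyperplanes:
for a field $k$ and $1 \leq r \leq d$ the effective Cartier divisor
$$
D = V(x_1 \ldots x_r) \subset \mathbf{A}^d_k = \Spec(k[x_1, \ldots, x_d])
$$
is a strict normal crossings divisor. Namely, for $p \in D$ the local ring
$\mathcal{O}_{X, p}$ is regular, the coordinates $x_i$, $i \leq r$
which vanish at $p$ form part of a regular system of parameters at $p$,
and $D$ is cut out at $p$ by their product, the remaining factors
being units in $\mathcal{O}_{X, p}$.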
+
+\noindent
+We often encounter effective Cartier divisors $E$ on locally Noetherian
+schemes $X$ such that there exists a strict normal crossings divisor $D$
+with $E \subset D$ set theoretically.
+In this case we have
+$E = \sum a_i D_i$ with $a_i \geq 0$ where $D = \bigcup_{i \in I} D_i$
+is the decomposition of $D$ into its irreducible components.
+Observe that $D' = \bigcup_{a_i > 0} D_i$ is a strict normal crossings
+divisor with $E = D'$ set theoretically.
+When the above happens we will say that
+$E$ is {\it supported on a strict normal crossings divisor}.
+
+\begin{lemma}
+\label{lemma-strict-normal-crossings}
+Let $X$ be a locally Noetherian scheme. Let $D \subset X$ be an
+effective Cartier divisor. Let $D_i \subset D$, $i \in I$ be its
+irreducible components viewed as reduced closed subschemes of $X$.
+The following are equivalent
+\begin{enumerate}
+\item $D$ is a strict normal crossings divisor, and
+\item $D$ is reduced, each $D_i$ is an effective Cartier divisor, and
+for $J \subset I$ finite the scheme theoretic
+intersection $D_J = \bigcap_{j \in J} D_j$ is a
+regular scheme each of whose irreducible components has
+codimension $|J|$ in $X$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Assume $D$ is a strict normal crossings divisor. Pick $p \in D$
+and choose a regular system of parameters $x_1, \ldots, x_d \in \mathfrak m_p$
+and $1 \leq r \leq d$ as in
+Definition \ref{definition-strict-normal-crossings}.
+Since $\mathcal{O}_{X, p}/(x_i)$ is a regular local ring
+(and in particular a domain) we see that the irreducible components
+$D_1, \ldots, D_r$ of $D$ passing through $p$ correspond $1$-to-$1$
+to the height one primes $(x_1), \ldots, (x_r)$ of $\mathcal{O}_{X, p}$.
+By Algebra, Lemma \ref{algebra-lemma-regular-ring-CM}
+we find that the intersections $D_{i_1} \cap \ldots \cap D_{i_s}$
+have codimension $s$ in an open neighbourhood of $p$
+and that this intersection has a regular local ring at $p$.
+Since this holds for all $p \in D$ we conclude that (2) holds.
+
+\medskip\noindent
+Assume (2). Let $p \in D$. Since $\mathcal{O}_{X, p}$ is finite
+dimensional we see that $p$ can be contained in at most
+$\dim(\mathcal{O}_{X, p})$ of the components $D_i$.
+Say $p \in D_1, \ldots, D_r$ for some $r \geq 1$.
+Let $x_1, \ldots, x_r \in \mathfrak m_p$ be local equations
+for $D_1, \ldots, D_r$. Then $x_1$ is a nonzerodivisor in $\mathcal{O}_{X, p}$
+and $\mathcal{O}_{X, p}/(x_1) = \mathcal{O}_{D_1, p}$ is regular.
+Hence $\mathcal{O}_{X, p}$ is regular, see
+Algebra, Lemma \ref{algebra-lemma-regular-mod-x}.
+Since $D_1 \cap \ldots \cap D_r$ is a regular (hence normal) scheme
+it is a disjoint union of its irreducible components
+(Properties, Lemma \ref{properties-lemma-normal-Noetherian}).
+Let $Z \subset D_1 \cap \ldots \cap D_r$
+be the irreducible component containing $p$.
+Then $\mathcal{O}_{Z, p} = \mathcal{O}_{X, p}/(x_1, \ldots, x_r)$
+is regular of codimension $r$ (note that since we already know
+that $\mathcal{O}_{X, p}$ is regular and hence Cohen-Macaulay,
+there is no ambiguity about codimension as the ring is catenary, see
+Algebra, Lemmas \ref{algebra-lemma-regular-ring-CM} and
+\ref{algebra-lemma-CM-dim-formula}).
+Hence $\dim(\mathcal{O}_{Z, p}) = \dim(\mathcal{O}_{X, p}) - r$.
+Choose additional $x_{r + 1}, \ldots, x_n \in \mathfrak m_p$
+which map to a minimal system of generators of $\mathfrak m_{Z, p}$.
+Then $\mathfrak m_p = (x_1, \ldots, x_n)$ by Nakayama's lemma
and we see that $D$ is a strict normal crossings divisor.
+\end{proof}
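
\medskip\noindent
The codimension condition in part (2) cannot be dropped. For example,
let $D \subset \mathbf{A}^2_k = \Spec(k[x, y])$ be the union of three
distinct lines through the origin
$$
D = V(xy(x + y))
$$
Then $D$ is reduced, each line is a regular effective Cartier divisor,
and the pairwise intersections are regular of codimension $2$. However,
for $J$ consisting of all three lines the scheme theoretic intersection
$D_J$ is the reduced origin, which has codimension $2 < |J| = 3$.
Correspondingly, $D$ is not a strict normal crossings divisor: at the
origin the local ring has dimension $2$, so $D$ cannot be cut out there
by a product of $3$ elements of a regular system of parameters.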
+
+\begin{lemma}
+\label{lemma-smooth-pullback-strict-normal-crossings}
+\begin{slogan}
+Pullback of a strict normal crossings divisor by a smooth
+morphism is a strict normal crossings divisor.
+\end{slogan}
+Let $X$ be a locally Noetherian scheme. Let $D \subset X$ be a
+strict normal crossings divisor. If $f : Y \to X$ is a smooth
+morphism of schemes, then the pullback $f^*D$ is a
+strict normal crossings divisor on $Y$.
+\end{lemma}
+
+\begin{proof}
+As $f$ is flat the pullback is defined by
+Divisors, Lemma \ref{divisors-lemma-pullback-effective-Cartier-defined}
+hence the statement makes sense.
+Let $q \in f^*D$ map to $p \in D$. Choose a regular system
+of parameters $x_1, \ldots, x_d \in \mathfrak m_p$
+and $1 \leq r \leq d$ as in
+Definition \ref{definition-strict-normal-crossings}.
+Since $f$ is smooth the local ring homomorphism
+$\mathcal{O}_{X, p} \to \mathcal{O}_{Y, q}$ is flat
+and the fibre ring
+$$
+\mathcal{O}_{Y, q}/\mathfrak m_p \mathcal{O}_{Y, q} =
+\mathcal{O}_{Y_p, q}
+$$
+is a regular local ring (see for example
+Algebra, Lemma \ref{algebra-lemma-characterize-smooth-over-field}).
+Pick $y_1, \ldots, y_n \in \mathfrak m_q$ which map to a regular
+system of parameters in $\mathcal{O}_{Y_p, q}$.
+Then $x_1, \ldots, x_d, y_1, \ldots, y_n$ generate the
+maximal ideal $\mathfrak m_q$. Hence $\mathcal{O}_{Y, q}$
+is a regular local ring of dimension
+$d + n$ by Algebra, Lemma \ref{algebra-lemma-dimension-base-fibre-equals-total}
+and $x_1, \ldots, x_d, y_1, \ldots, y_n$
+is a regular system of parameters. Since $f^*D$ is cut
+out by $x_1 \ldots x_r$ in $\mathcal{O}_{Y, q}$ we conclude
+that the lemma is true.
+\end{proof}
+
+\noindent
+Here is the definition of a normal crossings divisor.
+
+\begin{definition}
+\label{definition-normal-crossings}
+Let $X$ be a locally Noetherian scheme. A {\it normal crossings divisor}
+on $X$ is an effective Cartier divisor $D \subset X$ such that for
+every $p \in D$ there exists an \'etale morphism $U \to X$ with
+$p$ in the image and $D \times_X U$ a
+strict normal crossings divisor on $U$.
+\end{definition}
+
+\noindent
+For example $D = V(x^2 + y^2)$ is a normal crossings divisor
+(but not a strict one) on
+$\Spec(\mathbf{R}[x, y])$ because after pulling back to
+the \'etale cover $\Spec(\mathbf{C}[x, y])$ we obtain $(x - iy)(x + iy) = 0$.
+
+\begin{lemma}
+\label{lemma-smooth-pullback-normal-crossings}
+\begin{slogan}
+Pullback of a normal crossings divisor by a smooth
+morphism is a normal crossings divisor.
+\end{slogan}
+Let $X$ be a locally Noetherian scheme. Let $D \subset X$ be a
+normal crossings divisor. If $f : Y \to X$ is a smooth
+morphism of schemes, then the pullback $f^*D$ is a
+normal crossings divisor on $Y$.
+\end{lemma}
+
+\begin{proof}
+As $f$ is flat the pullback is defined by
+Divisors, Lemma \ref{divisors-lemma-pullback-effective-Cartier-defined}
+hence the statement makes sense.
+Let $q \in f^*D$ map to $p \in D$.
+Choose an \'etale morphism $U \to X$ whose image contains $p$
+such that $D \times_X U \subset U$ is a strict normal crossings
+divisor as in Definition \ref{definition-normal-crossings}.
+Set $V = Y \times_X U$. Then $V \to Y$ is \'etale as a base
+change of $U \to X$
+(Morphisms, Lemma \ref{morphisms-lemma-base-change-etale})
+and the pullback $D \times_X V$ is a strict normal crossings
+divisor on $V$ by Lemma \ref{lemma-smooth-pullback-strict-normal-crossings}.
+Thus we have checked the condition of
+Definition \ref{definition-normal-crossings}
+for $q \in f^*D$ and we conclude.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-characterize-normal-crossings-normalization}
+Let $X$ be a locally Noetherian scheme. Let $D \subset X$ be a closed
+subscheme. The following are equivalent
+\begin{enumerate}
+\item $D$ is a normal crossings divisor in $X$,
+\item $D$ is reduced, the normalization $\nu : D^\nu \to D$ is unramified,
+and for any $n \geq 1$ the scheme
+$$
+Z_n = D^\nu \times_D \ldots \times_D D^\nu
+\setminus \{(p_1, \ldots, p_n) \mid p_i = p_j\text{ for some }i\not = j\}
+$$
+is regular, the morphism $Z_n \to X$ is a local complete intersection
+morphism whose conormal sheaf is locally free of rank $n$.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+First we explain how to think about condition (2).
+The diagonal of an unramified morphism is open
+(Morphisms, Lemma \ref{morphisms-lemma-diagonal-unramified-morphism}).
+On the other hand $D^\nu \to D$ is separated, hence the
+diagonal $D^\nu \to D^\nu \times_D D^\nu$ is closed.
+Thus $Z_n$ is an open and closed subscheme of
+$D^\nu \times_D \ldots \times_D D^\nu$. On the other hand,
+$Z_n \to X$ is unramified as it is the composition
+$$
+Z_n \to D^\nu \times_D \ldots \times_D D^\nu \to \ldots \to
+D^\nu \times_D D^\nu \to D^\nu \to D \to X
+$$
+and each of the arrows is unramified.
+Since an unramified morphism is formally unramified
+(More on Morphisms, Lemma
+\ref{more-morphisms-lemma-unramified-formally-unramified})
+we have a conormal sheaf
+$\mathcal{C}_n = \mathcal{C}_{Z_n/X}$ of $Z_n \to X$, see
+More on Morphisms, Definition
+\ref{more-morphisms-definition-universal-thickening}.
+
+\medskip\noindent
+Formation of normalization commutes with \'etale localization by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-normalization-and-smooth}.
+Checking that local rings are regular, or that
+a morphism is unramified, or that a morphism is a
+local complete intersection or that a morphism is
+unramified and has a conormal sheaf which is
+locally free of a given rank, may be done \'etale locally (see
+More on Algebra, Lemma \ref{more-algebra-lemma-regular-etale-extension},
+Descent, Lemma \ref{descent-lemma-descending-property-unramified},
+More on Morphisms, Lemma \ref{more-morphisms-lemma-descending-property-lci}
+and
+Descent, Lemma \ref{descent-lemma-finite-locally-free-descends}).
+
+\medskip\noindent
+By the remark of the preceding paragraph and the definition
+of normal crossings divisor it suffices to prove that a
+strict normal crossings divisor $D = \bigcup_{i \in I} D_i$
+satisfies (2). In this case $D^\nu = \coprod D_i$
+and $D^\nu \to D$ is unramified (being unramified
+is local on the source and $D_i \to D$ is a closed
+immersion which is unramified). Similarly, $Z_1 = D^\nu \to X$
+is a local complete intersection morphism because we may
+check this locally on the source and each morphism $D_i \to X$
+is a regular immersion as it is the inclusion of a Cartier divisor
+(see Lemma \ref{lemma-strict-normal-crossings} and
+More on Morphisms, Lemma \ref{more-morphisms-lemma-regular-immersion-lci}).
+Since an effective Cartier divisor has an invertible
+conormal sheaf, we conclude that the requirement on the
+conormal sheaf is satisfied.
+Similarly, the scheme $Z_n$ for $n \geq 2$ is the disjoint union
+of the schemes $D_J = \bigcap_{j \in J} D_j$ where $J \subset I$
+runs over the subsets of order $n$. Since $D_J \to X$ is
+a regular immersion of codimension $n$
+(by the definition of strict normal crossings and the
+fact that we may check this on stalks by
+Divisors, Lemma \ref{divisors-lemma-Noetherian-scheme-regular-ideal})
+it follows in the same manner that $Z_n \to X$ has the required
+properties. Some details omitted.
+
+\medskip\noindent
+Assume (2). Let $p \in D$. Since $D^\nu \to D$ is unramified, it is
+finite (by Morphisms, Lemma \ref{morphisms-lemma-finite-integral}).
+Hence $D^\nu \to X$ is finite unramified.
+By Lemma \ref{lemma-finite-unramified-etale-local}
+and \'etale localization (permissible by the discussion
+in the second paragraph and the definition of normal
+crossings divisors) we reduce to the case where
+$D^\nu = \coprod_{i \in I} D_i$
with $I$ finite and $D_i \to X$ a closed immersion.
+After shrinking $X$ if necessary, we may assume
+$p \in D_i$ for all $i \in I$. The condition that $Z_1 = D^\nu \to X$ is an
+unramified local complete intersection morphism
+with conormal sheaf locally free of rank $1$
+implies that $D_i \subset X$ is an effective Cartier divisor, see
+More on Morphisms, Lemma \ref{more-morphisms-lemma-lci} and
+Divisors, Lemma \ref{divisors-lemma-regular-immersion-noetherian}.
+To finish the proof we may assume $X = \Spec(A)$ is affine
+and $D_i = V(f_i)$ with $f_i \in A$ a nonzerodivisor.
+If $I = \{1, \ldots, r\}$, then $p \in Z_r = V(f_1, \ldots, f_r)$.
+The same reference as above implies that
+$(f_1, \ldots, f_r)$ is a Koszul regular ideal in $A$.
+Since the conormal sheaf has rank $r$, we see that
+$f_1, \ldots, f_r$ is a minimal set of generators of
+the ideal defining $Z_r$ in $\mathcal{O}_{X, p}$.
+This implies that $f_1, \ldots, f_r$ is a regular sequence
+in $\mathcal{O}_{X, p}$ such that $\mathcal{O}_{X, p}/(f_1, \ldots, f_r)$
+is regular. Thus we conclude by
+Algebra, Lemma \ref{algebra-lemma-regular-mod-x}
+that $f_1, \ldots, f_r$ can be extended to a regular system of parameters
+in $\mathcal{O}_{X, p}$ and this finishes the proof.
+\end{proof}
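
\medskip\noindent
A standard example of how condition (2) of
Lemma \ref{lemma-characterize-normal-crossings-normalization}
can fail is the cuspidal curve
$$
D = V(y^2 - x^3) \subset \mathbf{A}^2_k = \Spec(k[x, y])
$$
Its normalization is $\nu : \mathbf{A}^1_k \to D$, $t \mapsto (t^2, t^3)$,
and the module $\Omega_{D^\nu/D}$ is generated by $\text{d}t$ subject to
the relations $2t\,\text{d}t = 3t^2\,\text{d}t = 0$. Both relations
vanish at $t = 0$, hence $\Omega_{D^\nu/D} \otimes \kappa(0) \not = 0$
and $\nu$ is ramified at the origin. Thus $D$ is not a normal
crossings divisor.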
+
+\begin{lemma}
+\label{lemma-characterize-normal-crossings}
+Let $X$ be a locally Noetherian scheme. Let $D \subset X$ be a closed
subscheme. If $X$ is J-2 or Nagata, then the following are equivalent
+\begin{enumerate}
+\item $D$ is a normal crossings divisor in $X$,
+\item for every $p \in D$ the pullback of $D$ to the spectrum of the
+strict henselization $\mathcal{O}_{X, p}^{sh}$
+is a strict normal crossings divisor.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+The implication (1) $\Rightarrow$ (2) is straightforward and
+does not need the assumption that $X$ is J-2 or Nagata.
+Namely, let $p \in D$ and choose an \'etale neighbourhood
+$(U, u) \to (X, p)$ such that the pullback of $D$ is
+a strict normal crossings divisor on $U$.
+Then $\mathcal{O}_{X, p}^{sh} = \mathcal{O}_{U, u}^{sh}$
+and we see that the trace of $D$ on $\Spec(\mathcal{O}_{U, u}^{sh})$
+is cut out by part of a regular system of parameters
+as this is already the case in $\mathcal{O}_{U, u}$.
+
+\medskip\noindent
+To prove the implication in the other direction
+we will use the criterion of
+Lemma \ref{lemma-characterize-normal-crossings-normalization}.
+Observe that formation of the normalization $D^\nu \to D$
+commutes with strict henselization, see
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-normalization-and-henselization}.
+If we can show that $D^\nu \to D$ is finite,
+then we see that $D^\nu \to D$ and the schemes
+$Z_n$ satisfy all desired properties because these
+can all be checked on the level of local rings
+(but the finiteness of the morphism $D^\nu \to D$
+is not something we can check on local rings).
+We omit the detailed verifications.
+
+\medskip\noindent
+If $X$ is Nagata, then $D^\nu \to D$ is finite by
+Morphisms, Lemma \ref{morphisms-lemma-nagata-normalization}.
+
+\medskip\noindent
+Assume $X$ is J-2. Choose a point $p \in D$. We will show
+that $D^\nu \to D$ is finite over a neighbourhood of $p$.
+By assumption there exists a regular system of
+parameters $f_1, \ldots, f_d$ of $\mathcal{O}_{X, p}^{sh}$
+and $1 \leq r \leq d$ such that the trace of $D$ on
+$\Spec(\mathcal{O}_{X, p}^{sh})$ is cut out by $f_1 \ldots f_r$.
+Then
+$$
+D^\nu \times_X \Spec(\mathcal{O}_{X, p}^{sh}) =
+\coprod\nolimits_{i = 1, \ldots, r} V(f_i)
+$$
+Choose an affine \'etale neighbourhood
+$(U, u) \to (X, p)$ such that $f_i$ comes from
+$f_i \in \mathcal{O}_U(U)$. Set $D_i = V(f_i) \subset U$.
+The strict henselization of $\mathcal{O}_{D_i, u}$
+is $\mathcal{O}_{X, p}^{sh}/(f_i)$ which is regular.
+Hence $\mathcal{O}_{D_i, u}$ is regular (for example by
+More on Algebra, Lemma \ref{more-algebra-lemma-henselization-regular}).
+Because $X$ is J-2 the regular locus is open in $D_i$.
+Thus after replacing $U$ by a Zariski open we may assume
+that $D_i$ is regular for each $i$. It follows that
+$$
+\coprod\nolimits_{i = 1, \ldots, r} D_i = D^\nu \times_X U
+\longrightarrow D \times_X U
+$$
+is the normalization morphism and it is clearly finite.
+In other words, we have found
+an \'etale neighbourhood $(U, u)$ of $(X, p)$ such that
+the base change of $D^\nu \to D$ to this neighbourhood is finite.
+This implies $D^\nu \to D$ is finite by descent
+(Descent, Lemma \ref{descent-lemma-descending-property-finite})
+and the proof is complete.
+\end{proof}
+
+
+
+
+
+
+\input{chapters}
+
+
+\bibliography{my}
+\bibliographystyle{amsalpha}
+
+\end{document}
diff --git a/books/stacks/examples-defos.tex b/books/stacks/examples-defos.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ecada3a5fe2c9ea9c8a7483fa63d5d46837419ec
--- /dev/null
+++ b/books/stacks/examples-defos.tex
@@ -0,0 +1,3246 @@
+\input{preamble}
+
+% OK, start here.
+%
+\begin{document}
+
+\title{Deformation Problems}
+
+\maketitle
+
+\phantomsection
+\label{section-phantom}
+
+\tableofcontents
+
+\section{Introduction}
+\label{section-introduction}
+
+\noindent
+The goal of this chapter is to work out examples of the general theory
+developed in the chapters Formal Deformation Theory,
Deformation Theory, and The Cotangent Complex.
+
+\medskip\noindent
+Section 3 of the paper \cite{Sch} by Schlessinger discusses some
+examples as well.
+
+
+
+
+
+
+\section{Examples of deformation problems}
+\label{section-examples}
+
+\noindent
+List of things that should go here:
+\begin{enumerate}
+\item Deformations of schemes:
+\begin{enumerate}
+\item The Rim-Schlessinger condition.
+\item Computing the tangent space.
+\item Computing the infinitesimal deformations.
+\item The deformation category of an affine hypersurface.
+\end{enumerate}
+\item Deformations of sheaves (for example fix $X/S$, a finite type point
+$s$ of $S$, and a quasi-coherent sheaf $\mathcal{F}_s$ over $X_s$).
+\item Deformations of algebraic spaces (very similar to deformations
+of schemes; maybe even easier?).
\item Deformations of maps (e.g., morphisms between schemes; one can fix
the source, the target, or both).
+\item Add more here.
+\end{enumerate}
+
+
+
+
+
+\section{General outline}
+\label{section-general}
+
+\noindent
+This section lays out the procedure for discussing the next few examples.
+
+\medskip\noindent
+Step I. For each section we fix a Noetherian ring $\Lambda$ and
+we fix a finite ring map $\Lambda \to k$ where $k$ is a field.
+As usual we let $\mathcal{C}_\Lambda = \mathcal{C}_{\Lambda, k}$
+be our base category, see
+Formal Deformation Theory,
+Definition \ref{formal-defos-definition-CLambda}.
+
+\medskip\noindent
+Step II. In each section we define a category $\mathcal{F}$
cofibred in groupoids over $\mathcal{C}_\Lambda$. Occasionally
+we will consider instead a functor
+$F : \mathcal{C}_\Lambda \to \textit{Sets}$.
+
+\medskip\noindent
+Step III. We explain to what extent $\mathcal{F}$ satisfies
the Rim-Schlessinger condition (RS) discussed in
+Formal Deformation Theory, Section \ref{formal-defos-section-RS-condition}.
+Similarly, we may discuss to what extent our $\mathcal{F}$
satisfies (S1) and (S2), or to what extent $F$ satisfies
Schlessinger's corresponding conditions (H1) and (H2).
+See Formal Deformation Theory, Section
+\ref{formal-defos-section-schlessinger-conditions}.
+
+\medskip\noindent
+Step IV. Let $x_0$ be an object of $\mathcal{F}(k)$, in other words an object
+of $\mathcal{F}$ over $k$. In this chapter we will use the notation
+$$
+\Deformationcategory_{x_0} = \mathcal{F}_{x_0}
+$$
+to denote the predeformation category constructed in
+Formal Deformation Theory, Remark
+\ref{formal-defos-remark-localize-cofibered-groupoid}.
+If $\mathcal{F}$ satisfies (RS), then
+$\Deformationcategory_{x_0}$ is a deformation category
+(Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-localize-RS})
+and satisfies (S1) and (S2)
+(Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-implies-S1-S2}).
+If (S1) and (S2) are satisfied, then
+an important question is whether the tangent space
+$$
+T\Deformationcategory_{x_0} = T_{x_0}\mathcal{F} = T\mathcal{F}_{x_0}
+$$
+(see Formal Deformation Theory, Remark
+\ref{formal-defos-remark-tangent-space-cofibered-groupoid} and
+Definition \ref{formal-defos-definition-tangent-space})
is finite dimensional. Namely, this ensures that
+$\Deformationcategory_{x_0}$ has a versal formal object
+(Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-versal-object-existence}).
+
+\medskip\noindent
+Step V. If $\mathcal{F}$ passes Step IV, then the next question is whether
+the $k$-vector space
+$$
+\text{Inf}(\Deformationcategory_{x_0}) = \text{Inf}_{x_0}(\mathcal{F})
+$$
+of infinitesimal automorphisms of $x_0$ is finite dimensional.
+Namely, if true, this implies that
+$\Deformationcategory_{x_0}$ admits a presentation by a
+smooth prorepresentable groupoid in functors on $\mathcal{C}_\Lambda$, see
+Formal Deformation Theory, Theorem
+\ref{formal-defos-theorem-presentation-deformation-groupoid}.
+
+
+
+
+\section{Finite projective modules}
+\label{section-finite-projective-modules}
+
+\noindent
+This section is just a warmup. Of course finite projective modules
+should not have any ``moduli''.
+
+\begin{example}[Finite projective modules]
+\label{example-finite-projective-modules}
+Let $\mathcal{F}$ be the category defined as follows
+\begin{enumerate}
+\item an object is a pair $(A, M)$ consisting of an
+object $A$ of $\mathcal{C}_\Lambda$ and a
+finite projective $A$-module $M$, and
+\item a morphism $(f, g) : (B, N) \to (A, M)$ consists of
+a morphism $f : B \to A$ in $\mathcal{C}_\Lambda$ together
+with a map $g : N \to M$ which is $f$-linear and induces
an isomorphism $N \otimes_{B, f} A \cong M$.
+\end{enumerate}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ sends $(A, M)$ to $A$
+and $(f, g)$ to $f$. It is clear that $p$ is cofibred in groupoids.
+Given a finite dimensional $k$-vector space $V$,
+let $x_0 = (k, V)$ be the corresponding object of $\mathcal{F}(k)$.
+We set
+$$
+\Deformationcategory_V = \mathcal{F}_{x_0}
+$$
+\end{example}
+
+\noindent
+Since every finite projective module over a local ring is finite free
+(Algebra, Lemma \ref{algebra-lemma-finite-projective})
+we see that
+$$
+\begin{matrix}
+\text{isomorphism classes} \\
+\text{of objects of }\mathcal{F}(A)
+\end{matrix}
+= \coprod\nolimits_{n \geq 0} \{*\}
+$$
+Although this means that the deformation theory of $\mathcal{F}$
+is essentially trivial, we still work through the steps outlined
+in Section \ref{section-general} to provide an easy example.
+
+\begin{lemma}
+\label{lemma-finite-projective-modules-RS}
+Example \ref{example-finite-projective-modules}
+satisfies the Rim-Schlessinger condition (RS).
+In particular, $\Deformationcategory_V$ is a deformation category
+for any finite dimensional vector space $V$ over $k$.
+\end{lemma}
+
+\begin{proof}
+Let $A_1 \to A$ and $A_2 \to A$ be morphisms of $\mathcal{C}_\Lambda$.
+Assume $A_2 \to A$ is surjective. According to
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-2-categorical}
+it suffices to show that the functor
+$\mathcal{F}(A_1 \times_A A_2) \to
+\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$
+is an equivalence of categories.
+
+\medskip\noindent
+Thus we have to show that the category of finite projective modules
+over $A_1 \times_A A_2$ is equivalent to the fibre product
+of the categories of finite projective modules over $A_1$ and $A_2$
+over the category of finite projective modules over $A$.
+This is a special case of More on Algebra, Lemma
+\ref{more-algebra-lemma-finitely-presented-module-over-fibre-product}.
+We recall that the inverse functor sends the triple
+$(M_1, M_2, \varphi)$ where
+$M_1$ is a finite projective $A_1$-module,
+$M_2$ is a finite projective $A_2$-module, and
+$\varphi : M_1 \otimes_{A_1} A \to M_2 \otimes_{A_2} A$
is an isomorphism of $A$-modules, to the finite projective
+$A_1 \times_A A_2$-module $M_1 \times_\varphi M_2$.
+\end{proof}
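
\noindent
For concreteness, here is the explicit description of the fibre product
module used in the proof; this is a standard construction (compatible
with the reference cited above) and we spell it out only for convenience:
$$
M_1 \times_\varphi M_2 =
\{(m_1, m_2) \in M_1 \times M_2 \mid
\varphi(m_1 \otimes 1) = m_2 \otimes 1
\text{ in } M_2 \otimes_{A_2} A\}
$$
with $A_1 \times_A A_2$-module structure given by
$(a_1, a_2) \cdot (m_1, m_2) = (a_1 m_1, a_2 m_2)$.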
+
+\begin{lemma}
+\label{lemma-finite-projective-modules-TI}
+In Example \ref{example-finite-projective-modules}
+let $V$ be a finite dimensional $k$-vector space. Then
+$$
+T\Deformationcategory_V = (0)
+\quad\text{and}\quad
+\text{Inf}(\Deformationcategory_V) = \text{End}_k(V)
+$$
+are finite dimensional.
+\end{lemma}
+
+\begin{proof}
+With $\mathcal{F}$ as in Example \ref{example-finite-projective-modules}
+set $x_0 = (k, V) \in \Ob(\mathcal{F}(k))$.
+Recall that $T\Deformationcategory_V = T_{x_0}\mathcal{F}$
+is the set of isomorphism
classes of pairs $(x, \alpha)$ consisting of an object $x$ of $\mathcal{F}$
over the dual numbers $k[\epsilon]$ and a morphism
+$\alpha : x \to x_0$ of $\mathcal{F}$ lying over $k[\epsilon] \to k$.
+
+\medskip\noindent
+Up to isomorphism, there is a unique pair $(M, \alpha)$ consisting of a
+finite projective module $M$ over $k[\epsilon]$
+and $k[\epsilon]$-linear map $\alpha : M \to V$
+which induces an isomorphism $M \otimes_{k[\epsilon]} k \to V$.
+For example, if $V = k^{\oplus n}$, then we take
+$M = k[\epsilon]^{\oplus n}$ with the obvious map $\alpha$.
+
+\medskip\noindent
+Similarly, $\text{Inf}(\Deformationcategory_V) = \text{Inf}_{x_0}(\mathcal{F})$
+is the set of automorphisms
+of the trivial deformation $x'_0$ of $x_0$ over $k[\epsilon]$.
+See Formal Deformation Theory, Definition
+\ref{formal-defos-definition-infinitesimal-auts} for details.
+
+\medskip\noindent
+Given $(M, \alpha)$ as in the second paragraph, we see that an element of
+$\text{Inf}_{x_0}(\mathcal{F})$ is an automorphism $\gamma : M \to M$ with
+$\gamma \bmod \epsilon = \text{id}$. Then we can write
+$\gamma = \text{id}_M + \epsilon \psi$ where
+$\psi : M/\epsilon M \to M/\epsilon M$ is $k$-linear.
+Using $\alpha$ we can think of $\psi$ as an element of
+$\text{End}_k(V)$ and this finishes the proof.
+\end{proof}
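
\noindent
Observe that any map $\gamma = \text{id}_M + \epsilon \psi$ as in the
proof above is automatically invertible; the following one-line check
(not needed for the argument) uses only $\epsilon^2 = 0$:
$$
(\text{id}_M + \epsilon \psi) \circ (\text{id}_M - \epsilon \psi) =
\text{id}_M + \epsilon \psi - \epsilon \psi - \epsilon^2 \psi^2 =
\text{id}_M
$$
Hence $\psi \mapsto \text{id}_M + \epsilon \psi$ really does define a
bijection $\text{End}_k(V) \to \text{Inf}(\Deformationcategory_V)$.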
+
+
+\section{Representations of a group}
+\label{section-representations}
+
+\noindent
+The deformation theory of representations can be very interesting.
+
+\begin{example}[Representations of a group]
+\label{example-representations}
+Let $\Gamma$ be a group.
+Let $\mathcal{F}$ be the category defined as follows
+\begin{enumerate}
+\item an object is a triple $(A, M, \rho)$ consisting of an
+object $A$ of $\mathcal{C}_\Lambda$, a finite projective $A$-module $M$,
+and a homomorphism $\rho : \Gamma \to \text{GL}_A(M)$, and
+\item a morphism $(f, g) : (B, N, \tau) \to (A, M, \rho)$ consists of
+a morphism $f : B \to A$ in $\mathcal{C}_\Lambda$ together
+with a map $g : N \to M$ which is $f$-linear and $\Gamma$-equivariant
and induces an isomorphism $N \otimes_{B, f} A \cong M$.
+\end{enumerate}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ sends $(A, M, \rho)$
+to $A$ and $(f, g)$ to $f$. It is clear that $p$ is cofibred in groupoids.
+Given a finite dimensional $k$-vector space $V$ and a representation
+$\rho_0 : \Gamma \to \text{GL}_k(V)$,
+let $x_0 = (k, V, \rho_0)$ be the corresponding object of $\mathcal{F}(k)$.
+We set
+$$
+\Deformationcategory_{V, \rho_0} = \mathcal{F}_{x_0}
+$$
+\end{example}
+
+\noindent
+Since every finite projective module over a local ring is finite free
+(Algebra, Lemma \ref{algebra-lemma-finite-projective})
+we see that
+$$
+\begin{matrix}
+\text{isomorphism classes} \\
+\text{of objects of }\mathcal{F}(A)
+\end{matrix}
+=
+\coprod\nolimits_{n \geq 0}\quad
+\begin{matrix}
+\text{GL}_n(A)\text{-conjugacy classes of}\\
+\text{homomorphisms }\rho : \Gamma \to \text{GL}_n(A)
+\end{matrix}
+$$
+This is already more interesting than the discussion in
+Section \ref{section-finite-projective-modules}.
+
+\begin{lemma}
+\label{lemma-representations-RS}
+Example \ref{example-representations}
+satisfies the Rim-Schlessinger condition (RS).
+In particular, $\Deformationcategory_{V, \rho_0}$ is a deformation category
+for any finite dimensional representation
+$\rho_0 : \Gamma \to \text{GL}_k(V)$.
+\end{lemma}
+
+\begin{proof}
+Let $A_1 \to A$ and $A_2 \to A$ be morphisms of $\mathcal{C}_\Lambda$.
+Assume $A_2 \to A$ is surjective. According to
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-2-categorical}
+it suffices to show that the functor
+$\mathcal{F}(A_1 \times_A A_2) \to
+\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$
+is an equivalence of categories.
+
+\medskip\noindent
+Consider an object
+$$
+((A_1, M_1, \rho_1), (A_2, M_2, \rho_2), (\text{id}_A, \varphi))
+$$
+of the category $\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$.
+Then, as seen in the proof of Lemma \ref{lemma-finite-projective-modules-RS},
+we can consider the finite projective
+$A_1 \times_A A_2$-module $M_1 \times_\varphi M_2$.
+Since $\varphi$ is compatible with the given actions we obtain
+$$
+\rho_1 \times \rho_2 : \Gamma \longrightarrow
+\text{GL}_{A_1 \times_A A_2}(M_1 \times_\varphi M_2)
+$$
+Then $(M_1 \times_\varphi M_2, \rho_1 \times \rho_2)$
+is an object of $\mathcal{F}(A_1 \times_A A_2)$.
+This construction determines a quasi-inverse to our functor.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-representations-TI}
+In Example \ref{example-representations} let
+$\rho_0 : \Gamma \to \text{GL}_k(V)$
+be a finite dimensional representation. Then
+$$
+T\Deformationcategory_{V, \rho_0} = \Ext^1_{k[\Gamma]}(V, V) =
+H^1(\Gamma, \text{End}_k(V))
+\quad\text{and}\quad
+\text{Inf}(\Deformationcategory_{V, \rho_0}) = H^0(\Gamma, \text{End}_k(V))
+$$
+Thus $\text{Inf}(\Deformationcategory_{V, \rho_0})$
+is always finite dimensional
+and $T\Deformationcategory_{V, \rho_0}$ is finite dimensional
+if $\Gamma$ is finitely generated.
+\end{lemma}
+
+\begin{proof}
+We first deal with the infinitesimal automorphisms.
+Let $M = V \otimes_k k[\epsilon]$ with induced action
$\rho_0' : \Gamma \to \text{GL}_{k[\epsilon]}(M)$.
+Then an infinitesimal automorphism, i.e., an element of
+$\text{Inf}(\Deformationcategory_{V, \rho_0})$,
+is given by an automorphism
+$\gamma = \text{id} + \epsilon \psi : M \to M$
+as in the proof of Lemma \ref{lemma-finite-projective-modules-TI},
+where moreover $\psi$ has to commute
+with the action of $\Gamma$ (given by $\rho_0$).
+Thus we see that
+$$
+\text{Inf}(\Deformationcategory_{V, \rho_0}) = H^0(\Gamma, \text{End}_k(V))
+$$
+as predicted in the lemma.
+
+\medskip\noindent
+Next, let $(k[\epsilon], M, \rho)$ be an object of $\mathcal{F}$
+over $k[\epsilon]$ and let $\alpha : M \to V$ be a $\Gamma$-equivariant map
+inducing an isomorphism $M/\epsilon M \to V$.
+Since $M$ is free as a $k[\epsilon]$-module we obtain
+an extension of $\Gamma$-modules
+$$
+0 \to V \to M \xrightarrow{\alpha} V \to 0
+$$
+We omit the detailed construction of the map on the left.
+Conversely, if we have an extension of $\Gamma$-modules as
+above, then we can use this to make a $k[\epsilon]$-module
+structure on $M$ and get an object of $\mathcal{F}(k[\epsilon])$
+together with a map $\alpha$ as above.
+It follows that
+$$
+T\Deformationcategory_{V, \rho_0} = \Ext^1_{k[\Gamma]}(V, V)
+$$
+as predicted in the lemma. This is equal to
+$H^1(\Gamma, \text{End}_k(V))$ by
+\'Etale Cohomology, Lemma \ref{etale-cohomology-lemma-ext-modules-hom}.
+
+\medskip\noindent
+The statement on dimensions follows from
+\'Etale Cohomology, Lemma
+\ref{etale-cohomology-lemma-finite-dim-group-cohomology}.
+\end{proof}
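
\noindent
To make the identification with group cohomology explicit, here is the
standard cocycle computation (it is not needed in the proof above).
Write $M = V \otimes_k k[\epsilon]$. Any deformation of $\rho_0$ to
$k[\epsilon]$ can be written as
$$
\rho(\gamma) = (\text{id} + \epsilon c(\gamma)) \rho_0(\gamma)
\quad\text{for some map}\quad
c : \Gamma \to \text{End}_k(V)
$$
The condition that $\rho$ is a homomorphism translates into the cocycle
condition
$$
c(\gamma \gamma') =
c(\gamma) + \rho_0(\gamma) c(\gamma') \rho_0(\gamma)^{-1}
$$
for the adjoint action of $\Gamma$ on $\text{End}_k(V)$. Conjugating
$\rho$ by $\text{id} + \epsilon \psi$ changes $c(\gamma)$ into
$c(\gamma) + \psi - \rho_0(\gamma) \psi \rho_0(\gamma)^{-1}$, i.e.,
modifies $c$ by a coboundary. Thus isomorphism classes of deformations
correspond exactly to classes in $H^1(\Gamma, \text{End}_k(V))$.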
+
+\noindent
+In Example \ref{example-representations} if $\Gamma$ is finitely generated
+and $(V, \rho_0)$ is a finite dimensional representation of $\Gamma$
+over $k$, then $\Deformationcategory_{V, \rho_0}$
+admits a presentation by a smooth prorepresentable groupoid in functors
+over $\mathcal{C}_\Lambda$
+and a fortiori has a (minimal) versal formal object. This follows
+from Lemmas \ref{lemma-representations-RS} and \ref{lemma-representations-TI}
+and the general discussion in Section \ref{section-general}.
+
+\begin{lemma}
+\label{lemma-representations-hull}
In Example \ref{example-representations} assume $\Gamma$ is finitely generated.
+Let $\rho_0 : \Gamma \to \text{GL}_k(V)$ be a finite dimensional representation.
+Assume $\Lambda$ is a complete local ring with residue field $k$
+(the classical case). Then the functor
+$$
+F : \mathcal{C}_\Lambda \longrightarrow \textit{Sets},\quad
+A \longmapsto \Ob(\Deformationcategory_{V, \rho_0}(A))/\cong
+$$
+of isomorphism classes of objects has a hull. If
+$H^0(\Gamma, \text{End}_k(V)) = k$, then $F$ is
+prorepresentable.
+\end{lemma}
+
+\begin{proof}
+The existence of a hull follows from Lemmas \ref{lemma-representations-RS} and
+\ref{lemma-representations-TI} and
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-RS-implies-S1-S2}
+and Remark \ref{formal-defos-remark-compose-minimal-into-iso-classes}.
+
+\medskip\noindent
+Assume $H^0(\Gamma, \text{End}_k(V)) = k$. To see that $F$
+is prorepresentable it suffices to show that $F$ is a
+deformation functor, see Formal Deformation Theory, Theorem
+\ref{formal-defos-theorem-Schlessinger-prorepresentability}.
+In other words, we have to show $F$ satisfies (RS).
+For this we can use the criterion of Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-associated-functor}.
+The required surjectivity of automorphism groups will follow if we
+show that
+$$
+A \cdot \text{id}_M =
+\text{End}_{A[\Gamma]}(M)
+$$
+for any object $(A, M, \rho)$ of $\mathcal{F}$ such that
+$M \otimes_A k$ is isomorphic to $V$ as a representation of $\Gamma$.
+Since the left hand side is contained in the right hand side,
+it suffices to show
+$\text{length}_A \text{End}_{A[\Gamma]}(M) \leq \text{length}_A A$.
+Choose pairwise distinct ideals
+$(0) = I_n \subset \ldots \subset I_1 \subset A$
+with $n = \text{length}(A)$. By correspondingly filtering
+$M$, we see that it suffices to prove $\Hom_{A[\Gamma]}(M, I_tM/I_{t + 1}M)$
+has length $1$. Since $I_tM/I_{t + 1}M \cong M \otimes_A k$
+and since any $A[\Gamma]$-module map $M \to M \otimes_A k$ factors
+uniquely through the quotient map $M \to M \otimes_A k$
+to give an element of
+$$
+\text{End}_{A[\Gamma]}(M \otimes_A k) = \text{End}_{k[\Gamma]}(V) = k
+$$
+we conclude.
+\end{proof}
+
+
+
+\section{Continuous representations}
+\label{section-continuous-representations}
+
+\noindent
+A very interesting thing one can do is to take an infinite Galois
+group and study the deformation theory of its representations, see
+\cite{Mazur-deforming}.
+
+\begin{example}[Representations of a topological group]
+\label{example-continuous-representations}
+Let $\Gamma$ be a topological group.
+Let $\mathcal{F}$ be the category defined as follows
+\begin{enumerate}
+\item an object is a triple $(A, M, \rho)$ consisting of an
+object $A$ of $\mathcal{C}_\Lambda$, a finite projective $A$-module $M$,
+and a continuous homomorphism $\rho : \Gamma \to \text{GL}_A(M)$
+where $\text{GL}_A(M)$ is given the discrete topology\footnote{An alternative
would be to require that the $A$-module $M$ with $\Gamma$-action given by $\rho$
is an $A\text{-}G$-module (for $G = \Gamma$) as defined in \'Etale Cohomology, Definition
+\ref{etale-cohomology-definition-G-module-continuous}. However,
+since $M$ is a finite $A$-module, this is equivalent.}, and
+\item a morphism $(f, g) : (B, N, \tau) \to (A, M, \rho)$ consists of
+a morphism $f : B \to A$ in $\mathcal{C}_\Lambda$ together
+with a map $g : N \to M$ which is $f$-linear and $\Gamma$-equivariant
and induces an isomorphism $N \otimes_{B, f} A \cong M$.
+\end{enumerate}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ sends $(A, M, \rho)$
+to $A$ and $(f, g)$ to $f$. It is clear that $p$ is cofibred in groupoids.
+Given a finite dimensional $k$-vector space $V$ and a
+continuous representation $\rho_0 : \Gamma \to \text{GL}_k(V)$,
+let $x_0 = (k, V, \rho_0)$ be the corresponding object of $\mathcal{F}(k)$.
+We set
+$$
+\Deformationcategory_{V, \rho_0} = \mathcal{F}_{x_0}
+$$
+\end{example}
+
+\noindent
+Since every finite projective module over a local ring is finite free
+(Algebra, Lemma \ref{algebra-lemma-finite-projective})
+we see that
+$$
+\begin{matrix}
+\text{isomorphism classes} \\
+\text{of objects of }\mathcal{F}(A)
+\end{matrix}
+=
+\coprod\nolimits_{n \geq 0}\quad
+\begin{matrix}
+\text{GL}_n(A)\text{-conjugacy classes of}\\
+\text{continuous homomorphisms }\rho : \Gamma \to \text{GL}_n(A)
+\end{matrix}
+$$
+
+\begin{lemma}
+\label{lemma-continuous-representations-RS}
+Example \ref{example-continuous-representations}
+satisfies the Rim-Schlessinger condition (RS).
+In particular, $\Deformationcategory_{V, \rho_0}$ is a deformation category
+for any finite dimensional continuous representation
+$\rho_0 : \Gamma \to \text{GL}_k(V)$.
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-representations-RS}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-continuous-representations-TI}
+In Example \ref{example-continuous-representations} let
+$\rho_0 : \Gamma \to \text{GL}_k(V)$ be a finite dimensional
+continuous representation. Then
+$$
+T\Deformationcategory_{V, \rho_0} = H^1(\Gamma, \text{End}_k(V))
+\quad\text{and}\quad
+\text{Inf}(\Deformationcategory_{V, \rho_0}) = H^0(\Gamma, \text{End}_k(V))
+$$
+Thus $\text{Inf}(\Deformationcategory_{V, \rho_0})$
+is always finite dimensional
+and $T\Deformationcategory_{V, \rho_0}$ is finite dimensional
+if $\Gamma$ is topologically finitely generated.
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-representations-TI}.
+\end{proof}
+
+\noindent
+In Example \ref{example-continuous-representations} if $\Gamma$
+is topologically finitely generated
+and $(V, \rho_0)$ is a finite dimensional continuous representation of $\Gamma$
+over $k$, then $\Deformationcategory_{V, \rho_0}$
+admits a presentation by a smooth prorepresentable groupoid in functors
+over $\mathcal{C}_\Lambda$
+and a fortiori has a (minimal) versal formal object. This follows
+from Lemmas \ref{lemma-continuous-representations-RS} and
+\ref{lemma-continuous-representations-TI}
+and the general discussion in Section \ref{section-general}.
+
+\begin{lemma}
+\label{lemma-continuous-representations-hull}
+In Example \ref{example-continuous-representations} assume $\Gamma$
+is topologically finitely generated.
Let $\rho_0 : \Gamma \to \text{GL}_k(V)$ be a finite dimensional
continuous representation.
+Assume $\Lambda$ is a complete local ring with residue field $k$
+(the classical case). Then the functor
+$$
+F : \mathcal{C}_\Lambda \longrightarrow \textit{Sets},\quad
+A \longmapsto \Ob(\Deformationcategory_{V, \rho_0}(A))/\cong
+$$
+of isomorphism classes of objects has a hull. If
+$H^0(\Gamma, \text{End}_k(V)) = k$, then $F$ is
+prorepresentable.
+\end{lemma}
+
+\begin{proof}
+The proof is exactly the same as the proof of
+Lemma \ref{lemma-representations-hull}.
+\end{proof}
+
+
+
+\section{Graded algebras}
+\label{section-graded-algebras}
+
+\noindent
+We will use the example in this section in the proof that the stack of
+polarized proper schemes is an algebraic stack. For this reason we will
+consider commutative graded algebras whose homogeneous parts are
+finite projective modules (sometimes called ``locally finite'').
+
+\begin{example}[Graded algebras]
+\label{example-graded-algebras}
+Let $\mathcal{F}$ be the category defined as follows
+\begin{enumerate}
+\item an object is a pair $(A, P)$ consisting of an
+object $A$ of $\mathcal{C}_\Lambda$ and a graded $A$-algebra $P$
+such that $P_d$ is a finite projective $A$-module for all $d \geq 0$, and
+\item a morphism $(f, g) : (B, Q) \to (A, P)$ consists of
+a morphism $f : B \to A$ in $\mathcal{C}_\Lambda$ together
+with a map $g : Q \to P$ which is $f$-linear and induces an
isomorphism $Q \otimes_{B, f} A \cong P$.
+\end{enumerate}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ sends $(A, P)$
+to $A$ and $(f, g)$ to $f$. It is clear that $p$ is cofibred in groupoids.
+Given a graded $k$-algebra $P$ with $\dim_k(P_d) < \infty$ for all
+$d \geq 0$, let $x_0 = (k, P)$ be the corresponding object of $\mathcal{F}(k)$.
+We set
+$$
+\Deformationcategory_P = \mathcal{F}_{x_0}
+$$
+\end{example}
+
+\begin{lemma}
+\label{lemma-graded-algebras-RS}
+Example \ref{example-graded-algebras}
+satisfies the Rim-Schlessinger condition (RS).
+In particular, $\Deformationcategory_P$ is a deformation category
+for any graded $k$-algebra $P$.
+\end{lemma}
+
+\begin{proof}
+Let $A_1 \to A$ and $A_2 \to A$ be morphisms of $\mathcal{C}_\Lambda$.
+Assume $A_2 \to A$ is surjective. According to
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-2-categorical}
+it suffices to show that the functor
+$\mathcal{F}(A_1 \times_A A_2) \to
+\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$
+is an equivalence of categories.
+
+\medskip\noindent
+Consider an object
+$$
+((A_1, P_1), (A_2, P_2), (\text{id}_A, \varphi))
+$$
+of the category $\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$.
+Then we consider $P_1 \times_\varphi P_2$. Since
+$\varphi : P_1 \otimes_{A_1} A \to P_2 \otimes_{A_2} A$
+is an isomorphism of graded algebras, we see that the graded pieces
+of $P_1 \times_\varphi P_2$ are finite projective $A_1 \times_A A_2$-modules,
+see proof of Lemma \ref{lemma-finite-projective-modules-RS}.
+Thus $P_1 \times_\varphi P_2$ is an object of $\mathcal{F}(A_1 \times_A A_2)$.
+This construction determines a quasi-inverse to our functor
+and the proof is complete.
+\end{proof}
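
\noindent
Degreewise this construction is simply the one for modules: a direct
check from the definitions shows that for each $d \geq 0$ we have
$$
(P_1 \times_\varphi P_2)_d =
\{(p_1, p_2) \in (P_1)_d \times (P_2)_d \mid
\varphi(p_1 \otimes 1) = p_2 \otimes 1\}
$$
and that the multiplication on $P_1 \times_\varphi P_2$ is
componentwise.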
+
+\begin{lemma}
+\label{lemma-graded-algebras-TI}
+In Example \ref{example-graded-algebras} let $P$ be a graded $k$-algebra.
+Then
+$$
+T\Deformationcategory_P
+\quad\text{and}\quad
+\text{Inf}(\Deformationcategory_P) = \text{Der}_k(P, P)
+$$
+are finite dimensional if $P$ is finitely generated over $k$.
+\end{lemma}
+
+\begin{proof}
+We first deal with the infinitesimal automorphisms.
+Let $Q = P \otimes_k k[\epsilon]$.
+Then an element of $\text{Inf}(\Deformationcategory_P)$
+is given by an automorphism
+$\gamma = \text{id} + \epsilon \delta : Q \to Q$
+as above where now $\delta : P \to P$.
+The fact that $\gamma$ is graded implies that
+$\delta$ is homogeneous of degree $0$.
+The fact that $\gamma$ is $k$-linear implies that
+$\delta$ is $k$-linear.
+The fact that $\gamma$ is multiplicative implies that
+$\delta$ is a $k$-derivation.
+Conversely, given a $k$-derivation $\delta : P \to P$
+homogeneous of degree $0$, we obtain an automorphism
+$\gamma = \text{id} + \epsilon \delta$ as above.
+Thus we see that
+$$
+\text{Inf}(\Deformationcategory_P) = \text{Der}_k(P, P)
+$$
+as predicted in the lemma.
Clearly, if $P$ is generated in degrees $\leq N$,
then $\delta$ is determined by
the $k$-linear maps $\delta_i : P_i \to P_i$ for
$0 \leq i \leq N$ and we see that
+$$
+\dim_k \text{Der}_k(P, P) < \infty
+$$
+as desired.
+
+\medskip\noindent
+To finish the proof of the lemma we show that there is a finite
+dimensional deformation space. To do this we
+choose a presentation
+$$
+k[X_1, \ldots, X_n]/(F_1, \ldots, F_m) \longrightarrow P
+$$
+of graded $k$-algebras where $\deg(X_i) = d_i$ and
+$F_j$ is homogeneous of degree $e_j$.
+Let $Q$ be any graded $k[\epsilon]$-algebra
finite free in each degree which comes with an isomorphism
+$\alpha : Q/\epsilon Q \to P$ so that $(Q, \alpha)$ defines
+an element of $T\Deformationcategory_P$.
+Choose a homogeneous element $q_i \in Q$ of degree $d_i$
+mapping to the image of $X_i$ in $P$.
+Then we obtain
+$$
+k[\epsilon][X_1, \ldots, X_n] \longrightarrow Q,\quad
+X_i \longmapsto q_i
+$$
+and since $P = Q/\epsilon Q$ this map is surjective by Nakayama's lemma.
+A small diagram chase shows we can choose homogeneous elements
+$F_{\epsilon, j} \in k[\epsilon][X_1, \ldots, X_n]$ of degree $e_j$
+mapping to zero in $Q$ and mapping to $F_j$ in $k[X_1, \ldots, X_n]$.
+Then
+$$
+k[\epsilon][X_1, \ldots, X_n]/(F_{\epsilon, 1}, \ldots, F_{\epsilon, m})
+\longrightarrow Q
+$$
+is a presentation of $Q$ by flatness of $Q$ over $k[\epsilon]$.
+Write
+$$
+F_{\epsilon, j} = F_j + \epsilon G_j
+$$
+There is some ambiguity in the vector $(G_1, \ldots, G_m)$.
+First, using different choices of $F_{\epsilon, j}$
+we can modify $G_j$ by an arbitrary element of degree $e_j$
+in the kernel of $k[X_1, \ldots, X_n] \to P$.
+Hence, instead of $(G_1, \ldots, G_m)$, we remember the
+element
+$$
+(g_1, \ldots, g_m) \in P_{e_1} \oplus \ldots \oplus P_{e_m}
+$$
+where $g_j$ is the image of $G_j$ in $P_{e_j}$.
+Moreover, if we change our choice of $q_i$ into $q_i + \epsilon p_i$
+with $p_i$ of degree $d_i$ then a computation (omitted) shows
+that $g_j$ changes into
+$$
+g_j^{new} = g_j - \sum\nolimits_{i = 1}^n p_i \partial F_j / \partial X_i
+$$
+We conclude that the isomorphism class of $Q$ is determined by the
+image of the vector $(G_1, \ldots, G_m)$ in the $k$-vector space
+$$
+W = \Coker(P_{d_1} \oplus \ldots \oplus P_{d_n}
+\xrightarrow{(\frac{\partial F_j}{\partial X_i})}
+P_{e_1} \oplus \ldots \oplus P_{e_m})
+$$
+In this way we see that we obtain an injection
+$$
+T\Deformationcategory_P \longrightarrow W
+$$
+Since $W$ visibly has finite dimension, we conclude that the lemma is true.
+\end{proof}
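
\noindent
As an illustration of the proof (this example is not used elsewhere),
take $P = k[X]/(X^N)$ with $\deg(X) = 1$ and $N \geq 2$. Here
$n = m = 1$, $d_1 = 1$, $e_1 = N$, and $P_N = 0$, so
$$
W = \Coker\left(P_1 \xrightarrow{N X^{N - 1}} P_N\right) = 0
\quad\text{and hence}\quad
T\Deformationcategory_P = (0)
$$
Thus $P$ has no nontrivial first order deformations as a graded
algebra, although it does have nontrivial ungraded ones, for instance
$k[\epsilon][X]/(X^N - \epsilon)$.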
+
+\noindent
+In Example \ref{example-graded-algebras} if $P$ is a finitely generated
+graded $k$-algebra, then $\Deformationcategory_P$
+admits a presentation by a smooth prorepresentable groupoid in functors
+over $\mathcal{C}_\Lambda$
+and a fortiori has a (minimal) versal formal object. This follows
+from Lemmas \ref{lemma-graded-algebras-RS} and
+\ref{lemma-graded-algebras-TI}
+and the general discussion in Section \ref{section-general}.
+
+\begin{lemma}
+\label{lemma-graded-algebras-hull}
+In Example \ref{example-graded-algebras} assume $P$ is a finitely generated
+graded $k$-algebra. Assume $\Lambda$ is a complete local ring
+with residue field $k$
+(the classical case). Then the functor
+$$
+F : \mathcal{C}_\Lambda \longrightarrow \textit{Sets},\quad
+A \longmapsto \Ob(\Deformationcategory_P(A))/\cong
+$$
+of isomorphism classes of objects has a hull.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from Lemmas \ref{lemma-graded-algebras-RS} and
+\ref{lemma-graded-algebras-TI} and
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-RS-implies-S1-S2}
+and Remark \ref{formal-defos-remark-compose-minimal-into-iso-classes}.
+\end{proof}
+
+
+
+
+
+
+
+\section{Rings}
+\label{section-rings}
+
+\noindent
The deformation theory of rings is the same as the deformation theory
of affine schemes. For rings and schemes, a deformation
always means a {\it flat} deformation.
+
+\begin{example}[Rings]
+\label{example-rings}
+Let $\mathcal{F}$ be the category defined as follows
+\begin{enumerate}
+\item an object is a pair $(A, P)$ consisting of an
+object $A$ of $\mathcal{C}_\Lambda$ and a flat $A$-algebra $P$, and
+\item a morphism $(f, g) : (B, Q) \to (A, P)$ consists of
+a morphism $f : B \to A$ in $\mathcal{C}_\Lambda$ together
+with a map $g : Q \to P$ which is $f$-linear and induces an
isomorphism $Q \otimes_{B, f} A \cong P$.
+\end{enumerate}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ sends $(A, P)$
+to $A$ and $(f, g)$ to $f$. It is clear that $p$ is cofibred in groupoids.
+Given a $k$-algebra $P$, let $x_0 = (k, P)$ be the corresponding object
+of $\mathcal{F}(k)$. We set
+$$
+\Deformationcategory_P = \mathcal{F}_{x_0}
+$$
+\end{example}
+
+\begin{lemma}
+\label{lemma-rings-RS}
+Example \ref{example-rings}
+satisfies the Rim-Schlessinger condition (RS).
+In particular, $\Deformationcategory_P$ is a deformation category
+for any $k$-algebra $P$.
+\end{lemma}
+
+\begin{proof}
+Let $A_1 \to A$ and $A_2 \to A$ be morphisms of $\mathcal{C}_\Lambda$.
+Assume $A_2 \to A$ is surjective. According to
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-2-categorical}
+it suffices to show that the functor
+$\mathcal{F}(A_1 \times_A A_2) \to
+\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$
+is an equivalence of categories.
+This is a special case of More on Algebra, Lemma
+\ref{more-algebra-lemma-properties-algebras-over-fibre-product}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-rings-TI}
+In Example \ref{example-rings} let $P$ be a $k$-algebra. Then
+$$
+T\Deformationcategory_P = \text{Ext}^1_P(\NL_{P/k}, P)
+\quad\text{and}\quad
+\text{Inf}(\Deformationcategory_P) = \text{Der}_k(P, P)
+$$
+\end{lemma}
+
+\begin{proof}
+Recall that $\text{Inf}(\Deformationcategory_P)$ is the set of
+automorphisms of the trivial deformation
+$P[\epsilon] = P \otimes_k k[\epsilon]$ of $P$ to $k[\epsilon]$
+equal to the identity modulo $\epsilon$.
+By Deformation Theory, Lemma \ref{defos-lemma-huge-diagram}
+this is equal to $\Hom_P(\Omega_{P/k}, P)$ which in turn is
+equal to $\text{Der}_k(P, P)$ by
+Algebra, Lemma \ref{algebra-lemma-universal-omega}.
+
+\medskip\noindent
+Recall that $T\Deformationcategory_P$ is the set of isomorphism classes
+of flat deformations $Q$ of $P$ to $k[\epsilon]$, more precisely,
+the set of isomorphism classes of $\Deformationcategory_P(k[\epsilon])$.
+Recall that a $k[\epsilon]$-algebra $Q$ with $Q/\epsilon Q = P$
+is flat over $k[\epsilon]$ if and only if
+$$
+0 \to P \xrightarrow{\epsilon} Q \to P \to 0
+$$
+is exact. This is proven in More on Morphisms, Lemma
+\ref{more-morphisms-lemma-deform} and more generally in
+Deformation Theory, Lemma \ref{defos-lemma-deform-module}.
+Thus we may apply
+Deformation Theory, Lemma \ref{defos-lemma-choices}
+to see that the set of isomorphism classes of such
+deformations is equal to $\text{Ext}^1_P(\NL_{P/k}, P)$.
+\end{proof}
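
\noindent
A standard example, not needed for the proof, may help: let
$P = k[x, y]/(xy)$ be the coordinate ring of a node. Then
$\NL_{P/k}$ is given by the two term complex
$(xy)/(xy)^2 \to \Omega_{k[x, y]/k} \otimes P$ where the source is
free of rank $1$ over $P$, and one computes
$$
T\Deformationcategory_P = \text{Ext}^1_P(\NL_{P/k}, P) =
\Coker\left(
P \partial_x \oplus P \partial_y
\xrightarrow{\partial \mapsto \partial(xy)}
P\right) = P/(y, x) = k
$$
A generator of this $k$-vector space corresponds to the first order
deformation $k[\epsilon][x, y]/(xy - \epsilon)$.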
+
+\begin{lemma}
+\label{lemma-smooth}
+In Example \ref{example-rings} let $P$ be a smooth $k$-algebra. Then
+$T\Deformationcategory_P = (0)$.
+\end{lemma}
+
+\begin{proof}
+By Lemma \ref{lemma-rings-TI} we have to show
+$\text{Ext}^1_P(\NL_{P/k}, P) = (0)$.
+Since $k \to P$ is smooth $\NL_{P/k}$ is quasi-isomorphic to the
+complex consisting of a finite projective
$P$-module placed in degree $0$, namely $\Omega_{P/k}$.
Since this module is projective, the Ext group in question vanishes.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-finite-type-rings-TI}
+In Lemma \ref{lemma-rings-TI} if $P$ is a finite type $k$-algebra, then
+\begin{enumerate}
+\item $\text{Inf}(\Deformationcategory_P)$ is finite dimensional if and only if
+$\dim(P) = 0$, and
+\item $T\Deformationcategory_P$ is finite dimensional if
+$\Spec(P) \to \Spec(k)$ is smooth except at a finite number of points.
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). We view $\text{Der}_k(P, P)$ as a $P$-module.
+If it has finite dimension over $k$, then it has finite length
+as a $P$-module, hence it is supported in finitely many
+closed points of $\Spec(P)$
+(Algebra, Lemma \ref{algebra-lemma-simple-pieces}).
+Since $\text{Der}_k(P, P) = \Hom_P(\Omega_{P/k}, P)$
+we see that
+$\text{Der}_k(P, P)_\mathfrak p = \text{Der}_k(P_\mathfrak p, P_\mathfrak p)$
+for any prime $\mathfrak p \subset P$
+(this uses Algebra, Lemmas
+\ref{algebra-lemma-differentials-localize},
+\ref{algebra-lemma-differentials-finitely-presented}, and
+\ref{algebra-lemma-hom-from-finitely-presented}).
+Let $\mathfrak p$ be a minimal prime ideal of $P$
+corresponding to an irreducible component of dimension $d > 0$.
Then $P_\mathfrak p$ is an Artinian local ring
essentially of finite type over $k$ whose residue field
$\kappa(\mathfrak p)$ has transcendence degree $d$ over $k$,
and $\Omega_{P_\mathfrak p/k}$ is nonzero for example by
Algebra, Lemma \ref{algebra-lemma-characterize-smooth-over-field}.
+Any nonzero finite module over an Artinian local ring
+has both a sub and a quotient module isomorphic to the residue field.
+Thus we find that
+$\text{Der}_k(P_\mathfrak p, P_\mathfrak p) =
+\Hom_{P_\mathfrak p}(\Omega_{P_\mathfrak p/k}, P_\mathfrak p)$
+is nonzero too. Combining all of the above we find that (1) is true.
+
+\medskip\noindent
+Proof of (2). For a prime $\mathfrak p$ of $P$ we will use that
+$\NL_{P_\mathfrak p/k} = (\NL_{P/k})_\mathfrak p$
+(Algebra, Lemma \ref{algebra-lemma-localize-NL})
+and we will
+use that
+$\text{Ext}_P^1(\NL_{P/k}, P)_\mathfrak p =
+\text{Ext}_{P_\mathfrak p}^1(\NL_{P_\mathfrak p/k}, P_\mathfrak p)$
+(More on Algebra, Lemma
+\ref{more-algebra-lemma-pseudo-coherence-and-base-change-ext}).
+Given a prime $\mathfrak p \subset P$
+then $k \to P$ is smooth at $\mathfrak p$ if and only if
+$(\NL_{P/k})_\mathfrak p$ is quasi-isomorphic
+to a finite projective module placed in degree $0$ (this follows
+immediately from the definition of a smooth ring map but it also
+follows from the stronger Algebra, Lemma \ref{algebra-lemma-smooth-at-point}).
+
+\medskip\noindent
+Assume that $P$ is smooth over $k$ at all but finitely many primes.
+Then these ``bad'' primes are maximal ideals
+$\mathfrak m_1, \ldots, \mathfrak m_n \subset P$ by
+Algebra, Lemma \ref{algebra-lemma-finite-type-algebra-finite-nr-primes}
+and the fact that the ``bad'' primes form a closed subset of $\Spec(P)$.
+For $\mathfrak p \not \in \{\mathfrak m_1, \ldots, \mathfrak m_n\}$
+we have $\text{Ext}^1_P(\NL_{P/k}, P)_\mathfrak p = 0$ by the results above.
+Thus $\text{Ext}^1_P(\NL_{P/k}, P)$ is a finite $P$-module
whose support is contained in $\{\mathfrak m_1, \ldots, \mathfrak m_n\}$.
+By Algebra, Proposition
+\ref{algebra-proposition-minimal-primes-associated-primes}
+for example, we find that the dimension over $k$ of
+$\text{Ext}^1_P(\NL_{P/k}, P)$ is a finite integer combination
+of $\dim_k \kappa(\mathfrak m_i)$ and hence finite by
+the Hilbert Nullstellensatz
+(Algebra, Theorem \ref{algebra-theorem-nullstellensatz}).
+\end{proof}
+
+\noindent
+In Example \ref{example-rings}, let $P$ be a finite type
+$k$-algebra. Then $\Deformationcategory_P$
+admits a presentation by a smooth prorepresentable groupoid in functors
+over $\mathcal{C}_\Lambda$ if and only if $\dim(P) = 0$.
+Furthermore, $\Deformationcategory_P$ has a versal formal
+object if $\Spec(P) \to \Spec(k)$ has finitely many
+singular points. This follows from Lemmas \ref{lemma-rings-RS} and
+\ref{lemma-finite-type-rings-TI}
+and the general discussion in Section \ref{section-general}.
+
+\begin{lemma}
+\label{lemma-rings-hull}
+In Example \ref{example-rings} assume $P$ is a finite type
+$k$-algebra such that $\Spec(P) \to \Spec(k)$ is smooth except
+at a finite number of points.
+Assume $\Lambda$ is a complete local ring with residue field $k$
+(the classical case). Then the functor
+$$
+F : \mathcal{C}_\Lambda \longrightarrow \textit{Sets},\quad
+A \longmapsto \Ob(\Deformationcategory_P(A))/\cong
+$$
+of isomorphism classes of objects has a hull.
+\end{lemma}
+
+\begin{proof}
+This follows immediately from Lemmas \ref{lemma-rings-RS} and
+\ref{lemma-finite-type-rings-TI} and
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-RS-implies-S1-S2}
+and Remark \ref{formal-defos-remark-compose-minimal-into-iso-classes}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-localization}
+In Example \ref{example-rings} let $P$ be a $k$-algebra.
+Let $S \subset P$ be a multiplicative subset. There is a natural functor
+$$
+\Deformationcategory_P \longrightarrow \Deformationcategory_{S^{-1}P}
+$$
+of deformation categories.
+\end{lemma}
+
+\begin{proof}
+Given a deformation of $P$ we can take the localization
+of it to get a deformation of the localization; this is
+clear and we encourage the reader to skip the proof. More precisely,
+let $(A, Q) \to (k, P)$ be a morphism in $\mathcal{F}$, i.e.,
+an object of $\Deformationcategory_P$. Let $S_Q \subset Q$ be the
inverse image of $S$. Then $Q \to S_Q^{-1}Q$ is flat, hence
$S_Q^{-1}Q$ is flat over $A$. Moreover
$S_Q^{-1}Q \otimes_A k = S_Q^{-1}(Q \otimes_A k) = S^{-1}P$.
Hence $(A, S_Q^{-1}Q) \to (k, S^{-1}P)$
is the desired object of $\Deformationcategory_{S^{-1}P}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-henselization}
+In Example \ref{example-rings} let $P$ be a $k$-algebra.
+Let $J \subset P$ be an ideal.
+Denote $(P^h, J^h)$ the henselization of the pair $(P, J)$.
+There is a natural functor
+$$
+\Deformationcategory_P \longrightarrow \Deformationcategory_{P^h}
+$$
+of deformation categories.
+\end{lemma}
+
+\begin{proof}
+Given a deformation of $P$ we can take the henselization
+of it to get a deformation of the henselization; this is
+clear and we encourage the reader to skip the proof. More precisely,
+let $(A, Q) \to (k, P)$ be a morphism in $\mathcal{F}$, i.e.,
+an object of $\Deformationcategory_P$. Denote $J_Q \subset Q$ the inverse
+image of $J$ in $Q$. Let $(Q^h, J_Q^h)$ be the henselization of
+the pair $(Q, J_Q)$. Recall that $Q \to Q^h$ is flat
+(More on Algebra, Lemma \ref{more-algebra-lemma-henselization-flat})
+and hence $Q^h$ is flat over $A$.
+By More on Algebra, Lemma \ref{more-algebra-lemma-henselization-integral}
+we see that the map $Q^h \to P^h$ induces an isomorphism
+$Q^h \otimes_A k = Q^h \otimes_Q P = P^h$.
+Hence $(A, Q^h) \to (k, P^h)$ is the desired object of
+$\Deformationcategory_{P^h}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-strict-henselization}
+In Example \ref{example-rings} let $P$ be a $k$-algebra.
+Assume $P$ is a local ring and let $P^{sh}$ be a strict henselization of $P$.
+There is a natural functor
+$$
+\Deformationcategory_P \longrightarrow \Deformationcategory_{P^{sh}}
+$$
+of deformation categories.
+\end{lemma}
+
+\begin{proof}
+Given a deformation of $P$ we can take the strict henselization
+of it to get a deformation of the strict henselization; this is
+clear and we encourage the reader to skip the proof. More precisely,
+let $(A, Q) \to (k, P)$ be a morphism in $\mathcal{F}$, i.e.,
+an object of $\Deformationcategory_P$. Since the kernel of the surjection
+$Q \to P$ is nilpotent, we find that $Q$ is a local ring with the
+same residue field as $P$. Let $Q^{sh}$ be the strict henselization
+of $Q$. Recall that $Q \to Q^{sh}$ is flat
+(More on Algebra, Lemma \ref{more-algebra-lemma-dumb-properties-henselization})
+and hence $Q^{sh}$ is flat over $A$.
+By Algebra, Lemma \ref{algebra-lemma-quotient-strict-henselization}
+we see that the map $Q^{sh} \to P^{sh}$ induces an isomorphism
+$Q^{sh} \otimes_A k = Q^{sh} \otimes_Q P = P^{sh}$.
+Hence $(A, Q^{sh}) \to (k, P^{sh})$ is the desired object of
+$\Deformationcategory_{P^{sh}}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-completion}
+In Example \ref{example-rings} let $P$ be a $k$-algebra.
Assume $P$ is Noetherian and let $J \subset P$ be an ideal.
+Denote $P^\wedge$ the $J$-adic completion.
+There is a natural functor
+$$
+\Deformationcategory_P \longrightarrow \Deformationcategory_{P^\wedge}
+$$
+of deformation categories.
+\end{lemma}
+
+\begin{proof}
+Given a deformation of $P$ we can take the completion
+of it to get a deformation of the completion; this is
+clear and we encourage the reader to skip the proof. More precisely,
+let $(A, Q) \to (k, P)$ be a morphism in $\mathcal{F}$, i.e.,
+an object of $\Deformationcategory_P$. Observe that $Q$ is a Noetherian
+ring: the kernel of the surjective ring map $Q \to P$ is
nilpotent and finitely generated, and $P$ is Noetherian; apply
+Algebra, Lemma \ref{algebra-lemma-completion-Noetherian}.
+Denote $J_Q \subset Q$ the inverse
+image of $J$ in $Q$. Let $Q^\wedge$ be the $J_Q$-adic completion of $Q$.
+Recall that $Q \to Q^\wedge$ is flat
+(Algebra, Lemma \ref{algebra-lemma-completion-flat})
+and hence $Q^\wedge$ is flat over $A$.
+The induced map $Q^\wedge \to P^\wedge$ induces an isomorphism
+$Q^\wedge \otimes_A k = Q^\wedge \otimes_Q P = P^\wedge$ by
+Algebra, Lemma \ref{algebra-lemma-completion-tensor} for example.
+Hence $(A, Q^\wedge) \to (k, P^\wedge)$
+is the desired object of $\Deformationcategory_{P^\wedge}$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-power-series-rings-TI}
+In Lemma \ref{lemma-rings-TI} if $P = k[[x_1, \ldots, x_n]]/(f)$
+for some nonzero $f \in (x_1, \ldots, x_n)^2$, then
+\begin{enumerate}
+\item $\text{Inf}(\Deformationcategory_P)$ is finite dimensional
+if and only if $n = 1$, and
+\item $T\Deformationcategory_P$ is finite dimensional if
+$$
+\sqrt{(f, \partial f/\partial x_1, \ldots, \partial f/\partial x_n)} =
+(x_1, \ldots, x_n)
+$$
+\end{enumerate}
+\end{lemma}
+
+\begin{proof}
+Proof of (1). Consider the derivations $\partial/\partial x_i$ of
+$k[[x_1, \ldots, x_n]]$ over $k$. Write $f_i = \partial f/\partial x_i$.
+The derivation
+$$
+\theta = \sum h_i \partial/\partial x_i
+$$
+of $k[[x_1, \ldots, x_n]]$
+induces a derivation of $P = k[[x_1, \ldots, x_n]]/(f)$
+if and only if
+$\sum h_i f_i \in (f)$. Moreover, the induced derivation of $P$
+is zero if and only if $h_i \in (f)$ for $i = 1, \ldots, n$.
+Thus we find
+$$
+\Ker((f_1, \ldots, f_n) : P^{\oplus n} \longrightarrow P) \subset
+\text{Der}_k(P, P)
+$$
+The left hand side is a finite dimensional $k$-vector space only if
+$n = 1$; we omit the proof. We also leave it to the reader to see
+that the right hand side has finite dimension if $n = 1$.
+This proves (1).
+
+\medskip\noindent
+Proof of (2). Let $Q$ be a flat deformation of $P$ over $k[\epsilon]$
+as in the proof of Lemma \ref{lemma-rings-TI}. Choose lifts $q_i \in Q$
+of the image of $x_i$ in $P$. Then $Q$ is a complete local ring
+with maximal ideal generated by $q_1, \ldots, q_n$ and $\epsilon$
+(small argument omitted). Thus we get a surjection
+$$
+k[\epsilon][[x_1, \ldots, x_n]] \longrightarrow Q,\quad
+x_i \longmapsto q_i
+$$
+Choose an element of the form
+$f + \epsilon g \in k[\epsilon][[x_1, \ldots, x_n]]$
+mapping to zero in $Q$. Observe that $g$ is well defined modulo $(f)$.
+Since $Q$ is flat over $k[\epsilon]$ we get
+$$
+Q = k[\epsilon][[x_1, \ldots, x_n]]/(f + \epsilon g)
+$$
Finally, changing the choice of $q_i$ amounts to
changing the coordinates $x_i$ into $x_i + \epsilon h_i$
for some $h_i \in k[[x_1, \ldots, x_n]]$. Then
+$f + \epsilon g$ changes into $f + \epsilon (g + \sum h_i f_i)$
+where $f_i = \partial f/\partial x_i$. Thus we see that the
+isomorphism class of the deformation $Q$ is determined
+by an element of
+$$
+k[[x_1, \ldots, x_n]]/
+(f, \partial f/\partial x_1, \ldots, \partial f/\partial x_n)
+$$
+This has finite dimension over $k$ if and only if
its support is the closed point of $\Spec(k[[x_1, \ldots, x_n]])$
+if and only if
+$\sqrt{(f, \partial f/\partial x_1, \ldots, \partial f/\partial x_n)} =
+(x_1, \ldots, x_n)$.
+\end{proof}
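\noindent
To illustrate the lemma (this example is not used elsewhere), take
$n = 2$ and $f = xy$, so that $P = k[[x, y]]/(xy)$ is the node. Then
$$
k[[x, y]]/(f, \partial f/\partial x, \partial f/\partial y) =
k[[x, y]]/(xy, y, x) = k
$$
so the criterion of part (2) holds. Moreover, by the proof above
every first order deformation of $P$ is isomorphic to
$k[\epsilon][[x, y]]/(xy + \epsilon c)$ for some $c \in k$.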
+
+
+
+
+
+
+\section{Schemes}
+\label{section-schemes}
+
+\noindent
+The deformation theory of schemes.
+
+\begin{example}[Schemes]
+\label{example-schemes}
+Let $\mathcal{F}$ be the category defined as follows
+\begin{enumerate}
+\item an object is a pair $(A, X)$ consisting of an
+object $A$ of $\mathcal{C}_\Lambda$ and a scheme $X$ flat over $A$, and
+\item a morphism $(f, g) : (B, Y) \to (A, X)$ consists of
+a morphism $f : B \to A$ in $\mathcal{C}_\Lambda$ together
+with a morphism $g : X \to Y$ such that
+$$
+\xymatrix{
+X \ar[r]_g \ar[d] & Y \ar[d] \\
+\Spec(A) \ar[r]^f & \Spec(B)
+}
+$$
+is a cartesian commutative diagram of schemes.
+\end{enumerate}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ sends $(A, X)$
+to $A$ and $(f, g)$ to $f$. It is clear that $p$ is cofibred in groupoids.
+Given a scheme $X$ over $k$, let $x_0 = (k, X)$ be the corresponding object
+of $\mathcal{F}(k)$. We set
+$$
+\Deformationcategory_X = \mathcal{F}_{x_0}
+$$
+\end{example}
+
+\begin{lemma}
+\label{lemma-schemes-RS}
+Example \ref{example-schemes}
+satisfies the Rim-Schlessinger condition (RS).
+In particular, $\Deformationcategory_X$ is a deformation category
+for any scheme $X$ over $k$.
+\end{lemma}
+
+\begin{proof}
+Let $A_1 \to A$ and $A_2 \to A$ be morphisms of $\mathcal{C}_\Lambda$.
+Assume $A_2 \to A$ is surjective. According to
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-2-categorical}
+it suffices to show that the functor
+$\mathcal{F}(A_1 \times_A A_2) \to
+\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$
+is an equivalence of categories.
+Observe that
+$$
+\xymatrix{
+\Spec(A) \ar[r] \ar[d] & \Spec(A_2) \ar[d] \\
+\Spec(A_1) \ar[r] &
+\Spec(A_1 \times_A A_2)
+}
+$$
+is a pushout diagram as in More on Morphisms, Lemma
+\ref{more-morphisms-lemma-pushout-along-thickening}.
+Thus the lemma is a special case of More on Morphisms, Lemma
+\ref{more-morphisms-lemma-equivalence-categories-schemes-over-pushout-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-schemes-TI}
+In Example \ref{example-schemes} let $X$ be a scheme over $k$. Then
+$$
+\text{Inf}(\Deformationcategory_X) =
+\text{Ext}^0_{\mathcal{O}_X}(\NL_{X/k}, \mathcal{O}_X) =
+\Hom_{\mathcal{O}_X}(\Omega_{X/k}, \mathcal{O}_X) =
+\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)
+$$
+and
+$$
+T\Deformationcategory_X =
+\text{Ext}^1_{\mathcal{O}_X}(\NL_{X/k}, \mathcal{O}_X)
+$$
+\end{lemma}
+
+\begin{proof}
+Recall that $\text{Inf}(\Deformationcategory_X)$ is the set of
+automorphisms of the trivial deformation
+$X' = X \times_{\Spec(k)} \Spec(k[\epsilon])$ of $X$ to $k[\epsilon]$
+equal to the identity modulo $\epsilon$.
+By Deformation Theory, Lemma \ref{defos-lemma-deform}
+this is equal to $\text{Ext}^0_{\mathcal{O}_X}(\NL_{X/k}, \mathcal{O}_X)$.
+The equality $\text{Ext}^0_{\mathcal{O}_X}(\NL_{X/k}, \mathcal{O}_X) =
+\Hom_{\mathcal{O}_X}(\Omega_{X/k}, \mathcal{O}_X)$ follows from
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-netherlander-quasi-coherent}.
+The equality
+$\Hom_{\mathcal{O}_X}(\Omega_{X/k}, \mathcal{O}_X) =
+\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)$
+follows from Morphisms, Lemma
+\ref{morphisms-lemma-universal-derivation-universal}.
+
+\medskip\noindent
+Recall that $T_{x_0}\Deformationcategory_X$ is the set of isomorphism classes
+of flat deformations $X'$ of $X$ to $k[\epsilon]$, more precisely,
+the set of isomorphism classes of $\Deformationcategory_X(k[\epsilon])$.
+Thus the second statement of the lemma follows from
+Deformation Theory, Lemma \ref{defos-lemma-deform}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-schemes-TI}
+In Lemma \ref{lemma-schemes-TI} if $X$ is proper over $k$, then
+$\text{Inf}(\Deformationcategory_X)$ and $T\Deformationcategory_X$ are
+finite dimensional.
+\end{lemma}
+
+\begin{proof}
By Lemma \ref{lemma-schemes-TI} we have to show
+$\Ext^1_{\mathcal{O}_X}(\NL_{X/k}, \mathcal{O}_X)$ and
+$\Ext^0_{\mathcal{O}_X}(\NL_{X/k}, \mathcal{O}_X)$ are finite
+dimensional. By More on Morphisms, Lemma
+\ref{more-morphisms-lemma-netherlander-fp}
+and the fact that $X$ is Noetherian, we see that
+$\NL_{X/k}$ has coherent cohomology sheaves zero except
+in degrees $0$ and $-1$.
+By Derived Categories of Schemes, Lemma \ref{perfect-lemma-ext-finite}
+the displayed $\Ext$-groups are finite $k$-vector spaces
+and the proof is complete.
+\end{proof}
+
+\noindent
+In Example \ref{example-schemes} if $X$ is a proper scheme over $k$,
+then $\Deformationcategory_X$
+admits a presentation by a smooth prorepresentable groupoid in functors
+over $\mathcal{C}_\Lambda$
+and a fortiori has a (minimal) versal formal object. This follows
+from Lemmas \ref{lemma-schemes-RS} and
+\ref{lemma-proper-schemes-TI}
+and the general discussion in Section \ref{section-general}.
+
+\begin{lemma}
+\label{lemma-schemes-hull}
+In Example \ref{example-schemes} assume $X$ is a proper $k$-scheme.
+Assume $\Lambda$ is a complete local ring with residue field $k$
+(the classical case). Then the functor
+$$
+F : \mathcal{C}_\Lambda \longrightarrow \textit{Sets},\quad
+A \longmapsto \Ob(\Deformationcategory_X(A))/\cong
+$$
+of isomorphism classes of objects has a hull. If
+$\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X) = 0$, then
+$F$ is prorepresentable.
+\end{lemma}
+
+\begin{proof}
+The existence of a hull follows immediately from
+Lemmas \ref{lemma-schemes-RS} and \ref{lemma-proper-schemes-TI} and
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-RS-implies-S1-S2}
+and Remark \ref{formal-defos-remark-compose-minimal-into-iso-classes}.
+
+\medskip\noindent
+Assume $\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X) = 0$. Then
+$\Deformationcategory_X$ and $F$ are equivalent by
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-infdef-trivial}.
+Hence $F$ is a deformation functor (because $\Deformationcategory_X$ is a
+deformation category) with finite tangent space and we can apply
+Formal Deformation Theory, Theorem
+\ref{formal-defos-theorem-Schlessinger-prorepresentability}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-open}
+In Example \ref{example-schemes} let $X$ be a scheme over $k$.
+Let $U \subset X$ be an open subscheme.
+There is a natural functor
+$$
+\Deformationcategory_X \longrightarrow \Deformationcategory_U
+$$
+of deformation categories.
+\end{lemma}
+
+\begin{proof}
Given a deformation of $X$ we can take the corresponding open
subscheme to get a deformation of $U$. We omit the details.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-affine}
+In Example \ref{example-schemes} let $X = \Spec(P)$ be an
+affine scheme over $k$. With $\Deformationcategory_P$ as in
+Example \ref{example-rings} there is a natural equivalence
+$$
+\Deformationcategory_X \longrightarrow \Deformationcategory_P
+$$
+of deformation categories.
+\end{lemma}
+
+\begin{proof}
The functor sends $(A, Y)$ to $(A, \Gamma(Y, \mathcal{O}_Y))$.
+This works because
+any deformation of $X$ is affine by
+More on Morphisms, Lemma \ref{more-morphisms-lemma-thickening-affine-scheme}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-local-ring}
In Example \ref{example-schemes} let $X$ be a scheme over $k$.
+Let $p \in X$ be a point. With $\Deformationcategory_{\mathcal{O}_{X, p}}$
+as in Example \ref{example-rings} there is a natural functor
+$$
+\Deformationcategory_X
+\longrightarrow
+\Deformationcategory_{\mathcal{O}_{X, p}}
+$$
+of deformation categories.
+\end{lemma}
+
+\begin{proof}
+Choose an affine open $U = \Spec(P) \subset X$ containing $p$.
+Then $\mathcal{O}_{X, p}$ is a localization of $P$. We combine
+the functors from
+Lemmas \ref{lemma-open}, \ref{lemma-affine}, and \ref{lemma-localization}.
+\end{proof}
+
+\begin{situation}
+\label{situation-glueing}
+Let $\Lambda \to k$ be as in Section \ref{section-general}.
+Let $X$ be a scheme over $k$ which has an affine open covering
+$X = U_1 \cup U_2$ with $U_{12} = U_1 \cap U_2$ affine too.
+Write $U_1 = \Spec(P_1)$, $U_2 = \Spec(P_2)$ and $U_{12} = \Spec(P_{12})$.
+Let $\Deformationcategory_X$, $\Deformationcategory_{U_1}$,
+$\Deformationcategory_{U_2}$, and $\Deformationcategory_{U_{12}}$
+be as in Example \ref{example-schemes} and let
+$\Deformationcategory_{P_1}$, $\Deformationcategory_{P_2}$, and
+$\Deformationcategory_{P_{12}}$ be as in Example \ref{example-rings}.
+\end{situation}
+
+\begin{lemma}
+\label{lemma-glueing}
+In Situation \ref{situation-glueing}
+there is an equivalence
+$$
+\Deformationcategory_X =
+\Deformationcategory_{P_1}
+\times_{\Deformationcategory_{P_{12}}}
+\Deformationcategory_{P_2}
+$$
+of deformation categories, see Examples \ref{example-schemes} and
+\ref{example-rings}.
+\end{lemma}
+
+\begin{proof}
+It suffices to show that the functors of Lemma \ref{lemma-open}
+define an equivalence
+$$
+\Deformationcategory_X \longrightarrow
+\Deformationcategory_{U_1}
+\times_{\Deformationcategory_{U_{12}}}
+\Deformationcategory_{U_2}
+$$
+because then we can apply Lemma \ref{lemma-affine} to translate into rings.
+To do this we construct a quasi-inverse. Denote
+$F_i : \Deformationcategory_{U_i} \to \Deformationcategory_{U_{12}}$
+the functor of Lemma \ref{lemma-open}.
+An object of the RHS is given by an $A$ in $\mathcal{C}_\Lambda$,
+objects $(A, V_1) \to (k, U_1)$ and $(A, V_2) \to (k, U_2)$, and
+a morphism
+$$
+g : F_1(A, V_1) \to F_2(A, V_2)
+$$
+Now $F_i(A, V_i) = (A, V_{i, 3 - i})$ where $V_{i, 3 - i} \subset V_i$
+is the open subscheme whose base change to $k$ is $U_{12} \subset U_i$.
+The morphism $g$ defines an isomorphism
+$V_{1, 2} \to V_{2, 1}$ of schemes over $A$ compatible
+with $\text{id} : U_{12} \to U_{12}$ over $k$.
+Thus $(\{1, 2\}, V_i, V_{i, 3 - i}, g, g^{-1})$ is a glueing
+data as in Schemes, Section \ref{schemes-section-glueing-schemes}.
+Let $Y$ be the glueing, see Schemes, Lemma \ref{schemes-lemma-glue}.
+Then $Y$ is a scheme over $A$ and the
+compatibilities mentioned above show that
+there is a canonical isomorphism
+$Y \times_{\Spec(A)} \Spec(k) = X$.
+Thus $(A, Y) \to (k, X)$ is an object of $\Deformationcategory_X$.
+We omit the verification that this construction is a functor
+and is quasi-inverse to the given one.
+\end{proof}
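\noindent
For example (purely as an illustration), take $X = \mathbf{P}^1_k$
with its standard affine cover $U_1 = \Spec(k[x])$ and
$U_2 = \Spec(k[y])$ glued along $U_{12} = \Spec(k[x, x^{-1}])$
via $y = x^{-1}$. Then Lemma \ref{lemma-glueing} gives an equivalence
$$
\Deformationcategory_{\mathbf{P}^1_k} =
\Deformationcategory_{k[x]}
\times_{\Deformationcategory_{k[x, x^{-1}]}}
\Deformationcategory_{k[y]}
$$
In other words, a deformation of $\mathbf{P}^1_k$ is given by
deformations of the two coordinate rings together with a glueing
over a deformation of $k[x, x^{-1}]$.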
+
+
+
+
+
+
+\section{Morphisms of Schemes}
+\label{section-schemes-morphisms}
+
+\noindent
+The deformation theory of morphisms of schemes.
+Of course this is just an example of
+deformations of diagrams of schemes.
+
+\begin{example}[Morphisms of schemes]
+\label{example-schemes-morphisms}
+Let $\mathcal{F}$ be the category defined as follows
+\begin{enumerate}
+\item an object is a pair $(A, X \to Y)$ consisting of an
+object $A$ of $\mathcal{C}_\Lambda$ and a morphism
+$X \to Y$ of schemes over $A$ with both $X$ and $Y$ flat over $A$, and
+\item a morphism $(f, g, h) : (A', X' \to Y') \to (A, X \to Y)$ consists of
+a morphism $f : A' \to A$ in $\mathcal{C}_\Lambda$ together
+with morphisms of schemes $g : X \to X'$ and $h : Y \to Y'$ such that
+$$
+\xymatrix{
+X \ar[r]_g \ar[d] & X' \ar[d] \\
+Y \ar[r]_h \ar[d] & Y' \ar[d] \\
+\Spec(A) \ar[r]^f & \Spec(A')
+}
+$$
+is a commutative diagram of schemes where both squares are cartesian.
+\end{enumerate}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ sends $(A, X \to Y)$
+to $A$ and $(f, g, h)$ to $f$. It is clear that $p$ is cofibred in groupoids.
+Given a morphism of schemes $X \to Y$ over $k$, let $x_0 = (k, X \to Y)$
+be the corresponding object of $\mathcal{F}(k)$. We set
+$$
+\Deformationcategory_{X \to Y} = \mathcal{F}_{x_0}
+$$
+\end{example}
+
+\begin{lemma}
+\label{lemma-schemes-morphisms-RS}
+Example \ref{example-schemes-morphisms}
+satisfies the Rim-Schlessinger condition (RS).
+In particular, $\Deformationcategory_{X \to Y}$ is a deformation category
+for any morphism of schemes $X \to Y$ over $k$.
+\end{lemma}
+
+\begin{proof}
+Let $A_1 \to A$ and $A_2 \to A$ be morphisms of $\mathcal{C}_\Lambda$.
+Assume $A_2 \to A$ is surjective. According to
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-2-categorical}
+it suffices to show that the functor
+$\mathcal{F}(A_1 \times_A A_2) \to
+\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$
+is an equivalence of categories.
+Observe that
+$$
+\xymatrix{
+\Spec(A) \ar[r] \ar[d] & \Spec(A_2) \ar[d] \\
+\Spec(A_1) \ar[r] &
+\Spec(A_1 \times_A A_2)
+}
+$$
+is a pushout diagram as in More on Morphisms, Lemma
+\ref{more-morphisms-lemma-pushout-along-thickening}.
+Thus the lemma follows immediately from
+More on Morphisms, Lemma
+\ref{more-morphisms-lemma-equivalence-categories-schemes-over-pushout-flat}
+as this describes the category of schemes flat over $A_1 \times_A A_2$
+as the fibre product of the category of schemes flat over $A_1$
+with the category of schemes flat over $A_2$ over the category of
+schemes flat over $A$.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-schemes-morphisms-TI}
In Example \ref{example-schemes-morphisms} let $f : X \to Y$ be a
morphism of schemes
+over $k$. There is a canonical exact sequence of $k$-vector spaces
+$$
+\xymatrix{
+0 \ar[r] &
+\text{Inf}(\Deformationcategory_{X \to Y}) \ar[r] &
+\text{Inf}(\Deformationcategory_X \times \Deformationcategory_Y) \ar[r] &
+\text{Der}_k(\mathcal{O}_Y, f_*\mathcal{O}_X) \ar[lld] \\
+& T\Deformationcategory_{X \to Y} \ar[r] &
+T(\Deformationcategory_X \times \Deformationcategory_Y) \ar[r] &
+\text{Ext}^1_{\mathcal{O}_X}(Lf^*\NL_{Y/k}, \mathcal{O}_X)
+}
+$$
+\end{lemma}
+
+\begin{proof}
+The obvious map of deformation categories
+$\Deformationcategory_{X \to Y} \to
+\Deformationcategory_X \times \Deformationcategory_Y$
+gives two of the arrows in the exact sequence of the lemma.
+Recall that $\text{Inf}(\Deformationcategory_{X \to Y})$
+is the set of automorphisms of the trivial deformation
+$$
+f' : X' = X \times_{\Spec(k)} \Spec(k[\epsilon])
+\xrightarrow{f \times \text{id}}
+Y' = Y \times_{\Spec(k)} \Spec(k[\epsilon])
+$$
+of $X \to Y$ to $k[\epsilon]$ equal to the identity modulo $\epsilon$.
+This is clearly the same thing as pairs
+$(\alpha, \beta) \in
+\text{Inf}(\Deformationcategory_X \times \Deformationcategory_Y)$
+of infinitesimal automorphisms of $X$ and $Y$ compatible with $f'$, i.e.,
+such that $f' \circ \alpha = \beta \circ f'$.
+By Deformation Theory, Lemma \ref{defos-lemma-huge-diagram-ringed-spaces}
+for an arbitrary pair $(\alpha, \beta)$ the difference between
+the morphism $f' : X' \to Y'$ and the morphism
$\beta^{-1} \circ f' \circ \alpha : X' \to Y'$ defines an element
in
+$$
+\text{Der}_k(\mathcal{O}_Y, f_*\mathcal{O}_X) =
+\Hom_{\mathcal{O}_Y}(\Omega_{Y/k}, f_*\mathcal{O}_X)
+$$
The equality holds by More on Morphisms, Lemma
+\ref{more-morphisms-lemma-netherlander-quasi-coherent}.
+This defines the last top horizontal arrow and shows exactness
+in the first two places. For the map
+$$
+\text{Der}_k(\mathcal{O}_Y, f_*\mathcal{O}_X)
+\to
+T\Deformationcategory_{X \to Y}
+$$
+we interpret elements of the source as morphisms
+$f_\epsilon : X' \to Y'$ over $\Spec(k[\epsilon])$
+equal to $f$ modulo $\epsilon$
+using Deformation Theory, Lemma \ref{defos-lemma-huge-diagram-ringed-spaces}.
+We send $f_\epsilon$ to the isomorphism class of
+$(f_\epsilon : X' \to Y')$ in $T\Deformationcategory_{X \to Y}$.
+Note that $(f_\epsilon : X' \to Y')$ is isomorphic to the
+trivial deformation $(f' : X' \to Y')$ exactly when
$f_\epsilon = \beta^{-1} \circ f' \circ \alpha$ for some
+pair $(\alpha, \beta)$ which implies exactness in the third spot.
+Clearly, if some first order deformation
+$(f_\epsilon : X_\epsilon \to Y_\epsilon)$
+maps to zero in $T(\Deformationcategory_X \times \Deformationcategory_Y)$,
+then we can choose isomorphisms $X' \to X_\epsilon$ and $Y' \to Y_\epsilon$
+and we conclude we are in the image of the south-west arrow.
+Therefore we have exactness at the fourth spot.
+Finally, given two first order deformations $X_\epsilon$, $Y_\epsilon$
of $X$, $Y$ there is an obstruction
+$$
+ob(X_\epsilon, Y_\epsilon) \in
+\text{Ext}^1_{\mathcal{O}_X}(Lf^*\NL_{Y/k}, \mathcal{O}_X)
+$$
+which vanishes if and only if $f : X \to Y$ lifts to
+$X_\epsilon \to Y_\epsilon$, see
+Deformation Theory, Lemma \ref{defos-lemma-huge-diagram-ringed-spaces}.
+This finishes the proof.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-proper-schemes-morphisms-TI}
+In Lemma \ref{lemma-schemes-morphisms-TI} if $X$ and $Y$ are both
+proper over $k$, then
+$\text{Inf}(\Deformationcategory_{X \to Y})$ and
+$T\Deformationcategory_{X \to Y}$ are finite dimensional.
+\end{lemma}
+
+\begin{proof}
+Omitted. Hint: argue as in Lemma \ref{lemma-proper-schemes-TI}
+and use the exact sequence of the lemma.
+\end{proof}
+
+\noindent
+In Example \ref{example-schemes-morphisms}
+if $X \to Y$ is a morphism of proper schemes over $k$,
+then $\Deformationcategory_{X \to Y}$
+admits a presentation by a smooth prorepresentable groupoid in functors
+over $\mathcal{C}_\Lambda$
+and a fortiori has a (minimal) versal formal object. This follows
+from Lemmas \ref{lemma-schemes-morphisms-RS} and
+\ref{lemma-proper-schemes-morphisms-TI}
+and the general discussion in Section \ref{section-general}.
+
+\begin{lemma}
+\label{lemma-schemes-morphisms-hull}
+In Example \ref{example-schemes-morphisms} assume $X \to Y$
+is a morphism of proper $k$-schemes.
+Assume $\Lambda$ is a complete local ring with residue field $k$
+(the classical case). Then the functor
+$$
+F : \mathcal{C}_\Lambda \longrightarrow \textit{Sets},\quad
+A \longmapsto \Ob(\Deformationcategory_{X \to Y}(A))/\cong
+$$
+of isomorphism classes of objects has a hull. If
+$\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X) =
+\text{Der}_k(\mathcal{O}_Y, \mathcal{O}_Y) = 0$, then
+$F$ is prorepresentable.
+\end{lemma}
+
+\begin{proof}
+The existence of a hull follows immediately from
+Lemmas \ref{lemma-schemes-morphisms-RS} and
+\ref{lemma-proper-schemes-morphisms-TI} and
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-RS-implies-S1-S2}
+and Remark \ref{formal-defos-remark-compose-minimal-into-iso-classes}.
+
+\medskip\noindent
+Assume $\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X) =
+\text{Der}_k(\mathcal{O}_Y, \mathcal{O}_Y) = 0$. Then
+the exact sequence of Lemma \ref{lemma-schemes-morphisms-TI}
+combined with Lemma \ref{lemma-schemes-TI}
+shows that $\text{Inf}(\Deformationcategory_{X \to Y}) = 0$.
+Then $\Deformationcategory_{X \to Y}$ and $F$ are equivalent by
+Formal Deformation Theory, Lemma \ref{formal-defos-lemma-infdef-trivial}.
+Hence $F$ is a deformation functor (because
+$\Deformationcategory_{X \to Y}$ is a
+deformation category) with finite tangent space and we can apply
+Formal Deformation Theory, Theorem
+\ref{formal-defos-theorem-Schlessinger-prorepresentability}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-schemes-morphisms-smooth-to-base}
+\begin{reference}
+This is discussed in \cite[Section 5.3]{Ravi-Murphys-Law} and
+\cite[Theorem 3.3]{Ran-deformations}.
+\end{reference}
In Example \ref{example-schemes-morphisms} let $f : X \to Y$ be a
morphism of schemes
+over $k$. If $f_*\mathcal{O}_X = \mathcal{O}_Y$ and $R^1f_*\mathcal{O}_X = 0$,
+then the morphism of deformation categories
+$$
+\Deformationcategory_{X \to Y} \to \Deformationcategory_X
+$$
+is an equivalence.
+\end{lemma}
+
+\begin{proof}
+We construct a quasi-inverse to the forgetful functor of the lemma.
+Namely, suppose that $(A, U)$ is an object of $\Deformationcategory_X$.
+The given map $X \to U$ is a finite order thickening and we can use
+it to identify the underlying topological spaces of $U$ and $X$, see
+More on Morphisms, Section \ref{more-morphisms-section-thickenings}.
+Thus we may and do think of $\mathcal{O}_U$ as a sheaf of
$A$-algebras on $X$; moreover, the fact that $U \to \Spec(A)$ is
flat means that $\mathcal{O}_U$ is flat as a sheaf of $A$-modules.
+In particular, we have a filtration
+$$
+0 = \mathfrak m_A^n\mathcal{O}_U \subset
+\mathfrak m_A^{n - 1}\mathcal{O}_U \subset \ldots \subset
+\mathfrak m_A^2\mathcal{O}_U \subset
+\mathfrak m_A\mathcal{O}_U \subset \mathcal{O}_U
+$$
+with subquotients equal to
+$\mathcal{O}_X \otimes_k \mathfrak m_A^i/\mathfrak m_A^{i + 1}$
+by flatness, see More on Morphisms, Lemma \ref{more-morphisms-lemma-deform}
+or the more general Deformation Theory, Lemma \ref{defos-lemma-deform-module}.
+Set
+$$
+\mathcal{O}_V = f_*\mathcal{O}_U
+$$
+viewed as sheaf of $A$-algebras on $Y$. Since
+$R^1f_*\mathcal{O}_X = 0$ we find by the description above that
+$R^1f_*(\mathfrak m_A^i\mathcal{O}_U/\mathfrak m_A^{i + 1}\mathcal{O}_U) = 0$
+for all $i$. This implies that the sequences
+$$
+0 \to
+(f_*\mathcal{O}_X) \otimes_k \mathfrak m_A^i/\mathfrak m_A^{i + 1} \to
+f_*(\mathcal{O}_U/\mathfrak m_A^{i + 1}\mathcal{O}_U) \to
+f_*(\mathcal{O}_U/\mathfrak m_A^i\mathcal{O}_U) \to 0
+$$
+are exact for all $i$. Reading the references given above backwards
+(and using induction) we find that $\mathcal{O}_V$ is a flat
+sheaf of $A$-algebras with
+$\mathcal{O}_V/\mathfrak m_A\mathcal{O}_V = \mathcal{O}_Y$.
+Using More on Morphisms, Lemma
+\ref{more-morphisms-lemma-first-order-thickening}
+we find that $(Y, \mathcal{O}_V)$ is a scheme, call it $V$.
+The equality $\mathcal{O}_V = f_*\mathcal{O}_U$ defines a
+morphism of ringed spaces $U \to V$ which is easily seen to be
+a morphism of schemes. This finishes the proof by the
+flatness already established.
+\end{proof}
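\noindent
A standard example (included here as an illustration and not needed
in what follows): if $Y$ is a smooth projective surface over $k$ and
$f : X \to Y$ is the blowing up of a $k$-rational point, then
$f_*\mathcal{O}_X = \mathcal{O}_Y$ and $R^1f_*\mathcal{O}_X = 0$.
Hence Lemma \ref{lemma-schemes-morphisms-smooth-to-base} applies and
every deformation of $X$ comes with a compatible deformation of
$X \to Y$, in particular with a deformation of $Y$.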
+
+
+
+
+
+
+
+\section{Algebraic spaces}
+\label{section-algebraic-spaces}
+
+\noindent
+The deformation theory of algebraic spaces.
+
+\begin{example}[Algebraic spaces]
+\label{example-spaces}
+Let $\mathcal{F}$ be the category defined as follows
+\begin{enumerate}
+\item an object is a pair $(A, X)$ consisting of an
+object $A$ of $\mathcal{C}_\Lambda$ and an algebraic space
+$X$ flat over $A$, and
+\item a morphism $(f, g) : (B, Y) \to (A, X)$ consists of
+a morphism $f : B \to A$ in $\mathcal{C}_\Lambda$ together
+with a morphism $g : X \to Y$ of algebraic spaces over $\Lambda$
+such that
+$$
+\xymatrix{
+X \ar[r]_g \ar[d] & Y \ar[d] \\
+\Spec(A) \ar[r]^f & \Spec(B)
+}
+$$
+is a cartesian commutative diagram of algebraic spaces.
+\end{enumerate}
+The functor $p : \mathcal{F} \to \mathcal{C}_\Lambda$ sends $(A, X)$
+to $A$ and $(f, g)$ to $f$. It is clear that $p$ is cofibred in groupoids.
+Given an algebraic space $X$ over $k$, let
+$x_0 = (k, X)$ be the corresponding object of $\mathcal{F}(k)$. We set
+$$
+\Deformationcategory_X = \mathcal{F}_{x_0}
+$$
+\end{example}
+
+\begin{lemma}
+\label{lemma-spaces-RS}
+Example \ref{example-spaces}
+satisfies the Rim-Schlessinger condition (RS).
+In particular, $\Deformationcategory_X$ is a deformation category
+for any algebraic space $X$ over $k$.
+\end{lemma}
+
+\begin{proof}
+Let $A_1 \to A$ and $A_2 \to A$ be morphisms of $\mathcal{C}_\Lambda$.
+Assume $A_2 \to A$ is surjective. According to
+Formal Deformation Theory, Lemma
+\ref{formal-defos-lemma-RS-2-categorical}
+it suffices to show that the functor
+$\mathcal{F}(A_1 \times_A A_2) \to
+\mathcal{F}(A_1) \times_{\mathcal{F}(A)} \mathcal{F}(A_2)$
+is an equivalence of categories.
+Observe that
+$$
+\xymatrix{
+\Spec(A) \ar[r] \ar[d] & \Spec(A_2) \ar[d] \\
+\Spec(A_1) \ar[r] &
+\Spec(A_1 \times_A A_2)
+}
+$$
+is a pushout diagram as in Pushouts of Spaces, Lemma
+\ref{spaces-pushouts-lemma-pushout-along-thickening}.
+Thus the lemma is a special case of Pushouts of Spaces, Lemma
+\ref{spaces-pushouts-lemma-equivalence-categories-spaces-pushout-flat}.
+\end{proof}
+
+\begin{lemma}
+\label{lemma-spaces-TI}
+In Example \ref{example-spaces} let $X$ be an algebraic space over $k$. Then
+$$
+\text{Inf}(\Deformationcategory_X) =
+\text{Ext}^0_{\mathcal{O}_X}(\NL_{X/k}, \mathcal{O}_X) =
+\Hom_{\mathcal{O}_X}(\Omega_{X/k}, \mathcal{O}_X) =
+\text{Der}_k(\mathcal{O}_X, \mathcal{O}_X)
+$$
+and
+$$
+T\Deformationcategory_X =
+\text{Ext}^1_{\mathcal{O}_X}(\NL_{X/k}, \mathcal{O}_X)
+$$
+\end{lemma}
+
+\begin{proof}
+Recall that $\text{Inf}(\Deformationcategory_X)$ is the set of
+automorphisms of the trivial deformation
+$X' = X \times_{\Spec(k)} \Spec(k[\epsilon])$ of $X$ to $k[\epsilon]$
+equal to the identity modulo $\epsilon$.
+By Deformation Theory, Lemma \ref{defos-le